Is it possible to delete part of a string using a regexp (or something else; maybe something like CHARINDEX could help) in a SQL query? I use MS SQL Server (most likely 2008). Example: I have strings like "*[some useless info]* **Useful part of string**". I want to delete the bracketed parts wherever they appear in the line.
You can use the PATINDEX function. It's not a complete regular expression implementation, but you can use it for simple things. > **[PATINDEX (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms188395.aspx)** Returns the starting position of the first occurrence of a pattern in a specified expression, or zeros if the pattern is not found, on all valid text and character data types. **OR** you can use the CLR to extend SQL Server with a complete regular expression implementation: * [SQL Server 2005: CLR Integration](http://blogs.msdn.com/b/sqlclr/archive/2005/06/29/regex.aspx)
Use **REPLACE**, for example: ``` UPDATE authors SET city = replace(city, 'To Remove', 'With BLACK or Whatever') WHERE city LIKE 'Salt%'; -- with WHERE condition ```
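As a cross-check of the plain string-function approach, here is a minimal sketch in SQLite (via Python), using `instr`/`substr`/`ltrim` as stand-ins for T-SQL's `CHARINDEX`/`SUBSTRING`/`LTRIM`. The table and column names are hypothetical, not from the question.

```python
import sqlite3

# Strip the bracketed prefix: keep everything after the closing bracket,
# falling back to the original string when no bracket is present.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (txt TEXT)")
conn.execute("INSERT INTO notes VALUES ('[some useless info] Useful part of string')")

row = conn.execute("""
    SELECT CASE WHEN instr(txt, ']') > 0
                THEN ltrim(substr(txt, instr(txt, ']') + 1))
                ELSE txt END
    FROM notes
""").fetchone()
print(row[0])  # Useful part of string
```

The same shape ports to T-SQL with `CHARINDEX(']', col)` and `SUBSTRING`, which is what the PATINDEX answer above is hinting at.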
Transact SQL replace part of string
[ "sql", "sql-server", "t-sql" ]
I am having trouble executing this statement. What I am trying to achieve is that my result will be sorted on a condition passed in through a parameter. For example, I would like to sort userName in ascending order, so in my web application code I have a parameter @condition; when it reads 'userName', the SQL statement below executes. The error is: ``` Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date Msg 206, Level 16, State 2, Line 1 Operand type clash: int is incompatible with date ``` It appeared 6 times. I am not sure if CASE is used correctly. The result I would like is that when a condition is passed in through @condition, the result is sorted by that specific condition.
The attributes in the database are declared in this way: ``` C.joinDate - DATE C.userName - VARCHAR(20) C.firstName - VARCHAR(20) C.lastName - VARCHAR(15) C.contact - CHAR(8) C.dob - DATE C.userStatus - VARCHAR(8) C.totalPoints - INT R.resID - VARCHAR(8) P.orderID - VARCHAR(8) D.orderID - VARCHAR(8) CR.securityCode - VARCHAR(10) C.loginAttempted - INT SELECT C.joinDate, C.userName, (C.firstName+' '+C.lastName) AS Name, C.contact, C.dob, C.userStatus, C.totalPoints, COUNT(R.resID) AS Res, COUNT(P.orderID) AS PreOrd, COUNT(D.orderID) AS DelOrd, COUNT(CR.securityCode) AS Redeem, C.loginAttempted FROM Customer C full join Reservation R ON C.userID = R.userID full join PreOrder P ON R.resID = P.ResID full join DeliveryOrder D ON D.userID = C.userID full join CustomerRedemption CR ON CR.userID = C.userID WHERE (C.firstName+' '+C.lastName) LIKE '%%' and c.userName LIKE '%%' and c.contact LIKE '%%' and c.userStatus LIKE '%%' GROUP BY C.joinDate, C.userName, (C.firstName+' '+C.lastName), C.contact, C.dob, C.userStatus, C.totalPoints, C.loginAttempted HAVING C.userName IS NOT NULL ORDER BY CASE 'userName' WHEN 'joinDate' THEN joinDate WHEN 'userName' THEN userName WHEN 'Name' THEN (C.firstName+' '+C.lastName) WHEN 'dob' THEN dob WHEN 'userSatus' THEN userStatus WHEN 'totalPoints' THEN totalPoints WHEN 'totalPoints' THEN totalPoints WHEN 'Res' THEN COUNT(R.resID) WHEN 'PreOrd' THEN COUNT(P.orderID) WHEN 'DelOrd' THEN COUNT(D.orderID) WHEN 'Redeem' THEN COUNT(CR.securityCode) END ```
When you ORDER BY a CASE expression, all possible outputs of the CASE must be of the same type. You need to break your CASE down into multiple CASE expressions. Each CASE expression should return a single type, and `NULL` in the else case. You can then combine all those CASEs into one ORDER BY.
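A minimal sketch of this pattern, using SQLite via Python: one CASE expression per data type in the ORDER BY, each yielding NULL when the chosen sort column is of another type, so the types never clash. The table, columns, and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (userName TEXT, joinDate TEXT, totalPoints INTEGER)")
conn.executemany("INSERT INTO customer VALUES (?,?,?)",
                 [("bob", "2014-01-02", 30), ("amy", "2014-01-01", 10), ("cal", "2014-01-03", 20)])

def sorted_names(condition):
    # One CASE per type; all but one CASE returns NULL for every row,
    # so only the selected column actually drives the sort.
    rows = conn.execute("""
        SELECT userName FROM customer
        ORDER BY
          CASE WHEN :c = 'userName'    THEN userName    END,
          CASE WHEN :c = 'joinDate'    THEN joinDate    END,
          CASE WHEN :c = 'totalPoints' THEN totalPoints END
    """, {"c": condition}).fetchall()
    return [r[0] for r in rows]

print(sorted_names("userName"))     # ['amy', 'bob', 'cal']
print(sorted_names("totalPoints"))  # ['amy', 'cal', 'bob']
```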
You may want to try something like this: ``` CASE WHEN @condition = 'joinDate' THEN joinDate WHEN @condition = 'userName' THEN userName WHEN @condition = 'Name' THEN (C.firstName+' '+C.lastName) WHEN @condition = 'dob' THEN dob WHEN @condition = 'userStatus' THEN userStatus WHEN @condition = 'totalPoints' THEN totalPoints WHEN @condition = 'Res' THEN COUNT(R.resID) WHEN @condition = 'PreOrd' THEN COUNT(P.orderID) WHEN @condition = 'DelOrd' THEN COUNT(D.orderID) WHEN @condition = 'Redeem' THEN COUNT(CR.securityCode) END ``` or the form you had would also be correct, with a small correction (compare against `@condition`, not a string literal): ``` CASE @condition WHEN 'joinDate' THEN joinDate WHEN 'userName' THEN userName WHEN 'Name' THEN (C.firstName+' '+C.lastName) WHEN 'dob' THEN dob WHEN 'userStatus' THEN userStatus WHEN 'totalPoints' THEN totalPoints WHEN 'Res' THEN COUNT(R.resID) WHEN 'PreOrd' THEN COUNT(P.orderID) WHEN 'DelOrd' THEN COUNT(D.orderID) WHEN 'Redeem' THEN COUNT(CR.securityCode) END ```
SQL Sorting: ORDER BY with parameters
[ "sql", "sorting", "sql-order-by", "case" ]
I'm very new to SQL. I'm trying to define two tables, `Hospital` and `Hospital_Address`, but when I try to add a foreign key in `Hospital_Address` it throws an error: `"1215: Cannot add foreign key"` ``` create table Hospital ( HId Int not null, HName varchar(40) not null, HDept int, Hbed Int, HAreaCd int not null, Primary Key (HId) ); create table Hospital_Address ( HAreaCd Int not null, HArea varchar(40) not null, HCity varchar(40), HAdd1 varchar(40), HAdd2 varchar(40), Primary Key (HArea), foreign key (HAreaCd) references Hospital (HAreaCd)); ``` Please help me in this regard. Thanks in advance.
MySQL requires that there be an index on the `HAreaCd` column in the parent `Hospital` table, in order for you to reference that column in a FOREIGN KEY constraint. The normative pattern is for the FOREIGN KEY to reference the PRIMARY KEY of the parent table, although MySQL extends that to allow a FOREIGN KEY to reference a column that is a UNIQUE KEY, and InnoDB extends that (beyond the SQL standard) and allows a FOREIGN KEY to reference any set of columns, as long as there is an index with those columns as the leading columns (in the same order specified in the foreign key constraint.) (That is, in InnoDB, the referenced columns do not need to be unique, though the behavior with this type of relationship may not be what you intend.) If you create an index on that column in `Hospital` table, e.g.: ``` CREATE INDEX Hospital_IX1 ON Hospital (HAreaCd); ``` Then you can create a foreign key constraint that references that column. --- However, because this is a non-standard extension of MySQL and InnoDB, the "best practice" (as other answers here indicate) is for a `FOREIGN KEY` to reference the `PRIMARY KEY` of the foreign table. And ideally, this will be a single column. Given the existing definition of the `Hospital` table, a better option for a foreign key referencing it would be to add the `Hid` column to the `Hospital_Address` table ``` ... ADD HId Int COMMENT 'FK ref Hospital.HId' ... ADD CONSTRAINT FK_Hospital_Address_Hospital FOREIGN KEY (HId) REFERENCES Hospital (HId) ``` To establish the relationship between the rows, the values of the new `HId` column will need to be populated.
Usually you cannot add a foreign key that references a non-primary-key column of another table. If you really need to do so, refer to this question for help: [Foreign Key to non-primary key](https://stackoverflow.com/questions/18435065/foreign-key-to-non-primary-key)
MySQL Error Code 1215: Cannot add foreign key Constraint
[ "mysql", "sql" ]
The following is my query, which goes through about a million rows to calculate MTBUR (Mean Time Before Unscheduled Repair): ``` DECLARE @BeginDate date = '01-01-2013', @EndDate date = '12-31-2013' BEGIN SELECT H.AutoType, COALESCE(((SUM(H.Hours))/(CASE WHEN R.ReceivedDate BETWEEN @BeginDate AND @EndDate THEN COUNT(R.Confirmed) END)), SUM(H.Hours)) AS 'MTBUR' FROM Hours H INNER JOIN Repair R ON H.SN = R.SN WHERE (R.Confirmed NOT LIKE 'C%' AND R.Confirmed NOT LIKE 'O%') AND (H.Date BETWEEN @BeginDate AND @EndDate) GROUP BY H.AutoType, R.ReceivedDate END ``` The following are example results for 2 types: ``` Type | MTBUR ------------ a | value a | value a | value b | value b | value b | value ``` I want my results to look like this: ``` Type | MTBUR ------------ a | value b | value ``` Why is it grouping the same type several times? I want only one value for each type. Also, why is the DBMS making me group by `ReceivedDate` as well? I get the feeling that is screwing up my results. Any suggestions? The following are my CREATE TABLE statements: ``` CREATE TABLE [dbo].[acss_hours]( [hoursId] [uniqueidentifier] NOT NULL, [name] [nvarchar](100) NULL, [Type] [nvarchar](100) NULL, [SN] [nvarchar](100) NULL, [Reg] [nvarchar](100) NULL, [Hours] [float] NULL, [Date] [datetime] NULL) CREATE TABLE [dbo].[repair]( [repairId] [uniqueidentifier] NOT NULL, [Part] [nvarchar](100) NULL, [Customer] [nvarchar](100) NULL, [AutoType] [nvarchar](100) NULL, [ReceivedDate] [datetime] NULL, [Confirmed] [nvarchar](100) NULL, [Company] [nvarchar](100) NULL, [Reg] [nvarchar](100) NULL, [Manu] [nvarchar](100) NULL, [SN] [nvarchar](100) NULL) ```
You are correct: adding ReceivedDate is screwing up your results; you are getting one row per type per ReceivedDate. SQL Server is forcing you to add ReceivedDate to the GROUP BY because you are using it in the SELECT clause. When SQL Server processes each AutoType, which ReceivedDate should it use? There are multiple ReceivedDates per AutoType. Either it must use each separate ReceivedDate (by adding it to the GROUP BY), or it can use an aggregate function like MIN or MAX to pick one of the ReceivedDates. How do you want your query to handle that? I think you should wrap your CASE in the COUNT, with no ELSE branch, so rows outside the range yield NULL and are not counted: ``` COUNT(CASE WHEN R.ReceivedDate BETWEEN @BeginDate AND @EndDate THEN R.Confirmed END) ```
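A quick check of why the ELSE branch matters, using SQLite via Python with made-up rows: COUNT counts every non-NULL value, so an `ELSE 0` would still be counted, while omitting the ELSE makes out-of-range rows contribute NULL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repair (ReceivedDate TEXT, Confirmed TEXT)")
conn.executemany("INSERT INTO repair VALUES (?,?)",
                 [("2013-06-01", "A1"), ("2013-07-01", "A2"), ("2014-02-01", "A3")])

# First count omits ELSE (out-of-range -> NULL, not counted);
# second count uses ELSE 0 (0 is non-NULL, so it is still counted).
in_range, with_else = conn.execute("""
    SELECT
      COUNT(CASE WHEN ReceivedDate BETWEEN '2013-01-01' AND '2013-12-31'
                 THEN Confirmed END),
      COUNT(CASE WHEN ReceivedDate BETWEEN '2013-01-01' AND '2013-12-31'
                 THEN Confirmed ELSE 0 END)
    FROM repair
""").fetchone()
print(in_range, with_else)  # 2 3
```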
You need to include R.ReceivedDate in your GROUP BY because you're evaluating the column with the BETWEEN expression; it's the same as including the column in the SELECT. Basically, any column in the SELECT list that isn't wrapped in an aggregate function needs to be in the GROUP BY.
SQL Aggregate Function Query not Producing Expected Results
[ "sql", "sql-server", "group-by", "aggregate-functions" ]
I have a table containing blobs of document text that I am joining to. Using Oracle Text, I can get the snippet of text containing my search term (using ctx\_doc.snippet). However, I must now report the number of times this search term was found for each document that matched my join, not all documents that I have. I have over 100K documents total, but the joins and filtering I do return a subset. Reading online, there's CTX\_QUERY.COUNT\_HITS that I could use, but that gives the count across all documents. If COUNT\_HITS had a textkey parameter, life would be good, but none exists. How can I go about getting the number of hits for a given query in a single document in Oracle?
You can continue using CTX\_DOC; the procedure [HIGHLIGHT](http://docs.oracle.com/cd/E11882_01/text.112/e24436/cdocpkg.htm#CCREF0706) can be contorted slightly to do exactly what you're asking for. Using this environment: ``` create table docs ( id number, text clob, primary key (id) ); Table created. insert all into docs values (1, to_clob('a dog and a dog')) into docs values (2, to_clob('a dog and a cat')) into docs values (3, to_clob('just a cat')) select * from dual; 3 rows created. create index i_text_docs on docs(text) indextype is ctxsys.context; Index created. ``` `CTX_DOC.HIGHLIGHT` has an OUT parameter of a HIGHLIGHT\_TAB type, which contains the count of the number of hits within a document. ``` declare l_highlight ctx_doc.highlight_tab; begin ctx_doc.set_key_type('PRIMARY_KEY'); for i in ( select * from docs where contains(text, 'dog') > 0 ) loop ctx_doc.highlight('I_TEXT_DOCS', i.id, 'dog', l_highlight); dbms_output.put_line('id: ' || i.id || ' hits: ' || l_highlight.count); end loop; end; / id: 1 hits: 2 id: 2 hits: 1 PL/SQL procedure successfully completed. ``` Obviously if you're doing this in a query then a procedure isn't the best thing in the world, but you can wrap it in a function if you want: ``` create or replace function docs_count ( Pid in docs.id%type, Ptext in varchar2 ) return integer is l_highlight ctx_doc.highlight_tab; begin ctx_doc.set_key_type('PRIMARY_KEY'); ctx_doc.highlight('I_TEXT_DOCS', Pid, Ptext, l_highlight); return l_highlight.count; end; ``` This can then be called normally ``` select id , to_char(text) as text , docs_count(id, 'dog') as dogs , docs_count(id, 'cat') as cats from docs; ID TEXT DOGS CATS ---------- --------------- ---------- ---------- 1 a dog and a dog 2 0 2 a dog and a cat 1 1 3 just a cat 0 1 ``` If possible, it might be simpler to replace the keywords as Gordon notes. 
I'd use [`DBMS_LOB.GETLENGTH()`](http://docs.oracle.com/cd/E11882_01/timesten.112/e21645/d_lob.htm#TTPLP66709) function instead of simply `LENGTH()` to avoid potential problems, but [`REPLACE()`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions134.htm) works on CLOBs so this won't be a problem. Something like the following (assuming we're still searching for dogs) ``` select (dbms_lob.getlength(text) - dbms_lob.getlength(replace(text, 'dog'))) / length('dog') from docs ``` It's worth noting that string searching gets progressively slower as strings get larger (hence the need for text indexing) so while this performs fine on the tiny example given it might suffer from performance problems on larger documents. --- I've just seen [your comment](https://stackoverflow.com/questions/24660075/counting-the-number-of-hits-for-a-given-search-query-term-per-document-in-oracle/25095318#comment39029248_24660075): > ... but it would require me going through each document and doing a count of the hits which frankly is computationally expensive No matter what you do you're going to have to go through each document. You want to find the exact number of instances of a string within another string and the *only* way to do this is to look through the entire string. (I would highly recommend reading [Joel's post on strings](http://www.joelonsoftware.com/articles/fog0000000319.html); it makes a point about XML and relational databases but I think it fits nicely here too.) If you were looking for an estimate you could calculate the number of times a word appears in the first 100 characters and then average it out over the length of the LOB (crap algorithm I know), but you want to be accurate. Obviously we don't know how Oracle has implemented all their functions internally, but let's make some assumptions. To calculate the length of a string you need to literally count the number of bytes in it. This means iterating over the entire string. 
[There are some algorithms to improve this](https://stackoverflow.com/questions/6584340/how-to-write-a-better-strlen-function), but they still involve iterating over the string. If you want to replace a string with another string, you have to iterate over the original string, looking for the string you want to replace. Theoretically, depending on how Oracle's implemented everything, using `CTX_DOC.HIGHLIGHT` should be quicker than anything else as it only has to iterate over the original string once, looking for the string you want to find and storing the byte/character offset from the start of the original string. The suggestion `length(replace(<original string>, <new string>)) - length(<original string>)` may have to iterate three separate times over the original string (or something that's close to it in length). I doubt that it would actually do this as everything can be cached and Oracle should be storing the byte length to make `LENGTH()` efficient. This is the reason I suggest using `DBMS_LOB.GETLENGTH` rather than just `LENGTH()`; Oracle's almost certainly storing the byte length of the document. If you don't want to parse the document each time you run your queries it might be worth doing a single run when loading/updating data and storing, separately, the words and the number of occurrences per document.
If by "blobs of document text" you mean "clob", then you can use this tried-and-true method. Take the difference between the length of the document and the length of the document with the search string replaced by something else. That gives you the number of matches. For example: ``` select t.* from (select t.*, length(replace(t.doc, KEYWORD, KEYWORD || 'x')) - length(t.doc) as nummatches from table t ) t order by nummatches desc; ```
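The length/replace trick is easy to sanity-check outside Oracle. Here is a sketch in SQLite via Python, using the `docs` table from the earlier answer (replacing the keyword with the empty string and dividing by the keyword length, a variant of the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, text TEXT)")
conn.executemany("INSERT INTO docs VALUES (?,?)",
                 [(1, "a dog and a dog"), (2, "a dog and a cat"), (3, "just a cat")])

# Characters removed by the replace, divided by keyword length,
# equals the number of occurrences.
rows = conn.execute("""
    SELECT id,
           (length(text) - length(replace(text, 'dog', ''))) / length('dog') AS dogs
    FROM docs ORDER BY id
""").fetchall()
print(rows)  # [(1, 2), (2, 1), (3, 0)]
```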
Counting the number of hits for a given search query/term per document in Oracle
[ "sql", "oracle", "plsql", "oracle11g", "oracle-sqldeveloper" ]
So we are building this app, where the retrieval of data is based on small, modular queries. So for a product it would be something like: ``` $product = $this->product->getProductData($prod_id); //get main product record $locations = $this->locations->getAvailableLocations($prod_id); //sale locations $comments = $this->feedback->getFeedback($prod_id,'COMMENTS'); //user comments ``` On the other hand we could also do something like `$this->getAllProductData($id)`, which would essentially run a single SQL statement: ``` get * from product_data left join locations on <...> left join comments on <...> ``` From a `programming` perspective, the first option makes it much easier for us to handle data, mix and match, and build separate flows/user experiences, etc. Our concern is, from a `performance` perspective, would this become an issue when the products run into hundreds of thousands of rows?
There's overhead associated with each execution of a SQL statement: packets sent to the server; SQL text parsed; the statement verified to be syntactically correct (keywords, commas, parens, etc.) and semantically correct (identifiers reference tables, columns, functions, et al. that exist, and the user has sufficient privileges); possible execution plans evaluated and an optimum plan chosen; the plan executed (obtaining locks, accessing data in buffers, etc.); the resultset materialized (metadata and values) and returned to the caller; locks released; resources cleaned up. On the client side, there's the overhead of retrieving the resultset, fetching rows, and closing the statement. In general, it's more efficient to retrieve the actual data that is needed with fewer statements, but not if that entails returning a whole slew of information that's not needed. If we only need 20 rows, then we add `LIMIT 20` to the query. If we only need rows for a particular product\_id, `WHERE product_id = 42`. When we see tight loops of repeated execution of essentially the same statement, that's a telltale sign that the developer is processing data RBER (row by excruciating row) rather than as a set. Bottom line: it depends on the use case. Sometimes it's more efficient to run a couple of smaller statements in place of one humongous statement.
Use your second example (all joins in one query). As long as you have an index on "prod\_id" and anything else you're filtering on or joining on, the database query optimizer will do smart things, such as seeing that prod\_id will only return a few records and that doing that first will make the query about as fast as it could possibly be. Query Optimizers are *very, very good* at this in general.
large single join queries vs multiple smaller ones
[ "mysql", "sql", "performance", "join", "scalability" ]
I have a table like this **tbl\_user** ``` id user_id amount ``` First I want to update a row based on id: ``` $amount = 123; // dynamic value $sql = "UPDATE tbl_user SET amount=amount-'$amount' WHERE id='$id' LIMIT 1 "; ``` Now I want to get the updated value of the amount column, so I have applied this SQL: ``` $sql = "SELECT amount FROM tbl_user WHERE id='$id' LIMIT 1 "; ``` My question is: can I combine both of the above SQL statements, or is there any single query to achieve the above task?
In plain MySQL, the closest you can get is two statements, capturing the value in a user variable: ``` UPDATE tbl_user SET amount = @amount := amount-'$amount' WHERE id='$id' LIMIT 1; SELECT @amount; ``` If you want a single call from PHP, you can wrap those two statements in a `Stored Procedure`: ``` DELIMITER // CREATE PROCEDURE `return_amount` () BEGIN UPDATE tbl_user SET amount = @amount := amount-'$amount' WHERE id='$id' LIMIT 1; SELECT @amount; END // ``` And then [call the `Stored Procedure`](http://www.mysqltutorial.org/php-calling-mysql-stored-procedures/) from your `PHP`. Note: `PostgreSQL` has this kind of option built in, using a `RETURNING` clause that would look like this: ``` UPDATE tbl_user SET amount=amount-'$amount' WHERE id='$id' LIMIT 1 RETURNING amount ``` See [here](http://www.postgresql.org/docs/9.1/static/sql-update.html)
A function can do this easily. It sounds like you want to limit how many times your code connects to the database. With a stored function or procedure, you are only making one connection. Yes, the stored function has two queries inside it (update then select), but these are executed on the server side without stopping to do round trips to the client. <http://sqlfiddle.com/#!2/0e6a09/1/0> Here's my skeleton of your table: ``` CREATE TABLE tbl_user ( id VARCHAR(100) PRIMARY KEY, user_id VARCHAR(100), amount DECIMAL(17,4) ); INSERT INTO tbl_user VALUES ('1', 'John', '100.00'); ``` And the proposed function: ``` CREATE FUNCTION incrementAmount (p_id VARCHAR(100), p_amount DECIMAL(17,4)) RETURNS DECIMAL(17,4) BEGIN UPDATE tbl_user SET amount = amount + p_amount WHERE id = p_id; RETURN (SELECT amount FROM tbl_user WHERE id = p_id); END // ``` Then you just run one query, a `SELECT` on the function you just created: ``` SELECT incrementAmount('1', 5.00) ``` The query result is: ``` 105 ```
mysql update column then select updated value
[ "mysql", "sql" ]
I have a table with two columns. Depending on the length of the data in one column, I need to append the other column. How can I proceed with this? I have the base SQL, but I can't combine the columns and display the result in one column. The current table is like this: ``` ID Code ---------- ---------- ST01 00 ST0105 05 ET2256 56 ``` After a SELECT query, I would like to have: ``` ID ---------- ST0100 ST0105 ET2256 ``` As you can see, when **ST01** is fewer than 5 characters I need to append the Code column to it. When the length of the ID is more, I do not need to append. The DB is in production and I am unable to edit it, because all the old applications are configured and running against it, but the application I am building uses the 7-character format. So I can only use a SELECT statement. ``` SELECT CASE ID WHEN ((LEN(ID))<>5) THEN ID=(RTRIM(ID)+LTRIM(Code)) FROM tblID ORDER BY ID DESC ```
Based on your explanation I'm guessing: ``` SELECT CASE WHEN ((LEN(ID))<5) THEN (RTRIM(ID)+LTRIM(Code)) ELSE ID END AS ID FROM tblID ORDER BY ID DESC ``` The syntax of your `CASE` was a bit off, as well as the comparison on length (`<>5` vs `<5`).
Try this: ``` SELECT CASE WHEN LEN(ID) < 6 THEN LEFT(ID + Code, 6) ELSE ID END AS Code FROM tblID ORDER BY ID DESC; ```
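The CASE pattern from the answers can be sketched in SQLite via Python (string concatenation is `||` rather than `+` there); the table and data follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblID (ID TEXT, Code TEXT)")
conn.executemany("INSERT INTO tblID VALUES (?,?)",
                 [("ST01", "00"), ("ST0105", "05"), ("ET2256", "56")])

# Short IDs get the Code column appended; already-long IDs pass through.
rows = conn.execute("""
    SELECT CASE WHEN length(ID) < 5 THEN rtrim(ID) || ltrim(Code) ELSE ID END AS ID
    FROM tblID ORDER BY ID DESC
""").fetchall()
print([r[0] for r in rows])  # ['ST0105', 'ST0100', 'ET2256']
```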
Check the length and add corresponding column
[ "sql", "sql-server" ]
I have two tables. Table-A filled with transactional data. Table-B being a reference table for rates given a particular week. I would like to get the rate from Table B into Table A based on the Date being within the week outlined in Table B. I tried multiple query techniques and all fell flat on their face. Any help would be appreciated. ``` Table B Start_Date End_Date Rate 12/30/2013 1/5/2014 $1.20 1/6/2014 1/12/2014 $1.25 1/13/2014 1/19/2014 $2.22 1/20/2014 1/26/2014 $2.23 1/27/2014 2/2/2014 $2.11 Table A ID Date Rate 1 1/1/2014 2 1/11/2014 3 1/21/2014 4 1/10/2014 5 1/15/2014 6 1/22/2014 7 1/20/2014 8 1/3/2014 9 1/2/2014 10 1/4/2014 ```
``` UPDATE A, B SET A.Rate = [B].[Rate] WHERE (([A].[DT] Between [B].[StartDate] And [B].[EndDate])); ``` I changed `A.Date` to `A.DT` because `Date` is a reserved word in Access.
Try this: ``` select b.rate from a,b where a.date <= b.end_date and a.date >= b.start_date ``` OR ``` select b.rate from a,b where a.date between b.start_date and b.end_date ```
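Here is a runnable sketch of the range join from the answers, using SQLite via Python; the dates are rewritten as ISO yyyy-mm-dd so string comparison with BETWEEN works, and only a couple of the question's sample rows are used:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (Start_Date TEXT, End_Date TEXT, Rate TEXT)")
conn.execute("CREATE TABLE a (ID INTEGER, Date TEXT)")
conn.executemany("INSERT INTO b VALUES (?,?,?)",
                 [("2013-12-30", "2014-01-05", "$1.20"),
                  ("2014-01-06", "2014-01-12", "$1.25")])
conn.executemany("INSERT INTO a VALUES (?,?)", [(1, "2014-01-01"), (4, "2014-01-10")])

# Each transaction picks up the rate of the week containing its date.
rows = conn.execute("""
    SELECT a.ID, b.Rate
    FROM a JOIN b ON a.Date BETWEEN b.Start_Date AND b.End_Date
    ORDER BY a.ID
""").fetchall()
print(rows)  # [(1, '$1.20'), (4, '$1.25')]
```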
Query based on date range
[ "sql", "ms-access", "vba" ]
Having a tough time with this one. I have a Job model and a JobStatus model. A job has many statuses, each with a different name (a slug in this case). I need an 'active' method I can call to find all jobs where none of the associated statuses has a slug of 'dropped-off'. ``` class Job < ActiveRecord::Base belongs_to :agent has_many :statuses, :class_name => "JobStatus" validates :agent_id, :pickup_lat, :pickup_lng, :dropoff_lat, :dropoff_lng, :description, presence: true class << self def by_agent agent_id where(agent_id: agent_id) end def active # # this should select all items where no related job status # has the slug 'dropped-off' # end end end ``` Job Status: ``` class JobStatus < ActiveRecord::Base belongs_to :job validates :job_id, :slug, presence: true end ``` The closest I've gotten so far is: ``` def active joins(:statuses).where.not('job_statuses.slug = ?', 'dropped-off') end ``` But it's still selecting the Job that has a dropped-off status, because there are previous statuses that are not 'dropped-off'. If I knew the raw SQL, I could probably work it into ActiveRecord speak, but I can't quite wrap my head around it. I'm also not married to using ActiveRecord; if the solution is raw SQL, that's fine too.
``` Job.where.not(id: JobStatus.where(slug: 'dropped-off').select(:job_id)) ``` will generate a nested subquery for you.
Not the cleanest method, but you could use two queries. ``` # Getting the ID of all the Jobs which have 'dropped-off' JobStatuses dropped_off_ids = JobStatus.where(slug: 'dropped-off').pluck(:job_id) # Using the previous array to filter the Jobs Job.where.not(id: dropped_off_ids) ```
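Both answers boil down to the same SQL shape: exclude jobs whose id appears in a subquery of dropped-off statuses. A plain-SQL sketch of that, using SQLite via Python with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE job_statuses (job_id INTEGER, slug TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO job_statuses VALUES (?,?)",
                 [(1, "picked-up"), (1, "dropped-off"), (2, "picked-up"), (3, "created")])

# Job 1 has a dropped-off status, so only jobs 2 and 3 are "active",
# even though job 1 also has non-dropped-off statuses.
rows = conn.execute("""
    SELECT id FROM jobs
    WHERE id NOT IN (SELECT job_id FROM job_statuses WHERE slug = 'dropped-off')
    ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [2, 3]
```

This is why the simple `joins(:statuses).where.not(...)` attempt fails: it filters status rows, not jobs.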
Rails 4: ActiveRecord or MySQL query where no related models have attribute
[ "mysql", "sql", "ruby-on-rails", "ruby", "activerecord" ]
I have a table named `ExamResult` which has the following columns ``` studentId, Subject, Standard, Marks ``` and I have the following values in my table for studentId=1 ``` 1,maths,9,78 1,english,9,80 1,history,9,67 1,geography,9,90 ``` for studentId=2 ``` 2,maths,9,68 2,english,9,83 2,history,9,70 2,geography,9,69 ``` with similar entries up to studentId 30. Now I want to calculate the percentage for each student and select data from the table with the following columns ``` studentName(from student table),Standard,Percentage(with highest percentage on top) ``` **Eg:** ``` Amit,9,78% Sam,9,77% ``` The problem is how to calculate this percentage in SQL: ``` select stu.name,exam.standard,(what to do here) as Percentage from Student stu inner join ExamResult exam on stu.Id=exam.studentId; ``` Please help me.
Try: ``` SELECT stu.name, exam.standard, AVG(exam.Marks) as 'Percentage' FROM Student stu INNER JOIN ExamResult exam ON stu.Id=exam.studentId GROUP BY stu.name, exam.standard ORDER BY 'Percentage' DESC ``` NOTE: This is biased towards Microsoft SQL Server's flavor of SQL. You didn't specify which version you were using. Hopefully you can get it working from here.
Have you worked with the GROUP BY clause? ``` select stu.name, exam.standard, avg(exam.Marks) as Percentage from Student stu inner join ExamResult exam on stu.Id=exam.studentId group by stu.name, exam.standard order by Percentage desc ```
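The AVG + GROUP BY approach from both answers can be sketched in SQLite via Python; here the student name is stored directly in the exam table instead of joining a separate Student table, and the marks are made up to land on the question's 78%/77% example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exam (name TEXT, standard INTEGER, marks INTEGER)")
conn.executemany("INSERT INTO exam VALUES (?,?,?)",
                 [("Amit", 9, 78), ("Amit", 9, 80), ("Amit", 9, 67), ("Amit", 9, 87),
                  ("Sam", 9, 68), ("Sam", 9, 83), ("Sam", 9, 70), ("Sam", 9, 87)])

# One row per student, averaged over subjects, highest percentage first.
rows = conn.execute("""
    SELECT name, standard, AVG(marks) AS percentage
    FROM exam
    GROUP BY name, standard
    ORDER BY percentage DESC
""").fetchall()
print(rows)  # [('Amit', 9, 78.0), ('Sam', 9, 77.0)]
```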
Calculate percentage in SQL
[ "sql" ]
I need to make a table that contains a row for every date from 01/01/2012 to present and a count of how many bugs were open on each date. This is the data I have, in a table called `Bugs`: `BugID`, `CreatedDate`, `UpdatedDate`, `Status` (status can be open or closed). If the bug is closed, then the UpdatedDate is the day it closed. If the bug is open, then the UpdatedDate is irrelevant because the bug is open to the current date. I can make a list of the dates, but I don't know what to do from there. ``` WITH D AS ( SELECT @RangeStartDate DateValue UNION ALL SELECT DateValue + 1 FROM D WHERE DateValue + 1 < @RangeEndDate ) ```
``` declare @bugs table(BUGID int,Createddate datetime,Updateddate datetime,Status char(1)) insert into @bugs Select 1,'20140101',NULL,'I' UNION Select 2,'20140102','20140110','U' UNION Select 3,'20140103','20140110','C' UNION Select 4,'20140104',NULL,'I' UNION Select 5,'20140105','20140110','U' UNION Select 6,'20140106','20140109','C' UNION Select 10,'20140101','20140110','C' declare @RangeStartDate datetime declare @RangeEndDate datetime select @RangeStartDate ='20140101' select @RangeEndDate ='20140201' ;WITH D AS ( SELECT @RangeStartDate DateValue UNION ALL SELECT DateValue + 1 FROM D WHERE DateValue + 1 < @RangeEndDate ) Select D.* , (Select SUM(dd) from (Select 1 as DD from @bugs b where b.Createddate<=d.DateValue and ((b.Status<>'C') or (b.Status='C' and b.Updateddate>=d.DateValue)) ) a) from D ```
``` WITH D AS ( SELECT @RangeStartDate DateValue UNION ALL SELECT DateValue + 1 FROM D WHERE DateValue + 1 < @RangeEndDate ) Select D.DateValue, coalesce(count(BugID),0) FROM D LEFT JOIN Bugs B on B.CreateDate <= D.Datevalue and (B.UpdateDate >= D.DateValue or B.UpdateDate is null) Group By D.DateValue ``` Intent: return all dates from D, joined to only those Bugs rows where the bug's create date is on or before the date, and the bug's UpdatedDate is on or after the date or is null (still open).
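A minimal rendition of this LEFT JOIN approach, using SQLite via Python: a recursive CTE generates the date spine, and each date counts the bugs created on or before it that were not yet closed. The sample rows are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bugs (BugID INTEGER, CreatedDate TEXT, UpdatedDate TEXT, Status TEXT)")
conn.executemany("INSERT INTO bugs VALUES (?,?,?,?)",
                 [(1, "2014-01-01", None, "Open"),          # still open
                  (2, "2014-01-02", "2014-01-03", "Closed")])  # closed Jan 3

rows = conn.execute("""
    WITH RECURSIVE d(day) AS (
      SELECT '2014-01-01'
      UNION ALL
      SELECT date(day, '+1 day') FROM d WHERE day < '2014-01-04'
    )
    SELECT d.day, COUNT(b.BugID)
    FROM d LEFT JOIN bugs b
      ON b.CreatedDate <= d.day
     AND (b.UpdatedDate >= d.day OR b.UpdatedDate IS NULL)
    GROUP BY d.day ORDER BY d.day
""").fetchall()
print(rows)
# [('2014-01-01', 1), ('2014-01-02', 2), ('2014-01-03', 2), ('2014-01-04', 1)]
```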
SQL - Count total bugs open per day
[ "sql", "sql-server" ]
I have two tables in my SQLite Database (dummy names): **Table 1:** FileID F\_Property1 F\_Property2 ... **Table 2:** PointID ForeignKey(fileid) P\_Property1 P\_Property2 ... The entries in Table2 all have a foreign key column that references an entry in `Table1`. I now would like to select entries from `Table2` where, for example, `F_Property1` of the referenced file in `Table1` has a specific value. I tried something naive: ``` select * from Table2 where fileid=(select FileID from Table1 where F_Property1 > 1) ``` Now this actually works... kind of. It selects a correct file id from `Table1` and returns entries from `Table2` with this ID. But it only uses the first returned ID. What I need it to do is basically connect the returned IDs from the inner select by `OR`, so it returns data for all the IDs. How can I do this? I think it is some kind of cross-table query, like what is asked here: [What is the proper syntax for a cross-table SQL query?](https://stackoverflow.com/questions/244390/what-is-the-proper-syntax-for-a-cross-table-sql-query) but those answers contain no explanation of what they are actually doing, so I'm struggling with any implementation. They are using `JOIN` statements, but wouldn't this mix entries from Table1 and Table2 together while only checking matching IDs in both tables? At least that is how I understand this: <http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins> As you may have noticed from the style, I'm very new to using databases in general, so please forgive me if not everything is clear about what I want. Please leave a comment and I will try to improve the question if necessary.
The `=` operator compares a single value against another, so it is assumed that the subquery returns only a single row. To check whether a (column) value is in a set of values, use IN: ``` SELECT * FROM Table2 WHERE fileid IN (SELECT FileID FROM Table1 WHERE F_Property1 > 1) ```
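The difference is easy to demonstrate with contrived data in SQLite via Python: `=` uses only one row of the subquery's result (SQLite silently takes the first), while IN matches against all of them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (FileID INTEGER, F_Property1 INTEGER)")
conn.execute("CREATE TABLE t2 (PointID INTEGER, fileid INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?,?)", [(1, 2), (2, 5), (3, 0)])
conn.executemany("INSERT INTO t2 VALUES (?,?)", [(10, 1), (11, 2), (12, 3)])

# Subquery returns FileIDs 1 and 2; '=' only sees the first one.
eq = conn.execute(
    "SELECT PointID FROM t2 WHERE fileid = (SELECT FileID FROM t1 WHERE F_Property1 > 1)"
).fetchall()
inn = conn.execute(
    "SELECT PointID FROM t2 WHERE fileid IN (SELECT FileID FROM t1 WHERE F_Property1 > 1)"
).fetchall()
print(eq, inn)  # [(10,)] [(10,), (11,)]
```

This matches the behavior the question observed: the naive `=` version "only uses the first returned ID".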
The way joins work is not by "mixing" the data, but by combining rows based on the key. In your case (I am assuming the key field in Table1 is unique), if you join those two tables on the key field, you end up with all the entries in Table2 plus the corresponding fields from Table1. If you were doing this: ``` select * from table1, table2 where table1.fieldID=table2.foreignkey; ``` then, providing your key fields are set up right, you will end up with the following: ``` PointID ForeignKey(fileid) P_Property1 P_Property2 FileID F_Property1 F_Property2 ``` The field values from Table1 would be from matching rows. Now, if you do this: ``` select table2.* from table1, table2 where table1.fieldID=table2.foreignkey and F_Property1>1; ``` you would essentially get the same set of records, but only showing the columns from the second table, and only the rows whose referenced Table1 entry satisfies the WHERE condition. Hope this helps :)
Select values from one table depending on referenced value in another table
[ "", "sql", "sqlite", "" ]
**Query**

```
SELECT DISTINCT code, hour(datetime), sum(price) FROM something ORDER BY code;
```

I have one result table with 3 columns: `code`, `hour`, `sum`. I would like to have as many tables as there are different values of `code` (one code = one table), each table containing `code` (which will be the same throughout the table), `hour` and `sum`. Thank you.

**Edit 1:** Thank you for the replies. I don't want to UNION tables, not at all. @Gordon Linoff: I forgot to say that it's for an analytics DB (InfiniDB), so I know that it's not the right way for a relational DB. I would like to do this to avoid having many queries (one query is faster than many queries).

What I have: ![enter image description here](https://i.stack.imgur.com/Dcx24.jpg) What I want (not every table is shown, just to illustrate): ![enter image description here](https://i.stack.imgur.com/41J77.jpg)
First, your query would more appropriately be written as: ``` SELECT code, hour(datetime), sum(price) FROM something GROUP BY code, hour(datetime) ORDER BY code; ``` If you want a separate result table for each code, you need to create the tables separately. The basic statement is: ``` CREATE TABLE table_code1 as SELECT code, hour(datetime) as hour, sum(price) as price FROM something WHERE code = 'code1' GROUP BY hour(datetime); ``` You would need to put this in a loop for all the codes and use prepared statements for the code. However, I cannot really think of a good reason why you would want this. Using the query directly or putting all the codes summaries in a single table makes more sense. In fact, having multiple tables with the same structure in a database is usually a sign of poor database design.
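The create-a-table-per-code loop can be sketched from Python, with SQLite standing in for InfiniDB. The table-name pattern `table_code_<code>` is an assumption for the sketch, and note that identifiers cannot be bound as query parameters:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE something (code TEXT, datetime TEXT, price REAL);
    INSERT INTO something VALUES
        ('A', '2014-01-01 10:15', 2.0),
        ('A', '2014-01-01 10:45', 3.0),
        ('B', '2014-01-01 11:05', 7.0);
""")

codes = [r[0] for r in con.execute("SELECT DISTINCT code FROM something")]
for code in codes:
    # Table names cannot be bound as parameters, so the identifier is
    # built by hand; real code must validate/escape the code value first.
    table = "table_code_%s" % code
    con.execute(
        "CREATE TABLE %s AS "
        "SELECT code, strftime('%%H', datetime) AS hour, SUM(price) AS price "
        "FROM something WHERE code = ? GROUP BY hour" % table,
        (code,),
    )

rows_a = con.execute("SELECT code, hour, price FROM table_code_A").fetchall()
```

As the answer notes, this is usually a sign the design should be a single summary table instead; the loop is shown only because the question asks for one table per code.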
Use a `UNION` statement to concatenate multiple queries:

> `SELECT column_name(s) FROM table1`
> `UNION`
> `SELECT column_name(s) FROM table2;`
Split a table from a query to many tables (with "DISTINCT")
[ "", "mysql", "sql", "" ]
In my software system, an important lookup column historically had one of three numeric values. We're now going to allow more values in between, but I still need to be able to map all in-between values onto the original three-point scale. The system rates positive, neutral, and negative based on an integer value; it used to be 2, 4, and 6. We have now gone to a -5 to +5 rating system, but still only use 2, 4, and 6 to pass in positive, neutral and negative values. In the database we had a sproc written that returned rows with exactly 2 or 4 or 6, but now it needs to return rows with -5 through -2 if the parameter is 6, -1 through 1 if it is 4, and 2 through 5 if it is 2. This is how the inner join was written:

```
INNER JOIN @SiteIDs sti ON sti.SiteID = s.SiteID
    --AND d.SiteID IN (ISNULL(@SiteIDList, d.SiteID))
    AND ISNULL(s.DatePosted, '1/1/1901') >= ISNULL(@StartDate, '1/1/1900')
    AND ISNULL(s.DatePosted, '1/1/9998') <= ISNULL(@EndDate, '1/1/9999')
    AND s.Favorite = CASE WHEN @FavoritesOnly = 1 THEN 1 ELSE s.Favorite END
    AND s.SID = ISNULL(@SID, s.SID) -- this is what needs to be changed
```

I know what needs to happen: when `@SID` is 6, `s.SID` needs to match all values that are -2, -3, -4 and -5, and similarly for the other two ranges. I'm just not sure how to write the syntax properly.
I think this is what you're after:

```
INNER JOIN @SiteIDs sti ON sti.SiteID = s.SiteID
    --AND d.SiteID IN (ISNULL(@SiteIDList, d.SiteID))
    AND ISNULL(s.DatePosted, '1/1/1901') >= ISNULL(@StartDate, '1/1/1900')
    AND ISNULL(s.DatePosted, '1/1/9998') <= ISNULL(@EndDate, '1/1/9999')
    AND s.Favorite = CASE WHEN @FavoritesOnly = 1 THEN 1 ELSE s.Favorite END
    AND s.SID between case coalesce(@SID, 0)
                          when 6 then -5
                          when 4 then -1
                          when 2 then 2
                          else s.SID -- if @SID is not a legacy value, all results are returned
                      end
                  and case coalesce(@SID, 0)
                          when 6 then -2
                          when 4 then 1
                          when 2 then 5
                          else s.SID
                      end
```
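As a quick sanity check of the band mapping (6 maps to -5..-2, 4 to -1..1, 2 to 2..5), here is a SQLite sketch from Python with a made-up one-column ratings table; the CASE-driven BETWEEN bounds mirror the answer's approach:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE s (SID INTEGER)")
con.executemany("INSERT INTO s VALUES (?)", [(v,) for v in range(-5, 6)])

def rows_for(legacy_sid):
    # CASE picks the lower and upper bound for the legacy code; any other
    # value falls through to SID itself, so every row matches.
    return [r[0] for r in con.execute("""
        SELECT SID FROM s
        WHERE SID BETWEEN
              CASE ? WHEN 6 THEN -5 WHEN 4 THEN -1 WHEN 2 THEN 2 ELSE SID END
          AND CASE ? WHEN 6 THEN -2 WHEN 4 THEN 1 WHEN 2 THEN 5 ELSE SID END
        ORDER BY SID
    """, (legacy_sid, legacy_sid))]

negative = rows_for(6)      # the old "6" now means the negative band
neutral = rows_for(4)
positive = rows_for(2)
everything = rows_for(None) # non-legacy value: no filtering
```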
Is this what you want?

```
AND (@SID IS NULL OR (@SID = 6 AND SID IN (-2, -3, -4, -5)))
```

(The parentheses matter: without them, `AND` binds tighter than `OR` and the filter misbehaves.)
how can I make a single join value match a range of values
[ "", "sql", "inner-join", "" ]
I have a table called `Tasks`. I want to sum the number of tasks status types (open, closed) and priorities grouped by project. Something like this: ``` =============================================================================================== | Project | Closed | Major | Moderate | Minor | Total =============================================================================================== |Project 1 | 4 | 3 | 0 | 0 | 7 |Project 2 | 1 | 0 | 2 | 5 | 8 |Project 3 | 10 | 8 | 3 | 4 | 25 ``` The "priority" is determined by the ability to find a keyword within a VarChar field. (I know, very bad, but no problems there) *The part I can't figure out* is how can I write the query so that IF I determine priority is "Major", break, or move on to the next task. Once I've determined the priority is "Major" the other priorities (Moderate, and Minor) should be ignored. The same goes for "Moderate". If I find "Moderate" I should not worry about Minor. **This is the part I'm having trouble with.** **How do I create an if statement based on results from another select?** This is what I have so far which gives me all the counts...but the priorities are not "chained" or dependent on the match condition of the parent. ``` SELECT Project, count(status) as "Status Count" ,sum(case when status = 'Closed' then 1 else 0 end) as Closed ,sum(case when status = 'Open' then 1 else 0 end) as "Open" ,sum(case when CHARINDEX ( 'Major', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end) as Major ,sum(case when CHARINDEX ( 'Moderate', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end) as Moderate ,sum(case when CHARINDEX ( 'Minor', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end) as Minor FROM Tasks GROUP BY Project ``` ***DISCLAIMER***: I know this is a terrible data model. I inherited it from a previous developer. 
I'm in the process of overhauling the entire system but I need to be able to generate reports as it is now until I finish the new system.
I would use a subquery to determine the priority and then use that in your outer query:

```
SELECT Project
      ,count(status) as "Status Count"
      ,sum(case when status = 'Closed' then 1 else 0 end) as Closed
      ,sum(case when status = 'Open' then 1 else 0 end) as "Open"
      ,sum(case when priority = 'Major' then 1 else 0 end) as Major
      ,sum(case when priority = 'Moderate' then 1 else 0 end) as Moderate
      ,sum(case when priority = 'Minor' then 1 else 0 end) as Minor
      ,sum(case when priority = 'Unknown' then 1 else 0 end) as Unknown
from
(
    select t.*
          ,case when CHARINDEX ( 'Major', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 'Major'
                when CHARINDEX ( 'Moderate', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 'Moderate'
                when CHARINDEX ( 'Minor', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 'Minor'
                else 'Unknown'
           end priority
    from tasks t
) tasks
GROUP BY Project
```
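The derive-priority-then-aggregate shape can be exercised outside SQL Server; in this SQLite sketch `instr()` stands in for `CHARINDEX` (note its arguments are reversed: haystack first), and the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tasks (Project TEXT, status TEXT, DefectSummary TEXT);
    INSERT INTO tasks VALUES
        ('P1', 'Open',   'Major outage, Minor typo too'),
        ('P1', 'Closed', 'Moderate issue'),
        ('P1', 'Open',   'Minor nit');
""")

rows = con.execute("""
    SELECT Project,
           SUM(priority = 'Major')    AS major,
           SUM(priority = 'Moderate') AS moderate,
           SUM(priority = 'Minor')    AS minor
    FROM (
        SELECT t.*,
               CASE WHEN instr(DefectSummary, 'Major')    > 0 THEN 'Major'
                    WHEN instr(DefectSummary, 'Moderate') > 0 THEN 'Moderate'
                    WHEN instr(DefectSummary, 'Minor')    > 0 THEN 'Minor'
                    ELSE 'Unknown'
               END AS priority
        FROM tasks t
    )
    GROUP BY Project
""").fetchall()
# The first row mentions both 'Major' and 'Minor', but the CASE stops at
# 'Major', so the 'Minor' mention is not double-counted.
```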
One way would be preparing a data source with the results of substring searches, exposing that source as a *Common Table Expression* (CTE), and basing the `GROUP BY` on that source: ``` WITH Task_CTE (Project, Status, Closed, Open, Major, Moderate, Minor) AS ( SELECT Project, status , case when status = 'Closed' then 1 else 0 end as Closed , case when status = 'Open' then 1 else 0 end as "Open" , case when CHARINDEX ( 'Major', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end as Major , case when CHARINDEX ( 'Moderate', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end as Moderate , case when CHARINDEX ( 'Minor', DefectSummary COLLATE Latin1_General_CI_AS) > 0 then 1 else 0 end as Minor FROM Tasks ) SELECT Project , count(status) , sum(Closed) as Closed , sum("Open") as "Open" , sum(Major) as Major , sum(CASE WHEN Major=1 THEN 0 ELSE Moderate END) as Moderate , sum(CASE WHEN Major=1 OR Moderate=1 THEN 0 ELSE Minor END) as Minor FROM Task_CTE GROUP BY Project ``` Using this approach lets your grouping `SELECT` see the flags prepared by the "raw" `SELECT` of the CTE. In the query above, the flags of `'Moderate'` and `'Minor'` are ignored when `'Major'` is present.
SQL sequential IF conditions
[ "", "sql", "sql-server", "" ]
For example: ``` Field1 | Field2 | Field3 | -------------------------- the | lazy | dog ``` into ``` Field1 | Field2 | Field3 | History -------------------------------------- the | lazy | dog | thelazydog ``` Don't care about spaces etc.
Try something like:

```
INSERT INTO TargetTable (Field1, Field2, Field3, History)
SELECT Field1, Field2, Field3, Field1 + Field2 + Field3
FROM SourceTable
```

(or, to fill the column in place on existing rows, `UPDATE YourTable SET History = Field1 + Field2 + Field3`).
Use a virtual column. to handle summing null values use isNull. ``` CREATE TABLE Example ( ID int IDENTITY (1,1) NOT NULL , field1 smallint , field2 smallint , field3 smallint , history AS isnull(field1,0) + isnull(field2,0) + isNull(field3,0) ); ```
How can I merge 2 (or more) columns into a new one for each SQL row?
[ "", "sql", "sql-server-2008", "" ]
I have a table with customers that are active or inactive. The inactive ones are those who have gone more than 1 year without any payment. I need to know how many inactive customers became active in the first semester of this year. I only have a table with: "costumer id, amount paid, date paid, status". I'm trying something like this:

```
select DISTINCT COSTUMER_ID
from finance
where (DATEPAID >= '2014-01-01' and DATEPAID < '2014-07-01')
and COSTUMER_ID in
(
    select COSTUMER_ID
    from finance
    where DATEPAID < '2013-07-01'
)
```

The first part is to see who paid this year; the second, who paid at least once before. But I can't get any further. Any help?

Sample data:

```
costumer_id   amount paid   date paid    todaystatus
1             50            2012-02-03   inactive
1             75            2013-02-03   inactive
2             10            2013-01-02   active
2             12            2014-04-02   active
3             65            2014-06-02   active
4             10            2011-01-06   active
4             30            2014-04-16   active
```

Customers 2 and 4 are the ones I want. Customer 2 became inactive on 2014-01-02 but reactivated on 2014-04-02. Customer 4 became inactive on 2012-01-06 but reactivated on 2014-04-16. The output can be a list of `costumer_id`. Thanks
Try this, and add some data samples in SQLfiddle to tune this query. ``` select DISTINCT f1.COSTUMER_ID from finance f1 where (DATEPAID >= '2014-01-01' and DATEPAID < '2014-07-01') and datediff(d, ( select top 1 f2.DATEPAID from finance f2 where DATEPAID <> f1.DATEPAID and f1.COSTUMER_ID=f2.COSTUMER_ID order by f2.DATEPAID desc) ,f1.DATEPAID)>365 ```
You want to look at the maximum of the date paid for customers who are currently inactive. This seems to implement your business rules:

```
select CUSTOMER_ID, max(DATEPAID) as maxdp
from finance
group by CUSTOMER_ID
having max(DATEPAID) < '2013-07-01';
```

Note that I changed `COSTUMER_ID` to `CUSTOMER_ID`, which is the better spelling in English. Also note that SQL Server does not allow a column alias such as `maxdp` in the `HAVING` clause, which is why the aggregate is repeated there.
SQL Select help - who paid this year but is inactive
[ "", "sql", "sql-server", "select", "" ]
`select player,date from nba.player_stats where tm='NOP';` This returns a column of NBA players for the New Orleans Pelicans and a column of dates of games that the players played in. There are multiple players for each date. When a player gets hurt and does not play, the player does not show up in the player column for that specific date. I'm trying to write a query where I can say something like WHERE player='Ryan Anderson' does not exist, return those dates and the list of players that played on those dates. Any help would be greatly appreciated. Sorry, I'm having trouble formatting the table. It is listed like so, except with more dates included: ``` **Player------------Date** Ryan Anderson 2014-01-01 Jrue Holiday 2014-01-01 Anthony Davis 2014-01-01 Tyreke Evans 2014-01-01 Anthony Morrow 2014-01-01 Eric Gordon 2014-01-01 Brian Roberts 2014-01-01 Jeff Withey 2014-01-01 Darius Miller 2014-01-01 Al-Farouq Aminu 2014-01-01 Austin Rivers 2014-01-01 Alexis Ajinca 2014-01-01 Greg Stiemsma 2014-01-01 Alexis Ajinca 2014-01-04 Eric Gordon 2014-01-04 Jrue Holiday 2014-01-04 Tyreke Evans 2014-01-04 Al-Farouq Aminu 2014-01-04 Anthony Davis 2014-01-04 Brian Roberts 2014-01-04 Jeff Withey 2014-01-04 Greg Stiemsma 2014-01-04 Darius Miller 2014-01-04 ```
This will show all players that played on dates when Ryan Anderson did not play ``` select player, date from nba.player_stats where date not in (select date from nba.player_stats where player = 'Ryan Anderson') ```
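The `NOT IN` approach is straightforward to check with SQLite from Python; the roster below is a trimmed, made-up sample:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE player_stats (player TEXT, date TEXT)")
con.executemany("INSERT INTO player_stats VALUES (?, ?)", [
    ("Ryan Anderson", "2014-01-01"),
    ("Jrue Holiday",  "2014-01-01"),
    ("Jrue Holiday",  "2014-01-04"),
    ("Anthony Davis", "2014-01-04"),
])

# Keep only the games whose date never appears next to Ryan Anderson.
rows = con.execute("""
    SELECT player, date FROM player_stats
    WHERE date NOT IN (SELECT date FROM player_stats
                       WHERE player = 'Ryan Anderson')
    ORDER BY player
""").fetchall()
```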
There are many ways to write the query, here are a few examples (all selecting NOP players that played when Ryan Anderson did not); Using NOT IN, just check dates where he plays and exclude those; ``` SELECT player, date FROM player_stats WHERE tm = 'NOP' AND date NOT IN ( SELECT date FROM player_stats WHERE player='Ryan Anderson' ); ``` Using NOT EXISTS, exclude rows where there's a player with the same date and the name 'Ryan Anderson'; ``` SELECT player, date FROM player_stats WHERE tm = 'NOP' AND NOT EXISTS ( SELECT 1 FROM player_stats ps WHERE player='Ryan Anderson' AND player_stats.date = ps.date ); ``` Using LEFT JOIN, try to match with a player named 'Ryan Anderson' and the same date, display if no match; ``` SELECT ps.player, ps.date FROM player_stats ps LEFT JOIN player_stats ps2 ON ps2.player='Ryan Anderson' AND ps.date = ps2.date WHERE ps.tm = 'NOP' AND ps2.player IS NULL ``` [A simple SQLfiddle to test all of them](http://sqlfiddle.com/#!2/dd88a/1).
SQL - Select Only Players and Dates When a Date Does Not Contain a Player
[ "", "sql", "" ]
I have two tables. One, say `Employee`, with columns:

```
EmpId  Name  Class  Region
1      rr    x      t
2      tr    v      g
```

Another table, `ConfidentalEmployee`, with columns:

```
EmpId(foreign key)  Name
1                   rr
```

Now I have to write a query with all the fields of the Employee table, but for those employees whose ID is in the ConfidentalEmployee table, the details (`Class`, `Region`) should come back as `CAN'T DISCLOSE`, as follows:

```
EmpId  Name  Class           Region
1      rr    CAN'T DISCLOSE  CAN'T DISCLOSE
2      tr    v               g
```

I can do it using two queries joined on EmpIds, with a UNION of both result sets. My query is as follows:

```
select e.EmpId, e.Name, e.Class, e.Region
from Employee e
inner join ConfidentalEmployee ce on e.EmpId <> ce.EmpId
UNION
select e.EmpId, e.Name, 'CAN''T DISCLOSE' as Class, 'CAN''T DISCLOSE' as Region
from Employee e
inner join ConfidentalEmployee ce on e.EmpId = ce.EmpId
```

But I am wondering if it's possible with a single query, without the UNION?
You can try this query:

```
SELECT Emp.EmpId,
       Emp.Name,
       CASE WHEN CEmp.EmpId IS NULL THEN Emp.Class ELSE 'CAN''T DISCLOSE' END AS Class,
       CASE WHEN CEmp.EmpId IS NULL THEN Emp.Region ELSE 'CAN''T DISCLOSE' END AS Region
FROM Employee AS Emp
LEFT JOIN ConfidentalEmployee CEmp ON Emp.EmpId = CEmp.EmpId
```
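The LEFT JOIN masking can be sanity-checked in SQLite from Python, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (EmpId INTEGER, Name TEXT, Class TEXT, Region TEXT);
    CREATE TABLE ConfidentalEmployee (EmpId INTEGER, Name TEXT);
    INSERT INTO Employee VALUES (1, 'rr', 'x', 't'), (2, 'tr', 'v', 'g');
    INSERT INTO ConfidentalEmployee VALUES (1, 'rr');
""")

# A LEFT JOIN keeps every employee; a non-NULL match flags confidentiality.
rows = con.execute("""
    SELECT e.EmpId, e.Name,
           CASE WHEN c.EmpId IS NULL THEN e.Class  ELSE 'CAN''T DISCLOSE' END,
           CASE WHEN c.EmpId IS NULL THEN e.Region ELSE 'CAN''T DISCLOSE' END
    FROM Employee e
    LEFT JOIN ConfidentalEmployee c ON e.EmpId = c.EmpId
    ORDER BY e.EmpId
""").fetchall()
```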
You want a join, specifically a `left outer join` and check for matches: ``` select e.EmpId, coalesce(ce.Name, e.Name) as Name, (case when ce.empid is not null then 'CAN''T DISCLOSE' else e.Class end) as class, (case when ce.empid is not null then 'CAN''T DISCLOSE' else e.Region end) as region from employee e left outer join confidentialemployee ce on e.empid = ce.empid; ``` This is assuming that confidential employees are in both tables, as in the example in your question. Otherwise, `union all` is the appropriate approach.
Need Union kind of behavior without using Union of queries/resultset
[ "", "sql", "sql-server", "" ]
I have a table like this:

```
Col1 | Col2
AAA  | 1
BBB  | X
AAA  | X
CCC  | 1
```

I want to find duplicates based on Col1. Then I want to keep the row which has 'X' in Col2 and delete the other one. I found how to find the duplicates:

```
SELECT col1, col2, col3=count(*)
INTO holdkey
FROM t1
GROUP BY col1, col2
HAVING count(*) > 1
```

But that example only goes on to show how to delete identical rows, not how to choose which one to keep. By the way, this is MS SQL Server 2000.
Assuming the table is called test, this should work. I just tried it :-) ``` DELETE From test WHERE Col1 IN( SELECT Col1 FROM test GROUP BY Col1 HAVING COUNT(1) > 1) AND Col2 = '1' ```
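The same keep-the-'X'-row rule can be checked with SQLite from Python. The sketch generalizes the answer slightly, deleting any non-'X' sibling instead of hard-coding `Col2 = '1'`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (Col1 TEXT, Col2 TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [("AAA", "1"), ("BBB", "X"), ("AAA", "X"), ("CCC", "1")])

# For every Col1 value that occurs more than once, delete the rows
# that do not carry the 'X' marker.
con.execute("""
    DELETE FROM t1
    WHERE Col1 IN (SELECT Col1 FROM t1 GROUP BY Col1 HAVING COUNT(*) > 1)
      AND Col2 <> 'X'
""")
rows = con.execute("SELECT Col1, Col2 FROM t1 ORDER BY Col1").fetchall()
```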
This is a good use of updatable CTEs and window functions:

```
with todelete as (
      select t1.*, count(*) over (partition by col1) as cnt
      from t1
     )
delete from todelete
    where cnt > 1 and col2 <> 'X';
```

Note, though, that CTEs and window functions were introduced in SQL Server 2005, so this will not run on SQL Server 2000.
Find duplicate rows and delete specific one
[ "", "sql", "sql-server", "sql-server-2000", "" ]
I'm having a hard time summing up a column across two tables. The scenario is something like this (refer to the image below): ![enter image description here](https://i.stack.imgur.com/ct6pT.png)

Table 1 may have a lot of rows per Date, but Table 2 may only contain two rows of data per Date. What I want to do is sum up all Item/Price rows of Table1 by their Date and ADD that to the corresponding SUM of Item/Price of Table2; the grouping category of the SUM is the Date. I tried various joins (left, right and inner) but none of them produced the result I am expecting. My expected result is the Result table, but my query produces a value that is far too high. Thanks.
Use a `UNION` clause like this: ``` WITH t(d, p) AS ( SELECT [Date], Price FROM Table1 UNION ALL SELECT [Date], Price FROM Table2 ) SELECT d, SUM(p) FROM t GROUP BY d ```
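The UNION ALL-then-GROUP BY pattern runs unchanged on SQLite, so it can be verified from Python. The `Date` column is shortened to `d` here and the prices are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (d TEXT, Price REAL);
    CREATE TABLE Table2 (d TEXT, Price REAL);
    INSERT INTO Table1 VALUES ('2014-01-01', 10), ('2014-01-01', 5), ('2014-01-02', 2);
    INSERT INTO Table2 VALUES ('2014-01-01', 1), ('2014-01-02', 3);
""")

# Stack both tables first, then sum once per date.
rows = con.execute("""
    WITH t(d, p) AS (
        SELECT d, Price FROM Table1
        UNION ALL
        SELECT d, Price FROM Table2
    )
    SELECT d, SUM(p) FROM t GROUP BY d ORDER BY d
""").fetchall()
```

Joining the tables before summing multiplies rows (every Table1 row pairs with every matching Table2 row), which is why the join attempts produced inflated totals.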
You can do this with `UNION ALL` in either a subquery or a cte, cte shown here: ``` ;WITH cte AS (SELECT [Date], Price FROM Table1 UNION ALL SELECT [Date], Price FROM Table2 ) SELECT [Date], SUM(Price) AS Total_Price FROM cte GROUP BY [Date] ``` Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/c2ace/2/0)
SQL SUM function inquiry
[ "", "sql", "sql-server", "" ]
So I want to insert a new row into a table using MySQL. Here is an example of an INSERT INTO ... SELECT from w3schools:

```
INSERT INTO Customers (CustomerName, Country)
SELECT SupplierName, Country
FROM Suppliers
WHERE Country='Germany';
```

I've attempted to modify it a bit in order to select and insert specific integer values from two other tables. I've tried a couple of different ways without luck. Here's roughly what I'm trying to do:

```
INSERT INTO table0 (integerVar1, integerVar2, booleanVar1)
SELECT int1 FROM table1 WHERE aString = "something",
       int2 FROM table2 WHERE bString = "otherthing",
       0;
```

The code above isn't valid syntax. What would I have to adjust to make the selection? Or would I have to try something completely different? Thank you very much.
You need to have each in a sub-select like below: ``` INSERT INTO table0 (integerVar1, integerVar2, booleanVar1) SELECT (SELECT int1 FROM table1 WHERE aString = 'something'), (SELECT int2 FROM table2 WHERE bString = 'otherthing'), 0; ``` However, keep in mind that if you have more than one value returned by the sub-select, it will not work. This will be safer to use: ``` INSERT INTO table0 (integerVar1, integerVar2, booleanVar1) SELECT (SELECT int1 FROM table1 WHERE aString = 'something' LIMIT 1), (SELECT int2 FROM table2 WHERE bString = 'otherthing' LIMIT 1), 0; ```
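The scalar-subquery INSERT can be tried on SQLite from Python; the lookup values 41 and 7 are invented for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table0 (integerVar1 INTEGER, integerVar2 INTEGER, booleanVar1 INTEGER);
    CREATE TABLE table1 (int1 INTEGER, aString TEXT);
    CREATE TABLE table2 (int2 INTEGER, bString TEXT);
    INSERT INTO table1 VALUES (41, 'something'), (99, 'other');
    INSERT INTO table2 VALUES (7, 'otherthing');
""")

# Each parenthesized SELECT is a scalar subquery; LIMIT 1 guards
# against it returning more than one row.
con.execute("""
    INSERT INTO table0 (integerVar1, integerVar2, booleanVar1)
    SELECT (SELECT int1 FROM table1 WHERE aString = 'something' LIMIT 1),
           (SELECT int2 FROM table2 WHERE bString = 'otherthing' LIMIT 1),
           0
""")
row = con.execute("SELECT * FROM table0").fetchone()
```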
The following would work for you as long as the sub queries only returned one row ([**SQL Fiddle**](http://sqlfiddle.com/#!2/18210/1/0)): ``` INSERT INTO table0 (integerVar1, integerVar2, booleanVar1) SELECT (SELECT int1 FROM table1 WHERE aString = "something") AS integerVar1, (SELECT int2 FROM table2 WHERE bString = "otherthing") AS integerVar2, 0 ``` If they have the potential of returning more than one row, then you should narrow down the results via a more detailed where clause or using a limit clause.
how to select more than one foreign key in a mysql query
[ "", "mysql", "sql", "insert", "foreign-keys", "subquery", "" ]
I have table like this: ``` ********************************** * row * field * item * content * ********************************** * 1 * 231 * 10 * A * * 2 * 232 * 10 * C * * 3 * 231 * 11 * A * * 4 * 232 * 11 * B * ********************************** ``` I would like to SELECT DISTINCT only the item for which there are both: field=231 & content=A AND field=232 & content=B (item for which exists both those rows with those values). So, in this case result should be 11. If I put WHERE clause like this: ``` where (field=231 and content=A) OR (field=232 and content=B) ``` the result will be both 10 and 11 because first row comply with the condition inside the first parenthesis. If I put 'AND' instead of 'OR' than I get nothing back because WHERE clause is tested only at one row and there is no row that meets such condition. How to construct the WHERE clause that gives back only the item 11?
Group by the `item` and take only those having your conditions ``` select item from your_table group by item having sum(case when field=231 and content='A' then 1 else 0 end) > 0 and sum(case when field=232 and content='B' then 1 else 0 end) > 0 ```
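The conditional-sum HAVING trick works the same way in SQLite, where a comparison already yields 0/1 so the CASE can even be dropped; a sketch from Python with the question's rows (the row-number column is renamed `rowno` for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (rowno INTEGER, field INTEGER, item INTEGER, content TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 231, 10, 'A'), (2, 232, 10, 'C'),
    (3, 231, 11, 'A'), (4, 232, 11, 'B'),
])

# Each SUM counts how many rows of the group satisfy one required pair;
# the item qualifies only when both counts are positive.
items = [r[0] for r in con.execute("""
    SELECT item FROM t
    GROUP BY item
    HAVING SUM(field = 231 AND content = 'A') > 0
       AND SUM(field = 232 AND content = 'B') > 0
""")]
```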
I think the most general way to approach this type of problem is with `group by` and `having`:

```
select item
from your_table t
group by item
having sum(case when field = 231 and content = 'A' then 1 else 0 end) > 0 and
       sum(case when field = 232 and content = 'B' then 1 else 0 end) > 0;
```

Each condition in the `having` clause checks whether any row in the group matches the corresponding field/content pair; the `> 0` says that at least one such row exists.
SQL query - how to construct a multirow dependent where clause
[ "", "sql", "where-clause", "" ]
I have a query that pulls in data for the current month by default. I then have a "Filter" option on my page where users can enter some data to filter the content by and re-run the query. I have my select statement working for the current month; however, I am not quite sure how to implement the logic for filter fields that are left empty.

Example: If the user fills in the start and end date in the filter options, the query needs to use those dates instead of the default current month/year. The tricky part for me is the department and category. If there is a value in the filter, it needs to look up submissions just for those values (`category=3`, `department=5`). However, when those are empty in the filter settings, it needs to ignore the category and department and return the records regardless of their values. Here is my current SP:

```
IF (@action = 'filter')
BEGIN
    SELECT A.[submissionID],
           A.[subEmpID],
           A.[nomineeEmpID],
           CONVERT (VARCHAR (10), A.[submissionDate], 101) AS submissionDate,
           A.[situation],
           A.[task],
           A.[action],
           A.[result],
           A.[timestamp],
           A.[statusID],
           A.[approver],
           A.[approvalDate],
           B.[FirstName] + ' ' + B.[LastName] AS nomineeName,
           B.[ntid] AS nomineeNTID,
           B.[qid] AS nomineeQID,
           C.[FirstName] + ' ' + C.[LastName] AS submitName,
           C.[ntid] AS submitNTID,
           D.[categoryName]
    FROM   empowermentSubmissions AS A
           INNER JOIN empTable AS B ON A.[nomineeEmpID] = B.[empID]
           INNER JOIN empTable AS C ON A.[subEmpID] = C.[empID]
           INNER JOIN empowermentCategories AS D ON A.[categoryID] = D.[catID]
    WHERE  DATEPART(m, A.[submissionDate]) = MONTH(getdate())
           AND DATEPART(yyyy, A.[submissionDate]) = YEAR(getdate())
    FOR XML PATH ('data'), TYPE, ELEMENTS, ROOT ('root');
END
```

![enter image description here](https://i.stack.imgur.com/q26NN.png)
Looking at *"If there is a value in the filter, it needs to look up submissions just on those values category=3 department=5. However, when those are empty on the filter settings, it needs to ignore the category and department and get me the record regardless of what the value is."*

So let's say you have the filter value in a parameter called `@Filter`; you can do something like below:

```
Where (@Filter is null OR (@Filter is not null and category=3 and department=5))
```

**Update**

I didn't notice you have two parameters:

```
Where (@category is null or (category = @category))
and (@department is null or (department = @department))
```
If you need the query hardcoded in sql try: ``` where (category = @category or @category is null) ``` and keep that format for all the variables. If a category is provided, then it filters on the provided value. Otherwise all the rows are returned since the parameter is null. If you can dynamically build the sql query, either in code or in sql itself, use an if statement. ``` if(@category is not null) begin set @sqlQuery = @sqlQuery + 'category = @category' end ```
TSQL Optional Where Clause
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I am writing a T-SQL stored procedure that conditionally adds a record to a table only if the number of similar records is below a certain threshold, 10 in the example below. The problem is this will be run from a web application, so it will run on multiple threads, and I need to ensure that the table never has more than 10 similar records. The basic gist of the procedure is: ``` BEGIN DECLARE @c INT SELECT @c = count(*) FROM foo WHERE bar = @a_param IF @c < 10 THEN INSERT INTO foo (bar) VALUES (@a_param) END IF END ``` I think I could solve any potential concurrency problems by replacing the select statement with: ``` SELECT @c = count(*) WITH (TABLOCKX, HOLDLOCK) ``` But I am curious if there any methods other than lock hints for managing concurrency problems in T-SQL
One option would be to use the [sp\_getapplock](http://msdn.microsoft.com/en-us/library/ms189823.aspx) system stored procedure. You can place your critical-section logic in a transaction and use the built-in locking of SQL Server to ensure synchronized access. Example:

```
CREATE PROC MyCriticalWork(@MyParam INT)
AS
DECLARE @LockRequestResult INT
SET @LockRequestResult=0

DECLARE @MyTimeoutMilliseconds INT
SET @MyTimeoutMilliseconds=5000 -- wait at most five seconds, then time out

BEGIN TRAN
    EXEC @LockRequestResult=SP_GETAPPLOCK 'MyCriticalWork','Exclusive','Transaction',@MyTimeoutMilliseconds

    IF(@LockRequestResult>=0)BEGIN

        /*
            DO YOUR CRITICAL READS AND WRITES HERE
        */

        --Release the lock
        COMMIT TRAN
    END ELSE
        ROLLBACK TRAN
```
Use `SERIALIZABLE`. By definition it provides you the illusion that your transaction is the only transaction running. Be aware that this might result in blocking and deadlocking. In fact this SQL code is a classic candidate for deadlocking: Two transactions might first read a set of rows, then both will try to modify that set of rows. Locking hints are the classic way of solving that problem. Retry also works.
Conditionally inserting records into a table in multithreaded environment based on a count
[ "", "sql", "sql-server", "multithreading", "t-sql", "" ]
Please take a look at this **[Fiddle Example](http://sqlfiddle.com/#!2/420819/21/0)** I want to convert each field into a row from this table: ``` CREATE TABLE product (`ID` int, `name` varchar(1), `category` varchar(11), `price` int,`year`int) ; INSERT INTO product (`ID`, `name`, `category`, `price`,`year`) VALUES (1, 'A', 'Outdoor', 25,2010), (2, 'A', 'Doll', 34,2009), (3, 'C', 'Doll', 25,2008), (4, 'D', 'Outdoor', 20,2010), (5, 'E', 'Brainteaser', 22,2010), (6, 'E', 'Brainteaser', 22,2009), (7, 'G', 'Brainteaser', 30,2009), (8, 'G', 'Brainteaser', 30,2009) ; ``` Here's the output I'm trying to get: ``` field value name A,C,D,E,G category Brainteaser,Doll,Outdoor price 20,22,25,30,34 year 2008,2009,2010 ``` I read a thread about pivoting table with `UNION` and `MAX` but I was lost at using `MAX` with `GROUP_CONCAT` ``` SELECT MAX(CASE WHEN ... GROUP_CONCAT(DISTINCT (value) SEPARATOR '|')) as value from( select id,name value, 'name' field from product union all select id,category value, 'category' field from product union all select id,price value, 'price' field from product union all select id,year value, 'year' field from product ) GROUP BY field order by value ``` Can anyone show me how to get that output?
This will give you the expected output:

```
SELECT 'name' AS `field`, GROUP_CONCAT(DISTINCT `name` ORDER BY `name`) AS `value` FROM product
UNION ALL
SELECT 'category' AS `field`, GROUP_CONCAT(DISTINCT `category` ORDER BY `category`) AS `value` FROM product
UNION ALL
SELECT 'price' AS `field`, GROUP_CONCAT(DISTINCT `price` ORDER BY `price`) AS `value` FROM product
UNION ALL
SELECT 'year' AS `field`, GROUP_CONCAT(DISTINCT `year` ORDER BY `year`) AS `value` FROM product
```

Added `ORDER BY` because it looks like you need sorted output.
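A scaled-down check can be run on SQLite from Python. SQLite's `group_concat(DISTINCT ...)` historically accepts neither `ORDER BY` nor a custom separator, so the sketch only checks which values end up in each concatenated row, not their order:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (ID INTEGER, name TEXT, category TEXT, price INTEGER, year INTEGER)")
con.executemany("INSERT INTO product VALUES (?, ?, ?, ?, ?)", [
    (1, 'A', 'Outdoor', 25, 2010), (2, 'A', 'Doll', 34, 2009),
    (3, 'C', 'Doll', 25, 2008), (4, 'D', 'Outdoor', 20, 2010),
    (5, 'E', 'Brainteaser', 22, 2010), (6, 'E', 'Brainteaser', 22, 2009),
    (7, 'G', 'Brainteaser', 30, 2009), (8, 'G', 'Brainteaser', 30, 2009),
])

# One UNION ALL branch per column, each collapsed to a single row.
rows = con.execute("""
    SELECT 'name' AS field, group_concat(DISTINCT name) AS value FROM product
    UNION ALL
    SELECT 'category', group_concat(DISTINCT category) FROM product
    UNION ALL
    SELECT 'year', group_concat(DISTINCT year) FROM product
""").fetchall()
result = {field: set(str(value).split(',')) for field, value in rows}
```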
Something like this? Using UNION ALL for better performance and in case there are any duplicates.

```
SELECT 'name' field, group_concat(DISTINCT name ORDER BY name SEPARATOR '|') value
FROM product
UNION ALL
SELECT 'category' field, group_concat(DISTINCT category ORDER BY category SEPARATOR '|') value
FROM product
UNION ALL
SELECT 'price' field, group_concat(DISTINCT price ORDER BY price SEPARATOR '|') value
FROM product
UNION ALL
SELECT 'year' field, group_concat(DISTINCT year ORDER BY year SEPARATOR '|') value
FROM product;
```

---

**EDIT:** If you would like to do this with just one query, you can achieve it this way:

```
SELECT @uName := group_concat(DISTINCT name ORDER BY name SEPARATOR '|'),
       @uCat := group_concat(DISTINCT category ORDER BY category SEPARATOR '|') uCat,
       @uPrice := group_concat(DISTINCT price ORDER BY price SEPARATOR '|') uPrice,
       @uYear := group_concat(DISTINCT year ORDER BY year SEPARATOR '|') uYear
FROM product;

SELECT 'name' field, @uName value
UNION ALL
SELECT 'category' field, @uCat value
UNION ALL
SELECT 'price' field, @uPrice value
UNION ALL
SELECT 'year' field, @uYear value;
```

**NOTE:** you can do `ORDER BY` inside the `GROUP_CONCAT`.
Converting column into row using "|" as separators
[ "", "mysql", "sql", "max", "group-concat", "" ]
I have a `Log` model with following columns: ``` ["id", "username", "event", "parameters", "extras", "created_at", "updated_at"] ``` I have indexes `created_at` and `username`. Now, I would like to get the first log for each `username` ordered by `created_at`. One way to do this is to run the following query for each username: ``` log = Log.where("username = :username", username: username).order(:created_at).first ``` But this obviously queries the database a lot of times (equal to the number of usernames). Is there some way to do only one database query?
Another case for `DISTINCT ON`: ``` SELECT DISTINCT ON (username) * FROM log ORDER BY username, created_at; ``` Returns *the whole row* for the "first" entry per `username`. Details: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) Similar answer for Ruby / AR / Postgres: * [Display latest messages from messages table, group by user](https://stackoverflow.com/questions/23728320/display-latest-messages-from-messages-table-group-by-user/23728690#23728690) How to execute raw SQL: * [Table join sql to rails active record query](https://stackoverflow.com/questions/10633412/table-join-sql-to-rails-active-record-query/10639918#10639918) This Ruby syntax should work: ``` Log.select("DISTINCT ON (username) *").order(:username, :created_at) ```
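`DISTINCT ON` is PostgreSQL-specific; as a cross-check of the same greatest-n-per-group result, here is a portable correlated-subquery version sketched on SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (id INTEGER, username TEXT, created_at TEXT)")
con.executemany("INSERT INTO log VALUES (?, ?, ?)", [
    (1, 'alice', '2014-01-02'),
    (2, 'alice', '2014-01-01'),
    (3, 'bob',   '2014-03-01'),
])

# Keep the row whose created_at equals the user's earliest created_at.
rows = con.execute("""
    SELECT id, username, created_at FROM log l
    WHERE created_at = (SELECT MIN(created_at) FROM log
                        WHERE username = l.username)
    ORDER BY username
""").fetchall()
```

Note that ties on `created_at` would return multiple rows per user here, whereas `DISTINCT ON` always picks exactly one.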
I think a `group by` clause would do the trick:

```
Log.group(:username).order(:created_at)
```

This will give you a query like this:

```
SELECT `logs`.* from `logs` GROUP BY username ORDER BY `logs`.`created_at` ASC
```

which returns one row per `username`. (Note: which row MySQL picks for the non-aggregated columns is indeterminate, so this relies on nonstandard MySQL behavior and is not guaranteed to return the first record.)
Get a list of first record for each group
[ "", "sql", "ruby-on-rails", "postgresql", "activerecord", "greatest-n-per-group", "" ]
I'm trying to set a column in the fabcon table only if the original column is null. This is the code I've already tried. ``` UPDATE dbo.fabcon SET ext = COALESCE(ext, ( SELECT CASE WHEN [<3] IS NOT NULL THEN '3' END AS extent FROM dbo.spreadsheetData )) ``` This is the error I'm getting: Sub-query returned more than 1 value. This is not permitted when the sub-query follows `=, !=, <, <= , >, >=` or when the sub-query is used as an expression. Can anyone see where I've gone wrong? Thanks. :) EDIT: the two tables fabcon and spreadsheetData are linked by a column called main1 EDIT2: I've updated the query to this: ``` UPDATE dbo.fabcon SET ext = (SELECT CASE WHEN [<3] IS NOT NULL THEN '3' END FROM dbo.spreadsheetData ssd WHERE ssd.id = fabcon.id ) WHERE ext IS NULL; ``` However, its still failing with the same error.
You need a link between the tables `fabcon` and `spreadsheetData`. Assuming it is called `id`:

```
UPDATE dbo.fabcon
SET ext = (SELECT CASE WHEN [<3] IS NOT NULL THEN '3' END
           FROM dbo.spreadsheetData ssd
           WHERE ssd.id = fabcon.id
          )
WHERE ext IS NULL;
```

Note that I removed the `coalesce()` and replaced it with a `where` clause for the update. This prevents the query from updating rows unnecessarily (with unchanged values).
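The correlated-UPDATE pattern also runs on SQLite, which happens to accept the `[<3]` bracket-quoted column name, so it can be checked from Python with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fabcon (id INTEGER, ext TEXT);
    CREATE TABLE spreadsheetData (id INTEGER, [<3] INTEGER);
    INSERT INTO fabcon VALUES (1, NULL), (2, 'keep');
    INSERT INTO spreadsheetData VALUES (1, 7), (2, 7);
""")

# Only rows whose ext is still NULL are touched by the UPDATE.
con.execute("""
    UPDATE fabcon
    SET ext = (SELECT CASE WHEN [<3] IS NOT NULL THEN '3' END
               FROM spreadsheetData ssd
               WHERE ssd.id = fabcon.id)
    WHERE ext IS NULL
""")
rows = con.execute("SELECT id, ext FROM fabcon ORDER BY id").fetchall()
```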
``` DECLARE @PAth INT Select @path = COALESCE(ext,'')+[<3]+';' FROM dbo.spreadsheetData UPDATE dbo.fabcon SET ext = CASE WHEN @PAth IS NOT NULL THEN '3' ELSE '' END AS Extent FROM dbo.fabcon f WHERE f.id = @path ``` may be this works i think
SQL Update when not null
[ "", "sql", "sql-server", "" ]
I have a SQL Server database where I would like to take one main table that has columns that contain IDs and join? them with 3 other tables where those IDs match actual names. Example Table = slips looks like this: ``` ID | UserID | ClientID | TimeID --------------------------------- 10 | 35 | 48 | 27 ``` I have 3 other tables called `Usernames, Clientnames, Timeactnames` Each one of those tables looks like this: Usernames ``` ID | Nickname1 | ---------------- 35 | Shawn | ``` Clientnames ``` ID | Nickname1 | ---------------- 48 | Roger | ``` Timeactnames ``` ID | Nickname1 | ---------------- 27 | Filing | ``` I'd like to have a new query result that shows the slip table results like this: ``` ID | UserID | ClientID | TimeID --------------------------------- 10 | Shawn | Roger | Filing ``` Is this possible? I have tried Joins and Unions but I never get that result I get some massive huge table combination of results: ``` SELECT tsslips.*, ClientData.*, UserData.*, TimeActData.* FROM slips JOIN ClientData ON ClientData.ID = slips.ClientID JOIN TSUserData ON UserData.ID = slips.UserID JOIN TimeActData ON TimeActData.ID = slips.TimeActID ``` And all I got was a really huge long table that had all of the info but not a new merged table like I wanted. I then tried this: ``` SELECT * FROM ( SELECT UserID, ClientID, TimeActID FROM Slips UNION ALL SELECT Nickname1 FROM ClientData UNION ALL SELECT Nickname1 FROM UserData UNION ALL SELECT Nickname1 FROM TimeActData ) all_tables WHERE all_tables.ClientID = 'something' ``` But I don't think I formatted or used that one correctly since I got an error that said: > Msg 205, Level 16, State 1, Line 1 All queries combined using a UNION, INTERSECT or EXCEPT operator must have an equal number of expressions in their target lists. I'm trying to get query result so that I can use the right click save as to CSV option in SQL Server 2008
``` SELECT s.ID, u.Nickname1 AS UserID, c.Nickname1 AS ClientID, t.Nickname1 AS TimeID FROM Slips s LEFT JOIN Usernames u ON u.ID = s.UserID LEFT JOIN Clientnames c ON c.ID = s.ClientID LEFT JOIN Timeactnames t ON t.ID = s.TimeID ``` Depending on your data, you may want INNER JOINs instead of LEFT JOINs.
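[Addendum, not part of the original answer] For anyone who wants to try this locally, here is the same lookup-join idea run against the question's sample data via Python's sqlite3 (the pattern is engine-independent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Slips (ID INTEGER, UserID INTEGER, ClientID INTEGER, TimeID INTEGER);
CREATE TABLE Usernames (ID INTEGER, Nickname1 TEXT);
CREATE TABLE Clientnames (ID INTEGER, Nickname1 TEXT);
CREATE TABLE Timeactnames (ID INTEGER, Nickname1 TEXT);
INSERT INTO Slips VALUES (10, 35, 48, 27);
INSERT INTO Usernames VALUES (35, 'Shawn');
INSERT INTO Clientnames VALUES (48, 'Roger');
INSERT INTO Timeactnames VALUES (27, 'Filing');
""")

# Each LEFT JOIN swaps one id column for its display name.
rows = conn.execute("""
    SELECT s.ID, u.Nickname1, c.Nickname1, t.Nickname1
    FROM Slips s
    LEFT JOIN Usernames    u ON u.ID = s.UserID
    LEFT JOIN Clientnames  c ON c.ID = s.ClientID
    LEFT JOIN Timeactnames t ON t.ID = s.TimeID
""").fetchall()

print(rows)
```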
You can try this query ``` SELECT Slips.ID,Usernames.Nickname1 AS UnserID,Clientnames ,Nickname1 AS ClientID ,Timeactnames.Nickname1 AS TimeId FROM Slips LEFT JOIN Usernames ON Slips.UserId = Usernames.ID LEFT JOIN Clientnames ON Slips.ClientId = Clientnames.ID LEFT JOIN Timeactnames ON Slips.TimeId = Timeactnames.ID ```
SQL create query based on data from multiple tables
[ "", "sql", "sql-server-2008", "join", "union", "" ]
I want to use ORDER BY in every query of a UNION ALL, but I can't figure out the right syntax. This is what I want: ``` ( SELECT id, user_id, other_id, name FROM tablename WHERE user_id = 123 AND user_id IN (...) ORDER BY name ) UNION ALL ( SELECT id, user_id, other_id, name FROM tablename WHERE user_id = 456 AND user_id NOT IN (...) ORDER BY name ) ``` EDIT: Just to be clear: I need two ordered lists like this, not one: 1 2 3 1 2 3 4 5 Thank you very much!
Something like this should work in MySQL: ``` SELECT a.* FROM ( SELECT ... FROM ... ORDER BY ... ) a UNION ALL SELECT b.* FROM ( SELECT ... FROM ... ORDER BY ... ) b ``` to return rows in an order we'd like them returned. i.e. MySQL seems to honor the `ORDER BY` clauses inside the inline views. But, without an **`ORDER BY`** clause on the outermost query, the order that the rows are returned is *not* guaranteed. If we need the rows returned in a particular sequence, we can include an `ORDER BY` on the outermost query. In a lot of use cases, we can just use an `ORDER BY` on the outermost query to satisfy the results. But when we have a use case where we need all the rows from the first query returned before all the rows from the second query, one option is to include an extra discriminator column in each of the queries. For example, add **`,'a' AS src`** in the first query, **`,'b' AS src`** to the second query. Then the outermost query could include **`ORDER BY src, name`**, to guarantee the sequence of the results. --- **FOLLOWUP** In your original query, the `ORDER BY` in your queries is discarded by the optimizer; since there is no `ORDER BY` applied to the outer query, MySQL is free to return the rows in whatever order it wants. The "trick" in query in my answer (above) is dependent on behavior that may be specific to some versions of MySQL. 
Test case: populate tables ``` CREATE TABLE foo2 (id INT PRIMARY KEY, role VARCHAR(20)) ENGINE=InnoDB; CREATE TABLE foo3 (id INT PRIMARY KEY, role VARCHAR(20)) ENGINE=InnoDB; INSERT INTO foo2 (id, role) VALUES (1,'sam'),(2,'frodo'),(3,'aragorn'),(4,'pippin'),(5,'gandalf'); INSERT INTO foo3 (id, role) VALUES (1,'gimli'),(2,'boromir'),(3,'elron'),(4,'merry'),(5,'legolas'); ``` query ``` SELECT a.* FROM ( SELECT s.id, s.role FROM foo2 s ORDER BY s.role ) a UNION ALL SELECT b.* FROM ( SELECT t.id, t.role FROM foo3 t ORDER BY t.role ) b ``` resultset returned ``` id role ------ --------- 3 aragorn 2 frodo 5 gandalf 4 pippin 1 sam 2 boromir 3 elron 1 gimli 5 legolas 4 merry ``` The rows from `foo2` are returned "in order", followed by the rows from `foo3`, again, "in order". Note (again) that this behavior is *NOT* guaranteed. (The behavior we observer is a side effect of how MySQL processes inline views (derived tables). This behavior may be different in versions after 5.5.) If you need the rows returned in a particular order, then specify an **`ORDER BY`** clause for the outermost query. And that ordering will apply to the *entire* resultset. As I mentioned earlier, if I needed the rows from the first query first, followed by the second query, I would include a "discriminator" column in each query, and then include the "discriminator" column in the ORDER BY clause. I would also do away with the inline views, and do something like this: ``` SELECT s.id, s.role, 's' AS src FROM foo2 s UNION ALL SELECT t.id, t.role, 't' AS src FROM foo3 t ORDER BY src, role ```
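[Addendum, not part of the original answer] The discriminator-column technique from the end of the answer is easy to demonstrate; here is a small runnable version using Python's sqlite3, with the tables trimmed to two rows each:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo2 (id INTEGER, role TEXT);
CREATE TABLE foo3 (id INTEGER, role TEXT);
INSERT INTO foo2 VALUES (1, 'sam'), (2, 'frodo');
INSERT INTO foo3 VALUES (1, 'gimli'), (2, 'boromir');
""")

# The src column guarantees every foo2 row sorts before every foo3 row,
# and each block is ordered by role within itself.
rows = conn.execute("""
    SELECT id, role, 's' AS src FROM foo2
    UNION ALL
    SELECT id, role, 't' AS src FROM foo3
    ORDER BY src, role
""").fetchall()

print([r[1] for r in rows])
```

Unlike relying on inline-view side effects, this ordering is guaranteed by the SQL standard.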
You just use one ORDER BY at the very end. The Union turns two selects into one logical select. The order-by applies to the entire set, not to each part. Don't use any parens either. Just: ``` SELECT 1 as Origin, blah blah FROM foo WHERE x UNION ALL SELECT 2 as Origin, blah blah FROM foo WHERE y ORDER BY Origin, z ```
How to use ORDER BY inside UNION
[ "", "mysql", "sql", "union", "" ]
I have a calculated field in my query that looks like this: ``` (SELECT avg(RateAmount)/1.15 FROM ReservationStayDate f where f.ReservationStayID = a.ReservationStayID) AS 'Rate Amount Excl.VAT' ``` In the same query, I need to exclude all data where RatePlan begins with 'CO' AND the above result = 0 In other words, my WHERE clause need to look something like this: ``` AND NOT (d.rateplan like 'CO%' and (avg(RateAmount)/1.15)= 0) ``` I'm getting this error message: Invalid column name 'RateAmount' What's wrong with the syntax here? HERE IS THE FULL QUERY: ``` SELECT a.ReservationStayID, b.PropertyCode AS'Property', b.ReservationStatus AS 'Status', d.rsl_rateplan AS 'Rate Plan Code', b.MarketSegmentCode AS 'Market Segment', e.TravelAgencyTypeCode AS 'Source of Business', a.ArrivalDate AS 'Date of Arrival', a.DepartureDate AS 'Date of Departure', c.AdultCount AS 'Adult', c.ChildCount AS 'Child', b.GuestCount AS 'Guest', d.rsl_nationality AS 'Nationality', c.PMSConfirmationNumber, c.CurrencyCode As 'Currency', e.Name AS 'Tour Operator', e.CountryCode AS 'Market Code', g.CountryGroup AS 'Market', (SELECT avg(RateAmount)/1.15 FROM ReservationStayDate f where f.ReservationStayID = a.ReservationStayID) AS 'Rate Amount Excl.VAT' FROM GuestNameInfo a JOIN GuestStaySummary b ON a.ReservationStayID = b.ReservationStayID LEFT JOIN ReservationStay c ON c.ReservationStayID = b.ReservationStayID LEFT JOIN P5RESERVATIONLIST d ON d.rsl_code = b.ReservationStayID LEFT JOIN TravelAgency e ON e.TravelAgencyID = c.TAProfileID LEFT JOIN Market g ON e.CountryCode = g.CountryCode WHERE a.PrimaryGuest='+' and d.rsl_primaryguest='+' AND A.ARRIVALDATE <= '2013-09-30' AND A.DEPARTUREDATE > '2013-09-01' ORDER BY a.ReservationStayID ```
Without your full query I can't give an exact solution, but in SQL Server you can use [APPLY](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx), which works like a subselect, but allows you to reuse the column created, and also to create multiple columns. e.g. ``` SELECT a.SomeField, r.RateAmountExcVat FROM SomeTable AS a INNER JOIN SomeOtherTable AS d ON d.SomeField = a.SomeField --... Whatever else your query does... OUTER APPLY ( SELECT RateAmountExcVat = AVG(RateAmount)/1.15 FROM ReservationStayDate AS f WHERE f.ReservationStayID = a.ReservationStayID ) AS r WHERE NOT (d.RatePlan LIKE 'CO%' AND r.RateAmountExcVat = 0); ``` However, you don't necessarily need to use a correlated subquery; a normal subquery would work exactly the same. So rather than doing the aggregation in your select clause, you can move the aggregation to a subquery, meaning you can reference the result of the aggregation in the outer query. ``` SELECT a.SomeField, r.RateAmountExcVat FROM SomeTable AS a INNER JOIN SomeOtherTable AS d ON d.SomeField = a.SomeField --... Whatever else your query does... LEFT JOIN ( SELECT ReservationStayID, RateAmountExcVat = AVG(RateAmount)/1.15 FROM ReservationStayDate AS f GROUP BY ReservationStayID ) AS r ON r.ReservationStayID = a.ReservationStayID WHERE NOT (d.RatePlan LIKE 'CO%' AND r.RateAmountExcVat = 0); ``` --- **EDIT** Your full query would be something like this.
``` SELECT a.ReservationStayID, b.PropertyCode AS [Property], b.ReservationStatus AS [Status], d.rsl_rateplan AS [Rate Plan Code], b.MarketSegmentCode AS [Market Segment], e.TravelAgencyTypeCode AS [Source of Business], a.ArrivalDate AS [Date of Arrival], a.DepartureDate AS [Date of Departure], c.AdultCount AS [Adult], c.ChildCount AS [Child], b.GuestCount AS [Guest], d.rsl_nationality AS [Nationality], c.PMSConfirmationNumber, c.CurrencyCode As [Currency], e.Name AS [Tour Operator], e.CountryCode AS [Market Code], g.CountryGroup AS [Market], f.RateAmountExclVAT AS [Rate Amount Excl.VAT] FROM GuestNameInfo a JOIN GuestStaySummary b ON a.ReservationStayID = b.ReservationStayID LEFT JOIN ReservationStay c ON c.ReservationStayID = b.ReservationStayID LEFT JOIN P5RESERVATIONLIST d ON d.rsl_code = b.ReservationStayID LEFT JOIN TravelAgency e ON e.TravelAgencyID = c.TAProfileID LEFT JOIN Market g ON e.CountryCode = g.CountryCode LEFT JOIN ( SELECT ReservationStayID, RateAmountExclVAT = AVG(RateAmount) / 1.15 FROM ReservationStayDate f GROUP BY f.ReservationStayID ) AS f ON f.ReservationStayID = a.ReservationStayID WHERE a.PrimaryGuest='+' AND d.rsl_primaryguest='+' AND A.ARRIVALDATE <= '2013-09-30' AND A.DEPARTUREDATE > '2013-09-01' AND NOT (d.RatePlan like 'CO%' AND f.RateAmountExclVAT = 0) ORDER BY a.ReservationStayID ``` A couple of things worth noting is that I have removed you `AS 'alias'` syntax since using literals as column aliases is deprecated. [Further reading in [Bad Habits to Kick : Using AS instead of = for column aliases](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases) . I'd also recommend [using meaning table aliases](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-table-aliases-like-a-b-c-or-t1-t2-t3) rather than just a, b, c etc.
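[Addendum, not part of the original answer] The core trick (aggregate once in a derived table so the alias is reusable in the outer `SELECT` and `WHERE`) works in any engine. A tiny sketch via Python's sqlite3, with invented sample rates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ReservationStayDate (ReservationStayID INTEGER, RateAmount REAL);
INSERT INTO ReservationStayDate VALUES (1, 115.0), (1, 230.0), (2, 0.0);
""")

# The alias RateAmountExclVAT is computed once in the derived table,
# then filtered on in the outer WHERE (stay 2 averages to 0 and drops out).
rows = conn.execute("""
    SELECT r.ReservationStayID, r.RateAmountExclVAT
    FROM (
        SELECT ReservationStayID,
               AVG(RateAmount) / 1.15 AS RateAmountExclVAT
        FROM ReservationStayDate
        GROUP BY ReservationStayID
    ) AS r
    WHERE r.RateAmountExclVAT > 0
""").fetchall()

print(rows)
```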
Take your existing query and remove this bit of the WHERE clause: ``` and (avg(RateAmount)/1.15)= 0 ``` then rewrite it like this: ``` SELECT * FROM ( <your original query> ) a WHERE a.[Rate Amount Excl.VAT] != 0; ``` That isn't the prettiest way to do it, but without the full query...
How to write WHERE clause with one of the conditions being the result of a calculated field?
[ "", "sql", "sql-server", "sql-server-2012", "where-clause", "" ]
I am using Oracle databases. I have an sql table `PS_Z_STAGE_TEST_JE` that has three fields (`EMPLID`, `LAST_NAME`, `FIRST_NAME`). I am trying to do a `select` statement that will pull many EMPLIDs from sql table:`ps_vc_plan_mem` and insert them into the `EMPLID` column while leaving the other two fields (`LAST_NAME` and `FIRST_NAME`) `null`. Below is my SQL but it will say > Cannot insert null Value into `LAST_NAME` when I try to run it. ``` INSERT INTO sysadm.PS_Z_STAGE_TEST_JE (EMPLID) SELECT DISTINCT(emplid) FROM ps_vc_plan_mem WHERE vc_plan_id IN ('PNC-RS','PNC-SO','PNC-ESPP'); ```
The error message is exactly what it says it is. Your last\_name column must be defined as not null. Therefore, you can't insert a null into it. Since you didn't define what to insert into the column in your insert, it tries to insert null by default and fails. You must insert something into last name. I would suggest either a default string or an empty string if you can't get an actual last name to insert. ``` INSERT INTO sysadm.PS_Z_STAGE_TEST_JE (EMPLID, LAST_NAME) SELECT DISTINCT(emplid), 'N/A' FROM ps_vc_plan_mem WHERE vc_plan_id IN ('PNC-RS','PNC-SO','PNC-ESPP'); ``` Alternatively, you could alter your table so that last\_name is nullable.
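[Addendum, not part of the original answer] A minimal reproduction of the fix via Python's sqlite3 (table and plan names invented): the literal `'N/A'` satisfies the NOT NULL column, and `DISTINCT` still de-duplicates the ids:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stage (emplid INTEGER, last_name TEXT NOT NULL);
CREATE TABLE plan_mem (emplid INTEGER, plan_id TEXT);
INSERT INTO plan_mem VALUES (1, 'PNC-RS'), (1, 'PNC-SO'), (2, 'PNC-RS'), (3, 'OTHER');
""")

# Without the 'N/A' literal this INSERT would fail the NOT NULL constraint.
conn.execute("""
    INSERT INTO stage (emplid, last_name)
    SELECT DISTINCT emplid, 'N/A'
    FROM plan_mem
    WHERE plan_id IN ('PNC-RS', 'PNC-SO')
""")

rows = conn.execute("SELECT emplid, last_name FROM stage ORDER BY emplid").fetchall()
print(rows)
```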
The only obvious problem that I see with your query is the asterisk at the end: ``` INSERT INTO sysadm.PS_Z_STAGE_TEST_JE(EMPLID) SELECT DISTINCT emplid FROM ps_vc_plan_mem WHERE vc_plan_id IN ('PNC-RS', 'PNC-SO', 'PNC-ESPP'); ``` Note that `distinct` is not a function. It is a modifier on `select`, so the parentheses don't mean anything.
Inserting into One Field Select statement Results
[ "", "sql", "oracle", "" ]
I'm using `CASE` to categorize data in the table and count the rows, but the results aren't accurate (live demo [here](http://sqlfiddle.com/#!2/f1f44/8)): ``` select DATE(date) as day, count(*), count(distinct case when name = 'fruit' then 1 else 0 end) as fruits, count(distinct case when name = 'vege' then 1 else 0 end) as vege, count(distinct case when name = 'sweets' then 1 else 0 end) as sweets from food group by day with rollup ``` I'm not sure if the issue is with `CASE` or with the string matching `=`, because there are no 'sweets' and yet it still counts 1? Any pointers would be appreciated.
Your problem is that `COUNT` counts every result that is not `NULL`. In your case you are using: ``` COUNT(distinct case when name = 'sweets' then 1 else 0 end) ``` So, when the name is not `sweets`, it counts the `0`. Furthermore, since you are using `DISTINCT`, it counts just one or two values. You should either use `SUM` or remove the `DISTINCT` and the `ELSE 0`: ``` SELECT DATE(date) as day, COUNT(*), SUM(CASE WHEN name = 'fruit' THEN 1 ELSE 0 END) as fruits, SUM(CASE WHEN name = 'vege' THEN 1 ELSE 0 END) as vege, SUM(CASE WHEN name = 'sweets' THEN 1 ELSE 0 END) as sweets FROM food GROUP BY DAY WITH ROLLUP ``` Or: ``` SELECT DATE(date) as day, COUNT(*), COUNT(CASE WHEN name = 'fruit' THEN 1 ELSE NULL END) as fruits, COUNT(CASE WHEN name = 'vege' THEN 1 ELSE NULL END) as vege, COUNT(CASE WHEN name = 'sweets' THEN 1 ELSE NULL END) as sweets FROM food GROUP BY DAY WITH ROLLUP ``` [**Here is**](http://sqlfiddle.com/#!2/f1f44/10) a modified sqlfiddle.
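[Addendum, not part of the original answer] SQLite has no `WITH ROLLUP`, but the counting bug itself is easy to reproduce there from Python: `COUNT(DISTINCT CASE ...)` reports 1 even when no 'sweets' rows exist (it counts the distinct value 0), while `SUM(CASE ...)` correctly reports 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE food (name TEXT, date TEXT);
INSERT INTO food VALUES
  ('fruit', '2013-07-01'), ('fruit', '2013-07-01'),
  ('vege',  '2013-07-01'), ('fruit', '2013-07-02');
""")

# 'broken' counts the single distinct value 0; 'fixed' adds up the 1s.
rows = conn.execute("""
    SELECT date,
           COUNT(DISTINCT CASE WHEN name = 'sweets' THEN 1 ELSE 0 END) AS broken,
           SUM(CASE WHEN name = 'sweets' THEN 1 ELSE 0 END)            AS fixed
    FROM food
    GROUP BY date
    ORDER BY date
""").fetchall()

print(rows)
```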
You can't group by an alias. You have to group by the expression. ``` group by date(date) ```
SQL using CASE in count and group by
[ "", "mysql", "sql", "aggregate-functions", "" ]
I am a newbie to SQL. The problem I am facing is that I want to select a particular name **John** from the customer table, but I am storing John in a string called `var_name`. Now, is there any way that I can use this variable name in the select statement instead of writing John? Sorry, I forgot to explain the question in depth: I have a table which contains details of the customers, and customer name is one of the columns. When the user clicks on any particular name in the customer name column, that name gets stored in the variable called `var_name`, and I want that particular customer's customer ID; this is the issue which I didn't mention in the question above. So I want the value clicked in the column to be substituted into the SELECT statement. Your query will work if the user clicks on John; I want a general query for the above issue. Here is an example ``` SELECT var_name -- this is what I've used, is there any other way to accomplish this? FROM Customer Where Customer_ID = '1234'; ``` So please help me out to solve this issue. Thank you!
In case you are using SQL Server, you can actually declare your `var_name`: ``` DECLARE @var_name nvarchar(50) SET @var_name = (SELECT "something" FROM Customer WHERE Customer_ID = '1234') ``` From here, you can do whatever you want with `@var_name`.
Wrap a single-value select in brackets to use its value in another query: ``` select something from sometable where somecolumn = (SELECT name FROM Customer Where Customer_ID = '1234') ```
How can I use a variable inside a SELECT statement?
[ "", "sql", "sql-server", "select", "" ]
I'm looking for some advice on the approach I should take with a query. I have a table (EMP) which stores employee details and working hours for this year (40 hours per week). A further 2 tables store the primary and secondary offices employees belong to. Since employees can move between offices, these are stored with dates. I'm looking to return the number of working hours during the time the employee is in an office. If primary offices overlap with secondary offices for an employee, the hours should be split by the number of overlapping offices for the overlapping period only. I attach sample DDL below. ``` -- Employee Table with hours for year 2014 CREATE TABLE [dbo].[EMP]( [EMP_ID] [int] NOT NULL, [EMP_NAME] [varchar](255) NULL, [EMP_FYHOURS] [float] NULL, CONSTRAINT [PK_EMP] PRIMARY KEY CLUSTERED ( [EMP_ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY] ) ON [PRIMARY] GO -- Employees and their primary offices CREATE TABLE [dbo].[OFFICEPRIMARY]( [OFFICEPRIMARY_ID] [int] NOT NULL, [OFFICEPRIMARY_NAME] [varchar](255) NULL, [OFFICEPRIMARY_EMP_ID] [int] NOT NULL, [OFFICEPRIMARY_START] [datetime] NULL, [OFFICEPRIMARY_END] [datetime] NULL, CONSTRAINT [PK_OFFICEPRIMARY] PRIMARY KEY CLUSTERED ( [OFFICEPRIMARY_ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[OFFICEPRIMARY] WITH CHECK ADD CONSTRAINT [FK_OFFICEPRIMARY_FK1] FOREIGN KEY([OFFICEPRIMARY_EMP_ID]) REFERENCES [dbo].[EMP] ([EMP_ID]) ON DELETE CASCADE GO ALTER TABLE [dbo].[OFFICEPRIMARY] CHECK CONSTRAINT [FK_OFFICEPRIMARY_FK1] GO -- Employees and their secondary offices CREATE TABLE [dbo].[OFFICESECONDARY]( [OFFICESECONDARY_ID] [int] NOT NULL, [OFFICESECONDARY_NAME] [varchar](255) NULL, [OFFICESECONDARY_EMP_ID] [int] NOT NULL, 
[OFFICESECONDARY_START] [datetime] NULL, [OFFICESECONDARY_END] [datetime] NULL, CONSTRAINT [PK_OFFICESECONDARY] PRIMARY KEY CLUSTERED ( [OFFICESECONDARY_ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[OFFICESECONDARY] WITH CHECK ADD CONSTRAINT [FK_OFFICESECONDARY_FK1] FOREIGN KEY([OFFICESECONDARY_EMP_ID]) REFERENCES [dbo].[EMP] ([EMP_ID]) ON DELETE CASCADE GO ALTER TABLE [dbo].[OFFICESECONDARY] CHECK CONSTRAINT [FK_OFFICESECONDARY_FK1] GO -- Insert sample data INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_FYHOURS) VALUES (1, 'John Smith', 2080); INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_FYHOURS) VALUES (2, 'Jane Doe', 2080); GO INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (1, 'London', 1, '2014-01-01', '2014-05-31') INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (2, 'Berlin', 1, '2014-06-01', '2014-08-31') INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (3, 'New York', 1, '2014-09-01', '2014-12-31') INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (4, 'New York', 2, '2014-01-01', '2014-04-15') INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (5, 'Paris', 2, '2014-04-16', '2014-09-30') INSERT INTO OFFICEPRIMARY (OFFICEPRIMARY_ID, OFFICEPRIMARY_NAME, OFFICEPRIMARY_EMP_ID, OFFICEPRIMARY_START, OFFICEPRIMARY_END) VALUES (6, 'London', 2, '2014-10-01', '2014-12-31') GO INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, 
OFFICESECONDARY_END) VALUES (1, 'Paris', 1, '2014-01-01', '2014-03-31') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (2, 'Lyon', 1, '2014-04-01', '2014-05-15') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (3, 'Berlin', 1, '2014-05-16', '2014-09-30') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (4, 'Chicago', 1, '2014-10-01', '2015-02-22') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (5, 'Chicago', 2, '2013-11-21', '2014-04-10') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (6, 'Berlin', 2, '2014-04-11', '2014-09-16') INSERT INTO OFFICESECONDARY (OFFICESECONDARY_ID, OFFICESECONDARY_NAME, OFFICESECONDARY_EMP_ID, OFFICESECONDARY_START, OFFICESECONDARY_END) VALUES (7, 'Amsterdam', 2, '2014-09-17', '2015-03-31') GO ``` Thanks for the pointer. I adjusted your query so it presents a union of the primary and secondary office. All that remains is working out the hours for overlapping periods between offices. For example, John Smith, New York, 01/04/2014, 10/08/2014 John Smith, London, 01/08/2014, 31/12/2014 For the overlapping period between the offices which is 01/08/2014 to 10/08/2014, I would expect the hours to be split equally. If there were 3 overlapping offices, then it would be split 3-ways. 
``` select 'Primary' as Office, e.EMP_NAME, op.OFFICEPRIMARY_NAME, op.OFFICEPRIMARY_START, op.OFFICEPRIMARY_END, datediff(wk,OFFICEPRIMARY_START,OFFICEPRIMARY_END) * 40 as HoursWorkedPrimary from EMP e inner join OFFICEPRIMARY op on op.OFFICEPRIMARY_EMP_ID = e.EMP_ID union all select 'Secondary' as Office, e.EMP_NAME, os.OFFICESECONDARY_NAME, os.OFFICESECONDARY_START, os.OFFICESECONDARY_END, datediff(wk,OFFICESECONDARY_START,OFFICESECONDARY_END) * 40 as HoursWorkedSecondary from EMP e inner join OFFICESECONDARY os on os.OFFICESECONDARY_EMP_ID = e.EMP_ID order by e.EMP_NAME ```
If I understand correctly, the end result you want to see is the number of total hours worked per employee and office? I've come up with this: ``` -- generate date table declare @MinDate datetime, @MaxDate datetime SET @MinDate = (SELECT MIN(d) FROM (SELECT d = OFFICEPRIMARY_START FROM dbo.OFFICEPRIMARY UNION SELECT OFFICESECONDARY_START FROM dbo.OFFICESECONDARY) a) SET @MaxDate = (SELECT MAX(d) FROM (SELECT d = OFFICEPRIMARY_END FROM dbo.OFFICEPRIMARY UNION SELECT OFFICESECONDARY_END FROM dbo.OFFICESECONDARY) a) SELECT d = DATEADD(day, number, @MinDate) INTO #tmp_dates FROM (SELECT DISTINCT number FROM master.dbo.spt_values WHERE name IS NULL) n WHERE DATEADD(day, number, @MinDate) < @MaxDate ;WITH CTE AS ( SELECT d.d ,o.OfficeType ,o.OfficeID ,o.OfficeName ,o.EmpID ,EmpName = e.EMP_NAME ,HoursWorked = 8 / (COUNT(1) OVER (PARTITION BY EmpID, d)) FROM ( SELECT OfficeType = 1 ,OfficeID = op.OFFICEPRIMARY_ID ,OfficeName = op.OFFICEPRIMARY_NAME ,EmpID = op.OFFICEPRIMARY_EMP_ID ,StartDate = op.OFFICEPRIMARY_START ,EndDate = op.OFFICEPRIMARY_END FROM dbo.OFFICEPRIMARY op UNION SELECT OfficeType = 2 ,OfficeID = os.OFFICESECONDARY_ID ,OfficeName = os.OFFICESECONDARY_NAME ,EmpID = os.OFFICESECONDARY_EMP_ID ,StartDate = os.OFFICESECONDARY_START ,EndDate = os.OFFICESECONDARY_END FROM dbo.OFFICESECONDARY os ) o INNER JOIN dbo.EMP e ON e.EMP_ID = o.EmpID INNER JOIN #tmp_dates d ON o.StartDate<=d.d AND o.EndDate>=d.d ) SELECT EmpID ,EmpName ,OfficeType ,OfficeName ,TotalHoursWorked = SUM(HoursWorked) FROM CTE GROUP BY EmpID ,EmpName ,OfficeType ,OfficeID ,OfficeName ORDER BY EmpID ,OfficeName ``` I first generate a temp table with the dates between minimum date and maximum date. Then I union both office tables (why you have 2 tables anyway?) and I get a CTE that returns data on employee, date, office and number of hours worked in this office (8 divided by count of offices where employee has worked in on this day). 
Then I sum this data to get sum of hours grouped by employee and office. Maybe there is a simpler solution to this. This was the first solution that came to my mind.
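[Addendum, not part of the original answer] The "split the day's hours by the number of concurrent offices" step can also be sketched with a window function over the exploded per-day table. A hypothetical three-row slice via Python's sqlite3 (names and dates invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per (employee, day, office): a tiny slice of the exploded
# date table that the answer builds.
conn.executescript("""
CREATE TABLE work (emp TEXT, d TEXT, office TEXT);
INSERT INTO work VALUES
  ('John', '2014-08-01', 'New York'), ('John', '2014-08-01', 'London'),
  ('John', '2014-08-11', 'London');
""")

# 8 hours a day, split evenly across that day's offices, then summed.
rows = conn.execute("""
    SELECT office, SUM(h) FROM (
        SELECT office,
               8.0 / COUNT(*) OVER (PARTITION BY emp, d) AS h
        FROM work
    )
    GROUP BY office
    ORDER BY office
""").fetchall()

print(rows)
```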
This should give you a head start: ``` select datediff(wk,OFFICEPRIMARY_START,OFFICEPRIMARY_END) * 40 as HoursWorkedPrimary ,datediff(wk,OFFICESECONDARY_START,OFFICESECONDARY_END) * 40 as HoursWorkedSecondary ,EMP_NAME ,OFFICEPRIMARY_NAME,OFFICEPRIMARY_START,OFFICEPRIMARY_END ,OFFICESECONDARY_NAME,OFFICESECONDARY_START,OFFICESECONDARY_END from [EMP] inner join OFFICEPRIMARY as op on op.OFFICEPRIMARY_EMP_ID = EMP.EMP_ID inner join OFFICESECONDARY as os on os.OFFICESECONDARY_EMP_ID = EMP.EMP_ID ```
SQL Server - Query to split time by count (overlapping offices)
[ "", "sql", "sql-server", "database", "t-sql", "" ]
I have a SQL table with one varchar column. Some of the data it contains looks like: ``` sadkjlsakjd Physics Test 2 Test Test 1 P Test C Physics Test None dstestsad ``` Now, I need a query that gives the most relevant record first when I search with the 'Test' keyword. I am expecting: ``` Test Test 1 <Then other records where Test comes in between> ``` I have somehow achieved this with a temp table and an intersection, but I'm not at all happy with what I've written. I have a feeling that there should be something easy and fast. Please suggest. Thanks
Try this: ``` SELECT colName FROM tableName WHERE colName LIKE '%Test%' ORDER BY CASE WHEN colName LIKE 'Test%' THEN 1 ELSE 2 END, colName ```
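[Addendum, not part of the original answer] The same ordering trick checked against the question's sample strings via Python's sqlite3: prefix matches sort first (the CASE yields 1), everything else after, alphabetically within each bucket:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (name TEXT);
INSERT INTO items VALUES ('Physics Test'), ('Test'), ('Test 1'), ('P Test C');
""")

rows = conn.execute("""
    SELECT name FROM items
    WHERE name LIKE '%Test%'
    ORDER BY CASE WHEN name LIKE 'Test%' THEN 1 ELSE 2 END, name
""").fetchall()

print([r[0] for r in rows])
```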
May be something like this ``` SELECT * FROM Your_Table WHERE Your_Column LIKE '%Test%' ORDER BY CASE WHEN Your_Column LIKE 'Test%' THEN 0 ELSE 1 END,Your_Column ```
SQL Query to find relevant result
[ "", "sql", "sql-server-2008", "t-sql", "select", "sql-order-by", "" ]
I have a table where the table column as following date column data ``` Date ---------- 1900-01-01 1900-01-01 1900-01-01 2013-07-25 2012-07-25 2012-07-25 2013-07-25 2012-07-25 2013-07-25 ``` Can I write a Query to filter date column which is equal to 1900-01-01 or greater than 2013-1-1
> Can I write a Query to filter date column which is **equal to 1900-01-01 or greater than 2013-1-1** ? Yes, you can, it's pretty much *exactly* as you phrased the question. Something like this should do the trick: ``` select something from my_table where date = '1900-01-01' or date > '2013-01-01' ``` If your condition is more complex than that, you can parenthesise the clause to keep it self-contained (no precedence rules mucking up your overall condition): ``` select something from my_table where (date = '1900-01-01' or date > '2013-01-01') and blah blah blah ```
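[Addendum, not part of the original answer] With ISO-8601 dates this comparison works correctly even on plain text, which makes it trivial to sanity-check via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (d TEXT);
INSERT INTO t VALUES
  ('1900-01-01'), ('2012-07-25'), ('2013-07-25'), ('2013-01-01');
""")

# '2013-01-01' itself is excluded (strictly greater than), and
# '2012-07-25' matches neither condition.
rows = conn.execute("""
    SELECT d FROM t
    WHERE d = '1900-01-01' OR d > '2013-01-01'
    ORDER BY d
""").fetchall()

print([r[0] for r in rows])
```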
Try this, ``` select data from Table where date=' 1900-01-01' union all select data from Table where date >'2013-1-1' ```
Greater than a date and Equal to a another date
[ "", "sql", "sql-server-2008", "t-sql", "" ]
How do I implement boolean logic in the select statement while the query is running? ``` SELECT t.[Key] ,t.[Parent_Key] ,t.[Parent_Code] ,t.[Code] ,t.[Desc] ,t.[Point] ,[isChild] -- If Point > 2, then true, if Point == 1 Then false ,t.[By] ,t.[On] FROM [db].[stats] t WHERE t.[Parent_Key]= @tmpParameter ``` I want make some logic to determine [isChild] boolean value based on t.[Point]
``` SELECT t.[Key] ,t.[Parent_Key] ,t.[Parent_Code] ,t.[Code] ,t.[Desc] ,t.[Point] ,CASE WHEN t.[Point] > 2 THEN 1 ELSE CASE WHEN t.[Point] = 1 THEN 0 ELSE NULL END END AS [isChild] ,t.[By] ,t.[On] FROM [db].[stats] t WHERE t.[Parent_Key]= @tmpParameter ``` Be aware that when t.[Point] is neither greater than 2 nor equal to 1 (2 itself, for example), [isChild] will be NULL
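[Addendum, not part of the original answer] The nested CASE can also be written as a single multi-branch CASE, which behaves identically. A quick check via Python's sqlite3, showing the NULL gap for values matching neither branch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Point = 2 matches neither WHEN, so isChild comes back as NULL (None).
rows = conn.execute("""
    SELECT Point,
           CASE WHEN Point > 2 THEN 1
                WHEN Point = 1 THEN 0
           END AS isChild
    FROM (SELECT 1 AS Point UNION ALL SELECT 2 UNION ALL SELECT 3)
    ORDER BY Point
""").fetchall()

print(rows)
```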
Case is your friend... ``` SELECT Key, Parent_Key, Parent_Code, Code, Desc, point, case when point > 2 then 1 when point = 1 then 0 end isChild, [By], [On] FROM db.stats WHERE Parent_Key= @tmpParameter ```
Boolean Logic in Select Statement
[ "", "sql-server-2008", "sql", "" ]
I have a several small entities in my database that I represent as small table with two columns: id and name. Example of such entities: countries, continent. Should I create an enum type instead, whenever the name of those entities doesn't matter?
Hate to answer a question with a question, but it depends. How often do you expect the values to change, and how often do you release code? Enum types will require a code change. A pure database table will be much easier to change. Enum types are more convenient for coding. For infrequent releases, or if you often have new/deleted/changed values, use a database table. For static sets of values, or if you release code all the time, use an enum.
Enum is good when: * You want to limit the choice of possible values, e.g. days of the week, gender, status of a ticket (opened/closed/archived) * You want to reduce storage space (tuples are stored in just 1 byte each) * Reduce the number of disc accesses for joins Enum is bad when: * You are not sure what to put as the possible values * Possible values are liable to change (this requires admin rights to ALTER tables, which may require you to go offline)
When to use an enum or a small table in a relational database?
[ "", "sql", "database", "enums", "" ]
I have a table with content like this **tblEmployees** ``` employeeID employeeName ___________________________________ 1 Jeffrey L. JR Van Hoosear 2 DAVID GUNGNER MR 3 CATHLEEN E STADECKER MRS. 4 MARTIN W SCHIFFMILLER 5 JAY F MOLDOVANYI VI ``` and another table like this **tblPrefix** ``` prefixID Prefix _________________________ 1 JR 2 MR 3 MR / MRS 4 JR. 5 MRS. 6 I 7 II 8 III 9 IV 10 V 11 VI 12 VII ``` Now I would like to remove the prefixes (JR, JR., MR, MRS., ...) present in employeeName. I have written a function, and I pass the employee name to it as a parameter like this ``` SELECT * FROM fn_SplitName (@employeeName) (Table-Valued Function) ``` and I tried this ``` SELECT REPLACE(@employeeName,preFix,'') FROM tblPrefix WHERE @employeeName LIKE '% ' + preFix + ' %' ``` **Expected output** ``` employeeID employeeName ___________________________________ 1 Jeffrey L. Van Hoosear 2 DAVID GUNGNER 3 CATHLEEN E STADECKER 4 MARTIN W SCHIFFMILLER 5 JAY F MOLDOVANYI ``` Compare with the first table, tblEmployees
**Oracle Query:** ``` select employeeName, REPLACE(employeeName, PREFIX,'') from employee_table, prefix_table WHERE INSTR(employeeName, PREFIX) > 0 ``` In **SQL-Server**, I think it should be: ``` select employeeName, REPLACE(employeeName, PREFIX,'') from employee_table, prefix_table WHERE CHARINDEX(PREFIX,employeeName) > 0 ```
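[Addendum, not part of the original answer] The same `INSTR`/`REPLACE` idea can be tried via Python's sqlite3 (both functions exist there too). Note this matches the prefix anywhere in the string, not just at word boundaries, and a `trim()` cleans up the leftover space:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT);
CREATE TABLE prefix (p TEXT);
INSERT INTO emp VALUES ('DAVID GUNGNER MR'), ('MARTIN W SCHIFFMILLER');
INSERT INTO prefix VALUES ('MR'), ('JR');
""")

# instr() > 0 pairs each name with any prefix it contains;
# replace() strips it, trim() removes the stray trailing space.
rows = conn.execute("""
    SELECT trim(replace(e.name, p.p, ''))
    FROM emp e JOIN prefix p ON instr(e.name, p.p) > 0
""").fetchall()

print(rows)
```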
The following query selects employeeNames that start or end with a prefix. Then, the prefix is stripped off the employeeName using the `SUBSTRING` function. **EDIT**: Corrected the CASE statement. ``` SELECT te.employeeName, CASE WHEN te.employeeName like '%'+' '+tp.Prefix THEN SUBSTRING(te.employeeName, 1, LEN(te.employeeName)-LEN(tp.Prefix)-1) WHEN te.employeeName like tp.Prefix+' '+'%' THEN SUBSTRING(te.employeeName, LEN(tp.Prefix)+2, LEN(te.employeeName)-LEN(tp.Prefix)-1) END employeeName_without_Prefix FROM tblEmployees te INNER JOIN tblPrefix tp ON te.employeeName like '%'+' '+tp.Prefix OR te.employeeName like tp.Prefix+' '+'%'; ``` The above query would not unintentionally replace prefix characters that occur in the middle of the employeeName. `SQL Fiddle demo` You can embed the SQL statement in a function, as below. However, please note that the function would perform slower, as it is executed for each employeeName one by one. ``` CREATE FUNCTION dbo.remove_prefix (@employeeName varchar(100)) RETURNS varchar(100) AS BEGIN DECLARE @employeeName_without_Prefix varchar(100) SELECT @employeeName_without_Prefix = CASE WHEN te.employeeName like '%'+' '+tp.Prefix THEN SUBSTRING(te.employeeName, 1, LEN(te.employeeName)-LEN(tp.Prefix)-1) WHEN te.employeeName like tp.Prefix+' '+'%' THEN SUBSTRING(te.employeeName, LEN(tp.Prefix)+2, LEN(te.employeeName)-LEN(tp.Prefix)-1) END employeeName_without_Prefix FROM tblEmployees te INNER JOIN tblPrefix tp ON te.employeeName like '%'+' '+tp.Prefix OR te.employeeName like tp.Prefix+' '+'%'; RETURN (@employeeName_without_Prefix); END; ``` **Reference**: [Create Function on MSDN](http://msdn.microsoft.com/en-us/library/ms186755.aspx)
SQL Replace query to remove prefix from name contains (MR., MRS., MR / MRS.....)
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a SQL question from an interview with one of the well-known IT companies a couple of months ago, and I never figured it out. An order can have multiple lines - for example, if a customer ordered cookies, chocolates, and bread, this would count as 3 lines in one order. The question is to find the number of orders in each line count. The output of this query would be something like 100 orders had 1 line, 70 orders had 2 lines, 30 had 3 lines, and so on. This table has two columns - order\_id and line\_id ``` Sample Data: order_id line_id 1 cookies 1 chocolates 1 bread 2 cookies 2 bread 3 chocolates 3 cookies 4 milk ``` **desired output:** ``` orders line 1 1 2 2 1 3 ``` So generally speaking, we have a very large data set, and the number of line\_ids per order\_id can range from 1 to infinity (theoretically speaking). ``` The desired output for the general case is: orders line 100 1 70 2 30 3 etc.. ``` How can I write a query to find the total number of orders per line count = 1, 2, 3... etc.? My thought on this problem is to first subquery the count of line\_id per order\_id, and then select the subquery along with a list of values as the second column ranging from 1 to max(line\_ids per order). ``` Test Data: create table data ( order_id int, line_id char(50) ); insert into data values(1, 'cookies'), (1, 'chocolates'), (1, 'bread'), (2, 'bread'), (2, 'cookies'), (3, 'chocolates'), (3, 'cookies'), (4, 'milk'); Since order_id=1 has 3 lines, order_id=2 has 2 lines, order_id=3 has 2 lines, order_id=4 has 1 line. Thus it yields our solution: orders line 1 1 2 2 1 3 This is because both order_id = 2 and order_id = 3 have 2 lines. So it would mean 2 orders have line = 2. 
``` So far, I have: ``` select lines, sum(case when orders_per_line = '1' then 1 else 0), sum(case when orders_per_line = '2' then 1 else 0), sum(case when orders_per_line = '3' then 1 else 0) from( select lines, order_id, count(*) as orders_per_line from data where lines in ('1', '2', '3') group by order_id, lines ) group by lines ``` My query is wrong, as I only want 2 columns, and creating a sequence of numbers ranging from 1 to max(lines per order) is also wrong. Any suggestions? Thanks in advance!
Try this: ``` Select Count(*) as Orders, Lines from ( Select order_id, Count(*) as Lines from data group by order_id )query group by Lines ``` For example, look at this [sqlfiddle](http://sqlfiddle.com/#!3/9904e/4)
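As a sanity check, the accepted nested-`GROUP BY` approach can be run against the question's sample data; this sketch uses SQLite through Python's stdlib `sqlite3` module (an assumption here - the question doesn't name an engine, and the SQL shown is portable).

```python
import sqlite3

# Sketch of the accepted nested-GROUP BY approach, run on SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (order_id INT, line_id TEXT);
INSERT INTO data VALUES
  (1, 'cookies'), (1, 'chocolates'), (1, 'bread'),
  (2, 'bread'), (2, 'cookies'),
  (3, 'chocolates'), (3, 'cookies'),
  (4, 'milk');
""")

rows = conn.execute("""
    SELECT COUNT(*) AS orders, lines AS line
    FROM (SELECT order_id, COUNT(*) AS lines
          FROM data
          GROUP BY order_id) per_order
    GROUP BY lines
    ORDER BY lines
""").fetchall()
# rows -> [(1, 1), (2, 2), (1, 3)]: one 1-line order, two 2-line, one 3-line
```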
Try This: ``` with a AS ( SELECT COUNT(order_id) AS Orders FROM Table_1 GROUP BY Order_Id ) SELECT Orders, COUNT(*) AS line FROM a GROUP BY Orders ```
SQL Find the number of orders in each line count
[ "", "sql", "" ]
``` SELECT right(name,7), substring(params, charindex('|-|', params)+3,LEN(params)) as 'List Name', convert(varchar,dateadd(hh,-8,created_date), 101) as Date, convert(char, dateadd(hh,-8,created_date), 108) as Time FROM [meldb].[dbo].[mr_message] WHERE name in ('CL_LIST_STARTED', 'CL_LIST_STOPPED') AND dateadd(hh,-8,created_date) > '7/1/2014' ORDER BY created_date ASC ``` List Name will return something like: ``` firstname.lastname-|LISTNAME|-|PARENTLISTNAME ``` I'm trying to isolate `LISTNAME` and `PARENTLISTNAME` into separate columns, but since they can vary in char size, I can't just specify right or left. Btw, I didn't create this table; I'm just stuck using it. Any ideas?
Did you try it yet? Ok... happy friday :) ``` declare @str varchar(100); set @str = 'jim.smith|-|firstItem|-|secondItem'; --- for your query, change @str to the column name, obviously --- select substring( @str , charindex('|-|', @str) + 3 , ( ( charindex('|-|', @str, charindex('|-|', @str) + 3) ) - ( charindex('|-|', @str) + 3) ) ) ,substring( @str , charindex('|-|', @str, charindex('|-|', @str) + 3) + 3 , len(@str) -- guaranteed to be past the end, to catch all ) ```
Do you want to split `params` into three columns? Please check below query. ``` SELECT SUBSTRING(params, 1, CHARINDEX('-', params)-1) AS FullName, SUBSTRING(STUFF(params, CHARINDEX('|-|', params), LEN(params), ''), CHARINDEX('-', params) + 2, LEN(params)) AS 'List Name', SUBSTRING(params, CHARINDEX('|-|', params) + 3, LEN(params)) AS 'Parent List Name', CONVERT(VARCHAR,DATEADD(hh,-8,created_date), 101) AS DATE, CONVERT(CHAR, DATEADD(hh,-8,created_date), 108) AS TIME FROM [meldb].[dbo].[mr_message] WHERE name IN ('CL_LIST_STARTED', 'CL_LIST_STOPPED') AND DATEADD(hh,-8,created_date) > '7/1/2014' ORDER BY created_date ASC ```
Remove part of a text string
[ "", "sql", "sql-server", "" ]
I am trying to get the record with the highest `amount` in each hour. Data in DB: ``` id | date | amount ––––––––––––––––––––––––––––––––– 1 | 2014-07-11 18:10:00 | 10 2 | 2014-07-11 18:20:00 | 20 3 | 2014-07-11 18:30:00 | 100 4 | 2014-07-11 18:40:00 | 10 5 | 2014-07-11 19:10:00 | 50 6 | 2014-07-11 19:20:00 | 60 ``` Desired outcome: ``` id | date | amount --------------------------------- 3 | 2014-07-11 18:30:00 | 100 6 | 2014-07-11 19:20:00 | 60 ```
If I'm understanding your question correctly, you can join the table back to itself using the max aggregate, grouping by the hour and date: ``` select d.* from data d join (select max(amount) maxamount, hour(date) datehour, date(date) date from data group by hour(date), date(date) ) d2 on d.amount = d2.maxamount and hour(d.date) = d2.datehour and date(d.date) = d2.date ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!2/df2bc/3)
something like this? -- assuming table name is transactions ``` SELECT * FROM ( SELECT id, date, amount FROM transactions ORDER BY amount DESC ) AS t GROUP BY HOUR(date) , DATE(date); ``` [WORKING FIDDLE](http://sqlfiddle.com/#!2/81d52/1)
SQL get record with highest amount in given hour
[ "", "mysql", "sql", "greatest-n-per-group", "" ]
Please, I am trying to query data with a maximum value from two tables: ``` table1 |user_id | name | | 001 | Paul | | 002 | Sean | table2 |id | class | Year | user_id | |201 | 1A | 2010 | 001 | |202 | 2A | 2011 | 001 | |203 | 1B | 2010 | 002 | ``` The `user_id` in `table2` references the `user_id` from `table1`. This is how I want my output to be: ``` OUTPUT | user_id | name | class| year | | 001 | Paul | 2A | 2011 | | 002 | Sean | 1B | 2010 | ``` **TRIED SO FAR** ``` SELECT a.user_id, a.name, b.class, max(Year) as year FROM table1 a INNER JOIN table2 b ON a.user_id=b.user_id GROUP BY user_id ``` The query above gives me the maximum year but with a different class value in the row, i.e. the previous class value. This is how it looks: ``` | user_id | name | class| year | | 001 | Paul | 1A | 2011 | | 002 | Sean | 1B | 2010 | ``` Where am I going wrong in my query? Any help is appreciated. Thanks
Maybe you could order the full result set first by year, and then group by user id: ``` SELECT * FROM ( SELECT a.user_id, a.name, b.class, year FROM table1 a INNER JOIN table2 b ON a.user_id=b.user_id ORDER BY year desc ) h GROUP BY user_id ``` [SqlFiddle demo here](http://sqlfiddle.com/#!2/29485e/4)
Because you are using `group by` already, you can use the `substring_index()`/`group_concat()` hack: ``` SELECT a.user_id, a.name, substring_index(group_concat(b.class order by year desc), ',', 1) as maxclass, max(Year) as year FROM table1 a INNER JOIN table2 b ON a.user_id=b.user_id GROUP BY a.user_id, a.name; ``` You can also do this without a `group by`, using `not exists`: ``` SELECT a.user_id, a.name, b.class, b.year FROM table1 a INNER JOIN table2 b ON a.user_id=b.user_id WHERE NOT EXISTS (select 1 from table2 b2 where b2.user_id = b.user_id and b2.year > b.year) ``` The `where` clause rephrases the query. It says: "Get me all rows from `table2` where the same user does not have a bigger `year`." This is equivalent to getting the row with the maximum year. And, this is standard SQL, which often works quite well in any database.
select max value of a row from two tables
[ "", "mysql", "sql", "" ]
This is a MySQL 5.5 DB. You're supposed to be able to insert multiple rows of values with this syntax: ``` INSERT INTO tbl_name (a,b,c) VALUES(1,2,3), (4,5,6), (7,8,9); ``` But I'm getting an error ("Column count doesn't match value count at row 1") on the following query: ``` INSERT INTO users_X_shareItems (userID, itemID, userAction, detail, actionDate) VALUES ('CB381FC5-6373-4D01-A2ED-01CEACFA750B'), ('16nhbfsg6apltgtfhjkb29z4w'), ('like'), (''), (NULL) ``` Are my counting skills deficient, or are there five columns' worth of values right there? In this instance there's only one row's worth of data; hence only one value in each set of parentheses. But the PHP function that builds this query takes an arbitrary number of rows' worth of data, and that's a functional requirement. PLEASE NOTE in the example at the top, directly from the MySQL doc, the parentheses supposedly tell the engine that these are LISTS of values. Let's take that example and modify for an instance in which you're only adding ONE row's worth of values: ``` INSERT INTO tbl_name (a,b,c) VALUES(1), (4), (7); ``` The wording of the documentation is ambiguous, so I'm going to reorganize the query as some have suggested.
From the different comments I suspect you're making the assumption that the `INSERT` statement works like a function that accepts variable arguments (such as `COALESCE()` or `CONCAT_WS()`). That's simply not the case: it isn't a function and you need the same item count on each list: ``` INSERT INTO foo (a) VALUES (?), (?), (?), (?), (?); INSERT INTO foo (a, b) VALUES (?, ?), (?, ?), (?, ?), (?, ?), (?, ?); INSERT INTO foo (a, b, c) VALUES (?, ?, ?), (?, ?, ?), (?, ?, ?), (?, ?, ?), (?, ?, ?); ``` ... but never: ``` -- Not valid INSERT INTO foo (a, b) VALUES (?), (?, ?), (?, ?, ?); ``` If the table design allows so, some of the actual values can be `NULL`, but it isn't possible to omit them entirely. If you need to handle different column counts, you'll have to build your SQL code dynamically. That's trivial in most programming languages. --- Just seen your edit. You misunderstood the multiple-row syntax. It isn't like this: ``` -- Not valid INSERT INTO person (name, age) values ('Abe', 'Bill', 'Charles'), (23, 45, 17); ``` It's like this: ``` INSERT INTO person (name, age) values ('Abe', 23), ('Bill', 45), ('Charles', 17); ```
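A quick way to see both rules in action - the same item count per row list, and mismatched counts being rejected - is to replay the answer's `person` example in SQLite via Python's stdlib `sqlite3`, which enforces the same column/value count check as MySQL.

```python
import sqlite3

# Replaying the answer's person example in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INT)")
conn.execute("""
    INSERT INTO person (name, age)
    VALUES ('Abe', 23), ('Bill', 45), ('Charles', 17)
""")
count = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]  # 3

# A row list with the wrong item count is rejected outright:
try:
    conn.execute("INSERT INTO person (name, age) VALUES ('Dora')")
    mismatch_rejected = False
except sqlite3.OperationalError:
    mismatch_rejected = True
```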
Why do you close the VALUES brackets after each single value? SQL now assumes you would like to add five different rows, each containing only one value, but you said you would provide five values. So your query should be ``` INSERT INTO users_X_shareItems (userID, itemID, userAction, detail, actionDate) VALUES ('CB381FC5-6373-4D01-A2ED-01CEACFA750B', '16nhbfsg6apltgtfhjkb29z4w', 'like', '', NULL); ``` **Edit** If you want to add more rows in one query, you still have to provide these 5 values per row. See this example ``` INSERT INTO users_X_shareItems (userID, itemID, userAction, detail, actionDate) VALUES ('CB381FC5-6373-4D01-A2ED-01CEACFA750B', '16nhbfsg6apltgtfhjkb29z4w', 'like', '', NULL), ('row2', 'row2', 'like', '', NULL), ('row3', 'row3', 'like', '', NULL); ```
How is this a mismatch between column count and values?
[ "", "mysql", "sql", "" ]
This is my table structure: ``` File | Version | Function 1 | 1 | 1 1 | 2 | 1 1 | 3 | 1 1 | 2 | 2 2 | 1 | 4 3 | 2 | 5 ``` I need it to return these rows only ``` 1 | 3 | 1 2 | 1 | 4 3 | 2 | 5 ``` Meaning I only want the functions that have the most recent version for each file. I do not want the result below, i.e unique function ids that are not the most recent version ``` 1 | 3 | 1 1 | 2 | 2 ... ``` I've looked at [How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?](https://stackoverflow.com/questions/612231/how-can-i-select-rows-with-maxcolumn-value-distinct-by-another-column-in-sql), but that returns the most recent unique function ids. The query needs to be sqlite3 compatible.
An efficient way to do this is often to use `not exists`: ``` select t.* from table t where not exists (select 1 from table t2 where t2.file = t.file and t2.Version > t.version ); ``` This query can take advantage of an index on `table(file, version)`. This rephrases the query to be: "Get me all rows from the table where the corresponding file has no larger version."
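Since the asker needs sqlite3 compatibility, the `NOT EXISTS` query can be run essentially verbatim against the question's sample rows with Python's stdlib `sqlite3` module:

```python
import sqlite3

# The NOT EXISTS "latest version per file" query on the question's data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (file INT, version INT, function INT);
INSERT INTO t VALUES
  (1, 1, 1), (1, 2, 1), (1, 3, 1), (1, 2, 2), (2, 1, 4), (3, 2, 5);
""")
rows = conn.execute("""
    SELECT file, version, function
    FROM t
    WHERE NOT EXISTS (SELECT 1 FROM t AS t2
                      WHERE t2.file = t.file
                        AND t2.version > t.version)
    ORDER BY file
""").fetchall()
# rows -> [(1, 3, 1), (2, 1, 4), (3, 2, 5)]
```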
In SQLite 3.7.11 or later, when you use MAX, the other values are guaranteed to come from the row with the largest value: ``` SELECT File, MAX(Version) AS Version, Function FROM MyTable GROUP BY File ```
SQL: Filter rows with max value
[ "", "sql", "sqlite", "greatest-n-per-group", "" ]
How would you do this? I'm doing an ETL process on databases that do not currently have a Date\_added/Date\_updated column. The DBA does not want to add a Date column to the tables, so I have to find some alternative way of selecting only new/changed records for the nightly extractions. The databases are huge, so the solution has to be space/time effective if possible. Addressing some follow-up questions: **The tables DO NOT have row dependency enabled.** 1) Does the table have a PK? Yes, each table has a PK that is unique only within itself. 2) Huge DB is.... billions, millions? 10 million records in the largest table right now. 3) Data subset size of new/changed records for nightly extractions? I would guess about 2000 rows per client, times 100 clients, to give 200,000. 4) Any unique values in nightly extractions? There are unique values within each schema.
If you have configured your database for the Oracle Flashback Transaction Query feature, then you can query the database as of a past date, as below: ``` SELECT * FROM mytable AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY); ``` Thus, from day to day, you can see the changes by using MINUS, as ``` SELECT * FROM mytable MINUS SELECT * FROM mytable AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY); ``` **Reference**: [Using Oracle Flashback Technology on Oracle® Database Advanced Application Developer's Guide](http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm#i1008155)
**If** your PK is numeric **AND** generated from sequence in increasing order then you can record the start and end time and ID in a control table. The control table would have a range of batch primary keys. The control table can have dates, tablename (if you want to apply that design per multiple jobs), status, etc.
Is there any method to select only new/changed rows without Date column or row dependency in Oracle?
[ "", "sql", "database", "oracle", "data-warehouse", "" ]
I am sharing my SQL; if someone has something better than this, please post it here. To get the First Highest Salary, run the query below: ``` select salary from employee order by salary desc limit 1; ``` To get the Second Highest Salary, run the query below: ``` select salary from employee order by salary desc limit 1,1; ``` To get the Third Highest Salary, run the query below: ``` select salary from employee order by salary desc limit 2,1; ```
Your code is fine... apart from when duplicate salaries exist. I.e. if the top 10 salaries are all 1000, then the top salary = 1000, the 3rd highest = 1000, etc. Use group by... ``` select salary from employee GROUP BY salary order by salary desc limit 1; ``` or a distinct... ``` select DISTINCT(salary) from employee order by salary desc limit 1; ```
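The `DISTINCT` idea can be checked with Python's stdlib `sqlite3`; note that MySQL's `LIMIT 2,1` is spelled here in the equivalent `LIMIT 1 OFFSET 2` form, which both MySQL and SQLite accept. The salary figures are invented.

```python
import sqlite3

# Nth highest distinct salary via DISTINCT + ORDER BY + LIMIT/OFFSET.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (salary INT)")
conn.executemany("INSERT INTO employee VALUES (?)",
                 [(1000,), (1000,), (900,), (800,), (800,), (700,)])

def nth_highest(n):
    """Nth highest distinct salary, or None if there are fewer than n."""
    row = conn.execute("""
        SELECT DISTINCT salary FROM employee
        ORDER BY salary DESC
        LIMIT 1 OFFSET ?
    """, (n - 1,)).fetchone()
    return row[0] if row else None
```

Duplicates collapse, so `nth_highest(2)` skips the repeated 1000s and returns 900.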
You can get Top Nth salary using this query where `N` is Nth highest salary. ``` SELECT * FROM Employee Emp1 WHERE (N) = ( SELECT COUNT(DISTINCT(Emp2.Salary)) FROM Employee Emp2 WHERE Emp2.Salary >= Emp1.Salary ) ```
How to get First Highest or Second Highest or Third Highest Salary from Employee table
[ "", "mysql", "sql", "" ]
ORACLE 10g db: convert the date format from ``` 02-JUL-14 to 02/JUL/14 ``` I tried using the select query below to get "02/JUL/14"; instead it shows "02-JUL-14": ``` SELECT ROUTINGNUM , to_date (EFFDATE,'DD/MM/YYYY') FROM hat; ``` Can anybody please help with this?
if EFFDATE is a date column, ``` SELECT ROUTINGNUM , to_char( EFFDATE,'DD/MON/YYYY') FROM hat; ``` if it is a String in the format DD-MON-YY ``` SELECT ROUTINGNUM , to_char( to_date( EFFDATE,'DD-MON-YY') ,'DD/MON/YYYY') FROM hat; ```
TO\_DATE function is used for converting from String to Date type. Use TO\_CHAR function instead.
Date conversion in Oracle not working as expected
[ "", "sql", "oracle", "date", "" ]
I have a table where there is a record for each week of activity for my resources, as follows: ``` Asset Name | Project | Week Beginning | Monday | Tuesday| Wednesday| Thursday| Friday| Saturday | Sunday James Project 1 26-06-2014 0 1 1 0 1 0 0 James Project 1 03-07-2014 1 1 0 1 0 0 0 Dave Project 3 03-07-2014 1 1 0 1 0 0 0 ``` I need to see James and when he started and finished his project. So I need a view like the one below: ``` Asset Name | Project | Start Date on Project | End Date on Project| Number of days worked James Project 1 27-06-2014 06-07-2014 6 Dave Project 3 03-07-2014 06-07-2014 3 ``` So I was thinking of a case statement, a cursor, or a stored procedure with set-based logic. I would like to work out the best and fastest way of doing this. Brain freeze this morning due to a late one last night, so if anyone can get me inspired I would be most grateful.
Try This ``` SELECT ASSETNAME,PROJECT,MIN(T.MIN)START_DAY,MAX(T.MAX) END_DAY, DATEDIFF(DD,MIN(t.min) ,MAX(T.MAX)) DURATION FROM ( SELECT ASSETNAME,PROJECT,DATEADD(DD,MIN(WEEK),WEEKBEGINNING) 'MIN',DATEADD(DD,MAX(WEEK),WEEKBEGINNING) 'MAX' FROM #TEMP T CROSS APPLY(SELECT 0 'WEEK' WHERE T.MONDAY=1 UNION ALL SELECT 1 WHERE T.TUESDAY=1 UNION ALL SELECT 2 WHERE T.WEDNESDAY=1 UNION ALL SELECT 3 WHERE T.THURSDAY=1 UNION ALL SELECT 4 WHERE T.FRIDAY=1 UNION ALL SELECT 5 WHERE T.SATURDAY=1 UNION ALL SELECT 6 WHERE T.SUNDAY=1 ) D GROUP BY ASSETNAME,PROJECT,WEEKBEGINNING )T ```
Firstly, normalize your data, using `unpivot`, and convert the days to an actual date. ``` select AssetName, Project, dateadd(d, CASE dayname WHEN 'Monday' THEN 0 WHEN 'Tuesday' THEN 1 WHEN 'Wednesday' THEN 2 WHEN 'Thursday' THEN 3 WHEN 'Friday' THEN 4 WHEN 'Saturday' THEN 5 When 'sunday' then 6 end,WeekBeginning) workdate from ( select * from yourdata unpivot (dayvalue for dayname in (monday,tuesday,wednesday,thursday,friday,saturday,sunday)) u ) v ``` From there, it is a simple case of finding the earliest and latest records.
Generate interesting view of data
[ "", "sql", "sql-server", "" ]
I have a table like this in MySQL: ``` 1. ID, (int, autoincrement, primary) 2. Date (timestamp) 3. Device (int) 4. Text (text) ``` For a report, I want to find out how many records there are for each type of device. Like "Device '5' = 100 rows". Can this be done with a pure SQL command, or do I have to loop through all the device types in PHP and fire a SELECT SUM query for each type? I'm not familiar with MySQL. But I know Oracle SQL can do stuff like this with INSERT, like `insert into table (col1,col2) (Select * From table)` is like a for-each. Is my idea possible in MySQL?
This is a classic use case of the `group by` clause: ``` SELECT device, COUNT(*) AS num_per_device FROM my_table GROUP BY device ``` Edit: To answer the follow-up question by @sgtBear in the comment, you can add the device text by joining this query to another one: ``` SELECT a.device, b.device_text, a.num_per_device FROM (SELECT device, COUNT(*) AS num_per_device FROM my_table GROUP BY device) a JOIN device_details b ON a.device = b.device ```
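A runnable sketch of the grouped count plus the follow-up join, using SQLite via Python's stdlib `sqlite3` as a stand-in for MySQL; the `device_details` table and all rows are invented for the demo.

```python
import sqlite3

# Grouped count per device, then joined to a lookup table for the label.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER PRIMARY KEY, device INT, text TEXT);
CREATE TABLE device_details (device INT, device_text TEXT);
INSERT INTO my_table (device, text) VALUES (5, 'a'), (5, 'b'), (3, 'c');
INSERT INTO device_details VALUES (5, 'phone'), (3, 'tablet');
""")
rows = conn.execute("""
    SELECT a.device, b.device_text, a.num_per_device
    FROM (SELECT device, COUNT(*) AS num_per_device
          FROM my_table GROUP BY device) a
    JOIN device_details b ON a.device = b.device
    ORDER BY a.device
""").fetchall()
# rows -> [(3, 'tablet', 1), (5, 'phone', 2)]
```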
You can just try below query: ``` Select Count(Device) FROM Table1 Order By Device Group By Device ```
Select sum of each
[ "", "mysql", "sql", "select", "" ]
I am using OpenOffice to create an .odb database that is connected to an Access database, but I am having a hard time querying dates in the .odb database. Here is an entry from the DATE column: ***02/11/13 12:00 AM*** (The column is actually called 'DATE') How would I query this? This is what I have tried: **ERROR: Data type mismatch in criteria expression.** ``` SELECT * FROM PHAII01 WHERE DATE = '02/11/13 12:00 AM' ``` **ERROR: Syntax error (missing operator) in query expression 'DATE=02/11/13 12:AM'.** ``` SELECT * FROM PHAII01 WHERE DATE = 02/11/13 12:00 AM ``` **RETURNS NOTHING** ``` SELECT * FROM PHAII01 WHERE DATE = 02/11/13 ``` **ERROR: Data type mismatch in criteria expression** ``` SELECT * FROM PHAII01 WHERE DATE = '02/11/13 12:00 AM' ``` **RETURNS NOTHING** ``` SELECT * FROM PHAII01 WHERE DATE = 2013/02/11 ```
Try it like this: ``` SELECT * FROM PHAII01 WHERE DATE = #02/11/13 12:00 AM# ``` or like this ``` SELECT * FROM PHAII01 WHERE DATE = CDate('02/11/13 12:00 AM') ```
I had to format() the values in the DATE column to better fit the query. **This achieved the desired results:** ``` SELECT * FROM PHAII01 WHERE format(DATE,'YYYY-MM-DD') = '2013-02-11' ```
Are my dates strings?
[ "", "sql", "date", "ms-access", "openoffice-base", "" ]
I am trying to return all rows for events that closed within 7 days of the current date. My end\_date has a format such as 2014-06-25 (Y-m-d). What is the best way to select events between now and 7 days ago? I have the following, but this isn't correct: ``` SELECT * FROM end_date WHERE end_date <= NOW() AND end_date >= DATE_SUB(end_date, INTERVAL 7 DAY) ``` For instance, if today is 2014-07-14, I'd want events between 2014-07-14 and 2014-07-07.
Try using `DATE_ADD` ``` SELECT * FROM end_date WHERE end_date <= NOW() AND end_date >= DATE_ADD(now(),INTERVAL -7 day) ```
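For reference, the same seven-day window can be expressed with SQLite's date functions (`date('now', '-7 day')` in place of MySQL's `DATE_ADD`); a small Python sketch with an invented table and rows:

```python
import sqlite3
from datetime import date, timedelta

# Seven-day lookback window using SQLite date arithmetic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (end_date TEXT)")
today = date.today()
conn.executemany("INSERT INTO events VALUES (?)", [
    ((today - timedelta(days=3)).isoformat(),),   # inside the window
    ((today - timedelta(days=10)).isoformat(),),  # too old
    ((today + timedelta(days=2)).isoformat(),),   # in the future
])
rows = conn.execute("""
    SELECT end_date FROM events
    WHERE end_date <= date('now')
      AND end_date >= date('now', '-7 day')
""").fetchall()
# only the 3-days-ago event survives the filter
```

ISO-formatted date strings compare correctly as text, which is why the string comparison in the `WHERE` clause works.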
You can do something like below: ``` SELECT .... FROM .... WHERE DATEDIFF(NOW(), end_at) <= 7; ```
Show events from the last 7 days via MySQL
[ "", "mysql", "sql", "database", "" ]
I have a MySQL table "results" which has the following fields: ``` id (PK, AI), user_id (FK), date, score, time ``` I want to be able to query this table so that it sorts and returns the fields in order of score (descending order) followed by time (ascending order). So something like: ``` SELECT * FROM results ORDER BY score DESC, time ASC. ``` However, after this sorting, if more than one row has the same `user_id`, I only want to include the highest row. How would I do this?
You can do this with `not exists`: ``` SELECT * FROM results r WHERE NOT EXISTS (select 1 from results r2 where r2.user_id = r.user_id and r2.id > r.id) ORDER BY score DESC; ``` This will work best with an index on `results(user_id, id)`.
My suggestion: `SELECT user_id, max(score), time FROM results GROUP BY user_id ORDER BY score DESC;` Select id and highest score per `user_id` via `max()` and `Group By`. Then order the records by score descending. EDIT: If you need the time for the user-score and there is only one entry with the same score you can use a subselect to get this time: ``` SELECT user_id, max(score), ( SELECT max(time) FROM results AS r2 WHERE r2.user_id = r1.user_id AND r2.score = max(r1.score) ) AS time FROM results AS r1 GROUP BY user_id ORDER BY score DESC; ```
MySQL - if row is duplicate, return only the first one
[ "", "mysql", "sql", "rows", "" ]
I have a table with information like this: ``` VehicleID | dtDate | Lat | long ``` A new record is created every so often with a new lat and long for the same VehicleID. I am trying to get the latest date per vehicle, so my lat and long would be the latest. ``` select row_number over (partition by vehicleID order by dtDate), vehicleID, Lat, Long from database.schema.table ```
``` ;WITH vehicleCTE AS (SELECT ROW_NUMBER() OVER (PARTITION BY vehicleID ORDER BY dtDate DESC) AS rowNum, vehicleID, Lat, Long, dtDate FROM t1) SELECT vehicleID, Lat, Long, dtDate FROM vehicleCTE WHERE rowNum = 1; ```
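Both answers rely on `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ... DESC)`. The same pattern also runs on SQLite 3.25+ (which added window functions), so it can be demonstrated from Python with invented sample positions:

```python
import sqlite3

# Latest row per vehicle via ROW_NUMBER() over a partition.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE positions (vehicleID INT, dtDate TEXT, lat REAL, long REAL);
INSERT INTO positions VALUES
  (1, '2014-07-10 09:00', 10.0, 20.0),
  (1, '2014-07-11 09:00', 11.0, 21.0),
  (2, '2014-07-09 09:00', 30.0, 40.0);
""")
rows = conn.execute("""
    WITH ranked AS (
        SELECT vehicleID, dtDate, lat, long,
               ROW_NUMBER() OVER (PARTITION BY vehicleID
                                  ORDER BY dtDate DESC) AS rowNum
        FROM positions
    )
    SELECT vehicleID, dtDate, lat, long
    FROM ranked
    WHERE rowNum = 1
    ORDER BY vehicleID
""").fetchall()
# one row per vehicle, carrying its most recent date/lat/long
```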
I'd use a CTE here if this is just an adhoc query: ``` WITH CTE1 AS ( SELECT vehicleID ,Lat ,Long ,row_number OVER ( PARTITION BY vehicleID ORDER BY dtDate DESC ) AS latestdate FROM MyTable ) SELECT vehicleID ,Lat ,Long FROM CTE1 WHERE latestdate = 1 ```
latest distinct record from partition by
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have many tables that have the same column 'customer_number'. I can get a list of all these tables with this query: ``` SELECT table_name FROM ALL_TAB_COLUMNS WHERE COLUMN_NAME = 'customer_number'; ``` The question is how do I get all the records that have a specific customer number from all these tables without running the same query against each of them.
I assume you want to automate this. Two approaches. 1. SQL to generate SQL scripts. ``` spool run_rep.sql set head off pages 0 lines 200 trimspool on feedback off SELECT 'prompt ' || table_name || chr(10) || 'select ''' || table_name || ''' tname, CUSTOMER_NUMBER from ' || table_name || ';' cmd FROM all_tab_columns WHERE column_name = 'CUSTOMER_NUMBER'; spool off @ run_rep.sql ``` 2. PL/SQL. Similar idea, using dynamic SQL: ``` DECLARE TYPE rcType IS REF CURSOR; rc rcType; CURSOR c1 IS SELECT table_name FROM all_tab_columns WHERE column_name = 'CUST_NUM'; cmd VARCHAR2(4000); cNum NUMBER; BEGIN FOR r1 IN c1 LOOP cmd := 'SELECT cust_num FROM ' || r1.table_name ; OPEN rc FOR cmd; LOOP FETCH rc INTO cNum; EXIT WHEN rc%NOTFOUND; -- Prob best to INSERT this into a temp table and then -- select * from that, to avoid DBMS_OUTPUT buffer full issues DBMS_OUTPUT.PUT_LINE ( 'T:' || r1.table_name || ' C: ' || cNum ); END LOOP; CLOSE rc; END LOOP; END; ```
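The same automation idea - discover the tables from the catalog, then build one query per table - translates to other databases. As an illustration in Python against SQLite (whose catalog is `sqlite_master` plus `PRAGMA table_info` rather than Oracle's `ALL_TAB_COLUMNS`; the schema and data are invented):

```python
import sqlite3

# Find every table containing a given column, then query each dynamically.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_number INT, total REAL);
CREATE TABLE tickets (customer_number INT, subject TEXT);
CREATE TABLE products (sku TEXT);
INSERT INTO orders VALUES (42, 9.99), (7, 1.50);
INSERT INTO tickets VALUES (42, 'refund');
""")

def tables_with_column(column):
    names = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return [t for t in names
            if any(info[1] == column
                   for info in conn.execute(f"PRAGMA table_info({t})"))]

# Run one generated query per matching table for customer 42:
hits = [t for t in tables_with_column("customer_number")
        if conn.execute(f"SELECT 1 FROM {t} WHERE customer_number = ?",
                        (42,)).fetchone()]
```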
To get records from a table, you have to write a query against that table. So you can't get ALL the records from tables with a specified field without a query against each one of these tables. If there is a subset of columns that you are interested in and this subset is shared among all tables, you may use a UNION/UNION ALL operation like this: ``` select * from ( select customer_number, phone, address from table1 union all select customer_number, phone, address from table2 union all select customer_number, phone, address from table3 ) where customer_number = 'my number' ``` Or, in a simple case where you just want to know which tables have records about a particular client: ``` select * from ( select 'table1' src_tbl, customer_number from table1 union all select 'table2', customer_number from table2 union all select 'table3', customer_number from table3 ) where customer_number = 'my number' ``` Otherwise you have to query each table separately.
How to select a column from all tables in which it resides?
[ "", "sql", "oracle", "" ]
Ok ok, I know you're probably all going to kill me for asking this, however I got into a friendly programmer argument with a co-worker about one of our database tables, and he asked a question which I know the answer to, but I couldn't explain why it is the better way. I will simplify the situation for the simplicity of the question. We have a fairly large table of people / users. Now, amongst other data being stored, the data in question is as follows: we have a simNumber, cellNumber and the ipAddress of that sim. Now I am saying that we should make a table, let's call it SimTable, put those 3 entries in the sim table, and then put a FK in the UsersTable linking the two. Why? Because that's what I have always been taught: NORMALISE your tables!!! Ok, so all is good in that regard. But now my friend says to me: yes, but now when you want to query a user's phone number, SQL has to go and: 1. search for the user 2. search for the sim FK 3. search for the correct sim row in the sim table 4. get the phone number Now when I go and request 10000 users' phone numbers, the number of operations done seriously grows in size. Vs the other approach: 1. search for the user 2. find the phone number Now the argument is purely performance based. As much as I understand why we normalize the data (to remove redundant data, for maintainability, to make changes to data in one table which propagate up, etc.), it does appear to me that the approach with the data in one table will be faster, or will at least require fewer tasks/operations to give me the data I want. So what is the case in this situation? I do hope that I have not asked anything insanely silly; it is early in the morning, so do forgive me if I'm not thinking clearly. The technology involved is MS SQL Server 2012. [EDIT] This article below also touches on some of the concepts I have mentioned above <http://databases.about.com/od/specificproducts/a/Should-I-Normalize-My-Database.htm>
The goal of normalization is not performance. The goal is to model your data correctly with minimum redundancy so you avoid data anomalies. Say for example two users share the same phone. If you store the phones in the user table, you'd have sim number, IP address, and cell number stored one each user's row. Then you change the IP address on one row but not the other. How can one sim number have two IP addresses? Is that even valid? Which one is correct? How would you fix such discrepancies? How would you even detect them? There are times when denormalization is worthwhile, if you really need to optimize data access for one query that you run very frequently. But denormalization comes at a cost, so be prepared to commit yourself to a lot more manual work to take responsibility for data integrity. More code, more testing, more cleanup tasks. Do those count when considering "performance" of the project overall? --- Re comments: I agree with @JoelBrown, as soon as you implement your first case of denormalization, you compromise on data integrity. I'll expand on what Joel mentions as "well-considered." Denormalization benefits *specific* queries. So you need to know which queries you have in your app, and which ones you need to optimize for. Do this conservatively, because while denormalization can help a specific query, it *harms* performance for all other uses of the same data. So you need to know whether you need to query the data in different ways. Example: suppose you are designing a database for StackOverflow, and you want to support tags for questions. Each question can have a number of tags, and each tag can apply to many questions. The normalized way to design this is to create a third table, pairing questions with tags. 
That's the physical data model for a many-to-many relationship: ``` Questions ----<- QuestionsTagged ->---- Tags ``` But you figure you don't want to do the join to get tags for a given question, so you put tags into a comma-separated string in the questions table. This makes it quicker to query a given question and its associated tags. But what if you also want to query for one specific tag and find its related questions? If you use the normalized design, it's simply a query against the many-to-many table, but on the `tag` column. But if you denormalize by storing tags as a comma-separated list in the Questions table, you'd have to search for tags as substrings within that comma-separated list. Searching for substrings can't be indexed with a standard B-tree style index, and therefore searching for related questions becomes a costly table-scan. It's also more complex and inefficient to insert and delete a tag, or to apply constraints like uniqueness or foreign keys. That's what I mean by denormalization making an improvement for one type of query *at the expense of other uses of the data*. That's why it's a good idea to start out with everything in normal form, and then refactor to denormalized designs later on a case by case basis as your bottlenecks reveal themselves. This goes back to old wisdom: > "Premature optimization is the root of all evil" -- [Donald Knuth](http://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil/) In other words, don't denormalize until you can demonstrate during load testing that (a) it makes a real improvement to performance that justifies the loss of data integrity, and (b) it does not degrade performance of other cases unacceptably.
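The tag example above is easy to try out. A minimal sketch of the normalized three-table design in SQLite via Python's stdlib `sqlite3` (table names mirror the answer's Questions/QuestionsTagged/Tags diagram; the sample rows are invented), showing that "questions for a given tag" is an ordinary join:

```python
import sqlite3

# Normalized many-to-many tagging schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Questions (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Tags (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE QuestionsTagged (
    question_id INTEGER REFERENCES Questions(id),
    tag_id      INTEGER REFERENCES Tags(id),
    PRIMARY KEY (question_id, tag_id)
);
INSERT INTO Questions VALUES (1, 'Why normalize?'), (2, 'Join syntax?');
INSERT INTO Tags VALUES (1, 'sql'), (2, 'database-design');
INSERT INTO QuestionsTagged VALUES (1, 1), (1, 2), (2, 1);
""")
# "Questions for a given tag" is an indexable join,
# not a substring scan over a comma-separated list:
titles = [r[0] for r in conn.execute("""
    SELECT q.title
    FROM Questions q
    JOIN QuestionsTagged qt ON qt.question_id = q.id
    JOIN Tags t ON t.id = qt.tag_id
    WHERE t.name = 'sql'
    ORDER BY q.id
""")]
```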
It sounds like you already understand the benefits of normalisation, so I won't cover these. There are a couple of considerations here: 1. Does a user always have one and only one phone number? If so, then it is still normalised to add these to the user table. However, if the user can have either no phone number or multiple phone numbers, then the phone details should be held in a separate table. 2. Assuming you have these in separate tables, but after conducting performance tests you found that joining these 2 tables was having a significant effect on performance, then you may choose to deliberately denormalise the tables for performance gains.
Performance gains vs Normalizing your tables?
[ "", "sql", "sql-server", "database", "database-design", "normalization", "" ]
I am not sure if it's possible or not; I just want to know if it is. I have a column `plan_popular` which has the default value `0`. Let's say I have a list: ``` Plan Name | plan_popular | amount =================================== plan A 0 25.00 plan B 1 50.00 plan C 0 90.00 ``` This is how I am doing it: ``` $stmt = "update {CI}plans set plan_popular = 0"; $this->db->query($stmt); $stmt2 = "update {CI}plans set plan_popular = 1 where plan_id = ?"; $this->db->query( $stmt2, array($plan_id) ); ``` Now I want to reset the current popular flag and make plan C the popular one (`plan_popular` = 1). What I am doing is running two queries: one resets `plan_popular` to 0 for everything, and the second updates plan C to 1 using its id. Is it possible in a single query?
You can use an expression to determine the value to assign: ``` UPDATE {CI}plans SET plan_popular = IF(plan_id = ?, 1, 0); ```
try this, ``` UPDATE {CI}plans SET `plan_popular` = CASE `Plan Name` WHEN 'plan C' THEN 1 ELSE 0 END WHERE `Plan Name` IN((select `Plan Name` from {CI}plans where plan_popular=1 ) , 'plan C'); ```
Update Same mysql field twice in single query
[ "", "mysql", "sql", "" ]
I have a table in a SQL Server database in which records of transactions are stored. The table consists of the user id of the buyer and the user id of the seller of a product. I have to find the circles (cycles) in the table, for example, records of the type: A sells to B, B sells to C, C sells to D, and D sells to A. Please help.
Use following function: ``` CREATE FUNCTION dbo.CheckIsCircular(@SellerId INT) RETURNS BIT AS BEGIN DECLARE @IsCircular BIT = 0 DECLARE @Sellers TABLE(Id INT) DECLARE @TempSellers TABLE(Id INT) DECLARE @Buyers TABLE(Id INT) INSERT INTO @TempSellers(Id)VALUES(@SellerId) WHILE EXISTS(SELECT * FROM @TempSellers)BEGIN IF EXISTS(SELECT * FROM @Sellers s INNER JOIN @TempSellers t ON t.Id = s.Id)BEGIN SET @IsCircular = 1 BREAK; END INSERT INTO @Sellers(Id) SELECT Id FROM @TempSellers INSERT INTO @Buyers(Id) SELECT BuyerId FROM YourTable DELETE @TempSellers INSERT Into @TempSellers(Id) SELECT YourTable.SellerId FROM YourTable INNER JOIN @Buyers ON [@Buyers].Id = YourTable.SellerId END RETURN @IsCircular END ```
Your problem is a graph traversal challenge; this is not natively supported in TSQL, but you can [simulate it](http://hansolav.net/sql/graphs.html).
find circular transactions in database table
[ "", "sql", "sql-server", "database", "" ]
The query below works, but it returns two hours rows that I don't want: ``` SELECT USERINFO.name, USERINFO.BADGENUMBER, departments.deptname, APPROVEDHRS.hours, sum(workingdays) as workingdays,TotalWorkingDays FROM (SELECT DISTINCT (DATEDIFF(DAY, '2014-06-01', '2014-06-30') + 1) - DATEDIFF(WEEK, '2014-06-01', '2014-06-30') * 2 - (CASE WHEN DATEPART(WEEKDAY, '2014-06-01') = 5 THEN 1 ELSE 0 END) - (CASE WHEN DATEPART(WEEKDAY, '2014-06-30') = 6 THEN 1 ELSE 0 END) AS TotalWorkingDays, COUNT(DISTINCT DATEADD(d, 0,DATEDIFF(d, 0, CHECKINOUT.CHECKTIME))) AS workingdays, USERINFO.BADGENUMBER, USERINFO.NAME, hours FROM USERINFO LEFT JOIN CHECKINOUT ON USERINFO.USERID = CHECKINOUT.USERID LEFT JOIN departments ON departments.deptid = userinfo.DEFAULTDEPTID LEFT JOIN APPROVEDHRS ON APPROVEDHRS.userid = userinfo.userid WHERE (DEPARTMENTS.DEPTNAME = 'xyz') AND (CHECKINOUT.CHECKTIME >= '2014-06-01') AND (CHECKINOUT.CHECKTIME <= '2014-06-30') GROUP BY hours, USERINFO.BADGENUMBER, deptname, USERINFO.NAME, CONVERT(VARCHAR(10), CHECKINOUT.CHECKTIME, 103)) blue GROUP BY name, BADGENUMBER, workingdays, TotalWorkingDays, deptname, hours ``` The output of the above query: ``` name BADGENUMBER deptname hours --------------------------------------------------- abc 1111 xyz 00:07:59 abc 1111 xyz 00:08:00 pqr 2222 qwe NULL ``` The total hours in the `APPROVEDHRS` table are: ``` BADGENUMBER NAME DATE HOURS ------------------------------------------------- 1111 xyz 2014-06-15 00:07:59 1111 xyz 2014-06-14 00:08:00 1111 xyz 2014-07-20 00:10:00 ``` I am fetching records from 2014-06-01 to 2014-06-30, so I want the output below: ``` name BADGENUMBER deptname hours -------------------------------------------------------- abc 1111 xyz 00:15:59 pqr 2222 qwe NULL ``` Help me get this desired output. Thank you
First thing, you might want to use table names in your query so you can see where data is coming from, e.g. ``` select name,BADGENUMBER,deptname,hours, ``` ...would be easier to read as: ``` select ??.name,??.BADGENUMBER,??.deptname,APPROVEDHRS.hours, ``` ...or you could use aliases in the "FROM" part of your query to make this even easier to follow? Anyway, the basic problem appears to be that you are filtering the CHECKINOUT table by date but you aren't filtering the APPROVEDHRS table. To fix this you could change your JOIN from this: ``` left join APPROVEDHRS on APPROVEDHRS.userid = userinfo.userid ``` to this: ``` left join APPROVEDHRS on APPROVEDHRS.userid = userinfo.userid AND (APPROVEDHRS.DATE >='2014-06-01') AND (APPROVEDHRS.DATE <='2014-06-30') ``` ...and to answer your *new* question (which probably should have been created as a new question on StackOverflow). It depends what the data type of your [hours] field is. I tried to fix up your query as a starting point, but it is a little tricky without knowing what the data types are, etc. 
So it looks like hours is a VARCHAR(?), which seems odd, but here goes. Note that I am assuming that your "hours" field will always be in the format '??:HH:MM.SSS' and that you only want to add the hours and minutes: ``` WITH Data AS ( SELECT DATEDIFF(DAY, '2014-06-01', '2014-06-30') + 1 - DATEDIFF(WEEK, '2014-06-01', '2014-06-30') * 2 - CASE WHEN DATEPART(WEEKDAY, '2014-06-01') = 5 THEN 1 ELSE 0 END - CASE WHEN DATEPART(WEEKDAY, '2014-06-30') = 6 THEN 1 ELSE 0 END AS TotalWorkingDays, COUNT(DISTINCT DATEADD(d, 0,DATEDIFF(d, 0, c.CHECKTIME))) AS WorkingDays, d.deptname, u.BADGENUMBER, u.NAME, CONVERT(INT, SUBSTRING([Hours], 4, 2)) AS [hours], CONVERT(INT, SUBSTRING([Hours], 7, 2)) AS [minutes] FROM USERINFO u LEFT JOIN CHECKINOUT c ON c.USERID = u.USERID LEFT JOIN departments d ON d.deptid = u.DEFAULTDEPTID LEFT JOIN APPROVEDHRS a ON a.userid = u.USERID WHERE d.DEPTNAME = 'xyz' AND c.CHECKTIME >= '2014-06-01' AND c.CHECKTIME <= '2014-06-30' GROUP BY d.deptname, u.BADGENUMBER, u.NAME, CONVERT(INT, SUBSTRING([Hours], 4, 2)), CONVERT(INT, SUBSTRING([Hours], 7, 2))) SELECT Name, BADGENUMBER, deptname, '00:' + CONVERT(VARCHAR(3), SUM([hours])) + ':' + CONVERT(VARCHAR(3), SUM([minutes])) + '.000' AS [Hours], SUM(WorkingDays) AS WorkingDays, TotalWorkingDays FROM Data GROUP BY Name, BADGENUMBER, deptname, TotalWorkingDays; ``` ...and if that works I will be amazed :P
If you are left-joining the ApprovedHrs table, then you'd need to limit (with a Where clause or a join clause) the dates those approved hours came from, so that the July rows are excluded. To clarify this, I would fully qualify the hours field as ApprovedHrs.Hours, so that it's more clear where this comes from (I'm only assuming "hours" comes from ApprovedHrs--correct me if I'm wrong). You're just left-joining on userid, so it looks like ApprovedHrs will bring in everything for that user from ApprovedHrs.--Jim
total hours column returns two outputs instead of one
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am very new to *database development*, so I have some doubts regarding the following example: Function f1() - ***language sql*** ``` create or replace function f1(istr varchar) returns text as $$ select 'hello! '::varchar || istr; $$ language sql; ``` Function f2() - ***language plpgsql*** ``` create or replace function f2(istr varchar) returns text as $$ begin select 'hello! '::varchar || istr; end; $$ language plpgsql; ``` * Both *functions* can be called like `select f1('world')` or `select f2('world')`. * If I call `select f1('world')` the *output* will be: ``` `hello! world` ``` * And the *output* for `select f2('world')`: > ERROR: query has no destination for result data > HINT: If you want to discard the results of a SELECT, use PERFORM instead. > CONTEXT: PL/pgSQL function f11(character varying) line 2 at SQL statement > \*\*\*\*\*\*\*\*\*\* Error \*\*\*\*\*\*\*\*\*\* * I wish to know the difference, and in which situations I should use `language sql` or `language plpgsql`. Any useful links or answers regarding functions will be much appreciated.
## [**SQL functions**](https://www.postgresql.org/docs/current/xfunc-sql.html) ... are the better choice: * For **simple scalar queries**. Not much to plan, better save any overhead. * For **single (or very few) calls per session**. Nothing to gain from plan caching via prepared statements that PL/pgSQL has to offer. See below. * If they are typically called in the context of bigger queries and are simple enough to be [**inlined**](https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions). * For lack of **experience** with any procedural language like PL/pgSQL. Many know SQL well and that's about all you need for SQL functions. Few can say the same about PL/pgSQL. (Though it's rather simple.) * A bit shorter code. No block overhead. ## [**PL/pgSQL functions**](https://www.postgresql.org/docs/current/plpgsql.html) ... are the better choice: * When you need any **procedural elements** or **variables** that are not available in SQL functions, obviously. * For any kind of **dynamic SQL**, where you build and [`EXECUTE`](https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN) statements dynamically. Special care is needed to avoid SQL injection. More details: + [Postgres functions vs prepared queries](https://dba.stackexchange.com/a/49718/3684) * When you have **computations** that can be **reused** in several places and a CTE can't be stretched for the purpose. In an SQL function you don't have variables and would be forced to compute repeatedly or write to a table. This related answer on dba.SE has side-by-side **code examples** for solving the same problem using an SQL function / a plpgsql function / a query with CTEs: + [How to pass a parameter into a function](https://dba.stackexchange.com/a/71442/3684) Assignments are somewhat more expensive than in other procedural languages. Adapt a programming style that doesn't use more assignments than necessary. * When a function cannot be inlined and is called repeatedly. 
Unlike with SQL functions, [**query plans can be cached** for all SQL statements inside a PL/pgSQL functions](https://www.postgresql.org/docs/current/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING); they are treated like **prepared statements**, the plan is cached for repeated calls within the same session (if Postgres expects the cached (generic) plan to perform better than re-planning every time. That's the reason why PL/pgSQL functions are ***typically faster*** after the first couple of calls in such cases. Here is a thread on pgsql-performance discussing some of these items: + [Re: pl/pgsql functions outperforming sql ones?](https://www.postgresql.org/message-id/flat/0238E40E527049828C48675488422F6D@CAPRICA#0238E40E527049828C48675488422F6D@CAPRICA) * When you need to [**trap errors**](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING). * For [**trigger functions**](https://www.postgresql.org/docs/current/plpgsql-trigger.html). * When including DDL statements changing objects or altering system catalogs in any way relevant to subsequent commands - because all statements in SQL functions are parsed at once while PL/pgSQL functions plan and execute each statement sequentially (like a prepared statement). See: + [Why can PL/pgSQL functions have side effect, while SQL functions can't?](https://stackoverflow.com/questions/51004980/why-can-pl-pgsql-functions-have-side-effect-while-sql-functions-cant/51033884#51033884) Also consider: * [PostgreSQL Stored Procedure Performance](https://dba.stackexchange.com/a/8189/3684) --- To actually *return* from a PL/pgSQL function, you could write: ``` CREATE FUNCTION f2(istr varchar) RETURNS text AS $func$ BEGIN RETURN 'hello! 
' || istr; -- defaults to type text anyway END $func$ LANGUAGE plpgsql; ``` There are other ways: * [Can I make a plpgsql function return an integer without using a variable?](https://stackoverflow.com/questions/8169676/can-i-make-a-plpgsql-function-return-an-integer-without-using-a-variable/8169928#8169928) * [The manual on "Returning From a Function"](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING)
[PL/PgSQL is a PostgreSQL-specific procedural language based on SQL](http://www.postgresql.org/docs/current/static/plpgsql.html). It has loops, variables, error/exception handling, etc. Not all SQL is valid PL/PgSQL - as you discovered, for example, you can't use `SELECT` without `INTO` or `RETURN QUERY`. PL/PgSQL may also be used in `DO` blocks for one-shot procedures. [`sql` functions](http://www.postgresql.org/docs/current/static/xfunc-sql.html) can only use pure SQL, but they're often more efficient and they're simpler to write because you don't need a `BEGIN ... END;` block, etc. SQL functions may be inlined, which is not true for PL/PgSQL. People often use PL/PgSQL where plain SQL would be sufficient, because they're used to thinking procedurally. In most cases when you think you need PL/PgSQL you probably actually don't. Recursive CTEs, lateral queries, etc generally meet most needs. For more info ... see the manual.
LANGUAGE SQL vs LANGUAGE plpgsql in PostgreSQL functions
[ "", "sql", "database", "postgresql", "plpgsql", "" ]
I have the following SQL which selects all ManifestoBatches and gives me a count of each batch's items (ManifestoItems): ``` SELECT m.Id, u.UserName, m.TotalRows, m.DateCreated, count(i.Id) as Items FROM ManifestoBatches as m left join ManifestoItems as i on m.Id = i.BatchId inner join AmzUsers as u on m.CreatedBy = u.Id GROUP BY m.Id ORDER BY Items DESC LIMIT 0, 10 ``` This works fine; however, I would like to only count the ManifestoItems which have the property ManifestoItems.ScannerId not equal to null. So, I tried adding this: ``` where NULLIF(i.ScannerId, '') is not null ``` But, this filters the entire result set and only gives me ManifestoBatches where ManifestoItems.ScannerId is not null. **Instead, I want it to get all ManifestoBatches, but only give me a count of ManifestoItems where ManifestoItems.ScannerId is not null.** How can I do this?
Instead of ``` count(i.Id) ``` use ``` SUM(CASE WHEN i.ScannerId IS NOT NULL THEN 1 ELSE 0 END) ```
I think your problem is the `left join`. This produces `NULL` when the values do not match. But it sounds like you have `NULL` values in the field anyway. This may solve your problem: ``` SELECT m.Id, u.UserName, m.TotalRows, m.DateCreated, count(i.Id) as Items FROM ManifestoBatches m join ManifestoItems i on m.Id = i.BatchId inner join AmzUsers u on m.CreatedBy = u.Id WHERE i.ScannerId is not null GROUP BY m.Id ORDER BY Items DESC LIMIT 0, 10; ``` If you still want the `left join`, then move the logic to the `count()`: ``` SELECT m.Id, u.UserName, m.TotalRows, m.DateCreated, sum(i.ScannerId is not null) as Items FROM ManifestoBatches m left join ManifestoItems i on m.Id = i.BatchId inner join AmzUsers u on m.CreatedBy = u.Id GROUP BY m.Id ORDER BY Items DESC LIMIT 0, 10; ```
SQL filter query on joined column
[ "", "mysql", "sql", "join", "" ]
I need to insert 10 billion rows and update their values a few times. Table structure: ``` Column1 Column2 Count 1 1 99 1 2 10003 1 3 1 1 4 23 1 5 9994 ... 99999 1 2 99999 2 2233 99999 3 5904 99999 4 12 99999 5 4598435 ... ``` I need `Column1` to be indexed. In one table `Count` will be an Integer; in another it will be a Double. What database best suits my needs? I was told I should use NoSQL, but there are so many of them.
There is nothing in any mainstream RDBMS that would make this hard or even impossible. All your requirements are **trivial** for any RDBMS. What you need is a single table with a single index on it. This does not stress any system architecturally. Be aware that RAM is likely not going to be enough to cache all the data. This means that every access will hit the disk. You need disks that have enough IOPS.
I would use a database you know well, as long as it can handle your required throughput. So I assume, since you are asking, that your preferred database hasn't met your requirements. If you require high throughput with consistent sub-millisecond lookup latency, take a look at [Aerospike](http://www.aerospike.com), which is used a lot in the AdTech industry. See this [Case Study from AppNexus and Intel](http://aerospike.com/wp-content/uploads/2014/04/Aerospike-AppNexus-SSD-Case-Study_Final.pdf). Aerospike is an open source, distributed, in-memory and/or SSD NoSQL KV database with support for UDFs and secondary indexes.
Insert/Update/Index many rows (10 billion) numbers as values
[ "", "sql", "database", "data-structures", "bigdata", "nosql", "" ]
I am creating a table with a `date` column. I would like to add a check constraint to validate that the date is in the future. ``` create table test_table ( schedule_date date not null, check (schedule_date >= TODAY) ); ``` The above SQL gives me a syntax error. ``` 09:43:37 [CREATE - 0 row(s), 0.000 secs] [Error Code: -201, SQL State: 42000] A syntax error has occurred. ... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.000/0.000 sec [0 successful, 0 warnings, 1 errors] ``` How do I add a constraint on the date column?
You cannot use `TODAY` in a check constraint. As the official documentation states: > Check constraints are defined using search conditions. The search condition cannot contain user-defined routines, subqueries, aggregates, host variables, or rowids. In addition, the condition cannot contain the variant built-in functions CURRENT, USER, SITENAME, DBSERVERNAME, or TODAY. I think a solution could be an insert/update trigger.
As Copilot says, it's simply not permitted by Informix. If any other RDBMS does allow it, I'd be surprised. The inherent ticking time bomb should be obvious when you think about it. The row when inserted meets the constraint, but will fail the constraint as time passes. How is the engine to flag that? The solution is either: * the application responsible for creating the row to be inserted is responsible for pre-validating it, or * create an insert trigger that when fired executes a procedure to validate, and raise the appropriate error if the row fails
Check constraint on date
[ "", "sql", "informix", "check-constraints", "" ]
Let's say I have a table like below: ``` Name Score ----- ----- Don 3 Don 4 Don 0 Pat 3 Cat 5 ``` How to write a query that will filter out Name where the score is 0? I.e.: In the above table, the query must not return the Name "Don" since one of the rows for Don contains 0. (I'm using Oracle db BTW.)
``` select name from your_table group by name having sum(case when score = 0 then 1 else 0 end) = 0 ```
The first solution that usually jumps to ones mind is something with `EXISTS`: ``` SELECT * FROM players p1 WHERE NOT EXISTS ( SELECT 1 FROM players p2 WHERE p1.Name = p2.Name AND p2.Score = 0 ) ``` Here's a [SQLFiddle showing it in action](http://sqlfiddle.com/#!4/228c2/1), returning ``` | NAME | SCORE | |------|-------| | Cat | 5 | | Pat | 3 | ```
Selecting rows where there is repetition in columns
[ "", "sql", "oracle", "" ]
Given this table: ``` CREATE TABLE colors ( image_id int, color char(6) ); INSERT INTO colors (image_id, color) VALUES (1, '22ffcc'), (2, '22ffcc'), (2, '2200cc'), (3, '22ffcc'); ``` [SQL Fiddle](http://sqlfiddle.com/#!2/f9243/2) I want to fetch only those image\_id values among (`2,3`) that have both colors `'22ffcc' and '2200cc'` against that image id. In the given example my desired result is `2`, but it is giving me `2,2,3`. Can we use `and` with `in` somehow? By default it uses `or`.
The typical solution for your problem is to count the distinct values of colors for every image\_id: ``` select image_id from colors where color in ( '22ffcc' , '2200cc' ) and image_id in (2,3) GROUP BY image_id HAVING COUNT(DISTINCT color) = 2; ``` Explanation: to have both colors, the distinct color count must equal the number of values in the `IN` list. You can check the results of aggregate functions with the `HAVING` clause, not with `WHERE`; see the [modified fiddle](http://sqlfiddle.com/#!2/f9243/5)
If you're looking for specific colors that appear on different rows, then you can join the table on itself: ``` select color1.image_id from colors color1 join colors color2 on color1.image_id = color2.image_id where color1.color = '22ffcc' and color2.color = '2200cc' and color1.image_id in (2,3); ```
Using "and" with "in"
[ "", "mysql", "sql", "" ]
I have two MySQL database tables. They have a one-to-one relationship with each other. They are empty, and I can't insert a value into either one, because each has a foreign key to the other. I'm planning to insert values into both at the same time, but I couldn't find which SQL query is necessary. What do you suggest? Thanks. One-to-one relationship example: ``` student: student_id, first_name, last_name, address_id address: address_id, address, city, zipcode, student_id ```
For a 1-to-1 table relationship, you don't have to put each table's key on the other; instead, people usually decide which table holds the FK, and that's all. What you did reflects a many-to-many table relationship, and the right schema for that would be a new table that has FKs to those 2 tables. Details: If 1 student can have MANY addresses and 1 address can house MANY students, then the schema is: ``` student: student_id, first_name, last_name, address_id address: address_id, address, city, zipcode, student_id student_address: student_id (FK), address_id (FK) ``` If 1 student can have only 1 address and 1 address can house only 1 student, then the schema is: ``` student: student_id, first_name, last_name, address_id, address, city, zipcode ```
You don't need address\_Id in the student table. What if the student is homeless? What if a student has more than one address? He/she might have a mailing address, a local address in the college town, and a home address. And even for students with only one address, you can get that address by querying the address table using student\_Id. ``` Select * From addresses where student_id = [Whatever his/her id is] ``` Even more generally, multiple students might live at the same address. Neither table should have a FK to the other. There should be an address table with `addressID`, a Student table with `studentId`, and a third table that contains just `studentId` and `addressId` to indicate the association between one student and one address. The PK for this table should be composite (using both `StudentId` and `AddressId`).
insert values into two tables at the same time - one to one relationship
[ "", "mysql", "sql", "insert", "one-to-one", "" ]
I want to unite the following two queries into one: ``` SELECT pg_get_serial_sequence('purchase_orders', 'id'); SELECT setval('purchase_orders_id_seq', 30000); ``` But if I place the first SELECT into setval's first parameter, I get: ``` SELECT setval(SELECT pg_get_serial_sequence('purchase_orders', 'id'), 30000); ERROR: syntax error at or near "SELECT" SQL state: 42601 Character: 15 ``` How can I pass the SELECT's result ("purchase\_orders\_id\_seq") to setval? EDIT: The reason for this is that I want to use it like a function, where a user only has to enter the table's name and the number the sequence will be set to. ``` FUNCTION set_id_sequence(TEXT table_name, INTEGER sequence); ```
If you want to pass a subquery result as a function argument, you need parentheses around it: ``` SELECT setval((SELECT pg_get_serial_sequence('purchase_orders', 'id')), 30000); ``` But in this case, the SELECT is redundant; you can invoke the function directly: ``` SELECT setval(pg_get_serial_sequence('purchase_orders', 'id'), 30000); ```
## General function and automation All "migration" or "starting database" SQL-script files have some controlled `INSERT` sequence before switching to `serial` automation, so they need a simple command to say "ok, back to standard operation". This generic operation is `SELECT MAX(id)+1 FROM schemaName.tableName`... and, as @NickBarnes showed above, the basic `setval()` operation is `setval(pg_get_serial_sequence('schemaName.tableName', 'idName'), NEWVAL)`, so putting it all together we can automate the task. ### 2018's standardization proposal ``` CREATE FUNCTION std_setmaxval( p_tname text, p_id_name text DEFAULT 'id' ) RETURNS SETOF bigint AS $f$ BEGIN RETURN QUERY EXECUTE format( 'SELECT setval(pg_get_serial_sequence(%L,%L), COALESCE((SELECT MAX(%s)+1 FROM %s), 1) , false)' ,p_tname, p_id_name, p_id_name, p_tname ); END $f$ LANGUAGE PLpgSQL; ``` See [quotation problem/solution](https://dba.stackexchange.com/a/141680/90651) to optimize. **Please review this answer, it is a Wiki (!)**. And update the [standard *snippet* proposal](https://wiki.postgresql.org/wiki/Category:Snippets). --- PS: I don't understand why PostgreSQL doesn't offer a native function for this task... Well, I don't see one in the [*info* functions guide](https://www.postgresql.org/docs/current/static/functions-info.html) or the [*sequence* functions guide](https://www.postgresql.org/docs/current/static/functions-sequence.html).
PostgreSQL increase a table's sequence with one query
[ "", "sql", "postgresql", "sequence", "auto-increment", "" ]
I have thousands of rows in a table. Some rows have similar keywords and can be categorized into the same group. For example: ``` Table : Birds_Name +-------+---------------------+ |ID |Name | +-------+---------------------+ |1 |Blue Peckwood | +-------+---------------------+ |2 |North Peckwood | +-------+---------------------+ |3 |Northern Peckwood | +-------+---------------------+ |4 |Northern Peckwood | +-------+---------------------+ |5 |Red Heron | +-------+---------------------+ |6 |Red Heron | +-------+---------------------+ ``` As for the table above, there should be 2 groups of birds: **Peckwood** and **Heron**. But after I run this MySQL query: ``` SELECT * FROM birds_name WHERE name IN ( SELECT name FROM birds_name GROUP BY name HAVING COUNT(*) > 1 ) ``` this is what I get: ``` +-------+---------------------+ |3 |Northern Peckwood | +-------+---------------------+ |4 |Northern Peckwood | +-------+---------------------+ |5 |Red Heron | +-------+---------------------+ |6 |Red Heron | +-------+---------------------+ ``` Actually, I expect any rows which share a similar string to be grouped together (in this case it's **Peckwood**), so there should be only 2 groups - **Peckwood** and **Heron**. Is it possible to do so? And how do I adapt the MySQL code to achieve it? Regards.
Try this ``` SELECT SUBSTRING_INDEX(name,' ',-1),count(*) FROM birds_name GROUP BY SUBSTRING_INDEX(name,' ',-1) HAVING count(*)>0; ``` [`Manual`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) for SUBSTRING\_INDEX function in mysql.
Can you try this. ``` SELECT count(id),name FROM birds_name group by name having count(id) >1 ``` Thanks [SQL Fiddle](http://sqlfiddle.com/#!2/92759/2)
How to find duplicate rows with similar part of string
[ "", "mysql", "sql", "group-by", "duplicates", "" ]
I want to change the datatype (varchar2 to number) of a column in an Oracle table, and the column is not empty. So what I thought I would do is: create a new column, copy the data from one column to the other, disable/drop the previous column, and rename the new column. To copy data between the columns, I can use: ``` UPDATE TABLE_NAME SET NEW_COLUMN = TO_NUMBER(OLD_COLUMN); ``` But what I want to confirm before doing this is: do I need to specify which row's data goes where, or will it be copied within the same row? What I mean is, do I need to do something like: ``` UPDATE (SELECT TO_NUMBER(OLD_COLUMN) AS OLDISH, NEW_COLUMN AS NEWISH FROM TABLE_NAME A, TABLE_NAME B WHERE A.ID = B.ID) SET NEWISH = OLDISH; ```
The operation will be done on the same row for each row. Be aware that if you do not want to update all of the table's rows, then you need to add a `WHERE` clause.
`UPDATE foo_table SET some_column = another_column` uses `some_column` and `another_column` of the same row. But note that it does so on every row of `foo_table`; make sure it's what you want.
Copy a column to another column within a same table in oracle db. Do I need to specify which data goes where?
[ "", "sql", "database", "oracle", "" ]
This is a recreational pursuit, and is not homework. If you value academic challenges, please read on. A radio quiz show had a segment requesting listeners to call in with words that have their characters in alphabetical order, e.g. "aim", "abbot", "celt", "deft", etc. I got these few examples by a quick Notepad++ (NPP) inspection of a Scrabble dictionary word list. I'm looking for an elegant way in T-SQL to determine if a word qualifies for the list, i.e. all its letters are in alpha order, case insensitive. It seemed to me that there should be some kind of T-SQL algorithm possible that will do a SELECT on a table of English words and return the complete list of all words in the Scrabble dictionary that meet the spec. I've spent considerable time looking at regex strings, but haven't hit on anything that comes even remotely close. I've thought about the obvious looping scenario, but abandoned it for now as "inelegant". I'm looking for your ideas that will obtain the qualifying word list,   preferably using      - a REGEX expression      - a tally-table-based approach      - a scalar UDF that returns 1 if the input word meets the requirement, else 0.      - Other, only limited by your creativity.   But preferably NOT using      - a looping structure      - a recursive solution      - a CLR solution Assumptions/observations:   1. A "word" is defined here as two or more characters. My dictionary shows 55 2-character words, of which only 28 qualify.   2. No word will have more than two consecutive characters that are identical. (If you find one, please point it out.)   3. At 21 characters, "electroencephalograms" is the longest word in my Scrabble dictionary (though why that word is in the Scrabble dictionary escapes me--the board is only a 15-by-15 grid.) Consider 21 as the upper limit on word length.   4. All words LIKE 'Z%' can be dismissed because all you can create is {'Z','ZZ', ... , 'ZZZ...Z'}.   5. 
As the dictionary's words' initial character proceeds through the alphabet, fewer words will qualify.   6. As the word lengths get longer, fewer words will qualify.   7. I suspect that there will be less than 0.2% of my dictionary's 60,387 words that will qualify. For example, I've tried NPP regex searches like "^a[a-z][b-z][b-z][c-z][c-z][d-z][d-z][e-z]" for 9-letter words starting with "a", but the character-by-character alphabetic enforcement is not handled properly. This search will return "abilities", which fails the test with the "i" that follows the "l". There are several free Scrabble word lists available on the web, but Phil Factor gives a really interesting treatment of T-SQL/Scrabble considerations at <https://www.simple-talk.com/sql/t-sql-programming/the-sql-of-scrabble-and-rapping/> which is where I got my word list. Care to give it a shot?
Split the word into individual characters using a [numbers table](http://dataeducation.com/you-require-a-numbers-table/ "You REQUIRE a Numbers table! "). Use the numbers as one set of indices. Use [ROW\_NUMBER](http://msdn.microsoft.com/en-us/library/ms186734.aspx "ROW_NUMBER (Transact-SQL)") to create another set. Compare the two sets of indices to see if they match for every character. If they do, the letters in the word are in alphabetical order. ``` DECLARE @Word varchar(100) = 'abbot'; WITH indexed AS ( SELECT Index1 = n.Number, Index2 = ROW_NUMBER() OVER (ORDER BY x.Letter, n.Number), x.Letter FROM dbo.Numbers AS n CROSS APPLY (SELECT SUBSTRING(@Word, n.Number, 1)) AS x (Letter) WHERE n.Number BETWEEN 1 AND LEN(@Word) ) SELECT Conclusion = CASE COUNT(NULLIF(Index1, Index2)) WHEN 0 THEN 'Alphabetical' ELSE 'Not alphabetical' END FROM indexed ; ``` The `NULLIF(Index1, Index2)` expression does the comparison: it returns NULL if the arguments are equal; otherwise it returns the value of `Index1`. If all indices match, all the results will be NULL and COUNT will return 0, which means the order of letters in the word was alphabetical.
Interesting idea... Here's my take on it. This returns a list of words that are in order, but you could easily return 1 instead. ``` DECLARE @WORDS TABLE (VAL VARCHAR(MAX)) INSERT INTO @WORDS (VAL) VALUES ('AIM'), ('ABBOT'), ('CELT'), ('DAVID') ;WITH CHARS AS ( SELECT VAL AS SOURCEWORD, UPPER(VAL) AS EVALWORD, ASCII(LEFT(UPPER(VAL),1)) AS ASCIICODE, RIGHT(VAL,LEN(UPPER(VAL))-1) AS REMAINS, 1 AS ROWID, 1 AS INORDER, LEN(VAL) AS WORDLENGTH FROM @WORDS UNION ALL SELECT SOURCEWORD, REMAINS, ASCII(LEFT(REMAINS,1)), RIGHT(REMAINS,LEN(REMAINS)-1), ROWID+1, INORDER+CASE WHEN ASCII(LEFT(REMAINS,1)) >= ASCIICODE THEN 1 ELSE 0 END AS INORDER, WORDLENGTH FROM CHARS WHERE LEN(REMAINS)>=1 ), ONLYINORDER AS ( SELECT * FROM CHARS WHERE ROWID=WORDLENGTH AND INORDER=WORDLENGTH ) SELECT SOURCEWORD FROM ONLYINORDER ``` Here it is as a UDF: ``` CREATE FUNCTION dbo.AlphabetSoup (@Word VARCHAR(MAX)) RETURNS BIT AS BEGIN SET @WORD = UPPER(@WORD) DECLARE @RESULT INT ;WITH CHARS AS ( SELECT @WORD AS SOURCEWORD, @WORD AS EVALWORD, ASCII(LEFT(@WORD,1)) AS ASCIICODE, RIGHT(@WORD,LEN(@WORD)-1) AS REMAINS, 1 AS ROWID, 1 AS INORDER, LEN(@WORD) AS WORDLENGTH UNION ALL SELECT SOURCEWORD, REMAINS, ASCII(LEFT(REMAINS,1)), RIGHT(REMAINS,LEN(REMAINS)-1), ROWID+1, INORDER+CASE WHEN ASCII(LEFT(REMAINS,1)) >= ASCIICODE THEN 1 ELSE 0 END AS INORDER, WORDLENGTH FROM CHARS WHERE LEN(REMAINS)>=1 ), ONLYINORDER AS ( SELECT 1 AS RESULT FROM CHARS WHERE ROWID=WORDLENGTH AND INORDER=WORDLENGTH UNION SELECT 0 FROM CHARS WHERE NOT (ROWID=WORDLENGTH AND INORDER=WORDLENGTH) ) SELECT @RESULT = RESULT FROM ONLYINORDER RETURN @RESULT END ```
Selecting Strings With Alphabetized Characters - In SQL Server 2008 R2
[ "", "sql", "regex", "sql-server-2008-r2", "" ]
I am using Netezza to manipulate some data. I am trying to add a column to a table with values that are the results of a computation on other columns. First of all, I ran this sql to create a table to rearrange the order of another table: ``` CREATE TABLE SEQ_6_3_FNN_CID218_ORDERED AS SELECT A.* FROM SEQ_6_3_FNN_CID218 A ORDER BY TIMESTAMP ``` And then, what I need is like this, assuming columns TMP and ATT1 are already there, and I need to insert ATT2: ``` TMP ATT1 ATT2 1 1 NULL 2 4 4-1=3 3 5 5-4=1 4 8 8-5=3 5 9 9-8=1 6 12 12-9=3 ``` What is the sql that can achieve this? Or is there a way this can be achieved by running sql directly on SEQ\_6\_3\_FNN\_CID218, without my ordered CREATE TABLE step? Thanks very much for your help. HELP STILL NEEDED!
What you are looking for here is often referred to as a "calculated column". Netezza does not implement this feature, nor does it implement triggers (another method by which you might achieve the same result). Since Netezza is focused on data warehousing, the sorts of calculations you're talking about are usually done in the ETL process, by the ETL tool. The good news is that you can do this purely through SQL with the LAG function, which is designed to do exactly this. Then, if you like, you can encode that in a view. ``` TESTDB.ADMIN(ADMIN)=> insert into base_table select * from base_ext; INSERT 0 6 TESTDB.ADMIN(ADMIN)=> select * from base_table order by col1; COL1 | COL2 ------+------ 1 | 1 2 | 4 3 | 5 4 | 8 5 | 9 6 | 12 (6 rows) TESTDB.ADMIN(ADMIN)=> select col1, col2, col2 - lag(col2,1,NULL) over ( TESTDB.ADMIN(ADMIN)(> order by col1 asc) col3 from base_table; COL1 | COL2 | COL3 ------+------+------ 1 | 1 | 2 | 4 | 3 3 | 5 | 1 4 | 8 | 3 5 | 9 | 1 6 | 12 | 3 (6 rows) ``` For clarity, the SQL again is: ``` select col1, col2, col2 - lag(col2,1,NULL) over ( order by col1 asc) col3 from base_table; ```
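The LAG arithmetic is easy to prototype outside Netezza; a plain-Python sketch (assuming the rows are already ordered by TMP) reproduces the expected ATT2 column:

```python
def lag_diff(values):
    # Mimics col2 - LAG(col2, 1, NULL) OVER (ORDER BY col1): the first
    # row has no predecessor, so its difference is NULL (None here).
    return [None] + [cur - prev for prev, cur in zip(values, values[1:])]

att1 = [1, 4, 5, 8, 9, 12]
print(lag_diff(att1))  # [None, 3, 1, 3, 1, 3]
```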
SQL Server can't do this "natively", but you could accomplish this with an insert and update trigger that responds to changes to the two columns and updates the third column. Edit -- I stand corrected: SQL Server *can* do this natively. See Amirreza Keshavarz's answer.
How to add a column using sql that in every record the unit is the result of computation of other columns?
[ "", "sql", "insert", "netezza", "" ]
My function is: ``` CREATE OR REPLACE FUNCTION FnUpdateSalegtab09 ( iacyrid Integer,iRepId Integer,iDrId Integer,ivrid Integer,imode smallint,itrno varchar,itrdate timestamp,iacid Integer,ivrno varchar,iSuppId Integer,icustname varchar,inetamt money,idisrate real,idisamt money,iRoundOff real,ijrmid integer,iuserid integer,iuserdtm timestamp,iVSNo integer,iRecdAmt money,icstrate real,icstsaleamt money,icstamt money,itdrate real,itdamt money,icdrate real,icdamt money,iCessRate real,iCessAmt money,iodesc1 varchar,ioamt1 money,iCashCredit boolean,iOrderNo varchar,iOrderDate timestamp,iCustAdd2 varchar,iRemarks varchar,iWhoRetSl boolean,iPatName varchar,iDrName varchar,iFormId integer,iSalesMan varchar,iCFMode smallint,iPatId integer,iStkPtId integer,iDisType smallint,iBranchID integer ) RETURNS void AS 'BEGIN INSERT INTO gtab09 ( acyrid, RepId, DrId, vrid, mode, trno, trdate, acid, vrno, SuppId, custname, netamt, disrate, disamt, RoundOff, jrmid, userid, userdtm, VSNo, RecdAmt, cstrate, cstsaleamt, cstamt, tdrate, tdamt, cdrate, cdamt, CessRate, CessAmt, odesc1, oamt1, CashCredit, OrderNo, OrderDate, CustAdd2, Remarks, WhoRetSl, PatName, DrName, FormId, SalesMan, CFMode,PatId,StkPtId,DisType,BranchID ) values ( iacyrid,iRepId,iDrId,ivrid,imode,itrno,itrdate,iacid,ivrno,iSuppId,icustname,inetamt,idisrate,idisamt,iRoundOff,ijrmid,iuserid,iuserdtm,iVSNo,iRecdAmt,icstrate,icstsaleamt,icstamt,itdrate,itdamt,icdrate,icdamt,iCessRate,iCessAmt,iodesc1,ioamt1,iCashCredit,iOrderNo,iOrderDate,iCustAdd2,iRemarks,iWhoRetSl,iPatName,iDrName,iFormId,iSalesMan,iCFMode,iPatId,iStkPtId,iDisType,iBranchID); END;' LANGUAGE plpgsql VOLATILE COST 100; ``` And I used to call this like: ``` select FnUpdateSalegtab09 (4, 1, 0, 12, 1, '9'::varchar,'2014-07-15'::timestamp, 4048, '9'::varchar, 4048, 'MYCUSTOMER'::varchar, 12::money, 0, 0::money, 0.32, 185, 0, '2014-07-15 11:24:12 AM'::timestamp, 0, 0::money, 0, 0::money, 0::money, 0, 0::money, 0, 0::money, 0, 0::money, ''::varchar, 0::money,
False, ''::varchar, '2014-07-15'::timestamp, ''::varchar, ''::varchar, False, ''::varchar, ''::varchar, 1, ''::varchar, 1,0,1,0,42) ``` The error is: > ``` > ERROR: function fnupdatesalegtab09(integer, integer, integer, integer, integer, unknown, unknown, integer, unknown, integer, unknown, integer, integer, integer, numeric, integer, integer, unknown, integer, integer, integer, integer, integer, integer, integer, integer, integer, integer, integer, unknown, integer, boolean, unknown, unknown, unknown, unknown, boolean, unknown, unknown, integer, unknown, integer, integer, integer, integer, integer) does not exist > LINE 1: select FnUpdateSalegtab09 (4, 1, 0, 12, 1, '9','2014-07-15',... > ^ > HINT: No function matches the given name and argument types. You might need to add explicit type casts. > ********** Error ********** > ```
Your function has a couple of **`smallint`** parameters. But in the call, you are using numeric literals that are presumed to be type `integer`. A string literal or [string constant](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS) (`'123'`) is not typed immediately. It remains type "unknown" until assigned or cast explicitly. However, a numeric literal or **[numeric constant](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-NUMERIC)** is typed immediately. [The manual:](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-NUMERIC) > A numeric constant that contains neither a decimal point nor an > exponent is **initially presumed to be type `integer`** if its value > fits in type `integer` (32 bits); otherwise it is presumed to be type > `bigint` if its value fits in type `bigint` (64 bits); otherwise it is > taken to be type `numeric`. Constants that contain decimal points and/or > exponents are always initially presumed to be type `numeric`. Also see: * [PostgreSQL ERROR: function to\_tsvector(character varying, unknown) does not exist](https://stackoverflow.com/questions/14523624/postgresql-error-function-to-tsvectorcharacter-varying-unknown-does-not-exis/14524599#14524599) ### Solution Add explicit casts for the `smallint` parameters or pass quoted (untyped) literals. ### Demo ``` CREATE OR REPLACE FUNCTION f_typetest(smallint) RETURNS bool AS 'SELECT TRUE' LANGUAGE sql; ``` Incorrect call: ``` SELECT * FROM f_typetest(1); ``` Correct calls: ``` SELECT * FROM f_typetest('1'); SELECT * FROM f_typetest(smallint '1'); SELECT * FROM f_typetest(1::int2); SELECT * FROM f_typetest('1'::int2); ``` *db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_12&fiddle=a5619258db0ff1295033dd43d02a33c2)* Old [sqlfiddle.](http://sqlfiddle.com/#!17/862dc/1)
That error means that a function call is only matched by an existing function if all its arguments are of the same type and passed in the same order. So if the following `f()` function ``` create function f() returns integer as $$ select 1; $$ language sql; ``` is called as ``` select f(1); ``` it will error out with ``` ERROR: function f(integer) does not exist LINE 1: select f(1); ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. ``` because there is no `f()` function that takes an integer as argument. So you need to carefully compare what you are passing to the function with what it is expecting. That long list of table columns looks like bad design.
ERROR: function ... does not exist and HINT: No function matches the given name and argument types
[ "", "sql", "database", "postgresql", "plpgsql", "" ]
Im using the following statement (this is a shortened version as an example) to get results from my Microsoft SQL Express 2012 database: ``` SELECT id, name, city FROM tblContact ORDER BY RAND(xxx) ``` and injecting a seed stored in the session for the `xxx` part so that the results are consistently random for a given session (so when paging through results, the user doesn't see duplicates) **PROBLEM: No matter what the seed is, the results get returned in the same order** I have also tried this: ``` SELECT id, name, city, RAND(xxx) AS OrderValue FROM tblContact ORDER BY OrderValue ``` Both give the same (unexpected result) - am I using this incorrectly?
The value of `rand(seed)` will be the same for the entire query. You may want to use the ID column to generate a random value on a row-by-row basis: ``` SELECT id, name, city, RAND(xxx + id) AS OrderValue FROM tblContact ORDER BY OrderValue ``` However, I developed some functionality in the past where I needed a random order for different sessions, but the same order within a session. At that time I used `HASHBYTES()` and it worked very well: ``` SELECT id, name, city, HASHBYTES('md5',cast(xxx+id as varchar)) AS OrderValue FROM tblContact ORDER BY OrderValue ```
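The per-session stability of the `HASHBYTES()` approach is worth seeing in isolation. This Python sketch (md5 over seed+id, mirroring the answer's expression — not SQL Server itself) shows the order is shuffled yet identical whenever the same seed is used:

```python
import hashlib

def session_order(ids, seed):
    # Sort by md5(seed || id): looks random, but is reproducible for a
    # given seed -- so paging within one session never shows duplicates.
    return sorted(ids, key=lambda i: hashlib.md5(f"{seed}{i}".encode()).hexdigest())

ids = list(range(1, 9))
print(session_order(ids, 42))
print(session_order(ids, 42) == session_order(ids, 42))  # True: stable per seed
print(sorted(session_order(ids, 42)) == ids)             # True: just a permutation
```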
Often, the `newid()` function is used for this purpose: ``` SELECT id, name, city FROM tblContact ORDER BY newid(); ``` I have heard that `rand(checksum(newid()))` actually has better properties as a random number generator: ``` SELECT id, name, city FROM tblContact ORDER BY rand(checksum(newid())); ``` If you want consistent result from one query to the next, then @dimt's solution using `id` or a function of `id`.
T-SQL Randomize order of results using RAND(seed)
[ "", "sql", "t-sql", "random", "sql-server-2012", "" ]
I have 3 tables, with Table B & C referencing Table A via Foreign Key. I want to write a query in PostgreSQL to get all ids from A and also their total occurrences from B & C. ``` a | b | c ----------------------------------- id | txt | id | a_id | id | a_id ---+---- | ---+----- | ---+------ 1 | a | 1 | 1 | 1 | 3 2 | b | 2 | 1 | 2 | 4 3 | c | 3 | 3 | 3 | 4 4 | d | 4 | 4 | 4 | 4 ``` Output desired (just the id from A & total count in B & C) : ``` id | Count ---+------- 1 | 2 -- twice in B 2 | 0 -- occurs nowhere 3 | 2 -- once in B & once in C 4 | 4 -- once in B & thrice in C ``` SQL so far [SQL Fiddle](http://sqlfiddle.com/#!15/25c22/2/0) : ``` SELECT a_id, COUNT(a_id) FROM ( SELECT a_id FROM b UNION ALL SELECT a_id FROM c ) AS union_table GROUP BY a_id ``` The query I wrote fetches from B & C and counts the occurrences. But if the key doesn't occur in B or C, it doesn't show up in the output (e.g. id=2 in output). How can I start my selection from table A & join/union B & C to get the desired output
If the query involves large parts of `b` and / or `c` it is more efficient to aggregate first and join later. I expect these two variants to be considerably faster: ``` SELECT a.id , COALESCE(b.ct, 0) + COALESCE(c.ct, 0) AS bc_ct FROM a LEFT JOIN (SELECT a_id, count(*) AS ct FROM b GROUP BY 1) b ON b.a_id = a.id LEFT JOIN (SELECT a_id, count(*) AS ct FROM c GROUP BY 1) c ON c.a_id = a.id; ``` You need to account for the possibility that some ids from `a` are not present at all in `b` and / or `c`. `count()` never returns `NULL`, but that's cold comfort in the face of `LEFT JOIN`, which leaves you with `NULL` values for missing rows nonetheless. You *must* prepare for `NULL`. Use **[`COALESCE()`](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL)**. Or UNION ALL `a_id` from both tables, aggregate, *then* JOIN: ``` SELECT a.id , COALESCE(ct.bc_ct, 0) AS bc_ct FROM a LEFT JOIN ( SELECT a_id, count(*) AS bc_ct FROM ( SELECT a_id FROM b UNION ALL SELECT a_id FROM c ) bc GROUP BY 1 ) ct ON ct.a_id = a.id; ``` Probably slower. But still faster than solutions presented so far. And you could do without `COALESCE()` and still not lose any rows. You might get occasional `NULL` values for `bc_ct`, in this case.
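The aggregate-first plan can be checked end-to-end against the question's sample data. A sqlite sketch (same idea as the first query above, joining the pre-aggregated counts back to `a`):

```python
import sqlite3

# Rebuild the question's tables in memory and run the aggregate-first query.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a(id INTEGER); INSERT INTO a VALUES (1),(2),(3),(4);
CREATE TABLE b(id INTEGER, a_id INTEGER);
INSERT INTO b VALUES (1,1),(2,1),(3,3),(4,4);
CREATE TABLE c(id INTEGER, a_id INTEGER);
INSERT INTO c VALUES (1,3),(2,4),(3,4),(4,4);
""")
rows = con.execute("""
SELECT a.id, COALESCE(b.ct, 0) + COALESCE(c.ct, 0)
FROM a
LEFT JOIN (SELECT a_id, COUNT(*) AS ct FROM b GROUP BY a_id) b ON b.a_id = a.id
LEFT JOIN (SELECT a_id, COUNT(*) AS ct FROM c GROUP BY a_id) c ON c.a_id = a.id
ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 2), (2, 0), (3, 2), (4, 4)] -- matches the desired output
```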
Another option: ``` SELECT a.id, (SELECT COUNT(*) FROM b WHERE b.a_id = a.id) + (SELECT COUNT(*) FROM c WHERE c.a_id = a.id) FROM a ```
Get count of foreign key from multiple tables
[ "", "sql", "postgresql", "left-join", "aggregate-functions", "correlated-subquery", "" ]
I'm struggling with a thing that in my mind should be achievable but I can't get it to work...here the scenario: I have two tables structured as follows: 1- table where data are logged every 10 minutes: (tab\_cycle) ``` timestamp | value1 | value2 ... | valueN 20140715 10:10 | 10 | 20 ... | x 20140715 10:00 | 14 | 45 ... | x ``` 2 - table where data are logged with an event driven structure (tab\_event) ``` timestamp | descr | value 20140715 10:09 | a | 10 20140715 10:04 | a | 14 20140715 10:00 | a | 11 20140715 09:59 | a | 10 20140715 09:54 | a | 20 ``` Now what I want to achieve **(if it is possible)** without the need of using a cursor being able to create a select statement that is gonna produce the following result: ``` timestamp | value1 | value2 ... | valueN |countEvent 20140715 10:10 | 10 | 20 ... | x | null 20140715 10:00 | 14 | 45 ... | x | 3 ``` so basically count the number of time a selected event with a selected tag is generated within the timestamp and timestamp + 10min. What i tried was the following but without much success: ``` SELECT tab_cycle.timestamp AS startTime, DATEADD(mi, 10, tab_cycle.timestamp) AS endTime, (SELECT COUNT(tab_event.descr) FROM tab_event WHERE tab_event.timestamp BETWEEN tab_cycle.timestamp and DATEADD(mi, 10, tab_cycle.timestamp) AND tab_event.tag LIKE 'A' GROUP BY tag) AS eventCounter FROM tab_cycle ORDER BY timestamp DESC ``` Can anyone tell me what I'm doing wrong? Thanks.
You just need to remove the `group by` in the nested subquery: ``` SELECT tab_cycle.timestamp AS startTime, DATEADD(mi, 10, tab_cycle.timestamp) AS endTime, (SELECT COUNT(tab_event.descr) FROM tab_event WHERE tab_event.timestamp BETWEEN tab_cycle.timestamp and DATEADD(mi, 10, tab_cycle.timestamp) AND tab_event.tag LIKE 'A' ) AS eventCounter FROM tab_cycle ORDER BY timestamp DESC; ``` EDIT: I was actually thinking of adding this code anyway. The answer to your question -- if you are using SQL Server 2012 or later -- is to use `lead()`: ``` SELECT tc.timestamp AS startTime, DATEADD(mi, 10, tc.timestamp) AS endTime, (SELECT COUNT(te.descr) FROM tab_event te WHERE te.timestamp BETWEEN tc.timestamp and tc.next_timestamp AND te.tag LIKE 'A' ) AS eventCounter FROM (SELECT tc.*, LEAD(tc.timestamp) OVER (ORDER BY tc.timestamp) as next_timestamp FROM tab_cycle tc ) tc ORDER BY timestamp DESC; ``` I also added aliases for the table names. These make the query easier to write and read. If you are using an older version of SQL Server, you would do the same thing with a correlated subquery or using `cross apply`.
I may have the datediff backwards ``` select table1.timestamp, table1.value1, table1.value2, table1.valueN , count(tabl2.timestamp) from table1 left join table2 on datediff(mi, table1.timestamp, table2.timestamp) < 10 and table2.timestamp > table1.timestamp group by table1.timestamp, table1.value1, table1.value2, table1.valueN ```
Nested query in a select statement for counting entries based on fields derived from the outer query
[ "", "sql", "sql-server", "t-sql", "" ]
I have 3 tables A, B and C: * Table A is small (~1000 rows). * Table B has ~200,000 rows. * Table C has ~2.2 million rows. I'm running a query like this: ``` SELECT A.Id FROM A, B, C WHERE A.Id = B.SomeId OR (A.Id = C.SomeId AND C.SomeValue = 'X') INTO OUTFILE '/tmp/result.txt'; ``` * `A.Id` is the primary key of table A * `B.SomeId` has an index set up * **Edit**: `C.SomeId` has an index set up * `C.SomeVal` has an index set up too but it's a VARCHAR(1) with only two possible values I thought this would only have to iterate over each `Id` in table A (1000 rows) and then potentially query across the other tables (depending on whether MySQL short circuits, I don't know if it does). But the query seems to hang, or at least it's taking a very long time. Much longer than I would have expected if it only had to iterate 1000 rows. 10 minutes in and the output file is still empty. Let me know if I can provide any more information. ``` my@laptop$ mysql --version mysql Ver 14.14 Distrib 5.5.37, for debian-linux-gnu (i686) using readline 6.3 ``` **Edit**: The result I'm looking for is 'Give me all the Id's in table A where the Id matches B.SomeId OR ~~ELSE~~ the Id matches C.SomeId AND C.SomeValue equals 'X'.
`OR` expressions often make it difficult for MySQL to use indexes. Try changing to a `UNION`: ``` SELECT A.id FROM A JOIN B ON A.id = B.SomeID UNION SELECT A.id FROM A JOIN C ON A.id = C.SomeID WHERE C.SomeValue = 'X' ``` From the [documentation](http://dev.mysql.com/doc/refman/5.5/en/how-to-avoid-table-scan.html): > Minimize the OR keywords in your WHERE clauses. If there is no index that helps to locate the values on both sides of the OR, any row could potentially be part of the result set, so all rows must be tested, and that requires a full table scan. If you have one index that helps to optimize one side of an OR query, and a different index that helps to optimize the other side, use a UNION operator to run separate fast queries and merge the results afterward. Your query is described by the last sentence: you have different indexes for each side of the `OR` query.
Let's go even smaller. Let's say that your tables look like this: > ``` > A.ID > 1 > 2 > > B.SomeID > 1 > 3 > > C.SomeID | C.SomeValue > 1 | X > 2 | X > ``` Now, let's see what your query will do. First, we look to see if A.ID match and B.SomeID match. In the case of A.ID = 1, we have a match! Sql short circuits. This means that if the first part of your `or` is true, sql doesn't evaluate the 2nd part of your `or`. Now, we still have to join with table C. Since there is no join condition, for table C sql matches A.ID with all the columns in table C. Now we need to compare A.ID with the next row in B. Well, 1 <> 3. So, we move on to the second part of the `or`. When C.SomeID = 1, the row is included. When C.SomeID = 2, the row is not included. Your results for A.ID = 1 are: > ``` > A.ID | B.SomeID | C.SomeID | C.SomeValue > 1 | 1 | 1 | X > 1 | 1 | 2 | X > 1 | 3 | 1 | X > ``` This is clearly not the results table that you are looking for. Since you are going to join A with either table B or C, instead of an `or`, you should use a `union` ``` SELECT A.Id FROM A, B WHERE A.Id = B.SomeId Union All Select A.ID From A, C Where A.Id = C.SomeId AND C.SomeValue = 'X' ``` Union all puts the results from the first query into the same results table as the results from the second query. Now, your question says that you only want the A.IDs that are in one table but not the other (or else). There are several ways to do this. In this case, I am going to use a `having` and a subquery. You could also use a `not exists` but I believe that `having` is going to use less resources. ``` Select T.ID From (SELECT A.Id FROM A, B WHERE A.Id = B.SomeId Union All Select A.ID From A, C Where A.Id = C.SomeId AND C.SomeValue = 'X') T Group By T.ID Having count(1) = 1 ``` We only want the Ids that show up exactly one time. This will only work if the id is not repeated in B or C, so keep that in mind. 
Since the condition is based on the aggregate function, count, this stipulation must be in the `having`.
Unusual MySQL behavior with select query
[ "", "mysql", "sql", "" ]
I need a field (column) of my database to update automatically. For example, the field NUMBER for each record incrementing every minute. How can I do it? I am using SQL Server.
You're probably looking for something called SQL Server Agent jobs. Here is a good starting point reference: <http://technet.microsoft.com/en-us/library/ms181153(v=sql.105).aspx> This allows you to run some SQL code on a schedule of your choosing. P.S. If you have access to SQL Server Management Studio, the GUI is much nicer.
You can set up a SQL Agent job to run an update statement once a minute: ``` UPDATE tablename SET NUMBER = NUMBER + 1 ```
How to automatically update database SQL Server?
[ "", "sql", "sql-server", "auto-increment", "" ]
I've spent a good amount of time trying to figure out how to implement a **CASCADE ON DELETE** for recursive primary keys on SQL Server. I've read about triggers, creating temporary tables, etc., but have yet to find an answer that will work with my database design. Here is a Boss/Employee database example that will work for demonstration purposes: ``` TABLE employee id|name |boss_id --|---------|------- 1 |John |1 2 |Hillary |1 3 |Hamilton |1 4 |Scott |2 5 |Susan |2 6 |Seth |2 7 |Rick |5 8 |Rachael |5 ``` As you can see, each employee has a boss that is also an employee. So, there is a PK/FK relationship on id/boss\_id. Here is an (abbreviated) table with their information: ``` TABLE information emp_id|street |phone ------|-----------|----- 2 |blah blah |blah 6 |blah blah |blah 7 |blah blah |blah ``` There is a PK/FK on employee.id/information.emp\_id with a CASCADE ON DELETE. For example, if Rick was fired, we would do this: ``` DELETE FROM employee WHERE id=7 ``` This should delete Rick's rows from both employee and information. Yay cascade! Now, say we've hit hard times and we need to lay off Hamilton and his **entire** department. This means that we would need to remove * Hamilton * Scott * Susan * Seth * Rick * Rachael From both the employee and information tables when we run: ``` DELETE FROM employee WHERE id=3 ``` I tried a simple CASCADE ON DELETE for id/emp\_id, but SQL Server wasn't having it: ``` Introducing FOREIGN KEY constraint 'fk_boss_employee' on table 'employee' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. ``` I was able to use CASCADE ON DELETE on a test database in Access, and it behaved exactly as I wanted it to. Again, I **want** every possible child, grandchild, great-grandchild, etc of a parent to be deleted if their parent, grandparent, great-grandparent, etc is deleted.
When I tried using triggers, I couldn't seem to get it to trigger itself (e.g. when you try to delete Hamilton's employee Susan, first see if Susan has any employees, etc.), let alone going down N levels of employees. So! I think I've provided every detail I can think of. If something still isn't clear, I'll try to improve this description.
The below might work for you (I haven't tested it so it may require some tweaking). Seems like all you have to do is delete the employees from the bottom of the hierarchy before you delete the ones higher-up. Use a CTE to build the delete hierarchy recursively and order the CTE output descending by the hierarchy level of the employee. Then delete in order. ``` CREATE PROC usp_DeleteEmployeeAndSubordinates (@empId INT) AS ;WITH employeesToDelete AS ( SELECT id, CAST(1 AS INT) AS empLevel FROM employee WHERE id = @empId UNION ALL SELECT e.id, etd.empLevel + 1 FROM employee e JOIN employeesToDelete etd ON e.boss_id = etd.id AND e.boss_id != e.id ) SELECT id, ROW_NUMBER() OVER (ORDER BY empLevel DESC) Ord INTO #employeesToDelete FROM employeesToDelete; DECLARE @current INT = 1, @max INT = @@ROWCOUNT; WHILE @current <= @max BEGIN DELETE employee WHERE id = (SELECT id FROM #employeesToDelete WHERE Ord = @current); SET @current = @current + 1; END; GO ```
Necromancing. There's 2 simple solutions. * You can either read Microsoft's sorry-excuse(s) of why they didn't implement this (because it is difficult and time-consuming - and time is money), and explanation of why you don't/shouldn't need it (although you do), and implement the delete-function with a cursor in a stored procedure + because you don't really need delete cascade, because you always have the time to change ALL your and ALL of OTHER people's code (like interfaces to other systems) everywhere, anytime, that deletes an employee (or employees, note: plural) (including all superordinate and subordinate objects [including when a or several new ones are added]) in this database (and any other copies of this database for other customers, especially in production when you don't have access to the database [oh, and on the test system, and the integration system, and local copies of production, test, and integration] or * you can use a proper DBMS that actually supports recursive cascaded deletes, like PostGreSQL (as long as the graph is directed, and non-cyclic; else ERROR on delete). **PS:** That's sarcasm. --- **Note:** As long as your delete does not stem from a cascade, and you just want to perform a delete on a self-referencing table, you can delete any entry, as long as you remove all subordinate objects as well in the in-clause. So to delete such an object, do the following: ``` ;WITH CTE AS ( SELECT id, boss_id, [name] FROM employee -- WHERE boss_id IS NULL WHERE id = 2 -- <== this here is the id you want to delete ! 
UNION ALL SELECT employee.id, employee.boss_id, employee.[name] FROM employee INNER JOIN CTE ON CTE.id = employee.boss_id ) DELETE FROM employee WHERE employee.id IN (SELECT id FROM CTE) ``` Assuming you have the following table structure: ``` IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'dbo.employee') AND type in (N'U')) BEGIN CREATE TABLE dbo.employee ( id int NOT NULL, boss_id int NULL, [name] varchar(50) NULL, CONSTRAINT PK_employee PRIMARY KEY ( id ) ); END GO IF NOT EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'dbo.FK_employee_employee') AND parent_object_id = OBJECT_ID(N'dbo.employee')) ALTER TABLE dbo.employee WITH CHECK ADD CONSTRAINT FK_employee_employee FOREIGN KEY(boss_id) REFERENCES dbo.employee (id) GO IF EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'dbo.FK_employee_employee') AND parent_object_id = OBJECT_ID(N'dbo.employee')) ALTER TABLE dbo.employee CHECK CONSTRAINT FK_employee_employee GO ```
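Outside the database, the "delete leaves first" ordering both answers rely on is just a reversed breadth-first traversal of the boss hierarchy. A Python sketch over the question's sample data (shown for Hillary's department, id 2, which the sample table assigns Scott, Susan and Seth to):

```python
def delete_order(boss_of, root):
    # boss_of maps employee id -> boss id. Returns the subtree under
    # `root` ordered so every subordinate comes before their boss,
    # i.e. a safe bottom-up deletion order.
    children = {}
    for emp, b in boss_of.items():
        if emp != b:  # skip John's self-reference (id 1 -> boss 1)
            children.setdefault(b, []).append(emp)
    order, queue = [], [root]
    while queue:
        emp = queue.pop(0)
        order.append(emp)
        queue.extend(children.get(emp, []))
    return order[::-1]  # leaves first, root last

boss = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2, 7: 5, 8: 5}
print(delete_order(boss, 2))  # [8, 7, 6, 5, 4, 2] -- Rick/Rachael before Susan
```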
SQL Server - Cascading DELETE with Recursive Foreign Keys
[ "", "sql", "sql-server", "recursion", "constraints", "cascade", "" ]
I have a very large set of data as follows: ``` CustomerId char(6) Points int PointsDate date ``` with example data such as: ``` 000021 0 01-JAN-2014 000021 10 02-JAN-2014 000021 20 03-JAN-2014 000021 30 06-JAN-2014 000021 40 07-JAN-2014 000021 10 12-JAN-2014 000034 0 04-JAN-2014 000034 40 05-JAN-2014 000034 20 06-JAN-2014 000034 40 08-JAN-2014 000034 60 10-JAN-2014 000034 80 21-JAN-2014 000034 10 22-JAN-2014 ``` So, the `PointsDate` component is NOT consistent, nor is it contiguous (it's based around some "activity" happening). I am trying to get, for each customer, the total amount of positive and negative differences in points, the number of positive and negative changes, as well as Max and Min...but ignoring the very first instance of the customer - which will always be zero. e.g. ``` CustomerId Pos Neg Count(pos) Count(neg) Max Min 000021 40 30 3 1 40 10 000034 100 90 4 2 80 10 ``` ...but I have not a single clue how to achieve this! I would put it in a cube, but a) there is only a single table and no other references and b) I know almost nothing about cubes!
I'll copy my comment from above: I know literally nothing about cubes, but it sounds like what you're looking for is just a cursor, is it not? I know everyone hates cursors, but that's the best way I know to compare consecutive rows without loading it down onto a client machine (which is obviously worse). I see you mentioned in your response to me that you'd be okay setting it off to run overnight, so if you're willing to accept that sort of performance, I definitely think a cursor will be the easiest and quickest to implement. If this is just something you do here or there, I'd definitely do that. It's nobody's favorite solution, but it'd get the job done. Unfortunately, yeah, at twelve million records, you'll definitely want to spend some time optimizing your cursor. I work frequently with a database that's around that size, and I can only imagine how long it'd take. Although depending on your usage, you might want to filter based on user, in which case the cursor will be easier to write, and I doubt you'll be facing enough records to cause much of a problem. For instance, you could just look at the top twenty users and test their records, then do more as needed.
The problem can be solved in regular TSQL with a common table expression that numbers the lines per customer, along with a self join that compares each row with the previous one; ``` WITH cte AS ( SELECT customerid, points, ROW_NUMBER() OVER (PARTITION BY customerid ORDER BY pointsdate) rn FROM mytable ) SELECT cte.customerid, SUM(CASE WHEN cte.points > old.points THEN cte.points - old.points ELSE 0 END) pos, SUM(CASE WHEN cte.points < old.points THEN old.points - cte.points ELSE 0 END) neg, SUM(CASE WHEN cte.points > old.points THEN 1 ELSE 0 END) [Count(pos)], SUM(CASE WHEN cte.points < old.points THEN 1 ELSE 0 END) [Count(neg)], MAX(cte.points) max, MIN(cte.points) min FROM cte JOIN cte old ON cte.rn = old.rn + 1 AND cte.customerid = old.customerid GROUP BY cte.customerid ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!3/b32cf/3). The query would have been somewhat simplified using SQL Server 2012's more extensive analytic functions.
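The per-customer aggregates can be verified with a small Python sketch of the same consecutive-difference logic (shown here for customer 000034 from the question):

```python
def point_changes(points):
    # points: values ordered by PointsDate; the pairwise diffs mirror
    # the rn = rn + 1 self-join between consecutive rows.
    diffs = [b - a for a, b in zip(points, points[1:])]
    return {
        "pos": sum(d for d in diffs if d > 0),
        "neg": -sum(d for d in diffs if d < 0),
        "count_pos": sum(1 for d in diffs if d > 0),
        "count_neg": sum(1 for d in diffs if d < 0),
        "max": max(points),
        "min": min(points[1:]),  # skip the leading 0 baseline, as in the question
    }

print(point_changes([0, 40, 20, 40, 60, 80, 10]))
# {'pos': 100, 'neg': 90, 'count_pos': 4, 'count_neg': 2, 'max': 80, 'min': 10}
```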
How to find daily differences over a flexible time period?
[ "", "sql", "sql-server", "olap-cube", "" ]
I'm trying to learn Subquerys. I have troubles with this: **The two tables:** ``` CREATE TABLE DEPT (DEPTNO NUMBER(2) CONSTRAINT DEPT_PRIMARY_KEY PRIMARY KEY, LOC varchar2(3)); CREATE TABLE EMP (ENAME varchar2(10), JOB varchar2(9), DEPTNO NUMBER(2) NOT NULL CONSTRAINT EMP_FOREIGN_KEY REFERENCES DEPT (DEPTNO)); ``` I want to get the name (emp.ename) and the job (emp.job) but only where the job also exists in 'CHICAGO'. **This is what I have done:** ``` SELECT emp1.ename, emp1.job FROM emp emp1 WHERE emp1.job EXISTS (SELECT emp2.job FROM emp emp2 FULL JOIN dept ON (emp2.deptno = dept.deptno) WHERE dept.loc = 'CHICAGO'); ``` I always get the "invalid relational operator" error in the line 3. **Example of the outcome:** ``` ENAME | JOB | LOC JONES | SALE | CHICAGO FORD | SALE | NEW YORK //He doesn't sit in CHICAGO but the job also exists in Chicago ```
To simplify your query, you can use: ``` select emp1.ename, emp1.job from EMP emp1 where emp1.deptno in (SELECT DEPT.DEPTNO from DEPT where DEPT.loc = 'CHICAGO'); ``` --- To select name and job from emp1 where emp1's job is same as emp2's job with location Chicago: ``` SELECT emp1.ename, emp1.job FROM emp emp1 WHERE emp1.job IN (SELECT emp2.job FROM emp emp2 FULL JOIN dept ON (emp2.deptno = dept.deptno) WHERE dept.loc = 'CHICAGO'); ``` --- To select name and job from emp1 if there exists atleast one record with location in chicago. ``` SELECT emp1.ename, emp1.job FROM emp emp1 WHERE EXISTS (SELECT emp2.job FROM emp emp2 FULL JOIN dept ON (emp2.deptno = dept.deptno) WHERE dept.loc = 'CHICAGO'); ``` --- In your case, i assume that the first and second query would be more appropriate.
The `exists` operator is applied on a subquery, not a column: ``` SELECT emp1.ename, emp1.job FROM emp emp1 WHERE EXISTS (SELECT emp2.job FROM emp emp2 FULL JOIN dept ON (emp2.deptno = dept.deptno) WHERE dept.loc = 'CHICAGO'); ```
SQL Subquery with JOIN
[ "", "sql", "database", "oracle", "" ]
I have a big database, around 46 GB, in MySQL format, and I managed to convert the whole database to MS SQL except two tables, the biggest ones. When I try to migrate those 2 tables, one by one, after a while I get the error message "The connection has been disabled". I increased the timeout in the SSMA options from 15 to 1440 and decreased the batch size from 1000 to 500, and the same thing happens. The tables have 52 million and 110 million rows (1.5 GB and 6.5 GB). I tried the incremental version, but I don't have a unique id to use. What can I do to migrate them? Thank You
You should be able to use SQL Server Integration Services (SSIS). You can create a data flow that pulls from MySQL and dumps the data into MS SQL Server. You'll need to create a Data Flow Task that includes an OLE DB Source and connect it to an OLE DB Destination.
I had the same problem using SSMA. I managed to migrate 100+ million rows of a table with 40+ columns. I assume you've done the configuration well. You need to ensure that there is no activity in the MSSQL database: no SELECTs against those tables and no other activity. Check the two tables' structures to ensure they are fine. You can run the SSMA project for only those two tables. Hope this helps.
Migrate big tables from MySql to MsSql
[ "", "mysql", "sql", "sql-server", "database-migration", "" ]
I have a sql statement which is trying to retrieve the rows having a count greater than 1 for all the similar rows. In all those rows, one field appears to be holding null value. Case 1 : Let's say the table name is ABC --- BUSINESS\_UNIT, INVOICE, FLAG A 1 (Null) A 1 (Null) --- SQL Statement : SELECT BUSINESS\_UNIT, INVOICE, FLAG from TABLE ABC group by BUSINESS\_UNIT, INVOICE, FLAG having COUNT(\*) > 1 I am expecting it to return no rows as both the flags are basically blank. But it returns the --- BUSINESS\_UNIT, INVOICE, FLAG A 1 (Null) --- Case 2 : TABLE ABC --- BUSINESS\_UNIT, INVOICE, FLAG A 1 (Null) A 1 (Null) B 1 1 B 1 (Null) --- SQL Statement : SELECT BUSINESS\_UNIT, INVOICE, FLAG from TABLE ABC group by BUSINESS\_UNIT, INVOICE, FLAG having COUNT(\*) >1 I am expecting the result to be --- BUSINESS\_UNIT, INVOICE, FLAG B 1 1 --- This is for Oracle 11g. Can someone please help on this and let me know if we can use any of the delivered functions in Oracle? --- After editing the original sql statements. I should actually have removed flag from the Group by and select and added to the count in order to identify multiple flags for similar rows. --- Thanks!
I think I have found what I was intending to look for. SELECT BUSINESS\_UNIT, INVOICE from TABLE ABC group by BUSINESS\_UNIT, INVOICE having COUNT(DISTINCT Flag) >1 Case 1: It returns no rows Case 2 : It returns 1 row --- BUSINESS\_UNIT, INVOICE B 1 --- Thanks Raphael, Dnoeth and Gordon for your help on this!
well, null is also a value, so if you group by a nullable field, null values will be distinct from non null values. If you don't wanna null values, just add a where clause ``` SELECT INVOICE from TABLE ABC where flag is not null group by BUSINESS_UNIT, INVOICE, FLAG having COUNT(*) >1 ```
SQL in Oracle Group by NULL returns multiple rows
[ "", "sql", "oracle11g", "group-by", "crosstab", "" ]
I'm just trying to add in an the newest estimated return date for a car as part of a subquery in my Select statement, I just wanted to know if this was how it is done? I think I heard that Select happens before Order By so wanted to do a quick check. ``` select top 1 ESTIMATE_RETURN_DATE from CHECK.AOS where AOS.AUTO_NO = CHECK_EVENT.AUTONUM_4 ORDER BY REVISION_NO desc ```
The `ORDER BY` is applied **before** `TOP`, so the ordering determines which row is returned. `TOP 1` with `ORDER BY REVISION_NO DESC` returns the row with the highest `REVISION_NO` — the newest revision — so you are using the query correctly.
I was pretty sure, but this explains it: <http://use-the-index-luke.com/sql/partial-results/top-n-queries>. It sorts the rows and then stops once it reaches the number of rows to return.
ORDER BY before SELECT?
[ "", "sql", "sql-server", "" ]
The table will look like this ``` STUDENTID | FRIEND1 | FRIEND2 | ADDRESS | TUITIONPAID ``` And student ID, Friend 1 and friend 2 will be a combination of four, five or six numbers and characters TUITION will be a true/false Of course, STUDENTID should be unique for all rows. Am I using the right types? ``` STUDENTID - Varchar (6) Friend1 - Varchar (6) Friend2 - Varchar (6) Address - Varchar (255) TUITION - BIT (3) ``` And should STUDENTID be "unique" or "primary key"? And lastly, if I am building a table of three million rows, is it recommended to separate into three tables? Or 1 table is good enough?
Let's separate the questions. > Am I using the right types? Yes, from what I understand, you can use the model you created. > And should STUDENTID be "unique" or "primary key"? A primary key is better, and it is unique by definition — useful, since a student may come back to study after a long period away. > And lastly, if I am building a table of three million rows, is it recommended to separate into three tables? Or 1 table is good enough? Size doesn't matter (much). You can have one table for all the rows. Note: if you have a lot of columns, you can think about splitting them into more tables. If you have a lot of rows, use only one table.
You are using the right types except TUITION - BIT (3). Since you want it to be true/false **use BOOLEAN** STUDENTID should definitely be your primary key. It shouldn't be a problem to use 1 table but you could make more. For example: Table 1: STUDENTID 1 - 1,000,000 Table 2: STUDENTID 1,000,001 - 2,000,000 Table 3: STUDENTID 2,000,001 + I still don't see much point in making more tables though. 3M rows are pretty easy for modern computers.
Writing a table for MySQL
[ "", "mysql", "sql", "" ]
i working sqllite.i successfully created sql database and i also can insert some values and show it in listview.now i want to write function witch return title where ServerID like my value i wrote function but this function did not return counter ``` public class DatabaseHandler extends SQLiteOpenHelper { private static final int DATABASE_VERSION = 1; private static final String DATABASE_NAME = "lv_db4"; private static final String TABLE_CONTACTS = "CardTable1"; public static final String KEY_ID = "id"; private static final String KEY_Tittle = "title"; private static final String KEY_Description = "description"; private static final String KEY_Price = "price"; private static final String KEY_Counter = "counter"; private static final String KEY_ServerId = "serverid"; private static final String KEY_Image = "image"; private final ArrayList<Contact> contact_list = new ArrayList<Contact>(); public static SQLiteDatabase db; public DatabaseHandler(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { String CREATE_CONTACTS_TABLE = "CREATE TABLE " + TABLE_CONTACTS + "(" + KEY_ID + " INTEGER PRIMARY KEY," + KEY_Tittle + " TEXT," + KEY_Description + " TEXT," + KEY_Price + " TEXT," + KEY_Counter + " TEXT," + KEY_ServerId + " TEXT," + KEY_Image + " TEXT" + ");"; db.execSQL(CREATE_CONTACTS_TABLE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS " + TABLE_CONTACTS); onCreate(db); } // Adding new contact public void Add_Contact(Contact contact) { db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(KEY_Tittle, contact.getTitle()); values.put(KEY_Description, contact.getDescription()); values.put(KEY_Price, contact.getPrice()); values.put(KEY_Counter, contact.getCounter()); values.put(KEY_ServerId, contact.getServerId()); values.put(KEY_Image, contact.getImage()); db.insert(TABLE_CONTACTS, null, 
values); db.close(); } public void deleteUser(String userName) { db = this.getWritableDatabase(); try { db.delete(TABLE_CONTACTS, "title = ?", new String[] { userName }); } catch (Exception e) { e.printStackTrace(); } finally { db.close(); } } public String GetCounterFromServerID(String value) { db = this.getWritableDatabase(); Cursor cursor = db.rawQuery("select " + KEY_Counter+ " from " + TABLE_CONTACTS + " where serverid like '%" + value + "%'", null); cursor.close(); return KEY_Counter; } public int Update_Contact(Contact contact) { db = this.getWritableDatabase(); ContentValues values = new ContentValues(); values.put(KEY_Counter, contact.getCounter()); // updating row return db.update(TABLE_CONTACTS, values, KEY_ID + " = ?", new String[] { String.valueOf(contact.getID()) }); } // Getting single contact Contact Get_Contact(int id) { SQLiteDatabase db = this.getReadableDatabase(); Cursor cursor = db.query(TABLE_CONTACTS, new String[] { KEY_ID, KEY_Tittle, KEY_Description, KEY_Price, KEY_Counter, KEY_ServerId, KEY_Image }, KEY_ID + "=?", new String[] { String.valueOf(id) }, null, null, null); if (cursor != null) cursor.moveToFirst(); Contact contact = new Contact(cursor.getString(0), cursor.getString(1), cursor.getString(2), cursor.getString(3), cursor.getString(4), cursor.getString(5)); cursor.close(); db.close(); return contact; } public boolean DublicateValues(String S_title) { db = this.getWritableDatabase(); Cursor cursor = db.rawQuery("select * from " + TABLE_CONTACTS + " where title like '%" + S_title + "%'", null); boolean exists = (cursor.getCount() > 0); cursor.close(); return exists; } ``` } my GetCounterFromServerID function return only "counter",i want to return for example 2 or 4 etc i call my function in another activity like this: ``` String counter; counter=dbHandler.GetCounterFromServerID(intentServerId); System.out.println(counter ); dbHandler.close(); ``` Log result is only counter what am i doing wrong? if anyone knows solution please help me
You can do it this way: ``` public String GetCounterFromServerID(String value) { Cursor cursor = db.rawQuery("SELECT " + KEY_Counter + " FROM " + TABLE_CONTACTS + " WHERE serverid like '%" + value + "%'", null); if (!cursor.moveToFirst()) { cursor.close(); return null; } if (-1 == cursor.getColumnIndex(KEY_Counter)) { cursor.close(); return null; } String counter = cursor.getString(cursor.getColumnIndex(KEY_Counter)); cursor.close(); return counter; } ```
This is wrong ``` return KEY_Counter; ``` because that is your String `"counter"`. You should use: ``` String result = "no data found"; if (cursor != null && cursor.moveToFirst()) result = cursor.getString(cursor.getColumnIndex(KEY_Counter)); cursor.close(); return result; ```
Android sqlite select not working
[ "", "android", "sql", "database", "sqlite", "" ]
I have a database table that looks like below. It contains a key (id) that identifies each transaction. Within each transaction, there may be multiple items that were purchased, thus someone with transac 103 has three different id values because they purchased three different items. Here is what I am trying to do. For a given set of conditions, I want the total number of items that were purchased (item qty). Let's say that my conditions are that for stores 20 and 35, AND items 7, 12, and 21, I want to find the total number of purchased items (item qty). When condition x is met, which is the reason for the subquery, sum up the item quantity to get total sales. Can anyone help? ``` transac id item_qty store item 101 1 2 20 13 102 2 1 35 21 103 3 3 35 16 103 4 1 35 12 103 5 1 35 7 104 6 1 15 21 104 7 2 20 7 ``` I have the following query, which is related to my example, but when I use such queries on my data it returns a null value each time. ``` SELECT SUM(Cnt) AS "Sales Count" FROM (SELECT ti.id, SUM(ti.item_qty) AS Cnt FROM dbo.vTransactions1 ti WHERE ti.store IN (20, 35) AND ti.item IN (7, 12, 21) GROUP BY ti.id) inner_query1; ```
One way of doing this would be to `group by store and item` and then calculate the sum. This way you would be able to add more conditions, if required, based on valid combinations of (Store, Item). You have grouped by `id`, which doesn't help, as each row has a unique id, so no groups are formed. For the given condition you can write: ``` ;with CTE as ( select sum(item_qty) as Cnt,store,item from test group by store,item ) select sum (Cnt) as [Sales Count] from CTE where store in (20,35) and item in (7,12,21) ``` [SQL Fiddle Demo here.](http://sqlfiddle.com/#!3/b62c2f/8)
I have no idea why there is a subquery here. Unless I'm missing something, this should work: ``` select sum (item_qty) FROM dbo.vTransactions1 ti WHERE ti.store IN (20, 35) AND ti.item IN (7, 12, 21) ```
Finding the sum of sales when a series of conditions are met
[ "", "sql", "sql-server", "" ]
I have a sales\_cat table, and a user\_cat table. **sales\_cat** * id\_cat * name **user\_cat** * id * id\_user * id\_cat I need to get all the sales\_cat rows, joined with the user\_cat table for a specific user, indicating whether or not that user has the category. For example, for id\_user = 4 it should return: ``` id_cat | name | selected 1 | kids | 1 2 | men | 1 3 | women | 0 ``` Of course, the "selected" field is actually a value that depends on the existence of a linked record in user\_cat. I've set up a table structure in [sqlfiddle](http://sqlfiddle.com/#!2/395ba). My current solution only returns the linked data: ``` SELECT sales_cat.id_cat, sales_cat.name FROM sales_cat LEFT JOIN user_cat ON user_cat.id_cat = sales_cat.id_cat WHERE user_cat.id_user = 4 ``` ...which is returning: ``` id_cat | name 1 | kids 2 | men ``` I'm still missing the "selected" column and the **3 | women** row. Any ideas? Thanks!
Try it like this: ``` SELECT sales_cat.id_cat, sales_cat.name,case when user_cat.id is null then 0 else 1 end as "selected" FROM sales_cat LEFT JOIN user_cat ON user_cat.id_cat = sales_cat.id_cat and user_cat.id_user = 4 ``` <http://sqlfiddle.com/#!2/395ba/25> Thanks @Phil
Try this: ``` select distinct s.id_cat, s.name, case when u.id_user is null then 0 else 1 end selected from sales_cat s left join user_cat u on s.id_cat = u.id_cat and (u.id_user = 4 or u.id_user is null) ``` While the approach is similar to Satson's answer, we move the null check from `user_cat` to `id_user`.
MySQL: Return all rows of table A and true|false if record exists in table B
[ "", "mysql", "sql", "join", "left-join", "" ]
``` CREATE PROCEDURE sp_ViewEffortSummary @UserID INT , @Project_Name nVARCHAR(40), @Date DATETIME AS BEGIN DECLARE @query nVARCHAR(max) DECLARE @T_EmpID nVARCHAR(10) DECLARE @T_ProjName nVARCHAR(40) DECLARE @T_Date nVARCHAR(15) SET @T_EmpID = convert(VARCHAR(10),@UserID); SET @T_ProjName = @Project_Name; SET @T_Date = convert(VARCHAR(15),@Date,111); SET @query = 'select '''+@T_EmpID+''' as EmployeeID'; IF((select Proj_Team_Setup from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',Proj_Team_Setup'; IF((select Infra_Setup from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',Infra_Setup'; IF((select tblPSU.Doc_Work from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblPSU.Doc_Work'; IF((select tblPSU.Rework from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblPSU.Rework'; IF((select Proj_Mgmt_Telcom from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',Proj_Mgmt_Telcom'; IF((select tblPC.Doc_Work from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblPC.Doc_Work'; IF((select tblPC.Rework from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblPC.Rework'; IF((select Cust_Test_Proces from tblKT where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',Cust_Test_Proces'; IF((select tblKT.Doc_Work from tblKT where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblKT.Doc_Work'; IF((select tblKT.Rework from tblKT where Employee_ID = @T_EmpID and 
Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',tblKT.Rework'; SET @query += ' from tblPSU a join tblPC b on a.Employee_ID = b.Employee_ID and a.Project_Name = b.Project_Name and a.Date_of_work = b.Date_of_work join tblKT c on b.Employee_ID = c.Employee_ID and b.Project_Name = c.Project_Name and b.Date_of_work = c.Date_of_work where a.Employee_ID = '+@T_EmpID+' and a.Project_Name = '+@T_ProjName+ ' and a.Date_of_work = '''+@T_Date+'''' IF (@query IS NOT NULL) Exec (@query) END EXEC sp_ViewEffortSummary '2312456','abc','2014-06-17 00:00:00.000' ``` on executing this proc I'm retrieving error as ``` Invalid column name 'abc' and The multi-part identifier "tblPC.Doc_Work" could not be bound ``` Could anyone please help me in solving this ?
There are few issues in your query as: 1. In the very first declaration you have written `@UserID INT` but when executing Sproc you have written it as string as '2312456'. It should be 2312456 as shown below or else change the declaration to `nVARCHAR(10)`. 2. When tables are aliased then use Aliases to refer columns in Select like `a.Proj_Team_Setup`.. 3. As abc is a string correct quotes should be used to repesent it as 'abc' in your final query. Following should work: ``` CREATE PROCEDURE sp_ViewEffortSummary @UserID INT, @Project_Name nVARCHAR(40), @Date DATETIME AS BEGIN DECLARE @query nVARCHAR(max) DECLARE @T_EmpID nVARCHAR(10) DECLARE @T_ProjName nVARCHAR(40) DECLARE @T_Date nVARCHAR(15) SET @T_EmpID = convert(VARCHAR(10),@UserID); SET @T_ProjName = @Project_Name; SET @T_Date = convert(VARCHAR(30),@Date,111); SET @query = 'select '''+@T_EmpID+''' as EmployeeID'; IF((select Proj_Team_Setup from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',a.Proj_Team_Setup'; IF((select Infra_Setup from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',a.Infra_Setup'; IF((select tblPSU.Doc_Work from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',a.Doc_Work'; IF((select tblPSU.Rework from tblPSU where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',a.Rework'; IF((select Proj_Mgmt_Telcom from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',b.Proj_Mgmt_Telcom'; IF((select tblPC.Doc_Work from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',b.Doc_Work'; IF((select tblPC.Rework from tblPC where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and 
Date_of_work = @Date) is not null) SET @query += ',b.Rework'; IF((select Cust_Test_Proces from tblKT where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',c.Cust_Test_Proces'; IF((select tblKT.Doc_Work from tblKT where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',c.Doc_Work'; IF((select tblKT.Rework from tblKT where Employee_ID = @T_EmpID and Project_Name = @T_ProjName and Date_of_work = @Date) is not null) SET @query += ',c.Rework'; SET @query += ' from tblPSU a join tblPC b on a.Employee_ID = b.Employee_ID and a.Project_Name = b.Project_Name and a.Date_of_work = b.Date_of_work join tblKT c on b.Employee_ID = c.Employee_ID and b.Project_Name = c.Project_Name and b.Date_of_work = c.Date_of_work where a.Employee_ID = '''+@T_EmpID+''' and a.Project_Name = '''+@T_ProjName+ ''' and a.Date_of_work = '''+@T_Date+'''' IF (@query IS NOT NULL) Exec (@query) END EXEC sp_ViewEffortSummary 2312456 ,'abc','2014-06-17 00:00:00.000' ```
You use aliases for `tblPSU` and `tblPC`, so you must refer to columns as **`Alias.YourColumnName`** — for example, use `a.Doc_Work` and `b.Doc_Work` instead of `tblPSU.Doc_Work` and `tblPC.Doc_Work`: ``` tblPSU.Doc_Work --> a.Doc_Work tblPC.Doc_Work --> b.Doc_Work tblKT.Rework --> c.Rework tblPC.Rework --> b.Rework ```
Dynamic sql : multi-part identifier could not be found error
[ "", "sql", "sql-server", "" ]
Hi, I am trying to delete all the rows in a table with the query below: ``` db.beginTransaction(); // db is a SQLiteDatabase object int deleted = db.delete("user", null, null); db.endTransaction(); ``` `delete` returns the number of rows deleted, but the rows still exist in the table. Is anything wrong with my delete call?
You need to call `setTransactionSuccessful` to commit all the changes done to the database, before calling `endTransaction`: ``` db.beginTransaction(); int deleted = db.delete("user", null, null); db.setTransactionSuccessful(); db.endTransaction(); ``` Source (Android developer reference): > The changes will be rolled back if any transaction is ended without being marked as clean (by calling setTransactionSuccessful). Otherwise they will be committed.
Just try this ``` public int delete(String tableName) { SQLiteDatabase db = this.getWritableDatabase(); int x = db.delete(tableName, null, null); db.close(); return x; } ```
Android SQLite delete table not actually deleting the rows
[ "", "android", "mysql", "ios", "sql", "sqlite", "" ]
I'm currently encountering an issue with dividing by zero - I'm trying to implement `CASE` to just mark the divided number as "1" however, I'm still getting the error - I'm guessing I'm missing something here? Just not sure what... ``` SELECT CASE WHEN c.PageCount IS NULL OR c.PageCount=0 THEN 1 ELSE c.PageCount END as [PageCount], cast(c.PageCount / CASE WHEN DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) IS NULL OR DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) = 0 THEN 1 ELSE DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate)END as decimal(5,2)) as PagesPerMinute, CASE WHEN DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) >= 30 AND CAST(DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) AS FLOAT)/CAST([PageCount] AS FLOAT) >= 1 THEN CASE WHEN CAST([PageCount] AS FLOAT) * 0.5 /*Half minute*/ > 45 /*minutes*/ THEN 45 ELSE CAST([PageCount] AS FLOAT) * 0.5 /*Half minute*/ END WHEN DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) >= 60 AND CAST(DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) AS FLOAT)/CAST([PageCount] AS FLOAT) < 1 /* 1 minute/page */ THEN 60 ELSE DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) END [MinutesPaid] FROM tbl_Charts c left outer join (select cd.ChartId, count(*) as CodesFound from tbl_ChartCodes cd group by cd.ChartId) cd on cd.ChartId = c.ChartId, tbl_WFChartEvents ev, tbl_Users u, (select evar.ChartId, evar.ActionUserId, ar.ResultDescription, evar.ActionRemark, evar.ActionDate from tbl_WFChartEventActions evar, tbl_WFChartEventActionResults ar where evar.EventId = 201 and evar.ActionResultId = ar.ResultID and evar.EventId = ar.EventID and evar.ActionTypeId = ar.ActionTypeID and evar.EventId = 201) arr WHERE c.ChartId = ev.ChartId and ev.EventId = 201 and ev.EventCreateUserId = u.UserId and arr.ActionUserId = u.UserId and arr.ChartId = c.ChartId order by ev.EventCreateDate ```
**For easier readability/maintainability of division formulas, wrap every divisor with [`NULLIF(x,0)`](http://msdn.microsoft.com/en-us/library/ms177562.aspx).** This is easier than checking for zeros with a CASE statement and ensuring the divisor is identical to the CASE predicate. Everywhere you see `/ [...]`, make it `/ NULLIF( [...] ,0)`. Instead of this: ``` cast( c.PageCount / CASE WHEN (DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) IS NULL OR DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) = 0 ) THEN 1 ELSE DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) END as decimal(5,2) ) as PagesPerMinute, ``` do this: ``` cast( coalesce( c.PageCount / NULLIF(DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate), 0) ,c.PageCount --Return this value if preceding is null or div_by_zero ) as decimal(5,2) ) as PagesPerMinute ```
OK, 3rd time the charm? I thought I found the problem twice, but I think I know what's going on now. All sql I know, you can't use an aliased column else where in the select. By aliasing c.PageCount to [PageCount] I believe you are still using the original c.PageCount without the table alias. To test this out, just change [PageCount] to [PageCount2]. I suspect you will get an error it doesn't recognize the column name. This only 'appears' to work because you reused the already unique column name from the results. What I meant was take this case and replace the [PageCount] with it. ``` CASE WHEN c.PageCount IS NULL OR c.PageCount=0 THEN 1 ELSE c.PageCount END ``` like this ``` CASE WHEN DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) >= 30 AND CAST(DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) AS FLOAT)/CAST(CASE WHEN c.PageCount IS NULL OR c.PageCount=0 THEN 1 ELSE c.PageCount END AS FLOAT) >= 1 THEN CASE WHEN CAST([PageCount] AS FLOAT) * 0.5 /*Half minute*/ > 45 /*minutes*/ THEN 45 ELSE CAST([PageCount] AS FLOAT) * 0.5 /*Half minute*/ END WHEN DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) >= 60 AND CAST(DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) AS FLOAT)/CAST(CASE WHEN c.PageCount IS NULL OR c.PageCount=0 THEN 1 ELSE c.PageCount END AS FLOAT) < 1 /* 1 minute/page */ THEN 60 ELSE DATEDIFF(mi, ev.EventCreateDate, ev.EventCompletionDate) END [MinutesPaid] ```
Divide By Zero Issue
[ "", "sql", "sql-server", "ssms", "" ]
I am trying to create a very simple PostgreSQL function, though I keep getting a very strange syntax error. The syntax I use is different from anything I have seen online (though this is the one the textbook uses), and thus I can't figure out why it fails... This is the SQL: ``` CREATE OR REPLACE FUNCTION gsum(graphID integer) RETURNS integer BEGIN DECLARE total integer DEFAULT 0 SELECT sum(weight) INTO total FROM Edge WHERE gno = graphID RETURN total; END; ``` The error is: ``` ERROR: syntax error at or near "BEGIN" LINE 3: BEGIN ^ ********** Error ********** ERROR: syntax error at or near "BEGIN" SQL state: 42601 Character: 68 ```
Your basic mistakes: * [`DECLARE` must come before `BEGIN`](http://www.postgresql.org/docs/current/interactive/plpgsql-structure.html). * Statements need to be terminated with `;`. * Function body of a plpgsql function is a string and needs to be quoted. Use dollar-quoting to avoid complications with quotes in the body. * Missing keyword `AS`. * Missing language declaration `LANGUAGE plpgsql`. * Type mismatch. * You don't need a default. * This would still return `NULL` if the sum is `NULL`. ``` CREATE OR REPLACE FUNCTION gsum(graphID integer) RETURNS integer AS $func$ DECLARE total integer; BEGIN SELECT sum(weight)::int INTO total FROM edge WHERE gno = graphID; RETURN COALESCE(total, 0); END $func$ LANGUAGE plpgsql; ``` And you'd better use a simple SQL function for this like [@Clodoaldo advised](https://stackoverflow.com/a/24782219/939860). Just add `COALESCE()`.
It can be plain SQL instead of plpgsql: ``` create or replace function gsum(graphid integer) returns bigint as $$ select sum(weight) as total from edge where gno = graphid; $$ language sql; ``` Notice that if `weight` is an integer, `sum` will return `bigint`, not `integer`.
SQL (Postgres) function definition - RETURNS
[ "", "sql", "database", "function", "postgresql", "plpgsql", "" ]
SO bear with me I want to create a view that combines data from 2 tables: RETRY: ``` TaskId status 1 13 2 4 ``` Files ``` FileId(key) TaskId Study 1 1 2.3 2 1 2.3 3 2 4.5 4 2 4.5 ``` I need a joined view: ``` TaskId Study 1 2.3 2 4.5 ``` what I get is: ``` TaskId Study 1 2.3 1 2.3 2 4.5 2 4.5 ``` since task id always belongs to the same study, I need to get only 1 study for each task Id. ``` CREATE VIEW [dbo].[TASK_TO_STUDY] As ( SELECT dbo.RETRY.task_id FROM dbo.RETRY_TASKS dbo.FILES.Study LEFT JOIN dbo.FILES ON dbo.FILES.task_id = dbo.RETRY.task_id ); ```
This might work based on the input data and expected data according to your explanation ``` CREATE VIEW [dbo].[TASK_TO_STUDY] As ( SELECT DISTINCT r.task_id,f.Study FROM dbo.RETRY_TASKS r JOIN dbo.FILES f ON f.task_id = r.task_id ); ```
Group by the task. Then you can use an aggregate function (like `min`) to take a specific study ``` CREATE VIEW [dbo].[TASK_TO_STUDY] As ( SELECT dbo.RETRY.task_id, MIN(dbo.FILES.Study) as Study FROM dbo.RETRY_TASKS LEFT JOIN dbo.FILES ON dbo.FILES.task_id = dbo.RETRY.task_id GROUP BY dbo.RETRY.task_id ); ```
Create a view from 2 tables without repetitions
[ "", "sql", "sql-server", "" ]
I have a table with the below data ``` column1 column2 DIU02 3D ITEM MAINTENANCE DIU02 DISTRIBUTION ITEM UPDATE APPLICATION DIU02 DIU - Distribution Item Update ``` For a given DIU02 value, I want all the data to come back in a single comma-separated column. It's for a SQL Server database. Below is the query: DECLARE @Data VARCHAR(MAX) ``` SELECT @Data = COALESCE(@Data + ',', '') + column2 FROM Table WHERE column1= 'DIU02' ``` I want only distinct data; I used the DISTINCT keyword but I get only one value. Please help!
``` SELECT @Data = COALESCE(@Data + ',', '') + column2 FROM ( SELECT DISTINCT column2 FROM Table WHERE column1= 'DIU02' ) t ```
I bet you're looking for that one (MS SQL): ``` DECLARE @List VARCHAR(8000) SELECT @List = COALESCE(@List + ',', '') + CAST(column2 AS VARCHAR) FROM yourtable WHERE column1 = 'DIU02' SELECT @List ``` [SQL Fiddle](http://www.sqlfiddle.com/#!3/40af5/1)
Using Distinct and COALESCE Together using variable
[ "", "sql", "sql-server", "database", "" ]
I have a scenario with two databases, **DB1** and **DB2**. Both databases have the **same structure**. DB1 contains a table "table1" with an **AUTO INCREMENT PRIMARY KEY** 1,2,3,4,5, and DB2 also contains "table1" with primary key 1,2,3,4,5. When I insert data from DB2 into DB1, the data from DB2 will become 6,7,8,9,10. But I need 1,2,3,4,5 for the data from DB2 and 6,7,8,9,10 as the DB1 data. Can I use this query? ``` mysqldump -n -t -u userB --password=*** db2 | mysql -u userA --password=*** db1 ``` is it possible?
``` DECLARE @MaxId INT SELECT @MaxId = MAX(Id) FROM Db1 SET IDENTITY_INSERT db1 ON INSERT INTO Db1(Id, Column1, Column2, ...) SELECT @MaxId+ ROW_NUMBER()OVER(ORDER BY Id), Column1, Column2, ... FROM db2 SET IDENTITY_INSERT db1 OFF ``` Try this..
**You may use one of these two scenarios** First: * Copy db1.table1 data into a temp table * Truncate db1.table1 * Copy data from db2.table1 to db1.table1 * Copy data from the temp table to db1.table1 Second: * Copy db1.table1 data to db2.table1 * Truncate db1.table1 * Copy data from db2.table1 to db1.table1 **You can use this MySQL command for copying data:** insert into dbDestination.tblDestination (field1, field2, ...) select ... from dbSource.tblSource
database copy with same structure having primary key issues
[ "", "mysql", "sql", "" ]
I would like to filter Name, X combinations for which X=Y is never true. Let's assume the following table: ``` *Name* *X* *Y* A 2 1 A 2 2 <--- fulfills requirement for Name=A, X=2 A 10 1 A 10 2 B 3 1 B 3 3 <--- fulfills requirement for Name=B, X=3 B 1 1 <--- fulfills requirement for Name=B, X=1 B 1 3 ``` So I would like to return the combination Name=A, X=10, for which X=Y is never true. This was my approach (which is syntactically incorrect): ``` SELECT * FROM TABLE WHERE NAME , X NOT IN (SELECT DISTINCT NAME , X FROM TABLE WHERE X=Y) ``` My problem is the WHERE statement, which cannot handle multiple columns this way. Does anyone know how to do this?
Just put the columns into parentheses ``` SELECT * FROM TABLE WHERE (NAME, X) NOT IN (SELECT NAME, X FROM TABLE WHERE X=Y); ``` The above is ANSI standard SQL, but not all DBMSs support this syntax. A `distinct` is not necessary in a sub-query used with `IN` or `NOT IN`. However, `NOT EXISTS` with a correlated sub-query is very often faster than a `NOT IN` condition.
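The `NOT EXISTS` variant the answer mentions works on every mainstream DBMS, including those without row-value `IN` support. A small sketch in Python/SQLite using the question's sample data (the table name `t` is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(name TEXT, x INTEGER, y INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("A", 2, 1), ("A", 2, 2), ("A", 10, 1), ("A", 10, 2),
    ("B", 3, 1), ("B", 3, 3), ("B", 1, 1), ("B", 1, 3),
])

# Keep only the (name, x) groups in which x = y never occurs.
rows = con.execute("""
    SELECT name, x, y FROM t AS o
    WHERE NOT EXISTS (
        SELECT 1 FROM t AS i
        WHERE i.name = o.name AND i.x = o.x AND i.x = i.y
    )
    ORDER BY y
""").fetchall()
print(rows)  # only the (A, 10) rows survive
```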
I use this on SQL Server ``` SELECT * FROM TABLE WHERE (SELECT NAME + ';' + X) NOT IN (SELECT NAME + ';' + X FROM TABLE WHERE X = Y); ```
SQL "IN" statement for multiple columns
[ "", "sql", "distinct", "" ]
I have code written in an ASP file which is not working: ``` Set Conn = Server.CreateObject("ADODB.Connection") Conn.Open "Provider=Microsoft.Jet.Oledb.4.0; Data Source=" & Server.MapPath("dbbb.mdb") Dim strSQL strSQL = "SELECT * FROM people WHERE id =" & Request.QueryString("id") Set rs = Conn.Execute(strSQL) ``` The message I receive is: > Microsoft JET Database Engine error '80040e14' > > Syntax error (missing operator) in query expression 'id ='. > > /testAsp/test3/continue/person.asp, line 26 I have tried a lot and still can't find the problem (and yes, I am a noob in this area). Please help. P.S. I can retrieve a specific record by entering a number instead of `Request.QueryString("id")`
Your ID field will generally be expecting a numeric value, so before executing the database query you should check that the QueryString request actually contains a number. Here is what I do; it will also prevent SQL injection. Create a variable ID and assign the value of the querystring to it. Check the ID value is not empty and is actually a number with `IsNumeric()`. Then, only if both of these are true, execute your database lookup. N.B. don't forget to close your recordset and connection and set them to Nothing. ``` Dim ID ID = Request.QueryString("id") If ID <> "" And IsNumeric(ID) Then Set Conn = Server.CreateObject("ADODB.Connection") Conn.Open "Provider=Microsoft.Jet.Oledb.4.0; Data Source=" & Server.MapPath("dbbb.mdb") Dim strSQL strSQL = "SELECT * FROM people WHERE id =" & ID Set rs = Conn.Execute(strSQL) ' DO YOUR BUSINESS ' rs.Close() Set rs = Nothing Conn.Close() Set Conn = Nothing Else If ID = "" Then Response.Write("ID is missing from the URL") ElseIf Not IsNumeric(ID) Then Response.Write("ID is NOT a number") End If End If ```
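The same validate-then-query pattern can be sketched in Python with SQLite, combined with a bound parameter instead of string concatenation (the table name `people` comes from the question; the `name` column and the `fetch_person` helper are made up for illustration):

```python
import sqlite3

def fetch_person(con, raw_id):
    # Mirror the IsNumeric() check: reject anything that is not a plain number.
    if not raw_id or not raw_id.isdigit():
        return None
    # Bound parameter: the driver handles quoting, so no SQL injection is possible.
    return con.execute(
        "SELECT id, name FROM people WHERE id = ?", (int(raw_id),)
    ).fetchone()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people(id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO people VALUES (1, 'Alice'), (2, 'Bob')")

print(fetch_person(con, "2"))         # the row for id 2
print(fetch_person(con, "1 OR 1=1"))  # rejected before reaching the database
```

In classic ASP the analogous belt-and-braces approach is `IsNumeric()` plus an `ADODB.Command` with parameters, rather than concatenating the querystring into the SQL text.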
The most likely problem here is that you are not handling an empty value for your `id` `QueryString` value. Try checking the `id` field *before* executing the code to read the data from the database. As pointed out by others, be careful with SQL injection attacks. Always either use parameters, or if you wish to continue using in-line SQL, consider: ``` id = Replace(Request.QueryString("id"), "'", "''") ``` **-- EDIT --** Also, as per the post by Barry Dowd, note that the quote-doubling example above is aimed more at character-based and date (and time) fields.
asp db receive "SELECT * FROM people WHERE id =" & Request.QueryString("id")
[ "", "sql", "database", "vbscript", "asp-classic", "request.querystring", "" ]
I want an alternative way to write the following query ``` SELECT COUNT(DISTINCT col1) FROM table. ``` I don't want to use `distinct`. Is there an alternative way?
Try `GROUP BY` in a subquery and `COUNT()` in the outer query. It achieves the same result. ``` SELECT COUNT(*) FROM ( SELECT Col1 FROM Table GROUP BY Col1 ) tbl ```
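A quick way to convince yourself the two forms agree, sketched in Python/SQLite (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(col1 TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("a",), ("a",), ("b",), ("c",), ("c",)])

# COUNT(DISTINCT ...) versus counting the rows of a GROUP BY subquery.
(distinct_count,) = con.execute("SELECT COUNT(DISTINCT col1) FROM t").fetchone()
(grouped_count,) = con.execute(
    "SELECT COUNT(*) FROM (SELECT col1 FROM t GROUP BY col1) AS g"
).fetchone()
print(distinct_count, grouped_count)  # 3 3
```

One caveat worth knowing: if `Col1` contains NULLs, `COUNT(DISTINCT Col1)` ignores them, while `GROUP BY` produces a NULL group that `COUNT(*)` will count, so the two queries can differ by one.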
Try this ``` SELECT COUNT(Col1) FROM (SELECT ROW_NUMBER() OVER (PARTITION BY Col1 ORDER BY Col1) AS RNO, Col1 FROM Table_Name) AS T WHERE RNO = 1 ``` (The derived table needs an alias, `AS T` here, or SQL Server will reject the query.)
Alternative for count distinct
[ "", "sql", "sql-server-2008", "" ]
I have a SQL Server script that I'm using to insert some data into a database. I won't upload the whole script here, for the sake of space and time, but I will include the important bits. So here is the problem: I have a table with a column for some `loginhtml`, and this column is non-nullable. I would like this column to be left blank on this particular add so it can default back to the parent that I'm pointing it at. So here we have the declaration for this important portion: ``` declare @loginTitle varchar(250), @loginHtml varchar(max) ``` And here we have what it will be set to: ``` set @loginHtml = null ``` And here is the insert that is inevitably going to fail: ``` insert dbo.ApplicationLogin(ApplicationID, Title, Html) select @appID, @loginTitle, @loginHtml ``` EDIT: How can I have this script "default" the loginhtml column to whatever the PARENT Application has? Is there some "IF" statement/clause that can be used to accomplish this?
I may not have mentioned this in the initial post, and I apologize for that. The way the database is set up, applications can have parent applications, so the loginhtml can be made to default to the parent application by "technically" skipping the insert for the new application: ``` if(@loginHtml is not null) begin insert dbo.ApplicationLogin(ApplicationID, Title, Html) select @appID, @loginTitle, @loginHtml end ``` This runs successfully and makes "@loginhtml" default to whatever the parent application has set for that value.
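The guard-clause-plus-parent-fallback idea can be sketched end to end in Python/SQLite (the two-table layout, the `parent_id` column, and the `COALESCE` lookup are illustrative assumptions, not the question's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE application(id INTEGER PRIMARY KEY, parent_id INTEGER);
CREATE TABLE application_login(application_id INTEGER, title TEXT, html TEXT NOT NULL);
INSERT INTO application VALUES (1, NULL), (2, 1);
INSERT INTO application_login VALUES (1, 'parent', '<p>parent html</p>');
""")

def add_login(app_id, title, html):
    # Guard clause from the answer: insert no row at all when html is NULL,
    # so the NOT NULL column is never violated.
    if html is not None:
        con.execute("INSERT INTO application_login VALUES (?,?,?)", (app_id, title, html))

add_login(2, "child", None)  # skipped: the child app falls back to its parent

# Resolve the effective html: the app's own row if present, else the parent's.
row = con.execute("""
    SELECT COALESCE(child.html, parent.html)
    FROM application a
    LEFT JOIN application_login child  ON child.application_id = a.id
    LEFT JOIN application_login parent ON parent.application_id = a.parent_id
    WHERE a.id = 2
""").fetchone()
print(row[0])  # the parent's html
```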
As long as the column is non-null, you can't insert a NULL value into it. You could try setting it to a blank string or another default value. You could also select the parent's html and insert that instead. There is no way to skip over inserting a single column within an executed INSERT.
Insert statement with a NON-NULLABLE column
[ "", "sql", "sql-server", "insert", "non-nullable", "" ]