For example, I have two tables. Table agent: ![enter image description here](https://i.stack.imgur.com/Ypcu7.jpg) Table application: ![enter image description here](https://i.stack.imgur.com/adHOL.jpg) 1) ``` select a.agentID, agent_nm from agent a left join application ap on (ap.agentID=a.agentID); ``` I am using a LEFT JOIN in the above query, and the result is ![enter image description here](https://i.stack.imgur.com/dSX0w.jpg) 2) ``` select a.agentID, agent_nm from application ap right join agent a on (ap.agentID=a.agentID); ``` Using a RIGHT JOIN in the above query, I got the same result as with the LEFT JOIN: ![enter image description here](https://i.stack.imgur.com/WlBpM.jpg) So my question is: why does MySQL have both LEFT and RIGHT JOIN? We can retrieve the same data with a LEFT JOIN alone, just by swapping the tables, so why use RIGHT JOIN in our queries at all? In the same way, we could achieve everything using only RIGHT JOIN and never LEFT JOIN. What is the reason for having both in SQL? Can anyone explain the logic?
Not much of a reason, really. In fact, MySQL docs say: > `RIGHT JOIN` works analogously to `LEFT JOIN`. To keep code portable across databases, it is recommended that you use `LEFT JOIN` instead of `RIGHT JOIN`. It's just an option you have, one few people exploit.
It's symmetric; LEFT and RIGHT OUTER JOINs are interchangeable. Most people use LEFT OUTER JOIN because it's easier to think of it as "main table" left join "additional data"...
What is the reason for LEFT and RIGHT JOIN in SQL
[ "", "mysql", "sql", "sql-server", "database", "join", "" ]
I am trying to get a count of the number of times a number (`start_value`) doubles until it reaches a particular value (`end_value`), in the cleanest way possible. Consider the following example: ``` id start_value end_value 1 40 130 2 100 777 3 0.20 2.1 example 1: 40 * 2 = 80 80 * 2 = 160 160 = above value so therefore we had 2 doubles example 2: 100 * 2 = 200 200 * 2 = 400 400 * 2 = 800 800 = above value so we had 3 doubles example 3: 0.20 * 2 = 0.4 0.4 * 2 = 0.8 0.8 * 2 = 1.6 1.6 * 2 = 3.2 3.2 = 4 doubles ```
You want logarithms for this. Specifically, the exact number of times is the log-of-base-2 of the ratio of the two values. You want the next higher integer, so you want to round this up. In Postgres: ``` ceiling(log(2, end_value / start_value)) ```
Divide the end value by the start value to get the factor between them. For example 130/40 = 3.25. Doubling the value once gives a factor 2, and doubling it twice gives a factor 4, and so on. You can use the logarithm for base 2 to calculate how many times you need to double the value to get a specific factor. log2(3.25) = 1.7004397... Then you round that up to get the whole number of times that you need to double it. The binary logarithm can be calculated as log(n) / log(2): ``` select id, ceil(log(end_value / start_value) / log(2)) as times from TheTable ``` Demo: <http://sqlfiddle.com/#!15/90099/4>
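The arithmetic behind both answers is easy to sanity-check outside the database; a minimal Python sketch over the three sample rows from the question:

```python
import math

# (id, start_value, end_value) rows from the question
rows = [(1, 40, 130), (2, 100, 777), (3, 0.20, 2.1)]

# number of doublings = ceiling of log base 2 of (end / start)
doubles = [math.ceil(math.log2(end / start)) for _id, start, end in rows]
print(doubles)  # [2, 3, 4]
```

This matches the hand-worked examples: 2, 3 and 4 doublings respectively.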
Get the count of numbers doubling using SQL?
[ "", "sql", "postgresql", "logarithm", "" ]
I have a primary key unique column ID nchar(10). I'm planning to generate numeric IDs in it. Incremental order is OK, but not necessary. How do I build an optimal generator for this? Should I use a stored procedure? Does MS SQL Server have any features for this?
``` CREATE TABLE Student ( Studentid int IDENTITY (1, 1) NOT NULL PRIMARY KEY, Firstname nvarchar (200) NULL, Lastname nvarchar (200), Email nvarchar (100) NULL ) DECLARE @PkStudentid INT insert into Student (Firstname,Lastname,Email) Values('Vivek', 'Johari', 'vivekjohari@abc.com'); SET @PkStudentid = SCOPE_IDENTITY(); print @PkStudentid ``` This will insert Studentid as 1, the next entry as 2, and so on in increasing order. You need to use the [IDENTITY property](http://technet.microsoft.com/en-US/library/ms186775%28v=SQL.105%29.aspx) in SQL Server. Edited: use [SCOPE\_IDENTITY](http://msdn.microsoft.com/en-IN/library/ms190315.aspx) in SQL Server to get back the latest inserted Studentid.
**For a primary key column you should use an INT column with IDENTITY on.** **Alternate solution:** If you don't want to use the INT data type, the alternative is to create the column with a default value of `LEFT(NEWID(), 10)` and a UNIQUE/PRIMARY KEY constraint on it, because the `NEWID()` function generates a different random string every time. For example: ``` SELECT LEFT(REPLACE(NEWID(),'-',''),(10)) ``` The above query gives a different random string each run. Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!3/6fd0e/1): **OUTPUT** ``` | ID | NAME | |------------|------| | 482E5D4850 | pqr | | 70369ED157 | abc | | 768CC98442 | xyz | ```
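For illustration, here is the same pattern, an auto-assigned integer key plus reading back the generated value, sketched in SQLite via Python. SQLite's `INTEGER PRIMARY KEY` plays the role of `IDENTITY(1,1)`, and `cursor.lastrowid` plays the role of `SCOPE_IDENTITY()`; this is an analogy, not SQL Server syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Student (
        Studentid INTEGER PRIMARY KEY,   -- auto-assigned, like IDENTITY(1,1)
        Firstname TEXT,
        Lastname  TEXT,
        Email     TEXT
    )
""")
cur = conn.execute(
    "INSERT INTO Student (Firstname, Lastname, Email) VALUES (?, ?, ?)",
    ("Vivek", "Johari", "vivekjohari@abc.com"),
)
new_id = cur.lastrowid   # read back the generated key, like SCOPE_IDENTITY()
print(new_id)  # 1
```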
Generate unique value in MS SQL
[ "", "sql", "sql-server", "sql-server-2008", "random", "primary-key", "" ]
I'm using PostgreSQL with pgAdmin, and I was wondering if there is a way to search ALL of the functions of a database for a particular text. Is this possible?
Something like this should work: ``` select proname, prosrc from pg_proc where prosrc like '%search text%'; ``` see [How to display the function, procedure, triggers source code in postgresql?](https://stackoverflow.com/questions/6898453/how-to-display-the-function-procedure-triggers-source-code-in-postgresql)
If schema info is required too (we work with many): ``` select nspname, proname, prosrc from pg_catalog.pg_proc pr join pg_catalog.pg_namespace ns on ns.oid = pr.pronamespace where prosrc ilike '%search text%' ```
Search text of all functions in PGAdmin
[ "", "sql", "postgresql", "pgadmin", "" ]
Consider this query: ``` select count(*) from ( select pickle_id, pickle_code from pickles where pickle_code='DILL' group by pickle_id, pickle_code /* Returns 1000 rows */ minus select pickle_id, pickle_code from relish inner join pickles on relish_id=pickle_id where relish_code ='BURGER' /* Returns 1500 rows */ ) /* Complete query returns 900 rows */ ``` **Question:** How can I write a query that returns the original 1000 rows plus only the 100 that were subtracted by the minus query? The results should total 1100 rows (1000 + the 100 that were subtracted).
Try this ``` select pickle_id, pickle_code from pickles where pickle_code='DILL' group by pickle_id, pickle_code /* Returns 1000 rows */ union all ( select pickle_id, pickle_code from pickles where pickle_code='DILL' group by pickle_id, pickle_code /* Returns 1000 rows */ intersect select pickle_id, pickle_code from relish inner join pickles on relish_id=pickle_id where relish_code ='BURGER' /* Returns 1500 rows */ ) ```
Hmmm, that would be everything before the `minus`: ``` select pickle_id, pickle_code from pickles where pickle_code = 'DILL' group by pickle_id, pickle_code; ``` EDIT: The answer to the question in the comments is: ``` select count(*) from ((select pickle_id, pickle_code from pickles where pickle_code='DILL' group by pickle_id, pickle_code /* Returns 1000 rows */ ) intersect (select pickle_id, pickle_code from relish inner join pickles on relish_id=pickle_id where relish_code ='BURGER' /* Returns 1500 rows */ ) ) p; ```
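The identity both answers rely on is that every row of the first set falls either into the MINUS result or into the INTERSECT result, so the two counts add back up to the first set's count. A toy check in SQLite, which spells Oracle's `MINUS` as `EXCEPT` (the data here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2), (3), (4);   -- stands in for "the 1000 rows"
    INSERT INTO b VALUES (3), (4), (5);        -- stands in for "the 1500 rows"
""")
minus_n = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM a EXCEPT SELECT id FROM b)"
).fetchone()[0]
inter_n = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM a INTERSECT SELECT id FROM b)"
).fetchone()[0]
print(minus_n, inter_n)  # 2 2
```

Here 2 + 2 equals the 4 rows of `a`, just as 900 + 100 equals the question's original 1000.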
How can I include the subtracted rows from this query?
[ "", "sql", "oracle", "oracle11g", "" ]
I want to merge the results from the following queries: ``` Dispenser.includes(:reviews).where.not(reviews: { id: nil }) ``` and ``` Dispenser.includes(:dispenser_reviews).where.not(dispenser_reviews: { id: nil }) ``` I have tried simply placing a `||` between these two queries, but that does not give the expected result. I want to find all Dispensers with a `review` or a `dispenser_review`. So let's say I have the following dispenser ids from each query: `[1, 2, 3]` and `[2, 3, 4]`. The output should be the dispensers represented by the ids `[1, 2, 3, 4]`
You can accomplish that using <https://github.com/activerecord-hackery/squeel> rather than Active Record. It provides the more advanced functionality that Arel does not have out of the box. That being said, your logic is gonna be pretty nasty. If you wanted to get the result set and you didn't mind two queries instead of one, I'd just join the two results with the `+` operator. ``` r1 = Dispenser.includes(:reviews)# ... r2 = Dispenser.includes(:dispenser_reviews)# ... result = r1 + r2 ``` As for a squeel example, it'd be something like: ``` Dispenser.includes{reviews}. includes{dispenser_reviews}. where{(reviews.id.not_eq nil) | (dispenser_reviews.id.not_eq nil)}. references(:all) ```
Joins will do an INNER JOIN and only return dispenser objects that have reviews or dispenser\_reviews. Pipe '|' will get rid of dups. ``` Dispenser.joins(:reviews) | Dispenser.joins(:dispenser_reviews) ``` or to get ids ``` Dispenser.joins(:reviews).pluck(:id) | Dispenser.joins(:dispenser_reviews).pluck(:id) ```
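What Ruby's `|` does to the two result arrays is an order-preserving union with duplicates removed; the same idea, sketched in Python for illustration with the ids from the question:

```python
def union(a, b):
    """Order-preserving union with duplicates removed, like Ruby's Array#|."""
    # dict.fromkeys keeps first-seen order and drops repeats
    return list(dict.fromkeys(a + b))

print(union([1, 2, 3], [2, 3, 4]))  # [1, 2, 3, 4]
```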
Merge the results of two active record queries
[ "", "sql", "ruby-on-rails", "" ]
I have two SQL tables, EmployeeMst and EmployeeCity: ``` EmployeeMst SrNo Name 1 abc 2 xyz 3 pqr 4 def EmployeeCity srno City EmplSrNo 1 Delhi 1,2,3,4 2 Mumbai 2,3,1 3 New York 3,2 ``` I want to get the Employee Name from EmployeeMst with a select query. OUTPUT LIKE BELOW: ``` srno City EmployeeName 1 Delhi abc,xyz,pqr,def 2 Mumbai xyz,pqr,abc 3 New York pqr,xyz ``` There are several rows of data in these tables. Please give me a query for how I can do this. I used `charindex`, but it takes too much time.
I still think that regardless of whether or not the table is old or new, you should take the time to fix poor design sooner rather than later. You are only delaying the inevitable. As such here is something to get you started on a normalised design: ``` -- CREATE CITY TABLE CREATE TABLE dbo.City ( SrNo INT, City VARCHAR(50) ); -- POPULATE FROM EXISTING TABLE INSERT dbo.City (SrNo, City) SELECT SrNo, City FROM dbo.EmployeeCity; -- CREATE AND POPULATE THE CITY-EMPLOYEE JUNCTION TABLE FROM EXISTING -- DATA, USING SELECT ... INTO AND THE XML METHOD TO SPLIT COMMA -- SEPARATED VALUES INTO ROWS SELECT CitySrNo = SrNo, EmployeeSrNo = i.value('.', 'INT') INTO dbo.EmployeeCity2 FROM ( SELECT SrNo, x = CONVERT(XML, '<i>' + REPLACE(EmplSrNo, ',', '</i><i>') + '</i>') FROM dbo.EmployeeCity ) AS t CROSS APPLY t.x.nodes('i') rx (i); -- DROP EXISTING TABLE SO THAT WE CAN CREATE A VIEW OF THE SAME NAME DROP TABLE dbo.EmployeeCity; GO -- CREATE A VIEW THAT REPLICATES THE FORMAT OF THE CURRENT TABLE SO -- THAT EXISTING SELECT QUERIES ARE NOT AFFECTED CREATE VIEW dbo.EmployeeCity AS SELECT c.SrNo, c.City, EmplSrNo = STUFF(( SELECT ',' + CAST(EmployeeSrNo AS VARCHAR(10)) FROM dbo.EmployeeCity2 AS ec WHERE ec.CitySrNo = c.SrNo FOR XML PATH(''), TYPE ).value('.', 'VARCHAR(MAX)'), 1, 1, '') FROM dbo.City AS c; GO -- FINALLY, THE QUERY YOU NEED TO GET THE OUTPUT IN THE QUESTION SELECT c.SrNo, c.City, EmployeeName = STUFF(( SELECT ',' + m.Name FROM dbo.EmployeeCity2 AS ec INNER JOIN EmployeeMst AS m ON m.SrNo = ec.EmployeeSrNo WHERE ec.CitySrNo = c.SrNo FOR XML PATH(''), TYPE ).value('.', 'VARCHAR(MAX)'), 1, 1, '') FROM dbo.City AS c; ``` You will still need to amend how data is inserted/updated/deleted, however there should be many fewer places where this happens than where it is selected, and all your existing select queries are covered by the view.
For further read on a couple of the principles used above to split the comma separated string, and then put it back together again, see: * [Splitting strings the right way - or the next best way](http://sqlperformance.com/2012/07/t-sql-queries/split-strings) * [Splitting Strings: A Follow-up](http://sqlperformance.com/2012/08/t-sql-queries/splitting-strings-follow-up) * [Splitting Strings : Now with less T-SQL](http://sqlperformance.com/2012/08/t-sql-queries/splitting-strings-now-with-less-t-sql) * [String Concatenation with SQL Server](https://stackoverflow.com/a/10381975/1048425)
Use this. **[SQL Fiddle Demo](http://sqlfiddle.com/#!3/43e9ce/2)** ``` CREATE FUNCTION EmployeeName(@Expr1 AS VARCHAR(MAX)) RETURNS VARCHAR(MAX) BEGIN DECLARE @res AS VARCHAR(MAX) SET @res = (SELECT ',' + B.Name AS A FROM (SELECT Split.a.value('.', 'VARCHAR(100)') AS CVS FROM ( SELECT CAST ('<M>' + REPLACE(@Expr1, ',', '</M><M>') + '</M>' AS XML) AS CVS ) AS A CROSS APPLY CVS.nodes ('/M') AS Split(a)) AS A INNER JOIN EmployeeMst AS B ON A.CVS = B.SrNo FOR XML PATH(''), TYPE).value('.', 'varchar(max)') RETURN STUFF(@res, 1,1,'') END SELECT srno, city, dbo.EmployeeName(EmplSrNo) AS EmployeeName FROM EmployeeCity ```
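Both answers ultimately split the comma-separated `EmplSrNo` list and map each id to its name. The transformation itself is easy to see outside T-SQL; a Python sketch over the question's sample data:

```python
employees = {1: "abc", 2: "xyz", 3: "pqr", 4: "def"}   # EmployeeMst
cities = [(1, "Delhi", "1,2,3,4"),                      # EmployeeCity
          (2, "Mumbai", "2,3,1"),
          (3, "New York", "3,2")]

# split the id list, look up each name, and re-join in the same order
result = [(srno, city,
           ",".join(employees[int(e)] for e in empl_csv.split(",")))
          for srno, city, empl_csv in cities]
print(result)
# [(1, 'Delhi', 'abc,xyz,pqr,def'), (2, 'Mumbai', 'xyz,pqr,abc'),
#  (3, 'New York', 'pqr,xyz')]
```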
Replace comma separated values in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I am trying to replace the day part of a date value. For example: ``` select TRUNC(SYSDATE) from dual; ``` Result: 28/11/2014 I want to replace only the 28 with another number value (X), so I can arrive at a result like X/11/2014. Can you please help me? Thanks in advance, Murugan.
``` trunc(sysdate,'MM') + (x-1) ``` would do it. `trunc(sysdate,'MM')` returns the first of the month. Then you add however many days you want to get the date that you want.
``` SELECT TO_DATE('01' || SUBSTR(TO_CHAR(SYSDATE, 'DD/MM/YYYY'),3)) FROM DUAL WHERE TO_CHAR(SYSDATE, 'DD') = '25'; ```
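The accepted idea, truncate to the first of the month and then add `x - 1` days, carries over to ordinary date arithmetic in any language; a Python sketch using the question's date (the value 5 for `x` is arbitrary):

```python
from datetime import date, timedelta

today = date(2014, 11, 28)             # TRUNC(SYSDATE) from the question
x = 5                                  # the replacement day number
first_of_month = today.replace(day=1)  # trunc(sysdate, 'MM')
result = first_of_month + timedelta(days=x - 1)
print(result)  # 2014-11-05
```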
Replacing day part in Date with another number in PLSQL
[ "", "sql", "oracle", "oracle11g", "date-formatting", "" ]
Have no idea why this does not work, but here you go. ``` update delievery set status = 'delivered' where order_num = 'a563' and order_num = 'a109'; ``` If I run this, I get 0 rows updated. Of course I could update each row individually, but if I needed to update more than two things, that would not be very efficient.
You have written your query using `AND`, which means both conditions **must** be true at the same time. The conditions here are `order_num = 'a563'` and `order_num = 'a109'`. If you look at your table, no row in the whole table has both values `a563` and `a109` in the `order_num` column; at any one time it holds only one value. So both conditions are never satisfied *at the same time*, and therefore no row is updated. You should use `OR` instead of `AND` in your query, as it checks whether any one of the conditions is true and then updates. In one row the value of `order_num` will be `a563` and in another it will be `a109`, so both will be updated.
You need to use `OR`. `AND` requires both conditions to be met. ``` update delievery set status = 'delivered' where order_num = 'a563' or order_num = 'a109'; ``` You could also use `IN` like this: ``` update delievery set status = 'delivered' where order_num IN ('a563', 'a109'); ```
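The difference is easy to reproduce; a SQLite sketch with two invented rows shows `AND` matching nothing while `IN` updates both (the table keeps the question's `delievery` spelling):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE delievery (order_num TEXT, status TEXT);
    INSERT INTO delievery VALUES ('a563', 'pending'), ('a109', 'pending');
""")
# AND: no single row can hold both order numbers, so nothing matches
and_count = conn.execute(
    "UPDATE delievery SET status = 'delivered' "
    "WHERE order_num = 'a563' AND order_num = 'a109'").rowcount
# IN (equivalent to the OR form): either value matches, both rows update
in_count = conn.execute(
    "UPDATE delievery SET status = 'delivered' "
    "WHERE order_num IN ('a563', 'a109')").rowcount
print(and_count, in_count)  # 0 2
```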
updating two or more values in a column in sql
[ "", "mysql", "sql", "" ]
Suppose we have a query ``` SELECT * FROM my_table ORDER BY id ``` which results in ``` id | title ----------- 1 | 'ABC' 2 | 'DEF' 3 | 'GHI' ``` How could I modify given select statement to have each row duplicated in the result set like this: ``` id | title ----------- 1 | 'ABC' 1 | 'ABC' 2 | 'DEF' 2 | 'DEF' 3 | 'GHI' 3 | 'GHI' ```
You could cross join to a row generator, the numeric value indicates how many duplicates per original you want. ``` select * from my_table cross join (select null from dual connect by level <= 2) order by id ```
You can use `union all`, but I like using `cross join`: ``` select * from MyTable cross join (select 1 from dual union all select 2 from dual) n order by id; ``` The reason I like the `cross join` is in the case where `MyTable` is really some complicated subquery. Although the query optimizer *might* evaluate it only once, you can't really depend on that fact. So the performance should be better in this case.
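The cross-join trick is portable; here it is in SQLite, where the two-row generator is written with `UNION ALL` (as in the second answer) instead of Oracle's `CONNECT BY`, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER, title TEXT);
    INSERT INTO my_table VALUES (1, 'ABC'), (2, 'DEF'), (3, 'GHI');
""")
# cross join to a 2-row generator: every source row appears twice
rows = conn.execute("""
    SELECT t.id, t.title
    FROM my_table t
    CROSS JOIN (SELECT 1 AS n UNION ALL SELECT 2) gen
    ORDER BY t.id
""").fetchall()
print(rows)
# [(1, 'ABC'), (1, 'ABC'), (2, 'DEF'), (2, 'DEF'), (3, 'GHI'), (3, 'GHI')]
```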
How to duplicate each row in sql query?
[ "", "sql", "oracle11g", "" ]
1. I want to display only the data where the `RequestStatus_hod` is `IN PROCESS` 2. But the query seems to display all of the data by ignoring the where clause 3. As you can see, I've tried to repeat `WHERE RequestStatus_hod = 'IN PROCESS'` but it just displays as `NULL` 4. I've tried to use the temporary column `RequestStatus_hods` and received `an invalid column error` SQL statement: ``` SELECT DISTINCT A.RequestNumber, A.EmployeeId, STUFF((SELECT ', ' + CAST(B.RequestDetailsId AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS RequestDetailIds, STUFF((SELECT ', ' + CAST(B.StockId AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS StockIds, STUFF((SELECT ', ' + CAST(B.RequestQuantity AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS RequestQuantity, STUFF((SELECT ', ' + G.ItemName FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId JOIN [dbo].[ITEM_MASTER] G ON F.ItemId = G.ItemId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS ItemNames, STUFF((SELECT ', ' + CAST(F.StockQuantity AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = 
B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS AvailableQuantity, STUFF((SELECT ', ' + B.RequestStatus_hod FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS RequestStatus_hods -- Temporary Column -- FROM [dbo].[REQUISITION] A JOIN [dbo].[USER] D ON D.EmployeeId = A.EmployeeId JOIN [dbo].[REQUISITION_DETAILS] E ON E.EmployeeId = D.EmployeeId WHERE RequestStatus_hods = 'IN PROCESS' -- Col RequestStatus_hod FROM table REQUISITION_DETAILS -- ``` **Results:** ![enter image description here](https://i.stack.imgur.com/aXCmc.png)
One solution is to repeat the definition for your temporary column inside the `WHERE` clause. SQL Server doesn't support using an alias inside a `WHERE` directly, but it is still smart enough (in most cases) to figure out that the expression does not need to be evaluated twice. So, instead of writing `RequestStatus_hod = 'IN PROCESS'` inside your `WHERE`, just copy and paste the actual definition of the column in place of where it says `RequestStatus_hods`. *However*, an even better choice might be to extract your column into an `APPLY` clause. This way, you know for sure that it won't be run twice. For example: ``` SELECT DISTINCT A.RequestNumber, A.EmployeeId, STUFF((SELECT ', ' + CAST(B.RequestDetailsId AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS RequestDetailIds, STUFF((SELECT ', ' + CAST(B.StockId AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS StockIds, STUFF((SELECT ', ' + CAST(B.RequestQuantity AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS RequestQuantity, STUFF((SELECT ', ' + G.ItemName FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId JOIN [dbo].[ITEM_MASTER] G ON F.ItemId = G.ItemId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND 
RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS ItemNames, STUFF((SELECT ', ' + CAST(F.StockQuantity AS VARCHAR(255)) FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId JOIN [dbo].[STOCK] F ON F.StockId = B.StockId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId -- AND RequestStatus_hod = 'IN PROCESS' FOR XML PATH ('')), 1, 2, '') AS AvailableQuantity, xx.RequestStatus_hods -- Temporary Column -- FROM [dbo].[REQUISITION] A JOIN [dbo].[USER] D ON D.EmployeeId = A.EmployeeId JOIN [dbo].[REQUISITION_DETAILS] E ON E.EmployeeId = D.EmployeeId CROSS APPLY (SELECT STUFF((SELECT ', ' + B.RequestStatus_hod FROM [dbo].[REQUISITION] C JOIN [dbo].[REQUISITION_DETAILS] B ON C.RequestDetailsId = B.RequestDetailsId WHERE C.RequestNumber = A.RequestNumber AND C.EmployeeId = A.EmployeeId FOR XML PATH ('')), 1, 2, '') as RequestStatus_hods) xx WHERE xx.RequestStatus_hods = 'IN PROCESS' -- Col RequestStatus_hod FROM table REQUISITION_DETAILS -- ```
OK, first of all, let's call things by their names: RequestStatus\_hods is not a "temporary column", it's a calculation placed in a column of the query's result set under the alias "RequestStatus\_hods". I don't really understand those solutions involving CROSS JOINs, so I can't assess them, but I guess that performance is not the best in those cases. I have two different approaches for this kind of problem. 1.- Instead of "RequestStatus\_hods" in the where clause, use exactly the same expression that you are using to build the value in the column. It will work, but performance will suffer if the result set contains lots of records. 2.- I surround the query as follows ``` SELECT * FROM ( <YOUR-QUERY> ) AS RESULTSET WHERE RESULTSET.[RequestStatus_hods] = 'IN PROCESS' ``` so your query becomes an internally built temporary table (it depends on how the engine resolves the query), but the effect is that you have the table you want with the fields you want. The performance with this approach is usually not too bad. Hope this helps.
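The derived-table approach is portable across dialects; a cut-down SQLite sketch, where an invented `UPPER(status)` calculation stands in for the question's long STUFF expression, shows filtering on the alias once the query is nested:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE req (id INTEGER, status TEXT);
    INSERT INTO req VALUES (1, 'in process'), (2, 'done');
""")
# Filtering on the alias directly in the same SELECT would fail;
# filtering on the derived table's column works everywhere.
rows = conn.execute("""
    SELECT * FROM (
        SELECT id, UPPER(status) AS status_calc FROM req
    ) AS resultset
    WHERE resultset.status_calc = 'IN PROCESS'
""").fetchall()
print(rows)  # [(1, 'IN PROCESS')]
```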
Is it possible to use temporary column in WHERE clause
[ "", "sql", "sql-server", "t-sql", "where-clause", "" ]
I'm dealing with a table in which a bunch of arbitrary settings are stored as `VARCHAR(255)` values. The particular one I'm tasked with dealing with is a sequence number that needs to be incremented and returned to the caller. (Again, note that the sequence "number" is stored as `VARCHAR`, which is something I don't have any control over). Because it's a sequence number, I don't really want to select and update in separate steps. When I've dealt with this sort of thing in the past with actual numeric fields, my method has been something like ``` UPDATE TABLE SET @SEQ_NUM = VALUE = VALUE + 1 ``` which increments the value and gives me the updated value in one swell foop. I thought in this situation, I'd try the same basic thing with casts: ``` DECLARE @SEQ_NUM VARCHAR(255) UPDATE SOME_TABLE SET @SEQ_NUM = VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR) WHERE NAME = 'SOME_NAME' ``` The actual update works fine so long as I don't try to assign the result to the variable; as soon as I do, I receive the following error: > Msg 549, Level 16, State 1, Line 4 The collation > 'SQL\_Latin1\_General\_CP1\_CI\_AS' of receiving variable is not equal to > the collation 'Latin1\_General\_BIN' of column 'VALUE'. I understand what that means, but I don't understand why it's happening, or by extension, how to remedy the issue. As an aside to fixing the specific error, I'd welcome suggestions for alternative approaches to incrementing a char sequence "number".
From one of the comments, sounds like you may have already hit on this, but here's what I would recommend: ``` UPDATE TABLE SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR) OUTPUT inserted.VALUE WHERE NAME = 'SOME_NAME' ``` This will output the new value like a `SELECT` statement does. You can also cast `inserted.VALUE` to an `int` if you wanted to do that in the SQL. If you wanted to put the value into `@SEQ_NUM` instead of outputing the value from the statement/stored procedure, you can't use a scalar variable, but you can pump it into a table variable, like so: ``` DECLARE @SEQ_NUM AS TABLE ( VALUE VARCHAR(255) ); UPDATE TABLE SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR) OUTPUT inserted.VALUE INTO @SEQ_NUM ( VALUE ) WHERE NAME = 'SOME_NAME' SELECT VALUE FROM @SEQ_NUM ```
Maintaining a sequential number manually is by no means a solution I'd like to work with, but I can understand there might be constraints around this. If you break it down in to 2 steps, then you can work around the issue. Note I've replaced your `WHERE` clause for this example code to work: ``` CREATE TABLE #SOME_TABLE ( [VALUE] VARCHAR(255) ) INSERT INTO #SOME_TABLE ( VALUE ) VALUES ( '12345' ) DECLARE @SEQ_NUM VARCHAR(255) UPDATE #SOME_TABLE SET [VALUE] = CAST(( CAST([VALUE] AS INT) + 1 ) AS VARCHAR(255)) WHERE 1 = 1 SELECT * FROM #SOME_TABLE SELECT @SEQ_NUM = [VALUE] FROM #SOME_TABLE WHERE 1 = 1 SELECT @SEQ_NUM DROP TABLE #SOME_TABLE ```
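SQL Server's `OUTPUT` clause has no direct SQLite equivalent, but the cast-increment-cast round-trip itself, plus the two-step read-back from the second answer, can be sketched like this (the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE settings (name TEXT, value TEXT);  -- sequence stored as text
    INSERT INTO settings VALUES ('SOME_NAME', '41');
""")
# cast to integer, increment, cast back to text, exactly as in the question
conn.execute("""
    UPDATE settings
    SET value = CAST(CAST(value AS INTEGER) + 1 AS TEXT)
    WHERE name = 'SOME_NAME'
""")
seq = conn.execute(
    "SELECT value FROM settings WHERE name = 'SOME_NAME'").fetchone()[0]
print(seq)  # '42' -- still a string, as in the question
```

Note the read-back here is a separate statement, so unlike `OUTPUT` it is not atomic with the update; in a concurrent setting both would need to run inside one transaction.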
SQL - "incrementing" a char value causes collation error
[ "", "sql", "sql-server", "sql-server-2008", "collation", "" ]
``` sel_prod_dt = "select distinct change_app_code, change_number " & " <cfif NDA_check eq " &'"Y"' &"> FROM db.tb tb" &"<cfelse>FROM db.tb2 PC</cfif>" ``` I get the following error: **Syntax error, expected something like a name or a Unicode delimited identifier between the word 'change\_number' and the end of the request.** Could anyone please help me with this?
Why over complicate things? ``` <cfif NDA_check eq "Y"> sel_prod_dt = "select distinct change_app_code, change_number FROM db.tb tb" <cfelse> sel_prod_dt = "select distinct change_app_code, change_number FROM db.tb2 PC" </cfif> ```
One cannot add CFML to a string and then somehow expect it to run! Strings aren't processed until runtime, and the CFML code needs to be compiled before it'll run. Reading this should clarify things: "[The ColdFusion request/response process](http://blog.adamcameron.me/2012/10/the-coldfusion-requestresponse-process.html)" So you need to have complete and syntactically correct CFML in your file *before* it runs. You have not given enough detail in your question to be able to help with the actual code solution for your issue, but basically @Ocssor has probably got it right.
how do i evaluate a cfif with a select statement in a string
[ "", "sql", "database", "coldfusion", "coldfusion-9", "cfml", "" ]
I have a table named ct\_rt with one field, ct. The data in the field is the following: ``` 100 450 -20 500 300 -50 ``` Now I want to get the following result: ``` 100 550 530 1030 1330 1280 ``` which is the running total of the column down the rows. How can I get the desired output? I am using Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production. Please help me. Thank you. :)
This can be done using a window function with a "cumulative sum": ``` select ct, sum(ct) over (order by some_column) as the_sum from ct_rt order by some_column; ``` You *have* to supply a column to sort the result. Rows in a relational database are ***not*** sorted and come in an essentially random order unless you specify an `order by`. The cumulative sum has the same "restriction" and therefore you have to supply an `order by` in the definition of the window function. A good candidate for sorting this result is a timestamp column that records when the row was inserted (or updated). A unique, increasing id column is also a good candidate.
Solution using a correlated subquery (you need an id field though): ``` SELECT ( SELECT sum(ct) FROM ct_rt AS sub WHERE sub.id <= qry.id ) FROM ct_rt AS qry ORDER BY qry.id ``` Edit: This is an alternative solution. The solution from a\_horse\_with\_no\_name will most probably outperform this.
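Both approaches need an ordering column. Adding an `id` column to the question's data, the correlated-subquery form runs on virtually any engine (the window-function form needs a reasonably recent version); a runnable SQLite check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ct_rt (id INTEGER, ct INTEGER);
    INSERT INTO ct_rt VALUES (1,100),(2,450),(3,-20),(4,500),(5,300),(6,-50);
""")
# running total: sum of all rows up to and including the current id
rows = conn.execute("""
    SELECT (SELECT SUM(ct) FROM ct_rt AS sub WHERE sub.id <= qry.id)
    FROM ct_rt AS qry ORDER BY qry.id
""").fetchall()
print([r[0] for r in rows])  # [100, 550, 530, 1030, 1330, 1280]
```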
How to perform vertical addition of column for each two rows in SQL?
[ "", "sql", "oracle", "" ]
I was wondering if someone could cast their eye over the query I am trying to execute; I can't quite think of the best way to do it. I need the Email, Firstname and Surname from the Contact table and the HotlineID and Last Action from the Hotline table. I want to filter on the 'flag' column stored in the Hotline table to only show rows where the value is 1. I have achieved this with this query: ``` select Email, FirstName, Surname, HotlineID, LastAction from Hotline left join contact on contact.companyid=hotline.CompanyID and contact.ContactID=hotline.ContactID where hotline.Flag = 1 ``` Now the bit I can't do. In the Actions table there are 3 columns: 'HotlineID', 'Comment' and 'Date'. The HotlineID in the Actions table is linked to the HotlineID in the Hotlines table. Multiple comments can be added for each Hotline, and the date they are posted is recorded in the Date column. Of the returned rows from the first query, I want to further filter out any rows where the Max Date (last recorded comment) is less than 48 hours behind the current date. I am using 'addwithvalue' in Visual Studio to populate the date variable, but for testing purposes I use '2014-12-04'. I've come up with this, which fails, but I am unsure why: ``` Select Email, FirstName, Surname, hotline.HotlineID, LastAction from Hotline left join Contact on Contact.CompanyID=Hotline.CompanyID and Contact.ContactID=Hotline.ContactID inner join Actions on actions.HotlineID=hotline.HotlineID where hotline.flag=1 and CONVERT(VARCHAR(25), Max(Date), 126) LIKE '2014-12-03%' ``` I'm using SQL Server.
`MAX()` is an aggregate function of a group of rows. Its use would convert your ordinary query into an aggregate query if it appeared in the select list, which does not appear to be what you want. Evidently SQL Server will not accept it at all in your where clause. It seems like you want something like this instead: ``` SELECT Contact.Email, Contact.FirstName, Contact.Surname, recent.HotlineID, Hotline.Action FROM (SELECT HotlineID, MAX([Date]) as maxDate FROM Hotline GROUP BY HotlineID) recent INNER JOIN Hotline ON recent.HotlineId = Hotline.HotlineId LEFT JOIN Contact ON Hotline.HotlineId = Contact.HotlineId WHERE datediff(hour, recent.maxDate, GetDate()) < 48 AND Hotline.Flag = 1 ``` Possibly you want to put the `WHERE` clause inside the subquery. The resulting query would have a slightly different meaning than the one above, and I'm not sure which you really want.
You can try this ``` Select Email, FirstName, Surname, hotline.HotlineID, LastAction from Hotline left join Contact on Contact.CompanyID=Hotline.CompanyID and Contact.ContactID=Hotline.ContactID inner join Actions on actions.HotlineID=hotline.HotlineID where hotline.flag=1 and CONVERT(VARCHAR(25), Max(Date), 126) < CONVERT(VARCHAR(25), GetDate() - 2, 126) ```
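The pattern in the accepted answer, aggregating the latest comment date per hotline in a derived table and then filtering on it, can be sketched in SQLite, where the 48-hour test uses `julianday` arithmetic instead of `DATEDIFF` (the table, column names and dates are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE actions (hotline_id INTEGER, comment_date TEXT);
    INSERT INTO actions VALUES
        (1, datetime('now', '-10 hours')),   -- latest comment is recent: kept
        (1, datetime('now', '-5 days')),
        (2, datetime('now', '-5 days'));     -- latest comment is stale: dropped
""")
# MAX per group in a derived table, then filter on the aggregated value
rows = conn.execute("""
    SELECT hotline_id FROM (
        SELECT hotline_id, MAX(comment_date) AS max_date
        FROM actions GROUP BY hotline_id
    ) AS recent
    WHERE (julianday('now') - julianday(recent.max_date)) * 24 < 48
""").fetchall()
print(rows)  # [(1,)]
```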
Joining tables then filtering on Max Date
[ "", "sql", "sql-server", "join", "maxdate", "" ]
I have the below SQL statement which sorts an address field (address1) using the street name, not the number. This seems to work fine, but I want the street names to appear alphabetically. The ASC at the end of the order by doesn't help. E.g. the Address1 field might contain "5 Elm Close"; a normal sort will order by the number, while the statement below sorts by looking at the 2nd string, "Elm". (Using SQL Server) ``` SELECT tblcontact.ContactID, tblcontact.Forename, tblcontact.Surname, tbladdress.AddressLine1, tbladdress.AddressLine2 FROM tblcontact INNER JOIN tbladdress ON tblcontact.AddressID = tbladdress.AddressID LEFT JOIN tblDonate ON tblcontact.ContactID = tblDonate.ContactID WHERE (tbladdress.CollectionArea = 'Queens Park') GROUP BY tblcontact.ContactID, tblcontact.Forename, tblcontact.Surname, tbladdress.AddressLine1, tbladdress.AddressLine2 ORDER BY REVERSE(LEFT(REVERSE(tbladdress.AddressLine1), charindex(' ', REVERSE(tbladdress.AddressLine1)+' ')-1)) asc ``` Gordon's statement sorts as below ``` 1 Kings Road 10 Olivier Way 11 Albert Street 11 Kings Road 11 Princes Road 120 High Street ```
Try this: I based it off of Gordon's code, but altered it to remove the `LEFT(AddressLine1, 1)` portion - a single-character string could never match the pattern "*n* + *space* + %". This works on my SQL Server 2012 environment: ``` WITH tbladdress AS ( SELECT AddressLine1 FROM (VALUES ('1 Kings Road'),('10 Olivier Way'), ('11 Albert Street')) AS V(AddressLine1) ) SELECT AddressLine1 FROM tbladdress order by (case when tbladdress.AddressLine1 like '[0-9]% %' then substring(tbladdress.AddressLine1, charindex(' ', tbladdress.AddressLine1) + 1, len(tbladdress.AddressLine1)) else tbladdress.AddressLine1 end) ``` This is edited to be more similar to Gordon's code (position of closing parentheses, `substr` instead of `substring`): ``` order by (case when tbladdress.AddressLine1 like '[0-9]% %' then substr(tbladdress.AddressLine1, charindex(' ', tbladdress.AddressLine1) + 1), len(tbladdress.AddressLine1) else tbladdress.AddressLine1 end) ```
If you assume that the street name is the first or second value in a space separated string, you could try: ``` order by (case when left(tbladdress.AddressLine1, 1) like '[0-9]% %' then substr(tbladdress.AddressLine1, charindex(' ', tbladdress.AddressLine1) + 1), len(tbladdress.AddressLine1) ) else tbladdress.AddressLine1 end) ```
sort by second string in database field
[ "", "sql", "sql-server", "" ]
There is a table `job` that contains data as shown below: ``` Id Status ----------- 1 NEW 2 NEW ``` There is a table `item` that contains data as shown below: ``` Id Status JobId --------------------- 1 NEW 1 2 PROCESSED 1 3 NEW 1 4 PROCESSED 2 5 PROCESSED 2 ``` I want to run a query, that will return all Jobs whose "children" all have a status of X Pseudo-SQL: `SELECT * FROM Job WHERE status = 'NEW' AND Items for Job WHERE all items status = PROCESSED` That should return ``` Id Status ----------- 2 NEW ``` Because all of Job 2 items have `status = PROCESSED`. Job 1 does not appear because it has items with the unwanted status `NEW`
``` SELECT * from job where Id not in (SELECT JobId from item where Status <> 'PROCESSED'); ``` This will return all from job where id is not in result of all jobids which have status different from 'PROCESSED'.
``` SELECT j.* FROM Job j WHERE not exists (select 1 from item i where i.JobId = j.id and i.Status != 'PROCESSED') and exists (select 1 from item i where i.JobId = j.id and i.Status = 'PROCESSED') and j.status = 'NEW'; ``` Or ``` SELECT j.* FROM Job j WHERE j.id in (select jobId from ( select jobId, count(distinct status) n_all, count(distinct case when status = 'PROCESSED' then status else null end) n_processed from item group by jobId ) t where n_all = n_processed ) and j.status = 'NEW'; ```
SQL: find all items on left side where right side items all have specific field value
[ "", "mysql", "sql", "" ]
I have the following queries: ``` select year, avg(value) as winter_avg_Miami from AP_DATA where substring(series_id, 8) ='killowatts' and substring(series_id, 3,5) = 'Miami' and period IN ('M10','M11','M12') group by year; select year, avg(value) as winter_avg_notMiami from AP_DATA where substring(series_id, 8) = 'killowatts' and substring(series_id, 3,5) != 'Miami' and period IN ('M10','M11','M12') group by year; ``` What I want to do is instead of getting two different tables, one like: ``` year winter_avg_Miami 2000 28.1 2001 30.2 ``` and another table like: ``` year winter_avg_notMiami 2000 40.1 2001 50.2 ``` I want to just have all the info in one table such as: ``` year winter_avg_Miami winter_avg_notMiami 2000 28.1 40.1 2001 30.2 50.2 ``` How can i write my query in order to accomplish this?
You can use a `case` statement in your `avg` aggregate: ``` select year, avg(case when substring(series_id, 3,5) = 'Miami' then value end) as winter_avg_Miami, avg(case when substring(series_id, 3,5) != 'Miami' then value end) as winter_avg_notMiami from AP_DATA where substring(series_id, 8) ='killowatts' and period IN ('M10','M11','M12') group by year ``` * [Condensed SQL Fiddle Demo](http://sqlfiddle.com/#!2/e9c1c6/1)
``` select year, case when CntMiami = 0 then 0 else TotMiami / CntMiami end ) as AvgMiami, case when CntNotMiami = 0 then 0 else TotNotMiami / CntNotMiamt end ) as AvgNotMiami from ( select year, sum( case when substring(series_id, 3,5) = 'Miami' then value else 0 end ) as TotMiami, sum( case when substring(series_id, 3,5) = 'Miami' then 1 else 0 end ) as CntMiami, sum( case when substring(series_id, 3,5) != 'Miami' then value else 0 end ) as TotNotMiami, sum( case when substring(series_id, 3,5) != 'Miami' then 1 else 0 end ) as CntNotMiami from AP_DATA where substring(series_id, 8) ='killowatts' and period IN ('M10','M11','M12') group by year ) ```
How do I include info from two queries in same resulting table?
[ "", "mysql", "sql", "subquery", "" ]
I want to do a group by on a dataset with a where clause based upon a `datetime`, but I need to return a count of 0 for any users in the `Account` table that do not meet the `where date` requirement. Here is my SQL statement: ``` select a.userid, count(c.codeentryid) from [account] a left join codesentered c on a.userid = c.userid where a.camp = 0 and c.entrydate > '2013-12-03 00:00:00' group by a.userid order by a.userid ``` Currently I get counts for all the users who meet the `entrydate` requirement, but how would I also return the users who don't meet this requirement with a count of 0?
When you are using a `left join`, all conditions on the *second* table should go into the `on` clause. Otherwise, the outer join becomes an inner join. So, try this: ``` select a.userid, count(c.codeentryid) from [account] a left join codesentered c on a.userid = c.userid and c.entrydate > '2013-12-03 00:00:00' where a.camp = 0 group by a.userid order by a.userid; ``` Conditions on the first table, in the `on` clause are basically ignored. A `left join` returns all rows from the first table, even when the `on` clause evaluates to false or `NULL`.
You can include the condition in the `join`. Since it is a `left outer join`, it will always show all records from `account`, and only those of `codesentered` which match the condition: ``` select a.userid, count(c.codeentryid) from [account] a left outer join codesentered c on a.userid = c.userid /* here */ and c.entrydate > '2013-12-03 00:00:00' where a.camp = 0 group by a.userid order by a.userid ```
Group By with Where
[ "", "sql", "t-sql", "" ]
I am working on an online and in-store pizza-ordering project. In this project I have two databases on different servers: one is the server database where online orders are inserted, and the other is a server database which runs only in the stores. My requirement is that whenever an order is placed in the first server database, that same order must be inserted into the other server database automatically, without using triggers. If there is any solution, please let me know. Bhavana Ky
As an alternative to using Linked Servers as suggested elsewhere, there are various other options depending on your replication requirements: * **Log shipping** Allows you to periodically send transaction logs from the primary server to one or more secondary servers. [More info](http://msdn.microsoft.com/en-us/library/ms187103.aspx) * **Database mirroring** Typically used to increase availability of SQL Server databases, automatic failover, etc. [More info](http://msdn.microsoft.com/en-us/library/ms189852.aspx) * **Replication, Change Tracking, Change Data Capture** are features within AlwaysOn availability groups (only SQL Server 2012 or above), to easily automate replication. [More info](http://msdn.microsoft.com/en-us/library/hh403414.aspx) These alternatives help you replicate data at the database level rather than at the application level (you will need custom code, triggers, etc. when replicating data using Linked Servers).
None of the other answers seem to have mentioned this enough but a key part of what you need is a ***trigger***. A trigger allows you to update/insert multiple tables when a condition is met on your original table, in this case the condition would be ***inserted*** Couple this will a ***linked server*** and it should be fairly stright forward to achieve what you need. [Triggers](http://msdn.microsoft.com/en-us/library/ms189799.aspx) [Linked Servers](http://msdn.microsoft.com/en-GB/library/ms188279.aspx)
How to insert records into one server database when records inserted in the another SQL Server database
[ "", "sql", "sql-server", "" ]
Query: ``` SELECT dbo.tblDivision.DivisionName, dbo.tblDistrict.DistrictName, case when Gender = 'Male' then count(Gender) end as male, case when Gender = 'female' then count(Gender) end as female, UnitEName FROM dbo.tblDistrict INNER JOIN dbo.tblThana ON dbo.tblDistrict.DistrictNo = dbo.tblThana.DistrictNo INNER JOIN dbo.tblDivision ON dbo.tblDistrict.DivisionNo = dbo.tblDivision.DivisionNo INNER JOIN dbo.vw_EmpInfo ON dbo.tblThana.ThanaNo = dbo.vw_EmpInfo.PerThanaNo GROUP BY Gender, DistrictName, DivisionName, UnitEName, UnitEAddress ORDER BY DivisionName, DistrictName, UnitEName ``` This gives the result below: ![enter image description here](https://i.stack.imgur.com/6HDfn.jpg) but I want every unit's result in one single row. Maybe I have a problem in my GROUP BY. How should I refactor my query?
If you want both genders in one row, you should not group by gender. Instead you should add up how many male and how many female there are using sum(): ``` SELECT dbo.tblDivision.DivisionName, dbo.tblDistrict.DistrictName, SUM(case when Gender='Male' then 1 else 0 end) as male, SUM(case when Gender='Female' then 1 else 0 end) as female, UnitEName FROM dbo.tblDistrict INNER JOIN dbo.tblThana ON dbo.tblDistrict.DistrictNo = dbo.tblThana.DistrictNo INNER JOIN dbo.tblDivision ON dbo.tblDistrict.DivisionNo = dbo.tblDivision.DivisionNo INNER JOIN dbo.vw_EmpInfo ON dbo.tblThana.ThanaNo = dbo.vw_EmpInfo.PerThanaNo group by DistrictName,DivisionName,UnitEName,UnitEAddress order by DivisionName,DistrictName,UnitEName ```
You have to leave Gender column out of the grouping clause: ``` SELECT dbo.tblDivision.DivisionName, dbo.tblDistrict.DistrictName, COUNT(CASE Gender WHEN 'Male' THEN 1 END) AS male, COUNT(CASE Gender WHEN 'female' THEN 1 END) AS female, UnitEName FROM dbo.tblDistrict INNER JOIN dbo.tblThana ON dbo.tblDistrict.DistrictNo = dbo.tblThana.DistrictNo INNER JOIN dbo.tblDivision ON dbo.tblDistrict.DivisionNo = dbo.tblDivision.DivisionNo INNER JOIN dbo.vw_EmpInfo ON dbo.tblThana.ThanaNo = dbo.vw_EmpInfo.PerThanaNo GROUP BY DistrictName,DivisionName,UnitEName,UnitEAddress ORDER BY DivisionName,DistrictName,UnitEName ```
How to use group by in SQL Server
[ "", "sql", "sql-server", "" ]
I want to print prime numbers between `1 to 50`. But I don't understand what I am doing wrong in my code. After `BEGIN`, SQLDeveloper says I had an error because it expected another sign and not `=` . ``` SET SERVEROUTPUT ON DECLARE i NUMBER; counter NUMBER; n NUMBER; k NUMBER; BEGIN i := 2; counter := 0; n := 50; k := n/2; FOR i IN 1..k LOOP IF (n%i := 0 ) THEN counter := 1; END IF; IF (counter := 0) THEN DBMS_OUTPUT.PUT_LINE(n||' is prime number'); END IF; END LOOP; END; ```
``` SET SERVEROUTPUT ON DECLARE i NUMBER; counter NUMBER; n NUMBER; k NUMBER; BEGIN i := 2; counter := 0; n := 50; k := floor(n/2); FOR i IN 1..k LOOP IF (mod(n, i) = 0 ) THEN counter := 1; END IF; IF (counter = 0) THEN DBMS_OUTPUT.PUT_LINE(n||' is prime number'); END IF; END LOOP; END; ``` `k := n/2;` -- added FLOOR (k is NUMBER, by default it's NUMBER(38, max_scale)) `IF (n%i := 0 ) THEN` -> `IF (mod(n, i) = 0 ) THEN` Oracle has the MOD function for remainder, and to compare you need to use `=`; `:=` is for assignment. ``` DECLARE counter NUMBER; k NUMBER; BEGIN FOR n IN 2..50 LOOP counter := 0; k := floor(n/2); FOR i IN 2..k LOOP IF (mod(n, i) = 0 ) THEN counter := 1; END IF; END LOOP; IF (counter = 0) THEN DBMS_OUTPUT.PUT_LINE(n||' is prime number'); END IF; END LOOP; END; ``` The outer loop starts at 2 so that 1 is not reported as a prime.
In your IF clause you are assigning the value instead of comparison. You are using := operator, where you shd be using = It shud be like (IF counter = 0) then ...... Also I dont think n%i would work, you could do IF (trunc(n) = n) then ... or IF (mod (n,i) =0) then ...
PLSQL Printing prime numbers
[ "", "sql", "oracle", "plsql", "" ]
I got some problem after using an inner join; this is my query. ``` select sum(total) as total,id from ( select * from midsemester union select * from endsemester ) as vij group by id ``` These are the tables: **Table name: midsemester** ``` ID Grade ---------- 1 10 2 30 3 40 ``` **Table name: endsemester** ``` ID Grade ---------- 1 30 2 40 3 20 ``` and I need to sum these tables into a new table called total. This is the result I was hoping for: **Table name: total** ``` ID Grade ---------- 1 40 2 70 3 60 ``` I actually just need to sum up the grade values by id into the 3rd table. I tried several times using an inner join and it seemed to work, but when I re-checked the 3rd table, the records were gone. Would appreciate any help, thanks! :)
Try this: ``` insert into total(ID,Grade) select midsemester.ID,(midsemester.grade+endsemester.grade)as total from midsemester inner join endsemester on midsemester.ID=endsemester.ID ``` Use ORDER BY ID if you want the data in increasing ID order.
Try this: ``` INSERT INTO total (Id, Grade) SELECT id, SUM(grade) AS total FROM (SELECT id, grade FROM midsemester UNION ALL SELECT id, grade FROM endsemester ) AS vij GROUP BY id ```
Records gone after inner join
[ "", "mysql", "sql", "sql-server", "join", "sum", "" ]
I have to find out the total number of Saturdays and Sundays between a start date and an end date. Example #1: ``` StartDate = Getdate(), EndDate = GetDate() + 5 -- result should be 2. ``` Example #2: ``` StartDate = Getdate(), EndDate = GetDate() + 10 -- result should be 4. ``` Can anyone please suggest an approach?
Here it is ``` DECLARE @STARTDATE DATE='01/JAN/2014' DECLARE @ENDDATE DATE='01/MAR/2014' ;WITH CTE as ( SELECT CAST(@STARTDATE AS DATE) as [DAYS] UNION ALL SELECT DATEADD(DAY,1,[DAYS]) [DAYS] FROM CTE WHERE [DAYS] < CAST(@ENDDATE AS DATE) ) SELECT DISTINCT COUNT([DAYS]) OVER(PARTITION BY DATENAME(WEEKDAY,[DAYS])) CNT, DATENAME(WEEKDAY,[DAYS]) WD FROM CTE WHERE DATENAME(WEEKDAY,[DAYS]) = 'SATURDAY' OR DATENAME(WEEKDAY,[DAYS]) = 'SUNDAY' ORDER BY DATENAME(WEEKDAY,[DAYS]) ``` * [SQL FIDDLE](http://sqlfiddle.com/#!3/bdd1b/2) Here is your result ![enter image description here](https://i.stack.imgur.com/oWjjE.jpg)
Had the same question today. And I got here. If you don't want to use recursion (CTE) or while. You can use math plus Case When: ``` DECLARE @StartDate AS DATE DECLARE @EndDate AS DATE SET @StartDate = Getdate() SET @EndDate = GetDate() + 11 SELECT -- Full WE (*2 to get num of days Sa and So) (((DATEDIFF(d,@StartDate,@EndDate)+1)/7)*2) + -- WE-Days in between; given that Saturday = 7 AND Sunday = 1 -- what if startdate is sunday And you have remaining Days; you will always only get one WE-day CASE WHEN DATEPART(dw,@StartDate) = 1 AND (DATEDIFF(d,@StartDate,@EndDate)+1)%7 > 0 THEN 1 -- If you have remaining days (Modulo 7 > 0) and the sum of number of starting day and remaining days is 8 (+1 for startingdate) then you have + 1 WE-day (its a saturday) ELSE CASE WHEN (DATEDIFF(d,@StartDate,@EndDate)+1)%7 > 0 AND (((DATEDIFF(d,@StartDate,@EndDate)+1)%7) + DATEPART(dw,@StartDate)) = 8 THEN 1 -- If the remaining days + the number of the weekday is are greater then 8 (+1 for startingdate) you have 2 days of the weekend in between. ELSE CASE WHEN (DATEDIFF(d,@StartDate,@EndDate)+1)%7 > 0 AND (((DATEDIFF(d,@StartDate,@EndDate)+1)%7) + DATEPART(dw,@StartDate)) > 8 THEN 2 -- you have no WE-days in between! Either because of the fact that you have a number that is divisable by 7 or because the remaining days are between 2 (Tuesday) and 6 (Friday) ELSE 0 END END END AS TotalWEDays ``` I hope it gets clear by the comments. Let me know if it helps.
How do I get count of weekend days from a range of dates
[ "", "sql", "sql-server-2008", "date", "select", "dayofweek", "" ]
I was able to change all the other SQL syntax coloring in Eclipse Preferences, but the normal (non-Sql-syntactic) text is still black and I couldn't find where to find the setting for that. Black text is kind of hard to read against an almost black background. Thank you!
If you are looking for the place to change the Text Editor color settings, go to Preferences->General->Editors->Text Editors, select "Foreground color" in "Appearance color options", uncheck "System Default", click on the color and choose the color you want. If you want to change the SQL Editor color settings, go to Preferences->Data Management->SQL Development->SQL Editor->Syntax Coloring. Edit: You need to change the "Others" item in SQL Editor's syntax coloring. It's working on my Mac, anyway. First uncheck "Default foreground color" and then choose the color you want. Edit: For those not seeing the change, after selecting "Apply and Close", you must Restart your workspace to see the changes. File -> Restart
[Alexis Dufrenoy](https://stackoverflow.com/users/505714/alexis-dufrenoy) is absolutely right: > If you want to change the SQL Editor color settings, go to Preferences->Sata Management->SQL Development->SQL Editor->Syntax Coloring. > > Edit: You need to change the "Others" item in SQL Editor's syntax coloring. It's working on my Mac, anyway. First uncheck "Default foreground color" and then choose the color you want. Just in Luna release I have to **restart** Eclipse every time after changing SQL syntax coloring (other editors work without restart). Its strange, but with Windows 7 its the only way.
Eclipse SQL Editor normal text color
[ "", "sql", "eclipse", "editor", "settings", "textcolor", "" ]
Created database in cpanel > mysql database but when I am importing sql file in phpmyadmin I am getting this error ``` Error SQL query: -- -- Database: `alphanationalfire` -- -- -------------------------------------------------------- -- -- Table structure for table `access` -- CREATE TABLE IF NOT EXISTS `access` ( `aid` int( 11 ) NOT NULL AUTO_INCREMENT , `mask` varchar( 255 ) NOT NULL DEFAULT '', `type` varchar( 255 ) NOT NULL DEFAULT '', `status` tinyint( 4 ) NOT NULL DEFAULT '0', PRIMARY KEY ( `aid` ) ) ENGINE = MYISAM DEFAULT CHARSET = utf8 AUTO_INCREMENT =1; MySQL said: Documentation #1046 - No database selected ``` And there is no database to select. how to tackle this?
First of all, create the database in phpMyAdmin. Then, from the left panel, select the database you created. After opening the database, import your SQL file.
In order to add data to a database from a file you need to have created the databse first. In this case create alphanationalfire then select databse in phpmyadmin and import sql file.
created database in mysql but not showing in phpmyadmin
[ "", "mysql", "sql", "phpmyadmin", "cpanel", "" ]
I have on table: ``` Roll_No Subject Marks Percentage 1 Maths 75 70 1 Science 70 70 1 History 65 70 2 Maths 89 90 2 Science 91 90 2 History 90 90 3 Maths 50 55 3 Science 55 55 3 History 60 55 ``` I want to get 2 outputs in 1st query as: ``` Roll No sum(marks) Percentage 1 210 70 2 270 90 3 165 55 ``` I want to achieve 2nd output from base table as: Count(roll\_no), Sum(Marks), Sum(Percentage---This should be one value for one student). ``` 3 645(sum of all marks) 215 (i.e 70+90+55) ``` Can you please help me or guide me to achieve result.
Something like that: ``` select roll_no, sum(marks), max(percentage) from table group by roll_no ``` To the second part of your question (the derived table needs an alias, and the per-student percentages are summed in the outer query): ``` select count(roll_no), sum(marks), sum(percentage) from (select roll_no, sum(marks) as marks, max(percentage) as percentage from table group by roll_no) t ```
``` select Roll_No,Sum(Marks),Max(Percentage) from table group by Roll_No; with cte as ( select Roll_No,Sum(Marks) as Marks ,Max(Percentage) as Percent from table group by Roll_No ) select count(Roll_No),Sum(Marks),Sum(Percent) from cte; ```
SQL query to get unique values from repeating rows
[ "", "mysql", "sql", "" ]
I have a delete command: ``` DELETE FROM exam WHERE excode = '2'; ``` But I need it to somehow reference another table and not delete the row if excode 2 exists in that table. I have been looking at this: <http://www.postgresql.org/docs/8.2/static/sql-delete.html> and this looks like something close to what I need to understand: ``` DELETE FROM films WHERE producer_id IN (SELECT id FROM producers WHERE name = 'foo'); ``` I would love some help.
Based on my understanding this is what you want: ``` DELETE FROM exam WHERE excode = '2' and not exists (select * from table2 where excode = '2') ``` The syntax is for SQL Server but I think you could use it with no change. If this is not what you want, explain a bit more.
Yes you could very well do it by using a query like this ,taking first table as you mentioned as "exam(excode)" and second table as "excodeTable(excode)" ``` DELETE FROM exam WHERE excode NOT IN(SELECT excode from excodeTable where excode='2') ``` If you want to check both the tables for the values i.e. excode=2 in first table and if excode=2 exists in second table also, then don't delete. For this you would have to use JOIN and use AND condition
How do i put a condition on a delete command?
[ "", "sql", "postgresql", "sql-delete", "" ]
I want to convert a number to hours. My code does that, but it is not 100% right: the number 2.98 is 2 hours, and 0.98*60 = 58.8 minutes, which I want to round to 59. My code's result should be +02:59, but without this small rounding it is +02:58. Does anyone know how to round the time in my code? ``` DECLARE @number_hours FLOAT = 2.98 --if number is negative IF(@number_hours) < 0 BEGIN SET @number_hours = @number_hours*-1 PRINT '-'+ CONVERT(varchar(5), CONVERT(DATETIME, @number_hours/24), 108) END ELSE PRINT '+'+ CONVERT(varchar(5), CONVERT(DATETIME, @number_hours/24), 108) ```
If you want to round to the nearest minute, you need to round your fractional number of hours to the nearest minute first, so something like: ``` DECLARE @number_hours FLOAT = 2.98 --if number is negative IF(@number_hours) < 0 BEGIN SET @number_hours = @number_hours*-1 PRINT '-'+ CONVERT(varchar(5), CONVERT(DATETIME, ROUND(@number_hours * 60, 0)/(24.0 * 60)), 108) END ELSE PRINT '+'+ CONVERT(varchar(5), CONVERT(DATETIME, ROUND(@number_hours * 60, 0)/(24.0 * 60)), 108) ```
Also you can use this one : ``` DECLARE @number_hours FLOAT = 2.98 DECLARE @HOUR NUMERIC DECLARE @MIN NUMERIC DECLARE @FORIF FLOAT = @number_hours SET @number_hours = CASE WHEN @number_hours<0 THEN @number_hours*-1 ELSE @number_hours END SET @HOUR = ROUND(@number_hours,1) SET @MIN = (1-(@HOUR-@number_hours))*60 IF(@FORIF) < 0 BEGIN PRINT '-'+CONVERT(varchar,@HOUR-1)+':'+CONVERT(VARCHAR,ROUND(@MIN,0)) END ELSE PRINT '+'+ CONVERT(varchar,@HOUR-1)+':'+CONVERT(VARCHAR,ROUND(@MIN,0)) ```
convert a number to hours (ROUND issue)
[ "", "sql", "sql-server", "" ]
I have two tables (Table1 - master table, Table2 - detail table). How can I select (view) only those records from master table that have no records in detail table? What SQL statement can I use for this? I'm using MS SQL Server 2012. Edit: Table definition Table1 - ID (PK) Table2 - ID (PK), Table1ID (FK)
Use the [**`NOT IN`**](http://msdn.microsoft.com/en-us/library/ms177682.aspx) operator: ``` SELECT * FROM Table1 WHERE ID NOT IN(SELECT Table1ID FROM Table2 ) ``` Note that `NOT IN` returns no rows at all if the subquery yields any NULLs, so if `Table1ID` is nullable, add `WHERE Table1ID IS NOT NULL` to the subquery (or use `NOT EXISTS`).
I would use `NOT EXISTS` since it's clear, efficient and has no issues with nullable columns. For example (`MasterID` is the PK/FK): ``` SELECT master.* FROM dbo.Table1 master WHERE NOT EXISTS ( SELECT 1 FROM Table2 detail WHERE detail.MasterID = master.MasterID ) ``` But you have other options: <http://sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join>
SQL - How to select master records that have no detail records?
[ "", "sql", "sql-server", "" ]
Please consider the below example : ``` id name 1234 maria 1235 tamas 1236 helia 1234 maria 1235 jack 1235 david ``` What I want to do is to group the distinct value on the second column based on id : ``` id name id name id name 1234 maria 1235 tamas 1236 helia 1235 jack 1235 david ``` I am not sure, but I think I should use cursor on id and then select the second column by that id but I wasn't successful. Can somebody help me out in this?
`Distinct` keyword should solve the purpose: ``` declare @test table ( id int, name varchar(20)) insert into @test values (1234,'maria'), (1235,'tamas'), (1236,'helia'), (1234,'maria'), (1235,'jack'), (1235,'david') select distinct id,name from @test ```
this would give them in separate table, but @Deepshika solution is fine also. ``` DECLARE @ID INT DECLARE @getID CURSOR SET @getID = CURSOR FOR select distinct id from [work].[dbo].[FORM105] order by id OPEN @getID FETCH NEXT FROM @getID INTO @ID WHILE @@FETCH_STATUS = 0 BEGIN select distinct id,name from [work].[dbo].[FORM105] where id=@ID FETCH NEXT FROM @getID INTO @ID END CLOSE @getID DEALLOCATE @getID ```
group distinct value on column two based on 1st column in sql server
[ "", "sql", "sql-server", "sql-server-2008", "select", "" ]
Hi, I am writing my own MySQL query where I need records returned as follows. **Word in a table** - ABC XYZ **My string** - ABC XYZQWER When I run the query below - ``` SELECT * FROM myTABLE where `column` LIKE 'ABC XYZQWER%'; ``` I get an empty result. I am aware of the fact that MySQL LIKE matches against the full search string. I need a way to figure this out. If I search using 'ABC X', it gives me a proper result.
You can use the function `LOCATE()`: ``` SELECT `column` FROM myTable WHERE LOCATE(`column`, 'ABC XYZQWER') = 1; ``` As long as there is a value `ABC XYZ` in the column named `column`, the result of the query will be at least: ``` +---------+ | column | +---------+ | ABC XYZ | +---------+ ``` --- **Finding an inner match** Finding a matching string like `'BC'`, which is inside the search string `'ABC XYZQWER'`, is possible by using the compare operator `>=`. So the WHERE clause will look like this: ``` WHERE LOCATE(`column`, 'ABC XYZQWER') >= 1; ```
Try this: ``` SELECT * FROM myTABLE a WHERE 'ABC XYZQWER' LIKE CONCAT(a.column, '%'); ```
MySQL - Need search result of maximum matching letters from a string
[ "", "mysql", "sql", "regex", "select", "sql-like", "" ]
I want to fetch all the records on the basis of a search string. E.g. column name: `FileName` ``` MasterRoomTwo.jpg BedRoom.png MasterbedRoom.gif ``` In the simple scenario I can use ``` Declare @FileName nvarchar(60) = NULL set @FileName = '.jpg' SELECT * FROM JobAttachment WHERE Tags LIKE '%' + ISNULL(@FileName ,FileName ) + '%' ORDER BY updated DESC ``` but in my case I will get something like ``` set @FileName = '.jpg,.Png,gif' ``` So how do I write a query for this? Any help will be appreciated. Thanks
Try this. **Split the input string** and use `charindex` ``` SELECT 'MasterRoomTwo.jpg' a INTO #temp UNION SELECT 'BedRoom.png' UNION SELECT 'MasterbedRoom.gif' DECLARE @FileName NVARCHAR(60) SET @FileName = '.jpg,.Png,gif' SELECT * FROM #temp JOIN (SELECT Rtrim(Ltrim(Split.a.value('.', 'VARCHAR(100)'))) fs FROM (SELECT Cast ('<M>' + Replace(@FileName, ',', '</M><M>') + '</M>' AS XML) AS Data) AS A CROSS APPLY Data.nodes ('/M') AS Split(a)) ad on Charindex(fs, a) > 0 ```
Try this : ``` SELECT * FROM JobAttachment a JOIN (SELECT t1.nod.value('.', 'varchar(50)') tags FROM (SELECT Cast('<N>.' + Replace(Replace(@FileName, '.', ''), ',', '</N><N>.') + '</N>' AS XML) AS format) t CROSS APPLY format.nodes('/N') AS t1(nod) WHERE t1.nod.value('.', 'varchar(50)') <> '.') fileformat ON a.tag LIKE ( '%' + fileformat.tags + '%' ) ```
How to get all records on the basis of Search String In sql
[ "", "sql", "sql-server", "" ]
I need to fetch a specific row from DB2. ``` select istore,row_num() over() from store where row_num()=2; ``` So here I need to fetch the 2nd row from the store table, but the above query is not fetching any data. Please help me. Thanks in advance
You need to fetch it in an outer query like ``` select * from ( select istore, ROW_NUMBER() OVER (PARTITION BY some_column ORDER BY some_column) AS rn from store ) tab where rn = 2; ```
You didn't specify what exactly your problem is, but there are two errors in your statement: 1. The window function is named `row_number()` not, `row_num()` 2. You can't use a window function in the where clause. Using `row_number()` without any `order by` doesn't make sense, because you wind up with a random ordering. ``` select * from ( select istore, row_number() over (order by something) as rn from store ) t where rn = 2; ``` Replace `something` with a column name that defines the order of your rows. Without any ordering there is no such thing as "the second row".
how to extract specific row from db2
[ "", "sql", "db2", "" ]
## Question How could I express the following statement in my query? ``` Between 4 and 5 years old ``` ## SQL Query ``` WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) <=2 ``` Get all Assets that are more than one year old from today's date ## What I want to say? Get all assets between 4 and 5 years old ---
Be very careful using `datediff(year)`. It counts the number of year boundaries between two dates. So, the difference between 2014-12-31 and 2015-01-01 is 1. In addition, I recommend putting the functions on the `getdate()` value rather than on the column. This allows an index to still be used on the column ("sargability"). So, something like this should do what you want: ``` where AcquiredDate >= dateadd(year, -5, GetDate()) and AcquiredDate < dateadd(year, -3, GetDate()) ``` On 2015-01-01, this will retrieve rows acquired between 2010-01-01 and 2011-12-31, which seems to be the intent of the question.
check for SQL's BETWEEN OPERATOR here [SQL Between](http://www.w3schools.com/sql/sql_between.asp) ``` SELECT * FROM TABLE NAME WHERE DATEDIFF(YEAR, AcquiredDate, GetDate()) BETWEEN 4 and 5 ```
Expressing age between two years
[ "", "sql", "sql-server", "" ]
I realize this is probably painfully simple. It's just getting late and I need an explanation. I have the table: ``` mysql> SELECT * FROM employee_expert; +------+---------+ | enum | package | +------+---------+ | E246 | Excel | | E246 | MySQL | | E246 | Python | | E246 | Word | | E403 | Java | | E403 | MySQL | | E892 | Excel | | E892 | PHP | | E892 | Python | +------+---------+ 9 rows in set (0.00 sec) ``` And I need to find the `enum` that is NOT paired with Python. The result should be `E403`, since it's the only one without Python. I tried ``` mysql> SELECT enum FROM employee_expert WHERE package != "Python" GROUP BY enum; +------+ | enum | +------+ | E246 | | E403 | | E892 | +------+ 3 rows in set (0.00 sec) ``` But, obviously, it just returned all the `enum`s...
One option is the `not in` operator: ``` SELECT DISTINCT enum FROM employee_expert WHERE enum NOT IN (SELECT enum FROM employee_expert WHERE package = 'Python') ```
A `NOT IN` will exclude a subset of data with a specific exclusion criteria: ``` SELECT DISTINCT(enum) FROM employee_expert WHERE enum NOT IN (SELECT enum FROM employee_expert WHERE package = 'Python'); ``` `Distinct` will exclude duplicates. [Sql Fiddle here](http://sqlfiddle.com/#!2/3c65e2/2)
SQL find single value from duplicates with certain condition
[ "", "mysql", "sql", "" ]
I have 2 Tables IlRoutes and PlRoutes and they both have a common RouteID. ``` IlRoute: PlRoute: RouteID CID RouteID PlID Division 1 3 1 2 71 2 3 1 7 29 3 3 2 1 78 4 37 2 7 21 5 37 3 5 50 6 37 3 6 50 ``` I want the records where plID = 7 and CID = 3 and also the records based on the RouteID's Here is the sample output: ``` RouteID CID RouteID PlID Division 1 3 1 2 71 1 3 1 7 29 2 3 2 1 78 2 3 2 7 21 ``` any help is greatly appreciated.
Assuming I'm understanding your question correctly, you want to return any record that matches with `cid = 3` in the `IlRoute` table, and at least one record in the PlRoute table `plId = 7`. First `join` the tables together, and then use `exists` to see if there is at least one record with plid = 7: ``` select i.routeid, i.cid, p.plid, p.division from ilroute i join plroute p on i.routeid = p.routeid where i.cid = 3 and exists (select 1 from PlRoute p2 where p.routeid = p2.routeid and p2.plid = 7) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/0ab8c/2)
``` select IlRoute.*, PlRoute.* from IlRoute join PlRoute where IlRoute.RouteID = PlRoute.RouteID ```
sql query to get records based on 2 ID's
[ "", "sql", "sql-server-2008", "" ]
In my `PostgreSQL 9.2` database there are many tables with mixed-case names, for example `Tbl_Sales`, `Tbl_Purch`, `Tbl_logMnth`. What I want to do is

```
alter table "Table1" rename to table1
```

but how can I rename all mixed-case tables in my database in an easy way?
Use the following select to get the table(s) with mixed case in the name:

```
SELECT table_name ucase, lower(table_name) lcase
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema = 'public'
  AND table_name ~ E'^[[:upper:]][^[:upper:]]'
```

It uses the PostgreSQL string function [**lower**](http://www.postgresql.org/docs/9.2/static/functions-string.html) and [**information_schema.tables**](http://www.postgresql.org/docs/9.2/static/infoschema-tables.html). Then use a PL/pgSQL [**DO**](http://www.postgresql.org/docs/9.2/static/sql-do.html) block to rename all tables that have mixed case:

```
do
$$
declare
  rw record;
begin
  for rw in
    SELECT 'ALTER TABLE "'||t.ucase||'" RENAME to '||t.lcase||';' execme
    FROM (
      SELECT table_name ucase, lower(table_name) lcase
      FROM information_schema.tables
      WHERE table_type = 'BASE TABLE'
        AND table_schema = 'public'
        AND table_name ~ E'^[[:upper:]][^[:upper:]]') t
  loop
    execute rw.execme;
  end loop;
end;
$$
```
this is the query to use > ALTER TABLE name > RENAME TO new\_name
Rename mixed-case tables in a single query
[ "", "sql", "postgresql", "ddl", "" ]
I have a table with duplicate data. I'm able to group by email using this query:

```
SELECT id, first_name, email, status
FROM customers
WHERE status = '0'
GROUP BY email
```

However, I don't want to select customers with duplicate emails that already have status = '1'. The table design is as follows:

```
id   first_name   email             status
1    Tony         tony@gmail.com    0
2    Terry        terry@gmail.com   0
3    Alex         alex@gmail.com    0
4    John         john@gmail.com    0
5    Mike         mike@gmail.com    1
6    Mike Jones   mike@gmail.com    0
7    Mike Fake    mike@gmail.com    0
```

My query still selects mike@gmail.com because there are rows with status = 0, but in fact there is already one with status = 1. How do I avoid selecting an email that already has a row with status = 1?
Try this:

Using **LEFT JOIN**

```
SELECT c.id, c.first_name, c.email, c.status
FROM customers c
LEFT OUTER JOIN customers c1
  ON c.email = c1.email AND c1.status = 1
WHERE c.status = 0
  AND c1.id IS NULL
GROUP BY c.email;
```

Using **NOT EXISTS**

```
SELECT c.id, c.first_name, c.email, c.status
FROM customers c
WHERE c.status = 0
  AND NOT EXISTS (SELECT 1 FROM customers c1
                  WHERE c.email = c1.email AND c1.status = 1)
GROUP BY c.email;
```
The simplest way to do it would be using `not exists` ``` select c.* from customers c where not exists ( select 1 from customers c1 where c1.email = c.email and c1.status = 1 ) ```
MySQL : Don't select duplicate row item when one of them has status = 1
[ "", "mysql", "sql", "select", "duplicates", "exists", "" ]
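The `NOT EXISTS` pattern above can be verified against an in-memory copy of the table; this Python sketch uses sqlite3, whose grouping semantics stand in for MySQL's here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, first_name TEXT, email TEXT, status INTEGER);
INSERT INTO customers VALUES
  (1,'Tony','tony@gmail.com',0),(2,'Terry','terry@gmail.com',0),
  (3,'Alex','alex@gmail.com',0),(4,'John','john@gmail.com',0),
  (5,'Mike','mike@gmail.com',1),(6,'Mike Jones','mike@gmail.com',0),
  (7,'Mike Fake','mike@gmail.com',0);
""")

# Keep status-0 emails only when NO row with the same email has status 1.
emails = sorted(r[0] for r in conn.execute("""
    SELECT c.email
    FROM customers c
    WHERE c.status = 0
      AND NOT EXISTS (SELECT 1 FROM customers c1
                      WHERE c1.email = c.email AND c1.status = 1)
    GROUP BY c.email
"""))
print(emails)  # mike@gmail.com is gone despite its two status-0 rows
```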
I have a table with 2 fields: TimeStamp (date & time) and Value (numeric). I'd need an (efficient) query to find those rows where the Value field is smaller than in the previous row, when sorted by TimeStamp. I provide a small set of data as an example:

```
Timestamp             Value
-------------------   -----
2014/12/01 18:30:10   500
2014/12/01 18:30:20   510
2014/12/01 18:30:30   520
2014/12/01 18:30:40   530
2014/12/01 18:30:50   5     <- I want to have this row returned
2014/12/01 18:31:00   25
2014/12/01 18:31:10   40
2014/12/01 18:31:20   13    <- And this one as well.
2014/12/01 18:31:30   18
2014/12/01 18:31:40   23
```

A row will be inserted every 10 seconds and data will last for years, so I expect the number of rows to grow quite quickly. If no efficient query can be produced, I'm thinking of creating a trigger on row insertion that will retrieve the previous row, check the Value field and, if the row being inserted has a smaller Value, insert a record into another table. What do you think about it? Thanks!
In SQL Server 2012+, you would use `lag()`:

```
insert into othertable(col1 . . . )
    select t.*
    from (select t.*, lag(value) over (order by timestamp) as prev_value
          from table t
         ) t
    where value < prev_value;
```

For performance, you want an index on `table(timestamp, value)`.

In earlier versions of SQL Server, you can use a correlated subquery or `cross apply`.

If you are doing this on a regular basis, such as every night, then you will want a `where` clause. Be careful about boundary conditions (if the value goes down just over midnight, you still want to catch that).
I never thought of correlated subqueries. I think this would work for me: ``` SELECT t.TimeStamp, t.Valor FROM Tabla1 AS t WHERE t.Valor < ( SELECT TOP 1 t2.Valor FROM Tabla1 AS t2 WHERE t2.TimeStamp < t.TimeStamp ORDER BY t2.TimeStamp ASC ) ```
Compare data in consecutive rows
[ "", "sql", "sql-server", "" ]
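A runnable sketch of the `lag()` approach, using Python's sqlite3 (window functions need SQLite 3.25+; table and column names are simplified stand-ins for the question's table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (ts TEXT, value INTEGER);
INSERT INTO readings VALUES
  ('2014-12-01 18:30:10',500),('2014-12-01 18:30:20',510),
  ('2014-12-01 18:30:30',520),('2014-12-01 18:30:40',530),
  ('2014-12-01 18:30:50',5),  ('2014-12-01 18:31:00',25),
  ('2014-12-01 18:31:10',40), ('2014-12-01 18:31:20',13),
  ('2014-12-01 18:31:30',18), ('2014-12-01 18:31:40',23);
""")

# LAG pairs each row with the previous value; the first row's NULL
# prev_value fails the comparison and is filtered out automatically.
drops = conn.execute("""
    SELECT ts, value FROM (
        SELECT ts, value, LAG(value) OVER (ORDER BY ts) AS prev_value
        FROM readings
    )
    WHERE value < prev_value
    ORDER BY ts
""").fetchall()
print(drops)  # the 18:30:50 and 18:31:20 rows
```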
Here are my two tables:

**oasis**

```
+-----+-------+
| id  | title |
+-----+-------+
| 234 | a     |
| 235 | b     |
| 236 | c     |
+-----+-------+
```

**user_collection**

```
+----+---------+----------+------+
| id | oasisid | username | data |
+----+---------+----------+------+
| 1  | 234     | joe      | blah |
| 2  | 235     | bob      | blah |
| 3  | 236     | ted      | blah |
+----+---------+----------+------+
```

Here's my query:

```
SELECT *
FROM oasis
JOIN user_collection ON oasis.id = user_collection.oasisid
WHERE username = 'greg'
  AND oasis.id = '234'
```

What I want to do here is pull everything from **oasis** and **user_collection** that match, but also pull the information from **oasis** even if there is NO match on **user_collection**. How do I fix my query to accomplish this?
You want a `left join`, but you have to be careful about the conditions in the `where` clause. Conditions on the *second* table need to be in the `on` clause for the `left join` to work:

```
SELECT *
FROM oasis
LEFT JOIN user_collection
       ON oasis.id = user_collection.oasisid
      AND user_collection.username = 'greg'
WHERE oasis.id = '234';
```

Conditions on the *first* table should go in the `where` clause.
``` SELECT s.ome , c.olumns FROM table1 s LEFT JOIN table2 c ON c.some_id = s.some_id AND c.some_column = 'one thing' WHERE s.some_other_column = 'another thing' ```
How do I always pull data from one table, but also pull data from a second table if it's there in MySQL?
[ "", "mysql", "sql", "" ]
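The ON-clause-versus-WHERE-clause point is easy to see in a runnable sketch (Python + sqlite3, with the question's tables; 'greg' has no rows in `user_collection`, so the oasis row must survive NULL-padded):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE oasis (id INTEGER, title TEXT);
INSERT INTO oasis VALUES (234,'a'),(235,'b'),(236,'c');
CREATE TABLE user_collection (id INTEGER, oasisid INTEGER, username TEXT, data TEXT);
INSERT INTO user_collection VALUES
  (1,234,'joe','blah'),(2,235,'bob','blah'),(3,236,'ted','blah');
""")

# username filter in ON: oasis row 234 survives even with no match for greg.
good = conn.execute("""
    SELECT * FROM oasis
    LEFT JOIN user_collection
      ON oasis.id = user_collection.oasisid
     AND user_collection.username = 'greg'
    WHERE oasis.id = 234
""").fetchall()

# username filter in WHERE: the NULL-padded row is filtered back out.
bad = conn.execute("""
    SELECT * FROM oasis
    LEFT JOIN user_collection ON oasis.id = user_collection.oasisid
    WHERE oasis.id = 234 AND user_collection.username = 'greg'
""").fetchall()

print(len(good), len(bad))  # 1 0
```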
I have the following table (this is just a sample):

```
id | User  | dateAssigned        | dateComment         |
---|-------|---------------------|---------------------|
1  | Usr1  | 2014-12-02 12:35:00 | 2014-12-03 08:13:00 |
2  | Usr1  | 2014-12-02 12:35:00 | 2014-12-02 13:06:00 |
3  | Usr2  | 2014-12-02 07:47:00 | 2014-12-02 07:47:00 |
4  | Usr2  | 2014-12-02 07:47:00 | 2014-11-25 08:07:00 |
```

How do I write a query in SQL Server 2008 to select for each user the row where difference between `dateAssigned` and `dateComment` is minimum? In my example, query should return rows 2 and 3. Thank you.
You could use a `CTE` (Common Table Expression) and the [`ROW_NUMBER` function](http://msdn.microsoft.com/en-us/library/ms186734.aspx):

```
WITH CTE AS
(
   SELECT id, [User], dateAssigned, dateComment,
          rn = ROW_NUMBER() OVER (
                 PARTITION BY [User]
                 ORDER BY ABS(DATEDIFF(minute, dateAssigned, dateComment)) ASC)
   FROM dbo.Users u
)
SELECT id, [User], dateAssigned, dateComment
FROM CTE
WHERE RN = 1
```
Use this: ``` SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Username ORDER BY ABS(DATEDIFF(second, dateComment, dateAssigned)) ASC) AS datesOrder FROM @T ) t WHERE t.datesOrder = 1 ``` Row number is equal to 1 for those records corresponding to the minimum difference. Hence, the where clause in the outer select statement retrieves the records desired. **EDIT:** I added the ABS function applied to the date difference, since dateAssigned can also precede dateComment.
Get Rows Where Difference Between Two Date Columns is Minimum
[ "", "sql", "sql-server", "" ]
I have a table with some columns, say A, B & C, and a last column called DeletedDate. It's a nullable datetime. I have added a unique constraint on A, B & C. It worked fine until I discovered an issue: after adding a deleted date, I should be able to add new records with the same A, B & C values, but I can't because of the unique constraint.

I see two options here:

1. Include the DeletedDate column in the constraint
2. Add a where clause to the constraint saying `Where DeletedDate is not NULL`

My question: which option is better, and why?
Option 2: a unique constraint cannot have a `WHERE` clause. But you can use a **[filtered index](http://msdn.microsoft.com/en-gb/library/cc280372.aspx)** instead:

```
CREATE UNIQUE INDEX a_b_c_UQ_when_DeletedDate_is_null
  ON TableName (a, b, c)
  WHERE DeletedDate IS NULL ;
```

---

Option 1: if you simply include the `DeletedDate` in the unique constraint, then there are 2 issues:

* a tiny, minor one is that if SQL-Server had implemented unique constraints as in the sql-Standard, that would allow to have 2 or more rows with same a,b,c combination and null deleted date, so 2 or more undeleted rows can coexist, which would not match the requirements. This is not an issue of course with current implementation. It might only become one, if SQL-Server decides in the future to change the implementation, so as to comply with the standard.
* a somewhat more important issue is that this way you would not allow to have 2 or more rows with same a,b,c combination and same exact deleted date. This is not a serious problem either in your case, because the `DeletedDate` is a `datetime` and not a `date`. It might only bring some difficulty if you try to delete several rows with the same statement/transaction.
You could use a computed column for the unique index. This returns a static value like `1` if `DeletedDate` is available and otherwise the negative counterpart of the primary key. ``` CREATE TABLE [dbo].[Test]( [ID] [int] IDENTITY(1,1) NOT NULL, [A] [int] NOT NULL, [B] [int] NOT NULL, [C] [int] NOT NULL, [DeletedDate] [datetime] NULL, [IsDeleted] AS (case when [DeletedDate] IS NULL then (1) else -(1)*[ID] end), CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` Now you can create a unique index on A,B,C and IsDeleted.
Would adding a where clause be a good idea here?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
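SQL Server's filtered index has a close analogue in SQLite's partial index, so the soft-delete uniqueness scheme above can be sketched and tested with Python's sqlite3 (hypothetical table `t`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER, DeletedDate TEXT);
CREATE UNIQUE INDEX uq_abc_live ON t (a, b, c) WHERE DeletedDate IS NULL;
INSERT INTO t VALUES (1,2,3,NULL);
""")

# Soft-delete the row; the same (a,b,c) can then be inserted again.
conn.execute("UPDATE t SET DeletedDate = '2014-12-02' WHERE a=1 AND b=2 AND c=3")
conn.execute("INSERT INTO t VALUES (1,2,3,NULL)")  # allowed: old row is 'deleted'

# A second *live* duplicate is rejected by the filtered unique index.
try:
    conn.execute("INSERT INTO t VALUES (1,2,3,NULL)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
print(violated)  # True
```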
This is my result:

```
Year   matches
2005   1
2008   2
```

and this is my expected result:

```
Year   matches
2005   1
2006   0
2007   0
2008   2
```

This is what I have tried:

```
SELECT DATEPART(yy,A.match_date) AS [Year],
       COUNT(A.match_id) AS "matches"
FROM match_record A
INNER JOIN match_record B ON A.match_id = B.match_id
WHERE (score) IS NULL OR (score) = 0
GROUP BY DATEPART(yy,A.match_date);
```

I want to get zero as the count for the years where the scores all have real values (not NULL and not 0, i.e. anything greater than 0). Can someone help me?
This might do what you're looking for:

```
SELECT DATEPART(yy,A.match_date) AS [Year],
       SUM(CASE WHEN score=0 or score is null THEN 1 ELSE 0 END) AS "matches"
FROM match_record A
INNER JOIN match_record B ON A.match_id = B.match_id
GROUP BY DATEPART(yy,A.match_date);
```

Assuming you have *any* data in the missing years, this should now produce your expected results.

If, instead, you need `0`s for years where you have no data, you'll need to provide the list of years separately (say, via a numbers table) and then `LEFT JOIN` that source to your existing query.
Consider following is your table ``` SELECT * INTO #TEMP FROM ( SELECT 2005 [YEARS],1 [MATCHES] UNION ALL SELECT 2008,2 )T ``` Declare two variables to get min and max date in your table ``` DECLARE @MINYEAR int; DECLARE @MAXYEAR int; SELECT @MINYEAR = MIN(YEARS) FROM #TEMP SELECT @MAXYEAR = MAX(YEARS) FROM #TEMP ``` Do the following `recursion` to get years between the period in your table and `LEFT JOIN` with your table. ``` ; WITH CTE as ( select @MINYEAR as yr FROM #TEMP UNION ALL SELECT YR + 1 FROM CTE WHERE yr < @MAXYEAR ) SELECT DISTINCT C.YR,CASE WHEN T.MATCHES IS NULL THEN 0 ELSE T.MATCHES END MATCHES FROM CTE C LEFT JOIN #TEMP T ON C.yr=T.YEARS ```
Show 0 in count SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
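The conditional-aggregation idea above can be sketched with Python's sqlite3 (hypothetical match data; `strftime` stands in for SQL Server's `DATEPART`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE match_record (match_id INTEGER, match_date TEXT, score INTEGER);
INSERT INTO match_record VALUES
  (1,'2005-03-01',0),
  (2,'2006-06-01',3),
  (3,'2007-09-01',2),
  (4,'2008-01-01',NULL),(5,'2008-05-01',0);
""")

# SUM over a CASE keeps every year in the output, contributing 0
# for years whose scores are all real values.
rows = conn.execute("""
    SELECT strftime('%Y', match_date) AS yr,
           SUM(CASE WHEN score = 0 OR score IS NULL THEN 1 ELSE 0 END) AS matches
    FROM match_record
    GROUP BY yr
    ORDER BY yr
""").fetchall()
print(rows)  # [('2005', 1), ('2006', 0), ('2007', 0), ('2008', 2)]
```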
How can I sort a table by a varchar2 column whose values are in varying cases: upper, lower, and numeric strings? For example, when I do an ORDER BY on the column NAME, the data in the column are:

```
ANNIE
BOB
Daniel
annie
bob
1abc
```

The expected result is:

```
1abc
ANNIE
annie
BOB
bob
Daniel
```
This is complicated. Normal sort order is aa, aA, Aa, AA, ab, aB, Ab, AB, a1, A1, 1a, 1A. So same names are grouped together and then lower case comes first. Digits come after Z. This is close to what you are after. You want Ben to come before BOB, because you care about BEN being before BOB in the first place and only then about O being capital and e being not. However, you want digits come *before* a and upper case coming *before* lower case. That makes a great difference at last. You cannot do this easily, because while you want words (bob, BOB) be grouped as in default ordering, you want single characters be treated differently. You can first order by lower or upper to get the grouping, but that will put numbers last, you can then use binary order to get A before a. ``` order by lower(name), nlssort(name, 'NLS_SORT = BINARY'); ``` I think this is as close as you get with built-in stuff. Digits last. If you want to stick to your special order, you will have to write a function for it and use that. ``` order by my_own_sort_order(name); ``` EDIT (after acceptance :-) On second thought: you want the original sort behavior only with toggled upper/lower case consideration. You can use TRANSLATE for this: ``` order by translate(name, 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'); ```
You could do something like: ``` SELECT * FROM MyTable ORDER BY UPPER(MyCol) ```
How can I sort a table by a column for numeric letter first, then case sensitive for alphabet letters
[ "", "sql", "select", "oracle11g", "sql-order-by", "nls-sort", "" ]
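In plain ASCII/byte terms, the `order by lower(name)` plus binary-tiebreak idea from the answer maps onto a two-part Python sort key. With byte-wise comparison the digit-first string also sorts up front, reproducing the asker's expected order exactly (note that Oracle's linguistic collations can behave differently, as the answer discusses):

```python
names = ["ANNIE", "BOB", "Daniel", "annie", "bob", "1abc"]

# First key part: case-insensitive grouping (lower(name)).
# Second key part: the raw string as a binary tiebreak, where
# '1' < 'A' < 'a' in ASCII, so digits first and UPPER before lower.
result = sorted(names, key=lambda s: (s.lower(), s))
print(result)  # ['1abc', 'ANNIE', 'annie', 'BOB', 'bob', 'Daniel']
```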
I need to create a T-SQL query that will return the highest salary from the `job_positions` table and the name of the person(s) who has it. So far, my solution is:

```
SELECT MAX(e.salary) AS [Max salary]
     , p.firstname + ' ' + p.lastname AS [THE LUCKY MAN]
FROM persons p
JOIN job_positions e ON (p.id_person = e.id_person)
```

But this produces the error:

> Msg 8120, Level 16, State 1, Line 67
> Column 'persons.firstname' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
>
> Msg 8120, Level 16, State 1, Line 67
> Column 'persons.lastname' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

However, the message only seems self-explanatory; the problem is probably with the `MAX` function. After deleting it, the result is the full name-salary table. I wonder why it cannot just extract the maximum row...
If **only one** employee can get the `Max salary` then try this.

```
SELECT TOP 1 e.salary AS [Max salary],
       p.firstname + ' ' + p.lastname AS [THE LUCKY MAN]
FROM   persons p
       JOIN job_positions e
         ON ( p.id_person = e.id_person )
ORDER  BY e.salary DESC
```

If **more than one** is getting the Max salary then use a `Window Function` with `Dense_Rank` to find all the names.

```
;WITH cte
     AS (SELECT Dense_rank() OVER (ORDER BY e.salary DESC) Rn,
                e.salary AS [Max salary],
                p.firstname + ' ' + p.lastname AS [THE LUCKY MAN]
         FROM   persons p
                JOIN job_positions e
                  ON ( p.id_person = e.id_person ))
SELECT *
FROM   cte
WHERE  Rn = 1
```

OR

```
SELECT TOP 1 WITH TIES e.salary AS [Max salary],
       p.firstname + ' ' + p.lastname AS [THE LUCKY MAN]
FROM   persons p
       JOIN job_positions e
         ON ( p.id_person = e.id_person )
ORDER  BY e.salary DESC
```
Finally found a solution that isn't totally ugly: ``` SELECT MAX(e.salary) AS [Max salary] , p.firstname + ' ' + p.lastname AS [THE LUCKY MAN] FROM persons p JOIN job_positions e ON (p.id_person=e.id_person) WHERE e.salary=(SELECT max(e.salary) FROM e.job_positions) ``` But I still wonder why isn't it possible to just use "MAX" with select with "join".
Select MAX() causes error message
[ "", "sql", "sql-server", "t-sql", "" ]
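A scalar subquery (`WHERE salary = (SELECT MAX(salary) ...)`) is the portable way to keep ties, and it also shows why `MAX` cannot simply sit next to non-aggregated columns. A runnable sketch with Python's sqlite3 and made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE persons (id_person INTEGER, firstname TEXT, lastname TEXT);
CREATE TABLE job_positions (id_person INTEGER, salary INTEGER);
INSERT INTO persons VALUES (1,'Ann','Lee'),(2,'Bob','Ray'),(3,'Cal','Fox');
INSERT INTO job_positions VALUES (1,900),(2,1200),(3,1200);
""")

# The subquery collapses to a single value; the outer query keeps
# every person at that salary, so ties are preserved.
rows = conn.execute("""
    SELECT p.firstname || ' ' || p.lastname AS lucky, j.salary
    FROM persons p
    JOIN job_positions j ON p.id_person = j.id_person
    WHERE j.salary = (SELECT MAX(salary) FROM job_positions)
    ORDER BY p.id_person
""").fetchall()
print(rows)  # [('Bob Ray', 1200), ('Cal Fox', 1200)]
```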
so I have been wrestling a perplexing issue with BULK INSERT for some time. The files come from a Linux box and when I look at them in hex edit mode/notepad ++ they appear to have just a linefeed (0A) as a row terminator. I store bulk insert statements in a table which later a job selects from and executes the statement in the table to load data into a staging table. The particular case that is perplexing to me is a table that has 7 columns. The data file only has the first 4 columns, the rest should be left NULL. Typically they look like this: ``` BULK INSERT STAGING_TABLE FROM 'FILE_LOCATION' WITH ( DATAFILETYPE = 'widechar' , FIELDTERMINATOR = ',' , ROWTERMINATOR = 'something_here' ); ``` The row terminator has been the biggest source of my issues. When I try to use "\n" the bulk insert fails on an truncation error-- it seems to treat the file as one long string and only delimits the columns correctly until it runs out of columns (hence truncation error). When I use "0x0a" the bulk insert fails on "unexpected end of file" error. There was a blank line at the end of the file but even when I removed that it still threw the same error so I'm not sure what is wrong there. The ONLY one so far that has worked for getting data actually into the table was "\l". Does anyone know what that means? I have searched far and wide but there doesn't seem to be documentation on it. That or I have been looking in the wrong place completely. The weird thing with \l as the rowterminator is that even though it load successfully it still doesn't respect the rowterminator... The rows just get loaded into all 7 columns and split on seemingly random intervals. Anyone have any idea? Should I clarify some more?
The issue you are having is actually not due to the Row Terminator. I suspect, along with the End of File error, you also saw something similar to the following: > Msg 4864, Level 16, State 1, Line 1 > Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 4 ({column\_name}). While what I said below the line is still valid regarding the `ROWTERMINATOR`, the real issue is indicated by your statement of: > [the] table that has 7 columns. The data file only has the first 4 columns, the rest should be left NULL. This is the issue. When using `BULK INSERT`, the data file has to have the same number of fields as the table being inserted into. If that is not the case, then you have to use the `FORMATFILE ='format_file_path'` option in which case you need to create a [Format File](http://msdn.microsoft.com/en-us/library/ms190393.aspx) and specify the location. I thought you could get away with the easier [OPENROWSET(BULK...)](http://msdn.microsoft.com/en-us/library/ms190312.aspx) so that you can do the following: ``` INSERT INTO STAGING_TABLE SELECT * FROM OPENROWSET(BULK 'FILE_LOCATION' ...); ``` But that doesn't allow you to specify a `ROWTERMINATOR` without using a Format File. Hence you need the Format File in either case. **OR**, you could just import into a different staging table that only has 4 columns, and then either: * dump that into your current STAGING\_TABLE, or * do an `ALTER TABLE` to add the 3 missing columns (it is more efficient to just add 3 NULLable fields than to transfer the data from one table to another :-). ***OR***, as mentioned by @PhilipKelley in a comment on this answer, you could create a View with just those four fields and have that be the destination/target. 
And if you were doing the appropriate steps to enable the operation to be minimally logged, the MSDN page for [Prerequisites for Minimal Logging in Bulk Import](http://msdn.microsoft.com/en-us/library/ms190422.aspx) does not say one way or the other what the effect will be if you use a View. --- Most likely the `\l` was just interpreted as those two literal characters, hence it not respecting the `rowterminator` when you tried it. The `0x0A` will work as I have tested it and it behaves as expected. Your statement should look like the following: ``` BULK INSERT STAGING_TABLE FROM 'FILE_LOCATION' WITH ( DATAFILETYPE = 'widechar', FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0A' ); ``` I tried both with and without a `0x0A` character at the end of the final line and both worked just the same. I then removed one of the commas from one of the lines, leaving it with less than the full set of fields, and that is when I got the following error: ``` Msg 4832, Level 16, State 1, Line 2 Bulk load: An unexpected end of file was encountered in the data file. Msg 7399, Level 16, State 1, Line 2 The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 2 Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". ``` Make sure that all of the rows in the data file have the required number of field separators (`,` in this case). You mentioned having 4 columns in the file so that should be 3 commas per row.
I'd comment to ask these but my reputation is not high enough. I believe "\l" is "linefeed", so that would mesh with you seeing 0A in the file encoding. My first question would be, what character encoding is your data files in? And what is the datatype on your table columns? I would guess that this is going to be a character encoding issue. I see your DATAFILETYPE is 'widechar' Did you confirm that your source file is Unicode? And when you insert data and select it back out, does it look as if the character encoding is being preserved?
Bulk Insert - Row Terminator for UNIX file + "\l" row terminator
[ "", "sql", "sql-server-2008-r2", "bulkinsert", "" ]
I want to reduce my data frame (**EDIT:** in a cpu-efficient way) to rows with unique values of the pair c3, c4, while keeping all columns. In other words I want to transform my data frame ``` > df <- data.frame(c1=seq(7), c2=seq(4, 10), c3=c("A", "B", "B", "C", "B", "A", "A"), c4=c(1, 2, 3, 3, 2, 2, 1)) c1 c2 c3 c4 1 1 4 A 1 2 2 5 B 2 3 3 6 B 3 4 4 7 C 3 5 5 8 B 2 6 6 9 A 2 7 7 10 A 1 ``` to the data frame ``` c1 c2 c3 c4 1 1 4 A 1 2 2 5 B 2 3 3 6 B 3 4 4 7 C 3 6 6 9 A 2 ``` where the values of c1 and c2 could be any value which occurs for a unique pair of c3, c4. Also the order of the resulting data frame is not of importance. **EDIT:** My data frame has around 250 000 rows and 12 columns and should be grouped by 2 columns – therefore **I need a CPU-efficient solution**. ## Working but unsatisfactory alternative I solved this problem with ``` > library(sqldf) > sqldf("Select * from df Group By c3, c4") ``` but in order to speed up and parallelize my program I have to eliminate the calls to sqldf. **EDIT:** Currently the sqldf solution clocks at 3.5 seconds. I consider this a decent time. The problem is that I cannot start various queries in parallel therefore I am searching for an alternative way. ## Not working attempts ### duplicate() ``` > df[duplicated(df, by=c("c3", "c4")),] [1] c1 c2 c3 c4 <0 rows> (or 0-length row.names) ``` selects duplicate rows and does not select rows where only columns c3 and c4 are duplicates. 
### aggregate() ``` > aggregate(df, by=list(df$c3, df$c4)) Error in match.fun(FUN) : argument "FUN" is missing, with no default ``` aggregate requires a function applied to all lines with the same values of c3 and c4 ### data.table's by ``` > library(data.table) > dt <- data.table(df) > dt[,list(c1, c2) ,by=list(c3, c4)] c3 c4 c1 c2 1: A 1 1 4 2: A 1 7 10 3: B 2 2 5 4: B 2 5 8 5: B 3 3 6 6: C 3 4 7 7: A 2 6 9 ``` does not kick out the rows which have non-unique values of c3 and c4, whereas ``` > dt[ ,length(c1), by=list(c3, c4)] c3 c4 V1 1: A 1 2 2: B 2 2 3: B 3 1 4: C 3 1 5: A 2 1 ``` does discard the values of c1 and c2 and reduces them to one dimension as specified with the passed function `length`.
Here is a data.table solution. ``` library(data.table) setkey(setDT(df),c3,c4) # convert df to a data.table and set the keys. df[,.SD[1],by=list(c3,c4)] # c3 c4 c1 c2 # 1: A 1 1 4 # 2: A 2 6 9 # 3: B 2 2 5 # 4: B 3 3 6 # 5: C 3 4 7 ``` The SQL you propose seems to extract the first row having a given combination of (c3,c4) - I assume that's what you want. --- **EDIT**: Response to OP's comments. The result you cite seems really odd. The benchmarks below, on a dataset with 12 columns and 2.5e5 rows, show that the data.table solution runs in about 25 **milliseconds** without setting keys, and in about 7 milliseconds with keys set. ``` set.seed(1) # for reproducible example df <- data.frame(c3=sample(LETTERS[1:10],2.5e5,replace=TRUE), c4=sample(1:10,2.5e5,replace=TRUE), matrix(sample(1:10,2.5e6,replace=TRUE),nc=10)) library(data.table) DT.1 <- as.data.table(df) DT.2 <- as.data.table(df) setkey(DT.2,c3,c4) f.nokeys <- function() DT.1[,.SD[1],by=list(c3,c4)] f.keys <- function() DT.2[,.SD[1],by=list(c3,c4)] library(microbenchmark) microbenchmark(f.nokeys(),f.keys(),times=10) # Unit: milliseconds # expr min lq median uq max neval # f.nokeys() 23.73651 24.193129 24.609179 25.747767 26.181288 10 # f.keys() 5.93546 6.207299 6.395041 6.733803 6.900224 10 ``` In what ways is your dataset different from this one??
Drawback (maybe): All solutions sort the result by group variables. # Using `aggregate` Solution mentioned by Martin: `aggregate(. ~ c3 + c4, df, head, 1)` My old solution: ``` > aggregate(df,by=list(df$c3,df$c4),FUN=head,1) Group.1 Group.2 c1 c2 c3 c4 1 A 1 1 4 A 1 2 A 2 6 9 A 2 3 B 2 2 5 B 2 4 B 3 3 6 B 3 5 C 3 4 7 C 3 > aggregate(df,by=list(df$c3,df$c4),FUN=head,1)[,-(1:2)] c1 c2 c3 c4 1 1 4 A 1 2 6 9 A 2 3 2 5 B 2 4 3 6 B 3 5 4 7 C 3 ``` # Using `ddply` ``` > require(plyr) Loading required package: plyr > ddply(df, ~ c3 + c4, head, 1) c1 c2 c3 c4 1 1 4 A 1 2 6 9 A 2 3 2 5 B 2 4 3 6 B 3 5 4 7 C 3 ```
What is the R equivalent of SQL "SELECT * FROM table GROUP BY c1, c2"?
[ "", "sql", "r", "data.table", "aggregate", "" ]
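The `sqldf` call in the question hands the query to SQLite, one of the few engines that accepts the nonstandard `SELECT * ... GROUP BY c3, c4` and returns an arbitrary member row per group. The semantics being emulated by the R solutions can be seen directly with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE df (c1 INTEGER, c2 INTEGER, c3 TEXT, c4 INTEGER);
INSERT INTO df VALUES (1,4,'A',1),(2,5,'B',2),(3,6,'B',3),(4,7,'C',3),
                      (5,8,'B',2),(6,9,'A',2),(7,10,'A',1);
""")

# SQLite allows bare columns with GROUP BY: one row per (c3, c4) pair,
# with c1/c2 taken from an arbitrary row of each group.
rows = conn.execute("SELECT * FROM df GROUP BY c3, c4").fetchall()
pairs = sorted((r[2], r[3]) for r in rows)
print(len(rows))  # 5 distinct (c3, c4) pairs
print(pairs)      # [('A', 1), ('A', 2), ('B', 2), ('B', 3), ('C', 3)]
```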
This seems to be easy in my head but I can't figure out how to write the query. What I have:

```
user_id   correct_questions
1         5
1         2
2         3
2         1
```

What I want to have:

```
user_id   correct_questions(sum)
1         7
2         4
```

The only thing I get is how often user x has played (with `count(correct_questions) GROUP BY user_id`).
Well, the only thing you have to do is SUM + GROUP BY:

```
select user_id, SUM(correct_questions) as correct_questions
from yourtable
group by user_id
```
``` select sum(correct_questions) from table_name group by user_id ```
SQL Query to count(x) and group by y
[ "", "mysql", "sql", "" ]
After reading many tutorials, I found that there are many ways to bind arguments on prepared statement in Go, some of them ``` SELECT * FROM bla WHERE x = ?col1 AND y = ?col2 SELECT * FROM bla WHERE x = ? AND y = ? SELECT * FROM bla WHERE x = :col1 AND y = :col2 SELECT * FROM bla WHERE x = $1 AND y = $2 ``` **First question**, what is the cross-database way to bind arguments? (that works on any database) **Second question**, none of the tutorial I've read mention about `LIKE` statement, how to bind arguments for `LIKE`-statement correctly? ``` SELECT * FROM bla WHERE x LIKE /*WHAT?*/ ``` **Third question**, also none of them give an example for `IN` statement, how to bind arguments for `IN` statement correctly? ``` `SELECT * FROM bla WHERE x IN ( /*WHAT?*/ ) ```
**What is the cross-database way to bind arguments?** With database/sql, there is none. Each database has its own way to represent parameter placeholders. The Go database/sql package does not provide any normalization facility for the prepared statements. Prepared statement texts are just passed to the underlying driver, and the driver typically just sends them unmodified to the database server (or library for embedded databases). **How to bind arguments for LIKE-statement correctly?** You can use parameter placeholders after a like statement and bind it as a string. For instance, you could write a prepared statement as: ``` SELECT a from bla WHERE b LIKE ? ``` Here is an example (error management handling omitted). ``` package main import ( "database/sql" "fmt" _ "github.com/go-sql-driver/mysql" ) // > select * from bla ; // +------+------+ // | a | b | // +------+------+ // | toto | titi | // | bobo | bibi | // +------+------+ func main() { // Open connection db, err := sql.Open("mysql", "root:XXXXXXX@/test") if err != nil { panic(err.Error()) // proper error handling instead of panic in your app } defer db.Close() // Prepare statement for reading data stmtOut, err := db.Prepare("SELECT a FROM bla WHERE b LIKE ?") if err != nil { panic(err.Error()) // proper error handling instead of panic in your app } defer stmtOut.Close() var a string b := "bi%" // LIKE 'bi%' err = stmtOut.QueryRow(b).Scan(&a) if err != nil { panic(err.Error()) // proper error handling instead of panic in your app } fmt.Printf("a = %s\n", a) } ``` Note that the % character is part of the bound string, not of the query text. **How to bind arguments for IN statement correctly?** None of the databases I know allows binding a list of parameters directly with a IN clause. This is not a limitation of database/sql or the drivers, but this is simply not supported by most database servers. 
You have several ways to work the problem around: * you can build a query with a fixed number of placeholders in the IN clause. Only bind the parameters you are provided with, and complete the other placeholders by the NULL value. If you have more values than the fixed number you have chosen, just execute the query several times. This is not extremely elegant, but it can be effective. * you can build multiple queries with various number of placeholders. One query for IN ( ? ), a second query for IN (?, ?), a third for IN (?,?,?), etc ... Keep those prepared queries in a statement cache, and choose the right one at runtime depending on the number of input parameters. Note that it takes memory, and generally the maximum number of prepared statements is limited, so it cannot be used when the number of parameters is high. * if the number of input parameters is high, insert them in a temporary table, and replace the query with the IN clause by a join with the temporary table. It is effective if you manage to perform the insertion in the temporary table in one roundtrip. With Go and database/sql, it is not convenient because there is no way to batch queries. Each of these solutions has drawbacks. None of them is perfect.
I'm a newbie to Go but just to answer the first part: > First question, what is the cross-database way to bind arguments? (that works on any database) If you use [sqlx](https://godoc.org/github.com/jmoiron/sqlx), which is a superset of the built-in sql package, then you should be able to use [sqlx.DB.Rebind](https://godoc.org/github.com/jmoiron/sqlx#Rebind) to achieve that.
Cross-database prepared statement binding (like and where in) in Golang
[ "", "sql", "go", "prepared-statement", "sql-like", "where-in", "" ]
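The "one placeholder per value" workaround for `IN`, and putting the `%` wildcard inside the bound parameter for `LIKE`, can both be sketched with Python's sqlite3; the placeholder-string generation is the same idea you would implement in Go's database/sql:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bla (x INTEGER, y TEXT);
INSERT INTO bla VALUES (1,'ab'),(2,'bc'),(3,'bd'),(4,'cd');
""")

# LIKE: the wildcard lives in the bound value, not in the SQL text.
like_rows = [r[0] for r in conn.execute(
    "SELECT x FROM bla WHERE y LIKE ? ORDER BY x", ("b%",))]

# IN: generate one '?' per value, then bind the whole list.
wanted = [2, 4, 7]
placeholders = ",".join("?" * len(wanted))          # -> "?,?,?"
sql = "SELECT x FROM bla WHERE x IN (%s) ORDER BY x" % placeholders
in_rows = [r[0] for r in conn.execute(sql, wanted)]

print(like_rows, in_rows)  # [2, 3] [2, 4]
```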
I have a following stored procedure. I have variable `@PercentCleared` . `IF @PercentCleared <= 70` then I have to subtract `PercentCleared-10` else use as it is. Can you please advise me the right way to do? ``` DECLARE @PercentCleared INT DECLARE @TenPercent int SET @TenPercent = 10 IF(@PercentCleared <=70) BEGIN SELECT @I, dbo.tblV.VegTypeCode, dbo.tblV.VegTypeName FROM dbo.tblVegetationType INNER JOIN dbo.tblVegFormationLink ON dbo.tblV.VegTypeID = dbo.tblVegFormationLink.VegTypeID WHERE dbo.tblVegetationType.PercentageCleared >=(@PercentCleared - @TenPercent) WHERE a.VegTypeID = dbo.tblVegetationType.VegTypeID AND dbo.tblVegetationType.VegTypeID <> (SELECT VegTypeID FROM @EcosystemCredits eco WHERE eco.theID = @I) END ELSE IF BEGIN SELECT @I, dbo.tblV.VegTypeCode, dbo.tblV.VegTypeName FROM dbo.tblVegetationType INNER JOIN dbo.tblVegFormationLink ON dbo.tblV.VegTypeID = dbo.tblVegFormationLink.VegTypeID WHERE dbo.tblVegetationType.PercentageCleared >=@PercentCleared WHERE a.VegTypeID = dbo.tblVegetationType.VegTypeID AND dbo.tblVegetationType.VegTypeID <> (SELECT VegTypeID FROM @EcosystemCredits eco WHERE eco.theID = @I) End ```
I think you can do this by using a CASE statement in the WHERE clause (note the duplicate `WHERE` from your original needs to become `AND`):

```
SELECT  @I, dbo.tblV.VegTypeCode, dbo.tblV.VegTypeName 
FROM  dbo.tblVegetationType INNER JOIN dbo.tblVegFormationLink 
ON dbo.tblV.VegTypeID = dbo.tblVegFormationLink.VegTypeID 
WHERE dbo.tblVegetationType.PercentageCleared >= @PercentCleared - 
      CASE WHEN (@PercentCleared <=70) THEN @TenPercent ELSE 0 END -- change here
AND a.VegTypeID = dbo.tblVegetationType.VegTypeID 
AND dbo.tblVegetationType.VegTypeID <> (SELECT VegTypeID FROM @EcosystemCredits eco WHERE eco.theID = @I)
```
You can put the logic in the `where` clause.

```
SELECT @I, vt.VegTypeCode, vt.VegTypeName 
FROM dbo.tblVegetationType vt INNER JOIN
     dbo.tblVegFormationLink vfl
     ON vt.VegTypeID = vfl.VegTypeID 
WHERE (vt.PercentageCleared >= @PercentCleared -
           (CASE WHEN @PercentCleared <= 70 THEN @TenPercent ELSE 0 END)
      ) AND
      (vt.VegTypeID <> (SELECT VegTypeID FROM @EcosystemCredits eco WHERE eco.theID = @I)
      );
```

I simplified the query by using table aliases. Also, you had two `where` clauses and the second was redundant.
Any other way to avoid if else condition
[ "", "sql", "sql-server", "" ]
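The idea in both answers — replacing the IF/ELSE branches with a single `CASE` expression inside the `WHERE` clause — can be demonstrated on a simplified, hypothetical table. Python's `sqlite3` stands in for SQL Server here (the table name and rows are invented); the `CASE WHEN ... THEN ... ELSE ... END` syntax is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE veg (id INTEGER, pct_cleared INTEGER)")
conn.executemany("INSERT INTO veg VALUES (?, ?)", [(1, 50), (2, 65), (3, 80)])

def matching_ids(percent_cleared, ten_percent=10):
    # subtract 10 only when the threshold is <= 70, all inside one query
    sql = """
        SELECT id FROM veg
        WHERE pct_cleared >= ? - CASE WHEN ? <= 70 THEN ? ELSE 0 END
        ORDER BY id
    """
    return [r[0] for r in conn.execute(sql, (percent_cleared, percent_cleared, ten_percent))]

low = matching_ids(70)   # effective threshold 70 - 10 = 60
high = matching_ids(75)  # effective threshold 75 (no subtraction)
```

One query covers both branches, which is exactly what makes the IF/ELSE duplication unnecessary.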
This function is based on an Oracle PL/SQL function:

```
create or replace FUNCTION SP_ComputeEntity 
(P_ENTITY NUMERIC, 
P_CAMPAIGN NUMERIC, 
P_COMPLETE_IF NUMERIC, 
P_COMPUTE_MODE NUMERIC ) RETURNS VOID AS $$

DECLARE
[..]

-- This is the list of the entity's subordinates
[..]

-- This is the list of the entity's questionnaires
[..]

BEGIN 
-- If the entity must not be computed for this campaign stop
[..]

-- Check if already computed .
[..]

-- If not already computed compute it now .
IF V_EXISTS = 0 THEN 

    -- Loop on subordinates to check if already computed
    OPEN ENTITY_COLUMNS;
    LOOP
    FETCH ENTITY_COLUMNS INTO V_COLUMN_ID;
    EXIT WHEN ENTITY_COLUMNS%NOTFOUND;

        SP_ComputeEntity(V_COLUMN_ID, P_CAMPAIGN, P_COMPLETE_IF, P_COMPUTE_MODE);

    END LOOP;
    CLOSE ENTITY_COLUMNS;
[..]
END;
$$ LANGUAGE plpgsql;
```

My problem is that pgAdmin III gives me

ERROR: syntax error at or near "SP\_ComputeEntity" SQL state: 42601 Character: 1773

and I don't know why. Can it be done? Can a function call itself in PostgreSQL?
If you don't need a result from the query, you can use [Perform](http://www.postgresql.org/docs/9.1/static/plpgsql-statements.html) ``` PERFORM SP_ComputeEntity(V_COLUMN_ID, P_CAMPAIGN, P_COMPLETE_IF, P_COMPUTE_MODE); ```
You can also use your function just like a select and retrieve the results into variables. Just for example:

```
SELECT INTO myvar myres FROM myfunction(param1, param2);
```
How do I call a function inside a function
[ "", "sql", "oracle", "postgresql", "" ]
I normally don't do database programming, so I'm rusty on how to do certain things. I have an issue where I need to take an item and, if this item is in the same location but in different placements, divide the item's value among the placements in proportion to their share of the total count.

Here is my table structure:

```
LOCATION PLACEMENT VALUE COUNT ITEM
25       12345     100   10    55555 <----
25       67890     100   20    55555 <----
25       11111     50    5     00000
25       22222     75    5     11111
```

In other words, `Item (55555)` is in 2 placements and the value of this item is `100`. The new values should be: `PLACEMENT 12345` will be (10/30) \* 100 = 33.3 and `PLACEMENT 67890` will be (20/30) \* 100 = 66.7.

Any idea how to do this in SQL or HQL?
```
create table new as 
select item, count(distinct placement) as dist_placement,
       count(count) as count, sum(count) as s_count 
from mytable 
group by item, location;

hive> select * from new;
OK
00000   1   1   5
11111   1   1   5
55555   2   2   30

Create table final as 
select b.location as location, b.placement as placement,
       CASE WHEN a.count=2 and a.dist_placement=2 
            then cast(((b.count/a.s_count)*b.value) as double) 
            ELSE cast(b.value as double) 
       END,
       b.count as count, b.item as item 
from new a join mytable b on a.item=b.item;

select * from final;
```

output

```
location placement value             count item
25       12345     33.33333333333333 10    55555
25       67890     66.66666666666666 20    55555
25       11111     50.0              5     00000
25       22222     75.0              5     11111
```

If you give input with the same placement for an item:

```
LOCATION PLACEMENT VALUE COUNT ITEM
25       12345     100   10    55555 <----
25       12345     100   20    55555 <----
25       11111     50    5     00000
25       22222     75    5     11111
```

the output will be

```
LOCATION PLACEMENT VALUE COUNT ITEM
25       12345     100.0 10    55555
25       12345     100.0 20    55555
25       11111     50.0  5     00000
25       22222     75.0  5     11111
```

Am I right? Let me know if you have other requirements.
Your sample table ``` SELECT * INTO #TEMP FROM ( SELECT 25 LOCATION,12345 PLACEMENT,100 VALUE ,10 [COUNT], 55555 ITEM UNION ALL SELECT 25 , 67890 , 100 , 20,55555 UNION ALL SELECT 25 , 11111 , 50 , 5,00000 UNION ALL SELECT 25 , 22222 , 75 , 5, 11111 )TAB ``` Your result is below ``` SELECT *, CAST(([COUNT]/CAST(SUM([COUNT]) OVER(PARTITION BY ITEM)AS NUMERIC(20,2)))*VALUE AS NUMERIC(20,1)) Result FROM #TEMP ``` ![enter image description here](https://i.stack.imgur.com/ykcU4.jpg)
SQL - Function to divide value among rows
[ "", "sql", "hive", "hql", "" ]
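Both answers compute each placement's share as `value * count / SUM(count)` per item. A minimal, database-agnostic sketch of that formula using a correlated subquery (Python's `sqlite3` here, with the question's sample data; Hive and SQL Server have their own idioms as shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (location INT, placement INT, value REAL, cnt INT, item TEXT)")
conn.executemany("INSERT INTO items VALUES (?,?,?,?,?)",
                 [(25, 12345, 100, 10, '55555'),
                  (25, 67890, 100, 20, '55555'),
                  (25, 11111, 50, 5, '00000'),
                  (25, 22222, 75, 5, '11111')])

# split each item's value across placements in proportion to cnt
sql = """
    SELECT placement,
           value * cnt * 1.0 /
           (SELECT SUM(cnt) FROM items i2
            WHERE i2.item = i1.item AND i2.location = i1.location) AS share
    FROM items i1
    ORDER BY placement
"""
shares = {p: round(s, 1) for p, s in conn.execute(sql)}
```

Items with a single placement keep their full value, since their count equals the group total.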
I have a question regarding ColdFusion and loops. I have this program where I ask for input from the user. The user can enter something for each food item.

```
<cfloop query = "GET_ITEM">
  <tr>
    <td align="left" nowrap>
      <label>#GET_ITEM.ITEM_NBR#</label>
    </td>
    <input type="hidden" name="Item_number" id="Item_number" value="#GET_ITEM.ITEM_NBR#">
    <td>
      <input type="text" name="on_hand" id="on_hand" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="transit" id="transit" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="target_level" id="target_level" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="percentonhand" id="percentonhand" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
  </tr>
</cfloop>
```

I want to insert each record into my table separately using the code below.

```
<cfquery name = "insert_records">
  <cfloop index="Form.On_hand" list="#FORM.On_hand#" delimiters=",">
    Insert into sometable
      (VENDORCODE, ITEM_NBR, Item_desc, Target_Level, Target_Date_Active, Target_Date_End, Vendor_name, Per_of_Actual)
    Values (
      <cfqueryparam value = "#Form.Vendor_code#" cfsqltype = "CF_SQL_INTEGER">,
      <cfqueryparam value = "#Item_number#" cfsqltype = "CF_SQL_VARCHAR">,
      <cfqueryparam value = "#Trim(itemdesc.Item_desc)#" cfsqltype = "CF_SQL_VARCHAR">,
      <cfqueryparam value = "#Trim(FORM.On_hand)#" cfsqltype = "CF_SQL_INTEGER">,
      '2014-12-02',
      '2040-01-01',
      <cfqueryparam value = "#Trim(itemdesc.Vendor_name)#" cfsqltype = "CF_SQL_VARCHAR">,
      100
    )
  </cfloop>
</cfquery>
```

My issue is twofold:

1. How do I ask for the user input and make each record unique?
2. After I get the input, how do I insert each record separately into the database?
In your first loop try this:

```
<cfloop query = "GET_ITEM">
  <tr>
    <td align="left" nowrap>
      <label>#GET_ITEM.ITEM_NBR#</label>
    </td>
    <input type="hidden" name="Item_number" id="Item_number" value="#GET_ITEM.ITEM_NBR#">
    <td>
      <input type="text" name="on_hand_#get_item.Item_nbr#" id="on_hand" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="transit_#get_item.Item_nbr#" id="transit" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="target_level_#get_item.Item_nbr#" id="target_level" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
    <td>
      <input type="text" name="percentonhand_#get_item.Item_nbr#" id="percentonhand" value="" size="20" onKeyPress="javascript:CheckNumeric();" />
    </td>
  </tr>
</cfloop>
```

When submitted you will have a list of item numbers in `form.item_nbr` and corresponding values for each number. Your second loop can work like this:

```
<cfquery name = "insert_records">
  <cfloop list="#form.item_nbr#" index="item">
    Insert into sometable
      (VENDORCODE, ITEM_NBR, Item_desc, Target_Level, Target_Date_Active, Target_Date_End, Vendor_name, Per_of_Actual)
    Values (
      <cfqueryparam value = "#Form.Vendor_code#" cfsqltype = "CF_SQL_INTEGER">,
      <cfqueryparam value = "#Item#" cfsqltype = "CF_SQL_VARCHAR">,
      <cfqueryparam value = "#Trim(itemdesc.Item_desc)#" cfsqltype = "CF_SQL_VARCHAR">,
      <cfqueryparam value = "#Trim(FORM["on_hand_" & item])#" cfsqltype = "CF_SQL_INTEGER">,
      '2014-12-02',
      '2040-01-01',
      <cfqueryparam value = "#Trim(itemdesc.Vendor_name)#" cfsqltype = "CF_SQL_VARCHAR">,
      100
    )
  </cfloop>
</cfquery>
```

I'm not sure exactly where the itemdesc.*value* is coming from in this query - I assume another query based on the item. In that case you may want to loop *outside* this query and do one insert query *per item* rather than batching them. There is not much of a penalty for that for a typical shopping cart form.
Issue one, how to make things unique, you have most of it down pat but if you do this: ``` <cfset x = 0> <cfloop query="GET_ITEM"> <cfset x++> <input name="uniqueID_#x#" value="#x#" type="hidden"> <tr> <td align="left" nowrap> <label>#ITEM_NBR#</label> </td> <input type="hidden" name="Item_number" id="Item_number" value="#GET_ITEM.ITEM_NBR#"> <td> <input type="text" name="on_hand#x#" id="on_hand" value="" size="20" onKeyPress="javascript:CheckNumeric();" /> </td> etc... </cfloop> ``` you'll notice that you don't need to keep referring to the query name while inside your query loop when referencing the columns. `x` at this point is essentially an index, by adding it to the form field `name` part you can reference each individual form. So on receiving this entry, I would do something like this: ``` <cfquery name = "insert_records"> <cfloop collection=#form# item="field"> <cfif left(field,9) eq 'uniqueID_'> <cfset uniqueid = right(field,1)><!--- you'll have to work out your own logic for where you have more than 10 forms to a page---> Insert into sometable (VENDORCODE, ITEM_NBR, Item_desc, Target_Level, Target_Date_Active, Target_Date_End, Vendor_name, Per_of_Actual ) Values ( <cfqueryparam value = "#Form.Vendor_code##uniqueid#" cfsqltype = "CF_SQL_INTEGER">, <cfqueryparam value = "#Item_number#" cfsqltype = "CF_SQL_VARCHAR"> , <cfqueryparam value = "#Trim(itemdesc.Item_desc)#" cfsqltype = "CF_SQL_VARCHAR">, <cfqueryparam value = "#Trim(FORM.On_hand)##uniqueid#" cfsqltype = "CF_SQL_INTEGER">, '2014-12-02', '2040-01-01', <cfqueryparam value = "#Trim(itemdesc.Vendor_name)#" cfsqltype = "CF_SQL_VARCHAR">, 100 ) </cfif> </cfloop> </cfquery> ```
Coldfusion Loop for insert
[ "", "mysql", "sql", "coldfusion", "" ]
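Stripped of ColdFusion specifics, the core of both answers is the same pattern: suffix each form field name with the item number, then run one parameterized INSERT per item. A Python `sqlite3` sketch of that idea (the form dictionary, field names, and two-column table are invented for illustration):

```python
import sqlite3

# hypothetical submitted form: a list of item numbers plus one suffixed field per item
form = {"item_nbr": "A1,B2", "on_hand_A1": "10", "on_hand_B2": "25"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (item_nbr TEXT, target_level INTEGER)")

# one parameterized insert per submitted item, mirroring the cfloop/cfqueryparam pattern
for item in form["item_nbr"].split(","):
    conn.execute("INSERT INTO sometable VALUES (?, ?)",
                 (item, int(form["on_hand_" + item])))

saved = conn.execute(
    "SELECT item_nbr, target_level FROM sometable ORDER BY item_nbr").fetchall()
```

The per-item suffix is what makes each record's input unique, and the parameterized statement keeps each insert safe from injection.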
My query: ``` SELECT id, pl, ka, user, users.imie, users.id, users.nazwisko, users.klasa FROM word LEFT JOIN users ON word.user = users.id BY word.pl ASC ``` It gives me this error: ``` #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'BY word.pl ASC LIMIT 0, 30' at line 1 ```
Missing `ORDER` in `ORDER BY`. Also, both tables have an `id` column, so you need to qualify it as `users.id` and/or `word.id` to avoid the ambiguity error:

```
SELECT users.id, word.id, pl, ka, user, users.imie, users.nazwisko, users.klasa 
FROM word 
LEFT JOIN users 
ON word.user = users.id 
ORDER BY word.pl ASC
```
You forgot ORDER: ``` ORDER BY word.pl ASC ```
#1064 - check the manual that corresponds to your MySQL server version for the right syntax to use near 'BY word.pl ASC LIMIT 0, 30' at line 1
[ "", "mysql", "sql", "" ]
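Two things are wrong in the original query: the missing `ORDER` keyword and, as the accepted answer notes, an unqualified `id` that exists in both tables. A small reproduction of both points using Python's `sqlite3` (sample rows invented; SQLite reports the same kind of ambiguity error MySQL would):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, imie TEXT)")
conn.execute("CREATE TABLE word (id INTEGER, pl TEXT, user INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'anna')")
conn.executemany("INSERT INTO word VALUES (?, ?, ?)", [(1, 'kot', 1), (2, 'dom', 1)])

# qualified columns plus the full ORDER BY keyword make the query valid
rows = conn.execute("""
    SELECT word.id, word.pl FROM word
    LEFT JOIN users ON word.user = users.id
    ORDER BY word.pl ASC
""").fetchall()

# a bare "id" is rejected because both joined tables define one
try:
    conn.execute("SELECT id FROM word LEFT JOIN users ON word.user = users.id")
    ambiguous = False
except sqlite3.OperationalError:
    ambiguous = True
```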
Umm, I'm not sure I've worded the title right, but it's kind of hard to express in short words. I have two tables.

**table 1:**

```
id | name
1  | alice
2  | bob
```

**table 2:**

```
user_id | date
2       | 2014-11-1
2       | 2014-11-2
1       | 2014-11-3
```

In a query, I want to show `table 2`, but instead of the integer values of `user_id`, I want it to show the corresponding names of the users, which are stored in `table 1`. I think this is supposed to be easy, but I don't know how to get it done.
A query along the lines of - ``` select t1.name, t2.date from table_1 t1 inner join table_2 t2 on t1.id = t2.user_id ```
Try: ``` SELECT t2.user_id, t1.name FROM table1 t1 INNER JOIN table2 t2 ON t1.id = t2.user_id ```
how to make query show the an other column of the primary key?
[ "", "mysql", "sql", "" ]
I am currently trying to join on the same table twice. I have a table that looks similar to the one below:

Trip:

```
arrival_loc_code
leaving_loc_code
```

There is another table with the values for those locations:

Locations:

```
location_code (pk)
location
```

I am attempting to get the arrival and leaving locations based on the location codes in trip. My current attempt:

```
select leavingloc.location,arrivalloc.location from trip 
join locations as leavingloc 
on trip.departureloccode = locations.locationcode 
join locations as arrivalloc 
on trip.arrivalloccode = locations.locationcode;
```

leads me to:

```
ERROR 1054 (42S22): Unknown column 'locations.locationcode' in 'on clause'
```

At this point I am pretty sure there is something I am misunderstanding and would appreciate any help.
You have aliased your tables but then not used the alias in the join clause, so mysql is getting confused: ``` select leavingloc.location, arrivalloc.location from trip join locations as leavingloc on trip.departureloccode = leavingloc.locationcode join locations as arrivalloc on trip.arrivalloccode = arrivalloc.locationcode; ```
This is because the alias name is different from what you are using. Change

```
join locations as leavingloc 
on trip.departureloccode = locations.locationcode 
join locations as arrivalloc 
on trip.arrivalloccode = locations.locationcode;
```

to

```
join locations as leavingloc 
on trip.departureloccode = leavingloc.locationcode 
join locations as arrivalloc 
on trip.arrivalloccode = arrivalloc.locationcode;
```
MySQL Join Unknown column in 'on clause'
[ "", "mysql", "sql", "join", "" ]
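The fix in both answers is the same: once a table is aliased, the `ON` clause must reference the alias, not the original table name. A runnable sketch of the corrected double join (Python `sqlite3`, with invented sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locations (location_code INTEGER PRIMARY KEY, location TEXT)")
conn.executemany("INSERT INTO locations VALUES (?, ?)", [(1, 'Oslo'), (2, 'Bergen')])
conn.execute("CREATE TABLE trip (arrival_loc_code INTEGER, leaving_loc_code INTEGER)")
conn.execute("INSERT INTO trip VALUES (2, 1)")

# each copy of locations gets its own alias; the ON clauses use those aliases
leaving, arrival = conn.execute("""
    SELECT leavingloc.location, arrivalloc.location
    FROM trip
    JOIN locations AS leavingloc ON trip.leaving_loc_code = leavingloc.location_code
    JOIN locations AS arrivalloc ON trip.arrival_loc_code = arrivalloc.location_code
""").fetchone()
```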
I have a table that has multiple duplicate records in the first column (ID records), but has varying numerical data in the second column. I want to be able to identify which ID records have 0 for all of their numerical records. For example the table can look like: ``` ID Value 1 2 1 2 1 0 2 0 2 0 2 0 ``` I would want to only identify ID 2 because all the values are equal to 0. I don't want ID 1 because there are values > 0 Sorry if this isn't formatted properly or confusing.
```
SELECT DISTINCT ID
FROM TABLE
WHERE ID NOT IN (SELECT DISTINCT ID FROM TABLE WHERE VALUE <> 0)
```

This selects every ID in the table for which there is no row with a non-zero value.
You might use "NOT IN": ``` SELECT DISTINCT Id FROM table1 WHERE Id NOT IN (SELECT Id FROM table1 WHERE Value <> 0) ```
How to SQL Query records from Multiple that Equal 0?
[ "", "sql", "duplicates", "" ]
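Both answers use `NOT IN` with a subquery. An equivalent (and often cheaper) formulation groups by `ID` and keeps only groups whose maximum value is 0. Both variants run against the question's sample data here (Python `sqlite3` sketch; the table name is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, value INTEGER)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(1, 2), (1, 2), (1, 0), (2, 0), (2, 0), (2, 0)])

# NOT IN variant, as in the answers
not_in = [r[0] for r in conn.execute("""
    SELECT DISTINCT id FROM readings
    WHERE id NOT IN (SELECT DISTINCT id FROM readings WHERE value <> 0)
""")]

# aggregate variant: every value is 0 exactly when the group's maximum is 0
having = [r[0] for r in conn.execute(
    "SELECT id FROM readings GROUP BY id HAVING MAX(value) = 0")]
```

The `MAX(value) = 0` test assumes values are non-negative; with negative values, `HAVING MIN(value) = 0 AND MAX(value) = 0` would be needed instead.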
Hi, we are working on a project and I am trying to call a stored procedure. I have searched for a solution but didn't find anything explaining how to call a stored procedure, so can anyone please tell me how to execute one?
However, I finally used the code below to execute the stored procedure and get the result.

```
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress)) {
    // temporary scope for new connection and setting stored procedure, parameters, and return RecordName list
    using (SqlConnection cn = new SqlConnection(_settingsManager.LoadSettings().First().DataConnectionString)) {
        if (cn.State == ConnectionState.Closed) {
            cn.Open();
        }
        const string storedProcedure = "usp_spName";
        SqlCommand cmd = new SqlCommand(storedProcedure, cn);
        cmd.Parameters.AddWithValue("@IsVerified", "value");
        cmd.Parameters.AddWithValue("@page", "value");
        // for the output parameter
        cmd.Parameters.AddWithValue("@totalRows", 0);
        cmd.Parameters["@totalRows"].Direction = ParameterDirection.Output;
        cmd.CommandType = CommandType.StoredProcedure;
        IDataReader reader = cmd.ExecuteReader();
        while (reader.Read()) {
            // to get the values of the result
            int id = Convert.ToInt32(reader["Id"].ToString());
        }
        reader.Close();
        try {
            // to get the output parameter
            totalRecords = (int)cmd.Parameters["@totalRows"].Value;
        } catch {
            totalRecords = 0;
        }
        cmd.Parameters.Clear();
        cmd.Dispose();
        cn.Close();
    }
}
```
Actually, I don't know about stored procedures, but you can call stored functions using code like the example below. I'm new to C# and Orchard, so my approach may not be correct or good enough. First you should get an `ISessionLocator` instance using `IOrchardServices`, then create an NHibernate `ISession` instance, and then an `IQuery`/`ISQLQuery` with `CreateSQLQuery()`. Here is example code of a `Services` class:

```
public class ExampleService {
    private readonly IOrchardServices _oServices;

    public EParamsServices(IOrchardServices oServices) {
        _oServices = oServices;
    }

    public float GetRegionPopulationDencity(int rId){
        //resolve ISession locator (you can do this using dependencies in ctor)
        ISessionLocator sl = _oServices.WorkContext.Resolve<ISessionLocator>();

        //create session
        ISession _session = sl.For(null);

        // create raw SQL query
        return _session.CreateSQLQuery(@"SELECT data.RegionDencity(:rId) as result")
            .AddScalar("result", NHibernateUtil.Single)
            .SetParameter<int>("rId", rId)
            .List<float>()
            .FirstOrDefault<float>();
    }
}
```

I think you can execute stored procs the same way; just change the SQL code to `exec yourProcName` and use `ExecuteUpdate()` instead of the `List()` method (I'm not sure about this part). You should also add a reference to the `NHibernate` assembly to your project and add `NHibernate` & `Orchard.Data` to your `using` section.
Stored Procedure in Orchard
[ "", "sql", "orchardcms", "" ]
I'm trying to make an eCommerce web site, and on the `ProductCategory` page I have to build a list of the `Companies` referenced by the `Products`.

I have a `Product` table that contains:

```
ProductID
ProductName
...
MarkID
```

And I have a `Company` table that contains:

```
CompanyID
CompanyName
```

I want to combine them in one query. With this query,

```
SELECT CompanyID, count(CompanyID) as CompanyCount
FROM Products
GROUP by CompanyID
```

I get this result:

```
CompanyID CompanyCount
-------------------------
1         2
3         1
4         4
```

Then I just want to inner join in the `CompanyName` and get a result like this:

```
CompanyName CompanyCount
---------------------------
1           2
3           1
4           4
```

How can I do that?
This can be achieved without a subquery. ``` SELECT C.CompanyID, C.CompanyName, COUNT(*) FROM Products P INNER JOIN Companies C ON P.CompanyId = C.CompanyID GROUP BY C.CompanyID, C.CompanyName ```
With a subquery: ``` SELECT * FROM ( SELECT CompanyID, count(CompanyID) as CompanyCount FROM Products GROUP by CompanyID ) CompanyCounts INNER JOIN Companies on CompanyCounts.CompanyId = Companies.CompanyID ``` If you need this in more places, you may want to create a view for company count.
SQL Server inner join after Count and GroupBy
[ "", "sql", "sql-server", "t-sql", "" ]
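The accepted answer's point — join first, then group, no subquery required — in runnable form. Python's `sqlite3` stands in for SQL Server, and the company and product rows are invented to match the counts in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (company_id INTEGER PRIMARY KEY, company_name TEXT)")
conn.executemany("INSERT INTO companies VALUES (?, ?)",
                 [(1, 'Acme'), (3, 'Globex'), (4, 'Initech')])
conn.execute("CREATE TABLE products (product_id INTEGER, company_id INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(10, 1), (11, 1), (12, 3), (13, 4), (14, 4), (15, 4), (16, 4)])

# join first, then group -- no subquery needed
counts = conn.execute("""
    SELECT c.company_name, COUNT(*) AS company_count
    FROM products p
    JOIN companies c ON p.company_id = c.company_id
    GROUP BY c.company_id, c.company_name
    ORDER BY c.company_id
""").fetchall()
```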
I've got a import process that copies a table schema with the code below and then populates the table with data. However it doesn't copy over the roles granted. ``` CREATE TABLE TOTABLE (LIKE FROMTABLE INCLUDING INDEXES) ``` Is there a way I can copy privileges when the schema is copied, or I can apply the privileges afterwards from the "FROMTABLE"?
Information about tables in PostgreSQL is stored in the `pg_class` table. The field containing table privileges is `relacl`. So something like the following would work:

`update pg_class set relacl = (select relacl from pg_class where relname = 'from_table') where relname='to_table';`

Note that `pg_class` has metadata for all tables -- so you should also take care to make sure you are using the right schema (`relnamespace`) in case there are tables of the same name in multiple schemas.
Be ***very careful*** when manipulating catalog tables directly. It's generally advisable to use DDL statements exclusively. Catalog tables are not meant to be written by users. If you mess this up, your DB cluster might be corrupted beyond repair. You have been warned. **Update**: Turns out, the above warning is quite right. This was a bad idea to begin with. Standard `GRANT` / `REVOKE` commands (as well as the [default privilege system](https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html)) also make entries in [`pg_shdepend`](https://www.postgresql.org/docs/current/catalog-pg-shdepend.html) table to remember dependencies between objects and roles mentioned in the access control list (except for the owner, which is linked anyway). [The manual:](https://www.postgresql.org/docs/current/catalog-pg-shdepend.html) > The catalog `pg_shdepend` records the dependency relationships between > database objects and shared objects, such as roles. This information > allows PostgreSQL to ensure that those objects are unreferenced before > attempting to delete them. By manipulating the access control list (`relacl` for relations) directly, dependencies fall out of sync, which can lead to "strange" problems when trying to drop roles later. There was a recent [discussion about "Copying Permissions" on pgsql-hackers](https://www.postgresql.org/message-id/flat/CADkLM%3DdovdTM4LJ1-_YuNJWjeOU00XrwYvhU5v3-ZytBZ8zDbQ%40mail.gmail.com#CADkLM=dovdTM4LJ1-_YuNJWjeOU00XrwYvhU5v3-ZytBZ8zDbQ@mail.gmail.com) (Nov 2016), but nothing has been implemented, yet. ### Incomplete solution (do not use!) The [query presented by @Robert](https://stackoverflow.com/a/27302555/939860) has a bug (as he noted): `relname` is *not* unique. There can be any number of tables with the same name in multiple schemas of the same db. 
To fix: ``` UPDATE pg_class c_to SET relacl = c_from.relacl FROM pg_class c_from WHERE c_from.oid = 'public.from_table'::regclass AND c_to.oid = 'public.to_table'::regclass ``` ### Differences * The cast to `regclass` picks a table deterministically, *even without schema-qualification*. Details: * [How do I speed up counting rows in a PostgreSQL table?](https://stackoverflow.com/questions/14570488/how-do-i-speed-up-counting-rows-in-a-postgresql-table/14571935#14571935) * If one of the tables is not found, you get an exception immediately (the cast to `regclass` fails). @Robert's query would happily set `relacl` to NULL if `from_table` is not found.
Can you copy table privileges from one table to another in PostgreSQL?
[ "", "sql", "postgresql", "privileges", "" ]
I tried like this. This is my query:

```
SELECT sloat_day, sloat_start_time 
FROM time_table 
WHERE sloat_day BETWEEN '2014-12-01' AND '2014-12-07' 
ORDER BY sloat_day, sloat_start_time ASC
```

I got the output below:

```
sloat_day     sloat_start_time
2014-12-01    6.30
2014-12-01    7.30
2014-12-01    18.30
2014-12-01    19.30
2014-12-02    6.30
2014-12-02    7.30
2014-12-02    18.30
2014-12-02    19.30
2014-12-03    6.30
2014-12-03    7.30
2014-12-03    18.30
2014-12-03    19.30
2014-12-04    18.30
2014-12-04    19.30
2014-12-05    7.30
2014-12-05    18.30
2014-12-05    19.30
2014-12-06    19.30
2014-12-07    7.30
2014-12-07    8.30
2014-12-07    18.30
2014-12-07    19.30
```

I want the query result to look like the output below. What do I need to change in the query to get this output? Please help me.

```
2014-12-01    6.30
2014-12-02    6.30
2014-12-03    6.30
2014-12-04    18.30
2014-12-05    7.30
2014-12-06    19.30
2014-12-07    7.30
2014-12-01    7.30
2014-12-02    7.30
2014-12-03    7.30
2014-12-04    19.30
2014-12-05    18.30
2014-12-07    8.30
2014-12-01    18.30
2014-12-02    18.30
2014-12-03    18.30
2014-12-05    19.30
2014-12-07    18.30
```
Try this: ``` SELECT sloat_day, sloat_start_time FROM (SELECT sloat_day, sloat_start_time, IF(@day=@day:=sloat_day, @id:=@id+1, @id:=1) rank FROM time_table, (SELECT @id:=0, @day:='') AS a WHERE sloat_day BETWEEN '2014-12-01' AND '2014-12-07' ORDER BY sloat_day, sloat_start_time ) AS A ORDER BY rank, sloat_day, sloat_start_time ``` Check [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/0b4032/1): **OUTPUT** ``` | SLOAT_DAY | SLOAT_START_TIME | |-------------------| -----------------| | December, 01 2014 | 06:30:00 | | December, 02 2014 | 06:30:00 | | December, 03 2014 | 06:30:00 | | December, 04 2014 | 18:30:00 | | December, 05 2014 | 07:30:00 | | December, 06 2014 | 19:30:00 | | December, 07 2014 | 07:30:00 | | December, 01 2014 | 07:30:00 | | December, 02 2014 | 07:30:00 | | December, 03 2014 | 07:30:00 | | December, 04 2014 | 19:30:00 | | December, 05 2014 | 18:30:00 | | December, 07 2014 | 08:30:00 | | December, 01 2014 | 18:30:00 | | December, 02 2014 | 18:30:00 | | December, 03 2014 | 18:30:00 | | December, 05 2014 | 19:30:00 | | December, 07 2014 | 18:30:00 | | December, 01 2014 | 19:30:00 | | December, 02 2014 | 19:30:00 | | December, 03 2014 | 19:30:00 | | December, 07 2014 | 19:30:00 | ```
Change the order of the columns in the ORDER BY clause, like this:

```
SELECT sloat_day, sloat_start_time 
FROM time_table 
WHERE sloat_day BETWEEN '2014-12-01' AND '2014-12-07' 
ORDER BY sloat_start_time, sloat_day ASC
```

This way, the query will be ordered first by sloat\_start\_time.
Mysql query result date time wise
[ "", "mysql", "sql", "date", "select", "sql-order-by", "" ]
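The accepted answer emulates a per-day row number with MySQL user variables. On databases with window functions (MySQL 8+, SQLite ≥ 3.25), `ROW_NUMBER()` expresses the same ordering more directly — a sketch of that alternative with a reduced data set (Python `sqlite3`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE time_table (sloat_day TEXT, sloat_start_time TEXT)")
conn.executemany("INSERT INTO time_table VALUES (?, ?)",
                 [('2014-12-01', '06.30'), ('2014-12-01', '07.30'),
                  ('2014-12-02', '06.30'), ('2014-12-02', '18.30'),
                  ('2014-12-03', '06.30')])

# number the slots within each day, then sort by that number before the day
ordered = conn.execute("""
    SELECT sloat_day, sloat_start_time FROM (
        SELECT sloat_day, sloat_start_time,
               ROW_NUMBER() OVER (PARTITION BY sloat_day
                                  ORDER BY sloat_start_time) AS rn
        FROM time_table
    )
    ORDER BY rn, sloat_day, sloat_start_time
""").fetchall()
```

Note the zero-padded times (`06.30`) used here: as text, `6.30` would otherwise sort after `18.30`.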
I have the following data:

```
tag_id | taggable_id
--------+-------------
      1 |           1
      2 |           1
      3 |           1
      4 |           1
      1 |           2
      2 |           2
      5 |           2
      6 |           2
      7 |           3
      8 |           3
      9 |           3
     10 |           3
```

And I want to get all the taggable\_ids which are in one group of tag\_ids AND in another group AND another... (max 4 groups). For example:

* if I search all the taggable\_ids with tag\_id 1 or 7 AND tag\_id 4, it should return just 1
* if I search all the taggable\_ids with tag\_id 1 AND tag\_id 6 AND tag\_id 2 or 8, it should return just 2
* if I search all the taggable\_ids with tag\_id 8 AND tag\_id 5, it should return no ids

Spelled out fully (for the second example), the query is:

```
SELECT taggable_id FROM taggings WHERE tag_id in (1)
INTERSECT
SELECT taggable_id FROM taggings WHERE tag_id in (6)
INTERSECT
SELECT taggable_id FROM taggings WHERE tag_id in (2,8)
```

Simplified, I think it looks like:

```
SELECT taggable_id FROM taggings WHERE tag_id in (1,2,6,8)
GROUP BY taggable_id HAVING COUNT(*)=3
```

but I'm wondering if it can be done in a simpler way. Any thoughts?
This can be cast as a case of **[relational division](https://stackoverflow.com/tags/relational-division/info)**. We have assembled an arsenal of query techniques under this related question: * [How to filter SQL results in a has-many-through relation](https://stackoverflow.com/questions/7364969/how-to-filter-sql-results-in-a-has-many-through-relation/7774879) Depending on data distribution and other factors, this may be fastest: ``` SELECT DISTINCT taggable_id FROM taggings t1 JOIN taggings t2 USING (taggable_id) JOIN taggings t3 USING (taggable_id) WHERE t1.tag_id = 1 AND t2.tag_id = 6 AND t3.tag_id IN (2, 8); ``` Assuming unique `(tag_id, taggable_id)`, `DISTINCT` is actually not needed for the example. But it might be necessary with other (list) predicates. [SQL Fiddle](http://sqlfiddle.com/#!15/3cb07/3) (building on @Clodoaldo's, thanks).
[SQL Fiddle](http://sqlfiddle.com/#!15/3cb07/1) Your second query fails if the tuple `(8, 2)` is inserted. Here is one solution although I don't know if simpler then the `intersect` one: ``` select taggable_id from taggings where tag_id in (1,2,6,8) group by taggable_id having array_agg(tag_id) @> array[1,2] and array_agg(tag_id) && array[6,8] ```
Simplify and/or optimize sql query with INTERSECT or HAVING
[ "", "sql", "postgresql", "relational-division", "" ]
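The self-join formulation from the first answer, run against the question's sample data (Python `sqlite3` sketch): taggable 2 is the only id tagged with 1, with 6, and with one of (2, 8).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE taggings (tag_id INTEGER, taggable_id INTEGER)")
conn.executemany("INSERT INTO taggings VALUES (?, ?)",
                 [(1, 1), (2, 1), (3, 1), (4, 1),
                  (1, 2), (2, 2), (5, 2), (6, 2),
                  (7, 3), (8, 3), (9, 3), (10, 3)])

# one copy of the table per required tag group; all must match the same taggable_id
matches = [r[0] for r in conn.execute("""
    SELECT DISTINCT t1.taggable_id
    FROM taggings t1
    JOIN taggings t2 ON t1.taggable_id = t2.taggable_id
    JOIN taggings t3 ON t1.taggable_id = t3.taggable_id
    WHERE t1.tag_id = 1 AND t2.tag_id = 6 AND t3.tag_id IN (2, 8)
""")]
```

Adding a fourth group means adding one more join, up to the four groups the question allows.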
I've seen this question asked in many ways all over the Internet but despite implementing the abundance of advice (and some voodoo), I'm still struggling. I have a 100GB+ database that is constantly inserting and updating records in very large transactions (200+ statements per trans). After a system restart, the performance is amazing (data is written to a large SATA III SSD connected via USB 3.0). The SQL Server instance is running on a VM running under VMWare Workstation. The host is set to hold the entire VM in memory. The VM itself has a paging cache of 5000 MB. The SQL Server user is set to 'hold pages in memory'. I have 5 GBs of RAM allocated to the VM, and the max memory of the SQL Server instance is set to half a Gig. I have played with every single one of these parameters to attempt to maintain consistent performance, but sure and steady, the performance eventually degrades to the point where it begins to time out. Here's the kicker though, if I stop the application that's loading the database, and then execute the stored proc in the Management Studio, it runs like lightning, clearly indicating it's not an issue with the query, and probably nothing to do with memory management or paging. If I then restart the loader app, it still crawls. If I reboot the VM however, the app once again runs like lightning...for a while... Does anybody have any other suggestions based upon the symptoms presented?
In the end, what I did was a combination of two things, putting in logic to recover when timeouts occurred, and setting the host core count to only reflect physical cores, not logical cores, so for example, the host has 2 cores that are hyper-threaded. When I set my VM to use 4 cores, it occasionally gets hung in some infinite loop, but when I set it to 2 cores, it runs without fail. Still, aberrant behavior like this is difficult to mitigate reliably.
* Depending on how large your hot set is, 5GB memory may just tax it for a 100+gb database. * Check indices and query plans. We can not help you without them. And I bet you miss some indices - which is the standard performance issue people have. * Otherwise, once you made your homework - head over to dba.stackexchange.com and ask there. * Generally - consider that 200 statements per transaction may simply indicate a seriously sub-optimal programming. For example you could bulk-load the data into a temp table then merge into the final one.
SQL Server query performance slows over time
[ "", "sql", "sql-server", "performance", "" ]
I have in a column in SQL a field that concatenates products a given user has reviewed. It's stored in the form "SKU1,SKU2,SKU3,...". An example string would be "1,2,4,6,13,15,16". I want to view all rows containing '6' in this column. I'd use LIKE, but then it gets 16, 26, 36, etc as well. ``` SELECT rows FROM table WHERE column LIKE '%6%' -- will get 6, 16, 26, ... SELECT rows FROM table WHERE column LIKE '%6%' AND column NOT LIKE '%16%' AND column NOT LIKE '%26%' ... ... -- will get 6, but only in rows without 16, 26, ... ``` I want to get all rows containing "6", but not rows containing "16", "26", etc without also having "6". I do want rows containing "6" alone, "6" AND "16", "26", etc but not rows with "6" OR "16", "26", etc. What am I missing?
You have made a fundamental mistake in the design of the database. The correct solution is to replace the column `column` with a separate table. This table will have two columns, the user ID and the product ID, and will have one row for each product a user has reviewed. It then becomes quite easy to get the information you need.

Failing that, you will have to write a complex WHERE clause with three parts, testing for `'%,6,%'`, `'6,%'`, and `'%,6'`. This will be slow to execute as the database gets larger, is not indexable, is not JOINable, and cannot be subject to referential integrity. If you are inconsistent in your use of spaces when storing the data, the search expression becomes even more complex.

Note that some SQL databases (you didn't say which you're using) have substring functions to make this kind of search slightly easier, but even in those databases the design is still a terrible violation of relational principles and first normal form and should be refactored at your earliest opportunity.
This where condition would give you the result you're looking for:

```
mycolumn like '%,6,%' or mycolumn like '6,%' or mycolumn like '%,6'
```

Here I'm using the fact that commas would appear on either side of 6, except when the 6 comes at the start or end of the string.
SQL LIKE exclusive substring
[ "", "sql", "database", "sql-like", "" ]
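The three-pattern `LIKE` trick from the answers, plus an equality test for the single-entry case, against sample rows modeled on the question (Python `sqlite3`; note these patterns assume no spaces around the commas):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (user_id INTEGER, skus TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?, ?)",
                 [(1, '1,2,4,6,13,15,16'),  # contains 6 in the middle
                  (2, '16,26,36'),          # contains 6 only inside other SKUs
                  (3, '6'),                 # 6 is the whole list
                  (4, '13,6')])             # 6 at the end

# match 6 at the start, middle, or end of the comma-separated list
users = [r[0] for r in conn.execute("""
    SELECT user_id FROM reviews
    WHERE skus LIKE '%,6,%' OR skus LIKE '6,%' OR skus LIKE '%,6' OR skus = '6'
    ORDER BY user_id
""")]
```

User 2 is correctly excluded, which is exactly what the bare `LIKE '%6%'` in the question fails to do.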
I have a MySQL table with the following values:

table name 'results'

```
Reg.No    SubjectCode    Attempt    Pass/Fail
112108    CMIS 1113      1          fail
112110    CMIS 1114      1          pass
112119    CMIS 1114      1          fail
112108    CMIS 1113      2          fail
112107    CMIS 1113      1          fail
112108    CMIS 1113      3          pass
```

Students can have several attempts to pass a subject, and a student must pass every subject to get the degree. Some students pass on the first attempt; some take more than 3 attempts. A student can keep trying until he/she passes, but some still remain failed. I want to get the Reg.No of students who are still unable to pass a subject.

Eg.: `112119` and `112107` are still unable to pass their subjects. I was unable to write a query for this problem.
I would suggest using aggregation:

```
SELECT `Reg.No`, SubjectCode, SUM(`Pass/Fail` = 'Pass')
FROM results
GROUP BY `Reg.No`, SubjectCode
HAVING SUM(`Pass/Fail` = 'Pass') = 0;
```

The `HAVING` clause keeps only the student/course combinations that never produced a `'Pass'` result. In MySQL, booleans are treated as integers in a numeric context, with true being 1. So, `SUM(Pass/Fail = 'Pass')` counts the number of times a student passed the course, and the `= 0` says the student never passed it. As a suggestion, don't put special characters such as `/` and `.` in column names. That requires escaping the columns and just makes the code harder to write because it is filled with backticks.
You don't need a subquery for this.

```
SELECT `Reg.No`, `SubjectCode`, max(`Pass/Fail`) AS `Pass/Fail`
FROM results
GROUP BY 1, 2
HAVING max(`Pass/Fail`) = 'fail'
```

Assuming `Pass/Fail` is a string type, so `'pass'` > `'fail'`. You only get students who tried and failed, but not those who never tried (and are not in this table at all). I really wouldn't use column names with special characters. They are very error-prone.
Filter students that have not passed a subject yet
[ "", "mysql", "sql", "" ]
i'm having a problem to select all in the warehouse table and get the names of the customer and user from their tables with the id's for these names to have them from the warehouse table, **DATABASE STRUCTURE:** ``` table 1: warehouse table 2: customer (City, FirstName, LastName) table 3: user (FirstName, LastName) $result = mysql_query("SELECT customer.City, customer.FirstName AS customerFName, customer.LastName AS customerLName, user.FirstName AS clientFName, user.LastName AS clientLName, warehouse.*". "FROM warehouse". "INNER JOIN customer, user". "ON warehouse.CustomerID = customer.CustomerID AND warehouse.UserID = user.UserID") or trigger_error(mysql_error()); ``` the Notice: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ON warehouse.CustomerID = customer.CustomerID AND warehouse.UserID = user.UserID'
Change the query like this. ``` $result = mysql_query("SELECT customer.City, customer.FirstName AS customerFName, customer.LastName AS customerLName, user.FirstName AS clientFName, user.LastName AS clientLName, warehouse.* ". "FROM warehouse". " INNER JOIN customer". " ON warehouse.CustomerID = customer.CustomerID". " INNER JOIN user ON warehouse.UserID = user.UserID") or trigger_error(mysql_error()); ```
Try this: ``` SELECT c.City, c.FirstName AS customerFName, c.LastName AS customerLName, u.FirstName AS clientFName, u.LastName AS clientLName, w.* FROM warehouse w INNER JOIN customer c ON w.CustomerID = c.CustomerID INNER JOIN user u ON w.UserID = u.UserID; ```
mysql syntax error, same notice whatever i change
[ "", "mysql", "sql", "select", "join", "inner-join", "" ]
I am trying to create a function that shows the next half-hour time. So when the current time is 13:40, I want it to show 14:00 and not 13:30. What I have created gets the job done, but rounds to the *nearest* half hour, not the nearest *future* one:

```
CREATE OR REPLACE FUNCTION round_timestamp(
   ts timestamptz
  ,round_secs int
) RETURNS timestamptz AS
$$
DECLARE
   _mystamp timestamp;
   _round_secs decimal;
BEGIN
   _round_secs := round_secs::decimal;

   _mystamp := timestamptz 'epoch'
             + ROUND((EXTRACT(EPOCH FROM ts))::int / _round_secs) * _round_secs
             * INTERVAL '1 second';
   RETURN _mystamp;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

Any ideas on how to make this work to display the nearest future half-hour interval?
Trivially, add 1 to the rounded down epoch before scaling it back up to a timestamp: ``` CREATE OR REPLACE FUNCTION round_timestamp(ts timestamptz, round_secs int) RETURNS timestamptz AS $$ BEGIN RETURN timestamptz 'epoch' + (ROUND((EXTRACT(EPOCH FROM ts))::int / round_secs) + 1) * round_secs * INTERVAL '1 second'; END; $$ LANGUAGE plpgsql IMMUTABLE; ``` There is no need for the local variables.
To avoid tripping over on using epoch and floating point arithmetics, you can rely on date arithmetics, with the additional benefit of making it clearer what is going on: ``` create or replace function round_tstz(ts timestamptz) returns timestamptz as $$ select date_trunc('hour', $1) + -- what hour will it be in 30 min? case date_trunc('hour', $1 + interval '30 min') -- the same: round to next half hour when date_trunc('hour', $1) then interval '30 min' -- not the same: round to next hour else interval '1 hour' end; $$ language sql stable; # select now()::timestamptz(0); now ------------------------ 2014-12-05 14:34:30+01 (1 row) # select round_tstz(now()), round_tstz(now() + interval '30 min'); round_tstz | round_tstz ------------------------+------------------------ 2014-12-05 15:00:00+01 | 2014-12-05 15:30:00+01 (1 row) ```
Show the next half hour time point in Postgres function
[ "", "sql", "postgresql", "postgresql-9.1", "" ]
I don't quite get it... Could somebody please give me a hint on why the results of queries B + C won't add up to A? I first thought that the number of underscores (should be ten) mismatched between B and C because of a typo, but after copy/pasting I am a bit helpless. The result of A is higher than the sum of B + C. Is there some kind of implicit distinct etc. in statements B and C that I am not aware of?

```
-- statement A
select count(*) from mytable;

-- statement B
select count(*) from mytable where mycolumn like '__________';

-- statement C
select count(*) from mytable where mycolumn not like '__________';
```
If mycolumn has some rows with `NULL` values, those will be excluded from both `LIKE` and `NOT LIKE` clauses. Therefore, those 2 statements should be equal: ``` SELECT (select count(*) from mytable where mycolumn like '__________') + (select count(*) from mytable where mycolumn not like '__________') + (select count(*) from mytable where mycolumn IS NULL) FROM DUAL -- is equal to select count(*) from mytable; ```
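A quick way to confirm that NULLs account for the whole difference on your own table: `COUNT(*)` counts all rows, while `COUNT(mycolumn)` skips NULLs, so their gap is exactly the NULL count.

```sql
SELECT COUNT(*)                   AS all_rows,       -- statement A
       COUNT(mycolumn)            AS non_null_rows,  -- = B + C
       COUNT(*) - COUNT(mycolumn) AS null_rows       -- the "missing" rows
FROM mytable;
```

`null_rows` should equal the difference between statement A and the sum of statements B and C.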
Most likely your mycolumn contains NULL values. A NULL value matches neither `LIKE` nor `NOT LIKE`; any comparison with NULL evaluates to NULL rather than true, so those rows are filtered out by both queries.
"NOT LIKE x" and "LIKE x" doesn't sum up?
[ "", "sql", "oracle", "sql-like", "oracle12c", "" ]
I've got this table with the following:

```
part_id | feature_name   | feature_value    | feature_unit |
_____________________________________________________________
1       | Weight         | 2                | kg           |
1       | Color          | Blue             |              |
1       | Description    | description here |              |
```

What I wanted to do was place these in another (new) table:

```
part_id | description      | color | weight |
_______________________________________________
1       | description here | Blue  | 2kg    |
```

I thought of using Insert..Select depending on the value of the feature_value column but can't seem to build the right query for this. Any ideas?
You are converting rows to columns, so you can use case-based aggregation for this.

```
INSERT into newTable(part_id, description, color, weight)
SELECT part_id,
 max( case when feature_name ='Description' then feature_value end ) as description,
 max( case when feature_name ='Color' then feature_value end ) as Color,
 max( case when feature_name ='weight' then concat(feature_value,feature_unit) end ) as weight
FROM my_table
GROUP BY part_id
```
One way to do this is with conditional aggregation: ``` select part_id, max(case when feature_name = 'description' then feature_value end) as description, max(case when feature_name = 'color' then feature_value end) as color, max(case when feature_name = 'weight' then concat(feature_value, feature_unit) end) as weight from thistable group by part_id; ```
Insert..Select, same column different values
[ "", "mysql", "sql", "" ]
I would like to create a statement that is equivalent to `(x - y == 0) ? return 100 : return 0` in MySQL. Something that might look like this:

```
SELECT id, [(integer_val - 10 == 0) ? 100 : 0] AS new_val
FROM my_table
```

I want to compare an attribute in each row to a certain number, and if the difference between that number and the number in the row is 0, I want it to give me 100; otherwise, I want it to give me 0.

Example: Applying this query on `my_table` (with 10 being the 'compared to' number):

```
id | integer_val
===================
1    10
2    10
3    3
4    9
```

Would return this:

```
id | new_val
===================
1    100
2    100
3    0
4    0
```

How can I do this?
Try this: ``` SELECT id, IF(integer_val = 10, 100, 0) AS new_val FROM my_table; ``` **OR** ``` SELECT id, (CASE WHEN integer_val = 10 THEN 100 ELSE 0 END) AS new_val FROM my_table; ```
Use `case when` statement: ``` select *, (case when integer_val = 10 then 100 else 0 end) as New_Val from yourtable ```
SQL statement equivalent to ternary operator
[ "", "mysql", "sql", "select", "if-statement", "case", "" ]
Lets say I've got the following table which is a familiar example. ``` +----------------------------------+ | TAGS | +--------+-------------+-----------+ | tag_id | tag_name | parent_id | +--------+-------------+-----------+ | 1 | programming | NULL | | 2 | php | 1 | | 3 | class | 2 | | 4 | object | 2 | | 5 | method | 3 | +--------+-------------+-----------+ ``` I'm trying to devise a query which selects the associated `parent_id` and `tag_name` based on the value of the initial select statement. **Something like this:** ``` SELECT * FROM tags AS child WHERE child.tag_name = 'object' UNION SELECT * FROM tags AS parent WHERE parent.parent_id = child.parent_id ``` I need to be able to return the combined rows from both these queries which is why I'm using `UNION`. **The expected result should be:** ``` +--------+-------------+-----------+ | tag_id | tag_name | parent_id | +--------+-------------+-----------+ | 2 | php | 1 | | 4 | object | 2 | +--------+-------------+-----------+ ``` I also think that a `JOIN` may work but I can't quite make it work.
Try this: ``` SELECT tag_id, tag_name, parent_id FROM tags AS child WHERE child.tag_name = 'object' UNION SELECT parent.tag_id, parent.tag_name, parent.parent_id FROM tags AS parent INNER JOIN tags AS child ON parent.tag_id = child.parent_id AND child.tag_name = 'object'; ``` Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/5700a/1) **OUTPUT** ``` | TAG_ID | TAG_NAME | PARENT_ID | |--------|----------|-----------| | 4 | object | 2 | | 2 | php | 1 | ```
Try this: ``` SELECT * FROM tags t1, tags t2 WHERE t1.parent_id = t2.parent_id AND t1.tag_name = 'object'; ```
Use value from first select for second select in union?
[ "", "mysql", "sql", "select", "join", "union", "" ]
This sounds like a silly question, but I've noticed that in a lot of table designs for e-commerce related projects I almost always see decimal(19, 4) being used for currency. Why the 4 on scale? Why not 2? Perhaps I'm missing a potential calculation issue down the road?
First off - you are receiving some incorrect advice from other answers. Observe the following (64-bit OS on 64-bit architecture):

```
declare @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;

select result = @op1 * @op2;

result
---------.---------.---------.---------
0.0001

(1 row(s) affected)
```

Note the number of dashes underneath the title - 39 in all. (I changed every tenth to a period to aid counting.) That is precisely enough for 38 digits (the maximum allowable, and the default on a 64-bit CPU) plus a decimal point on display. Although both operands were declared as *decimal(18,2)*, the calculation was performed, and reported, in the *decimal(38,4)* datatype. (I am running SQL 2012 on a 64-bit machine - some details may vary based on machine architecture and OS.)

Therefore, it is clear that no precision is being lost. On the contrary, only overflow can occur, not precision loss. This is a direct consequence of all calculations on decimal operands being performed as integer arithmetic. You will occasionally see artifacts of this in IntelliSense when the type of intermediate fields of *decimal* type is reported as being *int* instead.

Consider the example above. The two operands are both of type *decimal(18,2)* and are stored as integers of value 1, with a scale of 2. When multiplied the product is still 1, but the scale is evaluated by adding the scales, to create a result of integer value 1 and scale 4, which is a value of 0.0001 and of type *decimal(18,4)*, stored as an integer with value 1 and scale 4.

Read that last paragraph again. Rinse and repeat once more.

In practice, on a 64-bit machine and OS, this is actually stored and carried forward as being of type *decimal(38,4)* because the calculations are being done on a CPU where the extra bits are free.
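If you want to see for yourself what type SQL Server assigns to the product, `SQL_VARIANT_PROPERTY` reports it directly (same operands as above; run it on your own server to see the exact precision and scale it picks):

```sql
DECLARE @op1 decimal(18,2) = 0.01
       ,@op2 decimal(18,2) = 0.01;

SELECT @op1 * @op2                                    AS result
      ,SQL_VARIANT_PROPERTY(@op1 * @op2, 'BaseType')  AS base_type
      ,SQL_VARIANT_PROPERTY(@op1 * @op2, 'Precision') AS prec
      ,SQL_VARIANT_PROPERTY(@op1 * @op2, 'Scale')     AS scale;
```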
To return to your question - all major currencies of the world (that I am aware of) only require 2 decimal places, but there are a handful where 4 are required, and there are financial transactions such as currency transactions and bond sales where 4 decimal places are mandated by law. When devising the *money* datatype Microsoft appears to have opted for the maximum scale that might be required rather than the normal scale required. Given how few transactions, and corporations, actually require precision greater than 19 digits this seems eminently sensible. If you have:

1. A high expectation of only dealing with major currencies (which at the current time only require 2 digits of scale); and
2. No expectation of dealing with transactions that are mandated by law to require 4 digits of scale

then you would be safe to use type *decimal* with scale 2 (such as *decimal(19,2)*, *decimal(18,2)*, or *decimal(38,2)*) instead of money. This will ease some of your conversions and, given the assumptions above, have no cost. A typical case where these assumptions **are** met is in a GL or subledger accounting system tracking transactions to the penny. However, a stock- or bond-trading system would **not meet** these assumptions because 4 digits of scale are mandated by law in those cases. A way to distinguish the two cases is whether transactions are reported in *cents* or *percents*, which only require 2 digits of scale, or in *basis points*, which require 4 digits of scale. If you are at all unsure as to which case applies to your programming circumstance, consult your Controller or Director of Finance as to the legal and GAAP requirements for your application. (S)he will be able to give you definitive advice.
In SQL, the 19 is the precision (the total number of digits) and the 4 is the scale (the number of decimal places). If you only have 2 decimals and you store, for example, the result of a calculation that produces more than 2 decimals, there's no way to store those additional decimals. Some currencies operate with more than 2 decimals. Use the data type decimal, not money.
Decimal(19,4) or Decimal(19,2) - which should I use?
[ "", "sql", "sql-server", "t-sql", "sql-types", "" ]
I have no clue how to import data from one database to another with a condition.

I have a DB named Northwind with the table Employee. It has the following columns: ID, Name.

I have another DB named Master with the table Employee. It has the following columns: Emp.ID, Emp.Name.

Now I want to transfer all data from the Northwind.Employee table to the Master.Employee table with a condition. The condition is: if ID=1 then Emp.ID=201 (this is a constant value, no logic behind it).

Any idea or suggestion please?
If I'm understanding your question correctly, you can use a `case` statement in your `insert`: ``` insert into master.schema.employee (id, name) select case when id = 1 then 201 else id end, name from northwind.schema.employee ```
In order to select from a different DB you can refer to this question: [INSERT INTO from two different server database](https://stackoverflow.com/questions/14153657/insert-into-from-two-different-server-database). The ID issue is a simple CASE expression; you can see an example here: [SQL Case Statement Syntax?](https://stackoverflow.com/questions/4622/sql-case-statement-syntax)
Import data from one DB to another Database with condition
[ "", "sql", "sql-server", "" ]
I have two tables with product numbers. They both are limited to 12 characters (`varchar(12)`). One of them (product A) has a number structure like this: Product No: ``` 2345 568 89 ``` And product B has the same exact numbers but with zeros to fill the 12 characters missing. It is something like this: Product No: ``` 000000002345 000000000568 000000000089 ``` I just want to modify product A table to add the zeros at the beginning of the sequence. I had an idea with `REPLACE()` function but to add the zeros I might need another function. Thanks for reading and sorry for the time.
This should do it: ``` UPDATE tblA SET ProductNo = REPLICATE('0', 12 - LEN(ProductNo)) + ProductNo ```
Try this, you can use this statement ``` RIGHT('000000000000'+ISNULL(ProductNo,''),12) ```
Add Zeros to a product number in SQL
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have a query that returns datetime format in SQL:

```
SELECT time_in
FROM job_punch_card
WHERE emp_key=47 and punch_day<= DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0)
```

This returns results like `2014-2-15 07:36:32.000`; however, I only want the time portion, not the date. I can get the current time from SQL Server like:

```
SELECT CONVERT(VARCHAR(8),GETDATE(),108) AS HourMinuteSecond
```

But I couldn't apply this to my own query. How can I retrieve only the time from the `time_in` column above?
Use the same `CONVERT` function on your output column: ``` SELECT CONVERT(VARCHAR(8), time_in, 108) AS time_in FROM job_punch_card WHERE emp_key=47 and punch_day<= DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0) ```
Use `CAST` if you need it as a `TIME` type. If you have to do any calculations on the result set this will make things easier. ``` SELECT CAST(time_in AS TIME(0)) AS time_in FROM job_punch_card etc., etc. ```
How to convert datetime to time format
[ "", "sql", "sql-server", "datetime", "sql-server-2012", "" ]
I'm trying to get this statement to work to mine some text in freetext fields...

```
SELECT CASE
    WHEN Theme_Q1 LIKE '%Education%' THEN
        INSERT INTO #tempThemeCnt(Theme1, ThemeCnt) VALUES ( 'Education', 1 )
    WHEN Theme_Q1 LIKE '%Care%' THEN
        INSERT INTO #tempThemeCnt(Theme1, ThemeCnt) VALUES ( 'Care', 1 )
END
FROM dbo.tblWNHPSurvey
```

Thank you all for the help...
EDIT -- Updated to reflect OP's actual problem as described in the comment block below.

http://sqlfiddle.com/#!3/efd1b/5

This *should* work on SQL Server 2005 -- I have no way of testing this. The earliest version I can run it on is 2008R2. Also -- you should *totally* upgrade! If you can't see how to refactor it to your tables, post a comment, but I hope it will be fairly obvious. This uses APPLY to count references for you.

```
SELECT t.tag, ISNULL(m.matches, 0) AS Matches
FROM tags AS t
OUTER APPLY (
    SELECT COUNT(*)
    FROM TestSet AS ts
    WHERE ts.Label LIKE '%' + t.tag + '%'
) AS m (matches)
```

Schema used:

```
CREATE TABLE TestSet (
    TestID INT IDENTITY(1,1) PRIMARY KEY,
    Label VARCHAR(MAX)
)

INSERT TestSet(Label)
SELECT 'Educational Care'
UNION
SELECT 'Care Failure'
UNION
SELECT 'SomeRandomTextHere'
UNION
SELECT 'Care Education'

CREATE TABLE Tags(
    Tag VARCHAR(255) PRIMARY KEY
)

INSERT Tags (Tag)
SELECT 'Education'
UNION
SELECT 'Care'
```

Results:

```
TAG        MATCHES
Care       3
Education  2
```

Notes: This is likely to be fairly slow... the LIKE '%%' predicate has no possible index it can use.

Original answer...

CASE isn't a control-of-flow switch - it's a formatter. You don't *do* something different for each case - you return something different. It would be something like this:

```
INSERT #tempThemeCnt(Theme1, ThemeCnt)
SELECT
    CASE
        WHEN Theme_Q1 like '%Education%' THEN 'Education'
        WHEN Theme_Q1 like '%Care%' THEN 'Care'
        ELSE Theme_Q1
    END AS Theme1,
    1 AS ThemeCnt
FROM dbo.tblWNHPSurvey
```

Note that it's impossible to tell from your example what you wanted if the values were not mapped to Education or Care -- this example simply sticks whatever was in dbo.tblWNHPSurvey into the temp table with a ThemeCnt of 1 if it didn't match... Also -- I suspect you may want to group this afterwards....
Try this: ``` INSERT INTO #tempThemeCnt(Theme1, ThemeCnt) SELECT (CASE WHEN Theme_Q1 LIKE '%Education%' THEN 'Education' WHEN Theme_Q1 LIKE '%Care%' THEN 'Care' ELSE '' END) AS Theme1, 1 FROM dbo.tblWNHPSurvey; ```
Case Select Then Insert Into Temp Table not working
[ "", "sql", "select", "case", "" ]
I have a MySQL table with around 1 million rows. The table name is `dedict`. When I run a simple query like:

```
select writtenform from dedict where length(writtenform)>5 and partofspeech <> 'abbreviation' order by id asc limit 50,1
```

the query takes around 1 second. I have the `id` column (auto-increment) as primary key and an index on `writtenForm`. Below you see the results of `DESCRIBE dedict`:

```
Field               Type          Null  Key  Default  Extra
senseid             varchar(255)  NO         NULL
writtenForm         varchar(255)  YES   MUL  NULL
languageIdentifier  varchar(255)  YES        NULL
partOfSpeech        varchar(255)  YES        NULL
_index              int(11)       YES        NULL
writtenText         longtext      YES        NULL
lexiconid           varchar(255)  YES        NULL
id                  int(11)       NO    PRI  NULL     auto_increment
```

There must be something wrong. I have to run this query 5 times to show a webpage, so the load time is more than 5 seconds. Could you please help me?
Perhaps you can optimize the attribute lengths; there are 5 attributes of type varchar(255). In any case, to get something much faster, you can create a trigger on the insert/update statements to store the content length in another attribute. Each time a row is inserted or updated, the trigger stores the length in another column of type integer, which you can then index. Moreover, do you really need to query the whole table at a time?
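A sketch of that trigger idea in MySQL (the column, index, and trigger names here are illustrative):

```sql
-- Store the length once, so the query can filter on an indexed integer
ALTER TABLE dedict ADD COLUMN writtenform_len INT;
UPDATE dedict SET writtenform_len = CHAR_LENGTH(writtenform);
CREATE INDEX idx_dedict_len ON dedict (writtenform_len);

-- Keep the column in sync on every insert and update
CREATE TRIGGER dedict_len_bi BEFORE INSERT ON dedict
FOR EACH ROW SET NEW.writtenform_len = CHAR_LENGTH(NEW.writtenform);

CREATE TRIGGER dedict_len_bu BEFORE UPDATE ON dedict
FOR EACH ROW SET NEW.writtenform_len = CHAR_LENGTH(NEW.writtenform);
```

The original filter then becomes `WHERE writtenform_len > 5 AND partofspeech <> 'abbreviation'`, which can use the new index instead of evaluating `length()` on every row.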
You should change the structure of the table: for example, make `senseid` an integer, and add an index on `writtenForm`, because you are using this column to filter the data.
Slow Query on Medium MySQL Table (1 Million Rows)
[ "", "mysql", "sql", "query-optimization", "database-performance", "" ]
I want to write an SQL query to increase item prices by a percentage. The scenario is: in the table I have 3 columns: ID, Item-Name, Price.

```
Example : If item-Name is T-shirt, increase price by 10%
          item-Name is Jins , increase price by 50%
          item-Name is top , increase price by 5%
```
If you are looking to update the table you can do a conditional update. Note the `else price` branch: without it, rows that match none of the conditions would have their price set to NULL.

```
update table_name
set price = case
        when `Item-Name` = 'T-shirt' then price+( (price*10) /100 )
        when `Item-Name` = 'Jins' then price+( (price*50) /100 )
        when `Item-Name` = 'top' then price+( (price*5) /100 )
        else price
    end ;
```

And if you are looking to show the increased price without doing any update in the table at the time of select, then you can do as below.

```
select id,`Item-Name`,price,
    case
        when `Item-Name` = 'T-shirt' then price+( (price*10) /100 )
        when `Item-Name` = 'Jins' then price+( (price*50) /100 )
        when `Item-Name` = 'top' then price+( (price*5) /100 )
        else price
    end as new_price
from table_name;
```
Try this: ``` SELECT a.ID, a.ItemName, a.Price, (CASE WHEN a.ItemName = 'T-shirt' THEN (a.price * 10 / 100) WHEN a.ItemName = 'Jins' THEN (a.price * 50 / 100) WHEN a.ItemName = 'top' THEN (a.price * 5 / 100) ELSE a.price END) AS calculatedPrice FROM tableA a ```
Sql Query for increase item value price for multiple item
[ "", "mysql", "sql", "select", "sql-update", "case", "" ]
The Excel formula below looks at a warranty end date: if the date is in the past it states "Expired", if 90 or fewer days remain it states "Warning", otherwise it states "In Warranty".

Formula:

```
=IF(J2="","",IF(TODAY()-J2>0,"Expired",IF(J2-TODAY()<=90,"Warning","In Warranty")))
```

Column J is the warranty end date column... Example: 03/02/2015 (which has 84 days left, so it is in "Warning" status).

I want to convert this to a SQL query. This is what I have so far, but it is not working properly:

```
DECLARE @TODAY smalldatetime = getdate()

CASE WHEN d.End_Date IS NULL THEN 'No Warranty Information Available'
     WHEN Datediff(DAY, @TODAY, d.End_Date) <90 THEN 'Warning'
     WHEN Datediff(DAY, @TODAY, d.End_Date) > 1 THEN 'In Warranty'
     WHEN Datediff(DAY, @TODAY, d.End_Date) = 0 THEN 'Expired'
END AS [Warranty Status],
```

This is not returning properly. Can someone assist?
Just like your Excel formula, each step is evaluated and if the condition is met, that is the value printed out and the statement ends. So if there were, for example, 0 days left, your CASE statement will evaluate `datediff(DAY, @TODAY, d.End_date) < 90` which is TRUE so it will print out "Warning" rather than "Expired" as you intend. So you want to check your statements in the correct order: ``` DECLARE @TODAY smalldatetime = getdate() CASE WHEN d.End_Date IS NULL THEN 'No Warranty Information Available' WHEN Datediff(DAY, @TODAY, d.End_Date) <= 0 THEN 'Expired' WHEN Datediff(DAY, @TODAY, d.End_Date) <90 THEN 'Warning' ELSE 'In Warranty' END AS [Warranty Status], ```
I'm not sure what answer you're seeing, but from the SQL clause, it looks like 'Expired' will never occur because zero is also <90. Move the check for zero above the warning check, and it should be OK.
SQL Query - Excel Formula Conversion
[ "", "sql", "" ]
Let's say I have a table:

```
+------------+-----------+------+-----+-------------------+-----------------------------+
| Field      | Type      | Null | Key | Default           | Extra                       |
+------------+-----------+------+-----+-------------------+-----------------------------+
| id         | int(10)   | NO   | PRI |                   | AUTOINCREMENT               |
| id_action  | int(10)   | NO   | IDX |                   |                             |
| a_date     | date      | NO   | IDX |                   |                             |
| a_datetime | datetime  | NO   | IDX |                   |                             |
+------------+-----------+------+-----+-------------------+-----------------------------+
```

Each row has some `id_action`, and the `a_date` and `a_datetime` when it was executed on the website. My question is: when I want to return the `COUNT()` of each `id_action` grouped by `a_date`, is it the same when I use these two selects, or do they differ in speed? Thanks for any explanation.

```
SELECT COUNT(id_action), id_action, a_date
FROM my_table
GROUP BY a_date
ORDER BY a_date DESC
```

and

```
SELECT COUNT(id_action), id_action, DATE_FORMAT(a_datetime, '%Y-%m-%d') AS `a_date`
FROM my_table
GROUP BY DATE_FORMAT(a_datetime, '%Y-%m-%d')
ORDER BY a_date DESC
```

In other words, my question is that each action has its `datetime`; do I really need the `a_date` column, or is it the same to use the `DATE_FORMAT` function on the `a_datetime` column, so that I don't need `a_date` at all?
I ran both queries on a similar table in MySQL 5.5. The table has 10634079 rows.

The first one took 10.66 secs initially, and always takes approx 10 secs on further attempts.

The second query took 1.25 mins to execute the first time; on the 2nd, 3rd, ... attempts it takes 22.091 secs.

So in my view, if you are looking for performance, then you should have the `a_date` column, as the query takes about half the time when executed without `DATE_FORMAT`. If performance is not the primary concern (data redundancy can be, for example), then the `a_datetime` column will serve all other date/datetime-related purposes.
**DATE:** The DATE type is used for values with a date part but no time part.

**DATETIME:** The DATETIME type is used for values that contain both date and time parts.

So if you have DATETIME you can always derive DATE from it, but from DATE you cannot get DATETIME. As per your SQL there will not be a major difference. It will be better not to have `a_date` because you already have `a_datetime`. But in general, if you can use `TIMESTAMP` you should, because it is more space-efficient than `DATETIME`.
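For example, deriving the date part from the datetime column is direct in MySQL, so the grouping can be written without a dedicated DATE column (`DATE()` is equivalent here to the `DATE_FORMAT(..., '%Y-%m-%d')` call in the question):

```sql
SELECT COUNT(id_action), id_action, DATE(a_datetime) AS a_date
FROM my_table
GROUP BY DATE(a_datetime)
ORDER BY a_date DESC;
```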
Difference between DATE and DATETIME in WHERE clause
[ "", "mysql", "sql", "" ]
I have the following SQL Statement : ``` SELECT RTRIM(LTRIM(REPLACE(LAGKART.VARENUMMER,CHAR(2),''))) AS ItemNo, RTRIM(LTRIM(REPLACE(LAGKART.SXSON,CHAR(2),''))) AS Season, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT1,CHAR(2),''))),'') AS Variant1, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT2,CHAR(2),''))),'') AS Variant2, (SELECT * FROM [dbo].[B2BGetSpringFinal] ( LAGKART.VARENUMMER, LAGKART.VARIANT1, LAGKART.VARIANT2 )) AS SpringAvailable FROM LAGKART ``` But I get this error : > Msg 170, Level 15, State 1, Line 8 > Incorrect syntax near '.'. But if I call the function with fixed values : ``` SELECT RTRIM(LTRIM(REPLACE(LAGKART.VARENUMMER,CHAR(2),''))) AS ItemNo, RTRIM(LTRIM(REPLACE(LAGKART.SXSON,CHAR(2),''))) AS Season, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT1,CHAR(2),''))),'') AS Variant1, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT2,CHAR(2),''))),'') AS Variant2, (SELECT * FROM [dbo].[B2BGetSpringFinal] ( '6261', 'Black', 'S' )) AS SpringAvailable FROM LAGKART ``` I get the desired result. Any ideas? Br Mads
In SQL Server 2000, only constants and @local_variables can be passed to table-valued functions. In SQL Server 2005 and greater this was fixed. You could try using a scalar function to get the `SpringAvailable` column value instead, or look at upgrading to a newer SQL Server version.
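A sketch of the scalar-function workaround for SQL Server 2000. The body here is hypothetical, since the logic inside `B2BGetSpringFinal` isn't shown; `SpringStock` is an assumed source table and the parameter lengths are guesses:

```sql
CREATE FUNCTION dbo.B2BGetSpringScalar
(
    @ItemNo   varchar(30),
    @Variant1 varchar(30),
    @Variant2 varchar(30)
)
RETURNS int
AS
BEGIN
    DECLARE @available int;

    -- Hypothetical body: reduce B2BGetSpringFinal's logic
    -- to the single value the outer query needs.
    SELECT @available = COUNT(*)
    FROM dbo.SpringStock
    WHERE ItemNo = @ItemNo
      AND Variant1 = @Variant1
      AND Variant2 = @Variant2;

    RETURN @available;
END
```

Unlike a table-valued function, a scalar function can be called per row with column arguments on SQL Server 2000, e.g. `SELECT dbo.B2BGetSpringScalar(VARENUMMER, VARIANT1, VARIANT2) AS SpringAvailable FROM LAGKART`.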
You can use [APPLY](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) (`CROSS` or `OUTER`) to pass column(s) value(s) as arguments to a function: ``` SELECT RTRIM(LTRIM(REPLACE(LAGKART.VARENUMMER,CHAR(2),''))) AS ItemNo, RTRIM(LTRIM(REPLACE(LAGKART.SXSON,CHAR(2),''))) AS Season, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT1,CHAR(2),''))),'') AS Variant1, ISNULL(RTRIM(LTRIM(REPLACE(LAGKART.VARIANT2,CHAR(2),''))),'') AS Variant2, SpringAvailable.* FROM LAGKART CROSS APPLY ( SELECT * FROM [dbo].[B2BGetSpringFinal] ( LAGKART.VARENUMMER, LAGKART.VARIANT1,LAGKART.VARIANT2 ) ) AS SpringAvailable ```
SQL Select with a function
[ "", "sql", "sql-server", "function", "select", "sql-server-2000", "" ]
I have to get the result in a year-wise format but the following query returns an unexpected output: ``` SELECT EXTRACT (YEAR FROM Tab_1.DATE_STMP), COUNT (Tab_1.DATE_STMP), EXTRACT (YEAR FROM Tab_2.DATE_STMP), COUNT (Tab_2.DATE_STMP) FROM EMP_1 Tab_1 FULL OUTER JOIN EMP_2@LINK Tab_2 ON EXTRACT(YEAR FROM Tab_1.DATE_STMP)= EXTRACT(YEAR FROM Tab_2.DATE_STMP) GROUP BY EXTRACT (YEAR FROM Tab_1.DATE_STMP), EXTRACT (YEAR FROM Tab_2.DATE_STMP); ``` EXPECTED OUTPUT: ``` 2014 40 2014 20 ``` ACTUAL OUTPUT: ``` 2014 800 2014 800 ``` My long plsql solution is to iterate over the DATE values and query Count separately: ``` --PL/SQL OPEN cur FOR 'SELECT DATE_STMP FROM EMP_1@LINK1 UNION SELECT DATE_STMP FROM EMP_2@LINK2' FETCH cur INTO v_datVar; ... EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM EMP_1 WHERE EXTRACT (YEAR FROM DATE_STMP)='||v_datVar INTO v_Total2; EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM EMP_2 WHERE EXTRACT (YEAR FROM DATE_STMP)='||v_datVar INTO v_Total2; ... --PRINT VALUES HERE ``` Ques. Is there a single select query to get this output? (Different type of join or join-condition to use?)
I think it's better to group every table in a separate `select` (note that Oracle does not accept the `AS` keyword for table aliases):

```
SELECT TAB_1.YEAR1, TAB_1.COUNT1, TAB_2.YEAR2, TAB_2.COUNT2
FROM (SELECT EXTRACT (YEAR FROM DATE_STMP) AS YEAR1,
             COUNT(DATE_STMP) AS COUNT1
      FROM EMP_1
      GROUP BY EXTRACT (YEAR FROM DATE_STMP)) TAB_1
FULL OUTER JOIN
     (SELECT EXTRACT (YEAR FROM DATE_STMP) AS YEAR2,
             COUNT(DATE_STMP) AS COUNT2
      FROM EMP_2
      GROUP BY EXTRACT (YEAR FROM DATE_STMP)) TAB_2
ON TAB_1.YEAR1=TAB_2.YEAR2;
```
You full-outer-join records on DATE_STMP. So you'll get matches and mismatches (i.e. where a date has no match in the other table). Your result shows that you don't even have a single match, but only mismatches. However, it seems you are not at all interested in how many dates match. Moreover, I daresay you are not even interested in how often years match. You simply want to count records per year in the two tables. This has nothing to do with matching records and joins. So: get the counts first. Then join the results so as to get one record per year:

```
select coalesce(t1.the_year, t2.the_year), t1.the_count, t2.the_count
from
(
  select extract (year from date_stmp) the_year, count(*) as the_count
  from emp_1
  group by extract (year from date_stmp)
) t1
full outer join
(
  select extract (year from date_stmp) the_year, count(*) as the_count
  from emp_2@link
  group by extract (year from date_stmp)
) t2 on t2.the_year = t1.the_year;
```

EDIT: I see you changed your request's query and join by year now. Still, as mentioned, I think you are not interested in counting matches and mismatches, so my answer stays the same.
Select query with join between two tables
[ "", "sql", "oracle", "plsql", "" ]
How do I sum two or three columns so that a NULL in any column does not affect the value of the SUM? Currently I get NULL when the columns are added together. Here is an example of the table I am trying to work on. ``` id col1 col2 col3 total 1 2 3 5 2 10 5 NULL 3 2 NULL NULL ``` This is the query I tried. ``` SELECT id,col1,col2,col3, (col1+col2+col3) AS Total FROM test_table; ``` This query works perfectly if no column is NULL, but if any column is NULL then a number added to NULL becomes NULL and I get NULL in the result. Below is the result screenshot I added. ![enter image description here](https://i.stack.imgur.com/GdGxc.png) In the image above I get NULL in the Total column if any column in the sum is NULL. But the results should be 10, 15, 2 for the respective ids.
One way would be to use [`IFNULL`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull) function: ``` SELECT id,col1,col2,col3, (IFNULL(col1,0)+IFNULL(col2,0)+IFNULL(col3,0)) AS Total FROM test_table; ```
Use coalesce to replace null values with 0. I prefer `coalesce`. In my experience it is database agnostic, where as `isnull`, `ifnull`, `nvl` and others are specific to the database. ``` SELECT col1,col2,col3, (coalesce(col1,0)+ coalesce(col2,0)+ coalesce(col3,0)) as total from test_table; ```
sum two or more columns in mysql table but leave out the null field
[ "", "mysql", "sql", "database", "" ]
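The NULL-propagation behaviour and the `COALESCE` fix above are easy to reproduce outside MySQL too. A minimal, runnable sketch using Python's built-in SQLite (which, like MySQL, supports both `IFNULL` and the more portable `COALESCE`); the table and data mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (id INTEGER, col1 REAL, col2 REAL, col3 REAL)")
conn.executemany(
    "INSERT INTO test_table VALUES (?, ?, ?, ?)",
    [(1, 2, 3, 5), (2, 10, 5, None), (3, 2, None, None)],
)

# A NULL operand makes the whole sum NULL, so coalesce each column to 0 first.
totals = conn.execute(
    """SELECT id,
              COALESCE(col1, 0) + COALESCE(col2, 0) + COALESCE(col3, 0) AS total
       FROM test_table ORDER BY id"""
).fetchall()
print(totals)  # [(1, 10.0), (2, 15.0), (3, 2.0)]
```

Without the `COALESCE` wrappers, rows 2 and 3 would come back with a `None` total, exactly as in the question's screenshot.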
I have a field named citation that has values C12345, C23456, etc. I need to update the citation field so all citations have a 0 after the C. ``` Update account set citation = where citation like 'C%' ```
Replace should work for this: ``` Update account set citation = replace(citation, 'C', 'C0') where citation like 'C%' ``` I have used [sql server replace](http://msdn.microsoft.com/en-us/library/ms186862.aspx) syntax here. Other platforms may be different. As pointed out by @sjagr in the comments > To be safe of values already set to C0, perhaps it should be `where citation not like 'C0%' AND citation like 'C%'`
Try following. ``` Update account set citation = 'C0' + substring(citation, 2, len(citation)) where citation like 'C%' ```
Update SQL value
[ "", "sql", "" ]
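The accepted idea — a string `replace` guarded so already-updated rows are untouched — can be verified with a quick SQLite sketch; `replace()` and `LIKE` behave the same way here as in SQL Server (the table data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (citation TEXT)")
conn.executemany("INSERT INTO account VALUES (?)",
                 [("C12345",), ("C23456",), ("C098765",)])

# Guard with NOT LIKE 'C0%' so rows that already have the zero are untouched.
conn.execute(
    """UPDATE account
       SET citation = replace(citation, 'C', 'C0')
       WHERE citation LIKE 'C%' AND citation NOT LIKE 'C0%'"""
)
rows = [r[0] for r in conn.execute("SELECT citation FROM account ORDER BY citation")]
print(rows)  # ['C012345', 'C023456', 'C098765']
```

Note the pre-existing `C098765` row was skipped by the `NOT LIKE 'C0%'` guard — without it, that row would become `C0098765`.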
I have a query that looks like this: ``` SELECT clicks.offer_id, count(distinct clicks.transaction_id) as unique_clicks FROM clicks WHERE date > '2014-12-01 17:43:30' and offer_id = 1; ``` This query counts the number of unique clicks by their transaction ID for an offer. Now, I want to add the unique number of clicks for mobile and desktop users. Along the lines of: ``` SELECT clicks.offer_id, count(distinct clicks.transaction) as unique_clicks, sum(case when clicks.mobile = 1 then 1 else 0 end) as mobile_unique_clicks, sum(case when clicks.mobile = 0 then 1 else 0 end) as desktop_unique_clicks FROM clicks WHERE date > '2014-12-01 17:43:30' and offer_id = 1; ``` The problem with this is that it won't count unique transaction IDs. Is there a way to do this with one query? And what kind of covering indexes would I need to keep this efficient?
``` SELECT clicks.offer_id, count(distinct clicks.transaction_id) as unique_clicks, count(DISTINCT case when clicks.mobile = 1 then clicks.transaction_id else null end) as mobile_unique_clicks, count(DISTINCT case when clicks.mobile = 0 then clicks.transaction_id else null end) as desktop_unique_clicks FROM clicks WHERE date > '2014-12-01 17:43:30' and offer_id = 1; ``` You can use COUNT(DISTINCT ...)
something like this? ``` SELECT clicks.offer_id, count(distinct clicks.transaction_id) as unique_clicks, sub.mobile_unique_clicks, count(distinct clicks.transaction_id) - sub.mobile_unique_clicks as desktop_unique_clicks FROM clicks JOIN ( SELECT clicks.offer_id, count(distinct clicks.transaction_id) as mobile_unique_clicks, FROM clicks WHERE date > '2014-12-01 17:43:30' and offer_id = 1 AND clicks.mobile = 1 ) sub WHERE date > '2014-12-01 17:43:30' and offer_id = 1; ```
How to select count(distinct) with additional where clause in MySQL?
[ "", "mysql", "sql", "select", "" ]
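The `COUNT(DISTINCT CASE ...)` trick in the accepted answer works because a `CASE` with no `ELSE` yields NULL, and `COUNT` ignores NULLs. A small runnable check (SQLite, with hypothetical click data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (transaction_id TEXT, mobile INTEGER)")
conn.executemany("INSERT INTO clicks VALUES (?, ?)", [
    ("t1", 1), ("t1", 1),   # duplicate mobile click: counted once
    ("t2", 0),
    ("t3", 0), ("t3", 0),   # duplicate desktop click: counted once
])

row = conn.execute(
    """SELECT COUNT(DISTINCT transaction_id) AS unique_clicks,
              COUNT(DISTINCT CASE WHEN mobile = 1 THEN transaction_id END) AS mobile_unique,
              COUNT(DISTINCT CASE WHEN mobile = 0 THEN transaction_id END) AS desktop_unique
       FROM clicks"""
).fetchone()
print(row)  # (3, 1, 2)
```

Had the query used `SUM(CASE ... THEN 1 ELSE 0 END)` as in the question, the duplicated transactions would have been counted twice.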
I am trying to speed up a PHP script, and I am currently pushing some of the PHP logic into the MySQL domain. Is there a way to run a different select query if the first select returns no rows, or a count of zero? Keep in mind that the first query needs to run first, and the second should **only** be activated if the first one returns an empty set. ``` SELECT * FROM proxies WHERE (A='B') || SELECT * FROM proxies WHERE (A='C') ``` For the above 2 queries I have this code, but it seems to run each query twice (once to count, and once to return). Is there a better way to do this? ``` IF (SELECT count(*) FROM proxies WHERE A='B')>0 THEN SELECT * FROM proxies WHERE A='B' ELSEIF (SELECT count(*) FROM proxies WHERE A='C')>0 THEN SELECT * FROM proxies WHERE A='C' END IF ```
One option would be to use `UNION ALL` with `EXISTS`: ``` SELECT * FROM proxies WHERE A='B' UNION ALL SELECT * FROM proxies WHERE A='C' AND NOT EXISTS ( SELECT 1 FROM proxies WHERE A='B' ) ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!2/3fcc4/2) This will return rows from the `proxies` table where `A='B'` if they exist. However, if they don't exist, it will look for those rows with `A='C'`.
``` SELECT * FROM proxies WHERE A=(CASE WHEN (SELECT COUNT(*) FROM proxies WHERE A='B') > 0 THEN'B' ELSE 'C' END) ``` **UPDATE** ``` SELECT * FROM proxies WHERE ( CASE WHEN (SELECT COUNT(*) FROM proxies WHERE A='B' LIMIT 1) > 0 THEN (A='B') WHEN (SELECT COUNT(*) FROM proxies WHERE A='C' LIMIT 1) > 0 THEN (A='C') WHEN (SELECT COUNT(*) FROM proxies WHERE A='D' LIMIT 1) > 0 THEN (A='D') ELSE 1=2 END) ```
Second SELECT query if first SELECT returns 0 rows
[ "", "mysql", "sql", "" ]
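The accepted `UNION ALL` + `NOT EXISTS` fallback is portable SQL. Here is a minimal SQLite check (made-up data) showing that the second branch only contributes rows while the first one is empty:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE proxies (id INTEGER, a TEXT)")
conn.executemany("INSERT INTO proxies VALUES (?, ?)", [(1, "C"), (2, "C")])

FALLBACK_QUERY = """
    SELECT id FROM proxies WHERE a = 'B'
    UNION ALL
    SELECT id FROM proxies WHERE a = 'C'
      AND NOT EXISTS (SELECT 1 FROM proxies WHERE a = 'B')
"""

# No 'B' rows yet: the 'C' branch supplies the result.
before = sorted(r[0] for r in conn.execute(FALLBACK_QUERY))
print(before)  # [1, 2]

# Add a 'B' row: NOT EXISTS now fails, so only the 'B' branch remains.
conn.execute("INSERT INTO proxies VALUES (3, 'B')")
after = sorted(r[0] for r in conn.execute(FALLBACK_QUERY))
print(after)  # [3]
```

This is one round trip to the server either way, which was the point of pushing the logic out of PHP.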
I have a table that looks similar to this: ``` images image_id int user_id int datetime datetime ``` So, I need to get only one result. This result should have a specific user\_id and should only be the most recent row in the table. I've checked out many similar questions, but wasn't able to understand how to do it, and the code I found didn't work at all. How can I achieve this? Here's a start query: ``` SELECT * FROM image WHERE user_id = :user_id ``` How do I change this query to select only the most recent row with a specific user\_id?
You can use order by clause like this: ``` SELECT * FROM image WHERE user_id = :user_id ORDER BY datetime DESC LIMIT 1 ```
By using limit you can get no.of required data ``` limit required_no. ```
Select only one row from table with the highest datetime value
[ "", "mysql", "sql", "" ]
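The `ORDER BY ... DESC LIMIT 1` pattern is easy to sanity-check. A quick SQLite sketch with made-up rows (the datetime column is renamed `dt` here for clarity; ISO-8601 date strings sort correctly even as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE image (image_id INTEGER, user_id INTEGER, dt TEXT)")
conn.executemany("INSERT INTO image VALUES (?, ?, ?)", [
    (1, 7, "2014-01-01 10:00:00"),
    (2, 7, "2014-06-15 09:30:00"),   # most recent for user 7
    (3, 8, "2014-12-31 23:59:59"),   # different user, ignored
])

latest = conn.execute(
    "SELECT image_id FROM image WHERE user_id = ? ORDER BY dt DESC LIMIT 1", (7,)
).fetchone()
print(latest)  # (2,)
```

The filter runs first, then the sort, then `LIMIT 1` keeps only the top row — so the newest row for that specific user comes back.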
I have the following table: ``` JobCode | Designation | SalaryWithIncrement ---------------------------------------------- JC001 | IT | 150,000 JC001 | IT | 155,000 JC002 | Sales | 100,000 JC003 | HR | 200,000 JC003 | HR | 210,000 JC003 | HR | 220,000 ``` Required output: ``` JobCode | Designation | SalaryWithIncrement ------------------------------------------------ JC001 | IT | 305,000 JC002 | Sales | 100,000 JC003 | HR | 630,000 ``` Below is the code I used, but I don't get grand total after grouping ``` SELECT JobCode, designation, salaryWithIncrement FROM table1 group by (JobCode) ``` Any help is appreciated.
You can use the aggregate [`sum`](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) function: ``` SELECT JobCode, designation, SUM(salaryWithIncrement) FROM table1 GROUP BY JobCode, designation ```
In most cases when you have such requirement, Implement using GroupBY with SQL Aggregate function Group the Fields based on what you want records, in youu case `JobCode` and `designation` You can learn about Group BY here [MSDN](http://msdn.microsoft.com/en-us/library/ms177673.aspx) ``` SELECT JobCode,designation,SUM(salaryWithIncrement) FROM Job GROUP BY JobCode,designation ``` Here is your sample working code [SQL Fiddle](http://sqlfiddle.com/#!2/e5307/1)
How to get sum of records based on another column?
[ "", "mysql", "sql", "" ]
Suppose I have a table for a set of Roles containing data like Business Analyst, Quality Assurance Analyst, etc. All are indexed by a unique Id. I have another table for People, again indexed by a unique Id. I created a third table called "`PersonRole`" for people who have multiple roles. If I want to query PersonRole for someone with both a Business Analyst and a Developer role (assuming Ids of 2 and 3), what type of query do I need? Should I be building it with subqueries or unions? I have tried this but it returns nothing. ``` select * from PersonRole inner join Person on Person.Id = PersonRole.PersonId where Person.Id = 3 and PersonRole.RoleId = 2 and PersonRole.RoleId = 3 ```
If you want to query for someone who has both roles, I would suggest an aggregation query with a `having` clause. Here is one way to write this: ``` select pr.PersonId from PersonRole pr group by pr.PersonId having sum(case when pr.RoleId = 2 then 1 else 0 end) > 0 and sum(case when pr.RoleId = 3 then 1 else 0 end) > 0; ``` If you want more details from the `Person` table, you can join that back in.
You can use [OR](http://msdn.microsoft.com/en-us/library/ms188361.aspx) ``` select * from PersonRole inner join Person on Person.Id = PersonRole.PersonId where Person.Id = 3 and (PersonRole.RoleId = 2 OR PersonRole.RoleId = 3) ``` Alternatively [IN](http://msdn.microsoft.com/en-gb/library/ms177682.aspx) ``` select * from PersonRole inner join Person on Person.Id = PersonRole.PersonId where Person.Id = 3 and PersonRole.RoleId IN (2,3) ```
How to query a table for multiple conditions
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
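The accepted `HAVING` approach (relational division by counting matches per role) can be checked with a few lines of SQLite and hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person_role (person_id INTEGER, role_id INTEGER)")
conn.executemany("INSERT INTO person_role VALUES (?, ?)", [
    (1, 2), (1, 3),   # person 1 has both roles
    (2, 2),           # person 2 has only role 2
    (3, 3),           # person 3 has only role 3
])

# Keep only people who have at least one row for role 2 AND one for role 3.
both = conn.execute(
    """SELECT person_id
       FROM person_role
       GROUP BY person_id
       HAVING SUM(CASE WHEN role_id = 2 THEN 1 ELSE 0 END) > 0
          AND SUM(CASE WHEN role_id = 3 THEN 1 ELSE 0 END) > 0"""
).fetchall()
print(both)  # [(1,)]
```

This also makes the original bug visible: a single row can never satisfy `RoleId = 2 AND RoleId = 3`, which is why the question's query returned nothing.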
I am building a web app with ASP.NET and SQL (MS SQL Server 2000), and now that I am getting closer to finishing I noticed that the SQL part of my code takes up huge amounts of space. I was wondering what ways there are to make it easier to maintain, maybe even by changing to some other technology. Here is a simple example of the way my SQL code is built (there are some much longer commands in my code, but they are built this way): ``` Dim connectionString As String = ConfigurationManager.ConnectionStrings("ConnectionString").ToString() Dim query As String = "SELECT workerName FROM [poWorker] WHERE ( companyId = @companyId ) AND (workerId=@workerId) " Using con As New SqlConnection(connectionString) con.Open() Using da As New SqlDataAdapter() Using command As New SqlCommand(query, con) command.Parameters.Add(New SqlParameter("@workerId", Session("userId"))) command.Parameters.Add(New SqlParameter("@companyId", Session("companyId"))) Dim ds As New DataSet() da.SelectCommand = command da.Fill(ds, "test") If ds.Tables(0).Rows.Count = 1 Then managerName = ds.Tables(0).Rows(0)(0).ToString() End If End Using End Using con.Close() End Using ``` This takes a lot of space and I have a lot of SQL written this way. I am sure there is some solution that would make it easier to maintain, probably using a newer technology. Maybe I could find a way to call all the SQL commands from one or two functions, but so far I have failed to do so since there are big differences between many of them.
The modern way is to not use SQL directly, but rather an OR-mapper such as Entity Framework that allows you to query using LINQ. The query you show above would then be reduced to something shorter: ``` using(var context = new MyAppDbContext()) { var workerId = Session["userId"]; var companyId = Session["companyId"]; managerName = context.PoWorker .Single(w => w.companyId == companyId && w.workerId == workerId) .workerName; } ``` Sorry for using C# syntax, but I hope you can figure out the intention.
A modern way to access databases from code is to use an ORM. The one that Microsoft provides with the .NET Framework is Entity Framework. This allows you to write your query like this: ``` Dim worker as Worker = dbContext.Workers .Where(Function (w) (w.companyId = Session("companyId") and w.workerId = Session("userId"))) .SingleOrDefault() If worker IsNot Nothing Then managerName = worker.workerName End If ``` This approach also provides a far more robust approach to dynamic queries as opposed to piecing SQL strings together. For example, you can dynamically swap out `Where` clauses, `OrderBy` clauses, and so on, and still have completely typesafe code. Entity Framework does not have builtin support for SQL Server 2000, but apparently there is a [workaround](http://www.skonet.com/Articles_Archive/How_To_Use_Entity_Framework_4_With_Visual_Studio_2010_and_SQL_Server_2000.aspx).
sql commands taking to much space
[ "", "sql", "asp.net", "space", "" ]
I have an SQL query that I am trying that I am sure is easy, but I am not that well versed so I can't quite figure it out. Wasn't even sure how to word the question. Anyway here is what I am looking at: I have a table that has the following columns: Hostname, Path, Filename, Filesize It is essentially a directory listing of a number of computers (Hostnames). What I want to get is a list of distinct Hostnames where neither of two paths exist for that host. For example, grab all Hostnames that do not have a corresponding C:\users\Jeff or C:\users\Mary directory. If they have one of the two, omit. Only return them if neither of these directories exist. Any help would be much appreciated. Thanks!!
I would recommend splitting this into two parts: 1. Generate the list of all host names. 2. Generate the list of all host names that contain a path you don't want. Then, use `MINUS` (which automatically removes duplicates) to get the unique results: ``` SELECT hostname FROM table MINUS SELECT hostname FROM table WHERE path IN ('/search/path/one', '/search/path/two') ``` You can also use an anti-join instead of a `MINUS`, but I'll leave that up to you.
One way ``` SELECT HostName FROM YourTable GROUP BY HostName HAVING COUNT(CASE WHEN Path IN ('C:\users\Jeff', 'C:\users\Mary') THEN 1 END) =0; ```
SQL Conditional Query
[ "", "sql", "" ]
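Worth noting that `MINUS` is Oracle's spelling of the standard `EXCEPT` operator; the same two-part idea from the accepted answer runs in SQLite (and most other engines) with `EXCEPT`, as this sketch with invented host data shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listing (hostname TEXT, path TEXT)")
conn.executemany("INSERT INTO listing VALUES (?, ?)", [
    ("hostA", r"C:\users\Jeff"),
    ("hostA", r"C:\temp"),
    ("hostB", r"C:\temp"),          # hostB has neither directory
    ("hostC", r"C:\users\Mary"),
])

# All hostnames, minus those that have either of the two directories.
hosts = conn.execute(
    """SELECT hostname FROM listing
       EXCEPT
       SELECT hostname FROM listing
       WHERE path IN ('C:\\users\\Jeff', 'C:\\users\\Mary')"""
).fetchall()
print(hosts)  # [('hostB',)]
```

`EXCEPT`/`MINUS` also deduplicates, so each qualifying hostname appears once even if it has many files listed.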
I have an `addrLines` field formatted like `[Address] [City], [State] [Zip]`, and another field with just the `[city]` data. I am trying to extract just the `[Address]` portion from the `addrLines` field, but this query returns an invalid length parameter error. ``` SELECT LEFT(addrLines,(CHARINDEX(',',addrLines)-LEN(city))) FROM MyTable ``` Could anyone suggest what I am doing incorrectly? Thank you!
It likely means that you have an entry in `addrLines` that doesn't have a comma in it or `LEN(city)` is greater than `CHARINDEX(',',addrLines)`. In either one of those cases, you're likely going to get a negative number back for `CHARINDEX(',',addrLines)-LEN(city)`, which the `LEFT` function can't use.
try this: ``` SELECT LEFT(addrLines,(CHARINDEX(',',addrLines + ',')-LEN(city))) FROM MyTable ```
Invalid Length Parameter Passed to The Left or SUBSTRING funcction
[ "", "sql", "sql-server-2008", "t-sql", "" ]
I'm trying to puzzle through a large set of database stored procedures I'm now maintaining, and I've come to a piece that is confusing. I can't tell if it contains redundant/useless code, or if I'm missing some nuance to it, as I'm still only intermediate at the SQL language. Here's the section of code in question: ``` UPDATE #NewValues SET ValidNumber = CASE WHEN SP.ModelNumber IS NULL THEN 0 ELSE 1 END FROM #NewValues NV LEFT OUTER JOIN Stage.Products SP ON NV.SKU = SP.SKU AND (NV.ModelNumber = SP.ModelNumber OR NV.UPC = SP.ModelNumber) WHERE NV.SKU IN ( SELECT NV.SKU FROM #NewValues NV INNER JOIN Stage.Products SP ON NV.SKU = SP.SKU AND (NV.ModelNumber = SP.ModelNumber OR NV.UPC = SP.ModelNumber) WHERE NV.ValidNumber = 1 ) ``` From working through this, it seems to me as though there is no case in which `SP.ModelNumber` will be NULL, as the nested `SELECT` statement will never return SKUs with `SP.ModelNumber` NULL results. So the only thing that can ever do is `SET ValidNumber = 1`. Is that correct? Again, I'm not sure if there is something else going on here that I'm not able to factor into the logic, but it seems as though the nested `SELECT` statement would be all you would need to use to achieve the same results. Because of the way this code is positioned in a much larger query, I'm having an extremely difficult time trying to set up a situation that would match normal use to test this, especially since I'm not sure what the `ValidNumber` values would be *before* this section of code runs.
It depends on the nature of the data. If a single `Sku` can have multiple records in the newvalues table, then the `outer join` might be needed and it could be possible to set `validnumber = 0`. --- Here's some sample data to try to illustrate (fiddle below): ``` create table newvalues (validnumber int, sku int, modelnumber int, upc int); create table products (sku int, modelnumber int); insert into newvalues values (1, 1, 1, 1), (0, 1, 1, 1), (null, 1, 2, 2); insert into products values (1, 1); ``` As you can see, there are multiple results in the newvalues table with `sku = 1`. --- And then the `update` statement: ``` UPDATE NewValues SET validnumber = CASE WHEN SP.ModelNumber IS NULL THEN 0 ELSE 1 END FROM NewValues NV LEFT OUTER JOIN Products SP ON NV.SKU = SP.SKU AND (NV.ModelNumber = SP.ModelNumber OR NV.UPC = SP.ModelNumber) WHERE NV.SKU IN ( SELECT NV.SKU FROM NewValues NV INNER JOIN Products SP ON NV.SKU = SP.SKU AND (NV.ModelNumber = SP.ModelNumber OR NV.UPC = SP.ModelNumber) WHERE NV.ValidNumber = 1 ); ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!3/7c91f/1) This updates all rows, some to 0 and some to 1, because at least one of those existed in the subquery with the `inner join`. This will not negate the `outer join` as multiple rows exist for the same `sku`, so they all get updated (even those where `sp.modelnumber is null`)
The original coder is just being safe. There is a possibility that there could be an item in #NewValues which does not exist in Stage.Products, and if that is the case the LEFT JOIN will cause all fields in SP to be NULL where there is no match.
SQL CASE statement, one result seems unreachable
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "case", "" ]
I am very new to programming. I am trying to teach myself SQL. I made a video store database with a customer, rental, and inventory table. I am looking at examples online to try to LEARN SQL, so please be kind. I want this trigger to stop a person from renting the same movie at the same time. I have been reading up on triggers and would like some help with the code below. I am getting syntax errors on the last END, a syntax error on RAISEERROR, and INSERTED. I am also still getting the message "Trigger multipart identifier cannot be bound" on i.Rental.rentNum inserted i; Any help will be much appreciated! ``` CREATE TRIGGER insteadOFInsert ON Rental INSTEAD OF INSERT AS DECLARE @rentNum int, @action varchar(60) SELECT @Rent_rentNum=i.Rental.rentNum inserted i; SET @action='stop rental trigger.' @Rent_rentNum=(SELECT rentNum FROM inserted; BEGIN BEGIN TRAN SET NOCOUNT ON IF (@RENT_rentNum=Rental.rentNum) BEGIN RAISEERROR ('You cannot rent the same move twice'); ROLLBACK END ELSE BEGIN INSERT INTO Rental(rentNum) VALUES (@rentNum) INSERT INTO Rent values (rentNum); COMMIT PRINT 'Updated' END END ```
Your trigger has several **fundamental** flaws: * please ***never ever*** use `BEGIN TRAN` inside a trigger! The trigger is running in the context of the statement that caused it to fire, and therefore is *already* in the context of a transaction * you need to be aware that the trigger will be called **once per statement** - not once per row! So if your `INSERT` statement inserts 10 rows, your trigger is fired **once**, and the `Inserted` pseudo table will contain **10 rows** of data - which one do you think will be picked in your statement ``` SELECT rentNum FROM inserted; ``` One of them will do - one more or less at random - and the 9 others will be ignored. So basically, you need to totally rewrite your trigger to something like this: ``` CREATE TRIGGER insteadOFInsert ON dbo.Rental AFTER INSERT AS -- if any one of the rows inserted already exists in the Rental table -> abort IF EXISTS (SELECT * FROM dbo.Rental WHERE RentNum IN (SELECT RentNum FROM Inserted)) BEGIN RAISERROR ('You cannot rent the same movie twice', 16, 1); ROLLBACK END ``` You didn't explain why you picked an `INSTEAD OF INSERT` trigger - I really don't see any good reason for that, so I chose to do this as an `AFTER INSERT` trigger instead (it's just plain simpler to write these)
Your first SELECT is missing a FROM before "inserted".
SQL TRIGGER THE MULTI-PART IDENTIFIER CANT BE BOUND
[ "", "sql", "sql-server-2008", "triggers", "syntax-error", "" ]
I have this database structure: ## Tables: **Category** ``` id | category 1 | fruit 2 | cars 3 | tables ``` **Product** ``` id | product | category_id 1 | banana | 1 2 | apple | 1 3 | orange | 1 4 | example 1 | 2 5 | example 2 | 3 ``` **User\_List** ``` id | product_id | user_id | bought_date 1 | 1 | 1 | 2012-06-21 11:00:00 2 | 2 | 1 | 2012-06-21 06:00:00 3 | 4 | 1 | 2012-06-21 08:00:00 4 | 5 | 1 | 2012-06-21 01:00:00 ``` What I want is to create a query that orders by bought\_date (desc) per category. In that case the expected result is: ``` banana apple example 1 example 2 ``` My query: ``` SELECT c.id, u.bought_date FROM categry as c left join product p on (c.id=p.category_id) left join user_list u on (p.id=u.product_id) WHERE u.user_id=3 ORDER BY u.bought_date DESC NULLS LAST ``` But this only does a simple sort by bought date... with this result: ``` banana example 1 apple example 2 ```
I thought of one ordering. You want to order by the earliest or latest date for each category. For that, use window functions. ``` SELECT c.id, u.bought_date, max(u.bought_date) over (partition by c.id) as category_bd FROM categry c left join product p on (c.id=p.category_id) left join user_list u on (p.id=u.product_id) WHERE u.user_id = 3 ORDER BY category_bd DESC NULLS LAST, u.bought_date DESC NULLS LAST ```
It sounds as though you just need two columns in your `order by` clause: ``` SELECT c.id, u.bought_date FROM categry as c left join product p on (c.id=p.category_id) left join user_list u on (p.id=u.product_id) WHERE u.user_id=3 ORDER BY category_id, u.bought_date DESC NULLS LAST ```
PostgreSQL Order By SubQuery
[ "", "sql", "postgresql", "sql-order-by", "" ]
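The accepted window-function idea — order by the per-category maximum date, then by each row's own date — can be reproduced with SQLite (version 3.25+, as bundled with recent Pythons, supports window functions). The tables are collapsed into one pre-joined table here to keep the sketch short:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (category TEXT, product TEXT, bought_date TEXT)")
conn.executemany("INSERT INTO purchases VALUES (?, ?, ?)", [
    ("fruit",  "banana",    "2012-06-21 11:00:00"),
    ("fruit",  "apple",     "2012-06-21 06:00:00"),
    ("cars",   "example 1", "2012-06-21 08:00:00"),
    ("tables", "example 2", "2012-06-21 01:00:00"),
])

# Sort categories by their latest purchase, then rows inside each category.
rows = conn.execute(
    """SELECT product,
              MAX(bought_date) OVER (PARTITION BY category) AS category_bd
       FROM purchases
       ORDER BY category_bd DESC, bought_date DESC"""
).fetchall()
print([r[0] for r in rows])  # ['banana', 'apple', 'example 1', 'example 2']
```

This matches the expected output in the question: all of fruit's rows sort ahead of the other categories because fruit holds the newest purchase overall.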
I have a database table below. ![database table](https://i.stack.imgur.com/mXyhb.png) And I want to get list of all DBKey that have: at least one entry with Staled=1, and the last entry is Staled=0 The list should not contain DBKey that has only Staled=0 OR Staled=1. In this example, the list would be: DBKey=2 and DBKey=3
I think this should do the trick: ``` SELECT DISTINCT T.DBKey FROM TABLE T WHERE -- checks that the DBKey has at least one entry with Staled = 1 EXISTS ( SELECT DISTINCT Staled FROM TABLE WHERE DBKey = T.DBKey AND Staled = 1 ) -- checks that the last Staled entry for this DBKey is 0 AND EXISTS ( SELECT DISTINCT Staled FROM TABLE WHERE DBKey = T.DBKey AND Staled = 0 AND EntryDateTime = ( SELECT MAX(EntryDateTime) FROM TABLE WHERE DBKey = T.DBKey ) ) ``` Here is a working [**SQLFiddle**](http://sqlfiddle.com/#!3/4274f/5) of the query, using your sample data. The idea is to use `EXISTS` to look for those individual conditions that you've described. I've added comments to my code to explain what each does.
Should be done with a simple JOIN... Starting FIRST with any 1 qualifiers, joined to itself by same key AND 0 staled qualifier AND the 0 record has a higher date. Ensure you have an index on ( DBKey, Staled, EntryDateTime ) ``` SELECT YT.DBKey, MAX( YT.EntryDateTime ) as MaxStaled1, MAX( YT2.EntryDateTime ) as MaxStaled0 from YourTable YT JOIN YourTable YT2 ON YT.DBKey = YT2.DBKey AND YT2.Staled = 0 AND YT.EntryDateTime < YT2.EntryDateTime where YT.Staled = 1 group by YT.DBKey having MAX( YT.EntryDateTime ) < MAX( YT2.EntryDateTime ) ```
Need help creating SQL query from example of data
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I would like to get your opinion about my problem. I am working on a project that stores publications from Google Scholar. So, when I store the data, it is displayed like this. ``` ID| COLUMN1 | COLUMN2 1 | 'Knowledge and Data Engineering' | 'IEEE transactions on 16 (1)' 1 | 'Knowledge and Data Engineering' | 'IEEE transactions on 16 (1) 28-40 ' 2 | 'Data Engineering' | '1999. Proceedings.' 2 | 'Data Engineering' | '1999. Proceedings. 15th International Conference on ' 2 | 'Data Engineering' | '1999. Proceedings. 15th International Conference on 146-153' 3 | 'ACM SIGMOD Record 30 (2)' | '187-198' ``` I hope my table-like drawing is clear. What I want to do is: when the same ID appears on consecutive rows, keep only the last row. ``` ID| COLUMN1 | COLUMN2 1 | 'Knowledge and Data Engineering' | 'IEEE transactions on 16 (1) 28-40 ' 2 | 'Data Engineering' | '1999. Proceedings. 15th International Conference on 146-153' 3 | 'ACM SIGMOD Record 30 (2)' | '187-198' ``` Thank you for your help.
You can use the **[ROW\_NUMBER()](http://msdn.microsoft.com/en-GB/library/ms186734.aspx)** windows function to produce a sequential number per `ID`, from which you would want to take the last / highest row number. > **ROW\_NUMBER()** : Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition. So I broke down the problem into 2 steps: 1. Creating a #temp table with the row numbers included 2. Selecting the rows from that temp table with the highest row number per group `SQL Fiddle Demo` **MS SQL Server 2012 Schema Setup**: ``` CREATE TABLE Publications ([ID] int, [COLUMN1] varchar(34), [COLUMN2] varchar(63)) ; INSERT INTO Publications ([ID], [COLUMN1], [COLUMN2]) VALUES (1, '''Knowledge and Data Engineering''', '''IEEE transactions on 16 (1)'''), (1, '''Knowledge and Data Engineering''', '''IEEE transactions on 16 (1) 28-40 '''), (2, '''Data Engineering''', '''1999. Proceedings.'''), (2, '''Data Engineering''', '''1999. Proceedings. 15th International Conference on '''), (2, '''Data Engineering''', '''1999. Proceedings. 15th International Conference on 146-153'''), (3, '''ACM SIGMOD Record 30 (2)''', '''187-198''') ; ``` **Query 1**: ``` -- INSERT VALUES INTO TEMP TABLE WITH ROW_NUMBER SELECT ID , Column1 , Column2 , ROW_NUMBER() OVER ( PARTITION BY ID ORDER BY ID ) RowNo INTO #TEMP FROM Publications -- SELECT ROW FOR EACH ID WITH MAX ROW_NUMBER SELECT T1.ID, T1.Column1, T1.Column2 FROM #TEMP T1 WHERE RowNo = (SELECT MAX(RowNo) FROM #TEMP T2 WHERE T1.ID = T2.ID) ORDER BY ID ``` **[Results](http://sqlfiddle.com/#!6/a8809/1/0)**: ``` | ID | COLUMN1 | COLUMN2 | |----|----------------------------------|---------------------------------------------------------------| | 1 | 'Knowledge and Data Engineering' | 'IEEE transactions on 16 (1) 28-40 ' | | 2 | 'Data Engineering' | '1999. Proceedings. 15th International Conference on 146-153' | | 3 | 'ACM SIGMOD Record 30 (2)' | '187-198' | ```
``` WITH CTE AS( SELECT Id, Column1, Column2, ROW_NUMBER() OVER (PARTITION BY Column1 ORDER BY Id DESC) AS rownum ) SELECT Id, Column1, column2 FROM CTE WHERE rownum = 1 ```
How to remove duplicate records by taking last record in each group
[ "", "sql", "sql-server", "" ]
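The `ROW_NUMBER()` deduplication above has an equivalent, slightly shorter formulation: number the rows *descending* within each partition and keep `rn = 1`. One caveat worth making explicit: "last row per group" only makes sense if some column records insertion order, so this SQLite sketch (3.25+ for window functions) adds an assumed `seq` column for that purpose:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# 'seq' stands in for insertion order -- without such a column,
# "last row per group" is not well defined in SQL.
conn.execute("CREATE TABLE pubs (id INTEGER, seq INTEGER, col2 TEXT)")
conn.executemany("INSERT INTO pubs VALUES (?, ?, ?)", [
    (1, 1, "IEEE transactions on 16 (1)"),
    (1, 2, "IEEE transactions on 16 (1) 28-40"),
    (2, 1, "1999. Proceedings."),
    (2, 2, "1999. Proceedings. 15th International Conference on"),
    (2, 3, "1999. Proceedings. 15th International Conference on 146-153"),
    (3, 1, "187-198"),
])

last_per_id = conn.execute(
    """SELECT id, col2 FROM (
           SELECT id, col2,
                  ROW_NUMBER() OVER (PARTITION BY id ORDER BY seq DESC) AS rn
           FROM pubs)
       WHERE rn = 1
       ORDER BY id"""
).fetchall()
print([r[0] for r in last_per_id])  # [1, 2, 3]
```

Exactly one row survives per `id`, and it is the one with the highest `seq` — the same effect as the accepted `MAX(RowNo)` lookup, in one pass.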
I know that if an SQL query is not sorted by an Order By argument, it's not sorted at all. When running a query without an Order By, I'd expect to see the same results in a differing order. However, the results *are* still returned in an order (i.e. there is a row 1, a row 2 and so on), just not a dependable one - so if they're not sorted, what determines (presumably on a case-by-case basis) the order in which rows are returned. Specifically, my experience is with MySQL, but I'm curious about this from a theoretical point, rather than trying to solve a specific problem. **Edit: I always use an Order By if I actually want to order a query - this is just curiosity.**
For the obvious reason database vendors do not publish information about what order rows in an unordered result set will appear in (it would only lead to people relying on the unreliable current implementation). Common sense, however, tells us that the most likely factors contributing to apparent order would include: 1. Sequential order of the rows in whatever format they're stored in (which might be clustered index, row-added order for databases that store invariant length segments of records in SAM type files or use a variable-length mechanism like XML, or row-added-except-for-moved-rows for database that store variable length records and move them to the end of the storage file when the row size increases), at least for searches requiring a table scan. 2. Index order for instances in which a covering index exists and the actual table is never consulted. 3. One or the other of the above for indexed-based searches that require additional data from the table depending on the implementation.
The answer as to what determines the row order when no `ORDER BY` is specified is: a lot of internal considerations that can depend on how the data is stored on disk (or in-memory), what types of operations the db does to execute your query (e.g. `JOIN`s), and many other factors that can vary by database engine. Bottom-line is that the SQL standards all say that the dbms does not have to guarantee anything at all about ordering result sets unless you use `ORDER BY`. So, there is no theoretical default. If you need to have your data sorted in a certain way, *always* use `ORDER BY`.
As there is no default order in SQL, what determines the order rows are returned in if an Order By is not specified?
[ "", "mysql", "sql", "sorting", "" ]
I am trying to create a stored proc that has a where clause in which two different operations can take place depending on the value of a parameter passed in: ``` WHERE (cdr.CircuitReference = @CircuitReference) AND CASE WHEN (@JDEDocumentReference <> 'Unbilled Calls') THEN sct.JDEDocumentReference = @JDEDocumentReference ELSE ((sct.JDEDocumentReference IS NULL) AND (sc.StartDate IS NOT null AND ((sc.CloseDate IS null) OR (datediff(day,sc.CloseDate,getdate()) < 0)) AND stp.SipTrunksProduct = sct.ProductCode)) END ``` I've just posted my where clause above, but when I try to execute the script I get the following error: ``` Incorrect syntax near '='. ``` Is this the correct way to do a conditional statement in a WHERE clause of an SQL query? Thanks
This problem could be solved without a CASE statement by using the following: ``` WHERE (cdr.CircuitReference = @CircuitReference) AND ((@JDEDocumentReference <> 'Unbilled Calls' AND sct.JDEDocumentReference = @JDEDocumentReference) OR (@JDEDocumentReference = 'Unbilled Calls' AND ((sct.JDEDocumentReference IS NULL) AND (sc.StartDate IS NOT null AND ((sc.CloseDate IS null) OR (datediff(day,sc.CloseDate,getdate()) < 0)) AND stp.SipTrunksProduct = sct.ProductCode)))) ```
The statement is fully wrong: there is no need for a CASE here (even though there is a way to do it correctly, it is not needed). **USE:** ``` (cdr.CircuitReference = @CircuitReference) AND ((JDEDocumentReference <> 'Unbilled Calls' AND @JDEDocumentReference) OR @JDEDocumentReference = 'Unbilled Calls' ) OR (JDEDocumentReference = 'Unbilled Calls' AND ((sct.JDEDocumentReference IS NULL) AND (sc.StartDate IS NOT null AND ((sc.CloseDate IS null) OR (datediff(day,sc.CloseDate,getdate()) < 0)) AND stp.SipTrunksProduct = sct.ProductCode))) ```
SQL Server Case in Where clause
[ "", "sql", "sql-server", "" ]
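The accepted rewrite — turning `CASE WHEN p THEN q ELSE r END` inside a `WHERE` clause into `(p AND q) OR (NOT p AND r)` — is a general transformation, since `CASE` in T-SQL yields a value, not a boolean predicate. A tiny SQLite check of the same shape, with a made-up table and a sentinel parameter value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (doc_ref TEXT, amount INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("INV-1", 10), ("INV-2", 20), (None, 30)])

def pick(doc_ref_param):
    # CASE-free branching: (param is a real reference AND it matches)
    # OR (param is the sentinel AND doc_ref IS NULL).
    return conn.execute(
        """SELECT amount FROM t
           WHERE (? <> 'Unbilled Calls' AND doc_ref = ?)
              OR (? = 'Unbilled Calls' AND doc_ref IS NULL)""",
        (doc_ref_param, doc_ref_param, doc_ref_param),
    ).fetchall()

print(pick("INV-1"))           # [(10,)]
print(pick("Unbilled Calls"))  # [(30,)]
```

Each branch is guarded by the parameter test, so exactly one of the two disjuncts can ever be true for a given call — the same behaviour the `CASE` expression was trying to express.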