To the best of my knowledge of MySQL this is not possible, but a thought on this case would be helpful. We want to `select` a record in a parent table and concatenate (something like that) the child rows in the `select`. Here it is in obviously wrong MySQL, but it illustrates what we want to achieve. ``` SELECT parentattr, CONCAT ( SELECT name FROM child WHERE child.parentId = parent.id)) as allchildernames FROM parent ```
You also need a `GROUP BY`, plus you need to specify exactly which name is being concatenated. ``` SELECT parentattr1, parentattr2, GROUP_CONCAT(c.name ORDER BY c.name) FROM parent p LEFT JOIN child c ON p.id = c.parentId GROUP BY parentattr1, parentattr2 ```
Try using `GROUP_CONCAT()`; here are [examples](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat). You also have an extra `)` after `parent.id`.
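The approach from both answers can be sketched end-to-end with SQLite (via Python's `sqlite3`), which ships a `GROUP_CONCAT` aggregate of the same name; the parent/child schema and sample names below are invented for illustration:

```python
import sqlite3

# In-memory database with a hypothetical parent/child schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, parentattr TEXT);
    CREATE TABLE child  (id INTEGER PRIMARY KEY, parentId INTEGER, name TEXT);
    INSERT INTO parent VALUES (1, 'p1'), (2, 'p2');
    INSERT INTO child  VALUES (1, 1, 'alice'), (2, 1, 'bob'), (3, 2, 'carol');
""")

# One row per parent; all child names collapsed into a comma-separated list.
rows = conn.execute("""
    SELECT p.parentattr, GROUP_CONCAT(c.name) AS allchildnames
    FROM parent p
    LEFT JOIN child c ON c.parentId = p.id
    GROUP BY p.id, p.parentattr
    ORDER BY p.id
""").fetchall()
```

Note the join condition pairs the child's `parentId` with the parent's `id`; getting that backwards silently produces wrong groupings.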
Concat child records in parent SELECT
[ "", "mysql", "sql", "" ]
I have an `Insert Into Select` statement with a `Case When` clause. I want to execute a stored procedure within the `When` statement. ``` Insert into Orders(id, custId, custIntake) Select id, custId custIntake = Case When ( Exec mySProc(custId) = 1 ) = 'InStore' When ( Exec mySProc(custId) = 0 ) = 'OutsideStore' Else null End From OrdersImport ``` How can I run `Exec mySProc(custId)` within the `Case When`?
I would suggest you convert your 'mySProc' procedure into a Scalar User Defined Function if you want to run it like this. Stored Procedures are not able to do what you want.
If I understand correctly then what you need is code to run when the WHEN statement is true. Just use CASE > WHEN > THEN as described here: <http://msdn.microsoft.com/en-us/library/ms181765.aspx> Hope this helps.
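The "convert the procedure into a scalar function" advice can be sketched portably: SQL Server scalar UDFs can't be demoed here, but Python's `sqlite3` lets us register a scalar function and call it inside `CASE WHEN`, which mirrors the idea. The function body and sample data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE OrdersImport (id INTEGER, custId INTEGER)")
conn.executemany("INSERT INTO OrdersImport VALUES (?, ?)", [(1, 10), (2, 20)])

# Hypothetical stand-in for mySProc: returns 1 for "in store" customers.
def my_sproc(cust_id):
    return 1 if cust_id == 10 else 0

conn.create_function("mySProc", 1, my_sproc)

# A scalar function, unlike a procedure, can be called inside CASE WHEN.
rows = conn.execute("""
    SELECT id, custId,
           CASE WHEN mySProc(custId) = 1 THEN 'InStore'
                WHEN mySProc(custId) = 0 THEN 'OutsideStore'
                ELSE NULL END AS custIntake
    FROM OrdersImport ORDER BY id
""").fetchall()
```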
Execute a stored procedure within a case statement
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "case-when", "" ]
How can I concatenate a character to MySQL results, except for the last result? So: ``` friends ------------- friend-a friend-b friend-c friend-d friend-e ``` I currently do `select concat(friends,',') from friends`, which gives me each friend name with a `,`, but I don't need the `,` for the last one. I cannot use group\_concat here due to its size restriction. **Expected output** ``` friend-a, friend-b, friend-c, friend-d, friend-e ```
If you don't want to use group\_concat (which would be the optimal way to do this), you can remove the last row and then add it back with a union, or use an aggregate function on your string to return the largest. ``` (select concat(friends,',') as friend from friends WHERE friends <> (SELECT friends from friends order by friends DESC limit 1) ) UNION (SELECT friends from friends order by friends DESC limit 1) ``` [**Fiddle Demo**](http://sqlfiddle.com/#!2/1d8e1a/4) An even better way to do it is like so: ``` SET @a := (SELECT MAX(friends) FROM friends); SELECT CASE WHEN friends <> @a THEN concat(friends,',') ELSE friends END AS friends FROM friends GROUP BY friends ``` [**Another Fiddle**](http://sqlfiddle.com/#!2/86bafc/2) However, the easiest way is to just use group\_concat and increase the size for your session... not in your .cnf file but just for the session; that way you don't affect the server as much. See [**My Answer Here**](https://stackoverflow.com/questions/25585446/maximum-length-allowed-in-group-concat-how-to-pass-it/25585642#25585642) for how to set your session max len and use it.
I guess that you want the output so that you can use it in an `IN` clause? If yes, then you can use the following: ``` select * from `some_other_table` where `friends` in (select `friends` from `friends`); ``` Tell me if I am wrong.
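The "comma on everything except the maximum row" trick from the first answer can be sketched in SQLite via Python's `sqlite3` (SQLite uses `||` where MySQL uses `CONCAT`); the friend names are the question's sample data, shortened:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE friends (friends TEXT)")
conn.executemany("INSERT INTO friends VALUES (?)",
                 [("friend-a",), ("friend-b",), ("friend-c",)])

# Append a comma to every value except the one that sorts last.
rows = conn.execute("""
    SELECT CASE WHEN friends <> (SELECT MAX(friends) FROM friends)
                THEN friends || ','
                ELSE friends END AS friend
    FROM friends ORDER BY friends
""").fetchall()
```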
MySQL concat a character except for the last record
[ "", "mysql", "sql", "" ]
Suppose I have a table: ``` start_range end_range 1 4 4 8 ``` I want the result to be true if the value is greater than some start\_range and less than the *corresponding* end\_range. E.g. the value 2 should return `true`, as `2>1` and `2<4`, but the value 4 should return `false` in this case: `4>1` but `4<4` is false for the first row, and `4>4` is false for the second row. I cannot use the query ``` SELECT Sumthing FROM XYZ WHERE value> SOME(start_range) AND value < SOME(end_range) ``` The problem with the above query: let's say value = 4. Now 4> SOME(start\_range) will become true as 4>1, AND 4< SOME(end\_range) will also become true as 4<8. But in reality the comparison should be like `(((4>1)AND(4<4)) OR ((4>4)AND(4<8)))`, which returns false. One more thing: the above table is not persistent, **I have been creating it in a subquery. That's why I have been using SOME.** If my question still isn't clear, mention it in the comments.
Assuming that `xyz` is your table: ``` select (count(*) > 0) as HasMatch from xyz where value > start_range and value < end_range; ``` I'm not sure why you are using `some`. EDIT: It occurs to me that you want to use subqueries, and `xyz` is not the table in question. Perhaps this is what you want: ``` select xyz.* from xyz where exists (select 1 from (<your query here>) t where xyz.value > t.start_range and xyz.value < t.end_range ); ```
``` select (count(*) > 0) as HasMatch from (select IF(start_range<value and end_range>value, true, false ) as value from XYZ having value =1) as MatchTable ``` [DEMO](http://sqlfiddle.com/#!2/bb8fc/31)
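The key point of both answers, that each value must be tested against both bounds of the *same* row rather than `SOME()` over each column independently, can be sketched like this, assuming the intervals live in a table `xyz`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xyz (start_range INTEGER, end_range INTEGER)")
conn.executemany("INSERT INTO xyz VALUES (?, ?)", [(1, 4), (4, 8)])

def has_match(value):
    # Both bounds are checked against the SAME row, unlike SOME() applied
    # to each column independently.
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM xyz WHERE ? > start_range AND ? < end_range",
        (value, value)).fetchone()
    return n > 0
```

`has_match(2)` is true (inside the first interval); `has_match(4)` is false, since 4 is strictly inside neither interval.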
Query using SOME in MySQL not giving expected results
[ "", "mysql", "sql", "" ]
``` query = "SELECT highest_score FROM users ORDER BY CAST(highest_score AS UNSIGNED) DESC"; ``` I have a table where the `highest_score` column is **VARCHAR**(it should be **INT**) but that's not the question. The question is why does the query return this: 1. -10 2. 255 3. 241 4. 231 5. 125 > Since I am casting highest score as an **integer** shouldn't -10 be at the > bottom of the list?
Casting a **negative** value to `unsigned` causes integer overflow; that's why `-10` is considered the largest value.
You are casting it to `UNSIGNED`, which cannot store negative values. Cast it to `SIGNED` instead. Executing `SELECT CAST('-10' AS UNSIGNED)` reveals the reason: UNSIGNED overflows when you try to cast a negative value. The above statement's result is `18446744073709551606`, which is high enough to sort first when you order descending.
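The overflow is plain modular arithmetic and can be reproduced outside MySQL: casting a negative number into a 64-bit unsigned space wraps it around `2**64`.

```python
# MySQL's UNSIGNED cast works in a 64-bit space, so a negative value
# wraps around modulo 2**64 instead of staying negative.
wrapped = -10 % 2**64
```

The wrapped value (`2**64 - 10`) is far larger than any of the legitimate scores, which is why `-10` sorts to the top under `DESC`.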
Cast as INT, sort by DESC issue
[ "", "mysql", "sql", "" ]
I am creating a site that is sort of ecommerce-ish. I want to give my users a perfect search ability using specific attributes that differ from product to product. I plan to create one products table storing the basic information that is shared among products, i.e. Name, Description, Price and a few others. Then I plan to create several "details" tables, say categories\_computers with columns Processor, HDD, RAM, etc., and another table, say table\_shoes, with columns MATERIAL, SIZE, GENDER, etc. I am new to MySQL but not to the concept of databases. I don't think I will have a problem storing this data in each table. My issue comes from reads. It won't be hard to query a product id, but I think it would be extremely wasteful to query all the details tables to get the details of a product, since one product can only have one details row. So my question is: how can I store a reference to a table in a column, so that a product has say ID, Name, Description, Price, Details\_Table\_ID or something similar, to save on queries? Do tables have unique ids in MySQL? Or how does the Stackoverflow community suggest I go about this? Thanks. **EDIT** Silly me, I have just remembered that every table name is unique, so I can just use that. So my question changes to: how can I write a query that uses one cell in a table A as a reference to a table name?
Don't use separate details tables for each category, use a generic details table that can store any attribute. Its columns would be: ``` Product_ID INT (FK to Products) Attribute VARCHAR Value VARCHAR ``` The unique key of this table would be `(Product_ID, Attribute)`. So if `Product_ID = 1` is a computer, you would have rows like: ``` 1 Processor Xeon 1 RAM 4GB 1 HDD 1TB ``` And if `Product_ID = 2` is shoes: ``` 2 Material Leather 2 Size 6 2 Gender F ``` If you're worried about the space used for all those attribute strings, you can add a level of indirection to reduce it. Create another table `Attributes` that contains all the attribute names. Then use `AttributeID` in the `Details` table. This will slow down some queries because you'll need to do an additional join, but could save lots of space
Since this is an application, you must be generating the queries, so let's generate them in 2 steps. I assume you can add a column product\_type\_id to your Product table that tells you which child table to use. Next, create another table Product\_type which contains the columns product\_type\_id and query. This query can be used as the base query for building the final query, e.g. ``` Product_type_id | Query 1 | SELECT COMPUTERS.* FROM COMPUTERS JOIN PRODUCT ON COMPUTERS.PRODUCT_ID = PRODUCT.PRODUCT_ID 2 | SELECT SHOES.* FROM SHOES JOIN PRODUCT ON SHOES.PRODUCT_ID = PRODUCT.PRODUCT_ID ``` Based on the product\_type\_id of the product entered by the user, look up this table to build the base query, then append your where clause to the query returned.
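The generic attribute/value table from the first answer can be sketched in SQLite via Python's `sqlite3`, using the sample rows given above; one parameterized query then serves every product category:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Details (
        Product_ID INTEGER,
        Attribute  TEXT,
        Value      TEXT,
        UNIQUE (Product_ID, Attribute)
    );
    INSERT INTO Details VALUES
        (1, 'Processor', 'Xeon'), (1, 'RAM', '4GB'), (1, 'HDD', '1TB'),
        (2, 'Material', 'Leather'), (2, 'Size', '6'), (2, 'Gender', 'F');
""")

# The same query works whether product 1 is a computer or product 2 is shoes.
attrs = dict(conn.execute(
    "SELECT Attribute, Value FROM Details WHERE Product_ID = ?", (1,)))
```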
Store a unique reference to a Mysql Table
[ "", "mysql", "sql", "database", "performance", "database-design", "" ]
I have an `update` statement that's like this: ``` update user_stats set requestsRecd = (select count(*) from requests where requestedUserId = 1) where userId = 1, requestsSent = (select count(*) from requests where requesterUserId = 2) where userId = 2; ``` What I'm trying to do, is update the same table, but different users in that table, with the count of friend requests received for one user, and the count of friend requests sent by another user. What I'm doing works if I remove the `where` clauses, but then, that updates all the users in the entire table. Any idea how I can do something like this with the `where` clauses in there or achieve the same results using another approach?
(*As proposed in several other answers, obviously, you could run two separate statements, but to answer the question you asked, whether it was possible, and how to do it...*) Yes, it is possible to accomplish the update operation with a single statement. You'd need conditional tests as part of the statement (like the conditions in the WHERE clauses of your example), but those conditions can't go into a WHERE clause of the UPDATE statement. The big restriction we have with doing this in one UPDATE statement is that the statement has to assign a value to *both* of the columns, for *both* rows. One "trick" we can make use of is assigning the current value of the column back to the column, e.g. ``` UPDATE mytable SET mycol = mycol WHERE ... ``` This results in no change to what's stored in the column. (*That would still fire BEFORE/AFTER update triggers on the rows that satisfy the WHERE clause*, but the **value** currently stored in the column will **not** be changed.) So, we can't conditionally specify which columns are to be updated on which rows, but we *can* include a condition in the **expression** that we're assigning to the column. As an example, consider: ``` UPDATE mytable SET mycol = IF(foo=1, 'bar', mycol) ``` For rows where foo=1 evaluates to TRUE, we'll assign 'bar' to the column. For all other rows, the value of the column will remain unchanged. In your case, you want to assign a "new" value to a column if a particular condition is true, and otherwise leave it unchanged. Consider the result of this statement: ``` UPDATE user_stats t SET t.requestsRecd = IF(t.userId=1, expr1, t.requestsRecd) , t.requestsSent = IF(t.userId=2, expr2, t.requestsSent) WHERE t.userId IN (1,2); ``` (I've omitted the subqueries that return the count values you want to assign, and replaced them with the "expr1" and "expr2" placeholders. This just makes it easier to see the pattern, without cluttering it up with more syntax that hides the pattern.)
You can replace `expr1` and `expr2` in the statement above with your **original subqueries** that return the counts. --- As an alternative form, it's also possible to return those counts on a single row, using an inline view (aliased as `v` here), and then specify a join operation. Something like this: ``` UPDATE user_stats t CROSS JOIN ( SELECT (select count(*) from requests where requestedUserId = 1) AS c1 , (select count(*) from requests where requesterUserId = 2) AS c2 ) v SET t.requestsRecd = IF(t.userId=1, v.c1 ,t.requestsRecd) , t.requestsSent = IF(t.userId=2, v.c2 ,t.requestsSent) WHERE t.userId IN (1,2) ``` Since the inline view returns a single row, we don't need any ON clause or predicates in the WHERE clause. (I typically include the CROSS keyword here, but it could be omitted without affecting the statement. My primary rationale for including the CROSS keyword is to make the intent clear to a future reader, who might be confused by the omission of join predicates, expecting to find some in the ON or WHERE clause. The CROSS keyword alerts the reader that the omission of join predicates was intended.) Also note that the statement would work the same even if we omitted the predicates in the WHERE clause; we would just spin through all the rows in the entire table, and only the rows with userId=1 or userId=2 would be affected. (But we want to include the WHERE clause for improved performance; there's no reason for us to obtain locks on rows that we don't want to modify.) So, to summarize: yes, it **is** possible to perform this sort of conditional update of two (or more) rows within a single statement. As to whether you want to use this form or two separate statements, that's for you to decide.
What you're trying to do is two updates; try splitting them out: ``` update user_stats set requestsRecd = (select count(*) from requests where requestedUserId = 1) where userId = 1; update user_stats set requestsSent = (select count(*) from requests where requesterUserId = 2) where userId = 2; ``` There may be a way using CASE statements to dynamically choose a column, but I'm not sure if that's possible.
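The single-statement variant (different columns updated on different rows in one pass) can be sketched in SQLite via Python's `sqlite3`. SQLite has no `IF()` function, so `CASE` stands in, and the literals `5` and `7` are placeholders for the original count subqueries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_stats (userId INTEGER PRIMARY KEY,
               requestsRecd INTEGER, requestsSent INTEGER)""")
conn.executemany("INSERT INTO user_stats VALUES (?, ?, ?)",
                 [(1, 0, 0), (2, 0, 0), (3, 0, 0)])

# Rows that should keep a column unchanged get its current value assigned
# back; only the targeted row/column pairs actually change.
conn.execute("""
    UPDATE user_stats
    SET requestsRecd = CASE WHEN userId = 1 THEN 5 ELSE requestsRecd END,
        requestsSent = CASE WHEN userId = 2 THEN 7 ELSE requestsSent END
    WHERE userId IN (1, 2)
""")
rows = conn.execute("SELECT * FROM user_stats ORDER BY userId").fetchall()
```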
Using WHERE clauses in an UPDATE statement
[ "", "mysql", "sql", "" ]
I am trying to find duplicate rows between two tables. This code works only if records are not duplicated: ``` (select [Name], [Age] from PersonA except select [Name], [Age] from PersonB) union all (select [Name], [Age] from PersonB except select [Name], [Age] from PersonA) ``` How can I find missing duplicate records, such as the second `Robert | 34` in the `PersonA` table in the example below? **PersonA**: ``` Name | Age ------------- John | 45 Robert | 34 Adam | 26 Robert | 34 ``` **PersonB**: ``` Name | Age ------------- John | 45 Robert | 34 Adam | 26 ```
You can use `UNION ALL` to concat both tables and `Group By` with `Having` clause to find duplicates: ``` SELECT x.Name, x.Age, Cnt = Count(*) FROM ( SELECT a.Name, a.Age FROM PersonA a UNION ALL SELECT b.Name, b.Age FROM PersonB b ) x GROUP BY x.Name, x.Age HAVING COUNT(*) > 1 ``` --- According to your clarification in the comment, you could use following query to find all name-age combinations in `PersonA` which are different in `PersonB`: ``` WITH A AS( SELECT a.Name, a.Age, cnt = count(*) FROM PersonA a GROUP BY a.Name, a.Age ), B AS( SELECT b.Name, b.Age, cnt = count(*) FROM PersonB b GROUP BY b.Name, b.Age ) SELECT a.Name, a.Age FROM A a LEFT OUTER JOIN B b ON a.Name = b.Name AND a.Age = b.Age WHERE a.cnt <> ISNULL(b.cnt, 0) ``` [**Demo**](http://sqlfiddle.com/#!6/06ca1/4/0) --- If you also want to find persons which are in `PersonB` but not in `PersonA` you should use a `FULL OUTER JOIN` as Gordon Linoff has commented: ``` WITH A AS( SELECT a.Name, a.Age, cnt = count(*) FROM PersonA a GROUP BY a.Name, a.Age ), B AS( SELECT b.Name, b.Age, cnt = count(*) FROM PersonB b GROUP BY b.Name, b.Age ) SELECT Name = ISNULL(a.Name, b.Name), Age = ISNULL(a.Age, b.Age) FROM A a FULL OUTER JOIN B b ON a.Name = b.Name AND a.Age = b.Age WHERE ISNULL(a.cnt, 0) <> ISNULL(b.cnt, 0) ``` [**Demo**](http://sqlfiddle.com/#!6/9dcfe/2/0)
I like Tim's answer but you need to check in both tables if the records are missing. He is only checking if the records are missing in table A. Try this to check if records are missing in either of the tables and how many times. ``` Select *, 'PersonB' MissingInTable, a.cnt - isnull(b.cnt,0) TimesMissing From ( Select *, count(1) cnt from PersonA group by Name, Age) A Left join (Select *, count(1) cnt from PersonB group by Name, Age) B On a.age=b.age and a.name=b.name where a.cnt>isnull(b.cnt,0) Union All Select *, 'PersonA' MissingInTable, b.cnt - isnull(a.cnt,0) TimesMissing From ( Select *, count(1) cnt from PersonA group by Name, Age) A Right join (Select *, count(1) cnt from PersonB group by Name, Age) B On a.age=b.age and a.name=b.name where b.cnt>isnull(a.cnt,0) ``` See demo here : <http://sqlfiddle.com/#!6/06020/13>
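The count-comparison idea behind both answers can be sketched in SQLite via Python's `sqlite3`, with `COALESCE` standing in for T-SQL's `ISNULL` and derived tables for the CTEs; the data is the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PersonA (Name TEXT, Age INTEGER);
    CREATE TABLE PersonB (Name TEXT, Age INTEGER);
    INSERT INTO PersonA VALUES ('John',45), ('Robert',34), ('Adam',26), ('Robert',34);
    INSERT INTO PersonB VALUES ('John',45), ('Robert',34), ('Adam',26);
""")

# Name/age combinations whose per-table counts differ: Robert appears
# twice in PersonA but only once in PersonB.
rows = conn.execute("""
    SELECT a.Name, a.Age
    FROM (SELECT Name, Age, COUNT(*) AS cnt FROM PersonA GROUP BY Name, Age) a
    LEFT JOIN (SELECT Name, Age, COUNT(*) AS cnt FROM PersonB GROUP BY Name, Age) b
           ON a.Name = b.Name AND a.Age = b.Age
    WHERE a.cnt <> COALESCE(b.cnt, 0)
""").fetchall()
```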
Finding duplicate differences between two tables in sql
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have this SQL query for SQL Server 2008 R2: ``` Declare @fechaDesde DateTime Declare @fechaHasta DateTime set @fechaDesde = '01/01/2014 00:00:00.000' set @fechaHasta = '31/12/2014 23:59:59.999' Select Cuenta, isnull(sum(SaldoDebe), 0) as SumaDebe, isnull(sum(SaldoHaber), 0) as SumaHaber, isnull(sum(SaldoDebe01), 0) as SumaDebe01, isnull(sum(SaldoDebe02), 0) as SumaDebe02, isnull(sum(SaldoDebe03), 0) as SumaDebe03, isnull(sum(SaldoDebe04), 0) as SumaDebe04, isnull(sum(SaldoDebe05), 0) as SumaDebe05, isnull(sum(SaldoDebe06), 0) as SumaDebe06, isnull(sum(SaldoDebe07), 0) as SumaDebe07, isnull(sum(SaldoDebe08), 0) as SumaDebe08, isnull(sum(SaldoDebe09), 0) as SumaDebe09, isnull(sum(SaldoDebe10), 0) as SumaDebe10, isnull(sum(SaldoDebe11), 0) as SumaDebe11, isnull(sum(SaldoDebe12), 0) as SumaDebe12, isnull(sum(SaldoHaber01), 0) as SumaHaber01, isnull(sum(SaldoHaber02), 0) as SumaHaber02, isnull(sum(SaldoHaber03), 0) as SumaHaber03, isnull(sum(SaldoHaber04), 0) as SumaHaber04, isnull(sum(SaldoHaber05), 0) as SumaHaber05, isnull(sum(SaldoHaber06), 0) as SumaHaber06, isnull(sum(SaldoHaber07), 0) as SumaHaber07, isnull(sum(SaldoHaber08), 0) as SumaHaber08, isnull(sum(SaldoHaber09), 0) as SumaHaber09, isnull(sum(SaldoHaber10), 0) as SumaHaber10, isnull(sum(SaldoHaber11), 0) as SumaHaber11, isnull(sum(SaldoHaber12), 0) as SumaHaber12 From( Select c.Código as Cuenta, case When d.Debe_Haber = 'D' then d.Importe end as SaldoDebe, case When d.Debe_Haber = 'H' then d.Importe end as SaldoHaber, case When d.Debe_Haber = 'D' and Month(fecha) = 1 then d.Importe end as SaldoDebe01, case When d.Debe_Haber = 'D' and Month(fecha) = 2 then d.Importe end as SaldoDebe02, case When d.Debe_Haber = 'D' and Month(fecha) = 3 then d.Importe end as SaldoDebe03, case When d.Debe_Haber = 'D' and Month(fecha) = 4 then d.Importe end as SaldoDebe04, case When d.Debe_Haber = 'D' and Month(fecha) = 5 then d.Importe end as SaldoDebe05, case When d.Debe_Haber = 'D' and Month(fecha) = 6 then d.Importe end 
as SaldoDebe06, case When d.Debe_Haber = 'D' and Month(fecha) = 7 then d.Importe end as SaldoDebe07, case When d.Debe_Haber = 'D' and Month(fecha) = 8 then d.Importe end as SaldoDebe08, case When d.Debe_Haber = 'D' and Month(fecha) = 9 then d.Importe end as SaldoDebe09, case When d.Debe_Haber = 'D' and Month(fecha) = 10 then d.Importe end as SaldoDebe10, case When d.Debe_Haber = 'D' and Month(fecha) = 11 then d.Importe end as SaldoDebe11, case When d.Debe_Haber = 'D' and Month(fecha) = 12 then d.Importe end as SaldoDebe12, case When d.Debe_Haber = 'H' and Month(fecha) = 1 then d.Importe end as SaldoHaber01, case When d.Debe_Haber = 'H' and Month(fecha) = 2 then d.Importe end as SaldoHaber02, case When d.Debe_Haber = 'H' and Month(fecha) = 3 then d.Importe end as SaldoHaber03, case When d.Debe_Haber = 'H' and Month(fecha) = 4 then d.Importe end as SaldoHaber04, case When d.Debe_Haber = 'H' and Month(fecha) = 5 then d.Importe end as SaldoHaber05, case When d.Debe_Haber = 'H' and Month(fecha) = 6 then d.Importe end as SaldoHaber06, case When d.Debe_Haber = 'H' and Month(fecha) = 7 then d.Importe end as SaldoHaber07, case When d.Debe_Haber = 'H' and Month(fecha) = 8 then d.Importe end as SaldoHaber08, case When d.Debe_Haber = 'H' and Month(fecha) = 9 then d.Importe end as SaldoHaber09, case When d.Debe_Haber = 'H' and Month(fecha) = 10 then d.Importe end as SaldoHaber10, case When d.Debe_Haber = 'H' and Month(fecha) = 11 then d.Importe end as SaldoHaber11, case When d.Debe_Haber = 'H' and Month(fecha) = 12 then d.Importe end as SaldoHaber12 From Cuentas as c inner join Diario as d on c.Código = d.Cuenta Where d.Fecha >= @fechaDesde and d.Fecha <= @fechaHasta ) as table1 group by Cuenta order by Cuenta ``` ... There are two tables: Cuentas and Diario. In the table Diario I save the movements of the accounts. And here are the tables: ## Cuentas It has two fields and 300000 rows: Código and Nombre.
It contains the accounts used in the table Diario. ## Diario It contains movements of money between accounts from the 'Cuentas' table. Its structure is: ``` [Apunte] [int] NOT NULL, --Identity [Fecha] [datetime] NOT NULL, [Concepto] [nvarchar](255) NULL, [Cuenta] [nvarchar](9) NULL, [Importe] [float] NULL, [Debe_Haber] [nvarchar](1) NULL, CONSTRAINT [PK_Diario] PRIMARY KEY CLUSTERED ( [Apunte] ASC ) Cuenta Concepto Importe Debe_Haber Fecha ---------------------------------------------------------------------------- 572000006 C/Ef.A2003313E01/01-572000006 123,52 H 01/02/14 433000077 C/Ef.A2003326E01/01-572000006 21,84 D 01/03/14 572000006 C/Ef.A2003326E01/01-572000006 21,84 H 01/03/14 430000754 C/Ef.A2003503E01/01-572000006 54,83 D 11/04/14 572000006 C/Ef.A2003503E01/01-572000006 54,83 H 12/05/14 430000807 C/Ef.F2030395E03/03-572000006 50,61 D 22/05/14 572000006 C/Ef.F2030395E03/03-572000006 50,61 H 23/08/14 430000497 C/Ef.F2034038E01/01-572000006 581,62 D 05/09/14 572000006 C/Ef.F2034038E01/01-572000006 581,62 H 06/09/14 ``` Fecha is a DateTime field. I have included the index: ``` CREATE NONCLUSTERED INDEX [<IX_Diario_Fecha>] ON [dbo].[Diario] ([Fecha]) INCLUDE ([Cuenta],[Importe],[Debe_Haber]) ``` My query takes 3/4 secs; I need to improve it to get results faster.
Try this updated query. I removed the multiple `isnull` calls and added `else 0` to each case to handle nulls. ``` DECLARE @fechaDesde DATETIME DECLARE @fechaHasta DATETIME SET @fechaDesde = '01/01/2014 00:00:00.000' SET @fechaHasta = '31/12/2014 23:59:59.999' Select Cuenta, sum(SaldoDebe)as SumaDebe, sum(SaldoHaber)as SumaHaber, sum(SaldoDebe01)as SumaDebe01, sum(SaldoDebe02)as SumaDebe02, sum(SaldoDebe03)as SumaDebe03, sum(SaldoDebe04)as SumaDebe04, sum(SaldoDebe05)as SumaDebe05, sum(SaldoDebe06)as SumaDebe06, sum(SaldoDebe07)as SumaDebe07, sum(SaldoDebe08)as SumaDebe08, sum(SaldoDebe09)as SumaDebe09, sum(SaldoDebe10)as SumaDebe10, sum(SaldoDebe11)as SumaDebe11, sum(SaldoDebe12)as SumaDebe12, sum(SaldoHaber01)as SumaHaber01, sum(SaldoHaber02)as SumaHaber02, sum(SaldoHaber03)as SumaHaber03, sum(SaldoHaber04)as SumaHaber04, sum(SaldoHaber05)as SumaHaber05, sum(SaldoHaber06)as SumaHaber06, sum(SaldoHaber07)as SumaHaber07, sum(SaldoHaber08)as SumaHaber08, sum(SaldoHaber09)as SumaHaber09, sum(SaldoHaber10)as SumaHaber10, sum(SaldoHaber11)as SumaHaber11, sum(SaldoHaber12)as SumaHaber12 From( Select c.Código as Cuenta, case When d.Debe_Haber = 'D' then d.Importe else 0 end as SaldoDebe, case When d.Debe_Haber = 'H' then d.Importe else 0 end as SaldoHaber, case When d.Debe_Haber = 'D' and Month(fecha) = 1 then d.Importe else 0 end as SaldoDebe01, case When d.Debe_Haber = 'D' and Month(fecha) = 2 then d.Importe else 0 end as SaldoDebe02, case When d.Debe_Haber = 'D' and Month(fecha) = 3 then d.Importe else 0 end as SaldoDebe03, case When d.Debe_Haber = 'D' and Month(fecha) = 4 then d.Importe else 0 end as SaldoDebe04, case When d.Debe_Haber = 'D' and Month(fecha) = 5 then d.Importe else 0 end as SaldoDebe05, case When d.Debe_Haber = 'D' and Month(fecha) = 6 then d.Importe else 0 end as SaldoDebe06, case When d.Debe_Haber = 'D' and Month(fecha) = 7 then d.Importe else 0 end as SaldoDebe07, case When d.Debe_Haber = 'D' and Month(fecha) = 8 then d.Importe else 0 end as SaldoDebe08, case 
When d.Debe_Haber = 'D' and Month(fecha) = 9 then d.Importe else 0 end as SaldoDebe09, case When d.Debe_Haber = 'D' and Month(fecha) = 10 then d.Importe else 0 end as SaldoDebe10, case When d.Debe_Haber = 'D' and Month(fecha) = 11 then d.Importe else 0 end as SaldoDebe11, case When d.Debe_Haber = 'D' and Month(fecha) = 12 then d.Importe else 0 end as SaldoDebe12, case When d.Debe_Haber = 'H' and Month(fecha) = 1 then d.Importe else 0 end as SaldoHaber01, case When d.Debe_Haber = 'H' and Month(fecha) = 2 then d.Importe else 0 end as SaldoHaber02, case When d.Debe_Haber = 'H' and Month(fecha) = 3 then d.Importe else 0 end as SaldoHaber03, case When d.Debe_Haber = 'H' and Month(fecha) = 4 then d.Importe else 0 end as SaldoHaber04, case When d.Debe_Haber = 'H' and Month(fecha) = 5 then d.Importe else 0 end as SaldoHaber05, case When d.Debe_Haber = 'H' and Month(fecha) = 6 then d.Importe else 0 end as SaldoHaber06, case When d.Debe_Haber = 'H' and Month(fecha) = 7 then d.Importe else 0 end as SaldoHaber07, case When d.Debe_Haber = 'H' and Month(fecha) = 8 then d.Importe else 0 end as SaldoHaber08, case When d.Debe_Haber = 'H' and Month(fecha) = 9 then d.Importe else 0 end as SaldoHaber09, case When d.Debe_Haber = 'H' and Month(fecha) = 10 then d.Importe else 0 end as SaldoHaber10, case When d.Debe_Haber = 'H' and Month(fecha) = 11 then d.Importe else 0 end as SaldoHaber11, case When d.Debe_Haber = 'H' and Month(fecha) = 12 then d.Importe else 0 end as SaldoHaber12 From Cuentas as c inner join (select distinct [Fecha], [Cuenta], isnull([Importe],0) as [Importe], [Debe_Haber] from Diario) as d on c.Código = d.Cuenta Where d.Fecha >= @fechaDesde and d.Fecha <= @fechaHasta ) as table1 group by Cuenta order by Cuenta CREATE NONCLUSTERED INDEX [<IX_Diario_Fecha>] ON [dbo].[Diario] ([Fecha]) INCLUDE ([Cuenta],[Importe],[Debe_Haber]) ```
The driver of performance isn't the `case` statements. It is the `join`, `where`, and `group by`. ``` From Cuentas c inner join Diario d on c.Código = d.Cuenta Where d.Pista in ('00') and d.Fecha >= @fechaDesde and d.Fecha <= @fechaHasta ``` I would recommend the following indexes: `diario(Pista, Fecha, Cuenta)` and `Cuentas(Codigo)`. You could also try reformulating the query using `pivot`. That may be marginally faster -- and the same indexes should work for that as well.
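The core pattern in the question's query, conditional aggregation (`SUM(CASE WHEN … THEN … ELSE 0 END)` per bucket), can be sketched on a cut-down version of `Diario` in SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Diario (Cuenta TEXT, Importe REAL, Debe_Haber TEXT)")
conn.executemany("INSERT INTO Diario VALUES (?, ?, ?)", [
    ("572000006", 123.52, "H"),
    ("433000077", 21.84, "D"),
    ("572000006", 21.84, "H"),
])

# One pass over the table, one output column per bucket.
rows = conn.execute("""
    SELECT Cuenta,
           SUM(CASE WHEN Debe_Haber = 'D' THEN Importe ELSE 0 END) AS SumaDebe,
           SUM(CASE WHEN Debe_Haber = 'H' THEN Importe ELSE 0 END) AS SumaHaber
    FROM Diario
    GROUP BY Cuenta
    ORDER BY Cuenta
""").fetchall()
```

The per-month columns in the original are the same pattern with an extra month condition in each `CASE`; the performance cost lives in the scan, join, and grouping, not in the `CASE` expressions.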
How to improve this SQL Server query with multiple 'CASE'?
[ "", "sql", "sql-server", "case", "" ]
My query, which raises the error in the title: ``` UPDATE assets SET assets.Amount = (SELECT SUM(assets.Amount) - NEW.Amount FROM assets WHERE NEW.UserId = assets.UserId and NEW.AccountId = assets.AccountId) AS TmpAssets WHERE NEW.UserId = assets.UserId and NEW.AccountId = assets.AccountId ```
MySQL does not allow you to use the table being updated in a subquery in an `update` or `delete`. It is easy enough to get around this. Here is one approach using `update`/`join`: ``` UPDATE assets a JOIN (select sum(a.Amount) as sumamount, a.UserId, a.AccountId from assets a where NEW.UserId = a.UserId and NEW.AccountId = a.AccountId group by a.UserId, a.AccountId ) anew on NEW.UserId = a.UserId and NEW.AccountId = a.AccountId SET a.Amount = anew.sumamount - new.Amount; ```
Try this : ``` UPDATE assets SET assets.Amount = (select temp.val from (SELECT (SUM(assets.Amount) - NEW.Amount) val from assets WHERE NEW.UserId = assets.UserId and NEW.AccountId = assets.AccountId) temp) WHERE NEW.UserId = assets.UserId and NEW.AccountId = assets.AccountId ; ```
You can't specify target table 'assets' for update in FROM clause
[ "", "mysql", "sql", "" ]
I want to add an extra column, where the max value of each group (ID) will appear. Here is how the table looks: ``` select ID, VALUE from mytable ```

> ID VALUE
> 1 4
> 1 1
> 1 7
> 2 2
> 2 5
> 3 7
> 3 3

Here is the result I want to get:

> ID VALUE max\_values
> 1 4 7
> 1 1 7
> 1 7 7
> 2 2 5
> 2 5 5
> 3 7 7
> 3 3 7

Thank you for your help in advance!
Your previous questions indicate that you are using SQL Server, in which case you can use [window functions](http://msdn.microsoft.com/en-GB/library/ms189461.aspx): ``` SELECT ID, Value, MaxValue = MAX(Value) OVER(PARTITION BY ID) FROM mytable; ``` Based on your comment on another answer about first summing value, you may need to use a subquery to actually get this: ``` SELECT ID, Date, Value, MaxValue = MAX(Value) OVER(PARTITION BY ID) FROM ( SELECT ID, Date, Value = SUM(Value) FROM mytable GROUP BY ID, Date ) AS t; ```
There is no need to use GROUP BY in subselect. ``` select ID, VALUE, (select MAX(VALUE) from mytable where ID = t.ID) as MaxValue from mytable t ```
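The correlated-subquery approach can be sketched in SQLite via Python's `sqlite3`, using the question's sample data; each row looks up the maximum of its own group:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (ID INTEGER, VALUE INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, 4), (1, 1), (1, 7), (2, 2), (2, 5), (3, 7), (3, 3)])

# The subquery is re-evaluated per row, correlated on t.ID.
rows = conn.execute("""
    SELECT ID, VALUE,
           (SELECT MAX(VALUE) FROM mytable WHERE ID = t.ID) AS max_values
    FROM mytable t
    ORDER BY ID, VALUE
""").fetchall()
```

On SQL Server (and SQLite 3.25+) the window-function form `MAX(VALUE) OVER (PARTITION BY ID)` produces the same result in a single pass.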
Add a column with the max value of the group
[ "", "sql", "" ]
I would like to execute a SELECT, where it selects a column value only if that column exists in the table, else displays null. This is what I'm currently doing: ``` SELECT TOP 10 CASE WHEN EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName') THEN columnName ELSE NULL END AS columnName ``` I also tried this: ``` SELECT TOP 10 CASE WHEN (SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName') >0 THEN columnName ELSE NULL END AS columnName ``` Both of them work well if the column is present in the table. But when the column is not present, it gives me the error: *Invalid column name 'columnName'*
You can write as: ``` SELECT CASE WHEN EXISTS ( SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName' ) THEN ( SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName' ) ELSE NULL END AS columnName ``` `DEMO` Edit: If you are looking to select top 10 values from a table's column if that column exists then you need to write a dynamic query as: ``` SELECT @columnVariable = CASE WHEN EXISTS ( SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName' ) THEN ( SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA ='test' and TABLE_NAME='tableName' and COLUMN_NAME='columnName' ) ELSE NULL END /* Build the SQL string one time.*/ SET @SQLString = N'SELECT TOP 10 ' + @columnVariable+ ' FROM test.tableName '; EXECUTE sp_executesql @SQLString ``` `DEMO2`
``` SELECT * FROM sys.columns WHERE [name] = N'columnName' AND [object_id] = OBJECT_ID(N'tableName') ``` Add this inside your case statement. Please note that this code only works on newer versions of SQL Server (like SQL Server 2008).
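A portable sketch of the accepted idea, check the catalog first, then build the query dynamically, using SQLite via Python's `sqlite3`; `PRAGMA table_info` stands in for `INFORMATION_SCHEMA.COLUMNS`, and the table/column names come from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableName (id INTEGER, columnName TEXT)")
conn.execute("INSERT INTO tableName VALUES (1, 'x')")

def select_if_exists(col):
    # PRAGMA table_info plays the role of INFORMATION_SCHEMA.COLUMNS here.
    existing = [row[1] for row in conn.execute("PRAGMA table_info(tableName)")]
    if col in existing:
        # Interpolation is safe only because col was just validated
        # against the catalog; identifiers cannot be bound as parameters.
        return conn.execute(f'SELECT "{col}" FROM tableName').fetchall()
    return [(None,)]  # the ELSE NULL branch of the CASE
```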
SQL Server How to SELECT a column only if it exists in the table
[ "", "sql", "sql-server", "select", "case", "union", "" ]
I want to create the following table: ``` create table product (id bigint not null, product_type varchar(50), product_name varchar(100), available_from TIMESTAMP, available_to TIMESTAMP, primary key (id)); ``` My table's key is the 'id'. When inserting into the table, I want the product\_type to be unique. How can I do that without making product\_type the key of my table?
``` ALTER TABLE product ADD CONSTRAINT unique_product_type UNIQUE(product_type); ```
The function NEWID() generates a new unique id. Maybe you can make a trigger which inserts this on every insert, or make the default value NEWID(). Well, the second option seems a bit better :P ``` CREATE TABLE Test123 ( ID NVARCHAR(200) DEFAULT (NEWID()), name NVARCHAR(100) ) INSERT INTO Test123 (name) VALUES ('test') SELECT * FROM Test123 DROP TABLE Test123 ```
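The `UNIQUE` constraint from the first answer can be sketched in SQLite via Python's `sqlite3`: a duplicate `product_type` is rejected while `id` remains the primary key. Sample values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        id INTEGER PRIMARY KEY,          -- still the table's key
        product_type TEXT UNIQUE,        -- unique without being the key
        product_name TEXT
    )
""")
conn.execute("INSERT INTO product VALUES (1, 'book', 'SQL Primer')")

# A second row with the same product_type violates the UNIQUE constraint.
try:
    conn.execute("INSERT INTO product VALUES (2, 'book', 'Another Title')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```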
product set a column unique without being the primary key
[ "", "sql", "db2", "" ]
I have an Exam Table and a query to get a list of exams: ``` CREATE TABLE [dbo].[Exam] ( [ExamId] INT IDENTITY (1, 1) NOT NULL, [Title] NVARCHAR (50) NULL, CONSTRAINT [PK_Exam] PRIMARY KEY CLUSTERED ([ExamId] ASC) ); SELECT Exam.ExamId AS ExamId, Exam.Title AS Name FROM Exam ``` What I really need is for this query to be modified so that it only shows exams where there is also a test that has a TestStatusId = 3. I know I can just join these tables with a normal join but then I would get many exam rows for each test. All I need is to see the Exam.ExamId and Exam.Title of an exam with one or more tests with TestStatusID = 3. ``` CREATE TABLE [dbo].[AdminTest] ( [AdminTestId] INT IDENTITY (1, 1) NOT NULL, [Title] NVARCHAR (100) NOT NULL, [TestStatusId] INT NOT NULL, [ExamId] INT NOT NULL, CONSTRAINT [PK_AdminTest] PRIMARY KEY CLUSTERED ([AdminTestId] ASC)) ``` Can someone show me how I could join these two tables with a SELECT to do what I need?
There are a couple of ways to do this, I prefer to use `EXISTS`: ``` SELECT E.ExamId AS ExamId, E.Title AS Name FROM Exam E WHERE EXISTS ( SELECT 1 FROM AdminTest A WHERE A.ExamId = E.ExamId AND A.TestStatusID = 3) ``` --- Alternatively, you could use `IN`: ``` SELECT ExamId AS ExamId, Title AS Name FROM Exam WHERE ExamId IN ( SELECT ExamId FROM AdminTest WHERE TestStatusId = 3 ) ```
You can use `EXISTS` as the other answer shows, you can also add `TestStatusID = 3` to your `JOIN`: ``` SELECT e.ExamId AS ExamId, e.Title AS Name FROM Exam e JOIN AdminTest a ON e.ExamID = a.ExamID AND a.TestStatusId = 3 ``` Or filter in a `WHERE` clause: ``` SELECT e.ExamId AS ExamId, e.Title AS Name FROM Exam e JOIN AdminTest a ON e.ExamID = a.ExamID WHERE a.TestStatusId = 3 ```
How can I join a table to another to do a select conditional upon a value in a column in the second table?
[ "", "sql", "sql-server", "" ]
In order to implement password complexity I need a function that can remove repeating characters (in order to generate passwords that will meet the password complexity requirements). So a string like weeeeee1 will not be allowed since the e is repeated. I need a function that will instead return we1, where the repeating characters have been removed. Oracle PL/SQL or straight sql must be used. I found [Oracle SQL -- remove partial duplicate from string](https://stackoverflow.com/questions/18473719/oracle-sql-remove-partial-duplicate-from-string), but this does not work with my weeeeee1 test case, since it only replaces with one iteration, therefore returning weee1. I'm not trying to test for no repeating characters. I'm trying to change a repeating character string into a non-repeating character string.
I think you can achieve your goal using regexp\_replace. Or did I miss something? ``` select regexp_replace(:password, '(.)\1+','\1') from dual; ``` --- Here is an example: ``` with t as (select 'weeee1' password from dual union select 'wwwwweeeee11111' from dual union select 'we1' from dual) select regexp_replace(t.password, '(.)\1+','\1') from t; ``` Producing: ``` REGEXP_REPLACE(T.PASSWORD,'(.)\1+','\1') we1 we1 we1 ```
Try this: ``` select regexp_replace('weeeeee1', '(.)\1+', '\1') val from dual VAL --- we1 ```
Oracle remove repeating characters
[ "", "sql", "oracle", "plsql", "" ]
I have a table with a column for customer names, a column for purchase amount, and a column for the date of the purchase. Is there an easy way I can find how much first time customers spent on each day? So I have ``` Name | Purchase Amount | Date Joe 10 9/1/2014 Tom 27 9/1/2014 Dave 36 9/1/2014 Tom 7 9/2/2014 Diane 10 9/3/2014 Larry 12 9/3/2014 Dave 14 9/5/2014 Jerry 16 9/6/2014 ``` And I would like something like ``` Date | Total first Time Purchase 9/1/2014 73 9/3/2014 22 9/6/2014 16 ``` Can anyone help me out with this?
The following is standard SQL and works on nearly all DBMSs ``` select date, sum(purchaseamount) as total_first_time_purchase from ( select date, purchaseamount, row_number() over (partition by name order by date) as rn from the_table ) t where rn = 1 group by date; ``` The derived table (the inner select) selects all "first time" purchases and the outer query aggregates them by date.
The two key concepts here are `aggregates` and `sub-queries`, and the details of which dbms you're using may change the exact implementation, but the basic concept is the same. 1. For each name, determine their first date 2. Using the results of 1, find each person's first day purchase amount 3. Using the results of 2, sum the amounts for each date In SQL Server, it could look like this: ``` select Date, [totalFirstTimePurchases] = sum(PurchaseAmount) from ( select t.Date, t.PurchaseAmount, t.Name from table1 t join ( select Name, [firstDate] = min(Date) from table1 group by Name ) f on t.Name=f.Name and t.Date=f.firstDate ) ftp group by Date ```
Finding the first occurrence of an element in a SQL database
[ "", "sql", "" ]
I have some serial keys to find in a SQL database, such as "A-B-C", "D-E-F", "G-H-I", "J-K-L", and they are stored in tblTemp using the ntext data type. These keys may be stored in three columns, colA, colB and colC (sometimes stored in one column and the rest are null). Sometimes, two serial keys can be found in one column (e.g. A-B-C;D-E-F) separated by ";". So I wrote the following SQL query. ``` Declare @sa TABLE(var1 nvarchar(Max)); Insert INTO @sa(var1) VALUES (N'A-B-C'); Insert INTO @sa(var1) VALUES (N'D-E-F'); Insert INTO @sa(var1) VALUES (N'G-H-I'); Insert INTO @sa(var1) VALUES (N'J-K-I'); SELECT * FROM tblTemp WHERE colA IN (SELECT var1 FROM @sa); ``` I got the following error message. ``` The data types ntext and nvarchar(max) are incompatible in the equal to operator. ``` I still need to search colB and colC as well. How should I write a query for this kind of situation? All suggestions are welcome.
You have to convert/cast your search term as an appropriate data type, in this case `text`. Try this: ``` Declare @sa TABLE(var1 nvarchar(Max)); Insert INTO @sa(var1) VALUES (N'A-B-C'); Insert INTO @sa(var1) VALUES (N'D-E-F'); Insert INTO @sa(var1) VALUES (N'G-H-I'); Insert INTO @sa(var1) VALUES (N'J-K-I'); SELECT * FROM tblTemp t WHERE EXISTS (SELECT 1 FROM @sa s WHERE t.colA like cast('%'+s.var1+'%' as text) OR t.colB like cast('%'+s.var1+'%' as text) OR t.colC like cast('%'+s.var1+'%' as text) ); ```
[CAST/CONVERT](http://msdn.microsoft.com/en-us/library/ms187928%28v=sql.90%29.aspx) (msdn.microsoft.com) your var1 to NTEXT type in your query so that the types are compatible. ``` SELECT * FROM tblTemp WHERE colA IN ( SELECT CAST(var1 AS NTEXT) FROM @sa ); ```
SQL Query Finding From Table DataType Declaration
[ "", "sql", "sql-server", "t-sql", "" ]
I have two tables, Staff and Cust\_Order. I want to add the column 'First name' from the staff table while still performing the below code: ``` Select Staff_No Count(*) AS "Number Of Orders" From Cust_Order Group by Staff_No; ``` Thanks
Join the orders and staff tables, then group by will need to include the additional column(s) ``` SELECT co.Staff_No , s.First_name , COUNT(*) AS "Number Of Orders" FROM Cust_Order co INNER JOIN Staff s on co.Staff_No = s.Staff_No GROUP BY co.Staff_No , s.First_name ; ```
``` SELECT DISTINCT STAFF_NO, FIRST_NAME, COUNT (*) OVER (PARTITION BY STAFF_NO) AS "Number Of Orders" FROM CUST_ORDER; ``` Used distinct as there might be duplicate results in case first\_name is not unique.
Select count from one table, and columns from another ORACLE
[ "", "sql", "oracle", "count", "" ]
I have a stored procedure in SQL Server 2008 that is used to fetch data from a table. **My input parameters are:** ``` @category nvarchar(50) = '', @departmentID int ``` **My Where clause is:** ``` WHERE A.departmentID = @departmentID AND A.category = @category ``` Is there any way I can apply a Case statement (or something similar) to this Where clause to say that it should only check for the category match if @category is not '' and **otherwise select all categories**? The only thing I could think of here is to use the following. Technically this works but then I can't check for exact category matches which is required here: ``` WHERE A.departmentID = @departmentID AND A.category LIKE '%'+@category+'%' ```
You can modify your `WHERE` clause as follows: ``` WHERE A.departmentID = @departmentID AND (@category = '' or A.category = @category) ```
``` WHERE A.departmentID = @departmentID AND ((@category = '') or (@category <> '' AND A.category = @category)) ``` This should give you what you want as a result set, it basically doing a check for either a blank category which should then return all results for the department id or just the category specified in the parameters.
SQL Server: How to handle two different cases in Where clause
[ "", "sql", "sql-server", "case", "where-clause", "" ]
Please reference code below... ``` Private Sub Save_Click() On Error GoTo err_I9_menu Dim dba As Database Dim dba2 As Database Dim rst As Recordset Dim rst1 As Recordset Dim rst2 As Recordset Dim rst3 As Recordset Dim SQL As String Dim dateandtime As String Dim FileSuffix As String Dim folder As String Dim strpathname As String Dim X As Integer X = InStrRev(Me!ListContents, "\") Call myprocess(True) folder = DLookup("[Folder]", "Locaton", "[LOC_ID] = '" & Forms!frmUtility![Site].Value & "'") strpathname = "\\Reman\PlantReports\" & folder & "\HR\Paperless\" dateandtime = getdatetime() If Nz(ListContents, "") <> "" Then Set dba = CurrentDb FileSuffix = Mid(Me!ListContents, InStrRev(Me!ListContents, "."), 4) SQL = "SELECT Extension FROM tbl_Forms WHERE Type = 'I-9'" SQL = SQL & " AND Action = 'Submit'" Set rst1 = dba.OpenRecordset(SQL, dbOpenDynaset, dbSeeChanges) If Not rst1.EOF Then newname = Me!DivisionNumber & "-" & Right(Me!SSN, 4) & "-" & LastName & dateandtime & rst1.Fields("Extension") & FileSuffix Else newname = Me!DivisionNumber & "-" & Right(Me!SSN, 4) & "-" & LastName & dateandtime & FileSuffix End If Set moveit = CreateObject("Scripting.FileSystemObject") copyto = strpathname & newname moveit.MoveFile Me.ListContents, copyto Set rst = Nothing Set dba = Nothing End If If Nz(ListContentsHQ, "") <> "" Then Set dba2 = CurrentDb FileSuffix = Mid(Me.ListContentsHQ, InStrRev(Me.ListContentsHQ, "."), 4) SQL = "SELECT Extension FROM tbl_Forms WHERE Type = 'HealthQuestionnaire'" SQL = SQL & " AND Action = 'Submit'" Set rst3 = dba2.OpenRecordset(SQL, dbOpenDynaset, dbSeeChanges) If Not rst3.EOF Then newname = Me!DivisionNumber & "-" & Right(Me!SSN, 4) & "-" & LastName & dateandtime & rst3.Fields("Extension") & FileSuffix Else newname = Me!DivisionNumber & "-" & Right(Me!SSN, 4) & "-" & LastName & dateandtime & FileSuffix End If Set moveit = CreateObject("Scripting.FileSystemObject") copyto = strpathname & newname moveit.MoveFile Me.ListContentsHQ, copyto Set rst2 = Nothing Set dba2 = Nothing End If Set dba = CurrentDb Set rst = dba.OpenRecordset("dbo_tbl_EmploymentLog", dbOpenDynaset, dbSeeChanges) rst.AddNew rst.Fields("TransactionDate") = Date rst.Fields("EmployeeName") = Me.LastName rst.Fields("EmployeeSSN") = Me.SSN rst.Fields("EmployeeDOB") = Me.EmployeeDOB rst.Fields("I9Pathname") = strpathname rst.Fields("I9FileSent") = newname rst.Fields("Site") = DLookup("Folder", "Locaton", "Loc_ID='" & Forms!frmUtility!Site & "'") rst.Fields("UserID") = Forms!frmUtility!user_id rst.Fields("HqPathname") = strpathname rst.Fields("HqFileSent") = newname2 rst.Update Set dba = Nothing Set rst = Nothing exit_I9_menu: Call myprocess(False) DivisionNumber = "" LastName = "" SSN = "" ListContents = "" ListContentsHQ = "" Exit Sub err_I9_menu: Call myprocess(False) MsgBox Err.Number & " " & Err.Description 'MsgBox "The program has encountered an error and the data was NOT saved." Exit Sub End Sub ``` I keep getting an ODBC call error. The permissions are all correct and the previous piece of code worked where there were separate tables for the I9 and Hq logs. The routine is called when someone submits a set of files with specific information.
I solved this by recreating the table in SQL instead of up-sizing it out of Access.
Just a guess here, but I'm thinking you've got a typo that's resulting in assigning a Null to a required field. Change "Locaton": ``` rst.Fields("Site") = DLookup("Folder", "Locaton", "Loc_ID='" & Forms!frmUtility!Site & "'") ``` To "Location": ``` rst.Fields("Site") = DLookup("Folder", "Location", "Loc_ID='" & Forms!frmUtility!Site & "'") ``` --- Some general advice for troubleshooting 3146 ODBC Errors: DAO has an [Errors collection](http://sourcedaddy.com/ms-access/the-errors-collection.html) which usually contains more specific information for ODBC errors. The following is a quick and dirty way to see what's in there. I have a more refined version of this in a standard error handling module that I include in all of my programs: ``` Dim i As Long For i = 0 To Errors.Count - 1 Debug.Print Errors(i).Number, Errors(i).Description Next i ```
3146 ODBC Call Failed - Access 2010
[ "", "sql", "ms-access", "odbc", "subroutine", "" ]
I am having trouble determining the best way to compare dates in SQL based on month and year only. We do calculations based on dates and since billing occurs on a monthly basis the date of the month has caused more hindrance. For example ``` DECLARE @date1 DATETIME = CAST('6/15/2014' AS DATETIME), @date2 DATETIME = CAST('6/14/2014' AS DATETIME) SELECT * FROM tableName WHERE @date1 <= @date2 ``` The above example would not return any rows since @date1 is greater than @date2. So I would like to find a way to take the day out of the equation. Similarly, the following situation gives me grief for same reason. ``` DECLARE @date1 DATETIME = CAST('6/14/2014' AS DATETIME), @date2 DATETIME = CAST('6/15/2014' AS DATETIME), @date3 DATETIME = CAST('7/1/2014' AS DATETIME) SELECT * FROM tableName WHERE @date2 BETWEEN @date1 AND @date3 ``` I've done inline conversions of the dates to derive the first day and last day of the month for the date specified. ``` SELECT * FROM tableName WHERE date2 BETWEEN DATEADD(month, DATEDIFF(month, 0, date1), 0) -- The first day of the month for date1 AND DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, date2) + 1, 0)) -- The lastday of the month for date3 ``` There has to be an easier way to do this. Any suggestions?
To handle inequalities, such as between, I like to convert date/times to a YYYYMM representation, either as a string or an integer. For this example: ``` DECLARE @date1 DATETIME = CAST('6/14/2014' AS DATETIME), @date2 DATETIME = CAST('6/15/2014' AS DATETIME), @date3 DATETIME = CAST('7/1/2014' AS DATETIME); SELECT * FROM tableName WHERE @date2 BETWEEN @date1 AND @date3; ``` I would write the query as: ``` SELECT * FROM tableName WHERE year(@date2) * 100 + month(@date2) BETWEEN year(@date1) * 100 + month(@date1) AND year(@date3) * 100 + month(@date3); ```
You can filter the month and year of a given date to the current date like so: ``` SELECT * FROM tableName WHERE month(date2) = month(getdate()) and year(date2) = year(getdate()) ``` Just replace the `GETDATE()` method with your desired date.
SQL Server date comparisons based on month and year only
[ "", "sql", "sql-server", "sql-server-2008", "date", "" ]
Having a table that stores customer names, I need to duplicate those records and add a character at the end at the same time. For example I have: ``` CNAME customer1 customer2 customer3 ``` I would like to return: ``` CNAME customer1 customer1* customer1# customer2 customer2* customer2# customer3 customer3* customer3# ``` Could someone please help me? Thanks
You can union your results together: ``` select cname from table1 union all select cname + '*' from table1 union all select cname + '#' from table1 order by cname ``` You can see it working in [this fiddle](http://sqlfiddle.com/#!3/f540e/4)
I would use a number-table and this query: ``` with c as ( select c.*, rn=row_number()over(order by cname) from customers c ) select cname = cname + case n % 3 when 1 then '' when 2 then '*' when 0 then '#' END from numbers n join c on n between 1 and 3 order by rn ``` Result: ``` customer1 customer1* customer1# customer2 customer2* customer2# customer3 customer3* customer3# ``` How to create a number table: <http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1>
How to duplicate and add more characters to a string row in sql?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table in SQL Server: ``` T_Id Supplimentary_keywords ------------------------------------------------------------------------------------------- 1 Animal, Animals, 1, One, Single,live,living Organism 2 Animals, Animal, Two, 2,live,living Organism 3 Animals, Animal, Three, 3,live,living Organism 4 Animals, Animal, Four, 4,live,living Organism 5 Animals, Animal, 5, Five,live,living Organism 6 Group Of Animals, Small Group, Group, Groups, Small Size Group 7 Group Of Animals, Animals, Animal, Group, Groups, Grouping,live,living Organism 8 Group Of Animals, Animals, Animal, Group, Groups, Grouping,large Group,live 9 Head, Heads 10 Neck, Necks ``` Now I want to select data from `Supplimentary_keywords` with `distinct` like this: ``` Supplimentary_keywords ---------------------- Animal Animals 1 One Single live living Organism Two 2 live Three 3 ....... ``` I am using the following code ``` SELECT DISTINCT Split.a.value('.', 'VARCHAR(100)') Kwd FROM (SELECT T_Id, CAST('<M>' + Replace(Supplimentary_keywords, ',', '</M><M>') + '</M>' AS XML) AS Data from KWD_Theaurus_tbl) AS A CROSS APPLY data.nodes ('/M') AS Split(a) ``` and getting error > Msg 9421, Level 16, State 1, Line 1 > XML parsing: line 1, character 11, illegal name character Please help me how to achieve this or correct me if I am doing wrong.
Try this: ``` SELECT distinct Split.a.value('.', 'VARCHAR(100)') data FROM (select id,Cast ('<M>' + replace(Replace(suppl, ',', '</M><M>'),'&','&amp;') + '</M>' AS XML) AS Data from #temp) AS A CROSS APPLY Data.nodes ('/M') AS Split(a) ``` If you have any other special character, replace it accordingly. Invalid special characters and their substitutes in XML: ``` & - &amp; < - &lt; > - &gt; " - &quot; ' - &#39; ```
``` create table #temp (id int,suppl varchar(1000)) insert into #temp select 1 , 'Animal, Animals, 1, One, Single,live,living Organism' union all select 2 , 'Animals, Animal, Two, 2,live,living Organism' union all select 3 , 'Animals, Animal, Three, 3,live,living Organism' union all select 4 , 'Animals, Animal, Four, 4,live,living Organism' union all select 5 , 'Animals, Animal, 5, Five,live,living Organism' union all select 6 , 'Group Of Animals, Small Group, Group, Groups, Small Size Group' union all select 7 , 'Group Of Animals, Animals, Animal, Group, Groups, Grouping,live,living Organism' union all select 8 , 'Group Of Animals, Animals, Animal, Group, Groups, Grouping,large Group,live' union all select 9 , 'Head, Heads' union all select 10 , 'Neck, Necks' SELECT distinct Split.a.value('.', 'VARCHAR(100)') data FROM (select id,Cast ('<M>' + Replace(suppl, ',', '</M><M>') + '</M>' AS XML) AS Data from #temp) AS A CROSS APPLY Data.nodes ('/M') AS Split(a) ```
how to select rows data as single column in sql server
[ "", "sql", "sql-server", "t-sql", "" ]
I've got 512 rows to insert to a database. I'm wondering if there is any advantage to submitting multiple inserts over one large insert. For example 1x 512 row insert -- ``` INSERT INTO mydb.mytable (id, phonenumber) VALUES (1, 555-555-5555) , (2, 555-555-5555) , (3, 555-555-5555), //repeat to id = 512 ``` VS 4x 128 row insert ``` INSERT INTO mydb.mytable (id, phonenumber) VALUES (1, 555-555-5555) , (2, 555-555-5555) , (3, 555-555-5555), //repeat to id = 128 INSERT INTO mydb.mytable (id, phonenumber) VALUES (129, 555-555-5555) , (130, 555-555-5555) , (131, 555-555-5555), //repeat to id = 256, then next 128, then next 128. ``` VS 512x 1 row insert ``` INSERT INTO mydb.mytable (id, phonenumber) VALUES (1, 555-555-5555) INSERT INTO mydb.mytable (id, phonenumber) VALUES (2, 555-555-5555) // repeat until id = 512 ``` And a question about testing this, if I were to set this up as a test - Say I do the first approach, one large 512 row insert. That takes 0.5 seconds. Then the next time it takes 0.3 seconds -- does this caching that I think will happen, like it does when a programming language performs the same action twice, happen in sql? (Thusly would it be neccesary to get a good average of test results for each approach?) What other **considerations should I take when doing extremely large inserts (say, half a million)**? Is it true that if the packet sent to the database is too large it will never recieve or execute the query -- Will I even run into trouble ever for making a very large insert?
My answer is assuming SQL Server; I suspect what I say would be applicable to other SQL engines. Much of the overhead of any SQL Server query is development of an execution plan. If you do this as a single insert, it will have to develop the execution plan one (1) time; if you do 512 seperate inserts, it will have to develop the execution plan 512 times. So it is considerably less overhead to do a single insert. I wouldn't be surprised to find that the engine finds other efficiencies that you either wouldn't know to do, wouldn't think to do, or wouldn't be able to do. But if it was only the execution plan savings, it's still worth doing in a single insert.
The answer is likely to vary based on which RDBMS product you're using. One can't make a fine-grained optimization plan in an implementation-agnostic way. But you can make broad observations, for example it's better to [remove loop-invariant code](https://en.wikipedia.org/wiki/Loop-invariant_code_motion). In the case of a loop of many INSERTs to the same table, you can make an educated guess that the loop invariants are things like SQL parsing and query execution planning. Some optimizer implementations may cache the query execution plan, some other implementations don't. So we can assume that a single INSERT of 512 rows is likely to be more efficient. Again, your mileage may vary in a given implementation. As for loading millions of rows, you should really consider bulk-loading tools. Most RDBMS brands have their own special tools or non-standard SQL statements to provide efficient bulk-loading, and this can be faster than any INSERT-based solution **by an order of magnitude.** * [The Data Loading Performance Guide](http://technet.microsoft.com/en-us/library/dd425070(v=sql.100).aspx) (Microsoft SQL Server) * [Oracle Bulk Insert tips](http://www.dba-oracle.com/t_bulk_insert.htm) (Oracle) * [How to load large files safely into InnoDB with LOAD DATA INFILE](https://www.percona.com/blog/2008/07/03/how-to-load-large-files-safely-into-innodb-with-load-data-infile/) (MySQL) * [Populating a database](http://www.postgresql.org/docs/current/interactive/populate.html) (PostgreSQL) So you have just wasted your time worrying about whether a single INSERT is a little bit more efficient than multiple INSERTs.
Which would be fastest, 1x insert 512 rows, 4x insert 128 rows, or 512x insert 1 rows
[ "", "sql", "database", "performance", "language-agnostic", "query-performance", "" ]
I have this t-sql script below that I need to modify to ensure that there are no duplicate rows from table1. I would like to do that by grabbing the rows that have the most current date in the [ImportedDate] column. Could I get some guidance? Thanks. UPDATE FOR CLARIFICATION: I need to select all rows in table1 that match in table2. However there are multiple instances of the record in each table. So I also need to ensure that I am only pulling 1 record for each [MIN] number (the latest version) from table1. So for instance the current result set is 20008 records and I should get somewhere less than that by weeding out the duplicates. I'm thinking it needs an inner select. ``` SELECT mr.Id ,mr.F2FResolved ,mr.F2FIgnore ,mr.[F2FIgnore Always] ,REPLACE(LTRIM(RTRIM([MIN])), '-', '') AS [MIN] ,LTRIM(RTRIM(mr.BorrowerLastName)) AS BorrowerLastName ,LTRIM(RTRIM(mr.BorrowerFirstName)) AS BorrowerFirstName ,LTRIM(RTRIM(mr.BorrowerSSN)) AS BorrowerSSN ,LTRIM(RTRIM(mr.PropertyStreet)) AS PropertyStreet ,LTRIM(RTRIM(mr.PropertyZip)) AS PropertyZip ,LTRIM(RTRIM(mr.NoteAmount)) AS NoteAmount ,LTRIM(RTRIM(mr.LienType)) AS LienType FROM table1 mr INNER JOIN table2 d ON LTRIM(RTRIM(MERSMin)) = REPLACE(LTRIM(RTRIM(mr.[MIN])), '-', '') WHERE ( ( ( mr.[F2FResolved] IS NULL OR mr.[F2FResolved] = 0 ) AND ( d.[F2FResolved] IS NULL OR d.[F2FResolved] = 0 ) ) OR ( ( mr.[F2FIgnore Always] IS NULL OR mr.[F2FIgnore Always] = 0 ) AND d.[F2FIgnore Always] IS NULL OR d.[F2FIgnore Always] = 0 ) OR ( ( mr.[F2FIgnore] IS NULL OR mr.[F2FIgnore] = 0 ) AND d.[F2FIgnore] IS NULL OR d.[F2FIgnore] = 0 ) ) AND ( ( mr.[F2FProcessed] IS NULL OR mr.[F2FProcessed] = 0 ) AND ( d.[F2FProcessed] IS NULL OR d.[F2FProcessed] = 0 ) ) ``` In the image you will see that Id = 65759 and 52413 are for the same individual. I would need to only retrieve the 65759 record as it would have the most recent imported date. ![enter image description here](https://i.stack.imgur.com/i1u1D.png)
Updated with the Answer for anyone looking. FYI: I know there is extraneous code in the inner select but this statement is dynamically generated from a c# application where the user is able to select the columns they want to view. So the column selection code is only be generated once and added to a string.Format statement twice to output the formatted sql statement. ``` SELECT DISTINCT mr.Id ,mr.F2FResolved ,mr.F2FIgnore ,mr.[F2FIgnore Always] ,REPLACE(LTRIM(RTRIM([MIN])), '-', '') AS [MIN] ,LTRIM(RTRIM(mr.BorrowerLastName)) AS BorrowerLastName ,LTRIM(RTRIM(mr.BorrowerFirstName)) AS BorrowerFirstName ,LTRIM(RTRIM(mr.BorrowerSSN)) AS BorrowerSSN ,LTRIM(RTRIM(mr.PropertyStreet)) AS PropertyStreet ,LTRIM(RTRIM(mr.PropertyZip)) AS PropertyZip ,LTRIM(RTRIM(mr.NoteAmount)) AS NoteAmount ,LTRIM(RTRIM(mr.LienType)) AS LienType FROM ( SELECT mr.Id ,mr.F2FResolved ,mr.F2FIgnore ,mr.[F2FIgnore Always] ,REPLACE(LTRIM(RTRIM(mr.[MIN])), '-', '') AS [MIN] ,LTRIM(RTRIM(mr.BorrowerLastName)) AS BorrowerLastName ,LTRIM(RTRIM(mr.BorrowerFirstName)) AS BorrowerFirstName ,LTRIM(RTRIM(mr.BorrowerSSN)) AS BorrowerSSN ,LTRIM(RTRIM(mr.PropertyStreet)) AS PropertyStreet ,LTRIM(RTRIM(mr.PropertyZip)) AS PropertyZip ,LTRIM(RTRIM(mr.NoteAmount)) AS NoteAmount ,LTRIM(RTRIM(mr.LienType)) AS LienType ,mr.F2FProcessed ,ROW_NUMBER() OVER ( PARTITION BY mr.[MIN] ORDER BY mr.[ImportedDate] DESC ) AS rn FROM Table1 mr ) AS mr INNER JOIN Table2 d ON LTRIM(RTRIM(MERSMin)) = REPLACE(LTRIM(RTRIM(mr.[MIN])), '-', '') WHERE ( ( COALESCE(mr.[F2FResolved], 0) = 0 AND COALESCE(d.[F2FResolved], 0) = 0 ) OR ( COALESCE(mr.[F2FIgnore Always], 0) = 0 AND COALESCE(d.[F2FIgnore Always], 0) = 0 ) OR ( COALESCE(mr.[F2FIgnore], 0) = 0 AND COALESCE(d.[F2FIgnore], 0) = 0 ) ) AND COALESCE(mr.F2FProcessed, 0) = 0 AND COALESCE(d.F2FProcessed, 0) = 0 AND rn = 1 ```
Add ``` ORDER BY [ImportedDate] DESC ``` to the bottom of the query.
Selecting Rows from Table with Lastest Import Date
[ "", "sql", "t-sql", "" ]
While trying to build a data warehousing application using Talend, we are faced with the following scenario. We have two tables that look like Table master ``` ID | CUST_NAME | CUST_EMAIL ------------------------------------ 1 | FOO | FOO_BAR@EXAMPLE.COM ``` Events Table ``` ID | CUST_ID | EVENT_NAME | EVENT_DATE --------------------------------------- 1 | 1 | ACC_APPLIED | 2014-01-01 2 | 1 | ACC_OPENED | 2014-01-02 3 | 1 | ACC_CLOSED | 2014-01-02 ``` There is a one-to-many relationship between master and the events table. Since there are a limited number of event names, I am proposing that we denormalize this structure into something that looks like ``` ID | CUST_NAME | CUST_EMAIL | ACC_APP_DATE_ID | ACC_OPEN_DATE_ID |ACC_CLOSE_DATE_ID ----------------------------------------------------------------------------------------- 1 | FOO | FOO_BAR@EXAMPLE.COM | 20140101 | 20140102 | 20140103 ``` The `DATE_ID` columns refer to entries inside the time dimension table. First question: Is this a good idea? What are the other alternatives to this scheme? Second question: How do I implement this using Talend Open Studio? I figured out a way in which I moved the data for each event name into its own temporary table along with cust\_id using the `tMap` component and later linked them together using another `tMap`. Is there another way to do this in Talend?
To do this in Talend you'll need to first sort your data so that it is reliably in the order of applied, opened and closed for each account and then denormalize it to a single row with a single delimited field for the dates using the tDenormalizeRows component. After this you'll want to use tExtractDelimitedFields to split the single dates field.
Yeah, this is a good idea; it is called an accumulating snapshot fact. <http://www.kimballgroup.com/2012/05/design-tip-145-time-stamping-accumulating-snapshot-fact-tables/> Not sure how to do this in Talend (I don't know the tool) but it would be quite easy to implement in SQL using a CASE or PIVOT statement.
How to flatten a one-to-many relationship
[ "", "sql", "data-warehouse", "talend", "dimensional-modeling", "" ]
Table name: `mytable` ``` Id username pizza-id pizza-size Quantity order-time -------------------------------------------------------------- 1 xyz 2 9 2 09:00 10/08/2014 2 abc 1 11 3 17:45 13/07/2014 ``` This is `mytable`, which has 6 columns. `Id` is `int`, `username` is `varchar`, `order-time` is `datetime` and the rest are of `integer` datatype. How do I count the number of orders with the following pizza quantities: 1, 2, 3, 4, 5, 6, 7 and above 7, using a T-SQL query? It would be very helpful if anyone could help me find the solution.
If the requirement is to count the number of orders with different pizza quantities and represent the count of orders as 1, 2, 3, 4, 5, 6, 7, with all higher order counts in a new category 'above 7', then you can use a window function as: ``` select case when totalorders <= 7 then cast(totalorders as varchar(10)) else 'Above 7' end as totalorders , Quantity from ( select distinct count(*) over (partition by Quantity order by Quantity asc) as totalorders, Quantity from mytable ) T order by Quantity ``` `DEMO` Edit: if the requirement is to count the number of orders with pizza quantities 1, 2, 3, 4, 5, 6, 7, with all other pizza quantities in a new category 'above 7', then you can write: ``` select distinct count(*) over ( partition by Quantity order by Quantity asc ) as totalorders, Quantity from ( select case when Quantity <= 7 then cast(Quantity as varchar(20)) else 'Above 7' end as Quantity, id from mytable ) T order by Quantity ``` `DEMO2`
Try ``` Select CASE WHEN Quantity > 7 THEN 'OVER7' ELSE Cast(quantity as varchar) END Quantity, COUNT(ID) NoofOrders from mytable GROUP BY CASE WHEN Quantity > 7 THEN 'OVER7' ELSE Cast(quantity as varchar) END ``` or ``` Select SUM(Case when Quantity = 1 then 1 else 0 end) Orders1, SUM(Case when Quantity = 2 then 1 else 0 end) Orders2, SUM(Case when Quantity = 3 then 1 else 0 end) Orders3, SUM(Case when Quantity = 4 then 1 else 0 end) Orders4, SUM(Case when Quantity = 5 then 1 else 0 end) Orders5, SUM(Case when Quantity = 6 then 1 else 0 end) Orders6, SUM(Case when Quantity = 7 then 1 else 0 end) Orders7, SUM(Case when Quantity > 7 then 1 else 0 end) OrdersAbove7 from mytable ```
Count in sql with group by
[ "", "sql", "sql-server", "t-sql", "" ]
Let's say I have two tables: ``` Table foo =========== id | val -------- 01 | 'a' 02 | 'b' 03 | 'c' 04 | 'a' 05 | 'b' Table bar ============ id | class ------------- 01 | 'classH' 02 | 'classI' 03 | 'classJ' 04 | 'classK' 05 | 'classI' ``` I want to return all the values of foo and bar for which foo exists in more than one distinct bar. So, in the example, we'd return: ``` val | class ------------- 'a' | 'classH' 'a' | 'classK' ``` because although 'b' exists multiple times as well, it has the same bar value. I have the following query returning all foo for which there are multiple bar, even if the bar are the same: ``` select distinct foo.val, bar.class from foo, bar where foo.id = bar.id and ( select count(*) from foo2, bar2 where foo2.id = bar2.id and foo2.val = foo.val ) > 1 order by va.name; ```
``` select f.val, b.class from foo f join bar b on f.id = b.id where f.val in ( select f.val from foo f join bar b on f.id = b.id group by f.val having count(distinct b.class) > 1 ); ```
you can just use a subquery that has all duplicated to display each row using EXISTS like so. ``` SELECT f.val, b.class FROM foo f JOIN bar b ON b.id = f.id WHERE EXISTS ( SELECT 1 FROM foo JOIN bar ON foo.id = bar.id WHERE foo.val = f.val GROUP BY foo.val HAVING COUNT(DISTINCT bar.class) > 1 ); ``` [**Fiddle Demo**](http://sqlfiddle.com/#!2/d1bc90/1/0) Generally exists executes faster than IN which is why I prefer it over IN... more details about IN VS EXISTS in [**MY POST HERE**](https://stackoverflow.com/q/25756112/2733506)
Select the foo in distinct bar from 2 tables
[ "", "mysql", "sql", "" ]
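An aside, not part of the original thread: the accepted `HAVING COUNT(DISTINCT ...)` approach above can be verified against the question's own sample data using Python's `sqlite3` (a different engine than the MySQL in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE foo (id INTEGER, val TEXT);
    CREATE TABLE bar (id INTEGER, class TEXT);
    INSERT INTO foo VALUES (1,'a'),(2,'b'),(3,'c'),(4,'a'),(5,'b');
    INSERT INTO bar VALUES (1,'classH'),(2,'classI'),(3,'classJ'),
                           (4,'classK'),(5,'classI');
""")
# Keep only vals whose joined rows span more than one distinct class.
rows = conn.execute("""
    SELECT f.val, b.class
    FROM foo f JOIN bar b ON f.id = b.id
    WHERE f.val IN (SELECT f2.val
                    FROM foo f2 JOIN bar b2 ON f2.id = b2.id
                    GROUP BY f2.val
                    HAVING COUNT(DISTINCT b2.class) > 1)
    ORDER BY f.val, b.class
""").fetchall()
print(rows)  # [('a', 'classH'), ('a', 'classK')]
```

'b' is excluded because both of its rows share 'classI', matching the expected output in the question.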
I have written SQL to select user data for the last three months, but at the moment it uses a rolling window that updates daily. I want to change it so that, as it is now October, it will not count October's data but instead July to September, and then switch to August to October when we move into November. This is the SQL I have at the moment: ``` declare @Today datetime declare @Category varchar(40) set @Today = dbo.udf_DateOnly(GETDATE()) set @Category = 'Doctors active last three months updated' declare @last3monthsnew datetime set @last3monthsnew=dateadd(m,-3,dbo.udf_DateOnly(GETDATE())) delete from LiveStatus_tbl where Category = @Category select @Category, count(distinct U.userid) from UserValidUKDoctor_vw U WHERE LastLoggedIn >= @last3monthsnew ``` How would I edit this to do that?
``` WHERE LastLoggedIn >= DATEADD(month, DATEDIFF(month, 0, GETDATE())-3, 0) AND LastLoggedIn < DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0) ``` The above statement will return any results in July till before the start of current month.
Referencing this answer to get the first day of the month: **[How can I select the first day of a month in SQL?](https://stackoverflow.com/a/1520803/57475)** You can detect the month limitations like so: ``` select DATEADD(month, DATEDIFF(month, 0, getdate()) - 3, 0) AS StartOfMonth select DATEADD(month, DATEDIFF(month, 0, getdate()), 0) AS EndMonth ``` Then you can add that into variables or directly into your `WHERE` clause: ``` declare @StartDate datetime declare @EndDate datetime set @StartDate = DATEADD(month, DATEDIFF(month, 0, getdate()) - 3, 0) set @EndDate = DATEADD(month, DATEDIFF(month, 0, getdate()), 0) select @Category, count(distinct U.userid) from UserValidUKDoctor_vw U where LastLoggedIn >= @StartDate AND LastLoggedIn < @EndDate ``` Or: ``` select @Category, count(distinct U.userid) from UserValidUKDoctor_vw U where LastLoggedIn >= DATEADD(month, DATEDIFF(month, 0, getdate()) - 3, 0) and LastLoggedIn < DATEADD(month, DATEDIFF(month, 0, getdate()), 0) ```
SQL Last Three Months
[ "", "sql", "sql-server", "reporting", "" ]
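An aside, not part of the original thread: the "three whole calendar months" window computed above with `DATEADD`/`DATEDIFF` in SQL Server can be sketched in Python's `sqlite3`, which spells the same idea with date modifiers. A fixed reference date stands in for `GETDATE()` so the result is reproducible; names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ref = '2014-10-15'  # pretend "today" is mid-October
start, end = conn.execute(
    "SELECT date(?, 'start of month', '-3 months'), date(?, 'start of month')",
    (ref, ref)).fetchone()
print(start, end)  # 2014-07-01 2014-10-01

conn.execute("CREATE TABLE logins (user_id INTEGER, last_logged_in TEXT)")
conn.executemany("INSERT INTO logins VALUES (?, ?)", [
    (1, '2014-06-30'), (2, '2014-07-01'), (3, '2014-08-15'),
    (4, '2014-09-30'), (5, '2014-10-02'),
])
# Half-open range: >= start of July, < start of October.
n = conn.execute(
    "SELECT COUNT(DISTINCT user_id) FROM logins "
    "WHERE last_logged_in >= ? AND last_logged_in < ?",
    (start, end)).fetchone()[0]
print(n)  # 3
```

The half-open `>= start AND < end` comparison avoids off-by-one issues at month boundaries, the same property the accepted answer relies on.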
I have a table with follower and following counts and want to get both counts in one stored procedure. Is it possible to have two select queries with different where conditions on the same table? ``` CREATE TABLE Table1 ([val1] int, [val2] int, [val3] int, [val4] int, other int) ; INSERT INTO Table1 ([val1], [val2], [val3], [val4], other) VALUES (1, 26, 13, 1, 1), (2, 13, 26, 1, 1), (3, 10, 26, 1, 1), (4, 26, 13, 1, 1), (5, 14, 26, 1, 1) ; ``` **My select queries** ``` (select count(*) as following_count from table1 where val2=26) (select count(*) as follower_count from table1 where val3=26) ``` [**SQL FIDDLE LINK**](http://sqlfiddle.com/#!3/fd73c/10)
You could do this: ``` SELECT SUM(CASE WHEN val2=26 THEN 1 ELSE 0 END) AS following_count, SUM(CASE WHEN val3=26 THEN 1 ELSE 0 END) AS follower_count FROM table1 ```
Why don't you fire both statement using UNION ALL ? See: <http://dev.mysql.com/doc/refman/5.1/de/union.html> SO: ``` SELECT COUNT(*) AS following_count FROM table1 WHERE val2=26 UNION ALL SELECT COUNT(*) AS following_count FROM table1 WHERE val3=26 ``` returns two rows with 2 numbers in 1 query. In two columns do this: ``` SELECT (SELECT COUNT(*) AS following_count FROM table1 WHERE val2=26) col1 , (SELECT COUNT(*) AS following_count FROM table1 WHERE val3=26) col2 ```
two select queries with different where clause on same table
[ "", "mysql", "sql", "sql-server", "stored-procedures", "" ]
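An aside, not part of the original thread: the accepted conditional-aggregation answer (`SUM(CASE WHEN ...)`) can be checked against the question's own sample rows in Python's `sqlite3` (a different engine than the thread's SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (val1 INT, val2 INT, val3 INT, val4 INT, other INT);
    INSERT INTO table1 VALUES (1,26,13,1,1),(2,13,26,1,1),(3,10,26,1,1),
                              (4,26,13,1,1),(5,14,26,1,1);
""")
# One pass over the table; each SUM counts rows matching its own condition.
following, followers = conn.execute("""
    SELECT SUM(CASE WHEN val2 = 26 THEN 1 ELSE 0 END),
           SUM(CASE WHEN val3 = 26 THEN 1 ELSE 0 END)
    FROM table1
""").fetchone()
print(following, followers)  # 2 3
```

Both counts come back in a single row of a single scan, which is what makes this preferable to two separate `COUNT(*)` queries.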
I have the following table: ``` ID GROUPID oDate oTime Value 1 A 2014-06-01 00:00:00 100 2 A 2014-06-01 01:00:00 200 3 A 2014-06-01 02:00:00 300 4 A 2014-06-01 03:00:00 400 5 A 2014-06-01 04:00:00 0 6 A 2014-06-01 05:00:00 10 7 A 2014-06-01 06:00:00 20 ``` I want to have the following result: ``` A B C D E F ---------------------------------------------------------------------- 1 ID GROUPID oDate oTime Value Result 2 1 A 2014-06-01 00:00:00 100 3 2 A 2014-06-01 01:00:00 200 100 4 3 A 2014-06-01 02:00:00 300 100 5 4 A 2014-06-01 03:00:00 400 100 6 5 A 2014-06-01 04:00:00 0 55 7 6 A 2014-06-01 05:00:00 10 10 8 7 A 2014-06-01 06:00:00 20 10 ``` Where `Result` formula is (In excel format): I put the formula on cell F3 --> `=IF(E3=0, IF(E4=0, 0, (F2+F4)/2), (E3-E2)` How can I do this in SQL Syntax? Does anyone know how to solve this? Thank you.
You can write as: ``` ;with CTE as ( select T1.ID, T1.GROUPID, T1.oDate, T1.oTime, T1.Value, --T2.oTime as previousrow, --T3.oTime as nextrow, case when T1.Value = 0 then 0 --E3=0 else T1.Value - T2.Value --(E3-E2) end as Result From test T1 left join test T2 on T1.oDate = T2.oDate and T1.GROUPID = T2.GROUPID and DATEADD(Hour, -1, T1.oTime)= T2.oTime left join test T3 on T1.oDate = T3.oDate and T1.GROUPID = T3.GROUPID and DATEADD(Hour, 1, T1.oTime)= T3.oTime ) ,CTE2 as ( select T1.ID, T1.GROUPID, T1.oDate, T1.oTime, T1.Value, case when T1.Value = 0 then (T2.Result + T3.Result)/2 --(F2+F4)/2) else T1.Result end as Result From CTE T1 left join CTE T2 on T1.oDate = T2.oDate and T1.GROUPID = T2.GROUPID and DATEADD(Hour, -1, T1.oTime)= T2.oTime left join CTE T3 on T1.oDate = T3.oDate and T1.GROUPID = T3.GROUPID and DATEADD(Hour, 1, T1.oTime)= T3.oTime ) select ID, GROUPID, oDate, oTime, Value, Result from CTE2 ``` `DEMO`
I think i got it for you ! test this ``` SELECT DISTINCT A.*, CASE WHEN A.VALUE = 0 THEN ( B.RESULT + C.RESULT ) / 2 ELSE A.RESULT END AS RESULT FROM (SELECT A.*, A.VALUE - B.VALUE AS RESULT FROM #TEMP1 A JOIN #TEMP1 B ON A.ID = ( B.ID + 1 ) AND A.GROUPID = B.GROUPID) A LEFT JOIN (SELECT A.*, A.VALUE - B.VALUE AS RESULT FROM #TEMP1 A JOIN #TEMP1 B ON A.ID = ( B.ID + 1 ) AND A.GROUPID = B.GROUPID)B ON A.ID = ( B.ID + 1 ) AND A.GROUPID = B.GROUPID LEFT JOIN (SELECT A.*, A.VALUE - B.VALUE AS RESULT FROM #TEMP1 A JOIN #TEMP1 B ON A.ID = ( B.ID + 1 ) AND A.GROUPID = B.GROUPID)C ON A.ID = ( C.ID - 1 ) AND A.GROUPID = C.GROUPID ```
previous and next record on calculation field SQL
[ "", "sql", "sql-server", "excel", "sql-server-2008", "t-sql", "" ]
Following is my table for artists - ``` id name sex 1 harsh male 2 geet female ``` Following is my table for events - ``` id artist_id created_by 2 2 16 2 2 17 ``` Following is my query - ``` SELECT * FROM `events` WHERE artist_id IN (SELECT id FROM `artists` WHERE name LIKE '%$search_term%') ``` But apart from events data I need to get artist name in the result as well, please let me know what I need to change in my query as I tried `*, artists.name` it wont worked.
Try this query ``` SELECT e.* FROM `artists` AS a JOIN `events` AS e ON e.artist_id = a.id WHERE a.name LIKE '%$search_term%'; ```
You need to select from two tables simultaneously. Use the [join](http://dev.mysql.com/doc/refman/5.0/en/join.html) for that ``` SELECT artists.name, events.* FROM artists INNER JOIN events ON artist.id = artist_id WHERE name LIKE '%search_term%' ```
Add columns when using IN in mysql for the result
[ "", "mysql", "sql", "" ]
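An aside, not part of the original thread: the JOIN rewrite suggested above can be demonstrated with Python's `sqlite3` on the question's sample tables, including a parameterized `LIKE` (safer than interpolating `$search_term` directly, as the PHP-style original does):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE artists (id INTEGER, name TEXT, sex TEXT);
    CREATE TABLE events (id INTEGER, artist_id INTEGER, created_by INTEGER);
    INSERT INTO artists VALUES (1,'harsh','male'),(2,'geet','female');
    INSERT INTO events VALUES (2,2,16),(2,2,17);
""")
# JOIN makes artist columns available alongside event columns.
rows = conn.execute("""
    SELECT a.name, e.id, e.created_by
    FROM artists a JOIN events e ON e.artist_id = a.id
    WHERE a.name LIKE '%' || ? || '%'
    ORDER BY e.created_by
""", ("gee",)).fetchall()
print(rows)  # [('geet', 2, 16), ('geet', 2, 17)]
```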
I have a table that has an [ArchiveDate] column like this: ``` ArchiveDate 2014-10-06 2014-10-06 2014-10-06 2014-10-01 2014-10-01 2014-10-01 2014-10-01 2014-05-22 2014-05-22 ``` I want to select the penultimate date, but when I use: ``` select max([ArchiveDate]) -1 'previousweek' from [PipelineArchive] ``` I get 2014-10-05 (which doesn't exist in the column), rather than 2014-10-01. I can't figure out how to code this to select the "last but one"; any help will be much appreciated! Thank you.
You need to sort on `ArchiveDate` in descending order, skip one record, and take the next one. For example, in SQL Server 2012 you could do it this way: ``` SELECT DISTINCT [ArchiveDate] FROM [PipelineArchive] ORDER BY [ArchiveDate] DESC OFFSET (1) ROWS FETCH NEXT (1) ROWS ONLY ``` [Demo.](http://www.sqlfiddle.com/#!6/3b27c/2)
``` SELECT distinct([ArchiveDate]) FROM [PipelineArchive] WHERE [ArchiveDate] = (SELECT MAX([ArchiveDate]) AS second FROM [PipelineArchive] WHERE [ArchiveDate] < (SELECT MAX([ArchiveDate]) AS first FROM [PipelineArchive]) ) ``` The most recent date is: ``` SELECT MAX([ArchiveDate]) AS first FROM [PipelineArchive] ``` The maximum date less than that is: ``` (SELECT MAX([ArchiveDate]) AS second FROM [PipelineArchive] WHERE [ArchiveDate] < (SELECT MAX([ArchiveDate]) AS first FROM [PipelineArchive])) ``` PRO * it's (quite) standard SQL CON * it isn't simple to generalize to the n-th date
SQL - select the penultimate date
[ "", "sql", "" ]
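An aside, not part of the original thread: the accepted `OFFSET ... FETCH` answer is SQL Server 2012 syntax; the same "skip one distinct date, take the next" idea can be shown in Python's `sqlite3`, where the equivalent is `LIMIT 1 OFFSET 1`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pipeline_archive (archive_date TEXT)")
conn.executemany("INSERT INTO pipeline_archive VALUES (?)",
                 [('2014-10-06',)] * 3 + [('2014-10-01',)] * 4
                 + [('2014-05-22',)] * 2)
# Distinct dates, newest first; skip the newest, keep the next one.
penultimate = conn.execute("""
    SELECT DISTINCT archive_date
    FROM pipeline_archive
    ORDER BY archive_date DESC
    LIMIT 1 OFFSET 1
""").fetchone()[0]
print(penultimate)  # 2014-10-01
```

Note this returns a date that actually exists in the column, unlike `MAX(date) - 1`.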
I need to determine whether a customer has bought 100 or more unique drinks using a SQL query and so far I've done: ``` SELECT COUNT(DISTINCT beer_id) FROM Orders WHERE patron_email='smith@gmail.com' ``` How would I test whether the result of this is 100 or more? Update: Sorry for the vague question, I need to use standard SQL.
This works in MySQL, not sure about others: ``` SELECT COUNT(DISTINCT beer_id) > 100 FROM Orders WHERE patron_email='smith@gmail.com' ``` Just put a boolean condition in the `SELECT`. It will evaluate to 0 for false or 1 for true.
By using the `having` clause as in: ``` SELECT COUNT(DISTINCT beer_id) FROM Orders WHERE patron_email='smith@gmail.com' having count(distinct beer_id) > 100 ```
Getting true/false from SQL query
[ "", "sql", "" ]
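An aside, not part of the original thread: the chosen answer puts a boolean comparison in the SELECT list; SQLite (used here via Python's `sqlite3`, a different engine than the thread assumes) evaluates it the same way, yielding 1 for true and 0 for false. The asker wanted "100 or more", so `>=` is used here rather than the answer's `>`; data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (patron_email TEXT, beer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("smith@gmail.com", b) for b in range(150)]  # 150 distinct beers
                 + [("smith@gmail.com", 0)])                    # plus one repeat
flag = conn.execute("""
    SELECT COUNT(DISTINCT beer_id) >= 100
    FROM orders
    WHERE patron_email = ?
""", ("smith@gmail.com",)).fetchone()[0]
print(flag)  # 1
```

Note that boolean-in-SELECT is not standard SQL; the portable alternative is a `CASE WHEN COUNT(DISTINCT beer_id) >= 100 THEN 1 ELSE 0 END`.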
I have three tables : ``` FORM(id) QUESTION(id, form_id, text) --> X questions for a form ANSWER(id, question_id, answer_number, text) --> X answers for a question ``` I also have a table for answers given by users : ``` USERS_ANSWERS(id, question_id, answer_number) ``` I would like to have, for each question, the COUNT given by answers, like : question\_id, answer\_number, COUNT(users answer for this one) If there is no answer, then count = 0. Can you help me ? It takes me hours and I still have nothing. :( PS : yes I know, why am I not using answer\_id in USERS\_ANSWERS ? I don't know, it isn't my database...
A simple JOIN should do it; ``` SELECT q.id question_id, a.answer_number, COUNT(ua.id) answer_count FROM question q JOIN answer a ON q.id = a.question_id LEFT JOIN users_answers ua ON q.id = ua.question_id AND a.answer_number = ua.answer_number GROUP BY q.id, a.answer_number ORDER BY q.id, a.answer_number ``` The first JOIN combines all questions with their answers to get all valid combinations. The following LEFT JOIN counts all answers for all combinations, allowing zero counts. [An SQLfiddle to test with](http://sqlfiddle.com/#!2/1075fe/2).
You simple need to group by (question\_id, answer\_number) and then count it ``` select question_id, answer_number, count(answer_number) as count from user_answers group by question_id, answer_number order by question_id, answer_number ```
SQL multiple join with count
[ "", "sql", "join", "count", "" ]
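An aside, not part of the original thread: the key point of the accepted answer, that `COUNT(ua.id)` over a LEFT JOIN yields 0 for answers nobody chose, can be verified with Python's `sqlite3` (schema trimmed to the columns that matter; data illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE question (id INTEGER, form_id INTEGER, text TEXT);
    CREATE TABLE answer (id INTEGER, question_id INTEGER,
                         answer_number INTEGER, text TEXT);
    CREATE TABLE users_answers (id INTEGER, question_id INTEGER,
                                answer_number INTEGER);
    INSERT INTO question VALUES (1, 1, 'Q1');
    INSERT INTO answer VALUES (1, 1, 1, 'yes'), (2, 1, 2, 'no');
    INSERT INTO users_answers VALUES (1, 1, 1), (2, 1, 1);  -- nobody picked 'no'
""")
rows = conn.execute("""
    SELECT q.id, a.answer_number, COUNT(ua.id)
    FROM question q
    JOIN answer a ON a.question_id = q.id
    LEFT JOIN users_answers ua
           ON ua.question_id = q.id AND ua.answer_number = a.answer_number
    GROUP BY q.id, a.answer_number
    ORDER BY q.id, a.answer_number
""").fetchall()
print(rows)  # [(1, 1, 2), (1, 2, 0)]
```

`COUNT(ua.id)` skips the NULLs produced by the unmatched LEFT JOIN rows, so the zero count appears instead of the row vanishing.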
I'm trying to learn SQL and I downloaded a database to practice. I downloaded two files of extension .sql, one is the schema and the other one the actual data. I've also installed MySQL Workbench. I've been googling and I've been trying things to solve this but I don't understand Workbench and I can't load the database. Where do I import the schema and the data in order to try queries ? Any help would be really appreciated.
This is simple in Workbench, and I'll use the freely available sakila database as an example. Feel free to apply this to your situation: 1. Download "sakila" from here: <http://dev.mysql.com/doc/index-other.html> 2. Extract it somewhere, in my case, onto the Desktop into a new sakila-db/ directory 3. Open Workbench 4. In the Schema Navigator, right-click an empty area and choose "Create Schema" 5. In the schema creation wizard, name it "sakila", keep the defaults, hit "Apply", finish wizard 6. Go to "File" -> "Run SQL Script..." 7. Choose "sakila-schema.sql", make sure "Default Schema Name" is blank or select "sakila", execute 8. Go to "File" -> "Run SQL Script..." 9. Choose "sakila-data.sql", execute 10. Click the "refresh" icon in the Workbench Schema Navigator (or restart Workbench) 11. Now, use the populated sakila database :) Steps (4) and (5) are optional in this case (as executing sakila-schema.sql creates the schema), but the idea is worth mentioning. Here's how it would look when loading the script into the SQL IDE: ![enter image description here](https://i.stack.imgur.com/Aga1X.png)
The accepted answer is from 4 years ago, so I thought I'd give an update as in MySQL Workbench 6.3 the procedure is a bit different. You have to select the menu item *Server -> Data Import -> Import from Self-Contained File* and select the SQL file containing the database you want to import. In *Default Target Schema*, select the database you want to import the SQL dump to, or create a new empty database via *New...* Then click on *Start Import*.
Importing Data and Schema to MySQL Workbench
[ "", "mysql", "sql", "mysql-workbench", "" ]
I have a column of type DATE stored data are contains date and time . I can see value when i do ``` select CAST(MSG_DT AS TIMESTAMP) from table; ``` this is the output ``` 17-MAR-08 15:38:59,000000000 ``` I have to select the row using * Only date ``` select CAST(MSG_DT AS TIMESTAMP) from MWRB_RECEIVE where MSG_DT >= TO_DATE( '2000-02-03' ,'YYYY-MM-DD') and MSG_DT <= TO_DATE( '2010-02-03' ,'YYYY-MM-DD') ``` * Only time (eg: every message between 12:00:11 and 23:02:55) --- In DB2 i can do ``` SELECT * FROM TABLE WHERE DATE(INS_TMS) = '2014-02-18' SELECT * FROM TABLE WHERE TIME(INS_TMS) > '09.55.00' ``` In ORACLE I can't see the equivalent.
Try this: ``` SELECT * FROM your_table WHERE TO_CHAR (start_date, 'yyyy-mm-dd') = '2014-10-06' AND TO_CHAR (start_date, 'hh24:mi:ss') > '10:00:00' ```
Why are you casting the column value to a TIMESTAMP when the column in the database is a DATE type? The fractional part of the seconds will always be 0, as DATE only has resolution to the seconds value. You need to add the hours,minutes, and seconds format specifier to the query: ``` select MSG_DT from MWRB_RECEIVE where MSG_DT between TO_DATE( '2000-02-03 12:00:11' ,'YYYY-MM-DD HH24:MI:SS') AND TO_DATE( '2010-02-03 23:02:55' ,'YYYY-MM-DD HH24:MI:SS') ```
Oracle query time on date fields
[ "", "sql", "oracle", "select", "time", "" ]
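An aside, not part of the original thread: the DB2-style `DATE(col)` / `TIME(col)` filtering the asker misses in Oracle does exist almost verbatim in SQLite, shown here via Python's `sqlite3` (different engine than the thread's Oracle; data illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mwrb_receive (msg_dt TEXT)")
conn.executemany("INSERT INTO mwrb_receive VALUES (?)", [
    ('2014-10-06 09:30:00',),
    ('2014-10-06 11:15:00',),
    ('2014-10-07 11:15:00',),
])
# date() and time() split a datetime into its parts, like DB2's DATE()/TIME().
rows = conn.execute("""
    SELECT msg_dt FROM mwrb_receive
    WHERE date(msg_dt) = '2014-10-06'
      AND time(msg_dt) > '10:00:00'
""").fetchall()
print(rows)  # [('2014-10-06 11:15:00',)]
```

One caveat that applies across engines: wrapping an indexed column in a function (here or with Oracle's `TO_CHAR`) generally defeats a plain index on that column.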
So I have this code that I am trying to figure out and I am not sure how exactly to copy my input from one table to another. Say I have this input ``` Insert into TABLE Value ('blah'); ``` and I want to get that value I just inserted and insert it into another table ``` Insert into ANOTHERTABLE value ('blah'); ``` But in this case, I don't know what blah is, I just want to copy it from the first statement. How can I do this?
``` delimiter // create table first_table (fieldname varchar(20)); create table second_table (fieldname varchar(20)); create trigger insert_into_second before insert on first_table for each row begin insert into second_table (fieldname) values (new.fieldname); end // insert into first_table values ('blah'); delimiter ; select * from second_table; ``` Output of the above: ``` | FIELDNAME | |-----------| | blah | ``` **Fiddle Demonstration:** <http://sqlfiddle.com/#!2/e4384/1/0> Notice how, in the fiddle, after only doing an insert into FIRST\_TABLE, I select from SECOND\_TABLE, and it reflects the insert into FIRST\_TABLE. After you create this trigger, any inserts that go into FIRST\_TABLE will also go into SECOND\_TABLE. In MySQL the default statement delimiter is usually a semicolon ( ; ) that has to be changed to something else in order to create the trigger, in this case I used a double slash ( // ) that is why you see me set the delimiter to that at the top and then back to a semicolon at the end.
Your looking to insert from a select statement. ``` INSERT INTO TABLE (column_name) SELECT column_name FROM ANOTHERTABLE ``` As far as "get that value I just inserted" - you'd just run your insert twice of course - secondly on the new table. You could select the data you just inserted from the database and pop into the new table using a where condition ``` INSERT INTO TABLE (column_name) SELECT column_name FROM ANOTHERTABLE WHERE ANOTHERTABLE.ID = <YOUR ID HERE> ``` If you don't have the primary key of the last insert (as it was auto-increment), you should be able to get it with LAST\_INSERT\_ID()
How can I copy a mysql input to another table?
[ "", "mysql", "sql", "" ]
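An aside, not part of the original thread: the trigger approach from the accepted answer translates almost directly to SQLite, runnable through Python's `sqlite3` (SQLite needs no DELIMITER juggling, and an AFTER INSERT trigger is used here where the answer used BEFORE; behavior for this copy is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE first_table (fieldname TEXT);
    CREATE TABLE second_table (fieldname TEXT);
    CREATE TRIGGER insert_into_second AFTER INSERT ON first_table
    BEGIN
        INSERT INTO second_table (fieldname) VALUES (NEW.fieldname);
    END;
""")
# Only first_table is written to; the trigger mirrors the row automatically.
conn.execute("INSERT INTO first_table VALUES ('blah')")
rows = conn.execute("SELECT fieldname FROM second_table").fetchall()
print(rows)  # [('blah',)]
```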
I need to build a query with an ORDER BY and ROWNUM but without using a subselect. I need to get the first row of the ordered query. In other words, I want the result of ``` select * from ( SELECT CAMP1,ORDERCAMP FROM TABLENAME ORDER BY ORDERCAMP) where rownum=1; ``` but **without using a subselect**. Is it possible? I have Oracle 11. You could say this is my whole query: ``` SELECT T1.CAMP_ID, T2.CAMP (SELECT OT.CAMP FROM OTHERTABLE OT WHERE OT.FK_TO_TABLE1=T1.CAMP_ID ORDER BY OT.ORDERCAMP ) FROM TABLE1 T1, TABLE2 T2 WHERE T1.FK_TO_T2=T2.PK; ``` The subquery returns more than one row, and I can't use another subquery like ``` SELECT T1.CAMP_ID, T2.CAMP (SELECT * FROM (SELECT OT.CAMP FROM OTHERTABLE OT WHERE OT.FK_TO_TABLE1=T1.CAMP_ID ORDER BY OT.ORDERCAMP ) WHERE ROWNUM=1 ) FROM TABLE1 T1, TABLE2 T2 WHERE T1.FK_TO_T2=T2.PK; SELECT CAMP1,ORDERCAMP FROM TABLE2 ORDER BY ORDERCAMP ``` because `T1.CAMP_ID` is an invalid identifier in the third-level subquery. I hope I have explained myself enough.
Your current query (without the invalid `ORDER BY`) gets `ORA-01427: single-row subquery returns more than one row`. You can nest subqueries, but you can only refer back one level when joining; so if you did: ``` SELECT T1.CAMP_ID, T2.CAMP, (SELECT CAMP FROM FROM (SELECT OT.CAMP FROM OTHERTABLE OT WHERE OT.FK_TO_TABLE1=T1.CAMP_ID ORDER BY OT.ORDERCAMP ) WHERE ROWNUM = 1) FROM TABLE1 T1, TABLE2 T2 WHERE T1.FK_TO_T2=T2.PK; ``` ... then you would get `ORA-00904: "T1"."CAMP_ID": invalid identifier`. Hence your question, presumably. What you could do instead is join to the third table, and use the analytic `ROW_NUMBER()` function to assign the row number, and then use an outer select wrapped around the whole thing to only find the records with the lowest `ORDERCAMP`: ``` SELECT CAMP_ID, CAMP, OT_CAMP FROM ( SELECT T1.CAMP_ID, T2.CAMP, OT.CAMP AS OT_CAMP, ROW_NUMBER() OVER (PARTITION BY T1.CAMP_ID ORDER BY OT.ORDERCAMP) AS RN FROM TABLE2 T2 JOIN TABLE1 T1 ON T1.FK_TO_T2=T2.PK JOIN OTHERTABLE OT ON OT.FK_TO_TABLE1=T1.CAMP_ID ) WHERE RN = 1; ``` The `ROW_NUMBER()` can partition on the `T1.CAMP_ID` primary key value, or anything else that is unique. [SQL Fiddle demo](http://sqlfiddle.com/#!4/9c8d3/1), including the inner query run on its own so you can see the `RN` numbers assigned before the outer filter is applied. Another approach is to use the aggregate [`KEEP DENSE_RANK FIRST` function](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions056.htm) ``` SELECT T1.CAMP_ID, T2.CAMP, MAX(OT.CAMP) KEEP (DENSE_RANK FIRST ORDER BY OT.ORDERCAMP) AS OT_CAMP FROM TABLE2 T2 JOIN TABLE1 T1 ON T1.FK_TO_T2=T2.PK JOIN OTHERTABLE OT ON OT.FK_TO_TABLE1=T1.CAMP_ID GROUP BY T1.CAMP_ID, T2.CAMP; ``` Which is a bit shorter and doesn't need an inner query. I'm not sure if there's any real advantage of one over the other. [SQL Fiddle demo](http://sqlfiddle.com/#!4/9c8d3/2).
In the most recent version of Oracle, you can do: ``` SELECT CAMP1, ORDERCAMP FROM TABLENAME ORDER BY ORDERCAMP FETCH FIRST 1 ROWS ONLY; ``` Otherwise, I think you need a subquery of some sort.
How to use order by and rownum without subselect?
[ "", "sql", "oracle", "sql-order-by", "rownum", "" ]
I have seen similar questions but feel this is not a duplicate question. I would like to concatenate rows into single string with joins. I am confused on how to proceed. Syntax below I am getting an error "There is already an object named '#TEMPTABLE' in the database." also I am not sure if my sytax to the 2nd select statement is correct, help? ``` SELECT DISTINCT DisplayName, addrSt, addrCntyName, RIGHT('00' + CONVERT(varchar, addrStFips), 2) + RIGHT('000' + convert(varchar, addrCntyFips), 3) AS addrFips INTO #TEMPTABLE FROM PPP INNER JOIN poa ON PPP.OAJ = poa.OAJ INNER JOIN dcPfp ON PfpPayor.KEYF = dbo.dcPfp.KEYJ INNER JOIN ProvOff ON ProvOffAfl.OJK = ProvOff.OJK SELECT DISTINCT addrFips, STUFF ( ( SELECT ',' + DisplayName FROM #TEMPTABLE M WHERE M.addrFips = B.addrFips ORDER BY DisplayName FOR XML PATH('') ),1,1,'' ) AS DISPLAYNAMES FROM #TEMPTABLE B DROP TABLE #TEMPTABLE ```
What worked in my situation was: ``` IF OBJECT_ID('tempdb..#TEMPTABLE') IS NOT NULL DROP TABLE #TEMPTABLE GO ``` Thank you all!
The error message `There is already an object named '#TEMPTABLE' in the database.` means that you've created a temporary table with this name in a previous run. To solve this, you need to `DROP` this table or use another name. To drop, use the following code before your query: ``` IF OBJECT_ID('tempdb..#TEMPTABLE') IS NOT NULL DROP TABLE #TEMPTABLE ```
Concatenate rows into a single text string
[ "", "sql", "sql-server", "" ]
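An aside, not part of the original thread: the `STUFF(... FOR XML PATH(''))` idiom is SQL Server-specific; the same per-group string concatenation can be sketched with `GROUP_CONCAT` in SQLite via Python's `sqlite3` (names illustrative; note SQLite does not guarantee the concatenation order, unlike the `ORDER BY` inside the FOR XML subquery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (addr_fips TEXT, display_name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("01001", "Alpha"), ("01001", "Beta"), ("01003", "Gamma"),
])
# One row per addr_fips, with all names joined by commas.
rows = conn.execute("""
    SELECT addr_fips, GROUP_CONCAT(display_name, ',')
    FROM t
    GROUP BY addr_fips
    ORDER BY addr_fips
""").fetchall()
print(rows)
```

MySQL's `GROUP_CONCAT` and the `STRING_AGG` added in SQL Server 2017 express the same thing without the temp-table-plus-STUFF dance.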
I'm sure this in easy one, but I can't go past conceptualization to syntax: I have a table of features where one named feature may populate several rows e.g: ``` [NAME], [GUID] Fred, NULL Fred, NULL Fred, NULL Tom, Null Mary, Null Mary, Null Mary, Null Mary, Null ``` What I'd like to do is assign ONE GUID per name: ``` Fred, {3b26af27-9d42-481c-a8c8-be1819dccda5} Fred, {3b26af27-9d42-481c-a8c8-be1819dccda5} Fred, {3b26af27-9d42-481c-a8c8-be1819dccda5} Tom, {ee64b706-def0-4e5c-a5fd-0c219962042e} Mary, {fd158f90-9705-4a18-b82c-baca29441401} Mary, {fd158f90-9705-4a18-b82c-baca29441401} Mary, {fd158f90-9705-4a18-b82c-baca29441401} Mary, {fd158f90-9705-4a18-b82c-baca29441401} ```
``` DECLARE @tmp TABLE ( Name varchar(30), GUID uniqueidentifier ) INSERT @tmp SELECT x.Name, NEWID() FROM (SELECT DISTINCT Name FROM MyTable) x UPDATE MyTable SET GUID = tmp.GUID FROM MyTable t INNER JOIN @tmp tmp ON t.Name = tmp.Name ```
Because `NEWID()` doesn't materialize until the very end of a query, you will need to materialize it temporarily. Here I use a table variable to store a single new GUID per name (by grouping on `Name`) before using that table to update the final destination: ``` DECLARE @MaterializedGuids TABLE ( Name varchar(20), NewGuid uniqueidentifier ) INSERT INTO @MaterializedGuids (Name, NewGuid) SELECT Name, NEWID() FROM YourTable GROUP BY Name ``` Now you can use this in-memory table to update your permanent table: ``` UPDATE y SET y.[Guid] = x.[NewGuid] FROM YourTable y JOIN @MaterializedGuids x ON x.Name = y.Name ``` Then check out the results: ``` SELECT * FROM YourTable ```
Loop and assign non-unique GUID to sets of values?
[ "", "sql", "sql-server", "sql-server-2008", "guid", "" ]
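An aside, not part of the original thread: both answers share one idea, materialize one fresh GUID per distinct name first, then join that mapping back for the UPDATE. A sketch with Python's `uuid` and `sqlite3` (SQLite has no `NEWID()`, so the GUIDs are generated client-side; names illustrative):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (name TEXT, guid TEXT)")
conn.executemany("INSERT INTO features VALUES (?, NULL)",
                 [(n,) for n in ["Fred", "Fred", "Tom", "Mary", "Mary"]])

# Step 1: materialize exactly one GUID per distinct name.
conn.execute("CREATE TEMP TABLE name_guid (name TEXT PRIMARY KEY, guid TEXT)")
names = [r[0] for r in conn.execute("SELECT DISTINCT name FROM features")]
conn.executemany("INSERT INTO name_guid VALUES (?, ?)",
                 [(n, str(uuid.uuid4())) for n in names])

# Step 2: join the mapping back so every row of a name gets the same GUID.
conn.execute("""
    UPDATE features
    SET guid = (SELECT guid FROM name_guid
                WHERE name_guid.name = features.name)
""")
rows = conn.execute(
    "SELECT name, COUNT(DISTINCT guid) FROM features GROUP BY name").fetchall()
print(rows)  # every name maps to exactly one guid
```

Materializing first is the crux: evaluating the GUID generator inline per row would mint a different value on every row, which is exactly the behavior the asker wants to avoid.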
I know this may be quite simple, but its very late and for the life of me I cannot remember how to do it. I have four tables, A,B and C. Each one has an ID, Name and some other columns. What I need is a table E that has ID, Name\_A, Name\_B and Name\_C . So if I add another name to say B, it will update this new E table to contain it as well. I tried creating a view: ``` Create View VwNames as SELECT dbo.TableA.name AS NameA, dbo.TableB.name AS NameB, dbo.TableC.name AS NameC FROM dbo.TableA, dbo.TableB, dbo.TableC ``` But I was getting a view with repeated names, for example ``` Juan Pedro Mario Juan Mario Pedro Mario Juan Pedro Mario Pedro Juan Pedro Juan Mario Pedro Mario Juan ``` What I need is something that shows me: ``` Juan Mario Pedro ``` Or ``` Juan Mario Pedro ``` Any help would be really appreciated I've been searching for hours now. Thanks a lot in advance EDIT: How my views is looking as of now: ![Current View](https://i.stack.imgur.com/Qyo6e.png)
You could use a union. Like this: ``` SELECT dbo.TableA.name FROM dbo.TableA UNION SELECT dbo.TableB.name FROM dbo.TableB UNION SELECT dbo.TableC.name FROM dbo.TableC ``` This will get you to output: ``` Juan Mario Pedro ```
TRY LIKE THIS : ``` Create View VwNames as SELECT distinct dbo.TableA.name AS NameA, dbo.TableB.name AS NameB, dbo.TableC.name AS NameC FROM dbo.TableA TA LEFT JOIN ON TA.col1=TB.col1 dbo.TableB TB LEFT JOIN ON TB.col1=T.col1 dbo.Table T ```
Creating a table or view from one field in other tables
[ "", "sql", "sql-server", "t-sql", "" ]
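An aside, not part of the original thread: the accepted UNION answer works because `UNION` (without `ALL`) deduplicates across all branches, which also replaces the cross-join the asker stumbled into. Checkable with Python's `sqlite3` (names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INT, name TEXT);
    CREATE TABLE table_b (id INT, name TEXT);
    CREATE TABLE table_c (id INT, name TEXT);
    INSERT INTO table_a VALUES (1, 'Juan');
    INSERT INTO table_b VALUES (1, 'Mario'), (2, 'Juan');
    INSERT INTO table_c VALUES (1, 'Pedro');
""")
# UNION stacks the three name columns and removes duplicates.
rows = conn.execute("""
    SELECT name FROM table_a
    UNION SELECT name FROM table_b
    UNION SELECT name FROM table_c
    ORDER BY name
""").fetchall()
print(rows)  # [('Juan',), ('Mario',), ('Pedro',)]
```

'Juan' appears in two tables but only once in the result; `UNION ALL` would keep both copies.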
I'm rather inexperienced with databases and have just read about the ["n+1 selects issue"](https://stackoverflow.com/questions/97197/what-is-the-n1-selects-issue). My follow-up question: **Assuming the database resides on the same machine as my program, is cached in RAM and properly indexed, why is the n+1 query pattern slow?** As an example let's take the code from the accepted answer: ``` SELECT * FROM Cars; /* for each car */ SELECT * FROM Wheel WHERE CarId = ? ``` With my mental model of the database cache, each of the `SELECT * FROM Wheel WHERE CarId = ?` queries should need: * 1 lookup to reach the "Wheel" table (one hashmap `get()`) * 1 lookup to reach the list of k wheels with the specified `CarId` (another hashmap `get()`) * k lookups to get the wheel rows for each matching wheel (k pointer dereferenciations) Even if we multiply that by a small constant factor for an additional overhead because of the internal memory structure, it still should be unnoticeably fast. Is the interprocess communication the bottleneck? --- **Edit**: I just found this related article via Hacker News: [Following a Select Statement Through Postgres Internals.](http://patshaughnessy.net/2014/10/13/following-a-select-statement-through-postgres-internals) - [HN discussion thread](https://news.ycombinator.com/item?id=8449329). **Edit 2**: To clarify, I *do* assume `N` to be large. A non-trivial overhead will add up to a noticeable delay then, yes. I am asking *why* the overhead is non-trivial in the first place, for the setting described above.
You are correct that avoiding n+1 selects is less important in the scenario you describe. If the database is on a remote machine, communication latencies of > 1ms are common, i.e. the cpu would spend millions of clock cycles waiting for the network. If we are on the same machine, the communication delay is several orders of magnitude smaller, but synchronous communication with another process necessarily involves a context switch, which commonly costs > 0.01 ms ([source](http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html)), which is tens of thousands of clock cycles. In addition, both the ORM tool and the database will have some overhead per query. To conclude, avoiding n+1 selects is far less important if the database is local, but still matters if n is large.
> Assuming the database resides on the same machine as my program **Never** assume this. Thinking about special cases like this is never a good idea. It's quite likely that your data will grow, and you will need to put your database on another server. Or you will want redundancy, which involves (you guessed it) another server. Or for security, you might not want your app server on the same box as the DB. > why is the n+1 query pattern slow? You don't think it's slow because your mental model of performance is probably all wrong. 1) RAM is horribly slow. Your CPU is wasting around 200-400 CPU cycles each time it needs to read something from RAM. CPUs have a lot of tricks to hide this (caches, pipelining, hyperthreading) 2) Reading from RAM is not "Random Access". It's like a hard drive: sequential reads are faster. See this article about how accessing RAM in the right order is 76.6% faster <http://lwn.net/Articles/255364/> (Read the whole article if you want to know how horrifyingly complex RAM actually is.) **CPU cache** In your "N+1 query" case, the "loop" for each N includes many megabytes of code (on client and server) swapping in and out of caches on each iteration, plus context switches (which usually dump the caches anyway). The "1 query" case probably involves a single tight loop on the server (finding and copying each row), then a single tight loop on the client (reading each row). If those loops are small enough, they can execute 10-100x faster running from cache. **RAM sequential access** The "1 query" case will read everything from the DB to one linear buffer, send it to the client who will read it linearly. No random accesses during data transfer. The "N+1 query" case will be allocating and de-allocating RAM N times, which (for various reasons) may not be the same physical bit of RAM. **Various other reasons** The networking subsystem only needs to read one or two TCP headers, instead of N.
Your DB only needs to parse one query instead of N. When you throw in multi-users, the "locality/sequential access" gets even more fragmented in the N+1 case, but stays pretty good in the 1-query case. Lots of other tricks that the CPU uses (e.g. branch prediction) work better with tight loops. See: <http://blogs.msdn.com/b/oldnewthing/archive/2014/06/13/10533875.aspx>
Why is the n+1 selects pattern slow?
[ "", "sql", "orm", "select-n-plus-1", "" ]
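An aside, not part of the original thread: the statement-count difference behind the N+1 pattern can be made concrete with Python's `sqlite3`. In-process SQLite hides the per-round-trip latency the answers discuss, so this sketch counts statements rather than timing them; schema is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars (id INTEGER PRIMARY KEY);
    CREATE TABLE wheels (id INTEGER PRIMARY KEY, car_id INTEGER);
""")
conn.executemany("INSERT INTO cars VALUES (?)", [(i,) for i in range(100)])
conn.executemany("INSERT INTO wheels (car_id) VALUES (?)",
                 [(i,) for i in range(100) for _ in range(4)])

# N+1 pattern: one statement for cars, plus one per car for its wheels.
queries = 1
wheels_per_car = {}
for (car_id,) in conn.execute("SELECT id FROM cars").fetchall():
    queries += 1
    wheels_per_car[car_id] = conn.execute(
        "SELECT COUNT(*) FROM wheels WHERE car_id = ?",
        (car_id,)).fetchone()[0]
print(queries)  # 101

# Single-query alternative: one statement returns the same information.
rows = conn.execute("""
    SELECT c.id, COUNT(w.id)
    FROM cars c LEFT JOIN wheels w ON w.car_id = c.id
    GROUP BY c.id
""").fetchall()
```

With a networked server, each of the 101 statements pays a round trip; the JOIN pays one.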
I have this table in Oracle database 11g: ``` NUNOTA SEQUENCIA QTD_CONTROLE QTDNEG ---------- ---------- ------------ ---------- 446 1 30 60 446 2 30 30 446 3 30 120 ``` I need to obtain a result like that: ``` NUNOTA SEQUENCIA QTD_CONTROLE QTDNEG ---------- ---------- ------------ ---------- 446 1 30 30 446 1 30 30 446 2 30 30 446 3 30 30 446 3 30 30 446 3 30 30 446 3 30 30 ``` It is basicly QTDNEG / QTD\_CONTROLE and the result must be the number of lines. I will use the result to print labels, in FreeReport, I tried to use Oracle Views to get the result but is to hard for me. Another example: ``` NUNOTA SEQUENCIA QTD_CONTROLE QTDNEG ---------- ---------- ------------ ---------- 446 1 30 60 446 2 100 300 446 3 15 30 ``` The result should be ``` NUNOTA SEQUENCIA QTD_CONTROLE QTDNEG ---------- ---------- ------------ ---------- 446 1 30 30 446 1 30 30 446 2 100 100 446 2 100 100 446 2 100 100 446 3 15 15 446 3 15 15 ```
Try this: ``` with t_res (NUNOTA,SEQUENCIA,QTD_CONTROLE,QTDNEG, lvl) as ( select NUNOTA,SEQUENCIA,QTD_CONTROLE,QTDNEG, 1 from t union all select t.NUNOTA,t.SEQUENCIA,t.QTD_CONTROLE,t.QTDNEG, t_res.lvl + 1 from t, t_res where t_res.lvl < t.QTDNEG/t.QTD_CONTROLE ) select unique NUNOTA,SEQUENCIA,QTD_CONTROLE,QTD_CONTROLE, lvl from t_res order by SEQUENCIA, lvl NUNOTA SEQUENCIA QTD_CONTROLE QTD_CONTROLE LVL ------------------------------------------------------- 446 1 30 30 1 446 1 30 30 2 446 2 30 30 1 446 3 30 30 1 446 3 30 30 2 446 3 30 30 3 446 3 30 30 4 ```
A recursive solution (available from Oracle 11gR2): ``` with t(nunota, sequencia, qtd_controle, qtdneg) as (select nunota , sequencia , qtd_controle , qtdneg from mytable union all select nunota , sequencia , qtd_controle , qtdneg - qtd_controle from t where qtdneg - qtd_controle > 0) select nunota , sequencia , qtd_controle , least(qtdneg, qtd_controle) from t order by nunota , sequencia , least(qtdneg, qtd_controle) desc ```
Select Oracle result two lines for same cases
[ "", "sql", "oracle", "select", "view", "oracle11g", "" ]
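An aside, not part of the original thread: the recursive-CTE row-expansion from both answers (repeat each row `QTDNEG / QTD_CONTROLE` times) also works in SQLite's `WITH RECURSIVE`, runnable through Python's `sqlite3`. This sketch checks only the row counts per SEQUENCIA from the question's second example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (nunota INT, sequencia INT, qtd_controle INT, qtdneg INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (446, 1, 30, 60), (446, 2, 100, 300), (446, 3, 15, 30),
])
# Each recursive step peels off one "label" worth of quantity.
rows = conn.execute("""
    WITH RECURSIVE r(nunota, sequencia, qtd_controle, remaining) AS (
        SELECT nunota, sequencia, qtd_controle, qtdneg FROM t
        UNION ALL
        SELECT nunota, sequencia, qtd_controle, remaining - qtd_controle
        FROM r WHERE remaining - qtd_controle > 0
    )
    SELECT sequencia, COUNT(*) FROM r
    GROUP BY sequencia ORDER BY sequencia
""").fetchall()
print(rows)  # [(1, 2), (2, 3), (3, 2)]
```

The expected expansion (60/30 = 2 labels, 300/100 = 3, 30/15 = 2) matches the question's desired output.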
I have a pretty basic execution of a stored procedure in my code: ``` Dim objCmd As ADODB.Command Dim objConn As ADODB.Connection Dim rsTemplate As ADODB.Recordset Set objConn = New ADODB.Connection objConn.Open strConnection Set objCmd = New ADODB.Command objCmd.ActiveConnection = objConn objCmd.CommandType = adCmdStoredProc objCmd.CommandText = "GetTemplate" objCmd.Parameters.Append objCmd.CreateParameter("@Param1", adInteger, adParamInput, , lngParam1) objCmd.Parameters.Append objCmd.CreateParameter("@Param2", adInteger, adParamInput, , lngParam1) objCmd.CommandTimeout = 600 Set rsTemplate = objCmd.Execute() LogInfo "Found " & rsTemplate.RecordCount & " templates" ``` The SP is supposed to find a single row. The problem is that sometimes it fines none, which is actually okay, except that the `RecordCount` property is -1 in both cases. I would have expected 0 for zero records and 1 for one record. I have another section of code that can return a single or multiple rows, and with my testing parameters, it is returning seven rows, and `RecordCount` is correctly showing 7. Does anyone know why I'm getting inconsistent results? I need to be able to skip a subsequent piece of code if I get zero results here, and there's no other good way to check, other than trying to access a bad RecordSet object and having a special handler to `resume next` if it's a particular error code.
If your `strConnection` is such that you are specifiying a forward-only cursor, then the record count will always be `-1` as the result-size cannot be identified with that kind of connection. From here <http://msdn.microsoft.com/de-de/library/windows/desktop/ms676701(v=vs.85).aspx> > The cursor type of the Recordset object affects whether the number of > records can be determined. The RecordCount property will return -1 for > a forward-only cursor; the actual count for a static or keyset cursor; > and either -1 or the actual count for a dynamic cursor, depending on > the data source.
Make sure in your stored procedure `GetTemplate`, you have enabled the row count. Example: ``` CREATE procedure [dbo].[GetTemplate] AS BEGIN SET NOCOUNT OFF -- Enable row count here ... END ```
ADODB RecordSet RecordCount doesn't return useful information
[ "", "sql", "vb6", "adodb", "" ]
I have this simple SQL ``` SELECT (CASE WHEN (SELECT Name FROM Customers WHERE Customers_id = Customer) IS NOT NULL THEN (SELECT Name FROM Customers WHERE Customers_id = Customer) ELSE Customer END) AS Customer, DateStart, ProgramsRun.Programs_id, RunInSeconds, SwSQL, SwProgram, Name AS ProgramName FROM ProgramsRun LEFT JOIN Programs ON ProgramsRun.Programs_id = Programs.Programs_id; ``` My question is how can I avoid make two same SELECT statements: one as condition and other as a value selection. I know that I can create a temporary variable, but this forces me to use stored procedures or temporary tables. Is there a simple way to avoid this?
You could use `IFNULL` like this: ``` IFNULL((SELECT Name FROM Customers WHERE Customers_id = Customer),Customer) ``` Or `COALESCE` like this: ``` COALESCE((SELECT Name FROM Customers WHERE Customers_id = Customer),Customer) ``` Reference: * [COALESCE(value,...)](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce) * [IFNULL(expr1,expr2)](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull)
Try this ``` SELECT IsNull(Name,Customer) AS Customer, DateStart, ProgramsRun.Programs_id, RunInSeconds, SwSQL, SwProgram, Name AS ProgramName FROM ProgramsRun LEFT JOIN Programs ON ProgramsRun.Programs_id = Programs.Programs_id; ```
How to avoid conditional multiple SELECT in a query
[ "", "mysql", "sql", "" ]
``` mysqldump -t -u root -p mytestdb mytable --where=datetime LIKE '2014-09%' ``` This is what I am doing and it returns: ``` mysqldump: Couldn't find table: "LIKE" ``` I am trying to return all the rows where the column `datetime` is `like 2014-09` meaning "all September rows".
You may need to use quotes: ``` mysqldump -t -u root -p mytestdb mytable --where="datetime LIKE '2014-09%'" ```
Selecting dates using LIKE is not a good idea. I saw this method in one project. This causes huge DBMS load and slow system operation as no index by this table column used. If you need to select date range use between: ``` where datetime between '2014-09-01' and '2014-09-30 23:59:59' ```
mysqldump with --where clause is not working
[ "", "sql", "mysql", "" ]
I have a query that unions together some selects from two different tables I currently end up with the final result as so: ``` ColA ColB ColC ------------------------- 1 | NULL DEF 1 2 | ABC DEF 1 3 | NULL GBY 1 ``` Column B will have a maximum of two records with the same value. In this case I would only like to select the record that doesn't have NULL for Column A. Final Result would be: ``` ColA ColB ColC ------------------------- 1 | ABC DEF 1 2 | NULL GBY 1 ``` So far I have surrounded the original union with a select and I presume I need some sort of WHERE, HAVING or GROUP BY. Any ideas?
Maybe you could use a CTE with `ROW_NUMBER`: ``` WITH CTE AS ( SELECT ColA, ColB, ColC, RN = row_number() over (partition by ColB Order By ColA DESC) FROM dbo.TableName -- or your query ) SELECT ColA, ColB, ColC FROM CTE WHERE RN = 1 ``` [Sql-Fiddle](http://sqlfiddle.com/#!6/e596f/1/0)
``` IF OBJECT_ID('tempdb..#test') IS NOT NULL DROP TABLE #test CREATE TABLE #test (Col1 char(10), col2 char(10)) INSERT INTO #test(Col1,col2) VALUES (NULL,'DEF'), ('ABC','ABC'), (NULL,'DEF') select distinct * from #test where col2 is not null or col1 in ( select col1 from #test group by col1 having max(col2) is null) ```
Distinct Union with null
[ "", "sql", "sql-server", "" ]
I have a `users` table and a `likes` table. A user gets displayed a random other user's data and can decide whether he likes or dislikes it. I am struggling with selecting a new random user partner for the acting user that he has not yet rated! Now, I am trying to select all users that do not have a row in the `likes` table, with a relation rating of the acting user `user` to the rated user `partner`. The `users` table is a standard user table, in `likes` I have the columns, `id`, `user`, `partner` and `relation`. I am using Laravel Eloquent, but can also use raw sql. My attempt: ``` // $oUser->id is the acting user $oSearch = Db_User:: select( 'db_users.*', 'db_likes.*' ) ->where( 'db_users.id', '<>', $oUser->id ) ->where( 'db_likes.user', '=', $oUser->id ) ->where( 'db_likes.relation', '<>', 'dislike' ) ->where( 'db_likes.relation', '<>', 'like' ) ->where( 'db_likes.relation', '<>', 'maybe' ) ->join( 'db_likes', 'db_users.id', '=', 'db_likes.partner' ); ``` It is wrong, because I do not get any new user selected with this attempt. I think it is because there can not be found any row in `likes` ! There are no rows when he has not yet rated, so there is no result. Is this correct? EDIT: ``` $oSearch = Db_User:: select( 'db_users.*' ) ->where( 'db_users.id', '<>', $oUser->id ) ->where( 'db_users.sex', '=', $strSex ) ->whereRaw( "not exists ( select 1 from db_likes where db_likes.relation in ('dislike','like','maybe') and db_likes.user = " .$oUser->id . 
" and db_likes.partner = db_users.id )" ); ``` Error: `"{"error":{"type":"Illuminate\\Database\\QueryException","message":"SQLSTATE[42S22]: Column not found: 1054 Unknown column 'IO8fMLYUPHfX1HrwkAWc2xqX' in 'where clause' (SQL: select`db\_users`.* from`db\_users`where`db\_users`.`id`<> IO8fMLYUPHfX1HrwkAWc2xqX and`db\_users`.`sex`= w and not exists ( select 1 from db_likes where db_likes.relation in ('dislike','like','maybe') and db_likes.user = IO8fMLYUPHfX1HrwkAWc2xqX and db_likes.partner = db_users.id ) order by RAND() limit 1)","file":"\/Applications\/MAMP\/htdocs\/adamundeva-server\/adamundeva\/vendor\/laravel\/framework\/src\/Illuminate\/Database\/Connection.php","line":625}}"` \*\* ## FINAL SOLUTION: \*\* ``` $oSearch = Db_User:: select( 'db_users.*' ) ->where( 'db_users.id', '<>', $oUser->id ) ->where( 'db_users.sex', '=', $strSex ) ->whereRaw( "not exists ( select 1 from db_likes where db_likes.relation in ('dislike','like','maybe') and db_likes.user = '" .$oUser->id . "' and db_likes.partner = db_users.id )" ); ```
You can use `not exists` to select all users that have not partnered with a certain user yet ``` select * from db_users dbu where not exists ( select 1 from db_likes dbl where dbl.relation in ('dislike','like','maybe') -- not sure if this is necessary and dbl.user = $oUser->id and dbl.partner = dbu.id ) ``` <http://sqlfiddle.com/#!2/8c3bb9/6>
I think the sql you want is: ``` select db_users.*, db_likes.* from db_users left join db_likes on db_users.id = db_likes.partner and db_likes.user = 'Sx12cltADam2XvzBMUMwq7DG' and db_likes.relation <> 'dislike' and db_likes.relation <> 'like' and db_likes.relation <> 'maybe' where db_users.id != 'Sx12cltADam2XvzBMUMwq7DG' ``` What you want to do is left join the likes table to get all the users even if they didn't like anything. In Laravel it might be just as simple as changing the join to be left join: ``` $oSearch = Db_User:: select( 'db_users.*', 'db_likes.*' ) ->where( 'db_users.id', '<>', $oUser->id ) ->leftjoin( 'db_likes', function($join) { $join->on('db_users.id', '=', 'db_likes.partner' ) ->on('db_likes.user', '=', $oUser->id ) ->on('db_likes.relation', '<>', 'dislike' ) ->on('db_likes.relation', '<>', 'like' ) ->on('db_likes.relation', '<>', 'maybe' ) } ); ```
Sql select row if no row exists in other table
[ "", "sql", "" ]
I have the following table: ``` oDate value1 value2 value3 value4 2014-06-01 10 20 30 40 2014-06-02 20 25 35 50 ``` I want to have the following result ``` oDate oField oValue 2014-06-01 Value1 10 2014-06-01 Value2 20 2014-06-01 Value3 30 2014-06-01 Value4 40 ``` Is it possible to do that in SQL? Need advice. Cheers,
You could do this: **Test data:** ``` DECLARE @tbl TABLE(oDate DATETIME,value1 INT,value2 INT,value3 INT,value4 INT) INSERT INTO @tbl VALUES ('2014-06-01',10,20,30,40), ('2014-06-02',20,25,35,50) ``` **UNPIVOT query** ``` SELECT * FROM ( SELECT oDate, value1, value2, value3, value4 FROM @tbl ) sourceTable UNPIVOT ( oValue FOR oField IN (value1,value2,value3,value4) ) AS unpvt ``` Reference: * [Using PIVOT and UNPIVOT](http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx)
Try this ``` SELECT DISTINCT ODATE,VALUE,FIELD FROM (SELECT odate,value1,value2,value3,value4 FROM #temp) p UNPIVOT (VALUE FOR FIELD IN (value1,value2,value3,value4) )AS unpvt; GO ```
Horizontal to Vertical View SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
How can I use the dictionary views to get information on all the types of data declared in a given package in Oracle 11g?
Use PL/Scope ... ``` alter session set plscope_settings = 'IDENTIFIERS:ALL'; ``` ... and recompile the package (`UTL_LOG` in my case) ... ``` alter package utl_log compile; alter package utl_log compile body; ``` ... and then query the `user_identifiers` view ... ``` select name, type, object_name, object_type, line, col from user_identifiers where object_name = 'UTL_LOG' and usage = 'DECLARATION' and type not in ('VARIABLE','FUNCTION','FORMAL IN','FORMAL OUT','CONSTANT','PROCEDURE','FUNCTION','PACKAGE') ; ``` ... which would (in my case) yield ... ``` NAME TYPE OBJECT_ OBJECT_ LINE COL ------------------- ------- ------- ------- ---- --- ARR_SOME_COLLECTION VARRAY UTL_LOG PACKAGE 19 6 REC_SOME_RECORD RECORD UTL_LOG PACKAGE 15 6 TYP_LOG_CODE SUBTYPE UTL_LOG PACKAGE 8 9 ``` **Please note** that PL/Scope can be used for **any** identifier declared/defined in **any** program unit, not only for data type declarations.
If you want to know how a package looks like run: `desc PACKAGE_NAME`: ``` SQL> desc dbms_output PROCEDURE DISABLE PROCEDURE ENABLE Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- BUFFER_SIZE NUMBER(38) IN DEFAULT PROCEDURE GET_LINE Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- LINE VARCHAR2 OUT STATUS NUMBER(38) OUT PROCEDURE GET_LINES Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- LINES TABLE OF VARCHAR2(32767) OUT NUMLINES NUMBER(38) IN/OUT PROCEDURE GET_LINES Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- LINES DBMSOUTPUT_LINESARRAY OUT NUMLINES NUMBER(38) IN/OUT PROCEDURE NEW_LINE PROCEDURE PUT Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- A VARCHAR2 IN PROCEDURE PUT_LINE Argument Name Type In/Out Default? ------------------------------ ----------------------- ------ -------- A VARCHAR2 IN ``` If you want to get all dependencies see [ALL\_DEPENDENCIES](http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_1041.htm): ``` SQL> ed Wrote file afiedt.buf 1 create or replace package t1_pkg 2 as 3 procedure fake_proc; 4* end t1_pkg; SQL> / Package created. SQL> ed Wrote file afiedt.buf 1 create or replace package body t1_pkg 2 as 3 procedure fake_proc 4 as 5 l_count number(10); 6 begin 7 select count(*) 8 into l_count 9 from user_objects; 10 end fake_proc; 11* end t1_pkg; SQL> / Package body created. SQL> select referenced_name, referenced_type from user_dependencies where name = 'T1_PKG'; REFERENCED_NAME REFERENCED_TYPE --------------- ------------------ STANDARD PACKAGE USER_OBJECTS SYNONYM T1_PKG PACKAGE ```
How to get information on all types of data declared in a given package
[ "", "sql", "database", "oracle", "" ]
I'm using `PostgreSQL 9.2`. Is there any way to get an output like, > Database : mydb, User : myUser using a **`SELECT`** query?
By using the inbuilt [System Information Functions](https://www.postgresql.org/docs/curren/functions-info.html#FUNCTIONS-INFO) 1.) Currently using database ``` select current_database() ``` 2.) Connected User ``` select user ``` To get the desired output use either this ``` select 'Database : ' ||current_database()||', '||'User : '|| user db_details ``` or ``` select format('Database: %s, User: %s',current_database(),user) db_details ``` [`Live Demo`](http://rextester.com/HLKM90597)
Check [this part of the manual](http://www.postgresql.org/docs/current/interactive/functions-info.html) for more functions. ``` SELECT current_user, user, session_user, current_database(), current_catalog, version(); ```
How to get current database and user name with `SELECT` in PostgreSQL?
[ "", "sql", "database", "postgresql", "select", "" ]
Here's my query ``` SELECT tum.user_id, tum.first_name, tum.last_name FROM di_webinar t LEFT JOIN tbl_event_registrants ter ON ter.event_ref_id = t.webinar_ref_id LEFT JOIN tbl_event_attendees tea ON tea.event_ref_id = t.webinar_ref_id INNER JOIN tbl_user_master tum ON tum.user_id = ter.user_ref_id OR tum.user_id = tea.user_ref_id WHERE t.di_ref_id ='93' GROUP BY tum.user_id ``` This query works fine gets me the expected results but its very slow due to the OR condition on inner join. Here's what i tried to make it better. ``` SELECT tum.user_id, tum.first_name, tum.last_name FROM di_webinar t LEFT JOIN ( SELECT event_ref_id, user_ref_id FROM tbl_event_registrants GROUP BY user_ref_id ) ter ON ter.event_ref_id = t.webinar_ref_id LEFT JOIN ( SELECT event_ref_id, user_ref_id FROM tbl_event_attendees GROUP BY user_ref_id ) tea ON tea.event_ref_id = t.webinar_ref_id -- LEFT JOIN tbl_event_registrants ter ON ter.event_ref_id = t.webinar_ref_id -- LEFT JOIN tbl_event_attendees tea ON tea.event_ref_id = t.webinar_ref_id INNER JOIN tbl_user_master tum ON tum.user_id = ter.user_ref_id OR tum.user_id = tea.user_ref_id WHERE t.di_ref_id ='93' GROUP BY tum.user_id ``` But i'm not sure thats the best way to go. 
Here's the explain plan ``` id select_type table type possible_keys key key_len ref rows Extra ------ ----------- --------------------- ------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------- ------- -------------------------------- ------ ---------------------------------------------------- 1 PRIMARY t ref FK_di_webinar_direfid FK_di_webinar_direfid 2 const 1 Using temporary; Using filesort 1 PRIMARY <derived2> ref <auto_key0> <auto_key0> 4 univarie_uni_db.t.webinar_ref_id 36 Using index 1 PRIMARY <derived3> ref <auto_key0> <auto_key0> 4 univarie_uni_db.t.webinar_ref_id 11 Using index 1 PRIMARY tum ALL 
PRIMARY,UNQ_tbl_user_master_LoginRefId,idx_user_master_membership_ref_id,FK_user_master_board_ref_id,FK_tbl_user_master_school_id,FK_tbl_user_master_profile_statusID,FK_tbl_user_master_payment_statusID,FK_tbl_user_master_url_ref_id,idx_tbl_user_master_FirstName,idx_tbl_user_master_LastName,fk_tbl_user_master_CityRefId,fk_tbl_user_master_StateRefId,fk_tbl_user_master_CountryRefId,fk_tbl_user_master_EthnicityRefId,fk_tbl_user_master_FamilyIncomeRefId,fk_tbl_user_master_ScholarshipImportanceRefId,fk_tbl_user_master_ReasonStudyAbroadRefId,fk_tbl_user_master_PrimaryFinancingRefId,fk_tbl_user_master_CitizenRefId,Idx_tbl_user_master_CriticalPercent,Idx_tbl_user_master_PresentClass,Idx_tbl_user_master_ClassStatus,srinivas_test (NULL) (NULL) (NULL) 35641 Using where; Using join buffer (Block Nested Loop) 3 DERIVED tbl_event_attendees index fk_tbl_event_attendees_UserRefId fk_tbl_event_attendees_UserRefId 5 (NULL) 845 (NULL) 2 DERIVED tbl_event_registrants index fk_tbl_event_registrants_UserRefId fk_tbl_event_registrants_UserRefId 5 (NULL) 3568 (NULL) ```
In your comments you said, you don't have to show a sample Webinar per user. Then you don't even have to select from the Webinar table. After all you simply want all users that had a part in a Webinar. So select from tbl\_user\_master and make sure the user id is mentioned in one of the two tables tbl\_event\_registrants and tbl\_event\_attendees. ``` select first_name, last_name from tbl_user_master where user_id in ( select user_ref_id from tbl_event_registrants union all select user_ref_id from tbl_event_attendees ); ``` MySQL can be very slow on IN clauses and UNION queries however. So here is the same with two EXISTS clauses: ``` select first_name, last_name from tbl_user_master tum where exists ( select * from tbl_event_registrants ter where ter.user_ref_id = tum.user_id ) or exists ( select * from tbl_event_attendees tea where tea.user_ref_id = tum.user_id ); ``` If you only want to show users that took part in *all* Webinars, then you'd have to find the total number of webinars and compare with the number of Webinars associated with a user. ``` select first_name, last_name from tbl_user_master tum where ( select count(distinct event_ref_id) from ( select event_ref_id from tbl_event_registrants where user_ref_id = tum.user_id union all select event_ref_id from tbl_event_attendees where user_ref_id = tum.user_id ) ) = (select count(*) from di_webinar); ``` EDIT: Here is the same with a join: ``` select tum.first_name, tum.last_name from tbl_user_master tum join ( select user_ref_id, event_ref_id from tbl_event_registrants union select user_ref_id, event_ref_id from tbl_event_attendees ) ref on ref.user_ref_id = tum.user_id group by tum.user_id having count(*) = (select count(*) from di_webinar); ```
try this one : ``` SELECT tum.user_id, tum.first_name, tum.last_name FROM di_webinar t LEFT JOIN tbl_event_registrants ter ON ter.event_ref_id = t.webinar_ref_id LEFT JOIN tbl_event_attendees tea ON tea.event_ref_id = t.webinar_ref_id INNER JOIN tbl_user_master tum ON tum.user_id = tea.user_ref_id WHERE t.di_ref_id ='93' GROUP BY tum.user_id union SELECT tum.user_id, tum.first_name, tum.last_name FROM di_webinar t LEFT JOIN tbl_event_registrants ter ON ter.event_ref_id = t.webinar_ref_id LEFT JOIN tbl_event_attendees tea ON tea.event_ref_id = t.webinar_ref_id INNER JOIN tbl_user_master tum ON tum.user_id = ter.user_ref_id WHERE t.di_ref_id ='93' GROUP BY tum.user_id ```
Mysql Inner Join with OR condition - Query optimization
[ "", "mysql", "sql", "left-join", "inner-join", "" ]
I have two columns in my database I'd like to combine to make a new third column. The first column is a company name and the second is the URL to the company website. I'd like to make a new column that is the company name hyperlinked to the website: ``` <a href="http://companywebsite.com>Company Name</a> ``` Is there a simple way to go about doing this? I'm VERY new to mySQL and can't figure out how I'd even go about doing this. Bonus points for coming up with a way where when I add a new entry with company name and URL it automatically generates a value for this new hyperlink column. Thanks.
As Polvonjon write, use CONCAT: ``` SELECT brief_description, CONCAT('<a href="', link, '">', innovation_name, '</a>') FROM inno_db ``` By the way - `inno_db` is a very odd name for a table in the database; it's a particular type of storage engine for MySQL. Do you not think "companies" is a better name? Creating a new column is a bad idea - you have to keep it updated, and you're duplicating data, which leads to bugs in the long term. Ideally, you use the query to populate your WP screen. If you can't do that, as the comments recommend, you could create a view, from which you can just do a straight select: ``` create view WPPlugin as select brief_description, CONCAT('<a href="', link, '">', innovation_name, '</a>') FROM inno_db ``` in your plug in code, you then do `select * from WPPlugin`.
Try use CONCAT function: ``` SELECT company, url, CONCAT('<a href="', url, '">', company, '</a>') FROM companies ``` But I will not advice to using this sample of getting anchor tag. Try to use VIEW element in your MVC app.
Create new column in mySQL using values from other columns
[ "", "mysql", "sql", "" ]
I have 3 tables. On the first table, there are multiple entries for each project. The second is basically a mapping table. It's more complicated than this, but for this example I've simplified. There's a simple condition I'm checking for on table 2. On the third table, each entry has a flag that's set to true or false. I want to return rows on the first table where all matching rows on the third table are false. In the example below, the result would return project A b/c all of Jane and Fred's rows in table 3 are false, but none of the other's since every other project has at least one true entry in table 3. ``` Project | Client name | id id | active --------------- ---------------- --------------- A | Jane John | 1 1 | false A | Fred Jane | 2 1 | true B | Mary Fred | 3 2 | false B | Jane Mary | 4 2 | false C | John 3 | false C | Jane 3 | false D | Jane 4 | true D | Mary 4 | false D | John D | Fred ```
The following should do what you want: ``` select t1.* from table1 t1 where not exists (select 1 from table2 t2 join table3 t3 on t2.id = t3.id where t2.name = t1.name and t3.active <> false ); ``` There is some ambiguity about what to do when one of the `join`s fails (this condition is not present in the sample data). This will return the row, because all matching rows in the third table are false, even in that case.
A fairly straight forward JOIN with HAVING should give you the results you want; ``` SELECT t1.project, t1.client FROM table1 t1 JOIN table2 t2 ON t1.client = t2.name JOIN table3 t3 ON t2.id = t3.id GROUP BY t1.project, t1.client HAVING NOT MAX(t3.active) ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!2/94d88/1). This basically just does a straight forward join of all tables, groups the results by client and project. It then uses NOT MAX(t3.active) to check that all booleans in the group are false. This version chooses to not return clients/projects that don't have any active flags to check.
sql query join where none of the many meet a condition
[ "", "sql", "join", "" ]
My table looks like this: ``` Cell Date Hour Minute Counter Value AB1 20141008 14 15 C1 40 AB1 20141008 14 15 C2 15 AB1 20141008 14 30 C1 30 AB1 20141008 14 30 C2 13 ``` I need to calculate formula PERCENT, which, if the data were horizontal and C1, C2 were columns, would look like this: ``` SELECT SUM(C2)/SUM(C1) as 'PERCENT' FROM [Table] group by Cell, Date, Hour ``` I was thinking of using OVER Clause, but I'm not sure how to implement it here. In the case below, the result would be: ``` Cell Date Hour PERCENT AB1 20141008 14 0.40 (Total of C2 = 28 / Total of C1 = 70) ``` Thanks.
*if the data were horizontal and C1, C2 were column* You could PIVOT the data to make it so: ``` SELECT Cell, Date, Hour, SUM([C2])*100/SUM([C1]) AS [%], SUM([C2])/CAST(SUM([C1]) AS DECIMAL(10,5)) AS [Percent] FROM TABLE1 PIVOT (SUM(value) FOR Counter in ([C1],[C2]) ) AS pvt GROUP BY Cell, Date, Hour ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!3/cc864/4) Output: ``` | CELL | DATE | HOUR | % | PERCENT | |------|----------|------|----|---------| | AB1 | 20141008 | 14 | 40 | 0.4 | ```
You do that like this: ``` SELECT Cell, Date, Hour, SUM(CASE WHEN Counter = 'C2' THEN Value ELSE 0 END) / SUM(CASE WHEN Counter = 'C1' THEN Value ELSE 0 END) AS Percent FROM [table] group by Cell, Date, Hour ```
Group vertical data for sums that will be used in division?
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
Not giving detailed information about the table structure and the data: what might be the reason that this GROUP BY: ``` select c.state, case when age <20 then 'u20' when (age >=20 and age<30 ) then 'o20' when (age >=30 and age<40 ) then 'o30' when (age >=40 and age<50 ) then 'o40' else 'min50' end age_group, count(*) amount_trans, sum(t.SELLING_PRICE) sum_price from customers c, transactions t where t.CUSTOMER_ID=c.CUSTOMER_NO group by age, state order by state, age ``` is returning multiple entries for AGE_GROUP, for example: ``` STATE AGE_GROUP AMOUNT_TRANS SUM_PRICE Arizona o30 26 667609 Arizona o30 84 913807 Arizona o30 34 161111 Arizona min50 2 93791 Arizona min50 3 907 California u20 1 83048 California u20 1 83048 California o20 1 54772 ``` My aim is: ``` STATE AGE_GROUP AMOUNT_TRANS SUM_PRICE Arizona o30 144 1742527 Arizona min50 5 94698 California u20 3 220868 ``` Are there maybe duplicate rows?
If you want to group by AGE\_GROUP you need to, er, group by AGE\_GROUP. ``` select state, age_group, count(*) amount_trans, sum(t.SELLING_PRICE) sum_price from ( select c.state, case when age <20 then 'u20' when (age >=20 and age<30 ) then 'o20' when (age >=30 and age<40 ) then 'o30' when (age >=40 and age<50 ) then 'o40' else 'min50' end age_group, t.SELLING_PRICE from customers c, transactions t where t.CUSTOMER_ID=c.CUSTOMER_NO ) group by age_group, state order by state, age_group ```
Its because you are grouping by age, state. Did you mean group by age\_group, state?
SQL - Group by not grouping as expected
[ "", "sql", "oracle", "group-by", "" ]
I am brand new to MySQL so please excuse my level of knowledge here, and feel free to direct me in a better direction if what I am doing is out of date. I am pulling information from a database to fill in a php page. My tables: Server: ``` |ServerID (int) | ArticleID (int) | locationID (int) | | 1 | 46 | 55 | | 2 | 11 | 81 | | 3 | 81 | 46 | | 4 | 55 | 11 | | 5 | 81 | 99 | | 5 | 11 | 52 | ``` Article: ``` |ArticleID (int) | Name (varchar) | Typ (int) | | 46 | domain | 0 | | 81 | root-server | 1 | | 55 | vserver | 2 | | 11 | root-server2 | 1 | ``` Location: ``` |LocationID (int) | location (varchar) | | 46 | 1-5-15-2 | | 81 | 1-5-14-2 | | 55 | 2-25-1-9 | | 11 | 21-2-5-8 | | 99 | 17-2-5-8 | | 52 | 1-8-5-8 | ``` Result: ``` |location (int) | name (varchar) | count (int) | | 1 | root-server | 1 | | 1 | root-server2 | 2 | | 17 | root-server | 1 | ``` The location in the result is the first number block of the location in the location table (1-5-15-2 -> 1, 1-8-5-8 -> 1, 21-2-5-8 -> 21, 17-2-5-8 -> 17). The count is the sum of all servers with the same name and the same first location block. Does anyone think it's possible to get this result in only one query? Thanks for any answer!
Check this ``` SELECT SUBSTRING_INDEX(location, '-', 1) as LID,Article.Name,Count(*) as Count from Location join Server on Server.locationID=Location.locationID join Article on Article.ArticleID=Server.ArticleID group by LID,Article.ArticleID ; ``` [DEMO](http://sqlfiddle.com/#!2/9cd922/14)
Please give this a shot: ``` SELECT s.locationID as id, a.name, count(*) as count FROM `Server` s LEFT JOIN `Article` a ON s.ArticleID = a.ArticleID GROUP BY s.locationID, a.name ```
Big Mysql Query (Join counts in the result and modify a string)
[ "", "mysql", "sql", "count", "" ]
I'm in the process of converting Python code over to the new SQLAlchemy-based Pandas 0.14.1. A common pattern we use is (generically): ``` connection = db.connect() # open connection/session sql = 'CREATE TEMP TABLE table1 AS SELECT ...' connection.execute(sql) ... other sql that creates TEMP tables from various joins of previous TEMP tables ... sql = 'CREATE TEMP TABLE tableN AS SELECT ...' connection.execute(sql) result = connection.query('SELECT * FROM tableN WHERE ...') connection.close() ``` Now, once the connection is closed the TEMP tables are purged by the DB server. However, as the final select query is using the same connection/session, it can access the tables. How can I achieve similar using SQLAlchemy and pd.read\_sql\_query() ? For example: ``` engine = sqlalchemy.create_engine('netezza://@mydsn') connection = engine.connect() sql = 'CREATE TEMP TABLE tmptable AS SELECT ...' connection.execute(sql) result = pd.read_sql_query('SELECT * FROM tmptable WHERE ...', engine) ``` yields a DB error that the TEMP table tmptable doesn't exist. Presumably this is because passing the engine to the read\_sql\_query() requires it to open a new connection which has an independent session scope and hence can't see the TEMP table. Is that a reasonable assumption? Is there a way to work around that? (passing the connection to read\_sql\_query() isn't supported) (I know that I can concatenate the SQL into a single string with ; separating the statements, but this is a simplification of the actual situation where the TEMP tables are created by a multitude of functions which call others nesting 3-4 deep. 
So, to achieve that would require implementing a layer that can coalesce the SQL across multiple calls before issuing it, which I'd rather avoid implementing if there is a nicer way) Using - Pandas: 0.14.1 sqlalchemy: 0.9.7 pyodbc: 3.0.6 Win7 x86_64 Canopy Python distribution (Python 2.7.6) Josh Kuhn's Netezza SQLAlchemy dialect from <https://github.com/deontologician/netezza_sqlalchemy>
You can now pass SQLAlchemy connectable to `pandas.read_sql`. From the [docs](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html): > pandas.read\_sql(sql, con, index\_col=None, coerce\_float=True, params=None, parse\_dates=None, columns=None, chunksize=None) > > ... > > con : SQLAlchemy connectable (engine/connection) or database string URI > > or DBAPI2 connection (fallback mode) > > Using SQLAlchemy makes it possible to use any DB supported by that > library. If a DBAPI2 object, only sqlite3 is supported. So, this should work: ``` engine = sqlalchemy.create_engine('netezza://@mydsn') connection = engine.connect() sql = 'CREATE TEMP TABLE tmptable AS SELECT ...' connection.execute(sql) result = pd.read_sql('SELECT * FROM tmptable WHERE ...', con=connection) ```
All you need to do is add 'SET NOCOUNT ON' at the beginning of your query, that way pandas read\_sql will read everything as one statement. ``` sSQL = '''SET NOCOUNT ON CREATE TABLE ...... ''' ```
How can pandas.read_sql_query() query a TEMP table?
[ "", "sql", "python-2.7", "pandas", "sqlalchemy", "netezza", "" ]
I have, for example, the following data: ``` shift date value ---------------------- A 2014-07-01 5 A 2014-07-02 8 A 2014-07-03 2 B 2014-07-03 1 C 2014-07-03 9 ``` How do I create a view in which all the shifts (A, B, C) appear for each day? ``` shift date value ---------------------- A 2014-07-01 5 B 2014-07-01 0 // add 0 value for B to 1.7.2014 C 2014-07-01 0 // add 0 value for C to 1.7.2014 A 2014-07-02 8 B 2014-07-02 0 // add 0 value for B to 2.7.2014 C 2014-07-02 0 // add 0 value for C to 2.7.2014 A 2014-07-03 2 B 2014-07-03 1 C 2014-07-03 9 ``` I need all three production shifts (A, B, C) filled in for each day on which at least one shift reported some work.
Here is example if you don't want to put missing dates... ``` DECLARE @table TABLE (shift CHAR(1), date Date, Value INT) INSERT INTO @table SELECT 'A', '2014-07-01', 5 INSERT INTO @table SELECT 'A', '2014-07-02', 8 INSERT INTO @table SELECT 'A', '2014-07-03', 2 INSERT INTO @table SELECT 'B', '2014-07-03', 1 INSERT INTO @table SELECT 'C', '2014-07-03', 9 ;WITH shifts AS ( SELECT DISTINCT Shift FROM @table ), allDates AS ( SELECT DISTINCT date FROM @table ) SELECT S.Shift, AD.date, ISNULL(T.Value, 0) AS Value FROM allDates AS AD CROSS JOIN shifts AS S LEFT JOIN @table AS T ON T.Shift = S.Shift AND T.Date = AD.Date ORDER BY AD.date, S.Shift ``` Result: ``` Shift| date |Value A |2014-07-01|5 B |2014-07-01|0 C |2014-07-01|0 A |2014-07-02|8 B |2014-07-02|0 C |2014-07-02|0 A |2014-07-03|2 B |2014-07-03|1 C |2014-07-03|9 ```
First cross-join the shifts with the dates to get all combinations. Then left join all combinations you already have in your table. ``` select shift.shift, dates.date, coalesce(mytable.value,0) from (select distinct shift from mytable) shifts cross join (select distinct date from mytable) dates left join mytable on mytable.shift = shifts.shift and mytable.date = dates.date; ```
Repeatedly fill missing 0 values
[ "", "sql", "sql-server", "" ]
How do I do this in SQL: 1. `SELECT CLIENT, PAYMENT_CODE FROM PAYMENTS_TABLE` 2. `If PAYMENT_CODE = 1, SELECT other data from table_1` 3. `Else, SELECT other data from table_2` The results I want are: ``` | Client | Payment Code = 1 | Address from table_1 | Phone from table_1 | ... ``` or ``` | Client | Payment Code 1 | Address from table_2 | Phone from table_2 | ... ``` Thanks in advance!
You didn't tell us how table\_1 and table\_2 are linked to payments\_table, but something like this should work: ``` select p.client, p.payment_code, case when payment_code = 1 then t1.address else t2.address end as address, case when payment_code = 1 then t1.phone else t2.phone end as phone from payments_table p left join table_1 t1 on p.some_column = t1.some_column left join table_2 t2 on p.some_column = t2.some_column ```
Although Rajesh's solution is fine, you can also do it using two LEFT JOINs; ``` SELECT pt.CLIENT, pt.PAYMENT_CODE, COALESCE(t1.address, t2.address) address, COALESCE(t1.phone, t2.Phone) phone FROM PAYMENTS_TABLE pt LEFT JOIN Table_1 t1 ON pt.client_code = t1.client_code AND pt.PAYMENT_CODE = 1 LEFT JOIN Table_2 t2 ON pt.client_code = t2.client_code AND pt.PAYMENT_CODE <> 1 ``` EDIT: Added `client_code` as linking criteria according to a comment to another answer.
SQL - SELECT from a table if a certain condition is met, else SELECT from another
[ "", "sql", "db2", "" ]
I have the following table: RENTAL(book\_date, copy\_id, member\_id, title\_id, act\_ret\_date, exp\_ret\_date), where book\_date shows the day the book was booked. I need to write a query that, for every day of the month (so from 1-30, 1-29 or 1-31 depending on the month), shows me the number of books booked. I currently know how to show the number of books rented on the days that are in the table: ``` select count(book_date), to_char(book_date,'DD') from rental group by to_char(book_date,'DD'); ``` My questions are: 1. How do I show the rest of the days (if, say, my database has no books rented on the 19th, the 20th, or several other days) and put the number 0 there? 2. How do I show only the days of the current month (28, 29, 30 or 31 are all possible depending on month and year)? I am lost. This must be done using only a SQL query, no PL/SQL or other stuff.
The following query would give you all the days in the current month. In your case, replace `SYSDATE` with your date column and join with this query to count the bookings for each day of the month. ``` SELECT DT FROM( SELECT TRUNC (last_day(SYSDATE) - ROWNUM) dt FROM DUAL CONNECT BY ROWNUM < 32 ) where DT >= trunc(sysdate,'mm') ```
The answer is to create a table like this: table yearsmonthsdays (year varchar(4), month varchar(2), day varchar(2)); use any language you wish, e.g. iterate in java with Calendar.getInstance().getActualMaximum(Calendar.DAY\_OF\_MONTH) to get the last day of the month for as many years and months as you like, and fill that table with the year, month and days from 1 to last day of month of your result. you'd get something like: ``` insert into yearsmonthsdays ('1995','02','01'); insert into yearsmonthsdays ('1995','02','02'); ... insert into yearsmonthsdays ('1995','02','28'); /* non-leap year */ ... insert into yearsmonthsdays ('1996','02','01'); insert into yearsmonthsdays ('1996','02','02'); ... insert into yearsmonthsdays ('1996','02','28'); insert into yearsmonthsdays ('1996','02','29'); /* leap year */ ... ``` and so on. Once you have this table done, your work is almost finished. Make an outer left join between your table and this table, joining year, month and day together, and when no lines appear, the count will be zero as you wish. Without using programming, this is your best bet.
SQL query for all the days of a month
[ "", "sql", "oracle", "" ]
Say if I have a database structure a bit like this: ``` CREATE TABLE person ( idPerson INT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50) ); CREATE TABLE detailTypes ( type INT NOT NULL AUTO_INCREMENT PRIMARY KEY, field VARCHAR(50) ); CREATE TABLE details ( idPerson INT NOT NULL, idDetailType INT NOT NULL, value VARCHAR(50) ); ``` With this data: ``` |----------|---------| | idPerson | Name | |----------|---------| | 1 | Tom | | 2 | Dick | | 3 | Harry | |----------|---------| |----------|----------| | type | field | |----------|----------| | 1 | Diet | | 2 | Hat size | |----------|----------| |----------|------|------------| | idPerson | type | value | |----------|------|------------| | 1 | 1 | Vegetarian | | 1 | 2 | Medium | | 2 | 2 | Small | |----------|------|------------| ``` And I want a result which looks like this: ``` |----------|----------|------------| | Name | field | value | |----------|----------|------------| | Tom | Diet | Vegetarian | | Tom | Hat Size | Medium | | Dick | Diet | NULL | | Dick | Hat Size | Small | | Harry | Diet | NULL | | Harry | Hat Size | NULL | |----------|----------|------------| ``` I would think, correct me if I'm wrong, that the query that ought to achieve this should look like: ``` SELECT name, field, value FROM person p, detailTypes t LEFT JOIN details d ON p.idPerson = d.idPerson AND t.type = d.type ``` But MySQL (5.5.32) / PHPMyAdmin complains that "unknown column 'p.idPerson' in 'on clause'". You can turn it round to: ``` SELECT name, field, value FROM detailTypes t, person p LEFT JOIN details d ON p.idPerson = d.idPerson AND t.type = d.type ``` and it has the same complaint, except that t.type is now unknown. The current solution is a *[shudder]* nested query, like this: ``` SELECT name, field, value FROM ( SELECT type, field, name, idPerson FROM person p, detailTypes t ) pt LEFT JOIN details d ON pt.idPerson = d.idPerson AND pt.type = d.type ``` Is this a known problem with MySQL? Is it that my first queries are just plain wrong?
Can I do it a better way? Cos it doesn't seem right to me ....
There is no "type" field in any of your tables. Did you meant to use `idDetailType` ? Try the following: ``` SELECT * FROM detailTypes t JOIN person p LEFT JOIN details d ON p.idPerson = d.idPerson AND t.idDetailType = d.idDetailType ``` Then refine the results to avoid using that nasty `*` PD: field and value are reserved keywords. There might be some unexpected behaviors because of this.
I don't have the status to comment yet so I write here. Have you tried setting your 'p' and 't' values as primary keys for the tables?
Selecting from mulitple tables, then joining to both, in MySQL
[ "", "mysql", "sql", "phpmyadmin", "" ]
Here is my table, called sample: ``` id name grade sign 1 pinky A null 2 smita B sam ``` I want to add a new record into the sample table after checking name, grade and sign. If no record is found for that combination, insert it; otherwise don't. ``` declare @count int select @count = count(*) from sample where name = @name and grade= @grade and sign = @sign -- @name,@grade and @sign are input para if(@count = 0) begin -- insert end ``` The problem occurs when I try to add the first record, pinky, 'A' and null. It gives a count of zero and the record gets added again. How do I handle this? My table may or may not have null values in sign; it depends. I know a comparison of null with null is always false: `(null = null)`. Hello friends, thanks for your help. I tried the following code: ``` select @count = count(*) from sample where name = @name and grade= @grade and (sign = @sign OR (Sign IS NULL AND @Sign IS NULL)) ``` But it did not work. It gave me a count of zero instead of 1. I also tried like this; it worked when I tried to add the first record, pinky, A and null. It showed me a count of 1 and prompted me not to add it again. ``` select @count = count(*) from sample where name = @name and grade= @grade and (@Sign IS NULL or sign = @sign) ``` But it fails when I try to add a new row with the values 'smita', 'B', null. It gives me a count of one, but that is wrong; no record exists for that combination. I checked your query once again, @Mukund: ``` select @count = count(*) from sample where name = @name and grade= @grade and (sign = @sign OR (Sign IS NULL AND @Sign IS NULL)) ``` It gave me a count of zero instead of 1 when I tried to add 'pinky', A and null.
use if exists: ``` if not exists( select * from sample where name = @name and grade= @grade and (sign = @sign or (sign is null and @sign is null)) ) begin insert into sample(name,grade,sign) values (@name,@grade,@sign) end; ``` have a look at the example `SQL Fiddle` 6 queries of insertion, with 2 of your records to show what happens in each case.
You could use a few extra checks on `null`: ``` sign = @sign or (sign is null and @sign is null) ```
How to add null value in where clause (comparison)
[ "", "sql", "asp.net", "" ]
This must be very trivial but I can't seem to find the solution. I work with two tables, both without any primary key. I want to add all the records of the first table to the second table only if they don't exist. Basically: ``` INSERT INTO Table2 SELECT Table1.* FROM Table WHERE "the record to be added doesn't already exists in Table2" ```
You could do something like this. You would need to check each relevant column - I have just put in 2 as an example. With a `Not Exists` clause you can check if a record already existed across multiple columns. With a `NOT IN` you would only be able to check if a record already existed against one column. ``` INSERT INTO Table2 SELECT t1.* FROM Table1 t1 WHERE NOT EXISTS ( SELECT 1 FROM table2 t2 WHERE t2.col1 = t1.col1 AND t2.col2 = t1.col2 ) ```
You could make use of the `EXISTS` function: ``` INSERT INTO Table2 SELECT Table1.* FROM Table1 WHERE NOT EXISTS(SELECT * FROM table2 WHERE <your expression to compare the two tables goes here>) ``` But I would advise you to consider using a unique index on your tables.
SQL: insert only new records
[ "", "mysql", "sql", "ms-access-2007", "" ]
I can specify schema with CREATE TABLE or DROP TABLE: ``` sqlQuery(odbcConnect("Hive"), "DROP TABLE schema.table;") ``` but not with ALTER TABLE: ``` sqlQuery(odbcConnect("Hive"), "ALTER TABLE schema.table RENAME TO schema.new_table;") ``` Error message: ...mismatched input 'RENAME' expecting KW\_EXCHANGE near 'table' in alter exchange partition This didn't work either: ``` sqlQuery(odbcConnect("Hive"), "USE schema; ALTER TABLE table RENAME TO new_table;") ``` Error message: ...missing EOF at ';' near 'schema' P.S. In the end, I worked around this problem using INSERT INTO TABLE. I'd still like to know the answer to the original question, though.
If possible, you might try upgrading to at least Hive 0.14.0. The issue with Hive not handling a schema in ALTER TABLE schema.table statements was resolved in Hive 0.14.0 See the following JIRA on this issue: <https://issues.apache.org/jira/browse/HIVE-9180> If you can upgrade, the original "ALTER TABLE schema.table" command will work.
I had the same problem while trying to add a new partition. But then I issued 2 statements: "**use schema\_name;**" and "**alter table table\_name (without schema\_name) add partition...**" and everything went OK. It seems that the ALTER statement doesn't support a schema name in Hive. Try to execute those 2 last statements separately: "**USE schema;**" and then "**ALTER TABLE table RENAME TO new\_table;**" I don't know how to write it in Java, but it's quite similar to what we have in Oracle: when you try to "**execute immediate 'alter ...smth; alter... smth\_else;'**" as one statement it will fail. But if you separate them like this "**begin execute immediate 'alter ..smth;'; execute immediate 'alter ..smth\_else'; end;**", doing it in one transaction, it will succeed.
RODBC: can't specify schema with ALTER TABLE command
[ "", "sql", "r", "database-schema", "rodbc", "" ]
I have a PERSON table: ![PERSON](https://i.stack.imgur.com/3afqM.jpg) To get the list of persons who have a difference between their ages less than 5 years, I try: ``` SELECT * FROM PERSON p1 CROSS JOIN PERSON p2 WHERE p1.psn_id <> p2.psn_id AND p1.psn_age - p2.psn_age <= 5; ``` So I got: ![RESULT](https://i.stack.imgur.com/cEpE0.jpg) As you can see, the second line is the same as the first. My question is how to get rid of the duplicated line?
``` SELECT * FROM PERSON p1 CROSS JOIN PERSON p2 WHERE p1.psn_id < p2.psn_id AND ABS(p1.psn_age - p2.psn_age) <= 5; ```
Try this: ``` select * from PERSON a inner join PERSON b on a.psn_id < b.psn_id and abs(a.psn_age - b.psn_age) <= 5; ``` It will put the older person on the left. If for whatever reason you'd prefer it the other way around, change `a.psn_age - b.psn_age` to `b.psn_age - a.psn_age`. Update, I have verified this by adding three new people of ages 87, 84 and 50, to ensure that the 87 and 84 match each other, and the 50 does not match anything.
How to get rid of duplicated lines in SQL Cross Join Query result?
[ "", "sql", "" ]
I need to display date as **"Oct 10, 2014 1:06 PM"** My SQL query is ``` SELECT STUFF(CONVERT(char(19), CURRENT_TIMESTAMP, 100), 18,0, ' ') ``` and it is displaying like **"Oct 10 2014 1:06 PM"**
You already have the syntax for stuff. Why don't you just add another stuff ? ``` SELECT STUFF(STUFF(CONVERT(char(19), CURRENT_TIMESTAMP, 100), 18,0, ' '), 7,0, ',') ```
``` select CONVERT(char(13),GETDATE(),107)+CONVERT(varchar(15),CAST(GETDATE() AS TIME),100) ``` Output: ``` Oct 10, 2014 1:24PM ```
Sql Server date format should be like " Oct 17, 2014 10:14 PM"
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I'm wondering why cost of this query ``` select * from address a left join name n on n.adress_id=a.id where a.street='01'; ``` is higher than ``` select * from address a left join name n on n.adress_id=a.id where a.street=N'01'; ``` where address table looks like this ``` ID NUMBER STREET VARCHAR2(255 CHAR) POSTAL_CODE VARCHAR2(255 CHAR) ``` and name table looks like this ``` ID NUMBER ADDRESS_ID NUMBER NAME VARCHAR2(255 CHAR) SURNAME VARCHAR2(255 CHAR) ``` These are costs returned by explain plan Explain plan for '01' ``` ----------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 3591 | 1595K| 87 (0)| 00:00:02 | | 1 | NESTED LOOPS OUTER | | 3591 | 1595K| 87 (0)| 00:00:02 | |* 2 | TABLE ACCESS FULL | ADDRESS | 3 | 207 | 3 (0)| 00:00:01 | | 3 | TABLE ACCESS BY INDEX ROWID| NAME | 1157 | 436K| 47 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | NAME_HSI | 1157 | | 1 (0)| 00:00:01 | ----------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter("A"."STREET"='01') 4 - access("N"."ADDRESS_ID"(+)="A"."ID") ``` Explain plan for N'01' ``` ----------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ----------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 347 | 154K| 50 (0)| 00:00:01 | | 1 | NESTED LOOPS OUTER | | 347 | 154K| 50 (0)| 00:00:01 | |* 2 | TABLE ACCESS FULL | ADDRESS | 1 | 69 | 3 (0)| 00:00:01 | | 3 | TABLE ACCESS BY INDEX ROWID| NAME | 1157 | 436K| 47 (0)| 00:00:01 | |* 4 | INDEX RANGE SCAN | NAME_HSI | 1157 | | 1 
(0)| 00:00:01 | ----------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter(SYS_OP_C2C("A"."STREET")=U'01') 4 - access("N"."ADDRESS_ID"(+)="A"."ID") ``` As you can see cost for N'01' query is lower than cost for '01'. Any idea why? N'01' needs additionally convert varchar to nvarchar so cost should be higher (SYS\_OP\_C2C()). The other question is why rows processed by N'01' query is lower than '01'? [EDIT] * Table `address` has 30 rows. * Table `name` has 19669 rows.
`SYS_OP_C2C` is an `internal function` which does an `implicit conversion` of `varchar2` to `national character set` using `TO_NCHAR` function. Thus, the filter completely changes as compared to the filter using normal comparison. I am not sure about the reason why the number of rows are *less*, but I can guarantee it could be *more* too. Cost estimation won't be affected. Let's try to see step-by-step in a test case. ``` SQL> CREATE TABLE t AS SELECT 'a'||LEVEL col FROM dual CONNECT BY LEVEL < 1000; Table created. SQL> SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE col = 'a10'; Explained. SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- Plan hash value: 1601196873 -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 5 | 3 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T | 1 | 5 | 3 (0)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- 1 - filter("COL"='a10') 13 rows selected. SQL> ``` So far so good. Since there is only one row with value as 'a10', optimizer estimated one row. Let's see with the national characterset conversion. ``` SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE col = N'a10'; Explained. 
SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- Plan hash value: 1601196873 -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 10 | 50 | 3 (0)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T | 10 | 50 | 3 (0)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- 1 - filter(SYS_OP_C2C("COL")=U'a10') 13 rows selected. SQL> ``` What happened here? We can see `filter(SYS_OP_C2C("COL")=U'a10')`, which means an internal function is applied and it converts the `varchar2` value to `nvarchar2`. The filter now found 10 rows. This will also suppress any index usage, since now a function is applied on the column. We can tune it by creating a `function-based index` to avoid `full table scan`. ``` SQL> create index nchar_indx on t(to_nchar(col)); Index created. SQL> SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE to_nchar(col) = N'a10'; Explained. 
SQL> SELECT * FROM TABLE(dbms_xplan.display); PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- Plan hash value: 1400144832 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 10 | 50 | 2 (0)| 00:00:01 | | 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 10 | 50 | 2 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | NCHAR_INDX | 4 | | 1 (0)| 00:00:01 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): PLAN_TABLE_OUTPUT ---------------------------------------------------------------------------------------------------- --------------------------------------------------- 2 - access(SYS_OP_C2C("COL")=U'a10') 14 rows selected. SQL> ``` However, will this make the execution plans similar? No. I think that with **two different charactersets**, the filter will not be applied in the same way, and that is where the difference lies. ***My research says:*** > Usually, such scenarios occur when the data coming via an application > is of nvarchar2 type, but the table column is varchar2. Thus, Oracle > applies an internal function in the filter operation. My suggestion > is to know your data well, so that you use matching data types during > the design phase.
When worrying about explain plans, it matters whether there are current statistics on the tables. If the statistics do not represent the actual data reasonably well, then the optimizer will make mistakes and estimate cardinalities incorrectly. You can check how long ago statistics were gathered by querying the data dictionary: ``` select table_name, last_analyzed from user_tables where table_name in ('ADDRESS','NAME'); ``` You can gather statistics for the optimizer to use by calling `DBMS_STATS`: ``` begin dbms_stats.gather_table_stats(user, 'ADDRESS'); dbms_stats.gather_table_stats(user, 'NAME'); end; ``` So perhaps after gathering statistics you will get different explain plans. Perhaps not. The difference in your explain plans is primarily because the optimizer estimates how many rows it will find in the address table differently in the two cases. In the first case you have an equality predicate with the same datatype - this is good and the optimizer can often estimate cardinality (row count) reasonably well for cases like this. In the second case a function is applied to the column - this is often bad (unless you have function-based indexes) and will force the optimizer to take a wild guess. That wild guess will be different in different versions of Oracle as the developers of the optimizer try to improve upon it. In some versions the wild guess will simply be something like "I guess 5% of the number of rows in the table." When comparing different datatypes, it is best to avoid implicit conversions, particularly when, as in this case, the implicit conversion puts a function on the *column* rather than the literal. If you have cases where you get a value as datatype NVARCHAR2 and need to use it in a predicate like above, it can be a good idea to explicitly convert the value to the datatype of the column.
``` select * from address a left join name n on n.adress_id=a.id where a.street = CAST( N'01' AS VARCHAR2(255)); ``` In this case with a literal it does not make sense, of course. Here you would just use your first query. But if it was a variable or function parameter, maybe you could have use cases for doing something like this.
Oracle SQL execution plan changes due to SYS_OP_C2C internal conversion
[ "", "sql", "oracle", "oracle11g", "" ]
Hi, I have a table called Tags. In the Tags table I have 2 columns (QuestionID int, Tag nvarchar(100)). I want to select questions with all their tags in one column, like the below: ``` QuestionID Tag ---------- ---- 1 Math 1 Integral 2 Physics QuestionID QuestionText ---------- ----------- 1 What is 2*2? 2 What is Quantom roles? QuestionID QuestionText Tags ---------- ----------- ------- 1 What is 2*2? Math, Integral 2 What is Quantom roles? Physics ``` Can anyone help me without using a `scalar value function`?
There are two ways to answer this: 1. You can use a query like in the other answer, but this works for one table only. 2. Create a `clr aggregate function` for this, like the code below (my code is in C#). This solution works for all tables and is simple to use; just use `select Concat(column) from Table` in SQL Server. ``` using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Text; [Serializable] [Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.UserDefined, IsInvariantToDuplicates = false, IsInvariantToNulls = true, IsInvariantToOrder = false, IsNullIfEmpty = true, MaxByteSize = -1)] public struct Concat : IBinarySerialize { public void Init() { SB = new StringBuilder(); } public void Accumulate(SqlString Value) { if (Value.IsNull) return; if (SB.Length > 0) SB.Append("\n"); SB.Append(Value); } public void Merge(Concat Group) { if (SB.Length > 0 && Group.SB.Length > 0) SB.Append("\n"); SB.Append(Group.SB.ToString()); } public SqlString Terminate() { return new SqlString(SB.ToString()); } // This is a place-holder member field StringBuilder SB; public void Read(System.IO.BinaryReader r) { SB = new StringBuilder(r.ReadString()); } public void Write(System.IO.BinaryWriter w) { w.Write(SB.ToString()); } } ```
Try this ``` create function fn_comma (@question_id int) returns varchar(100) as begin declare @value varchar(100) set @value=(SELECT top 1 STUFF((SELECT ', ' + CAST(Value AS VARCHAR(10)) [text()] FROM Tags WHERE ID = t.ID FOR XML PATH(''), TYPE) .value('.','NVARCHAR(MAX)'),1,2,' ') List_Output FROM Tags --where id=1 GROUP BY ID) return @value end ```
Sql Server - Is there any way to `Concat` nvarchar column in Select Such as `Agregate functions`
[ "", "sql", "sql-server", "" ]
I have a use case where I need to use ROW\_NUMBER() over PARTITION: Something like: ``` SELECT Column1, Column2, ROW_NUMBER() OVER ( PARTITION BY ACCOUNT_NUM ORDER BY FREQ, MAN, MODEL) as LEVEL FROM TEST_TABLE ``` I need a workaround for this in Impala. Unfortunately Impala does not support subqueries and also does not support the ROW\_NUMBER() OVER functionality. Thank you for your help.
Impala is rather limited for this type of query. With some assumptions, this query is possible: * The four columns in the partitioning clause are never `NULL` * The four columns in the partitioning clause uniquely identify a row The query is rather ugly and expensive: ``` select tt.column1, tt.column2, count(*) as level from test_table tt join test_table tt2 on tt.account_num = tt2.account_num and (tt2.freq < tt.freq or tt2.freq = tt.freq and tt2.man < tt.man or tt2.freq = tt.freq and tt2.man = tt.man and tt2.model <= tt.model ) group by tt.column1, tt.column2, tt.account_num, tt.freq, tt.man, tt.model; ```
ROW\_NUMBER() OVER PARTITION was added in CDH 5.2: <https://www.cloudera.com/documentation/enterprise/latest/topics/impala_analytic_functions.html#row_number> ``` ROW_NUMBER() OVER([partition_by_clause] order_by_clause) ```
ROW_NUMBER( ) OVER in impala
[ "", "sql", "window-functions", "impala", "" ]
How do you delete data from influxdb? The documentation shows it should be as simple as: ``` delete from foo where time < now() -1h ``` For some reason, influxdb rejects my delete statements saying "Delete queries can't have where clause that doesn't reference time" ``` select * from bootstrap where duration > 1000 and time > 14041409940s and time < now() ``` I want to delete these 5 entries whose duration > 1000 seconds. This should be a valid SQL statement, yet it fails: ![enter image description here](https://i.stack.imgur.com/BASs1.png) None of these delete statements work either: ``` delete from bootstrap where duration > 3000000" delete from bootstrap where duration > 300000" delete from bootstrap where time = 1404140994043" delete from bootstrap where duration > 300000 and time > 1404141054508 " delete from bootstrap where duration > 300000 and time > 1404141054508s " delete from bootstrap where time > 1404141054508s and duration > 300000 " delete from bootstrap where duration > 30000 and time > 1s" ``` Documentation reference [https://docs.influxdata.com/influxdb/v2.0/query-data/influxql/](http://influxdb.com/docs/v0.8/api/query_language.html) **Update** Additional queries: ``` delete from bootstrap where time > 1404141416824 and duration > 3000; delete sequence_number from bootstrap where time > 1s and duration > 1000; ``` Maybe this is a bug? <https://github.com/influxdb/influxdb/issues/975> <https://github.com/influxdb/influxdb/issues/84>
It appears that you can do this in influxdb 0.9. For instance, here's a query that just succeeded for me: ``` DROP SERIES FROM temperature WHERE machine='zagbar' ``` (Per generous comment by @MuratCorlu, I'm reposting my earlier comment as an answer...)
With influx, you can only delete by time For example, the following are invalid: ``` #Wrong DELETE FROM foo WHERE time < '2014-06-30' and duration > 1000 #Can't delete if where clause has non time entity ``` This is how I was able to delete the data ``` DELETE FROM foo WHERE time > '2014-06-30' and time < '2014-06-30 15:16:01' ``` Update: this worked on influx 8. Supposedly it doesn't work on influx 9
Can you delete data from influxdb?
[ "", "sql", "influxdb", "" ]
I have 2 fields I want to send values to within a `WHERE` statement. * If a variable = 0 then set 2 field values to 100. * If the variable = 1 then set those same 2 field values to 101. In my imaginary world, somehow this would work: ``` Where CASE WHEN @ReportType = 0 THEN od.StatusCd = 100 AND odm.StatusCd = 100 WHEN @ReportType = 1 THEN od.statusCd = 101 AND odm.StatusCd = 101 End And od.CompletionDate between .... And so on.... ``` I know this is wrong. But this is where I am at right now.
If I have understood what you are trying to do, this should work : ``` Where ( (@ReportType = 0 AND od.StatusCd = 100 AND odm.StatusCd = 100) OR (@ReportType = 1 AND od.statusCd = 101 AND odm.StatusCd = 101) ) And od.CompletionDate between .... And so on.... ```
Alternatively you could rewrite your CASE conditions in the form of a join, like below: ``` ... INNER JOIN ( VALUES (0, 100, 100), (1, 101, 101) ) AS v (ReportType, odStatusCd, odmStatusCd) ON @ReportType = v.ReportType AND od.statusCd = v.odStatusCd AND odm.StatusCd = v.odmStatusCd WHERE od.CompletionDate between ... AND ... ``` Although somewhat less readable, this would avoid using `OR` and thus might result in a better (more efficient) execution plan. (You would need to test that.)
Where clause depending of the value of a variable
[ "", "sql", "sql-server", "t-sql", "case", "where-clause", "" ]
I want to compare the data between two tables (data1 & data2) with this query: ``` SELECT COUNT(email) FROM data2 WHERE data2.email NOT IN (SELECT email FROM data1) ``` But each table is very big: data1 has 25 million tuples and data2 has 38 million. So this query is fine for small tables but takes far too long for big ones like mine... Does anyone know a faster way to compare data between two tables? Thanks for the help.
Be aware that you count emails repeatedly. If an email is in 100 records in data2 and not in data1, then you count 100, not 1. To get the query fast, you need an index on both data1.email and data2.email. Then the dbms doesn't actually even have to access the tables; it can work on the indexes only. To count each email just once you would `COUNT(distinct email)`. But the IN clause is usually slow, as others have mentioned already. You can simply do it thus (provided you *want* to avoid repeated counting): ``` select email from data2 except select email from data1; ```
``` SELECT COUNT(email) FROM data2 WHERE not exists (select 1 from data1 where data2.email = data1.email) ```
Compare data from two tables
[ "", "sql", "linux", "postgresql", "debian", "" ]
I am trying to run a query like this: ``` SELECT sales.action_date, sales.item_id, items.item_name, sales.item_quantity, sales.item_price, sales.net FROM sales INNER JOIN items ON sales.item_id = items.ID GROUP BY sales.item_id HAVING (((sales.action_date)=[Forms]![rep_frm]![Text13].[value])); ``` Every time I try to show data, this message appears: > your query does not include the specified expression ' action date ' > as part of aggregate function. It says this for every field in the query, but I just want the aggregation to be on item\_id. What should I do?
You don't have any aggregations like SUM in your SELECT statement. I also don't understand why sales.action_date is in the HAVING clause. HAVING is for aggregated filtering, like SUM(sales.item_price) <> 0. This condition can go in the WHERE clause, before the GROUP BY, instead of the HAVING clause. This example should work: ``` SELECT sales.item_id, items.item_name, SUM(sales.item_quantity), SUM(sales.item_price), SUM(sales.net) FROM sales INNER JOIN items ON sales.item_id = items.ID WHERE sales.action_date=[Forms]![rep_frm]![Text13].[value] GROUP BY sales.item_id, items.item_name; ```
When you are grouping your data, all fields in the select query should either be included in the `group by` clause or have an aggregate function applied to them - otherwise it doesn't make sense. By the way - as far as I can see, you should use `WHERE(((sales.action_date)=[Forms]![rep_frm]![Text13].[value]))` before the group, not `HAVING` after.
MS access query aggregation
[ "", "sql", "vba", "ms-access", "ms-access-2013", "" ]
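The corrected pattern — filter in `WHERE`, then group by every non-aggregated column — can be sketched in SQLite via Python's `sqlite3`; the rows are invented for the demo, and a bound parameter stands in for the Access form reference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (ID INTEGER, item_name TEXT);
    CREATE TABLE sales (action_date TEXT, item_id INTEGER,
                        item_quantity INTEGER, item_price REAL, net REAL);
    INSERT INTO items VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO sales VALUES
        ('2014-10-01', 1, 2, 10.0, 20.0),
        ('2014-10-01', 1, 1, 10.0, 10.0),
        ('2014-10-01', 2, 5,  3.0, 15.0),
        ('2014-10-02', 1, 9, 10.0, 90.0);   -- excluded by the WHERE filter
""")

totals = conn.execute("""
    SELECT sales.item_id, items.item_name,
           SUM(sales.item_quantity), SUM(sales.item_price), SUM(sales.net)
    FROM sales
    JOIN items ON sales.item_id = items.ID
    WHERE sales.action_date = ?      -- plain row filter: WHERE, not HAVING
    GROUP BY sales.item_id, items.item_name
    ORDER BY sales.item_id
""", ('2014-10-01',)).fetchall()
print(totals)
```

Every selected column is either inside an aggregate or listed in `GROUP BY`, which is exactly what the Access error message demands.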
I have the following table (simplified version) in SQLServer. ``` Table Events ----------------------------------------------------------- | Room | User | Entered | Exited | ----------------------------------------------------------- | A | Jim | 2014-10-10T09:00:00 | 2014-10-10T09:10:00 | | B | Jim | 2014-10-10T09:11:00 | 2014-10-10T09:22:30 | | A | Jill | 2014-10-10T09:00:00 | NULL | | C | Jack | 2014-10-10T09:45:00 | 2014-10-10T10:00:00 | | A | Jack | 2014-10-10T10:01:00 | NULL | . . . ``` I need to create a query that returns a person's whereabouts at given timestamps. For example: Where was (Jim at 2014-10-09T09:05:00), (Jim at 2014-10-10T09:01:00), (Jill at 2014-10-10T09:10:00), ... The result set must contain the given User and Timestamp as well as the found room (if any). ``` ------------------------------------------ | User | Timestamp | WasInRoom | ------------------------------------------ | Jim | 2014-10-09T09:05:00 | NULL | | Jim | 2014-10-10T09:01:00 | A | | Jill | 2014-10-10T09:10:00 | A | ``` The number of User-Timestamp tuples can be > 10 000. The current implementation retrieves all records from the Events table and does the search in Java code. I am hoping that I could push this logic to SQL. But how? I am using the MyBatis framework to create SQL queries, so the tuples can be inlined into the query.
The basic query is: ``` select e.* from events e where e.user = 'Jim' and '2014-10-09T09:05:00' >= e.entered and ('2014-10-09T09:05:00' <= e.exited or e.exited is NULL) or e.user = 'Jill' and '2014-10-10T09:10:00' >= e.entered and ('2014-10-10T09:10:00' <= e.exited or e.exited is NULL) or . . .; ``` SQL Server can handle ridiculously large queries, so you can continue in this vein. However, if you have the name/time values in a table already (or it is the result of a query), then use a `join`: ``` select ut.*, e.* from usertimes ut left join events e on e.user = ut.user and ut.thetime >= e.entered and (ut.thetime <= e.exited or e.exited is null); ``` Note the use of a `left join` here. It ensures that all the original rows are in the result set, even when there are no matches.
Answers from Jonas and Gordon got me on track, I think. Here is a query that seems to do the job: ``` CREATE TABLE #SEARCH_PARAMETERS(User VARCHAR(16), "Timestamp" DATETIME) INSERT INTO #SEARCH_PARAMETERS(User, "Timestamp") VALUES ('Jim', '2014-10-09T09:05:00'), ('Jim', '2014-10-10T09:01:00'), ('Jill', '2014-10-10T09:10:00') SELECT #SEARCH_PARAMETERS.*, Events.Room FROM #SEARCH_PARAMETERS LEFT JOIN Events ON #SEARCH_PARAMETERS.User = Events.User AND #SEARCH_PARAMETERS."Timestamp" > Events.Entered AND (Events.Exited IS NULL OR Events.Exited > #SEARCH_PARAMETERS."Timestamp") DROP TABLE #SEARCH_PARAMETERS ```
SQL Query: Search with list of tuples
[ "", "sql", "sql-server", "" ]
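The lookup-table-plus-range-join idea from both answers can be sketched in SQLite using the question's sample rows; ISO-8601 timestamps stored as text compare correctly as plain strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (room TEXT, user TEXT, entered TEXT, exited TEXT);
    INSERT INTO events VALUES
        ('A', 'Jim',  '2014-10-10T09:00:00', '2014-10-10T09:10:00'),
        ('B', 'Jim',  '2014-10-10T09:11:00', '2014-10-10T09:22:30'),
        ('A', 'Jill', '2014-10-10T09:00:00', NULL);
    CREATE TABLE lookups (user TEXT, ts TEXT);
    INSERT INTO lookups VALUES
        ('Jim',  '2014-10-09T09:05:00'),   -- day before any event -> no room
        ('Jim',  '2014-10-10T09:01:00'),
        ('Jill', '2014-10-10T09:10:00');   -- open-ended stay in room A
""")

whereabouts = conn.execute("""
    SELECT l.user, l.ts, e.room
    FROM lookups l
    LEFT JOIN events e
           ON e.user = l.user
          AND l.ts >= e.entered
          AND (e.exited IS NULL OR l.ts <= e.exited)
""").fetchall()
print(whereabouts)
```

The `LEFT JOIN` keeps every lookup row, returning `None` for the room when the person was nowhere on record.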
I have a table in Hive with 5 columns i.e. email, a\_first\_date, b\_first\_date, c\_first\_date, d\_first\_date. a, b, c, d are 4 different actions which a user can take, and the 4 columns in the table above indicate the date on which the user did the first respective action. For example, the value in **'a\_first\_date'** has the date on which the user did action a. Output: What I want is 2 columns having email, overall\_first\_date i.e. on what date did the user take their first action? Example Table: (Assume all values are BIGINT type other than email) email,a\_first\_date,b\_first\_date,c\_first\_date,d\_first\_date abc,20140707,20140702,20140801,20140907 xyz,20140107,20140822,20140201,20141007 Output: email,overall\_first\_date abc,20140702 xyz,20140107 A couple of possible solutions are to write a UDF, or to compare these values with each other using IF ELSE and then find the minimum, but that would involve a lot of comparisons. Alternatively I can do a: ``` select email, min(action) as overall_first_date from ( select email, a_first_date as action from mytable UNION ALL select email, b_first_date as action from mytable UNION ALL select email, c_first_date as action from mytable UNION ALL select email, d_first_date as action from mytable ) q1 GROUP BY email ``` but again this is not a good way. Could anyone please suggest a better way to achieve this?
Why not use a case statement? This seems to have been thoroughly discussed in SQL here: <https://dba.stackexchange.com/questions/21542/what-is-the-most-efficient-way-to-get-the-minimum-of-multiple-columns-on-sql-ser>
Use the function `least()`. For example: `SELECT *, least(col1, col2, col3) AS minofcol FROM tablename;`
Hive (Finding min of n columns in a row)
[ "", "sql", "hadoop", "hive", "" ]
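Hive's `least()` has a direct SQLite counterpart — the multi-argument scalar `min()` — so the idea can be tested on the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE first_dates (email TEXT, a INTEGER, b INTEGER,
                              c INTEGER, d INTEGER);
    INSERT INTO first_dates VALUES
        ('abc', 20140707, 20140702, 20140801, 20140907),
        ('xyz', 20140107, 20140822, 20140201, 20141007);
""")

# min() with several arguments is SQLite's row-wise minimum, like least()
firsts = conn.execute("""
    SELECT email, min(a, b, c, d) AS overall_first_date
    FROM first_dates
    ORDER BY email
""").fetchall()
print(firsts)
```

One scan, one expression per row — no `UNION ALL` blow-up and no chain of `IF` comparisons.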
Say I have a call ``` SELECT * FROM tablename WHERE id IN (5,3,8,9); ``` That returns the results in the id order of 3,5,8,9. Is there any way to get it to return the results in the given 5,3,8,9 order?
``` SELECT * FROM tablename WHERE id IN (5,3,8,9) ORDER BY CASE id WHEN 5 THEN 1 WHEN 3 THEN 2 WHEN 8 THEN 3 WHEN 9 THEN 4 ELSE 5 END ``` **[Fiddle](http://sqlfiddle.com/#!15/b27aa/1)**
Another option, which is a bit more flexible because you only need to list the ids once: ``` with id_list (id, sort_order) as ( values (5,1), (3,2), (8,3), (9,4) ) select t.* from tablename t join id_list l on l.id = t.id order by l.sort_order; ``` Maintaining the sort\_order is however somewhat ugly if you need to insert new ids. With the upcoming 9.4 this will be even easier: ``` with id_list (id, sort_order) as ( select * from unnest(array[5,3,8,9]) with ordinality ) select * from tablename t join id_list l on l.id = t.id order by l.sort_order; ``` This *can* be done pre-9.4 as well, but it relies on the order of the unnest function, which is **not** *guaranteed* to be consistent (although in reality it always seems to be the same): ``` with id_list (id, sort_order) as ( select *, row_number() over () from unnest(array[5,3,8,9]) ) select * from tablename t join id_list l on l.id = t.id order by l.sort_order; ```
Is there a way to return SELECT results sorted by order of "WHERE IN" values?
[ "", "sql", "postgresql", "" ]
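The `ORDER BY CASE` trick from the accepted answer can be verified in SQLite (via Python's `sqlite3`) with a small throwaway table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tablename (id INTEGER PRIMARY KEY, val TEXT);
    INSERT INTO tablename (id) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9);
""")

rows = conn.execute("""
    SELECT id FROM tablename
    WHERE id IN (5, 3, 8, 9)
    ORDER BY CASE id WHEN 5 THEN 1 WHEN 3 THEN 2
                     WHEN 8 THEN 3 WHEN 9 THEN 4 ELSE 5 END
""").fetchall()
print([r[0] for r in rows])  # ids come back in the listed order: 5, 3, 8, 9
```

The `CASE` maps each id to its position in the original `IN` list, and the sort follows that mapping instead of the natural id order.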
I have a table in MS SQL Server which stores hours in a `varchar` column, for example `'4:30'` (hr:min). Basically it is a time span for an amount of work. Now I need the total sum of this column. How do I convert it? My table looks like this ``` Create table tblWrkingHors ( Id int primary key, name varchar(50), WorkingHr varchar(8) ) ``` My table data is like this ``` ============================================================== Id name WorkingHr 1 sam 08:20 2 sam 04:50 3 sam 04:34 4 Pam 05:50 5 sam 04:30 6 Pam 06:40 7 Pam 04:50 8 Todd 06:10 9 Todd 05:50 10 Todd 08:50 ```
A VARCHAR may not be the most efficient way of storing time, but with a bit of casting you can do what you're looking for; ``` WITH cte AS ( SELECT name, SUM(DATEDIFF(MINUTE, '00:00', CAST(WorkingHr AS TIME))) t FROM mytable GROUP BY name ) SELECT name, RTRIM(t/60) + ':' + RIGHT('0' + RTRIM(t%60), 2) FROM cte ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/82ba8/1). What it basically does is cast the string to a TIME, then use DATEDIFF to convert the time to minutes. It then sums up the minutes per name. The outer query just converts minutes to an HH:MM string using simple string operations. If you're not looking to group by name, the query can be simplified to; ``` WITH cte AS ( SELECT SUM(DATEDIFF(MINUTE, '00:00', CAST(WorkingHr AS TIME))) t FROM mytable ) SELECT RTRIM(t/60) + ':' + RIGHT('0' + RTRIM(t%60), 2) FROM cte; ``` ...which is pretty much the same query without grouping.
[SQL Fiddle](http://sqlfiddle.com/#!6/2ec56/2) **MS SQL Server 2012 Schema Setup**: ``` CREATE TABLE Test_Table(Id INT, name VARCHAR(10), WorkingHr VARCHAR(8)) INSERT INTO Test_Table VALUES (1 ,'sam' ,'08:20'), (2 ,'sam' ,'04:50'), (3 ,'sam' ,'04:34'), (4 ,'Pam' ,'05:50'), (5 ,'sam' ,'04:30'), (6 ,'Pam' ,'06:40'), (7 ,'Pam' ,'04:50'), (8 ,'Todd' ,'06:10'), (9 ,'Todd' ,'05:50'), (10 ,'Todd' ,'08:50') ``` **Query 1**: ``` SELECT name ,CAST(DATEADD(MILLISECOND, SUM(DATEDIFF(MILLISECOND, '00:00:00.000' , CAST(WorkingHr AS TIME))), '00:00:00.000') AS TIME) AS Total_Time FROM Test_Table GROUP BY name ``` **[Results](http://sqlfiddle.com/#!6/2ec56/2/0)**: ``` | NAME | TOTAL_TIME | |------|------------------| | Pam | 17:20:00.0000000 | | sam | 22:14:00.0000000 | | Todd | 20:50:00.0000000 | ```
How to sum time spans stored in a varchar column?
[ "", "sql", "sql-server", "" ]
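SQLite has no `DATEDIFF`, but the same string-to-minutes idea can be sketched there with `substr`/`instr`, using the question's data; the final HH:MM formatting is done in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblWrkingHors (Id INTEGER, name TEXT, WorkingHr TEXT);
    INSERT INTO tblWrkingHors VALUES
        (1,'sam','08:20'), (2,'sam','04:50'), (3,'sam','04:34'),
        (4,'Pam','05:50'), (5,'sam','04:30'), (6,'Pam','06:40'),
        (7,'Pam','04:50'), (8,'Todd','06:10'), (9,'Todd','05:50'),
        (10,'Todd','08:50');
""")

# Convert each 'HH:MM' string to total minutes inside SQL, then sum per name.
per_name = {
    name: f"{mins // 60}:{mins % 60:02d}"
    for name, mins in conn.execute("""
        SELECT name,
               SUM(CAST(substr(WorkingHr, 1, instr(WorkingHr, ':') - 1) AS INTEGER) * 60
                 + CAST(substr(WorkingHr, instr(WorkingHr, ':') + 1)    AS INTEGER))
        FROM tblWrkingHors
        GROUP BY name
    """)
}
print(per_name["sam"])  # 22:14, matching the per-name totals in the answers
```

Summing integers avoids the midnight-wraparound risk of adding `TIME` values directly.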
I have the following SQL Tables: companies: ``` ╔═════╦══════════╦═════════╦═══════════╦══════════════════╦═════╗ ║ # ║ Name ║ Country ║ City ║ Address ║ Nr. ║ ╠═════╬══════════╬═════════╬═══════════╬══════════════════╬═════╣ ║ 1 ║ T-Online ║ Germany ║ Frankfurt ║ Teststr. ║ 15 ║ ║ 2 ║ Telecom ║ Italy ║ Rome ║ Via test ║ 20 ║ ║ 3 ║ Verizon ║ USA ║ New York ║ Something street ║ 53 ║ ║ ... ║ .... ║ ║ ║ ║ ║ ╚═════╩══════════╩═════════╩═══════════╩══════════════════╩═════╝ ``` tagsForCompany: ``` ╔═════════╦═════╗ ║ Company ║ TID ║ ╠═════════╬═════╣ ║ 1 ║ 10 ║ ║ 2 ║ 15 ║ ║ 1 ║ 11 ║ ║ 3 ║ 19 ║ ║ 2 ║ 11 ║ ╚═════════╩═════╝ ``` tags: ``` ╔════╦══════════════════════╗ ║ ID ║ Name ║ ╠════╬══════════════════════╣ ║ 1 ║ Software Development ║ ║ 2 ║ Hosting ║ ║ 3 ║ Restaurant ║ ║... ║ .... ║ ╚════╩══════════════════════╝ ``` (all the values are examples and are not what's on the real tables!) I need to search for a company in a given city and country that has a specific tag. For example, I search for all the companies in New York, USA that have the tag Software Development. I need to be able to do this with only one SQL Query. What I'm using right now: * I search for all the companies in the given city and country * Then search for the id of the given tag * After that I search all the companies that have that Tag ID * And at the end I filter the table companies to output all the companies that match those rules. Obviously this method isn't good for production use; the performance impact is too big. EDIT: Thank you for all the answers, I will try each one of them and the one that works best will be the approved one :)
You can use a JOIN, and add indexes on country and city in the companies table ``` SELECT Name FROM companies AS c INNER JOIN tagsForCompany AS tc ON c.id = tc.Company INNER JOIN tags AS t ON t.id = tc.TID WHERE city = "your_city" AND country = "your_country" AND t.Name REGEXP 'your_tag' ``` In this query a joined table is generated first using INNER JOIN and then filtered. A better, more optimized solution could be to generate a derived table with a subquery that filters by city and country first. Also add indexes on country and city in the companies table, as that will reduce the time a lot. The new query would be ``` SELECT c.Name FROM (SELECT id, Name FROM companies WHERE city = "your_city" AND country = "your_country" ) AS c INNER JOIN tagsForCompany AS tc ON c.id = tc.Company INNER JOIN tags AS t ON t.id = tc.TID WHERE t.Name REGEXP 'your_tag' ``` The syntax to add an index is `ALTER TABLE tbl_name ADD INDEX index_name (column_list)`; it adds an ordinary index in which any value may appear more than once. ``` ALTER TABLE companies ADD INDEX city (city); ALTER TABLE companies ADD INDEX country (country); ```
try this ``` SELECT * FROM companies c, tags t, tagsForCompany tfc WHERE tfc.company = c.companyId AND t.ID = tfc.TID AND t.Name = 'Software Development' AND c.city = 'New York' AND c.Country = 'USA' ```
I cannot think of a valid SQL Query for solving this
[ "", "mysql", "sql", "performance", "" ]
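The join-and-filter approach from the answers can be sketched in SQLite; the rows below are invented (including a second New York company, `Initech`, added so the tag filter visibly excludes something):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER, name TEXT, country TEXT, city TEXT);
    CREATE TABLE tags (id INTEGER, name TEXT);
    CREATE TABLE tagsForCompany (company INTEGER, tid INTEGER);
    INSERT INTO companies VALUES
        (1, 'T-Online', 'Germany', 'Frankfurt'),
        (2, 'Telecom',  'Italy',   'Rome'),
        (3, 'Verizon',  'USA',     'New York'),
        (4, 'Initech',  'USA',     'New York');
    INSERT INTO tags VALUES (1, 'Software Development'), (2, 'Hosting');
    INSERT INTO tagsForCompany VALUES (3, 1), (4, 2), (1, 2);
""")

hits = conn.execute("""
    SELECT c.name
    FROM companies c
    JOIN tagsForCompany tc ON tc.company = c.id
    JOIN tags t            ON t.id = tc.tid
    WHERE c.city = ? AND c.country = ? AND t.name = ?
""", ('New York', 'USA', 'Software Development')).fetchall()
print(hits)  # only Verizon is in New York, USA *and* tagged Software Development
```

One round trip replaces the four-step lookup described in the question.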
I'd like to know an efficient way of taking event records with a start date and end date and basically replicating that record for each day between the start and end date. So a record with a Start Date of 2014-01-01 and End Date of 2014-01-03 would become 3 records, one for each day. I have a date table if that helps. I'm using SQL Server 2012. Thanks
As you already have a date table, you can JOIN your table with it to get one row per day between the start and end date (assuming your date table has one row per day in a column such as `calendarDate`): ``` SELECT A.data, DT.calendarDate FROM DateTable DT JOIN A ON DT.calendarDate BETWEEN A.StartDate AND A.EndDate ```
use this query ``` declare @startDate datetime = getdate() declare @endDate datetime = dateadd(day,10,getdate()) ;with days as ( select @startDate as StartDate, @endDate as EndDate, @startDate as CurrentDate, 0 as i union all select d.StartDate, d.EndDate, dateadd(day,d.i + 1,@startDate) as CurrentDate, d.i + 1 as i from days d where dateadd(day,d.i + 1,@startDate) < d.EndDate ) select * from days d ```
Converting 1 record with a start and end date into multiple records for each day
[ "", "sql", "sql-server", "t-sql", "" ]
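The recursive-CTE idea from the second answer carries over to SQLite, whose `date()` function handles the day increments; the sample record follows the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (id INTEGER, startDate TEXT, endDate TEXT);
    INSERT INTO events VALUES (1, '2014-01-01', '2014-01-03');
""")

rows = conn.execute("""
    WITH RECURSIVE days(id, d, endDate) AS (
        SELECT id, startDate, endDate FROM events   -- anchor: one row per event
        UNION ALL
        SELECT id, date(d, '+1 day'), endDate       -- step: advance one day
        FROM days WHERE d < endDate
    )
    SELECT id, d FROM days ORDER BY id, d
""").fetchall()
print(rows)  # [(1, '2014-01-01'), (1, '2014-01-02'), (1, '2014-01-03')]
```

Each source record expands into one row per day of its range, which is exactly the result the date-table join produces as well.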
I'm having problems joining 4 MySQL tables for my call management. My tables are: ``` calls: callId | contactId | companyId | numberId | timestamp | callNote | duration | state contacts: contactId | firstName | lastName | companyId | email | contactNote numbers: numberId | contactId | number companies: companyId | companyName ``` I need a query which gives me: ``` callId | timestamp | duration | number | callNote | state | contactId | firstName | lastName | company | email | contactNote ``` I think it's possible, but I don't know how.
try this ``` select o.callId, o.timestamp, o.duration, r.number, o.callNote, o.state, o.contactId, j.firstName, j.lastName, c.companyName as company, j.email, j.contactNote from calls o left outer join contacts j on o.contactId = j.contactId left outer join numbers r on j.contactId = r.contactId left outer join companies c on j.companyId = c.companyId; ```
You should use an `INNER JOIN` to join up the tables. For example: ``` SELECT c.callId, c.timestamp, c.duration, n.number, c.callNote, c.state, c.contactId, c1.firstName, c1.lastName, c2.CompanyName as company, c1.email, c1.contactNote FROM calls c INNER JOIN contacts c1 ON c1.contactId = c.contactId INNER JOIN numbers n ON n.contactId = c1.contactid INNER JOIN companies c2 ON c2.companyid = c.companyid ```
MySQL join 4 tables
[ "", "mysql", "sql", "database", "" ]
We have a working table that we build every night with over a million records. This procedure takes around 3 hours a night to complete. Within the `procedure` we insert all the data into the table first. Then we do a lot of updates to the table. For example: ``` Update a Set a.Field1 = b.Field1 From WorkingTable as a JOIN Table2 as b On a.ID = b.ID ``` At this point we do not have any Indexes or Keys assigned to `WorkingTable`. Would it run faster if we assigned an `Index` or `Keys` to the `WorkingTable`? Thanks
To answer this question, you need to first know how keys and indexes work under the hood in SQL server. A primary key, by default, is a clustered unique index. While this does slow down inserting records, the slowdown should be minimal. The real drop in performance usually comes from a `where` clause in a SQL query or DML statement that causes a table scan. If you update enough records after the initial creation, then adding a primary key or clustered unique index on the `id` columns will be a performance win. Really the decision to use a primary key or an index comes down to this question: > Who generates the "id"? The application loading the data or the database? If the application loading the data generates the "id" values, then adding a clustered index on that column should be enough. ``` CREATE CLUSTERED INDEX IDX_WorkTable_ID ON dbo.WorkTable (ID); ``` If the database is generating these values, just make the "id" column a primary key of type `int`: ``` ALTER TABLE [WorkTable] ADD ID INT IDENTITY(1,1); ``` Inserts, updates and deletes will still be pretty darn quick with a primary key. From [MSDN](http://msdn.microsoft.com/en-us/library/ms186342.aspx): > With few exceptions, every table should have a clustered index. Besides improving query performance, a clustered index can be rebuilt or reorganized on demand to control table fragmentation. A clustered index can also be created on a view. Related: [Clustered and Nonclustered Indexes Explained](http://msdn.microsoft.com/en-us/library/ms190457.aspx) Indexes can be a drag on performance if you need to update the values of columns that are indexed. Every update to those column values causes SQL server to rebuild that index. As with any performance enhancement, test it out. The proof is in the pudding. **Conclusion** 1. Write your SQL to avoid table scans. 2. Don't create indexes on columns that have their values updated, and for columns which you do not need in a `where` clause for another query or statement. 3. Avoid unnecessary joins. These are the basic performance guidelines of any SQL query.
It may run faster and it may not. That an index exists doesn't guarantee that it will be used. Let's say table2 in your example contains only two records. Then it would definitely make sense for the dbms to use an index on WorkingTable.id to find the two records quickly. Now let's say table2 contains 10000 times as many records as your working table. Then it might make more sense to simply go through your working table record by record and look up the index for Table2.id. No need for an index in your working table then. Having said this: there is no guarantee that an index will speed things up, but it might. And if it doesn't, no harm is done either. Just as Luc M says in the comments to your request: inserts and deletes are slower when there are indexes that must be maintained (but as I understand you, you are through with inserts at this point). And updates and selects *can* profit from indexes. So yes, use indexes (on WorkingTable.id for your example) and see if they help.
Should you put Indexes on a SQL Table that you are updating?
[ "", "sql", "sql-server", "" ]
I want to select from db ``` if id=0 select * from tbl else select * from tbl where id= :id ``` How can I use this in a MySQL query?
Try this: ``` select * from tbl where :id = 0 union all select * from tbl where :id <> 0 and id = :id ``` This is just a single query and it will execute only one branch as the specified `:id` value. When `:id=0`, the first query's where condition become `true` and the result is the same as `select * from tbl`. When `:id<>0`, the result of the first query will be empty, however, the second query will return the result of `select * from tbl where id=:id`.
The difference between both your queries is just the `where` clause - you can express this with the `or` logical operator: ``` SELECT * FROM tbl WHERE (:id = id) OR (:id = 0) ``` Of course, this could be further cleaned up with the `in` operator: ``` SELECT * FROM tbl WHERE :id IN (id, 0) ```
how can write sql query statement with if-else
[ "", "mysql", "sql", "" ]
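The `? IN (id, 0)` trick from the second answer can be verified in SQLite with a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (id INTEGER);
    INSERT INTO tbl VALUES (1), (2), (3);
""")

def fetch(param_id):
    # ? IN (id, 0): true for every row when param_id = 0, one row otherwise
    return conn.execute(
        "SELECT id FROM tbl WHERE ? IN (id, 0) ORDER BY id", (param_id,)
    ).fetchall()

print(fetch(0), fetch(2))  # all rows vs. just id 2
```

One statement covers both branches of the original if/else, at the cost of a condition the optimizer may not index well.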
I have some data that I need to sum for each month and I am having trouble figuring out how to get it to have 12 columns (one for each month) with the sum total for that month. Example: Data as is: ``` GrossAmt ClaimDate 49764.00 2014-08-21 00:00:00.000 1382.43 2014-08-27 00:00:00.000 602.77 2014-09-02 00:00:00.000 497.04 2014-09-02 00:00:00.000 ``` desired result: ``` GrossAmt ClaimDate 51146.43 August 1099.81 September ``` actual desired result: ``` July August September 0 51146.43 1099.81 ``` Sorry I did not include this before! I am sure that this is the most basic example of using row over partition; perhaps it is my lack of programming background, but this is one concept that I just cannot wrap my head around. This is what I have so far but I am not sure where to go next. ``` With CTE ([GrossAmt],ClaimMonth) AS( SELECT TOP 10000 Sum([GrossAmt]) as TotalClaimAmt ,CASE WHEN Month([ClaimDate]) = 1 THEN 'January' WHEN Month([ClaimDate]) = 2 THEN 'February' WHEN Month([ClaimDate]) = 3 THEN 'March' WHEN Month([ClaimDate]) = 4 THEN 'April' WHEN Month([ClaimDate]) = 5 THEN 'May' WHEN Month([ClaimDate]) = 6 THEN 'June' WHEN Month([ClaimDate]) = 7 THEN 'July' WHEN Month([ClaimDate]) = 8 THEN 'August' WHEN Month([ClaimDate]) = 9 THEN 'September' WHEN Month([ClaimDate]) = 10 THEN 'October' WHEN Month([ClaimDate]) = 11 THEN 'November' WHEN Month([ClaimDate]) = 12 THEN 'December' End AS ClaimMonth FROM [Database].[dbo].[Table] ) Group BY [ClaimDate] ) Select * From CTE ``` Perhaps what I need is a basic lesson on how row over partition works. I have tried reading articles I found online, but none of them make sense to me; they are way too advanced and I get lost right away. It also does not help that I do not learn very well by reading — I need to see it in action to understand how it works. Anyway, thanks for the help. You guys rock!
You need to use `PIVOT` to get the desired results. The CTE does the monthly aggregation and PIVOT is used to convert the CTE result rows into horizontal columns ``` ;With CTE ([GrossAmt],MonthVal) AS( SELECT TOP 10000 Sum([GrossAmt]) as GrossAmt, DATEADD(MONTH, DATEDIFF(MONTH,0,ClaimDate), 0) as MonthVal FROM [Table1] Group BY DATEADD(MONTH, DATEDIFF(MONTH,0,ClaimDate), 0) ) SELECT * FROM (Select [GrossAmt], DATENAME(MONTH,MonthVal) as ClaimMonth From CTE) t PIVOT ( MAX(GrossAmt) for ClaimMonth in ( [January], [February], [March], [April], [May], [June], [July], [August], [September], [October], [November], [December] ) )pvt ```
``` DECLARE @Cols NVARCHAR(MAX); SELECT @Cols = STUFF(( SELECT DISTINCT ', ' + QUOTENAME(DATENAME(MONTH,ClaimDate)) FROM TABLENAME FOR XML PATH(''), TYPE).value('.','NVARCHAR(MAX)'),1,2,'') DECLARE @Sql NVARCHAR(MAX); SET @Sql = 'SELECT ''GrossAmount'' AS AMOUNT_Sorted_By_DATE, ' + @Cols + ' FROM (SELECT [GrossAmt], DATENAME(MONTH,ClaimDate) as ClaimMonth FROM TABLENAME) AS SourceTable PIVOT ( SUM(GrossAmt) FOR ClaimMonth IN (' + @Cols + ') ) AS PivotTable' EXECUTE sp_executesql @Sql ``` This will produce columns only for the months that actually have data.
Row over partition for totaling data by month
[ "", "sql", "t-sql", "window-functions", "" ]
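SQLite has no `PIVOT`, but the same monthly columns can be produced with conditional aggregation (`SUM(CASE WHEN …)`) — an alternative technique to the T-SQL `PIVOT` above, shown here on the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (GrossAmt REAL, ClaimDate TEXT);
    INSERT INTO claims VALUES
        (49764.00, '2014-08-21'), (1382.43, '2014-08-27'),
        (602.77,   '2014-09-02'), (497.04,  '2014-09-02');
""")

row = conn.execute("""
    SELECT SUM(CASE WHEN strftime('%m', ClaimDate) = '07' THEN GrossAmt ELSE 0 END) AS July,
           SUM(CASE WHEN strftime('%m', ClaimDate) = '08' THEN GrossAmt ELSE 0 END) AS August,
           SUM(CASE WHEN strftime('%m', ClaimDate) = '09' THEN GrossAmt ELSE 0 END) AS September
    FROM claims
""").fetchone()
print(row)  # (0, 51146.43, 1099.81) up to float rounding
```

Each `CASE` routes a row's amount into exactly one month's sum, giving one column per month — the shape the question's "actual desired result" asks for.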
While doing an SSIS job I got an error in my CREATE TABLE query > Incorrect Syntax near '-' .(Microsoft SQL Server Native Client 10.0) SQL statement: ``` CREATE TABLE Staging_ACD_Precision_Queue ( PrecisionQueueID int, BucketIntervalID int, EnterpriseName varchar(32), AgentOrdering int, CallOrdering int, Description varchar(255), ServiceLevelThreshold int, ServiceLevelType smallint, ForceExpandingQueue varchar(1), Deleted varchar(1), ChangeStamp int, Partner varchar(4), Center varchar(4), Partner-Center varchar(9), LOB varchar(4), Circle varchar(4), TypeOfBusiness varchar(4) ) ``` I tried the above query. I guess from the error message that the issue is in `Partner-Center varchar(9)`, but I failed to fix this error. Since I'm new to SSIS and SQL queries I don't know how to fix the issue. ![enter image description here](https://i.stack.imgur.com/8TQkY.png)
You need to put square brackets around `Partner-Center`, e.g. `[Partner-Center]`. Embedded spaces or special characters are not allowed in field names, but you can escape them by putting the field name within square brackets. ``` CREATE TABLE Staging_ACD_Precision_Queue ( PrecisionQueueID int, BucketIntervalID int, EnterpriseName varchar(32), AgentOrdering int, CallOrdering int, Description varchar(255), ServiceLevelThreshold int, ServiceLevelType smallint, ForceExpandingQueue varchar(1), Deleted varchar(1), ChangeStamp int, Partner varchar(4), Center varchar(4), [Partner-Center] varchar(9), LOB varchar(4), Circle varchar(4), TypeOfBusiness varchar(4) ) ``` As others have stated here, I too would like to recommend you avoid using a hyphen in the field name and go with either `Partner_Center` or `PartnerCenter` as the field name instead of `Partner-Center`.
SQL Server does not allow `-` in bare field names. However, you can "escape" it by surrounding the field name with square brackets (`[]`): ``` CREATE TABLE Staging_ACD_Precision_Queue ( PrecisionQueueID int, BucketIntervalID int, EnterpriseName varchar(32), AgentOrdering int, CallOrdering int, Description varchar(255), ServiceLevelThreshold int, ServiceLevelType smallint, ForceExpandingQueue varchar(1), Deleted varchar(1), ChangeStamp int, Partner varchar(4), Center varchar(4), [Partner-Center] varchar(9), LOB varchar(4), Circle varchar(4), TypeOfBusiness varchar(4) ) ```
Incorrect Syntax near '-' .(Microsoft SQL Server Native Client 10.0)
[ "", "sql", "sql-server", "ssis", "syntax-error", "" ]
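SQLite happens to accept T-SQL-style square brackets around identifiers too (standard SQL would use double quotes), so the fix from both answers can be demonstrated directly; the row values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Square brackets (a T-SQL-ism SQLite also accepts for compatibility)
# escape the hyphen; standard SQL would write "Partner-Center" instead.
conn.execute(
    'CREATE TABLE staging (Partner TEXT, Center TEXT, [Partner-Center] TEXT)'
)
conn.execute("INSERT INTO staging VALUES ('ACME', 'East', 'ACME-East')")
value = conn.execute('SELECT [Partner-Center] FROM staging').fetchone()[0]
print(value)  # ACME-East
```

Without the brackets, the parser reads `Partner-Center` as the expression `Partner minus Center`, which is exactly why the error points at `-`.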
I am working with a WordPress website that is performing the following query. This query does many inner joins, and the website takes a long time to load and goes down a lot. I have been trying to create a query that produces the same result, but with no success so far, so I would like to know a better way to do this ``` SELECT * FROM wp_posts INNER JOIN wp_postmeta color ON wp_posts.ID = color.post_id INNER JOIN wp_postmeta transmission ON wp_posts.ID = transmission.post_id INNER JOIN wp_postmeta model ON wp_posts.ID = model.post_id INNER JOIN wp_postmeta brand ON wp_posts.ID = brand.post_id AND color.meta_key = 'color' AND color.meta_value = 'red' AND transmission.meta_key = 'transmission' AND transmission.meta_value = 'auto' AND model.meta_key = 'model' AND model.meta_value = 'model' AND brand.meta_key = 'brand' AND brand.meta_value = 'brand' AND wp_posts.post_status = 'publish' AND wp_posts.post_type = 'car' ORDER BY wp_posts.post_title ``` Here's the explain output. ``` +----+-------------+-----------+--------+-----------------------------+----------+---------+------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-----------+--------+-----------------------------+----------+---------+------------------------+------+----------------------------------------------+ | 1 | SIMPLE | color | ref | post_id,meta_key | meta_key | 768 | const | 629 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | wp_posts | eq_ref | PRIMARY,type_status_date,ID | PRIMARY | 8 | tmcdb.color.post_id | 1 | Using where | | 1 | SIMPLE | brand | ref | post_id,meta_key | post_id | 8 | tmcdb.wp_posts.ID | 4 | Using where | | 1 | SIMPLE | transmission | ref | post_id,meta_key | post_id | 8 | tmcdb.color.post_id | 4 | Using where | | 1 | SIMPLE | model | ref | post_id,meta_key | post_id | 8 | tmcdb.transmission.post_id | 4 | Using where | +----+-------------+-----------+--------+-----------------------------+----------+---------+------------------------+------+----------------------------------------------+ ``` [Wordpress schema here.](http://codex.wordpress.org/Database_Description)
It seems you are trying to obtain a result set with one row per post of type `car`. It seems you want to display various attributes of each car in the post, and those are stashed away in `postmeta`. Pro tip: **Never** use `SELECT *` in software unless you absolutely know why you're doing it. Especially with queries containing lots of `JOIN` operations, `SELECT *` returns lots of pointless and redundant columns. There's a query design trick to know for the WordPress `postmeta` table. If you want to get a particular attribute, do this: ``` SELECT p.ID, p.post_title, color.meta_value AS color FROM wp_posts AS p LEFT JOIN wp_postmeta AS color ON p.ID = color.post_id AND 'color' = color.meta_key WHERE p.post_status = 'publish' AND /* etc etc */ ``` It's super-important to understand this pattern when doing what you're trying to do. This pattern is required because `postmeta` is a peculiar type of table called a [key-value](/questions/tagged/key-value "show questions tagged 'key-value'") or [entity-attribute-value](/questions/tagged/entity-attribute-value "show questions tagged 'entity-attribute-value'") store. What's going on here? A few things: 1. Using this pattern you get one row for each post, with some columns from the `posts` table and a particular attribute from the `postmeta` table. 2. You are `LEFT JOIN`ing the `postmeta` table so you still get a row if the attribute is missing. 3. You are using an alias name for the `postmeta` table. Here it's `postmeta AS color`. 4. You are including the selector for `meta_key` (here it's `'color' = color.meta_key`) in the `ON` condition of the join. 5. You are using an alias in your `SELECT` clause to present the `postmeta.meta_value` item with an appropriate column name. Here it's `color.meta_value AS color`. Once you get used to employing this pattern, you can stack it up, with a cascade of `LEFT JOIN` operations, to get lots of different attributes, like so: ``` SELECT wp_posts.ID, wp_posts.post_title, wp_posts.whatever, color.meta_value AS color, transmission.meta_value AS transmission, model.meta_value AS model, brand.meta_value AS brand FROM wp_posts LEFT JOIN wp_postmeta AS color ON wp_posts.ID = color.post_id AND color.meta_key='color' LEFT JOIN wp_postmeta AS transmission ON wp_posts.ID = transmission.post_id AND transmission.meta_key='transmission' LEFT JOIN wp_postmeta AS model ON wp_posts.ID = model.post_id AND model.meta_key='model' LEFT JOIN wp_postmeta AS brand ON wp_posts.ID = brand.post_id AND brand.meta_key='brand' WHERE wp_posts.post_status = 'publish' AND wp_posts.post_type = 'car' ORDER BY wp_posts.post_title ``` I've done a bunch of indenting on this query to make it easier to see the pattern. You may prefer a different indenting style. It's hard to know why you were having performance problems with the query in your question. It's possibly because you were getting a combinatorial explosion with all the `INNER JOIN` operations that was then filtered. But at any rate the query you showed was probably returning no rows. If you are still having performance trouble, try creating a compound index on `postmeta` on the `(post_id, meta_key, meta_value)` columns. If you're creating a WordPress plugin, that's probably a job to do at plugin installation time.
This is a Wordpress database, and you might be reluctant to make extensive changes to the schema, because it could break other parts of the application or complicate upgrades in the future. The difficulty of this query shows one of the downsides to the [entity-attribute-value](/questions/tagged/entity-attribute-value "show questions tagged 'entity-attribute-value'") design. That design is flexible in that it allows for new attributes to be created at runtime, but it makes a lot of queries against such data more complex than they would be with a conventional table. [The schema for Wordpress has not been optimized well.](https://www.percona.com/blog/2014/01/16/analyzing-wordpress-mysql-queries-query-analytics/) There are some naive indexing mistakes, even in the most current version 4.0. For this particular query, the following two indexes help: ``` CREATE INDEX `bk1` ON wp_postmeta (`post_id`,`meta_key`,`meta_value`(255)); CREATE INDEX `bk2` ON wp_posts (`post_type`,`post_status`,`post_title`(255)); ``` The `bk1` index helps to look up exactly the right meta key *and* value. The `bk2` index helps to avoid the filesort. These indexes can't be covering indexes, because `post_title` and `meta_value` are `TEXT` columns, and these are too long to be fully indexed. You'd have to change them to `VARCHAR(255)`. But that risks breaking the application, if it's depending on storing longer strings in that table. 
```
+----+-------------+--------------+------------+------+---------------+------+---------+----------------------------+------+----------+-----------------------+
| id | select_type | table        | partitions | type | possible_keys | key  | key_len | ref                        | rows | filtered | Extra                 |
+----+-------------+--------------+------------+------+---------------+------+---------+----------------------------+------+----------+-----------------------+
|  1 | SIMPLE      | wp_posts     | NULL       | ref  | bk2           | bk2  | 124     | const,const                |    1 |   100.00 | Using index condition |
|  1 | SIMPLE      | color        | NULL       | ref  | bk1           | bk1  | 1542    | wp.wp_posts.ID,const,const |    1 |   100.00 | Using index           |
|  1 | SIMPLE      | transmission | NULL       | ref  | bk1           | bk1  | 1542    | wp.wp_posts.ID,const,const |    1 |   100.00 | Using index           |
|  1 | SIMPLE      | model        | NULL       | ref  | bk1           | bk1  | 1542    | wp.wp_posts.ID,const,const |    1 |   100.00 | Using index           |
|  1 | SIMPLE      | brand        | NULL       | ref  | bk1           | bk1  | 1542    | wp.wp_posts.ID,const,const |    1 |   100.00 | Using index           |
+----+-------------+--------------+------------+------+---------------+------+---------+----------------------------+------+----------+-----------------------+
```
Improving a query using a lot of inner joins to wp_postmeta, a key/value table
[ "", "mysql", "sql", "database", "wordpress", "entity-attribute-value", "" ]
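The one-LEFT-JOIN-per-attribute pivot used in the accepted answer above can be sketched end to end. This is a minimal, hypothetical reproduction using Python's built-in sqlite3 with a toy two-table schema (the table and column set is invented for the demo; real WordPress tables have many more columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (ID INTEGER PRIMARY KEY, post_title TEXT);
    CREATE TABLE postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
    INSERT INTO posts VALUES (1, 'My Car');
    INSERT INTO postmeta VALUES (1, 'color', 'red'), (1, 'brand', 'Fiat');
""")

# One LEFT JOIN per attribute pivots key/value rows into columns.
row = conn.execute("""
    SELECT p.post_title,
           color.meta_value AS color,
           brand.meta_value AS brand
    FROM posts AS p
    LEFT JOIN postmeta AS color
           ON p.ID = color.post_id AND color.meta_key = 'color'
    LEFT JOIN postmeta AS brand
           ON p.ID = brand.post_id AND brand.meta_key = 'brand'
""").fetchone()

print(row)  # ('My Car', 'red', 'Fiat')
```

Because each join carries its `meta_key` predicate in the `ON` clause, a post with a missing attribute still comes back (with a NULL column) instead of disappearing from the result.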
I have the following:

```
| Group_ID | Value |
|----------|-------|
| 146      | A     |
| 146      | B     |
| 239      | NULL  |
| 239      | F     |
| 826      | NULL  |
| 826      | NULL  |
```

I need to retrieve only the IDs whose related values are all null. In this example, the output would be `826`. I know it seems like a simple query, but I've been struggling with this for a long time.
```
SELECT group_id
FROM t
GROUP BY group_id
HAVING SUM(CASE WHEN value IS NULL THEN 1 END) = COUNT(*)
```
This should do it: ``` select t1.group_id from foo t1 group by t1.group_id having count(*) = (select count(*) from foo t2 where t2.group_id = t1.group_id and t2.value is null); ``` SQLFiddle example: <http://sqlfiddle.com/#!15/7d228/1>
Query to retrieve groups where all related values are null
[ "", "sql", "" ]
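The accepted `HAVING` trick above can be verified directly. A minimal, hypothetical reproduction using Python's built-in sqlite3 (table name `t` and the sample rows are taken from the question; the same clause works in most databases):

```python
import sqlite3

# In-memory database seeded with the sample data from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (group_id INTEGER, value TEXT)")
rows = [(146, "A"), (146, "B"), (239, None), (239, "F"), (826, None), (826, None)]
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

# A group qualifies when the number of NULL values equals the row count,
# i.e. every value in the group is NULL.
all_null = conn.execute("""
    SELECT group_id
    FROM t
    GROUP BY group_id
    HAVING SUM(CASE WHEN value IS NULL THEN 1 END) = COUNT(*)
""").fetchall()

print(all_null)  # [(826,)]
```

An equivalent (and arguably clearer) predicate is `HAVING COUNT(value) = 0`, since `COUNT(column)` ignores NULLs.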
I have a `database1` with more than 500 tables and a `database2` with the same number of tables; in both databases the table names are the same. Some of the tables have different table definitions, for example a table `reports` in `database1` has 9 columns while the table `reports` in `database2` has 10.

I want to copy all the data from `database1` to `database2`; it should overwrite matching data and append the extra columns where the structures don't match. I have tried the import/export wizard in SQL Server 2008, but it gives an error at the last step of copying rows. I don't have a screenshot of that error right now (it's on my office PC). It says there was an error inserting into the read-only column `xyz`; sometimes it says `vs_isbroken`. For the read-only column error I enabled identity insert as mentioned, but it did not help.

Please help me. It is an opportunity at my office for me.
SSIS and SQL Server 2008 wizards can be finicky tools. If you get a "can't insert into column ABC" error, it could be one of the following:

* Inserting into a PK column -> when setting up the mappings, you need to indicate to overwrite the value
* Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
* Inserting into a calculated column (pointed out by @Nick.McDermaid)

You could also get issues with referential integrity if your database uses this (most do).

If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.

Another suggestion I would make is that you insert DB1 into "stage" tables in DB2. These tables should have no relational integrity and will allow you to break the process into several steps as follows:

* Stage the data from DB1 into DB2
* Produce reports/queries on issues pertinent to your database/rules
* Merge the data from stage tables into target tables using SQL

That last step is where you can use merge statements, or simple inserts/updates depending on a key match. Using SQL here in the local database lets you use set theory to manage the overlap of the two sets and figure out what is new or to be updated. SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.

I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results reportable. Then work step by step, reporting on each stage's completion.

Another two small suggestions:

* Create _Archive tables of each of the stage tables and add a Tstamp column to each. Merge into these after the stage step, which will allow you to quickly see when which rows were introduced into DB2
* After stage and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance
* Drop those indexes after each merge; this will increase the bulk insert performance

**Basics on staging (response to question clarification):**

Links:

* <http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se>
* <http://www.jasonstrate.com/tag/31daysssis/>
* <http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx>

Staging is the act of moving data from one place to another without any checks.

1. First you need to create the target tables; the schema should match the source tables.
2. Open up BIDS and create a new project and, in it, a new SSIS package.
3. In the package, create a connection for the source server and another for the destination.
4. Then create a data flow step; in the step create a data source for each table you want to copy from.
5. Connect each source to a new data destination and set the appropriate connection and table.
6. When done, save and do a test run.

Before the data flow step, you might like to add a SQL step that will truncate all the target tables.
If you're open to using tools then what about using something like [Red Gate Sql Compare](http://www.red-gate.com/products/sql-development/sql-compare/) and [Red Gate SQL Data Compare](http://www.red-gate.com/products/sql-development/sql-data-compare/)? First I would use data compare to manage the schema differences, add the new columns you want to your destination database (database2) from the source (database1). Then with data compare you match the contents of the tables any columns it can't match based on names you specify how to handle. Then you can pick and choose what data you want to copy from your destination. So you'll see what data is new and what's different (you can delete data in the destination that's not in the source or ignore it). You can either have the tool do the work or create you a script to run when you want. There's a 15 day trial if you want to experiment.
How can I copy and overwrite data of tables from database1 to database2 in SQL Server
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
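The stage-then-merge idea from the answer above can be sketched in plain SQL. A minimal, hypothetical illustration using Python's sqlite3; the table names are invented, and SQLite's `INSERT OR REPLACE` stands in for the `MERGE` (or `UPDATE`/`INSERT` pair) you would use in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE stage  (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO target VALUES (1, 'old'), (2, 'keep');
    -- Data staged from the source database: one updated row, one new row.
    INSERT INTO stage VALUES (1, 'new'), (3, 'added');
""")

# Merge step: overwrite rows with matching keys, append the rest.
conn.execute("INSERT OR REPLACE INTO target SELECT * FROM stage")

result = conn.execute("SELECT * FROM target ORDER BY id").fetchall()
print(result)  # [(1, 'new'), (2, 'keep'), (3, 'added')]
```

Because the stage tables carry no constraints, the load step cannot fail on data issues; all the reconciliation logic lives in this one set-based merge statement.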
Many code editors have a built-in menu item or keyboard function to get a UUID, for example when I press `CTRL` + `SHIFT` + `G` in Delphi it inserts a GUID at the current position in the source code. I know that I can use `SELECT NEWID()` to generate a UUID, but I have to go to the query result, copy the generated UUID to my clipboard (killing whatever was in there before) and then go back to the code and replace the query which is an awful way of dealing with this. Is there any function (maybe using IntelliSense code snippets?) to do this in SQL Server Management Studio that I haven't found yet? Some background on why I need this function, I often write SQL Scripts like this: ``` INSERT INTO table (table_id, value) VALUES ('112C8DD8-346B-426E-B06C-75BBA97DCD63', 'ABC'); ``` I can't use just call `NEWID()`, because I later (in another file) want to refer to the specific row using: ``` WHERE table_id = '112C8DD8-346B-426E-B06C-75BBA97DCD63' ``` How can I insert a UUID into the code editor?
`NEWID()` itself is a function which, when called, returns a GUID value. You do not have to run it in a separate window and then copy-paste the value from there. Just put the function wherever you want the GUID value, and when the query is executed at run time the value returned by the function will be used. For instance, in an `INSERT` statement:

```
INSERT INTO TableName (Col1, Col2, Col3)
VALUES (1, 'Value 1', NEWID())
```

If you want Col3 to have a GUID value, you do not need to copy-paste the value returned from the `NEWID()` function; you use the function itself. At runtime a GUID value will be returned and inserted into Col3. Similarly, if you were updating:

```
UPDATE TableName
SET Col3 = NEWID()
WHERE <Some Condition>
```

Again, you don't have to copy-paste the value returned from `NEWID()`; just use the function itself.

Another option: suppose you are somewhere inside your code where you cannot call the `NEWID()` function directly. You would declare a variable of type UNIQUEIDENTIFIER, call the function, store its value in that variable, and then use the variable inside your code, something like:

```
DECLARE @GUID_Value UNIQUEIDENTIFIER;
SET @GUID_Value = NEWID();
-- Now use this variable anywhere in your code.
```

## Adding a keyboard shortcut

If you want to add a shortcut to your SSMS to generate GUIDs for you, you need two things:

1. Create a stored procedure which returns a GUID value.
2. Add a keyboard shortcut to call that stored procedure.

## Proc definition

```
CREATE PROCEDURE get_Guid
AS
SELECT NEWID();
```

## Add it to shortcuts

From your SSMS go to Tools --> Options --> Environment --> Keyboard and add the stored procedure's name to the shortcut you want. Click OK. Close SSMS and reopen it, and you are good to go.

![enter image description here](https://i.stack.imgur.com/l9HJE.png)

As shown in the screenshot above, if you now press `CTRL` + `0` it will generate a GUID value for you in the same query window.
For SQL server, and maybe others. One can insert a string-encoded Guid/uuid in SQL server using the convert function. See the zeroed Guid in the example below. ``` INSERT INTO [schema].[table] ([Id] ,[stuff]) VALUES (Convert(uniqueidentifier, N'00000000-0000-0000-0000-000000000000') ,'Interesting Stuff' ) ```
How to insert a NEWID() / GUID / UUID into the code editor?
[ "", "sql", "sql-server", "ssms", "" ]
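If SSMS shortcuts are not available, any scripting language at hand can generate a GUID for pasting into a script. A small, hypothetical example in Python (this produces a random version-4 UUID, which is not the exact algorithm `NEWID()` uses, but is equally valid as a unique key):

```python
import uuid

# Generate a random (version 4) UUID and format it the way SQL scripts expect.
guid = str(uuid.uuid4()).upper()
print(f"WHERE table_id = '{guid}'")
```

The uppercase hex-and-hyphen string (8-4-4-4-12 digits) is directly usable as a UNIQUEIDENTIFIER literal in a script.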
I recently faced an issue with finding out about deleted records in a table. Let me explain my question. I have a table like the one below:

**Table Employee**

```
Emp No         Employee Name
unique Guid1   John Smith
unique Guid2   Tom
unique Guid3   Jenny
unique Guid4   Paul
unique Guid5   Scott
```

There are millions of records in the database and multiple developers are working on it. If Developer A deletes `Emp No unique Guid1` and `unique Guid4`, Developer B wants to know which records were deleted recently. I know that SQL Server 2008 R2 logs the transactions, but I didn't find an exact way to identify those records.

Please help! Thanks
I see several options:

1. **Modify the schema** (table definition) to include fields like "isDeleted", "deletedOn", "deletedBy" and set these fields when "deleting" instead of actually deleting the records. All "selects" from this table must then be altered to include this new logic
2. **Use a trigger** to listen for deletes and copy the data into an "auditing" table.
3. **[Snapshot Backups](http://technet.microsoft.com/en-us/library/ms189548(v=sql.105).aspx)**

What to use really depends on the use case
Change tracking feature is available for SQL Server 2008. See <http://msdn.microsoft.com/en-us/library/bb964713.aspx>
Find the deleted records in the Sql server 2008
[ "", "sql", "sql-server", "sql-server-2008", "" ]
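Option 2 above (a trigger copying deleted rows into an audit table) can be sketched in a few lines. A minimal, hypothetical reproduction in Python's sqlite3; note that T-SQL triggers read the `deleted` pseudo-table instead of SQLite's `OLD` row reference:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (emp_no TEXT, name TEXT);
    CREATE TABLE employee_audit (emp_no TEXT, name TEXT);
    -- Copy every deleted row into the audit table automatically.
    CREATE TRIGGER employee_delete_audit
    AFTER DELETE ON employee
    BEGIN
        INSERT INTO employee_audit VALUES (OLD.emp_no, OLD.name);
    END;
    INSERT INTO employee VALUES ('Guid1', 'John Smith'), ('Guid2', 'Tom');
""")

conn.execute("DELETE FROM employee WHERE emp_no = 'Guid1'")

audited = conn.execute("SELECT * FROM employee_audit").fetchall()
print(audited)  # [('Guid1', 'John Smith')]
```

Any developer can then query the audit table to see what was deleted, regardless of who issued the delete.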
I'm working on the following query and table:

```
SELECT dd.actual_date, dd.week_number_overall, sf.branch_id,
       AVG(sf.overtarget_qnt) AS targetreach
FROM sales_fact sf, date_dim dd
WHERE dd.date_id = sf.date_id
  AND dd.week_number_overall BETWEEN 88-2 AND 88
  AND sf.branch_id = 1
GROUP BY dd.actual_date, branch_id, dd.week_number_overall
ORDER BY dd.actual_date ASC;

ACTUAL_DATE WEEK_NUMBER_OVERALL  BRANCH_ID TARGETREACH
----------- ------------------- ---------- -----------
13/08/14                     86          1         -11
14/08/14                     86          1          12
15/08/14                     86          1        11.8
16/08/14                     86          1         1.4
17/08/14                     86          1        -0.2
19/08/14                     86          1         7.2
20/08/14                     87          1        16.6
21/08/14                     87          1        -1.4
22/08/14                     87          1        14.4
23/08/14                     87          1         2.8
24/08/14                     87          1          18
26/08/14                     87          1        13.4
27/08/14                     88          1        -1.8
28/08/14                     88          1        10.6
29/08/14                     88          1         7.2
30/08/14                     88          1          14
31/08/14                     88          1         9.6
02/09/14                     88          1        -3.2
```

The "TARGETREACH" column shows whether the target has been reached or not; a negative value means the target wasn't reached on that day.

How can I count the number of rows with a positive value for this query? The result should look something like:

```
TOTAL_POSITIVE_TARGET_REACH WEEK_NUMBER_OVERALL
--------------------------- -------------------
13                          88
```

I have tried to use CASE but it is still not working right. Thanks a lot.
You want to use conditional aggregation: ``` with t as ( <your query here> ) select week_number_overall, sum(case when targetreach > 0 then 1 else 0 end) from t group by week_number_overall; ``` However, I would rewrite your original query to use proper `join` syntax. Then the query would look like: ``` SELECT week_number_overall, SUM(CASE WHEN targetreach > 0 THEN 1 ELSE 0 END) FROM (SELECT dd.actual_date, dd.week_number_overall, sf.branch_id, AVG(sf.overtarget_qnt) AS targetreach FROM sales_fact sf JOIN date_dim dd ON dd.date_id = sf.date_id WHERE dd.week_number_overall BETWEEN 88-2 AND 88 AND sf.branch_id = 1 GROUP BY dd.actual_date, branch_id, dd.week_number_overall ) t GROUP BY week_number_overall ORDER BY week_number_overall; ``` THe difference between a CTE (the first solution) and a subquery is (in this case) just a matter of preference.
``` select sum( decode( sign( TARGETREACH ) , -1 , 0 , 0 , 0 , 1 , 1 ) ) from ( "your query here" ); ```
Counting number of positive value in a query
[ "", "sql", "oracle", "" ]
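The conditional-aggregation idea from the accepted answer above can be tried in isolation. A minimal, hypothetical reproduction using Python's sqlite3 (a handful of invented sample rows instead of the full fact table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily (week INTEGER, targetreach REAL)")
conn.executemany("INSERT INTO daily VALUES (?, ?)",
                 [(86, -11), (86, 12), (87, 16.6), (87, -1.4), (88, 10.6)])

# Count only the positive days per week: the CASE maps each row to 1 or 0,
# and SUM adds them up within each group.
counts = conn.execute("""
    SELECT week,
           SUM(CASE WHEN targetreach > 0 THEN 1 ELSE 0 END) AS positives
    FROM daily
    GROUP BY week
    ORDER BY week
""").fetchall()

print(counts)  # [(86, 1), (87, 1), (88, 1)]
```

The same `SUM(CASE ...)` expression drops straight into the outer query of the accepted answer, with the inner aggregate wrapped as a derived table.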
In the code below I am trying to find the time difference in minutes:

```
declare @a time
declare @b time
select @a = "Apr 1, 2014 22:36.000"
select @b = "Apr 2, 2014 02:25.000"
select datediff(minutes,@a,@b)
```

**The expected output is 229 minutes: 24 min (60 - 36) + 3 hours (180 min) + 25 min = 229 minutes.**

But **I am getting -1211**. The function is doing a direct subtraction. Please help.
You have declared the variables as `time`, which cannot exceed 24 hours; the date portion is ignored. In effect you are sending this to the `datediff` function:

```
datediff(minute, '22:36.000', '02:25.000')
```

and that is why you are getting a negative result. I suggest you declare @a and @b as `datetime` instead:

```
declare @a datetime
declare @b datetime
select @a = "Apr 1, 2014 22:36.000"
select @b = "Apr 2, 2014 02:25.000"
select datediff(minutes,@a,@b)
```
You can use this query as reference to calculate time difference . ``` select DATEDIFF(day,2007-11-30,2007-11-20) AS NumberOfDays, DATEDIFF(hour,2007-11-30,2007-11-20) AS NumberOfHours, DATEDIFF(minute,2007-11-30,2007-11-20) AS NumberOfMinutes FROM test_table ```
Finding difference between time
[ "", "sql", "sybase", "" ]
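The arithmetic in the question (24 + 180 + 25 = 229 minutes) is easy to verify once full dates are attached to the times, which is exactly why the answer above recommends `datetime` over `time`. A quick check in Python:

```python
from datetime import datetime

a = datetime(2014, 4, 1, 22, 36)
b = datetime(2014, 4, 2, 2, 25)

# With the date portion present, the difference spans midnight correctly.
minutes = int((b - a).total_seconds() // 60)
print(minutes)  # 229
```

Strip the dates and the subtraction wraps the wrong way (02:25 minus 22:36 is negative), which is the -1211 the question reports.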
How would I check if a particular word is contained in the text of a field in my dataset using R. In SQL, we can use the LIKE comparison operator. For example, ``` SELECT * FROM schools WHERE name LIKE '%Public School%' ``` If I had to do the same thing in R, how would I do that?
Given ``` schools <- data.frame(rank = 1:20, name = rep(c("X Public School", "Y Private School"), 10)) ``` try this: ``` subset(schools, grepl("Public School", name)) ``` or this: ``` schools[ grep("Public School", schools$name), ] ``` or this: ``` library(sqldf) sqldf("SELECT * FROM schools WHERE name LIKE '%Public School%'") ``` or this: ``` library(data.table) data.table(schools)[ grep("Public School", name) ] ``` or this: ``` library(dplyr) schools %>% filter(grepl("Public School", name)) ```
In Base R one can use %in% to subset data e.g. dataframe[dataframe$variable %in% dataframe2$variable2]
What is the R equivalent of SQL's "LIKE '%searched_word%'"?
[ "", "sql", "r", "" ]
Suppose that you have a table with the following structure:

```
CREATE TABLE [log] (
    [type] int NOT NULL,
    [stat] nvarchar(20) NOT NULL,
    [id] int IDENTITY (1, 1) NOT NULL,
    descr nvarchar(20),
    PRIMARY KEY ([type], [stat], [id])
)
```

Is it possible to force the `[id]` to be incremented only whenever the other two PK fields have the same values, and not independently as is now? For instance:

```
type  stat     id  descr
5     ERROR    1   Test   <---
3     WARNING  1   Test
5     ERROR    2   Test   <---
2     ERROR    1   Test
1     WARNING  1   Test
5     WARNING  1   Test
5     ERROR    3   Test   <---
```
No. The purpose of an IDENTITY (or SEQUENCE) is only to generate an incremental integer. There may be gaps as values are not reused, and values may be reserved but not used. You can use an expression in queries to show the desired value. ``` ROW_NUMBER() OVER (PARTITION BY type, stat ORDER BY id) AS Seq ```
This i think would get your job done ``` CREATE TABLE [LOG1] ( [TYPE] INT NOT NULL, [STAT] NVARCHAR(20) NOT NULL, [ID] INT , DESCR NVARCHAR(20), PRIMARY KEY ([TYPE], [STAT], [ID]) ) CREATE TRIGGER TR_LOG ON [DBO].[LOG1] INSTEAD OF INSERT AS BEGIN DECLARE @CNT INT=0 IF EXISTS(SELECT 'X' FROM LOG1 A JOIN INSERTED B ON A.TYPE=B.TYPE AND A.STAT=B.STAT) SET @CNT=(SELECT COUNT(*) FROM LOG1 A JOIN INSERTED B ON A.TYPE=B.TYPE AND A.STAT=B.STAT) PRINT @CNT INSERT INTO LOG1(TYPE,STAT,ID,DESCR) SELECT TYPE,STAT,ID+@CNT,DESCR FROM INSERTED END ```
T-SQL Unique Identity Column as Part of Composite Primary Key
[ "", "sql", "sql-server", "t-sql", "primary-key", "composite-primary-key", "" ]
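The `ROW_NUMBER()` expression from the accepted answer above can be tried out directly. A minimal, hypothetical reproduction in Python's sqlite3 (window functions require SQLite 3.25+, which ships with recent CPython builds; the sample ids are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (type INTEGER, stat TEXT, id INTEGER)")
conn.executemany("INSERT INTO log VALUES (?, ?, ?)",
                 [(5, "ERROR", 1), (3, "WARNING", 2), (5, "ERROR", 3),
                  (5, "ERROR", 7), (3, "WARNING", 9)])

# Number rows per (type, stat) group, ordered by the gap-filled global id.
rows = conn.execute("""
    SELECT type, stat, id,
           ROW_NUMBER() OVER (PARTITION BY type, stat ORDER BY id) AS seq
    FROM log
    ORDER BY type, stat, id
""").fetchall()

print(rows)
```

Even though the stored `id` column has gaps (1, 3, 7 for the ERROR group), the computed `seq` column restarts at 1 for each (type, stat) pair and counts without gaps, which is the display behavior the question asked for.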
I'm learning SQL and have been going through the chapters of the GalaXQL app. I've written the following query in answer to the question "Hilight the star (or stars) which has the planet with the highest orbit distance in the galaxy. Remember to clear the old hilights before beginning." This query returns the result starid:23334, which when inserted into the hilight table, allows me to progress. However, the program hangs for a long, long time before handing this result back to me, so I suspect there's a far more efficient way to pull this information from the database. **This query works, but takes a long time to process, how can I improve this query?** ``` INSERT INTO hilight SELECT DISTINCT s.starid FROM planets AS p, stars AS s, moons AS m WHERE s.starid=(SELECT starid FROM planets WHERE orbitdistance=(SELECT MAX(orbitdistance) FROM planets)); ``` **My logic behind this query structure** **First** find the planet with the greatest orbit distance from the table "planets". **Second** to compare the value of "highest orbit distance" to the "orbitdistance" field of the table "planets" and to return the "starid" that relates to that field. **Third** to compare the value of the field "starid" in the table "planets" to the field "starid" in the table "stars" and then insert that "starid" to the table "hilight". Data structure: ![enter image description here](https://i.stack.imgur.com/RDyid.png)
Alright, let's start by looking at your base query. I know you already have an answer that works, but I feel I need to explain what was going on in your query.

```
INSERT INTO hilight
SELECT DISTINCT s.starid
FROM planets AS p, stars AS s, moons AS m
WHERE s.starid = (
    SELECT starid FROM planets
    WHERE orbitdistance = (SELECT MAX(orbitdistance) FROM planets)
);
```

Here in your `FROM` clause you are using several of the old-style joins, and you are also missing an `ON` clause. This is referred to as a [CROSS JOIN](http://technet.microsoft.com/en-us/library/ms190690(v=sql.105).aspx), which produces what we refer to as a Cartesian product. The result set will be the number of rows in the first table multiplied by the number of rows in the second table, and so on. We can fix that by fixing the join syntax and joining on the relationships you've shown in your diagram:

```
INSERT INTO hilight
SELECT s.starid  -- DISTINCT probably no longer needed as we aren't artificially creating duplicate rows
FROM planets AS p
INNER JOIN stars AS s ON s.starid = p.starid
INNER JOIN moons AS m ON m.planetid = p.planetid
WHERE s.starid = (
    SELECT starid FROM planets
    WHERE orbitdistance = (SELECT MAX(orbitdistance) FROM planets)
);
```

Upon further analysis, you are joining to the table moons but are not using any of its data, nor is it limiting your result set. This means you gain no benefit from having it in your query, and it can be cut right out:

```
INSERT INTO hilight
SELECT s.starid
FROM planets AS p
INNER JOIN stars AS s ON s.starid = p.starid
WHERE s.starid = (
    SELECT starid FROM planets
    WHERE orbitdistance = (SELECT MAX(orbitdistance) FROM planets)
);
```

Now if we look at your `WHERE` clause, it is quite redundant. There is no reason to go to the planets table twice for the predicate when you could simply match the max orbit distance against the planets table directly. This also eliminates any reason to join to the stars table at all:

```
INSERT INTO hilight
SELECT p.starid
FROM planets AS p
WHERE p.orbitdistance = (SELECT MAX(orbitdistance) FROM planets)
```

The resulting query is much simpler and should run a lot faster now that we aren't generating so many duplicate rows. I hope that sheds some light on what was happening in your query.

UPDATE: Upon further review, this GalaXQL app appears to be quite terrible and has massively outdated information, and I would highly recommend not using it.
You could remove the first select so that it would look like:

```
INSERT INTO hilight
SELECT DISTINCT p.starid
FROM planets p
WHERE orbitdistance = (SELECT MAX(orbitdistance) FROM planets);
```

You could also remove the DISTINCT, unless you had a specific reason for including it.
SELECT .. FROM (SELECT .. FROM ..). How can I improve this query?
[ "", "sql", "sqlite", "" ]
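The simplified final query above boils down to "rows whose value equals the table maximum". A tiny, hypothetical reproduction in Python's sqlite3 (sample orbit distances invented; the GalaXQL data would behave the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE planets (starid INTEGER, orbitdistance REAL)")
conn.executemany("INSERT INTO planets VALUES (?, ?)",
                 [(1, 3.2), (23334, 9.9), (7, 4.5)])

# Only the star(s) owning the planet with the maximum orbit distance.
stars = conn.execute("""
    SELECT starid
    FROM planets
    WHERE orbitdistance = (SELECT MAX(orbitdistance) FROM planets)
""").fetchall()

print(stars)  # [(23334,)]
```

A scalar subquery is evaluated once, so this touches the planets table twice in total, rather than joining three tables as the original query did.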
I imported a database into MSSQL 2008 for development. I know it has stored procedures because I was able to edit them live through the web front end. Now that I have the database loaded in a development location, how do I access them? Are they normally stored in a table? I see at least one table with stored procedures. Is there a way to edit them with a built-in text editor?

EDIT: I do have SQL Server Management Studio installed, just not sure how exactly to access/edit the procedures.
Got it! So different from working with MySQL.

1. In Object Explorer, connect to an instance of Database Engine and then expand that instance.
2. Expand Databases, expand the database in which the stored procedure belongs, and then expand Programmability.
3. Expand Stored Procedures, right-click the procedure to modify, and then click Modify.
4. Modify the text of the stored procedure.
5. To test the syntax, on the Query menu, click Parse.
6. To modify the stored procedure, on the Query menu, click Execute.
7. To save the script, on the File menu, click Save As. Accept the file name or replace it with a new name, and then click Save.
Normally you would use SSMS (SQL Server Management Studio) to do this. It's usually included with SQL Server, so you may just need to install it.

<http://msdn.microsoft.com/en-us/library/ms174173.aspx>

or download it here: <http://www.microsoft.com/en-us/download/details.aspx?id=7593>
Editing Stored Procedures in MSSQL Server 2008
[ "", "sql", "sql-server", "windows", "sql-server-2008", "stored-procedures", "" ]
I have a select query with an `or` logical operator, like the following:

```
SELECT * FROM grabli_new.product
WHERE article LIKE '%AV2%' OR article LIKE '%AV22%';
```

I need to order the result set by the "length of the LIKE pattern": rows that contain the longest pattern (in my case `'%AV22%'`) must come at the beginning of the result set. The query must return all results that contain the `'%AV22%'` pattern first, and only then all results that contain the `'%AV2%'` pattern. Is that possible?
You can try to use something like this:

```
SELECT *,
       CASE
           WHEN article LIKE '%AV22%' THEN 1
           WHEN article LIKE '%AV2%' THEN 2
       END AS orderIndex
FROM grabli_new.product
WHERE article LIKE '%AV2%' OR article LIKE '%AV22%'
ORDER BY orderIndex
```
if you want to count (multi-byte) characters, use the CHAR\_LENGTH function instead: ``` ORDER BY CHAR_LENGTH( column ) ```
MySQL "Order By" query
[ "", "mysql", "sql", "" ]
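The CASE-in-ORDER-BY technique from the accepted answer above can be reproduced minimally. Note that the more specific pattern must be tested first in the CASE, since any `'%AV22%'` row also matches `'%AV2%'`. A hypothetical sqlite3 demo with invented article codes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (article TEXT)")
conn.executemany("INSERT INTO product VALUES (?)",
                 [("X-AV2-1",), ("X-AV22-1",), ("X-AV2-2",)])

# Rows matching the longer pattern get sort key 0 and come first.
ordered = [r[0] for r in conn.execute("""
    SELECT article
    FROM product
    WHERE article LIKE '%AV2%'
    ORDER BY CASE WHEN article LIKE '%AV22%' THEN 0 ELSE 1 END, article
""")]

print(ordered)  # ['X-AV22-1', 'X-AV2-1', 'X-AV2-2']
```

The second `ORDER BY` term is just a tie-breaker within each pattern bucket; drop or change it as needed.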
tbl\_chat:

```
message_id  message_text    users_id  another_user  time_sent
1           'hi'            9         1             2014-10-13 00:10:32
2           'hello'         1         9             2014-10-13 00:12:32
3           'good morning'  9         1             2014-10-13 00:12:34
4           'good night'    9         1             2014-10-13 00:14:02
5           'LOL'           1         9             2014-10-13 00:14:05
```

tbl\_usersinfo:

```
users_id  users_fname  users_lname
1         ben          ten
9         son          goku
```

I want to get the whole conversation between these two people and show each sender's full name, message, and the time the message was sent, but my query doesn't return their names correctly. Here is my query:

```
SELECT CONCAT_WS(' ', i.users_fname, i.users_lname) AS full_name,
       c.message_text, c.time_sent, c.message_id
FROM tbl_chat AS c
LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.another_user
WHERE c.users_id = 1
UNION
SELECT CONCAT_WS(' ', i.users_fname, i.users_lname) AS full_name,
       c.message_text, c.time_sent, c.message_id
FROM tbl_chat AS c
LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.users_id
WHERE c.users_id = 9
ORDER BY time_sent ASC
```

The result of this query is:

```
full_name  message_text    time_sent            message_id
son goku   'hi'            2014-10-13 00:10:32  1
son goku   'hello'         2014-10-13 00:12:32  2
son goku   'good morning'  2014-10-13 00:12:34  3
son goku   'good night'    2014-10-13 00:14:02  4
son goku   'lol'           2014-10-13 00:14:05  5
```

but my desired output is:

```
full_name  message_text    time_sent            message_id
son goku   'hi'            2014-10-13 00:10:32  1
ben ten    'hello'         2014-10-13 00:12:32  2
son goku   'good morning'  2014-10-13 00:12:34  3
son goku   'good night'    2014-10-13 00:14:02  4
ben ten    'lol'           2014-10-13 00:14:05  5
```
```
SELECT CONCAT_WS(' ', i.users_fname, i.users_lname) AS full_name,
       c.message_text, c.time_sent, c.message_id
FROM tbl_chat AS c
LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.users_id
WHERE (c.users_id = 1 AND c.another_user = 9)
   OR (c.users_id = 9 AND c.another_user = 1)
ORDER BY time_sent ASC
```
There is a minor logic error ``` SELECT CONCAT_WS(' ',i.users_fname, i.users_lname) AS full_name, c.message_text,c.time_sent,c.message_id FROM tbl_chat AS c LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.another_user // this is another user so turning 1 to 9 WHERE c.users_id = 1 UNION SELECT CONCAT_WS(' ',i.users_fname, i.users_lname) AS full_name, c.message_text,c.time_sent,c.message_id FROM tbl_chat AS c LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.users_id WHERE c.users_id = 9 ORDER BY time_sent ASC ``` To correct this you can change i.users\_id = c.another\_user to i.users\_id = c.users\_id but the best thing is remove the union and keep the query simple like @Rimas pointed out ``` SELECT CONCAT_WS(' ',i.users_fname, i.users_lname) AS full_name, c.message_text,c.time_sent,c.message_id FROM tbl_chat AS c LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.users_id WHERE c.users_id in (1,9) AND c.another_user in (1,9) ```
2 left joins with a union
[ "", "mysql", "sql", "" ]
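The accepted fix above (join on the message's author, then filter on the pair of participants) can be verified end to end. A minimal sqlite3 reproduction; `CONCAT_WS` is MySQL-specific, so string concatenation uses `||` here, and the timestamp column is omitted since `message_id` already gives the order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_chat (message_id INTEGER, users_id INTEGER, another_user INTEGER);
    CREATE TABLE tbl_usersinfo (users_id INTEGER, users_fname TEXT, users_lname TEXT);
    INSERT INTO tbl_usersinfo VALUES (1, 'ben', 'ten'), (9, 'son', 'goku');
    INSERT INTO tbl_chat VALUES (1, 9, 1), (2, 1, 9), (3, 9, 1), (4, 9, 1), (5, 1, 9);
""")

# Join on the sender's id so each row shows its author's name,
# and keep only the messages exchanged between users 1 and 9.
names = [r[0] for r in conn.execute("""
    SELECT i.users_fname || ' ' || i.users_lname
    FROM tbl_chat AS c
    LEFT JOIN tbl_usersinfo AS i ON i.users_id = c.users_id
    WHERE (c.users_id = 1 AND c.another_user = 9)
       OR (c.users_id = 9 AND c.another_user = 1)
    ORDER BY c.message_id
""")]

print(names)  # ['son goku', 'ben ten', 'son goku', 'son goku', 'ben ten']
```

Messages 1, 3, and 4 were sent by user 9 and messages 2 and 5 by user 1, matching the desired output in the question.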
I found some threads about this error, but none of the solutions worked for me. Let me explain my situation. I created two tables, a users table and one for articles. Now I want to store the user that created the article and the one who last modified it.

```
CREATE TABLE IF NOT EXISTS `testDb`.`users` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `nickname` VARCHAR(255) NULL,
  `first_name` VARCHAR(255) NULL,
  `last_name` VARCHAR(255) NULL,
  `e_mail` VARCHAR(255) NOT NULL,
  `activated` TINYINT(1) NOT NULL DEFAULT 0,
  `birth_date` DATE NULL,
  `locked` TINYINT(1) NOT NULL DEFAULT 0,
  `locked_date_time` DATETIME NULL,
  `street` VARCHAR(255) NULL,
  `street_number` VARCHAR(255) NULL,
  `city` VARCHAR(255) NULL,
  `postal_code` VARCHAR(255) NULL,
  `country` VARCHAR(255) NULL,
  `phone` VARCHAR(255) NULL,
  PRIMARY KEY (`id`),
  UNIQUE INDEX `user_id_UNIQUE` (`id` ASC)
) ENGINE = InnoDB AUTO_INCREMENT = 1;

CREATE TABLE IF NOT EXISTS `testDb`.`articles` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) NULL,
  `description` VARCHAR(255) NULL,
  `create_user` INT ZEROFILL NOT NULL,
  `create_date_time` DATETIME NULL,
  `last_modifie_user` INT ZEROFILL NOT NULL,
  `last_modifie_date_time` DATETIME NULL,
  PRIMARY KEY (`id`),
  UNIQUE INDEX `article_id_UNIQUE` (`id` ASC),
  INDEX `fk_articles_users1_idx` (`create_user` ASC),
  INDEX `fk_articles_users2_idx` (`last_modifie_user` ASC)
) ENGINE = InnoDB AUTO_INCREMENT = 1;

ALTER TABLE `testDb`.`articles`
  ADD CONSTRAINT `fk_articles_users1`
    FOREIGN KEY (`create_user`)
    REFERENCES `testDb`.`users` (`id`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION,
  ADD CONSTRAINT `fk_articles_users2`
    FOREIGN KEY (`last_modifie_user`)
    REFERENCES `testDb`.`users` (`id`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION;
```

I get the following error, and I don't understand why I should need an index for that:

Error Code: 1822. Failed to add the foreign key constaint. Missing index for constraint 'fk_articles_users1' in the referenced table 'users'

I ran

```
SHOW ENGINE innodb STATUS;
```

but this doesn't show any errors.
`create_user INT UNSIGNED ZEROFILL` cannot reference `id INT`, because these count as different data types for purposes of foreign key reference. Make them the same data type. The only data type difference that is permitted between columns in a foreign key relationship is length of a varchar. For example, VARCHAR(10) can reference VARCHAR(20) or vice-versa. Any other difference in data type, size, or character set is incompatible for referential integrity. Even having `ZEROFILL` on one column but not the other makes the data types incompatible.
I came across this issue and my Data Type was correct so I was stumped for a little but then I just made everything the same. When creating foreign keys be sure the columns you are using have the same: * Data Type * Collation * Zero Fill * Not Null * Unsigned * Binary
Error Code: 1822. Failed to add the foreign key constaint. Missing index for constraint
[ "", "mysql", "sql", "foreign-keys", "" ]