Columns: Prompt (string, 10 - 31k chars), Chosen (string, 3 - 29.4k), Rejected (string, 3 - 51.1k), Title (string, 9 - 150), Tags (list, 3 - 7 items)
I came across this line of code in a SQL report I am trying to fix. Could someone tell me what its purpose is? ``` DATEADD(dd, - DATEDIFF(dd, d.TxnDateTime, 1), 1) As TxnDate ``` To me it seems like it would cancel itself out.
It's removing the `TIME` portion of the `DATETIME` without changing the data type. You can observe the different behavior here: [SQL Fiddle](http://sqlfiddle.com/#!3/1fa93/10746/0) Again, not sure why you'd need to preserve the `DATETIME` type while removing the time, unless it predates the `DATE` datatype.
It was probably written for SQL Server 2005, when `CONVERT(DATE` was just a glimmer in some Microsoft employee's eye, and when we had to use cumbersome, inefficient and hard-to-explain workarounds to remove the time portion of a `DATETIME`. People still use those cumbersome, inefficient, and hard-to-explain methods, of course. But I don't think anyone here can tell you why, especially if you're looking for the reason that particular developer chose that particular format in that particular case. We simply can't speak for them. Maybe they stole it from somewhere else, maybe it actually makes sense to them, maybe they just plugged it in without even knowing what it does. Today, the better approach is: ``` CONVERT(DATE, d.TxnDateTime); ``` ...and I have demonstrated this many times, including [here](http://www.sqlperformance.com/2012/09/t-sql-queries/what-is-the-most-efficient-way-to-trim-time-from-datetime) and [here](http://www.sqlperformance.com/2012/10/t-sql-queries/trim-time). Now, if you are trying to get all the rows where `d.TxnDateTime` falls on a specific day, a much better approach is to use a `DATE` parameter, and an open-ended range query: ``` WHERE d.TxnDateTime >= @ThatDay AND d.TxnDateTime < DATEADD(DAY, 1, @ThatDay); ``` This is superior to: ``` WHERE CONVERT(DATE, d.TxnDateTime) = @ThatDay; ``` because, while sargable, that expression can still lead to rather poor cardinality estimates. For more information see this very thorough post: <https://dba.stackexchange.com/questions/34047/cast-to-date-is-sargable-but-is-it-a-good-idea> It also might not be a bad idea to read this one: <https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries> And also this regarding `dd`: <https://sqlblog.org/2011/09/20/bad-habits-to-kick-using-shorthand-with-date-time-operations>
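The old trick is easier to see outside SQL: `DATEDIFF(dd, base, dt)` counts whole days from a base date, and `DATEADD` puts that count back onto the base, landing on midnight of the same day. A minimal Python sketch of the idea (the 1900-01-01 base mirrors SQL Server's `DATETIME` zero; the question's expression uses the literal `1` as its base, but the mechanism is the same):

```python
from datetime import datetime, timedelta

BASE = datetime(1900, 1, 1)  # stand-in for SQL Server's "day 0"

def trim_time(dt):
    # Mimics DATEADD(dd, DATEDIFF(dd, BASE, dt), BASE):
    # count whole days since the base, then add them back to the base.
    whole_days = (dt - BASE).days
    return BASE + timedelta(days=whole_days)

print(trim_time(datetime(2013, 5, 17, 14, 30, 59)))  # 2013-05-17 00:00:00
```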
Why would someone use this?
[ "", "sql", "sql-server-2008", "t-sql", "datediff", "dateadd", "" ]
I want to run a mass update statement that selects the min and max of a column. The issue is that there are multiple columns with null values. If `NULL` then I want to set the `MinValue` to `0`. I have the following statement, but I am getting errors with the `SELECT` and `ELSE`: ``` UPDATE Table1 SET MaxValue = ( SELECT MAX(column1) FROM Table2), MinValue = (CASE WHEN SELECT MIN(column1) FROM Table2 <> NULL THEN SELECT MIN(column1) FROM Table2 ELSE '0' END) ``` What am I missing?
Here are some immediately obvious adjustments that you need to make: ``` UPDATE Table1 SET MaxValue = ( SELECT MAX(column1) FROM Table2), MinValue = (CASE WHEN (SELECT MIN(column1) FROM Table2) IS NOT NULL -- subquery in parentheses per John Gibb's comment and IS NOT NULL rather than <> NULL THEN (SELECT MIN(column1) FROM Table2) -- subquery in parentheses per John Gibb's comment ELSE 0 -- 0 rather than '0' END) ``` Otherwise, you are effectively coalescing with a `CASE`: I would use `COALESCE` instead.
How about this? ``` UPDATE a SET a.MaxValue = b.max_val, a.MinValue = COALESCE(b.min_val, 0) FROM Table1 a CROSS JOIN ( SELECT MAX(column1) max_val, MIN(column1) min_val FROM Table2 ) b ```
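The `COALESCE` shape is easy to verify on any engine. A sketch with Python's built-in sqlite3, using the question's table and column names but contrived data (Table2 is left empty so `MIN(column1)` really is NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (MaxValue INT, MinValue INT);
    CREATE TABLE Table2 (column1 INT);
    INSERT INTO Table1 VALUES (NULL, NULL);
""")
# Table2 is empty, so both subqueries return NULL;
# COALESCE turns the NULL minimum into 0.
con.execute("""
    UPDATE Table1
    SET MaxValue = (SELECT MAX(column1) FROM Table2),
        MinValue = COALESCE((SELECT MIN(column1) FROM Table2), 0)
""")
print(con.execute("SELECT MaxValue, MinValue FROM Table1").fetchone())  # (None, 0)
```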
SQL Update Statement with a Case with Select Inside
[ "", "sql", "sql-server", "t-sql", "" ]
I have a MySQL table (pin\_codes). First I want to select exactly 200 rows from this table and export them, then I want to delete those 200 rows. If the IDs in this table were continuous, I could do this: ``` SELECT * FROM pin_codes WHERE id<200 and id>1; DELETE FROM pin_codes WHERE id<200 and id>1; ``` But the IDs are like this: ``` 52138 52139 52141 52142 52144 52147 52148 52157 52158 ... ``` How can I do that?
``` Select * from pin_codes order by id limit 200 ``` will give you the first 200 rows based on id, and then you can delete those 200 ids. ``` Delete from pin_codes order by id limit 200 ``` deletes the first 200 rows. It is also best practice to use column names instead of \* (query optimization).
Do something like this: ``` delete from t1 order by id limit 5; ``` [Example](http://sqlfiddle.com/#!2/8cba7/1/0)
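The select-then-delete pairing can be sketched with Python's built-in sqlite3, using the sample IDs from the question and a batch of 3 instead of 200. The subquery form of the DELETE is shown because `DELETE ... ORDER BY ... LIMIT` is a MySQL extension (and note MySQL itself would need the inner `LIMIT` subquery wrapped in a derived table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pin_codes (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO pin_codes VALUES (?)",
                [(i,) for i in (52138, 52139, 52141, 52142, 52144)])

BATCH = 3  # the question uses 200; 3 keeps the demo readable
# 1) export the first BATCH rows ordered by id
batch = [r[0] for r in con.execute(
    "SELECT id FROM pin_codes ORDER BY id LIMIT ?", (BATCH,))]
# 2) delete exactly those rows via a subquery
con.execute(
    "DELETE FROM pin_codes WHERE id IN "
    "(SELECT id FROM pin_codes ORDER BY id LIMIT ?)", (BATCH,))

print(batch)  # [52138, 52139, 52141]
print([r[0] for r in con.execute("SELECT id FROM pin_codes ORDER BY id")])  # [52142, 52144]
```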
Select and Delete exactly x rows from MySql table
[ "", "mysql", "sql", "select", "sql-delete", "" ]
``` +----------+---------+------------+-----------+ | MEMRECNO | RECSTAT | IDSRCRECNO | IDNUMBER | +----------+---------+------------+-----------+ | 556787 | D | 5 | 956645789 | | 123456 | A | 5 | 956645789 | | 546578 | A | 5 | 462454322 | | 262441 | A | 4 | 462454322 | | 657855 | D | 3 | 462454322 | | 746877 | A | 5 | 654988844 | | 989455 | A | 2 | 654988844 | | 444863 | A | 1 | 654988844 | +----------+---------+------------+-----------+ ``` I am attempting to write a query to select records where two or more sources (IDSRCRECNO) have the same IDNUMBER. I have started the query with something along the lines of: ``` WHERE idnumber IN (SELECT idnumber FROM table GROUP BY idnumber HAVING count(idnumber) >= 2) ``` Expected results: MEMRECNO 546578 and 262441 are included because both records are active, the IDNUMBERs are identical and the IDSRCRECNOs are different. MEMRECNO 556787 and 123456 would not be included because, even though the IDNUMBERs match, one of the records/recstat is Deleted (D). MEMRECNO 262441 and 657855 would not be included because, even though the IDNUMBERs match and the IDSRCRECNOs are different, one of the records/recstat is Deleted (D). MEMRECNO 746877, 989455 and 444863 would be included because all three records are active, the IDSRCRECNOs are different and the IDNUMBERs match. Thanks in advance!
Is this what you are looking for? I just added a `WHERE` clause to your inner query: ``` SELECT idnumber FROM table WHERE recstat <> 'D' GROUP BY idnumber HAVING count(idnumber) >= 2 ```
I would do this like: ``` with data as ( select 556787 memrecno, 'D' recstat, 5 idsrcrecno, 956645789 idnumber from dual union all select 123456 memrecno, 'A' recstat, 5 idsrcrecno, 956645789 idnumber from dual union all select 546578 memrecno, 'A' recstat, 5 idsrcrecno, 462454322 idnumber from dual union all select 262441 memrecno, 'A' recstat, 4 idsrcrecno, 462454322 idnumber from dual union all select 657855 memrecno, 'D' recstat, 3 idsrcrecno, 462454322 idnumber from dual union all select 746877 memrecno, 'A' recstat, 5 idsrcrecno, 654988844 idnumber from dual union all select 989455 memrecno, 'A' recstat, 2 idsrcrecno, 654988844 idnumber from dual union all select 444863 memrecno, 'A' recstat, 1 idsrcrecno, 654988844 idnumber from dual ) select * from ( select memrecno, recstat, idsrcrecno, idnumber, count(distinct idsrcrecno) over (partition by idnumber) rec_cnt from data where recstat = 'A' ) where rec_cnt > 1; ``` Which returns: ``` MEMRECNO RECSTAT IDSRCRECNO IDNUMBER REC_CNT ---------- ------- ---------- ---------- ---------- 262441 A 4 462454322 2 546578 A 5 462454322 2 444863 A 1 654988844 3 989455 A 2 654988844 3 746877 A 5 654988844 3 ``` This assumes you only want to see Active records. The query would have to be changed if for example MEMRECNO = 444863 was not Active, but you still wanted it to show up (since there would still be 2 records with the same IDNUMBER and different IDSRCRECNO). Just comment if that is what you are looking for and I will update this accordingly.
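Since sqlite3 has no `COUNT(DISTINCT ...) OVER (...)`, the same Active-only logic can be checked with a `GROUP BY`/`HAVING` subquery instead; a sketch with the question's sample data:

```python
import sqlite3

rows = [
    (556787, 'D', 5, 956645789), (123456, 'A', 5, 956645789),
    (546578, 'A', 5, 462454322), (262441, 'A', 4, 462454322),
    (657855, 'D', 3, 462454322), (746877, 'A', 5, 654988844),
    (989455, 'A', 2, 654988844), (444863, 'A', 1, 654988844),
]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (memrecno INT, recstat TEXT, idsrcrecno INT, idnumber INT)")
con.executemany("INSERT INTO data VALUES (?,?,?,?)", rows)

# Active rows whose IDNUMBER is shared by >= 2 distinct active sources.
result = [r[0] for r in con.execute("""
    SELECT memrecno FROM data
    WHERE recstat = 'A'
      AND idnumber IN (
          SELECT idnumber FROM data
          WHERE recstat = 'A'
          GROUP BY idnumber
          HAVING COUNT(DISTINCT idsrcrecno) >= 2)
    ORDER BY memrecno
""")]
print(result)  # [262441, 444863, 546578, 746877, 989455]
```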
Oracle Query to Find Records If Count >= 2 and unique in column
[ "", "sql", "oracle", "select", "count", "having", "" ]
I want to select the last answer for a specific question. My current solution selects only the question from "Nutzer1237", but I want the last answer from "Nutzer1234" too. ![enter image description here](https://i.stack.imgur.com/DXHTm.jpg) ``` SELECT pid, frage, antwort, user, created_at FROM antwortenverlauf WHERE frage = 'Risiko: Wie empfinden Sie die Kommunikation mit dem Kunden?' AND ( user, created_at ) IN ( SELECT user, MAX( created_at ) FROM antwortenverlauf ) ORDER BY created_at DESC ``` Sorry for my English! Translation: frage = question, antwort = answer
You should have read the answer from @fancyPants: ``` SELECT pid, frage, antwort, user, created_at FROM antwortenverlauf a1 WHERE frage = 'Risiko: Wie empfinden Sie die Kommunikation mit dem Kunden?' AND (user, created_at) IN (SELECT user, MAX( created_at ) FROM antwortenverlauf a2 WHERE a1.frage=a2.frage GROUP BY USER) ORDER BY created_at DESC ```
In the **[official manual](http://dev.mysql.com/doc/refman/5.5/en//example-maximum-column-group-row.html)** there are 3 examples how to solve this. > *Task: For each article, find the dealer or dealers with the most* > expensive price. > > This problem can be solved with a subquery like this one: ``` SELECT article, dealer, price FROM shop s1 WHERE price=(SELECT MAX(s2.price) FROM shop s2 WHERE s1.article = s2.article); ``` > The preceding example uses a correlated subquery, which can be > inefficient (see Section 13.2.10.7, “Correlated Subqueries”). Other > possibilities for solving the problem are to use an uncorrelated > subquery in the FROM clause or a LEFT JOIN. > > Uncorrelated subquery: ``` SELECT s1.article, dealer, s1.price FROM shop s1 JOIN ( SELECT article, MAX(price) AS price FROM shop GROUP BY article) AS s2 ON s1.article = s2.article AND s1.price = s2.price; ``` > LEFT JOIN: ``` SELECT s1.article, s1.dealer, s1.price FROM shop s1 LEFT JOIN shop s2 ON s1.article = s2.article AND s1.price < s2.price WHERE s2.article IS NULL; ``` > The LEFT JOIN works on the basis that when s1.price is at its maximum > value, there is no s2.price with a greater value and the s2 rows > values will be NULL.
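The manual's correlated-subquery pattern runs anywhere; a sketch with Python's built-in sqlite3 and made-up shop data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shop (article INT, dealer TEXT, price REAL)")
con.executemany("INSERT INTO shop VALUES (?,?,?)", [
    (1, 'A', 3.45), (1, 'B', 3.99), (2, 'A', 10.99),
    (3, 'B', 1.45), (3, 'C', 1.69),
])
# Correlated subquery: keep each row whose price is the maximum
# for its own article.
rows = con.execute("""
    SELECT article, dealer, price FROM shop s1
    WHERE price = (SELECT MAX(s2.price) FROM shop s2
                   WHERE s1.article = s2.article)
    ORDER BY article
""").fetchall()
print(rows)  # [(1, 'B', 3.99), (2, 'A', 10.99), (3, 'C', 1.69)]
```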
Select one specfic row from each user by a different time
[ "", "mysql", "sql", "" ]
I'm a beginner in SQL. I have a question: how can I import a database from a .sql file? I tried the "import a script file" option but it doesn't work. I attach a screenshot and my SQL file: [screenshot](http://i41.tinypic.com/ac61c3.jpg) [sql file](http://pastebin.com/zNzUbPZn) Thank you in advance for your help.
To get started, you will need: a .sql script file containing a `CREATE TABLE` command; MySQL Query Browser or phpMyAdmin (other MySQL database tools will require similar steps); and an empty MySQL database already created. MySQL Query Browser is part of the MySQL GUI Tools available at <http://mysql.com/>. These same steps can be used for most other MySQL administrator tools. Using the MySQL Query Browser, connect to your MySQL server. From the Schemata panel, select the database you intend to add the new database table to. Choose File > Open Script. Navigate to the .sql file you wish to import. Click Open. Select Execute. Confirm that the new database table appears in the Schemata panel.
Just open a terminal, log in to mysql, `use` the database, then run `source file/to/dump.sql`.
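For a non-interactive import, the usual shell one-liner is `mysql -u user -p dbname < file/to/dump.sql`. The "run a whole script file" idea can be sketched with Python's built-in sqlite3 (the file contents here are a made-up stand-in for a real dump):

```python
import sqlite3, tempfile, os

# Write a tiny stand-in for the .sql dump.
fd, path = tempfile.mkstemp(suffix=".sql")
with os.fdopen(fd, "w") as f:
    f.write("CREATE TABLE t (x INT); INSERT INTO t VALUES (1);")

con = sqlite3.connect(":memory:")
with open(path) as f:
    con.executescript(f.read())  # runs every statement in the file
os.remove(path)

print(con.execute("SELECT x FROM t").fetchone())  # (1,)
```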
Import database from sql file
[ "", "mysql", "sql", "mysql-workbench", "" ]
I have one table created like this Table: ``` key value 1 10000 1 10001 2 10001 ``` And I want to select key 2 because it has 10001 but not 10000. Is there a simple way? I tried using joins but I have no idea how to make join select only missing value.
Assuming you're looking for `key`s that don't have all of the available `value`s, you can do that by comparing the number of `DISTINCT value`s for each key to the number of `DISTINCT value`s in the entire table. ``` SELECT `key` FROM `table` GROUP BY `key` HAVING COUNT(DISTINCT value) < (SELECT COUNT(DISTINCT value) FROM `table`) ``` Seen in action at [SQLFiddle](http://sqlfiddle.com/#!2/c9bb0/3) If there are only a particular set of values you're interested in, you can change this to using hardcoded values. ``` SELECT `key` FROM `table` WHERE value IN (10001, 10000) GROUP BY `key` HAVING COUNT(DISTINCT value) < 2 ``` For this to generalize to a larger number of values, the number in the `HAVING` clause needs to match the number of elements in the `IN` condition.
You can simply do this: ``` SELECT DISTINCT t1.`key` FROM tablename t1 WHERE t1.`key` NOT IN(SELECT `key` FROM tablename WHERE value = 10000); ``` * [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!2/a3350/15)
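A quick check of the `HAVING COUNT(DISTINCT ...)` variant with Python's built-in sqlite3, using the question's three rows (`table` and `key` are quoted because both are keywords):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "table" ("key" INT, value INT)')
con.executemany('INSERT INTO "table" VALUES (?,?)',
                [(1, 10000), (1, 10001), (2, 10001)])
# Keys that are missing at least one of the two values of interest.
rows = con.execute('''
    SELECT "key" FROM "table"
    WHERE value IN (10000, 10001)
    GROUP BY "key"
    HAVING COUNT(DISTINCT value) < 2
''').fetchall()
print(rows)  # [(2,)]
```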
SQL: Select key based on missing value
[ "", "mysql", "sql", "" ]
Suppose I have a table in Postgres called `listings` that looks like this: | id | neighborhood | bedrooms | price | | --- | --- | --- | --- | | 1 | downtown | 0 | 189000 | | 2 | downtown | 3 | 450000 | | 3 | riverview | 1 | 300000 | | 4 | riverview | 0 | 250000 | | 5 | downtown | 1 | 325000 | | 6 | riverview | 2 | 350000 | etc. How do I write a crosstab query that shows the average price per bedrooms as the columns and neighborhoods as the rows? The output of the query should have the following format: | neighborhood | 0 | 1 | 2 | 3 | | --- | --- | --- | --- | --- | | downtown | 189000 | 325000 | - | 450000 | | riverview | 250000 | 300000 | 350000 | - | etc.
First compute the average with the aggregate function [`avg()`](https://www.postgresql.org/docs/current/functions-aggregate.html): ``` SELECT neighborhood, bedrooms, avg(price) FROM listings GROUP BY 1,2 ORDER BY 1,2; ``` Then feed the result to the `crosstab()` function (provided by the additional module [tablefunc](https://www.postgresql.org/docs/current/tablefunc.html)). Cast the avg to `int` if you want rounded results as displayed: ``` SELECT * FROM crosstab( 'SELECT neighborhood, bedrooms, avg(price)::int FROM listings GROUP BY 1, 2 ORDER BY 1, 2;' , $$SELECT unnest('{0,1,2,3}'::int[])$$ ) AS ct ("neighborhood" text, "0" int, "1" int, "2" int, "3" int); ``` [fiddle](https://dbfiddle.uk/0rdBs6pm) Detailed instructions: * [PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905#11751905) The same can be achieved with the aggregate `FILTER` clause. A bit simpler, and doesn't need an additional module, but typically slower. Related answer with side-by-side solutions: * [Conditional SQL count](https://stackoverflow.com/questions/29020065/conditional-sql-count/29022738#29022738)
Another solution, implemented with the `FILTER` clause: ``` SELECT neighborhood, avg(price) FILTER (WHERE bedrooms = 0) AS "0", avg(price) FILTER (WHERE bedrooms = 1) AS "1", avg(price) FILTER (WHERE bedrooms = 2) AS "2", avg(price) FILTER (WHERE bedrooms = 3) AS "3" FROM listings GROUP BY neighborhood; ```
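The `FILTER` clause is Postgres-style (only fairly recent SQLite versions support it), but the same pivot can be phrased with `CASE`, which runs everywhere because `AVG` ignores NULLs; a sqlite3 sketch with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE listings (id INT, neighborhood TEXT, bedrooms INT, price INT)")
con.executemany("INSERT INTO listings VALUES (?,?,?,?)", [
    (1, 'downtown', 0, 189000), (2, 'downtown', 3, 450000),
    (3, 'riverview', 1, 300000), (4, 'riverview', 0, 250000),
    (5, 'downtown', 1, 325000), (6, 'riverview', 2, 350000),
])
# CASE yields NULL when bedrooms differ, and AVG skips NULLs,
# so each column behaves like AVG(price) FILTER (WHERE bedrooms = n).
rows = con.execute("""
    SELECT neighborhood,
           AVG(CASE WHEN bedrooms = 0 THEN price END) AS "0",
           AVG(CASE WHEN bedrooms = 1 THEN price END) AS "1",
           AVG(CASE WHEN bedrooms = 2 THEN price END) AS "2",
           AVG(CASE WHEN bedrooms = 3 THEN price END) AS "3"
    FROM listings
    GROUP BY neighborhood
    ORDER BY neighborhood
""").fetchall()
print(rows)
# [('downtown', 189000.0, 325000.0, None, 450000.0),
#  ('riverview', 250000.0, 300000.0, 350000.0, None)]
```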
Create a pivot table with PostgreSQL
[ "", "sql", "postgresql", "pivot-table", "aggregate-functions", "" ]
I have two tables * `Country` with `countryid` and `countryname` * `City` with `cityid` and `cityname` I want to extract city names based on the `countryid` I select. I'm very new to sql database and please help me with direct query if you can. Thank you very much.
According to the table structure you showed, this query is not possible. You need to add a 'country\_id' column to the cities table so that you know which country each city is in. Once this has been done, your query would be ``` select cities.cityname, countries.countryname from cities inner join countries on countries.country_id = cities.country_id order by countries.countryname, cities.cityname ```
First you should have countryid in city table as a country can have many cities. Then you can write. ``` select A.cityname from city A where A.countryid in (select B.countryid from country B) ``` Please check this link if you want to find out more. [SQL: Select from one table matching criteria in another?](https://stackoverflow.com/questions/5446778/sql-select-from-one-table-matching-criteria-in-another)
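With the suggested `countryid` column added to the city table, the join can be checked with Python's built-in sqlite3 (made-up rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE country (countryid INT, countryname TEXT);
    CREATE TABLE city (cityid INT, cityname TEXT, countryid INT);
    INSERT INTO country VALUES (1, 'France'), (2, 'Spain');
    INSERT INTO city VALUES (1, 'Paris', 1), (2, 'Lyon', 1), (3, 'Madrid', 2);
""")
# With the extra countryid column in place, filtering by country
# is a plain join plus a WHERE on the selected countryid.
rows = con.execute("""
    SELECT ci.cityname
    FROM city ci
    JOIN country co ON co.countryid = ci.countryid
    WHERE co.countryid = 1
    ORDER BY ci.cityname
""").fetchall()
print(rows)  # [('Lyon',), ('Paris',)]
```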
Sql query for extracting info from one table based on other
[ "", "sql", "" ]
It seems OK, but I am getting an exception; please correct me. ``` declare var_number number; begin var_number := 10; if var_number > 100 then dbms_output.put_line(var_number||' is greater than 100'); elseif var_number < 100 then dbms_output.put_line(var_number||' is less than 100'); else dbms_output.put_line(var_number||' is equal to 100'); end if; end; ``` **Exception:** ``` ORA-06550: line 8, column 8: PLS-00103: Encountered the symbol "VAR_NUMBER" when expecting one of the following: := . ( @ % ; ORA-06550: line 13, column 4: PLS-00103: Encountered the symbol ";" when expecting one of the following: if ```
The keyword for Oracle PL/SQL is "ELSIF" ( no extra "E"), not ELSEIF (yes, confusing and stupid) ``` declare var_number number; begin var_number := 10; if var_number > 100 then dbms_output.put_line(var_number||' is greater than 100'); elsif var_number < 100 then dbms_output.put_line(var_number||' is less than 100'); else dbms_output.put_line(var_number||' is equal to 100'); end if; end; ```
The IF statement has these forms in `PL/SQL`: ``` IF THEN IF THEN ELSE IF THEN ELSIF ``` You have used `elseif` which in terms of PL/SQL is wrong. That need to be replaced with `ELSIF`. So your code should appear like this. ``` declare var_number number; begin var_number := 10; if var_number > 100 then dbms_output.put_line(var_number ||' is greater than 100'); --elseif should be replaced with elsif elsif var_number < 100 then dbms_output.put_line(var_number ||' is less than 100'); else dbms_output.put_line(var_number ||' is equal to 100'); end if; end; ```
PLS-00103: Encountered the symbol when expecting one of the following:
[ "", "sql", "oracle", "plsql", "syntax-error", "" ]
I'm using MySQL 5 and I would like to know if it's possible to get the following values using SQL syntax: the hosts which are included in group 1 and also in groups 2 and 3. ``` Hosts | groups 1 1 1 2 1 3 2 1 3 1 ----->for this example host 1 and 3 are the value which I need 3 2 3 3 4 1 ``` Group 1 is the master list; I mean I need to get all hosts included in group 1 which are also included in groups 2 and 3. If it's possible, could somebody give an example?
The answer with `group by hosts having count(distinct groups) = 3` is frequently mentioned, and it does give the right answer. But the following query also works, and usually performs much better if you have an appropriate index. The index should be on `(hosts,groups)` if I recall. ``` SELECT t1.hosts FROM MyTable AS t1 INNER JOIN MyTable AS t2 USING (hosts) INNER JOIN MyTable AS t3 USING (hosts) WHERE (t1.groups, t2.groups, t3.groups) = (1, 2, 3) ``` See my presentation [SQL Query Patterns, Optimized](http://www.slideshare.net/billkarwin/sql-query-patterns-optimized) for some analysis of this type of query.
You should be able to get the result using a combination of `WHERE`, `GROUP BY` and `HAVING`: ``` select hosts from yourtable where groups in (1, 2, 3) group by hosts having count(distinct groups) = 3; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/4746e/1)
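A quick check of the `GROUP BY`/`HAVING` answer with Python's built-in sqlite3 and the question's data (`groups` is quoted because it is a keyword in newer SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE hostgroups (hosts INT, "groups" INT)')
con.executemany("INSERT INTO hostgroups VALUES (?, ?)", [
    (1, 1), (1, 2), (1, 3), (2, 1), (3, 1), (3, 2), (3, 3), (4, 1),
])
# A host qualifies only if it appears with all three group values.
rows = con.execute('''
    SELECT hosts FROM hostgroups
    WHERE "groups" IN (1, 2, 3)
    GROUP BY hosts
    HAVING COUNT(DISTINCT "groups") = 3
    ORDER BY hosts
''').fetchall()
print(rows)  # [(1,), (3,)]
```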
SQL syntax to get row value with common column values
[ "", "mysql", "sql", "relational-division", "" ]
I simply want to have a SQL statement with a `GROUP BY`, but I would like it to prefer one row if that row is available. For example, I have this statement: ``` SELECT * FROM `translations` WHERE `lang` = "pl" OR `lang` = "en" GROUP BY `key` ``` In this statement, I am trying to select all rows where the `lang` is `pl`, and only get the `en` results if there is no result for the equivalent `pl` row. `key` is the column which is the same over multiple `lang`'s. If I run the above statement, however, all the results will be where the `lang` is `en`. I understand why this is, but it isn't the behaviour I want, and I am unsure how to adjust it. I know how to do this programmatically, but I thought it would be neater with just SQL.
You can phrase this as a `union all` statement. My assumption is that the `key`,`lang` pair uniquely identifies one row (otherwise you can still do the `group by` to get one row per key). The idea is to select all rows with `lang = 'pl'`. Then select the rows where `lang = 'en'` that have no corresponding row with `'pl'` for the same key: ``` SELECT * FROM translations WHERE lang = 'pl' union all SELECT * FROM translations WHERE lang = 'en' and `key` not in (select `key` from translations where lang = 'pl') ```
Not optimal, but it should give the desired result: ``` SELECT a.* FROM ( SELECT * FROM `translations` WHERE `lang` = "pl" UNION ALL SELECT * FROM `translations` t WHERE `lang` = "en" AND NOT EXISTS (SELECT NULL FROM `translations` b WHERE b.lang = "pl" AND b.`key` = t.`key`) ) a GROUP BY `key` ```
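The union-with-fallback idea can be checked with Python's built-in sqlite3 (made-up rows; `key` quoted because it is a keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE translations ("key" TEXT, lang TEXT, text TEXT)')
con.executemany("INSERT INTO translations VALUES (?,?,?)", [
    ('greeting', 'pl', 'Czesc'), ('greeting', 'en', 'Hello'),
    ('farewell', 'en', 'Bye'),  # no Polish row for this key
])
# Take every pl row, plus en rows only for keys with no pl row.
rows = con.execute('''
    SELECT "key", lang, text FROM translations WHERE lang = 'pl'
    UNION ALL
    SELECT "key", lang, text FROM translations
    WHERE lang = 'en'
      AND "key" NOT IN (SELECT "key" FROM translations WHERE lang = 'pl')
    ORDER BY "key"
''').fetchall()
print(rows)  # [('farewell', 'en', 'Bye'), ('greeting', 'pl', 'Czesc')]
```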
How do I group by and "prefer" a row
[ "", "mysql", "sql", "group-by", "" ]
<http://sqlfiddle.com/#!3/78273/1> ``` create table emptb1 ( id int, name varchar(20), dept int ) insert into emptb1 values (1,'vish',10); insert into emptb1 values (2,'vish',10); insert into emptb1 values (3,'vish',30); insert into emptb1 values (4,'vish',20); create table depttb1 ( id int, name varchar(20) ) insert into depttb1 values(10,'IT') insert into depttb1 values(20,'AC') insert into depttb1 values(30,'LIC') select * from emptb1 select e.id, e.name, a.id from emptb1 e cross apply ( select top 1 * from depttb1 d where d.id = e.dept order by d.id desc ) a ``` I was trying to learn cross apply, as it's similar to an inner join but works with functions. In the above query I'm assuming it should take only dept = 30, because order by d.id desc will give only the top 1 id, which is 30, and then it should return employees with dept id = 30. But it's giving me all the rows and all the dept ids. What's wrong with the query, or am I misinterpreting the concept of cross apply?
You say "*In above query I'm assuming it should take only dept=30 because order d.id desc will give only top 1st id which is 30 and then it should return employees with dept id = 30*". That's not how it works. Here's your query (reformatted a little for clarity): ``` select e.id, e.name, a.id from emptb1 e cross apply ( select top 1 * from depttb1 d where d.id = e.dept order by d.id desc ) a ``` The `APPLY` keyword means that the inner query is (logically) called once for each row of the outer query. For what happens inside the inner query, it's helpful to understand the logical order that the clauses of a `SELECT` are executed in. This order is: 1. `FROM` clause 2. `WHERE` clause 3. `SELECT` columns 4. `ORDER BY` clause 5. `TOP` operator Note that in your inner query then, the `TOP` operator gets applied *last*, well after the `WHERE` clause. This means the `where d.id = e.dept` will first reduce the inner rows to those whose `d.id` matches the `e.dept` of the outer row (which is not necessarily 30), then sort them, and then return the first one. And it does this *for every row in the outer query*. So obviously, many of them are not going to be `30`. What you are trying to would be more akin to this (still retaining the `CROSS APPLY`): ``` select e.id, e.name, a.id from emptb1 e cross apply ( select top 1 * from ( select top 1 * from depttb1 d order by d.id desc ) b where b.id = e.dept ) a ``` Here, the logic has been reordered by use of another, nested, sub-query that insures that the `ORDER BY`, then `TOP 1` get applied *before* the `WHERE` clause. (Note that this would not normally the recommended way to do this as nested sub-queries can hamper readability, I just used it here to retain the `CROSS APPLY` and to retain the rest of the original structure).
To exand on Damien's comment, the inner query: ``` select top 1 * from depttb1 d where d.id = e.dept order by d.id desc ``` is going to run for every row in the outer query: ``` select e.id, e.name, a.id from emptb1 e ``` So you will always get a match from the inner query for each row. I think you were expecting the inner query to run only one time, but that's not what `APPLY` does. So, taking the first row from your outer query, with an ID of 1 and a dept id of 10, your inner query will translate to: ``` select top 1 * from depttb1 d where d.id = 10 //this is the dept id for the current row from your outer query order by d.id desc ```
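The "inner query re-runs per outer row" behaviour can be mimicked in plain Python, which makes it clear why every employee row comes back: each row is matched against its own department before the `ORDER BY`/`TOP 1` ever run.

```python
# The question's sample data as plain tuples.
emptb1 = [(1, 'vish', 10), (2, 'vish', 10), (3, 'vish', 30), (4, 'vish', 20)]
depttb1 = [(10, 'IT'), (20, 'AC'), (30, 'LIC')]

result = []
for e_id, e_name, e_dept in emptb1:  # outer query
    # Inner query, re-evaluated per outer row:
    # WHERE d.id = e.dept, then ORDER BY d.id DESC, then TOP 1.
    matches = sorted((d for d in depttb1 if d[0] == e_dept),
                     key=lambda d: d[0], reverse=True)[:1]
    for d_id, _ in matches:
        result.append((e_id, e_name, d_id))

# Every employee's dept exists in depttb1, so all four rows survive.
print(result)  # [(1, 'vish', 10), (2, 'vish', 10), (3, 'vish', 30), (4, 'vish', 20)]
```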
SQL Server cross apply not working?
[ "", "sql", "sql-server", "sql-server-2008", "cross-apply", "" ]
Afternoon, I am trying to convert a number padded with zeros to one that resembles a monetary amount in MS SQL. Here is an example of what I have, and what I want. Have: 00000000000039570 Want: 395.70 So far I have managed to get rid of the leading zeros with the below code. ``` SUBSTRING ([Balance], PATINDEX ('%[^0 ]%', [Balance] + ' '), LEN([Balance])) AS [Balance] ``` Going back to my example, this would give me '39570'. Any help on how to get the decimal as well, or a better way of trimming, would be very much appreciated! Sorry if this has been covered before. I did a few searches but could not see exactly what I was after. I am new here so go easy on me! ;) Dan
You can convert the string to a float and divide by 100: ``` select cast('00000000000039570' as float)/100 ``` If you want a string representation: ``` select str(cast('00000000000039570' as float)/100, 15, 2) ``` EDIT: In light of the comment from dasblinknight, the following is preferable given the size of the string: ``` select cast('00000000000039570' as decimal(19,2))/100 select str(cast('00000000000039570' as decimal(19,2))/100, 15, 2) ```
I am not a master of MSSQL, but after what you did, this should work (note that `SUBSTRING` is 1-based in SQL Server): ``` SUBSTRING([BALANCE], 1, LEN([BALANCE]) - 2) + '.' + SUBSTRING([BALANCE], LEN([BALANCE]) - 1, 2) ```
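The same "shift the decimal point two places" idea outside SQL, using Python's decimal module (leading zeros are harmless to the numeric conversion; `float` works too but drops the trailing zero):

```python
from decimal import Decimal

raw = '00000000000039570'
amount = Decimal(raw).scaleb(-2)  # move the decimal point two places left
print(amount)            # 395.70
print(float(raw) / 100)  # 395.7 (float loses the trailing zero)
```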
sql server moving number decimal place
[ "", "sql", "sql-server", "decimal", "" ]
I have an SQL statement that looks so: ``` select FY_CD, PD_NO, INPUT, SUM(HOURS) from LABOR_TABLE group by PD_NO, INPUT; ``` Returning this: ``` FY_CD|PD_NO| INPUT | HOURS 2008 1 Actuals 61000 2008 1 Baseline 59000 2008 2 Actuals 54000 2008 2 Baseline 59000 2008 3 Actuals 60000 2008 3 Baseline 70000 ``` I'm trying to figure out how to subtract the Actual values from the Baseline values for each period, returning the following: ``` FY_CD|PD_NO| INPUT | HOURS 2008 1 Variance 2000 2008 2 Variance -5000 2008 3 Variance -10000 ``` Any help is appreciated.
You can actually calculate it directly by using `CASE` to check the value of `Input`, ``` SELECT FY_CD, PD_NO, 'Variance' INPUT, SUM(CASE WHEN Input = 'Actuals' THEN HOURS ELSE -1 * HOURS END) HOURS FROM LABOR_TABLE GROUP BY PD_NO ``` * [SQLFiddle Demo](http://sqlfiddle.com/#!2/b3fcf/4)
You can use subqueries to divide the table into two parts and then join the parts so that the Actuals and Baselines are on the same row (by PD\_NO). Then it's just simple subtraction. ``` SELECT Baseline.FY_CD , Baseline.PD_NO , SUM(Baseline.HOURS - Actuals.Hours) FROM ( SELECT * FROM LABOR_TABLE WHERE INPUT = 'Baseline' ) AS Baseline JOIN ( SELECT * FROM LABOR_TABLE WHERE INPUT = 'Actuals' ) AS Actuals ON Actuals.PD_NO = Baseline.PD_NO GROUP BY Baseline.PD_NO ; ```
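The signed-`SUM` approach from the first answer can be checked with Python's built-in sqlite3 and the question's data (with `FY_CD` added to the `GROUP BY`, which strict SQL modes would require anyway):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LABOR_TABLE (FY_CD INT, PD_NO INT, INPUT TEXT, HOURS INT)")
con.executemany("INSERT INTO LABOR_TABLE VALUES (?,?,?,?)", [
    (2008, 1, 'Actuals', 61000), (2008, 1, 'Baseline', 59000),
    (2008, 2, 'Actuals', 54000), (2008, 2, 'Baseline', 59000),
    (2008, 3, 'Actuals', 60000), (2008, 3, 'Baseline', 70000),
])
# Signed sum: Actuals count positive, Baseline negative.
rows = con.execute("""
    SELECT FY_CD, PD_NO, 'Variance' AS INPUT,
           SUM(CASE WHEN LABOR_TABLE.INPUT = 'Actuals'
                    THEN HOURS ELSE -HOURS END) AS HOURS
    FROM LABOR_TABLE
    GROUP BY FY_CD, PD_NO
    ORDER BY FY_CD, PD_NO
""").fetchall()
print(rows)
# [(2008, 1, 'Variance', 2000), (2008, 2, 'Variance', -5000), (2008, 3, 'Variance', -10000)]
```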
SQL Subtract Column Values Based on Second Column Value with Group By Statement
[ "", "sql", "group-by", "subtraction", "" ]
I am looking for a way of dealing with the following situation: 1. We have a database server with multiple databases on it (all have the same schema, different data). 2. We are looking for a way to query across all the databases (and for it to be easy to configure, as more databases may be added at any time). This data access must be realtime. Say, as an example, you have an application that inserts orders - each application has its own DB etc. What we are then looking for is an efficient way for a single application to then access the order information in all the other databases in order to query it and subsequently action it. My searches to date have not revealed very much, however I think I may just be missing the appropriate keywords in order to find the correct info...
It's not going to be the cleanest solution ever, but you could define a view on a "Master database" (if your individual databases are not going to stay constant) that includes the data from the individual databases, and allows you to execute queries on a single source. For example... ``` CREATE VIEW vCombinedRecords AS SELECT * FROM DB1.dbo.MyTable UNION ALL SELECT * FROM DB2.dbo.MyTable ``` Which allows you to do... ``` SELECT * FROM vCombinedRecords WHERE.... ``` When your databases change, you just update the view definition to include the new tables.
You must specify the database name before any database object. Single database: ``` SELECT * FROM [dbo].[myTable] ``` Multiple dabases: ``` SELECT * FROM [DB01].[dbo].[myTable] UNION ALL SELECT * FROM [DB02].[dbo].[myTable] UNION ALL SELECT * FROM [DB03].[dbo].[myTable] ```
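The cross-database `UNION ALL` pattern can be sketched in miniature with Python's built-in sqlite3, where `ATTACH` stands in for multiple databases on one server (the view is created as TEMP because, in SQLite, only temp objects may reference other attached databases):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    ATTACH DATABASE ':memory:' AS db1;
    ATTACH DATABASE ':memory:' AS db2;
    CREATE TABLE db1.MyTable (id INT);
    CREATE TABLE db2.MyTable (id INT);
    INSERT INTO db1.MyTable VALUES (1);
    INSERT INTO db2.MyTable VALUES (2);
    -- TEMP, because in SQLite only temp objects may span databases
    CREATE TEMP VIEW vCombinedRecords AS
        SELECT * FROM db1.MyTable
        UNION ALL
        SELECT * FROM db2.MyTable;
""")
rows = con.execute("SELECT id FROM vCombinedRecords ORDER BY id").fetchall()
print(rows)  # [(1,), (2,)]
```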
Query across multiple databases on same server
[ "", "sql", "sql-server", "" ]
Hope this makes sense. I have a table that has a bunch of records in it. Each record has a uniqueID and a localID called pro. UniqueIDs never repeat, but there can be multiple records with the same localID. I'm trying to build a query where, even though there may be 15 records with the same localID, it shows me just 1 of them. Any one, it doesn't matter. How do I pull this off? I tried: ``` SELECT DISTINCT id, pro FROM ProLookup WHERE pro LIKE '%" & replace(q,"'","''") & "%'" ``` but that didn't work. The "q" is the query for an end user searching for a pro (localID). I'm not an expert on SQL so I'm sure I'm doing something wrong. Any and all help is greatly appreciated!
If you are ok with getting the largest ID then this would work. ``` SELECT MAX(id) AS id, pro FROM ProLookup WHERE pro LIKE '%" & replace(q,"'","''") & "%'" GROUP BY pro ```
Perhaps `SELECT TOP 1 id, pro FROM ProLookup WHERE pro LIKE '%" & replace(q,"'","''") & "%'"` What exactly is wrong with your query, i.e. is it giving an error or incorrect data? Is the extra " at the end necessary? SQL Server doesn't especially like double quotes, so if that is being passed to the query you may need to remove it, depending on your environment.
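A quick check of the `MAX(id) ... GROUP BY pro` approach with Python's built-in sqlite3 (made-up rows and a hard-coded search pattern in place of the interpolated `q`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ProLookup (id INT, pro TEXT)")
con.executemany("INSERT INTO ProLookup VALUES (?,?)",
                [(1, 'P100'), (2, 'P100'), (3, 'P200')])
# One row per pro: the largest id wins.
rows = con.execute("""
    SELECT MAX(id) AS id, pro FROM ProLookup
    WHERE pro LIKE '%P%'
    GROUP BY pro
    ORDER BY pro
""").fetchall()
print(rows)  # [(2, 'P100'), (3, 'P200')]
```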
selecting from a table based on a distinct column
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have simple SQL query.. on Oracle 10g. I want to know the difference between these queries: ``` select * from employee where id = 123 and to_char(start_date, 'yyyyMMdd') >= '2013101' and to_char(end_date, 'yyyyMMdd') <= '20121231'; select * from employee where id = 123 and start_date >= to_date('2013101', 'yyyyMMdd') and end_date <= to_date('20121231', 'yyyyMMdd'); ``` Questions: 1. Are these queries the same? start\_date, end\_date are indexed date columns. 2. Does one work better over the other? Please let me know. thanks.
The latter is almost certain to be faster. * It avoids data type conversions on a column value. * Oracle will estimate better the number of possible values between two dates, rather than two strings that are representations of dates. Note that neither will return any rows as the lower limit is probably *intended* to be higher than the upper limit according to the numbers you've given. Also you've missed a numeral in 2013101.
One of the **biggest flaws** when you convert, cast, or transform **columns** with expressions (i.e. "NVL", "COALESCE" etc.) in the **WHERE clause** is that the **CBO will not be able to use an index on that column**. I slightly modified your example to show the difference: ``` SQL> create table t_test as 2 select * from all_objects; Table created SQL> create index T_TEST_INDX1 on T_TEST(CREATED, LAST_DDL_TIME); Index created ``` Created a table and an index for our experiment. ``` SQL> execute dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'T_TEST', numrows => 100000, numblks => 10000); PL/SQL procedure successfully completed ``` We are making the CBO think that our table is a big one. ``` SQL> explain plan for 2 select * 3 from t_test tt 4 where tt.owner = 'SCOTT' 5 and to_char(tt.last_ddl_time, 'yyyyMMdd') >= '20130101' 6 and to_char(tt.created, 'yyyyMMdd') <= '20121231'; Explained SQL> select * from table(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 2796558804 ---------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ---------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 3 | 300 | 2713 (1)| 00:00:33 | |* 1 | TABLE ACCESS FULL| T_TEST | 3 | 300 | 2713 (1)| 00:00:33 | ---------------------------------------------------------------------------- ``` A full table scan is used, which would be costly on a big table.
``` SQL> explain plan for 2 select * 3 from t_test tt 4 where tt.owner = 'SCOTT' 5 and tt.last_ddl_time >= to_date('20130101', 'yyyyMMdd') 6 and tt.created <= to_date('20121231', 'yyyyMMdd'); Explained SQL> select * from table(dbms_xplan.display); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------- Plan hash value: 1868991173 ------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 3 | 300 | 4 (0)| 00:00:01 | |* 1 | TABLE ACCESS BY INDEX ROWID| T_TEST | 3 | 300 | 4 (0)| 00:00:01 | |* 2 | INDEX RANGE SCAN | T_TEST_INDX1 | 8 | | 3 (0)| 00:00:01 | --------------------------------------------------------------------------------------------- ``` See, now it's an index range scan and the cost is significantly lower. ``` SQL> drop table t_test; Table dropped ``` Finally, clean up.
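To see the sargability point from the answers above in action without an Oracle instance, here is a minimal sketch using Python's built-in `sqlite3` module. The table and data are made up for illustration, and SQLite's `EXPLAIN QUERY PLAN` stands in for Oracle's explain plan; the principle is the same: wrapping an indexed column in a function hides it from the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txns (id INTEGER, txn_date TEXT);  -- dates stored as 'yyyyMMdd' strings
    CREATE INDEX idx_txn_date ON txns(txn_date);
""")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [(i, "2013%02d15" % m) for i, m in enumerate(range(1, 13), 1)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the human-readable detail
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Wrapping the column in a function hides it from the index -> full scan
func_plan = plan("SELECT * FROM txns WHERE substr(txn_date, 1, 4) = '2013'")

# Comparing the bare column against constants lets the optimizer use the index
range_plan = plan("SELECT * FROM txns WHERE txn_date >= '20130101' AND txn_date <= '20131231'")

print(func_plan)   # expect a plain table scan, no index
print(range_plan)  # expect a search using idx_txn_date
```

The exact plan wording varies by SQLite version, but the first query scans the table while the second searches via the index.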
performance difference between to_char and to_date
[ "sql", "oracle10g" ]
In the accepted answer to this question [How can I loop through all rows of a table? (MySQL)](https://stackoverflow.com/questions/5817395/how-can-i-loop-through-all-rows-of-a-table-mysql) the following code was posted: ``` INSERT INTO table_B(ID, VAL) VALUES(ID, VAL) FROM table_A LIMIT i,1; ``` I want to select from table\_A into variables first to allow me to reuse it like this: ``` SELECT VAL FROM table_A INTO variableVal LIMIT i, 1; INSERT INTO table_B(ID, VAL) VALUES(ID, variableVal); ``` but that gives me a syntax error: > #1064 - You have an error in your SQL syntax; check the manual that > corresponds to your MySQL server version for the right syntax to use > near 'LIMIT i, 1; Here is the code in full ``` DROP PROCEDURE IF EXISTS ROWPERROW; DELIMITER ;; CREATE PROCEDURE ROWPERROW() BEGIN DECLARE i INT DEFAULT 0; DECLARE factionCount INT DEFAULT 0; DECLARE factionName varchar(100); SELECT COUNT(*) FROM faction INTO factionCount; SET i = 0; WHILE i < factionCount DO SELECT name FROM faction INTO factionName LIMIT i, 1; //SELECT name FROM faction LIMIT i, 1 INTO factionName; (doesn't work either) INSERT INTO groups_group(name) values (factionName); //going to do something else with factionName SET i = i + 1; END WHILE; End; ;; call ROWPERROW(); ```
Your immediate error is caused by the fact that your [`SELECT INTO`](http://dev.mysql.com/doc/refman/5.5/en/select-into.html) syntax is wrong. The `INTO` clause should go before `FROM`. A syntactically correct version of your procedure might look like ``` DELIMITER $$ CREATE PROCEDURE ROWPERROW() BEGIN DECLARE i INT DEFAULT 0; DECLARE factionCount INT DEFAULT 0; DECLARE factionName varchar(100); SELECT COUNT(*) INTO factionCount FROM faction ; SET i = 0; WHILE i < factionCount DO SELECT name INTO factionName FROM faction LIMIT i, 1; INSERT INTO groups_group(name) VALUES (factionName); -- going to do something else with factionName SET i = i + 1; END WHILE; END$$ DELIMITER ; ``` Here is an **[SQLFiddle](http://sqlfiddle.com/#!2/63bae0/2)** demo Now, even though it technically works, I **strongly discourage** you from processing your data that way. 1. Don't use a `LOOP` at all. If another session deletes a few rows while your procedure is working, your code will break. 2. If you want row-by-row processing, use a cursor at least. 3. If you can express your processing with a set-based approach (and in most cases you can), stay away from cursors. --- A version with a cursor might look like ``` DELIMITER $$ CREATE PROCEDURE ROWPERROW2() BEGIN DECLARE done INT DEFAULT 0; DECLARE factionName varchar(100); DECLARE cursor1 CURSOR FOR SELECT name FROM faction; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE; OPEN cursor1; read_loop: LOOP FETCH cursor1 INTO factionName; IF done THEN LEAVE read_loop; END IF; INSERT INTO groups_group(name) VALUES (factionName); -- going to do something else with factionName END LOOP; CLOSE cursor1; END$$ DELIMITER ; ``` Here is an **[SQLFiddle](http://sqlfiddle.com/#!2/365910/1)** demo
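The set-based alternative recommended above (point 3) collapses the whole `WHILE` loop into a single `INSERT ... SELECT`. A minimal runnable sketch with Python's built-in `sqlite3` module (table names taken from the question; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE faction (name TEXT);
    CREATE TABLE groups_group (name TEXT);
    INSERT INTO faction VALUES ('alpha'), ('beta'), ('gamma');
""")

# The whole row-by-row WHILE loop collapses into one set-based statement
conn.execute("INSERT INTO groups_group(name) SELECT name FROM faction")

copied = [r[0] for r in conn.execute("SELECT name FROM groups_group ORDER BY name")]
print(copied)
```

No loop counter, no `LIMIT i, 1`, and it cannot break if another session deletes rows mid-way.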
You cannot use variables with LIMIT. To loop through the rows, use a CURSOR instead. Better still, see if you can use SQL's set-based approach (normal SQL statements). Using loops / cursors should be a last resort, as you usually get much better performance using normal SQL commands.
Syntax error while looping through all rows of a table in a stored procedure
[ "mysql", "sql", "stored-procedures", "cursor" ]
I have this query, where co.DATE\_UTILISATION is an integer that can be null: ``` SELECT COUNT(DISTINCT co.NUMERO) as TOTAL , co.DATE_UTILISATION, cast(NULL as int) as TEST FROM FOIATST.coupons as co LEFT OUTER JOIN FOIATST.operateurs as op ON co.OPERATEUR = op.CODE WHERE co.SOCIETTE = 999 AND co.DATE_DEB_VALIDITE = 20131007 AND co.DATE_FIN_VALIDITE = 20140107 AND co.ETAT = 'UTILISE ' AND co.DATE_DEB_VALIDITE is not NULL AND co.DATE_UTILISATION <> 0 GROUP BY DATE , ETAT ``` I've taken cast(NULL as int) as TEST from [DB2: Won't Allow "NULL" column?](https://stackoverflow.com/questions/2509233/db2-wont-allow-null-column) My query works fine without "co.DATE\_UTILISATION, ". Can you show me my mistake, please? Thanks
The problem is that you are grouping by one column but selecting another. Try this: ``` SELECT COUNT(DISTINCT co.NUMERO) as TOTAL , co.DATE_UTILISATION, cast(NULL as int) as TEST FROM FOIATST.coupons as co LEFT OUTER JOIN FOIATST.operateurs as op ON co.OPERATEUR = op.CODE WHERE co.SOCIETTE = 999 AND co.DATE_DEB_VALIDITE = 20131007 AND co.DATE_FIN_VALIDITE = 20140107 AND co.ETAT = 'UTILISE ' AND co.DATE_DEB_VALIDITE is not NULL AND co.DATE_UTILISATION <> 0 GROUP BY co.DATE_UTILISATION, ETAT; ``` Also, the inclusion of `etat` in `group by` is not necessary, because you are filtering it down to one value anyway.
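The fix in the accepted answer (group by the column you select) can be demonstrated with Python's built-in `sqlite3` module. This is not DB2, and the table is a simplified stand-in with made-up data, but the `GROUP BY` rule it illustrates is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coupons (numero INTEGER, date_utilisation INTEGER)")
conn.executemany("INSERT INTO coupons VALUES (?, ?)",
                 [(1, 20131110), (2, 20131110), (2, 20131110), (3, 20131111)])

# Every non-aggregated column in the SELECT list appears in the GROUP BY
rows = conn.execute("""
    SELECT date_utilisation, COUNT(DISTINCT numero) AS total
    FROM coupons
    GROUP BY date_utilisation
    ORDER BY date_utilisation
""").fetchall()
print(rows)
```

Each date comes back once, with a distinct-user count per group.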
When using GROUP BY, all columns in your SELECT list must either be aggregate functions or appear in the GROUP BY clause (constants such as `cast(NULL as int)` are fine as they are). ``` SELECT COUNT(DISTINCT co.NUMERO) as TOTAL , co.DATE_UTILISATION, cast(NULL as int) as TEST FROM FOIATST.coupons co LEFT JOIN FOIATST.operateurs op ON co.OPERATEUR = op.CODE WHERE co.SOCIETTE = 999 AND co.DATE_DEB_VALIDITE is not NULL AND co.DATE_DEB_VALIDITE = 20131007 AND co.DATE_FIN_VALIDITE = 20140107 AND co.ETAT = 'UTILISE ' AND co.DATE_UTILISATION <> 0 GROUP BY co.DATE_UTILISATION; ``` But in the query you have given us, there is no usage of columns from the operateurs table, and since it is a LEFT JOIN, it has no effect on which records are chosen from the coupons table. And since you are selecting only records where DATE\_DEB\_VALIDITE = 20131007, why bother also testing that it is not NULL? ``` SELECT COUNT(DISTINCT co.NUMERO) as TOTAL , co.DATE_UTILISATION FROM FOIATST.coupons co WHERE co.SOCIETTE = 999 AND co.DATE_DEB_VALIDITE = 20131007 AND co.DATE_FIN_VALIDITE = 20140107 AND co.ETAT = 'UTILISE ' AND co.DATE_UTILISATION <> 0 GROUP BY co.DATE_UTILISATION; ```
DB2 can't select a column who contains nullable values
[ "sql", "db2" ]
I was wondering which is best practice. Let's say I have a table with 10+ columns and I want to select data from it. I've heard that 'select \*' is better, since selecting specific columns makes the database look those columns up first, while selecting all just grabs everything. On the other hand, what if the table has a lot of columns in it? Is that true? Thanks
I think these two questions [here](https://stackoverflow.com/questions/3639861/why-is-select-considered-harmful) and [here](https://stackoverflow.com/questions/1960036/best-to-use-when-calling-a-lot-of-fields-in-mysql/1960043#1960043) have satisfactory answers. `select *` is not better; in fact, that it can be slower is one reason `select *` is not good. In addition, according to OMG Ponies, `select *` is an anti-pattern. See the questions in the links for details.
### It is best practice to explicitly name the columns you want to select. As Mitch just said, the performance isn't different. I have even heard that looking up the actual column names when using `*` is slower. But the advantage of naming your columns is that when your table changes, your select's result shape does not change.
SQL select * vs. selecting specific columns
[ "sql", "sql-server" ]
I would like to get the average or at least the sum of 200,000 rows from mySQL database. This is how I am querying the database but the amount is too large for me to query because I cannot afford to overload the server. ``` SELECT user_id, total_email FROM email_users WHERE email_code = 1 LIMIT 200000 SELECT SUM(total_email), AVG(total_email) FROM email_users WHERE user_id IN ( 01, 02,..., 200000-th user_id ) ``` My question is there a way to somehow combine the two queries into one so that I can get just the sum or average of 200,000 email\_users which has email\_code = 1. EDIT: Thanks to all that have answered. I didn't realise the answer was so easy - nested select statement.
You can do this with a subquery: ``` SELECT SUM(total_email), AVG(total_email) from (SELECT eu.* FROM email_users eu WHERE eu.email_code = 1 LIMIT 200000 ) eu ``` Some notes. First, using `limit` without an `order by` gives indeterminate results. You could (in theory) run this query twice and get different results. Second, this assumes that there is a field called `total_email` in `email_users`.
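The accepted answer's aggregate-over-a-limited-subquery pattern can be sketched with Python's built-in `sqlite3` module (the data is made up, and an `ORDER BY` is added in the subquery, per the answer's note that `LIMIT` without `ORDER BY` is indeterminate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE email_users (user_id INTEGER, email_code INTEGER, total_email INTEGER)")
conn.executemany("INSERT INTO email_users VALUES (?, ?, ?)",
                 [(1, 1, 10), (2, 1, 20), (3, 1, 30), (4, 0, 99)])

# Aggregate over a derived table that applies the filter and the limit
total, avg = conn.execute("""
    SELECT SUM(total_email), AVG(total_email)
    FROM (SELECT total_email
          FROM email_users
          WHERE email_code = 1
          ORDER BY user_id
          LIMIT 2)
""").fetchone()
print(total, avg)
```

Only the first two matching rows (10 and 20) feed the aggregates.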
How about something like this assuming you just want any 200K records from the DB where email\_code=1 ``` SELECT SUM(total_email), AVG(total_email) FROM email_users WHERE user_id IN ( SELECT user_id FROM email_users WHERE email_code = 1 LIMIT 200000 ) ``` or ``` SELECT SUM(total_email), AVG(total_email) FROM (SELECT user_id , total_email FROM email_users WHERE email_code = 1 LIMIT 200000) ```
MySQL: Getting average or sum from 200,000 rows
[ "mysql", "sql", "sum" ]
I would like to have columns that look like this: ``` KILL, DEATH, WIN, playerName ``` This is the resultset that it should be applied on: ![enter image description here](https://i.stack.imgur.com/LeuEv.png) and this is the query that I use to get that resultset: ``` SELECT SUM(amount) as amount, type.name as type, gamemode_statistics.Name as playerName FROM gamemode_statistics INNER JOIN type ON gamemode_statistics.type_id = type.id GROUP BY gamemode_statistics.type_id, playerName ``` I really have no clue how to do this; I tried various ways but none of them solve my problem. Maybe I just configured my tables incorrectly?
If these are the only three values you want to show as columns, you can do it like this; it will give you the count of each type: ``` SELECT SUM(amount) as amount, SUM(`type`='KILL') AS `KILL`, SUM(`type`='DEATH') AS `DEATH`, SUM(`type`='WIN') AS `WIN`, gamemode_statistics.Name as playerName FROM gamemode_statistics INNER JOIN type ON gamemode_statistics.type_id = type.id GROUP BY gamemode_statistics.type_id, playerName ```
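The `SUM(condition)` pivot trick works because a boolean comparison evaluates to 1 or 0. A runnable sketch with Python's built-in `sqlite3` module (table and data are made up; SQLite evaluates `type = 'KILL'` to 0/1 just like MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (player TEXT, type TEXT, amount INTEGER)")
conn.executemany("INSERT INTO stats VALUES (?, ?, ?)",
                 [('bob', 'KILL', 1), ('bob', 'KILL', 1),
                  ('bob', 'DEATH', 1), ('alice', 'WIN', 1)])

# Each SUM counts the rows where the comparison is true (1) for that player
rows = conn.execute("""
    SELECT player,
           SUM(type = 'KILL')  AS kills,
           SUM(type = 'DEATH') AS deaths,
           SUM(type = 'WIN')   AS wins
    FROM stats
    GROUP BY player
    ORDER BY player
""").fetchall()
print(rows)
```

Each row-value of `type` becomes its own column, one line per player.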
You're looking for a pivot query, and MySQL doesn't support them directly. For a simple 3-column result it's not too bad, but this cannot be made to work as a general solution for `n`-way arbitrary columns: ``` SELECT SUM(IF(type='KILL', amount, 0)) AS KILL, SUM(IF(type='DEATH', amount, 0)) AS DEATH etc.. FROM ... ```
Use some row values as column SQL
[ "mysql", "sql" ]
My requirement is: select the top 5 rows in SQL; if the table contains only 2 rows, then by default show some text, say 'No Data', in the remaining 3 rows. The same applies for any row count. **For example: select the top 5 rows, but the table contains only 3 rows; then the query should return** Row1 Row2 Row3 No Data No Data
Please try: ``` select top 5 Col from( select 0 srt, Col from YourTable union all select 1 srt, 'No Data' Col union all select 1 srt, 'No Data' Col union all select 1 srt, 'No Data' Col union all select 1 srt, 'No Data' Col union all select 1 srt, 'No Data' Col )x order by srt ```
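The accepted answer's padding idea (union real rows with filler rows, sort by a flag, then take the top 5) can be sketched with Python's built-in `sqlite3` module. The table and data are made up; `col` is added to the `ORDER BY` so the result order is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [('Row1',), ('Row2',), ('Row3',)])

# srt = 0 puts real rows first; five 'No Data' filler rows carry srt = 1
rows = [r[0] for r in conn.execute("""
    SELECT col FROM (
        SELECT 0 AS srt, col FROM t
        UNION ALL SELECT 1, 'No Data'
        UNION ALL SELECT 1, 'No Data'
        UNION ALL SELECT 1, 'No Data'
        UNION ALL SELECT 1, 'No Data'
        UNION ALL SELECT 1, 'No Data'
    )
    ORDER BY srt, col
    LIMIT 5
""")]
print(rows)
```

With 3 real rows, exactly 2 filler rows survive the `LIMIT 5`.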
``` /* assuming the column width is 8 characters and datatype is varchar */ DECLARE @NoDataTable AS TABLE(column1 VARCHAR(8)) DECLARE @i AS INT SET @i = 0 WHILE(@i<5) BEGIN insert into @NoDataTable (column1) values('No Data'); set @i = @i+1 end select top 5 * from ( select column1 from TestTable union all select column1 from @NoDataTable ) as T ```
Get top number of rows anyway
[ "sql", "sql-server" ]
I'm trying to determine if there is a better way to do this in SQL. My goal is to run one query which returns two values that are to be used for another query. See below ``` select * from table2 where col1 = ( select col1 from table1 where id = 123 ) and col2 = ( select col2 from table1 where id = 123 ); ``` Is there a way to simplify this code by either doing a `where` clause that checks both values against one nested query, or by running the first querying and somehow setting the values of `col1` and `col2` to variables that I can use in the second query?
You can do ``` select * from table2 where (col1, col2) = (select col1, col2 from table1 where id = 123) ```
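The row-value comparison above can be demonstrated with Python's built-in `sqlite3` module, provided the underlying SQLite is 3.15 or newer (row values are not available in older versions). Tables and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, col1 TEXT, col2 TEXT);
    CREATE TABLE table2 (col1 TEXT, col2 TEXT, payload TEXT);
    INSERT INTO table1 VALUES (123, 'a', 'b');
    INSERT INTO table2 VALUES ('a', 'b', 'match'), ('a', 'x', 'no match');
""")

# One subquery supplies both columns; the tuple comparison matches them together
rows = conn.execute("""
    SELECT payload FROM table2
    WHERE (col1, col2) = (SELECT col1, col2 FROM table1 WHERE id = 123)
""").fetchall()
print(rows)
```

The single subquery replaces the two separate lookups from the question.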
``` SELECT DISTINCT a.* FROM table2 a INNER JOIN table1 b ON a.col1 = b.col1 AND a.col2 = b.col2 WHERE b.id = 123 ```
Cleaning up and simplifying a nested SQL statement
[ "sql", "oracle", "subquery" ]
I have a table, called Table1 which has 2 columns colA, which contains ints and colB, which contains strings. colA has duplicate values. colA and colB together make a row unique. I want to find the number of rows between 2 rows which are ordered by colA. I have all the details of the 2 rows. The following is the sql I have where the start row has colA value 10 and the end row has colA value 100: ``` select count(*) from Table1 where colA > 10 AND colA <= 100 ORDER BY colA ASC ``` But this only gives me a correct count if colA has only one 10. Any help is much appreciated. Thanks
First, some data preparation: ``` create table TEST ( A int, B varchar(10) ); insert into TEST (A,B) values (9,'B0'); insert into TEST (A,B) values (10,'B1'); insert into TEST (A,B) values (10,'B2'); insert into TEST (A,B) values (20,'B3'); insert into TEST (A,B) values (100,'B4'); insert into TEST (A,B) values (100,'B5'); insert into TEST (A,B) values (101,'B6'); ``` Actually, I came up with an even simpler solution. What is important is whether you want to count the endpoints (FROM, TO) inclusively, and how to handle the case where there is no row with colA = FROM and/or no row with colA = TO, but: ``` select count(1) + coalesce((select 1 from TEST where A = 10 limit 1),0) + coalesce((select 1 from TEST where A = 100 limit 1),0) from TEST where A > 10 and A < 100; ``` Hope this helps.
This is a bit long for a comment. First observation: the `order by` clause is totally unnecessary, because you are returning only one row (the `count(*)`). Second, if a row is uniquely identified only by a combination of `colA` and `colB`, then there is no consistent definition of the rows between two values of `ColA`. Consider: ``` 9 B0 10 B1 10 B2 20 B3 100 B4 100 B5 101 B6 ``` Is the number of rows between "10" and "100" exactly 1 (B3)? Exactly 3 (B3, B4, and B5, as in your query)? Exactly 5, from the first "10" to the last "100"? The statement of your question is ambiguous. Apart from the `order by`, your query is as good as any other attempt consistent with your definition.
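The ambiguity described above (1 vs 3 vs 5 rows) comes down to whether the endpoint values are included. A small sketch with Python's built-in `sqlite3` module, using the sample data from the answers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(9, 'B0'), (10, 'B1'), (10, 'B2'), (20, 'B3'),
                  (100, 'B4'), (100, 'B5'), (101, 'B6')])

# Strictly between the endpoints: only B3
strictly_between = conn.execute(
    "SELECT COUNT(*) FROM test WHERE a > 10 AND a < 100").fetchone()[0]

# Endpoints included: B1, B2, B3, B4, B5
inclusive = conn.execute(
    "SELECT COUNT(*) FROM test WHERE a >= 10 AND a <= 100").fetchone()[0]

print(strictly_between, inclusive)
```

Choosing `>`/`<` versus `>=`/`<=` is what decides between the competing counts.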
get count between 2 known rows
[ "sql", "sqlite" ]
I am trying to import / restore a PostgreSQL database in pgAdmin on a Mac. I am doing this on a new machine that did not previously have a database on it. I installed and set up pgAdmin on the machine just recently and created a database with no files in it. Specifically, I am trying to figure out *where* to enter the restore commands found on various forums. On other forums, I see that others are saying to try the following commands: ``` psql [dbname] < [infile] ``` or ``` pg_restore [option...] [filename] ``` I have never used pgAdmin before so I have no idea where to enter them. I cannot enter anything in the SQL Pane of the database, nor can I enter it as a query (which I tried despite knowing it did not really make any sense >\_>). If it helps, my backup file is on my desktop and is in .dmp format (similar to Oracle). I might be able to get it in a different format if needed but the people I am talking to say that should suffice. Please let me know if anything comes to mind or if you need any more information. I appreciate any help or pointers anyone has. :)
I have worked on Windows; I can tell you what I did, and you may be able to adapt it on a Mac. `pg_restore` and similar commands (i.e. `pg_dump` and others) can be run from the pgsql terminal, which can be found in the Postgres installation folder, or maybe in the Start menu. From the pgsql terminal, the following syntax may be used: pg\_restore -i -h localhost -p 5432 -U username -d databasename -v filename Commands can also be run from inside the Command Prompt (cmd); on Windows I did it like this: 'set path=C:\apache2triad\pgsql\bin C:\apache2triad\pgsql\bin\psql.exe -h localhost -p 5432 -U username -d database\_name -f filename.sql pause.. ' For restoring a database backup from a .backup file from the terminal: pg\_restore -i -h localhost -p 5432 -U username -d databasename -v filename Hope you may get some help here.
> Specifically, I am trying to figure out where to enter the restore > commands found on various forums. This is a misunderstanding. pgAdmin is a GUI. You are referring to command line invocations. You do *not* need pgAdmin to run those. Just open a shell and execute the commands. But read [the manual on `pg_restore` and `psql` first](http://www.postgresql.org/docs/current/interactive/reference-client.html).
Restoring Database in PGAdmin
[ "sql", "database", "postgresql", "import", "restore" ]
I have a table in a database that I made using an "auto-increment" primary key. Only now I went back and deleted some of the rows. The problem is, that I would like the table to be 1->X without any holes in the numbered column (in other words it should be 1,2,3,4,5,6,7,8,9,10 - but now it's more like 1,2,3,7,8,10. Is there a way I can re-set the values of this column to be incremented correctly via mysql???
Use a user-defined variable, which is incremented for every row, and an `order by` to prevent duplicate ids during the update process: ``` SET @x := 0; UPDATE mytable SET id = (@x := @x + 1) ORDER BY id; ``` See an [SQLFiddle](http://sqlfiddle.com/#!2/e2217/2)
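The MySQL user-variable trick above cannot run in SQLite, but the same renumbering (and the same duplicate-key worry both answers guard against) can be sketched with Python's built-in `sqlite3` module. Here, temporary negative ids avoid transient collisions with the unique primary key; the table and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [(1,), (2,), (3,), (7,), (8,), (10,)])  # gaps at 4-6, 9

# Walk the ids in ascending order, assigning temporary negative values
ids = [r[0] for r in conn.execute("SELECT id FROM mytable ORDER BY id")]
for new, old in enumerate(ids, start=1):
    conn.execute("UPDATE mytable SET id = ? WHERE id = ?", (-new, old))

# Flip the sign: no positive id exists yet, so no uniqueness violation
conn.execute("UPDATE mytable SET id = -id")

result = [r[0] for r in conn.execute("SELECT id FROM mytable ORDER BY id")]
print(result)
```

The gapped sequence 1, 2, 3, 7, 8, 10 becomes a dense 1 through 6.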
``` UPDATE yourtable t JOIN (SELECT id oldid, @id := @id + 1 newid FROM yourtable JOIN (SELECT @id := 0) var ORDER BY oldid) newid ON t.id = oldid SET t.id = newid ``` The `ORDER BY` and `ON` conditions are needed to prevent it from temporarily creating duplicate keys, which causes an error. [DEMO](http://www.sqlfiddle.com/#!2/099d8/1)
mysql - How can I correct an auto-increment field that has deleted rows (1,2,3,4,5 - is now 1,3,5) but I want it to be 1,2,3
[ "mysql", "sql" ]
I've been looking into this, but I cannot move forward, and it is blocking a project I'm working on. My issue (I think) is very simple, but because I'm not familiar with PostgreSQL I'm stuck on page 1. I have this table: ``` "id";"date";"name";"seller";"order";"result" "1";"2013-12-10 11:09:28.76";"adam";"mu";1;"5" "1";"2013-12-10 11:09:28.76";"adam";"mu";2;"3" "1";"2013-12-10 11:09:28.76";"adam";"mu";3;"1" "2";"2013-12-10 11:10:26.059";"eve";"wa";1;"3" "2";"2013-12-10 11:10:26.059";"eve";"wa";2;"9" "2";"2013-12-10 11:10:26.059";"eve";"wa";3;"5" "3";"2013-12-10 11:11:34.746";"joshua";"mu";1;"2" "3";"2013-12-10 11:11:34.746";"joshua";"mu";2;"2" "3";"2013-12-10 11:11:34.746";"joshua";"mu";3;"9" ``` Creation script: ``` CREATE TABLE myTable ( id character varying(50) NOT NULL, date timestamp without time zone NOT NULL, name character varying(64) NOT NULL, seller character varying(64) NOT NULL, order integer NOT NULL, result character varying(64) ) WITH (OIDS=FALSE); ALTER TABLE myTable OWNER TO postgres; ``` (Note: I cannot modify the structure of that table.) And I want to get a result like this, to use with the copy function and write it to a file: ``` "id";"date";"name";"seller";"result_1";"result_2";"result_3" "1";"2013-12-10 11:09:28.76";"adam";"mu";"5";"3";"1" "2";"2013-12-10 11:10:26.059";"eve";"wa";"3";"9";"5" "3";"2013-12-10 11:11:34.746";"joshua";"mu";"2";"2";"9" ``` I've looked into the "crosstab" function, but I cannot get it to work in my environment, and I also want to drop the `order` column from my output. I'm not a query expert, so I'm in over my head here :( Any help will be appreciated. Thanks in advance!
`CASE` statements are the poor man's surrogate for a proper `crosstab()` function: ``` SELECT a.id ,max(CASE WHEN myorder = 1 THEN result END) AS result_1 ,max(CASE WHEN myorder = 2 THEN result END) AS result_2 ,max(CASE WHEN myorder = 3 THEN result END) AS result_3 FROM mytab GROUP BY id ORDER BY id; ``` Only requires a single table scan and is therefore *much* faster than multiple joins. BTW, never use [reserved words](http://www.postgresql.org/docs/current/interactive/sql-keywords-appendix.html) like `order` as identifiers. Details for this as well as a proper crosstab() query under this related question: [PostgreSQL Crosstab Query](https://stackoverflow.com/questions/3002499/postgresql-crosstab-query/11751905)
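The `MAX(CASE WHEN ...)` crosstab from the accepted answer is portable SQL, so it can be demonstrated with Python's built-in `sqlite3` module (a trimmed-down version of the question's data; the reserved word `order` is renamed `myorder` per the answer's advice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytab (id TEXT, myorder INTEGER, result TEXT)")
conn.executemany("INSERT INTO mytab VALUES (?, ?, ?)",
                 [('1', 1, '5'), ('1', 2, '3'), ('1', 3, '1'),
                  ('2', 1, '3'), ('2', 2, '9'), ('2', 3, '5')])

# Each CASE picks out the result for one myorder slot; MAX collapses the group
rows = conn.execute("""
    SELECT id,
           MAX(CASE WHEN myorder = 1 THEN result END) AS result_1,
           MAX(CASE WHEN myorder = 2 THEN result END) AS result_2,
           MAX(CASE WHEN myorder = 3 THEN result END) AS result_3
    FROM mytab
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)
```

Three rows per id become one row with three result columns, in a single table scan.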
This is not exactly what you want, but it creates an **array** of the result values for each id, similar to group\_concat in MySQL: ``` SELECT id, array_agg(result) FROM table GROUP BY id ```
Query issue (row values to columns) in PostgreSQL
[ "sql", "database", "postgresql", "pivot" ]
I'd like to select two columns from a table and a count of associated rows from another. Basically, I've got two tables: Table `rem` : ``` rem_id (Int, AI, Index) | rem_name (Varchar) ``` Table `map` : ``` rem_id (Int, AI, Index) | data (Text) ``` I want to get the two columns of the table `rem` and how many data entries are stored for each row. I've tried to use the following query, but it doesn't work: ``` SELECT rem_id, rem_name, COUNT( SELECT map.rem_id FROM map WHERE map.rem_id = rem.rem_id) FROM rem; ``` I'm on PostgreSQL 9.3. Could you please help me solve this?
``` SELECT rem.rem_id, rem.rem_name, COUNT(map.rem_id) as cnt FROM rem LEFT JOIN map ON map.rem_id = rem.rem_id GROUP BY rem.rem_id, rem.rem_name ``` Use a `left join` to connect the tables, group by the `rem` columns and then you can count for each `rem` record.
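The accepted answer's `LEFT JOIN` + `COUNT` approach runs unchanged on SQLite, so here is a runnable sketch with Python's built-in `sqlite3` module (data is made up). Note that `COUNT(map.rem_id)` ignores the `NULL`s produced by the outer join, so parents with no children correctly count 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rem (rem_id INTEGER, rem_name TEXT);
    CREATE TABLE map (rem_id INTEGER, data TEXT);
    INSERT INTO rem VALUES (1, 'first'), (2, 'second');
    INSERT INTO map VALUES (1, 'a'), (1, 'b');   -- rem_id 2 has no entries
""")

rows = conn.execute("""
    SELECT rem.rem_id, rem.rem_name, COUNT(map.rem_id) AS cnt
    FROM rem
    LEFT JOIN map ON map.rem_id = rem.rem_id
    GROUP BY rem.rem_id, rem.rem_name
    ORDER BY rem.rem_id
""").fetchall()
print(rows)
```

Had the query used `COUNT(*)` instead, the unmatched parent would report 1 rather than 0.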
``` SELECT rem_id,rem_name,(SELECT COUNT(map.rem_id) FROM map WHERE map.rem_id = rem.rem_id) FROM rem; ```
Select some columns and a count
[ "sql", "postgresql" ]
I have `SELECT` query based on `IN()` clause, where I want to feed that clause with other queries like: ``` SELECT * FROM item_list WHERE itemNAME IN ( SELECT itemNAME FROM item_list WHERE itemID = '17' AND (itemSUB ='1' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1, SELECT itemNAME FROM item_list WHERE itemID = '57' AND (itemSUB ='0' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 ) ``` But it errors with: `#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT itemNAME FROM item_list WHERE itemID = '57' AND (itemSUB ='0' OR' at line 11`
User Goat CO deleted good answer: ``` SELECT * FROM item_list WHERE itemNAME = ( SELECT itemNAME FROM item_list WHERE itemID = '17' AND (itemSUB ='1' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1) OR itemName = ( SELECT itemNAME FROM item_list WHERE itemID = '57' AND (itemSUB ='0' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 ) ```
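The `OR itemName = (subquery)` rewrite above sidesteps the restriction on `LIMIT` inside `IN` subqueries, and it can be run as-is against SQLite. A sketch with Python's built-in `sqlite3` module (item names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item_list (itemID TEXT, itemSUB TEXT, itemNAME TEXT)")
conn.executemany("INSERT INTO item_list VALUES (?, ?, ?)",
                 [('17', '0', 'sword'), ('17', '1', 'shield'), ('57', '0', 'potion')])

# Each scalar subquery picks one name (highest itemSUB first); OR combines them
rows = conn.execute("""
    SELECT itemNAME FROM item_list
    WHERE itemNAME = (SELECT itemNAME FROM item_list
                      WHERE itemID = '17' ORDER BY itemSUB DESC LIMIT 1)
       OR itemNAME = (SELECT itemNAME FROM item_list
                      WHERE itemID = '57' ORDER BY itemSUB DESC LIMIT 1)
    ORDER BY itemNAME
""").fetchall()
print(rows)
```

Each scalar subquery returns exactly one value, so `LIMIT 1` is legal where an `IN (... LIMIT ...)` list would not be in MySQL.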
The syntax you are looking for is `union all` rather than `limit`: ``` SELECT * FROM item_list WHERE itemNAME IN ( SELECT itemNAME FROM item_list WHERE itemID = '17' AND (itemSUB ='1' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 union all SELECT itemNAME FROM item_list WHERE itemID = '57' AND (itemSUB ='0' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 ) ``` However, this probably will not work, because some SQL engines (notably MySQL) don't allow `limit` in such subqueries. Instead, you can do a join: ``` SELECT il.* FROM item_list il join (select * from ((SELECT itemNAME FROM item_list WHERE itemID = '17' AND (itemSUB ='1' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 ) union (SELECT itemNAME FROM item_list WHERE itemID = '57' AND (itemSUB ='0' OR itemSUB ='0') ORDER BY itemSUB DESC LIMIT 1 ) ) l ) l on il.itemName = l.itemName; ```
SQL queries inside IN() clause
[ "sql", "in-clause" ]
Please bear with me if this is a very basic question; I am not good at SQL and have to develop this query. I googled but couldn't get my desired result. I have two tables Product ``` ProductID, ProductName, Description 2010 A xyz 2011 B uyt 2012 C jkj ``` And Sales like below ``` Sales SalesID, ProductID, SaleDate, Status 1 2010 10-Dec-2013 K 2 2010 10-Dec-2013 u 3 2011 11-Dec-2013 i ``` I have to develop a query which will return a count of daily sales like the following ``` ProductName, TotalSaleCount, SaleDate A 2 10-Dec-2013 ``` I developed the following query ``` SELECT productid, Count(1) FROM sales WHERE saledate = Trunc(sysdate - 1) GROUP BY productid ``` The above query returns productid with a count, but I want productname with the count. So how do I join the sales table with product? Regards
Here's a slightly different answer than the existing ones; ``` SELECT Product.ProductName, TotalSales.TotalSaleCount, TotalSales.SaleDate FROM Product JOIN (SELECT ProductId, SaleDate, COUNT(*) as TotalSaleCount FROM Sales GROUP BY ProductId, SaleDate) TotalSales ON TotalSales.ProductId = Product.id ``` The results should be identical to some of the existing answers. **However**, this version is *likely* (but not guaranteed) to be faster - the columns chosen are more likely to have indices on them, and to be better choices for indices, too; `ProductName` is usually *not* a good column to put as the start of this type of index, and (depends on how smart the optimizer actually is), as it's not part of `Sales`, is less useful to the optimizer.
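The join-and-group pattern from both answers can be sketched with Python's built-in `sqlite3` module, using the question's sample data (dates stored as plain strings here, since SQLite has no Oracle-style `TRUNC(SYSDATE)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (ProductID INTEGER, ProductName TEXT);
    CREATE TABLE sales (SalesID INTEGER, ProductID INTEGER, SaleDate TEXT);
    INSERT INTO product VALUES (2010, 'A'), (2011, 'B');
    INSERT INTO sales VALUES (1, 2010, '2013-12-10'),
                             (2, 2010, '2013-12-10'),
                             (3, 2011, '2013-12-11');
""")

# Join sales to product to translate ProductID into ProductName,
# then group by name and date to get one count per product per day
rows = conn.execute("""
    SELECT p.ProductName, COUNT(*) AS TotalSaleCount, s.SaleDate
    FROM sales s
    JOIN product p ON s.ProductID = p.ProductID
    GROUP BY p.ProductName, s.SaleDate
    ORDER BY p.ProductName
""").fetchall()
print(rows)
```

Product A's two same-day sales collapse into a single row with count 2.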
Just join on the `Product` table: ``` SELECT productname, COUNT(*) FROM Sales JOIN Product ON Sales.productid = Product.productid WHERE saledate = TRUNC(SYSDATE - 1) GROUP BY ProductName ```
GroupBy with count on two tables
[ "sql", "oracle", "join" ]
I have a very simple example for the summation of two numbers: ``` CREATE OR REPLACE FUNCTION add(a integer, b integer) RETURNS integer AS $$ SELECT $1+$2; $$ LANGUAGE 'sql'; ``` My question is how to define a range `b` `10:20`, whose values will be counted up by one until the end of the range (20) is reached. The result has to be like this ``` res = a + b res = a + 10 res = a + 11 res = a + 12 res = a + 13 ``` When I call the function with: ``` SELECT add(1); ``` the results should be: 11, 12, 13, ..., 21. I haven't used loops like `FOR EACH` before (especially in `LANGUAGE sql`). Is it better to write such functions in `plpgsql`?
> I didn't use loops like FOR EACH before (especially in the LANGUAGE sql). There are no loops in SQL. (The only exception being [RECURSIVE CTEs](http://www.postgresql.org/docs/current/interactive/queries-with.html).) Functions with [`LANGUAGE sql`](http://www.postgresql.org/docs/current/interactive/xfunc-sql.html) (no quotes!), consist of SQL statements exclusively. If you need procedural elements like loops, you need to switch to [PL/pgSQL](http://www.postgresql.org/docs/current/interactive/plpgsql.html) (or any other procedural language) where [looping is easy enough](http://www.postgresql.org/docs/current/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS). Of course, the simple example you presented would best be solved with [generate\_series()](http://www.postgresql.org/docs/current/interactive/functions-srf.html), as other answers already pointed out. A word of caution: best use `generate_series()` in the `FROM` list. Using it in the `SELECT` list is allowed but frowned upon because non-standard. SRF (set returning functions) might be confined to the `FROM` list in future releases.
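`generate_series()` is PostgreSQL-specific, but the recursive CTE the accepted answer mentions as SQL's only loop-like construct works in SQLite too. A sketch with Python's built-in `sqlite3` module, producing the 11..21 series the question asks for (with `a = 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The recursive CTE generates 10..20, the outer SELECT adds a = 1 to each value
rows = [r[0] for r in conn.execute("""
    WITH RECURSIVE series(n) AS (
        SELECT 10
        UNION ALL
        SELECT n + 1 FROM series WHERE n < 20
    )
    SELECT 1 + n FROM series
    ORDER BY n
""")]
print(rows)
```

In PostgreSQL the body would simply be `SELECT a + g FROM generate_series(10, 20) AS g`, placed in the `FROM` list as the answer recommends.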
I think you are looking for the function `generate_series()`, documented [here](http://www.postgresql.org/docs/9.1/static/functions-srf.html). So, ``` select generate_series(1, 3) ``` Will return three rows: ``` 1 2 3 ``` You can use these numbers in arithmetic expressions.
How to use a range of values in a PostgreSQL function and to loop with a distinct interval
[ "sql", "function", "postgresql", "loops", "plpgsql" ]
Hello guys, I had 12 separate queries and some C# code to format the data on the page, but now we have converted those pages to SSRS reports. I have built a stored procedure which is a combination of the 12 queries plus some additional queries to format the data as we want it on the SSRS report. The new stored procedure is taking more time than the old page. Here is my query; is any optimization possible on the following stored procedure? Any help would be great. ``` CREATE PROCEDURE [dbo].[GetHistoryByYear_Get] -- Add the parameters for the stored procedure here @Year AS VARCHAR(4), @PreYear AS VARCHAR(4) AS BEGIN SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT * INTO #tempCustVol FROM ( SELECT * FROM ( SELECT * FROM ( SELECT @Year AS 'Year',Company, Customer,SUM(Jan) AS Jan, SUM(Feb) AS Feb, SUM(Mar) As Mar, SUM(Apr) AS Apr, SUM(May) AS May, SUM(Jun) AS Jun, SUM(Jul) AS Jul, SUM(Aug) AS Aug, SUM(Sep) AS Sep, SUM(Oct) AS Oct, SUM(Nov) AS Nov, SUM(Dec) AS Dec ,(SUM(Jan) + SUM(Feb) + SUM(Mar) + SUM(Apr) + SUM(May) + SUM(Jun) + SUM(Jul) + SUM(Aug) + SUM(Sep) + SUM(Oct) + SUM(Nov) + SUM(Dec) ) AS YearlyTotal FROM( SELECT Company, Customer, DateRcvd, SUM(Records) AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '01' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, SUM(RECORDS) AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '02' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, SUM(RECORDS) As Mar, 0 As Apr, 0 As May, 0 As 
Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '03' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, SUM(RECORDS) As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '04' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, SUM(RECORDS) As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '05' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, SUM(RECORDS) As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '06' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, SUM(RECORDS) As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '07' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, SUM(RECORDS) As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND 
SUBSTRING(DateRcvd,5,2) = '08' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, SUM(RECORDS) As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '09' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, SUM(RECORDS) As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '10' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, SUM(RECORDS) As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '11' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, SUM(RECORDS) As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '12' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd ) F GROUP BY Company, Customer ) AS ALLDATA UNION SELECT * FROM ( SELECT @PreYear AS 'Year',Company, Customer,SUM(Jan) AS Jan, SUM(Feb) AS Feb, SUM(Mar) As Mar, SUM(Apr) AS Apr, SUM(May) AS May, SUM(Jun) AS Jun, SUM(Jul) AS Jul, SUM(Aug) AS Aug, SUM(Sep) AS Sep, SUM(Oct) AS Oct, SUM(Nov) AS Nov, SUM(Dec) AS Dec ,(SUM(Jan) + SUM(Feb) + SUM(Mar) + SUM(Apr) + SUM(May) + SUM(Jun) + SUM(Jul) + SUM(Aug) + SUM(Sep) + SUM(Oct) + SUM(Nov) + 
SUM(Dec) ) AS YearlyTotal FROM( SELECT Company, Customer, DateRcvd, SUM(Records) AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '01' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, SUM(RECORDS) AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '02' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, SUM(RECORDS) As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '03' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, SUM(RECORDS) As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '04' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, SUM(RECORDS) As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '05' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, SUM(RECORDS) As Jun, 
0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '06' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, SUM(RECORDS) As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '07' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, SUM(RECORDS) As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '08' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, SUM(RECORDS) As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '09' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, SUM(RECORDS) As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '10' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, SUM(RECORDS) As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= 
@PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '11' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, SUM(RECORDS) As Dec FROM( select * from vwjmrep where datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND SUBSTRING(DateRcvd,5,2) = '12' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd ) F GROUP BY Company, Customer ) AS ALLDATA ) AS TEMPDATA ) AS data SELECT * FROM (SELECT * FROM #tempCustVol UNION SELECT @PreYear AS [Year],null,null,COALESCE(SUM(Jan),0),COALESCE(SUM(Feb),0),COALESCE(SUM(Mar),0),COALESCE(SUM(Apr),0), COALESCE(SUM(May),0),COALESCE(SUM(Jun),0),COALESCE(SUM(Jul),0),COALESCE(SUM(Aug),0),COALESCE(SUM(Sep),0),COALESCE(SUM(Oct),0), COALESCE(SUM(Nov),0),COALESCE(SUM(Dec),0),COALESCE((SUM(Jan) + SUM(Feb) + SUM(Mar) + SUM(Apr) + SUM(May) + SUM(Jun) + SUM(Jul) + SUM(Aug) + SUM(Sep) + SUM(Oct) + SUM(Nov) + SUM(Dec) ),0) AS YearlyTotal FROM #tempCustVol WHERE [Year] = @PreYear )AS DA ORDER BY CASE WHEN Company is null THEN 1 ELSE 0 END, Company,[Year] DROP TABLE #tempCustVol END ``` any help would be great i have indexed the tables and tables have lots of data it takes near about the 10 to 12 min to execute is there any way i can minimize it. 
and it's the SQL SERVER 2008 database **UPDATE** --- This is my updated stored procedure ``` BEGIN DECLARE @Year AS VARCHAR(4), @PreYear AS VARCHAR(4) SET @Year='2013' SET @PreYear='2012' SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT * INTO #tempCustVol FROM ( SELECT * FROM ( SELECT * FROM ( SELECT @Year AS 'Year',Company, Customer, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '01' THEN Records ELSE 0 END) AS Jan, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '02' THEN Records ELSE 0 END) AS Feb, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '03' THEN Records ELSE 0 END) AS Mar, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '04' THEN Records ELSE 0 END) AS Apr, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '05' THEN Records ELSE 0 END) AS May, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '06' THEN Records ELSE 0 END) AS Jun, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '07' THEN Records ELSE 0 END) AS Jul, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '08' THEN Records ELSE 0 END) AS Aug, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '09' THEN Records ELSE 0 END) AS Sep, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '10' THEN Records ELSE 0 END) AS Oct, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '11' THEN Records ELSE 0 END) AS Nov, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '12' THEN Records ELSE 0 END) AS Dec, SUM(Records) AS YearlyTotal FROM vwjmrep WHERE datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND Company IS NOT NULL GROUP BY Company, Customer ) AS ALLDATA UNION ALL SELECT * FROM ( SELECT @PreYear AS 'Year',Company, Customer, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '01' THEN Records ELSE 0 END) AS Jan, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '02' THEN Records ELSE 0 END) AS Feb, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '03' THEN Records ELSE 0 END) AS Mar, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '04' THEN Records ELSE 0 END) AS Apr, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '05' THEN Records ELSE 0 END) AS May, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '06' THEN Records ELSE 0 END) AS 
Jun, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '07' THEN Records ELSE 0 END) AS Jul, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '08' THEN Records ELSE 0 END) AS Aug, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '09' THEN Records ELSE 0 END) AS Sep, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '10' THEN Records ELSE 0 END) AS Oct, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '11' THEN Records ELSE 0 END) AS Nov, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '12' THEN Records ELSE 0 END) AS Dec, SUM(Records) AS YearlyTotal FROM vwjmrep WHERE datercvd >=@PreYear + '0101' and datercvd <= @PreYear + '1231' AND Company IS NOT NULL GROUP BY Company, Customer ) AS ALLDATA ) AS TEMPDATA ) AS data SELECT * FROM (SELECT * FROM #tempCustVol UNION ALL SELECT @PreYear AS [Year],null,null,COALESCE(SUM(Jan),0),COALESCE(SUM(Feb),0),COALESCE(SUM(Mar),0),COALESCE(SUM(Apr),0), COALESCE(SUM(May),0),COALESCE(SUM(Jun),0),COALESCE(SUM(Jul),0),COALESCE(SUM(Aug),0),COALESCE(SUM(Sep),0),COALESCE(SUM(Oct),0), COALESCE(SUM(Nov),0),COALESCE(SUM(Dec),0),COALESCE((SUM(Jan) + SUM(Feb) + SUM(Mar) + SUM(Apr) + SUM(May) + SUM(Jun) + SUM(Jul) + SUM(Aug) + SUM(Sep) + SUM(Oct) + SUM(Nov) + SUM(Dec) ),0) AS YearlyTotal FROM #tempCustVol WHERE [Year] = @PreYear )AS DA ORDER BY CASE WHEN Company is null THEN 1 ELSE 0 END, Company,[Year] DROP TABLE #tempCustVol END ``` which is still takes time but is there any more optimization possible thanks
I think, you can try replace this ``` SELECT * FROM ( SELECT @Year AS 'Year',Company, Customer,SUM(Jan) AS Jan, SUM(Feb) AS Feb, SUM(Mar) As Mar, SUM(Apr) AS Apr, SUM(May) AS May, SUM(Jun) AS Jun, SUM(Jul) AS Jul, SUM(Aug) AS Aug, SUM(Sep) AS Sep, SUM(Oct) AS Oct, SUM(Nov) AS Nov, SUM(Dec) AS Dec ,(SUM(Jan) + SUM(Feb) + SUM(Mar) + SUM(Apr) + SUM(May) + SUM(Jun) + SUM(Jul) + SUM(Aug) + SUM(Sep) + SUM(Oct) + SUM(Nov) + SUM(Dec) ) AS YearlyTotal FROM( SELECT Company, Customer, DateRcvd, SUM(Records) AS Jan, 0 AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '01' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd UNION SELECT Company, Customer, DateRcvd, 0 AS Jan, SUM(RECORDS) AS Feb, 0 As Mar, 0 As Apr, 0 As May, 0 As Jun, 0 As Jul, 0 As Aug, 0 As Sep, 0 As Oct, 0 As Nov, 0 As Dec FROM( select * from vwjmrep where datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND SUBSTRING(DateRcvd,5,2) = '02' AND Company IS NOT NULL ) AS T GROUP BY Company, Customer, DateRcvd GROUP BY Company, Customer ``` with this ``` SELECT @Year AS 'Year',Company, Customer, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '01' THEN Records ELSE 0 END) AS Jan, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '02' THEN Records ELSE 0 END) AS Feb, ..., SUM(Records) AS YearlyTotal FROM vwjmrep WHERE datercvd >=@Year + '0101' and datercvd <= @Year + '1231' AND Company IS NOT NULL GROUP BY Company, Customer ``` At least, there will be less Table Scans.
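The conditional-aggregation rewrite can be sanity-checked on a toy dataset. This is a minimal sketch using Python's built-in `sqlite3` (table and column names mirror the question, but the data is made up; SQLite's `SUBSTR` plays the role of T-SQL's `SUBSTRING`, and only three of the twelve month columns are shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vwjmrep (Company TEXT, Customer TEXT, DateRcvd TEXT, Records INTEGER);
INSERT INTO vwjmrep VALUES
  ('C1', 'A', '20130105', 3),
  ('C1', 'A', '20130120', 2),
  ('C1', 'A', '20130214', 4),
  ('C1', 'B', '20131201', 7);
""")

# One pass over the table pivots the months, instead of twelve UNIONed scans.
rows = conn.execute("""
SELECT Company, Customer,
       SUM(CASE WHEN SUBSTR(DateRcvd, 5, 2) = '01' THEN Records ELSE 0 END) AS Jan,
       SUM(CASE WHEN SUBSTR(DateRcvd, 5, 2) = '02' THEN Records ELSE 0 END) AS Feb,
       SUM(CASE WHEN SUBSTR(DateRcvd, 5, 2) = '12' THEN Records ELSE 0 END) AS Dec_,
       SUM(Records) AS YearlyTotal
FROM vwjmrep
WHERE DateRcvd >= '20130101' AND DateRcvd <= '20131231'
GROUP BY Company, Customer
ORDER BY Customer
""").fetchall()
print(rows)  # [('C1', 'A', 5, 4, 0, 9), ('C1', 'B', 0, 0, 7, 7)]
```

Each group is read once and every month bucket is filled in the same pass, which is exactly why this form needs far fewer table scans than the UNION version.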
Check this out (note that each conditional sum needs `END` to close the `CASE` expression): ``` CREATE PROCEDURE [dbo].[GetHistoryByYear_Get] -- Add the parameters for the stored procedure here @Year AS VARCHAR(4), @PreYear AS VARCHAR(4) AS BEGIN SET NOCOUNT ON; SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT * INTO #tempCustVol FROM ( SELECT * , Jan + Feb + Mar + Apr + May + Jun + Jul + Aug + Sep + Oct + Nov + [Dec] AS YearlyTotal FROM ( SELECT @Year AS 'Year', Company, Customer, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '01' THEN Records ELSE 0 END) Jan, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '02' THEN Records ELSE 0 END) Feb, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '03' THEN Records ELSE 0 END) Mar, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '04' THEN Records ELSE 0 END) Apr, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '05' THEN Records ELSE 0 END) May, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '06' THEN Records ELSE 0 END) Jun, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '07' THEN Records ELSE 0 END) Jul, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '08' THEN Records ELSE 0 END) Aug, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '09' THEN Records ELSE 0 END) Sep, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '10' THEN Records ELSE 0 END) Oct, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '11' THEN Records ELSE 0 END) Nov, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '12' THEN Records ELSE 0 END) [Dec] FROM vwjmrep WHERE Company IS NOT NULL AND (datercvd >=@Year + '0101' AND datercvd <= @Year + '1231') GROUP BY Company, Customer UNION ALL SELECT @PreYear AS 'Year', Company, Customer, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '01' THEN Records ELSE 0 END) Jan, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '02' THEN Records ELSE 0 END) Feb, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '03' THEN Records ELSE 0 END) Mar, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '04' THEN Records ELSE 0 END) Apr, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '05' THEN Records ELSE 0 END) May, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '06' THEN Records ELSE 0 END) Jun, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '07' THEN Records ELSE 0 END) Jul, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '08' THEN Records ELSE 0 END) Aug, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '09' THEN Records ELSE 0 END) Sep, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '10' THEN Records ELSE 0 END) Oct, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '11' THEN Records ELSE 0 END) Nov, SUM(CASE WHEN SUBSTRING(DateRcvd,5,2) = '12' THEN Records ELSE 0 END) [Dec] FROM vwjmrep WHERE Company IS NOT NULL AND (datercvd >=@PreYear + '0101' AND datercvd <= @PreYear + '1231') GROUP BY Company, Customer ) x ) ALLDATA ...Rest of the code here ```
Query optimization?
[ "", "sql", "sql-server", "stored-procedures", "query-optimization", "query-performance", "" ]
I need your help about the query below which takes more than 2 min to return a result: ``` SELECT p.weight, o.login, o.date, o.s_address, o.s_city, o.s_county, o.s_state, o.s_country, o.s_zipcode, o.phone, c.categoryid, c.category, o.orderid, p.product product_name, p.productcode sku, d.amount, v.value emplacement, ( SELECT ev.value FROM xcart_extra_field_values ev LEFT JOIN xcart_extra_fields ef ON ef.fieldid=ev.fieldid WHERE ev.productid = d.productid AND ef.field = 'a_type' LIMIT 1 ) type, o.customer_notes, o.membership, o.s_firstname, o.s_lastname, o.phone, d.price, o.email FROM `xcart_orders` o LEFT JOIN `xcart_shipping` s ON s.shippingid=o.shippingid LEFT JOIN `xcart_order_details` d ON d.orderid=o.orderid LEFT JOIN `xcart_products` p ON p.productid=d.productid LEFT JOIN `xcart_products_categories` pc ON pc.productid=p.productid LEFT JOIN `xcart_categories` c ON c.categoryid=pc.categoryid LEFT JOIN `xcart_extra_field_values` v ON v.productid=p.productid LEFT JOIN `xcart_extra_fields` f ON f.fieldid=v.fieldid WHERE o.shippingid IN ( SELECT DISTINCT shippingid FROM `xcart_rafale_shipping` WHERE rafale='1' ) AND ( SELECT COUNT(*) FROM `xcart_order_details` d2 LEFT JOIN `xcart_products_categories` pc2 ON pc2.productid=d2.productid WHERE d2.orderid=o.orderid AND pc2.categoryid NOT IN ( SELECT DISTINCT ac2.categoryid FROM `xcart_rafale_aggregation_categories` ac2 WHERE ac2.aggregationid='12' ) ) = 0 AND ( ( o.date BETWEEN '1386802800' AND '1386889199' ) OR (o.orderid IN ('44', '55', '66')) ) AND o.orderid NOT IN ('11', '22', '33', '123', '458') AND o.paid = 'Y' AND o.status <> 'F' AND o.status <> 'Q' AND o.status <> 'I' AND f.field = 'emplacement' AND pc.main = 'Y' ORDER BY v.value ASC, p.productcode ASC LIMIT 100 ``` The problem may come from the following clause ``` AND ( ( o.date BETWEEN '1386802800' AND '1386889199' ) OR (o.orderid IN ('44', '55', '66')) ) ``` because the query executes faster when I remove the `OR (o.orderid IN ('44', '55', '66'))` **There are 
indexes on o.date and o.orderid columns** I used the query below inspired by reply from @Clockwork-Muse: ``` SELECT p.weight, o.login, o.date, o.s_address, o.s_city, o.s_county, o.s_state, o.s_country, o.s_zipcode, o.phone, c.categoryid, c.category, o.orderid, p.product product_name, p.productcode sku, d.amount, v.value emplacement, ( SELECT ev.value FROM xcart_extra_field_values ev LEFT JOIN xcart_extra_fields ef ON ef.fieldid=ev.fieldid WHERE ev.productid = d.productid AND ef.field = 'a_type' LIMIT 1 ) type, o.customer_notes, o.membership, o.s_firstname, o.s_lastname, o.phone, d.price, o.email FROM `xcart_order_details` d INNER JOIN (SELECT * FROM `xcart_orders` WHERE ( orderid IN ('44', '55', '66') OR (`date` >= '1386802800' AND `date` <= '1386889199') ) ) o ON o.orderid = d.orderid INNER JOIN (SELECT DISTINCT shippingid FROM `xcart_rafale_shipping` WHERE rafale = '1') rf ON rf.shippingid = o.shippingid INNER JOIN `xcart_shipping` s ON s.shippingid=o.shippingid INNER JOIN `xcart_products` p ON p.productid=d.productid INNER JOIN (SELECT * FROM `xcart_products_categories` WHERE main = 'Y') pc ON pc.productid=p.productid LEFT JOIN `xcart_categories` c ON c.categoryid=pc.categoryid LEFT JOIN `xcart_extra_field_values` v ON v.productid=p.productid LEFT JOIN `xcart_extra_fields` f ON f.fieldid=v.fieldid WHERE NOT EXISTS ( SELECT d2.orderid FROM `xcart_order_details` d2 LEFT JOIN `xcart_products_categories` pc2 ON pc2.productid=d2.productid WHERE d2.orderid=o.orderid AND pc2.categoryid NOT IN ( SELECT DISTINCT ac2.categoryid FROM `xcart_rafale_aggregation_categories` ac2 WHERE ac2.aggregationid='12' ) ) AND o.orderid NOT IN ('11', '22', '33', '123', '458') AND o.paid = 'Y' AND o.status NOT IN ('F', 'Q', 'I') AND f.field = 'emplacement' ```
Besides any other problems your query is having, or indices it may need, it's doing more work than it needs to; here's a slightly tweaked version which *might* run faster: ``` SELECT p.weight, o.login, o.date, o.s_address, o.s_city, o.s_county, o.s_state, o.s_country, o.s_zipcode, o.phone, c.categoryid, c.category, o.orderid, p.product product_name, p.productcode sku, d.amount, v.value emplacement, (SELECT ev.value FROM xcart_extra_field_values ev INNER JOIN xcart_extra_fields ef ON ef.fieldid = ev.fieldid AND ef.field = 'a_type' WHERE ev.productid = d.productid) type, o.customer_notes, o.membership, o.s_firstname, o.s_lastname, o.phone, d.price, o.email FROM `xcart_orders` o INNER JOIN (SELECT DISTINCT shippingid FROM `xcart_rafale_shipping` WHERE rafale = '1') rf ON rf.shippingid = o.shippingid LEFT JOIN `xcart_shipping` s ON s.shippingid = o.shippingid LEFT JOIN `xcart_order_details` d ON d.orderid = o.orderid LEFT JOIN `xcart_products` p ON p.productid = d.productid LEFT JOIN `xcart_products_categories` pc ON pc.productid = p.productid AND pc.main = 'Y' LEFT JOIN `xcart_categories` c ON c.categoryid = pc.categoryid LEFT JOIN `xcart_extra_field_values` v ON v.productid = p.productid LEFT JOIN `xcart_extra_fields` f ON f.fieldid = v.fieldid AND f.field = 'emplacement' WHERE NOT EXISTS (SELECT * FROM `xcart_products_categories` pc2 LEFT JOIN `xcart_rafale_aggregation_categories` ac2 ON ac2.categoryid = pc2.categoryid AND ac2.aggregationid = '12' WHERE pc2.productid = d.productid AND ac2.categoryid IS NULL) AND ((o.date >= '1386802800' AND o.date <'1386889200') OR o.orderid IN ('44', '55', '66')) AND o.orderid NOT IN ('11', '22', '33', '123', '458') AND o.paid = 'Y' AND o.status NOT IN ('F', 'Q', 'I') ORDER BY v.value ASC, p.productcode ASC LIMIT 100 ``` A couple of other things - 1. You have `LEFT JOIN`s with a condition in the `WHERE` clause - this actually turns them into `INNER JOIN`s. 
I've moved the relevant conditions into the join, which will likely change your results. If you wanted an actual `INNER JOIN`, just change/remove the word. This is why it's best to put **ALL** conditions in a join, when possible. 2. Date/time/timestamps (even if not stored as that type) are a "measurement" - *all* measurements logically have some imprecision in the recording; to reflect this please use "lower-bound inclusive, upper-bound exclusive" (`a >= x < b`, needs to be flipped for negative values) for comparisons. I also recommend this for integer counts, for the sake of consistency. 3. Without an `ORDER BY` clause, any use of `LIMIT` (or similar statements) returns essentially uncontrollable results. If you want exactly one value, you **must** do one of the following - 1) use an aggregate (`MAX()`, etc), 2) write your query/structure your db such that only one value will meet the criteria, 3) provide a relevant `ORDER BY` for the use of "select position x" type constructs. Failure to do so will cause your query to return unexpected results when you least expect it (and without throwing a warning, either). In this case I find it extremely unlikely that there is more than one instance of a value in a EAV table (essentially, case #2). 4. Your original query contains a rather obfuscated double negative (`SELECT COUNT(*) ... = 0)`). Unfortunately, without knowing more about the nature of your data/table schema, I can't really eliminate the double negative (although I can make it more obvious. For the sake of future maintainers, please avoid double negatives whenever possible. In this case, it's because of your (perhaps overly) liberal use of `LEFT`-joins - Are you sure that information *isn't required*?
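Point 1 above is easy to demonstrate. Here is a small sketch with Python's `sqlite3` and a hypothetical two-table schema (not the question's actual tables): a filter on the right-hand table placed in `WHERE` discards the NULL-extended rows, so the `LEFT JOIN` behaves like an `INNER JOIN`, while the same filter in the `ON` clause keeps them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (pid INTEGER, name TEXT);
CREATE TABLE fields (pid INTEGER, field TEXT, value TEXT);
INSERT INTO products VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fields VALUES (1, 'emplacement', 'A3');   -- product 2 has no field row
""")

# Condition in WHERE: the NULL row produced for product 2 fails `f.field = ...`,
# so product 2 disappears -- the outer join is effectively an inner join.
where_rows = conn.execute("""
SELECT p.name, f.value FROM products p
LEFT JOIN fields f ON f.pid = p.pid
WHERE f.field = 'emplacement'
ORDER BY p.pid
""").fetchall()

# Condition in ON: product 2 is kept, with NULL for the missing value.
on_rows = conn.execute("""
SELECT p.name, f.value FROM products p
LEFT JOIN fields f ON f.pid = p.pid AND f.field = 'emplacement'
ORDER BY p.pid
""").fetchall()

print(where_rows)  # [('widget', 'A3')]
print(on_rows)     # [('widget', 'A3'), ('gadget', None)]
```

Which behavior is correct depends on whether rows without a matching `emplacement` field should appear in the report at all.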
The most likely thing to help you would be to make sure you have an index on `xcart_orders.orderid`, if you are sure that part of the query is making it slower.
OR clause slowing down SQL query
[ "", "mysql", "sql", "performance", "" ]
I have a query that contains a `WHERE` clause with a `CASE` statement in it (see code below), but somehow it doesn't seem to work. ``` select * FROM details where orgcode in (case when orgtype='P' then (SELECT distinct [PCode] FROM [GPOS_Extract].[dbo].[GP8288List]) else 0 end ) ```
How about ``` select * FROM details where (orgtype <> 'P' AND orgcode = 0) or orgcode in ( SELECT distinct [PCode] FROM [GPOS_Extract].[dbo].[GP8288List] ) ```
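A quick check of this rewrite on made-up sample data, using Python's `sqlite3` (note one subtlety: the `OR` version also admits a non-'P' row whose orgcode happens to appear in `PCode`, which may or may not match the original intent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE details (orgcode INTEGER, orgtype TEXT);
CREATE TABLE GP8288List (PCode INTEGER);
INSERT INTO details VALUES (10, 'P'), (99, 'P'), (0, 'X'), (5, 'X');
INSERT INTO GP8288List VALUES (10);
""")

# Boolean logic instead of CASE: 'P' rows must match the PCode list,
# other rows must have orgcode = 0.
rows = conn.execute("""
SELECT orgcode, orgtype FROM details
WHERE (orgtype <> 'P' AND orgcode = 0)
   OR orgcode IN (SELECT DISTINCT PCode FROM GP8288List)
ORDER BY orgcode
""").fetchall()
print(rows)  # [(0, 'X'), (10, 'P')]
```

The 'P' row not on the list (99) and the non-'P' row with a nonzero orgcode (5) are both excluded, which is the behavior the `CASE` version was trying to express.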
Or try this: ``` SELECT * FROM details WHERE details.orgcode IN ( SELECT DISTINCT (CASE WHEN details.orgtype='P' THEN [GPOS_Extract].[dbo].[GP8288List].PCode ELSE 0 END) FROM [GPOS_Extract].[dbo].[GP8288List] ) ```
How to use case statement inside where clause of sql 2000
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "sql-server-2008-r2", "" ]
Does Oracle have a datatype similar to [SQL Server's `RowVersion`](http://technet.microsoft.com/en-us/library/ms182776.aspx)? When you insert or update a row, the corresponding version column (which is of type `RowVersion`) is updated automatically. *[MSDN says about RowVersion](http://technet.microsoft.com/en-us/library/ms182776.aspx)*: > * Is a data type that exposes automatically generated, unique binary > numbers within a database. rowversion is generally used as a mechanism > for version-stamping table rows. The storage size is 8 bytes. The > rowversion data type is just an incrementing number and does not > preserve a date or a time. > * Each database has a counter that is incremented for each insert or > update operation that is performed on a table that contains a > rowversion column within the database. This counter is the database > rowversion. This tracks a relative time within a database, not an > actual time that can be associated with a clock. A table can have only > one rowversion column. Every time that a row with a rowversion column > is modified or inserted, the incremented database rowversion value is > inserted in the rowversion column. > * You can use the rowversion column of a row to easily determine whether > any value in the row has changed since the last time it was read. If > any change is made to the row, the rowversion value is updated. If no > change is made to the row, the rowversion value is the same as when it > was previously read. > * You can add a rowversion column to a table to help maintain the > integrity of the database when multiple users are updating rows at the > same time. You may also want to know how many rows and which rows were > updated without re-querying the table. We are designing a data model in Oracle and would like to use a version column to manage concurrency. I would also like to know if there is a better way in the Oracle world.
The simple answer is No - but it's easy to create one yourself with a NUMBER column and a trigger to set/update it. A simple example for Oracle 11gR2: ``` CREATE SEQUENCE global_rowversion_seq; ALTER TABLE mytable1 ADD rowversion NUMBER; ALTER TABLE mytable2 ADD rowversion NUMBER; CREATE TRIGGER mytable1_biu BEFORE INSERT OR UPDATE ON mytable1 FOR EACH ROW BEGIN :NEW.rowversion := global_rowversion_seq.NEXTVAL; END mytable1_biu; CREATE TRIGGER mytable2_biu BEFORE INSERT OR UPDATE ON mytable2 FOR EACH ROW BEGIN :NEW.rowversion := global_rowversion_seq.NEXTVAL; END mytable2_biu; ``` (If you're on an earlier Oracle version, the assignments in the triggers must be done with a query, e.g.: ``` SELECT global_rowversion_seq.NEXTVAL INTO :NEW.rowversion FROM dual; ``` Now, keep in mind in some cases this design may have a performance impact in extreme situations (e.g. databases with extremely high insert/update activity) due to contention from all database inserts/updates using the same sequence. Of course, in this circumstance you probably would avoid triggers in the first place anyway. Depending on how you use the rowversion column, it may be a good idea to use a separate sequence for each table instead. This would mean, of course, that rowversion would no longer be globally unique - but if you are only interested in comparing changes to rows within a table, then this would be fine. Another approach is to advance the counter for each row individually - this doesn't need a sequence and allows you to detect changes to a row (but does not allow comparing any row to another row): ``` ALTER TABLE mytable ADD rowversion NUMBER; CREATE TRIGGER mytable_biu BEFORE INSERT OR UPDATE ON mytable FOR EACH ROW BEGIN :NEW.rowversion := NVL(:OLD.rowversion, 0) + 1; END mytable_biu; ``` Each row will be inserted with rowversion = 1, then subsequent updates to that row will increment it to 2, 3, etc.
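The per-row counter variant can be sketched outside Oracle as well. Here is the same idea using Python's `sqlite3` (SQLite syntax, not Oracle's: SQLite triggers cannot assign to `:NEW`, so an `AFTER UPDATE` trigger bumps the column instead; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (id INTEGER PRIMARY KEY, value TEXT, rowversion INTEGER DEFAULT 1);

-- Bump the per-row counter on every update of the payload column.
CREATE TRIGGER mytable_au AFTER UPDATE OF value ON mytable
BEGIN
    UPDATE mytable SET rowversion = OLD.rowversion + 1 WHERE id = OLD.id;
END;
""")

conn.execute("INSERT INTO mytable (id, value) VALUES (1, 'a')")
conn.execute("UPDATE mytable SET value = 'b' WHERE id = 1")
conn.execute("UPDATE mytable SET value = 'c' WHERE id = 1")

version = conn.execute("SELECT rowversion FROM mytable WHERE id = 1").fetchone()[0]
print(version)  # 3
```

The row is inserted with rowversion 1 and each subsequent update increments it, which is enough for optimistic-concurrency checks ("update ... where id = ? and rowversion = ?") even though it is not globally unique across the database the way SQL Server's counter is.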
Oracle has SCN (System Change Numbers): <http://docs.oracle.com/cd/E11882_01/server.112/e10713/transact.htm#CNCPT039> > A system change number (SCN) is a logical, internal time stamp used by Oracle Database. SCNs order events that occur within the database, which is necessary to satisfy the ACID properties of a transaction. Oracle Database uses SCNs to mark the SCN before which all changes are known to be on disk so that recovery avoids applying unnecessary redo. The database also uses SCNs to mark the point at which no redo exists for a set of data so that recovery can stop. > > SCNs occur in a monotonically increasing sequence. Oracle Database can use an SCN like a clock because an observed SCN indicates a logical point in time and repeated observations return equal or greater values. If one event has a lower SCN than another event, then it occurred at an earlier time with respect to the database. Several events may share the same SCN, which means that they occurred at the same time with respect to the database. > > Every transaction has an SCN. For example, if a transaction updates a row, then the database records the SCN at which this update occurred. Other modifications in this transaction have the same SCN. When a transaction commits, the database records an SCN for this commit. Use an ORA\_ROWSCN pseudocolumn to examine current SCN of rows: <http://docs.oracle.com/cd/B28359_01/server.111/b28286/pseudocolumns007.htm#SQLRF51145> An example: ``` SELECT ora_rowscn, t.* From test t; ``` Demo --> <http://www.sqlfiddle.com/#!4/535bc/1> (On SQLFiddle explicit commits apparently don't work - on a real database each commit increases SCN). 
An example on a "real" database: ``` CREATE TABLE test( id int, value int ); INSERT INTO test VALUES(1,0); COMMIT; SELECT ora_rowscn, t.* FROM test t; ORA_ROWSCN ID VALUE ---------- ---------- ---------- 3160728 1 0 UPDATE test SET value = value + 1 WHERE id = 1; COMMIT; SELECT ora_rowscn, t.* FROM test t; ORA_ROWSCN ID VALUE ---------- ---------- ---------- 3161657 1 1 UPDATE test SET value = value + 1 WHERE id = 1; COMMIT; SELECT ora_rowscn, t.* FROM test t; ORA_ROWSCN ID VALUE ---------- ---------- ---------- 3161695 1 2 ``` If SCN of the transaction is know, we can use a flashback query to obtain a past value of the row: <http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_flashback.htm#g1026131> An example: ``` SELECT t.*, versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation FROM test VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE t; ID VALUE VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN VERSIONS_ENDTIME VERSIONS_XID VERSIONS_OPERATION ---------- ---------- ----------------- ------------------- --------------- ------------------- ---------------- ------------------ 1 2 3161695 13/12/10 08:19:39 06000300EA070000 U 1 1 3161657 13/12/10 08:18:39 3161695 13/12/10 08:19:39 06001200EA070000 U 1 0 3161657 13/12/10 08:18:39 SELECT t.*, versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation FROM test VERSIONS BETWEEN SCN 3161657 AND 3161657 t; ID VALUE VERSIONS_STARTSCN VERSIONS_STARTTIME VERSIONS_ENDSCN VERSIONS_ENDTIME VERSIONS_XID VERSIONS_OPERATION ---------- ---------- ----------------- ------------------- --------------- ------------------- ---------------- ------------------ 1 1 3161657 13/12/10 08:18:39 06001200EA070000 U ```
SQL Server : RowVersion equivalent in Oracle
[ "", "sql", "sql-server", "oracle", "rowversion", "" ]
I want to update a column of a table: ``` UPDATE product prod SET prod.prod_supplier_id = (SELECT s.prod_supplier_id FROM supplier s WHERE s.prodno = prod.prodno ) ``` The subquery `SELECT s.prod_supplier_id FROM supplier s WHERE s.prodno = prod.prodno` must not return a null result; if it is null, the update should not be made. How can I do that?
You need to filter the rows to be updated in the WHERE clause as well: ``` UPDATE product prod SET prod.prod_supplier_id = (SELECT s.prod_supplier_id FROM supplier s WHERE s.prodno = prod.prodno ) WHERE EXISTS (SELECT 42 FROM supplier s2 WHERE s2.prodno = prod.prodno); ``` It *might* be faster using a `MERGE` (assuming `prodno` is the primary key in `product`): ``` merge into product using ( select p.prodno, s.prod_supplier_id from product p join supplier s on s.prodno = p.prodno ) t on (t.prodno = prod.prodno) when matched then update set prod_supplier_id = t.prod_supplier_id; ``` **Not tested!**
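The reason for the extra `WHERE EXISTS` is easy to see on sample data: without it, every product with no matching supplier row would have its `prod_supplier_id` overwritten with NULL. A minimal sketch using Python's `sqlite3` (toy data, same table and column names as the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (prodno INTEGER PRIMARY KEY, prod_supplier_id INTEGER);
CREATE TABLE supplier (prodno INTEGER, prod_supplier_id INTEGER);
INSERT INTO product VALUES (1, 100), (2, 200);
INSERT INTO supplier VALUES (1, 111);          -- no supplier row for prodno 2
""")

# Guarded update: only rows with a matching supplier are touched.
conn.execute("""
UPDATE product
SET prod_supplier_id = (SELECT s.prod_supplier_id FROM supplier s
                        WHERE s.prodno = product.prodno)
WHERE EXISTS (SELECT 1 FROM supplier s WHERE s.prodno = product.prodno)
""")

rows = conn.execute(
    "SELECT prodno, prod_supplier_id FROM product ORDER BY prodno").fetchall()
print(rows)  # [(1, 111), (2, 200)] -- prodno 2 keeps 200 instead of becoming NULL
```

Dropping the `WHERE EXISTS` clause from the `UPDATE` would leave prodno 2 with a NULL supplier id, which is exactly the outcome the question wants to avoid.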
first of all create a back up table: ``` CREATE TABLE productBAK AS SELECT * FROM product; ``` now you can use update query like this: ``` UPDATE product prod SET prod.prod_supplier_id = (SELECT s.prod_supplier_id FROM supplier s WHERE s.prodno = prod.prodno and s.prod_supplier_id is not null ) WHERE prod.prodno in (SELECT s1.prodno FROM supplier s1 where s1.prod_supplier_id is not null); ```
oracle: update a column with not null value
[ "", "sql", "oracle", "" ]
This is my first question on Stack Overflow; I hope someone can help me. I am making a website similar to Facebook. Here's my SQL: ``` readsql = "SELECT * FROM post INNER JOIN ubuser ON (post.pos_USERID = ubuser.usr_ID) WHERE ubuser.usr_ID == '"session("ID")"'" ``` and here's the error I'm getting > Microsoft VBScript compilation error '800a0401' > > Expected end of statement > > /student/S0215538/newsfeed1.asp, line 22 ``` readsql = "SELECT * FROM post INNER JOIN ubuser ON (post.pos_USERID = ubuser.usr_ID) WHERE ubuser.usr_ID == ' "session("ID")" ' " --------------------------------------------------------------------------------------------------------------^ ``` I can't see why I am getting this error; if I comment out everything from the WHERE onwards, it works, but it's not what I need. Thanks in advance. **Update:** Thanks everyone, that was fast. I always forget the `&`, and the people who mentioned that `==` should be `=` were also right. That fixed the error; however, I now have a new error saying "Data type mismatch in criteria expression" on this line: `post.Open readsql, connection, adOpenkeyset, AdLockOptimistic`. "post" is my recordset and "connection" is the ADODB connection.
Needs to be ``` readsql = "SELECT * FROM post INNER JOIN ubuser ON (post.pos_USERID = ubuser.usr_ID) WHERE ubuser.usr_ID = '" & session("ID") & "'" ``` Problem is your syntax, strings need to be terminated with double quotes `"` and concatenated correctly using ampersand `&` in VBScript. Also remove the `==` and replace with `=` as RedFilter suggested.
String concatenation required `&`: ``` readsql = "SELECT * FROM post INNER JOIN ubuser ON (post.pos_USERID = ubuser.usr_ID) WHERE ubuser.usr_ID = '" & session("ID") & "'" ``` Also use only one `=` in your query.
Expected end of statement sql and sessions
[ "", "sql", "sql-server", "session", "vbscript", "asp-classic", "" ]
I am writing a stored procedure that uses dynamic SQL. I would like to be able to conditionally add the first join criterion if the UserGroup is `<>` 'Initial'. The following is not the actual code; it is just to show my issue: ``` SELECT A FROM MyTable IF UserGroup <> 'Initial' THEN INNER JOIN Table1 ON MyTable.A = Table1.A END INNER JOIN Table1 ON MyTable.B = Table1.B INNER JOIN Table1 ON MyTable.C = Table1.C ```
You can use dynamic sql ``` declare @Query varchar(max) set @Query = ' SELECT A FROM myTable ' if (UserGroup <> 'Initial') Begin set @Query = @Query +'INNER JOIN Table1 ON MyTable.A = Table1.A' End exec(@Query) ```
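The same conditional-concatenation idea can be sketched as a plain string builder; Python here only stands in for the T-SQL string handling, and `Table2`/`Table3` are substituted for the question's repeated `Table1` placeholder to keep the sketch unambiguous:

```python
def build_query(user_group: str) -> str:
    """Build the SELECT, adding the first join only when the
    user group is not 'Initial' (mirrors the dynamic-SQL answer)."""
    parts = ["SELECT A FROM MyTable"]
    if user_group != "Initial":
        parts.append("INNER JOIN Table1 ON MyTable.A = Table1.A")
    parts.append("INNER JOIN Table2 ON MyTable.B = Table2.B")
    parts.append("INNER JOIN Table3 ON MyTable.C = Table3.C")
    return " ".join(parts)

print(build_query("Initial"))  # no Table1 join
print(build_query("Admins"))   # includes the Table1 join
```

Note that if any part of the string comes from user input, it should be parameterised (e.g. with `sp_executesql`) rather than concatenated.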
Here an example of how you can get the same result using outer joins: ``` SELECT A FROM MyTable LEFT JOIN Table1 T1 ON (UserGroup <> 'Initial' AND MyTable.A = T1.A) INNER JOIN Table1 T2 ON MyTable.B = T2.B INNER JOIN Table1 T3 ON MyTable.C = T3.C WHERE (UserGroup <> 'Initial' AND T1.A IS NOT NULL) OR UserGroup = 'Initial' ```
Using a Case statement to conditionally modify the number of joins
[ "", "sql", "dynamic", "" ]
I have a huge table (200 million records); about 70% of it is not needed now (there is a column ACTIVE in the table, and those records have the value N). There are a lot of multi-column indexes, but none of them includes that column. Will removing that 70% of the records improve SELECT (ACTIVE='Y') performance (because Oracle has to read table blocks with no active records and then exclude them from the final result)? Is a shrink space necessary?
It's really impossible to say without knowing more about your queries. At one extreme, access by primary key would only improve if the height of the supporting index was reduced, which would probably require deletion of the rows and then a rebuild of the index. At the other extreme, if you're selecting nearly all active records then a full scan of the table with 70% of the rows removed (and the table shrunk) would take only 30% of the pre-deletion time. There are many other considerations -- selecting a set of data and accessing the table via indexes, and needing to reject 99% of rows after reading the table because it turns out that there's a positive correlation between the required rows and an inactive status. One way of dealing with this would be through list partitioning the table on the ACTIVE column. That would move inactive records to a partition that could be eliminated from many queries, with no need to index the column, and would keep the time for full scans of active records down. If you really do not need these inactive records, why do you not just delete them instead of marking them inactive? Edit: Furthermore, although indexing a column with a 70/30 split is not generally helpful, you could try a couple of other indexing tricks. For example, if you have an indexed column which is frequently used in queries (client\_id?) then you can add the active flag to that index. You could also construct a partial index: ``` create index my_table_active_clients on my_table (case when active = 'Y' then client_id end); ``` ... and then query on: ``` select ... from ... where (case when active = 'Y' then client_id end) = :client_id ``` This would keep the index smaller, and both indexing approaches would probably be helpful. Another edit: A beneficial side effect of partitioning could be that it keeps the inactive records and active records "physically" apart, and every block read into memory from the "active" partition of course only has active records. This could have the effect of improving your cache efficiency.
Partitioning, putting the active='NO' records in a separate partition, might be a good option. <http://docs.oracle.com/cd/B19306_01/server.102/b14223/parpart.htm>
deleting rows will improve select performance in oracle?
[ "", "sql", "performance", "oracle", "" ]
I have a parent table, call it `parents`. Each parent has some number of children in a second table, called `children`. A typical left join would be something like this: ``` select * from parents p left join children c on p.id = c.parent_id ``` But this returns every child row and repeats all the parent information. What I want instead (for this application) is the first child for each parent, so that the resultset has the same number of rows as the parent table. ``` Parent1 Child_of_parent1_1 Parent2 Child_of_parent2_1 Parent3 Child_of_parent3_1 ``` I've tried to put distinct in strange places but can't seem to figure this one out.
``` SELECT A.* FROM ( SELECT p.*, c.child_id, some_field_of_children FROM parents p LEFT JOIN children c ON p.id = c.parent_id ) A WHERE A.child_id = ( SELECT MIN(B.child_id) FROM children B WHERE B.parent_id = A.parent_id) ``` I assume that children table has 'child\_id' as its PK that we can use to get the first child. You can replace child\_id by another column like 'timestamp' OR 'child\_position', ...
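The MIN()-condition can also live directly in the join predicate, which keeps parents that have no children at all; a SQLite sketch with hypothetical sample data (a childless `parents` row included to show the behaviour):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents  (id INTEGER PRIMARY KEY);
    CREATE TABLE children (child_id INTEGER PRIMARY KEY,
                           parent_id INTEGER, name TEXT);
    INSERT INTO parents  VALUES (1), (2), (3);   -- parent 3 has no children
    INSERT INTO children VALUES (10, 1, 'first'),
                                (11, 1, 'second'),
                                (20, 2, 'only');
""")
rows = conn.execute("""
    SELECT p.id, c.name
      FROM parents p
      LEFT JOIN children c
        ON c.parent_id = p.id
       AND c.child_id = (SELECT MIN(c2.child_id)
                           FROM children c2
                          WHERE c2.parent_id = p.id)
     ORDER BY p.id
""").fetchall()
print(rows)  # [(1, 'first'), (2, 'only'), (3, None)]
```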
Freehand SQL: ``` Select * from Parent p cross join Child c where c.Id = (select max(c2.id) from Child c2 where c2.ParentId = p.Id) ``` Something like that will get one child with the current parent and join the two. If you want it to return rows even when there are no children, add: ``` or not exists (select max(c3.id) from Child c3 where c3.ParentId = p.Id) ``` Cheers -
Joining parent with one example of child
[ "", "mysql", "sql", "" ]
When inserting data into a SQL Server table, is it possible to specify which columns you want to insert data into? For a table with many columns, I know you can have syntax like this: ``` INSERT INTO MyTable (Name, col4_on, col8_on, col9_on) VALUES ('myName', 0, 1, 0) ``` But the above syntax becomes unwieldy when you have lots of columns, especially if they hold binary data. It becomes hard to match up which 1 and 0 go with which column. I'm hoping there's a named-parameter-like syntax (similar to what C# has) which looks like the following: ``` INSERT INTO MyTable VALUES (Name: 'myName', col4_on: 0, col8_on: 1, col9_on: 0) ``` Thanks
I figured out a way around this, but it's rather hacky and only works for tables which have a column with unique values: ``` INSERT INTO MyTable (Name) VALUES ('myName') UPDATE MyTable SET col4_on=0, col8_on=1, col9_on=0 WHERE Name = 'myName' ``` This could be expanded into a multiple-row insert as follows: ``` INSERT INTO MyTable (Name) VALUES ('row1'), ('row2'), ('row3') UPDATE MyTable SET col4_on=0, col8_on=1, col9_on=0 WHERE Name = 'row1' UPDATE MyTable SET col4_on=1, col8_on=0, col9_on=0 WHERE Name = 'row2' UPDATE MyTable SET col4_on=1, col8_on=1, col9_on=1 WHERE Name = 'row3' ```
You must specify the column names. However, there is one exception. If you INSERTing *exactly* the same number of columns as the target table has *in the same order* as they are in the table, use this syntax: ``` INSERT INTO MyTable VALUES ('val1A', 'val4A', 'val8A') ``` Note that this is a fragile way of performing an `INSERT`, because if that table changes, or if the columns are ordered differently on a different system, the `INSERT` may fail, or worse-- it may put the wrong data in each column. I've found that when I `INSERT` a lot of columns, I find the queries easier to read if I can group them somehow. If column names are long, I may put them on separate lines like so: ``` INSERT INTO MyTable ( MyTable_VeryLongName_Col1, MyTable_VeryLongName_Col4, MyTable_VeryLongName_Col8, -- etc. ) SELECT Very_Long_Value_1, Very_Long_Value_4, Very_Long_Value_8, -- etc. ``` Or you can group 2 columns on a line, or put spaces on every 5, or comment every 10th line, etc. Whatever makes it easier to read. If you find including column names onerous when INSERTing a lot of rows, then try chaining the data together: ``` INSERT INTO MyTable (col1, col4, col8) VALUES ('val1A', 'val4A', 'val8A'), ('val1B', 'val4B', 'val8B'), -- etc. ``` Or `UNION` them together: ``` INSERT INTO MyTable (col1, col4, col8) SELECT 'val1A', 'val4A', 'val8A' UNION ALL 'val1B', 'val4B', 'val8B' UNION ALL ... -- etc. ``` Or, SELECT them from another table: ``` INSERT INTO MyTable (col1, col4, col8) SELECT val1, va4, val8 FROM MyOtherTable WHERE -- some condition is met ```
SQL: How to insert data into a table with column names
[ "", "sql", "sql-server", "" ]
TL/DR: I run this query = `"Select * from test where id = 0"` and it returns all the rows. Here is my code below: ``` CREATE TABLE IF NOT EXISTS `test` ( `id` varchar(20) NOT NULL, `desc` varchar(100) NOT NULL, UNIQUE KEY `id` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `test` (`id`, `desc`) VALUES ('AA', 'AA Desc'), ('BB', 'BB Desc'); SELECT count(*) FROM test WHERE id = 0 ``` In my mind it should return no rows however it returns all the rows in the table. Am I missing something? Any help and explanation would be most welcome.
You're performing an arithmetic comparison on a text field. MySQL will coerce the values in your ID column to a numeric value and compare it with zero. `AA`, when coerced, equals zero.
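MySQL's implicit string-to-number cast takes the longest numeric prefix of the string and falls back to 0 when there is none, which is why `'AA' = 0` is true. A rough Python sketch of that rule (`mysql_numeric_cast` is a hypothetical helper written here for illustration, not a MySQL API):

```python
import re

def mysql_numeric_cast(s: str) -> float:
    """Approximate MySQL's implicit string-to-number coercion:
    the longest leading numeric prefix, else 0."""
    m = re.match(r"\s*[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?", s)
    return float(m.group()) if m else 0.0

print(mysql_numeric_cast("AA"))     # 0.0 -- so WHERE id = 0 matches 'AA'
print(mysql_numeric_cast("12abc"))  # 12.0
```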
``` SELECT COUNT(*) FROM test WHERE id = '0' ``` You need to compare with a string, because comparing with a number will convert the table data to a number, and non-numeric strings convert to 0.
MySQL column = 0 returns true
[ "", "mysql", "sql", "" ]
I have the following SQL: ``` SELECT SUM(theNumberCol) qty, TRUNC(theDateCol, 'WW') weekDate FROM theTable GROUP BY TRUNC(theDateCol, 'WW'); ``` This works fine for finding the sum of theNumberCol for each week. The problem is that the week seems to start on a Tuesday. For example "12/17/2013 8:56:05 AM" is truncated to "12/17/2013" while "12/16/2013 5:09:25 AM" is truncated to "12/10/2013". 1. How does Oracle determine which day to start the week? 2. Can I change the week to start on Saturday?
Take 7 days away from the truncated date and then find the next Saturday from that date: [SQL Fiddle](http://sqlfiddle.com/#!4/84436/5) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE theTable ( theNumberCol, theDateCol ) AS SELECT 1, TO_DATE( '20131202 23:15:52', 'YYYYMMDD HH24:MI:SS' ) FROM DUAL UNION ALL SELECT 2, TO_DATE( '20131203', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 3, TO_DATE( '20131204', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 4, TO_DATE( '20131205', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 5, TO_DATE( '20131206', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 6, TO_DATE( '20131207', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 7, TO_DATE( '20131208', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 8, TO_DATE( '20131209', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 9, TO_DATE( '20131210', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 10, TO_DATE( '20131211', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 11, TO_DATE( '20131212', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 12, TO_DATE( '20131213', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 13, TO_DATE( '20131214', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 14, TO_DATE( '20131215', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 15, TO_DATE( '20131216', 'YYYYMMDD' ) FROM DUAL UNION ALL SELECT 16, TO_DATE( '20131217', 'YYYYMMDD' ) FROM DUAL; ``` **Query 1**: ``` SELECT SUM(theNumberCol) AS qty, NEXT_DAY( TRUNC( theDateCol ) - INTERVAL '7' DAY, 'SATURDAY' ) weekDate FROM theTable GROUP BY NEXT_DAY( TRUNC( theDateCol ) - INTERVAL '7' DAY, 'SATURDAY' ) ORDER BY weekDate ASC ``` **[Results](http://sqlfiddle.com/#!4/84436/5/0)**: ``` | QTY | WEEKDATE | |-----|---------------------------------| | 15 | November, 30 2013 00:00:00+0000 | | 63 | December, 07 2013 00:00:00+0000 | | 58 | December, 14 2013 00:00:00+0000 | ```
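The `NEXT_DAY(TRUNC(d) - 7, 'SATURDAY')` expression returns the most recent Saturday on or before `d`, because `NEXT_DAY` finds the first Saturday strictly after its argument. The same bucketing rule sketched in Python and checked against the dates above:

```python
import datetime

def week_start_saturday(d: datetime.date) -> datetime.date:
    """Most recent Saturday on or before d -- what
    NEXT_DAY(TRUNC(d) - 7, 'SATURDAY') computes in Oracle."""
    # Python's weekday(): Monday=0 ... Saturday=5, Sunday=6
    return d - datetime.timedelta(days=(d.weekday() - 5) % 7)

print(week_start_saturday(datetime.date(2013, 12, 16)))  # 2013-12-14
print(week_start_saturday(datetime.date(2013, 12, 17)))  # 2013-12-14
print(week_start_saturday(datetime.date(2013, 12, 2)))   # 2013-11-30
```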
There are two different calculations available: classic Oracle, which calculates week = int(dayOfYear + 6) / 7, and ISO mode, which uses [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601). Format WW uses the classic calculation, while format IW uses the ISO standard. So as you see, WW does not start the week on any fixed day of the week; it just starts the week on January 1st, whichever day that is. IW is what should work for you if you live in any country that follows international standards.
How to change the day that Oracle TRUNC uses to start the week
[ "", "sql", "oracle", "" ]
Suppose you have two tables with exactly the same columns. ``` Table1: Name Type AveSls A 2 20 B 4 10 C 1 15 Table2: Name Type AveSls D 2 8 E 3 15 F 1 12 ``` How do I combine the two tables in SQL Server 2008 with a SQL statement so that the combined table looks like this: ``` Table3: Name Type AveSls A 2 20 B 4 10 C 1 15 D 2 8 E 3 15 F 1 12 ```
You can simply use `UNION ALL` (to get all rows even if they repeat in both tables) or `UNION` to get non-repeating rows. ``` SELECT name, type, avesls FROM table1 UNION ALL SELECT name, type, avesls FROM table2 ``` Read more about `UNION` on [MSDN](http://technet.microsoft.com/en-us/library/ms180026.aspx).
You need to use the [UNION](http://technet.microsoft.com/en-us/library/ms180026.aspx) operator. it's very simple to use: ``` SELECT column_name(s) FROM table1 UNION ALL SELECT column_name(s) FROM table2; ``` See the following useful links: 1. [SQL UNION Operator](http://www.w3schools.com/sql/sql_union.asp) 2. [Introduction and Example of UNION and UNION ALL](http://blog.sqlauthority.com/2008/10/15/sql-server-introduction-and-example-of-union-and-union-all/)
Combining two tables with SQL
[ "", "sql", "sql-server-2008", "" ]
I have two queries, and I believe I need to put query number two in the `FROM` clause of query number one. Could you tell me the best way to do this, please? **QUERY1** ``` SELECT PARTNO_AUD, PARTNO_ING, COSTSET_AUD, ITEMVER_AUD, PROCNO_AUD, PROCVER_AUD, PROCSTAGE_AUD, ALLITEM_AUD, COSTERR, dbo.SSI_DAVL_FUNC('COSTERR', COSTERR, 'E') AS DESCRIPTION FROM dbo.MBI030 WHERE (PARTNO_ING <> N'') AND (PROCNO_AUD <> N'') AND (COSTERR <> N'00') ``` **QUERY2** ``` SELECT PARTNO_B02PAR, PARTNO_B02COM, QTYOFF / (100 - PSLOSS) * 100 AS QTY FROM dbo.MBB020 ``` `PARTNO_AUD` is the same as `PARTNO_B02PAR` and `PARTNO_ING` is the same as `PARTNO_B02COM`
I am assuming all you want to do is join both tables to see the output together: ``` SELECT PARTNO_AUD, PARTNO_ING, COSTSET_AUD, ITEMVER_AUD, PROCNO_AUD, PROCVER_AUD, PROCSTAGE_AUD, ALLITEM_AUD, COSTERR, dbo.SSI_DAVL_FUNC('COSTERR', COSTERR, 'E') AS DESCRIPTION, QTY FROM dbo.MBI030 A INNER JOIN ( SELECT PARTNO_B02PAR, PARTNO_B02COM, QTYOFF / (100 - PSLOSS) * 100 AS QTY FROM dbo.MBB020) B ON A.PARTNO_AUD =B.PARTNO_B02PAR AND A.PARTNO_ING =B.PARTNO_B02COM WHERE (PARTNO_ING <> N'') AND (PROCNO_AUD <> N'') AND (COSTERR <> N'00') ``` For more info on joins, see [this link about joins](http://sqlbisam.blogspot.com/2013/12/InnerJoin-LeftOuterJoin-RighOuterJoin-and-FullOuterJoin.html)
something like: ``` SELECT PARTNO_AUD, PARTNO_ING, COSTSET_AUD, ITEMVER_AUD, PROCNO_AUD, PROCVER_AUD, PROCSTAGE_AUD, ALLITEM_AUD, COSTERR, dbo.SSI_DAVL_FUNC('COSTERR', COSTERR, 'E') AS DESCRIPTION FROM dbo.MBI030 inner join dbo.MBB020 on PARTNO_AUD = PARTNO_B02PAR and PARTNO_ING = PARTNO_B02COM WHERE (PARTNO_ING <> N'') AND (PROCNO_AUD <> N'') AND (COSTERR <> N'00') ``` but you may want to change the things you select, you can add: ``` PARTNO_B02PAR, PARTNO_B02COM, QTYOFF / (100 - PSLOSS) * 100 AS QTY ``` if you want.
Sub Query in the from clause in SQL
[ "", "sql", "subquery", "" ]
Just going crazy about what's wrong here. All I am trying to do is: ***"Take two columns from one table and insert them into another table"*** ![enter image description here](https://i.stack.imgur.com/wiNFC.png) The query that I use is this: ``` insert into Component_Capacitor values (Component_Capacitor.itemNo, Component_Capacitor.itemDescription) select [Item No#], [Item Description] from dbo.Sheet1$ where [Item Description] LIKE 'CAP %' ; ``` The error is this: ``` Msg 4104, Level 16, State 1, Line 1 The multi-part identifier "Component_Capacitor.itemNo" could not be bound. Msg 4104, Level 16, State 1, Line 1 The multi-part identifier "Component_Capacitor.itemDescription" could not be bound. ``` I checked the syntax and table name, and everything seems to be fine. The column types are also similar `(nvarchar(255), null)`. Any ideas what I am doing wrong?
I don't use SQL Server, but I think you need to remove the word "values" from your statement. ``` insert into Component_Capacitor (Component_Capacitor.itemNo, Component_Capacitor.itemDescription) select [Item No#], [Item Description] from dbo.Sheet1$ where [Item Description] LIKE 'CAP %' ; ```
Replace column names & table names properly. This should work. ``` INSERT INTO DBO.COMPONENT_CAPACITOR (ITEMNO, ITEMDESCRIPTION) SELECT A.ITEMNO, A. ITEMDESCRIPTION FROM SHEET A WHERE ITEMDESCRIPTION LIKE 'CAP%' ```
Inserting table data in to another table
[ "", "sql", "sql-server", "database", "sql-server-2008", "t-sql", "" ]
I have a million odd rows where most start ``` 'http://www.' or 'https://www.' ``` but occasionally they start with no 'www.' - this may be correct but the website owner wants consistency throughout the data and thus I need to update the table to always have 'www.' I'm struggling with the SQL to do this. I tried: ``` select * from the_million where URL like 'http://[!w]' ``` But that returns 0 records so I've fallen at the first hurdle of building up the SQL. I guess after I've got the records I want I'll then do a replace. I'm happy to run this in two goes for each of http and https so no need for anything fancy there.
You can try this query: ``` UPDATE the_million SET url=REPLACE(url, 'http://', 'http://www.') WHERE url NOT LIKE 'http://www.%' AND url NOT LIKE 'https://www.%' UPDATE the_million SET url=REPLACE(url, 'https://', 'https://www.') WHERE url NOT LIKE 'http://www.%' AND url NOT LIKE 'https://www.%' ``` Search & replace in 2 queries.
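The same normalisation expressed as a small Python function -- `add_www` is a hypothetical helper, but it is handy for sanity-checking the replace logic against sample URLs before running the UPDATE over a million rows:

```python
def add_www(url: str) -> str:
    """Insert 'www.' after the scheme when it is missing,
    mirroring the two UPDATE statements above."""
    for scheme in ("http://", "https://"):
        if url.startswith(scheme) and not url.startswith(scheme + "www."):
            return scheme + "www." + url[len(scheme):]
    return url

print(add_www("http://example.com/page"))  # http://www.example.com/page
print(add_www("https://www.example.com"))  # unchanged
```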
try this ``` select * from the_million where URL not like 'http://www.%' ```
Mysql SQL to update URLs that do not have www
[ "", "mysql", "sql", "" ]
I have a MySQL table. The columns are ID, ChId, TotalView... Suppose I want to get all the rows whose TotalView is within 5 of the current row's value (from 5 below to 5 above). I want to search the table for people who get a similar number of views. How can I write a query to get all such rows? ``` select * from test where test.chid = 1 and totalview are (-5 then current , +5 then current) ```
You may try this query: ``` SELECT * FROM test WHERE totalview > (current - 5) AND totalview < (current + 5); ```
Get your current view count in a subquery and use it in the search criteria: ``` SELECT * FROM test, (SELECT totalview AS currentview FROM test WHERE chid = 1) t1 WHERE (totalview <= (currentview + 5)) AND (totalview >= (currentview - 5)) ```
how to get the rows from mysql that is-5 or +5 then the current row?
[ "", "mysql", "sql", "database", "" ]
I am having an issue with MySQL. I want to have dynamic columns based on the rows. Here are the details: ``` SELECT `marks`.`id` , `marks`.`studentID` , `marks`.`subjectID` , `marks`.`mark` FROM `Mark` `marks` LEFT OUTER JOIN `Student` `students` ON ( `students`.`id` = `marks`.`studentID` ) WHERE ( `students`.`classID` =1 ) LIMIT 0 , 30 My Output is +----+-----------+-----------+------+ | id | studentID | subjectID | mark | +----+-----------+-----------+------+ | 1 | 1 | 1 | 20 | | 2 | 1 | 2 | 36 | | 3 | 2 | 1 | 47 | | 4 | 2 | 2 | 43 | +----+-----------+-----------+------+ 4 rows in set (0.00 sec) Output I need is +----+-----------+-----------+-----------+ | id | studentID | subject_1 | subject_2 | +----+-----------+-----------+-----------+ | 1 | 1 | 20 | 36 | | 2 | 2 | 47 | 43 | +----+-----------+-----------+-----------+ 4 rows in set (0.00 sec) ``` The number of subjects can vary depending on the entries in the Subject table. I need only one row per student showing all the marks. Here are the table structures I use. ``` -- -- Table structure for table `Mark` -- CREATE TABLE IF NOT EXISTS `Mark` ( `id` int(11) NOT NULL AUTO_INCREMENT, `studentID` int(11) NOT NULL, `subjectID` int(11) NOT NULL, `mark` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB -- -- Table structure for table `Student` -- CREATE TABLE IF NOT EXISTS `Student` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(45) NOT NULL, `classID` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB -- -- Table structure for table `Subject` -- CREATE TABLE IF NOT EXISTS `Subject` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(45) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB ``` Thanks in advance.
You can't have dynamic columns, at least without dynamically generating the SQL. You can build the SQL in a stored procedure as per this answer [MySQL pivot table query with dynamic columns](https://stackoverflow.com/questions/12598120/mysql-pivot-table-query-with-dynamic-columns) Alternatively it might be simpler to do this in your application code by selecting the distinct subjects in one query and using that result set to build the SQL that retrieves the result set you are after. At least with the logic in the application code you have some idea of how many columns you will be seeing in the result set.
According to your desired output, I'm assuming there are 2 rows for each `studentID` and each of the 2 rows has subjectID 1 or 2. Could you try this? **Using JOIN** ``` SELECT t1.studentID, t1.mark AS subject_1, t2.mark AS subject_2 FROM ( SELECT `marks`.`studentID` , `marks`.mark FROM `Mark` as `marks` LEFT OUTER JOIN `Student` AS `students` ON ( `students`.`id` = `marks`.`studentID` AND subjectID = 1) WHERE `students`.`classID` =1 ) t1 INNER JOIN ( SELECT `marks`.`studentID` , `marks`.mark FROM `Mark` as `marks` LEFT OUTER JOIN `Student` AS `students` ON ( `students`.`id` = `marks`.`studentID` AND subjectID = 2) WHERE `students`.`classID` =1 ) t2 ON t1.studentID = t2.studentID; ``` **Using CROSS TABULATION** ``` SELECT `marks`.`studentID`, SUM(IF(subjectID = 1, mark, 0)) AS subject_1, SUM(IF(subjectID = 2, mark, 0)) AS subject_2 FROM `Mark` as `marks` LEFT OUTER JOIN `Student` AS `students` ON ( `students`.`id` = `marks`.`studentID`) WHERE `students`.`classID` =1 GROUP BY marks.studentID ``` `JOIN` and `CROSS TABULATION` are general techniques for converting vertical results to horizontal (to the best of my knowledge).
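A quick way to check the shape of the cross-tabulation is to run it against the question's sample data; here in SQLite with `CASE` standing in for MySQL's `IF`. Note this only covers a fixed subject list -- the truly dynamic case still needs generated SQL, as the accepted answer explains:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Mark (id INTEGER PRIMARY KEY, studentID INTEGER,
                       subjectID INTEGER, mark INTEGER);
    INSERT INTO Mark VALUES (1,1,1,20), (2,1,2,36), (3,2,1,47), (4,2,2,43);
""")
rows = conn.execute("""
    SELECT studentID,
           SUM(CASE WHEN subjectID = 1 THEN mark ELSE 0 END) AS subject_1,
           SUM(CASE WHEN subjectID = 2 THEN mark ELSE 0 END) AS subject_2
      FROM Mark
     GROUP BY studentID
     ORDER BY studentID
""").fetchall()
print(rows)  # [(1, 20, 36), (2, 47, 43)] -- matches the desired output
```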
Convert Rows to Columns in MySQL Dynamically
[ "", "mysql", "sql", "yii", "pivot", "" ]
I'm running a cleanup job (every hour) on a table which constantly grows in rows. The job ran for about a week without any problems. Today I saw that the job started locking the entire table. Is this expected behaviour? Could it be that after a specific amount of rows that need to be deleted, it locks the entire table instead of only the specific rows that need to be deleted? Thanks in advance!
One possibility is that you need to index the column by which you are searching the table for rows to delete. If you do not have an index, then SQL Server will acquire many more locks while it searches for the rows to delete. I highly recommend deleting small chunks of rows in a loop. As others have pointed out, if you try to delete more than about 5,000 rows at once, SQL Server will escalate the row locks into a table lock. Deleting fewer records at a time-- say, 1,000-- avoids locking the entire table. Your job can continue looping over the deletes until it is done. The pseudocode for a looped delete looks like this: ``` declare @MoreRowsToDelete bit set @MoreRowsToDelete = 1 while @MoreRowsToDelete = 1 begin delete top (1000) MyTable from MyTable where MyColumn = SomeCriteria if not exists (select top 1 * from MyTable where MyColumn = SomeCriteria) set @MoreRowsToDelete = 0 end ``` Alternatively, you could look at the `@@ROWCOUNT` and use `READPAST` hint to avoid locked rows: ``` declare @RowCount int set @RowCount = 1 -- priming the loop while @RowCount > 0 begin delete top (1000) MyTable from MyTable with (readpast) where MyColumn = SomeCriteria set @RowCount = @@ROWCOUNT end ``` Note that the lock escalation threshold depends on other factors like concurrent activity. If you regularly have so much activity that even a 1,000 deletion will escalate to a table lock, you can lower the number of rows deleted at once. See the [Microsoft documentation on lock escalation](https://learn.microsoft.com/en-us/troubleshoot/sql/performance/resolve-blocking-problems-caused-lock-escalation#lock-escalation-thresholds) for more information.
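SQLite has no `DELETE TOP (n)`, but the same keep-batches-small pattern can be simulated with a keyed subquery; a sketch with made-up data (table name and the 50/50 expired split are assumptions for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, expired INTEGER)")
conn.executemany("INSERT INTO events (expired) VALUES (?)",
                 [(i % 2,) for i in range(5000)])  # 2500 expired rows

batches = 0
while True:
    # Delete at most 1000 matching rows per pass, like DELETE TOP (1000)
    cur = conn.execute(
        "DELETE FROM events WHERE id IN "
        "(SELECT id FROM events WHERE expired = 1 LIMIT 1000)")
    if cur.rowcount == 0:
        break
    batches += 1

remaining = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(batches, remaining)  # 3 2500  (batches of 1000 + 1000 + 500)
```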
If your query affects 5,000 rows or more in the same table, that table gets locked during the operation. This is standard SQL Server behavior: every DELETE takes a lock on the affected row, and once roughly 5,000 row locks accumulate on the same table, SQL Server performs a lock escalation from row locks to a table lock.
Delete statements locks table
[ "", "sql", "sql-server", "" ]
I'm trying to write an RQL query that does the equivalent of this sql: ``` select * from some_table t where t.property in (1, 2, 3, 4 ...) ``` I'm not sure if RQL supports this though. In the oracle docs, there's an example of how to do this on the ID property of a repository item: ``` ID IN { "0002421", "0002219", "0003244" ... } ``` but when I try to change ID in this example to the property I want to query on, I get an RQL ParseException. Does anyone know if this is possible?
This is possible through the QueryBuilder API (see the example below). I'm not sure why this is not available through plain RQL, though. ``` QueryExpression thePropertyExpression = theQueryBuilder.createPropertyQueryExpression("postalCode"); String [] zipcodeArray = {"22185", "22183"}; QueryExpression theValueExpression = theQueryBuilder.createConstantQueryExpression(zipcodeArray); Query theQuery = theQueryBuilder.createIncludesQuery(theValueExpression, thePropertyExpression); ```
From the [ATG Documentation](http://docs.oracle.com/cd/E24152_01/Platform.10-1/ATGRepositoryGuide/html/s0305rqlgrammar01.html) the RQL Grammar includes a specific comparison query for `ID IN` so changing `ID` to another property will not parse properly thus your `ParseException`. Looking further down the Grammar document you'll find the `ComparisonOperator`. The one of particular interest is the `INCLUDES ANY`. An example around its use [(from the docs)](http://docs.oracle.com/cd/E24152_01/Platform.10-1/ATGRepositoryGuide/html/s0305multivaluedpropertyqueries01.html) > interests INCLUDES ANY { "biking", "swimming" } > > This is equivalent to: > > (interests INCLUDES "biking") OR (interests INCLUDES "swimming") So this may work, as long as you are searching scalar properties. So that leaves you with the final option, which you are probably trying to avoid which is to create a big `OR` condition, which is the way I normally would do it, since you'll have to go through a loop to build up your `IN` statement anyway.
ATG RQL query like SQL 'property in (x, y, z)'
[ "", "sql", "atg", "" ]
I have a MySQL table: ``` create table tbl ( amount int ); insert into tbl (amount) values (1); insert into tbl (amount) values (2); insert into tbl (amount) values (3); insert into tbl (amount) values (4); ``` My goal is a report of how many values are in the following buckets, by using a case statment. Bucket A: values 0-1 Bucket B: values 2-5 Bucket C: values 6-9 First lets try a simple query: ``` select "Bucket A" as Bucket, count(amount) "Count" from tbl where amount in (0,1) union select "Bucket B" as Bucket, count(amount) "Count" from tbl where amount in (2,3,4,5) union select "Bucket C" as Bucket, count(amount) "Count" from tbl where amount in (6,7,8,9); ``` Result: ``` +----------+-------+ | Bucket | Count | +----------+-------+ | Bucket A | 1 | | Bucket B | 3 | | Bucket C | 0 | +----------+-------+ ``` Results are perfect, but I want a case statement. So I try this: ``` select sum(case when amount in (0,1) then 1 else 0 end) as "Bucket A", sum(case when amount in (2,3,4,5) then 1 else 0 end) as "Bucket B", sum(case when amount in (6,7,8,9) then 1 else 0 end) as "Bucket C" from tbl; ``` Result: ``` +----------+----------+----------+ | Bucket A | Bucket B | Bucket C | +----------+----------+----------+ | 1 | 3 | 0 | +----------+----------+----------+ ``` Values are correct, and great that I have a case statement, but problem is the values got pivoted. How can I 1. use a case statement 2. have no pivot?
You can do this using aggregation: ``` select (case when amount in (0, 1) then 'Bucket A' when amount in (2, 3,4, 5) then 'Bucket B' when amount in (6, 7, 8, 9) then 'Bucket C' end) as bucket, count(*) as `count` from tbl where amount in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) group by (case when amount in (0, 1) then 'Bucket A' when amount in (2,3,4,5) then 'Bucket B' when amount in (6,7,8,9) then 'Bucket C' end); ``` EDIT: Digital Chris makes a very good point. This can be solved by using `left outer join`: ``` select (case when tbl.amount in (0, 1) then 'Bucket A' when tbl.amount in (2, 3,4, 5) then 'Bucket B' when tbl.amount in (6, 7, 8, 9) then 'Bucket C' end) as bucket, count(tbl.amount) as `count` from (select 0 as amount union all select 2 as amount union all select 6 as amount ) throwaway left outer join tbl on throwaway.amount = tbl.amount where tbl.amount in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) group by (case when tbl.amount in (0, 1) then 'Bucket A' when tbl.amount in (2,3,4,5) then 'Bucket B' when tbl.amount in (6,7,8,9) then 'Bucket C' end); ``` Or, perhaps more clearly, by using the original query as a subquery: ``` select buckets.bucket, coalesce(`count`, 0) as `count` from (select 'Bucket A' as bucket union all select 'Bucket B' union all select 'Bucket C' ) buckets left outer join (select (case when amount in (0, 1) then 'Bucket A' when amount in (2, 3,4, 5) then 'Bucket B' when amount in (6, 7, 8, 9) then 'Bucket C' end) as bucket, count(*) as `count` from tbl where amount in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) group by (case when amount in (0, 1) then 'Bucket A' when amount in (2,3,4,5) then 'Bucket B' when amount in (6,7,8,9) then 'Bucket C' end) ) g on buckets.bucket = g.bucket; ```
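Running the aggregation form against the question's sample data (SQLite here, same SQL) shows both the shape of the result and the empty-bucket caveat discussed above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (amount INTEGER)")
conn.executemany("INSERT INTO tbl VALUES (?)", [(1,), (2,), (3,), (4,)])
rows = conn.execute("""
    SELECT CASE WHEN amount IN (0, 1) THEN 'Bucket A'
                WHEN amount IN (2, 3, 4, 5) THEN 'Bucket B'
                WHEN amount IN (6, 7, 8, 9) THEN 'Bucket C'
           END AS bucket,
           COUNT(*) AS cnt
      FROM tbl
     WHERE amount BETWEEN 0 AND 9
     GROUP BY bucket
     ORDER BY bucket
""").fetchall()
print(rows)  # [('Bucket A', 1), ('Bucket B', 3)] -- no row for the empty Bucket C
```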
``` select "Bucket A" as "Bucket", sum(case when amount in (0,1) then 1 else 0 end) as "Count" from tbl UNION select "Bucket B", sum(case when amount in (2,3,4,5) then 1 else 0 end) from tbl UNION select "Bucket C", sum(case when amount in (6,7,8,9) then 1 else 0 end) from tbl; ``` Like this? [sqlfiddle](http://sqlfiddle.com/#!2/1b3dd/1)
In MySQL how to rewrite a query using a case statement?
[ "", "mysql", "sql", "" ]
I think I'm doing the below SQL correctly but apparently not. My dbo.AttendeesLedger.SessionId is sometimes null, and those null values never appear in the result. ``` SELECT dbo.AttendeesLedger.Id, dbo.AttendeesLedger.SessionId, dbo.AttendeesLedger.AttendeesId, dbo.Attendees.Id, dbo.Sessions.Id FROM dbo.AttendeesLedger INNER JOIN dbo.Attendees ON (dbo.AttendeesLedger.AttendeesId = dbo.Attendees.Id) LEFT OUTER JOIN dbo.Sessions ON (dbo.AttendeesLedger.SessionId = dbo.Sessions.Id) ```
That is a known thing with NULLs: 0 and NULL are not the same thing, and because you join to other tables, rows with a NULL key will not be included unless you explicitly say they should be. Add `OR dbo.AttendeesLedger.SessionId IS NULL` to your condition to see them. More details on this: <http://www.w3schools.com/sql/sql_null_values.asp> This might also be of use to you (ANSI\_NULLS): <http://technet.microsoft.com/en-us/library/ms188048.aspx>
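As a quick, untested illustration of the advice above (the `42` is a made-up id; `= 42` can only match non-NULL rows, so the `IS NULL` branch is what brings the NULL rows back):

```sql
SELECT al.Id, al.SessionId
FROM dbo.AttendeesLedger al
WHERE al.SessionId = 42        -- hypothetical id; an equality test never matches NULL
   OR al.SessionId IS NULL     -- explicitly keeps the NULL rows
```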
Most likely your `INNER JOIN` is filtering out those results. I mean, the `LEFT JOIN` keeps all the values in the left but the `INNER JOIN` will only keep those values that make the first `ON` condition `true`. Whenever the first condition is `false` then those results will not be displayed... even if you use a `LEFT JOIN` because the unmatched values will no longer be there.
Nulls not appearing when SessionId is null
[ "sql", "sql-server" ]
Say I have the following table: ``` Tree Park Slide 1 1 1 1 1 1 1 1 1 ``` What kind of code would I use to remove the rows that have empty columns such that I would just have the following result ``` Tree Park Slide 1 1 1 1 1 1 ``` I am new to sql and was wondering on some tips on how to write this code. Would I use a case statement such that ``` Case WHEN Tree IS NULL OR Park IS NULL --Then what would I say to remove the row ```
Use `DELETE` to delete a row and check it with `IS NULL` in the `WHERE` clause. ``` DELETE FROM dbo.TableName WHERE Tree IS NULL OR Park IS NULL OR Slide IS NULL ``` If you also want to delete rows where one of these columns is not null but empty text(if it's a `varchar` column): ``` DELETE FROM dbo.TableName WHERE (Tree IS NULL OR Tree = '') OR (Park IS NULL OR Park = '') OR (Slide IS NULL OR Slide = '') ```
If you mean *physically deleting* then just put that in your `WHERE`: ``` DELETE FROM {table} WHERE Tree IS NULL OR Park IS NULL OR Slide IS NULL ``` Or take advantage of the fact that `{null} + n = {null}` and just do: ``` DELETE FROM {table} WHERE (Tree+Park+Slide) IS NULL ``` But I would *definitely* add a comment to explain *why* that works. If you mean *excluding from `SELECT` results* then you could use: ``` SELECT * FROM {table} WHERE Tree IS NOT NULL AND Park IS NOT NULL AND Slide IS NOT NULL ```
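The NULL-propagation rule that the `(Tree+Park+Slide) IS NULL` trick relies on is easy to check in isolation (SQL Server syntax; any arithmetic expression involving NULL evaluates to NULL):

```sql
SELECT 1 + NULL AS n;  -- n is NULL
SELECT CASE WHEN (1 + NULL + 3) IS NULL
            THEN 'row would be deleted' END AS result;  -- 'row would be deleted'
```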
How to remove an entire row if one column is empty
[ "sql" ]
I am running the following SQL query: ``` SELECT * FROM COUNTRIES WHERE USERTYPE = 'PREMIUM' ORDER BY SALES DESC; ``` The output of this query yields a list of countries. Now I need to populate a field in another table which is like TOP\_SALES\_COUNTRY, SECOND\_TOP\_SALES\_COUNTRY and THRID\_TOP\_SALES\_COUNTRY for which I only need the first, second and third records in the output of this SELECT statement. Kindly advise on how this can be achieved.
You could use any of the following, depending on your RDBMS (the question is tagged MySQL, so Option 2 is the one that applies here): **Option 1 (SQL Server):** ``` SELECT TOP number|percent column_name(s) FROM table_name; ``` **Option 2 (MySQL and PostgreSQL):** ``` SELECT column_name(s) FROM table_name LIMIT number; ``` **Option 3 (Oracle):** ``` SELECT column_name(s) FROM table_name WHERE ROWNUM <= number; ```
`SELECT * FROM COUNTRIES WHERE USERTYPE = 'PREMIUM' ORDER BY SALES DESC limit 0,1;` to get the first row
How to select one record from a SELECT query in MySQL
[ "mysql", "sql", "select" ]
I am receiving an xml into my stored procedure as follows: ``` <records> <record Name="Charles" Number="8" CustomerId="3" Date="12/17/2013 12:00 AM"/> </records> ``` So I read the data in my procedure and insert as follows:

```
INSERT INTO CustomerNumbers (Name, Number, CustomerId)
SELECT xtable.item.value('@Name[1]', 'NVARCHAR(100)') AS Name,
       xtable.item.value('@Number[1]', 'INT') AS Number,
       xtable.item.value('@CustomerId[1]', 'INT') AS CustomerId
FROM @p_XmlPassed.nodes('//records/record') AS xtable(item)
```

Now this works, but I have two problems: 1) If the record already exists I should not insert it but update it (need to check per row). 2) I need to also update another table [Notifications] based on the CustomerId and Date (need to check per row). What I have above inserts fine and reads the xml fine, but this part is a bit confusing to me; I'm not sure about the best way to handle it. How should I go about this? I need to grab the CustomerId and Date values to update [Notifications] while this is happening.
```
DECLARE @xml XML;
SET @xml = '<rows>
<row Name="Charles" Number="8" CustomerId ="3" Date="12/17/2013 12:00 AM"/>
<row Name="Mary" Number="7" CustomerId ="6" Date="12/19/2013 12:00 AM"/>
<row Name="Miriam" Number="10" CustomerId ="10" Date="12/18/2013 12:00 AM"/>
</rows>'

--INSERT INTO CustomerNumbers (Name, Number, CustomerId)
SELECT x.item.value('@Name[1]', 'NVARCHAR(100)') AS Name,
       x.item.value('@Number[1]', 'INT') AS Number,
       x.item.value('@CustomerId[1]', 'INT') AS CustomerId,
       x.item.value('@Date[1]', 'DATETIME') AS [Date]
INTO #TempTable --<-- Data into Temp Table
FROM @xml.nodes('//rows/row') AS x(item)
```

**Merge Statement**

```
MERGE CustomerNumbers AS trg
USING (SELECT Name, Number, CustomerId, [Date] FROM #TempTable) AS src
ON trg.CustomerId = src.CustomerId
WHEN MATCHED THEN
    UPDATE SET trg.Name = src.Name,
               trg.Number = src.Number,
               trg.[Date] = src.[Date]
WHEN NOT MATCHED THEN
    INSERT (Name, Number, CustomerId, [Date])
    VALUES (src.Name, src.Number, src.CustomerId, src.[Date]);
GO

/* Another Merge Statement for your second table here, then drop the temp table */
DROP TABLE #TempTable
GO
```
My rule of thumb is that if I hit "one table", I push directly into that table. If I hit 2 or more, I shred the xml into a #temp (or @variable) table, and then do Insert/Update/Upsert (Merge) from that #temp table. If I have more than 1 destination table, then I do my shredding outside of the BEGIN TRAN/COMMIT TRAN, then do the Upsert stuff inside the TRAN. Here is a "typical" setup for me. Also note the "where not exists" if you are inserting only (an option, not necessarily your scenario):

```
/* EXEC dbo.uspMyEntityUpsertByXml ' ' */
IF EXISTS ( SELECT * FROM INFORMATION_SCHEMA.ROUTINES
            WHERE ROUTINE_TYPE = N'PROCEDURE'
              AND ROUTINE_SCHEMA = N'dbo'
              AND ROUTINE_NAME = N'uspMyEntityUpsertByXml' )
BEGIN
    DROP PROCEDURE [dbo].[uspMyEntityUpsertByXml]
END
GO

CREATE Procedure dbo.uspMyEntityUpsertByXml
(
    @parametersXML XML
)
AS
BEGIN

    SET NOCOUNT ON

    IF OBJECT_ID('tempdb..#Holder') IS NOT NULL
    BEGIN
        DROP TABLE #Holder
    END

    CREATE TABLE #Holder
    (
        SurrogateKeyIDENTITY int not null IDENTITY (1,1),
        NameOf NVARCHAR(100),
        Number int,
        CustomerId int
    )

    /* Start XML usage */
    /* Only incur the penalty of XML parsing if XML was specified. Only process
       the xml if it exists and has at least 10 chars; 10 is just a somewhat
       arbitrary number, saying that an xml doc with < 10 chars doesn't have a
       whole lot going for it. DataLength is used for the Text datatype. */
    IF (@parametersXML IS NOT NULL) AND (DATALENGTH(@parametersXML) > 10)
    BEGIN
        INSERT INTO #Holder ( NameOf, Number, CustomerId )
        SELECT x.item.value('@Name[1]', 'NVARCHAR(100)') AS Name,
               x.item.value('@Number[1]', 'INT') AS Number,
               x.item.value('@CustomerId[1]', 'INT') AS CustomerId
        FROM @parametersXML.nodes('//rows/row') AS x(item)
    END
    /* End XML usage */

    /* INSERT INTO dbo.CustomerNumbers (Name, Number, CustomerId) */
    SELECT NameOf, Number, CustomerId
    FROM #Holder h
    /* WHERE NOT EXISTS ( SELECT NULL FROM dbo.CustomerNumbers innerRealTable
                          WHERE innerRealTable.Name = h.NameOf
                            AND innerRealTable.Number = h.Number
                            AND innerRealTable.CustomerId = h.CustomerId ) */

    IF OBJECT_ID('tempdb..#Holder') IS NOT NULL
    BEGIN
        DROP TABLE #Holder
    END

END
```
Handling XML using SQL to Create or Update multiple tables
[ "sql", "sql-server", "xml" ]
When I check if a value is NULL I usually use '=', This doesn't work somehow but 'is null' works. Why's that? What is the difference?
``` WHERE myvalue = null ``` will never be true. Since even if "myvalue" is undefined ("null"), the database can't determine that it equals null (since null by definition means "unknown" and two "unknowns" can't be equal to each other). ``` WHERE myvalue IS null ``` is more straight-forward. This checks whether "myvalue" is undefined (is "null" in database lingo).
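A minimal demonstration (SQL Server/MySQL syntax; `NULL = NULL` evaluates to UNKNOWN, which the conditional logic treats as false):

```sql
SELECT CASE WHEN NULL = NULL  THEN 'yes' ELSE 'no' END;  -- 'no'
SELECT CASE WHEN NULL IS NULL THEN 'yes' ELSE 'no' END;  -- 'yes'
```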
Have a look at : [SQL is null and = null](https://stackoverflow.com/questions/9581745/sql-is-null-and-null) Basically, null can be thought of as no value or no value know. A comparison operator (eg =) is effectively asking if an unknown or non existent value is equal to another unknown or non existent value. The only sensible return in this case is another unknown or non existent value - null. IS NULL asks is the value unknown or non existent - this can return either true or false.
SQL: What's the difference between '=' and 'is'?
[ "sql" ]
I have a problem with some PL/SQL syntax. I have reduced the code sample to its minimum below. The following works: ``` CREATE OR REPLACE FUNCTION MyFunction(LINE_ID SMALLINT) RETURN VARCHAR2 IS tmp VARCHAR2(4000); BEGIN tmp := CAST(LINE_ID AS VARCHAR2); RETURN(tmp); END MyFunction; / ``` However, I need to change the LINE\_ID parameter to NUMBER(5, 0), after which the following does not work: ``` CREATE OR REPLACE FUNCTION MyFunction2(LINE_ID NUMBER(5, 0)) RETURN VARCHAR2 IS tmp VARCHAR2(4000); BEGIN tmp := CAST(LINE_ID AS VARCHAR2); RETURN(tmp); END MyFunction2; / ``` The error message in Oracle SQL Developer 3.2.10.09 is > Error(1,36): PLS-00103: Encountered the symbol "(" when expecting one of the following: := . ) , @ % default character The symbol ":=" was substituted for "(" to continue. How should I write the `CAST` statement in order to make it work with `NUMBER(5, 0)` instead of `SMALLINT`? Again, this is not the original code but I am looking for a solution that does not deviate too much from the second version and preferably not another function call either. The `VARCHAR2` return type is important as well.
You can't specify `NUMBER` precision and scale for a function's parameter. Just declare it like this: ``` CREATE OR REPLACE FUNCTION MyFunction2(LINE_ID NUMBER) ```
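If the precision check itself matters to you, one option is to enforce it inside the function body with a constrained local variable - a sketch, untested:

```sql
CREATE OR REPLACE FUNCTION MyFunction2(LINE_ID NUMBER)
RETURN VARCHAR2
IS
  checked_id NUMBER(5, 0) := LINE_ID;  -- raises ORA-06502 if LINE_ID exceeds 5 digits
  tmp VARCHAR2(4000);
BEGIN
  tmp := CAST(checked_id AS VARCHAR2);
  RETURN(tmp);
END MyFunction2;
/
```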
The function you're looking for is `TO_CHAR`: ``` tmp := TO_CHAR(LINE_ID); ```
How do I CAST a NUMBER to VARCHAR2 in Oracle?
[ "sql", "oracle", "plsql", "casting" ]
I am just wondering if someone could help me with the following. I am using VB. NET and SQL Server and want to include some error handling in my code. I am doing a SQL SUM command which returns fine with no problems providing there is a match in the database. But when there is no match to the String I have specified I get a "Conversion from type 'DBNull' to type 'Integer' is not valid." I understand this error occurs because I have no value in the database to which I have specified. Below is the code that I am using which contains no error handling: ``` Dim dbCount1 As Integer SQLConnectLog() strSQLog = "SELECT SUM ([TotalTime]) FROM [Line1Log] WHERE ([State] = 'Test')" dbCommandLog = New SqlCommand(strSQLog, dbConnectionLog) dbCount1 = dbCommandLog.ExecuteScalar() SQLDisconnectLog() lblTest.Text = dbCount1 ``` Now I have tried several different ways to use isDBNull but to no avail, such as: ``` Dim dbCount1 As Integer SQLConnectLog() strSQLog = "SELECT SUM ([TotalTime]) FROM [Line1Log] WHERE ([State] = 'Test')" dbCommandLog = New SqlCommand(strSQLog, dbConnectionLog) If IsDBNull(dbCount1 = dbCommandLog.ExecuteScalar()) Then SQLDisconnectLog() lblTest.Text = dbCount1 Else lblTest.Text = "0" End If ``` The method I used above still does not work. I am sure I need to use isDBNull function but no sure where to place it. If anyone could give me some insight on how this can be done I would greatly appreciate it. Thanks in advance :)
A pure SQL solution can rely on `COALESCE` or `ISNULL` for this simple query, since it just returns a single aggregate column. ``` SELECT COALESCE(SUM ([TotalTime]), 0) FROM [Line1Log] WHERE ([State] = 'Test') ```
Try to change dbCount1 to nullable integer ``` Dim dbCount1 As Nullable(Of Integer) ``` Or ``` Dim dbCount1 As Integer? ```
How to handle Conversion from type 'DBNull' to type 'Integer' is not valid
[ "sql", "vb.net" ]
I need to use a case type of logic in my query. Database Used: Oracle 11g The present query looks like: ``` SELECT name FROM merchant m WHERE NOT EXISTS ( SELECT 1 FROM settlement s WHERE s.sid = m.mid AND s.rpt_id IN ('111','112','202') ) ``` There is another column SRID which takes precedence over SID and which can as well be mapped to MID. Now, we need something like IF SRID is null then use SID, else use SRID to map to MID in the WHERE clause.
I think you want the [coalesce function](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm#SQLRF00617), which returns the first non-null value. Unfortunately I don't have access to an Oracle system to confirm. ``` select NAME from merchant m where NOT EXISTS ( SELECT 1 from SETTLEMENT s where coalesce(s.SRID,s.SID) = m.MID and s.RPT_ID IN ('111','112','202') ) ``` [Here](https://stackoverflow.com/questions/950084/oracle-differences-between-nvl-and-coalesce) is a comparison between coalesce and nvl. (I was not familiar with nvl)
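For what it's worth, `COALESCE` behaves the same way in Oracle as elsewhere - it returns the first non-null argument:

```sql
SELECT COALESCE(NULL, NULL, 3) AS first_non_null FROM dual;  -- 3
```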
Please use ``` NVL(s.SRID, s.SID) = m.MID ``` instead of ``` s.SID = m.MID ``` in the where condition.
SQL - Where Clause to Use Case Logic
[ "sql", "oracle" ]
Someone help me out before I fall into depression. I'm creating a simple licencing voting system. When a licence is received, it has to be **Approved** or **Rejected** by a committee. This committee consists of 5 members and the voting process is based on a majority vote (meaning that every application received needs an approval/rejection by 3 or more of the committee members. E.g. *if 3 or more members vote to approve, the application is said to be "Approved"; if 3 or more members vote to reject, then the application is said to have been "Rejected"*). The idea is that 3 majority votes are always needed to determine the status of the application. If all the members of the committee have not voted, or the number of votes for each status (Approved or Rejected) is less than 3, then the application is said to be **"Pending"**. E.g. if 2 members approve and 2 reject **OR** 2 approve and 1 reject **OR** 1 approve and 2 reject **OR** 1 approve and 1 reject **OR** only 2 approve **OR** only 2 reject. Here's the structure of one of the tables I am working on:

```
CREATE TABLE [CommitteeApproval](
    [CommitteeApprovalID] [int] IDENTITY(1,1) NOT NULL,
    [LicenceApplicationID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [ActionDate] [date] NULL,
    [CommitteeApprovalStatusID] [int] NOT NULL)
```

*[CommitteeApprovalID]* is the Primary key while *[LicenceApplicationID]*, *[CommitteeApprovalStatusID]* and *[UserID]* are foreign keys to their respective tables.
*[CommitteeApprovalStatus]* Table (CommitteeApprovalStatusID is the Primary Key):

```
CommitteeApprovalStatusID   CommitteeApprovalStatusName
1                           Approve
2                           Reject
```

*[CommitteeApproval]* Table contains:

```
LicenceApplicationID   UserID   ActionDate   CommitteeApprovalStatusID
4173                   37       2013-12-17   2
4173                   36       2013-12-17   1
4173                   6        2013-12-17   1
4173                   7        2013-12-17   1
4174                   37       2013-12-17   1
4174                   36       2013-12-17   2
4174                   7        2013-12-17   2
4174                   6        2013-12-17   2
4174                   38       2013-12-17   2
4176                   38       2013-12-17   2
4177                   7        2013-12-17   2
4179                   36       2013-12-17   1
4179                   38       2013-12-17   2
```

I want to return the number of CommitteeApprovalStatus for each licence application. *For example:* for LicenceApplication **4174**, 4 members rejected and 1 member approved, so the application is said to be "Rejected". I used the below query to display the list of rejected applications to the user:

```
SELECT LicenceApplicationID, CommitteeApprovalStatusID, COUNT(UserID) AS votes
FROM CommitteeApproval
WHERE CommitteeApprovalStatusID = 2
GROUP BY LicenceApplicationID, CommitteeApprovalStatusID
HAVING COUNT(UserID) >= 3
```

I also successfully retrieved the list of approved applications with a similar query, but by replacing with ``` WHERE CommitteeApprovalStatusID=1 ``` Now the **PROBLEM** arises when I try to retrieve the list of "Pending" applications. 1. I can't capture applications where 2 members approved and 2 members rejected, 1 member approved and 1 member rejected, 2 members approved and none or 1 member rejected, 3 members rejected and 1 or 2 members approved, 2 members rejected and 3 members approved. 2. I can only capture applications of only one type at a time, e.g. WHERE CommitteeApprovalStatusID = 2 or 1, whereas I wanted to capture all applications that didn't qualify.
The query I wrote is:

```
SELECT LicenceApplicationID, COUNT(UserID) AS votes
FROM CommitteeApproval
WHERE CommitteeApprovalStatusID = 1
GROUP BY LicenceApplicationID
HAVING COUNT(UserID) < 3
```

This doesn't help much because I still have to write another one with > WHERE CommitteeApprovalStatusID=2 and it still won't be able to capture the results in problem 1. Is there a way to display all the "Pending" results with one query?
My attempt at readability: ``` WITH votes AS ( SELECT LicenceApplicationID, CommitteeApprovalStatusName Vote FROM CommitteeApproval A INNER JOIN CommitteeApprovalStatus S ON S.CommitteeApprovalStatusID = A.CommitteeApprovalStatusID ) SELECT LicenceApplicationID, Approve, Reject, CASE WHEN Approve >= 3 THEN 'Approved' WHEN Reject >= 3 THEN 'Rejected' ELSE 'Pending' END AS VoteStatus FROM votes PIVOT(COUNT(Vote) FOR Vote IN (Approve,Reject)) P ```
Try with this: ``` SELECT LicenceApplicationID, COUNT(UserID) AS votes, CASE WHEN CommitteeApprovalStatusID = 1 THEN 'Aprove' ELSE 'Rejected' END AS Status FROM CommitteeApproval WHERE CommitteeApprovalStatusID in (1, 2) GROUP BY LicenceApplicationID, CommitteeApprovalStatusID HAVING COUNT(UserID) < 3 ```
Trouble implementing T-SQL GROUP BY clause
[ "sql", "sql-server", "t-sql", "group-by", "aggregate-functions" ]
I am trying to count the number of records in a table. The table is called affiliations and only has 4 columns (2 of which are foreign keys) I want to count the number of records where the affiliated column is 0 and the business\_id is related to a particular account\_email. I know how to do this query using the IN keyword, but I was wondering if there is a better or more efficient way to do this. This is the IN version of the query: ``` SELECT COUNT(1) FROM affiliations WHERE business_id IN ( SELECT business_id FROM affiliations WHERE account_email = 'address@domain.ext' ) AND affiliated = 0 ``` I understand I could probably replace this with EXISTS: ``` SELECT COUNT(1) FROM affiliations WHERE EXISTS ( SELECT 1 FROM affiliations WHERE account_email = 'address@domain.ext' ) AND affiliated = 0 ``` Would the statement with EXISTS work? And as previously asked, is there just a better way to do this? Thanks in advance!
The first query from the question with `IN` clause is not equivalent to the second with `EXISTS`. To convert the first query with `IN`, you must use a dependent subquery:

```
SELECT COUNT(1)
FROM affiliations a1
WHERE EXISTS (
        SELECT 1
        FROM affiliations a2
        WHERE account_email = 'address@domain.ext'
          AND a1.business_id = a2.business_id
      )
  AND affiliated = 0
```

Pay attention to this condition: `AND a1.business_id = a2.business_id` The above query is semantically equivalent to your first query with `IN`. Their performance is the same as well, because MySql, during the optimization phase, internally converts a condition of this form: `outer_expr IN (SELECT inner_expr FROM ... WHERE subquery_where)` into this: `EXISTS (SELECT 1 FROM ... WHERE subquery_where AND outer_expr=inner_expr)` See this link for details: <http://dev.mysql.com/doc/refman/5.0/en/subquery-optimization-with-exists.html> Pay special attention to the discussion about NULL values and how NULL impacts the optimizer. In short - if the `business_id` column is declared as `NOT NULL`, then MySql is able to optimize these two queries. See the final conclusion (at the bottom of the page in this link): > To help the query optimizer better execute your queries, use these tips: > > * A column must be declared as NOT NULL if it really is. (This also helps other aspects of the optimizer.) > * If you don't need to distinguish a NULL from FALSE subquery result, you can easily avoid the slow execution path. Replace a comparison that looks like this: > > **outer\_expr IN (SELECT inner\_expr FROM ...)** > > with this expression: > > **(outer\_expr IS NOT NULL) AND (outer\_expr IN (SELECT inner\_expr FROM ...))** > > Then NULL IN (SELECT ...) will never be evaluated because MySQL stops evaluating AND parts as soon as the expression result is clear.
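The NULL caveat mentioned above can be seen directly in MySQL - an `IN` test involving NULL on either side can return NULL rather than true or false:

```sql
SELECT 3 IN (1, 2, NULL);  -- NULL: 3 might "equal" the unknown value
SELECT 1 IN (1, 2, NULL);  -- 1: a definite match wins
SELECT NULL IN (1, 2);     -- NULL: the outer value itself is unknown
```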
I'd use exists, but also remember to correlate the subquery to the primary table as below. ``` SELECT COUNT(1) FROM affiliations a WHERE exists ( SELECT 1 FROM affiliations a1 WHERE account_email = 'address@domain.ext' and a1.business_id=a.business_id ) AND affiliated = 0 ```
A better / more efficient way to write this query
[ "mysql", "sql" ]
How to optimize this query? I have tried using `LEFT JOIN` clause but still take at least 3 seconds. ``` SELECT DISTINCT MBLNR FROM MKPF WHERE MANDT='368' AND XBLNR = '0150525913' AND MBLNR NOT IN ( SELECT DISTINCT SMBLN FROM MSEG WHERE XBLNR_MKPF = '0150525913' ) ``` After using `LEFT JOIN`, still takes 3 seconds to get the results: ``` SELECT DISTINCT A.MBLNR FROM MKPF AS A LEFT JOIN MSEG AS B ON B.XBLNR_MKPF=A.XBLNR AND A.MBLNR <> B.SMBLN WHERE A.XBLNR = '0150525913' ``` Using `LEFT OUTER JOIN` really takes much time to return the rows. Thanks.
The correct left outer join formulation of the first query is: ``` SELECT DISTINCT A.MBLNR FROM MKPF A LEFT JOIN MSEG B ON B.SMBLN = A.MBLNR AND B.XBLNR_MKPF = '0150525913' WHERE A.XBLNR = '0150525913' and A.MANDT='368' and B.SMBLN is null; ``` Depending on the sizes of the tables, 3 seconds might be reasonable. However, you can speed the query by using indexes. This query suggests some indexes: `MKPF(XBLNR, MANDT, MBLNR)` and `MSEG(XBLNR_MKPF, SMBLN)`.
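Expressed as DDL, the suggested indexes would look something like this (the index names are placeholders):

```sql
CREATE INDEX mkpf_xblnr_mandt_mblnr ON MKPF (XBLNR, MANDT, MBLNR);
CREATE INDEX mseg_xblnrmkpf_smbln  ON MSEG (XBLNR_MKPF, SMBLN);
```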
While intuitive, "not in" is slow. So, change this: ``` AND MBLNR NOT IN ( SELECT DISTINCT SMBLN FROM MSEG WHERE XBLNR_MKPF = '0150525913' ) ``` to ``` AND MBLNR IN (SELECT DISTINCT MBLNR FROM MKPF WHERE MANDT='368' AND XBLNR = '0150525913' except SELECT DISTINCT SMBLN FROM MSEG WHERE XBLNR_MKPF = '0150525913' ) ```
Optimizing query still takes a bit long time
[ "sql", "db2" ]
I would like to check which user performed insertions into a specific table on MS SQL Server 2008. I am aware of some logging info being stored, but I don't know how to access it. I would be grateful for specific info on my question, and also for a general pointer on where and what to look for if I ever need other information too. Thanks!
Are you talking about retrieving this from the transaction logs? This isn't ideal as you have no guarantee that the relevant rows will still be available in the active log and it is less efficient to query but something like the below would do it (returns `USER_NAME()` rather than the likely more useful `SUSER_NAME()` though). Change `dbo.X` to your actual table name. ``` DECLARE @allocation_unit_ids TABLE ( allocation_unit_id BIGINT PRIMARY KEY ) INSERT INTO @allocation_unit_ids SELECT allocation_unit_id FROM sys.allocation_units au JOIN sys.partitions p ON au.container_id = CASE WHEN au.type IN ( 1, 3 ) THEN p.hobt_id WHEN au.type = 2 THEN p.partition_id END WHERE p.object_id = OBJECT_ID('dbo.X'); WITH L1 AS (SELECT [Transaction ID], [Begin Time], [End Time], [Transaction SID], CASE WHEN Operation = 'LOP_INSERT_ROWS' AND AllocUnitId IN (SELECT allocation_unit_id FROM @allocation_unit_ids) THEN 1 END AS I FROM sys.fn_dblog(NULL, NULL) l), L2([Transaction ID], TransactionBegin, TransactionEnd, sid) AS (SELECT [Transaction ID], MAX([Begin Time]), MAX([End Time]), MAX([Transaction SID]) FROM L1 GROUP BY [Transaction ID] HAVING 1 = MAX(I)) SELECT TransactionBegin, TransactionEnd, p.name AS PrincipalName FROM L2 JOIN sys.database_principals p ON p.sid = L2.sid ```
You can try out ApexSQL Log to read the transaction log. The trial is fully functional for 14 days; add the available data sources and filter only INSERT statements. The bottom line is that it allows viewing transaction log data in a read-friendly format. Also you can check out the undocumented function [fn\_dblog](http://raresql.com/2013/04/15/sql-server-undocumented-function-fn_dblog/) or the [DBCC LOGINFO](http://sqlblog.com/blogs/kalen_delaney/archive/2009/12/21/exploring-the-transaction-log-structure.aspx) command. EDIT: Forgot to give you the link to the tool: <http://www.apexsql.com/sql_tools_log.aspx> *Disclaimer: I work for ApexSQL as a support engineer*
Check who performed insertion in table on MS SQL Server
[ "sql", "sql-server" ]
I have the below requirement to fulfill. How to achieve this simplistically. There is an inflation rate table as below ``` 2009, 5% 2010, 5% 2011, 5% 2012, 5% 2013, 5% ``` I need to first get ``` 2009, 5%, 5 2010, 5%, 5.25 - which is 5% over 5 2011, 5%, 5.5125 - which is 5% over 5.25 2012, 5%, 5.788125 - which is 5% over 5.5125 2013, 5%, 6.07753125 - which is 5% over 5.788125 ``` And then get ``` 2009, 5%, 5 , 5 2010, 5%, 5.25 , 10.25 = 5.25 + 5 2011, 5%, 5.5125 , 15.7625 = 5.5125 + 10.25 and so on 2012, 5%, 5.788125 , 21.550625 2013, 5%, 6.07753125 , 27.62815625 ```
I think the best way to do this on SQL Server is to use a common table expression with recursion. I'm not 100% sure because I can't test it here, but something along the lines of the example below. Btw I assume the rates in the table are stored as fractions, so 5% is 1.05 and 10.25% is 1.1025 etc.

```
WITH MyCompoundRates (TheYear, TheRate, CompoundRate)
AS
(
  -- select one anchor record, starting point record
  SELECT TheYear,
         TheRate, -- I'm assuming "5%" is stored as value 1.05
         TheRate as CompoundRate
  FROM MyRatesTable
  WHERE TheYear = 2009 -- <- starting point for recursion

  UNION ALL

  -- select recursive records, by linking them to a previous record
  SELECT r.TheYear,
         r.TheRate,
         r.TheRate * c.CompoundRate as CompoundRate -- calculate compound rate
  FROM MyRatesTable r
  JOIN MyCompoundRates c ON r.TheYear = c.TheYear+1 -- recursion! link a year to previous year
)
-- Statement that executes the CTE
SELECT TheYear, TheRate, CompoundRate
FROM MyCompoundRates
```
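The question also asks for a running total of the compounded values (the fourth column). One way to add it, replacing the final SELECT of the recursive statement above with a windowed SUM (this assumes SQL Server 2012 or later):

```sql
SELECT TheYear, TheRate, CompoundRate,
       SUM(CompoundRate) OVER (ORDER BY TheYear
                               ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM MyCompoundRates;
```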
Lacking a `PRODUCT()` statement, the query becomes a little more complex than would be required otherwise, but this should work well; it uses common table expressions for each step, and logarithmic sums to simulate `PRODUCT()`. ``` WITH cte1 AS ( SELECT a.year, 5 * EXP(SUM(COALESCE(LOG(b.rate),0))) rate FROM inflation a LEFT JOIN inflation b ON a.year > b.year GROUP BY a.year ), cte2 AS ( SELECT year, rate, SUM(rate) OVER (ORDER BY year) rate_sum FROM cte1 ) SELECT * FROM cte2 ORDER BY year ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/67b4a/1).
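The `EXP(SUM(LOG(...)))` trick above works because summing logarithms multiplies the underlying values: exp(ln a + ln b) = a*b. A quick sanity check (SQL Server; the result is subject to tiny floating-point error):

```sql
SELECT EXP(LOG(2.0) + LOG(3.0));  -- approximately 6
```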
inflation calculation in MS SQL
[ "sql", "sql-server", "join" ]
I have this query ``` SELECT username FROM paymentValue ``` I can sum `paymentValue` for one username like ``` SELECT username, SUM(paymentValue) WHERE username = 'john' ``` But how do I select the sums for all usernames, to get a result like: john, 146 marry, 3456 anna, 2043
use the 'group by' clause ``` select username, sum(paymentvalue) as sumofpayments from thetable group by username ```
``` SELECT username, SUM(paymentValue) FROM TableName GROUP BY username ``` to JOIN with another table ``` SELECT username, SUM(paymentValue) FROM TableName INNER JOIN SecondTable ON TableName.RefrencingColumn = SecondTable.RefrencingColumn WHERE username = 'john' GROUP BY username ```
SQL query - select usernames and sum their row
[ "sql" ]
``` <% postit = request.querystring("thispost") response.write(postit) %> ``` `postit` is the variable. The `response.write` works and this is all above the SQL statement below. This is the SQL however when I add the `postit` variable I get this error message: ``` delCmd.CommandText="DELETE * FROM post WHERE (pos_ID = postit )" ``` ``` Microsoft Access Database Engine error '80040e10' No value given for one or more required parameters. /student/s0190204/wip/deleterecord.asp, line 32 ```
Add a parameter to the SQL:

```
delCmd.CommandText = "DELETE * FROM post WHERE (pos_ID = ?)"
delCmd.Parameters.Append delCmd.CreateParameter("posid", adInteger, adParamInput) ' input parameter
delCmd.Parameters("posid").Value = postit
```
A couple of things that will help you in the future:

1. Use `Option Explicit` to avoid hiding issues that will come back to bite you later on.
2. Use the `ADODB.Command` object, which is very versatile, enabling you to do a range of database calls, from simple dynamic SQL statements to Stored Procedures, without the risk of SQL injection.

There are a few tips that can speed things up when using the `ADODB.Command` object in your code, which will be demonstrated in the example below (assumes you already have a connection string stored in a global config called `gs_connstr`):

```
<%
Option Explicit

Dim postit
postit = Request.QueryString("thispost")
'Always do some basic validation of your Request variables
If Len(postit) > 0 And IsNumeric(postit) Then postit = CLng(postit) Else postit = 0

Dim o_cmd, o_rs, a_rs, i_row, i_rows, l_affected
Dim SQL

'SQL statement to be executed. For CommandType adCmdText this can be any dynamic
'statement, but adCmdText also gives you an added bonus - Parameterised Queries.
'Instead of concatenating values into your SQL you can specify placeholders (?)
'that you will define values for that will get passed to the provider in the order
'they are defined in the SQL statement.
SQL = "DELETE * FROM post WHERE (pos_ID = ?)"

Set o_cmd = Server.CreateObject("ADODB.Command")
With o_cmd
  'ActiveConnection will accept a Connection String, so there is no need
  'to instantiate a separate ADODB.Connection object; the ADODB.Command object
  'will handle this and also open the connection ready.
  .ActiveConnection = gs_connstr
  .CommandType = adCmdText
  .CommandText = SQL

  'When using Parameters the most important thing to remember is the order you
  'appended your parameters to the Parameters collection, as this will determine
  'the order in which they are applied to your SQL query at execution. Because
  'of this, the name you give to your parameters is not important in terms of
  'execution, but I find specifying a meaningful name is best (especially when
  'revisiting some code a few years down the line).
  Call .Parameters.Append(.CreateParameter("@pos_ID", adInteger, adParamInput, 4))

  'Parameter values can be passed in via the Execute() method using an Array
  'without having to define the parameter values explicitly. You can also specify
  'the records-affected value to return the number of rows affected by a DELETE,
  'INSERT or UPDATE statement.
  .Execute l_affected, Array(postit)
End With

'Always tidy up after yourself by releasing your object from memory; this will
'also tidy up your connection, as it was created by the ADODB.Command object.
Set o_cmd = Nothing
%>
```
How to use ASP variables in SQL statement
[ "sql", "asp-classic" ]
I have a table as below.

```
DATE_WORKED  COUNTRY
1-Nov-13     United Kingdom
4-Nov-13     United Kingdom
5-Nov-13     India
6-Nov-13     India
7-Nov-13     India
8-Nov-13     United Kingdom
11-Nov-13    United Kingdom
12-Nov-13    India
13-Nov-13    India
14-Nov-13    India
15-Nov-13    United Kingdom
18-Nov-13    United Kingdom
19-Nov-13    India
20-Nov-13    India
21-Nov-13    India
22-Nov-13    United Kingdom
25-Nov-13    United Kingdom
26-Nov-13    India
27-Nov-13    India
28-Nov-13    India
29-Nov-13    United Kingdom
```

I am looking to find the start\_date and end date for each stay in a country.

```
COUNTRY         START_DATE  END_Date
United Kingdom  1-Nov-13    4-Nov-13
India           5-Nov-13    7-Nov-13
United Kingdom  8-Nov-13    11-Nov-13
India           12-Nov-13   14-Nov-13
United Kingdom  15-Nov-13   18-Nov-13
India           19-Nov-13   21-Nov-13
United Kingdom  22-Nov-13   25-Nov-13
India           26-Nov-13   28-Nov-13
United Kingdom  29-Nov-13
```

Please help me with an SQL query to achieve this. Thanks in advance.
Using [Tabibitosan](https://forums.oracle.com/message/3996302): ``` SQL> create table mytable (date_worked,country) 2 as 3 select to_date('1-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 4 select to_date('4-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 5 select to_date('5-Nov-13','dd-Mon-yy'), 'India' from dual union all 6 select to_date('6-Nov-13','dd-Mon-yy'), 'India' from dual union all 7 select to_date('7-Nov-13','dd-Mon-yy'), 'India' from dual union all 8 select to_date('8-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 9 select to_date('11-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 10 select to_date('12-Nov-13','dd-Mon-yy'), 'India' from dual union all 11 select to_date('13-Nov-13','dd-Mon-yy'), 'India' from dual union all 12 select to_date('14-Nov-13','dd-Mon-yy'), 'India' from dual union all 13 select to_date('15-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 14 select to_date('18-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 15 select to_date('19-Nov-13','dd-Mon-yy'), 'India' from dual union all 16 select to_date('20-Nov-13','dd-Mon-yy'), 'India' from dual union all 17 select to_date('21-Nov-13','dd-Mon-yy'), 'India' from dual union all 18 select to_date('22-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 19 select to_date('25-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual union all 20 select to_date('26-Nov-13','dd-Mon-yy'), 'India' from dual union all 21 select to_date('27-Nov-13','dd-Mon-yy'), 'India' from dual union all 22 select to_date('28-Nov-13','dd-Mon-yy'), 'India' from dual union all 23 select to_date('29-Nov-13','dd-Mon-yy'), 'United Kingdom' from dual 24 / Table created. 
SQL> with tabibitosan as 2 ( select row_number() over (order by date_worked) 3 - row_number() over (partition by country order by date_worked) grp 4 , date_worked 5 , country 6 from mytable 7 ) 8 select country 9 , min(date_worked) start_date 10 , max(date_worked) end_date 11 from tabibitosan 12 group by country 13 , grp 14 order by start_date 15 / COUNTRY START_DATE END_DATE -------------- ------------------- ------------------- United Kingdom 01-11-2013 00:00:00 04-11-2013 00:00:00 India 05-11-2013 00:00:00 07-11-2013 00:00:00 United Kingdom 08-11-2013 00:00:00 11-11-2013 00:00:00 India 12-11-2013 00:00:00 14-11-2013 00:00:00 United Kingdom 15-11-2013 00:00:00 18-11-2013 00:00:00 India 19-11-2013 00:00:00 21-11-2013 00:00:00 United Kingdom 22-11-2013 00:00:00 25-11-2013 00:00:00 India 26-11-2013 00:00:00 28-11-2013 00:00:00 United Kingdom 29-11-2013 00:00:00 29-11-2013 00:00:00 9 rows selected. ```
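Why this works may not be obvious: the first `row_number()` counts all rows, while the second restarts per country, so their difference stays constant exactly for as long as the same country repeats on consecutive rows. Running just the inner query makes the groups visible (a sketch against the mytable created above): ``` select date_worked , country , row_number() over (order by date_worked) as rn_all , row_number() over (partition by country order by date_worked) as rn_country , row_number() over (order by date_worked) - row_number() over (partition by country order by date_worked) as grp from mytable order by date_worked; ``` Each (country, grp) pair then identifies one consecutive stay, which is what the outer GROUP BY aggregates.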
Somewhat more complicated than @RobVanWijk's answer: ``` with v_data as ( select to_date('2013-11-01', 'YYYY-MM-DD') as date_worked, 'UK' as country from dual union all select to_date('2013-11-04', 'YYYY-MM-DD') as date_worked, 'UK' as country from dual union all select to_date('2013-11-05', 'YYYY-MM-DD') as date_worked, 'India' as country from dual union all select to_date('2013-11-06', 'YYYY-MM-DD') as date_worked, 'India' as country from dual union all select to_date('2013-11-07', 'YYYY-MM-DD') as date_worked, 'India' as country from dual union all select to_date('2013-11-08', 'YYYY-MM-DD') as date_worked, 'UK' as country from dual union all select to_date('2013-11-11', 'YYYY-MM-DD') as date_worked, 'UK' as country from dual union all select to_date('2013-11-12', 'YYYY-MM-DD') as date_worked, 'India' as country from dual ) select country, start_day, end_day from ( select v3.*, row_number() over (partition by start_day, end_day order by date_worked) as rn from ( select v2.*, max(case when is_first_day = 1 then date_worked else null end) over (Partition by null order by date_worked) as start_day, min(case when is_last_day = 1 then date_worked else null end) over (Partition by null order by date_worked desc) as end_day from ( select v1.*, (case when country <> nvl(country_next_day, 'n/a') then 1 else 0 end) is_last_day, (case when country <> nvl(country_prev_day, 'n/a') then 1 else 0 end) is_first_day from ( select date_worked, country, lead(country) over (order by date_worked) as country_next_day, lag(country) over (order by date_worked) as country_prev_day from v_data ) v1 ) v2 order by date_worked ) v3 ) v4 where rn=1 ``` Explanation: * for each workday, get the successor and the predecessor using the lag() and lead() analytic functions (v1) * for each workday, decide whether it is the start or end of a group by comparing its country to the previous and next countries (v2) * for each group, compute the start and end day (v3) * for each workday, compute its ordering inside its group (v4) * return only workdays with ordering 1
Oracle : min max values within a repeating group
[ "", "sql", "oracle", "" ]
I am passing one password value for either 2 or 3 or 4 or 'n' number of usernames. How to pass the user\_id dynamically to the update query ? ``` update user_table set column_password = 'password value' where user_id in ( ) ```
First create the function using this code: ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[SplitIDs] ( @List varchar(5000) ) RETURNS @ParsedList table ( ID int ) AS BEGIN DECLARE @ID varchar(10), @Pos int SET @List = LTRIM(RTRIM(@List))+ ',' SET @Pos = CHARINDEX(',', @List, 1) IF REPLACE(@List, ',', '') <> '' BEGIN WHILE @Pos > 0 BEGIN SET @ID = LTRIM(RTRIM(LEFT(@List, @Pos - 1))) IF @ID <> '' BEGIN INSERT INTO @ParsedList (ID) VALUES (CAST(@ID AS int)) --Use Appropriate conversion END SET @List = RIGHT(@List, LEN(@List) - @Pos) SET @Pos = CHARINDEX(',', @List, 1) END END RETURN END GO ``` Then in your stored procedure declare `@UserIDs varchar(max)`. You will pass a comma-separated ID list into this parameter. Then in your stored proc you can have: ``` update U set U.column_password = 'password value' FROM dbo.SplitIDs(@UserIDs) I INNER JOIN user_table U ON I.ID=U.user_id ```
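As an aside: on SQL Server 2016 and later, the built-in STRING_SPLIT function removes the need for a hand-rolled splitter (it does not exist on the 2008 version this question is tagged with). A sketch of the same update using it: ``` UPDATE U SET U.column_password = 'password value' FROM user_table U WHERE U.user_id IN (SELECT value FROM STRING_SPLIT(@UserIDs, ',')); ``` The output column of STRING_SPLIT is always named `value`.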
**Create a Function** ``` CREATE FUNCTION [dbo].[FnSplit] ( @List nvarchar(2000), @SplitOn nvarchar(5) ) RETURNS @RtnValue table (Id int identity(1,1), Value nvarchar(100)) AS BEGIN While(Charindex(@SplitOn,@List)>0) Begin Insert Into @RtnValue (value) Select Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1))) Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List)) End Insert Into @RtnValue (Value) Select Value = ltrim(rtrim(@List)) Return END ``` **Stored Procedure** ``` CREATE Procedure usp_Multipleparameter (@Users VARCHAR(1000)= NULL) AS BEGIN update user_table set column_password = 'password value' where user_id collate database_default IN (SELECT Value FROM dbo.FnSplit(@Users,',')) END GO ``` **Calling the Stored Procedure** ``` EXEC usp_Multipleparameter 'User1,User2' ```
Dynamically updating column value in SQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
As the title of the question says, I would like to find all pairs of employees who share the same birthday each year, such that each row of the result table shows one pair of employees and each pair appears only once. My query below finds only pairs of people with the same full birthdate, and each pair appears more than once; it misses people of differing ages who share the same birthday each year. ``` SELECT DISTINCT E1.empno, E1.firstnme, E1.lastname, E1.birthdate, E2.empno, E2.firstnme, E2.lastname, E2.birthdate FROM employee E1 INNER JOIN employee E2 ON E1.birthdate = E2.birthdate WHERE Day(E1.birthdate) = Day(E2.birthdate) AND Month(E1.birthdate) = Month(E2.birthdate) AND E1.empno <> E2.empno ``` I would be very grateful if you can help me!
The problem is you are joining on the actual date. You can either join on DAY/MONTH or just remove the join. Note that `E1.empno < E2.empno` replaces the original `<>` comparison, so each pair of employees comes back only once rather than once in each order: ``` SELECT DISTINCT E1.empno, E1.firstnme, E1.lastname, E1.birthdate, E2.empno, E2.firstnme, E2.lastname, E2.birthdate FROM employee E1 INNER JOIN employee E2 ON Day(E1.birthdate) = Day(E2.birthdate) AND Month(E1.birthdate) = Month(E2.birthdate) AND E1.empno < E2.empno ``` or ``` SELECT DISTINCT E1.empno, E1.firstnme, E1.lastname, E1.birthdate, E2.empno, E2.firstnme, E2.lastname, E2.birthdate FROM employee E1, employee E2 WHERE Day(E1.birthdate) = Day(E2.birthdate) AND Month(E1.birthdate) = Month(E2.birthdate) AND E1.empno < E2.empno ```
Don't use <> for the empno comparison, use > or <, this will keep the pairs from showing up twice. ``` select e1.*, e2.* from employee e1 inner join employee e2 on month(e2.birthdate)=month(e1.birthdate) and day(e2.birthdate)=day(e1.birthdate) and e2.empno > e1.empno ```
Find all pairs of employees who have the same birthday date each year
[ "", "sql", "" ]
I'm currently working on a project with a MySQL Db of more than 8 million rows. I have been provided with a part of it to test some queries on it. It has around 20 columns out of which 5 are of use to me. Namely: `First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3, RefundID` I have to create a unique but random `RefundID` for each row, that is not the problem. The problem is to create **same** `RefundID` for those rows whose `First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3` as same. This is my first real work related to MySQL with such large row count. So far I have created these queries: ``` -- Creating Teporary Table -- CREATE temporary table tempT (SELECT tt.First_Name, count(tt.Address_Line1) as a1, count(tt.Address_Line2) as a2, count(tt.Address_Line3) as a3, tt.RefundID FROM `tempTable` tt GROUP BY First_Name HAVING a1 >= 2 AND a2 >= 2 AND a3 >= 2); -- Updating Rows with First_Name from tempT -- UPDATE `tempTable` SET RefundID = FLOOR(RAND()*POW(10,11)) WHERE First_Name IN (SELECT First_Name FROM tempT WHERE First_Name is not NULL); ``` This update query keeps on running but never ends, `tempT` has more than 30K rows. This query will then be run on the main DB with more than 800K rows. Can someone help me out with this? Regards
The solutions that seem obvious to me.... Don't use a random value - use a hash (note MySQL's MD5() takes a single string, so the columns have to be concatenated; CONCAT_WS also copes with NULLs): ``` UPDATE yourtable SET refundid = MD5(CONCAT_WS('|', 'some static salt', First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3)) ``` The problem is that if you are using an integer value for the refundId then there's a good chance of getting a collision (hint CONV(SUBSTR(MD5(...),1,16),16,10) to get a SIGNED BIGINT). But you didn't say what the type of the field was, nor how strict the 'unique' requirement was. It does carry out the update in a single pass though. An alternate approach which creates a densely packed sequence of numbers is to create a temporary table with the unique values from the original table and a random value. Order by the random value and set a monotonically increasing refundId - then use this as a look up table or update the original table: ``` CREATE TABLE temptable AS SELECT DISTINCT First_Name, Last_Name, Address_Line1, Address_Line2, Address_Line3, RAND() AS randomvalue, 0 AS refundId FROM yourtable; SET @counter = -1; UPDATE temptable SET refundId = (@counter := @counter + 1) ORDER BY randomvalue; ``` There are other solutions too - but the more efficient ones rely on having multiple copies of the data and/or using a procedural language.
Try using the following: ``` UPDATE `tempTable` x SET RefundID = FLOOR(RAND()*POW(10,11)) WHERE exists (SELECT 1 FROM tempT y WHERE First_Name is not NULL and x.First_Name=y.First_Name); ```
Update with Subquery never completes
[ "", "mysql", "sql", "sql-update", "subquery", "" ]
Hi I have this table Cars: ``` MODEL nvarchar(20) STYLE nvarchar(20) ENGINE nvarchar(5) CAPACITY smallint MAX_SPEED smallint PRICE smallmoney MARKET nvarchar(20) COMPETITOR nvarchar(20) ``` And I would like to split it into 3 tables via SQL query: Cars: ``` MODEL nvarchar(20) STYLE nvarchar(20) MAX_SPEED smallint PRICE smallmoney ``` Engine: ``` ENGINE nvarchar(5) CAPACITY smallint ``` Market: ``` MARKET nvarchar(20) COMPETITOR nvarchar(20) ``` So I was wondering how this would be done using SQL commands, thanks
Easiest way. Select... Into will create new tables: ``` SELECT DISTINCT ENGINE, CAPACITY INTO Engine FROM CARS SELECT DISTINCT MARKET, COMPETITOR INTO Market FROM CARS ``` Then just drop the defunct columns from the original table. Eg ``` ALTER TABLE Cars DROP COLUMN ENGINE ALTER TABLE Cars DROP COLUMN CAPACITY ALTER TABLE Cars DROP COLUMN MARKET ALTER TABLE Cars DROP COLUMN COMPETITOR ``` This will do specifically what you are asking. However, I'm not sure that is what you want - there is then no reference from the car to the engine or market details - so information is lost. If "ENGINE" and "MARKET" define the keys of the new table, I'd suggest leaving those columns on the car table as foreign keys. Eg only DROP Capacity and Competitor. You may wish to create the primary key on the new tables too. Eg: ``` ALTER TABLE Engine ADD CONSTRAINT [PK_Engine] PRIMARY KEY CLUSTERED (ENGINE ASC) ```
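If you do keep ENGINE and MARKET on Cars as foreign keys, the constraints might look like this - a sketch, assuming you have made ENGINE and MARKET the primary keys of their new tables: ``` ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Engine FOREIGN KEY (ENGINE) REFERENCES Engine (ENGINE); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Market FOREIGN KEY (MARKET) REFERENCES Market (MARKET); ``` That way the split tables stay linked and the database enforces that every car references an existing engine and market row.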
Run this.... ``` create table Engine ( EngineId int identity(1,1) not null primary key, Engine nvarchar(5) not null, Capacity smallint not null ) go insert into Engine (Engine, Capacity) (select distinct Engine,Capacity from Cars) go alter table Cars add EngineId int null go update Cars set Cars.EngineId = e.EngineId from Engine e where e.Engine = Cars.Engine go create table Market ( MarketId int identity(1,1) not null primary key, Market nvarchar(20) not null, Competitor nvarchar(20) not null ) go insert into Market (Market, Competitor) (select distinct Market,Competitor from Cars) go alter table Cars add MarketId int null go update Cars set Cars.MarketId = m.MarketId from Market m where m.Market = Cars.Market go alter table Cars drop column Market; alter table Cars drop column Competitor; alter table Cars drop column Engine; alter table Cars drop column Capacity; ```
How to split table into multiple tables using SQL
[ "", "sql", "sql-server", "" ]
In Amazon Redshift I'm looking to convert the current timestamp to have 0 seconds. That is go from this: ``` 2013-12-17 12:27:50 ``` to this: ``` 2013-12-17 12:27:00 ``` I have tried the following: ``` SELECT dateadd(second, -(date_part(second, getdate())), getdate()); ERROR: function pg_catalog.date_add("unknown", double precision, timestamp without time zone) does not exist HINT: No function matches the given name and argument types. You may need to add explicit type casts. SELECT dateadd(second, -cast(date_part(second, getdate()) as double precision), getdate()); ERROR: function pg_catalog.date_add("unknown", double precision, timestamp without time zone) does not exist HINT: No function matches the given name and argument types. You may need to add explicit type casts. SELECT getdate() - date_part(second, getdate()); ERROR: operator does not exist: timestamp without time zone - double precision HINT: No operator matches the given name and argument type(s). You may need to add explicit type casts. ``` I'm probably missing a very simple way of doing this! Does anyone have any suggestions, please?
It's easiest to use the [`date_trunc()` function](http://docs.aws.amazon.com/redshift/latest/dg/r_DATE_TRUNC.html), but that will work only while selecting: ``` SELECT date_trunc('minute', TIMESTAMP '2013-12-17 12:27:50'); ``` You may preprocess data before loading it into the Redshift DB, or use an intermediary table and then use an [`INSERT INTO...SELECT` statement](http://docs.aws.amazon.com/redshift/latest/dg/r_INSERT_30.html): ``` INSERT INTO destination_table ( SELECT date_trunc('minute', date_column), other_columns_here FROM source_table ); ```
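For what it's worth, the dateadd attempts in the question fail only because date_part() returns double precision while Redshift's dateadd() wants an integer interval - that is exactly what the error messages complain about. An explicit cast makes the original idea work too (a sketch): ``` SELECT DATEADD(second, -CAST(DATE_PART(second, GETDATE()) AS int), GETDATE()); ``` date_trunc is still the cleaner option, since it truncates everything below the named precision in one call.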
Check [date\_trunc()](http://www.postgresql.org/docs/current/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC) ``` SELECT date_trunc('minute', TIMESTAMP '2013-12-17 12:27:50'); ```
Remove seconds from current date in Redshift (PostgreSQL)
[ "", "sql", "postgresql", "amazon-redshift", "" ]
Another SQL query issue that I am having. If anyone could help it would be appreciated. No errors are thrown (using the Try syntax), however it is not updating the database. ``` Dim con As OleDbConnection = New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source='\\$$$$\$$$$\$$$$.accdb';") Dim str As String str = "update Layer_1 set 1=@1, 2=@2, 3=@3, 4=@4, 5=@5, 6=@6, 7=@7, 8=@8, 9=@9, 10=@10 where ID=@id" Dim cmd As New OleDbCommand(str, con) cmd.Parameters.AddWithValue("@1", val2.Text) cmd.Parameters.AddWithValue("@2", val3.Text) cmd.Parameters.AddWithValue("@3", val4.Text) cmd.Parameters.AddWithValue("@4", val5.Text) cmd.Parameters.AddWithValue("@5", val6.Text) cmd.Parameters.AddWithValue("@6", val7.Text) cmd.Parameters.AddWithValue("@7", val8.Text) cmd.Parameters.AddWithValue("@8", val9.Text) cmd.Parameters.AddWithValue("@9", val10.Text) cmd.Parameters.AddWithValue("@10", val11.Text) cmd.Parameters.AddWithValue("@ID", SysID.Text) con.Open() cmd.ExecuteNonQuery() con.Close() ``` So the Val[#].Text is a textbox, whilst the SysId is a label. I also have each param written in the following syntax, just to see if there is a problem with my code. But it's the same output: no DB update but no errors. I do have a smaller variation of this code which works, but I am not sure why, as it is an exact copy with more expressions added in. ``` Dim str As String str = "update FDSL set Hostname=@Hostname, Owner=@Owner where ID=@id" Dim cmd As New OleDbCommand(str, con) cmd.Parameters.AddWithValue("@Hostname", TextBox1.Text) cmd.Parameters.AddWithValue("@Owner", TextBox2.Text) cmd.Parameters.AddWithValue("@ID", textbox6.Text) con.Open() cmd.ExecuteNonQuery() con.Close() ``` Any ideas? Cheers, Tad
In the one you have that works you have cmd.Parameters.AddWithValue("@ID", textbox6.Text) which is linking to a text box. The one that does not work has cmd.Parameters.AddWithValue("@ID", SysID.Text) which you say links to a label. I can't see why that would make a difference but could you try a read-only textbox just to see if that works. Also are your IDs strings or numeric. I tend to convert my ID params into integers rather than just use numeric string values directly from a textbox. e.g. cmd.Parameters.AddWithValue("@ID", CInt(SysID.Text))
`OleDbCommand` does not support named parameters. Use this instead: ``` update FDSL set Hostname=?, Owner=? where ID=? ``` And add the parameters in the order that they appear in the query. However you *should* be getting an error in that case, so something else may be swallowing the exception.
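A sketch of the failing UPDATE rewritten that way - positional ? markers in place of the named parameters, and square brackets around the purely numeric column names, which Access SQL otherwise trips over: ``` UPDATE Layer_1 SET [1]=?, [2]=?, [3]=?, [4]=?, [5]=?, [6]=?, [7]=?, [8]=?, [9]=?, [10]=? WHERE ID=? ``` The cmd.Parameters must then be added in exactly this left-to-right order, with the ID parameter last.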
Why will MS Access DB not update?
[ "", "sql", "vb.net", "ms-access", "" ]
I want to check whether any values are duplicated within a row. As per the table structure: > if( C1\_CADNO=C2\_CADNO or C1\_CADNO=C3\_CADNO or C2\_CADNO=C3\_CADNO ) > then > display that record My table has 10 such columns, C1\_CADNO through C10\_CADNO. [My Table Structure](https://i.stack.imgur.com/D9T8M.png)
Move it to the `WHERE` clause: ``` SELECT t.* FROM TableName t WHERE ( C1_CADNO = C2_CADNO OR C1_CADNO = C3_CADNO OR C2_CADNO = C3_CADNO ) ``` > but my table having 10 columns like > C1\_CADNO,C2\_CADNO,C3\_CADNO.....C10\_CADNO – Then this SQL, covering every pairing, is made for you: ``` SELECT t.* FROM TableName t WHERE ( t.C1_CADNO = t.C2_CADNO OR C1_CADNO = C3_CADNO OR C1_CADNO = C4_CADNO OR C1_CADNO = C5_CADNO OR C1_CADNO = C6_CADNO OR C1_CADNO = C7_CADNO OR C1_CADNO = C8_CADNO OR C1_CADNO = C9_CADNO OR C1_CADNO = C10_CADNO OR C2_CADNO = C3_CADNO OR C2_CADNO = C4_CADNO OR C2_CADNO = C5_CADNO OR C2_CADNO = C6_CADNO OR C2_CADNO = C7_CADNO OR C2_CADNO = C8_CADNO OR C2_CADNO = C9_CADNO OR C2_CADNO = C10_CADNO OR C3_CADNO = C4_CADNO OR C3_CADNO = C5_CADNO OR C3_CADNO = C6_CADNO OR C3_CADNO = C7_CADNO OR C3_CADNO = C8_CADNO OR C3_CADNO = C9_CADNO OR C3_CADNO = C10_CADNO OR C4_CADNO = C5_CADNO OR C4_CADNO = C6_CADNO OR C4_CADNO = C7_CADNO OR C4_CADNO = C8_CADNO OR C4_CADNO = C9_CADNO OR C4_CADNO = C10_CADNO OR C5_CADNO = C6_CADNO OR C5_CADNO = C7_CADNO OR C5_CADNO = C8_CADNO OR C5_CADNO = C9_CADNO OR C5_CADNO = C10_CADNO OR C6_CADNO = C7_CADNO OR C6_CADNO = C8_CADNO OR C6_CADNO = C9_CADNO OR C6_CADNO = C10_CADNO OR C7_CADNO = C8_CADNO OR C7_CADNO = C9_CADNO OR C7_CADNO = C10_CADNO OR C8_CADNO = C9_CADNO OR C8_CADNO = C10_CADNO OR C9_CADNO = C10_CADNO ) ``` Now you know why it's important to [normalize](http://technet.microsoft.com/en-us/library/ms191178%28v=sql.105%29.aspx) tables ;-)
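Since the question is tagged SQL Server 2008, a table value constructor can express the same check without writing out all 45 pairings. A sketch, keeping the hypothetical table name from above: ``` SELECT t.* FROM TableName t CROSS APPLY ( SELECT TOP (1) 1 AS has_dupe FROM (VALUES (t.C1_CADNO), (t.C2_CADNO), (t.C3_CADNO), (t.C4_CADNO), (t.C5_CADNO), (t.C6_CADNO), (t.C7_CADNO), (t.C8_CADNO), (t.C9_CADNO), (t.C10_CADNO)) v(cadno) GROUP BY v.cadno HAVING COUNT(*) > 1 ) d; ``` TOP (1) keeps each source row from being returned more than once, and adding WHERE v.cadno IS NOT NULL inside the apply would stop two NULL columns from counting as a duplicate.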
Try this: ``` SELECT * FROM T WHERE C1_CADNO IN (C2_CADNO,C3_CADNO,C4_CADNO,...C10_CADNO) OR C2_CADNO IN (C3_CADNO,C4_CADNO,...C10_CADNO) OR C3_CADNO IN (C4_CADNO,...C10_CADNO) OR C4_CADNO IN (C5_CADNO,...C10_CADNO) ... OR C8_CADNO IN (C9_CADNO,...C10_CADNO) OR C9_CADNO = C10_CADNO ```
check duplicate records for particular row in sql server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have this SQL statement: ``` SELECT (CASE WHEN EXISTS (SELECT * FROM votes WHERE votes.user_id = 0 AND votes.post_id = posts.id AND votes.vote = 0) THEN 0 WHEN EXISTS (SELECT * FROM votes WHERE votes.user_id = 0 AND votes.post_id = posts.id AND votes.vote = 1) THEN 1 ELSE 2 END) AS vote_by_me , posts.* FROM `posts` ``` Is there a way I can do this in a DRY manner? Both select statements are almost the same, would be nice to factor them out some way. Thanks
Yes, you can select `votes.vote` directly, like this: ``` SELECT COALESCE( ( SELECT MIN(votes.vote) FROM votes WHERE votes.user_id = 0 AND votes.post_id = posts.id AND votes.vote in (0, 1) GROUP BY votes.user_id, votes.post_id ) , 2 ) AS vote_by_me , posts.* FROM `posts` ``` If a post cannot have multiple votes by the same user, you could eliminate the `GROUP BY`, like this: ``` SELECT COALESCE( ( SELECT votes.vote FROM votes WHERE votes.user_id = 0 AND votes.post_id = posts.id AND votes.vote in (0, 1) ) , 2 ) AS vote_by_me , posts.* FROM `posts` ```
This would seem to simplify the query: ``` SELECT (CASE WHEN v.votes0 > 0 THEN 0 WHEN v.votes1 > 0 THEN 1 ELSE 2 END) AS vote_by_me, p.* FROM posts p left outer join (select v.post_id, sum(v.vote = 1) as votes1, sum(v.vote = 0) as votes0 from votes v where v.user_id = 0 group by v.post_id ) v on p.id = v.post_id; ``` The bad news is that if you have an index on `votes(user_id, post_id, vote)` then your original form will probably have better performance. EDIT: The following formulation might perform well and sort-of simplify the query: ``` SELECT (CASE (SELECT min(vote) FROM votes WHERE votes.user_id = 0 AND votes.post_id = posts.id ) WHEN 0 then 0 WHEN 1 then 1 ELSE 2 END) AS vote_by_me, posts.* FROM `posts`; ```
SQL simplication
[ "", "mysql", "sql", "" ]
I have the following Data flow ![enter image description here](https://i.stack.imgur.com/TJOIP.png) I need to insert rows into flat files and table using ado.net connector ![enter image description here](https://i.stack.imgur.com/Drd7I.png) and I always get this error ``` [ADO NET Destination [2]] Error: An exception has occurred during data insertion, the message returned from the provider is: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"utilisateur_id", "nom", "prenom", "type" at line 1 [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "ADO NET Destination" (2) failed with error code 0xC020844B while processing input "ADO NET Destination Input" (9). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure. ``` I've tried to search for an answer and found this link <http://blogs.msdn.com/b/mattm/archive/2009/01/07/writing-to-a-mysql-database-from-ssis.aspx> But I didn't really know how to fix it. Is there a simpler way to do this?
Use the Odbc Data Provider and save yourself a headache.
Start the MySQL Server and log in to it, then run the following commands. Setting the mode to 'ANSI' makes the syntax more standard, so MySQL can cater to clients that emit ANSI-quoted identifiers; that quoting is why the above error is returned although the statement itself appears correct. In fact, a CREATE statement that ran fine directly on the MySQL command line failed with this error when issued through SSIS. RUN THE FOLLOWING... ``` mysql> select @@global.sql_mode; +-------------------+ | @@global.sql_mode | +-------------------+ | | +-------------------+ 1 row in set (0.00 sec) mysql> set global sql_mode='ANSI'; Query OK, 0 rows affected (0.01 sec) mysql> select @@global.sql_mode; +-------------------------------------------------------------+ | @@global.sql_mode | +-------------------------------------------------------------+ | REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ANSI | +-------------------------------------------------------------+ 1 row in set (0.00 sec) mysql> ``` After running the above statements, build the BI project and execute the package. This time the execution will succeed. One more thing: make sure you set **ValidateExternalMetadata on the ADO NET Destination to false**, or else you will be in a hell hole for more than 3 days.
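If switching the server to full ANSI mode is more change than you want, the specific behavior SSIS needs here is identifier quoting with double quotes, which the ANSI_QUOTES flag enables on its own (a sketch): ``` SET GLOBAL sql_mode = 'ANSI_QUOTES'; ``` Note that SET GLOBAL only affects sessions opened after the change, so reconnect (or restart the package) before retrying.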
Insert into Table Mysql
[ "", "mysql", "sql", "ssis", "" ]
I hope I've put the query together correctly below. I've simplified from something much, much bigger. But my question should be clear enough from the example. In the innermost query, `nest1`, let's say the results come back with three entries for invoice 123. The invoice was created (1), then two cash deposits were made (2, 3), the second of which (3), paid off the invoice. From that result set, I'm aggregating the sum. I'm also getting the paid\_off\_date, which only one of the rows will have. I'm happy to be schooled on better ways to design this query, but what I don't understand is why I have to select the aggregates all the way out. Why, for example, when I select `SUM(cash_deposit) AS 'cash_deposit'` in `nest2`, do I also have to select `SUM(cash_deposit)` in the outermost query? Once it is aggregated in a subquery, why isn't the aggregate passed along? And why do I have to keep grouping by id at each level? ``` SELECT id, SUM(cash_deposit), MAX(paid_off_date), MAX(job_phase), MAX(cust_id) FROM ( SELECT id AS id, SUM(cash_deposit) AS 'cash_deposit', MAX(paid_off_date) AS 'paid_off_date', MAX(job_phase) AS 'job_phase' FROM ( SELECT id, cash_deposit, paid_off_date FROM invoice GROUP BY id ) nest1 JOIN job j ON nest1.id = j.id GROUP BY id ) nest2 JOIN customer c ON c.invoice = nest2.id GROUP BY id ``` **Clarification** Thanks for any posts with improved versions of the query. I'm working with an existing query that produces unexpected results when I start trying to turn the nesting into joins. I'm going to keep playing with it, but I'd also really appreciate if someone could answer the question about aggregates and why they have to be repeated when nesting. In C#, I can sum up three numbers in one function and then pass the resulting value back. I don't have to keep summing the results. I need some help understanding how SQL subqueries function differently in that respect. 
**Possible Answer** One of my coworkers pointed out that in this case, it's because of my `GROUP BY` clauses. Since `cash_deposit` and `job_phase`, for example, aren't in the `GROUP BY` clause, they have to be aggregated.
It's forcing you to re-run the aggregates, because you are putting group by in each step of the query. If you remove `group by id` from all but the innermost query, you won't have to re-do the aggregation. Try writing it like this: ``` SELECT id, cash_deposit, paid_off_date, job_phase, cust_id FROM ( SELECT id AS id, SUM(cash_deposit) AS 'cash_deposit', MAX(paid_off_date) AS 'paid_off_date', job_phase FROM ( SELECT id, cash_deposit, paid_off_date FROM invoice GROUP BY id ) nest1 JOIN job j ON nest1.id = j.id ) nest2 JOIN customer c ON c.invoice = nest2.id ``` You could also do the two joins in one step now with the same exact result set, but I wanted to show you the minimum amount of changes necessary.
``` SELECT i.id, SUM(i.cash_deposit), MAX(i.paid_off_date), MAX(j.job_phase), MAX(c.cust_id) FROM invoice i JOIN job j ON j.id = i.id JOIN customer c ON c.invoice = i.id GROUP BY id ```
Why do aggregates in subqueries have to be aggregated again?
[ "", "sql", "sql-server", "subquery", "aggregates", "" ]
I am looking for a way to count the number of columns in a table in Hive. I know the following code works in Microsoft SQL Server. Is there a Hive equivalent? ``` SELECT COUNT(*), FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_CATALOG = 'database_name' AND TABLE_SCHEMA = 'schema_name' AND TABLE_NAME = 'table_name' ```
Try this ``` SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name] ```
Try this, it will show you the columns of your table: ``` DESCRIBE schemaName.tableName; ```
Count Number of Columns In Hive
[ "", "sql", "sql-server", "count", "hive", "distinct", "" ]
We have two 2 tables: ``` tbl_projekte [uid,werbemittel,projekt_name,kunden_id] tbl_kunden [uid, kunden_name] ``` We are using this statement to select recordsets from tbl\_projekte: ``` SELECT * FROM tbl_projekte WHERE werbemittel ='12' ORDER BY kunden_id ASC ``` How do we get the SQL statement to ORDER BY kunden\_name? Thanks for any help in advance!
Yes, you need a join for this ``` SELECT p.* FROM tbl_projekte p INNER JOIN tbl_kunden k on k.uid = p.kunden_id WHERE p.werbemittel ='12' ORDER BY k.kunden_name ASC ```
If you want to order by customer name, then do it this way: ``` SELECT p.uid, p.werbemittel, p.projekt_name FROM tbl_projekte p LEFT JOIN tbl_kunden k ON k.uid = p.kunden_id WHERE werbemittel ='12' ORDER BY k.kunden_name ASC ```
Do we need a JOIN for this SQL statement?
[ "", "mysql", "sql", "" ]
Hi all champions out there. I am far from a guru when it comes to high performance SQL queries and wonder if anyone can help me improve the query below. It works but takes far too long, especially since I can have 100 or so entries in the IN () part. The code is as follows; hope you can figure out the schema enough to help. ``` SELECT inv.amount FROM invoice inv WHERE inv.invoiceID IN ( SELECT childInvoiceID FROM invoiceRelation ir LEFT JOIN Payment pay ON pay.invoiceID = ir.parentInvoiceID WHERE pay.paymentID IN ( 125886, 119293, 123497 ) ) ```
Restructure your query to use a join instead of a subselect. Also, use an INNER JOIN instead of a LEFT JOIN to the Payment table. This is justified, since you have a WHERE filter that would filter rows without a match in the Payment table anyway. ``` SELECT inv.amount FROM invoice inv INNER JOIN invoiceRelation ir ON inv.incoiceID = ir.childInvoiceID INNER JOIN Payment pay on pay.invoiceID = ir.parentInvoiceID WHERE pay.paymentID IN (...) ```
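One caveat about the join form: if the same childInvoiceID can be reached through more than one matching payment or parent, the join returns that invoice's amount once per match. When each amount should appear only once, an EXISTS keeps the filtering without multiplying rows (a sketch using the same tables): ``` SELECT inv.amount FROM invoice inv WHERE EXISTS ( SELECT 1 FROM invoiceRelation ir INNER JOIN Payment pay ON pay.invoiceID = ir.parentInvoiceID WHERE ir.childInvoiceID = inv.invoiceID AND pay.paymentID IN (125886, 119293, 123497) ); ``` With indexes on invoiceRelation(childInvoiceID) and Payment(invoiceID, paymentID), the optimizer can stop probing at the first match per invoice.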
One way to improve performance is to have a good index on relevant columns. In your example, an index on `inv.invoiceID` would probably speed up the query quite a bit. Also on `pay.paymentID`. Try this and see if it helps: ``` ALTER TABLE invoice ADD INDEX invoiceID_idx (invoiceID); ``` and ``` ALTER TABLE Payment ADD INDEX paymentID_idx (paymentID); ```
Improve performance mySQL query
[ "", "mysql", "sql", "" ]
Please refer to the examples below and kindly let me know your ideas. ``` declare @EmployeeStartDate datetime='01-Sep-2013' declare @EmployeeEndDate datetime='15-Nov-2013' select DateDiff(mm,@EmployeeStartDate, DateAdd(mm, 1,@EmployeeEndDate)) ``` Output = `3`, expected output = `2.5`. Since I have only 15 days in Nov, I should get `0.5` for Nov.
Try this ``` SELECT CASE WHEN DATEDIFF(d,'2013-09-01', '2013-11-15')>30 THEN DATEDIFF(d,'2013-09-01', '2013-11-15')/30.0 ELSE 0 END AS 'MonthDifference' ``` OR ``` SELECT DATEDIFF(DAY, '2013-09-01', '2013-11-15') / 30.436875E ```
DateDiff counts boundaries of the datepart you specify to work out the difference; it doesn't compare both full dates and give you an exact difference. You've told it to compare the Month values, so that's all it's looking at. <http://technet.microsoft.com/en-us/library/ms189794.aspx> The Technet article details the return value of the DateDiff function - note that it's only int. If you want the value as an exact figure (or nearabouts), you should datediff the dates on days, then divide by 30. For neatness, I've also rounded to a single decimal place. ``` select Round(Convert(decimal, DateDiff(dd,@EmployeeStartDate, @EmployeeEndDate)) / 30, 1) ```
Month difference between two dates in sql server
[ "", "sql", "sql-server", "" ]
A few posters have asked similar questions on here and these have taken me 80% of the way toward reading text files with sql queries in them into R to use as input to RODBC: [Import multiline SQL query to single string](https://stackoverflow.com/questions/2003663/r-language-import-multiline-sql-query-to-single-string) [RODBC Temporary Table Issue when connecting to MS SQL Server](https://stackoverflow.com/questions/4747768/rodbc-temporary-table-issue-when-connecting-to-ms-sql-server/4748281#4748281) However, my sql files have quite a few comments in them (as --comment on this and that). My question is, how would one go about either stripping comment lines from query on import, or making sure that the resulting string keeps line breaks, thus not appending actual queries to comments? For example, query6.sql: ``` --query 6 select a6.column1, a6.column2, count(a6.column3) as counts --count the number of occurences in table 1 from data.table a6 group by a6.column1 ``` becomes: ``` sqlStr <- gsub("\t","", paste(readLines(file('SQL/query6.sql', 'r')), collapse = ' ')) sqlStr "--query 6select a6.column1, a6.column2, count(a6.column3) as counts --count the number of occurences in table 1from data.table a6 group by a6.column1" ``` when read into R.
Are you sure you can't just use it as is? This works despite taking up multiple lines and having a comment: ``` > library(sqldf) > sql <- "select * -- my select statement + from BOD + " > sqldf(sql) Time demand 1 1 8.3 2 2 10.3 3 3 19.0 4 4 16.0 5 5 15.6 6 7 19.8 ``` This works too: ``` > sql2 <- c("select * -- my select statement", "from BOD") > sql2.paste <- paste(sql2, collapse = "\n") > sqldf(sql2.paste) Time demand 1 1 8.3 2 2 10.3 3 3 19.0 4 4 16.0 5 5 15.6 6 7 19.8 ```
I had trouble with the other answer, so I modified Roman's and made a little function. This has worked for all my test cases, including multiple comments, single-line and partial-line comments. ``` read.sql <- function(filename, silent = TRUE) { q <- readLines(filename, warn = !silent) q <- q[!grepl(pattern = "^\\s*--", x = q)] # remove full-line comments q <- sub(pattern = "--.*", replacement="", x = q) # remove midline comments q <- paste(q, collapse = " ") return(q) } ```
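For readers doing the same preprocessing outside R, here is a rough Python equivalent of the function above (my own sketch; it shares the R version's limitation of not protecting `--` inside string literals):

```python
import re

def read_sql(text: str) -> str:
    """Strip '--' comments from SQL text and collapse it to one line."""
    lines = []
    for line in text.splitlines():
        line = re.sub(r"--.*$", "", line)  # drops full-line and mid-line comments
        if line.strip():
            lines.append(line.strip())
    return " ".join(lines)

query = """--query 6
select a6.column1, a6.column2,
count(a6.column3) as counts --count the occurrences
from data.table a6
group by a6.column1"""
print(read_sql(query))
```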
SQL query with comments import into R from file
[ "", "sql", "r", "" ]
I have a data set of contracts and the nationalities of people working on them. A sample is as follows. ``` Contract Country GTT001 DE GTT001 DE GTT001 US BFF333 US BFF333 US BFF333 DE HHH222 GB HHH222 GB HHH222 GB ``` I need a query that will count the number of people working on each contract from each country. So one that will produce a table like below: ``` DE US GB GTT001 2 1 0 BFF333 1 2 0 HHH222 0 0 3 ``` I am working in Access 2010. Is there a countif or some equivalent that will allow me to count values based on conditions?
``` DECLARE @Pivotcols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @Pivotcols = STUFF((SELECT distinct N',' + QUOTENAME([Country]) from [Contract] FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') select @Pivotcols set @query = N'SELECT [Contract], ' + @Pivotcols + N' from ( SELECT [Contract] ,[Country] FROM [TEST_DB].[dbo].[Contract] ) sourceTable pivot ( Count([Country]) for [Country] in (' + @Pivotcols + N') ) p ' execute sp_executesql @query; ``` The core query ``` SELECT * from (SELECT [Contract] ,[Country] FROM [TEST_DB].[dbo].[Contract] ) sT pivot ( Count([Country]) for [Country] in ([DE],[US],[GB]) ) p ```
You want to use GROUP BY using both contract and then country. This will give you a list like this: ``` Contract Country Count GTT001 DE 2 GTT001 US 1 BFF333 US 2 BFF333 DE 1 HHH222 GB 3 ``` Then you want to pivot those values to get it into the format you want. The 0s will still be missing...
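Both answers amount to "group, then spread the counts into columns". A portable way to do the spreading in one pass is conditional aggregation; this is an illustrative sketch against SQLite rather than Access (in Access itself the `CASE` expressions would become `IIf(...)`, or you would use a crosstab `TRANSFORM` query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contracts (contract TEXT, country TEXT)")
con.executemany(
    "INSERT INTO contracts VALUES (?, ?)",
    [("GTT001", "DE"), ("GTT001", "DE"), ("GTT001", "US"),
     ("BFF333", "US"), ("BFF333", "US"), ("BFF333", "DE"),
     ("HHH222", "GB"), ("HHH222", "GB"), ("HHH222", "GB")],
)
# One SUM(CASE ...) per country plays the role of COUNTIF.
rows = con.execute("""
    SELECT contract,
           SUM(CASE WHEN country = 'DE' THEN 1 ELSE 0 END) AS de,
           SUM(CASE WHEN country = 'US' THEN 1 ELSE 0 END) AS us,
           SUM(CASE WHEN country = 'GB' THEN 1 ELSE 0 END) AS gb
    FROM contracts
    GROUP BY contract
    ORDER BY contract
""").fetchall()
print(rows)  # [('BFF333', 1, 2, 0), ('GTT001', 2, 1, 0), ('HHH222', 0, 0, 3)]
```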
Count if or Count where in SQL
[ "", "sql", "ms-access", "" ]
i have stored procedure that InsertNewFlag this stored procedure works to check if specified condition record exist that upadte table with multiple (If..Else If) and if not exist that table insert query executed. here i include this stored procedure code: ``` set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[InsertNewFlag] ( @IsRead bit = NULL, @IsImportant bit = NULL, @IsTrashed bit = NULL, @IsRemoved bit = NULL, @User_id int, @Post_History_id int, @NewID int output ) AS BEGIN IF EXISTS(SELECT * FROM [FileSystem].[dbo].[tbl_Post_History_Status] WHERE(User_id=@User_id AND Post_History_id=@Post_History_id)) BEGIN IF @IsRead <> NULL BEGIN UPDATE [FileSystem].[dbo].[tbl_Post_History_Status] SET [IsRead] = @IsRead ,[User_id] = @User_id ,[Post_History_id] = @Post_History_id WHERE (User_id=@User_id AND Post_History_id=@Post_History_id) SET @NewID = CAST(SCOPE_IDENTITY() AS INT) END ELSE IF @IsImportant <> NULL BEGIN UPDATE [FileSystem].[dbo].[tbl_Post_History_Status] SET [IsImportant] = @IsImportant ,[User_id] = @User_id ,[Post_History_id] = @Post_History_id WHERE (User_id=@User_id AND Post_History_id=@Post_History_id) SET @NewID = CAST(SCOPE_IDENTITY() AS INT) END ELSE IF @IsTrashed <> NULL BEGIN UPDATE [FileSystem].[dbo].[tbl_Post_History_Status] SET [IsTrashed] = @IsTrashed ,[User_id] = @User_id ,[Post_History_id] = @Post_History_id WHERE (User_id=@User_id AND Post_History_id=@Post_History_id) SET @NewID = CAST(SCOPE_IDENTITY() AS INT) END ELSE IF @IsRemoved <> NULL BEGIN UPDATE [FileSystem].[dbo].[tbl_Post_History_Status] SET [IsRemoved] = @IsRemoved ,[User_id] = @User_id ,[Post_History_id] = @Post_History_id WHERE (User_id=@User_id AND Post_History_id=@Post_History_id) SET @NewID = CAST(SCOPE_IDENTITY() AS INT) END END ELSE BEGIN INSERT INTO [FileSystem].[dbo].[tbl_Post_History_Status] ([IsRead] ,[IsImportant] ,[IsTrashed] ,[IsRemoved] ,[User_id] ,[Post_History_id]) VALUES (@IsRead,@IsImportant,@IsTrashed,@IsRemoved,@User_id,@Post_History_id) SET 
@NewID = CAST(SCOPE_IDENTITY() AS INT) END END ``` However, this stored procedure is syntactically valid and compiles with the message "Command(s) completed successfully." But when I execute it and pass @IsRead = True, @User_id = 1 and @Post_History_id = 9, the record is not updated even though it exists. Please help me.
Try this, ``` UPDATE [FileSystem].[dbo].[tbl_Post_History_Status] SET [IsRead] =(CASE WHEN NOT @IsRead IS NULL THEN @IsRead ELSE [IsRead] END), [IsImportant] =(CASE WHEN NOT @IsImportant IS NULL THEN @IsImportant ELSE [IsImportant] END), [IsTrashed] =(CASE WHEN NOT @IsTrashed IS NULL THEN @IsTrashed ELSE [IsTrashed] END), [IsRemoved] =(CASE WHEN NOT @IsRemoved IS NULL THEN @IsRemoved ELSE [IsRemoved] END), [User_id] = @User_id, [Post_History_id] =@Post_History_id WHERE (User_id=@User_id AND Post_History_id=@Post_History_id) ```
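The `CASE WHEN ... IS NULL` pattern above is essentially a coalesce: keep the old column value whenever the parameter is NULL. A minimal sketch of the same idea, using SQLite and an invented two-flag table for brevity (not the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE status (user_id INT, post_id INT, is_read INT, is_trashed INT)")
con.execute("INSERT INTO status VALUES (1, 9, 0, 0)")

def set_flags(user_id, post_id, is_read=None, is_trashed=None):
    # COALESCE(?, column) keeps the old value whenever the parameter is
    # NULL, collapsing the IF / ELSE IF ladder into one UPDATE.
    con.execute(
        """UPDATE status
           SET is_read = COALESCE(?, is_read),
               is_trashed = COALESCE(?, is_trashed)
           WHERE user_id = ? AND post_id = ?""",
        (is_read, is_trashed, user_id, post_id),
    )

set_flags(1, 9, is_read=1)  # only is_read should change
row = con.execute("SELECT is_read, is_trashed FROM status").fetchone()
print(row)  # (1, 0)
```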
Replace all the null checks like this: ``` IF Not @IsRead Is Null --Instead of IF @IsRead <> NULL BEGIN .....Other CODE END ELSE IF Not @IsImportant Is Null --Instead of ELSE IF @IsImportant <> NULL BEGIN .....and SO ON ``` A comparison like `@IsRead <> NULL` never evaluates to true, because any comparison with NULL yields UNKNOWN; use `IS NULL` / `IS NOT NULL` instead.
how to update table with multiple if..else statement
[ "", "sql", "t-sql", "" ]
So here is my situation. I have 1 query that gets results grouped by date then I union all a second query that gets the totals (no grouping by date). My issue is I am calculating the average of fields and when I want to total up the average my numbers don't add up. Here is my [SQLFiddle](http://sqlfiddle.com/#!2/aa3a91/10) Here is my query: ``` SELECT t.end, SUM(CASE WHEN (t.start != t.end) THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) / COUNT(t.id) as averageTime FROM store t GROUP BY t.end UNION ALL SELECT 'Total', SUM(CASE WHEN (t.start != t.end) THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) as averageTime FROM store t ``` Right now the second query just gives the total not the total of the average. Any help is appreciated, thank you. To clarify since there is some confusion... I need to get the average per grouped date by how many items were in that group timeDiff / count(t.id) Since I am not grouping in the union all query it is doing it as a whole then dividing. I hope that makes more sense. The first query is correct the data output is as follows: 1, 1.6667, 3 (Those are the averageTime values from the first query) 5.6667 (Should be the total row) Right now I have it out putting 10 that is the total before the first rows were averaged out.
The first part of the query can probably be written as: ``` SELECT t.end, AVG(CASE WHEN t.start != t.end THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) as averageTime FROM store t GROUP BY t.end; ``` Presumably, you want the averages of the averages -- rather than the overall average. I assume this because your query is calculating the overall average. One way is the brute force way: ``` SELECT t.end, AVG(CASE WHEN t.start != t.end THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) as averageTime FROM store t GROUP BY t.end UNION ALL SELECT 'Total', avg(AverageTime) FROM (SELECT t.end, AVG(CASE WHEN t.start != t.end THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) as averageTime FROM store t GROUP BY t.end ) t; ```
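To see why a plain total over the raw rows disagrees with the grouped rows, it helps to compare an overall average against an average of group averages. A small Python illustration with made-up numbers:

```python
groups = {"g1": [1], "g2": [1, 2, 2], "g3": [3]}

# Per-group averages, as the GROUP BY query computes them.
per_group = {k: sum(v) / len(v) for k, v in groups.items()}

# AVG over the grouped averages (what the subquery approach returns)...
avg_of_avgs = sum(per_group.values()) / len(per_group)

# ...versus AVG over all raw rows, which weights big groups more heavily.
all_values = [x for v in groups.values() for x in v]
overall_avg = sum(all_values) / len(all_values)

print(per_group)
print(avg_of_avgs)  # 1.888...
print(overall_avg)  # 1.8
```

The two totals only coincide when every group has the same number of rows, which is why the original UNION ALL branch did not line up with the grouped output.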
Your question is not clear, but I suppose you need an average. As @samD mentioned, the second part of your query can be written as: ``` SELECT 'Total', AVG(CASE WHEN (t.start != t.end) THEN TIMESTAMPDIFF(DAY, t.start, t.end) ELSE 1 END) as averageTime FROM store t ```
Getting total using Union All
[ "", "mysql", "sql", "" ]
I have been doing a bit of searching for a while now on a particular problem, but I can't quite find this particular question I have a rather unusual task to achieve in SQL: I have two tables, say A and B, which have exactly the same column names, of the following form: ``` id | column_1 | ... | column_n ``` Both tables have the same number of rows, with the same id's, but for a given id there is a chance that the rows from tables A and B differ in one or more of the other columns. I already have a query which returns all rows from table A for which the corresponding row in table B is not identical, but what I need is a query which returns something of the form: ``` id | differing_column ---------------------- 1 | column_1 3 | column_6 ``` meaning that the row with id '1' has different 'column\_1' values in tables A and B, and the row with id '3' has different 'column\_6' values in tables A and B. Is this at all achievable? I imagine it might require some sort of pivot in order to get the column names as values, but I might be wrong. Any help/suggestions much appreciated.
Yes you can do that with a query like this: ``` WITH Diffs (Id, Col) AS ( SELECT a.Id, CASE WHEN a.Col1 <> b.Col1 THEN 'Col1' WHEN a.Col2 <> b.Col2 THEN 'Col2' -- ...and so on ELSE NULL END as Col FROM TableOne a JOIN TableTwo b ON a.Id=b.Id ) SELECT Id, Col FROM Diffs WHERE Col IS NOT NULL ``` Note that the above query is not going to return all the columns with differences, but only the first one that it is going to find.
You can do this with an `unpivot` -- assuming that the values in the columns are of the same type. If your data is not too big, I would just recommend using a bunch of `union all` statements instead (the label is aliased as `col`, since `column` is a reserved word): ``` select a.id, 'Col1' as col from a join b on a.id = b.id where a.col1 <> b.col1 or a.col1 is null and b.col1 is not null or a.col1 is not null and b.col1 is null union all select a.id, 'Col2' as col from a join b on a.id = b.id where a.col2 <> b.col2 or a.col2 is null and b.col2 is not null or a.col2 is not null and b.col2 is null . . . ``` This avoids potential type conversion problems. If you don't mind having the results on one row, you can do: ``` select a.id, (case when a.col1 <> b.col1 or a.col1 is null and b.col1 is not null or a.col1 is not null and b.col1 is null then 'Col1;' else '' end) + (case when a.col2 <> b.col2 or a.col2 is null and b.col2 is not null or a.col2 is not null and b.col2 is null then 'Col2;' else '' end) + . . . from a join b on a.id = b.id; ```
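If the comparison logic is easier to prototype outside the database, the same column-by-column diff can be sketched in plain Python over dict-shaped rows (toy data, not the real tables):

```python
# Toy rows keyed by id; None stands in for SQL NULL.
a = {1: {"col1": "x", "col2": "y"}, 3: {"col1": "x", "col2": "z"}}
b = {1: {"col1": "q", "col2": "y"}, 3: {"col1": "x", "col2": None}}

diffs = []
for row_id in a:
    for col in a[row_id]:
        # Python's != already treats None as unequal to any value,
        # which is what the extra IS NULL clauses emulate in SQL.
        if a[row_id][col] != b[row_id][col]:
            diffs.append((row_id, col))
print(diffs)  # [(1, 'col1'), (3, 'col2')]
```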
Compare the data in two tables with same schema
[ "", "sql", "sql-server", "database", "" ]
I have these 2 tables: ``` table1 uid points 5 13 7 9 12 5 17 3 1 1 2 2 3 1 table2 uid points 9 21 13 17 15 11 17 7 12 6 2 2 1 3 22 1 ``` I need a query to return the top 5 users by total points, summed across both tables. Target result: ``` uid points 9 21 13 17 5 13 12 11 15 11 ``` What I tried: ``` select uid, count(points) c from table1 order by c limit 5 union all select uid, count(points) c from table2 order by c limit 5 ``` But it did not give me what I want.
``` SELECT al.uid as UID , SUM(al.points) AS total_points FROM (SELECT points, uid FROM table1 UNION ALL SELECT points,uid FROM table2) al group by al.uid ORDER BY total_points DESC LIMIT 5 ``` The ORDER BY ... LIMIT 5 keeps only the five highest totals.
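The UNION ALL then GROUP BY approach can be run end to end against the question's sample data with Python's built-in SQLite driver (the query text is the same in MySQL; note the ORDER BY and LIMIT needed to keep only five rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (uid INT, points INT)")
con.execute("CREATE TABLE table2 (uid INT, points INT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(5, 13), (7, 9), (12, 5), (17, 3), (1, 1), (2, 2), (3, 1)])
con.executemany("INSERT INTO table2 VALUES (?, ?)",
                [(9, 21), (13, 17), (15, 11), (17, 7), (12, 6), (2, 2), (1, 3), (22, 1)])

rows = con.execute("""
    SELECT uid, SUM(points) AS total_points
    FROM (SELECT uid, points FROM table1
          UNION ALL
          SELECT uid, points FROM table2) al
    GROUP BY uid
    ORDER BY total_points DESC
    LIMIT 5
""").fetchall()
# Top three are (9, 21), (13, 17), (5, 13); uids 12 and 15 tie at 11.
print(rows)
```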
Try This ``` select uid, (table1.points + table2.points) as c from table1 Left join table2 on table1.uid = table2.uid order by (table1.points + table2.points) desc limit 5 ```
Sum from different fields in different MySQL tables
[ "", "mysql", "sql", "" ]
I am working on automating some data that I receive from Germany. The Date format comes in as DD.MM.YYYY and I need it to be MM/DD/YYYY. I am building an import package using SSIS and I added a derived column to change the date format. I first tried to use ``` (DT_DATE) [CalendarDay] ``` but I keep getting an error at the Derived Column when I execute the package. ``` [Derived Column [2]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "Derived Column" failed because error code 0xC0049064 occurred, and the error row disposition on "Derived Column.Outputs[Derived Column Output].Columns[Date]" specifies failure on error. ``` So I moved through the many examples in StackExchange (all that I could find at least) and was met with the same error or not the desired output. Any suggestions.
1) Source :- in the flat file source, read the date column as a string (DT_STR) rather than DT_DATE, since "DD.MM.YYYY" will not cast directly 2) Derived Column :- rebuild the date with SUBSTRING([Column 2],7,4) + "/" + SUBSTRING([Column 2],4,2) + "/" + SUBSTRING([Column 2],1,2) (year/month/day - note the corrected SUBSTRING positions for a DD.MM.YYYY input) and cast the result with (DT_DATE) 3) Destination :- Use datetime as datatype for date Run it Thanks!
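The positional slicing for a DD.MM.YYYY input is easy to sanity-check outside SSIS. A quick Python sketch that builds the question's target MM/DD/YYYY format from the same fixed character positions (the helper name is mine):

```python
def reformat_date(de_date: str) -> str:
    """Turn 'DD.MM.YYYY' into 'MM/DD/YYYY' using fixed positions,
    the same way SUBSTRING expressions slice the string."""
    day, month, year = de_date[0:2], de_date[3:5], de_date[6:10]
    return f"{month}/{day}/{year}"

print(reformat_date("31.12.2023"))  # 12/31/2023
```

An equivalent check with the standard library would be `datetime.strptime(s, "%d.%m.%Y").strftime("%m/%d/%Y")`.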
Try this instead: the issue is with the date column. Read it in, then in the derived column use SUBSTRING so the date becomes 2009/02/05, and then transfer the data to the destination. It will work out. Thanks! Nilesh
Convert Date from Excel Import
[ "", "sql", "excel", "date", "ssis", "" ]
I have a mysql database with a table **entites** with multiple fields in it like entity\_title, entity\_description, ... . In the table there are also 3 foreign keys **user\_id**, **region\_id** an **category\_id**. In my Index View I would like to show all the entities in a table (show the title, description, ... , the user name, the region name and the category name). This is what I do in my Controller: ``` public ActionResult Index() { var model = this.UnitOfWork.EntityRepository.Get(); return View(model); } ``` In my Repository I do this: ``` public virtual IEnumerable<TEntity> Get( Expression<Func<TEntity, bool>> filter = null, Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null, string includeProperties = "") { IQueryable<TEntity> query = _dbSet; if (filter != null) { query = query.Where(filter); } foreach (var includeProperty in includeProperties.Split (new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries)) { query = query.Include(includeProperty); } if (orderBy != null) { return orderBy(query).ToList(); } else { return query.ToList(); } } ``` I always get the error `Input string was not in a correct format` on the last rule (`return query.ToList()`). But when I check the \_dbSet after the rule `IQueryable<TEntity> query = _dbSet;` it already gives the error: `There is already an open DataReader associated with this Connection which must be closed first.` This probably comes because I want to select from more then one table. But how can I fix this? I tried adding `MultipleActiveResultSets=True"` to my ConnectionString like this: ``` <connectionStrings> <add name="reuzzeCS" connectionString="server=localhost;uid=root;pwd=*****;Persist Security Info=True;database=reuzze;MultipleActiveResultSets=True"" providerName="MySql.Data.MySqlClient" /> ``` But that gave me the error that the keyword doesn't exists, because I work with MySql.Data.MySqlClient .. 
The Query executed is: > {SELECT > `Extent1`.`entity_id`, > `Extent1`.`entity_title`, > `Extent1`.`entity_description`, > `Extent1`.`entity_starttime`, > `Extent1`.`entity_endtime`, > `Extent1`.`entity_instantsellingprice`, > `Extent1`.`entity_shippingprice`, > `Extent1`.`entity_condition`, > `Extent1`.`entity_views`, > `Extent1`.`entity_created`, > `Extent1`.`entity_modified`, > `Extent1`.`entity_deleted`, > `Extent1`.`user_id`, > `Extent1`.`region_id`, > `Extent1`.`category_id` > FROM `entities` AS `Extent1`} But when he wants to execute the query and I want to expand the results, I get the error `There is already an open DataReader associated with this Connection which must be closed first` **EDIT:** My full repository: ``` using System; using System.Collections.Generic; using System.Data; using System.Data.Entity; using System.Linq; using System.Linq.Expressions; using System.Text; using System.Threading.Tasks; namespace App.Data.orm.repositories { // REPO FROM TEACHER public class GDMRepository<TEntity> where TEntity : class { internal GDMContext _context; internal DbSet<TEntity> _dbSet; public GDMRepository(GDMContext context) { this._context = context; this._dbSet = _context.Set<TEntity>(); } public virtual IEnumerable<TEntity> Get( Expression<Func<TEntity, bool>> filter = null, Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null, string includeProperties = "") { IQueryable<TEntity> query = _dbSet; if (filter != null) { query = query.Where(filter); } foreach (var includeProperty in includeProperties.Split (new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries)) { query = query.Include(includeProperty); } if (orderBy != null) { return orderBy(query).ToList(); } else { return query.ToList(); } } public virtual TEntity GetByID(object id) { return _dbSet.Find(id); } public virtual void Insert(TEntity entity) { _dbSet.Add(entity); } public virtual void Delete(object id) { TEntity entityToDelete = _dbSet.Find(id); Delete(entityToDelete); } 
public virtual void Delete(TEntity entity) { if (_context.Entry(entity).State == EntityState.Detached) { _dbSet.Attach(entity); } _dbSet.Remove(entity); } public virtual void Update(TEntity entity) { _dbSet.Attach(entity); _context.Entry(entity).State = EntityState.Modified; } } } ``` GDMContext class: ``` using App.Data.orm.mappings; using System; using System.Collections.Generic; using System.Data.Entity; using System.Data.Entity.ModelConfiguration.Conventions; using System.Linq; using System.Text; using System.Threading.Tasks; namespace App.Data.orm { public class GDMContext:DbContext { public GDMContext() : base("reuzzeCS") { } protected override void OnModelCreating(DbModelBuilder modelBuilder) { base.OnModelCreating(modelBuilder); //REMOVE STANDARD MAPPING IN ENTITY FRAMEWORK modelBuilder.Conventions.Remove<PluralizingTableNameConvention>(); //REGISTER MAPPERS modelBuilder.Configurations.Add(new UserMapping()); modelBuilder.Configurations.Add(new PersonMapping()); modelBuilder.Configurations.Add(new RoleMapping()); modelBuilder.Configurations.Add(new EntityMapping()); modelBuilder.Configurations.Add(new MediaMapping()); modelBuilder.Configurations.Add(new BidMapping()); modelBuilder.Configurations.Add(new CategoryMapping()); modelBuilder.Configurations.Add(new AddressMapping()); modelBuilder.Configurations.Add(new RegionMapping()); modelBuilder.Configurations.Add(new MessageMapping()); } } } ``` My entity Model: ``` public class Entity { public Int64 Id { get; set; } [Required(ErrorMessage = "Title is required")] [StringLength(255)] [DisplayName("Title")] public string Title { get; set; } [Required(ErrorMessage = "Description is required")] [DisplayName("Description")] public string Description { get; set; } [Required] public DateTime StartTime { get; set; } [Required] public DateTime EndTime { get; set; } /*[Required(ErrorMessage = "Type is required")] [StringLength(16)] [DisplayName("Type")] public string Type { get; set; }*/ [Required] public decimal 
InstantSellingPrice { get; set; } public Nullable<decimal> ShippingPrice { get; set; } public Condition? Condition { get; set; } public Nullable<Int64> Views { get; set; } [Required] public DateTime CreateDate { get; set; } public Nullable<DateTime> ModifiedDate { get; set; } public Nullable<DateTime> DeletedDate { get; set; } public Int32 UserId { get; set; } public Int32 RegionId { get; set; } public Int16 CategoryId { get; set; } public virtual User User { get; set; } public virtual Region Region { get; set; } public virtual Category Category { get; set; } //public virtual ICollection<Category> Categories { get; set; } public virtual ICollection<User> Favorites { get; set; } public virtual ICollection<Bid> Bids { get; set; } public virtual ICollection<Media> Media { get; set; } } public enum Condition { New = 1, Used = 2 } ``` My Entity Mapping: ``` internal class EntityMapping : EntityTypeConfiguration<Entity> { public EntityMapping() : base() { this.ToTable("entities", "reuzze"); this.HasKey(t => t.Id); this.Property(t => t.Id).HasColumnName("entity_id").HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity); this.Property(t => t.Title).HasColumnName("entity_title").IsRequired().HasMaxLength(255); this.Property(t => t.Description).HasColumnName("entity_description").IsRequired(); this.Property(t => t.StartTime).HasColumnName("entity_starttime").IsRequired(); this.Property(t => t.EndTime).HasColumnName("entity_endtime").IsRequired(); //this.Property(t => t.Type).HasColumnName("entity_type").IsRequired(); this.Property(t => t.InstantSellingPrice).HasColumnName("entity_instantsellingprice").IsRequired(); this.Property(t => t.ShippingPrice).HasColumnName("entity_shippingprice").IsOptional(); this.Property(t => t.Condition).HasColumnName("entity_condition").IsRequired(); this.Property(t => t.Views).HasColumnName("entity_views").IsOptional(); this.Property(t => 
t.CreateDate).HasColumnName("entity_created").IsRequired().HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed); this.Property(t => t.ModifiedDate).HasColumnName("entity_modified").IsOptional(); this.Property(t => t.DeletedDate).HasColumnName("entity_deleted").IsOptional(); this.Property(t => t.UserId).HasColumnName("user_id").IsRequired(); this.Property(t => t.RegionId).HasColumnName("region_id").IsRequired(); this.Property(t => t.CategoryId).HasColumnName("category_id").IsRequired(); //FOREIGN KEY MAPPINGS this.HasRequired(t => t.User).WithMany(p => p.Entities).HasForeignKey(f => f.UserId).WillCascadeOnDelete(false); this.HasRequired(t => t.Region).WithMany(p => p.Entities).HasForeignKey(f => f.RegionId); this.HasRequired(t => t.Category).WithMany(p => p.Entities).HasForeignKey(f => f.CategoryId); //MANY_TO_MANY MAPPINGS this.HasMany(t => t.Favorites) .WithMany(t => t.Favorites) .Map(mc => { mc.ToTable("favorites"); mc.MapLeftKey("entity_id"); mc.MapRightKey("user_id"); }); } } ``` [Link to stacktrace image!](http://i44.tinypic.com/2r5x72t.png) **UPDATE:** > * base {SELECT > `Extent1`.`entity_id`, > `Extent1`.`entity_title`, > `Extent1`.`entity_description`, > `Extent1`.`entity_starttime`, > `Extent1`.`entity_endtime`, > `Extent1`.`entity_instantsellingprice`, > `Extent1`.`entity_shippingprice`, > `Extent1`.`entity_condition`, > `Extent1`.`entity_views`, > `Extent1`.`entity_created`, > `Extent1`.`entity_modified`, > `Extent1`.`entity_deleted`, > `Extent1`.`user_id`, > `Extent1`.`region_id`, > `Extent1`.`category_id` > FROM `entities` AS `Extent1`} System.Data.Entity.Internal.Linq.InternalQuery {System.Data.Entity.Internal.Linq.InternalSet}
Your problem is: > I think the MySql connector probably doesn't support multiple active result sets, and because of that the setting in the connection string didn't help you. ***So please try this way instead of your code.*** **Edit :** ``` query.Include("User").Include("Region").Include("Category").ToList(); ``` Let me know if you get the same error after this change. **Update:** I have changed some things for you. Please use this code instead of your method: ``` public virtual IEnumerable<TEntity> Get( Expression<Func<TEntity, bool>> filter = null, Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null, string includeProperties = "") { IQueryable<TEntity> query = _dbSet; if (filter != null) { query = query.Where(filter); } if (orderBy != null) { return orderBy(query.Include("User").Include("Region").Include("Category")).ToList(); } else { return query.Include("User").Include("Region").Include("Category").ToList(); } } ``` **Update 2:** > It is not about closing the connection. EF manages connections correctly. My understanding of this problem is that multiple data retrieval commands are executed on a single connection (or a single command with multiple selects) while the next DataReader is executed before the first one has completed reading. The only way to avoid the exception is to allow multiple nested DataReaders = turn on MultipleActiveResultSets. Another scenario where this always happens is when you iterate through the result of a query (IQueryable) and trigger lazy loading for a loaded entity inside the iteration. Stack Overflow has a lot of people with solutions to your question: 1: [Entity Framework: There is already an open DataReader associated with this Comma](https://stackoverflow.com/questions/4867602/entity-framework-there-is-already-an-open-datareader-associated-with-this-comma) 2: [How to avoid "There is already an open DataReader associated with this Connection..." in MySql/net connector?](https://stackoverflow.com/questions/6271971/how-to-avoid-there-is-already-an-open-datareader-associated-with-this-connectio) 3: [Error: There is already an open DataReader associated with this Command which must be closed first](https://stackoverflow.com/questions/15921821/error-there-is-already-an-open-datareader-associated-with-this-command-which-mu) My personal advice: don't spend more time on this error, because it's a waste of time and energy, and you can do it with a manual query, so please try different ways. You don't need to split and format queries to avoid the `input string was not in a correct format` error. You can do this instead of `return query.ToList();`: ``` return _dbSet.Users .Include(x => x.Region) .Include(x => x.Category).ToList(); ``` I think you can solve it using my `SO` links above. And my main question is: > Entity Framework supports the ORM concept, so why don't you try it that way? Switching to the ORM approach may solve this problem. [This is a link for that](http://codetunnel.com/blog/post/introduction-to-entity-framework-part-i-object-relational-mapping) and please [see this tutorial](http://www.entityframeworktutorial.net/what-is-entityframework.aspx)
**UPDATE** OK, so from your stack trace it looks like the "`open DataReader associated ...blah`" was a red-herring. Maybe that was visual studio and its intellisense visual debugger thingy trying to show you the values contained in your dbset but a connection was still open or something like that. To me, it looks like EF's `MySqlDatareader` is doing its job of enumerating the results and mapping them to POCO's. Maybe there is a column that is a varchar(..) or something of that sort on a table in your Database, and on your POCO's its mapped property is `oftype(Int32)`. So if there is a an empty string or a value that isn't a number in the database I believe that an `Input string was not in a correct format` exception should be expected when you try convert a null or empty string value to an Int. Just tried this now to see: ![enter image description here](https://i.stack.imgur.com/SS5pw.png) --- I think the issue is that MySql doesn't support `MARS` and maybe it also doesn't suport `Lazy Loading`. While I couldn't find anything official to say this was the case I found a few posts with the same issue as you. <http://www.binaryforge-software.com/wpblog/?p=163> [MySQL + Code First + Lazy Load problem !](https://stackoverflow.com/questions/5951373/mysql-code-first-lazy-load-problem) <http://forums.mysql.com/read.php?38,259559,267490> Now up until fairly recently I thought that calling `ToList()` on an IQueryable would Load the Results into memory and any Navigation properties would not be `LazyLoaded`, this is not strictly true. While the result will be persisted into Memory any virtual Navigation properties of that result will still be lazy loaded if you try to access them. On a high level `LazyLoading` works because entity framework `overrides` your `virtual' navigation properties and uses its own implementation to load entities from the database. 
My guess is that in your View or somewhere else in your code you must be accessing a property that you haven't explicitly loaded using an `Include`. My guess is that EF may be trying to do this on a single connection and that is why you see: ``` There is already an open DataReader associated with this Connection which must be closed first ``` I would turn off Lazyloading by doing the following: ``` public class GDMContext:DbContext { public GDMContext() : base("reuzzeCS") { base.Configuration.LazyLoadingEnabled = false; } } ``` Hope this helps.
There is already an open DataReader associated with this Connection which must be closed first + asp.net mvc
[ "", "mysql", "asp.net", "sql", "asp.net-mvc", "entity-framework", "" ]
I want to generate date values from 01/01/2011 to 31/12/2011 in a SQL query. Example My Table A ``` Field A Field B Field C 1 01/01/2011 125 2 03/01/2011 100 3 05/01/2011 50 ``` I want this result: ``` Field A Field B Field C 1 01/01/2011 125 0 02/01/2011 0 2 03/01/2011 100 0 04/01/2011 0 3 05/01/2011 50 ... 0 31/12/2011 0 ``` Please advise me, and thanks a lot.
Under SQL Server, you can create a table-valued function rather than creating a temporary table, as this is reusable between queries: ``` -- List all of the dates between startdate and enddate (inclusive) CREATE FUNCTION [dbo].[DatesBetween] ( @startdate date, @enddate date ) RETURNS @ret TABLE (Date date) AS BEGIN DECLARE @dt date, @dtEnd date SELECT @dt = @startdate, @dtEnd = @enddate WHILE (@dt <= @dtEnd) BEGIN INSERT INTO @ret VALUES(@dt) SET @dt = DATEADD(day, 1, @dt) END RETURN END ``` This allows everything to be executed inside one query: ``` SELECT d.Date, COUNT(t.Date) AS TotalOnDay FROM dbo.DatesBetween('2011-01-01', '2011-12-31') d LEFT JOIN MyTable t ON t.Date = d.Date GROUP BY d.Date ``` Note the COUNT(t.Date) rather than COUNT(*), so that calendar dates with no matching rows count as 0.
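The calendar-plus-LEFT-JOIN idea generalizes beyond SQL Server. A compact sketch with SQLite from Python (SQLite has no table-valued functions, so the calendar is materialized in a loop; table and column names are invented, and the range is shortened for the demo):

```python
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (d TEXT, c INT)")
con.executemany("INSERT INTO my_table VALUES (?, ?)",
                [("2011-01-01", 125), ("2011-01-03", 100), ("2011-01-05", 50)])

# Materialize the calendar, standing in for the table-valued function.
con.execute("CREATE TABLE calendar (d TEXT)")
day, last = date(2011, 1, 1), date(2011, 1, 5)
while day <= last:
    con.execute("INSERT INTO calendar VALUES (?)", (day.isoformat(),))
    day += timedelta(days=1)

rows = con.execute("""
    SELECT cal.d, COALESCE(t.c, 0) AS c
    FROM calendar cal
    LEFT JOIN my_table t ON t.d = cal.d
    ORDER BY cal.d
""").fetchall()
print(rows)
```

Every calendar day appears once, with 0 filled in for the missing dates, which is the shape the question asks for.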
If your "Field B" is formatted as Date, you can use the function DATEDIFF(Day, StartDate, EndDate). You can also change the "Day" part to Year, Month or any other datepart you like. Hope this helps.
How to generate date in SQL Query?
[ "", "sql", "date", "insert", "dateadd", "" ]
I have a table ``` id value 1 a 2 a 3 b 4 b 5 b 6 c ``` My id is the primary key. In total I have 2 a's, 3 b's and 1 c. For each id, I want to count how many rows in the table share its value, in this format: ``` id value_count 1 2 2 2 3 3 4 3 5 3 6 1 ```
Try this query: ``` SELECT a.id, b.valueCnt FROM tableA a INNER JOIN (SELECT a.value, COUNT(a.value) valueCnt FROM tableA a GROUP BY a.value) AS B ON a.value = b.value; ``` Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!2/68a89/1) **OUTPUT** ``` | ID | VALUECNT | |----|----------| | 1 | 2 | | 2 | 2 | | 3 | 3 | | 4 | 3 | | 5 | 3 | | 6 | 1 | ```
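The join-back-to-a-grouped-subquery pattern runs unchanged on SQLite, so it can be verified quickly with Python's built-in driver; on MySQL 8+ the same result also falls out of a window function, `COUNT(*) OVER (PARTITION BY value)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, value TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (2, "a"), (3, "b"), (4, "b"), (5, "b"), (6, "c")])

rows = con.execute("""
    SELECT t.id, c.cnt AS value_count
    FROM t
    JOIN (SELECT value, COUNT(*) AS cnt FROM t GROUP BY value) c
      ON c.value = t.value
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 2), (2, 2), (3, 3), (4, 3), (5, 3), (6, 1)]
```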
Try This ``` select id, value_count from tablename as a1 join (select count(*) as value_count, value from tablename group by value) as a2 on a1.value= a2.value ```
count value for each unique id
[ "", "mysql", "sql", "select", "count", "group-by", "" ]
I basically have a very simple query like such ``` SELECT accounts . * , COUNT(receive) as total_counts FROM followings LEFT JOIN accounts ON followings.receive = accounts.account_id WHERE accounts.status <1 GROUP BY followings.receive ORDER BY COUNT(receive) DESC ``` I've tried changing it to the following and it's failing ``` SELECT accounts . * , COUNT(receive) as total_counts FROM followings LEFT JOIN accounts ON followings.receive = accounts.account_id WHERE accounts.status <1 AND total_accounts < 10 GROUP BY followings.receive ORDER BY COUNT(receive) DESC ``` `Unknown column 'total_counts' in 'where clause'` I'm sorry for posting such simple question, but my mind is boggling right now
First: **remove the space between the table and the `*`**: `accounts.*`. Second: You cannot use `where` on aggregate expressions. You need to use `having`: ``` SELECT accounts.* , COUNT(receive) as total_counts FROM followings LEFT JOIN accounts ON followings.receive = accounts.account_id WHERE accounts.status <1 GROUP BY followings.receive HAVING total_counts < 10 ORDER BY COUNT(receive) DESC ``` A little guide on how the select works: ``` SELECT "Fields and expressions" FROM "Tables, views and / or subqueries" WHERE "Conditions to apply on the raw data (contained directly in the FROM clause)" GROUP BY "Grouping fields" HAVING "Conditions to apply on the grouped data or (aggregate) expressions" ```
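The WHERE-before-grouping versus HAVING-after-grouping rule is easy to demonstrate with SQLite from Python (toy table; MySQL raises the analogous "Unknown column" error, and the exact SQLite error text may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE followings (receive INT)")
con.executemany("INSERT INTO followings VALUES (?)", [(1,), (1,), (1,), (2,)])

# Filtering on the aggregate alias in WHERE fails, because WHERE runs
# before grouping, so the aggregate does not exist yet at that point.
where_failed = False
try:
    con.execute(
        "SELECT receive, COUNT(*) AS c FROM followings WHERE c > 1 GROUP BY receive"
    )
except sqlite3.OperationalError:
    where_failed = True

# HAVING runs after grouping, so the alias (or the aggregate) is available.
rows = con.execute(
    "SELECT receive, COUNT(*) AS c FROM followings GROUP BY receive HAVING c > 1"
).fetchall()
print(where_failed, rows)  # True [(1, 3)]
```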
To filter *before* a `GROUP BY` clause, use `WHERE`, to filter *afterwards*, use `HAVING`. Since the aggregation of the `count` occurs during the grouping, it gives an error in the `WHERE` clause - it's simply not known yet at that point of execution. Change to: ``` SELECT accounts.* , COUNT(receive) as total_counts FROM followings LEFT JOIN accounts ON followings.receive = accounts.account_id WHERE accounts.status <1 GROUP BY followings.receive HAVING count(receive) < 10 ORDER BY COUNT(receive) DESC ```
Filtering grouped results on COUNT of a number of rows
[ "", "mysql", "sql", "" ]
We have a client that is going to use the AlwaysOn Availability feature of SQL Server 2012. They want to have the BizTalk WCF-SQL port connect to the read-only replica. The [documentation](http://technet.microsoft.com/en-us/library/hh213417.aspx#ReadOnlyAppIntent) on the subject say that the connection has to be made to the SQL Server 2012 Availability Group Listener, and the connection has to be able to set the “ApplicationIntent” parameter. This tells the SQL Listener that the connection is a read-only intent connection, and that it should be redirected to a readable secondary replica. Without that working the connection will be made to the primary database which is not what is wanted. How do you configure the “ApplicationIntent” parameter on a BizTalk WCF-SQL adapter?
There is no way of doing this via the WCF-SQL adapter. We have had to go with an external helper class that creates the connection with a connection string and queries the database.

Update: a blog written by a colleague on the issue, [Can I use a BizTalk WCF-SQL send adapter with a SQL 2012 Always on Database?](http://connectedpawns.wordpress.com/2014/08/19/can-i-use-a-biztalk-wcf-sql-send-adapter-with-a-sql-2012-always-on-database/) - a snippet is quoted below.

In summary I think your choices are (in order of preference):

* Disable AlwaysOn Availability Groups / Mirroring on the SQL Server if you need to connect to a SQL Server that has this enabled.
* Disable transactions and implement logic to be able to handle duplicates.
* Disable transactions and handle the duplicates or lost messages with custom logic (e.g. send twice and compare, and implement error handling). You would need to write your own DTC handling for this, which is probably very complicated.
* Disable transactions and live with the risk of duplicates or lost messages without handling duplicates.
I think you'll need to go with WCF-Custom + sqlBinding to specify a connection string.
How do you configure a BizTalk WCF-SQL adapter to get data from a SQL Server 2012 "always on" replica database?
[ "", "sql", "sql-server", "wcf", "biztalk", "biztalk-2013", "" ]
I have table Groups:

```
ID      NUMBER
STATUS  VARCHAR2(20 BYTE)
```

I am able to count the number of rows per status as follows:

```
select g.status, count(*)
from groups g
group by g.status;

STATUS               COUNT(*)
-------------------- ----------
OK                   2
NOK                  1
```

I have other statuses, say PENDING and REJECTED, but no rows with them exist in the table. I still want them to be shown with a zero count, like this:

```
STATUS               COUNT(*)
-------------------- ----------
OK                   2
NOK                  1
PENDING              0
REJECTED             0
```

What will be the SQL statement to make this possible?
Try this (note that the derived table takes its alias without `AS`):

```
SELECT A.status, COUNT(DISTINCT G.ID) statusCnt
FROM (SELECT 'OK' status FROM DUAL
      UNION SELECT 'NOK' status FROM DUAL
      UNION SELECT 'PENDING' status FROM DUAL
      UNION SELECT 'REJECTED' status FROM DUAL) A
LEFT JOIN groups G ON A.status = G.STATUS
GROUP BY A.status;
```
If a table with the list of states exists, you can write your query this way (I suppose your state registry is called STATES):

```
SELECT states.status,
       (select count(*) from groups g where g.status = states.status)
FROM states
```

Alternatively (counting `g.status` rather than `*`, so that unmatched states get 0 instead of 1):

```
SELECT s.status, count(g.status)
FROM states s
LEFT OUTER JOIN groups g ON s.status = g.status
GROUP BY s.status
```

Otherwise you can't obtain this information.

**EDIT** (AFTER COMMENT)

Please create a table:

```
CREATE TABLE states (id int, status varchar(20))
```

In your GROUPS table, replace the status field with a FK to the states table.
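The LEFT JOIN approach can be verified end to end. The sketch below uses Python's `sqlite3`; since SQLite has no `DUAL`, the status list is built with a `VALUES` CTE instead, and `COUNT(g.id)` (non-null values only) yields the zero counts. The data is invented for the demo.

```python
import sqlite3

# Demo: statuses with no matching rows still appear, with a count of 0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (id INTEGER, status TEXT);
    INSERT INTO groups VALUES (1, 'OK'), (2, 'OK'), (3, 'NOK');
""")

rows = conn.execute("""
    WITH states(status) AS (VALUES ('OK'), ('NOK'), ('PENDING'), ('REJECTED'))
    SELECT s.status, COUNT(g.id) AS cnt
    FROM states s
    LEFT JOIN groups g ON g.status = s.status
    GROUP BY s.status
    ORDER BY cnt DESC, s.status
""").fetchall()
print(rows)  # PENDING and REJECTED show up with 0
```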
SQL Count non existing item
[ "", "sql", "postgresql", "select", "count", "left-join", "" ]
I'm trying to use an SQL statement to insert the current date into an Access table. I've got:

```
DoCmd.RunSQL "INSERT INTO tblImportedData (dtmReportDate) VALUES Now();"
```

This isn't working. Anybody know what I'm doing wrong?
You need to put `Now()` between brackets, like this:

```
INSERT INTO tblImportedData (dtmReportDate) VALUES (Now())
```
You need to put parentheses around your list of values, even though there's only one column you're inserting into:

```
DoCmd.RunSQL "INSERT INTO tblImportedData (dtmReportDate) VALUES (Now());"
```
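The missing parentheses are a plain syntax error in most SQL dialects, not just Access. This sketch reproduces the failure and the fix with Python's `sqlite3`, with `datetime('now')` standing in for Access's `Now()`; only the table name follows the question, the rest is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblImportedData (dtmReportDate TEXT)")

# Without parentheses around the VALUES list the statement is a syntax
# error - the same problem the asker ran into.
try:
    conn.execute("INSERT INTO tblImportedData (dtmReportDate) VALUES datetime('now')")
    unparenthesized_failed = False
except sqlite3.OperationalError:
    unparenthesized_failed = True

# Parenthesized, it works:
conn.execute("INSERT INTO tblImportedData (dtmReportDate) VALUES (datetime('now'))")
count = conn.execute("SELECT COUNT(*) FROM tblImportedData").fetchone()[0]
print(unparenthesized_failed, count)
```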
Insert todays date into Access table using SQL
[ "", "sql", "vba", "ms-access", "" ]
I have a submission page where I need to limit the number of attempts a user can make in a specific time period. There is a stored procedure that is called that checks for certain data in database1 and also logs the IP address and the date/time the form was submitted into database2. All I need to do is check how many attempts have been logged by that IP address within a 30 minute time period and restrict further submission attempts if that number is over 5. Here is my VB code:

```
Protected Sub btn_Cont_Click(sender As Object, e As EventArgs) Handles btn_Cont.Click
    Dim StudentIDLast4 As Integer = Val(textSSN.Text)
    Dim StudentIDInst As String = textSID.Text.ToUpper
    Dim DateOfBirth As String = textDOB.Text
    Dim IPaddress As String = Request.UserHostAddress()

    Dim sqlConnection1 As New SqlConnection("Data Source=(localdb)\v11.0;Initial Catalog=tempdb;Integrated Security=True")
    Dim cmd As New SqlCommand
    Dim returnValue As String
    Dim returnCount As Integer

    cmd.CommandText = "proc_ReverseTransferConsent_Find_Match"
    cmd.CommandType = CommandType.StoredProcedure
    cmd.Parameters.AddWithValue("@StudentIDLast4", StudentIDLast4)
    cmd.Parameters.AddWithValue("@StudentIDInst", StudentIDInst)
    cmd.Parameters.AddWithValue("@DateOfBirth", DateOfBirth)
    cmd.Parameters.AddWithValue("@IPaddress", IPaddress)
    cmd.Connection = sqlConnection1

    Dim sqlConnection2 As New SqlConnection("Data Source=(localdb)\v11.0;Initial Catalog=tempdb;Integrated Security=True")
    Dim attempts As String
    Dim comm As New SqlCommand("SELECT [Count] = COUNT(*) FROM ReverseTransferConsent_Attempt WHERE IPaddress = @IPaddress AND CreatedDate > DATEADD(MINUTE, -30, GETDATE())")
    Dim ap As New SqlDataAdapter(comm.CommandText, sqlConnection1)
    Dim ds As New DataSet()
    comm.Parameters.AddWithValue("@IPaddress", IPaddress)

    If Page.IsValid Then
        sqlConnection2.Open()
        ap.Fill(ds)
        attempts = ds.Tables(0).Rows.Count.ToString()
        sqlConnection2.Close()

        sqlConnection1.Open()
        returnValue = Convert.ToString(cmd.ExecuteScalar())
        sqlConnection1.Close()

        returnCount = returnValue.Length

        If attempts <= 5 Then
            If returnCount > 4 Then
                Response.Redirect("RTAgreement.aspx?rVal=" + returnValue)
            Else
                Label2.Text = StudentIDInst
            End If
        ElseIf attempts > 5 Then
            Label2.Text = "Only 5 submission attempts allowed per 30 minutes"
        End If
    End If
End Sub
```

It's giving me the error:

> An exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll but was not handled in user code
>
> Additional information: Must declare the scalar variable "@IPaddress".

I declared the variable using AddWithValue. Is that not correct?
The problem is that you instantiate your `SqlDataAdapter` using command text only (passing the query but not the parameters), so the parameter is never passed:

```
Dim ap As New SqlDataAdapter(comm.CommandText, sqlConnection1)
```

You should use the command instead, and instantiate your command passing the connection as well:

```
Dim comm As New SqlCommand("SELECT [Count] = COUNT(*) FROM ReverseTransferConsent_Attempt WHERE IPaddress = @IPaddress AND CreatedDate > DATEADD(MINUTE, -30, GETDATE())", sqlConnection1)
Dim ap As New SqlDataAdapter(comm)
```
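The underlying failure mode - executing the SQL text while the parameter bindings stay behind on the command object - can be reproduced in any driver. This sketch shows the analogous error and the fix with Python's `sqlite3`; the table and data are invented for the demo.

```python
import sqlite3

# Running a parameterized statement without its parameter raises, much like
# SQL Server's "Must declare the scalar variable" error.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (ip TEXT)")
conn.execute("INSERT INTO attempts VALUES ('10.0.0.1')")

sql = "SELECT COUNT(*) FROM attempts WHERE ip = :ip"
try:
    conn.execute(sql)  # SQL text only, parameter never supplied -> error
    missing_param_raised = False
except sqlite3.ProgrammingError:
    missing_param_raised = True

# Supplying the parameter alongside the same statement works fine.
count = conn.execute(sql, {"ip": "10.0.0.1"}).fetchone()[0]
print(missing_param_raised, count)
```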
You might have an instance of SQL Server that is case sensitive. Can you check how the IPaddress parameter is defined in the `proc_ReverseTransferConsent_Find_Match` stored procedure?
VB.NET SQL limit submission attempts
[ "", "asp.net", "sql", "vb.net", "" ]
Suppose I have a string `1,2,3,`. I would like to remove the last `,`; and if the string looks like `,1,2,3,` or `,1,2,3`, I would still like to get `1,2,3` as my result. And please try to be a little explanatory in your answer. I do not just want to copy-paste stuff without understanding it. Thank you.
One way to deal with "trimming" commas like that would be using a `CASE` statement:

```
CASE WHEN str LIKE ',%,' THEN SUBSTRING(str, 2, LEN(str)-2)
     WHEN str LIKE ',%'  THEN RIGHT(str, LEN(str)-1)
     WHEN str LIKE '%,'  THEN LEFT(str, LEN(str)-1)
     ELSE str
END
```

This is very much self-explanatory: the `CASE` statement considers three situations -

* When the string `str` has commas on both sides,
* When the string `str` starts in a comma, but does not end in one, and
* When the string `str` ends in a comma, but does not start in one.

In the first case, the first and the last characters are removed; in the second case, the leftmost character is removed; in the last case, the trailing character is removed.

[Demo on sqlfiddle.](http://sqlfiddle.com/#!6/42c00/1)
```
declare @str varchar(20) = ',1,2,3,'

select case
         when @str like ',%,' then stuff(stuff(@str, 1, 1, ''), LEN(stuff(@str, 1, 1, '')), 1, '')
         when @str like ',%'  then stuff(@str, 1, 1, '')
         when @str like '%,'  then stuff(@str, LEN(@str), 1, '')
         else @str
       end
```
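The `CASE` trimming logic can be exercised against all four input shapes from the question. This sketch ports it to SQLite via Python's `sqlite3` (`SUBSTR`/`LENGTH` in place of T-SQL's `SUBSTRING`/`LEN`); in SQLite, `TRIM(str, ',')` would also do the whole job in one call.

```python
import sqlite3

# Apply the CASE-based comma trimming to each test string.
conn = sqlite3.connect(":memory:")
trim_sql = """
    SELECT CASE
        WHEN :s LIKE ',%,' THEN SUBSTR(:s, 2, LENGTH(:s) - 2)
        WHEN :s LIKE ',%'  THEN SUBSTR(:s, 2)
        WHEN :s LIKE '%,'  THEN SUBSTR(:s, 1, LENGTH(:s) - 1)
        ELSE :s
    END
"""
inputs = ("1,2,3,", ",1,2,3", ",1,2,3,", "1,2,3")
results = [conn.execute(trim_sql, {"s": s}).fetchone()[0] for s in inputs]
print(results)  # every variant trims down to "1,2,3"
```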
How to remove a specific character from a string, only when it is the first or last character in the string.
[ "", "sql", "sql-server", "sql-server-2008", "" ]