Is there a way to update a column in MySQL only if the value is numeric? i.e. `UPDATE myTable SET ColumnA = NewValue WHERE ColumnA isnumeric` `ColumnA` is `varchar(25)` and its value may be text or 1 or 2 or 23... Thank you.
Use a regular expression to see if your column contains a number; note that without anchors this matches any value that merely contains a digit ``` WHERE ColumnA REGEXP '[0-9]+' ```
Use the [**REGEXP**](http://dev.mysql.com/doc/refman/5.5/en/regexp.html#operator_regexp) operator. Try this: ``` UPDATE myTable SET ColumnA = NewValue WHERE ColumnA REGEXP '^[0-9]+$' ``` **OR** ``` UPDATE myTable SET ColumnA = NewValue WHERE ColumnA REGEXP '^[[:digit:]]+$' ```
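To see why the anchors matter, here is a quick Python sketch (not MySQL, just illustrating the regex logic): without `^` and `$`, a pattern like `[0-9]+` matches any string that merely *contains* a digit.

```python
import re

def is_numeric(value):
    # Anchored match: the whole string must be digits,
    # mirroring REGEXP '^[0-9]+$' in MySQL.
    return re.fullmatch(r'[0-9]+', value) is not None

def contains_digit(value):
    # Unanchored match: succeeds anywhere in the string,
    # like REGEXP '[0-9]+' without anchors.
    return re.search(r'[0-9]+', value) is not None

print(is_numeric('23'))        # True  -> row would be updated
print(is_numeric('abc23'))     # False -> row left alone
print(contains_digit('abc23')) # True  -> the unanchored pattern would still match
```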
MySQL: Update a column if value is numeric only
[ "", "mysql", "sql", "regex", "sql-update", "numeric", "" ]
I have an XML column that looks like ``` SET @XMLData = '<ArrayOfEntityNested xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.Bijak"> <EntityNested> <Id xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto">1</Id> <Date xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">0001-01-01T00:00:00</Date> <Description xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">deesc</Description> <Number xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak" i:nil="true" /> </EntityNested> </ArrayOfEntityNested>' ``` I need to insert data from the XML into a temp table, as described [here](http://www.venkateswarlu.co.in/SQLServer/How_to_read_data_from_XML_string.aspx). For this I used the following code, but it's not working: it doesn't insert any data into the temp table. ``` --Variables Declaration DECLARE @XMLData VARCHAR(MAX) DECLARE @idoc INT -- Creating Temporary Table CREATE TABLE #TEMP_TABLE ( REC_ID INT IDENTITY(1,1), [Id] INT, [Date] VARCHAR(50), [Number] VARCHAR(50), ); --Case 1 SET @XMLData = '<ArrayOfEntityNested xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.Bijak"> <EntityNested> <Id xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto">1</Id> <Date xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">0001-01-01T00:00:00</Date> <Number xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak" i:nil="true" /> </EntityNested> </ArrayOfEntityNested> ' --Reading Data from XML and inserting into Temp Table EXECUTE sp_xml_preparedocument @idoc OUTPUT, @XMLData INSERT INTO #TEMP_TABLE SELECT * FROM OpenXML(@idoc,'/ArrayOfEntityNested/EntityNested', 1) WITH #TEMP_TABLE EXECUTE sp_xml_removedocument @idoc --Displaying data from Temp Table SELECT * FROM #TEMP_TABLE DROP TABLE #TEMP_TABLE; ``` But that doesn't work. If the XML were formatted like this instead: ``` SET @XMLData = '<ArrayOfEntityNested> 
<EntityNested> <Id>1</Id> <Date>0001-01-01T00:00:00</Date> <Description>deesc</Description> <EmployeeId>2</EmployeeId> <IsDeleted>false</IsDeleted> <LoadingPermitTruckId>7541</LoadingPermitTruckId> </EntityNested> </ArrayOfEntityNested>' ``` then it works. Please help me.
First of all - please **use appropriate data types!** If your source data is XML - why aren't you using the `XML` datatype? Also, if you have a `Date` in your table - why isn't that a `DATE` or `DATETIME` type? And why is `Number` a `VARCHAR(50)`? That makes no sense. Then: you're not looking at the XML namespaces that are present in the XML document - but you **must**! Lastly, I would recommend using the native XQuery support instead of the legacy, deprecated `sp_xml_preparedocument` / `OpenXML` approach - it seems much easier and clearer to me. Use this: ``` -- variable declaration DECLARE @XMLData XML -- creating temporary table CREATE TABLE #TEMP_TABLE ( REC_ID INT IDENTITY(1,1), [Id] INT, [Date] DATETIME2(3), [Number] INT ); ``` and then use proper XQuery statements, **including the XML namespaces**, to handle the data: ``` SET @XMLData = '<ArrayOfEntityNested xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.Bijak"> <EntityNested> <Id xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto">1</Id> <Date xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">0001-01-01T00:00:00</Date> <Number xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak" i:nil="true" /> </EntityNested> <EntityNested> <Id xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto">42</Id> <Date xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">2013-12-22T14:45:00</Date> <Number xmlns="http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak">373</Number> </EntityNested> </ArrayOfEntityNested>' ;WITH XMLNAMESPACES ('http://schemas.datacontract.org/2004/07/Gbms.Dto.Bijak' AS ns1, 'http://schemas.datacontract.org/2004/07/Gbms.Dto' AS ns2, 'http://schemas.datacontract.org/2004/07/Gbms.Dto.VirginBijak' AS ns3) INSERT INTO #TEMP_TABLE(ID, Date, Number) SELECT xc.value('(ns2:Id)[1]', 'int'), xc.value('(ns3:Date)[1]', 'DateTime2'), xc.value('(ns3:Number)[1]', 'int') FROM 
@XmlData.nodes('/ns1:ArrayOfEntityNested/ns1:EntityNested') AS xt(xc) ```
``` DECLARE @idoc int DECLARE @doc varchar(1000) SET @doc =' <OutLookContact> <Contact FirstName="Asif" LastName="Ghafoor" EmailAddress1="asifghafoor@my.web.pk" /> <Contact FirstName="Rameez" LastName="Ali" EmailAddress1="rameezali@my.web.pk" /> </OutLookContact>' --Create an internal representation of the XML document. EXEC sp_xml_preparedocument @idoc OUTPUT, @doc -- Execute a SELECT statement that uses the OPENXML rowset provider. DECLARE @Temp TABLE(FirstName VARCHAR(250),LastName VARCHAR(250),Email1 VARCHAR(250)) INSERT INTO @Temp(FirstName,LastName,Email1) SELECT * FROM OPENXML (@idoc, '/OutLookContact/Contact',1) WITH (FirstName varchar(50),LastName varchar(50),EmailAddress1 varchar(50)) select FirstName,LastName,Email1 from @Temp ```
insert data from xml column into temp table
[ "", "sql", "sql-server", "xml", "" ]
I have a search screen on my app where the users can use up to 4 parameters to search. I have written a stored procedure to facilitate the search. ``` Select ID, FirstName, LastName, CountryCd, State, Zip, Data1, Data2, Data3 From Customer Where ((FirstName like @paramfname) OR (@paramfname IS NULL) ) AND ((LastName like @paramlname) OR (@paramlname IS NULL)) AND CountryCd = @paramcountry AND ((Zip = @paramzip) OR (@paramzip IS NULL)) ``` I have added one secondary index that includes all 4 columns `FirstName,LastName,CountryCd` and `Zip`. ``` CREATE NONCLUSTERED INDEX [idx_Cust_FN_Ctry] ON [dbo].[Customer]([FirstName] ASC, [LastName] ASC, [CountryCd] ASC, [Zip] ASC) ``` My question: is this one index enough for efficient search? If the user runs a search using only `FirstName` and `Country`, does SQL Server know how to use the index efficiently? Or do I need to add 4 separate secondary indexes, one on each of these fields? Thanks
For this query, I would lead the index with `CountryCd` as that is the only column that is guaranteed to be searched. This of course assumes that `CountryCd` is selective for your data. If you want to get really fancy, you can add a computed column that contains the SOUNDEX of `FirstName` and another for `LastName` and index those. If you do, you can express the filter like this ``` ((FirstNameSound = SOUNDEX(@paramfname)) OR (@paramfname IS NULL) ) AND ((FirstNameSound = SOUNDEX(@paramlname)) OR (@paramlname IS NULL)) AND CountryCd = @paramcountry AND ((Zip = @paramzip) OR (@paramzip IS NULL)) ``` This only works for English-sounding names - you should understand the meaning of the SOUNDEX function before doing this. Regarding your question about 1-4 indexes: this really depends on what is normally searched together. If any combination is equally likely, you probably need all four indexes (or at the very least, indexes on the columns that are selective). If `FirstName` plus `LastName` is typically searched together, you should have an index on those two columns. However, let me challenge your design a bit. It really looks like you are trying to solve a fulltext index problem with traditional indexes. I would recommend having a look at the fulltext indexing engine in SQL Server before finalising the design.
If a user searches by first name and country, the index would most probably be beneficial. However, if a user searches only by last name, the index would have to be read entirely - an index scan. Note that index key column order does matter, because it tells SQL Server how to logically sort the index. When designing an index and its key columns ([FirstName] ASC, [LastName] ASC, [CountryCd] ASC, [Zip] ASC) you should start with the most selective one - in this case, I believe, the last name. I wrote a blog post on this subject: <http://sqlmate.wordpress.com/2013/11/05/interview-question-2/> In some cases adding multiple indexes to support reads is necessary, but be careful not to over-index :)
Should I add multiple secondary indexes to search
[ "", "sql", "sql-server", "performance", "" ]
In SQL Server 2008 R2 I have some data in a varchar(12) column, it looks something like this: ``` Data: %%1234 %1765 34566 123 %SDRMH HJG434 ``` I'd like to drop the '%' from all the rows that have it and also only select those which are numbers. I've already tried to use the ISNUMERIC() function, but it looks like that leaves the % in the rows. Any help is greatly appreciated.
You can use a combination of REPLACE and ISNUMERIC to achieve the result set you want: ``` SELECT REPLACE(columnName,'%','') FROM tableName WHERE ISNUMERIC(REPLACE(columnName,'%','')) = 1 ```
You could use the `REPLACE` function to strip out all the % instances ``` SELECT REPLACE(column_name,'%','') FROM table_name; GO ```
Select Only Numbers From Varchar column
[ "", "sql", "sql-server", "" ]
I have a db table say, `persons` in Postgres handed down by another team that has a column name say, `"first_Name"`. Now I am trying to use PG Commander to query this table on this column name. ``` select * from persons where first_Name="xyz"; ``` And it just returns > ERROR: column "first_Name" does not exist Not sure if I am doing something silly, or is there a workaround to this problem that I am missing?
**Identifiers** (including column names) that are **not** double-quoted are folded to lower case in PostgreSQL. Identifiers created with ***double quotes*** retain upper case letters (and/or syntax violations) and have to be double-quoted for the rest of their life: ``` "first_Name" -- upper-case "N" preserved "1st_Name" -- leading digit preserved "AND" -- reserved word preserved ``` But (without double-quotes): ``` first_Name β†’ first_name -- upper-case "N" folded to lower-case "n" 1st_Name β†’ Syntax error! -- leading digit AND β†’ Syntax error! -- reserved word ``` **Values** (string literals / constants) are enclosed in ***single quotes***: ``` 'xyz' ``` So, *yes*, PostgreSQL column names are case-sensitive (when double-quoted): ``` SELECT * FROM persons WHERE "first_Name" = 'xyz'; ``` [The manual on identifiers.](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) My standing advice is to use legal, lower-case names exclusively, so double-quoting is never required.
To quote the [documentation](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS): > Key words and unquoted identifiers are case insensitive. Therefore: > > ``` > UPDATE MY_TABLE SET A = 5; > ``` > > can equivalently be written as: > > ``` > uPDaTE my_TabLE SeT a = 5; > ``` You could also write it using *quoted identifiers*: ``` UPDATE "my_table" SET "a" = 5; ``` Quoting an identifier makes it case-sensitive, whereas unquoted names are always folded to lower case (unlike the SQL standard where unquoted names are folded to upper case). For example, the identifiers `FOO`, `foo`, and `"foo"` are considered the same by PostgreSQL, but `"Foo"` and `"FOO"` are different from these three and each other. If you want to write portable applications you are advised to always quote a particular name or never quote it.
Are PostgreSQL column names case-sensitive?
[ "", "sql", "postgresql", "identifier", "case-sensitive", "case-insensitive", "" ]
I have a query that searches for a range of user accounts. Every time I run the query I pass the first 5 digits of multiple ID numbers and search based on those. I wanted to know: is there any other way to rewrite this query for a user ID range when we use more than 10 user IDs to search? Is there going to be a huge performance hit with this kind of search? ## example: ``` select A.col1,B.col2,B.col3 from table1 A,table2 B where A.col2=B.col2 and (B.col_id like '12345%' OR B.col_id like '47474%' OR B.col_id like '59598%'); ``` I am using Oracle 11g.
Actually, it is not important how many UserIDs you pass to the query. What matters most is the selectivity of your query. In other words: how many rows your query will return, and how many rows there are in your tables. If the number of returned rows is relatively small, then it is a good idea to create an index on column `B.col_id`. There is also nothing wrong with using an `OR` condition. Basically, each `OR` will add one more `INDEX RANGE SCAN` to the execution plan with a final `CONCATENATION` (but you'd rather check your actual plan to be sure). If the total cost of all those operations is lower than a full table scan, the Oracle CBO will use your index. Otherwise, if you select >=20-30% of the data at once, a full table scan is most likely to happen, and you should worry even less about `OR` because all data will be read anyway and comparing each value against your multiple conditions won't add much overhead.
A LIKE with a leading wildcard makes it impossible for Oracle to use indexes; a prefix pattern such as `'12345%'` can still use an index range scan. If the query is going to be reused, consider creating a synthetic column with the first 5 characters of COL_ID. Put a non-unique index on it. Put your search keys in a table and join that to that column. There may be a way to do it with a function-based index on the first 5 characters.
How to improve the performance of this type of query?
[ "", "sql", "performance", "oracle11g", "" ]
I need to find if a column value contains the following pattern: ``` 513-2400-23 - Valid 513-PBS-231 - Valid 521-PB-21 - Valid 52-12-21 - Valid 513-2321 - Not Valid ``` I have tried the following version and many others, but they work for one case and not for the others. ``` SELECT CASE WHEN REGEXP_LIKE('B12-23-43', '.-.-.') THEN 'Y' ELSE 'N' END FROM DUAL; ```
``` Select Case WHEN REGEXP_LIKE('B12-23-43', '^[[:alnum:]]{1,}-[[:alnum:]]{1,}-[[:alnum:]]{1,}$') THEN 'Y' ELSE 'N' END FROM DUAL; ``` **UPDATE** To also cover cases such as this one: `544-445-PBBTS-24.3`, it could be extended as shown below: ``` Select Case When Regexp_Like('B12-23-43', '^([[:alnum:]]{1,}-){2}[[:alnum:]]{1,}(-[[:alnum:]]{1,}\.[[:alnum:]]{1,})?$') THEN 'Y' ELSE 'N' END FROM DUAL; ```
Assuming that valid pattern requires exactly two dashes, this should work: ``` SELECT CASE WHEN REGEXP_LIKE('B12-23-43', '^[^-]+-[^-]+-[^-]+$') THEN 'Y' ELSE 'N' END FROM DUAL; ``` The pattern requires the string to start in one or more non-dashes, then to have a dash, then some more non-dash characters, and finally some more non-dash characters. [Demo on sqlfiddle.](http://www.sqlfiddle.com/#!4/d41d8/22574)
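The pattern can be sanity-checked against the sample values from the question. Python's `re` handles this particular expression the same way Oracle's POSIX-style engine does, so this is a quick sketch, not Oracle itself:

```python
import re

# Same pattern as the answer: something-dash-something-dash-something,
# with no dashes inside the three segments.
pattern = re.compile(r'^[^-]+-[^-]+-[^-]+$')

samples = {
    '513-2400-23': True,
    '513-PBS-231': True,
    '521-PB-21':   True,
    '52-12-21':    True,
    '513-2321':    False,  # only one dash -> not valid
}

for value, expected in samples.items():
    assert bool(pattern.match(value)) == expected
print('all samples behave as expected')
```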
REGEXP_LIKE - Find pattern containing special character
[ "", "sql", "regex", "oracle", "" ]
I need to write a query that returns the latest message in a conversation between two users. I've included the schema and my (failed) attempts in this fiddle: <http://sqlfiddle.com/#!15/322c3/11> I've been working around the problem for some time now but every time I run any of my ugly queries *a sweet little kitten dies*. Any help would be much appreciated. Do it for the *kittens*.
> ... the latest message in a conversation between two users. Assuming the users with ID `1` and `3`, like you did in the fiddle, we are interested in the message with the latest `created_at` and `(sender_id, receiver_id)` being `(1,3)` or `(3,1)`. You can use ad-hoc row types to make the syntax short: ``` SELECT * FROM messages WHERE (sender_id, receiver_id) IN ((1,3), (3,1)) ORDER BY created_at DESC LIMIT 1; ``` Or explicitly (and slightly faster, also easier to use with indexes): ``` SELECT * FROM messages WHERE (sender_id = 1 AND receiver_id = 3 OR sender_id = 3 AND receiver_id = 1) ORDER BY created_at DESC LIMIT 1; ``` ### For *all* conversations of a user Added solution as per request in comment. ``` SELECT DISTINCT ON (user_id) * FROM ( SELECT 'out' AS type, id, receiver_id AS user_id, body, created_at FROM messages WHERE sender_id = 1 UNION ALL SELECT 'in' AS type, id, sender_id AS user_id, body, created_at FROM messages WHERE receiver_id = 1 ) sub ORDER BY user_id, created_at DESC; ``` The approach here is to fold foreign sender / receiver into one column to simplify the extraction of the last row. Detailed explanation for `DISTINCT ON` in this related answer: * [Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) [sqlfiddle](http://sqlfiddle.com/#!15/1d80e/1) - with improved and simplified test case
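For anyone wanting to poke at the logic without a PostgreSQL instance, here is a small Python + sqlite3 sketch of the explicit-OR form (schema reduced to the relevant columns, sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        sender_id INTEGER,
        receiver_id INTEGER,
        body TEXT,
        created_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO messages (sender_id, receiver_id, body, created_at) "
    "VALUES (?, ?, ?, ?)",
    [
        (1, 3, 'hi',        '2014-01-01 10:00'),
        (3, 1, 'hello',     '2014-01-01 10:05'),
        (1, 2, 'unrelated', '2014-01-01 10:10'),  # different conversation
    ],
)

# Latest message between users 1 and 3, regardless of direction.
row = conn.execute("""
    SELECT body FROM messages
    WHERE (sender_id = 1 AND receiver_id = 3)
       OR (sender_id = 3 AND receiver_id = 1)
    ORDER BY created_at DESC
    LIMIT 1
""").fetchone()
print(row[0])  # hello
```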
This provides the latest message between two users, regardless of message direction: ``` SELECT Distinct mes.ID, sendu.Username AS Sender, recu.Username as Receiver, Body, maxSent as TimeSent FROM messages mes INNER JOIN ( SELECT One, Two, MAX(CREATED_AT) maxSent FROM ( SELECT 'Sender' as type, Sender_ID AS One, receiver_id as Two,created_At FROM messages UNION ALL SELECT 'Receiver' as type, receiver_id AS One, Sender_ID as Two ,created_At FROM messages ) a Group By One,Two ) b ON mes.created_at = b.maxSent INNER JOIN users sendu ON sendu.ID = mes.Sender_ID INNER JOIN users recu ON recu.ID = mes.Receiver_ID ``` It does not separate 'conversations', but there is nothing to signify that. Perhaps if you also include a message header or title field this will be possible.
How to get the latest message in each conversation of a certain user in SQL?
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
I have two tables `test1` and `test2`. What I need is to update one column in the table `test2` with data from the table `test1`. My query is ``` UPDATE test2 t2 SET t2.name = (SELECT t1.name FROM test1 t1 WHERE t1.id = t2.mob) WHERE t2.mob IN (SELECT t1.id FROM test1 t1 WHERE t1.id = t2.mob) ``` It shows `3 rows updated`, but the change is not reflected in my table. My [reference](https://stackoverflow.com/questions/7030699/oracle-sql-update-with-data-from-another-table). Is there any issue in my query, or what should I do instead? ![enter image description here](https://i.stack.imgur.com/4lb3b.png)
The `WHERE` part in your query is absolutely unnecessary because it always evaluates to `TRUE`. So the correct one to update all rows in `t2` is: ``` UPDATE test2 t2 SET t2.name = (SELECT t1.name FROM test1 t1 WHERE t1.id = t2.mob) ``` Also, in PL/SQL Developer transactions are not committed automatically by default. You have to commit manually by pressing the green arrow on the panel. ![enter image description here](https://i.stack.imgur.com/A4Bol.png)
It would be easier to use a [merge](http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9016.htm#SQLRF01606) statement: ``` /* test tables */ SQL> create table test1(id1, name1) as 2 select level 3 , dbms_random.string('l', 7) 4 from dual 5 connect by level <= 5; Table created SQL> create table test2(id1, name1) as 2 select level 3 , cast(null as varchar2(11)) 4 from dual 5 connect by level <= 5; Table created ``` Tables' contents: ``` SQL> column name1 format a10; SQL> select * from test1; ID1 NAME1 ---------- ---------- 1 ouegwac 2 bptytsz 3 xwpnuqi 4 jrbxeza 5 hlckwvk SQL> select * from test2; ID1 NAME1 ---------- ---------- 1 NULL 2 NULL 3 NULL 4 NULL 5 NULL ``` Update `test2.name1` column with data from `test1.name1` column: ``` SQL> merge into test2 t 2 using test1 q 3 on (q.id1 = t.id1) 4 when matched then 5 update set t.name1 = q.name1 6 ; 5 rows merged SQL> select * from test2; ID1 NAME1 ---------- ---------- 1 ouegwac 2 bptytsz 3 xwpnuqi 4 jrbxeza 5 hlckwvk ```
update query result not reflecting in table
[ "", "sql", "oracle", "" ]
``` +------+-----------------+ | id | name + +------+-----------------+ | 1 | David Gilmour | | 2 | Roger Waters | | 3 | Li Jen Ho | +------+-----------------+ ``` The current format of names is `firstName lastName` and I want to change it to `lastName firstName`, and I want to do that in one database query. The solution I have in mind is to split names by `space`, reverse the items and then implode them again with `space` glue. But I don't know how to write a query for that. *I could simply do that in PHP by exploding names by space into an array, reversing the array and then imploding the words with space glue, but currently I don't have access to PHP.* **Update 1:** I found [this similar question](https://stackoverflow.com/questions/14950466/how-to-split-the-name-string-in-mysql) but I can't make it work. I hope I'm not asking a duplicate question. **Update 2:** Names can have more than 2 parts. I assume only the first word is the first name and the rest of the name is the last name.
Try this: ``` select concat(last_name," ",first_name) as FullName from ( select SUBSTRING_INDEX(SUBSTRING_INDEX(name, ' ', 2), ' ', -1) AS last_name, SUBSTRING_INDEX(SUBSTRING_INDEX(name, ' ', 1), ' ', -1) AS first_name from your_table ) tab ``` See a sample here <http://sqlfiddle.com/#!2/cd4ee/4> **EDIT:** Then a slightly modified version will do the job. You can refer to the same fiddle for an updated sample as per your need. ``` select concat(last_name," ",first_name) as FullName from ( select right(name,(length(name) - instr(name,' '))) as last_name, SUBSTRING_INDEX(SUBSTRING_INDEX(name, ' ', 1), ' ', -1) AS first_name from tab ) tab1 ```
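Outside SQL, the string manipulation in the edited query (split at the first space, swap the two halves) is easy to sanity-check. A plain-Python equivalent of that logic:

```python
def reverse_name(full_name):
    # Split on the first space only: 'Li Jen Ho' -> ('Li', 'Jen Ho'),
    # matching the rule that only the first word is the first name.
    first, _, last = full_name.partition(' ')
    return f'{last} {first}'.strip()

print(reverse_name('David Gilmour'))  # Gilmour David
print(reverse_name('Li Jen Ho'))      # Jen Ho Li
```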
I know I am late to the party but there is a better solution that can handle an unknown number of results. [GROUP\_CONCAT](https://dev.mysql.com/doc/refman/8.0/en/group-by-functions.html#function_group-concat) ``` SELECT student_name, GROUP_CONCAT(DISTINCT test_score ORDER BY test_score DESC SEPARATOR ' ') FROM student GROUP BY student_name; ```
Explode and implode strings in mysql query
[ "", "mysql", "sql", "" ]
I've got a SQL statement (SQL Server Management Studio) whose where-statement I'm passing data into via dashboard software. The users can select the year (2013 or now 2014) and also the month (which gets passed as a numeric value - so December = 12). I need to adjust the statement so that I get the last 3 months from the year/month they select. Before, b/c the SQL statement was only dealing with 2013 data, it was just the following: ``` YEAR(Main.ActivityDate) = '@Request.parmYear~' AND (Month(Main.ActivityDate) Between ('@Request.parmMonth~'-2) and '@Request.parmMonth~') ``` Normally, parmYear = 2013, and then whatever month they select, it will grab the 2 months prior through the current month. Now, b/c it's January 2014, I need it to grab January 2014 + December 2013 + November 2013. I'm wondering how to adjust the statement to make this happen dynamically. Thoughts?
There are two solutions for this. * Modify your current where statement and add a condition to check for this case. * Use the `DATEADD` function (present in comments and other answer(s)). **Modifying your where to add a condition** Note: minor errors may exist, since I would need to check whether January has a month value of zero or 1. Example: ``` WHERE ( '@Request.parmMonth~'-2 < 1 AND YEAR(Main.ActivityDate) = '@Request.parmYear~'-1 AND Month(Main.ActivityDate) Between (12+'@Request.parmMonth~'-2) AND 12 ) OR ( YEAR(Main.ActivityDate) = '@Request.parmYear~' AND (Month(Main.ActivityDate) Between ('@Request.parmMonth~'-2) and '@Request.parmMonth~') ) ```
I do not have a running SQL Server instance to test this solution, but I would suggest constructing a date and using the built-in functions to calculate the previous date, as those already take year boundaries into consideration. ``` Declare @requestDate date = DATEFROMPARTS('@Request.parmYear', '@Request.parmMonth', 1); ... AND Main.ActivityDate between DATEADD(month, -2, @requestDate) AND @requestDate ``` [See this for more](http://technet.microsoft.com/en-us/library/ms186819.aspx) details.
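Independent of the SQL syntax, the month/year wrap-around both answers deal with can be sketched in plain Python (the function name is invented for illustration):

```python
def last_three_months(year, month):
    """Return (year, month) pairs for the selected month and the two before it."""
    result = []
    for offset in range(3):
        m = month - offset
        y = year
        if m < 1:          # wrapped into the previous year
            m += 12
            y -= 1
        result.append((y, m))
    return result

# January 2014 should pull in December and November 2013.
print(last_three_months(2014, 1))  # [(2014, 1), (2013, 12), (2013, 11)]
```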
Last 3 Months Where Statement
[ "", "sql", "sql-server", "where-clause", "rolling-computation", "" ]
I have the following table: **dbPratiche:** ![enter image description here](https://i.stack.imgur.com/1W34h.png) Here I want the distinct [Provincia contraente] values (first column) together with a count of how many times each one appears in [Provincia Agenzia]. E.g. I can get the distinct first column with the following query: ``` select distinct([Provincia contraente]) from dbPratiche ``` Then I want to give, in the next column, a count of how many times it has appeared in the second column, i.e. [Provincia Agenzia]. In simple words, a prototype query: ``` SELECT DISTINCT( [provincia contraente] ), Count([provincia agenzia]) FROM dbpratiche WHERE Distinct([provincia contraente]) = [provincia agenzia] ``` Of course this query fails, but I made a second query with joins: ``` SELECT DISTINCT( p1.[provincia contraente] ) AS 'PC', Count(p2.[provincia agenzia]) FROM dbpratiche p1 INNER JOIN dbpratiche p2 ON p1.[provincia contraente] = p2.[provincia contraente] AND p2.[provincia agenzia] = p1.[provincia contraente] GROUP BY p1.[provincia contraente] ``` But it is not giving me correct results. Please help me. Actual table: ``` Provincia contraente | Provincia Agenzia TA TA CE TA CE CE MI FR MI TA FR FR ``` **Expected Result:** ``` Provincia contraente | Provincia Agenzia TA 3 CE 1 MI 0 FR 2 ``` **EDIT2:** I also want to add the count of the first column ``` Provincia contraente | Provincia Agenzia | cnt Provincia contraente TA 3 1 CE 1 2 MI 0 2 FR 2 1 ``` What I tried: I added `COUNT(dbPratiche.[Provincia contraente])` in @verex's query as ``` SELECT T1.[provincia contraente], Count(dbpratiche.[provincia agenzia]), Count(dbpratiche.[provincia contraente]) FROM dbpratiche RIGHT JOIN (SELECT DISTINCT [provincia contraente] FROM dbpratiche) AS T1 ON dbpratiche.[provincia agenzia] = T1.[provincia contraente] GROUP BY T1.[provincia contraente] ``` But I got the count of [Provincia Agenzia]. Please help.
This query also shows 0 values: ``` SELECT [provincia contraente], Sum(CASE WHEN [provincia contraente] = [provincia agenzia] THEN 1 ELSE 0 END) FROM dbpratiche GROUP BY [provincia contraente] ``` **UPD:** Here is the query which works for updated question: ``` SELECT T1.[provincia contraente], Count(dbpratiche.[provincia agenzia]), Max(T1.cnt) FROM dbpratiche RIGHT JOIN (SELECT [provincia contraente], Count(*) AS Cnt FROM dbpratiche GROUP BY [provincia contraente]) AS T1 ON dbpratiche.[provincia agenzia] = T1.[provincia contraente] GROUP BY T1.[provincia contraente] ``` `SQLFiddle demo`
You should be able to use a GROUP BY statement. ``` select [Provincia contraente],count([Provincia Agenzia]) from dbPratiche where [Provincia contraente]=[Provincia Agenzia] group by [Provincia contraente] ```
SQL distinctly arranging column query issue
[ "", "sql", "sql-server", "database", "sql-server-2008-r2", "" ]
I have created a database in SQL Server 2008 R2 with a front end in MS Access 2003, in `.adp` file format. This application uses Devanagari Unicode fonts for labels and data in the database. The back end has a table `tblBookInfo` for storing Unicode data. When I write a query to find all records starting with a particular character, it does not return all the expected records. The query is as follows: ``` SELECT * FROM tblbookinfo info WHERE (Language = 1) AND (Type = 1) AND Author LIKE N'क%' ``` It does not return records whose starting character is "कु". It returns records whose starting character is "क", "कि", "को" etc. The Unicode code point of the "क" character is 0915, and of the vowel sign in "कु" it is 0941. If I loop through "कु" for Unicode character numbers, I get 0915 then 0941, but LIKE N'क%' does not return records having "कु".
In my tests, if the `NVARCHAR` data type is used with the `Indic_General_100_CI_AI` collation, then `LIKE` returns the required results: ``` SELECT * FROM ( SELECT N'क' UNION ALL SELECT N'कु' UNION ALL SELECT N'how (strange!)' ) x (Author) WHERE x.Author COLLATE Indic_General_100_CI_AI LIKE N'क%' COLLATE Indic_General_100_CI_AI ``` Output: ``` Author -------------- क कु ```
Can you not just use the `OR` operator: ``` SELECT * FROM tblbookinfo info WHERE (Language = 1) AND (Type = 1) AND ( (Author LIKE N'क%') OR (Author LIKE N'कु%') ) ```
Unicode T-SQL LIKE not returning all expected records
[ "", "sql", "sql-server", "t-sql", "unicode", "" ]
I am using a MySQL database. I have a database lms_system with a table supplier_details. How can I write a query to display the supplier id, supplier name and contact, and if the phone number is null display 'No', else display 'Yes', with the alias name "PHONENUMAVAILABLE"?
There are a few ways to do that ``` SELECT id, name, contact, IF(phone IS NULL, 'No', 'Yes') phonenumavailable FROM supplier_details ``` or ``` SELECT id, name, contact, CASE WHEN phone IS NULL THEN 'No' ELSE 'Yes' END phonenumavailable FROM supplier_details ``` or ``` SELECT id, name, contact, ELT((phone IS NULL) + 1, 'Yes', 'No') phonenumavailable FROM supplier_details ``` or ``` SELECT id, name, contact, COALESCE(REPLACE(phone, phone, 'Yes'), 'No') phonenumavailable FROM supplier_details ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/be3c4/4)** demo
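Since the `CASE WHEN ... IS NULL` form is portable SQL, it can be checked quickly with Python's built-in sqlite3 (table trimmed to the relevant columns, rows invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE supplier_details (id INTEGER, name TEXT, phone TEXT)"
)
conn.executemany(
    "INSERT INTO supplier_details VALUES (?, ?, ?)",
    [(1, 'Acme', '555-0100'), (2, 'Globex', None)],  # one NULL phone
)

rows = conn.execute("""
    SELECT name,
           CASE WHEN phone IS NULL THEN 'No' ELSE 'Yes' END AS PHONENUMAVAILABLE
    FROM supplier_details
    ORDER BY id
""").fetchall()
print(rows)  # [('Acme', 'Yes'), ('Globex', 'No')]
```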
``` SELECT ...... , IF(phone_number is null, 'NO', 'YES') FROM .... ```
display different text by checking null or not
[ "", "mysql", "sql", "" ]
Hi, I am trying to get (current datetime - 7 years). Below is my query: ``` print getdate() print (getdate()-(365.25*7)) Result: Dec 30 2013 10:47AM Dec 30 2006 4:52PM ``` This gives the correct result. But when I try ``` print getdate() print (getdate()-year(7)) Result: Dec 30 2013 10:52AM Oct 17 2008 10:52AM ``` Can anyone please tell me what is wrong with it?
Try this ``` print getdate() print DATEADD(Year, -7, GETDATE()) ```
Rather use [DATEADD](http://technet.microsoft.com/en-us/library/ms186819.aspx) with the datepart set to `YEAR`. > Returns a specified date with the specified number interval (signed > integer) added to a specified datepart of that date. Something like ``` SELECT GETDATE() Today, DATEADD(YEAR,-7,GETDATE()) Today7YearsAgo ``` ## [SQL Fiddle DEMO](http://www.sqlfiddle.com/#!3/d41d8/27374) The `year(7)` part returns `1900`, which is the year part of `1900-01-08 00:00:00.000` (`CAST(7 AS DATETIME)`). So `getdate()-year(7)` equates to `getdate()-1900`, which subtracts 1900 days.
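The arithmetic behind both results can be reproduced in plain Python: subtracting `year(7)` means subtracting 1900 days, while a calendar-aware "minus 7 years" (what `DATEADD(YEAR, -7, ...)` does) moves only the year component:

```python
from datetime import datetime, timedelta

today = datetime(2013, 12, 30, 10, 52)  # the timestamp from the question

# What getdate() - year(7) effectively does: subtract 1900 days.
day_subtraction = today - timedelta(days=1900)
print(day_subtraction.date())  # 2008-10-17, matching the puzzling output

# What DATEADD(YEAR, -7, GETDATE()) does: change only the year.
seven_years_ago = today.replace(year=today.year - 7)
print(seven_years_ago.date())  # 2006-12-30
```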
SQL Select - Current Date - 7 Year
[ "", "sql", "" ]
I'm a bit confused about permissions in SQL. I have created a medical database. As a script, I'm creating a doctor role and I want to say they can perform updates on tables 1, 2 and 3. The database actually contains 5 tables. Does that mean they won't be able to update 4 and 5? Or will I have to explicitly deny update on tables 4 and 5 for the doctor role?
Unless the role was made dbo, db_owner or db_datawriter, it won't have permission to edit any data. If you want to grant full edit permissions on a single table, do this: ``` GRANT ALL ON table1 TO doctor ``` Users in that role will have no permissions whatsoever on other tables (not even read).
SQL-Server follows the principle of "Least Privilege" -- you must (explicitly) grant permissions. 'does it mean that they wont be able to update 4 and 5 ?' If your users in the doctor role are *only* in the doctor role, then yes. However, if those users are **also in other roles** (namely, other roles that do have access to 4 & 5), then no. More Information: <http://msdn.microsoft.com/en-us/library/bb669084%28v=vs.110%29.aspx>
SQL permissions for roles
[ "", "sql", "sql-server", "" ]
I have the below table, pretty simple. ``` ========================================================================== attendanceID | agentID | incurredDate | points | comment ========================================================================== 10 | vimunson | 2013-07-22 | 2 | Some Text 11 | vimunson | 2013-07-29 | 2 | Some Text 12 | vimunson | 2013-12-06 | 1 | Some Text ``` Here is the query: ``` SELECT attendanceID, agentID, incurredDate, leadDate, points, @1F:=IF(incurredDate <= curdate() - 90 AND leadDate = NULL, points - 1, IF(DATEDIFF(leadDate, incurredDate) > 90, points - 1, points)) AS '1stFallOff', @2F:=IF(incurredDate <= curdate() - 180 AND leadDate = NULL, points - 2, IF(DATEDIFF(leadDate, incurredDate) > 180, points - 2, @1F)) AS '2ndFallOff', IF(@total < 0, 0, @total:=@total + @2F) AS Total, comment, linked FROM (SELECT attendanceID, mo.agentID, @r AS leadDate, (@r:=incurredDate) AS incurredDate, comment, points, linked FROM (SELECT m . * FROM (SELECT @_date = NULL, @total:=0) varaible, attendance m ORDER by agentID , incurredDate desc) mo where agentID = 'vimunson' AND (case WHEN @_date is NULL or @_date <> incurredDate THEN @r:=NULL ELSE NULL END IS NULL) AND (@_date:=incurredDate) IS NOT NULL) T ORDER BY agentID , incurredDate ``` When I run the query it returns the below: ``` ======================================================================================================================================== attendanceID | agentID | incurredDate | leadDate | points | 1stFallOff | 2ndFallOff | Total | comment ======================================================================================================================================== 10 | vimunson | 2013-07-22 | NULL | 2 | 2 | 2 | 2 | Some Text 11 | vimunson | 2013-07-29 | NULL | 2 | 2 | 2 | 4 | Some Text 12 | vimunson | 2013-12-06 | NULL | 1 | 2 | 1 | 5 | Some Text ``` I cannot figure out why the `leadDate` column is `NULL`. I have narrowed it down to a user session.
For example, if I run it again within the same user session, it will come back with what I want.
After reviewing multiple results I was able to come up with something that is what I expect. I have entered more data and the below syntax. I liked the idea of the cursor but it was not ideal for my use, so I did not use it. I did not want to use `CASE` or any `JOINS` since they can be complex. <http://sqlfiddle.com/#!2/2fb86/1> ``` SELECT attendanceID, agentID, incurredDate, @ld:=(select incurredDate from attendance where incurredDate > a.incurredDate and agentID = a.agentID order by incurredDate limit 1) leadDate, points, @1F:=IF(incurredDate <= DATE_SUB(curdate(), INTERVAL IF(incurredDate < '2013-12-02', 90, 60) DAY) AND @ld <=> NULL, points - 1, IF(DATEDIFF(COALESCE(@ld, '1900-01-01'), incurredDate) > IF(incurredDate < '2013-12-02', 90, 60), points - 1, points)) AS '1stFallOff', @2F:=IF(incurredDate <= DATE_SUB(curdate(), INTERVAL IF(incurredDate < '2013-12-02', 180, 120) DAY) AND getLeadDate(incurredDate, agentID) <=> NULL, points - 1, IF(DATEDIFF(COALESCE(@ld, '1900-01-01'), incurredDate) > IF(incurredDate < '2013-12-02', 180, 120), points - 2, @1F)) AS '2ndFallOff', IF((@total + @2F) < 0, 0, IF(DATE_ADD(incurredDate, INTERVAL 365 DAY) <= CURDATE(), @total:=0, @total:=@total + @2F)) AS Total, comment, linked, DATE_ADD(incurredDate, INTERVAL 365 DAY) AS 'fallOffDate' FROM (SELECT @total:=0) v, attendance a WHERE agentID = 'vimunson' GROUP BY agentID , incurredDate ```
The way variables `@r` and `@_date` are passed around relies on a specific order in which certain parts of the query are evaluated. That's a risky assumption to make in a query language that is declarative rather than imperative. The more sophisticated a query optimizer is, the more unpredictable the behaviour of this query will be. A 'simple' engine might follow your intentions, another engine might adapt its behaviour as you go, for example because it uses temporary indexes to improve query performance. In situations where you need to pass values from one row to another, it would be better to use a cursor. <http://dev.mysql.com/doc/refman/5.0/en/cursors.html> **EDIT:** sample code below. I focused on column 'leadDate'; implementation of the falloff and total columns should be similar. ``` CREATE PROCEDURE MyProc() BEGIN DECLARE done int DEFAULT FALSE; DECLARE currentAttendanceID int; DECLARE currentAgentID, previousAgentID varchar(8); DECLARE currentIncurredDate date; DECLARE currentLeadDate date; DECLARE currentPoints int; DECLARE currentComment varchar(9); DECLARE myCursor CURSOR FOR SELECT attendanceID, agentID, incurredDate, points, comment FROM attendance ORDER BY agentID, incurredDate DESC; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE; CREATE TEMPORARY TABLE myTemp ( attendanceID int, agentID varchar(8), incurredDate date, leadDate date, points int, comment varchar(9) ) ENGINE=MEMORY; OPEN myCursor; SET previousAgentID := NULL; read_loop: LOOP FETCH myCursor INTO currentAttendanceID, currentAgentID, currentIncurredDate, currentPoints, currentComment; IF done THEN LEAVE read_loop; END IF; IF previousAgentID IS NULL OR previousAgentID <> currentAgentID THEN SET currentLeadDate := NULL; SET previousAgentID := currentAgentID; END IF; INSERT INTO myTemp VALUES (currentAttendanceID, currentAgentID, currentIncurredDate, currentLeadDate, currentPoints, currentComment); SET currentLeadDate := currentIncurredDate; END LOOP; CLOSE myCursor; SELECT * FROM 
myTemp ORDER BY agentID, incurredDate; DROP TABLE myTemp; END ``` FYC: <http://sqlfiddle.com/#!2/910a3/1/0>
Null Values during Query
[ "", "mysql", "sql", "select", "stored-procedures", "join", "" ]
I have searched and gone through the available topics similar to mine, but failed to find one that satisfies my requirements. Hence, posting it here. I have four tables as follows: ``` "Organization" table: -------------------------------- | org_id | org_name | | 1 | A | | 2 | B | | 3 | C | "Members" table: ---------------------------------------------- | mem_id | mem_name | org_id | | 1 | mem1 | 1 | | 2 | mem2 | 1 | | 3 | mem3 | 2 | | 4 | mem4 | 3 | "Resource" table: -------------------------------- | res_id | res_name | res_prop | | 1 | resource1 | prop-1 | | 2 | resource2 | prop-2 | | 3 | resource3 | prop-3 | | 4 | resource4 | prop-4 | | 5 | resource1 | prop-5 | | 6 | resource2 | prop-6 | ``` A UNIQUE INDEX constraint on (res_name, res_prop) is applied in the above table. ``` "member-resource" table: -------------------------------------------- | sl_no | mem_id | res_id | | 1 | 1 | 1 | | 2 | 1 | 2 | | 3 | 2 | 1 | | 4 | 4 | 3 | | 5 | 3 | 4 | | 6 | 2 | 3 | | 7 | 4 | 3 | | 8 | 1 | 5 | | 9 | 1 | 6 | ``` I want to find the distinct `res_name` values from the `Resource` table that have more than one `res_prop` for a specific organization. For example, the expected output for organization `A` would be as follows: ``` | res_name | res_prop_count | | resource1 | 2 | | resource2 | 2 | ``` Any help in this regard will be highly appreciated. Regards.
When doing things like this you should build your query up logically, so to start get all your resources and props from organisation A ``` SELECT r.res_name, r.res_prop FROM Resource r INNER JOIN `member-resource` mr ON mr.res_id = r.res_id INNER JOIN Members m ON m.mem_id = mr.mem_id INNER JOIN Organization o ON o.org_id = m.org_id WHERE o.org_name = 'A' ``` Then you can start thinking about how you want to filter it, so you want to find resource names that have more than one different res\_prop, so you need to group by res\_name, and apply a HAVING clause to limit it to res\_names with more than one distinct res\_prop: ``` SELECT r.res_name, COUNT(DISTINCT r.res_prop) AS res_prop_count FROM Resource r INNER JOIN `member-resource` mr ON mr.res_id = r.res_id INNER JOIN Members m ON m.mem_id = mr.mem_id INNER JOIN Organization o ON o.org_id = m.org_id WHERE o.org_name = 'A' GROUP BY r.res_name HAVING COUNT(DISTINCT r.res_prop) > 1; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!2/3f1a6/1)**
I'm not entirely sure I understand what you are looking for, but I think this should work: ``` SELECT Resource.res_name, COUNT(DISTINCT Resource.res_prop) AS res_prop_count FROM Resource INNER JOIN member_resource USING (res_id) INNER JOIN Members USING (mem_id) INNER JOIN Organization USING (org_id) WHERE Organization.org_name = 'A' GROUP BY res_name HAVING res_prop_count > 1; ```
Complex MySQL query for specific problem
[ "", "mysql", "sql", "select", "join", "group-by", "" ]
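As a hedged, end-to-end sketch of the accepted query, the sample tables from the question can be loaded into an in-memory sqlite database. Table `member-resource` is renamed `member_resource` here, since the hyphen would otherwise need quoting; everything else follows the question's data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Organization (org_id INTEGER, org_name TEXT);
CREATE TABLE Members (mem_id INTEGER, mem_name TEXT, org_id INTEGER);
CREATE TABLE Resource (res_id INTEGER, res_name TEXT, res_prop TEXT);
CREATE TABLE member_resource (sl_no INTEGER, mem_id INTEGER, res_id INTEGER);

INSERT INTO Organization VALUES (1,'A'),(2,'B'),(3,'C');
INSERT INTO Members VALUES (1,'mem1',1),(2,'mem2',1),(3,'mem3',2),(4,'mem4',3);
INSERT INTO Resource VALUES
  (1,'resource1','prop-1'),(2,'resource2','prop-2'),(3,'resource3','prop-3'),
  (4,'resource4','prop-4'),(5,'resource1','prop-5'),(6,'resource2','prop-6');
INSERT INTO member_resource VALUES
  (1,1,1),(2,1,2),(3,2,1),(4,4,3),(5,3,4),(6,2,3),(7,4,3),(8,1,5),(9,1,6);
""")

# Filter first (organization 'A'), then group by resource name and keep
# only names with more than one distinct res_prop.
rows = conn.execute("""
    SELECT r.res_name, COUNT(DISTINCT r.res_prop) AS res_prop_count
    FROM Resource r
    JOIN member_resource mr ON mr.res_id = r.res_id
    JOIN Members m          ON m.mem_id  = mr.mem_id
    JOIN Organization o     ON o.org_id  = m.org_id
    WHERE o.org_name = 'A'
    GROUP BY r.res_name
    HAVING COUNT(DISTINCT r.res_prop) > 1
    ORDER BY r.res_name
""").fetchall()
conn.close()
```

This reproduces the expected output from the question: `resource1` and `resource2`, each with a count of 2.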
I have MySQL code for fetching all my menus from the MENU table (working properly): ``` $sql = "SELECT id, category_id, menu_name, image_id from menu where restaurant_id = '" . (int) $_SESSION['uid'] . "' && status='1'"; ``` But I need data from a second table as well: from the ICON table, where the ID value is the same as the IMAGE_ID value in my first table, so I can access its file_url column. So I need to join these tables. Something like this: ``` $sql = "SELECT m.id, m.category_id, m.menu_name, m.image_id FROM menu m INNER JOIN icon i ON i.id = m.image_id WHERE m.restaurant_id = '" . (int) $_SESSION['uid'] . "' AND m.status='1'"; ``` It all works, but I still can't access the value from the second table...
try this ``` SELECT m.id, m.category_id, m.menu_name, m.image_id , i.file_url FROM menu m INNER JOIN icon i ON i.id = m.image_id WHERE m.restaurant_id = '" . (int) $_SESSION['uid'] . "' AND m.status='1' ```
You need to use a JOIN statement for that. <http://www.sitepoint.com/understanding-sql-joins-mysql-database/>
Load parameter from second mysql table when ID equals
[ "", "mysql", "sql", "" ]
I have a table of links. If the link was generated and then the next day it is still in the table, then I want to delete it. If there are multiple rows I want to remove each row. My query is: ``` DELETE FROM resetpassword WHERE expiry_timestamp < DATEADD(day, -1, GETDATE()) ``` but this gives me an error: ``` com.web.command.exceptions.DatabaseException: "DAY" is not valid in the context where it is used. ``` How do I remove all rows that are a day old? **EDIT** `expiry_timestamp` is a `timestamp` so I think the query should be something like the below, but I still can't get it to work. ``` select * from resetpassword where timestamp(expiry_timestamp) = timestamp(current date) - 1 days ```
Note: this is a DB2 specific answer. Okay, this post here: <http://www.dbforums.com/db2/1637371-help-there-dateadd-function-db2.html> (and this post agrees): <http://www.ibm.com/developerworks/data/library/techarticle/0211yip/0211yip3.html> says you can do: ``` DELETE FROM resetpassword WHERE expiry_timestamp < (current date - 1 DAYS) ```
If expiry\_timestamp is defined as a timestamp, you should ``` DELETE FROM resetpassword WHERE expiry_timestamp < CURRENT TIMESTAMP - 1 day ``` It is generally better to avoid type conversion (such as timestamp to date) unless there is some need to do so.
Delete records that are older than a day
[ "", "sql", "db2", "" ]
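A runnable analogue of the accepted DB2 answer, sketched with Python's sqlite3: `datetime('now', '-1 day')` stands in for DB2's `current date - 1 DAYS`, and the token values are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resetpassword (token TEXT, expiry_timestamp TEXT)")
conn.executemany(
    "INSERT INTO resetpassword VALUES (?, datetime('now', ?))",
    [("fresh", "-1 hours"), ("stale", "-2 days"), ("older", "-10 days")],
)

# Same shape as the answer: anything older than one day goes.
conn.execute(
    "DELETE FROM resetpassword "
    "WHERE expiry_timestamp < datetime('now', '-1 day')"
)
remaining = [r[0] for r in conn.execute("SELECT token FROM resetpassword")]
conn.close()
```

Only the row written within the last day survives the delete.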
I have a temp table declared ``` declare @tmptable( value nvarchar(500) not null ); ``` I use a function to insert values into that temp table. I am trying to figure out how to update a table using the values of @tmptable ``` insert into t1 ( active ,SchoolId ,inserted ) select 1 ,temp.value ,@insertedDate select temp.value from @tmptable; ``` When I try to insert into table t1 it doesn't work. I guess the two SELECT statements are causing the problem. Please let me know how to fix it. Thanks
``` INSERT INTO t1 ( ACTIVE ,SchoolId ,INSERTED ) SELECT 1 ,temp.value ,@insertedDate FROM @tmptable temp; ```
Try this one - ``` INSERT INTO dbo.t1 ( Active , SchoolId , Inserted ) SELECT 1 , t.value , @insertedDate FROM @tmptable t; ```
Inserting temp table values into a table.
[ "", "sql", "sql-server", "t-sql", "" ]
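The single `INSERT ... SELECT` shape from the answers can be demonstrated with sqlite3. Since sqlite has no table variables, a TEMP table stands in for `@tmptable`, and the school values and date are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TEMP TABLE tmptable (value TEXT NOT NULL);
CREATE TABLE t1 (active INTEGER, SchoolId TEXT, inserted TEXT);
INSERT INTO tmptable VALUES ('school-1'), ('school-2');
""")

inserted_date = "2014-01-02"
# One SELECT feeding one INSERT: a constant, a column from the temp
# table, and a bound parameter per row.
conn.execute(
    """
    INSERT INTO t1 (active, SchoolId, inserted)
    SELECT 1, t.value, ?
    FROM tmptable t
    """,
    (inserted_date,),
)
rows = conn.execute(
    "SELECT active, SchoolId, inserted FROM t1 ORDER BY SchoolId"
).fetchall()
conn.close()
```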
Consider the following subquery. This query retrieves all the personalIds of employees who have more than one 'IDKleerkastPersoon' in their possession. For this query I'm using a **COUNT**() statement in my **HAVING** clause. This subquery returns all the PersonalIds for these employees. ``` (SELECT DISTINCT Persoon_1.Stamnr FROM dbo.KleerkastPerPersoon AS KleerkastPerPersoon_1 INNER JOIN dbo.Persoon AS Persoon_1 ON KleerkastPerPersoon_1.ID_Persoon = Persoon_1.ID_Persoon GROUP BY Persoon_1.Stamnr, Persoon_1.ID_Afdeling, KleerkastPerPersoon.IDKleerkastPersoon, Persoon.Naam HAVING (Persoon_1.ID_Afdeling = 2) AND (COUNT(KleerkastPerPersoon.IDKleerkastPersoon) >= 2) ORDER BY Persoon_1.Stamnr DESC) ``` Now the user requested some more information than solely the PersonalId of the employees. So I wrote a query above it (see below) which retrieves more global info about the employee. As expected, SQL Server rejects this method. ``` SELECT dbo.Persoon.Stamnr, dbo.Persoon.Naam, dbo.Persoon.Voornaam, dbo.Refter.RefterOmschrijving, dbo.Kleedkamer.KleedkamerOmschrijving, dbo.Kleerkast.KleerkastOmschrijving FROM dbo.KleerkastPerPersoon INNER JOIN dbo.Persoon ON dbo.KleerkastPerPersoon.ID_Persoon = dbo.Persoon.ID_Persoon INNER JOIN dbo.Kleerkast INNER JOIN dbo.Kleedkamer ON dbo.Kleerkast.ID_Kleedkamer = dbo.Kleedkamer.ID_Kleedkamer INNER JOIN dbo.Refter ON dbo.Kleedkamer.ID_Refter = dbo.Refter.ID_Refter ON dbo.KleerkastPerPersoon.ID_Kleerkast = dbo.Kleerkast.ID_Kleerkast WHERE (dbo.Persoon.Stamnr IN <<<Result of my first subquery>>> ) ``` The error message being thrown is: ``` An aggregate may not appear in the where clause unless it is in a subquery contained in a having clause or a select list, and the column being aggregated is an outer reference ``` So I moved my subquery to the HAVING clause: ``` <<<Query 2>>> HAVING (dbo.Persoon.Stamnr IN (SELECT TOP (100) PERCENT Persoon_1.Stamnr FROM dbo.KleerkastPerPersoon AS KleerkastPerPersoon_1 INNER JOIN dbo.Persoon AS Persoon_1 ON KleerkastPerPersoon_1.ID_Persoon = Persoon_1.ID_Persoon GROUP BY Persoon_1.Stamnr, Persoon_1.ID_Afdeling HAVING (Persoon_1.ID_Afdeling = 2) AND (COUNT(KleerkastPerPersoon.IDKleerkastPersoon) >= 2) ORDER BY Persoon_1.Stamnr DESC)) ``` But now the query doesn't return any results, while I should be seeing 98 different records. Do any of you have a solution to my problem or my workaround? How can I remove the need for a subquery, for instance?
Store the result of the first subquery in a table variable and then use it in your final query. Also, there is no need for the ORDER BY clause in the subquery unless you are using TOP.
Well, if I get what you are after, something like ``` SELECT dbo.Persoon.Stamnr, dbo.Persoon.Naam, dbo.Persoon.Voornaam, dbo.Refter.RefterOmschrijving, dbo.Kleedkamer.KleedkamerOmschrijving, dbo.Kleerkast.KleerkastOmschrijving FROM dbo.KleerkastPerPersoon inner join (your original query) SomeSuitableAlias On SomeSuitableAlias.Stamnr = dbo.KleerkastPerPersoon.Stamnr ``` should do the job.
SQL subQuery with COUNT() in WHERE/HAVING Statement
[ "", "sql", "sql-server", "count", "" ]
I have a column with different values: ``` 1 2 4 8 ``` I want to create a new column in the same table based on these values. For example, if the value is `1 or 4`, I want to display the corresponding line as `OK`, and `NOT OK` for the rest. Like: ``` 1 OK 2 NOT OK 4 OK 8 NOT OK ``` I think we should use an `if` condition, but how do I build it from the column values in the table?
Better use **[Case](http://dev.mysql.com/doc/refman/5.0/en/case.html)** statement. ``` SELECT Case when id in (1,4) Then 'Ok' Else 'Not Ok' End as Myfield FROM MyTable ``` **[Fiddle](http://sqlfiddle.com/#!2/4c37b/3)**
**Since you say "I want to display", I'll give you some IF snippets.** This is how it works. Simply, for example: ``` SELECT IF(field= '1', 'OK', 'NOT OK') as Myfield FROM MyTable ``` or ``` SELECT IF(field= '1', 'OK', IF(field= '4', 'OK', 'NOT OK')) as Myfield FROM MyTable ``` So here the result will be `OK` if the field is 1 or 4, and `NOT OK` otherwise. **[Here](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html) is the MySQL documentation about IF conditions**
Create SQL column relative to another
[ "", "mysql", "sql", "heidisql", "" ]
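The accepted CASE expression can be checked directly against the sample values from the question using sqlite3 (the column is named `id` as in the accepted answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (id INTEGER)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [(1,), (2,), (4,), (8,)])

# CASE WHEN ... IN (...) maps each row's value to a label.
rows = conn.execute("""
    SELECT id,
           CASE WHEN id IN (1, 4) THEN 'OK' ELSE 'NOT OK' END AS status
    FROM MyTable
    ORDER BY id
""").fetchall()
conn.close()
```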
I am writing a SQL query to select rows where a field with space-separated numbers contains a single number, in this example the number 1. Example fields: * `"1 2 3 4 5 11 12 21"` - match, contains number one * `"2 3 4 6 11 101"` - no match, does not contain number one The best query so far is: ``` $sql = "SELECT * from " . $table . " WHERE brands REGEXP '[/^1$/]' ORDER BY name ASC;"; ``` The problem is that this `REGEXP` also counts 11 as a match. I read many suggestions in other posts, for instance `[\d]{1}`, but the result is always the same. Is it possible to accomplish what I want, and how?
Try: ``` WHERE brands REGEXP '[[:<:]]1[[:>:]]' ``` `[[:<:]]` and `[[:>:]]` match word boundaries before and after a word.
You don't need regex: You can use `LIKE` if you add a space to the front and back of the column: ``` SELECT * from $table WHERE CONCAT(' ', brands, ' ') LIKE '% 1 %' ORDER BY name ```
finding a number in space separated list with REGEXP
[ "", "mysql", "sql", "regex", "" ]
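Stock sqlite ships without a REGEXP implementation, so this sketch demonstrates the LIKE-with-padding variant from the second answer instead, run against the two sample strings from the question (sqlite uses `||` where MySQL would use CONCAT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, brands TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [
    ("match",    "1 2 3 4 5 11 12 21"),
    ("no_match", "2 3 4 6 11 101"),
])

# Pad both the column and the search term with spaces so '1' cannot
# match inside '11' or '101'.
rows = conn.execute("""
    SELECT name FROM products
    WHERE ' ' || brands || ' ' LIKE '% 1 %'
    ORDER BY name
""").fetchall()
conn.close()
```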
My problem is in the first subquery select. I am being told that I am returning multiple rows. ``` $sql = "SELECT messages.message_id , messages.sent_timestamp , messages.content , messages.subject , users.user_name , (SELECT thread_participants.user_id FROM thread_participants WHERE thread_participants.user_id !=".$user_id.") as thread_participants , (SELECT message_read_state.readDate FROM message_read_state WHERE message_read_state.message_id = messages.message_id and message_read_state.user_id =". $user_id.") as ReadState FROM (messages INNER JOIN users ON messages.sender_user_id = users.user_id INNER JOIN thread_participants tp ON tp.thread_id = messages.thread_id) WHERE (((messages.thread_id)=".$thread_id.")) ORDER BY messages.sent_timestamp DESC"; ```
One way I found of doing it: ``` $sql = "SELECT messages.message_id , messages.sent_timestamp , messages.content , messages.subject , users.user_name , tp.user_id as thread_participants , (SELECT users.user_name FROM users WHERE users.user_id = thread_participants && users.user_id != messages.sender_user_id) as member_names , (SELECT message_read_state.readDate FROM message_read_state WHERE message_read_state.message_id = messages.message_id and message_read_state.user_id =". $user_id.") as ReadState FROM (messages INNER JOIN users ON messages.sender_user_id = users.user_id INNER JOIN thread_participants tp ON tp.thread_id = messages.thread_id) WHERE (((messages.thread_id)=".$thread_id.")) ORDER BY messages.sent_timestamp DESC"; ```
You can use `GROUP_CONCAT()` function to get all **userIds** in single field Try this: ``` $SQL = "SELECT messages.message_id , messages.sent_timestamp , messages.content , messages.subject , users.user_name , (SELECT GROUP_CONCAT(thread_participants.user_id) FROM thread_participants WHERE thread_participants.user_id !=".$user_id.") as thread_participants , (SELECT message_read_state.readDate FROM message_read_state WHERE message_read_state.message_id = messages.message_id and message_read_state.user_id =". $user_id.") as ReadState FROM (messages INNER JOIN users ON messages.sender_user_id = users.user_id INNER JOIN thread_participants tp ON tp.thread_id = messages.thread_id) WHERE (((messages.thread_id)=".$thread_id.")) ORDER BY messages.sent_timestamp DESC"; ```
mysql subquery returns multiple rows
[ "", "mysql", "sql", "select", "subquery", "group-concat", "" ]
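The GROUP_CONCAT fix can be illustrated in isolation with sqlite3, which shares MySQL's `GROUP_CONCAT` function; the thread and user ids here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE thread_participants (thread_id INTEGER, user_id INTEGER)"
)
conn.executemany("INSERT INTO thread_participants VALUES (?, ?)",
                 [(7, 2), (7, 3), (7, 5)])

current_user = 5
# Collapse the would-be multiple rows into one comma-separated value,
# which is what makes the subquery legal in a scalar position.
participants = conn.execute(
    "SELECT GROUP_CONCAT(user_id) FROM thread_participants "
    "WHERE thread_id = 7 AND user_id != ?",
    (current_user,),
).fetchone()[0]
conn.close()
```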
I have an ID column which was supposed to be set to auto-increment, but I forgot to set it when creating the database. Let's say the ID is from 1 - 20. I used the select Max() SQL statement to get the largest ID: ``` SELECT MAX(id) FROM Table_Name; ``` It is supposed to return 20. However, it returns 9. I also realized that the id column in the database is jumbled up. It starts from 1,2 then skips to 9,10 - 20 then back to 3 - 8. And 8 appears to be the last row, and I think that's where the 9 comes from. My id column in the database is of varchar() data type. So, is there any way to amend my SQL statement to get the largest id in a list of sorted ids? Thanks in advance.
Try this: ``` SELECT TOP 1 Id FROM Table ORDER BY Convert(INT, id) DESC ```
The issue is likely that the ID column is a varchar field, so 9 is greater than 10. ``` select max(convert(int, id)) from Table ```
SQL statement to get largest ID from database column
[ "", "sql", "database", "" ]
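The root cause (string comparison on a varchar id) and the CAST fix can both be reproduced with sqlite3, using ids 1 through 20 as in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT)")  # id stored as text, as in the question
conn.executemany("INSERT INTO t VALUES (?)", [(str(n),) for n in range(1, 21)])

# Lexicographic MAX on text: '9' sorts after '20', reproducing the bug.
string_max = conn.execute("SELECT MAX(id) FROM t").fetchone()[0]

# Casting to a number before aggregating gives the intended answer.
numeric_max = conn.execute(
    "SELECT MAX(CAST(id AS INTEGER)) FROM t"
).fetchone()[0]
conn.close()
```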
What is the proper way to query all data for the current date? Is there a function in MySQL that will give me the current date at 12:01 am and the current date at 11:59 pm? ``` select * from tb_data where date between currentdate_starts and currentdate_ends ```
Try using CURDATE(): ``` SELECT field FROM table WHERE DATE(column) = CURDATE() ``` For your table: ``` select * from tb_data where DATE(date) = CURDATE() ``` Documentation: [CURDATE](http://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_curdate)
Without using `DATE(column) = CURDATE()` ``` SELECT * FROM tb_data WHERE date between concat(curdate(),' ','00:00:00') AND concat(curdate(),' ','23:59:59') ``` [more info](https://stackoverflow.com/questions/14104304/mysql-select-where-datetime-matches-day-and-not-necessarily-time)
MySQL query value where current date
[ "", "mysql", "sql", "date", "datetime", "select", "" ]
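A runnable analogue of the `DATE(column) = CURDATE()` approach, sketched with sqlite3: `date('now')` plays the role of CURDATE(), and the rows are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_data (label TEXT, ts TEXT)")
conn.executemany("INSERT INTO tb_data VALUES (?, datetime('now', ?))", [
    ("today_morning", "start of day"),
    ("today_now",     "+0 seconds"),
    ("two_days_ago",  "-2 days"),
])

# date(ts) truncates the datetime to its day, the analogue of
# MySQL's DATE(column) = CURDATE().
rows = [r[0] for r in conn.execute(
    "SELECT label FROM tb_data WHERE date(ts) = date('now') ORDER BY label"
)]
conn.close()
```

Both of today's timestamps match regardless of their time of day; the two-day-old row does not.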
I have 3 tables: icon, product and menu. ``` Product- id | menu_id ---------- 1 | 1 2 | 1 3 | 2 4 | 2 5 | 3 Menu- id | image_id ---------- 1 | 10 2 | 11 3 | 12 Icon- id | file_url ---------- 10 | www.example/.... 11 | www.example/.... 12 | www.example/.... ``` So when I try to list each menu with its assigned icon, I use: ``` $sql = "SELECT m.id, m.category_id, m.menu_name, m.image_id, i.file_url FROM menu m INNER JOIN icon i ON i.id = m.image_id WHERE m.restaurant_id = '" . (int) $_SESSION['uid'] . "' AND m.status='1'"; ``` But I don't know how I can access file_url for my products. I want to list each product with the icon of the menu the product is assigned to. How can I do that? Thanks for any help.
You can add another join, i.e. something like that: ``` $sql = "SELECT p.id, m.menu_name, m.image_id, i.file_url FROM product p INNER JOIN menu m ON m.id = p.menu_id INNER JOIN icon i ON i.id = m.image_id WHERE ... "; ```
You can join more than 2 tables, the syntax is the exact same, so what you'd want is: ``` SELECT <stuff, including i.file_url> FROM menu m JOIN product p on m.id = p.menu_id JOIN icon i on i.id = m.image_id ```
Join 3 tables in mysql
[ "", "mysql", "sql", "" ]
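The three-table join from the answers can be run against the exact sample data from the question with sqlite3 (the file_url values are shortened placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE menu    (id INTEGER, image_id INTEGER);
CREATE TABLE product (id INTEGER, menu_id INTEGER);
CREATE TABLE icon    (id INTEGER, file_url TEXT);
INSERT INTO menu    VALUES (1,10),(2,11),(3,12);
INSERT INTO product VALUES (1,1),(2,1),(3,2),(4,2),(5,3);
INSERT INTO icon    VALUES (10,'url-a'),(11,'url-b'),(12,'url-c');
""")

# product -> menu -> icon: each product picks up its menu's icon URL.
rows = conn.execute("""
    SELECT p.id, i.file_url
    FROM product p
    JOIN menu m ON m.id = p.menu_id
    JOIN icon i ON i.id = m.image_id
    ORDER BY p.id
""").fetchall()
conn.close()
```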
I have a list of values, i.e. ``` in ('1xxx','12xx','21xx','98xx','00xx') ``` that I want to use for an insert script. How can I write a for loop in SQL Server using each value within the loop? I'm thinking something like: ``` For value in ('1xxx','12xx','21xx','98xx','00xx') select value endloop; ``` **I'm trying to simplify this** ``` INSERT INTO [dbo].[TimeCard] VALUES ('test'+Right(NewId(),12),'6121126800','5102289289',CONVERT(DATE,'01-01-2013'),CONVERT(DATE,'01-01-2013'),20,CURRENT_TIMESTAMP,NULL ) GO INSERT INTO [dbo].[TimeCard] VALUES ('test'+Right(NewId(),12),'6121126800','5102289289',CONVERT(DATE,'01-08-2013'),CONVERT(DATE,'01-08-2013'),20,CURRENT_TIMESTAMP,NULL) GO INSERT INTO [dbo].[TimeCard] VALUES ('test'+Right(NewId(),12),'6121126800','5102289289',CONVERT(DATE,'01-15-2013'),CONVERT(DATE,'01-15-2013'),20,CURRENT_TIMESTAMP,NULL ) GO .... ``` I have to insert these records for several testing scenarios.
You don't need expensive loops, cursors or functions to build a set from these values you've been handed manually. ``` DECLARE @start DATE = '20130101', @now DATETIME2(7) = CURRENT_TIMESTAMP; ;WITH months AS ( -- we need 12 months SELECT TOP (12) m = number FROM master.dbo.spt_values WHERE type = 'P' ORDER BY number ), -- we need a week in each month, starting at the 1st weeks AS (SELECT w FROM (VALUES(0),(1),(2),(3)) AS w(w)), dates AS ( -- this produces a date for the first 4 weeks of each -- month from the start date SELECT d = DATEADD(WEEK,w.w,DATEADD(MONTH,m.m,@start)) FROM months AS m CROSS JOIN weeks AS w ), vals AS ( -- and here are the values you were given SELECT v FROM (VALUES('1xxx'),('12xx'),('21xx'),('98xx'),('00xx')) AS v(v) ) -- INSERT dbo.TimeCard(column list here please) SELECT 'Test' + RIGHT(NEWID(),12), '6121126800', vals.v, dates.d, dates.d, 20, @now, NULL FROM dates CROSS JOIN vals ORDER BY vals.v,dates.d; ``` This should return 240 rows (12 months \* 4 weeks \* 5 values as supplied in your question). When you've manipulated the output to be what you expect, uncomment the INSERT (but please get in the habit of putting a column list there).
If you have a comma-delimited string, use one of these 4 functions that return a table (<http://blogs.msdn.com/b/amitjet/archive/2009/12/11/sql-server-comma-separated-string-to-table.aspx>). Insert the returned data into a temp table with an identity column (1,1). After that, loop through the table with a cursor or using the previously created identity column. <http://technet.microsoft.com/en-us/library/ms178642.aspx>
ForEach Loop in SQL Server
[ "", "sql", "foreach", "sql-server-2012", "iteration", "" ]
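The set-based idea behind the accepted answer (12 months x 4 weeks x 5 values = 240 generated rows, no loop) can be sketched in sqlite3, with recursive CTEs standing in for `master.dbo.spt_values`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# months and weeks are generated with recursive CTEs; vals holds the
# five literals from the question; CROSS JOIN multiplies them out.
rows = conn.execute("""
    WITH RECURSIVE
      months(m) AS (SELECT 0 UNION ALL SELECT m + 1 FROM months WHERE m < 11),
      weeks(w)  AS (SELECT 0 UNION ALL SELECT w + 1 FROM weeks  WHERE w < 3),
      vals(v)   AS (VALUES ('1xxx'),('12xx'),('21xx'),('98xx'),('00xx'))
    SELECT date('2013-01-01', '+' || m || ' months', '+' || (7 * w) || ' days') AS d,
           v
    FROM months CROSS JOIN weeks CROSS JOIN vals
    ORDER BY v, d
""").fetchall()
conn.close()
row_count = len(rows)
```

Each (date, value) pair could then feed a single parameterized INSERT instead of hundreds of hand-written statements.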
The requirement is to exclude ItemA and ItemE from the search as they contain value 1 in columnB. ![enter image description here](https://i.stack.imgur.com/q3Ooa.jpg) The final result should be something like this ![enter image description here](https://i.stack.imgur.com/79qRF.jpg) I have tried `SELECT * FROM my_table WHERE ColumnB not like '1'`, but this just removes the rows having 1. I want the whole ItemA/ItemE removed, as they contain value 1.
Try: ``` SELECT * FROM tablee WHERE ColumnA NOT IN ( SELECT columnA FROM tablee WHERE columnB = 1 ); ``` or ``` SELECT * FROM tablee t1 WHERE NOT EXISTS( SELECT 1 FROM tablee t2 WHERE t2.columnB = 1 and t1.columnA = t2.columnA ); ```
First we make a sub-query that looks for rows with ColumnB = '1'. Then we left join that sub query to our main table on matching ColumnA names. Items that have a different row with a '1' in ColumnB will now have a value there, other rows will just have NULL because of the left join. We then use a where to only find the ones that didn't have a match. ``` SELECT main.* from my_table AS main LEFT JOIN ( SELECT * FROM my_table WHERE ColumnB = '1' ) AS remove ON remove.ColumnA = main.ColumnA WHERE remove.ColumnB != '1' ``` To learn more about sql go through the primer at [w3schools](http://www.w3schools.com/sql/)
sql select query excluding certain values
[ "", "sql", "oracle", "" ]
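The NOT IN form of the accepted answer can be checked with sqlite3 on a cut-down version of the screenshot data (the rows here are invented to match the described shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (ColumnA TEXT, ColumnB INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?)", [
    ("ItemA", 1), ("ItemA", 2),
    ("ItemB", 2), ("ItemB", 3),
    ("ItemE", 1),
])

# Exclude every item that has ANY row with ColumnB = 1, not just the
# rows themselves.
rows = conn.execute("""
    SELECT DISTINCT ColumnA FROM my_table
    WHERE ColumnA NOT IN (SELECT ColumnA FROM my_table WHERE ColumnB = 1)
    ORDER BY ColumnA
""").fetchall()
conn.close()
```

ItemA and ItemE disappear entirely, even though some of their rows have other values.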
I'm not that great with SQL Server, but I'm trying to do some behind the scenes work to create some functionality that our EMR system lacks - copying forms (and all their data) between patients. In SQL Server 2008 R2 I have three tables that deal with these forms that have been filled out: ``` **Table 1** encounter_id patient_id date time etc etc etc etc 1234 112233 2014-01-02 14:25:01:00 a b c d **Table 2** encounter_id page recorded_on recorded_by etc etc 1234 1 2014-01-02 134 asdf asdf 1234 2 2014-01-02 134 jkl; jkl; **Table 3** encounter_id page keyname keyvalue 1234 1 key1 aaa 1234 1 key2 bbb 1234 1 key3 ccc 1234 1 key4 ddd 1234 2 key5 eee 1234 2 key6 fff 1234 2 key7 ggg ``` As you can see, they all match together with the encounter\_id, which is linked to the patient\_id (In the first table). What I'm trying to be able to do is copy all the rows in these three tables for a particular encounter\_id back into the same table they come from, but with a different (system generated) encounter\_id for a patient\_id that I would specify. In essence, copying the form from one patient to another. Any help on this is greatly appreciated.
Made a little fiddle as an example, [here (link)](http://sqlfiddle.com/#!3/9f637/3/0) The solution is perhaps needlessly complex but it offers a good variety of other useful stuff as well, I just wanted to test how to build that dynamically. The script does print out the commands, making it relatively easy to remove the TSQL and just produce the plain-SQL to do as you wish. What it does, is that it requires an encounter\_id, which it will then use to dynamically fetch the columns (with the assumption that encounter\_id is the PK for TABLE\_1) to insert a new record in TABLE\_1, store the inserted.encounter\_id value, and use that value to fetch and copy the matching rows from TABLE\_2 and TABLE\_3. Basically, as long as the structure is correct (TABLE\_1 PK is encounter\_id which is an identity type), you should be able to just change the table names referenced in the script and it should work directly regardless of which types of columns (and how many of them) your particular tables have. The beef of the script is this: ``` /* Script begins here */ DECLARE @ENCOUNTER_ID INT, @NEWID INT, @SQL VARCHAR(MAX), @COLUMNS VARCHAR(MAX) IF OBJECT_ID('tempdb..##NEW_ID') IS NOT NULL DROP TABLE ##NEW_ID CREATE TABLE ##NEW_ID (ID INT) /* !!! SET YOUR DESIRED encounter_id RECORDS TO BE COPIED, HERE !!! 
*/ SET @ENCOUNTER_ID = 1234 IF EXISTS (SELECT TOP 1 1 FROM TABLE_1 WHERE encounter_id = @ENCOUNTER_ID) BEGIN SELECT @COLUMNS = COALESCE(@COLUMNS+', ', 'SELECT ')+name FROM sys.columns WHERE OBJECT_NAME(object_id) = 'TABLE_1' AND name <> 'encounter_id' SET @COLUMNS = 'INSERT INTO TABLE_1 OUTPUT inserted.encounter_id INTO ##NEW_ID '+@COLUMNS+' FROM TABLE_1 WHERE encounter_id = '+CAST(@ENCOUNTER_ID AS VARCHAR(25)) EXEC(@COLUMNS) PRINT(@COLUMNS) SELECT TOP 1 @NEWID = ID, @COLUMNS = NULL FROM ##NEW_ID SELECT @COLUMNS = COALESCE(@COLUMNS+', ', '')+name FROM sys.columns WHERE OBJECT_NAME(object_id) = 'TABLE_2' SET @COLUMNS = 'INSERT INTO TABLE_2 ('+@COLUMNS+') SELECT '+REPLACE(@COLUMNS,'encounter_id',''+CAST(@NEWID AS VARCHAR(25))+'') +' FROM TABLE_2 WHERE encounter_id = '+CAST(@ENCOUNTER_ID AS VARCHAR(25)) EXEC(@COLUMNS) PRINT(@COLUMNS) SET @COLUMNS = NULL SELECT @COLUMNS = COALESCE(@COLUMNS+', ', '')+name FROM sys.columns WHERE OBJECT_NAME(object_id) = 'TABLE_3' SET @COLUMNS = 'INSERT INTO TABLE_3 ('+@COLUMNS+') SELECT '+REPLACE(@COLUMNS,'encounter_id',''+CAST(@NEWID AS VARCHAR(25))+'') +' FROM TABLE_3 WHERE encounter_id = '+CAST(@ENCOUNTER_ID AS VARCHAR(25)) EXEC(@COLUMNS) PRINT(@COLUMNS) IF OBJECT_ID('tempdb..##NEW_ID') IS NOT NULL DROP TABLE ##NEW_ID END ```
I always like creating sample tables in [tempdb] so that the syntax is correct. I created tables [t1], [t2], and [t3]. There are primary and foreign keys. If you have a well developed schema, ERD (entity relationship diagram) <http://en.wikipedia.org/wiki/Entity-relationship_diagram> , these relationships should be in place. ``` -- Playing around use tempdb go -- -- Table 1 -- -- Remove if it exists if object_id('t1') > 0 drop table t1 go -- Create the first table create table t1 ( encounter_id int, patient_id int, the_date date, the_time time, constraint pk_t1 primary key (encounter_id) ); go -- Add one row insert into t1 values (1234, 112233, '2014-01-02', '14:25:01:00'); go -- Show the data select * from t1 go -- -- Table 2 -- -- Remove if it exists if object_id('t2') > 0 drop table t2 go -- Create the second table create table t2 ( encounter_id int, the_page int, recorded_on date, recorded_by int, constraint pk_t2 primary key (encounter_id, the_page) ); go -- Add two rows insert into t2 values (1234, 1, '2014-01-02', 134), (1234, 2, '2014-01-02', 134); go -- Show the data select * from t2 go -- -- Table 3 -- -- Remove if it exists if object_id('t3') > 0 drop table t3 go -- Create the third table create table t3 ( encounter_id int, the_page int, key_name1 varchar(16), key_value1 varchar(16), constraint pk_t3 primary key (encounter_id, the_page, key_name1) ); go -- Add seven rows insert into t3 values (1234, 1, 'key1', 'aaa'), (1234, 1, 'key2', 'bbb'), (1234, 1, 'key3', 'ccc'), (1234, 1, 'key4', 'ddd'), (1234, 2, 'key5', 'eee'), (1234, 2, 'key6', 'fff'), (1234, 2, 'key7', 'ggg'); go -- Show the data select * from t3 go -- -- Foreign Keys -- alter table t2 with check add constraint fk_t2 foreign key (encounter_id) references t1 (encounter_id); alter table t3 with check add constraint fk_t3 foreign key (encounter_id, the_page) references t2 (encounter_id, the_page); ``` Here comes the fun part, a stored procedure to duplicate the data. 
```
--
-- Procedure to duplicate one record
--

-- Remove if it exists
if object_id('usp_Duplicate_Data') > 0
    drop procedure usp_Duplicate_Data
go

-- Create the procedure
create procedure usp_Duplicate_Data
    @OldId int, @NewId int
as
begin

    -- Duplicate table 1's data
    insert into t1
    select @NewId, patient_id, the_date, the_time
    from t1
    where encounter_id = @OldId;

    -- Duplicate table 2's data
    insert into t2
    select @NewId, the_page, recorded_on, recorded_by
    from t2
    where encounter_id = @OldId;

    -- Duplicate table 3's data
    insert into t3
    select @NewId, the_page, key_name1, key_value1
    from t3
    where encounter_id = @OldId;

end
```
Last but not least, we have to call the stored procedure to make sure it works.

```
-- Sample call
exec usp_Duplicate_Data 1234, 7777
```
In summary, I did not add any error checking or account for a range of Ids. I leave these tasks for you to learn. ![enter image description here](https://i.stack.imgur.com/NOlHY.jpg)
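The overall pattern — insert a copy of the parent row under a new id, then copy each child table forwarding that id — can be checked quickly on toy data. A minimal sketch using Python's `sqlite3` as a stand-in engine (schema trimmed to a parent and one child; names follow the sample tables above, everything else is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t1 (encounter_id INTEGER PRIMARY KEY, patient_id INTEGER);
CREATE TABLE t2 (encounter_id INTEGER, the_page INTEGER, recorded_by INTEGER,
                 PRIMARY KEY (encounter_id, the_page));
INSERT INTO t1 VALUES (1234, 112233);
INSERT INTO t2 VALUES (1234, 1, 134), (1234, 2, 134);
""")

def duplicate_encounter(old_id, new_id):
    # Copy the parent row first (matters when foreign keys are enforced).
    cur.execute(
        "INSERT INTO t1 SELECT ?, patient_id FROM t1 WHERE encounter_id = ?",
        (new_id, old_id))
    # Copy the child rows, forwarding the new key.
    cur.execute(
        "INSERT INTO t2 SELECT ?, the_page, recorded_by FROM t2 WHERE encounter_id = ?",
        (new_id, old_id))

duplicate_encounter(1234, 7777)
copied = cur.execute(
    "SELECT COUNT(*) FROM t2 WHERE encounter_id = 7777").fetchone()[0]
```

As in the T-SQL version, there is no error handling here; a real implementation would wrap both inserts in a transaction.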
SQL - Copy Data Within Same Table
[ "sql", "sql-server" ]
I'm trying to make my SQL query work and failing, so I decided to ask people more experienced with SQL than I am. What I have: two tables in my DB. One is the "DEV" table, which contains: `id, lat, lon, login, password`; the second is the "TASK" table, which contains: `id, lat, lon, address, id_dev`. Id_dev is a foreign key to table "DEV". What I'm trying to do is: write a query to get all DEVs that have NO task assigned (there is no record in table "task" with the given dev.id), and get another list of DEVs that have tasks. I want them separated. I tried something from a tutorial:

```
SELECT * FROM `dev` INNER JOIN 'task' ON dev.id=task.id_dev ORDER BY dev.id;
```

But it didn't work for me. Any suggestions please? Kind regards!
If you want the 'dev' records with no 'task' you shouldn't use `INNER JOIN` as that brings back the intersection of the sets. One option is to use a `LEFT JOIN`, so something like: ``` SELECT dev.* FROM dev LEFT JOIN task ON dev.id=task.id_dev WHERE task.id_dev IS NULL ORDER BY dev.id; ```
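The `LEFT JOIN ... IS NULL` anti-join pattern described here can be tried end-to-end. A small runnable sketch using Python's `sqlite3` as a stand-in engine (the table shapes follow the question; the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dev  (id INTEGER PRIMARY KEY, login TEXT);
CREATE TABLE task (id INTEGER PRIMARY KEY, address TEXT, id_dev INTEGER);
INSERT INTO dev VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
INSERT INTO task VALUES (10, 'Main St', 1), (11, 'Oak Ave', 3);
""")

# Devs WITHOUT a task: LEFT JOIN keeps every dev row and fills the task
# columns with NULL where there is no match, so filter on that NULL.
no_task = conn.execute("""
    SELECT dev.login FROM dev
    LEFT JOIN task ON dev.id = task.id_dev
    WHERE task.id_dev IS NULL
    ORDER BY dev.id
""").fetchall()

# Devs WITH a task: a plain inner join, deduplicated with DISTINCT
# in case a dev has several tasks.
with_task = conn.execute("""
    SELECT DISTINCT dev.login FROM dev
    JOIN task ON dev.id = task.id_dev
    ORDER BY dev.login
""").fetchall()
```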
``` SELECT * FROM `dev` INNER JOIN 'task' ON dev.id=task.id_dev ORDER BY dev.id; ``` Do not use single quote `'` on `task`. Query should look like: ``` SELECT * FROM `dev` INNER JOIN `task` ON dev.id=task.id_dev ORDER BY dev.id; ```
INNER JOIN does not work for me
[ "mysql", "sql", "select", "join", "left-join" ]
Currently I am facing a problem displaying the search parameter of a 'Like' statement as a new column. My SQL code is as below:

```
Select A.Description
from table A
Where A.Description like ('%Battery%')
```

The output:

```
Description
12V Battery
9V Battery
5V Battery
```

The output I want is:

```
Description Description_1
12V Battery Battery
9V Battery Battery
5V Battery Battery
```

Is it possible to do so?
Something like: [SQL Fiddle](http://sqlfiddle.com/#!4/4b96f/1) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE table_name ( Description ) AS SELECT '9V Battery' FROM DUAL UNION ALL SELECT '12V Battery' FROM DUAL UNION ALL SELECT '24V Battery' FROM DUAL UNION ALL SELECT 'Torch' FROM DUAL UNION ALL SELECT 'Rope' FROM DUAL; ``` **Query 1**: ``` WITH search_terms AS ( SELECT 'Battery' AS search_term FROM DUAL UNION ALL SELECT 'Torch' FROM DUAL ) SELECT A.Description, s.search_term AS Description_1 FROM table_name A INNER JOIN search_terms s ON ( A.Description LIKE '%' || search_term || '%' ) ``` **[Results](http://sqlfiddle.com/#!4/4b96f/1/0)**: ``` | DESCRIPTION | DESCRIPTION_1 | |-------------|---------------| | 9V Battery | Battery | | 12V Battery | Battery | | 24V Battery | Battery | | Torch | Torch | ``` Or, if you are only using a single search term, you can replace the named sub-query with a [bind variable](http://docs.oracle.com/cd/A81042_01/DOC/sqlplus.816/a75664/ch34.htm): ``` SELECT A.Description, :search_term AS Description_1 FROM table_name A WHERE A.Description LIKE '%' || :search_term || '%' ```
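The bind-variable form of this answer can be exercised outside Oracle as well: the search term is bound once as an output literal and once inside the `LIKE` pattern. A hedged sketch using Python's `sqlite3` as a stand-in engine (`||` concatenation as in Oracle; the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (description TEXT);
INSERT INTO a VALUES ('12V Battery'), ('9V Battery'), ('Torch');
""")

term = "Battery"
# Bind the term twice: once for the extra output column,
# once inside the LIKE pattern.
rows = conn.execute(
    "SELECT description, ? AS description_1 "
    "FROM a WHERE description LIKE '%' || ? || '%' "
    "ORDER BY description",
    (term, term),
).fetchall()
```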
how about including it in the `SELECT` statement? ``` SELECT A.Description , 'Battery' AS Description_1 FROM table A WHERE A.Description LIKE '%Battery%' ```
Display Search Parameter As Column
[ "sql", "oracle", "oracle11g" ]
I am using a very complex UNION ALL query to parse out information from an XML import into an existing Access table. The issue I am running into is when I attempt to write the query to only populate the table if the field is not null. I have tried using an IIF statement at the beginning and many different iterations of various Null statements, but all throw errors except this one, which doesn't do anything:

```
select SiteVisitCode + '-' AS Q_SiteVisitCode,
IIf ([Sample_Collection_Method_ID] = "SED-CORE-C", SiteVisitCode + '-CC / ' + Station_Name,
IIf ([Sample_Collection_Method_ID] = "CHLPHL-1-C", SiteVisitCode + '-CT / ' + Station_Name,
IIf ([Sample_Collection_Method_ID] = "HOOP-C", SiteVisitCode + '-CH / ' + Station_Name, SiteVisitCode + '-C(A) / ' + Station_Name))),
IIf ([Sample_Collection_Method_ID] = "SED-CORE-C", SiteVisitCode + '-CC ',
IIf ([Sample_Collection_Method_ID] = "CHLPHL-1-C", SiteVisitCode + '-CT',
IIf ([Sample_Collection_Method_ID] = "HOOP-C", SiteVisitCode + '-CH', SiteVisitCode + '-C(A)'))),
'S-ROUTINE' as Activity_Type,
IIf ([Sample_Collection_Method_ID] = "SED-CORE", 'Sediment',
IIf ([Sample_Collection_Method_ID] = "SED-CORE-C", 'Sediment','other')),
IIF ([COMP]="TRUE", Right([TransectA],Len([TransectA])-InStrRev([TransectA],"/"))+'-C', Right([TransectA],Len([TransectA])-InStrRev([TransectA],"/"))) AS Sample_Collection_Method_ID,
'' AS Activity_Comment,
'CLPH' as DEQ_SampleTypeID,
'A' as Activity_Transect,
Station_Visit_Date as Activity_Start_Date,
Time as Activity_Start_Time
FROM tblSiteVisit
WHERE [transectA] is Not Null

UNION ALL

select SiteVisitCode + '-' AS Q_SiteVisitCode,
IIf ([Sample_Collection_Method_ID] = "SED-CORE-C", SiteVisitCode + '-CC / ' + Station_Name,
IIf ([Sample_Collection_Method_ID] = "CHLPHL-1-C", SiteVisitCode + '-CT / ' + Station_Name,
IIf ([Sample_Collection_Method_ID] = "HOOP-C", SiteVisitCode + '-CH / ' + Station_Name, SiteVisitCode + '-C(P) / ' + Station_Name))),
IIf ([Sample_Collection_Method_ID] = "SED-CORE-C",
SiteVisitCode + '-CC ', IIf ([Sample_Collection_Method_ID] = "CHLPHL-1-C", SiteVisitCode + '-CT', IIf ([Sample_Collection_Method_ID] = "HOOP-C", SiteVisitCode + '-CH', SiteVisitCode + '-C(P)'))), 'S-ROUTINE' as Activity_Type, IIf ([Sample_Collection_Method_ID] = "SED-CORE", 'Sediment', IIf ([Sample_Collection_Method_ID] = "SED-CORE-C", 'Sediment','other')), IIF ([COMP]="TRUE", Right([TransectP],Len([TransectP])-InStrRev([TransectP],"/"))+'-C', Right([TransectP],Len([TransectP])-InStrRev([TransectP],"/"))) AS Sample_Collection_Method_ID, '' AS Activity_Comment, 'CLPH' as Q_SampleTypeID, 'P' as Activity_Transect, Station_Visit_Date as Activity_Start_Date, Time as Activity_Start_Time FROM tblSiteVisit WHERE [transectP] is Not Null ; ``` In the example below the 2nd entry should not exist as transect P is null: ``` Q_SiteVisitCode|Sample_ID|Activity_ID|Activity_Type|Medium|Activity_Start_Date|Activity_Start_Time|Sample_Collection_Method_ID|Activity_Transect|DEQ_SampleTypeID|Activity_Comment test123-|test123-CT / Fish Hatchery|test123-CT|S-ROUTINE|other|12/26/2013|1058|CHLPHL-1-C|A|CLPH test123-|test123-C(P) / Fish Hatchery|test123-C(P)|S-ROUTINE|other|12/26/2013|1058|-C|P|CLPH ``` Any assistance would be greatly appreciated
If `transectA` is text datatype and you want your query to ignore rows with Null or zero-length strings in that column ... ``` WHERE Len([transectA]) > 0 ```
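Access's `Len([transectA]) > 0` filters out both NULLs and zero-length strings with a single predicate. The same behavior can be reproduced elsewhere with `LENGTH()`, since `LENGTH(NULL)` is NULL and therefore never `> 0`. A sketch using Python's `sqlite3` as a stand-in engine (column names simplified, rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site_visit (code TEXT, transect_a TEXT);
INSERT INTO site_visit VALUES
    ('v1', 'x/CHLPHL-1-C'),  -- real value: keep
    ('v2', NULL),            -- NULL: drop
    ('v3', '');              -- zero-length string: drop too
""")

# LENGTH(NULL) is NULL (which is not > 0), so one predicate handles
# both the NULL and the empty-string case, mirroring Len(...) > 0.
kept = conn.execute(
    "SELECT code FROM site_visit WHERE LENGTH(transect_a) > 0"
).fetchall()
```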
If the two queries that you're UNIONing together are meant to represent mutually exclusive sets, then you need to indicate that. In each query you select from the same table, but with a different where clause. In the first query you say where transectA is not null. TransectP can still be null here, and those records will be pulled in. Vice versa for the 2nd query. As others have said - write this as a single query without the union, but use a case statement to check whether transectA or transectP is null and transform that field accordingly.
WHERE Is Not Null Query
[ "sql", "ms-access" ]
I have a table containing information about retail stores. I have a list of retail chain names (WalMart, Target, Eatons, etc...) When the user selects one I basically run a query to find anything having to do with that chain. ``` SELECT * FROM stores WHERE store_name LIKE '%$chain%' ORDER BY store_name ASC ``` For example, if the user selects 'WalMart', the query would return anything with the word 'WalMart' in it's name (WalMart Scarborough, WalMart Supercenter Toronto, WalMart Distribution Center etc...). But now I would like to give the user the ability to search through this list via a search text box. The way I usually do searches is like so: ``` SELECT * FROM stores WHERE store_name LIKE '%$user_input%' ORDER BY store_name ASC ``` But in this case the query will return ALL stores containing the `user_input`, not just WalMarts. If I type in Toronto I would like to see WalMart Supercenter Toronto, but will of course get Target Toronto etc.... How can I make it so that I'm looking for anything containing the `user_input` but also only within the WalMart subset. I would like to do this in a single query if possible. Can I use two `LIKE` statements like this? Sorry, haven't tried anything yet as I'm not sure where to start.
Yes, you can do the following: ``` SELECT * FROM stores WHERE store_name LIKE '%$user_input%' AND store_name LIKE '%Walmart%' ORDER BY store_name ASC ```
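Chaining two `LIKE` predicates with `AND` narrows the result to rows matching both patterns. A quick runnable sketch using Python's `sqlite3` as a stand-in engine (store names invented; parameters are bound rather than interpolated into the SQL string, which also sidesteps quoting and injection issues):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stores (store_name TEXT);
INSERT INTO stores VALUES
    ('WalMart Supercenter Toronto'),
    ('WalMart Scarborough'),
    ('Target Toronto');
""")

chain, user_input = "WalMart", "Toronto"
# Both LIKE conditions must hold for a row to be returned.
rows = conn.execute(
    "SELECT store_name FROM stores "
    "WHERE store_name LIKE '%' || ? || '%' "
    "  AND store_name LIKE '%' || ? || '%' "
    "ORDER BY store_name",
    (chain, user_input),
).fetchall()
```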
Sure, you can just add two LIKE clauses: ``` SELECT * FROM stores WHERE store_name LIKE '%$chain%' AND store_name LIKE '%$user_input%' ORDER BY store_name ASC ```
Multiple LIKE statements in SQL
[ "sql" ]
I want to fetch all the children, including the parent, using parentid. What is wrong? I am getting the exception "CONNECT BY loop in user data". Thanks in advance. **Query :**

```
select * from rdt_organization connect by prior parentid = 102; ```

**Table Content :**

```
id parentid 102 null 103 102 112 102 104 103 105 null 106 105 ```

**Output Expected :**

```
id parentid 102 null 103 102 112 102 104 103 ```
You need to connect the rows to the `PRIOR` row using id and parentid, and use `START WITH` to decide where to start; ``` SELECT * FROM rdt_organization START WITH id = 102 CONNECT BY PRIOR id = parentid ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!4/c3032/1).
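`START WITH` / `CONNECT BY` is Oracle-specific; in engines without it, the equivalent traversal is a recursive common table expression. A hedged sketch of that rewrite on the question's rows, using Python's `sqlite3` as a stand-in engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rdt_organization (id INTEGER, parentid INTEGER);
INSERT INTO rdt_organization VALUES
    (102, NULL), (103, 102), (112, 102), (104, 103), (105, NULL), (106, 105);
""")

# START WITH id = 102 / CONNECT BY PRIOR id = parentid, expressed as a
# recursive CTE: anchor on row 102, then repeatedly join children whose
# parentid matches an already-collected id.
tree = conn.execute("""
    WITH RECURSIVE subtree(id, parentid) AS (
        SELECT id, parentid FROM rdt_organization WHERE id = 102
        UNION ALL
        SELECT o.id, o.parentid
        FROM rdt_organization o
        JOIN subtree s ON o.parentid = s.id
    )
    SELECT id FROM subtree ORDER BY id
""").fetchall()
```

Rows 105 and 106 are never reached because they are not connected to the anchor row 102.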
Start with is missing ``` select * from rdt_organization start with id = 102 connect by prior id = parentid; ```
Oracle Connect By SQL Query
[ "sql", "oracle" ]
I currently have two stored procedures that are meant to do the same thing. I was given two ideas on how to retrieve a fairly complex set of data, so I wrote them both and now I need to determine which one is more efficient. I have tried using Sql Profiler, client statistics, and reviewing Execution Plans, but the results appear inconclusive. Which of the two stored procs executes more quickly changes from execution to execution. I am guessing the buffer cache, statistics, and network traffic are causing the fluctuation in results. I can run the same stored proc 10 times in a row and get very different Durations (somewhere between 900ms and 1400ms. I know that's not THAT big, but it makes my decision more difficult). I know I can clear the buffer cache and reset the statistics, but I am working with a dev server that is used pretty frequently by others. I don't want to slow them down with my shenanigans. The amount of data I am working with is certainly smaller than it will be in production, so it would probably help to dummy up a realistic amount of data. What I am wondering is whether or not there is a nice way to run these two queries and determine which performs better. Maybe some software that will run each query 100 times and generate a nice chart or something? I don't want to totally remove statistics and cache from the equation because those will be in play in production. If it turns out that the queries are fairly interchangeable, I'll ask a follow-up question about which one is theoretically more efficient. Thanks!

**UPDATE:** Okay, looks like I need to add more data and then get back to you. Maybe that is all I need to do. Also going to take the time to read Dalorzo's post. I will let you know if the results stabilize with a more realistic test.

**UPDATE** Oh yeah, it became much more conclusive when I added more data. One of the stored procs barely changed when I added more data, the other one took about 3x as long. Thanks everyone!
I figured more data would help but was hoping there was a way that didn't involve cramming a bunch of stuff into the DB. Much appreciated.
With more test data the timing might stabilize because caching plays less of a role. Also the difference between the procs will become more pronounced. You will also be able to observe their asymptotic scaling behaviors. Different kinds of joins and indexing strategies scale differently with added data. Increase the amount of testing data to the maximum you expect to see in production in a few years from now. In addition, I'd compare not the average but the 95% worst execution time. You probably want an algorithm that behaves in a stable way. Erratic performance variations can lead to problems. You could go one step further and generate a histogram of execution times for each function by executing them 1000 times each. That allows you to see the whole distribution and pick the one you like more. That's easy to chart with Excel as an XY Scatter Plot.
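The run-it-many-times, 95th-percentile idea is easy to script. A sketch of such a harness in Python — `run_query` below is a stub standing in for the real stored-procedure call (e.g. via pyodbc), so the numbers it produces are illustrative only:

```python
import time
import statistics

def run_query():
    # Stand-in for the real call, e.g. cursor.execute("EXEC dbo.MyProc ...").
    time.sleep(0.001)

def profile(fn, runs=50):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    timings.sort()
    return {
        "median": statistics.median(timings),
        # 95th-percentile ("worst case-ish") time, per the advice above.
        "p95": timings[int(len(timings) * 0.95) - 1],
    }

stats = profile(run_query)
```

Running `profile` once per stored procedure and comparing the `p95` values (or dumping the sorted `timings` into a spreadsheet as a scatter plot) gives the histogram-style comparison described above.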
What you need is to have the execution plan of both queries to be able to determine which one is better. This is a good article on this topic: <http://sqlmag.com/blog/comparing-execution-plans>
Is there a way to consistently determine which of two stored procedures is faster?
[ "sql", "sql-server", "performance", "stored-procedures" ]
This is my first time developing a database and first time using MS Access! I am trying to append 1 new record to a table via a form linked to a query.

```
Private Sub cmdAdjustStock_Click()

'Declare Vars
Dim newqty As Long
Dim Qty As Control
Dim change As Control
Dim BoxType As Control
Dim sql As String

'Set form controls to vars
Set Qty = Forms!formMain!txtQty
Set change = Forms!formMain!txtQtyChange
Set BoxType = Forms!formMain!txtBoxType

'Arithmetic and SQL
newqty = Qty + change

sql = "INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES ('&BoxType&','&change&','&newqty&')"

MsgBox "New Quantity = " & newqty & ", Box Type = " & BoxType 'For Debugging

DoCmd.RunSQL sql

End Sub
```

"tblHistory" has the following fields: PID, logDate, BoxType, QtyChange, NewQty. All fields required. "logDate" default value = Date() and PID is autonumber. "tblHistory" currently has no records and this append would be the first! "BoxType" is on the many end of a 1-many relationship to a table "tblBoxList" containing a master list of BoxType (Primary) and their corresponding quantities. My MsgBox displays correct quantity values and a BoxType from the form (i.e. 'RMA-834') matching a Primary BoxType ('RMA-834') in "tblBoxList". After verifying the data via the MsgBox, the append fails due to 1 key violation. I'm assuming this violation has to do with either the PID (which is an auto-increment number) or the BoxType I'm passing from the form somehow isn't matching up to the primary key in tblBoxList. That is confusing, because the MsgBox displays a value that looks identical to the primary key. FYI: Tables in this project are linked. I have the 3 tables in a "back-end" on the network. This is the "Front End" with just forms.
Inspect the `INSERT` statement you're asking Access to execute. ``` sql = "INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES ('&BoxType&','&change&','&newqty&')" Debug.Print sql ``` Run the code and examine the `INSERT` statement in the Immediate window. You can go there with `Ctrl`+`g`. And you can copy the statement text and paste it into SQL View of a new Access query for testing, which should help you diagnose the problem. Paste the statement text into your question if you need more help from us. Also consider converting the `INSERT` to a parameter query. You can use that named query in your VBA code and supply the parameter values at run time. Assuming you created and tested a parameter query similar to the one below, you named it *qryHistoryAppend*, and it includes *pBoxType*, *pQtyChange*, and *pNewQty* as the named parameters, you can open that query (`QueryDef` object) in VBA, supply the parameter values and execute it. ``` Const cstrQuery As String = "qryHistoryAppend" Dim db As DAO.Database Dim qdf As DAO.QueryDef Set db = CurrentDb Set qdf = db.QueryDefs(cstrQuery) qdf.Parameters("pBoxType") = Forms!formMain!txtBoxType qdf.Parameters("pQtyChange") = Forms!formMain!txtQtyChange qdf.Parameters("pNewQty") = Forms!formMain!txtQty qdf.Execute dbFailOnError ' <- always use Execute instead of DoCmd.RunSQL ``` Note if that code is contained in *formMain*, you can reference the text boxes like `Me!txtBoxType` instead of `Forms!formMain!txtBoxType` My idea for *qryHistoryAppend* looks like this. Adjust as needed. ``` INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES (pBoxType, pQtyChange, pNewQty); ```
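The parameter-query advice generalizes beyond Access: bind values instead of splicing them into the SQL text. A small runnable sketch of the same idea using Python's `sqlite3` as a stand-in (the table is reduced to the columns being inserted, with PID left to auto-number, mirroring the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tblHistory (
        PID       INTEGER PRIMARY KEY AUTOINCREMENT,
        BoxType   TEXT NOT NULL,
        QtyChange INTEGER NOT NULL,
        NewQty    INTEGER NOT NULL
    )
""")

box_type, change, qty = "RMA-834", 5, 12
# Placeholders keep quoting and type handling out of the SQL text --
# the same role the named Parameters play in the Access QueryDef approach.
conn.execute(
    "INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES (?, ?, ?)",
    (box_type, change, qty + change),
)
row = conn.execute(
    "SELECT BoxType, QtyChange, NewQty FROM tblHistory"
).fetchone()
```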
This isn't going to work: ``` sql = "INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES ('&BoxType&','&change&','&newqty&')" ``` Try changing it to: ``` sql = "INSERT INTO tblHistory (BoxType, QtyChange, NewQty) VALUES ('" & BoxType & "','" & change & "','" & newqty & "')" ```
Fail to Append due to Key Violations
[ "sql", "ms-access", "ms-access-2007", "vba" ]
I encountered an odd situation where appending `OPTION (RECOMPILE)` to my query causes it to run in half a second, while omitting it causes the query to take well over five minutes. This is the case when the query is executed from Query Analyzer or from my C# program via `SqlCommand.ExecuteReader()`. Calling (or not calling) `DBCC FREEPROCCACHE` or `DBCC dropcleanbuffers` makes no difference; Query results are always returned instantaneously with `OPTION (RECOMPILE)` and greater than five minutes without it. The query is always called with the same parameters [for the sake of this test]. I'm using SQL Server 2008. I'm fairly comfortable with writing SQL but have never used an `OPTION` command in a query before and was unfamiliar with the whole concept of plan caches until scanning the posts on this forum. My understanding from the posts is that `OPTION (RECOMPILE)` is an expensive operation. It apparently creates a new lookup strategy for the query. So why is it then, that subsequent queries that omit the `OPTION (RECOMPILE)` are so slow? Shouldn't the subsequent queries be making use of the lookup strategy that was computed on the previous call which included the recompilation hint? Is it highly unusual to have a query that requires a recompilation hint on every single call? Sorry for the entry-level question but I can't really make heads or tails of this. 
**UPDATE: I've been asked to post the query...**

```
select acctNo,min(date) earliestDate
from(
select acctNo,tradeDate as date
from datafeed_trans
where feedid=@feedID and feedDate=@feedDate

union

select acctNo,feedDate as date
from datafeed_money
where feedid=@feedID and feedDate=@feedDate

union

select acctNo,feedDate as date
from datafeed_jnl
where feedid=@feedID and feedDate=@feedDate
)t1
group by t1.acctNo
OPTION(RECOMPILE)
```

When running the test from Query Analyzer, I prepend the following lines:

```
declare @feedID int
select @feedID=20

declare @feedDate datetime
select @feedDate='1/2/2009'
```

When calling it from my C# program, the parameters are passed in via the `SqlCommand.Parameters` property. For the purposes of this discussion, you can assume that the parameters never change so we can rule out sub-optimal parameter sniffing as the cause.
There are times that using `OPTION(RECOMPILE)` makes sense. In my experience the only time this is a viable option is when you are using dynamic SQL. Before you explore whether this makes sense in your situation I would recommend rebuilding your statistics. This can be done by running the following: ``` EXEC sp_updatestats ``` And then recreating your execution plan. This will ensure that when your execution plan is created it will be using the latest information. Adding `OPTION(RECOMPILE)` rebuilds the execution plan every time that your query executes. I have never heard that described as `creates a new lookup strategy` but maybe we are just using different terms for the same thing. When a stored procedure is created (I suspect you are calling ad-hoc sql from .NET but [if you are using a parameterized query then this ends up being a stored proc call](https://dba.stackexchange.com/questions/123978/can-sp-executesql-be-configured-used-by-default)) SQL Server attempts to determine the most effective execution plan for this query based on the data in your database and the parameters passed in ([parameter sniffing](http://blogs.technet.com/b/mdegre/archive/2012/03/19/what-is-parameter-sniffing.aspx)), and then caches this plan. This means that if you create the query where there are 10 records in your database and then execute it when there are 100,000,000 records the cached execution plan may no longer be the most effective. In summary - I don't see any reason that `OPTION(RECOMPILE)` would be a benefit here. I suspect you just need to update your statistics and your execution plan. Rebuilding statistics can be an essential part of DBA work depending on your situation. If you are still having problems after updating your stats, I would suggest posting both execution plans. And to answer your question - yes, I would say it is highly unusual for your best option to be recompiling the execution plan every time you execute the query.
When there is a drastic difference from run to run of a query, I find that it is often one of 5 issues.

1. **STATISTICS** - Statistics are out of date. A database stores statistics on the range and distribution of the types of values in various columns of tables and indexes. This helps the query engine to develop a "Plan" of attack for how it will do the query, for example, the type of method it will use to match keys between tables, using a hash or looking through the entire set. You can call Update Statistics on the entire database or just certain tables or indexes. This slows down the query from one run to another because, when statistics are out of date, it's likely the query plan is not optimal for the newly inserted or changed data for the same query (explained more later below). It may not be proper to Update Statistics immediately on a Production database as there will be some overhead, slowdown and lag depending on the amount of data to sample. You can also choose to use a Full Scan or Sampling to update Statistics. If you look at the Query Plan, you can then also view the statistics on the Indexes in use, such as by using the command **DBCC SHOW_STATISTICS (tablename, indexname)**. This will show you the distribution and ranges of the keys that the query plan is using to base its approach on.

2. **PARAMETER SNIFFING** - The query plan that is cached is not optimal for the particular parameters you are passing in, even though the query itself has not changed. For example, if you pass in a parameter which only retrieves 10 out of 1,000,000 rows, then the query plan created may use a Hash Join; however, if the parameter you pass in will use 750,000 of the 1,000,000 rows, the plan created may be an index scan or table scan. In such a situation you can tell the SQL statement to use the option **OPTION (RECOMPILE)** or an SP to use WITH RECOMPILE. Either tells the engine this is a "Single Use Plan" and not to use a cached plan which likely does not apply. 
There is no rule on how to make this decision; it depends on knowing the way the query will be used by users.

3. **INDEXES** - It's possible that the query hasn't changed, but a change elsewhere, such as the removal of a very useful index, has slowed down the query.

4. **ROWS CHANGED** - The rows you are querying change drastically from call to call. Usually statistics are automatically updated in these cases. However, if you are building dynamic SQL or calling SQL within a tight loop, there is a possibility you are using an outdated query plan based on a drastically wrong number of rows or stale statistics. Again, in this case **OPTION (RECOMPILE)** is useful.

5. **THE LOGIC** - It's the logic: your query is no longer efficient. It was fine for a small number of rows, but no longer scales. This usually involves more in-depth analysis of the query plan. For example, you can no longer do things in bulk but have to chunk things and do smaller commits; your cross product was fine for a smaller set but now takes up CPU and memory as it scales larger (this may also be true for using DISTINCT); you are calling a function for every row; or your key matches don't use an index because of CAST type conversions, NULLs, or functions... Too many possibilities here.

In general, when you write a query you should have some mental picture of roughly how the data is distributed within your tables. A column, for example, can have an evenly distributed set of different values, or it can be skewed, with 80% of the rows holding a specific set of values; and the distribution may vary frequently over time or be fairly static. This will give you a better idea of how to build an efficient query. It also gives you, when debugging query performance, a basis for building a hypothesis as to why the query is slow or inefficient.
OPTION (RECOMPILE) is Always Faster; Why?
[ "sql", "sql-server", "sql-server-2008", "compilation", "query-hints" ]
I have two tables, and a query that looks like this:

```
select *
from table_1
where column_1 in (
    select column_2
    from table_2
    where column_3 = 'foo')
```

which almost works for me; there are nulls in `column_1` and `column_2` that I want to be considered as matches. I could write a `union` with this:

```
-- query above
union all
select *
from table_1
where column_2 is null and
    exists (select *
            from table_2
            where column_3 = 'foo' and
            column_2 is null)
```

But that would scan each table twice, which seems inefficient. Is there a way to combine the two queries to make a more efficient query?
Try this

```
select *
from table_1
where coalesce (column_1,'special value') in (
   select coalesce (column_2,'special value')
   from table_2
   where column_3 = 'foo'
)
```

Of course `'special value'` must not occur as a real value in `column_1` or `column_2`, and it must be compatible with the datatype of those columns.
Would this not work for you? ``` select * from table_1 t1 where EXISTS ( select 1 from table_2 t2 where t2.column_3 = 'foo' AND ( t1.column_1 = t2.column_2 OR (t1.column_1 IS NULL AND t2.column_2 IS NULL) ) ) ``` This would include where they are equal or both NULL. The reason I avoid `ISNULL, IFNULL, COALESCE` is as @alzaimar stated, the *datatype and special value* issues **EDIT** As mentioned by @ypercube this could be simplified for MySQL as ``` select * from table_1 t1 where EXISTS ( select 1 from table_2 t2 where t2.column_3 = 'foo' AND t1.column_1 <=> t2.column_2 ) ```
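The NULL-matching subtlety here is easy to demonstrate: a plain `=` (and hence `IN`) never matches NULL to NULL, so those rows silently disappear. A sketch using Python's `sqlite3` as a stand-in engine, whose `IS` operator plays the role of MySQL's `<=>` (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_1 (column_1 TEXT);
CREATE TABLE table_2 (column_2 TEXT, column_3 TEXT);
INSERT INTO table_1 VALUES ('a'), (NULL);
INSERT INTO table_2 VALUES ('a', 'foo'), (NULL, 'foo');
""")

# Plain IN: the NULL row of table_1 is lost, because NULL IN (...) is
# never true.
plain = conn.execute("""
    SELECT COUNT(*) FROM table_1
    WHERE column_1 IN (SELECT column_2 FROM table_2 WHERE column_3 = 'foo')
""").fetchone()[0]

# NULL-safe version: IS treats NULL-to-NULL as a match, like <=>.
null_safe = conn.execute("""
    SELECT COUNT(*) FROM table_1 t1
    WHERE EXISTS (
        SELECT 1 FROM table_2 t2
        WHERE t2.column_3 = 'foo' AND t1.column_1 IS t2.column_2
    )
""").fetchone()[0]
```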
What's the most elegant way to deal with nulls
[ "mysql", "sql" ]
I'm trying to update a record at the same time I select it. In Oracle SQL Developer, the below query works. However, when I execute it from within a CFQUERY tag in ColdFusion, I get an error (see below). I found this stack overflow ([ORA-06550 and PLS-00103](https://stackoverflow.com/questions/4270818/ora-06550-and-pls-00103)) but it wasn't any help. Ideally, I'd also like to return the whole record, not just the ID of the affected record. So, I have two questions.

1. Why is the below query failing from within a ColdFusion CFC's CFQUERY?
2. How do I rewrite the query to return the affected record rather than just that record's id?

```
DECLARE
   record_id scpricequeue.scpricequeueid%TYPE;
BEGIN
  update scpricequeue
  set islocked = 1, datelocked = sysdate
  where scpricequeueid = (
    select scpricequeueid
    from (
      select scpricequeueid
      from scpricequeue
      where islocked = 0 and completed = 0
      order by dateadded asc
    )
    where rownum < = 1
  )
  RETURNING scpricequeueid INTO record_id;
  DBMS_OUTPUT.put_line('Locked Record: ' || record_id);
END;
```

ERROR RECEIVED when executed as CFQUERY:

```
ORA-06550: line 1, column 8:
PLS-00103: Encountered the symbol "" when expecting one of the following:
begin function package pragma procedure subtype type use
<an identifier> <a double-quoted delimited-identifier> form
current cursor The symbol "" was ignored.
```
**EDIT**: you can see here that, after the semicolon in declare block, CF pushed the query to the db layer, and oracle in turn replied back stating, I see a space instead of a begin keyword! Sure we have to push the block completely, and this should work. ``` <cfset variables.plsql = " DECLARE record_id scpricequeue.scpricequeueid%TYPE; BEGIN update scpricequeue set islocked = 1, datelocked = sysdate where scpricequeueid = ( select scpricequeueid from ( select scpricequeueid from scpricequeue where islocked = 0 and completed = 0 order by dateadded asc ) where rownum < = 1 ) RETURNING scpricequeueid INTO record_id; DBMS_OUTPUT.put_line('Locked Record: ' || record_id); END; " > <cfquery name="q" datasource="yourDSN"> #variables.plsql# </cfquery> ``` Set the query string as a variable, while executing anonymous `pl/sql` blocks! **Courtesy:** <http://awads.net/wp/2005/07/25/oracle-plsql-in-cfquery/>
Do two queries; this should answer your question: <https://stackoverflow.com/a/1883117/3112803> Hope that helps. Since you are only doing one update and one select, the cftransaction tags are not needed. But if you were doing multiple adds/updates/deletes, then you would want the cftransaction tag so that if an error happens they would all roll back.
Oracle SQL RETURN INTO Fails within CFQUERY (ORA-06550/PLS-00103)
[ "sql", "oracle", "coldfusion" ]
I have 2 database tables: `customers` and `customers_1` I have 100 customers in the `customers` table but only 99 customers in the `customers_1` table. I would like to write a query that will compare the 2 tables and will result in the missing row. I have tried this following SQL: `select * from customers c where in (select * from customers_1)` But this will only check for the one table.
Your query shouldn't work this way. You have to compare one column to another and use `NOT IN` instead of `IN`: ``` select * from customers c where customerid not in (select customerid from customers_1) ``` However, Since you are on SQL Server 2008, you can use [`EXCEPT`](http://technet.microsoft.com/en-us/library/ms188055.aspx): ``` SELECT * FROM customers EXCEPT SELECT * FROM customers_1; ``` This will give you the rows which are in the `customers` table that are not in `customers_1` table: > EXCEPT returns any distinct values from the left query that are not > also found on the right query.
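`EXCEPT` is easy to verify on toy data. A runnable sketch using Python's `sqlite3` as a stand-in engine, which implements the same set operator (rows invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers   (customerid INTEGER, name TEXT);
CREATE TABLE customers_1 (customerid INTEGER, name TEXT);
INSERT INTO customers   VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
INSERT INTO customers_1 VALUES (1, 'Ann'), (3, 'Cid');
""")

# Distinct rows present on the left side but absent from the right.
missing = conn.execute("""
    SELECT customerid, name FROM customers
    EXCEPT
    SELECT customerid, name FROM customers_1
""").fetchall()
```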
This is easy. Just join them with a left outer join and check for `NULL` in the table which has the 99 rows. It will look something like this. ``` SELECT * FROM customers c LEFT JOIN customers1 c1 ON c.some_key = c1.some_key WHERE c1.some_key IS NULL ```
Compare 2 tables and find the missing record
[ "sql", "sql-server-2008", "t-sql" ]
I am very unfamiliar with advanced SQL. Let's say I have the following table (in Access - using Jet 4.0 OLEDBAdapter in VB.NET). Table - Items

```
ID Date Account Amount ----- ------ ------- ------ 1 1/1/2013 Cash 10.00 2 2/1/2013 Cash 20.00 3 1/2/2013 Cash 30.00 4 2/2/2013 Cash 40.00 5 1/1/2013 Card 50.00 6 2/1/2013 Card 60.00 7 1/2/2013 Card 70.00 8 2/2/2013 Card 80.00 ```

And I want to generate the following - totals for each account per month Table - Totals

```
Account Jan Feb ----- ----- ------ Cash 30.00 70.00 Card 110.00 150.00 ```

Is this possible using one SQL statement? I can do it in two but it is very slow. Edit - the closest I have got is this - but it doesn't generate columns

```
SELECT accFrom, Sum(amount) FROM Items WHERE Year(idate) = '2012' GROUP BY Month(idate), accFrom ```
Since there are exactly 12 months in a year, you do not need to pivot; just calculate the sum for each month: ``` SELECT Account, Sum(IIF(Month(Date)=01, Amount, 0)) AS Jan, Sum(IIF(Month(Date)=02, Amount, 0)) AS Feb, Sum(IIF(Month(Date)=03, Amount, 0)) AS Mar, Sum(IIF(Month(Date)=04, Amount, 0)) AS Apr, Sum(IIF(Month(Date)=05, Amount, 0)) AS May, Sum(IIF(Month(Date)=06, Amount, 0)) AS Jun, Sum(IIF(Month(Date)=07, Amount, 0)) AS Jul, Sum(IIF(Month(Date)=08, Amount, 0)) AS Aug, Sum(IIF(Month(Date)=09, Amount, 0)) AS Sep, Sum(IIF(Month(Date)=10, Amount, 0)) AS Oct, Sum(IIF(Month(Date)=11, Amount, 0)) AS Nov, Sum(IIF(Month(Date)=12, Amount, 0)) AS "Dec" FROM Items WHERE Year(Date) = 2013 GROUP BY Account ```
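The same conditional-aggregation idea can be sketched outside Access, for example in SQLite via Python (using `CASE` instead of `IIF`, and storing month numbers directly for brevity; the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (account TEXT, month INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("Cash", 1, 10.0), ("Cash", 1, 20.0), ("Cash", 2, 30.0), ("Cash", 2, 40.0),
     ("Card", 1, 50.0), ("Card", 1, 60.0), ("Card", 2, 70.0), ("Card", 2, 80.0)],
)

# One output column per month, each filled by a CASE inside the aggregate
rows = conn.execute("""
    SELECT account,
           SUM(CASE WHEN month = 1 THEN amount ELSE 0 END) AS jan,
           SUM(CASE WHEN month = 2 THEN amount ELSE 0 END) AS feb
    FROM items
    GROUP BY account
    ORDER BY account
""").fetchall()
```

This reproduces the totals from the question's expected output (Cash 30/70, Card 110/150) without any pivot support in the engine.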
Using your sample data, this is the output I got from the query below with Access 2010. ``` Account 2013-01 2013-02 ------- ------- ------- Card $120.00 $140.00 Cash $40.00 $60.00 ``` My totals don't match your expected output. I suspect your date values were *d-m-yyyy* format, but my US locale interpreted them as *m-d-yyyy*. It's better to present dates in *yyyy-m-d* format to avoid that confusion. Anyway this query formats the dates as *yyyy-mm*, and then pivots to generate the columns for each year-month combination. So it will accommodate a growing date range without requiring you to modify the query. And, as the date range grows, you could eventually add a `WHERE` clause to limit to columns to a convenient subset. ``` TRANSFORM Sum(i.Amount) AS SumOfAmount SELECT i.Account FROM Items AS i GROUP BY i.Account PIVOT Format(i.Date,'yyyy-mm'); ```
SQL statement to generate a table of totals by month
[ "", "sql", "vb.net", "ms-access", "" ]
I have the following query: ``` SELECT * FROM (SELECT 1 CntCol, CONVERT(CHAR(3), [data out (no val#vuoto)], 0) month, ( [gruppi min (gg flusso/decorrenza-->out)] ) FROM dbpratiche WHERE compagnia = 'GENERALI INA ASSITALIA' AND stato = 'OUT ATTIVATA' AND [data out (no val#vuoto)] > '01-01-2012') TLB PIVOT ( Count(CntCol) FOR tlb.month IN([nov], [dec], [jan], [feb], [mar], [apr], [May], [jun], [jul], [aug], [sep], [oct] ) )PVT ``` This pivot query produces the following result: ![enter image description here](https://i.stack.imgur.com/LGuLT.png) I want to pivot this table again to get the result as: ``` Months <=06 <=8 <=10 >10 Jan 0 1 1 6 Feb 8 5 4 2 Mar 9 7 6 2 ``` Please help.
try this code: ``` SELECT * FROM (SELECT 1 CntCol, CONVERT(CHAR(3), [data out (no val#vuoto)], 0) month, ( [gruppi min (gg flusso/decorrenza-->out)] ) FROM dbpratiche WHERE compagnia = 'GENERALI INA ASSITALIA' AND stato = 'OUT ATTIVATA' AND [data out (no val#vuoto)] > '01-01-2012') TLB PIVOT ( Count(CntCol) FOR tlb.[gruppi min (gg flusso/decorrenza-->out)] IN ([<=06], [<=08], [<=10], [>10] ) )PVT ```
I solved it myself through: ``` select * from ( SELECT 1 CntCol, CONVERT(CHAR(3), [data out (no val#vuoto)], 0) month, ( [gruppi min (gg flusso/decorrenza-->out)] ) FROM dbpratiche WHERE compagnia = 'GENERALI INA ASSITALIA' AND stato = 'OUT ATTIVATA' AND [data out (no val#vuoto)] > '01-01-2012' ) T pivot ( count(cntcol) for [gruppi min (gg flusso/decorrenza-->out)] in([>10],[<=10],[<=06],[<=08]) )P ```
Pivoting pivoted table
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have a table for comments : ``` +----------+---------------------+----------+ | match_id | timestampe | comment | +----------+---------------------+----------+ | 100 | 2014-01-01 01:00:00 | Hi | | 200 | 2014-01-01 01:10:00 | Hi1 | | 300 | 2014-01-01 01:20:00 | Hi2 | | 100 | 2014-01-01 01:01:00 | Hello | | 100 | 2014-01-01 01:02:00 | Hello1 | | 200 | 2014-01-01 01:11:00 | hey | +----------+---------------------+----------+ ``` I want to get the following information from the table ``` SELECT match_id, max(timestampe) as maxtimestamp, count(match_id) as comments_no FROM comments GROUP BY match_id order by maxtimestamp DESC ``` The previous explanation is working great but the problem is when I want to get the comment of the maxtimestamp. How can I get the latest comment of each match (the comment of the maxtimestamp) using the most optimized query?
You can do it this way. This is pretty optimal too. ``` SELECT c.comment, m.* FROM comments c JOIN ( SELECT t.match_id, max(t.timestampe) as maxtimestamp, count(t.match_id) as comments_no FROM comments t GROUP BY t.match_id ) m on c.match_id = m.match_id and c.timestampe = m.maxtimestamp ``` [SQL Fiddle](http://sqlfiddle.com/#!2/70efa/5/0)
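The join-back-to-the-aggregate pattern is easy to verify with SQLite and the question's own sample rows (the `timestampe` column name is kept exactly as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (match_id INTEGER, timestampe TEXT, comment TEXT)")
conn.executemany("INSERT INTO comments VALUES (?, ?, ?)", [
    (100, "2014-01-01 01:00:00", "Hi"),
    (200, "2014-01-01 01:10:00", "Hi1"),
    (300, "2014-01-01 01:20:00", "Hi2"),
    (100, "2014-01-01 01:01:00", "Hello"),
    (100, "2014-01-01 01:02:00", "Hello1"),
    (200, "2014-01-01 01:11:00", "hey"),
])

# Aggregate per match, then join back to pick up the comment at the max timestamp
rows = conn.execute("""
    SELECT c.match_id, m.maxtimestamp, m.comments_no, c.comment
    FROM comments c
    JOIN (SELECT match_id, MAX(timestampe) AS maxtimestamp,
                 COUNT(*) AS comments_no
          FROM comments GROUP BY match_id) m
      ON c.match_id = m.match_id AND c.timestampe = m.maxtimestamp
    ORDER BY c.match_id
""").fetchall()
```

Each match comes back once, with its comment count and the comment attached to the latest timestamp.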
I'm not sure about MySQL but Oracle supports window functions, so I can write something like: ``` select first_value(comment) over (order by timestamp desc) from comments ```
How to get the value of a row with Max aggregation function?
[ "", "mysql", "sql", "aggregate-functions", "" ]
We store date data as int instead of a date format with SQL Server 2008. Why would they not store dates as a date type? Is there any advantage?
Prior to SQL Server 2008 it was common (perhaps even best practice) to use integer encoded dates in warehouses, i.e. 20130106 It was storage efficient (4 bytes) and human readable. Now that there is a (3 byte) `Date` datatype, I would use that in new development. (splitting date and time in either scheme is important to reduce the size of those respective dimensions in warehouses)
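Decoding such an integer-encoded date back into a real date is plain arithmetic; a minimal sketch (the function name is mine):

```python
from datetime import date

def decode_int_date(n: int) -> date:
    """Split a yyyymmdd integer such as 20130106 into its year, month, day parts."""
    return date(n // 10000, n // 100 % 100, n % 100)

d = decode_int_date(20130106)
```

Because the encoding is positional, the integers also sort in the same order as the dates they represent, which is part of why the scheme was workable in warehouses.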
Well, you would need to ask the people that put the guidelines/rules together to be certain. I, for one, would rather use the DBMS date formats since there are a lot of things you can do with them beyond simple comparisons, and in a way that's usually more efficient. I seem to recall that versions of SQL Server prior to 2008 didn't actually *have* separate date and time data types so it may be a hangover from earlier code/data, or coders who haven't kept themselves current. The `datetime` types that *were* available also had other limitations such as timezone-unawareness.
What is the advantage of having date as int?
[ "", "sql", "sql-server-2008", "" ]
Using SQL Server 2008 R2 when I enter the following query: ``` SELECT CAST(POWER(2.0, 63.0) AS BIGINT); ``` Which yields the result: ``` 9223372036854775800 ``` However, using the Windows desktop calculator and raising 2 to the 63 yields: ``` 9223372036854775807 ``` Can someone please explain the difference -- or is there some internal conversion that SQL Server is doing? ... or am I missing something else?
The range of `BIGINT` in MS Sql Server is: ``` -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807) ``` And your calculator is giving you the wrong number, because 2^63 can't have an odd number for its right-most digit. The `POWER` function in SQL Server (<http://technet.microsoft.com/en-us/library/ms174276.aspx>) returns the same type as its first argument. The correct way to write this query is: ``` DECLARE @foo REAL = 2.0 SELECT CAST(POWER( @foo, 63.0 ) AS BIGINT) ``` Running that, you will get the error message `Arithmetic overflow error converting expression to data type bigint.` The reason is explained here: <http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx> As for why the `POWER` function returns a wrong number: as @simonatrcl mentioned in his answer, there are arithmetic problems with floating-point numbers which sometimes produce inexact results. You can read about floating-point numbers and the problems with them here: <http://www.extremeoptimization.com/resources/Articles/FPDotNetConceptsAndFormats.aspx> You can also check the boundaries for integer types in MS Sql Server here: <http://technet.microsoft.com/en-us/library/ms187745.aspx>
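The root cause, namely that a 64-bit IEEE-754 double has only 53 bits of mantissa so distinct integers near 2^63 collapse onto the same double, can be demonstrated in any IEEE-754 environment. This Python snippet illustrates the floating-point behaviour itself, not SQL Server:

```python
# 2^63 is a power of two and IS exactly representable as a double,
# but 2^63 - 1 is not: it rounds to the very same double as 2^63.
exact_power = float(2 ** 63)
rounded_max = float(2 ** 63 - 1)

# True: 19 significant decimal digits don't fit into 53 bits of mantissa
collapses = (exact_power == rounded_max)
```

So any path that routes these values through a double, as `POWER` does, cannot distinguish `2^63 - 1` from `2^63`, which is why a subsequent cast to `BIGINT` is on shaky ground.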
Power will be returning a FLOAT. Floating point numbers are not accurate beyond certain limits, and will drop a bit of accuracy (if you've ever had a negative 0 problem you'll know what I mean!). That's what you're getting here...
SQL Server POWER function
[ "", "sql", "sql-server", "sql-server-2008-r2", "bigint", "" ]
I'm trying to write a query that takes a list parameter (ie, a single parameter which is a list of values). It appears that this is at least sometimes possible in PostgreSQL (<https://stackoverflow.com/a/10829760/836390>). What I want is something like this: ``` rows, err := db.Query("SELECT * FROM table WHERE id in $1", []int{1, 2, 3}) ``` However, when I execute this using the [pq](https://github.com/lib/pq) driver, I get an error: ``` sql: converting Exec argument #0's type: unsupported type []int, a slice ``` Is this simply not supported in `pq` yet, or is this not supported in `database/sql`, or not in PostgreSQL at all, or what?
You can use [pq.Array](https://github.com/lib/pq/blob/master/array.go) with slice parameters nowadays, combined with Postgres's `= ANY` array operator. So the query would look like: ``` rows, err := db.Query("SELECT * FROM table WHERE id = ANY($1)", pq.Array([]int{1, 2, 3})) ```
So it looks like [pq](http://github.com/lib/pq) uses [database/sql](http://golang.org/pkg/database/sql)'s default ValueConverter, which has no ability to handle slices (see the [documentation](http://golang.org//pkg/database/sql/driver/#pkg-variables) for DefaultParameterConverter).
PostgreSQL list parameter in Go (using database/SQL and pq)
[ "", "sql", "postgresql", "go", "prepared-statement", "" ]
I'm currently using these sql statements. My table has the field CPaymentType which contains "Cash" or "Check". I can sum up the amount of payments by executing 2 SQL statements as shown below. In this case, the user won't even notice the speed difference when executing 2 sql statements or just 1, however, I don't like my way, I just want 1 sql statement. How do I reconstruct these into 1 statement with CASE conditions? I can't figure it out since examples online result in either 1 or 0 or boolean. I don't want the postdated Check payments to be included. Thank you very much. ``` Select SUM(CAmount) as PaymentAmount from TableOrderPayment where CPaymentType='Cash' and CStatus='Active'; Select SUM(CAmount) as PaymentAmount from TableOrderPayment where CPaymentType='Check' and CDate<=SYSDATETIME() and CStatus='Active'; ```
``` Select SUM(CASE When CPayment='Cash' Then CAmount Else 0 End ) as CashPaymentAmount, SUM(CASE When CPayment='Check' Then CAmount Else 0 End ) as CheckPaymentAmount from TableOrderPayment Where ( CPayment='Cash' Or CPayment='Check' ) AND CDate<=SYSDATETIME() and CStatus='Active'; ```
``` select CPaymentType, sum(CAmount) from TableOrderPayment where (CPaymentType = 'Cash' and CStatus = 'Active') or (CPaymentType = 'Check' and CDate <= sysdatetime() and CStatus = 'Active') group by CPaymentType ``` Cheers -
SELECT query with CASE condition and SUM()
[ "", "sql", "sql-server", "sum", "case", "conditional-statements", "" ]
I have a data set for appointments with start and end times for each appointment. I need to calculate the amount of time the appointment lasted in hours, but the two fields are in 24-hour time and formatted as `INTEGER`. To make things even worse they're formatted as both 3 and 4 character integers so `830` and `1630` for example, not `0830` and `1630`. What would be the best way to create a column with the number of hours between those two columns? I've tried converting them to `CHAR` and then taking a substring of the last two, but I can't get it to work for both the 3 and 4 time lengths. It currently looks like this: ``` ╔═══════════╦═════════╗ β•‘ startTime β•‘ endTime β•‘ ╠═══════════╬═════════╣ β•‘ 830 β•‘ 1600 β•‘ β•‘ 400 β•‘ 800 β•‘ β•‘ 1350 β•‘ 1400 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β• ``` And I'd ideally like it to look something like this: ``` ╔═══════════╦═════════╦═══════╗ β•‘ startTime β•‘ endTime β•‘ Hours β•‘ ╠═══════════╬═════════╬═══════╣ β•‘ 830 β•‘ 1600 β•‘ 7.5 β•‘ β•‘ 400 β•‘ 800 β•‘ 4 β•‘ β•‘ 1350 β•‘ 1400 β•‘ .5 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β• ```
Since `startTime` and `endTime` are both integers, you can use simple modulo and division to separate minutes and hours. The following expression will give you `startTime` in minutes: ``` startTime % 100 + (startTime / 100) * 60 ``` You can do the same for `endTime` and subtract `endTime` from `startTime` to get an expression for the time difference in minutes: ``` (endTime % 100 + (endTime / 100) * 60) - (startTime % 100 + (startTime / 100) * 60) ``` Finally, convert it to hours: ``` ((endTime % 100 + (endTime / 100) * 60) - (startTime % 100 + (startTime / 100) * 60)) / 60.0 ``` Note the division by `60.0` (which is a float) instead of `60` (which is an integer), so that the final result will be float and not integer. Your final SQL query should look like: ``` SELECT startTime, endTime, (((endTime % 100 + (endTime / 100) * 60) - (startTime % 100 + (startTime / 100) * 60)) / 60.0) AS diff FROM appointments ```
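The accepted arithmetic translates directly into a small helper, checked here against the question's first two sample rows. (Note that the third row, 1350 to 1400, works out to 10 minutes with this formula, so the `.5` shown in the question's expected output looks like a typo.)

```python
def to_minutes(t: int) -> int:
    """Convert an integer 24-hour time such as 830 or 1630 into minutes past midnight."""
    return (t // 100) * 60 + t % 100

def hours_between(start: int, end: int) -> float:
    # Divide by 60.0 so the result stays fractional, as in the SQL above
    return (to_minutes(end) - to_minutes(start)) / 60.0

first = hours_between(830, 1600)    # 8:30 to 16:00
second = hours_between(400, 800)    # 4:00 to 8:00
third = hours_between(1350, 1400)   # 13:50 to 14:00, i.e. ten minutes
```

Integer division handles both 3- and 4-digit times uniformly, which is why no string padding is needed.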
Use a helper function to convert the INT military time to the native DATETIME type of your SQL database: ``` SELECT DATEDIFF(hh, dbo.fnConvertMilitary(StartTime), dbo.fnConvertMilitary(EndTime) FROM appointments ``` Refer to @AlonGubkin's great answer for the implementation of dbo.fnConvertMilitary.
Figuring out the difference in hours between two 24-hour times in SQL
[ "", "sql", "" ]
Amazon Redshift is based on ParAccel which is based on Postgres. From my research it seems that the preferred way to perform hexadecimal string to integer conversion in Postgres is via a bit field, as outlined in this [answer](https://stackoverflow.com/questions/12375369/convert-hex-string-to-bigint-in-postgres). In the case of bigint, this would be: ``` select ('x'||lpad('123456789abcdef',16,'0'))::bit(64)::bigint ``` Unfortunately, this fails on Redshift with: ``` ERROR: cannot cast type text to bit [SQL State=42846] ``` What other ways are there to perform this conversion in Postgres 8.1ish (that's close to the Redshift level of compatibility)? UDFs are not supported in Redshift and neither are array, regex functions or set generating functions...
It looks like they added a function for this at some point: [STRTOL](http://docs.aws.amazon.com/redshift/latest/dg/r_STRTOL.html) > *Syntax* > > STRTOL(num\_string, base) > > *Return type* > > BIGINT. If num\_string is null, returns NULL. **For example** ``` SELECT strtol('deadbeef', 16); ``` Returns: `3735928559`
Assuming that you want a simple digit-by-digit ordinal position conversion (i.e. you're not worried about two's compliment negatives, etc) I think this should work on an 8.1-equivalent DB: ``` CREATE OR REPLACE FUNCTION hex2dec(text) RETURNS bigint AS $$ SELECT sum(CASE WHEN v >= ascii('a') THEN v - ascii('a') + 10 ELSE v - ascii('0') END * 16^ordpos)::bigint FROM ( SELECT n-1, ascii(substring(reverse($1), n, 1)) FROM generate_series(1, length($1)) n ) AS x(ordpos, v); $$ LANGUAGE sql IMMUTABLE; ``` The function form is optional, it just makes it easier to avoid repeating the argument a bunch of times. It should get inlined anyway. Efficiency will probably be awful, but most of the tools available to do this smarter don't seem to be available on versions that old, and this at least works: ``` regress=> CREATE TABLE t AS VALUES ('c13b'), ('a'), ('f'); regress=> SELECT hex2dec(column1) FROM t; hex2dec --------- 49467 10 15 (3 rows) ``` If you can use `regexp_split_to_array` and `generate_subscripts` it might be faster. Or slower. I haven't tried. Another possible trick is to use a digit mapping array instead of the `CASE`, like: ``` '[48:102]={0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,11,12,13,14,15}'::integer[] ``` which you can use with: ``` CREATE OR REPLACE FUNCTION hex2dec(text) RETURNS bigint AS $$ SELECT sum( ('[48:102]={0,1,2,3,4,5,6,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,10,11,12,13,14,15}'::integer[])[ v ] * 16^ordpos )::bigint FROM ( SELECT n-1, ascii(substring(reverse($1), n, 1)) FROM generate_series(1, length($1)) n ) AS x(ordpos, v); $$ LANGUAGE sql IMMUTABLE; ``` Personally, I'd do it client-side instead, rather than wrangling the limited capabilities of an old PostgreSQL fork, especially one you can't load your own sensible user-defined C functions on, or use PL/Perl, etc. 
--- In real PostgreSQL I'd just use this: **hex2dec.c**: ``` #include "postgres.h" #include "fmgr.h" #include "utils/builtins.h" #include "errno.h" #include "limits.h" #include <stdlib.h> PG_MODULE_MAGIC; Datum from_hex(PG_FUNCTION_ARGS); PG_FUNCTION_INFO_V1(hex2dec); Datum hex2dec(PG_FUNCTION_ARGS) { char *endpos; const char *hexstr = text_to_cstring(PG_GETARG_TEXT_PP(0)); long decval = strtol(hexstr, &endpos, 16); if (endpos[0] != '\0') { ereport(ERROR, (ERRCODE_INVALID_PARAMETER_VALUE, errmsg("Could not decode input string %s as hex", hexstr))); } if (decval == LONG_MAX && errno == ERANGE) { ereport(ERROR, (ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE, errmsg("Input hex string %s overflows int64", hexstr))); } PG_RETURN_INT64(decval); } ``` **Makefile**: ``` MODULES = hex2dec DATA = hex2dec--1.0.sql EXTENSION = hex2dec PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) include $(PGXS) ``` **hex2dec.control**: ``` comment = 'Utility function to convert hex strings to decimal' default_version = '1.0' module_pathname = '$libdir/hex2dec' relocatable = true ``` **hex2dec--1.0.sql**: ``` CREATE OR REPLACE FUNCTION hex2dec(hexstr text) RETURNS bigint AS 'hex2dec','hex2dec' LANGUAGE c IMMUTABLE STRICT; COMMENT ON FUNCTION hex2dec(hexstr text) IS 'Decode the hex string passed, which may optionally have a leading 0x, as a bigint. 
Does not attempt to consider negative hex values.'; ``` Usage: ``` CREATE EXTENSION hex2dec; postgres=# SELECT hex2dec('7fffffffffffffff'); hex2dec --------------------- 9223372036854775807 (1 row) postgres=# SELECT hex2dec('deadbeef'); hex2dec ------------ 3735928559 (1 row) postgres=# SELECT hex2dec('12345'); hex2dec --------- 74565 (1 row) postgres=# select hex2dec(to_hex(-1)); hex2dec ------------ 4294967295 (1 row) postgres=# SELECT hex2dec('8fffffffffffffff'); ERROR: Input hex string 8fffffffffffffff overflows int64 postgres=# SELECT hex2dec('0x7abcz123'); ERROR: Could not decode input string 0x7abcz123 as hex ``` The performance difference is ... noteworthy. Given sample data: ``` CREATE TABLE randhex AS SELECT '0x'||to_hex( abs(random() * (10^((random()-.5)*10)) * 10000000)::bigint) AS h FROM generate_series(1,1000000); ``` conversion from hex to decimal takes about 1.3 from a warm cache using the C extension, which isn't great for a million rows. Reading them without any transformation takes 0.95s. It took 36 seconds for the SQL based hex2dec approach to process the same rows. Frankly I'm really impressed that the SQL approach was as fast as that, and surprised the C ext was that slow.
Hex string to integer conversion in Amazon Redshift
[ "", "sql", "postgresql", "amazon-web-services", "hex", "amazon-redshift", "" ]
How can I select across multiple rows based on a particular column value. For example I have a structure like this ``` +--+----+-----+ |id|data|count| +--+----+-----+ |1 |text|30 | +--+----+-----+ |1 |text|1 | +--+----+-----+ |1 |text|25 | +--+----+-----+ |1 |text|12 | +--+----+-----+ |1 |text|50 | +--+----+-----+ |2 |text|5 | +--+----+-----+ |2 |text|2 | +--+----+-----+ |2 |text|100 | +--+----+-----+ |2 |text|50 | +--+----+-----+ |2 |text|1000 | +--+----+-----+ |3 |text|2 | +--+----+-----+ |3 |text|4 | +--+----+-----+ |3 |text|6 | +--+----+-----+ |3 |text|8 | +--+----+-----+ ``` And I want to select three of every ID--the "top" 3 based on the highest values in the `count` column so that I'll end up with: ``` +--+----+-----+ |id|data|count| +--+----+-----+ |1 |text|30 | +--+----+-----+ |1 |text|25 | +--+----+-----+ |1 |text|50 | +--+----+-----+ |2 |text|100 | +--+----+-----+ |2 |text|50 | +--+----+-----+ |2 |text|1000 | +--+----+-----+ |3 |text|4 | +--+----+-----+ |3 |text|6 | +--+----+-----+ |3 |text|8 | +--+----+-----+ ``` I am essentially stuck on the first WHERE clause: I don't want the rows where `count` is above a particular value as that could return more than 3 results. I can do a limit 3 and sort by count, but that will only work for one id. How can I do this for every distinct id in my table?
You should be able to implement a [windowing function](http://www.postgresql.org/docs/current/static/functions-window.html) like `row_number()` to get the top 3 for each `id`: ``` select id, data, "count" from ( select id, data, "count", row_number() over(partition by id order by "count" desc) seq from yourtable ) d where seq <= 3 order by id, "count"; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!15/41c9f/2)
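SQLite (3.25+) supports the same window function, so the pattern can be tried end-to-end with the question's data; `count` is renamed to `cnt` below to sidestep the reserved word:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (id INTEGER, data TEXT, cnt INTEGER)")
conn.executemany("INSERT INTO yourtable VALUES (?, 'text', ?)", [
    (1, 30), (1, 1), (1, 25), (1, 12), (1, 50),
    (2, 5), (2, 2), (2, 100), (2, 50), (2, 1000),
    (3, 2), (3, 4), (3, 6), (3, 8),
])

# Number the rows within each id from highest cnt down, then keep the first three
rows = conn.execute("""
    SELECT id, cnt
    FROM (SELECT id, cnt,
                 ROW_NUMBER() OVER (PARTITION BY id ORDER BY cnt DESC) AS seq
          FROM yourtable) d
    WHERE seq <= 3
    ORDER BY id, cnt
""").fetchall()
```

Each id contributes exactly its three largest counts, matching the question's expected output.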
``` SELECT "id", data, "count" FROM (SELECT "id", data, "count", rank() OVER (partition by "id" ORDER BY "count" DESC) as rn FROM your_table) t WHERE rn <= 3 ORDER BY "id", "count" desc ```
select 3 values for each id based on top values in a count column
[ "", "sql", "postgresql", "" ]
I have two tables ``` TABLE_A +-------+------------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+------------------+------+-----+---------+-------+ | bid | int(10) unsigned | NO | PRI | 0 | | | uid | int(10) unsigned | NO | PRI | 0 | | +-------+------------------+------+-----+---------+-------+ 2 rows in set (0.00 sec) ``` and ``` TABLE_B +-------+------------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+------------------+------+-----+---------+-------+ | bid | int(10) unsigned | NO | PRI | 0 | | | uid | int(10) unsigned | NO | PRI | 0 | | +-------+------------------+------+-----+---------+-------+ ``` I want to select bid from both tables when uid = 123. Note: each table has about 15 results and some exist in both tables, so I need to select distinct values. So I tried this: ``` SELECT DISTINCT ta.bid, tb.bid FROM table_a AS ta JOIN table_b AS tb using (uid) WHERE uid = 123; ``` And I obviously got the wrong answer. Why is it returning 150+ results instead of 30?
Try this ``` SELECT DISTINCT bid FROM TABLE_A WHERE uid = 123 UNION SELECT DISTINCT bid FROM TABLE_B WHERE uid = 123 ``` **OR** ``` SELECT DISTINCT bid FROM (SELECT bid FROM TABLE_A WHERE uid = 123 UNION SELECT bid FROM TABLE_B WHERE uid = 123 ) AS A ```
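The deduplicating behaviour of `UNION` (as opposed to `UNION ALL`) is easy to confirm in SQLite with a few invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (bid INTEGER, uid INTEGER);
    CREATE TABLE table_b (bid INTEGER, uid INTEGER);
    INSERT INTO table_a VALUES (10, 123), (20, 123), (30, 123);
    INSERT INTO table_b VALUES (20, 123), (40, 123);  -- bid 20 appears in both
""")

# UNION removes duplicates across both branches; UNION ALL would keep them
bids = conn.execute("""
    SELECT bid FROM table_a WHERE uid = 123
    UNION
    SELECT bid FROM table_b WHERE uid = 123
    ORDER BY bid
""").fetchall()
```

The bid that exists in both tables is returned only once, which is the distinct combined list the question asks for, and also why the join-based attempt multiplied rows instead.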
``` SELECT ta.bid, tb.bid FROM table_a AS ta, table_b AS tb WHERE ta.uid = tb.uid AND ta.uid = 123 GROUP BY ta.bid, tb.bid ``` Second Method would be ``` SELECT ta.bid, tb.bid FROM table_a AS ta INNER JOIN table_b AS tb ON ( ta.uid = tb.uid ) AND ( ta.uid = 123 ) ```
Combine result of two tables
[ "", "mysql", "sql", "select", "distinct", "union", "" ]
I am trying to generate a random 9 digit number, for example `0987654321`; note that each digit appears only once. I tried this: ``` select convert(numeric(10,0),rand() * 8999999999) + 10000000 ``` It generates a number, but at some point digits repeat. The client needs to be sure that no digit appears twice in the sequence. The required output looks like this: ``` 1234567890 9870654321 1234098567 8976543120 ```
It seems you are trying to generate a 10 digit number. I made it a varchar because I don't want to suppress the first 0 when it applies: ``` declare @rnd varchar(10) = '' ;with a as ( select 0 x union all select x + 1 from a where x < 9 ), b as ( select top 10 x from a order by newid() ) select @rnd += cast(x as char(1)) from b select @rnd ``` You can also write it as a while loop: ``` DECLARE @rnd varchar(10) = '0123456789' DECLARE @i int = len(@rnd) ;while @i > 1 select @rnd = stuff(@rnd, rnd, 1, '') + substring(@rnd, rnd, 1), @i += -1 from (SELECT cast(rand(BINARY_CHECKSUM(newid()))*@i as int)+1 rnd) x select @rnd ```
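For comparison, doing the same thing in application code is a one-line shuffle; this Python sketch keeps the result as a string so a leading zero survives, mirroring the varchar choice above:

```python
import random

def random_digit_permutation() -> str:
    """Return all ten digits 0-9 in random order, each appearing exactly once."""
    digits = list("0123456789")
    random.shuffle(digits)
    return "".join(digits)

value = random_digit_permutation()
```

Because the result is a permutation rather than a random integer, the no-repeated-digit requirement is satisfied by construction instead of by rejection sampling.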
As you want to insert these values into a table and also check for duplicates, I would do the following: **1. Create a view to generate a NewId** ``` create view vw_getNewID as select newid() AS [NewId] ``` **2. Generate a TVF** ``` CREATE FUNCTION dbo.fn_GenerateRandomNumber() RETURNS @r TABLE ( RandomNumber varchar(10) ) AS BEGIN ;WITH numbers as ( SELECT 0 AS [number] UNION ALL SELECT number + 1 FROM numbers WHERE number < 9 ) INSERT INTO @r SELECT stuff( (SELECT ''+ number FROM numbers CROSS APPLY dbo.vw_getNewID vgni order by vgni.NewId FOR XML PATH(''),TYPE ).value('.','VARCHAR(MAX)') , 1, 1, '') RETURN END ``` **3. Query your new TVF to insert into your table.** ``` INSERT INTO dbo.YourTable (RandomNumber) SELECT grn.RandomNumber FROM dbo.fn_GenerateRandomNumber() grn LEFT JOIN dbo.YourTable t ON t.RandomNumber = grn.RandomNumber WHERE t.RandomNumber IS NULL ``` \*\* Note that I have converted the output to `varchar` to preserve the leading zero.
Generate a random 9 digit number
[ "", "sql", "sql-server", "" ]
I am trying to create a query that calculates credit and debit for each month. The values are in a column called SUM, and there is a field for each record that says whether it is credit or debit (ex\_type). I have managed to get the total sum of all the fields (although it is not correct either, because I am just summing all the values no matter whether they are credit or debit). ``` SELECT sum(sum) as total FROM acc where month='$month' ``` But I cannot figure out how to do it for Credit and Debit. So, to summarize... I want to have the following two queries combined into one. ``` 1. Select sum(sum) as Debit from acc where ex_type='Debit' and month='$month' 2. Select sum(sum) as Credit from acc where ex_type='Credit' and month='$month' ``` Any help is much appreciated.
Try this with `CASE` ``` Select sum(CASE WHEN ex_type='Debit' THEN `sum` ELSE 0 END) as Debit, sum(CASE WHEN ex_type='Credit' THEN `sum` ELSE 0 END) as Credit FROM ... ```
This should work: ``` Select sum(d.sum) as Debit, (Select sum(c.sum) from acc c where c.ex_type='Credit' and c.month='$month') as Credit from acc d where d.ex_type='Debit' and d.month='$month' ``` However if you supply more details on other fields one could inner join onto the same table and that may be slightly more efficient.
SQL SUM values where 2 conditions are met
[ "", "mysql", "sql", "sql-server", "select", "" ]
I'm working on a centralized monitoring system on Windows 2008 R2. I have installed PostgreSQL 9.3 to use **psql** from the command line. When I try to access some remote Postgres instance (an 8.4 in my main case) I get an encoding error: command: ``` psql.exe -h 192.168.114.12 -p 5432 -d db_seros_transaccion -U postgres -f script.sql ``` error: ``` psql: FATAL: la conversión entre WIN1252 y LATIN1 no está soportada ``` (i.e., the conversion between WIN1252 and LATIN1 is not supported). I tried adding the statement ``` SET client_encoding = 'UTF8'; ``` in my script, but the problem persists (with other encodings too, like LATIN1 & WIN1252). After googling I found people who update some rows on the server to make the connection work, and this is a problem for me. Can anyone help me make a connection using *psql* without an update? Is it possible?
Thanks a lot **Craig Ringer**, it works, it finally works! You are my new idol now! The steps are: 1. open the cmd 2. `SET PGCLIENTENCODING=utf-8` 3. `chcp 65001` 4. `psql -h your.ip.addr.ess -U postgres`
Windows 10 / Windows server 2016 or later: * Open `Windows Control Panel` * Select `Region (and Language)` * Click `Change system locale` * `Beta: Use Unicode UTF-8 for worldwide language support` * Click OK or ``` [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage] "ACP"="65001" "OEMCP"="65001" "MACCP"="65001" ``` The PowerShell console and CMD will display Cyrillic correctly
PostgreSQL: encoding problems on Windows when using psql command line utility
[ "", "sql", "postgresql", "command-line", "character-encoding", "psql", "" ]
I was trying to use sqlFetch. The fetch works perfectly when I change the name of my table to have underscores instead of periods. So if I use the command ``` sqlFetch(conn, "HelloWorld_40") ``` It works fine. Unfortunately, my friends are all using the real name of the table ``` sqlFetch(conn, "HelloWorld.40") ``` But then it crashes and it tells me that ``` Error in sqlColumns(conn, "HelloWorld.40") : 'HelloWorld.40': table not found on channel ``` I'm guessing the period "." is illegal in a table name. But I don't want my friends to change it because a lot of people would be affected. Is there a way I can reference the table, or do I have to secretly go to their database, change the name while I use it, and then change it back (risking that I will forget, someone will read it, etc.)? Thanks.
It is a problem with `sqlFetch`, which parses the table name. Unfortunately it does not handle quoted table names, so it searches for table `40` in schema `HelloWorld`. You need to call `sqlQuery` directly (with a quoted table name; brackets for MS SQL Server): ``` sqlQuery(dbhandle, "SELECT * FROM [HelloWorld.40]") ``` Side note: you should specify which database you are using.
put the table name in square brackets: ``` [HelloWorld.40] ```
table with "." in its name
[ "", "sql", "r", "rodbc", "" ]
Below is an update query which updates a table with about 40000 records: ``` UPDATE tableName SET colA = val, colB = val WHERE ID IN (select RecordIDs from tableB where needUpdate = 'Y') ``` When the above query is executed, I found that the query below takes ~15 seconds: ``` SELECT RecordIDs FROM tableB WHERE needUpdate = 'Y' ``` But when I take away the where clause (i.e. `update tableName set colA = val, colB = val`) the query runs smoothly. Why does this happen? Are there any ways to shorten the execution time? --- Edited: Below is the structure of both tables: ``` tableName: ID int, VehicleBrandID int, VehicleLicenseExpiryDate nvarchar(25), LicensePlateNo nvarchar(MAX), ContactPerson nvarchar(MAX), ContactPersonID nvarchar(MAX), ContactPersonPhoneNumber nvarchar(MAX), ContactPersonAddress nvarchar(MAX), CreatedDate nvarchar(MAX), CreatedBy nvarchar(MAX) PRIMARY KEY (ID) tableB: RowNumber int RecordIDs int NeedUpdate char(1) PRIMARY KEY (RowNumber) ``` --- Edited: Below screenshot is the execution plan for the update query ![enter image description here](https://i.stack.imgur.com/3GTnb.png)
The execution plan shows you are using table variables and are missing a useful index. Keep the existing PK on `@output` ``` DECLARE @output TABLE ( ID INT PRIMARY KEY, VehicleBrandID INT, VehicleLicenseExpiryDate NVARCHAR(25), LicensePlateNo NVARCHAR(MAX), ContactPerson NVARCHAR(MAX), ContactPersonID NVARCHAR(MAX), ContactPersonPhoneNumber NVARCHAR(MAX), ContactPersonAddress NVARCHAR(MAX), CreatedDate NVARCHAR(MAX), /*<-- Don't store dates as strings*/ CreatedBy NVARCHAR(MAX)) ``` And add a *new* index to `@tenancyEditable` ``` DECLARE @tenancyEditable TABLE ( RowNumber INT PRIMARY KEY, RecordIDs INT, NeedUpdate CHAR(1), UNIQUE (NeedUpdate, RecordIDs, RowNumber)) ``` With these indexes in place the following query ``` UPDATE @output SET LicensePlateNo = '' WHERE ID IN (SELECT RecordIDs FROM @tenancyEditable WHERE NeedUpdate = 'Y') OPTION (RECOMPILE) ``` Can generate the more efficient looking [![enter image description here](https://i.stack.imgur.com/0h25q.png)](https://i.stack.imgur.com/0h25q.png) Also you should use appropriate datatypes rather than storing everything as `NVARCHAR(MAX)`. A person name isn't going to need more than `nvarchar(100)` at most and `CreatedDate` should be stored as `date[time2]` for example.
I suppose you are in one of the 2 cases below: 1/ [STATISTICS are not updated](http://sqlhint.com/sqlserver/how-to/when-update-statistics-flag-2371) due to a recent modification of your table. In this case you should execute this: ``` UPDATE STATISTICS tableB ``` 2/ I suppose [a wrong query plan is used](http://sqlhint.com/sqlserver/sqlserver/how-to/ho-avoid-delete-or-invalidate-query-plan), in which case I recommend executing this in order to force recompilation of the query: ``` SELECT RecordIDs FROM tableB WHERE needUpdate = 'Y' OPTION (RECOMPILE) ``` Tell us the result and we'll follow up with more details.
SQL IN operator in update query causes a lot of time
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I'm trying to set up a trigger that will insert certain values from a table and specify other values with the `VALUES` command. Something like the code below: ``` INSERT INTO table_name(ID, Name, Email) SELECT userID, userName FROM users_table WHERE condition1 VALUES ('example@yahoo.com') ``` So as you can see, I'm trying to fetch the first 2 values for `ID` and `Name` from a table, and for `Email` I want to specify a value. Also, I have an auto-increment column (called `crt`) in `table_name`, if that is relevant. So how can I do this?
The `INSERT` command comes in two flavors:

**(1)** either you have all your values available, as literals or SQL Server variables - in that case, you can use the `INSERT .. VALUES()` approach:

```
INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN)
VALUES(Value1, Value2, @Variable3, @Variable4, ...., ValueN)
```

Note: I would recommend to **always** explicitly specify the list of columns to insert data into - that way, you won't have any nasty surprises if suddenly your table has an extra column, or if your table has an `IDENTITY` or computed column. Yes - it's a tiny bit more work - **once** - but then you have your `INSERT` statement as solid as it can be and you won't have to constantly fiddle around with it if your table changes.

**(2)** if you **don't** have all your values as literals and/or variables, but instead you want to rely on another table, multiple tables, or views, to provide the values, then you can use the `INSERT ... SELECT ...` approach:

```
INSERT INTO dbo.YourTable(Col1, Col2, ...., ColN)
SELECT SourceColumn1, SourceColumn2, @Variable3, @Variable4, ...., SourceColumnN
FROM dbo.YourProvidingTableOrView
```

Here, you must define exactly as many items in the `SELECT` as your `INSERT` expects - and those can be columns from the table(s) (or view(s)), or those can be literals or variables. Again: explicitly provide the list of columns to insert into - see above.

You can use **one or the other** - but you **cannot** mix the two - you cannot use `SELECT` and have a `VALUES(...)` clause in the middle of it - pick one of the two - stick with it.

So in your concrete case, just use:

```
INSERT INTO dbo.table_name (ID, Name, Email)
SELECT userID, userName, 'example@yahoo.com'
FROM users_table
WHERE condition1
```
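To make the two flavors concrete, here is a minimal runnable sketch using an in-memory SQLite database via Python. The table names follow the question, but the data and the `c@x.com` literal are invented for illustration; T-SQL details such as the `dbo.` schema prefix are omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users_table (userID INTEGER, userName TEXT)")
cur.execute("CREATE TABLE table_name (ID INTEGER, Name TEXT, Email TEXT)")
cur.executemany("INSERT INTO users_table (userID, userName) VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])

# Flavor 1: all values are literals/variables -> INSERT ... VALUES
cur.execute("INSERT INTO table_name (ID, Name, Email) VALUES (3, 'Carol', 'c@x.com')")

# Flavor 2: values come from another table; the fixed email is just one more
# item in the SELECT list -- no VALUES clause is mixed in.
cur.execute("""
    INSERT INTO table_name (ID, Name, Email)
    SELECT userID, userName, 'example@yahoo.com'
    FROM users_table
""")

rows = cur.execute("SELECT ID, Name, Email FROM table_name ORDER BY ID").fetchall()
```

After both inserts, `rows` holds the two copied users with the fixed email plus the literal row.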
Something like this:

```
INSERT INTO table_name (ID, Name, Email)
SELECT userID, userName, 'example@yahoo.com' AS Email
FROM users_table
WHERE ..condition1..
```
SQL Server : is there a way to insert into a table values using both SELECT and VALUES at the same time?
[ "", "sql", "sql-server", "select", "insert", "" ]
I have the following table:

![enter image description here](https://i.stack.imgur.com/TBmRB.png)

I want to pivot the month column and show the count of rows falling into each group (>10, <=06, etc.). That is, I want to show how many counts there are for >10 in Jan, Jun, and so on, as follows:

```
Gruppi Min (GG Flusso/Decorrenza-->Out)   Jan  Feb  Mar  Apr  May  Jun
>10                                        5    2    1    0    8    2
<=06                                       2    8    1    7    9   33
```

**I tried as follows:**

```
SELECT *
FROM (SELECT CONVERT(CHAR(3), [data out (no val#vuoto)], 0) month,
             ( [gruppi min (gg flusso/decorrenza-->out)] )
      FROM   dbpratiche
      WHERE  compagnia = 'GENERALI INA ASSITALIA'
             AND stato = 'OUT ATTIVATA'
             AND [data out (no val#vuoto)] > '01-01-2012') TLB
PIVOT ( Count([gruppi min (gg flusso/decorrenza-->out)] )
        FOR tlb.month IN([jan], [feb], [mar], [apr], [may], [jun]) )PVT
```

but it gives me the following result:

![enter image description here](https://i.stack.imgur.com/BSeXf.png)

It is not grouping by >10, <=06, etc.; it shows the overall count instead. Where am I making a mistake? Please help me.
This worked:

```
SELECT *
FROM   (SELECT 1 CntCol,
               [Gruppi Min (GG Flusso/Decorrenza-->Out)],
               [month]
        FROM   #X) TLB
PIVOT ( COUNT (CntCol)
        FOR TLB.[month] IN ( [jan],[feb],[mar],[apr],[may],[jun] ) ) AS pvt
```

You need to add a column that should be counted. Look at the complex example here: <http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx>

Change your code to this:

```
SELECT *
FROM (SELECT 1 CntCol,
             CONVERT(CHAR(3), [data out (no val#vuoto)], 0) month,
             ( [gruppi min (gg flusso/decorrenza-->out)] )
      FROM   dbpratiche
      WHERE  compagnia = 'GENERALI INA ASSITALIA'
             AND stato = 'OUT ATTIVATA'
             AND [data out (no val#vuoto)] > '01-01-2012') TLB
PIVOT ( Count(CntCol)
        FOR tlb.month IN([jan], [feb], [mar], [apr], [may], [jun]) )PVT
```
You can add an auto-increment field to your dbPratiche table (for example, `id`), and then try this code:

```
select *
from (
    select [id], [month], [gruppi]
    from dbPratiche
) as s
pivot (
    COUNT ([id])
    for month in (jun, sep, jul, may, aug)
) as PVT
```
Pivoting a column and showing data according to count
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have an SQL table with this structure:

```
Stud Name     Score
---------------------
Alex          7
John          5.6
Tom           8.2
Antony        6
George        9
Mathew        7
Jim           5.5
```

I need to find the top three students based on score, so my query is:

```
select top 3 * from studtable order by score desc
```

But since there is a tie of score for Alex and Mathew, we need to get both of them in the final result. So how can I handle this situation when there is a tie?
You can try using a subquery:

```
select *
from studtable
where score in (select distinct top(3) score
                from studtable
                order by score desc)
order by score desc
```
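The same idea can be checked quickly with SQLite via Python, using the question's sample data. SQLite has no `TOP`, so `ORDER BY ... LIMIT 3` inside the subquery plays its role here (this is a sketch of the technique, not SQL Server syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE studtable (name TEXT, score REAL)")
cur.executemany("INSERT INTO studtable VALUES (?, ?)",
                [("Alex", 7), ("John", 5.6), ("Tom", 8.2), ("Antony", 6),
                 ("George", 9), ("Mathew", 7), ("Jim", 5.5)])

# Keep every row whose score is among the 3 highest DISTINCT scores,
# so a tie on the 3rd score returns both students.
rows = cur.execute("""
    SELECT name, score
    FROM studtable
    WHERE score IN (SELECT DISTINCT score FROM studtable
                    ORDER BY score DESC LIMIT 3)
    ORDER BY score DESC
""").fetchall()
```

With the sample data this returns four rows: George, Tom, and both Alex and Mathew (tied on 7).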
Try this:

```
select top 3 with ties *
from studtable
order by score desc
```

This'll get you the top 3 including ties.

# [See it in Action](http://www.sqlfiddle.com/#!6/d41d8/13597)
Top 3 rows based on score, but to handle ties
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have 4 tables: Foo, Bar, Charlie, & Delta. Tables Charlie, Delta, & Bar all have a foreign key column containing the primary key for a Foo. The foreign key columns have the unique constraint, so no more than 1 of each can be linked to a given Foo. Furthermore, due to how the domain was modeled, there should never be both a Charlie and a Delta linked to the same Foo.

I have a Bar, and I would like to know if there is a Charlie or a Delta linked to the associated Foo, and if not then I need some other data from the associated Foo. Currently I'm doing this as 1-3 queries:

1. Get the primary key for the Charlie linked to the Foo linked to the Bar, if there is one
2. If there isn't a Charlie, get the primary key for the Delta linked to the Foo linked to the Bar, if there is one
3. If there isn't a Delta, get some other columns from the Foo linked to the Bar

For historical reasons, we don't use stored procedures; each query is built in the code (we're using C#, if that's relevant for some reason) and its results are checked there before potentially running the next query.

For Oracle & for MS SQL Server 2008 (we support both), is it faster to run these 3 queries separately or to combine them into a single query somehow? If it's faster to combine them, how would I do so?

Edit: Think of each table as having two columns. Foo has columns PRIMARY_KEY and OTHER_DATA, while the other three tables have columns PRIMARY_KEY and FOO, where FOO is a foreign key containing the primary key for a row in Foo. The 3 queries basically look like this:

1. `SELECT C.PRIMARY_KEY FROM Bar B, Charlie C WHERE B.FOO = C.FOO` (returns 0 or 1 rows)
2. `SELECT D.PRIMARY_KEY FROM Bar B, Delta D WHERE B.FOO = D.FOO` (returns 0 or 1 rows)
3. `SELECT F.OTHER_DATA FROM Bar B, Foo F WHERE B.FOO = F.PRIMARY_KEY` (returns 1 row)
Yes, it could *potentially* be faster to combine them. This is because you have a chance to avoid going back to Bar multiple times. The long answer, of course, is that it always depends on your indexing and hardware setup (and everything else :) ). So it is essential that you actually test the old way against any new way and look for a significant improvement.

Since you say all of these are 1-1 (or 0-1) relationships, what I'm seeing is that you are really making one, extended record for each Foo record. There is nothing stopping you from writing

```
select foo.* -- of course, specific columns are better
      ,bar.* -- of course, specific columns are better
      ,c.* -- of course, specific columns are better
      ,d.* -- of course, specific columns are better
from foo
inner join bar on foo.pk = bar.fooId
left join charlie c on bar.fooId = c.fooId
left join delta d on bar.fooId = d.fooId
```

I know SQL Server is capable of reaching out to Bar only one time in this case, saving processing and potentially disk I/O. And because you are using the same join key for everything, that makes me even more confident because there is no issue of re-sorting the data for the different joins. The database "engine" should be able to pipe them into each other very well.

Multiple queries should be a performance mistake because Bar is read again and again. It's very likely the same argument applies to Oracle in such a basic operation, but I'm not an expert there.
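A quick way to see the shape of the combined result is to run the joins against a tiny in-memory SQLite database via Python. The schema and data below are invented to mirror the question's minimal Foo/Bar/Charlie/Delta layout; `NULL` in the `charlie_pk`/`delta_pk` columns tells you which of the three original queries applies:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE foo (pk INTEGER PRIMARY KEY, other_data TEXT);
CREATE TABLE bar (pk INTEGER PRIMARY KEY, fooId INTEGER UNIQUE);
CREATE TABLE charlie (pk INTEGER PRIMARY KEY, fooId INTEGER UNIQUE);
CREATE TABLE delta (pk INTEGER PRIMARY KEY, fooId INTEGER UNIQUE);
INSERT INTO foo VALUES (10, 'ten'), (20, 'twenty');
INSERT INTO bar VALUES (1, 10), (2, 20);
INSERT INTO charlie VALUES (1, 10);  -- foo 10 has a charlie; foo 20 has neither
""")

# One round trip instead of 1-3: the outer joins fill in NULL where
# no Charlie/Delta exists, and other_data is always available.
rows = cur.execute("""
    SELECT f.pk, f.other_data, c.pk AS charlie_pk, d.pk AS delta_pk
    FROM foo f
    JOIN bar b ON f.pk = b.fooId
    LEFT JOIN charlie c ON b.fooId = c.fooId
    LEFT JOIN delta d ON b.fooId = d.fooId
    ORDER BY f.pk
""").fetchall()
```

Foo 10 comes back with its Charlie key, and Foo 20 comes back with two `NULL`s plus its `other_data`, covering all three cases in a single query.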
Since you're coding in C#, you can use Linq to SQL: ``` var query = from f in Foo // inner equijoin: join b in Bar on f.PRIMARY_KEY equals b.FOO // left outer join: join tc in Charlie on f.PRIMARY_KEY equals tc.FOO into gc from c in gc.DefaultIfEmpty() // left outer join: join td in Delta on f.PRIMARY_KEY equals td.FOO into gd from d in gd.DefaultIfEmpty() // anonymous object for result set: select new { Key = f.PRIMARY_KEY, Data = f.OTHER_DATA, HasCharlie = c == null, HasDelta = d == null }; // get first row with charlie or delta var resultRow = query.FirstOrDefault(row => row.HasCharlie || row.HasDelta); if (resultRow == null) { // get first row, regardless of charlie or delta var otherResultRow = query.FirstOrDefault(); } ``` Building `query` could also be accomplished with a chain of method calls: `Foo.Join(Bar,...).GroupJoin(Charlie,...).SelectMany(...)` etc. If you want to "bake in" the query results, you can assign `query.ToList()` to a variable, and perform future actions on that variable without hitting the database again, using Linq to Objects. Alternatively, if your future actions (such as `FirstOrDefault` in my above code) are being used on `query`, they'll reflect any changes to the database which occur during the execution of your code.
Should I combine these queries, and if so, how?
[ "", "sql", "query-optimization", "" ]
I am still a beginner in SQL and I'm facing an issue; I hope you can help me.

I have a table called Department where it has an attribute DEPARTMENT_NO as its primary key:

```
CREATE TABLE DEPARTMENT(
DEPARTMENT_NO INT NOT NULL,
NAME VARCHAR(25) NOT NULL,
LOCATION CHAR(15),
PRIMARY KEY(DEPARTMENT_NO));
```

I have another table called Doctor where it has an attribute DNUM as a foreign key referring to DEPARTMENT_NO:

```
CREATE TABLE DOCTOR(
DOCTOR_ID CHAR(9) NOT NULL,
DNUM INT NOT NULL,
NAME VARCHAR(20) NOT NULL,
DOB DATE,
SPECIALTY VARCHAR(20) NOT NULL,
SALARY INT,
CITY VARCHAR(15),
STREET VARCHAR(15),
START_DATE DATE,
PRIMARY KEY(DOCTOR_ID),
FOREIGN KEY(DNUM) REFERENCES DEPARTMENT(DEPARTMENT_NO));
```

A doctor can be working in one or two departments. So, if I have a doctor working in departments 1 and 4, the values for DNUM will include 1 and 4. I initially chose the data type of DNUM to be INT (same as the DEPARTMENT_NO data type), but INT is not ideal for multiple values. What should the data type be? Or what other solution do I have if, for example, I run a query for returning the names of the doctors working in department 4? The query should return all the names of doctors working in department 4 (only) and the ones who work in multiple departments (including 4).

Thanks very much in advance and sorry for the long message.
The standard way to represent a "many to many" relationship is via a "junction" (aka "link") table:

```
CREATE TABLE DOCTOR_DEPARTMENT (
    DOCTOR_ID INT REFERENCES DOCTOR(DOCTOR_ID),
    DEPARTMENT_NO INT REFERENCES DEPARTMENT (DEPARTMENT_NO),
    PRIMARY KEY (DOCTOR_ID, DEPARTMENT_NO)
);
```

Note the key on {DOCTOR_ID, DEPARTMENT_NO}, which ensures the same doctor cannot be connected to the same department twice. It also implicitly creates a composite (aka "concatenated") index on these fields in that order, which makes it very quick to find the departments of a given doctor (via an index range scan). If you need to query in the opposite "direction" (for doctors of the given department), flip the order of the fields. If you need both queries, then you'll need both indexes (i.e. you'll need to create one index explicitly). Consider adding an [ORGANIZATION INDEX](http://use-the-index-luke.com/sql/clustering/index-organized-clustered-index) clause if you need just one of these indexes.
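Here is a small sketch of the junction-table pattern using an in-memory SQLite database via Python. Doctor names and department numbers are made up; the composite primary key both answers "who works in department 4?" with a simple join and rejects a duplicate doctor/department link:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE doctor (doctor_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE department (department_no INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE doctor_department (
    doctor_id INTEGER REFERENCES doctor(doctor_id),
    department_no INTEGER REFERENCES department(department_no),
    PRIMARY KEY (doctor_id, department_no)
);
INSERT INTO doctor VALUES (1, 'Dr. A'), (2, 'Dr. B');
INSERT INTO department VALUES (1, 'Cardiology'), (4, 'Surgery');
INSERT INTO doctor_department VALUES (1, 1), (1, 4), (2, 4);
""")

# All doctors in department 4, whether or not they also work elsewhere:
rows = cur.execute("""
    SELECT d.name
    FROM doctor d
    JOIN doctor_department dd ON dd.doctor_id = d.doctor_id
    WHERE dd.department_no = 4
    ORDER BY d.name
""").fetchall()

# The composite key rejects linking the same doctor to the same department twice:
try:
    cur.execute("INSERT INTO doctor_department VALUES (1, 4)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Dr. A (who works in departments 1 and 4) and Dr. B (department 4 only) both come back, and the duplicate insert fails as intended.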
You need an additional table called doctor_department:

```
create table doctor_department
(doctor_id integer references doctor(doctor_id) not null,
 dnum integer references department(department_no) not null
)
```
Foreign key referring to more than one primary key values(from one table) - Oracle SQL PLUS
[ "", "sql", "database", "oracle", "" ]
This is the only error I get. Please help. Thank you in advance.

After I successfully created a stored procedure in SQL Server, I tried to execute it with the following (example only):

```
exec [dbo].ExportResourceTime 'ABC111', '', 3, 1, '2013-10-01', '2013-10-30', 1, 21
```

Then I get the following error. See the attached photo. This is the actual error in SSMS. Line 96 is at the gray cursor, on top of `if @OrgUnit <> ''`.

![enter image description here](https://i.stack.imgur.com/CELYE.png)

This is the independent script which I think the error comes from.

![enter image description here](https://i.stack.imgur.com/tU7vJ.png)

> Msg 213, Level 16, State 1, Procedure ExportResourceTime, Line 96
> Insert Error: Column name or number of supplied values does not match table definition.

Code:

```
create procedure [dbo].ExportResourceTime
@ResourceID nvarchar(30),
@OrgUnit nvarchar(15),
@TimeDetail int,
@ExpenseDetail int,
@FromDate Datetime,
@ToDate Datetime,
@IncludeID int,
@TimeTypeGroup int
--1 = No Time Type Group
--2 = Group by Time Type
as
BEGIN
create table #ItemisedTimeandMaterials
(
IDNo int,
OrderBy1 varchar(60),
ItemDate datetime,--MOD005
RevenueTypeCode varchar(24),
TimeType varchar(24),
ProjectCode varchar(20),
taskUID int,
OutlineNum varchar(60),
taskname varchar(60),
activitycode varchar(24),
ActivityDesc varchar(60),
ResourceID nvarchar(24),
OrganizationID nvarchar(15),
EffectiveDate datetime,
firstname varchar(60),
lastname varchar(60),
ExpenseTypeCode varchar(24),
ExpenseTypeDesc varchar(60),
Hours decimal(8,2),
Rate decimal(8,2),
Total decimal(20,8),
Descr varchar(256), --MOD005 DM Added col for relevant detail for Expenses
TimeTypeCode nvarchar(10)
)
--GW: move this bit to the top--DONE
create table #Resources
(
ResourceID nvarchar(30),
OrganizationID nvarchar(15),
EffectiveDate datetime
)
if @ResourceID <> ''
begin
insert into #Resources (ResourceID,OrganizationID,EffectiveDate)
select ro.ResourceID, ro.OrganizationID,
ro.EffectiveDate from ResourceOrganization ro, (select ResourceID, MAX(EffectiveDate) as maxEffectivedate from dbo.ResourceOrganization **where ResourceID = @ResourceID** group by ResourceID) as maxresults where ro.ResourceID = maxresults.ResourceID and ro.EffectiveDate = maxresults.maxEffectivedate end if @OrgUnit <> '' begin insert into #Resources (ResourceID,OrganizationID,EffectiveDate) Select ResourceID,OrganizationID,EffectiveDate from ResourceOrganization where OrganizationID like '' + @OrgUnit + '%' end -- get actual time - REGULAR insert into #ItemisedTimeandMaterials select Case when @IncludeID = 1 then b.timeID else '' end, --mod 07 e.lastname + e.firstname, case when @TimeDetail = 2 then g.enddate else (case when @TimeDetail = 3 then b.TimeEntryDate else null end) end,--MOD005 'FEES', 'Regular', b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, '','', -- expense sum(isnull(b.StandardHours,0)), -- MOD003 - added in isnull's 0,--h.StandardAmt,--b.NegotiatedChargeRate, --MOD005 Change to NegotiatedChargeRate from StandardChargeRate 0,--sum(isnull(b.StandardHours,0)* IsNull(h.standardAmt,0)),--sum(bd.BilledAmt),--MOD005 Change from BillableAmt feild (was incorrect for adjustments) case when @TimeDetail = 3 then b.invoicecomment else '' end,--MOD005 case when @TimeTypeGroup = 2 then b.TimeTypeCode else '' end--MOD008 from time b join activity c on b.activitycode = c.activitycode join resource e on b.resourceID = e.resourceID join project p on b.ProjectCode=p.ProjectCode and p.RevisionStatusCode='A' join task f on b.projectcode = f.projectcode and b.taskuid =f.taskuid and f.revisionnum = p.RevisionNum join SCWeekEnding g on b.TimeEntryDate between g.StartDate and g.EndDate join #Resources RES on b.ResourceID = RES.ResourceID --left join ratesetresource h on h.resourceid = b.resourceid where --b.projectcode = @PROJECTCODE and b.statuscode in ('A','V','T') and 
b.TimeEntryDate >= @FromDate and b.TimeEntryDate <= @ToDate and Isnull(b.StandardHours,0) <> 0 and b.resourceid in(Select ResourceId from #Resources) group by b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, case when @TimeDetail = 2 then g.enddate else (case when @TimeDetail = 3 then b.TimeEntryDate else null end) end,--MOD005 case when @TimeDetail = 3 then b.invoicecomment else '' end, Case when @IncludeID = 1 then b.timeID else '' end, --mod 07 case when @TimeTypeGroup = 2 then b.TimeTypeCode else '' end--MOD008 having sum(isnull(b.StandardHours,0)) <> 0 -- get actual time - OVERTIME insert into #ItemisedTimeandMaterials select Case when @IncludeID = 1 then b.timeID else '' end, --mod 07 e.lastname + e.firstname, case when @TimeDetail = 2 then g.enddate else (case when @TimeDetail = 3 then b.TimeEntryDate else null end) end,--MOD005 'FEES', 'Overtime', --GW: need projectcode here--DONE b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID as OrgUnit, e.firstname, e.lastname, '','', -- expense sum(isnull(b.OvertimeHours,0)), -- MOD003 - added in isnull's 0, 0, case when @TimeDetail = 3 then b.invoicecomment else '' end, --MOD005 case when @TimeTypeGroup = 2 then b.TimeTypeCode else '' end--MOD008 from time b join activity c on b.activitycode = c.activitycode join resource e on b.resourceID = e.resourceID join project p on b.ProjectCode=p.ProjectCode and p.RevisionStatusCode='A' join task f on b.projectcode = f.projectcode and b.taskuid =f.taskuid and f.revisionnum = p.RevisionNum join SCWeekEnding g on b.TimeEntryDate between g.StartDate and g.EndDate join #Resources RES on b.ResourceID = RES.ResourceID where b.statuscode in ('A','V','T') --and f.revisionnum = @latestapprovedrevision and b.TimeEntryDate >= @FromDate and b.TimeEntryDate <= @ToDate and Isnull(b.OvertimeHours,0) <> 0 and b.resourceid in (select 
resourceid from #Resources) group by --GW: projectcode here--DONE b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, case when @TimeDetail = 2 then g.enddate else (case when @TimeDetail = 3 then b.TimeEntryDate else null end) end,--MOD005 case when @TimeDetail = 3 then b.invoicecomment else '' end, case when @IncludeID = 1 then b.TimeID else '' end ,--mod 07 case when @TimeTypeGroup = 2 then b.TimeTypeCode else '' end--MOD008 having sum(isnull(b.Overtimehours,0)) <> 0 /**************** SECTION 7: OUTPUT DATA TO CRYSTAL REPORT ******************/ -- return data to the Crystal report only if taskuid are in table 1 select * from #ItemisedTimeandMaterials order by taskUID drop table #Resources drop table #ItemisedTimeandMaterials END GO ```
There are 23 columns in your `#ItemisedTimeandMaterials` temporary table, while you are inserting only 22 values from the select list.

When inserting into a table with a large number of columns with computed values from the SELECT list, you can keep track of columns by giving each computed column an alias matching the target temporary table column. In this way you can easily compare the table columns with the columns in the select list and find the missing column.

The missing column is EffectiveDate; add it to the select list at the correct position:

```
Select
..
..
b.resourceID,
RES.OrganizationID,
RES.EffectiveDate, --- Missing column
e.firstname,
e.lastname,
...
..
```
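The same class of error (which SQL Server reports as Msg 213) can be reproduced in miniature with SQLite via Python. The three-column table below is invented; supplying two values for three columns fails, and naming the target columns alongside a complete select list fixes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b TEXT, c TEXT)")

# 3 columns in the table, only 2 values supplied -> column-count mismatch.
try:
    cur.execute("INSERT INTO t SELECT 1, 'x'")
    mismatch_error = None
except sqlite3.OperationalError as e:
    mismatch_error = str(e)

# Supplying all 3 values (and naming the columns) succeeds:
cur.execute("INSERT INTO t (a, b, c) SELECT 1, 'x', 'y'")
rows = cur.execute("SELECT * FROM t").fetchall()
```

Listing the target columns explicitly, as the answer suggests, makes it easy to line the select list up against the table definition.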
Try this:

```
if @ResourceID <> ''
begin
    insert into #Resources (ResourceID,OrganizationID,EffectiveDate)
    Select ro.ResourceID,
           ro.OrganizationID,
           ro.EffectiveDate
    from ResourceOrganization ro
        inner join (select ResourceID,
                           MAX(EffectiveDate) as maxEffectivedate
                    from dbo.ResourceOrganization
                    where ResourceID = @ResourceID
                    group by ResourceID) as maxresults
            on ro.ResourceID = maxresults.ResourceID
               and ro.EffectiveDate = maxresults.maxEffectivedate
end
```

EDIT: try executing it like this:

```
EXEC dbo.ExportResourceTime @ResourceID = N'ABC111', @OrgUnit = N'',
    @TimeDetail = 3, @ExpenseDetail = 1,
    @FromDate = '2013-10-01 00:00:00', @ToDate = '2013-10-30 00:00:00',
    @IncludeID = 1, @TimeTypeGroup = 21
```

I don't have your database, so it's hard to spot the fault from the other side of the world. I have somewhat formatted your script, but I can't see anything wrong. I also can't run the script, because of my above-mentioned problem.

```
CREATE PROCEDURE [dbo].ExportResourceTime
    @ResourceID NVARCHAR(30),
    @OrgUnit NVARCHAR(15),
    @TimeDetail INT,
    @ExpenseDetail INT,
    @FromDate DATETIME,
    @ToDate DATETIME,
    @IncludeID INT,
    @TimeTypeGroup INT --1 = No Time Type Group --2 = Group by Time Type
AS
BEGIN
    CREATE TABLE #ItemisedTimeandMaterials
    (
        IDNo INT,
        OrderBy1 VARCHAR(60),
        ItemDate DATETIME,--MOD005
        RevenueTypeCode VARCHAR(24),
        TimeType VARCHAR(24),
        ProjectCode VARCHAR(20),
        taskUID INT,
        OutlineNum VARCHAR(60),
        taskname VARCHAR(60),
        activitycode VARCHAR(24),
        ActivityDesc VARCHAR(60),
        ResourceID NVARCHAR(24),
        OrganizationID NVARCHAR(15),
        EffectiveDate DATETIME,
        firstname VARCHAR(60),
        lastname VARCHAR(60),
        ExpenseTypeCode VARCHAR(24),
        ExpenseTypeDesc VARCHAR(60),
        Hours DECIMAL(8, 2),
        Rate DECIMAL(8, 2),
        Total DECIMAL(20, 8),
        Descr VARCHAR(256), --MOD005 DM Added col for relevant detail for Expenses
        TimeTypeCode NVARCHAR(10)
    )
    --GW: move this bit to the top--DONE
    CREATE TABLE #Resources
    (
        ResourceID NVARCHAR(30),
        OrganizationID NVARCHAR(15),
        EffectiveDate DATETIME
    )
    IF @ResourceID <>
'' BEGIN INSERT INTO #Resources (ResourceID, OrganizationID, EffectiveDate) SELECT ro.ResourceID, ro.OrganizationID, ro.EffectiveDate FROM ResourceOrganization ro, (SELECT ResourceID, MAX(EffectiveDate) AS maxEffectivedate FROM dbo.ResourceOrganization WHERE ResourceID = @ResourceID GROUP BY ResourceID) AS maxresults WHERE ro.ResourceID = maxresults.ResourceID AND ro.EffectiveDate = maxresults.maxEffectivedate END IF @OrgUnit <> '' BEGIN INSERT INTO #Resources (ResourceID, OrganizationID, EffectiveDate) SELECT ResourceID, OrganizationID, EffectiveDate FROM ResourceOrganization WHERE OrganizationID LIKE '' + @OrgUnit + '%' END -- get actual time - REGULAR INSERT INTO #ItemisedTimeandMaterials SELECT CASE WHEN @IncludeID = 1 THEN b.timeID ELSE '' END, --mod 07 e.lastname + e.firstname, CASE WHEN @TimeDetail = 2 THEN g.enddate ELSE (CASE WHEN @TimeDetail = 3 THEN b.TimeEntryDate ELSE NULL END) END,--MOD005 'FEES', 'Regular', b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, '', '', -- expense SUM(ISNULL(b.StandardHours, 0)), -- MOD003 - added in isnull's 0,--h.StandardAmt,--b.NegotiatedChargeRate, --MOD005 Change to NegotiatedChargeRate from StandardChargeRate 0,--sum(isnull(b.StandardHours,0)* IsNull(h.standardAmt,0)),--sum(bd.BilledAmt),--MOD005 Change from BillableAmt feild (was incorrect for adjustments) CASE WHEN @TimeDetail = 3 THEN b.invoicecomment ELSE '' END,--MOD005 CASE WHEN @TimeTypeGroup = 2 THEN b.TimeTypeCode ELSE '' END--MOD008 FROM time b JOIN activity c ON b.activitycode = c.activitycode JOIN resource e ON b.resourceID = e.resourceID JOIN project p ON b.ProjectCode = p.ProjectCode AND p.RevisionStatusCode = 'A' JOIN task f ON b.projectcode = f.projectcode AND b.taskuid = f.taskuid AND f.revisionnum = p.RevisionNum JOIN SCWeekEnding g ON b.TimeEntryDate BETWEEN g.StartDate AND g.EndDate JOIN #Resources RES ON b.ResourceID = RES.ResourceID --left join 
ratesetresource h on h.resourceid = b.resourceid WHERE --b.projectcode = @PROJECTCODE and b.statuscode IN ('A', 'V', 'T') AND b.TimeEntryDate >= @FromDate AND b.TimeEntryDate <= @ToDate AND ISNULL(b.StandardHours, 0) <> 0 AND b.resourceid IN (SELECT ResourceId FROM #Resources) GROUP BY b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, CASE WHEN @TimeDetail = 2 THEN g.enddate ELSE (CASE WHEN @TimeDetail = 3 THEN b.TimeEntryDate ELSE NULL END) END,--MOD005 CASE WHEN @TimeDetail = 3 THEN b.invoicecomment ELSE '' END, CASE WHEN @IncludeID = 1 THEN b.timeID ELSE '' END, --mod 07 CASE WHEN @TimeTypeGroup = 2 THEN b.TimeTypeCode ELSE '' END--MOD008 HAVING SUM(ISNULL(b.StandardHours, 0)) <> 0 -- get actual time - OVERTIME INSERT INTO #ItemisedTimeandMaterials SELECT CASE WHEN @IncludeID = 1 THEN b.timeID ELSE '' END, --mod 07 e.lastname + e.firstname, CASE WHEN @TimeDetail = 2 THEN g.enddate ELSE (CASE WHEN @TimeDetail = 3 THEN b.TimeEntryDate ELSE NULL END) END,--MOD005 'FEES', 'Overtime', --GW: need projectcode here--DONE b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID AS OrgUnit, e.firstname, e.lastname, '', '', -- expense SUM(ISNULL(b.OvertimeHours, 0)), -- MOD003 - added in isnull's 0, 0, CASE WHEN @TimeDetail = 3 THEN b.invoicecomment ELSE '' END, --MOD005 CASE WHEN @TimeTypeGroup = 2 THEN b.TimeTypeCode ELSE '' END--MOD008 FROM time b JOIN activity c ON b.activitycode = c.activitycode JOIN resource e ON b.resourceID = e.resourceID JOIN project p ON b.ProjectCode = p.ProjectCode AND p.RevisionStatusCode = 'A' JOIN task f ON b.projectcode = f.projectcode AND b.taskuid = f.taskuid AND f.revisionnum = p.RevisionNum JOIN SCWeekEnding g ON b.TimeEntryDate BETWEEN g.StartDate AND g.EndDate JOIN #Resources RES ON b.ResourceID = RES.ResourceID WHERE b.statuscode IN ('A', 'V', 'T') --and f.revisionnum = 
@latestapprovedrevision AND b.TimeEntryDate >= @FromDate AND b.TimeEntryDate <= @ToDate AND ISNULL(b.OvertimeHours, 0) <> 0 AND b.resourceid IN (SELECT resourceid FROM #Resources) GROUP BY --GW: projectcode here--DONE b.projectcode, b.taskuid, f.outlinenum, f.taskname, b.ActivityCode, c.ActivityDesc, b.resourceID, RES.OrganizationID, e.firstname, e.lastname, CASE WHEN @TimeDetail = 2 THEN g.enddate ELSE (CASE WHEN @TimeDetail = 3 THEN b.TimeEntryDate ELSE NULL END) END,--MOD005 CASE WHEN @TimeDetail = 3 THEN b.invoicecomment ELSE '' END, CASE WHEN @IncludeID = 1 THEN b.TimeID ELSE '' END,--mod 07 CASE WHEN @TimeTypeGroup = 2 THEN b.TimeTypeCode ELSE '' END--MOD008 HAVING SUM(ISNULL(b.Overtimehours, 0)) <> 0 /**************** SECTION 7: OUTPUT DATA TO CRYSTAL REPORT ******************/ -- return data to the Crystal report only if taskuid are in table 1 SELECT * FROM #ItemisedTimeandMaterials ORDER BY taskUID DROP TABLE #Resources DROP TABLE #ItemisedTimeandMaterials END GO ```
Stored Procedure Insert Error: Column name or number of supplied values does not match table definition
[ "", "sql", "sql-server", "" ]
I have a database table which stores ids of employees and the previous projects they have been working on. Now I want to retrieve pairs of employees that have been working on the same projects, and the number of common projects between these two employees. If I do the "self-join" approach then I get duplicate rows.

```
SELECT DISTINCT ep1.employee_id, ep2.employee_id, COUNT(p.id)
FROM employee_project ep1, employee_project ep2, project p
WHERE ep1.project_id = ep2.project_id
  AND ep1.employee_id <> ep2.employee_id
  AND p.id = ep1.project_id
GROUP BY ep1.employee_id, ep2.employee_id, p.id
```

Result:

```
employee1 | employee2 | 5
employee2 | employee1 | 5
```
add `ep1.employee_id >= ep2.employee_id` to the where clause.
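A strict inequality keeps exactly one ordering of each pair. Here is a sketch with SQLite via Python on invented data (three employees, two projects), using `<` as the deduplicating condition; `>` works the same way in the other direction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee_project (employee_id TEXT, project_id INTEGER)")
cur.executemany("INSERT INTO employee_project VALUES (?, ?)",
                [("e1", 1), ("e2", 1), ("e1", 2), ("e2", 2), ("e3", 2)])

rows = cur.execute("""
    SELECT ep1.employee_id, ep2.employee_id, COUNT(*) AS shared
    FROM employee_project ep1
    JOIN employee_project ep2
      ON ep1.project_id = ep2.project_id
     AND ep1.employee_id < ep2.employee_id  -- keeps one ordering of each pair
    GROUP BY ep1.employee_id, ep2.employee_id
    ORDER BY ep1.employee_id, ep2.employee_id
""").fetchall()
```

Each pair appears once: (e1, e2) share two projects, while (e1, e3) and (e2, e3) share one each, with no mirrored duplicates.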
I would create a new table with a unique index over the columns that you want to keep unique. Then do an insert from the old table into the new, ignoring the warnings about duplicated rows. Lastly, I would drop (or rename) the old table and replace it with the new table.

In MySQL, this would look like:

```
CREATE TABLE tmp LIKE mytable;
ALTER TABLE tmp ADD UNIQUE INDEX myindex (emp_name, emp_address, sex, marital_status);
INSERT IGNORE INTO tmp SELECT * FROM mytable;
DROP TABLE mytable;
RENAME TABLE tmp TO mytable;
```
How-to get rid of duplicates in SQL query
[ "", "sql", "" ]
From the table below

```
date     description                              amount
29/12/13 <13363000054123>JIT BAHADUR LAMICHHANE   CR 10,000.00 TBI 29/12/13 29/12/13
29/12/13 <13363740800138>MAN BAHADUR .            CR 1,19,595.00 TBI 29/12/13 29/12/13
29/12/13 <555349302906>CHANDRA PRASAD DAHAL       CR 24,054.30 TBI 29/12/13 29/12/13
29/12/13 <13362144250203>BISHNU GURUNG DHAN       CR 1,30,562.00 TBI 29/12/13 29/12/13
```

I need the records as below:

```
date     description       amount
29/12/13 <13363000054123>  CR 10,000.00
29/12/13 <13363740800138>  CR 1,19,595.00
```

I tried SUBSTRING, but the string length is not fixed in the 'amount' column. What is the best way?
Try the combination of CHARINDEX and SUBSTRING, assuming the amount has the format 'xxx.xx':

```
SELECT date,
       LEFT(description, Charindex ('>', description)),
       LEFT(amount, Charindex ('.', amount)) + Substring(amount, Charindex ('.', amount)+1, 2)
FROM   tbl
```
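The same find-the-delimiter idea can be sketched with SQLite via Python, where `instr` and `substr` play the roles of `CHARINDEX` and `SUBSTRING`. The sample strings come from the question; cutting the description at `>` and the amount two characters past the `.` yields the wanted columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

desc = "<13363000054123>JIT BAHADUR LAMICHHANE"
amt = "CR 10,000.00 TBI 29/12/13 29/12/13"

# Keep everything up to and including '>' in the description,
# and everything up to two digits after the decimal point in the amount.
row = cur.execute(
    "SELECT substr(?1, 1, instr(?1, '>')), "
    "       substr(?2, 1, instr(?2, '.') + 2)",
    (desc, amt)).fetchone()
```

This returns `<13363000054123>` and `CR 10,000.00`, matching the desired output regardless of how long the amount string is.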
This produces the information you have stipulated:

```
Declare @S varchar(50)
Declare @T varchar(50)

Select @S = '<13363000054123>JIT BAHADUR LAMICHHANE',
       @T = 'CR 10,000.00 TBI 29/12/13 29/12/13'

Select SUBSTRING(@S, 1, PATINDEX('%>%', @S)),
       SUBSTRING(@T, 1, PATINDEX('% TBI%', @T))
```

Use this in your query:

```
Select date,
       SUBSTRING(description, 1, PATINDEX('%>%', description)),
       SUBSTRING(amount, 1, PATINDEX('% TBI%', amount))
```
substring in sql server 2008
[ "", "sql", "sql-server", "t-sql", "" ]
I'm learning databases and I have a question. When I run the following query to give me a 12-month average salary:

```
SELECT `EmployeeNo`, (`Salary`/12) as AverageSalary
FROM Employee
```

the salary that it returns is like 7787.000992213. How can I round the value?
If you want the number truncated, do this, using `TRUNCATE`:

```
SELECT `EmployeeNo`, TRUNCATE(`Salary`/12, 0) as AverageSalary
FROM Employee
```

If you want it rounded to the nearest integer, do this, using `ROUND`:

```
SELECT `EmployeeNo`, ROUND(`Salary`/12) as AverageSalary
FROM Employee
```
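The round-versus-truncate distinction can be sketched with SQLite via Python. SQLite has no `TRUNCATE()` function, so `CAST(... AS INTEGER)` stands in for truncation to zero decimal places here; the salary value is invented so that dividing by 12 gives a long fraction like the one in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employee (EmployeeNo INTEGER, Salary REAL)")
cur.execute("INSERT INTO Employee VALUES (1, 93444.0119)")  # /12 ~= 7787.00099...

# round() rounds to the nearest value; CAST(... AS INTEGER) truncates toward zero.
rounded, truncated = cur.execute(
    "SELECT round(Salary / 12), CAST(Salary / 12 AS INTEGER) FROM Employee"
).fetchone()
```

For this value both approaches land on 7787; they differ once the fractional part reaches .5 or more.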
[ROUND(X), ROUND(X,D)](http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_round) - rounds the argument X to D decimal places.

For your example:

```
SELECT `EmployeeNo`, ROUND(`Salary`/12) as AverageSalary
FROM Employee
```
how to Round a value in SQL?
[ "", "mysql", "sql", "database", "phpmyadmin", "" ]
Please help, I'm trying to pivot a SQL Server table like this:

![enter image description here](https://i.stack.imgur.com/mEVuq.png)

to get something like this:

![enter image description here](https://i.stack.imgur.com/8beeR.png)

I've tried to do that in Excel but it didn't work for me, as it's duplicating attributes etc. So, is there a query to pivot the table to get that?
SQL Server's `pivot` only works if you pivot on one column. Since you're pivoting on `(Item Id, Lot Number)`, you have to do it the old-fashioned way:

```
select  [Item Id]
,       [Lot Number]
,       sum(case when Attribute = 'PPA' then Value end) as PPA
,       sum(case when Attribute = 'PU0' then Value end) as PU0
,       ...
from    YourTable
group by
        [Item Id]
,       [Lot Number]
```
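This conditional-aggregation pattern is portable; here is a sketch with SQLite via Python on invented data (two items, two attributes). Groups that have no row for an attribute come back as `NULL`, just as in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (item_id INTEGER, lot TEXT, attribute TEXT, value REAL)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, "L1", "PPA", 10.0),
    (1, "L1", "PU0", 2.5),
    (2, "L2", "PPA", 7.0),
])

# One output column per attribute, built with SUM(CASE ...):
rows = cur.execute("""
    SELECT item_id, lot,
           SUM(CASE WHEN attribute = 'PPA' THEN value END) AS PPA,
           SUM(CASE WHEN attribute = 'PU0' THEN value END) AS PU0
    FROM t
    GROUP BY item_id, lot
    ORDER BY item_id
""").fetchall()
```

Item 1 gets both attribute columns filled in, while item 2 gets `NULL` for the missing PU0 attribute.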
See @Andomar's answer for writing a SQL query to do the pivot. But if you just want to use Excel it should work fine putting ItemId and Lot Number in rows, put Attribute in column section and Value in data section. You'll need to remove all the subtotal rows that Excel likes to add, which you do from the Properties of each field (if I remember correctly). If that doesn't work, can you post an image of what happens in Excel that you don't like?
Pivot table with specific logic query
[ "", "sql", "sql-server", "excel", "sql-server-2008", "pivot-table", "" ]
I want a list of all students who have marks > 0, in ascending order, followed by all students with 0 marks. I made a query for this, but it does not give the correct result:

```
(SELECT * FROM tbl_student WHERE marks > 0)
UNION
(SELECT * FROM tbl_student WHERE marks = 0)
```

After the union, the rows are merged and the ordering is lost. Kindly suggest what I should do for this.
Try this: `SELECT * FROM tbl_student ORDER BY CASE WHEN marks = 0 THEN 9999 ELSE marks END ASC`
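A quick way to sanity-check the chosen `ORDER BY CASE` trick — here run against an in-memory SQLite database as a stand-in for MySQL; the table and column names come from the question, the sample rows are made up:

```python
import sqlite3

# In-memory stand-in for the MySQL table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_student (name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO tbl_student VALUES (?, ?)",
                 [("a", 5), ("b", 0), ("c", 2), ("d", 0), ("e", 9)])

# Non-zero marks first, ascending; zero-mark rows pushed to the end.
rows = conn.execute("""
    SELECT name, marks FROM tbl_student
    ORDER BY CASE WHEN marks = 0 THEN 9999 ELSE marks END ASC
""").fetchall()
print(rows)
```

The relative order of the zero-mark rows among themselves is unspecified, which is usually fine for this requirement.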
Try this ``` SELECT * FROM tbl_student ORDER BY CASE WHEN marks = 0 THEN 9999 ELSE marks END ``` Instead of 9999 you can give your maximum number
Sql query to get result in asc order but 0 values should be last after all data
[ "", "sql", "" ]
I need to combine both select statements into one select ``` SELECT count(tbl_employer_post_details.employer_id) pending FROM tbl_employer_post_details, tbl_employer_registration WHERE job_status=0 AND tbl_employer_registration.employer_id = tbl_employer_post_details.employer_id LIMIT start,max; ``` And the second query, with the only difference being the `WHERE job_status=1`: ``` SELECT count(tbl_employer_post_details.employer_id) approved FROM tbl_employer_post_details, tbl_employer_registration WHERE job_status=1 AND tbl_employer_registration.employer_id = tbl_employer_post_details.employer_id LIMIT start,max; ```
Try this: ``` SELECT SUM(job_status = 0) pending, SUM(job_status = 1) approved FROM tbl_employer_post_details epd INNER JOIN tbl_employer_registration er ON epd.employer_id = er.employer_id WHERE job_status IN (0, 1); ```
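The conditional-SUM idea can be sketched in isolation (SQLite stand-in, the registration join omitted for brevity — in both engines a comparison like `job_status = 0` evaluates to 0 or 1, so summing it counts matching rows; the table and rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (employer_id INTEGER, job_status INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, 0), (1, 1), (2, 0), (2, 0), (3, 1)])

# SUM of a boolean expression counts the rows where it is true.
pending, approved = conn.execute("""
    SELECT SUM(job_status = 0) AS pending,
           SUM(job_status = 1) AS approved
    FROM posts
    WHERE job_status IN (0, 1)
""").fetchone()
print(pending, approved)  # 3 2
```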
Try it this way ``` SELECT SUM(job_status = 0) pending, SUM(job_status = 1) approved FROM tbl_employer_post_details d JOIN tbl_employer_registration r ON r.employer_id = d.employer_id WHERE job_status IN (0, 1) ```
Count same id column with different condition in same select statement
[ "", "mysql", "sql", "sql-server", "select", "count", "" ]
How do I change my column's default value from None to something else? For example, I want my dates to have a default value of 0000-00-00 if I don't specify one when I create the row. I understand this in phpMyAdmin, but I'm not sure how to do it via command prompt. I also understand how to do this when adding a column. But all of my columns are made and have data in some of them. ``` ALTER TABLE table1 ADD COLUMN foo INT DEFAULT 0; ``` From searching, I found this line, but I'm not sure if that's what I want? ``` ALTER TABLE foobar_data MODIFY COLUMN col VARCHAR(255) NOT NULL DEFAULT '{}'; ```
Use **ALTER TABLE** to **CHANGE** or **MODIFY** the **DEFAULT** value of column. Check this link [**ALTER TABLE SYNTAX**](http://dev.mysql.com/doc/refman/5.5/en/alter-table.html) ``` ALTER TABLE `tableName` CHANGE `columnName` `columnName` DATE DEFAULT '0000-00-00'; ALTER TABLE `tableName` MODIFY `columnName` DATE DEFAULT '0000-00-00'; ```
try this ``` ALTER TABLE foobar_data CHANGE COLUMN col col VARCHAR(255) NOT NULL DEFAULT '{}'; ``` use CHANGE to alter an existing column (note that CHANGE takes the old column name followed by the new one), see [Link](http://dev.mysql.com/doc/refman/5.1/en/alter-table.html)
MySQL - How to modify column default value?
[ "", "mysql", "sql", "default", "alter", "" ]
I am passing my parameter as 'Suburbun','Indigo' to retrieve records matching both Campaigns in below Stored Procedure created in MySql. ``` CREATE PROCEDURE `DemoSP`(Campaign VARCHAR(3000)) BEGIN SET @query = CONCAT('Select * from vicidial_log WHERE campaign_id IN (?)'); PREPARE stmt FROM @query; SET @CampaignID = Campaign; EXECUTE stmt USING @CampaignID; DEALLOCATE PREPARE stmt; END; ``` It Doesn't give any rows! But when i pass only 'Suburbun' in SP, it gives 6 Rows! Where am i going wrong? --Answer ! I tried as Lee Fentress commented in <http://www.poolofthought.com/index.php/2008/12/28/a-comma-seperated-list-as-parameter-to-mysql-stored-procedure/> and peterm answer reflected similar coding, It worked! Thanks, but i find this negative mark as compared to SQL Server. Gee, Thank you Guys!!
You won't be able to use `USING` in this case. You can just build the full query sting and execute it without parameters ``` DELIMITER $$ CREATE PROCEDURE DemoSP(Campaign VARCHAR(3000)) BEGIN SET @query = CONCAT('SELECT * FROM vicidial_log WHERE campaign_id IN (', Campaign, ')'); PREPARE stmt FROM @query; EXECUTE stmt; DEALLOCATE PREPARE stmt; END$$ DELIMITER ; ``` **Note:** make sure that delimited values that you pass in `Campaign` are properly quoted (like you said they are) and quotes in values, if there is any, are escaped. Here is **[SQLFiddle](http://sqlfiddle.com/#!2/01f6a/1)** demo
Try this: There is no need to use `PREPARE STATEMENT`. You can get the result using `FIND_IN_SET()` function ``` SELECT * FROM vicidial_log WHERE FIND_IN_SET(campaign_id, Campaign) ```
IN Clause dont work in MySQL for me
[ "", "mysql", "sql", "select", "in-clause", "find-in-set", "" ]
I have two tables in different databases (WAMP server) with the same structure. I want to copy from the first database (`newkt`) to the second one (`oldkt`) all rows that do not exist in the second database (`oldkt`). ``` newkt -> table : users (1500 records) (id, name, password) oldkt -> table : users (1200 records) (id, name, password) ``` I want to actually add rows to the `oldkt` database whose id doesn’t exist in `oldkt` yet. Also if I have more than 3 columns, can these be added automatically or I do have to tag all of them?
You can do like the following: ``` insert into database1.table select * from database2.table where id not in(select id from database1.table); ```
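A minimal sketch of the `INSERT ... SELECT ... WHERE id NOT IN (...)` pattern — two tables in one SQLite database stand in for the two MySQL databases; with real MySQL you would qualify the names as `newkt.users` and `oldkt.users`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE new_users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE old_users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO new_users VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])
conn.executemany("INSERT INTO old_users VALUES (?, ?)", [(1, "a"), (2, "b")])

# Copy only the rows whose id is not present in the target yet.
# SELECT * carries every column automatically, so nothing has to be listed.
conn.execute("""
    INSERT INTO old_users
    SELECT * FROM new_users
    WHERE id NOT IN (SELECT id FROM old_users)
""")
copied = conn.execute("SELECT id, name FROM old_users ORDER BY id").fetchall()
print(copied)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```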
As said you should be able to perform a typical insert but by specifying the database name in the query: ``` SELECT * INTO TargetDatabase.dbo.TargetTable FROM SourceDatabase.dbo.SourceTable ```
How to copy rows from one MySQL database to another
[ "", "mysql", "sql", "" ]
So I have 6 Tables. ``` Company (1..1) --- (0..n) District District (1..1) --- (0..n) City City (1..1) --- (0..n) Employee ``` So a lot of 1-to-n relations to start off. Then I have 1 n..n relation implemented like this: ``` Employee (1..1) --- (0..n) EmpLang EmpLang (0..n) --- (1..1) Lang ``` Now I select the employees like this: ``` SELECT Employee.Name, Employee.lastName, Employee.PhoneNumber FROM Company, District, Employee, EmpLang, Lang WHERE Company.id = District.cid AND District.id = City.did AND City.id = Employee.id AND Company.name LIKE '{1}%' AND District.name LIKE '{2}%' AND City.name LIKE '{3}%' ``` `{1}, {2}, {3}`: Some random variables (not important) My problem is I have another variable ( `{4}` ) which is the language filter. * If it's empty I just want this query like above to be executed. (Or produce the same results at least.) * If it has something in it: + I want to filter by looking for relationships in the last table (lang), --- ``` ... AND City.name LIKE '{3}%' AND Employee.id = EmpLang.eid AND EmpLang.lid = Lang.id AND Lang.desc LIKE '{4}%' ``` * Also I don't want it to display the results twice if the person knows 2 languages. So if the person knows English and Eduardian and the filter is 'e%' I only want to display the result once. I would like to do this purely in MySQL.
Approach it with a sub query. In your where statement, you can do where employee.id in (select...) — the select statement here will select all employee_id's that match that language. ``` where ... and employeeid in (select employeeID from employee left join EmpLang on Employee.id = EmpLang.eid left join Lang on EmpLang.lid = Lang.id where Lang.desc LIKE '{4}%') ``` (I changed syntax on you, you are putting the joins in the where clause...it's older SQL, the join method I wrote here is easier to follow.) To handle the empty language, wrap an OR around the statement. ``` and (employeeid in (select...) or 1 = case when {4} is null then 1 else 0 end) ```
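A runnable sketch of the optional-language-filter subquery (SQLite stand-in; simplified schema with made-up rows — `descr` is used instead of `desc`, which is a reserved word). `IN` naturally de-duplicates, so an employee with two matching languages appears only once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE lang (id INTEGER PRIMARY KEY, descr TEXT);
    CREATE TABLE emplang (eid INTEGER, lid INTEGER);
    INSERT INTO employee VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO lang VALUES (1, 'English'), (2, 'Eduardian'), (3, 'French');
    INSERT INTO emplang VALUES (1, 1), (1, 2), (2, 3);
""")

def find_employees(lang_filter):
    # Empty filter -> no language restriction at all.
    if lang_filter:
        sql = """SELECT name FROM employee WHERE id IN (
                     SELECT el.eid FROM emplang el
                     JOIN lang l ON el.lid = l.id
                     WHERE l.descr LIKE ? || '%')"""
        return [r[0] for r in conn.execute(sql, (lang_filter,))]
    return [r[0] for r in conn.execute("SELECT name FROM employee")]

print(find_employees(""))   # ['Ann', 'Bob']
print(find_employees("e"))  # ['Ann'] -- once, despite two matching languages
```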
I think in this instance you may want to look at doing a join. You can learn more about joins if you are not familiar with them at this link: <http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html> EDIT: I can't post the code now (at work) but if nobody else has responded by the time I get home I'll take a crack at it.
Sql issue with selecting data
[ "", "mysql", "sql", "" ]
I am migrating my database from SQL Server to MySQL. When I come to the getDate() function in SQL Server, the replacement in MySQL is Now(), but Now() does not return the exact result and format that getDate() returns. Is there any solution to tweak the Now() result to make it the same as getDate()? ``` getDate() result : 2014-01-07 00:28:46.017 Now() result : 2014-01-07 10:28:03 ```
I overcame the problem of the missing fractional seconds by using ``` sysdate(3) -- returned '2014-01-07 10:52:07.252' ``` Thanks for everyone's suggestions.
> The difference between the two functions is the milliseconds. > Try converting > the getdate() to exclude milliseconds.
Now() vs GetDate()
[ "", "mysql", "sql", "sql-server", "" ]
I have a Table Abstract, which has a column SerialNumber. It has data as below. ``` 1 1.1 1.1.1 1.1.2 1.2 .. .. 10 10.1 10.2 ``` Now, my requirement is to sort the data based on this column as first preference. A maximum of 2 "dots" are possible in a SerialNumber, so `1.2.3.4` is not possible. The maximum number may be `999` at any level of the sequence, i.e. `999.999.999` is the maximum possible sequence. I tried issuing an `ORDER BY SerialNumber`, and it comes out like ``` 1 10 10.1 .. 2 2.1 ``` Just because of the character sort, `10` comes right after `1` instead of after `2`. Any idea how this can be achieved? As I need this in JDBC and in multiple queries (different modules), I am hoping to keep this as generic as possible.
Function to extract every number using `'.'` as delimiter and lpad with 0s. And call the function() in `order by` ``` CREATE OR REPLACE FUNCTION FORMAT_MY_SERIAL( ORIG_SERIAL VARCHAR2) RETURN VARCHAR2 AS FINAL_SERIAL VARCHAR2(15) := ''; SERIAL VARCHAR2(15); BEGIN SERIAL := ORIG_SERIAL; WHILE (INSTR(SERIAL,'.') <> 0) LOOP FINAL_SERIAL := TO_CHAR(SUBSTR(SERIAL,INSTR(SERIAL,'.',-1)+1),'FM099')||'.'||FINAL_SERIAL; SERIAL := SUBSTR(SERIAL,1,INSTR(SERIAL,'.',-1)-1); END LOOP; FINAL_SERIAL := TRIM(BOTH '.' FROM TO_CHAR(SERIAL,'FM099')||'.'||FINAL_SERIAL); RETURN FINAL_SERIAL; END FORMAT_MY_SERIAL; / ``` And this is an **example**: ``` WITH MY_TABLE AS ( SELECT '1.1.1' AS SerialNumber FROM dual UNION ALL SELECT '10' FROM dual UNION ALL SELECT '1' FROM dual UNION ALL SELECT '1.2' FROM dual UNION ALL SELECT '2.1' FROM dual UNION ALL SELECT '1.10.1' FROM dual UNION ALL SELECT '2.1' FROM dual ) SELECT SerialNumber, FORMAT_MY_SERIAL(SerialNumber) as formatted FROM MY_TABLE ORDER BY FORMAT_MY_SERIAL(SerialNumber); ``` **Result:** ``` SERIAL FORMATTED 1 001 1.1.1 001.001.001 1.2 001.002 1.10.1 001.010.001 2.1 002.001 2.1 002.001 10 010 ```
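The ordering the PL/SQL function produces can be cross-checked outside Oracle by splitting each serial on '.' and comparing the parts as integers — a small Python sketch using the example data from the answer:

```python
def serial_key(s):
    # '1.10.1' -> (1, 10, 1); shorter serials compare first, mirroring the
    # zero-padded string the PL/SQL function builds.
    return tuple(int(part) for part in s.split("."))

serials = ["1.1.1", "10", "1", "1.2", "2.1", "1.10.1"]
ordered = sorted(serials, key=serial_key)
print(ordered)  # ['1', '1.1.1', '1.2', '1.10.1', '2.1', '10']
```

This matches the sorted order shown in the answer's example result.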
I would probably use a regex function to split each part for ordering. Something like: ``` select serialnumber from data order by to_number(regexp_substr(serialnumber, '[[:digit:]]+')), to_number(regexp_substr(serialnumber, '[[:digit:]]+', 1, 2)) nulls first, to_number(regexp_substr(serialnumber, '[[:digit:]]+', 1, 3)) nulls first ``` Which will give you results like: ``` SERIALNUMBER ------------------------------- 1.100 1.100.10 34.134.819 36 75.717 256.749.864 397 428.13.647 443 713.768 855.238 ```
Custom sort with series of numbers(sequences) using oracle sql
[ "", "sql", "oracle", "" ]
The problem is: I have two tables (column names are in brackets): Cars (CarColorId, CarName), CarColor (CarColorId, CarColorName). The task is to UPDATE Cars.CarName by appending the string "_updated", but only if CarColor.CarColorName = 'red'. I have no idea how to do this without joins. I have tried this way: ``` UPDATE Cars set CarName = concat (CarName, '_updated') WHERE CarColorId = 1; ``` CarColorId = 1 = red; This request works, but the task is to use both tables.
You can try any one of this in Oracle **Normal Update** ``` UPDATE CARS SET CARS.CARNAME = CONCAT ( CARS.CARNAME, '_updated' ) WHERE EXISTS (SELECT CARCOLOR.CARCOLORID FROM CARCOLOR WHERE CARS.CARCOLORID = CARCOLOR.CARCOLORID AND CARCOLOR.CARCOLORNAME = 'RED'); ``` **Using Inline View (If it is considered updateable by Oracle)** **Note**: If you face a non key preserved row error add an index to resolve the same to make it update-able ``` UPDATE (SELECT CARS.CARNAME AS OLD, CONCAT ( CARS.CARNAME, '_updated' ) AS NEW FROM CARS INNER JOIN CARCOLOR ON CARS.CARCOLORID = CARCOLOR.CARCOLORID WHERE CARCOLOR.CARCOLORNAME = 'RED') T SET T.OLD = T.NEW; ``` **Using Merge** ``` MERGE INTO CARS USING (SELECT CARS.ROWID AS RID FROM CARS INNER JOIN CARCOLOR ON CARS.CARCOLORID = CARCOLOR.CARCOLORID WHERE CARCOLOR.CARCOLORNAME = 'RED') ON ( ROWID = RID ) WHEN MATCHED THEN UPDATE SET CARS.CARNAME = CONCAT ( CARS.CARNAME, '_updated' ); ```
You can modify your query like this: ``` UPDATE Cars set CarName = concat (CarName, '_updated') WHERE CarColorId in ( select CarColorId from CarColor where CarColorName='red' ) ; ```
Oracle database. How to update selected columns.
[ "", "sql", "oracle", "" ]
I have a mysql table with some weird id's like this: ``` ╔═══╦════════════╦═════════════╦═══════════╦═════════════╦═══════════╗ β•‘ β•‘ id β•‘ user_id β•‘ hours_a β•‘ hours_b β•‘ hours_c β•‘ ╠═══╬════════════╬═════════════╬═══════════╬═════════════╬═══════════╣ β•‘ 1 β•‘ 010120149 β•‘ 9 β•‘ 10 β•‘ 6 β•‘ 23 β•‘ β•‘ 2 β•‘ 0212201310 β•‘ 10 β•‘ 2 β•‘ 8 β•‘ 10 β•‘ β•‘ 3 β•‘ 021220138 β•‘ 8 β•‘ 1 β•‘ 4 β•‘ 9 β•‘ β•‘ 4 β•‘ 020120149 β•‘ 9 β•‘ 3 β•‘ 8 β•‘ 10 β•‘ β•šβ•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β• ``` I am trying to parse the total hours for user id 9, for the month January and year 2014. As you can see from the table, that is the first and last row. For example, `01 01 2014 9` is the first row's ID of which represents **DD/MM/YYYY/ID**. I want to be able to retrieve all hours (hours\_a, hours\_b & hours\_c **separately**) for **user\_id = 9** where **day = 01 - 31**(loop through all days?), **month = 01** and **year = 2014**. 
To be exported in something like this: ``` {"userid":"9","month":"01","year":"2014","total_hours_a":"13","total_hours_b":"14","total_hours_c":"33"} ``` The ID's represent Day, Month, Year and Userid as such: ![enter image description here](https://i.stack.imgur.com/m69in.png) --- ## Currently what I'm doing Currently I'm selecting all of the table: ``` $query="SELECT * FROM `weird_table` WHERE `id` LIKE 9"; ``` Printing it to encoded json: ``` $result = mysql_query($query); $temp = 0; $json = array(); while ($row = mysql_fetch_array($result)) { $json[$temp]['id'] = $row['id']; $json[$temp]['userid'] = $row['userid']; $json[$temp]['hours_a'] = $row['hours_a']; $json[$temp]['hours_b'] = $row['hours_b']; $json[$temp]['hours_c'] = $row['hours_c']; $temp++; } print json_encode($json); mysql_close(); ``` and then client side (as I am a front-end developer), I mix & match & sum it up and get the results I want. Instead of doing all this hassle and giving the user so much CPU pain, I would like the server to do the work and print exactly what I am looking for.
Try this: ``` SELECT *, (hours_a + hours_b + hours_c) totalHours FROM weird_table WHERE user_id = 9 AND STR_TO_DATE(SUBSTRING(id, 1, 8), '%d%m%Y') BETWEEN '2014-01-01' AND '2014-01-31' ``` **OR** ``` SELECT *, (hours_a + hours_b + hours_c) totalHours FROM weird_table WHERE user_id = 9 AND SUBSTRING(id, 3, 6) = '012014' ``` **OR** ``` SELECT *, (hours_a + hours_b + hours_c) totalHours FROM weird_table WHERE user_id = 9 AND YEAR(STR_TO_DATE(SUBSTRING(id, 1, 8), '%d%m%Y')) = 2014 AND MONTH(STR_TO_DATE(SUBSTRING(id, 1, 8), '%d%m%Y')) = 1 ```
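The second variant (`SUBSTRING(id, 3, 6) = '012014'`) can be checked end-to-end with a SQLite stand-in (`SUBSTR` instead of `SUBSTRING`), here also summing the hours server-side as the asker ultimately wants; the rows are the question's sample data:

```python
import sqlite3

# SUBSTR(id, 3, 6) pulls the MMYYYY part out of the DDMMYYYYID-style key.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE weird_table
                (id TEXT, user_id INTEGER, hours_a INTEGER,
                 hours_b INTEGER, hours_c INTEGER)""")
conn.executemany("INSERT INTO weird_table VALUES (?, ?, ?, ?, ?)",
                 [("010120149", 9, 9, 10, 6),
                  ("0212201310", 10, 10, 2, 8),
                  ("021220138", 8, 8, 1, 4),
                  ("020120149", 9, 9, 3, 8)])

row = conn.execute("""
    SELECT SUM(hours_a), SUM(hours_b), SUM(hours_c)
    FROM weird_table
    WHERE user_id = 9 AND SUBSTR(id, 3, 6) = '012014'
""").fetchone()
print(row)  # (18, 13, 14)
```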
You can try this ``` select userid,SUBSTRING(id, 3, 2) ,SUBSTRING(id, 5, 4),hours_a,hours_b, hours_c FROM weird_table WHERE userid = 9 AND SUBSTRING('010120149', 3, 6) = '012014'; ``` and the retrived rows hours value can be simply added up.
PHP: Parse all data from mysql where the 3rd and 4th digit of the id
[ "", "mysql", "sql", "datetime", "select", "substring", "" ]
I have a table which shows me my users' downloads reports. The table looks like this: ``` ╔═══╦════════════╦═════════════╗ β•‘ β•‘ url β•‘ user β•‘ ╠═══╬════════════╬═════════════╣ β•‘ 1 β•‘ Bla β•‘ 1 β•‘ β•‘ 2 β•‘ Bla Bla β•‘ 1 β•‘ β•‘ 3 β•‘ Bla Bla Blaβ•‘ 1 β•‘ β•‘ 4 β•‘ Bla2 β•‘ 2 β•‘ β•šβ•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` If I want to select the user that downloaded the url `Bla`, I just do: ``` SELECT `user` FROM `links` WHERE `url` = 'Bla' ``` But I want to select the user that downloaded `Bla` **and** downloaded `Bla Bla` too. How can I do that? Thank you, and sorry for my English.
You can use a `WHERE` clause with a combination of `GROUP BY` and `HAVING` to get the result: ``` select user from yourtable where url in ('Bla', 'Bla Bla') group by user having count(distinct url) = 2; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!2/5c218/2)
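A quick check of the `GROUP BY ... HAVING COUNT(DISTINCT url)` approach against the question's sample data (SQLite stand-in for MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (url TEXT, user INTEGER)")
conn.executemany("INSERT INTO links VALUES (?, ?)",
                 [("Bla", 1), ("Bla Bla", 1), ("Bla Bla Bla", 1), ("Bla2", 2)])

# Only users who downloaded BOTH urls survive the HAVING filter.
users = [r[0] for r in conn.execute("""
    SELECT user FROM links
    WHERE url IN ('Bla', 'Bla Bla')
    GROUP BY user
    HAVING COUNT(DISTINCT url) = 2
""")]
print(users)  # [1]
```

`COUNT(DISTINCT ...)` matters if the same user can download the same url twice — a plain `COUNT(*)` would over-count in that case.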
You can use a self join ``` select u.user from links u join links u1 on(u.`user`=u1.user) where u1.url ='Bla' and u.url= 'Bla Bla' ``` [**Fiddle**](http://sqlfiddle.com/#!2/daa6a6/1)
SQL Condition With Multiple Rows
[ "", "mysql", "sql", "select", "" ]
I am trying to run a query in SQL Server 2008 and it's running very slowly when I change the value of one of the variables (`SourceID`). The below code works fine when the `SourceID` is set to another available ID but on the one in the code it just hangs... for hours if I let it! The `email`, `dupe` and `MyMystery` columns are all indexed... any thoughts? ``` WITH rmdup as ( SELECT act.Email , act.FirstName , act.LastName , act.SourceID SID , ac.ID CID , ROW_NUMBER() OVER (PARTITION BY act.Email ORDER BY act.Email DESC,act.dateadded DESC) RN from a_customer_test act inner join a_customer ac on act.Email = ac.email and act.sourceID = ac.sourceID where act.sourceID in (409) and dupe = 0 and mymystrey = 0 and act.Email not in (select cemail as email from a_unsub union select email as email from a_unsubscribe) ) select REPLACE(Email, ',', '.') as Email , FirstName , LastName , SID , CID from rmdup where RN=1 ORDER BY Email DESC ``` By the way, I can't run the "Display Estimated Execution Plan" as I don't have permissions and get the following error... story of my life!! > Msg 262, Level 14, State 4, Line 1 > SHOWPLAN permission denied in database
I suspect [statistics](http://sqlhint.com/sqlserver/how-to/when-update-statistics-flag-2371) for a\_customer\_test table are not up-to-date. Execute this in order to update them: ``` UPDATE STATISTICS a_customer_test; ```
The best thing to do would be to removed the where clause from the CTE and place it on the ON clause (Only do this with INNER JOINs not LEFT JOIN). The reason is when a Join is made, the conditions on the ON CLAUSE are made at the time of the join and eliminates a lot of the useless data while the data is being put on disk or in memory. When the conditions are on the WHERE Clause the conditions are eliminated AFTER all the data has been writing to disk or memory. The Where clause makes more work. I would also change the NOT IN clause into another CTE and try to find a way to use something other then NOT IN. NOT IN is not as inclusive as IN. The comparison for NOT IN is not as natural to SQL as IN. ``` WITH rmdup as ( SELECT act.Email , act.FirstName , act.LastName , act.SourceID SID , ac.ID CID , ROW_NUMBER() OVER (PARTITION BY act.Email ORDER BY act.Email DESC,act.dateadded DESC) RN from a_customer_test act inner join a_customer ac on act.Email = ac.email AND act.sourceID = ac.sourceID AND act.sourceID in (409) and dupe = 0 and mymystrey = 0 and act.Email not in (select cemail as email from a_unsub union select email as email from a_unsubscribe) ) select REPLACE(Email, ',', '.') as Email , FirstName , LastName , SID , CID from rmdup where RN=1 ORDER BY Email DESC ```
SQL query running very slow
[ "", "sql", "sql-server", "performance", "sql-server-2008", "t-sql", "" ]
I have a table with a column of type `xml`. I'd like to wrap the result set in a parent tag. For example: Here is the table result set: ``` SELECT * FROM MyColorTable ``` Result : ``` <Color>Red</Color> <Color>Orange</Color> <Color>Yellow</Color> <Color>Green</Color> <Color>Blue</Color> <Color>Indigo</Color> <Color>Violet</Color> ``` I would like a query that will set `@MyXml` as below: ``` DECLARE @MyXml xml SELECT @MyXml ``` Desired output: ``` <Colors> <Color>Red</Color> <Color>Orange</Color> <Color>Yellow</Color> <Color>Green</Color> <Color>Blue</Color> <Color>Indigo</Color> <Color>Violet</Color> </Colors> ``` Thanks
1) **Updated solution:** ``` DECLARE @MyTable TABLE (Col1 XML); INSERT @MyTable VALUES (N'<Color>Red</Color>'); INSERT @MyTable VALUES (N'<Color>Orange</Color>'); INSERT @MyTable VALUES (N'<Color>Yellow</Color>'); INSERT @MyTable VALUES (N'<Color>Blue</Color>'); INSERT @MyTable VALUES (N'<Color>Green</Color>'); INSERT @MyTable VALUES (N'<Color>Indigo</Color>'); INSERT @MyTable VALUES (N'<Color>Violet</Color>'); SELECT t.Col1 AS '*' FROM @MyTable t FOR XML PATH(''), ROOT('Colors'); ``` Output: ``` <Colors> <Color>Red</Color> <Color>Orange</Color> <Color>Yellow</Color> <Color>Blue</Color> <Color>Green</Color> <Color>Indigo</Color> <Color>Violet</Color> </Colors> ``` 2) If you want to change just the content of `@MyXML` variable then one solution is to use [`query` method](http://technet.microsoft.com/en-us/library/ms191474.aspx) thus: ``` DECLARE @MyXml XML; SET @MyXML = N' <Color>Red</Color> <Color>Orange</Color> <Color>Yellow</Color> <Color>Blue</Color> <Color>Green</Color> <Color>Indigo</Color> <Color>Violet</Color>'; SELECT @MyXml.query('<Colors>{.}</Colors>'); ```
SQL Server has good built-in support for working with XML. There is no need to store an `XML tag` in your column. You should remove the unnecessary `<Color>` tag in order to optimize the work with XML and reduce the column storage - it is easy to generate it when it is needed, like this: ``` SELECT [color] FROM DataSource FOR XML PATH('Color'), ROOT('Colors') ``` If you are not allowed to do this, you can remove the tags with SQL like this: ``` CREATE TABLE DataSource ( [color] VARCHAR(32) ) INSERT INTO DataSource ([color]) VALUES ('<Color>Red</Color>') ,('<Color>Orange</Color>') ,('<Color>Yellow</Color>') ,('<Color>Blue</Color>') ,('<Color>Green</Color>') ,('<Color>Indigo</Color>') ,('<Color>Violet</Color>') SELECT REPLACE(REPLACE([color],'<Color>',''),'</Color>','') FROM DataSource FOR XML PATH('Color'), ROOT('Colors') ``` But the above seems very wrong to me and I would advise you to change the way you are storing the values in the table.
I need to wrap a result set of xml data with a parent tag
[ "", "sql", "sql-server", "xml", "t-sql", "" ]
I've a table what looks like that: ``` ID | title | author | timestamp | livetime | special ``` My goal is select all from table where livetime is larger than current timestamp and order it by timestamp desc BUT records where "special" is true must be in first (of course, also ordered). It's possible to do only with the MySQL? My SQL Query looks that: ``` SELECT * FROM ads WHERE livetime > UNIX_TIMESTAMP() ORDER BY timestamp DESC ```
Yes, you can sort by two columns. Value of true is 1 and false 0, so you need to sort in the descending order. ``` SELECT * FROM ads WHERE livetime > UNIX_TIMESTAMP() ORDER BY special DESC, timestamp DESC ```
``` SELECT * FROM ads WHERE livetime > UNIX_TIMESTAMP() ORDER BY special <> 1, timestamp DESC ```
Order by date but add important at first in MySQL
[ "", "mysql", "sql", "" ]
Should I be creating and dropping intermediate tables in hive? I can write something like (much simplified): ``` drop table if exists tmp1; create table tmp1 as select a, b, c from input1 where a > 1 and b < 3; drop table if exists tmp2; create table tmp2 as select x, y, z from input2 where x < 6; drop table if exists output; create table output as select x, a, count(*) as count from tmp1 join tmp2 on tmp1.c = tmp2.z group by tmp1.b; drop table tmp1; drop table tmp2; ``` or I can roll everything into one statement: ``` drop table if exists output; create table output as select x, a, count(*) as count from (select a, b, c from input1 where a > 1 and b < 3) t1 join (select x, y, z from input2 where x < 6) t2 on t1.c = t2.z group by t1.b; ``` Obviously, if I reuse the intermediate tables more than once, it makes perfect sense to create them. However, when they are used just once, I have a choice. I tried both and the second one is **6%** *faster* as measured by the wall time, but **4%** *slower* as measured by the `MapReduce Total cumulative CPU time` log output. This difference is probably within the random margin of error (caused by other processes &c). However, is it possible that combining queries could result in a dramatic speedup? Another question is: are intermediate tables, which are used just once, a normal occurrence in hive code, or should they be avoided when possible?
There is one significant difference. Running the one big query will allow the optimizer more freedom in optimizations. One of the most important optimizations in such cases are paralellims as set in `hive.exec.parallel`. when set to true hive will exexcute independant stages in parallel. In your case, in the 2nd query imagine t1,t2 do more complex work like`group by`. in the 2nd query t1,t2 will execute simultaniusly while in the first script the will be serial.
I like to create multiple views, and then only create a table at the end. This allows the Hive optimizer to reduce the number of map-reduce steps, and execute in parallel as dimamah and Nigel have pointed out, but helps maintain readability for very complicated pipelines. For your example, you could replace it with ``` CREATE VIEW IF NOT EXISTS tmp1_view AS SELECT a, b, c FROM inputs where a > 1 and b < 3; create view if not exists tmp2_view as select x, y, z_ from input2 where x < 6; drop table if exists output; create table output as select x, a, count(*) as count from tmp1_view join tmp2_view on tmp1_view.c = tmp2_view.z group by tmp1_view.b; ```
Hive SQL coding style: intermediate tables?
[ "", "sql", "hadoop", "hive", "" ]
By using the following query, temporary table `testingTemp` will be created and the definition of columns are depending on `oriTable`.(I guess) ``` Select a.ID AS [a], b.ID AS [b], c.ID AS [c] INTO #testingTemp FROM oriTable ``` And I have my second query as below: ``` Insert into #testingTemp (c) Select z.ID AS [c] FROM oriTable ``` Now when I execute second query, SQL Server complaint ``` Cannot insert the value NULL into column 'a' , table 'tempdb.dbo.testingTemp...blabla ``` **May I know how to solve this problem without changing `oriTable` structure?**
Use ``` SELECT ID + 0 AS [a], ID + 0 AS [b], ID + 0 AS [c] INTO #testingTemp FROM oriTable ``` The nullability of columns computed via an expression is almost always assumed to be `NULL` rather than `NOT NULL`
try this: ``` ALTER TABLE #testingTemp ALTER COLUMN [a] int NULL ```
SQL Server possible to create temp table with all columns nullable automatically?
[ "", "sql", "sql-server", "" ]
I have a table in a SQL Server database looking like this: ``` +-----+-----+-----+-----+ | ID | A | B | C | +=====+=====+=====+=====+ | 1 | 1 | 1 | 6 | | 2 | 1 | 2 | 7 | | 3 | 1 | 3 | 3 | | 4 | 2 | 1 | 6 | | 5 | 2 | 2 | 7 | | 6 | 2 | 3 | 8 | | 7 | 3 | 1 | 6 | | 8 | 3 | 2 | 9 | | 9 | 3 | 3 | 10 | | 10 | 4 | 4 | 4 | +-----+-----+-----+-----+ ``` In this table I need to extract Column A values, for which there are multiple duplicates with different A value, but equal B and C values. Meaning that in the above table ID 1 matches ID 4 and ID 7. Further ID 2 matches ID 5, A 1 and A 2 will have more than one common dataset and thus need to be extracted. However A 3 only have one in common with the others, so that should not be extracted. I hope that someone could help me with this issue as I have not found any simple way to do this.
Here is an answer that works with repeated values, [fiddle here](http://sqlfiddle.com/#!3/96cbd/11) ``` SELECT [t].[id], [t].[a], [t].[b], [t].[c] FROM [Temp] [t] JOIN ( SELECT [b], [c] FROM ( SELECT DISTINCT [a], [b], [c] FROM [Temp]) [deduplicated] GROUP BY [b], [c] HAVING count([a]) > 1) [r] ON [r].[b] = [t].[b] AND [r].[c] = [t].[c] ```
Are you looking for the below query.? and please tell me if i have missed out something. ``` declare @tab1 table (id int,a int,b int, c int) insert into @tab1 values (1,1,1,6),(2,1,2,7),(3,1,3,3),(4,2,1,6),(5,2,2,7),(6,2,3,8),(7,3,1,6),(8,3,2,9),(9,3,3,10),(10,4,4,4) select * from @tab1 ``` Code Part ``` select min(tem.id) id,min(tem.a) a, tem.b,tem.c from ( select x.* from @tab1 x where x.b in (select b from @tab1 y where y.a <> x.a) and x.c in (select c from @tab1 y where y.a <> x.a) ) tem group by tem.b,tem.c ```
Complicated SQL Search for duplicates
[ "", "sql", "sql-server", "" ]
I'm looking to produce SQL output with the most recently updated price of each product. The product_price can be updated multiple times for a product number, therefore creating more than one row. I'm looking to eliminate all but one row per product_number. ``` SELECT product_number ,product_price ,MAX(update_timestamp) FROM product_price ORDER BY 1,2 ```
There are a couple ways of doing this. My preferred way is a subquery. First get the product number and its max timestamp: ``` SELECT product_number ,MAX(update_timestamp) as maxtimestamp FROM product_price group by product_number ``` Now turn that into a subquery and inner join it to the first table to filter all but the max: ``` select a.product_number, a.maxtimestamp, b.product_price from ( SELECT product_number ,MAX(update_timestamp) as maxtimestamp FROM product_price group by product_number) a inner join product_price b on a.product_number = b.product_number and a.maxtimestamp = b.update_timestamp ```
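The subquery-join pattern can be verified with a SQLite stand-in (made-up prices; `update_timestamp` shortened to `update_ts`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_price
                (product_number INTEGER, product_price REAL, update_ts INTEGER)""")
conn.executemany("INSERT INTO product_price VALUES (?, ?, ?)",
                 [(1, 9.99, 100), (1, 12.50, 200),
                  (2, 5.00, 100), (2, 4.75, 300), (2, 6.00, 150)])

# The inner query finds each product's latest timestamp; the join keeps
# only the row carrying that timestamp.
latest = conn.execute("""
    SELECT b.product_number, b.product_price
    FROM (SELECT product_number, MAX(update_ts) AS max_ts
          FROM product_price GROUP BY product_number) a
    JOIN product_price b
      ON a.product_number = b.product_number AND a.max_ts = b.update_ts
    ORDER BY b.product_number
""").fetchall()
print(latest)  # [(1, 12.5), (2, 4.75)]
```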
Assuming that you want the last updated price *for each product*, then try this: ``` SELECT Ranked.product_number, Ranked.product_price, Ranked.update_timestamp FROM ( SELECT pp.product_number, pp.product_price, pp.update_timestamp, DENSE_RANK() OVER ( PARTITION BY pp.product_number ORDER BY pp.update_timestamp DESC ) AS TimeRank FROM product_price AS pp ) AS Ranked WHERE Ranked.TimeRank = 1 ``` That dense rank in the center says that within each product (PARTITION BY), rank the latest updates (update\_timestamp DESC) first. Then in the outer query, we only take the #1s (TimeRank = 1).
Trying to Eliminate Duplicate Rows SQL Query
[ "", "mysql", "sql", "" ]
I have 2 tables in SQL Server 2008 R2 ``` +----+-----------------+ | id | txt | +----+-----------------+ | 1 | {115467}Ruwan | | 2 | {7877878787}pat | +----+-----------------+ ``` and ``` +----+------------+ | id | pid | +----+------------+ | 1 | 115467 | | 2 | 7877878787 | | 3 | 78787878 | +----+------------+ ``` I need to compare txt of t1 and pid of t2. From t1.txt only the part inside the braces { } needs to be considered, so my output will be like ``` +----+------------+---------+ | id | pid | matches | +----+------------+---------+ | 1 | 115467 | yes | | 2 | 7877878787 | yes | | 3 | 78787878 | no | +----+------------+---------+ ``` Currently I do the following: ``` $sql = "SELECT pid FROM t2"; //fetch all pid into an array and then inside a loop begin loop $sql1 = "select * from t1 where txt like '%$array_of_pid[$i]%'"; ``` Is there any more efficient way?
Try this: ``` SELECT t1.id, t1.pid, (CASE WHEN t2.id IS NULL THEN 'no' ELSE 'yes' END) matches FROM temp2 t1 LEFT OUTER JOIN temp t2 ON t2.txt LIKE '%{' + CAST(t1.pid AS VARCHAR(10)) + '}%' ``` Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/80665/1) **OUTPUT** ``` | ID | PID | MATCHES | |----|------------|---------| | 1 | 115467 | yes | | 2 | 7877878787 | yes | | 3 | 78787878 | no | ```
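The LIKE-join can be reproduced with SQLite (`||` instead of SQL Server's `+` for concatenation, and no CAST needed since pid is stored as text here); the rows are the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER, txt TEXT)")
conn.execute("CREATE TABLE t2 (id INTEGER, pid TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [(1, "{115467}Ruwan"), (2, "{7877878787}pat")])
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [(1, "115467"), (2, "7877878787"), (3, "78787878")])

# LEFT JOIN keeps every t2 row; unmatched rows get NULL, mapped to 'no'.
result = conn.execute("""
    SELECT t2.id, t2.pid,
           CASE WHEN t1.id IS NULL THEN 'no' ELSE 'yes' END AS matches
    FROM t2
    LEFT JOIN t1 ON t1.txt LIKE '%{' || t2.pid || '}%'
    ORDER BY t2.id
""").fetchall()
print(result)
```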
try this ``` select t2.id, t2.pid, CASE when t1.txt is NULL THEN 'no' ELSE 'YES' END AS matches from t2 left join t1 on (t1.txt like '%{'+t2.pid+'}%') ``` result ``` id pid matches 1 115467 YES 2 7877878787 YES 3 78787878 no ```
using dynamic string like
[ "", "sql", "sql-server", "select", "left-join", "case", "" ]
I've searched over the internet but couldn't find a way to resolve my issue. My query is something like that ``` SELECT users.id, SUBSTRING(posts.datestr, 4, 8) FROM users, posts WHERE posts.datestr='Jan-2014' ``` (That query above was shortened) The value in posts.datestr is actually `20-Jan-2014`, but I want it to be `Jan-2014` instead, but even after adding that `SUBSTRING` code, it still somehow shows as `20-Jan-2014`. Any idea what I did wrong? Thanks!
You need to alias the result of the `SUBSTRING` and use that alias in your conditions (`shortdate` in the example below). Since you'll now be using an alias in your conditions, you can't use the `where` clause; you can, though, use the `having` clause. Something like:

```
SELECT users.id, SUBSTRING(posts.datestr, 4, 8) as shortdate
FROM users, posts
HAVING shortdate='Jan-2014'
```
Use [**DATETIME**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format) function instead of **SUBSTRING()** ``` SELECT users.id, DATE_FORMAT(STR_TO_DATE(posts.datestr, '%d-%b-%Y'), '%b-%Y') updatedDate FROM users, posts HAVING updatedDate = 'Jan-2014' ```
MySQL Substring with WHERE
[ "mysql", "sql", "select", "substring", "date-format" ]
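The core mistake in the question, comparing the full column against the extracted value, can be demonstrated with a hypothetical Python/sqlite3 sketch; SQLite's `substr()` plays the role of MySQL's `SUBSTRING()` here, while the HAVING-on-alias trick in the answer is MySQL-specific:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE posts (datestr TEXT);
INSERT INTO posts VALUES ('20-Jan-2014'), ('05-Feb-2014'), ('31-Jan-2014');
""")

# The question's original WHERE: the stored value is '20-Jan-2014',
# so comparing the whole column against 'Jan-2014' matches nothing.
no_match = cur.execute(
    "SELECT datestr FROM posts WHERE datestr = 'Jan-2014'").fetchall()

# Compare against the extracted substring instead.
matched = cur.execute(
    "SELECT substr(datestr, 4, 8) AS shortdate FROM posts "
    "WHERE substr(datestr, 4, 8) = 'Jan-2014'").fetchall()
print(no_match, matched)
```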
I am building out an app that has different types of "posts", and a user can insert as many of these as they like. The posts can be of different types, for example an image post, a video post, a text post, etc., and obviously these will all have different fields in the database. An example would be that the image type will have:

```
Image
----------
date
image_url
```

and the text post might have:

```
Text
----------
date
title
content
```

Now for each of the different types, do I have a unique table and just link the user to these? Assuming this is correct, if I want to show them in date order, do I do this with a `JOIN`, or am I completely on the wrong track? Thanks for any help in advance!
The natural way to model what you're doing here is by mapping an inheritance hierarchy into your relational schema. What you want is to take all the common elements of all posts (post_id, date_created, user_id, etc.) plus a `type` field, and make that your 'base' table. This table represents all `posts` regardless of the specific kind of post it is. Next you take all the specific attributes for each type of post and create a table for that specific type (these are the 'child' tables). The primary key of this table is also a foreign key into the base table (post_id). The `type` field in the base table references which specific table a post belongs to. So it all ends up looking like this: ![Class Table Inheritance](https://i.stack.imgur.com/lPRkL.png) Now you can do all sorts of queries: Select all videos, order by descending date:

```
SELECT post.*, video.*
FROM post
JOIN video USING(post_id)
WHERE post.type = 'video'
ORDER BY post.date_created DESC
```

Select all posts with just basic details, for a single user:

```
SELECT post.*
FROM post
WHERE user_id = ?
```

Polymorphic query (select all posts for a single user, with specific type detail):

```
SELECT p.*, i.*, t.*, v.*
FROM post p
LEFT JOIN image i ON (p.post_id = i.post_id)
LEFT JOIN text t ON (p.post_id = t.post_id)
LEFT JOIN video v ON (p.post_id = v.post_id)
WHERE p.user_id = ?
```

### The downside, and an alternative

This model gives you great flexibility, but at the cost of greater query complexity and lower query performance. 
If your videos, text and image posts differ by a small enough number of fields, you'll get much better performance by just implementing what some call Single Table Inheritance: a single table that has all the columns for all the subtypes, plus a `type` column with the same purpose as above: ![Single Table Inheritance](https://i.stack.imgur.com/Bjx0C.png) The key aspect is that the type-specific columns should be nullable (since they will be null for posts which are not of that type). This model is more wasteful in space, but you can run all the same queries as above without needing the child table joins (effectively these tables are already joined by default), resulting in faster, less complex queries.
While you could have separate tables for each post type, I cannot think of any reason against using just one table:

> posts

```
post_id
post__type        -- Ex. 1: Text, 2: Image, 3: Video.
post__user_id
post__datetime
post__title       -- Images and videos could probably have titles as well.
post__content     -- This gets meaning based on post__type.
                  -- Ex. this would be a URL if post__type is Image or Video.
post__deleted
post__likes_count
```
Database design help - Splitting out tables
[ "mysql", "sql", "database", "database-design" ]
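The class-table-inheritance layout described in the accepted answer can be sketched end to end with a hypothetical Python/sqlite3 harness; the child tables are suffixed `_post` here (rather than the answer's `image`/`text`/`video`) purely to keep the sketch unambiguous, and the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Base table holds the shared columns plus a discriminator; one child
# table per post type holds the type-specific columns.
cur.executescript("""
CREATE TABLE post (post_id INTEGER PRIMARY KEY, user_id INTEGER,
                   type TEXT, date_created TEXT);
CREATE TABLE image_post (post_id INTEGER PRIMARY KEY, image_url TEXT);
CREATE TABLE text_post  (post_id INTEGER PRIMARY KEY, title TEXT, content TEXT);
CREATE TABLE video_post (post_id INTEGER PRIMARY KEY, video_url TEXT);

INSERT INTO post VALUES (1, 7, 'image', '2014-01-01'),
                        (2, 7, 'text',  '2014-01-02');
INSERT INTO image_post VALUES (1, 'http://example.com/a.png');
INSERT INTO text_post  VALUES (2, 'Hello', 'First post');
""")

# The polymorphic query: one row per post, child columns NULL where the
# post is not of that type.
rows = cur.execute("""
    SELECT p.post_id, p.type, i.image_url, t.title, v.video_url
    FROM post p
    LEFT JOIN image_post i ON p.post_id = i.post_id
    LEFT JOIN text_post  t ON p.post_id = t.post_id
    LEFT JOIN video_post v ON p.post_id = v.post_id
    WHERE p.user_id = 7
    ORDER BY p.date_created
""").fetchall()
print(rows)
```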
I have a query (with the purpose of making a view) which is using a few joins to get each column. Performance degrades quickly (exponentially?) for each set of joins added. What would be a good approach to make this query faster? Please see comments within the query. If it helps, this is using the WordPress DB schema. Here is a screenshot of EXPLAIN ![enter image description here](https://i.stack.imgur.com/2c7uz.png) **PRODUCTS TABLE** ``` +--+----+ |id|name| +--+----+ |1 |test| +--+----+ ``` **METADATA TABLE** ``` +----------+--------+-----+ |product_id|meta_key|value| +----------+--------+-----+ |1 |price |9.99 | +----------+--------+-----+ |1 |sku |ABC | +----------+--------+-----+ ``` **TERM\_RELATIONSHIPS TABLE** ``` +---------+----------------+ |object_id|term_taxonomy_id| +---------+----------------+ |1 |1 | +---------+----------------+ |1 |2 | +---------+----------------+ ``` **TERM\_TAXONOMY TABLE** ``` +----------------+-------+--------+ |term_taxonomy_id|term_id|taxonomy| +----------------+-------+--------+ |1 |1 |size | +----------------+-------+--------+ |2 |2 |stock | +----------------+-------+--------+ ``` **TERMS TABLE** ``` +-------+-----+ |term_id|name | +-------+-----+ |1 |500mg| +-------+-----+ |2 |10 | +-------+-----+ ``` **QUERY** ``` SELECT products.id, products.name, price.value AS price, sku.value AS sku, size.name AS size FROM products /* These joins are performing quickly */ INNER JOIN `metadata` AS price ON products.id = price.product_id AND price.meta_key = 'price' INNER JOIN `metadata` AS sku ON products.id = sku.product_id AND sku.meta_key = 'sku' /* Here's the part that is really slowing it down - I run this chunk about 5 times with different strings to match */ INNER JOIN `term_relationships` AS tr ON products.id = tr.object_id INNER JOIN `term_taxonomy` AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id AND tt.taxonomy = 'size' INNER JOIN `terms` AS size ON tt.term_id = size.term_id ```
Your performance issue is most likely caused by the join with the 'term_taxonomy' table. All the other joins seem to use the primary key (on which you probably have working indexes). So my suggestion is to add a compound index on **term_taxonomy_id** and **term_id** (or, if you must, **taxonomy**). Like this:

```
CREATE UNIQUE INDEX idx_term_taxonomy_id_taxonomy ON term_taxonomy( term_taxonomy_id, taxonomy);
```

Hope this will help you.
Make sure that all the columns referenced in the `ON` join conditions are indexed. This will significantly improve the speed.
How to Improve Query Performance with many JOINs
[ "mysql", "sql", "performance", "select", "query-optimization" ]
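The five sample tables from the question plus the suggested compound index can be exercised with a hypothetical Python/sqlite3 sketch; SQLite's query planner differs from MySQL's, so this only demonstrates that the index is creatable on the sample data and that the query returns the expected row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE products (id INTEGER, name TEXT);
CREATE TABLE metadata (product_id INTEGER, meta_key TEXT, value TEXT);
CREATE TABLE term_relationships (object_id INTEGER, term_taxonomy_id INTEGER);
CREATE TABLE term_taxonomy (term_taxonomy_id INTEGER, term_id INTEGER, taxonomy TEXT);
CREATE TABLE terms (term_id INTEGER, name TEXT);

INSERT INTO products VALUES (1, 'test');
INSERT INTO metadata VALUES (1, 'price', '9.99'), (1, 'sku', 'ABC');
INSERT INTO term_relationships VALUES (1, 1), (1, 2);
INSERT INTO term_taxonomy VALUES (1, 1, 'size'), (2, 2, 'stock');
INSERT INTO terms VALUES (1, '500mg'), (2, '10');

-- the compound index suggested in the accepted answer
CREATE UNIQUE INDEX idx_term_taxonomy_id_taxonomy
    ON term_taxonomy (term_taxonomy_id, taxonomy);
""")

rows = cur.execute("""
    SELECT products.id, products.name, price.value, sku.value, size.name
    FROM products
    INNER JOIN metadata AS price
        ON products.id = price.product_id AND price.meta_key = 'price'
    INNER JOIN metadata AS sku
        ON products.id = sku.product_id AND sku.meta_key = 'sku'
    INNER JOIN term_relationships AS tr ON products.id = tr.object_id
    INNER JOIN term_taxonomy AS tt
        ON tr.term_taxonomy_id = tt.term_taxonomy_id AND tt.taxonomy = 'size'
    INNER JOIN terms AS size ON tt.term_id = size.term_id
""").fetchall()
print(rows)
```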
I have a table which looks like this

```
+----+----------+------+
| id | bdate    | item |
+----+----------+------+
| 1  | 20010101 | 1a   |
| 1  | 20020202 | 1b   |
| 1  | 20030303 | 1c   |
| 2  | 20010101 | 1d   |
| 2  | 20020202 | 1e   |
+----+----------+------+
```

I want to update bdate to today, grouping by id, where bdate is not the max. So for the above records, the result will look like this after executing the query:

```
+----+----------+------+
| id | bdate    | item |
+----+----------+------+
| 1  | 20140106 | 1a   |
| 1  | 20140106 | 1b   |
| 1  | 20030303 | 1c   |
| 2  | 20140106 | 1d   |
| 2  | 20020202 | 1e   |
+----+----------+------+
```

I came up with the result using temp tables but wanted to see if anyone has a better suggestion. Thanks,
Do it this way ``` UPDATE t SET bdate = GETDATE() FROM table1 t LEFT JOIN ( SELECT id, MAX(bdate) bdate FROM table1 GROUP BY id ) e ON t.id = e.id AND t.bdate = e.bdate WHERE e.id IS NULL; ``` Outcome: ``` | ID | BDATE | ITEM | |----|------------|------| | 1 | 2014-01-06 | 1a | | 1 | 2014-01-06 | 1b | | 1 | 2003-03-03 | 1c | | 2 | 2014-01-06 | 1d | | 2 | 2002-02-02 | 1e | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!6/ba233/1)** demo
I am assuming here that you already built your table with the data (as per your example). Let's say you name your table "MainTable"; my take would be the following:

```
UPDATE MT
SET bdate = CASE
                WHEN MT2.bdate IS NULL THEN GETDATE()
                ELSE MT2.bdate
            END
FROM MainTable AS MT
LEFT JOIN (
            SELECT ID
                  ,bdate = MAX(bdate)
            FROM MainTable
            GROUP BY ID
          ) MT2
    ON MT2.ID = MT.ID
    AND MT2.bdate = MT.bdate
```

Now I would guess this logic should be very close to your temp table logic, except this one is using a subquery (to work faster). This also assumes the "Item" column isn't relevant for your purposes.
Update multiple rows from group by SQL
[ "sql", "sql-server" ]
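The same "update everything except the per-id maximum" logic can be checked with a hypothetical Python/sqlite3 sketch; older SQLite lacks the `UPDATE ... FROM` join syntax used in the accepted answer, so the sketch materialises the per-id maximum into a temp table first, and `'20140106'` stands in for `GETDATE()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t (id INTEGER, bdate TEXT, item TEXT);
INSERT INTO t VALUES
  (1, '20010101', '1a'), (1, '20020202', '1b'), (1, '20030303', '1c'),
  (2, '20010101', '1d'), (2, '20020202', '1e');

-- materialise the per-id maximum first, then touch every other row
CREATE TEMP TABLE mx AS SELECT id, MAX(bdate) AS mxd FROM t GROUP BY id;
UPDATE t SET bdate = '20140106'
WHERE bdate <> (SELECT mxd FROM mx WHERE mx.id = t.id);
""")

rows = cur.execute("SELECT id, bdate, item FROM t ORDER BY item").fetchall()
print(rows)
```

Materialising the maximums before the update keeps the result deterministic, since the update would otherwise be reading the same table it is modifying.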
I'm unable to declare a MySQL variable inside a stored procedure. I also cannot open a cursor; I keep getting the same error:

> ERROR 1064 (42000): You have an error in your SQL syntax; check the
> manual that corresponds to your MySQL server version for the right
> syntax to use near '@userID INT;
> DECLARE @cursorUserID CURSOR FOR SELECT (MAX(userID) + 1) FROM Use' at line 3.

Here's the code:

```
use phpBank;
DELIMITER //
CREATE PROCEDURE newUser()
BEGIN
DECLARE @userID INT;
DECLARE @cursorUserID CURSOR FOR SELECT (MAX(userID) + 1) FROM Users;
OPEN @newUserID;
END;
//
DELIMITER ;
```
1. As it has already been mentioned by others, you're mixing local variables (without `@` in front of a variable name) and user (session) variables (that have `@` in front of a variable name). `DECLARE` is used **only** for local variables. Session variables are always available and don't require declaration.
2. It's good practice not to give your variables and procedure parameters the same names as column names in your tables. It can easily break your code in situations where MySQL can't decide whether you actually meant to use a variable or a column.
3. You **definitely don't need a cursor** (unlike plsql in Oracle) to return a scalar value by using an aggregate function (`MAX()` in your case). Just put a select in parentheses.

That being said, a working, syntactically correct version of your code might look like this

```
DELIMITER $$
CREATE PROCEDURE newUser()
BEGIN
  DECLARE newid INT;
  SET newid = (SELECT COALESCE(MAX(userID), 0) + 1 FROM Users);
  SELECT newid;
END$$
DELIMITER ;
```

or technically you can also do (but not recommended) this

```
DELIMITER $$
CREATE PROCEDURE newUser()
BEGIN
  SET @newid = (SELECT COALESCE(MAX(userID), 0) + 1 FROM Users);
  SELECT @newid newid;
END$$
DELIMITER ;
```

Here is **[SQLFiddle](http://sqlfiddle.com/#!2/aab63/1)** demo

---

**Now**, judging from the procedure name and the query you're trying to use, it looks like you're reinventing the wheel of generating unique ids (most likely PK) by doing it manually. **Stop! Don't do that.** It creates race conditions and won't work correctly in a concurrent environment, because multiple processes can potentially grab the same `MAX(userID)` value. The only safe method for generating unique ids in MySQL is an `auto_increment` column. Use it instead.
Try this ``` use phpBank; DELIMITER // CREATE PROCEDURE newUser() BEGIN DECLARE @userID int; DECLARE @cursorUserID AS CURSOR; SET @cursorUserID = CURSOR FOR SELECT (MAX(userID) + 1) FROM Users; OPEN @newUserID; END; // DELIMITER ; ```
Cannot declare a variable in a stored procedure
[ "mysql", "sql", "stored-procedures" ]
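SQLite has no stored procedures, but the scalar expression at the heart of the accepted answer can be exercised directly with a small Python/sqlite3 sketch; it shows how `COALESCE` turns the `NULL` that `MAX()` returns on an empty table into `0`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Users (userID INTEGER)")

# On an empty table MAX(userID) is NULL; COALESCE maps it to 0, so +1 gives 1.
empty = cur.execute(
    "SELECT COALESCE(MAX(userID), 0) + 1 FROM Users").fetchone()[0]

cur.executemany("INSERT INTO Users VALUES (?)", [(1,), (2,), (5,)])
nonempty = cur.execute(
    "SELECT COALESCE(MAX(userID), 0) + 1 FROM Users").fetchone()[0]
print(empty, nonempty)
```

As the answer stresses, `MAX() + 1` is still unsafe under concurrency; an auto-increment column remains the right tool for generating ids.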
I've created a SQLFiddle to try and get my head around this: <http://sqlfiddle.com/#!2/21e72/1> In the query, I have put a `max()` on the compiled_date column, but the recommendation column is still coming through incorrectly - I'm assuming that a select statement will need to be inserted on line 3 somehow? I've tried the examples provided by the commenters below, but I think I just need to understand this from a basic query to begin with.
As others have pointed out, the issue is that some of the select columns are neither aggregated nor used in the group by clause. Most DBMSs won't allow this at all, but MySQL is a little relaxed on some of the standards... So, you need to first find the `max(compiled_date)` for each case, then find the recommendation that goes with it. ``` select r.case_number, r.compiled_date, r.recommendation from reporting r join ( SELECT case_number, max(compiled_date) as lastDate from reporting group by case_number ) s on r.case_number=s.case_number and r.compiled_date=s.lastDate ```
Thank you for providing the SQLFiddle, but only the reporting data is given; we would appreciate sample data for all of the tables. Anyway, could you try this?

```
SELECT `case`.number,
       staff.staff_name AS `case owner`,
       client.client_name,
       `case`.address,
       x.mx_date,
       report.recommendation
FROM `case`
INNER JOIN (
    SELECT case_number, MAX(compiled_date) as mx_date
    FROM report
    GROUP BY case_number
) x ON x.case_number = `case`.number
INNER JOIN report
    ON x.case_number = report.case_number
    AND report.compiled_date = x.mx_date
INNER JOIN client ON `case`.client_number = client.client_number
INNER JOIN staff ON `case`.staff_number = staff.staff_number
WHERE `case`.active = 1
    AND staff.staff_name = 'bob'
ORDER BY `case`.number ASC;
```
MAX() Function not working as expected
[ "mysql", "sql", "select", "group-by", "max" ]
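The fiddle's data is not reproduced in the question, so the rows below are hypothetical; this Python/sqlite3 sketch just exercises the accepted pattern of joining each case back to its own `MAX(compiled_date)` to pick up the matching recommendation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE reporting (case_number INTEGER, compiled_date TEXT,
                        recommendation TEXT);
INSERT INTO reporting VALUES
  (1, '2014-01-01', 'old advice'), (1, '2014-02-01', 'new advice'),
  (2, '2014-01-15', 'only advice');
""")

# Find each case's latest date in a derived table, then join back to the
# base table on (case_number, date) to get the row that goes with it.
rows = cur.execute("""
    SELECT r.case_number, r.compiled_date, r.recommendation
    FROM reporting r
    JOIN (SELECT case_number, MAX(compiled_date) AS lastDate
          FROM reporting GROUP BY case_number) s
      ON r.case_number = s.case_number AND r.compiled_date = s.lastDate
    ORDER BY r.case_number
""").fetchall()
print(rows)
```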
I got this simple join statement and I'm pretty sure the syntax is correct. I looked at some tutorials and I can't find any difference between my code and the examples. Here's the statement:

```
SELECT n.id nId, n.news_date, n.news_type,
       p.id pId, p.title pTitle, p.file_path pPath,
       s.id sId, s.title sTitle, s.content sContent,
       v.id vId, v.title vTitle, v.url vUrl
FROM photo_news p, standard_news s, video_news v
INNER JOIN news n
    ON p.news_id = n.id
    OR s.news_id = n.id
    OR v.news_id = n.id
ORDER BY n.news_date DESC
```

I get the following error:

> Unknown column 's.news_id' in 'on clause'

I really don't know why this error is raised, because the column 'news_id' exists in every table where it has to exist. And if I change the order in the ON clause (i.e. I start with p.news_id = n.news_id) I get the same error (unknown column p.news_id). So I think there's a problem with the aliases, but I really don't have a clue. Thanks for your help ;)
Probably you are looking for something like this, to return the data for the records in news that have data in at least one of the other tables. In that case you need to use LEFT JOINs and not OR in the JOIN conditions.

```
SELECT n.id nId, n.news_date, n.news_type,
       p.id pId, p.title pTitle, p.file_path pPath,
       s.id sId, s.title sTitle, s.content sContent,
       v.id vId, v.title vTitle, v.url vUrl
FROM news n
LEFT OUTER JOIN photo_news p ON n.id = p.news_id
LEFT OUTER JOIN standard_news s ON n.id = s.news_id
LEFT OUTER JOIN video_news v ON n.id = v.news_id
WHERE p.news_id IS NOT NULL
   OR s.news_id IS NOT NULL
   OR v.news_id IS NOT NULL
ORDER BY n.news_date DESC
```
Try this. You made a mistake in JOINing the tables. [For reference, you can see how multiple tables are JOINed together.](https://stackoverflow.com/a/8974371/435559)

```
SELECT n.id nId, n.news_date, n.news_type,
       p.id pId, p.title pTitle, p.file_path pPath,
       s.id sId, s.title sTitle, s.content sContent,
       v.id vId, v.title vTitle, v.url vUrl
FROM photo_news p
INNER JOIN standard_news s on p.news_id = s.news_id
INNER JOIN video_news v on s.news_id = v.news_id
INNER JOIN news n on v.news_id = n.id
ORDER BY n.news_date DESC
```
MySQL inner join fails
[ "mysql", "sql", "join", "inner-join" ]
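The LEFT JOIN rewrite in the accepted answer can be exercised with a hypothetical Python/sqlite3 sketch (the sample rows are invented: one news row per type, plus one orphan row with no detail record, which the `IS NOT NULL` filter drops):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE news (id INTEGER, news_date TEXT, news_type TEXT);
CREATE TABLE photo_news (news_id INTEGER, title TEXT);
CREATE TABLE standard_news (news_id INTEGER, title TEXT);
CREATE TABLE video_news (news_id INTEGER, title TEXT);
INSERT INTO news VALUES (1, '2014-01-03', 'photo'),
                        (2, '2014-01-02', 'standard'),
                        (3, '2014-01-01', 'video'),
                        (4, '2014-01-04', 'orphan');
INSERT INTO photo_news VALUES (1, 'p');
INSERT INTO standard_news VALUES (2, 's');
INSERT INTO video_news VALUES (3, 'v');
""")

rows = cur.execute("""
    SELECT n.id, p.title, s.title, v.title
    FROM news n
    LEFT JOIN photo_news p ON n.id = p.news_id
    LEFT JOIN standard_news s ON n.id = s.news_id
    LEFT JOIN video_news v ON n.id = v.news_id
    WHERE p.news_id IS NOT NULL OR s.news_id IS NOT NULL
       OR v.news_id IS NOT NULL
    ORDER BY n.news_date DESC
""").fetchall()
print(rows)
```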
I tried this in two ways (I think they are the same): ``` select product.maker,speed from product,laptop where product.model=laptop.model and hd>='10' ``` and ``` select product.maker, speed from laptop join product on (product.model=laptop.model) where hd>='10' ``` and I receive this in the simulator: > "Your query produced correct result set on main database, but it failed test on second, checking database. \* Wrong number of records (more by 1) > This exercise has FAQ" The relevant tables are: ### Table Laptop ``` code model speed ram hd price screen ------------------------------------------------------ 1 1298 350 32 4.0 700.0000 11 2 1321 500 64 8.0 970.0000 12 3 1750 750 128 12.0 1200.0000 14 4 1298 600 64 10.0 1050.0000 15 5 1752 750 128 10.0 1150.0000 14 6 1298 450 64 10.0 950.0000 12 ``` ### Table Product: ``` maker model Type ----------------------- A 1232 PC A 1233 PC A 1276 Printer A 1298 Laptop A 1401 Printer A 1408 Printer A 1752 Laptop B 1121 PC B 1750 Laptop C 1321 Laptop D 1288 Printer D 1433 Printer E 1260 PC E 1434 Printer E 2112 PC E 2113 PC ```
```
select distinct product.maker, speed
from laptop
  join product on (product.model = laptop.model)
where hd >= '10'
```

or

```
select distinct product.maker, speed
from product, laptop
where product.model = laptop.model
  and hd >= '10'
```
*When this answer was written, one query used `hd>=10` and one used `hd>='10'`. The query has since been edited.* When you use the `'10'` in the comparison, you cause the DBMS to do the comparison as a string instead of as a number. Under string comparison, `9` is greater than `10`. As a general rule of thumb, if the data column is a string type, you should compare it with a string: if the column is numeric type, you should compare it with plain numbers (not with strings). Note that different DBMS may have different ways of interpreting a mixed-type expression.
Point out the maker and speed of the laptops having hard drive capacity more or equal to 10 Gb
[ "sql" ]
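Both points in this thread, `DISTINCT` on the maker/speed pairs and the string-vs-number comparison pitfall, can be demonstrated with a hypothetical Python/sqlite3 sketch of the sample tables; note how comparing `hd` as *text* ropes in rows like `hd = 4.0`, because `'4.0' >= '10'` in text ordering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE laptop (model INTEGER, speed INTEGER, hd REAL);
CREATE TABLE product (maker TEXT, model INTEGER, type TEXT);
INSERT INTO laptop VALUES (1298,350,4.0),(1321,500,8.0),(1750,750,12.0),
                          (1298,600,10.0),(1752,750,10.0),(1298,450,10.0);
INSERT INTO product VALUES ('A',1298,'Laptop'),('A',1752,'Laptop'),
                           ('B',1750,'Laptop'),('C',1321,'Laptop');
""")

# Numeric comparison: only hd >= 10 GB qualifies.
numeric = cur.execute("""
    SELECT DISTINCT product.maker, speed
    FROM laptop JOIN product ON product.model = laptop.model
    WHERE hd >= 10 ORDER BY maker, speed
""").fetchall()

# Text comparison: '4.0' and '8.0' also compare >= '10' lexicographically.
as_text = cur.execute("""
    SELECT DISTINCT product.maker, speed
    FROM laptop JOIN product ON product.model = laptop.model
    WHERE CAST(hd AS TEXT) >= '10' ORDER BY maker, speed
""").fetchall()
print(numeric, as_text)
```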
How can I find, from within a single table, customer names with orders greater than $2000?

```
Customer :

custName    custOrderVal
A           100
B           200
A           900
C           2400
A           1100
B           8000
```

`select cust.custName from customer cust where cust.custOrderVal > 2000;` will not show the correct data. Can someone please guide me on this?
**Query** ``` DECLARE @t TABLE (custName VARCHAR(10),custOrderVal INT) INSERT INTO @t VALUES ('A',100), ('B',200),('A',900),('C',2400), ('A',1100),('B',8000),('D',1000),('F',500) SELECT CustName FROM @t GROUP BY CustName HAVING SUM(custOrderVal) > 2000 ``` **Result Set** ``` ╔══════════╗ β•‘ CustName β•‘ ╠══════════╣ β•‘ A β•‘ β•‘ B β•‘ β•‘ C β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β• ```
By using the `HAVING` clause, like this:

```
SELECT custName
FROM Customer
GROUP BY custName
HAVING SUM(custOrderVal) > 2000
```

You need to use `GROUP BY` to get the `SUM` per customer and then use `HAVING` to limit the result (just like a `WHERE` clause).
How to find customer with orders greater than X from single table
[ "mysql", "sql" ]
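The `GROUP BY` + `HAVING SUM(...)` pattern from both answers is portable, so it can be checked directly with a Python/sqlite3 sketch using the accepted answer's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Customer (custName TEXT, custOrderVal INTEGER);
INSERT INTO Customer VALUES ('A',100),('B',200),('A',900),('C',2400),
                            ('A',1100),('B',8000),('D',1000),('F',500);
""")

# Sum per customer, then keep only groups whose total exceeds 2000.
rows = cur.execute("""
    SELECT custName FROM Customer
    GROUP BY custName
    HAVING SUM(custOrderVal) > 2000
    ORDER BY custName
""").fetchall()
print(rows)
```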
```
ALTER PROCEDURE [dbo].[K_FS_GetFeedDivisionReport]
@orderno varchar(50)
AS
BEGIN
SET NOCOUNT ON;

select OT.OrderNo,FT.Feedtype,ModeofPack='Plastic',BG.Bagtype,OD.Numofbags,
(OD.numofbags*BG.Bagtype) as TotalQty,sum(TotalQty) as Total
from K_FS_OrderconfirmDetails OD
inner join K_FS_bagtype BG on OD.bagtype=BG.sno
inner join k_FPS_FeedType FT on FT.sno=OD.feedtype
inner join K_FS_OrderDetails OT on OT.sno = OD.orderno
where OD.orderno = @orderno
group by OT.OrderNo,FT.Feedtype,BG.Bagtype,OD.Numofbags
END
```

Here I calculate `TotalQty`. I want to calculate the sum of `TotalQty`, but `TotalQty` is an alias name. Is that possible? Otherwise, how can I solve it? Please help me.
yes, you can do it like this : ``` ALTER PROCEDURE [dbo].[K_FS_GetFeedDivisionReport] @orderno varchar(50) AS BEGIN SET NOCOUNT ON; select OT.OrderNo,FT.Feedtype,ModeofPack='Plastic',BG.Bagtype,OD.Numofbags, (OD.numofbags*BG.Bagtype) as TotalQty,sum(OD.numofbags*BG.Bagtype) as Total from K_FS_OrderconfirmDetails OD inner join K_FS_bagtype BG on OD.bagtype=BG.sno inner join k_FPS_FeedType FT on FT.sno=OD.feedtype inner join K_FS_OrderDetails OT on OT.sno = OD.orderno where OD.orderno = @orderno group by OT.OrderNo,FT.Feedtype,BG.Bagtype,OD.Numofbags,OD.numofbags*BG.Bagtype END ``` If you need totalQty for each row you should use a subQuery: ``` ALTER PROCEDURE [dbo].[K_FS_GetFeedDivisionReport] @orderno varchar(50) AS BEGIN SET NOCOUNT ON; select OT.OrderNo,FT.Feedtype,ModeofPack='Plastic',BG.Bagtype,OD.Numofbags, (OD.numofbags*BG.Bagtype) as TotalQty, Total = (Select Sum (OD.numofbags*BG.Bagtype) from K_FS_OrderconfirmDetails OD inner join K_FS_bagtype BG on OD.bagtype=BG.sno inner join k_FPS_FeedType FT on FT.sno=OD.feedtype inner join K_FS_OrderDetails OT on OT.sno = OD.orderno where OD.orderno = @orderno) from K_FS_OrderconfirmDetails OD inner join K_FS_bagtype BG on OD.bagtype=BG.sno inner join k_FPS_FeedType FT on FT.sno=OD.feedtype inner join K_FS_OrderDetails OT on OT.sno = OD.orderno where OD.orderno = @orderno group by OT.OrderNo,FT.Feedtype,BG.Bagtype,OD.Numofbags,OD.numofbags*BG.Bagtype END ```
You can achieve this using sub query. Modify your select statement this way: ``` select OT.OrderNo,FT.Feedtype,ModeofPack='Plastic',BG.Bagtype,OD.Numofbags, (OD.numofbags*BG.Bagtype) as TotalQty, (select SUM(OD.numofbags*BG.Bagtype) from K_FS_OrderconfirmDetails OD inner join K_FS_bagtype BG on OD.bagtype=BG.sno ) as Total from K_FS_OrderconfirmDetails OD inner join K_FS_bagtype BG on OD.bagtype=BG.sno inner join k_FPS_FeedType FT on FT.sno=OD.feedtype inner join K_FS_OrderDetails OT on OT.sno = OD.orderno where OD.orderno = @orderno group by OT.OrderNo,FT.Feedtype,BG.Bagtype,OD.Numofbags ``` This will calculate total for everything in the last column.
sum clause method calculating the total with alias name?
[ "sql", "sql-server", "sql-server-2008" ]
Hi, I am new to databases, and I am trying to get records from multiple tables depending on their contents. Following are my tables:

```
Table1

Column1    Column2
1          10
2          25
3          23
4          15
5          7

Table2

Column1    Column2
2          15
3          13
5          17

Table3

Column1    Column2
2          45
```

The resultant table should have records like

```
Column1    Column2
1          10
2          45
3          13
4          15
5          17
```

I have been trying but have not got the output yet. Any help or a direction to work out this output would be a great help.

---

**UPDATE**

What I want is to get all rows from table1; then, if table2 contains matching records, remove the matching records from the result set and add table2's matching records instead; the same is then repeated for table3.
Please use the code below and try:

```
select * from table1 where column1 not in (
select column1 from table2
union
select column1 from table3)
union
select * from table2 where column1 not in (select column1 from table3)
union
select * from table3
```
``` SELECT t1.column1, COALESCE(t3.column2,t2.column2,t1.column2) FROM t1 LEFT JOIN t2 on t1.column1=t2.column1 LEFT JOIN t3 on t1.column1=t3.column1 ```
How to get the records from multiple tables?
[ "sql", "sql-server", "sql-server-2008", "join", "outer-join" ]
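The accepted `NOT IN` + `UNION` layering can be verified against the question's own sample data with a Python/sqlite3 sketch (the pattern is standard SQL, so it transfers directly from SQL Server to SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (column1 INTEGER, column2 INTEGER);
CREATE TABLE table2 (column1 INTEGER, column2 INTEGER);
CREATE TABLE table3 (column1 INTEGER, column2 INTEGER);
INSERT INTO table1 VALUES (1,10),(2,25),(3,23),(4,15),(5,7);
INSERT INTO table2 VALUES (2,15),(3,13),(5,17);
INSERT INTO table3 VALUES (2,45);
""")

# table3 wins over table2, which wins over table1, per the UPDATE note.
rows = cur.execute("""
    SELECT * FROM table1 WHERE column1 NOT IN
        (SELECT column1 FROM table2 UNION SELECT column1 FROM table3)
    UNION
    SELECT * FROM table2 WHERE column1 NOT IN (SELECT column1 FROM table3)
    UNION
    SELECT * FROM table3
    ORDER BY column1
""").fetchall()
print(rows)
```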
I have a table that lists items and a status for these items. The problem is that some items have multiple different status entries. For example:

```
HOST       Status
1.1.1.1    PASS
1.1.1.1    FAIL
1.2.2.2    FAIL
1.2.3.3    PASS
1.4.2.1    FAIL
1.4.2.1    FAIL
1.1.4.4    NULL
```

I need to return one status per asset.

> ```
> HOST       Status
> 1.1.1.1    PASS
> 1.2.2.2    FAIL
> 1.2.3.3    PASS
> 1.4.2.1    FAIL
> 1.1.4.4    No Results
> ```

I have been trying to do this with T-SQL CASE statements but can't quite get it right. The conditions are: any Pass + anything is a Pass, Fail + No Results is a Fail, and Null is No Results.
Try using a `case` statement to convert to ordered results and group on that; finally, you'll need to convert back to the nice, human-readable answer:

```
with cte1 as (
    SELECT HOST,
        [statNum] = case when Status like 'PASS' then 2
                         when Status like 'FAIL' then 1
                         else 0
                    end
    FROM table
)
SELECT HOST,
    case max(statNum)
        when 2 then 'PASS'
        when 1 then 'FAIL'
        else 'No Results'
    end
FROM cte1
GROUP BY HOST
```

NOTE: I used a CTE to hopefully make things a little clearer, but everything could be done in a single `SELECT`, like so:

```
SELECT HOST,
    [Status] = case max(case when Status like 'PASS' then 2
                             when Status like 'FAIL' then 1
                             else 0
                        end)
        when 2 then 'PASS'
        when 1 then 'FAIL'
        else 'No Result'
    end
FROM table
```
You can use `Max(Status)` with `Group by Host` to get `Distinct` values:

```
Select host, coalesce(Max(status),'No results') status
From Table1
Group by host
Order by host
```

**[Fiddle Demo Results:](http://sqlfiddle.com/#!6/e0f4a/4)**

```
|    HOST |     STATUS |
|---------|------------|
| 1.1.1.1 |       PASS |
| 1.1.4.4 | No results |
| 1.2.2.2 |       FAIL |
| 1.2.3.3 |       PASS |
| 1.4.2.1 |       FAIL |
```

By default SQL Server is case insensitive. If case sensitivity is a concern for your server, then use the `lower()` function as below:

```
Select host, coalesce(Max(Lower(status)),'No results') status
From Table1
Group by host
Order by host
```

**[Fiddle demo](http://sqlfiddle.com/#!6/e0f4a/13)**
SQL Conditional Case
[ "sql", "sql-server", "t-sql", "case", "distinct-values" ]
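The accepted status-to-number-and-back trick is plain SQL, so it can be checked against the question's sample data with a Python/sqlite3 sketch (SQLite's `LIKE` is case-insensitive for ASCII, matching SQL Server's default collation here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE results (host TEXT, status TEXT);
INSERT INTO results VALUES
  ('1.1.1.1','PASS'),('1.1.1.1','FAIL'),('1.2.2.2','FAIL'),
  ('1.2.3.3','PASS'),('1.4.2.1','FAIL'),('1.4.2.1','FAIL'),
  ('1.1.4.4',NULL);
""")

# Map PASS->2, FAIL->1, anything else (incl. NULL)->0, take the MAX per
# host, then translate the number back into the readable status.
rows = cur.execute("""
    SELECT host,
           CASE MAX(CASE WHEN status LIKE 'PASS' THEN 2
                         WHEN status LIKE 'FAIL' THEN 1 ELSE 0 END)
                WHEN 2 THEN 'PASS'
                WHEN 1 THEN 'FAIL'
                ELSE 'No Results' END AS final_status
    FROM results GROUP BY host ORDER BY host
""").fetchall()
print(rows)
```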
I've been looking at several articles on Stack Overflow, but none has been exactly specific to what I need. I have a table with application names, teams, service, directorate and username. I want to bring back the application name, team, service and directorate of the highest-used location (team, service, directorate) based on users, i.e. the user count.

```
SELECT [ApplicationName]
      ,[Team]
      ,[Service]
      ,[Directorate]
      ,count(distinct username) Usercount
FROM [Windows7data].[dbo].[devices_users_apps_detail] a
GROUP BY [ApplicationName]
        ,[Team]
        ,[Service]
        ,[Directorate]
ORDER BY [ApplicationName], count(distinct username) desc;
```

I have played with adding nested subqueries, HAVING statements, etc. to the above, but this has not worked ([Using sub-queries in SQL to find max(count())](https://stackoverflow.com/questions/18740563/using-sub-queries-in-sql-to-find-maxcount))
You can use the Analytic function [RANK](http://technet.microsoft.com/en-us/library/ms176102.aspx) to put your team/service/directorate combination in order of number users by Application Name, then just select the top one for each. The key is that `ApplicationName` appears in the group by clause but not in the Partition by clause of the Rank function. ``` SELECT [ApplicationName] ,[Team] ,[Service] ,[Directorate] ,UserCount FROM ( SELECT [ApplicationName] ,[Team] ,[Service] ,[Directorate] ,COUNT(DISTINCT username) Usercount, [Rank] = RANK() OVER(PARTITION BY [Team], [Service], [Directorate] ORDER BY COUNT(DISTINCT UserName) DESC) FROM [Windows7data].[dbo].[devices_users_apps_detail] a GROUP BY [ApplicationName], [Team], [Service], [Directorate] ) t WHERE t.[Rank] = 1 ORDER BY [ApplicationName], UserCount DESC; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/90103/1)** --- I can't actually work out which way round you want this from the question so I will post both: ``` SELECT [ApplicationName] ,[Team] ,[Service] ,[Directorate] ,UserCount FROM ( SELECT [ApplicationName] ,[Team] ,[Service] ,[Directorate] ,COUNT(DISTINCT username) Usercount, [Rank] = RANK() OVER(PARTITION BY [ApplicationName] ORDER BY COUNT(DISTINCT UserName) DESC) FROM [Windows7data].[dbo].[devices_users_apps_detail] a GROUP BY [ApplicationName], [Team], [Service], [Directorate] ) t WHERE t.[Rank] = 1 ORDER BY [ApplicationName], UserCount DESC; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!3/c9ac0/2)**
``` SELECT TOP(1) [ApplicationName] ,[Team] ,[Service] ,[Directorate] ,count(distinct username) Usercount FROM [Windows7data].[dbo].[devices_users_apps_detail] a group by [ApplicationName] ,[Team] ,[Service] ,[Directorate] order by Usercount desc; ```
SQL group by max count
[ "sql", "sql-server" ]
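The `RANK() OVER (PARTITION BY ...)` approach from the accepted answer can be sketched with Python/sqlite3 (requires SQLite 3.25+ for window functions; the rows are hypothetical, and the sketch keeps only `Team` out of team/service/directorate and precomputes the counts in an inner subquery rather than using an aggregate inside the window clause):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE detail (ApplicationName TEXT, Team TEXT, username TEXT);
INSERT INTO detail VALUES
  ('Word','Alpha','u1'),('Word','Alpha','u2'),('Word','Beta','u3'),
  ('Excel','Beta','u1'),('Excel','Beta','u4'),('Excel','Alpha','u5');
""")

# Count distinct users per (app, team), rank teams within each app by
# that count, then keep only the top-ranked team per application.
rows = cur.execute("""
    SELECT ApplicationName, Team, Usercount FROM (
        SELECT ApplicationName, Team, Usercount,
               RANK() OVER (PARTITION BY ApplicationName
                            ORDER BY Usercount DESC) AS rnk
        FROM (SELECT ApplicationName, Team,
                     COUNT(DISTINCT username) AS Usercount
              FROM detail GROUP BY ApplicationName, Team)
    ) WHERE rnk = 1
    ORDER BY ApplicationName
""").fetchall()
print(rows)
```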