I've recently imported some spatial data into SQL 2008 from SDF. During the import process, DateTime fields were imported as nvarchar(254). An example of how the data was imported is this: `'20130515103000'` In setting up my view, I used `SELECT CAST(survey_date AS DATETIME) AS Expr1` and have the following Error: > Conversion failed when converting date and/or time from character string. From what I can tell, it looks like I may need to reformat my data to conform to the ISO-8601 format before casting or converting the data. I'm not sure how to go about doing this.
It looks like you need some string manipulation as your date string isn't in a recognized format. There might be a simpler way, but this works in SQL Server: ``` DECLARE @string VARCHAR(255) = '20130515103000' SELECT CAST(LEFT(@string,8)+' '+SUBSTRING(@string,9,2)+':'+SUBSTRING(@string,11,2)+'.'+RIGHT(@string,2) AS DATETIME) ``` Note, I'm assuming the format of your string is "yyyyMMDDHHMMSS" and using 24 hours since AM/PM is not indicated. Update: The variable is just for testing, to implement it just replace the variable with your datetime string field: ``` SELECT CAST(LEFT(survey_date,8)+' '+SUBSTRING(survey_date,9,2)+':'+SUBSTRING(survey_date,11,2)+'.'+RIGHT(survey_date,2) AS DATETIME) AS Expr1 ```
This is using `Stuff()` function. First change `yyyymmddHHMMSS` to `yyyymmdd HH:MM:SS` and then convert it to a `Datetime`. ``` --Example: Declare @mydate nvarchar(250) = '20130515103000' Select convert(datetime, stuff(stuff(stuff(@mydate, 9, 0,' '), 12,0,':'), 15,0,':')) --Applied to your table column Select convert(datetime, stuff(stuff(stuff(survey_date, 9, 0,' '), 12,0,':'), 15,0,':')) AS Expr1 From yourTable ``` **[Fiddle demo](http://sqlfiddle.com/#!3/d41d8/26167)**
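The nested `Stuff()` calls above can be hard to follow. As a side note (not part of either answer), here is a small Python sketch that mirrors T-SQL's 1-based `STUFF` semantics and shows the same transformation, assuming the "yyyymmddHHMMSS" layout from the question:

```python
from datetime import datetime

def stuff(s: str, start: int, length: int, insert: str) -> str:
    # Mirrors T-SQL STUFF(): 1-based `start`, delete `length` chars, insert `insert`.
    return s[:start - 1] + insert + s[start - 1 + length:]

raw = '20130515103000'
# Same three insertions as the SQL: a space, then two colons.
formatted = stuff(stuff(stuff(raw, 9, 0, ' '), 12, 0, ':'), 15, 0, ':')
parsed = datetime.strptime(formatted, '%Y%m%d %H:%M:%S')
# formatted == '20130515 10:30:00'
```

Each `stuff(..., n, 0, x)` call deletes nothing and inserts one separator, which is why the later positions (12 and 15) already account for the characters inserted before them.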
CAST nvarchar to DATETIME
[ "", "sql", "sql-server", "datetime", "" ]
I am trying to delete a massive number of users whose email addresses are from the domain "shopchristianpump". I have a WordPress site, and in the MySQL database I'm trying to just list them all first: ``` SELECT * FROM `wp_users` WHERE `user_email` like 'shopchristianpump' ``` I know there are users in there with this domain, however the query returns zero results, e.g. 5r8c32cYon at shopchristianpump and 677digqZ at shopchristianpump. Can you help with what the DELETE statement would be to remove all such users? Cheers
``` DELETE FROM wp_users WHERE user_email like '%@shopchristianpump%'; ```
Use the wildcard operator (`%`). The wildcard means that anything can appear in its place. ``` DELETE FROM `wp_users` WHERE `user_email` LIKE '%@shopchristianpump.com'; ``` If shopchristianpump's domain name is different than .com, then replace it.
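Both answers hinge on the `%` wildcard; a quick way to see the difference is to run the two patterns against an in-memory SQLite database from Python (SQLite's `LIKE` behaves the same way here; the table rows are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wp_users (user_email TEXT)")
con.executemany("INSERT INTO wp_users VALUES (?)",
                [("5r8c32cYon@shopchristianpump.com",),
                 ("jane@example.com",)])

# Exact match (the original attempt) finds nothing...
exact = con.execute(
    "SELECT COUNT(*) FROM wp_users WHERE user_email LIKE 'shopchristianpump'"
).fetchone()[0]

# ...while the wildcard pattern matches the whole address.
con.execute("DELETE FROM wp_users WHERE user_email LIKE '%@shopchristianpump%'")
remaining = [r[0] for r in con.execute("SELECT user_email FROM wp_users")]
```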
Delete Users from SQL database with a email like
[ "", "mysql", "sql", "wordpress", "" ]
I have these tables: ``` user id name visit id id_user (fk user.id) date comment ``` If I execute this query, ``` SELECT u.id, u.name, e.id, e.date, e.comment FROM user u LEFT JOIN visit e ON e.id_user=u.id ``` I get, ``` 1 Jhon 1 2013-12-01 '1st Comment' 1 Jhon 2 2013-12-03 '2nd Comment' 1 Jhon 3 2013-12-01 '3rd Comment' ``` If I `GROUP BY u.id`, then I get ``` 1 Jhon 1 2013-12-01 '1st Comment' ``` I need the last visit from Jhon ``` 1 Jhon 3 2013-12-04 '3rd Comment' ``` I tried this ``` SELECT u.id, u.name, e.id, MAX(e.date), e.comment FROM user u LEFT JOIN visit e ON e.id_user=u.id GROUP BY u.id ``` And this, ``` SELECT u.id, u.name, e.id, MAX(e.date), e.comment FROM user u LEFT JOIN visit e ON e.id_user=u.id GROUP BY u.id HAVING MAX(e.date) ``` And I get ``` 1 Jhon 1 2013-12-04 '1st Comment' ``` But this is not valid for me... I need the last visit from this user ``` 1 Jhon 3 2013-12-01 '3rd Comment' ``` Thanks!
This should give you the last comment for every user: ``` SELECT u.id, u.name, e.id, e.date, e.comment FROM user u LEFT JOIN (SELECT t1.* FROM visit t1 LEFT JOIN visit t2 ON t1.id_user = t2.id_user AND t1.date < t2.date WHERE t2.id_user IS NULL ) e ON e.id_user=u.id ```
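The self-join trick ("keep the visit that has no later visit for the same user") can be verified with Python's sqlite3 module. The sample rows below put the 3rd comment on 2013-12-04, matching the question's desired output (the question's two listings disagree on the date):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user (id INTEGER, name TEXT);
CREATE TABLE visit (id INTEGER, id_user INTEGER, date TEXT, comment TEXT);
INSERT INTO user VALUES (1, 'Jhon');
INSERT INTO visit VALUES
  (1, 1, '2013-12-01', '1st Comment'),
  (2, 1, '2013-12-03', '2nd Comment'),
  (3, 1, '2013-12-04', '3rd Comment');
""")

# The inner self-join keeps only visit rows with no later visit for the same user.
row = con.execute("""
SELECT u.id, u.name, e.id, e.date, e.comment
FROM user u
LEFT JOIN (SELECT t1.*
           FROM visit t1
           LEFT JOIN visit t2
             ON t1.id_user = t2.id_user AND t1.date < t2.date
           WHERE t2.id_user IS NULL) e
  ON e.id_user = u.id
""").fetchone()
# row == (1, 'Jhon', 3, '2013-12-04', '3rd Comment')
```

Note that if two visits share the latest date, this pattern returns both of them.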
``` SELECT u.id, u.name, e.id, e.date, e.comment FROM user u LEFT JOIN visit e ON e.id_user=u.id ORDER BY e.date desc LIMIT 1; ```
GROUP BY LAST DATE MYSQL
[ "", "mysql", "sql", "date", "group-by", "having", "" ]
Currently I have this table (#tmp) in my TSQL query: ``` | a | b | |:---|---:| | 1 | 2 | | 1 | 3 | | 4 | 5 | | 6 | 7 | | 9 | 7 | | 4 | 0 | ``` This table contains IDs of rows that I want to delete from another table. The thing is, I cannot have the same 'a' matching up with multiple 'b' and vice-versa, a single 'b' cannot match up with multiple 'a'. So essentially I need to remove the (1,3), (9,7), and (4,0) because either their 'a' or 'b' has already been used. I'm using the code below to try and do this but it seems like if a given 'a' has multiple corresponding 'b' that are higher AND lower than 'a' it causes an issue. ``` IF OBJECT_ID('tempdb..#tmp') IS NOT NULL DROP TABLE #tmp IF OBJECT_ID('tempdb..#KeysToDelete') IS NOT NULL DROP TABLE #KeysToDelete CREATE TABLE #tmp (a int, b int) INSERT INTO #tmp (a, b) VALUES (1,2), (1,3),(4,5),(6,7), (9,7), (4,0) SELECT * FROM #tmp -- Get the minimum b for each a select distinct a, (SELECT MIN(b) FROM #tmp t2 WHERE t2.a = t1.a) AS b INTO #KeysToDelete FROM #tmp t1 WHERE t1.a < t1.b -- Get the minimum a for each b INSERT INTO #KeysToDelete select distinct (SELECT MIN(a) FROM #tmp t2 WHERE t2.a = t1.a) AS a, b FROM #tmp t1 WHERE t1.a > t1.b SELECT DISTINCT a, b FROM #KeysToDelete ORDER BY 1, 2 ``` The output is this: ``` | a | b | |:---|---:| | 1 | 2 | | 4 | 0 | | 6 | 7 | | 9 | 7 | ``` But I really want this: ``` | a | b | |:---|---:| | 1 | 2 | -- it would match requirements if this were (1,3) instead | 4 | 5 | -- it would match requirements if this were (4,0) instead | 6 | 7 | -- it would match requirements if this were (9,7) instead ``` If anyone has any idea how I might be able to fix this it would be much appreciated! I know this is a long involved questions, but any suggestions you may have would be great! Thanks!
try this: ``` Select * from #tmp t Where Not exists(select * from #tmp where b = t.b and a < t.a) and Not exists(select * from #tmp where a = t.a and b < t.b) ```
Your problem statement is not well-defined. There are multiple ways that you could remove rows from `#tmp`, enforcing the uniqueness of columns A and B. Here is one approach, which is to make A unique and then make B unique: ``` with todelete as ( select t.*, row_number() over (partition by a order by newid()) as a_seqnum from #tmp t ) delete from todelete where a_seqnum > 1; with todelete as ( select t.*, row_number() over (partition by b order by newid()) as b_seqnum from #tmp t ) delete from todelete where b_seqnum > 1; ```
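Both answers implement some deterministic "first one wins" rule. The same rule is easy to state procedurally; a Python sketch (keeping the first pair, in row order, whose keys are both unused) reproduces the desired output from the question:

```python
rows = [(1, 2), (1, 3), (4, 5), (6, 7), (9, 7), (4, 0)]

seen_a, seen_b, kept = set(), set(), []
for a, b in rows:
    # Keep the pair only if neither key has been used by an earlier kept pair.
    if a not in seen_a and b not in seen_b:
        kept.append((a, b))
        seen_a.add(a)
        seen_b.add(b)
# kept == [(1, 2), (4, 5), (6, 7)]
```

Unlike the `order by newid()` variant, this is first-wins in row order, so the result is reproducible.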
Avoiding duplicates in my result table
[ "", "sql", "t-sql", "match", "" ]
I'm having a problem with the query builder in MS Access 2010. I'm not able to join two tables where one has two references to the other. I have one table, Workers, with two columns, HumanFactor1 and HumanFactor2. Both of these point to table HumanFactors which consists of only two fields, ID and HumanFactor. This table is used to populate two combobox selectors for the user, and those selections are then stored as FKs in Worker. Using the query builder, it automatically creates the following SQL ``` SELECT Incident.*, Worker.* FROM HumanFactors RIGHT JOIN (Incident LEFT JOIN Worker ON Incident.ID = Worker.IncidentID) ON (HumanFactors.ID = Worker.HumanFactor2) AND (HumanFactors.ID = Worker.HumanFactor1) WHERE (((HumanFactors.HumanFactor)="Fatigue")); ``` This doesn't work, because those two are always going to be different in practice, so the only returned results are records where I've forced both HumanFactors to be the same. This is incredibly easy to fix via SQL by simply changing to an OR expression ``` SELECT Incident.*, Worker.* FROM HumanFactors RIGHT JOIN (Incident LEFT JOIN Worker ON Incident.ID = Worker.IncidentID) ON (HumanFactors.ID = Worker.HumanFactor2) OR (HumanFactors.ID = Worker.HumanFactor1) WHERE (((HumanFactors.HumanFactor)="Fatigue")); ``` This gives me exactly what I'm looking for, but I can't figure out how to create an OR instance in the Builder. The application freaks out when I try and go back to design view. Is there a way to handle this via the builder without resorting to changing the SQL? My users refuse to do anything more complicated than drag&drop. Or will I have to create a linking table in between? My problem there is that while a linking table makes the query builder work perfectly fine, I'm at a loss to reconnect the data to the controls. There are only two comboboxes, instead of one list that you would expect for that setup. Thanks.
You can add multiple instances of a table in a query that you create with the Query Builder. When you add the table the first time its alias will just be the table name (e.g., [HumanFactors]) and when you add it the second time its name will be the table name with `_1` appended. You can change the aliases by clicking on the table and then opening the Properties pane (see screenshot below). In your case I believe that the query would look something like this: ![QueryBuilder.png](https://i.stack.imgur.com/oqVxr.png) (To see a larger version of the screenshot, right-click it and choose "View Image".)
How 'bout this: ![enter image description here](https://i.stack.imgur.com/IqiAX.png) This will give you all the records where either human factor is Fatigue. Not exactly the way you had it, but might serve.
MS Access Inner Join Or
[ "", "sql", "ms-access", "ms-access-2010", "" ]
I think I am not doing the best thing I can with my DB architecture. I am using MySQL, and I have 2 main tables: IMEIs and users. IMEIs structure: ``` +---------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +---------+-------------+------+-----+---------+-------+ | imei | varchar(20) | NO | PRI | NULL | | | user_id | int(11) | NO | MUL | NULL | | +---------+-------------+------+-----+---------+-------+ ``` users structure: ``` +-------+---------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------+---------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | pass | varchar(2000) | NO | | NULL | | +-------+---------------+------+-----+---------+----------------+ ``` These tables join so that each user can have multiple IMEIs, and I think this structure is just fine. The problem is that I receive data from each IMEI and want to save it. I can't save all the IMEI data in one table, since that would be a huge table. Consider that each IMEI sends data every 30 seconds; that would be 120 (per hour) * 24 * 30 * 1000 = 86,400,000 rows a month, which is quite huge for a table. I want to create an individual table for each IMEI named `data_{IMEI}`, but that would create many tables. I have no idea how efficient MySQL is with many tables, and here we are talking about roughly 1000 of them (my estimate). This is not a problem for now, but after 6 months or a year I may run into problems, so I have to design for it in advance. Thanks in advance.
> I can't save all IMEIs data in one table, since that will be huge table. If there is something relational databases are good at, it's managing a huge number of rows in a single table. Assuming you use indexing properly, performance will scale **logarithmically** (i.e. the time to do various operations will rise much more slowly than the amount of data grows). What you tried to do is in effect a form of (horizontal) **partitioning**. Fortunately, you can [let the DBMS do that for you](http://dev.mysql.com/doc/refman/5.7/en/partitioning.html), while still keeping one "logical" table and avoiding the complications your "manual partitioning" would entail. Partitioning can help with performance, and can also help in situations when a single table is so huge that it outgrows the capacity of a single physical drive, by [putting different partitions on separate physical drives](https://stackoverflow.com/a/8820437/533120)1. --- *1 [Unfortunately](http://dev.mysql.com/doc/refman/5.7/en/partitioning-overview.html): "The DATA DIRECTORY and INDEX DIRECTORY options have no effect when defining partitions for tables using the InnoDB storage engine." I suggest you use a more capable DBMS than MySQL if your table becomes that huge*
`I can't save all IMEIs data in one table, since that will be huge table.` Why do you think this? One big table is often better than a database full of tables. If each table would have the same structure, seriously consider making one big table. Then, if it goes slow, you can go about partitioning it (into PHYSICALLY separate parts), while keeping the logical table as one table. Premature optimization is the root of all evil ;)
What do you think about my DB architecture?
[ "", "mysql", "sql", "database", "" ]
I'm building a SSIS package where I need to get data from the day before at 4:15:01 pm to today's date at 4:15:00 pm, but so far the only query that I know is how to get the day before. I am not sure how to also add hour, minute, and second to the same query. Can someone please show me how to add the hour, minute, and second to this sql query? Below is the query I have so far. ``` SELECT Posted_Date, Total_Payment FROM Table1 WHERE Posted_Date >= dateadd(day, datediff(day, 1, Getdate()), 0) and Posted_date < dateadd(day, datediff(day, 0, getdate()), 0) order by posted_date ```
Be very careful about precision here - saying you want things from 4:15:01 PM yesterday, means that at some point you could possibly lose data (e.g. 4:15:00.500 PM). Much better to use an open-ended range, and I typically like to calculate that boundary outside of the query: ``` DECLARE @today DATETIME, @today_at_1615 DATETIME; SELECT @today = DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0), @today_at_1615 = DATEADD(MINUTE, 16.25*60, @today); SELECT Posted_Date, Total_Payment FROM dbo.Table1 WHERE Posted_Date > DATEADD(DAY, -1, @today_at_1615) AND Posted_Date <= @today_at_1615 ORDER BY Posted_date; ``` You should also avoid using `DATEDIFF` in queries like this - there is [a cardinality estimation bug](http://support.microsoft.com/kb/2481274) that can [really affect the performance of your query](http://www.sqlperformance.com/2013/09/t-sql-queries/datediff-bug). I don't believe the bug affects SQL Server 2005, but if you wanted to be ultra-safe you could change it to the slightly more expensive: ``` SELECT @today = CONVERT(CHAR(8), GETDATE(), 112), ``` And in either case, you should mark this code with some kind of flag, so that when you do get onto SQL Server 2008 or later, you can update it to use the much more optimal: ``` SELECT @today = CONVERT(DATE, GETDATE()), ``` # SSIS things I created an SSIS package with an `Execute SQL Task` that created my table and populated it with data which was then used by a `Data Flow Task` ![enter image description here](https://i.stack.imgur.com/QSzYw.png) ## Execute SQL Task I created an Execute SQL Task, connected to an OLE DB Connection Manager and used the following direct input. 
``` -- This script sets up a table for consumption by the DFT IF EXISTS ( SELECT * FROM sys.tables AS T WHERE T.name = 'Table1' AND T.schema_id = SCHEMA_ID('dbo') ) BEGIN DROP TABLE dbo.Table1; END; CREATE table dbo.Table1 ( Posted_Date datetime NOT NULL , Total_Payment int NOT NULL ); INSERT INTO dbo.Table1 ( Posted_Date , Total_Payment ) SELECT DATEADD(minute, D.rn, DATEADD(d, -1, CURRENT_TIMESTAMP)) AS Posted_Date , D.rn FROM ( -- 2 days worth of data SELECT TOP (60*24*2) DI.rn FROM ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn FROM sys.all_columns AS AC ) DI ) D; ``` Right click on the Execute SQL Task and execute it. This ensures the table is created so that we can work with it in the next step. ## Data Flow Task I created a `Data Flow Task` and set the `DelayValidation` property to True since my connection manager is pointed at tempdb. This is likely not needed in the real world. I added an OLE DB Source component and configured it to use the first query ![enter image description here](https://i.stack.imgur.com/E6ih8.png) I then added a Derived Column to allow me to attach a data viewer to the flow and fired the package off. You can observe that the last value streaming through is as expected. ![enter image description here](https://i.stack.imgur.com/iqiQt.png)
I'd think this would work similarly. ``` DECLARE @today DATETIME, @today_at_1615 DATETIME; SELECT @today = CONVERT(DATE, GETDATE()), @today_at_1615 = DATEADD(MINUTE, 16.25*60, @today); SELECT Posted_Date, Total_Payment FROM dbo.Table1 WHERE Posted_Date between DATEADD(DAY, -1, @today_at_1615) AND @today_at_1615 ORDER BY Posted_date ``` Would've made a comment about using BETWEEN but I don't have the reps for it. Is there a reason for not using the between clause?
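The boundary arithmetic in both answers (midnight today plus 16.25 hours, i.e. 975 minutes) and the open-ended comparison can be sanity-checked outside SQL; a Python sketch with a made-up "current" timestamp standing in for `GETDATE()`:

```python
from datetime import datetime, timedelta

now = datetime(2013, 12, 5, 9, 0)            # pretend GETDATE()
today = now.replace(hour=0, minute=0, second=0, microsecond=0)
today_at_1615 = today + timedelta(minutes=int(16.25 * 60))   # 975 minutes = 16:15
lower = today_at_1615 - timedelta(days=1)

def in_range(ts):
    # Open-ended range: strictly after yesterday 16:15, up to and including today 16:15.
    return lower < ts <= today_at_1615
```

The half-open comparison is exactly what keeps the 4:15:00.500 PM row from falling through the cracks, which is the risk the accepted answer warns about.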
Sql query to select data from the day before at 4:15:01 pm to today at 4:15:00 pm
[ "", "sql", "sql-server", "sql-server-2005", "ssis", "" ]
Here's my situation (or **see TLDR at bottom**): I'm trying to make a system that will search for user-entered words through several documents and return the documents that contain those words. The user(s) will be searching through thousands of documents, each of which will be 10 - 100+ pages long, and stored on a webserver. The solution I have right now is to store each unique word in a table with an ID (only maybe 120 000 relevant words in the English language), and then in a separate table store the word id, the document it is in, and the number of times it appears in that document. E.g.: Document foo's text is > abc abc def and document bar's text is > abc def ghi Documents table will have **id** | **name** ``` 1 'foo' 2 'bar' ``` Words table: **id** | **word** ``` 1 'abc' 2 'def' 3 'ghi' ``` Word Document table: **word id** | **doc id** | **occurrences** ``` 1 1 2 1 2 1 2 1 1 2 2 1 3 2 1 ``` As you can see, when you have thousands of documents and each has thousands of unique words, the Word Document table blows up very quickly and takes way too long to search through. **TL;DR My question is this:** How can I store searchable data from large documents in an SQL database, while retaining the ability to use my own search algorithm (I am aware SQL has one built in for .docs and pdfs) based on custom factors (like occurrence, as well as others) without having an outright massive table for all the entries linking each word to a document and its properties in that document? Sorry for the long read and thanks for any help!
Rather than building your own search engine using SQL Server, have you considered using a C# .NET implementation of the Lucene search APIs? Have a look at <https://github.com/apache/lucene.net>
Good question. I would piggyback on the existing solution of SQL Server (full text indexing). It has an integrated indexing engine which optimises considerably better than your own code probably could (unless the developers at Microsoft were lazy, or only got a dime to build it :-) Please see the [SQL server](http://technet.microsoft.com/en-us/library/cc879306.aspx) text indexing background. You could query views such as sys.fulltext_index_fragments or use stored procedures. Of course, piggybacking on an existing solution has some drawbacks: 1. You need to have a license for the solution. 2. When your needs can no longer be served, you will have to program it all yourself. But if you let SQL Server do the indexing, you can build your own solution on top of it more easily and in less time.
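The schema described in the question is a classic inverted index. A small Python sketch that rebuilds the question's three example tables from the two sample documents shows how little machinery the structure itself needs:

```python
from collections import Counter

docs = {"foo": "abc abc def", "bar": "abc def ghi"}

# Build the three tables from the question: documents, words, word_document.
doc_ids = {name: i for i, name in enumerate(docs, start=1)}
word_ids = {}
word_document = []   # rows of (word_id, doc_id, occurrences)
for name, text in docs.items():
    for word, count in Counter(text.split()).items():
        word_id = word_ids.setdefault(word, len(word_ids) + 1)
        word_document.append((word_id, doc_ids[name], count))
# word_document == [(1, 1, 2), (2, 1, 1), (1, 2, 1), (2, 2, 1), (3, 2, 1)]
```

This reproduces the Word Document table from the question exactly; the scaling concern is about the row count of `word_document`, which is what full-text indexing or Lucene manage for you.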
SQL Most effective way to store every word in a document separately
[ "", "sql", "sql-server", "database", "search", "document", "" ]
I have 2 columns in MS SQL: one is a serial number and the other is a value. I need a third column which gives me the sum of the value in that row and the next 2. Example: ``` SNo values 1 2 2 3 3 1 4 2 5 6 7 9 8 3 9 2 ``` So I need a third column which has the sum of 2+3+1, 3+1+2 and so on; the 8th and 9th rows will not have any values: ``` 1 2 6 2 3 6 3 1 4 4 2 5 5 1 6 7 2 7 8 3 9 2 ``` Can the solution be generic so that I can vary the window size from adding 3 numbers to a bigger number, say 60?
Here is the [**SQL Fiddle**](http://sqlfiddle.com/#!3/87dcd/82/0) that demonstrates the following query: ``` WITH TempS as ( SELECT s.SNo, s.value, ROW_NUMBER() OVER (ORDER BY s.SNo) AS RowNumber FROM MyTable AS s ) SELECT m.SNo, m.value, ( SELECT SUM(s.value) FROM TempS AS s WHERE RowNumber >= m.RowNumber AND RowNumber <= m.RowNumber + 2 ) AS Sum3InRow FROM TempS AS m ``` In your question you were asking to sum 3 consecutive values. You modified your question saying the number of consecutive records you need to sum could change. In the above query you simple need to change the `m.RowNumber + 2` to what ever you need. So if you need 60, then use ``` m.RowNumber + 59 ``` As you can see it is very flexible since you only have to change one number.
In case the `sno` field is not sequential, you can use `row_number()` with aggregation: ``` with ss as ( select sno, values, row_number() over (order by sno) as seqnum from s ) select s1.sno, s1.values, (case when count(s2.values) = 3 then sum(s2.values) end) as avg3 from ss s1 left outer join ss s2 on s2.seqnum between s1.seqnum - 2 and s1.seqnum group by s1.sno, s1.values; ```
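The forward-looking window sum both answers compute is easy to prototype; a Python sketch with a parameterised window size, using the values from the question's expected-output table (the question's two listings disagree slightly):

```python
def forward_window_sums(values, k):
    """Sum of each value and the next k-1 values; None when fewer than k remain."""
    return [sum(values[i:i + k]) if i + k <= len(values) else None
            for i in range(len(values))]

vals = [2, 3, 1, 2, 1, 2, 3, 2]   # the 'values' column in row order
sums = forward_window_sums(vals, 3)
# sums == [6, 6, 4, 5, 6, 7, None, None]
```

Changing `k` to 60 gives the larger window, just as changing `m.RowNumber + 2` does in the accepted answer.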
Moving Average / Rolling Average
[ "", "sql", "sql-server", "sql-server-2008", "statistics", "subquery", "" ]
I have one table t1 with columns clientdivisionid, planid, memberid, and the values are 154 | 722 | 27510, 154 | 722 | 22222, 154 | 725 | 27510. I need to pull only members where planid = 722 who are not part of plan 725, so the result should be 154 | 722 | 22222. Here is my SQL: ``` Select memberid,planid from t1 group by memberid,planid having ((planid = 722) and (planid <> 725 )) ``` When I use a WHERE or HAVING clause it still pulls in member id 27510.
``` SELECT t1.clientdivisionid, t1.planid, t1.memberid FROM t1 LEFT JOIN ( SELECT memberid FROM t1 WHERE planid = 725 ) AS subExclude ON t1.memberid = subExclude.memberid WHERE t1.planid = 722 AND subExclude.memberid Is Null; ``` If you want to alter the selection to `planid = 722` and not 725 and not 728 and not 727, change the `WHERE` clause in the subquery to this: ``` WHERE planid IN (725, 728, 727) ```
Try doing: ``` SELECT memberid,planid FROM tab1 t1 WHERE planid = 722 AND NOT EXISTS ( SELECT 1 FROM tab1 t2 WHERE t1.memberid = t2.memberid AND t2.planid = 725 ) ``` `sqlfiddle demo` EDIT: If you only want to select the ones that have planid = 722, and no other planid you can put planid <> 722 in the subquery: ``` SELECT memberid, planid FROM tab1 t1 WHERE planid = 722 AND NOT EXISTS ( SELECT 1 FROM tab1 t2 WHERE t1.memberid = t2.memberid AND t2.planid != 722 ) ``` `sqlfiddle demo` This can be quite efficient because it stops looking as soon as it finds one value in the subquery that has planid != 722.
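The `NOT EXISTS` pattern is portable, so the question's exact rows can be checked with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (clientdivisionid INT, planid INT, memberid INT)")
con.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                [(154, 722, 27510), (154, 722, 22222), (154, 725, 27510)])

# Member 27510 also has plan 725, so only 22222 should survive.
rows = con.execute("""
SELECT memberid, planid
FROM t1
WHERE planid = 722
  AND NOT EXISTS (SELECT 1 FROM t1 t2
                  WHERE t1.memberid = t2.memberid AND t2.planid = 725)
""").fetchall()
# rows == [(22222, 722)]
```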
How to exclude results from my resultset based on a different row?
[ "", "sql", "ms-access", "" ]
I'm trying to restore my dump file, but it caused an error: ``` psql:psit.sql:27485: invalid command \N ``` Is there a solution? I searched, but I didn't get a clear answer.
Postgres uses `\N` as a substitute symbol for a NULL value, but all psql commands start with a backslash `\` symbol. You can get these messages when a COPY statement fails but the loading of the dump continues. This message is a false alarm. You have to search the lines prior to this error if you want to see the real reason why the COPY statement failed. It is possible to switch psql to "stop on first error" mode to find the error: ``` psql -v ON_ERROR_STOP=1 ```
I received the same error message when trying to restore from a binary pg_dump. I simply used [`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore my dump and completely avoid the `\N` errors, e.g. `pg_restore -c -F t -f your.backup.tar` Explanation of switches: ``` -f, --file=FILENAME output file name -F, --format=c|d|t backup file format (should be automatic) -c, --clean clean (drop) database objects before recreating ```
psql invalid command \N while restore sql
[ "", "sql", "postgresql", "dump", "" ]
I'm inserting a bunch of rows into another table, and want to generate a unique batch id for every X rows inserted (in this case X will be 100 or so). So if I'm inserting 1000 rows, the first 100 rows will have batch_id = 1, the next 100 will have batch_id = 2, etc. ``` INSERT INTO BatchTable(batch_id, col1) SELECT batchId, col1 -- how to generate batchId??? FROM OtherTable ```
We can take advantage of integer division here for a simple way to round up to the next 100: ``` ;WITH x AS ( SELECT col1, rn = ROW_NUMBER() OVER (ORDER BY col1) FROM dbo.OtherTable ) --INSERT dbo.BatchTable(batch_id, col1) SELECT batch_id = (99+rn)/100, col1 FROM x; ``` When you're happy with the output, uncomment the `INSERT`...
Try this using `row_number()` function: ``` declare @batchGroup int = 100 Insert into BatchTable(batch_id, col1) Select ((row_number() over (order by col1)-1)/@batchGroup)+ 1 As batch_id, col1 From OtherTable ```
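Both answers reduce to the same integer-division formula over `ROW_NUMBER()`. A Python sketch of the arithmetic (Python's `//` truncates just like T-SQL integer division does for the non-negative row numbers involved):

```python
def batch_id(row_number, batch_size=100):
    """row_number is 1-based, as from ROW_NUMBER(); maps rows 1..100 to batch 1, etc."""
    return (row_number - 1) // batch_size + 1

ids = [batch_id(rn) for rn in (1, 100, 101, 250, 1000)]
# ids == [1, 1, 2, 3, 10]
```

The `(99 + rn) / 100` form in the first answer is algebraically the same mapping.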
SQL server: Inserting a bunch of rows into a table, generating a "batch id" for every 100?
[ "", "sql", "sql-server", "" ]
I'm doing some work in MS Access and I need to append a prefix to a bunch of fields, I know SQL but it doesn't quite seem to work the same in Access Basically I need this translated to a command that will work in access: ``` UPDATE myTable SET [My Column] = CONCAT ("Prefix ", [My Column]) WHERE [Different Column]='someValue'; ``` I've searched up and down and can't seem to find a simple translation.
``` UPDATE myTable SET [My Column] = "Prefix " & [My Column] WHERE [Different Column]='someValue'; ``` As far as I am aware there is no CONCAT
There are two concatenation operators available in Access: `+`; and `&`. They differ in how they deal with Null. `"foo" + Null` returns Null `"foo" & Null` returns `"foo"` So if you want to update Null `[My Column]` fields to contain `"Prefix "` afterwards, use ... ``` SET [My Column] = "Prefix " & [My Column] ``` But if you prefer to leave it as Null, you could use the `+` operator instead ... ``` SET [My Column] = "Prefix " + [My Column] ``` However, in the second case, you could revise the `WHERE` clause to ignore rows where `[My Column]` contains Null. ``` WHERE [Different Column]='someValue' AND [My Column] Is Not Null ```
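The Null behaviour described above is easy to demonstrate with standard SQL, where `||` propagates Null the way Access's `+` does. A sqlite3 sketch (with `IFNULL` standing in for the `&` operator's treat-Null-as-empty behaviour):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Standard SQL '||' propagates NULL, like Access's '+' operator:
plus_style = con.execute("SELECT 'Prefix ' || NULL").fetchone()[0]

# Access's '&' treats NULL as an empty string; IFNULL emulates that here:
amp_style = con.execute("SELECT 'Prefix ' || IFNULL(NULL, '')").fetchone()[0]
# plus_style is None, amp_style == 'Prefix '
```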
CONCAT equivalent in MS Access
[ "", "sql", "ms-access", "ms-access-2010", "" ]
I'm looking for how I can combine "IN" and "LIKE" in a query like this one: ``` SELECT * FROM Users WHERE Location LIKE IN (SELECT name FROM CITIES) ``` (I want to retrieve users who have a city mentioned in the table "cities".) I get an error in my SQL syntax (I'm using MySQL). Thank you.
``` SELECT u.* FROM Users u INNER JOIN Cities c ON u.Location like concat('%',c.Name,'%') ```
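The LIKE-join idea can be exercised in any engine; a sqlite3 sketch (SQLite concatenates with `||` where MySQL uses `CONCAT`; the sample rows are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Users (Location TEXT);
CREATE TABLE Cities (Name TEXT);
INSERT INTO Users VALUES ('Paris, France'), ('Nowhere');
INSERT INTO Cities VALUES ('Paris');
""")

# Join on a LIKE pattern built from the city name.
rows = con.execute("""
SELECT u.Location
FROM Users u
INNER JOIN Cities c ON u.Location LIKE '%' || c.Name || '%'
""").fetchall()
# rows == [('Paris, France',)]
```

Note that such a join cannot use an index on `Location`, so it scans every pair; for large tables an exact-match join as in the other answer is far cheaper.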
Not sure on MySQL syntax but try ``` SELECT * FROM Users INNER JOIN Cities ON Users.Location = Cities.Name ```
combine between IN and Like in a request Mysql
[ "", "mysql", "sql", "request", "" ]
I have a language table and want retrieve specific records for a selected language. However, when there is no translation present I want to get the translation of another language. **TRANSLATIONS** ``` TAG LANG TEXT "prog1" | 1 | "Programmeur" "prog1" | 2 | "Programmer" "prog1" | 3 | "Programista" "prog2" | 1 | "" "prog2" | 2 | "Category" "prog2" | 3 | "Kategoria" "prog3" | 1 | "Actie" "prog3" | 2 | "Action" "prog3" | 3 | "Dzialanie" ``` **PROGDATA** ``` ID | COL1 | COL2 1 | "data" | "data" 2 | "data" | "data" 3 | "data" | "data" ``` If I want translations from language 3 based on the ID's in table PROGDATA then I can do: ``` SELECT TEXT FROM TRANSLATIONS, PROGDATA WHERE TRANSLATIONS.TAG="prog" & PROGDATA.ID AND TRANSLATIONS.LANG=3 ``` which would give me: **"Programista"** **"Kategoria"** **"Dzialanie"** In case of language 1 I get an empty string on the second record: **"Programmeur"** **""** **"Actie"** How can I replace the empty string with, for example, the translation of language 2? **"Programmeur"** **"Category"** **"Actie"** I tried nesting a new select query in an `IIf()` function but that obviously did not work. ``` SELECT IIf(TEXT="", (SELECT TEXT FROM TRANSLATIONS, PROGDATA WHERE TRANSLATIONS.TAG="prog" & PROGDATA.ID AND TRANSLATIONS.LANG=2),TEXT) FROM TRANSLATIONS, PROGDATA WHERE TRANSLATIONS.TAG="prog" & PROGDATA.ID AND TRANSLATIONS.LANG=3 ```
I cannibalized the solutions of @fossilcoder and @Smandoli and merged them into one solution: ``` SELECT IIf ( NZ(TRANSLATION.Text,"") = "", DEFAULT.TEXT, TRANSLATION.TEXT) FROM TRANSLATIONS AS TRANSLATION, TRANSLATIONS AS DEFAULT, PROGDATA WHERE TRANSLATION.Tag="prog_" & PROGDATA.Id AND DEFAULT.Tag="prog" & PROGDATA.Id AND TRANSLATION.LanguageId=1 AND DEFAULT.LanguageId=2 ``` I never thought of referencing a table twice under different aliases.
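The twice-referenced-table pattern from this answer can be reduced to a minimal sqlite3 demo (simplified column names; language 1 falls back to language 2 whenever its text is empty):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE translations (tag TEXT, lang INT, text TEXT);
INSERT INTO translations VALUES
  ('prog1', 1, 'Programmeur'), ('prog1', 2, 'Programmer'),
  ('prog2', 1, ''),            ('prog2', 2, 'Category');
""")

# Same table joined to itself: t is the requested language, d the fallback.
rows = con.execute("""
SELECT CASE WHEN t.text = '' THEN d.text ELSE t.text END
FROM translations t
JOIN translations d ON d.tag = t.tag AND d.lang = 2
WHERE t.lang = 1
ORDER BY t.tag
""").fetchall()
# rows == [('Programmeur',), ('Category',)]
```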
A `SWITCH` or `CASE` statement may work well. But try this: ``` SELECT IIf(TEXT="", (SELECT TEXT AS TEXT_OTHER FROM TRANSLATIONS, PROGDATA WHERE TRANSLATIONS.TAG="prog" & PROGDATA.ID AND TRANSLATIONS.LANG=2),TEXT) AS TEXT_FINAL ``` I am using `TEXT_OTHER` and `TEXT_FINAL` to reduce ambiguity in your field names. Sometimes this helps. You may even need to apply the principle to the table name: ``` (SELECT TEXT AS TEXT_OTHER FROM TRANSLATIONS AS TRANSLATIONS_ALT... ``` Also, make sure your criterion is correct: an empty string, not a Null value. ``` IIf(TEXT="", ... IIf(ISNULL(TEXT), ... ```
Replace empty/null string with result from another record
[ "", "sql", "ms-access-2003", "" ]
I am having trouble combining multiple queries into one output. I am a beginner at SQL and was wondering if anyone can provide me with some feedback on how to get this done. Here is my code: ``` SELECT [status], [queryno_i] as 'Query ID', [assigned_to_group] as 'Assigned To Group', [issued_date] as 'Issuing Date', CASE WHEN [status] = 3 THEN [mutation] ELSE NULL END AS 'Closing Date' FROM tablename.[tech_query] WITH (NOLOCK) SELECT CASE WHEN [status] = 3 THEN 'CLOSED' ELSE 'OPEN' END AS [State] FROM tablename.[tech_query] WITH (NOLOCK) SELECT CASE WHEN [status] = 3 THEN [mutation_int]-[issued_date_INT] ELSE NULL END AS [TAT] FROM tablename.[tech_query] WITH (NOLOCK) ```
If you want it all in one row, just put it all together: ``` SELECT [status], [queryno_i] as 'Query ID', [assigned_to_group] as 'Assigned To Group', [issued_date] as 'Issuing Date', CASE WHEN [status] = 3 THEN [mutation] ELSE NULL END AS 'Closing Date', CASE WHEN [status] = 3 THEN 'CLOSED' ELSE 'OPEN' END AS [State], CASE WHEN [status] = 3 THEN [mutation_int]-[issued_date_INT] ELSE NULL END AS [TAT] FROM tablename.[tech_query] WITH (NOLOCK) ```
You can use a UNION clause <http://www.w3schools.com/sql/sql_union.asp> ``` SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 ``` However, from looking at your 3 statements, all 3 selects are coming from the same table, so you may be able to just combine and use CASE statements to get your results without using a Union.
SQL how to combine multiple SQL Queries into one output
[ "", "sql", "" ]
I have the following value in an nvarchar: 0011223344. This value will always be an even length. I need to convert this value to 00\11\22\33\44 using MS SQL.
This works in SQL Server.. ``` DECLARE @test nvarchar(20) = '0011223344' DECLARE @i int = 3 WHILE @i < LEN(@test) BEGIN SELECT @test = STUFF(@test, @i, 0, '\') SET @i = @i + 3 END SELECT @test ``` You could probably implement a more elegant solution using a numbers table.
``` create function changeFormat( @BeginWord varchar(10)) returns varchar(20) as begin declare @finalWord varchar(20) SET @finalWord=''; SET @finalWord= @finalWord + substring(@BeginWord,1,2)+ '/'; SET @finalWord= @finalWord + substring(@BeginWord,3,2)+ '/'; SET @finalWord= @finalWord + substring(@BeginWord,5,2)+ '/'; SET @finalWord= @finalWord + substring(@BeginWord,7,2); return @finalWord end; ``` //call the function ``` select word, dbo.changeFormat(word) as Formateado from table1; word Formateado 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 11223344 11/22/33/44 ```
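Both answers insert a separator into fixed-width chunks. The equivalent chunk-and-join operation in Python (for any even-length input, not just 10 characters) is a one-liner:

```python
def insert_separators(s, sep="\\", every=2):
    """Insert `sep` between each `every`-character chunk of an even-length string."""
    return sep.join(s[i:i + every] for i in range(0, len(s), every))

result = insert_separators("0011223344")
# result == r"00\11\22\33\44"
```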
In SQL, how do I insert a character every 2 spaces in a nvarchar?
[ "", "sql", "sql-server", "" ]
I have three tables (products, product\_info and specials) ``` products(id, created_date) product_info(product_id, language, title, .....) specials(product_id, from_date, to_date) ``` product\_id is a foreign key which references id on products. When searching products I want to order the results so that products on special come first... Here's my try ``` SELECT products.*, product_info.* FROM products INNER JOIN product_info ON product_info.product_id = products.id INNER JOIN specials ON specials.product_d = products.id WHERE product_info.language = 'en' AND product_info.title like ? AND specials.from_date < NOW() AND specials.to_date > NOW() ORDER BY specials.product_id DESC, products.created_at DESC ``` But the result contains only special products..
If not every product is in specials table, you should do a LEFT JOIN with specials instead and put the validations of the dates of specials in the ON CLAUSE. Then you order by products.id but put the specials first, by validating it with a CASE WHEN: ``` SELECT products.*, product_info.* FROM products INNER JOIN product_info ON product_info.product_id = products.id LEFT JOIN specials ON specials.product_d = products.id AND specials.from_date < NOW() AND specials.to_date > NOW() WHERE product_info.LANGUAGE = 'en' AND product_info.title LIKE ? ORDER BY CASE WHEN specials.product_id IS NOT NULL THEN 2 ELSE 1 END DESC, products.id DESC, products.created_at DESC ```
Try this ``` SELECT products.*, product_info.* FROM products INNER JOIN product_info ON product_info.product_id = products.id INNER JOIN specials ON specials.product_d = products.id WHERE product_info.language = 'en' AND product_info.title like ? AND specials.from_date < NOW() AND specials.to_date > NOW() ORDER BY specials.product_id DESC,products.id DESC, products.created_at DESC ```
MySql query to order by existence in another table
[ "", "mysql", "sql", "" ]
I have multiple `IF` statements that are independent of each other in my stored procedure. But for some reason they are being nested inside each other as if they are part of one big if statement ``` ELSE IF(SOMETHNGZ) BEGIN IF(SOMETHINGY) BEGIN..END ELSE IF (SOMETHINGY) BEGIN..END ELSE BEGIN..END --The above works I then insert this below and these if statement become nested---- IF(@A!= @SA) IF(@S!= @SS) IF(@C!= @SC) IF(@W!= @SW) --Inserted if statement stop here END ELSE <-- final else ``` So it will be treated like this ``` IF(@A!= @SA){ IF(@S!= @SS){ IF(@C!= @SC) { IF(@W!= @SW){} } } } ``` What I expect is this ``` IF(@A!= @SA){} IF(@S!= @SS){} IF(@C!= @SC){} IF(@W!= @SW){} ``` I have also tried this and it throws `Incorrect syntax near "ELSE". Expecting "CONVERSATION"` ``` IF(@A!= @SA) BEGIN..END IF(@S!= @SS) BEGIN..END IF(@C!= @SC) BEGIN..END IF(@W!= @SW) BEGIN..END ``` Note that from `ELSE <--final else` down is now nested inside `IF(@W!= @SW)` Even though it is part of the outer if statement `ELSE IF(SOMETHNGZ)` before. **EDIT** As per request my full statement ``` ALTER Procedure [dbo].[SP_PLaces] @ID int, ..more params AS BEGIN SET NOCOUNT ON DECLARE @SomeId INT ..more varaible SET @SomeId = user define function() ..more SETS IF(@ID IS NULL) BEGIN BEGIN TRY INSERT INTO Places VAlUES(..Values...) ... more stuff... BEGIN TRY exec Store procedure @FIELD = 15, ... more params... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH RETURN 0 END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH END ELSE IF(@ID IS NOT NULL AND @ID in (SELECT ID FROM Places)) BEGIN SELECT @MyName = Name ... ...Some stuff.... IF(SOMETHNG_1) BEGIN TRY UPDATE .... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH ELSE IF(SOMETHNG_2) BEGIN TRY UPDATE ... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH ELSE BEGIN BEGIN TRY UPDATE ... 
END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH END --The above works I then insert this below and these if statement become nested---- IF(@A!= @SA) BEGIN exec Stored procedure @FIELD = 15, ... more params... END IF(@S!= @SS) BEGIN exec Stored procedure @FIELD = 10, ... more params... END IF(@C!= @SC) BEGIN exec Stored procedure @FIELD = 17, ... more params... END IF(@W!= @SW) BEGIN exec Stored procedure @FIELD = 12, ... more params... END --Inserted if statement stop here END ELSE BEGIN SET @ResultMessage = 'Update/Delete Failed. No record found with ID:'+CONVERT(varchar(50), @ID) SELECT @ResultMessage AS 'Message' RETURN -1 END Set NOCOUNT OFF END ```
If you are checking one variable against multiple conditions, you would use something like this. Here the block of code where the condition is true will be executed, and the other blocks will be ignored. ``` IF(@Var1 Condition1) BEGIN /*Your Code Goes here*/ END ELSE IF(@Var1 Condition2) BEGIN /*Your Code Goes here*/ END ELSE --<--- Default Task if none of the above is true BEGIN /*Your Code Goes here*/ END ``` If you are checking conditions against multiple variables, you would have to go for multiple IF statements. Each block of code will be executed independently of the other blocks. ``` IF(@Var1 Condition1) BEGIN /*Your Code Goes here*/ END IF(@Var2 Condition1) BEGIN /*Your Code Goes here*/ END IF(@Var3 Condition1) BEGIN /*Your Code Goes here*/ END ``` After every IF statement, if there is more than one statement being executed, you MUST put them in a BEGIN..END block. Anyway, it is always best practice to use BEGIN..END blocks. **Update** Found something in your code: some BEGIN..END blocks you are missing ``` ELSE IF(@ID IS NOT NULL AND @ID in (SELECT ID FROM Places)) -- Outer Most Block ELSE IF BEGIN SELECT @MyName = Name ... ...Some stuff.... IF(SOMETHNG_1) -- IF --BEGIN BEGIN TRY UPDATE .... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH -- END ELSE IF(SOMETHNG_2) -- ELSE IF -- BEGIN BEGIN TRY UPDATE ... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH -- END ELSE -- ELSE BEGIN BEGIN TRY UPDATE ... END TRY BEGIN CATCH SELECT ERROR_MESSAGE() AS 'Message' RETURN -1 END CATCH END --The above works I then insert this below and these if statement become nested---- IF(@A!= @SA) BEGIN exec Store procedure @FIELD = 15, ... more params... END IF(@S!= @SS) BEGIN exec Store procedure @FIELD = 10, ... more params... ```
To avoid syntax errors, be sure to always put `BEGIN` and `END` after an `IF` clause, eg: ``` IF (@A!= @SA) BEGIN --do stuff END IF (@C!= @SC) BEGIN --do stuff END ``` ... and so on. This should work as expected. Imagine `BEGIN` and `END` keyword as the opening and closing bracket, respectively.
Multiple separate IF conditions in SQL Server
[ "", "sql", "sql-server", "if-statement", "" ]
I have a table ``` url lastcached lastupdate ----- ---------- ---------- url1 0 1 url2 0 1 url3 1 1 ``` I want to make a query that returns a row for every record in which lastcached and lastupdate are not the same. So the result should be ``` url lastcached lastupdate ----- ---------- ---------- url1 0 1 url2 0 1 ``` This is a simple question but I can't figure out how to do it. Thank you! **[Updated]** Queries like this: ``` SELECT * FROM my_table WHERE lastcached <> lastupdate ``` are not working in this test SQLite database: yadi.sk/d/DtKHR4VsDuMhf
``` select url, lastcached, lastupdate from your_table where lastcached <> lastupdate ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!7/d48f1/1)
``` SELECT * FROM your_table WHERE lastcached <> lastupdate ``` produces the right result
SQLite Select rows with not same values in two columns
[ "", "sql", "sqlite", "" ]
My situation: I have a table in a SQL Server 2012 database ``` id | created | sum ------------------------------ 1 | 2013-12-10 12:00:00 | 200 2 | 2013-12-10 13:00:00 | 300 3 | 2013-12-10 14:00:00 | 400 4 | 2013-12-09 08:00:00 | 100 5 | 2013-12-09 15:00:00 | 600 6 | 2013-12-10 12:00:00 | 50 ... 50 | 2013-11-23 14:00:00 | 400 51 | 2013-11-22 08:00:00 | 100 52 | 2013-11-22 15:00:00 | 600 53 | 2013-11-20 12:00:00 | 50 ``` How can I select rows for 20 different dates **without taking into account the time**? Expected result of select operation: ``` 1 | 2013-12-10 1 | 2013-12-10 12:00:00 | 200 2 | 2013-12-10 13:00:00 | 300 3 | 2013-12-10 14:00:00 | 400 2 | 2013-12-09 4 | 2013-12-09 08:00:00 | 100 5 | 2013-12-09 15:00:00 | 600 ... 20| 2013-11-22 51 | 2013-11-22 08:00:00 | 100 52 | 2013-11-22 15:00:00 | 600 ```
You could try something like this: * have a CTE (Common Table Expression) extract the 20 date-only values from your table * join your base table against the CTE output to get all the rows from the base table, for those selected dates only Try something like this: ``` -- replace this with your own base table - this is just for demo purposes DECLARE @table TABLE (ID INT, Created DATETIME2(0), ValueSum INT) INSERT INTO @table VALUES(1, '2013-12-10 12:00:00', 200), (2, '2013-12-10 13:00:00', 300 ), (3, '2013-12-10 14:00:00', 400), (4, '2013-12-09 08:00:00', 100 ), (5, '2013-12-09 15:00:00', 600), (6, '2013-12-10 12:00:00', 50), (50, '2013-11-23 14:00:00', 400 ), (51, '2013-11-22 08:00:00', 100 ), (52, '2013-11-22 15:00:00', 600 ), (53, '2013-11-20 12:00:00', 50 ) -- define a CTE that selects TOP (n) distinct date-only values from your base table ;WITH RandomDates AS ( SELECT DISTINCT TOP (3) DateOnly = CAST(Created AS DATE) FROM @table ) SELECT * FROM RandomDates ``` This will list your chosen date-only values. If you join those values against your base table, you should get the output you want... ``` ;WITH RandomDates AS ( SELECT DISTINCT TOP (20) DateOnly = CAST(Created AS DATE) FROM dbo.YourBaseTable ) SELECT t.* FROM RandomDates rd INNER JOIN dbo.YourBaseTable t ON CAST(t.Created AS DATE) = rd.DateOnly ```
Try this: ``` SELECT rowNo, createdDate, sumCol FROM ( SELECT TOP 20 ROW_NUMBER() OVER (ORDER BY CONVERT(DATE, a.created)) rowNo, CONVERT(DATE, a.created) createdDate, '' sumCol FROM tableA a GROUP BY CONVERT(DATE, a.created) UNION SELECT B.id AS rowNo, b.created AS createdDate, b.sum AS sumCol FROM (SELECT TOP 20 CONVERT(DATE, a.created) createdDate FROM tableA a ORDER BY CONVERT(DATE, a.created)) A INNER JOIN tableA B ON A.createdDate = CONVERT(DATE, b.created) ) AS A ORDER BY createdDate ``` Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!6/3bd97/2) **OUTPUT** ``` | ROWNO | CREATEDDATE | SUMCOL | |-------|---------------------|--------| | 1 | 2013-11-20 00:00:00 | 0 | | 53 | 2013-11-20 12:00:00 | 50 | | 2 | 2013-11-22 00:00:00 | 0 | | 51 | 2013-11-22 08:00:00 | 100 | | 52 | 2013-11-22 15:00:00 | 600 | | 3 | 2013-11-23 00:00:00 | 0 | | 50 | 2013-11-23 14:00:00 | 400 | | 4 | 2013-12-09 00:00:00 | 0 | | 4 | 2013-12-09 08:00:00 | 100 | | 5 | 2013-12-09 15:00:00 | 600 | | 5 | 2013-12-10 00:00:00 | 0 | | 1 | 2013-12-10 12:00:00 | 200 | | 6 | 2013-12-10 12:00:00 | 50 | | 2 | 2013-12-10 13:00:00 | 300 | | 3 | 2013-12-10 14:00:00 | 400 | ```
T-SQL: select top 20 root nodes with children
[ "", "sql", "sql-server", "sql-server-2012", "" ]
Suppose I have a table MATCHES(OPPONENT, DATE, GOALS\_FOR, GOALS\_AGAINST) If I want an SQL query which returns the most recent match where GOALS\_FOR was greater than 2, then I can use ``` SELECT * FROM MATCHES WHERE GOALS_FOR > 2 AND DATE = ( SELECT MAX(DATE) FROM MATCHES WHERE GOALS_FOR > 2) ``` How can I do this without having to rewrite/recompute ``` MATCHES WHERE GOALS_FOR > 2 ``` twice?
Just select the top 1 order by date desc, example in tsql ``` SELECT top 1 * FROM MATCHES WHERE GOALS_FOR > 2 ORDER BY DATE Desc ``` (use rownum for oracle, limit for mysql, fetch first for db2, etc)
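The dialect variants mentioned in passing might look like this (untested sketches; note that DATE is a reserved word in most dialects, so it may need quoting as shown):

```sql
-- Oracle (classic ROWNUM idiom; 12c and later also support FETCH FIRST)
SELECT * FROM (
    SELECT * FROM MATCHES WHERE GOALS_FOR > 2 ORDER BY "DATE" DESC
) WHERE ROWNUM = 1;

-- MySQL
SELECT * FROM MATCHES WHERE GOALS_FOR > 2 ORDER BY `DATE` DESC LIMIT 1;

-- DB2
SELECT * FROM MATCHES WHERE GOALS_FOR > 2 ORDER BY DATE DESC FETCH FIRST 1 ROW ONLY;
```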
You can simple join it against a subquery that gets the latest `DATE` for every `GOALS_FOR`. ``` SELECT a.* FROM `matches` a INNER JOIN ( SELECT GOALS_FOR, MAX(DATE) Date FROM `matches` WHERE GOALS_FOR > 2 GROUP BY GOALS_FOR ) b ON a.GOALS_FOR = b.GOALS_FOR AND a.Date = b.Date ```
SQL MAX( ) function
[ "", "sql", "" ]
I am using LLBLGEN where there is a method to execute a query as a `scalar query`. Googling gives me a definition for `scalar sub-query`; are they the same?
A scalar query is a query that returns one row consisting of one column.
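For example (the table and column names here are illustrative, not from LLBLGen), an aggregate with no GROUP BY is a typical scalar query, and it can be used anywhere a single value is expected:

```sql
-- A scalar query: exactly one row, one column
SELECT COUNT(*) FROM orders;

-- Used as a scalar subquery inside a larger statement
SELECT o.id,
       o.amount,
       (SELECT AVG(amount) FROM orders) AS overall_avg
FROM orders o;
```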
For what it's worth: > Scalar subqueries or scalar queries are queries that return exactly > one column and one or zero records. [Source](https://books.google.co.uk/books?id=jfKoCwAAQBAJ&pg=PA155&lpg=PA155&dq=Scalar+subqueries+or+scalar+queries+are+queries+that+return+exactly+one+column+and+one+or+zero+records.+They&source=bl&ots=n0N4nni0PO&sig=9Tjb7P3Yu0Th3K4X1gAuJCqE_Q4&hl=ro&sa=X&ved=0ahUKEwia3ba1vL7UAhVJJMAKHQaPCrIQ6AEIJzAA#v=onepage&q=together.%20Scalar%20subqueries%20or%20scalar%20queries%20are%20queries%20that%20return%20exactly%20one%20column%20and%20one%20or%20zero%20records.&f=false)
What is a "Scalar" Query?
[ "", "sql", "database", "executescalar", "llblgen", "" ]
What's the easiest way to select a single record/value from the n-th group? The group is determined by a material and it's price(prices can change). I need to find the first date of the last and the last date of the next to last material-price-groups. So i want to know when exactly a price changed. I've tried following query to get the first date of the current(last) price which can return the wrong date if that price was used before: ``` DECLARE @material VARCHAR(20) SET @material = '1271-4303' SELECT TOP 1 Claim_Submitted_Date FROM tabdata WHERE Material = @material AND Price = (SELECT TOP 1 Price FROM tabdata t2 WHERE Material = @material ORDER BY Claim_Submitted_Date DESC) ORDER BY Claim_Submitted_Date ASC ``` This also only returns the last, how do i get the previous? So the date when the previous price was used last/first? I have simplified my schema and created [**this sql-fiddle**](http://www.sqlfiddle.com/#!3/3a791/8/0) with sample-data. Here in chronological order. So the row with ID=7 is what i need since it's has the next-to-last price with the latest date. ``` ID CLAIM_SUBMITTED_DATE MATERIAL PRICE 5 December, 04 2013 12:33:00+0000 1271-4303 20 4 December, 03 2013 12:33:00+0000 1271-4303 20 <-- current 3 November, 17 2013 10:13:00+0000 1271-4846 40 7 November, 08 2013 12:16:00+0000 1271-4303 18 <-- last(desired) 2 October, 17 2013 09:13:00+0000 1271-4303 18 1 September, 17 2013 08:13:00+0000 1271-4303 10 8 September, 16 2013 12:15:00+0000 1271-4303 17 6 June, 23 2013 14:22:00+0000 1271-4303 18 9 January, 11 2013 12:22:10+0000 1271-4303 20 <-- a problem since this is older than the desired but will be returned by my simply sub-query approach above ``` Is it even possible to parametrize this value, so the `nthLatestPriceGroup` if i want to know the 3rd last price-date? Note that the query sits in a scalar-valued-function. **Edit**: Many thanks to all. 
But unfortunately a simple `ROW_NUMBER` seems not to help here since i'm trying to get the row with **the most recent price before the current price** for a given material. So `GROUP BY`/`PARTITION BY material,price` includes rows with the same price that don't belong to the last recent material-price group. Consider that a price can change from ``` Date Price Comment 5 months ago 20 original price, note that this is the same as the curent which causes my query to fail! 3 months ago 18 price has changed, i might need the first and last date 2 months ago 20 price has changed, i might need the first and last date 1 month ago 18 previous price, i need the oldest and newest dates NOW 20 current price, i need the first/oldest date from this group ``` So i want the date of the most recent row of the last 20-group, the oldest 20-group is irrelevant. So i must somehow group by **consecutive** prices since a price can repeat after it has already changed. So actually i only need the most recent `Claim_Submitted_Date` from the price-group that starts with `1 month ago ... previous price` in the list above which is the date until the previous price was valid. The other informations listed in the comments are just nice to have(the `nthLatestPriceGroup` sub-question). That's the row with `ID=7` in the sample data above. By the way, the oldest row of this price-group would be the one with `ID=2`(October, 17) and not `ID=6`(June, 23) even if the latter is older. There was a different price(10) after. That's the reason why i can't use simple ranking functions.
You will need to use the windowed function ROWNUMBER in a subquery,... something like this will get you there: ``` ROW_NUMBER() OVER(PARTITION BY Price ORDER BY Claim_Submitted_Date DESC) AS Row ``` Here's the update based on your fiddle: ``` DECLARE @material VARCHAR(20) SET @material = '1271-4303' SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(PARTITION BY Material ORDER BY Claim_Submitted_Date ASC) AS rn FROM tabdata t2 WHERE Material = @material ) res WHERE rn=2 ``` If idData is incremental(and therefore chronological) you could use this: ``` SELECT * FROM ( SELECT *, ROW_NUMBER() OVER(PARTITION BY Material ORDER BY idData DESC) AS rn FROM tabdata t2 WHERE Material = @material ) res ``` Looking at your latest requirements we could all be over thinking it(if I understand you correctly): ``` DECLARE @MATERIAL AS VARCHAR(9) SET @MATERIAL = '1271-4303' SELECT TOP 1 * FROM tabdata t2 WHERE Material = @material AND PRICE <> ( SELECT TOP 1 Price FROM tabdata WHERE Material = @material ORDER BY CLAIM_SUBMITTED_DATE desc) ORDER BY CLAIM_SUBMITTED_DATE desc --results idData Claim_Submitted_Date Material Price 7 2013-11-08 12:16:00.000 1271-4303 18 ``` Here's a [fiddle](http://sqlfiddle.com/#!3/44512/1) based on this.
Following your last comments, only solution I came with is counting the different price groups according to their `Claim_Submitted_Date`, and then include the obtained group indexes as part as the grouping criteria. Not sure it will be highly efficient. Hope it will help though. ``` declare @materialId nvarchar(max), @targetrank int set @materialId = '1271-4303' set @targetrank =2 ;with grouped as ( select *, (select count( t.price) -- don't put a DISTINCT here. (I know, I did) from tabdata as t where t.Price <> tj.Price and t.Claim_Submitted_Date> tj.Claim_Submitted_Date and t.Material= @materialId )as group_indicator from tabdata tj where Material= @materialId ), rankedClaims as ( select grouped.*, row_number() over (PARTITION BY material,price,group_indicator ORDER BY claim_submitted_date desc) as rank from grouped ), numbered as ( select *, ROW_NUMBER() OVER (order by Claim_Submitted_Date desc) as RowNumber from rankedClaims where rank =1 ) select Id, Claim_Submitted_Date, Material, Price from numbered where RowNumber=@targetrank ``` (Not sure also of should two claims on different prices on the same date should be treated `t.Claim_Submitted_Date> tj.Claim_Submitted_Date`) -------------------- **Previous answer** Maybe you can try something like : ``` SELECT ranked.[CLAIM_SUBMITTED_DATE] FROM ( SELECT trimmed.*, ROW_NUMBER() OVER (ORDER BY claim_submitted_date) AS rank FROM ( SELECT a.* ,row_number() over (PARTITION BY material,price ORDER BY claim_submitted_date) AS daterank FROM tabdata a WHERE a.material= '1271-4303' ) AS trimmed WHERE daterank=1 ) AS ranked WHERE rank=2 ``` Parameterizing the rank seems possible as it is only involved in `WHERE rank=2`
Get first/last row of n-th consecutive group
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "" ]
What's wrong with my query? I get the error message #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '. mail\_time FROM ibc\_messages m , ibc\_msg\_queue q AND m . id = q . msgid AND q' at line 1 ``` SELECT distinct q.msgid, q.mail_time, m.status, FROM ibc_msg_queue q , ibc_messages m WHERE q.mail_time = '0000-00-00 00:00:00' AND q.msgid = m.id ORDER BY q.msgid ```
remove the comma after your third column ``` SELECT distinct q.msgid , q.mail_time,m.status FROM ```
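Applied to the query in the question, the corrected statement would read:

```sql
SELECT DISTINCT q.msgid, q.mail_time, m.status
FROM ibc_msg_queue q, ibc_messages m
WHERE q.mail_time = '0000-00-00 00:00:00'
  AND q.msgid = m.id
ORDER BY q.msgid
```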
You have an extra comma before the "FROM" clause
What's wrong with my SQL query?
[ "", "mysql", "sql", "" ]
I have a DB with this layout... ``` contracts --------- id name description # Etc... locations --------- id contract_id # FK to contracts.id name order_position # Etc... ``` I need to find contracts by the `name` of their current location (and by other `contracts` columns at the same time). The current location is the one with the greatest `order_position`. In other words, I'm trying to write a query that will return rows from `contracts` based on `location.name`. Ordinarily that would just be a simple join via `location.contract_id` and `contracts.id`. For example, this would be the simple case, without the additional requirement... ``` SELECT c.* FROM contracts c, locations l WHERE c.id = l.contract_id AND c.name LIKE '%bay%' AND l.name LIKE '%admin%'; ``` But the additional requirement is that I want to narrow it down to the contract's location that has the greatest value for `order_position`. Is there a way to do that with one query?
I worked on it some more and found that this solves the problem... ``` SELECT c.* FROM contracts c, locations l WHERE c.id = l.contract_id AND c.name LIKE '%bay%' AND l.name LIKE '%admin%' AND l.order_position = ( SELECT MAX(order_position) FROM `locations` WHERE `contract_id` = c.id ); ```
Maybe ``` SELECT c.name FROM contracts c JOIN locations l ON c.id=l.contract_id AND c.name IN(SELECT name FROM locations WHERE locations.order_position IN(SELECT MAX(order_position) FROM locations GROUP BY locations.name)) ```
How can I find records using only certain rows in a joined table?
[ "", "mysql", "sql", "" ]
Given Table: **student** ``` | course | Name| --------------- | science | A | | math | B | | english | A | | physics | A | | chem | A | | bio | B | | geology | B | | history | C | ``` I will order this table alphabetically. Tmp table: **ordered\_student** ``` | course | Name| --------------- | bio | B | | chem | A | | english | A | | geology | B | | history | C | | math | B | | physics | A | | science | A | ``` By using the following code, ``` select Name, COUNT(*) as count from student group by Name ``` I was able to create the temporary table Tmp Table: **num\_course\_per\_student** ``` | Name | count| --------------- | A| 4 | | B| 3 | | C| 1 | ``` **GOAL** Let's say a student is allowed to take only 2 courses. If a student is taking more than 2 courses, the student will take the first 2 courses. I should return the following: ``` | course | Name| --------------- | bio | B | | chem | A | | english | A | | geology | B | | history | C | ``` How should I do this? Your help and suggestions would be much appreciated :) Thank you!
This should do it: ``` WITH s AS ( SELECT ROW_NUMBER() OVER (PARTITION BY name ORDER BY course) rank, * FROM student ) SELECT course, name FROM s WHERE RANK <= 2 ORDER BY course ``` Output: ``` course name ------- ---- bio B chem A english A geology B history C ``` See it working [here](https://data.stackexchange.com/stackoverflow/query/151754).
Try this: ``` ; with temp as ( select DENSE_RANK() OVER (PARTITION BY Name ORDER BY Course) AS Rank, Name, course from student ) SELECT Course, Name FROM temp WHERE Rank<=2 order by Course ``` Result: ``` Course Name bio B chem A english A geology B history C ```
SQL help, limiting number of rows
[ "", "sql", "sql-server", "aggregate-functions", "" ]
I need to write a query in my project where I want the sum of a column over every date range of another table (for an Oracle database). TABLE1 – it has the number of pax booked on each date: ``` DT NO_OF_PAX -------- --------------- 01-JAN-14 10 02-JAN-14 5 03-JAN-14 8 05-JAN-14 5 : : 28-DEC-14 20 30-DEC-14 9 31-DEC-14 15 ``` TABLE2 – it has a lot of date ranges: ``` ST_DT END_DT --------- ------------ 01-JAN-14 31-JAN-14 01-FEB-14 28-FEB-14 12-JAN-14 15-FEB-14 : : 01-NOV-14 20-NOV-14 01-DEC-14 31-DEC-14 ``` Now I need to write a query that displays SUM(NO\_OF\_PAX) from TABLE1 for every date range of TABLE2. Please advise how I should write it. As the tables have no common column, I don't know how to join them. I wrote `SELECT SUM(TABLE1.NO_OF_PAX) FROM TABLE1, TABLE2 WHERE TABLE1.DT BETWEEN TABLE2.ST_DT AND TABLE2.END_DT` but it fails with "not a single-group group function".
This is the correct SQL: ``` SELECT T2.ST_DT, T2.END_DT, ( SELECT SUM(T1.NO_OF_PAX) FROM TABLE1 T1 WHERE T1.DT BETWEEN T2.ST_DT AND T2.END_DT ) NO_OF_PAX FROM TABLE2 T2 ORDER BY T2.ST_DT ASC ``` See on SQLFiddle <http://sqlfiddle.com/#!4/2e9e9/1/0>
Try it this way ``` SELECT st_dt, end_dt, COALESCE(SUM(no_of_pax), 0) total FROM Table2 t2 LEFT JOIN Table1 t1 ON t1.dt BETWEEN t2.st_dt AND end_dt GROUP BY st_dt, end_dt ``` Sample output: ``` | ST_DT | END_DT | TOTAL | |-----------|------------|-------| | 01-JAN-14 | 31-JAN-14 | 28 | | 12-JAN-14 | 15-FEB-14 | 0 | | 01-FEB-14 | 28-FEB-14 | 0 | | 01-NOV-14 | 20-NOV-14 | 0 | | 01-DEC-14 | 31-DEC-14 | 44 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!4/b2bb5/4)** demo
Oracle Query to sum a column by every date range in another table
[ "", "sql", "oracle", "" ]
I am using following SQL command with `sp_rename` to rename a column. ``` USE MYSYS; GO EXEC sp_rename 'MYSYS.SYSDetails.AssetName', 'AssetTypeName', 'COLUMN'; GO ``` But it is causing an error: > Msg 15248, Level 11, State 1, Procedure sp\_rename, Line 238 > Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong. Please suggest how to rename a column using `sp_rename`. [ this command I am using found at [Microsoft Technet](http://technet.microsoft.com/en-us/library/ms188351.aspx) ]
Try this: ``` USE MYSYS; GO EXEC sp_rename 'SYSDetails.AssetName', 'AssetTypeName', 'COLUMN'; GO ``` sp\_rename (Transact-SQL) ([msdn](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-rename-transact-sql)): > [ @objname = ] 'object\_name' > > Is the current qualified or nonqualified name of the user object or > data type. **If the object to be renamed is a column in a table, > object\_name must be in the form table.column or schema.table.column.** > If the object to be renamed is an index, object\_name must be in the > form table.index or schema.table.index. If the object to be renamed is > a constraint, object\_name must be in the form schema.constraint. > > Quotation marks are only necessary if a qualified object is specified. > **If a fully qualified name, including a database name, is provided, the > database name must be the name of the current database.** object\_name is > nvarchar(776), with no default. Syntax with a fully qualified name: ``` USE Database GO EXEC sp_rename 'Database.Schema.TableName.ColumnName', 'NewColumnName', 'COLUMN'; GO ``` If you want to have in the *objectname* a fully qualified name you should also specified **schema**. So if your *SYSDetails* table is in the *dbo* schema, this code should work for you: ``` USE MYSYS; GO EXEC sp_rename 'MYSYS.dbo.SYSDetails.AssetName', 'AssetTypeName', 'COLUMN'; GO ```
Double-check that your table `SYSDetails` exists. If the target table doesn't exist you get this error.
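One way to double-check (a SQL Server sketch) is to query the catalog directly:

```sql
-- Returns a row if the table is visible in the current database
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'SYSDetails';

-- And confirm the column to be renamed exists
SELECT name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.SYSDetails')
  AND name = 'AssetName';
```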
Whats wrong with sp_rename with column?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a .csv whose content is like this: ``` "aa839","340000","2350444" ``` When I do the `BULK INSERT`, I use `FIELDTERMINATOR = ','`. As a result, the imported data keeps the quotes (`"`). I want to remove the quotes so that my database has only `aa839, 340000, 2350444`.
Don't bother with replacing while doing the bulk insert; you can replace after the bulk insert. So do the bulk insert as-is and later fire an update query as below. This is the fastest way too: ``` update table set field= REPLACE(field, 'search', 'replace'); ``` where search is the quote character (`"`) and replace is an empty string
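Concretely, for the three imported columns this would be one REPLACE per column (the table and column names below are placeholders, not taken from the question):

```sql
-- Strip the double quotes left over from the CSV import.
UPDATE my_table SET col1 = REPLACE(col1, '"', '');
UPDATE my_table SET col2 = REPLACE(col2, '"', '');
UPDATE my_table SET col3 = REPLACE(col3, '"', '');
```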
while reading the file replace all (") with blank (), then replace (,) with space ( )
Bulk insert .csv - fieldterminator
[ "", "sql", "sqlbulkcopy", "" ]
I have tables like, **Table A** ``` field1 | field2 | field3 | field4 | field5 ``` **Table B** ``` id | field_names -------------------- 1 field2 2 field3 ``` I have to delete field2 and field3 from `Table A` (i.e., the field names listed in `Table B`). Is there any way to drop the fields whose names are present in `Table B`? I can't delete the fields manually. Thanks in advance
Not sure why you need this; anyway, MySQL has no such feature. But if you really want to do that, run the following query and then execute each resulting row. ``` SELECT CONCAT('ALTER TABLE TableA DROP ', field_names, ';') FROM TableB; ``` For example with PHP (all error checking omitted): ``` $query = "SELECT CONCAT('ALTER TABLE TableA DROP ', field_names, ';') AS stmt FROM TableB;"; $result = mysqli_query($query); while ($row = mysqli_fetch_assoc($result)) { mysqli_query($row['stmt']); } ```
This would help: ``` SELECT @s := CONCAT('ALTER TABLE tableA DROP COLUMN ', GROUP_CONCAT(field_names SEPARATOR ', DROP COLUMN '), ';') FROM tableB; PREPARE stmt FROM @s; EXECUTE @s; ```
DROP columns of a table,according to the values in another table
[ "", "mysql", "sql", "alter-table", "" ]
Is there any app for Mac to split SQL files, or even a script? I have a large file which I have to upload to hosting that doesn't support files over 8 MB. \*I don't have SSH access
You can use this : <http://www.ozerov.de/bigdump/> Or Use this command to split the sql file ``` split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart- ``` The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended. Now you should be able to import your files one at a time through phpMyAdmin without issue. More info <http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/>
This tool should do the trick: [MySQLDumpSplitter](https://github.com/rodoic/mysqldumpsplitter) It's free and open source. Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax. **Full disclosure**: I am a share holder of the company that hosts this program.
How to split sql in MAC OSX?
[ "", "sql", "macos", "phpmyadmin", "splitter", "" ]
I'm relatively new to SQL, and am trying to find the best way to attack this problem. I am trying to take data from 2 tables and start merging them together to perform analysis on it, but I don't know the best way to go about this without looping or many nested subqueries. What I've done so far: I have 2 tables. Table1 has user information and Table2 has information on orders(prices and dates, as well as user) What I need to do: I want to have a single row for each user that has a summary of information about all of their orders. I'm looking to find the sum of prices of all orders by each user, the max price paid by that user, and the number of orders. I'm not sure how to best manipulate my data in SQL. Currently, my code looks as follows: ``` Select alias1.*, Table2.order_id, Table2.price, Table2.order_date From (Select * from Table1 where country='United States') as alias1 LEFT JOIN Table2 on alias1.user_id = Table2.user_id ``` This filters out the datatypes by country, and then joins it with users, creating a record of each order including the user information. I don't know if this is a helpful step, but this is part of my first attempt playing around with the data. I was thinking of looping over this, but I know that is against the spirit of SQL Edit: Here is an example of what I have and what I want: Table 1(user info): ``` user_id user_country 1 United States 2 United Kingdom (etc) ``` Table 2(order info): ``` order_id price user_id 100 5.00 1 101 3.50 2 102 2.50 1 103 1.00 1 104 8.00 2 ``` What I would like output: ``` user_id user_country total_price max_price number_of_orders 1 United States 8.50 5.00 3 2 United Kingdom 11.50 8.00 2 ```
Here's one way to do this: ``` SELECT alias1.user_id, MAX(alias1.user_name) As user_name, SUM(Table2.price) As UsersTotalPrice, MAX(Table2.price) As UsersHighestPrice FROM Table1 As alias1 LEFT JOIN Table2 ON alias1.user_id = Table2.user_id WHERE country = 'United States' GROUP BY user_id ``` If you can give us the actual table definitions, then we can show you some actual working queries.
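The desired output in the question also includes an order count per user; assuming the example schema from the question (`user_id`/`user_country` in Table1 and `order_id`/`price`/`user_id` in Table2 — names taken from the sample data, not confirmed definitions), the same grouping can be extended with `COUNT`:

```sql
SELECT t1.user_id,
       MAX(t1.user_country) AS user_country,
       SUM(t2.price)        AS total_price,
       MAX(t2.price)        AS max_price,
       COUNT(t2.order_id)   AS number_of_orders
FROM Table1 AS t1
LEFT JOIN Table2 AS t2
    ON t1.user_id = t2.user_id
GROUP BY t1.user_id
```

Because `COUNT` is applied to a column from the outer-joined table, a user with no orders would show a count of 0 rather than 1.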
This should work:

```
select t1.*, t2.total_price, t2.max_price, t2.order_count
from table1 as t1
join (select user_id,
             sum(price) as total_price,
             max(price) as max_price,
             count(order_id) as order_count
      from table2
      group by user_id) as t2
  on t1.user_id = t2.user_id
where t1.country = 'United States'
```
Best solution for SQL without looping
[ "", "sql", "" ]
I've got an Oracle table that holds a set of ranges (RangeA and RangeB). These columns are varchar as they can hold both numeric and alphanumeric values, like the following example:

```
ID|RangeA|RangeB
1 | 10 | 20
2 | 21 | 30
3 | AB50 | AB70
4 | AB80 | AB90
```

I need to do a query that returns only the records that have numeric values, and perform a Count on that query. So far I've tried doing this with two different queries without any luck:

Query 1:

```
SELECT COUNT(*)
FROM (
    SELECT RangeA, RangeB
    FROM table R
    WHERE upper(R.RangeA) = lower(R.RangeA)
) A
WHERE TO_NUMBER(A.RangeA) <= 10
```

Query 2:

```
WITH A(RangeA,RangeB) AS(
    SELECT RangeA, RangeB
    FROM table
    WHERE upper(RangeA) = lower(RangeA)
)
SELECT COUNT(*)
FROM A
WHERE TO_NUMBER(A.RangeA) <= 10
```

The subquery is working fine as I'm getting the two records that have only numeric values, but the COUNT part of the query is failing. I should be getting only 1 on the count, but instead I'm getting the following error:

```
ORA-01722: invalid number
01722. 00000 - "invalid number"
```

What am I doing wrong? Any help is much appreciated.
You can test each column with a regular expression to determine if it is a valid number: ``` SELECT COUNT(1) FROM table_of_ranges WHERE CASE WHEN REGEXP_LIKE( RangeA, '^-?\d+(\.\d*)?$' ) THEN TO_NUMBER( RangeA ) ELSE NULL END < 10 AND REGEXP_LIKE( RangeB, '^-?\d+(\.\d*)?$' ); ``` Another alternative is to use a user-defined function: ``` CREATE OR REPLACE FUNCTION test_Number ( str VARCHAR2 ) RETURN NUMBER DETERMINISTIC AS invalid_number EXCEPTION; PRAGMA EXCEPTION_INIT(invalid_number, -6502); BEGIN RETURN TO_NUMBER( str ); EXCEPTION WHEN invalid_number THEN RETURN NULL; END test_Number; / ``` Then you can do: ``` SELECT COUNT(*) FROM table_of_ranges WHERE test_number( RangeA ) <= 10 AND test_number( RangeB ) IS NOT NULL; ```
Try this query: ``` SELECT COUNT(*) FROM table R WHERE translate(R.RangeA, 'x0123456789', 'x') = 'x' and translate(R.RangeB, 'x0123456789', 'x') = 'x' ``` First, you don't need the subquery for this purpose. Second, using `to_number()` or `upper()`/`lower()` are prone to other problems. The function `translate()` replaces each character in the second argument with values from the third argument. In this case, it removes numbers. If nothing is left over, then the original value was an integer. You can do more sophisticated checks for negative values and floating point numbers, but the example in the question seemed to be about positive integer values.
ORACLE - Select Count on a Subquery
[ "", "sql", "oracle", "count", "subquery", "" ]
I have a pl/sql loop which looks like below: ``` IF A IN (1,2,3,4,5) OR B IN (1,2,3,4,5) OR C IN (1,2,3,4,5) THEN --Do Something-- ELSIF A IN (6,7) AND B IN (6,7) AND C IN (6,7) THEN --Do Something Else-- ELSE --Do Nothing-- END IF; ``` Is there a better way to write the above loop? The above example is just for a reference. My original program consits of more conditions in the IF loop. Kindly help me in optimizing it. I am out of ideas. Thanks.
Create a [collection](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/collections.htm) (nested table) type to hold the values and the test conditions, then you can use [`MULTISET`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/operators006.htm) operations to compare them:

* `(a MULTISET INTERSECT DISTINCT b) IS NOT EMPTY` is the equivalent of the `OR` statements; and
* `(a MULTISET EXCEPT DISTINCT b) IS EMPTY` is the equivalent of the `AND` statements

I've wrapped the code in a function for ease of testing: [SQL Fiddle](http://sqlfiddle.com/#!4/2dae3/2)

**Oracle 11g R2 Schema Setup**:

```
CREATE OR REPLACE TYPE Number_Table_Type AS TABLE OF NUMBER
/

CREATE OR REPLACE FUNCTION Test_Multiple_Values (
    A IN NUMBER, B IN NUMBER, C IN NUMBER, D IN NUMBER,
    E IN NUMBER, F IN NUMBER, G IN NUMBER, H IN NUMBER
) RETURN NUMBER
AS
    inputs Number_Table_Type := Number_Table_Type( A, B, C, D, E, F, G, H );
BEGIN
    IF ( inputs MULTISET INTERSECT DISTINCT Number_Table_Type( 1, 2, 3, 4, 5 ) ) IS NOT EMPTY THEN
        RETURN 1;
    ELSIF ( inputs MULTISET EXCEPT DISTINCT Number_Table_Type( 6, 7 ) ) IS EMPTY THEN
        RETURN 2;
    ELSE
        RETURN 3;
    END IF;
END;
/
```

**Query 1**:

```
WITH inputs AS (
    SELECT 1 AS A, 2 AS B, 3 AS C, 4 AS D, 5 AS E, 6 AS F, 7 AS G, 8 AS H FROM DUAL
    UNION ALL SELECT 6, 7, 6, 7, 6, 7, 6, 7 FROM DUAL
    UNION ALL SELECT 8, 9, 10, 11, 12, 13, 14, 15 FROM DUAL
)
SELECT A, B, C, D, E, F, G, H,
       Test_Multiple_Values( A, B, C, D, E, F, G, H )
FROM inputs
```

**[Results](http://sqlfiddle.com/#!4/2dae3/2/0)**:

```
| A | B | C  | D  | E  | F  | G  | H  | TEST_MULTIPLE_VALUES(A,B,C,D,E,F,G,H) |
|---|---|----|----|----|----|----|----|---------------------------------------|
| 1 | 2 | 3  | 4  | 5  | 6  | 7  | 8  |                                     1 |
| 6 | 7 | 6  | 7  | 6  | 7  | 6  | 7  |                                     2 |
| 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |                                     3 |
```
Try a `CASE` statement. It will be more readable than an `IF` statement. [Check here for more](http://docs.oracle.com/cd/B10500_01/appdev.920/a96624/04_struc.htm)
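For illustration, a searched `CASE` statement version of the branches from the question might look like the sketch below (the `NULL;` statements stand in for the real actions, which the question elides):

```sql
CASE
    WHEN a IN (1, 2, 3, 4, 5) OR b IN (1, 2, 3, 4, 5) OR c IN (1, 2, 3, 4, 5) THEN
        NULL;  -- do something
    WHEN a IN (6, 7) AND b IN (6, 7) AND c IN (6, 7) THEN
        NULL;  -- do something else
    ELSE
        NULL;  -- do nothing
END CASE;
```

A PL/SQL `CASE` statement evaluates its `WHEN` branches in order, just like the `IF`/`ELSIF` chain, so the behavior is unchanged.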
PL/Sql IF loop with multiple OR - IN's
[ "", "sql", "oracle", "plsql", "" ]
I'm attempting to use an alpha between in SQL; however, I want it to be based on the beginnings of the words only. I am using T-SQL on Microsoft SQL Server 2005. My current SQL is:

```
SELECT * FROM table WHERE LEFT(word,1) BETWEEN 'a' AND 't'
```

However, this only works for the first letter. I'd like to expand this to work for any beginnings of words. For instance, between 'a' and 'ter'. Now, I am building this dynamically, so I could do:

```
SELECT * FROM table WHERE LEFT(word,1) >= 'a' AND LEFT(word,3) <= 'ter'
```

However, I'd like to know if there is a simpler way in SQL to make a dynamic beginning-of-word between.

EDIT::: Follow-up question: words shorter than the checked value should be considered less than it in the between. For instance, me is less than mem, so word < 'mem' should include me.

EDIT::: Attempting to use padding, as suggested. The below does work; however, the added 'a's can cause an issue. For instance, if we want words between 'a' and 'mera' and the word being checked is 'mer', it will be included because the left trim of 'mer' becomes 'mera' with added characters. I would like a solution that does not have this issue.

```
DECLARE @lb varchar(50)
DECLARE @ub varchar(50)
SET @lb='ly'
SET @ub='z'

SELECT name
FROM table
WHERE LEFT(
        CASE WHEN LEN(name) < LEN(@lb) THEN name+REPLICATE('a',LEN(@lb)-LEN(name)) ELSE name END,
        LEN(@lb)
      ) >= @lb
  AND LEFT(
        CASE WHEN LEN(name) < LEN(@ub) THEN name+REPLICATE('a',LEN(@ub)-LEN(name)) ELSE name END,
        LEN(@ub)
      ) <= @ub
```

EDIT::: Attempted solution, although CASE heavy. Mack's solution is better, though this works as well. LEFT('andy', 200000) will return 'andy', not an error as an OO language would give; behavior I did not expect.
``` DECLARE @lb varchar(50) DECLARE @ub varchar(50) SET @lb='a' SET @ub='lyar' SELECT * FROM testtable WHERE CASE WHEN LEN(word) < LEN(@lb) THEN 0 WHEN LEFT(word, LEN(@lb)) >= @lb THEN 1 ELSE 0 END = 1 AND CASE WHEN LEN(word) < LEN(@ub) THEN CASE WHEN LEFT(@ub,LEN(word)) = word THEN 1 ELSE 0 END WHEN LEFT(word, LEN(@ub)) <= @ub THEN 1 ELSE 0 END = 1 ``` Thanks in advance!
You need to use the [STUFF](http://technet.microsoft.com/en-us/library/ms188043.aspx) function to achieve what you are looking for explicitly. If you follow the link says it deletes a specified number of characters at the end of the string and replaces them with another string. Combine the with the LEN function and we can get you on the road. ``` --Test Data DECLARE @table AS TABLE (word char(10)) INSERT INTO @table VALUES ('me') INSERT INTO @table VALUES ('mem') INSERT INTO @table VALUES ('tap') INSERT INTO @table VALUES ('t') DECLARE @minword char(5) DECLARE @maxword char(5) SET @minword='ai' SET @maxword='t' --SET @maxword='tb'--(remove the rem at the start of this line to unlock an extra test for comparison...) --Query SELECT word FROM @table WHERE STUFF(word, LEN(word)+1, 5, 'aaaaa') BETWEEN STUFF(@minword, LEN(@minword)+1, 5, 'aaaaa') AND STUFF(@maxword, LEN(@maxword)+1, 5, 'aaaaa') ``` Alternative solution based on your revised requirements: ``` DECLARE @testtable AS TABLE (word varchar(20)) INSERT INTO @testtable VALUES ('ly') INSERT INTO @testtable VALUES ('Ly') INSERT INTO @testtable VALUES ('Zoo') INSERT INTO @testtable VALUES ('r') INSERT INTO @testtable VALUES ('traci') DECLARE @minword varchar(20) DECLARE @maxword varchar(20) SET @minword='ly' SET @maxword='zol' SELECT word, LEFT(word,LEN(@minword)), LEFT(word,LEN(@maxword)), @minword, @maxword FROM @testtable WHERE LEFT(word,LEN(@minword))>=@minword AND LEFT(word,LEN(@maxword))<=@maxword ```
This should work: ``` SELECT * FROM table WHERE LEFT(word,3) BETWEEN 'a' AND 'ter' ``` There's no reason why BETWEEN shouldn't be able to compare your three-letter data string to the one-letter 'a'. Any 'axx' will be "greater than" just 'a' by itself, and so will be included.
SQL Between Begins With
[ "", "sql", "t-sql", "sql-server-2005", "between", "" ]
I have 2 tables, which are joined by a left join.

Table1:

* Id (int)
* Name (string)

Table2:

* Id (int)
* Table1Id (int)
* Allowed (bit)

Based on the name saved in table one, I can get all records in table 2. When one or more records in table 2 have the value of allowed = 0, the result of my query should return 0. I tried using min(), but it doesn't support bit values; I tried converting them to integer values, but that doesn't seem to work either. How exactly would I do this? To clarify, I want to see all records that exist. Only when one of the records in table 2 has a value of 0 (for allowed) should the result of the query display 0 as well. Something like this would do the trick, but I don't know which aggregate function to use so I could group by the Table1.Name:

```
SELECT Table1.Name, Table2.Allowed
FROM Table1
LEFT JOIN Table2 on Table1.Id = Table2.Table1Id
GROUP BY Table1.Name
```
``` SELECT Table1.Name, min(case when Table2.Allowed = 0 then 0 else 1 end) FROM Table1 LEFT JOIN Table2 on Table1.Id = Table2.Table1Id GROUP BY Table1.Name ``` ## [SQLFiddle demo](http://sqlfiddle.com/#!3/22197/1)
While the question might not have been clear, I have found a second solution to my problem:

```
SELECT [Name] AS [Name], MIN(Allowed)
FROM(
    SELECT [T1].[Name] AS [Name],
        CAST(ISNULL([T2].[Allowed], 1) as int) AS [Allowed]
    FROM [Table1] AS [T1]
    LEFT JOIN [Table2] AS [T2] on [T1].[Id] = [T2].[Table1Id]
    ) AS [Records]
GROUP BY [Name]
```

When running it in SSMS and looking at the execution plan, there are no differences from Juergen's answer. The query cost is 50% each, and the graphical paths are identical.
selecting a distinct value based on 1 value
[ "", "sql", "sql-server-2012", "" ]
I'm trying to search a SQL database using this SQL query: > SELECT \* FROM Reservations WHERE fDate=06/12/13 Here is an image of my SQL Reservations table with dummy data: ![Data](https://i.stack.imgur.com/kE84C.png) What am I doing wrong here? Thanks, C.
Try changing date format from 'dd/mm/yyyy' to 'yyyy/mm/dd'. Like this: SELECT \* FROM Reservations WHERE fDate='2013/11/08' See sample [SQL Fiddle I created](http://sqlfiddle.com/#!2/0798a26/1)
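If the backend is actually SQL Server rather than MySQL (the question doesn't say, so this is an assumption), the separator-free ISO format `yyyymmdd` is the safer literal, because date formats with separators can be reinterpreted depending on the session's language and `DATEFORMAT` settings:

```sql
SELECT *
FROM Reservations
WHERE fDate = '20131108'   -- yyyymmdd is read the same way under every language setting
```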
You'll need to put the date inside '' ``` SELECT * FROM Reservations WHERE fDate='06/12/13' ``` But, if what you're showing us a view, then you'll need to use the standard format for dates: ``` SELECT * FROM Reservations WHERE fDate='2013-12-06' ```
Searching a SQL database by 'date'
[ "", "sql", "vb.net", "date", "" ]
I have multiple queries:

```
1) select Year, Month, Sum(Stores) from ABC;
2) select Year, Month, Sum(SalesStores) from DEF;
3) select Year, Month, Sum(Products) from FGH;
```

I want a result like:

```
Year, Month, Sum(Stores), Sum(SalesStores), Sum(Products)
```

I tried a Full Outer Join with a query that looks something like this:

```
SELECT ISNULL(x.[Year], y.[Year]) AS [Year],
       ISNULL(x.[Month], y.[Month]) AS [Month],
       x.Sum_Stores,
       y.Sum_SalesStores,
       z.Sum_products
FROM (select Year, Month, Sum(Stores) AS Sum_Stores from ABC ... GROUP BY [Month]) AS x
FULL OUTER JOIN
     (select Year, Month, Sum(SalesStores) AS Sum_SalesStores from DEF ... GROUP BY [Month]) AS y
  ON x.[Year] = y.[Year] AND x.[Month] = y.[Month]
FULL OUTER JOIN
     (select Year, Month, Sum(products) AS Sum_products from FGH ... GROUP BY [Month]) AS z
  ON y.[Year] = z.[Year] AND y.[Month] = z.[Month]
```

The problem is that ABC has data only for certain months, DEF has data for different months, and FGH has data for different months again. When I run my above query, I get a lot of nulls and 0's. Can anyone correct me as to what's wrong with the query, or tell me a solution that works for my case?
Here is another way to get this information. This example has a short setup to demonstrate this solution in action. ``` CREATE TABLE #ABC([Year] INT, [Month] INT, Stores INT); CREATE TABLE #DEF([Year] INT, [Month] INT, SalesStores INT); CREATE TABLE #GHI([Year] INT, [Month] INT, Products INT); INSERT #ABC VALUES (2013,1,1); INSERT #ABC VALUES (2013,1,2); INSERT #ABC VALUES (2013,2,3); INSERT #DEF VALUES (2013,1,4); INSERT #DEF VALUES (2013,1,5); INSERT #DEF VALUES (2013,2,6); INSERT #GHI VALUES (2013,1,7); INSERT #GHI VALUES (2013,1,8); INSERT #GHI VALUES (2013,2,9); INSERT #GHI VALUES (2013,3,10); SELECT T.[Year] ,T.[Month] -- select the sum for each year/month combination using a correlated subquery (each result from the main query causes another data retrieval operation to be run) ,(SELECT SUM(Stores) FROM #ABC WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_Stores] ,(SELECT SUM(SalesStores) FROM #DEF WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_SalesStores] ,(SELECT SUM(Products) FROM #GHI WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_Products] FROM ( -- this selects a list of all possible dates. SELECT [Year],[Month] FROM #ABC UNION SELECT [Year],[Month] FROM #DEF UNION SELECT [Year],[Month] FROM #GHI ) AS T; ``` EDIT: This can be wrapped up in parametrized stored procedure as requested: ``` -- Call proc like this --EXEC [getMyReport]; -- params are not required, these will default to NULL if not specified. --EXEC [getMyReport] @Year=2013; -- all 2013 data, all months --EXEC [getMyReport] @Month=1; -- all January data, from all years --EXEC [getMyReport] @Year=2013, @Month=2; -- Feb 2013 data only. CREATE PROCEDURE [dbo].[getMyReport] @Year INT = NULL -- default to NULL, this makes the param optional when you exec the procedure. ,@Month INT = NULL AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. 
SET NOCOUNT ON; SELECT T.[Year] ,T.[Month] -- select the sum for each year/month combination using a correlated subquery (each result from the main query causes another data retrieval operation to be run) ,(SELECT SUM(Stores) FROM #ABC WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_Stores] ,(SELECT SUM(SalesStores) FROM #DEF WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_SalesStores] ,(SELECT SUM(Products) FROM #GHI WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_Products] FROM ( -- this selects a list of all possible dates. SELECT [Year],[Month] FROM #ABC UNION SELECT [Year],[Month] FROM #DEF UNION SELECT [Year],[Month] FROM #GHI ) AS T WHERE -- if the param IS NULL, then it will not apply filtering -- if the param is specified, then it will filter by year or month (@Year IS NULL OR T.[Year]=@Year) AND (@Month IS NULL OR T.[Month]=@Month) ; END GO ```
You could union the multiple sets like this: ``` SELECT Year, Month, 'Stores' AS Src, Stores AS Value FROM ABC UNION ALL SELECT Year, Month, 'SalesStores', SalesStores FROM DEF UNION ALL SELECT Year, Month, 'Products', Products FROM FGH ``` then [PIVOT](http://msdn.microsoft.com/en-us/library/ms177410.aspx "Using PIVOT and UNPIVOT") them with aggregation: ``` SELECT Year, Month, Stores, SalesStores, Products FROM ( SELECT Year, Month, 'Stores', Stores FROM ABC UNION ALL SELECT Year, Month, 'SalesStores', SalesStores FROM DEF UNION ALL SELECT Year, Month, 'Products', Products FROM FGH ) AS s (Year, Month, Src, Value) PIVOT ( SUM(Value) FOR Src IN (Stores, SalesStores, Products) ) AS p ; ``` Or it might be more efficient to first aggregate the results, then union the aggregated sets, then PIVOT: ``` SELECT Year, Month, Stores, SalesStores, Products FROM ( SELECT Year, Month, 'Stores', SUM(Stores) FROM ABC UNION ALL SELECT Year, Month, 'SalesStores', SUM(SalesStores) FROM DEF UNION ALL SELECT Year, Month, 'Products', SUM(Products) FROM FGH ) AS s (Year, Month, Src, Value) PIVOT ( MAX(Value) FOR Src IN (Stores, SalesStores, Products) ) AS p ; ``` As the PIVOT syntax requires the use of an aggregated function, we can use one that simply preserves an already aggregated value (in this case it was MAX).
Combine multiple datasets into a single query
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "sql-server-2008-r2", "" ]
I'm working with a Pro\*C query, but this question should be general SQL. My research has been a dead end, but I think I'm missing something. Suppose my server has an array of students' names, `{"Alice","Charlie","Bob"}`. I query the `Student_Ids` table for the students' ID numbers: ``` SELECT id_no FROM student_ids WHERE student_name IN ('Alice','Charlie','Bob'); ``` To simplify server-side processing, I want to sort the result set in the same order as the students' names. In other words, the result set would be `{alice_id_no, charlie_id_no, bob_id_no}` regardless of the actual ordering of the table or the behavior of any particular vendor's implementation. The only solution I can think of is: ``` . . . ORDER BY CASE WHEN student_name='Alice' THEN 0 WHEN student_name='Charlie' THEN 1 WHEN student_name='Bob' THEN 2 END; ``` but that seems extremely messy and cumbersome when trying to dynamically generate/run this query. Is there a better way? **UPDATE** I gave a terrible example by pre-sorting the students' names. I changed the names to be deliberately unsorted. In other words, I want to sort the names in a non-ASC or DESC-friendly way. **UPDATE II** Oracle, but for knowledge's sake, I am looking for more general solutions as well.
The `ORDER BY` expression you've given for your sample data is equivalent to `ORDER BY student_name`. Is that what you intended? If you want a custom ordering that is not alphabetical, I think you might have meant something like this: ``` ORDER BY CASE WHEN student_name = 'Alice' THEN 0 WHEN student_name = 'Charlie' THEN 1 WHEN student_name = 'Bob' THEN 2 END; ``` You can use a derived table as well, that holds the names as well as the ordering you want. This way you only have to put the names in a single time: ``` SELECT S.id_no FROM student_ids AS S INNER JOIN ( SELECT Name = 'Alice', Seq = 0 FROM DUAL UNION ALL SELECT 'Bob', 2 FROM DUAL UNION ALL SELECT 'Charlie', 1 FROM DUAL ) AS N ON S.student_name = N.Name ORDER BY N.Seq; ``` You could also put them into a temp table, but in Oracle that could be somewhat of a pain.
Can you do this? ``` order by student_name ``` To do a custom sort, you only need one `case`: ``` ORDER BY (CASE WHEN student_name = 'Alice' THEN 1 WHEN student_name = 'Bob' THEN 2 WHEN student_name = 'Charlie' THEN 3 ELSE 4 END) ```
Custom Ordering of SELECT Results
[ "", "sql", "oracle", "" ]
I have a table with columns CompanyID, EmployeeCode. EmployeeCode is unique and CompanyID can be repeated. So we can keep a list of employees in Google, Apple etc. I want a way to ensure that the same employee cannot be added if he/she is already in the database. What would be a good way to do this ? I'd prefer not to create additional columns or tables for this.
Create a UNIQUE constraint on the column where you want to prevent duplicates: ``` CREATE UNIQUE INDEX [IX_YourTableName_EmployeeCode] ON [YourTableName] (EmployeeCode) ```
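The same rule can also be declared as a table-level constraint instead of an index — a sketch, assuming the column contains no duplicates yet:

```sql
ALTER TABLE YourTableName
ADD CONSTRAINT UQ_YourTableName_EmployeeCode UNIQUE (EmployeeCode);
```

Either way, an `INSERT` that repeats an existing `EmployeeCode` then fails with a duplicate-key error instead of silently adding the row.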
Try this:

```
CREATE TABLE UniqueTable
(
    CompanyID int,
    EmployeeCode int
)

--Insert dummy data
INSERT INTO UniqueTable (EmployeeCode, CompanyID)
VALUES (1, 200)

--When we try to insert the duplicate EmployeeCode '1', no row will be inserted
INSERT INTO UniqueTable (EmployeeCode, CompanyID)
SELECT 1, 200
WHERE NOT EXISTS (SELECT 1 FROM UniqueTable t2 WHERE t2.EmployeeCode = 1)

--Inserting a different EmployeeCode '2' will insert the row into the table
INSERT INTO UniqueTable (EmployeeCode, CompanyID)
SELECT 2, 300
WHERE NOT EXISTS (SELECT 1 FROM UniqueTable t2 WHERE t2.EmployeeCode = 2)
```
Prevent insertion of duplicates
[ "", "sql", "sql-server", "" ]
I have the following tables:

```
projects:
id, name, language
1 |test | php
2 |test | java
3 |hello| php

attrs:
id, name, value, tb1_id
1 | status | finish | 1
2 | reference | 2 | 1
```

I need a query to select all projects which have an attribute "reference" to another project and a "status" of finish. Example output:

```
id, name
1 |test
```

Could you help me?
You can either: * group the joined tables, filtering such groups with suitable aggregate functions within the `HAVING` clause: ``` SELECT projects.id, projects.name FROM projects JOIN attrs ON attrs.tb1_id = projects.id GROUP BY projects.id HAVING SUM(attrs.name='reference') AND SUM(attrs.name='status' AND attrs.value='finish') ``` * or else join `attrs` multiple times: ``` SELECT p.id, p.name FROM projects AS p JOIN attrs AS r ON r.tb1_id = p.id AND r.name = 'reference' JOIN attrs AS s ON s.tb1_id = p.id AND s.name = 'status' WHERE s.value = 'finish' ```
``` SELECT P.id, P.name FROM projects P inner join attrs A on P.id = a.tb1_id WHERE (A.name = 'status' AND value = 'finish') UNION SELECT P.id, P.name FROM attrs A INNER JOIN projects P ON A.value = P.id WHERE A.name='reference' ``` See the result in [this fiddle](http://sqlfiddle.com/#!2/68a94/9)
SQL Query to select multiple rows
[ "", "mysql", "sql", "join", "" ]
I have a basic customer table. The columns are: ``` FirstName LastName Organization PhoneNumber City State Zip ID ``` `ID` is an autoincremented value. I want to insert a new record into the table so I issue this command: ``` INSERT INTO CustomerTable (FirstName, LastName, Organization, PhoneNumber, City, State, Zip) VALUES (John, Doe, None, 5555555555, Nowhere, NY, 12345); ``` However, when I issue this statement, I get this error: ``` ERROR 1054 (42s22): Unknown column 'John' in 'field list' ``` Why am I getting this error? 'John' is a value to go in a column, not an actual column itself. Thanks in advance for any help!
Put the string values in quotes. Outside quotes those strings will be treated as column or variable names. ``` VALUES (John, Doe, None, 5555555555, Nowhere, NY, 12345) ``` Should be ``` VALUES ('John', 'Doe', 'None', 5555555555, 'Nowhere', 'NY', 12345) ```
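Since these values look like user-entered data, it's also worth noting that concatenating quoted strings into an `INSERT` by hand invites SQL injection; a parameterized statement (sketched here with `?` placeholders, the style used by most MySQL client libraries) avoids both the quoting mistake and that risk:

```sql
INSERT INTO CustomerTable
    (FirstName, LastName, Organization, PhoneNumber, City, State, Zip)
VALUES (?, ?, ?, ?, ?, ?, ?);
```

The driver then binds 'John', 'Doe', and so on as values, and no manual quoting is needed.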
John is a string. You need to put it in quotes ('').
mysql 1054, unknown column, but I'm not specifying a column
[ "", "mysql", "sql", "insert-statement", "" ]
I am working with MySql 5.1 and am building my first many-to-many database. I understand the concepts; I have 3 tables:

* `Albums`, with a unique ID
* `Genres`, with a unique ID
* `album_genres` with columns for each of the ID's from Albums and Genres.

The issue I am having is that, of course, 1 album can have multiple genres. But when I do a search, I'm really in the dark about how to structure it so during searches, I get all the genres for each individual album. Please note, this is not the loading of 1 album, but doing a search that will net 1 or more albums. Sorry I don't really have anything to show what I've tried because I'm not even sure where to begin. I'm sure it's easy enough. But all the tutorials I could find only address the basics of M2M, not how to get multiple matching entries. After looking at many great suggestions, I have built this query:

```
SELECT album.album_title,
       Concat(genre.genre_id, ',') as GenreName,
       count(album.album_id) as GenreCount
FROM $this->album_tbl album
JOIN wp_musicmgr_albgnr albgnr ON albgnr.albgnr_album_fk = album.album_id
JOIN $this->genre_tbl genre ON genre.genre_id = albgnr.albgnr_genre_fk
GROUP BY album.album_id
```

Which is producing this:

```
[0] => stdClass Object
    (
        [album_title] => album q
        [GenreName] => 1,
        [GenreCount] => 3
    )
```

However, as you can see, despite having a count of 3 hits on genres, it is listing only the first one.
If I got the question right, you need output like

```
AlbumName | Genre1,Genre2.........
```

For this you need to use **GROUP BY** with `GROUP_CONCAT`:

```
SELECT A.album_name as AlbumName,
       GROUP_CONCAT(G.genre_name) as GenreName,
       count(A.ID) as GenreCount
FROM Album A
JOIN album_genres AG ON (A.ID = AG.album_ID)
JOIN Genre G ON (G.ID = AG.genre_ID)
GROUP BY A.ID
```
Join on the two tables. ``` SELECT cols FROM Albums JOIN album_genres USING (albumID) JOIN Genres USING (genreID) WHERE albumName LIKE :search ```
How to do a many to many subquery in MySql
[ "", "mysql", "sql", "" ]
I am trying to optimize a query, and I believe I can do it with joins as opposed to subqueries, but I am not sure how to. Below is my query:

```
SELECT *
FROM accounts
WHERE LOWER(firstname) = LOWER('Sam')
AND id IN (SELECT account_id FROM addresses WHERE LOWER(name) = LOWER('Street1'))
AND id IN (SELECT account_id FROM alternate_ids WHERE alternate_id_glbl = '5');
```

I have 3 tables: accounts, addresses, and alternate ID's. When I do a search I want to return the set of accounts that have a first name of Sam, an address of Street 1, and an alternate ID of 5. There is an account\_id column in the address and alternate ID tables that has the ID of the account it is associated with. Do you have any ideas on how I could turn this into a query that uses joins, or possibly a more efficient query? Oh, and this is a Postgres DB.
Try this: ``` SELECT acc.* FROM accounts acc JOIN addresses addr ON acc.id = addr.account_id JOIN alternate_ids ids ON acc.id = ids.account_id WHERE LOWER(acc.firstname) = LOWER('Sam') AND LOWER(addr.name) = LOWER('Street1') AND ids.alternate_id_glbl = '5'; ``` However, it is not guaranteed that this version will work better, as the query optimizer usually automatically tunes the execution plan.
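One caveat with the join rewrite: if an account can have several matching rows in `addresses` or `alternate_ids`, the joins return that account once per combination, whereas the original `IN` subqueries returned it once. An `EXISTS` form keeps the original one-row-per-account semantics (a sketch against the same assumed columns):

```sql
SELECT acc.*
FROM accounts acc
WHERE LOWER(acc.firstname) = LOWER('Sam')
  AND EXISTS (SELECT 1
              FROM addresses addr
              WHERE addr.account_id = acc.id
                AND LOWER(addr.name) = LOWER('Street1'))
  AND EXISTS (SELECT 1
              FROM alternate_ids ids
              WHERE ids.account_id = acc.id
                AND ids.alternate_id_glbl = '5');
```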
Have you checked the execution plan to see if it is already using a join? The query optimiser will do it if possible. You might place indexes on LOWER(firstname) in the accounts table and LOWER(name) in the addresses table.
Changing a query from subqueries to joins
[ "", "sql", "database", "postgresql", "" ]
I have a table with an event date column. I want to return records in an order in which the upcoming records show up first, and when all the upcoming records are over, the past records should show up with the latest passed first. For that, I was planning to first get records with `event date > GETDATE()` sorted in ascending order and merge them with the rest of the records in decreasing order of event date. As specifying two `ORDER BY` clauses is not possible in a `UNION`, what approach could be used for this? Summarizing, my query wants results like:

```
SELECT EventName, EventDate FROM EventTable WHERE EventDate>GETDATE() ORDER BY EventDate
UNION ALL
SELECT EventName, EventDate FROM EventTable WHERE EventDate<=GETDATE() ORDER BY EventDate DESC
```
``` SELECT * FROM ( SELECT EventName, EventDate, 1 AS OrderPri, ROW_NUMBER() OVER(ORDER BY EventDate) AS Row FROM EventTable WHERE EventDate > GETDATE() UNION ALL SELECT EventName, EventDate, 2 AS OrderPri, ROW_NUMBER() OVER(ORDER BY EventDate DESC) AS Row FROM EventTable WHERE EventDate <= GETDATE() ) AS m ORDER BY m.OrderPri, m.Row ```
You need to specify an `ORDER BY` that applies to *all* of the results, and if you want the results from the first `SELECT` to appear first, you have to *specify* that too: ``` SELECT EventName,EventDate FROM ( SELECT EventName, EventDate, 1 as ResultSet FROM EventTable WHERE EventDate>GETDATE() UNION ALL SELECT EventName, EventDate, 2 FROM EventTable WHERE EventDate<=GETDATE() ) t ORDER BY ResultSet, CASE WHEN ResultSet = 1 THEN EventDate END, CASE WHEN ResultSet = 2 THEN EventDate END DESC ``` Strictly, the second `CASE` expression isn't required, but I've included it for symmetry. --- Or, as Allan suggests, maybe just: ``` SELECT EventName, EventDate FROM EventTable ORDER BY CASE WHEN EventDate > GETDATE() THEN 1 ELSE 2 END, CASE WHEN EventDate > GETDATE() THEN EventDate END, EventDate desc ``` (Where this time I have omitted that final `CASE` expression)
Use ORDER BY with UNION ALL
[ "", "sql", "sql-server", "t-sql", "union", "union-all", "" ]
How do I SELECT multiple ranges in MySQL, taking into consideration the following scenario: let's say I want to get back from my product table the products where the price is 0-300, 600-1000, or 1200-1600. I was trying the following:

```
SELECT * FROM products WHERE price > 0 AND price <= 300 AND price >= 600 AND price <= 1000
```

but it doesn't return any rows.
If a number is between `0` and `300` it can't **also** be between `600` and `1000`. Are you looking for `OR`? Perhaps with [`BETWEEN`](http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between) just to make things easier? ``` SELECT * FROM products WHERE price BETWEEN 0 AND 300 OR price BETWEEN 600 AND 1000 OR price BETWEEN 1200 AND 1600 ```
```
SELECT *
FROM products
WHERE (price > 0 AND price <= 300)
   OR (price > 600 AND price <= 1000)
   OR (price > 1200 AND price <= 1600)
```

You can also use `BETWEEN`:

```
price BETWEEN 0 AND 300 OR price BETWEEN ...
```

Consider that `price BETWEEN 0 AND 300` will include 0, as it is parsed into `price >= 0 AND price <= 300`.
select more ranges in mysql
[ "", "mysql", "sql", "range", "" ]
I'm having a slight issue. I have a PostgreSQL table in the following format:

```
time (datetime) | players (int) | servers (int)
---------------------------------------------------
2013-12-06 13:40:01 | 80 | 20
2013-12-06 13:41:13 | 78 | 21
etc.
```

I would like to group the rows into 5-minute periods and get the average of each group as a single value, so there will be 20% of the records, each containing an average of ~5 numbers, with time set to the first time value in the group. I have no idea how to do this in PgSQL. So the result would be:

```
2013-12-06 13:40:01 | avg of players on :40, :41, :42, :43, :44 | same with servers
2013-12-06 13:45:05 | avg of players on :45, :46, :47, :48, :49 | same with servers
2013-12-06 13:50:09 | avg of players on :50, :51, :52, :53, :54 | same with servers
2013-12-06 13:55:12 | avg of players on :55, :56, :57, :58, :59 | same with servers
```
``` SELECT grid.t5 , min(t."time") AS min_time -- , array_agg(extract(min FROM t."time")) AS 'players_on' -- optional , avg(t.players) AS avg_players , avg(t.servers) AS avg_servers FROM ( SELECT generate_series(min("time") , max("time") , interval '5 min') AS t5 FROM tbl ) grid LEFT JOIN tbl t ON t."time" >= grid.t5 AND t."time" < grid.t5 + interval '5 min' GROUP BY grid.t5 ORDER BY grid.t5; ``` The subquery `grid` produces one row for every 5 minutes from minimum to maximum `"time"` in your table. `LEFT JOIN` back to the table slicing data in 5-min intervals. *Include* the lower bound, *exclude* the upper border. To drop 5-min slots where nothing happened, use `JOIN` instead of `LEFT JOIN`. To have your grid-times start at 0:00, 5:00 etc, round down the `min("time")` in `generate_series()`. Related: * [Group by data intervals](https://stackoverflow.com/questions/12623358/group-by-data-intervals/12624551#12624551) * [PostgreSQL: running count of rows for a query 'by minute'](https://stackoverflow.com/questions/8193688/postgresql-running-count-of-rows-for-a-query-by-minute/8194088#8194088) Aside: I wouldn't use "time" as identifier. It's a [reserved word in standard SQL](https://www.postgresql.org/docs/current/sql-keywords-appendix.html) and a function / type name in Postgres.
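If empty 5-minute slots don't need to appear in the output, the same bucketing can be done without a generated grid by grouping on rounded-down epoch seconds — a sketch, using the question's table and column names:

```sql
SELECT to_timestamp(floor(extract(epoch FROM "time") / 300) * 300) AS bucket_start
     , min("time")  AS first_sample   -- the first time value in the group, as requested
     , avg(players) AS avg_players
     , avg(servers) AS avg_servers
FROM   tbl
GROUP  BY 1
ORDER  BY 1;
```

Here 300 is the bucket width in seconds; `bucket_start` is the aligned 5-minute boundary, while `min("time")` gives the first actual sample in each bucket.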
Try this, it should group minutes 0-4, 5-9, 10-14 and so on... ``` SELECT MIN(time), AVG(Players), AVG(Servers) FROM MyTable t GROUP BY date_trunc('hour', time), FLOOR(datepart('minute', time)/12) ``` EDIT: Changed the grouping to hour first and then to the `Floor` of minutes. I Think this should work.
Selecting an average of records grouped by 5 minute periods
[ "", "sql", "postgresql", "timestamp", "aggregate-functions", "generate-series", "" ]
I am having issues with a report after switching the database from SQL Server 7 to SQL Server 2008. The report ran fine on our old server, which was Windows 2000 Server, again running SQL Server 7. We had to update the DSN driver to point to the new database. Now when someone runs our report, they receive this error:

![enter image description here](https://i.stack.imgur.com/UiaZ4.png)

I have tried lowering the master database compatibility level to SQL Server 2000 (80); I thought maybe it had something to do with the user logins. The database that the report is pulling from is running at compatibility level SQL Server 2000 (80). I have tried to google the report error, but everything that I have seen doesn't relate to my issue. I have tried reinstalling Crystal Reports; sometimes this helps fix any issues that I may have, however this time it didn't work. I have tried to "fix up" the report as well. In Crystal you can fix up a report to make sure that it is pointing to the right database. I didn't fix up any of the other reports, and they are working fine. I have tried testing the query against SQL Management Studio, and I am pulling the data that I want for the report with no errors. I know the query is working, so it can't be the statement. I have downloaded and installed the Crystal Reports runtime 8.5, restarted the machine and reran the report. I still get the same results. As far as I know, the issue is only with this report. We have other reports in our VB6 project that work just fine. 
Here is the code that runs the report:

```
Private Sub cmdPrintPo_Click()
    Dim result As Variant

    repSinglePo.ReportFileName = ReportDirectory + "\singlepo.rpt"
    repSinglePo.Destination = crptToPrinter
    repSinglePo.CopiesToPrinter = 1
    repSinglePo.Connect = "DSN = clearspan;UID = " + glUserName + ";PWD = " + glPassword + _
                          ";DSQ = " & gsDatabaseName
    repSinglePo.SQLQuery = _
        "SELECT" + _
        " PO.PO_Num, PO.Supplier, PO.DateOrdered, PO.DateRequired, PO.Terms, PO.Freight, PO.FOB, " + _
        " POItems.PO_Num, POItems.Quantity, POItems.Description, POItems.Item, POItems.Price, POItems.KeyNum " + _
        " From" + _
        " PO PO," + _
        " POItems POItems" + _
        " Where" + _
        " PO.PO_Num = POItems.PO_Num and PO.PO_Num = " + lblPONum.caption

    'Print Original Po and then a copy
    result = repSinglePo.PrintReport
    repSinglePo.ReportFileName = ReportDirectory + "\po-copy.rpt"
    result = repSinglePo.PrintReport

    If result = 0 Then
        MsgBox "PO has been printed"
    Else
        Select Case CLng(result)
            Case 20520
                DisplayErrorCode ("PrintingAlreadyStarted")
            Case Else
                MsgBox "Error while printing PO. Error code: " + str(result) & vbCrLf & repSinglePo.LastErrorString
        End Select
    End If
End Sub
```

As you can tell from the picture above, the code fails at:

```
Case Else
    MsgBox "Error while printing PO. Error code: " + str(result) & vbCrLf & repSinglePo.LastErrorString
End Select
```

I have tried altering the select statement, thinking maybe there was a difference in the way the two versions perceived it. Nothing I did seemed to matter. Can someone point me in the right direction? Thanks for any help ahead of time.
After weeks and weeks of working on this issue, I found a workaround. So far this has been the only way to solve the issue. I had to open the Crystal Reports viewer outside of Visual Studio, and then open the report that was causing the issue. After that I ran the app and opened the report in the app. The report would open just fine, and could be printed. Then I would leave the app open, and save the report in the viewer (once again outside of the app and Visual Studio). After I did this, we never had an issue with the report again. ![Crystal Report Viewer 8](https://i.stack.imgur.com/QOibe.png)
Crystal 8 rpt files keep the data connection information, which sometimes does not get cleared even when you change the details at run time. The following is suggested: change your rpt file to point to your new database, then refresh and preview the report, making sure it is fetching data from the new database. Hope it helps.
Crystal Reports 8 Issues From SQL 7 To SQL 2008 VB6
[ "", "sql", "sql-server", "sql-server-2008", "vb6", "crystal-reports", "" ]
I have two tables.

Table 1 (a temporary table):

```
Child_id
--------
1001
1002
1003
1004
1006
```

Table 2 (user_details):

```
Child_id  Amount
--------  ------
1001      100
1002      250
1003      100
1004      150
1008      400
```

Now I want the total of the amounts from the second table for the child_ids that also appear in the first table. For example, the output for these tables is 600 (adding the amounts for 1001, 1002, 1003 and 1004). Please could you help me do this.
If you have a very big table then this variant is better:

```
SELECT SUM(u.amount)
FROM user_details u
where exists (select *
              from tempTable t
              where t.child_id = u.child_id)
```
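The semi-join can be checked quickly with an in-memory SQLite database from Python (same table names as the question; SQLite is only used here to make the sketch runnable, the SQL itself is portable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tempTable (child_id INTEGER);
    CREATE TABLE user_details (child_id INTEGER, amount INTEGER);
    INSERT INTO tempTable VALUES (1001), (1002), (1003), (1004), (1006);
    INSERT INTO user_details VALUES
        (1001, 100), (1002, 250), (1003, 100), (1004, 150), (1008, 400);
""")

# the EXISTS (semi-join) variant
total = conn.execute("""
    SELECT SUM(u.amount)
    FROM user_details u
    WHERE EXISTS (SELECT * FROM tempTable t WHERE t.child_id = u.child_id)
""").fetchone()[0]
print(total)  # 600 = 100 + 250 + 100 + 150 (1008 has no match, 1006 has no amount)
```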
You can do: ``` SELECT SUM(u.amount) FROM tempTable t INNER JOIN user_details u ON t.child_id = u.child_id ``` This will result in the SUM of the amounts for the records that have child\_id in both tables
Adding sums from one table to a second table
[ "", "mysql", "sql", "sql-server", "" ]
I would like to make a single round trip to an MSSQL database to retrieve multiple tables. Each table relies in one way or another on the same somewhat expensive lookup, which will grab a bunch of IDs and handle pagination and very basic filtering. So I'm trying to move the expensive part out so it happens only once, and reuse its result set multiple times afterwards. Below is a really stripped-down version of what I'm trying to do. The "primary query" is the expensive lookup that's common to the next other queries below it. The "first result set" should return what I want, as you would expect, but the "second result set" fails because the primary query is out of scope. ``` -- Primary query WITH t(ItemId, Row) AS ( SELECT ItemId, ROW_NUMBER() OVER(ORDER BY DateCreated DESC) AS Row FROM Items ) -- First result set SELECT * FROM Items INNER JOIN t ON Items.ItemId = t.ItemId WHERE t.Row < 10 -- Second result set SELECT * FROM Photos INNER JOIN ItemPhotos ON Photos.PhotoId = ItemPhotos.PhotoId INNER JOIN t ON ItemPhotos.ItemId = t.ItemId WHERE t.Row < 10 ``` Is there a way to do this so that the second result set works? I would like to avoid creating temp tables because, in my experience, there is almost always a cheaper alternative that I've just not learned yet. In this case, I'm not sure there's an alternative, but I'm hoping someone knows a way around it. (I'll test both of course.) I know that in the example above you can probably return a single result set by doing an `INNER JOIN` on the whole thing, but in my case it's not a workable solution as the result set would be massive.
No, there is no way to do this. What you *can* do is use temp tables, like you said, or a materialized view.
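A rough sketch of the temp-table route, shown here against SQLite from Python so it is runnable (in SQL Server the `CREATE TEMP TABLE` would be a `SELECT ... INTO #temp` or an explicit `#temp` table; the row-number column is named `rn` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Items (ItemId INTEGER PRIMARY KEY, DateCreated TEXT);
    INSERT INTO Items VALUES (1, '2013-01-01'), (2, '2013-02-01'), (3, '2013-03-01');
    -- pay for the expensive ranking once, materialized into a temp table ...
    CREATE TEMP TABLE t AS
        SELECT ItemId, ROW_NUMBER() OVER (ORDER BY DateCreated DESC) AS rn
        FROM Items;
""")

# ... then reuse it for as many result sets as needed
first = conn.execute("""
    SELECT Items.ItemId FROM Items
    JOIN t ON Items.ItemId = t.ItemId
    WHERE t.rn < 3
""").fetchall()
second = conn.execute("SELECT COUNT(*) FROM t WHERE rn < 3").fetchone()[0]
```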
As stated in my comment, even if that syntax worked it would not achieve the objective of a single trip to the database, as a CTE is just syntax.

Given a range condition like t.Row > 1740 AND t.Row <= 1760, I would go #temp over a table variable. I like the simplicity of a table variable, but it does not query optimize well. This assumes ItemID is a PK.

If you are going to create a #temp then put some structure in it to make the joins as efficient as possible. The ORDER BY in the insert will minimize (or eliminate) fragmentation on the PK. Putting #temp.rn in the join rather than the where gives the query optimizer a chance to filter before the join.

```
IF OBJECT_ID(N'tempdb..#temp', N'U') IS NOT NULL
DROP TABLE #temp;
CREATE TABLE #temp (ItemId INT PRIMARY KEY CLUSTERED, rn INT);
insert into #temp
SELECT sID, ROW_NUMBER() OVER(ORDER BY addDate DESC) AS Row
  FROM docSVsys
 where sID < 10000
 ORDER by sID;
select count(*) from #temp;
CREATE UNIQUE NONCLUSTERED INDEX [IX] ON #temp ([rn] ASC);
select docSVtext.value
  from docSVtext
  join #temp
    on docSVtext.sID = #temp.ItemID
   and #temp.rn >= 100 and #temp.rn < 200;
select docSVdate.value
  from docSVdate
  join #temp
    on docSVdate.sID = #temp.ItemID
   and #temp.rn >= 100 and #temp.rn < 200;
IF OBJECT_ID(N'tempdb..#temp', N'U') IS NOT NULL
DROP TABLE #temp;
```

Another option is a #temp2 into which you insert the rows for a single join condition.
Using the results of a WITH query in multiple subqueries
[ "", "sql", "sql-server", "t-sql", "common-table-expression", "" ]
Consider the following table:

```
CREATE TABLE foo (
  id INT PRIMARY KEY,
  effective_date DATETIME NOT NULL UNIQUE
)
```

Given a set of dates D, how do you fetch all rows from foo whose effective_date is the greatest value less than each date in D, in a single query? For simplicity, assume that each date will have exactly one matching row. Suppose foo has the following rows.

```
---------------------
| id |effective_date|
---------------------
|  0 |    2013-01-07|
---------------------
|  1 |    2013-02-03|
---------------------
|  2 |    2013-04-19|
---------------------
|  3 |    2013-04-20|
---------------------
|  4 |    2013-05-11|
---------------------
|  5 |    2013-06-30|
---------------------
|  6 |    2013-12-08|
---------------------
```

If you were given D = {2013-02-20, 2013-06-30, 2013-12-19}, the query should return the following:

```
---------------------
| id |effective_date|
---------------------
|  1 |    2013-02-03|
|  4 |    2013-05-11|
|  6 |    2013-12-08|
```

If D had only one element, say D = {2013-06-30}, you could just do:

```
SELECT *
FROM foo
WHERE effective_date = (SELECT MAX(effective_date)
                        FROM foo
                        WHERE effective_date < '2013-06-30')
```

How do you generalize this query when the size of D is greater than 1, assuming D will be specified in an IN clause?
Actually, your problem is that you have a list of values, which will be treated in MySQL as a *row* - and not as a set - in most cases. That is, one possible solution is to generate your set properly in the application, so it will look like:

```
SELECT '2013-02-20'
UNION ALL
SELECT '2013-06-30'
UNION ALL
SELECT '2013-12-19'
```

-and then use the produced set inside a `JOIN`. It would also be great if MySQL could accept a static list in [`ANY`](http://dev.mysql.com/doc/refman/5.0/en/any-in-some-subqueries.html) subqueries - as it does for the `IN` keyword - but it can't. `ANY` also expects a row set, not a list (which will be treated as a row with `N` columns, where `N` is the count of items in your list). Fortunately, in your particular case the issue has an important restriction: there can be no more items in the list than rows in your `foo` table (it makes no sense otherwise). So you can dynamically build that list, and then use it like:

```
SELECT foo.*, final.period
FROM (SELECT period, MAX(foo.effective_date) AS max_date
      FROM (SELECT period
            FROM (SELECT ELT(@i:=@i+1, '2013-02-20', '2013-06-30', '2013-12-19') AS period
                  FROM foo
                  CROSS JOIN (SELECT @i:=0) AS init) AS dates
            WHERE period IS NOT NULL) AS list
      LEFT JOIN foo ON foo.effective_date<list.period
      GROUP BY period) AS final
LEFT JOIN foo ON final.max_date=foo.effective_date
```

-your list will be automatically iterated via [`ELT()`](http://dev.mysql.com/doc/refman/5.6/en/string-functions.html#function_elt), so you can pass it directly into the query without any additional restructuring. Note, however, that this method will iterate through all `foo` records to produce the row set, so it will work - but doing this in the application may be better in terms of performance. The demo for your table can be found [here](http://sqlfiddle.com/#!2/978b0/1).
perhaps this can help : ``` SELECT * FROM foo WHERE effective_date IN ( (SELECT MAX(effective_date) FROM foo WHERE effective_date < '2013-02-20'), (SELECT MAX(effective_date) FROM foo WHERE effective_date < '2013-06-30'), (SELECT MAX(effective_date) FROM foo WHERE effective_date < '2013-12-19') ) ``` result : ``` --------------------- | id |effective_date| --------------------- | 1 | 2013-02-03| -- different | 4 | 2013-05-11| | 6 | 2013-12-08| ``` --- **UPDATE - 06 December** --- **create procedure :** ``` DELIMITER $$ USE `test`$$ /*change database name*/ DROP PROCEDURE IF EXISTS `myList`$$ CREATE PROCEDURE `myList`(ilist VARCHAR(100)) BEGIN /*var*/ /*DECLARE ilist VARCHAR(100) DEFAULT '2013-02-20,2013-06-30,2013-12-19';*/ DECLARE delimeter VARCHAR(10) DEFAULT ','; DECLARE pos INT DEFAULT 0; DECLARE item VARCHAR(100) DEFAULT ''; /*drop temporary table*/ DROP TABLE IF EXISTS tmpList; /*loop*/ loop_item: LOOP SET pos = pos + 1; /*split*/ SET item = REPLACE( SUBSTRING(SUBSTRING_INDEX(ilist, delimeter, pos), LENGTH(SUBSTRING_INDEX(ilist, delimeter, pos -1)) + 1), delimeter, ''); /*break*/ IF item = '' THEN LEAVE loop_item; ELSE /*create temporary table*/ CREATE TEMPORARY TABLE IF NOT EXISTS tmpList AS ( SELECT item AS sdate ); END IF; END LOOP loop_item; /*view*/ SELECT * FROM tmpList; END$$ DELIMITER ; ``` **call procedure :** ``` CALL myList('2013-02-20,2013-06-30,2013-12-19'); ``` **query :** ``` SELECT *, (SELECT MAX(effective_date) FROM foo WHERE effective_date < sdate) AS effective_date FROM tmpList ``` **result :** ``` ------------------------------ | sdate |effective_date| ------------------------------ | 2013-02-20 | 2013-02-03 | | 2013-06-30 | 2013-05-11 | | 2013-12-19 | 2013-12-08 | ```
Return rows with maximum date less than each value in a set of dates in SQL
[ "", "mysql", "sql", "" ]
I have two tables, "user", and "user\_things". I want to get all users with one or more things, but I don't want to retrieve the things themselves (I only want one row per user returned). ``` Table 1: id username Table 2: id userid thingname ``` Example: I want to find all users with a "hat" and a "car". If there are two users with this, I want only two rows returned (not 4).
Select all users for which a record for 'car' and for 'hat' `exists` in the other table. ``` select * from User u where exists ( select 'x' from Things t where t.userid = u.id and t.thingname = 'hat') and exists ( select 'x' from Things t where t.userid = u.id and t.thingname = 'car') ``` Alternatively, you can do this, although I think it's less nice, less semantically correct: ``` select distinct u.* from Users u inner join Things tc on tc.userid = u.id and tc.thingname = 'car' inner join Things th on th.userid = u.id and th.thingname = 'hat' ``` Or even: ``` select u.* from Users u where (select count('x') from Things t where t.userid = u.id and t.thingname in ('car', 'hat')) = 2 ``` Although the last one might also return users that have no car and two hats.
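Here is a runnable check of the double-`EXISTS` query using sqlite3 (sample users and things invented for the demo), confirming it returns one row per qualifying user:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE User (id INTEGER, username TEXT);
    CREATE TABLE Things (id INTEGER, userid INTEGER, thingname TEXT);
    INSERT INTO User VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO Things VALUES
        (1, 1, 'hat'), (2, 1, 'car'),   -- alice: both
        (3, 2, 'hat'), (4, 2, 'hat'),   -- bob: two hats, no car
        (5, 3, 'car');                  -- carol: car only
""")

rows = conn.execute("""
    SELECT username FROM User u
    WHERE EXISTS (SELECT 'x' FROM Things t
                  WHERE t.userid = u.id AND t.thingname = 'hat')
      AND EXISTS (SELECT 'x' FROM Things t
                  WHERE t.userid = u.id AND t.thingname = 'car')
""").fetchall()
print(rows)  # [('alice',)]
```

Bob's two hats produce no duplicate rows, because `EXISTS` only asks whether a match exists at all.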
Use aggregation:

```
select u.id, u.username
from user u
join user_things ut on ut.userid = u.id
group by u.id, u.username
having sum(case when ut.thingname = 'hat' then 1 else 0 end) > 0
   and sum(case when ut.thingname = 'car' then 1 else 0 end) > 0
```

The first part of the `having` clause counts the number of "hat"s. The second counts the number of "car"s. The `> 0` condition requires that both are present.
How to do left join without select
[ "", "sql", "" ]
I have a SQL Server database that contains two tables, both of which contain email addresses. One contains email addresses that have been created by the organisation for the users; the other contains users' personal email addresses. We want to give the user the option to set one of these as the preferred contact email address. What would be the best way to represent this in the database? As far as I can see I could:

1. Create the same field on each table called is_preferred, and create a stored procedure to set the correct field in the corresponding table that the email address belongs to, ensuring that no other email address stays set as the default.
2. Create a new table that stores the ID of the preferred email address. I would need to have a 'mail type' in this table as well, to know which table to look the email address up in, i.e. personal or organisation email.

Which solution do you think would be best? A user could have multiple organisation email addresses as well as multiple personal email addresses.
Here is one way to model this:

![enter image description here](https://i.stack.imgur.com/Ls5cS.png)

NOTE: If there are no *(specific fields)* or *(specific constraints)*, just merge all three email tables into one.

The EMAIL's composite PK establishes an ordering of emails at the level of the given person. You can then simply introduce a convention that the first row relative to that ordering is the "preferred" e-mail.

---

Here is another way (if you want simpler keys):

![enter image description here](https://i.stack.imgur.com/5a3qN.png)
I would create an e-mail address table with columns `email_id`, `email_address`, `employee_id`, and `type` (a flag for personal or organizational). I would then add the column `preferred` to my employee table, and have it store the appropriate `email_id`. I would assume email addresses must be DISTINCT, so you could likely scrap the id altogether, in favor of the address as the key.
Two database tables storing email addresses, one of the addresses to be set to default
[ "", "sql", "sql-server", "database", "database-design", "" ]
**Creating Table:**

```
CREATE TABLE test (
  charcol    CHAR(10),
  varcharcol VARCHAR2(10));

INSERT INTO test VALUES ('a', 'a');

SELECT LENGTH(charcol), LENGTH(varcharcol)
  FROM test;
```

**Result:**

```
LENGTH(CHARCOL) LENGTH(VARCHARCOL)
--------------- ------------------
             10                  1
```

Please let me know: what is the difference between VARCHAR2 and CHAR, and when should each be used?
Simple example to show the difference:

```
SELECT '"'||CAST('abc' AS VARCHAR2(10))||'"',
       '"'||CAST('abc' AS CHAR(10))||'"'
  FROM dual;

'"'||CAST('ABC'ASVARCHAR2(10))||'"' '"'||CAST('ABC'ASCHAR(10))||'"'
----------------------------------- -------------------------------
"abc"                               "abc       "

1 row selected.
```

CHAR is useful for expressions where the length of characters is always fixed, e.g. postal codes for US states, for example CA, NY, FL, TX.
Although there are already several answers correctly describing the behaviour of `char`, I think it needs to be said that **you should not use it** except in three specific situations: 1. You are building a fixed-length file or report, and assigning a non-null value to a `char` avoids the need to code an `rpad()` expression. For example, if `firstname` and `lastname` are both defined as `char(20)`, then `firstname||lastname` is a shorter way of writing `rpad(firstname,20)||rpad(lastname,20)` to create `Chuck Norris` 2. You need to distinguish between the explicit empty string `''` and `null`. Normally they are the same thing in Oracle, but assigning `''` to a `char` value will trigger its blank-padding behaviour while `null` will not, so if it's important to tell the difference, and I can't really think of a reason why it would be, then you have a way to do that. 3. Your code is ported from (or needs to be compatible with) some other system that requires blank-padding for legacy reasons. In that case you are stuck with it and you have my sympathy. There is really **no reason to use `char`** just because some length is fixed (e.g. a `Y/N` flag or an ISO currency code such as `'USD'`). It's not more efficient, it doesn't save space (there's no mythical length indicator for a `varchar2`, there's just a blank padding overhead for `char`), and it doesn't stop anyone entering shorter values. (If you enter `'ZZ'` in your `char(3)` currency column, it will just get stored as `'ZZ '`.) It's not even backward-compatible with some ancient version of Oracle that once relied on it, because there never was one. And the contagion can spread, as (following best practice) you might anchor a variable declaration using something like `sales.currency%type`. 
Now your `l_sale_currency` variable is a stealth `char` which will get invisibly blank-padded for shorter values (or `''`), opening the door to obscure bugs where `l_sale_currency` does not equal `l_refund_currency` even though you assigned `'ZZ'` to both of them. Some argue that `char(n)` (where *n* is some character length) indicates that values are expected to be *n* characters long, and this is a form of self-documentation. But surely if you are serious about a 3-character format ([ISO-Alpha-3](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) country codes rather than [ISO-Alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2), for example), wouldn't you define a constraint to enforce the rule, rather than letting developers glance at a `char(3)` datatype and draw their own conclusions? `CHAR` was introduced in Oracle 6 for, I'm sure, ANSI compatibility reasons. Probably there are potential customers deciding which database product to purchase and *ANSI compatibility* is on their checklist (or used to be back then), and `CHAR` with blank-padding is defined in the ANSI standard, so Oracle needs to provide it. You are not supposed to actually use it.
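As an aside, the blank-padding behaviour from situation 1 above is easy to mimic outside the database; Python's `ljust` plays the role of `rpad`, assuming two `CHAR(20)` columns:

```python
firstname, lastname = "Chuck", "Norris"

# what CHAR(20) storage effectively does: pad each value to 20 characters,
# so concatenation is fixed-width without any rpad() calls
row = firstname.ljust(20) + lastname.ljust(20)
print(len(row))  # 40 -- always fixed width
```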
What is the major difference between Varchar2 and char
[ "", "sql", "oracle", "char", "varchar2", "" ]
If any SQL gurus could point me in the right direction on this, it would be a huge help. I'm learning SQL and want to create a report that would generate the Account #s under the *same* Person ID that were created <= 30 days of each other and exclude any others. All the information needed is in the same table. For example, this code pulls up a list of persons with the same personid who have more than one Accountid, along with their Creation date:

```
select accounttid, creationdate, personid
from Table1
where personid in (
    select (personid)
    from Table1
    group by personid
    having COUNT (accountid) > 1)

EXAMPLE RESULT:
accountid   creationdate    personid
5501624     2013-05-01      101
5501544     2013-05-03      101
5510220     2013-10-24      10337
5504204     2013-06-27      10337
5502332     2013-05-21      1047
5502628     2013-05-28      1047
5508844     2013-10-01      1047
```

Not sure where to go from here. I want to then take these Accountids and somehow compare the Creationdates for a differential of 30 days or less, but only compare them when the PersonIDs are the same.
I would use a semi join:

```
select accountid, creationdate, personid
from Table1 t1
where EXISTS( SELECT 1 FROM Table1 t2
              WHERE t1.personid = t2.personid
                AND t1.accountid <> t2.accountid
                AND t1.creationdate BETWEEN t2.creationdate - interval 30 day
                                        AND t2.creationdate + interval 30 day
);
```

demo --> <http://www.sqlfiddle.com/#!2/2be93/2>

**--- EDIT ---**

---

On SQL Server use this condition:

```
BETWEEN dateadd( day, -30, t2.creationdate ) AND dateadd( day, 30, t2.creationdate )
```

Here is a query for SQL Server:

```
select accountid, creationdate, personid
from Table1 t1
where EXISTS( SELECT 1 FROM Table1 t2
              WHERE t1.personid = t2.personid
                AND t1.accountid <> t2.accountid
                AND t1.creationdate BETWEEN dateadd( day, -30, t2.creationdate )
                                        AND dateadd( day, 30, t2.creationdate )
);
```

demo: ----> <http://www.sqlfiddle.com/#!3/cc922/4>

---

A remark on your query:

```
select accountid, creationdate, personid
from Table1
where personid in (
    select personid
    from Table1
    group by personid
    having COUNT(accountid) > 1
);
```

Think about it for a while: the subquery with `HAVING COUNT` must calculate the number of records for each person - it must read the whole table (all rows) to obtain this information, because we asked `give me the number of rows for a given person`. If a person has 10,000 accounts, all of them must be read just to count them. However, we don't need this information, and we don't need to read the whole table. What we need is the answer to this question: `does this person have at least 2 accounts?`. For this kind of query we can use an `EXISTS` operator:

```
select accountid, creationdate, personid
from Table1 t1
where EXISTS( SELECT 1 FROM Table1 t2
              WHERE t1.personid = t2.personid
                AND t1.accountid <> t2.accountid
);
```

In this query MySQL doesn't need to count all records and doesn't need to read the whole table. It stops reading the table as soon as it finds the first record that meets the criteria defined by the subquery inside the `EXISTS` operator.
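The ±30-day window can be exercised against the question's sample data with sqlite3 (SQLite has no `INTERVAL` or `dateadd`, so this sketch does the day arithmetic with `julianday()` instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (accountid INTEGER, creationdate TEXT, personid INTEGER);
    INSERT INTO Table1 VALUES
        (5501624, '2013-05-01', 101),
        (5501544, '2013-05-03', 101),
        (5510220, '2013-10-24', 10337),
        (5504204, '2013-06-27', 10337),
        (5502332, '2013-05-21', 1047),
        (5502628, '2013-05-28', 1047),
        (5508844, '2013-10-01', 1047);
""")

rows = conn.execute("""
    SELECT accountid FROM Table1 t1
    WHERE EXISTS (
        SELECT 1 FROM Table1 t2
        WHERE t1.personid = t2.personid
          AND t1.accountid <> t2.accountid
          AND julianday(t1.creationdate)
              BETWEEN julianday(t2.creationdate) - 30
                  AND julianday(t2.creationdate) + 30
    )
""").fetchall()
```

Person 10337's two accounts are months apart, so both drop out; person 1047 keeps only its May pair.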
```
select distinct(personid, accountID)
from Table1
where Table1.personid in (
    select distinct(t1a.personid)
    from Table1 as t1a
    join Table1 as t1b
      on t1a.personid = t1b.personid
     and t1a.creationdate < t1b.creationdate
     and datediff(dd, t1a.creationdate, t1b.creationdate) <= 30
)
```
How to compare dates for multiple rows under the same table and same account
[ "", "sql", "sql-server-2008", "t-sql", "compare", "report", "" ]
I have a problem with one query. I have this Orders table:

```
workerID orderID Year orderCount
274      869     2008  14
274      758     2006   2
274      770     2006  13
274      853     2006  10
```

And I need to create a table like this one:

```
workerID orderID Year orderCount sumByYears
274              2005   30       (orderCount for year 2005)
274              2006  880       (orderCount for year 2005+2006)
274              2007 1456       (orderCount for year 2005+2006+2007)
274              2008  729       (...)
```

It's easy for me to write this query:

```
SELECT workerID
    , Year
    , SUM(orderCount)
FROM Orders
GROUP BY workerID
    , Year
ORDER BY workerID
    , Year
;
```

But I do not know how to get the last column.
What you want is commonly called a running total. In SQL Server 2012, you can use `SUM() OVER ()` to calculate it. ``` SELECT workerID, Year, orderCount = SUM(orderCount), sumByYears = SUM(SUM(orderCount)) OVER (PARTITION BY workerID ORDER BY Year) FROM Orders GROUP BY workerID , Year ORDER BY workerID , Year ; ``` Read more about the OVER clause here: * [OVER clause (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms189461.aspx)
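The running total is easy to verify with sqlite3 (SQLite also supports window functions since version 3.25); this sketch wraps the grouped query in a subquery, which is equivalent to the `SUM(SUM(...)) OVER` form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Orders (workerID INT, orderID INT, Year INT, orderCount INT);
    INSERT INTO Orders VALUES
        (274, 869, 2008, 14), (274, 758, 2006, 2),
        (274, 770, 2006, 13), (274, 853, 2006, 10);
""")

rows = conn.execute("""
    SELECT workerID, Year, orderCount,
           SUM(orderCount) OVER (PARTITION BY workerID ORDER BY Year) AS sumByYears
    FROM (SELECT workerID, Year, SUM(orderCount) AS orderCount
          FROM Orders GROUP BY workerID, Year)
    ORDER BY workerID, Year
""").fetchall()
print(rows)  # [(274, 2006, 25, 25), (274, 2008, 14, 39)]
```

The `ORDER BY Year` inside `OVER` gives the default cumulative frame, so each year's row carries the sum of all years up to and including it.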
Assuming you want a literal string, this should do it:

```
SELECT workerID
    , Year
    , SUM(orderCount), '(orderCount for year' + Year + ')'
FROM Orders
GROUP BY workerID
    , Year
ORDER BY workerID
    , Year
;
```

If what you're really after is the rolling sum of the order counts by year, then I believe this should work:

```
SELECT workerID
    , Year
    , SUM(orderCount)
    , (SELECT SUM(orderCount)
       FROM Orders As runningORders
       WHERE runningOrders.Year <= Orders.Year ) As SumByYears
FROM Orders
GROUP BY workerID
    , Year
ORDER BY workerID
    , Year
;
```

Here's a sample [sqlfiddle](http://sqlfiddle.com/#!3/337f4/3)
sum amount by next years mssql
[ "", "sql", "sql-server", "sql-server-2012", "" ]
So I have a floating-point value **41617.063633** and I want to know how it's made from a DATETIME. I have worked out the left side of the floating point (the number of days since *12/30/1899*), but I'm stuck on the right side. I assume it's a count of seconds or something, but I can't get the right side (*063633*) no matter what I try with the time part of the DATETIME string. Below is the SQL to get the left side:

```
DECLARE @targetDate DATETIME = '12/9/2013 12:31:37';
DECLARE @floor DATETIME = '12/30/1899 0:00:00'
DECLARE @gmt TIME = '11:00:00';
DECLARE @left VARCHAR (8) = DATEDIFF(DAY, @floor , CONVERT (DATE, @targetDate));
DECLARE @right VARCHAR (8) = '0';
SELECT @left + '.' + @right AS [FloatingTime]
```

I know I can use the next bit of SQL to work it out for me:

```
SELECT CONVERT (DATETIME, 41617.063633)
```

But since I have come so close to figuring this out, could someone get me over the line? Do you know how to calculate the right side of the floating point (*the time part of the string*)?
You're correct about the meaning of the whole number part (the part to the left of the decimal). The fractional part (the part to the right of the decimal) indicates a portion of a 24-hour period. ``` Decimal value Time Value Calculation ============= ========== =========== 0.00094444444 12:01:00 AM 1.0 / 24 / 60 (1 day/24 hours/60 minutes per hour) 0.01041666666 12:15:00 AM 1.0 / 24 / 60 * 15 0.02083333333 12:30:00 AM 1.0 / 24 / 60 * 30 0.04166666666 01:00:00 AM 1.0 / 24 ```
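Putting the two halves together: adding the float as a number of days to the 1899-12-30 epoch decodes the whole value. A Python sketch:

```python
from datetime import datetime, timedelta

def from_serial(value):
    # whole part: days since 1899-12-30; fraction: portion of a 24-hour day
    return datetime(1899, 12, 30) + timedelta(days=value)

dt = from_serial(41617.063633)
print(dt.date(), dt.time().replace(microsecond=0))  # 2013-12-09 01:31:37
```

Note the decoded time is 01:31:37; adding the 11-hour `@gmt` offset from the question gives the original 12:31:37.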
It all depends on what type of datetime. There are currently 6 different date/time types in SQL Server 2012: <http://msdn.microsoft.com/en-us/library/ff848733.aspx> As for DATETIME, the value should be interpreted as two 32-bit integers: the number of days since 1/1/1900 and the time since midnight. It is accurate to increments of .000, .003, or .007 seconds, i.e. 3 ms. I would have to check, but the time part might be the count of 3 ms intervals since midnight. Again, you need to really know what generated the date to accurately decode the hex / number.
How does CONVERT (DATETIME, FLOAT) work?
[ "", "sql", "sql-server", "datetime", "" ]
I have a table:

```
CREATE TABLE passenger_details
(
bank_card INT(20) NOT NULL AUTO_INCREMENT,
email VARCHAR(20),
mobile INT(15) NOT NULL,
p_id INT NOT NULL,
PRIMARY KEY (bank_card),
FOREIGN KEY (p_id) REFERENCES passenger(p_id)
);

INSERT INTO passenger_details
VALUES (0123012301230123,'blah_@hotmail.com',0872863492,1234);
```

```
select*from passenger_details;
+------------+--------------------+-----------+------+
| bank_card  | email              | mobile    | p_id |
+------------+--------------------+-----------+------+
| 2147483647 | blah_@hotmail.com  | 872863492 | 1234 |
+------------+--------------------+-----------+------+
1 row in set (0.00 sec)
```

As you can see, the value went into the table wrong. It should be 16 digits, not 10 - and different digits at that. When I try to insert a new value:

```
INSERT INTO passenger_details
VALUES (1234258431681842,'blah@icloud.com',0895764829,123548);
```

I get this error:

> ERROR 1062 (23000): Duplicate entry '2147483647' for key 'PRIMARY'

If bank_card is AUTO_INCREMENT, why is there an error? Should I move the PK to another column?
`INT` has a [maximum signed value of 2147483647](http://dev.mysql.com/doc/refman/5.0/en/integer-types.html). Any number greater than that will be clamped to that value. What you need to do is change that column to a `VARCHAR(20)`, which will hold strings of up to 20 characters, since bank card numbers are strings and not actually numbers (you don't do math with them). You should also remove `AUTO_INCREMENT`, as that is not a value you will be incrementing.
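The clamping is easy to see from the type's limits; a quick Python check (the card number below is the question's value minus its leading zero, which a numeric literal drops):

```python
INT_MAX = 2**31 - 1        # signed 32-bit INT ceiling
card = 123012301230123     # 0123012301230123 as a numeric literal

print(INT_MAX)             # 2147483647 -- exactly the value that got stored
print(card > INT_MAX)      # True -> the out-of-range insert gets clamped
```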
Something to ask yourself: how did the number get that big? "I did not insert 2 billion rows!" Well, possibly you 'burned' that many `AUTO_INCREMENT` ids. This can happen in many ways:

```
INSERT IGNORE ...   -- when the insert is ignored (because the row already exists)
REPLACE
IODKU               -- INSERT ... ON DUPLICATE KEY UPDATE
```

and probably others.
ERROR 1062 (23000): Duplicate entry '2147483647' for key 'PRIMARY'
[ "", "mysql", "sql", "" ]
I am working on SQL and I have the following problem:

```
select * from(
select tname,teacher.tid,grade
from teacher inner join _view on(_view.tid=teacher.tid))as D
group by grade
where // what should I do here to get the rows having the first and the second maximum values?
order by grade desc,tid;
```

I want to select only the rows that have the first maximum value and the second maximum value. I have tried a lot of things since yesterday but with no success! When I use something like MAX, COUNT or AND I get an aggregate function ERROR. Please help me with this; I have tried everything I could!
OK, after much thinking I got this to work correctly and smoothly. Moreover, TOP would not work here, only LIMIT at the end of the query. Here is my answer:

```
select * from(
select tname,teacher.tid,grade
from teacher inner join _view on(_view.tid=teacher.tid)
)as D
where grade in(select grade from _view order by grade desc limit 2)
order by grade desc,tid;
```

Thanks everybody for your collaboration.
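The accepted query can be reproduced with sqlite3 and some hypothetical grades (note the subquery's `LIMIT 2` picks the two highest *rows*; add `DISTINCT` if you want the two highest *values* when grades can tie):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teacher (tid INTEGER, tname TEXT);
    CREATE TABLE _view (tid INTEGER, grade INTEGER);
    INSERT INTO teacher VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid'), (4, 'Dee');
    INSERT INTO _view VALUES (1, 95), (2, 88), (3, 70), (4, 60);
""")

rows = conn.execute("""
    SELECT tname, grade
    FROM teacher JOIN _view ON _view.tid = teacher.tid
    WHERE grade IN (SELECT grade FROM _view ORDER BY grade DESC LIMIT 2)
    ORDER BY grade DESC
""").fetchall()
print(rows)  # [('Ann', 95), ('Bob', 88)]
```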
Try: ``` ORDER BY grade DESC LIMIT 2 ```
sql where condition for getting the first and second maxium values
[ "", "mysql", "sql", "" ]
I am building an app to support 200,000+ registered users, and want to add an addressbook functionality for each user to import their own contacts (e.g. name, address, email, etc). Each user will have c.150 different contacts, with 10-15 fields for each record. My question is simple: given the volume of users and the number of contacts for each user, is it better to create individual tables for each user's addressbook, or one single table with a user\_id lookup for that associated user account? If you could explain why from a performance perspective, that would be much appreciated. **UPDATE: Specifications** In response to questions in comments, here are the specifications: I will be hosting the database on AWS RDS (<http://aws.amazon.com/rds>). It will primarily be a heavy read load, rather than write. When write is accessed, it will be a balance between INSERT and UPDATE, with few deletes. Imagine the number of times you view vs edit your own addressbook. Thanks
**Specific answer in response to specifications** One table for contacts' data, with an indexed foreign key column back to user. Finding a particular user's contacts [will require about 3 seeks](http://dev.mysql.com/doc/refman/5.5/en/estimating-performance.html), a relatively small number. Use an SSD if seeks are bottlenecking you. If your 15 columns have 100 bytes each, and you have 150 of those, then your maximum data transfer per user is of the order of 256k. I would design the application to show only the contact data required up front (say the top 3 most useful contact points -- name, email, phone), then pull more specifics when requested for particular contacts. In the (presumably) rare cases when you need all contacts' info (eg export to CSV) consider [SELECT INTO OUTFILE](http://dev.mysql.com/doc/refman/5.5/en/select-into.html) if you have that access. vCard output would be less performant: you'd need to get all the data, then stuff it into the right format. If you need vCard often, consider writing the vCard out when the database is updated (a caching approach). If performance requirements are still not met, consider [partitioning on the user id](http://dev.mysql.com/doc/refman/5.5/en/partitioning.html). **General answer** Design your schema around KISS and your performance requirements, while documenting the scalability plan. In this particular situation, the volume of data does not strike me as extreme, so I would lean KISS toward one table. However, it's not clear to me what kind of queries you will be making -- JOIN is the usual performance hog, not a straight SELECT. Also not clear to me is your SELECT/UPDATE mix. If read-heavy and by user, a single table will do it. Anyway, if after implementation you find the performance requirements aren't met, I would suggest you consider scaling by faster hardware, a different engine (eg MyISAM vs. InnoDB -- know what the differences are for your particular MySQL version!), materialized views, or partitioning (eg around the first letter of the corresponding username -- presuming you have one).
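As a concrete sketch of the single-table design described above (hypothetical table and column names, sqlite3 standing in for MySQL): one contacts table, a foreign key back to users, and an index on that key so per-user lookups stay cheap.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE contacts (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    name    TEXT, email TEXT, phone TEXT  -- ... up to 15 fields
);
-- The index that makes "all contacts of one user" a cheap lookup:
CREATE INDEX idx_contacts_user ON contacts(user_id);
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.executemany(
    "INSERT INTO contacts(user_id, name) VALUES (?, ?)",
    [(1, f"contact {i}") for i in range(150)],  # ~150 contacts per user
)
n = conn.execute("SELECT COUNT(*) FROM contacts WHERE user_id = 1").fetchone()[0]
print(n)  # 150
```

The same shape scales to 200,000+ users: the per-user query touches only the index slice for that `user_id`, not the whole table.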
Have a single table, but **partition** it by the first letter of the user's last name: e.g. all last names starting with A are loaded into one partition, all names starting with B into another. You could also do some amount of profiling to find the right distribution key.
MySQL Performance: Single table or multiple tables for large datasets
[ "", "mysql", "sql", "performance", "database-performance", "" ]
I'm trying to figure out what the pattern is for using named parameters in go's built-in database/sql package. I looked at the oracle driver, but it seems like just a wrapper for the C library. Have people solved this in an elegant way? So far I've just worked around the problem by putting `{0}`, `{1}` as the parameters in the unit tests, but it sure would be nice to be able to use them normally as a `map[string]interface{}` or something. Does anyone have an idea or an implementation that seems idiomatic? For reference, here is a test: ``` db := testConn() stmt, err := db.Prepare("return {0} as int1, {1} as int2") if err != nil { t.Fatal(err) } rows, err := stmt.Query(123, 456) if err != nil { t.Fatal(err) } rows.Next() var test int var test2 int err = rows.Scan(&test, &test2) if err != nil { t.Fatal(err) } if test != 123 { t.Fatal("test != 123;", test) } if test2 != 456 { t.Fatal("test2 != 456;", test2) } ``` And what I'm doing in `Query` is: ``` func (stmt *cypherStmt) Query(args []driver.Value) (driver.Rows, error) { cyphReq := cypherRequest{ Query: stmt.query, } if len(args) > 0 { cyphReq.Params = make(map[string]interface{}) } for idx, e := range args { cyphReq.Params[strconv.Itoa(idx)] = e } ... ```
It would be possible to create a `map[string]interface{}` type that implements `driver.Valuer{}` to serialize it as a `[]byte`, and then convert it back in the driver. But that would be inefficient and unidiomatic. Since your driver would then be used in a nonstandard way anyway, it would probably be better to just forget about database/sql and write a package with a totally custom interface.
I'm using wrapper on top of database/sql called sqlx <https://github.com/jmoiron/sqlx> You can check here how he did it. Example on how to select into a tuple ``` type Person struct { FirstName string `db:"first_name"` LastName string `db:"last_name"` Email string } jason = Person{} err = db.Get(&jason, "SELECT * FROM person WHERE first_name=$1", "Jason") fmt.Printf("%#v\n", jason) // Person{FirstName:"Jason", LastName:"Moiron", Email:"jmoiron@jmoiron.net"} ``` Example on how to insert a tuple ``` dude := Person{ FirstName:"Jason", LastName:"Moiron", Email:"jmoiron@jmoiron.net" } _, err = db.NamedExec(`INSERT INTO person (first_name,last_name,email) VALUES (:first,:last,:email)`, dude) ```
named parameters in database/sql and database/sql/driver
[ "", "sql", "database", "go", "neo4j", "driver", "" ]
Here is my problem: I want to insert a new row into my table, but there are already some records in it. If I need to put this new row at the position where an existing record is, what should I do? For example: I have this table with these rows: ``` ID|Value 1 |Sample1 2 |Sample2 3 |Sample3 ``` But now I want to insert a new row where Sample2 is, so the table should look like: ``` ID|Value 1 |Sample1 2 |NewSample 3 |Sample2 4 |Sample3 ``` Any thoughts?
> Any thoughts? Yes. Please forget about changing the primary key (the ID) if you have references somewhere. Rather, add a column (e.g. ViewOrder) which handles this explicitly for you: ``` ID|Value | ViewOrder 1 |Sample1 |1 5 |NewSample |2 2 |Sample2 |3 3 |Sample3 |4 ``` Query to select: ``` SELECT ID, Value, ViewOrder FROM yourTable ORDER BY ViewOrder ``` **Insert / Update** would look something like this (where YourRowIndex is the index at which you wish to insert your new row, of course): ``` UPDATE dbo.table SET ViewOrder = ViewOrder + 1 WHERE ViewOrder >= @YourRowIndex ; INSERT dbo.table (Value, ViewOrder) VALUES (@YourValue, @YourRowIndex); ```
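The shift-then-insert sequence can be sketched end to end like this (sqlite3 standing in for SQL Server, with an invented table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, value TEXT, view_order INTEGER);
INSERT INTO t VALUES (1,'Sample1',1),(2,'Sample2',2),(3,'Sample3',3);
""")

row_index = 2  # insert the new row at display position 2
# Step 1: make room by shifting everything at or after that position.
conn.execute("UPDATE t SET view_order = view_order + 1 WHERE view_order >= ?",
             (row_index,))
# Step 2: insert the new row into the freed slot; the ID stays untouched.
conn.execute("INSERT INTO t(value, view_order) VALUES (?, ?)",
             ("NewSample", row_index))

rows = conn.execute("SELECT value, view_order FROM t ORDER BY view_order").fetchall()
print(rows)  # [('Sample1', 1), ('NewSample', 2), ('Sample2', 3), ('Sample3', 4)]
```

Selecting `ORDER BY view_order` gives the desired display order while every existing primary key (and any foreign key pointing at it) is left alone.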
The easy way is to add a new column -- set it to the same value as ID and then you have two choices, if you make it numeric you can just add a value in between ``` ID | Value | OrderCol 1 | Sample1 | 1 4 | NewSample | 1.5 2 | Sample2 | 2 3 | Sample3 | 3 ``` your other option is to renumber order -- which can be slow if you have a lot of stuff in the table. You probably don't want to change ID since there might be an external table which references this identifier.
SQL How to insert a new row in the middle of the table
[ "", "sql", "sql-server", "" ]
Forever I've used a case sensitive collation in Sql Server (SQL\_Latin1\_General\_CP1\_CS\_AS). I'm trying to move to Sql Azure Database and I've run into an unexpected problem. It looks like it's impossible to have case sensitive column names. Can this be true? I create my database... ``` CREATE DATABASE MyDatabase COLLATE SQL_Latin1_General_CP1_CS_AS ``` And I create my table... ``` CREATE TABLE [MyTable] ( [Name] NVarChar (4000) COLLATE SQL_Latin1_General_CP1_CS_AS NULL, [name] NVarChar (4000) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ) ``` And I get the error: Column names in each table must be unique. Column name 'name' in table 'MyTable' is specified more than once. Ugh, disaster. This works perfectly in Sql Server 2012. However on Sql Azure I can't seem to make it happen. Does anyone know why this is not working in Sql Azure? Does anyone know how I can make this work in Sql Azure? Thanks.
**I think this is a bug in Windows Azure SQL!** Online documentation states that you can override the collation at the database, column or expression levels. You cannot do it at the server level. <http://msdn.microsoft.com/en-us/library/windowsazure/ee336245.aspx#sscs> Let's start with something we know will work, a local install of SQL Server 2012. ``` -- Start at master Use master; Go -- Create a new database CREATE DATABASE Koopa COLLATE SQL_Latin1_General_CP1_CS_AS; -- Use the new database Use Koopa; Go -- Create a new table CREATE TABLE [MyTable] ( [ColName1] NVarChar (4000) COLLATE SQL_Latin1_General_CP1_CS_AS NULL, [colname1] NVarChar (4000) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ); ``` If we try to run the create table in the MASTER database, we get your error. Msg 2705, Level 16, State 3, Line 2 Column names in each table must be unique. Column name 'colname1' in table 'MyTable' is specified more than once. If we run the create table in the Koopa database, it works fine. See image below. That is because MASTER is case insensitive (CI)! ![enter image description here](https://i.stack.imgur.com/G1zwB.jpg) I am going to use the web interface for Azure SQL database since it has nice colors (it is the web)! Let's create a new database with the case-sensitive collation. Wow, I am getting excited since there is an option to select our collation. ![enter image description here](https://i.stack.imgur.com/PDdef.jpg) Now that we have a new database, let's check the settings! ![enter image description here](https://i.stack.imgur.com/Fa5ws.jpg) I am still happy since we see the correct collation listed for the database. Let's log onto the database server directly and run a simple query to create the table. I am going to try the designer first! ![enter image description here](https://i.stack.imgur.com/oc6St.jpg) Oops, it did not work. Let's try a simple DDL statement in a query window. 
![enter image description here](https://i.stack.imgur.com/9U2XA.jpg) **Now I am really disappointed.** We were sold a bill of goods but Azure SQL did not deliver. In short, the documentation says you cannot set the collation at the server level. <http://technet.microsoft.com/en-us/library/ms143726.aspx> However, this quote from BOL states we should be able to override it at the database level. *Server-level collations The default server collation is set during SQL Server setup, and also becomes the default collation of the system databases and all user databases. Note that Unicode-only collations cannot be selected during SQL Server setup because they are not supported as server-level collations.* ![enter image description here](https://i.stack.imgur.com/nVAZI.jpg) In short, I am extremely busy with a couple of speaking engagements for PASS over the next 7 days. I will have to open a bug report or see if there is one already open. Good find!! BTW - You now need to use distinct column names.
To solve your case you need to add CATALOG\_COLLATION. ``` CREATE DATABASE MyDatabase COLLATE SQL_Latin1_General_CP1_CS_AS WITH CATALOG_COLLATION = DATABASE_DEFAULT ``` Source: [What will happen with CATALOG\_COLLATION and Case Sensitive vs Case Insensitive](https://blogs.msdn.microsoft.com/azuresqldbsupport/2018/02/15/what-will-happen-with-catalog_collation-and-case-sensitive-vs-case-insensitive/) Unfortunately it looks like you can put only two values there: SQL\_Latin1\_General\_CP1\_CI\_AS or DATABASE\_DEFAULT. I had the mirror problem: I wanted the schema objects (table names, column names etc.) to be CI and the data inside the database to be CS. - This works in Azure by default with a CS collation, e.g. SQL\_Latin1\_General\_CP1\_CS\_AS, set on database creation. But it does not work on a local SQLEXPRESS database, which is a problem for testing the application locally. When you create a case-sensitive (CS) database, all queries become case sensitive too. This made my Hibernate stop working ("Object does not exist", etc.). To solve this I needed to create the database with the SQL\_Latin1\_General\_CP1\_CI\_AS collation and define a CS collation for each column separately: ``` create table test ( pk int PRIMARY KEY, data_cs varchar(123) COLLATE SQL_Latin1_General_CP1_CS_AS, CONSTRAINT [unique_data] UNIQUE NONCLUSTERED (data_cs) ) insert into TEST values (1, 'ABC'); insert into tEsT values (2, 'abc'); select * from TEST where DATA_cs like 'A%' ``` CATALOG\_COLLATION works in Azure SQL only, I guess.
Case Sensitive column names in Sql Azure Database
[ "", "sql", "sql-server", "database", "azure", "" ]
I am trying to execute the following query: ``` UPDATE table1 SET column1 = NULL WHERE column1 = (SELECT column1 FROM table1 WHERE id = @id) ``` on a MySQL server, but I get the following error message: "Error Code: 1093. You can't specify target table 'table1' for update in FROM clause" This works in Microsoft SQL Server. Any ideas on how to get it to work on MySQL? Thanks
``` SET @column1 = (SELECT column1 FROM table1 WHERE id = @id) UPDATE table1 SET column1 = NULL WHERE column1=@column1 ```
Add an extra `SELECT` to the inner query. This will act as a "different" table since you are selecting from `a` instead of `table1`: ``` UPDATE table1 SET column1 = NULL WHERE column1 = ( SELECT column1 FROM ( SELECT column1 FROM table1 WHERE id = @id )a); ``` `sqlfiddle demo`
SQL query works on SQL Server but not on MySQL
[ "", "mysql", "sql", "" ]
I am facing problem with an SQL query: Here is my table data: ``` +------------+--------+--------+------------+------------+ | priceId_PK | mPrice | aPrice | validFrom | validTo | +------------+--------+--------+------------+------------+ | 1 | 0.00 | 0.00 | 2013-07-01 | 2013-08-31 | | 2 | 7.50 | 2.50 | 2013-09-01 | 2013-12-31 | | 3 | 15.00 | 5.00 | 2014-01-01 | 2035-12-31 | +------------+--------+--------+------------+------------+ ``` And my SQL query is ``` SELECT mPrice, aPrice, CASE WHEN validFrom < '2013-11-01' THEN '2013-11-01' ELSE validFrom END AS validFrom, CASE WHEN validTo > '2013-11-30' THEN '2013-11-30' ELSE validTo END AS validTo FROM commission WHERE (validfrom BETWEEN '2013-11-01' AND '2013-11-30' OR validto BETWEEN '2013-11-01' AND '2013-11-30') ORDER BY validFrom ``` My expected result: ``` +--------+--------+------------+------------+ | mPrice | aPrice | validFrom | validTo | +--------+--------+------------+------------+ | 7.50 | 2.50 | 2013-11-01 | 2013-11-30 | +--------+--------+------------+------------+ ``` But the query actually returns an empty set. Without the `WHERE` condition, the query is ``` SELECT mPrice, aPrice, CASE WHEN validFrom < '2013-11-01' THEN '2013-11-01' ELSE validFrom END AS validFrom, CASE WHEN validTo > '2013-11-30' THEN '2013-11-30' ELSE validTo END AS validTo FROM commission ``` This returns: ``` +--------+--------+------------+------------+ | mPrice | aPrice | validFrom | validTo | +--------+--------+------------+------------+ | 0.00 | 0.00 | 2013-11-01 | 2013-08-31 | | 7.50 | 2.50 | 2013-11-01 | 2013-11-30 | | 15.00 | 5.00 | 2014-01-01 | 2013-11-30 | +--------+--------+------------+------------+ ``` My question is, Why does the query with the `WHERE` condition return empty data when I expect it to return a single result set? Is it a problem with [the execution order of `SELECT` and `WHERE` conditions](https://stackoverflow.com/questions/20425808/sql-execution-order)? 
Or is it another issue with `WHERE` condition that is not working properly?
I think you are trying to access the `validFrom` and `validTo` from the case query in the where condition. If that is the case, you need to reformat your query. ``` SELECT t1.* FROM ( SELECT mPrice, aPrice, CASE WHEN validFrom < '2013-11-01' THEN '2013-11-01' ELSE validFrom END AS validFrom, CASE WHEN validTo > '2013-11-30' THEN '2013-11-30' ELSE validTo END AS validTo FROM commission) t1 WHERE ((t1.validfrom BETWEEN '2013-11-01' AND '2013-11-30') OR (t1.validto BETWEEN '2013-11-01' AND '2013-11-30')) ORDER BY t1.validFrom ``` But this will return 3 results. If you need to get the expected result, then you need to use an `AND` condition instead of `OR`. Then your query will be ``` SELECT t1.* FROM ( SELECT mPrice, aPrice, CASE WHEN validFrom < '2013-11-01' THEN '2013-11-01' ELSE validFrom END AS validFrom, CASE WHEN validTo > '2013-11-30' THEN '2013-11-30' ELSE validTo END AS validTo FROM commission) t1 WHERE ((t1.validfrom BETWEEN '2013-11-01' AND '2013-11-30') AND (t1.validto BETWEEN '2013-11-01' AND '2013-11-30')) ORDER BY t1.validFrom ```
Can you try this, added proper `()` for validfrom and validto columns ``` WHERE ( (validfrom BETWEEN '2013-11-01' AND '2013-11-30') OR (validto BETWEEN '2013-11-01' AND '2013-11-30') ) ORDER BY validFrom ```
WHERE condition issue in SQL
[ "", "mysql", "sql", "" ]
I have a query to show customers and the total dollar value of all their orders. The query takes about 100 seconds to execute. I'm querying on an ExpressionEngine CMS database. ExpressionEngine uses one table, `exp_channel_data`, for all content. Therefore, I have to join on that table for both customer and order data. I have about 14,000 customers, 30,000 orders and 160,000 total records in that table. Can I change this query to speed it up? ``` SELECT link.author_id AS customer_id, customers.field_id_122 AS company, Sum(orders.field_id_22) AS total_orders FROM exp_channel_data customers JOIN exp_channel_titles link ON link.author_id = customers.field_id_117 AND customers.channel_id = 7 JOIN exp_channel_data orders ON orders.entry_id = link.entry_id AND orders.channel_id = 3 GROUP BY customer_id ``` Thanks, and please let me know if I should include other information. **UPDATE: SOLUTION** My apologies. I noticed that `entry_id` for the `exp_channel_data` table customers corresponds to `author_id` for the `exp_channel_titles` table. So I don't have to use `field_id_117` in the join. `field_id_117` duplicates `entry_id`, but in a TEXT field. JOINing on that text field slowed things down. The query is now **3 seconds**. However, the inner join solution posted by @DRapp is **1.5 seconds**. Here is his SQL with a minor edit: ``` SELECT PQ.author_id CustomerID, c.field_id_122 CompanyName, PQ.totalOrders FROM ( SELECT t.author_id, SUM( o.field_id_22 ) as totalOrders FROM exp_channel_data o JOIN exp_channel_titles t ON t.author_id = o.entry_id AND o.channel_id = 3 GROUP BY t.author_id ) PQ JOIN exp_channel_data c ON PQ.author_id = c.entry_id AND c.channel_id = 7 ORDER BY CustomerID ```
If this is the same table, then it has the same columns across the board for all alias instances. I would ensure an index on (channel\_id, entry\_id, field\_id\_117 ) if possible. Another index on (author\_id) for the prequery of order totals. Then, start first with what will become an inner query doing nothing but a per-customer sum of order amounts. Since the join is the "author\_id" as the customer ID, just query/sum that first. Not completely understanding the (what I would consider) poor design of the structure, knowing what the "Channel\_ID" really indicates, you don't want to duplicate summation values because of these other things in the mix. ``` select o.author_id, sum( o.field_id_22 ) as totalOrders FROM exp_channel_data o where o.channel_id = 3 group by o.author_id ``` If that is correct on the per-customer level (via the author\_id column), then that can be wrapped as follows ``` select PQ.author_id CustomerID, c.field_id_122 CompanyName, PQ.totalOrders from ( select o.author_id, sum( o.field_id_22 ) as totalOrders FROM exp_channel_data o where o.channel_id = 3 group by o.author_id ) PQ JOIN exp_channel_data c on PQ.author_id = c.field_id_117 AND c.channel_id = 7 ```
Can you post the results of an `EXPLAIN` query? I'm guessing that your tables are not indexed well for this operation. All of the columns that you join on should probably be indexed. As a first guess I'd look at indexing `exp_channel_data.field_id_117`
Slow aggregate query with join on same table
[ "", "mysql", "sql", "expressionengine", "" ]
I have the code for the next button. The data in the database shows up normally. The problem is that when I click the next button, the data repeats again as --> data1 > data2 > data3 > data1 > data2... I've been told that I should count the maximum rows, but I don't know how to do it; I've searched for the code as well, but nothing that I understand came up. Please help me~~~ (I am not very good with English, sorry) > Private Sub btnNext\_Click(ByVal sender As System.Object, ByVal e As > System.EventArgs) Handles btnNext.Click > > ``` > btnBack.Enabled = True > > da.Fill(dt) > If position >= 0 Then > > position = position + 1 > Me.lblID.Text = dt.Rows(position).Item("RefNo") > Me.txtboxName.Text = dt.Rows(position).Item("Name") > Me.rtxtboxAddress.Text = dt.Rows(position).Item("Address") > Me.txtboxContactNo.Text = dt.Rows(position).Item("ContNo") > Me.txtboxFaxNo.Text = dt.Rows(position).Item("FaxNo") > Me.txtboxBrand.Text = dt.Rows(position).Item("Brand") > Me.txtboxModel.Text = dt.Rows(position).Item("Model") > Me.txtboxSN.Text = dt.Rows(position).Item("SN") > Me.rtxtboxProblems.Text = dt.Rows(position).Item("Problems") > Me.rtxtboxTechRemark.Text = dt.Rows(position).Item("TechRemark") > Me.rtxtboxServChange.Text = dt.Rows(position).Item("ServiceChange") > Me.rtxtboxPartChange.Text = dt.Rows(position).Item("PartsChange") > Me.txtboxTotal.Text = dt.Rows(position).Item("TotalPrice") > > End If > > End Sub > ``` I don't know if this also needs to be mentioned, but... there are two different classes: 1) database.vb - SQL code 2) forms.vb - code for my Visual Basic form. Please help me!! **THANKS EVERYONE WHO HELPED ME WITH THE ANSWERS!! I FOUND THE SOLUTION TO THE QUESTION AFTER REWORKING THE CODE.** I hadn't realized that the position value and the row count are not the same. My value of position = 0 while dt.Rows.Count = 4, since I have 4 records; so when position = 0, the row count = 1. I was confused about that; I thought both values start at 0.
A bit more code... this is *bad* code... don't rely on it for production since I'm sure there are plenty of corner cases it doesn't handle, but I wanted to give you a picture of how you could handle it and at least a glimpse of stuff you might need to worry about (e.g. 0 rows in the data table). Please work through this and try to understand the code... don't simply copy/paste. ``` Private Sub btnNext_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnNext.Click If position >= dt.Rows.Count Then ' No more rows to show after this one, so disable the next button btnNext.Enabled = False End If ' Put a check for zero in btnBack.Click to make sure it doesn't go below 0 ' I'm assuming position starts off at 0, and that you're showing the very ' first row by default da.Fill(dt) position += 1 ' This is to handle a condition like having 0 rows in the data table, ' in which case we don't want to do the next part... If position > dt.Rows.Count Then Exit Sub ' Only necessary once really, but we'll do it each time anyway... btnBack.Enabled = True ' Update various textboxes and labels... End Sub ```
`dt.Rows.Count` is the number of rows in the Rows collection. Anytime you want to check the whether you've reached the maximum number of rows compare the row count to dt.Rows.Count
How to count Maximum rows in Visual Basic
[ "", "sql", "vb.net", "vba", "count", "rows", "" ]
Let's say I have table of number pairs: ``` a | b --+-- 1 | 1 2 | 1 3 | 1 2 | 2 4 | 2 5 | 5 1 | 3 ``` For each `b` I want to pick an `a` such as there are no 2 identical `a`s for different `b`s. If I do a simple query with `group by`: ``` select * from t group by b ``` I get the following results back: ``` a | b --+-- 1 | 1 2 | 2 5 | 5 1 | 3 ``` `a == 1` for `b == 1` and `b == 3` What I want instead is something like this: ``` a | b --+-- 3 | 1 2 | 2 5 | 5 1 | 3 ``` Could you help me with this problem? I assume that there's a known terminology for this kind of subset querying, but I'm not aware of it, and that makes searching for answers harder. Bonus points if the query picks largest `a` for given `b` while keeping the given uniqueness constraint. For my example the result would be: ``` a | b --+-- 3 | 1 4 | 2 5 | 5 1 | 3 ```
I think the first approach will be: ``` select MAX(a) as a, b from t as t1 where NOT EXISTS(select a from t where b<>t1.b and a=t1.a) or NOT EXISTS(select a from t where a in (select a from t where b=t1.b) GROUP BY a HAVING COUNT(*)=1) GROUP BY b ``` `SQLFiddle demo` We should group by `b` and find `MAX(a)` but from the special subrange of the main table. First we should get all `a` which aren't exist for another `b` (it is the first condition). But in the case of `b=3` we get the case that ALL `a` exist for other `b` so the second condition handles this case.
try this ``` select max(a),b from t group by b; ```
How to select the largest subset of pairs with unique elements?
[ "", "mysql", "sql", "select", "greatest-n-per-group", "" ]
I have a table called votes. ``` ID int(11) pk ai candidate_id int(11) region varchar(75) ``` Can I get a result with an SQL statement, something like ``` West 46 East 75 North 28 etc... ``` West, East being regions... Thanks
``` SELECT region, COUNT(region) FROM votes GROUP BY region ORDER BY 2 DESC ``` Hmm, elaborating on your request for totals of votes per candidate will depend on how your data is laid out. Do you simply have one row per vote for a candidate? If so, then I'd try (although I haven't tested): ``` SELECT region, COUNT(region), candidate_id, COUNT(candidate_id) FROM votes GROUP BY region, candidate_id ORDER BY 2 DESC, 4 DESC ```
Assuming you want one string, containing both fields, you can use MySQL's CONCAT function to concatenate. Example: ``` SELECT DISTINCT region, SUM(*) as votes_count, CONCAT(region, " ", votes_count) FROM votes ``` [More info on CONCAT()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat)
count the number of rows with a particular column value
[ "", "sql", "database", "" ]
I am looking for a way to run a SELECT query in O(1). Can I create an index in such a way that a SELECT by primary key will take O(1) time complexity?
A [clustered primary key](http://msdn.microsoft.com/en-us/library/ms177443%28v=sql.105%29.aspx) is organised as a [b-tree](http://en.wikipedia.org/wiki/B-tree). The clustered key is not a [hash-based index](http://en.wikipedia.org/wiki/Hash-Based_Indexes), which is what would be required for [O(1)](http://en.wikipedia.org/wiki/Logarithmic_time#Constant_time). I believe b-tree searches are [O(log n)](http://en.wikipedia.org/wiki/Logarithmic_time#Logarithmic_time). So no, you can't > create an index in this way that SELECT by primary key will take O (1) > time complexity?
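A rough in-memory analogy of the two index shapes, with Python's dict playing the hash index and a binary search over a sorted list playing the b-tree:

```python
from bisect import bisect_left

keys = list(range(1_000))
hash_index = {k: f"row {k}" for k in keys}  # hash index: ~O(1) amortized probe
sorted_keys = sorted(keys)                  # b-tree stand-in: O(log n) search

def btree_like_lookup(key):
    # bisect_left does ~log2(n) comparisons, like walking a b-tree.
    i = bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

print(hash_index[42])         # row 42 -- one hash probe
print(btree_like_lookup(42))  # True   -- ~log2(1000) ≈ 10 comparisons
```

The trade-off mirrors the database case: the hash structure answers only exact-match probes, while the ordered structure also supports range scans, which is a big part of why clustered keys are b-trees.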
IIRC, some RDBMS engines have hash table indexes. That would AFAIK give you amortized constant time as you so desire. AFAICT, MS SQL Server does not have this feature.
Select by O(1) from SQL table
[ "", "sql", "sql-server", "select", "indexing", "time-complexity", "" ]
I have a row in my database table with a value of `1/5/2013 5:50:00 PM`, and I want to update only the date part. The time should stay the same, without any change; I need to change only the date in this record. I have tried an update statement, but it changes the time as well. I can do a ``` UPDATE table1 SET date = '1/10/2013 5:50:00 PM' WHERE id =1 ``` but this is not what I'm looking for; different IDs have different times, so I just need to update the date while keeping the time in each record the same. Please give feedback. Thank you
You can do it this way if you're using SQL Server 2008 or higher ``` UPDATE table1 SET [date] = cast('1/10/2013' as datetime) + cast(cast([date] as time) as datetime) WHERE id =1 ``` If you're using SQL Server 2005 or below, you there's no `time` data type, so you have to do: ``` UPDATE table1 SET [date] = cast('1/10/2013' as datetime) + ([date] - DATEADD(dd, 0, DATEDIFF(dd, 0, [date]))) WHERE id =1 ```
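The same split — keep the time-of-day, swap the date — that the CAST expressions above perform can be illustrated outside SQL with a small Python sketch:

```python
from datetime import datetime, date

stamp = datetime(2013, 1, 5, 17, 50)  # 1/5/2013 5:50:00 PM
new_date = date(2013, 1, 10)

# Recombine the new date part with the original time part.
updated = datetime.combine(new_date, stamp.time())
print(updated)  # 2013-01-10 17:50:00
```

Because each row keeps its own `.time()`, applying this per row is exactly the "different IDs have different times" behavior the question asks for.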
``` UPDATE table1 SET date = DATEADD(dd,5,date)-- 5 is the number of days FROM table1 WHERE id =1 ```
Update date only in SQL Server
[ "", "sql", "sql-server", "sql-server-2005", "" ]
For example: in my db, there are 10 records: 500, 501, 502, 503, 504, 506, 507, 508, 509, 510. I want to get 505, which is not used. (The start number is 500.) If 501, 502, 503 are in the db, I want to get 500. I use MySQL.
Another way to do it ``` SELECT number + 1 first_unused FROM ( SELECT 499 number UNION ALL SELECT number FROM table1 WHERE number >= 500 ) t WHERE NOT EXISTS ( SELECT * FROM table1 WHERE number = t.number + 1 ) LIMIT 1 ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/8d470/2)** demo
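Here is the same sentinel-plus-`NOT EXISTS` idea reproduced with sqlite3 (an explicit `ORDER BY` is added so the smallest gap wins regardless of scan order; the sentinel 499 makes 500 a valid answer when it is missing):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nums (number INTEGER);
INSERT INTO nums VALUES (500),(501),(502),(503),(504),
                        (506),(507),(508),(509),(510);
""")
first_unused = conn.execute("""
    SELECT number + 1 AS first_unused
    FROM (SELECT 499 AS number
          UNION ALL
          SELECT number FROM nums WHERE number >= 500) t
    WHERE NOT EXISTS (SELECT 1 FROM nums WHERE number = t.number + 1)
    ORDER BY first_unused
    LIMIT 1
""").fetchone()[0]
print(first_unused)  # 505
```

Each candidate is "one past" an existing number (or the 499 sentinel); the `NOT EXISTS` keeps only candidates that are actually free.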
if you can get know the total sum (i.e., total sum of 500 to 510 without missing number) then, try ``` SELECT <total sum> - SUM(<field name>) FROM <table name>; ``` Eg: for db 1,2,3,4, 5, 7, 8, 9, 10 code is: ``` SELECT 55 - SUM(id) FROM 'a' ```
Using SQL to get the minimal unused number from a set of continuously increasing numbers
[ "", "mysql", "sql", "" ]
I have a table that contains student IDs and their GPA. ``` _____________ | sID | GPA | ------------- | 1 | 3.7 | | 2 | 3.9 | | 3 | 3.6 | | 4 | 3.7 | | 5 | 3.1 | | 6 | 3.9 | ``` I wanted to create a table that gives me only the students who have the highest, or second highest GPAs. In other words, **I want this result:** ``` _____________ | sID | GPA | ------------- | 2 | 3.9 | | 6 | 3.9 | | 1 | 3.7 | | 4 | 3.7 | ``` To do so, I began by writing a query that gives me all the students who match the highest score: ``` SELECT * FROM gpaTable WHERE gpa in ( SELECT max(gpa) FROM gpaTable) ``` This gives me: ``` | sID | GPA | ------------- | 2 | 3.9 | | 6 | 3.9 | ``` To get the second highest as well, I want to **subtract** this result from the entire original table, and then repeat the query that finds the max again. I assume it would look something like this, however, **I can't get it right.** ``` SELECT * FROM gpaTable WHERE gpa IN ( SELECT * FROM gpaTable WHERE gpa in ( SELECT max(gpa) FROM gpaTable) ) OR (SELECT * FROM gpaTable WHERE NOT EXISTS IN (SELECT * FROM gpaTable WHERE gpa in ( SELECT max(gpa) FROM gpaTable) ) ``` In English, the query says (or should say) *Give me every row that appears in the table of maximum GPAs* **OR** *in the table of maximum GPAs of the lesser GPAs* I'd really appreciate any help!
You can do it like this: ``` select * from gpaTable where gpa in ( select distinct gpa from gpaTable order by gpa desc limit 2 ) ``` The idea is similar to your first solution, except `MAX` is replaced with ordering and a [`LIMIT 2`](http://www.postgresql.org/docs/8.1/static/queries-limit.html) [Demo on sqlfiddle.](http://www.sqlfiddle.com/#!12/cd6ce/1)
Would it not be easier to just select the top 2 entries in the subselect? You could, e.g., use [`LIMIT`](http://www.postgresql.org/docs/8.1/static/queries-limit.html): ``` SELECT * FROM gpaTable WHERE gpa in (SELECT DISTINCT gpa FROM gpaTable ORDER BY gpa DESC LIMIT 2) ORDER BY gpa; ```
Subtracting one table from another in PostgreSQL
[ "", "sql", "postgresql", "" ]
I have this sql query: ``` select ct.ID, t.Tag from links ct inner join tags t on ct.ID_TAG = t.ID_TAG ``` returning this resultset: ``` ID | Tag -------------------------------------------------- 3e39e18a-f741-4ab8-9225-b11d8a6df440 | apps 3e39e18a-f741-4ab8-9225-b11d8a6df440 | testing dbfa053c-be7a-45f9-9dba-02fd5407a0e3 | tools dbfa053c-be7a-45f9-9dba-02fd5407a0e3 | apps dbfa053c-be7a-45f9-9dba-02fd5407a0e3 | testing cc7241a6-3054-4589-a011-55c3baa43d8a | bruzzo cc7241a6-3054-4589-a011-55c3baa43d8a | octocats cc7241a6-3054-4589-a011-55c3baa43d8a | github ae85d29d-9ca3-43c9-8345-d192eacd052a | bruzzo ae85d29d-9ca3-43c9-8345-d192eacd052a | octocats ae85d29d-9ca3-43c9-8345-d192eacd052a | github b682fc38-382d-4f5a-878a-e9a24f77587d | bruzzo b682fc38-382d-4f5a-878a-e9a24f77587d | octocats b682fc38-382d-4f5a-878a-e9a24f77587d | github 8c77dd17-466f-4a6e-916b-6c563d016fd4 | octocats d8a524cb-56b6-4d43-b136-3e1a923b9920 | octocats a460912d-ade9-433d-9d56-9d841480c1bb | gaziano ``` Now I need to retrieve all the IDs associated with both tags "octocats" and "github", so I expect: ``` cc7241a6-3054-4589-a011-55c3baa43d8a ae85d29d-9ca3-43c9-8345-d192eacd052a b682fc38-382d-4f5a-878a-e9a24f77587d ``` What is the most efficient way to retrieve the requested resultset? Also consider that I could have more than two tags to search for matching IDs. EDIT: For all people suggesting 'IN' statement, this is the result (not what I need): ``` cc7241a6-3054-4589-a011-55c3baa43d8a | octocats cc7241a6-3054-4589-a011-55c3baa43d8a | github ae85d29d-9ca3-43c9-8345-d192eacd052a | octocats ae85d29d-9ca3-43c9-8345-d192eacd052a | github b682fc38-382d-4f5a-878a-e9a24f77587d | octocats b682fc38-382d-4f5a-878a-e9a24f77587d | github 8c77dd17-466f-4a6e-916b-6c563d016fd4 | octocats d8a524cb-56b6-4d43-b136-3e1a923b9920 | octocats ``` I don't need the last two to be included, they only match 'octocats' but not 'github'.
Try this: ``` select ct.ID from links ct inner join tags t on ct.ID_TAG = t.ID_TAG where t.Tag in ('github', 'octocats') group by ct.ID having count(distinct t.tag) = 2; ``` This should give you the id's that have both tags
syntax wise simplest is to do `in` ``` where t.Tag in ('github', 'octocats') ```
aggregation in Tsql
[ "", "sql", "t-sql", "join", "aggregate-functions", "" ]
I have two tables. 1. NEW [contains data] [all columns are varchar] 2. NEW2 [empty table] [columns are of different data types] I want to copy all data from New to New2. What i did is, ``` SELECT T.* INTO #tmp FROM (SELECT * FROM [dbo].[new]) AS T ``` then ``` INSERT INTO New2(col1, col2....) SELECT * FROM #TMP ``` But its not working. ``` Msg 242, Level 16, State 3, Line 2 The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value. The statement has been terminated. ``` [what I want is to change the column data types of NEW table, especially the varchar to smalldatetime. So I tried this way. Any other approach is also welcome.] Any help would be greatly appreciated. Thank You.
Yes, done. What I did: I imported Excel data into a SQL Server table with all columns as the varchar data type. The problem was that some of the date values in the Excel data were 'NA', so I had to replace all those 'NA' values with NULL. To check for invalid and missing date values in a table, I used the following commands (note that T-SQL's `ISNULL` takes two arguments, so a plain `IS NULL` test is what finds the nulls):

```
SELECT ISDATE(COL_NAME) AS Result FROM TABLE_NAME
SELECT * FROM TABLE_NAME WHERE COL_NAME IS NULL
```

For this, sometimes you also have to check and set the date format of SQL Server using the following commands:

```
DBCC useroptions
SET DATEFORMAT mdy
```

Then I replaced all the invalid values with NULL:

```
UPDATE TABLE SET COLUMN = NULL WHERE ISDATE(COLUMN) = 0 OR COLUMN = 'NA'
```

At last I updated the required columns manually using simple ALTER commands:

```
ALTER TABLE ALTER COLUMN COL_NAME <<data type>>
```

I also changed my date format to dmy, which was previously mdy. Thanks to Suraj Singh and Deepshikha for their helpful suggestions.
While inserting cast your column to `smalldatetime` ``` SET DATEFORMAT ymd INSERT INTO New2(col1, col2....) SELECT Col1,Col2 , CAST('2007-05-08 12:35:29' AS smalldatetime) As Col_Name,...Col3 FROM #TMP ```
How to copy data from one table to another where column data types are different?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am looking for a SQL-function that gives the last 12 months with Start Date and End Date. Say you pick 10.Dec, it will give a result in: ``` - StartDate -- EndDate - 2013-11-01 - 2013-11-30 - 2013-10-01 - 2013-10-31 - 2013-09-01 - 2013-09-30 ``` and so it goes for the last 12 months. I tried modifying an old function we had, but I got totally off and confused in the end. ``` ALTER FUNCTION [dbo].[Last12Months](@Date date) RETURNS TABLE AS Return ( with cte as ( SELECT DATEADD(mm, DATEDIFF(mm, 01, @Date), 01) AS Start, DATEADD(mm, DATEDIFF(mm, -12, @Date), -12) AS EndDate union all select Start - 1, EndDate - 1 from cte where Start >= @Date ) select CAST(Start as DATE) StartDate, CAST(EndDate as DATE) EndDate from cte) ``` Runned it like this: ``` select * from dbo.Last12Months ('2013-12-10') ``` and got: ``` - StartDate - EndDate - 2013-12-02 - 2013-12-20 ``` Anyone know what to do?
Please try using CTE: ``` ALTER FUNCTION [dbo].[Last12Months] ( @Date datetime ) RETURNS @tbl TABLE (Start datetime, EndDate datetime) AS BEGIN WITH T AS( SELECT DATEADD(month, DATEDIFF(month, 0, @Date), 0) AS Start, DATEADD(d, -DAY(DATEADD(m,1,@date)),DATEADD(m,1,@date)) AS EndDate, 12 Cnt UNION ALL SELECT DATEADD(month, -1, Start), DATEADD(d, -DAY(DATEADD(m,1,Start-1)),DATEADD(m,1,Start-1)), Cnt-1 FROM T WHERE Cnt-1>0 ) INSERT INTO @tbl (Start, EndDate) SELECT Start, EndDate FROM T RETURN END ```
This seems to do the job - whether you want to put it in a function or just wherever you need to have the data: ``` ; With Numbers as ( select ROW_NUMBER() OVER (ORDER BY number ) as n from master..spt_values ), Months as ( select DATEADD(month,n,'20010101') as start_date, DATEADD(month,n,'20010131') as end_date from Numbers ) select * from Months where DATEDIFF(month,start_date,GETDATE()) between 0 and 11 ``` (Substitute any other date for `GETDATE()` if you want to get it based on some other date) (On my machine, this can generate any month from January 2001 on to at least the next century - it can be adjusted if you need earlier or later dates also)
SQL function for last 12 months
[ "", "sql", "sql-server", "function", "date", "" ]
I have a situation where I have to generate a range of date (say start date and end date). Now I want to fill the records of specific dates on those generated dates. ``` ;WITH DateRange AS ( SELECT @start_date DateValue UNION ALL SELECT DateValue + 1 FROM DateRange WHERE DateValue + 1 <= @end_date ) ``` If `@start_date = '2013-01-01' and @end_date= '2013-01-05'` The dates generated will be `'01/01/2013,01/02/2013,01/03/2013,01/04/2013,01/05/2013'` Now I want to pivot it to fill the specific data for specific date. How can I achieve this? **Edit:** I have 3 columns namely Duration, StartDate and EndDate. Now I want to generate these dates dynamically and fill the values of Duration accourdingly. For eg: if `Duration = 6` and `@start_date = '2013-01-01' and @end_date= '2013-01-05'` I want to generate a list of dates with duration equal to 6. **Update 2:** If you haven't understand my question here's a full detail for it. 1. I have to pass @startDate and @EndDate as parameters 2. I have fields Id, StartDate and EndDate, duration and other fields in my table 3. Now when I pass the parameters the query should generate the range of dates from @startDate and @EndDate 4. As the dates are generated it must check the dates StartDate and EndDate in the table and set the value of Duration to those fields from StartDate to EndDate. ![enter image description here](https://i.stack.imgur.com/bYl97.png) **Required output:** If `@startDate='2013-01-01'` and `@endDate='2013-01-07'` then the output must be like this: ![enter image description here](https://i.stack.imgur.com/SBbsa.png) \*\* ## Note: the Id for both the rows in first image is same. \*\* **Update 3:** ![enter image description here](https://i.stack.imgur.com/xEhEE.png) ![enter image description here](https://i.stack.imgur.com/MswYq.png)
[updated 2013-12-18 11:22 UTC] [updated 2013-12-18 14:19 UTC] [updated 2013-12-19 11:11 UTC] as i did not know the name of the table you are using, i created one named `yeahyeah`. you should replace that with the name of your own table obviously. i have inserted the values you mentioned (fromdate todate duration as shown in your question). the procedure should look like this. i called it `pivotit`. ``` create procedure pivotit (@start_date date, @end_date date) as /* step 1: identify all the data you need to fill the pivoted table. this is achieved by using your daterange generator and joining it with your table. the result is kept in a dynamic table */ declare @acols table (i int, d date, l int); ;WITH DateRange AS ( SELECT @start_date DateValue UNION ALL SELECT dateadd(dd,1,DateValue) FROM DateRange WHERE dateadd(dd,1,DateValue) <= @end_date ) insert into @acols (i, d, l) select id, DateValue, Duration from DateRange join yeahyeah on ( DateRange.DateValue >= yeahyeah.FromDate and DateRange.DateValue <= yeahyeah.ToDate); /* step 2: for pivot you need all the columns that will be adressed. so we create a string with all the distinct dates from the dynamic table. these will then be put into a format like [1], [2], [3], ... to create a dynamic select. */ declare @p varchar(max) = ''; declare @s varchar(max); select @p = @p + ', [' + CONVERT(varchar,d) + ']' from (select distinct d from @acols) a; set @p = SUBSTRING(@p,3,len(@p)-2); /* step 3: create the dynamic select. alas neither the dynamic table nor the parameters are available from inside the dynamic sql. i might try to use bind variables, but was not 100% sure if that would work here. so put in the declares for start_ and end_date from the procedure parameters and build up the dynamic table once more. then i use @p for the pivoted select. this is done by selecting the column for the rows (id) and all the values from the pivot as columns (@p). 
details on the whole pivot thing are here: http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx basically you tell sql-server (explicitly) what you want as the columns (@p), what rows you want (id) and how to aggregate the values in the intersections (sum(l)) [update 2013-12-19] i added a routine that makes a cartesian product that has all the combination of dates and ids in @acols, finds the ones that are missing in @acols and inserts them with duration 0. then the pivoted cells are complete and display the zero instead of NULL. you cannot use isnull or coalesce here since not the value of the cell is NULL, the intersection simply did not exist. */ set @s = ' declare @start_date date = convert(date,'''+CONVERT(varchar,@start_date,112)+''',112); declare @end_date date = convert(date,'''+CONVERT(varchar,@end_date,112)+''',112); declare @acols table (i int, d date, l int); ;WITH DateRange AS ( SELECT @start_date DateValue UNION ALL SELECT dateadd(dd,1,DateValue) FROM DateRange WHERE dateadd(dd,1,DateValue) <= @end_date ) insert into @acols (i, d, l) select id, DateValue, Duration from DateRange join yeahyeah on ( DateRange.DateValue >= yeahyeah.FromDate and DateRange.DateValue <= yeahyeah.ToDate); with cart as ( select distinct a.i , b.d from @acols a join @acols b on 1=1 ) insert into @acols (i, d, l) select cart.i , cart.d , 0 from cart left outer join @acols a on cart.i = a.i and cart.d = a.d where a.i is null; select id, '+@p+' from ( select convert(varchar,d) as d , l , i as id from @acols ) as sourcetable pivot ( sum(l) for d in ('+@p+') ) as pivottable'; execute(@s); ``` after you created the procedure you can do this: ``` exec pivotit @start_date = '2013-01-01', @end_date = '2013-01-31' ``` which will then yield: ![result with 0 and no zeroes](https://i.stack.imgur.com/T6SUk.png)
You can write a query as below: ``` declare @start_date datetime,@end_date datetime ; set @start_date ='2013-01-01' ; set @end_date = '2013-01-05' ; DECLARE @columns NVARCHAR(MAX),@sql NVARCHAR(MAX); SET @columns = N''; WITH DateRange AS ( SELECT @start_date DateValue UNION ALL SELECT DateValue + 1 FROM DateRange WHERE DateValue + 1 <= @end_date ) --Get column names for entire pivoting SELECT @columns += N', ' + QUOTENAME(SpreadCol) FROM (select distinct convert(varchar(10),DateValue,101) as SpreadCol from DateRange ) AS T; PRINT @columns; ``` and then use @columns as spreading columns in dynamic Pivot query.
Generate a range of date and compare with another range of date and pivot it
[ "", "sql", "sql-server", "" ]
I've got a table `Customer (ID, Ref, LastName, FirstName)`. I have to bring out all lines whose reference is the same at least four times. I tried this unsuccessfully : ``` SELECT * from MyTable Where (select count(ref) from MyTable Group By ref) >= 4 ``` Of course it is wrong, but I don't know how to do this in one query.
You need to use `HAVING` clause like below ``` SELECT * from Customer Where ref in (select ref from Customer Group By ref having count(*) >= 4) ``` **[SQL Demo](http://sqlfiddle.com/#!3/405b3/1)**
If I understand correctly, you can use the [`Having`](http://technet.microsoft.com/en-us/library/ms180199.aspx) clause in the query: ``` SELECT Ref, LastName, FirstName from MyTable group by Ref, LastName, FirstName having count(*) >= 4 ``` I have omitted `ID`, since this is possibly a primary key and probably not required here as the grouping wouldn't work. **EDIT**: Since the above query will not return your result...You can also use a join... ``` SELECT * FROM Customer c INNER JOIN ( SELECT ref FROM Customer GROUP BY ref HAVING COUNT(*) >= 4 ) t ON c.ref = t.ref ```
SQL Query, get only the line where a value appear 4 time
[ "", "sql", "t-sql", "" ]
So, while I recognize that date formatting etc. should be done in the presentation layer, I am interested to know if anyone has seen or recognized this difference (please try at home, if so inclined) I am a little baffled and mostly curious, the sample code first. *UPDATE: To clarify based on the initial responses, I am aware the date IS invalid or better "not safe", since the particular field that I am more generally concerned about comes from user input." That is, while I am aware that validation/formatting aren't SQL 2008 strong suits, it is at least curious to me that DATETIME is more forgiving and I am wondering as to cause to see **how forgiving**."* ``` DECLARE @RawValue NVARCHAR(30), @Value DATETIME; SET @RawValue = '01/20.1901' SET @Value = CAST(@RawValue AS DATETIME) PRINT @Value ``` This produces the correct result for my server settings: **Jan 20 1901 12:00AM** However if the penultimate line is changed to (replacing DATETIME with DATE): ``` SET @Value = CAST(@RawValue AS DATE) ``` *Msg 241, Level 16, State 1, Line 8 Conversion failed when converting date and/or time from character string.* Is there an explanation out there? To be clear it doesn't matter if I DECLARE @Value to be a DATE or DATETIME or even an NVARCHAR -- Same result. The error message seems to suggest that it is having trouble converting the date **AND/OR** time, why would DATETIME behave any differently? Thanks,
It is worth mentioning that DATE and DATETIME are completely different datatypes. DATE is not simply DATETIME with the time removed. For example, CAST('17520910' AS DATE) works, while the similar conversion to DATETIME does not. For you history buffs, there was no such day in England or her colonies, the calendar skipped from September 2nd to September 14. Unlike DATETIME, DATE goes back to the year 1 without considering calendar system. Another important difference is the lack of implicit type conversion to add a number of days directly to a date. If D is datetime, D+3 is the date three days hence. If D is DATE, then D+3 produces an error. I am assuming that since new code for implicit conversion was created from scratch for DATE that Microsoft simply made it a tad more fastidious.
Can you try with:

```
DECLARE @RawValue NVARCHAR(30), @Value DATE;
SET @RawValue = '01.20.1901' -- or '01/20/1901'
SET @Value = CAST(@RawValue AS DATE)
PRINT @Value
```
DATE vs. DATETIME casting of invalid dates in SQL SERVER 2008 R2
[ "", "sql", "date", "datetime", "sql-server-2008-r2", "" ]
I am getting a date conversion error when trying to execute SQL from and an Access database against a SQL database. Our machines are all running windows 7 with office 2010. The front end is an accde. There is a search form in the database and all the fields work fine, except the two date fields. The query's where clause looks like this in the front ends VB code ``` "WHERE DrawnDate BETWEEN #" & Format(FromDate, "yyyy-mm-dd") & "# AND #" & _ Format(ToDate, "yyyy-mm-dd") & "#" ``` Based on some research, I have also tried date formats of `dd-mon-yyyy` and `dd-mm-yyyy` with no change in outcome. > The expression On Click you entered as the event property setting produced the following error: ODBC -- call failed. The error received is: > [Microsoft][ODBC SQL Server Driver][SQL Server]Conversion failed when converting date and/ or time from character string. (#241) The other problem is that this only happens on one PC so far and not on mine or others that I have tested.
We had this exact same error on just one machine and solved it by removing, then re-adding our reference to DAO360.DLL (Under Visual Basic Tools->References). It never showed as MISSING. This is definitely not a pass-through (i.e. uses a linked table), as the # date literal termination character is definitely an Access/Jet thing. ODBC Trace showed a change in the generation of the actual SQL sent to SQL Server after the "re-reference".
The question is, whether your query is a Pass-Through-Query or not (the fact that the table is linked is irrelevant). If the query is Pass-Through then you must write it in the SQL-Server dialect, otherwise in Access dialect. Pass-Through is a property of the query, not a property of the table. Another point is whether the date lies within the valid date range of the SQL-Server. These ranges are different for Access and SQL-Server: > **SQL-Server** > *datetime* (January 1, 1753 - December 31, 9999) > *datetime2* (January 1, 0001 to December 31, 9999) > *smalldatetime* (January 1, 1900 to June 6, 2079) > *date* (January 1, 0001 to December 31, 9999) > > **Access** > *Date/Time* (January 1, 100 A.D. -December 31, 9999 A.D.) A date value of 0 (the default value) represents December 30, 1899 in Access and this lies outside of the `smalldatetime` range of SQL-Server. The `Format` function does not work if the value passed to it is a `String`. Make sure the `FromDate` and `ToDate` variables are declared `As Date` in VBA or that they are variants containing a date. `VarType(FromDate)` should return `7` (=`vbDate`) in the latter case. If they are variants they should also not be `Null`.
Date conversion error - MS Access front end querying sql back end
[ "", "sql", "date", "ms-access", "" ]
In a select query I can make-up columns but how can I assign values for them? Example ``` select a.col1, a.col2, 'column3' from A a union select b.col2, b.col3, b.col3 as `column3` from B b ``` I want to assign a default value of `-1` to the `column3` column I made in the first query. Also, I want the title of the column to still be `column3`. is this possible?
Try this

```
select a.col1, a.col2, -1 as column3
from A a
union
select b.col2, b.col3, b.col3
from B b
```

Or this if b.col3 is varchar

```
select a.col1, a.col2, '-1' column3
from A a
union
select b.col2, b.col3, b.col3
from B b
```

If tables A and B have the same values for the three columns, the database will apply a DISTINCT; to avoid that, use UNION ALL.
To create a dummy column and assign values to it we can use

```
select a.col1, a.col2, '-1' as col3 from A a
```
How to assign a values to a column made in a select query
[ "", "mysql", "sql", "" ]
I have boiled down a date query to its fundamental pieces in an effort to make a query easier for me to understand. My trouble is that I am trying to trigger an append event if and only if their anniversary date is equal to today or in the past. My query works except when I get to the criteria of <=Date(). Here is my SQL.

```
SELECT ActiveAssociates_3.UserID
     , ActiveAssociates_3.[WM DOH]
     , DateValue(DateSerial(Year(Date()),Month([WM DOH]),Day([WM DOH]))) AS Anniversary 
FROM ActiveAssociates_3 
GROUP BY ActiveAssociates_3.UserID
       , ActiveAssociates_3.[WM DOH] 
HAVING (((DateValue(DateSerial(Year(Date())
                              ,Month([WM DOH])
                              ,Day([WM DOH]))))<=Date()));
```

The WM DOH field is a ShortDate field. The Anniversary piece is a DateSerial merger of the hire date and month, with the current system clock year. The anniversary populates properly except when I use the <=Date() criteria... I am wanting to only show anniversary dates that are equal to or before today. Thoughts...
I've been experimenting with variations of your `DateSerial` to `Date` comparison. The only way I could trigger *"Data type mismatch in criteria expression"* was with Null in `[WM DOH]`. Exclude any rows where `[WM DOH]` contains Null.
**Assuming your query executes successfully.** I have converted the HAVING clause to a WHERE clause. Please use the following SQL query, @DesertSpider:

```
SELECT ActiveAssociates_3.UserID
     , ActiveAssociates_3.[WM DOH]
     , DateValue(DateSerial(Year(Date()),Month([WM DOH]),Day([WM DOH]))) AS Anniversary 
FROM ActiveAssociates_3 
WHERE (((DateValue(DateSerial(Year(Date())
                             ,Month([WM DOH])
                             ,Day([WM DOH]))))<=Date()))
ORDER BY ActiveAssociates_3.UserID
       , ActiveAssociates_3.[WM DOH];
```
Date Trouble in SQL
[ "", "sql", "ms-access", "" ]
I've searched around for an answer and it seems definitive but I figured I would double check with the Stack Overflow community: Here's what I'm trying to do: ``` INSERT INTO my_table VALUES (a, b, c) RETURNING (SELECT x, y, z FROM x_table, y_table, z_table WHERE xid = a AND yid = b AND zid = c) ``` I get an error telling me I can't return more than one column. It works if I tell it `SELECT x FROM x_table WHERE xid = a`. Is this at all possible in a single query as opposed to creating a seperate `SELECT` query? I'm using PostgreSQL 8.3.
@corvinusz answer was wrong for 8.3 but gave me a great idea that worked so thanks! ``` INSERT INTO my_table VALUES (a, b, c) RETURNING (SELECT x FROM x_table WHERE xid = a), (SELECT y FROM y_table WHERE yid = b), (SELECT z FROM z_table WHERE zid = c) ``` I have no idea why the way it's stated in the question is invalid but at least this works.
Try this. ``` with aaa as ( INSERT INTO my_table VALUES(a, b, c) RETURNING a, b, c) SELECT x, y, z FROM x_table, y_table, z_table WHERE xid = (select a from aaa) AND yid = (select b from aaa) AND zid = (select c from aaa); ``` In 9.3 similar query works.
INSERT INTO ... RETURNING multiple columns (PostgreSQL)
[ "", "sql", "postgresql", "" ]
I have a table `ItemMaster` and a query as below:

```
Select ItemID from ItemMaster Where ItemID IN(1,2,...,100)
```

This query returns all the values from the table which are available. What I want is for the query to return the item IDs which are *not* present in the table. Suppose my table has data as

```
ItemId ItemName
1       x
3       y
4       z
10      a
```

then it must return all the item IDs except 1, 3, 4, 10 from the set of `IN()` values. How can I get those values? Any help would be great. The values are generated dynamically and passed to the query; the query itself is also generated dynamically.
You can't magic values up out of no-where. Whatever you select FROM must contain the value you want. Using the system table `master..spt_value` you can generate a long list of values, and check for numbers that are in there, but not in your table... ``` ;With Numbers as ( select ROW_NUMBER() OVER (ORDER BY number) as n from master..spt_values ) SELECT * FROM Numbers LEFT JOIN yourTable ON Numbers.n = yourTable.ItemId WHERE Numbers.n IN (1,2,...,100) AND yourTable.ItemID IS NULL ``` Or... ``` ;With Numbers as ( select ROW_NUMBER() OVER (ORDER BY number) as n from master..spt_values ) SELECT * FROM Numbers WHERE Numbers.n IN (1,2,...,100) AND NOT EXISTS (SELECT * FROM yourTable WHERE ItemID = Numbers.n) ``` There are hundreds of ways of creating that initial list of numbers. But create it you must. Until you have the list of numbers than *can* exist, you can't select the numbers that *don't* currently exist.
What about this? ``` Select ItemID from ItemMaster Where ItemID NOT IN(1,2,...,100) ```
Get the values which is not present in table?
[ "", "sql", "sql-server", "sql-server-2008", "exists", "" ]
I am using mysql DB server .. I have the following table that consists of only one column with the following data (where 0's separate sorted integers) ``` Number ------- 0 1 2 3 0 4 5 0 6 7 8 0 9 10 0 11 ``` I want to get the first value that comes after each 0 , so e.g. output would be ``` Output ------ 1 4 6 9 11 ```
``` SELECT id1 FROM table WHERE id IN (SELECT t1.id+1 FROM table t1 LEFT JOIN table t2 ON t1.id1=t2.id WHERE t2.id1 IS NULL); ``` [**SQL Fiddle**](http://sqlfiddle.com/#!2/4ab8a/15) Assuming no gaps in the increment field,but since it will be created..
MySQL doesn't guarantee a return order unless you specify a `ORDER BY` clause. If you get the values back in the order you insert them then that's just coincidence. As it stands there's no way to do what you want to do reliably. You need to add something to order the data by. An ID field set to `autoincrement` will probably do.
How can I return the next row of WHERE clause?
[ "", "mysql", "sql", "" ]
There is a postgresql installation on my server that worked fine so far. However now there is a single table (all other tables work fine) which I cannot open through pgadmin3 or drop. I've tried restarting the server. Didn't help. I also tried dropping the table with DROP TABLE from the command line on the server. It's just stuck. I've executed the command and it has been just hanging in the console for the past hour. I don't know what to do. Is there a file I could erase in the data directory perhaps?
Most probably explanation: some other open transaction is holding an exclusive lock on the table. You are using **pgAdmin**, so you can check with [**`Tools -> Server Status`**](http://www.pgadmin.org/docs/dev/status.html). The activity pane lists all current connections. For instance, there is one (or more) listings for every open SQL window. Look for long running connections. You can also try to issue a `DROP TABLE` and check this list. With any luck you'll see what blocks it. Once you have identified the troublemaker and made sure, it's not needed, you might be able to kill the process. Might be vacuuming gone haywire because of bad settings .. That, or something is seriously broken.
You could try taking a dump of the database and see if that works? Also have a look at the <http://www.postgresql.org/docs/9.1/static/runtime-config-logging.html#GUC-CLIENT-MIN-MESSAGES> and log\_min\_messages options. Change that to debug and see what is happening when you try to drop the table.
How to delete a table in postgresql
[ "", "sql", "linux", "postgresql", "ubuntu", "" ]
Here's my database sample: the *orderdetails* table, with ProID relating to the *products* table

```
| ProID | OrderID | OrderQuantity | OrderPrice |
|  93   |   17    |      1        |    150     |
|  16   |   18    |      1        |    100     |
|  93   |   19    |      3        |    450     |
|  93   |   17    |      1        |    150     |
```

*products* table

```
| ProID | ProPicture | ProName | ProPrice |
|  93   |  ./a.jpg   | Iphone  |   150    |
|  16   |  ./b.jpg   | Nokia   |   100    |
```

How can I get the best-selling product information (ProID, ProPicture, ProName, ProPrice) from the *products* table based on the *orderdetails* table?
You can use a SQL join between the tables on ProID, e.g. ``` from products as p inner join orderdetails as od on p.ProID = od.ProID ``` You can use *group by* syntax to ensure you get distinct rows, e.g. ``` group by p.ProID ``` Use aggregation function such as sum, count and avg to find out the totals in a select, e.g. ``` select sum(od.OrderQuantity) as total ``` Use *order by* syntax to show the top answers, e.g. ``` order by sum(od.OrderQuantity) desc ``` Use *limit* to show top n results, e.g. ``` limit 5 ``` Hopefully that gives you enough pointers to formulate the SQL yourself. **notes**, I've given the tables an alias to ensure you don't get conflicts between the column names. You can't reference *total* in your order statement due to the way SQL calculates the dataset. I've used an inner join, but you might want to look into left & right joins.
You can try the following query - ``` select * from products where ProID = (select ProID from (select ProID , sum(OrderQuantity) as total_order, max(sum(OrderQuantity)) over() as maxSm from orderdetails group by ProID ) where total_order = maxSm) ```
How can I query this sql to get the best-selling product
[ "", "mysql", "sql", "" ]
I'm trying to get a list of all citizens of a country and list them in a table with 3 rows. The last row shows 'YES' or 'NO' depending on whether or not the person was born in a certain city. What I did was create two queries: The first query gets all the citizens who were born in the city, and the second query gets all the citizens not born in the city. Then I just do a UNION. THe query runs and I think the results look good. But I was wondering, is this the best way to form a query like this? Thanks ``` --all citizens who were born in CityID = 5 who is in the USA SELECT DISTINCT(CountryCitizenID), Country, 'YES' As Resident FROM DT_Country_CitizenS dtu INNER JOIN FT_City fte ON dtu.CountryCitizenID = fte.CitizenID WHERE cityID = 5 AND Country = 'USA' UNION --all Citizens who were not born in CityID = 5 who is in the USA SELECT DISTINCT(CountryCitizenID), Country, 'NO' As Resident FROM DT_Country_CitizenS dtu WHERE NOT EXISTS ( SELECT CitizenID FROM FT_City fte WHERE dtu.CountryCitizenID = fte.CitizenID AND cityID = 5) AND Country = 'USA' ```
You shouldn't need a `UNION` for this (and generally, when you have two queries that can't possibly intersect, you should use `UNION ALL` which avoids a unique sort (or similar operation)). Also, `DISTINCT` cannot apply to a single column, as your current query seems to imply. I find it unlikely that there are duplicates in the `CountryCitizenID` column, unless it is really poorly named, so I don't think `DISTINCT` is needed at all. Finally, what is the point of pulling the `Country` column at all, since by definition it will be the same in every row? I've changed it to a constant instead of pulling the actual value (this might prevent a lookup or give a better chance at using a skinnier index), but you shouldn't need that column in the output at all - the application should know that, since the `WHERE` clause filters to only rows where `Country = 'USA'`, there is no other value possible. ``` SELECT CountryCitizenID, Country = 'USA', Resident = CASE WHEN EXISTS ( SELECT 1 FROM dbo.FT_City fte WHERE fte.CitizenID = dtu.CountryCitizenID AND fte.cityID = 5 ) THEN 'YES' ELSE 'NO' END FROM dbo.DT_Country_CitizenS AS dtu WHERE dtu.Country = 'USA'; ```
Try this: ``` SELECT DISTINCT(CountryCitizenID), Country, case when fte.CitizenID is not null then 'YES' else 'NO' As Resident FROM DT_Country_CitizenS dtu LEFT JOIN (select * from FT_City where cityID = 5) fte ON (dtu.CountryCitizenID = fte.CitizenID) WHERE Country = 'USA' ```
creating an efficient query with a UNION
[ "", "sql", "sql-server", "t-sql", "" ]
In MS Access, I have a table like below: ``` FY Percent 2015 5% 2016 5% 2017 5% 2018 5% 2019 5% 2020 5% ``` Now I want to add a calculated row and that row should be calculated as shown below: ``` FY Calculated 2015 P * 1 (Multiply the value by 1 for first year) 2016 P * 2015 Calculated value (the above value) 2017 P * 2016 Calculated Value 2018 P * 2017 Calculated Value 2019 P * 2018 Calculated Value 2020 P * 2019 Calculated Value 2021 P * 2020 Calculated Value ``` How do I query that?
There's no way to do this in plain Access' SQL dialect. So, VBA is the choice. You will need to create a table to hold the values. Let's say your output table is something like this:

**Table: `tbl_output`**

```
FY Integer
Calculated Double
```

The VBA code:

```
public sub fillCalculatedValues()
    Dim db as DAO.Database, rsIn as DAO.RecordSet, rsOut as DAO.RecordSet
    Dim strSQL as String
    Dim value as Double, fy as Integer, i as Integer

    ' Connect to the current database
    Set db = currentDb()

    ' Get the values from your input table (let's say it is called tbl_in) in read only mode
    strSQL = "SELECT * FROM tbl_in ORDER BY FY"
    Set rsIn = db.OpenRecordSet(strSQL, dbOpenDynaset, dbReadOnly)

    ' Initialize your output table
    strSQL = "DELETE FROM tbl_output"
    DoCmd.RunSQL strSQL

    ' Open the output table (allowing edits)
    Set rsOut = db.OpenRecordset("tbl_output", dbOpenDynaset, dbEditAdd)

    ' Read the input row by row, and insert a new row in the output
    with rsIn
        .moveFirst
        i = 1
        value = 1
        do
            fy = ![FY] ' Read the field FY from rsIn

            ' Insert the new row in the output table
            rsOut.addNew
            rsOut![FY] = fy
            rsOut![calculated] = value
            rsOut.update

            ' Update the calculated value
            value = value * ![percent]

            ' Advance one row
            .moveNext
        loop until .EOF ' Loop until you get to the end of the table
    end with

    ' Close everything
    rsOut.close
    rsIn.close
    db.close
end sub
```

This is just an example. You should modify it to fit your specific needs. Hope this helps.

---

P.S.:

* A little easter egg for you: [The Ten Commandments of Access](http://access.mvps.org/access/tencommandments.htm)
* If you are willing to move up from Access to a more robust RDBMS, I suggest you take your chances with MySQL. There are some tricks you can use in MySQL to do this sort of thing. 
You can [take a look at this post](https://stackoverflow.com/questions/17664436/cumulative-sum-over-a-set-of-rows-in-mysql) to learn how to make a cumulative sum over a set of rows in MySQL, and fit it to your needs (after all, you are making a cumulative product)
I am by no means sure what you mean, but perhaps

```
SELECT t.FY,
       t.Calculated,
       Nz((SELECT Max(CPercent) FROM tFY WHERE FY < t.FY), 1) * Percent AS [Last Value]
FROM tFY AS t
```

Then, re your comment:

```
SELECT t.FY,
       (SELECT Sum(Percent) FROM tFY WHERE FY <= t.FY) AS [Calc Value]
FROM tFY AS t;
```
How to take a cumulative previous year record?
[ "", "sql", "ms-access", "" ]
I am attempting to select the number of players in a softball database using GROUP BY and HAVING but am not getting the results I expect. Can someone help me understand what is wrong with this query? I expect it is a simple mistake that I am overlooking.

```
SELECT player_id AS 'players', SUM(sb_stats_HR)
FROM `softball_stats` AS 'homers'
GROUP BY 'players'
HAVING 'homers' > 3;
```

Thanks in advance.
This should work. In the original query the single quotes turn `'players'` and `'homers'` into string literals, so the `GROUP BY` and `HAVING` clauses were operating on constants rather than on the column and the aggregate:

```
SELECT player_id AS 'players', SUM(sb_stats_HR) AS 'homers'
FROM softball_stats
GROUP BY player_id
HAVING SUM(sb_stats_HR) > 3;
```
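As a quick sanity check of that shape, here is the same query run against SQLite from Python (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE softball_stats (player_id INTEGER, sb_stats_HR INTEGER)")
conn.executemany("INSERT INTO softball_stats VALUES (?, ?)",
                 [(1, 2), (1, 3), (2, 1), (3, 4), (3, 1)])

# Group on the real column and repeat the aggregate in HAVING.
rows = conn.execute("""
    SELECT player_id, SUM(sb_stats_HR) AS homers
    FROM softball_stats
    GROUP BY player_id
    HAVING SUM(sb_stats_HR) > 3
    ORDER BY player_id
""").fetchall()
```

Players 1 and 3 total 5 home runs each, so both survive the `HAVING` filter; player 2 is dropped.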
Use backticks; you are playing with strings.

```
SELECT player_id AS `players`, SUM(sb_stats_HR)
FROM `softball_stats` AS `homers`
GROUP BY `player_id`
HAVING `softball_stats` > 3;
```

If this doesn't work, edit your post with some sample data.
MySQL query related to GROUP BY and HAVING
[ "", "mysql", "sql", "" ]
Query 1:

```
create or replace procedure toUp(code in number) is
sname staff_master.staff_name%type;
recnotfound exception;
begin
select staff_name into sname from staff_master where staff_code=code;
if sname is NULL then
raise recnotfound;
else
update staff_master set staff_name=upper(staff_name) where staff_code=code;
end if;
exception
when recnotfound then
dbms_output.put_line('Record not found');
end;
```

Query 2:

```
declare
commsn emp.comm%type;
no_comm exception;
begin
select comm into commsn from emp where empno=7369;
if commsn is NULL then
raise no_comm;
else
dbms_output.put_line('Comm is '||commsn);
end if;
exception
when no_comm then
dbms_output.put_line('Commsn for emp doesnt exist');
end;
```

Here in Query 1 I'm checking whether sname is null. However, when I pass an invalid code as a parameter to the procedure, sname should be NULL and hence the exception 'recnotfound' should be raised, but instead it shows the following error:

```
SQL> exec toUp(7369);
BEGIN toUp(7369); END;

*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "LAB06TRG15.TOUP", line 6
ORA-06512: at line 1
```

But when I do the same with Query 2 it works as expected. I guess it has something to do with how varchar2 is checked for null. Am I doing it correctly?
I modified the code as follows:

```
create or replace procedure toUp(code in number) is
sname staff_master.staff_name%type;
recnotfound exception;
begin
select staff_name into sname from staff_master where staff_code=code;
if sname is NULL then
dbms_output.put_line('a');
raise recnotfound;
else
dbms_output.put_line('b');
--update staff_master set staff_name=upper(staff_name) where staff_code=code;
end if;
exception
when recnotfound then
dbms_output.put_line('Record not found');
when no_data_found then
raise recnotfound;
end;
```

I get:

```
BEGIN toUp(7369); END;

*
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "LAB06TRG15.TOUP", line 16
ORA-01403: no data found
ORA-06512: at line 1
```

How do I solve this?

P.S. I want to do this using exceptions only; it's part of an assignment.
If a query returns no rows, then an "ORA-01403: no data found" error is raised. Your expectation, I think, is that execution will continue but no value will have been assigned to the variable; that's not the case.

If what you want to do is check for the existence of a record, then use:

```
select count(*)
into   row_found
from   ...
where  ...
and    rownum = 1;
```

this is guaranteed to return a single row with a value of 0 or 1 into the row_found variable.

With regard to your edit, you are not handling the raising of the user-defined exception in the exception handling block. Wrap the SELECT with a BEGIN-END-EXCEPTION:

```
begin
  begin
    select ...
  exception
    when NO_DATA_FOUND then
      raise recnotfound;
  end;

  if sname is NULL then
    dbms_output.put_line('a');
    raise recnotfound;
  end if;
exception
  when recnotfound then
    dbms_output.put_line('Record not found');
end;
```

I'm not clear what you're trying to do here though. Is sname ever going to be returned as null from the query?
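The existence-check pattern is portable across dialects. Here is a small sketch with Python's SQLite, where `LIMIT 1` stands in for Oracle's `rownum = 1` (table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff_master (staff_code INTEGER, staff_name TEXT)")
conn.execute("INSERT INTO staff_master VALUES (1, 'alice')")

def row_found(code):
    # COUNT(*) always produces exactly one row, so there is no
    # "no data found" case to handle: the result is simply 0 or 1.
    return conn.execute(
        "SELECT COUNT(*) FROM (SELECT 1 FROM staff_master"
        " WHERE staff_code = ? LIMIT 1)",
        (code,)).fetchone()[0]
```

A matching code yields 1, an unknown code yields 0, and neither case raises an error.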
Actually, the exception happens even before your IF statement. If the SELECT INTO statement doesn't return a row, ORA-01403 is thrown. You might expect that in this situation a NULL value is assigned to the variable, but that is not the case; an exception is thrown instead. You must add exception handling to your stored procedure to get past it. Documentation on how to do that can be found [here](http://docs.oracle.com/cd/E13085_01/doc/timesten.1121/e13076/exceptions.htm)

Sorry, I don't have Oracle at hand right now, so I can't check it, but it should be something like this:

```
...
select staff_name into sname from staff_master where staff_code=code;
exception
when NO_DATA_FOUND then ...handle no data...;
when TOO_MANY_ROWS then ...handle too many data rows...;
...
```
ORACLE PL/SQL check whether string is NULL
[ "", "sql", "oracle", "plsql", "" ]
My tables are structured like this (there are more values in the tables but I only wrote the ones relevant to this):

```
Department(dep_id, dep_name)
Employee(dep_id)
```

I need to display dep\_name and the number of employees in every department, except one specific department (let's call it DepX), and only the departments with more than one employee. I tried multiple methods to solve this but none of them worked.

Some methods I tried:

```
SELECT department.dep_name, COUNT(employee.dep_id) AS NumberOfEmployees
FROM employee INNER JOIN department ON employee.dep_id=department.dep_id
WHERE dep_name<>'DepX'
GROUP BY dep_id
HAVING COUNT(employee.dep_id) > 1;

SELECT dep_name
FROM department
WHERE dep_name <>'DepX'
UNION
SELECT COUNT(*)
FROM employee
WHERE COUNT(*) > 1
GROUP BY dep_id;
```

I can't figure this out. Thanks!
The first example does not work because you're including dep\_name in your results without an aggregation but not grouping on it. You can either use the department name in your grouping instead of the ID:

```
SELECT department.dep_name, COUNT(employee.dep_id) AS NumberOfEmployees
FROM employee INNER JOIN department ON employee.dep_id=department.dep_id
WHERE dep_name<>'DepX'
GROUP BY department.dep_name
HAVING COUNT(employee.dep_id) > 1;
```

or do the `COUNT` in a subquery:

```
SELECT department.dep_name, e.NumberOfEmployees
FROM department
INNER JOIN (SELECT dep_id, COUNT(*) NumberOfEmployees
            FROM employee
            GROUP BY dep_id
            HAVING COUNT(dep_id) > 1
           ) e ON department.dep_id = e.dep_id
WHERE dep_name<>'DepX'
```
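A quick way to convince yourself the subquery version behaves as intended is to run it against SQLite (the sample departments and employees are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (dep_id INTEGER, dep_name TEXT);
    CREATE TABLE employee (dep_id INTEGER);
    INSERT INTO department VALUES (1, 'Sales'), (2, 'DepX'), (3, 'IT');
    INSERT INTO employee VALUES (1), (1), (2), (2), (3);
""")

rows = conn.execute("""
    SELECT d.dep_name, e.NumberOfEmployees
    FROM department d
    INNER JOIN (SELECT dep_id, COUNT(*) AS NumberOfEmployees
                FROM employee
                GROUP BY dep_id
                HAVING COUNT(dep_id) > 1) e
        ON d.dep_id = e.dep_id
    WHERE d.dep_name <> 'DepX'
""").fetchall()
```

Only Sales survives: DepX is excluded by name and IT (a single employee) by the `HAVING` filter.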
```
SELECT department.dep_name, COUNT(employee.dep_id) AS NumberOfEmployees
FROM employee
INNER JOIN department ON employee.dep_id=department.dep_id
WHERE department.dep_name not in('DepX')
GROUP BY department.dep_name
HAVING COUNT(employee.dep_id) > 1;
```

**Update the table aliases to suit your needs.**
SELECT specific information
[ "", "sql", "database", "oracle", "" ]
I am trying to filter a SELECT statement. The behavior I want is that if the parameter @productId = % then I want the WHERE clause to do the following:

```
WHERE (DTH.TaskId IS NULL OR PDT.PK_Product LIKE @productId)
```

If @productId does not = % I want the following WHERE clause:

```
WHERE (PDT.PK_Product = @productId)
```

I am having trouble achieving this. I have tried case statements but couldn't get it to make sense. I have also tried:

```
WHERE IF @productId = '%'
(
DTH.TaskId IS NULL
OR PDT.PK_Product LIKE @productId
)
ELSE
(
PDT.PK_Product = @productId
)
```

This is causing syntax errors. How do I achieve the desired results?
T-SQL has no `IF` inside a `WHERE` clause, but you can express both branches with `AND`/`OR`:

```
WHERE (@productId = '%' AND (DTH.TaskId IS NULL OR PDT.PK_Product LIKE @productId))
   OR (NOT @productId = '%' AND (PDT.PK_Product = @productId))
```
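The branching predicate is easy to verify with SQLite; for illustration the two joined tables are collapsed into one small table here, so the names and rows are stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (PK_Product TEXT, TaskId INTEGER);
    INSERT INTO products VALUES ('A1', NULL), ('A2', 7), ('B1', 7);
""")

def fetch(product_id):
    # Same AND/OR structure as the T-SQL predicate above,
    # with ? placeholders standing in for @productId.
    return conn.execute("""
        SELECT PK_Product FROM products
        WHERE (?  = '%' AND (TaskId IS NULL OR PK_Product LIKE ?))
           OR (? <> '%' AND PK_Product = ?)
        ORDER BY PK_Product
    """, (product_id,) * 4).fetchall()
```

With `'%'` the wildcard branch returns every row; with a concrete id only the exact match comes back.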
``` --assumes -1 and 0 are invalid TaskIds WHERE PDT.PK_Product LIKE @productId OR coalesce(DTH.TaskId, -1) = case when @productId = '%' THEN -1 ELSE coalesce(DTH.TaskID,0) END ```
WHERE statement based on variable value
[ "", "sql", "sql-server", "" ]
Using MySQL, I'm trying to make a timestamp column from a date column and a time column. If the date or time column contains a NULL value, MySQL automatically sets the corresponding value in the `TIMESTAMP` column to the [`CURRENT_TIMESTAMP`](https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_current-timestamp). Is there a way to make it default to `NULL` instead of `CURRENT_TIMESTAMP`? Something like:

```
ALTER TABLE customers ADD s_timestamp TIMESTAMP;

UPDATE customers
SET s_timestamp = timestamp(s_date,s_time)
DEFAULT NULL;
```
Use this query to add your column:

```
ALTER TABLE `customers`
ADD COLUMN `s_timestamp` TIMESTAMP NULL DEFAULT NULL;
```

And if you want to get the current timestamp on update only:

```
ALTER TABLE `customers`
ADD COLUMN `s_timestamp` TIMESTAMP NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP;
```
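The NULL-propagating behaviour being asked for can be demonstrated with SQLite, using plain TEXT columns as stand-ins for MySQL's date/time types: concatenating with NULL yields NULL, so the incomplete row keeps a NULL timestamp instead of being replaced by the current time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (s_date TEXT, s_time TEXT,
                            s_timestamp TEXT DEFAULT NULL);
    INSERT INTO customers (s_date, s_time) VALUES
        ('2013-05-15', '10:30:00'),
        (NULL,         '10:30:00');
""")

# NULL || anything is NULL, so the second row stays NULL.
conn.execute("UPDATE customers SET s_timestamp = s_date || ' ' || s_time")
stamps = [r[0] for r in
          conn.execute("SELECT s_timestamp FROM customers ORDER BY rowid")]
```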
To add to the accepted answer, in case someone is looking to modify an existing column that was created defaulting to `CURRENT_TIMESTAMP`:

```
alter table `customers` modify `s_timestamp` timestamp null
```

The column will no longer default to `CURRENT_TIMESTAMP`.
MySQL TIMESTAMP to default NULL, not CURRENT_TIMESTAMP
[ "", "mysql", "sql", "timestamp", "" ]
I have a data set with several hierarchical variables: Region, State, County, City, District.

There is a series of variables which will be counted, summed, etc. for each combination of the above variables. This is simple enough with a basic proc sql, EXCEPT that the output file needs to include a row for the totals at each level. So if there are 4 Districts for a particular City, there would be 5 rows, for example. One way of generating the fifth row would be something like this:

```
proc sql;
create table district_sum as
select Region, State, County, City, 'All Districts' as District,
bla, bla, bla...
```

This would give me totals at the City level, and then I could repeat this process for each level. But I am thinking there must be a better way to do this than with a series of similar SQL steps.
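The per-level UNION ALL idea sketched above can be checked outside SAS with just two levels of the hierarchy; here is an illustration using SQLite from Python (table, column names, and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE survey (City TEXT, District TEXT, n INTEGER);
    INSERT INTO survey VALUES
        ('Springfield', 'North', 3), ('Springfield', 'South', 5),
        ('Shelbyville', 'East', 2);
""")

# One branch per level of detail: district rows, then a
# city-level subtotal labelled 'All Districts'.
rows = conn.execute("""
    SELECT City, District, SUM(n) FROM survey GROUP BY City, District
    UNION ALL
    SELECT City, 'All Districts', SUM(n) FROM survey GROUP BY City
    ORDER BY City, District
""").fetchall()
```

Each city gets its district rows plus one 'All Districts' subtotal row, so three detail rows become five output rows.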
You can use PROC TABULATE to easily generate reports with various nested variables and subtotals:

```
PROC TABULATE data=yourdata;
CLASS Region State County City District;
VAR Sales;
TABLE Region*(State*(County*(City*(District ALL) ALL) ALL) ALL) ALL, Sales*sum;
RUN;
```
Try PROC SUMMARY. ``` proc summary data=foo; class region state county city; var bar; output out=outData sum=sum; run; ```
SAS: Summing to different levels
[ "", "sql", "sas", "" ]
I have a query which looks something like this:

```
select tbl1.col1, tbl1.col2, tbl2.col4, tbl2.col7
from server_1.database_a.dbo.table_1 tbl1
inner join server_2.database_c.dbo.table_2 tbl2
on tbl1.col_id = tbl2.col_id
```

This query runs fine (I had to create a linked server to link the 2 servers for the query to work) when I manually execute it from within `Microsoft SQL Server Management Studio`. But when I try to create a SQL job which executes this query once every day, I get the following error message and the query does not execute:

> Executed as user: NT AUTHORITY\SYSTEM. The object name 'server\_1.database\_a.dbo.table\_1' contains more than the maximum number of prefixes. The maximum is 2. [SQLSTATE 42000] (Error 117). The step failed.
Try this:

```
use database_a;

select tbl1.col1, tbl1.col2, tbl2.col4, tbl2.col7
from dbo.table_1 tbl1
inner join server_2.database_c.dbo.table_2 tbl2
on tbl1.col_id = tbl2.col_id
```
server\_1.database\_1.dbo.table\_1 is not the same as what you wrote first. It's server\_1.database\_a.dbo.table\_1. Maybe that is the reason.
Can't run SQL query via SQL job
[ "", "sql", "sql-server-2008", "sql-job", "" ]
After many trials and tribulations I was able to write the following query.

```
SELECT repName.repID, repName.Rep_Name, Positions.Position, repName.Roster, DataSheet.ENTERED
FROM DataSheet
INNER JOIN repName ON DataSheet.EE_ID = repName.repID
INNER JOIN Positions ON repName.Job_Code = Positions.Job_Code
WHERE (DataSheet.ENTERED <= @ENTEREDEnd)
GROUP BY repName.repID, repName.Rep_Name, Positions.Position, repName.Roster,
DataSheet.ENTERED, DataSheet.ITEM_ASSIGNED
HAVING (DataSheet.ENTERED >= @ENTEREDStart) AND (DataSheet.ITEM_ASSIGNED = @ITEM_ASSIGNED)
AND (repName.Roster = N'YES')
ORDER BY DataSheet.ENTERED
```

I am still feeling my way around SQL, and wanted to know if this query could be modified so that it only returns the top value or row 1. I want to send that value to a text box on my form.

Thank you
To return the top N rows, just use the following:

```
SELECT TOP (1) /* rest of your query*/
```

You can also use PERCENT with your TOP clause, something like this:

```
SELECT TOP (10) PERCENT /* rest of your query*/
```

If there are ties within the TOP N values and you want all of the tied rows as well, you can use something like:

```
SELECT TOP (N) WITH TIES /* rest of your query*/
```
**If SQL Server CE is used, you must enclose the number in parentheses: TOP(N).**

Using TOP N or LIMIT N in your query will return the oldest row, because you order them using

```
ORDER BY DataSheet.ENTERED
```

If you wish to retrieve the latest row instead, make your sorting descending:

```
ORDER BY DataSheet.ENTERED desc
```
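The sort direction decides which row comes back first. Here is a quick check with SQLite, where `ORDER BY ... LIMIT 1` plays the role of `TOP (1)` (the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE datasheet (rep TEXT, entered TEXT);
    INSERT INTO datasheet VALUES
        ('a', '2013-01-01'), ('b', '2013-06-01'), ('c', '2013-03-01');
""")

# Ascending order gives the oldest row, descending gives the latest.
oldest = conn.execute(
    "SELECT rep FROM datasheet ORDER BY entered ASC LIMIT 1").fetchone()[0]
latest = conn.execute(
    "SELECT rep FROM datasheet ORDER BY entered DESC LIMIT 1").fetchone()[0]
```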
Return top record (or row 1) only from query
[ "", "sql", "sql-server", "" ]
I have one table which has 20 default columns, named D1, D2, D3, ..., D20. Now, with a select query, I want to add further columns dynamically, e.g. D21, D22, ..., D31. How can I write a query that adds these columns dynamically with an incremented suffix? The maximum limit is 31. Please help.

Default table columns:

```
D1 D2 D3 D4 D5 D6 D7 D8 D9 D10 D11 D12 D13 D14 D15 D16 D17 D18 D19 D20
```

Now I want to add columns in continuity to D20, i.e. D21 and so on up to D31, so that while selecting, the other columns (D21 to D31) also get added and selected. My query is

```
select * ,' ' as D21 from tbl1
```

This gives me all 20 columns up to D20 and one additional column D21. I want this up to D31 without hard-coding the other column names.
Try the code below:

```
declare @a int = 21
declare @sql varchar(max) = 'select *,'

while @a < 32
begin
    set @sql = @sql + cast(@a as varchar) + ' as D' + cast(@a as varchar) + ' ,'
    set @a = @a + 1
end

set @sql = substring(@sql, 0, len(@sql) - 1) + ' from tbl1'
exec(@sql)
```
Using dynamic SQL, you can do something like this: ``` DECLARE @SQL AS NVARCHAR(MAX) = 'select *'; WITH Numbers (N) AS ( SELECT ROW_NUMBER() OVER (ORDER BY object_id) N FROM sys.all_objects ) SELECT @SQL = @SQL + ', '' '' as D' + CAST(N AS NVARCHAR) FROM Numbers WHERE N BETWEEN 21 AND 31; SET @SQL = @SQL + ' from tbl1'; EXEC sp_executesql @SQL ```
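The same string-building idea carries over to any host language. Here is a sketch in Python against SQLite, with a two-column `tbl1` standing in for the real twenty-column table:

```python
import sqlite3

# Build the D21..D31 column list with ordinary string code, the same
# idea as the T-SQL loop / ROW_NUMBER() trick above.
cols = ", ".join("' ' AS D{}".format(n) for n in range(21, 32))
sql = "SELECT *, " + cols + " FROM tbl1"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (D1 TEXT, D2 TEXT)")  # stand-in columns
conn.execute("INSERT INTO tbl1 VALUES ('x', 'y')")
cur = conn.execute(sql)
names = [d[0] for d in cur.description]  # resulting column names
```

The result set carries the original columns plus the eleven generated blank columns D21 through D31.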
add column dynamically with select query
[ "", "sql", "sql-server", "sql-server-2008", "dynamic-queries", "" ]
I am doing the following query,

```
SELECT 'Payment Token,Transaction ID,Create Date' FROM dual
UNION ALL
SELECT DISTINCT pt.mw_payment_token ||','|| t.mw_transaction_id ||','|| t.create_date
FROM t_mw_payment_token pt, t_mw_transaction t
where pt.mw_payment_token = t.mw_payment_token
order by t.mw_transaction_id;
```

Error is:

```
ORA-00904: "T"."MW_TRANSACTION_ID": invalid identifier
00904. 00000 -  "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 5 Column: 103
```

Why is the column `mw_transaction_id` invalid?

**Edit:**

t\_mw\_transaction table

```
CREATE TABLE WFMW.T_MW_TRANSACTION
(
MW_PAYMENT_TOKEN VARCHAR2(40 BYTE) NOT NULL,
CREATE_DATE DATE DEFAULT NULL,
MW_TRANSACTION_ID NUMBER NOT NULL,
TRANSACTION_TYPE NUMBER(*, 0),
CONSTRAINT XPK_T_MW_TRANSACTION PRIMARY KEY ( MW_TRANSACTION_ID ) ENABLE
)
```

mw\_payment\_token table

```
CREATE TABLE WFMW.T_MW_PAYMENT_TOKEN
(
MW_PAYMENT_TOKEN VARCHAR2(40 BYTE) NOT NULL ,
CREATED_BY VARCHAR2(50 BYTE) ,
MODIFIED_BY VARCHAR2(50 BYTE) ,
CREATE_DATE DATE ,
MODIFIED_DATE DATE ,
TOKENIZED_PAN VARCHAR2(20 BYTE) ,
CONSTRAINT XPKT_MW_PAYMENT_TOKEN PRIMARY KEY ( MW_PAYMENT_TOKEN ) ENABLE
)
```
Here is the solution: ``` SELECT 'Payment Token,Transaction ID,Create Date' FROM dual UNION ALL SELECT mw_payment_token ||','|| mw_transaction_id ||','|| create_date FROM (SELECT DISTINCT pt.mw_payment_token, t.mw_transaction_id, t.create_date FROM t_mw_payment_token pt, t_mw_transaction t WHERE pt.mw_payment_token = t.mw_payment_token ORDER BY t.mw_transaction_id); ```
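The scoping rule behind this can be reproduced in SQLite: once rows are combined with `UNION ALL`, the `ORDER BY` sees only the output columns, so a table-qualified column fails much as it does in Oracle, while a plain single SELECT can sort by it freely (the table and rows below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (mw_transaction_id INTEGER, token TEXT);
    INSERT INTO t VALUES (2, 'bbb'), (1, 'aaa');
""")

# In a compound SELECT, ORDER BY may only name output columns,
# so the base-table column is rejected.
union_order_failed = False
try:
    conn.execute("""
        SELECT 'header' AS line
        UNION ALL
        SELECT token || ',' || mw_transaction_id FROM t
        ORDER BY mw_transaction_id
    """)
except sqlite3.OperationalError:
    union_order_failed = True

# In a simple (non-compound) SELECT the same ORDER BY is fine.
rows = conn.execute("""
    SELECT token || ',' || mw_transaction_id AS line
    FROM t
    ORDER BY mw_transaction_id
""").fetchall()
```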
Your problem is the `ORDER BY` clause: it applies globally to the whole compound query, so the alias `t.mw_transaction_id` is not recognized there. Same problem as in:

```
select 'a' from dual
union all
select dummy from dual
order by dummy;
```

You can fix that by giving the columns in both queries the same alias:

```
select 'a' some_col from dual
union all
select dummy some_col from dual
order by some_col;
```

But I don't think that is what you want to achieve.

**Edit**: I am not sure what you want to achieve with your query, but maybe this gives you the desired result:

```
SELECT DISTINCT pt.mw_payment_token "Payment Token",
t.mw_transaction_id "Transaction ID",
t.create_date "Create Date"
FROM t_mw_payment_token pt, t_mw_transaction t
where pt.mw_payment_token = t.mw_payment_token
order by "Transaction ID";
```

**Edit2**: If you want your output as CSV, maybe check [here](https://forums.oracle.com/thread/615034) or [here](https://forums.oracle.com/thread/615034) or [here](https://stackoverflow.com/questions/3750192/how-to-write-to-files-using-utl-file-in-oracle). Maybe consider asking another question here on SO. My suggestion would be to use the `UTL_FILE` package, iterating over a cursor that is based on my first edit, but I have not tried that so far.
Error in where clause PL/SQL query selecting from dual
[ "", "sql", "oracle", "" ]