Concepts Banking Example

```
branch (branch_name, branch_city, assets)
customer (customer_name, customer_street, customer_city)
account (account_number, branch_name, balance)
loan (loan_number, branch_name, amount)
depositor (customer_name, account_number)
borrower (customer_name, loan_number)
```

Find all customers who have an account at all branches located in Brooklyn.

```
select distinct S.customer-name
from depositor as S
where not exists (
    (select branch-name
     from branch
     where branch-city = 'Brooklyn')
    except
    (select R.branch-name
     from depositor as T, account as R
     where T.account-number = R.account-number
       and S.customer-name = T.customer-name)
)
```

I find the following query difficult to understand. Will someone please explain the logic behind it?
So you're selecting the `customer-name` from the `depositor` table where:

1. You're getting the `branch-name` of all branches in Brooklyn: `(select branch-name from branch where branch-city = 'Brooklyn')`
2. You're getting all of the `branch-name`s in Brooklyn at which that particular customer has an account: `(select R.branch-name from depositor as T, account as R where T.account-number = R.account-number and S.customer-name = T.customer-name)`
3. You're `EXCEPT`ing (subtracting) the values in 2 from 1. For instance, if "Best branch" was in 2, it would be removed from the result of 1 (if it was also there).
4. You're applying `NOT EXISTS` to the result of 3. That is, you expect that there will be nothing left after the subtraction in step 3. All of the branches from 1 must also have been in 2 for the customer to have his/her name printed out.

**Examples**

Suppose there were three branches in Brooklyn: "Best", "Average" and "Worst".

*Bob* goes to "Best" and "Average".

1. All branches: "Best", "Average", "Worst"
2. Branches that Bob goes to: "Best", "Average"
3. 1 - 2: "Best", "Average", "Worst" minus "Best", "Average". This leaves us only with "Worst"
4. `NOT EXISTS` - hold on a second, we still have "Worst"! Something exists! That means this condition evaluates to `false`

Bob is NOT selected.

---

*Mary* goes to "Best", only.

1. All branches: "Best", "Average", "Worst"
2. Branches that Mary goes to: "Best"
3. 1 - 2: "Best", "Average", "Worst" minus "Best". This leaves us with "Average", "Worst"
4. `NOT EXISTS` - hold on a second, we still have "Average" and "Worst"! Something exists! That means this condition evaluates to `false`

Mary is NOT selected.

---

*Busy person* goes to "Best", "Average" and "Worst".

1. All branches: "Best", "Average", "Worst"
2. Branches that Busy person goes to: "Best", "Average", "Worst"
3. 1 - 2: "Best", "Average", "Worst" minus "Best", "Average", "Worst". This leaves us with... nothing
4. `NOT EXISTS` - that's right! Nothing exists, so the condition evaluates to `true`

Busy person **is** selected.
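The walkthrough above is easy to check mechanically. Here is a minimal sketch that runs the same relational-division query against an in-memory SQLite database via Python's `sqlite3`, using the Bob/Mary/Busy data. The account numbers are made up, and the identifiers use underscores instead of the textbook's hyphens because SQLite won't accept hyphenated names unquoted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE branch (branch_name TEXT, branch_city TEXT);
CREATE TABLE account (account_number TEXT, branch_name TEXT);
CREATE TABLE depositor (customer_name TEXT, account_number TEXT);

INSERT INTO branch VALUES ('Best','Brooklyn'), ('Average','Brooklyn'), ('Worst','Brooklyn');
INSERT INTO account VALUES ('A1','Best'), ('A2','Average'), ('A3','Worst');
-- Bob: Best + Average; Mary: Best; Busy: all three branches
INSERT INTO depositor VALUES
  ('Bob','A1'), ('Bob','A2'),
  ('Mary','A1'),
  ('Busy','A1'), ('Busy','A2'), ('Busy','A3');
""")

rows = con.execute("""
SELECT DISTINCT S.customer_name
FROM depositor AS S
WHERE NOT EXISTS (
    SELECT branch_name FROM branch WHERE branch_city = 'Brooklyn'
    EXCEPT
    SELECT R.branch_name
    FROM depositor AS T
    JOIN account AS R ON T.account_number = R.account_number
    WHERE S.customer_name = T.customer_name
)
""").fetchall()

print(rows)  # only the customer with accounts at every Brooklyn branch
```

Only "Busy" survives, exactly as the step-by-step examples predict.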
Note: it will probably be easier to understand queries if you format them so they read more easily.

```
-- get the unique customer names
select distinct S.[customer-name]
-- from all depositors
from depositor as S
-- where there are no Brooklyn branches that don't have an account for that customer
where not exists (
    -- get all Brooklyn branches
    (select [branch-name]
     from branch
     where [branch-city] = 'Brooklyn')
    except
    -- except removes from the set above anything in the set below
    -- get all Brooklyn branches that have an account for the correlated depositor
    (select R.[branch-name]
     from depositor as T
        , account as R
     where
        -- this is the join condition, which I would prefer to see written as ON rather than put in the WHERE
        T.[account-number] = R.[account-number]
        -- this correlates the subquery to the depositor
        and S.[customer-name] = T.[customer-name])
)
```
advanced sql query using join
[ "sql", "database" ]
I have a table which contains sensor data `data1` and the allowed (alarm) limits for that sensor data, `data1high` and `data1low`. I wish to create a view listing only those `data1` values which are the first to move outside the limits (i.e. an alarm condition) or the first to move back within "safe" limits (alarm condition no longer present).

Here is a typical table:

```
| id | data1 | data1high | data1low |
|----|-------|-----------|----------|
| 1  | 60    | 200       | 100      |
| 2  | 80    | 200       | 100      |
| 3  | 123   | 200       | 100      |
| 4  | 150   | 200       | 100      |
| 5  | 60    | 200       | 100      |
| 6  | 60    | 200       | 100      |
| 7  | 150   | 200       | 100      |
| 8  | 40    | 200       | 100      |
| 9  | 58    | 200       | 100      |
| 10 | 62    | 200       | 100      |
| 11 | 300   | 200       | 100      |
```

The logic is that values matching `where data1 < data1low OR data1 > data1high` are in the alarm condition and should be listed. For example,

```
| id | data1 |
|----|-------|
| 1  | 60    |
| 2  | 80    |
| 5  | 60    |
| 6  | 60    |
| 8  | 40    |
| 9  | 58    |
| 10 | 62    |
| 11 | 300   |
```

The table above shows all the values in the alarm state. I do not want this; I only want the values that have just transitioned into that state, plus the first values where `data1` goes back within safe limits. So my ideal view would be:

```
| id | data1 | data1high | data1low |
|----|-------|-----------|----------|
| 1  | 60    | 200       | 100      |
| 3  | 123   | 200       | 100      |
| 5  | 60    | 200       | 100      |
| 7  | 150   | 200       | 100      |
| 8  | 40    | 200       | 100      |
| 11 | 300   | 200       | 100      |
```

id 1 is in the alarm state so it is listed, id 2 is omitted because it was still in the alarm state, id 3 is listed because it is the next value to be back within limits, id 4 is omitted because it is still within limits, id 5 is listed because it is back outside limits, etc ...
You can use a [Recursive CTE](https://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) to iterate over the rows and compare each row to the previous one, applying the logic of what you classify as a transition.

Looking at your desired output though, I don't think `id = 10` should appear in the list as it hasn't transitioned.

Here's a sample that you can run in isolation to test:

```
CREATE TABLE #Data1
    (
      [id] INT ,
      [data1] INT ,
      [data1high] INT ,
      [data1low] INT
    );

INSERT INTO #Data1
        ( [id], [data1], [data1high], [data1low] )
VALUES  ( 1, 60, 200, 100 ),
        ( 2, 80, 200, 100 ),
        ( 3, 123, 200, 100 ),
        ( 4, 150, 200, 100 ),
        ( 5, 60, 200, 100 ),
        ( 6, 60, 200, 100 ),
        ( 7, 150, 200, 100 ),
        ( 8, 40, 200, 100 ),
        ( 9, 58, 200, 100 ),
        ( 10, 62, 200, 100 ),
        ( 11, 300, 200, 100 );

WITH cte
  AS ( SELECT TOP 1
              id ,
              data1 ,
              data1high ,
              data1low ,
              CASE WHEN data1 < data1low OR data1 > data1high THEN 1
                   ELSE 0
              END AS Transitioned
       FROM   #Data1
       ORDER BY id
       UNION ALL
       SELECT #Data1.id ,
              #Data1.data1 ,
              #Data1.data1high ,
              #Data1.data1low ,
              CASE WHEN cte.data1 < cte.data1low
                        AND #Data1.data1 < #Data1.data1low THEN 0
                   WHEN cte.data1 > cte.data1high
                        AND #Data1.data1 < #Data1.data1high THEN 0
                   WHEN cte.data1 BETWEEN cte.data1low AND cte.data1high
                        AND #Data1.data1 BETWEEN #Data1.data1low AND #Data1.data1high THEN 0
                   WHEN cte.Transitioned = 1
                        AND #Data1.data1 BETWEEN #Data1.data1low AND #Data1.data1high THEN 1
                   ELSE 1
              END AS Transitioned
       FROM   #Data1
              INNER JOIN cte ON cte.id + 1 = #Data1.id
     )
SELECT *
FROM   cte
WHERE  cte.Transitioned = 1

DROP TABLE #Data1
```

Within the CTE, a column is added to mark rows that have transitioned. The `CASE WHEN` clauses contain what I can gauge as the logic you require to assess whether a transition has taken place compared to the previous row.
At the end of the CTE, you can simply select all rows where `Transitioned = 1`, to produce:

```
id  data1  data1high  data1low  Transitioned
1   60     200        100       1
3   123    200        100       1
5   60     200        100       1
7   150    200        100       1
8   40     200        100       1
11  300    200        100       1
```

## [Working Demo SQL Fiddle](http://sqlfiddle.com/#!6/4f944/1)
If you're using SQL Server 2012, you can use the `LAG` function.

If I understood your problem correctly, you want to get all records that changed from being in alarm to not being in alarm, or vice versa. If that is the case, shouldn't 11 be excluded from the result set? The last change is at 8. The records in between (9 and 10) are still in alarm, and so is 11, so it should not be included.

```
WITH CteAlarm AS(
    SELECT *,
        alarm = CASE
                    WHEN data1 < data1low OR data1 > data1high THEN 1
                    ELSE 0
                END
    FROM test
),
Cte AS(
    SELECT *,
        prevAlarm = LAG(alarm) OVER(ORDER BY id)
    FROM CteAlarm
)
SELECT * FROM Cte
WHERE
    alarm <> prevAlarm
    OR (prevAlarm IS NULL AND alarm = 1)
```

[**SQL Fiddle**](http://sqlfiddle.com/#!6/8dbb6/2/0)
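As a sanity check, the same `LAG` approach can be run against SQLite (which also supports window functions, version 3.25+) from Python, using the table from the question. Consistent with the reasoning above, id 11 does not appear in the output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (id INT, data1 INT, data1high INT, data1low INT);
INSERT INTO test VALUES
 (1,60,200,100),(2,80,200,100),(3,123,200,100),(4,150,200,100),
 (5,60,200,100),(6,60,200,100),(7,150,200,100),(8,40,200,100),
 (9,58,200,100),(10,62,200,100),(11,300,200,100);
""")

ids = [r[0] for r in con.execute("""
WITH CteAlarm AS (
    SELECT *, CASE WHEN data1 < data1low OR data1 > data1high
                   THEN 1 ELSE 0 END AS alarm
    FROM test
),
Cte AS (
    SELECT *, LAG(alarm) OVER (ORDER BY id) AS prevAlarm
    FROM CteAlarm
)
SELECT id FROM Cte
WHERE alarm <> prevAlarm OR (prevAlarm IS NULL AND alarm = 1)
ORDER BY id
""")]

print(ids)  # ids where the alarm state flipped relative to the previous row
```

The result is `[1, 3, 5, 7, 8]`, matching the desired output except for id 10 and id 11, which never transitioned.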
SQL conditional Where with limits and previous value
[ "sql", "sql-server" ]
My problem is similar to this one: [how to maintain order while doing join in sql](https://stackoverflow.com/questions/5743373/how-to-maintain-order-while-doing-join-in-sql) Basically I have a table parameter in a stored procedure that contains some IDs in a particular order. I want to use those IDs in an inner join and return results based on the order of the input table-parameter. So, I must find a way to build a new table that contains a rank together with the IDs. From what I see on similar posts, there is no way to achieve this in SQL, unless I also feed some extra rank column from the outside non-SQL world (inside my table parameter)? This seems SO alien to me... Is that really the situation? **UPDATE:** (Obviously, I can't use ROW\_NUMBER() or RANK() since those require sorting by a column and I only have IDs in my table) Basically, my input table is in this form: ``` CREATE TYPE [dbo].[IdTable] AS TABLE( [Id] [int] NOT NULL ) ``` And it is fed as parameter to my SP.
I have tried using IDENTITY on a column, but it seems that the DataTable class that I used didn't allow inserting rows with fewer columns than the table had (so I could not omit the IDENTITY column when inserting data and let it auto-increment). What I did instead was provide the rank column values from C# code and order by that column in my SP. Not the best solution I could imagine, but at least it works.
1. Add an `IDENTITY` column to your input table type; it will be populated automatically in the physical order the rows are inserted.
2. Use this column to sort after the join.
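A rough illustration of the idea, using SQLite's `INTEGER PRIMARY KEY AUTOINCREMENT` as a stand-in for SQL Server's `IDENTITY` (the table and data are made up, and note that in SQL Server identity assignment order is only reliable when rows are inserted one at a time or with an `ORDER BY` on the insert):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ids (rn INTEGER PRIMARY KEY AUTOINCREMENT, id INT);
CREATE TABLE items (id INT, name TEXT);
INSERT INTO items VALUES (1,'one'), (2,'two'), (3,'three');
""")

# insert in the caller's desired order: 3, 1, 2
con.executemany("INSERT INTO ids (id) VALUES (?)", [(3,), (1,), (2,)])

names = [r[0] for r in con.execute("""
SELECT items.name
FROM ids JOIN items ON items.id = ids.id
ORDER BY ids.rn
""")]
print(names)  # ['three', 'one', 'two'] - the caller's order is preserved
```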
sql server - inserting rank column in SP table-parameter
[ "sql", "sql-server", "sql-order-by" ]
How do I format the returned data to 2 decimals and in percentage format, like 100.00% or 67.39% instead of 100.000000 or 67.391304?

```
SUM(qa.scripting1+qa.conduct1+qa.conduct2+qa.conduct3)*100.0/46 as 'C%'
```

I tried `ROUND()` but I got an error stating that the ROUND function requires 2 to 3 arguments:

```
ROUND(SUM(qa.scripting1+qa.conduct1+qa.conduct2+qa.conduct3)*100.0/46) as 'C%'
```

Thanks!
You can convert your original value to a decimal:

```
CONVERT(VARCHAR(20),
        CONVERT(DECIMAL(18,2),
                SUM(qa.scripting1+qa.conduct1+qa.conduct2+qa.conduct3)*100.0/46)
) + '%' as 'C%'
```

The first number in the decimal represents the total number of digits, including decimal places, and the second number represents the number of decimal places.
Using Round and Cast will work. First round to 2 decimal places then convert to a decimal with 2 places to truncate the excess zeros. ``` select cast(Round(yourValue, 2) as decimal(18,2)) ``` [Sql Fiddle](http://sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/624)
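If you prefer to do the formatting in the application layer rather than in SQL, the same round-to-two-places-plus-percent-sign logic is a one-liner in, e.g., Python (a sketch; the function name is made up):

```python
def as_percent(value):
    # format to 2 decimal places and append a percent sign,
    # mirroring CONVERT(DECIMAL(18,2), ...) + '%'
    return f"{value:.2f}%"

print(as_percent(100.0))      # 100.00%
print(as_percent(67.391304))  # 67.39%
```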
How to format % and in 2 decimal points?
[ "sql", "sql-server", "sql-server-2008" ]
I know `(+)` means outer join.

```
SELECT *
FROM email a, email_types b
WHERE a.user_id (+) = 10
  AND a.email_id(+) = b.email_enum;

SELECT *
FROM email a, email_types b
WHERE a.user_id = 10
  AND a.email_id(+) = b.email_enum;
```

What is the meaning of `a.user_id (+) = 10`? Is it equal to `a.user_id = 10`, or do they have a different meaning? What is the difference between the two queries? And is `a.user_id (+) = 10` matched against some column of `b`, or is the value 10 simply compared to the `user_id` column?
This is best explained by converting the Oracle syntax to standard SQL:

```
SELECT *
FROM email a, email_types b
WHERE a.user_id (+) = 10
  AND a.email_id(+) = b.email_enum;
```

is equivalent to:

```
SELECT *
FROM email_types b
LEFT JOIN email a
    ON  a.email_id = b.email_enum
    AND a.user_id = 10;
```

This means "do an outer join between email\_types and email, but only consider rows from the `email` table where `user_id` equals 10". It will return all rows from `email_types` plus all matching emails (if there are any) for `user_id = 10`; if there are no emails for that user, then no rows from the `email` table are considered.

If you remove the `(+)` operator from the original query's condition on the `user_id` column, it would be equivalent to:

```
SELECT *
FROM email_types b
LEFT JOIN email a
    ON a.email_id = b.email_enum
WHERE a.user_id = 10;
```

which would effectively result in an inner join because of the condition on the outer joined table.
The `(+)` syntax is Oracle's old, outdated syntax for an implicit outer join. The side with the `(+)` is the side that may not have matches, so this query will return all records from `b` with their counterparts in `a` where `a.user_id = 10`, or with `null`s if there's no matching `a` record.
What is the meaning of the query?
[ "sql", "database" ]
I have a daily scheduled process flow which refreshes a bunch of tables within the same library. At the end of the process flow, all tables should have the same up-to-date records, and I want to double-check this by checking the maximum value of `date`. The problem is: how can I quickly extract the max value of `date` from all these tables and then compare them?

```
proc sql;
select max(date) from lib.table1;
select max(date) from lib.table2;
select max(date) from lib.table3;
...
quit;
```
Create a view that appends all the tables that have the date variable, and select the max date per dataset. If your tables don't have the same structure, you can modify the set statement to keep only the date variable. You may want to do that anyway, to speed up the process.

```
data max_date/view=max_date;
    set table: indsname=source;
    dset=source;
    keep date dset;
run;

proc sql;
    create table maximum_date as
    select max(date) as Latest_Date
    from max_date
    group by dset;
quit;
```
In plain SQL it would look like ``` select 'table1' table_name, max(date) max from lib.table1 union all select 'table2' table_name, max(date) max from lib.table2 union all select 'table3' table_name, max(date) max from lib.table3 ``` Other options are stored procedures, in-line views and probably more.
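A quick runnable sketch of the plain-SQL variant against SQLite (the tables and dates are made up; the date column is named `d` here simply to avoid reserved-word surprises):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (d TEXT);
CREATE TABLE table2 (d TEXT);
CREATE TABLE table3 (d TEXT);
INSERT INTO table1 VALUES ('2015-06-01'), ('2015-06-04');
INSERT INTO table2 VALUES ('2015-06-04');
INSERT INTO table3 VALUES ('2015-06-03');
""")

# one row per table, so the maxima are easy to compare side by side
rows = con.execute("""
SELECT 'table1' AS table_name, MAX(d) AS max_d FROM table1
UNION ALL
SELECT 'table2', MAX(d) FROM table2
UNION ALL
SELECT 'table3', MAX(d) FROM table3
""").fetchall()
print(rows)
```

Here the mismatch (table3 lagging a day behind) is immediately visible in the result.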
select maximum value of common column for multiple data set
[ "sql", "sas" ]
I've got some date/time data in a string, in the format '26/10/2009 09:06:43' (i.e. the date is in UK format). When I try ``` SELECT CAST('26/10/2009 09:06:43' AS DATETIME) ``` I get the following error: > The conversion of a char data type to a datetime data type resulted in > an out-of-range datetime value. I'm sure the solution is really, really simple but every example I can find online seems to overcomplicate things! I'd appreciate any help you can give!
Try the `convert` function instead, providing a style number:

```
SELECT convert(DATETIME, '26/10/2009 09:06:43', 103)
```

You can find all the formats here: <http://www.sqlhub.com/2009/04/list-of-all-available-datetime-format.html>
In Oracle it's like this:

```
SELECT to_date('26/10/2009 09:06:43', 'DD/MM/YYYY HH24:MI:SS') from dual;
```

Per the following site, <http://www.sqlines.com/oracle-to-sql-server/to_date>, it seems to be like this in SQL Server:

```
SELECT CONVERT(DATETIME, '26/10/2009 09:06:43');
```
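If the conversion can happen in the application layer instead of the database, the same UK-format (dd/mm/yyyy) parse looks like this in Python; a sketch, noting that style 103 in SQL Server's `CONVERT` corresponds to the `%d/%m/%Y` pattern here:

```python
from datetime import datetime

# parse day-first UK format, matching SQL Server's style 103
dt = datetime.strptime('26/10/2009 09:06:43', '%d/%m/%Y %H:%M:%S')
print(dt.isoformat())  # 2009-10-26T09:06:43
```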
SQL - Converting a non-standard String to DateTime
[ "sql", "sql-server" ]
I need to get the records of the last 24 hours, but not grouped by clock hour like this:

```
SELECT HOUR(CompDate) AS hour, COUNT(1) AS action
FROM mytable
WHERE CompDate >= DATE_SUB(NOW(), INTERVAL 24 HOUR)
GROUP BY `hour`;
```

The above query will tell me that: hour 22 --> 6 actions, hour 21 --> 9 actions. What I want to have is: 1 hour ago --> 5 actions, 2 hours ago --> 3 actions, etc... I need to get the sum of actions from 1 hour ago, 2 hours ago, ..., n hours ago. Any help would be appreciated.
You can do it as follows:

```
CREATE TABLE Test
(
    `Id` INT,
    `DateTimes` DATETIME
);

INSERT INTO Test(Id, DateTimes)
VALUES
    (1, '2015-06-09 10:12:12'),
    (2, '2015-06-09 10:13:12'),
    (3, '2015-06-09 09:12:12'),
    (4, '2015-06-09 09:15:12'),
    (5, '2015-06-09 08:15:10')

SELECT TIMESTAMPDIFF(HOUR,NOW(),DateTimes) * -1 AS Hours, COUNT(*) AS Action
FROM Test
GROUP BY TIMESTAMPDIFF(HOUR,NOW(),DateTimes)
```

**OUTPUT:**

```
Hour    Action
24      1
23      2
22      2
```

**[SQL FIDDLE](http://sqlfiddle.com/#!9/a8980/4)**
I guess the only change you need to do is to convert your HOUR to the difference between NOW and the CompDate: ``` SELECT HOUR( timediff( NOW( ) , CompDate) ) AS HOUR , COUNT( 1 ) AS action FROM mytable WHERE CompDate > DATE_SUB( NOW( ) , INTERVAL 24 HOUR ) GROUP BY HOUR ORDER BY HOUR ```
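The hour-bucketing both answers rely on (`TIMESTAMPDIFF` / `timediff`) can also be sketched outside the database. This Python sketch uses made-up timestamps and truncates the difference to whole hours, as `TIMESTAMPDIFF(HOUR, ...)` does for positive differences:

```python
from collections import Counter
from datetime import datetime

def hours_ago(now, ts):
    # truncate to whole hours: 90 minutes ago still counts as "1 hour ago"
    return int((now - ts).total_seconds() // 3600)

now = datetime(2015, 6, 9, 10, 30)
events = [
    datetime(2015, 6, 9, 10, 12),  # 0 hours ago
    datetime(2015, 6, 9, 9, 45),   # 0 hours ago
    datetime(2015, 6, 9, 9, 15),   # 1 hour ago
    datetime(2015, 6, 9, 8, 10),   # 2 hours ago
]
actions = Counter(hours_ago(now, e) for e in events)
print(sorted(actions.items()))  # [(0, 2), (1, 1), (2, 1)]
```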
Get records of last hour, last 2 hours.....last n hours
[ "mysql", "sql", "database" ]
There is a column in my table in which we are storing string values in the format `'HH:MM'`. While fetching records from this table, everything works OK with

```
DATEDIFF(MINUTE, '00:00', ColumnName)
```

The problem is when we have a value greater than `23:59`. It shows an error like

> Conversion failed when converting date and/or time from character string.

Can anybody suggest the right approach for achieving this scenario?
So it sounds like you're saving the length of a time period. Try storing it in minutes. My query can handle numbers of different lengths since it's based on the colon.

```
DECLARE @yourTable TABLE (ColumnName VARCHAR(10));
INSERT INTO @yourTable VALUES ('100:00'), ('24:20');

SELECT  ColumnName,
        (hr * 60) + minut AS time_period_in_minutes
FROM @yourTable
CROSS APPLY (SELECT CAST(SUBSTRING(ColumnName,0,CHARINDEX(':',ColumnName)) AS INT),
                    CAST(SUBSTRING(ColumnName,CHARINDEX(':',ColumnName) + 1,LEN(ColumnName)) AS INT)
            ) CA(hr,minut)
```

Results:

```
ColumnName time_period_in_minutes
---------- ----------------------
100:00     6000
24:20      1460
```
If you are storing the value as something other than a time, why not just store the number of minutes and convert to whatever format you want on output? Otherwise, I would suggest that you simply convert the value to minutes: ``` select (cast(left(ColumnName, 2) as int) * 60 + cast(right(ColumnName, 2) as int) ) as Minutes ``` If you are not using date/time values, there is no requirement for using the functions *specifically* designed for them. EDIT: To handle hours longer than 99, use `charindex()`: ``` select (cast(left(ColumnName, charindex(':', ColumnName) - 1) as int) * 60 + cast(right(ColumnName, 2) as int) ) as Minutes ```
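The split-on-the-colon logic from both answers is easy to express (and test) outside SQL too; a minimal Python sketch:

```python
def to_minutes(value):
    # split on the colon so the hours part may have any number of digits
    hours, minutes = value.split(':')
    return int(hours) * 60 + int(minutes)

print(to_minutes('100:00'))  # 6000
print(to_minutes('24:20'))   # 1460
```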
How to Handle DATEDIFF(MINUTE, '00:00', '24:20') Like scenario?
[ "sql", "sql-server", "sql-server-2014" ]
```
SELECT cust_email
FROM customer AS c
WHERE c.cust_email="pankaj@yahoo.com"

SELECT cust_email
FROM customer
WHERE cust_email="pankaj@yahoo.com"
```

I have these two SQL queries which perform the same operation. My question is: what is the use of giving `customer` another name when selecting data?
This is a so-called alias. In your case, it is not required. It is used to make querying easier when the current names are inconvenient, ambiguous or unknown (for group functions).

1. Example of ambiguity in a JOIN query. Say we have two tables:

   **Users**

   ```
   Id int (PK)
   Name nvarchar(50)
   ```

   **Books**

   ```
   Id int (PK)
   UserId int (FK)
   Name nvarchar(30)
   ```

   Now, if you query both tables, it will be ambiguous because the names are similar. Here you can use aliases:

   ```
   SELECT u.Id as UserId, u.Name as UserName, b.Name AS BookName
   FROM Users AS u
   INNER JOIN Books AS b ON u.Id = b.UserId
   ```

   You will get the results:

   ```
   | UserId | UserName | BookName |
   --------------------------------
   | 1      | John     | Book123  |
   | 2      | Mark     | BookXYZ  |
   etc...
   ```

   In case you make a join on columns with the same name, you will **have to** use aliases. Otherwise, the syntax is wrong:

   ```
   INNER JOIN Books ON Id = Id ???
   ```

2. Naming the results of group functions. For example, in queries with functions like

   ```
   SELECT COUNT(*) as RowsCount FROM Users

   SELECT CONCAT(FirstName, " ", LastName, " ", MiddleName) as FullName FROM Users
   ```

   you can use a column alias to set a name and access it in the SQL reader or somewhere else.

3. Aliasing columns for convenience in some situations like this one:

   ```
   SELECT Id, HasUserEverBeenOnAHelpPage as Value FROM SomeTable
   ```

   to get

   ```
   | UserId | Value |
   ------------------
   | 1      | true  |
   | 2      | false |
   ```

   instead of

   ```
   | UserId | HasUserEverBeenOnAHelpPage |
   ---------------------------------------
   | 1      | true                       |
   | 2      | false                      |
   ```

This doesn't mean that aliases cannot be used in other situations.
Sometimes, aliases are even used for code readability and the programmer's convenience:

```
SELECT u.Id, u.FirstName, u.LastName,
       ua.City, ua.AddressLine, ua.PostalCode,
       us.Language, us.IsCookieEnabled,
       lh.LastEnterDate
FROM Users as u
INNER JOIN UserAddresses as ua ON ua.UserId = u.Id
INNER JOIN UserSettings as us ON us.UserId = u.Id
INNER JOIN LoginHistory as lh ON lh.UserId = u.Id
```

In this case, the names are not ambiguous and the query could easily be written without aliases. However, it is more convenient to work with them than with this:

```
SELECT Users.Id, Users.FirstName, Users.LastName,
       UserAddresses.City, UserAddresses.AddressLine, UserAddresses.PostalCode,
       UserSettings.Language, UserSettings.IsCookieEnabled,
       LoginHistory.LastEnterDate
FROM Users
INNER JOIN UserAddresses ON UserAddresses.UserId = Users.Id
INNER JOIN UserSettings ON UserSettings.UserId = Users.Id
INNER JOIN LoginHistory ON LoginHistory.UserId = Users.Id
```

Read more here:

<http://www.w3schools.com/sql/sql_alias.asp>

[When to use SQL Table Alias](https://stackoverflow.com/questions/198196/when-to-use-sql-table-alias)

<http://www.techonthenet.com/sql/alias.php>
When querying a single table, using an alias is really not necessary (but it's good practice to get used to using one). When querying multiple tables, joining, etc. (especially when the same table is referenced multiple times), it is needed.

```
select *
from Customer c
join Email e on e.cust_id = c.id
```

It is also smart to use aliases in nested selects, especially when multiple tables share the same column name. This is to be sure that you are referencing the column in the table you want to reference, and not another one.
Diffrence between these select queries
[ "mysql", "sql" ]
I have a query like below:

```
SELECT COUNT(*) AS AppleSupports FROM VendorItemPricing WHERE VendorName = 'Apple'

SELECT COUNT(*) AS HpSupports FROM VendorItemPricing WHERE VendorName = 'HP'
```

The above queries give me results like below:

```
AppleSupports
63

HpSupports
387
```

How can I make my query return the results in one row, like below?

```
AppleSupports   HpSupports
63              387
```
Use sub-queries inside your select statement: ``` SELECT (select count(*) from VendorItemPricing where VendorName = 'Apple') as AppleSupports, (select count(*) from VendorItemPricing where VendorName = 'HP') AS HpSupports ```
``` Select Sum(Case When vp.VendorName = 'Apple' Then 1 Else 0 End) As AppleSupports ,Sum(Case When vp.VendorName = 'HP' Then 1 Else 0 End) As HpSupports From VendorItemPricing As vp With (Nolock) Where vp.VendorName In ('Apple','HP') ```
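The conditional-aggregation idea in the answer above is portable across databases; here it is as a runnable sketch against SQLite with made-up row counts (3 Apple, 5 HP, 2 other):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE VendorItemPricing (VendorName TEXT)")
con.executemany("INSERT INTO VendorItemPricing VALUES (?)",
                [('Apple',)] * 3 + [('HP',)] * 5 + [('Dell',)] * 2)

# one pass over the table; each SUM counts only its vendor's rows
row = con.execute("""
SELECT SUM(CASE WHEN VendorName = 'Apple' THEN 1 ELSE 0 END) AS AppleSupports,
       SUM(CASE WHEN VendorName = 'HP'    THEN 1 ELSE 0 END) AS HpSupports
FROM VendorItemPricing
WHERE VendorName IN ('Apple', 'HP')
""").fetchone()
print(row)  # (3, 5)
```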
Merge two SQL query results into one result
[ "sql", "sql-server", "sql-server-2008", "sql-server-2012" ]
In PostgreSQL, I would like to select a row based on some criteria, but if no row matches the criteria, I would like to return the first row. The table actually contains an ordinal column, so the task should be easier (the first row is the one with ordinal 0). For example: ``` SELECT street, zip, city FROM address WHERE street LIKE 'Test%' OR ord = 0 LIMIT 1; ``` But in this case, there is no way to guarantee the order of the records that match, and I have nothing to order them by. What would be the way to do this using a single `SELECT` statement?
> I would like to select a row based on some criteria, but if no row
> matches the criteria, I would like to return the first row

## Shorter (and correct)

You don't actually need a `WHERE` clause *at all*:

```
SELECT street, zip, city
FROM address
ORDER BY street !~~ 'Test%', ord
LIMIT 1;
```

`!~~` is just the Postgres operator for `NOT LIKE`. You can use either. Note that by inverting the logic (`NOT LIKE` instead of `LIKE`), we can now use the default `ASC` sort order and NULLs sort last, which may be important. Read on.

This is shorter (but not necessarily faster). It is also subtly different (more reliable) than the [currently accepted answer by @Gordon](https://stackoverflow.com/a/30754014/939860). When **sorting by a `boolean` expression** you must understand how it works:

* [Sorting null values after all others, except special](https://stackoverflow.com/questions/21891803/sorting-null-values-after-all-others-except-special/21892611#21892611)

The currently accepted answer uses `ORDER BY <boolean expression> DESC`, which would sort NULLs first. In such a case you should typically add `NULLS LAST`:

* [PostgreSQL sort by datetime asc, null first?](https://stackoverflow.com/questions/9510509/postgresql-sort-by-datetime-asc-null-first/9511492#9511492)

If `street` is defined `NOT NULL` this is obviously irrelevant, but that has *not* been defined in the question. (*Always* provide the table definition.) The currently accepted answer avoids the problem by excluding NULL values in the `WHERE` clause. Some other RDBMS (MySQL, Oracle, ..) don't have a proper `boolean` type like Postgres, so we often see incorrect advice from people coming from those products.

Your current query (as well as the currently accepted answer) *needs* the `WHERE` clause - or at least `NULLS LAST`. With the different expression in `ORDER BY` neither is necessary.
**More importantly**, yet, if multiple rows have a matching `street` (which is to be expected), the returned row would be arbitrary and could change between calls - generally an undesirable effect. This query picks the row with the smallest `ord` to break ties and produces a stable result. This form is also more flexible in that it does not rely on the existence of a row with `ord = 0`. Instead, the row with the smallest `ord` is picked either way. ## Faster with index (And still correct.) For big tables, the following index would radically improve performance of this query: ``` CREATE INDEX address_street_pattern_ops_idx ON address(street text_pattern_ops); ``` Detailed explanation: * [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528) Depending on undefined details it may pay to add more columns to the index. The fastest query using this index: ``` ( SELECT street, zip, city FROM address WHERE street LIKE 'Test%' ORDER BY ord -- or something else? -- LIMIT 1 -- you *could* add LIMIT 1 in each leg ) UNION ALL ( SELECT street, zip, city FROM address ORDER BY ord -- LIMIT 1 -- .. but that's not improving anything in *this* case ) LIMIT 1 ``` BTW, this is a *single* statement. This is more verbose, but allows for a simpler query plan. The second `SELECT` of the `UNION ALL` is never executed if the first `SELECT` produces enough rows (in our case: 1). If you test with `EXPLAIN ANALYZE`, you'll see `(never executed)` in the query plan. Details: * [Way to try multiple SELECTs till a result is available?](https://stackoverflow.com/questions/14339715/way-to-try-multiple-selects-till-a-result-is-available/14365613#14365613) ### Evaluation of `UNION ALL` In reply to Gordon's comment. 
[Per documentation:](http://www.postgresql.org/docs/current/static/sql-select.html#SQL-UNION) > Multiple `UNION` operators in the same `SELECT` statement are evaluated > **left to right**, unless otherwise indicated by parentheses. Bold emphasis mine. And `LIMIT` makes Postgres stop evaluating as soon as enough rows are found. That's why you see `(never executed)` in the output of `EXPLAIN ANALYZE`. If you add an outer `ORDER BY` before the final `LIMIT` this optimization is not possible. Then *all* rows have to be collected to see which might sort first.
You are on the right track. Just add an `order by`: ``` SELECT street, zip, city FROM address WHERE street LIKE 'Test%' OR ord = 0 ORDER BY (CASE WHEN street LIKE 'Test%' THEN 1 ELSE 0 END) DESC LIMIT 1; ``` Or, alternately: ``` ORDER BY ord DESC ``` Either of these will put the `ord = 0` row last. EDIT: Erwin brings up a good point that from the perspective of index usage, an `OR` in the `WHERE` clause is not the best approach. I would modify my answer to be: ``` SELECT * FROM ((SELECT street, zip, city FROM address WHERE street LIKE 'Test%' LIMIT 1 ) UNION ALL (SELECT street, zip, city FROM address WHERE ord = 0 LIMIT 1 ) ) t ORDER BY (CASE WHEN street LIKE 'Test%' THEN 1 ELSE 0 END) DESC LIMIT 1; ``` This allows the query to make use of two indexes (`street` and `ord`). Note that this is really only because the `LIKE` pattern does not start with a wildcard. If the `LIKE` pattern starts with a wildcard, then this form of the query would still do a full table scan.
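Both answers hinge on ordering by the match predicate so that matching rows sort first. A small SQLite sketch of that idea with made-up addresses (in SQLite the `NOT LIKE` comparison yields 0/1, so ascending order puts matches first; Postgres sorts `false` before `true` the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE address (street TEXT, zip TEXT, city TEXT, ord INT);
INSERT INTO address VALUES
  ('Main St',  '1000', 'Springfield',  0),
  ('Test Ave', '2000', 'Shelbyville',  1),
  ('Oak Rd',   '3000', 'Capital City', 2);
""")

q = """
SELECT street FROM address
ORDER BY street NOT LIKE 'Test%', ord
LIMIT 1
"""
r1 = con.execute(q).fetchone()
print(r1)  # ('Test Ave',) - a matching row exists, so it wins

con.execute("DELETE FROM address WHERE street LIKE 'Test%'")
r2 = con.execute(q).fetchone()
print(r2)  # ('Main St',) - no match left, falls back to the ord = 0 row
```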
Select first record if none match
[ "sql", "postgresql", "sql-limit" ]
I have a table containing an identity column as well as a column representing the creation date: ``` CREATE TABLE dbo.OrderStatus ( OrderStatusId int IDENTITY(1, 1) NOT NULL, CreationDate datetime NOT NULL default GETDATE(), CONSTRAINT PK_OrderStatus PRIMARY KEY(OrderStatusId) ) ``` Since the identity column generates a value by itself and the CreationDate is always going to be the current date (`GETDATE()`), I can add a row thanks to `DEFAULT VALUES`: ``` INSERT INTO dbo.OrderStatus DEFAULT VALUES; ``` But what can I do if I want to add, let's say, three records? **Current solution (edited some input since it didn't make any sense)** For now, in order to do what I want, I add several rows with `VALUES`: ``` INSERT INTO dbo.OrderStatus (CreationDate) VALUES (GETDATE()), (GETDATE()), (GETDATE()) ``` Although, I'd prefer to know the equivalent of `INSERT INTO .. DEFAULT VALUES` for multiple rows, in case that I add another column with a default value later on. Is there a way to insert N rows into a table with `DEFAULT VALUES` or in a similar way?
You can use your original definition and just use a while loop, for example ``` DECLARE @OrderStatus TABLE ( OrderStatusId int IDENTITY(1, 1) NOT NULL, CreationDate datetime NOT NULL DEFAULT GETDATE() --CONSTRAINT PK_OrderStatus PRIMARY KEY(OrderStatusId) -- this can be uncommented if creating a real table. ) DECLARE @i int = 0; WHILE @i < 100 -- insert 100 rows. change this value to whatever you want. BEGIN INSERT @OrderStatus DEFAULT VALUES SET @i = @i + 1; END SELECT * FROM @OrderStatus ``` Here's how to do it using a recursive CTE: ``` ;with cteNums(n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM cteNums WHERE n < 100 -- how many times to iterate ) INSERT @OrderStatus SELECT * FROM cteNums ``` Just note that for the CTE you'd have to specify `OPTION(MAXRECURSION ...)` if it's greater than 100. Also note that even though you're selecting a list of numbers from the CTE, they don't actually get inserted into the table.
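The recursive-CTE trick translates to SQLite as well, which also allows a `WITH` clause on an `INSERT`. A sketch: inserting an explicit `NULL` into an `INTEGER PRIMARY KEY` column plays the role of `DEFAULT VALUES` here, since SQLite then assigns the next id and the `CreationDate` default still applies:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE OrderStatus (
    OrderStatusId INTEGER PRIMARY KEY,
    CreationDate TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
)
""")

# generate 100 rows via a recursive numbers CTE
con.execute("""
WITH RECURSIVE nums(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM nums WHERE n < 100
)
INSERT INTO OrderStatus (OrderStatusId) SELECT NULL FROM nums
""")

count = con.execute("SELECT COUNT(*) FROM OrderStatus").fetchone()[0]
print(count)  # 100
```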
An easier way is:

```
insert dbo.OrderStatus default values
go 500
```

This will insert 500 rows of default values.
How to insert N rows of default values into a table
[ "sql", "sql-server", "t-sql", "insert" ]
I'm looking for an SQL (MSSQL) tool that will allow me to edit/insert/etc. data without the need to type sql statements. I want to simply enter data into a grid. I can't find this functionality in SSMS. Is there any tool that does that (preferably by MS)?
## Table Editor

If you right-click on a table in SSMS and click edit, you can edit the data in there directly.

![Table Editor](https://i.stack.imgur.com/mPTkW.png)

---

## Query Designer

If you select a row or cell in the table editor, you have access to the Query Designer menu on the tool bar. By clicking on the pane menu, it will open a sub-menu that will give you access to the SQL, Criteria and a Diagram. This will allow you to design queries visually.
SSMS does let you do it, although it's not a great tool by any stretch. If you right click on a table and go to Edit Top 100 rows, you can get the grid. If you want to be able to edit any number of rows, go to Tools -> Options -> SQL Server Object Explorer -> Commands and change both values in the Table and View options to 0, which means unlimited. You can also edit the query that is used to generate the grid. In the toolbar, there's a button that says SQL and the hovertext is "Show SQL Pane". You can then edit the SQL to include/exclude columns or add conditions. This may help with large tables. You can also use this method to overcome the row number limit if you didn't change the defaults as in the above paragraph.
Is there an SQL editor for the data itself?
[ "", "sql", "sql-server", "ssms", "" ]
I want to copy values from a `bit` column called "absent" in table `Person` to a new, empty `datetime` column I just made. The `bit` type column contains rows with values null, 0 and 1. Now, I'd like to copy the values of this `bit` column to the new `datetime` column so that all null AND 0 values in the `bit` column become null values in the `datetime` column. All the 1 values would become new, current dates. How do I do that? I tried searching w3schools for examples but couldn't find any.
Well, you can do it like this: ``` update Person set dateTime_Column = case when isnull(absent, 0) = 0 then null else getdate() end ```
Try this: ``` update someTable set dateColumn = case when bitColumn is null or bitColumn = 0 then null else getdate() end ```
How to copy values from bit column to datetime column?
[ "", "sql", "sql-server", "datetime", "sql-server-2012", "" ]
I have a Table like this. ``` NoteID CustomerID CustomerName Note Type Date Active 6 81 Paris test Info 2015-06-04 1 10 81 Rotterdam Everything is allright Comment 2015-06-04 1 11 81 Hamburg Everything is allright Info 2015-06-04 1 12 81 Hamburg Everything is allright Info 2015-06-04 1 13 81 Amsterdam Everything is allright Info 2015-06-04 1 14 81 Rotterdam Everything is allLeft Comment 2015-06-04 1 15 81 Hamburg Everything is allLeft Info 2015-06-04 1 16 81 Hamburg Everything is allLeft Info 2015-06-04 1 17 81 Amsterdam Everything is allLeft Info 2015-06-04 1 ``` When i execute this query: ``` SELECT * FROM CarsNote WHERE Note LIKE '%ddddddddddddddd%' AND Type != 'Comment' OR Type != 'error' ``` All notes are in the result. What I expect is that there are no notes in the result. Because of the `LIKE` statement `'%ddddddddddddddd%'`. Can someone please explain why this query doesn't work like I expected?
Sure. It's called operator precedence (or more generally, operator evaluation order). If you add parentheses, this is what you're evaluating: ``` WHERE ((Note LIKE '%ddddddddddddddd%') AND Type != 'Comment') OR Type != 'error' ``` As per MSDN: > When more than one logical operator is used in a statement, NOT is evaluated first, then AND, and finally OR. Arithmetic, and bitwise, operators are handled before logical operators. The `NOT` part is great - it means you can get rid of unnecessary parentheses. The rest is a bit trickier - `AND` will always have precedence over `OR`, all else equal. This also means that Tim's second suggestion *will actually work* - but yeah, don't do that. It's just crazy. Even if you were aware of the rules for operator evaluation, it's a bad idea to not make it explicit - it's just way too fragile, not to mention hard to read (I'm currently working with a codebase full of stuff like this - just *don't*. You'll save yourself and everyone else a lot of trouble in the future.). Just use this instead: ``` where Note like '%ddddddddddddddd%' and not (Type = 'Comment' or Type = 'Error') ``` or even better, ``` where Note like '%ddddddddddddddd%' and Type not in ('Comment', 'Error') ```
You have two options: 1. wrap the `OR` in parentheses 2. repeat the `AND` + `OR` First approach(\*): ``` SELECT * FROM CarsNote WHERE Note LIKE '%ddddddddddddddd%' AND (Type != 'Comment' OR Type != 'error') ``` Second: ``` SELECT * FROM CarsNote WHERE Note LIKE '%ddddddddddddddd%' AND Type != 'Comment' OR Note LIKE '%ddddddddddddddd%' AND Type != 'error' ``` I prefer the first since it's more concise and less error-prone. **\* Important Note:** Both approaches are pointless since the combination of `!=` and `OR` is always true; it removes the filter and returns all records. So actually you have to use `AND`: ``` WHERE Note LIKE '%ddddddddddddddd%' AND (Type != 'Comment' AND Type != 'error') ``` You don't need the parentheses with `AND`: ``` WHERE Note LIKE '%ddddddddddddddd%' AND Type != 'Comment' AND Type != 'error' ``` If you want to include/exclude multiple values it's more readable to use `IN`/`NOT IN`: ``` WHERE Note LIKE '%ddddddddddddddd%' AND Type NOT IN('Comment', 'error') ``` Note that this skips records where the `Type` is `NULL`. Therefore you have to use: ``` WHERE Note LIKE '%ddddddddddddddd%' AND (Type IS NULL OR Type NOT IN('Comment', 'error')) ```
Combination AND OR result
[ "", "sql", "sql-server", "" ]
I have read many posts related to this issue but unfortunately none of the suggested solutions worked for me. I am trying to set up a SQL statement that would create a column in my results table with values formatted as follows: date + 8 digit number incremented by 1 for each new record and prefixed with leading zeros. In other words, I would like to have a column with data in the following format: ``` 2015061000000001 2015061000000002 2015061000000003 2015061000000004 2015061000000005 ... ``` I tried using `row_number()` as well as a local variable but I am not having much luck getting this to work. This is what I have so far: ``` declare @a int; set @a = 1; select 'aaa', (select (CONVERT(VARCHAR(10),GETDATE(),112)) + RIGHT('0000000'+ CONVERT(VARCHAR,2),8)), row_number() over(order by id), (select (CONVERT(VARCHAR(10),GETDATE(),112)) + RIGHT('0000000'+ CONVERT(VARCHAR, ( row_number() over(order by id) )),8)), (select (CONVERT(VARCHAR(10),GETDATE(),112)) + RIGHT('0000000'+ CONVERT(VARCHAR, ( select @a + 1 )),8)) FROM MY_TABLE ``` Results table: ``` aaa | 2015061000000002 | 1 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 2 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 3 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 4 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 5 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 6 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 7 | 2015061000000001 | 2015061000000002 aaa | 2015061000000002 | 8 | 2015061000000001 | 2015061000000002 ``` Could anyone please advise how to get to return this? ``` 2015061000000001 2015061000000002 2015061000000003 2015061000000004 2015061000000005 ... ``` Thank you!
You are on the right path, just need a little push... Try this: ``` SELECT CONVERT(varchar(10), GETDATE(), 112) + RIGHT('0000000'+ CAST( ROW_NUMBER() OVER(ORDER BY t_Id) As varchar(8)) ,8) FROM MY_TABLE ```
Try this: ``` SELECT 'aaa', CONVERT(VARCHAR(10),GETDATE(),112) + RIGHT('0000000' + CAST(ROW_NUMBER() OVER (ORDER BY id) AS VARCHAR(8)), 8) FROM yourTable ``` Concatenate the result from `ROW_NUMBER` with 7 `'0'`s so as to be sure that you will always have *at least* 8 digits, then select exactly 8 starting from the rightmost digit.
How to increment and format integer in results table in SQL Server 2008?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This is my data and I need the result in the fastest way. ![enter image description here](https://i.stack.imgur.com/vOTTH.png)
Standard conditional aggregation: ``` select city, sum(case when active = 'true' then 1 else 0 end) active, sum(case when blacklist = 'true' then 1 else 0 end) blacklist, sum(case when license = 'true' then 1 else 0 end) license, sum(case when married = 'true' then 1 else 0 end) married from TableName group by city ```
This gives the output you are expecting: ``` declare @t table (code int,name varchar(10),Active varchar(5),Black varchar(5),License varchar(5),married varchar(5),city int) insert into @t (code,name,Active,Black,License,married,city)values (1,'john','true','false','true','true',1001), (2,'jack','true','true','true','false',1002), (3,'sara','false','false','false','true',1001), (4,'shiela','true','false','false','false',1002) ;with cte as ( select distinct city, CASE WHEN Active = 'true' then COUNT(ACTIVE)else NULL end ACTIVE, CASE WHEN Black = 'true' then COUNT(Black)else NULL end BLACK, CASE WHEN License = 'true' then COUNT(License)else NULL end License, CASE WHEN married = 'true' then COUNT(married)else NULL end married from @t group by city,active,Black,License,married) select DISTINCT city,COUNT(ACTIVE)[ACTIVE(*)],COUNT(Black)[Black(*)],COUNT(License)[License(*)],COUNT(married)[married(*)] from cte GROUP BY CITY ```
Get count by multiple conditions
[ "", "sql", "sql-server", "t-sql", "" ]
I have the following SQL ``` SELECT a.StudentsID, a.ClassGroup, a.FinalGrade, COUNT(*) AS ranknumber FROM FinalAVG AS a INNER JOIN FinalAVG AS b ON (a.ClassGroup = b.ClassGroup) AND (a.FinalGrade <= b.FinalGrade) GROUP BY a.ClassGroup, a.StudentsID, a.FinalGrade HAVING COUNT(*) <= 3 ORDER BY a.ClassGroup, COUNT(*) DESC; ``` It works well. However when I switch to design view, it shows me the following error ``` Microsoft Access can't represent the join expression a.FinalGrade <= b.FinalGrade ``` Any help please ?
That's by design, a limitation of the GUI designer. However, the SQL view is available for this type of query. Just go ahead and keep working in SQL view, correcting errors if any. As written, your SQL is valid.
MS Access Design view can only handle queries where both parts of the join are of the same datatype and there is equality between them, e.g. ``` ON A.ID = B.ID ``` But in the SQL view you can do all kinds of joins. A very common case is when one part is of string datatype (but numeric in content) and the other is an integer, so you can join them like this: ``` A.ID = Cint(B.ID) ```
Microsoft Access can't represent the join expression "<=" operators
[ "", "sql", "ms-access", "ms-access-2013", "" ]
As I asked in this question: [Oracle SQL Group By if](https://stackoverflow.com/questions/30709480/oracle-sql-group-by-if) I log file usage in my application. There are 3 file sources: * Pool * MDA * Other If the file is opened twice from MDA and once from Pool, I'll get two entries: ``` TESTID SITE LATEST_READ READ_COUNT FILE_ORIGIN_ID ------------- ---------- ----------- ---------- -------------- File1 |Site1 |02/05/13 | 2| 1 File1 |Site2 |22/01/14 | 3| 2 ``` --- What I want to achieve is to get the ratio of files that are not in the Pool OR the MDA, grouped by site. So I managed to write this request: ``` SELECT Count(TESTID) as OTHER_FILES, SITE, 'OTHERS' FROM USER_STATS.FILE_USAGE_LOG WHERE TESTID not in ( -- Files that are on Pool OR MDA SELECT TESTID FROM USER_STATS.FILE_USAGE_LOG WHERE FILE_ORIGIN_ID < 2 ) AND LATEST_READ between '01/05/2015' and '01/06/2015' GROUP BY Site UNION ALL SELECT Count(TESTID) as OTHER_FILES, site, 'Files that are at least in Pool or MDA' FROM USER_STATS.FILE_USAGE_LOG WHERE TESTID in ( -- Files that are on Pool OR MDA SELECT TESTID FROM USER_STATS.FILE_USAGE_LOG WHERE FILE_ORIGIN_ID < 2 ) AND LATEST_READ between '01/05/2015' and '01/06/2015' GROUP BY Site ``` Which gives me this: ``` 18 BR-CTA Files that are at least in Pool or MDA 324 BR-CTA OTHERS 26 BR-CTA-VPN OTHERS 5 CN-TSN-VPN OTHERS 2040 FR-LYON Files that are at least in Pool or MDA 248 FR-LYON OTHERS 1 IN-BLR Files that are at least in Pool or MDA 1 IN-PUNE OTHERS 810 JP-SAIT OTHERS 48 JP-SAIT Files that are at least in Pool or MDA ... ``` And I would like to have this: ``` 94% BR-CTA Ratio -- 94% in OTHER 100% BR-CTA-VPN Ratio -- 100% in OTHER 100% CN-TSN-VPN Ratio -- 100% in OTHER 10% FR-LYON Ratio -- 10% in OTHER 0% IN-BLR Ratio -- 0% in OTHER 100% IN-PUNE Ratio -- 100% in OTHER 94% JP-SAIT Ratio -- 94% in OTHER ... ``` But I can't achieve this whatever I try. **How can I do this?** I use `nbTotal / (nbOther) * 100` as the ratio calculation.
There are a few ways to do that, and what is possible or best depends in part on your RDBMS. However, here is one way. I am substituting your query above with an IntermediateResults table for simplicity. In practice you could use your query with a CTE, derived table, temp table or table variable. ``` CREATE TABLE IntermediateResults (OtherFiles INT, Site VARCHAR(20), Message VARCHAR(100)); GO INSERT INTO IntermediateResults (OtherFiles,Site,Message) VALUES (18,'BR-CTA','Files that are at least in Pool or MDA'); INSERT INTO IntermediateResults (OtherFiles,Site,Message) VALUES (324,'BR-CTA' ,'OTHERS'); INSERT INTO IntermediateResults (OtherFiles,Site,Message) VALUES (26,'BR-CTA-VPN','OTHERS'); INSERT INTO IntermediateResults (OtherFiles,Site,Message) VALUES (1,'IN-BLR','Files that are at least in Pool or MDA'); GO SELECT COALESCE(o.Site,p.Site) Site ,Ratio = CASE WHEN o.OtherFiles IS NULL THEN 0 WHEN p.OtherFiles IS NULL THEN 100 ELSE 100 * o.OtherFiles/(p.OtherFiles + o.OtherFiles) END FROM (SELECT * FROM IntermediateResults WHERE Message = 'OTHERS') o FULL JOIN (SELECT * FROM IntermediateResults WHERE Message <> 'OTHERS') p ON o.Site = p.Site ``` Results: ``` BR-CTA 94 IN-BLR 0 BR-CTA-VPN 100 ``` EDIT: An example of how to replace the table in my example with your query would be to use [subquery factoring](http://dba-oracle.com/t_oracle_subquery_factoring.htm), which is Oracle's equivalent of a T-SQL common table expression (the WITH construct). ``` WITH IntermediateResults AS ( /*your query here*/ ) SELECT COALESCE(o.Site,p.Site) Site ,Ratio = CASE WHEN o.OtherFiles IS NULL THEN 0 WHEN p.OtherFiles IS NULL THEN 100 ELSE 100 * o.OtherFiles/(p.OtherFiles + o.OtherFiles) END FROM (SELECT * FROM IntermediateResults WHERE Message = 'OTHERS') o FULL JOIN (SELECT * FROM IntermediateResults WHERE Message <> 'OTHERS') p ON o.Site = p.Site ```
Starting from the top: you don't need a UNION ALL query; you could retrieve your data with this query (I omitted the date range for easier reading): ``` SELECT COUNT(TESTID) AS OTHER_FILES,SITE ,CASE WHEN FILE_ORIGIN_ID < 2 THEN 'Files that are at least in Pool or MDA' ELSE 'OTHERS' END AS validCondition FROM FILE_USAGE_LOG as pivot GROUP BY pivot.TESTID ,(CASE WHEN FILE_ORIGIN_ID < 2 THEN 'Files that are at least in Pool or MDA' ELSE 'OTHERS' END); ``` With that, you can add a derived table, without grouping, to obtain the whole universe: ``` SELECT COUNT(TESTID) AS OTHER_FILES,SITE ,CASE WHEN FILE_ORIGIN_ID < 2 THEN 'Files that are at least in Pool or MDA' ELSE 'OTHERS' END AS validCondition, (COUNT(TESTID)/MAX(total))*100 AS ratio FROM FILE_USAGE_LOG as pivot CROSS JOIN(SELECT COUNT(TESTID) AS total FROM FILE_USAGE_LOG) AS u GROUP BY pivot.TESTID ,(CASE WHEN FILE_ORIGIN_ID < 2 THEN 'Files that are at least in Pool or MDA' ELSE 'OTHERS' END); ``` I hope this answer helps. Regards.
Percentage with union all request
[ "", "sql", "oracle", "union-all", "" ]
I have two tables `t1` and `t2`. `t1` has this structure: ``` yearmonth account company RCV_amount_t1 ``` `t2` has this structure: ``` yearmonth account company billing amount billing amount CM_1 billing amount CM_2 RCV_amount_t2 ``` I want to join `t2` to `t1` using `yearmonth`, `account`, and `company`. If they match, I want to update `RCV_amount_t2` with the value in `RCV_amount_t1`. Otherwise, I want to set `RCV_amount_t2` to spaces. In the same manner, I want to join `t1` with `t2` using `yearmonth`, `account`, and `company` and set values accordingly. Is it possible to achieve? If so, how do I go about it?
You'll want to use a [`MERGE`](https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm). It allows you to join two tables and specify how to update the values if they match. The general structure of a `MERGE` statement looks like: ``` MERGE INTO driver_table USING other_table ON ( driver_table.column1 = other_table.column1 AND driver_table.column2 = other_table.column2 AND ... ) WHEN MATCHED THEN UPDATE SET driver_table.some_column = other_table.some_value, driver_table.some_flag = 'Y', ... ; ```
> I want to join t2 to t1 using yearmonth, account, and company. If they > match, I want to update RCV\_amount\_t2 with the value in RCG\_amount\_t1. > Otherwise, I want to set RCV\_amount\_t2 to spaces. This will update the matching rows with the appropriate value, and update the rows with no match to `NULL`. If the field is numeric, you can't update it to "spaces"; `NULL` would be the appropriate indicator of no value. If the field is not numeric, then you could do a second update to replace `NULL` values with whatever you like, but `NULL` would still seem to me to be the most appropriate indicator of no value. ``` UPDATE t2 SET rcv_amount_t2 = ( SELECT rcv_amount_t1 FROM t1 WHERE t1.yearmonth = t2.yearmonth AND t1.account = t2.account AND t1.company = t2.company ) ```
Table update based on match/nomatch
[ "", "sql", "oracle", "join", "sql-update", "" ]
I have student marks in one database table, like ``` studid, studname, m1, m2, m3 ``` and students' personal information in another database table, like ``` studid, studname. ``` * I want to join both tables to get studid, studname, and the sum of m1, m2, m3 as 'total'. The results must display 'fail' if any of m1, m2, m3 is below 40, else 'pass'. * I also want to display the rank only for students who got pass marks.
``` select ID, Name, CASE WHEN total > 40 THEN 'Pass' ELSE 'fail' end from (select m.studid as ID, m.studname as Name, sum(m1)+sum(m2)+sum(m3) as total from marks m NATURAL JOIN info i group by studid,studname ) ```
Try this: ``` select studID, Studname, m1, m2, m3, (m1+ m2+ m3) as 'Total', case when(m1<40 OR m2<40 OR m3<40) then 'FAIL' else 'PASS' end as 'Result' from marks ``` It should work. You can join the 2 tables to produce the desired results.
How to find ranks for only students who got pass marks?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
One of the [core rules](http://en.wikipedia.org/wiki/Codd%27s_12_rules) for the relational model is the required uniqueness for tuples (rows): > Every individual scalar value in the database **must** be logically addressable by specifying the name of the containing table, the name of the containing column and the **primary key** value of the containing row. In a SQL world, that would mean that there could never exist two rows in a table for which all the column values were equal. If there was no meaningful way to guarantee uniqueness, a surrogate key could be added to the table. When the first SQL standard was released, it defined no such restriction, and it has been like this ever since. This seems like a root of all kinds of evil. Is there any meaningful reason why it was decided to be that way? In a practical world, where could the absence of such a restriction prove to be useful? Does it outweigh the cons?
You're assuming that databases are there solely for storing relational data; that's certainly not what they're used for, because practical considerations will always win. An obvious example where there's no need for a primary key would be a "state" log of some description (weather/database/whatever). If you're never going to query a single value from this table, you may not want a primary key, to avoid the cost of maintaining the key on every insert. If you have a use-case to pick up a single value from this table then sure, this would be a bad solution, but some people just don't need that. You can always add a surrogate key afterwards if it becomes absolutely necessary. Another example would be a write-intensive application that needs to tell another process to do something. This secondary process runs every N minutes/hours/whatever. Doing the de-duplication on N million records as a one-off is quicker than checking for uniqueness on every insert into the table (trust me). What are sold as relational databases are not being used solely as relational databases. They're being used as logs, key-value stores, graph databases etc. They may not have all the functionality of the competition but some do, and it's often simpler to have a single table that doesn't fit your relational model than to create a whole other database and suffer the data-transfer performance penalties. **tl;dr** People aren't mathematically perfect and so won't always use the mathematically perfect method of doing something. Committees are made up of people and can realise this, sometimes.
The short answer is that SQL is not relational and SQL DBMSs are not relational DBMSs. Duplicate rows are a fundamental part of the SQL model of data because the SQL language doesn't really try to implement the relational algebra. SQL uses a bag (multiset)-based algebra instead. The results of queries and other operations in relational algebra are relations that always have distinct tuples, but SQL DBMSs don't have the luxury of dealing only with relations. Given this fundamental "feature" of the SQL language, SQL database engines need to have mechanisms for processing and storing duplicate rows. Why was SQL designed that way? One reason seems to be that the relational model was just too big a leap of faith to make at that time. The relational model was an idea well ahead of its time. SQL on the other hand, was and remains very much rooted in the systems of three decades ago.
Why does SQL standard allow duplicate rows?
[ "", "sql", "relational-database", "" ]
How do I check that only one record exists for a certain Id? I have two tables called `Tbl_Company` and `Tbl_Employee`. I am fetching employees as follows- ``` SELECT DISTINCT emp.employee_id FROM Tbl_Company comp , Tbl_Employee emp WHERE emp.company_id = comp.company_id AND emp.company_id = 1234; ``` This query returns exactly one value. How can I make sure that the above query returns exactly one value for any company_id I enter? I tried using the solutions given in [This](https://stackoverflow.com/questions/4682821/oracle-get-a-query-to-always-return-exactly-one-row-even-when-theres-no-data "this") post with no success. Is there any simpler way to do this?
I have solved this by using the answer by `@davegreen100` in the comments: ``` SELECT comp.companyid, count(distinct emp.employee_id) AS employee_count FROM Tbl_Company comp , Tbl_Employee emp WHERE emp.company_id = comp.company_id AND emp.company_id = 1234 GROUP BY comp.companyid ``` This will give me the count of employees per company.
This would return one row per company: ``` SELECT comp.companyid, max(emp.employee_id) lastEmployeeID FROM Tbl_Company comp , Tbl_Employee emp WHERE emp.company_id = comp.company_id AND emp.company_id = 1234 GROUP BY comp.companyid ```
How to check if there exists only one record for a certain Id
[ "", "sql", "oracle", "oracle10g", "distinct", "" ]
I have a cursor C1 which selects the status column from table test. Using a for loop, I have to check whether the previous value of status is `ERROR` and the current value is `VALID`. If so, I have to update table `test1`. E.G. ``` for i in 1..c1 loop if previous.status = 'ERROR' and current.status = 'VALID' then update test1; end if; end loop; ``` Please help me: how can I store the previous and current values from the for loop for comparison?
To my understanding, this should do the work for you with minor changes (the collection and variable declarations were missing, as was the `loop` keyword): ``` declare type t_status_tab is table of test.status%type; l_status t_status_tab; v_previous test.status%type; cursor c1 is select status from test; begin open c1; fetch c1 bulk collect into l_status; close c1; for i in 1..l_status.count loop if (v_previous = 'ERROR' and l_status(i) = 'VALID') then update ....; -- your update of test1 goes here end if; v_previous := l_status(i); end loop; end; ```
You would DECLARE your cursor. DECLARE a variable to hold the previous value.(Set a default because there is no previous at this point) OPEN your cursor. FETCH your first record. Then you would WHILE loop over the rest of the records. At the end of each WHILE loop you'll do another FETCH. So you want to set the variable that holds your previous value BEFORE you fetch the next record.
How to store previous value and current value in for loop
[ "", "sql", "oracle", "plsql", "" ]
I have the following table: ![Data Table](https://i.stack.imgur.com/NC3Sg.png) From a `select` statement what I want is the latest unique rows (in green) for each `policy`. Sometimes the `policy` information will be from the day before (not all `policies` are published on the same day). In this scenario, `ACB1`'s `last name` and `amounts` have changed.
This will get the latest row for each policy by id column: ``` SELECT id, policy, first, last, amount, created FROM yourtable yt INNER JOIN (SELECT policy,MAX(id) as id FROM yourtable GROUP BY policy) maxid ON yt.policy = maxid.policy AND yt.id = maxid.id ```
Use `row_number` window function: ``` select * from (select *, row_number() over(partition by policy order by id desc) rn from TableName) t where rn = 1 ```
selecting latest result from a table using SQL query
[ "", "sql", "sql-server", "t-sql", "" ]
I'm facing a logic issue with my Query. I have two tables **`Table1`** and **`Table2`**, where `Table1` consists of: * `value` *to be summed* * `Id` *to be grouped by* * `Code` *holds foreign-key to `Table2`* And `Table2` consists of * `Code` * `Des` *the text description of code* What I'm trying to do is, group by `Table1.Id`, full join on `Table2.Code`, but, for each resulting group, I want to show all the rows from Table2 for each group generated by the query. **Sample code:** ``` SELECT Table2.Code, Table1.Id, Table2.DES, SUM(Table1.Value) AS SUM_VAL FROM ( SELECT 'A' AS Code, 1 AS Id, 10 AS Value FROM DUAL UNION SELECT 'A' AS Code, 2 AS Id, 20 AS Value FROM DUAL UNION SELECT 'B' AS Code, 1 AS Id, 10 AS Value FROM DUAL UNION SELECT 'B' AS Code, 1 AS Id, 30 AS Value FROM DUAL UNION SELECT 'B' AS Code, 2 AS Id, 50 AS Value FROM DUAL UNION SELECT 'C' AS Code, 1 AS Id, 40 AS Value FROM DUAL UNION SELECT 'C' AS Code, 2 AS Id, 60 AS Value FROM DUAL UNION SELECT 'D' AS Code, 1 AS Id, 20 AS Value FROM DUAL ) Table1 FULL JOIN ( SELECT 'A' AS Code, 'This is A' AS DES FROM DUAL UNION SELECT 'B' AS Code, 'This is B' AS DES FROM DUAL UNION SELECT 'C' AS Code, 'This is C' AS DES FROM DUAL UNION SELECT 'D' AS Code, 'This is D' AS DES FROM DUAL ) Table2 ON Table1.Code = Table2.Code GROUP BY Table2.Code, Table1.Id, Table2.DES ORDER BY Table2.Code, Table1.Id ASC ``` **Result:** ``` A 1 This is A 10 A 2 This is A 20 B 1 This is B 40 B 2 This is B 50 C 1 This is C 40 C 2 This is C 60 D 1 This is D 20 ``` **Required Result:** ``` A 1 This is A 10 A 2 This is A 20 B 1 This is B 40 B 2 This is B 50 C 1 This is C 40 C 2 This is C 60 D 1 This is D 20 D 2 This is D 0 <- This is the target ```
You have somehow to show the value pair `(D,2)` eg. by making a code list with possible values and translating `NULL` to `0`: ``` SELECT code.code, code.id, des.des, NVL (SUM (val.value), 0) sum_val FROM (SELECT 'A' code, 1 id FROM DUAL UNION SELECT 'A', 2 FROM DUAL UNION SELECT 'B', 1 FROM DUAL UNION SELECT 'B', 2 FROM DUAL UNION SELECT 'C', 1 FROM DUAL UNION SELECT 'C', 2 FROM DUAL UNION SELECT 'D', 1 FROM DUAL UNION SELECT 'D', 2 FROM DUAL) code INNER JOIN (SELECT 'A' code, 'This is A' des FROM DUAL UNION SELECT 'B', 'This is B' FROM DUAL UNION SELECT 'C', 'This is C' FROM DUAL UNION SELECT 'D', 'This is D' FROM DUAL) des ON code.code = des.code LEFT OUTER JOIN (SELECT 'A' code, 1 id, 10 VALUE FROM DUAL UNION ALL SELECT 'A', 2, 20 FROM DUAL UNION ALL SELECT 'B', 1, 10 FROM DUAL UNION ALL SELECT 'B', 1, 30 FROM DUAL UNION ALL SELECT 'B', 2, 50 FROM DUAL UNION ALL SELECT 'C', 1, 40 FROM DUAL UNION ALL SELECT 'C', 2, 60 FROM DUAL UNION ALL SELECT 'D', 1, 20 FROM DUAL) val ON code.code = val.code AND code.id = val.id GROUP BY code.code, code.id, des.des ORDER BY code, id ``` `UNION ALL` is used in `val` because duplicates can occur. No need for `FULL OUTER JOIN`.
If you want all the combinations of id and value, then use `cross join` to get the rows and a `left join` to bring in the rest of the values: ``` select t2.code, i.value, t2.desc, coalesce(cnt, 0) as cnt from (select distinct id from table1) i cross join table2 t2 left join (select id, value, count(*) as cnt from table1 group by id, value ) iv on iv.id = i.id and iv.code = t2.code ``` This should be much simpler than listing out all the combinations manually.
Full Join on Group
[ "", "sql", "oracle", "oracle10g", "full-outer-join", "" ]
This is how my articles table looks (tbl_articles): ``` ID | SHORT_DESCRIPTION | DESCRIPTION | GROSS | NET 1 | v00556 | valve washroom | 9.00 | 7.49 ``` etc. My supplier provided me a new price list, in this format (tbl_supplier): ``` SHORT_DESCRIPTION | DESCRIPTION | GROSS | NET v0056 | valve washroom | 9.50 | 7.99 ``` How can I update my price list with his prices? We have the short description column in common, but he also has new articles. Both lists contain over 10,000 articles, and exporting to Excel + vertical search does not work. I tried this, but without success: ``` UPDATE tbl_articles SET Gross = ( SELECT Gross FROM tbl_supplier WHERE tbl_articles.SHORT_DESCRIPTION = tbl_supplier.SHORT_DESCRIPTION ) ``` Shortcomings: * New products are not added to my table * Cannot update 2 fields
Create a unique index on `short_description`: ``` create unique index idx_articles_shortdesc on articles(short_description); ``` Then use `insert . . . on duplicate key update`: ``` insert into tbl_articles(SHORT_DESCRIPTION, DESCRIPTION, GROSS, NET) select s.SHORT_DESCRIPTION, s.DESCRIPTION, s.GROSS, s.NET from tbl_supplier s on duplicate key update gross = values(gross), net = values(net); ``` You don't specify that you want to update the description, so that is not included. As a note. You might want to investigate slowly changing dimension tables. I think a better structure would have an effective date and end date for each pricing. This allows you to keep a history of the price changes. Also, I would keep a date of when the record was created, so I know when products were introduced.
Assuming the `SHORT_DESCRIPTION` field is an actual primary/unique in the table, the easiest thing to do would be to have an `insert` statement with an `on duplicate key update` clause: ``` INSERT INTO tbl_articles (short_description, description, gross, net) -- Assumes ID is autogenerated SELECT short_description, description, gross, net FROM tbl_supplier ON DUPLICATE KEY UPDATE gross = VALUES(gross), net = VALUES(net) ```
Update prices in mysql
[ "", "mysql", "sql", "sql-update", "sql-insert", "" ]
I am running a very basic select against an oracle database (not sure of the version). ``` SELECT * FROM ACCOUNTING WHERE ID = 123456 ORDER BY DATE ``` I want to return only the most recent record. So I have tried ... ``` SELECT ROWNUM, * FROM ACCOUNTING WHERE ID = 123456 ORDER BY DATE SELECT * FROM ACCOUNTING WHERE ID = 123456 AND ROWNUM < 2 ORDER BY DATE ``` I get the same result every time ... ``` Error Source: System.Data.OracleClient Error Message: ORA-00904: "ROWNUM" : invalid identifier ``` Everything i see and read suggests that this should work. Can anyone see what I am missing? Could this be an issue with the driver? I am using the following package ... (Oracle ODAC 11.20.30 x64) **UPDATE** Thank you for all your replies ... I apologize for any confusion I created in my efforts to simplify the scenario. The ODAC driver is actually breaking the query out and formatting it for me, so what I originally posted is not ***exactly*** the query being run ... here's, specifically what the driver is spitting out that is generating the error ... ``` SELECT "ROWNUM", ID, SITE_ID, READING_DATE, SUBMITTED_DATE, DEPOSIT_DATE FROM ACCOUNTING WHERE (SITE_ID = 33730) ORDER BY READING_DATE ``` And for my second attempt ... ``` SELECT ID, SITE_ID, READING_DATE, SUBMITTED_DATE, DEPOSIT_DATE FROM ACCOUNTING WHERE (SITE_ID = 33730) AND ("ROWNUM" < 2) ORDER BY READING_DATE ```
Your **actual** query might be using `ROWNUM` within double quotes. Otherwise, this error is not possible. (Though your first query as posted would raise `ORA-00936: missing expression`.) ``` select * from dual WHERE "ROWNUM" <= 3; Error report - SQL Error: ORA-00904: "ROWNUM": invalid identifier 00904. 00000 - "%s: invalid identifier" *Cause: *Action: ``` `ROWNUM` is a pseudo-column; it behaves like a function without parameters. Writing `"ROWNUM"` makes Oracle search for a real column of that name in your table. Quoting an identifier that happens to be an Oracle reserved keyword strips its special meaning, so it behaves as a user-defined column. I'm unsure how to stop the query builder from interpreting it this way; I would consider this a **BUG**.
Can try this approach: ``` SELECT * FROM (SELECT ROWNUM R, * FROM ACCOUNTING WHERE ID = 123456 ORDER BY DATE ) WHERE R < 2; ```
ROWNUM returns as "invalid identifier"
[ "sql", "oracle", "odac" ]
find below my HQL query ``` Query query = session.createQuery("select u from UserLog u where u.userLogSerialno = " + "(select max(uu.userLogSerialno) from UserLog uu where uu.userId = u.userId)"); ``` This query is working fine but in this, I want to pass the value of userId but I am not able to figure out how to do this. Kindly Help..!! Thanks in Advance..!!
It is very simple to add a parameter to an HQL query: ``` Query query = session.createQuery("select u from UserLog u where u.userLogSerialno = " + "(select max(uu.userLogSerialno) from UserLog uu where uu.userId = :userId)").setParameter("userId", 15); ``` Here I have hard-coded 15; you can simply use a variable instead.
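To illustrate the same principle outside Hibernate: every database API supports bound parameters, and they should always be preferred over concatenating values into the query string. A minimal runnable sketch using Python's built-in `sqlite3` module (the table and data here are hypothetical stand-ins for `UserLog`):

```python
import sqlite3

# Hypothetical stand-in for the UserLog entity, just to show bound parameters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_log (user_log_serialno INTEGER, user_id INTEGER)")
conn.executemany("INSERT INTO user_log VALUES (?, ?)",
                 [(1, 15), (2, 15), (3, 7)])

# The value is bound as a named parameter (like HQL's :userId),
# never spliced into the SQL text.
row = conn.execute(
    "SELECT MAX(user_log_serialno) FROM user_log WHERE user_id = :userId",
    {"userId": 15},
).fetchone()
print(row[0])  # 2, the latest serial number for user 15
```

Besides being simpler to read, binding also protects against SQL injection, which is the main reason `setParameter` exists in the first place.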
Simple example: ``` Integer id = 1; Query query = session.createQuery("from Employee e where e.idEmployee=:id"); query.setParameter("id", id); ```
How to pass parameter in HQL query
[ "sql", "hibernate", "hql" ]
In sql (MS sql server specifically) is it possible to combine multiple rows into a single string as an expression which is itself part of an update that is being applied to multiple rows. I have come across the approaches of using COALESCE or FOR XML PATH (e.g. [How to get multiple rows into one line as a string?](https://stackoverflow.com/questions/7958816/how-to-get-multiple-rows-into-one-line-as-a-string) ) but can't get them to work in my more complex case with the extra dimension of 'listiness'. My problem boils down to, in words: A Project has some Launches. A Launch has a LaunchType and a date. I have a big output table of projects ProjectOutput and I want to update a column in it with a CSV string of all the launch type names for that project that happen in the same month as the first (chronologically) launch of that project. In sql: ``` UPDATE ProjectOutput SET LaunchNamesColumn = <INSERT MAGICAL SQL STRING CONCATTING ACROSS ROWS FUNCTION HERE> of Launch.name FROM ProjectOuput INNER JOIN Launch ON Launch.projectId = ProjectOutput.projectId INNER JOIN LaunchType AS lt ON LaunchType.launchTypeId = Launch.launchTypeId OUTER APPLY ( SELECT TOP 1 Launch.month, Launch.year FROM Launch INNER JOIN Project ON Project.projectId = Launch.projectId WHERE Project.projectId = ProjectOutput.projectId --In reality there's loads more JOINS and WHERE conditions here ORDER BY Launch.date ) firstLaunch WHERE Launch.month = firstLaunch.month AND Launch.year = firstLaunch.year ``` If there were only 1 Launch per Project then the stuff would not be needed and just ``` SET LaunchNameColumn = Launch.name ``` However as there can be several Launches per Project some operation is needed to join them. I tried: ``` SET LaunchNamesColumn = STUFF((SELECT ', ' + lt.name FROM lt FOR XML PATH('')), 1, 2, '') ``` However that doesn't work (error, invalid name) because it doesn't know what the alias lt is inside that SELECT. 
If you just say LaunchType or dbo.LaunchType then the query runs but then you are just looping over all the possible launch types rather than only those returned by the big query below. What I really want is for that FROM in the SELECT FOR XML PATH is to be the result set of the giant query below (whereas in all the examples I've found so far it's just a simple table), but copying and pasting that in seems so wrong. Maybe there is some mental block or sql feature I'm unaware of that would make this work, or is it not possible?
The problem you have is that in the SET stage of your query you only have access to one of the matching Launches as there is no grouping applied. You can achieve what you want by moving your Launch lookup into a sub-query over the ProjectOutput rows. A simplified example: ``` UPDATE ProjectOutput SET LaunchNamesColumn = STUFF(( SELECT ', ' + Launch.name FROM Launch -- OUTER APPLY is not required within the sub-query. INNER JOIN ( SELECT TOP 1 Launch.month, Launch.year FROM Launch -- Filter results to specific project. WHERE Launch.projectId = ProjectOutput.projectId ORDER BY Launch.date ) firstLaunch ON Launch.month = firstLaunch.month AND Launch.year = firstLaunch.year -- Filter results to specific project. WHERE Launch.projectId = ProjectOutput.projectId FOR XML PATH('') ), 1, 2, '') FROM ProjectOutput ``` Logically the sub-query is run once per ProjectOutput record, allowing you to filter and group by each ProjectId. Also, a nice bit of syntax that may simplify your query is [SELECT TOP WITH TIES](https://msdn.microsoft.com/en-us/library/ms189463.aspx): ``` UPDATE ProjectOutput SET LaunchNamesColumn = STUFF(( SELECT TOP (1) WITH TIES ', ' + Launch.name FROM Launch WHERE Launch.projectId = ProjectOutput.projectId ORDER BY Launch.Year, Launch.Month FOR XML PATH('') ), 1, 2, '') FROM ProjectOutput ``` This will return all the matching Launches that have the lowest Year then Month value.
It's a little bit difficult to understand your SQL without description of the tables, but what you should do is have the query with the XML path so that it returns only those items that you want to be concatenated for that single row, so my guess is that you want actually something like this: ``` UPDATE O SET LaunchNamesColumn = STUFF((SELECT ', ' + lt.Name From Launch L INNER JOIN Launch L ON L.projectId = O.projectId INNER JOIN LaunchType AS lt ON lt.launchTypeId = L.launchTypeId WHERE L.month = FL.month AND L.year = FL.year FOR XML PATH('')), 1, 2, '') FROM ProjectOutput O CROSS APPLY ( SELECT TOP 1 L2.month, L2.year FROM Launch L2 WHERE L2.projectId = O.projectId -- Removed the other tables from here. Are they really needed? ORDER BY L2.date ) FL ``` Couldn't really test this, but hopefully this helps.
Sql: How to combine multiple rows into a string as an expression within an update
[ "sql", "sql-server" ]
I'm a beginner to SQL so this is quite possibly a very simple question. I have a table with postage service information like so: ``` ID Service Max_Weight Cost ---------------------------------------- 1 SecondClassStandard 0.10 0.95 2 SecondClassStandard 0.25 1.19 3 SecondClassStandard 0.50 1.51 4 SecondClassStandard 0.75 2.05 5 SecondClassStandard 1.00 2.80 ``` How can I perform the following query (as an example): `SELECT * FROM table WHERE Service = 'SecondClassStandard' AND Max_Weight >= 0.075;` and from the result, get one service that will satisfy the weight requirement. In other words, select the one result that is suitable - for example, from the query above, it should only return: ``` 1|SecondClassStandard|0.10|0.95 ``` However, if I was to do: `SELECT * FROM table WHERE Service = 'SecondClassStandard' AND Max_Weight >= 0.105;` It should return: ``` 2|SecondClassStandard|0.25|1.19 ```
I think you meant to use [`LIMIT`](https://www.sqlite.org/lang_select.html) clause like ``` SELECT * FROM table WHERE Service = 'SecondClassStandard' AND Max_Weight >= 0.075 LIMIT 1; ```
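One caveat worth noting: `LIMIT 1` without an `ORDER BY` returns an arbitrary matching row. Adding `ORDER BY Max_Weight` guarantees the lightest suitable service is picked regardless of insertion order. A runnable sketch with Python's `sqlite3` and the sample data from the question (column names lower-cased for convenience):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE postage (id INTEGER, service TEXT, max_weight REAL, cost REAL)")
conn.executemany("INSERT INTO postage VALUES (?, ?, ?, ?)", [
    (1, "SecondClassStandard", 0.10, 0.95),
    (2, "SecondClassStandard", 0.25, 1.19),
    (3, "SecondClassStandard", 0.50, 1.51),
    (4, "SecondClassStandard", 0.75, 2.05),
    (5, "SecondClassStandard", 1.00, 2.80),
])

def suitable_service(weight):
    # ORDER BY max_weight makes LIMIT 1 deterministic: the lightest
    # service that can still carry the parcel.
    return conn.execute(
        "SELECT id, service, max_weight, cost FROM postage "
        "WHERE service = ? AND max_weight >= ? "
        "ORDER BY max_weight LIMIT 1",
        ("SecondClassStandard", weight),
    ).fetchone()

print(suitable_service(0.075)[0])  # 1
print(suitable_service(0.105)[0])  # 2
```

If no service can carry the parcel (e.g. a 2 kg weight here), `fetchone()` returns `None`, which the caller should handle.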
You need to use `LIMIT` to just get the first row ``` SELECT * FROM table WHERE Service = 'SecondClassStandard' AND Max_Weight >= 0.075 LIMIT 1; SELECT * FROM table WHERE Service = 'SecondClassStandard' AND Max_Weight >= 0.105 LIMIT 1; ```
SQL Selecting the minimum value after a WHERE clause
[ "sql", "sqlite" ]
I need to compare the value in a single column in a single table. Here is a sample table: ``` ID Cat Color ====================== 1 red maroon 2 red orange 3 red pink 4 blue violet 5 blue purple 6 blue indigo 7 green puke green 8 green hunter green ``` I am given 2 colors from the Color column. I need to know if they belong to the same Cat column. For example, I will be given maroon and orange. I need the value red returned. Violet and purple should return blue. Puke green and violet should return null. So far I have the following SQL but it's not exactly what I am looking for, especially with the Limit 1. I am looking for a single query to return Cat field without using Limit 1. ``` SELECT Cat From foo WHERE Color = 'maroon' and Color = 'orange' LIMIT 1 ```
In addition to [Beginner's answer](https://stackoverflow.com/a/30716806/477563), it's possible to solve this problem without a `GROUP_CONCAT`: ``` SELECT cat FROM foo WHERE color IN ('maroon', 'orange') GROUP BY cat HAVING COUNT(*) = 2 ; ``` This works by selecting all cats with the specified colors. When we group them, the cats that appear multiple times (the `HAVING` clause) are the records you want to keep. Note: the number using the `HAVING` clause should match the number of colors you're searching for.
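A runnable sketch of this `GROUP BY`/`HAVING` approach, using Python's `sqlite3` and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER, cat TEXT, color TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?, ?)", [
    (1, "red", "maroon"), (2, "red", "orange"), (3, "red", "pink"),
    (4, "blue", "violet"), (5, "blue", "purple"), (6, "blue", "indigo"),
    (7, "green", "puke green"), (8, "green", "hunter green"),
])

def common_cat(c1, c2):
    # A cat qualifies only if it matched both colors, i.e. appears twice
    # in the filtered set.
    row = conn.execute(
        "SELECT cat FROM foo WHERE color IN (?, ?) "
        "GROUP BY cat HAVING COUNT(*) = 2",
        (c1, c2),
    ).fetchone()
    return row[0] if row else None

print(common_cat("maroon", "orange"))      # red
print(common_cat("violet", "purple"))      # blue
print(common_cat("puke green", "violet"))  # None
```

As the answer notes, the `HAVING COUNT(*) = n` threshold must match the number of distinct colors being searched for.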
You can try this: ``` SELECT x.cat FROM ( SELECT cat, GROUP_CONCAT(color) AS colors FROM tablename GROUP BY cat) AS x WHERE FIND_IN_SET('maroon', x.colors) > 0 AND FIND_IN_SET('orange', x.colors) > 0 ``` > **Edit 1:** Another Alternative ``` SELECT IF( FIND_IN_SET('maroon', GROUP_CONCAT(color)) > 0 AND FIND_IN_SET('orange', GROUP_CONCAT(color)) > 0 , cat, NULL ) AS cat FROM tablename GROUP BY cat ```
Compare Two Values in Same Column
[ "mysql", "sql" ]
I am using SQL Server 2008 R2 and I have two databases: one has 11,000 records and the other just 3,000. When I run this query ``` SELECT Right(rtrim(tbltransac.No_Faktur),6) as NoUrut, tbltransac.No_Faktur, tbltransac.No_FakturP, tbltransac.Kd_Plg, Tblcust.Nm_Plg, GRANDTOTAL AS Total_Faktur, tbltransac.Nm_Pajak, tbltransac.Tgl_Faktur, tbltransac.Tgl_FakturP, tbltransac.Total_Distribusi FROM Tblcust INNER JOIN ViewGrandtotal AS tbltransac ON Tblcust.Kd_Plg = tbltransac.Kd_Plg WHERE tbltransac.Kd_Trn = 'J' and year(tbltransac.tgl_faktur)=2015 And ISNULL(tbltransac.No_OPJ,'') <> 'SHOP' Order by Right(rtrim(tbltransac.No_Faktur),6) Desc ``` it takes 1 minute 30 sec on the server that has 3,000 records (querying from SQL Server Management Studio), but only 3 sec on my other server, which has 11,000 records. What's wrong with my database? I've already tried backing up my 3,000-record database and restoring it on the 11,000-record server; it's faster there, taking 30 sec per query, but that's still annoying compared to my 11,000-record server. They have the same spec. How did this happen? What should I check? I checked Event Viewer, Resource Monitor and the SQL Server management log, and couldn't find any error or blocked connection. There is no wrong routing either. Please help. It started happening a week ago; before this it was fine, and I haven't touched the server in more than a month.
As already mentioned before, you have three issues in your query. Just as an example, change the query to this one: ``` SELECT Right(rtrim(tbltransac.No_Faktur),6) as NoUrut, tbltransac.No_Faktur, tbltransac.No_FakturP, tbltransac.Kd_Plg, Tblcust.Nm_Plg, GRANDTOTAL AS Total_Faktur, tbltransac.Nm_Pajak, tbltransac.Tgl_Faktur, tbltransac.Tgl_FakturP, tbltransac.Total_Distribusi FROM Tblcust INNER JOIN ViewGrandtotal AS tbltransac ON Tblcust.Kd_Plg = tbltransac.Kd_Plg WHERE tbltransac.Kd_Trn = 'J' and tbltransac.tgl_faktur BETWEEN '20150101' AND '20151231' And tbltransac.No_OPJ <> 'SHOP' Order by NoUrut Desc --Only if you need a sorted output in the datalayer ``` Another idea, if your ViewGrandtotal is quite large, could be to pre-filter this table before you join it. Sometimes SQL Server doesn't get a good plan and needs a nudge in the right direction. Maybe this one: ``` SELECT Right(rtrim(vgt.No_Faktur),6) as NoUrut, vgt.No_Faktur, vgt.No_FakturP, vgt.Kd_Plg, tc.Nm_Plg, vgt.Total_Faktur, vgt.Nm_Pajak, vgt.Tgl_Faktur, vgt.Tgl_FakturP, vgt.Total_Distribusi FROM (SELECT Kd_Plg, Nm_Plg FROM Tblcust GROUP BY Kd_Plg, Nm_Plg) as tc -- Pre-filter on just the needed columns, made distinct. INNER JOIN ( -- Pre-filter ViewGrandtotal SELECT DISTINCT vgt.No_Faktur, vgt.No_FakturP, vgt.Kd_Plg, vgt.GRANDTOTAL AS Total_Faktur, vgt.Nm_Pajak, vgt.Tgl_Faktur, vgt.Tgl_FakturP, vgt.Total_Distribusi FROM ViewGrandtotal AS vgt WHERE vgt.Kd_Trn = 'J' and vgt.tgl_faktur BETWEEN '20150101' AND '20151231' And vgt.No_OPJ <> 'SHOP' ) as vgt ON tc.Kd_Plg = vgt.Kd_Plg Order by NoUrut Desc --Only if you need a sorted output in the datalayer ``` The pre-filtering could help the optimizer generate a better plan. Another issue could be just the multi-threading. Maybe your query gets a parallel plan as it reaches the cost threshold because of the 11,000 rows. The other query just gets a serial plan due to its lower row count.
You can take a look at the generated plans by including the actual execution plan inside your SSMS Query. Maybe you can compare those plans to get a clue. If this doesn't help, you can post them here to get some feedback from me. I hope this helps. Not quite easy to give you good hints without knowing table structures, table sizes, performance counters, etc. :-) Best regards, Ionic
***Note:*** First of all, you should avoid any function in the *WHERE clause*, like this one: ``` year(tbltransac.tgl_faktur)=2015 ``` **[Here](https://sqlblog.org/2009/10/16/bad-habits-to-kick-mis-handling-date-range-queries)** ***Aaron Bertrand*** explains how to work with dates in the *WHERE clause*: "In order to make best possible use of indexes, and to avoid capturing too few or too many rows, the best possible way to achieve the above query is": ``` SELECT COUNT(*) FROM dbo.SomeLogTable WHERE DateColumn >= '20091011' AND DateColumn < '20091012'; ``` And I can't follow your logic in this piece of code, but it is a bad part of your query too: ``` ISNULL(tbltransac.No_OPJ,'') <> 'SHOP' ``` Actually `NULL <> 'SHOP'` here, so why are you replacing the NULL with `''`? Thanks and good luck
Why my sql query is so slow in one database?
[ "sql", "sql-server" ]
I have two tables like these. They represent a following/followers relationship between users. ``` users --------------------------------------- id | username | 1 | bobby | 2 | jessica | 3 | jonny | 4 | mike | follows ---------------------------------------- id | userid_1 | userid_2 | 1 | 1 | 2 | 2 | 3 | 4 | 3 | 4 | 1 | ``` With my user id set to 1, for example, I want a query that returns something like this ``` ----------------------------------------- username | userid_1 | userid_2 | jessica | 1 | 2 | mike | 4 | 1 | ``` Any help to solve this problem? :)
This is a bit more complicated than it first seems, because you want the name of the *other* user: ``` select u.username, f.* from follows f join users u on (u.id = f.userid_1 and f.userid_2 = 1) or (u.id = f.userid_2 and f.userid_1 = 1) where 1 in (f.userid_1, f.userid_2); ``` Actually, the `where` clause is not needed (it is covered by the `on`). I think it clarifies the filtering however.
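A runnable sketch of this join against the sample data, using Python's `sqlite3` (SQLite accepts the same `JOIN ... ON (...) OR (...)` syntax as MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER, username TEXT);
CREATE TABLE follows (id INTEGER, userid_1 INTEGER, userid_2 INTEGER);
INSERT INTO users VALUES (1,'bobby'),(2,'jessica'),(3,'jonny'),(4,'mike');
INSERT INTO follows VALUES (1,1,2),(2,3,4),(3,4,1);
""")

me = 1
# For every follow row involving user 1, join to the *other* user's name.
rows = sorted(conn.execute("""
    SELECT u.username, f.userid_1, f.userid_2
    FROM follows f
    JOIN users u
      ON (u.id = f.userid_1 AND f.userid_2 = :me)
      OR (u.id = f.userid_2 AND f.userid_1 = :me)
""", {"me": me}).fetchall())
print(rows)  # [('jessica', 1, 2), ('mike', 4, 1)]
```

The two `ON` branches are what pick the name of the other party: when user 1 appears in `userid_2`, the name comes from `userid_1`, and vice versa.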
You can **[JOIN](https://dev.mysql.com/doc/refman/5.0/en/join.html)** tables in following: ``` SELECT u.username, f.userid_1, f.userid_2 FROM users u JOIN follows f ON u.Id = f.Id ```
SQL Query Multiple Conditions
[ "mysql", "sql" ]
So as the title says, I'm trying to run a query in my ASP Classic page, but for some reason it doesn't return a record set, while it does return one if the query is copied directly into Access. There is one thing where it probably goes wrong, namely: * The query uses a `LIKE` operator on a `NUMERIC` value. Obviously the `LIKE` operator can only be used on `STRING` values, so I tried to cast the `numeric` value to a `string` using `CStr`, but this had no effect. Then I tried to just hard-code a value in my query, and in Access this **does** seem to work (even though I am using the `LIKE` operator on a `string` to find `numeric` values - it works in Access). My code is as follows: ``` Set keywords_cmd = Server.CreateObject ("ADODB.Command") Set keywords_cmd.ActiveConnection = con sql = "SELECT Description, MyNumber FROM Orders where MyNumber LIKE '*23*' " keywords_cmd.CommandText = sql Set keywords = keywords_cmd.Execute(sql) if keywords.EOF then response.write("EOF???") end if Do While Not keywords.EOF response.write("A record") %><br> <% keywords.movenext Loop ``` When pasting the SQL command directly into Access it generates 6 records containing the number 23. Doing the exact same thing in the ASP file, however, generates 0 records (EOF = true). I also checked my connection to the database by adjusting the SQL command to: ``` sql = "SELECT Description, MyNumber FROM Orders where MyNumber = 1506 " ``` This generates records in the ASP file, so the connection works. So the question is: why are no records generated from ASP while the exact same query works in Access directly? Just a reminder: the column `MyNumber` is of type `NUMERIC` in the database. Some additional information: the Access database is an `mdb` file (older Access); perhaps this also has something to do with it?
Figured it out myself. Seems that I only had to replace the `*` wildcards with the `%` wildcard. This did the trick..
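To see the difference between the two wildcard dialects side by side, here is a small sketch using Python's `sqlite3` (which, like the ODAC provider, uses the ANSI `%` wildcard; `my_number` is stored as text here for simplicity, and the rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (description TEXT, my_number TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("first", "1506"), ("second", "123"),
                  ("third", "4231"), ("fourth", "999")])

# '%' is the ANSI any-characters wildcard; '*' is only special in the
# Access query UI, so here it is taken literally and matches nothing.
ansi = conn.execute(
    "SELECT my_number FROM orders WHERE my_number LIKE '%23%'").fetchall()
access_style = conn.execute(
    "SELECT my_number FROM orders WHERE my_number LIKE '*23*'").fetchall()
print(sorted(ansi))   # [('123',), ('4231',)]
print(access_style)   # [] -- '*' matched literally, no hits
```

The same mismatch explains the original question: Access's own query window understands `*`, but queries sent through OLE DB/ODBC use the ANSI `%` and `_` wildcards.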
use `%` in "LIKE" comparisons for wildcards: ``` "SELECT Description, MyNumber FROM Orders where MyNumber LIKE '%23%'" ```
LIKE operator in SQL for my Access Database returns no values in ASP CLASSIC while it does if the query gets copied directly in Access
[ "sql", "ms-access", "asp-classic", "sql-like" ]
I have an old dynamic SQL query as below where the conditions in the where clause is appended dynamically based on the search text. > **Example 1** : Search string **'AMX AC-DIN-CS3 Bracket'** ``` SELECT * FROM Tx_Product Where Fk_CompanyId=1 and (ModelNumber like '%AMX%' or Manufacturer like '%AMX%' or Category like '%AMX%' or [Description] like '%AMX%') and (ModelNumber like '%AC-DIN-CS3%' or Manufacturer like '%AC-DIN-CS3%' or Category like '%AC-DIN-CS3%' or [Description] like '%AC-DIN-CS3%') and (ModelNumber like '%Bracket%' or Manufacturer like '%Bracket%' or Category like '%Bracket%' or [Description] like '%Bracket%') ``` Here there are 3 And Clauses as there are 3 parts in the search string (separated by space(AMX,AC-DIN-CSS3 and Bracket). > **Example 2** : Search string **'AMX AC-DIN-CS3'** ``` SELECT * FROM Tx_Product Where Fk_CompanyId=1 and (ModelNumber like '%AMX%' or Manufacturer like '%AMX%' or Category like '%AMX%' or [Description] like '%AMX%') and (ModelNumber like '%AC-DIN-CS3%' or Manufacturer like '%AC-DIN-CS3%' or Category like '%AC-DIN-CS3%' or [Description] like '%AC-DIN-CS3%') ``` Here there are 2 And Clauses as there are 2 parts in the search string (AMX, AC-DIN-CS3). I want to get rid of the **Dynamic sql** and create a single query that will get me the same result as above based on the search string but i am getting different result. > Equivalent of **Example 1** But With extra results ``` DECLARE @SearchString NVARCHAR(MAX) = 'AMX AC-DIN-CS3 Bracket' SELECT DISTINCT Prod.* FROM Tx_Product Prod CROSS APPLY (SELECT DISTINCT part FROM [dbo].[SplitString] (@SearchString,'')) AS SearchParts Where Fk_CompanyId = 1 AND (ModelNumber LIKE SearchParts.part OR Prod.Manufacturer LIKE SearchParts.part OR Prod.Category LIKE SearchParts.part OR Prod.[Description] LIKE SearchParts.part) ``` Here i get the extra records like ( all the Products with manufacturer **'AMX'** and all the products with ModelNumber **'AC-DIN-CS3'**). 
**Question:** I want help in building a single query (an equivalent of Example 1 that will give me the same result as Example 1) based on the search string, so that this one equivalent query, with tweaks, satisfies both examples. I have updated the schema in [SQL Fiddle](http://sqlfiddle.com/#!6/4e0f0/14/0) as well. Thanks in advance for the help. --- **Note:** `[dbo].[SplitString]` returns the distinct parts of the string as a table. Here for `'AMX-AC-DIN-CS3'` it will return ``` 'AMX' 'AC-DIN-CS3' ```
I prefer to use CTEs to make the query text more readable. I assume that the table `Tx_Product` has a primary key `ID`. At first we convert `@SearchString` to a table. Then count number of rows in it to get the total number of keywords. You want to find those products that have all the given keywords. For each keyword we use `CROSS APPLY` to get a list of product IDs that have this keyword. Then we group by ID to count the number of matches and keep only those IDs that have the number of matches the same as total number of keywords. Finally, we fetch all product details from the table using the found IDs. Here is an [SQL Fiddle](http://sqlfiddle.com/#!6/3af58/5/0) based on yours. ``` WITH CTE_Parts AS ( SELECT DISTINCT '%' + part + '%' AS Part FROM [dbo].[SplitString](@SearchString,'') ) ,CTE_PartCount AS ( SELECT COUNT(*) AS PartCount FROM CTE_Parts ) ,CTE_ProductIDs AS ( SELECT ID FROM CTE_Parts CROSS APPLY ( SELECT ID FROM Tx_Product WHERE Fk_CompanyId = 1 AND (ModelNumber LIKE CTE_Parts.Part OR Manufacturer LIKE CTE_Parts.Part OR Category LIKE CTE_Parts.Part OR [Description] LIKE CTE_Parts.Part) ) AS CA GROUP BY ID HAVING COUNT(*) = (SELECT PartCount FROM CTE_PartCount) ) SELECT Tx_Product.* FROM CTE_ProductIDs INNER JOIN Tx_Product ON Tx_Product.ID = CTE_ProductIDs.ID ```
The problem is the partial matches. Try the query below. The match for each keyword get its number and if product max number is equal to total count of keywords, then the match is full and product should be selected ``` select * from ( SELECT DISTINCT Prod.* ,row_number() over (partition by Pk_ProductID order by SearchParts.part) as num FROM Tx_Product Prod CROSS APPLY (SELECT DISTINCT '%'+part+'%' as part FROM [dbo].[SplitString] (@SearchString,'')) AS SearchParts Where Fk_CompanyId = 1 AND (ModelNumber LIKE SearchParts.part OR Prod.Manufacturer LIKE SearchParts.part OR Prod.Category LIKE SearchParts.part OR Prod.[Description] LIKE SearchParts.part) ) T -- the number of last line for product -- is equal to total count of keywords where T.num = (SELECT count(DISTINCT part) FROM [dbo].[SplitString] (@SearchString,'')) ```
SQL Search Query with Dynamic search condition
[ "sql", "sql-server", "sql-server-2012" ]
I have this query ``` SELECT poll_numbers.number_title as totale_stemmen, count(poll_stemmen.number_id) as stem FROM poll_stemmen LEFT JOIN poll_numbers on number_id = poll_numbers.id group by poll_stemmen.number_id ``` And the output is like this: ``` totale_stemmen stem name 1 another 1 ``` But how can I count the total number of stem values across all rows? In this result it would be 2. Here is the table layout: ``` poll_stemmen number_id, id poll_numbers id, number_title ``` sample data: ``` id, number_id 1 2 2 8 3 8 4 8 id, number_title 2 title_1 8 title_2 expected output: number_title, count_number, total_number title_1 1 4 title_2 3 4 ```
You can use your query to get count per `number_id` and use a sub-query to get overall count: ``` SELECT number_title, COUNT(*) AS count_number, (SELECT COUNT(*) FROM poll_stemmen WHERE number_id IN (SELECT id FROM poll_numbers)) AS total_number FROM poll_numbers AS pn LEFT JOIN poll_stemmen AS ps ON ps.number_id = pn.id GROUP BY ps.number_id ``` [**Demo here**](http://sqlfiddle.com/#!9/62b2f6/1)
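A runnable sketch of the same idea, using Python's `sqlite3` and the question's sample data. For simplicity it uses a plain scalar subquery for the overall total (the `IN` filter is only needed if `poll_stemmen` can reference missing poll numbers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE poll_numbers (id INTEGER, number_title TEXT);
CREATE TABLE poll_stemmen (id INTEGER, number_id INTEGER);
INSERT INTO poll_numbers VALUES (2,'title_1'),(8,'title_2');
INSERT INTO poll_stemmen VALUES (1,2),(2,8),(3,8),(4,8);
""")

# Per-title vote count, plus a scalar subquery repeating the grand total.
rows = sorted(conn.execute("""
    SELECT pn.number_title,
           COUNT(ps.id) AS count_number,
           (SELECT COUNT(*) FROM poll_stemmen) AS total_number
    FROM poll_numbers pn
    LEFT JOIN poll_stemmen ps ON ps.number_id = pn.id
    GROUP BY pn.id, pn.number_title
""").fetchall())
print(rows)  # [('title_1', 1, 4), ('title_2', 3, 4)]
```

Counting `ps.id` rather than `*` matters with the `LEFT JOIN`: a title with no votes would correctly report 0 instead of 1.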
In most databases, you would just use window functions. But, MySQL doesn't support this. One method is to use a subquery, either in the `from` clause or `select`: ``` SELECT n.number_title, count(s.number_id) as stem, x.total_number FROM poll_stemmen s LEFT JOIN poll_numbers n on s.number_id = n.id CROSS JOIN (SELECT count(*) as total_number FROM poll_stemmen) x group by s.number_id; ``` One advantage of putting the query in the `from` clause is that you know it will be executed only once.
count all values and group by with count
[ "mysql", "sql" ]
I have a case where I insert multiple datasets into a temp table. At the end, I would like to display the total number of rows for these multiple datasets across all the rows of the temp table. For example: ``` cnt1 name age 300 peter 21 200 piper 22 ``` Desired result set: ``` cnt1 name age 500 peter 21 500 piper 22 ``` This is the outcome I am looking for at the end of a very long stored procedure. I am not able to figure out how to add up on a single column and display the sum across all the rows.
With window function: ``` select sum(cnt1) over() as cnt1, name, age from TableName ``` **EDIT:** ``` select (select sum(distinct cnt1) from TableName) as cnt1, name, age from TableName ```
Try this (you can use `union all` to append it to your earlier results if you want). ``` Select sum(Cnt1) over () as Cnt1, name, age from MyTable ``` My answer originally used `over (partition by 1)`, but I see that that is unnecessary.
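A runnable sketch of `SUM(...) OVER ()` with the question's sample data, using Python's `sqlite3` (window functions require SQLite 3.25 or newer, which recent Python builds bundle; the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (cnt1 INTEGER, name TEXT, age INTEGER)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?)",
                 [(300, "peter", 21), (200, "piper", 22)])

# SUM(...) OVER () computes one grand total and repeats it on every row,
# without collapsing the rows the way a plain GROUP BY would.
rows = conn.execute(
    "SELECT SUM(cnt1) OVER () AS cnt1, name, age FROM results ORDER BY name"
).fetchall()
print(rows)  # [(500, 'peter', 21), (500, 'piper', 22)]
```

The empty `OVER ()` means the window is the whole result set; adding a `PARTITION BY` inside it would instead produce per-group subtotals.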
Sum of 1 column to be displayed for all rows in a dataset
[ "sql", "sql-server-2008", "t-sql" ]
Does SQLite offer a way to search every column of a table for a search key? ``` SELECT * FROM table WHERE id LIKE ... ``` This selects all rows where `...` was found in the column `id`. But instead of searching only the column `id`, I want to search every column for the search string. I believe this does not work: ``` SELECT * FROM table WHERE * LIKE ... ``` Is that possible? Or what would be the next easiest way? I use Python 3 to query the SQLite database. Should I instead go the route of searching through the dictionary after the query was executed and the data returned?
A simple trick you can do is: ``` SELECT * FROM table WHERE ((col1+col2+col3+col4) LIKE '%something%') ``` This will select the record if any of these 4 columns contain the word "something".
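One caveat for the SQLite tag on this question: in SQLite, `+` is numeric addition, so the trick above needs the standard concatenation operator `||` instead. A runnable sketch with Python's `sqlite3` and made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT, col4 TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    ("alpha", "beta", "gamma", "delta"),
    ("one", "two", "three", "four"),
])

# '||' joins the columns into one string; LIKE then scans that string.
# (If any column could be NULL, wrap each in COALESCE(col, '') first.)
hits = conn.execute(
    "SELECT * FROM t WHERE (col1 || col2 || col3 || col4) LIKE '%two%'"
).fetchall()
print(hits)  # [('one', 'two', 'three', 'four')]
```

Note one subtle edge case of the concatenation trick in any dialect: a search string that spans the boundary between two columns (e.g. `'adelta'` above) would also match.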
I could not comment on [@raging-bull answer](https://stackoverflow.com/a/30732523). So I had to write a new one. My problem was, that I have columns with `null` values and got no results because the "search string" was `null`. [Using coalesce](https://www.sqlitetutorial.net/sqlite-functions/sqlite-coalesce/) I could solve that problem. Here sqlite chooses the column content, or if it is `null` an empty string (""). So there is an actual search string available. ``` SELECT * FROM table WHERE (coalesce(col1,"") || coalesce(col2,"") || coalesce(col3,"") || coalesce(col4,"")) LIKE '%something%') ```
SQLite WHERE-Clause for every column?
[ "sql", "sqlite" ]
I have a table ``` TABLE [dbo].[IssueStatus]( [Id] [int] PRIMARY KEY , [IssueId] [varchar(50)], [OldStatus] [int] , [NewStatus] [int] , [Updated] [datetime] ) ``` The 2 status columns can have 3 possible values: 1, 2 and 3 (1 for open, 2 for in progress and 3 for resolved). `Updated` contains the datetime when the status is changed. The status of an issue is initially set to 1 automatically. I want to calculate the total time (in seconds) when an issue was in progress. Note: the status may change from 1 to 3 directly. The status may change from 1 to 2 and then back to 1 and so on, but the final status is guaranteed to be 3. I already checked [Calculate time duration between differents records in a table based on datetimes](https://stackoverflow.com/questions/10980922/calculate-time-duration-between-differents-records-in-a-table-based-on-datetimes) but it didn't help me much. The original situation is much more complicated: 1 for open, 3 for in progress, 4 for reopen, 5 for resolved, 6 for closed. Thank You ``` 79890 26327 3 In Progress 5 Resolved 2014-12-17 09:10:03.767 74980 26328 3 In Progress 5 Resolved 2014-11-20 10:21:29.780 74748 26328 1 Open 3 In Progress 2014-11-20 02:34:15.440 77843 26329 1 Open 3 In Progress 2014-12-08 08:04:04.567 77857 26329 1 Open 5 Resolved 2014-12-08 08:23:57.720 77856 26329 3 In Progress 1 Open 2014-12-08 08:23:46.067 75107 26330 1 Open 5 Resolved 2014-11-21 06:37:28.810 76441 26330 5 Resolved 6 Closed 2014-12-02 07:27:39.927 78638 26331 1 Open 3 In Progress 2014-12-10 07:47:41.347 78091 26331 3 In Progress 1 Open 2014-12-09 02:44:36.970 77858 26331 1 Open 3 In Progress 2014-12-08 08:28:08.597 78641 26331 3 In Progress 1 Open 2014-12-10 07:57:03.603 78642 26331 1 Open 5 Resolved 2014-12-10 07:57:11.483 74753 26332 1 Open 3 In Progress 2014-11-20 02:59:11.013 74763 26332 3 In Progress 5 Resolved 2014-11-20 03:04:01.127 76846 26333 1 Open 5 Resolved 2014-12-05 00:57:09.140 76849 26340 1 Open 5 Resolved 2014-12-05 01:52:05.957 87861 26341 5 
Resolved 6 Closed 2015-02-02 04:18:25.230 85491 26341 1 Open 5 Resolved 2015-01-22 04:48:13.003 77321 26342 3 In Progress 1 Open 2014-12-08 00:56:26.233 75029 26342 1 Open 3 In Progress 2014-11-21 02:48:41.440 79030 26342 3 In Progress 5 Resolved 2014-12-11 21:43:23.657 76395 26342 1 Open 3 In Progress 2014-12-02 02:58:17.063 75197 26342 3 In Progress 1 Open 2014-11-24 02:06:38.490 78502 26342 1 Open 3 In Progress 2014-12-10 02:28:18.570 74933 26343 1 Open 5 Resolved 2014-11-20 08:08:44.423 74821 26344 1 Open 5 Resolved 2014-11-20 05:56:00.513 75295 26345 1 Open 5 Resolved 2014-11-25 02:06:07.260 ```
My query below shows for how long each entry was in status 2. using this sample data: ``` Id IssueId OldStatus NewStatus Updated 1 aa NULL 1 2015-01-01 14:00:00 2 aa 1 2 2015-01-01 16:00:00 4 aa 2 3 2015-01-01 17:30:00 5 bb NULL 1 2015-02-13 11:30:00 6 bb 1 2 2015-02-13 12:56:00 7 bb 2 3 2015-02-13 14:20:00 8 cc NULL 1 2015-02-14 11:30:00 9 cc 1 2 2015-02-14 12:56:00 10 cc 2 1 2015-02-14 13:19:00 11 cc 1 2 2015-02-14 14:20:00 12 cc 2 3 2015-02-14 14:25:00 ``` I can use this query: ``` ;with NewStatus2 as ( SELECT Id, IssueId, Updated, ROW_NUMBER() over (Order BY IssueId, id) PrevID FROM IssueStatus [is] WHERE NewStatus = 2 ) , OldStatus2 as ( SELECT Id, IssueId, Updated, ROW_NUMBER() over (Order BY IssueId, id) PrevID FROM IssueStatus [is] WHERE OldStatus = 2 ) SELECT ns.IssueId , ns.Updated FromTime , os.Updated ToTime , DATEDIFF(minute , ns.Updated , os.Updated) as TimeSpan_Spent_In_Status_2 FROM NewStatus2 ns INNER JOIN OldStatus2 os ON ns.IssueId = os.IssueId AND ns.PrevID = os.PrevID ORDER BY ns.Updated , os.Updated; ``` to get this result: ``` IssueId FromTime ToTime TimeSpan_Spent_In_Status_2 aa 2015-01-01 16:00:00 2015-01-01 17:30:00 90 bb 2015-02-13 12:56:00 2015-02-13 14:20:00 84 cc 2015-02-14 12:56:00 2015-02-14 13:19:00 23 cc 2015-02-14 14:20:00 2015-02-14 14:25:00 5 ``` **EDIT** Made query and sample data match structure provided by @vibhavSarraf
The sql window function can really come in handy in situations as this. Assuming you use sql server 2012 or later, you can use the [lead](https://msdn.microsoft.com/en-us/library/hh213125.aspx) function. For example: this lead usage adds the datetime of the 'next' record: ``` SELECT *, lead(updated,1,getdate()) over (partition by issueid order by updated) NextDate from issuestatus ``` "Next" in this case is of the same issueid (`partition by`) and in order of `updated` (with the default getdate() if there is no next record) Since you have the next date in row, duration till the next row is simply nextdate - updated. (you could do the same with `lag` to get the duration from oldstatus to newstatus, but chose lead here because it can use the default getdate() for items currently in progress) Lead can be used directly to calculate the duration: ``` SELECT IssueID, newstatus Status, datediff(minute,updated, lead(updated,1,getdate()) over (partition by issueid order by updated)) DurationInMinutes from issuestatus ``` From there getting the totals can be easily done with a normal sum, making the final result: ``` select sum(DurationInMinutes) TotalDuration from ( SELECT IssueID, newstatus Status, datediff(minute,updated, lead(updated,1,getdate()) over (partition by issueid order by updated)) DurationInMinutes from issuestatus ) d where Status = 2 --only duration of status progress ``` Note that the `where status=` cannot be added to the subquery, otherwise `lead` would look to the next record where the status is also 2. Of course, you can also do a `group by IssueID` to get the durations per issueid.
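A runnable sketch of the `lead` idea, using Python's `sqlite3` (SQLite has supported `lead` since 3.25; `strftime('%s', ...)` stands in for `datediff`, and the sample rows are issue `aa` from the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issuestatus "
             "(id INTEGER, issueid TEXT, oldstatus INTEGER, newstatus INTEGER, updated TEXT)")
conn.executemany("INSERT INTO issuestatus VALUES (?, ?, ?, ?, ?)", [
    (1, "aa", None, 1, "2015-01-01 14:00:00"),
    (2, "aa", 1, 2, "2015-01-01 16:00:00"),
    (3, "aa", 2, 3, "2015-01-01 17:30:00"),
])

# Pair each row with the next status change of the same issue; the gap is
# how long that row's NewStatus was in effect (NULL for the final status).
rows = conn.execute("""
    SELECT newstatus,
           (strftime('%s', lead(updated) OVER (PARTITION BY issueid ORDER BY updated))
            - strftime('%s', updated)) / 60 AS minutes_in_status
    FROM issuestatus
    ORDER BY updated
""").fetchall()
print(rows)  # [(1, 120), (2, 90), (3, None)]
```

Filtering `WHERE newstatus = 2` on an outer query (not inside the one with `lead`) and summing then gives the total time in progress, mirroring the answer's final query.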
how to calculate time duration within a table
[ "sql", "sql-server", "database" ]
I am new to SQL and I am not sure what to Google. I have three tables with different numbers of columns. I would like to combine the following three tables into a single column (no duplicates). **Table1** ``` Col1 Col2 Col3 1 a aa 2 b ab 3 c bb ``` **Table2** ``` Col1 Col2 123 Test 456 Test2 346 Test3 ``` **Table3** ``` Col1 Col2 Col3 Col4 5695 93234 ABC CDE 4534 92349 MSF KSK 3244 12323 SLE SNE ``` **Expected Output:** ``` FileOutput 1aaa 123Test 569593234ABCCDE 2bab 456Test2 453492349MSFKSK ... ``` Any help would be much appreciated. Thanks!
The terms you would want to Google are `UNION` and `CONCAT`. > Note: CONCAT is not supported in versions prior to SQL Server 2012. To get your expected output, I would do this: ``` select concat(cast(col1 as varchar(10)),col2,col3) as FileOutput from table1 UNION select concat(cast(col1 as varchar(10)),col2) as FileOutput from table2 UNION select concat(cast(col1 as varchar(10)),cast(col2 as varchar(10)),col3,col4) as FileOutput from table3 ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!3/6c209/3/0)
Not sure how you would parse the data, but you could do this: ``` select convert(varchar(100), col1) + convert(varchar(100), col2) + convert(varchar(100), col3) as fileOutput from table1 union all select convert(varchar(100), col1) + convert(varchar(100), col2) as fileOutput from table2 union all select convert(varchar(100), col1) + convert(varchar(100), col2) + convert(varchar(100), col3) + convert(varchar(100), col4) as fileOutput from table3 ``` Note: not knowing your column data types, your varchar(100) may need to expand, or could potentially shrink, depending on your data.
Combine columns from three different tables into a single column
[ "", "sql", "sql-server-2012", "" ]
I am trying to add another statement to the WHERE clause of this SQL query to throw out some entries I do not need. ``` SELECT DATEPART(Year, CreateTimestamp) Year , DATEPART(Month, CreateTimestamp) Month , COUNT(*) Accepted FROM Interactions t WHERE t.QueueName = @queuName AND (CAST(CreateTimestamp AS DATE) BETWEEN @firstMonth AND @lastMonth) AND TransferFrom != 'Yes' GROUP BY DATEPART(Year, CreateTimestamp), DATEPART(Month, CreateTimestamp) ``` I need to add the filter `DATEDIFF(SECOND, CreateTimestamp, AbandonTimestamp) > 10` , but only if `AbandonTimestamp` is not NULL. I know that there is a `ISNULL()` function, but I basicly need the exact opposite of that. Any help would be much appreciated.
If `AbandonTimestamp` is null, the date calculation result will be null, so just use `COALESCE()` to provide a passing value in this case: ``` WHERE COALESCE(DATEDIFF(SECOND, CreateTimestamp, AbandonTimestamp), 11) > 10 ``` To explain in more detail, the `COALESCE()` function returns the first term in its list of parameters that is not null. For example: * `coalesce(1, 2, 3)` -> 1 * `coalesce(null, 2, 3)` -> 2 * `coalesce(null, null)` -> null In the expression above, there are only two parameters: the `DATEDIFF()` result and the literal 11. When `AbandonTimestamp` is null, `DATEDIFF()` returns null, so `COALESCE()` falls back to 11, which passes the `> 10` test and keeps the row. See [live demo](http://sqlfiddle.com/#!6/9eecb7/334) of the whole expression.
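To watch the coalesce trick behave as described, here is a small illustration with Python's sqlite3 (SQLite has no `DATEDIFF`, so a `julianday` difference in seconds stands in, and the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Interactions (Id INT, CreateTimestamp TEXT, AbandonTimestamp TEXT)")
conn.executemany(
    "INSERT INTO Interactions VALUES (?, ?, ?)",
    [
        (1, "2015-06-09 10:00:00", "2015-06-09 10:00:05"),  # abandoned after 5s  -> filtered out
        (2, "2015-06-09 10:00:00", "2015-06-09 10:00:30"),  # abandoned after 30s -> kept
        (3, "2015-06-09 10:00:00", None),                   # never abandoned     -> kept via the 11
    ],
)

# A NULL AbandonTimestamp makes the difference NULL, so COALESCE supplies 11,
# which passes the > 10 filter and keeps the row.
ids = [r[0] for r in conn.execute("""
    SELECT Id FROM Interactions
    WHERE COALESCE(CAST((julianday(AbandonTimestamp) - julianday(CreateTimestamp)) * 86400
                   AS INT), 11) > 10
    ORDER BY Id
""")]
print(ids)  # [2, 3]
```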
You could use ISNULL and substitute CreateTimestamp when AbandonTimestamp is NULL. This makes the `DATEDIFF` comparison false for those rows, since the number of elapsed seconds would be 0. ``` SELECT DATEPART(Year, CreateTimestamp) Year , DATEPART(Month, CreateTimestamp) Month , COUNT(*) Accepted FROM Interactions t WHERE t.QueueName = @queuName AND (CAST(CreateTimestamp AS DATE) BETWEEN @firstMonth AND @lastMonth) AND TransferFrom != 'Yes' AND DATEDIFF(SECOND, CreateTimestamp, ISNULL(AbandonTimestamp, CreateTimestamp)) > 10 GROUP BY DATEPART(Year, CreateTimestamp), DATEPART(Month, CreateTimestamp) ```
Limiting SQL Search When a Value is NULL
[ "", "sql", "sql-server", "" ]
i have a User table which has many users but some users are having same first name and Last Name but only one user will have status active . So my requirement is if the user is unique i need the user regardless of Status but if the user is duplicate i need the record having status active. How can i achieve this in SQL server? Sorry For the confusion here is the example of User table ![enter image description here](https://i.stack.imgur.com/iHLLw.png) my result table should be ![enter image description here](https://i.stack.imgur.com/okaAa.png) Here Steve Jordan is having 2 records so i need the record having status 1 and for records having distinct First name and last name i need all the records regard less of status. Note : I have a user id as primary key but i am joining on first name and last name because other table doesn't have user id.
``` SELECT UserId, FirstName, LastName, Status FROM ( SELECT * , ROW_NUMBER() OVER (PARTITION BY FirstName, LastName ORDER BY Status DESC) AS rowNum FROM [User] ) u WHERE u.rowNum = 1 ``` This essentially groups by first and last name, orders by Status so that active rows have higher priority, and takes only one of each unique first/last name combination. This ensures that each unique first/last name combination appears in the result set only once, and if there are multiples, the active one is the one returned. If a name combination has multiples but none of them are active, then only one is returned, chosen arbitrarily.
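The same pick-one-per-group pattern can be checked quickly with SQLite (3.25+ for window functions) from Python; the user rows below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserId INT, FirstName TEXT, LastName TEXT, Status INT)")
conn.executemany(
    "INSERT INTO Users VALUES (?, ?, ?, ?)",
    [
        (1, "Steve", "Jordan", 0),  # duplicate, inactive
        (2, "Steve", "Jordan", 1),  # duplicate, active -> should win
        (3, "Amy",   "Smith",  0),  # unique, kept regardless of status
    ],
)

# Partition by name, order active-first, keep only the first row per name.
winners = [r[0] for r in conn.execute("""
    SELECT UserId FROM (
        SELECT UserId,
               ROW_NUMBER() OVER (PARTITION BY FirstName, LastName
                                  ORDER BY Status DESC) AS rowNum
        FROM Users
    ) WHERE rowNum = 1
    ORDER BY UserId
""")]
print(winners)  # [2, 3]
```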
I didn't quite get your question. But, as per your subject line, it seems you want the record that is `active` when the record is duplicated. ``` select T.* from yourTable T INNER JOIN (select user, count(*) cnt FROM yourTable GROUP BY user) A ON A.user=T.user WHERE A.cnt>1 and T.status='A'; ``` If that wasn't your requirement, I would ask you to share your table structure and expected output to understand better.
How to write a Sql query for retrieving one row from duplicate rows?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I need to split a string in a column into one character each into it's own column in SQL Server 2012. Example: if I have a column with `'ABCDE'`, I need to split it into `'A'`, `'B'`, `'C'`, `'D'`, `'E'`, with each of these into their own columns. The length of the column to be split may vary, so I need this to be as dynamic as possible. My question is different from the other post ([Can Mysql Split a column?](https://stackoverflow.com/questions/1096679/can-mysql-split-a-column)) since mine doesn't have any delimiters. Thanks
You can do this like this: ``` DECLARE @t TABLE(id int, n VARCHAR(50)) INSERT INTO @t VALUES (1, 'ABCDEF'), (2, 'EFGHIJKLMNOPQ') ;WITH cte AS (SELECT id, n, SUBSTRING(n, 1, 1) c, 1 AS ind FROM @t UNION ALL SELECT id, n, SUBSTRING(n, ind + 1, 1), ind + 1 FROM cte WHERE LEN(n) > ind ) SELECT * FROM cte PIVOT (MAX(c) FOR ind IN([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[12],[13],[14],[15])) p ``` Output: ``` id n 1 2 3 4 5 6 7 8 9 10 12 13 14 15 1 ABCDEF A B C D E F NULL NULL NULL NULL NULL NULL NULL NULL 2 EFGHIJKLMNOPQ E F G H I J K L M N P Q NULL NULL ``` Here is dynamic version: ``` DECLARE @l INT, @c VARCHAR(MAX) = '' SELECT @l = MAX(LEN(n)) FROM PivotTable WHILE @l > 0 BEGIN SET @c = ',[' + CAST(@l AS VARCHAR(MAX)) + ']' + @c SET @l = @l - 1 END SET @c = STUFF(@c, 1, 1,'') DECLARE @s NVARCHAR(MAX) = ' ;WITH cte AS (SELECT id, n, SUBSTRING(n, 1, 1) c, 1 AS ind FROM PivotTable UNION ALL SELECT id, n, SUBSTRING(n, ind + 1, 1), ind + 1 FROM cte WHERE LEN(n) > ind ) SELECT * FROM cte PIVOT (MAX(c) FOR ind IN(' + @c + ')) p' EXEC (@s) ```
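The recursive-CTE half of this answer (walking the string one character at a time) is portable enough to demo outside SQL Server; here is a sketch using Python's sqlite3 (SQLite has no PIVOT, so only the split-into-rows step is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Anchor row takes the first character; each recursion step takes the next
# one until ind reaches the string length.
chars = [r[0] for r in conn.execute("""
    WITH RECURSIVE cte(n, c, ind) AS (
        SELECT 'ABCDE', substr('ABCDE', 1, 1), 1
        UNION ALL
        SELECT n, substr(n, ind + 1, 1), ind + 1
        FROM cte
        WHERE ind < length(n)
    )
    SELECT c FROM cte ORDER BY ind
""")]
print(chars)  # ['A', 'B', 'C', 'D', 'E']
```

In the SQL Server answer, these per-character rows are then rotated into columns by the `PIVOT`.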
If you want a new column for every character you simply need: ``` SELECT [1] = SUBSTRING(Col, 1, 1), [2] = SUBSTRING(Col, 2, 1), [3] = SUBSTRING(Col, 3, 1), [4] = SUBSTRING(Col, 4, 1), [5] = SUBSTRING(Col, 5, 1), [6] = SUBSTRING(Col, 6, 1), [7] = SUBSTRING(Col, 7, 1), [8] = SUBSTRING(Col, 8, 1), [9] = SUBSTRING(Col, 9, 1) FROM (VALUES ('ABCDE'), ('FGHIJKLMN')) t (Col); ``` Which is fine, if you have a know number of columns. If you have an unknown number of columns, then you just need to generate the same SQL with *n* columns. To do this you will need a numbers table, and since many people do not have one, I will do a quick demo on how to dynamically generate one. The below will generate a sequential list of numbers, 1 - 100,000,000. ``` WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2), Numbers (Number) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N3 AS N1 CROSS JOIN N3 AS N2) SELECT Number FROM Numbers; ``` It simply uses a table valued constructor to generate 10 rows (`N1`), then cross joins these 10 rows to get 100 rows (`N2`), then cross joins these 100 rows to get 10,000 rows (`N3`) and so on and so on. It finally uses `ROW_NUMBER()` to get the sequential numbers. This probably needs to be cut down for this use, I hope you are not splitting a string that is 100,000,000 characters long, but the principle applies. You can just use `TOP` and the maximum length of your string to limit it. 
For each number you can just build up the necessary repetitive SQL required, which is: ``` ,[n] = SUBSTRING(Col, n, 1) ``` So you have something like: ``` SELECT Number, [SQL] = ',[' + CAST(Number AS VARCHAR(10)) + '] = SUBSTRING(Col, ' + CAST(Number AS VARCHAR(10)) + ', 1)' FROM Numbers; ``` Which gives something like: ``` Number SQL ----------------------------------- 1 ,[1] = SUBSTRING(Col, 1, 1) 2 ,[2] = SUBSTRING(Col, 2, 1) 3 ,[3] = SUBSTRING(Col, 3, 1) 4 ,[4] = SUBSTRING(Col, 4, 1) ``` The final step is to build up your final statement by concatenating all the text in the column `SQL`; the best way to do this is using SQL Server's XML Extensions. So your final query might end up like: ``` DECLARE @SQL NVARCHAR(MAX) = ''; IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL DROP TABLE #T; CREATE TABLE #T (Col VARCHAR(100)); INSERT #T (Col) VALUES ('ABCDE'), ('FGHIJKLMN'); WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2), Numbers (Number) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N3 AS N1 CROSS JOIN N3 AS N2) SELECT @SQL = 'SELECT Col' + ( SELECT TOP (SELECT MAX(LEN(Col)) FROM #T) ',[' + CAST(Number AS VARCHAR(10)) + '] = SUBSTRING(Col, ' + CAST(Number AS VARCHAR(10)) + ', 1)' FROM Numbers FOR XML PATH(''), TYPE ).value('.', 'VARCHAR(MAX)') + ' FROM #T;'; EXECUTE sp_executesql @SQL; ``` Which gives: ``` Col 1 2 3 4 5 6 7 8 9 ------------------------------------------------- ABCDE A B C D E FGHIJKLMN F G H I J K L M N ``` Finally, if you actually wanted to split it into rows, I would still use the same approach, with your adhoc numbers table, just join it to your original table: ``` IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL DROP TABLE #T; CREATE TABLE #T (Col VARCHAR(100)); INSERT #T (Col) VALUES ('ABCDE'), ('FGHIJKLMN'); WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (N)), N2 (N) AS 
(SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), N3 (N) AS (SELECT 1 FROM N2 AS N1 CROSS JOIN N2 AS N2), Numbers (Number) AS (SELECT TOP (SELECT MAX(LEN(Col)) FROM #T) ROW_NUMBER() OVER(ORDER BY N1.N) FROM N3 AS N1 CROSS JOIN N3 AS N2) SELECT t.Col, Position = n.Number, Character = SUBSTRING(t.Col, n.Number, 1) FROM #T AS t INNER JOIN Numbers AS n ON n.Number <= LEN(t.Col) ORDER BY t.Col, n.Number; ``` Which gives something like: ``` Col Position Character ------------------------------- ABCDE 1 A ABCDE 2 B ABCDE 3 C ABCDE 4 D ABCDE 5 E ```
Split a string with no delimiters into columns
[ "", "sql", "sql-server", "database", "" ]
In production, issuing a `SELECT COUNT` can be a bad idea - it can be a performance hit depending on your database engine. In Oracle, if I want to get an idea of the size of a table without having to resort to a `COUNT`, I can do the following: ``` SELECT table_name, num_rows, last_analyzed FROM all_tables WHERE table_name = 'MY_TABLE_NAME'; ``` This will retrieve Oracle's table analyses if they're enabled. While the count isn't exact, it can give me an idea of how large a table is in case I need to query it (and the last\_analyzed column lets me know how old that approximation is). How can I do something similar in SQL Server? (Related - is this necessary for SQL Server? Oracle has to count row-by-row, hence the avoidance.) Thanks!
You can also use Management Studio: ``` Right Click on table -> Properties -> Storage ``` or you can use a query like this: ``` sp_spaceused 'TableName' ``` To get it for all the tables, you can use it like this: ``` CREATE TABLE #tmp ( tableName varchar(100), numberofRows varchar(100), reservedSize varchar(50), dataSize varchar(50), indexSize varchar(50), unusedSize varchar(50) ) insert #tmp EXEC sp_MSforeachtable @cmd="EXEC sp_spaceused '?'" select * from #tmp ```
You can call `sp_spaceused 'table_name'`. If you want to do this for all tables, wrap it inside `sp_MSforeachtable`: ``` sp_MSforeachtable 'sp_spaceused ''[?]''' ``` Calling `sp_spaceused` without any parameter will give you the database size.
SQL Server - Get approximate size of table
[ "", "sql", "sql-server", "" ]
I have two tables, Product and ProductImages. Product table has 2 columns: ProductID and Name ProductImages table has 4 columns: ID, ProductID, ImageName and Primary(bit). The relation between Product and ProductImages is one to many, so one product can have many images, but for each product only one ProductImage will be Primary. I need to write a query to get all products with their primary images. If the product does not have a primary image, 1st record for the ProductId should be fetched. **Sample product Table** ``` | 1 | P1 | | 2 | P2 | | 3 | P3 | | 4 | P4 | ``` **Sample productImage Table** ``` | 1 | 1 | P1-1 | 1 | 2 | 1 | P1-2 | 0 | 3 | 1 | P1-3 | 0 | 4 | 1 | P1-4 | 0 | 5 | 2 | P2-1 | 1 | 6 | 2 | P2-2 | 0 | 7 | 3 | P3-1 | 0 | 8 | 3 | P3-2 | 0 | 9 | 4 | P4-1 | 0 | 10 | 4 | P4-2 | 0 ``` **Output Table** ``` | 1 | 1 | P1-1 | 1 | 5 | 2 | P2-1 | 1 | 7 | 3 | P3-1 | 0 | 9 | 4 | P4-1 | 0 ``` I hope I clarified my question. Please ask if further clarification is required.
You can do this simply with the `row_number` window function: ``` select * from Products p join (select *, row_number() over(partition by ProductID order by [Primary] desc, ID) rn from ProductImages)pi on p.ProductID = pi.ProductID and pi.rn = 1 ``` Ordering by `[Primary] desc` picks the primary image first when one exists; the `ID` tiebreaker then returns the first record for products that have no primary image.
This is kind of "quick and dirty" but it works: ``` SELECT pr.ProductID, pr.Name, prim.ImageName, 1 AS IsPrimary FROM @product pr INNER JOIN @productimage prim ON pr.ProductID = prim.ProductID WHERE prim.[Primary] = 1 UNION ALL SELECT pr.ProductID, pr.Name, prim.ImageName, 0 AS IsPrimary FROM @product pr INNER JOIN -- Get any image for this Product MIN, MAX,...what you want ( SELECT ProductID, MIN(ImageName) AS ImageName FROM @productimage WHERE [Primary] = 0 GROUP BY ProductID ) prim ON pr.ProductID = prim.ProductID LEFT JOIN --Primary Images: ( SELECT ProductID FROM @productimage pri WHERE pri.[Primary] = 1 ) primages ON pr.ProductID = primages.ProductID WHERE primages.ProductID IS NULL --there is no primary image ``` The first query is for all products that have a primary image; the second query is for those that don't.
Always Get 1 record from Table 2 based on some condition
[ "", "sql", "sql-server", "sql-server-2008", "azure-sql-database", "" ]
How can i retrieve *(select)* half of records from a table, for example, a table with 1000 rows, retrieve 500 (50%) from the table. (in this case i can use rownum because we know the exact quantity of rows (1000) - `select * from table where rownum <= 500`), but i have to count every table to achieve the statement. What's the best way do you think i can do this?
Well, you could count the rows and select half: ``` select * from my_table where rownum <= (select count(*)/2 from my_table) ``` That would *tend* to select rows that are contiguous within the physical segments. Or ... ``` select * from (select rownum rn, t.* from my_table t) where mod(rn,2) = 0 ``` That would *tend* to select "every other" row, so you'd get a pretty even spread from the physical data segments. Or ... ``` select * from my_table sample (50) ``` That would be approximately half of the rows. Or ... ``` select * from my_table sample block (50) ``` That would be the rows from approximately half of the data blocks below the high water marks of the segments. Probably lots of different ways available, and which one you want probably depends on whether you want the rows selected pseudo-randomly or not. If you want to use the output of the query, use something like: ``` select ... from (select * from my_table where rownum <= (select count(*)/2 from my_table)) my_table join ... ``` In that circumstance the SAMPLE syntax would be more compact.
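The "every other row" variant translates to other engines too; here is a sketch using SQLite's ROW_NUMBER in place of Oracle's rownum, over a throwaway ten-row table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO my_table (id) VALUES (?)", [(i,) for i in range(1, 11)])

# Number the rows, then keep every other one: roughly half the table,
# spread evenly rather than taken from one end.
half = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn FROM my_table
    ) WHERE rn % 2 = 0
""")]
print(half)  # [2, 4, 6, 8, 10] -- 5 of the 10 rows
```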
With `NTILE` window function: ``` select * from (SELECT *, NTILE(2) OVER(ORDER BY (SELECT NULL FROM DUAL)) nt FROM TableName) as t where nt = 1 ``` or: ``` select * from (SELECT *, NTILE(2) OVER(ORDER BY NULL) nt FROM TableName) as t where nt = 1 ```
How to retrieve half of records from a table - Oracle 11g
[ "", "sql", "oracle", "" ]
I'm trying to run a query that will update a field in one table if a field in another table is equal to test. Here is the code: ``` UPDATE Table1 AS t1 INNER JOIN Table2 AS t2 ON t1.Field1 = t2.F_Name SET t1.Field4 = (CASE WHEN t2.PlayField = 'test' THEN 'test' ELSE 'No test' END); ``` However, I always receive a `Syntax Error (missing operator)` when I run it. Not sure what I'm doing wrong...
Since you only want to understand the issue..Your SQL : ``` UPDATE Table1 AS t1 INNER JOIN Table2 AS t2 ON t1.Field1 = t2.F_Name SET t1.Field4 = (CASE WHEN t2.PlayField = 'test' THEN 'test' ELSE 'No test' END); ``` MS Access doesn't support the `CASE` statement. It looks like SQL Server, not MS Access. You could try: ``` SET t1.Field4 = IIf([t2].[playfield]='test','test','No test'); ``` This basically says: Set `t1.Field` = If `t2.PlayField = 'Test'`, we will use the word 'test', if it doesn't we will use 'No Test'.
`CASE ... WHEN` is not supported in Access SQL. Use `IIf` instead. ``` UPDATE Table1 AS t1 INNER JOIN Table2 AS t2 ON t1.Field1 = t2.F_Name SET t1.Field4 = IIf(t2.PlayField = 'test', 'test', 'No test'); ```
SQL Syntax Error Missing Operator MS Access
[ "", "sql", "ms-access", "" ]
I created a function as shown below: ``` CREATE FUNCTION my_func(value INTEGER) RETURNS VOID AS $$ BEGIN END; $$ LANGUAGE plpgsql; ``` But I need to specify the parameter to drop the function as shown below: ``` DROP FUNCTION my_func(value INTEGER); ``` Because if not specifying the parameter as shown below: ``` DROP FUNCTION my_func; ``` Then, I got the error below: > ERROR: function name "my\_func" is not unique > HINT: Specify the argument list to select the function unambiguously. So, can I drop a function without parameters?
In Postgres functions [can be overloaded](http://www.postgresql.org/docs/9.4/static/xfunc-overload.html), so parameters are necessary to distinguish overloaded functions. To unambiguously identify a function you can put only types of its parameters. ``` DROP FUNCTION my_func(INT); ```
As of Postgres 10 you can drop functions by name only, as long as the names are unique to their schema. Example: ``` drop function my_func; ``` Documentation [here](https://www.postgresql.org/docs/current/static/sql-dropfunction.html).
PostgreSQL: Drop a function without parameters
[ "", "sql", "database", "postgresql", "plpgsql", "drop", "" ]
I have a huge `INSERT INTO TABLE1 (....) SELECT .... FROM TABLE2` statement. It gives me the error > "String or binary data would be truncated". I know that one of the columns from TABLE2 is way bigger for one column from TABLE1 in the `INSERT` statement. I have more than 100 columns in each table. So it is hard to find out the problem. Is there any easier way to figure this out?
You can query `Information_Schema.Columns` for both tables and check the difference in content length. Assuming your tables have the same column names, you can use this: ``` SELECT t1.Table_Name, t1.Column_Name FROM INFORMATION_SCHEMA.Columns t1 INNER JOIN INFORMATION_SCHEMA.Columns t2 ON (t1.Column_Name = t2.Column_Name) WHERE t1.Table_Name = 'Table1' AND t2.Table_Name = 'Table2' AND ISNULL(t1.Character_maximum_length, 0) < ISNULL(t2.Character_maximum_length, 0) ``` Assuming your tables have different column names, you can do this and just look for the difference ``` SELECT Table_Name, Column_Name, Character_maximum_length FROM INFORMATION_SCHEMA.Columns WHERE Table_Name IN('Table1', 'Table2') ORDER BY Column_Name, Character_maximum_length, Table_Name ```
To figure out which column the data is too long fit in, I would use following statement to output the results to a temp table. ``` SELECT ... INTO MyTempTable FROM Table2 ``` Then use the query example from [this article](http://blogs.lessthandot.com/index.php/datamgmt/datadesign/maximum-length-of-data-in/) to get the max data length of each column. I have attached a copy of the code below. ``` DECLARE @TableName sysname = 'MyTempTable', @TableSchema sysname = 'dbo' DECLARE @SQL NVARCHAR(MAX) SELECT @SQL = STUFF((SELECT ' UNION ALL select ' + QUOTENAME(Table_Name,'''') + ' AS TableName, ' + QUOTENAME(Column_Name,'''') + ' AS ColumnName, ' + CASE WHEN DATA_TYPE IN ('XML','HierarchyID','Geometry','Geography','text','ntext') THEN 'MAX(DATALENGTH(' ELSE 'MAX(LEN(' END + QUOTENAME(Column_Name) + ')) AS MaxLength, ' + QUOTENAME(C.DATA_TYPE,'''') + ' AS DataType, ' + CAST(COALESCE(C.CHARACTER_MAXIMUM_LENGTH, C.NUMERIC_SCALE,0) AS VARCHAR(10)) + ' AS DataWidth ' + 'FROM ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(Table_Name) FROM INFORMATION_SCHEMA.COLUMNS C WHERE TABLE_NAME = @TableName AND table_schema = @TableSchema --AND DATA_TYPE NOT IN ('XML','HierarchyID','Geometry','Geography') ORDER BY COLUMN_NAME FOR XML PATH(''),Type).value('.','varchar(max)'),1,11,'') EXECUTE (@SQL) ```
how to find "String or binary data would be truncated" error on sql in a big query
[ "", "sql", "sql-server", "" ]
I am using the Microsoft sample database and the questions from SQLZOO.net to learn SQL for my job. I am stuck on the question: > For every customer with a 'Main Office' in Dallas show AddressLine1 of > the 'Main Office' and AddressLine1 of the 'Shipping' address - if > there is no shipping address leave it blank. Use one row per customer. How Do I use the same table?
You need to alias the table to join onto itself. For example: ``` SELECT T1.Column1, T2.Column2, ... FROM Table1 AS T1 JOIN Table1 AS T2 ON T1.Column1 = T2.Column1 ```
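As a concrete illustration of aliasing one table twice, here is a sketch with an invented address table in SQLite: a left join on the second alias keeps customers that have no shipping address, with `COALESCE` supplying the blank:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Address (CustomerID INT, AddrType TEXT, AddressLine1 TEXT)")
conn.executemany(
    "INSERT INTO Address VALUES (?, ?, ?)",
    [
        (1, "Main Office", "1 Main St"),
        (1, "Shipping",    "9 Dock Rd"),
        (2, "Main Office", "2 Elm Ave"),  # no shipping address -> blank
    ],
)

rows = conn.execute("""
    SELECT oa.CustomerID, oa.AddressLine1, COALESCE(sa.AddressLine1, '')
    FROM Address AS oa
    LEFT JOIN Address AS sa
           ON sa.CustomerID = oa.CustomerID AND sa.AddrType = 'Shipping'
    WHERE oa.AddrType = 'Main Office'
    ORDER BY oa.CustomerID
""").fetchall()
print(rows)  # [(1, '1 Main St', '9 Dock Rd'), (2, '2 Elm Ave', '')]
```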
I browsed the AdventureWorks schema and I think this is the right approach. It does involve a self-join, which is accomplished by bringing the same table into the query with a second alias (Person.Address as "oa" and "sa" below). Since the shipping address join is apparently optional, I would say that that is actually the trickier part of the question to get correct. ``` select ... from Person.Address as oa /* office address */ inner join Sales.CustomerAddress as ca on ca.CustomerID = oa.CustomerID left outer join Person.Address as sa /* shipping address*/ on ca.CustomerID = oa.CustomerID and sa.AddressTypeID = ( select AddressTypeID from Person.AddressType where Name = 'Shipping' ) where oa.AddressTypeID = ( select AddressTypeID from Person.AddressType where Name = 'Main Office' ) and oa.City = 'Dallas' ```
Using the same table in a join
[ "", "sql", "sql-server", "sql-server-2008", "" ]
This is my Azure configuration: 1. I have a Virtual Network with a couple of subnets and a gateway configured to allow point-to-site. 2. There is one Virtual Machine with SQL Server (2014) installed. There are some databases in there already. SQL Server is set up to allow SQL Server and Windows Authentication mode. **This VM is in the Virtual Network** 3. I have an empty Azure Web App I deployed my main MVC WebApp to the empty Azure Web App and looks good, except when it tries to retrieve information from the database. Is it a connection string error? or there can be something else... My connection string looks like this: ``` <add name="MyEntities" connectionString="metadata=res://*/Data.MyModel.csdl| res://*/Data.MyModel.ssdl| res://*/Data.MyModel.msl;provider=System.Data.SqlClient; provider connection string=&quot; data source=tcp:10.0.1.4; initial catalog=MyDataBase; persist security info=False; user id=MySystemAdmin; password=SystemAdminPassword; multipleactiveresultsets=True; App=EntityFramework&quot;" providerName="System.Data.EntityClient" /> ``` Here is the error thrown by the azure web app... ![enter image description here](https://i.stack.imgur.com/t8Ry0.png) So it seems to be related to either the way I'm providing the connection string or the end-points/firewall configuration.
The other answers gave me the guidelines to find out the solution. I'll try to describe the steps I followed: 1. Using the new azure portal (portal.azure.com currently in preview) I established a connection between the Azure Web App and the Virtual network: * Home > Browse > Click on Azure Web App name * In the Azure Web App blade click on Networking tile * In Virtual Network blade, click on the Virtual Network where the database is located (*it's important to mention that the Virtual Network ought to have a gateway previously configured*) 2. My intention was to provide certain level of security to the VM with the databases by placing it inside a Virtual Network, so I had not considered opening ports. Turns out that it's necessary, so, in the VM: * I enabled the TCP/IP protocol for SQL Server using the Sql Server Configuration Manager (How to? [here](https://msdn.microsoft.com/en-us/library/bb909712%28v=vs.90%29.aspx)) * Then I created a new Inbound Rule opening the 1433 port, **but only for private connections** (very nice). * It was not necessary to create an endpoint in the VM for this port (very happy with this). 3. Finally, I published the the app to the Azure Web App using the connection strings as shown in the question (with internal database IP) Final touch: in the new Azure Portal > Azure Web App > Settings, I was able to enter Connection Strings. Settings created in the portal are not overwritten; so now I'm sure this Azure Web App will always use the correct connection string. Final note: in theory (not tested yet) the internal IP will not change as long as the VM is not *Stopped (Deallocated)*.
Check your connection string against this connection string for Entity Framework designer files (<https://msdn.microsoft.com/en-us/data/jj556606.aspx#Connection>) Just from a quick glance I see two possible errors: 1. A semicolon issue after provider=System.Data.SqlClient (the example on the page I linked to doesn't have one) 2. The IP address you specify to connect to is a local one (10.0.0.1) and should be the IP/DNS name of your database in Azure. Not sure if this is the issue or if StackOverflow just clobbered your markup. In addition, you talk about a lot of gateways, so I would check to make sure you can talk between your systems. Finally, posting error messages and capturing exceptions about what's actually going on will help diagnose the error, because at this point it's all guesswork. Hope that helps.
Publishing a MVC App in Azure Web App
[ "", "sql", "sql-server", "asp.net-mvc", "azure", "" ]
I have a simple data set of customers (about 40,000k) It looks like: ``` customerid, group, other_variable a,blue,y b,blue,x c,blue,z d,green,y e,green,d f,green,r g,green,e ``` I want to randomly select for each group, Y amounts of customers (along with their other variable(s). The catch is, i want to have two random selections of Y amounts for each group i.e. ``` 4000 random green customers split into two sets of 2000 randomly and 4000 random blue customers split into two sets of 2000 randomly ``` This is because I have different messages to give to the two different splits I'm not sampling with replacement. Needs to be unique customers Would prefer a solution in PROC SQL but happy for alternative solution in sas if proc sql isn't idea
`proc surveyselect` is the general tool of choice for random sampling in SAS. The code is very simple, I would just sample 4000 of each group, then assign a new subgroup every 2000 rows, since the data is in a random order anyway (although sorted by group). The default sampling method for `proc surveyselect` is `srs`, which is simple random sampling without replacement, exactly what is required here. Here's some example code. ``` /* create dummy dataset */ data have; do customerid = 1 to 10000; length group other_variable $8; if rand('uniform')<0.5 then group = 'blue'; /* assign blue or green with equal likelihood */ else group = 'green'; other_variable = byte(97+(floor((1+122-97)*rand('uniform')))); /* random letter between a and z */ output; end; run; /* dataset must be sorted by group variable */ proc sort data=have; by group; run; /* extract random sample of 4000 from each group */ proc surveyselect data=have out=want n=4000 seed=12345; /* specify seed to enable results to be reproduced */ strata group; /* set grouping variable */ run; /* assign a new subgroup for every 2000 rows */ data want; set want; sub=int((_n_-1)/2000)+1; run; ```
``` data custgroup; set sorted_data; point = ranuni(0); run; proc sort data=custgroup out=sortedcust; by group point; run; data final; set sortedcust; by group point; if first.group then i=0; i+1; run; ``` Basically what I am doing is first assigning a random number to every observation in the data set, then sorting by the variables `group` and `point`. That gives a random sequence of observations within each group. `if first.group then i=0;` together with `i+1;` numbers the observations within each group, which avoids extracting duplicated observations. Use an `output` statement as well to control where you want to store the observations based on `i`. My approach may not be the most efficient one.
SAS how to get random selection by group randomly split into multiple groups
[ "", "sql", "sas", "sample", "proc", "" ]
I have two sets of records Set 1: ``` -11 -12 -12 AN '' -134 -125 +135 ``` Set 2: ``` 1.15 1.1 ``` In Set 1 I need to check which values are either blank `''` or start with a `+` sign and are greater than 125. In Set 2 I need to check which values have less than two decimal places Example output for the above sets: ``` '' +135 1.1 ```
In SQL Server it could be something like this: ``` WITH cte AS ( SELECT Col FROM set1 WHERE Col = '' OR Col LIKE'+%' AND (CAST(REPLACE(REPLACE(Col,'+',''),'-','') AS INT) > 125) ) SELECT * FROM cte UNION ALL SELECT Col FROM set2 WHERE Col LIKE '%._' ``` OUTPUT: ``` '' -- blank +135 1.1 ``` **[SQL FIDDLE](http://sqlfiddle.com/#!3/174b8/6)**
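The `LIKE '%._'` pattern (anything, then a dot, then exactly one character) is standard enough to verify in any engine; a quick check via Python's sqlite3 with the sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE set2 (Col TEXT)")
conn.executemany("INSERT INTO set2 VALUES (?)", [("1.15",), ("1.1",)])

# '_' matches exactly one character, so only values with a single
# digit after the decimal point survive the filter.
one_dp = [r[0] for r in conn.execute("SELECT Col FROM set2 WHERE Col LIKE '%._'")]
print(one_dp)  # ['1.1']
```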
For the first set, you can use the `like` operator to check if a string starts with '+' and then cast it to `numeric` and compare it with `125`. Using `isnumeric` beforehand will help avoid casting errors: ``` WHERE col = '' OR (col LIKE '+%' AND ISNUMERIC(col) AND CAST(col AS NUMERIC) > 125) ``` For the second set, you can use the `like` operator with `_`, the single character wildcard: ``` WHERE col NOT LIKE '%.__%' ```
Check for two decimal digit number in string
[ "", "sql", "sql-server", "sql-server-2008", "select", "" ]
I often find myself running into situations like this one (which is contrived but illustrative of the problem): ``` CREATE TABLE customer ( id SERIAL PRIMARY KEY, type TEXT -- other columns... ); CREATE TABLE product_order ( id SERIAL PRIMARY KEY, customer_id INTEGER REFERENCES customer (id), type TEXT REFERENCES customer (type), -- not actually legitimate -- other columns... CHECK (type = 'business') ); ``` Of course, the foreign key constraint on `product_order.type` doesn't work because `customer.type` is not `UNIQUE` or a primary key (and I can't use a `CHECK CONSTRAINT` on a column that only exists in another table). However, I would only like `product_order` entries for `type = 'business'` customers. I could make `customer.id` and `customer.type` a composite primary key, but then any other tables that want to reference just `customer.id` must also reference `customer.type` unnecessarily. What's the best approach in this situation? EDIT: Forgot the foreign key constraint `product_order.customer_id`!
If you create a unique *constraint* on the `customer.type` you can reference it from the `product_order` table: ``` CREATE TABLE customer ( id SERIAL PRIMARY KEY, type TEXT, -- other columns... constraint unique_cust_type unique (id, type) -- this makes the combination id/type "referencable" ); CREATE TABLE product_order ( id SERIAL PRIMARY KEY, customer_id INTEGER, type TEXT default 'business', CHECK (type = 'business'), foreign key (customer_id, type) references customer (id, type) ); ```
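The composite-key trick carries over to other engines that support multi-column foreign keys; here is a check with SQLite via Python (foreign keys must be switched on, the types are simplified, and the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        type TEXT,
        UNIQUE (id, type)          -- makes the (id, type) pair referencable
    )
""")
conn.execute("""
    CREATE TABLE product_order (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER,
        type        TEXT DEFAULT 'business' CHECK (type = 'business'),
        FOREIGN KEY (customer_id, type) REFERENCES customer (id, type)
    )
""")
conn.execute("INSERT INTO customer (id, type) VALUES (1, 'business'), (2, 'personal')")

conn.execute("INSERT INTO product_order (customer_id, type) VALUES (1, 'business')")  # OK
try:
    # customer 2 is 'personal': no (2, 'business') parent row exists
    conn.execute("INSERT INTO product_order (customer_id, type) VALUES (2, 'business')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True -- only 'business' customers can be referenced
```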
You could make a lookup table for types, and use the foreign-key relationship to enforce it ``` CREATE TABLE type ( id INTEGER PRIMARY KEY, name TEXT ); CREATE TABLE customer ( id SERIAL PRIMARY KEY, type_id INTEGER NOT NULL, -- other columns... FOREIGN KEY (type_id) REFERENCES type(id) ); CREATE TABLE product_order ( id SERIAL PRIMARY KEY, type_id INTEGER NOT NULL, -- other columns... FOREIGN KEY (type_id) REFERENCES type(id) ); ```
What is the best way to enforce constraints across tables?
[ "", "sql", "postgresql", "" ]
I need to add a WHERE condition to a SQL query: the condition is only needed when the dept id is 9; otherwise it is not required. I have written the query below. Is the approach correct? ``` SELECT b.DeptId, b.DeptName, a.SurveyID, a.SurveyName, a.Status, a.AllUsers, IsNull(a.SelectedUsers,'') as SelectedUsers, a.OpenDate, a.CloseDate, e.Role as RoleName from Surveys a inner join Departments b on a.deptid=b.deptid left outer join surveyUsers c on c.surveyid=a.SurveyID and c.empCode= 9902 left outer join [360HRSurveyEmployee] d on d.surveyid=a.SurveyID left outer join [360HRSurvey] e on e.sempid = c.empCode and e.empid = d.empid where ( c.empCode= 9902 or a.AllUsers = 1 ) and a.status in (1) and a.OpenDate <= '6/9/2015' and a.CloseDate >= '6/9/2015' and CASE WHEN DeptId == 9 THEN e.Role IS NOT NULL END order by b.DeptID,a.SurveyID ``` Note the last three lines in the above query where I added the case: ``` and CASE WHEN DeptId == 9 THEN e.Role IS NOT NULL END order by b.DeptID,a.SurveyID ``` I am also getting a syntax error: ``` Incorrect syntax near '='. ```
If I understand you correctly, you only need rows where DeptId is not 9, or e.Role is not null. Also, your gross disregard for consistency in your capitalization hurts me. What is this beast?! ``` SELECT b.DeptID, b.DeptName, a.SurveyID, a.SurveyName, a.Status, a.AllUsers, ISNULL(a.SelectedUsers,'') as SelectedUsers, a.OpenDate, a.CloseDate, e.Role as RoleName FROM Surveys AS a INNER JOIN Departments AS b ON a.DeptID = b.DeptID LEFT OUTER JOIN SurveyUsers AS c ON (c.SurveyID = a.SurveyID AND c.EmpCode = 9902) LEFT OUTER JOIN [360HRSurveyEmployee] AS d ON d.SurveyID = a.SurveyID LEFT OUTER JOIN [360HRSurvey] AS e ON (e.EmpID = c.EmpCode AND e.EmpID = d.EmpID) WHERE ( c.EmpCode = 9902 OR a.AllUsers = 1 ) AND a.Status = 1 AND a.OpenDate <= '6/9/2015' AND a.CloseDate >= '6/9/2015' AND ( a.DeptID != 9 OR e.Role IS NOT NULL ) ORDER BY a.DeptID, a.SurveyID; ```
``` and CASE WHEN DeptId == 9 ``` You only need one = sign for this comparison.
adding a where condition for one criteria in sql query
[ "", "sql", "sql-server", "sql-server-2008", "rdbms", "" ]
I have two tables with same structure: `tmp_grn` and `grn`. I have to delete rows from table `tmp_grn` which already exists in table `grn`. Problem is I don't have a unique or primary key but I can determine a unique row with the combination of two columns. Let's say column names are `grn_code` and `item_skucode`. My query: ``` DELETE FROM tmp_grn WHERE grn_code AND item_skucode IN (SELECT grn_code , item_skucode FROM grn); ``` I am getting this error: > ERROR: subquery has too many columns What should be the right way to do this?
If you want to combine two columns, you need to put them into parentheses: ``` DELETE FROM tmp_grn WHERE (grn_code, item_skucode) IN (SELECT grn_code, item_skucode FROM grn); ``` But suslov's answer using an `exists` is most probably faster - you need to check the execution plan to verify that.
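Both answers above are written for Postgres, but the idea is easy to sanity-check anywhere. Below is a minimal runnable sketch using Python's built-in sqlite3 module, showing the portable `EXISTS` variant (table and column names come from the question; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grn (grn_code INTEGER, item_skucode INTEGER);
CREATE TABLE tmp_grn (grn_code INTEGER, item_skucode INTEGER);
INSERT INTO grn VALUES (1, 10), (2, 20);
INSERT INTO tmp_grn VALUES (1, 10), (1, 99), (3, 30);
""")
# Delete tmp_grn rows whose (grn_code, item_skucode) pair already exists in grn
conn.execute("""
DELETE FROM tmp_grn
WHERE EXISTS (SELECT 1 FROM grn
              WHERE grn.grn_code = tmp_grn.grn_code
                AND grn.item_skucode = tmp_grn.item_skucode)
""")
remaining = conn.execute(
    "SELECT grn_code, item_skucode FROM tmp_grn ORDER BY grn_code, item_skucode"
).fetchall()
print(remaining)  # [(1, 99), (3, 30)]
```

The pair (1, 10) exists in `grn`, so only the two unmatched rows survive the delete.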
You can use `exists` (*if you want to check the pair of values*): ``` delete from tmp_grn t where exists ( select * from grn where grn_code = t.grn_code and item_skucode = t.item_skucode); ```
Subquery has too many columns
[ "", "sql", "postgresql", "" ]
A simple table contains one column with integer values. The figure is given below. ![A table named model with one column 'numbers'](https://i.stack.imgur.com/h4tiX.png) I am using COALESCE to build a comma-delimited string from the 'numbers' column. ![comma delimited value](https://i.stack.imgur.com/S67Zn.png) Now there is a problem when I check the above constructed value in an IF condition like below. It shows an error that the varchar datatype cannot be converted to integer. ![enter image description here](https://i.stack.imgur.com/pL2Dz.png) How do I check the constructed values in an IF condition without changing the logic? I am new to T-SQL. Thank you
When you concatenate all the numbers you are converting them to a string, so the result no longer acts like an integer. If you want to check for a value in a list, query the table directly; SQL was made to have different values in different rows and to work with them that way. Try this out: ``` DECLARE @failIds INT = 23; IF @failIds IN (SELECT numbers FROM model) BEGIN PRINT 'YES' END ELSE BEGIN PRINT 'NO' END ```
Instead of the comma separated string, you should use a table variable, temp. table or just the original table. For example, something like this: ``` declare @Ids table (id int) insert into @Ids select numbers from model where numbers in (23,234) declare @failIds int = 23 if (exists (select 1 from @Ids where id = @failIds)) begin print 'Yes' end else begin print 'No' end ``` But you could of course do this too: ``` if (exists (select 1 from model where numbers = @failIds)) begin print 'Yes' end else begin print 'No' end ```
How to check comma-delimited integer values in an IF condition in T-SQL?
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table with the structure below: ``` create table roster ( date date not null, first nvarchar(20) not null, second nvarchar(20) not null, third nvarchar(20) not null, fourth nvarchar(20) not null ) go ``` And that has the following data inserted: ``` insert into roster values ('2015-06-10 12:45:34', 'e', 'm', 'a', 'r') insert into roster values ('2015-06-11 12:45:34', 'e', 'v', 'a', 'r') insert into roster values ('2015-06-12 12:45:34', 'e', 'm', 'a', 'd') insert into roster values ('2015-06-13 12:45:34', 'e', 'm', 'a', 'd') * insert into roster values ('2015-06-14 12:45:34', 'e', 'm', 'a', 'r') insert into roster values ('2015-06-15 12:45:34', 'e', 'm', 'a', 'r') * insert into roster values ('2015-06-16 12:45:34', 'z', 'm', 't', 'r') ``` Note: \* marks duplicates. How do I select only one unique consecutive combination of "first", "second", "third", and "fourth"? For example, with the above inserted data, the desired output is: ``` Date First Second Third Fourth 2015-06-10 12:45:34, e m a r 2015-06-11 12:45:34, e v a r 2015-06-12 12:45:34, e m a d 2015-06-14 12:45:34, e m a r 2015-06-16 12:45:34, z m t r ``` I am looking for a solution that preserves entries when they are no longer consecutive (or when the sequence is broken), but removes duplicates of consecutive entries. I have seen similar questions posted here, but I have been unable to make a solution that uses a group by. Any help would be appreciated.
Why need a `group by` if you just need a single `exists()`? ``` ;with data as ( select ROW_NUMBER() OVER (ORDER BY date) AS number, * from roster ) select * from data where not exists -- Just compare with the previous column, if match say bye ( select * from data prev where 1 = 1 and prev.first = data.first and prev.second = data.second and prev.third = data.third and prev.fourth = data.fourth and prev.number + 1 = data.number ) ``` [SQL Fiddle](http://sqlfiddle.com/#!3/c151c/1) **EDIT** ``` ;with data as ( select ROW_NUMBER() OVER (ORDER BY date) AS number, ROW_NUMBER() OVER (PARTITION BY first, second, third, fourth ORDER BY date) AS part, * from roster ) select MIN(date) as startdate, MAX(date) as enddate, COUNT(*) count, first, second, third, fourth from data group by first, second, third, fourth, number - part order by number - part ``` [SQL Fiddle](http://sqlfiddle.com/#!3/c151c/3)
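The `ROW_NUMBER` trick above is T-SQL; on engines without window functions, the same "compare each row with its immediate predecessor" idea can be written with plain correlated subqueries. A runnable sketch via Python's sqlite3, using the question's sample data (dates truncated to days for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE roster (date TEXT, first TEXT, second TEXT, third TEXT, fourth TEXT);
INSERT INTO roster VALUES
  ('2015-06-10', 'e', 'm', 'a', 'r'),
  ('2015-06-11', 'e', 'v', 'a', 'r'),
  ('2015-06-12', 'e', 'm', 'a', 'd'),
  ('2015-06-13', 'e', 'm', 'a', 'd'),
  ('2015-06-14', 'e', 'm', 'a', 'r'),
  ('2015-06-15', 'e', 'm', 'a', 'r'),
  ('2015-06-16', 'z', 'm', 't', 'r');
""")
# Keep a row unless the row immediately before it (by date) has the same four values
rows = conn.execute("""
SELECT date, first, second, third, fourth
FROM roster r
WHERE NOT EXISTS (
    SELECT 1 FROM roster p
    WHERE p.date = (SELECT MAX(date) FROM roster WHERE date < r.date)
      AND p.first = r.first AND p.second = r.second
      AND p.third = r.third AND p.fourth = r.fourth)
ORDER BY date
""").fetchall()
print([row[0] for row in rows])
# ['2015-06-10', '2015-06-11', '2015-06-12', '2015-06-14', '2015-06-16']
```

The first row has no predecessor, so the inner `MAX(date)` is NULL, nothing matches, and the row is kept.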
You can group by the values of first, second, third, fourth then select the first date those values are encountered with min(date) or the last time they occur with max(date) example for the last date encountered: [fiddle](http://sqlfiddle.com/#!3/349c1/2) ``` SELECT min(date) as startdate ,max(date) as enddate, first, second, third, fourth from roster GROUP BY first, second, third, fourth ``` EDIT: edited previous query to include start and enddate EXTRA: something I was playing with when waiting for your reply: including a list of dates where the values occured in 1 field: ``` SELECT first, second, third, fourth, STUFF(( SELECT ',' + convert(varchar(25),T.date) FROM roster T WHERE A.first = T.first AND A.second = T.second AND A.third = T.third AND A.fourth = T.fourth ORDER BY T.date FOR XML PATH('')), 1, 1, '') as dates from roster A GROUP BY first, second, third, fourth ``` EDIT: I got pretty close to what you wanted but not quite, however I have no idea how to get it closer, I guess this is as far as I go, the rest is up to someone else :D : [SQLFIDDLE](http://sqlfiddle.com/#!3/9d498/30) ``` SELECT b.date as startdate, a.date as enddate, a.first, a.second, a.third, a.fourth FROM (Select ROW_NUMBER() OVER (ORDER BY first, second, third, fourth,date ) AS Row, date, first,second,third,fourth from roster) A JOIN (Select ROW_NUMBER() OVER (ORDER BY first, second, third, fourth,date ) AS Row, date, first,second,third,fourth from roster) B ON A.row = b.row + 1 WHERE a.first = b.first and a.second = b.second and a.third = b.third and a.fourth = b.fourth UNION select max(date) as startdate, null as enddate, first, second, third, fourth FROM roster group by first, second, third, fourth having count(*) = 1; ```
Selecting distinct consecutive rows amongst duplicates
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
What I am doing is pretty straightforward; I dug around a few posts and couldn't figure out how to express it properly. TVP Declaration ``` CREATE TYPE [dbo].[CustomSeoDic] as table ( [RecordID] [int] NULL, [Name] [nvarchar](125) NULL) GO ``` Attempt 1 ``` BEGIN MERGE INTO Listings L USING @CustomSeo AS Tvp ON Listings.ListingID = @CustomSeo.RecordID WHEN MATCHED AND L.OriginalSubdivisionName IS NULL THEN UPDATE SET L.OriginalSubdivisionName = L.SubdivisionName WHEN MATCHED AND L.OriginalSubdivisionName IS NOT NULL THEN UPDATE SET L.SubdivisionName = Tvp.Name END ``` Attempt 2 ``` BEGIN MERGE INTO Listings L USING @CustomSeo AS Tvp ON Listings.ListingID = @CustomSeo.RecordID WHEN MATCHED AND L.OriginalSubdivisionName IS NULL THEN UPDATE SET CASE L.OriginalSubdivisionName IS NULL THEN L.OriginalSubdivisionName = L.SubdivisionName CASE L.OriginalSubdivisionName IS NOT NULL THEN L.SubdivisionName = Tvp.Name END ``` Error Received > Msg 10714, Level 15, State 1, Procedure Update\_SubdivisionNames, Line 17 > An action of type 'WHEN MATCHED' cannot appear more than once in a 'UPDATE' clause of a MERGE statement.
You don't need a MERGE. You need conditional `SET ColumnX = (this or that)` statements. Something like this. ``` Update L Set OriginalSubdivisionName = case when L.OriginalSubdivisionName IS NULL then L.SubdivisionName else L.OriginalSubdivisionName /* a little trick to keep it the same value */ end, SubdivisionName = case when L.OriginalSubdivisionName IS NOT NULL then Tvp.Name else L.SubdivisionName /* a little trick to keep it the same value */ end From Listings L join @CustomSeo AS Tvp ON L.ListingID = Tvp.RecordID ```
I am mostly guessing here but it seems like a simple update along these lines should be close to what you are trying to do. ``` UPDATE l SET OriginalSubdivisionName = CASE WHEN l.OriginalSubdivisionName IS NULL THEN l.SubdivisionName ELSE Tvp.Name END FROM Listings l JOIN @CustomSeo Tvp ON Tvp.RecordID = l.ListingID ```
Sql Server TVP Merge with Where/Case Statement
[ "", "sql", "sql-server", "sql-server-2008", "table-valued-parameters", "" ]
I have a table with two columns: ``` No Value 1 20 2 10 3 50 4 35 5 17 ``` I also have a variable (parameter) whose value is subtracted from the Value column row by row, in order: each row is reduced to no less than 0, and any remainder carries over to the next row. So, if my variable `V = 5` then my column will update: ``` No Value 1 15 2 10 3 50 4 35 5 17 ``` Or if `V = 50` then: ``` No Value 1 0 2 0 3 30 4 35 5 17 ``` How can I do that?
``` DECLARE @qty int SET @qty = 50 WHILE @qty > 0 BEGIN SELECT @qty = @qty - value FROM table WHERE no = (SELECT MIN(no) FROM table WHERE value > 0) IF @qty < 0 BEGIN UPDATE table SET value = ABS(@qty) WHERE no = (SELECT MIN(no) FROM table WHERE value > 0) END ELSE BEGIN UPDATE table SET value = 0 WHERE no = (SELECT MIN(no) FROM table WHERE value > 0) END END ```
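The loop above is T-SQL; the same cascading reduction is easy to express procedurally in any host language. A sketch using Python over sqlite3 (the table name `t` and the `reduce_values` helper are my own; the question's sample data and the `V = 50` case are used):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (no INTEGER PRIMARY KEY, value INTEGER);
INSERT INTO t VALUES (1, 20), (2, 10), (3, 50), (4, 35), (5, 17);
""")

def reduce_values(conn, qty):
    # Walk the rows in order, draining qty until it is used up
    while qty > 0:
        row = conn.execute(
            "SELECT no, value FROM t WHERE value > 0 ORDER BY no LIMIT 1"
        ).fetchone()
        if row is None:  # nothing left to reduce
            break
        no, value = row
        taken = min(value, qty)
        conn.execute("UPDATE t SET value = value - ? WHERE no = ?", (taken, no))
        qty -= taken

reduce_values(conn, 50)
final = conn.execute("SELECT value FROM t ORDER BY no").fetchall()
print(final)  # [(0,), (0,), (30,), (35,), (17,)]
```

Note the `row is None` guard: without it, asking for more than the table holds would loop forever, which is also a latent risk in the T-SQL version.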
First prepare the structure and the data: ``` CREATE TABLE TAB ( [No] int identity(1,1) primary key, [Value] int ); INSERT INTO TAB VALUES (20); INSERT INTO TAB VALUES (10); INSERT INTO TAB VALUES (50); INSERT INTO TAB VALUES (35); INSERT INTO TAB VALUES (17); ``` So now we define your variable to reduce the [Value]: ``` DECLARE @var int SET @var = 5 ``` And now you can query your table: ``` SELECT [No], CASE WHEN [Value] - @var < 0 THEN 0 ELSE [Value] - @var END AS [Value] FROM TAB ``` Very easy. You can set your variable to 50 an try again. Here is a [fiddle](http://sqlfiddle.com/#!3/e8eb64/4) for this example.
How to reduce the value of a column row by row with a variable?
[ "", "sql", "sql-server", "row", "" ]
I have a database in SQL Server with a lot of tables and wish to export all tables in csv format. From a very similar question asked previously - [Export from SQL Server 2012 to .CSV through Management Studio](https://stackoverflow.com/questions/17471994/export-from-sql-server-2012-to-csv-through-management-studio) > Right click on your database in management studio and choose Tasks -> > Export Data... > > Follow a wizard, and in destination part choose 'Flat File > Destination'. Type your file name and choose your options. What I want is the capability to export all tables at once. The SQL Server Import and Export Wizard only permits one table at a time. This is pretty cumbersome, if you have a very big database. I think a simpler solution might involve writing a query, but not sure.
The export wizard allows only one at a time. I used the powershell script to export all my tables into csv. Please try this if it helps you. ``` $server = "SERVERNAME\INSTANCE" $database = "DATABASE_NAME" $tablequery = "SELECT s.name as schemaName, t.name as tableName from sys.tables t inner join sys.schemas s ON t.schema_id = s.schema_id" #Delcare Connection Variables $connectionTemplate = "Data Source={0};Integrated Security=SSPI;Initial Catalog={1};" $connectionString = [string]::Format($connectionTemplate, $server, $database) $connection = New-Object System.Data.SqlClient.SqlConnection $connection.ConnectionString = $connectionString $command = New-Object System.Data.SqlClient.SqlCommand $command.CommandText = $tablequery $command.Connection = $connection #Load up the Tables in a dataset $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter $SqlAdapter.SelectCommand = $command $DataSet = New-Object System.Data.DataSet $SqlAdapter.Fill($DataSet) $connection.Close() # Loop through all tables and export a CSV of the Table Data foreach ($Row in $DataSet.Tables[0].Rows) { $queryData = "SELECT * FROM [$($Row[0])].[$($Row[1])]" #Specify the output location of your dump file $extractFile = "C:\mssql\export\$($Row[0])_$($Row[1]).csv" $command.CommandText = $queryData $command.Connection = $connection $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter $SqlAdapter.SelectCommand = $command $DataSet = New-Object System.Data.DataSet $SqlAdapter.Fill($DataSet) $connection.Close() $DataSet.Tables[0] | Export-Csv $extractFile -NoTypeInformation } ``` Thanks
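The script above is PowerShell against SQL Server, but the pattern (enumerate the catalog, then dump each table) works in any stack. For illustration only, here is a self-contained Python/sqlite3 sketch that produces one CSV per table; it writes into in-memory buffers rather than files, and the tables `a` and `b` are made up:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INTEGER, y TEXT);
CREATE TABLE b (z INTEGER);
INSERT INTO a VALUES (1, 'one');
INSERT INTO b VALUES (7);
""")

def export_all_tables(conn):
    # Enumerate user tables from the catalog, then dump each one as CSV text
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
    exports = {}
    for table in tables:
        cur = conn.execute(f'SELECT * FROM "{table}"')
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow([d[0] for d in cur.description])  # header row
        writer.writerows(cur)
        exports[table] = buf.getvalue()
    return exports

files = export_all_tables(conn)
print(sorted(files))            # ['a', 'b']
print(files['a'].splitlines())  # ['x,y', '1,one']
```

Swapping `io.StringIO` for `open(f"{table}.csv", "w", newline="")` writes real files.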
Instead of clicking `Export Data`, choose `Generate Scripts`. Select the tables you want, click next and click the `Advanced` button. The last option under `General` is `Types of data to script`. Chose `Schema and data` or just `Data`.
SQL Server Management Studio 2012 - Export all tables of database as csv
[ "", "sql", "sql-server", "ssms", "" ]
I have a table like this: *(Pay attention to the sequence numbers for every word)* ``` // tablename +---------+----------+ | word | number | ---------------------- | jack | 1 | | jack | 2 | | jack | 3 | | ali | 1 | | ali | 2 | | ali | 3 | | ali | 4 | | ali | 5 | | peter | 1 | | peter | 2 | | peter | 3 | | peter | 4 | | raj | 1 | | raj | 2 | +---------+----------+ ``` Now i need a query to select a range. My range is *(for example)*: ``` // Range Start select * from tablename where word='ali' and number='2'; // Range End select * from tablename where word='peter' and number='3'; ``` In fact I want below result: ``` +---------+----------+ | word | number | ---------------------- | ali | 2 | | ali | 3 | | ali | 4 | | ali | 5 | | peter | 1 | | peter | 2 | | peter | 3 | +---------+----------+ ``` How can I do that ?
I have to head out but if your data looks like this with the ID column: ``` +---------+----------+----+ | word | number | ID | --------------------------- | jack | 1 | 1 | | jack | 2 | 2 | | jack | 3 | 3 | | ali | 1 | 4 | | ali | 2 | 5 | | ali | 3 | 6 | | ali | 4 | 7 | | ali | 5 | 8 | | peter | 1 | 9 | | peter | 2 | 10 | | peter | 3 | 11 | | peter | 4 | 12 | | raj | 1 | 13 | | raj | 2 | 14 | +---------+----------+----+ ``` Then you could do this (this is just one way, there are most likely more optimal ways) ``` select * from table where id >= (select id from table where word='ali' and number='2') and id <= (select id from table where word='peter' and number='3') ```
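Assuming the `ID` column shown above, the anchor-row range query can be tried end to end with Python's sqlite3 (here an `INTEGER PRIMARY KEY` stands in for the ID column and is assigned 1..14 in insertion order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, word TEXT, number INTEGER)")
data = [('jack', 1), ('jack', 2), ('jack', 3),
        ('ali', 1), ('ali', 2), ('ali', 3), ('ali', 4), ('ali', 5),
        ('peter', 1), ('peter', 2), ('peter', 3), ('peter', 4),
        ('raj', 1), ('raj', 2)]
conn.executemany("INSERT INTO t (word, number) VALUES (?, ?)", data)
# Select everything between the two anchor rows, inclusive
rows = conn.execute("""
SELECT word, number FROM t
WHERE id BETWEEN (SELECT id FROM t WHERE word = 'ali' AND number = 2)
             AND (SELECT id FROM t WHERE word = 'peter' AND number = 3)
ORDER BY id
""").fetchall()
print(rows)
# [('ali', 2), ('ali', 3), ('ali', 4), ('ali', 5), ('peter', 1), ('peter', 2), ('peter', 3)]
```

This only works because the ids follow the same order as the data; if they do not, a separate ordering column is needed.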
If you have an Id column, this is a simple way to do what you need ``` SELECT word, number FROM Test WHERE id BETWEEN (SELECT id FROM Test WHERE word = 'ali' AND number = '2') AND (SELECT id FROM Test WHERE word = 'peter' AND number = '3'); ``` [Here](http://sqlfiddle.com/#!9/1c199/2) you have a working example in SQLFiddle Hope this helps
How to select a range of my table
[ "", "mysql", "sql", "select", "range", "" ]
I have a date column `event_timestamp` in a table: 2015-06-01 15:23:31.000 2015-06-01 15:25:21.000 2015-06-03 09:00:41.000 2015-06-03 09:14:49.000 2015-06-03 09:15:03.000 2015-06-03 09:15:23.000 2015-06-06 08:40:06.000 2015-06-06 08:40:19.000 2015-06-06 11:13:35.000 2015-06-06 11:13:53.000 2015-06-06 11:15:04.000 2015-06-06 11:15:30.000 2015-06-09 15:08:13.000 2015-06-09 15:08:33.000 2015-06-09 15:08:45.000 2015-06-09 15:09:05.000 I want only the distinct dates: 2015-06-01 2015-06-03 2015-06-06 2015-06-09. I also want the max date (2015-06-09) in one column and the distinct date just before it (2015-06-06) in another column, both taken from the same event_timestamp column. I tried: ``` SELECT CONVERT(datetime,MAX(myDate),103), (MAX(CONVERT(datetime,mydate,103)))-1 as DT FROM (SELECT DISTINCT (CONVERT(VARCHAR,events.event_timestamp, 103)) myDate FROM events where event_timestamp Between '01-Jun-15 11:14:40 AM' AND '11-Jun-15 11:14:40 AM' ) tbl ``` but I cannot get 2015-06-06 in one column and 2015-06-09 in the other.
using sql server ``` WITH cteEvents AS ( SELECT CONVERT(VARCHAR,event_timestamp,101) event_date, RANK()OVER(ORDER BY CONVERT(VARCHAR,event_timestamp,101) DESC) rnk FROM events WHERE event_timestamp BETWEEN '01-Jun-15 11:14:40 AM' AND '11-Jun-15 11:14:40 AM' GROUP BY CONVERT(VARCHAR,event_timestamp,101) ) SELECT event_date, (SELECT event_date FROM cteEvents WHERE rnk = 2), (SELECT event_date FROM cteEvents WHERE rnk = 1) FROM cteEvents ```
Stick with ISO-standard date formats and built-in types: ``` SELECT DISTINCT CONVERT(DATE, e.event_timestamp) myDate FROM events e WHERE e.event_timestamp Between '2015-06-01T11:14:40' AND '2015-06-11T11:14:40' ``` This will probably fix your problem.
Get max date and exact previous from max date from a given date range sql server
[ "", "sql", "datetime", "" ]
I need to select the last 12 months. As you can see in the picture, May occurs two times, but I only want it to occur once, and it needs to be the newest one. Plus, the table should stay in this structure, with the latest month at the bottom. ![enter image description here](https://i.stack.imgur.com/Uxw5U.png) And this is the query: ``` SELECT Monat2, Monat, CASE WHEN NPLAY_IND = '4P' THEN 'QuadruplePlay' WHEN NPLAY_IND = '3P' THEN 'TriplePlay' WHEN NPLAY_IND = '2P' THEN 'DoublePlay' WHEN NPLAY_IND = '1P' THEN 'SinglePlay' END AS Series, Anzahl as Cnt FROM T_Play_n where NPLAY_IND != '0P' order by Series asc ,Monat ``` This is the new query: ``` SELECT sub.Monat2,sub.Monat, CASE WHEN NPLAY_IND = '4P' THEN 'QuadruplePlay' WHEN NPLAY_IND = '3P' THEN 'TriplePlay' WHEN NPLAY_IND = '2P' THEN 'DoublePlay' WHEN NPLAY_IND = '1P' THEN 'SinglePlay' END AS Series, Anzahl as Cnt FROM (SELECT ROW_NUMBER () OVER (PARTITION BY Monat2 ORDER BY Monat DESC)rn, Monat2, Monat, Anzahl, NPLAY_IND FROM T_Play_n)sub where sub.rn = 1 ``` It now shows each month only once, but it doesn't do that per Series. Each Play series should still have its own 12 months. ![enter image description here](https://i.stack.imgur.com/fiDNe.png)
In `Oracle` and `SQL-Server` you can use `ROW_NUMBER`. `name` = month name and `num` = month number: ``` SELECT sub.name, sub.num FROM (SELECT ROW_NUMBER () OVER (PARTITION BY name ORDER BY num DESC) rn, name, num FROM tab) sub WHERE sub.rn = 1 ORDER BY num DESC; ```
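For this simple "newest row per month name" case, a plain `GROUP BY` with `MAX` gives the same result as `ROW_NUMBER`, which is handy on engines without window functions. A runnable sqlite sketch (the `month_name` and `ym` columns are made-up stand-ins for `Monat2` and `Monat`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (month_name TEXT, ym TEXT);
INSERT INTO t VALUES ('May', '2015-05'), ('Jun', '2015-06'),
                     ('May', '2016-05'), ('Apr', '2016-04');
""")
# One row per month name, keeping the newest occurrence, oldest first
rows = conn.execute("""
SELECT month_name, MAX(ym) AS newest FROM t
GROUP BY month_name
ORDER BY MAX(ym)
""").fetchall()
print(rows)  # [('Jun', '2015-06'), ('Apr', '2016-04'), ('May', '2016-05')]
```

The duplicate 'May' collapses to its 2016 occurrence, and the ordering keeps the latest month at the bottom, as the question asked.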
``` WITH R(N) AS ( SELECT 0 UNION ALL SELECT N+1 FROM R WHERE N < 12 ) SELECT LEFT(DATENAME(MONTH,DATEADD(MONTH,-N,GETDATE())),3) AS [month] FROM R ``` The `With R(N)` is a Common Table Expression.The R is the name of the result set (or table) that you are generating. And the N is the month number.
How to select the last 12 months in sql?
[ "", "sql", "sql-server", "select", "" ]
I have two tables which are not fully equal but similar. They look like this: ``` CREATE TABLE FIRST_TABLE( FIRST_ID RAW(16) NOT NULL CONSTRAINT FIRST_PK PRIMARY KEY, FIRST_NAME VARCHAR2(2000), FIRST_VALID NUMBER(1) NOT NULL, AUDIT_CRE_AT TIMESTAMP(0) DEFAULT CURRENT_TIMESTAMP NOT NULL, AUDIT_CRE_FROM VARCHAR2(32) DEFAULT 'system' NOT NULL, CONSTRAINT FIRST_VALID_CHK CHECK (FIRST_VALID IN (0,1)) ); CREATE TABLE SECOND_TABLE( SECOND_ID RAW(16) NOT NULL CONSTRAINT SECOND_PK PRIMARY KEY, SECOND_NAME VARCHAR2(2000), AUDIT_CRE_AT TIMESTAMP(0) DEFAULT CURRENT_TIMESTAMP NOT NULL, AUDIT_CRE_FROM VARCHAR2(32) DEFAULT 'system' NOT NULL ); ``` Now I want to migrate all records from the `FIRST_TABLE` into the `SECOND_TABLE` if the record is valid. A record is valid if `FIRST_VALID` is `1`. This is what I want to do: * Select all valid records from FIRST\_TABLE `SELECT * FROM FIRST_TABLE f WHERE f.FIRST_VALID like '1';` * Loop through all records I got from the select query and `INSERT` everything except `FIRST_VALID` `INTO` the `SECOND_TABLE` and `DELETE` them. That's where I run into problems. This is what I have: ``` DECLARE v_count_valids NUMBER(10):= 0; v_i NUMBER(10):=0; BEGIN SELECT COUNT(*) INTO v_count_valids FROM FIRST_TABLE f WHERE f.FIRST_VALID like '1'; WHILE v_i <= v_count_valids LOOP --INSERT INTO SECOND_TABLE VALUES (current record...) --DELETE current record v_i := v_i + 1; END LOOP; END; ``` Can you give me a hint how I can access the current record in the loop to insert some values into the other table? Or is there a better way to do this?
Just use `insert . . . select`: ``` insert into second_table(second_id, second_name, audit_cre_at, audit_cre_from) SELECT first_id, first_name, audit_cre_at, audit_cre_from FROM FIRST_TABLE f WHERE f.FIRST_VALID = 1; ``` You don't need PL/SQL for this. Also, when comparing numbers, use numeric constants and `=`, not `like` and strings.
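The question also wanted the migrated rows deleted afterwards. A sketch of the full copy-then-delete migration in one transaction, shown with Python's sqlite3 purely for the sake of a runnable example (simplified columns, same `first_valid = 1` rule):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE first_table (first_id INTEGER, first_name TEXT, first_valid INTEGER);
CREATE TABLE second_table (second_id INTEGER, second_name TEXT);
INSERT INTO first_table VALUES (1, 'a', 1), (2, 'b', 0), (3, 'c', 1);
""")
with conn:  # one transaction: copy the valid rows, then remove them
    conn.execute("""
        INSERT INTO second_table (second_id, second_name)
        SELECT first_id, first_name FROM first_table WHERE first_valid = 1
    """)
    conn.execute("DELETE FROM first_table WHERE first_valid = 1")
migrated = conn.execute("SELECT second_id FROM second_table ORDER BY second_id").fetchall()
left = conn.execute("SELECT first_id FROM first_table").fetchall()
print(migrated, left)  # [(1,), (3,)] [(2,)]
```

Keeping both statements in a single transaction matters: if the delete fails, the copy rolls back too, so no row is lost or duplicated.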
You don't need a loop. You could use an insert-select statement instead: ``` INSERT INTO second_table (second_id, second_name, audit_cre_at, audit_cre_from) SELECT first_id, first_name, audit_cre_at, audit_cre_from FROM first_table WHERE valid = 1 ```
Migrate records to another table
[ "", "sql", "oracle", "plsql", "" ]
I want to display a List of Instructors with only first name and last name, from Georgia and whose last name ends in ‘son’. I have written this so far: ``` SELECT firstname, lastname, state FROM instructors WHERE state = 'ga' AND lastname LIKE '%son' ``` But it is not returning the information requested just a blank table. Any help would be greatly appreciated
FINALLY !!!!! I Figured It Out! ``` SELECT firstname, lastname, state FROM instructors WHERE state = 'ga' AND lastname LIKE '*son*' ``` Instead of using the %WildCard I inserted \*WildCard on both ends and it displayed exactly what I was requesting ! Thanks Everyone For Your Help!
To find names ending in 'son' you need to make a small change and remove the second '%' sign. with both it looks for 'son' any where such as 'sonnentag' The second one I would guess that the DB has Georgia as 'GA' not 'ga'. Case is important. ``` SELECT firstname, lastname, state FROM instructors WHERE state = 'GA' AND lastname LIKE '*son' ```
How To Use The Where & Like Operator Together in SQL?
[ "", "sql", "ms-access", "where-clause", "sql-like", "" ]
I'm not very good at SQL, and I have a very peculiar request. My table looks something like this: ``` FOO BAR ----+---- foo1 bar1 foo2 bar3 foo1 bar1 foo1 bar1 foo2 bar3 foo4 bar3 foo3 bar2 foo2 bar4 foo5 bar4 ``` I can easily count the number of each different "bar" entry with ``` SELECT bar, COUNT(*) as barcount FROM table GROUP BY bar ORDER BY barcount ``` which gives me ``` BAR barcount ----+---- bar1 3 bar2 1 bar3 3 bar4 2 ``` but what I'm trying to achieve is a table telling me how many "bars" have a barcount of 1, how many have a barcount of 2, how many have a barcount of 3, etc. The result I need is this, to keep it simple: ``` barcount occurrences --------+----------- 1 1 2 1 3 2 ``` Is it possible to do this in a single SQL query, or would I have to rely on some code?
If you need to nest aggregates you must use a Derived Table (or Common Table Expression): ``` select barcount, count(*) as occurrences from ( SELECT bar, COUNT(*) as barcount FROM table GROUP BY bar ) as dt group by barcount ORDER BY barcount ```
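Since the question is about SQLite, the derived-table query can be verified directly with Python's built-in sqlite3 module, using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (foo TEXT, bar TEXT);
INSERT INTO t VALUES
  ('foo1','bar1'),('foo2','bar3'),('foo1','bar1'),('foo1','bar1'),
  ('foo2','bar3'),('foo4','bar3'),('foo3','bar2'),('foo2','bar4'),('foo5','bar4');
""")
# Inner query: count per bar; outer query: count how many bars share each count
rows = conn.execute("""
SELECT barcount, COUNT(*) AS occurrences
FROM (SELECT bar, COUNT(*) AS barcount FROM t GROUP BY bar) AS dt
GROUP BY barcount
ORDER BY barcount
""").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 2)]
```

One bar occurs once, one occurs twice, and two bars (bar1 and bar3) occur three times, matching the desired output.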
``` select barcount, count(*) as occurences from ( SELECT bar, COUNT(*) as barcount FROM your_table GROUP BY bar ) tmp group by barcount ```
SQLite: Count the number of similar COUNT(*) rows resulting from a GROUP_BY statement
[ "", "sql", "sqlite", "" ]
I'm trying to do something similar to this: ``` CASE WHEN number IN (1,2,3) THEN 'Y' ELSE 'N' END; ``` Instead I want to have a query in the place of the list, like so: ``` CASE WHEN number IN (SELECT num_val FROM some_table) THEN 'Y' ELSE 'N' END; ``` I can't seem to get this to work. Also, here is an example of the query. ``` SELECT number, (CASE WHEN number IN (SELECT num_val FROM some_table) THEN 'Y' ELSE 'N' END) AS YES_NO FROM some_other_table; ```
Yes, it's possible. See an example below that would do what you are intending. The difference is that it uses `EXISTS` instead of `IN`. ``` SELECT a.number, (CASE WHEN EXISTS (SELECT null FROM some_table b where b.num_val = a.number) THEN 'Y' ELSE 'N' END) AS YES_NO FROM some_other_table a; ``` **EDIT:** I confess: I like the answers given by the others better personally. However, there will be a difference between this query and the others depending on your data. If for a value `number` in the table `some_other_table` you can have many matching entries of `num_val` in the table `some_table`, then the other answers will return duplicate rows. This query will not. That said, if you take the `left join` queries given by the others, and add a `group by`, then you won't get the duplicates.
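A runnable sketch of the `EXISTS` version using Python's sqlite3 (the data is made up; note the duplicate `num_val = 1` rows, which would produce duplicate output rows with a plain join but not here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE some_table (num_val INTEGER);
CREATE TABLE some_other_table (number INTEGER);
INSERT INTO some_table VALUES (1), (1), (3);
INSERT INTO some_other_table VALUES (1), (2), (3), (4);
""")
# CASE WHEN EXISTS(...) flags each number as present or absent in some_table
rows = conn.execute("""
SELECT a.number,
       CASE WHEN EXISTS (SELECT 1 FROM some_table b WHERE b.num_val = a.number)
            THEN 'Y' ELSE 'N' END AS yes_no
FROM some_other_table a
ORDER BY a.number
""").fetchall()
print(rows)  # [(1, 'Y'), (2, 'N'), (3, 'Y'), (4, 'N')]
```

Even though `num_val = 1` appears twice, the number 1 is still reported exactly once.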
I suggest using an OUTER JOIN instead of trying to use a subquery in a CASE expression: ``` SELECT t.NUMBER, CASE WHEN s.NUM_VAL IS NOT NULL THEN 'Y' ELSE 'N' END AS YES_NO FROM SOME_OTHER_TABLE t LEFT OUTER JOIN SOME_TABLE s ON s.NUM_VAL = t.NUMBER ``` Best of luck.
Can Oracle PL/SQL CASE statement include a SELECT query?
[ "", "sql", "oracle", "plsql", "case", "plsqldeveloper", "" ]
I have a table like this: ``` | id | name | description | publish 1 B text 1 2 B text2 1 3 B3 text3 1 4 B4 text4 1 5 B5 text 1 ``` Two or more records may be duplicates in name or description. I want a SQL query that finds all duplicates and then sets `publish = 0` for all but one! Something like this algorithm to select: ``` update table (set publish = 0 for all records of each group of duplicates except the lowest id in that group) where (name = name or description = description) AND publish = 1 ``` For example, after running the query I must have this: ``` | id | name | description | publish 1 B text 1 // no change! 2 B text2 0 3 B3 text3 1 4 B4 text4 1 5 B5 text 0 ``` Or for this before: ``` | id | name | description | publish 1 B text 1 2 B text2 1 3 B3 text 1 4 B4 text4 1 5 B5 text5 1 6 B5 text6 1 7 B5 text7 1 ``` And after: ``` | id | name | description | publish 1 B text 1 // no change 2 B text2 0 3 B3 text 0 4 B4 text4 1 5 B5 text5 1 // no change 6 B5 text6 0 7 B5 text7 0 ```
Assuming the table name is `temp1`: ``` update temp1 set publish=0 where id in (select x.id from (select a.id from temp1 a, temp1 b where a.id<>b.id and (a.name=b.name or a.description=b.description) and a.id>b.id group by a.id) as x); ``` Check here: <http://www.sqlfiddle.com/#!9/19863/1> This will work fine.
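The same self-join idea can be checked end to end with Python's sqlite3 and the question's first example; the subquery collects every id that has a lower-id row sharing its name or description:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE temp1 (id INTEGER, name TEXT, description TEXT, publish INTEGER);
INSERT INTO temp1 VALUES
  (1,'B','text',1),(2,'B','text2',1),(3,'B3','text3',1),
  (4,'B4','text4',1),(5,'B5','text',1);
""")
# A row is a duplicate if any lower-id row shares its name or description
conn.execute("""
UPDATE temp1 SET publish = 0
WHERE id IN (SELECT a.id FROM temp1 a JOIN temp1 b
             ON a.id > b.id
             AND (a.name = b.name OR a.description = b.description))
""")
published = conn.execute("SELECT id, publish FROM temp1 ORDER BY id").fetchall()
print(published)  # [(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)]
```

Row 2 duplicates row 1's name and row 5 duplicates row 1's description, so both are unpublished while the lowest-id copy survives.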
using `"pubs"` as table name ``` UPDATE pubs LEFT JOIN ( SELECT MIN(id) minId FROM pubs GROUP BY name ) minPubs ON minPubs.minId = pubs.id SET pubs.publish = 0 WHERE minPubs.minId IS NULL; ``` <http://sqlfiddle.com/#!9/54fae/1>
How to group duplicate records and then update all members except lowest id for each group?
[ "", "mysql", "sql", "" ]
I have a table named `boolean` which contains 'true' and/or 'false' values as strings in one column. I am having trouble creating a `case` statement that shows whether there are only 'true' values, only 'false' values, or 'both'. Example 1: ``` 'true' 'true' ``` result: 'true' Example 2: ``` 'false' 'false' 'false' ``` Result: 'false' Example 3: ``` 'true' 'false' 'true' ``` Result: 'both' Edit: the `case` statement should look like: ``` case when "column content is only true values" then 'true' when "column content is only false values" then 'false' else 'both' end ```
You could aggregate the `max` and `min` of the column, and then evaluate the results - if they are the same, there's only one value in the column. If not, there must be both. Note that since these are string representations the values are sorted lexicographically: ``` SELECT CASE WHEN MAX(col) = MIN(col) THEN MAX(col) ELSE 'both' END FROM my_table ```
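The `MIN`/`MAX` trick is portable across engines; here is a quick runnable check with Python's sqlite3 against the question's three examples (the `summarize` helper is my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (col TEXT)")

def summarize(values):
    # Reload the single-column table, then apply the MIN/MAX comparison
    conn.execute("DELETE FROM b")
    conn.executemany("INSERT INTO b VALUES (?)", [(v,) for v in values])
    return conn.execute(
        "SELECT CASE WHEN MAX(col) = MIN(col) THEN MAX(col) ELSE 'both' END FROM b"
    ).fetchone()[0]

print(summarize(['true', 'true']))             # true
print(summarize(['false', 'false', 'false']))  # false
print(summarize(['true', 'false', 'true']))    # both
```

If min and max agree, the column holds a single distinct value; otherwise both values must be present.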
``` SELECT CASE WHEN MIN(Col) <> MAX(Col) THEN 'Both' ELSE MIN(Col) END FROM YourTable ```
SQL Server : case statement
[ "", "sql", "sql-server", "select", "case", "" ]
I have a table ``` StudentID StudentName Subject Marks 1 Savita EC1 50 1 Savita EC2 55 1 Savita EC3 45 1 Savita EC4 34 1 Savita EC5 23 2 Rajesh EC1 34 2 Rajesh EC2 56 2 Rajesh EC3 12 2 Rajesh EC4 45 2 Rajesh EC5 23 3 Smita EC1 76 3 Smita EC2 45 3 Smita EC3 67 3 Smita EC4 56 3 Smita EC5 76 4 Rahul EC1 66 4 Rahul EC2 34 4 Rahul EC3 22 4 Rahul EC4 18 4 Rahul EC5 33 ``` I wrote a query like ``` SELECT StudentName, EC1,EC2,EC3,EC4,EC5,TotalMarks, case when EC1<30 and ec2<30 then 'fail' when EC1<30 and EC3<30 then 'fail' when EC1<30 and EC4<30 then 'fail' when EC1<30 and EC5<30 then 'fail' when EC2<30 and EC3<30 then 'fail' when EC2<30 and EC4<30 then 'fail' when EC2<30 and EC5<30 then 'fail' when EC3<30 and EC4<30 then 'fail' when EC3<30 and EC5<30 then 'fail' when EC4<30 and EC5<30 then 'fail' else 'pass' end as Result FROM (SELECT StudentName, EC1, EC2, EC3, EC4, EC5, TotalMarks=EC1+EC2+EC3+EC4+EC5 FROM Student PIVOT(sum(Marks) for subject in([EC1],[EC2],[EC3],[EC4],[EC5],[TotalMarks]))as pivotTable) A ``` Which gives an output of students who have less than 30 marks in 2 subjects as fail or else pass ``` Rahul 66 34 22 18 33 173 fail Rajesh 34 56 12 45 23 170 fail Savita 50 55 45 34 23 207 pass Smita 76 45 67 56 76 320 pass ``` I want to add 7 marks to each subject which is less than 30 and see the pass fail students after adding 7 marks. eg-After adding 7 marks rajesh record should be like `Rajesh 34 56 19 45 30 170 pass`
You can try this ``` SELECT StudentName, CASE WHEN EC1 < 30 THEN EC1 + 7 ELSE EC1 END AS EC1, CASE WHEN EC2 < 30 THEN EC2 + 7 ELSE EC2 END AS EC2, CASE WHEN EC3 < 30 THEN EC3 + 7 ELSE EC3 END AS EC3, CASE WHEN EC4 < 30 THEN EC4 + 7 ELSE EC4 END AS EC4, CASE WHEN EC5 < 30 THEN EC5 + 7 ELSE EC5 END AS EC5, Total = (EC1 + EC2 + EC3 + EC4 + EC5), CASE WHEN EC1 < 23 AND EC2 < 23 THEN 'FAIL' WHEN EC1 < 23 AND EC3 < 23 THEN 'FAIL' WHEN EC1 < 23 AND EC4 < 23 THEN 'FAIL' WHEN EC1 < 23 AND EC5 < 23 THEN 'FAIL' WHEN EC2 < 23 AND EC3 < 23 THEN 'FAIL' WHEN EC2 < 23 AND EC4 < 23 THEN 'FAIL' WHEN EC2 < 23 AND EC5 < 23 THEN 'FAIL' WHEN EC3 < 23 AND EC4 < 23 THEN 'FAIL' WHEN EC3 < 23 AND EC5 < 23 THEN 'FAIL' WHEN EC4 < 23 AND EC5 < 23 THEN 'FAIL' ELSE 'PASS' END AS Result FROM ( SELECT * FROM Student ) AS ST PIVOT ( SUM(Marks) For [Subject] IN (EC1, EC2, EC3, EC4, EC5) ) AS PV ``` **Output** ``` Rahul 66 34 29 25 33 173 FAIL Rajesh 34 56 19 45 30 170 PASS Savita 50 55 45 34 30 207 PASS Smita 76 45 67 56 76 320 PASS ```
Maybe this is something you're looking for: ``` SELECT A.StudentName, EC1,EC2,EC3,EC4,EC5,Total, case when fail2 >= 2 then 'Failure' when fail >= 2 then 'Near Pass' else 'Pass' end as Result FROM ( SELECT StudentName, EC1, EC2, EC3, EC4, EC5 FROM Student PIVOT(sum(Marks) for subject in([EC1],[EC2],[EC3],[EC4],[EC5]))as pt) A, ( select studentName, sum(case when Marks < 30 then 1 else 0 end) as fail, sum(case when Marks < 23 then 1 else 0 end) as fail2, sum(case when Marks >= 30 then 1 else 0 end) as pass, sum(marks) as total from student group by studentname ) B where A.StudentName = B.StudentName ``` I removed your comparison logic that listed all the failure combinations and replaced it with sum + group by + case on the original table, so that you can determine the counts of fails, near passes and passes for each student without having to list all the cases separately. You can test this in [SQL Fiddle](http://sqlfiddle.com/#!3/608eb/11)
Using if else block in pivot query
[ "sql", "sql-server" ]
I want to retrieve a few rows from SQL Server filtered by the current date only. The purpose is to view today's transactions only. Here is my code ``` SELECT * FROM dbo.Student.studentProfile WHERE TransactionDate = curdate() ``` RESULT > Curdate is not a recognized built-in function name
If you are looking for the current date: ``` WHERE TransactionDate = cast(getdate() as date) ``` Or if you prefer ANSI standards: ``` WHERE TransactionDate = cast(CURRENT_TIMESTAMP as date) ```
Different SQL implementations (e.g. SQL Server, MySQL, PostgreSQL) support different built-in functions. For SQL Server the function you want to use is GETDATE() instead of CURDATE(). The documentation for this function is here: <https://msdn.microsoft.com/en-us/library/ms188383.aspx>
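To make that concrete, here is a minimal sketch of the corrected query. The table and column names are taken from the question itself; since GETDATE() returns a datetime, both sides are cast to date so that the comparison matches calendar days rather than exact timestamps:

```sql
-- Match every row stamped at any time today, regardless of time-of-day.
SELECT *
FROM dbo.Student.studentProfile
WHERE CAST(TransactionDate AS date) = CAST(GETDATE() AS date);
```

In SQL Server, casting a datetime column to date this way can still use an index on TransactionDate, so it is usually a safe pattern for "today only" filters.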
sql server query current date from database
[ "sql", "sql-server", "date" ]
I have three tables for listing products with product attributes **Product Table** with dummy data ![enter image description here](https://i.stack.imgur.com/hoTcu.png) ![enter image description here](https://i.stack.imgur.com/ttBNs.png) **Product\_Attributes** with dummy data ![enter image description here](https://i.stack.imgur.com/oLzyh.png) ![enter image description here](https://i.stack.imgur.com/VlpGa.png) **Attributes** with dummy data ![enter image description here](https://i.stack.imgur.com/qKYzq.png) ![enter image description here](https://i.stack.imgur.com/GNFw7.png) Kespersky antivirus (productid = 1) has no attributes but the iPhone (productid =2) has two attributes applicable to it, memory and resolution both in **Attribute** table which has its value stored in **Product\_Attribute** table. How do I join these tables to show/display both the products with there corresponding attributes? **EDIT** I need to display these products as ![enter image description here](https://i.stack.imgur.com/VPQSB.png)
The following will work for any number of attributes: ``` select product.productId, product.name, group_concat(concat(attr.attributeName, ":", pa.attributeValue)) from product left outer join product_attributes pa on (pa.productId = product.productId) left outer join attributes attr on (attr.attributeId = pa.attributeId) group by product.productId, product.name ```
Your question requires a pivot, which needs to be predefined. Meaning, if you want to include 2 extra COLUMNS in your result set, your query can then only show up to 2 attributes. This is a PRESENTATION layer problem, not a query layer one. But alas, I have a general solution for you. It assumes you will have a max number of 2 attributes (for the reasons stated above). Here is the query: ``` SELECT P.ProductName, A.AttributeName, PA.AttributeValue, B.AttributeName, PB.AttributeValue FROM lb_products P LEFT JOIN (select row_number() over (partition by productID order by AttributeID asc) rn, * from lb_product_attributes x) PA ON P.ProductID = PA.ProductID and PA.rn = 1 LEFT JOIN (select row_number() over (partition by productID order by AttributeID asc) rn, * from lb_product_attributes x) PB ON P.ProductID = PB.ProductID and PB.rn = 2 LEFT JOIN lb_attributes A ON PA.AttributeID = A.AttributeID LEFT JOIN lb_attributes B ON PB.AttributeID = B.AttributeID; ``` And here is the SQL Fiddle for you to play around with. Good luck! And feel free to ask any questions :) <http://sqlfiddle.com/#!6/49a9e0/5>
Joining product attributes table with the product table to display product
[ "mysql", "sql", "database", "join", "left-join" ]
I have a Sql Procedure that receives a string as a parameter. Now the task that I need to perform is that the parameter contains data that is separated by two different kinds of delimiters i.e **":"** and **","** There are two columns in the DB **Id** and **Value**. ``` sample Data: "10:0,11:1,12:3,13:4,15:5,16:6" In This case Ids are: 10,11,12,13,14,15,16 and their respective values are: 1,2,3,4,5,6 ``` Now I want to insert these values in DB. Can you please suggest a solution?
You can use following function: ``` CREATE FUNCTION Split ( @InputString VARCHAR(8000), @Delimiter VARCHAR(50) ) RETURNS @Items TABLE ( Item VARCHAR(8000) ) AS BEGIN IF @Delimiter = ' ' BEGIN SET @Delimiter = ',' SET @InputString = REPLACE(@InputString, ' ', @Delimiter) END IF (@Delimiter IS NULL OR @Delimiter = '') SET @Delimiter = ',' DECLARE @Item VARCHAR(8000) DECLARE @ItemList VARCHAR(8000) DECLARE @DelimIndex INT SET @ItemList = @InputString SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0) WHILE (@DelimIndex != 0) BEGIN SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex) INSERT INTO @Items VALUES (@Item) -- Set @ItemList = @ItemList minus one less item SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex) SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0) END -- End WHILE IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString BEGIN SET @Item = @ItemList INSERT INTO @Items VALUES (@Item) END -- No delimiters were encountered in @InputString, so just return @InputString ELSE INSERT INTO @Items VALUES (@InputString) RETURN END -- End Function GO CREATE TABLE #Test ( Item NVARCHAR(1000) ) INSERT INTO #Test SELECT * FROM Split('10:0,11:1,12:3,13:4,15:5,16:6', ':') SELECT f.* FROM #Test t CROSS APPLY Split(t.Item, ',') f DROP TABLE #Test ```
``` IF OBJECT_ID('tempdb..#Test') IS NOT NULL DROP TABLE #Test GO CREATE TABLE #Test(ID INT,Val INT) DECLARE @t table (val varchar(50)) INSERT INTO @t (val)values ('10:0,11:1,12:3,13:4,15:5,16:6') ;WITH CTE AS ( SELECT Split.a.value('.', 'VARCHAR(100)') AS String FROM (SELECT CAST ('<M>' + REPLACE([val], ',', '</M><M>') + '</M>' AS XML) AS String FROM @t) AS A CROSS APPLY String.nodes ('/M') AS Split(a)) INSERT INTO #Test select SUBSTRING(String,0,CHARINDEX(':',String)),REVERSE(SUBSTRING(reverse(String),0,CHARINDEX(':',reverse(String)))) from cte select * from #test ```
Use data separated by delimiters in Sql Procedure
[ "sql", "sql-server", "sql-server-2008", "stored-procedures" ]
I have a query that is supposed to pull the highest primary key id for a specific request code: ``` SELECT id FROM [QTRA410].[Admin].[qt_request] WHERE id IN (SELECT Max(id) FROM [QTRAX4619410].[QTRAXAdmin].[qt_request]) AND requestcode = 'FOREMAN'; ``` Here is the current data in the table: ![screenshot1](https://i.stack.imgur.com/5h7Cb.png) However the query is returning nothing at all and I don't understand why: ![screenshot2](https://i.stack.imgur.com/6HF9z.png) Strangely when I search for the request code 'JOB' it works fine: ![Screenshot3](https://i.stack.imgur.com/xw2p2.png)
You might also try this in case [QTRAX4619410].[QTRAXAdmin].[qt\_request] does not contain the requestcode: ``` SELECT id FROM [QTRA410].[Admin].[qt_request] WHERE id IN (SELECT Max(r1.id) FROM [QTRAX4619410].[QTRAXAdmin].[qt_request] r1 join [QTRA410].[Admin].[qt_request] r2 on r1.id = r2.id AND r2.requestcode = 'FOREMAN') ```
Try this: ``` SELECT id FROM [QTRA410].[Admin].[qt_request] WHERE id IN (SELECT Max(id) FROM [QTRAX4619410].[QTRAXAdmin].[qt_request] WHERE requestcode = 'FOREMAN') ```
Getting highest primary key for specified column value
[ "sql", "sql-server" ]
I was writing a `Stored Procedure` today and Wrote the line: ``` SELECT pv1.Version FROM depl... ``` and the word `Version` turned blue shown below: ![enter image description here](https://i.stack.imgur.com/MQm4p.png) so I assumed it was a reserved word, so did some investigating here: [Reserved Keywords (TRANSACT-SQL)](https://msdn.microsoft.com/en-us/library/ms189822.aspx) But could not find the word `Version` in the list. Is `Version` a SQL reserved word, and if not why is my word `Version` displaying blue? I am using SQL Management Studio 2012
As you rightly noted, `Version` is not in the official reserved word list. It's just a "feature" of SQL Server Management Studio that is showing it in blue. There are many words like this. Here's a few more: ``` DESCRIPTION SERVER INSTEAD ``` There are even some words that show as pink such as `LOOKUP`.
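A quick (hypothetical) sanity check, if you want to convince yourself: none of these identifiers needs [bracket] quoting as a column name, which a truly reserved word would require. The table name here is just an example:

```sql
-- All of these column names compile without [brackets];
-- SSMS merely colors them in the editor.
CREATE TABLE dbo.ColorTest (Version int, Description nvarchar(50), Server nvarchar(50));
DROP TABLE dbo.ColorTest;
```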
"Version" is not a SQL Server reserved keyword. However, it is part of the name of the @@VERSION global variable, which returns the OS name and version, the SQL Server version, SQL Server patches and hardware attributes of the SQL Server being used. ``` SELECT @@VERSION ``` Perhaps the "Version" text turns blue in Transact-SQL because it is known to SQL Server in the context of this global variable. I'm not sure about that; this is just a theory.
Is "Version" a Reserved word in TRANSACT-SQL? (Shows Blue but not in Reserved word list)
[ "sql", "sql-server", "reserved-words" ]
I am struggling to find an elegant solution to this problem. I have 5 tables and their relationships are described in the image. ![enter image description here](https://i.stack.imgur.com/Yk6R7.jpg) A page can have multiple products and each product can have many ProductRates. A Page with specific Product could have many rates as well. To get around the many to many issue there is table PageToProductToRate. Users want to query on multiple conditions where the selection could be combination of any: * Product1 + Rate1 + rate attribute1 * Product1 + Rate1 + rate attribute2 * Product1 + Rate2 + rate attribute2 * Product2 + Rate3 + rate attribute1 etc... This is an example of data and WHERE condition and expected results: ![enter image description here](https://i.stack.imgur.com/VVPQe.jpg) And another: ![enter image description here](https://i.stack.imgur.com/ktI1z.jpg) The query that works for me uses INTERSECT to get the right results. I tried UNION but would get results not matching all of the conditions. 
``` SELECT DISTINCT P.[PageID] FROM [Page] P join PageToProduct p2p on p2p.[PageID] = P.[PageID] join Product pr on p2p.[Product] = pr.[Product] join PageToProductToRate p2p2r on p2p2r.[PageToProductID] = p2p.[PageToProductID] join ProductRates r on r.[ProductRatesID] = p2p2r.[ProductRatesID] WHERE (PR.[Product] = 'ALMOND' AND R.CommissionType = 'PREMIUM' AND R.Rate = 0.25) INTERSECT SELECT DISTINCT P.[PageID] FROM [Page] P join PageToProduct p2p on p2p.[PageID] = P.[PageID] join Product pr on p2p.[Product] = pr.[Product] join PageToProductToRate p2p2r on p2p2r.[PageToProductID] = p2p.[PageToProductID] join ProductRates r on r.[ProductRatesID] = p2p2r.[ProductRatesID] WHERE (PR.[Product] = 'WALNUT' AND R.CommissionType = 'SERVICE FEE' AND R.Rate = 1.25) INTERSECT SELECT DISTINCT P.[PageID] FROM [Page] P join PageToProduct p2p on p2p.[PageID] = P.[PageID] join Product pr on p2p.[Product] = pr.[Product] join PageToProductToRate p2p2r on p2p2r.[PageToProductID] = p2p.[PageToProductID] join ProductRates r on r.[ProductRatesID] = p2p2r.[ProductRatesID] WHERE (PR.[Product] = 'HAZELNUT' AND R.CommissionType = 'EXCESS' AND R.Rate = 1.68) ``` Is there a better way of going on around this problem? I could potentially have over a dozen of these conditions and the query with all the joins could get out of the control.
``` SELECT p2p.PageID FROM PageToProduct as p2p inner join Product as pr on p2p.Product = pr.Product inner join PageToProductToRate as p2p2r on p2p2r.PageToProductID = p2p.PageToProductID inner join ProductRates as r on r.ProductRatesID = p2p2r.ProductRatesID WHERE (pr.Product = 'ALMOND' AND r.CommissionType = 'PREMIUM' AND r.Rate = 0.25) OR (pr.Product = 'WALNUT' AND r.CommissionType = 'SERVICE FEE' AND r.Rate = 1.25) OR (pr.Product = 'HAZELNUT' AND r.CommissionType = 'EXCESS' AND r.Rate = 1.68) GROUP BY p2p.PageID HAVING COUNT(*) = 3; /* requires all three are present, as long as no rows are duplicate */ ```
My best guess. ``` WITH products AS ( SELECT [Product], [ProductRatesID] FROM Product p JOIN ProductRates pr ON p.[Product] = pr.[Product] WHERE (p.[Product] = 'ALMOND' AND pr.CommissionType = 'PREMIUM' AND pr.Rate = 0.25) OR (p.[Product] = 'WALNUT' AND pr.CommissionType = 'SERVICE FEE' AND pr.Rate = 1.25) OR (p.[Product] = 'HAZELNUT' AND pr.CommissionType = 'EXCESS' AND pr.Rate = 1.68) ) SELECT P.[PageID] FROM [Page] P JOIN ( SELECT p2p.[PageID], COUNT(*) as ProductCount FROM products pr JOIN PageToProduct p2p ON p2p.[Product] = pr.[Product] JOIN PageToProductToRate p2p2r on p2p2r.[PageToProductID] = p2p.[PageToProductID] WHERE p2p2r.[ProductRatesID] = pr.[ProductRatesID] GROUP BY p2p.[PageID] ) sq ON sq.[PageID] = p.[PageID] WHERE sq.ProductCount = @ProductFilterCount ``` You'll need to figure out how you want to handle `@ProductFilterCount`. It can either be a count of the number filters you're using, or the number of products that actually match those filters [SQL Fiddle](http://sqlfiddle.com/#!3/82b9fe/18)
SQL SELECT multiple conditions many to many relationship
[ "sql", "sql-server", "sql-server-2008" ]
Consider this schema ``` CREATE TABLE [PrimaryTable] [Id] int IDENTITY(1,1) NOT NULL ALTER TABLE [PrimaryTable] ADD CONSTRAINT [PK_PrimaryTable] PRIMARY KEY ([Id]) CREATE TABLE [SecondaryTable] [PrimaryTableId] int NOT NULL [Name] nvarchar(4000) NOT NULL [Value] nvarchar(4000) NULL ALTER TABLE [SecondaryTable] ADD CONSTRAINT [PK_SecondaryTable] PRIMARY KEY ([PrimaryTableId],[Name]) ``` And then the following data ``` PrimaryTable | Id | | 1 | | 2 | | 3 | SecondaryTable | PrimaryTableId | Name | Value | | 1 | xxx | yyy | | 2 | xxx | zzz | ``` I am attempting to write a query which will give me all the entries in `PrimaryTable` that DO NOT have a name/value of `xxx=yyy`, including those where there is no entry in `SecondaryTable` for `xxx` Currently I have the following which only returns ID = 2, and not ID = 3 ``` SELECT Id FROM PrimaryTable LEFT OUTER JOIN SecondaryTable ON PrimaryTable.Id = SecondaryTable.PrimaryTableId WHERE (SecondaryTable.Name = 'xxx' AND SecondaryTable.Value NOT LIKE 'yyy') ``` Describing the additional clause in plain English would be something along the lines of `...OR SecondaryTable.Name = 'xxx' does not exist` **Edit** I should note that I've simplified both the table structure and the query for this question - other columns from `PrimaryTable` will also be retrieved (as well as form part of the query), and there are additional queries on `SecondaryTable` using different name/value combinations, and different operators (=, !=, LIKE, NOT LIKE) (Environment is SQL Server LocalDb 2014)
Using NOT IN ``` SELECT Id FROM PrimaryTable WHERE Id NOT IN ( SELECT PrimaryTableId FROM SecondaryTable WHERE Name='xxx' AND Value='yyy') ``` Using NOT EXISTS ``` SELECT pt.Id FROM PrimaryTable pt WHERE NOT EXISTS ( SELECT * FROM SecondaryTable st WHERE st.PrimaryTableId = pt.Id AND st.Name='xxx' AND st.Value='yyy') ```
I simply added a null check on the joined table so the value would be included. ``` SELECT Id FROM PrimaryTable LEFT OUTER JOIN SecondaryTable ON PrimaryTable.Id = SecondaryTable.PrimaryTableId WHERE ( SecondaryTable.Name = 'xxx' AND SecondaryTable.Value NOT LIKE 'yyy' ) OR SecondaryTable.PrimaryTableId IS NULL ```
SQL join against a table with both a NOT LIKE condition and a value not existing
[ "sql", "sql-server" ]
I'm trying to find an effective way to filter out result set produced by chained three tables left join, where second table join would take into account third table's properties. Fiddle: <http://sqlfiddle.com/#!2/e319e/2/0> A simplified example would be a join between three tables: Post, Comment and Author. Posts can have 0..N comments that are written by an Author. I'd like to get a list of all Posts + active Comments written only by active Authors. Considering the following data: **Post:** ``` | id | title | |----|-------------| | 1 | First post | | 2 | Second post | | 3 | Third post | | 4 | Forth post | ``` **Author**: ``` | id | title | is_active | |----|------------------------|-----------| | 1 | First author | 1 | | 2 | Second author | 1 | | 3 | Third author | 1 | | 4 | Fourth inactive author | 0 | | 5 | Fifth inactive author | 0 | ``` **Comment**: ``` | id | post_id | author_id | title | is_active | |----|---------|-----------|------------------------|-----------| | 1 | 1 | 1 | First comment | 1 | | 2 | 2 | 1 | Second comment | 1 | | 3 | 1 | 2 | Third comment | 1 | | 4 | 2 | 4 | Fourth comment | 1 | | 5 | 2 | 5 | Fifth inactive comment | 0 | | 6 | 2 | 3 | Sixth inactive comment | 0 | | 7 | 4 | 4 | Seventh comment | 1 | ``` Now executing simple filtering query: ``` SELECT p.id post_id, p.title post_title, c.id as comment_id, c.title comment, c.is_active active_comment, a.id author_id, a.title author, a.is_active active_author FROM Post p LEFT JOIN Comment c ON c.post_id = p.id AND c.is_active = 1 LEFT JOIN Author a ON a.id = c.author_id AND a.is_active = 1 ORDER BY p.id; ``` brings us the following result set: ``` | id | title | id | title | is_active | id | title | is_active | |----|-------------|--------|-----------------|-----------|--------|---------------|-----------| | 1 | First post | 1 | First comment | 1 | 1 | First author | 1 | | 1 | First post | 3 | Third comment | 1 | 2 | Second author | 1 | | 2 | Second post | 2 | Second comment | 1 | 1 | First 
author | 1 | | 2 | Second post | 4 | Fourth comment | 1 | (null) | (null) | (null) | | 3 | Third post | (null) | (null) | (null) | (null) | (null) | (null) | | 4 | Forth post | 7 | Seventh comment | 1 | (null) | (null) | (null) | ``` that has two comments that should be omitted - "Fourth comment" and "Seventh comment" - because they have been written by inactive authors. The only way I see that would work is to add a JOIN condition for Comment ``` AND c.author_id IN (SELECT id FROM Author WHERE is_active = 1) ``` which produces a correct result set, but I guess is not very efficient. However I can't find any other working solution. Is there some way to optimize it? Thanks!
I think you want this `from` clause: ``` FROM Post p LEFT JOIN (Comment c JOIN Author a ON a.id = c.author_id AND a.is_active = 1 and c.is_active = 1 ) ON c.post_id = p.id ``` As I mention in the comment, though, you might want to inactivate comments by inactive authors. That would involve a trigger or stored procedure. Oh, and you were polite enough to put up a SQL Fiddle, so [here](http://sqlfiddle.com/#!2/e319e/6) it is working.
This should do too: ``` select a.title, p.title, c.title from author a left join comment c on a.id = c.author_id left join post p on c.post_id = p.id where a.id in (select id from author where is_active = 1) and c.is_active = 1 ``` <http://sqlfiddle.com/#!2/e319e/1>
MySQL three table chained left join, filtering by last table
[ "mysql", "sql" ]
I have this table: ``` SELECT * FROM #BH2 BookingID | Detail | CreatedAgentCode | ChangeDate ----------|------------------------------------------------------|------------------|-------------------------- 196162093 | MRS LUCIENE CORREA correa MRS LUCIENE CORREA | lclisboa | 2015-01-18 13:29:35.130 196162093 | MRS LUCIENE CORREA LISBOA MRS LUCIENE CORREA correa | VOMATOS | 2015-01-18 13:25:26.420 ``` And this: ``` SELECT * FROM BookingPassengerVersion WHERE BookingID = 196162093 ORDER BY ModifiedDate DESC BookingID | Title | FirstName | MiddleName | LastName | AgentCode | ModifiedDate ----------|-------------------------------------------------------|---------------------------- 196162093 | MRS | LUCIENE | | CORREA | lclisboa | 2015-01-18 13:29:35.130 196162093 | MRS | LUCIENE | CORREA | correa | VOMATOS | 2015-01-18 13:25:26.420 196162093 | MRS | LUCIENE | CORREA | LISBOA | ADM | 2015-01-12 18:01:09.503 196162093 | MRS | LUCIENE | CORREA | LISBOA | ADM | 2015-01-12 18:01:05.227 ``` I need to add a new column to the `old name` and `new name`: I tried this query: ``` BEGIN TRY DROP TABLE #FINAL_TABLE END TRY BEGIN CATCH END CATCH SELECT DISTINCT BH.BookingID, -- S OldName, (CASE WHEN _NewName.Title>'' THEN _NewName.Title+' ' ELSE '' END)+_NewName.FirstName+' '+ _NewName.MiddleName+' '+_NewName.LastName AS NewName, BH.CreatedAgentCode, BH.ChangeDate, INTO #FINAL_TABLE FROM #BH2 BH CROSS APPLY ( SELECT TOP 2 Title , FirstName , MiddleName , LastName FROM BookingPassengerVersion WHERE BookingID = BH.BookingID AND BH.ChangeDate = ModifiedDate ORDER BY ModifiedDate DESC ) _NewName ``` But I couldn't get this result: ``` BookingID | OldName | NewName | Detail | CreatedAgentCode | ChangeDate ----------|---------------------------|---------------------------|-----------------------------------------------------|------------------|-------------------------- 196162093 | MRS LUCIENE CORREA correa | MRS LUCIENE CORREA | MRS LUCIENE CORREA correa MRS LUCIENE CORREA | lclisboa | 
2015-01-18 13:29:35.130 196162093 | MRS LUCIENE CORREA LISBOA | MRS LUCIENE CORREA correa | MRS LUCIENE CORREA LISBOA MRS LUCIENE CORREA correa | VOMATOS | 2015-01-18 13:25:26.420 ``` Table `# BH2` has the `detail` column, this column is the `Old name` plus the `New name`. I need the `old name` and the `new name` separated into two columns, so I'll use the `BookingPassengerVersion` table that has the change history. The name is formed by adding the `title`, `first name`, `middle name` and `last name`.
You can try this. [SQL Fiddle](http://sqlfiddle.com/#!6/6e8d2/1) ``` WITH cteBookingPassengerVersion AS ( SELECT BookingID, RTRIM( CONCAT ( ISNULL(Title + ' ', ''), ISNULL(FirstName + ' ', ''), ISNULL(MiddleName + ' ', ''), ISNULL(LastName, '') ) ) AS NAME, ModifiedDate, ROW_NUMBER()OVER(PARTITION BY BookingID ORDER BY ModifiedDate DESC) rowNum FROM BookingPassengerVersion ) SELECT cte.BookingID, ctePrev.NAME AS OldName, cte.NAME AS NewName, bh.Detail, bh.CreatedAgentCode, bh.ChangeDate FROM BH2 bh JOIN cteBookingPassengerVersion cte ON bh.BookingID = cte.BookingID AND bh.ChangeDate = cte.ModifiedDate LEFT JOIN cteBookingPassengerVersion ctePrev ON ctePrev.BookingID = cte.BookingId AND ctePrev.rowNum = cte.rowNum + 1 ORDER BY cte.BookingID, bh.ChangeDate DESC ``` **EDIT** I updated the query to join back on the date as well, so it gets all updates for all bookings. [New SQL Fiddle](http://sqlfiddle.com/#!6/6e8d2/15) To filter the CTE by the BookingIDs in BH2 you can either do ``` WITH cteBookingPassengerVersion AS ( SELECT BookingID, RTRIM( CONCAT ( ISNULL(Title + ' ', ''), ISNULL(FirstName + ' ', ''), ISNULL(MiddleName + ' ', ''), ISNULL(LastName, '') ) ) AS NAME, ModifiedDate, ROW_NUMBER()OVER(PARTITION BY BookingID ORDER BY ModifiedDate DESC) rowNum FROM BH2 JOIN BookingPassengerVersion ON BH2.BookingID = BookingPassengerVersion.BookingID ) ``` Or ``` WITH cteBookingPassengerVersion AS ( SELECT BookingID, RTRIM( CONCAT ( ISNULL(Title + ' ', ''), ISNULL(FirstName + ' ', ''), ISNULL(MiddleName + ' ', ''), ISNULL(LastName, '') ) ) AS NAME, ModifiedDate, ROW_NUMBER()OVER(PARTITION BY BookingID ORDER BY ModifiedDate DESC) rowNum FROM BookingPassengerVersion WHERE BookingID IN (SELECT BookingID FROM BH2) ) ``` You should try different approaches when dealing with large datasets. I would even replace the CTE with a temp table and see if it helps. Check your execution plan to see if you need any indexes as well.
Temp table instead of the CTE: ``` SELECT BookingID, RTRIM( CONCAT ( ISNULL(Title + ' ', ''), ISNULL(FirstName + ' ', ''), ISNULL(MiddleName + ' ', ''), ISNULL(LastName, '') ) ) AS NAME, ModifiedDate, ROW_NUMBER()OVER(PARTITION BY BookingID ORDER BY ModifiedDate DESC) rowNum INTO #bpv FROM BookingPassengerVersion WHERE BookingID IN (SELECT BookingID FROM BH2) SELECT cte.BookingID, ctePrev.NAME AS OldName, cte.NAME AS NewName, bh.Detail, bh.CreatedAgentCode, bh.ChangeDate FROM BH2 bh JOIN #bpv cte ON bh.BookingID = cte.BookingID AND bh.ChangeDate = cte.ModifiedDate LEFT JOIN #bpv ctePrev ON ctePrev.BookingID = cte.BookingId AND ctePrev.rowNum = cte.rowNum + 1 ORDER BY cte.BookingID, bh.ChangeDate DESC ```
You can try this in [SqlFiddle](http://sqlfiddle.com/#!6/6e8d2/13). I updated the user1221684 answer to remove duplicated rows. ``` WITH cteBookingPassengerVersion AS ( SELECT BookingID, RTRIM( CONCAT ( ISNULL(Title + ' ', ''), ISNULL(FirstName + ' ', ''), ISNULL(MiddleName + ' ', ''), ISNULL(LastName, '') ) ) AS NAME, AgentCode, -- add this line ROW_NUMBER()OVER(PARTITION BY BookingID ORDER BY ModifiedDate DESC) rowNum FROM BookingPassengerVersion WHERE BookingID = 196162093 ) SELECT cte.BookingID, ctePrev.NAME AS OldName, cte.NAME AS NewName, bh.Detail, bh.CreatedAgentCode, bh.ChangeDate, cte.rowNum, ctePrev.rowNum FROM BH2 bh JOIN cteBookingPassengerVersion cte ON (bh.BookingID = cte.BookingID and bh.CreatedAgentCode = cte.AgentCode) --Update this line LEFT JOIN cteBookingPassengerVersion ctePrev ON ctePrev.rowNum = cte.rowNum + 1 WHERE cte.rowNum <= 2 ```
Search specific row in another table by date
[ "sql", "sql-server" ]
Given is following mysql table: ``` CREATE TABLE fonts (`id` int, `fontName` varchar(22), `price` int,`reducedPrice` int,`weight` int) ; INSERT INTO fonts (`id`, `fontName`, `price`,`reducedprice`,`weight`) VALUES (1, 'regular', 50,30,1), (2, 'regular-italic', 50,20,1), (3, 'medium', 60,30,2), (4, 'medium-italic', 50,30,2), (5, 'bold', 50,30,3), (6, 'bold-italic', 50,30,3), (7, 'bold-condensed', 50,30,3), (8, 'super', 50,30,4) ; ``` As an example a user chooses following ids: 1,2,3,5,6,7 which would result in following query/result: ``` > select * from fonts where id in(1,2,3,5,6,7); id fontName price reducedPrice weight 1 regular 50 30 1 2 regular-italic 50 20 1 3 medium 60 30 2 5 bold 50 30 3 6 bold-italic 50 30 3 7 bold-condensed 50 30 3 ``` Is it possible to have a kind of "if statement" in a query to return a new field based on column weight. Where a value occurs more than once reducedPrice should be returned as newPrice else price: ``` id fontName price reducedPrice weight newPrice 1 regular 50 30 1 30 2 regular-italic 50 20 1 20 3 medium 60 30 2 60 5 bold 50 30 3 30 6 bold-italic 50 30 3 30 7 bold-condensed 50 30 3 30 ``` **Which means ids 1,2,5,6,7 should be reduced but id 3 not as its weight "2" only occurs once** Please find a fiddle here: <http://sqlfiddle.com/#!9/73f5db/1> And thanks for your help!
Write a subquery that gets the number of occurrences of each weight, and join with this. Then you can test the number of occurrences to decide which field to put in `NewPrice`. ``` SELECT f.*, IF(weight_count = 1, Price, ReducedPrice) AS NewPrice FROM fonts AS f JOIN (SELECT weight, COUNT(*) AS weight_count FROM fonts WHERE id IN (1, 2, 3, 5, 6, 7) GROUP BY weight) AS w ON f.weight = w.weight WHERE id IN (1, 2, 3, 5, 6, 7) ``` [Updated fiddle](http://sqlfiddle.com/#!9/73f5db/6)
``` select *,if(occurences>=2,reducedPrice,price) as newPrice from fonts left join (Select count(id) as occurences, id,weight from fonts where fonts.id in(1,2,3,5,6,7) group by weight) t on t.weight = fonts.weight where fonts.id in(1,2,3,5,6,7); ``` The mysql if keyword reference is here:<https://dev.mysql.com/doc/refman/5.1/en/control-flow-functions.html#function_if> Edit: Added fiddle, changed to instances as comment requested. Updated fiddle:<http://sqlfiddle.com/#!9/a93ef/14>
select: result based on occurrence of explicit value
[ "mysql", "sql" ]
I have 2 tables. Player and Stats. Player has the fields: Name, Age, DOB and SSN. Stats has the fields: Tackles, Goals, Assists, and SSN. SSN is a foreign key. How can I write a query to find the stats of players with DOB >= '1994'. DOB is not a foreign key, I was wondering how that could work.
``` SELECT * FROM Player INNER JOIN Stats ON Player.SSN = Stats.SSN WHERE Player.DOB >= '1994-01-01' ``` If you just want specific fields then specify these instead of SELECT \*
This is a simple JOIN command. ``` SELECT * FROM Stats INNER JOIN Player ON Player.SSN = Stats.SSN WHERE DOB >= 1994 ``` There is an issue here, however: the quotes imply that DOB is a string, not a datetime or an integer. You can't compare strings as numbers like you are trying to do. If DOB is just a year, e.g. 1994, then `DOB >= 1994` would work; if it's a `date` or `datetime`, you need to use the `datetime` equivalents.
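For the `date`/`datetime` case, a minimal sketch (this assumes DOB is actually stored as a DATE column, which the question doesn't confirm):

```sql
-- Compare against a real date literal rather than a bare number,
-- so every player born on or after 1 Jan 1994 matches.
SELECT Stats.Tackles, Stats.Goals, Stats.Assists
FROM Stats
INNER JOIN Player ON Player.SSN = Stats.SSN
WHERE Player.DOB >= '1994-01-01';
```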
Sql: Selecting from other tables
[ "mysql", "sql", "database", "join", "foreign-keys" ]
I have a function to convert a string to `datetime (101)` format. But it gives me an error when I convert this value. ``` 2016-03-01 00:00:00.0000000 ``` And the error is ``` Msg 241, Level 16, State 1, Line 2 Conversion failed when converting date and/or time from character string. ``` And my function is .. ``` ALTER FUNCTION [dbo].[ConvertToDate] ( @Value nVarchar(MAX) ) RETURNS DATETIME AS BEGIN IF (@Value <> NULL OR @Value <> '') BEGIN DECLARE @dt DATETIME SET @dt = CONVERT(DATETIME,@Value,101) --SET @dt = CAST(@Value AS DATETIME(101)) RETURN @dt END RETURN NULL END ``` What is the problem?
Too many zeros in the millisecond part. This works fine ``` DECLARE @Value nVarchar(MAX)='2016-03-01 00:00:00.000' SELECT CONVERT(DATETIME,@Value,101) ``` You can try using `LEFT` like this ``` DECLARE @Value nVarchar(MAX)='2016-03-01 00:00:00.000000' SELECT CONVERT(DATETIME,LEFT(@Value,23),101) ```
The problem is your data; that value does not convert. Your data currently looks like this ``` '2016-03-01 00:00:00.0000000' ``` But you can change your data to this, which converts fine: ``` '2016-03-01 00:00:00.000' ```
Conversion failed when converting date and/or time from character string in sql server
[ "sql", "sql-server" ]
I'm trying to do something that's very simple to do in other languages but in SQL it's proving rather puzzling. I have a database with the patient ID as row, and 100 symptoms as columns. Symptoms are binary, 0 or 1 if the patient has it or not. Let's say Patient 1 has 0, 1, 1, ... for coughing, sneezing, headaches, etc. Patient 2 similarly has a binary list of symptoms 1, 0, 1, .... What I want to do is simply generate a table that has counts for pairwise symptoms that number of patients that have that combination of symptoms. So when I look up in the new table, I can see how many people have both sneezing and coughing. How many have both headache and sneezing and so forth. Only pairwise. If I look up sneezing and sneezing, it would give me back the number of people who have sneezing. Something like a matrix format of symptoms in both rows and columns and number of patients suffering from it in the content. With a for loop this is incredibly easy in any other language. I'm new to SQL though, and struggling to figure out an efficient way to do this.
This answer by @GarethD helped solve the problem, thanks! Quoting his comment: "The key is to unpivot your data so that you have one row per patient per symptom, then join this data to itself to get pairs of symptoms, then pivot the joined data back up to get your counts. Since I don't know which DBMS you use, I cannot answer the question; I have however created a [working demo in SQL Server](http://sqlfiddle.com/#!3/7ea17/1) that should demonstrate how to do it." – GarethD
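As an illustration of that unpivot-then-self-join idea, a minimal sketch. It assumes a hypothetical `PatientSymptom(patient_id, symptom)` table that already holds one row per patient per symptom, i.e. the wide 100-column table has been unpivoted first:

```sql
-- Each (a.symptom, b.symptom) pair counts the patients who have both;
-- the a = b diagonal gives the count for a single symptom on its own.
SELECT a.symptom AS symptom_1,
       b.symptom AS symptom_2,
       COUNT(*)  AS patient_count
FROM PatientSymptom a
JOIN PatientSymptom b
  ON a.patient_id = b.patient_id
GROUP BY a.symptom, b.symptom;
```

This produces the matrix in long form (one row per symptom pair); pivoting it into a grid, as in the linked demo, is then purely a presentation step.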
A fully general solution would need to write functions and use cursors in them. However, there is an alternative approach: Suppose you have a table with four columns: ``` a b c d ------------------------- 1 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 1 1 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 ``` This is the answer: ``` Select sum(a) as a_a, (select count(*) from patients where a=1 and b=1) as a_b, (select count(*) from patients where a=1 and c=1) as a_c, (select count(*) from patients where a=1 and d=1) as a_d, sum(b) as b_b, (select count(*) from patients where b=1 and c=1) as b_c, (select count(*) from patients where b=1 and d=1) as b_d, sum(c) as c_c, (select count(*) from patients where c=1 and d=1) as c_d, sum(d) as d_d from patients ``` Now, the result is like this: ``` a_a a_b a_c a_d b_b b_c b_d c_c c_d d_d ------------------------------------------------------------------------- 5 2 4 2 4 2 3 7 4 6 ``` It is not shaped like a matrix; it has only one row, but it contains everything you asked for. You can expand it to your own table with many fields.
Counting the number of joint symptoms
[ "", "sql", "data-analysis", "" ]
I am trying to join two views I created using their common field (cAuditNumber). The issue is that once I have done the join, SQL Server will not let me create the view, as it cannot contain the field name cAuditNumber twice. Is cAuditNumber the PK I should use? How do I correct this and still join the tables? ``` CREATE VIEW KFF_Sales_Data_Updated AS SELECT CustSalesUpdated.*, StkSalesUpdated.* FROM CustSalesUpdated INNER JOIN StkSalesUpdated ON StkSalesUpdated.cAuditNumber = CustSalesUpdated.cAuditNumber ``` I get the following error: Msg 4506, Level 16, State 1, Procedure KFF\_Sales\_Data\_Updated, Line 2 Column names in each view or function must be unique. Column name 'cAuditNumber' in view or function 'KFF\_Sales\_Data\_Updated' is specified more than once.
Substitute your own column names for ColumnA, ColumnB, etc.; it should follow this format: ``` CREATE VIEW KFF_Sales_Data_Updated AS SELECT CustSalesUpdated.cAuditNumber ,CustSalesUpdated.ColumnA ,CustSalesUpdated.ColumnB ,CustSalesUpdated.ColumnC ,StkSalesUpdated.ColumnA as StkColumnA ,StkSalesUpdated.ColumnB as StkColumnB ,StkSalesUpdated.ColumnC as StkColumnC FROM CustSalesUpdated INNER JOIN StkSalesUpdated ON StkSalesUpdated.cAuditNumber = CustSalesUpdated.cAuditNumber ``` You only have to alias duplicate columns using "as", but you can use it to rename any column that you so desire.
``` CREATE VIEW KFF_Sales_Data_Updated AS SELECT csu.cAuditNumber cAuditNumber1 , ssu.cAuditNumber cAuditNumber2 FROM CustSalesUpdated csu INNER JOIN StkSalesUpdated ssu ON ssu.cAuditNumber = csu.cAuditNumber ``` You can add any other columns from the two tables to the select statement, but if there are two columns with the same name you should give them aliases.
SQL Change View Name / Joins
[ "", "sql", "sql-server", "t-sql", "view", "" ]
I have a table called `tourn_results`; there are multiple records that share the same `tournamentId`. Example of a `tourn_results` record set: ``` uniqueId tournamentId playerRank 1 111 1 2 111 2 3 111 3 4 222 1 5 222 2 6 222 3 7 333 1 8 333 2 9 333 3 10 333 4 11 111 1 12 111 2 13 111 3 ``` For each tournament there cannot be more than one `playerRank` = 1 (only one first place per tournamentId). I need to find each duplicate and delete it; I can't find a suitable answer. I appreciate your help.
This will delete everything but the rows with the lowest `uniqueid` for each combination of (`tournamentId`,`playerRank`). ``` delete from tourn_results x where uniqueid <> ( select min(y.uniqueid) from tourn_results y where y.tournamentid = x.tournamentid and y.playerrank = x.playerrank ); ```
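SQLite does not accept an alias on the DELETE target, so the same "keep the lowest uniqueid per group" idea can be sketched with an equivalent NOT IN form (sample data taken from the question; SQLite for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tourn_results (uniqueid INTEGER, tournamentid INTEGER, playerrank INTEGER)")
data = [(1, 111, 1), (2, 111, 2), (3, 111, 3), (4, 222, 1), (5, 222, 2),
        (6, 222, 3), (7, 333, 1), (8, 333, 2), (9, 333, 3), (10, 333, 4),
        (11, 111, 1), (12, 111, 2), (13, 111, 3)]
conn.executemany("INSERT INTO tourn_results VALUES (?,?,?)", data)
# Keep only the lowest uniqueid per (tournamentid, playerrank) pair.
conn.execute("""
    DELETE FROM tourn_results
    WHERE uniqueid NOT IN (
        SELECT MIN(uniqueid) FROM tourn_results
        GROUP BY tournamentid, playerrank)
""")
remaining = [r[0] for r in conn.execute(
    "SELECT uniqueid FROM tourn_results ORDER BY uniqueid")]
```

Rows 11 to 13 are the duplicates of rows 1 to 3 and are removed; the first ten rows survive.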
You can add a unique index on the columns that should not have duplicates; it will drop all your duplicate rows. Note that `ALTER IGNORE` is MySQL-specific and was removed in MySQL 5.7. ``` ALTER IGNORE TABLE tourn_results ADD UNIQUE INDEX idx_name (tournamentId, playerRank); ``` The other option is to create a new temp table, insert your duplicate rows into it, and delete them from your actual table.
Find duplicates based on two fields and delete them
[ "", "mysql", "sql", "" ]
I have a table `CommentsTable` with columns like `CommentA, CommentB, CommentC, CommentD, CommentE`. All comment columns are `VARCHAR (200)`, and by default all columns are `NULL`. The data looks like: ``` CommentId CommentA CommentB CommentC CommentD CommentE --------------------------------------------------------------------- 12345 NULL C 001 C 002 NULL C 003 45678 C 005 NULL NULL C 007 NULL 67890 C 010 NULL C 011 C 012 NULL 36912 C 021 C 023 C 024 C 025 C 026 ``` I need to skip the null values and concatenate the remaining values with commas. So the expected output looks like: ``` CommentId CommetDetails ------------------------------- 12345 C 001, C 002, C 003 45678 C 005, C 007 67890 C 010, C 011, C 012 36912 C 021, C 023, C 024, C 025, C 026 ``` I tried a simple query: ``` SELECT CommentId, ISNULL(CommentA, '') + ', ' + ISNULL(CommentB, '') + ', ' + ISNULL(CommentC, '') + ', ' + ISNULL(CommentD, '') + ', ' + ISNULL(CommentE, '') [CommentDetails] FROM CommentsTable WHERE ...... --Some conditions ``` But unwanted commas appeared, so I added `IIF`: ``` SELECT CommentId, IIF(ISNULL(CommentA, '') <> '', (CommentA + ', '), '') + IIF(ISNULL(CommentB, '') <> '', (CommentB + ', '), '') + IIF(ISNULL(CommentC, '') <> '', (CommentC + ', '), '') + IIF(ISNULL(CommentD, '') <> '', (CommentD + ', '), '') + ISNULL(CommentE, '') [CommentDetails] FROM CommentsTable WHERE ...... --Some conditions ``` But here also a comma appears in the last position in some cases (if `CommentD` and `CommentE` are `NULL`). Is there any way to solve this for all the cases? [`Sample SQL Fiddle`](http://sqlfiddle.com/#!3/412f0/8)
You can use `ISNULL` like this `ISNULL(',' + CommentA, '')` and write your query like this. ``` SELECT CommentId, STUFF( ISNULL(',' + CommentA, '') + ISNULL(',' + CommentB, '') + ISNULL(',' + CommentC, '') + ISNULL(',' + CommentD, '') + ISNULL(',' + CommentE, ''),1,1,'') as [CommentDetails] FROM CommentsTable WHERE ...... --Some conditions ``` See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!3/6e6ddf/2).
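T-SQL's `STUFF` and `ISNULL` are not available in every engine, but the trick ports easily. A hedged SQLite adaptation uses `IFNULL` plus `substr(..., 2)` to strip the leading comma (cut-down sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CommentsTable (CommentId INTEGER, CommentA TEXT, CommentB TEXT, CommentC TEXT)")
conn.executemany("INSERT INTO CommentsTable VALUES (?,?,?,?)",
                 [(12345, None, 'C 001', 'C 002'), (45678, 'C 005', None, None)])
# ',' || NULL is NULL, so IFNULL(',' || col, '') contributes nothing for NULLs;
# substr(..., 2) then strips the leading comma, which is what
# STUFF(..., 1, 1, '') does in T-SQL.
rows = conn.execute("""
    SELECT CommentId,
           substr(IFNULL(',' || CommentA, '') ||
                  IFNULL(',' || CommentB, '') ||
                  IFNULL(',' || CommentC, ''), 2)
    FROM CommentsTable
    ORDER BY CommentId
""").fetchall()
```

If every column is NULL, the concatenation is the empty string and `substr('', 2)` is still the empty string, so no stray comma appears.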
The answers above are correct, and this is no challenge to the accepted answer, but in case some columns have empty strings instead of NULL, the below might help (a shorter equivalent would be `NULLIF(CommentA, '')` in place of each CASE expression). Please don't hesitate to suggest a better approach and correct me if it's wrong. ``` SELECT CommentId, STUFF( ISNULL(',' + CASE WHEN CommentA= '' THEN NULL ELSE CommentA END, '') + ISNULL(',' + CASE WHEN CommentB= '' THEN NULL ELSE CommentB END, '') + ISNULL(',' + CASE WHEN CommentC= '' THEN NULL ELSE CommentC END, '') + ISNULL(',' + CASE WHEN CommentD= '' THEN NULL ELSE CommentD END, '') + ISNULL(',' + CASE WHEN CommentE= '' THEN NULL ELSE CommentE END, ''),1,1,'') as [CommentDetails] FROM CommentsTable ```
Multiple column values into a single column as comma separated value
[ "", "sql", "sql-server", "" ]
I'm a little bit stuck here and really hope somebody can help me out. I couldn't find the answer on Google yet. OK, let's say the data is like below: ``` TableName=> Events | Node_Name| Status | +----------+-----------+ | Node_1 | Completed | | Node_1 | Completed | | Node_2 | Failed | | Node_2 | Completed | | Node_3 | Failed | | Node_3 | Failed | ``` Query: ``` select node_name, count(*) from events where status='Failed' group by node_name ``` Result: ``` | Node_2 | 1 | | Node_3 | 2 | ``` The question is: how can I select the node_names that never failed and always completed? Query: ????????? Result that I want: ``` =>Node_1 ``` I really appreciate your help. Thank you!
To get only the "never failed" nodes: ``` select node_name from events group by node_name having max(status) = 'Completed' ``` To get all nodes with a 'Failed' count: ``` select node_name, sum(case when status = 'Failed' then 1 else 0 end ) from events group by node_name ```
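The `HAVING` trick works here only because 'Completed' sorts alphabetically before 'Failed', so `MAX(status) = 'Completed'` can hold only for nodes with no 'Failed' row. A small sketch with the question's data (SQLite for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (node_name TEXT, status TEXT)")
conn.executemany("INSERT INTO events VALUES (?,?)",
                 [('Node_1', 'Completed'), ('Node_1', 'Completed'),
                  ('Node_2', 'Failed'),    ('Node_2', 'Completed'),
                  ('Node_3', 'Failed'),    ('Node_3', 'Failed')])
# 'Completed' < 'Failed' alphabetically, so MAX(status) = 'Completed'
# is true exactly for the nodes that never failed.
never_failed = [r[0] for r in conn.execute("""
    SELECT node_name FROM events
    GROUP BY node_name
    HAVING MAX(status) = 'Completed'
""")]
```

If more status values existed (say 'Pending'), the conditional-count form in the second query would be the safer choice, since the alphabetical shortcut would no longer hold.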
In SQL Server, the `EXCEPT` operator can do this. ``` select node_name from events group by node_name EXCEPT select node_name from events where status<>'Completed' ; ``` **Explanation:** You query **all** different node names, and you remove all names which are not completed. As a result, only those will stay which are ONLY completed.
SQL Query: How to show the opposite result of your query in select statement
[ "", "sql", "" ]
I have two tables: * `Table1` with columns `colA, colB, colC` * `Table2` with columns `colX, colY, colZ` I'm trying to get all rows from `Table1` which have `colC` values that match `Table2` on `colZ`. I tried the following: ``` select Table1.colA,Table1.colB,Table1.colC from Table1 inner join Table2 on Table1.colC = Table2.colZ ``` This does not seem to work as the result of the query had 20 times the number of rows present in `Table1`. Any help is sincerely appreciated.
You can use [`EXISTS`](https://msdn.microsoft.com/en-us/library/ms188336.aspx) like this. ``` select Table1.colA,Table1.colB,Table1.colC from Table1 WHERE EXISTS (SELECT 1 FROM Table2 WHERE Table1.colC = Table2.colZ) ```
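A quick sketch of why `EXISTS` avoids the row multiplication (invented sample data; SQLite for illustration): when `colZ` contains duplicates, an `INNER JOIN` repeats the matching `Table1` row once per match, while the `EXISTS` semi-join returns it once.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (colA INTEGER, colC INTEGER);
CREATE TABLE Table2 (colZ INTEGER);
INSERT INTO Table1 VALUES (1, 10), (2, 20), (3, 30);
-- colZ = 10 appears twice: an INNER JOIN duplicates Table1's matching row.
INSERT INTO Table2 VALUES (10), (10), (20);
""")
joined = conn.execute(
    "SELECT Table1.colA FROM Table1 JOIN Table2 ON colC = colZ").fetchall()
semi = conn.execute("""
    SELECT colA FROM Table1
    WHERE EXISTS (SELECT 1 FROM Table2 WHERE Table2.colZ = Table1.colC)
""").fetchall()
```

The join produces three rows (colA = 1 twice, colA = 2 once); the `EXISTS` query produces exactly one row per qualifying `Table1` row.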
Three options: 1. Use `INNER JOIN` with `DISTINCT` ``` SELECT DISTINCT Table1.colA, Table1.colB, Table1.colC FROM Table1 INNER JOIN Table2 ON Table1.colC = Table2.colZ ``` 2. Use `EXISTS` ``` SELECT Table1.colA, Table1.colB, Table1.colC FROM Table1 WHERE EXISTS (SELECT 1 FROM Table2 WHERE Table2.ColZ = Table1.ColC) ``` 3. Use `IN` ``` SELECT Table1.colA, Table1.colB, Table1.colC FROM Table1 WHERE ColC IN (SELECT ColZ FROM Table2) ```
Join two tables to get common data based on a column
[ "", "sql", "sql-server", "" ]
I'm running a query on a table in which I have to count all records in different one-hour intervals (e.g. from 13:00:00 to 14:00:00). What I'm doing right now is this: ``` select count (*) from tabel where TO_CHAR(ins_ts, 'DD-MON-YYYY HH24') like ('02-GIU-2015 13'); ``` RESULT: 23 ``` select count (*) from tabel where TO_CHAR(ins_ts, 'DD-MON-YYYY HH24') like ('02-GIU-2015 14'); ``` RESULT: 25 But it's too much effort to do that for all one-hour intervals of all the days of a week. Is there a way to make a query that returns all results split into different time intervals, at least for each day, like this: RESULT: 23, 25 and so on
If you actually want to use an index on the datetime you need to stop using functions on ins_ts in the where clause. ``` select TO_CHAR(ins_ts, 'DD-MON-YYYY HH24'), count(*) from tabel where ins_ts >= TO_DATE('01/JAN/2015 00:00:00', 'dd/mon/yyyy HH24:MI:SS') and ins_ts < TO_DATE('08/JAN/2015 00:00:00', 'dd/mon/yyyy HH24:MI:SS') group by TO_CHAR(ins_ts, 'DD-MON-YYYY HH24'); ``` This should give you each hour and a count for every record between the 1st of January and the 7th (note that the second ins_ts check uses less-than, not less-than-or-equal-to).
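The same shape can be sketched in SQLite, with `strftime` standing in for Oracle's `TO_CHAR` (sample timestamps invented): the range predicate stays on the raw column, so an index on it remains usable, and only the SELECT / GROUP BY apply the hour formatting.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabel (ins_ts TEXT)")
conn.executemany("INSERT INTO tabel VALUES (?)",
                 [('2015-06-02 13:05:00',), ('2015-06-02 13:40:00',),
                  ('2015-06-02 14:10:00',)])
# The WHERE clause compares the raw column (index-friendly);
# only SELECT / GROUP BY truncate timestamps to the hour.
rows = conn.execute("""
    SELECT strftime('%Y-%m-%d %H', ins_ts) AS hour, COUNT(*)
    FROM tabel
    WHERE ins_ts >= '2015-06-02 00:00:00' AND ins_ts < '2015-06-03 00:00:00'
    GROUP BY hour
    ORDER BY hour
""").fetchall()
```

One row comes back per non-empty hour, with its count.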
You want to truncate the time value to the nearest hour and do a count. You have the basic components in your query, you just want a `group by`: ``` select TO_CHAR(ins_ts, 'DD-MON-YYYY HH24') as thehour, count(*) from tabel group by TO_CHAR(ins_ts, 'DD-MON-YYYY HH24') order by min(ins_ts) ; ``` The `order by min(ins_ts)` puts the values in order by time, because your preferred output format does not have a natural sort order.
SQL Searching multiple interval times on ORACLE DB
[ "", "sql", "database", "oracle", "datetime", "" ]
How do I display SQL statements in the log? I'm using Ebean and an insert fails for some reason, but I can't see what the problem is. I tried to edit my config to: ``` db.default.logStatements=true ``` and add this to logback.xml ``` <logger name="com.jolbox" level="DEBUG" /> ``` following some answers I found online, but it doesn't seem to work in 2.4…
Logging has changed with **Play 2.4**. Starting from now, to display the SQL statements in the console, simply add the following line to the conf/logback.xml file: ``` <logger name="org.avaje.ebean.SQL" level="TRACE" /> ``` It should work just fine. As @Flo354 pointed out in the comments, with **Play 2.6** you should use: ``` <logger name="io.ebean" level="TRACE" /> ```
From Play 2.5, logging SQL statements is very easy. Play 2.5 has an easy way to log SQL statements, built on [jdbcdslog](https://github.com/jdbcdslog/jdbcdslog), that works across all JDBC databases, connection pool implementations and persistence frameworks (Anorm, Ebean, JPA, Slick, etc.). When you enable logging you will see each SQL statement sent to your database as well as performance information about how long the statement takes to run. The SQL log statement feature in Play 2.5 can be configured per database, using the logSql property: ``` db.default.logSql=true ``` After that, you can configure the jdbcdslog-exp log level by adding these lines to `logback.xml`: ``` <logger name="org.jdbcdslog.ConnectionLogger" level="OFF" /> <!-- Won't log connections --> <logger name="org.jdbcdslog.StatementLogger" level="INFO" /> <!-- Will log all statements --> <logger name="org.jdbcdslog.ResultSetLogger" level="OFF" /> <!-- Won't log result sets --> ```
Play 2.4 - Display Ebeans SQL statement in logs
[ "", "sql", "logging", "playframework", "ebean", "playframework-2.4", "" ]
I am using Microsoft SQL Server 2008 R2, and I am trying to compare counted results from the same query. I ran the following: ``` select e.JobTitle, COUNT(p.BusinessEntityID) [NO. of persons] from AdventureWorks2008.person.Person p with (Nolock) join AdventureWorks2008.HumanResources.Employee e with (nolock) on e.BusinessEntityID = p.BusinessEntityID group by e.JobTitle ``` I get the following results, as expected: ``` JobTitle NO. Of persons Accountant 2 Accounts Manager 1 Accounts Payable Specialist 2 Accounts Receivable Specialist 3 Application Specialist 4 Assistant to the Chief Financial Officer 1 Benefits Specialist 1 Buyer 9 Chief Executive Officer 1 Chief Financial Officer 1 Control Specialist 2 Database Administrator 2 Design Engineer 3 Document Control Assistant 2 Document Control Manager 1 Engineering Manager 1 ``` What I want to do now is display the job title and count from these results where the count is the same but the job title is not. Basically, Buyer would not be returned, as there is no other group with a count of 9. But Assistant to the Chief Financial Officer and Benefits Specialist, among others, would be returned, as there are many job titles with a count of 1. What is the easiest and most efficient way to do this? Thanks ahead.
You can do this with cte: ``` with cte as(select e.JobTitle, COUNT(p.BusinessEntityID) [NO. of persons] from AdventureWorks2008.person.Person p with (Nolock) join AdventureWorks2008.HumanResources.Employee e with (nolock) on e.BusinessEntityID = p.BusinessEntityID group by e.JobTitle) select * from cte c1 where exists(select * from cte c2 where c2.[NO. of persons] = c1.[NO. of persons] and c2.JobTitle <> c1.JobTitle) ```
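A compact illustration of the CTE-plus-EXISTS approach with a few of the job titles from the question (SQLite for illustration; the row counts are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (JobTitle TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?)",
                 [('Accountant',), ('Accountant',), ('Buyer',),
                  ('Benefits Specialist',), ('Accounts Manager',)])
# Keep a grouped title only if some OTHER title shares its count.
rows = conn.execute("""
    WITH cte AS (
        SELECT JobTitle, COUNT(*) AS n FROM jobs GROUP BY JobTitle)
    SELECT JobTitle, n FROM cte c1
    WHERE EXISTS (SELECT 1 FROM cte c2
                  WHERE c2.n = c1.n AND c2.JobTitle <> c1.JobTitle)
    ORDER BY JobTitle
""").fetchall()
```

Accountant (count 2) is alone at its count and drops out; the three titles with count 1 all survive because each shares its count with another title.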
You could use a `join` to search for other rows with the same count but a different job title: ``` ; with List as ( ... your query here ... ) select * from List l1 join List l2 on l1.[NO. of persons] = l2.[NO. of persons] and l1.JobTitle > l2.JobTitle -- Filter out duplicates ```
Comparing counted data from the same table
[ "", "sql", "sql-server", "" ]
So the data is something like this: ``` ID | START_DATE | END_DATE | UID | CANCELED ------------------------------------------------- 44 | 2015-10-20 22:30 | 2015-10-20 23:10 | 'one' | 52 | 2015-10-20 23:00 | 2015-10-20 23:30 | 'one' | 66 | 2015-10-21 13:00 | 2015-10-20 13:30 | 'two' | ``` There are more than 100k of these entries. We can see that the start_date of the second entry overlaps with the end_date of the first entry. When dates overlap, the entry with the lower id should be marked as true in the CANCELED column. I tried some queries, but they take a really long time, so I'm not even sure whether they work. I also want to cover all overlapping cases, which seems to slow this down further. I am the one responsible for inserting/updating these entries using PL/SQL: ``` update table set column = 'value' where ID = '44'; if sql%rowcount = 0 then insert values(...) end if ``` so I could maybe do this in that step. But all tables are updated/inserted using one big dynamically created PL/SQL block where all rows either get updated or new ones get inserted, so once again this seems slow. And of all the SQL 'dialects', the Oracle one is the most cryptic I have had the chance to work with. Ideas? EDIT: I forgot one important detail: there is also one more column (UID) which has to match; see the update above.
I would start with this query (note that the column is spelled `canceled` in your table, and since Oracle SQL has no boolean type you will need whatever literal your column actually stores, e.g. a 'Y'/'N' flag): ``` update table t set canceled = 'true' where exists (select 1 from table t2 where t.end_date > t2.start_date and t.uid = t2.uid and t.id < t2.id ) ``` An index on `table(uid, start_date, id)` might help. As a note: this is probably *much* easier to do when you create the table, because you can use `lag()`.
This will do the trick without a dynamic query or correlated subqueries, but it consumes some memory for the `with` clauses: ``` MERGE INTO Table1 USING ( with q0 as( select rownum fid, id, start_date from( select id, start_date from table1 union all select 999999 id, null start_date from dual order by id ) ), q1 as ( select rownum fid, id, end_date from( select -1 id, null end_date from dual union all select id, end_date from table1 order by id ) ) select q0.fid, q1.id, q0.start_date, q1.END_DATE, case when (q0.start_date < q1.END_DATE) then 1 else 0 end canceled from q0 join q1 on (q0.fid = q1.fid) ) ta ON (ta.id = Table1.id) WHEN MATCHED THEN UPDATE SET Table1.canceled = ta.canceled; ``` The inner `with` `select` statement aliased `ta` will produce this result: ``` "FID"|"ID"|"START_DATE" |"END_DATE" |"CANCELED" --------------------------------------------------------- 1 |-1 |20/10/15 22:30:00| |0 2 |44 |20/10/15 23:00:00|20/10/15 23:10:00|1 3 |52 |21/10/15 13:00:00|20/10/15 23:30:00|0 4 |66 | |20/10/15 13:30:00|0 ``` It is then used in the `merge` without any correlated queries. Tested and working using SQL Developer.
oracle sql - finding entries with dates (start/end column) overlap
[ "", "sql", "oracle", "date", "range", "" ]
I have two tables similar to those shown below (just leaving out fields for simplicity). *Table `lead` :* ``` id | fname | lname | email --------------------------------------------- 1 | John | Doe | jd@test.com 2 | Mike | Johnson | mj@test.com ``` *Table `leadcustom` :* ``` id | leadid | name | value ------------------------------------------------- 1 | 1 | utm_medium | cpc 2 | 1 | utm_term | fall 3 | 1 | subject | business 4 | 2 | utm_medium | display 5 | 2 | utm_term | summer 6 | 2 | month | may 7 | 2 | color | red ``` I have a database that captures leads for a wide variety of forms that often have many different form fields. The first table gets the basic info that I know is on each form. The second table captures all other form fields that were sent over, so it can really contain a lot of different fields. What I am trying to do is a join where I can grab all fields from the `lead` table along with `utm_medium` and `utm_term` from the `leadcustom` table. I don't need any additional fields even if they were sent over. *Desired results :* ``` id | fname | lname | email | utm_medium | utm_term --------------------------------------------------------------------------- 1 | John | Doe | jd@test.com | cpc | fall 2 | Mike | Johnson | mj@test.com | display | summer ``` The only way I know to do this is to grab all lead data and then, for each record, make more calls to get the `leadcustom` data I am looking for, but I know there has to be a more efficient way of getting this data. I appreciate any help with this; note that I can't change the way the data is captured or the table formats.
If your columns are fixed, you can do this with group by + case + max like this: ``` select fname, lname, email, max(case when name = 'utm_medium' then value end) as utm_medium, max(case when name = 'utm_term' then value end) as utm_term from lead l join leadcustom c on l.id = c.leadid group by fname, lname, email ``` The case will assign value from the leadcustom table when it matches the given name, otherwise it will return null, and max will pick take the assigned value if it exists over the null. You can test this in [SQL Fiddle](http://sqlfiddle.com/#!3/4d3229/3) The other way to do this is to use pivot operator, but that syntax is slightly more complex -- or at least this is more easy for me.
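Here is the conditional-aggregation pattern run end to end on a cut-down version of the sample data (SQLite for illustration; it supports the same `MAX(CASE ...)` idiom):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lead (id INTEGER, fname TEXT);
CREATE TABLE leadcustom (leadid INTEGER, name TEXT, value TEXT);
INSERT INTO lead VALUES (1, 'John'), (2, 'Mike');
INSERT INTO leadcustom VALUES
    (1, 'utm_medium', 'cpc'),     (1, 'utm_term', 'fall'),   (1, 'subject', 'business'),
    (2, 'utm_medium', 'display'), (2, 'utm_term', 'summer'), (2, 'color', 'red');
""")
# Each CASE yields the value only on its matching name row (NULL elsewhere);
# MAX per group then picks that single non-NULL value.
rows = conn.execute("""
    SELECT fname,
           MAX(CASE WHEN name = 'utm_medium' THEN value END) AS utm_medium,
           MAX(CASE WHEN name = 'utm_term'   THEN value END) AS utm_term
    FROM lead l JOIN leadcustom c ON l.id = c.leadid
    GROUP BY fname
    ORDER BY fname
""").fetchall()
```

Extra attributes like subject or color simply fall out as NULL in every CASE and never reach the result.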
You can try with [`pivot`](https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx) and `join`: ``` select [id] , [fname] , [lname] , [email] , [utm_medium] , [utm_term] from ( select t2.* , t1.[name] , t1.[value] from [leadcustom] t1 join [lead] t2 on t2.[id] = t1.[leadid] ) t pivot ( max([value]) for [name] in ([utm_medium], [utm_term]) ) pt ``` `pivot` rotates the joined table-valued expression, by turning the *unique* values from `[value]` column in the expression into `[utm_medium]` and `[utm_term]` columns in the output, and performs fake aggregation with `max` function (*it works so because a corresponding column can have multiple values for one unique pivoted column, in this case, `[name]` for `[value]`)*. [**SQLFiddle**](http://sqlfiddle.com/#!3/4d3229/14)
Rotate columns to rows for joined tables
[ "", "sql", "sql-server", "join", "pivot", "" ]
I am trying to do an INNER JOIN on two tables that have similar values, but not quite the same. One table has a fully qualified host name for its primary key, and the other the host's short name, as well as the subdomain. It is safe to assume that the short name and the subdomain together are unique. So I've tried: ``` SELECT table1.nisinfo.* FROM table1.nisinfo INNER JOIN table2.hosts ON (table1.nisinfo.shortname + '.' + table1.nisinfo.subdomainname + '.domain.com') = table2.hosts.fqhn WHERE table2.hosts.package = 'somepkg'; ``` This doesn't return the results I expect; it returns the first result hundreds of times. I'd like to return distinct rows. It takes a long time to run as well. What am I doing wrong? I was thinking of running a subquery to get the hostnames, but I don't know what the right path from here is. Thank you!
So in this case, I used a subquery to get the initial results, and then used a join. ``` SELECT table1.nisinfo.* FROM table1.nisinfo JOIN (SELECT distinct(fqhn) FROM table2.hosts WHERE package = 'bash') AS FQ ON ((SUBSTRING_INDEX(FQ.fqhn, '.', 1)) = table1.nisinfo.shortname); ```
Try putting your results into a temp table and then view the table to make sure that the columns are as expected. ``` SELECT table1.nisinfo.*, table1.nisinfo.shortname + '.' + table1.nisinfo.subdomainname + '.domain.com' AS ColID INTO #temp FROM table1.nisinfo; Select * from #temp INNER JOIN table2.hosts ON #temp.ColID = table2.hosts.fqhn WHERE table2.hosts.package = 'somepkg' ; ``` Put a GROUP BY clause at the end of the second statement.
JOIN on keys that don't have the same value
[ "", "mysql", "sql", "join", "" ]
I am managing a large database and am trying to join two tables, but the data in the columns doesn't actually match: one has dashes and the other has spaces, such as GPD 142 pol (PartNumber) in the Company table and GPD-142-pol (PartNumber) in the Customer table. My query is written like this: ``` SELECT * FROM CompanyPartsList JOIN SalesReport On CompanyPartsList.[Company Part Number] = SalesReport.[Customer Part #] ``` I also tried something like this: ``` SELECT * FROM CompanyPartsList JOIN SalesReport On CompanyPartsList.[Company Part Number] Like SalesReport.[Customer Part #] ``` Any help would be appreciated.
Again, doing this will be very slow; a better solution would be a trigger that maintains a consistently formatted column on one side or the other. ``` SELECT * FROM CompanyPartsList JOIN SalesReport On CompanyPartsList.[Company Part Number] = Replace(SalesReport.[Customer Part #],'-',' ') ```
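A minimal sketch of the REPLACE-in-the-join-condition idea (SQLite for illustration, part numbers from the question). Wrapping a column in a function on one side of the join generally prevents index use on that side, which is why the trigger-maintained normalized column is the better long-term fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CompanyPartsList (part TEXT);
CREATE TABLE SalesReport (part TEXT);
INSERT INTO CompanyPartsList VALUES ('GPD 142 pol');
INSERT INTO SalesReport VALUES ('GPD-142-pol'), ('XYZ-9');
""")
# Normalize dashes to spaces on one side so the two spellings compare equal.
rows = conn.execute("""
    SELECT c.part, s.part
    FROM CompanyPartsList c
    JOIN SalesReport s ON c.part = REPLACE(s.part, '-', ' ')
""").fetchall()
```

Only the part number that matches after normalization joins; the unrelated 'XYZ-9' row is excluded.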
Try replacing characters that can cause the values to be different. ``` SELECT * FROM CompanyPartsList cpl, SalesReport sr WHERE REPLACE(REPLACE(cpl.[Company Part Number],'-',''),' ','') = REPLACE(REPLACE(sr.[Customer Part #],'-',''),' ','') ```
SQL: How to join tables where data in columns are alike
[ "", "sql", "sql-server", "database", "" ]
I want to know how many rows were removed in a certain DELETE operation. I took [the Microsoft example B](https://msdn.microsoft.com/de-de/library/ms177564.aspx) which is ``` DELETE Sales.ShoppingCartItem OUTPUT DELETED.* WHERE ShoppingCartID = 20621; ``` and tried to modify it to return only the `count` of deleted records: ``` DELETE FROM datacache OUTPUT COUNT(DELETED.*) WHERE userId=@id ``` but this throws ``` ExceptionMessage: "Incorrect syntax near '*'." ExceptionType: "System.Data.SqlClient.SqlException" Message: "Error" ``` So I tried ``` DELETE FROM datacache OUTPUT COUNT(DELETED) WHERE userId=@id ``` which throws ``` ExceptionMessage: "Invalid column name 'DELETED'." ExceptionType: "System.Data.SqlClient.SqlException" Message: "Error" ``` What did I miss?
Just run your query and get the modified rows ``` DELETE FROM datacache WHERE userId=@id SELECT @@ROWCOUNT ```
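For completeness, the same "delete, then read the affected-row count" pattern outside T-SQL: with Python's sqlite3 the cursor's rowcount after the DELETE plays the role of `@@ROWCOUNT` (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datacache (userId INTEGER, payload TEXT)")
conn.executemany("INSERT INTO datacache VALUES (?,?)",
                 [(7, 'a'), (7, 'b'), (9, 'c')])
# rowcount is populated for DML statements; SQLite's own
# SELECT changes() would return the same number.
cur = conn.execute("DELETE FROM datacache WHERE userId = ?", (7,))
deleted = cur.rowcount
```

Two of the three rows match `userId = 7`, so `deleted` is 2.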
You cannot use aggregates in the `OUTPUT` clause. You can output a column into a table variable instead and count from there: ``` DECLARE @t TABLE(id int) DELETE FROM Sales.ShoppingCartItem OUTPUT Deleted.ShoppingCartID INTO @t WHERE ShoppingCartID = 20621; SELECT COUNT(*) FROM @t ```
DELETE ... OUTPUT COUNT(DELETED.*)
[ "", "sql", "sql-server", "output-clause", "" ]
I want to match up Buyers and Sellers. I'm doing this to make it as simple as possible. I have a table: tmpSales. In it, I have a TransID, BuyerID, SellerID, Item and Date. Sometimes buyers are sellers too, and vice versa. What I want is a record where a buyer sold to a seller and that same seller sold to that same buyer. I've got this query, which works fine: ``` SELECT * FROM [dbo].[tmpSales] T1 INNER JOIN [dbo].[tmpSales] T2 ON T1.[BuyerID] = T2.[SellerID] AND T2.[BuyerID] = T1.[SellerID] ``` However, it returns 2 records for each match. Is there any way for this to return a single record, with both BuyerID and SellerID present? Sample data would look like so: ``` TransID BuyerID SellerID ItemID Date 1 10012 10032 65 10/15/2014 2 11111 10012 120 12/15/2014 3 10032 10012 32 2/2/2015 4 11111 10032 30 2/10/2015 5 10012 11111 45 3/1/2015 ``` In this case, I can see that **10012** and **10032** both sold to and bought from each other, as did **10012** and **11111**. I just want something like: ``` ID1 ID2 10012 10032 10012 11111 ``` The data will be ever growing, so it's got to be dynamic (i.e. I can't put anything like, "Where BuyerID = '10012'" into the code). EDIT: Actually, what I want to do is make this a view or stored procedure and pass 2 IDs to it, and have it tell me whether or not there is a mutual match.
You can easily limit the data in your select to be just one way by selecting which one is the first, like this: ``` SELECT * FROM [dbo].[tmpSales] T1 INNER JOIN [dbo].[tmpSales] T2 ON T1.[BuyerID] = T2.[SellerID] AND T2.[BuyerID] = T1.[SellerID] AND T2.[BuyerID] > T1.[BuyerID] ``` So this way the one that has bigger buyer ID is always as the second one -- and you can have similar logic inside a procedure too.
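A runnable sketch of the `T2.[BuyerID] > T1.[BuyerID]` trick on the question's sample data (SQLite for illustration; DISTINCT added in case the same pair traded more than once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tmpSales (BuyerID INTEGER, SellerID INTEGER)")
conn.executemany("INSERT INTO tmpSales VALUES (?,?)",
                 [(10012, 10032), (11111, 10012), (10032, 10012),
                  (11111, 10032), (10012, 11111)])
# The extra > comparison keeps only one orientation of each mutual pair.
pairs = conn.execute("""
    SELECT DISTINCT T1.BuyerID, T1.SellerID
    FROM tmpSales T1
    JOIN tmpSales T2 ON T1.BuyerID = T2.SellerID
                    AND T2.BuyerID = T1.SellerID
                    AND T2.BuyerID > T1.BuyerID
    ORDER BY T1.BuyerID, T1.SellerID
""").fetchall()
```

Each mutual relationship now appears exactly once; the one-way sale from 11111 to 10032 is correctly absent.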
You can use the following query to retrieve all the mutual map ``` SELECT DISTINCT T.BuyerID AS ID1,T.SellerID AS ID2 FROM [dbo].[tmpSales] T WHERE EXISTS (SELECT * FROM [dbo].[tmpSales] T1 WHERE T1.BuyerID = T.SellerId AND T1.SellerID = T.BuyerID) ``` If you want to check only for a specified pair, you can also do like this ``` DECLARE @Id1 INT DECLARE @Id2 INT SELECT * FROM [dbo].[tmpSales] T WHERE T.BuyerID = @Id1 AND T.SellerID = @Id2 AND EXISTS (SELECT * FROM [dbo].[tmpSales] T1 WHERE T1.BuyerID = @Id2 AND T1.SellerID = @Id1) ```
Getting mutual matches in SQL Server
[ "", "sql", "sql-server", "" ]