I have to produce an article number based on a convention, and this convention is as below. The number of digits:

```
{1 or 2 or 3}.{4 or 5}.{n}
```

Example product numbers:

```
7.1001.1
1.1453.1
3.5436.1
12.7839.1
12.3232.1
13.7676.1
3.34565.1
12.56433.1
247.23413.1
```

The first part is based on the producent, and every producent has its own number. Let's say Reebok - 12, Nike - 256 and Umbro - 3. I have to pass this number and check in the table if there are rows containing it, e.g. if I pass 12 then I should get everything which starts with 12. There are three cases for what to do:

**1st CASE: no rows in the table:**

> then retrieve 1001

**2nd CASE: there are rows**, so for sure there is already at least one:

```
12.1001.1
```

and maybe more, let's say:

```
12.1002.1
12.1003.1
...
12.4345.1
```

> then the next one should be retrieved: 4346

and if there are already 5 digits for this producent, let's say:

```
12.1002.1
12.1003.1
...
12.9999.1
```

> then the next one should be retrieved: 10001

**3rd CASE: in fact the same as the 2nd, but the second part has reached 9999:**

```
12.1001.1
...
12.9999.1
```

> then 10001 should be returned

or

```
12.1002.1
12.1003.1
...
12.9999.1
12.10001.1
12.10002.1
```

> then the next one should be retrieved: 10003

I hope you know what I mean. I have already started something. This code takes the producent number, looks for all rows starting with it and then simply adds 1 to the second part. Unfortunately I am not sure how to change it to cover those three cases:

```
select parsename(max(nummer), 3)                           -- 3
     + '.'
     + ltrim(max(cast(parsename(nummer, 2) as int) + 1))   -- 5436 -> 5437
     + '.1'
from tbArtikel
where Nummer LIKE '3.%'
```

Counting on your help. If something is unclear let me know.
**Additional question:**

```
Using cmd As New SqlCommand("SELECT CASE WHEN r.number Is NULL THEN 1001
            WHEN r.number = 9999 THEN 10001
            Else r.number + 1 End number
    FROM (VALUES(@producentNumber)) AS a(art)
    -- this will search this number within inner query And make case..
    LEFT JOIN(
        -- Get producent (in Like) number And max number Of it (without Like it Get all producent numbers And their max number out Of all)
        SELECT PARSENAME(Nummer, 3) art,
               MAX(CAST(PARSENAME(Nummer, 2) AS INT)) number
        FROM tbArtikel
        WHERE Nummer Like '@producentNumber' + '[.]%'
        GROUP BY PARSENAME(Nummer, 3)
    ) r On r.art = a.art", con)
    cmd.CommandType = CommandType.Text
    cmd.Parameters.AddWithValue("@producentNumber", producentNumber)
```
A fairly straightforward way is to (ab)use [PARSENAME](https://msdn.microsoft.com/en-us/library/ms188006.aspx) to split the string in order to extract the current maximum. An outer query can then implement the rules for the value being missing/9999/other. The value (12 here) is placed in a table value constructor so that a missing value can be detected using a `LEFT JOIN`.

```
SELECT CASE WHEN r.number IS NULL THEN 1001
            WHEN r.number = 9999 THEN 10001
            ELSE r.number + 1 END number
FROM (VALUES(12)) AS a(category)
LEFT JOIN (
    SELECT PARSENAME(prodno, 3) category,
           MAX(CAST(PARSENAME(prodno, 2) AS INT)) number
    FROM products
    GROUP BY PARSENAME(prodno, 3)
) r ON r.category = a.category;
```

[An SQLfiddle to test with](http://sqlfiddle.com/#!6/1886a/2).

As a further optimization, you could add a `WHERE prodno LIKE '12[.]%'` to the inner query to avoid parsing unnecessary rows.
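The same missing/9999/other rule can be sketched outside SQL Server, e.g. in SQLite, which has no `PARSENAME`; `substr`/`instr` split the string instead, and the sample rows below are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbArtikel (Nummer TEXT);
INSERT INTO tbArtikel VALUES ('12.1001.1'), ('12.9999.1'), ('3.5436.1');
""")

def next_number(producent):
    # Take the middle part of every Nummer for this producent
    # (SQLite's CAST keeps the leading digits of '1001.1' -> 1001),
    # then apply the three rules: no rows -> 1001, 9999 -> 10001, else max + 1.
    (n,) = con.execute("""
        SELECT CASE WHEN MAX(part) IS NULL THEN 1001
                    WHEN MAX(part) = 9999  THEN 10001
                    ELSE MAX(part) + 1 END
        FROM (SELECT CAST(substr(Nummer, instr(Nummer, '.') + 1) AS INTEGER) AS part
              FROM tbArtikel
              WHERE Nummer LIKE ? || '.%')
    """, (str(producent),)).fetchone()
    return n
```

With the rows above, `next_number(12)` hits the 9999 rule, `next_number(3)` returns max + 1, and an unknown producent falls back to 1001.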
I don't fully understand what you're asking for. I am unsure about the examples... but if I was doing it I'd try to break the field into 3 fields first and then do something with them. [sqlfiddle](http://sqlfiddle.com/#!6/91513/15)

```
SELECT nummer,
       LEFT(nummer, first-1) as field1,
       RIGHT(LEFT(nummer, second-1), second-first-1) as field2,
       RIGHT(nummer, LEN(nummer)-second) as field3
FROM (SELECT nummer,
             CHARINDEX('.', nummer) as first,
             CHARINDEX('.', nummer, CHARINDEX('.', nummer)+1) as second
      FROM tbArtikel) T
```

Hopefully with the 3 fields broken up, it's much easier to apply logic to them.

Update: Okay, I reread your question and I sort of know what you're trying to get at. If the user searches for a value that doesn't exist, for example 8, then you want 1001 returned; if they search for anything else that has results, then return the max+1 unless it's 9999, in which case return 10001. If this is correct then check this [sqlfiddle2](http://sqlfiddle.com/#!6/84137/18)

```
DECLARE @search varchar(20)
SET @search = '8'

SELECT field1, max(nextvalue) as nextvalue
FROM (SELECT field1,
             MAX(CASE (field2) WHEN 9999 THEN 10001 ELSE field2+1 END) as nextvalue
      FROM (SELECT nummer,
                   CAST(LEFT(nummer, first-1) as INTEGER) as field1,
                   CAST(RIGHT(LEFT(nummer, second-1), second-first-1) as INTEGER) as field2,
                   CAST(RIGHT(nummer, LEN(nummer)-second) as INTEGER) as field3
            FROM (SELECT nummer,
                         CHARINDEX('.', nummer) as first,
                         CHARINDEX('.', nummer, CHARINDEX('.', nummer)+1) as second
                  FROM tbArtikel) T
           ) T2
      GROUP BY field1
      UNION
      SELECT CAST(@search as INTEGER) as field1, 1001
     ) T3
WHERE field1 = @search
GROUP BY field1
```

Just change the @search variable to see its results. I think there might be a cleaner way to do this but it's not coming to me right now :(
Get max number from table add one and check with specific convention
[ "sql", "sql-server" ]
How can I execute the below stored procedure?

```
create or replace procedure squareOf(x IN OUT NUMBER) is
begin
  x := x*x;
end;
```
@Massie already mentioned one approach using an anonymous block. Another approach is using a bind variable on the command line, like below:

```
var c number;
exec :c := 6;
execute squareOf(:c);
print c;
```
```
DECLARE
  x NUMBER := 6;
BEGIN
  squareOf(x => x);
  dbms_output.put_line('X: ' || x);
END;
```

returns 36
Executing a Stored Procedure from Oracle SQL Developer
[ "sql", "oracle", "stored-procedures" ]
We are using the below SQL to get the list of customers from our db to whom we sent an SMS within the last 3 days.

```
SELECT * FROM sms WHERE sent_time >= NOW() - INTERVAL 3 DAY;
```

The table `sms` is updated daily, and the `sent_time` column has a default value of 0 or holds the last sent time. There are rows with `sent_time = 0`, but no row is fetched by the above script. What is the correct SQL?

Earlier we were using the SQL with PHP like this:

```
$vTime = time() - ( 60*60*24*3 );
$sql = "SELECT * FROM sms WHERE $vTime <= sent_time";
```
The function `NOW()` returns the current date and time, but as I can see you used PHP [time()](http://php.net/manual/en/function.time.php) before, which returns a Unix timestamp. The SQL equivalent is `UNIX_TIMESTAMP()`.

Syntax `UNIX_TIMESTAMP()`

```
SELECT * FROM sms WHERE sent_time >= UNIX_TIMESTAMP() - (60*60*24*3);
```

Syntax `UNIX_TIMESTAMP(date)`

```
SELECT * FROM sms
WHERE sent_time >= UNIX_TIMESTAMP(NOW() - INTERVAL 3 DAY)
   OR sent_time = 0
```
`NOW() - INTERVAL 3 DAY` returns a DATETIME while `time() - ( 60*60*24*3 )` in PHP returns a timestamp. If your database column holds a timestamp, your MySQL test will never work; use this instead:

```
SELECT * FROM sms WHERE sent_time >= UNIX_TIMESTAMP(NOW() - INTERVAL 3 DAY)
```
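The timestamp-vs-datetime mismatch is easy to reproduce outside MySQL. A minimal sketch using SQLite as a stand-in, with hypothetical customer rows: `sent_time` stores a Unix timestamp, so the cutoff must also be a Unix timestamp (the equivalent of `UNIX_TIMESTAMP() - 60*60*24*3`):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sms (customer TEXT, sent_time INTEGER)")

now = int(time.time())
con.executemany("INSERT INTO sms VALUES (?, ?)", [
    ("alice", now - 1 * 24 * 3600),   # sent yesterday
    ("bob",   now - 10 * 24 * 3600),  # sent 10 days ago
    ("carol", 0),                     # never sent (default 0)
])

# sent within the last 3 days: compare against a Unix-timestamp cutoff
cutoff = now - 60 * 60 * 24 * 3
rows = [r[0] for r in con.execute(
    "SELECT customer FROM sms WHERE sent_time >= ?", (cutoff,))]
```

Only "alice" matches; comparing a timestamp column against a datetime string would never match, which is the bug in the original query.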
Get all rows before a specific day
[ "mysql", "sql" ]
I have a table which looks like this:

```
DATE              | Number
01-01-16 00:00:00   10
02-01-16 00:00:00   10
03-01-16 00:00:00   11
04-01-16 00:00:00   12
05-01-16 00:00:00   13
....
31-01-16 00:00:00   15
........
29-02-16 00:00:00   18
```

I have this table for the last few months. I now want to retrieve the values of the rows that contain the last day of the previous month and of the month before that. So for today I would like to retrieve the values for 31-01-16 and 29-02-16. My result should look like:

```
lastmonth | lastmonth2
18 -> corresponding value for date 29-02-16 | 15 -> value for 31-01-16
```

Would appreciate any help. Cheers
This is Gordon's code for determining the correct dates, plus subqueries to fetch the Number values for those rows:

```
SELECT (SELECT Number FROM cc_open_csi_view
        WHERE last_day(date_sub(curdate(), interval 1 month)) = date(`DATE`)) as lastmonth,
       (SELECT Number FROM cc_open_csi_view
        WHERE last_day(date_sub(curdate(), interval 2 month)) = date(`DATE`)) as lastmonth2
FROM DUAL;
```

Hope that's what you wanted! Works for me in a simple example. I don't know if you need the `date()` part around `DATE` but it seemed safest.
Here is the logic for the last day of this month and of the previous month:

```
select last_day(curdate()) as last_day_of_this_month,
       last_day(date_sub(curdate(), interval 1 month)) as last_day_of_prev_month
```

You can get the last day of any month relative to the current month by changing the "1". And I have no idea what date "30-2-16" is. When describing dates, you should use ISO standard formats. The last day of February 2016 was 2016-02-29.
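The same "last day of a month N months back" computation can be sketched in plain Python (no MySQL `LAST_DAY` needed); the helper name and the fixed "today" are made up for the example:

```python
import calendar
import datetime

def last_day_of_month_offset(today, months_back):
    """Last day of the month `months_back` months before `today`'s month."""
    year, month = today.year, today.month
    # step back the requested number of months, borrowing years as needed
    month -= months_back
    while month < 1:
        month += 12
        year -= 1
    # calendar.monthrange returns (first weekday, number of days in month)
    return datetime.date(year, month, calendar.monthrange(year, month)[1])

today = datetime.date(2016, 3, 22)           # hypothetical "today"
prev = last_day_of_month_offset(today, 1)    # last day of February 2016
prev2 = last_day_of_month_offset(today, 2)   # last day of January 2016
```

Note that `calendar.monthrange` handles the leap year automatically, so February 2016 comes back as the 29th.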
SQL - Last Day of Month
[ "mysql", "sql" ]
I have a select statement where I create 2 temp tables, do an insert-into-select, and then take the data from those temp tables with a join between them. This final select is what I want the metadata to be. In SSMS it runs fine; in SSIS I don't know why it's throwing that error. The query is as such:

```
CREATE TABLE #Per (PerID bigint NOT NULL......)
CREATE TABLE #Pre (PerID bigint NOT NULL, IsWorking.......)

INSERT INTO #Per SELECT .... FROM .....
INSERT INTO #Pre SELECT .... FROM .....

SELECT * FROM #Per per LEFT JOIN #Pre pre ON per.PerID = pre.PerID
```

I have tested all the statements to make sure they work, and the query as a whole returns the data, but SSIS is throwing the error:

```
The metadata could not be determined because statement 'INSERT INTO #Per SELECT ...... uses a temp table.".
Error at project_name [646]: Unable to retrieve column information from the data source.
Make sure your target table in the database is available.
```
Try using a table variable instead, something like:

```
DECLARE @Per TABLE (PerID bigint NOT NULL......)
DECLARE @Pre TABLE (PerID bigint NOT NULL, IsWorking.......)

INSERT INTO @Per SELECT .... FROM .....
INSERT INTO @Pre SELECT .... FROM .....

SELECT * FROM @Per per LEFT JOIN @Pre pre ON per.PerID = pre.PerID
```

It should work fine.
If you are working with SSIS 2012 or a later version, it uses the system stored procedure **sp\_describe\_first\_result\_set** to fetch the metadata of the tables, and that procedure does not support temporary tables. But you can use other options like table variables and CTEs.
The metadata could not be determined because statement 'insert into
[ "sql", "ssis" ]
I am not very familiar with SQL queries, but I would like to move multiple queries which I'm currently doing at the code level to the server, and combine them, to speed things up and simplify the code. Currently this takes several seconds even for only 5-10 items.

I have a view and a table; let's call them View1 and Table1. My first query:

```
SELECT UnitSerialNumber
FROM Table1
WHERE OrderID = 1234 AND IsActive = 1
ORDER BY SerialNumberDate, IsPrinted
```

This returns a list (every item is a unique `UnitSerialNumber`), which I'm looping through...

`BEGINNING OF LOOP`

```
SELECT ResultId
FROM View1
WHERE Data = UnitSerialNumber AND ItemId = 338 AND StatusId = 2
```

This returns a single value (`ResultId`) which I'm using in a query...

```
SELECT Data
FROM View1
WHERE ID = ResultId AND (ItemId = 311 OR ItemId = 313) AND StatusId = 2
ORDER BY ItemId
```

(I know this table structure is crap, but I'm not in a position to do anything about it; this is how the data is stored.) So this returns an object with 2 values.

`END OF LOOP`
I gave subqueries a try, and this is working for me. Thanks everyone for trying to help!

```
SELECT View1.Data, View1.ItemId, z.SerialNumberDate, z.IsPrinted
FROM View1
JOIN (
    SELECT View1.Id, x.SerialNumberDate, x.IsPrinted
    FROM View1
    JOIN (
        SELECT UnitSerialNumber, SerialNumberDate, IsPrinted
        FROM Table1
        WHERE OrderID = 613 AND IsActive = 1
    ) AS x ON View1.Data = x.UnitSerialNumber
) AS z ON View1.DataCardId = z.Id
WHERE View1.ItemId = 313 AND z.IsPrinted IS NULL
ORDER BY z.IsPrinted, z.SerialNumberDate
```
CTEs are a simple way to combine such queries. Note that `SerialNumberDate` and `IsPrinted` live in Table1, so the second CTE joins back to the first to carry them along:

```
with q1 as (
      SELECT UnitSerialNumber, SerialNumberDate, IsPrinted
      FROM Table1
      WHERE OrderID = 1234 AND IsActive = 1
     ),
     q2 as (
      SELECT v.ResultId, q1.SerialNumberDate, q1.IsPrinted
      FROM View1 v
      JOIN q1 ON v.Data = q1.UnitSerialNumber
      WHERE v.ItemId = 338 AND v.StatusId = 2
     )
SELECT q2.ResultId, v.Data
FROM q2
JOIN View1 v ON v.ID = q2.ResultId
WHERE v.ItemId IN (311, 313) AND v.StatusId = 2
ORDER BY q2.SerialNumberDate, q2.IsPrinted, v.ItemId;
```
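The shape of the lookup chain (serial number -> ResultId -> value rows) can be sketched end-to-end with CTEs in SQLite; the table contents below are hypothetical stand-ins for the schema described in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (UnitSerialNumber TEXT, OrderID INT, IsActive INT,
                     SerialNumberDate TEXT, IsPrinted INT);
CREATE TABLE View1  (ID INT, ResultId INT, Data TEXT, ItemId INT, StatusId INT);

INSERT INTO Table1 VALUES ('SN1', 1234, 1, '2016-01-01', 0);
-- row linking the serial number to a result id
INSERT INTO View1 VALUES (10, 99, 'SN1', 338, 2);
-- rows holding the two values for that result id
INSERT INTO View1 VALUES (99, NULL, 'value-a', 311, 2);
INSERT INTO View1 VALUES (99, NULL, 'value-b', 313, 2);
""")

rows = con.execute("""
WITH q1 AS (
    SELECT UnitSerialNumber FROM Table1
    WHERE OrderID = 1234 AND IsActive = 1
),
q2 AS (
    SELECT ResultId FROM View1
    WHERE ItemId = 338 AND StatusId = 2
      AND Data IN (SELECT UnitSerialNumber FROM q1)
)
SELECT v.Data
FROM q2 JOIN View1 v ON v.ID = q2.ResultId
WHERE v.ItemId IN (311, 313) AND v.StatusId = 2
ORDER BY v.ItemId
""").fetchall()
```

One round trip replaces the per-item loop, which is where the original several-second cost came from.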
SQL Query Simplification - How to do in SQL Server what is currently done in the code?
[ "sql", "sql-server", "while-loop" ]
I have a data set of customer calls and I want to use count() to find out:

- the total number of calls for each customer
- the total call duration for each customer
- the total number of locations the customer has been in

This is my data:

```
Phone no. - Duration in minutes - Location
1111        3                     88
2222        4                     33
3333        4                     4
1111        7                     55
3333        9                     4
3333        7                     3
```

The result of the query should be:

```
phone no - Total number of records - Total duration of calls - Total of locations
1111       2                         10                        2
2222       1                         4                         1
3333       3                         20                        2
```
This is almost identical to fthiella's answer. Try it like this:

```
select PhoneNo,
       count(*) as TotalNumberOfRecords,
       sum(DurationInMinutes) as TotalDurationOfCalls,
       count(distinct location) as TotalOfLocations
from yourtablename
group by PhoneNo
```
You can use a GROUP BY query with basic aggregate functions, like COUNT(), SUM() and COUNT(DISTINCT), like this:

```
select phone_no, count(*), sum(duration), count(distinct location)
from tablename
group by phone_no
```
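The aggregation above can be run verbatim against the question's sample data; a minimal sketch using SQLite (table and column names chosen for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (phone_no TEXT, duration INT, location INT)")
con.executemany("INSERT INTO calls VALUES (?, ?, ?)", [
    ("1111", 3, 88), ("2222", 4, 33), ("3333", 4, 4),
    ("1111", 7, 55), ("3333", 9, 4), ("3333", 7, 3),
])

# one GROUP BY with three aggregates replaces three separate counts
rows = con.execute("""
    SELECT phone_no, COUNT(*), SUM(duration), COUNT(DISTINCT location)
    FROM calls
    GROUP BY phone_no
    ORDER BY phone_no
""").fetchall()
```

The output matches the expected result in the question, including the distinct-location count for 3333 (locations 4, 4, 3 count as 2).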
make many count () in one query
[ "sql", "sql-server", "sql-server-2008" ]
I am trying to figure out the correct syntax for `UNION`. My schema looks like the following:

```
Players (playerNum, playerName, team, position, birthYear)
Teams (teamID, teamName, home, leagueName)
Games (gameID, homeTeamNum, guestTeamNum, date)
```

I need to print all `teamIDs` where the team played against team X but not against team Y. My first idea was to check the homeTeamNum and then do a check for the guestTeamNum, but I am not sure how to write the proper syntax.

```
SELECT DISTINCT hometeamNum
FROM games
WHERE guestteamNum IN (SELECT teamid FROM teams WHERE teamname = 'X')
  AND guestteamNum NOT IN (SELECT teamid FROM teams WHERE teamname = 'Y')
UNION DISTINCT
```
If you just need the home teams, this should suffice:

```
SELECT DISTINCT hometeamnum
FROM games
WHERE guestteamnum NOT IN (SELECT teamid FROM teams WHERE teamname = 'Y')
```

If you need both home teams and guest teams: select all teams that are not 'y', that didn't play against 'y' as home team and didn't play against 'y' as guest team, and that played against 'x' as guest team or as home team.

```
SELECT DISTINCT teamid
FROM teams
WHERE teamname != 'y'
AND teamid NOT IN (SELECT hometeamnum
                   FROM games
                   INNER JOIN teams ON games.guestteamnum = teams.teamid
                   WHERE teamname = 'y'
                   UNION
                   SELECT guestteamnum
                   FROM games
                   INNER JOIN teams ON games.hometeamnum = teams.teamid
                   WHERE teamname = 'y')
AND teamid IN (SELECT guestteamnum
               FROM games
               INNER JOIN teams ON games.hometeamnum = teams.teamid
               WHERE teamname = 'x'
               UNION
               SELECT hometeamnum
               FROM games
               INNER JOIN teams ON games.guestteamnum = teams.teamid
               WHERE teamname = 'x');
```

Hopefully this is what you were after. There may be a more concise query out there but it's too late in the night for me to think of one :)
Using `NOT EXISTS` allows you to locate rows that don't exist. That is, you want teams that have played against 'X', which are rows that do exist and can be located using a simple join and where clause\*\*. Then from those rows you need to find any that do not exist for team 'Y'.

```
SELECT DISTINCT hometeamnum
FROM games
INNER JOIN teams AS guests ON games.guestTeamNum = guests.teamID
WHERE guests.teamname = 'X'
AND NOT EXISTS (
      SELECT 1
      FROM games AS games2
      INNER JOIN teams AS guests2 ON games2.guestTeamNum = guests2.teamID
      WHERE games.hometeamnum = games2.hometeamnum
      AND guests2.teamname = 'Y'
)
```

Notes. `EXISTS`/`NOT EXISTS` does not actually need to return any data, so it is possible to use `select 1`, `select null` or `select *`. I have used `select 1` here simply because it may be easier to understand; however I would personally prefer `select null`, which stresses that no data is being returned by the exists subquery. `EXISTS`/`NOT EXISTS` are both reasonably efficient and can perform better than `IN (...)`.

\*\* For performance, and where it does not alter the result, use a join in preference to `IN ( subquery )`.
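The `NOT EXISTS` pattern can be exercised on a tiny hypothetical dataset in SQLite: team A hosted only X, team B hosted both X and Y, so only A should survive the filter:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE teams (teamID INT, teamName TEXT);
CREATE TABLE games (hometeamnum INT, guestteamnum INT);
INSERT INTO teams VALUES (1, 'X'), (2, 'Y'), (3, 'A'), (4, 'B');
-- A (3) hosted X; B (4) hosted X and also hosted Y
INSERT INTO games VALUES (3, 1), (4, 1), (4, 2);
""")

rows = con.execute("""
SELECT DISTINCT g.hometeamnum
FROM games g
JOIN teams t ON g.guestteamnum = t.teamID
WHERE t.teamName = 'X'
  AND NOT EXISTS (
      SELECT 1
      FROM games g2
      JOIN teams t2 ON g2.guestteamnum = t2.teamID
      WHERE g2.hometeamnum = g.hometeamnum
        AND t2.teamName = 'Y')
""").fetchall()
```

Team B (id 4) is dropped by the correlated `NOT EXISTS`, leaving only team A (id 3).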
SQL UNION syntax
[ "sql" ]
I have two tables: DataRecord and DataRecordArchive.

A new (unique dataRecordID, e.g. 'id1') dataRecord is inserted into the DataRecord table with version 0. When an update is performed on an existing dataRecord (dataRecordID 'id1' already exists in the DataRecord table), the existing dataRecord (id 'id1', version 0) is inserted into the DataRecordArchive table, and then the dataRecord is updated and the version changes to 1 (and so on to version 2, 3 etc.). So in the end the DataRecordArchive table holds the record with id 'id1' and version 0, and the DataRecord table holds the record with id 'id1' and version 1.

I need to update a dataRecord with some value (a bool value, e.g. 'correctFlag'; no need to duplicate the row into the DataRecordArchive table), BUT I need to choose which table to update, because given an id and version the record might be in the DataRecord table or in the DataRecordArchive table. For example, if I need to update the dataRecord with id1 version 0, the record is in the DataRecordArchive table; for id1 version 1 it is in the DataRecord table. Something like:

```
update case when (select count(*) from DataRecord
                  where id = 'id1' and version = 0) > 0 then DataRecord
            when (select count(*) from DataRecordArchive
                  where id = 'id1' and version = 0) > 0 then DataRecordArchive
       end
set correctFlag = 1
where id = 'id1' and version = 0
```

What is the correct SQL statement for this update?
Create a view that unions the two tables. Update the view.
I cannot place a comment so I will post this as an answer. Ideally, we would need your table structure and sample data from your tables to give a correct suggestion. However, based on the limited info, I suggest using dynamic SQL. You would use dynamic SQL to manipulate the FROM clause of your query based on conditions. Please post the table structure and sample data so that we can give you the exact query to use.
How to select record from one of two tables by criteria , then update it
[ "sql", "sql-server", "database", "t-sql", "sql-update" ]
How can I combine the following select and delete into one query?

```
select krps.kpi_results_fk
from report.kpi_results_per_scene krps
inner join report.kpi_results kr
   on kr.session_uid = '0000c2af-1fc8-4729-bb2a-d4516a63107a'
  and kr.pk = krps.kpi_results_fk

delete from report.kpi_results_per_scene
where kpi_results_fk = 'answer from above query'
```
I think in your case there is *no* need to use an `inner join`. The following query avoids the overhead of the `inner join`:

```
DELETE FROM report.kpi_results_per_scene
WHERE kpi_results_fk IN (SELECT kr.pk
                         FROM report.kpi_results kr
                         WHERE kr.session_uid = '0000c2af-1fc8-4729-bb2a-d4516a63107a')
```
Use the IN operator:

```
delete from report.kpi_results_per_scene
where kpi_results_fk in (
    select krps.kpi_results_fk
    from report.kpi_results_per_scene krps
    inner join report.kpi_results kr
       on kr.session_uid = '0000c2af-1fc8-4729-bb2a-d4516a63107a'
      and kr.pk = krps.kpi_results_fk)
```
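The `DELETE ... WHERE ... IN (subquery)` shape is portable; a minimal sketch in SQLite with hypothetical session data, where only the scene rows belonging to session 'sess-a' should be removed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE kpi_results (pk INT, session_uid TEXT);
CREATE TABLE kpi_results_per_scene (kpi_results_fk INT, scene TEXT);
INSERT INTO kpi_results VALUES (1, 'sess-a'), (2, 'sess-b');
INSERT INTO kpi_results_per_scene VALUES (1, 's1'), (1, 's2'), (2, 's3');
""")

# delete scene rows whose foreign key points at the targeted session
con.execute("""
    DELETE FROM kpi_results_per_scene
    WHERE kpi_results_fk IN (
        SELECT pk FROM kpi_results WHERE session_uid = 'sess-a')
""")

remaining = con.execute(
    "SELECT kpi_results_fk FROM kpi_results_per_scene").fetchall()
```

Both 'sess-a' scene rows are gone in one statement; the 'sess-b' row survives.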
Write a SQL delete based on a select statement
[ "mysql", "sql" ]
I have a **table** (let's call it AAA) containing 3 columns: **ID, DateFrom, DateTo**. I want to write a query that returns all records whose DateFrom-DateTo period contains at least one day of a **specific year** (e.g. 2016). I am using SQL Server 2005. Thank you
Try this:

```
SELECT *
FROM AAA
WHERE DATEPART(YEAR, DateFrom) = 2016 OR DATEPART(YEAR, DateTo) = 2016
```
Another way is this:

```
SELECT <columns list>
FROM AAA
WHERE DateFrom <= '2016-12-31' AND DateTo >= '2016-01-01'
```

If you have an index on `DateFrom` and `DateTo`, this query allows Sql-Server to use that index, unlike the query in Max xaM's answer. On a small table you will probably see no difference, but on a large one there can be a big performance hit with that query, since Sql-Server can't use an index when the column in the where clause is wrapped in a function.
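The overlap condition (a period touches 2016 iff it starts on or before 2016-12-31 and ends on or after 2016-01-01) can be checked against hypothetical rows in SQLite, where ISO date strings compare correctly as text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AAA (ID INT, DateFrom TEXT, DateTo TEXT)")
con.executemany("INSERT INTO AAA VALUES (?, ?, ?)", [
    (1, '2015-06-01', '2015-12-31'),  # ends before 2016: excluded
    (2, '2015-12-01', '2016-01-05'),  # overlaps the start of 2016 by days
    (3, '2016-03-01', '2016-04-01'),  # entirely within 2016
    (4, '2016-12-30', '2017-02-01'),  # overlaps the end of 2016
])

rows = con.execute("""
    SELECT ID FROM AAA
    WHERE DateFrom <= '2016-12-31' AND DateTo >= '2016-01-01'
""").fetchall()
```

Row 2 is the interesting case: only a few days fall in 2016, yet the interval test still catches it.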
SQL find period that contain dates of specific year
[ "sql", "sql-server-2008" ]
I use a simple SQL query to save a date to the database. MySQL column:

```
`current_date` date DEFAULT NULL,
```

But the executed query shows an error:

```
insert into computers (computer_name, current_date, ip_address, user_id)
values ('Default_22', '2012-01-01', null, 37);
```

[2016-03-22 12:21:46] [42000][1064] You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'current\_date, ip\_address, user\_id)
`current_date` is a [MySQL function](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html); you can't use it as a plain column name in your insert query. Try escaping your column names:

```
insert into computers (`computer_name`, `current_date`, ....
```
"current\_date" is reserved in MySQL, so use (`) character to enclose field names Use this ``` INSERT INTO computers (`computer_name`, `current_date`, `ip_address`, `user_id`) VALUES ('Default_22', '2012-01-01', null, 37); ```
SQL syntax error, when saving date to MySql
[ "mysql", "sql", "date" ]
I have SQL Server 2014 and for college I want to implement soft delete on all my tables.

```
SET DATEFORMAT dmy
CREATE TABLE Customers (
    CustomerId int IDENTITY (1,1) not null,
    FirstName varchar (20) not null,
    LastName varchar (30) not null,
    Address1 varchar (30) not null,
    Address2 varchar (30) not null,
    Address3 varchar (30) null,
    Eircode varchar (8) null,
    DateOfBirth date not null,
    CountyId int not null,
    CountryId int not null,
    AssociationId int null,
    CustomerTypeId int not null,
    AccountId int not null
)
```

I want to add a column for soft deletes using a deleted-at timestamp. What is the best way to do this? Is it recommended to use soft deletes (`deleted_at`) on every table in the database to keep it consistent?
*Consistency is key.* Whatever field name you use on one table, try to keep it consistent across the other tables; this will help greatly when you refactor code and need to apply a new where clause to many lines of code.

Using [`ALTER TABLE`](https://msdn.microsoft.com/en-us/library/ms190273.aspx) you could simply add a boolean field for `deleted`, or you can log much more data such as the date/time and even the user. Again though, consistency is key: whatever field names you use, keep them consistent among the other tables.

Then you can create triggers to update the field information on delete, and also cancel the deletion from the trigger. Consistency in the field names will help you greatly here.
Add a field deleted\_time (user etc.) and add a trigger that fills these fields on delete and cancels deleting the record. In queries over current data, add the condition deleted\_time is null.

For better performance on current data you can instead create a new table like "Customers\_arch" and add an on-delete trigger to Customers that inserts the row from Customers into Customers\_arch with some additional fields like date\_time, user etc.; then you don't need to change the queries in your existing apps.
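The deleted-at pattern itself is small; a minimal sketch in SQLite (triggers omitted, the table trimmed to two columns for the example): delete becomes an `UPDATE` that stamps the row, and every normal query filters on `deleted_at IS NULL`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (
    CustomerId INTEGER PRIMARY KEY,
    FirstName  TEXT NOT NULL,
    deleted_at TEXT DEFAULT NULL   -- NULL means "not deleted"
);
INSERT INTO Customers (FirstName) VALUES ('Alice'), ('Bob');
""")

# soft delete: stamp the row instead of removing it
con.execute("""
    UPDATE Customers SET deleted_at = datetime('now')
    WHERE FirstName = 'Bob'
""")

# normal queries filter the flagged rows out
active = [r[0] for r in con.execute(
    "SELECT FirstName FROM Customers WHERE deleted_at IS NULL")]
```

The row for Bob is still physically present (useful for auditing or undelete), but invisible to the filtered query.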
Implementing soft delete
[ "sql", "sql-server-2014", "soft-delete" ]
I need to write an SQL query to identify the title of the film with the longest running time, and I'm just wondering how I would do that? I've tried this, but I'm not sure exactly what I need to do to fix the statement.

```
select f.film_title
from film f
order by f.film_len desc
limit 1;
```

I thought the simplest approach would be to sort the movies by length in descending order and then take only the first result, which would be the longest movie. However, this does not take into account films with the same length. And this is the table I've created that I have to find the results from.

```
drop table film_director;
drop table film_actor;
drop table film;
drop table studio;
drop table actor;
drop table director;

CREATE TABLE studio(
    studio_ID NUMBER NOT NULL,
    studio_Name VARCHAR2(30),
    PRIMARY KEY(studio_ID));

CREATE TABLE film(
    film_ID NUMBER NOT NULL,
    studio_ID NUMBER NOT NULL,
    genre VARCHAR2(30),
    genre_ID NUMBER(1),
    film_Len NUMBER(3),
    film_Title VARCHAR2(30) NOT NULL,
    year_Released NUMBER NOT NULL,
    PRIMARY KEY(film_ID),
    FOREIGN KEY (studio_ID) REFERENCES studio);

CREATE TABLE director(
    director_ID NUMBER NOT NULL,
    director_fname VARCHAR2(30),
    director_lname VARCHAR2(30),
    PRIMARY KEY(director_ID));

CREATE TABLE actor(
    actor_ID NUMBER NOT NULL,
    actor_fname VARCHAR2(15),
    actor_lname VARCHAR2(15),
    PRIMARY KEY(actor_ID));

CREATE TABLE film_actor(
    film_ID NUMBER NOT NULL,
    actor_ID NUMBER NOT NULL,
    PRIMARY KEY(film_ID, actor_ID),
    FOREIGN KEY(film_ID) REFERENCES film(film_ID),
    FOREIGN KEY(actor_ID) REFERENCES actor(actor_ID));

CREATE TABLE film_director(
    film_ID NUMBER NOT NULL,
    director_ID NUMBER NOT NULL,
    PRIMARY KEY(film_ID, director_ID),
    FOREIGN KEY(film_ID) REFERENCES film(film_ID),
    FOREIGN KEY(director_ID) REFERENCES director(director_ID));

INSERT INTO studio (studio_ID, studio_Name) VALUES (1, 'Paramount');
INSERT INTO studio (studio_ID, studio_Name) VALUES (2, 'Warner Bros');
INSERT INTO studio (studio_ID, studio_Name) VALUES (3, 'Film4');
INSERT INTO studio (studio_ID, studio_Name) VALUES (4, 'Working Title Films');

INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (1, 1, 'Comedy', 1, 180, 'The Wolf Of Wall Street', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (2, 2, 'Romance', 2, 143, 'The Great Gatsby', 2013);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (3, 3, 'Science Fiction', 3, 103, 'Never Let Me Go', 2008);
INSERT INTO film (film_ID, studio_ID, genre, genre_ID, film_Len, film_Title, year_Released) VALUES (4, 4, 'Romance', 4, 127, 'Pride and Prejudice', 2005);

INSERT INTO director (director_ID, director_fname, director_lname) VALUES (1, 'Martin', 'Scorcese');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (2, 'Baz', 'Luhrmann');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (3, 'Mark', 'Romanek');
INSERT INTO director (director_ID, director_fname, director_lname) VALUES (4, 'Joe', 'Wright');

INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (1, 'Matthew', 'McConnaughy');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (2, 'Leonardo', 'DiCaprio');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (3, 'Margot', 'Robbie');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (4, 'Joanna', 'Lumley');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (5, 'Carey', 'Mulligan');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (6, 'Tobey', 'Maguire');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (7, 'Joel', 'Edgerton');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (8, 'Keira', 'Knightly');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (9, 'Andrew', 'Garfield');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (10, 'Sally', 'Hawkins');
INSERT INTO actor
(actor_ID, actor_fname, actor_lname) VALUES (11, 'Judi', 'Dench');
INSERT INTO actor (actor_ID, actor_fname, actor_lname) VALUES (12, 'Matthew', 'Macfadyen');

INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 1);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 2);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 3);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (1, 4);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 2);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 6);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (2, 7);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 8);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 9);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (3, 10);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 5);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 8);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 11);
INSERT INTO film_actor (film_ID, actor_ID) VALUES (4, 12);

INSERT INTO film_director (film_ID, director_ID) VALUES (1,1);
INSERT INTO film_director (film_ID, director_ID) VALUES (2,2);
INSERT INTO film_director (film_ID, director_ID) VALUES (3,3);
INSERT INTO film_director (film_ID, director_ID) VALUES (4,4);
```
You have to assume that there will be movies with the same runtime.

```
select f.film_title
from film f
where film_Len = (select max(film_Len) from film)
```
You can also use a ranking function to determine it:

```
SELECT *
FROM (SELECT f.film_title,
             rank() over (order by f.film_len DESC) rnk
      from film f)
WHERE rnk = 1
```

If there are 2 films with the same length, they will both be shown.
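The max-subquery approach handles ties naturally; a minimal sketch in SQLite with a hypothetical extra film added to force a tie for the longest runtime:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE film (film_Title TEXT, film_Len INT)")
con.executemany("INSERT INTO film VALUES (?, ?)", [
    ("The Wolf Of Wall Street", 180),
    ("The Great Gatsby", 143),
    ("Some Other Epic", 180),   # hypothetical tie for the longest runtime
])

# compare every row against the overall maximum, so ties all come back
rows = con.execute("""
    SELECT film_Title FROM film
    WHERE film_Len = (SELECT MAX(film_Len) FROM film)
    ORDER BY film_Title
""").fetchall()
```

Unlike `ORDER BY ... LIMIT 1`, both 180-minute films are returned.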
SQL ORDER and LIMIT to 1 result
[ "sql", "oracle", "max", "sql-order-by", "rownum" ]
I have a column in `jsonb` storing a map, like `{'a':1,'b':2,'c':3}`, where the number of keys differs in each row. I want to count them. jsonb\_object\_keys can retrieve the keys, but it returns a `setof`. Is there something like this?

```
select count(jsonb_object_keys(obj)) from XXX
```

(This won't work: `ERROR: set-valued function called in context that cannot accept a set`.)

From the [Postgres JSON Functions and Operators document](http://www.postgresql.org/docs/9.4/static/functions-json.html):

```
json_object_keys(json)
jsonb_object_keys(jsonb)

setof text

Returns set of keys in the outermost JSON object.

json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')

 json_object_keys
------------------
 f1
 f2
```

Crosstab isn't feasible as the number of keys could be large.
You could convert the keys to an array and use array\_length to get this:

```
select array_length(array_agg(A.key), 1)
from (
  select json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}') as key
) A;
```

If you need this for the whole table, you can just group by the primary key.
Shortest:

```
SELECT count(*) FROM jsonb_object_keys('{"a": 1, "b": 2, "c": 3}'::jsonb);
```

Returns 3.

If you want the number of keys for every json value in a table:

```
SELECT (SELECT COUNT(*) FROM jsonb_object_keys(myJsonField)) nbr_keys
FROM myTable;
```

Edit: there was a typo in the second example.
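For intuition, "count only the outermost keys" is exactly `len()` of the parsed top-level object; a small Python sketch over the two JSON documents used in this question (nested objects count as a single key):

```python
import json

rows = [
    '{"a": 1, "b": 2, "c": 3}',
    '{"f1": "abc", "f2": {"f3": "a", "f4": "b"}}',
]

# like jsonb_object_keys, look only at the outermost object:
# f2's nested keys f3/f4 are not counted
key_counts = [len(json.loads(r)) for r in rows]
```

This mirrors what the `count(*)` over `jsonb_object_keys(...)` computes per row in Postgres.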
How to count setof / number of keys of JSON in postgresql?
[ "sql", "json", "postgresql" ]
I'm stuck on an MS SQL Server 2012 query. What I want is to write multiple values into a "CASE" operator inside the "IN" statement of a WHERE clause, like the following:

```
WHERE [CLIENT] IN (CASE WHEN T.[IS_PHYSICAL]
                        THEN 2421, 2431
                        ELSE 2422, 2432
                   END)
```

The problem is with 2421, 2431: they cannot be separated by a comma. Is there a way to write this differently? Thanks.
This is simpler if you don't use `case` in the `where` clause. Something like this:

```
where (T.[IS_PHYSICAL] = 1 and [client] in (2421, 2431)) or
      (T.[IS_PHYSICAL] = 0 and [client] in (2422, 2432))
```
I'd use AND / OR instead of a case expression. (T-SQL has no standalone boolean type, so the flag is compared to 1/0 explicitly.)

```
WHERE (T.[IS_PHYSICAL] = 1 AND [CLIENT] IN (2421, 2431))
   OR (T.[IS_PHYSICAL] = 0 AND [CLIENT] IN (2422, 2432))
```
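The AND/OR rewrite can be verified on a tiny hypothetical table in SQLite: four rows covering both flag values, where only one client id per flag should match:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (client INT, is_physical INT)")
con.executemany("INSERT INTO T VALUES (?, ?)", [
    (2421, 1), (2422, 1),   # physical: only 2421 is in the physical list
    (2422, 0), (2421, 0),   # non-physical: only 2422 is in the other list
])

# each flag value gets its own IN-list, joined with OR
rows = con.execute("""
    SELECT client, is_physical FROM T
    WHERE (is_physical = 1 AND client IN (2421, 2431))
       OR (is_physical = 0 AND client IN (2422, 2432))
""").fetchall()
```

This expresses "pick the IN-list based on the flag" without needing a multi-valued CASE, which SQL doesn't support.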
"CASE WHEN" operator in "IN" statement
[ "sql", "sql-server", "t-sql" ]
I have 2 columns from 2 tables in a database that I want to compare. Column1 is in table1 and is an integer field with entries like the following:

```
column1
147518
187146
169592
```

Column2 is in table2 and is a varchar(15) field with various entries, but for this example let's use these 3:

```
column2
169592
00010000089
DummyId
```

Part of my query relies on checking if rows from table1 are linked to rows in table2, and to do this I need to compare column1 and column2.

```
SELECT * FROM table1 WHERE column1 IN (SELECT column2 FROM table2)
```

The result of this using the data above should be 1 row: 169592. Obviously this won't work as is (a character to numeric conversion process failed), as the columns cannot be compared directly, but how do I get them to work? I have tried

```
SELECT * FROM table1 WHERE column1 IN (SELECT CAST(column2 AS INTEGER) FROM table2)
```

and

```
SELECT * FROM table1 WHERE column1 IN (SELECT (column2::INTEGER) column2 FROM table2)
```

Using Server Studio 9.1, if that helps.
You can try to use `ISNUMERIC` in following: ``` SELECT * FROM table1 WHERE column1 IN (SELECT CASE WHEN ISNUMERIC(column2) = 1 THEN CAST(column2 AS INT) END FROM table2) ```
Try casting the int to a string: ``` SELECT * FROM table1 WHERE cast(column1 as varchar(15)) IN (SELECT column2 FROM table2) ```
Cast varchar that holds some strings to integer field in informix
[ "", "sql", "casting", "informix", "" ]
Is there a way to calculate how old someone is based on today's date and their birthday then display it in following manners: ``` If a user is less than (<) 1 year old THEN show their age in MM & days. Example: 10 months & 2 days old If a user is more than 1 year old AND less than 6 years old THEN show their age in YY & MM & days. Example: 5 years & 3 months & 10 days old If a user is more than 6 years old THEN display their age in YY. Example: 12 years ```
Probably not the most efficient way to go about it, but here's how I did it: I had to first get the date difference between today's date and person's birthdate. I used it to get years, months, days, etc by combining it with ABS(), and Remainder (%) function. ``` declare @year int = 365 declare @month int = 30 declare @sixYears int = 2190 select --CAST(DATEDIFF(mm, a.BirthDateTime, getdate()) AS VARCHAR) as GetMonth, --CAST(DATEDIFF(dd, DATEADD(mm, DATEDIFF(mm, a.BirthDateTime, getdate()), a.BirthDateTime), getdate()) AS VARCHAR) as GetDays, CASE WHEN DATEDIFF(dd,a.BirthDateTime,getdate()) < @year THEN cast((DATEDIFF(dd,a.BirthDateTime,getdate()) / (@month)) as varchar) +' Months & ' + CAST(ABS(DATEDIFF(dd, DATEADD(mm, DATEDIFF(mm, a.BirthDateTime, getdate()), a.BirthDateTime), getdate())) AS VARCHAR) + ' Days' WHEN DATEDIFF(dd,a.BirthDateTime,getdate()) between @year and @sixYears THEN cast((DATEDIFF(dd,a.BirthDateTime,getdate()) / (@year)) as varchar) +' Years & ' + CAST((DATEDIFF(mm, a.BirthDateTime, getdate()) % (12)) AS VARCHAR) + ' Months' WHEN DATEDIFF(dd,a.BirthDateTime,getdate()) > @sixYears THEN cast(a.Age as varchar) + ' Years' end as FinalAGE, ```
This is basically what you are looking for: ``` DECLARE @date1 DATETIME , @date2 DATETIME; SELECT @date1 = '1/1/2008' , @date2 = GETDATE(); SELECT CASE WHEN DATEDIFF(YEAR, @date1, @date2) < 1 THEN CAST(DATEDIFF(mm, @date1, @date2) AS VARCHAR)+' Months & '+CAST(DATEDIFF(dd, DATEADD(mm, DATEDIFF(mm, @date1, @date2), @date1), @date2) AS VARCHAR)+' Days' WHEN DATEDIFF(YEAR, @date1, @date2) BETWEEN 1 AND 5 THEN CAST(DATEDIFF(mm, @date1, @date2) / 12 AS VARCHAR)+' Years & '+CAST(DATEDIFF(mm, @date1, @date2) % 12 AS VARCHAR)+' Months' WHEN DATEDIFF(YEAR, @date1, @date2) >= 6 THEN CAST(DATEDIFF(YEAR, @date1, @date2) AS VARCHAR)+' Years' END; ``` Result for when a user is less than (<) 1 year old THEN show their age in MM & days: [![enter image description here](https://i.stack.imgur.com/NdjcU.png)](https://i.stack.imgur.com/NdjcU.png) Result for when a user is more than 1 year old AND less than 6 years old THEN show their age in YY & MM & days: [![enter image description here](https://i.stack.imgur.com/3zYJS.png)](https://i.stack.imgur.com/3zYJS.png) Result for when a user is more than 6 years old THEN display their age in YY: [![enter image description here](https://i.stack.imgur.com/5ChCE.png)](https://i.stack.imgur.com/5ChCE.png)
Get date difference in year, month, and days SQL
[ "", "sql", "sql-server", "date", "date-difference", "" ]
How can I move the value of a column to the upper row where BankAccountNo is null? Below is my table data for two creditors TABLE1 ``` UniqueDatabaseNo Creditor BankAccountNo 882370 300020 NULL NULL 300020 NULL NULL 300020 NULL 0 300020 NL21SOGE0946 NULL 300020 NULL NULL 380910 NULL 0 380910 1432981 0 380910 NL98RABO0181 NULL 380910 NULL 2293483 380910 NULL ``` I NEED THE BELOW OUTPUT, WHERE UniqueDatabaseNo > 0 AND THE BANK ACCOUNT IS ON THE SAME ROW Here is the desired output ``` UniqueDatabaseNo Creditor BankAccountNo 882370 300020 NL21SOGE0946 2293483 380910 NL98RABO0181 ``` I tried the below query but it is not working correctly ``` select * from TABLE1 where uniquedatabaseno >0 union all select * from TABLE1 where BankAccountNo LIKE '[a-Z][a-Z]%' ``` Thanks,
You can do what you want using aggregation: ``` select max(UniqueDatabaseNo) as UniqueDatabaseNo, Creditor, max(case when BankAccountNo like '[a-Z][a-Z]%' then BankAccountNo end) as BankAccountNo from t group by Creditor; ``` Edit: You might want conditional logic for `UniqueDatabaseNo` as well: ``` select max(case when UniqueDatabaseNo > 0 then UniqueDatabaseNo end) as UniqueDatabaseNo ``` This is not necessary for your sample data.
Try this ``` select UniqueDatabaseNo,Creditor,TT.BankAccountNo from TABLE1 T1 OUTER APPLY( SELECT BankAccountNo as 'BankAccountNo' FROM TABLE1 T2 WHERE T1.Creditor=T2.Creditor AND T2.BankAccountNo IS NOT NULL )TT where T1.uniquedatabaseno >0 AND T1.UniqueDatabaseNo IS NOT NULL ```
Moving value from below row to upper one
[ "", "sql", "sql-server", "" ]
I have several tables with 30+ columns each and I would like to easily get the names of the columns that do not allow for null values. Is there a simple query that can do this for a table? Something like `describe [table_name]` but that only shows required columns, and not necessarily other info about the columns (like type) although that could be nice too.
``` USE [dbtest] GO CREATE TABLE dbo.[Event] ( ID INT PRIMARY KEY, Name VARCHAR(10) NULL ) GO ------------------------------------------------ USE [dbtest] --- !!! GO SELECT name FROM sys.columns WHERE [object_id] = OBJECT_ID('dbo.Event', 'U') AND is_nullable = 0 ``` Output - ``` name ----------- ID ```
``` SELECT * FROM INFORMATION_SCHEMA.columns WHERE table_name = 'test1' AND is_nullable = 'no' ```
Select column names that cannot be null
[ "", "sql", "sql-server", "" ]
I have a `SQL` query that retrieves only the names not containing any number: ``` ... WHERE Name NOT LIKE '%[0-9]%' ``` On the other hand, when trying to use this query in a `Lambda Expression` with the different combinations shown below, none of them works: ``` .Where(m => !m.EmployeeName.Contains("%[0-9]%")) ``` or ``` .Where(m => !m.EmployeeName.Contains(".*[0-9].*")) ``` How can I use the `NOT LIKE` method in a `Lambda Expression`? **Update:** My lambda expression is shown below: ``` return Json(db.TEmployees .Where(m => m.Status == Enums.Status.Active) .AsEnumerable() .Where(m => !Regex.IsMatch(m.EmployeeName, ".*[0-9].*")) .Select(m => new { ID = m.EmployeeID, EmployeeName = m.EmployeeName }), JsonRequestBehavior.AllowGet); ```
As far as I know, you can't apply regular expressions in LINQ to Entities. What I recommend is: if you have other conditions, call the `Where` method using them first, and then call `AsEnumerable` to work with LINQ to Objects, which allows you to use regular expressions, so you can apply the condition you need: ``` var query= context.YourDbSet.Where(...) .AsEnumerable() .Where(m => !Regex.IsMatch(m.EmployeeName, @"\d")); ``` Or you can also do the following: ``` var query= context.YourDbSet.Where(...) .AsEnumerable() .Where(e => !e.EmployeeName.Any(char.IsDigit)); ``` ## Update: A third solution could be using the [DbSet.SqlQuery](https://msdn.microsoft.com/en-us/library/system.data.entity.dbset.sqlquery(v=vs.113).aspx) method to execute your raw SQL query: ``` var query= context.YourDbSet.SqlQuery("SELECT * FROM Table WHERE Name NOT LIKE '%[0-9]%'"); ``` Translating that to your scenario would be: ``` // These column names must match with // the property names in your entity, otherwise use * return Json(db.TEmployees.SqlQuery("SELECT EmployeeID,EmployeeName FROM Employees WHERE Status=1 AND Name NOT LIKE '%[0-9]%'"), JsonRequestBehavior.AllowGet);// Change the value in the first condition for the real int value that represents active employees ```
You can use `Regex.IsMatch`. ``` yourEnumerable.Where(m => !Regex.IsMatch(m.EmployeeName, @"\d")); ```
Check if a String value contains any number by using Lambda Expression
[ "", "sql", "asp.net-mvc", "entity-framework", "linq", "lambda", "" ]
Let's say I have a table like this: ``` name_1 name_2 value ------------------- john alex 6 alex john 6 bob rick 7 rick bob 7 ``` I want to get rid of the duplicates so I'm left with this: ``` name_1 name_2 value ------------------- john alex 6 rick bob 7 ``` Does `distinct` work? And if so, how would I apply it? **EDIT:** I'm not concerned about the order of the names in the final table. I am looking for **name pairs**. So I am treating `john alex` the same as `alex john`. Therefore, I want to get rid of those "duplicates"
Here's one option using `least` with `greatest` and `distinct`: ``` select distinct least(name_1, name_2) name_1, greatest(name_1, name_2) name_2, value from yourtable ``` * [SQL Fiddle Demo](http://sqlfiddle.com/#!4/5cddf/2)
[SQL Fiddle](http://sqlfiddle.com/#!4/0d0f5/1) **Oracle 11g R2 Schema Setup**: ``` create table table_name (name1, name2, value) AS SELECT 'john', 'alex', 6 FROM DUAL UNION ALL SELECT 'alex', 'john', 6 FROM DUAL UNION ALL SELECT 'bob', 'rick', 7 FROM DUAL UNION ALL SELECT 'rick', 'bob', 7 FROM DUAL UNION ALL SELECT 'alice','carol',7 FROM DUAL UNION ALL SELECT 'carol','alice',7 FROM DUAL UNION ALL SELECT 'david','david',5 FROM DUAL; ``` **Query 1**: ``` SELECT name1, name2, value FROM ( SELECT t.*, ROW_NUMBER() OVER ( PARTITION BY LEAST( NAME1, NAME2 ), GREATEST( NAME1, NAME2 ), VALUE ORDER BY ROWNUM ) AS RN FROM table_name t ) WHERE RN = 1 ``` **[Results](http://sqlfiddle.com/#!4/0d0f5/1/0)**: ``` | NAME1 | NAME2 | VALUE | |-------|-------|-------| | john | alex | 6 | | alice | carol | 7 | | bob | rick | 7 | | david | david | 5 | ``` **Deleting Duplicates**: ``` DELETE FROM table_name WHERE ROWID IN ( SELECT rid FROM ( SELECT ROWID AS rid, ROW_NUMBER() OVER ( PARTITION BY LEAST( name1, name2 ), GREATEST( name1, name2 ), VALUE ORDER BY ROWNUM ) AS rn FROM table_name ) WHERE rn > 1 ); ``` **Query 1**: ``` SELECT * FROM table_name ``` **[Results](http://sqlfiddle.com/#!4/73c2b/1/0)**: ``` | NAME1 | NAME2 | VALUE | |-------|-------|-------| | john | alex | 6 | | bob | rick | 7 | | alice | carol | 7 | | david | david | 5 | ```
SQL - remove duplicate tuples, even if values are out of order
[ "", "sql", "oracle", "" ]
I need help with a correlated subquery in Oracle Sql. The problem is, that the second level deep subquery contains the daily.day, so this query results in an error. ``` DAILY - columns: daily_id, day, emp_details_id, worked_hour EMP_DETAILS - columns: emp_details_id, valid_from, valid_to, detail_type, detail_value ``` I'd like to get the detail\_value for each row, where the row's day is between ed.valid\_from and ed.valid\_to. Then I'd like to take the row for this day, where ed.valid\_from is the greatest (most recent). So I'd like the most recent valid detail value for the given emp\_details\_id Example: (I only wrote the needed columns) DAILY ``` day = '2016-03-02', emp_details_id = 1 day = '2016-03-04', emp_details_id = 1 ``` EMP\_DETAILS ``` valid_from = '2016-01-01', valid_to = '2016-12-31', detail_value = 6, emp_details_id = 1 valid_from = '2016-03-02', valid_to = '2016-12-31', detail_value = 7, emp_details_id = 1 valid_from = '2016-03-03', valid_to = '2016-12-31', detail_value = 8, emp_details_id = 1 valid_from = '2016-03-01', valid_to = '2016-12-31', detail_value = 10, emp_details_id = 2 ``` Result: ``` day = '2016-03-02', valid_from = '2016-03-02', valid_to = '2016-12-31', detail_value = 7, emp_details_id = 1 day = '2016-03-04', valid_from = '2016-03-03', valid_to = '2016-12-31', detail_value = 8, emp_details_id = 1 ``` My query: ``` SELECT da.*, ed.detail_value FROM DAILY da INNER JOIN EMP_DETAILS ed ON(da.emp_details_id = ed.emp_details_id) WHERE ed.detail_value = (SELECT worktime.detail_value FROM (SELECT ed2.detail_value FROM EMP_DETAILS ed2 WHERE ed2.valid_from <= da.day AND --error ed2.valid_to >= da.day AND --error ed2.emp_details_id = ed.emp_details_id --error ORDER BY ed2.valid_from DESC ) worktime WHERE ROWNUM = 1 ) ```
You can avoid the self-joins by using an analytic query to rank the joined rows by the latest `ed.valid_from` date for the `daily` record. The basic query is something like: ``` SELECT daily.*, ed.*, rank() over (partition by daily.emp_details_id, daily.day order by ed.valid_from DESC) rnk FROM DAILY daily INNER JOIN EMP_DETAILS ed ON daily.emp_details_id = ed.emp_details_id AND ed.valid_from <= daily.day AND ed.valid_to >= daily.day; DAY EMP_DETAILS_ID VALID_FROM VALID_TO DETAIL_VALUE EMP_DETAILS_ID RNK ---------- -------------- ---------- ---------- ------------ -------------- ---------- 2016-03-02 1 2016-03-02 2016-12-31 7 1 1 2016-03-02 1 2016-01-01 2016-12-31 6 1 2 2016-03-04 1 2016-03-03 2016-12-31 8 1 1 2016-03-04 1 2016-03-02 2016-12-31 7 1 2 2016-03-04 1 2016-01-01 2016-12-31 6 1 3 ``` The record with the greatest date is ranked 1, so you can put that in a subquery and filter on the generated `rnk` column: ``` SELECT emp_details_id, day, detail_value FROM ( SELECT daily.day, daily.emp_details_id, ed.detail_value, rank() over (partition by daily.emp_details_id, daily.day order by ed.valid_from DESC) rnk FROM DAILY daily INNER JOIN EMP_DETAILS ed ON daily.emp_details_id = ed.emp_details_id AND ed.valid_from <= daily.day AND ed.valid_to >= daily.day ) WHERE rnk = 1; EMP_DETAILS_ID DAY DETAIL_VALUE -------------- ---------- ------------ 1 2016-03-02 7 1 2016-03-04 8 ``` From the data it doesn't look likely that you'd have two matching records, but if you did (if 7 and 8 were both valid from the same date) then this would return two rows. You would need to adjust the order by clause to choose how to break the tie. (You can also use dense_rank, row_number etc. but the same applies - if there can be a tie you should specify how to break it).
You need to query DAILY in the subquery. Also, you can get rid of the nested subquery, ORDER BY ... DESC, and ROWNUM = 1 by using the MAX function in the subquery, with the [FIRST or LAST](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions065.htm) aggregate variation to get the DETAIL\_VALUE corresponding to the latest date: ``` SELECT d.*, ed.DETAIL_VALUE FROM DAILY d INNER JOIN EMP_DETAILS ed ON ed.EMP_DETAILS_ID = d.EMP_DETAILS_ID WHERE (d.EMP_DETAILS_ID, d.DAY, ed.DETAIL_VALUE) IN (SELECT d2.EMP_DETAILS_ID, d2.DAY, MAX(ed2.DETAIL_VALUE) KEEP (DENSE_RANK LAST ORDER BY ed2.VALID_FROM) FROM DAILY d2 INNER JOIN EMP_DETAILS ed2 ON ed2.EMP_DETAILS_ID = d2.EMP_DETAILS_ID WHERE d2.DAY BETWEEN ed2.VALID_FROM AND ed2.VALID_TO GROUP BY d2.EMP_DETAILS_ID, d2.DAY); DAY EMP_DETAILS_ID DETAIL_VALUE ---------- -------------- ------------ 2016-03-02 1 7 2016-03-04 1 8 ``` In this simplified example the subquery on its own actually finds all the information you need: ``` SELECT d2.EMP_DETAILS_ID, d2.DAY, MAX(ed2.DETAIL_VALUE) KEEP (DENSE_RANK LAST ORDER BY ed2.VALID_FROM) FROM DAILY d2 INNER JOIN EMP_DETAILS ed2 ON ed2.EMP_DETAILS_ID = d2.EMP_DETAILS_ID WHERE d2.DAY BETWEEN ed2.VALID_FROM AND ed2.VALID_TO GROUP BY d2.EMP_DETAILS_ID, d2.DAY; EMP_DETAILS_ID DAY MAX(ED2.DETAIL_VALUE)KEEP(DENSE_RANKLAS -------------- ---------- --------------------------------------- 1 2016-03-02 7 1 2016-03-04 8 ``` and you could get the other fields from DAILY quite simply; for other EMP\_DETAILS you'd need to use more MAX KEEP DENSE\_RANK formulations. If that gets too messy or complicated then using that as a subquery and joining to it, as in the first example, might be clearer - but would be less efficient as it has to hit both the tables twice. Best of luck.
Correlated query in oracle sql
[ "", "sql", "oracle", "" ]
I have a table that has 3 columns. I want to select data using a list of keys. ``` Table 1 key1 key2 value 12 A 100 15 A 150 17 C 56 13 D 600 12 C 100 10 B 80 ``` I have this list of keys to select by: ``` key1 key2 12 A 17 C 13 D ``` and the result should be: ``` 100 56 600 ```
It's unclear to me what you mean with "list of data", but if those are two tables, you can do: ``` select value from table1 where (key1, key2) in (select key1, key2 from table2); ``` You can also supply the values directly: ``` select value from table1 where (key1, key2) in ( (12,'A'), (17,'C'), (13,'D') ); ```
There's no such thing as a 'list of data' in SQL. But if you want to display the result you mentioned above, use this code: ``` SELECT value FROM table1 WHERE (key1, key2) IN ((12,'A'), (17,'C'), (13,'D')); ```
How to select row of data by list of data
[ "", "sql", "postgresql", "" ]
I have 3 tables, `persons`, `companies` and `tasks`. Persons make different tasks in different companies. What I want is a list of ALL the persons table, the last task they have in tasks and the name of the company when they do that task. The most recent task could be the newest task\_date or the higher id.tasks, it has the same result. Table `persons`: ``` | id | Name | | 1 | Person 1 | | 2 | Person 2 | | 3 | Person 3 | | 4 | Person 4 | ``` Table `companies`: ``` β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” | id | company | β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” | 1 | Company 1 | | 2 | Company 2 | | 3 | Company 3 | | 4 | Company 4 | β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” ``` Table `tasks`: ``` β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” | id | task_name | task_date | id_persons | id_companies | β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” | 1 | Task 1 | 2015-01-02 | 1 | 3 | | 2 | Task 2 | 2016-03-02 | 1 | 4 | | 3 | Task 3 | 2016-06-04 | 2 | 1 | | 4 | Task 4 | 2016-01-03 | 4 | 2 | ``` The result should be a table like this: ``` | persons.id | persons.name | company.name | | 1 | Person 1 | Company 4 | | 2 | Person 2 | Company 1 | | 3 | Person 3 | | | 4 | Person 4 | Company 2 | β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€” ``` I have this query: ``` SELECT t.id id_t, t.id_companies t_id_companies, c.company_name , p.* FROM persons p INNER JOIN tasks t ON t.id_persons = p.id INNER JOIN ( SELECT id_persons, MAX(id) max_id FROM tasks GROUP BY id_persons ) b ON t.id_persons = 
b.id_persons AND t.id = b.max_id INNER JOIN companies c ON c.id = t.companies.id WHERE p.deleted = 0 ``` I think the result is ok, but there some missing persons, because the use of `INNER JOIN` (I have persons that doesn’t have any task associated). I’ve tried to change `INNER JOIN` to `LEFT JOIN` but the result was not ok. Any help is appreciated.
The solution is in the `LEFT JOIN`, but you need to first join the sub query, and only then the `tasks` table, otherwise you get too many results (I also fixed some typos in your query): ``` SELECT p.id persons_id, p.name persons_name, c.company_name FROM persons p LEFT JOIN ( SELECT id_persons, MAX(id) max_id FROM tasks GROUP BY id_persons ) b ON p.id = b.id_persons LEFT JOIN tasks t ON t.id_persons = p.id AND t.id = b.max_id LEFT JOIN companies c ON c.id = t.id_companies WHERE p.deleted = 0 ORDER BY 1 ``` Output is exactly as you listed in your question: ``` | id | Name | company_name | |----|----------|--------------| | 1 | Person 1 | Company 4 | | 2 | Person 2 | Company 1 | | 3 | Person 3 | (null) | | 4 | Person 4 | Company 2 | ``` Here is a [fiddle](http://sqlfiddle.com/#!9/e7659/5)
What you should do is make the latest task/company data an inline view, then do a left join to that from the person table. ``` SELECT * FROM persons p LEFT JOIN ( SELECT t.id_persons, t.task_name, c.company FROM tasks t INNER JOIN companies c ON c.id = t.id_companies WHERE t.id IN (SELECT max(id) FROM tasks GROUP BY id_persons) ) combined_tasks ON p.id = combined_tasks.id_persons WHERE p.deleted = 0 ```
Join 3 tables, LIMIT 1 on second table
[ "", "mysql", "sql", "" ]
Is there a way to group and sum columns based on a condition? ``` id | code | total | to_update 1 | A1001 | 2 | 0 2 | B2001 | 1 | 1 3 | A1001 | 5 | 1 4 | A1001 | 3 | 0 5 | A1001 | 2 | 0 6 | B2001 | 1 | 0 7 | C2001 | 11 | 0 8 | C2001 | 20 | 0 ``` In this example I want to group and sum all rows which share the same `code` where at least one row has an `to_update` value of 1. Group by `code` column and sum by `total`. The example above would result in: ``` code total A1001 12 B2001 2 ```
You need to have a subquery that gives you all codes that have at least 1 record where update=1 and you need to join this back to your table and do the group by and sum: ``` select m.code, sum(total) from mytable m inner join (select distinct code from mytable where `to_update`=1) t on m.code=t.code group by m.code ``` Or you can sum the to\_update column as well and filter in having: ``` select m.code, sum(total) from mytable m group by m.code having sum(to_update)> 0 ```
You could do it like this: ``` SELECT code, SUM(total) AS total FROM mytable GROUP BY code HAVING MAX(to_update) = 1 ``` This assumes that the possible values of *to\_update* are 0 or 1. Implemented in this [fiddle](http://sqlfiddle.com/#!9/0d81f/1), which outputs the result as requested in the question. As this query only scans the table once, it will have better performance than solutions that make joins.
Mysql group and sum based on condition
[ "", "mysql", "sql", "" ]
I'd like to update an existing table to have a unique, auto-generated int field. How can I do this in entity framework (code first)? --- Longer explanation: A client would like for each record in a table to have a unique identifier as a reference number for other databases/bookkeeping. Ordinarily I would simply use the primary key, but in this case the primary key is sensitive information (a design flaw, no doubt). I would like to update this table to have a unique, auto-generated int that has nothing to do with the identity (I guess it's not a big deal if it does). A bonus would be if all the existing records could have values generated as well. An alternate solution would be to change all the primary keys (and any references) in the database, but that is probably even more difficult. I'm open to alternate solutions, though. Thanks for any help.
To solve this issue I ended up writing a script to copy everything in the table so that they were all given new ID's. I then moved any foreign keys from the original record to the copy. Then, I deleted the originals. In this convoluted fashion I was able to alter all the ID's to something less proprietary, and could then use that ID elsewhere without worry.
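The approach described above can be sketched in T-SQL roughly as follows. This is a minimal illustration, not the exact script: the table name `MyTable`, its columns, and the single referencing table `Child` are made-up placeholders, and a real migration would repeat the foreign-key update for every referencing table:

```sql
-- Temporary helper column so each copy remembers which original it came from
ALTER TABLE MyTable ADD OldId INT NULL;

-- Copy every row; the identity column hands each copy a fresh ID
INSERT INTO MyTable (SomeColumn, OtherColumn, OldId)
SELECT SomeColumn, OtherColumn, Id
FROM MyTable
WHERE OldId IS NULL;

-- Move foreign keys from each original row to its copy
UPDATE c
SET c.MyTableId = m.Id
FROM Child c
JOIN MyTable m ON m.OldId = c.MyTableId;

-- Delete the originals and drop the helper column
DELETE FROM MyTable WHERE OldId IS NULL;
ALTER TABLE MyTable DROP COLUMN OldId;
```

Depending on the constraints involved you may need to disable or drop the foreign keys while the rows move, then re-enable them afterwards.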
There is nothing native to EF, but you can [generate unique values using GUIDs](https://stackoverflow.com/questions/12012736/entity-framework-code-first-using-guid-as-identity-with-another-identity-column) for a property (populated using the NEWID() T-SQL function). Additionally, if you're using SQL 2012+, you could create a new [SEQUENCE](https://msdn.microsoft.com/en-us/library/ff878091(v=sql.110).aspx) and integrate it with a numeric-only property in a similar manner.
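For the SEQUENCE route, a rough sketch of the database side (the table and column names here are invented for illustration; `NEXT VALUE FOR` is valid both in `UPDATE` statements and as a column default on SQL Server 2012+):

```sql
CREATE SEQUENCE dbo.RefNumberSeq AS INT START WITH 1000 INCREMENT BY 1;
GO
-- Add the new column in its own batch, then backfill the existing rows
ALTER TABLE dbo.MyTable ADD RefNumber INT NULL;
GO
UPDATE dbo.MyTable
SET RefNumber = NEXT VALUE FOR dbo.RefNumberSeq;

-- New rows then pick up the next value automatically
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT DF_MyTable_RefNumber
    DEFAULT (NEXT VALUE FOR dbo.RefNumberSeq) FOR RefNumber;
```

On the EF side you would likely mark the property as database-generated so EF reads the value back instead of trying to insert one itself.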
Updating a table to have a unique, generated int (that is not the primary key)
[ "", "sql", "sql-server", "entity-framework", "ef-code-first", "entity-framework-migrations", "" ]
How can I pad an integer with zeros on the left (lpad) and padding a decimal after decimal separator with zeros on the right (rpad). For example: If I have 5.95 I want to get 00005950 (without separator).
If you want the value up to thousandths but no more of the decimal part then you can multiply by 1000 and either `FLOOR` or use `TRUNC`. Like this: ``` SELECT TO_CHAR( TRUNC( value * 1000 ), '00000009' ) FROM table_name; ``` or: ``` SELECT LPAD( TRUNC( value * 1000 ), 8, '0' ) FROM table_name; ``` Using `TO_CHAR` will only allow a set maximum number of digits based on the format mask (if the value goes over this size then it will display `#`s instead of numbers) but it will handle negative numbers (placing the minus sign before the leading zeros). Using `LPAD` will allow any size of input but if the input is negative the minus sign will be in the middle of the string (after any leading zeros).
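A few invented values run through both approaches illustrate the difference described above:

```sql
-- TO_CHAR keeps the sign in front of the leading zeros,
-- but overflows the 8-digit mask with # characters:
SELECT TO_CHAR( -5950, '00000009' )     FROM dual;  -- '-00005950'
SELECT TO_CHAR( 123456789, '00000009' ) FROM dual;  -- '#########'

-- LPAD accepts any input, but pads over the minus sign
-- and silently truncates values longer than the target width:
SELECT LPAD( -5950, 8, '0' )            FROM dual;  -- '000-5950'
SELECT LPAD( 123456789, 8, '0' )        FROM dual;  -- '12345678' (truncated)
```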
How about multiplication and `lpad()`: ``` select lpad(col * 1000, 8, '0') . . . ```
How to pad zeroes for a number field?
[ "", "sql", "oracle", "oracle-data-integrator", "" ]
I have a table that looks like the following but also has more columns that are not needed for this instance. ``` ID DATE Random -- -------- --------- 1 4/12/2015 2 2 4/15/2015 2 3 3/12/2015 2 4 9/16/2015 3 5 1/12/2015 3 6 2/12/2015 3 ``` ID is the primary key. Random is a foreign key, but I am not actually using the table it points to. I am trying to design a query that groups the results by Random and Date, selects the MAX Date within the grouping, and then gives me the associated ID. If I do the following query ``` select top 100 ID, Random, MAX(Date) from DateBase group by Random, Date, ID ``` I get duplicate Randoms since ID is the primary key and will always be unique. The results I need would look something like this ``` ID DATE Random -- -------- --------- 2 4/15/2015 2 4 9/16/2015 3 ``` Another question: there could be times where there are many of the same date. What will MAX do in that case?
You can use `NOT EXISTS()` : ``` SELECT * FROM YourTable t WHERE NOT EXISTS(SELECT 1 FROM YourTable s WHERE s.random = t.random AND s.date > t.date) ``` This will select only the rows that don't have a bigger date for the corresponding `random` value. It can also be done using `IN()` : ``` SELECT * FROM YourTable t WHERE (t.random,t.date) in (SELECT s.random,max(s.date) FROM YourTable s GROUP BY s.random) ``` Or with a join: ``` SELECT t.* FROM YourTable t INNER JOIN (SELECT s.random,max(s.date) as max_date FROM YourTable s GROUP BY s.random) tt ON(t.date = tt.max_date and tt.random = t.random) ```
This method will work in all versions of SQL as there are no vendor specifics (you'll need to format the dates using your vendor specific syntax) You can do this in two stages: **The first step is to work out the max date for each random:** ``` SELECT MAX(DateField) AS MaxDateField, Random FROM Example GROUP BY Random ``` **Now you can join back onto your table to get the max ID for each combination:** ``` SELECT MAX(e.ID) AS ID ,e.DateField AS DateField ,e.Random FROM Example AS e INNER JOIN ( SELECT MAX(DateField) AS MaxDateField, Random FROM Example GROUP BY Random ) data ON data.MaxDateField = e.DateField AND data.Random = e.Random GROUP BY DateField, Random ``` SQL Fiddle example here: [SQL Fiddle](http://sqlfiddle.com/#!9/7932d/8) **To answer your second question:** If there are multiples of the same date, the `MAX(e.ID)` will simply choose the highest number. If you want the lowest, you can use `MIN(e.ID)` instead.
SQL query with grouping and MAX
[ "", "sql", "" ]
Here is my query: ``` SELECT COALESCE ([dbo].[RSA_BIRMINGHAM_1941$].TOS, [dbo].[RSA_CARDIFFREGUS_2911$].TOS,[dbo].[RSA_CASTLEMEAD_1941$].TOS, [dbo].[RSA_CHELMSFORD_1941$].TOS) AS [TOS Value] ,RSA_BIRMINGHAM_1941$.Percentage AS [Birmingham] ,RSA_CARDIFFREGUS_2911$.Percentage AS [Cardiff Regus] ,[dbo].[RSA_CASTLEMEAD_1941$].Percentage AS [Castlemead] ,[dbo].[RSA_CHELMSFORD_1941$].Percentage AS [Chelmsford] FROM [dbo].[RSA_BIRMINGHAM_1941$] FULL OUTER JOIN [dbo].[RSA_CARDIFFREGUS_2911$] ON [dbo].[RSA_BIRMINGHAM_1941$].TOS = [dbo].[RSA_CARDIFFREGUS_2911$].TOS FULL OUTER JOIN [dbo].[RSA_CASTLEMEAD_1941$] ON [dbo].[RSA_BIRMINGHAM_1941$].TOS = [dbo].[RSA_CASTLEMEAD_1941$].TOS FULL OUTER JOIN [dbo].[RSA_CHELMSFORD_1941$] ON [dbo].[RSA_BIRMINGHAM_1941$].TOS = [dbo].[RSA_CHELMSFORD_1941$].TOS ``` And here is the output: ``` TOS Value Birmingham Cardiff Regus Castlemead Chelmsford default (DSCP 0) 61.37% 61.74% 99.48% 79.78% af11 (DSCP 10) 15.22% 4.63% 0.00% 6.16% af33 (DSCP 30) 11.49% 15.44% NULL 7.33% af31 (DSCP 26) 8.86% 13.85% 0.01% 5.59% ef (DSCP 46) 1.91% 3.72% 0.49% 0.91% af41 (DSCP 34) 0.70% 0.03% 0.01% 0.05% cs4 (DSCP 32) 0.15% 0.20% NULL 0.10% af12 (DSCP 12) 0.12% NULL NULL NULL cs3 (DSCP 24) 0.06% 0.11% 0.01% 0.04% af21 (DSCP 18) 0.05% 0.05% 0.00% 0.02% cs6 (DSCP 48) NULL 0.23% NULL NULL cs6 (DSCP 48) NULL NULL 0.00% NULL af32 (DSCP 28) NULL NULL NULL 0.02% ``` If you have a look at the TOS column and look at the value cs6 (DSCP 48), you will see that it has been duplicated. There should be only 1 cs6 (DSCP 48) row, but for some reason the Castlemead value (0.00%) for cs6 (DSCP 48) has been created as a separate row. There should be only one row per TOS value, if that makes sense, so please tell me where I went wrong.
The results you get are as expected. This is because the joins are all relative to the first table. If there is a TOS in the second table that has no match with the first table that will generate a new record. If there is a TOS in the third table that has no match with the first table that will again generate a new record. There is no clue for the engine to know that such instances should be combined into one result. There are probably several ways to resolve this. I will suggest one where you introduce a `UNION` sub select that will combine all TOS values, and then an `INNER JOIN` to each of the four tables. ``` SELECT REF.TOS AS [TOS Value] ,RSA_BIRMINGHAM_1941$.Percentage AS [Birmingham] ,RSA_CARDIFFREGUS_2911$.Percentage AS [Cardiff Regus] ,RSA_CASTLEMEAD_1941$.Percentage AS [Castlemead] ,RSA_CHELMSFORD_1941$.Percentage AS [Chelmsford] FROM ( SELECT TOS FROM RSA_BIRMINGHAM_1941$ UNION SELECT TOS FROM RSA_CARDIFFREGUS_2911$ UNION SELECT TOS FROM RSA_CASTLEMEAD_1941$ UNION SELECT TOS FROM RSA_CHELMSFORD_1941$ ) AS REF INNER JOIN RSA_BIRMINGHAM_1941$ ON REF.TOS = RSA_BIRMINGHAM_1941$.TOS INNER JOIN RSA_CARDIFFREGUS_2911$ ON REF.TOS = RSA_CARDIFFREGUS_2911$.TOS INNER JOIN RSA_CASTLEMEAD_1941$ ON REF.TOS = RSA_CASTLEMEAD_1941$.TOS INNER JOIN RSA_CHELMSFORD_1941$ ON REF.TOS = RSA_CHELMSFORD_1941$.TOS ```
Queries are so much easier to write and to read with table aliases. The problem is the matching in the second `FULL OUTER JOIN`. The `FROM` clause needs to look like this: ``` FROM [dbo].[RSA_BIRMINGHAM_1941$] b FULL OUTER JOIN [dbo].[RSA_CARDIFFREGUS_2911$] cr ON b.TOS = cr.TOS FULL OUTER JOIN [dbo].[RSA_CASTLEMEAD_1941$] cm ON cm.TOS IN (b.TOS, cr.TOS) FULL OUTER JOIN [dbo].[RSA_CHELMSFORD_1941$] cf ON cf.TOS IN (b.TOS, cr.TOS, cm.TOS) ``` In other words, by comparing to only one `TOS` field in the later joins, you might be joining to an unmatched column -- and hence getting a duplicate. One `FULL OUTER JOIN` is fine. Multiple `FULL OUTER JOIN`s are tricky. I often use `UNION ALL` queries instead.
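To expand on that last remark, the UNION ALL alternative for this particular query could look like the sketch below: each table is unpivoted into (TOS, Percentage, site) rows, then a single GROUP BY folds them back, so one TOS value can never produce more than one row. The site labels are arbitrary tags, and this assumes each table has at most one row per TOS:

```sql
SELECT TOS AS [TOS Value],
       MAX(CASE WHEN Site = 'B'  THEN Percentage END) AS [Birmingham],
       MAX(CASE WHEN Site = 'CR' THEN Percentage END) AS [Cardiff Regus],
       MAX(CASE WHEN Site = 'CM' THEN Percentage END) AS [Castlemead],
       MAX(CASE WHEN Site = 'CF' THEN Percentage END) AS [Chelmsford]
FROM (
    SELECT TOS, Percentage, 'B'  AS Site FROM [dbo].[RSA_BIRMINGHAM_1941$]
    UNION ALL
    SELECT TOS, Percentage, 'CR' FROM [dbo].[RSA_CARDIFFREGUS_2911$]
    UNION ALL
    SELECT TOS, Percentage, 'CM' FROM [dbo].[RSA_CASTLEMEAD_1941$]
    UNION ALL
    SELECT TOS, Percentage, 'CF' FROM [dbo].[RSA_CHELMSFORD_1941$]
) x
GROUP BY TOS;
```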
Values of FULL OUTER JOIN appearing in 2 different fields
[ "", "sql", "sql-server", "t-sql", "" ]
I need to improve the performance of a view. Unfortunately I can't use an index since I'm using "Top Percent" and randomness in my query. Here is the query used by the view ``` Select Top (10) Percent from Table Order By NEWID() ``` The view pulls the data in around 50 seconds which is too much. I hope you could help me to find a solution for that, without touching the business layer.
There is no way to improve this given your requirements. Get more hardware - only solution. It is likely you overload tempdb - in which case a high performance SSD and proper configuration on that one may help. The reason is that in order to get the top 10 percent by your random order, SQL Server MUST process ALL rows, and order them by the random element. This is the type of query that looks nice on paper but can lead to tremendous performance issues. I would start by looking at this requirement and try to get around it. FULL randomness is just expensive for non trivial data sets.
For a truly random sample, you need some form of randomness. One method that doesn't require sorting is approximate, but might be sufficient for your purposes: ``` Select t.* from Table t where rand(checksum(newid())) <= 0.1; ``` This is approximate, of course. If you really needed *exactly* 10 percent, this approach would need more work. An alternative if an almost-random-sample is good enough is `tablesample` (which you can read about [here](https://technet.microsoft.com/en-us/library/ms189108(v=sql.105).aspx)). ``` select t.* from table t tablesample (10 percent); ``` Note that this does a random sample of *pages*, so it is not a true random sample. And, it cannot be used in a view.
Improving view performance without using an Index
[ "", "sql", "sql-server", "" ]
I have text which looks something like this: `VENDOR CORPORATION (GA/ATL)`. I want to make it look like `Vendor Corporation (GA/ATL)`. So, I want to make only the first letter of every word upper case, except for the words which exist between `(` and `)`. I came across `UPPER(LEFT(FIELD_NAME,1))+LOWER(SUBSTRING(FIELD_NAME,2,LEN(FIELD_NAME)))`, but it handles only one word at a time and doesn't have the functionality I want. A function which can do the job is most desired.
Try to use a [function](http://www.sql-server-helper.com/functions/initcap.aspx) like this: ``` CREATE FUNCTION [dbo].[InitCap] (@InputString VARCHAR(255)) RETURNS VARCHAR(255) AS BEGIN DECLARE @Index INT DECLARE @Char CHAR(1) DECLARE @PrevChar CHAR(1) DECLARE @OutputString VARCHAR(255) SET @OutputString = LOWER(@InputString) SET @Index = 1 WHILE @Index <= LEN(@InputString) BEGIN SET @Char = SUBSTRING(@InputString, @Index, 1) SET @PrevChar = CASE WHEN @Index = 1 THEN ' ' ELSE SUBSTRING(@InputString, @Index - 1, 1) END IF @PrevChar IN (' ', ';', ':', '!', '?', ',', '.', '_', '-', '/', '&', '''', '(') BEGIN IF @PrevChar != '(' AND @PrevChar != '/' SET @OutputString = STUFF(@OutputString, @Index, 1, UPPER(@Char)) IF @PrevChar = '(' SET @OutputString = LEFT(@OutputString, LEN(@OutputString) - LEN(SUBSTRING(@OutputString, CHARINDEX('(',@OutputString), CHARINDEX(')',@OutputString)))) + UPPER(SUBSTRING(@OutputString, CHARINDEX('(',@OutputString), CHARINDEX(')',@OutputString))) END SET @Index = @Index + 1 END RETURN @OutputString END ``` **USAGE** ``` SELECT [dbo].[InitCap]('VENDOR CORPORATION (GA/ATL)') ``` **OUTPUT** ``` Vendor Corporation (GA/ATL) ```
Using the Jeff Moden splitter (which can be found here. <http://www.sqlservercentral.com/articles/Tally+Table/72993/>) this can be accomplished. You then need to use a cross tab, also known as a conditional aggregate to put the piece back together. You could also do this with a PIVOT but I find the cross tab less obtuse for syntax and it has been proven to be slightly faster performance wise. This is also using the InitCap function found here. [How to update data as upper case first letter with t-sql command?](https://stackoverflow.com/questions/11688182/how-to-update-data-as-upper-case-first-letter-with-t-sql-command/11688803#11688803) ``` declare @Value varchar(100) = 'VENDOR CORPORATION (GA/ATL)'; with sortedValues as ( select Case when left(s.Item, 1) = '(' then s.Item else dbo.InitCap(s.Item) end as CorrectedVal , s.ItemNumber from dbo.DelimitedSplit8K(@Value, ' ') s ) select MAX(case when ItemNumber = 1 then CorrectedVal end) + ' ' + MAX(case when ItemNumber = 2 then CorrectedVal end) + ' ' + MAX(case when ItemNumber = 3 then CorrectedVal end) from sortedValues ``` If you don't know ahead of time how many "words" you will have you can adjust this crosstab to a dynamic version. You can read more about the dynamic crosstab here. <http://www.sqlservercentral.com/articles/Crosstab/65048/> --EDIT-- Thanks to JamieD77 for a suggestion using STUFF. I particularly like this option because I have another version of InitCap that uses a tally table instead of the version referenced here which uses a while loop. Using STUFF facilitates turning this whole thing into an inline table valued function so it will be super fast. If anybody wants to see the InitCap without looping let me know and I will be happy to post it. Here is the query using the suggested STUFF methodology. ``` SELECT STUFF((SELECT ' ' + s.CorrectedVal FROM sortedValues s ORDER BY s.ItemNumber FOR XML PATH('') ),1,1,'') ```
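For prototyping the rule outside T-SQL, the same logic (capitalize each word, leave parenthesised groups alone) fits in a few lines of Python. The function name `init_cap_outside_parens` is my own invention, not part of either answer:

```python
import re

def init_cap_outside_parens(text):
    # Split off parenthesised groups so they pass through unchanged,
    # then title-case only the parts outside them.
    parts = re.split(r'(\([^)]*\))', text)
    out = []
    for part in parts:
        if part.startswith('('):
            out.append(part)  # keep (GA/ATL) exactly as-is
        else:
            out.append(' '.join(w[:1].upper() + w[1:].lower()
                                for w in part.split(' ')))
    return ''.join(out)

result = init_cap_outside_parens('VENDOR CORPORATION (GA/ATL)')
```

This mirrors the InitCap approach but skips the character-by-character loop.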
Making first letter of every word upper case with a condition
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
I have been having issues switching to an offline version of the Lahman SQL baseball database. I was using a terminal embedded in an edX course. This command runs fine on the web terminal: ``` SELECT concat(m.nameFirst,concat(" ",m.nameLast)) as Player, p.IPOuts/3 as IP, p.W,p.L,p.H,p.BB,p.ER,p.SV,p.SO as K, p.IPOuts+p.W*5+p.SV+p.SO-p.BB-p.L-p.H as PTS, p.yearID as Year FROM Pitching p Inner Join Master m ON p.playerID=m.playerID WHERE p.yearID=2014 AND p.IPOuts>=50 GROUP BY m.playerID ORDER BY PTS DESC; ``` That terminal is running MySQL 5.5.46, but when I use my offline version running 5.7.10 I get the following error code: > Error Code: 1055. Expression #1 of SELECT list is not in GROUP BY > clause and contains nonaggregated column 'stats.m.nameFirst' which is > not functionally dependent on columns in GROUP BY clause; this is > incompatible with sql\_mode=only\_full\_group\_by I've been reading a lot of solutions to people's problems, but they haven't helped in this case. That's never happened before, so I think this is either super obvious or maybe I'm getting ok at coding. Anyway, anyone know how to fix this?
In 5.7 the sql_mode is set by default to: ``` ONLY_FULL_GROUP_BY,NO_AUTO_CREATE_USER,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION ``` To remove the ONLY\_FULL\_GROUP\_BY clause you can do this: ``` SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); ``` This assumes you actually need to GROUP BY with nonaggregated columns. Regards
The accepted solution above didn't work for me on version `5.7.9, for osx10.9 (x86_64)`. Then the following worked - ``` set global sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'; ```
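Instead of relaxing `sql_mode`, the query itself can be made ONLY_FULL_GROUP_BY-safe by listing every nonaggregated selected column in the GROUP BY. A sketch against SQLite, which accepts the same portable form (table contents are invented stand-ins for the baseball data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pitching (playerID TEXT, W INT);
CREATE TABLE master (playerID TEXT, nameFirst TEXT, nameLast TEXT);
INSERT INTO pitching VALUES ('a', 10), ('a', 5), ('b', 7);
INSERT INTO master VALUES ('a', 'Ann', 'Ace'), ('b', 'Bob', 'Best');
""")
# Every selected nonaggregated column also appears in GROUP BY,
# so this shape is legal under ONLY_FULL_GROUP_BY too.
rows = con.execute("""
SELECT m.nameFirst || ' ' || m.nameLast AS player, SUM(p.W) AS wins
FROM pitching p JOIN master m ON p.playerID = m.playerID
GROUP BY m.playerID, m.nameFirst, m.nameLast
ORDER BY wins DESC
""").fetchall()
```

The same rewrite applied to the original query keeps it working on both 5.5 and 5.7 without touching the server settings.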
Error Code: 1055 incompatible with sql_mode=only_full_group_by
[ "", "mysql", "sql", "mysql-workbench", "mysql-error-1055", "" ]
I have the below query that shows me the records in Oracle that are not `null` but some of the records contain spaces such as '',' ', etc. How can I modify the query so it will ignore empty spaces? ``` select * from table where field1 is not null ``` Many Thanks.
You should use the `trim` or `replace` function, e.g.: 1. ``` select * from table where trim(field1) is not null; ``` 2. ``` select * from table where replace(field1, ' ') is not null; ``` Note that in Oracle an empty string is treated as `NULL`, so a comparison like `trim(field1) != ''` can never be true; test the trimmed value with `IS NOT NULL` instead. P.S. `NULL` is not empty data, it is unknown!
If your problem is empty strings or extra spaces, you can do something like this: ``` select * from table where replace(field1,' ','') is not null ```
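Either condition is easy to verify on an engine that, unlike Oracle, keeps '' distinct from NULL, such as SQLite. A quick sketch (the table and its rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("abc",), ("",), ("   ",), (None,)])
# trim() collapses the all-space row to '', so one test covers
# NULLs, empty strings and whitespace-only values alike.
rows = con.execute(
    "SELECT field1 FROM t "
    "WHERE field1 IS NOT NULL AND trim(field1) <> ''"
).fetchall()
```

On Oracle itself the `<> ''` comparison would have to become `trim(field1) IS NOT NULL`, because there '' and NULL are the same thing.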
Not Null - Spaces on field
[ "", "sql", "oracle", "" ]
I'm new to PL/SQL and I'm trying to learn it as fast as I can. I was trying to do a simple SELECT but I came across this error. Although I know what it means, I really don't know how to solve the problem... This is my portion of code: ``` SELECT NVL(UPPER(T.COL1),'N.D.') COL1, V.SECO, 'N' CL_MED, V.DEST_USO, (CASE WHEN V.COL2 IS NULL AND V.SECO IN ('B090','B100') THEN '' WHEN V.COL2 LIKE 'L-DEF%' OR V.COL2 LIKE 'L-FUI%' AND V.SECO IN ('B090','B100') THEN 'FUI/DEF' WHEN V.COL2 IS NULL AND V.SECO = 'B080' AND V.COL3 LIKE 'DEF%' OR V.COL3 LIKE 'FUI%' THEN 'FUI/DEF' ELSE '' END ) FLAG_DEF_FUI FROM TAB1 V JOIN TAB2 C ON (V.COL4 = C.COL4 AND V.COL5 = C.COL5 AND V.COL6 = C.COL6) JOIN TAB3 T ON (V.COL4 = T.COL4 AND V.COL5 = T.COL5 AND V.COL5A = T.COL5A AND T.COL6 =V.COL6) WHERE V.COL4 = :COL4 AND V.COL6 = :COL6 AND V.COL5 NOT IN (SELECT gcm.PDR FROM TAB4 gcm WHERE gcm.COL6 = :COL6 ) GROUP BY (UPPER(T.COL1),V.SECO, V.DEST_USO, FLAG_DEF_FUI) ``` and FLAG\_DEF\_FUI is the column that causes this error..... Any help?! EDIT: I'm not asking WHY I can't use an alias in a GROUP BY. I'm asking a workaround for this problem...
To make a grouping of a complex function that the one you have, I always make a subselect. Thus, your query will become: ``` select child_query.stuff, child_query.flag_def_fui from ( select 'some-stuff' some_stuff, (case when v.col2 is null and v.seco in ('b090','b100') then '' when v.col2 like 'l-def%' or v.col2 like 'l-fui%' and v.seco in ('b090','b100') then 'fui/def' when v.col2 is null and v.seco = 'b080' and v.col3 like 'def%' or v.col3 like 'fui%' then 'fui/def' else '' end ) flag_def_fui from tab1 v join tab2 c on (v.col4 = c.col4 and v.col5 = c.col5 and v.col6 = c.col6) join tab3 t on (v.col4 = t.col4 and v.col5 = t.col5 and v.col5a = t.col5a and t.col6 =v.col6) where v.col4 = :col4 and v.col6 = :col6 and v.col5 not in (select gcm.pdr from tab4 gcm where gcm.col6 = :col6 ) ) child_query group by child_query.stuff, child_query.flag_def_fui; ```
The other answers give you two options and are both correct. Just to be clear, and to specifically answer your edited question, you have three options to work around the issue of not being able to reference aliased columns in the `GROUP BY`: 1) [Answer 1: Wrap your query](https://stackoverflow.com/a/36182347/8432) so that column aliases can be referenced easily, i.e. ``` SELECT column_alias FROM (<your query>) GROUP BY column_alias; ``` 2) [Answer 2: Don't use `GROUP BY`](https://stackoverflow.com/a/36182771/8432) if you aren't using aggregate functions, use `DISTINCT` instead. 3) Copy the complicated expression that makes up the column into the `GROUP BY`, i.e. ``` SELECT CASE WHEN col1 = 1 THEN 'one' WHEN col1 = 2 THEN 'two' ELSE '' END as col1_alias, SUM(col2) as col2_alias, col3 FROM table_name GROUP BY CASE WHEN col1 = 1 THEN 'one' WHEN col1 = 2 THEN 'two' ELSE '' END, col3; ```
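Option 1, wrapping the query, is simple to check on a toy table: the inner query computes the CASE once under an alias and the outer GROUP BY references the alias. A SQLite sketch with invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INT, col2 INT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 10), (1, 20), (2, 30), (3, 40)])
# The alias col1_alias is defined in the inner query, so the
# outer GROUP BY may reference it by name.
rows = con.execute("""
SELECT col1_alias, SUM(col2) AS total
FROM (SELECT CASE WHEN col1 = 1 THEN 'one'
                  WHEN col1 = 2 THEN 'two'
                  ELSE '' END AS col1_alias,
             col2
      FROM t)
GROUP BY col1_alias
ORDER BY col1_alias
""").fetchall()
```

Option 3 would repeat the whole CASE expression in the GROUP BY instead; both produce the same groups.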
ORA-00904 - Invalid Identifier
[ "", "sql", "oracle", "" ]
I have a two unbound textboxes in a Form where the user sets the start and end dates for query. The user than hits a button to generate report. Everything works except Access pops up a Dialog Box asking for the start and stop dates even though the variables myStartDate and myEndDate have proper values. I suspect I am missing something simple here. ``` Private Sub PrintReport_Click() Dim myForm As Form Dim myTextBox As TextBox Dim myStartDate As Date, myEndDate As Date myStartDate = CDate(Forms![Data Entry - Ammonia and Alkalinity]![StartDate]) myEndDate = CDate(Forms![Data Entry - Ammonia and Alkalinity]![EndDate]) Dim whereString As String whereString = "LabDate Between myStartDate And myEndDate" DoCmd.OpenReport "Ammonia and Alkalinity Report", acViewPreview, , whereString End Sub ```
You need to escape the string to use these variables. What you want is: ``` whereString = "LabDate Between #" & myStartDate & "# AND #" & myEndDate & "#" ```
If you (and I guess so) have applied a *date* format to the two textboxes, you don't need most of the converting, but you must pass formatted string expression for the date values to the SQL code: ``` Private Sub PrintReport_Click() Dim myForm As Form Dim myTextBox As TextBox Dim myStartDate As String Dim myEndDate As String Dim whereString As String myStartDate = Format(Forms![Data Entry - Ammonia and Alkalinity]![StartDate], "yyyy\/mm\/dd") myEndDate = Format(Forms![Data Entry - Ammonia and Alkalinity]![EndDate], "yyyy\/mm\/dd") whereString = "LabDate Between #" & myStartDate & "# And #" & myEndDate & "#" DoCmd.OpenReport "Ammonia and Alkalinity Report", acViewPreview, , whereString End Sub ```
VBA not recognizing Value of Unbound Textbox for Query
[ "", "sql", "vba", "ms-access", "" ]
I checked many posts with related questions, but couldn't find an answer. I have 2 tables which have a one-to-many relationship. One is customers and the other one is projects. One customer can have many projects. Their PK and FK are customer.customer_id and project.customer_id. Now I use the following SQL ``` SELECT *, COUNT(project.project_id) AS totalProjects FROM `customer` LEFT JOIN `project` ON `project`.`customer_id` = `customer`.`customer_id` ORDER BY `customer`.`date_created` DESC ``` However, it only returns the customers which actually have a project. I used inner, outer, left, union and right joins, but no luck. I also tried DISTINCT, but that didn't work either. Does anyone have any idea for a query that returns all customers, even if they have no projects? Thanks in advance, Rodney
Since you are only concerned with the count of projects "if I understood correctly from your question", either create a function to get you this count, or write a sub query like the example below... ``` SELECT *, ( SELECT COUNT(project.project_id) from project WHERE project.customer_id = customer.customer_id ) AS totalProjects FROM customer ORDER BY customer.date_created DESC ```
Use this query: ``` SELECT *, COUNT(project.project_id) AS totalProjects FROM `customer` LEFT JOIN `project` ON `project`.`project_customer_id` = `customer`.`customer_id` GROUP BY `customer`.`customer_id` ORDER BY `customer`.`date_created` DESC ```
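Both answers rely on counting a nullable column so that customers without projects still appear with a zero. A SQLite sketch of the GROUP BY variant (schema trimmed to the relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (customer_id INT, name TEXT);
CREATE TABLE project (project_id INT, customer_id INT);
INSERT INTO customer VALUES (1, 'with projects'), (2, 'no projects');
INSERT INTO project VALUES (10, 1), (11, 1);
""")
# COUNT(p.project_id) skips the NULLs produced by the LEFT JOIN,
# so the unmatched customer shows up with 0 rather than vanishing.
rows = con.execute("""
SELECT c.name, COUNT(p.project_id) AS totalProjects
FROM customer c
LEFT JOIN project p ON p.customer_id = c.customer_id
GROUP BY c.customer_id, c.name
ORDER BY c.customer_id
""").fetchall()
```

The original query lost the project-less customers because it lacked the GROUP BY; the COUNT aggregate collapsed everything into one row.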
SQL join: get results that have no join as well
[ "", "mysql", "sql", "join", "" ]
Good day developers! [![Image 1](https://i.stack.imgur.com/PMB0M.png)](https://i.stack.imgur.com/PMB0M.png) I have this table JOBS and i wanted to get the latest record of every department. As you can see, there is only 2 department. The output row should be ID's # **1 and 6** I tried using this ``` SELECT j.id, j.title, j.department_id, j.date_created FROM JOBS j WHERE Date = (SELECT MAX(Date) FROM JOBS WHERE id = j.id) GROUP BY j.department_id ``` but i'm getting the row of ID's # 1 and 3. Please help. thank you!
One solution using a `subquery`: ``` SELECT j.id, j.title, j.department_id, j.date_created FROM JOBS j WHERE (j.department_id, j.date_created) IN (SELECT department_id, MAX(date_created) FROM JOBS GROUP BY department_id) ``` This can also be rewritten as a `JOIN`.
Use `IN()` : ``` SELECT j.id, j.title, j.department_id, j.date_created FROM JOBS j WHERE (j.date_created, j.department_id) IN (SELECT MAX(date_created), department_id FROM JOBS GROUP BY department_id) ``` Or `NOT EXISTS()` : ``` SELECT * FROM JOBS t WHERE NOT EXISTS(SELECT 1 FROM JOBS s WHERE s.department_id = t.department_id AND s.date_created > t.date_created) ``` Or with a left join: ``` SELECT t.* FROM JOBS t LEFT OUTER JOIN JOBS s ON(t.department_id = s.department_id AND s.date_created > t.date_created) WHERE s.id is null ```
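The NOT EXISTS variant is the easiest to test: a row survives only if no newer row exists in its department. A SQLite sketch mirroring the JOBS table (dates invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (id INT, department_id INT, date_created TEXT)")
con.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    (1, 100, '2016-03-25'), (3, 100, '2016-03-20'),
    (6, 200, '2016-03-24'), (4, 200, '2016-03-10'),
])
# Keep a row only when no other row in the same department is newer.
rows = con.execute("""
SELECT t.id, t.department_id
FROM jobs t
WHERE NOT EXISTS (SELECT 1 FROM jobs s
                  WHERE s.department_id = t.department_id
                    AND s.date_created > t.date_created)
ORDER BY t.id
""").fetchall()
```

With the screenshot's data this is the expected ids 1 and 6, one per department.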
Getting latest record on every department
[ "", "mysql", "sql", "" ]
I'm trying to merge tables where rows correspond to a many:1 relationship with "real" things. I'm writing a blackjack simulator that stores game history in a database with a new set of tables generated each run. The tables are really more like templates, since each game gets its own set of the 3 mutable tables (players, hands, and matches). Here's the layout, where suff is a user-specified suffix to use for the current run: ``` - cards - id INTEGER PRIMARY KEY - cardValue INTEGER NOT NULL - suit INTEGER NOT NULL - players_suff - whichPlayer INTEGER PRIMARY KEY - aiType TEXT NOT NULL - hands_suff - id BIGSERIAL PRIMARY KEY - whichPlayer INTEGER REFERENCES players_suff(whichPlayer) * - whichHand BIGINT NOT NULL - thisCard INTEGER REFERENCES cards(id) - matches_suff - id BIGSERIAL PRIMARY KEY - whichGame INTEGER NOT NULL - dealersHand BIGINT NOT NULL - whichPlayer INTEGER REFERENCES players_suff(whichPlayer) - thisPlayersHand BIGINT NOT NULL ** - playerResult INTEGER NOT NULL --AKA who won ``` Only one cards table is created because its values are constant. So after running the simulator twice you might have: ``` hands_firstrun players_firstrun matches_firstrun hands_secondrun players_secondrun matches_secondrun ``` I want to be able to combine these tables if you used the same AI parameters for both of those runs (i.e. players\_firstrun and players\_secondrun are exactly the same). The problem is that the way I'm inserting hands makes this really messy: whichHand can't be a BIGSERIAL because the relationship of hands\_suff rows to "actual hands" is many:1. matches\_suff is handled the same way because a blackjack "game" actually consists of a set of games: the set of pairs of each player vs. the dealer. So for 3 players, you actually have 3 rows for each round. Currently I select the largest whichHand in the table, add 1 to it, then insert all of the rows for one hand. 
I'm worried this "query-and-insert" will be really slow if I'm merging 2 tables that might both be arbitrarily huge. When I'm merging tables, I feel like I should be able to (entirely in SQL) query the largest values in whichHand and whichGame once then use them combine the tables, incrementing them for each unique whichHand and whichGame in the table being merged. (I saw [this question](https://stackoverflow.com/questions/18543187/complicated-table-merging), but it doesn't handle using a generated ID in 2 different places). I'm using Postgres and it's OK if the answer is specific to it. \* sadly postgres doesn't allow parameterized table names so this had to be done by manual string substitution. Not the end of the world since the program isn't web-facing and no one except me is likely to ever bother with it, but the SQL injection vulnerability does not make me happy. \*\* matches\_suff(whichPlayersHand) was originally going to reference hands\_suff(whichHand) but [foreign keys must reference unique values](https://stackoverflow.com/questions/20120239/postgresql-foreign-key-no-unique-constraint). whichHand isn't unique because a hand is made up of multiple rows, with each row "holding" one card. To query for a hand you select all of those rows with the same value in whichHand. I couldn't think of a more elegant way to do this without resorting to arrays. 
EDIT: This is what I have now: ``` thomas=# \dt List of relations Schema | Name | Type | Owner --------+----------------+-------+-------- public | cards | table | thomas public | hands_first | table | thomas public | hands_second | table | thomas public | matches_first | table | thomas public | matches_second | table | thomas public | players_first | table | thomas public | players_second | table | thomas (7 rows) thomas=# SELECT * FROM hands_first thomas-# \g id | whichplayer | whichhand | thiscard ----+-------------+-----------+---------- 1 | 0 | 0 | 6 2 | 0 | 0 | 63 3 | 0 | 0 | 41 4 | 1 | 1 | 76 5 | 1 | 1 | 23 6 | 0 | 2 | 51 7 | 0 | 2 | 29 8 | 0 | 2 | 2 9 | 0 | 2 | 92 10 | 0 | 2 | 6 11 | 1 | 3 | 101 12 | 1 | 3 | 8 (12 rows) thomas=# SELECT * FROM hands_second thomas-# \g id | whichplayer | whichhand | thiscard ----+-------------+-----------+---------- 1 | 0 | 0 | 78 2 | 0 | 0 | 38 3 | 1 | 1 | 24 4 | 1 | 1 | 18 5 | 1 | 1 | 95 6 | 1 | 1 | 40 7 | 0 | 2 | 13 8 | 0 | 2 | 84 9 | 0 | 2 | 41 10 | 1 | 3 | 29 11 | 1 | 3 | 34 12 | 1 | 3 | 56 13 | 1 | 3 | 52 thomas=# SELECT * FROM matches_first thomas-# \g id | whichgame | dealershand | whichplayer | thisplayershand | playerresult ----+-----------+-------------+-------------+-----------------+-------------- 1 | 0 | 0 | 1 | 1 | 1 2 | 1 | 2 | 1 | 3 | 2 (2 rows) thomas=# SELECT * FROM matches_second thomas-# \g id | whichgame | dealershand | whichplayer | thisplayershand | playerresult ----+-----------+-------------+-------------+-----------------+-------------- 1 | 0 | 0 | 1 | 1 | 0 2 | 1 | 2 | 1 | 3 | 2 (2 rows) ``` I'd like to combine them to have: ``` hands_combined table: id | whichplayer | whichhand | thiscard ----+-------------+-----------+---------- 1 | 0 | 0 | 6 --Seven of Spades 2 | 0 | 0 | 63 --Queen of Spades 3 | 0 | 0 | 41 --Three of Clubs 4 | 1 | 1 | 76 5 | 1 | 1 | 23 6 | 0 | 2 | 51 7 | 0 | 2 | 29 8 | 0 | 2 | 2 9 | 0 | 2 | 92 10 | 0 | 2 | 6 11 | 1 | 3 | 101 12 | 1 | 3 | 8 13 | 0 | 4 | 78 14 | 0 | 4 | 38 15 | 1 | 
5 | 24 16 | 1 | 5 | 18 17 | 1 | 5 | 95 18 | 1 | 5 | 40 19 | 0 | 6 | 13 20 | 0 | 6 | 84 21 | 0 | 6 | 41 22 | 1 | 7 | 29 23 | 1 | 7 | 34 24 | 1 | 7 | 56 25 | 1 | 7 | 52 matches_combined table: id | whichgame | dealershand | whichplayer | thisplayershand | playerresult ----+-----------+-------------+-------------+-----------------+-------------- 1 | 0 | 0 | 1 | 1 | 1 2 | 1 | 2 | 1 | 3 | 2 3 | 2 | 4 | 1 | 5 | 0 4 | 3 | 6 | 1 | 7 | 2 ``` Each value of "thiscard" represents a playing card in the range [1..104]--52 playing cards with an extra bit representing if it's face up or face down. I didn't post the actual table for space reasons. So player 0 (aka the dealer) had a hand of (Seven of Spades, Queen of Spaces, 3 of Clubs) in the first game.
I think you're not using PostgreSQL the way it's intended to be used, plus your table design may not be suitable for what you want to achieve. Whilst it was difficult to understand what you want your solution to achieve, I wrote this, which seems to solve everything you want using a handful of tables only, and functions that return recordsets for simulating your requirement for individual runs. I used Enums and complex types to illustrate some of the features that you may wish to harness from the power of PostgreSQL. Also, I'm not sure what parameterized table names are (I have never seen anything like it in any RDBMS), but PostgreSQL does allow something perfectly suitable: recordset returning functions. ``` CREATE TYPE card_value AS ENUM ('1', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K'); CREATE TYPE card_suit AS ENUM ('Clubs', 'Diamonds', 'Hearts', 'Spades'); CREATE TYPE card AS (value card_value, suit card_suit, face_up bool); CREATE TABLE runs ( run_id bigserial NOT NULL PRIMARY KEY, run_date timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP ); CREATE TABLE players ( run_id bigint NOT NULL REFERENCES runs, player_no int NOT NULL, -- 0 can be assumed as always the dealer ai_type text NOT NULL, PRIMARY KEY (run_id, player_no) ); CREATE TABLE matches ( run_id bigint NOT NULL REFERENCES runs, match_no int NOT NULL, PRIMARY KEY (run_id, match_no) ); CREATE TABLE hands ( hand_id bigserial NOT NULL PRIMARY KEY, run_id bigint NOT NULL REFERENCES runs, match_no int NOT NULL, hand_no int NOT NULL, player_no int NOT NULL, UNIQUE (run_id, match_no, hand_no), FOREIGN KEY (run_id, match_no) REFERENCES matches, FOREIGN KEY (run_id, player_no) REFERENCES players ); CREATE TABLE deals ( deal_id bigserial NOT NULL PRIMARY KEY, hand_id bigint NOT NULL REFERENCES hands, card card NOT NULL ); CREATE OR REPLACE FUNCTION players(int) RETURNS SETOF players AS $$ SELECT * FROM players WHERE run_id = $1 ORDER BY player_no; $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION 
matches(int) RETURNS SETOF matches AS $$ SELECT * FROM matches WHERE run_id = $1 ORDER BY match_no; $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION hands(int) RETURNS SETOF hands AS $$ SELECT * FROM hands WHERE run_id = $1 ORDER BY match_no, hand_no; $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION hands(int, int) RETURNS SETOF hands AS $$ SELECT * FROM hands WHERE run_id = $1 AND match_no = $2 ORDER BY hand_no; $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION winner_player (int, int) RETURNS int AS $$ SELECT player_no FROM hands WHERE run_id = $1 AND match_no = $2 ORDER BY hand_no DESC LIMIT 1 $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION next_player_no (int) RETURNS int AS $$ SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN COALESCE((SELECT MAX(player_no) FROM players WHERE run_id = $1), 0) + 1 END $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION next_match_no (int) RETURNS int AS $$ SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN COALESCE((SELECT MAX(match_no) FROM matches WHERE run_id = $1), 0) + 1 END $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION next_hand_no (int) RETURNS int AS $$ SELECT CASE WHEN EXISTS (SELECT 1 FROM runs WHERE run_id = $1) THEN COALESCE((SELECT MAX(hand_no) + 1 FROM hands WHERE run_id = $1), 0) END $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION card_to_int (card) RETURNS int AS $$ SELECT ((SELECT enumsortorder::int-1 FROM pg_enum WHERE enumtypid = 'card_suit'::regtype AND enumlabel = ($1).suit::name) * 13 + (SELECT enumsortorder::int-1 FROM pg_enum WHERE enumtypid = 'card_value'::regtype AND enumlabel = ($1).value::name) + 1) * CASE WHEN ($1).face_up THEN 2 ELSE 1 END $$ LANGUAGE SQL; -- SELECT card_to_int(('3', 'Spades', false)) CREATE OR REPLACE FUNCTION int_to_card (int) RETURNS card AS $$ SELECT ((SELECT enumlabel::card_value FROM pg_enum WHERE enumtypid = 'card_value'::regtype AND enumsortorder = ((($1-1)%13)+1)::real), (SELECT enumlabel::card_suit FROM pg_enum WHERE enumtypid = 'card_suit'::regtype AND enumsortorder = 
(((($1-1)/13)::int%4)+1)::real), $1 > (13*4))::card $$ LANGUAGE SQL; -- SELECT i, int_to_card(i) FROM generate_series(1, 13*4*2) i CREATE OR REPLACE FUNCTION deal_cards(int, int, int, int[]) RETURNS TABLE (player_no int, hand_no int, card card) AS $$ WITH hand AS ( INSERT INTO hands (run_id, match_no, player_no, hand_no) VALUES ($1, $2, $3, next_hand_no($1)) RETURNING hand_id, player_no, hand_no), mydeals AS ( INSERT INTO deals (hand_id, card) SELECT hand_id, int_to_card(card_id)::card AS card FROM hand, UNNEST($4) card_id RETURNING hand_id, deal_id, card ) SELECT h.player_no, h.hand_no, d.card FROM hand h, mydeals d $$ LANGUAGE SQL; CREATE OR REPLACE FUNCTION deals(int) RETURNS TABLE (deal_id bigint, hand_no int, player_no int, card int) AS $$ SELECT d.deal_id, h.hand_no, h.player_no, card_to_int(d.card) FROM hands h JOIN deals d ON (d.hand_id = h.hand_id) WHERE h.run_id = $1 ORDER BY d.deal_id; $$ LANGUAGE SQL; INSERT INTO runs DEFAULT VALUES; -- Add first run INSERT INTO players VALUES (1, 0, 'Dealer'); -- dealer always zero INSERT INTO players VALUES (1, next_player_no(1), 'Player 1'); INSERT INTO matches VALUES (1, next_match_no(1)); -- First match SELECT * FROM deal_cards(1, 1, 0, ARRAY[6, 63, 41]); SELECT * FROM deal_cards(1, 1, 1, ARRAY[76, 23]); SELECT * FROM deal_cards(1, 1, 0, ARRAY[51, 29, 2, 92, 6]); SELECT * FROM deal_cards(1, 1, 1, ARRAY[101, 8]); INSERT INTO matches VALUES (1, next_match_no(1)); -- Second match SELECT * FROM deal_cards(1, 2, 0, ARRAY[78, 38]); SELECT * FROM deal_cards(1, 2, 1, ARRAY[24, 18, 95, 40]); SELECT * FROM deal_cards(1, 2, 0, ARRAY[13, 84, 41]); SELECT * FROM deal_cards(1, 2, 1, ARRAY[29, 34, 56, 52]); SELECT * FROM deals(1); -- This is the output you need (hands_combined table) -- This view can be used to retrieve the list of all winning hands CREATE OR REPLACE VIEW winning_hands AS SELECT DISTINCT ON (run_id, match_no) * FROM hands ORDER BY run_id, match_no, hand_no DESC; SELECT * FROM winning_hands; ```
Wouldn't using the UNION operator work? For the hands relation: ``` SELECT * FROM hands_first UNION ALL SELECT * FROM hands_second ``` For the matches relation: ``` SELECT * FROM matches_first UNION ALL SELECT * FROM matches_second ``` As a more long-term solution I'd consider restructuring the DB, because it will quickly become unmanageable with this schema. Why not improve normalization by introducing a games table? In other words, *games* have many *matches*, *matches* have many *players* for each game, and *players* have many hands for each *match*. I'd recommend drawing the UML for the entity relationships on paper (<http://dawgsquad.googlecode.com/hg/docs/database_images/Database_Model_Diagram(Title).png>), then improving the schema so it can be queried using normal SQL operators. Hope this helps. **EDIT:** In that case you can use a subquery on the union of both tables with the `row_number()` PG window function to generate a fresh row number: ``` SELECT row_number() OVER (ORDER BY whichhand, id) AS id, whichplayer, whichhand, thiscard FROM ( SELECT * FROM hands_first UNION ALL SELECT * FROM hands_second ) combined; ``` The same principle would apply to the matches table. Obviously this doesn't scale well to even a small number of tables, so I would prioritize normalizing your schema. Docs on some PG functions: <http://www.postgresql.org/docs/current/interactive/functions-window.html>
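The renumbering idea from the edit can be sketched with window functions, which SQLite also supports: UNION ALL the per-run tables in a subquery, shift the second run's hand numbers past the first run's maximum, and assign fresh ids with row_number(). A minimal sketch with two tiny runs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hands_first  (id INT, whichplayer INT, whichhand INT, thiscard INT);
CREATE TABLE hands_second (id INT, whichplayer INT, whichhand INT, thiscard INT);
INSERT INTO hands_first  VALUES (1, 0, 0, 6), (2, 0, 0, 63);
INSERT INTO hands_second VALUES (1, 0, 0, 78), (2, 0, 0, 38);
""")
# Run 2's hand numbers are shifted past run 1's maximum so the
# merged whichhand values stay distinct, then rows get fresh ids.
rows = con.execute("""
SELECT row_number() OVER (ORDER BY run, id) AS id,
       whichplayer,
       whichhand + CASE WHEN run = 2
            THEN (SELECT MAX(whichhand) + 1 FROM hands_first) ELSE 0 END,
       thiscard
FROM (SELECT 1 AS run, * FROM hands_first
      UNION ALL
      SELECT 2 AS run, * FROM hands_second)
ORDER BY 1
""").fetchall()
```

The matches tables would get the same treatment, reusing the identical whichhand offset so the foreign references stay aligned.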
Merging Complicated Tables
[ "", "sql", "database", "postgresql", "merge", "" ]
I need to find out the names of the students who have enrolled in *at least* two courses. Creating the two tables: ``` CREATE TABLE Student ( StudentID varchar(20) PRIMARY KEY, FirstName varchar(15), LastName varchar(30), Sex varchar(6), DOB date, Email varchar(40) ); CREATE TABLE Enrols ( StudentID varchar(20), CourseID varchar(20), CONSTRAINT Student_Course PRIMARY KEY (StudentID,CourseID), FOREIGN KEY (StudentID) REFERENCES Student(StudentID), FOREIGN KEY (CourseID) REFERENCES Course(CourseID) ); ``` My (unsuccessful) attempt: ``` SELECT DISTINCT Student.FirstName, Student.LastName, Enrols.CourseID, COUNT (Student.FirstName) AS NumberOfNames FROM Student INNER JOIN Enrols ON Student.StudentID = Enrols.StudentID WHERE COUNT(Student.Firstname) >= 2; ```
``` SELECT Student.FirstName AS "First Name", Student.LastName AS "Last Name", COUNT(Student.FirstName) AS "Number of Names" FROM Student INNER JOIN Enrols ON Student.StudentID = Enrols.StudentID GROUP BY Student.FirstName, Student.LastName HAVING COUNT(Student.FirstName) >= 2; ``` This will work, but each student appears only once, without the individual `CourseID` values. You do not need `DISTINCT`.
You should use `GROUP BY` and `HAVING` (but not the CourseID, if you use group by): ``` SELECT Student.FirstName AS "First Name", Student.LastName AS "Last Name", COUNT (*) AS "Number of Names" FROM Student INNER JOIN Enrols ON Student.StudentID = Enrols.StudentID GROUP BY Student.FirstName, Student.LastName HAVING COUNT(Student.Firstname) >= 2; ```
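The GROUP BY/HAVING pattern is quick to verify, grouping on the student key so that two students who share a first name are not merged. A SQLite sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (StudentID TEXT PRIMARY KEY, FirstName TEXT, LastName TEXT);
CREATE TABLE enrols (StudentID TEXT, CourseID TEXT);
INSERT INTO student VALUES ('s1', 'Ann', 'Ace'), ('s2', 'Bob', 'Best');
INSERT INTO enrols VALUES ('s1', 'c1'), ('s1', 'c2'), ('s2', 'c1');
""")
# HAVING filters whole groups after aggregation, which is why the
# original WHERE COUNT(...) attempt cannot work.
rows = con.execute("""
SELECT s.FirstName, s.LastName, COUNT(*) AS courses
FROM student s JOIN enrols e ON s.StudentID = e.StudentID
GROUP BY s.StudentID, s.FirstName, s.LastName
HAVING COUNT(*) >= 2
""").fetchall()
```

Only the student enrolled in two courses survives the HAVING filter.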
SELECT from One Table an Element that Occurs Multiple Times in Another Table
[ "", "mysql", "sql", "inner-join", "" ]
I have a self-referencing table: there's an ID and a PARENTID column that allows the records to be ordered into a hierarchical structure (let's call them record hierarchies). There's also a query (let's call it 'Query A') that returns a list of records from this table. Some of the returned records are 'root records' (PARENTID = NULL), while some are non-root records (PARENTID != NULL). Note that 'Query A' can return multiple records that belong to the same record hierarchy. What I need to accomplish in the most efficient way (efficiency is important but not paramount) is to get the root records for all records returned by 'Query A' so that non-root records in 'Query A' are searched for their root records.
One of possible solutions: ``` declare @TableA table ( ID int, ParentID int NULL, Name varchar(100) ) insert into @TableA(ID, ParentID, Name) values (1, NULL, 'root 1'), (2, NULL, 'root 2'), (3, 2, 'node 3->2'), (4, 1, 'node 4->1'), (5, 4, 'node 5->4->1'), (6, 3, 'node 6->3->2'), (7, 4, 'node 7->4->1'), (8, 7, 'node 8->7->4->1') ;with QueryA as ( /* your query could be here */ select t.ID, t.Name from @TableA t where t.ID in (1, 3, 8) ), Tree as ( select t.ID, t.ParentID, t.Name, case when t.ParentID is NULL then t.ID end as RootID from @TableA t /* starting from rows we have in QueryA */ where t.ID in (select q.ID from QueryA q) union all select tt.ID, t.ParentID, t.Name, case when t.ParentID is NULL then t.ID end as RootID from @TableA t /* recursion to parents */ inner join Tree tt on tt.ParentID = t.ID ) select q.ID, q.Name, t.Name as RootName from QueryA q inner join Tree t on t.ID = q.ID and t.RootID is not NULL order by 1, 2 ``` Also you may start from building a tree without linking to QueryA (for whole table). Will look a bit simpler. In this case you'll refer QueryA in final statement only.
If you want to retrieve `Root` Item of each item then you can use the following approach : ``` select t1.*,(case when t1.PARENTID is null then t1.ID else t1.PARENTID end ) Id_Root , 0 IsTraced into #tmp from TableName t1 left outer join TableName t2 on t1.ID=t1.PARENTID order by t1.PARENTID while exists(select TOP 1 * , (select PARENTID from #tmp where ID=t1.PARENTID) Id_GrandParent from #tmp t1 where IsTraced=0 order by PARENTID desc ) begin Declare @CurrentID as uniqueIdentifier set @CurrentID = (select TOP 1 ID from #tmp t1 where IsTraced=0 order by PARENTID desc ) Declare @CurrentParentID as uniqueIdentifier set @CurrentParentID = (select TOP 1 PARENTID from #tmp t1 where IsTraced=0 order by PARENTID desc ) Declare @CurrentGrandParentID as uniqueidentifier set @CurrentGrandParentID=(select PARENTID from #tmp where ID=@CurrentParentID) if(@CurrentGrandParentID is null) begin update #tmp set IsTraced=1 where ID=@CurrentID end else begin update #tmp set PARENTID= @CurrentGrandParentID, Id_Root=@CurrentGrandParentID where ID=@CurrentID end end select ID,Id_Root from #tmp order by PARENTID ``` as you can see after the while loop you can retrieve `ID` and `Id_Root` of each element from the `Temp Table #tmp`
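SQLite's recursive CTEs follow the same shape as the accepted answer: start from the rows of interest and walk ParentID upward until it is NULL. A minimal sketch (the tree data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, parentid INT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, None), (2, None), (3, 2), (4, 1), (5, 4)])
# Walk from each selected node toward its ancestors; the row whose
# current parent is NULL has reached the root of its hierarchy.
rows = con.execute("""
WITH RECURSIVE up(start_id, id, parentid) AS (
    SELECT id, id, parentid FROM t WHERE id IN (3, 5)
    UNION ALL
    SELECT u.start_id, p.id, p.parentid
    FROM up u JOIN t p ON p.id = u.parentid
)
SELECT start_id, id AS root_id FROM up
WHERE parentid IS NULL
ORDER BY start_id
""").fetchall()
```

Node 3 resolves to root 2, and node 5 climbs 5 -> 4 -> 1 to root 1.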
Getting 'root records' from self-referencing table
[ "", "sql", "sql-server", "hierarchical-data", "recursive-query", "" ]
I want to pull all the unique IDs for particular rows with the same username and then display each result as a row. For example... Here's my table: ``` +----+------+ | id | name | +----+------+ | 1 | Joe | | 2 | Amy | | 3 | Joe | | 4 | Amy | | 5 | Joe | | 6 | Amy | +----+------+ ``` Here's the result I want: ``` +------+-------+ | name | ids | +------+-------+ | Joe | 1,3,5 | | Amy | 2,4,6 | +------+-------+ ``` How do I pull this result in MySQL?
Use a [`GROUP_CONCAT()`](http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat) with `DISTINCT` clause to aggregate unique ids for a particular name: ``` SELECT name, GROUP_CONCAT(DISTINCT id SEPARATOR ',') AS ids FROM yourtable GROUP BY name ``` To review the usage of it also see [MySQL group\_concat with select inside select](https://stackoverflow.com/questions/35614678/mysql-group-concat-with-select-inside-select/35614847#35614847).
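For a quick self-contained check of this pattern, here is a sketch using Python's built-in `sqlite3` (SQLite stands in for MySQL here purely for illustration; SQLite's `group_concat` behaves like MySQL's, though older SQLite versions only allow the default `,` separator together with `DISTINCT`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [(1, "Joe"), (2, "Amy"), (3, "Joe"),
                  (4, "Amy"), (5, "Joe"), (6, "Amy")])

# One row per name, ids collapsed into a comma-separated string
rows = conn.execute(
    "SELECT name, GROUP_CONCAT(DISTINCT id) AS ids "
    "FROM yourtable GROUP BY name ORDER BY name"
).fetchall()
print(rows)
```

Note that the order of ids inside the concatenated string is not guaranteed unless the dialect supports an `ORDER BY` inside the aggregate (MySQL does: `GROUP_CONCAT(DISTINCT id ORDER BY id)`).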
You can use [group\_concat](http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat) for that: ``` SELECT name, GROUP_CONCAT(id) AS ids FROM table GROUP BY name ``` You can also specify a separator, but the one by default is the comma. You can also specify `DISTINCT`, but since *id* is unique, there is no reason to this: all it will do is slow down the query. Here is [SQL fiddle](http://sqlfiddle.com/#!9/ccb282/2) producing the output as desired: ``` +------+-------+ | name | ids | +------+-------+ | Joe | 1,3,5 | | Amy | 2,4,6 | +------+-------+ ```
In MySQL How do I SELECT all ids from rows with a similar value
[ "", "mysql", "sql", "select", "" ]
In SQL Server Management Studio I went to Generate Scripts and create an INSERT script which looks like the following: ``` SET IDENTITY_INSERT [dbo].[Product] ON GO INSERT [dbo].[Product] ([Id], [Name]) VALUES (1, N'Product 1') GO INSERT [dbo].[Product] ([Id], [Name]) VALUES (2, N'Product 2') GO INSERT [dbo].[Product] ([Id], [Name]) VALUES (3, N'Product 3') GO SET IDENTITY_INSERT [dbo].[Product] OFF GO ``` Lets say I have multiple client databases and I want to be able to safely run this script on all of them whether these products already exist or not. Do I really have to wrap every single insert line with an if exists or is there a better way? The ultimate plan is to use it as a Post-Deployment script in a SQL SERVER Database Project for tables that hold items like System Settings.
Why not do all the inserts in one step? ``` INSERT [dbo].[Product] ([Id], [Name]) SELECT id, name FROM (VALUES (1, N'Product 1'), (2, N'Product 2') (3, N'Product 3') ) v(Id, Name) WHERE NOT EXISTS (SELECT 1 FROM Product P2 WHERE p2.id = v.id); ```
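A minimal, runnable sketch of the same idempotent-insert pattern, using Python's `sqlite3` in place of SQL Server (the `VALUES` row constructor becomes a CTE here because SQLite does not accept column aliases on a derived `VALUES` table). Running the statement twice inserts nothing the second time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Id INTEGER PRIMARY KEY, Name TEXT)")

stmt = """
WITH v(Id, Name) AS (VALUES (1, 'Product 1'), (2, 'Product 2'), (3, 'Product 3'))
INSERT INTO Product (Id, Name)
SELECT v.Id, v.Name FROM v
WHERE NOT EXISTS (SELECT 1 FROM Product p WHERE p.Id = v.Id)
"""
conn.execute(stmt)  # first run inserts all three rows
conn.execute(stmt)  # safe to re-run: nothing is duplicated
count = conn.execute("SELECT COUNT(*) FROM Product").fetchone()[0]
print(count)  # -> 3
```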
``` SET IDENTITY_INSERT [dbo].[Product] ON GO INSERT [dbo].[Product] ([Id], [Name]) select id, name from products t1 where not exists(select 1 from products t2 where t2.id=t1.id and t2.name=t1.name) GO SET IDENTITY_INSERT [dbo].[Product] OFF ```
Creating a SQL Script that inserts several items into a table but checking if each exist first
[ "", "sql", "sql-server", "ssms", "" ]
So let's say I have a table like ``` Table 1 ============= id | ... ============= 1 | ... 2 | ... 3 | ... . . . . . . Table 2 ======================= id | table1_id | ... ======================= 1 | 1 | ... 2 | 1 | ... 3 | 2 | ... . . . . . . . . . ``` where `table1_id` in `Table 2` references `id` in `Table 1`. I want to swap the `id`s of the rows with `id=1` and `id=2`. Can I do this without screwing up the relationships?
Insert a dummy entry into [Table 1] that is a copy of the id 1 row, under a new id. Then update all columns of the id 1 row with the values from id 2 using the following query ``` UPDATE T SET T.col2 = S.col2 ,T.col3 = S.col3 ,T.col4 = S.col4 . = . . = . . = . FROM [Table 1] T CROSS JOIN ( SELECT col2 ,col3 ,col4 . . FROM [Table 1] WHERE id = 2 ) S WHERE id = 1 ``` In the same way, update all columns of the id 2 row from the newly created row. After this, delete the newly created row.
I would create a temporary entry with a new id in Table 1, move the references in Table 2 from `table1_id=1` to the new id, move `table1_id=2` to `table1_id=1`, and then move the temporary references to `table1_id=2`.
How can I safely swap the ids of two rows that are references in other tables?
[ "", "sql", "sql-server", "t-sql", "" ]
Essentially, I have two tables: Table A ``` aId|isOne|bId ---+-----+--- 1 |1 |2 2 |0 |2 3 |1 |1 ``` Table B ``` bId|one|two ---+---+--- 1 |5 |13 2 |3 |11 ``` Table A refers to Table B and specifies whether the data of `one` or of `two` is desired. What I need to do is sum the values given the `bId`, so the expected result of the query on these tables would be: ``` bId|value ---+----- 2 |14 1 |5 ``` Currently my query for doing this has the following form: ``` select bId, coalesce(if(bId = 0, null, sum(if(isOne = 0, (select two from tableB where tableB.bId = bId), (select one from tableB where tableB.bId = bId)))) as value from (select bId, isOne, one, two from tableA join tableB on tableA.bId = tableB.bId) as tableRes; ``` Please note that this is in reality part of a larger query and uses larger tables, where the coalesce and first if statement do make sense to use. An error occurs with the above query though ( > Error Code: 1242. Subquery returns more than 1 row) which is believed to come from the if statement within `sum()`. I have tried applying `group by`, since I think I need to group the `bId = 2` value results together, but have failed to be able to place this legally, or place it legally within the query and have it actually stop the subqueries from returning more than one result. Any direction or help in fixing the error would be appreciated. It may also be good to know that, as stated, this is a stripped down version of the query, so if it is thought the problem is not in what has been shown more can be added, but I'm pretty confident in the error lying in the above version.
I was actually able to solve this by simply replacing the arbitrary `bId` within the subqueries with a specified value of `tableA.bId`, for anyone curious of an answer.
I would simply inner join the 2 tables and use case statement within the sum() to sum the right values: ``` select tableA.bId, sum(case when isOne=0 then two when isOne=1 then one else 0 end) as val from tableA inner join tableB on tableA.bId = tableB.bId group by tableA.bId ; ```
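A runnable check of this conditional-aggregation pattern, with the question's sample data, using Python's `sqlite3` as a stand-in for MySQL (the `CASE ... END` inside `SUM()` is standard SQL and works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (aId INTEGER, isOne INTEGER, bId INTEGER);
CREATE TABLE tableB (bId INTEGER, one INTEGER, two INTEGER);
INSERT INTO tableA VALUES (1,1,2),(2,0,2),(3,1,1);
INSERT INTO tableB VALUES (1,5,13),(2,3,11);
""")
rows = conn.execute("""
    SELECT tableA.bId,
           SUM(CASE WHEN isOne = 1 THEN one ELSE two END) AS val
    FROM tableA JOIN tableB ON tableA.bId = tableB.bId
    GROUP BY tableA.bId
    ORDER BY tableA.bId
""").fetchall()
print(rows)  # -> [(1, 5), (2, 14)]
```

This matches the expected output in the question (bId 2 sums 3 + 11 = 14; bId 1 takes `one` = 5).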
Apply Group By to If Statement Results
[ "", "mysql", "sql", "" ]
``` SELECT salesman_id, COUNT(sale_id) FROM Commission GROUP BY salesman_id HAVING salesman_id IN (select... *subqueries* ``` COUNT(sale\_id) gives me the number of sale\_ids **regardless of the subqueries**, although i want the number of sale\_ids **after** the subqueries are done. how come this is not the case for COUNT, and how can i fix it? [example](https://i.stack.imgur.com/NGeDp.jpg) [what i get](https://i.stack.imgur.com/v4ezS.jpg) what i want: ``` 1 | 2 6 | 1 ```
If you want to limit the results that your query deals with then you need to do that in the `WHERE` clause. The `HAVING` clause is filtering that happens **after** aggregates have been performed. Try moving your subqueries to the `WHERE` clause instead.
Try this: ``` SELECT salesman_id, COUNT(sale_id) FROM Commission WHERE salesman_id IN (select... *subqueries*) GROUP BY salesman_id ```
getting the COUNT value depending on the subqueries
[ "", "mysql", "sql", "database", "" ]
I have this function that I found here: [Insert trigger to Update another table using PostgreSQL](https://stackoverflow.com/questions/12343984/insert-trigger-to-update-another-table-using-postgresql) ``` CREATE TABLE table1 ( id integer NOT NULL, name character varying, CONSTRAINT table1_pkey PRIMARY KEY (id) ) CREATE TABLE table2 ( id integer NOT NULL, name character varying ) CREATE OR REPLACE FUNCTION function_copy() RETURNS TRIGGER AS $BODY$ BEGIN INSERT INTO table2(id,name) VALUES(new.id,new.name); RETURN new; END; $BODY$ language plpgsql; CREATE TRIGGER trig_copy AFTER INSERT ON table1 FOR EACH ROW EXECUTE PROCEDURE function_copy(); ``` If I insert these two rows: ``` insert into table1 values (1, 'Andrey'); insert into table1 values (2, 'Mariza'); ``` Then they also go into table2. My problem is when I do an update on a value: ``` update table1 set name = 'Andi' where id = '1'; ``` nothing happens in table2. How can I create a function that updates changes in table?
To support `UPDATE` you can do this: Trigger: ``` CREATE TRIGGER trig_copy AFTER INSERT OR UPDATE ON table1 FOR EACH ROW EXECUTE PROCEDURE function_copy(); ``` Function: ``` CREATE OR REPLACE FUNCTION function_copy() RETURNS TRIGGER AS $BODY$ BEGIN if TG_OP='INSERT' then INSERT INTO table2(id,name) VALUES(new.id,new.name); end if; if TG_OP='UPDATE' then Update table2 set name=new.name where id=old.id; end if; RETURN new; END; $BODY$ language plpgsql; ```
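For anyone who wants to poke at the trigger behavior without a Postgres instance, the same insert-then-propagate-update flow can be sketched with Python's `sqlite3`. Note this is an illustration only: SQLite has no `AFTER INSERT OR UPDATE` form and no plpgsql, so two separate triggers replace the single function above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE table2 (id INTEGER, name TEXT);

-- copy new rows into table2
CREATE TRIGGER trig_copy_ins AFTER INSERT ON table1 BEGIN
    INSERT INTO table2(id, name) VALUES (new.id, new.name);
END;

-- propagate name changes to table2
CREATE TRIGGER trig_copy_upd AFTER UPDATE ON table1 BEGIN
    UPDATE table2 SET name = new.name WHERE id = old.id;
END;

INSERT INTO table1 VALUES (1, 'Andrey');
UPDATE table1 SET name = 'Andi' WHERE id = 1;
""")
result = conn.execute("SELECT name FROM table2 WHERE id = 1").fetchone()[0]
print(result)  # -> Andi
```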
``` create or replace trigger trig_copy after insert or update on table1 begin merge into table2 t2 using ( select id, name from table1 t1) t_new on (t2.id = t_new.id) when matched then update set t2.name = t_new.name where t2.id = t_new.id WHEN NOT MATCHED then insert (id,name) values(t_new.id,t_new.name); end; ``` (Note: this is Oracle syntax.) This will be applied to all records, not only the updated record. If you want to update/insert only one record, create the trigger with `referencing new as new for each row` and use `:new.id` and `:new.name` to find the record you want in t2.
How to create function that updates changes in one table from another table?
[ "", "sql", "postgresql", "" ]
I need to write a Delphi 7 and mysql database query which would return the records including ALL words in the submitted name. So query will return records which has all those name words but can have different order. For example, if search string is John Michael Smith, query should be able to return records with names such as John Smith Michael, Michael Smith John, Smith John Michael or other combination with all those words there. As can be seen return only records which still has all words in name field but can have different order. I can't figure out how to write a query for such requirement that I have. Please help. ``` procedure Tfrm_Query.Button1Click(Sender: TObject); var mask : string; begin mask:='''%'+StringReplace(Edit1.text,' ','%',[rfReplaceAll, rfIgnoreCase])+'%'''; if Edit1.Text > '' then begin Adosorgulama.Close; Adosorgulama.SQL.Clear; Adosorgulama.SQL.Add('SELECT * FROM stok.product'); Adosorgulama.SQL.ADD('Where (P_Name like '+mask+') limit 50'); Adosorgulama.Open; end; end; ``` as a result; ``` edit1.text:='Jo Mich'; // Result Ok! edit1.text:='Smi Jo Mic'; //No result edit1.text:='Mich Sm'; // No result ```
Instead of replacing spaces with `%`, you could replace them with `% AND P_Name LIKE %`: ``` mask:='''WHERE (P_Name LIKE %'+StringReplace(Edit1.text,' ','% AND P_Name LIKE %',[rfReplaceAll, rfIgnoreCase])+'%)'''; ``` Apologies if there is some problem with the syntax (I don't know Delphi), but if `Edit1.text:= 'John Michael Smith'` this should generate the following `WHERE` clause: ``` WHERE (P_Name LIKE %John% AND P_Name LIKE %Michael% AND P_Name LIKE %Smith%) ``` Which should find all records where `P_Name` contains the strings 'John', 'Michael' and 'Smith'. Then, of course, instead of ``` Adosorgulama.SQL.ADD('Where (P_Name like '+mask+') limit 50'); ``` you'd do something like ``` Adosorgulama.SQL.ADD(mask + ' limit 50'); ``` If the input can contain extraneous spaces, you will need to remove those first, otherwise this won't work. Forming SQL queries with string concatenation could make your application vulnerable to SQL injection, just so you know. I don't know how to do prepared statements with Delphi, so I can't help you there.
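On the injection point at the end: the same AND-chain idea can be built with bound parameters instead of string concatenation. Here is a sketch in Python with `sqlite3` standing in for Delphi + MySQL (table and column names taken from the question; the word list and `LIKE` placeholders are generated together so the counts always match):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (P_Name TEXT)")
conn.executemany("INSERT INTO product VALUES (?)",
                 [("John Michael Smith",), ("Michael Smith",), ("Jane Doe",)])

search = "Smi Jo Mic"          # word order does not matter
words = search.split()          # split() also drops extra spaces
where = " AND ".join("P_Name LIKE ?" for _ in words)
params = ["%" + w + "%" for w in words]
rows = conn.execute(
    f"SELECT P_Name FROM product WHERE {where} LIMIT 50", params
).fetchall()
print(rows)  # -> [('John Michael Smith',)]
```

Because only the fixed text `P_Name LIKE ?` is concatenated and the user input travels as parameters, this version is safe against SQL injection.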
You can build table of words dynamically. To find yours match do query that join both tables in possible match, and by grouping results test it - is name have all of words, try this: ``` WITH words AS (SELECT 'John' AS word FROM dual union SELECT 'Michael' FROM dual union SELECT 'Smith' FROM dual ) , --build your table of words (this is example on oracle DB engine) names AS (SELECT 'John Michael Smith' AS name FROM dual UNION SELECT 'John SmithMichael' FROM dual union SELECT 'Smith Michael' FROM dual union SELECT 'Smith Michael John' FROM dual union SELECT 'John' FROM dual union SELECT 'John John' FROM dual union SELECT 'John John John' FROM dual union SELECT 'xyz abc' FROM dual ) --this is simulation of yours table of names SELECT name, Count(DISTINCT word) FROM names, words WHERE ' ' || name || ' ' LIKE '% ' || word || ' %' GROUP BY name HAVING Count(DISTINCT word) = (SELECT Count(1) FROM words) ; ```
Query to Search All possible words
[ "", "mysql", "sql", "delphi", "search", "" ]
I have an Oracle table with "users" in it, and a table with "events" related to users, date-stamped as to when each event took place. I'd like to produce a simple table showing each user and the **most recent event** that took place, but I can't quite work out the nested query or aggregation function to do this, since I'm not aggregating on the column I want. If I simply wanted `MAX(event_date)` that would be fairly simple, but I want something like `event_status where MAX(event_date)`. I'm sure it's possible to do without resorting to a procedure, but I can't quite work it out. Anyone got a solution? ``` user_id user_name 1 bob 2 sally 3 fred event_id user_id event_date event_status 1 1 3/1/15 hired 2 1 3/2/15 active 3 3 4/1/15 hired 4 3 4/3/15 on leave 5 2 3/3/15 hired 6 2 4/1/15 on leave 7 2 5/1/15 active ``` Desired result: ``` user_name current_status bob active sally active fred on leave ```
This may be simplest using a correlated subquery: ``` select u.*, (select e.status from events e where e.user_id = u.user_id order by e.event_date desc limit 1 ) as Most_Recent_Status from users u; ``` This saves the trouble of aggregation (or equivalently `select distinct`) which is rather expensive in some databases. Note that this uses the MySQL/Postgres `LIMIT 1` for the subquery. Other databases have similar functionality. EDIT: ``` select u.*, (select max(e.status) keep (dense_rank first order by event_date desc) from events e where e.user_id = u.user_id ) as Most_Recent_Status from users u; ```
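A runnable version of the correlated-subquery approach with the question's data, using Python's `sqlite3` (SQLite supports the `LIMIT 1` form shown above; dates are rewritten as ISO strings so they sort correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, user_name TEXT);
CREATE TABLE events (user_id INTEGER, event_date TEXT, event_status TEXT);
INSERT INTO users VALUES (1,'bob'),(2,'sally'),(3,'fred');
INSERT INTO events VALUES
 (1,'2015-03-01','hired'),(1,'2015-03-02','active'),
 (3,'2015-04-01','hired'),(3,'2015-04-03','on leave'),
 (2,'2015-03-03','hired'),(2,'2015-04-01','on leave'),(2,'2015-05-01','active');
""")
rows = conn.execute("""
    SELECT u.user_name,
           (SELECT e.event_status FROM events e
            WHERE e.user_id = u.user_id
            ORDER BY e.event_date DESC LIMIT 1) AS current_status
    FROM users u ORDER BY u.user_id
""").fetchall()
print(rows)  # -> [('bob', 'active'), ('sally', 'active'), ('fred', 'on leave')]
```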
Here is an sql query: ``` select U.user_name, E.event_status from users U, events E where U.user_id = E.user_id and (E.user_id, E.event_date) = (select distinct user_id, max(event_date) from events where user_id = E.user_id group by user_id ) ```
Oracle - Join tables using aggregated function?
[ "", "sql", "oracle", "" ]
I have written a query like this to check whether a JSON column has a key: ``` SELECT * FROM "details" where ("data"->'country'->'state'->>'city') is not null; ``` How can we write a query that will select the row if "data" contains "city"? The JSON structure of the data is not consistent.
You can check the **top-level** keys of `data` with `?` as it is said in the [documentation](http://www.postgresql.org/docs/9.5/static/functions-json.html). For example ``` SELECT * FROM details WHERE data ? 'city'; ``` Checking every key in all nested objects from the json column requires a recursive [CTE](http://www.postgresql.org/docs/9.4/static/queries-with.html) ``` select * from details where 'city' in ( WITH RECURSIVE t(k,j) as ( select jsonb_object_keys(details.data), details.data UNION ALL select jsonb_object_keys(t.j->t.k), t.j->t.k FROM t WHERE jsonb_typeof(t.j->t.k) = 'object' ) select k from t ); ``` This of course is not very efficient.
You can use `?`: ``` SELECT * FROM "details" WHERE data->'country'->'state' ? 'city'; ```
Postgresql JSON column check key exists
[ "", "sql", "json", "postgresql", "" ]
I'm having trouble identifying all the querystring parameters that are used on a site. I want to write a T-SQL query that extracts all parameters and counts them, but I don't have permission to write SQL functions, so [this solution](https://stackoverflow.com/a/10083023/896802) isn't much help. The field that I'm working with (`Query`) includes data that looks like this: ``` _=1457999955221 tab=profile tab=tags&sort=votes&page=13 page=5&sort=newest&pagesize=15 ... ``` The query I need to write would return the result: ``` querystring | count ___________________ _ | 1 tab | 2 sort | 2 page | 2 pagesize | 1 ... ``` Any help is greatly appreciated.
You can borrow one of the functions [from here](http://sqlperformance.com/2012/07/t-sql-queries/split-strings) and just inline it into the query. An example below. I would not expect good performance. Creating a CLR function is by far the most efficient way of splitting strings prior to SQL Server 2016. ``` DECLARE @QueryStrings Table ( Query VARCHAR(8000) ) INSERT INTO @QueryStrings VALUES ('INVALID'), ('_=1457999955221'), ('tab=profile'), ('tab=tags&sort=votes&page=13'), ('page=5&sort=newest&pagesize=15'); WITH E1(N) AS ( SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1), E2(N) AS (SELECT 1 FROM E1 a, E1 b), E4(N) AS (SELECT 1 FROM E2 a, E2 b), E42(N) AS (SELECT 1 FROM E4 a, E2 b) SELECT parameter, count(*) FROM @QueryStrings qs CROSS APPLY (SELECT SUBSTRING(qs.Query, t.N + 1, ISNULL(NULLIF(CHARINDEX('&', qs.Query, t.N + 1), 0) - t.N - 1, 8000)) FROM (SELECT 0 UNION ALL SELECT TOP (DATALENGTH(ISNULL(qs.Query, 1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E42) t(N) WHERE ( SUBSTRING(qs.Query, t.N, 1) = '&' OR t.N = 0 )) ca1(split_result) CROSS APPLY (SELECT CHARINDEX('=',split_result)) ca2(pos) CROSS APPLY (SELECT CASE WHEN pos > 0 THEN LEFT(split_result,pos-1) END, CASE WHEN pos > 0 THEN SUBSTRING(split_result, pos+1,8000) END WHERE pos > 0) ca3(parameter,value) GROUP BY parameter ```
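As a sanity check on the expected counts outside the database, the same tally is easy to express with the Python standard library; the query strings below are the samples from the question:

```python
from collections import Counter
from urllib.parse import parse_qsl

queries = [
    "_=1457999955221",
    "tab=profile",
    "tab=tags&sort=votes&page=13",
    "page=5&sort=newest&pagesize=15",
]
# parse_qsl splits "k=v&k=v" pairs; Counter tallies the parameter names
counts = Counter(k for q in queries for k, _ in parse_qsl(q))
print(counts)  # tab/sort/page -> 2, _/pagesize -> 1
```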
More sexy way to approach that: ``` DECLARE @xml xml ;WITH cte AS ( SELECT * FROM (VALUES ('_=1457999955221'), ('tab=profile'), ('tab=tags&sort=votes&page=13'), ('page=5&sort=newest&pagesize=15') ) as T(Query)) SELECT @xml = ( SELECT CAST( ( SELECT '<d><param>' + REPLACE(REPLACE((STUFF(( SELECT '/' + REPLACE(REPLACE(Query,'&','/'),'=','!') FROM cte FOR XML PATH('') ),1,1,'')),'/','</value><param>'),'!','</param><value>') + '</value></d>') as xml)) ;WITH final AS ( SELECT t.v.value('.','nvarchar(20)') as querystring FROM @xml.nodes('/d/param') as t(v) ) SELECT querystring, COUNT(*) as [count] FROM final GROUP BY querystring ``` Result: ``` querystring count -------------------- ----------- _ 1 page 2 pagesize 1 sort 2 tab 2 (5 row(s) affected) ```
How to extract URL querystring parameters in SQL Server without writing a function?
[ "", "sql", "sql-server", "string", "query-string", "" ]
I need to add records/rows to an existing table, based on values of a couple fields. The rows are basically the range of months for each different id - most id's will have multiple months but some only one month. I have a first\_date field and a last\_date field and need to fill in rows for however intervening months are between the two dates and creating a "time id" for the row identifying that month. Current: [![enter image description here](https://i.stack.imgur.com/zoCh3.png)](https://i.stack.imgur.com/zoCh3.png)
If you are using for summary you can use FREQ total\_months; in most procs or in proc freq is it WEIGHT. I you really need to expand I think this will suffice. ``` data expand; set <data-name>; do time_id = 1 to total_months; output; end; run; ```
What I think you're going to need is an additional table, a dimension or mapping table, that will give you information on those dates/months. Think you can then join on it a few times to get your complete list. Here's what I did: ``` CREATE TABLE #tblCurrent (ID INT, First_Date VARCHAR(9), Last_Date VARCHAR(9), TotalMonths INT, VAR1 INT, VAR2 INT) INSERT INTO #tblCurrent SELECT 123,'01jan2015','01mar2015',3,5,2 union SELECT 124,'01jul2015','01aug2015',2,5,2 union SELECT 125,'01jan2015','01jan2015',1,5,2 ``` This was to just create a table mimicking yours... ``` CREATE TABLE #Month ([MonthName] VARCHAR(9), MonthRank INT) INSERT INTO #Month SELECT '01jan2015', 1 union SELECT '01feb2015', 2 union SELECT '01mar2015', 3 union SELECT '01apr2015', 4 union SELECT '01may2015', 5 union SELECT '01jun2015', 6 union SELECT '01jul2015', 7 union SELECT '01aug2015', 8 union SELECT '01sep2015', 9 union SELECT '01oct2015', 10 union SELECT '01nov2015', 11 union SELECT '01dec2015', 12 ``` This was to create a table with month information, like the order/rank. ``` SELECT c.*, m3.MonthRank Time_ID FROM #tblCurrent c JOIN #Month m ON c.First_Date = m.[MonthName] JOIN #Month m2 ON c.Last_Date = m2.[MonthName] JOIN #Month m3 ON m3.MonthRank >= m.MonthRank and m3.MonthRank <=m2.MonthRank ORDER BY ID, m3.MonthRank ``` This third step pulls in info on the first month (join m), the last month (join m2) and then all the months in between (m3). If you are going to continue to use the '01jan2015' style of dates, it would probably be useful to build a date dimension table to store a bunch of relevant info in columns.... month, year, etc..
Using sas or sql add new records to table based on monthly date variables
[ "", "sql", "insert", "sas", "rows", "" ]
I have a table which stores a `ID`, `Name`, `Code`, `IPLow`, `IPHigh` such as: ``` 1, Lucas, 804645, 192.130.1.1, 192.130.1.254 2, Maria, 222255, 192.168.2.1, 192.168.2.254 3, Julia, 123456, 192.150.3.1, 192.150.3.254 ``` Now, if I have an IP address `192.168.2.50`, how can I retrieve the matching record? **Edit** Based on Gordon's answer (which I'm getting compilation errors) this is what I have: ``` select PersonnelPC.* from (select PersonnelPC.*, ( cast(parsename(iplow, 4)*1000000000 as decimal(12, 0)) + cast(parsename(iplow, 3)*1000000 as decimal(12, 0)) + cast(parsename(iplow, 2)*1000 as decimal(12, 0)) + (parsename(iplow, 1)) ) as iplow_decimal, ( cast(parsename(iphigh, 4)*1000000000 as decimal(12, 0)) + cast(parsename(iphigh, 3)*1000000 as decimal(12, 0)) + cast(parsename(iphigh, 2)*1000 as decimal(12, 0)) + (parsename(iphigh, 1)) ) as iphigh_decimal from PersonnelPC ) PersonnelPC where 192168002050 between iplow_decimal and iphigh_decimal; ``` but this gives me an error: ``` Msg 8115, Level 16, State 2, Line 1 Arithmetic overflow error converting expression to data type int. ``` Any ideas?
Painfully. SQL Server has lousy string manipulation functions. It does, however, offer `parsename()`. This approach converts the IP address to a large decimal value for the comparison: ``` select t.* from (select t.*, (cast(parsename(iplow, 4)*1000000000.0 as decimal(12, 0)) + cast(parsename(iplow, 3)*1000000.0 as decimal(12, 0)) + cast(parsename(iplow, 2)*1000.0 as decimal(12, 0)) + cast(parsename(iplow, 1) as decimal(12, 0)) ) as iplow_decimal, (cast(parsename(iphigh, 4)*1000000000.0 as decimal(12, 0)) + cast(parsename(iphigh, 3)*1000000.0 as decimal(12, 0)) + cast(parsename(iphigh, 2)*1000.0 as decimal(12, 0)) + cast(parsename(iphigh, 1) as decimal(12, 0)) ) as iphigh_decimal from t ) t where 192168002050 between iplow_decimal and iphigh_decimal; ``` I should note that IP addresses are often stored in the database as the 4-byte unsigned integers. This makes comparisons much easier . . . although you need complicated logic (usually wrapped in a function) to convert the values to a readable format.
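To illustrate the 4-byte unsigned integer representation mentioned at the end, here is a small pure-Python sketch of the same range check (no database; the rows are the sample data from the question):

```python
def ip_to_int(ip: str) -> int:
    """Pack a dotted-quad IPv4 string into one unsigned 32-bit integer."""
    a, b, c, d = (int(p) for p in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

rows = [
    (1, "Lucas", "192.130.1.1", "192.130.1.254"),
    (2, "Maria", "192.168.2.1", "192.168.2.254"),
    (3, "Julia", "192.150.3.1", "192.150.3.254"),
]
target = ip_to_int("192.168.2.50")
match = [r for r in rows if ip_to_int(r[2]) <= target <= ip_to_int(r[3])]
print(match)  # -> the Maria row only
```

Comparing packed integers this way is exactly why storing `IPLow`/`IPHigh` as integers in the first place makes the query trivial.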
Try this simple way of checking the range ``` DECLARE @IP NVARCHAR(30)='192.168.500.1' SELECT * FROM Branches WHERE CAST(PARSENAME(@IP,4) AS INT)>=CAST(PARSENAME(IPLow,4) AS INT) AND CAST(PARSENAME(@IP,3) AS INT)>=CAST(PARSENAME(IPLow,3) AS INT) AND CAST(PARSENAME(@IP,2) AS INT)>=CAST(PARSENAME(IPLow,2) AS INT) AND CAST(PARSENAME(@IP,1) AS INT)>=CAST(PARSENAME(IPLow,1) AS INT) AND CAST(PARSENAME(@IP,4) AS INT)<=CAST(PARSENAME(IPHigh,4) AS INT) AND CAST(PARSENAME(@IP,3) AS INT)<=CAST(PARSENAME(IPHigh,3) AS INT) AND CAST(PARSENAME(@IP,2) AS INT)<=CAST(PARSENAME(IPHigh,2) AS INT) AND CAST(PARSENAME(@IP,1) AS INT)<=CAST(PARSENAME(IPHigh,1) AS INT) ``` As per @Ed Haper's comment, the CAST is needed.
Select record between two IP ranges
[ "", "sql", "sql-server", "" ]
I want to know the easiest way to extract a number from a character value representing a percentage. For example, I have ``` name, rate Google, 10% Google, 20% Uber, 25% ... ``` I want a query that returns the average rate grouped by name ``` Google, 15% Uber, 25% ```
I'm a fan of regular expressions, but you don't need it in this case, here's an alternative using native functions, with your sample data: ``` WITH nameandrate AS ( SELECT 'Google, 10%' AS namerate UNION ALL SELECT 'Google, 20%' AS namerate UNION ALL SELECT 'Uber, 25%' AS namerate ), split1 AS ( SELECT namerate, CHARINDEX(', ', namerate) AS position FROM nameandrate ), split2 AS ( SELECT LEFT(namerate, position - 1) AS name, CAST( SUBSTRING(namerate, position + 2, LEN(namerate)-position-2) AS int ) AS rate FROM split1 ) SELECT name, CAST(AVG(rate) AS varchar(100)) + '%' FROM split2 GROUP BY name ```
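Echoing the "no regex needed" point, the whole strip-and-average can be done with plain string functions. A runnable sketch with Python's `sqlite3` standing in (assumed table name `t`; `REPLACE`/`CAST` are standard enough to carry over to most dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, rate TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("Google", "10%"), ("Google", "20%"), ("Uber", "25%")])

# Strip the trailing '%', cast to a number, then average per name
rows = conn.execute("""
    SELECT name, AVG(CAST(REPLACE(rate, '%', '') AS INTEGER)) AS avg_rate
    FROM t GROUP BY name ORDER BY name
""").fetchall()
print(rows)  # -> [('Google', 15.0), ('Uber', 25.0)]
```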
You can use this regex for extraction: ``` ([a-z]+\,[\s]*[\d]+\%) ``` with a case-insensitive comparison. You can refer to these links for how to use regex in SQL: [Regular Expressions in MS SQL Server 2005/2008](http://www.codeproject.com/Articles/42764/Regular-Expressions-in-MS-SQL-Server) [Create and Run a CLR SQL Server User-Defined Function](https://msdn.microsoft.com/en-us/library/w2kae45k(v=vs.80).aspx)
extract number from character string in sql and perform calculation
[ "", "sql", "regex", "" ]
From a select query I have the result something like this: ``` IdCompany | IdUser | ComapnyName | JobTitle 1 100 Company1 Developer 2 100 Company2 Developer 3 200 Company3 Developer 4 200 Company4 Developer 5 200 Company5 Developer 6 300 Company6 Developer ``` What I want is to get same fields but not the duplicated IdUser, for each IdUser to get only one record. I tried with DISTINCT and EXISTS but couldn't come to a solution. This is the way I want my result to be: ``` IdCompany | IdUser | ComapnyName | JobTitle 1 100 Company1 Developer 5 200 Company5 Developer 6 300 Company6 Developer ```
A typical way of doing this uses the ANSI standard window function`row_number()`: ``` select t.* from (select t.*, row_number() over (partition by idUser order by idUser) as seqnum from t ) t where seqnum = 1; ```
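A runnable check of the `ROW_NUMBER()` approach with the question's data, via Python's `sqlite3` (requires SQLite 3.25+ for window functions; the window's `ORDER BY IdCompany` is added here so the surviving row per user is deterministic — the original `ORDER BY idUser` leaves the choice arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (IdCompany INTEGER, IdUser INTEGER, CompanyName TEXT, JobTitle TEXT);
INSERT INTO t VALUES
 (1,100,'Company1','Developer'),(2,100,'Company2','Developer'),
 (3,200,'Company3','Developer'),(4,200,'Company4','Developer'),
 (5,200,'Company5','Developer'),(6,300,'Company6','Developer');
""")
rows = conn.execute("""
    SELECT IdCompany, IdUser, CompanyName, JobTitle
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY IdUser ORDER BY IdCompany) AS seqnum
          FROM t) AS x
    WHERE seqnum = 1
    ORDER BY IdUser
""").fetchall()
print(rows)  # one row per IdUser
```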
You could try this: ``` select * from TABLE group by `IdCompany` ```
Don't get a duplicated record on SQL
[ "", "sql", "greatest-n-per-group", "" ]
I have a table called "seekers" which contains a list of people. There is a column called "username" which contains the usernames of the people and a column called "resume\_status" which either has a value of 0 or 1. Currently the the query below does not check the "seekers" table at all. I would like it to only display results with the a value of "1" in the "resume\_status" column. NOTE: The only common column and value in the "seekers" table and the ones in the existing query below is the "username" column. My confusion comes from trying to figure out how to link the query below to the "seekers" table. ``` $query=" (SELECT username, MATCH(highlight) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_highlights HAVING score>0 ORDER by score desc) UNION ALL (SELECT username, MATCH(skill,skill_list) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_skills HAVING score >0 ORDER by score desc) UNION ALL (SELECT username, MATCH(education_title,education_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_education HAVING score >0 ORDER by score desc) UNION ALL (SELECT username, MATCH(employer_title,employer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_employer HAVING score>0 ORDER by score desc) UNION ALL (SELECT username, MATCH(volunteer_title,volunteer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_volunteer HAVING score >0 ORDER by score desc) "; ```
@Brijesh's answer is good but I think this will be faster -- it will also improve with addition of one index on the seekers table and we want username indexes on the other tables. .. ``` SELECT * FROM ( SELECT a.username, MATCH(a.highlight) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_highlights a JOIN seekers ON a.username = seekers.username and seekers.resume_status = 1 HAVING score>0 UNION ALL SELECT b.username, MATCH(b.skill,b.skill_list) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_skills b JOIN seekers ON b.username = seekers.username and seekers.resume_status = 1 HAVING score >0 UNION ALL SELECT c.username, MATCH(c.education_title,c.education_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_education c JOIN seekers ON c.username = seekers.username and seekers.resume_status = 1 HAVING score >0 UNION ALL SELECT d.username, MATCH(d.employer_title,d.employer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_employer d JOIN seekers ON d.username = seekers.username and seekers.resume_status = 1 HAVING score>0 UNION ALL SELECT e.username, MATCH(e.volunteer_title,e.volunteer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_volunteer e JOIN seekers ON e.username = seekers.username and seekers.resume_status = 1 HAVING score >0 ) AS X ORDER BY SCORE desc ```
Use below query to sortout ``` SELECT A.username, A.score FROM ( (SELECT username, MATCH(highlight) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_highlights HAVING score>0 ORDER by score desc) UNION ALL (SELECT username, MATCH(skill,skill_list) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_skills HAVING score >0 ORDER by score desc) UNION ALL (SELECT username, MATCH(education_title,education_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_education HAVING score >0 ORDER by score desc) UNION ALL (SELECT username, MATCH(employer_title,employer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_employer HAVING score>0 ORDER by score desc) UNION ALL (SELECT username, MATCH(volunteer_title,volunteer_organization) AGAINST (\"{$keywords}\" IN BOOLEAN MODE) AS score FROM resume_volunteer HAVING score >0 ORDER by score desc) ) A LEFT JOIN seekers on A.username = seekers.username WHERE seekers.resume_status = 1 ```
WHERE clause after UNION ALL
[ "", "mysql", "sql", "" ]
Basically, at current I have some script that allows my users to see all distinct values of 'make', which shows the distinct values of a, b, c, etc. But under each make there is another column, models. I would like to be able to select all distinct values of the models column where the make column is equal to a certain make. Is this possible? If so, if someone could point me in the right direction it would be helpful. Current code as follows: ``` $sql = "SELECT * FROM table_name WHERE make=146"; ``` I would like it to //SELECT ALL DISTINCT VALUES FROM Column\_Name2 where Column\_Name1=146. Surely this is possible and I just can't figure out which function I need. Many thanks in advance.
``` SELECT DISTINCT model FROM table_name WHERE make = 146; ``` Regards
You can do it with simple `distinct` query, e.g.: ``` select distinct model from table where make = '<make>' ```
selecting a distinct value of column 2 where column one is equal to a certain value
[ "", "mysql", "sql", "sql-server", "web", "" ]
I have three tables- A, B, and C that each contain a list of active customers for 2012, 2013 and 2014, respecitvely. I want to get a list of customers who were active in all three years. I am doing it this way: ``` select distinct customer_id from table_A a inner join table_B b on a.customer_id=b.customer_id inner join table_C c on a.customer_id=c.customer_id ``` But would that give different results than this: ``` select distinct customer_id from table_A a inner join table_B b on a.customer_id=b.customer_id inner join table_C c on b.customer_id=c.customer_id ``` Thank you!
The order of joins for `inner join` does not make a difference. However, if one of the tables is a "master" table with one row per `customer_id`, then it is more efficient to do: ``` select a.customer_id from table_A a where exists (select 1 from table_B b where a.customer_id = b.customer_id) and exists (select 1 from table_C c on a.customer_id = c.customer_id); ``` This eliminates the duplicate reduction for the `select distinct`.
No, because you are doing an inner join. Inner joins are an intersection, so only id's that are in all 3 are going to make it through, no matter what order you put the joins together. If you do an outer join, you have to worry more about order.
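A quick way to convince yourself that the two join orders are equivalent is to run both against the same sample data. A small Python + SQLite sketch (the table contents here are invented for illustration):

```python
import sqlite3

# Invented sample data standing in for the three yearly customer tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_A (customer_id INTEGER);
    CREATE TABLE table_B (customer_id INTEGER);
    CREATE TABLE table_C (customer_id INTEGER);
    INSERT INTO table_A VALUES (1), (2), (3);
    INSERT INTO table_B VALUES (2), (3), (4);
    INSERT INTO table_C VALUES (3), (4), (5);
""")

# Variant 1: second join condition keyed on table_A.
r1 = con.execute("""
    SELECT DISTINCT a.customer_id
    FROM table_A a
    INNER JOIN table_B b ON a.customer_id = b.customer_id
    INNER JOIN table_C c ON a.customer_id = c.customer_id
""").fetchall()

# Variant 2: second join condition keyed on table_B.
r2 = con.execute("""
    SELECT DISTINCT a.customer_id
    FROM table_A a
    INNER JOIN table_B b ON a.customer_id = b.customer_id
    INNER JOIN table_C c ON b.customer_id = c.customer_id
""").fetchall()

print(r1, r2)  # both [(3,)]: only customer 3 is active in all three years
```

Because `a.customer_id = b.customer_id` already holds inside the join, anchoring the third condition on `a` or on `b` selects the same rows.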
Inner join Order
[ "", "mysql", "sql", "teradata", "proc", "" ]
I really would like some help with this. I have two tables, let's say t1 and t2. I want to update t1 based on a value obtained from t2. There is a common field between t1 and t2, which is t1.username = t2.emailAddress. Is there a way to update t1.username? Below is the t1 and t2 structure ``` create table t1 ( username varchar2 (50) primary key, password varchar2 (50) not null ); create table t2 ( ID varchar2 (50) primary key, emailAddress varchar2 (50) not null ); ``` I did not add references to t1 as this is just an example; from this I can apply it to various things. What I have tried is ``` UPDATE (SELECT t1.username, t1.emailAddress FROM t1 join t2 on t1.username = t2.EMAILADDRESS WHERE t2.id = 'SCM-026020') SET t1.username = 'john@gmail.com'; ```
This would typically be done using a subquery in Oracle: ``` UPDATE t1 SET username = 'john@gmail.com' WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.username = t2.EMAILADDRESS AND t2.id = 'SCM-026020' ); ```
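For anyone who wants to experiment with this correlated-`EXISTS` update pattern outside Oracle, here is a rough Python + SQLite sketch; the sample rows and the `'reset-me'` value are invented, and only the row matched by the subquery is touched:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (username TEXT PRIMARY KEY, password TEXT);
    CREATE TABLE t2 (ID TEXT PRIMARY KEY, emailAddress TEXT);
    INSERT INTO t1 VALUES ('john@gmail.com', 'secret'), ('mary@gmail.com', 'pw');
    INSERT INTO t2 VALUES ('SCM-026020', 'john@gmail.com'), ('SCM-999999', 'mary@gmail.com');
""")

# Update only the t1 row whose username matches the t2 email for this ID.
con.execute("""
    UPDATE t1
       SET password = 'reset-me'
     WHERE EXISTS (SELECT 1 FROM t2
                   WHERE t1.username = t2.emailAddress
                     AND t2.ID = 'SCM-026020')
""")

rows = con.execute("SELECT username, password FROM t1 ORDER BY username").fetchall()
print(rows)  # only john's row was changed
```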
Let's assume in table t1 there is a record where the username='mary@gmail.com'. And you need to update this username with the email address from t2 where the ID = 'SCM-026020'. Try the query below. ``` UPDATE t1 SET t1.username = (SELECT emailAddress FROM t2 WHERE ID = 'SCM-026020') WHERE t1.username = 'mary@gmail.com' ``` This will update the username from 'mary@gmail.com' to 'john@gmail.com' in table t1. Hope this is what you want.
Updating a table column based on another table field
[ "", "sql", "oracle", "oracle11g", "" ]
I'm using an Oracle 11g database which contains data wherein I want to replace dynamically generated text with dummy text using an Oracle query. E.g. my column in the table contains the data: `Hello Mike, Your registered no. is 3525. Kindly check the same`. Now the issue is, the name of the customer, i.e. 'Mike', can be dynamic, which is why I'm not able to use the SUBSTR function. And I want to replace 3525 with XXXX, e.g. `Hello Mike, Your registered no. is XXXX. Kindly check the same`. Please help me with the issue. I'm using Oracle 11g.
If the only dynamic part of your string is the name (and assuming that names do not contain numbers...) you can try: ``` select regexp_replace('Hello Mike, Your registered no. is 3525. Kindly check the same', '([0-9])', 'X' ) from dual ``` This simply replaces every numeric character with 'X'. To replace things like, for example, 'AB3525', with a fixed string, say 'XXXX', you can match the fixed surrounding text and keep it with backreferences: ``` select regexp_replace('Hello Mike, Your registered no. is AB3525. Kindly check the same', '(Hello [^\,]*\, Your registered no. is )([^\.]*)(\. Kindly check the same)', '\1XXXX\3' ) from dual ```
You could use **TRANSLATE**, which would be much faster than a **REGULAR EXPRESSION**. It simply replaces any occurrence of a digit with `X`. For example, ``` SQL> SELECT TRANSLATE('Hello Mike, Your registered no. is 3525. Kindly check the same', 2 '0123456789', 3 'XXXXXXXXXX') str 4 FROM dual; STR -------------------------------------------------------------- Hello Mike, Your registered no. is XXXX. Kindly check the same SQL> ```
Replace a dynamically created Substring in Oracle query
[ "", "sql", "oracle", "oracle11g", "" ]
I'd like to pick the brains of any sql expert who can tell me how I can select the distinct values from a field and then add a unique ID to each set of distinct values. I can write a quick bit of code to do this but I need it in a query. Important to add that I need the unique value to start at 1 (otherwise yes I know I can use the existing ID). So it will look like this: > ``` > Patient_ID New_Unique_Value > 23 1 > 23 1 > 23 1 > 4378 2 > 4378 2 > 48 3 > 48 3 > 48 3 > 48 3 > ``` I can write the Patient\_IDs to a temp table but I can't find any info on dynamically adding a unique increment.
The simplest way in MySQL is to use variables: ``` select p.Patient_ID, (@rn := if(@p = p.Patient_ID, @rn, if(@p := p.Patient_ID, @rn + 1, @rn + 1) ) ) as New_Unique_Value from t cross join (select @rn := 0, @p := -1) params order by patient_id; ```
My solution using a temporary table ``` CREATE TABLE IF NOT EXISTS OAK_origres.TEMP2 (PATID INTEGER, NEWID INTEGER AUTO_INCREMENT, PRIMARY KEY (NEWID)); INSERT INTO OAK_origres.TEMP2 (PATID) SELECT DISTINCT OAK_origres.`Original Results`.`Patient ID` FROM OAK_origres.`Original Results` INNER JOIN OAK_patient.Demographic ON OAK_origres.`Original Results`.`Patient ID` = OAK_patient.Demographic.`System ID` WHERE `Import Date` = '2014-09-23 13:00:00'; ``` What I needed was smaller values that I could use to differentiate the values of 'Import Date'. I used the autoincrement values for this: ``` UPDATE OAK_origres.`Original Results` INNER JOIN OAK_origres.TEMP2 ON OAK_origres.`Original Results`.`Patient ID` = OAK_origres.TEMP2.PATID SET `Import Date`=DATE_ADD(`Import Date`, INTERVAL OAK_origres.TEMP2.NEWID SECOND) ```
mySQL select distinct and add unique value
[ "", "mysql", "sql", "" ]
Having a bit of trouble with a basic SQL problem. The question is that I have to find the salespersons first and last name, then their Social Insurance Number, the product description, the product price, and quantity sold where the total quantity sold is greater than 5. I'll attach the database information below as a photo. [![enter image description here](https://i.stack.imgur.com/D6BQp.png)](https://i.stack.imgur.com/D6BQp.png)
Product quantity sold greater than 5 ``` SELECT ProductId FROM ProductsSales GROUP BY ProductId HAVING SUM(QuantitySold) > 5 ``` Use that to get the rest: ``` SELECT s.FirstName, s.LastName, s.SIN, p.ProductDescription, ps.UnitSalesPrice, ps.QuantitySold FROM ProductsSales ps LEFT JOIN Products p on p.ProductID = ps.ProductID LEFT JOIN Salesmen s on s.SalesmanID = ps.SellerID WHERE ps.ProductID IN ( SELECT ProductId FROM ProductsSales GROUP BY ProductId HAVING SUM(QuantitySold) > 5 ) ```
``` SELECT a.FirstName, a.LastName, a.SIN, c.ProductDescription, b.UnitSalesPrice, b.QuantitySold FROM Salesmen a LEFT JOIN ProductsSales b ON a.SalesmanId = b.SellerId LEFT JOIN Products c ON b.ProductId = c.ProductId WHERE b.QuantitySold > 5 ```
Basic SQL Joining Tables
[ "", "sql", "" ]
I have one table with cars, and another table with fuel types. A third table tracks which cars can use which fuel types. I need to select all data for all cars, including which fuel types they can use: Car table has Car\_ID, Car\_Name, etc Fuel table has Fuel\_ID, Fuel\_Name Car\_Fuel table has Car\_ID, Fuel\_ID (one car can have multiple Fuel options) What I want to return: ``` SELECT * , Can_Use_Gas , Can_Use_Diesel , Can_Use_Electric FROM Car ``` The Can\_Use columns are a BIT value, indicating if the car has a matching Fuel entry in the Car\_Fuel table. I can do this with multiple SELECT statements, but this looks painfully messy (and possibly very inefficient?). I'm hoping there's a better way: ``` SELECT c.* , (SELECT COUNT(*) FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 1) AS Can_Use_Gas , (SELECT COUNT(*) FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 2) AS Can_Use_Diesel , (SELECT COUNT(*) FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 3) AS Can_Use_Electric FROM Car c ```
Presumably you have no duplicates in `Car_Fuel`, so you don't need aggregation. Hence you can do: ``` SELECT c.*, ISNULL((SELECT TOP 1 1 FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 1), 0) AS Can_Use_Gas, ISNULL((SELECT TOP 1 1 FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 2), 0) AS Can_Use_Diesel, ISNULL((SELECT TOP 1 1 FROM Car_Fuel f WHERE f.Car_ID = c.Car_ID AND f.Fuel_ID = 3), 0) AS Can_Use_Electric FROM Car c; ``` This is one case where `ISNULL()` has a performance advantage over `COALESCE()`, because `COALESCE()` evaluates the first argument twice.
Although not a perfect solution, you could use the [pivot clause](https://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx): ``` select * from ( select car_name, fuel_name from Car inner join Car_Fuel on Car.car_id = Car_Fuel.car_id inner join Fuel on Car_Fuel.fuel_id = Fuel.fuel_id ) as data pivot ( count(fuel_name) for fuel_name in (Gas, Diesel, Electric) ) as pivot_table; ``` See [this fiddle](http://sqlfiddle.com/#!6/b492a/11), which outputs a table like this: ``` | car_name | Gas | Diesel | Electric | |----------|-----|--------|----------| | Jaguar | 0 | 1 | 0 | | Mercedes | 0 | 1 | 1 | | Volvo | 1 | 0 | 1 | ``` The SQL statement still has the hard-coded list in the `for` clause of the `pivot` part, but when the number of fuel types increases, this might be easier to manage and have better performance. ### Generating the SQL dynamically If you use an application server, you could first execute this query: ``` SELECT stuff( ( SELECT ',' + fuel_name FROM Fuel FOR XML PATH('') ), 1, 1, '') columns ``` This will return the list of columns as one comma-separated value, for example: ``` Gas,Diesel,Electric ``` You would grab that result and inject it in the first query in the `FOR` clause.
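A third, portable variant is conditional aggregation: one left join against `Car_Fuel` plus a `MAX(CASE ...)` flag per fuel type. A Python + SQLite sketch with invented sample cars:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Car (Car_ID INTEGER PRIMARY KEY, Car_Name TEXT);
    CREATE TABLE Car_Fuel (Car_ID INTEGER, Fuel_ID INTEGER);
    INSERT INTO Car VALUES (1, 'Volvo'), (2, 'Jaguar');
    INSERT INTO Car_Fuel VALUES (1, 1), (1, 3), (2, 2);
""")

# One pass over Car_Fuel; MAX() collapses the per-fuel flags to one row per car,
# and COALESCE turns "no matching fuel row" (NULL) into 0.
rows = con.execute("""
    SELECT c.Car_ID, c.Car_Name,
           COALESCE(MAX(CASE WHEN f.Fuel_ID = 1 THEN 1 END), 0) AS Can_Use_Gas,
           COALESCE(MAX(CASE WHEN f.Fuel_ID = 2 THEN 1 END), 0) AS Can_Use_Diesel,
           COALESCE(MAX(CASE WHEN f.Fuel_ID = 3 THEN 1 END), 0) AS Can_Use_Electric
    FROM Car c
    LEFT JOIN Car_Fuel f ON f.Car_ID = c.Car_ID
    GROUP BY c.Car_ID, c.Car_Name
    ORDER BY c.Car_ID
""").fetchall()
print(rows)  # [(1, 'Volvo', 1, 0, 1), (2, 'Jaguar', 0, 1, 0)]
```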
"If one-to-many table has value" as column in SELECT
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Practicing some SQL, we have to get the name of the employees whose salary is the greatest in their department. But if in any department there were more than one employee with the greatest salary, we would not have to consider that department. We got the first part but not the second one (because there are two employees with the same greatest salary (3,000) in the same department (20)). This is what we did: ``` SQL> SELECT ename, sal, deptno FROM emp a WHERE sal >= ALL (SELECT sal FROM emp WHERE deptno=a.deptno) ORDER BY sal; ``` And this is what we got: ``` ENAME SAL DEPTNO ---------- ------- ------ BLAKE 2,850 30 FORD 3,000 20 SCOTT 3,000 20 KING 5,000 10 4 rows selected. ``` Any help will be useful, thank you!
``` SELECT ename, sal, deptno FROM emp a WHERE not exists ( SELECT * FROM emp WHERE deptno=a.deptno and sal >= a.sal and ename != a.ename) ORDER BY sal; ```
``` with cte as ( SELECT ename, sal, deptno , row_number() over (partition by deptno order by sal desc) as rn FROM emp ) select ename, sal, deptno from cte where rn = 1 except select ename, sal, deptno from cte where rn = 2 order by sal ``` if this does not work in oracle - it used to be also tagged mssql
SQL select rows without duplicates
[ "", "sql", "oracle", "" ]
I have a simple SQL statement to create a table like this: ``` Create table tblAccountBalance ( Id int, AccountName nvarchar(200), Balance int ) insert into tblAccountBalance values (1, 'Mark', 1000); insert into tblAccountBalance values (2, 'Mary', 1000); ``` Resulting in ``` Id AccountName Balance ----------------------- 1 Mark 1000 2 Mary 1000 ``` Then I create a transaction like this: ``` begin try begin transaction -- must have transaction keyword here! update tblAccountBalance set Balance = Balance - 100 where Id = 1 update tblAccountBalance set Balance = Balance + 100 where Id = 2 commit transaction --or simply commit, but by default, it is commit WORK - thus you cannot specify transaction name, best is to put transaction print 'Transaction successful!' end try begin catch rollback transaction --or simply rollback, but again, it might be similar to transaction, best is to put transaction keyword print 'Transaction is rolled back!' end catch ``` Which, when executed, simply transfers 100 units of money from Mark to Mary: ``` Id AccountName Balance ----------------------- 1 Mark 900 2 Mary 1100 ``` Now, continuing from the above - updated - table, to introduce an error in the transaction, I change the transaction so that the second update statement uses an `nvarchar` value as the Id, like this: ``` begin try begin transaction update tblAccountBalance set Balance = Balance - 100 where Id = 1 update tblAccountBalance set Balance = Balance + 100 where Id = '24' -- note the Id here is changed commit transaction print 'Transaction successful!' end try begin catch rollback transaction print 'Transaction is rolled back!' end catch ``` To my surprise, executing the above query results in: ``` (1 row(s) affected) (0 row(s) affected) Transaction successful! Id AccountName Balance ----------------------- 1 Mark 800 2 Mary 1100 ``` Which is not the desired result.
But then, if I further continue from the undesired updated table result above with a modified transaction (again, in the second update Id) like this: ``` begin try begin transaction update tblAccountBalance set Balance = Balance - 100 where Id = 1 update tblAccountBalance set Balance = Balance + 100 where Id = 'A24' -- note the Id here is changed commit transaction print 'Transaction successful!' end try begin catch rollback transaction print 'Transaction is rolled back!' end catch ``` This results in: ``` (1 row(s) affected) (0 row(s) affected) Transaction is rolled back! Id AccountName Balance ------------------------ 1 Mark 800 2 Mary 1100 ``` Which is my desired result. What's wrong with the second transaction? Why does the update still getting executed? I am using SQL Server 2014 if that matters.
It is because your second experiment is actually successful and fully executed. When there is an operation that compares 2 different data types, SQL Server will perform implicit conversion. Refer to [Data Type Precedence](https://msdn.microsoft.com/en-us/library/ms190309.aspx) for more details. So there is a conversion of your `id = '24'` to type int, and it eventually becomes `id = 24`. So the experiment deducts the balance from Mark, and would add to the balance of the record with `id = 24`, but since there is no record with this id, nothing changes. In your third experiment, the implicit conversion fails (cannot convert `A24` to integer type) and eventually the transaction is rolled back.
Your second transaction is successful that's why the `UPDATE` was still executed. The `UPDATE` statement that you changed: ``` update tblAccountBalance set Balance = Balance + 100 where Id = '24' --note the Id here is changed ``` did not cause any error. It simply did not return any rows with `Id = '24'`, but without error. Your third transaction resulted in an error that's why the `UPDATE` is rolled back. ``` update tblAccountBalance set Balance = Balance + 100 where Id = 'A24' --note the Id here is changed ``` The above will result to an error like this: > Conversion failed when converting the varchar value 'A24' to data type > int. Since your `Id` is `INT`, SQL Server tries to convert `A24` which is `VARCHAR` to `INT` but fails to do so and thus the error.
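The implicit-conversion behavior for the numeric string can be reproduced even in SQLite from Python (note one difference: SQLite silently compares the non-convertible `'A24'` as text instead of raising a conversion error the way SQL Server does):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.execute("INSERT INTO t VALUES (24)")

# The text '24' is implicitly converted and matches the integer 24.
numeric = con.execute("SELECT id FROM t WHERE id = '24'").fetchall()

# 'A24' cannot be converted; SQLite compares it as text and finds nothing
# (whereas SQL Server raises "Conversion failed ..." here).
non_numeric = con.execute("SELECT id FROM t WHERE id = 'A24'").fetchall()

print(numeric, non_numeric)  # [(24,)] []
```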
SQL Server Transaction successful with wrong Id
[ "", "sql", "sql-server", "transactions", "sql-server-2014", "" ]
I have a table, **testing**, with the attributes: **id**, **fruits**. We have the following contents in said table: ``` id, fruits 1, Apple 2, Banana 3, Apple ``` I would like a query that groups these by the fruits (Apples in one group, Bananas in another) and returns if there is more than 1 in that group. So, for the example above, the query should return: ``` 1, Apple 3, Apple ``` Here's what I have so far: ``` SELECT * FROM testing GROUP BY 'fruits' HAVING COUNT(*) > 1 ORDER BY 'id' ``` This query only returns one of the apples. Thanks for any help! Toby.
You can use a subquery to find the duplicates, and an outer query that gets your rows; ``` SELECT * FROM testing WHERE fruits IN ( SELECT fruits FROM testing GROUP BY fruits HAVING COUNT(*)>1 ) ORDER BY id ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!9/130e7/2).
Actually, the most efficient way to do this is probably to use `exists`: ``` select t.* from testing t where exists (select 1 from testing t2 where t2.fruits = t.fruits and t2.id <> t.id ); ``` For optimal performance, you want an index on `testing(fruits, id)`.
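A small Python + SQLite sketch of the `EXISTS` approach, using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE testing (id INTEGER, fruits TEXT);
    INSERT INTO testing VALUES (1, 'Apple'), (2, 'Banana'), (3, 'Apple');
""")

# Keep a row only if another row (different id) has the same fruit.
rows = con.execute("""
    SELECT t.id, t.fruits
    FROM testing t
    WHERE EXISTS (SELECT 1 FROM testing t2
                  WHERE t2.fruits = t.fruits AND t2.id <> t.id)
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'Apple'), (3, 'Apple')]
```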
SQL Group by attribute and show results if there is more than one in that group
[ "", "mysql", "sql", "" ]
I am looking for someone to help by updating this sql statement, as I want to join two tables but without using " ON AD.[UID] = UI.[UID] " ? ``` SELECT AD.[AdsID] ,AD.[UID] ,AD.[Section] ,AD.[Category] ,AD.[Country] ,AD.[State] ,AD.[City] ,SUBSTRING([AdsTit],1,30)+'...' as AdsTit ,SUBSTRING([AdsDesc],1,85) as AdsDesc ,AD.[AdsPrice] ,AD.[Img1] ,AD.[Currency] ,AD.[Section] ,AD.[Currency] ,AD.[AdsDate] ,AD.[approvAds] ,UI.[approv] FROM [ads] as AD JOIN UserInfo as UI ON AD.[UID] = UI.[UID] where AD.[Country] = @Location AND AD.[approvAds]= 'Y' AND UI.[approv]='Y' ORDER BY [AdsDate] DESC ```
You can move condition from 'on' to 'where', it works like inner join ``` SELECT AD.[AdsID] ,AD.[UID] ,AD.[Section] ,AD.[Category] ,AD.[Country] ,AD.[State] ,AD.[City] ,SUBSTRING([AdsTit],1,30)+'...' as AdsTit ,SUBSTRING([AdsDesc],1,85) as AdsDesc ,AD.[AdsPrice] ,AD.[Img1] ,AD.[Currency] ,AD.[Section] ,AD.[Currency] ,AD.[AdsDate] ,AD.[approvAds] ,UI.[approv] FROM [ads] as AD, UserInfo as UI where AD.[UID] = UI.[UID] and AD.[Country] = @Location AND AD.[approvAds]= 'Y' AND UI.[approv]='Y' ORDER BY [AdsDate] DESC ```
You can use the [`USING`](https://docs.oracle.com/javadb/10.6.1.0/ref/rrefsqljusing.html) keyword instead of `ON` in your SQL query or use a [natural join](http://www.w3resource.com/sql/joins/natural-join.php), which compares all the common columns in two tables itself without the on condition. Query for your tables like this: ``` select AD.column1, AD.column2, UI.column1, UI.column2........ from ads AD join userinfo UI using(UID) where AD.country='location' and AD.ApprovAds='y' and UI.approv='y' order by Ads(date) desc ``` Or ``` select AD.column1, AD.column2, UI.column1, UI.column2........ from ads AD natural join userinfo UI where AD.country='location' and AD.ApprovAds='y' and UI.approv='y' order by Ads(date) desc ```
SQL Using join but without on Condition
[ "", "sql", "" ]
I'm totally new to all this database stuff. What I would like to do is save a list of movies that belongs to a user. For example, the user "james" likes the following movies: "james bond, matrix, the revenant, batman". I don't know how to assign this list of movies to the user "james", and I don't know how to create my tables and fields for that. If you could give me a hint, I would appreciate it. Thanks
user table ``` id | user | other | email ``` movie table ``` id | name | year | etc ``` user\_movie (this is called a pivot table) ``` user_id | movie_id ``` pivot tables are fairly amazing. example ``` users id | name | details 3 | Hakim | xxxxxxx 4 | Mohammed | xxxxxxx videos id | name | year 2 | matrix | 2000 3 | Batman R.| 2008 4 | deadpool | 2016 5 | Minions | 2015 user_video (pivot table) user_id | video_id 3 | 3 3 | 5 4 | 2 4 | 3 4 | 4 ``` this way one users can watch many videos, and vice versa. :)
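To make the pivot (junction) table concrete, here is a small Python + SQLite sketch that lists the videos watched by one user via the two joins:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE videos (id INTEGER PRIMARY KEY, name TEXT, year INTEGER);
    CREATE TABLE user_video (user_id INTEGER, video_id INTEGER);
    INSERT INTO users  VALUES (3, 'Hakim'), (4, 'Mohammed');
    INSERT INTO videos VALUES (2, 'matrix', 2000), (3, 'Batman R.', 2008),
                              (4, 'deadpool', 2016), (5, 'Minions', 2015);
    INSERT INTO user_video VALUES (3, 3), (3, 5), (4, 2), (4, 3), (4, 4);
""")

# Walk from users through the junction table to videos.
rows = con.execute("""
    SELECT v.name
    FROM videos v
    JOIN user_video uv ON uv.video_id = v.id
    JOIN users u       ON u.id = uv.user_id
    WHERE u.name = 'Hakim'
    ORDER BY v.name
""").fetchall()
print(rows)  # [('Batman R.',), ('Minions',)]
```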
link the user id with the movie id It can be a third table PK, userid, movieid,
Assign a list of movie to an user
[ "", "mysql", "sql", "" ]
I executed this SQL statement in Postgres ``` alter table user modify column 'distinguishedName1' text; ``` and ``` alter table user modify column distinguishedName1 text; ``` * `user` is the table name * `distinguishedName1` is the column name with integer data type. I wanted to modify the data type to boolean or text or varchar(256) etc. based on the user's input. But when I run the query I get the error > ERROR: syntax error at or near "modify" Not sure what the problem is. Help with the right query would be appreciated.
Try this: ``` ALTER TABLE "user" ALTER COLUMN distinguishedName1 TYPE text USING distinguishedName1::text; ``` or ``` ALTER TABLE "user" ALTER COLUMN distinguishedName1 TYPE text ``` Also note that the USING clause is optional. See the [manual](http://www.postgresql.org/docs/current/static/sql-altertable.html) here: > The optional USING clause specifies how to compute the new column > value from the old; if omitted, the default conversion is the same as > an assignment cast from old data type to new. A USING clause must be > provided if there is no implicit or assignment cast from old to new > type. On a side note, try to avoid naming your tables with [reserved keywords](http://www.postgresql.org/docs/7.4/static/sql-keywords-appendix.html).
`POSTGRES` syntax for altering a column type: ``` ALTER TABLE "user" ALTER COLUMN distinguishedName1 TYPE text; ```
ERROR: syntax error at or near "modify" - in postgres
[ "", "sql", "postgresql", "ddl", "alter-table", "" ]
I'm struggling to create a SQL statement that returns both the parent and child records in a single query. These are my tables.... **COURSE** ``` COURSE_ID | COURSE_CODE ----------+------------ 912689 | AUS_COURSE 912389 | AUS_FH1 912769 | AUS_FH2 912528 | AUS_SSMOC1 912293 | AUS_UNIT1 912295 | AUS_UNIT2 912303 | AUS_UNIT3 ``` **COURSE\_LINKS** ``` COURSE_ID_FROM | COURSE_ID_TO ---------------+------------- 912689 | 912293 912689 | 912295 912689 | 912303 ``` So as you can see in my link table **AUS\_COURSE** has 3 child records, **AUS\_UNIT1**, **AUS\_UNIT2**, and **AUS\_UNIT3**. I would like my query to somehow return both parent and child records from the COURSE table, so the output would be something like... ``` COURSE_ID | COURSE_CODE ----------+------------ 912689 | AUS_COURSE 912293 | AUS_UNIT1 912295 | AUS_UNIT2 912303 | AUS_UNIT3 ``` I'm struggling with working out what join to use and what field to join on. Many thanks,
You can join the tables by using IN(child,parent) and distinct to drop the duplicates, like this: ``` SELECT distinct c.course_ID,c.course_code FROM COURSE c INNER JOIN COURSE_LINKS cl ON(c.course_ID in(cl.course_id_from,cl.course_id_to)) ```
I'd go for a subselect instead of a join. ``` select COURSE_ID, COURSE_CODE from COURSE where COURSE_ID in (select COURSE_ID_FROM from COURSE_LINKS) OR COURSE_ID in (select COURSE_ID_TO from COURSE_LINKS) ```
SQL join for parent and child records in a link table
[ "", "sql", "oracle", "join", "" ]
I have three tables with the following data Table 3 : ``` Table1_id Table2_id 1 1 1 2 1 3 2 1 2 3 3 2 ``` Table 2 : ``` Table2_id Name 1 A 2 B 3 C ``` Table 1 : ``` Table1_id Name 1 P 2 Q 3 R ``` I have a problem where I need to return all table1\_id's which have an entry for all Table2\_id's in Table 3, i.e. I want my output to be ``` Table1_id 1 ``` I found a solution using count(). But is there a way to use all() or exists() to solve the query?
Using `NOT IN` with an excluding `LEFT JOIN` in a subselect with a `CROSS JOIN` ``` select * from table1 where Table1_id not in ( select t1.Table1_id from table1 t1 cross join table2 t2 left join table3 t3 using (Table1_id, Table2_id) where t3.Table1_id is null ) ``` VS using `COUNT()` ``` select table1_id from table3 group by table1_id having count(1) = (select count(1) from table2) ``` Explanation: The `CROSS JOIN` ``` select t1.Table1_id from table1 t1 cross join table2 t2 ``` represents what `table3` would look like if every item from `table1` were related to every item from `table2`. A (natural) left join with `table3` shows us which relations really exist. Filtering by `where t3.Table1_id is null` (excluding `LEFT JOIN`) we get the missing relations. Using that result for the `NOT IN` clause, we get only table1 items that have no missing relation with table2.
You can use the following query: ``` SELECT DISTINCT t1.* FROM Table2 AS t2 CROSS JOIN Table1 AS t1 WHERE NOT EXISTS (SELECT 1 FROM Table3 AS t3 WHERE t1.Table1_id = t3.Table1_id AND t2.Table2_id = t3.Table2_id) ``` to get `Table1` records not having a complete set of entries from `Table2` in `Table3`. Then use `NOT IN` to get the expected result.
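The same "for all" requirement can also be written as a double `NOT EXISTS` (classic relational division): keep a `Table1_id` only if there is no `Table2_id` it lacks a link to. A Python + SQLite sketch with the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (Table1_id INTEGER);
    CREATE TABLE Table2 (Table2_id INTEGER);
    CREATE TABLE Table3 (Table1_id INTEGER, Table2_id INTEGER);
    INSERT INTO Table1 VALUES (1), (2), (3);
    INSERT INTO Table2 VALUES (1), (2), (3);
    INSERT INTO Table3 VALUES (1,1), (1,2), (1,3), (2,1), (2,3), (3,2);
""")

# "No Table2 row is missing a Table3 link for this Table1_id."
rows = con.execute("""
    SELECT t1.Table1_id
    FROM Table1 t1
    WHERE NOT EXISTS (
        SELECT 1 FROM Table2 t2
        WHERE NOT EXISTS (
            SELECT 1 FROM Table3 t3
            WHERE t3.Table1_id = t1.Table1_id
              AND t3.Table2_id = t2.Table2_id))
""").fetchall()
print(rows)  # [(1,)]
```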
Returning ids of a table where all values of other table exist with this id using all() or exists()
[ "", "mysql", "sql", "database", "dbms-output", "" ]
I have a table ``` EmpId EmpName ManagerId Gender 1 Shahzad 2 M 2 Santosh 1 F 3 Sayanhi 2 M ``` By mistake 'M' is assigned to female employees and 'F' is assigned to male employees. So I need to write a query to make the correction. I tried the below query. ``` UPDATE Employee SET Gender='M' WHERE EmpId IN (SELECT EmpId FROM Employee WHERE Gender='F') AND Gender='F' WHERE EmpId IN (SELECT EmpId FROM Employee WHERE Gender='M') ``` but it didn't work.
here is a simple solution that will fix both cases at once: ``` UPDATE Employee SET Gender = CASE Gender WHEN 'M' THEN 'F' WHEN 'F' THEN 'M' END ```
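A quick Python + SQLite check of the `CASE`-swap update on the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (EmpId INTEGER, EmpName TEXT, ManagerId INTEGER, Gender TEXT);
    INSERT INTO Employee VALUES (1, 'Shahzad', 2, 'M'),
                                (2, 'Santosh', 1, 'F'),
                                (3, 'Sayanhi', 2, 'M');
""")

# Single statement flips both values at once; no intermediate state is needed.
con.execute("""
    UPDATE Employee
       SET Gender = CASE Gender WHEN 'M' THEN 'F' WHEN 'F' THEN 'M' END
""")

rows = con.execute("SELECT EmpId, Gender FROM Employee ORDER BY EmpId").fetchall()
print(rows)  # [(1, 'F'), (2, 'M'), (3, 'F')]
```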
Try this instead: ``` SELECT empid, empname, CASE WHEN gender = 'M' THEN 'F' ELSE 'M' END AS Gender INTO #tmp FROM Employee ``` If you're happy with what you see in there, then: ``` UPDATE Employee SET Employee.Gender = #tmp.Gender FROM Employee INNER JOIN #tmp ON Employee.empid = #tmp.empid ```
Using more than two where clauses in the single update query in SQL Server
[ "", "sql", "sql-server", "" ]
Returns `45.2478` ``` SELECT CAST( geography::STPointFromText( 'POINT(-81.2545 44.1244)', 4326 ).Lat + 1.12342342 AS VARCHAR(50) ) ``` Returns `4.524782342440000e+001` ``` SELECT CONVERT( VARCHAR(50), geography::STPointFromText( 'POINT(-81.2545 44.1244)' , 4326 ).Lat + 1.1234234244, 2 ) ``` According to the "Truncating and Rounding Results" section on [this page](https://msdn.microsoft.com/en-us/library/ms187928.aspx) it looks like CAST should never truncate a float but it's doing it in this case.
The [link](https://msdn.microsoft.com/en-us/library/ms187928.aspx) to the docs that you included in the question has an answer. `CAST` is the same as `CONVERT` without explicitly specifying the optional style parameter. > float and real Styles > > ``` > Value: 0 (default) > Output: A maximum of 6 digits. Use in scientific notation, when appropriate. > ``` So, when you use `CAST` it is the same as using `CONVERT` with `style=0`. Which returns a maximum of 6 digits, i.e. result is rounded to 6 digits.
It is due to the `style` part you mentioned in `CONVERT` function Your query with `style = 2` ``` SELECT CONVERT(VARCHAR(50),geography::STPointFromText('POINT(-81.2545 44.1244)',4326).Lat+1.1234234244,2) ``` **Result :** `4.524782342440000e+001` But when I remove the `Style` part from `Convert` function ``` SELECT CONVERT(VARCHAR(50),geography::STPointFromText('POINT(-81.2545 44.1244)',4326).Lat+1.1234234244) ``` **Result :** `45.2478` which is same as `CAST` function **FYI,** Style `2` is used to format dates in `yy.mm.dd` format
Is this a casting bug in SQL Server 2016 RC0?
[ "", "sql", "sql-server", "sql-server-2016", "" ]
In SQL Server 2008, I want to get the list of columns (column names) that the Primary Key spans. I have tried ``` SELECT * FROM sys.key_constraints LEFT JOIN sysconstraints ON (sys.key_constraints.object_id = sysconstraints.constid) WHERE type = 'PK' AND parent_object_id = OBJECT_ID('dbo.permissioncache'); ``` This returns the primary key and some other values, but not the full list of PK columns. What other table(s) will I have to join in?
Try it this way with the `INFORMATION_SCHEMA` method: ``` SELECT TC.TABLE_NAME, COLUMN_NAME, TC.CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE cc ON cc.Constraint_Name = tc.Constraint_Name AND cc.Table_Name = tc.Table_Name WHERE Constraint_Type = 'PRIMARY KEY' AND cc.Table_Name = 'Yourtable' ``` Or the `sys` catalog option: ``` SELECT t.name AS TABLE_NAME, c.name AS COLUMN_NAME, kc.name AS CONSTRAINT_NAME FROM sys.key_constraints AS kc JOIN sys.tables AS t ON t.object_id = kc.parent_object_id JOIN sys.index_columns AS ic ON ic.object_id = t.object_id AND ic.index_id = kc.unique_index_id JOIN sys.columns AS c ON c.object_id = t.object_id AND c.column_id = ic.column_id WHERE kc.type = 'PK' AND t.name = 'Yourtable' ```
``` SELECT Col.Column_Name from INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col ON Col.Constraint_Name = Tab.Constraint_Name AND Col.Table_Name = Tab.Table_Name WHERE Constraint_Type = 'PRIMARY KEY' ``` and if you want to get the list of all primary key columns in your database then ``` USE myDB; GO SELECT i.name AS IndexName, OBJECT_NAME(ic.OBJECT_ID) AS TableName, COL_NAME(ic.OBJECT_ID,ic.column_id) AS ColumnName FROM sys.indexes AS i INNER JOIN sys.index_columns AS ic ON i.OBJECT_ID = ic.OBJECT_ID AND i.index_id = ic.index_id WHERE i.is_primary_key = 1 ```
Getting list of columns for a PK
[ "", "sql", "sql-server", "" ]
Can someone check what is wrong with this code? I already checked the other questions for reference but it's still not working. [![enter image description here](https://i.stack.imgur.com/zD3nD.png)](https://i.stack.imgur.com/zD3nD.png) ``` declare @sourceTable varchar(500) declare @year varchar(22) declare @month varchar(3) declare @test varchar(12) declare @result varchar(8) declare @index int declare @string varchar(15) set @string = (SELECT DISTINCT TOP 1 REPLACE(dbo.fn_Parsename(WHOLEROW, '|', 0), CHAR(9), '') FROM #temp1) set @test = (select UPPER(convert(datetime,substring(@string,2,charindex('-',@string,1)-2)))) set @month =(left(@test,3)) set @year = (right(@test,5)) set @result = @month + @year -- select @result set @sourceTable = 'gen_048_'+@result select @sourceTable declare @string2 varchar(255) set @string2 = (select convert(varchar(55),refdate)+''-''+convert(varchar(55),refcount) FROM @sourceTable) select @string2 ``` This is the error > Must declare the table variable "@sourceTable".
You need a dynamic query: ``` SET @string2 = 'select convert(varchar(55),refdate)+''-''+convert(varchar(55),refcount) FROM ' + Quotename(@sourceTable) EXEC (@string2) ```
You are not declaring @sourcetable; the error shouts it loud and clear. Add this at the beginning: ``` declare @sourcetable varchar(50) ``` Also, I believe you need to use dynamic SQL for this sort of query and variable usage.
SQL Server: set 2 column into 1 variable from variable table
[ "", "sql", "sql-server", "" ]
I have a quick question please. I am trying to write a query that combines "NOT" and "AND" in MS Access. But for some reason I am not getting the correct result. For example, if I have a table: ``` ID| Name1| Name2| 1 | a | a | 2 | b | b | 3 | a | | 4 | | a | 5 | a | b | 6 | b | a | ``` What I want from my query is everything from Name1 and Name2 that isn't b,b, so all the IDs except 2. But I can only see ID 1, which is a,a. My query is ``` SELECT * FROM TABLE Names WHERE NOT Name1 = 'b' AND NOT Name2 = 'b' ``` which only returns ID 1. Does anybody know what I am doing wrong? Thank you.
Avoiding the NULLs by only using equality testing, applying De Morgan's laws: ``` SELECT * FROM Names WHERE NOT (Name1 = 'b' AND Name2 = 'b') ; ``` The idea is: conditions with `NULL` values never test true: `NULL = 'x'` is not true, and `NULL <> 'x'` is not true either (even `NULL = NULL` is not true!). In short: the condition `(Name1 = 'b' AND Name2 = 'b')` is only true for the row with `id=2`; by applying NOT to this condition you get all rows EXCEPT id=2.
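To verify the `NULL` handling, here is a small Python + SQLite run of this predicate over the sample rows; rows 3 and 4 survive because `false AND unknown` evaluates to false, so the `NOT` makes it true:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Names (ID INTEGER, Name1 TEXT, Name2 TEXT);
    INSERT INTO Names VALUES (1, 'a', 'a'), (2, 'b', 'b'), (3, 'a', NULL),
                             (4, NULL, 'a'), (5, 'a', 'b'), (6, 'b', 'a');
""")

# Only the (b, b) row fails the NOT(...); NULL rows still pass.
rows = con.execute("""
    SELECT ID FROM Names
    WHERE NOT (Name1 = 'b' AND Name2 = 'b')
    ORDER BY ID
""").fetchall()
print(rows)  # every ID except 2
```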
Use parentheses to help break up bits of logic, and prefer `<>` over `NOT`: ``` SELECT * FROM Names WHERE (Name1 <> 'b') OR (Name2 <> 'b') ``` You may have to coerce null values to get 3 and 4: ``` SELECT * FROM Names WHERE (NZ(Name1, "") <> 'b') OR (NZ(Name2, "") <> 'b') ```
Combining "NOT" and "AND" in SQL (MS Access specifically)
[ "", "sql", "ms-access", "" ]
So I'm counting activity records from users in my system. I get the activity counter for each day in a certain month and year, just like the query that follows ``` SELECT CONVERT(date, VIS_DATETIME) AS DATETIME, COUNT(*) AS ACTIVITY FROM ACTIVITY WHERE DATEPART(year, VIS_DATETIME) = 2016 AND DATEPART(month, VIS_DATETIME) = 3 GROUP BY CONVERT(date, VIS_DATETIME) ORDER BY CONVERT(date, VIS_DATETIME) ``` The question is, if, let's say, March 28th doesn't have any activity, it won't be even listed. But, for my charts API, I need to get that listed and with a `0` for the counter. Obviously, accepting suggestions!
Create a table that contains all dates. Then do a left join with the Activity table. Group on the date, and do a `COUNT` on Activity.id. The left join ensures that all dates from the date table are included in the result set, even if they are not matched in the join clause.
``` Declare @DayOfMonth TinyInt Set @DayOfMonth = 1 Declare @Month TinyInt Set @Month = 1 Declare @Year Integer Set @Year = 2016 Declare @startDate datetime Declare @endDate datetime -- ------------------------------------ Select @startDate = DateAdd(day, @DayOfMonth - 1, DateAdd(month, @Month - 1, DateAdd(Year, @Year-1900, 0))) select @endDate = dateadd(month,1,@startDate) ;with dateRange as ( select dt = dateadd(dd, 0, @startDate) where dateadd(dd, 0, @startDate) < @endDate union all select dateadd(dd, 1, dt) from dateRange where dateadd(dd, 1, dt) < @endDate ) select * from dateRange ``` The above query gives all the dates in the month; you can left join it with your aggregated query to get the entries with a zero count.
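Either way the pattern is the same: one row per date, LEFT JOIN the activity onto it, count the matches. A rough, portable sketch in SQLite (a recursive CTE with `date()` replaces the `DATEADD` generator; the sample activity rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Activity (vis_datetime TEXT)")
con.executemany("INSERT INTO Activity VALUES (?)",
                [("2016-03-01 10:00",), ("2016-03-01 11:30",), ("2016-03-03 09:15",)])

# One row per day of March 2016; days without activity come back as 0
# because COUNT(column) skips the NULLs produced by the LEFT JOIN.
rows = con.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT '2016-03-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2016-03-31'
    )
    SELECT days.d, COUNT(Activity.vis_datetime)
    FROM days
    LEFT JOIN Activity ON date(Activity.vis_datetime) = days.d
    GROUP BY days.d
    ORDER BY days.d
""").fetchall()

print(rows[0])   # ('2016-03-01', 2)
print(rows[1])   # ('2016-03-02', 0)
print(len(rows)) # 31 -- every day listed, with or without activity
```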
Get "zero" for a count at dates without records
[ "sql", "sql-server" ]
I am trying to get a count value of 0 if there are no rows that match the condition. I have a big table, but I am posting a truncated table as it explains what I want. Example Table1: ``` **name year title** name1 2008 title1 name2 2009 title2 name3 2010 title1 name1 2008 title2 name4 2009 title1 name2 2008 title2 ``` I want to show the count of rows for every name where year=2008, so that if a name has no row with year 2008, the name should still be shown with a count of 0. Output should be: ``` **name count** name1 2 name2 1 name3 0 name4 0 ``` Workaround: ``` select p.name,(select count(*) from table1 b where b.name=p.name) as count from table1 p where p.year=2008; ``` I tried some more, but none of them worked.
Try this: ``` select p.name, count(case when p.year=2008 then 1 end) as count from table1 p group by p.name ``` The query uses *conditional aggregation* so as to conditionally count `year=2008` occurrences of `p.name` values.
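The conditional-aggregation trick can be replayed end to end against the sample rows (SQLite here; `CASE` inside `COUNT` is standard SQL, so the behavior matches Oracle's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (name TEXT, year INTEGER, title TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    ("name1", 2008, "title1"), ("name2", 2009, "title2"),
    ("name3", 2010, "title1"), ("name1", 2008, "title2"),
    ("name4", 2009, "title1"), ("name2", 2008, "title2"),
])

# COUNT ignores NULLs, and a CASE with no ELSE yields NULL,
# so only year=2008 rows are counted -- yet every name still shows up.
rows = con.execute("""
    SELECT name, COUNT(CASE WHEN year = 2008 THEN 1 END) AS cnt
    FROM table1
    GROUP BY name
    ORDER BY name
""").fetchall()

print(rows)  # [('name1', 2), ('name2', 1), ('name3', 0), ('name4', 0)]
```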
You could also do it using **DECODE**. Of course, **CASE** is pretty clear and verbose. ``` SELECT NAME, COUNT(DECODE(YEAR, 2008, 1)) COUNT FROM your_table GROUP BY NAME ORDER BY COUNT DESC; ``` For example, ``` SQL> WITH sample_data AS( 2 SELECT 'name1' NAME, 2008 YEAR FROM dual UNION ALL 3 SELECT 'name2' NAME, 2009 YEAR FROM dual UNION ALL 4 SELECT 'name3' NAME, 2010 YEAR FROM dual UNION ALL 5 SELECT 'name1' NAME, 2008 YEAR FROM dual UNION ALL 6 SELECT 'name4' NAME, 2009 YEAR FROM dual UNION ALL 7 SELECT 'name2' NAME, 2008 YEAR FROM dual 8 ) 9 --end of sample_Data mimicking real table 10 SELECT NAME, 11 COUNT(DECODE(YEAR, 2008, 1)) COUNT 12 FROM sample_data 13 GROUP BY NAME 14 ORDER BY COUNT DESC; NAME COUNT ----- ---------- name1 2 name2 1 name4 0 name3 0 SQL> ```
How to get count(*) value 0 in select from where group by
[ "sql", "oracle" ]
Goal: Keep a running table of student class ranks each month of the year Haves: I have code that provides me with columns ``` StudentID; '+@DateTXT+' ``` The DateTXT is dynamic variable, returns whatever month I'm running the code in. Needs: I'm trying to use the MERGE, UPDATE, INSERT functions to where I can run the code once and establish a table: ``` | StudentID | Jan | | 56789 | 2 | | 12345 | 7 | ``` Then each month I add a new month column the permanent table: ``` EXEC('ALTER TABLE StudentRanking ADD ' + @DateTXT + ' smallint NOT NULL DEFAULT(999)') | StudentID | Jan | Feb | | 56789 | 2 | 999 | | 12345 | 7 | 999 | ``` I'll run the ranking code again for February and save it into a temporary table, which I will use to merge, update, insert with the StudentRanking table: ``` | StudentID | Feb | | 56789 | 3 | (note.. student 12345 doesn't come up) ``` So I'd like to end up with a running list: ``` EXEC(' MERGE StudentRanking AS TARGET USING ##TEMPDB2 AS SOURCE ON (TARGET.StudentID = SOURCE.StudentID) WHEN MATCHED AND TARGET.' + @DateTXT + ' <> SOURCE.' + @DateTXT + ' THEN UPDATE SET TARGET.' + @DateTXT + ' = SOURCE.' + @DateTXT + ' WHEN NOT MATCHED BY TARGET THEN INSERT (StudentID, ' + @Rank_TXT + ') VALUES (SOURCE.StudentID, SOURCE.' + @Rank_TXT + ') ') | StudentID | Jan | Feb | | 56789 | 2 | 3 | | 12345 | 7 |null | ``` Problem: Some students leave the school, thereby creating a null ranking in proceeding months (e.g. 12345 has no rank in February), so when I try to INSERT the results from a temporary table, I get this ERROR: ``` SQL Server Database Error: Cannot insert the value NULL into column 'Feb', table 'tempdb.dbo.##TEMPDB'; column does not allow nulls. UPDATE fails. ``` I could do an ISNULL(ranking,0) but I'd rather have nulls than 0's
Change your ALTER TABLE to: ``` EXEC('ALTER TABLE StudentRanking ADD ' + @DateTXT + ' smallint NULL') ``` Assuming the down votes are because I didn't offer an alternative that was normalized, I'd recommend using PIVOT for this type of problem. Setup: ``` CREATE TABLE dbo.StudentRanking ( MonthID CHAR(3) NOT NULL, StudentID INT NOT NULL, Score INT NOT NULL, CONSTRAINT PK_StudentRanking__StudentID_Date PRIMARY KEY(MonthID, StudentID), ); INSERT INTO dbo.StudentRanking VALUES ('JAN', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('FEB', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('MAR', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('APR', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('MAY', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('JUN', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('JUL', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('AUG', 56321, 3) INSERT INTO dbo.StudentRanking VALUES ('SEP', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('OCT', 56321, 3) INSERT INTO dbo.StudentRanking VALUES ('NOV', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('DEC', 56321, 2) INSERT INTO dbo.StudentRanking VALUES ('JAN', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('FEB', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('MAR', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('APR', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('MAY', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('JUN', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('JUL', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('AUG', 56821, 2) INSERT INTO dbo.StudentRanking VALUES ('SEP', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('OCT', 56821, 2) INSERT INTO dbo.StudentRanking VALUES ('NOV', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('DEC', 56821, 1) INSERT INTO dbo.StudentRanking VALUES ('JAN', 56021, 3) INSERT INTO dbo.StudentRanking VALUES ('FEB', 56021, 3) INSERT INTO dbo.StudentRanking VALUES ('MAR', 56021, 3) INSERT INTO dbo.StudentRanking VALUES ('APR', 
56021, 3) INSERT INTO dbo.StudentRanking VALUES ('MAY', 56021, 3) INSERT INTO dbo.StudentRanking VALUES ('JUN', 56021, 4) INSERT INTO dbo.StudentRanking VALUES ('JUL', 56021, 5) ``` Query ``` SELECT * FROM StudentRanking PIVOT (SUM(Score) FOR MonthID IN (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC)) AS PVT ``` Results [![Query Output](https://i.stack.imgur.com/WPKcG.png)](https://i.stack.imgur.com/WPKcG.png) The SUM(SCORE) is harmless in this instance since there is never more than one record per student per month. It's just there to allow PIVOT to know what to work around.
Always open to a different, better approach, @GarethD! I actually got it to work by doing: ``` EXEC('ALTER TABLE StudentRanking ADD ' + @Date_TXT + ' smallint DEFAULT(null)') WHEN MATCHED AND TARGET.' + @Date_TXT + ' IS NULL THEN UPDATE SET TARGET.' + @Date_TXT + ' = SOURCE.' + @Date_TXT + ' ```
MERGE, UPDATE, INSERT T-SQL Cannot INSERT value NULL
[ "sql", "sql-server", "merge", "null" ]
I'm trying to SET more than one value within the if/else statement below. If I set one value it works, but if I set two values, it doesn't work: ``` DECLARE @test1 varchar(60); DECLARE @test2 varchar(60); IF ((SELECT COUNT(*) FROM table WHERE table.Date > '2016-03-20') > 10) SET @test1 = 'test1' SET @test2 = 'test2' ELSE SET @test1 = 'testelse' SET @test2 = 'testelse' ``` > Error message: "Msg 156, Level 15, State 1, Line 9 > Incorrect syntax near the keyword 'ELSE'." However, it seems to be possible to have multiple SET statements after the ELSE; this code works: ``` IF ((SELECT COUNT(*) FROM table WHERE table.Date > '2016-03-20') > 10) SET @test1 = 'test1' ELSE SET @test1 = 'testelse' SET @test2 = 'testelse' ``` How can I do this correctly?
If you have more than one statement in an IF or ELSE branch, you must use a `BEGIN ... END` block to encapsulate them. ``` IF ((SELECT COUNT(*) FROM table WHERE table.Date > '2016-03-20') > 10) BEGIN SET @test1 = 'test1' SET @test2 = 'test2' END ELSE BEGIN SET @test1 = 'testelse' SET @test2 = 'testelse' END ```
Use `BEGIN` and `END` to mark a multi-statement block of code, much like using `{` and `}` in other languages, in which you can place your multiple `SET` statements... ``` IF ((SELECT COUNT(*) FROM table WHERE table.Date > '2016-03-20') > 10) BEGIN SET @test1 = 'test1' SET @test2 = 'test2' END ELSE BEGIN SET @test1 = 'testelse' SET @test2 = 'testelse' END ``` Or, use `SELECT` to assign values to your variables, allowing both to be assigned in a single statement and so avoid requiring the use of `BEGIN` and `END`. ``` IF ((SELECT COUNT(*) FROM table WHERE table.Date > '2016-03-20') > 10) SELECT @test1 = 'test1', @test2 = 'test2' ELSE SELECT @test1 = 'testelse', @test2 = 'testelse' ```
How to set multiple values inside an if else statement?
[ "sql", "sql-server", "t-sql" ]
I have the following query to return userids that are not in the message\_log table ``` select * from likes where userid not in(select to_id from message_log) ``` I have an index on the userid column in the likes table and an index on the to\_id column in the message\_log table but the index are not being used according to EXPLAIN. Is something wrong with my query? My query has been running for 20 minutes and still no results.
You can try this ``` select * from likes lk where not exists (select 1 from message_log where to_id = lk.userid ) ```
``` select * from likes left join message_log ml on ml.to_id=likes.userid where ml.to_id is null ``` Try the query with LEFT JOIN instead; the `ml.to_id is null` filter keeps only the userids without messages.
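Whichever form turns out faster on the real data, there is also a correctness reason to move away from NOT IN here: a single NULL in `message_log.to_id` makes NOT IN return nothing at all. A small SQLite demonstration (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE likes (userid INTEGER)")
con.execute("CREATE TABLE message_log (to_id INTEGER)")
con.executemany("INSERT INTO likes VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO message_log VALUES (?)", [(1,), (None,)])

# NOT IN against a set containing NULL: every comparison is "unknown",
# so no row qualifies and the query silently returns an empty set.
not_in = con.execute(
    "SELECT userid FROM likes "
    "WHERE userid NOT IN (SELECT to_id FROM message_log) ORDER BY userid").fetchall()

# NOT EXISTS and the LEFT JOIN ... IS NULL anti-join are NULL-safe.
not_exists = con.execute(
    "SELECT userid FROM likes lk WHERE NOT EXISTS "
    "(SELECT 1 FROM message_log WHERE to_id = lk.userid) ORDER BY userid").fetchall()

left_join = con.execute(
    "SELECT likes.userid FROM likes "
    "LEFT JOIN message_log ml ON ml.to_id = likes.userid "
    "WHERE ml.to_id IS NULL ORDER BY likes.userid").fetchall()

print(not_in)      # []
print(not_exists)  # [(2,), (3,)]
print(left_join)   # [(2,), (3,)]
```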
Query taking a very long time
[ "sql", "postgresql", "postgresql-performance" ]
I have 2 columns in the Table1: `Time_Stamp` and `RunTimeMinute`. How can I subtract the `Time_Stamp` value where RunTimeMinute=0 from the `Time_Stamp` value corresponding to RunTimeMinute=1 (which would give me the time taken to get the machine running)? ``` Time_Stamp RunTimeMinute 2016-03-01 04:32:10.0000000 1 2016-03-01 04:33:11.0000000 2 2016-03-01 04:34:13.0000000 3 2016-03-01 04:35:15.0000000 4 2016-03-01 04:36:16.0000000 5 2016-03-01 04:37:18.0000000 6 2016-03-01 04:38:20.0000000 7 2016-03-01 04:39:22.0000000 8 2016-03-01 04:40:23.0000000 9 2016-03-01 04:41:16.0000000 0 2016-03-01 04:45:36.0000000 10 ```
To accomplish the task you have described (provided the `Time_Stamp` field is of type `DATETIME` and `RunTimeMinute` is an integer), you may use a `SELECT` subquery together with the `DATEDIFF()` function, as shown in the following example: ``` SELECT yt.Time_Stamp AS t1, DATEDIFF(minute, (SELECT t0.Time_Stamp FROM YourTable t0 WHERE t0.RunTimeMinute = 1), yt.Time_Stamp) AS ElapsedTimeMin FROM YourTable yt WHERE yt.RunTimeMinute = 0; ``` `ElapsedTimeMin` will display the result in minutes; specify the datepart of `DATEDIFF()` as `second` to get the result in seconds. Note that this only works as written when the table holds a single run, since the scalar subquery must return exactly one row. Hope this may help.
For example we have a sample like this: ``` CREATE TABLE Table1 ( Time_Stamp datetime, RunTimeMinute int ) INSERT INTO Table1 VALUES ('2016-03-01 04:32:10.000', 1), ('2016-03-01 04:33:11.000', 2), ('2016-03-01 04:34:13.000', 3), ('2016-03-01 04:35:15.000', 4), ('2016-03-01 04:36:16.000', 5), ('2016-03-01 04:37:18.000', 6), ('2016-03-01 04:38:20.000', 7), ('2016-03-01 04:39:22.000', 8), ('2016-03-01 04:40:23.000', 9), ('2016-03-01 04:41:16.000', 0), ('2016-03-01 04:45:36.000', 10), ('2016-03-01 05:31:10.000', 1), ('2016-03-01 05:35:11.000', 2), ('2016-03-01 05:37:13.000', 3), ('2016-03-01 05:39:15.000', 4), ('2016-03-01 05:41:16.000', 5), ('2016-03-01 05:46:18.000', 6), ('2016-03-01 05:48:20.000', 7), ('2016-03-01 05:51:22.000', 8), ('2016-03-01 05:53:23.000', 9), ('2016-03-01 05:55:16.000', 0), ('2016-03-01 05:57:36.000', 10), ('2016-03-02 05:34:09.000', 1), ('2016-03-02 05:35:14.000', 2), ('2016-03-02 05:36:11.000', 3), ('2016-03-02 05:37:18.000', 4), ('2016-03-02 05:38:20.000', 5), ('2016-03-02 05:39:38.000', 6), ('2016-03-02 05:40:40.000', 7), ('2016-03-02 05:41:12.000', 8), ('2016-03-02 05:42:32.000', 9), ('2016-03-02 05:44:11.000', 0), ('2016-03-02 05:47:38.000', 10) ``` Then we make this: ``` ;WITH cte AS ( SELECT Time_Stamp, RunTimeMinute, ROW_NUMBER() OVER (PARTITION BY RunTimeMinute ORDER BY Time_Stamp) AS rnum FROM Table1 WHERE RunTimeMinute IN (0,1) ) SELECT MIN(Time_Stamp) as StartTime, DATEDIFF(minute, MIN(Time_Stamp), MAX(Time_Stamp)) AS ElapsedTimeMin FROM cte GROUP BY rnum ``` And get this: ``` | StartTime | ElapsedTimeMin | |-------------------------|----------------| | March, 01 2016 04:32:10 | 9 | | March, 01 2016 05:31:10 | 24 | | March, 02 2016 05:34:09 | 10 | ```
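As a rough cross-check of the arithmetic, SQLite's `julianday()` can stand in for `DATEDIFF` (only the first run's minute-1 and minute-0 rows are used; the integer cast truncates, whereas SQL Server's `DATEDIFF` counts minute boundaries, but the two happen to agree on this sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (time_stamp TEXT, run_time_minute INTEGER)")
con.executemany("INSERT INTO Table1 VALUES (?, ?)", [
    ("2016-03-01 04:32:10", 1),   # first "running" sample of the run
    ("2016-03-01 04:41:16", 0),   # the startup marker
])

# Whole minutes between the RunTimeMinute=1 row and the RunTimeMinute=0 row.
elapsed = con.execute("""
    SELECT CAST((julianday(b.time_stamp) - julianday(a.time_stamp)) * 24 * 60
                AS INTEGER)
    FROM Table1 a, Table1 b
    WHERE a.run_time_minute = 1 AND b.run_time_minute = 0
""").fetchone()[0]

print(elapsed)  # 9 -- matches the first ElapsedTimeMin in the answer's output
```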
SQL Statement to Calculate DateTime Difference between Two Rows of the same Table
[ "sql", "sql-server", "subquery", "datediff" ]
I want to subtract 2 dates in MS SQL Server. Example: ``` Current date Last used date '2016-03-30' '2015-02-03' ``` Current date refers to today's date; "Last used date" is a measure. How do I write a query for this in SQL Server? I have this, but it doesn't work (it says "Operand data type is invalid for subtract operator"): ``` select CONVERT(DATE, GETDATE()) - CONVERT(DATE, LastUsedDate) from databasename ```
``` SELECT DATEDIFF(day,'2014-06-05','2014-08-05') AS DiffDate ``` Output: ``` DiffDate 61 ``` For more practice, see the W3Schools reference: <https://www.w3schools.com/sql/func_sqlserver_datediff.asp>
Here you don't have to cast `GETDATE()` to a date, as it already returns the `datetime` datatype. So your query will be as follows: ``` SELECT DATEDIFF(day,CAST(LastUsedDate as date),GETDATE()) AS DifferenceDays FROM TableName ```
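The expected value for the question's pair of dates can be sanity-checked portably; SQLite's `julianday()` stands in for T-SQL's `DATEDIFF(day, ...)` here:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Days between the 'last used' date and the 'current' date from the question.
days = con.execute(
    "SELECT CAST(julianday('2016-03-30') - julianday('2015-02-03') AS INTEGER)"
).fetchone()[0]

print(days)  # 421
```

The span crosses the 2016 leap day (Feb 29), which is why it is 421 rather than 420.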
Subtract two dates in Microsoft SQL Server
[ "sql", "sql-server", "database", "sql-server-2012", "data-warehouse" ]
I have a table with a column whose values are separated by semicolons. The concern is that the number of values in the column is not fixed; it ranges from 1 up to 80 semicolon-separated parts. I am trying to put each individual value into a separate column. **SQL SERVER 2008 code** ``` DECLARE @Table TABLE( Val VARCHAR(50) ) INSERT INTO @Table (Val) SELECT '2Xcalcium; kidney' union all SELECT '3XMagnessium; liver' union all SELECT '2-ECG;3XSODIUM;DIALYSIS' SELECT *, CAST(LEFT(Val,CHARINDEX(';',Val)-1) AS VARCHAR) FirstValue, CAST(RIGHT(Val,LEN(Val) - CHARINDEX(';',Val)) AS VARCHAR) SecondValue FROM @Table ``` I tried the above code, but it is limited to the first two values only. Please share your expertise.
Try it like this: ``` DECLARE @Table TABLE( Val VARCHAR(50) ) INSERT INTO @Table (Val) SELECT '2Xcalcium; kidney' union all SELECT '3XMagnessium; liver' union all SELECT '2-ECG;3XSODIUM;DIALYSIS'; ;WITH Splitted AS ( SELECT * ,CAST('<x>' + REPLACE(Val,';','</x><x>') + '</x>' AS XML) ValuesAsXML FROM @Table ) SELECT * ,ValuesAsXML.value('x[1]','varchar(max)') AS FirstCol ,ValuesAsXML.value('x[2]','varchar(max)') AS SecondCol ,ValuesAsXML.value('x[3]','varchar(max)') AS ThirdCol ,ValuesAsXML.value('x[4]','varchar(max)') AS FourthCol ,ValuesAsXML.value('x[5]','varchar(max)') AS FifthCol FROM Splitted ``` The result ``` Val FirstCol SecondCol ThirdCol FourthCol FifthCol 2Xcalcium; kidney 2Xcalcium kidney NULL NULL NULL 3XMagnessium; liver 3XMagnessium liver NULL NULL NULL 2-ECG;3XSODIUM;DIALYSIS 2-ECG 3XSODIUM DIALYSIS NULL NULL ```
Most of the linked approaches extract the elements into rows. If you prefer to keep your existing logic and extract the individual elements into separate columns, you can use multiple cascaded CROSS APPLYs. ``` SELECT t.Val, v1.V as V1, v2.V as V2, v3.V as V3 FROM @Table t cross apply ( select V = LEFT(t.Val, CHARINDEX(';', t.Val + ';') - 1), Val = STUFF(t.Val, 1, CHARINDEX(';', t.Val + ';'), '') ) v1 cross apply ( select V = LEFT(v1.Val, CHARINDEX(';', v1.Val + ';') - 1), Val = STUFF(v1.Val, 1, CHARINDEX(';', v1.Val + ';'), '') ) v2 cross apply ( select V = LEFT(v2.Val, CHARINDEX(';', v2.Val + ';') - 1), Val = STUFF(v2.Val, 1, CHARINDEX(';', v2.Val + ';'), '') ) v3 ```
Semicolon seperated value to other column in sql server
[ "sql", "sql-server", "sql-server-2008", "t-sql" ]
Folks, I've read the fixes for this issue but I have a variation that I can't solve. Firstly the following syntax throws the error ``` Case when #TFs.lTFID = 1161165 then REPLACE(CONVERT(VARCHAR(11), cast([dCreatedUTC] as datetime), 106), ' ', '-') when #TFs.lTFID = 1161166 then 'Administrator' when #TFs.lTFID = 1161167 then '' when #TFs.lTFID = 1161168 then AssetID when #TFs.lTFID = 1161169 then '' when #TFs.lTFID = 1161170 then '' when #TFs.lTFID = 1161172 then '' when #TFs.lTFID = 1161173 then '' else CAST(#TFs.lTFID as varchar(20)) End 'Value' ``` If I comment out all but one of the "when" lines as below it works. ``` Case when #TFs.lTFID = 1161165 then REPLACE(CONVERT(VARCHAR(11), cast([dCreatedUTC] as datetime), 106), ' ', '-') /* when #TFs.lTFID = 1161166 then 'Administrator' when #TFs.lTFID = 1161167 then '' when #TFs.lTFID = 1161168 then AssetID when #TFs.lTFID = 1161169 then '' when #TFs.lTFID = 1161170 then '' when #TFs.lTFID = 1161172 then '' when #TFs.lTFID = 1161173 then ''*/ else CAST(#TFs.lTFID as varchar(20)) End 'Value' ``` Any thoughts about how I can format this query so that all "cases" work?
Based on the error message, the result of the `CASE` expression gets converted to `INT`. All of the return values of your `WHEN`s are known to be `VARCHAR`, except for `AssetID`, so I assume this must be an `INT` value. The error happens because when using a `CASE` expression, all return values must be of the same data type. In case they have different data types, all values are converted to the type with the higher [**data type precedence**](https://msdn.microsoft.com/en-us/library/ms190309.aspx?f=255&MSPPError=-2147217396). And since `INT` has a higher precedence than `VARCHAR`, the results are converted to `INT`, which caused the error. To fix this, you have to convert `AssetID` to `VARCHAR(n)` also: ``` Case when #TFs.lTFID = 1161165 then REPLACE(CONVERT(VARCHAR(11), cast([dCreatedUTC] as datetime), 106), ' ', '-') when #TFs.lTFID = 1161166 then 'Administrator' when #TFs.lTFID = 1161167 then '' when #TFs.lTFID = 1161168 then CAST(AssetID AS VARCHAR(20)) when #TFs.lTFID = 1161169 then '' when #TFs.lTFID = 1161170 then '' when #TFs.lTFID = 1161172 then '' when #TFs.lTFID = 1161173 then '' else CAST(#TFs.lTFID as varchar(20)) End 'Value' ```
Try and cast the AssetID to a varchar in the case statement. ``` Case when #TFs.lTFID = 1161165 then REPLACE(CONVERT(VARCHAR(11), cast([dCreatedUTC] as datetime), 106), ' ', '-') when #TFs.lTFID = 1161166 then 'Administrator' when #TFs.lTFID = 1161167 then '' when #TFs.lTFID = 1161168 then CAST(AssetID AS VARCHAR(20)) when #TFs.lTFID = 1161169 then '' when #TFs.lTFID = 1161170 then '' when #TFs.lTFID = 1161172 then '' when #TFs.lTFID = 1161173 then '' else CAST(#TFs.lTFID as varchar(20)) End 'Value' ```
SQL Select Case Conversion failed when converting the varchar value to data type int.
[ "sql", "sql-server" ]
I have a student table with the below structure. ``` StudentId StudentName SubjectId 123 Lina 1 456 Andrews 4 123 Lina 3 123 Lina 4 456 Andrews 5 ``` I need to write a query to get the studentId of students that have all of the subject ids 1, 3 and 4. ``` Select studentId from student where subject Id='1' and SubjectId ='3' and subjectId ='4' . ``` The output studentId should be 123, but this does not work. Any help is appreciated.
Try with grouping: ``` Select studentId from student where subject_Id in ('1', '3', '4') group by studentId having count(distinct subject_Id) = 3 ``` **Note:** You might consider changing `('1', '3', '4')` to `(1, 3, 4)` if `subject_Id` field is of type `int`. **Note2:** `distinct` keyword inside `count` should be used in case you have duplicate `subject_Id` values per `studentId`.
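The grouping approach can be verified against the sample table (SQLite here; this is the classic relational-division pattern, and the HAVING count matches the size of the IN list):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE student (student_id INTEGER, student_name TEXT, subject_id INTEGER)")
con.executemany("INSERT INTO student VALUES (?, ?, ?)", [
    (123, "Lina", 1), (456, "Andrews", 4), (123, "Lina", 3),
    (123, "Lina", 4), (456, "Andrews", 5),
])

# Only a student holding all three subject ids survives the HAVING filter;
# 456 matches just one of (1, 3, 4) and is filtered out.
rows = con.execute("""
    SELECT student_id
    FROM student
    WHERE subject_id IN (1, 3, 4)
    GROUP BY student_id
    HAVING COUNT(DISTINCT subject_id) = 3
    ORDER BY student_id
""").fetchall()

print(rows)  # [(123,)]
```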
The easiest way would be ``` select StudentId from student where SubjectId in (1,3,4) group by StudentId having count(distinct SubjectId) = 3 ```
Sql query to find Id tagged to multiple rows
[ "mysql", "sql", "oracle10g" ]
`Suppliers` table ``` Part Number Supplier XXX A XXX B YYY C ``` `Part Numbers` table ``` PK_ID Part Number 1 XXX 2 YYY ``` I want to select it with additional column `FK_ID` based on `PK_ID` from `Part Numbers`: ``` FK_ID Part Number Supplier 1 XXX A 1 XXX B 2 YYY C ``` What doesn't work: ``` SELECT s.`Part Number`, s.`Supplier`, p.`PK_ID` FROM `Suppliers` AS s, `Part Numbers` AS p JOIN ON s.`Part Number` = p.`Part Number` ```
Never use commas in the `FROM` clause. Always use explicit `JOIN` syntax: ``` SELECT s.`Part Number`, s.`Supplier`, p.`PK_ID` FROM `Suppliers` s JOIN `Part Numbers` p ON s.`Part Number` = p.`Part Number`; ``` I would encourage you to name your tables and columns without spaces, so you don't need to escape the names. Having to put a bunch of backticks in the SQL statement is just a distraction for writing or reading the query.
Try: ``` SELECT s.`Part Number`, s.`Supplier`, p.`PK_ID` FROM `Suppliers` AS s JOIN `Part Numbers` AS p ON s.`Part Number` = p.`Part Number` ```
SQL select column from other table with corresponding value
[ "mysql", "sql", "join" ]
I'm using Oracle for personal database learning purposes. With my current knowledge I can't solve this question, so I'll show my problem here. Here is my 2-table structure: ``` CREATE table "HOSPITAL" ( "HOSPITAL_ID" NUMBER NOT NULL, "NAME" VARCHAR2(255), "ADDRESS" VARCHAR2(255), "BED_CAPACITY" NUMBER, "CONTACT" NUMBER, "EMAIL" VARCHAR2(255), "PAYMENT-SCALE" VARCHAR2(255), constraint "HOSPITAL_PK" primary key ("HOSPITAL_ID") ); CREATE table "HOSPITAL_SPECIALIZE" ( "HOSPITAL_ID" NUMBER NOT NULL, "MATERINITY" VARCHAR2(255), "ENT" VARCHAR2(255), "DERMATOLOGY" VARCHAR2(255), "ORTHOPEDICS" VARCHAR2(255), constraint "HOSPITAL_SPECIALIZE_PK" primary key ("HOSPITAL_ID") ); ``` -----Here is the data inside my table----- [![enter image description here](https://i.stack.imgur.com/HPg6M.png)](https://i.stack.imgur.com/HPg6M.png) Here are the hospital specialization fields; **Yes** means the hospital specializes in that particular field. [![enter image description here](https://i.stack.imgur.com/Il7xC.png)](https://i.stack.imgur.com/Il7xC.png) **How should I write a SQL query that automatically detects and lists only the hospitals that specialize in more than one area?** Here is the output I wish to get: [![enter image description here](https://i.stack.imgur.com/g7sFl.png)](https://i.stack.imgur.com/g7sFl.png) Is this possible in SQL, or is my table structure wrong? Thank you for reading my question, and sorry about my poor explanation.
I don't see an easy way to do this, so the cumbersome way is: ``` SELECT * FROM "HOSPITAL" H JOIN "HOSPITAL_SPECIALIZE" HS ON H."HOSPITAL_ID" = HS."HOSPITAL_ID" WHERE DECODE("MATERINITY", 'YES', 1, 0) + DECODE("ENT", 'YES', 1, 0) + DECODE("DERMATOLOGY", 'YES', 1, 0) + DECODE("ORTHOPEDICS", 'YES', 1, 0) >= 2 ``` Adjust the `'YES'` literals to match the exact case stored in the table, since Oracle string comparison is case-sensitive.
Try: ``` SELECT a.HOSPITAL_ID, a.NAME, b.MATERINITY, b.ENT, b.DERMATOLOGY, b.ORTHOPEDICS FROM HOSPITAL a JOIN HOSPITAL_SPECIALIZE b ON a.HOSPITAL_ID = b.HOSPITAL_ID WHERE CASE MATERINITY WHEN 'YES' THEN 1 ELSE 0 END + CASE ENT WHEN 'YES' THEN 1 ELSE 0 END + CASE DERMATOLOGY WHEN 'YES' THEN 1 ELSE 0 END + CASE ORTHOPEDICS WHEN 'YES' THEN 1 ELSE 0 END > 1 ```
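Both answers score one point per specialization column and keep rows scoring 2 or more; the same idea can be sketched in SQLite with `CASE` sums (sample values invented, column names following the question's DDL, including its MATERINITY spelling):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hospital_specialize
               (hospital_id INTEGER, materinity TEXT, ent TEXT,
                dermatology TEXT, orthopedics TEXT)""")
con.executemany("INSERT INTO hospital_specialize VALUES (?, ?, ?, ?, ?)", [
    (1, "Yes", None, "Yes", None),   # 2 specialties -> listed
    (2, "Yes", None, None, None),    # 1 specialty  -> filtered out
    (3, None, "Yes", "Yes", "Yes"),  # 3 specialties -> listed
])

# Sum one point per 'Yes' column and keep rows scoring 2 or more.
rows = con.execute("""
    SELECT hospital_id
    FROM hospital_specialize
    WHERE (CASE materinity  WHEN 'Yes' THEN 1 ELSE 0 END)
        + (CASE ent         WHEN 'Yes' THEN 1 ELSE 0 END)
        + (CASE dermatology WHEN 'Yes' THEN 1 ELSE 0 END)
        + (CASE orthopedics WHEN 'Yes' THEN 1 ELSE 0 END) >= 2
    ORDER BY hospital_id
""").fetchall()

print(rows)  # [(1,), (3,)]
```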
SQL queries to determine more than 1 specialize and display in table
[ "sql", "oracle" ]
I'm joining two tables on three fields. The problem I'm having is that I need to use a default row from table b if a.baz is not in b.baz. The problem is some will have a match in b but also have a default, and that is causing a cross-product that I don't want. ``` select a.foo, a.bar, a.baz, b.fee, b.fie from a join b on a.foo = b.foo and a.bar = b.bar and ((a.baz = b.baz) or b.baz = 'DEFAULT') ``` Current output: ``` foo bar baz fee fie bob doe NYC 500 200 bob doe DEFAULT 100 100 john doe DEFAULT 100 100 jane doe NYC 500 500 ``` desired output: ``` foo bar baz fee fie bob doe NYC 500 200 john doe DEFAULT 100 100 jane doe NYC 500 500 ``` Sample data: ``` a: foo bar baz bob doe NYC john doe NYC jane doe NYC ``` ``` b: foo bar baz fee fie bob doe NYC 500 200 bob doe DEFAULT 100 100 john doe CHI 300 200 john doe DEFAULT 100 100 jane doe NYC 500 100 ```
You have to add a `NOT EXISTS` so as to exclude the `b` record having `baz = 'DEFAULT'` when a match `a.baz = b.baz` also exists: ``` select a.foo, a.bar, a.baz, b.baz, b.fee, b.fie from a join b on a.foo = b.foo and a.bar = b.bar and ((a.baz = b.baz) OR b.baz = 'DEFAULT') where not exists (select 1 from b as b1 where a.foo = b1.foo and a.bar = b1.bar and b.baz = 'DEFAULT' and b1.baz = a.baz) ``` [**Demo here**](http://sqlfiddle.com/#!9/77dab/2)
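The join-plus-`NOT EXISTS` pattern can be checked end to end against the sample data; here a rough re-run in SQLite (SQLite stands in for Postgres; the semantics of the correlated subquery are the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (foo TEXT, bar TEXT, baz TEXT)")
con.execute("CREATE TABLE b (foo TEXT, bar TEXT, baz TEXT, fee INTEGER, fie INTEGER)")
con.executemany("INSERT INTO a VALUES (?, ?, ?)",
                [("bob", "doe", "NYC"), ("john", "doe", "NYC"), ("jane", "doe", "NYC")])
con.executemany("INSERT INTO b VALUES (?, ?, ?, ?, ?)", [
    ("bob", "doe", "NYC", 500, 200), ("bob", "doe", "DEFAULT", 100, 100),
    ("john", "doe", "CHI", 300, 200), ("john", "doe", "DEFAULT", 100, 100),
    ("jane", "doe", "NYC", 500, 100),
])

# Join on an exact baz match OR the DEFAULT row, then drop the DEFAULT
# row whenever an exact match also exists for that (foo, bar).
rows = con.execute("""
    SELECT a.foo, b.baz, b.fee, b.fie
    FROM a
    JOIN b ON a.foo = b.foo AND a.bar = b.bar
          AND (a.baz = b.baz OR b.baz = 'DEFAULT')
    WHERE NOT EXISTS (SELECT 1 FROM b b1
                      WHERE b1.foo = a.foo AND b1.bar = a.bar
                        AND b.baz = 'DEFAULT' AND b1.baz = a.baz)
    ORDER BY a.foo
""").fetchall()

print(rows)
# [('bob', 'NYC', 500, 200), ('jane', 'NYC', 500, 100), ('john', 'DEFAULT', 100, 100)]
```

Bob's DEFAULT row is suppressed because an exact NYC match exists; john falls back to DEFAULT because his only specific row is CHI.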
Your query is quite fine, as it already contains the rows you want. Now you must remove the unwanted rows. You do this by ranking your matches and only keeping the better match. This can be done with `ROW_NUMBER`, giving the better record row number 1. ``` select foo, bar, abaz, bbaz, fee, fie from ( select a.foo, a.bar, a.baz as abaz, b.baz as bbaz, b.fee, b.fie, row_number() over (partition by a.foo, a.bar order by case when b.baz = 'DEFAULT' then 2 else 1 end) as rn from a join b on b.foo = a.foo and b.bar = a.bar and b.baz in (a.baz, 'DEFAULT') ) ranked where rn = 1; ```
Join with default values but not wanting a cross product
[ "sql", "postgresql" ]
I have a data set like this (see below) and I am trying to extract numbers of the form {variable_number_of_digits}{hyphen}{only_one_digit}: ``` with mcte as ( select 'ILLD/ELKJS/00000000/ELKJS/FHSH' as addr from dual union all select 'ILLD/EFECTE/0116988-7-002/ADFA/ADFG' as addr from dual union all select 'IIODK/1573230-0/2216755-7/' as addr from dual union all select 'IIODK/1573230-0/2216755-700/WRITE' as addr from dual ) select addr, REGEXP_SUBSTR(addr,'(\/)([0-9-]+)',1,1,NULL,2) AS num1, REGEXP_SUBSTR(addr,'(\/)([^\/]+\/)([0-9\-]+)',1,1,NULL,3) num2 from mcte ; ``` I'm not getting the correct result set, which should be the following: ``` +-------------------------------------+-----------+-----------+ | ADDR | NUM1 | NUM2 | +-------------------------------------+-----------+-----------+ | ILLD/ELKJS/00000000/ELKJS/FHSH | NULL | NULL | | ILLD/EFECTE/0116988-7-002/ADFA/ADFG | NULL | NULL | | IIODK/1573230-0/2216755-7/ | 1573230-0 | 2216755-7 | | IIODK/1573230-0/2216755-700/WRITE | 1573230-0 | NULL | +-------------------------------------+-----------+-----------+ ``` How can I achieve this?
If you want to get the results from the second and third `/` delimited groups then: ``` with mcte ( addr ) as ( select 'ILLD/ELKJS/00000000/ELKJS/FHSH' from dual union all select 'ILLD/EFECTE/0116988-7-002/ADFA/ADFG' from dual union all select 'IIODK/1573230-0/2216755-7/' from dual union all select 'IIODK/1573230-0/2216755-700/WRITE' from dual union all select 'IIODK/TEST/1573230-0/2216755-700/WRITE' from dual ) select addr, REGEXP_SUBSTR(addr,'^[^/]*/(\d+-\d)/',1,1,NULL,1) AS num1, REGEXP_SUBSTR(addr,'^[^/]*/[^/]*/(\d+-\d)/',1,1,NULL,1) num2 from mcte; ``` **Output**: ``` ADDR NUM1 NUM2 -------------------------------------- ------------------- ------------------- ILLD/ELKJS/00000000/ELKJS/FHSH ILLD/EFECTE/0116988-7-002/ADFA/ADFG IIODK/1573230-0/2216755-7/ 1573230-0 2216755-7 IIODK/1573230-0/2216755-700/WRITE 1573230-0 IIODK/TEST/1573230-0/2216755-700/WRITE 1573230-0 ``` **Update**: If you just want the first and second pattern that match and do not care where they are in the string then: ``` with mcte ( addr ) as ( select 'ILLD/ELKJS/00000000/ELKJS/FHSH' from dual union all select 'ILLD/EFECTE/0116988-7-002/ADFA/ADFG' from dual union all select 'IIODK/1573230-0/2216755-7/' from dual union all select 'IIODK/1573230-0/2216755-700/WRITE' from dual union all select 'IIODK/TEST/1573230-0/2216755-700/WRITE' from dual union all select '1234567-8' from dual union all select '1234567-8/9876543-2' from dual union all select '1234567-8/TEST/9876543-2' from dual ) select addr, REGEXP_SUBSTR(addr,'(^|/)(\d+-\d)(/|$)',1,1,NULL,2) AS num1, REGEXP_SUBSTR(addr,'(^|/)\d+-\d(/.+?)?/(\d+-\d)(/|$)',1,1,NULL,3) num2 from mcte; ``` **Outputs**: ``` ADDR NUM1 NUM2 -------------------------------------- ------------------- ------------------ ILLD/ELKJS/00000000/ELKJS/FHSH ILLD/EFECTE/0116988-7-002/ADFA/ADFG IIODK/1573230-0/2216755-7/ 1573230-0 2216755-7 IIODK/1573230-0/2216755-700/WRITE 1573230-0 IIODK/TEST/1573230-0/2216755-700/WRITE 1573230-0 1234567-8 1234567-8 1234567-8/9876543-2 
1234567-8 9876543-2 1234567-8/TEST/9876543-2 1234567-8 9876543-2 ```
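Outside the database, the same {digits}-{one digit} rule can be sanity-checked with an ordinary regex engine; here a rough Python equivalent (Python's lookahead syntax differs from Oracle's capture-group style in `REGEXP_SUBSTR`, so this is only a cross-check of the rule, not a drop-in translation):

```python
import re

# A run of digits, a hyphen, exactly one digit -- preceded by '/' or the
# string start, and followed by '/' or the string end.
pattern = re.compile(r"(?:^|/)(\d+-\d)(?=/|$)")

samples = [
    "ILLD/ELKJS/00000000/ELKJS/FHSH",
    "ILLD/EFECTE/0116988-7-002/ADFA/ADFG",
    "IIODK/1573230-0/2216755-7/",
    "IIODK/1573230-0/2216755-700/WRITE",
]
matches = [pattern.findall(s) for s in samples]
print(matches)
# [[], [], ['1573230-0', '2216755-7'], ['1573230-0']]
```

Note that `2216755-700` correctly fails to match because the single digit after the hyphen is not followed by `/` or the end of the string.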
Combining the [**delimiter split query**](https://stackoverflow.com/questions/31601828/how-to-select-values-within-a-column/31606666#31606666) with `REGEXP_LIKE` and *pivot*-ing the result, you get this query working for up to 6 numbers. You will need to update the `cols` subquery and the `pivot` list to be able to process more numbers per record. (Unfortunately this can't be done generically in static SQL.) ``` with mcte as ( select 1 id, 'ILLD/ELKJS/00000000/ELKJS/FHSH' as addr from dual union all select 2 id, 'ILLD/EFECTE/0116988-7-002/ADFA/ADFG' as addr from dual union all select 3 id, 'IIODK/1573230-0/2216755-7/' as addr from dual union all select 4 id, '1-1/1573230-0/2216755-700/676-7' as addr from dual ), cols as (select rownum colnum from dual connect by level <= 6 /* (max) number of columns */), mcte2 as (select id, cols.colnum, (regexp_substr(addr,'[^/]+', 1, cols.colnum)) addr from mcte, cols where regexp_substr(addr, '[^/]+', 1, cols.colnum) is not null), mcte3 as ( select ID, ROW_NUMBER() over (partition by ID order by COLNUM) as col_no, ADDR from mcte2 where REGEXP_like(addr, '^[0-9]+-[0-9]$') ) select * from mcte3 PIVOT (max(addr) for (col_no) in (1 as "NUM1", 2 as "NUM2", 3 as "NUM3", 4 as "NUM4", 5 as "NUM5", 6 as "NUM6")) order by id; ``` this gives the result ``` ID NUM1 NUM2 NUM3 NUM4 NUM5 NUM6 ---------- ---------- ---------- ---------- ---------- ---------- ---------- 3 1573230-0 2216755-7 4 1-1 1573230-0 676-7 ```
How to parse data using REGEXP_SUBSTR?
[ "sql", "regex", "oracle", "oracle11g", "regex-greedy" ]
I have a table that accommodates data that is logically groupable by multiple properties (foreign key for example). Data is sequential over continuous time interval; i.e. it is a time series data. What I am trying to achieve is to select only latest values for each group of groups. Here is example data: ``` +-----------------------------------------+ | code | value | date | relation_id | +-----------------------------------------+ | A | 1 | 01.01.2016 | 1 | | A | 2 | 02.01.2016 | 1 | | A | 3 | 03.01.2016 | 1 | | A | 4 | 01.01.2016 | 2 | | A | 5 | 02.01.2016 | 2 | | A | 6 | 03.01.2016 | 2 | | B | 1 | 01.01.2016 | 1 | | B | 2 | 02.01.2016 | 1 | | B | 3 | 03.01.2016 | 1 | | B | 4 | 01.01.2016 | 2 | | B | 5 | 02.01.2016 | 2 | | B | 6 | 03.01.2016 | 2 | +-----------------------------------------+ ``` And here is example of desired output: ``` +-----------------------------------------+ | code | value | date | relation_id | +-----------------------------------------+ | A | 3 | 03.01.2016 | 1 | | A | 6 | 03.01.2016 | 2 | | B | 3 | 03.01.2016 | 1 | | B | 6 | 03.01.2016 | 2 | +-----------------------------------------+ ``` To put this in perspective β€” for every related object I want to select each code with latest date. Here is a select I came with. 
I've used the `ROW_NUMBER() OVER (PARTITION BY ...)` approach: ``` SELECT indicators.code, indicators.dimension, indicators.unit, x.value, x.date, x.ticker, x.name FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY indicator_id ORDER BY date DESC) AS r, t.indicator_id, t.value, t.date, t.company_id, companies.sic_id, companies.ticker, companies.name FROM fundamentals t INNER JOIN companies on companies.id = t.company_id WHERE companies.sic_id = 89 ) x INNER JOIN indicators on indicators.id = x.indicator_id WHERE x.r <= (SELECT count(*) FROM companies where sic_id = 89) ``` It works, but the problem is that it is painfully slow; when working with about 5% of the production data, which equals roughly 3 million `fundamentals` records, this select takes about 10 seconds to finish. My guess is that this happens because the subselect retrieves a huge number of records first. Is there any way to speed this query up, or am I digging in the wrong direction trying to do it the way I do?
Postgres offers the convenient `distinct on` for this purpose: ``` select distinct on (relation_id, code) t.* from t order by relation_id, code, date desc; ```
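`distinct on` is Postgres-specific. On engines without it, a correlated subquery that keeps only the row matching each group's maximum date gives the same result. A runnable sketch using Python's `sqlite3`, with the question's sample data but ISO-format dates so string comparison orders correctly (SQLite here is purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (code TEXT, value INT, date TEXT, relation_id INT);
INSERT INTO t VALUES
 ('A',1,'2016-01-01',1),('A',2,'2016-01-02',1),('A',3,'2016-01-03',1),
 ('A',4,'2016-01-01',2),('A',5,'2016-01-02',2),('A',6,'2016-01-03',2),
 ('B',1,'2016-01-01',1),('B',2,'2016-01-02',1),('B',3,'2016-01-03',1),
 ('B',4,'2016-01-01',2),('B',5,'2016-01-02',2),('B',6,'2016-01-03',2);
""")
# Keep only the row with the latest date for each (code, relation_id) group.
latest = conn.execute("""
    SELECT code, value, date, relation_id
    FROM t
    WHERE date = (SELECT MAX(date) FROM t t2
                  WHERE t2.code = t.code AND t2.relation_id = t.relation_id)
    ORDER BY code, relation_id
""").fetchall()
```

This assumes at most one row per group carries the maximum date; with ties, all tied rows come back, whereas `distinct on` would pick one.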
Other option: ``` SELECT DISTINCT Code, Relation_ID, FIRST_VALUE(Value) OVER (PARTITION BY Code, Relation_ID ORDER BY Date DESC) Value, FIRST_VALUE(Date) OVER (PARTITION BY Code, Relation_ID ORDER BY Date DESC) Date FROM mytable ``` This will return the top value for whatever you partition by, according to whatever you order by.
Select latest values for group of related records
[ "sql", "postgresql", "greatest-n-per-group" ]
I have a query that joins my customers and transactions tables; let's alias this joined query as `jq`. I want to create a ranking of each customer's purchases (transactions) by order timestamp (`order_ts`). So I did: ``` SELECT customer_id, order_id, order_ts, RANK() OVER (PARTITION BY customer_id ORDER BY order_ts ASC), amount FROM jq GROUP BY customer_id ORDER BY customer_id; ``` Now, I want the 5th purchase onwards to be aggregated into a single row instead of separate rows for the 5th, 6th, 7th, and so on. The summed row should retain the 5th purchase's `order_id` and `order_ts`. How do I do this in MS SQL Server and Postgres?
If I understood you correctly, you can achieve this with a `CASE EXPRESSION`: ``` SELECT customer_id, MIN(order_id), MIN(order_ts), CASE WHEN rnk < 5 THEN rnk ELSE 5 END AS rnk, SUM(amount) FROM ( SELECT customer_id, order_id, order_ts, RANK() OVER (PARTITION BY customer_id ORDER BY order_ts ASC) AS rnk, amount FROM jq ) t GROUP BY customer_id, CASE WHEN rnk < 5 THEN rnk ELSE 5 END ORDER BY customer_id ``` This will group every rnk >= 5 as 5, so as one group. I selected MIN(order_id) and MIN(order_ts) so the aggregated row takes them from the 5th purchase.
> **Though this produces the correct result, sagi's [answer](https://stackoverflow.com/a/36303500/2203084) is more efficient.** --- You can use a `SELECT` on the result and filter for `RANK < 5`. Then do a `UNION ALL` with the aggregated values for `RANK >= 5`: ``` WITH Cte AS( SELECT customer_id, order_id, order_ts, RANK() OVER (PARTITION BY customer_id ORDER BY order_ts ASC) AS rnk, amount FROM jq ) SELECT customer_id, order_id, order_ts, rnk, amount FROM Cte WHERE rnk < 5 UNION ALL SELECT customer_id, MIN(order_id), MIN(order_ts), MIN(rnk), SUM(amount) FROM Cte WHERE rnk >= 5 GROUP BY customer_id ORDER BY customer_id; ``` *\*This is for SQL Server*
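The "bucket everything from rank 5 up into one group" idea can be checked on any engine. A minimal sketch in Python with `sqlite3`, using a correlated-subquery rank instead of `RANK()` so it also runs on SQLite builds without window functions (sample data assumed; ranks behave like `RANK()` only when `order_ts` is unique per customer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jq (customer_id INT, order_id INT, order_ts TEXT, amount INT);
INSERT INTO jq VALUES
 (1,101,'2016-01-01',10),(1,102,'2016-01-02',20),(1,103,'2016-01-03',30),
 (1,104,'2016-01-04',40),(1,105,'2016-01-05',50),(1,106,'2016-01-06',60),
 (1,107,'2016-01-07',70);
""")
# rnk = 1 + number of earlier orders by the same customer; then fold rnk >= 5 into bucket 5.
rows = conn.execute("""
    SELECT customer_id, MIN(order_id), MIN(order_ts),
           CASE WHEN rnk < 5 THEN rnk ELSE 5 END AS rnk, SUM(amount)
    FROM (SELECT customer_id, order_id, order_ts, amount,
                 (SELECT COUNT(*) FROM jq j2
                  WHERE j2.customer_id = jq.customer_id
                    AND j2.order_ts < jq.order_ts) + 1 AS rnk
          FROM jq) t
    GROUP BY customer_id, CASE WHEN rnk < 5 THEN rnk ELSE 5 END
    ORDER BY customer_id, rnk
""").fetchall()
```

The last row keeps the 5th order's id and timestamp and sums amounts 50+60+70.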
Aggregating rows with rank or index >= N
[ "sql", "sql-server", "postgresql" ]
I have two tables. One consists of customers and another consists of products they have purchased: Table customer ``` CustID, Name 1, Tom 2, Lisa 3, Fred ``` Table product ``` CustID, Item 1, Toaster 1, Breadbox 2, Toaster 3, Toaster ``` I would like to get all the customers that bought a Toaster, unless they also bought a breadbox. So I have tried the following: ``` SELECT * FROM Customer JOIN Product ON Customer.CustID=Product.CustID WHERE Product in (SELECT Item FROM Product WHERE (Item = 'Toaster' AND Item != 'Breadbox')); ``` And: ``` SELECT * FROM Customer INNER Join Product ON Customer.CustID=PRODUCT.CustID WHERE Product.Item = 'Toaster' AND Product.Item NOT IN ('Breadbox'); ``` But both give the same result, which includes Tom, who already owns a breadbox. How can I make sure that only the customers that own a toaster, but do not own a breadbox, get listed?
``` SELECT distinct * FROM Customer LEFT JOIN Product ON Customer.CustID=Product.CustID WHERE Item = 'Toaster' AND Customer.CustID NOT IN ( Select CustID FROM Product Where Item = 'Breadbox' ) ```
This is my first post so forgive any missteps. There is a "many to one" relationship between the Customer and Product tables. To make the logical restrictions you want to apply, you would either need to aggregate the Product table or join the Product table twice. In effect, you are seeking to collapse the many to one relationship to a "one to one" relationship. Here are some examples of where the Product table is joined twice. ``` SELECT DISTINCT a.Name FROM Customer a JOIN Product b ON a.CustID = b.CustID LEFT JOIN Product c on a.CustID = c.CustID AND c.Item = 'Breadbox' WHERE b.Item = 'Toaster' AND c.CustID IS NULL ``` or (slightly less efficient depending on indexes) ``` SELECT DISTINCT a.Name FROM Customer a JOIN Product b ON a.CustID = b.CustID WHERE b.Item = 'Toaster' AND NOT EXISTS (SELECT 1 FROM Product c where a.CustID = c.CustID AND c.Item = 'Breadbox') ``` And, here is an example of where the Product table is joined once - possibly more complicated than you require. ``` SELECT a.Name FROM Customer a JOIN ( SELECT CustID, SUM(case when Item = 'Toaster' then 1 else 0 end) sum_Toaster, SUM(case when Item = 'Breadbox' then 1 else 0 end) sum_Breadbox FROM Product WHERE Item in ('Toaster','Breadbox') GROUP BY CustID HAVING SUM(case when Item = 'Toaster' then 1 else 0 end) > 0 AND SUM(case when Item = 'Breadbox' then 1 else 0 end) = 0 ) b ON a.CustID = b.CustID ```
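The `NOT EXISTS` variant is portable and easy to verify against the question's sample data. A runnable sketch in Python with `sqlite3` (illustration only; the table and column names come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (CustID INT, Name TEXT);
CREATE TABLE Product (CustID INT, Item TEXT);
INSERT INTO Customer VALUES (1,'Tom'),(2,'Lisa'),(3,'Fred');
INSERT INTO Product VALUES (1,'Toaster'),(1,'Breadbox'),(2,'Toaster'),(3,'Toaster');
""")
# Toaster owners who have no Breadbox row at all.
names = [r[0] for r in conn.execute("""
    SELECT DISTINCT c.Name
    FROM Customer c
    JOIN Product p ON p.CustID = c.CustID
    WHERE p.Item = 'Toaster'
      AND NOT EXISTS (SELECT 1 FROM Product b
                      WHERE b.CustID = c.CustID AND b.Item = 'Breadbox')
    ORDER BY c.Name
""")]
```

Tom is excluded because his Breadbox row satisfies the `NOT EXISTS` subquery.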
Select all customers except if they have another product - SQL
[ "mysql", "sql" ]
I'm doing my SQL exercises but I got stuck in one. I need to retrieve the employees with the two highest salaries, but I can't use any type of subquery or derived table. I do it with a subquery like this: ``` SELECT * FROM (SELECT * FROM emp ORDER BY sal DESC) new_emp WHERE ROWNUM < 3; ``` I also know that this can be achieved using the `WITH` clause, but I'm wondering if there is any alternative to this. PS: I'm using Oracle 11.
This is actually a pathetic method, in my opinion, but you can use a `join`: ``` select e.col1, e.col2, . . . from emp e left join emp e2 on e2.salary >= e.salary group by e.col1, e.col2, . . . having count(distinct e2.salary) <= 2; ``` Note: this is really equivalent to a `dense_rank()`, so if there are ties, you'll get more than two rows. It is easy enough to fix this (assuming you have a unique identifier for each row), but the fix complicates the logic and hides the basic idea.
If you are on Oracle version 12.1 or above you can use a row limiting clause. In your case you can drop the subquery entirely and apply the clause directly to fetch the two highest salaries: ``` SELECT * FROM emp ORDER BY sal DESC FETCH FIRST 2 ROWS ONLY; ``` Source: <https://oracle-base.com/articles/12c/row-limiting-clause-for-top-n-queries-12cr1#top-n>
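The self-join/`HAVING` technique works on any engine, so it can be checked outside Oracle. A sketch in Python with `sqlite3` (the `emp` sample rows are assumed; note the dense-rank semantics on the tied salaries, as the answer warns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INT, ename TEXT, sal INT);
INSERT INTO emp VALUES (1,'KING',100),(2,'FORD',90),(3,'SCOTT',90),(4,'ADAMS',80);
""")
# A row survives if at most 2 distinct salaries are >= its own salary.
top2 = conn.execute("""
    SELECT e.empno, e.ename, e.sal
    FROM emp e
    LEFT JOIN emp e2 ON e2.sal >= e.sal
    GROUP BY e.empno, e.ename, e.sal
    HAVING COUNT(DISTINCT e2.sal) <= 2
    ORDER BY e.sal DESC, e.empno
""").fetchall()
```

Both 90-salary rows come back, so three rows in total: exactly the `dense_rank()` behaviour described above.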
ORACLE SQL retrieve n rows without subqueries or derived tables
[ "sql", "oracle", "oracle11g", "window-functions", "row-number" ]
I have created a foreign key without specifying a name, so SQL Server has created it with an auto-generated name. Now I want to drop the column which has this foreign key. The problem is that I don't know the name of this foreign key. Is there any way to drop all the foreign keys for a particular column in a particular table? So far I've found this script, which drops all default constraints for a column: ``` DECLARE @tableName VARCHAR(MAX) DECLARE @ConstraintName nvarchar(200) DECLARE @columnName VARCHAR(MAX) SET @tableName = 'tablename' SET @columnName = 'columnname' SELECT @ConstraintName = Name FROM SYS.DEFAULT_CONSTRAINTS WHERE PARENT_OBJECT_ID = OBJECT_ID(@tableName) AND PARENT_COLUMN_ID = ( SELECT column_id FROM sys.columns WHERE NAME = @columnName AND object_id = OBJECT_ID(@tableName)) IF @ConstraintName IS NOT NULL BEGIN EXEC('ALTER TABLE '+@tableName+' DROP CONSTRAINT ' + @ConstraintName) END ALTER TABLE [tablename] DROP COLUMN columnname GO ``` But it doesn't help with foreign key constraints.
If you want more information about foreign keys, specifically for a particular schema and table, then you can use: ``` SELECT t.Name as TableName, c.name as ColumnName, fk.name as FK_NAME FROM sys.foreign_keys as fk inner join sys.tables as t on fk.parent_object_id = t.object_id inner join sys.columns as c on c.object_id = t.object_id inner join sys.schemas as sc on t.schema_id = sc.schema_id WHERE sc.name = 'Schema' and t.name = 'Table' and c.name = 'Column' ``` If you are interested only in a certain column, you can use **Ross Presser**'s answer. Also, if you want to drop all FK constraints on a column, you can execute this: ``` Declare @sql nvarchar(4000) SET @sql = N''; SELECT @sql = @sql + ' ALTER TABLE [' + sc.NAME + '].[' + OBJECT_NAME(fk.parent_object_id) + ']' + ' DROP CONSTRAINT ' + '[' + fk.NAME + '] ' FROM sys.foreign_keys as fk inner join sys.tables as t on fk.parent_object_id = t.object_id inner join sys.columns as c on c.object_id = t.object_id inner join sys.schemas as sc on t.schema_id = sc.schema_id WHERE sc.name = 'schemaName' and c.name = 'columnName' -- you can include and fk name ORDER BY fk.NAME PRINT @sql; --EXEC sys.sp_executesql @sql; ```
You can get the full constraint list, along with the names assigned, for a particular table with the help of the query below: ``` SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE TABLE_NAME='YourTableName'; ``` Once you find the constraint you want to delete, you can drop it with: ``` ALTER TABLE Orders DROP CONSTRAINT constraint_name; ```
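The answers above query SQL Server's catalog views; other engines expose the same metadata differently. For illustration only, here is the analogous lookup in SQLite via Python, where `PRAGMA foreign_key_list` reports each FK's referenced table and columns (SQLite cannot drop an unnamed FK without rebuilding the table, so this only shows the discovery step):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (id INTEGER PRIMARY KEY,
                    parent_id INTEGER REFERENCES parent(id));
""")
# Each row: (id, seq, referenced_table, from_column, to_column, on_update, on_delete, match)
fks = conn.execute("PRAGMA foreign_key_list(child)").fetchall()
fk_on_column = [fk for fk in fks if fk[3] == 'parent_id']
```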
How to drop foreign keys of a particular column
[ "sql", "sql-server", "t-sql" ]
I'm reading documentation about SQL injections, and there is a strange statement that I don't understand: ``` concat(col1,col2)x ``` What is the use of the `x`?
> *What is the use of the x ?* It is column [alias](https://en.wikipedia.org/wiki/Alias_%28SQL%29): ``` CREATE TABLE tab(col1 VARCHAR(100), col2 VARCHAR(100)); INSERT INTO tab(col1, col2) VALUES ('a','b'); ``` Query: ``` SELECT concat(col1,col2)x FROM tab ``` same as ``` SELECT concat(col1,col2) AS x FROM tab; ``` `LiveDemo` Output: ``` ╔════╗ β•‘ x β•‘ ╠════╣ β•‘ ab β•‘ β•šβ•β•β•β•β• ```
`concat` returns a string that is the result of concatenating two or more string values. The confusion is about the `x`: it is an alias for the concatenated column, so you can refer to the result by that name. You can also place `AS` before the `x`. **Query:** ``` SELECT CONCAT ( 'Happy ', 'Birthday ', 11, '/', '25' ) AS Result; ``` **Output:** > Happy Birthday 11/25
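The alias really does become the result column's name, which is easy to observe from a client library. A sketch in Python with `sqlite3` (SQLite concatenates with `||`; MySQL would use `CONCAT(col1, col2)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO tab VALUES ('a','b')")
# 'x' with no AS keyword is still parsed as a column alias.
cur = conn.execute("SELECT col1 || col2 x FROM tab")
column_name = cur.description[0][0]   # the alias given after the expression
value = cur.fetchone()[0]
```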
In SQL what does concat(col1,col2)x does?
[ "sql" ]
I'm trying to determine how many people have purchased a pair of goods out of all distinct pairs of products that can exist. E.g. I have three products, A, B, C and I would like to determine how many % of customers bought A and B, B and C, and A and C out of the number of customers who own either product within each pair by country. My table looks like the below. ``` Customer | Country | Product 1 | US | A 1 | US | B 2 | CA | A 2 | CA | C 3 | US | A 3 | US | C 4 | US | B 5 | US | A ``` Note that a customer can only belong to one country. My desired output is: ``` Country | Pair | % US | A_B | 25% US | B_C | 0% US | A_C | 33% CA | A_B | 0% CA | B_C | 0% CA | A_C | 100% ``` The % is essentially the ratio ``` (# of unique customers who bought Product1 and Product2)/ (# of unique customers who bought Product1 or Product2) ``` by country. So for instance, in the US for `A_B` we have 4 customers who bought `A` or `B` but only 1 of these bought both `A` and `B` so the ratio is `1/4`. Is there a nice solution to this which would scale if I had a large number, arbitrary number of pairs?
Iterative query development... If you don't have a `product` table, and only have the `customer_country_product` table, you can use an inline view to create a distinct list of products for each country. To get products by country... ``` SELECT ccp.product_id , ccp.country_id FROM customer_country_product ccp GROUP BY ccp.product_id , ccp.country_id ``` We can use that query as a rowsource, by making it an inline view. Wrap that query in parens, assign an alias, and reference it in the FROM clause of another query. In order to get "pairs" of products, we can join the inline view to itself (avoiding returning pairs of the same product (`A_A`), and avoiding returning "duplicate" pairs (return only one of `A_C` and `C_A`)). ``` SELECT a.country_id , a.product_id AS a_product_id , b.product_id AS b_product_id FROM ( SELECT ccpa.product_id , ccpa.country_id FROM customer_country_product ccpa GROUP BY ccpa.product_id , ccpa.country_id ) a JOIN ( SELECT ccpb.product_id , ccpb.country_id FROM customer_country_product ccpb GROUP BY ccpb.product_id , ccpb.country_id ) b ON b.country_id = a.country_id AND b.product_id > a.product_id ORDER BY a.country_id , a.product_id , b.product_id ``` That should get you all the product "pairs" for each country. NOTE: this will omit products that no customer has. If we want all possible product pairs, for each country, we'd need to write that a little differently... 
``` SELECT c.country_id , a.product_id AS a_product_id , b.product_id AS b_product_id FROM ( SELECT ccpa.product_id FROM customer_country_product ccpa GROUP BY ccpa.product_id ) a JOIN ( SELECT ccpb.product_id FROM customer_country_product ccpb GROUP BY ccpb.product_id ) b ON b.product_id > a.product_id CROSS JOIN ( SELECT ccpc.country_id FROM customer_country_product ccpc GROUP BY ccpc.country_id ) c ORDER BY c.country_id , a.product_id , b.product_id ``` If you have `product` and `country` tables, you could replace the inline views in the queries above with references to those tables. To get the "counts" of customer, we could either use correlated subqueries in the SELECT list, or we can perform join operations and aggregates in the SELECT list. (With the joins, if we're not careful, there's a potential to generate and count "duplicates".) To get a count of the distinct customers in a particular country that has a particular product ``` SELECT COUNT(DISTINCT ccp.customer_id) AS cnt_cust FROM customer_country_product ccp WHERE ccp.country_id = ? AND ccp.product_id = ? ``` To get a count of distinct customers from a particular country that has at least one of two particular products ``` SELECT COUNT(DISTINCT ccp.customer_id) AS cnt_cust_have_either FROM customer_country_product ccp WHERE ccp.country_id = ? AND ccp.product_id IN ( ? , ? ) ``` To get a count of customers in a particular country that have two particular products: ``` SELECT COUNT(DISTINCT ccp1.customer_id) AS cnt_cust_have_both FROM customer_country_product ccp1 JOIN customer_country_product ccp2 ON ccp2.country_id = ccp1.country_id AND ccp2.customer_id = ccp1.customer_id WHERE ccp1.country_id = ? AND ccp1.product_id = ? AND ccp2.product_id = ? ``` Since those queries return a single row containing a single column, we can use those as expressions in the SELECT list of another query. We start with the "product pairs" query, and add to the SELECT list. 
We replace those question mark placeholders with references to columns from the outer query: ``` SELECT c.country_id , a.product_id AS a_product_id , b.product_id AS b_product_id , ( SELECT COUNT(DISTINCT ccp1.customer_id) FROM customer_country_product ccp1 JOIN customer_country_product ccp2 ON ccp2.country_id = ccp1.country_id AND ccp2.customer_id = ccp1.customer_id WHERE ccp1.country_id = c.country_id AND ccp1.product_id = a.product_id AND ccp2.product_id = b.product_id ) AS cnt_cust_have_both , ( SELECT COUNT(DISTINCT ccp.customer_id) FROM customer_country_product ccp WHERE ccp.country_id = c.country_id AND ccp.product_id IN (a.product_id,b.product_id) ) AS cnt_cust_have_either FROM ( SELECT ccpa.product_id FROM customer_country_product ccpa GROUP BY ccpa.product_id ) a JOIN ( SELECT ccpb.product_id FROM customer_country_product ccpb GROUP BY ccpb.product_id ) b ON b.product_id > a.product_id CROSS JOIN ( SELECT ccpc.country_id FROM customer_country_product ccpc GROUP BY ccpc.country_id ) c ORDER BY c.country_id , a.product_id , b.product_id ``` Now, to calculate the "percentage" we just need to do a division operation. With MySQL a "divide by zero" will return NULL. (We wouldn't need to be concerned with that, if our outer query only returned rows where we know a customer from the country has one of the products... i.e. 
the result returned by the first query ``` SELECT c.country_id , a.product_id AS a_product_id , b.product_id AS b_product_id , ( SELECT COUNT(DISTINCT ccp1.customer_id) FROM customer_country_product ccp1 JOIN customer_country_product ccp2 ON ccp2.country_id = ccp1.country_id AND ccp2.customer_id = ccp1.customer_id WHERE ccp1.country_id = c.country_id AND ccp1.product_id = a.product_id AND ccp2.product_id = b.product_id ) / ( SELECT COUNT(DISTINCT ccp.customer_id) FROM customer_country_product ccp WHERE ccp.country_id = c.country_id AND ccp.product_id IN (a.product_id,b.product_id) ) * 100.00 AS percent_cust_have_both FROM ( SELECT ccpa.product_id FROM customer_country_product ccpa GROUP BY ccpa.product_id ) a JOIN ( SELECT ccpb.product_id FROM customer_country_product ccpb GROUP BY ccpb.product_id ) b ON b.product_id > a.product_id CROSS JOIN ( SELECT ccpc.country_id FROM customer_country_product ccpc GROUP BY ccpc.country_id ) c ORDER BY c.country_id , a.product_id , b.product_id ``` As far as "scaling" that up, for any non-trivial table, we are going to need to have suitable indexes available. Especially for the correlated subqueries. Those are going to get executed for *every* row returned by the outer query. That last query has the potential to return NULL, when there is a count of zero in the denominator. We can substitute a zero, by wrapping that while division operation in a conditional test ``` IFNULL( <expr> , 0) * 100.00 AS ``` (Likely there's an error somewhere in those queries, a missing paren, an invalid reference, a wrong qualifier, etc. Those queries are not tested. I strongly recommend you test each one, and not just grabbing that last one.) --- **FOLLOWUP** A table for testing... 
``` CREATE TABLE customer_country_product ( customer_id INT , country_id VARCHAR(2) , product_id VARCHAR(2) ) ; INSERT INTO customer_country_product (customer_id, country_id, product_id) VALUES ('1','US','A') ,('1','US','B') ,('2','CA','A') ,('2','CA','C') ,('3','US','A') ,('3','US','C') ,('4','US','B') ,('5','US','A') ; ``` Final query returns: ``` country_id a_product_id b_product_id percent_cust_have_both ---------- ------------ ------------ ---------------------- CA A B 0.000000 CA A C 100.000000 CA B C 0.000000 US A B 25.000000 US A C 33.333333 US B C 0.000000 ``` It would be a trivial change to concatenate `a.product_id` and `b.product_id` into a single column. The second and third columns in the SELECT list could be replaced with something like `CONCAT(a.product_id,'_',b.product_id) AS a_b`.
You need to generate all pairs of products along with the country. Then you need to calculate the number of matching customers that purchased either and the number that purchased both. Let me assume you have a products table and a countries table. Then, I think that subqueries might be the simplest solution: ``` select c.country, p1.product as product1, p2.product as product2, (select count(distinct cp1.customer) from customerproducts cp1 join customerproducts cp2 on cp2.customer = cp1.customer where cp1.country = c.country and cp1.product = p1.product and cp2.product = p2.product ) as numWithBoth, (select count(distinct cp.customer) from customerproducts cp where cp.country = c.country and cp.product in (p1.product, p2.product) ) as numWithEither from countries c cross join products p1 cross join products p2 where p1.product < p2.product ; ``` The final answer is the ratio of the two values.
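The both/either counts can be checked directly against the question's sample data. A small sketch in Python with `sqlite3` (the single-table layout and `pair_pct` helper are mine, for illustration): for one country and one pair, "both" is the number of customers matching two distinct products, "either" the number matching at least one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cp (customer INT, country TEXT, product TEXT);
INSERT INTO cp VALUES (1,'US','A'),(1,'US','B'),(2,'CA','A'),(2,'CA','C'),
                      (3,'US','A'),(3,'US','C'),(4,'US','B'),(5,'US','A');
""")

def pair_pct(country, p1, p2):
    # Customers in the country owning BOTH products of the pair.
    both = conn.execute("""
        SELECT COUNT(*) FROM (SELECT customer FROM cp
                              WHERE country = ? AND product IN (?, ?)
                              GROUP BY customer
                              HAVING COUNT(DISTINCT product) = 2)""",
                        (country, p1, p2)).fetchone()[0]
    # Customers in the country owning EITHER product.
    either = conn.execute("""
        SELECT COUNT(DISTINCT customer) FROM cp
        WHERE country = ? AND product IN (?, ?)""",
        (country, p1, p2)).fetchone()[0]
    return 100.0 * both / either if either else 0.0
```

For the sample data this reproduces the desired output: US `A_B` gives 1 of 4 customers, i.e. 25%.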
Percentage of Cross product purchase
[ "sql", "sql-server" ]
I have an `INSEE` table in my SQL Server 2008 R2. I have some duplicate values (15 000 lines out of 54 000 have to be deleted). This is my current script ``` declare @datainsee table ( di_nom varchar(100), di_departement varchar(5), di_type varchar ) declare @datainseeidentifiant table ( dii_nom varchar(100), dii_departement varchar(5), dii_type varchar, dii_identifiant uniqueidentifier ) insert into @datainsee select iee_nom,iee_departement,iee_type from insee group by iee_nom,iee_departement,iee_type having(count(*)>1) insert into @datainseeidentifiant select di_nom,di_code,di_type, (select top 1 iee_guid from insee where iee_departement=di_departement and iee_nom=di_nom and iee_type= di_type) from @datainsee delete from insee where iee_guid not in (select dii_identifiant from @datainseeidentifiant) and iee_nom in (select dii_nom from @datainseeidentifiant) ``` Rows are considered as duplicate if they have the same `iee_nom`, `iee_departement` and `iee_type`. I tried to look at solutions like [T-SQL: Deleting all duplicate rows but keeping one](https://stackoverflow.com/questions/6025367/t-sql-deleting-all-duplicate-rows-but-keeping-one) but it doesn't work in my case because `GUID` is not number.. Any ideas to optimise my query? One minute is too long for my application.
With the help of a CTE and `ROW_NUMBER`, this should be enough: ``` with x as ( select ROW_NUMBER() OVER ( PARTITION BY iee_nom, iee_departement, iee_type ORDER BY <pick your priority column here> ) rID, * from insee ) delete from x where rID > 1 ```
I suggest you use the traditional way, with a `Temp` table: ``` SELECT DISTINCT * INTO #TmpTable FROM insee DELETE FROM insee --OR Use Truncate to delete records INSERT INTO insee SELECT * FROM #TmpTable DROP TABLE #TmpTable ``` Note that `SELECT DISTINCT *` only removes rows that are identical in every column; since each row here has a unique `iee_guid`, you would need to select only the columns that define a duplicate for this to help.
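The "keep one row per duplicate group" delete can be sketched on any engine with a row identifier. A runnable illustration in Python with `sqlite3`, using SQLite's implicit `rowid` (SQLite-specific; in SQL Server the CTE + `ROW_NUMBER` form above is the usual single-pass equivalent, and the sample rows here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE insee (iee_nom TEXT, iee_departement TEXT, iee_type TEXT, iee_guid TEXT);
INSERT INTO insee VALUES
 ('Paris','75','C','g1'),('Paris','75','C','g2'),('Paris','75','C','g3'),
 ('Lyon','69','C','g4');
""")
# Keep the first physical row of each duplicate group, delete the rest.
conn.execute("""
    DELETE FROM insee
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM insee
                        GROUP BY iee_nom, iee_departement, iee_type)
""")
remaining = conn.execute(
    "SELECT iee_nom, iee_guid FROM insee ORDER BY iee_nom").fetchall()
```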
Optimise SQL request for duplicate Value
[ "sql", "sql-server", "sql-server-2008" ]
I have a table called `widgets` that has a string column named `version`. Data for version is in period-separated string format, similar to semantic versioning. e.g. "1.2.4" When I do the traditional `ORDER BY widgets.version` then I get the following order ``` +--------------+ | Widgets | +----+---------+ | id | version | |----|---------| | 1 | 1.3.2 | <- This is fine | 3 | 10.1.2 | <- This should be last, since 10 > 4 | 2 | 4.5.7 | <- This should be second, since 4 < 10 +----+---------+ ``` How can I update my query so that the order returned is by version pt 1, version pt 2, then version pt 3?
The easiest way is to convert the version string to an integer array and then sort on that array: ``` select id, version from widgets order by string_to_array(version, '.')::int[] ``` Note that this will fail if the version contains non-numeric values.
Have you tried this? ``` SELECT id, split_part(version, '.', 1)::int as major, split_part(version, '.', 2)::int as minor, split_part(version, '.', 3)::int as patch FROM widgets ORDER BY major, minor, patch ```
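Both answers do the same thing: turn each dotted version into a sequence of integers so `"10"` sorts after `"4"`. The same idea in plain Python, which is handy for checking the expected order before writing the SQL:

```python
versions = ["1.3.2", "10.1.2", "4.5.7"]

# Split each version into a tuple of ints; tuples compare element by element,
# which mirrors string_to_array(version, '.')::int[] in the Postgres answer.
ordered = sorted(versions, key=lambda v: tuple(int(p) for p in v.split(".")))
```

As with the SQL versions, this raises on non-numeric segments such as `"1.2.beta"`.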
Is it possible to order by substrings in postgreSQL?
[ "sql", "postgresql", "version-sort" ]
I'm trying to get the correct `ORDER BY` for my MySQL query, but I can't get it right. I have two columns: 1. `breaking_news` = values are `NULL` or `1` 2. `news_published_dt` = values are `DATETIME` How can I sort so that the rows where `breaking_news` is `NOT NULL` come first, sorted by `DATETIME`, and the rest come afterwards, also ordered by `DATETIME`? I've tried this, but it gave me no output ``` ORDER BY CASE WHEN n.breaking_news IS NOT NULL THEN 1 ELSE 2 END, n.news_published_dt DESC ```
Looks like you are looking for this: ``` SELECT * FROM tableName ORDER BY breaking_news DESC, news_published_dt ASC ``` In MySQL, sorting `breaking_news` descending puts the `1` values first and the `NULL`s last, since `NULL`s sort last in descending order.
``` SELECT * FROM table_name ORDER BY news_published_dt DESC ```
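A portable way to spell the same ordering is to sort on the boolean `breaking_news IS NOT NULL` explicitly, which does not depend on engine-specific NULL placement. A runnable sketch in Python with `sqlite3` (column names from the question; sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE n (breaking_news INT, news_published_dt TEXT);
INSERT INTO n VALUES
 (NULL,'2016-03-30'),(1,'2016-03-28'),(NULL,'2016-03-29'),(1,'2016-03-27');
""")
# IS NOT NULL evaluates to 1/0, so DESC puts breaking news first;
# within each bucket, newest first.
rows = conn.execute("""
    SELECT breaking_news, news_published_dt
    FROM n
    ORDER BY (breaking_news IS NOT NULL) DESC, news_published_dt DESC
""").fetchall()
```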
MySQL order by first column, sorted by second column
[ "mysql", "sql", "sorting", "datetime", "sql-order-by" ]