I've created a query that shows the number of times an individual client appears in a list of transactions.... ``` select Client_Ref, count(*) from Transactions where Start_Date >= '2015-01-01' group by Client_Ref order by Client_Ref ``` ...this returns data like this... ``` Client1 1 Client2 4 Client3 1 Client4 3 ``` ...What I need to do is summarize this into bands of frequency so that I get something like this... ``` No. of Clients with 1 transaction 53 No. of Clients with 2 transactions 157 No. of Clients with 3 transactions 25 No. of Clients with >3 transactions 259 ``` I can't think how to do this in SQL; I could probably figure it out in Excel, but I'd rather it was done at server level.
I call this a "histogram of histogram" query. Just use `group by` twice: ``` select cnt, count(*), min(Client_Ref), max(Client_Ref) from (select Client_Ref, count(*) as cnt from Transactions where Start_Date >= '2015-01-01' group by Client_Ref ) t group by cnt order by cnt; ``` I include the min and max client ref, because I often want to investigate certain values further. If you want a limit at 3, you can use `case`: ``` select (case when cnt <= 3 then cast(cnt as varchar(255)) else '4+' end) as grp, count(*), min(Client_Ref), max(Client_Ref) from (select Client_Ref, count(*) as cnt from Transactions where Start_Date >= '2015-01-01' group by Client_Ref ) t group by (case when cnt <= 3 then cast(cnt as varchar(255)) else '4+' end) order by min(cnt); ```
``` select cnt, count(*) from ( select case count(*) when 1 then 'No. of Clients with 1 transaction' when 2 then 'No. of Clients with 2 transactions' when 3 then 'No. of Clients with 3 transactions' else 'No. of Clients with >3 transactions' end as cnt from Transactions where Start_Date >= '2015-01-01' group by Client_Ref ) t group by cnt ```
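Both answers rely on the same two-level aggregation. A quick sanity check of the double `GROUP BY`, run against an in-memory SQLite database (the table and column names follow the question; the sample rows themselves are invented):

```python
import sqlite3

# Hypothetical sample data mirroring the question's Transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Transactions (Client_Ref TEXT, Start_Date TEXT)")
conn.executemany(
    "INSERT INTO Transactions VALUES (?, ?)",
    [("Client1", "2015-02-01"),
     ("Client2", "2015-02-01"), ("Client2", "2015-03-01"),
     ("Client2", "2015-04-01"), ("Client2", "2015-05-01"),
     ("Client3", "2015-06-01"),
     ("Client4", "2015-02-01"), ("Client4", "2015-03-01"),
     ("Client4", "2015-04-01")])

# Inner query: transactions per client; outer query: clients per count.
hist = conn.execute("""
    SELECT cnt, COUNT(*) AS num_clients
    FROM (SELECT Client_Ref, COUNT(*) AS cnt
          FROM Transactions
          WHERE Start_Date >= '2015-01-01'
          GROUP BY Client_Ref) t
    GROUP BY cnt
    ORDER BY cnt
""").fetchall()
print(hist)  # [(1, 2), (3, 1), (4, 1)]
```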
SQL: How to group data into bands
[ "sql", "grouping" ]
I have a Student History table which maintains the enrolled section history for each student. For example, Student X is presently in Section 1, and Student X may have been in other sections in the past (including past enrollment in Section 1). Each time Student X changes to another section, a record is added to the Student History table. The Student History table has the following structure: `Student Id`, `Date_entered`, `section_id` I need to write a SQL query to get the records for the following scenario: Get the `Student Id` of all students CURRENTLY in Sections 1 & 2 (the student's most recent `date_entered` must have been in Section 1 or 2). The results should not include any students who were in Sections 1 & 2 only in the past. Sample Query: ``` select student_id from student_Queue_history where section_id in (1, 2) ``` Can someone help me write a query for this one?
You can first `select` max date for each student and `join` it back to the `student_history` table. ``` with maxdate as ( select student_id, max(date_entered) as mxdate from student_history group by student_id) select s.* from student_history s join maxdate m on s.student_id = m.student_id and s.date_entered = m.mxdate where s.section_id in (1,2) ```
You have some pretty challenging design flaws with your table, but you can leverage ROW_NUMBER for this. This is not the best from a performance perspective, but the suboptimal design limits what you can do. Please realize this is still mostly a guess because you haven't provided much in the way of details here. ``` with CurrentStudents as ( select * , ROW_NUMBER() over(partition by student_id order by date_entered desc) as RowNum from student_Queue_history ) select * from CurrentStudents where section_id in (1, 2) and RowNum = 1 ```
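Both answers pick the latest row per student before filtering on the section. A minimal check of the `ROW_NUMBER` variant in SQLite (window functions need SQLite >= 3.25; the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student_history (student_id TEXT, date_entered TEXT, section_id INT)")
conn.executemany("INSERT INTO student_history VALUES (?, ?, ?)", [
    ("X", "2015-01-01", 3), ("X", "2015-06-01", 1),   # currently in section 1
    ("Y", "2015-01-01", 1), ("Y", "2015-06-01", 5),   # was in 1, now in 5
])

# Rank each student's rows newest-first, then keep only the newest row
# and require that it be in section 1 or 2.
current = conn.execute("""
    WITH ranked AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY student_id
                                     ORDER BY date_entered DESC) AS rn
        FROM student_history)
    SELECT student_id FROM ranked WHERE rn = 1 AND section_id IN (1, 2)
""").fetchall()
print(current)  # only X qualifies: [('X',)]
```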
SQL query to get most recent row
[ "sql", "sql-server", "rank" ]
If I have a table with user info that contains datetime column having their registration date (ie, 2015-01-01) called "added", how can I show the count of all records registered/active per following periods: 1) less than a year 2) between 1 and 2 years 3) between 2 and 3 years 4) ... so on for as long back as the "added" years go. I've tried this: ``` SELECT Count(*) AS count, YEAR(CURDATE()) - YEAR(added) AS years FROM users GROUP BY YEAR(added) ``` But that calc is off, since it just groups the results by YEAR, not by the actual date from today. As in, someone registered in December of 2014 would still come out showing as count "1" on January 2015... even though in reality, the actual registration date should be taken into consideration, not just the YEAR. Suggestions?
Try this : ``` SELECT Count(*) AS count, SUM(IF (DATE_ADD(added, INTERVAL 1 YEAR) > NOW(), 1, 0)) AS num_1year, SUM(IF (DATE_ADD(added, INTERVAL 1 YEAR) < NOW() AND DATE_ADD(added, INTERVAL 2 YEAR) > NOW(), 1, 0)) AS num_2year, SUM(IF (DATE_ADD(added, INTERVAL 2 YEAR) < NOW() AND DATE_ADD(added, INTERVAL 3 YEAR) > NOW(), 1, 0)) AS num_3year FROM users ```
As you are using `GROUP BY`, every selected expression needs to be an aggregate or part of the grouping key: ``` SELECT Count(*) AS count, YEAR(CURDATE()) - YEAR(added) AS years FROM users ^^^ not in the GROUP BY GROUP BY YEAR(added) ``` Calculate the date diff using the [DATEDIFF](http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_datediff) function. ``` SELECT Count(*) AS count, FLOOR(DATEDIFF(CURDATE(), added)/365) AS year_diff FROM users GROUP BY year_diff ``` P.S.: this is not perfectly accurate, as it treats 365 days as a year, but it supports any year range; you won't have to change it to add a band from n to n+1 years.
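The same banding idea can be checked in SQLite, which has no `DATEDIFF`; a difference of `julianday()` values plays the same role. This is a sketch with invented data, and "today" is pinned to a fixed date so the output is stable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (added TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("2015-05-01",), ("2014-06-01",), ("2012-01-01",)])

# Whole years elapsed = floor(days elapsed / 365.25); group users by that.
today = "2015-10-16"
bands = conn.execute("""
    SELECT CAST((julianday(?) - julianday(added)) / 365.25 AS INTEGER) AS years_ago,
           COUNT(*) AS num_users
    FROM users
    GROUP BY years_ago
    ORDER BY years_ago
""", (today,)).fetchall()
print(bands)  # [(0, 1), (1, 1), (3, 1)]
```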
Display count of records from table based on length of time active?
[ "mysql", "sql" ]
I have got a table containing material types: ``` id type mat_number description count ------------------------------------------------ a mat_1 123456 wood type a 5 a mat_2 333333 plastic type a 8 b mat_1 654321 wood type b 7 c mat_2 444444 plastic type c 11 d mat_1 121212 wood type z 8 d mat_2 444444 plastic type c 2 d mat_2 555555 plastic type d 3 ``` with SQL I want to create a list as follows: ``` id mat_1 description count mat_2 description count ------------------------------------------------------------------- a 123456 wood type a 5 333333 plastic type a 8 b 654321 wood type b 7 null c null 444444 plastic type c 11 d 121212 wood type z 8 444444 plastic type c 2 d null 555555 plastic type d 3 ``` Is that possible with not too much effort?
If you first of all compute a row number for each id and type grouping, then pivoting is easy: ``` with sample_data as (select 'a' id, 'mat_1' type, 123456 mat_number from dual union all select 'a' id, 'mat_2' type, 333333 mat_number from dual union all select 'b' id, 'mat_1' type, 654321 mat_number from dual union all select 'c' id, 'mat_2' type, 444444 mat_number from dual union all select 'd' id, 'mat_1' type, 121212 mat_number from dual union all select 'd' id, 'mat_2' type, 444444 mat_number from dual union all select 'd' id, 'mat_2' type, 555555 mat_number from dual) select id, mat_1, mat_2 from (select id, type, mat_number, row_number() over (partition by id, type order by mat_number) rn from sample_data) pivot (max(mat_number) for (type) in ('mat_1' as mat_1, 'mat_2' as mat_2)) order by id, rn; ID MAT_1 MAT_2 -- ---------- ---------- a 123456 333333 b 654321 c 444444 d 121212 444444 d 555555 ```
I think you need a standard **PIVOT** query. Your output seems wrong though. For example, **Table** ``` SQL> SELECT * FROM t; ID TYPE MAT_NUMBER -- ----- ---------- a mat_1 123456 a mat_2 333333 b mat_1 654321 c mat_2 444444 d mat_1 121212 d mat_2 444444 d mat_2 555555 7 rows selected. ``` **PIVOT query** ``` SQL> SELECT * 2 FROM (SELECT id, mat_number, type 3 FROM t) 4 PIVOT (MAX(mat_number) AS mat FOR (TYPE) IN ('mat_1' AS A, 'mat_2' AS b)) 5 ORDER BY ID; I A_MAT B_MAT - ---------- ---------- a 123456 333333 b 654321 c 444444 d 121212 555555 ```
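SQLite has no `PIVOT`, but the same row-number-then-pivot idea from the first answer can be reproduced with conditional aggregation. A sketch using the question's ids and material numbers (descriptions and counts omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mat (id TEXT, type TEXT, mat_number INT)")
conn.executemany("INSERT INTO mat VALUES (?, ?, ?)", [
    ("a", "mat_1", 123456), ("a", "mat_2", 333333),
    ("b", "mat_1", 654321), ("c", "mat_2", 444444),
    ("d", "mat_1", 121212), ("d", "mat_2", 444444), ("d", "mat_2", 555555)])

# Number duplicates within each (id, type), then pivot with MAX(CASE ...)
# grouped by (id, rn) so repeated types land on separate output rows.
rows = conn.execute("""
    WITH numbered AS (
        SELECT id, type, mat_number,
               ROW_NUMBER() OVER (PARTITION BY id, type
                                  ORDER BY mat_number) AS rn
        FROM mat)
    SELECT id,
           MAX(CASE WHEN type = 'mat_1' THEN mat_number END) AS mat_1,
           MAX(CASE WHEN type = 'mat_2' THEN mat_number END) AS mat_2
    FROM numbered
    GROUP BY id, rn
    ORDER BY id, rn
""").fetchall()
print(rows)
```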
oracle sql split columns by type
[ "sql", "oracle", "pivot" ]
I have a table that looks like this ``` Serial | Name | Age ------------------------ 1 | Aby | 43 3 | Philip | 15 5 | Tom | 65 6 | Jacob | 33 7 | Matt | 13 11 | Jerom | 37 ``` --- I need to update this table in such a way that all the values in the **serial** column are continuous, without any missing values, like this ``` Serial | Name | Age ------------------------ 1 | Aby | 43 2 | Philip | 15 3 | Tom | 65 4 | Jacob | 33 5 | Matt | 13 6 | Jerom | 37 --------------------------- ``` How can I achieve this in a single **update query**?
You can do it this way: ``` ;with T as ( select row_number () over (order by Serial) as RN, * from yourtable ) update T set Serial = RN ```
You should do this: 1. Create a new table with the same structure but with a `Primary Key Identity`: ``` CREATE TABLE [dbo].[Z_NEW_TABLE]( [SERIAL] [bigint] IDENTITY(1,1) NOT NULL, [NAME] [varchar](MAX) NULL, [AGE] [INT] NULL CONSTRAINT [PK_Z_NEW_TABLE] PRIMARY KEY CLUSTERED ( [SERIAL] ASC )WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` 2. Insert your data in the new table ``` INSERT INTO Z_NEW_TABLE (NAME, AGE) SELECT NAME, AGE FROM Z_OLD_TABLE ``` 3. At the end drop the old table and rename new table
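If an updatable CTE like the first answer's isn't available (SQLite, for instance, has none), the same renumbering can be done in two steps from client code. A sketch with the question's data, assuming serials are unique:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (Serial INT, Name TEXT, Age INT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    (1, "Aby", 43), (3, "Philip", 15), (5, "Tom", 65),
    (6, "Jacob", 33), (7, "Matt", 13), (11, "Jerom", 37)])

# Read the serials in order, then rewrite them as 1..n keyed on the old
# value. Each new value is <= its old value, so the updates never collide.
old = [s for (s,) in conn.execute("SELECT Serial FROM people ORDER BY Serial")]
conn.executemany("UPDATE people SET Serial = ? WHERE Serial = ?",
                 [(new, s) for new, s in enumerate(old, start=1)])

serials = [s for (s,) in conn.execute("SELECT Serial FROM people ORDER BY Serial")]
print(serials)  # [1, 2, 3, 4, 5, 6]
```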
Update all rows in a table serially in a single update query
[ "sql", "sql-server" ]
I have a table with user IDs, and another table representing a relation between two users by storing two user IDs. How can I count the mutual relations between two users, with the mutual count for a pair defined as the number of users that both of them have a relation with? For example if I have: ``` 3 - 4 1 - 4 3 - 6 5 - 6 2 - 6 1 - 6 ``` I would want my query to return (in order) ``` User1 User2 MutualCount 1 | 3 | 2 2 | 3 | 1 1 | 2 | 1 1 | 5 | 1 2 | 5 | 1 4 | 6 | 1 3 | 5 | 1 ``` And so on... I'm thinking some sort of Inner Joining of User1/User2, but I can't figure out how the ON part would work, nor how to store and return the count. I'd appreciate any help! I've used this to extract all the mutual relations for any two users, but I haven't been able to figure out a way to do it for all users ``` SELECT b.user_id FROM user b, user c, relation f WHERE c.user_id = <user id here> AND (c.user_id = f.user1_id OR c.user_id = f.user2_id) AND (b.user_id = f.user1_id OR b.user_id = f.user2_id) INTERSECT SELECT b.user_id FROM user b, user c, relation f WHERE c.user_id = <user id here> AND (c.user_id = f.user1_id OR c.user_id = f.user2_id) AND (b.user_id = f.user1_id OR b.user_id = f.user2_id); ```
**EDIT:** *I threw this out as a first attempt on my way out the door even though it should have been immediately obvious that it couldn't work. (For instance none of the values in columns 1 and 2 are completely disjoint and could never even match.)* Maybe this?: ``` select case when mr1.user1 < mr2.user2 then mr1.user1 else mr2.user2 end as User1, case when mr1.user1 < mr2.user2 then mr2.user2 else mr1.user1 end as User2, count(*) as MutualCount from mr mr1 inner join mr mr2 on mr1.user2 = mr2.user1 group by mr1.user1, mr2.user2 order by case when mr1.user1 < mr2.user2 then mr1.user1 else mr2.user2 end, case when mr1.user1 < mr2.user2 then mr2.user2 else mr1.user1 end ``` @Joel the problem is a little trickier than it first seemed. The common user could be in either of the two columns and neither of us handled that. That's where the `case` expression come in. I believe a correct solution is below: ``` select mr1.user1, case when mr1.user2 <> mr2.user1 then mr2.user1 else mr2.user2 end as user2, count(*) as MutualCount from mr mr1 inner join mr mr2 on mr1.user2 in (mr2.user1, mr2.user2) /* match either user... */ and mr1.user1 <> mr2.user1 /* ...but not when it's the same row */ where mr1.user1 < case when mr1.user2 <> mr2.user1 then mr2.user1 else mr2.user2 end group by mr1.user1, case when mr1.user2 <> mr2.user1 then mr2.user1 else mr2.user2 end order by mr1.user1, case when mr1.user2 <> mr2.user1 then mr2.user1 else mr2.user2 end ``` <http://sqlfiddle.com/#!3/7e652/15> You can also use this same join to find the common users. Just strip out the `group by` and the `count()`.
``` SELECT user1Id, user2ID, COUNT(*) MutualCount FROM UserVsUser GROUP BY user1Id, user2ID ORDER BY MutualCount DESC ``` UserVsUser is the table representing a relation between two users.
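One way to see the shape of the problem the first answer wrestles with: expand each undirected relation into both directions, then self-join on a shared neighbour. A hedged SQLite sketch with the question's pairs (table and column names follow the question; the query is a restatement of the first answer's idea, not its exact SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE relation (user1_id INT, user2_id INT)")
conn.executemany("INSERT INTO relation VALUES (?, ?)",
                 [(3, 4), (1, 4), (3, 6), (5, 6), (2, 6), (1, 6)])

# Expand to directed edges so the shared user can sit in either column,
# then count common neighbours per user pair (a < b avoids duplicates).
mutual = conn.execute("""
    WITH edges AS (
        SELECT user1_id AS a, user2_id AS b FROM relation
        UNION ALL
        SELECT user2_id, user1_id FROM relation)
    SELECT e1.a AS user1, e2.a AS user2, COUNT(*) AS mutual_count
    FROM edges e1
    JOIN edges e2 ON e1.b = e2.b AND e1.a < e2.a
    GROUP BY e1.a, e2.a
    ORDER BY mutual_count DESC
""").fetchall()
print(mutual)
```

For example, users 1 and 3 come out with a mutual count of 2, since both are related to users 4 and 6.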
SQL Query Counting Relations Between Two Tables
[ "sql" ]
When I run this statement, I receive SQL Server error number 10714. I understand that I can't use more than one UPDATE with WHEN MATCHED, but I don't know how else to do this. ``` MERGE INTO Photo p USING TmpPhoto tp ON p.ProductNumberID = tp.ProductNumberID and p.SHA1 = tp.SHA1 WHEN MATCHED AND p.Rank = tp.Rank THEN UPDATE SET p.VerifiedDate = getDate() WHEN MATCHED AND p.Rank != tp.Rank AND tp.Rank != 1 THEN UPDATE SET p.VerifiedDate = getDate(), p.Rank = tp.Rank, p.Active = 0 WHEN MATCHED AND p.Rank != tp.Rank AND tp.Rank = 1 THEN UPDATE SET p.VerifiedDate = getDate(), p.Rank = tp.Rank, p.Active = 1 WHEN NOT MATCHED THEN INSERT (ProductNumberID, Code, Extension, Rank, CreatedDate, VerifiedDate, FCTH, SHA1, Active) VALUES (tp.ProductNumberID, tp.Code, tp.Extension, tp.Rank, getdate(), getdate(), tp.FCTH, tp.SHA1, 0) OUTPUT inserted.NumberID as PhotoNumberID, inserted.ProductNumberID, inserted.SHA1, inserted.Rank INTO InsertedPhotos; ```
Simplified version: `VerifiedDate` is always updated, and `Rank` is always updated too (if it's equal it just stays the same), so the only field whose value has to branch is `p.Active`, using `CASE`: ``` MERGE INTO Photo p USING TmpPhoto tp ON p.ProductNumberID = tp.ProductNumberID and p.SHA1 = tp.SHA1 WHEN MATCHED THEN UPDATE SET p.VerifiedDate = getDate(), p.RANK = tp.RANK, p.Active = (CASE WHEN p.Rank != tp.Rank AND tp.Rank != 1 THEN 0 WHEN p.Rank != tp.Rank AND tp.Rank = 1 THEN 1 ELSE p.Active END ) WHEN NOT MATCHED THEN INSERT (ProductNumberID, Code, Extension, Rank, CreatedDate, VerifiedDate, FCTH, SHA1, Active) VALUES (tp.ProductNumberID, tp.Code, tp.Extension, tp.Rank, getdate(), getdate(), tp.FCTH, tp.SHA1, 0) OUTPUT inserted.NumberID as PhotoNumberID, inserted.ProductNumberID, inserted.SHA1, inserted.Rank INTO InsertedPhotos; ```
If you can, use `CASE` expressions in your `UPDATE` sub-statements to mimic the behavior of having multiple `WHEN MATCHED` clauses. Something like this: ``` MERGE INTO Photo p USING TmpPhoto tp ON p.ProductNumberID = tp.ProductNumberID and p.SHA1 = tp.SHA1 WHEN MATCHED THEN UPDATE SET p.VerifiedDate = getDate(), p.Rank = CASE WHEN p.Rank != tp.Rank AND tp.Rank != 1 THEN tp.Rank ELSE p.Rank END, p.Active = CASE WHEN p.Rank = tp.Rank THEN p.Active WHEN tp.Rank != 1 THEN 0 ELSE 1 END WHEN NOT MATCHED THEN INSERT (ProductNumberID, Code, Extension, Rank, CreatedDate, VerifiedDate, FCTH, SHA1, Active) VALUES (tp.ProductNumberID, tp.Code, tp.Extension, tp.Rank, getdate(), getdate(), tp.FCTH, tp.SHA1, 0) OUTPUT inserted.NumberID as PhotoNumberID, inserted.ProductNumberID, inserted.SHA1, inserted.Rank INTO InsertedPhotos; ``` What this does is move the logic about which fields to update and how into `CASE` expressions. Note that if a field isn't to be updated, then it is simply set to itself. In SQL Server, this appears to be a no-op. However, I'm not sure if it will count as a modified column for triggers. You can always test to see if the row actually changed in the trigger to avoid any problems this approach might cause.
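The core trick in both answers, folding several WHEN MATCHED branches into CASE expressions, is not MERGE-specific. A minimal sketch of the same branching in a plain correlated UPDATE (SQLite; all table contents invented, columns cut down to the ones that branch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Photo (id INT PRIMARY KEY, Rank INT, Active INT)")
conn.execute("CREATE TABLE TmpPhoto (id INT PRIMARY KEY, Rank INT)")
conn.executemany("INSERT INTO Photo VALUES (?, ?, ?)",
                 [(1, 2, 0), (2, 1, 1), (3, 4, 1)])
conn.executemany("INSERT INTO TmpPhoto VALUES (?, ?)",
                 [(1, 1), (2, 3)])   # id 3 has no match and is left alone

# All SET expressions see the pre-update row, so Rank and Active can both
# branch on the old values, just as in the MERGE answers above.
conn.execute("""
    UPDATE Photo
    SET Rank = (SELECT t.Rank FROM TmpPhoto t WHERE t.id = Photo.id),
        Active = (SELECT CASE
                             WHEN Photo.Rank <> t.Rank AND t.Rank = 1 THEN 1
                             WHEN Photo.Rank <> t.Rank AND t.Rank <> 1 THEN 0
                             ELSE Photo.Active
                         END
                  FROM TmpPhoto t WHERE t.id = Photo.id)
    WHERE id IN (SELECT id FROM TmpPhoto)
""")
state = conn.execute("SELECT id, Rank, Active FROM Photo ORDER BY id").fetchall()
print(state)
```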
MERGE - Multiple WHEN MATCHED cases with update
[ "sql", "sql-server", "merge" ]
To make a long story short I propose to discuss the code you see below. When running it: * Oracle 11 compiler raises > "PLS-00306: wrong number or types of arguments in call to 'PIPE\_TABLE'" > > "PLS-00642: Local Collection Types Not Allowed in SQL Statement" * Oracle 12 compiles the following package with no such warnings, but we have a surprise at runtime > when executing the anonymous block as is - everything is fine > (we may pipe some rows in the `pipe_table` function - it doesn't affect the result) > > now let's uncomment the line with `hello;` or put there a call to any procedure, and run the changed anonymous block again > we get "ORA-22163: left hand and right hand side collections are not of same type" And the question is: Does Oracle 12 allow local collection types in SQL? If yes then what's wrong with the code of `PACKAGE buggy_report`? ``` CREATE OR REPLACE PACKAGE buggy_report IS SUBTYPE t_id IS NUMBER(10); TYPE t_id_table IS TABLE OF t_id; TYPE t_info_rec IS RECORD ( first NUMBER ); TYPE t_info_table IS TABLE OF t_info_rec; TYPE t_info_cur IS REF CURSOR RETURN t_info_rec; FUNCTION pipe_table(p t_id_table) RETURN t_info_table PIPELINED; FUNCTION get_cursor RETURN t_info_cur; END buggy_report; / CREATE OR REPLACE PACKAGE BODY buggy_report IS FUNCTION pipe_table(p t_id_table) RETURN t_info_table PIPELINED IS l_table t_id_table; BEGIN l_table := p; END; FUNCTION get_cursor RETURN t_info_cur IS l_table t_id_table; l_result t_info_cur; BEGIN OPEN l_result FOR SELECT * FROM TABLE (buggy_report.pipe_table(l_table)); RETURN l_result; END; END; / DECLARE l_cur buggy_report.t_info_cur; l_rec l_cur%ROWTYPE; PROCEDURE hello IS BEGIN NULL; END; BEGIN l_cur := buggy_report.get_cursor(); -- hello; LOOP FETCH l_cur INTO l_rec; EXIT WHEN l_cur%NOTFOUND; END LOOP; CLOSE l_cur; dbms_output.put_line('success'); END; / ```
In further experiments we found that the problems are even deeper than first assumed. For example, by varying elements used in the package `buggy_report` we can get an `ORA-03113: end-of-file on communication channel` when running the script (in the question). It can be done by changing the type of `t_id_table` to `VARRAY` or `TABLE .. INDEX BY ..`. There are a lot of ways and variations leading to different exceptions, which are off topic for this post. One more interesting thing is that compilation of the `buggy_report` package specification can take up to 25 seconds, when normally it takes about 0.05 seconds. I can definitely say that it depends on the presence of the `t_id_table` parameter in the `pipe_table` function declaration, and the "long compilation" happens in about 40% of installation cases. So it seems that the problem with local collection types in SQL latently appears during compilation. So we see that Oracle 12.1.0.2 obviously has a bug in its implementation of local collection types in SQL. The minimal examples to get `ORA-22163` and `ORA-03113` are the following. They assume the same `buggy_report` package as in the question. ``` -- produces 'ORA-03113: end-of-file on communication channel' DECLARE l_cur buggy_report.t_info_cur; FUNCTION get_it RETURN buggy_report.t_info_cur IS BEGIN RETURN buggy_report.get_cursor(); END; BEGIN l_cur := get_it(); dbms_output.put_line(''); END; / -- produces 'ORA-22163: left hand and right hand side collections are not of same type' DECLARE l_cur buggy_report.t_info_cur; PROCEDURE hello IS BEGIN NULL; END; BEGIN l_cur := buggy_report.get_cursor; -- comment `hello` and exception disappears hello; CLOSE l_cur; END; / ```
Yes, in Oracle 12c you are allowed to use local collection types in SQL. The documentation, [Database New Features Guide](https://docs.oracle.com/database/121/NEWFT/chapter12101.htm#FEATURENO10014), says: > **PL/SQL-Specific Data Types Allowed Across the PL/SQL-to-SQL Interface** > > The table operator can now be used in a PL/SQL program on a collection whose data type is declared in PL/SQL. This also allows the data type to be a PL/SQL associative array. (In prior releases, the collection's data type had to be declared at the schema level.) However, I don't know why your code is not working; maybe this new feature still has a bug.
Does Oracle 12 have problems with local collection types in SQL?
[ "sql", "oracle", "collections", "oracle12c", "database-cursor" ]
Apologies, wasn't really sure what to put for the title of this one, I think it's a bit more complex than it sounds. This question is for Microsoft SQL Server 2008. I have two tables that look like this: ### Logging.Logs: ``` +---------+------------+--------------+ | LogID | LogEntry | LogTimeUtc | +---------+------------+--------------+ | 1 | Foo | 2015-10-16..| | 2 | Bar | 2015-10-16..| | ... | ... | ... | ``` ### Logging.LogAttributes: ``` +---------+------------------+----------------+ | LogID | LogAttributeID | LogAttribute | +---------+------------------+----------------+ | 1 | 1 | FooAttribute | | 1 | 2 | BarAttribute | | 1 | 3 | BazAttribute | | 2 | 1 | FooAttribute | | 2 | 2 | BazAttribute | | ... | ... | ... | ``` I want all of the LogIDs from Logging.Logs that don't have a corresponding entry in Logging.LogAttributes with a LogAttribute field that starts with 'Bar'. In the tables above, I would just get LogID 2, because LogID 1 has a row in in LogAttributes with 'BarAttribute' in the LogAttribute field. I started with a left join, but it returns 1 and 2 because there are entries in LogAttributes with LogID 1 and LogAttribute not starting with 'Bar' ``` SELECT * FROM Logging.Logs l LEFT JOIN Logging.LogAttributes la ON ( l.LogID = la.LogID AND la.LogAttribute NOT LIKE 'Bar%' ) ```
You could try: ``` SELECT * FROM Logging.Logs l WHERE NOT EXISTS (SELECT * FROM Logging.LogAttributes la WHERE l.LogID = la.LogID AND la.LogAttribute LIKE 'Bar%' ) ```
You need to revise your JOIN condition: ``` SELECT l.* FROM Logging.Logs l LEFT JOIN Logging.LogAttributes la ON l.LogID = la.LogID AND la.LogAttribute LIKE 'Bar%' WHERE la.LogID IS NULL ``` With proper indexes, this can perform as well as or better than `EXISTS` and `IN` queries. [SQL Fiddle](http://www.sqlfiddle.com/#!3/01ce4/1)
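The two answers are equivalent anti-join formulations; a quick check that both return only LogID 2 on the question's sample data (SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Logs (LogID INT, LogEntry TEXT);
    CREATE TABLE LogAttributes (LogID INT, LogAttributeID INT, LogAttribute TEXT);
    INSERT INTO Logs VALUES (1, 'Foo'), (2, 'Bar');
    INSERT INTO LogAttributes VALUES
        (1, 1, 'FooAttribute'), (1, 2, 'BarAttribute'), (1, 3, 'BazAttribute'),
        (2, 1, 'FooAttribute'), (2, 2, 'BazAttribute');
""")

# Anti-join via NOT EXISTS.
not_exists = conn.execute("""
    SELECT l.LogID FROM Logs l
    WHERE NOT EXISTS (SELECT 1 FROM LogAttributes la
                      WHERE la.LogID = l.LogID
                        AND la.LogAttribute LIKE 'Bar%')
""").fetchall()

# Anti-join via LEFT JOIN ... IS NULL; the filter lives in the ON clause.
anti_join = conn.execute("""
    SELECT l.LogID
    FROM Logs l
    LEFT JOIN LogAttributes la
           ON la.LogID = l.LogID AND la.LogAttribute LIKE 'Bar%'
    WHERE la.LogID IS NULL
""").fetchall()
print(not_exists, anti_join)  # both: [(2,)]
```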
Get all rows from table A where no matching row exist in table B
[ "sql", "sql-server", "sql-server-2008", "join" ]
I have two numbers in a table corresponding to different years (as shown below). How do I write a `SELECT` query to calculate the difference in value between 2014 and 2013. ``` Table 1 sample information: year value -------------------- 2013 100 2014 150 ```
I don't like it too much because it's very specific, but this is a way without a `join`, using a conditional `SUM`: ``` SELECT SUM(CASE WHEN year = 2014 THEN value ELSE -value END) as total FROM Table1 ```
The trick is to realize you need to join the table to itself so that you're operating on rows that can tell you something about two different years. For example: ``` SELECT t1.value-t2.value as difference FROM yourtable AS t1 INNER JOIN yourtable AS t2 ON(t1.year=2013 AND t2.year=2014) ```
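Both approaches can be checked side by side in SQLite; note that the operand order determines the sign of the result (here computed as 2014 minus 2013):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yearly (year INT, value INT)")
conn.executemany("INSERT INTO yearly VALUES (?, ?)", [(2013, 100), (2014, 150)])

# Self-join: t1 supplies the 2013 row, t2 the 2014 row.
join_diff = conn.execute("""
    SELECT t2.value - t1.value
    FROM yearly t1 JOIN yearly t2 ON t1.year = 2013 AND t2.year = 2014
""").fetchone()[0]

# Conditional-SUM alternative (no join needed).
sum_diff = conn.execute("""
    SELECT SUM(CASE WHEN year = 2014 THEN value ELSE -value END) FROM yearly
""").fetchone()[0]
print(join_diff, sum_diff)  # 50 50
```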
How to do an arithmetic operation in MySQL
[ "mysql", "sql" ]
I am trying to create a temp table with values from an existing table. I would like the temp table to have an additional column (phone), which does not exist in the permanent table. All values in this column should be NULL. Not sure how to do this, but here is my existing query: ``` SELECT DISTINCT UserName, FirstName, LastName INTO ##TempTable FROM ( SELECT DISTINCT Username, FirstName, LastName FROM PermanentTable ) data ```
You need to give the column a value, but you don't need a subquery: ``` SELECT DISTINCT UserName, FirstName, LastName, NULL as phone INTO ##TempTable FROM PermanentTable; ``` In SQL Server, the default type for `NULL` is an int. It is more reasonable to store a phone number as a string, so this is perhaps better: ``` SELECT DISTINCT UserName, FirstName, LastName, CAST(NULL as VARCHAR(255)) as phone INTO ##TempTable FROM PermanentTable; ```
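The same idea in SQLite, which uses `CREATE TABLE ... AS SELECT` rather than `SELECT ... INTO`; as in the second query above, the `CAST` gives the new NULL column a declared type:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PermanentTable (UserName TEXT, FirstName TEXT, LastName TEXT)")
conn.executemany("INSERT INTO PermanentTable VALUES (?, ?, ?)",
                 [("jdoe", "John", "Doe"), ("jdoe", "John", "Doe"),
                  ("asmith", "Anna", "Smith")])

# New table gets an extra phone column filled with typed NULLs;
# DISTINCT collapses the duplicate jdoe row.
conn.execute("""
    CREATE TABLE TempTable AS
    SELECT DISTINCT UserName, FirstName, LastName,
           CAST(NULL AS TEXT) AS phone
    FROM PermanentTable
""")
rows = conn.execute("SELECT UserName, phone FROM TempTable ORDER BY UserName").fetchall()
print(rows)  # [('asmith', None), ('jdoe', None)]
```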
Just add the name of column that you will insert into TempTable and in inner select just select NULL something like this ``` SELECT DISTINCT UserName, FirstName, LastName, Phone INTO ##TempTable FROM ( SELECT DISTINCT Username, FirstName, LastName, NULL FROM PermanentTable ) data ```
Adding column with NULL values to temp table
[ "sql", "sql-server" ]
``` Date 9/25/2015 9/26/2015 9/27/2015 9/28/2015 9/29/2015 9/30/2015 10/1/2015 10/2/2015 10/3/2015 10/4/2015 10/5/2015 ``` Can anyone help me with this in MySQL? I would like to select only dates from `9/28/2015` to `10/4/2015`. Please take note that this date is stored in a text field. Thank you.
You can use `STR_TO_DATE(yourdatefield, '%m/%d/%Y')` to convert the text to a date, and then use a `BETWEEN` clause to restrict the output data.
Convert your dates first using `STR_TO_DATE`, then use `BETWEEN` in your `WHERE` clause. Try this.. ``` SELECT * FROM TableName WHERE STR_TO_DATE(Date, '%m/%d/%Y') BETWEEN STR_TO_DATE('9/28/2015', '%m/%d/%Y') AND STR_TO_DATE('10/4/2015', '%m/%d/%Y') ```
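The underlying problem is that 'm/d/Y' text does not sort chronologically ('10/...' sorts before '9/...'), which is exactly why a conversion like `STR_TO_DATE` is needed. A small Python illustration of the same idea:

```python
from datetime import datetime

dates = ["9/25/2015", "9/26/2015", "9/27/2015", "9/28/2015", "9/29/2015",
         "9/30/2015", "10/1/2015", "10/2/2015", "10/3/2015", "10/4/2015",
         "10/5/2015"]

def as_date(text):
    # Same format-string idea as MySQL's STR_TO_DATE(col, '%m/%d/%Y').
    return datetime.strptime(text, "%m/%d/%Y").date()

# Plain text comparison is wrong: "10/4/2015" sorts before "9/28/2015".
assert "10/4/2015" < "9/28/2015"

lo, hi = as_date("9/28/2015"), as_date("10/4/2015")
selected = [d for d in dates if lo <= as_date(d) <= hi]
print(selected)  # 9/28 through 10/4 inclusive, 7 dates
```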
SELECT range of date in Text field
[ "mysql", "sql" ]
I was working on Oracle APEX writing a query which gives a list of the upcoming birthdays of students in 2016 and orders them chronologically. This worked and I used the following code: ``` SELECT first_name, last_name, to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy') AS birthday FROM students WHERE date_of_birth IS NOT NULL ORDER BY birthday ``` This gave the right output and started showing names with the chronologically ordered birthdays in 2016. However, I now wanted to get another column which shows on which day students celebrate their birthday in class. So for students with a birthday during a week day that would be the same day, but for students with a birthday during the weekend that would be Monday, using a case statement to change 'saturday' or 'sunday' to 'monday'. I already made a query that shows on which day the birthday of a student is celebrated, which gives proper output like 'friday' or 'wednesday', the code is: ``` SELECT first_name, last_name, to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy') AS birthday, to_char(to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy'), 'day') AS celebrationday FROM students WHERE date_of_birth IS NOT NULL ORDER BY birthday ``` I had already done some simple tests with case statement, like adding something when the first name is 'John', like this: ``` SELECT student_number, first_name, (CASE first_name WHEN 'John' THEN 'Check' END) addition FROM Students ``` Which resulted in correct output; for all names other than 'John' the column **addition** would be null and those with the name would have 'Check' in the column **addition**. 
Now, the main problem starts when I try to replicate this in order to check whether **celebrationday** is in the weekend. I either get the following error: *'ORA-00904 'celebrationday': invalid identifier'*, for this piece of code: ``` SELECT first_name, last_name, to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy') AS birthday, to_char(to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy'), 'day') AS celebrationday, (CASE celebrationday WHEN 'saturday' THEN 'monday' WHEN 'sunday' THEN 'monday' END) addition FROM students WHERE date_of_birth IS NOT NULL ORDER BY birthday ``` (Little commentary: I know the case should also contain the other days of the week, but at first I'm interested in getting a proper output for the weekend days.) Or, for the following piece of code, I just don't get any results in the column **addition**, just null values. ``` SELECT first_name, last_name, to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy') AS birthday, (CASE to_char(to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy'), 'day') WHEN 'saturday' THEN 'monday' WHEN 'sunday' THEN 'monday' END) addition FROM Students WHERE date_of_birth IS NOT NULL ORDER BY date_of_birth ``` * What do I need to do in order to get the case statement working so that I can change celebrationday? I hope the code examples provided are clear enough to work with and give you an idea of what is and isn't working.
Ah... you are using to\_char to return the day. You will need to account for the trailing spaces in your query: the value is essentially typed as char(9), so the day name comes back padded. I think if you trim the result (or use the 'fmday' format model, which suppresses the padding) and add an ELSE condition to account for the other days, you should be good to go: ``` CASE trim(to_char(to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy'), 'day')) WHEN 'saturday' THEN 'monday' WHEN 'sunday' THEN 'monday' else trim(to_char(to_date(concat(to_char(date_of_birth, 'dd-mm'), '-2016'), 'dd-mm-yyyy'), 'day')) END addition ```
In order to refer to `celebrationday` in the case statement, you need to wrap the query that defines it in an inner query: `select case celebrationday ... from ( select ... as celebrationday ... )`. The inner query defines the alias, and the outer query can then refer to it.
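For comparison, the weekend-to-Monday rule itself can be expressed on a date value directly. Here it is in SQLite, where `strftime('%w', ...)` returns 0 for Sunday and 6 for Saturday; this is a sketch of the logic only, not the Oracle syntax the question uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH birthdays(bday) AS (
        VALUES ('2016-01-02'),   -- a Saturday
               ('2016-01-03'),   -- a Sunday
               ('2016-01-04')    -- a Monday
    )
    SELECT bday,
           CASE strftime('%w', bday)
               WHEN '6' THEN date(bday, '+2 days')   -- Saturday -> Monday
               WHEN '0' THEN date(bday, '+1 days')   -- Sunday   -> Monday
               ELSE bday
           END AS celebration
    FROM birthdays
""").fetchall()
print(rows)
```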
Using a to_char function inside a case statement with Oracle APEX
[ "sql", "oracle", "oracle-apex", "case-statement", "to-char" ]
There is a table T with random values in id. How, with one select, can we get the values adjacent to an input id? Example: ``` T.id = 12 34 76 89 1234 1254 6789 3456 ``` For the input id=1254, as output we have to get the two values 1234 and 6789.
You can do it as follows: **SAMPLE DATA** ``` CREATE TABLE #Test (ID INT) INSERT INTO #Test VALUES (12),(34),(76),(89),(1234),(1254),(6789),(3456) ``` **INPUT** ``` DECLARE @var INT = 1234 ``` **QUERY** ``` ;WITH cte AS ( SELECT Id, ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) rn1 FROM #Test t ) SELECT PrevId, NextId FROM cte LEFT JOIN ( SELECT Id PrevId, ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) rn FROM #Test t1 ) previd ON cte.rn1 = previd.rn +1 LEFT JOIN ( SELECT Id NextId, ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) rn FROM #Test t1 ) nextid ON cte.rn1 = nextid.rn -1 WHERE cte.Id = @var ``` **OUTPUT** ``` PrevId NextId 89 1254 ``` **DEMO** You can test it at `SQL FIDDLE`
You can use conditional aggregation: ``` select max(case when id < 1254 then id end) as prev, min(case when id > 1254 then id end) as next from t; ``` A similar approach produces two rows but is more efficient if you have indexes: ``` select 'prev', max(id) from t where id < 1254 union all select 'next', min(id) from t where id > 1254; ``` EDIT: I seem to have missed that the ids are out of order. In that case, you need to assume that there is a column that specifies the ordering of the data. SQL tables represent *unordered* sets, so there is no next or previous value. You can handle this using window functions if you have a column for ordering: ``` with n as ( select t.*, row_number() over (order by <ordering column goes here>) as seqnum from t ) select max(case when seqnum = theseqnum - 1 then id end) as prev_id, max(case when seqnum = theseqnum + 1 then id end) as next_id from (select n.*, max(case when id = 1254 then seqnum end) over () as theseqnum from n ) n where seqnum = theseqnum - 1 or seqnum = theseqnum + 1; ```
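A quick check of the conditional-aggregation form against the question's ids. Note it works in *value* order, so the neighbours of 1254 come out as 1234 and 3456 here; getting 6789, as in the question's example, requires an explicit ordering column, as the edit above explains:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(12,), (34,), (76,), (89,), (1234,), (1254,), (6789,), (3456,)])

# Largest id below the input, smallest id above it.
prev_id, next_id = conn.execute("""
    SELECT MAX(CASE WHEN id < 1254 THEN id END) AS prev,
           MIN(CASE WHEN id > 1254 THEN id END) AS next
    FROM t
""").fetchone()
print(prev_id, next_id)  # 1234 3456
```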
T_SQL query extreme value
[ "sql", "sql-server", "t-sql", "select" ]
I am trying to generate a specific string based on the following data using SQL 2012 ``` | Id | Activity | Year | |----|----------|------| | 01 | AAAAA | 2008 | | 01 | AAAAA | 2009 | | 01 | AAAAA | 2010 | | 01 | AAAAA | 2012 | | 01 | AAAAA | 2013 | | 01 | AAAAA | 2015 | | 01 | BBBBB | 2014 | | 01 | BBBBB | 2015 | ``` With the result needing to look like; ``` | 01 | AAAAA | 2008-2010, 2012-2013, 2015 | | 01 | BBBBB | 2014-2015 | ``` Any ideas on how to achieve this would be greatly appreciated.
Use `ROW_NUMBER` to [group the contiguous years](http://www.sqlservercentral.com/articles/T-SQL/71550/) and `FOR XML PATH('')` for string concatenation. [**SQL Fiddle**](http://sqlfiddle.com/#!6/6a8cb/7/0) ``` WITH Cte AS( SELECT *, grp = year - ROW_NUMBER() OVER(PARTITION BY id, activity ORDER BY year) FROM tbl ) SELECT id, activity, x.years FROM Cte c CROSS APPLY( SELECT STUFF(( SELECT ', ' + CONVERT(VARCHAR(4), MIN(year)) + CASE WHEN MIN(year) <> MAX(year) THEN '-' + CONVERT(VARCHAR(4), MAX(year)) ELSE '' END FROM Cte WHERE id = c.id ANd activity = c.activity GROUP BY id, activity, grp FOR XML PATH('') ), 1, 2, '') )x(years) GROUP BY id, activity, x.years ``` RESULT: ``` | id | activity | years | |----|----------|----------------------------| | 01 | AAAAA | 2008-2010, 2012-2013, 2015 | | 01 | BBBBB | 2014-2015 | ```
You can do it by using XML path (for concatenating group values) and grouping by *id* and *activity*: **MS SQL Server Schema Setup**: ``` create table tbl (id varchar(2),activity varchar(10),year int); insert into tbl values ( '01' ,'AAAAA', 2008 ), ( '01' ,'AAAAA', 2009 ), ( '01' ,'AAAAA', 2010 ), ( '01' ,'AAAAA', 2012 ), ( '01' ,'AAAAA', 2013 ), ( '01' ,'AAAAA', 2015 ), ( '01' ,'BBBBB', 2014 ), ( '01' ,'BBBBB', 2015 ) ``` **Query**: ``` select id, activity, stuff( (select distinct ',' + cast(year as varchar(4)) from tbl where id = t.id and activity=t.activity for xml path ('')) , 1, 1, '') as years from tbl AS t group by id,activity ``` **[Results](http://sqlfiddle.com/#!6/6a8cb/1/0)**: ``` | id | activity | years | |----|----------|-------------------------------| | 01 | AAAAA | 2008,2009,2010,2012,2013,2015 | | 01 | BBBBB | 2014,2015 | ``` --- **Edit, after comments and a closer look at the desired output:** if you want to also group the consecutive years like *2008-2009* then you need an extra grouping (the difference of year and rank in each group will give you a distinct nested group): **Query**: ``` with cte1 as ( select r = year - (rank() over(partition by id,activity order by year)), id,activity,year from tbl ) ,cte2 as ( select id, activity, cast(min(year) as varchar(4)) + case when min(year)<>max(year) then '-' + cast(max(year) as varchar(4)) else '' end as years from cte1 group by r,id,activity ) select id, activity, stuff( (select distinct ',' + years from cte2 where id = t.id and activity=t.activity for xml path ('')) , 1, 1, '') as years from cte2 AS t group by id,activity ``` **[Results](http://sqlfiddle.com/#!6/cefc9/5/0)**: ``` | id | activity | years | |----|----------|--------------------------| | 01 | AAAAA | 2008-2010,2012-2013,2015 | | 01 | BBBBB | 2014-2015 | ```
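The `year - ROW_NUMBER()` "islands" trick both answers rely on can be checked quickly in SQLite (window functions, SQLite >= 3.25). Only the grouping step is shown, with the string concatenation left out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id TEXT, activity TEXT, year INT)")
conn.executemany("INSERT INTO tbl VALUES (?, ?, ?)", [
    ("01", "AAAAA", 2008), ("01", "AAAAA", 2009), ("01", "AAAAA", 2010),
    ("01", "AAAAA", 2012), ("01", "AAAAA", 2013), ("01", "AAAAA", 2015),
    ("01", "BBBBB", 2014), ("01", "BBBBB", 2015)])

# Consecutive years share the same (year - row_number) value, so each
# contiguous run collapses into one group.
spans = conn.execute("""
    WITH g AS (
        SELECT id, activity, year,
               year - ROW_NUMBER() OVER (PARTITION BY id, activity
                                         ORDER BY year) AS grp
        FROM tbl)
    SELECT id, activity,
           CASE WHEN MIN(year) = MAX(year) THEN CAST(MIN(year) AS TEXT)
                ELSE MIN(year) || '-' || MAX(year) END AS span
    FROM g
    GROUP BY id, activity, grp
    ORDER BY id, activity, MIN(year)
""").fetchall()
print(spans)
```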
SQL Sequential Grouping and strings for sequence gaps
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have stored fiscal week in my table as `Nvarchar(Max)` ``` CREATE TABLE sample( id int ,FiscalWeekName NvarChar(MAX) ); INSERT INTO sample VALUES(1,'FY15-W1'); ``` Now, I want to convert this `fiscalweekname` into the first day of that week. For example, the query should return ``` 01-01-2014 ```
I don't even know how you define fiscal weeks but here's a stab: ``` dateadd( week, cast(substring(FiscalWeekName, 7, 2) as int) - 1, dateadd(year, -1, cast('20' + substring(FiscalWeekName, 3, 2) + '0101' as date)) ) ``` A numeric year by itself will cast to January 1 but it's probably safer not to rely on that so I added the `'0101'`. EDIT: After your clarification I'm trying to adjust the day of week to slide back to Monday (and I'm assuming that's what your `DATEFIRST` setting is as well.) This seems messy so maybe there's a cleaner way. ``` dateadd( day, (cast(substring(FiscalWeekName, 7, 2) as int) - 1) * 7 - case when cast(substring(FiscalWeekName, 7, 2) as int) > 1 then datepart( dw, dateadd( year, -1, cast('20' + substring(FiscalWeekName, 3, 2) + '0101' as date) ) ) else 0 end, dateadd(year, -1, cast('20' + substring(FiscalWeekName, 3, 2) + '0101' as date)) ) ```
Please try this, correction from @shawnt00 ``` declare @FiscalWeekName as NvarChar(MAX) set @FiscalWeekName = 'FY15-W2' SELECT cast(substring(@FiscalWeekName, charindex('W', @FiscalWeekName) + 1, 2) as int), dateadd( wk, cast(substring(@FiscalWeekName, charindex('W', @FiscalWeekName) + 1, 2) as int) ,dateadd(yy, -1, cast('20'+substring(@FiscalWeekName, 3, 2)+'0101' as date)) ) ```
How to get date if we have fiscal week
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I am having a hard time figuring out how I can select records if my (Between From AND To) is missing From. E.g. on the form I have a date range, and if the user enters only TO and leaves FROM blank, how can I select ALL records up to that point? My issue occurs here, > SELECT \* FROM table WHERE date BETWEEN from AND to; This is my query and I would like to use this same query and just modify my variables so that I don't have to have multiple SELECTS depending on what data was entered. Thanks
I would suggest that you arrange your application to have two queries: ``` SELECT * FROM table WHERE date BETWEEN $from AND $to ``` and: ``` SELECT * FROM table WHERE date <= $to ``` Then choose the query based on whether or not `$from` is supplied. Why? Both queries can take advantage of an index on `date`. In general, MySQL does a poor job of recognizing index usage with an `or` condition. Alternatively, you can use AlexK's suggestion and set `$from` to some ridiculously old date and use the query in the OP.
Try something like: ``` SELECT * FROM table WHERE ($from = '' AND date <= $to) OR (date BETWEEN $from AND $to); ```
SQL Date Range - Select all if no start point supplied?
[ "", "mysql", "sql", "" ]
Hi, I have the following schema ``` -- Accounts ---- [id] name ---------------- 20 BigCompany 25 SomePerson -- Followers ------- [id follower_id] -------------------- 20 25 -- Daily Metrics -------------------------------- [id date ] follower_count media_count ------------------------------------------------- 25 2015-10-07 350 24 25 2015-10-13 500 27 25 2015-10-12 480 26 ``` I would like a list of all followers of a particular account, returning their most up-to-date `follower_count`. I've tried JOINs, correlated subqueries etc. but none are working for me. Expected result for followers of `BigCompany`: ``` id username follower_count media_count 'last_checked' --------------------------------------------------------------- 25 SomePerson 500 27 2015-10-13 ```
Do some `JOIN`'s, use `NOT EXISTS` to exclude older metrics: ``` select a1.id, a1.name, dm.follower_count, dm.media_count, dm.date as "last_checked" from Accounts a1 join Followers f on f.follower_id = a1.id join Accounts a2 on f.id = a2.id join DailyMetrics dm on dm.id = a1.id where a2.name = 'BigCompany' and not exists (select 1 from DailyMetrics where id = dm.id and date > dm.date) ```
Try this: ``` SELECT DISTINCT a.id, a.name AS username, d.media_count, d.date AS last_checked FROM Accounts AS a INNER JOIN Followers AS f ON a.id = f.follower_id INNER JOIN DailyMetrics AS d ON d.id = f.follower_id INNER JOIN ( SELECT id, MAX(date) AS MaxDate FROM DailyMetrics GROUP BY id ) AS dm ON d.date = dm.maxdate WHERE f.id = 999 ; ``` The subquery: ``` SELECT id, MAX(date) AS MaxDate FROM DailyMetrics GROUP BY id ``` Will get the most recent date for each `id`, then `JOIN`ing it with the table `DailyMetrics` will eliminate all the rows except the one with the most recent date. * [SQL Fiddle Demo](http://sqlfiddle.com/#!9/80def/6) This will give you: ``` | id | name | media_count | date | |----|------------|-------------|---------------------------| | 25 | SomePerson | 27 | October, 13 2015 00:00:00 | ```
MySQL sorting/grouping inside JOIN
[ "", "sql", "join", "correlated-subquery", "" ]
Oracle 11g R2 is in use. This is my source table: ``` ASSETNUM WONUM WODATE TYPE1 TYPE2 LOCATION -------------------------------------------------------- W1 1001 2015-10-10 N N loc1 W1 1002 2015-10-02 Y N loc2 W1 1003 2015-10-04 Y N loc2 W1 1004 2015-10-05 N Y loc2 W1 1005 2015-10-07 N Y loc2 W2 2001 2015-10-11 N N loc1 W2 2002 2015-10-03 Y N loc2 W2 2003 2015-10-02 Y N loc2 W2 2004 2015-10-08 N Y loc3 W2 2005 2015-10-06 N Y loc3 ``` <http://sqlfiddle.com/#!4/8ee297/1> I want to write a query to get following data: ``` ASSETNUM LATEST LOCATION for LATEST_WODATE_FOR LATEST_WODATE_FOR WODATE LATEST WODATE TYPE1=Y TYPE2=Y ---------------------------------------------------------------------------- W1 2015-10-10 loc1 2015-10-04 2015-10-07 W2 2015-10-11 loc1 2015-10-03 2015-10-08 ``` I need a similar resultset with only one row for each unique value in ASSETNUM. Any help would be appreciated!
Analytic functions to the rescue. <http://sqlfiddle.com/#!4/8ee297/4> ``` select assetnum, wodate, wonum, location, last_type1_wodate, last_type2_wodate from(select assetnum, wodate, wonum, location, rank() over (partition by assetnum order by wodate desc) rnk_wodate, max(case when type1 = 'Y' then wodate else null end) over (partition by assetnum) last_type1_wodate, max(case when type2 = 'Y' then wodate else null end) over (partition by assetnum) last_type2_wodate from t) where rnk_wodate = 1 ``` Walking through what that's doing * `rank() over (partition by assetnum order by wodate desc)` takes all the rows for a particular `assetnum` and sorts them by `wodate`. The predicate on the outside `where rnk_wodate = 1` returns just the most recent row. If there can be ties, you may want to use `dense_rank` or `row_number` in place of `rank` depending on how you want ties to be handled. * `max(case when type1 = 'Y' then wodate else null end) over (partition by assetnum)` takes all the rows for a particular `assetnum` and finds the value that maximizes the `case` expression. That will be the last row where `type1 = 'Y'` for that `assetnum`.
Using aggregate function [first](https://docs.oracle.com/database/121/SQLRF/functions074.htm#SQLRF00641), [SQL Fiddle](http://sqlfiddle.com/#!4/8ee297/18) **Query**: ``` select assetnum, max(wodate), max(wonum) keep (dense_rank first order by wodate desc) wonum, max(case when type1 = 'Y' then wodate end) last_type1_wodate, max(case when type2 = 'Y' then wodate end) last_type2_wodate from t group by assetnum ``` **[Results](http://sqlfiddle.com/#!4/8ee297/18/0)**: ``` | ASSETNUM | MAX(WODATE) | WONUM | LAST_TYPE1_WODATE | LAST_TYPE2_WODATE | |----------|---------------------------|-------|---------------------------|---------------------------| | W1 | October, 10 2015 00:00:00 | 1001 | October, 04 2015 00:00:00 | October, 07 2015 00:00:00 | | W2 | October, 11 2015 00:00:00 | 2001 | October, 03 2015 00:00:00 | October, 08 2015 00:00:00 | ``` `(dense_rank) (first) (order by wodate desc)` `( 2 ) ( 3 ) ( 1 )` 1. order the dates in descending order for each assetnum(as specified in GROUP BY clause). 2. assign dense\_rank to them. 3. select only first record. In your sample data, this will select only single record. corresponding to latest date. But you cannot directly select wonum, since you are using GROUP BY clause. So you have to use a aggregare function, which can be MIN , MAX , SUM, etc. It is there only for semantic purpose.
Oracle - produce unique rows for each unique column value and convert rows to columns
[ "", "sql", "oracle", "" ]
I have a table with some data, for example ``` ID Specified TIN Value ---------------------- 1 0 tin1 45 2 1 tin1 34 3 0 tin2 23 4 3 tin2 47 5 3 tin2 12 ``` I need to get rows with all fields by the MAX(Specified) column. And if I have a few rows with the MAX value (in the example, IDs 4 and 5) I must take the last one (with ID 5). Finally, the result must be ``` ID Specified TIN Value ----------------------- 2 1 tin1 34 5 3 tin2 12 ```
This will give the desired result using a window function: ``` ;with cte as(select *, row_number() over(partition by tin order by specified desc, id desc) as rn from tablename) select * from cte where rn = 1 ```
One method is to use window functions, `row_number()`: ``` select t.* from (select t.*, row_number() over (partition by tin order by specified desc, id desc ) as seqnum from t ) t where seqnum = 1; ``` However, if you have an index on `tin, specified, id` and on `id`, the most efficient method is: ``` select t.* from t where t.id = (select top 1 t2.id from t t2 where t2.tin = t.tin order by t2.specified desc, id desc ); ``` The reason this is better is that the index will be used for the subquery. Then the index will be used for the outer query as well. This is highly efficient. Although the index will be used for the window functions, the resulting execution plan probably requires scanning the entire table.
Getting all fields from table filtered by MAX(Column1)
[ "", "sql", "sql-server", "max", "" ]
I have this table sample [![enter image description here](https://i.stack.imgur.com/vx3CT.jpg)](https://i.stack.imgur.com/vx3CT.jpg) I need to select only the latest Area Value based on latest dates that will produce this kind of output [![enter image description here](https://i.stack.imgur.com/daLbm.jpg)](https://i.stack.imgur.com/daLbm.jpg)
Fixing the solution by Felix. I think you shouldn't partition by `area` in the first CTE. You should partition by `area` in the second CTE instead of ordering by it. [SQL Fiddle](http://sqlfiddle.com/#!6/abca0/1/0) ``` WITH CTE1 AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY tenant ORDER BY date desc) AS rn FROM yourTable ) ,CTE2 AS ( SELECT * ,rn - ROW_NUMBER() OVER (PARTITION BY tenant, area ORDER BY rn) AS rnk FROM CTE1 ) SELECT tenant ,area ,date ,sales FROM CTE2 WHERE rnk = 0 ORDER BY tenant, date desc ```
Using a Gaps and Islands solution: [**SQL Fiddle**](http://sqlfiddle.com/#!6/f3825/1/0) ``` WITH CteIslands AS( SELECT *, grp = DATEADD(DAY, -ROW_NUMBER() OVER(PARTITION BY tenant, area ORDER BY date), date) FROM yourTable ), Cte AS( SELECT *, rnk = RANK() OVER(PARTITION BY tenant ORDER BY grp DESC, area) FROM CteIslands ) SELECT tenant, area, date, sales FROM Cte WHERE rnk = 1 ```
SQL Select Query - Select rows in the table based on the latest values of column
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to find the borrowers who took all loan types. Schema: ``` loan (number (PKEY), type, min_rating) borrower (cust (PKEY), no (PKEY)) ``` Sample tables: ``` number | type | min_rating ------------------------------ L1 | student | 500 L2 | car | 550 L3 | house | 500 L4 | car | 700 L5 | car | 900 cust | no ----------- Jim | L2 Tom | L1 Tom | L2 Tom | L3 Tom | L4 Tom | L5 Bob | L3 ``` The answer here would be "Tom". I can simply count the total number of loans and compare the borrower's number of loans to that, but I'm NOT allowed to (this is a homework exercise), for the purposes of this homework and learning. I wanted to use double-negation where I first find the borrowers who didn't take all the loans and find borrowers who are not in that set. I want to use nesting with `NOT EXISTS` where I first find the borrowers that didn't take all the loans but I haven't been able to create a working query for that.
A simple approach is to use the facts: * that an outer join gives you nulls when there's no join * coalesce() can turn a null into a blank (that will always be less than a real value) Thus, the minimum coalesced loan number of a person who doesn't have every loan type will be blank: ``` select cust from borrower b left join loan l on l.number = b.no group by cust having min(coalesce(l.number, '')) > '' ``` The group-by neatly sidesteps the problem of selecting people more than once (and the ugly subqueries that often requires), and relies on the quite reasonable assumption that a loan number is never blank. Even if that were possible, you could still find a way to make this pattern work (eg coalesce the min\_rating to a negative number, etc). The above query can be re-written, possibly more readably, to use a `NOT IN` expression: ``` select distinct cust from borrower where cust not in ( select cust from borrower b left join loan l on l.number = b.no where l.number is null ) ``` By using the fact that a missed join returns all nulls, the where clause of the inner query keeps only *missed* joins. You need to use `DISTINCT` to stop borrowers appearing twice. --- Your schema has a problem - there is a many-to-many relationship between borrower and loan, but your schema handles this poorly. `borrower` should have one row for each person, and another *association* table to record the fact that a borrower took out a loan: ``` create table borrower ( id int, name varchar(20) -- other columns about the person ); create table borrower_loan ( borrower_id int, -- FK to borrower loan_number char(2) -- FK to loan ); ``` This would mean you wouldn't need the `distinct` operator (left to you to figure out why), but also handles real life situations like two borrowers having the same name.
I think a good first step would be to take a cartesian product\* of the borrowers and the loans, then use a where clause to filter down to the ones which aren't present in your "borrowers" table. (Although I think that would use a NOT IN rather than a NOT EXISTS, so may not be exactly what you have in mind?) (\* With the caveat that cartesian products are a terrible thing to do, and you'd need to think very carefully about performance before doing this in real life) ETA: The NOT EXISTS variant could look like this: Take the Cartesian product as before, do a correlated subquery for the combination of borrower and loan, then filter by whether this query returns any rows, using a WHERE clause with a NOT EXISTS condition.
Borrowers that take all loans using NOT EXISTS
[ "", "sql", "sqlite", "relational-division", "" ]
I want to select just one column from multiple columns in a select statement. ``` select A, sum(B) paid from T where K LIKE '2015%' group by A having B >= 100 ``` This query will return two columns, but how do I select just the first column from the select query? If I do something like this : ``` select A from (select A, sum(B) paid from T where K LIKE '2015%' group by A having B >= 100 ) ``` It is running into errors. Is there a way in mysql to select only the first column?
You can do one of the two: One ``` select A from ( select A, sum(B) paid from T where K LIKE '2015%' group by A having sum(B) >= 100 ) m ``` Two ``` select A from T where K like '2015%' group by A having sum(B)>=100 ```
Your second query was correct just that you didn't add `sum` before the `b` Try this ``` select A from (select A, sum(B) paid from T where K LIKE '2015%' group by A having sum(B) >= 100 ) As temp ```
How to select one column from multiple columns returning from a select statement in mysql
[ "", "mysql", "sql", "mysql-workbench", "" ]
``` Insert into Les_Mills_Customers ( CUSTOMER_ID ,C_USERNAME ,C_TITLE ,F_NAME ,L_NAME ,C_MESSAGE ,C_ADDRESS ,C_GENDER ,C_MOBILE ,C_NOTES ,C_PAYMENT_MODE ,C_EMAIL ,C_TYPE ,C_PICTURE ,C_JOINDATE ,C_TIMETABLES ) values ( 50 ,’A_Joe’ ,’Mrs’, ’Allison’ ,’Joe’ ,’RPM’ ,’Claudelands’ ,’F’ ,0273252302 , ’RPM’ ,’E’ ,’123@gmail.com’ ,'NULL' ,’NULL’ ,To_DATE ('20-02-15','DD-MM-YY'),01 ) ``` > Error at Command Line : 328 Column : 171 Error report - SQL Error: > ORA-00917: missing comma > 00917. 00000 - "missing comma" > \*Cause: > \*Action:
As noted by @Lalit, you must enclose strings with single quotes. Double quotes can be used in some database products, with the proper settings, but this configuration is not ANSI compatible and must be avoided. Please do that only in hand-made raw SQL statements. Passing strings into SQL commands in executed code will leave you vulnerable to [SQL injection attacks](https://en.wikipedia.org/wiki/SQL_injection). Using SQL parameters is the right way. And beware names like Sant'Anna, with apostrophes in them. Apostrophes are represented as single-quotes very often. In that case, double the apostrophes to represent a single apostrophe. ``` INSERT INTO TABLE1 (NAME) VALUES ('Sant''Anna') ```
There are multiple issues with your **INSERT** statement: ``` ’A_Joe’,’Mrs’,’Allison’,’Joe’,’RPM’,’Claudelands’,’F’,0273252302, ’RPM’,’E’,’123@gmail.com’,'NULL',’NULL’,To_DATE ('20-02-15','DD-MM-YY') ``` 1. You must enclose the **strings** within **single-quotation marks**. `’` is not single quote, `'` is a single quote. Just like you used in the TO\_DATE function. 2. Better use `YYYY` format, else you will reinvent the **Y2K** bug. 3. **NULL** should not be used within single quotes, just leave the keyword as it is. Else, you will store it as a string, and not the NULL value.
SQL Error: ORA-00917: missing comma when inserting values into Customer table:
[ "", "sql", "oracle", "oracle11g", "" ]
I have a set of data that looks like this: ``` ID Date 62 2012-06-12 05:30:57.000 202 2012-06-13 00:00:00.000 73 2012-06-17 05:25:15.000 74 2012-06-17 06:20:00.000 75 2012-06-17 10:46:03.000 76 2012-06-17 11:15:33.000 77 2012-06-17 12:17:09.000 79 2012-06-17 21:12:44.000 81 2012-06-18 12:34:45.000 82 2012-06-18 16:46:29.000 83 2012-06-19 00:21:44.000 84 2012-06-20 11:31:52.000 86 2012-06-22 23:27:38.000 87 2012-06-23 17:02:18.000 89 2012-06-25 10:05:00.000 91 2012-06-25 12:36:13.000 92 2012-06-25 15:28:36.000 93 2012-06-26 12:16:45.000 97 2012-06-27 14:03:14.000 98 2012-06-27 14:20:37.000 99 2012-06-27 16:21:21.000 114 2012-06-28 21:58:43.000 115 2012-06-29 10:46:53.000 120 2012-07-09 01:11:34.000 ``` This goes on for multiple years. I tried this, but it didn't work: ``` SELECT COUNT(Q.Questionaire_ID) AS [Count], Q.Start_Date AS [Date] FROM Questionaires as Q GROUP BY Q.Start_Date ``` I'm trying to sum each month's count. For example if: ``` Date Count Total 2012-06 10 10 2012-07 5 15 ```
If you cast each [Date] to a date it removes the time, and if you deduct the day (minus one) we get the first day of the month. Then Group by that. Finally use SUM() OVER() to form the running total. also: CONVERT(varchar(7), [Date], 120) produces a string of YYYY-MM, if you have MS SQL 2012+ you could use FORMAT([Date], 'yyyy-MM') instead. [SQL Fiddle](http://sqlfiddle.com/#!6/73efb/1) **MS SQL Server 2014 Schema Setup**: ``` CREATE TABLE Questionaires ([ID] int, [Date] datetime) ; INSERT INTO Questionaires ([ID], [Date]) VALUES (62, '2012-06-12 05:30:57'), (202, '2012-06-13 00:00:00'), (73, '2012-06-17 05:25:15'), (74, '2012-06-17 06:20:00'), (75, '2012-06-17 10:46:03'), (76, '2012-06-17 11:15:33'), (77, '2012-06-17 12:17:09'), (79, '2012-06-17 21:12:44'), (81, '2012-06-18 12:34:45'), (82, '2012-06-18 16:46:29'), (83, '2012-06-19 00:21:44'), (84, '2012-06-20 11:31:52'), (86, '2012-06-22 23:27:38'), (87, '2012-06-23 17:02:18'), (89, '2012-06-25 10:05:00'), (91, '2012-06-25 12:36:13'), (92, '2012-06-25 15:28:36'), (93, '2012-06-26 12:16:45'), (97, '2012-06-27 14:03:14'), (98, '2012-06-27 14:20:37'), (99, '2012-06-27 16:21:21'), (114, '2012-06-28 21:58:43'), (115, '2012-06-29 10:46:53'), (120, '2012-07-09 01:11:34') ; ``` **Query 1**: ``` SELECT CONVERT(varchar(7), [Date], 120) AS yr_month , CountOf , SUM(CountOf) OVER (order by [Date]) as Total FROM ( SELECT DATEADD(DAY, -(DAY(Q.Date) - 1), CAST(Q.[Date] as Date)) AS [Date] , COUNT(*) AS [CountOf] FROM Questionaires AS Q GROUP BY DATEADD(DAY, -(DAY(Q.Date) - 1), CAST(Q.[Date] as Date)) ) AS d ``` **[Results](http://sqlfiddle.com/#!6/73efb/1/0)**: ``` | yr_month | CountOf | Total | |----------|---------|-------| | 2012-06 | 23 | 23 | | 2012-07 | 1 | 24 | ```
This should work. ``` select str(year) + '-' + str(month) as month, total, count from ( SELECT COUNT(Q.Questionaire_ID) AS [Count], sum(Q.[Count]) as total, MONTH(Q.Start_Date) as month, YEAR(Q.Start_Date) as year FROM Questionaires as Q GROUP BY MONTH(Q.Start_Date), YEAR(Q.Start_Date) ) pretty ``` something like this? here it is in action: <http://sqlfiddle.com/#!6/8d955/4>
SQL Count and Sum Over Time
[ "", "sql", "sql-server", "sql-server-2014", "" ]
I am wondering how best to migrate my data when splitting a Table into a many to many relationship. I've made a simplified example and I'll also post some of the solutions I have come up with. I am using a Postgresql Database. **Before Migration** Table Person ``` ID Name Pet PetName 1 Follett Cat Garfield 2 Rowling Hamster Furry 3 Martin Cat Tom 4 Cage Cat Tom ``` **After Migration** Table Person ``` ID Name 1 Follett 2 Rowling 3 Martin 4 Cage ``` Table Pet ``` ID Pet PetName 6 Cat Garfield 7 Hamster Furry 8 Cat Tom 9 Cat Tom ``` Table PersonPet ``` FK_Person FK_Pet 1 6 2 7 3 8 4 9 ``` Notes: * I will specifically duplicate entries in the Pet Table (because in my case - due to other related data - one of them might still be editable by the customer while the other might not). * There is no column that uniquely identifies a "Pet" record. * For me it does not matter whether 3-8 and 4-9 are linked in the PersonPet table or 3-9 and 4-8. * Also I omitted all code that handles the schema changes of the tables as this is - in my understanding - irrelevant for this question. **My Solutions** 1. When creating the Pet Table temporarily add a column containing the id of the Person Table that was used to create this entry. ``` ALTER TABLE Pet ADD COLUMN IdPerson INTEGER; INSERT INTO Pet (Pet, PetName, IdPerson) SELECT Pet, PetName, ID FROM Person; INSERT INTO PersonPet (FK_Person, FK_Pet) SELECT ID, IdPerson FROM Pet; ALTER TABLE Pet DROP Column IdPerson; ``` 2. 
avoid temporarily modifying the Pet table ``` INSERT INTO Pet (Pet, PetName) SELECT Pet, PetName FROM Person; WITH CTE_Person AS (SELECT Id, Pet, PetName ,ROW_NUMBER() OVER (PARTITION BY Pet, PetName ORDER BY Id) AS row_number FROM Person ) ,CTE_Pet AS (SELECT Id, Pet, PetName ,ROW_NUMBER() OVER (PARTITION BY Pet, PetName ORDER BY Id) AS row_number FROM Pet ) ,CTE_Joined AS (SELECT CTE_Person.Id AS Person_Id, CTE_Pet.Id AS Pet_Id FROM CTE_Person INNER JOIN CTE_Pet ON CTE_Person.Pet = CTE_Pet.Pet AND CTE_Person.PetName = CTE_Pet.PetName AND CTE_Person.row_number = CTE_Pet.row_number ) INSERT INTO PersonPet (FK_Person, FK_Pet) SELECT Person_Id, Pet_Id from CTE_Joined; ``` **Questions** 1. Are both solutions correct? (I have tested the second solution and the result does seem to be correct, but I might have missed some corner case) 2. What are the advantages/disadvantages of the two solutions? 3. Is there an easier way of doing the same data migration? (For my curiosity I would also be interested in answers that slightly modify my constraints (e.g. no duplicate entries in the Pet table), but please indicate which ones :) ).
Another solution to achieve the effect you described (in my opinion the simplest one; without any CTEs or additional columns): ``` create table Pet as select Id, Pet, PetName from Person; create table PersonPet as select Id as FK_Person, Id as FK_Pet from Person; create sequence PetSeq; update PersonPet set FK_Pet=nextval('PetSeq'::regclass); update Pet p set Id=FK_Pet from PersonPet pp where p.Id=pp.FK_Person; alter table Pet alter column Id set default nextval('PetSeq'::regclass); alter table Pet add constraint PK_Pet primary key (Id); alter table PersonPet add constraint FK_Pet foreign key (FK_Pet) references Pet(Id); ``` We are simply using the existing person id as a temporary id for the pet until we generate one using a sequence. **Edit** It's also possible to use my approach with the schema changes already done: ``` insert into Pet(Id, Pet, PetName) select Id, Pet, PetName from Person; insert into PersonPet(FK_Person, FK_Pet) select Id, Id from Person; select setval('PetSeq'::regclass, (select max(Id) from Person)); ```
You can overcome the limitation of having to add an extra column to the pets table by inserting first into the foreign key table and then into the pets table. This allows establishing what the mapping is first and then filling in the details in a second pass. ``` INSERT INTO PersonPet SELECT ID, nextval('pet_id_seq'::regclass) as PetID FROM Person; INSERT INTO Pet SELECT FK_Pet, Pet, Petname FROM Person join PersonPet on (ID=FK_Person); ``` This can be combined into a single statement using the common table expression mechanisms outlined by Vladimir in his answer: ``` WITH fkeys AS ( INSERT INTO PersonPet SELECT ID, nextval('pet_id_seq'::regclass) as PetID FROM Person RETURNING FK_Person as PersonID, FK_Pet as PetID ) INSERT INTO Pet SELECT f.PetID, p.Pet, p.Petname FROM Person p join fkeys f on (p.ID=f.PersonID); ``` As far as advantages and disadvantages: Your solution #1: * Is more computationally efficient, it consists of two scan operations, no joins and no sorts. * Is less space efficient because it requires storing extra data in the Pet table. In Postgres that space is not recovered on DROP column (but you could recover it with CREATE TABLE AS / DROP TABLE). * Could cause issues if you are doing this repeatedly, e.g. adding/dropping a column regularly, because you will run into the Postgres max column limit. The solution I outlined is less computationally efficient than your solution #1 because it requires the join, but is more efficient than your solution #2.
Split Table into many to many relationship: Data Migration
[ "", "sql", "postgresql", "many-to-many", "database-migration", "" ]
I am using SQL Server 2014 and I am working with a column from one of my tables, which list arrival dates. It is in the following format: ``` ArrivalDate 2015-10-17 00:00:00.000 2015-12-03 00:00:00.000 ``` I am writing a query that would pull data from the above table, including the ArrivalDate column. However, I will need to convert the dates so that they become the first day of their respective months. In other words, my query should output the above example as follows: ``` 2015-10-01 00:00:00.000 2015-12-01 00:00:00.000 ``` I need this so that I can create a relationship with my Date Table in my PowerPivot model. I've tried this syntax but it is not meeting my requirements: ``` CONVERT(CHAR(4),[ArrivalDate], 100) + CONVERT(CHAR(4), [ArrivalDate], 120) AS [MTH2] ```
If, for example, it is the 15th of a given month, then you subtract 14 days and cast the result to a date: ``` SELECT ArrivalDate , CAST(DATEADD(DAY, -DATEPART(DAY, ArrivalDate) + 1, ArrivalDate) AS DATE) AS FirstDay FROM (VALUES (CURRENT_TIMESTAMP) ) AS t(ArrivalDate) ``` ``` ArrivalDate | FirstDay 2019-05-15 09:35:12.050 | 2019-05-01 ``` But my favorite is [`EOMONTH`](https://learn.microsoft.com/en-us/sql/t-sql/functions/eomonth-transact-sql?view=sql-server-2017) which requires SQL Server 2012: ``` SELECT ArrivalDate , DATEADD(DAY, 1, EOMONTH(ArrivalDate, -1)) AS FirstDay FROM (VALUES (CURRENT_TIMESTAMP) ) AS t(ArrivalDate) ``` ``` ArrivalDate | FirstDay 2019-05-15 09:35:52.657 | 2019-05-01 ```
Use **[`FORMAT`](https://msdn.microsoft.com/en-us/library/hh213505.aspx)** to format your date. ``` DECLARE @date DATETIME = '2015-10-17 00:00:00.000' SELECT FORMAT(@date, 'yyyy-MM-01 HH:mm:ss.fff') ``` Or if you don't want time part: ``` SELECT FORMAT(@date, 'yyyy-MM-01 00:00:00.000') ``` `LiveDemo`
Rounding dates to first day of the month
[ "", "sql", "sql-server", "t-sql", "date", "sql-server-2014", "" ]
I am new to SQL and I was looking at the DELETE keyword. I want to know how I can delete multiple rows in one go. E.g. I want to delete CategoryID 2, 3 and 5. I am trying ``` DELETE FROM Categories WHERE CategoryID="2" AND CategoryID="3" AND CategoryID="5"; ``` but no rows are deleted. And if I use OR then everything gets deleted. Table name: Categories ``` CategoryID CategoryName 1 Beverages 2 Condiments 3 Confections 4 Dairy Products 5 Grains/Cereals 6 Meat/Poultry ```
Use `IN`: ``` DELETE FROM Categories WHERE CategoryID IN (2, 3, 5); ```
In your query case, ``` DELETE FROM Categories WHERE CategoryID="2"AND CategoryID="3" AND CategoryID="5"; ``` there is no row with the data with same category id as 2,3,5. So you can use 'IN' for getting the respective rows.(3 rows from your data)
How to delete specific rows in a sql table
[ "", "sql", "sql-delete", "" ]
What I have is a table of completed training. Each user has a username. Each user may have completed numerous courses. The table has the following headers: ``` +-------------------------+----------+---------+---------+---------+---------+-----------+ | recordnumber (KEY - AI) | username | type | course | status | started | completed | +-------------------------+----------+---------+---------+---------+---------+-----------+ | int | varchar | varchar | varchar | varchar | date | date | +-------------------------+----------+---------+---------+---------+---------+-----------+ ``` And I have a PHP script set up to populate the db from a CSV upload. What I'm trying to achieve is for it to add new rows, and to update existing ones. The problem is that recordnumber (the key, unique field) is not constant. So instead of doing an "ON DUPLICATE KEY" query, I want to do it based on whether username and course already exist as a row. Basically to say "If this username already has this course, update the other fields. If the username does not have this course, add this as a new row". The query that I have at the moment (which works based on the key) is: ``` INSERT into table(recordnumber, username,type,course,status,started,completed) values('$data[0]','$data[1]','$data[2]','$data[3]','$data[4]','$data[5]','$data[6]') ON DUPLICATE KEY UPDATE username='$data[1]',type='$data[2]',course='$data[3]',status='$data[4]',started='$data[5]',completed='$data[6]' ``` Any thoughts on how I could amend the query to get it to check based on username and course instead of the duplicate key? Thank you. :-)
The most correct way would be to create a unique index on the username and course columns and use ON DUPLICATE KEY UPDATE. Alternatively, you can issue a SELECT before the insert, checking for an existing record with the same username and course, and then issue an INSERT or an UPDATE as appropriate.
create a key on the username and course column and then use on duplicate key ``` CREATE TABLE test ( username varchar(255) NOT NULL, course varchar(255), num_entries INT DEFAULT 0, UNIQUE KEY (username, course) ); insert into test (username, course) values ('billybob', 'math'), ('billy', 'math'), ('billybob', 'math'), ('bob', 'math') ON DUPLICATE KEY UPDATE num_entries = num_entries + 1; ``` this is a simple example, but you should understand what to do from here [SAMPLE FIDDLE](http://sqlfiddle.com/#!9/c2d24/1) so putting this to work on your table ``` ALTER TABLE `courses` -- assuming the table is named courses ADD CONSTRAINT `UK_COURSE_USERNAME` UNIQUE (username, course); ``` then your insert should just be the same as what you have
MySQL: INSERT or UPDATE if exists, but not based on key column
[ "", "mysql", "sql", "" ]
I'd like to have a result grouped by a propertie. Here's an example about what I would like to retrieve: [![enter image description here](https://i.stack.imgur.com/CJE3x.png)](https://i.stack.imgur.com/CJE3x.png) And here's come the table definition : [![enter image description here](https://i.stack.imgur.com/JW0rY.png)](https://i.stack.imgur.com/JW0rY.png) I tried this but it does not work : ``` SELECT OWNER.NAME, DOG.DOGNAME WHERE OWNER.ID = DOG.OWNER_ID AND OWNER.NAME = (SELECT OWNER.NAME FROM OWNER); ``` But it returns me an error: > 1427. 00000 - "single-row subquery returns more than one row" Thanks a lot !
**Edit** Based on Alex's response, a modified version of the query would be: `SELECT OWNER.NAME, DOG.DOGNAME FROM OWNER LEFT JOIN DOG ON OWNER.ID = DOG.OWNER_ID ORDER BY OWNER.NAME`
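The corrected query runs the same way on any engine; a tiny SQLite mock of the owner/dog tables (sample names invented) shows the grouping effect of the LEFT JOIN plus ORDER BY, including an owner with no dogs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE owner (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dog (id INTEGER PRIMARY KEY, owner_id INTEGER, dogname TEXT);
    INSERT INTO owner VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO dog VALUES (1, 1, 'Rex'), (2, 1, 'Fido');
""")

# LEFT JOIN keeps Bob even though he owns no dog (dogname comes back NULL)
rows = conn.execute("""
    SELECT o.name, d.dogname
    FROM owner o
    LEFT JOIN dog d ON o.id = d.owner_id
    ORDER BY o.name, d.dogname
""").fetchall()

print(rows)
```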
I am not an Oracle expert, but I believe you need the `FROM` and `JOIN` parts :-) : <http://sqlfiddle.com/#!4/f8630/1> ``` SELECT OWNER.ID, OWNER.NAME, DOG.ID, DOG.DOGNAME FROM OWNER LEFT JOIN DOG ON OWNER.ID = DOG.OWNER_ID; ```
Group by several results SQL Oracle
[ "", "sql", "oracle", "group-by", "" ]
I am left joining message replies, on to the main message, but when I left join the user table is not being joined: ``` "SELECT messages.*, message_replies.message_reply_message AS message_body FROM messages LEFT JOIN users ON messages.message_user = users.user_id LEFT JOIN message_replies ON messages.message_id = message_replies.message_reply_main LEFT JOIN user_personal_information ON messages.message_user = user_personal_information.user_personal_information_user" . $user . " " . $order . "" ``` When I remove: ``` messages.*, message_replies.message_reply_message AS message_body ``` and just select `*` then it works fine, but they my message replies aren't included, here's my php: ``` $messages = MessageModel::messages($user," WHERE message_user=? "," AND message_deleted=0 AND message_permdeleted=0 ORDER BY message_date DESC LIMIT 5"); ``` and my message sql: ``` CREATE TABLE IF NOT EXISTS `messages` ( `message_id` int(11) NOT NULL, `message_user` int(11) NOT NULL, `message_subject` varchar(100) NOT NULL, `message_body` text NOT NULL, `message_to` int(11) NOT NULL, `message_read` int(1) NOT NULL DEFAULT '0', `message_date` datetime NOT NULL, `message_deleted` int(11) NOT NULL DEFAULT '0', `message_permdeleted` int(11) NOT NULL DEFAULT '0', `message_type` varchar(11) NOT NULL ) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=latin1; INSERT INTO `messages` (`message_id`, `message_user`, `message_subject`, `message_body`, `message_to`, `message_read`, `message_date`, `message_deleted`, `message_permdeleted`, `message_type`) VALUES (1, 3, 'test', 'hello', 12, 1, '2015-10-12 02:09:51', 0, 0, 'sent'), ``` and my message replies: ``` CREATE TABLE IF NOT EXISTS `message_replies` ( `message_reply_id` int(11) NOT NULL, `message_reply_user` int(11) NOT NULL, `message_reply_main` int(11) NOT NULL, `message_reply_message` text NOT NULL, `message_reply_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ) ENGINE=InnoDB AUTO_INCREMENT=20 DEFAULT CHARSET=latin1; INSERT INTO 
`message_replies` (`message_reply_id`, `message_reply_user`, `message_reply_main`, `message_reply_message`, `message_reply_date`) VALUES (1, 3, 1, 'Hello, this is just a test reply\r\n', '2015-09-29 18:42:23'), ``` user sql: ``` CREATE TABLE IF NOT EXISTS `users` ( `user_id` int(11) NOT NULL, `user_username` varchar(25) NOT NULL, `user_email` varchar(100) NOT NULL, `user_password` varchar(255) NOT NULL, `user_enabled` int(1) NOT NULL DEFAULT '1', `user_staff` varchar(15) NOT NULL DEFAULT '', `user_account_type` varchar(20) NOT NULL DEFAULT '0', `user_registerdate` date NOT NULL, `user_twofactor` int(11) NOT NULL DEFAULT '0', `user_twofackey` varchar(255) NOT NULL, `user_forgot_email_code` varchar(255) NOT NULL, `user_emailverified` varchar(25) NOT NULL DEFAULT 'unverified', `user_banned` varchar(25) NOT NULL DEFAULT 'unbanned', `user_has_avatar` int(11) NOT NULL DEFAULT '0', `user_has_banner` int(11) NOT NULL DEFAULT '0' ) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=latin1; -- -- Dumping data for table `users` -- INSERT INTO `users` (`user_id`, `user_username`, `user_email`, `user_password`, `user_enabled`, `user_staff`, `user_account_type`, `user_registerdate`, `user_twofactor`, `user_twofackey`, `user_forgot_email_code`, `user_emailverified`, `user_banned`, `user_has_avatar`, `user_has_banner`) VALUES (3, 'lol', 'email@mail.com', '$2y$10$jjTLGiOC2XtwhzRrLOq15euw4S0jXmWveEctd9pYEL44LEt3Vdfa2', 1, 'admin', 'Business', '2015-07-21', 0, '5GILYNBWBXVAUV3A', 'd71a30cb75faed7c48cba971cf934922', 'unverified', 'unbanned', 1, 1), ``` So how can I get my sql to work with the information supplied above Var\_dump: > array(5) { [0]=> object(stdClass)#22 (10) { ["message\_id"]=> string(2) > "10" ["message\_user"]=> string(1) "3" ["message\_subject"]=> string(8) > "yooooooo" ["message\_body"]=> string(5) "fffff" ["message\_to"]=> > string(2) "12" ["message\_read"]=> string(1) "1" ["message\_date"]=> > string(19) "2015-10-12 03:36:32" ["message\_deleted"]=> string(1) "0" > 
["message\_permdeleted"]=> string(1) "0" ["message\_type"]=> string(8) > "recieved" } [1]=> object(stdClass)#23 (10) { ["message\_id"]=> > string(2) "10" ["message\_user"]=> string(1) "3" ["message\_subject"]=> > string(8) "yooooooo" ["message\_body"]=> string(3) "lol" > ["message\_to"]=> string(2) "12" ["message\_read"]=> string(1) "1" > ["message\_date"]=> string(19) "2015-10-12 03:36:32" > ["message\_deleted"]=> string(1) "0" ["message\_permdeleted"]=> > string(1) "0" ["message\_type"]=> string(8) "recieved" } [2]=> > object(stdClass)#24 (10) { ["message\_id"]=> string(1) "9" > ["message\_user"]=> string(1) "3" ["message\_subject"]=> string(8) > "jhjhjhjh" ["message\_body"]=> NULL ["message\_to"]=> string(2) "12" > ["message\_read"]=> string(1) "1" ["message\_date"]=> string(19) > "2015-10-12 03:34:54" ["message\_deleted"]=> string(1) "0" > ["message\_permdeleted"]=> string(1) "0" ["message\_type"]=> string(4) > "sent" } [3]=> object(stdClass)#25 (10) { ["message\_id"]=> string(1) > "8" ["message\_user"]=> string(1) "3" ["message\_subject"]=> string(8) > "jhjhjhjh" ["message\_body"]=> NULL ["message\_to"]=> string(2) "12" > ["message\_read"]=> string(1) "0" ["message\_date"]=> string(19) > "2015-10-12 03:34:40" ["message\_deleted"]=> string(1) "0" > ["message\_permdeleted"]=> string(1) "0" ["message\_type"]=> string(4) > "sent" } [4]=> object(stdClass)#26 (10) { ["message\_id"]=> string(1) > "7" ["message\_user"]=> string(1) "3" ["message\_subject"]=> string(2) > "yo" ["message\_body"]=> NULL ["message\_to"]=> string(2) "12" > ["message\_read"]=> string(1) "0" ["message\_date"]=> string(19) > "2015-10-12 03:33:17" ["message\_deleted"]=> string(1) "0" > ["message\_permdeleted"]=> string(1) "0" ["message\_type"]=> string(4) > "sent" } } ``` user_personal_information CREATE TABLE IF NOT EXISTS `user_personal_information` ( `user_personal_information_id` int(11) NOT NULL, `user_personal_information_user` int(11) NOT NULL, `user_firstname` varchar(75) NOT NULL, 
`user_surname` varchar(75) NOT NULL, `user_birthdate` date NOT NULL, `user_age` int(11) NOT NULL, `user_gender` varchar(25) NOT NULL, `user_contactemail` varchar(75) NOT NULL, `user_telephone` varchar(75) NOT NULL, `user_mobile` varchar(75) NOT NULL, `user_introduction` text NOT NULL, `user_occupation` varchar(75) NOT NULL, `user_relocate` varchar(75) NOT NULL, `user_available` varchar(75) NOT NULL, `user_fax` varchar(100) NOT NULL, `user_street` varchar(100) NOT NULL, `user_zip` varchar(100) NOT NULL, `user_city` varchar(100) NOT NULL, `user_state` varchar(100) NOT NULL, `user_country` varchar(75) NOT NULL, `user_personal_information_chestsize` varchar(11) NOT NULL, `user_personal_information_waistsize` varchar(11) NOT NULL, `user_personal_information_bootsize` varchar(11) NOT NULL, `user_personal_information_harness_size` varchar(11) NOT NULL, `user_personal_information_inside_leg` varchar(11) NOT NULL ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1; -- -- Dumping data for table `user_personal_information` -- INSERT INTO `user_personal_information` (`user_personal_information_id`, `user_personal_information_user`, `user_firstname`, `user_surname`, `user_birthdate`, `user_age`, `user_gender`, `user_contactemail`, `user_telephone`, `user_mobile`, `user_introduction`, `user_occupation`, `user_relocate`, `user_available`, `user_fax`, `user_street`, `user_zip`, `user_city`, `user_state`, `user_country`, `user_personal_information_chestsize`, `user_personal_information_waistsize`, `user_personal_information_bootsize`, `user_personal_information_harness_size`, `user_personal_information_inside_leg`) VALUES (1, 3, 'name', '123', '0000-00-00', 0, '', '', '07000', '00000', 'hello\r\n', 'Looking for work', '', '', '', ' jfkfkfjk', 'kjkjkjkj', 'kjkjkjk', 'kjkjk', 'United Kingdom', '123', '0', '0', '0', '0'), ```
Use table aliases for the tables and try: ``` SELECT ms.*, ms_r.message_reply_message AS message_body FROM messages AS ms LEFT JOIN users AS u ON ms.message_user = u.user_id LEFT JOIN message_replies AS ms_r ON ms.message_id = ms_r.message_reply_main LEFT JOIN user_personal_information AS u_p_i ON ms.message_user = u_p_i.user_personal_information_user WHERE ms.message_user=3 AND ms.message_deleted=0 AND ms.message_permdeleted=0 ORDER BY ms.message_date DESC LIMIT 5 ``` [SEE HERE](http://sqlfiddle.com/#!9/8e360/2) for a live example
Try this. Put users first in left join ``` "SELECT messages.*, message_replies.message_reply_message AS message_body FROM users LEFT JOIN messages ON messages.message_user = users.user_id LEFT OUTER JOIN message_replies ON messages.message_id = message_replies.message_reply_main LEFT JOIN user_personal_information ON messages.message_user = user_personal_information.user_personal_information_user" . $user . " " . $order . "" ```
left join not working correctly
[ "", "mysql", "sql", "" ]
I'm doing my best lately to look for the best way to run certain queries in SQL that could potentially be done multiple different ways. Among my research I've come across quite a lot of hate for the WHERE IN concept, due to an inherent inefficiency in how it works. eg: `WHERE Col IN (val1, val2, val3)` In my current project, I'm doing an UPDATE on a large set of data and am wondering which of the following is more efficient: (or whether a better option exists) ``` UPDATE table1 SET somecolumn = 'someVal' WHERE ID IN (id1, id2, id3 ....); ``` In the above, the list of ID's can be up to 1.5k ID's. VS Looping through all ID's in code, and running the following statement for each: ``` UPDATE table1 SET somecolumn = 'someVal' WHERE ID = 'theID'; ``` To myself, it seems more logical that the former would work better / faster, because there's less queries to run. That said, I'm not 100% familiar with the in's and out's of SQL and how query queueing works. I'm also unsure as to which would be friendlier on the DB as far as table locks and other general performance. General info in case it helps, I'm using Microsoft SQL Server 2014, and the primary development language is C#. Any help is much appreciated. EDIT: Option 3: ``` UPDATE table1 SET somecolumn = 'someVal' WHERE ID IN (SELECT ID FROM @definedTable); ``` In the above, @definedTable is a SQL 'User Defined Table Type', where the data inside comes through to a stored procedure as (in C#) type SqlDbType.Structured People are asking how the ID's come in: ID's are in a `List<string>`in the code, and are used for other things in the code before then being sent to a stored procedure. Currently, the ID's are coming into the stored procedure as a 'User-Defined Table Type' with only one column (ID's). I thought having them in a table might be better than having the code concatenate a massive string and just spitting it into the SP as a variable that looks like `id1, id2, id3, id4` etc
I'm using your third option and it works great. My stored procedure has a [table-valued parameter](https://msdn.microsoft.com/en-us/library/bb675163(v=vs.110).aspx). See also [Use Table-Valued Parameters](https://msdn.microsoft.com/en-AU/library/bb510489.aspx). In the procedure there is one statement, no loops, like you said: ``` UPDATE table1 SET somecolumn = 'someVal' WHERE ID IN (SELECT ID FROM @definedTable); ``` It is better to call the procedure once, than 1,500 times. It is better to have one transaction, than 1,500 transactions. If the number of rows in the `@definedTable` goes above, say, 10K, I'd consider splitting it in batches of 10K. --- Your first variant is OK for few values in the `IN` clause, but when you get to really high numbers (60K+) you can see something like this, as shown in [this answer](https://stackoverflow.com/a/1869789/4116017): > Msg 8623, Level 16, State 1, Line 1 The query processor ran out of > internal resources and could not produce a query plan. This is a rare > event and only expected for extremely complex queries or queries that > reference a very large number of tables or partitions. Please simplify > the query. If you believe you have received this message in error, > contact Customer Support Services for more information.
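SQLite has no table-valued parameters, but the shape of the recommended pattern — one UPDATE driven by a set of ids held in a table, rather than 1,500 single-row statements — can be sketched like this (table and column names taken from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (ID INTEGER PRIMARY KEY, somecolumn TEXT)")
conn.executemany("INSERT INTO table1 (ID, somecolumn) VALUES (?, ?)",
                 [(i, "old") for i in range(1, 6)])

# stand-in for the table-valued parameter: a temp table holding the target ids
conn.execute("CREATE TEMP TABLE ids (ID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO ids VALUES (?)", [(2,), (4,)])

# one statement, one transaction, however many ids there are
conn.execute("""
    UPDATE table1 SET somecolumn = 'someVal'
    WHERE ID IN (SELECT ID FROM ids)
""")

rows = conn.execute("SELECT ID, somecolumn FROM table1 ORDER BY ID").fetchall()
print(rows)
```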
Your first or third options are the best way to go. For either of them, you want an index on `table1(id)`. In general, it is better to run one query rather than multiple queries because the overhead of passing data in and out of the database adds up. In addition, each update starts a transaction and commits it -- more overhead. That said, this will probably not matter unless you are updating thousands of records. The overhead is measured in hundreds of microseconds or milliseconds, on a typical system.
SQL UPDATE WHERE IN (List) or UPDATE each individually?
[ "", "sql", "sql-server", "" ]
I have a table say Table1 which has column1,column2.column3 now I run a select query : ``` Select * from Table1 ``` which returns all the rows. Another query ``` Select * from Table1 where column1 like '%%' ``` also returns all the rows. The first one is a simple regular sql select query. Why does the second one act similar to the first one? Can anyone tell me?
The only difference is that the second query has a filter that doesn't filter anything out. The `LIKE` operator searches for a text, or part of a text, within a field. The `%` is a wildcard that stands for any sequence of characters (of any length, including none). In your case the second query returns all rows because the filter `column1 like '%%'` matches everything. It works like this. Imagine that your field `column1` has these values ``` foo bar goo boo zar ``` If you use `column1 like '%%'` you are telling your database: **give me all rows where the text in `column1` starts with anything and ends with anything**. So, everything. It will also return all rows if you use `column1 like '%'`. But if you use `column1 like '%oo%'` the answer will be `foo, goo and boo`, and if you use `column1 like '%r'` you will get `bar and zar`. Hope this helps you understand.
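A quick in-memory check of this (SQLite's `LIKE` behaves the same way for these patterns; the sample values are the ones from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column1 TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("foo",), ("bar",), ("goo",), ("boo",), ("zar",)])

all_rows  = conn.execute("SELECT column1 FROM t").fetchall()
like_rows = conn.execute("SELECT column1 FROM t WHERE column1 LIKE '%%'").fetchall()
oo_rows   = conn.execute("SELECT column1 FROM t WHERE column1 LIKE '%oo%'").fetchall()

print(all_rows == like_rows)        # '%%' filters nothing out
print([r[0] for r in oo_rows])
```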
What are you expecting the outcome to be? '%' is a wildcard in SQL, so you're basically saying this: 1) "Select all from table" 2) "Select all from table where column 1 is literally anything" These are effectively the same since the latter part of the second query does not refine the conditions at all. It's kind of like saying: 1) "Select all from alphabet" 2) "Select all from alphabet where the entry is a letter" It's going to produce the same results because you aren't adding a significantly refined condition to the statement.
What is the difference between the following sql queries?
[ "", "mysql", "sql", "sql-server", "" ]
I want to do the following: 1) Find the total rows in a table 2) Find the total rows that meets a certain criteria. 3) Subtract (1) from (2). Sample table Employees: ``` EmployeeID Nationality 1 Brazil 2 Korea 3 Germany 4 Brazil 5 Brazil ``` What I've tried: ``` SELECT count(EmployeeID) as Total from Employees UNION SELECT count(EmployeeID) as Brazilians from Employees WHERE Nationality = 'Brazil' ``` Result: ``` Total 5 3 ``` Row 1 will give me the total Employees. Row 2 will give me the Brazilian Employees. I used UNION to see if I could subtract row 2 from row 1. I could do this using CASE and SUM(), but that would require the row\_number() function, which I can't use given that I'm using WebSQL. Is there another way to index these rows to be able to subtract? Is there another approach I could use to solve this seemingly simple problem?
How about counting the rows that don't meet that criteria? ``` SELECT COUNT(EmployeedID) as non_brazilians FROM Employees WHERE Nationality <> 'Brazil'; ```
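A quick check that counting the complement directly gives the same number as subtracting the two counts, using the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmployeeID INTEGER, Nationality TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)", [
    (1, "Brazil"), (2, "Korea"), (3, "Germany"), (4, "Brazil"), (5, "Brazil"),
])

total = conn.execute("SELECT COUNT(*) FROM Employees").fetchone()[0]
brazilians = conn.execute(
    "SELECT COUNT(*) FROM Employees WHERE Nationality = 'Brazil'"
).fetchone()[0]
non_brazilians = conn.execute(
    "SELECT COUNT(*) FROM Employees WHERE Nationality <> 'Brazil'"
).fetchone()[0]

# the single <> query replaces the total-minus-matching subtraction
print(total, brazilians, non_brazilians)
```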
You can use conditional aggregation: ``` select count(*) as TotalRows, sum(case when Nationality = 'Brazil' then 1 else 0 end) as Brazilians, sum(case when Nationality <> 'Brazil' then 1 else 0 end) as nonBrazilians from Employee; ``` This assumes that `Nationality` is never `NULL`. If that is possible, the last condition should be: ``` sum(case when Nationality = 'Brazil' then 0 else 1 end) as nonBrazilians ```
How to subtract Total from conditioned sum in SQL
[ "", "sql", "" ]
I am trying to select all values that have a first name beginning with the letters a-d, however when I do this ``` select * from tblprofile where firstname between 'a' and 'd' ``` I get all values from a to c, not including d, how can I make sure it includes d?
It is inclusive. You don't get the results you want because any string beginning with 'd' and longer than 1 character is greater than 'd'. For example `'da' > 'd'`. So, your query would return all values starting with 'a', 'b', 'c', and a value 'd'. To get the results you want use ``` select * from tblprofile where firstname >= 'a' and firstname < 'e' ```
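The comparison at the heart of this — any multi-character string starting with 'd' sorts after the bare string 'd' — is plain lexicographic ordering, which a small sketch makes visible (names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblprofile (firstname TEXT)")
conn.executemany("INSERT INTO tblprofile VALUES (?)",
                 [("adam",), ("carol",), ("d",), ("dave",), ("erin",)])

# BETWEEN 'a' AND 'd' misses 'dave' because 'dave' > 'd' lexicographically
between = conn.execute(
    "SELECT firstname FROM tblprofile WHERE firstname BETWEEN 'a' AND 'd' "
    "ORDER BY firstname"
).fetchall()

# the half-open range >= 'a' AND < 'e' catches every name starting with a-d
half_open = conn.execute(
    "SELECT firstname FROM tblprofile WHERE firstname >= 'a' AND firstname < 'e' "
    "ORDER BY firstname"
).fetchall()

print([r[0] for r in between])
print([r[0] for r in half_open])
```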
Try using Left() Function: ``` SELECT * FROM tblprofile WHERE LEFT(FirstName,1) between 'a' and 'd' ```
Microsoft SQL between statement for characters is not inclusive?
[ "", "sql", "sql-server", "select", "" ]
I am using Spark SQL (I mention that it is in Spark in case that affects the SQL syntax - I'm not familiar enough to be sure yet) and I have a table that I am trying to re-structure, but I'm getting stuck trying to transpose multiple columns at the same time. Basically I have data that looks like: ``` userId someString varA varB 1 "example1" [0,2,5] [1,2,9] 2 "example2" [1,20,5] [9,null,6] ``` and I'd like to explode both varA and varB simultaneously (the length will always be consistent) - so that the final output looks like this: ``` userId someString varA varB 1 "example1" 0 1 1 "example1" 2 2 1 "example1" 5 9 2 "example2" 1 9 2 "example2" 20 null 2 "example2" 5 6 ``` but I can only seem to get a single explode(var) statement to work in one command, and if I try to chain them (ie create a temp table after the first explode command) then I obviously get a huge number of duplicate, unnecessary rows. Many thanks!
**Spark >= 2.4** You can skip `zip` `udf` and use `arrays_zip` function: ``` df.withColumn("vars", explode(arrays_zip($"varA", $"varB"))).select( $"userId", $"someString", $"vars.varA", $"vars.varB").show ``` **Spark < 2.4** What you want is not possible without a custom UDF. In Scala you could do something like this: ``` val data = sc.parallelize(Seq( """{"userId": 1, "someString": "example1", "varA": [0, 2, 5], "varB": [1, 2, 9]}""", """{"userId": 2, "someString": "example2", "varA": [1, 20, 5], "varB": [9, null, 6]}""" )) val df = spark.read.json(data) df.printSchema // root // |-- someString: string (nullable = true) // |-- userId: long (nullable = true) // |-- varA: array (nullable = true) // | |-- element: long (containsNull = true) // |-- varB: array (nullable = true) // | |-- element: long (containsNull = true) ``` Now we can define `zip` udf: ``` import org.apache.spark.sql.functions.{udf, explode} val zip = udf((xs: Seq[Long], ys: Seq[Long]) => xs.zip(ys)) df.withColumn("vars", explode(zip($"varA", $"varB"))).select( $"userId", $"someString", $"vars._1".alias("varA"), $"vars._2".alias("varB")).show // +------+----------+----+----+ // |userId|someString|varA|varB| // +------+----------+----+----+ // | 1| example1| 0| 1| // | 1| example1| 2| 2| // | 1| example1| 5| 9| // | 2| example2| 1| 9| // | 2| example2| 20|null| // | 2| example2| 5| 6| // +------+----------+----+----+ ``` With raw SQL: ``` sqlContext.udf.register("zip", (xs: Seq[Long], ys: Seq[Long]) => xs.zip(ys)) df.registerTempTable("df") sqlContext.sql( """SELECT userId, someString, explode(zip(varA, varB)) AS vars FROM df""") ```
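Outside Spark, the pairing that `zip`/`arrays_zip` performs is just an element-wise zip of the two arrays followed by a flatten; a plain-Python sketch of the same reshaping (field names taken from the question):

```python
rows = [
    {"userId": 1, "someString": "example1", "varA": [0, 2, 5], "varB": [1, 2, 9]},
    {"userId": 2, "someString": "example2", "varA": [1, 20, 5], "varB": [9, None, 6]},
]

# zip pairs the i-th elements of varA and varB (like arrays_zip);
# the outer loop over rows plays the role of explode
exploded = [
    (r["userId"], r["someString"], a, b)
    for r in rows
    for a, b in zip(r["varA"], r["varB"])
]

for row in exploded:
    print(row)
```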
You could also try ``` case class Input( userId: Integer, someString: String, varA: Array[Integer], varB: Array[Integer]) case class Result( userId: Integer, someString: String, varA: Integer, varB: Integer) def getResult(row : Input) : Iterable[Result] = { val userId = row.userId val someString = row.someString val varA = row.varA val varB = row.varB val seq = for( i <- 0 until varA.size) yield {Result(userId, someString, varA(i), varB(i))} seq } val obj1 = Input(1, "string1", Array(0, 2, 5), Array(1, 2, 9)) val obj2 = Input(2, "string2", Array(1, 3, 6), Array(2, 3, 10)) val input_df = sc.parallelize(Seq(obj1, obj2)).toDS val res = input_df.flatMap{ row => getResult(row) } res.show // +------+----------+----+-----+ // |userId|someString|varA|varB | // +------+----------+----+-----+ // | 1| string1 | 0| 1 | // | 1| string1 | 2| 2 | // | 1| string1 | 5| 9 | // | 2| string2 | 1| 2 | // | 2| string2 | 3| 3 | // | 2| string2 | 6| 10| // +------+----------+----+-----+ ```
Explode (transpose?) multiple columns in Spark SQL table
[ "", "sql", "apache-spark", "apache-spark-sql", "hiveql", "" ]
I would like to find the most commonly banned networks in a redshift table. I have tried this: ``` select network(set_masklen(ip::inet,8)), count(1) from banlist where status='BLOCKED' group by 1 order by 2 desc limit 10; ``` And got the following error: ``` INFO: Function ""network"(inet)" not supported. INFO: Function "set_masklen(inet,integer)" not supported. INFO: Function "inet(text)" not supported. INFO: Function ""network"(inet)" not supported. INFO: Function "set_masklen(inet,integer)" not supported. INFO: Function "inet(text)" not supported. ERROR: Specified types or functions (one per INFO message) not supported on Redshift tables. ``` OTOH, this works: ``` # select network(set_masklen('10.0.0.1'::inet, 24)); network ------------- 10.0.0.0/24 (1 row) ```
While everyone else's responses regarding creating a UDF are probably a fantastic option, if you're willing to give up some of the flexibility and just get either class A, class B, or class C subnets, you can use SPLIT\_PART and the concatenation operator to get a (not super) quick and dirty solution. `select SPLIT_PART(ip_address, '.', 1) || '.' || SPLIT_PART(ip_address, '.', 2) || '.' || SPLIT_PART(ip_address,'.', 3) as network, count(1) as mc from banlist group by network order by mc desc limit 10;`
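The `SPLIT_PART`-and-concatenate trick amounts to keeping the first three octets of each address; in application code the same grouping could be sketched as follows (the helper name and sample addresses are invented):

```python
from collections import Counter

def class_c_prefix(ip: str) -> str:
    """First three octets of a dotted-quad IP, i.e. the /24 network prefix.
    Mirrors SPLIT_PART(ip,'.',1)||'.'||SPLIT_PART(ip,'.',2)||'.'||SPLIT_PART(ip,'.',3)."""
    return ".".join(ip.split(".")[:3])

banned = ["10.0.0.1", "10.0.0.9", "10.0.1.3", "10.0.0.200"]
counts = Counter(class_c_prefix(ip) for ip in banned)
print(counts.most_common(2))
```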
As per the Redshift documentation, Network address functions and operators from PostgreSQL are not supported. ***References -*** [PostgreSQL functions not supported in Redshift](http://docs.aws.amazon.com/redshift/latest/dg/c_unsupported-postgresql-functions.html) [PostgreSQL documentation - Network Address Functions and Operators](http://www.postgresql.org/docs/8.2/static/functions-net.html)
In redshift, group by IP network
[ "", "sql", "amazon-redshift", "ipv4", "" ]
Using date range to select values, but also need to use an hour range to determine if a record should be selected. The date ranges and time ranges are not necessarily associated. ``` game_time (between 6 am and 6 pm) ``` have tried straight between statement and datepart, but cannot get anything to capture what we need. ``` create table gametime(name varchar, start_time datetime, end_time datetime) insert assorted name, start_times and end_times ``` Desired results ``` name start_time end_time name1 8:00 AM 10:00 AM name2 8:00 AM 11:30 AM name3 4:00 PM 5:30 PM name4 6:00 PM 9:00 PM ``` `datetime` is used is storage, but not needed in presentation.. only times are needed in presentation. Selected games should only start between the hours of 6:00 AM and 6:00 PM. Any and all suggestions and insight appreciated...... Using ``` LTRIM(RIGHT(CONVERT(VARCHAR(20), start_time, 100), 7)) ``` to get the correct format for presentation, but when I try to use ``` LTRIM(RIGHT(CONVERT(VARCHAR(20), start_time, 100), 7)) > 6 ``` I get conversion errors.
I would use `DATEPART` rather than relying on converting to/comparing strings: ``` WHERE DATEPART(hour,start_time) BETWEEN 6 AND 18 ```
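For comparison, the same numeric-hour filter sketched against SQLite, whose closest analogue to `DATEPART(hour, ...)` is `strftime('%H', ...)` (sample games invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gametime (name TEXT, start_time TEXT)")
conn.executemany("INSERT INTO gametime VALUES (?, ?)", [
    ("early", "2015-10-18 05:30:00"),   # before 6 AM -> excluded
    ("name1", "2015-10-18 08:00:00"),
    ("name4", "2015-10-18 18:00:00"),   # the 6 PM hour -> included
    ("late",  "2015-10-18 19:15:00"),   # after the 6 PM hour -> excluded
])

# compare the hour as an integer, like DATEPART(hour, start_time) BETWEEN 6 AND 18
rows = conn.execute("""
    SELECT name FROM gametime
    WHERE CAST(strftime('%H', start_time) AS INTEGER) BETWEEN 6 AND 18
    ORDER BY start_time
""").fetchall()

print([r[0] for r in rows])
```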
Try `CONVERT(VARCHAR(5), start_time, 108) BETWEEN '06:00' AND '18:00'`. Right now you're trying to compare a string to an integer.
T-SQL : convert(datetime) to include/exclude certain hours
[ "", "sql", "sql-server-2008", "t-sql", "date", "datetime", "" ]
There are many t-sql CONVERT strings to produce quite a variety of date and or time strings. But I cannot find the solution to needing no date and no seconds. We want to return the time only from a datetime field, and eliminate the seconds. It would not matter if the seconds were truncated or rounded, but we need to show no seconds. ``` desired results- from any DATETIME field 10:00 AM 11:00 AM 4:59 PM ``` any and all insights or suggestions appreciated!!
Would this do it? ``` select CONVERT(varchar(15),CAST(GETDATE() AS TIME),100) ``` Just change out `GETDATE()` with your date variable.
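If the display formatting can happen in application code instead, the same no-seconds output is easy to produce; a Python sketch (the helper name is my own, and the leading zero of `%I` is stripped by hand because the `%-I` form is not portable):

```python
from datetime import datetime

def hhmm_ampm(dt: datetime) -> str:
    # e.g. 16:59 -> '4:59 PM'; strip the leading zero that %I pads with
    return dt.strftime("%I:%M %p").lstrip("0")

print(hhmm_ampm(datetime(2015, 10, 18, 10, 0)))
print(hhmm_ampm(datetime(2015, 10, 18, 16, 59)))
```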
Try this: ``` SELECT LTRIM(RIGHT(CONVERT(VARCHAR(20), GETDATE(), 100), 7)) ``` Put your `DATETIME` field in place of `GETDATE()`
t-sql convert datetime to time only with NO SECONDS
[ "", "sql", "sql-server-2008", "t-sql", "date", "datetime-format", "" ]
there is a table in SQL database, called Players: Players (ID, name, age, gender, score) where ID is the primary key. Now I want to write a query to find the following results: For each age, find the name and age of the player(s) with the highest score among all players of this age. I wrote the following query: ``` SELECT P.name, P.age FROM Players P WHERE P.score = (SELECT MAX(P2.score) FROM Players P2) GROUP BY P.age, P.name ORDER BY S.age ``` However, the result of the above query is a list of players with the highest score among ALL players across all ages, not for EACH age. Then I changed my query to the following: ``` SELECT P.name, P.age, MAX(P.score) FROM Players P GROUP BY P.age, P.name ORDER BY P.age ``` However, the second query I wrote gives a list of players with each age, but for each age, there are not only the players with the highest score, but also other players with lower scores within this age group. How should I fix my logic/query code? Thank you!
Your original query is quite close. You just need to change the subquery to be a correlated subquery and remove the `GROUP BY` clause: ``` SELECT P.name, P.age FROM Players P WHERE P.score = (SELECT MAX(P2.score) FROM Players P2 WHERE p2.age = p.age) ORDER BY P.age; ``` The analytic ranking functions are another very viable method for answering this question. Both methods can take advantage of an index on `Players(age, score)`; with that index, this should perform well even on large data sets.
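A small in-memory run of the correlated-subquery version (players invented; note the tie at age 21, where both leaders are returned):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Players (ID INTEGER PRIMARY KEY, name TEXT, age INTEGER, score INTEGER)"
)
conn.executemany("INSERT INTO Players (name, age, score) VALUES (?, ?, ?)", [
    ("ann", 20, 50), ("ben", 20, 70),
    ("cat", 21, 60), ("dan", 21, 60),   # tie: both age-21 leaders appear
    ("eve", 22, 10),
])

# the subquery re-computes MAX(score) per outer row's age
rows = conn.execute("""
    SELECT p.name, p.age
    FROM Players p
    WHERE p.score = (SELECT MAX(p2.score) FROM Players p2 WHERE p2.age = p.age)
    ORDER BY p.age, p.name
""").fetchall()

print(rows)
```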
You can use `rank` to do this. ``` select name, age from ( SELECT *, rank() over(partition by age order by score desc) rnk FROM Players) t where rnk = 1 ```
How to find the following SQL query?
[ "", "sql", "database", "oracle", "" ]
I have 2 tables **Table 1 : tbl\_appointments** **Table 2: tbl\_appointmentschedule\_details** ``` Table1 AppointmentTypeID | AppointmentTimeID | AppointmentDate | NumberOfApplicants ----------------------------------------------------------------------------- 11 23 10-16-2015 1 11 23 10-16-2015 1 11 24 10-16-2015 1 11 24 10-16-2015 1 11 23 10-16-2015 1 11 24 10-16-2015 1 11 25 10-16-2015 1 11 22 10-17-2015 1 11 22 10-17-2015 1 11 22 10-17-2015 1 11 22 10-17-2015 1 ``` ``` Table2 ScheduleID | AppointmentTimeID ---------------------------- 27 22 27 23 27 24 27 25 27 26 ``` 1. I want my all AppointmentTimeID from Table2 should match with table1 and display in final results either these table2.AppointmentTimeID match with table1.AppointmentTimeID or not but should display with date and zero NumberOfApplicants. 2. Then I want it should use between date from table1.AppointmentDate between '2015-10-15' and '2015-10-15'. 3. My Final results should be all values from table2 for each day and sum of table1.NumberOfApplicats. 4. Please see my final outcome should be like this **Following Query I am using** ``` SELECT ad.AppointmentTimeID, COALESCE(sum(a.NumberOfApplicants),0) AS TBooked, a.AppointmentDate <br>FROM tbl_appointmentschedule_details ad LEFT OUTER JOIN tbl_appointments a ON ad.AppointmentTimeID = a.AppointmentTimeID AND (a.AppointmentDate BETWEEN '2015-10-16' AND '2015-10-17') AND ad.ScheduleID = 27 AND a.AppointmentTypeID = 11 WHERE a.AppointmentDate IS NOT NULL GROUP BY a.AppointmentDate, ad.AppointmentTimeID ORDER BY a.AppointmentDate ASC ``` It is showing only those records which is available in table1 matched with appointmenttimeid, but I want to see all the records. 
``` ## FINAL RESULTS I WANT SHOULD BE AppointmentTimeID | AppointmentDate | NumberOfApplicants --------------------------------------------------------- 22 16-10-2015 0 23 16-10-2015 3 24 16-10-2015 3 25 16-10-2015 1 26 16-10-2015 0 22 17-10-2015 4 23 17-10-2015 0 24 17-10-2015 0 25 17-10-2015 0 26 17-10-2015 0 ``` My query is only showing those records which have values like that ``` ## MY QUERY OUTPUT AppointmentTimeID | AppointmentDate | NumberOfApplicants --------------------------------------------------------- 23 16-10-2015 3 24 16-10-2015 3 25 16-10-2015 1 22 17-10-2015 4 ``` Please help me to solve this issue.
This should do it for you: ``` select m.AppointmentTimeID, m.AppointmentDate, coalesce(sum(ap.NumberOfApplicants),0) as NoOfApplicants from ( -- just basically doing a cross to get all time/date combinations select distinct d.AppointmentTimeID, a.AppointmentDate from tbl_appointments a, tbl_appointmentschedule_details d ) m left join tbl_appointments ap on ap.AppointmentTimeID = m.AppointmentTimeID and ap.AppointmentDate = m.AppointmentDate where m.AppointmentDate between '2015-10-16' and '2015-10-17' group by m.AppointmentDate, m.AppointmentTimeID order by m.AppointmentDate, m.AppointmentTimeID ``` SQLFiddle example: <http://sqlfiddle.com/#!9/1e632/15> ``` Results: | AppointmentTimeID | AppointmentDate | NoOfApplicants | |-------------------|---------------------------|----------------| | 22 | October, 16 2015 00:00:00 | 0 | | 23 | October, 16 2015 00:00:00 | 3 | | 24 | October, 16 2015 00:00:00 | 3 | | 25 | October, 16 2015 00:00:00 | 1 | | 26 | October, 16 2015 00:00:00 | 0 | | 22 | October, 17 2015 00:00:00 | 4 | | 23 | October, 17 2015 00:00:00 | 0 | | 24 | October, 17 2015 00:00:00 | 0 | | 25 | October, 17 2015 00:00:00 | 0 | | 26 | October, 17 2015 00:00:00 | 0 | ``` In order to speed things up a little, you might benefit from some indexing: ``` create index idx_tbl_appointments_apptdate_timeid on tbl_appointments(AppointmentDate, AppointmentTimeID); create index idx_tbl_appointmentschedule_details_TimeID on tbl_appointmentschedule_details(AppointmentTimeID); ``` Query modification: ``` select m.AppointmentTimeID, m.AppointmentDate, coalesce(sum(ap.NumberOfApplicants),0) as NoOfApplicants from ( select distinct AppointmentTimeID, AppointmentDate from (select distinct AppointmentTimeID from tbl_appointmentschedule_details) one cross join (select distinct AppointmentDate from tbl_appointments where AppointmentDate between '2015-10-16' and '2015-10-17') two ) m left join tbl_appointments ap on ap.AppointmentTimeID = m.AppointmentTimeID and ap.AppointmentDate = 
m.AppointmentDate where m.AppointmentDate between '2015-10-16' and '2015-10-17' group by m.AppointmentDate, m.AppointmentTimeID order by m.AppointmentDate, m.AppointmentTimeID ``` SQLFiddle example: <http://sqlfiddle.com/#!9/0de6d7/1> Notice that I have added date span in two locations in this query. See how this query performs for you.
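The core move in that answer — cross-join every time slot with every date, then LEFT JOIN the real bookings back on and `COALESCE` the missing counts to zero — can be sketched at small scale (slot ids and dates invented, table names shortened):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE slots (time_id INTEGER);
    INSERT INTO slots VALUES (22), (23);
    CREATE TABLE bookings (time_id INTEGER, day TEXT, n INTEGER);
    INSERT INTO bookings VALUES (23, '2015-10-16', 3);
""")

rows = conn.execute("""
    SELECT m.time_id, m.day, COALESCE(SUM(b.n), 0) AS booked
    FROM (SELECT s.time_id, d.day
          FROM slots s
          CROSS JOIN (SELECT DISTINCT day FROM bookings) d) m
    LEFT JOIN bookings b
           ON b.time_id = m.time_id AND b.day = m.day
    GROUP BY m.day, m.time_id
    ORDER BY m.day, m.time_id
""").fetchall()

print(rows)  # slot 22 appears with a zero even though it has no booking row
```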
Place the filter criteria in the `WHERE` clause instead of the `JOIN` ``` SELECT ad.AppointmentTimeID, COALESCE(sum(a.NumberOfApplicants), 0) AS TBooked, a.AppointmentDate FROM tbl_appointmentschedule_details ad LEFT OUTER JOIN tbl_appointments a ON ad.AppointmentTimeID = a.AppointmentTimeID WHERE a.AppointmentDate IS NOT NULL AND (a.AppointmentDate BETWEEN '2015-10-16' AND '2015-10-17') AND ad.ScheduleID = 27 AND a.AppointmentTypeID = 11 GROUP BY a.AppointmentDate, ad.AppointmentTimeID ORDER BY a.AppointmentDate ASC ```
MySQL LEFT OUTER JOIN has an issue in the final results
[ "", "mysql", "sql", "join", "" ]
I'm losing many records due to the condition `not like '%TEST%'` when that field contains a `NULL` value. ``` select * from credit_case cc left join (select skp_case, name_full from client) cl on cc.skp_case = cl.skp_case where cl.name_full not like '%TEST%' ``` Table `credit_case` contains full data whereas table `client` does not. When I re-write it as ``` select * from credit_case cc left join (select skp_case, name_full from client where name_full not like '%TEST%') cl on cc.skp_case = cl.skp_case ``` records from `credit_case` are not lost. Why is that?
* `null` is never equal to another value, including `null`. * `null` is never unequal to another value, including `null`. * `null` is never like another value, including `null`. * `null` is never unlike another value, including `null`. The only way to do comparisons with `null` is to use `is null` or `is not null`. None of these queries will ever return a row. ``` select * from table where column = null; select * from table where column != null; select * from table where column like null; select * from table where column not like null; ``` You would need to explicitly include an `is null` or an `is not null` clause ``` where ( cl.name_full not like '%TEST%' or cl.name_full is null) ``` will return the rows with `null` values for `name_full`.
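The same three-valued logic holds in other engines too; here is a quick runnable check using SQLite from Python (table trimmed to the two relevant columns, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (skp_case INTEGER, name_full TEXT)")
conn.executemany("INSERT INTO client VALUES (?, ?)",
                 [(1, "ALICE"), (2, "TEST USER"), (3, None)])

# NULL NOT LIKE '%TEST%' evaluates to NULL (not true), so row 3 disappears
strict = conn.execute(
    "SELECT skp_case FROM client "
    "WHERE name_full NOT LIKE '%TEST%' ORDER BY skp_case"
).fetchall()

# An explicit IS NULL branch keeps the NULL row
with_null = conn.execute(
    "SELECT skp_case FROM client "
    "WHERE name_full NOT LIKE '%TEST%' OR name_full IS NULL "
    "ORDER BY skp_case"
).fetchall()

print(strict)     # [(1,)]
print(with_null)  # [(1,), (3,)]
```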
In the first case, there's a condition clause that will filter out some of the records from the result. In the second case, there's no condition clause on the master table. What you're doing is actually ``` select * from credit_case cc left join [SUBTABLE] ``` so that will definitely give you a complete list of the master table, `credit_case`
null LIKE '%text%'?
[ "", "sql", "oracle", "null", "sql-like", "" ]
I'm confused about writing a SQL query to get the date with the maximum total number of files downloaded. My table contains 3 fields: `sno`, `da_te`, `fileName` (maybe there is no need for `fileName` in this situation). `sno` is an integer, `da_te` is a default timestamp, and `fileName` is a varchar, so for example my table looks like this: ``` sno da_te fileName 1 2015-10-18 11:13:00 xyz.pdf 2 2015-10-18 11:13:00 xxy.pdf 3 2015-10-18 11:13:00 xyy.pdf 4 2015-10-18 11:13:00 yyz.pdf 5 2015-10-18 11:13:00 xyz.pdf 6 2015-10-17 11:13:00 xzz.pdf 7 2015-10-17 11:13:00 zyz.pdf 8 2015-10-17 11:13:00 xyx.pdf 9 2015-10-16 11:13:00 xyy.pdf 10 2015-10-16 11:13:00 xyz.pdf ``` So, as the result we should get: maximum number of files downloaded = 5, and the date with the most downloads = 2015-10-18. I want to know how to write the SQL query for the above situation.
You can use `group by` with `count`. ``` select date(da_te), count(*) from tablename group by date(da_te) order by count(*) desc limit 1 ```
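A quick sanity check of this query against the sample data, run through SQLite from Python (filenames invented, counts matching the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE downloads "
             "(sno INTEGER PRIMARY KEY, da_te TEXT, fileName TEXT)")
# 5 downloads on the 18th, 3 on the 17th, 2 on the 16th
for day, n in [("2015-10-18", 5), ("2015-10-17", 3), ("2015-10-16", 2)]:
    for i in range(n):
        conn.execute("INSERT INTO downloads (da_te, fileName) VALUES (?, ?)",
                     (day + " 11:13:00", "file%d.pdf" % i))

busiest = conn.execute(
    "SELECT date(da_te), COUNT(*) FROM downloads "
    "GROUP BY date(da_te) ORDER BY COUNT(*) DESC LIMIT 1"
).fetchone()
print(busiest)  # ('2015-10-18', 5)
```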
``` SELECT * FROM ( SELECT @n := @n+1, z.* FROM ( SELECT DATE(da_te), COUNT(*) FROM YOURTABLE GROUP BY DATE(DA_TE) ORDER BY COUNT(*) DESC ) z, (SELECT @n := 0) v ) x WHERE n = 1; ```
SQL: Getting the maximum number of files downloaded on which day
[ "", "mysql", "sql", "" ]
I have a query which returns a single row ``` SELECT 'abcde' type, NVL(SUM(something),0) value FROM tableA a, tableB b WHERE a.id=b.id; ``` In this case it returns the following if the condition is satisfied: ``` TYPE VALUE ------------- abcde 100 ``` If the condition is not satisfied, it won't return any row. In that case I need to return a default row, like: ``` TYPE VALUE ------------- abcde 0 ``` I tried using WHERE NOT EXISTS, but in this case I have to include the large query in WHERE NOT EXISTS(my\_large\_query). ``` SELECT 'abcde' type, NVL(SUM(something),0) value FROM tableA a, tableB b WHERE a.id=b.id GROUP BY 'abcde' UNION SELECT 'abcde' type, 0 value FROM dual WHERE NOT EXISTS ( SELECT 'abcde' type, NVL(SUM(something),0) value FROM tableA a, tableB b WHERE a.id=b.id ); ``` Is there any other way I can do this instead? And is there any performance issue if I use `NOT EXISTS`?
You want to select one record with 'abcde', so select this from dual. You get the sum in a subquery. ``` select 'abcde' type, nvl( ( SELECT SUM(something) FROM tableA a JOIN tableB b ON a.id=b.id ), 0) as value from dual; ``` UPDATE: If you want more aggregates from the tables, you could do this with three subqueries, but it would be inefficient to run roughly the same query thrice. So you could use an outer join instead. As your tables' aggregates are totally unrelated to the dual table, however, this looks a bit strange, but well, the query *is* strange, consisting of an outer-cross join :-) ``` select 'abcde' as type, nvl(sum_current, 0) as sum_curr, nvl(sum_previous, 0) as sum_prev from dual left join ( select sum(data_current) as sum_current, sum(data_prev) as sum_previous from tablea a join tableb b on a.id = b.id ) on 1 = 1; -- cross-outer joining ``` And here is the same without the `1 = 1` dummy ON clause. We select the type 'abcde' along with the aggregations and use this for the join. ``` select 'abcde' as type, nvl(sum_current, 0) as sum_curr, nvl(sum_previous, 0) as sum_prev from (select 'abcde' as type from dual) left join ( select 'abcde' as type, sum(data_current) as sum_current, sum(data_prev) as sum_previous from tablea a join tableb b on a.id = b.id ) using (type); ``` Choose whichever query you like better.
You could use an **[LEFT|RIGHT]OUTER JOIN**. For example, using standard EMP and DEPT table in SCOTT schema: ``` SQL> SELECT d.deptno, 2 NVL(e.empno, 0) empno 3 FROM emp e 4 RIGHT OUTER JOIN dept d 5 ON e.deptno = d.deptno 6 AND d.deptno IN (30,40); DEPTNO EMPNO ---------- ---------- 10 0 20 0 30 7499 30 7521 30 7654 30 7698 30 7844 30 7900 40 0 9 rows selected. ``` In above output, you can see that there is an additional row for **DEPT = 40** using **RIGHT OUTER JOIN**. Try this query: ``` SELECT 'abcde' type, NVL(SUM(something),0) value FROM tableA a LEFT OUTER JOIN tableB b ON a.id=b.id GROUP BY type; ```
Return default row without NOT EXISTS
[ "", "sql", "oracle", "" ]
Suppose that I have a table with the following columns: ``` CREATE TABLE users { id INTEGER PRIMARY KEY, login VARCHAR(50) UNIQUE, password VARCHAR(50) } ``` And I need to create a table which will store one cookie for each account. What's better -- to create a foreign key that refers to the `login` column from the first table ``` CREATE TABLE cookies { id INTEGER PRIMARY KEY, cookie VARCHAR(50), login VARCHAR(50) FOREIGN KEY REFERENCES users(login) } ``` or to create a foreign key that refers to the `id` column? ``` CREATE TABLE cookies { id INTEGER PRIMARY KEY, cookie VARCHAR(50), expires DATETIME, id_user INTEGER FOREIGN KEY REFERENCES users(id) } ``` And why?
OK, as you have several columns in the cookie table, and since that is a separate entity, I agree it makes sense to put those additional columns in a separate table. What about PKs and FKs? Since you have only one row per user, I would make `userid` a `primary key` and `foreign key` simultaneously: ``` cookies { id_user(pk, fk), cookie, expires } ```
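A quick SQLite sketch of this design (column types simplified) - the shared primary key is what enforces at most one cookie row per user:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE users "
             "(id INTEGER PRIMARY KEY, login TEXT UNIQUE, password TEXT)")
conn.execute("CREATE TABLE cookies ("
             " id_user INTEGER PRIMARY KEY REFERENCES users(id),"
             " cookie TEXT, expires TEXT)")
conn.execute("INSERT INTO users (id, login) VALUES (1, 'alice')")
conn.execute("INSERT INTO cookies VALUES (1, 'abc123', '2015-12-31')")

# A second cookie for the same user violates the primary key
try:
    conn.execute("INSERT INTO cookies VALUES (1, 'def456', '2016-01-01')")
    second_allowed = True
except sqlite3.IntegrityError:
    second_allowed = False
print(second_allowed)  # False
```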
``` CREATE TABLE cookies { id INTEGER PRIMARY KEY, cookie VARCHAR(50), login VARCHAR(50) FOREIGN KEY REFERENCES users(login) } ``` I think this is the better option, as you need to communicate with `login`.
Foreign key -- id or just UNIQUE column
[ "", "mysql", "sql", "" ]
I have two tables: ``` table 1: rN rnk rid rsn rsi tid rd r abc_1m xy_mfgabc1m_z 1 penta 1 27 2009-01-01 1.2 abc_1m xy_mfgabc1m_z 1 penta 1 27 2009-01-02 0.3 abc_3m xy_mfgabc3m_z 2 penta 1 30 2009-01-01 0.6 abc_3m xy_mfgabc3m_z 2 penta 1 30 2009-01-02 0.4 ``` * rN is a list of 100+ values * rnk is also a list of 100+ values * rid also (this and tid have a code depending on the rN) * rsn is always the same * rsi is always the same * tid is a list of 100+ values (this and rid have a code depending on the rN) * rd date from 2009 till asofdate * r are some numeric values table 1 has over 100,000 rows table 2 is smaller, at ~2,000 rows. ``` table 2: rN rnk rid rsn rsi tid rd r tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-01 1.7 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-02 0.7 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-01 0.2 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-02 0.9 ``` I want to have everything from table 1 and just append table 2 to table 1. expected table: ``` table 3 : rN rnk rid rsn rsi tid rd r abc_1m xy_mfgabc1m_z 1 penta 1 27 2009-01-01 1.2 abc_1m xy_mfgabc1m_z 1 penta 1 27 2009-01-02 0.3 abc_3m xy_mfgabc3m_z 2 penta 1 30 2009-01-01 0.6 abc_3m xy_mfgabc3m_z 2 penta 1 30 2009-01-02 0.4 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-01 1.7 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-02 0.7 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-01 0.2 tdf_1y xy_mfgtdf1y_z 90 penta 1 94 2009-01-02 0.9 ``` If I want everything from the first table and everything from the second table, this should be a **full outer join**, right? My problem is that I don't know how to group this by, because if I do something like this, I will have everything from table1 and nothing from table2 ``` select one.* from table1 join table2 two on one.rsn = two.rsn group by one.rN, one.rnk, one.rid, one.rsn, one.rsi, one.tid, one.rd, one.r order by rid ``` and if I don't group them, then I will have millions of records, hence I must group it somehow. 
I think I'm not using the grouping correctly, and maybe not the right columns to join on. Any advice would be well received, since this seems so simple yet I can't see the solution. Thanks ## edit: Thank you all for your answers. You all gave me important information which I had forgotten about. I totally missed the fact that I could use union. That being said, I chose as the accepted answer the first one posted (since more or less you all said the same thing - I upvoted all of you since every answer gave an insight into how I should be doing this, thank you again)
Just use the `UNION` operator: ``` SELECT t1.* FROM table1 t1 UNION ALL SELECT t2.* FROM table2 t2 ``` A `JOIN` is not the way to go because that would leave your result set with new columns corresponding to both `table1` and `table2`. As a rule of thumb, when you want to stack 2 tables together you can use `UNION` and when you want to bring in new columns, or refine existing columns, you can use a `JOIN`.
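A small runnable sketch of the stacking behaviour, using SQLite with invented data (the result is sorted only so the output is deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a INTEGER, b TEXT)")
conn.execute("CREATE TABLE t2 (a INTEGER, b TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, "x"), (2, "y")])
conn.execute("INSERT INTO t2 VALUES (3, 'z')")

# UNION ALL keeps the column shape (2 columns) and stacks the rows (3 rows)
stacked = sorted(conn.execute(
    "SELECT * FROM t1 UNION ALL SELECT * FROM t2").fetchall())
print(stacked)  # [(1, 'x'), (2, 'y'), (3, 'z')]
```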
> If i want everything from the first table and everything from the second table, this should be a full outer join, right? Based on your description of the problem, and your example "table 3", no. You don't want a JOIN at all, because there's no relationship between the rows in the two tables. What you want is a UNION, because you want a new result set that's just all the rows from table1 plus all the rows from table2. So (assuming they have their columns in the same order): ``` SELECT * FROM table1 UNION SELECT * FROM table2 ``` Because of the nature of UNION, it will do essentially what your GROUP BY seems to be intended to do: remove any duplicate rows.
SQL Server - grouping correctly in a join
[ "", "sql", "sql-server", "t-sql", "join", "sql-server-2012", "" ]
I have a dataset (query in this case) that kind of looks like this: ``` ZipCode Territory Date ---------------------------------- 12345 Unknown 9/30/2015 12345 Unknown 9/25/2015 12345 Istanbul 9/20/2015 12345 Istanbul 9/10/2015 12345 Unknown 9/5/2015 12345 Istanbul 8/31/2015 12345 Istanbul 8/21/2015 12345 Unknown 8/16/2015 12345 Constantinople 8/11/2015 12345 Constantinople 8/1/2015 12345 Constantinople 7/22/2015 12345 Constantinople 7/12/2015 ``` I'd like to return the max date and **KNOWN** territory name for each zip code (one row per zipcode). The end result I would expect from the previous query would turn into this: ``` ZipCode Territory Date ---------------------------------- 12345 Istanbul 9/20/2015 ``` My current best efforts only got me here: ``` ZipCode Territory Date --------------------------------- 12345 Istanbul 9/20/2015 12345 Constantinople 8/11/2015 ``` Help! This is a huge gap in my SQL writing abilities.
You need two queries. The first to retrieve the max date. ``` SELECT Zipcode, Max([Date]) AS MaxDate FROM <TableName> Group BY Zipcode ``` Let's save this query as qryMaxDateForZipCode The second to query the territory for that date. ``` SELECT ZipCode, Territory FROM qryMaxDateForZipCode INNER JOIN <TableName> ON qryMaxDateForZipCode.MaxDate = <TableName>.[Date] ``` However you may have two territories qualifying for the max date. What do you want to do then?
This is essentially a greatest-n-per-group question. Group your data by zip code to grab the latest date per zip code. Then match the zip code-latest date pairs with your data: ``` SELECT YourTable.* FROM YourTable INNER JOIN ( SELECT ZipCode, MAX(Date) AS Date_Max FROM YourTable WHERE Territory <> 'Unknown' GROUP BY ZipCode ) AS TempGroup ON YourTable.ZipCode = TempGroup.ZipCode AND YourTable.Date = TempGroup.Date_Max ``` Result (tested in MS-Access 2007): ``` ZipCode Territory Date ---------------------------------- 12345 Istanbul 9/20/2015 ```
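A runnable check of this pattern with SQLite (data trimmed, and dates written in ISO form - the m/d/yyyy strings in the question would not compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE territories "
             "(ZipCode INTEGER, Territory TEXT, TDate TEXT)")
conn.executemany("INSERT INTO territories VALUES (?, ?, ?)", [
    (12345, "Unknown",        "2015-09-30"),
    (12345, "Istanbul",       "2015-09-20"),
    (12345, "Istanbul",       "2015-08-31"),
    (12345, "Constantinople", "2015-08-11"),
])

# Latest date per zip code among KNOWN territories, joined back for details
latest_known = conn.execute(
    "SELECT t.* FROM territories t "
    "INNER JOIN (SELECT ZipCode, MAX(TDate) AS MaxDate FROM territories "
    "            WHERE Territory <> 'Unknown' GROUP BY ZipCode) g "
    "ON t.ZipCode = g.ZipCode AND t.TDate = g.MaxDate"
).fetchall()
print(latest_known)  # [(12345, 'Istanbul', '2015-09-20')]
```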
Select a value based on the aggregate value of another column
[ "", "sql", "ms-access", "ms-access-2010", "" ]
Hello everyone, I have two tables called `cash_billings_returns_articles` and `cash_billings_bills_articles`. I need to subtract these tables. For example: Table: `cash_billings_returns_articles` [![enter image description here](https://i.stack.imgur.com/ge5qb.png)](https://i.stack.imgur.com/ge5qb.png) Table: `cash_billings_bills_articles` [![enter image description here](https://i.stack.imgur.com/Hg1CA.png)](https://i.stack.imgur.com/Hg1CA.png) I need to return the subtraction of the `cashbillingBRCarticle_total` column, like this: ``` cashbilling_id article_id cashbillingBRCarticle_total 55 3564 0 55 1871 0 55 9134 0 55 950 0 55 4402 0 55 2156 0 55 2228 0 55 2017 -90 55 3397 0 ``` These rows represent the billing articles. They need to be matched on `cashbilling_id` and `article_id` for each subtraction. Any ideas?
Try this solution ``` SELECT t.cashbilling_id AS cashbilling_id_bills, s.cashbilling_id AS cashbilling_id_returns, t.article_id, t.cashbillingBRCarticle_total - IFNULL(s.cashbillingBRCarticle_total, 0) AS diff FROM cash_billings_bills_articles t LEFT OUTER JOIN cash_billings_returns_articles s ON t.cashbilling_id = s.cashbilling_id AND t.article_id = s.article_id ```
Try this ``` SELECT t2.cashbilling_id, t2.cashbillingbill_id, t2.article_id, (t1.cashbillingBRCarticle_total - t2.cashbillingBRCarticle_total) cashbillingBRCarticle_total FROM table2 t2 RIGHT JOIN table1 t1 ON t1.article_id = t2.article_id ```
Subtract two MySQL tables with negatives in non-existent rows
[ "", "mysql", "sql", "" ]
I have two queries that are "merged" with a union: ``` A union B ``` I want the result to be ordered by a specific column. For example: **A** ``` id sum dateissue 5 30 1.1.15 8 14 2.4.15 ``` **B** ``` id sum dateissue 13 30 1.4.12 8 14 4.4.15 ``` Desired result after the union, **with order by** on the dateissue column: ``` id sum dateissue 13 30 1.4.12 : from B 5 30 1.1.15 : from A 8 14 2.4.15 : from A 8 14 4.4.15 : from B ``` I tried to do ``` (A) Union (B) order by dateissue ``` but it gives an error: > ERROR: column "dateissue" does not exist How can I sort the result of the union?
You just need to make sure that each select actually extracts `dateissue`, i.e. ``` select id, sum, dateissue from a where... union select id, sum, dateissue from b where... order by dateissue; ``` To clarify, the 'order by' is applied to the complete resultset (after the union).
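Here is a small SQLite sketch showing the ORDER BY acting on the whole union (dates written in ISO form so they sort correctly as text, and the `sum` column renamed to `total` since SUM is a function name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER, total INTEGER, dateissue TEXT)")
conn.execute("CREATE TABLE b (id INTEGER, total INTEGER, dateissue TEXT)")
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(5, 30, "2015-01-01"), (8, 14, "2015-04-02")])
conn.executemany("INSERT INTO b VALUES (?, ?, ?)",
                 [(13, 30, "2012-04-01"), (8, 14, "2015-04-04")])

# The trailing ORDER BY sorts the combined result set, not just table b
merged = conn.execute(
    "SELECT id, total, dateissue FROM a "
    "UNION ALL "
    "SELECT id, total, dateissue FROM b "
    "ORDER BY dateissue"
).fetchall()
print(merged)
# [(13, 30, '2012-04-01'), (5, 30, '2015-01-01'),
#  (8, 14, '2015-04-02'), (8, 14, '2015-04-04')]
```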
``` SELECT * FROM ( SELECT id, sum, dateissue FROM A UNION ALL SELECT id, sum, dateissue FROM B ) dum ORDER BY dateissue ``` the ORDER BY is applied to the full result of the SELECT \* ``` SELECT * FROM ( SELECT id, sum, dateissue FROM A UNION ALL SELECT id, sum, dateissue FROM B ) dum -> id sum dateissue 5 30 1.1.15 8 14 2.4.15 13 30 1.4.12 8 14 4.4.15 ORDER BY dateissue -> id sum dateissue 13 30 1.4.12 5 30 1.1.15 8 14 2.4.15 8 14 4.4.15 ``` you can use UNION ALL: [What is the difference between UNION and UNION ALL?](https://stackoverflow.com/questions/49925/what-is-the-difference-between-union-and-union-all) in case the two queries can return the same row
How to order by union result?
[ "", "sql", "postgresql", "" ]
I have a database with three tables * Products * Fieldnames * Field values Products: * ID * Name Fieldnames: * NameID * DisplayName Field values * ProductID * NameID * FieldValue I am trying to get all products where the date in the field with the name 'start' is earlier than today and the date in the field 'end' is later than today. My current solution: ``` SELECT ID FROM Product WHERE (SELECT FieldValue FROM FieldValues WHERE NameID = ( SELECT NameID FROM Fieldnames WHERE DisplayName = 'start')) < today AND (SELECT FieldValue FROM FieldValues WHERE NameID = ( SELECT NameID FROM Fieldnames WHERE DisplayName = 'end')) > today ``` Where today is filled in by C# code. But this does not return the desired output. What am I missing?
Try ... ``` SELECT ID FROM Products, FieldNames, FieldValues WHERE FieldValues.ProductID = Products.ID AND FieldValues.NameID = FieldNames.NameID AND ( ( DisplayName = "start" AND FieldValue < today ) OR ( DisplayName = "end" AND FieldValue > today ) ); ``` Without test data I could not test this out at first, but fortunately enough time has become available for me to do so. In the future, if you post a script for creating the tables and populating them with test data, we can check our answers for any bugs. Firstly I shall explain the reasoning behind my answer, after which I shall list the script I used to test with. I started with - ``` SELECT * FROM Products, FieldNames, FieldValues WHERE FieldValues.ProductID = Products.ID AND FieldValues.NameID = FieldNames.NameID ``` I used `SELECT *` rather than `SELECT ID` here so I could examine the data to determine if I was getting accurate results. It works just as well either way. I used `FROM Products, FieldNames, FieldValues` rather than just `FROM Products` since we are referencing all three tables in the `WHERE` clause even though we are returning just the ID's. I tried it with just `FROM Products` - it complained (and didn't work). I added - ``` WHERE FieldValues.ProductID = Products.ID AND FieldValues.NameID = FieldNames.NameID ``` to join the tables, which was made necessary by the fact that FieldName and FieldValue are in separate tables. I included a join to Products as well just in case you wish to return any other fields from there in addition to ID. If you only wish to return ID, then you should change the first four lines to - ``` SELECT * FROM FieldNames, FieldValues WHERE FieldValues.NameID = FieldNames.NameID ```
Even if it wouldn't, it's a nice way of isolating our out-of-range argument. Similarly, the innermost brackets help preserve the (start, <) and (end, >) conditions. I tested the final statement by substituting `45` for `value`, a la - ``` SELECT ID FROM Products, FieldNames, FieldValues WHERE FieldValues.ProductID = Products.ID AND FieldValues.NameID = FieldNames.NameID AND ( ( DisplayName = "start" AND FieldValue < 45 ) OR ( DisplayName = "end" AND FieldValue > 45 ) ); ``` The results were as desired. The following is the script I used to create and populate the tables... ``` CREATE DATABASE Products20151020; USE Products20151020; CREATE TABLE Products ( ID INT NOT NULL AUTO_INCREMENT, Name VARCHAR( 50 ) NOT NULL, PRIMARY KEY ( ID ) ); CREATE TABLE FieldNames ( NameID INT NOT NULL AUTO_INCREMENT, DisplayName VARCHAR( 50 ) NOT NULL, PRIMARY KEY ( NameID ) ); CREATE TABLE FieldValues ( fldID INT NOT NULL AUTO_INCREMENT, ProductID INT NOT NULL, NameID INT NOT NULL, FieldValue INT NOT NULL, PRIMARY KEY ( fldID ), FOREIGN KEY ( ProductID ) REFERENCES Products( ID ), FOREIGN KEY ( NameID ) REFERENCES FieldNames( NameID ) ); INSERT INTO Products SET Name = "Name 001"; INSERT INTO Products SET Name = "Name 002"; INSERT INTO Products SET Name = "Name 003"; INSERT INTO Products SET Name = "Name 004"; INSERT INTO Products SET Name = "Name 005"; INSERT INTO Products SET Name = "Name 006"; INSERT INTO Products SET Name = "Name 007"; INSERT INTO Products SET Name = "Name 008"; INSERT INTO Products SET Name = "Name 009"; INSERT INTO Products SET Name = "Name 010"; INSERT INTO FieldNames SET DisplayName = "start"; INSERT INTO FieldNames SET DisplayName = "end"; INSERT INTO FieldValues SET ProductID = 1, NameID = 1, FieldValue = 26; INSERT INTO FieldValues SET ProductID = 5, NameID = 1, FieldValue = 46; INSERT INTO FieldValues SET ProductID = 3, NameID = 1, FieldValue = 45; INSERT INTO FieldValues SET ProductID = 7, NameID = 1, FieldValue = 44; INSERT INTO FieldValues SET 
ProductID = 10, NameID = 1, FieldValue = 100; INSERT INTO FieldValues SET ProductID = 8, NameID = 1, FieldValue = 10; INSERT INTO FieldValues SET ProductID = 9, NameID = 1, FieldValue = 32; INSERT INTO FieldValues SET ProductID = 2, NameID = 1, FieldValue = 99; INSERT INTO FieldValues SET ProductID = 10, NameID = 2, FieldValue = 26; INSERT INTO FieldValues SET ProductID = 9, NameID = 2, FieldValue = 46; INSERT INTO FieldValues SET ProductID = 7, NameID = 2, FieldValue = 45; INSERT INTO FieldValues SET ProductID = 6, NameID = 2, FieldValue = 44; INSERT INTO FieldValues SET ProductID = 4, NameID = 2, FieldValue = 100; INSERT INTO FieldValues SET ProductID = 3, NameID = 2, FieldValue = 10; INSERT INTO FieldValues SET ProductID = 1, NameID = 2, FieldValue = 32; INSERT INTO FieldValues SET ProductID = 2, NameID = 2, FieldValue = 99; ``` If anyone has any questions or general comments, then please feel free to post a comment.
I think this is what you are trying to do. ``` SELECT p.ID FROM Product p JOIN FieldValues fv on p.id = fv.productid JOIN Fieldnames fn on fn.nameid = fv.nameid WHERE (DisplayName = 'start' and fv.FieldValue < today) OR (DisplayName = 'end' and fv.FieldValue > today) ```
Querying a one-to-many relation with a WHERE condition
[ "", "mysql", "sql", "" ]
I have a single table that contains questions with corresponding references to another table and field that contain the answers. Something like: [![enter image description here](https://i.stack.imgur.com/ShcST.jpg)](https://i.stack.imgur.com/ShcST.jpg) I would like to query the questions table and return QID, QuestionText and the value contained in the [ResponseTable].[ResponseField] for each QID. The design seemed flexible at the time. However, the app developer is expecting a stored procedure and the SQL developer was counting on an in-app solution for this issue. I am at the end of my rope trying to build this query. How would you suggest accomplishing this task?
I don't think you'll like hearing this answer because it will likely mean some major rework, but I think it's the right answer. Get rid of the questions table and put the questions into new Question fields in the Client1, Client9, and Jobs tables; one for each response. For example the Client1 table will have these fields: ``` ColorPref ColorPrefQuestion Rating RatingQuestion ``` ...and so on Working around that design will be manageable where working around the design you have now will be a headache.
Until you get to the rewrite you mentioned, consider the idea of using a view to bring these response tables together. ``` CREATE VIEW ClientResponses AS SELECT QID, ResponseField FROM [Client1] UNION SELECT QID, ResponseField FROM [Jobs] UNION SELECT QID, ResponseField FROM [Client9] -- ..... add the new tables as they are created ``` This will 1. Avoid dynamic SQL 2. Give you a single place to maintain the querying 3. Provide a pretty simple, readable way to cobble this together
SQL JOIN based on table contents
[ "", "sql", "sql-server", "" ]
I have 2 tables in an SQL database. ``` SELECT name from table1 ORDER BY name SELECT name from table2 ORDER BY name ``` I want to create a stored procedure with a union select that creates one table with the following output - five rows from table1 and one row from table2, then 5 rows from table1 and one row from table2, etc: ``` row 1 from table1 row 2 from table1 row 3 from table1 row 4 from table1 row 5 from table1 row 6 from table2 row 7 from table1 row 8 from table1 row 9 from table1 row 10 from table1 row 11 from table1 row 12 from table2 etc ``` Is that possible? If yes, any hints? Thanks
Using `ROW_NUMBER` and some math: ``` WITH Cte AS( SELECT name, rn = ROW_NUMBER() OVER(ORDER BY name) + ((ROW_NUMBER() OVER(ORDER BY name) - 1) / 5) FROM table1 UNION ALL SELECT name, rn = ROW_NUMBER() OVER(ORDER BY name) * 6 FROM table2 ) SELECT name FROM Cte ORDER BY rn ``` The above will display 5 rows from `table` followed by 1 row from `table2`, and so on. [**SQL Fiddle**](http://sqlfiddle.com/#!6/b5d43/1/0)
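The interleaving arithmetic can be checked outside SQL entirely; this plain-Python sketch assigns the same rn values the CTE computes and sorts on them (row labels invented):

```python
# Emulate the rn arithmetic: table1 rows get rn = i + (i - 1) // 5,
# table2 rows get rn = 6 * j (1-based i, j), which leaves exactly one
# gap after every five table1 rows for a table2 row to fill.
t1 = ["t1-row%d" % i for i in range(1, 13)]  # 12 rows from table1
t2 = ["t2-row%d" % j for j in range(1, 3)]   # 2 rows from table2

keyed = [(i + (i - 1) // 5, name) for i, name in enumerate(t1, 1)]
keyed += [(6 * j, name) for j, name in enumerate(t2, 1)]
merged = [name for _, name in sorted(keyed)]

print(merged[:7])
# ['t1-row1', 't1-row2', 't1-row3', 't1-row4', 't1-row5', 't2-row1', 't1-row6']
```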
You can use a temporary table (or add a column) for both tables, with a column called, for example, "GroupId" (INT). ``` create table #temp ( groupId int, tbl int, name nvarchar(...)) ``` For table1, fill this column with CONVERT(INT, ROW\_NUMBER() OVER (ORDER BY name) / 5) For table2, fill this column with ROW\_NUMBER() OVER (ORDER BY name) ``` insert into #temp ( groupId, tbl, name) select CONVERT(INT, ROW_NUMBER() OVER (ORDER BY name) / 5.0), 1, name from table1 insert into #temp ( groupId, tbl, name) select ROW_NUMBER() OVER (ORDER BY name), 2, name from table2 ``` Then use ``` SELECT name from #temp ORDER BY groupId, tbl, name ``` NB: Use of a temporary table can be avoided by using a CTE, I think
Select rows from 2 tables where the first 5 rows come from one table then 1 from the second table
[ "", "sql", "sql-server", "stored-procedures", "" ]
Remove duplicate words from a column cell. For example, the data column is a single field: '`BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA` ' But I want it like this: "BLACKHEATH" should appear only once, and if any other duplicate entry is found it should be removed ``` BLACKHEATH COLCHESTER CO2 0AA ``` I can do this through code, but I've no idea how to remove duplicate words through SQL in SQL Server. I've been instructed to do this through SQL. Any help would be appreciated.
``` declare @text varchar(max) = 'BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA' declare @i int = 1; declare @nextCharacter varchar(max) declare @word varchar(max)='' declare @lastChar varchar(1) = substring(@text,len(@text),1) --select @lastChar declare @lastIndex varchar(max) = charindex(@lastChar,@text,len(@text)) select @lastIndex --select len(@text) create table #tmp (id int,word varchar(max)) while (@i <= len(@text)) begin select @nextCharacter= substring(@text, @i, 1) --select @nextCharacter,@i, @lastChar, len(@text) if (@nextCharacter !=' ') begin select @word = @word + @nextCharacter end else begin insert into #tmp values(@i,@word) set @word='' end if (@i = len(@text) and @nextCharacter= @lastChar) begin insert into #tmp values(@i,@word) end set @i = @i +1 end; select id,word from #tmp order by id; WITH tblTemp as ( SELECT ROW_NUMBER() Over(PARTITION BY word ORDER BY id) As RowNumber,* FROM #tmp ) --select * from tblTemp select * into #tmp2 FROM tblTemp where RowNumber =1 declare @newString varchar(max)='' select @newString = @newString +word+' ' from #tmp2 order by id select rtrim(@newString) drop table #tmp2 drop table #tmp ```
If the order doesn't matter, you could do it quite easily: ``` DECLARE @string VARCHAR(100) = 'BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA'; SELECT @string AS Source , LTRIM(( SELECT DISTINCT ' ' + column1 AS [text()] FROM dbo.SplitString(@string, ' ') FOR XML PATH('') )) AS UniqueWords; ``` What's the idea here? 1. The `dbo.SplitString` function splits your string into rows based on a delimiter (`space` in your case). See [this article](http://sqlperformance.com/2012/07/t-sql-queries/split-strings) to find a SplitString function that suits your needs best. 2. The `DISTINCT` keyword removes dupes 3. Using `FOR XML PATH('')` we concatenate them back together. **Result:** ``` ╔══════════════════════════════════════════════╦═══════════════════════════════╗ ║ Source ║ UniqueWords ║ ╠══════════════════════════════════════════════╬═══════════════════════════════╣ ║ BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA ║ 0AA BLACKHEATH CO2 COLCHESTER ║ ╚══════════════════════════════════════════════╩═══════════════════════════════╝ ``` If you need to keep the order, you'd have to create a function that stores your terms in the original order (probably using [ROW\_NUMBER()](https://msdn.microsoft.com/en-us/library/ms186734.aspx)) and has an order for each separate term (to remove dupes), and then you could calculate where each word needs to be. 
I didn't bother creating it, but it should output this result for your string: ``` ╔═══════════╦═══════════╦════════════╗ ║ WordOrder ║ TermOrder ║ Term ║ ╠═══════════╬═══════════╬════════════╣ ║ 1 ║ 1 ║ BLACKHEATH ║ ║ 2 ║ 1 ║ 0AA ║ ║ 3 ║ 2 ║ BLACKHEATH ║ ║ 4 ║ 1 ║ COLCHESTER ║ ║ 5 ║ 1 ║ CO2 ║ ║ 6 ║ 2 ║ 0AA ║ ╚═══════════╩═══════════╩════════════╝ ``` Which could be reused in such a query (`@Splitted` is the table above): ``` SELECT @string AS Source , LTRIM(( SELECT ' ' + Term AS [text()] FROM @Splitted WHERE TermOrder = 1 ORDER BY WordOrder FOR XML PATH('') )) AS UniqueWords; ``` It outputs this string: ``` ╔══════════════════════════════════════════════╦═══════════════════════════════╗ ║ Source ║ UniqueWords ║ ╠══════════════════════════════════════════════╬═══════════════════════════════╣ ║ BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA ║ BLACKHEATH 0AA COLCHESTER CO2 ║ ╚══════════════════════════════════════════════╩═══════════════════════════════╝ ``` P.S. Why were you instructed to do this in SQL? SQL Server doesn't deal well with text data, so I'd recommend doing this in code, as the SQL approach might affect your performance.
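To back up that last point: if you do end up doing it in application code, an order-preserving de-duplication is a one-liner in, for example, Python:

```python
s = "BLACKHEATH 0AA BLACKHEATH COLCHESTER CO2 0AA"
# dict.fromkeys keeps the first occurrence of each word and
# preserves insertion order (guaranteed since Python 3.7)
deduped = " ".join(dict.fromkeys(s.split()))
print(deduped)  # BLACKHEATH 0AA COLCHESTER CO2
```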
Remove duplicate words from a column cell using SQL
[ "", "sql", "sql-server", "replace", "character", "" ]
I have two tables: Table **A**: ``` +--+----+ |id|name| +--+----+ |0 |foo | |1 |bar | |2 |baz | +-------+ ``` Table **B**: ``` +--+----+ |A |cond| +--+----+ |0 |X | |1 |Y | +-------+ ``` Where the B.A column references the A.id value. I want to select all rows from **A** that have no match in **B** when B.cond = 'X'. So the result should be: * bar * baz How do I write this SQL query with a join (or a similarly performant method)?
``` SELECT A.* FROM A LEFT JOIN B ON A.id = B.A AND B.cond = 'X' WHERE B.A IS NULL ``` This query joins the tables based on the conditions you specified, and then only selects the rows where there's no match in table `B`.
You can use `NOT EXISTS` ``` SELECT a.id, a.name FROM A a WHERE NOT EXISTS ( SELECT 1 FROM B WHERE b.A = a.id AND b.cond = 'X' ) ``` However, I always forget that MySQL is the only(?) RDBMS which has problems optimizing an `EXISTS`/`NOT EXISTS`. So it's slightly more efficient to use a `LEFT JOIN` approach. <http://explainextended.com/2009/09/18/not-in-vs-not-exists-vs-left-join-is-null-mysql/> In MS SQL Server it's better to use the `NOT EXISTS`. <http://sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join>
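Both anti-join forms return the same rows; a quick SQLite check with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, name TEXT);
    CREATE TABLE b (a INTEGER, cond TEXT);
    INSERT INTO a VALUES (0, 'foo'), (1, 'bar'), (2, 'baz');
    INSERT INTO b VALUES (0, 'X'), (1, 'Y');
""")

left_join = conn.execute(
    "SELECT a.name FROM a "
    "LEFT JOIN b ON a.id = b.a AND b.cond = 'X' "
    "WHERE b.a IS NULL ORDER BY a.id"
).fetchall()
not_exists = conn.execute(
    "SELECT a.name FROM a WHERE NOT EXISTS "
    "(SELECT 1 FROM b WHERE b.a = a.id AND b.cond = 'X') "
    "ORDER BY a.id"
).fetchall()
print(left_join)                # [('bar',), ('baz',)]
print(left_join == not_exists)  # True
```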
Exclude records if right table matches
[ "", "mysql", "sql", "" ]
Creating this table: ``` CREATE TABLE #Test (id int, name char(10), list int, priority int) INSERT INTO #Test VALUES (1, 'One', 1, 1) INSERT INTO #Test VALUES (2, 'Two', 2, 1) INSERT INTO #Test VALUES (3, 'Three', 3, 2) INSERT INTO #Test VALUES (4, 'Four', 4, 1) INSERT INTO #Test VALUES (5, 'THREE', 3, 1) ``` and ordering it by list and priority: ``` SELECT * FROM #Test ORDER BY list, priority 1 | One | 1 | 1 2 | Two | 2 | 1 5 | THREE | 3 | 1 3 | Three | 3 | 2 4 | Four | 4 | 1 ``` However, I want to step through the rows one by one, selecting the top one for each list ordered by priority, and start over when I get to the end. For example, with this simpler table: ``` 1 | One | 1 | 1 2 | Two | 2 | 1 3 | Three | 3 | 1 4 | Four | 4 | 1 ``` and this query: ``` SELECT TOP 1 * FROM #Test ORDER BY (CASE WHEN list>@PreviousList THEN 1 ELSE 2 END) ``` If `@PreviousList` is the `list` for the previous row I got, then this will select the next row and gracefully jump back to the top when I have selected the last row. But there are rows that have the same `list` and are only ordered by `priority` - like my first example: ``` 1 | One | 1 | 1 2 | Two | 2 | 1 5 | THREE | 3 | 1 3 | Three | 3 | 2 4 | Four | 4 | 1 ``` Here `id=3` should be skipped because `id=5` has the same list ordering and a better priority. The only way I can think of doing this is simply by first ordering the entire list by list and priority, and then running the order by that goes through the rows one by one, like this: ``` SELECT TOP 1 * FROM ( SELECT * FROM #Test ORDER BY list, priority ) ORDER BY (CASE WHEN list>@PreviousList THEN 1 ELSE 2 END) ``` But of course I cannot order by an already ordered subquery, and I get the error: ``` The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified. ``` Is there any way I can get past this problem, or get it down to a single query and order by?
Another possible solution is to use a subquery to select the min priority grouped by list and join it back to the table for the rest of the details ``` SELECT T2.* FROM (SELECT MIN(priority) as priority, list FROM #Test GROUP BY list) AS T1 INNER JOIN #Test T2 ON T1.list = T2.list AND T1.priority = T2.priority ORDER BY T1.list, T1.priority ```
> I want to step through rows one by one selecting the top one for each > list ordered by priority, and start over when I get to the end. You can use the built in [`ROW_NUMBER`](https://msdn.microsoft.com/en-us/library/ms186734.aspx) function that is designed for these scenarios with `OVER(PARTITION BY name ORDER BY priority)` to do this directly: ``` WITH CTE AS ( SELECT *, ROW_NUMBER() OVER(PARTITION BY name ORDER BY priority) AS RN FROM #Test ) SELECT * FROM CTE WHERE RN = 1; ``` * [**Live DEMO**](https://data.stackexchange.com/stackoverflow/query/377826) The ranking number `rn` generated by `ROW_NUMBER() OVER(PARTITION BY name ORDER BY priority)` will rank each group of rows that has the same `name` ordered by `priority` then when you filtered by `WHERE rn = 1` it will remove all the duplicate with the same name and left only the first priority.
How to order an already ordered subquery
[ "", "sql", "sql-server", "" ]
I would like to take some data from a table from DB1 and insert some of that data to a table in DB2. How would one proceed to do this? This is what I've got so far: ``` CREATE VIEW old_study AS SELECT * FROM dblink('dbname=mydb', 'select name,begins,ends from study') AS t1(name varchar(50), register_start date, register_end date); /*old_study now contains the data I wanna transfer*/ INSERT INTO studies VALUES (nextval('studiesSequence'),name, '',3, 0, register_start, register_end) SELECT name, register_start, register_end from old_study; ``` This is how my table in DB2 looks: ``` CREATE TABLE studies( id int8 PRIMARY KEY NOT NULL, name_string VARCHAR(255) NOT NULL, description VARCHAR(255), field int8 REFERENCES options_table(id) NOT NULL, is_active INTEGER NOT NULL, register_start DATE NOT NULL, register_end DATE NOT NULL ); ```
``` INSERT INTO studies ( id ,name_string ,description ,field ,is_active ,register_start ,register_end ) SELECT nextval('studiesSequence') ,NAME ,'' ,3 ,0 ,register_start ,register_end FROM dblink('dbname=mydb', 'select name,begins,ends from study') AS t1(NAME VARCHAR(50), register_start DATE, register_end DATE); ``` You can directly insert values that retured by `dblink()`(*that means no need to create a view*)
You should include the column names in both the `insert` and `select`: ``` insert into vip_employees(name, age, occupation) select name, age, occupation from employees; ``` However, your data structure is suspect. Either you should use a flag in `employees` to identify the "VIP employees". *Or* you should have a primary key in `employees` and use this primary key in `vip_employees` to refer to `employees`. Copying over the data fields is rarely the right thing to do, especially for columns such as age which are going to change over time. Speaking of that, you normally derive age from the date of birth, rather than storing it directly in a table.
Insert values of one table in a database to another table in another database
[ "", "sql", "postgresql", "psql", "" ]
I have a User domain and a Role domain and a working joinTable coded on the User side as ``` static hasMany = [ roles: Role ] ... static mapping = { table 'user_data' id column: 'employee_number', name: 'employeeNumber', generator: 'assigned', type: 'int' version false sort 'lastName' roles joinTable: [ name: 'user_role' ] } ``` I am trying to query the database to pull all users with a security officer role with ``` def roleInstance = Role.find { name == 'security_officer' } def secList = User.findAll("from User as u where u.roles = :roleInstance", [roleInstance:roleInstance]) ``` But I am getting the error ``` Class: com.microsoft.sqlserver.jdbc.SQLServerException Message: The value is not set for the parameter number 1. ``` What am I doing wrong?
I figured it out with a bunch of guessing and checking. ``` def roleInstance = Role.findByName("security_officer") def query = User.where { roles { id == roleInstance.id } } def securityOfficerList = query.list() ```
Roles is a hasMany relationship so I think following should work. ``` def secList = User.findAll("from User as u where u.roles in (:roleInstance)", [roleInstance:[roleInstance]]) ```
Grails joinTable query issue
[ "", "sql", "grails", "grails-orm", "ggts", "" ]
I have 3 tables ``` #Products [Pro_ID] ,[Pro_Name], #Stock [Stock_ID] ,[Pro_ID],[Warehouse_ID] ,[Qty] ,[Status] Warehouse [Warehouse_ID] ,[Name] ``` I am making a report for my asp.net project that shows the total quantity of products we have in each warehouse. I tried this query, which returns nothing except the `Pro_ID` header with no data (0 row(s) affected). ``` SELECT Pro_ID from Stock where Qty > 0 GROUP BY Pro_ID HAVING COUNT(*) = (SELECT COUNT(*) FROM Warehouse) ``` This is what I currently have in stock [![Stock table](https://i.stack.imgur.com/pZrXl.png)](https://i.stack.imgur.com/pZrXl.png)
if you just want to get what is in each warehouse: ``` SELECT w.Name, sum(s.Qty) FROM Stock s INNER JOIN Warehouse w ON s.Warehouse_ID = w.Warehouse_ID GROUP BY w.Warehouse_ID,w.Name ```
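If a per-product breakdown within each warehouse is also wanted, a sketch along the same lines (the `Products` table name follows the question's schema; adjust it if the table is actually a temp table):

```sql
SELECT w.Name AS warehouse, p.Pro_Name, SUM(s.Qty) AS total_qty
FROM Stock s
INNER JOIN Warehouse w ON s.Warehouse_ID = w.Warehouse_ID
INNER JOIN Products  p ON s.Pro_ID = p.Pro_ID
GROUP BY w.Warehouse_ID, w.Name, p.Pro_ID, p.Pro_Name
```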
``` SELECT Warehouse.[Warehouse_ID], Warehouse.[Name] #Products.[Pro_ID], #Products.[Pro_Name], #Stock.[Stock_ID], #Stock.[Qty], #Stock.[Status] FROM #Stock LEFT OUTER JOIN Warehouse ON #Stock.[Warehouse_ID] = Warehouse.[Warehouse_ID] LEFT OUTER JOIN #Products ON #Stock.[Pro_ID] = #Products.[Pro_ID] ``` Note that by starting with `#Stock` and outer joining to the two master tables, you only get results for which you have actual records in the `#Stock` table. If you wanted all products at all warehouses, you would select `FROM Warehouse CROSS JOIN #Products` to get every combo of product and warehouse, then outer join that to your `#Stock` table to get records where they exist, and use null substitution to plug zeroes where you had no record in your `#Stock` table.
How to get total quantity of products we have in each warehouse?
[ "", "sql", "sql-server-2008", "" ]
I have code in a SQL Server query that lists a tabular presentation of the sales of a particular tenant, PER YEAR (columns) and PER MONTH (rows). This is the part of the code which I think is relevant to post ``` SELECT tenantcode ,datename(month, date) [month] ,isnull(sum(case when year(DATE) = @Year1 then sales end), 0) as 'Year1' ,isnull(sum(case when year(DATE) = @Year2 then sales end), 0) as 'Year2' ,isnull(sum(case when year(DATE) = @Year3 then sales end), 0) as 'Year3' ,isnull(sum(case when year(DATE) = @Year4 then sales end), 0) as 'Year4' ,isnull(sum(case when year(DATE) = @Year5 then sales end), 0) as 'Year5' FROM TenantSales GROUP BY datename(month,date), tenantcode ORDER BY datepart(MM,DATENAME(MONTH, DATE) + '01 2000') ``` Please note that the @Year values are variables formulated to get the 5 years based on user selection, in this example 2008-2012. The particular tenant started in MAY 2008, so sales are only available starting May 2008. The code produces this output (for illustration): [![enter image description here](https://i.stack.imgur.com/ecJI2.jpg)](https://i.stack.imgur.com/ecJI2.jpg) What I want to achieve is to include all the months, like this [![enter image description here](https://i.stack.imgur.com/Yqv7m.jpg)](https://i.stack.imgur.com/Yqv7m.jpg) When the sales of the tenant start or are available from January onwards, the code works just fine; however in an instance like the one I mentioned, it does not. I am using SQL Server 2008
You would like to include all months, but are running a query against data which does not contain all months. You will need to join to another table containing all months: ``` SELECT tenants.tenantcode ,months.monthname ,isnull(sum(case when year(t.DATE) = @Year1 then sales end), 0) as 'Year1' ,isnull(sum(case when year(t.DATE) = @Year2 then sales end), 0) as 'Year2' ,isnull(sum(case when year(t.DATE) = @Year3 then sales end), 0) as 'Year3' ,isnull(sum(case when year(t.DATE) = @Year4 then sales end), 0) as 'Year4' ,isnull(sum(case when year(t.DATE) = @Year5 then sales end), 0) as 'Year5' FROM ( SELECT Number ,DATENAME(MONTH, '2015-' + CAST(Number as varchar(2)) + '-1') monthname FROM master..spt_values WHERE Type = 'P' and Number between 1 and 12 ) months CROSS JOIN ( SELECT DISTINCT tenantcode FROM TenantSales ) tenants LEFT JOIN TenantSales t ON months.monthname = datename(month,t.date) AND tenants.tenantcode = t.tenantcode GROUP BY months.monthname, months.number, tenants.tenantcode ORDER BY months.number ``` Month table generation taken from <https://social.msdn.microsoft.com/Forums/sqlserver/en-US/697f4ee6-35d1-403d-a9f7-caecaf1ba479/all-monthnames-and-month-numbers-in-sql-server>
You need a list of all months. For a single tenant (as asked in the question), you can do: ``` with months as ( select cast('2015-01-01' as date) as d union all select dateadd(month, 1, months.d) from months where months.d < cast('2015-12-01' as date) ) SELECT tenantcode, datename(m.d, date) [month], isnull(sum(case when year(DATE) = @Year1 then sales end), 0) as Year1, isnull(sum(case when year(DATE) = @Year2 then sales end), 0) as Year2, isnull(sum(case when year(DATE) = @Year3 then sales end), 0) as Year3, isnull(sum(case when year(DATE) = @Year4 then sales end), 0) as Year4, isnull(sum(case when year(DATE) = @Year5 then sales end), 0) as Year5 FROM months m LEFT JOIN TenantSales CROSS JOIN ON month(date) = month(m.d) GROUP BY datename(month, m.d), tenantcode ORBDER BY month(m.d); ``` Note that you should only use single quotes for string and date names. You should not use single quotes for column aliases, because this can lead to confusion.
SQL Server Query to Fill Empty Record with null or 0 Values
[ "", "sql", "sql-server", "sql-server-2008", "" ]
In our application we follow the below pattern to save the hours spent by an employee. * for 30 minutes = 0.30 * for 1 hour = 1 So if an employee works 1 hour 30 minutes, then the hours value would be `1.30`. We also calculate the wages for each employee using the below formula. ``` wages = rate * hours ``` If an employee has an hourly rate of `50$`, his wages for 1 hour 30 minutes would be `75$`. But as per our data structure we get ``` wages = 50 * 1.30 = 65 ``` How can I get the correct wages? (Note: we are not permitted to change the data structure in the table, i.e. to store 1 hour 30 minutes as 1.50) Thanks for the help
Here is an example of how to convert your number to hours, which you can then multiply by the wage rate. ``` declare @t decimal(5,2)=1.3 /* converting to hours */ select cast(@t as int)+@t%1*100/60 ```
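Applying that conversion to the wage calculation from the question - a sketch with the example values hard-coded; in practice `@hours` and `@rate` would come from the table:

```sql
DECLARE @hours decimal(5,2) = 1.30  -- 1 hour 30 minutes in the stored h.mm format
DECLARE @rate  decimal(9,2) = 50

/* convert h.mm to decimal hours, then multiply by the hourly rate */
SELECT @rate * (CAST(@hours AS int) + @hours % 1 * 100 / 60) AS wages  -- 1.30 -> 1.5 hours -> 75
```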
``` cast(substring('1.30',charindex('.','1.30')+1,len('1.30')) as float)/60 ```
Calculate the Rates based on the Hour value SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
**CONTEXT** I'm trying to create a sales fact table. Each row represents a product from an order for a customer. Among other fields, I have these 3 specific fields: * **purchasing\_price** * **selling\_price** * **margin** **PROBLEM** In our company, when we sell a product, I don't know the exact purchasing price, because we pay for the product later and the purchasing price will be based on the exchange rate at the payment date. I usually know the real purchasing price between 1 week and 2 months after the order date. Indeed, my firm has an arrangement that allows us to pay our suppliers on the 15th of the month following the one in which we receive the product from the supplier. Since we have suppliers in different countries, the exchange rate raises some issues. **EXPECTED RESULTS** I have to generate 2 monthly reports and 1 annual report: - 1 report on the 1st of each month based on the exchange rate of the order date - 1 report on the 15th of each month based on the exchange rate of the payment date (which is the exchange rate of the current date, because we pay our suppliers on the 15th of each month) - 1 annual report based on the exchange rate of the payment date (which could sometimes be 2 months after the order date) **EXAMPLE** 1. I order a product on 3rd July. 2. This product is only delivered on 7th August. 3. Then I pay for the product on 15th September, based on the exchange rate of that date. **SOLUTIONS** So far, I have found only 3 solutions: 1. a) create 1 row in the fact table and 2 fields: **real\_purchasing\_price** (which would be equal to 0) and **temporary\_purchasing\_price** (which would automatically be equal to the purchasing price based on the exchange rate of the order date). b) once I have paid for the product, I know the right exchange rate, therefore I can update this row and fill the field **real\_purchasing\_price** with the purchasing price based on the exchange rate of the payment. 2. 
a) create 1 row in the fact table with the **purchasing\_price** based on the exchange rate of the order date. b) once I have paid for the product, I know the right exchange rate, therefore I can create 1 new row in the fact table, almost identical to the first one, but this time with the **purchasing\_price** based on the exchange rate of the payment date. 3. a) create a row in the fact table with the **purchasing\_price** based on the exchange rate of the order date b) once I have paid for the product, I know the right exchange rate, therefore I can update this row and replace the **purchasing\_price** with the one based on the right exchange rate. The 4th solution belongs to you. Thanks for your help. Don't hesitate to ask me for more details. Have a good day,
It seems your **order** goes through three stages: * **ordered** * **delivered** * **purchasing price is known** One data-warehouse design approach is **immutability** (insert only, no updates). This approach would lead to the creation of separate fact records for your order: an **Ordered Event** with attributes ``` orderId, productId, orderDate and sellingPrice ``` and a **Delivered Event** ``` orderId, DeliveryDate ``` Note that the order and delivery records are uniquely correlated by the orderId (under the simplified assumption of only one product per order). Both of those events are stored in separate fact tables or in a common one - it depends on the full attribute definition in your case. The **purchasing price** is stored in a separate table with attributes ``` productId, entryDate, validFromDate, validToDate, purchasingPrice ``` This table is filled, based on your rules, on the 15th of the following month (entryDate) with the validity interval covering the preceding month. The crucial role of this table is to support a query with productId and a valid date and return either the purchasing price or *unknown*. Based on this design you may set up an **access view** (simple view, materialized view or other solution) providing the current state of the order ``` orderId, productId, orderDate, sellingPrice, DeliveryDate, -- NULL if not delivered purchasingDate, purchasingStatus, -- 1 purchased, 0 - not yet purchased purchasingPrice ``` The `purchasingDate` is calculated from the delivery date based on the business rule. The `purchasingStatus` is the result of comparing the reporting date and the purchasing date. The `purchasingPrice` is either an estimate (the *last known price of the product*) or the proper purchasing price. You may also ask why **immutability** is important in data-warehouse design. It is similar to the transaction concept in OLTP. In case of trouble you may *roll back* the data to some point in the past using an **auditing dimension** and reload it again. 
If you update, this is much more complex. **Small Example** On 15.2. you get the purchasing price for product A for January. **Purchasing Price Table** ``` entryDate = 15.2. validFrom = 1.1. validTo = 31.1. purchasingPrice = 100 ``` An order of product A on 1.3. creates a record in the **Order Event Table** ``` orderDate = 1.3. sellingPrice = 200 ... ``` You may report this event with the "last known purchasing price", which is currently 100. (A lookup in the Purchasing Price Table with **orderDate** gives no valid result; the last stored value is returned.) Delivery on 10.3. creates a record in the **Delivery Event Table** ``` deliveryDate = 10.3. .... ``` The exact purchasing price is still unknown. (A lookup in the Purchasing Price Table with **deliveryDate** gives no valid result; the last stored value is returned.) On 15.4. the new purchasing price is entered in the **Purchasing Price Table** for March. From this point the real **purchasing price** is known.
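A minimal DDL sketch of the immutable design described above (table and column names are illustrative, taken from the attribute lists; adjust the types to your platform):

```sql
-- insert-only event tables: one row per event, never updated
CREATE TABLE order_event (
    orderId      int PRIMARY KEY,
    productId    int NOT NULL,
    orderDate    date NOT NULL,
    sellingPrice decimal(12,2) NOT NULL
);

CREATE TABLE delivery_event (
    orderId      int PRIMARY KEY REFERENCES order_event (orderId),
    deliveryDate date NOT NULL
);

-- purchasing prices, entered on the 15th of the following month
CREATE TABLE purchasing_price (
    productId       int NOT NULL,
    entryDate       date NOT NULL,
    validFromDate   date NOT NULL,
    validToDate     date NOT NULL,
    purchasingPrice decimal(12,2) NOT NULL,
    PRIMARY KEY (productId, validFromDate)
);
```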
I actually choose to use a currency table and to add a field payment\_date which allow me to match each row of the fact table with right exchange rate in the currency table. Nevertheless, I have to UPDATE each row of the fact table to add the payment\_date once I know it. I couldn't find a better solution so far. Thx everyone.
Fact table changes
[ "", "sql", "database", "etl", "data-warehouse", "business-intelligence", "" ]
So I have a table with 3 columns: id \ first\_name \ last\_name, and I need to find how many people share the same full name. I had something like this (which doesn't work): ``` SELECT COUNT(*) FROM ACTOR WHERE FIRST_NAME IN (SELECT FIRST_NAME,LAST_NAME FROM ACTOR HAVING COUNT(FIRST_NAME,LAST_NAME) >1); ```
Use `GROUP BY` ``` SELECT FIRST_NAME, LAST_NAME, Count(*) AS CNT FROM ACTOR GROUP BY FIRST_NAME, LAST_NAME HAVING COUNT(*) > 1 ``` This returns the first and last names, and how often they appear, for all names that have duplicates. If you only want to know how many such names there are, you can use: In SQL-Server: ``` SELECT TOP 1 COUNT(*) OVER () AS RecordCount -- TOP 1 because the total-count is repeated for every row FROM ACTOR GROUP BY FIRST_NAME, LAST_NAME HAVING COUNT(*) > 1 ``` all others: ``` Select COUNT(*) AS RecordCount From ( SELECT FIRST_NAME, LAST_NAME FROM ACTOR GROUP BY FIRST_NAME, LAST_NAME HAVING COUNT(*) > 1 ) As X ```
Try this: ``` SELECT COUNT(*) as Totals, NAME FROM (SELECT FIRST_NAME+LAST_NAME AS NAME FROM ACTOR)A GROUP BY NAME ```
how to find people with the same first and last name
[ "", "sql", "" ]
I would like to combine two datasets t1 and t2: dataset 1 (one variable X) ``` X 1 2 3 4 ``` dataset 2 (one variable Y) ``` Y A B C ``` into one new dataset t3 (dataset 1 + dataset 2) ``` X Y 1 A 1 B 1 C 2 A 2 B 2 C 3 A 3 B 3 C 4 A 4 B 4 C ``` As you can see, I have no common variables. I tried different flavours of set and merge combinations ``` data t3 ; merge t1 t2 ; run ; data t3 ; set t1 ; set t2 ; run ; data t3 ; set t1 ; if _n_ then set t2 ; run ; data t3 ; set t1 t2 ; run ; ``` Any help (SAS or SQL) much appreciated Regards SW
This will work and produce the expected results; basically you are looking at a Cartesian product in SAS: ``` proc sql; select t1.*,t2.* from t1 ,t2 ; quit; ``` [Output of above Code](https://i.stack.imgur.com/nXXaE.png)
Use `CROSS JOIN`: ``` CREATE TABLE #tab1(X INT); CREATE TABLE #tab2(Y NVARCHAR(12)); INSERT INTO #tab1 VALUES (1), (2), (3), (4); INSERT INTO #tab2 VALUES ('A'), ('B'), ('C'); SELECT X, Y FROM #tab1 CROSS JOIN #tab2 ORDER BY X, Y; ``` `LiveDemo`
combining 2 sas dataset
[ "", "sql", "sas", "" ]
I have a table with a `speed` column in a MySQL DB. speed ``` 100 Mbits 120 Mbits ``` I am trying to update this column to remove the `Mbits` part. Can I have an in-place query which can do this task? Googling suggested a `split_str` function, which looks complicated for this task. Any help is much appreciated. Regards, Madan
Use this: ``` Update table set speed=LEFT(speed, INSTR(speed, ' ') - 1) ``` Result would be from: ``` 100 Mbits 120 Mbits ``` To ``` 100 120 ```
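It can be worth previewing the result before running the `UPDATE` - a sketch, with `mytable` as a placeholder name:

```sql
SELECT speed, LEFT(speed, INSTR(speed, ' ') - 1) AS new_speed
FROM mytable
WHERE INSTR(speed, ' ') > 0;  -- guard: rows without a space would otherwise get LEFT(speed, -1) = ''
```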
You can use `replace` to do this. ``` update mytable set speed = replace(speed, 'Mbits', '') where lower(speed) like '%mbits%' ``` Edit: Multiplying by 1000 where speed contains `Mbits` ``` update mytable set speed = 1000 * cast(replace(speed, 'Mbits', '') as signed) where lower(speed) like '%mbits%' ```
Update existing column data MySQL
[ "", "mysql", "sql", "database", "sql-update", "" ]
I have the following table structure ``` start|end 09:00|11:00 13:00|14:00 ``` I know ``` SELECT ARRAY_AGG(start), ARRAY_AGG(end) ``` Will result in ``` start|end [09:00,13:00]|[11:00,14:00] ``` But how can i get the following result? result ``` [09:00,11:00,13:00,14:00] ``` BTW, I'm using Postgres
You could do array concatenation (if order is not important): ``` SELECT ARRAY_AGG(start) || ARRAY_AGG(end) FROM TABLE1 ``` If order is important you could use [Gordon's](https://stackoverflow.com/users/1144035/gordon-linoff) approach **but**: * add an aggregate order: `array_agg(d order by d ASC)` * use `unnest` instead of `union all`, because Gordon's solution (`union all`) performs two sequential scans. If the table is big it could be better for performance to use: ``` SELECT array_agg(d ORDER BY d ASC) FROM( SELECT unnest(ARRAY[start] || ARRAY[end]) as d from table1 ) sub ``` which performs only one sequential scan on the table (and will be faster).
One method is to unpivot them and then aggregate: ``` select array_agg(d) from (select start as d from t union all select end as d from t ) t; ``` A similar method uses a `cross join`: ``` select array_agg(case when n.n = 1 then t.start else t.end end) from t cross join (select 1 as n union all select 2) n; ```
Aggregate two columns and rows into one
[ "", "sql", "postgresql", "aggregate-functions", "aggregate", "" ]
I have a table called "TableAvailable" with TableID as int and Available as smallint (0,1), like this ``` TableID |Available 1 |1 2 |0 3 |0 4 |1 5 |1 6 |1 7 |0 8 |1 ``` I need a SQL query that selects the first 3 consecutive tables that are all available; in this example it should be 4, 5, 6, which are the first 3 consecutive rows that are available together.
You can get all groups of consecutive ids by using: ``` select min(tableid), max(tableid) from (select ta.*, (row_number() over (order by tableid) - row_number() over (partition by available order by tableid) ) as grp from tableavailable ta ) ta where available = 1 group by grp; ``` Then, adding `having count(*) >= 3 order by min(tableid)` will get the first. However, a faster method is to just look at the availability of the next two records. In SQL Server 2012+, you would use `lead()`: ``` select top 1 tableid, tableid_1, tableid_2 from (select ta.*, lead(available) over (order by tableid) as available_1, lead(available, 2) over (order by tableid) as available_2, lead(tableid) over (order by tableid) as tableid_1, lead(tableid, 2) over (order by tableid) as tableid_2 from tableavailable ta ) ta where available = 1 and available_1 = 1 and available_2 = 1 order by tableid; ```
Assuming you would only ever need to do this for 3 and assuming tableId's are sequential with no gaps... both are probably bad assumptions... ``` SELECT Top 1 A.TableID, B.TableID, C.TableId FROM TableAvailable A LEFT JOIN tableAvailable B on A.ID = B.ID+1 LEFT JOIn tableAvailable C on A.ID = B.ID+2 WHERE A.Available = 1 and B.Availabe=1 and C.Available=1 order by tableID asc ```
Get x amount of rows in Sequence order
[ "", "sql", "sql-server", "" ]
My MySQL query is below. With 700 projects, 50 params and 12000 projectparams, the query takes 45+ seconds. Foreign keys are created, but I don't know what indexes to create :( That is too long, as I read on some forums, even if I run it from CRON every 5 minutes. How can I optimize this query, what indexes should I create, and how can I understand the process? Thanks. ``` SELECT `tt1`.`id` `projects_id`, `tt2`.`projectparams_id`, `tt2`.`defaultvalue`, `tt2`.`defaultvaluealias`, `tt2`.`globalvalue`, `tt2`.`globalvaluealias`, `tt2`.`value`, `tt2`.`valuealias` FROM `projects` `tt1` LEFT JOIN ( SELECT `t1`.`id` `projectparams_id`, `t1`.`defaultvalue`, `t1`.`defaultvaluealias`, `t1`.`globalvalue`, `t1`.`globalvaluealias`, `t2`.`value`, `t2`.`valuealias`, `t2`.`projects_id` FROM `projectparams` `t1` LEFT JOIN `projects_projectparams` `t2` ON `t2`.`projectparams_id` = `t1`.`id` ) tt2 ON `tt1`.`id` = `tt2`.`projects_id` ```
Your comments to PaulF's answer show that you were not just looking for a query optimization, but really for another query. You don't want *all projects* with or without specific parameters, but *all possible combinations* of projects and parameters. You should have made this a new request. Here is the query to get all project / parameter combinations along with the actual project parameters: ``` select p.id as projects_id, pp.id projectparams_id, pp.defaultvalue, pp.defaultvaluealias, pp.globalvalue, pp.globalvaluealias, ppp.value, ppp.valuealias from projects p cross join projectparams pp left join projects_projectparams ppp on ppp.projects_id = p.id and ppp.projectparams_id = pp.id; ```
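On the index part of the question, a sketch based on the join columns in the query (InnoDB may already have single-column indexes from the foreign key definitions - check with `SHOW INDEX`):

```sql
-- covers the join from projects to the link table and on to projectparams
CREATE INDEX idx_ppp_project_param
    ON projects_projectparams (projects_id, projectparams_id);
```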
As well as the indexes - can you reorganise the query to remove the inner subquery something like : ``` SELECT `tt1`.`id` `projects_id`, `t1`.`id` `projectparams_id`, `t1`.`defaultvalue`, `t1`.`defaultvaluealias`, `t1`.`globalvalue`, `t1`.`globalvaluealias`, `t2`.`value`, `t2`.`valuealias` FROM `projects` `tt1` LEFT JOIN `projects_projectparams` `t2` ON `tt1`.`id` = `t2`.`projects_id` LEFT JOIN `projectparams` `t1` ON `t2`.`projectparams_id` = `t1`.`id` ```
How to optimize the query? What i could to read, for understand it? (in russian if you can, or english)
[ "", "mysql", "sql", "database", "optimization", "explain", "" ]
Let’s say you have this set of records in a database: ``` PartNumber Qty Group A 1 1 B 2 1 C 3 1 B 4 1 a 1 2 B 2 2 C 3 2 B 4 2 ``` How can we come up with this result: ``` A 2 B 4 C 6 B 8 ``` That is, adding the first row of the first group to the first row of the second group, the second row of the first group to the second row of the second group, the third row of the first group to the third row of the second group, and so on. Notes: - I can have multiple groups - The groups will always have the same length (i.e. the same number of rows PER GROUP) - The part numbers in a group will always be the same, in the same order - Any part number can repeat, i.e. not only B but all of them, PER GROUP I know I can add a new column to the table to have something like this: ``` PartNumber Qty Group idx A 1 1 1 B 2 1 2 C 3 1 3 B 4 1 4 a 1 2 1 B 2 2 2 C 3 2 3 B 4 2 4 ``` and: select partnumber, sum(qty) from table group by idx but that's exactly what I can't do. Any suggestions?
You can use `ROW_NUMBER` in order to create `idx` field that you mention in the OP. Then its a simple `GROUP BY` to get desired result: ``` SELECT PartNumber, SUM(Qty) FROM ( SELECT PartNumber, Qty, ROW_NUMBER() OVER (PARTITION BY [Group] ORDER BY id) AS idx FROM mytable) AS t GROUP BY PartNumber, idx ORDER BY idx ``` Grouping relies on this rule stated in the OP: > The partnumbers in the group will always be the same in the same order I believe the above rule (no. 3) makes rule no. 2 redundant.
SQL tables represent *unordered* sets. Let me assume there is another column that specifies the ordering (`qty` looks like such a column, but it seems to have another purpose). With this column, we can use `row_number()` and assuming that your conditions are really true, this becomes a simple aggregation query: ``` select min(PartNumber) as PartNumber, sum(qty) as qty from (select t.*, row_number() over (partition by group order by id) as seqnum from t ) t group by seqnum; ``` However, I prefer this version, just to check the part numbers: ``` select (case when min(PartNumber) = max(PartNumber) then min(PartNumber) else 'Oops! Parts don''t match' end) as PartNumber, sum(qty) as qty from (select t.*, row_number() over (partition by group order by id) as seqnum from t ) t group by seqnum; ```
Grouping by row number in a column
[ "", "sql", "sql-server", "database", "" ]
I need to convert minutes into hours in SQL Server. I used the following logic to do it. ``` CAST(REPLACE(LEFT(CONVERT(varchar(10), DATEADD(MINUTE, 19.80 *100, ''), 114),5),':','.') AS Decimal(5,2)) AS tpschedhours ``` My expected output is `33` hours (1980 minutes in hours), but I got 9 hours. I have found that the issue occurs because `DATEADD(MINUTE, 1980, '')` returns `1900-01-02 09:00:00.000` (one day + 9 hours). But I need the output as an hours value, i.e. 33 hours. Thanks for the help
I got the solution from the answers. ``` SELECT CAST((CAST(((2.72) *100)AS INT) / 60 )+ (CAST((2.72 *100)AS INT) % 60) / 100.0 AS DECIMAL(5,2)) ``` Thanks tinka and Stanislovas Kalašnikovas
You can try in following: ``` DECLARE @time INT = 1980 SELECT LEFT(CONVERT(VARCHAR(10), DATEADD(MINUTE, @time / 60 + (@time % 60), ''),114),5) ```
convert minute into hours sql server
[ "", "sql", "sql-server", "t-sql", "" ]
I have the below code in one of our stored procedures. ``` SELECT '<table width=100% style=''FONT-SIZE: 10px; FONT-FAMILY: MS Reference Sans Serif;''><tr><td>' + '</td><Other Wages</td><td al<td align=center>' + LTRIM(STR(0.30))+'</td></tr></table>' AS SCHEDSTATS ``` The problem with the above code is that it doesn't return the decimal values. The output of the above code came out like this: ``` <table width=100% style='FONT-SIZE: 10px; FONT-FAMILY: MS Reference Sans Serif;'> <tr> <td></td><Other Wages</td> <td al<td align=center>0</td></tr></table> ``` (I passed the decimal value `0.30`, which becomes `0` in the output). How can I show the decimal value in the above code? Thanks for the help
[Syntax](https://msdn.microsoft.com/en-IN/library/ms189527.aspx) ``` STR ( float_expression [ , length [ , decimal ] ] ) ``` float\_expression ``` Is an expression of approximate numeric (float) data type with a decimal point. ``` length ``` Is the total length. This includes decimal point, sign, digits, and spaces. The default is 10. ``` decimal ``` Is the number of places to the right of the decimal point. decimal must be less than or equal to 16. If decimal is more than 16 then the result is truncated to sixteen places to the right of the decimal point. ``` Code ``` SELECT STR(0.30, 4, 2); ``` > If you don't want any conversion and just want `0.30`, don't use `STR`; use > `LTRIM((0.30))`
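Applied back to the query from the question - a sketch, where length `4` leaves room for all four characters of `0.30`, and the garbled HTML fragment from the question is tidied:

```sql
SELECT '<table width=100% style=''FONT-SIZE: 10px; FONT-FAMILY: MS Reference Sans Serif;''><tr>'
     + '<td>Other Wages</td><td align=center>'
     + LTRIM(STR(0.30, 4, 2)) + '</td></tr></table>' AS SCHEDSTATS
```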
It's not an HTML problem. The STR() function returns a rounded integer unless you specify a number of decimals greater than 0. You must write "STR(0.30, 5, 2)", that means "return a string of maximum 5 digits, with 2 decimals". The result will be correct.
Html code Not returning decimal values in SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I have 36 columns in a table but one of the columns has repeated values, like below ``` ID Name Ref abcd john doe 123 1234 martina 100 123x brittany 123 ab12 joe 101 ``` and I want results like ``` ID Name Ref cnt abcd john doe 123 2 1234 martina 100 1 123x brittany 123 2 ab12 joe 101 1 ``` As 123 appears twice, I want it to show 2 in the cnt column, and so on.
``` select ID, Name, Ref, (select count(ID) from [table] where Ref = A.Ref) from [table] A ``` **Edit:** As mentioned in comments below, this approach may not be the most efficient in all cases, but should be sufficient on reasonably small tables. In my testing: * a table of 5,460 records and 976 distinct 'Ref' values returned in less than 1 second. * a table of 600,831 records and 8,335 distinct 'Ref' values returned in 6 seconds. * a table of 845,218 records and 15,147 distinct 'Ref' values returned in 13 seconds.
You should provide SQL brand to know capabilities: 1) If your DB supports [window functions](http://www.postgresql.org/docs/9.1/static/tutorial-window.html): ``` Select *, count(*) over ( partition by ref ) as cnt from your_table ``` 2) If not: ``` Select T.*, G.cnt from ( select * from your_table ) T inner join ( select count(*) as cnt from your_table group by ref ) G on T.ref = G.ref ```
countif type function in SQL where total count could be retrieved in other column
[ "", "sql", "countif", "" ]
I was already able to fetch the manager ids, but I also need to fetch the names of those three managers; I'm currently stuck there. ``` select d.manager_id, count(employee_id) from hr.departments d inner join hr.employees e on d.department_id = e.department_id group by d.manager_id having count(employee_id) > 5 ``` When I try ``` select d.manager_id, e.first_name, e.last_name, count(employee_id) from hr.departments d inner join hr.employees e on d.department_id = e.department_id group by d.manager_id, e.first_name, e.last_name having count(employee_id) >5 ``` I get blank results
You have to `join` to `hr.employees` a second time, on the manager's `employee_id` (and group by the manager's name columns, not the employee's).

```
select d.manager_id, e1.first_name, e1.last_name, count(e.employee_id)
from hr.departments d
inner join hr.employees e on d.department_id = e.department_id
inner join hr.employees e1 on d.manager_id = e1.employee_id
group by d.manager_id, e1.first_name, e1.last_name
having count(e.employee_id) > 5
```
```
inner join hr.employees e on d.department_id = e.department_id
group by d.manager_id, e.first_name, e.last_name
```

You need another join to the employees table to get the manager's name:

```
select name.first_name, name.last_name, ...
inner join hr.employees name on d.manager_id = name.employee_id
```

If you wanted to skip the additional join, you could do:

```
select d.manager_id,
       max(case when d.manager_id = e.employee_id then e.first_name end) first_name,
       max(case when d.manager_id = e.employee_id then e.last_name end) last_name,
       count(*)
from department d
join employee e on d.department_id = e.department_id
group by d.manager_id
having count(*) > 5
;
```
Fetching manager names from manager IDs
[ "", "mysql", "sql", "" ]
The query works but only gives values for 1985. How do I extend it to cover a range of years (1985-2014)?

```
use baseball;
SELECT CAST(tf.franchname AS CHAR(20)), s.yearID, s.lgid, AVG(s.salary)
FROM salaries s, teams t, teamsfranchises tf
WHERE s.teamID = t.teamID
  AND t.franchID = tf.franchID
  AND s.yearID = 1985
  AND (s.lgid='AL' OR s.lgid='NL')
GROUP BY tf.franchname, s.yearID, s.lgid
order BY s.yearID;
```
``` select cast(tf.franchname as char(20)), s.yearID, s.lgid, avg(s.salary) from salaries s, teams t, teamsfranchises tf where s.teamID = t.teamID and t.franchID = tf.franchID and (s.yearID between 1985 and 2014 )and (s.lgid='AL' OR s.lgid='NL') group by tf.franchname, s.yearID, s.lgid order by s.yearID; ```
You could just use [BETWEEN](https://msdn.microsoft.com/en-us/library/ms187922.aspx). Your where clause would then contain:

```
(s.yearID BETWEEN 1985 AND 2014) and
```

Alternatively you could use the `<` and `>` operators:

```
(s.yearID >= 1985 and s.yearID <= 2014)
```

If, for any reason, you don't have a continuous range of years (you only want 5 specific years), `IN` could also be an option:

```
s.yearID IN (1985, 1991, 1996, 2001, 2006)
```
SQL: filtering a range of years
[ "", "sql", "sql-server", "" ]
* Server1: Prod, hosting DB1
* Server2: Dev, hosting DB2

Is there a way to query databases living on 2 different servers with the same select query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.

```
Insert into Dev.db1.table1
Select * from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
```
Creating a linked server is the only approach that I am aware of for this to occur. If you are simply trying to add all new rows from prod to dev, then why not just back up that one particular table, pull it into the dev environment, and write the query against the same server and database? Granted, this is a pain for recurring loads, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.

To back up a single table, use the SQL Server Import and Export Wizard. Select the prod database as your data source, select only the prod table as your source table, and make a new table in the dev environment as your destination table. This should get you what you are looking for.
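For the linked-server route, the setup is roughly this sketch (the server, database, and provider names here are placeholders, not from the question):

```sql
-- One-time setup on the dev server (hypothetical names):
EXEC sp_addlinkedserver
     @server     = N'PRODSERVER',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'Server1';

-- Prod tables are then reachable via four-part names,
-- so the original query becomes a single statement run on dev:
INSERT INTO db1.dbo.table1
SELECT p.*
FROM PRODSERVER.db1.dbo.table1 p
WHERE p.PK NOT IN (SELECT PK FROM db1.dbo.table1);
```

You would also typically map credentials with `sp_addlinkedsrvlogin` before the four-part query will authenticate.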
You say you're using DTS; the modern equivalent would be [SSIS](https://msdn.microsoft.com/en-us/library/ms141026.aspx). Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing. There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use [rowversion](https://www.mssqltips.com/sqlservertip/3295/using-rowversion-with-sql-server-integration-services-to-reduce-load-times/), use [Change Data Capture](https://msdn.microsoft.com/en-us/library/cc645937.aspx), make use of the fact your primary key only ever gets bigger, etc. etc.) Or you could just do what you want with a lookup flow [directly in SSIS](https://stackoverflow.com/questions/19011049/ssis-2012-insert-new-rows-ignore-existing-rows)... The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
Getting data from different database on different server with one SQL Server query
[ "", "sql", "sql-server", "" ]
This is my stored procedure in Oracle:

```
CREATE OR REPLACE PROCEDURE execute_cproc
  ( callnum IN VARCHAR2
  , RESULT OUT VARCHAR2)
AS
  vara_val NUMBER;
  varb_val NUMBER;
BEGIN
  SELECT a_val, b_val
    INTO vara_val, varb_val
    FROM data_table
   WHERE callnum LIKE numberpattern || '%';
END;
```

If `CALLNUM` is 03354123 then I get 2 matches:

1. `03354123 like 033%`
2. `03354123 like 03354%`

Both are true, so I get 2 results. How can I make the procedure find only the longest match, i.e. `03354123 like 03354%`?

Table name: DATA_TABLE

Columns:

```
NumberPattern (varchar2) : 033, 03354
a_val ( integer ) : 1, 2
b_val ( integer ) : 1, 2
```
You must restructure your query. If you want to get the best (longest) matching pattern, do this:

```
select a_val, b_val
  into vara_val, varb_val
  from (select NumberPattern, a_val, b_val,
               rank() over(order by length(NumberPattern) desc) ranking
          from DATA_TABLE
         where CALLNUM like NumberPattern || '%'
       )
 where ranking = 1
```

This will set your variables vara_val and varb_val to the values 2 and 2 respectively. Or, if you would like to get the pattern as well, just declare another variable to hold that value and edit your query to this:

```
select NumberPattern, a_val, b_val
  into yournew_variable, vara_val, varb_val
  from (select NumberPattern, a_val, b_val,
               rank() over(order by length(NumberPattern) desc) ranking
          from DATA_TABLE
         where CALLNUM like NumberPattern || '%'
       )
 where ranking = 1
```

Here's a brief explanation: I created a subquery that consists of NumberPattern, a_val, b_val, and the rank of the NumberPattern according to its length. This only ranks patterns that match the CALLNUM parameter you provided. So, for example, if you have patterns 12345, 1234, and 789 and you pass 123456789 as your CALLNUM parameter, it will only rank patterns 12345 and 1234, since your CALLNUM starts with 12345 and 1234, not with 789. Once ranked, I select the NumberPattern, a_val, and b_val from that subquery where the ranking is 1.
Does your suggested solution provide a unique answer? What if your data contains:

```
03354121
03354122
03354123
```

These will all match 03354%. So which one is best? Maybe there is no answer to that. One "fix" is to sort the data in descending order and pick the first result.
Stored Procedure Maximum Pattern Match
[ "", "sql", "oracle", "stored-procedures", "top-n", "" ]
I have the following table:

```
drink_name........cost........calories
Black.............1...........30
Clue..............2...........40
Elephant..........3...........50
```

When I use the **between** operator on characters, it excludes the ending value:

```
select drink_name from drink_info where drink_name between 'B' and 'C';

output: Black
```

Note that Clue is omitted. But when I use **between** for a number comparison, it includes the ending value:

```
select drink_name from drink_info where cost between 1 and 3

Output: Black Clue Elephant
```

Note that the row with cost 3 is included. Why does the behaviour of the **between** keyword differ between integers and characters? It includes the upper bound (3) for numbers, whereas Clue, which starts with the upper bound 'C', is excluded.
Between works in exactly the same way in both cases. It include both end points. The difference is in how integers differ from strings (and floats and datetimes). For this reason, it is often better to use `<` for the second comparison: ``` select drink_name from drink_info where drink_name >= 'b' and drink_name < 'c'; ``` This will not include `'c'`. If the second comparison were `<=` then `'c'` would be included, but nothing else that begins with `'c'`.
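To see the inclusiveness directly: in MySQL a comparison returns 1 or 0, so you can display both endpoint checks per row (a sketch against the question's table):

```sql
-- Both bounds of BETWEEN are inclusive; 'Clue' fails only the
-- upper-bound check because 'Clue' > 'C' in string collation order.
select drink_name,
       drink_name >= 'B' as ge_b,   -- 1 for all three rows
       drink_name <= 'C' as le_c    -- 1 for 'Black'; 0 for 'Clue' and 'Elephant'
from drink_info;
```

The string 'C' sorts before any longer string starting with 'C', which is why `between 'B' and 'C'` drops every name beginning with "C" except the bare string `'C'` itself.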
If you want to choose drink names whose first character is between B and C inclusive:

```
select drink_name from drink_info
where left(drink_name, 1) between 'B' and 'C';
```
Using Between Command in SQL
[ "", "mysql", "sql", "database", "" ]
I have the following data set:

```
Date      Occupation    Count
Jan2006   Nurse         15
Jan2006   Lawyer        2
Jan2006   Mechanic      3
Feb2006   Economist     2
Feb2006   Lawyer        1
Feb2006   Nurse         5
```

The data continues all the way until Dec 2014, with different occupations and counts for each occupation. What I want to do is aggregate the counts by occupation into yearly totals. Assuming the above data has all the months and counts, I want my final data set to look like this:

```
Date    Occupation    Sum
2006    Nurse         20
2006    Lawyer        3
2006    Mechanic      3
2006    Economist     2
```

and so on until Dec 2014.

I tried using first.variable and last.variable as follows, but it didn't work:

```
data want;
set have;
if first.date and first.Occupation then sum = 0;
sum+Count;
if last.date and last.occupation then output;
run;
```

This doesn't give me the desired output. I feel like this could be done easily in SQL, but not being familiar with SQL, I am hesitant to use it. Thanks in advance for your help.
Since you are using SAS, you can take advantage of the fact that procedures such as `proc summary` group by the formatted value of a variable. Therefore if you apply a `year.` format to the Date variable, then it will automatically group by year. ``` data have; input Date :monyy7. Occupation $20. Count; format date monyy7.; datalines; Jan2006 Nurse 15 Jan2006 Lawyer 2 Jan2006 Mechanic 3 Feb2006 Economist 2 Feb2006 Lawyer 1 Feb2006 Nurse 5 ; run; proc summary data=have nway; class date occupation / order=freq; /* sort by descending sum */ format date year.; /* apply year format to date for grouping purposes */ var count; output out=want (drop=_:) sum=; run; ```
Try this: ``` proc sql; create table want as select year(date) as date, occupation,sum(count) as sum from have group by year(date),occupation; quit; ```
Sum monthly into yearly in SAS
[ "", "sql", "sum", "sas", "aggregate", "" ]
I have a cust table:

```
id    name            class    mark
1     John Deo Matt   Four     75
2     Max Ruin        Three    85
3     Arnold          Three    55
4     Krish Star HN   Four     60
5     John Mike       Four     60
6     Alex John       Four     55
```

I would like to search for a customer whose name might be given as `John Matt`, without the `Deo` string in between. How can I use a LIKE condition for this?

```
SELECT * FROM cust WHERE name LIKE '%John Matt%'
```

The result should fetch row 1. And what if the search string is `Matt Deo` or just `john`?

The above doesn't work when only parts of the name are given. How can I make the LIKE query fetch the customer even if 2 separate words are given?
If the pattern to be matched is ``` string1<space>anything<space>string2 ``` you can write: ``` like string1||' % '||string2 ```
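With the sample data, the concatenation expands into a concrete pattern like this (Oracle-style `||`, as in the answer):

```sql
-- 'John' || ' % ' || 'Matt' expands to the pattern 'John % Matt':
-- 'John' + space + anything + space + 'Matt'
SELECT *
FROM cust
WHERE name LIKE 'John' || ' % ' || 'Matt';   -- matches row 1, 'John Deo Matt'
```

Note this only covers the "string1 then string2" order; handling `Matt Deo` as well would need a second pattern with the operands swapped, combined with `OR`.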
Why not this?

```
select * from cust where name Like 'John%Matt';
```
How to use LIKE in a query to find multiple words?
[ "", "sql", "oracle", "sql-like", "textmatching", "" ]
I have a column that contains the value:

"Mandatory info on model l_90 with features games_14. Please provide the info for this model.[server=stack3_112]"

I want to return only the first part, "Mandatory info on model l_90 with features games_14", and exclude "Please provide the info for this model" from the result. I tried

```
model like '% Mandatory info on model l_90 with features games_14%'
```

in the select, but it didn't give me what I wanted.
I believe you are asking how to have only part of this column returned in your query. See:

```
select substr(your_column, 1, INSTR(your_column, 'Please') - 1)
from table
where your_column like '%Mandatory info on model l_90 with features games_14%'
```

This will give you a substring of everything up to the first point it finds "Please" (the `- 1` keeps "Please" itself out of the result). Note: I'm not sure what DBMS you are on, but the above select will work in Oracle.
```
WHERE {column name} LIKE '%Mandatory info on model l_90 with features games_14%'
AND {column name} NOT LIKE '%Please%'
```

Could this work? This way any row whose text contains 'Please' would be filtered out.
like command in sql query
[ "", "sql", "" ]
I have a table with a `varchar` column `categoryIds`. It contains some IDs separated by commas, for example: ``` id categoryIds -------------------- 1 3,7,12,33,43 ``` I want to do a select statement and check if an int exists in that column. Something like this: ``` select * from myTable where 3 in (categoryIds) ``` I know this is possible in MySQL by doing [this](https://stackoverflow.com/questions/2674011/mysql-check-if-numbers-are-in-a-comma-separated-list), but can it be done in SQL Server as well? I have tried casting the int to a char, which runs the following statement: ``` select * from myTable where '3' in (categoryIds) ``` But it doesn't look like there's any "out of the box" support for comma separated lists as it returns nothing.
You should really redesign this table to split out those values from being comma separated to being in individual rows. However, if this is not possible, you are stuck with doing string comparison instead: ``` DECLARE @id INT = 3 DECLARE @stringId VARCHAR(50) = CAST(@id AS VARCHAR(50)) SELECT * FROM MyTable WHERE categoryIds = @stringId -- When there is only 1 id in the table OR categoryIds LIKE @stringId + ',%' -- When the id is the first one OR categoryIds LIKE '%,' + @stringId + ',%' -- When the id is in the middle OR categoryIds LIKE '%,' + @stringId -- When the id is at the end ```
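If the redesign suggested above is possible, the comma-separated column would move into a child table (a sketch — the table and column names here are made up):

```sql
-- One row per (row, category) pair instead of a CSV column:
CREATE TABLE MyTableCategory (
    MyTableId  INT NOT NULL,   -- references myTable.id
    CategoryId INT NOT NULL,
    PRIMARY KEY (MyTableId, CategoryId)
);

-- The lookup then becomes a plain, indexable join:
SELECT t.*
FROM myTable t
JOIN MyTableCategory c ON c.MyTableId = t.id
WHERE c.CategoryId = 3;
```

Unlike the string-comparison workaround, this version can use the primary key index on `(MyTableId, CategoryId)` rather than scanning every row.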
```
SELECT *
FROM myTable
WHERE (',' + RTRIM(categoryIds) + ',') LIKE '%,' + @stringId + ',%'
```

Here `@stringId` is the text to be searched for. This way you can avoid multiple WHERE conditions.
How can I check whether a number is contained in comma separated list stored in a varchar column?
[ "", "sql", "sql-server", "" ]
I have the following data:

```
SalId EmpId TakenSal TakenDate AvailSal CompId
13 68 1000 02-Jul-2015 14000 1021
14 68 100 02-Jul-2015 13900 1021
15 69 1000 02-Jul-2015 11000 1021
16 82 1000 06-Jul-2015 9000 1024
17 82 1000 06-Jul-2015 8000 1024
18 83 1000 06-Jul-2015 9000 1024
19 83 1000 06-Jul-2015 8000 1024
20 82 1000 06-Jul-2015 7000 1024
21 82 1000 06-Jul-2015 6000 1024
22 82 1000 06-Jul-2015 5000 1024
23 82 1000 06-Jul-2015 4000 1024
24 94 1000 09-Jul-2015 9000 1014
25 94 1000 09-Jul-2015 8000 1014
26 94 1000 09-Jul-2015 7000 1014
27 94 1000 09-Jul-2015 6000 1014
```

I want to select, for each employee, the record whose `TakenDate` is nearest to today. I tried the query below, but it throws an error:

```
select * from Employee_SalaryDetails
where Employee_SalaryDetails.TakenDate = max(TakenDate)
group by Employee_SalaryDetails.EmpId
having count(Employee_SalaryDetails.EmpId)>0
```
I think you are looking for each employee's row with the date nearest to today's date. I have tried it with your actual data:

```
CREATE TABLE AbbasTable
(
    SalId INT
    ,EmpId INT
    ,TakenSal INT
    ,TakenDate date
    ,AvailSal INT
    ,CompId INT
)

insert into AbbasTable values (13, 68, 1000 , '02-Jul-2015', 14000 , 1021)
insert into AbbasTable values (14, 68, 100 ,'02-Jul-2015', 13900, 1021)
insert into AbbasTable values (15,69 ,1000 ,'02-Jul-2015', 11000, 1021)
insert into AbbasTable values (16,82 ,1000 ,'06-Jul-2015', 9000 , 1024)
insert into AbbasTable values (17,82 ,1000 ,'06-Jul-2015', 8000 , 1024)
insert into AbbasTable values (18,83 ,1000 ,'06-Jul-2015', 9000 , 1024)
insert into AbbasTable values (19,83 ,1000 ,'06-Jul-2015', 8000 , 1024)
insert into AbbasTable values (20,82 ,1000 ,'06-Jul-2015', 7000 , 1024)
insert into AbbasTable values (21,82 ,1000 ,'06-Jul-2015', 6000 , 1024)
insert into AbbasTable values (22,82 ,1000 ,'06-Jul-2015', 5000 , 1024)
insert into AbbasTable values (23,82 ,1000 ,'06-Jul-2015', 4000 , 1024)
insert into AbbasTable values (24,94 ,1000 ,'09-Jul-2015', 9000 , 1014)
insert into AbbasTable values (25,94 ,1000 ,'09-Jul-2015', 8000 , 1014)
insert into AbbasTable values (26,94 ,1000 ,'09-Jul-2015', 7000 , 1014)
insert into AbbasTable values (27,94 ,1000 ,'09-Jul-2015', 6000 , 1014)
```

For the output, use the query below:

```
WITH FinalTable AS
(
    SELECT AT.*
        ,ROW_NUMBER() OVER(PARTITION BY empid ORDER BY takendate DESC) AS RN
    FROM AbbasTable AT
)
SELECT *
FROM FinalTable
WHERE RN = 1
ORDER BY TakenDate DESC
```

[SQL Fiddle Demo](http://sqlfiddle.com/#!6/0524f/2/0)
In case you want the nearest but below, you can use ``` select esd1.* from Employee_SalaryDetails AS esd1 where esd1.TakenDate <= getdate() and not exists (select 1 from Employee_SalaryDetails AS esd2 where esd2.TakenDate <= getdate() and esd2.TakenDate > esd1.TakenDate ) ```
How to get all records which is nearest date from today?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
In my database I have three tables (don't ask why it was set up this way, it just was, this is a simplification but gets at the problem): ``` Table: players id username weapon_id shield_id Table: items id name stats (either attack or defend stats number) item_type_id Table: item_types id is_weapon (true if weapon, false if shield) is_rare (true/false boolean) ``` Each player has either no items, 1 item, or 2 items. (Either a weapon, a shield, both, or neither, in which case the weapon\_id and shield\_id would be null) I want to run a select query that gets: ``` Results of my query id (of player) username weapon_id weapon_name weapon_stats weapon_is_rare shield_id shield_name shield_stats shield_is_rare ``` In the case where shield\_id is null, for example, the values in the result should just be null. What's a good SQL query to get this sort of result from this setup?
You need to do a `LEFT JOIN` with the `items` and `item_types` table separately for the weapon and shield. Use table aliases to distinguish which join you're referencing. ``` SELECT p.id ,p.username ,p.weapon_id ,w.name AS weapon_name ,w.stats AS weapon_stats ,wt.is_rare AS weapon_is_rare ,p.shield_id ,s.name AS shield_name ,s.stats AS shield_stats ,st.is_rare AS shield_is_rare FROM players AS p LEFT JOIN items AS w ON w.id = p.weapon_id LEFT JOIN item_types AS wt ON wt.id = w.item_type_id LEFT JOIN items AS s ON s.id = p.shield_id LEFT JOIN item_types AS st ON st.id = s.item_type_id ```
Start with the `players` table as the main table you are querying. As others have said, you are going to want to `LEFT JOIN` the other tables to the `players` table on the foreign keys that match. If you aren't sure which join type to use, look here: <http://www.techonthenet.com/oracle/joins.php> That page is for Oracle DB, but the same concept applies to all flavors of SQL.
How to select from a table and up to two other items
[ "", "mysql", "sql", "" ]
I have a table with ~30M tuples. The table looks like:

```
id | first_name | last_name | email
-----------------------------------------
1  | foo        | bar       | foo@bar.com
```

There is a btree index on first_name and another on last_name. The query below takes about 200ms to return results:

```
SELECT *
FROM my_table
WHERE (first_name ILIKE 'a%')
LIMIT 10 OFFSET 0
```

But the next one takes about 15 seconds (after adding the order by):

```
SELECT *
FROM my_table
WHERE (first_name ILIKE 'a%')
ORDER BY last_name asc, first_name asc
LIMIT 10 OFFSET 0
```

What can I do to improve the performance of the last query?
You have two choices of indexes for this query:

```
SELECT t.*
FROM my_table t
WHERE first_name ILIKE 'a%'
ORDER BY last_name asc, first_name asc
LIMIT 10 OFFSET 0 ;
```

One is for the `WHERE` clause. The best index for this is `my_table(first_name)`.

The second possibility is to use an index for the `ORDER BY`, `my_table(last_name, first_name)`.

Which is better depends on the data you have. You might want to try both to see which works better, if overall performance is a goal.

Finally, a computed index might be the best way to go. For your case, write the query as:

```
SELECT t.*
FROM my_table t
WHERE lower(substr(first_name, 1, 1)) = 'a'
ORDER BY last_name asc, first_name asc
LIMIT 10 OFFSET 0 ;
```

Then, the index you want is `my_table(lower(substr(first_name, 1, 1)), last_name, first_name)`. This index can be used for both the `WHERE` and the `ORDER BY`, which should be optimal for this query.
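The three index options described above would be created like this in Postgres (the index names are made up for illustration):

```sql
-- Option 1: serve the WHERE clause
CREATE INDEX idx_my_table_first_name ON my_table (first_name);

-- Option 2: serve the ORDER BY
CREATE INDEX idx_my_table_last_first ON my_table (last_name, first_name);

-- Option 3: expression index covering both the rewritten
-- equality filter and the sort, in that column order
CREATE INDEX idx_my_table_initial_sort
    ON my_table (lower(substr(first_name, 1, 1)), last_name, first_name);
```

With option 3, the planner can satisfy the rewritten `lower(substr(first_name, 1, 1)) = 'a'` filter with the leading index column and then read rows already in `(last_name, first_name)` order, so the `LIMIT 10` stops after only a handful of index entries.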
I assume the following index will speed up the `ORDER BY`: ``` create index my_table_lname_fname on my_table (last_name, first_name) ```
Slow query on a large table when using order by
[ "", "sql", "postgresql", "indexing", "postgresql-9.3", "postgresql-performance", "" ]
VB.NET - I've found other questions related to this, but none specifically to my situation. I'm dealing with only two tables - "task" and "task\_run." I have a Gridview with rows listing certain "tasks." They come from the "task" table and each task has a "tsk\_id." I want to have a delete button for each task (row) and only want the delete button visible for that row if the task does not have a run associated with that task from the "task\_run" table. (i.e. I do not want the user to be able to delete that task if it has already been run.) table1 - "task" PKY = "tsk\_id" table2 - "task\_run" PKY = "run\_id" FKY = "run\_tsk\_id" I assume I need to have a template field in my gridview and have the delete button conditionally show based on whether there are rows in the run table associated with that particular task Id, but am stuck on how to do this. Hopefully this makes sense. Any help is appreciated. Thanks!
You first get the task\_Id from Task\_run table accordingly user if exist otherwise return zero value , and placed this Task\_Id in gridview on Label or textbox or hidden field with visible=false property if you not showing to user, use this command for checking Task have already run or not by user. ``` SELECT ext.* , ISNULL((SELECT top 1 run_tsk_id FROM task_run WHERE run_tsk_id = ext.tsk_id),0) AS CheckId FROM task ext ``` then use the gridview RowDataBound event to hide or show the delete button conditionally, code as. ``` Protected Sub Grid_RowDataBound(sender As Object, e As GridViewRowEventArgs) If e.Row.RowType = DataControlRowType.DataRow Then Dim lblCheckId As Label = DirectCast(e.Row.FindControl("lblCheckId"),Label) Dim deleteButton As Button = DirectCast(e.Row.FindControl("btnDelete"), Button) If CInt(lblCheckId.Text) > 0 Then deleteButton.Visible = False Else deleteButton.Visible = True End If End If End Sub ```
The DataSource you bind to the grid should also include the task run count. Then you can use the RowDataBound event to show or hide the button:

```
Protected Sub Grid_RowDataBound(sender As Object, e As GridViewRowEventArgs)
    If e.Row.RowType = DataControlRowType.DataRow Then
        '' Cells(0) should be whichever column holds the task run count
        Dim taskRunCount As Integer = Convert.ToInt16(e.Row.Cells(0).Text)
        Dim deleteButton As Button = DirectCast(e.Row.FindControl("DeleteButton"), Button)
        If taskRunCount > 0 Then
            deleteButton.Visible = False
        Else
            deleteButton.Visible = True
        End If
    End If
End Sub
```
Hide Delete Button in Gridview if RowID links to Foreign Key
[ "", "sql", "asp.net", "database", "vb.net", "gridview", "" ]
I need to `SELECT` all rows where `column1` does not have a leading `0`. `column1` has a `String` data type. ``` column1 0123455677 0987654321 2345567887 0233445566 3422245666 ```
ANSI SQL answer: ``` select * from tablename where substring(column1 from 1 for 1) <> '0' ``` Some dbms products have `SUBSTR(column1, 1, 1)` instead, or `LEFT(column1, 1)`.
I assume you are talking about a character field, as a numeric field wouldn't be displayed with leading 0's unless you format it. Anyway, for a character field you would select:

```
WHERE column1 NOT LIKE '0%'
```

In case of a numeric field you might select:

```
WHERE column1 >= 1000000000
```
Get all the rows which are not have leading zero
[ "", "sql", "" ]
```
id - Name - Subject - Marks
1  - ABC - MAT - 90
2  - ABC - SCI - 80
3  - ABC - ENG - 90
4  - ABC - HIS - 96
5  - ABC - PHY - 70
6  - ABC - CHE - 43
7  - XYZ - MAT - 90
8  - XYZ - SCI - 80
9  - XYZ - ENG - 90
10 - XYZ - HIS - 96
11 - XYZ - PHY - 70
13 - XYZ - CHE - 43
etc .....
```

I just want to show the top 3 scorers in each subject:

```
ABC - MAT - 90
XYZ - MAT - 90
DEF - MAT - 80
etc
```
You can do this using variables. ``` select t.* from (select t.*, (@rn := if(@s = subject, @rn + 1, if(@s := subject, 1, 1) ) ) as rn from t cross join (select @rn := 0, @s := '') params order by subject, marks desc ) t where rn <= 3 order by t.subject, t.rn; ```
With your data I created a SQL Fiddle demo. I included two additional students, so the query returns only 3 of the 4 candidates in each subject.

```
CREATE TABLE Courses
    (`id` int, `Name` varchar(3), `Subject` varchar(3), `Marks` int);
```

Then I create this derived table to simplify the next step. The idea is to build a unique code from the 3-digit mark (`000`-`100`) plus `Name`, so rows can be sorted during the `left outer join` using `AND L.comb <= R.comb`. Note that with this sort order, in case of a tie the alphabetically latest name will show first.

```
CREATE TABLE s_course AS
    SELECT `id`, `Name`, `Subject`, `Marks`,
           concat(LPAD(`Marks`, 3, '0'), `Name`) as comb
    FROM Courses;
```

Now the `SELECT`; if you run the inner select alone, you will see all 4 results per subject and can check how the ties are resolved.

```
SELECT *
FROM (
       SELECT L.Subject, L.Marks, L.Name, count(*) as rn
       FROM s_course L
       left outer join s_course R
         ON L.Subject = R.Subject
        AND L.comb <= R.comb
       GROUP BY L.Subject, L.comb
       ORDER BY L.Subject, L.comb
     ) t
WHERE rn <= 3
ORDER BY Subject, rn
```

Or maybe you prefer `ORDER BY Subject, Marks DESC, Name`.

This query exploits a MySQL extension: you don't need to put the same fields in the SELECT and the GROUP BY.

**OUTPUT**

```
| Subject | Marks | Name | rn |
|---------|-------|------|----|
| CHE     |    48 |  PQR |  1 |
| CHE     |    48 |  FGH |  2 |
| CHE     |    43 |  XYZ |  3 |
|---------|-------|------|----|
| ENG     |    95 |  PQR |  1 |
| ENG     |    92 |  FGH |  2 |
| ENG     |    90 |  XYZ |  3 |
|---------|-------|------|----|
| HIS     |    96 |  XYZ |  1 |
| HIS     |    96 |  ACB |  2 |
| HIS     |    91 |  PQR |  3 |
|---------|-------|------|----|
| MAT     |    95 |  PQR |  1 |
| MAT     |    95 |  FGH |  2 |
| MAT     |    90 |  XYZ |  3 |
|---------|-------|------|----|
| PHY     |    75 |  PQR |  1 |
| PHY     |    70 |  XYZ |  2 |
| PHY     |    70 |  ACB |  3 |
|---------|-------|------|----|
| SCI     |    80 |  XYZ |  1 |
| SCI     |    80 |  ACB |  2 |
| SCI     |    75 |  PQR |  3 |
```
How to find the top 3 scorers in each subject in a given table
[ "", "mysql", "sql", "" ]
Suppose I have a table as follows: ``` loan: no | credit ------------- L1 | 600 L2 | 550 L3 | 800 L4 | 800 L5 | 700 ``` If I want to find the max loan I can simply do this: ``` SELECT MAX(l.no) FROM loan l WHERE l.credit = (SELECT MAX(l.credit) from loan l) ``` But this would only return me one loan. Is there a way to display both `L3` and `L4` without making a complicated query?
Why did you put a `MAX` in the outer select query? Remove it, or else only one record will be fetched:

```
-- delete the MAX here, or else only one record would be fetched.
SELECT l.no
FROM loan l
WHERE l.credit = (SELECT MAX(l.credit) from loan l)
```
Yes, just don't use an aggregate function without a group by if you want to have more than one row. ``` SELECT no, credit FROM loan l WHERE l.credit = (SELECT MAX(l.credit) from loan l) ```
Show multiple MAX results
[ "", "sql", "sqlite", "" ]
I have a table with two columns, containing two records.

SQL table structure:

```
CREATE TABLE #TEMP_TEST
(
    ID bigint NOT NULL,
    DESCRIPTION varchar(500) NULL
) ON [PRIMARY]

INSERT INTO #TEMP_TEST VALUES(1, 'obsolete')
INSERT INTO #TEMP_TEST VALUES(2, NULL)

SELECT * FROM #TEMP_TEST
```

Total dummy records in the `#TEMP_TEST` table:

```
ID      | DESCRIPTION |
--------+-------------+
1       | obsolete    |
2       | NULL        |
```

Queries with the conditions `=`, `LIKE`, `IN` work fine:

```
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION ='obsolete'
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION LIKE 'obsolete'
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION IN ('obsolete')
```

They all return the expected result:

```
ID      | DESCRIPTION |
--------+-------------+
1       | obsolete    |
```

**Problem**: Queries with the conditions `!=`, `NOT LIKE`, `NOT IN` do not work as expected:

```
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION != 'obsolete'
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION NOT LIKE 'obsolete'
SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION NOT IN ('obsolete')
```

Results:

```
ID      | DESCRIPTION |
--------+-------------+
```

Expected results:

```
ID      | DESCRIPTION |
--------+-------------+
2       | NULL        |
```

What should I do to get the expected result?
You can use `OR` and It will work even if `ANSI_NULLS` is `ON` ``` SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION != 'obsolete' OR LC2.DESCRIPTION IS NULL SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION NOT LIKE 'obsolete' OR LC2.DESCRIPTION IS NULL SELECT * FROM #TEMP_TEST LC2 WHERE LC2.DESCRIPTION NOT IN ('obsolete') OR LC2.DESCRIPTION IS NULL ```
This is because of three-valued logic in SQL Server. A predicate can evaluate to:

* true
* false
* unknown

When you are comparing data in SQL Server, you should always think about possible `NULL`s. Consider these predicates:

```
where 1=1       => evaluates to true
where 2=1       => evaluates to false
where 1=null    => evaluates to unknown
where null=null => evaluates to unknown
```

So comparing `NULL` to any value, even to `NULL` itself, evaluates to unknown.

Now you should know how the `WHERE` clause works: it returns only rows where the predicate evaluates to `TRUE`!

In your case the predicate:

```
WHERE LC2.DESCRIPTION != 'obsolete'
```

will evaluate to:

```
'obsolete' != 'obsolete' => false
NULL       != 'obsolete' => unknown
```

So there are no rows where the predicate evaluates to `TRUE`, and you get nothing as a result.

As for what you should do, you can use:

```
WHERE ISNULL(LC2.DESCRIPTION, 'not obsolete') != 'obsolete'
```

but here your predicate is not SARGable, and you will not benefit from any index created on the `LC2.DESCRIPTION` column. The standard way is to use `OR`:

```
WHERE LC2.DESCRIPTION != 'obsolete' OR LC2.DESCRIPTION IS NULL
```
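A quick way to see all three outcomes side by side: any comparison involving NULL fails both the positive and the negated check, so it falls through to the ELSE branch.

```sql
-- Each CASE labels what its predicate evaluated to.
SELECT
    CASE WHEN 1 = 1             THEN 'true'
         WHEN NOT (1 = 1)       THEN 'false'
         ELSE 'unknown' END AS one_eq_one,        -- 'true'
    CASE WHEN NULL = NULL       THEN 'true'
         WHEN NOT (NULL = NULL) THEN 'false'
         ELSE 'unknown' END AS null_eq_null,      -- 'unknown'
    CASE WHEN NULL != 'obsolete'       THEN 'true'
         WHEN NOT (NULL != 'obsolete') THEN 'false'
         ELSE 'unknown' END AS null_ne_value;     -- 'unknown'
```

Note the second check never fires for NULL either: negating unknown still yields unknown, which is exactly why `NOT IN`/`NOT LIKE` drop NULL rows.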
Why Sql Conditions (not in , not like , !=) not working on varchar value of table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I was given this formula in a program, but it's giving me the first day of the current month. I need the last day of the previous month: the report runs on the 3rd day of the current month, and we want only the previous month's data.

```
ToNumber(ToText(Year(CurrentDate), 0, "") + ToText(Month(CurrentDate), "00") + '01')
```
You could use:

```
{datefield} in lastfullmonth
```

or, if you just need that last day:

```
date(year(currentdate), month(currentdate), 1) - 1
```
Does this give you what you need? ``` select DATEADD(MONTH, DATEDIFF(MONTH, -1, GETDATE())-1, -1) ```
Crystal reports - last day of previous month
[ "", "sql", "crystal-reports", "" ]
I am logged in as user `abc` to a database `xyz` in **Oracle SQL Developer**. How can I get the username of the schema I am logged in through?
I believe by *usernames* you mean operating system usernames. In an Oracle database (and SQL Developer), the username is a synonym for the schema name. So in your case, your schema is called *abc*.

Now assume your operating system username is 'John', and you want to know other users who are connected to the schema 'abc'; then you can run the query:

```
SELECT osuser FROM v$session WHERE schemaname = 'abc';
```

Refer to this [post](https://stackoverflow.com/questions/880230/difference-between-a-user-and-a-schema-in-oracle) for more details.
If you do not change the current schema, the following code will be fine:

```
-- In PLSQL
DECLARE
  vv_SchemaName VARCHAR2(100);
BEGIN
  vv_SchemaName := Sys_Context('USERENV','CURRENT_SCHEMA');
  dbms_output.put_line(vv_SchemaName);
END;

-- IN SQL
SELECT Sys_Context('USERENV','CURRENT_SCHEMA') FROM DUAL
```

CURRENT_SCHEMA is a bit different from the logged-in user; see the example:

```
-- The output will be:
-- Current schema:LOGGED_SCHEMA session user:LOGGED_SCHEMA
-- Current schema:CHANGED_SCHEMA session user:LOGGED_SCHEMA
-- When you are connected to LOGGED_SCHEMA and have changed to CHANGED_SCHEMA.
DECLARE
  vv_SchemaName  VARCHAR2(100);
  vv_SessionUser VARCHAR2(100);
BEGIN
  vv_SchemaName  := Sys_Context('USERENV','CURRENT_SCHEMA');
  vv_SessionUser := Sys_Context('USERENV','SESSION_USER' );
  dbms_output.put_line('Current schema:' || vv_SchemaName || ' session user:' || vv_SessionUser);

  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA=CHANGED_SCHEMA';

  vv_SchemaName  := Sys_Context('USERENV','CURRENT_SCHEMA');
  vv_SessionUser := Sys_Context('USERENV','SESSION_USER' );
  dbms_output.put_line('Current schema:' || vv_SchemaName || ' session user:' || vv_SessionUser);
END;
```

So if you plan to connect as one user and work on another schema, then depending on your needs, using Sys_Context('USERENV','SESSION_USER') may be the better option.
How to get user of a schema with sql/plsql
[ "", "sql", "plsql", "oracle-sqldeveloper", "" ]
Suppose I have tables `A` and `B`, and I want to create a single select query with a conditional join like this ``` @customParam bit, when customParam is true select * from A a join B b on a.B_Id = b.Id else select * from A a join B b on b.someid = a.someid ```
You can use `IF ELSE`: ``` IF @customParam = 1 BEGIN SELECT * FROM A a JOIN B b ON a.B_Id = b.Id END ELSE BEGIN SELECT * FROM A a JOIN B b ON b.someid=a.someid END ``` In one statement (poor performance): ``` SELECT * FROM A a JOIN B b ON (@customParam = 1 AND a.B_Id = b.Id) OR (@customParam = 0 AND b.someid=a.someid) ``` You can also use Dynamic-SQL and build custom query: ``` DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM A a JOIN B b ON <placeholder>'; SET @sql = REPLACE(@sql, '<placeholder>', CASE WHEN @customParam = 1 THEN 'a.B_Id = b.Id' ELSE 'b.someid=a.someid' END); EXEC [dbo].[sp_executesql] @sql; ``` `LiveDemo`
This could be achievable using [CASE](https://msdn.microsoft.com/en-us/library/ms181765.aspx?f=255&MSPPError=-2147217396). Please see this: ``` SELECT * FROM A AS a INNER JOIN B AS b ON CASE WHEN @customParam = 1 AND b.Id = a.B_Id THEN 1 WHEN @customParam = 0 AND b.someid = a.someid THEN 1 ELSE 0 END = 1; ``` Your query will check your `@customParam` value and, based on that, will evaluate your JOIN condition; if it's a match, the CASE results in 1, which is then compared to 1 (`END = 1;`), and the row is returned.
SQL Server Conditional Join between two tables
[ "", "sql", "sql-server", "t-sql", "" ]
I have a query which searches rows in a database for matching strings. An example row may be: ``` This is a row which contains a String ``` The query that I am currently running is syntactically identical to ``` SELECT table.column FROM table WHERE table.column LIKE "*String*" ``` although it returns every row where the text "string" is found, regardless of case. Does MS Access 2010 have any sort of case sensitive string comparator that I should be using instead of this?
You will have to resort to VBA methods, I'm afraid. Fortunately, VBA methods can be used in JET SQLs (although performance might not be the best). The VBA `Instr` method allows you to specify the comparison mode (0 = binary = case-sensitive): ``` SELECT table.column FROM table WHERE INSTR(table.column, "String", 0) > 0 ```
You can use [Instr](http://www.techonthenet.com/access/functions/string/instr.php): ``` SELECT t.FieldName FROM Table t WHERE ((InStr(1,[FieldName],"aB",0)>"0")); ```
Case sensitive searching in MS Access
[ "", "sql", "ms-access", "ms-access-2010", "" ]
I have been trying to create an sql query that returns the most recent record for each user on a given day. ``` Select dbfirst, dblast, max(dbdate) as MaxDate from table where (DATEPART(yy, dbdate) = 2015 AND DATEPART(mm, dbdate) = 10 AND DATEPART(dd, dbdate) = 22) group by dbfirst, dblast ``` The results I get back are for users that have signed in on that given day ie. ``` ╔═════════╦════════╦════════════════════════╗ β•‘ dbfirst β•‘ dblast β•‘ MaxDate β•‘ ╠═════════╬════════╬════════════════════════╣ β•‘ user β•‘ 10 β•‘ 20151022 13:13:09.000 β•‘ β•‘ user β•‘ 11 β•‘ 20151022 10:18:50.000 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` This returns only one record for a user which is the most current. I also need to be able to display the column 'dbflow' in the results which is a varchar "In" or "Out" but when I do this. ``` Select dbfirst, dblast, dbflow, max(dbdate) as MaxDate from [Inventory].[dbo].[pr_dbs] where (DATEPART(yy, dbdate) = 2015 AND DATEPART(mm, dbdate) = 10 AND DATEPART(dd, dbdate) = 22) group by dbfirst, dblast, dbflow ``` It returns the newest In and Out and I just need the latest record regardless of dbflow. ``` ╔═════════╦════════╦════════╦═════════════════════════╗ β•‘ dbfirst β•‘ dblast β•‘ dbflow β•‘ MaxDate β•‘ ╠═════════╬════════╬════════╬═════════════════════════╣ β•‘ user β•‘ 10 β•‘ In β•‘ 2015-10-22 13:13:09.000 β•‘ β•‘ user β•‘ 11 β•‘ In β•‘ 2015-10-22 10:18:50.000 β•‘ β•‘ user β•‘ 10 β•‘ Out β•‘ 2015-10-22 12:13:09.000 β•‘ β•‘ user β•‘ 11 β•‘ Out β•‘ 2015-10-22 9:18:50.000 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• ``` Thanks in advance for your assistance or advice.
According to the official **Microsoft Training Kit Book for Exam 70-461** (SQL Server), you have 3 workarounds for your problem (if you know that there can’t be more than one distinct dbflow for each distinct (dbfirst, dblast)): 1. Just add it to the `GROUP BY` clause 2. The 2nd option is to apply an aggregate function like `MAX` to the column. 3. The 3rd option is to group and aggregate the rows from the [pr\_dbs] table first, define a table expression based on the grouped query, and then join the table expression with the original [pr\_dbs] table to get the last column: ``` WITH CTE AS ( SELECT dbfirst, dblast, max(dbdate) as MaxDate FROM [Inventory].[dbo].[pr_dbs] WHERE (DATEPART(yy, dbdate) = 2015 AND DATEPART(mm, dbdate) = 10 AND DATEPART(dd, dbdate) = 22) GROUP BY dbfirst, dblast ) SELECT CTE.*, D.dbflow FROM [Inventory].[dbo].[pr_dbs] AS D INNER JOIN CTE ON D.dblast = CTE.dblast AND D.dbfirst = CTE.dbfirst AND D.dbdate = CTE.MaxDate; ``` > SQL Server usually optimizes the third solution like it does the > first. The first solution might be preferable because it involves much > less code. **EDIT:** As you clarified your question, does this fill your need: ``` SELECT table.dbfirst, table.dblast, T.MaxDate, table.dbflow FROM table INNER JOIN ( SELECT dbfirst, dblast, max(dbdate) as MaxDate FROM table WHERE (DATEPART(yy, dbdate) = 2015 AND DATEPART(mm, dbdate) = 10 AND DATEPART(dd, dbdate) = 22) GROUP BY dbfirst, dblast ) AS T ON table.dbfirst = T.dbfirst AND table.dblast = T.dblast AND table.dbdate = T.MaxDate ```
Have you thought of using windowed functions? This would be example for your query: ``` SELECT DISTINCT dbflow , dbfirst , dblast , MAX(dbdate) OVER (PARTITION BY dbfirst, dblast) AS MaxDate FROM [Inventory].[dbo].[pr_dbs] WHERE DATEPART(yy, dbdate) = 2015 AND DATEPART(mm, dbdate) = 10 AND DATEPART(dd, dbdate) = 22; ``` However I'm not certain if this is correct logic.
SQL query group by with extra column
[ "", "sql", "sql-server", "t-sql", "" ]
Given a data set for example as follows: PERSON - the person allocating funds (key field) EFFECTIVE DATE - the effective date of the percentage distribution RECIPIENT - the person receiving a certain percentage of the funds PERCENTAGE - the percentage that the person receives (so for any given effective date, a person allocates a total of 100% to any given combination of recipients) ``` PERSON EFFECTIVE DATE RECIPIENT PERCENTAGE ---------- --------------- ------------- ----------- MONICA 2015-10-01 BARNEY 100% + MONICA 2015-09-01 BARNEY 50% + MONICA 2015-09-01 MARSHALL 20% + MONICA 2015-09-01 LILY 30% + MONICA 2015-08-01 ROBIN 50% MONICA 2015-08-01 TED 50% CHANDLER 2015-10-01 ROBIN 50% + CHANDLER 2015-10-01 LILY 50% + CHANDLER 2015-07-10 BARNEY 50% + CHANDLER 2015-07-10 MARSHALL 50% + CHANDLER 2015-06-01 ROBIN 50% CHANDLER 2015-06-01 LILY 50% CHANDLER 2015-04-10 BARNEY 50% CHANDLER 2015-04-10 MARSHALL 50% ROSS 2015-10-01 MARSHALL 100% + ROSS 2015-09-15 BARNEY 100% + PHOEBE 2015-10-01 MARSHALL 20% + PHOEBE 2015-10-01 BARNEY 20% + PHOEBE 2015-10-01 LILY 20% + PHOEBE 2015-10-01 ROBIN 20% + PHOEBE 2015-10-01 TED 20% + PHOEBE 2015-09-01 MARSHALL 100% + PHOEBE 2015-08-01 BARNEY 100% PHOEBE 2015-07-01 LILY 100% PHOEBE 2015-06-01 ROBIN 100% PHOEBE 2015-05-01 TED 100% ``` How can I construct a single SQL query that would return only the two latest sets of allocations per person at once (all of the records marked with a "+" above)? This is so that I could process the data into a display that says, for example, that: "Monica changed allocation FROM 50% for Barney, 20% for Marshall, and 30% for Lily TO 100% for Barney." "Chandler changed allocation FROM 50% for Barney, 50% for Marshall TO 50% for Robin, 50% for Lily." "Ross changed allocation FROM 100% for Barney TO 100% for Marshall." "Phoebe changed allocation FROM 100% for Marshall TO 20% for Marshall, 20% for Lily, 20% for Barney, 20% for Robin, 20% for Ted."
Here is one method using `where`: ``` select t.* from t where t.effective_date in (select t2.effective_date from t t2 where t2.person = t.person group by t2.effective_date order by t2.effective_date desc fetch first 2 rows only ); ``` You can also do this with `dense_rank()`: ``` select t.* from (select t.*, dense_rank() over (partition by person order by effective_date desc) as seqnum from t ) t where seqnum <= 2; ```
You can try it like this: ``` select * from table as w where w.effective_date >= ( /* get the 2nd max effective date */ select max(a.effective_date) from table as a where a.person = w.person and a.effective_date <> ( /* get the max effective_date per person */ select max(x.effective_date) from table as x where x.person = a.person) ) ```
How can I construct my SQL query to select the two latest/most recent groups of records for each key field in a table in DB2?
[ "", "sql", "db2", "greatest-n-per-group", "" ]
I have a table with the following data: ``` create table tempdata(account varchar2(20)not null,bookid number(10),seqno number(20) not null,book_date date, book1 number(10), book2 number(10),book3 number(10)) insert into tempdata values('123',101,09,add_months((sysdate),-1),100,120,130); insert into tempdata values('123',101,10,sysdate,70,60,100); select * from tempdata; ACCOUNT BOOKID SEQNO BOOK_DATE BOOK1 BOOK2 BOOK3 123 101 9 9/22/2015 10:05:28 AM 100 120 130 123 101 10 10/22/2015 10:01:42 AM 70 60 100 ``` I need to output something like the following in order to create another temp table with the latest book details, including the previous date and latest date: ``` ACCOUNT BOOKID SEQNO Previous_DATE Latest_date BOOK1 BOOK2 BOOK3 123 101 10 9/22/2015 10:05:28 AM 10/22/2015 10:01:42 AM 70 60 100 ```
Here I am assuming that you want data for a unique `account` and `bookid` combination. ``` SELECT T1.ACCOUNT, T1.BOOKID, T1.SEQNO, T1.PREVIOUS_DATE, T1.BOOK_DATE AS LATEST_DATE, T1.BOOK1, T1.BOOK2, T1.BOOK3 FROM ( SELECT T.*, ROW_NUMBER() OVER (PARTITION BY ACCOUNT, bookid ORDER BY BOOK_DATE DESC) AS rno, LAG(BOOK_DATE) OVER (PARTITION BY ACCOUNT, bookid ORDER BY BOOK_DATE) AS PREVIOUS_DATE FROM TEMPDATA T) T1 WHERE T1.RNO = 1 ``` Note that the `LAG` is partitioned by the same columns as the `ROW_NUMBER`, so the previous date never leaks across accounts.
The LAG and ROW\_NUMBER analytic functions would come in handy here: ``` select account, bookid, seqno, previous_date, latest_date, book1, book2, book3 from (select account, bookid, seqno, lag(book_date) over (partition by account order by book_date) previous_date, book_date latest_date, book1, book2, book3, row_number() over (partition by account order by book_date) rn from tempdata) where rn = 1; ACCOUNT BOOKID SEQNO PREVIOUS_DATE LATEST_DATE BOOK1 BOOK2 BOOK3 -------------------- ---------- ---------- --------------------- --------------------- ---------- ---------- ---------- 123 101 9 22/09/2015 14:34:06 100 120 130 ``` N.B. I've made the assumption that you want this information for each account. If the grouping needs to be changed (eg. maybe it's account and bookid, or just bookid) then you'll need to amend the partition by clauses appropriately.
How to get previous and latest date and its details in SQL
[ "", "sql", "oracle", "" ]
I'm new to SQL and facing the following problem: This is my table: ``` name city people -----|-----|--------| John | A | 5 | Ben | D | 6 | John | A | 5 | Ben | A | 5 | John | B | 8 | Ben | D | 6 | ``` I want to group by name and, for each name, receive the city with the largest quantity. As a second query, instead of the largest quantity, I want the city with the highest sum of inhabitants. This would be the outcome for the first query: ``` name city -----|-----| John | A | Ben | D | ``` Thank you!
I don't know exactly what you mean by "to the name that city with the largest quantity". What I understood is that you sum the column 'people' per (name, city) pair, so (John, A) would be 10 and (John, B) would be 8, and you take the max value to get (John, A). In this case, you can do it this way: ``` SELECT name, city FROM (SELECT name, city, SUM(people) AS tot FROM table GROUP BY name, city ORDER BY name ASC, tot DESC) AS a GROUP BY name ; ``` (Note that the outer `GROUP BY name` picks the first row of each group thanks to the ordered subquery; this relies on MySQL's non-standard handling of non-aggregated columns and will not work with `ONLY_FULL_GROUP_BY` enabled.) As for the city with the largest number of inhabitants, you just have to group by city, sum the column people and take the max: ``` SELECT city, SUM(people) AS nb_inhabitants FROM table GROUP BY city ORDER BY nb_inhabitants DESC LIMIT 1; ```
``` SELECT name, city, sum( people ) FROM `detail` GROUP BY name ORDER BY people ASC LIMIT 0 , 30 ``` I don't really understand what you are expecting, but I guess you want to do this. Description: I group by name, take the sum of people per name, and order them ascending. I am not sure this is what you expect. You can also group people by their city: ``` SELECT name, city, sum( people ) FROM `detail` GROUP BY city ORDER BY people ASC LIMIT 0 , 30 ``` If this is not what you expect, please describe the question further and we will try to give an answer. [![enter image description here](https://i.stack.imgur.com/evGRE.png)](https://i.stack.imgur.com/evGRE.png)
SQL: Group by - further attribute with largest quantity
[ "", "mysql", "sql", "group-by", "" ]
I'm using an SQL query to view data from a table: ``` SELECT created, name, surname, phone, email, address FROM jos_registration_form ORDER BY created DESC ``` This returns a list on the front end of a table with first column name created, second column name name, third column name surname and so on. Is there an SQL statement that can change the column names to something else (without renaming them in the MySQL database), only on the front end? For example, to show for the first column created the Greek Δημιουργία, for the second column name the Greek Όνομα, and so on. I know that this can be done with PHP etc., but I'd like to do it with SQL.
Simply add the name as an alias to the sql statement: ``` SELECT created Δημιουργία, name Όνομα,surname,phone,email,address FROM jos_registration_form ORDER BY created DESC ```
Use this. You can find documentation if you search for `alias in sql`: ``` SELECT Created AS CreateDate FROM table_name ```
Query to list table with different column names
[ "", "sql", "sql-server", "" ]
I am still new to postgres. I would like to have a SELECT statement in the SELECT portion of the query, but right now I am getting an error. ``` SELECT cu.user_name, cu.created_date, cu.updated_date, cu.email_address, cua.attribute_name, cua.attribute_value, (select to_char(to_timestamp(cua.attribute_value / 1000), 'yyyy-mm-dd HH24:MI:SS')) AS Issue_Update FROM cwd_user cu INNER JOIN cwd_user_attribute cua ON cu.id = cua.user_id WHERE cu.user_name LIKE 'perter%' ``` I am getting the following error: > ERROR: operator does not exist: character varying / integer > LINE 3: (select to\_char(to\_timestamp(cua.attribute\_value / 1000), '... > ^ > HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Apparently `cua.attribute_value` is defined as `varchar`. The error message is telling you that you can not divide a string by a number. You need to convert (cast) the varchar to an integer. And you don't need the `select` at all. This is the workaround for your current design: ``` SELECT cu.user_name, cu.created_date, cu.updated_date, cu.email_address, cua.attribute_name, cua.attribute_value, to_char(to_timestamp(cua.attribute_value::bigint / 1000), 'yyyy-mm-dd HH24:MI:SS') AS Issue_Update FROM cwd_user cu JOIN cwd_user_attribute cua ON cu.id = cua.user_id WHERE cu.user_name LIKE 'perter%'; ``` `::bigint` casts the string to an integer value. It's the Postgres-specific syntax for the ANSI SQL `cast(... as bigint)` operator. See [the manual](http://www.postgresql.org/docs/current/static/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS) for details. But this will **fail** if `cua.attribute_value` contains values that can not be converted to an integer (an empty string `''` would already break this). The correct solution is to store numbers in `integer` columns. Do not store numbers as `varchar` --- `attribute_name` and `attribute_value` sound very much like the (anti) pattern called "Entity-Attribute-Value". If you are sure that the timestamp information is correct for attributes with a specific name you can do something like this to avoid casting errors: ``` CASE WHEN cua.attribute_name = 'timestamp' THEN to_char(to_timestamp(cua.attribute_value::bigint / 1000), 'yyyy-mm-dd HH24:MI:SS') END AS Issue_Update ``` This will return `NULL` for all rows where `attribute_name` is not `'timestamp'` and the formatted timestamp for those that are. But again, this will only work if the values for that attribute are valid numbers (and of course you need to adjust the comparison with the string literal `'timestamp'` to use the correct attribute name).
``` SELECT cu.user_name, cu.created_date, cu.updated_date, cu.email_address, cua.attribute_name, cua.attribute_value, case when (attribute_value like '%True%' or attribute_value like '%False%') then cast(NULL as bigint) else CAST(nullif(cua.attribute_value, '') AS bigint) end filter_attribute_value, (select to_char(to_timestamp( filter_attribute_value / 1000), 'yyyy-mm-dd HH24:MI:SS')) AS Issue_Update FROM cwd_user cu INNER JOIN cwd_user_attribute cua ON cu.id = cua.user_id WHERE cu.user_name LIKE 'perter%' ```
SELECT Sub Query PostgreSQL
[ "", "sql", "postgresql", "subquery", "" ]
For security purposes, we will create a database log that will contain all changes done on different tables in the database. To achieve this we will use triggers as stated [here](https://stackoverflow.com/questions/12563706/is-there-a-mysql-option-feature-to-track-history-of-changes-to-records), but my concern is that if the system admin or anyone who has the root privilege changes the data in the logs for their benefit, it will make having logs meaningless. Thus, I would like to know if there is a way for me to prevent anyone, and I mean no one at all, from doing any changes on the logs table, i.e. dropping the table, updating, and deleting rows. Is this even possible? Also, in regards to my logs table, is it possible to keep track of the previous data that was changed using the update query? I would like to have the previous and new data in my logs table so that we may know what changes were made.
The problem you are trying to fix is hard, as you want someone who can administer your system, but you don't want them to be able to actually do something with all parts of the system. That means you either need to administer the system yourself and give someone limited access, trust all administrators, or look for an external solution. What you could do is write your logs to a system where only you (or at least a different administrator than the first) have access. Then, if you only ever write (and don't allow changes/updates and deletes) on this system, you will be able to keep a trusted log and even spot inconsistencies in case of tampering. A second method would be to use a specific method to write logs, one that adds a signed message. In this manner you can be sure that the logs have been added by that system. If you also save (signed) messages of the state of the complete system, you are probably going to be able to recognize any tampering. The 'system' used for signing should obviously live on another machine, making it somewhat equivalent to the first option.
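The signed-message idea above can be sketched with an HMAC. A minimal Python illustration follows; the key, field names, and JSON layout are all hypothetical, and in practice the signing key must live on a machine the database administrators cannot reach:

```python
import hashlib
import hmac
import json

# Hypothetical secret held only by the trusted log writer,
# never by the database administrators.
SIGNING_KEY = b"example-key-kept-off-the-db-host"

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 signature covering the whole log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Recompute the signature (excluding the stored one) and compare safely."""
    claimed = entry.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in entry.items() if k != "signature"},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

entry = sign_entry({"table": "orders", "action": "UPDATE", "old": "5", "new": "7"})
assert verify_entry(entry)       # an untouched entry verifies
entry["new"] = "700"             # a tampered value...
assert not verify_entry(entry)   # ...breaks verification
```

Anyone who can edit the log rows but does not hold the key cannot forge a matching signature, which is what makes the tampering detectable.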
There is no way to stop root access from having permission to make alterations. A combination approach can help you detect tampering, though. You could create another server that has more limited access and clone the database table there. Log all login activity on both servers and cross-backup the logs between servers. Also, make very regular off-server backups. You could also create a hashing table that matches each row of the log table. They would not only have to find the code that creates the hash, but reverse engineer it and alter the timestamp to match. However, I think your best bet is to make a cloned server that has no network login -- physical login only. If you think there has been any tampering, you will have to do some forensics. You can even add a USB key to the physical clone server and keep it with a CEO or something. Of course, if you can't trust the sysadmins, no matter what, your job is very difficult. The trick is not to create a solid wall, but a fine net, and to scrutinize everything coming through it. Once you set up the master-slave relationship, and only give untrusted users access to the slave database, you won't need to alter your code. Just use the master database as the primary in your code. The link below is information on setting up master-slave replication. To be fully effective, though, these need to be on different servers. I don't know how this solution would work on one server. It may be possible, I just don't know. <https://dev.mysql.com/doc/refman/5.1/en/replication.html>
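One concrete way to build the hashing table mentioned above is a hash chain, where each row's hash also covers the previous row's hash; editing any row then breaks every later hash. A small sketch, assuming the log rows have been serialized to strings (the row format and seed are illustrative):

```python
import hashlib

def chain_hashes(rows, seed="genesis"):
    """Return one hash per log row; each hash covers the row AND the previous hash."""
    hashes, prev = [], seed
    for row in rows:
        prev = hashlib.sha256((prev + "|" + row).encode()).hexdigest()
        hashes.append(prev)
    return hashes

rows = ["user=10 flow=In", "user=11 flow=Out", "user=10 flow=Out"]
original = chain_hashes(rows)

# Tampering with the middle row changes its hash *and* every hash after it.
rows[1] = "user=11 flow=In"
tampered = chain_hashes(rows)
assert tampered[0] == original[0]
assert tampered[1] != original[1]
assert tampered[2] != original[2]
```

Storing these chained hashes on the separate, limited-access server makes it easy to pinpoint where the primary log was first altered.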
how to prevent anyone from dropping, deleting, and changing the contents of a log table in mysql
[ "", "mysql", "sql", "database", "logging", "" ]
I have a table called `users` that holds user ids, as well as a few tables like `cloud_storage_a`, `cloud_storage_b` and `cloud_storage_c`. If a user exists in `cloud_storage_a`, that means they are connected to cloud storage a. A user can exist in many cloud storages too. Here's an example: `users` table: ``` user_id | address | name ------------------------------- 123 | 23 Oak Ave | Melissa 333 | 18 Robson Rd | Steve 421 | 95 Ottawa St | Helen 555 | 12 Highland | Amit 192 | 39 Anchor Rd | Oliver ``` `cloud_storage_a`: ``` user_id ------- 421 333 ``` `cloud_storage_b`: ``` user_id ------- 555 ``` `cloud_storage_c`: ``` user_id ------- 192 555 ``` Etc. I want to create a query that grabs all users connected to any cloud storage. So for this example, users `421, 333, 555, 192` should be returned. I'm guessing this is some sort of join but I'm not sure which one.
You are close. Instead of a JOIN, which merges tables next to each other based on a key, you want to use a UNION, which stacks recordsets/tables on top of each other. ``` SELECT user_id FROM cloud_storage_a UNION SELECT user_id FROM cloud_storage_b UNION SELECT user_id FROM cloud_storage_c ``` Using the keyword `UNION` here will give you distinct user\_id's across all three tables. If you switched that to `UNION ALL` you would no longer get distinct rows, which has its advantages in other situations (not here, obviously). ## Edited to add: If you wanted to bring in the user address you could use this thing as a subquery and join into your user table: ``` SELECT subunion.user_id, user.address FROM user INNER JOIN ( SELECT user_id FROM cloud_storage_a UNION SELECT user_id FROM cloud_storage_b UNION SELECT user_id FROM cloud_storage_c ) subunion ON user.user_id = subunion.user_id ``` That union will need to grow as you add more cloud\_storage\_N tables. All in all, it's not a great database design. You would be much better off creating a single `cloud_storage` table with a field that delineates which one it is: `a`, `b`, `c`, ..., `N`. Then your UNION query would just be `SELECT DISTINCT user_id FROM cloud_storage;` and you would never need to edit it again.
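The single-table redesign suggested above can be sketched quickly with SQLite standing in for the real database (the `cloud_storage` table and `provider` column follow the answer's suggestion and are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cloud_storage (user_id INTEGER, provider TEXT)")
conn.executemany(
    "INSERT INTO cloud_storage VALUES (?, ?)",
    [(421, "a"), (333, "a"), (555, "b"), (192, "c"), (555, "c")],
)

# One distinct list of connected users, with no UNION to maintain
# as new providers are added.
users = [r[0] for r in conn.execute(
    "SELECT DISTINCT user_id FROM cloud_storage ORDER BY user_id"
)]
print(users)  # [192, 333, 421, 555]
```

Adding a fourth provider is then just new rows with `provider = 'd'`; the query never changes, which is the point of the redesign.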
With this design you would need to join an unknown number of `cloud_storage_X` tables. You'd be better off changing your schema to the following: storage: ``` user_id cloud ------- ----- 421 a 333 a 555 b 192 c 555 c ``` Then the query is as simple as this: ``` select distinct user_id from storage; ```
How to find distinct users in multiple tables
[ "", "sql", "postgresql", "join", "distinct", "exists", "" ]