``` Similar to the following: ``` [Columns to Rows in MS Access](https://stackoverflow.com/questions/9183764/columns-to-rows-in-ms-access) I want to know how , within the MS-Access Query Design environment, I can transform the following sample data from state #1 to state #2. Here is what the data currently look like in my table (state #1): ``` Row | School | LocationCode2011 | LocationCode2012 | LocationCode2013 001 ABC 1000A 1000A 2000X 002 DEF 1000A 1000A 2000X 003 GHI 2000X 1000A 2000X ``` Here is what I want my resulting query to look like (state #2): ``` Row | LocationCode | Year | School#1 | School#2 | School#3 001 1000A 2011 ABC DEF 002 1000A 2012 ABC DEF GHI 003 2000X 2011 GHI 004 2000X 2012 005 2000X 2013 ABC DEF GHI ``` **Edit (2/19/2014)**: I wanted to present a simpler version (as recommended by elc below), since my previous sample data presented too many problems at once. ``` State #1 Row | School | LocationCode | Year | 001 ABC 1000A 2011 002 DEF 1000A 2011 003 GHI 2000X 2011 State #2 Row | LocationCode | Year | School#1 | School#2 | School#3 001 1000A 2011 ABC DEF 002 2000X 2011 GHI ``` Please keep in mind that: 1) I am using Access 2010
Starting with our data in a table named [CurrentData] ``` Row School LocationCode Year --- ------ ------------ ---- 001 ABC 1000A 2011 002 DEF 1000A 2011 003 GHI 1000X 2011 ``` the query ``` SELECT cd1.Year, cd1.LocationCode, cd1.School, COUNT(*) AS SchoolRank FROM CurrentData AS cd1 INNER JOIN CurrentData AS cd2 ON cd2.Year = cd1.Year AND cd2.LocationCode = cd1.LocationCode AND cd2.School <= cd1.School GROUP BY cd1.Year, cd1.LocationCode, cd1.School ``` produces ``` Year LocationCode School SchoolRank ---- ------------ ------ ---------- 2011 1000A ABC 1 2011 1000A DEF 2 2011 1000X GHI 1 ``` A very minor tweak to that converts the rank number to a string like "School\_1" ``` SELECT cd1.Year, cd1.LocationCode, cd1.School, 'School_' & COUNT(*) AS XtabColumn FROM CurrentData AS cd1 INNER JOIN CurrentData AS cd2 ON cd2.Year = cd1.Year AND cd2.LocationCode = cd1.LocationCode AND cd2.School <= cd1.School GROUP BY cd1.Year, cd1.LocationCode, cd1.School ``` producing ``` Year LocationCode School XtabColumn ---- ------------ ------ ---------- 2011 1000A ABC School_1 2011 1000A DEF School_2 2011 1000X GHI School_1 ``` We can just wrap that in the code to produce a crosstab query ``` TRANSFORM First(School) AS whatever SELECT [Year], LocationCode FROM ( SELECT cd1.Year, cd1.LocationCode, cd1.School, 'School_' & COUNT(*) AS XtabColumn FROM CurrentData AS cd1 INNER JOIN CurrentData AS cd2 ON cd2.Year = cd1.Year AND cd2.LocationCode = cd1.LocationCode AND cd2.School <= cd1.School GROUP BY cd1.Year, cd1.LocationCode, cd1.School ) AS something GROUP BY [Year], LocationCode PIVOT XtabColumn ``` and we get ``` Year LocationCode School_1 School_2 ---- ------------ -------- -------- 2011 1000A ABC DEF 2011 1000X GHI ```
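The ranking step of this answer (everything before the Access-only `TRANSFORM ... PIVOT` wrapper) is portable SQL, so it can be checked end to end. A minimal sketch using Python's `sqlite3` as a stand-in for Access, reusing the answer's `CurrentData` table (the `Row` column is dropped since the query never uses it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CurrentData (School TEXT, LocationCode TEXT, Year INTEGER);
INSERT INTO CurrentData VALUES
  ('ABC', '1000A', 2011),
  ('DEF', '1000A', 2011),
  ('GHI', '1000X', 2011);
""")

# Each row's rank = how many schools in the same (Year, LocationCode)
# group sort at or before it alphabetically.
rows = conn.execute("""
SELECT cd1.Year, cd1.LocationCode, cd1.School, COUNT(*) AS SchoolRank
FROM CurrentData AS cd1
INNER JOIN CurrentData AS cd2
    ON cd2.Year = cd1.Year
   AND cd2.LocationCode = cd1.LocationCode
   AND cd2.School <= cd1.School
GROUP BY cd1.Year, cd1.LocationCode, cd1.School
ORDER BY cd1.LocationCode, SchoolRank
""").fetchall()
```

This reproduces the intermediate result shown above; only the final crosstab step needs Access itself.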
I think what you'll want to try for this is the UNPIVOT operation: ``` UNPIVOT (LocationCode FOR Locations IN (LocationCode2011, LocationCode2012, LocationCode2013)) ``` I found a short [tutorial](http://mangalpardeshi.blogspot.com/2009/04/unpivot-multiple-columns.html) that demonstrates how to do a very similar problem if you're unsure how to use UNPIVOT.
What SQL code is best for the following column to row transformation?
[ "", "sql", "database", "ms-access", "rows", "" ]
I have been trying for an hour now, but I am still not able to figure out what the problem is in this query. :/ ``` SELECT * FROM question ORDER BY question_id DESC LIMIT 3 WHERE topic_name = (SELECT * FROM topic WHERE subject_name = 'Maths') ``` This is the error: > 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use > near 'WHERE topic\_name = (SELECT \* FROM topic WHERE subject\_name = > 'Maths')' at line 1
Your clauses are out of order. The correct order is: ``` SELECT * FROM question WHERE topic_name = (SELECT * FROM topic WHERE subject_name = 'Maths') ORDER BY question_id DESC LIMIT 3 ; ``` However, that still won't work, because the subquery returns every column from `topic` (`SELECT *`) where a single value is expected. Perhaps you mean something like: ``` SELECT * FROM question WHERE topic_name in (SELECT topic_name FROM topic WHERE subject_name = 'Maths') ORDER BY question_id DESC LIMIT 3 ; ```
Your WHERE clause needs to go before your ORDER BY clause: ``` SELECT * FROM question WHERE topic_name IN (SELECT topic_name FROM topic WHERE subject_name = 'Maths') ORDER BY question_id DESC LIMIT 3 ```
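The corrected clause order (`WHERE`, then `ORDER BY`, then `LIMIT`) can be verified quickly. A sketch in Python's `sqlite3` with invented topic and question rows standing in for the asker's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE topic (topic_name TEXT, subject_name TEXT);
CREATE TABLE question (question_id INTEGER, topic_name TEXT);
INSERT INTO topic VALUES ('algebra', 'Maths'), ('poetry', 'English');
INSERT INTO question VALUES
  (1, 'algebra'), (2, 'poetry'), (3, 'algebra'), (4, 'algebra'), (5, 'algebra');
""")

# WHERE filters first, then ORDER BY sorts, then LIMIT trims.
latest = conn.execute("""
SELECT question_id FROM question
WHERE topic_name IN (SELECT topic_name FROM topic WHERE subject_name = 'Maths')
ORDER BY question_id DESC
LIMIT 3
""").fetchall()
```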
Can't figure out the error in the mysql query
[ "", "sql", "mysql", "" ]
I've been trying to build an sql query that finds from (table) the most recent date for selected id's that fulfill the condition where 'type' is in hierarchy 'vegetables'. My goal is to be able to get the whole row once max(date) and hierarchy conditions are met for each id. ``` Example values ID DATE PREFERENCE AGE 123 1/3/2013 carrot 14 123 1/3/2013 apple 12 123 1/2/2013 carrot 14 124 1/5/2013 carrot 13 124 1/3/2013 apple 13 124 1/2/2013 carrot 14 125 1/4/2013 carrot 13 125 1/3/2013 apple 14 125 1/2/2013 carrot 13 ``` I tried the following ``` SELECT * FROM table WHERE date in (SELECT max(date) FROM (table) WHERE id in (123,124,125)) and preference in (SELECT preference FROM (hierarchy_table) WHERE hierarchy = vegetables)) and id in (123,24,125) ``` but it doesn't give me the most recent date for each id that meets the hierarchy conditions. (ex. in this scenario I would only get id 124) Thank you in advance!
I figured this out. Please see the query below as an example: ``` SELECT * FROM (table) t WHERE t.date in (SELECT max(date) FROM table sub_t where t.ID = sub_t.ID and (date !> (currentdate)) and preference in (SELECT preference FROM (hierarchy_table) WHERE hierarchy ='vegetables') and ID in ('124')) ```
``` SELECT max(date) FROM (table) WHERE id in (123,124,125) ``` is giving you the max date from all dates, you need to group them. Try replacing with: ``` SELECT max(date) FROM (table) GROUP BY id ``` This way you will get the max date for each id
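A correlated variant ties each max date to its own id, which avoids accidentally matching one id's row against another id's max date. A sketch in Python's `sqlite3`; the table name `prefs` and column name `pref_date` are made up to stand in for the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prefs (id INTEGER, pref_date TEXT, preference TEXT, age INTEGER);
INSERT INTO prefs VALUES
  (123, '2013-01-03', 'carrot', 14), (123, '2013-01-03', 'apple', 12),
  (123, '2013-01-02', 'carrot', 14), (124, '2013-01-05', 'carrot', 13),
  (124, '2013-01-03', 'apple', 13), (124, '2013-01-02', 'carrot', 14),
  (125, '2013-01-04', 'carrot', 13), (125, '2013-01-03', 'apple', 14),
  (125, '2013-01-02', 'carrot', 13);
""")

# The subquery is correlated on id, so each row is compared only
# against the max date for its own id.
rows = conn.execute("""
SELECT * FROM prefs p
WHERE p.pref_date = (SELECT MAX(pref_date) FROM prefs WHERE id = p.id)
ORDER BY p.id, p.preference
""").fetchall()
```

Each id keeps all of its rows on its own latest date, which is the behavior the question asks for.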
How to find the most recent date given a set of values that fulfill a condition
[ "", "sql", "sybase", "" ]
I have a table that includes these columns: ``` Nameplate Model Segment ``` A sample table looks like: ``` Nameplate Model Segment Acura ILX Small Lux Car Audi Q5 Compact Lux Car Audi Q5 Mid Lux Car Audi SQ5 Compact Lux Car ``` I need to find all `Nameplate, Model` combinations with multiple `Segments`. In the above table, I need it to return: ``` Audi Q5 Compact Lux Car Audi Q5 Mid Lux Car ``` I thought the following would work: ``` SELECT DISTINCT [Nameplate], [Model], [Segment] FROM dbo.[Weighted Extract] GROUP BY [Nameplate], [Model], [Segment] HAVING COUNT([Segment]) > 1; ``` The above code only returns combinations with multiple `Nameplate, Model`, and `Segment` rows. This should be easier than I am making it, but I'm stuck.
Try this : ``` SELECT [Nameplate], [Model] FROM dbo.[Weighted Extract] GROUP BY [Nameplate], [Model] HAVING COUNT(distinct [Segment]) > 1; ```
You can use `EXISTS`: ``` SELECT [Nameplate], [Model], [Segment] FROM dbo.[Weighted_Extract] we1 WHERE EXISTS ( SELECT 1 FROM dbo.[Weighted_Extract] we2 WHERE we1.Nameplate = we2.Nameplate AND we1.Model = we2.Model AND we1.Segment <> we2.Segment ); ``` `Demo`
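Both answers hinge on finding `(Nameplate, Model)` pairs with more than one distinct `Segment`; the `GROUP BY` form can be checked end to end in Python's `sqlite3` with the question's sample rows (table name lowercased for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE weighted_extract (Nameplate TEXT, Model TEXT, Segment TEXT);
INSERT INTO weighted_extract VALUES
  ('Acura', 'ILX', 'Small Lux Car'),
  ('Audi', 'Q5', 'Compact Lux Car'),
  ('Audi', 'Q5', 'Mid Lux Car'),
  ('Audi', 'SQ5', 'Compact Lux Car');
""")

# More than one distinct Segment per (Nameplate, Model) flags the combo.
pairs = conn.execute("""
SELECT Nameplate, Model FROM weighted_extract
GROUP BY Nameplate, Model
HAVING COUNT(DISTINCT Segment) > 1
ORDER BY Nameplate, Model
""").fetchall()
```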
SQL - Finding rows with duplicate values in one column
[ "", "sql", "sql-server", "" ]
I got a requirement to write an MS-SQL stored procedure. **Table 1** consists of the total student fee to be paid in installments. **EX:** ``` SID SNAME INST.Date Amount 123 XYZ 01-01-2013 3500 123 XYZ 01-05-2013 3500 123 XYZ 01-10-2013 3500 123 XYZ 01-04-2014 3500 123 XYZ 01-06-2014 3500 ``` **Table 2** consists of payment-received details: ``` SID SNAME Paydate amount 123 XYZ 01-01-2013 1167 123 XYZ 01-02-2013 1167 123 XYZ 01-03-2013 1167 123 XYZ 01-05-2013 1750 123 XYZ 01-05-2013 1750 123 XYZ 01-10-2013 1167 123 XYZ 01-10-2013 1167 ``` Now the requirement: as of Jan 2014, how many installments were due, what amount was received, and how many installments remain to be paid as of Jan-14. **Expected output** ``` SID Sname Total Inst.Amt as on 01.01.2014 Paid.amount Balance amt Balance.Installments 123 XYZ 10500 9335 1165 1 ``` For reference I gave one student's data. The stored procedure should run for 36,000 students.
Here's a query that should give you what you want, except possibly `Balance.Installments`; I'm not sure what it means, so the CEILING expression below is a guess: ``` SELECT distinct t1.SID , t1.SNAME , t1.Total_Inst , t2.Paid_amount AS 'Paid.Amount' , t1.Total_Inst - t2.Paid_amount as 'Balance amt' , CEILING( (t1.Total_Inst - t2.Paid_amount) / t.Amount ) as 'Balance.Installments' FROM table_1 as t JOIN ( SELECT SID, SNAME, SUM(Amount) as Total_Inst FROM table_1 WHERE Date < '2014-01-01' group by SID, SNAME ) AS t1 ON ( t1.SID = t.SID AND t1.SNAME = t.SNAME ) LEFT JOIN ( SELECT SID, SNAME, SUM(Amount) as Paid_amount FROM table_2 WHERE Paydate < '2014-01-01' group by SID, SNAME ) AS t2 ON ( t1.SID = t2.SID AND t1.SNAME = t2.SNAME ) ```
Why do you need to run this for 36,000 records at one time? Also, what exactly is an installment? There should be only one installment amount and time interval (due date) per student in the table, not several, so I think the database design is a little off. Check the latest change; it handles one student. ``` Declare @student table(SID int,SNAME varchar(50),INST Date ,Amount int) insert into @student select 123,'XYZ','01-01-2013',3500 union all select 123, 'XYZ', '01-05-2013', 3500 union all select 123, 'XYZ', '01-10-2013', 3500 union all select 123, 'XYZ', '01-04-2014', 3500 union all select 123, 'XYZ', '01-06-2014', 3500 Declare @instalmentamount float select @instalmentamount=amount from @student where sid=123 Declare @Table2 table(SID int,Paydate date,amoount int) insert into @Table2 select 123, '01-01-2013', 1167 union all select 123, '01-02-2013', 1167 union all select 123, '01-03-2013', 1167 union all select 123, '01-05-2013', 1750 union all select 123, '01-05-2013', 1750 union all select 123, '01-10-2013', 1167 union all select 123, '01-10-2013', 1167 declare @input date='01-01-2014' ;With CTE as (select s.SID, sum(s.amount) as [Total Inst.Amt ] from @student s where s.INST<=@input group by s.SID), cte1 as ( select t.SID, sum(t.amoount) as [Paid.amount] from @Table2 t where t.Paydate<=@input group by t.SID ) select c.sid, (select top 1 s.SNAME from @student s where s.SID=c.SID) [Name], c.[Total Inst.Amt ],c1.[Paid.amount], c.[Total Inst.Amt ]-c1.[Paid.amount] [BalanceAmount],cast(((c.[Total Inst.Amt ]-c1.[Paid.amount])/@instalmentamount) as int)+1 [Balance.Installments] from CTE c inner join cte1 c1 on c.SID=c1.SID ```
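The balance arithmetic itself (installments due minus payments received as of a cutoff, then a ceiling division by the installment size) is portable. A sketch in Python's `sqlite3` with hypothetical table names `installments`/`payments` and the 3500 installment size hard-coded; it reproduces the question's expected row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE installments (sid INTEGER, sname TEXT, inst_date TEXT, amount INTEGER);
CREATE TABLE payments (sid INTEGER, pay_date TEXT, amount INTEGER);
INSERT INTO installments VALUES
  (123, 'XYZ', '2013-01-01', 3500), (123, 'XYZ', '2013-05-01', 3500),
  (123, 'XYZ', '2013-10-01', 3500), (123, 'XYZ', '2014-04-01', 3500),
  (123, 'XYZ', '2014-06-01', 3500);
INSERT INTO payments VALUES
  (123, '2013-01-01', 1167), (123, '2013-02-01', 1167), (123, '2013-03-01', 1167),
  (123, '2013-05-01', 1750), (123, '2013-05-01', 1750),
  (123, '2013-10-01', 1167), (123, '2013-10-01', 1167);
""")

cutoff = '2014-01-01'
# (balance + 3499) / 3500 is integer ceiling division by the fixed
# 3500 installment size.
row = conn.execute("""
SELECT i.sid, i.sname, i.due, p.paid,
       i.due - p.paid AS balance,
       (i.due - p.paid + 3499) / 3500 AS balance_installments
FROM (SELECT sid, sname, SUM(amount) AS due
      FROM installments WHERE inst_date <= ? GROUP BY sid, sname) AS i
JOIN (SELECT sid, SUM(amount) AS paid
      FROM payments WHERE pay_date <= ? GROUP BY sid) AS p
  ON p.sid = i.sid
""", (cutoff, cutoff)).fetchone()
```

The grouped derived tables mean the same query scales to many students; only the installment size would need to come from data rather than a constant.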
count of Balance installments
[ "", "sql", "sql-server", "" ]
I'm trying to figure out how to write an SQL query to find the distinct pairs of student ids that have the same quarter and year and class id and that have been in more than one classid together. So the database looks like this ``` Studentid courseid quarter year 11035 1020 Fall 2012 11035 1092 Fall 2012 75234 3201 Winter 2012 8871 1092 Fall 2013 39077 1092 Fall 2013 57923 9219 Winter 2013 60973 9219 Winter 2013 19992 3201 Winter 2013 60973 8772 Spring 2013 90421 8772 Spring 2013 90421 2987 Spring 2013 60973 2987 Spring 2013 ``` the result that I am trying to get is: ``` Studentid student id year quarter course id 60973 90421 2013 Spring 8772 60973 90421 2013 Spring 2987 ``` so far I have ``` SELECT s.studentid, st.studentid, st.year, s.quarter, st.courseid FROM enrolled s LEFT JOIN enrolled st ON st.COURSEID = s.COURSEID WHERE s.QUARTER = st.quarter AND s.YEAR = st.year AND s.studentid <> st.studentid ``` but it is giving me every combination that can be created
Self join twice - once for the match, and again for to find a different match: ``` SELECT s.studentid, st.studentid, s.year, s.quarter, s.courseid FROM enrolled s JOIN enrolled st ON st.COURSEID = s.COURSEID AND st.quarter = s.quarter AND st.year = s.year AND s.studentid < st.studentid JOIN enrolled e2 ON e2.COURSEID != s.COURSEID AND e2.quarter = s.quarter AND e2.year = s.year AND e2.studentid = st.studentid ``` Output: ``` | STUDENTID | OTHER_STUDENTID | YEAR | QUARTER | COURSEID | |-----------|-----------------|------|---------|----------| | 60973 | 90421 | 2013 | Spring | 2987 | | 60973 | 90421 | 2013 | Spring | 8772 | ``` See [SQLFiddle](http://sqlfiddle.com/#!2/175bf6/8) Notes: * You want an (inner) join, not a left join * Use `s.studentid < st.studentid` to stop: + students joining to themselves, and + both sides of pairs from displaying * Use a different course for the second join but the same other student id
``` SELECT a.*,b.courseid FROM (SELECT a.year ,a.quarter ,a.studentid AS Student1 ,b.studentid AS Student2 FROM enrolled a JOIN enrolled b ON a.COURSEID = b.COURSEID AND a.QUARTER = b.quarter AND a.YEAR = b.year AND a.courseid = b.courseid AND a.studentid < b.studentid GROUP BY a.year, a.quarter, a.studentid, b.studentid HAVING COUNT(*) > 1) a JOIN enrolled b ON a.year = b.year AND a.quarter = b.quarter AND a.Student1 = b.studentid ``` [SQL Fiddle](http://sqlfiddle.com/#!2/a30e90/1/0) I added a row in the table: `(11035, 2987, 'Spring', 2013)` That row breaks some answers that work with your sample data. You need a `HAVING` clause to ensure the pair of students have more than 1 class together in a given quarter.
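The first answer's double self-join can be replayed in Python's `sqlite3` on the relevant subset of the question's rows; the Winter pair (57923, 60973) drops out because those students share only one course:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enrolled (studentid INTEGER, courseid INTEGER, quarter TEXT, year INTEGER);
INSERT INTO enrolled VALUES
  (57923, 9219, 'Winter', 2013), (60973, 9219, 'Winter', 2013),
  (60973, 8772, 'Spring', 2013), (90421, 8772, 'Spring', 2013),
  (60973, 2987, 'Spring', 2013), (90421, 2987, 'Spring', 2013);
""")

# First join builds each same-course pair once (studentid < studentid);
# second join requires the other student to be in a different course
# that same quarter and year.
pairs = conn.execute("""
SELECT s.studentid, st.studentid, s.year, s.quarter, s.courseid
FROM enrolled s
JOIN enrolled st ON st.courseid = s.courseid
   AND st.quarter = s.quarter AND st.year = s.year
   AND s.studentid < st.studentid
JOIN enrolled e2 ON e2.courseid != s.courseid
   AND e2.quarter = s.quarter AND e2.year = s.year
   AND e2.studentid = st.studentid
ORDER BY s.courseid
""").fetchall()
```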
SQL finding distinct pairs
[ "", "sql", "database", "relational-division", "" ]
I am new to SQL. Could you please help me with this problem? ``` id |name |reportingTo ---------------------- 1 A NULL 2 B 1 3 C 1 4 D 2 5 E 2 6 F 5 ``` I want the expected output as below. How can I implement this with one query? ``` Output ------------------------ Name Reporter A No One B A C A D B E B F E ``` I have tried this, but it is not working: ``` select name,(case when reporting = null then 'MD' ELSE (select a.name from testLevel a inner join testLevel b on a.id=b.reporting where a.id=b.reporting) END) reportingto From testLevel ``` Thanks in advance.
How about a self join? So something like ``` SELECT r.Name , rTo.Name Reporter FROM testLevel r LEFT JOIN testLevel rTo ON r.ReportingTo = rTo.id ``` Based on the fact that you have picked 3 RDBMSs you might want to take a look at MySQL [IFNULL](https://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull): ``` IFNULL(rTo.Name, 'No One') Reporter ``` SQL Server [ISNULL](http://technet.microsoft.com/en-us/library/ms184325.aspx) ``` ISNULL(rTo.Name, 'No One') Reporter ```
Keep joins and subselects out of your select list. Also, MySQL and Oracle are two separate databases; choose one or the other. ``` select a.name, b.name from reportingto a left join reportingto b on a.id = b.reportingto ``` That will give the names; use an ISNULL to resolve the 'No One' when b is not found. Note the left join makes this possible; an inner join won't work here. ``` select a.name, isnull(b.name, 'No One') from reportingto a left join reportingto b on a.id = b.reportingto ```
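Both answers reduce to the same LEFT self-join, and `sqlite3` happens to support `IFNULL` too, so the whole thing can be checked in Python against the question's `testLevel` data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE testLevel (id INTEGER, name TEXT, reportingTo INTEGER);
INSERT INTO testLevel VALUES
  (1, 'A', NULL), (2, 'B', 1), (3, 'C', 1),
  (4, 'D', 2), (5, 'E', 2), (6, 'F', 5);
""")

# LEFT JOIN keeps A (who has no manager); IFNULL turns the NULL into 'No One'.
rows = conn.execute("""
SELECT r.name, IFNULL(rTo.name, 'No One') AS Reporter
FROM testLevel r
LEFT JOIN testLevel rTo ON r.reportingTo = rTo.id
ORDER BY r.id
""").fetchall()
```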
Couldn't Create the Expected Query
[ "", "mysql", "sql", "oracle", "sql-server-2008", "sql-server-2005", "" ]
I'm having a problem with organizing SQL scripts that contain more than 10k lines of code. Let's say there's a declaration of 10 variables: ``` -- declaration DECLARE @SaleId1 int DECLARE @SaleId2 int DECLARE @SaleId3 int DECLARE @SaleId4 int DECLARE @SaleId5 int DECLARE @SaleId6 int DECLARE @SaleId7 int DECLARE @SaleId8 int DECLARE @SaleId9 int DECLARE @SaleId10 int ``` Is there any way to format this code so there would appear minus symbol allowing me to hide all the content and leave just comment? Something like this: ![enter image description here](https://i.stack.imgur.com/dUz7c.png)
Code regions are not natively supported in SQL Server Management Studio. In order to organize your code, you have a few options: * Refactor to introduce stored procedures and user defined functions * Use the "code region hack" defined in [this](https://stackoverflow.com/questions/4386099/sql-server-region) post * Install the [SSMS Tools Pack](http://www.ssmstoolspack.com/) which provides advanced formatting features Good luck.
In SSMS, go to **Tools** > **Options**. In the dialog box, find the node **Transact-SQL** > **IntelliSense** and check the **Outline Statements** option. Then reopen the SQL script.
How to organize parts of T-SQL scripts in SQL Server Management Studio?
[ "", "sql", "sql-server", "t-sql", "ssms", "" ]
I am writing a query to join 3 tables, I have the following tables: * `Apn [ID, QualHolderID, RecvDate, Barcode]` * `QualHolder [ID, Surname, FullName, DOB, ResCountryID, Gender]` * `Country [ID, Name]` * `Gender [ID, Name]` I wrote the following SQL statement to retrieve the data: ``` SELECT a.QualHolderID, b.FullName, b.Surname, d.Name, b.DOB, b.ResCountryID, a.RecvDate FROM dbo.Apn a INNER JOIN dbo.QualHolder b ON a.QualHolderID = b.ID JOIN dbo.QualHolder c INNER JOIN dbo.Gender d ON c.Gender = d.ID WHERE b.ResCountryID = 48 ``` But now I get the following error: > Msg 102, Level 15, State 1, Line 9 > Incorrect syntax near 'ID'.
You don't need to join `dbo.QualHolder` table two times. Try it this way: ``` SELECT a.qualholderid, b.fullname, b.surname, d.name, b.dob, b.rescountryid, a.recvdate FROM dbo.apn a INNER JOIN dbo.qualholder b ON a.qualholderid = b.id INNER JOIN dbo.gender d ON b.gender = d.id WHERE b.rescountryid = 48 ```
I think that the correct query is: ``` SELECT a.QualHolderID, b.FullName, b.Surname, d.Name, b.DOB, b.ResCountryID, a.RecvDate FROM dbo.Apn a INNER JOIN dbo.QualHolder b ON a.QualHolderID = b.ID INNER JOIN dbo.Gender c ON b.Gender = c.ID WHERE b.ResCountryID = 48 ```
Joining Multiple Tables in SQL Server 2008 causes error
[ "", "sql", "sql-server", "inner-join", "where-clause", "" ]
I am using the following stored procedure to fetch some data from a database. The column meetingDate is formatted as datetime and only contains valid data. How do I have to amend the stored procedure so that it only shows me the next matching record and not all of them ? **Example:** Today = 18/02/2014; the database contains records with meetingDate = 20/02/2014, 27/02/2014 and 04/03/2014; in that case the result should only be the record for 20/02/2014 as the next matching one in the future. **My stored procedure:** ``` ALTER PROCEDURE [dbo].[FetchMeetings] AS BEGIN SET NOCOUNT ON; SELECT A.meetingID, CONVERT(VARCHAR(11), A.meetingDate, 106) AS meetingDate, A.created, A.createdBy, A.updated, A.updatedBy, B.speaker AS speaker, B.topic AS topic FROM MeetingDates A INNER JOIN MeetingDetails B ON A.meetingID = B.meetingID WHERE meetingDate >= GETDATE() ORDER BY meetingDate, speaker, topic FOR XML PATH('meeting'), ELEMENTS, TYPE, ROOT('ranks') END ``` Many thanks for any help with this, Tim.
Use SELECT TOP 1 ``` SELECT A.meetingID, CONVERT(VARCHAR(11), A.meetingDate, 106) AS meetingDate, A.createdBy, B.speaker AS speaker, B.topic AS topic FROM MeetingDates A INNER JOIN MeetingDetails B ON A.meetingID = B.meetingID WHERE a.meetingDate IN ( SELECT TOP 1(a.meetingDate) FROM MeetingDates A WHERE meetingDate >= GETDATE() ORDER BY meetingDate ASC) ORDER BY speaker, topic FOR XML PATH('meeting'), ELEMENTS, TYPE, ROOT('ranks') ```
Use a windowed function; it's compatible with SQL Server 2005+. Here is an example with my data: ``` with p as ( select *, dense_RANK() over (order by foryear desc, formonth desc) as rnk from RISK_TRANS.dbo.mytable ) select * from p where rnk = 2 ``` My keys are 2014-1 for the last, then 2013-12 and 2013-11; it picks only 2013-12, as you would expect. Can you try this? (I'm not sure the syntax is correct; I only made the changes and did not run it.) ``` ALTER PROCEDURE [dbo].[FetchMeetings] AS BEGIN SET NOCOUNT ON; with p as ( SELECT A.meetingID, CONVERT(VARCHAR(11), A.meetingDate, 106) AS meetingDate, A.created, A.createdBy, A.updated, A.updatedBy, B.speaker AS speaker, B.topic AS topic, DENSE_RANK() over (order by A.meetingDate asc) rnk FROM MeetingDates A INNER JOIN MeetingDetails B ON A.meetingID = B.meetingID WHERE A.meetingDate >= GETDATE()) select meetingID, meetingDate, created, createdBy, updated, updatedBy, speaker, topic from p where rnk = 1 ORDER BY meetingDate, speaker, topic FOR XML PATH('meeting'), ELEMENTS, TYPE, ROOT('ranks') END ```
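`TOP 1` and `FOR XML` are SQL Server-specific, but the core idea in both answers — keep only the rows whose date equals the minimum future date — is portable. A sketch in Python's `sqlite3` with invented meeting rows and a fixed "today":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MeetingDates (meetingID INTEGER, meetingDate TEXT);
CREATE TABLE MeetingDetails (meetingID INTEGER, speaker TEXT, topic TEXT);
INSERT INTO MeetingDates VALUES (1, '2014-02-20'), (2, '2014-02-27'), (3, '2014-03-04');
INSERT INTO MeetingDetails VALUES (1, 'Jones', 'SQL'), (2, 'Smith', 'XML'), (3, 'Khan', 'CTEs');
""")

today = '2014-02-18'
# Keep only the meeting(s) on the earliest date on or after today.
rows = conn.execute("""
SELECT A.meetingID, A.meetingDate, B.speaker, B.topic
FROM MeetingDates A
JOIN MeetingDetails B ON A.meetingID = B.meetingID
WHERE A.meetingDate = (SELECT MIN(meetingDate) FROM MeetingDates
                       WHERE meetingDate >= ?)
""", (today,)).fetchall()
```

Using an equality against the scalar `MIN` subquery (rather than `LIMIT`/`TOP`) also keeps every detail row for that date if the next meeting has several speakers.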
SQL Server: How to limit stored procedure to only show one (next) matching record
[ "", "sql", "sql-server", "datetime", "stored-procedures", "" ]
*Sorry for my English* I have two tables: **Partners** ``` ID | NAME | IS_FAVORITE ``` **PartnerPoints** ``` ID | PARTNER_ID | NAME ``` And I want to get all rows from `PartnerPoints` which related to `Partners` (by `PARTNER_ID`) with the field `IS_FAVORITE` set to `1`. I.e. I want to get all *favorite* partner points. How can I do that?
You just need to use a `WHERE` clause: ``` SELECT PartnerPoints.* FROM PartnerPoints WHERE EXISTS ( SELECT * FROM Partners WHERE Partners.ID = PartnerPoints.PARTNER_ID AND Partners.IS_FAVORITE = 1 ) ```
You can do this by **JOINING** the tables. ``` SELECT PartnerPoints.* FROM PartnerPoints JOIN Partners ON PartnerPoints.Partner_ID=Partners.ID WHERE Partners.Is_favorite = 1 ``` This is an **INNER JOIN**. Oscar Pérez's answer, with the subquery, is called a **SEMI-JOIN**. The database may execute the same plan, or this INNER JOIN may be faster. In more complicated cases, you may have to use a semi-join.
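The two forms can be compared directly in Python's `sqlite3` (partner names are invented); on data where each point belongs to exactly one partner, they return the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Partners (ID INTEGER, NAME TEXT, IS_FAVORITE INTEGER);
CREATE TABLE PartnerPoints (ID INTEGER, PARTNER_ID INTEGER, NAME TEXT);
INSERT INTO Partners VALUES (1, 'Acme', 1), (2, 'Globex', 0);
INSERT INTO PartnerPoints VALUES
  (10, 1, 'Acme HQ'), (11, 1, 'Acme East'), (12, 2, 'Globex HQ');
""")

# Semi-join form: EXISTS filters, never multiplies rows.
semi = conn.execute("""
SELECT PartnerPoints.* FROM PartnerPoints
WHERE EXISTS (SELECT * FROM Partners
              WHERE Partners.ID = PartnerPoints.PARTNER_ID
                AND Partners.IS_FAVORITE = 1)
ORDER BY PartnerPoints.ID
""").fetchall()

# Inner-join form: equivalent here because PARTNER_ID matches one partner.
inner = conn.execute("""
SELECT PartnerPoints.* FROM PartnerPoints
JOIN Partners ON PartnerPoints.PARTNER_ID = Partners.ID
WHERE Partners.IS_FAVORITE = 1
ORDER BY PartnerPoints.ID
""").fetchall()
```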
SQLite3 - Complex WHERE expression that uses two tables
[ "", "sql", "sqlite", "android-sqlite", "where-clause", "" ]
## Description I have four models: 1. User 2. Organization 3. Role 4. OrganisationUserRole The idea is that one user can belong to many organizations and can have many roles, but just one role per organization. My models look like this: **user.rb** ``` class User < ActiveRecord::Base has_many :roles, :through => :organization_user_roles has_many :organizations, :through => :organization_user_roles has_many :organization_user_roles end ``` **organization.rb** ``` class Organization < ActiveRecord::Base has_many :organization_user_roles has_many :users, :through => :organization_user_roles has_many :roles, :through => :organization_user_roles end ``` **role.rb** ``` class Role < ActiveRecord::Base end ``` **organization\_user\_role.rb** ``` class OrganizationUserRole < ActiveRecord::Base belongs_to :user belongs_to :organization belongs_to :role end ``` I am seeding my db with the following seeds.rb ``` require 'faker' # seed with standard roles role_list = [ [ "superadmin" ], [ "admin" ], [ "user" ], [ "owner" ], ] role_list.each do |role| Role.create( :name => role[0] ) end # create default superadmin & organization p = User.create(email: 'thomas@aquarterit.com', password: 'password') o = Organization.create(name: 'A Quarter IT', website: 'www.aquarterit.com') o.users << User.find_by_email('thomas@aquarterit.com') p.roles << Role.find_by_name("superadmin") # 30 organizations, 3 users each 30.times do |organization| o = Organization.create(name: Faker::Company.name, website: Faker::Internet.domain_name) 3.times do |user| p = User.create(email: Faker::Internet.email, password: 'password') p.roles << Role.find_by_name("user") o.users << User.last end end ``` ## Problem Migrations and rake db:seed run successfully, but afterwards the `organization_user_roles` table contains duplicate rows per user: ``` Row 1: user_id 1 -> organization_id 1 Row 2: user_id 1 -> role_id 1 ``` How can I associate the user, organization and role at the same time in one row?
Thanks a lot in advance, you guys are always an amazing help!
I ended up following the instructions here: [How to join mutli-role, multi organisation tables in Rails](https://stackoverflow.com/questions/16771503/how-to-join-mutli-role-multi-organisation-tables-in-rails) And it worked like a charm.
You need to add a unique database index for the three columns, something like ``` add_index "organization_user_roles", ["user_id", "organization_id", "role_id"], name: "unique_roles", unique: true, using: :btree ``` then in your organization\_user\_role model: ``` validates_uniqueness_of :role_id, scope: [:user_id, :organization_id] ``` I did a similar app with unique columns in my DB and this solution worked.
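The database-level guard is the part worth testing independently of Rails. A minimal `sqlite3` sketch in Python showing the unique index rejecting a duplicate `(user_id, organization_id, role_id)` triple:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organization_user_roles (
  user_id INTEGER, organization_id INTEGER, role_id INTEGER);
CREATE UNIQUE INDEX unique_roles
  ON organization_user_roles (user_id, organization_id, role_id);
""")

conn.execute("INSERT INTO organization_user_roles VALUES (1, 1, 1)")
try:
    # A second identical triple violates the unique index.
    conn.execute("INSERT INTO organization_user_roles VALUES (1, 1, 1)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

count = conn.execute("SELECT COUNT(*) FROM organization_user_roles").fetchone()[0]
```

The model-level `validates_uniqueness_of` gives friendly errors, but only the index is race-proof.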
duplicate rows in join table with three has_many :through associations
[ "", "sql", "ruby-on-rails", "ruby", "" ]
I have an aggregation problem that can probably best be described with some example data. Below is a dataset with transports, identified by `trp_no`. Each such transport is loaded in a container. A container may hold multiple such transports, and in this example any transport may only be loaded in one container. ``` TRP_NO TRANSPORT_VOLUME COUNTRY CONTAINER_ID CONTAINER_MAX ------ ---------------- ------- ------------ ------------- 1 10 SE A 80 2 20 SE A 80 3 30 SE A 80 ``` The following keys (or functional dependencies) exists in the dataset: ``` trp_no -> {transport_volume, country, container_id} container_id -> {container_max} ``` I want to calculate Filling Rate per Country, which is calculated as transported volume divided by the capacity. Translated into SQL, this becomes: ``` with sample_data as( select 1 as trp_no, 10 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 2 as trp_no, 20 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 3 as trp_no, 30 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual ) select country ,sum(transport_volume) / sum(container_max) from sample_data group by country; ``` ...which returns (10+20+30) / (80+80+80) = 25%. Which is **not** what I want, because all transports used the **same** container\_id, and my query triple-counted the capacity. The result I want is (10+20+30) / 80 = 75%. So, I only want to sum container\_max once for each container\_id within the group. Any ideas on how to fix the query?
This uses Rachcha's bigger sample set, which I think is necessary to really test this problem. ``` with sample_data as( select 1 as trp_no, 10 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 2 as trp_no, 20 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 3 as trp_no, 30 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 4 as trp_no, 10 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 5 as trp_no, 20 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 6 as trp_no, 30 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 7 as trp_no, 10 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual union all select 8 as trp_no, 15 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual union all select 9 as trp_no, 20 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual ), country_container_sum as ( select country, sum(container_max) sum_container_max from ( select distinct country, container_id, container_max from sample_data ) group by country ), country_transport_volume_sum as ( select country, sum(transport_volume) sum_transport_volume from sample_data group by country ) select country, sum_transport_volume / sum_container_max rate from country_container_sum join country_transport_volume_sum using (country); ``` Results: ``` COUNTRY RATE ------- ---- SE 0.666666666666667 AU 0.9 ```
I added a little more sample data for illustrating a minor fix in the query that solved it- ``` with sample_data as( select 1 as trp_no, 10 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 2 as trp_no, 20 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 3 as trp_no, 30 as transport_volume, 'SE' as country, 'A' as container_id, 80 as container_max from dual union all select 4 as trp_no, 10 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 5 as trp_no, 20 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 6 as trp_no, 30 as transport_volume, 'SE' as country, 'B' as container_id, 100 as container_max from dual union all select 7 as trp_no, 10 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual union all select 8 as trp_no, 15 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual union all select 9 as trp_no, 20 as transport_volume, 'AU' as country, 'C' as container_id, 50 as container_max from dual ) select country ,sum(transport_volume / container_max) -- Note the change here from sample_data group by country; ``` **OUTPUT:** ``` COUNTRY SUM(TRANSPORT_VOLUME/CONTAINER_MAX) ------- ----------------------------------- SE 1.35 AU .9 ``` **EDIT:** As I see your sample data, I think you need a bit of normalization in your database. 
The columns for a container and columns for a transport trip should reside in separate tables like this:\ ``` TABLE CONTAINER ( container_id VARCHAR2 / INTEGER, container_max INTEGER, country VARCHAR2 ) TABLE trip ( trp_no INTEGER, transport_volume INTEGER, container_id VARCHAR2 / INTEGER REFERENCES container.container_id ) ``` **EDIT 2:** If you want to specifically sum up the transport volumes according to the containers' capacities, you can use something like the following query (with the same sample data table `sample_data` from above): ``` select d.country, (select sum(t.transport_volume) from sample_data t where t.country = d.country) / (select sum(c.container_max) from ( select country, container_max from sample_data group by container_id, country, container_max ) c where c.country = d.country) as col1 from sample_data d group by d.country; ``` **OUTPUT:** ``` COUNTRY COL1 ------- ----------- SE 0.666666667 AU 0.9 ```
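The first answer's approach — sum volumes normally, but deduplicate `(country, container_id, container_max)` before summing capacity — can be checked in Python's `sqlite3` with the extended sample set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample_data (
  trp_no INTEGER, transport_volume INTEGER,
  country TEXT, container_id TEXT, container_max INTEGER);
INSERT INTO sample_data VALUES
  (1, 10, 'SE', 'A', 80), (2, 20, 'SE', 'A', 80), (3, 30, 'SE', 'A', 80),
  (4, 10, 'SE', 'B', 100), (5, 20, 'SE', 'B', 100), (6, 30, 'SE', 'B', 100),
  (7, 10, 'AU', 'C', 50), (8, 15, 'AU', 'C', 50), (9, 20, 'AU', 'C', 50);
""")

# Volumes are summed per transport row; capacity is summed once per
# distinct (country, container) pair so shared containers are not
# triple-counted.
rates = dict(conn.execute("""
SELECT v.country, 1.0 * v.vol / c.cap
FROM (SELECT country, SUM(transport_volume) AS vol
      FROM sample_data GROUP BY country) AS v
JOIN (SELECT country, SUM(container_max) AS cap
      FROM (SELECT DISTINCT country, container_id, container_max
            FROM sample_data) AS d
      GROUP BY country) AS c
  ON c.country = v.country
""").fetchall())
```

This reproduces the 2/3 and 0.9 filling rates, matching the question's intended 75%-style calculation rather than the triple-counted 25%.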
Conditional aggregation - once for each key
[ "", "sql", "oracle", "" ]
I would like to return a product together with its latest value and values from last hour. I have a product-table : ``` id, name, type (and so on)... ``` I have a values-table : ``` id_prod, timestamp, value ``` Something like : ``` 12:00:00 = 10 12:15:00 = 10 12:30:00 = 10 12:45:00 = 10 13:00:00 = 10 13:15:00 = 10 13:30:00 = 10 ``` I would like a query that returns the latest value (13:30:00) together with the sum of values one hour back. This should return: ``` time = 13:30:00 latestread = 10 lasthour = 40 ``` What I almost got working was: ``` SELECT *, (SELECT value FROM values S WHERE id_prod=P.id ORDER BY timestamp DESC LIMIT 1) as latestread, (SELECT sum(value) FROM values WHERE id_prod=D.id and date_created>SUBTIME(S.date_created,'01:00:00')) as trendread FROM prod P ORDER BY name ``` But this fails with "Unknown column 'S.date\_created' in 'where clause'" Any suggestions?
If I understand correctly what you're trying to do, then You would have something like: ``` SELECT p.id, max(date_created), sum(value), mv.max_value FROM product p JOIN values v on p.id = v.product_id JOIN (SELECT product_id, value as max_value FROM values v2 WHERE date_created = (SELECT max(date_created) FROM values WHERE product_id=v2.product_id)) mv on product_id=p.id WHERE date_created between DATE_SUB(now(), INTERVAL 1 HOUR)) and now() GROUP BY p.id ORDER BY p.id ```
Aleks G and mhasan gave solutions, but not the reason why this fails. It fails because the alias S is defined inside the first subquery and is not visible in the second one: a subquery can reference tables from its outer query, but sibling subqueries cannot see each other's aliases.
mysql : Get latest value and sum of values from previous hour
[ "", "mysql", "sql", "" ]
How can I insert a string which includes quotes in Oracle? My code is ``` INSERT INTO TIZ_VADF_TL_MODELS (name) VALUES ('xxx'test'yy'); ``` If I use ``` INSERT INTO TIZ_VADF_TL_MODELS (name) VALUES ("xxx'test'yy"); ``` I get an "identifier is too long" error because xxx'test'yy is a CLOB. How can I do that? Thanks.
You can also use the ['alternative quoting mechanism' syntax](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements003.htm#i42617): ``` INSERT INTO TIZ_VADF_TL_MODELS (name) VALUES (q'[xxx'test'yy]'); ``` The pair of characters immediately inside the first set of quotes, `[]` in this case, delimit the quoted text; single quotes within those do not have to be escaped. Of course, you can't then have `]'` within the string itself, but you can pick your own delimiters so that can be avoided if it's going to be an issue; `]` on its own would still be OK. This can be simpler than making sure single quotes are escaped, which can get a bit messy, or at least hard to read and debug. [SQL Fiddle](http://sqlfiddle.com/#!4/311d7/1).
Try escaping the quotes: ``` 'xxx''test''yy' ``` In SQL quotes can be escaped by adding another quote before them.
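Applied to the statement from the question, that would be something like:

```sql
INSERT INTO TIZ_VADF_TL_MODELS (name) VALUES ('xxx''test''yy');
```

Each doubled single quote is stored as one literal quote, so the inserted value is xxx'test'yy.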
insert string which includes quotes in oracle
[ "", "sql", "string", "oracle", "sql-insert", "clob", "" ]
I have a MySQL database column named `telephoneNo`: ``` Telephone No 25645656 45454545 45565656 ``` I want to prepend two digits XX to every value of the `telephoneNo` column: ``` Telephone No xx25645656 xx45454545 xx45565656 ``` I was trying to work it out with CONCAT, but it doesn't work with integer values in my case. Please help with the update query.
You can use [`CAST()`](http://dev.mysql.com/doc/refman/5.0/en/cast-functions.html#function_cast) to convert your integers explicitly: ``` UPDATE t SET phone=CAST(CONCAT('10', phone) AS UNSIGNED) ``` That will work with integer prefixes. However, I don't see a solid reason to store phone numbers as integers rather than strings.
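If you do take that advice and store the numbers as strings, the prefix becomes a plain concatenation. A sketch assuming MySQL and the same hypothetical table `t`:

```sql
-- one-time change of the column type (pick a sensible length)
ALTER TABLE t MODIFY phone VARCHAR(20);

-- now any prefix, including non-digits, can be prepended without casting
UPDATE t SET phone = CONCAT('xx', phone);
```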
That's a hack; you should change your column to varchar or something similar. ``` UPDATE table SET telephoneNo=9200000000+telephoneNo; ``` **EDIT**: This method requires that all your numbers are the same length (8 digits). If your numbers are more or fewer than 8 digits, modify the number of 0's after the 92 accordingly.
MySQL update query to prepend digit to existing column with integers values
[ "", "mysql", "sql", "concatenation", "" ]
I have a stored procedure in SQL Server 2012 that is executed by a console application (VB using MS Visual Studio 2013) by passing in 2 parameters. The code for the stored procedure is here: ``` ALTER procedure [dbo].[OptimiserReads_getMaxValue] @ReadDate datetime, @tagname nvarchar(50) AS Begin declare @first float declare @second float set @first = '265' set @second = (select value from RawHistorianData where interval = @readdate and tagname = @tagname) Select CASE when @first > @second then @first else @second end end ``` Basically it should return either `@first` or `@second`. If I execute the procedure in SQL Server Management Studio I get the required result. In the console app I don't know how to name the result due to the `CASE` statement. How do I give the result a name in SQL, i.e. something like ``` Select CASE (when @first > @second then @first else @second) as value ```
Just like you mentioned in your question, all you need to do is add an alias for your CASE statement. The format is as follows. ``` SELECT CASE WHEN @first > @second THEN @first ELSE @second END AS BiggerValue ``` Aliases can be used for columns, tables, joins, sub-queries, etc. The format is always `'SomeObject' as 'Alias'` For example: * Table: `SELECT t.* from MyTable as t` * Column: `SELECT FirstName as FN from MyTable` * Sub-query: `SELECT * FROM (SELECT Name FROM MyTable) as SubQuery`
``` Select CASE when @first > @second then @first else @second end as columnname ```
How to name a CASE statement
[ "", "sql", "sql-server", "vb.net", "stored-procedures", "" ]
So I took accepted answer here [Select Parent and Children With MySQL](https://stackoverflow.com/questions/6240028/select-parent-and-children-with-mysql) for my `submissions_comments` schema which looks like: ``` +---------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | user_id | int(10) unsigned | NO | MUL | NULL | | | submission_id | int(11) | NO | MUL | NULL | | | comment | text | NO | | NULL | | | parent_id | int(10) unsigned | YES | MUL | NULL | | | created | datetime | NO | MUL | NULL | | | created_ip | int(11) | NO | | NULL | | | helpful_count | int(11) | NO | MUL | NULL | | | deleted | tinyint(4) | NO | MUL | 0 | | +---------------+------------------+------+-----+---------+----------------+' ``` as ``` SELECT * FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id = parent.id WHERE parent.parent_id IS NULL ORDER BY parent.id, child.id; ``` And I ended up with the following results: ``` +----+---------+---------------+-------------------------------+-----------+---------------------+------------+---------------+---------+------+---------+---------------+--------------------------------+-----------+---------------------+------------+---------------+---------+ | id | user_id | submission_id | comment | parent_id | created | created_ip | helpful_count | deleted | id | user_id | submission_id | comment | parent_id | created | created_ip | helpful_count | deleted | +----+---------+---------------+-------------------------------+-----------+---------------------+------------+---------------+---------+------+---------+---------------+--------------------------------+-----------+---------------------+------------+---------------+---------+ | 1 | 15 | 23 | This is a parent | NULL | 2014-02-19 01:41:39 | 127001 | 0 | 0 | 2 | 
15 | 23 | This is a child comment | 1 | 2014-02-19 01:41:43 | 127001 | 0 | 0 | | 1 | 15 | 23 | This is a parent | NULL | 2014-02-19 01:41:39 | 127001 | 0 | 0 | 4 | 15 | 23 | This is a second child comment | 1 | 2014-02-19 02:01:29 | 127001 | 0 | 0 | | 3 | 15 | 23 | I don't have any children     | NULL | 2014-02-19 01:43:30 | 127001 | 0 | 0 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | +----+---------+---------------+-------------------------------+-----------+---------------------+------------+---------------+---------+------+---------+---------------+--------------------------------+-----------+---------------------+------------+---------------+---------+ ``` As you can see, the first two rows in the results contain the parent comment with the child comments joined. Is there a way with MySQL to cleanly nest all child comments within one parent comment that is returned or do I need to use something like underscore's `_.pluck` method on my returned results object?
Is this what you mean? ``` SELECT parent.id, MAX(parent.comment) as pcomm, GROUP_CONCAT(child.id ORDER BY child.id) as siblings, GROUP_CONCAT(child.comment ORDER BY child.id) as siblingComments FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id = parent.id WHERE parent.parent_id IS NULL GROUP BY parent.id ORDER BY parent.id; ``` I'm assuming that by "nesting" you just mean you want the sibling results grouped together somehow.
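Run against the sample rows shown in the question, this should return something along these lines (parent 3 has no children, so the `LEFT JOIN` leaves its aggregates NULL):

```
id | pcomm                     | siblings | siblingComments
 1 | This is a parent          | 2,4      | This is a child comment,This is a second child comment
 3 | I don't have any children | NULL     | NULL
```

GROUP_CONCAT uses a comma separator by default; a SEPARATOR clause can change it.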
Edit - Try this to see if you like it, otherwise skip forward to another group\_concat idea - The following approach will show all column values for each parent only on the first row where there is a child comment (or none). On rows that show a 2nd, 3rd, etc. child comment, the values for the parent will all be blank, and only the sibling data will be shown on the right. This gives you a quick visual as to where one comment (parent) ends and where another (along with another sequence of siblings) begins. ``` SELECT case when parent.id = child_min.id then parent.id else null end as id, case when parent.id = child_min.id then parent.user_id else null end as user_id, case when parent.id = child_min.id then parent.submission_id else null end as submission_id, case when parent.id = child_min.id then parent.comment else null end as comment, case when parent.id = child_min.id then parent.parent_id else null end as parent_id, case when parent.id = child_min.id then parent.created else null end as created, case when parent.id = child_min.id then parent.created_ip else null end as created_ip, case when parent.id = child_min.id then parent.helpful_count else null end as helpful_count, case when parent.id = child_min.id then parent.deleted else null end as deleted, child.* FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id = parent.id left join submissions_comments as child_min on child.id = child_min.id WHERE parent.parent_id IS NULL and (child_min.id = (select min(x.id) from submissions_comments x where x.parent_id = child.parent_id) or child_min.id is null) ORDER BY parent.id, child.id; ``` Below is an edit of the above in answer to how I would approach joining into the users table for both parent/children, and how to display relevant columns from each.
Note that because I'm not sure what columns you want selected, I just grabbed a column called "relevantcolumn" for both parent/child (change in your version): ``` SELECT case when parent.id = child_min.id then parent.id else null end as id, case when parent.id = child_min.id then parent.user_id else null end as user_id, case when parent.id = child_min.id then parent.submission_id else null end as submission_id, case when parent.id = child_min.id then parent.comment else null end as comment, case when parent.id = child_min.id then parent.parent_id else null end as parent_id, case when parent.id = child_min.id then parent.created else null end as created, case when parent.id = child_min.id then parent.created_ip else null end as created_ip, case when parent.id = child_min.id then parent.helpful_count else null end as helpful_count, case when parent.id = child_min.id then parent.deleted else null end as deleted, case when u_par.id = child_min.id then u_par.relevantcolumn else null end as usertblcolumn, child.*, u.relevantcolumn as usertblcolumnforchild FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id = parent.id left join submissions_comments as child_min on child.id = child_min.id left join users as u on child.id = u.id left join users as u_par on parent.id = u_par.id WHERE parent.parent_id IS NULL and (child_min.id = (select min(x.id) from submissions_comments x where x.parent_id = child.parent_id) or child_min.id is null) ORDER BY parent.id, child.id; ``` Group concat approach: Regarding your comment about wanting the meta data, you can use group\_concat and concat together to both aggregate vertically and horizontally at the same time, like this: ``` SELECT parent.id, MAX(parent.comment) as pcomm, GROUP_CONCAT(concat(child.id,', ',child.user_id,', ',child.created,', ',child.comment) ORDER BY child.id) as siblings FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id =
parent.id WHERE parent.parent_id IS NULL GROUP BY parent.id ORDER BY parent.id; ``` You still have to split them into separate columns later if that's what you want, but the other fields will be there side by side. You can also use literals to help distinguish, if it makes it any easier to read: ``` SELECT parent.id, MAX(parent.comment) as pcomm, GROUP_CONCAT(concat('Child ID ', ', ', child.id, ', ', 'Child User ID ', ', ', child.user_id, ', ', 'Child Date ', ', ', child.created, ', ', 'Child Comment ', child.comment) ORDER BY child.id) as siblings FROM submissions_comments AS parent LEFT JOIN submissions_comments AS child ON child.parent_id = parent.id WHERE parent.parent_id IS NULL GROUP BY parent.id ORDER BY parent.id; ```
Combining nested child comments with MySQL
[ "", "mysql", "sql", "join", "" ]
Following is my schema detail: * DB\_A : schema\_1, schema\_2, schema\_3 * DB\_B : schema\_3 Some procedures in schema\_3 access resources (table, view, sp) from schema\_1 and schema\_2. All procedures in schema\_3 are the same on both DBs. **How do I access schema\_1 from schema\_3 for both the DBs?** Now I can hard-code DB\_A in my procedures, but when I move the code to the client machine it will create a problem, since DB\_A may not be the same (one of the reasons being the client is a miser and has QA, Dev and Prod on the same machine). The second option is getting the DB\_A name as a parameter, but that will make all the schema\_3 SPs dynamic (as I did not find any method to access something like @DBName.schema\_name.ResourceName). The third option is creating linked servers, which again does not solve my problem, for the same reason as the first. Any idea how to proceed? I do not want all my procedures to be dynamic just because of this, as 80% of them are straightforward. Edit Start: So I can restate it as: I have multiple databases, with one database having resources (table/view/schema) which need to be shared, and other databases (one or more) with stored procedures which compute on data from the shared database and their own database. The shared database name is not going to be constant across environments, and I want to change it per environment. I have come up with a solution where I will be creating a synonym for each of the shared resources, and all procedures will use them; that way they are all referring to the shared resources from the first database. For each installation I need to modify the synonym definitions to reflect the correct shared database name. Is there any [SYNONYM](http://technet.microsoft.com/en-us/library/ms177544.aspx) for a database name? That way I would have far fewer synonyms to handle.
Well, the best choice I found is as follows. Create a [Synonym](http://technet.microsoft.com/en-us/library/ms177544.aspx) (in the independent database `DB_B`) for each individual object (in the shared database `DB_A`), with the same name in the same schema. That way your existing procedures need not change, and will work as required. The [Synonym](http://technet.microsoft.com/en-us/library/ms177544.aspx) documentation gives a good reference on this. I will soon be creating an app to ease creating synonyms for these kinds of situations. ``` CREATE SYNONYM DB_B.schema_1.proc_1 FOR DB_A.schema_1.proc_1 ```
If DB\_A and DB\_B are on the same server, just make sure that the login has permission in both databases. Then use [database].[schema].[object] when you reference an object from the other database. E.g. I have two databases ("helpdesk", "intranet"); from helpdesk to intranet: ``` create view dbo.users as select login, name, lastname from intranet.dbo.[user] /* [database].[schema].[object] - user is a table in the dbo schema of the intranet database */ where status = 1 ; ```
Accessing data from another database in stored procedure
[ "", "sql", "sql-server", "stored-procedures", "dynamic-data", "synonym", "" ]
My code, as I thought it would work, is as follows: ``` WHERE cp.OwnerId = 1 AND CASE WHEN [CustAccountNote] <> 'n/a' AND [CustRulesDocPath] <> 'n/a' THEN (cp.NAME = 'IQRRulesLinks' OR cp.NAME = 'AccountNote') END AND pp.lastupdated BETWEEN '01/01/2013' AND GETDATE() ``` Inevitably what I want to do is OMIT what is in the THEN statement if [CustAccountNote] and [CustRulesDocPath] are both 'n/a'. T-SQL doesn't like what I'm doing. (shocker). How would I write this WHERE statement so that if BOTH of those field results are 'n/a', it won't run the "cp.Name = ....." ?
something like this? ``` WHERE cp.OwnerId = 1 AND ( ([CustAccountNote] = 'n/a' AND [CustRulesDocPath] = 'n/a') or (cp.NAME = 'IQRRulesLinks' OR cp.NAME = 'AccountNote') ) AND pp.lastupdated BETWEEN '01/01/2013' AND GETDATE() ``` So, if **both** of `CustAccountNote` and `CustRulesDocPath` is equal to 'n/a', then it doesn't matter what values `cp.name` have. Or, if **any** of `CustAccountNote` and `CustRulesDocPath` is NOT equal to 'n/a', then `cp.name` has to be either 'IQRRulesLinks' or 'AccountNote'.
You can't do it exactly as you want, since the Case can only return a single value, but you *could* hack it: ``` WHERE cp.OwnerId = 1 AND ( cp.NAME = 'IQRRulesLinks' OR cp.NAME = 'AccountNote' Or 1 = CASE WHEN [CustAccountNote] = 'n/a' AND [CustRulesDocPath] = 'n/a' Then 1 Else 0 End ) AND pp.lastupdated BETWEEN '01/01/2013' AND GETDATE() ``` Essentially, here you are saying that either Name must equal IQRRulesLinks, or the Name must equal Name *or* 1 must equal a value determined by the Case statement. When CustAccountNote is n/a, or CustRulesDocPath is n/a, that value is 1 - effectively short circuiting the entire bracketed where clause.
Case...When...In a WHERE CLAUSE
[ "", "sql", "sql-server", "t-sql", "where-clause", "" ]
I have two tables: table a and table b. table a ---ID--- ``` 1 2 3 4 5 7 ``` table b ---ID---- ``` 2 3 4 5 6 ``` I have to get output like this without the **UNION** command: ----ID----- ``` 1 2 3 4 5 6 7 ``` *Note: I have one solution with UNION:* ``` select * from a UNION select * from b; ``` *I need an alternative to this. Please suggest, experts.*
We need another table with (at least) 2 rows for this: ``` CREATE TABLE d ( id INT NOT NULL ); INSERT INTO d (id) VALUES (0), (1) ; ``` Then, if we want to have only one query, we can use (**this is for fun, DO NOT USE in production**, that's why we have **`UNION`**): ``` SELECT DISTINCT COALESCE(aa.id, bb.id) AS id FROM d LEFT JOIN a AS aa ON d.id = 0 LEFT JOIN b AS bb ON d.id = 1 WHERE COALESCE(aa.id, bb.id) IS NOT NULL ORDER BY id ; ``` Tested at **[SQLfiddle.com](http://sqlfiddle.com/#!2/dbee48/1)**, and for other table combinations: [1 row - 1 row](http://sqlfiddle.com/#!2/e30f8/7) [2 rows - 2 rows](http://sqlfiddle.com/#!2/36bf7/2) [0 rows - 1 row](http://sqlfiddle.com/#!2/a27fe/2) [0 rows - 2 rows](http://sqlfiddle.com/#!2/03d770/3) [0 rows - 0 rows](http://sqlfiddle.com/#!2/a0e4bb/2)
Try this: I think it works well in MS SQL; change it for MySQL if you need to, but MySQL does not support FULL OUTER JOIN! Good luck. ``` SELECT ( CASE WHEN b.ID IS NULL THEN a.ID WHEN b.ID=a.ID THEN b.ID ELSE b.ID END) FROM (SELECT ID FROM table2 )b FULL OUTER JOIN (SELECT ID FROM table1 ) a ON a.ID=b.ID ``` and play around with the query. Fiddle: <http://sqlfiddle.com/#!3/c657d/13> And here is the MySQL version: ``` SELECT DISTINCT COALESCE(t1.id, t2.id) id FROM ( SELECT TABLE_NAME <> 'table_a' n FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = SCHEMA() AND TABLE_NAME IN('table_a', 'table_b') ) t LEFT JOIN table_a t1 ON t.n = 0 LEFT JOIN table_b t2 ON t.n = 1 ``` Working fiddle: <http://sqlfiddle.com/#!2/c657d8/34>
Alternative to UNION clause in Mysql
[ "", "mysql", "sql", "database", "" ]
I am having difficulty when I try to GROUP BY while also using an ORDER BY statement. Here is my query ``` create table AllData(NoOfPerson int,NoOfMinutes int,StartTime Datetime); INSERT INTO AllData VALUES(1,2,GETDATE()), (0,3,GETDATE()+1), (3,4,GETDATE()+2), (2,5,GETDATE()+3), (0,6,GETDATE()+4), (3,7,GETDATE()+5), (2,8,GETDATE()+6); ``` and the query with its output ``` select NoOfperson,SUM(NoOfMinutes)NoOfMinutes,MIN(StartTime)StartTime from AllData group by NoOfperson,StartTime order by StartTime NoOfperson NoOfMinutes StartTime 1 2 2014-02-19 15:44:52.617 0 3 2014-02-20 15:44:52.617 3 4 2014-02-21 15:44:52.617 2 5 2014-02-22 15:44:52.617 0 6 2014-02-23 15:44:52.617 3 7 2014-02-24 15:44:52.617 2 8 2014-02-25 15:44:52.617 ``` But I want the output to be like this: ``` 1 -- 2 0 -- 9 3 -- 11 2 -- 13 ``` How can I get this output?
Try wrapping the query: ``` SELECT * FROM (SELECT noofperson, Sum(noofminutes) NoOfMinutes, Min(starttime) StartTime FROM alldata GROUP BY noofperson) t ORDER BY noofminutes ASC; ``` or simply: ``` SELECT noofperson, Sum(noofminutes) NoOfMinutes, Min(starttime) StartTime FROM alldata GROUP BY noofperson ORDER BY noofminutes ASC; ```
Try this: ``` SELECT NoOfperson, NoOfMinutes FROM ( select NoOfperson,SUM(NoOfMinutes)NoOfMinutes,MIN(StartTime)StartTime from AllData group by NoOfperson ) AS Data GROUP BY NoOfperson, NoOfMinutes, StartTime order by StartTime ```
issue with group by when order by is used
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I've prepared an SQL query like this ``` UPDATE Towar JOIN TowarZamowienie ON Towar.Tow_id = TowarZamowienie.Tow_id SET Tow_ilosc = Tow_ilosc - CAST(TowZam_ilosc AS UNSIGNED); ``` and it returns "Modified records: 0". I can confirm that I've got records in the database. Here are my tables: ``` TowarZamowienie 1 TowZam_id int(19) AUTO_INCREMENT 2 Tow_id int(255) 3 Zam_id int(255) 4 TowZam_ilosc varchar(10) ``` --- ``` Towar 1 Tow_id int(255) 2 Tow_ilosc int(6) ``` Here is my schema: <http://sqlfiddle.com/#!2/981b4/1>
I guess the mistake is in the ON clause; try this: ``` ON Towar.Tow_id = TowarZamowienie.TowZam_id ``` [DEMO](http://sqlfiddle.com/#!2/a9965/1)
``` UPDATE Towar, TowarZamowienie SET Towar.Tow_ilosc = Towar.Tow_ilosc - CAST(TowarZamowienie.TowZam_ilosc AS UNSIGNED) WHERE Towar.Tow_id = TowarZamowienie.TowZam_id; ``` --OR ``` UPDATE Towar JOIN TowarZamowienie ON Towar.Tow_id = TowarZamowienie.TowZam_id SET Tow_ilosc = Tow_ilosc - CAST(TowZam_ilosc AS UNSIGNED); ```
How to cast varchar into int properly?
[ "", "mysql", "sql", "phpmyadmin", "" ]
This [screenshot](http://i.hizliresim.com/Kgb6lL.jpg) is my result. How can I take the top record for each ID? (The query result has 3 records but I want to take only the top record.) Thank you. Here is my code, but I get an error (The column 'ID' was specified multiple times for 'CTE'.) ``` WITH CTE AS ( SELECT RN = ROW_NUMBER() OVER (PARTITION BY dbo.Product.ID ORDER BY dbo.Picture.UpdatedDate ASC), * FROM Dbo.Product inner JOIN dbo.Product_Picture_Mapping on dbo.Product_Picture_Mapping.ProductID = dbo.Product.ID inner join dbo.Picture on dbo.Picture.ID = dbo.Product_Picture_Mapping.PictureID ) SELECT * FROM CTE WHERE RN = 1 ORDER BY dbo.Picture.UpdatedDate DESC ```
There is no inherent "top" or "bottom", so you need to specify the column(s) you want to order by. However, you can use `ROW_NUMBER` to get one record per ID. For example (assuming `UPDATEDDATE` as the order column): ``` WITH CTE AS ( SELECT RN = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY UPDATEDDATE ASC), * FROM dbo.Product INNER JOIN .... ) SELECT * FROM CTE WHERE RN = 1 ORDER BY UPDATEDDATE DESC ``` (replace `....` with the joins; I didn't want to type them from your image)
You can use the `top 1 with ties` option. ``` select top 1 with ties * from ... order by id ```
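Note that as written, `ORDER BY id` would keep every row tied on the lowest id rather than one row per id. For the one-row-per-id behaviour, the ORDER BY usually has to rank within each id. A sketch using the table and column names from the question's query:

```sql
SELECT TOP 1 WITH TIES P.*, Pic.UpdatedDate
FROM dbo.Product AS P
INNER JOIN dbo.Product_Picture_Mapping AS M ON M.ProductID = P.ID
INNER JOIN dbo.Picture AS Pic ON Pic.ID = M.PictureID
-- every row whose ROW_NUMBER is 1 ties for the top, so one row per product survives
ORDER BY ROW_NUMBER() OVER (PARTITION BY P.ID ORDER BY Pic.UpdatedDate ASC);
```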
Sql Query Select One Record
[ "", "sql", "sql-server", "t-sql", "" ]
I have a MS SQL Server 2012 table that contains daily reports of people. Each day contains maximum 1 record of a person and the day is stored as a DateTime, having the time part set to "00:00:00.000". Example: '2014-01-30 00:00:00.000' ``` +-----------+-------------------------+----------------+ | Person id | Date | Report content | +-----------+-------------------------+----------------+ | 1 | 2014-01-29 00:00:00.000 | Account stuff | +-----------+-------------------------+----------------+ | 2 | 2014-01-29 00:00:00.000 | Coaching stuff | +-----------+-------------------------+----------------+ | 2 | 2014-01-30 00:00:00.000 | Still coaching | +-----------+-------------------------+----------------+ ``` All personnel are stored in a separate table. How do I select those people from a database that have missing records for a period? Even if the person reported 30 reports but 1 is missing, he should be in the results. It can happen, that everyone missed one or more report days. This is why I cannot compare people to each other. For example I would like to know who did not create his/her reports between 1st of January and 1st of February.
OK well how about something like ``` DECLARE @StartDate DATETIME = '01 Jan 2014', @EndDate DATETIME = '01 Feb 2014' ;WITH Dates AS ( SELECT @StartDate RunDate UNION ALL SELECT RunDate + 1 FROM Dates WHERE RunDate + 1 <= @EndDate ) , PersonIDs AS ( SELECT DISTINCT PersonID FROM MyTable ) , PersonDates AS ( SELECT RunDate, PersonID FROM Dates, PersonIDs ) SELECT * FROM PersonDates pd LEFT JOIN MyTable mt ON pd.RunDate = mt.[Date] AND pd.PersonID = mt.PersonID WHERE mt.PersonID IS NULL OPTION (MAXRECURSION 0) ``` **EDIT:** Just as a brief overview. Using a [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) structure, I used the recursive nature to generate a list of dates. Then I retrieved a distinct list of `PersonID`s (if you had a `Person` table, that would have helped too, or an active list, or a `MustReportList`, etc). After that we use a Cartesian product to generate a list of dates by PersonID, so as to account for all Persons for all dates. Then we left join onto the original table, and use the IS NULL to determine entries that are missing. Lastly, the `OPTION (MAXRECURSION)` is to ensure that once you exceed the MAX RECURSION LEVEL, we don't get an exception (basically telling SQL SERVER you know what you are doing). Hope that helps.
Try this: ``` SELECT DISTINCT PersonID FROM [Table] A WHERE NOT EXISTS ( SELECT 1 FROM [Table] WHERE [PersonID] = A.[PersonID] AND TRY_CAST([Date] AS DATE) BETWEEN TRY_CAST('January 1, 2014' AS DATE) AND TRY_CAST('February 1, 2014' AS DATE) ) ```
SQL query to get items that have no value for a certain day
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a dilemma: I would like to use JOIN on all of my 22 tables to combine them into one long row for each `company`. I have a main table `company` that has `CompanyID` as a primary key, and 22 other tables that reference `company.CompanyID`; some of the tables do not contain an entry for a given company while others do. So if I use INNER JOIN, a company that does not have an entry in one of the tables does not show up. Is there a way to make it show up regardless? ``` SELECT * FROM `company` C INNER JOIN sales S ON S.CompanyID = C.CompanyID INNER JOIN owner O ON O.CompanyID = S.CompanyID ``` For example, I have 5 companies in my database; only 4 of them show up with the above statement because one of them does not contain an entry at all inside of `owner`.
To remove the duplicate ids, use the `using` clause: ``` SELECT * FROM `company` C LEFT OUTER JOIN sales S USING (CompanyID) LEFT OUTER JOIN owner O USING (CompanyID); ``` The `using` clause will output only one copy of each ID (in the more recent versions of MySQL -- this behavior has changed over time).
When the referenced table does not have a matching record but you still need the row, use a [LEFT JOIN](https://dev.mysql.com/doc/refman/5.0/en/left-join-optimization.html). Try it like this: ``` SELECT * FROM `company` C LEFT JOIN sales S ON S.CompanyID = C.CompanyID LEFT JOIN owner O ON O.CompanyID = S.CompanyID ```
Using JOIN on several tables matching a foreign key
[ "", "mysql", "sql", "join", "" ]
I have a simple table with data as below: ``` col_1 ========== haddock cod hake mackerel tench sprat dace rudd pike gudgeon .... ``` I want to select the data such that I can output it in 5 columns: ``` col_1 col_2 col_3 col_4 col_5 ======== ======== ======== ======== ======== haddock cod hake mackerel tench sprat dace rudd pike gudgeon ... ``` Is there a nice way to do this? NB iSeries DB2 SQL
I have a solution which seems to work, although it is not very elegant: ``` with tab1 as ( select col_1 as col_1 from my_table a where mod(rrn(a), 5) = 1 ), tab2 as ( select col_1 as col_1 from my_table a where mod(rrn(a), 5) = 2 ), tab3 as ( select col_1 as col_1 from my_table a where mod(rrn(a), 5) = 3 ), tab4 as ( select col_1 as col_1 from my_table a where mod(rrn(a), 5) = 4 ), tab5 as ( select col_1 as col_1 from my_table a where mod(rrn(a), 5) = 0 ) select tab1.col_1 as col_1, tab2.col_1 as col_2, tab3.col_1 as col_3, tab4.col_1 as col_4, tab5.col_1 as col_5 from tab1 LEFT JOIN tab2 on rrn(tab1) + 1 = rrn(tab2) LEFT JOIN tab3 on rrn(tab2) + 1 = rrn(tab3) LEFT JOIN tab4 on rrn(tab3) + 1 = rrn(tab4) LEFT JOIN tab5 on rrn(tab4) + 1 = rrn(tab5) ```
To show what's going on, I'll break this down in small stages a, b, c, ... with "common table expressions", but this is one SELECT statement ``` with a as ( select row_number() over(order by order of f) - 1 as nb, col_1 as fish from fishtable as f ), b as ( select smallint(nb/5)+1 as outrow, smallint(mod(nb, 5))+1 as outcol, fish from a ), c as ( select outrow, (case when outcol=1 then fish else null end) as fish1, (case when outcol=2 then fish else null end) as fish2, (case when outcol=3 then fish else null end) as fish3, (case when outcol=4 then fish else null end) as fish4, (case when outcol=5 then fish else null end) as fish5 from b ) select outrow, max(fish1) col_1, max(fish2) col_2, max(fish3) col_3, max(fish4) col_4, max(fish5) col_5 from c group by outrow order by outrow ``` The first step gives you an intermediate result of ``` nb fish ====== ========== 0 haddock 1 cod 2 hake 3 mackerel 4 tench 5 sprat 6 dace 7 rudd 8 pike 9 gudgeon ``` The next step gives ``` outrow outcol fish ====== ====== ========== 1 1 haddock 1 2 cod 1 3 hake 1 4 mackerel 1 5 tench 2 1 sprat 2 2 dace 2 3 rudd 2 4 pike 2 5 gudgeon ``` Then we spread the values out to separate columns based on the column number ``` outrow fish1 fish2 fish3 fish4 fish5 ====== ======== ======== ======== ======== ======== 1 haddock 1 cod 1 hake 1 mackerel 1 tench 2 sprat 2 dace 2 rudd 2 pike 2 gudgeon ``` The last step squeezes the rows together by outrow number ``` outrow col_1 col_2 col_3 col_4 col_5 ====== ======== ======== ======== ======== ======== 1 haddock cod hake mackerel tench 2 sprat dace rudd pike gudgeon ``` Of course that query may seem like a rather long way to write it. I tested it out at a bit larger scale, using a table I built of distinct first names. I then condensed my syntax down.
``` select max(case when mod(rn,5)=0 then fname else null end) fname1 ,max(case when mod(rn,5)=1 then fname else null end) fname2 ,max(case when mod(rn,5)=2 then fname else null end) fname3 ,max(case when mod(rn,5)=3 then fname else null end) fname4 ,max(case when mod(rn,5)=4 then fname else null end) fname5 from (select fname, row_number() over(order by order of f)-1 as rn from firstnames f ) as a group by int(rn/5) order by int(rn/5) ```
Rows to Columns with iSeries DB2 SQL
[ "", "sql", "db2", "ibm-midrange", "db2-400", "" ]
I didn't have room to make the title more descriptive, sorry, but this is what I am trying to accomplish: I am making a data table for a Google Histogram chart on a web page. I am looking at customer and invoice data, and I want to get a snapshot of the customer status for each week of a previous time period (week, month, quarter, year). I know what my SELECT statement should be for a week, and I can do it using UNION ALL, but that seems like a lot of code to maintain if I want each week for the past three years. There will be other filters which I have removed for this example (MS SQL 2008): ``` SELECT COUNT(distinct customer.AccountNumber) FROM Datahub..InvoiceHold invoice INNER JOIN Datahub..CustomerBase as customer ON invoice.AccountNumber = customer.AccountNumber INNER JOIN Datahub..OrderHold as orders ON invoice.AccountNumber = orders.AccountNumber WHERE customer.CoNo = '003' AND customer.CustomerStartDate <= GETDATE() - 7 AND invoice.InvoiceDate > GETDATE() - 7 AND orders.OrderDate > GETDATE() - 7 UNION ALL SELECT COUNT(distinct customer.AccountNumber) FROM Datahub..InvoiceHold invoice INNER JOIN Datahub..CustomerBase as customer ON invoice.AccountNumber = customer.AccountNumber INNER JOIN Datahub..OrderHold as orders ON invoice.AccountNumber = orders.AccountNumber WHERE customer.CoNo = '003' AND customer.CustomerStartDate <= GETDATE() - 14 AND invoice.InvoiceDate > GETDATE() - 14 AND orders.OrderDate > GETDATE() - 14 UNION ALL........And so on ``` I have thought about dynamically writing the SQL string in my calling code, and I can easily do a for loop there and concatenate a query string which I send to MS SQL 2008. But if I want to do this in a stored procedure, how would I narrow this down to a single statement? I have read that using loops in SQL is a no-no except for a narrow range of situations, and I don't understand using cursors. It seems like this should be easy but I am not a SQL guy. 
**EDIT** I tried the solutions presented, and did not get the kind of performance I was looking for, oddly enough. It was taking 7 minutes or more. I am not sure why. Anyways, I ended up doing some filtering on the data and pulling it into memory, and then using LINQ on the dataTable and going from there. It may not be pretty, but it only takes about 20 seconds. Thank you very much for the replies!
I tried the solutions presented, and did not get the kind of performance I was looking for, oddly enough. It was taking 7 minutes or more. I am not sure why. Anyways, I ended up doing some filtering on the data and pulling it into memory, and then using LINQ on the dataTable and going from there. It may not be pretty, but it only takes about 20 seconds. Here is my code if anyone ever wants to do something similar: ``` public IList<object[]> GetBarChartData(string timeframe, string channel, string unit, string division) { DataTable dt = new DataTable(); List<CustomerClass> myList = new List<CustomerClass>(); List<object[]> retVal = new List<object[]>(); CustomerClass myClass = new CustomerClass(); string marketChannel = (channel == "ALL MARKET CHANNELS" ? string.Empty : "AND customer.MarketChannelDescription = '" + channel + "' "); string businessUnit = (unit == "ALL BUSINESS UNITS" ? string.Empty : "AND customer.BusinessUnitDescription = '" + unit + "' "); using (SqlConnection conn = ConnectionManager.DataHubConnection()) { string sql = "SELECT customer.AccountNumber " + ", customer.CustomerStartDate " + ", MAX(invoice.InvoiceDate) as LastInvoice " + ", MAX(orders.OrderDate) AS LastOrder " + "FROM Datahub..InvoiceHold as invoice " + "INNER JOIN Datahub..CustomerBase as customer ON invoice.AccountNumber = customer.AccountNumber " + "INNER JOIN Datahub..OrderHold as orders ON invoice.AccountNumber = orders.AccountNumber " + "WHERE customer.CoNo = '003' " + "AND customer.CustomerStartDate BETWEEN GETDATE() - 1095 AND GETDATE() " + marketChannel + businessUnit + "GROUP BY customer.AccountNumber, customer.CustomerStartDate " + "ORDER BY LastInvoice "; using (SqlDataAdapter da = new SqlDataAdapter(sql, conn)) { da.Fill(dt); } } string foo = string.Empty; foreach (DataRow row in dt.Rows) { myClass = new CustomerClass(); myClass.AccountNumber = row["AccountNumber"].ToString(); myClass.CustomerStartDate = row["CustomerStartDate"] == DBNull.Value ? 
new DateTime(1900,1,1) : Convert.ToDateTime(row["CustomerStartDate"]); myClass.LastInvoiceDate = row["LastInvoice"] == DBNull.Value ? new DateTime(1900, 1, 1) : Convert.ToDateTime(row["LastInvoice"]); myClass.LastOrderDate = row["LastOrder"] == DBNull.Value ? new DateTime(1900, 1, 1) : Convert.ToDateTime(row["LastOrder"]); // kept as a DateTime so it can be compared with dates below myList.Add(myClass); } // Use LINQ to break this data into series and then write string // How many weeks do we need int weeks = 1; switch (timeframe) { case "Last Year": weeks = 104; break; case "This Year": weeks = 52; break; case "Last Quarter": weeks = 26; break; case "This Quarter": weeks = 13; break; case "Last Month": weeks = 8; break; case "This Month": weeks = 4; break; case "Last Week": weeks = 2; break; case "This Week": weeks = 1; break; } object[] myArray = { "Class", "Lost", "Active", "Prospect" }; retVal.Add(myArray); for (int i = 1; i <= weeks; i++) { var dateOffset = -(i * 7); DateTime date = DateTime.Today.AddDays(dateOffset); var prospect = from c in myList where c.CustomerStartDate <= date && c.LastInvoiceDate > date && c.LastOrderDate > date select c; var prospectCount = prospect.Count(); var active = from c in myList where c.CustomerStartDate <= date && c.LastInvoiceDate >= date.AddDays(-365) select c; var activeCount = active.Count(); var lost = from c in myList where c.CustomerStartDate <= date && c.LastInvoiceDate <= date.AddDays(-365) && c.LastInvoiceDate >= date.AddDays(-730) select c; var lostCount = lost.Count(); myArray = new object[] { i, lostCount, activeCount, prospectCount }; retVal.Add(myArray); } return retVal; } ```
You could use something like the following to create a table of however many weeks you want to go back, and then use a GROUP BY to partition your data. This creates a table variable with the week numbers (in this example it makes weeks 1 - 150) ``` declare @digits table ( digit int ) insert into @digits values (1) insert into @digits values (2) insert into @digits values (3) insert into @digits values (4) insert into @digits values (5) insert into @digits values (6) insert into @digits values (7) insert into @digits values (8) insert into @digits values (9) insert into @digits values (0) declare @weeks table ( weekNumber int ) insert into @weeks select a.digit * 100 + b.digit * 10 + c.digit from @digits a cross join @digits b cross join @digits c where a.digit * 100 + b.digit * 10 + c.digit <= 150 and a.digit * 100 + b.digit * 10 + c.digit > 0 ``` Then you would do something like this in your query: ``` SELECT w.weekNumber, COUNT(distinct customer.AccountNumber) FROM Datahub..InvoiceHold invoice INNER JOIN Datahub..CustomerBase as customer ON invoice.AccountNumber = customer.AccountNumber INNER JOIN Datahub..OrderHold as orders ON invoice.AccountNumber = orders.AccountNumber INNER JOIN @weeks w ON customer.CustomerStartDate <= DATEADD(dd, -(w.weekNumber * 7), GetDate()) AND invoice.InvoiceDate > DATEADD(dd, -(w.weekNumber * 7), GetDate()) AND orders.OrderDate > DATEADD(dd, -(w.weekNumber * 7), GetDate()) WHERE customer.CoNo = '003' GROUP BY w.weekNumber ORDER BY w.weekNumber ```
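The bucket-per-row idea above can be sanity-checked outside SQL Server. Below is a small sketch in Python with SQLite, where an invented `days_ago` column stands in for the `GETDATE()` date arithmetic and a `weeks` table plays the role of the `@weeks` table variable; the point is only to show that the inequality join yields one group per week.

```python
import sqlite3

# "days_ago" is a stand-in for the GETDATE() - InvoiceDate arithmetic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (days_ago INTEGER)")
conn.executemany("INSERT INTO invoices VALUES (?)", [(3,), (10,), (17,), (20,)])
conn.execute("CREATE TABLE weeks (weekNumber INTEGER)")
conn.executemany("INSERT INTO weeks VALUES (?)", [(w,) for w in range(1, 5)])

# One output row per week: each invoice joins to every week bucket it falls in.
counts = conn.execute("""
    SELECT w.weekNumber, COUNT(*)
    FROM invoices i
    JOIN weeks w ON i.days_ago < w.weekNumber * 7
    GROUP BY w.weekNumber
    ORDER BY w.weekNumber
""").fetchall()
```

Each of the four invoices falls into every bucket wide enough to contain it, so the per-week counts are cumulative, exactly as in the original UNION ALL version.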
SQL - How to have multiple queries in a statement without a loop or union all
[ "", "sql", "sql-server-2008", "" ]
The tables look like these: ``` Table: Shops ShopCode ShopName -------- -------- A Aladdin B Backstreet C Clerk's Store D Debs Tool Table: Sale ShopCode Product -------- ------- A Hammer A Thermometer A Compass B Eraser B Hammer C Thermometer C Hammer D Thermometer ``` Find the name of the shops which sell BOTH Hammer and Thermometer. The result table would be ``` ShopName -------------- Aladdin Clerk's Store ``` I thought the following query would work, but it's returning an empty set ``` mysql> SELECT Shops.ShopName FROM Shops -> JOIN Sale ON Shops.ShopCode=Sale.ShopCode -> WHERE Sale.Product='Hammer' AND Sale.Product='Thermometer' -> GROUP BY Shops.ShopCode; ``` I also tried OR instead of AND, but that doesn't work either (it returns all the shops). What might be a possible solution? Just to make it a little bit clearer: I want to select the shops that sell both items (hammer and thermometer). Even though shop B sells a hammer and D sells a thermometer, they should not be included; only A and C, which sell both items, should be in the result
There are two fairly straight forward options. You can join twice with the sale table, once per item. If you skip the `DISTINCT`, you may get duplicate values if the store sells more than one hammer or thermometer. ``` SELECT DISTINCT s.shopname FROM shops s JOIN sale s1 ON s.shopcode = s1.shopcode AND s1.product='hammer' JOIN sale s2 ON s.shopcode = s2.shopcode AND s2.product='thermometer'; ``` ...or you can find all matches with hammer or thermometer and count how many distinct values there are. If there are two possible values and you get both, you're set. ``` SELECT s.shopname FROM shops s JOIN sale s1 ON s.shopcode = s1.shopcode WHERE s1.product IN('hammer','thermometer') GROUP BY s.shopname HAVING COUNT(DISTINCT s1.product)=2; ``` [An SQLfiddle to test both](http://sqlfiddle.com/#!2/cf57b/2).
You can easily join on the Sale table twice, once for hammers and once for thermometers, to get the stores that sell BOTH hammers and thermometers ``` SELECT s.ShopName FROM Shops s JOIN Sale s1 ON s.ShopCode=s1.ShopCode AND s1.Product='Thermometer' JOIN Sale s2 ON s1.ShopCode=s2.ShopCode AND s2.Product='Hammer' GROUP BY s.ShopName ```
MySQL Select both This and That
[ "", "mysql", "sql", "database", "" ]
I've got a problem with my homework: I want to execute different SQL statements in a stored procedure using CASE WHEN. The code looks like this ``` CREATE proc [dbo].[removeFileNew] (@transerid int,@Type int) as --PRINT @file --PRINT @transerid CASE WHEN @Type=1 --report THEN UPDATE order_report_path SET ReportPath='' WHERE tran_ser_id=@transerid WHEN @Type=2--gl UPDATE order_report_path SET GRpath='' WHERE tran_ser_id=@transerid WHEN @Type=3--ic UPDATE order_report_path SET ICpath='' WHERE tran_ser_id=@transerid END ``` but it gives an error: 'Incorrect syntax near the keyword 'CASE''. How can I use CASE WHEN the way I would use if in C++? Thanks.
You can't use CASE WHEN like this. In SQL it is meant to return a value, not to control flow. Use IF statements instead. ``` IF @Type=1 --report BEGIN UPDATE order_report_path SET ReportPath='' WHERE tran_ser_id=@transerid END ELSE IF @Type=2 --gl BEGIN UPDATE order_report_path SET GRpath='' WHERE tran_ser_id=@transerid END ELSE IF @Type=3 --ic BEGIN UPDATE order_report_path SET ICpath='' WHERE tran_ser_id=@transerid END ``` Note that BEGIN / END statements are optional as long as you only have one instruction per block, but I still recommend that you keep them. Same goes for ELSE: not THAT useful in this context, but it would be if your expressions were more complex and/or if you must make sure only one block is executed.
``` IF @Type = 1 BEGIN UPDATE order_report_path SET ReportPath = '' WHERE tran_ser_id = @transerid END ELSE IF @Type = 2 BEGIN UPDATE order_report_path SET GRpath = '' WHERE tran_ser_id = @transerid END ELSE IF @Type = 3 BEGIN UPDATE order_report_path SET ICpath = '' WHERE tran_ser_id = @transerid END ```
How to Use Case When in a stored procedure to execute a different SQL
[ "", "sql", "sql-server", "" ]
There are three tables in MySQL which look like this ``` Table:Travel Table:Airline Table:Location Code From To Code Name Port Country ----- ---- --- ---- ---- --- ------- ET PAR IST ET Ettihad PAR France ET NYC ANK VA VirginAir MER France VA BER PAR TA TurkishAir IST Turkey TA SIN MER AF AirFlorida SIN Singapore TA SHA SIN VM VimanaAir ANK Turkey AF MER DUB ``` I want to find the **NAME of the airlines which DO NOT depart FROM France AT ALL, which also includes airlines that do not have any flights at all**. Please note that ET and AF depart from France. So, the result table will be ``` Airline ------- VirginAir TurkishAir VimanaAir ``` I have tested this with a general LEFT JOIN query and found that `WHERE Location.Country <> 'France'` after all the joins eliminates AF but still shows ET. The tables and data are here for anyone to test: ``` CREATE TABLE Airline ( `Code` varchar(2) NOT NULL, `Name` varchar(30) NOT NULL, PRIMARY KEY (`Code`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO Airline (Code, `Name`) VALUES ('AF', 'AirFlorida'), ('ET', 'Ettihad'), ('TA', 'TurkishAir'), ('VA', 'VirginAir'), ('VM', 'VimanaAir'); CREATE TABLE Location ( `Port` varchar(3) NOT NULL, Country varchar(30) NOT NULL, PRIMARY KEY (`Port`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO Location (Port, Country) VALUES ('ANK', 'Turkey'), ('DUB', 'Emirates'), ('IST', 'Turkey'), ('MER', 'France'), ('NYC', 'USA'), ('PAR', 'France'), ('SIN', 'Singapore'); CREATE TABLE Travel ( `Code` varchar(2) NOT NULL, `From` varchar(3) NOT NULL, `To` varchar(3) NOT NULL, PRIMARY KEY (`Code`,`From`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO Travel (Code, `From`, `To`) VALUES ('AF', 'MER', 'DUB'), ('ET', 'NYC', 'ANK'), ('ET', 'PAR', 'IST'), ('TA', 'SHA', 'SIN'), ('TA', 'SIN', 'MER'), ('VA', 'BER', 'PAR'); ```
Try: ``` select * from Airline a where not exists( select 1 from Travel t join Location l on t.from = l.port where t.Code = a.code and l.Country = 'France' ) ``` demo: <http://sqlfiddle.com/#!2/0e7dd/1>
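The `NOT EXISTS` query also runs as-is on SQLite, so it is easy to verify against the question's sample data from Python (the MySQL `ENGINE`/charset clauses are dropped, and the reserved word `From` is quoted).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Airline (Code TEXT PRIMARY KEY, Name TEXT);
INSERT INTO Airline VALUES ('AF','AirFlorida'),('ET','Ettihad'),
                           ('TA','TurkishAir'),('VA','VirginAir'),('VM','VimanaAir');
CREATE TABLE Location (Port TEXT PRIMARY KEY, Country TEXT);
INSERT INTO Location VALUES ('ANK','Turkey'),('DUB','Emirates'),('IST','Turkey'),
                            ('MER','France'),('NYC','USA'),('PAR','France'),
                            ('SIN','Singapore');
CREATE TABLE Travel (Code TEXT, "From" TEXT, "To" TEXT);
INSERT INTO Travel VALUES ('AF','MER','DUB'),('ET','NYC','ANK'),('ET','PAR','IST'),
                          ('TA','SHA','SIN'),('TA','SIN','MER'),('VA','BER','PAR');
""")

# Keep every airline for which no departure from a French port exists.
names = [r[0] for r in conn.execute("""
    SELECT a.Name FROM Airline a
    WHERE NOT EXISTS (
        SELECT 1 FROM Travel t
        JOIN Location l ON t."From" = l.Port
        WHERE t.Code = a.Code AND l.Country = 'France'
    )
    ORDER BY a.Name
""")]
```

Note that VimanaAir, which has no flights at all, is kept by `NOT EXISTS` automatically, which is exactly the behavior the LEFT JOIN attempt struggled with.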
How about something like this... ``` SELECT a.* FROM airline a LEFT JOIN travel t ON t.code = a.code LEFT JOIN location l ON l.port = t.from LEFT JOIN airline x ON x.code = a.code AND l.country = 'France' WHERE l.country IS NULL; ```
Values corresponds to NOT AT ALL in MYSQL
[ "", "mysql", "sql", "database", "" ]
A similar question is asked here [multiple foreign keys referencing single column in other table](https://stackoverflow.com/questions/20830116/multiple-foreign-keys-referencing-single-column-in-other-table/20830137#20830137) but the syntax is not shown in the answer. I would like to know how this can be accomplished in SQL Server. The following syntax gives an error ``` ALTER TABLE ItemIssue ADD CONSTRAINT FK_ItemIssue_Person FOREIGN KEY (PersonID, AdvisorID) REFERENCES Person (PersonID) ; ``` ERROR: Number of referencing columns in foreign key differs from number of referenced columns, table 'ItemIssue'. ``` -- Create Tables CREATE TABLE ItemIssue ( ItemIssueID int identity(1,1) NOT NULL, PersonID int, AdvisorID int ) ; CREATE TABLE Person ( PersonID int NOT NULL, Name nvarchar(500) ) ; ```
You need to define two foreign keys, one for each column: ``` ALTER TABLE ItemIssue ADD CONSTRAINT FK_ItemIssue_Person FOREIGN KEY (PersonID) REFERENCES Person (PersonID) ; ALTER TABLE ItemIssue ADD CONSTRAINT FK_ItemAdvisor_Person FOREIGN KEY (AdvisorID) REFERENCES Person (PersonID) ; ```
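To see the two independent constraints in action without a SQL Server instance, here is a minimal sketch in Python with SQLite, which enforces foreign keys only after `PRAGMA foreign_keys = ON`; both columns reference `Person(PersonID)` separately, just as in the two `ALTER TABLE` statements above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when this is on
conn.executescript("""
CREATE TABLE Person (PersonID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE ItemIssue (
    ItemIssueID INTEGER PRIMARY KEY,
    PersonID  INTEGER REFERENCES Person(PersonID),
    AdvisorID INTEGER REFERENCES Person(PersonID)
);
INSERT INTO Person VALUES (1, 'Alice'), (2, 'Bob');
""")

conn.execute("INSERT INTO ItemIssue (PersonID, AdvisorID) VALUES (1, 2)")  # both exist
try:
    conn.execute("INSERT INTO ItemIssue (PersonID, AdvisorID) VALUES (1, 99)")
    fk_violation_caught = False
except sqlite3.IntegrityError:  # 99 is not a PersonID, so the advisor FK fires
    fk_violation_caught = True
```

The second insert fails on the AdvisorID constraint alone, even though the PersonID value is valid, showing that the two foreign keys are checked independently.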
You can't create a single foreign key in which two columns reference one column; create them separately: ``` ALTER TABLE ItemIssue ADD CONSTRAINT FK_ItemIssue_Person_Person FOREIGN KEY (PersonID) REFERENCES Person (PersonID), CONSTRAINT FK_ItemIssue_Advisor_Person FOREIGN KEY (AdvisorID) REFERENCES Person (PersonID); ```
two columns referencing a single column in another table
[ "", "sql", "sql-server", "foreign-keys", "foreign-key-relationship", "" ]
I've got some data with two times defining a time range. ``` CREATE TABLE MY_TIME_TABLE ( MY_PK NUMBER(10) NOT NULL ENABLE, FROM_TIME DATE NOT NULL ENABLE, TO_TIME DATE NOT NULL ENABLE ); INSERT INTO MY_TIME_TABLE(MY_PK,FROM_TIME,TO_TIME) VALUES(1,TO_DATE('2014-01-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS'),TO_DATE('2014-01-01 13:00:00', 'YYYY-MM-DD HH24:MI:SS')); INSERT INTO MY_TIME_TABLE(MY_PK,FROM_TIME,TO_TIME) VALUES(2,TO_DATE('2014-01-02 14:00:00', 'YYYY-MM-DD HH24:MI:SS'),TO_DATE('2014-01-02 15:00:00', 'YYYY-MM-DD HH24:MI:SS')); INSERT INTO MY_TIME_TABLE(MY_PK,FROM_TIME,TO_TIME) VALUES(3,TO_DATE('2014-01-03 00:30:00', 'YYYY-MM-DD HH24:MI:SS'),TO_DATE('2014-01-03 03:30:00', 'YYYY-MM-DD HH24:MI:SS')); ``` What I would like to do is create a query that returns all of the half-hour blocks between the two times. So it would return something like the following: ``` 1, 2014-01-01 09:00:00 1, 2014-01-01 09:30:00 1, 2014-01-01 10:00:00 1, 2014-01-01 10:30:00 1, 2014-01-01 11:00:00 1, 2014-01-01 11:30:00 1, 2014-01-01 12:00:00 1, 2014-01-01 12:30:00 2, 2014-01-02 14:00:00 2, 2014-01-02 14:30:00 3, 2014-01-03 00:30:00 3, 2014-01-03 01:00:00 3, 2014-01-03 01:30:00 3, 2014-01-03 02:00:00 3, 2014-01-03 02:30:00 3, 2014-01-03 03:00:00 ``` The data is guaranteed to start and end on the hour or half hour, so I don't have to worry about partial matches. I normally try to show what I've done on my own to solve my problem, but in this case I don't even have the faintest clue where to start.
You can do it using a Hierarchical query or a CTE. [SQL Fiddle](http://sqlfiddle.com/#!4/74704/7) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE MY_TIME_TABLE ( MY_PK, FROM_TIME, TO_TIME ) AS SELECT 1, TO_DATE('2014-01-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2014-01-01 13:00:00', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL UNION ALL SELECT 2, TO_DATE('2014-01-02 14:00:00', 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2014-01-02 15:00:00', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL UNION ALL SELECT 3, TO_DATE('2014-01-03 00:30:00', 'YYYY-MM-DD HH24:MI:SS'), TO_DATE('2014-01-03 03:30:00', 'YYYY-MM-DD HH24:MI:SS') FROM DUAL; ``` **Hierarchical Query**: ``` SELECT MY_PK, FROM_TIME + (LEVEL-1) / 48 FROM MY_TIME_TABLE CONNECT BY LEVEL <= (TO_TIME - FROM_TIME) * 48 AND PRIOR MY_PK = MY_PK AND PRIOR dbms_random.value IS NOT NULL ``` **[Results](http://sqlfiddle.com/#!4/74704/7/0)**: ``` | MY_PK | FROM_TIME+(LEVEL-1)/48 | |-------|--------------------------------| | 1 | January, 01 2014 09:00:00+0000 | | 1 | January, 01 2014 09:30:00+0000 | | 1 | January, 01 2014 10:00:00+0000 | | 1 | January, 01 2014 10:30:00+0000 | | 1 | January, 01 2014 11:00:00+0000 | | 1 | January, 01 2014 11:30:00+0000 | | 1 | January, 01 2014 12:00:00+0000 | | 1 | January, 01 2014 12:30:00+0000 | | 2 | January, 02 2014 14:00:00+0000 | | 2 | January, 02 2014 14:30:00+0000 | | 3 | January, 03 2014 00:30:00+0000 | | 3 | January, 03 2014 01:00:00+0000 | | 3 | January, 03 2014 01:30:00+0000 | | 3 | January, 03 2014 02:00:00+0000 | | 3 | January, 03 2014 02:30:00+0000 | | 3 | January, 03 2014 03:00:00+0000 | ```
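The CTE route mentioned above can be tried outside Oracle as well; here is a sketch using SQLite's `WITH RECURSIVE` from Python, with the timestamps kept as `YYYY-MM-DD HH:MM:SS` strings so that SQLite's `datetime()` arithmetic and string comparison stand in for Oracle DATE math.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_time_table (my_pk INTEGER, from_time TEXT, to_time TEXT);
INSERT INTO my_time_table VALUES
  (1, '2014-01-01 09:00:00', '2014-01-01 13:00:00'),
  (2, '2014-01-02 14:00:00', '2014-01-02 15:00:00'),
  (3, '2014-01-03 00:30:00', '2014-01-03 03:30:00');
""")

# Anchor: the start time of each range; step: keep adding 30 minutes while
# the next tick is still strictly before the range's end time.
rows = conn.execute("""
    WITH RECURSIVE r(my_pk, t, to_time) AS (
        SELECT my_pk, from_time, to_time FROM my_time_table
        UNION ALL
        SELECT my_pk, datetime(t, '+30 minutes'), to_time
        FROM r
        WHERE datetime(t, '+30 minutes') < to_time
    )
    SELECT my_pk, t FROM r ORDER BY my_pk, t
""").fetchall()
```

The three ranges expand to 8, 2 and 6 half-hour ticks respectively, matching the expected output in the question.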
If you're using 11gR2 you can use [recursive subquery factoring](http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#SQLRF55268) (aka recursive CTE or recursive with): ``` with r (my_pk, from_time, to_time) as ( select my_pk, from_time, to_time from my_time_table union all select my_pk, from_time + interval '30' minute, to_time from r where from_time + interval '30' minute < to_time ) select my_pk, to_char(from_time, 'YYYY-MM-DD HH24:MI:SS') as from_time from r order by my_pk, from_time; MY_PK FROM_TIME ---------- ------------------- 1 2014-01-01 09:00:00 1 2014-01-01 09:30:00 1 2014-01-01 10:00:00 1 2014-01-01 10:30:00 1 2014-01-01 11:00:00 1 2014-01-01 11:30:00 1 2014-01-01 12:00:00 1 2014-01-01 12:30:00 2 2014-01-02 14:00:00 2 2014-01-02 14:30:00 3 2014-01-03 00:30:00 3 2014-01-03 01:00:00 3 2014-01-03 01:30:00 3 2014-01-03 02:00:00 3 2014-01-03 02:30:00 3 2014-01-03 03:00:00 ``` The anchor clause gets the start time for each PK value, and the recursive parts keeps adding 30-minute intervals until the end time for that PK is reached. You can then use that CTE as a source table elsewhere in the query; here I'm just displaying the contents, clearly. Depending on how you're going to use these ranges, you might find it useful to generate the end of each half-hour block as well, e.g. 
for use in a `between` clause in the main query: ``` with r (my_pk, from_time, to_time, max_time) as ( select my_pk, from_time, from_time + interval '30' minute - interval '1' second, to_time from my_time_table union all select my_pk, from_time + interval '30' minute, to_time + interval '30' minute, max_time from r where from_time + interval '30' minute < max_time ) select my_pk, to_char(from_time, 'YYYY-MM-DD HH24:MI:SS') as from_time, to_char(to_time, 'YYYY-MM-DD HH24:MI:SS') as to_time from r order by my_pk, from_time; MY_PK FROM_TIME TO_TIME ---------- ------------------- ------------------- 1 2014-01-01 09:00:00 2014-01-01 09:29:59 1 2014-01-01 09:30:00 2014-01-01 09:59:59 1 2014-01-01 10:00:00 2014-01-01 10:29:59 1 2014-01-01 10:30:00 2014-01-01 10:59:59 1 2014-01-01 11:00:00 2014-01-01 11:29:59 1 2014-01-01 11:30:00 2014-01-01 11:59:59 1 2014-01-01 12:00:00 2014-01-01 12:29:59 1 2014-01-01 12:30:00 2014-01-01 12:59:59 2 2014-01-02 14:00:00 2014-01-02 14:29:59 2 2014-01-02 14:30:00 2014-01-02 14:59:59 3 2014-01-03 00:30:00 2014-01-03 00:59:59 3 2014-01-03 01:00:00 2014-01-03 01:29:59 3 2014-01-03 01:30:00 2014-01-03 01:59:59 3 2014-01-03 02:00:00 2014-01-03 02:29:59 3 2014-01-03 02:30:00 2014-01-03 02:59:59 3 2014-01-03 03:00:00 2014-01-03 03:29:59 ```
Oracle: Get every half hour between two times
[ "", "sql", "oracle", "date", "" ]
``` SELECT RC.Name AS RiskCategory, ( SELECT SUM(CASE WHEN IA.ImpactLevel = 'High' THEN 1 ELSE 0 END) FROM Rpt_ImpactAssess IA JOIN Rpt_Risk R ON IA.FKRiskID = R.RiskID WHERE R.RiskID IN ( SELECT FKRiskID FROM Rpt_Impact WHERE FKItemID =38 ) AND R.RiskCatrogry = RC.Name )AS High_Impact_Risks From RM_RiskCategories RC WHERE RC.Name <> 'All' GROUP BY RC.Name ORder By RC.Name DESC ```
I'm fairly certain this produces the same result, try it: ``` SELECT RC.Name AS RiskCategory, SUM(CASE WHEN IA.ImpactLevel = 'High' THEN 1 ELSE 0 END) FROM RM_RiskCategories RC LEFT JOIN Rpt_Risk R ON R.RiskCatrogry = RC.Name LEFT JOIN Rpt_ImpactAssess IA ON IA.FKRiskID = R.RiskID WHERE RC.Name <> 'All' AND ( R.RiskID IS NULL OR R.RiskID IN (SELECT FKRiskID FROM Rpt_Impact WHERE FKItemID = 38) ) GROUP BY RC.Name ORDER BY RC.Name DESC ``` Since you need all the categories (a point I initially missed) your original query might actually be a pretty good way of doing it - I'm really not a fan of using `LEFT JOIN ... WHERE pk IS NULL OR pk = something` like I have had to above - so you definitely need to benchmark the above to see if it is actually any better. You could turn that subquery into a `JOIN`, but I'm not sure there would be any performance gain. Still might be worth a test, remove the subquery from the `WHERE` clause, and add another `LEFT JOIN`: ``` LEFT JOIN Rpt_ImpactAssess IA ON IA.FKRiskID = R.RiskID WHERE ... AND (R.RiskID IS NULL OR RI.FKItemID = 38) ```
Try this (sorry if the syntax is slightly off). It should definitely be faster than that subquery. Also try creating the relevant indexes. If you know how many rows each table returns in the join, you can break them into CTEs accordingly; sometimes that helps. Also, you have correctly used SUM instead of COUNT. ``` SELECT RC.Name AS RiskCategory,SUM(CASE WHEN IA.ImpactLevel = 'High' THEN 1 ELSE 0 END)High_Impact_Risks From RM_RiskCategories RC inner join Rpt_Risk R on R.RiskCatrogry = RC.Name inner join Rpt_ImpactAssess IA on IA.FKRiskID = R.RiskID inner join dbo.Rpt_Impact R1 on R.RiskID=R1.FKRiskID and R1.FKItemID =38 WHERE RC.Name <> 'All' GROUP BY RC.Name ORder By RC.Name DESC ```
Can someone help to optimize SQL Query
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2008-r2", "" ]
RDBMS - Oracle 11g. Question: I have a table like this - ``` column1 column2 A 111 A 222 A 333 B 444 C 555 C 666 ``` I want to write a query that picks up just one record for each distinct column1 value. The output should be this way: ``` A 111 B 444 C 555 ``` I am not worried about which value in the corresponding column2 gets picked up. The only constraint is that I just want a single record fetched for each distinct column1 value. I cannot think of a way to do this without using procedures. Is this possible with just SQL queries (no function/procedure)? Thanks.
Sure ``` SELECT column1, MIN(column2) FROM your_table GROUP BY column1 ``` Since you don't care which `column2` value you get, you could also use `MAX` or many other aggregate functions instead of `MIN`.
If you don't care, let the column2 be MIN ``` SELECT column1, MIN(column2) FROM table GROUP BY column1; ```
how to retrieve just one record from (non-distinct) column2 for every distinct column1 value?
[ "", "sql", "oracle11g", "distinct", "" ]
I have a SQL server table RealEstate with columns - Id, Property, Property\_Value. This table has about 5-10 million rows and can increase even more in the future. I want to insert a row only if a combination of Id, Property, Property\_Value does not exist in this table. Example Table - ``` 1,Rooms,5 1,Bath,2 1,Address,New York 2,Rooms,2 2,Bath,1 2,Address,Miami ``` Inserting `2,Address,Miami` should NOT be allowed. But, `2,Price,2billion` is okay. I am curious to know which is the "best" way to do this and **why**. The why part is most important to me. The two ways of checking are - 1. At application level - The app should check if a row exists before it inserts a row. 2. At database level - Set unique constraints on all 3 columns and let the database do the checking instead of person/app. Is there any scenario where one would be better than the other ? Thanks. **PS:** I know there is a similar question already, but it does not answer my problem - [Unique constraint vs pre checking](https://stackoverflow.com/questions/4714024/unique-constraint-vs-pre-checking) Also, I think that UNIQUE is applicable to all databases, so I don't think I should remove the mysql and oracle tags.
I think it most cases the differences between that two are going to be small enough that the choice should mostly be driven by picking the implementation that ends up being most understandable to someone looking at the code for the first time. However, I think exception handling has a few *small* advantages: * Exception handling avoids a potential race condition. The 'check, then insert' method might fail if another process inserts a record between your check and your insert. So, even if you're doing 'check then insert' you still want exception handling on the insert and if you're already doing exception handling anyways then you might as well do away with the initial check. * If your code is not a stored procedure and has to interact with the database via the network (i.e. the application and the db are not on the same box), then you want to avoid having two separate network calls (one for the check and the other for the insert) and doing it via exception handling provides a straightforward way of handling the whole thing with a single network call. Now, there are tons of ways to do the 'check then insert' method while still avoiding the second network call, but simply catching the exception is likely to be the simplest way to go about it. On the other hand, exception handling requires a unique constraint (which is really a unique index), which comes with a performance tradeoff: * Creating a unique constraint will be slow on very large tables and it will cause a performance hit on every single insert to that table. On truly large databases you also have to budget for the extra disk space consumed by the unique index used to enforce the constraint. * On the other hand, it might make selecting from the table faster if your queries can take advantage of that index. I'd also note that if you're in a situation where what you actually want to do is 'update else insert' (i.e. 
if a record with the unique value already exists then you want to update that record, else you insert a new record) then what you actually want to use is your particular database's UPSERT method, if it has one. For SQL Server and Oracle, this would be a MERGE statement.
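As a concrete illustration of the "insert and catch the exception" pattern discussed above, here is a minimal sketch in Python with SQLite; the helper name `insert_if_new` is made up for the example, and `sqlite3.IntegrityError` plays the role of the duplicate-key error your database driver would raise.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE RealEstate (
        Id INTEGER, Property TEXT, Property_Value TEXT,
        UNIQUE (Id, Property, Property_Value)
    )
""")

def insert_if_new(row):
    """Insert and let the database report duplicates: no pre-check,
    so there is no window for a race between check and insert."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO RealEstate VALUES (?, ?, ?)", row)
        return True
    except sqlite3.IntegrityError:
        return False

first = insert_if_new((2, 'Address', 'Miami'))
dup   = insert_if_new((2, 'Address', 'Miami'))   # duplicate, rejected by the DB
other = insert_if_new((2, 'Price', '2billion'))  # new combination, accepted
```

The round trip is a single statement either way, and the unique constraint does the checking atomically, which is the race-condition point made in the answer.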
Dependent on the cost of #1 (doing a lookup) being reasonable, I would do both. At least, in Oracle, which is the database I have the most experience with. Rationale: * Unique/primary keys should be a core part of your data model design, I can't see any reason to not implement them - if you have so much data that performance suffers from maintaining the unique index: + that's a *lot* of data + partition it or archive it away from your OLTP work * The more constraints you have, the safer your data is against application logic errors. * If you check that a row exists first, you can easily extract other information from that row to use as part of an error message, or otherwise fork the application logic to cope with the duplication. * In Oracle, rolling back DML statements is relatively expensive because Oracle expects to succeed (i.e. `COMMIT` changes that have been written) by default.
UNIQUE constraint vs checking before INSERT
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I have tables `Address`, `Property` and `Listing`: ``` Create Table Address ( AddressID int Primary Key , StreetAddress varchar (100) , City varchar (50) , StateCode char(3) , PostalCode char (12) , Country varchar(30) ) ; Create Table Property ( PropertyID int Primary Key -- Unique ID for each property , AddressID int references Address(AddressID) On Delete no action on update no action , NumberOfRooms int not null Check (NumberOfRooms > 0) -- Number of rooms ) ; Create Table Listing ( PropertyID int -- Property ID as per the Property table , AgentID int , ListingDate DateTime not null , AskingPrice Decimal(10,2) not null , SaleDate Date , SalePrice Decimal(10, 2) , Primary Key (PropertyID, ListingDate) , Foreign Key (PropertyID) references Property(PropertyID) , Foreign Key (AgentID) references Agent(AgentID) on delete no action on update no action ) ; ``` I would like to create a view that will have the number of properties for sale in each city and their average price. A property is for sale when `AskingPrice` is not null and `SaleDate` is null. The problem is that I can't get the count per city because I get the error > Each GROUP BY expression must contain at least one column that is not an outer reference How do I solve this? My code: ``` create view MarketStatistics as select City = a.City, Properties = (select count(PropertyID)from Listing l where l.AskingPrice is not Null and l.SaleDate is Null group by a.City), AskingPrice = (select avg(AskingPrice)from Listing) from Address a join Property p on p.AddressID = a.AddressID join Listing l on p.PropertyID = l.PropertyID ```
Your query can be much more simple, it's enough to group without using subqueries: ``` select a.City, count(*) as Properties, avg(l.AskingPrice) as AskingPrice from Address a inner join Property p on p.AddressID = a.AddressID inner join Listing l on p.PropertyID = l.PropertyID where l.AskingPrice is not Null and l.SaleDate is Null group by a.City ```
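For anyone who wants to see the grouped view in action, the query works the same way on SQLite; here is a small Python check with invented sample rows (two Miami properties for sale, one Boston property already sold).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Address (AddressID INTEGER PRIMARY KEY, City TEXT);
CREATE TABLE Property (PropertyID INTEGER PRIMARY KEY, AddressID INTEGER);
CREATE TABLE Listing (PropertyID INTEGER, AskingPrice REAL, SaleDate TEXT);
INSERT INTO Address VALUES (1,'Miami'),(2,'Miami'),(3,'Boston');
INSERT INTO Property VALUES (10,1),(11,2),(12,3);
INSERT INTO Listing VALUES (10, 100.0, NULL),          -- Miami, for sale
                           (11, 300.0, NULL),          -- Miami, for sale
                           (12, 200.0, '2014-01-01');  -- Boston, already sold

CREATE VIEW MarketStatistics AS
SELECT a.City, COUNT(*) AS Properties, AVG(l.AskingPrice) AS AskingPrice
FROM Address a
JOIN Property p ON p.AddressID = a.AddressID
JOIN Listing  l ON p.PropertyID = l.PropertyID
WHERE l.AskingPrice IS NOT NULL AND l.SaleDate IS NULL
GROUP BY a.City;
""")
stats = conn.execute("SELECT * FROM MarketStatistics").fetchall()
```

The sold Boston listing is filtered out before grouping, so only Miami appears, with a count of 2 and the average of the two asking prices.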
I don't think you need GROUP BY; a simple correlated subquery should work: ``` create view MarketStatistics as select City = a.City, Properties = (select count(PropertyID) from Listing l join Property p on p.PropertyID = l.PropertyID where l.AskingPrice is not Null and l.SaleDate is Null and p.AddressID = a.AddressID), AskingPrice = (select avg(AskingPrice) from Listing l join Property p on p.PropertyID = l.PropertyID where p.AddressID = a.AddressID) from Address a ``` I am assuming that you need the asking price for all properties, as you have not placed a null check on asking price in your query.
SQL view with count and group by
[ "", "sql", "sql-server", "" ]
I believe this outputs the date of Sunday of the current week but I don't know why. Can someone please break down what's going on here. ``` SELECT trunc(sysdate+1,'DAY') FROM DUAL; ```
Run this to understand what `trunc` does ``` SELECT to_char(trunc(sysdate+1, 'DAY'),'dd/mon/yyyy hh:mi:ss') FROM DUAL; ``` `DAY` returns the starting day of the week, but `trunc` will also cut off the hours, minutes and seconds of that date. `Sysdate` has hours and minutes, but after trunc they are defaulted to 00:00:00.000. By calling this ``` trunc(sysdate+1,'DAY') ``` you may see `16-FEB-14`. You can't see the full result because Oracle doesn't display the time portion for you. If you call this ``` SELECT to_char(sysdate+1,'dd/mon/yyyy hh:mi:ss') FROM DUAL; ``` you will see all the time details. Trunc takes that off. In other words, you have 3 effects here - `sysdate + 1` - the next day, `DAY` - the first day of the week, `trunc` - hours, minutes, seconds, etc. off
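If it helps to see the three effects separately, here is a rough Python analogue; the `trunc_day` helper is invented for illustration, and it assumes weeks start on Sunday, whereas in Oracle the week start actually depends on NLS settings.

```python
from datetime import datetime, timedelta

def trunc_day(d):
    # Rough analogue of Oracle's TRUNC(d, 'DAY'): step back to the first
    # day of the week, then zero out the time-of-day portion.
    sunday = d - timedelta(days=(d.weekday() + 1) % 7)
    return sunday.replace(hour=0, minute=0, second=0, microsecond=0)

# "sysdate + 1" on Saturday 2014-02-15 is Sunday the 16th; truncating to the
# week start lands on that same Sunday, at midnight.
result = trunc_day(datetime(2014, 2, 15, 18, 45) + timedelta(days=1))
```

Running it on a mid-week date instead, say Friday 2014-02-14, steps back to Sunday 2014-02-09, which is the "start of the current week" behavior described above.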
The TRUNC function will return the starting day of the week when the second parameter is DAY. Today Sysdate returns Thursday, +1 returns Friday etc. So when you add 3 it gives you next Sunday which marks the start of a new week.
trunc / sysdate... funny syntax need explanation
[ "", "sql", "oracle", "" ]
How can I turn this big query into a stored procedure *and should I? What would the benefit be?* ``` SELECT * FROM user_items LEFT JOIN items ON (items.item_id = user_items.item_id) INNER JOIN item_categories ON (item_categories.item_id = items.item_id) INNER JOIN item_subcategories ON (item_subcategories.item_id = items.item_id) INNER JOIN brands ON (brands.brand_id = items.item_brand) INNER JOIN item_photos ON (item_photos.item_id = items.item_id) INNER JOIN place_items ON (place_items.item_id = items.item_id) INNER JOIN places ON (places.place_id = place_items.place_id) WHERE user_items.user_id = :user_id ``` * from the `brands` table I only need the `brand_name` * from the `places` table I only need the `place_name` The way I'm doing it right now, I'm getting all columns from `brands` and `places`, so a friend of mine told me I should probably consider using stored procedures
If you only want those two columns from the brands and places tables, you can list the columns explicitly, like below ``` "SELECT brands.brand_name,places.place_name, user_items.*,items.*,item_categories.*, item_subcategories.*,item_photos.*,place_items.* FROM user_items LEFT JOIN items ON (items.item_id = user_items.item_id) INNER JOIN item_categories ON (item_categories.item_id = items.item_id) INNER JOIN item_subcategories ON (item_subcategories.item_id = items.item_id) INNER JOIN brands ON (brands.brand_id = items.item_brand) INNER JOIN item_photos ON (item_photos.item_id = items.item_id) INNER JOIN place_items ON (place_items.item_id = items.item_id) INNER JOIN places ON (places.place_id = place_items.place_id) WHERE user_items.user_id = :user_id" ``` The point of a stored procedure is to reuse a set of SQL statements; its performance will only be as good as the SQL statements it contains. For better readability of your code, use **ALIASES** for table names. [**When to use SQL Table Alias**](https://stackoverflow.com/questions/198196/when-to-use-sql-table-alias)
In my experience, stored procedures have been more trouble than they're worth, being inflexible and therefore more difficult to maintain than inline SQL, residing outside version control, and failing to provide much if any performance benefit. And in this case a stored routine doesn't seem necessary or beneficial, because your query doesn't demand an advanced feature, such as a cursor. For another discussion of advantages and disadvantages, see [this post](https://stackoverflow.com/questions/2954318/mysql-stored-procedures).
How to turn this big query into a stored procedure
[ "", "sql", "stored-procedures", "" ]
I am trying to get sums for some columns from across multiple MySQL tables using Python/SQLAlchemy. The number of tables is dynamic, and each table has the **same** schema. ``` Table_1 | col1 | col2| ... | Table_2 | col1 | col2| ... | Table_... | col1 | col2| ... | ``` I studied SQLAlchemy and realised that the better idea might be to generate the SQL as text and execute it; creating models might not be a good solution, since I feel that may introduce an additional performance cost, and I would prefer a single SQL statement. ``` select (t1.col1 + t2.col1 + t3.col1 + t?.col1 ...) as col1, (t1.col2 + t2.col2 + ...) as col2, ... from (select sum(col1), sum(col2), sum(col3) ... from Table_1 as t1, select sum(col1), sum(col2), sum(col3) ... from Table_2 as t2, ... ) ``` The above is the SQL I intend to build using Python. I am not a SQL professional, so I am not sure if that is a good statement, and I am wondering if there is a better solution, simpler and more efficient, than this?
Your general approach looks reasonable. Getting the SUMs from the individual tables as a single row, and then combining those, is the most efficient approach. There are just a couple of minor fixes. It looks like you will need to provide an alias for each of the SUM() expressions returned. And you're going to need to wrap the SELECT from each table in a set of parens, and give each of those inline views an alias. Also, there's a potential for one of the inner SUM() expressions to return a NULL, so the addition performed in the outer query could return a NULL. One fix for that would be to wrap the inner SUM expressions in an IFNULL or COALESCE, to replace a NULL with a zero, but that could introduce a zero where the outer SUM would really be a NULL. Personally, I'd avoid using the comma notation for the JOIN operation. The comma is valid, but I'd write it out using the CROSS JOIN keywords, to make it a little more readable. But my preference would be to avoid the JOIN and the addition operations in the outer query. I'd use a SUM aggregate in the outer query, something like this: ``` SELECT SUM(t.col1_tot) AS col1_tot , SUM(t.col2_tot) AS col2_tot , SUM(t.col3_tot) AS col3_tot FROM ( SELECT SUM(col1) AS col1_tot , SUM(col2) AS col2_tot , SUM(col3) AS col3_tot FROM table1 UNION ALL SELECT SUM(col1) AS col1_tot , SUM(col2) AS col2_tot , SUM(col3) AS col3_tot FROM table2 UNION ALL SELECT SUM(col1) AS col1_tot , SUM(col2) AS col2_tot , SUM(col3) AS col3_tot FROM table3 ) t ``` That avoids anomalies with NULL values, and makes it return the same values that would be returned if the individual tables were all concatenated together. But this isn't any more efficient than what you have.
--- To use the JOIN method, as in your query (if you don't mind returning a zero where a NULL would have been returned by the query above), here is how to get that approach to work: ``` SELECT t1.col1_tot + t2.col1_tot + t3.col1_tot AS col1_tot , t1.col2_tot + t2.col2_tot + t3.col2_tot AS col2_tot , t1.col3_tot + t2.col3_tot + t3.col3_tot AS col3_tot FROM ( SELECT IFNULL(SUM(col1),0) AS col1_tot , IFNULL(SUM(col2),0) AS col2_tot , IFNULL(SUM(col3),0) AS col3_tot FROM table1 ) t1 CROSS JOIN ( SELECT IFNULL(SUM(col1),0) AS col1_tot , IFNULL(SUM(col2),0) AS col2_tot , IFNULL(SUM(col3),0) AS col3_tot FROM table2 ) t2 CROSS JOIN ( SELECT IFNULL(SUM(col1),0) AS col1_tot , IFNULL(SUM(col2),0) AS col2_tot , IFNULL(SUM(col3),0) AS col3_tot FROM table3 ) t3 ``` But, again, my personal preference would be to avoid doing those addition operations in the outer query. I'd use the SUM aggregate, and UNION the results from the individual tables, rather than doing a join.
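As a quick check of the UNION ALL approach, here is a minimal sketch using Python's built-in sqlite3 with made-up two-column tables (the real tables, per the question, have more columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (col1 INTEGER, col2 INTEGER);
CREATE TABLE table2 (col1 INTEGER, col2 INTEGER);
INSERT INTO table1 VALUES (1, 10), (2, 20);
INSERT INTO table2 VALUES (3, 30);
""")

# Sum each table into a single row, then SUM the per-table rows together.
row = cur.execute("""
    SELECT SUM(t.col1_tot) AS col1_tot, SUM(t.col2_tot) AS col2_tot
    FROM (
        SELECT SUM(col1) AS col1_tot, SUM(col2) AS col2_tot FROM table1
        UNION ALL
        SELECT SUM(col1) AS col1_tot, SUM(col2) AS col2_tot FROM table2
    ) t
""").fetchone()
print(row)  # (6, 60)
```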
Unless you have some WHERE clauses to join those tables together, you're going to end up with a cartesian join, where every record from each table in the query is joined against all combinations of records from the other tables. So if each of those tables has (say) 1000 records, and you've got 5 tables in the query, you're going to end up with 1000^5 = 1,000,000,000,000,000 records in the result set. What you want is probably something more like this: ``` SELECT sum(col1) AS sum1, sum(col2) AS sum2, .... FROM ( SELECT col1, col2, col3, ... FROM table1 UNION ALL SELECT col1, col2, col3, ... FROM table2 UNION ALL ... ) a ``` The inner `UNION ALL` will take all the columns from each of those tables and turn them into a single contiguous result set. The outer query will then take each of those columns and sum up the values.
Better SQL to sum same columns across multiple tables?
[ "", "mysql", "sql", "sum", "" ]
How do I get only the middle part of the data in my table? I tried the following code, but this only removes the right part... my output should be only the middle part. For instance, when I select the data `1-021514-1`, the output should be `021514`, without the parts to the left and right of the dashes ``` select LEFT(ticketid, CHARINDEX('-', ticketid + '-') + 4) from Table ``` My Data is: ``` |TicketID | ------------ |1-021514-1 | |10-021514-1| |2-021514-1 | |4-021414-1 | ```
Try this: ``` select right(left(ticketid, charindex('-', ticketid, charindex('-', ticketid, 0) + 1) - 1), len(left(ticketid, charindex('-', ticketid, charindex('-', ticketid, 0) + 1) - 1)) - charindex('-', left(ticketid, charindex('-', ticketid, charindex('-', ticketid, 0) + 1) - 1), 0)) from Table ```
Please try: ``` SELECT LEFT(st, CHARINDEX('-', st)-1) TicketID from ( SELECT SUBSTRING(TicketID, CHARINDEX('-',TicketID)+1, 10000) st FROM Table )x ```
How to split dash-separated values in SQL Server?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a MySQL dump, which I tried to restore with: ``` mysql -u"username" -p"password" --host="127.0.0.1" mysql_db < mysql_db ``` However, this threw an error: ``` ERROR 1115 (42000) at line 3231: Unknown character set: 'utf8mb4' ``` This is lines 3231-3233: ``` /*!50003 SET character_set_client = utf8mb4 */ ; /*!50003 SET character_set_results = utf8mb4 */ ; /*!50003 SET collation_connection = utf8mb4_general_ci */ ; ``` I am using MySQL 5.1.69. How can I solve this error?
Your version does not support that character set; I believe it was `5.5.3` that introduced it. You should upgrade your MySQL to the version that was used to export this file. The error is then quite clear: the dump sets a certain character set, but your MySQL version does not support it, and therefore does not know about it. According to <https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html> : > utf8mb4 is a superset of utf8 so maybe there is a chance you can just make it utf8, close your eyes and hope, but that would depend on your data, and I'd not recommend it.
You can try: open the SQL file in a text editor, then find and replace all ``` utf8mb4 ``` with ``` utf8 ``` and import again.
ERROR 1115 (42000): Unknown character set: 'utf8mb4'
[ "", "mysql", "sql", "character-encoding", "utf8mb4", "" ]
I am very new to Microsoft SQL Server and I have a problem with this `INSERT` query that inserts a new record in a very very big table (it has many columns). I have this query: ``` INSERT INTO VulnerabilityAlertDocument ([Id], [VulnerabilityAlertId], [SourceId], [BugTraqID], [Title], [StatusID], [CVE], [Published], [LastUpdated], [Remote], [Local], [Credibility], [Classification], [Availability], [Ease], [Authentication], [CVSS2_BaseScore], [CVSS2_TemporalScore], [CVSS2_BaseVector], [CVSS2_TemporalVector], [CVSS1_BaseScore], [CVSS1_TemporalScore], [NVD_CVSS2_BaseScore], [NVD_CVSS2_ComponentString], [ImpactRating], [Severity], [EaseofExploit], [UrgencyRating], [LastChange], [ShortSummary], [Impact], [TechnicalDescription], [AttackScenario], [Exploit], [Credit], [URL], [AlertStatusId], [Type], [DetailLevel], [Language], [dd]) VALUES('10000', '10000', 'TEST', '5', 'TEST TITLE', '1', 'TEST CVE', '1998-04-30 00:00:00.000', '2007-11-05 16:32:34.000', 'TEST REMOTE', 'TEST LOCAL', 'TEST CREDIBILITY', 'TEST CLASSIFICATION', 'TEST Availability', 'TEST EASE', 'TEST Authentication', 'TEST CVSS2_BaseScore', 'TEST VSS2_TemporalScore', 'TEST CVSS2_BaseVector', 'TEST VSS2_TemporalVector', 'TEST CVSS1_BaseScore', 'TEST CVSS1_TemporalScore', 'TEST NVD_CVSS2_BaseScore', 'TEST NVD_CVSS2_ComponentString', '2', '3', '10', '7', 'TEST LastChange', 'TEST ShortSummary', 'TEST IMPACT', 'TEST TechnicalDescription', 'TEST AttackScenario', 'TEST Exploit', 'TEST Credit', 'TEST URL', '5', '3', '1', 'TEST Language', 'NULL'); ``` In which I insert a specific value into a specified column (I specify columns by the first query section, and I specify the related values by the second section of the query) The problem is that when I try to execute the previous query I obtain the following error > Msg 544, Level 16, State 1, Line 1 > Cannot insert explicit value for identity column in table 'VulnerabilityAlertDocument' when > IDENTITY\_INSERT is set to OFF. Why? What does this mean? 
How can I change my query to solve this problem so that I can insert the record into my table?
Try `SET IDENTITY_INSERT VulnerabilityAlertDocument ON` before `INSERT` After `INSERT`, add `SET IDENTITY_INSERT VulnerabilityAlertDocument OFF`
Since you have an identity column, you don't have to insert the Id: remove the Id column and its value from your query.
Why this insert query go into error?
[ "", "sql", "sql-server", "rdbms", "database", "" ]
Can somebody please help me with this 'simple' query. I have a two tables: Tunes and SavedRatings Tunes contains a load of tune info including a Rating (INTEGER) field and an MD5 field used to uniquely identify a tune. SavedRatings is a table consisting of a Rating and an MD5 field so that when I delete the contents of Tunes and add tunes back at a later date, I can identify the rating given to that tune. So.. what I'm trying to do is update the Rating field in my Tunes table, by matching the Tunes MD5 field in the SavedRatings table. I came up with the below command which is completely wrong. Can you please suggest an alternative? I'm using SQLite. ``` UPDATE Tunes SET Tunes.Rating=SavedRatings.Rating WHERE Tunes.MD5 IN (SELECT MD5 FROM SavedRatings); ```
Try this: `update Tunes set Rating = ( select SavedRatings.Rating from SavedRatings where Tunes.md5 = SavedRatings.md5)` Hope this helps!
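Since the question targets SQLite, the correlated-subquery UPDATE can be exercised with Python's built-in sqlite3. This sketch keeps the asker's original WHERE guard so tunes without a saved rating are left alone rather than set to NULL (the MD5 values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Tunes (MD5 TEXT, Rating INTEGER);
CREATE TABLE SavedRatings (MD5 TEXT, Rating INTEGER);
INSERT INTO Tunes VALUES ('aaa', 0), ('bbb', 0), ('ccc', 3);
INSERT INTO SavedRatings VALUES ('aaa', 5), ('bbb', 4);
""")

# The WHERE clause keeps tunes without a saved rating from being
# overwritten with NULL by the correlated subquery.
cur.execute("""
    UPDATE Tunes
    SET Rating = (SELECT s.Rating FROM SavedRatings s WHERE s.MD5 = Tunes.MD5)
    WHERE MD5 IN (SELECT MD5 FROM SavedRatings)
""")
rows = cur.execute("SELECT MD5, Rating FROM Tunes ORDER BY MD5").fetchall()
print(rows)  # [('aaa', 5), ('bbb', 4), ('ccc', 3)]
```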
``` UPDATE T SET T.Rating = S.Rating FROM Tunes T INNER JOIN SavedRatings S ON T.MD5 = S.MD5 ```
SQL: UPDATE number of rows using another table
[ "", "sql", "sqlite", "" ]
I have a football fixtures table in the following format. ``` date date primary key homescore int(4) awayscore int(4) ``` Data is stored in the following format ``` DATE | HOMESCORE | AWAYSCORE ------------------------------------------ 01-01-2014 | 1 | 0 08-01-2014 | 2 | 1 15-01-2014 | 1 | 1 22-01-2014 | 3 | 2 29-01-2014 | 0 | 0 06-02-2014 | 1 | 3 ``` And so on... I'd like to run a single query to return the won, lost and drawn totals. ``` select count(*) as won from fixtures where homescore > awayscore; select count(*) as lost from fixtures where homescore < awayscore; select count(*) as drawn from fixtures where homescore = awayscore; ``` The result of this single query would look something like... ``` Won Lost Drawn 3 1 2 ``` Please can someone give me some help.
``` SELECT SUM(homescore > awayscore) AS won, SUM(homescore < awayscore) AS lost, SUM(homescore = awayscore) AS tie FROM ... ``` The boolean results of the `>`, `<` and `=` will get auto-converted to integer `0` or `1`, by mysql, which can then be summed up.
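SQLite also evaluates comparisons to 0/1, so the SUM-of-booleans trick can be sanity-checked with Python's built-in sqlite3 using the scores from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fixtures (homescore INTEGER, awayscore INTEGER)")
cur.executemany("INSERT INTO fixtures VALUES (?, ?)",
                [(1, 0), (2, 1), (1, 1), (3, 2), (0, 0), (1, 3)])

# Each comparison yields 1 or 0, so SUM counts the matching rows.
row = cur.execute("""
    SELECT SUM(homescore > awayscore) AS won,
           SUM(homescore < awayscore) AS lost,
           SUM(homescore = awayscore) AS tie
    FROM fixtures
""").fetchone()
print(row)  # (3, 1, 2)
```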
[**SQL Fiddle**](http://sqlfiddle.com/#!2/38644/1/0): ``` SELECT SUM(CASE WHEN homescore > awayscore THEN 1 ELSE 0 END) WonCount, SUM(CASE WHEN homescore < awayscore THEN 1 ELSE 0 END) LostCount, SUM(CASE WHEN homescore = awayscore THEN 1 ELSE 0 END) TieCount FROM fixtures ```
return multiple counts from single table
[ "", "mysql", "sql", "select", "count", "case", "" ]
I wanted to know how I can get the records for the current week. The query I am using is: ``` DECLARE @TableX TABLE ([Date] DATETIME) INSERT INTO @TableX SELECT '2014-2-17' UNION ALL SELECT '2014-2-18' UNION ALL SELECT '2014-2-19' UNION ALL SELECT '2014-2-20' UNION ALL SELECT '2014-2-21' SELECT * FROM @TableX WHERE Date >= DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) / 7 * 7, 0) AND Date <= DATEADD(DAY, DATEDIFF(DAY, -1, GETDATE()), 0) ``` The query I have written doesn't produce the data correctly. Can anyone please tell me what is wrong with the query? The records I get from this query are: ``` 2014-02-17 00:00:00.000 2014-02-18 00:00:00.000 ```
Please try using a CTE. The query below returns the 7 days of the week, treating Sunday as the week's start day. ``` ;WITH t AS (SELECT Dateadd(wk, Datediff(wk, 0, Getdate()), -1) AS WeekD, 1 cnt UNION ALL SELECT weekd + 1, cnt + 1 FROM t WHERE cnt + 1 < 8) SELECT CONVERT(NVARCHAR(20), weekd, 106) WeekDate, Datename(dw, weekd) Name FROM t ```
This will get all records with the same week number and the same year. ``` select * from cal where date_format(cal_date,'%v') = date_format(now(),'%v') and date_format(cal_date,'%Y')=date_format(now(),'%Y'); ```
How to get the data for the current week from MSSQL
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I tried to solve this all day long, but it doesn't seem to work for me. I would like to execute a command and get the result back to a recordset. The problem is one of two things: either I'm getting an empty response or there is a problem with my code. I know for sure that this command should fetch a few lines from the DB. I added `response.write` inside the loop, but they are never printed. Here is the code: ``` Set conn = Server.CreateObject("ADODB.Connection") conn.open "PROVIDER=SQLOLEDB;DATA SOURCE=X;DATABASE=Y;UID=Z;PWD=W;" Set objCommandSec = CreateObject("ADODB.Command") With objCommandSec Set .ActiveConnection = Conn .CommandType = 4 .CommandText = "usp_Targets_DataEntry_Display" .Parameters.Append .CreateParameter("@userinumber ", 200, 1, 10, inumber) .Parameters.Append .CreateParameter("@group ", 200, 1, 50, "ISM") .Parameters.Append .CreateParameter("@groupvalue", 200, 1, 50, ismID) .Parameters.Append .CreateParameter("@targettypeparam ", 200, 1, 50, targetType) End With set rs = Server.CreateObject("ADODB.RecordSet") rs = objCommandSec.Execute while not rs.eof response.write (1) response.write (rs("1_Q1")) rs.MoveNext wend response.write (2) ``` **EDITED** After revising the code, following @Joel Coehoorn's answer, the solution is: ``` set rs = Server.CreateObject("ADODB.RecordSet") rs.Open objCommandSec ``` instead of... ``` set rs = Server.CreateObject("ADODB.RecordSet") rs = objCommandSec.Execute ```
Looked at this for a few minutes, and it's been a *long* time since I've worked with classic asp, but I did see three things to look at: 1. Do you need to `Open` the connection before calling `objCommandSec.Execute`? 2. Can you try writing out a string literal inside the loop, that does not depend at all on the recordset... only that you are in fact looping through the code, so see if records are coming back to the recordset. 3. Have you checked the html source, to see if perhaps malformed html is hiding your results? I remember this happening a few times with tables in classic asp loops, where data would be hidden somehow between two rows, or a closing table tag in the wrong place would end the table, and later rows would not be visible.
Couple of tips after working with [asp-classic](/questions/tagged/asp-classic "show questions tagged 'asp-classic'") for years 1. There is no need to create an `ADODB.Connection`; you can pass a connection string directly to the `.ActiveConnection` property of the `ADODB.Command` object. This has two benefits: you don't have to instantiate and open another object, and because the connection is tied to the `ADODB.Command` it will be released with `Set objCommandSec = Nothing`. 2. A common reason for `.Execute` returning a closed recordset is `SET NOCOUNT ON` not being set in your SQL stored procedure, as an `INSERT` or `UPDATE` will generate a records-affected count and a closed recordset. Setting `SET NOCOUNT ON` will stop these outputs, and only your expected recordset will be returned. 3. Using an `ADODB.Recordset` to cycle through your data is overkill unless you need to move backwards and forwards through it and support some of the lesser-used methods that are not needed for standard functions like displaying a recordset to screen. Instead, try using an `Array`. ``` Const adParamInput = 1 Const adVarChar = 200 Dim conn_string, row, rows, ary_data conn_string = "PROVIDER=SQLOLEDB;DATA SOURCE=X;DATABASE=Y;UID=Z;PWD=W;" Set objCommandSec = CreateObject("ADODB.Command") With objCommandSec .ActiveConnection = conn_string .CommandType = 4 .CommandText = "usp_Targets_DataEntry_Display" .Parameters.Append .CreateParameter("@userinumber", adVarChar, adParamInput, 10, inumber) .Parameters.Append .CreateParameter("@group", adVarChar, adParamInput, 50, "ISM") .Parameters.Append .CreateParameter("@groupvalue", adVarChar, adParamInput, 50, ismID) .Parameters.Append .CreateParameter("@targettypeparam", adVarChar, adParamInput, 50, targetType) Set rs = .Execute() If Not rs.EOF Then ary_data = rs.GetRows() Call rs.Close() Set rs = Nothing End With Set objCommandSec = Nothing 'Command and Recordset no longer needed as ary_data contains our data. If IsArray(ary_data) Then ' Iterate through array rows = UBound(ary_data, 2) For row = 0 to rows ' Return our row data ' Row N column 2 (index starts from 0) Call Response.Write(ary_data(1, row) & "") Next Else ' Nothing returned Call Response.Write("No data returned") End If ```
Using Stored Procedure in Classical ASP .. execute and get results
[ "", "sql", "asp-classic", "ado", "" ]
I have a table for shift periods: ``` empoyeeid ShiftId PeriodId Description DateFrom DateTo TimeIn TimeOut Night ---------------------------------------------------------------------------------------------------------- 5 9 17 Morning 2014-01-01 2014-12-31 09:00:00 12:59:00 0 5 9 18 Night 2014-01-01 2014-12-31 23:00:00 07:00:00 1 ``` and a table for employee transactions: ``` Employeeid EventDate EventTime EventType ------------------------------------------------ 5 2014-01-02 23:04:29 IN ``` I want to select the period id from the first table. I tried this ``` select periodid from shifts where timein <= eventtime and timein >= eventtime and employeeid = 5 ```
**Thanks all.** Using the **abs() function to get the nearest shift for the clock-in time:** ``` select PeriodId from shifts where EmployeeCode = @employeecode and datefrom <= @INDat and dateto >= @INDat order by abs((cast(DATEPART(HOUR,timein) as int))-(cast(DATEPART(HOUR,@INDat) as int))) ```
``` select periodid from shifts s inner join employeeTransactions e on s.employeeid=e.employeeid where timein <= eventtime and timeout >=eventtime and e.employeeid=5 ```
Select time in and time out from fingerprint transaction table according shiftperiod table
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have a table like this: ``` InstallID |InstallationDate 1 |01-01-2014 1 |01-02-2014 1 |01-03-2014 1 |01-04-2014 2 |01-01-2014 2 |01-02-2014 3 |01-01-2014 3 |01-02-2014 3 |01-03-2014 4 |01-04-2014 4 |01-05-2014 ``` where I need to get the latest `installationDate` for each Installation ID, e.g. ``` InstallID |InstallationDate 1 |01-04-2014 2 |01-02-2014 3 |01-03-2014 4 |01-05-2014 ``` Could someone show me how the above can be achieved using SQL?
Query: **[SQLFiddleExample](http://sqlfiddle.com/#!2/89e3b/1)** ``` SELECT t1.InstallID, t1.InstallationDate FROM yourTable t1 LEFT JOIN yourTable t2 ON t1.InstallID = t2.InstallID AND t1.InstallationDate < t2.InstallationDate WHERE t2.InstallID is null ``` Result: ``` | INSTALLID | INSTALLATIONDATE | |-----------|--------------------------------| | 1 | January, 04 2014 00:00:00+0000 | | 2 | January, 02 2014 00:00:00+0000 | | 3 | January, 03 2014 00:00:00+0000 | | 4 | January, 05 2014 00:00:00+0000 | ```
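A minimal sketch of the LEFT JOIN / IS NULL pattern above with Python's built-in sqlite3 (dates stored as ISO text so they compare correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE installs (InstallID INTEGER, InstallationDate TEXT)")
cur.executemany("INSERT INTO installs VALUES (?, ?)", [
    (1, "2014-01-01"), (1, "2014-01-04"),
    (2, "2014-01-02"),
    (3, "2014-01-03"), (3, "2014-01-01"),
])

# A row survives only if no other row for the same InstallID is newer.
rows = cur.execute("""
    SELECT t1.InstallID, t1.InstallationDate
    FROM installs t1
    LEFT JOIN installs t2
      ON t1.InstallID = t2.InstallID
     AND t1.InstallationDate < t2.InstallationDate
    WHERE t2.InstallID IS NULL
    ORDER BY t1.InstallID
""").fetchall()
print(rows)  # [(1, '2014-01-04'), (2, '2014-01-02'), (3, '2014-01-03')]
```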
I think the simplest way is ``` SELECT t1.InstallID, MAX(t1.InstallationDate) FROM yourTable t1 GROUP BY t1.InstallID ```
SQL Get Last Record By Date
[ "", "sql", "" ]
I'm working on a school project. I have a problem filling a dataset with 2 SELECT Queries ``` sql = "SELECT PhoneNo, Forename, Surname, Address, Postcode FROM tblCustomerInfo" Conn.Open() da1 = New OleDb.OleDbDataAdapter(sql, Conn) 'initialises connection with the database and runs sql statement da1.Fill(ds1, "Order") sql = "SELECT OrderNo, Total, OrderDate, OrderTime FROM tblOrder" da2 = New OleDb.OleDbDataAdapter(sql, Conn) 'initialises connection with the database and runs sql statement da2.Fill(ds1, "Order") MaxRows = ds1.Tables("Order").Rows.Count count = -1 Label1.Text = ds1.Tables("Order").Rows(0).Item(5) ``` > ``` > Label1.Text = ds1.Tables("Order").Rows(0).Item(5) > ``` > > is to test for the second dataset, however I'm running into a "dbnull" error. Thank you for your time in advance. Edit: Conversion from type 'DBNull' to type 'String' is not valid. Exception Detail
Try combining the two SQL statements into one, separated by a semicolon, then run that query using a single DataAdapter. I didn't test it, but it should work.
I normally perform queries like this in ASP.Net VB using the `System.Data.SqlClient` library (i.e., I use SqlConnection, SqlCommand, and SqlDataAdapter instead of OleDb), but assuming it works the same way, I can tell you what I think I see is wrong. Change ``` Label1.Text = ds1.Tables("Order").Rows(0).Item(5) ``` To ``` Label1.Text = ds1.Tables("Order").Rows(0)(5) ``` --- Although, I normally define a variable for the row first since it makes it easier to loop through and reference various column names. Something like this... ``` Dim r as DataRow = ds1.Tables("Order").Rows(0) Label1.Text = r("column/field name") 'I'm sure you can use an index number instead of the field name, but I normally just reference the field name, so I may be mistaken. ```
VB Forms, filling out a dataset with two SELECT statements
[ "", "sql", "vb.net", "dataset", "" ]
I have a sample database which you can view here: <https://class.stanford.edu/c4x/Engineering/db/asset/socialdata.html> and I am trying to write an SQL statement that does the following: /*Display a list of students who are friends with someone at least two years older than themselves. Do not list any pair more than once*/ This is the code I have so far: ``` SELECT distinct A.name, B.name, A.grade, B.grade FROM Highschooler A, Highschooler B, Friend F WHERE A.ID = F.ID1 AND B.ID = F.ID2 AND B.grade > A.grade ORDER BY B.name, B. grade, A.name, A.grade ASC; ``` This is what I am getting: ``` Cassandra Alexis 9 11 Tiffany Alexis 9 11 Gabriel Andrew 9 10 Andrew Austin 10 11 Andrew Jordan 10 12 Austin Kyle 11 12 Jessica Kyle 11 12 ``` It seems like it's working, but the last four entries are only 1 year older, not 2. Can anyone help steer me in the right direction or tell me if I already have it right? I appreciate any help.
Try: ``` SELECT distinct A.name, B.name, A.grade, B.grade FROM Highschooler A, Highschooler B, Friend F WHERE A.ID = F.ID1 AND B.ID = F.ID2 AND B.grade - A.grade >= 2 ORDER BY B.name, B. grade, A.name, A.grade ASC; ```
Try the following ``` SELECT distinct A.name, B.name, A.grade, B.grade FROM Highschooler A, Highschooler B, Friend F WHERE A.ID = F.ID1 AND B.ID = F.ID2 AND -- check both ways (B.grade - A.grade >= 2 OR A.grade - B.grade >= 2) ORDER BY B.name, B. grade, A.name, A.grade ASC ```
how to generate a list where a column is twice as large as the other column in sql
[ "", "sql", "sqlite", "" ]
My target RDBMS is SQL Server 2012 - but I'm interested in a general response here. Let's consider the following SQL Update clause: ``` UPDATE MyTable SET Active = 0 WHERE EffectiveDate < GETDATE() ``` Assuming that there could be MANY entries in *MyTable* where Active is already set to "0" for past events, would there be any huge performance benefit to performing the update like this instead? ``` UPDATE MyTable SET Active = 0 WHERE EffectiveDate < GETDATE() AND Active = 1 ```
You would have fewer writes (possibly, as you state, most are already 0?) and fewer updates to the indexes if you have indexes on `Active`
Possibly the largest savings is in writing to the log. Every row being updated generates log records. By updating fewer rows, you will be logging fewer changes. As noted not too long ago in another question, avoiding these updates can simplify some arcane aspects of triggers. The `UPDATE(column)` syntax would identify that column as being modified, even though the value didn't change.
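The difference in rows touched is observable from the client. Purely for illustration, in SQLite (via Python's sqlite3) `Cursor.rowcount` after an UPDATE reports how many rows the statement wrote, whether or not the stored value actually changed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (Active INTEGER, EffectiveDate TEXT)")
# Three past rows already inactive, one past row still active, one future row.
cur.executemany("INSERT INTO MyTable VALUES (?, ?)", [
    (0, "2001-01-01"), (0, "2002-01-01"), (0, "2003-01-01"),
    (1, "2004-01-01"), (1, "2999-01-01"),
])

# Without the extra predicate: rewrites every past row, changed or not.
cur.execute("UPDATE MyTable SET Active = 0 WHERE EffectiveDate < date('now')")
broad = cur.rowcount

# Reset the one real change, then update with the extra predicate:
# only the row that actually needs flipping is touched.
cur.execute("UPDATE MyTable SET Active = 1 WHERE EffectiveDate = '2004-01-01'")
cur.execute("""UPDATE MyTable SET Active = 0
               WHERE EffectiveDate < date('now') AND Active = 1""")
narrow = cur.rowcount
print(broad, narrow)  # 4 1
```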
Benefits of WHERE filtering before performing SQL UPDATE
[ "", "sql", "sql-server", "" ]
I'm designing a MySQL database to store multiple products such as computers, mobile phones, pen drives, etc. Each product has different features, such as ``` **Computers has** Processor Ram HDD Monitor Size ....etc **Mobile phone has** Display Size Network type OS Internal Memory ....etc **Pendrive has** Capacity USB Version ....etc ``` And I have to store an unlimited number of products. Instead of creating separate tables for each product, how do I create a database structure to store this information in one or a fixed number of tables (data tables + mapping tables)? (I think WordPress stores data in this kind of format: it uses a few tables and stores any number of fields related to a post/category within those tables.) Any help/idea to solve this problem would be appreciated.
Consider this: create three tables, product, feature, and product_feature, and maybe product_photos. The product table will be ``` pid, p_name, p_description, p_price, ... insert query INSERT INTO product (p_name, p_description, p_price, ....) VALUES(?,?,?,...) ``` The feature table will be ``` fid, f_name, f_description, ... insert query INSERT INTO feature (f_name, f_description, ....) VALUES(?,?,...) ``` Now the product_feature table will be ``` id, pid, fid query for one product // say a product id is 1 INSERT INTO product_feature (pid, fid) VALUES(1, 10) INSERT INTO product_feature (pid, fid) VALUES(1, 15) INSERT INTO product_feature (pid, fid) VALUES(1, 30) ``` where pid and fid are foreign keys with relations (phpMyAdmin can set those up for you). You can then add a product with multiple features. Then maybe the photo table ``` foto_id, photo_name, photo_path .... ``` Use InnoDB for all the tables. Let me know if you need further help
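A minimal sketch of this three-table layout using Python's built-in sqlite3; the `f_value` column is an assumption added here to hold each attribute's value, which the design implies but does not show:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE product (pid INTEGER PRIMARY KEY, p_name TEXT);
CREATE TABLE feature (fid INTEGER PRIMARY KEY, f_name TEXT, f_value TEXT);
CREATE TABLE product_feature (
    pid INTEGER REFERENCES product(pid),
    fid INTEGER REFERENCES feature(fid)
);
INSERT INTO product VALUES (1, 'Laptop'), (2, 'Pendrive');
INSERT INTO feature VALUES (10, 'Ram', '8GB'), (15, 'HDD', '1TB'),
                           (30, 'Capacity', '64GB');
INSERT INTO product_feature VALUES (1, 10), (1, 15), (2, 30);
""")

# All features of product 1, regardless of product type.
rows = cur.execute("""
    SELECT f.f_name, f.f_value
    FROM product p
    JOIN product_feature pf ON pf.pid = p.pid
    JOIN feature f ON f.fid = pf.fid
    WHERE p.pid = 1
    ORDER BY f.fid
""").fetchall()
print(rows)  # [('Ram', '8GB'), ('HDD', '1TB')]
```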
You need to design a table in such a way that it covers all attributes for all products. This would help you insert any type of product in to that table. But, keep in mind that if there are 4 products with 10 different attributes, you end up creating 40 columns and might use only 10 to insert data at any point of time.
Store multiple data tables in single database table
[ "", "mysql", "sql", "database", "orm", "relational-database", "" ]
I have this code: ``` SELECT ID, Name, 100 AS TempColumn FROM MyTable; ``` And the table is this: ``` | ID | Name | TempColumn| ------------------------- | 1 | A | 100 | ------------------------- | 2 | B | 100 | ------------------------- | 3 | C | 100 | ------------------------- | 1 | A | 100 | ------------------------- | 4 | D | 100 | ------------------------- ``` Now I want to find the sum of the |TempColumn| where ID=1. So it should look like this: ``` | ID | Name | TempColumn| ------------------------- | 1 | A | 200 | ------------------------- ``` How can I query this?
You can sum a constant: ``` SELECT ID, Name, SUM(100) AS SumOfTempColumn FROM MyTable WHERE ID = 1 GROUP BY ID, Name; ``` **[Example on SQL Fiddle](http://sqlfiddle.com/#!2/e135c/1)**
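SUM of a constant is just the constant times the number of matched rows, which can be confirmed with a quick sketch in Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (ID INTEGER, Name TEXT)")
cur.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [(1, "A"), (2, "B"), (3, "C"), (1, "A"), (4, "D")])

# Two rows match ID = 1, so SUM(100) yields 200.
row = cur.execute("""
    SELECT ID, Name, SUM(100) AS SumOfTempColumn
    FROM MyTable
    WHERE ID = 1
    GROUP BY ID, Name
""").fetchone()
print(row)  # (1, 'A', 200)
```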
Should this not be reasonably straightforward using an aggregate query? ``` SELECT ID, Name, SUM(100) AS TempColumn FROM MyTable GROUP BY ID, Name; ```
How to find the sum of a temporary column in SQL?
[ "", "mysql", "sql", "mysql-workbench", "" ]
Here's an example query: ``` SELECT thing_id FROM thing WHERE thing_type IN (3, 7) ``` I would like to turn the 3 and 7 into human-readable names to help understand what the query is truly doing. Something like the following would be great: ``` SELECT thing_id FROM thing WHERE thing_type_id IN (OPENED, ONHOLD) ``` Knowing that OPENED and ONHOLD would have their actual values declared somewhere else. I'm thinking there may also be a way to do this with a JOIN of a thing\_type table. Note that I'm stuck in an environment where I'm coding queries directly rather than using an abstraction framework.
Assuming you have a linked table called ThingNames with two columns, id and ThingName, you could do this ``` SELECT thing_id FROM thing LEFT JOIN ThingNames ON thing.thing_type_id = ThingNames.id WHERE ThingNames.ThingName IN ('OPENED', 'ONHOLD') ``` (Don't forget the quotes around the names inside the IN list.)
You can do this by generating a lookup table for the values: ``` with Lookup(value, name) as ( select 3, 'OPENED' from dual union all select 7, 'ONHOLD' from dual ) SELECT thing_id FROM thing t WHERE thing_type_id IN (select value from Lookup where name in ('OPENED', 'ONHOLD')); ``` I would recommend an approach like this. But you could also do: ``` with thevalues as ( select 3 as OPENED, 7 as ONHOLD from dual ) SELECT thing_id FROM thing cross join thevalues WHERE thing_type_id IN (OPENED, ONHOLD); ``` This is most similar to your original query.
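The lookup-table idea ports to other engines as well. As a sketch, in SQLite (via Python's sqlite3) the `FROM dual` selects become a `VALUES` list (the thing_id values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE thing (thing_id INTEGER, thing_type_id INTEGER)")
cur.executemany("INSERT INTO thing VALUES (?, ?)",
                [(101, 3), (102, 7), (103, 5)])

# Inline lookup mapping human-readable names to status codes.
rows = cur.execute("""
    WITH lookup(value, name) AS (VALUES (3, 'OPENED'), (7, 'ONHOLD'))
    SELECT thing_id
    FROM thing
    WHERE thing_type_id IN (
        SELECT value FROM lookup WHERE name IN ('OPENED', 'ONHOLD'))
    ORDER BY thing_id
""").fetchall()
print(rows)  # [(101,), (102,)]
```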
Using human-readable constants in queries
[ "", "sql", "oracle", "" ]
How can you get today's date, convert it to `01/mm/yyyy` format, and get data from the table with a delivery month 3 months ago? The table already stores the delivery month as `01/mm/yyyy`.
``` SELECT * FROM TABLE_NAME WHERE Date_Column >= DATEADD(MONTH, -3, GETDATE()) ``` Mureinik's suggested method will return the same results, but doing it this way your query can benefit from any indexes on `Date_Column`. Or you can check against the last 90 days. ``` SELECT * FROM TABLE_NAME WHERE Date_Column >= DATEADD(DAY, -90, GETDATE()) ```
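For illustration, the equivalent cutoff in SQLite is `date('now', '-3 months')`. This sketch (via Python's sqlite3, with a made-up orders table) inserts rows relative to today so the result is stable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, order_date TEXT)")
# One row inside the 3-month window, one well outside it.
cur.execute("INSERT INTO orders VALUES (1, date('now', '-1 month'))")
cur.execute("INSERT INTO orders VALUES (2, date('now', '-6 months'))")

rows = cur.execute("""
    SELECT id FROM orders
    WHERE order_date >= date('now', '-3 months')
""").fetchall()
print(rows)  # [(1,)]
```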
MySQL doesn't support DATEADD; instead use the syntax ``` DATE_ADD(date,INTERVAL expr type) ``` To get the last 3 months of data, use ``` DATE_ADD(NOW(),INTERVAL -90 DAY) DATE_ADD(NOW(), INTERVAL -3 MONTH) ```
SQL query for getting data for last 3 months
[ "", "sql", "sql-server", "sql-server-2008", "date", "select", "" ]
I have a complex query that feeds into a simple temp table named #tempTBRB. select \* from #tempTBRB ORDER BY AccountID yields this result set: ![enter image description here](https://i.stack.imgur.com/qO6tN.png) In all cases, when there is only 1 row for a given AccountID, the row should remain, no problem. But whenever there are 2 rows (there will never be more than 2), I want to keep the row with SDIStatus of 1, and filter out SDIStatus of 2. Obviously if I used a simple where clause like "WHERE SDIStatus = 1", that wouldn't work, because it would filter out a lot of valid rows in which there is only 1 row for an AccountID, and the SDIStatus is 2. Another way of saying it is that I want to filter out all rows with an SDIStatus of 2 ONLY WHEN there is another row for the same AccountID. And when there are 2 rows for the same AccountID, there will always be exactly 1 row with SDIStatus of 1 and 1 row with SDIStatus of 2. I am using SQL Server 2012. How is it done?
``` SELECT AccountID ,MIN(SDIStatus) AS MinSDIStatus INTO #MinTable FROM #tempTBRB GROUP BY AccountID SELECT * FROM #tempTBRB T JOIN #MinTable M ON T.AccountID = M.AccountID AND T.SDIStatus = M.MinSDIStatus DROP TABLE #MinTable ```
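A quick sketch of this MIN-then-join approach with Python's built-in sqlite3, using the account IDs from the screenshot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tempTBRB (AccountID INTEGER, SDIStatus INTEGER)")
cur.executemany("INSERT INTO tempTBRB VALUES (?, ?)", [
    (4100923, 2), (4132170, 2), (4137728, 1), (4137728, 2),
])

# Keep each account's lowest SDIStatus: the 1 when a pair exists,
# the lone 2 otherwise.
rows = cur.execute("""
    SELECT t.AccountID, t.SDIStatus
    FROM tempTBRB t
    JOIN (SELECT AccountID, MIN(SDIStatus) AS MinStatus
          FROM tempTBRB GROUP BY AccountID) m
      ON t.AccountID = m.AccountID AND t.SDIStatus = m.MinStatus
    ORDER BY t.AccountID
""").fetchall()
print(rows)  # [(4100923, 2), (4132170, 2), (4137728, 1)]
```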
Here is a little test that worked for me. If you just add the extra columns in your SELECT statements, all should be well: ``` CREATE TABLE #Temp ( ID int, AccountID int, Balance money, SDIStatus int ) INSERT INTO #Temp ( ID, AccountID, Balance, SDIStatus ) VALUES ( 1, 4100923, -31.41, 2 ) INSERT INTO #Temp ( ID, AccountID, Balance, SDIStatus ) VALUES ( 2, 4132170, 0, 2 ) INSERT INTO #Temp ( ID, AccountID, Balance, SDIStatus ) VALUES ( 3, 4137728, 193.10, 1 ) INSERT INTO #Temp ( ID, AccountID, Balance, SDIStatus ) VALUES ( 4, 4137728, 0, 2 ) SELECT ID, AccountID, Balance, SDIStatus FROM ( SELECT ID, AccountID, Balance, SDIStatus, row_number() over (partition by AccountID order by SDIStatus desc) as rn FROM #Temp ) x WHERE x.rn = 1 DROP TABLE #Temp ``` Yields the following: ``` ID AccountID Balance SDIStatus 1 4100923 -31.41 2 2 4132170 0.00 2 4 4137728 0.00 2 ```
SQL Server - How to filter rows based on matching rows?
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
I have Table A, Column 1. This table has values such as: ``` 1 2 3 3 4 4 4 5 ``` I have Table B, Column 2. Which lists certain values, like: ``` 1 3 4 ``` I need a query to Count each unique value in Table A, but ONLY if that value is present in Table B. So with the above, the end result would be: 1 has a quantity of 1, 3 has a quantity of 2, and 4 has a quantity of 3. My only problem is that I do not have this ability. Any help out there?
Based on your question, something like the following should solve your problem (per your naming, the column in Table A is column1 and the one in Table B is column2). ``` select b.column2, count(a.column1) from tableb as b inner join tablea as a on b.column2 = a.column1 group by b.column2 ``` Since you wanted only records which are in both tables, I am using an inner join. Then I am just grouping by the value found in tableb, and getting the count of rows in tablea. Let me know if you have any problems. For more information regarding inner join, see : <http://www.w3schools.com/sql/sql_join_inner.asp>, and for group by, see : <http://www.w3schools.com/sql/sql_groupby.asp>
I would use an `INNER JOIN` query with `GROUP BY` aggregate function ``` SELECT a.column1, count(a.column1) as total FROM tablea a INNER JOIN tableb b ON a.column1 = b.column2 GROUP BY a.column1 ```
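Both answers are the same inner-join-plus-`GROUP BY` pattern. A small runnable check in SQLite via Python, using the sample values from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tablea (column1 INTEGER)")
con.execute("CREATE TABLE tableb (column2 INTEGER)")
con.executemany("INSERT INTO tablea VALUES (?)",
                [(1,), (2,), (3,), (3,), (4,), (4,), (4,), (5,)])
con.executemany("INSERT INTO tableb VALUES (?)", [(1,), (3,), (4,)])

# Count each tablea value, but only for values also listed in tableb.
rows = con.execute("""
    SELECT a.column1, COUNT(a.column1) AS total
    FROM tablea a JOIN tableb b ON a.column1 = b.column2
    GROUP BY a.column1 ORDER BY a.column1
""").fetchall()
print(rows)  # [(1, 1), (3, 2), (4, 3)]
```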
SQL Table Counting and Joining
[ "", "mysql", "sql", "sql-server", "" ]
I have a large csv file (>1GB) sitting in network file storage which gets updated weekly with new records. The file has columns similar to these: ``` Customer ID | Product | Online? (Bool) | Amount | Date ``` I need to use this file to update a PostgreSQL database of customer IDs with the total amount in each month by product and store. Something like this: ``` Customer ID | Month | (several unrelated fields) | Product 1 (Online) | Product 1 (Offline) | Product 2 (Online) | etc... ``` Because the file is so large (and getting steadily larger with each update) I need an efficient way to grab the updated records and update the database. Unfortunately, our server updates the file by Customer ID and not date, so I can't tail it. Is there a clever way to diff the file in a way that won't break as the file keeps growing?
COPY the file to a staging table. This assumes, of course, that you have a PK (a unique identifier for each row) that doesn't mutate. Checksum the remaining columns, do the same for the rows you already loaded into your destination table, and compare source to destination; this will find the updates, deletes, and new rows. As you can see I haven't added any indexes or tuned this in any other way. My goal was to make it function correctly. ``` create schema source; create schema destination; --DROP TABLE source.employee; --DROP TABLE destination.employee; select x employee_id, CAST('Bob' as text) first_name,cast('H'as text) last_name, cast(21 as integer) age INTO source.employee from generate_series(1,10000000) x; select x employee_id, CAST('Bob' as text) first_name,cast('H'as text) last_name, cast(21 as integer) age INTO destination.employee from generate_series(1,10000000) x; select destination.employee.*, source.employee.*, CASE WHEN (md5(source.employee.first_name || source.employee.last_name || source.employee.age)) != md5((destination.employee.first_name || destination.employee.last_name || destination.employee.age)) THEN 'CHECKSUM' WHEN (destination.employee.employee_id IS NULL) THEN 'Missing' WHEN (source.employee.employee_id IS NULL) THEN 'Orphan' END AS AuditFailureType FROM destination.employee FULL OUTER JOIN source.employee on destination.employee.employee_id = source.employee.employee_id WHERE (destination.employee.employee_id IS NULL OR source.employee.employee_id IS NULL) OR (md5(source.employee.first_name || source.employee.last_name || source.employee.age)) != md5((destination.employee.first_name || destination.employee.last_name || destination.employee.age)); --Mimic source data getting an update.
UPDATE source.employee SET age = 99 where employee_id = 45000; select destination.employee.*, source.employee.*, CASE WHEN (md5(source.employee.first_name || source.employee.last_name || source.employee.age)) != md5((destination.employee.first_name || destination.employee.last_name || destination.employee.age)) THEN 'CHECKSUM' WHEN (destination.employee.employee_id IS NULL) THEN 'Missing' WHEN (source.employee.employee_id IS NULL) THEN 'Orphan' END AS AuditFailureType FROM destination.employee FULL OUTER JOIN source.employee on destination.employee.employee_id = source.employee.employee_id WHERE (destination.employee.employee_id IS NULL OR source.employee.employee_id IS NULL) OR (md5(source.employee.first_name || source.employee.last_name || source.employee.age)) != md5((destination.employee.first_name || destination.employee.last_name || destination.employee.age)); ```
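The checksum comparison itself is database-agnostic. Here is a hedged pure-Python sketch of the same audit, with toy data and a made-up column layout, using the PK as the dict key:

```python
import hashlib

# Toy stand-ins for the staged CSV (source) and the destination table:
# {customer_id: (product, online, amount)}. Row 3 was updated upstream,
# row 4 is new in the source, and row 5 was deleted from the source.
source = {1: ("A", 1, 10.0), 2: ("B", 0, 5.0), 3: ("A", 1, 99.0), 4: ("C", 1, 7.0)}
dest   = {1: ("A", 1, 10.0), 2: ("B", 0, 5.0), 3: ("A", 1, 12.5), 5: ("B", 1, 3.0)}

def checksum(row):
    # Hash the non-key columns, like md5(col || col || col) in the SQL above.
    return hashlib.md5("|".join(map(str, row)).encode()).hexdigest()

changed = sorted(k for k in source.keys() & dest.keys()
                 if checksum(source[k]) != checksum(dest[k]))
new     = sorted(source.keys() - dest.keys())
orphans = sorted(dest.keys() - source.keys())
print(changed, new, orphans)  # [3] [4] [5]
```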
Don't store data in a CSV > 1 gigabyte. Store it in a file called something like `current_week_sales`. At the end of the week schedule a script which renames it to something like `2014_12_sales` and creates a new, empty `current_week_sales`.
Good way to pull new lines from (non indexed/non sequential) huge file
[ "", "sql", "ruby", "postgresql", "diff", "" ]
I am trying to remove all records from one table based on the date in another table. For example, I have one table called `pd_course_mstr` that has the following fields ``` course_id start_dt ``` Then I have another table called `pd_eval_dtl` that has the following fields in it ``` course_id eval_question ``` The goal is to delete all eval questions that have a particular date. I was able to use SQL to select all the eval questions by using the following statement ``` SELECT * FROM pd_eval_dtl AS eval JOIN pd_course_mstr AS course ON eval.course_id = course.course_id WHERE course.start_dt='02/17/2014' ``` So I tried to change it to ``` DELETE FROM pd_eval_dtl JOIN pd_course_mstr ON pd_eval_dtl.course_id = pd_course_mstr.course_id WHERE pd_course_mstr.start_dt='02/17/2014' ``` but it keeps saying I have a syntax error near `JOIN`. I don't know what I am doing wrong.
The syntax is ``` DELETE FROM eval FROM pd_eval_dtl AS eval INNER JOIN pd_course_mstr AS course ON eval.course_id = course.course_id WHERE ( course.start_dt = '20140217' ) ```
Try this. ``` DELETE ped FROM pd_eval_dtl ped INNER JOIN pd_course_mstr pcm ON ped.course_id = pcm.course_id WHERE pcm.start_dt='20140217' ```
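For a quick check outside SQL Server: SQLite has no `DELETE ... JOIN` syntax at all, so the same delete is expressed with an `IN` subquery. A minimal sketch via Python, with ISO dates standing in for `'02/17/2014'`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pd_course_mstr (course_id INTEGER, start_dt TEXT)")
con.execute("CREATE TABLE pd_eval_dtl (course_id INTEGER, eval_question TEXT)")
con.executemany("INSERT INTO pd_course_mstr VALUES (?, ?)",
                [(1, "2014-02-17"), (2, "2014-02-18")])
con.executemany("INSERT INTO pd_eval_dtl VALUES (?, ?)",
                [(1, "q1"), (1, "q2"), (2, "q3")])

# SQLite cannot join in a DELETE, so filter on the key via a subquery:
con.execute("""
    DELETE FROM pd_eval_dtl
    WHERE course_id IN (SELECT course_id FROM pd_course_mstr
                        WHERE start_dt = '2014-02-17')
""")
remaining = con.execute("SELECT course_id, eval_question FROM pd_eval_dtl").fetchall()
print(remaining)  # [(2, 'q3')]
```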
Delete from table on join
[ "", "sql", "sql-server", "" ]
Below is my query which I am running against Postgres database. ``` SELECT ad.col1 ,ad.col2 ,md.col3 ,ad.col4 ,mcd.col5 ,AVG(mcd.col5 / md.col3) AS dc ,AVG(md.col3 / ad.col4) AS cb FROM tableCount AS md INNER JOIN tablePop AS ad ON ad.col1 = md.col1 AND ad.col2 = md.col2 INNER JOIN tableData AS mcd ON mcd.col1 = md.col1 AND mcd.col2 = md.col2 WHERE md.col2 = 23 AND md.col1 = '1' GROUP BY ad.col1 ,ad.col2 ,md.col3 ,ad.col4 ,mcd.col5 ORDER BY md.col3 DESC limit 30 GROUP BY ad.col1 ,ad.col2; ``` And below is the output I am getting back on the console with the use of above query- ``` col1 col2 col3 col4 col5 dc cb 1 23 48108 224123 479 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 89 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 142 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 1649 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 14 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 203 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 52 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 62 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 65 0.00000000000000000000 0.00000000000000000000 1 23 48108 224123 33 0.00000000000000000000 0.00000000000000000000 ``` * Now if you see my above output, `1` and `23` is coming multiple times in `col1` and `col2`. I would like to group them in one single row such that I would SUM `col3, col4, col5, dc and cb`. So the output I would like to see it is - ``` col1 col2 col3 col4 col5 dc cb 1 23 48108 2241230 278 0.00000000000000000000 0.00000000000000000000 ``` Is this possible to do in SQL? P.S Any Fiddle example would be great.
You want to remove `col5` from the `group by`: ``` SELECT ad.col1 ,ad.col2 ,md.col3 ,ad.col4 ,sum(mcd.col5) as col5 ,AVG(mcd.col5 / md.col3) AS dc ,AVG(md.col3 / ad.col4) AS cb FROM tableCount AS md INNER JOIN tablePop AS ad ON ad.col1 = md.col1 AND ad.col2 = md.col2 INNER JOIN tableData AS mcd ON mcd.col1 = md.col1 AND mcd.col2 = md.col2 WHERE md.col2 = 23 AND md.col1 = '1' GROUP BY ad.col1 ,ad.col2 ,md.col3 ,ad.col4 ORDER BY md.col3 DESC limit 30 ``` I assume the last `group by` and `order by` are there accidentally.
You can `SUM()` or take the `MAX()`; really it's only `col5` in your sample that is resulting in multiple rows. `GROUP BY` any non-aggregate field, and pick the appropriate aggregate for any other field (note the `ORDER BY` must also use the aggregate, since `md.col3` is no longer in the `GROUP BY`): ``` SELECT ad.col1 ,ad.col2 ,MAX(md.col3) ,MAX(ad.col4) ,AVG(mcd.col5) ,AVG(mcd.col5 / md.col3) AS dc ,AVG(md.col3 / ad.col4) AS cb FROM tableCount AS md INNER JOIN tablePop AS ad ON ad.col1 = md.col1 AND ad.col2 = md.col2 INNER JOIN tableData AS mcd ON mcd.col1 = md.col1 AND mcd.col2 = md.col2 WHERE md.col2 = 23 AND md.col1 = '1' GROUP BY ad.col1 ,ad.col2 ORDER BY MAX(md.col3) DESC limit 30 ```
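Either way, the key move is dropping the per-row columns from `GROUP BY` and aggregating them instead. A tiny SQLite check of that collapse via Python, using the repeated `(1, 23)` rows from the question's output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 INTEGER, col2 INTEGER, col5 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 23, 479), (1, 23, 89), (1, 23, 142)])

# Group only by the identifying pair; SUM (or MAX/AVG) everything else.
row = con.execute(
    "SELECT col1, col2, SUM(col5) FROM t GROUP BY col1, col2"
).fetchone()
print(row)  # (1, 23, 710)
```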
How to group by two columns to show only one single row?
[ "", "sql", "postgresql", "group-by", "aggregate-functions", "" ]
I'm working on the Northwind Microsoft Access database in Acceess 2007. In the Orders table there are three date fields: OrderDate, RequiredDate and ShippedDate. All of these fields are in 1994-1996. I'm trying to input this into an ETL system, but that system does not allow dates over 15 years old. I'd like to add 10 years to each of those three fields. I'm trying something like this: UPDATE Orders set OrderDate = DateAdd("yyyy",10,OrderDate) ...but receive the error "Too few parameters. Expected 1." When I see that error, it's usually a typo in a column name used, but I'm not seeing it anywhere here. Any suggestions?
I ended up going into Create then Query Design. In there the ribbon now showed Design as a tab. I clicked on Update and then SQL view and pasted it, then switched to design view. After saving the query, I tried to run it and it failed due to some security feature that was disabled. I enabled it and updated the database ok. This ended up doing it: ``` UPDATE Orders SET RequiredDate = DateAdd("yyyy",10,RequiredDate); UPDATE Orders SET OrderDate = DateAdd("yyyy",10,OrderDate); UPDATE Orders SET ShippedDate = DateAdd("yyyy",10,ShippedDate); ``` Thanks everyone for your help.
The problem is definitely not due to a syntax error with `DateAdd`. This Immediate window example demonstrates your `DateAdd` syntax is valid. ``` ? DateAdd("yyyy",10,Date()) 2/18/2024 ``` And it will work the same way in VBA code or in a query. Beware that an Access field can have both *name* and *caption* properties. ![field properties showing name and caption](https://i.stack.imgur.com/q9GFh.png) When a field has a caption assigned, that caption is used instead of the field's name in many situations. One such situation is when you open the table directly in Datasheet View. So in your situation, the table may include a field whose caption is *"OrderDate"*, but the actual field name is something else. And in a query, you must use the name because Access will not recognize the caption, assume it must be a parameter, and expect you to supply a value for the parameter. Check the table design to make sure you're using the actual field name in your query. You can avoid this problem by building your query in the Access query designer. Start it as a `SELECT` query and choose from the available field names. After you have it working correctly as a `SELECT`, you can convert it to the `UPDATE` you actually need. Access offers convenient hand-holding features. At times they get in your way and become annoying. But this is a case where Access' helpful tendencies can be genuinely helpful. :-) Turns out I have a copy of Northwind from Access 2007. At least in my copy, the field is named "Order Date". So caption wasn't the culprit. Just bracket the field name so Access will recognize it as "one thing" instead of two. ``` UPDATE Orders SET [Order Date] = DateAdd("yyyy",10,[Order Date]); ``` Notice this is another example of a problem the query designer can help you avoid.
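For anyone checking the arithmetic outside Access: `DateAdd` is Access/VBA-specific, but SQLite (used here via Python only because it is easy to run) spells the same ten-year shift as a date modifier:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (OrderDate TEXT)")
con.executemany("INSERT INTO Orders VALUES (?)", [("1994-08-04",), ("1996-07-05",)])

# SQLite's equivalent of DateAdd("yyyy", 10, OrderDate):
con.execute("UPDATE Orders SET OrderDate = date(OrderDate, '+10 years')")
rows = con.execute("SELECT OrderDate FROM Orders ORDER BY OrderDate").fetchall()
print(rows)  # [('2004-08-04',), ('2006-07-05',)]
```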
How to add time (years) to a date column in a table
[ "", "sql", "ms-access", "ms-access-2007", "dateadd", "" ]
I have a table named "star" which has two columns, celeb and movie, and another table named "releases" with two columns, celeb and album. I want to create a query that will show me a table with three new columns: celeb, number of albums, number of movies. So the idea is that I want to show the celebs that have appeared in a movie (from the star table) and also have made an album (from the releases table), along with the number of movies and albums. Thank you
Perhaps this is what you are looking for: ``` SELECT movie.celeb, movieCount, albumCount FROM (SELECT celeb, count(1) movieCount FROM star GROUP BY celeb) movie INNER JOIN (SELECT celeb, count(1) albumCount FROM releases GROUP BY celeb) album ON movie.celeb=album.celeb ```
I assume you meant "number of albums" and "number of movies". If that is correct, this should work: ``` select celeb, sum(case when num_type = 'movie' then num_recs else 0 end) as num_movies, sum(case when num_type = 'album' then num_recs else 0 end) as num_albums from (select celeb, count(*) as num_recs, 'movie' as num_type from star group by celeb union select celeb, count(*) as num_recs, 'album' as num_type from releases group by celeb) x group by celeb ```
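A runnable check of the join-of-two-aggregates approach, in SQLite via Python with invented sample celebs. Note the inner join drops anyone missing from either table, which matches the question's requirement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE star (celeb TEXT, movie TEXT)")
con.execute("CREATE TABLE releases (celeb TEXT, album TEXT)")
con.executemany("INSERT INTO star VALUES (?, ?)",
                [("Ann", "m1"), ("Ann", "m2"), ("Bob", "m3"), ("Cid", "m4")])
con.executemany("INSERT INTO releases VALUES (?, ?)",
                [("Ann", "a1"), ("Bob", "a2"), ("Bob", "a3")])

# Count per celeb in each table, then inner-join the two counts:
# Cid has no album, so Cid disappears from the result.
rows = con.execute("""
    SELECT m.celeb, movieCount, albumCount
    FROM (SELECT celeb, COUNT(*) AS movieCount FROM star GROUP BY celeb) m
    JOIN (SELECT celeb, COUNT(*) AS albumCount FROM releases GROUP BY celeb) a
      ON m.celeb = a.celeb ORDER BY m.celeb
""").fetchall()
print(rows)  # [('Ann', 2, 1), ('Bob', 1, 2)]
```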
Query sql with two table in one and show the counting the total number
[ "", "sql", "sqlite", "" ]
I am having a class in computer security and am having a little trouble with the syntax when doing a SQL injection on a local machine that we are supposed to hack. Below is the code that builds the SQL statement. ``` SqlCommand sqlc = new SqlCommand("SELECT Record FROM Table_Users WHERE Username='" + username + "' AND Password='" + password + "'"); ``` We are trying the following in the login (username) field and ' or '1'='1 in the password ``` ;INSERT Table_Users (Username, Password) VALUES ('hejsan', 'glenn'); ``` But I am getting a syntax error "Incorrect syntax near 'hejsan'." Can you see the syntax error? =)
try ``` ';INSERT Table_Users (Username, Password) VALUES ('hejsan', 'glenn'); ``` -> you need to close the `'` after `Username=`. In this case you don't even need a value for the password field. You could put `--` after your injected statement to cancel the rest of the select statement: ``` ';INSERT Table_Users (Username, Password) VALUES ('hejsan', 'glenn');-- ```
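To see why the extra leading quote matters, it helps to print the string the vulnerable code actually builds. A small Python sketch of just the string concatenation, with no database involved:

```python
# How the injected username rewrites the query: the leading ' closes the
# Username literal, ; ends the SELECT, and -- comments out the trailing
# AND Password=... clause.
username = "';INSERT Table_Users (Username, Password) VALUES ('hejsan', 'glenn');--"
password = "anything"
query = ("SELECT Record FROM Table_Users WHERE Username='" + username
         + "' AND Password='" + password + "'")
print(query)

# Splitting on ';' shows the server now sees a complete second statement.
injected = query.split(";")[1].strip()
print(injected)  # INSERT Table_Users (Username, Password) VALUES ('hejsan', 'glenn')
```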
My first take was to have `INSERT INTO Table_Users` instead of `INSERT Table_Users`, but as the poster noted, `INTO` is optional (in MSSQL, in contrast to standard ANSI SQL). On second thought, depending on what data type your columns are, the query could work by appending N in front of the values as per [What is the meaning of the prefix N in T-SQL statements?](https://stackoverflow.com/questions/10025032/what-is-the-meaning-of-the-prefix-n-in-t-sql-statements)
SQL syntax error (SQL injection)
[ "", "sql", "sql-server", "visual-studio-2010", "" ]
The following statement works in my database: ``` select column_a, count(*) from my_schema.my_table group by 1; ``` but this one doesn't: ``` select column_a, count(*) from my_schema.my_table; ``` I get the error: > ERROR: column "my\_table.column\_a" must appear in the GROUP BY clause > or be used in an aggregate function Helpful note: This thread: [What does SQL clause "GROUP BY 1" mean?](https://stackoverflow.com/questions/7392730/what-does-sql-group-by-clause-group-by-1-mean) discusses the meaning of "`group by 1`". ## Update: The reason why I am confused is because I have often seen `count(*)` as follows: ``` select count(*) from my_schema.my_table ``` where there is **no** `group by` statement. Is `COUNT` always required to be followed by `group by`? Is the `group by` statement implicit in this case?
This error makes perfect sense. `COUNT` is an "aggregate" function. So you need to tell it which field to aggregate by, which is done with the `GROUP BY` clause. The one which probably makes most sense in your case would be: ``` SELECT column_a, COUNT(*) FROM my_schema.my_table GROUP BY column_a; ``` If you *only* use the `COUNT(*)` clause, you are asking to return the complete number of rows, instead of aggregating by another condition. Your question of whether `GROUP BY` is implicit in that case could be answered with "sort of": If you don't specify anything, it is a bit like asking "group by nothing", which means you will get one huge aggregate, which is the whole table. As an example, executing: ``` SELECT COUNT(*) FROM table; ``` will show you the number of rows in that table, whereas: ``` SELECT col_a, COUNT(*) FROM table GROUP BY col_a; ``` will show you the number of rows *per* value of `col_a`. Something like: ``` col_a | COUNT(*) ---------+---------------- value1 | 100 value2 | 10 value3 | 123 ``` You also should take into account that the `*` means to count *everything*. Including `NULL`s! If you want to count a specific condition, you should use `COUNT(expression)`! See [the docs about aggregate functions](http://www.postgresql.org/docs/9.2/static/functions-aggregate.html) for more details on this topic.
When you have a function like `count`, `sum` etc. you need to group the other columns. This would be equivalent to your query: ``` select column_a, count(*) from my_schema.my_table group by column_a; ``` When you use `count(*)` with no other column, you are counting all rows from `SELECT *` from the table. When you use `count(*)` alongside another column, you are counting the number of rows for each different value of that other column. So in this case you need to group the results, in order to show each value and its count only once.
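Both points are easy to verify in SQLite via Python: `COUNT(*)` with no `GROUP BY` aggregates the whole table into one row, while adding `GROUP BY` gives one count per value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (column_a TEXT)")
con.executemany("INSERT INTO my_table VALUES (?)",
                [("value1",), ("value1",), ("value2",)])

# No GROUP BY: one big aggregate over the whole table.
total = con.execute("SELECT COUNT(*) FROM my_table").fetchone()[0]

# With GROUP BY: one count per distinct column_a value.
per_value = con.execute("""
    SELECT column_a, COUNT(*) FROM my_table
    GROUP BY column_a ORDER BY column_a
""").fetchall()
print(total, per_value)  # 3 [('value1', 2), ('value2', 1)]
```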
"group by" needed in count(*) SQL statement?
[ "", "sql", "postgresql", "" ]
I have a requirement wherein if the query string matches column1, return 1; if it matches column2, return 2; else if it matches column3, return 3. Table structure: ``` col1 col2 col3 11 12 13 22 23 24 ``` If my query string is 23, then I'm expecting a return value of 2 as it matches col2. Something like below: ``` select 1 from table1 where col1=querystring and orderid=xxx or select 2 from table1 where col2=querystring and orderid=xxx or select 3 from table1 where col3=querystring and orderid=xxx ``` Basically I'm expecting one query which returns a single value based on the column it matches. Is this doable in SQL? I'm not very good with DB skills, so any help is highly appreciated.
There are a couple of approaches. If there is a guarantee that no more than one column will match at a time, a UNION will work: ``` SELECT 1 AS SomeCol FROM table1 WHERE col1 = querystring AND orderid = xxx UNION SELECT 2 FROM table1 WHERE col2 = querystring AND orderid = xxx UNION SELECT 3 FROM table1 WHERE col3 = querystring AND orderid = xxx; ``` If more than one match can happen, another approach is this (note the order of precedence is now col1, col2, col3 etc): ``` SELECT CASE WHEN col1 = querystring THEN 1 WHEN col2 = querystring THEN 2 WHEN col3 = querystring THEN 3 END AS SomeCol FROM table1 WHERE orderid = xxx; ```
Please try using `case` ``` declare @var int set @var=23 select case @var when col1 then 1 when col2 then 2 when col3 then 3 end from YourTable where col1=@var OR col2=@var OR col3=@var ```
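A runnable version of the `CASE` approach in SQLite via Python, with the sample rows from the question; the bound parameter plays the role of the query string / `@var`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (col1 INTEGER, col2 INTEGER, col3 INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                [(11, 12, 13), (22, 23, 24)])

# CASE <value> WHEN col1 ... returns 1/2/3 for whichever column matched.
q = 23
row = con.execute("""
    SELECT CASE ? WHEN col1 THEN 1 WHEN col2 THEN 2 WHEN col3 THEN 3 END
    FROM table1
    WHERE col1 = ? OR col2 = ? OR col3 = ?
""", (q, q, q, q)).fetchone()
print(row[0])  # 2, because 23 matched col2
```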
sql query to return single value based on column it matches
[ "", "sql", "sql-server", "" ]
Following code works in Oracle version >= 9i as displayed in [this fiddle](http://sqlfiddle.com/#!4/ddf2e/6) ``` SELECT T1.col1, T1.col3,count(T2.col1) FROM T1 LEFT JOIN T2 ON T1.col3 = T2.col3 AND ',' || T2.col2 || ',' like '%,' || T1.col1 || ',%' GROUP BY T1.col1, T1.col3 ``` However based on [Oracle Left Outer Join](http://www.dba-oracle.com/tips_oracle_left_outer_join.htm) article it seems like the `LEFT JOIN` is not available in `Oracle 8i` – and unfortunately I am working in 8i. For `=` operator I know the equivalent code ``` SELECT T1.col1, T1.col3 FROM T1 , T2 WHERE T1.col3 = T2.col3 (+) --AND ',' || T2.col2 || ',' like '%,' || T1.col1 || ',%' ``` **QUESTION** But how can we specify the OUTER JOIN condition for the `LIKE` operator. Note: I believe, without the `(+)`, it will behave as INNER JOIN for that column. **UPDATE** Following query gives `ORA-00920: invalid relational operator` error ``` SELECT T1.col1, T1.col3,count(T2.col1) FROM T1 ,T2 WHERE T1.col3 = T2.col3(+) AND (',' || T2.col2 || ',') LIKE ('%,' || T1.col1 || ',%')(+) GROUP BY T1.col1, T1.col3 ```
You could try to create subqueries containing the modified columns and then joining the tables as below: ``` Select Mt1.T1_Col1, Mt1.T1_Col3,Count(Mt2.T2_Col1) From (Select T1.Col1 As T1_Col1, T1.Col3 As T1_Col3, '%,' || T1.Col1 || ',%' As T1_Col1_New From T1) Mt1, (Select T2.Col1 As T2_Col1, T2.Col2 As T2_Col2, T2.Col3 As T2_Col3, ',' || T2.Col2 || ',' As T2_Col2_New From T2) Mt2 Where Mt1.T1_Col3 = Mt2.T2_Col3 (+) And Mt1.T1_Col1_New Like Mt2.T2_Col2_New (+) Group By Mt1.T1_Col1, Mt1.T1_Col3; ``` I combined inline subqueries and outer joins here. References: 1. <http://www.orafaq.com/wiki/Inline_view> 2. <http://www.oracle-base.com/articles/9i/ansi-iso-sql-support.php>
Following is working code from [fiddle](http://sqlfiddle.com/#!4/1ae4a/10) ``` Select Mt1.T1_Col1, Mt1.T1_Col3,Count(Mt2.T2_Col1) From (Select T1.Col1 As T1_Col1, T1.Col3 As T1_Col3, '%,' || T1.Col1 || ',%' As T1_Col1_New From T1) Mt1 , (Select T2.Col1 As T2_Col1, T2.Col2 As T2_Col2, T2.Col3 As T2_Col3, ',' || T2.Col2 || ',' As T2_Col2_New From T2) Mt2 WHERE Mt1.T1_Col3 = Mt2.T2_Col3 (+) And Mt2.T2_Col2_New (+) LIKE Mt1.T1_Col1_New --Mt2 should be on the left side of LIKE Group By Mt1.T1_Col1, Mt1.T1_Col3; ```
How to specify optional OUTER JOIN condition in Oracle 8i
[ "", "sql", "oracle", "" ]
I have the following table "Customer": ``` Customer_ID Customer_Name 01 John 02 Mary 03 Marco ``` I would like to return the following result from a select: ``` Query_Result 01 02 03 John Mary Marco ``` Is it possible? Merge both columns into one?
One way (assumes Customer\_ID is a character type) ``` SELECT Customer_ID FROM Customer UNION ALL SELECT Customer_Name FROM Customer ```
Try using `UNION ALL`, the following should work for you. ``` SELECT CONVERT(varchar(30), Customer_ID) FROM Customer UNION ALL SELECT Customer_Name FROM Customer ```
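A quick SQLite check of the `UNION ALL` stacking via Python. SQLite is loosely typed, so the explicit `CONVERT`/`CAST` from the answers isn't needed here, and the IDs are stored as text to match the sample:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customer (Customer_ID TEXT, Customer_Name TEXT)")
con.executemany("INSERT INTO Customer VALUES (?, ?)",
                [("01", "John"), ("02", "Mary"), ("03", "Marco")])

# Stack both columns into one result column via UNION ALL.
rows = [r[0] for r in con.execute("""
    SELECT Customer_ID AS Query_Result FROM Customer
    UNION ALL
    SELECT Customer_Name FROM Customer
""")]
print(rows)
```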
Concat two SQL's Results in one Column
[ "", "sql", "sql-server", "" ]
Note: I am pretty new to MySQL so bear with me please. I have 2 tables in my database which are set up as follows: ``` guides table: guide_id (primary) | cat_id | title 1 0 guide01 2 0 guide02 steps table: step_id | guide_id (foreign) | step_txt 1 1 step1 text... 2 1 step2 text... ``` And I am trying to search the database for keywords within steps.step\_txt and return a list of guides. My current query looks like: ``` SELECT DISTINCT * FROM guides JOIN steps ON guides.guide_id=steps.guide_id WHERE step_txt LIKE "%keyword%" ``` What I have found is that as some guides have more than one step with the keyword contained, this returns duplicated rows. I would like the query to output 1 row containing guide\_id, cat\_id and title even if it finds 2. I think the problem is that I have used JOIN so the query is actually returning a joined row from both tables which would have different step\_id and step\_txt so the DISTINCT isn't effecting it. What is the best work-around for this?
Crude solution would be:- ``` SELECT DISTINCT guides.* FROM guides JOIN steps ON guides.guide_id=steps.guide_id WHERE step_txt LIKE "%keyword%" ``` Possibly more elegant and giving you the matched text :- ``` SELECT g.guide_id, g.cat_id, g.title, GROUP_CONCAT(s.step_txt) FROM guides g INNER JOIN steps s ON g.guide_id = s.guide_id WHERE step_txt LIKE "%keyword%" GROUP BY g.guide_id, g.cat_id, g.title ```
`distinct *` is going to return rows where *all* the columns are distinct. For what you want, you can use: ``` SELECT * FROM guides JOIN steps ON guides.guide_id = steps.guide_id WHERE step_txt LIKE "%keyword%" GROUP BY guides.guide_id; ``` This uses a MySQL extension to `group by` and it will not work in other databases. The columns returned from `steps` come from arbitrary matching rows.
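The portable fix, `SELECT DISTINCT` over only the `guides` columns, can be demonstrated in SQLite via Python; the step text here is made up so that guide 1 matches the keyword twice:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE guides (guide_id INTEGER, cat_id INTEGER, title TEXT)")
con.execute("CREATE TABLE steps (step_id INTEGER, guide_id INTEGER, step_txt TEXT)")
con.executemany("INSERT INTO guides VALUES (?, ?, ?)",
                [(1, 0, "guide01"), (2, 0, "guide02")])
con.executemany("INSERT INTO steps VALUES (?, ?, ?)",
                [(1, 1, "fit the keyword widget"),
                 (2, 1, "tighten the keyword bolt"),
                 (3, 2, "no match here")])

# DISTINCT over only the guides columns collapses the two matching
# steps of guide 1 into a single output row.
rows = con.execute("""
    SELECT DISTINCT g.guide_id, g.cat_id, g.title
    FROM guides g JOIN steps s ON g.guide_id = s.guide_id
    WHERE s.step_txt LIKE '%keyword%'
""").fetchall()
print(rows)  # [(1, 0, 'guide01')]
```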
MySQL DISTINCT function not working as I expected
[ "", "mysql", "sql", "join", "distinct", "" ]
I was told by somebody that for a small project the table count should be 10-15; others told me the more tables, the better. I don't have a requirement specification because I'm doing a small project at home, but it's growing bigger. Typically I don't write a requirement specification, but at the end of the day I wish I had. Anyway, assume that you are building an industry-grade student management system (SMS) for a university, and you were given full authority to build an SMS that would be like an off-the-shelf package. How many tables would you add to the database?
This question is a bit like ["How long is a piece of string?"](http://en.wiktionary.org/wiki/how_long_is_a_piece_of_string) The number of tables in a database depends on the domain model of your application. It is simply impossible to answer your question without doing a data analysis of your case. There might be huge applications with only one or two database tables, and tiny ones with hundreds. The number in itself is not a good indicator of the quality of the architecture. As a general rule: One table for each uniquely identifiable type of information *(on a low level)* that you need to store, plus tables for cross-referencing *(for many-to-many relationships)*. And then there might be administrative tables, tables for logging, etc. Try learning about [Object Role Modeling](http://en.wikipedia.org/wiki/Object-Role_Modeling) *(not the same as Object-relational mapping)*, for ways of automatically creating databases based on the facts and constraints of your business model. In your specific case, I think you need to **stop thinking about how many tables you end up with**. Just keep developing, and when you run into new data that needs storing, refactor your database solution, adding and modifying tables as you go. If you do a good job refactoring, **the number you end up with, is the right one**.
Though SO is not the right platform to answer this, as it is a very opinion-based discussion, from my personal experience I tend to follow a rule of thumb- > "Only bare bones business entities which are absolutely needed must > constitute database entities." Note that database entities are not the same as database tables. A normalised entity can be represented as a set of tables. A student management system which is enterprise grade will have more than 10 business entities for sure. For me the bare minimum 10 will be * Person * Class * Course * Hostel * Attendance * Events * Marks * Schedule * Office * Comments
How many entities a database should have in a real world app?
[ "", "sql", "database", "" ]
I have a table which contains a column that has a German umlaut in its name. Yes, I know this is an absolutely bad idea, but I did not create the table and can definitely not alter it, so I have to somehow deal with it. My problem is that my queries don't seem to work when I use this column in the `WHERE` clause. When I execute the following query: ``` SELECT t.BadUmlautColumnÄ5 FROM myTable t; ``` everything works fine. But when I execute this query: ``` SELECT t.BadUmlautColumnÄ5 FROM myTable t WHERE t.BadUmlautColumnÄ5 = '20.02.2014' ``` I simply get an empty result set. No errors, but just no results. I also tried: ``` WHERE t."BADUMLAUTCOLUMNÄ5" = '20.02.2014' ``` But it's exactly the same. And I know that there are entries for the value I use in the where clause, because with the query without the where clause that works, I can see that there are some. Google wasn't really helpful on this topic, so does anyone know how to solve this issue? The column with the umlaut is of type date, if this is somehow relevant. And I am on an Oracle 10g database.
It's nothing to do with the column name. It looks like the values in the column have a time component. You can query for the whole day with: ``` WHERE TRUNC(t.BadUmlautColumnÄ5) = TO_DATE('20.02.2014', 'DD.MM.YYYY') ``` But if the column is indexed it would be better to do it the other way around: ``` SELECT TO_CHAR(t.BadUmlautColumnÄ5, 'DD.MM.YYYY HH24:MI:SS') FROM myTable t WHERE t.BadUmlautColumnÄ5 >= TO_DATE('20.02.2014 00:00:00', 'DD.MM.YYYY HH24:MI:SS') AND t.BadUmlautColumnÄ5 <= TO_DATE('20.02.2014 23:59:59', 'DD.MM.YYYY HH24:MI:SS') ``` Assuming it is a `DATE`, not a `TIMESTAMP`. I've specified the display format, which you should always do anyway, to include the time so you can see what is actually in there. It looks like your `NLS_DATE_FORMAT` is `'DD.MM.YYYY'` so it will only show the date part by default. Never rely on implicit date conversion, though.
Try this ``` SELECT t.BadUmlautColumnÄ5 FROM myTable t WHERE t.BadUmlautColumnÄ5 = TO_DATE('20.02.2014', 'dd.mm.yyyy'); ``` If that doesn't work: ``` SELECT t.BadUmlautColumnÄ5 FROM myTable t WHERE t.BadUmlautColumnÄ5 = TO_TIMESTAMP ('20.02.2014', 'dd.mm.yyyy'); ``` You can also try ``` SELECT t.BadUmlautColumnÄ5 FROM myTable t WHERE to_char(t.BadUmlautColumnÄ5, 'dd.mm.yyyy') = '20.02.2014'; ```
Query table with german Umlaut in column name
[ "", "sql", "oracle", "" ]
I have a table (BUDDY) with these attributes * id * requestor * requested * status If status='a', it means that requestor and requested are buddies. However, how can I find the buddies of a certain user if the user can be either a requestor or a requested? ``` SELECT requestor, requested FROM buddy,user WHERE user_id = requestor or user_id = requested ``` This is giving me multiple values.
If you want to get some fields from the user, you need to join `buddy` to the `user` table. Otherwise, you are going to get an uncorrelated cross-join. However, it does not look like you need any of the user's columns, so a simple change below should do the trick: ``` SELECT requestor, requested FROM buddy WHERE user_id = requestor or user_id = requested ``` If you want to add fields from the user, add a join: ``` SELECT b.requestor, b.requested, u.first_name, u.last_name FROM buddy b JOIN user u ON b.requestor=u.id OR b.requested=u.id WHERE user_id = b.requestor or user_id = b.requested ```
Given a user U ``` SELECT requestor FROM buddy WHERE requested = U AND status='a' UNION SELECT requested FROM buddy WHERE requestor = U AND status='a' ```
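A runnable sketch of the `UNION` answer in SQLite via Python, with toy data; note the pending row with status 'p' is excluded:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE buddy (requestor INTEGER, requested INTEGER, status TEXT)")
con.executemany("INSERT INTO buddy VALUES (?, ?, ?)",
                [(1, 2, "a"), (3, 1, "a"), (1, 4, "p")])

# Buddies of user 1: people they requested plus people who requested them,
# counting only accepted ('a') rows.
u = 1
buddies = sorted(r[0] for r in con.execute("""
    SELECT requested FROM buddy WHERE requestor = ? AND status = 'a'
    UNION
    SELECT requestor FROM buddy WHERE requested = ? AND status = 'a'
""", (u, u)))
print(buddies)  # [2, 3]
```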
SQL query not getting it right
[ "", "sql", "join", "" ]
A very basic question: I have an update I would like to do. When I do the update it affects 2000+ rows, but when I just run the select query from the subquery I get 1726 rows. I know there is something wrong in my update statement; can someone please help? ``` update ship_plu set pluc_dt='1-Jan-1999' where pluc_dt in ( select sp.pluc_dt from ship_plu sp,ship s where sp.pluc_dt between '16-Feb-2014' and '20-Feb-2014' and sp.ship_num=s.ship_num and s.rcv_dt is null ) ``` The subquery above, executed on its own, brings back only 1726 rows, but when I execute the entire update query it affects over 2000 rows. How do I update just those 1726?
Because you are updating rows that shouldn't be updated: `ship_plu.pluc_dt` might meet the conditions while `ship_plu.ship_num` does **not**. This is the wrong way to update. You should try this: ``` update ship_plu sp JOIN ship s ON sp.ship_num=s.ship_num set pluc_dt='1-Jan-1999' where pluc_dt between '16-Feb-2014' and '20-Feb-2014' and s.rcv_dt is null; ``` The other choice (assuming `ship_num` is unique and a foreign key somewhere) is: ``` update ship_plu set pluc_dt='1-Jan-1999' where ship_num in ( select sp.ship_num from ship_plu sp,ship s where sp.pluc_dt between '16-Feb-2014' and '20-Feb-2014' and sp.ship_num=s.ship_num and s.rcv_dt is null ); ``` I, personally, like the first one better.
You want a correlated subquery, but your inner query selects from the outer table again instead of correlating with it. Try this: ``` update ship_plu sp set pluc_dt='1-Jan-1999' where pluc_dt in ( select sp.pluc_dt from ship s where sp.pluc_dt between '16-Feb-2014' and '20-Feb-2014' and sp.ship_num=s.ship_num and s.rcv_dt is null ); ``` This form of the query will work in any database. Depending on the actual database you are using, there is other syntax (using `join`) that you could use.
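Both answers come down to restricting the update by the row's key (`ship_num`) rather than by a date value that unrelated rows may share. A small runnable sketch of that idea using SQLite, with invented toy data (table contents and dates are assumptions, not the asker's real data):

```python
import sqlite3

# Toy versions of the two tables: ship 2 has been received, ship 1 has not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ship (ship_num INTEGER, rcv_dt TEXT);
CREATE TABLE ship_plu (ship_num INTEGER, pluc_dt TEXT);
INSERT INTO ship VALUES (1, NULL), (2, '2014-02-18');
INSERT INTO ship_plu VALUES (1, '2014-02-17'), (2, '2014-02-17'), (1, '2014-01-01');
""")
# Restrict the update by the row's own key, not by a date that other rows share.
conn.execute("""
    UPDATE ship_plu
    SET pluc_dt = '1999-01-01'
    WHERE pluc_dt BETWEEN '2014-02-16' AND '2014-02-20'
      AND ship_num IN (SELECT ship_num FROM ship WHERE rcv_dt IS NULL)
""")
updated = conn.execute(
    "SELECT COUNT(*) FROM ship_plu WHERE pluc_dt = '1999-01-01'").fetchone()[0]
print(updated)  # 1: only ship 1's in-range row changes; ship 2 was already received
```

The date-only filter would have changed ship 2's row as well, which is exactly the over-counting described in the question.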
Why are different values being affected in my update sql query?
[ "sql", "select", "subquery" ]
I have two tables say Table1(`id`,`name`) and Table2 with (`id`,`name`). Table1 looks like: ``` id name 1 ABC 2 DEF ``` Table2 looks like: ``` id name 1 XYZ 2 ASD ``` Can someone shed light on how I can add Table2 rows to Table1 i.e. Table1 must finally look like: ``` id name 1 ABC 2 DEF 3 XYZ 4 ASD ```
``` Insert into table1 (name) Select name from table2 ```
Try the approach below if `id` in `table1` is not `auto_increment`: ``` SELECT * FROM table1; +------+------+ | id | name | +------+------+ | 1 | ABC | | 2 | DEF | +------+------+ 2 rows in set (0.00 sec) SELECT * FROM table2; +------+------+ | id | name | +------+------+ | 1 | PQR | | 2 | XYZ | +------+------+ 2 rows in set (0.00 sec) SELECT MAX(id) INTO @row FROM table1; Query OK, 1 row affected (0.00 sec) INSERT INTO table1 SELECT @row := @row + 1 as row, name FROM table2; Query OK, 2 rows affected (0.00 sec) Records: 2 Duplicates: 0 Warnings: 0 SELECT * FROM table1; +------+------+ | id | name | +------+------+ | 1 | ABC | | 2 | DEF | | 3 | PQR | | 4 | XYZ | +------+------+ 4 rows in set (0.00 sec) ```
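Either way, the key point is inserting only the name and letting the destination table generate its own keys. A quick runnable sketch of the accepted one-liner using SQLite, where `INTEGER PRIMARY KEY` auto-assigns ids much like MySQL's `AUTO_INCREMENT` (the `ORDER BY id` is added here only to make the demo deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO table1 (name) VALUES ('ABC'), ('DEF');
INSERT INTO table2 (name) VALUES ('XYZ'), ('ASD');
""")
# Insert only the name; the primary key continues from table1's own sequence.
conn.execute("INSERT INTO table1 (name) SELECT name FROM table2 ORDER BY id")
rows = conn.execute("SELECT id, name FROM table1 ORDER BY id").fetchall()
print(rows)  # [(1, 'ABC'), (2, 'DEF'), (3, 'XYZ'), (4, 'ASD')]
```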
Adding rows from one table to another existing table where primary key is autogenerated
[ "mysql", "sql", "pentaho" ]
I want to get numOfItem from the table BUY using ticketTypeId, and then use BUY.userId to look up the gender in the table USER. That way I get numOfItem from table BUY and gender from table USER. I don't know how to write this in one query. Any idea? table structure: TABLE BUY: > ticketTypeId > > numOfItem > > userId TABLE USER: > gender
You need to join your tables on a common field, in this case the user id: ``` Select b.ticketTypeId, b.numOfItem, b.userId, u.gender From buy b inner join user u on b.userid = u.userid Where b.ticketTypeId = <val> ``` Include the `where` clause to get only the needed `ticketTypeId`.
Generally speaking a join between two tables is something like: ``` select table1.*,table2.* from table1 join table2 on table1.key=table2.key ```
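As a runnable illustration of that join shape, here is a SQLite sketch; note that the USER table is given a userId column here, which the question implies but does not actually list, and all row values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (userId INTEGER, gender TEXT);
CREATE TABLE buy (ticketTypeId INTEGER, numOfItem INTEGER, userId INTEGER);
INSERT INTO user VALUES (1, 'F'), (2, 'M');
INSERT INTO buy VALUES (10, 3, 1), (10, 5, 2), (20, 1, 1);
""")
# One query: items bought for ticket type 10, with each buyer's gender.
rows = conn.execute("""
    SELECT b.numOfItem, u.gender
    FROM buy b JOIN user u ON b.userId = u.userId
    WHERE b.ticketTypeId = 10
    ORDER BY b.numOfItem
""").fetchall()
print(rows)  # [(3, 'F'), (5, 'M')]
```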
How to write a sql query to get data from two tables
[ "mysql", "sql" ]
I am trying to figure out whether it is possible to concatenate the results of a SQL query when the output has more than one value. For example, if the query below returns more than one result ``` select * from employee ``` Output: ``` emp1 emp2 emp3 ``` I want the results to show as > emp1, emp2, emp3
You can use `LISTAGG`. Example: ``` SELECT LISTAGG(columnname, ',') WITHIN GROUP (ORDER BY columnname) ``` Hope it resolves the issue!
``` SELECT LISTAGG(empname,',') WITHIN GROUP(ORDER BY empname) FROM emp; ```
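`LISTAGG` is Oracle-specific; to experiment with the same idea locally without an Oracle instance, SQLite's `group_concat` gives the same flavor of result (separator as the second argument; unlike `LISTAGG ... WITHIN GROUP`, the concatenation order is not guaranteed, so the check below compares the parts as a set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empname TEXT);
INSERT INTO emp VALUES ('emp1'), ('emp2'), ('emp3');
""")
# Collapse all rows into one comma-separated string.
joined = conn.execute("SELECT group_concat(empname, ', ') FROM emp").fetchone()[0]
print(joined)
```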
concatenate results of a sql query
[ "sql", "oracle" ]
Sorry for my bad English. Assume that we want to save all commercial actions (sell & buy) every day, and each day has 100 or more actions. After 3 years we have a table with more than 100,000 rows. Now we want to get a report of the actions done 2 years ago between 5/20 and 5/25. ``` SELECT * FROM actions WHERE Date BETWEEN 2012/5/20 AND 2012/5/25 ``` The problem is that all 100,000 rows must be read to do this. At first I decided to separate the data for each month into separate tables, but I do not know how to handle that with Entity Framework. Do you have any suggestions? Thanks.
Don't assume that your query will behave badly when there are X rows in your tables. You should test it! It is very easy to create a few million rows of test data for each of your tables (this should be done in a development or test environment). Then you can test each of your queries and see exactly how "slow" they will be. This snippet will create a table and insert 1 000 000 rows into it. Try it and try running a few different queries on it. ``` CREATE TABLE [dbo].[Orders]( [OrderId] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, [CustomerId] [int] NOT NULL, [ArticleId] [int] NOT NULL, [TotalAmount] [decimal](19, 6) NULL, [OrderDate] DATETIME NOT NULL DEFAULT(GETDATE()) ); WITH C0(c) AS (SELECT 1 UNION ALL SELECT 1), C1(c) AS (SELECT 1 FROM C0 AS A CROSS JOIN C0 AS B), C2(c) AS (SELECT 1 FROM C1 AS A CROSS JOIN C1 AS B), C3(c) AS (SELECT 1 FROM C2 AS A CROSS JOIN C2 AS B), C4(c) AS (SELECT 1 FROM C3 AS A CROSS JOIN C3 AS B), C5(c) AS (SELECT 1 FROM C4 AS A CROSS JOIN C4 AS B), C6(c) AS (SELECT 1 FROM C5 AS A CROSS JOIN C5 AS B), numbers(n) AS( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM C6) INSERT dbo.Orders ( CustomerId , ArticleId , TotalAmount, OrderDate ) SELECT TOP 1000000 N % 150 + 1, N % 100 + 1, N % 500 + 20, DATEADD(MINUTE, (N - 1), '2014-01-01') FROM numbers; ``` The table will contain 1 000 000 orders, done by 150 different Customers, for 100 different Articles, for an amount between 20 and 520 each. Each order is placed with one minute in between each other starting from 2014-01-01 00:00:00. Using that data, the following query still executed in under one second on my workstation: ``` SELECT * FROM dbo.Orders WHERE orderDate BETWEEN '2014-05-01' AND '2014-08-01' ``` Data has a tendency to be much smaller on disk than you think. This table with ONE MILLION rows in it still only takes about 70MB of space. 
``` EXEC sys.sp_spaceused @objname = N'Orders' --name rows reserved data index_size unused --Orders 1000000 70432 KB 37560 KB 32072 KB 800 KB ``` How long does it take to read this much of MB from disk? 2-3 seconds, worst case on a desktop. **Adding indexes:** To comment on other answers. I added an index on the date column, but the query optimizer still thought it was better to scan the entire table. This is probably because it is more expensive to perform lookups for all those orders in the date range than it is to read it all sequentially from disk. Depending on the data in the table, those indexes might or might not be used. This is why you should generate test data that matches your expected load, only then can you tune your queries and create the "correct" indexes. For this particular table, and probably the one in the question, I would suggest putting the CLUSTERED index on the date column instead of the primary key.
> The problem is that all 100,000 row must be read for doing this. A: Even if that were true, it would be trivial unless you run it on an old mobile phone. I regularly aggregate 100 million rows from a 10 billion row table. B: Learn what an index is; then not all the rows must be read.
Speed up finding records in a very large sql table
[ "sql", "sql-server", "entity-framework" ]
I have a table with an EntryID, a Version, and an EntryName column. Each entry can have many different versions; the highest version is the latest version of that entry. Also, each EntryID may have more than one name. ``` EntryID - Version - EntryName 212 - 1 - Car 212 - 1 - Car2 212 - 2 - Batr 212 - 2 - hoo 451 - 2 - Csert 451 - 3 - xxx 451 - 3 - xxx2 111 - 1 - yyy 333 - 4 - ggg ``` Now, based on the EntryIDs provided, I need to get all the entries that have the max version only. For example, the user may enter 212 & 451 and hit the button, and then it will show: ``` EntryID - Version - EntryName 212 - 2 - Batr 212 - 2 - hoo 451 - 3 - xxx 451 - 3 - xxx2 ``` The query below uses GROUP BY but doesn't work. ``` select * from table where entryID in (212,451) and version in (select max(version) from table where entryID in (212,451) group by entryID) ``` Result: ``` EntryID - Version - EntryName 212 - 2 - Batr 212 - 2 - hoo 451 - 2 - Csert 451 - 3 - xxx 451 - 3 - xxx2 ``` This is not correct because entry 451 includes version 2, which is the max version of entry 212.
``` SELECT t1.* FROM Table t1 JOIN (SELECT EntryID, MAX(version) maxversion FROM Table WHERE EntryID IN (212, 451) GROUP BY EntryID) t2 ON t1.EntryID = t2.EntryID AND t1.version = t2.maxversion ```
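A runnable check of this self-join approach against the question's sample rows, using SQLite (only entries 212 and 451 are loaded here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entries (EntryID INTEGER, Version INTEGER, EntryName TEXT);
INSERT INTO entries VALUES
  (212,1,'Car'), (212,1,'Car2'), (212,2,'Batr'), (212,2,'hoo'),
  (451,2,'Csert'), (451,3,'xxx'), (451,3,'xxx2');
""")
# Join each row against its entry's own max version, so 451's version 2 is excluded.
rows = conn.execute("""
    SELECT t1.EntryID, t1.Version, t1.EntryName
    FROM entries t1
    JOIN (SELECT EntryID, MAX(Version) AS maxversion
          FROM entries WHERE EntryID IN (212, 451)
          GROUP BY EntryID) t2
      ON t1.EntryID = t2.EntryID AND t1.Version = t2.maxversion
    ORDER BY t1.EntryID, t1.EntryName
""").fetchall()
print(rows)
# [(212, 2, 'Batr'), (212, 2, 'hoo'), (451, 3, 'xxx'), (451, 3, 'xxx2')]
```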
Try: ``` select y.* from table y where y.entryid in (212, 451) and y.version = (select max(x.version) from table x where x.entryid = y.entryid) ```
How to query to get the max value of a field in relation to another field? GROUP BY doesn't work (MySQL)
[ "mysql", "sql" ]
Let's say I have a table like this: ``` Event (eventID, StartDateTime, EndDateTime) ``` * `StartDateTime` is `datetime` datatype * `EndDateTime` is `datetime` datatype Now sample data could be like so: ``` EventID StartDateTime EndDateTime ----------------------------------------------------------- 1 2014-02-21 00:00:00.000 2014-02-23 23:59:59.000 2 2014-02-22 00:00:00.000 2014-02-24 23:59:59.000 ``` I want to search for the events that are happening at `2014-02-23 00:00:00.000` ``` SELECT * FROM Event WHERE (StartDateTime <= '2/23/2013 00:00:00 AM') OR (EndDateTime >= '2/23/2013 00:00:00 AM') ``` I have tried the above code but it doesn't return the correct result. Can you tell me what I am missing?
use this query ``` SELECT * FROM Event WHERE '2014-02-23 00:00:00.000' BETWEEN StartDateTime and EndDateTime ```
You don't want `OR`, you want `AND`. You want events that started before or on the date you specified, *and* that end after the date you specified: ``` SELECT * FROM Event WHERE StartDateTime <= '20130223' AND EndDateTime > '20130223' ``` Also, I'd seriously recommend that you start storing these date ranges as a semi-open interval, with an *exclusive* end date, if the time portion is important. It's a lot easier to compute exclusive end points, that read more cleanly: ``` INSERT INTO Event(EventID,StartDateTime,EndDateTime) values (1,'2014-02-21T00:00:00.000','2014-02-24T00:00:00.000'), (2,'2014-02-22T00:00:00.000','2014-02-25T00:00:00.000') ``` Which has the advantage that it's not (arbitrarily) excluding the last minute of the day, as your current ranges do.
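To see the `AND` logic (and the half-open interval idea) in action, here is a SQLite sketch using the question's two events and the probe instant, with dates stored as ISO strings so string comparison matches chronological order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event (EventID INTEGER, StartDateTime TEXT, EndDateTime TEXT);
INSERT INTO event VALUES
  (1, '2014-02-21 00:00:00', '2014-02-24 00:00:00'),
  (2, '2014-02-22 00:00:00', '2014-02-25 00:00:00');
""")
point = '2014-02-23 00:00:00'
# An event covers the instant iff it started on/before it AND ends after it.
rows = conn.execute("""
    SELECT EventID FROM event
    WHERE StartDateTime <= ? AND EndDateTime > ?
    ORDER BY EventID
""", (point, point)).fetchall()
ids = [r[0] for r in rows]
print(ids)  # [1, 2] -- both events span 2014-02-23 00:00:00
```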
How to search Date in SQL Server 2012 over two columns
[ "sql", "sql-server", "sql-server-2012" ]
I have a table CLIENT ``` CLIENT_ID NAME CP VILLE 1 razer 49004 St hallo 2 mayui 49005 Kubol ``` and another table AGENCE ``` CLIENT_ID AGENCE_ID ADDR CP VILLE 1 1 qsdf null null 2 2 qsdf null null 1 9 dqsf 5454 5254 1 5 fff 4587 6568 ``` How can I fill the CP and VILLE columns of table AGENCE? Of course I could delete/truncate AGENCE and use SELECT INSERT, but I cannot do that.
So one way to do what you *asked* would be the following: ``` UPDATE A SET A.CP = C.CP, A.VILLE = C.VILLE FROM AGENCE A LEFT JOIN CLIENT C ON A.CLIENT_ID = C.CLIENT_ID ``` A better way to do what you *really* wanted would be to drop the columns from the `AGENCE` table and just do the join whenever you need it: ``` SELECT AGENCE.AGENCE_ID, AGENCE.CLIENT_ID, CLIENT.CP, CLIENT.VILLE FROM AGENCE LEFT JOIN CLIENT ON AGENCE.CLIENT_ID = CLIENT.CLIENT_ID; ``` Make sure you have a proper foreign key setup. Why is `AGENCE_ID` not the first column in your example, by the way? It looks like a primary key.
I just updated the answer of @compuchip: ``` UPDATE A SET A.CP = C.CP, A.VILLE = C.VILLE FROM AGENCE A LEFT JOIN CLIENT C ON A.CLIENT_ID = C.CLIENT_ID AND A.AGENCE_ID = C.CLIENT_ID ```
How can I copy a column from one table to another
[ "sql", "sql-server", "sql-server-2008" ]
I have a table in which one of the columns is a timestamp, and I have the following query > SELECT \* FROM test WHERE (timepacket BETWEEN 2014-02-16 00:00:00 and 2014-02-19 00:00:00) AND (shift = 1) But this query gives me all the rows in the given date range as one flat list. My question is how I can group the results of the above query by date, like > 2014-02-16 > > 1st row > > 2nd row > > 3rd row > > 2014-02-17 > > 1st row > > 2nd row > > 3rd row > > 4th row and so on
``` SELECT * FROM test WHERE timepacket BETWEEN '2014-02-16 00:00:00' AND '2014-02-19 00:00:00' AND shift = 1 GROUP BY timepacket; ```
Use GROUP BY or *Order by* ``` SELECT * FROM test WHERE (timepacket BETWEEN 2014-02-16 00:00:00 and 2014-02-19 00:00:00) AND (shift = 1) GROUP BY timepacket ```
Group results by days in a date range
[ "mysql", "sql" ]
I have a database structured in the following way ``` ID | DATE | col_0 | -------------------------- 1 | 2014 | A_Ver2_data0 | 2 | 2014 | A_Ver2_data1 | 3 | 2014 | A_Ver2_data2 | 4 | 2013 | A_Ver1_data0 | 5 | 2013 | A_Ver1_data1 | 6 | 2012 | A_Ver0_data0 | 7 | 2012 | A_Ver0_data1 | 8 | 2013 | B_Ver3_data0 | 9 | 2013 | B_Ver3_data1 | 10 | 2013 | B_Ver3_data2 | 11 | 2010 | B_Ver2_data0 | 12 | 2010 | B_Ver2_data1 | 13 | 2009 | B_Ver1_data0 | 14 | 2007 | B_Ver0_data0 | ``` I need to write a query that will return the most recent version of the A\_ and B\_ prefixed data sets. So I was thinking something like `SELECT * FROM db.table ORDER BY DATE DESC` But I want to filter out expired versions. desired output should be: ``` ID | DATE | col_0 | -------------------------- 1 | 2014 | A_Ver2_data0 | 2 | 2014 | A_Ver2_data1 | 3 | 2014 | A_Ver2_data2 | 8 | 2013 | B_Ver3_data0 | 9 | 2013 | B_Ver3_data1 | 10 | 2013 | B_Ver3_data2 | ``` Any Ideas?
I think this does what you want. It parses the column to get the first and last parts and then finds the maximum "DATE" for each. It returns the row that matches the date: ``` select id, "DATE", COL_A from (select v.*, max("DATE") over (partition by substr(col_A, 1, 1), substr(col_A, 8) ) as maxdate from versiones v ) v where "DATE" = maxdate; ``` The SQL Fiddle is [here](http://sqlfiddle.com/#!4/84a8f/9).
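The same partition-by-prefix idea can be tried outside Oracle. This SQLite sketch (needs SQLite 3.25+ for window functions, which any recent Python ships with) partitions on the first letter only and renames the reserved-sounding `DATE` column to `yr`; both simplifications are mine, not from the answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE versiones (id INTEGER, yr INTEGER, col_0 TEXT);
INSERT INTO versiones VALUES
  (1, 2014, 'A_Ver2_data0'), (4, 2013, 'A_Ver1_data0'),
  (8, 2013, 'B_Ver3_data0'), (11, 2010, 'B_Ver2_data0');
""")
# Keep only rows whose year equals the max year within their prefix group.
rows = conn.execute("""
    SELECT id, yr, col_0 FROM (
        SELECT v.*, MAX(yr) OVER (PARTITION BY substr(col_0, 1, 1)) AS maxyr
        FROM versiones v
    ) WHERE yr = maxyr
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2014, 'A_Ver2_data0'), (8, 2013, 'B_Ver3_data0')]
```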
I am not sure, but I think this would work: "HAVING date >= MAX(date)-1". MAX(date)-1 will return 2014-1 = 2013, which will filter the results based on date >= 2013. But this would list all the 2013 and 2014 entries.
Using ORDER BY and getting most recent version of records
[ "sql", "oracle", "sql-order-by" ]
I have a table with the data below ``` ------------------- ID Key Value ------------------- 1 1 2 0 3 1 4 0 5 0 6 0 7 1 8 0 9 0 -------------------- ``` I want to update the `Value` column as below ``` ------------------- ID Key Value ------------------- 1 1 0 2 0 1 3 1 0 4 0 3 5 0 2 6 0 1 7 1 0 8 0 0 9 0 0 -------------------- ``` That is, every `Key`=1 will have `Value`=0. Every `Key`=0 will have `Value` = the number of rows to traverse from the current row down to the next row that has `Key`=1. And the last two rows, since there is no '1' to follow, will have `Value`=0. I need a plain Oracle SQL update statement for this.
``` SQL> create table t (id int, key int, value int); SQL> insert into t (id, key) 2 select * from 3 ( 4 select 1 x, 1 y from dual union all 5 select 2, 0 from dual union all 6 select 3, 1 from dual union all 7 select 4, 0 from dual union all 8 select 5, 0 from dual union all 9 select 6, 0 from dual union all 10 select 7, 1 from dual union all 11 select 8, 0 from dual union all 12 select 9, 0 from dual 13 ) 14 / 9 rows created. SQL> commit; SQL> select * from t; ID KEY VALUE ---- ---------- ---------- 1 1 2 0 3 1 4 0 5 0 6 0 7 1 8 0 9 0 SQL> merge into t using( 2 select id, key, 3 decode(key,1,0, 4 decode((max(key) over(order by id rows between current row and unbounded following)),0,0, 5 sum(decode(key,0,1)) over(partition by grp order by id rows between current row and unbounded following)) 6 ) 7 value 8 from ( 9 select id, key, decode(key,1,0, 10 decode((max(key) over(order by id rows between current row and unbounded following)),0,0, -- Define if there is 1 below 11 (sum(key) over(order by id rows between current row and unbounded following)) 12 )) grp 13 from t 14 ) 15 ) src 16 on (t.id = src.id) 17 when matched then 18 update set t.value = src.value 19 / SQL> select * from t; ID KEY VALUE ---- ---------- ---------- 1 1 0 2 0 1 3 1 0 4 0 3 5 0 2 6 0 1 7 1 0 8 0 0 9 0 0 ```
If there are no gaps in the ID field, then this query will do the trick: ``` UPDATE TEST_T tm SET VALUE = (SELECT CASE WHEN t1.KEY = 1 THEN 0 WHEN (SELECT MIN(ID) FROM TEST_T t2 WHERE t2.ID > t1.id AND t2.key = 1) IS NOT NULL THEN (SELECT MIN(ID) FROM TEST_T t2 WHERE t2.ID > t1.id AND t2.key = 1) - t1.id ELSE 0 END VALUE FROM TEST_T t1 WHERE t1.id = tm.id) ```
Update rows based on values present in forthcoming rows
[ "sql", "oracle", "oracle11g", "oracle10g" ]
I'm working on code written by a previous developer and in a query it says, ``` WHERE p.name <=> NULL ``` What does `<=>` mean in this query? Is it something equal to `=`? Or is it a syntax error? But it is not showing any errors or exceptions. I already know that `<>` = `!=` in [MySQL](http://en.wikipedia.org/wiki/MySQL).
### TL;DR It's the [`NULL` safe equal](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_equal-to) operator. Like the regular `=` operator, two values are compared and the result is either `0` (not equal) or `1` (equal); in other words: `'a' <=> 'b'` yields `0` and `'a' <=> 'a'` yields `1`. Unlike the regular `=` operator, values of `NULL` don't have a special meaning and so it never yields `NULL` as a possible outcome; so: `'a' <=> NULL` yields `0` and `NULL <=> NULL` yields `1`. ### Usefulness This can come in useful when both operands may contain `NULL` and you need a consistent comparison result between two columns. Another use-case is with prepared statements, for example: ``` ... WHERE col_a <=> ? ... ``` Here, the placeholder can be either a scalar value or `NULL` without having to change anything about the query. ### Related operators Besides `<=>` there are also two other operators that can be used to compare against `NULL`, namely `IS NULL` and `IS NOT NULL`; they're part of the ANSI standard and therefore supported on other databases, unlike `<=>`, which is MySQL-specific. You can think of them as specialisations of MySQL's `<=>`: ``` 'a' IS NULL ==> 'a' <=> NULL 'a' IS NOT NULL ==> NOT('a' <=> NULL) ``` Based on this, your particular query (fragment) can be converted to the more portable: ``` WHERE p.name IS NULL ``` ### Support The SQL:2003 standard introduced a predicate for this, which works exactly like MySQL's `<=>` operator, in the following form: ``` IS [NOT] DISTINCT FROM ``` The following is universally supported, but is relative complex: ``` CASE WHEN (a = b) or (a IS NULL AND b IS NULL) THEN 1 ELSE 0 END = 1 ```
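Other engines expose the same NULL-safe semantics under different spellings; for instance, SQLite's `IS` operator between two values behaves like MySQL's `<=>`, which makes the truth table easy to poke at from Python (a small experiment in SQLite, not MySQL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's IS is its NULL-safe equality: always 0 or 1, never NULL.
pairs = [('a', 'a'), ('a', None), (None, None)]
results = [conn.execute("SELECT ? IS ?", p).fetchone()[0] for p in pairs]
print(results)  # [1, 0, 1]
# Ordinary equality yields NULL (Python None) when either operand is NULL.
plain = conn.execute("SELECT ? = ?", ('a', None)).fetchone()[0]
print(plain)  # None
```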
**<=>** is the `NULL-safe equal to` operator. It performs an equality comparison like the = operator, but returns 1 rather than NULL if both operands are NULL, and 0 rather than NULL if one operand is NULL. See here for the [documentation](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to). Sample: to filter out NULLs you should use IS NOT NULL. (The comparison operators = and <> both give UNKNOWN with NULL on either side of the expression.) ``` SELECT * FROM table WHERE YourColumn IS NOT NULL; ``` You can also negate the NULL-safe equality operator, but this is not standard SQL. ``` SELECT * FROM table WHERE NOT (YourColumn <=> NULL); ```
What is this operator <=> in MySQL?
[ "mysql", "sql", "operators", "spaceship-operator" ]
I have a table that has a bunch of rows, but only three columns, `date`, `username` and `posts`. `Username` has usernames, and they repeat a lot. `Posts` has a bunch of numbers. `Date` has the date something was posted in the `Y-m-D` format. Now when I run `SELECT * FROM table WHERE date = '2014-02-20'`, I get a bunch of mixed results, like this: ``` date username posts 2014-02-20 user1 1 2014-02-20 user2 2 2014-02-20 user14 1 2014-02-20 user3 1 2014-02-20 user2 3 2014-02-20 user2 4 2014-02-20 user11 1 2014-02-20 user1 2 2014-02-20 user8 2 2014-02-20 user9 3 2014-02-20 user55 4 2014-02-20 user5 3 ``` I want to sort it out so it will look like this: ``` date username posts 2014-02-20 user1 1 2014-02-20 user1 2 2014-02-20 user1 3 2014-02-20 user1 4 2014-02-20 user2 1 2014-02-20 user2 2 2014-02-20 user2 3 2014-02-20 user2 4 2014-02-20 user2 5 2014-02-20 user2 6 2014-02-20 user3 1 2014-02-20 user3 2 ``` How can I do that?
Try like this: ``` SELECT * FROM table WHERE date = '2014-02-20' ORDER BY username,posts ASC ```
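A quick runnable check of the two-column ORDER BY, using SQLite and an invented subset of the rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (username TEXT, posts INTEGER);
INSERT INTO posts VALUES ('user2', 2), ('user1', 2), ('user2', 1), ('user1', 1);
""")
# Sort primarily by username, then by post number within each user.
rows = conn.execute(
    "SELECT username, posts FROM posts ORDER BY username, posts ASC").fetchall()
print(rows)  # [('user1', 1), ('user1', 2), ('user2', 1), ('user2', 2)]
```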
``` SELECT * FROM table WHERE date ='$something' ORDER BY username ASC, posts ASC ```
How can I sort rows based on two columns in MySQL?
[ "mysql", "sql" ]
I'm quite new to the whole Hibernate and SQL scope, so forgive me for the silly question. I want to make a SQL query using Hibernate that will group all the results by groupUserName. I have the following tables in my DB: ms\_GroupUser: ``` +-------------+---------------+----------+---------+ | groupUserId | groupUserName | password | groupId | +-------------+---------------+----------+---------+ | 1 | Alice | 123456 | 1 | | 2 | BOB | 654321 | 1 | | 3 | BOB | 654321 | 2 | | 4 | Charlie | 654321 | 1 | +-------------+---------------+----------+---------+ ``` ms\_Group: ``` +---------+-----------+--------+----------------+------------+ | groupId | groupName | system | systemHostName | systemPort | +---------+-----------+--------+----------------+------------+ | 1 | TEST | FOO | 1.1.1.1 | 8080 | | 2 | TEST2 | FOO | 1.1.1.1 | 8080 | +---------+-----------+--------+----------------+------------+ ``` My query is: ``` SELECT groupt.groupname, usert.groupusername FROM ms_group AS groupt, ms_groupuser AS usert WHERE groupt.groupid = usert.groupid AND usert.password = '654321' GROUP BY usert.groupusername ``` I expect it to bring back this bag as a result: > BOB -> TEST, TEST2. > > Charlie -> TEST. What it actually brings back is: > BOB -> TEST. > > Charlie -> TEST. Where is TEST2? Can you please help me? Have I misunderstood something? Thank you very much!
``` SELECT groupUserName,groupName FROM ms_GroupUser u, ms_group g WHERE g.groupId=u.groupId AND u.password=654321 ``` **Output** ``` | GROUPUSERNAME | GROUPNAME | |---------------|-----------| | BOB | TEST | | BOB | TEST2 | | Charlie | TEST | ``` **[Fiddle](http://sqlfiddle.com/#!2/0c35e/10)** **Edit** Based upon your question in comments ``` SELECT groupUserName,GROUP_CONCAT(groupName) FROM ms_GroupUser u, ms_group g WHERE g.groupId=u.groupId AND u.password=654321 GROUP BY groupUserName ``` **Output** ``` | GROUPUSERNAME | GROUP_CONCAT(GROUPNAME) | |---------------|-------------------------| | BOB | TEST,TEST2 | | Charlie | TEST | ``` **[Fiddle](http://sqlfiddle.com/#!2/0c35e/13)**
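To try the GROUP_CONCAT variant without a MySQL server, SQLite's `group_concat` works the same way. A runnable sketch with the question's data (only the columns the query touches are modeled; ordering inside each concatenated group is not guaranteed, so the check compares the parts as a set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ms_group (groupId INTEGER, groupName TEXT);
CREATE TABLE ms_groupuser (groupUserName TEXT, password TEXT, groupId INTEGER);
INSERT INTO ms_group VALUES (1, 'TEST'), (2, 'TEST2');
INSERT INTO ms_groupuser VALUES
  ('BOB', '654321', 1), ('BOB', '654321', 2), ('Charlie', '654321', 1);
""")
# One row per user, with all of their group names collapsed into one string.
rows = conn.execute("""
    SELECT u.groupUserName, group_concat(g.groupName)
    FROM ms_groupuser u JOIN ms_group g ON g.groupId = u.groupId
    WHERE u.password = '654321'
    GROUP BY u.groupUserName
    ORDER BY u.groupUserName
""").fetchall()
print(rows)
```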
Just add "groupt.groupName" to the GROUP BY. The code then becomes: ``` SELECT groupt.groupName, usert.groupUserName FROM Group1 AS groupt, GroupUser AS usert WHERE groupt.groupId = usert.groupId AND usert.password = '654321' GROUP BY groupt.groupName, usert.groupUserName; ```
SQL query group by results
[ "mysql", "sql", "hibernate", "select", "group-by" ]
I have two columns: ``` ID name -------------- NULL Bose NULL Bose NULL Computer NULL Bose NULL Monitor NULL Monitor NULL Computer NULL Bose NULL Phone NULL Computer ``` I need to add unique values like this, with values starting from some number like 400: ``` ID name ------------ 400 Bose 400 Bose 401 Computer 400 Bose 402 Monitor 402 Monitor 401 Computer 400 Bose 403 Phone 401 Computer ``` I have tried with DISTINCT but can't figure it out; can somebody please help? Thank you!
Try this: ``` UPDATE your_table s JOIN (SELECT (@r:=@r+1) rn, t.name FROM (SELECT DISTINCT name FROM your_table) t ,(SELECT @r:=399) nums) tab ON tab.name = s.name SET s.ID = tab.rn; ``` Here is working code at SQL Fiddle: <http://www.sqlfiddle.com/#!2/b831ed/1>
You can do this in various ways. One method is to use variables. ``` update table t cross join (select @name := '',@prevname := '', @id := 399) const set id = (case when (@prevname := @name) is null then null when (@name := name) is null then null when @prevname = name then @id else @id := @id + 1 end) order by name; ``` The use of the `case` statement is simply to allow variable assignments in the `update` clause.
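Both answers assign one id per distinct name. The same effect can be sketched in SQLite with a correlated count over the distinct names; note this ranks names alphabetically rather than by first appearance, which for this particular data happens to give the desired 400/401/402 numbering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (ID INTEGER, name TEXT);
INSERT INTO items (name) VALUES ('Bose'), ('Bose'), ('Computer'), ('Monitor'), ('Computer');
""")
# 399 + (number of distinct names <= this row's name) == a dense rank starting at 400.
conn.execute("""
    UPDATE items SET ID = 399 + (
        SELECT COUNT(*) FROM (SELECT DISTINCT name FROM items) d
        WHERE d.name <= items.name)
""")
rows = conn.execute("SELECT DISTINCT ID, name FROM items ORDER BY ID").fetchall()
print(rows)  # [(400, 'Bose'), (401, 'Computer'), (402, 'Monitor')]
```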
SQL query - How to add same ID values to values from another column?
[ "sql", "phpmyadmin" ]
I need to write a SQL query to fetch all tables in a schema that were updated on sysdate. ``` select distinct(table_name) from All_Tab_Columns where owner = 'DBO' and last_analyzed = sysdate; ``` It doesn't seem to work properly.
As mentioned in answers to the question I linked to, you can use the `ORA_ROWSCN` pseudo-column to get an idea of when the table was last updated. This will examine all tables in your schema and list those which were modified on the specified date, according to the `ORA_ROWSCN`. This may take a while to run, of course. ``` set serveroutput on declare last_update varchar2(10); bad_scn exception; no_scn exception; pragma exception_init(bad_scn, -8181); pragma exception_init(no_scn, -1405); begin for r in (select table_name from all_tables where owner = 'DBO') loop begin execute immediate 'select to_char(scn_to_timestamp(max(ora_rowscn)), ' || '''YYYY-MM-DD'') from DBO.' || r.table_name into last_update; if last_update = '2014-02-21' then dbms_output.put_line(r.table_name || ' last updated on ' || last_update); end if; exception when bad_scn then dbms_output.put_line(r.table_name || ' - bad scn'); when no_scn then dbms_output.put_line(r.table_name || ' - no scn'); end; end loop; end; / ``` The exception handlers are covering views (which are listed but have no SCN), and where there is an invalid SCN for some reason; you may want to ignore those rather than displaying them. If you are only looking for today, not a specific date, then this might be faster: ``` declare start_scn number; changed_rows number; changed_tables number := 0; begin start_scn := timestamp_to_scn(trunc(systimestamp)); for r in (select table_name from all_tables where owner = 'DBO' order by table_name) loop execute immediate 'select count(*) from (' || 'select ora_rowscn from DBO.' 
|| r.table_name || ') where ora_rowscn >= :1 and rownum < 2' into changed_rows using start_scn; if changed_rows > 0 then dbms_output.put_line(r.table_name || ' updated'); changed_tables := changed_tables + 1; end if; end loop; dbms_output.put_line(changed_tables || ' tables updated today'); end; / ``` You could do the same thing for any date, really, but you'd need to find the earliest and latest SCN for that day (which is more complicated for the current date). Also note that this may only work within your flashback window - if you go too far back you won't be able to translate an SCN to a timestamp anyway.
You need to apply `TRUNC` function on `last_analyzed` and `sysdate` and then it will work ``` select distinct(table_name) from All_Tab_Columns where owner = 'DBO' and trunc(last_analyzed) = trunc(sysdate); ```
sql query to fetch all tables in a schema that was updated on sysdate
[ "sql", "oracle" ]
I have the following database structure ``` table_countries ---------- country_id country_name table_cities ---------- city_id country_id city_name table_streets ---------- street_id city_id street_name table_people ---------- person_id street_id person_name ``` There are multiple countries, which can have multiple cities, which in turn have multiple streets, and so on. I am looking to perform a query that will get a list of all countries that have 1 or more people in them. The problem is that the countries table is not linked directly to the people table, and a LEFT JOIN returns multiple rows for the same country.
For the expected result mentioned in your edit I would change the left joins to inner joins and select only country name with a group by clause. Note the foreign key names in the on clauses, I think you have to clarify/correct your table structures: ``` SELECT table1.country FROM table1 JOIN table2 ON table1.id = table2.table1_id JOIN table3 ON table2.id = table3.table2_id JOIN table4 ON table3.id = table4.table3_id GROUP BY table1.country ```
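A runnable sketch of that inner-join chain with GROUP BY, using SQLite and invented sample rows where only one country actually has a person:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE countries (country_id INTEGER, country_name TEXT);
CREATE TABLE cities (city_id INTEGER, country_id INTEGER);
CREATE TABLE streets (street_id INTEGER, city_id INTEGER);
CREATE TABLE people (person_id INTEGER, street_id INTEGER);
INSERT INTO countries VALUES (1, 'France'), (2, 'Spain');
INSERT INTO cities VALUES (10, 1), (20, 2);
INSERT INTO streets VALUES (100, 10), (200, 20);
INSERT INTO people VALUES (1000, 100);  -- only France has a person
""")
# Inner joins drop countries with no person; GROUP BY collapses duplicates.
rows = conn.execute("""
    SELECT c.country_name
    FROM countries c
    JOIN cities ci ON ci.country_id = c.country_id
    JOIN streets s ON s.city_id = ci.city_id
    JOIN people p ON p.street_id = s.street_id
    GROUP BY c.country_name
""").fetchall()
names = [r[0] for r in rows]
print(names)  # ['France']
```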
``` SELECT * FROM table1 WHERE id IN ( SELECT DISTINCT table1.id FROM table1 LEFT JOIN table2 ON table1.id = table2.id LEFT JOIN table3 ON table2.id = table3.id LEFT JOIN table4 ON table3.id = table4.id ); ``` ought to do the trick then? You don't even need `DISTINCT`, but it will make the inner query sufficient if you just want to get the country IDs.
MySQL left join limit to one row
[ "mysql", "sql" ]
In my data table I have a Date column (format `yyyy/mm/dd`) and a Time column (format `hh:mm:ss`). I am trying to concat the two so I can use the result in a calendar, but I keep getting an error. Here is my query: `CAST(T0.[Date]) AS Date) + CAST(T0.[Time]) AS Time(7))` Where am I going wrong?
The problem is probably the extra bracket near [Date] and [Time]: ``` CAST(T0.[Date]) AS Date) + CAST(T0.[Time]) AS Time(7)) ``` Change it to: ``` CAST(T0.[Date] AS Date) + CAST(T0.[Time] AS Time(7)) ``` **UPDATED:** The data types datetime and time are incompatible in the add operator. ``` select cast(CAST(T0.[Date] AS date)as nvarchar(8)) + cast(CAST(T0.[Time] AS time(7)) as nvarchar(8)) ```
Try this ``` CAST(T0.[Date] AS SmallDateTime) + CAST(T0.[Time] AS Time) ```
CAST Date and Time in SQL Server
[ "sql", "sql-server" ]
I need to load some test data into the Channel field on my Account table. The Channel can be one of 10 different values, so I thought I'd randomly assign the Channel one of the values using a `CASE` expression along with `ABS(CHECKSUM(NewId())) % 10` like so: ``` SELECT id, name, Channel = CASE ABS(CHECKSUM(NewId())) % 10 WHEN 0 THEN 'Baby Only' WHEN 1 THEN 'Club' WHEN 2 THEN 'Drug' WHEN 3 THEN 'Food' WHEN 4 THEN 'Internet' WHEN 5 THEN 'Liquidators' WHEN 6 THEN 'Mass' WHEN 7 THEN 'Military' WHEN 8 THEN 'Other' WHEN 9 THEN 'Speciality' ELSE '*NONE*' -- How is this ever getting reached? END FROM retailshelf_nil...account A ``` Since I'm using modulo 10 I thought the only possible values should be 0-9. But when I run the above code, I'm finding that the `ELSE` clause is indeed being reached and that my data is coming up with 'NONE' on some records as shown: ``` id name Channel 001L000000KpgFqIAJ Acct1 *NONE* 001L000000KpgFrIAJ Acct2 Mass 001L000000KpgFsIAJ Acct3 Club 001L000000KpgFtIAJ Acct4 *NONE* 001L000000KpgFuIAJ Acct5 Baby Only 001L000000KpgFvIAJ Acct6 *NONE* 001L000000KpgFwIAJ Acct7 Mass ``` **Can someone please explain what logical error I've made that's allowing the ELSE clause to be reached?** When I run a simple test to just generate the random number like so: ``` SELECT RadomNum = ABS(CHECKSUM(NewId())) % 10 FROM retailshelf_nil...account A ORDER BY 1 ``` All the numbers generated are from 0-9 as expected, so what's different about this first SQL? And is there a workaround to ensure that `ELSE` is never reached?
The written form of the query is expanded to: ``` Channel = CASE WHEN ABS(CHECKSUM(NewId())) % 10 = 0 THEN 'Baby Only' WHEN ABS(CHECKSUM(NewId())) % 10 = 1 THEN 'Club' WHEN ABS(CHECKSUM(NewId())) % 10 = 2 THEN 'Drug' WHEN ABS(CHECKSUM(NewId())) % 10 = 3 THEN 'Food' WHEN ABS(CHECKSUM(NewId())) % 10 = 4 THEN 'Internet' WHEN ABS(CHECKSUM(NewId())) % 10 = 5 THEN 'Liquidators' WHEN ABS(CHECKSUM(NewId())) % 10 = 6 THEN 'Mass' WHEN ABS(CHECKSUM(NewId())) % 10 = 7 THEN 'Military' WHEN ABS(CHECKSUM(NewId())) % 10 = 8 THEN 'Other' WHEN ABS(CHECKSUM(NewId())) % 10 = 9 THEN 'Speciality' ELSE '*NONE*' -- How is this ever getting reached? END ``` A new value for `NEWID` is used in each test.
A new "random" number will be calculated for every WHEN clause - you can instead use a derived table: ``` SELECT ID, Name, Channel = CASE Rand WHEN 0 THEN 'Baby Only' WHEN 1 THEN 'Club' WHEN 2 THEN 'Drug' WHEN 3 THEN 'Food' WHEN 4 THEN 'Internet' WHEN 5 THEN 'Liquidators' WHEN 6 THEN 'Mass' WHEN 7 THEN 'Military' WHEN 8 THEN 'Other' WHEN 9 THEN 'Speciality' ELSE '*NONE*' -- How is this ever getting reached? END FROM ( SELECT id, name, ABS(CHECKSUM(NewId())) % 10 Rand FROM retailshelf_nil...account A ) zzz; ``` or a CROSS APPLY subquery: ``` SELECT A.ID, A.Name, Channel = CASE zzz.Rand WHEN 0 THEN 'Baby Only' WHEN 1 THEN 'Club' WHEN 2 THEN 'Drug' WHEN 3 THEN 'Food' WHEN 4 THEN 'Internet' WHEN 5 THEN 'Liquidators' WHEN 6 THEN 'Mass' WHEN 7 THEN 'Military' WHEN 8 THEN 'Other' WHEN 9 THEN 'Speciality' ELSE '*NONE*' -- How is this ever getting reached? END FROM retailshelf_nil...account A CROSS APPLY ( SELECT ABS(CHECKSUM(NewId())) % 10 ) zzz (Rand); ``` That way `NewID()` is called only once per record. A similar scenario was resolved [here](http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3f430d6d-ebe6-4219-9c3e-27414a38130c/a-strange-query-behaviour-with-rand?forum=transactsql). The [T-SQL documentation](http://manuals.sybase.com/onlinebooks/group-as/asg1250e/sqlug/@Generic__BookTextView/39235;pt=39091) explains this phenomenon (granted it's for Sybase but apparently still applies to SQL Server): > Expressions that reference the `rand` function, the `getdate` function, > and so on, produce different values each time they are evaluated. This > can yield unexpected results when you use these expressions in certain > case expressions. For example, the SQL standard specifies that case > expressions with the form: > > ``` > case expression > when value1 then result1 > when value2 then result2 > when value3 then result3 > ... 
> end > ``` > > are equivalent to the following form of case expression: > > ``` > case expression > when expression=value1 then result1 > when expression=value2 then result2 > when expression=value3 then result3 > ... > end > ```
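The fall-through behaviour is easy to reproduce outside the database. Below is a small Python sketch (an illustration only, not SQL Server itself) that mimics a CASE expression drawing a fresh random value in every WHEN branch, versus one computed once as in the derived-table form:

```python
import random

def case_per_branch():
    # a fresh random draw for every WHEN comparison, as in the expanded CASE
    for v in range(10):
        if random.randrange(10) == v:
            return v
    return None  # the ELSE '*NONE*' branch

def case_once():
    # the derived-table / CROSS APPLY form: one draw, then all comparisons
    r = random.randrange(10)
    for v in range(10):
        if r == v:
            return v
    return None

random.seed(0)
per_branch = [case_per_branch() for _ in range(10000)]
once = [case_once() for _ in range(10000)]
print(any(x is None for x in per_branch))  # True: the ELSE branch is reachable
print(any(x is None for x in once))        # False: the ELSE branch is never reached
```

Roughly 35% of the per-branch rows ((9/10)^10 of them) fall through every WHEN and land in the ELSE, which is exactly how `'*NONE*'` shows up in the original query.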
How does this CASE expression reach the ELSE clause?
[ "", "sql", "sql-server-2008", "t-sql", "" ]
Hello, what I want is to write a query which will fetch me 3 columns: 1. nvarchar column1 2. integer values column2 3. a single cell holding the sum of column2. Is it possible? I am getting the following error: Msg 8120, Level 16, State 1, Line 1 Column 'tablename.columnname' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. What is the correct procedure to get data in the format I want? **Edit** Jaques' answer works but I don't get what I want. What I want is: ``` column 1 PID | column 2 SID | column 3 | column 4 | Column 5(Total of 4) ------------------------------------------------------------------------- 1 | 1 | ABC | 125.00 | 985.00 2 | 2 | XYZ | 420.00 | 3 | 3 | DEF | 230.00 | 4 | 4 | GHI | 210.00 | ```
You can do it in the following way; otherwise you would have to add all non-aggregated columns to the GROUP BY, which makes it difficult: ``` Select column1, column2, SUM(column2) OVER (PARTITION BY column1) as Total from [Table] ``` This should work. From your edited question, you can also do it with a subselect, but why do you want it like that? ``` Select Column1, Column2, Column3, Column4, (Select SUM(Column4) from [Table]) as Column5 from [Table] ```
I suspect you are using an aggregate function on some columns and not listing your remaining columns in the GROUP BY clause. Your query should look like this: ``` select sum(column2), column1 from table1 group by column1 ```
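To see the "total on every row" shape from the edited question end to end, here is a hedged sketch using Python's built-in `sqlite3` (the table and column names are invented for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (pid INTEGER, sid INTEGER, name TEXT, amount REAL);
INSERT INTO sales VALUES (1,1,'ABC',125.0),(2,2,'XYZ',420.0),
                         (3,3,'DEF',230.0),(4,4,'GHI',210.0);
""")
# the scalar subquery repeats the grand total on every row
rows = conn.execute("""
SELECT pid, sid, name, amount,
       (SELECT SUM(amount) FROM sales) AS total
FROM sales
ORDER BY pid
""").fetchall()
for row in rows:
    print(row)
```

Every row carries the 985.0 grand total, matching Column 5 in the desired layout; on engines with window functions, `SUM(amount) OVER ()` produces the same column.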
How to create a result from sql server with a summed column
[ "", "sql", "sql-server", "sum", "correlated-subquery", "" ]
This is my SQL code: ``` SELECT `qty`,`unitprice`, qty * unitprice as Stock FROM item where Stock<1000 ``` but it is saying: > #1054 - Unknown column 'Stock' in 'where clause' How do I fix it?
As documented under [Problems with Column Aliases](http://dev.mysql.com/doc/en/problems-with-alias.html): > Standard SQL disallows references to column aliases in a `WHERE` clause. This restriction is imposed because when the `WHERE` clause is evaluated, the column value may not yet have been determined. You must instead repeat the calculation within your `WHERE` clause: ``` SELECT qty, unitprice, qty * unitprice as Stock FROM item WHERE qty * unitprice < 1000 ```
You can't have an alias in a where condition. It will only be exposed to the next layer up - the calling application, or a parent query if this is written as a subquery. ``` where qty * unitprice < 1000 ```
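A quick way to convince yourself that repeating the expression works: the sketch below runs the portable form of the query with Python's built-in `sqlite3` and invented sample rows (an illustration, not MySQL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (qty INTEGER, unitprice REAL);
INSERT INTO item VALUES (10, 50.0), (100, 20.0), (5, 10.0);
""")
# repeat the calculation in WHERE instead of referencing the Stock alias
rows = conn.execute("""
SELECT qty, unitprice, qty * unitprice AS Stock
FROM item
WHERE qty * unitprice < 1000
ORDER BY qty
""").fetchall()
print(rows)  # the 100 * 20.0 = 2000.0 row is filtered out
```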
Mysql alias in where condition
[ "", "mysql", "sql", "" ]
Writing queries (an SQL file) that can run both in MySQL and PostgreSQL, everything is OK so far except the following: ``` SELECT MONTH(Date), YEAR(Date) FROM Ticket; ``` for extracting the month number (9,10,11 ..) and year (2011, 2012..) from the Date column (yes, that's true, I cannot change the name of this column) in the Ticket table. It works fine in MySQL but when I run the query in PostgreSQL, it gives the following error message for both MONTH() and YEAR(). ``` ERROR: function month(date) does not exist HINT: No function matches the given name and argument types. You might need to add explicit type casts. ``` After researching a little bit I came across some other functions that will do the job in Postgres, but I am afraid those might not work in MySQL. I'm looking forward to any possible solution.
Well if you use an alias on the front of your field and use ***EXTRACT*** instead, the same query will work for both PostgreSQL and MySQL: ``` SELECT EXTRACT(MONTH FROM t.Date) AS MonthOfDate, EXTRACT(YEAR FROM t.Date) AS YearOfDate FROM Ticket t; ``` [**PostgreSQL SQL Fiddle**](http://sqlfiddle.com/#!15/4177e/2/0) [**MySQL SQL Fiddle**](http://sqlfiddle.com/#!2/4177e/1/0)
Because you are open to any possible solution I have two ideas. As long as you need to write plain SQL statements I would suggest a table prepopulated with dates and their corresponding attributes in columns. Sometimes I used a table like this: ``` ID THE_DATE THE_MONTH_OF_DATE THE_YEAR_OF_DATE ... ----------------------------------------------------------- 1 01/01/2014 1 2014 2 01/02/2014 1 2014 3 01/03/2014 1 2014 4 01/04/2014 1 2014 5 02/01/2014 2 2014 6 03/01/2014 3 2014 n xx/xx/xxxx x xxxx ``` This table, prepopulated with the dates that you'll need for your application, could easily be joined to your Ticket dates and then deliver the columns you want. A better solution would be to use a database abstraction layer/object relational mapper that translates your queries to the correct database dialect depending on the underlying database, e.g. Hibernate (Java).
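For completeness, SQLite (a third dialect) supports neither `MONTH()`/`YEAR()` nor `EXTRACT`; the sketch below, using Python's built-in `sqlite3` with invented rows, shows its `strftime` equivalent of the same extraction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Ticket (Date TEXT)")
conn.executemany("INSERT INTO Ticket VALUES (?)",
                 [("2011-09-15",), ("2012-11-03",)])
# SQLite's strftime plays the role of EXTRACT(MONTH/YEAR FROM ...)
rows = conn.execute("""
SELECT CAST(strftime('%m', Date) AS INTEGER) AS MonthOfDate,
       CAST(strftime('%Y', Date) AS INTEGER) AS YearOfDate
FROM Ticket
ORDER BY Date
""").fetchall()
print(rows)  # [(9, 2011), (11, 2012)]
```

This is one more argument for the abstraction-layer route: each engine spells the same extraction differently.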
Any alternative to MONTH() and YEAR() functions which are common to both MySQL and PostgreSQL
[ "", "mysql", "sql", "database", "postgresql", "" ]
I have 5000 rows of data, and I want to delete all the rows after the first 1000. How would I do this? ``` Delete from table where * limit 1000 ``` Is this correct?
SQL Server ``` WITH cte AS (SELECT *, row_number() OVER (ORDER BY some_column) AS rn FROM TABLE) DELETE FROM cte WHERE rn > 1000 ```
``` DELETE FROM TABLE WHERE ID NOT IN ( SELECT ID FROM TABLE LIMIT 1000 ) ``` This query will: 1 - Select THE FIRST 1000 records from your table 2 - Delete all the records that are not in that list of the first 1000 records Note that without an ORDER BY in the subquery, which rows count as the "first" 1000 is not guaranteed.
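Scaled down to 50 rows and 10 keepers, the NOT IN pattern can be tried with Python's built-in `sqlite3` (SQLite accepts LIMIT in a subquery; the explicit ORDER BY makes "first" well defined):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 51)])
# keep the first 10 ids, delete everything else
conn.execute("""
DELETE FROM t
WHERE id NOT IN (SELECT id FROM t ORDER BY id LIMIT 10)
""")
remaining = conn.execute("SELECT COUNT(*), MAX(id) FROM t").fetchone()
print(remaining)  # (10, 10)
```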
Delete All Rows After a certain number
[ "", "sql", "sql-server", "" ]
I have 2 tables joined by StudentID and ParkingID. My Table B has duplicate parking information. I'm looking to get StudentID, StudentName, ParkingSpace number and the count of duplicates. This is my first post so forgive me if I don't follow all the correct protocols here. I appreciate the help. Example: ``` Table A: StudentID StudentName ---- ------ 001 Mary 002 Jane 003 Peter 004 Smith 005 Kathy Table B: ParkingID ParkingSpace ----- ----- 001 25 001 25 002 18 003 74 004 22 005 31 005 31 005 31 005 31 005 31 ``` This is my goal. ``` StudentID StudentName ParkingSpace dupCount ---- ------ ------ ------ 001 Mary 25 2 005 Kathy 31 5 ```
Here is a solution for your problem. ``` select studentid, studentname, parkingspace , count(*) dupcount from tablea inner join tableb on tablea.studentid=tableb.parkingid group by studentid, studentname, parkingspace having count(*)>1 ``` We count the duplicates, and with `having count(*)>1` we show only the real duplicates. <http://sqlfiddle.com/#!2/29c2d/2>
**Test Data** ``` DECLARE @Table_1 TABLE (StudentID VARCHAR(100),StudentName VARCHAR(100)) INSERT INTO @Table_1 VALUES ('001','Mary'),('002','Jane'),('003','Peter'), ('004','Smith'),('005','Kathy') DECLARE @Table_2 TABLE (ParkingID VARCHAR(100),ParkingSpace INT) INSERT INTO @Table_2 VALUES ('001',25),('001',25),('002',18),('003',74),('004',22),('005',31), ('005',31),('005',31),('005',31),('005',31) ``` **Query** ``` SELECT T1.StudentID ,T1.StudentName ,T2.ParkingSpace ,COUNT(T2.ParkingSpace) AS Duplicates FROM @Table_1 T1 INNER JOIN @Table_2 T2 ON T1.StudentID = T2.ParkingID GROUP BY T1.StudentID ,T1.StudentName ,T2.ParkingSpace HAVING COUNT(T2.ParkingSpace) > 1 ``` **Result Set** ``` ╔═══════════╦═════════════╦══════════════╦════════════╗ β•‘ StudentID β•‘ StudentName β•‘ ParkingSpace β•‘ Duplicates β•‘ ╠═══════════╬═════════════╬══════════════╬════════════╣ β•‘ 001 β•‘ Mary β•‘ 25 β•‘ 2 β•‘ β•‘ 005 β•‘ Kathy β•‘ 31 β•‘ 5 β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β• ```
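The grouped join can also be checked end to end with Python's built-in `sqlite3`, using the sample data from the question (an illustration of the technique, not SQL Server itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (StudentID TEXT, StudentName TEXT);
CREATE TABLE b (ParkingID TEXT, ParkingSpace INTEGER);
INSERT INTO a VALUES ('001','Mary'),('002','Jane'),('003','Peter'),
                     ('004','Smith'),('005','Kathy');
INSERT INTO b VALUES ('001',25),('001',25),('002',18),('003',74),('004',22),
                     ('005',31),('005',31),('005',31),('005',31),('005',31);
""")
# join, group by student and space, keep only groups that repeat
rows = conn.execute("""
SELECT a.StudentID, a.StudentName, b.ParkingSpace, COUNT(*) AS dupCount
FROM a JOIN b ON a.StudentID = b.ParkingID
GROUP BY a.StudentID, a.StudentName, b.ParkingSpace
HAVING COUNT(*) > 1
ORDER BY a.StudentID
""").fetchall()
print(rows)  # [('001', 'Mary', 25, 2), ('005', 'Kathy', 31, 5)]
```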
SQL Query that joins one table where the second table has duplicates
[ "", "sql", "join", "duplicates", "" ]
``` ..... temp Varchar2 (20); e_name Varchar2 (255); ..... Begin e_name := e_name || temp; Dbms_Output.Put_Line('names: ' || e_name); Output result ------------- names: 'John', 'Sam', 'David', 'Sam', 'John', 'Alex' ``` How do I format my e\_name to remove the duplicate names so that I have the output result ``` required result ------------- names: 'John', 'Sam', 'David', 'Alex' ```
Well, you can modify your program like this: ``` Begin e_name := e_name || temp; SELECT listagg ( names, ',' ) within GROUP (ORDER BY rn ) INTO e_name FROM ( SELECT level rn, regexp_substr ( e_name, '[^,]+', 1, level ) names, row_number ( ) over ( partition BY regexp_substr ( e_name, '[^,]+', 1, level ) order by level ) rnn FROM dual CONNECT BY regexp_substr ( e_name, '[^,]+', 1, level ) IS NOT NULL ) WHERE rnn = 1; Dbms_Output.Put_Line('names: ' || e_name); ``` The innermost query converts the list into rows, and the outer queries then filter and build the string again.
Replace WM\_CONCAT with LISTAGG - I'm running Oracle 10g, LISTAGG is 11g: ``` SELECT wm_concat(ename) AS employees FROM emp_test WHERE deptno = 20 / Output - SMITH repeats twice: SMITH,JONES,SCOTT,ADAMS,FORD,SMITH SELECT wm_concat(distinct ename) AS employees FROM emp_test WHERE deptno = 20 / The distinct fixes the problem: ADAMS,FORD,JONES,SCOTT,SMITH ```
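If the string is available outside the database, the same first-occurrence deduplication is a few lines of Python (shown here only to make the intended semantics concrete - it matches the `ORDER BY rn` / `rnn = 1` logic in the accepted answer):

```python
def dedupe_list(s, sep=", "):
    # keep each name the first time it appears, drop later repeats
    seen, out = set(), []
    for part in s.split(","):
        name = part.strip()
        if name not in seen:
            seen.add(name)
            out.append(name)
    return sep.join(out)

names = "'John', 'Sam', 'David', 'Sam', 'John', 'Alex'"
print(dedupe_list(names))  # 'John', 'Sam', 'David', 'Alex'
```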
Oracle PL SQL remove duplicate data in string
[ "", "sql", "oracle", "format", "" ]
I am just starting to learn SQL. How do you add a condition to a statement? I am trying to filter the destination to 'BNA', which is the airport code. ``` SELECT CHARTER.CUS_CODE, CHARTER.DESTINATION "AIRPORT", CHARTER.CHAR_DATE, CHARTER.CHAR_DISTANCE, CHARTER.AC_NUMBER, FROM C.CHARTER ; WHERE DESTINATION = 'BNA' ; ``` Any hints in the right direction would be great.
The following is your query with the syntax corrected: ``` SELECT CHARTER.CUS_CODE, CHARTER.DESTINATION "AIRPORT", CHARTER.CHAR_DATE, CHARTER.CHAR_DISTANCE, CHARTER.AC_NUMBER FROM CHARTER WHERE DESTINATION = 'BNA'; ``` 1. The semicolon goes at the end only. 2. Get rid of "c." from the table name in your from clause. You might have been thinking of giving it an alias of "c", which, if that's the case, you would put after the table name (and then use it as a prefix for each field).
``` SELECT CHARTER.CUS_CODE, CHARTER.DESTINATION "AIRPORT", CHARTER.CHAR_DATE, CHARTER.CHAR_DISTANCE, CHARTER.AC_NUMBER FROM CHARTER WHERE DESTINATION = 'BNA' ; ``` The `;` character is a statement terminator; you only need one per SQL statement (the trailing comma before FROM and the stray "C." prefix also had to go).
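A minimal runnable check of the corrected shape, using Python's built-in `sqlite3` with invented sample rows (an illustration of WHERE placement, not the original database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE charter (cus_code TEXT, destination TEXT);
INSERT INTO charter VALUES ('C1','BNA'),('C2','LAX'),('C3','BNA');
""")
# the WHERE clause comes after FROM; a single semicolon ends the statement
rows = conn.execute(
    "SELECT cus_code, destination FROM charter "
    "WHERE destination = 'BNA' ORDER BY cus_code;"
).fetchall()
print(rows)  # [('C1', 'BNA'), ('C3', 'BNA')]
```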
SQL- Adding a condition
[ "", "sql", "conditional-statements", "" ]
In Microsoft SQL Server 2012, I need to remove adjacent duplicate rows in the `Flow` column below, and just keep the first ones (marked `*` to illustrate). After that, I need to take the time difference between the `1`s and `0`s for all rows and get the total cumulative time. ``` Record Number Downhole Time Flow ------------------------------------------- 0 03/27/2013 19:23:48.582 1 * 58 03/27/2013 19:28:12.606 1 137 03/27/2013 19:32:16.070 0 * 143 03/27/2013 19:33:59.070 0 255 03/27/2013 19:40:14.070 0 272 03/29/2013 14:43:55.071 1 * 289 03/29/2013 14:45:44.070 1 293 03/29/2013 14:45:59.071 0 * 294 03/29/2013 14:46:10.070 0 ``` Result with the adjacent records removed: ``` Record Number Downhole Time Flow ------------------------------------------- 0 03/27/2013 19:23:48.582 1 * 137 03/27/2013 19:32:16.070 0 * 272 03/29/2013 14:43:55.071 1 * 293 03/29/2013 14:45:59.071 0 * ``` ***Final desired result*** ``` cumulative time difference = (03/27/2013 19:32:16.070 - 03/27/2013 19:23:48.582) + (03/29/2013 14:45:59.071 - 03/29/2013 14:43:55.071) + if there are more rows. ```
I believe this does the job you requested: ``` WITH FlowIntervals AS ( SELECT FromTime = Min(D.[Downhole Time]), X.ToTime FROM dbo.vLog D OUTER APPLY ( SELECT TOP 1 ToTime = D2.[Downhole Time] FROM dbo.vLog D2 WHERE D.[Downhole Time] < D2.[Downhole Time] AND D.[Flow] <> D2.[Flow] ORDER BY D2.[Downhole Time] ) X WHERE D.Flow = 1 GROUP BY X.ToTime ) SELECT Sum(DateDiff(ms, FromTime, IsNull(ToTime, GetDate())) / 1000.0) FROM FlowIntervals ; ``` This query works in SQL 2005 and up. It will perform decently, but requires a self-join of the vLog table and so it may not perform as well as a solution using `LEAD` or `LAG`. If you are looking for the absolute best possible performance, this query may do the trick: ``` WITH Ranks AS ( SELECT Grp = Row_Number() OVER (ORDER BY [Downhole Time]) - Row_Number() OVER (PARTITION BY Flow ORDER BY [Downhole Time]), [Downhole Time], Flow FROM dbo.vLog ), Ranges AS ( SELECT Result = Row_Number() OVER (ORDER BY Min(R.[Downhole Time]), X.Num) / 2, [Downhole Time] = Min(R.[Downhole Time]), R.Flow, X.Num FROM Ranks R CROSS JOIN (SELECT 1 UNION ALL SELECT 2) X (Num) GROUP BY R.Flow, R.Grp, X.Num ), FlowStates AS ( SELECT FromTime = Min([Downhole Time]), ToTime = CASE WHEN Count(*) = 1 THEN NULL ELSE Max([Downhole Time]) END, Flow = IsNull(Min(CASE WHEN Num = 2 THEN Flow ELSE NULL END), Min(Flow)) FROM Ranges R WHERE Result > 0 GROUP BY Result ) SELECT ElapsedSeconds = Sum(DateDiff(ms, FromTime, IsNull(ToTime, GetDate())) / 1000.0) FROM FlowStates WHERE Flow = 1 ; ``` Using your sample data, it returns `631.486000` (seconds). 
If you select just the rows from the `FlowStates` CTE, you get the following result: ``` FromTime ToTime Flow ----------------------- ----------------------- ---- 2013-03-27 19:23:48.583 2013-03-27 19:32:16.070 1 2013-03-27 19:32:16.070 2013-03-29 14:43:55.070 0 2013-03-29 14:43:55.070 2013-03-29 14:45:59.070 1 2013-03-29 14:45:59.070 NULL 0 ``` This query works in SQL 2005 and up, and should stack up very well performance-wise against any other solution, including one using `LEAD` or `LAG` (which this simulates in a sneaky way). I'm not promising it will win, but it could do very well and might win after all. See [this answer to a similar question](https://stackoverflow.com/questions/13614431/how-can-i-detect-and-bound-changes-between-row-values-in-a-sql-table/13618019#13618019) for details on what's going on in the query. Finally, for a complete solution, here's a Lag/Lead version for SQL Server: ``` WITH StateChanges AS ( SELECT [Downhole Time], Flow, Lag(Flow) OVER (ORDER BY [Downhole Time]) PrevFlow FROM dbo.vLog ), Durations AS ( SELECT [Downhole Time], Lead([Downhole Time]) OVER (ORDER BY [Downhole Time]) NextTime, Flow FROM StateChanges WHERE Flow <> PrevFlow OR PrevFlow IS NULL ) SELECT ElapsedTime = Sum(DateDiff(ms, [Downhole Time], NextTime) / 1000.0) FROM Durations WHERE Flow = 1 ; ``` This query requires SQL Server 2012 or up. It calculates the state changes (did the flow change?), then selects those where the flow did change, then finally calculates the duration for those where the flow changed from 0 to 1 (the start of a period of Flow). I'd be interested to see your actual I/O and time results for this query compared to the others. If you look at just the execution plans, this query will seem to win--but it may not be such a clear winner on actual performance statistics.
Not sure what kind of database you are using. Here is a solution with analytical functions and Oracle: ``` SELECT un, mytime, flow, lead (mytime) OVER (ORDER BY UN) lead_time, (lead (mytime) OVER (ORDER BY UN) - mytime)*24*60 minutes FROM ( SELECT un, mytime, flow, LAG (flow) OVER (ORDER BY UN) lag_val FROM test ORDER BY un) a WHERE a.flow != NVL (a.lag_val, 9999) ``` The inner select gets the value of the previous flow with the LAG analytical function. The where clause of the outer select filters the 'duplicate' flows (leaves only the first occurrence of the change). The outer select also calculates the difference in the times (in minutes) using the LEAD analytical function. This will be great performance-wise despite the amount of data you have. Let me know what kind of database you are using - there are analytical function implementations (or workarounds) for most databases... This will work only in Oracle.
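Outside the database, the same two-step logic (keep only the rows where Flow changes - the LAG step - then pair each kept 1 with the next change - the LEAD step) can be checked against the sample data in plain Python:

```python
from datetime import datetime

log = [
    ("2013-03-27 19:23:48.582", 1),
    ("2013-03-27 19:28:12.606", 1),
    ("2013-03-27 19:32:16.070", 0),
    ("2013-03-27 19:33:59.070", 0),
    ("2013-03-27 19:40:14.070", 0),
    ("2013-03-29 14:43:55.071", 1),
    ("2013-03-29 14:45:44.070", 1),
    ("2013-03-29 14:45:59.071", 0),
    ("2013-03-29 14:46:10.070", 0),
]
fmt = "%Y-%m-%d %H:%M:%S.%f"
# the LAG step: keep only rows whose Flow differs from the previous row
changes, prev = [], None
for ts, flow in log:
    if flow != prev:
        changes.append((datetime.strptime(ts, fmt), flow))
        prev = flow
# the LEAD step: each kept 1-row lasts until the next change
total_seconds = sum(
    (nxt[0] - cur[0]).total_seconds()
    for cur, nxt in zip(changes, changes[1:])
    if cur[1] == 1
)
print(round(total_seconds, 3))  # 631.488
```

The tiny difference from the 631.486000 quoted above comes from SQL Server's internal datetime rounding of the millisecond values.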
In SQL, remove adjacent duplicate rows and perform time calculation
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I have some code records in an Oracle DB, for example: A00105XYZ CC000036QWE How do I write the criteria so that if users input A105XYZ or CC36QWE, these records can still be found?
You'll probably want to use a regular expression: ``` SELECT STR, REGEXP_REPLACE(STR,'([^[:digit:]]*)(0*)(.*)','\1\3') NEW_STR FROM (SELECT 'A00105XYZ' STR FROM DUAL UNION SELECT 'CC000036QWE' STR FROM DUAL UNION SELECT 'FD403T' STR FROM DUAL UNION SELECT '000000010' STR FROM DUAL) ╔═════════════╦═════════╗ β•‘ STR β•‘ NEW_STR β•‘ ╠═════════════╬═════════╣ β•‘ 000000010 β•‘ 10 β•‘ β•‘ A00105XYZ β•‘ A105XYZ β•‘ β•‘ CC000036QWE β•‘ CC36QWE β•‘ β•‘ FD403T β•‘ FD403T β•‘ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β• ```
One way would be to replace the `0`s on both sides: ``` where replace(code, '0', '') = replace (:var, '0', '') ``` Note that `A00105XYZ` will also match `A15XYZ`.
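The regular-expression answer translates almost verbatim to Python's `re` module (`\D` standing in for `[^[:digit:]]`), which makes the transformation easy to test outside Oracle:

```python
import re

def normalize(code):
    # group 1: non-digit prefix, group 2: run of zeros (dropped), group 3: rest
    return re.sub(r'^(\D*)(0*)(.*)$', r'\1\3', code)

for s in ("A00105XYZ", "CC000036QWE", "FD403T", "000000010"):
    print(s, "->", normalize(s))
# A00105XYZ -> A105XYZ, CC000036QWE -> CC36QWE,
# FD403T -> FD403T, 000000010 -> 10
```

In a WHERE clause you would then compare the normalized stored value against the normalized user input, so A105XYZ matches A00105XYZ.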
Oracle, how to match leading zero number in varchar?
[ "", "sql", "oracle", "varchar", "" ]