| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Trying to understand why this query produces results in descending order
```
SELECT
DateDiff(minute, convert(datetime, SUBSTRING(MSH_07_DateTime, 1, 8) + ' ' + SUBSTRING(MSH_07_DateTime, 9, 2) + ':' + SUBSTRING(MSH_07_DateTime, 11, 2), 100), GetDate())
FROM
EPDBHL7.DBO.[HL7MSH]
WHERE
MessageStatus <> 'ok'
```
|
Unless you use an `ORDER BY` you can't guarantee any result order. So you are probably getting the order based on how the data is stored or on how the planner retrieves the data through an index.
just add
```
where MessageStatus <> 'ok'
order by DateDiff(minute, convert(datetime, SUBSTRING(MSH_07_DateTime, 1, 8) + ' ' + SUBSTRING(MSH_07_DateTime, 9, 2) + ':' + SUBSTRING(MSH_07_DateTime, 11, 2), 100), GetDate())
or
order by 1
```
to order by the first column
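The same principle can be demonstrated outside SQL Server; here is a minimal sketch using Python's bundled SQLite with a hypothetical one-column table, showing that `ORDER BY 1` sorts on the first column of the select list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE msh (minutes_old INTEGER)")
conn.executemany("INSERT INTO msh VALUES (?)", [(42,), (7,), (90,)])

# "ORDER BY 1" sorts by the first column in the SELECT list.
rows = conn.execute("SELECT minutes_old FROM msh ORDER BY 1").fetchall()
print(rows)  # [(7,), (42,), (90,)]
```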
|
If you are not specifying the ORDER BY clause then the order in which you are going to get the result is unspecified.
> If you want to rely on the order of a query, ALWAYS add ORDER BY
|
Why does this SQL query return data in descending order?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am working with a SQL Server database in order to store a very long Unicode string. The field is of type 'ntext', which theoretically should be limited to 2^30 Unicode characters.
From [MSDN documentation](https://msdn.microsoft.com/en-us/library/ms187993.aspx):
> ntext
>
> Variable-length Unicode data with a maximum string length of 2^30 - 1 (1,073,741,823) bytes. Storage size, in bytes, is two times the string length that is entered. The ISO synonym for ntext is national
> text.
*I made this test:*
1. Generate 50,000 characters string.
2. Run an Update SQL statement
   ```
   UPDATE [table]
   SET Response='... 50,000 character string...'
   WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
   ```
3. Check the result - what was actually stored in the field in the end.
The result was that the field [Response] contained **only 43,679 characters**. All the characters at the end of the string were thrown away.
Why does this happen? How can I fix it?
If this really is the capacity limit of this data type (ntext), which other data type can store a longer Unicode string?
|
Based on what I've seen, you may only be able to *copy* 43,679 characters. All the characters are being stored; they're in the database (verify this with `SELECT LEN(Response) FROM [table] WHERE ...`), but SSMS has trouble copying more than that when you go to look at the full data.
|
The `NTEXT` datatype is deprecated; you should use `NVARCHAR(MAX)` instead.
I see two possible explanations:
1. The `ODBC` driver you use to connect to the database truncates the parameter value when it is too long (try using `SSMS`)
2. You write that you generate your input string. I suspect you generate `CHAR(0)`, which is the **[`null literal`](https://stackoverflow.com/questions/2828333/what-is-the-null-character-literal-in-tsql)**
If the second is your case, make sure you cannot generate the `\0` char.
**EDIT:**
I don't know how you check the length, but keep in mind that `LEN` does not count trailing whitespace:
```
SELECT LEN('aa ') AS length -- 2
,DATALENGTH('aa ') AS datalength -- 7
```
The last possibility I see is that you do something like:
```
SELECT 'aa aaaa'
-- result in SSMS: `aa aaaa`, so when you count it you lose all the multiple whitespaces
```
Check whether the query below returns **100k**:
```
SELECT DATALENGTH(ntext_column)
```
|
SQL Server NText field limited to 43,679 characters?
|
[
"",
"sql",
"sql-server",
"sql-update",
"ntext",
""
] |
I am trying to Join 3 tables based on specific columns.
data\_customer(main Table), data\_customer\_Address(Cust\_id as foreign key) and Data\_customer\_contacts (Cust\_id as foreign key)
I have successfully achieved the result from below query:
```
SELECT cst.cust_companyName AS 'Company Name',
cnt.cntct_fname AS 'Contact First Name',
cnt.cntct_lName AS 'Contact Last Name',
cnt.cntct_mainPhone AS 'Main Phn Number',
cnt.cntct_Mobile AS 'Mobile Number',
cst.cust_mainEmail AS 'Main Email',
cnt.cntct_email AS 'Contact Email',
adr.addressLine1 AS 'Adress line 1',
adr.addressLine2 AS 'Address Line 2',
adr.City AS 'City',
adr.State AS 'State',
adr.pinZip AS 'Pin/Zip Code'
FROM data_customer AS cst,
data_customer_Address AS adr,
data_customer_contacts AS cnt
WHERE cst.[cust_id]='2015Q4'
AND adr.[cust_id] ='2015Q4'
AND cnt.[cust_id]='2015Q4';
```
However, Cust\_id will be passed into the query dynamically. If I fail to pass cust\_id in any single place, I will get a Cartesian product. I tried other ways but could not shorten the query.
Kindly suggest: is there any way I can improve the query or its performance?
Note: I am using SQLite on Windows.
|
You need to learn proper `join` syntax. Then you won't have such problems with your query.
Simple rule: *never* use commas in the `FROM` clause; *always* use explicit `JOIN`. So your query should look like:
```
SELECT . .
FROM data_customer cst JOIN
data_customer_Address adr
ON cst.cust_id = adr.cust_id JOIN
data_customer_contacts cnt
ON cst.cust_id = cnt.cust_id
WHERE cst.cust_id = '2015Q4';
```
Lo and behold. You will never get a Cartesian product by leaving out a parameter. *And* you only have to specify the customer id once.
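The "never use commas" rule is easy to verify; below is a minimal sketch using Python's bundled SQLite (table and column names are made up), showing that a comma join with no predicate multiplies the rows while an explicit `JOIN ... ON` keeps only the matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id TEXT);
CREATE TABLE address  (cust_id TEXT, city TEXT);
INSERT INTO customer VALUES ('A'), ('B');
INSERT INTO address  VALUES ('A', 'Oslo'), ('B', 'Pune');
""")

# A comma join with no join predicate is a Cartesian product: 2 x 2 = 4 rows.
cartesian = conn.execute("SELECT * FROM customer c, address a").fetchall()

# An explicit JOIN ... ON always carries its predicate: 2 matched rows.
joined = conn.execute(
    "SELECT * FROM customer c JOIN address a ON c.cust_id = a.cust_id").fetchall()

print(len(cartesian), len(joined))  # 4 2
```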
|
```
select cst.cust_companyName as 'Company Name',
cnt.cntct_fname as 'Contact First Name',
cnt.cntct_lName as 'Contact Last Name',
cnt.cntct_mainPhone as 'Main Phn Number',
cnt.cntct_Mobile as 'Mobile Number',
cst.cust_mainEmail as 'Main Email',
cnt.cntct_email as 'Contact Email',
adr.addressLine1 as 'Adress line 1',
adr.addressLine2 as 'Address Line 2',
adr.City as 'City', adr.State as 'State',
adr.pinZip as 'Pin/Zip Code'
from data_customer as cst,
data_customer_Address as adr,
data_customer_contacts as cnt
where
cst.[cust_id]=adr.[cust_id] and cst.[cust_id]=cnt.[cust_id]
and cst.[cust_id]='2015Q4';
```
The SQL parser will usually first fetch the records of the table `cst` through an index (if it has one) or a full scan, then run two nested loops to inner join the tables `adr` and `cnt`. In both of those loops, the table `cst` will be the "outer table".
|
SQL Join Query refactoring
|
[
"",
"sql",
"plsql",
"sqlite",
""
] |
I am using the AdventureWorks database and SQL Server 2012 T-SQL Recipes book and got myself into trouble on the following example :
I have to check the SalesQuota in both 2007 and 2008 from Sales.SalesPersonQuotaHistory.
```
SELECT sp.BusinessEntityID
, SUM(s2008.SalesQuota) AS '2008'
, SUM(S2007.SalesQuota) AS '2007'
FROM Sales.SalesPerson sp
LEFT OUTER JOIN Sales.SalesPersonQuotaHistory s2008
ON sp.BusinessEntityID = s2008.BusinessEntityID
AND YEAR(s2008.QuotaDate) = 2008
LEFT OUTER JOIN Sales.SalesPersonQuotaHistory s2007
ON sp.BusinessEntityID = s2007.BusinessEntityID
AND YEAR(s2007.QuotaDate) = 2007
GROUP BY sp.BusinessEntityID
```
First results are:
```
BusinessEntityID 2008 2007
---------------- --------------------- ---------------------
274 1084000.00 1088000.00
275 6872000.00 9432000.00
276 8072000.00 9364000.00
277 6644000.00 8700000.00
```
Just like the book says.
But then I try to get the 2008 SalesQuota with the following query:
```
SELECT sp.BusinessEntityID,
SUM(spqh.SalesQuota) AS '2008'
FROM Sales.SalesPerson sp
LEFT JOIN Sales.SalesPersonQuotaHistory spqh
ON sp.BusinessEntityID = spqh.BusinessEntityID
AND YEAR(spqh.QuotaDate) = 2008
GROUP BY sp.BusinessEntityID
```
and got this:
```
BusinessEntityID 2008
---------------- ---------------------
274 271000.00
275 1718000.00
276 2018000.00
277 1661000.00
```
What am I doing wrong here? I think I miss something on those LEFT JOINs but I can't figure out what.
**This one gives the same result:**
```
SELECT BusinessEntityID
, SUM(SalesQuota) AS '2008'
FROM Sales.SalesPersonQuotaHistory
WHERE YEAR(QuotaDate) = 2008
GROUP BY BusinessEntityID
BusinessEntityID 2008
---------------- ---------------------
274 271000.00
275 1718000.00
276 2018000.00
```
|
Your first query is probably creating a cartesian product of the results. Instead, I would use conditional aggregation to get both values (which should match your second query):
```
SELECT sp.BusinessEntityID,
SUM(CASE WHEN Year(spqh.QuotaDate) = 2007 then spqh.SalesQuota end) AS '2007',
SUM(CASE WHEN Year(spqh.QuotaDate) = 2008 then spqh.SalesQuota end) AS '2008'
FROM Sales.SalesPerson sp
LEFT JOIN Sales.SalesPersonQuotaHistory spqh
ON sp.BusinessEntityID = spqh.BusinessEntityID
GROUP BY sp.BusinessEntityID
```
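The conditional-aggregation trick can be tried standalone (a sketch via Python's bundled SQLite, with a cut-down, made-up quota table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quota (entity_id INTEGER, year INTEGER, q INTEGER)")
conn.executemany("INSERT INTO quota VALUES (?, ?, ?)", [
    (274, 2007, 500), (274, 2007, 500), (274, 2008, 300),
])

# One pass over the table; each SUM only "sees" the rows its CASE lets through,
# so there is no join fan-out to inflate the totals.
rows = conn.execute("""
    SELECT entity_id,
           SUM(CASE WHEN year = 2007 THEN q END) AS q2007,
           SUM(CASE WHEN year = 2008 THEN q END) AS q2008
    FROM quota
    GROUP BY entity_id
""").fetchall()
print(rows)  # [(274, 1000, 300)]
```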
|
If you aren't returning any columns from the `Sales.SalesPerson` table, you can exclude it and use `pivot` to get your desired results:
```
select
BusinessEntityID,
[2008],
[2007]
from (
select
BusinessEntityID,
year(QuotaDate) as SalesQuotaYear,
sum(SalesQuota) as SalesQuota
from Sales.SalesPersonQuotaHistory
where year(QuotaDate) in(2007,2008)
group by BusinessEntityID,
year(QuotaDate)
) as t
pivot (
sum(SalesQuota)
for SalesQuotaYear in([2007],[2008])
) as p
```
|
Table subsets comparison
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
""
] |
I have a table with a varchar(max) column. I would like to know if it's possible to get all rows that contain any substring from a list.
I know that for one value I can use `like 'AAA%'`, but I don't know if `like` has any way to express something like `WHERE IN ()`.
Something like this:
```
select * from TableA where
TableA.Field1 contains any (select Fild1 from TableB where field2 > 5);
```
Where TableA.Field1 and TableB.Fild1 are varchar(max).
Thank you so much.
|
SQL Server includes the `CONTAINS` full-text predicate (it requires a full-text index on the searched column), so something like this may work:
```
SELECT *
FROM TableA
JOIN TableB ON CONTAINS(TableA.Field1, TableB.Fild1) AND TableB.field2 > 5
```
You may need to adjust for your requirements.
Here is the documentation on contains
<https://msdn.microsoft.com/en-us/library/ms187787.aspx>
|
```
select * from TableA
where exists (select 1 from TableB
where TableA.Field1 like '%' + TableB.Fild1 + '%'
and TableB.field2 > 5)
```
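This `EXISTS` / `LIKE` pattern can be sketched in any engine; below is a runnable example via Python's bundled SQLite (note that standard SQL concatenates with `||` where T-SQL uses `+`, and the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (field1 TEXT);
CREATE TABLE b (fild1 TEXT, field2 INTEGER);
INSERT INTO a VALUES ('AAA-123'), ('ZZZ');
INSERT INTO b VALUES ('AAA', 9), ('ZZZ', 1);
""")

# Keep a row of "a" if any qualifying row of "b" is a substring of it.
rows = conn.execute("""
    SELECT field1 FROM a
    WHERE EXISTS (SELECT 1 FROM b
                  WHERE a.field1 LIKE '%' || b.fild1 || '%'
                    AND b.field2 > 5)
""").fetchall()
print(rows)  # [('AAA-123',)]
```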
|
How can I get any row which substring is any?
|
[
"",
"sql",
"sql-server",
""
] |
I would like to insert into one table from three different tables,
but while the first statement below works, the next ones do not.
What should I do?
```
SELECT * into NewTable from ta where farm like'%aa';
SELECT * into NewTable from tb where farm like'%aa';
SELECT * into NewTable from tc where farm like'%aa';
```
|
Try this:
```
select * into NewTable from (
SELECT * from ta where farm like'%aa'
Union all
SELECT * from tb where farm like'%aa'
Union all
SELECT * from tc where farm like'%aa'
)a;
```
|
The query `select * into NewTable` in fact **creates** `NewTable` according to the result of the select query and then inserts those results.
So, in order to insert into the now-existing table after the first query, change the second and third ones to
```
insert into NewTable
select * from tb ...
insert into NewTable
select * from tc ...
```
Notice: all tables in this case should have the same structure.
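SQLite has no `SELECT ... INTO`, but the same create-then-insert distinction can be sketched with its `CREATE TABLE ... AS SELECT` (a minimal example via Python's sqlite3; table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ta (farm TEXT);
CREATE TABLE tb (farm TEXT);
INSERT INTO ta VALUES ('one-aa');
INSERT INTO tb VALUES ('two-aa');
""")

# SQLite's analogue of SELECT ... INTO: CREATE TABLE AS creates the target.
conn.execute("CREATE TABLE NewTable AS SELECT * FROM ta WHERE farm LIKE '%aa'")

# Once the table exists, further loads must use INSERT INTO ... SELECT.
conn.execute("INSERT INTO NewTable SELECT * FROM tb WHERE farm LIKE '%aa'")

count = conn.execute("SELECT COUNT(*) FROM NewTable").fetchone()[0]
print(count)  # 2
```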
|
'insert into' from more source-tables
|
[
"",
"sql",
"sql-server",
"insert",
""
] |
I came upon a rather interesting situation where I need guidance to help me design my Database schema that follows "best practises" or is done "the recommended way".
My dilemma is as follows:
I have an `Event` table with basic properties such as Id, Name, Date etc. It needs address info, so the most straightforward way would be to extend the table with fields such as street, city, country etc. But I also have a User table that needs to store address data as well. So the right thing to do would be to create a third table called Address and set up relationships between Address/User and Address/Event. This is the tricky part: which table should hold the primary key/foreign key?
1. One way to do is to extend table `Address` with columns such as `EventId` and `UserId`. So tables `Event` and `User` would be the "parent" table and address would be the "child" table. The `Address` table would hold foreign keys to User/Event's Id primary keys.
```
|EventTable:| |UserTable: | |AddressTable|
| | | | | |
|EventId PK | |UserId PK | |AddresId PK |
|Name | |Name | |Street |
|OtherColumn| |OtherColumn| |City |
|EventId FK |
|UserId FK |
```
One of two drawbacks I see in such a design is that every row of `AddressTable` contains an extra, unnecessary Null field: if an address row specifies a user address then the `EventId` column is Null, and likewise if it specifies an event address then the `UserId` column is Null.
The second drawback is that any time I add a new table that also needs to be connected to the address table, I need to add another column to `Address` referencing the new table's primary key.
2. Second possibility is to extend tables `Event` and `User` with `Address`'s primary key column so they would be the foreign key in the relationship.
```
|EventTable:| |UserTable: | |AddressTable|
| | | | | |
|EventId PK | |UserId PK | |AddresId PK |
|Name | |Name | |Street |
|OtherColumn| |OtherColumn| |City |
|AddressId FK| |AddressId FK|
```
Everything would be perfect with this solution except that I have doubts now about enabling cascading deletes on the foreign keys. To me the natural way of thinking is that when I remove an Event or a User from the database, I'd like its address removed as well. But in this design the Address table is the parent and User/Event are the children, so when I remove an Address entry with cascading deletes enabled I would also remove the Event/User entry. Logically that doesn't make much sense to me; it should be the other way around, and this is the problem I'm unable to solve. Perhaps the second design is acceptable and I'm just confusing myself for no reason.
Ideally I'd love to come up with a design where, with cascade deletes enabled, I first remove an Event or User and then its address is deleted automatically.
I know there is a third option of join tables, but that's only for many-to-many relationships, and what if a User/Event should contain just a single Address?
Thanks!
|
For the reasons you give option 1 is a no-go.
With option 2 you should not worry about unused Address records. In fact, they might turn out to be useful during the creation of new Events or Users, as you could provide a search facility over your address "database". Taken further, you could even decide to pre-fill the Address table with data downloaded from some address provider; then the search facility would become very useful.
Once you plan to have a big address list, you might want to break an Address up into its own hierarchy: a street belongs to a city, a city belongs to a country. Of course, in practice a street name can be shared by several cities, and you could decide to set up an n-to-n relationship there, or you could opt for n-to-1, where you have some (but in practice very little) duplication of streets.
As you can see, this can be taken very far, and will lead to more effort in writing code around it to manage it all.
If on the other hand you are not interested in keeping unused addresses, you could manage this via delete triggers on Event and User tables, which would check if the related address has become orphaned, and if so, deletes it. However, this should not be that important to happen at that same time, with the risk that your delete operation might take longer to execute or even fail, affecting the user experience. It is better to do this asynchronously and let a scheduled job do the clean-up once a week or so.
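The delete-trigger idea can be sketched concretely; the snippet below (Python's bundled SQLite, with hypothetical `app_user`/`app_event`/`address` tables) removes an address once its last referencing row is deleted, though as noted a scheduled asynchronous clean-up is often preferable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE address   (id INTEGER PRIMARY KEY);
CREATE TABLE app_user  (id INTEGER PRIMARY KEY, address_id INTEGER);
CREATE TABLE app_event (id INTEGER PRIMARY KEY, address_id INTEGER);
INSERT INTO address  VALUES (10);
INSERT INTO app_user VALUES (1, 10);

-- After a user is deleted, drop its address if nothing references it anymore.
CREATE TRIGGER user_address_cleanup AFTER DELETE ON app_user
BEGIN
    DELETE FROM address
    WHERE id = OLD.address_id
      AND NOT EXISTS (SELECT 1 FROM app_user  WHERE address_id = OLD.address_id)
      AND NOT EXISTS (SELECT 1 FROM app_event WHERE address_id = OLD.address_id);
END;
""")

conn.execute("DELETE FROM app_user WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM address").fetchone()[0]
print(remaining)  # 0
```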
|
Addresses are tricky, indeed.
First, address is an independent thing - its existence is beyond your control, rather it exists as long as its local council wants it to. Another important thing - addresses tend to be reused again and again, especially if we are talking about large events or short term rent accommodations.
Considering all that, it is clear that option 1 is just plain wrong and does not correlate to reality. The second is better, but still misses quite a lot, though in this case it depends more on how far you are willing to go.
For example, if you want to store history of address changes for any kind of entity, you will need history table(s) - again, there are several possible designs. You can make a single address history table with fields like:
```
AddressId (PK)
TenantId (PK)
StartDate (PK)
EndDate
```
, where `TenantId` will reference a supertype table which will be made a parent for all entities that can use addresses. Such a table (not the supertype one) will also help in preventing (or allowing?) of simultaneous use of the same address by more than 1 tenant at any given time.
And this is just the tip of the iceberg :)
|
What is the best solution to design sql table relationship for a table that will be used by other tables
|
[
"",
"sql",
"sql-server",
"entity-framework",
"database-design",
"table-relationships",
""
] |
How can I query a database table that has the columns `id` and `name` so that the values of the `name` column are returned as a single comma-separated list?
```
Id Name
1 name1
2 name2
3 name3
4 name4
Result: name1,name2,name3,name4
```
Currently my query looks like this
```
SELECT name FROM banned
```
|
The [`group_concat`](https://dev.mysql.com/doc/refman/5.6/en/group-by-functions.html#function_group-concat) aggregate function should do the trick:
```
SELECT GROUP_CONCAT(name ORDER BY name) AS name
FROM banned
```
EDIT:
To answer the question in the comment, you could add a `separator` clause to replace the comma in the result:
```
SELECT GROUP_CONCAT(name ORDER BY name SEPARATOR '...') AS name
FROM banned
```
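SQLite ships a `group_concat` as well, taking the separator as a second argument, so the idea can be tried locally (a sketch via Python's sqlite3; ordering is done in a subquery because older SQLite versions do not accept `ORDER BY` inside the aggregate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE banned (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO banned VALUES (?, ?)",
                 [(1, "name1"), (2, "name2"), (3, "name3"), (4, "name4")])

# group_concat(expr, separator) joins all values into one string.
row = conn.execute(
    "SELECT group_concat(name, ',') FROM (SELECT name FROM banned ORDER BY name)"
).fetchone()
print(row[0])  # name1,name2,name3,name4
```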
|
```
SELECT GROUP_CONCAT(
    DISTINCT Name
    ORDER BY Name
    SEPARATOR ',' )
FROM banned;
```
|
Column as list with comma between
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have some SQL that selects columns from tables as normal. However, one of my columns (u.cmc\_rti\_type) is a number and I want it to output text instead. I assume I need some sort of if statement. Here is my SQL:
```
SELECT '', s.student_number, s.lastfirst, s.grade_level, t.TEACHER, u.cmc_rti_tier, u.cmc_rti_type
FROM students s
JOIN u_def_ext_students u
ON u.studentsdcid = s.dcid
LEFT JOIN cmc_homeroom_teacher t
ON s.dcid = t.dcid
WHERE u.cmc_rti_tier <> 0
ORDER BY s.lastfirst
```
if u.cmc\_rti\_type is 1 then I want it to output 'Reading'
if u.cmc\_rti\_type is 2 then I want it to output 'Math'
if u.cmc\_rti\_type is 3 then I want it to output 'Enrichment'
if u.cmc\_rti\_type is 4 then I want it to output 'Both Math & Reading'
|
You can use `CASE`:
```
SELECT '', s.student_number, s.lastfirst, s.grade_level, t.TEACHER,
u.cmc_rti_tier,
CASE u.cmc_rti_type
WHEN 1 THEN 'Reading'
WHEN 2 THEN 'Math'
WHEN 3 THEN 'Enrichment'
WHEN 4 THEN 'Both Math & Reading'
ELSE NULL
END AS cmc_rti_type
FROM students s
JOIN u_def_ext_students u
ON u.studentsdcid = s.dcid
LEFT JOIN cmc_homeroom_teacher t
ON s.dcid = t.dcid
WHERE u.cmc_rti_tier <> 0
ORDER BY s.lastfirst
```
Or if you are using **SQL Sever 2012+** you can use `CHOOSE`:
```
CHOOSE(u.cmc_rti_type, 'Reading', 'Math', 'Enrichment', 'Both Math & Reading') AS cmc_rti_type
```
`LiveDemo`
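The `CASE` mapping is portable; here is a runnable sketch of the same value-to-text translation using Python's bundled SQLite (only the `cmc_rti_type` column is kept for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE u (cmc_rti_type INTEGER)")
conn.executemany("INSERT INTO u VALUES (?)", [(1,), (4,)])

# Simple CASE: compares cmc_rti_type against each WHEN value in turn.
rows = conn.execute("""
    SELECT CASE cmc_rti_type
             WHEN 1 THEN 'Reading'
             WHEN 2 THEN 'Math'
             WHEN 3 THEN 'Enrichment'
             WHEN 4 THEN 'Both Math & Reading'
           END
    FROM u
""").fetchall()
print(rows)  # [('Reading',), ('Both Math & Reading',)]
```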
|
You can use `CASE` in following:
```
SELECT '', s.student_number, s.lastfirst, s.grade_level, t.TEACHER, u.cmc_rti_tier,
CASE u.cmc_rti_type WHEN 1 THEN 'Reading'
WHEN 2 THEN 'Math'
WHEN 3 THEN 'Enrichment'
WHEN 4 THEN 'Both Math & Reading'
END AS Etc
FROM students s
JOIN u_def_ext_students u
ON u.studentsdcid = s.dcid
LEFT JOIN cmc_homeroom_teacher t
ON s.dcid = t.dcid
WHERE u.cmc_rti_tier <> 0
ORDER BY s.lastfirst
```
|
Translate SQL column result into text
|
[
"",
"sql",
"asp.net",
"vb.net",
"if-statement",
""
] |
I have 5 tables. I want to find the unique discount values of all 5 tables using MYSQL.
The form of the tables is:
```
table1(userid,discount)
table2(userid,discount)
table3(userid,discount)
table4(userid,discount)
table5(userid,discount)
```
|
You can use `UNION` to achieve this.
The default behavior for `UNION` is that duplicate rows are removed from the result. The optional `DISTINCT` keyword has no effect other than the default because it also specifies duplicate-row removal.
With the optional `ALL` keyword, duplicate-row removal does not occur and the result includes all matching rows from all the `SELECT` statements.
**Query**
```
select discount from table1
union
select discount from table2
union
select discount from table3
union
select discount from table4
union
select discount from table5;
```
And if you want to sort the result from above sql query, you can use an `ORDER BY` like below.
**Query**
```
select t.discount from
(
select discount from table1
union
select discount from table2
union
select discount from table3
union
select discount from table4
union
select discount from table5
)t
order by t.discount;
```
order by descending or ascending as per your requirement.
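The deduplication difference between `UNION` and `UNION ALL` described above can be checked quickly (a sketch with two toy tables via Python's bundled SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (discount INTEGER);
CREATE TABLE t2 (discount INTEGER);
INSERT INTO t1 VALUES (5), (10);
INSERT INTO t2 VALUES (10), (15);
""")

# UNION removes duplicate rows across the combined result...
distinct = conn.execute(
    "SELECT discount FROM t1 UNION SELECT discount FROM t2").fetchall()

# ...while UNION ALL keeps every row from every SELECT.
everything = conn.execute(
    "SELECT discount FROM t1 UNION ALL SELECT discount FROM t2").fetchall()

print(len(distinct), len(everything))  # 3 4
```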
|
Use MySQL `union`
```
select * from (
select * from table1
union
select * from table2
union
select * from table3
union
select * from table4
union
select * from table5
)new group by new.discount
```
|
How to select unique discounts in multiple tables
|
[
"",
"mysql",
"sql",
""
] |
I have the following SELECT statement
```
SELECT
UserID
,UserName
,TradingParty
,Email
,[PrivDesc]
,LastLogin
,IsApproved
FROM
cte_getAll allUsers
WHERE
allUsers.TradingParty = COALESCE(@TradingParty, allUsers.TradingParty)
AND allUsers.Username = COALESCE(@Username, allUsers.Username)
AND allUsers.Email = COALESCE(@EmailAddress, allUsers.Email)
AND DATEADD(dd, DATEDIFF(dd, 0, allUsers.[LastLogin]), 0) >= COALESCE(@FromDate, DATEADD(dd, DATEDIFF(dd, 0, allUsers.[LastLogin]), 0))
AND DATEADD(dd, DATEDIFF(dd, 0, allUsers.[LastLogin]), 0) <= COALESCE( @ToDate, DATEADD(dd, DATEDIFF(dd, 0, allUsers.[LastLogin]), 0))
AND allUsers.IsApproved = COALESCE(@AccountActive, allUsers.IsApproved)
AND allUsers.[PrivId] -- IN (SELECT privId from @selectedPriv)
IN (COALESCE((SELECT PrivID FROM @selectedPriv), allUsers.[PrivId]))
```
**EDIT**
Prior to this statement I am populating the @selectedPriv table according to the user's input. If there is no user input, the table will contain nothing.
In the final AND condition I am trying to find all rows where PrivId is in the list of privs the user has entered. So, for example, the user enters 1,2,3,4,5,6; that is stored in the temp table, and from it I need to find all the rows in my original table associated with those privs.
Here is my issue: if I use the commented-out IN statement it works, but since the user is allowed to pass in NULL I need it inside a COALESCE, and when I do that I get the message
> Subquery returned more than 1 value. This is not permitted when the
> subquery follows =, !=, <, <= , >, >= or when the subquery is used as
> an expression.
I haven't got a clue why this happens. Any ideas that would help me fix this issue?
|
If I understand your question correctly, you have a table variable with values you want to filter on, but when it contains no values you don't want the restriction applied at all.
The reason you're getting the error is that `COALESCE()` expects each argument to be a single value, but your subquery returns multiple rows.
This may be what you're looking for:
```
AND
(
NOT EXISTS (SELECT * FROM @selectedPriv WHERE PrivID IS NOT NULL)
OR
(
EXISTS (SELECT * FROM @selectedPriv WHERE PrivID IS NOT NULL)
AND allUsers.[PrivId] IN (SELECT PrivID FROM @selectedPriv)
)
)
```
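A simplified version of this empty-list-means-no-filter pattern can be sketched with Python's bundled SQLite (the `IS NOT NULL` guard is dropped for brevity, and the table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (priv_id INTEGER);
CREATE TABLE selected_priv (priv_id INTEGER);
INSERT INTO users VALUES (1), (2), (3);
""")

QUERY = """
SELECT priv_id FROM users u
WHERE NOT EXISTS (SELECT 1 FROM selected_priv)
   OR u.priv_id IN (SELECT priv_id FROM selected_priv)
"""

# Empty filter table: the NOT EXISTS branch is true, so no restriction applies.
unfiltered = conn.execute(QUERY).fetchall()

# Populated filter table: only the listed ids survive.
conn.execute("INSERT INTO selected_priv VALUES (2)")
filtered = conn.execute(QUERY).fetchall()

print(len(unfiltered), filtered)  # 3 [(2,)]
```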
|
Since you are populating the table @selectedPriv prior to this statement, just add an else there: if the input is null, populate @selectedPriv with
```
SELECT DISTINCT PrivId FROM cte_getAll
```
This should be faster than a more complicated WHERE clause that checks COUNT(\*) in @selectedPriv, which is the other way to do it.
|
SQL Subquery using IN Statment with coalesce
|
[
"",
"sql",
"subquery",
"coalesce",
"sql-in",
""
] |
I have two tables,
**tblFXRates**
```
Base Quote Rate
USD JPY 1.5
USD GBP 2.5
USD EUR 1.75
EUR JPY 1.5
USD USD 1
```
**tblHoldings**
```
FX FXRate
EUR null
EUR null
USD null
GBP null
JPY null
```
What I would like to do is update the FXRate in tblHoldings. So I want to select all the rates in the tblFXRates where the base is equal to USD. Then join the result of this query to the column FX in tblHoldings to update the FXRate. It should look something like below,
**Result**
```
FX FXRate
EUR 1.75
EUR 1.75
USD 1
GBP 2.5
JPY 1.5
```
I'm not sure how to do this, as the nested query is throwing me off. My attempt so far (which is pretty rubbish) is below:
```
update h
set h.FXRate = fx.rate
from tblHoldings h
inner join fx
on h.FX = fx.Quote
with fx as
(
select quote, rate
from tblFxRates
where base = 'USD'
)
```
|
You can use `UPDATE FROM JOIN`:
```
UPDATE th
SET FXRate = tr.Rate
FROM #tblHoldings th
JOIN #tblFXRates tr
ON th.FX = tr.Quote
WHERE tr.Base = 'USD';
```
`LiveDemo`
Output:
```
╔═════╦════════╗
║ FX  ║ FXRate ║
╠═════╬════════╣
║ EUR ║ 1.75   ║
║ EUR ║ 1.75   ║
║ GBP ║ 2.5    ║
║ JPY ║ 1.5    ║
║ USD ║ 1      ║
╚═════╩════════╝
```
Alternatively you could use `MERGE` statement:
```
MERGE #tblHoldings AS th
USING #tblFXRates tr
ON th.FX = tr.Quote
WHEN MATCHED AND tr.Base = 'USD' THEN
UPDATE SET FXRate = tr.Rate;
```
`LiveDemo2`
Which will detect corrupt data like duplicates:
```
INSERT INTO #tblFXRates(Base,Quote,Rate) VALUES ('USD','USD',1);
INSERT INTO #tblFXRates(Base,Quote,Rate) VALUES ('USD','USD',2);
```
With a plain `UPDATE`, such duplicates pass silently.
|
Try this.
```
update h
set h.FXRate = fx.rate
from tblHoldings h
inner join tblfxrates f on h.FX = f.Quote
where f.Base = 'USD'
```
|
update table with a join on a temporary table
|
[
"",
"sql",
"sql-server",
""
] |
I have the data below:
```
TABLE 1:  SID  CAMBORROW   OTMARK
          S1   CAM1        1
          S2   CAM2        1
          S3   CAM3        0

TABLE 2:  SID  BOOKBORROW  OTMARK
          S1   book1       1
          S2   book2       0
          S3   book3       0

TABLE 3:  SID  NAME
          S1   SNAME1
          S2   SNAME2
          S3   SNAME3
```
Now what I want to do is get, for each person, how many times he/she has OTMARK 1, grouped by name. This is the query I use:
```
SELECT TABLE3.NAME,COUNT(*)
FROM TABLE1,TABLE2,TABLE3
WHERE TABLE3.SID = TABLE1.SID
AND TABLE1.OTMARK = 1
OR( TABLE3.SID = TABLE2.SID
AND TABLE2.OTMARK = 1)
GROUP BY TABLE1.NAME
```
The result I got is far from what it should be. I don't know which part I got wrong, so I need help. Thank you in advance.
EDIT: The expected output is
```
SNAME COUNT
SNAME1 2
SNAME2 1
```
|
You have to `join` the tables with proper `join` syntax.
[SQL Fiddle](http://sqlfiddle.com/#!4/dff63f/19)
```
SELECT t3.name,
sum(nvl(t1.otmark,0)+nvl(t2.otmark,0))
FROM table3 t3
left join table2 t2 on t2.sid = t3.sid and t2.otmark =1
left join table1 t1 on t1.sid = t3.sid and t1.otmark =1
where t1.otmark is not null or t2.otmark is not null
GROUP BY t3.NAME
```
Edit: A simpler solution would be
```
SELECT t3.name, t1.otmark+t2.otmark
FROM t3
join t2 on t2.sid = t3.sid
join t1 on t1.sid = t3.sid
where t1.otmark > 0 or t2.otmark > 0
```
|
This is your query:
```
Select table3.name SNAME,(table2.otmark+table1.otmark) COUNT
From table3
inner join table2
on table3.sid=table2.sid
inner join table1
on table1.sid=table2.sid
Where table1.otmark>0
```
|
How to get COUNT from several tables?
|
[
"",
"sql",
"oracle11g",
""
] |
I can't seem to wrap my mind around getting this query right. I have three tables and want to combine them to show how many tickets are left at each location. I am running SQL Server 2008 R2.
Location Table
```
LocationId | LocationName
1 | Location1
2 | Location2
3 | Location3
```
Tickets Per Location Table
```
TicketId | LocationId | EventId | Amount
1 | 1 | 4 | 25
1 | 2 | 4 | 50
1 | 3 | 4 | 100
```
Purchased Tickets Table
```
AttendeeId | EventId | TicketId | LocationId | Amount | EmployeeId
1 | 4 | 1 | 1 | 5 | 101
2 | 4 | 1 | 1 | 10 | 102
3 | 4 | 1 | 2 | 2 | 103
4 | 4 | 1 | 2 | 4 | 103
```
I want the query to display the three tables like this:
```
LocationName | Starting | Sold | Remaining
Location1 | 25 | 15 | 10
Location2 | 50 | 6 | 44
Location3 | 100 | 0 | 0
```
This is the query I had been playing with, but it only shows me locations where tickets have been sold; I need it to show all locations even if they have no tickets sold.
```
SELECT L.LocationName, T.Amount as Starting, COALESCE(SUM(P.Amount),0) as Sold, COALESCE((T.Amount - SUM(P.Amount)),0) as Remaining
FROM TicketsPerLocation T
LEFT JOIN Locations L
ON T.LocationId = L.LocationId
LEFT JOIN PurchasedTickets P
ON T.LocationId = P.LocationId
WHERE T.TicketId = 1 AND T.EventId = 4 AND P.TicketId = 1
GROUP BY L.Name, T.Amount, T.TicketId
```
|
You do need to move your tables around a little bit..
```
SELECT l.LocationName,
tpl.Amount,
COALESCE(SUM(pt.Amount),0) Sold,
tpl.Amount - COALESCE(SUM(pt.Amount),0) Remaining
FROM Locations l
INNER JOIN TicketsPerLocation tpl ON l.LocationID = tpl.LocationID
LEFT JOIN PurchasedTickets pt ON tpl.TicketID = pt.TicketID
AND tpl.LocationID = pt.LocationID
AND tpl.EventID = pt.EventID
WHERE tpl.TicketID = 1 AND tpl.EventID = 4
GROUP BY l.LocationName,
tpl.Amount
```
Also, your WHERE clause had `P.TicketId = 1`, which turns the LEFT JOIN into an INNER JOIN, so remove that.
When you join to `PurchasedTickets`, make sure it's on all of the fields that relate it to `TicketsPerLocation`, not just the location.
[**SQL Fiddle**](http://sqlfiddle.com/#!3/8bf66/2)
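The point about the WHERE clause silently turning a LEFT JOIN into an INNER JOIN is easy to reproduce (a sketch via Python's bundled SQLite with cut-down, made-up tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (id INTEGER, name TEXT);
CREATE TABLE purchased (location_id INTEGER, ticket_id INTEGER);
INSERT INTO locations VALUES (1, 'Location1'), (3, 'Location3');
INSERT INTO purchased VALUES (1, 1);
""")

# Filtering the outer table in WHERE discards the NULL-extended rows,
# silently turning the LEFT JOIN into an inner join: Location3 disappears.
in_where = conn.execute("""
    SELECT l.name FROM locations l
    LEFT JOIN purchased p ON l.id = p.location_id
    WHERE p.ticket_id = 1
""").fetchall()

# Putting the same condition in the ON clause keeps every location.
in_on = conn.execute("""
    SELECT l.name FROM locations l
    LEFT JOIN purchased p ON l.id = p.location_id AND p.ticket_id = 1
""").fetchall()

print(in_where, in_on)  # [('Location1',)] [('Location1',), ('Location3',)]
```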
|
I can see a few problems in your query. See if fixing these solves it:
1. Since you want all locations, you should start with that table before the LEFT JOIN, not after it.
2. The Purchased Tickets table is an outer join, but you have it in your where clause. That's always a bad combination because those two things contradict each other. I would move that requirement to the ON clause.
```
SELECT L.LocationName
,T.Amount AS Starting
,COALESCE(SUM(P.Amount), 0) AS Sold
,COALESCE((T.Amount - SUM(P.Amount)), 0) AS Remaining
FROM Locations L
LEFT JOIN TicketsPerLocation T ON T.LocationId = L.LocationId
LEFT JOIN PurchasedTickets P ON T.LocationId = P.LocationId
AND P.TicketId = 1
WHERE T.TicketId = 1
AND T.EventId = 4
GROUP BY L.LocationName
,T.Amount
,T.TicketId
```
|
Query that combines three tables to show total counts
|
[
"",
"sql",
"sql-server",
""
] |
I need to check an OR condition across multiple columns.
```
select * from tableA where (COL1 or Col2) is not null
```
How can I fetch this?
My actual query is something like :
```
select b.empname,b.hours,b.minutes from table1 a,table2 b
where a.pk_id=b.fk_id and a.type='UT'
and a.SOME_NUMBER='123' and b.hours is not null or b.minutes is not null;
```
but
> b.hours is not null or b.minutes is not null;
is fetching repeated records
|
```
select b.empname,b.hours,b.minutes from table1 a,table2 b
where a.pk_id=b.fk_id and a.type='UT'
and a.SOME_NUMBER='123' and COALESCE(b.hours,b.minutes) is not null;
```
worked fine for me. (The underlying problem with the original query is operator precedence: `AND` binds more tightly than `OR`, so the `or b.minutes is not null` branch bypassed the join filters and produced the repeated records; `COALESCE` removes the `OR` altogether.)
Thanks everyone for the input
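The equivalence of `COALESCE(a, b) IS NOT NULL` and `a IS NOT NULL OR b IS NOT NULL` can be sketched quickly (via Python's bundled SQLite, with made-up rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (empname TEXT, hours INTEGER, minutes INTEGER)")
conn.executemany("INSERT INTO t2 VALUES (?, ?, ?)", [
    ("a", 1, None), ("b", None, 30), ("c", None, None),
])

# COALESCE returns the first non-NULL argument, so it is NULL
# only when every argument is NULL; no parentheses needed.
rows = conn.execute(
    "SELECT empname FROM t2 WHERE COALESCE(hours, minutes) IS NOT NULL"
).fetchall()
print(rows)  # [('a',), ('b',)]
```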
|
Just use
```
select * from tableA where COL1 is not null or Col2 is not null
```
|
Oracle SQL - Fetch all rows if one among multiple columns not null
|
[
"",
"sql",
"oracle",
""
] |
This is what I've got:
```
SELECT
SUBJECT_ID, SUM(SOMETABLE.COLUMN) AS HOURS, POINTS, SEMESTER_ID
FROM
SOME_TABLES
WHERE
(GROUP = (SELECT TOP (1) GROUP
FROM SOMETABLE2
WHERE (STUDENT_ID = 123)))
GROUP BY
SUBJECT_ID, POINTS, SEMESTER_ID
HAVING
(SUBJECT_ID = 782)
```
This query returns:
[](https://i.stack.imgur.com/PL87e.png)
I need to get this result:
[](https://i.stack.imgur.com/GBBOF.png)
To get that results I'm using this query:
```
SELECT
SUBJECT_ID, SUM(SOMETABLE.COLUMN) AS HOURS,
SUM(SOMETABLE3.COLUMN) AS POINTS, SEMESTER_ID
FROM
SOME_TABLES
WHERE
(GROUP = (SELECT TOP (1) GROUP
FROM SOMETABLE2
WHERE (STUDENT_ID = 123)))
GROUP BY
SUBJECT_ID, SEMESTER_ID
HAVING
(SUBJECT_ID = 12)
```
But it returns the `SUM` without respecting the `GROUP BY` - like in the second screenshot, except that 16 points appear twice, while there should be two rows with 8 points per semester.
How do I get the correct POINTS per SEMESTER\_ID? There is a script with sample data
in a comment under this post.
|
You can use [**OVER**](https://msdn.microsoft.com/en-us/library/ms189461.aspx) clause, something like:
**QUERY**
```
select distinct
Id,
sum(Hours) over (partition by SimId) Hours,
sum(Points) over (partition by SimId) Points,
SimId
from #t
```
**SAMPLE DATA**
```
create table #t
(
Id INT,
Hours INT,
Points INT,
SimId INT
)
insert into #t values
(787,100,0,214858),
(787,0,8,214858),
(787,100,8,233562),
(787,0,0,233562)
```
**OUTPUT**
```
Id Hours Points SimId
787 100 8 214858
787 100 8 233562
```
**UPDATE**
Because you did not provide sample data, you can use a `CTE` as follows:
```
WITH cte AS
(
SELECT
SUBJECT_ID, SUM(SOMETABLE.COLUMN) AS HOURS,
SUM(SOMETABLE3.COLUMN) AS POINTS, SEMESTER_ID
FROM
SOME_TABLES
WHERE
(GROUP = (SELECT TOP (1) GROUP
FROM SOMETABLE2
WHERE (STUDENT_ID = 123)))
GROUP BY
SUBJECT_ID, SEMESTER_ID
HAVING
(SUBJECT_ID = 12)
)
SELECT DISTINCT SUBJECT_ID,
sum(Hours) over (partition by SEMESTER_ID) Hours,
sum(Points) over (partition by SEMESTER_ID) Points,
SEMESTER_ID
FROM cte
```
**UPDATE 2**
You can create a `#temp_table` and insert the data from your select query into it, then use a `CTE` as follows:
```
;WITH cte AS
(
SELECT *
FROM #temp_table
)
SELECT DISTINCT SUBJECT_ID,
sum(Hours) over (partition by SEMESTER_ID) Hours,
sum(Points) over (partition by SEMESTER_ID) Points,
SEMESTER_ID
FROM cte
```
|
Try this instead:
```
SELECT
    SUBJECT_ID,SEMESTER_ID
,SUM(HOURS) as HOURS
,SUM(POINTS) as POINTS
FROM SOME_TABLES
WHERE SUBJECT_ID = 12
GROUP BY
   SUBJECT_ID,SEMESTER_ID
```
|
How get multiple SUM() for different columns with GROUP BY
|
[
"",
"sql",
"sql-server",
"t-sql",
"group-by",
"sum",
""
] |
I have the two following tables:
***Person:***
```
EntityId FirstName LastName
----------- ------------------ -----------------
1 Ion Ionel
2 Fane Fanel
3 George Georgel
4 Mircea Mircel
```
***SalesQuotaHistory***
```
SalesQuotaId EntityId SalesQuota SalesOrderDate
------------ ----------- ----------- -----------------------
1 1 1000 2014-01-01 00:00:00.000
2 1 1000 2014-01-02 00:00:00.000
3 1 1000 2014-01-03 00:00:00.000
4 3 3000 2013-01-01 00:00:00.000
5 3 3000 2013-01-01 00:00:00.000
7 4 4000 2015-01-01 00:00:00.000
8 4 4000 2015-01-02 00:00:00.000
9 4 4000 2015-01-03 00:00:00.000
10 1 1000 2015-01-01 00:00:00.000
11 1 1000 2015-01-02 00:00:00.000
```
I am trying to get the SalesQuota for each user in 2014 and 2015.
Using this query I am getting an erroneous result:
```
SELECT p.EntityId
, p.FirstName
, SUM(sqh2014.SalesQuota) AS '2014'
, SUM(sqh2015.SalesQuota) AS '2015'
FROM Person p
LEFT OUTER JOIN SalesQuotaHistory sqh2014
ON p.EntityId = sqh2014.EntityId
AND YEAR(sqh2014.SalesOrderDate) = 2014
LEFT OUTER JOIN SalesQuotaHistory sqh2015
ON p.EntityId = sqh2015.EntityId
AND YEAR(sqh2015.SalesOrderDate) = 2015
GROUP BY p.EntityId, p.FirstName
EntityId FirstName 2014 2015
--------- ----------- ---------- --------------------
1 Ion 6000 6000
2 Fane NULL NULL
3 George NULL NULL
4 Mircea NULL 12000
```
In fact, Id 1 has a total SalesQuota of 3000 in 2014 and 2000 in 2015.
What I am asking here is: what is really happening behind the scenes? What is the order of operations in this specific case?
Thanks to my last post I was able to solve this using the following query:
```
SELECT p.EntityId
, p.FirstName
, SUM(CASE WHEN YEAR(sqh.SalesOrderDate) = 2014 THEN sqh.SalesQuota ELSE 0 END) AS '2014'
, SUM(CASE WHEN YEAR(sqh.SalesOrderDate) = 2015 THEN sqh.SalesQuota ELSE 0 END) AS '2015'
FROM Person p
LEFT OUTER JOIN SalesQuotaHistory sqh
ON p.EntityId = sqh.EntityId
GROUP BY p.EntityId, p.FirstName
EntityId FirstName 2014 2015
----------- --------------------- ----------- -----------
1 Ion 3000 2000
2 Fane 0 0
3 George 0 0
4 Mircea 0 12000
```
but without understanding what's wrong with the first attempt, I can't get past this.
Any explanation would be greatly appreciated.
|
You have multiple rows for each year, so the first method is producing a Cartesian product.
For instance, consider `EntityId` 1:
```
1 1 1000 2014-01-01 00:00:00.000
2 1 1000 2014-01-02 00:00:00.000
3 1 1000 2014-01-03 00:00:00.000
10 1 1000 2015-01-01 00:00:00.000
11 1 1000 2015-01-02 00:00:00.000
```
The intermediate result from the join produces six rows, with these `SalesQuotaId`:
```
1 10
1 11
2 10
2 11
3 10
3 11
```
You can then do the math -- the result is off because of the multiple rows.
You seem to know how to fix the problem. The conditional aggregation approach produces the correct answer.
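To see the inflation directly, you can count the intermediate rows the double join produces. A sketch against the question's tables:

```sql
SELECT p.EntityId, COUNT(*) AS joined_rows
FROM Person p
JOIN SalesQuotaHistory sqh2014
  ON p.EntityId = sqh2014.EntityId AND YEAR(sqh2014.SalesOrderDate) = 2014
JOIN SalesQuotaHistory sqh2015
  ON p.EntityId = sqh2015.EntityId AND YEAR(sqh2015.SalesOrderDate) = 2015
WHERE p.EntityId = 1
GROUP BY p.EntityId;
-- 3 rows from 2014 x 2 rows from 2015 = 6 joined rows,
-- so SUM(sqh2014.SalesQuota) counts each 2014 row twice (2 x 3000 = 6000)
```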
|
It is easy to see what is happening if you change your select to
```
SELECT *
```
and remove the `group by`
Your first approach needs something like this
**[Sql Fiddle Demo](http://sqlfiddle.com/#!6/18cc8/17)**
```
SELECT p.[EntityId]
, p.FirstName
, COALESCE(s2014,0) as [2014]
, COALESCE(s2015,0) as [2015]
FROM Person p
LEFT JOIN (SELECT EntityId, SUM(SalesQuota) s2014
FROM SalesQuotaHistory
WHERE YEAR(SalesOrderDate) = 2014
GROUP BY EntityId
) as s1
ON p.[EntityId] = s1.EntityId
LEFT JOIN (SELECT EntityId, SUM(SalesQuota) s2015
FROM SalesQuotaHistory
WHERE YEAR(SalesOrderDate) = 2015
GROUP BY EntityId
) as s2
ON p.[EntityId] = s2.EntityId
```
This joins to the aggregated data only if it exists for that `id` and `year`.
**OUTPUT**
```
| EntityId | FirstName | 2014 | 2015 |
|----------|-----------|------|-------|
| 1 | Ion | 3000 | 2000 |
| 2 | Fane | 0 | 0 |
| 3 | George | 0 | 0 |
| 4 | Mircea | 0 | 12000 |
```
|
Multiple joins with aggregates
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am extracting some wildcards from a string type column using certain keywords, but for certain keywords in my list I am getting some false positives which I do not want in my output. Some of the keywords in my wildcard select are 'old', 'older' and 'age'.
```
select * from DESCRIPTIONS..LONG
where (DESCR like '% old %'
or DESCR like '% older %'
or DESCR like '% age %'
or DESCR like '%old%'
or DESCR like '%older%'
or DESCR like '%age%')
```
I want to extract only rows that contain these exact words, but I end up returning strings that include 'management', 'image', 'cold', 'colder' etc. I could remove these false positives by not looking for the patterns below
```
DESCR like '%old%'
or DESCR like '%older%'
or DESCR like '%age%'
```
but in that process I am excluding rows that have special characters like period, comma, slash etc. which are true positives. E.g. I would miss strings ending in 'age.' or 'old.' or 'older,' or 'age' when it is the last word in the string without a trailing space.
How do I avoid the false positives without losing true positives?
here is a complete list of my keywords separated by a comma.
keywords: newborn, newborns, infant, infants, year, years, child, children, adult, adults, pediatric, old, older, young, younger, age
Thanks
|
I suggest you start by looking at the LIKE syntax from Microsoft:
<https://msdn.microsoft.com/en-us/library/ms179859.aspx>
Are you searching for a free text field? You could use the [] syntax:
```
SELECT * FROM DESCRIPTIONS..LONG
WHERE DESCR LIKE '%[ "\/-]age[,.:;'' "\/-]%'
```
You put inside square brackets everything you accept in that position, resolving your issues with punctuation.
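A quick way to sanity-check the bracket class, using hypothetical test strings (note the doubled single quote inside the pattern):

```sql
SELECT s,
       CASE WHEN s LIKE '%[ "\/-]age[,.:;'' "\/-]%' THEN 'match' ELSE 'no match' END AS result
FROM (SELECT 'patient age, 40 years' AS s
      UNION ALL SELECT 'the image quality'
      UNION ALL SELECT 'manage the case') AS t;
-- only the first string matches: its 'age' is framed by a space and a comma
```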
|
Assuming that spaces delineate the words, you can use this trick:
```
select *
from DESCRIPTIONS..LONG
where ' ' + DESCR + ' ' like '% old %' or
' ' + DESCR + ' ' like '% older %' or
' ' + DESCR + ' ' like '% age %';
```
|
SQL Server wildcard select with a twist
|
[
"",
"sql",
"sql-server-2005",
""
] |
I'm attempting to extract a username value from XML output that was loaded into a database column (file\_output). My query is bringing back a null value and not performing as I expect it to. Your help is appreciated.
XML output:
```
<soap:Envelope xmlns:ones="http://onesource.gmtorque.com" xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
<soap:Header>
<wsse:Security>
<wsse:UsernameToken>
<wsse:Username>username_prod</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
</soap:Header>
</soap:Envelope>
```
Query:
```
SELECT DISTINCT
    RR.file_output,
    ExtractValue (RR.file_output, '/soap:Envelope/soap:Header/wsse:Security/wsse:UsernameToken/wsse:Username') AS "Username"
FROM
    schema.Records RR
WHERE
    create_DTM > '2015-10-25';
```
Expected value is username\_prod
|
The problem that you're running into is a bug with MySQL. The XML that you're trying to parse is most likely too long. I'm pretty sure it's related to this bug: <https://bugs.mysql.com/bug.php?id=62429>. To get around this bug, you can just extract the portion of the XML that you're trying to navigate, or just use another method to extract the value that you're looking for. Anyways, the solution I chose was to extract the XML that you were trying to navigate through XPath and use the XPath query that you provided to extract the results.
```
SELECT DISTINCT RR.consumer_ID
, RR.file_output
, RR.response_message
, RR.external_ID
, RR.create_DTM
, E.client_license_ID
, ExtractValue(CONCAT(LEFT(RR.file_output, (INSTR(LEFT(RR.file_output,1000), "</soap:Header>") + 14)),"</soap:Envelope>"), '//wsse:Security/wsse:UsernameToken/wsse:Username') AS Username
FROM kettle_data_transfer.Records RR
JOIN kettle_data_transfer.Event_Mappings EM ON RR.event_mapping_ID = EM.event_mapping_ID AND RR.data_transfer_ID = EM.data_transfer_ID
JOIN efn.Events E on EM.event_ID = E.event_ID
WHERE 0=0
AND RR.data_transfer_ID = 43
AND RR.failure_code = 0
AND RR.mode = 'production'
AND RR.`ignore` = 0
AND RR.create_DTM > '2015-10-25';
```
Anyways, that should resolve your problem.
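The core of the workaround, isolated (a MySQL sketch; the source table name is hypothetical, and the `+ 13` offset is INSTR's 1-based position of `</soap:Header>` plus the tag length minus one):

```sql
SET @xml = (SELECT file_output FROM Records LIMIT 1);  -- hypothetical source
SELECT ExtractValue(
         CONCAT(LEFT(@xml, INSTR(@xml, '</soap:Header>') + 13),  -- keep text through </soap:Header>
                '</soap:Envelope>'),                             -- re-close the envelope
         '//wsse:UsernameToken/wsse:Username') AS username;
```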
|
The XML sample you have is malformed as it is missing the closing `</soap:Envelope>` tag.
[SQL Fiddle](http://sqlfiddle.com/#!9/63afd/1)
|
ExtractValue with MySQL
|
[
"",
"mysql",
"sql",
"xml",
"extract-value",
""
] |
**I have a table budget like this:**
```
| Id | Site | Rayon | Date      | Amt |
|----|------|-------|-----------|-----|
| 1  | 20   | 23    | 2015-11-3 | 200 |
| 2  | 40   | 2     | 2015-12-4 | 20  |
| 3  | 20   | 3     | 2015-11-4 | 400 |
| 4  | 30   | 13    | 2015-11-5 | 500 |
| 5  | 20   | 23    | 2015-08-3 | 200 |
```
How to calculate the sum of amt from today till the end of month and from today till the end of year ?
|
With a problem such as this, the first thing you need to do is understand how to calculate "the end of this month" and "the end of this year". On SQL Server 2008 you do not have access to the helpful [`EOMONTH()`](https://msdn.microsoft.com/en-gb/library/hh213020.aspx) function, so you must calculate this yourself.
There are [a few different ways of doing this](http://blogs.msdn.com/b/samlester/archive/2013/09/23/eomonth-equivalent-in-sql-server-2008-r2-and-below.aspx) but I've gone for one which calculates a few milliseconds before midnight on the last day of the current month, and the last day of the current year.
```
declare @date DATETIME = getdate()
select dateadd(millisecond,-3,DATEADD(MONTH,datediff(MONTH,0,@date)+1,0)) as EOMONTH
select dateadd(millisecond,-3,DATEADD(year,datediff(year,0,@date)+1,0)) as EOYEAR
```
(Note: the `-3` is due to SQL Server's `datetime` data type having a [minimum accuracy of 3ms](https://msdn.microsoft.com/en-gb/library/ms187819.aspx), so taking 3 ms from the beginning of next month/next year is as accurate as you're going to get. In any case it matters very little; all we're trying to do is get a date boundary as close to the end of the month/end of the year as possible.)
---
Given this bit is worked out, you could use them individually in 2 different queries
```
DECLARE @now DATETIME = GETDATE();
DECLARE @eomonth DATETIME = dateadd(millisecond,-3,DATEADD(MONTH,datediff(MONTH,0,@now)+1,0))
DECLARE @eoyear DATETIME = dateadd(millisecond,-3,DATEADD(year,datediff(year,0,@now)+1,0))
SELECT SUM(amt) FROM Budget
WHERE date >= @now AND date <=@eomonth
SELECT SUM(amt) FROM Budget
WHERE date >= @now AND date <= @eoyear
```
Or, you could combine them
```
SELECT SUM(CASE WHEN DATE<@eomonth THEN Amt ELSE 0 END) AS summonth,
SUM(CASE WHEN DATE<@eoyear THEN Amt ELSE 0 END) AS sumyear
FROM Budget
WHERE Date>=@now
```
|
For end of current month:
```
SELECT SUM(Amt) AS Amt
FROM TableName
WHERE Date >= CAST(GETDATE() AS DATE) AND
Date < DATEADD(m, 1, DATEADD(dd, -DAY(GETDATE()) + 1, CAST(GETDATE() AS DATE)))
```
For end of current year:
```
SELECT SUM(Amt) AS Amt
FROM TableName
WHERE Date >= CAST(GETDATE() AS DATE) AND
Date < CAST(YEAR(GETDATE()) + 1 AS char(4)) + '0101'
```
|
Calculate sum of budget
|
[
"",
"sql",
"sql-server-2008",
""
] |
The title may not seem to make sense, but I've been trying to get something to work here, but I must be missing a trick. To explain...
For example, take these records containing the strings:
```
(1) 7 bla bla bla 17 bla bla 9 bla
(2) Bla 12 bla bla 7 bla bla bla 54 bla bla
(3) Bla bla bla 6 bla bla 17 bla bla bla 2 bla
```
So, I need to find records, using the example above, which have the value of 7 anywhere in the string. If I use a ...LIKE '%7%'... it finds records 1, 2 and 3, but I only want it to find records with 7 (and not just 17), so it should only find records 1 and 2.
Obviously, if I add ...NOT LIKE '%17%'... then I only get record 2 so that doesn't help.
|
You should probably be storing these values in a junction table, rather than in delimited lists.
However, you can do what you want using `like`:
```
where ' ' + col + ' ' like '% 7 %'
```
That is, add delimiters to the beginning and end of the string and then use them in the pattern to match.
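A quick way to see what the padding achieves, using hypothetical test strings (T-SQL):

```sql
SELECT s,
       CASE WHEN ' ' + s + ' ' LIKE '% 7 %' THEN 'match' ELSE 'no match' END AS result
FROM (VALUES ('7 bla bla bla 17 bla'),   -- match: the leading 7 now has a space before it
             ('Bla 12 bla bla 7'),       -- match: the trailing 7 now has a space after it
             ('Bla bla 6 bla 17')        -- no match: 17 is never a standalone 7
     ) AS t(s);
```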
|
What about this - (include some spaces to exclude other values that include 7)
```
LIKE '7 %' OR LIKE '% 7 %'
```
|
In SQL Server, finding in a string record with 7 but not 17, but could have both
|
[
"",
"sql",
"sql-server",
""
] |
i have two table here is `table_user` and `table_feedback` like below
```
table_user
| id | name |
|----|------|
| 1 | john |
| 2 | tony |
| 3 | mona |
table_feedback
| id | rate | user_id | date |
|----|------|---------|----------|
| 1 | 1 | 3 |2015-11-2 |
| 2 | 1 | 2 |2015-11-2 |
| 3 | 1 | 3 |2015-11-1 |
```
I want to show a report by `date` from `table_feedback`, including `name` and `id` from `table_user`. All users should be shown; if `table_feedback` doesn't contain the user id, the row should come back with blank rate data. I know about `inner join`, and my query is below. The problem is that the query returns only 2 rows, but I need 3 rows, including `table_user` id 1 with a blank rate column.
Here is my query below.
```
SELECT
table_user.id,
table_user.name,
table_feedback.rate,
table_feedback.date
FROM table_feedback
INNER JOIN table_user
ON table_user.id = table_feedback.user_id
WHERE table_feedback.date = '2015-11-2'
expected_result_table
| user_id | name | rate | date |
|-------- |------|------|----------|
| 1       |john  |      |2015-11-2 |
| 2 |tony | 1 |2015-11-2 |
| 3 |mona | 1 |2015-11-2 |
```
|
The solution to this is an outer join. Anytime you find yourself thinking along the lines of "I need to see all rows from this table, regardless of a match in another table..." you should look to an outer join.
We can use an outer join to select all users, and link them to the feedbackTable in our `JOIN` clause. This will return null values for any columns in the table that don't match up. Try this:
```
SELECT u.id, u.name, t.rate, t.dateCol
FROM userTable u
LEFT JOIN feedbackTable t ON t.user_id = u.id AND t.dateCol = '2015-11-02';
```
Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/ef7e4/3) example. As a side note, it is good practice not to name date columns `date` since that is a keyword in MySQL.
---
Edit based on your expected results:
To make sure the date column appears in each row, you can hardcode it into your select. If you choose to use a variable, you won't have to update the date twice each time, you can just update the declaration:
```
SET @reportDate = '2015-11-02';
SELECT u.id, u.name, t.rate, @reportDate
FROM userTable u
LEFT JOIN feedbackTable t ON t.user_id = u.id AND t.dateCol = @reportDate;
```
Here is an updated [SQL Fiddle](http://sqlfiddle.com/#!9/ef7e4/4).
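The placement of the date filter matters. A sketch (same hypothetical names as above) with the filter moved to the WHERE clause shows how that discards the NULL-extended rows again, effectively turning the LEFT JOIN back into an INNER JOIN:

```sql
SELECT u.id, u.name, t.rate, t.dateCol
FROM userTable u
LEFT JOIN feedbackTable t ON t.user_id = u.id
WHERE t.dateCol = '2015-11-02';
-- user 1 has no feedback, so t.dateCol IS NULL for that row;
-- the WHERE test fails and the row disappears from the result
```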
|
My guess is you need to use `LEFT JOIN`:
```
SELECT
table_user.id,
table_user.name,
table_feedback.rate,
table_feedback.date
FROM table_user
LEFT JOIN table_feedback
ON table_user.id = table_feedback.user_id
AND table_feedback.date = '2015-11-2'
```
|
mysql help to make a report
|
[
"",
"mysql",
"sql",
""
] |
So I have used [this](https://stackoverflow.com/questions/20719901/count-rows-per-hour-in-sql-server-with-full-date-time-value-as-result) post as a reference, however I would like to count all the rows based on a 15 minute time period.
Here is what I have so far:
```
SELECT DateAdd(minute, DateDiff(minute, 0, [datetime]), 0) as Timestamp,
Count(*) as Tasks
FROM [table]
GROUP BY DateAdd(minute, DateDiff(minute, 0, [datetime]), 0)
ORDER BY Timestamp
```
This is great for getting rows per minute, however I need 15 minutes...
So I change:
`DateAdd(minute, DateDiff(minute, 0, [datetime]), 0)`
to
`DateAdd(minute, DateDiff(minute, 0, [datetime]), 15)`
however that is just pushing the date 15 days ahead.
Any help is appreciated!
|
To get 15 minutes, divide by 15 (and then multiply again):
```
SELECT DateAdd(minute, 15*(DateDiff(minute, 0, [datetime]) / 15), 0
) as Timestamp,
Count(*) as Tasks
FROM [table]
GROUP BY (DateDiff(minute, 0, [datetime]) / 15)
ORDER BY Timestamp;
```
SQL Server does integer division. If you want to be unambiguous about your intentions, use `FLOOR()`.
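To convince yourself of the truncation behaviour, here is a small check against hardcoded datetimes (a sketch; the `VALUES` row constructor needs SQL Server 2008+):

```sql
SELECT d,
       DATEADD(minute, 15 * (DATEDIFF(minute, 0, d) / 15), 0) AS bucket_start
FROM (VALUES (CAST('2015-11-03 09:07' AS datetime)),
             (CAST('2015-11-03 09:14' AS datetime)),
             (CAST('2015-11-03 09:15' AS datetime))) AS t(d);
-- the first two rows land in the 09:00 bucket; the third starts the 09:15 bucket
```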
|
```
SELECT ROUND(DATEDIFF(SECOND,{d '1970-01-01'},[datetime])/(15 * 60),0) as Timestamp,
Count(*) as Tasks
FROM [table]
GROUP BY ROUND(DATEDIFF(SECOND,{d '1970-01-01'},[datetime])/(15 * 60),0)
ORDER BY Timestamp
```
|
SQL Query by time interval
|
[
"",
"sql",
"sql-server",
""
] |
I have a table Exams, where I have a date column in the following format: 'MM/DD/YYYY'.
I have to sort this column and change the format to 'MONTH/DD/YYYY'
My code is the following:
```
SELECT
TO_CHAR(Exam_Date, 'MONTH/DD/YY') AS Exam_date
FROM
Exams
WHERE
Student_id = '0000049'
ORDER BY
Exam_date ASC
```
Unfortunately this sorts my column like a string :(
I also have tried this:
```
SELECT
TO_DATE(TO_CHAR(Exam_date, 'MONTH/DD/YYYY'), 'MONTH/DD/YYYY') AS Exam_date
FROM Exams
WHERE Student_id = '0000049'
ORDER BY Exam_date
```
Here I got them sorted but in 'MM/DD/YYYY' format :/
Thank you for helping.
|
You are not using sql server. It appears you are using Oracle. If you were using Sql Server, you could just cast to datetime, like the following:
```
ORDER BY cast(Exam_date as DATETIME)
```
I haven't used Oracle for a very long time... try using the following:
```
ORDER BY TO_DATE(Exam_Date)
```
|
I'm making the assumption that your underlying field in the Exams table for Exam\_date is a date or datetime type. If so, you need to understand that when you perform manipulations on a field and then alias that expression with the same name as the underlying field, you lose access to the base field when you attempt to do the ORDER BY.
A better approach would be to expose the expression as a different field name. If you cannot do that, you would want to expose your base field as an aliased one so that you can sort by it.
A second assumption that I'm making is that the TO\_CHAR is a custom function you implemented to display the date as text.
Method 1:
```
SELECT TO_CHAR(Exam_Date, 'MONTH/DD/YY') AS Exam_date_Text
FROM Exams
WHERE Student_id='0000049'
ORDER BY Exam_date ASC
```
Method 2:
```
SELECT TO_CHAR(Exam_Date, 'MONTH/DD/YY') AS Exam_date, Exam_date AS ORIG_Exam_date
FROM Exams
WHERE Student_id='0000049'
ORDER BY ORIG_Exam_date ASC
```
|
SQL: How can I sort strings in Date format(MONTH/DD/YY)?
|
[
"",
"sql",
"string",
"oracle",
"sorting",
"date",
""
] |
Table 1
```
1 A 1 1
2 A 1 2
5 A 1 1
6 B 2 1
```
Table 2
```
1 1 12
2 2 45
3 5 22
4 6 21
```
table1.col1 is a FK to table2.col2
I want to duplicate the rows where col2 = A, giving the copies col2 = AA:
```
1 A 1 1
2 A 1 2
5 A 1 1
6 B 2 1
7 AA 1 1 <- New
8 AA 1 2 <- New
9 AA 1 1 <- New
```
How do you join Table 2 to the new resultset such that values that existed for A also exist for AA?
Result wanted:
```
1 A 1 1 | 1 1 12
2 A 1 2 | 2 2 45
5 A 1 1 | 3 5 22
6 B 2 1 | 4 6 21
7 AA 1 1 | 1 1 12
8 AA 1 2 | 2 2 45
9 AA 1 1 | 3 5 22
```
|
Untested:
The logic seems solid, but I don't have the data or a test environment to try this...
You could use a union and (inline view or a common table expression) to accomplish this.
First we build table1 with both sets of desired data (inline view A below). This approach makes the join simple. It is accomplished using a union statement, hard-coding the 'AA' value while limiting that set to only the A rows, then unioning in the base set.
We then join back to table2 as normal.
I used ROW_NUMBER() OVER (ORDER BY col2) to generate the increments to add to the max ID: 1 for the first 'A' row, 2 for the second, and 3 for the third, on top of a seed of 6, which is the max col1 value in table1.
I used Parent\_ID to always identify the related record to join to table2.
**Inline view**
```
Select * --(though you should spell out desired columns)
from (Select ROW_NUMBER() OVER(ORDER BY Col2)+C.mID as col1, 'AA' as col2, col3, col4, col1 as Parent_ID
from table1
CROSS JOIN (select max(col1) mID from table1) C
where table1.col2 = 'A'
UNION ALL
Select col1, Col2, col3, col4, col1 as Parent_ID
from table1) A
INNER JOIN table2
on table2.col2 = A.parent_ID
```
**CTE:**
```
With cte as (Select ROW_NUMBER() OVER(ORDER BY Col2)+C.mID col1, 'AA' col2, col3, col4, col1 as Parent_Id
FROM table1
CROSS JOIN (select max(col1) mID from table1) C
WHERE table1.col2 = 'A'
UNION ALL
SELECT col1, Col2, col3, col4, col1 as Parent_Id
from table1)
SELECT * --(though you should spell out desired columns)
FROM cte
INNER JOIN table2
on table2.col2 = cte.Parent_Id
```
|
Consider each value A/B/AA in isolation and use window functions to find the lag/lead of col3 and col4.
Treat each "prev\_col3, col3, next\_col3, prev\_col4, col4, next\_col4" tuple as a unique "context" identifier and join on that. This is how we avoid confusing row 7 with row 9 in the data; they have distinct prev/next lag/lead values for col3 and col4.
We need to control for the null cases (I mapped null to -1) for the joins to work.
You can copy/paste this into SQL server to see it work:
```
CREATE TABLE #TABLE1 (col1 INT, col2 varchar(5), col3 INT, col4 INT)
CREATE TABLE #TABLE2 (col1 INT, col2 INT, col3 INT)
INSERT INTO #TABLE1
select 1 col1,'A' col2, 1 col3, 1 col4 union
select 2 col1,'A' col2, 1 col3, 2 col4 union
select 5 col1,'A' col2, 1 col3, 1 col4 union
select 6 col1,'B' col2, 2 col3, 1 col4 union
select 7 col1,'AA' col2, 1 col3, 1 col4 union
select 8 col1,'AA' col2, 1 col3, 2 col4 union
select 9 col1,'AA' col2, 1 col3, 1 col4
INSERT INTO #TABLE2
select 1 col1, 1 col2, 12 col3 union
select 2 col1,2 col2, 45 col3 union
select 3 col1,5 col2, 22 col3 union
select 4 col1,6 col2, 21 col3
select
Bu.col1, bu.col2, bu.col3, bu.col4, t2.col1, t2.col2, t2.col3
from
(
select
col1, col2,
lag(col3) over (order by col1 asc) prev_col3,
col3,
lead(col3) over (order by col1 asc) next_col3,
lag(col4) over (order by col1 asc) prev_col4,
col4,
lead(col4) over (order by col1 asc) next_col4
from
#TABLE1 t1 where col2 in ('A')
) A
join
( /*bu big union*/
select
col1, col2,
lag(col3) over (order by col1 asc) prev_col3,
col3,
lead(col3) over (order by col1 asc) next_col3,
lag(col4) over (order by col1 asc) prev_col4,
col4,
lead(col4) over (order by col1 asc) next_col4
from
#TABLE1 t1 where col2 in ('A')
UNION
select
col1, col2,
lag(col3) over (order by col1 asc) prev_col3,
col3,
lead(col3) over (order by col1 asc) next_col3,
lag(col4) over (order by col1 asc) prev_col4,
col4,
lead(col4) over (order by col1 asc) next_col4
from
#TABLE1 t1 where col2 in ('AA')
) bu
on
(
a.col3 = bu.col3 and isnull(a.prev_col3,-1) = isnull(bu.prev_col3,-1) and
isnull(a.next_col3,-1) = isnull(bu.next_col3,-1) and
a.col4 = bu.col4 and isnull(a.prev_col4,-1) = isnull(bu.prev_col4,-1) and
isnull(a.next_col4,-1) = isnull(bu.next_col4 ,-1)
)
join
#TABLE2 t2
on
a.col1 = t2.col2
UNION
select
t1.col1, t1.col2, t1.col3, t1.col4,
t2.col1, t2.col2, t2.col3
from
#TABLE1 t1
join #TABLE2 t2 on t1.col1 = t2.col2
where t1.col2 = 'B'
order by 1 asc
drop table #TABLE1
drop table #TABLE2
```
|
Joining on non unique columns
|
[
"",
"sql",
"sql-server",
"join",
""
] |
I'm selecting a `timestamptz` from a PosgreSQL database.
I want my `SELECT` to return only the date, hours and minutes. No seconds. Can I set the date format in my psql `SELECT` to accommodate this?
|
You can use [**`date_trunc()`**](https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC) to truncate seconds and still return a `timestamptz`:
```
SELECT date_trunc('minute', ts_col) ...
```
Or you can use [**`to_char()`**](https://www.postgresql.org/docs/current/functions-formatting.html) to return a formatted timestamp as `text` any way you like:
```
SELECT to_char(ts_col, 'YYYY-MM-DD HH24:MI') ...
```
Note that this will format the `timestamptz` according to your current time zone setting. Details:
* [Ignoring time zones altogether in Rails and PostgreSQL](https://stackoverflow.com/questions/9571392/ignoring-timezones-altogether-in-rails-and-postgresql/9576170#9576170)
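Both variants side by side on a literal timestamp (a sketch; the `to_char` output depends on your session's time zone setting, so no fixed result is shown for it):

```sql
SELECT date_trunc('minute', timestamptz '2015-11-03 09:05:42+00') AS truncated,
       to_char(timestamptz '2015-11-03 09:05:42+00', 'YYYY-MM-DD HH24:MI') AS formatted;
```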
|
With timezone:
```
SELECT date_trunc('minute', TABLE_NAME.COLUMN AT TIME ZONE '+03') FROM TABLE_NAME
```
|
Select timestamp with time zone, but ignore seconds
|
[
"",
"sql",
"postgresql",
"timestamp-with-timezone",
""
] |
I need to compare 2 dates and return the number of days in between. Here is a table as example:
```
+----+--------+-------------------------+-------------------------+
| id | userid | datestarted | datefinished |
+----+--------+-------------------------+-------------------------+
| | | | |
| 1 | 23 | 2014-03-25 09:05:00.000 | 2014-03-25 12:15:00.000 |
| 2 | 43 | 2014-03-25 09:05:00.000 | 2014-03-25 12:15:00.000 |
| 3 | 23 | 2014-03-31 09:05:00.000 | 2014-03-31 12:15:00.000 |
| 4 | 12 | 2014-03-25 09:05:00.000 | 2014-03-26 12:15:00.000 |
+----+--------+-------------------------+-------------------------+
```
In the first 3 cases, we have the same day, only the hours don't match.
```
Datestarted = 2014-03-25 09:05:00.000
Datefinished = 2014-03-25 12:15:00.000
```
We only input `hours` and `minutes`.
Until now, we needed only to show the `difference` as a `whole number`, without `decimal points`, and did it like this:
```
DATEDIFF(carsharing.datestarted, carsharing.datefinished)
```
But now, we have to show the `difference` between the dates as `0,5 day`, if it is `less` than `4,5 hours`. If the difference is `greater` it should stay as `1 day`.
In the more complicated last case from the table, we should also compare and show the difference between two different days
```
Datestarted = 2014-03-25 09:05:00.000
Datefinished = 2014-03-26 12:15:00.000
```
Here the result should be `1,5 days`
|
You can try this query. It gets the difference in minutes and multiplies it by 2 in order to get a 0.5-day range. It then divides it by 24 hours and by 60 minutes before calculating the Ceiling value. Once you have that, it can be divided by 2 again.
When the value is over 4.5\*24\*60 (4.5 days in minutes), it only has to be divided by 24 and 60.
> **Query:**
```
Select id, userid, datestarted, datefinished
, Days = Case When DATEDIFF(minute, datestarted, datefinished) > 4.5*60*24
then DATEDIFF(minute, datestarted, datefinished) / 24 / 60
else CEILING(((2.0*DATEDIFF(minute, datestarted, datefinished)) / 24 / 60)) / 2
end
From @dates
```
> **Output:**
```
id userid datestarted datefinished Days
1 23 2014-03-25 09:05:00.000 2014-03-25 12:15:00.000 0.500000
2 43 2014-03-25 09:05:00.000 2014-03-25 12:15:00.000 0.500000
3 23 2014-03-31 09:05:00.000 2014-03-31 12:15:00.000 0.500000
4 12 2014-03-25 09:05:00.000 2014-03-26 12:15:00.000 1.500000
5 12 2014-03-25 09:05:00.000 2014-03-29 12:15:00.000 4.500000
6 12 2014-03-25 09:05:00.000 2014-03-29 22:15:00.000 4.000000
```
> **Sample Data**
```
declare @dates table(id int, userid int, datestarted datetime, datefinished datetime);
insert into @dates(id, userid, datestarted, datefinished) values
(1, 23, '2014-03-25 09:05:00.000', '2014-03-25 12:15:00.000')
, (2, 43, '2014-03-25 09:05:00.000', '2014-03-25 12:15:00.000')
, (3, 23, '2014-03-31 09:05:00.000', '2014-03-31 12:15:00.000')
, (4, 12, '2014-03-25 09:05:00.000', '2014-03-26 12:15:00.000')
, (5, 12, '2014-03-25 09:05:00.000', '2014-03-29 12:15:00.000')
, (6, 12, '2014-03-25 09:05:00.000', '2014-03-29 22:15:00.000')
```
|
I believe this is what you're looking for - this will round the difference to 0.5 for anything under 4.5 hours in the day, and everything else over that will go to a full day:
```
Declare @StartDate DateTime = '2014-03-25 09:05:00.000',
@EndDate DateTime = '2014-03-26 12:15:00.000'
;With TotalHours As
(
Select DateDiff(Minute, @StartDate, @EndDate) / 60.0 As TotalHours
)
Select Case
When TotalHours % 24 = 0
Then Floor(TotalHours / 24)
When TotalHours % 24 < 4.5
Then Floor(TotalHours / 24) + 0.5
Else Floor(TotalHours / 24) + 1.0
End As Days
From TotalHours
```
|
SQL calculate Datediff for half of the 8 working day
|
[
"",
"sql",
"sql-server",
""
] |
My database has a cost table for items. Items can have more than one cost record.
I want to grab only the items that have multiple cost records.
The UPC field is F01, and the table is called COST\_TAB. I only want to grab items where there are multiple entries based on the F01 field.
I'm struggling with how to write this query.
|
```
select F01
from COST_TAB
group by F01
having count(*) > 1
```
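If you need the full cost rows rather than just the duplicated UPCs, one common pattern is to join the grouped list back to the table (a sketch against the same schema):

```sql
SELECT c.*
FROM COST_TAB c
INNER JOIN (SELECT F01
            FROM COST_TAB
            GROUP BY F01
            HAVING COUNT(*) > 1) dup
        ON c.F01 = dup.F01;
```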
|
You need to use group by
```
select f01
from cost_tab
group by f01
having count(f01) > 1
```
|
How to select rows where that record has multiple entries?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table of contractors like so:
```
SELECT [ContractorID]
,[Contractor]
,[BWSLVendorCode]
,[PhoneNumber]
,[Address]
,[PreQualification]
,[GSTNumber]
,[Avg] FROM Contractor
```
And a Feedback table that stores individual rating and comments for each contractor like so:
```
SELECT [ID]
,[ContractorID]
,[Rating]
,[Comments]
,[Timestamp] FROM Feedback
```
What I want to do is find the average rating of each individual contractor and include it in the contractor table under the Avg column. I thought about using a function and calling it in a computed column, but I can't seem to figure out how to write said function.
**EDIT:**
The rating is between 1 - 5
|
This will update the `Avg` column in the `Contractor` table with the average value of ratings of the contractor in the `Feedback` table.
```
;with feedback_grouped
AS
(
SELECT f.ContractorID, AVG(CAST(f.rating as decimal(3,2))) as avg_rating
FROM Feedback f
GROUP BY f.ContractorID
)
UPDATE Contractor
SET avg=avg_rating
FROM Contractor c
INNER JOIN feedback_grouped
ON c.ContractorID = feedback_grouped.ContractorID
```
This groups each feedback by `ContractorID` and calculates the average `Rating` to update the original `Contractor` table
Ensure that the `Avg` column in `Contractor` table is a `decimal` datatype since the average might have decimal values. You can change the `(CAST(f.rating as decimal(3,2)))` to the same decimal datatype you are using in the `Avg` column.
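To see why the CAST matters in SQL Server, compare integer and decimal averaging on two hypothetical ratings:

```sql
SELECT AVG(r) AS int_avg,                        -- 3 (integer division truncates 7/2)
       AVG(CAST(r AS decimal(3,2))) AS dec_avg   -- 3.5
FROM (VALUES (3), (4)) AS t(r);
```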
|
**EDIT**
To automatically update the column value, you can use a trigger, which would look something like:
```
CREATE TRIGGER [DBName].[updateAvg]
ON [DBName].Contractors
AFTER UPDATE,INSERT
AS
BEGIN
;WITH avgRates AS (
SELECT [ContractorID]
,SUM([Rating]) as totalRating
,COUNT(*) as noOfRatings
FROM Feedback
GROUP BY ContractorID
)
UPDATE C
SET Avg = A.totalRating / A.noOfRatings
FROM Contractor C
INNER JOIN avgRates A
ON C.ContractorID = A.ContractorID
END
```
My syntax may be wrong on the `CREATE TRIGGER`, so you would want to reference the official documentation:
[CREATE TRIGGER (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms189799.aspx)
|
SQL Server - How to get the average rating from another table and include it in current table?
|
[
"",
"sql",
"sql-server",
"average",
"calculated-columns",
""
] |
I am trying to combine a date field and a number field into one in my query - I need the date to display as a number with the sequence number following.
I have tried
```
SELECT CAST(movement_date AS int) + CAST(sequence_no AS int) AS MOVE
```
This gives the result 42312 for a date of 03/11/2015 and a sequence number of 000003. But I need the result of 42309000003 - so the date and the sequence concatenated as an INT rather than added.
|
You need to cast it append it like a string as:
```
SELECT CAST(CAST(movement_date AS int) as varchar(5)) + CAST(CAST(sequence_no AS int) as varchar(6)) AS MOVE
```
**EDIT:**
```
SELECT (CAST(CAST(movement_date AS int) as varchar(5)) +
RIGHT('00000'+ CAST(CAST(sequence_no AS int) as varchar(6)),6)) AS [MOVE]
```
or you can use REPLICATE to pad the required number of 0's like this:
```
SELECT CAST(CAST(movement_date AS int) as varchar(5)) + REPLICATE('0',6-LEN(CAST(CAST(sequence_no AS int) as varchar(6)))) AS [MOVE]
```
|
This is how you should do it.
```
SELECT
{fn concat(CAST(CAST(movement_date AS int) AS varchar), CAST(CAST(sequence_no AS int) AS varchar))} AS MOVE FROM table_name
```
or
```
SELECT
CONCAT(CAST(CAST(movement_date AS int) AS varchar), CAST(CAST(sequence_no AS int) AS varchar)) AS MOVE FROM table_name
```
If you still want to select as int or any other number type, you just encapsulate everything into CAST(value AS anyNumberType).
EDIT:
If you want to keep leading zeros in "sequence\_no", you shouldn't CAST as int, because that is where leading zeros are removed. Just cast into something else (varchar maybe).
```
SELECT
{fn concat(CAST(CAST(movement_date AS int) AS varchar), CAST(sequence_no AS varchar))} AS MOVE FROM table_name
```
or
```
SELECT
CONCAT(CAST(CAST(movement_date AS int) AS varchar), CAST(sequence_no AS varchar)) AS MOVE FROM table_name
```
Also, you can use "+" sign to concatenate non-numeric values, but I prefer functions :)
|
SQL SELECT Concatenate Date and Int fields and convert result to Int
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I am not a db admin by any means so if semantics look a bit bizarre, I apologize.
In SQL Server, is it possible to use a variable for a comparison operator like so?
```
declare @compare = '>';
declare @limit = '5';
select agentId, sessionId, duration
from table_x
where duration @compare @limit
```
I do not want to write a ton of lines depending on what the variable could be, which is why I ask. Not to mention if I throw in another variable, that further makes the amount of lines even larger. Also, I like the fact that it's extensible, so in the event there was an additional operator, it would just process it anyway.
Thanks for the help!
|
You can, actually, by invoking different operators via a guard expression:
```
DECLARE @comparison varchar(2) = '>'
DECLARE @limit int = 123
SELECT
Foo
FROM
Table
WHERE
( @comparison = '>' AND Duration > @limit )
OR
( @comparison = '<' AND Duration < @limit )
OR
( @comparison = '=' AND Duration = @limit )
OR
( @comparison = '<>' AND Duration <> @limit )
```
To generalize this, you can convert this into a UDF and use it in multiple places:
```
CREATE FUNCTION VariableComparison(@comparison AS varchar(2), @value as numeric, @operand as numeric) RETURNS bit AS
BEGIN
RETURN ( @comparison = '>' AND @value > @operand )
OR
( @comparison = '<' AND @value < @operand )
OR
( @comparison = '=' AND @value = @operand )
OR
( @comparison = '<>' AND @value <> @operand )
END
```
Used like so:
```
SELECT
...
WHERE
dbo.VariableComparison( @comparison, Duration, @limit ) = 1
```
|
You can try doing this using dynamic sql, something like
```
declare @compare varchar(2) = '>';
declare @limit int = 5;
declare @dynamicQuery nvarchar(max) = N'select agentId, sessionId, duration
from table_x
where duration ' + @compare + ' ' + CAST(@limit AS nvarchar(10))
EXEC(@dynamicQuery)
```
|
Using variable in sql WHERE clause for Operation
|
[
"",
"sql",
"sql-server",
""
] |
I've got this situation in a table
```
client staff code time1 time2
------------------------
c1 null code1
c2 null code1
null s1 code1
null s2 code1
```
Trying to grouping rows by staff and code :
```
SELECT GROUP_CONCAT(client),staff,code from table GROUP BY staff,code
```
I get obviously:
```
client staff code ... time1 time2
------------------------
c1,c2 null code1
null s1 code1 <-
null s2 code1 <-
```
For the purpose of the requested feature I need the entries "focused" on the staff, so I can also get the relative time1 and time2.
The problem is that the rows indicated by the arrows above don't have any client ids, so there is no way to retrieve their data.
The client information is in the staff = null rows.
How can I achieve something like this?
```
client staff code ... time1 time2
------------------------
c1,c2 s1 code1 <-
c1,c2 s2 code1 <-
```
thanks
|
(Little late answer :), but it may help someone else !)
First, you must realize that a single `FROM` can't generate a row with **both** c1 and s1 (for example), and another row with **both** c1 and s2. Except, of course, a single row with `GROUP_CONCAT(client), GROUP_CONCAT(staff)`, but it wouldn't be useful, as it loses the link between clients and staffs.
What you have to do is to join your table with itself :
```
client staff code
------------------------
c1 null code1
c2 null code1
null s1 code1
null s2 code1
c3 null code1*
c4 null code2*
SELECT c.client, s.staff, c.code
FROM `table` c
INNER JOIN `table` s ON s.code = c.code
client staff code
------------------------
c1 null code1
c1 null code1
c1 s1 code1
c1 s2 code1
c2 null code1
c2 null code1
c2 s1 code1
c2 s2 code1
null null code1
null null code1
null s1 code1
null s2 code1
null null code1
null null code1
null s1 code1
null s2 code1
```
Obviously, the cross join between two 4-row tables gives 16 results. Excluding rows with null:
```
SELECT c.client, s.staff, c.code
FROM `table` c
INNER JOIN `table` s ON s.code = c.code AND s.staff IS NOT NULL
WHERE c.client IS NOT NULL
client staff code
------------------------
c1 s1 code1
c1 s2 code1
c2 s1 code1
c2 s2 code1
```
Then, you can group by staff and code, and aggregates the clients :
```
SELECT GROUP_CONCAT(c.client), s.staff, c.code
FROM `table` c
INNER JOIN `table` s ON s.code = c.code AND s.staff IS NOT NULL
WHERE c.client IS NOT NULL
GROUP BY staff, c.code
client staff code
------------------------
c1,c2 s1 code1
c1,c2 s2 code1
```
NB : As written, it won't show the codes having only clients and no staff, nor those having staff and no clients. If one of these cases is possible, then you have to `LEFT JOIN` the concerned table (`table c LEFT JOIN table s`, or `table s LEFT JOIN table c`, with the appropriate tests on NULL) instead of INNER JOIN.
```
client staff code time1 time2
------------------------
c1 null code1
c2 null code1
null s1 code1
null s2 code1
*c3 null code2*
*null s3 code3*
SELECT GROUP_CONCAT(c.client), s.staff, c.code
FROM `table` c
LEFT JOIN `table` s ON s.code = c.code AND s.staff IS NOT NULL
WHERE c.client IS NOT NULL
GROUP BY staff, c.code
client staff code
------------------------
c1,c2 s1 code1
c1,c2 s2 code1
c3 NULL code2
SELECT GROUP_CONCAT(c.client), s.staff, c.code
FROM `table` s
LEFT JOIN `table` c ON c.code = s.code AND c.client IS NOT NULL
WHERE s.staff IS NOT NULL
GROUP BY staff, c.code
client staff code
------------------------
c1,c2 s1 code1
c1,c2 s2 code1
NULL s3 code3
```
It's not clear why you have rows that have either client or staff filled (but not both); maybe you should rethink your model.
|
put the `where` clause before `group` like this :-
```
SELECT GROUP_CONCAT(client),staff,code from table
where client IS NOT NULL AND staff IS NOT NULL
GROUP BY staff,code
```
|
SQL group by exclude NULL value from grouping
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
I created a table using the following migration:
```
class CreateProfilePictures < ActiveRecord::Migration
def change
create_table :profile_pictures do |t|
t.integer :user_id, null: false
t.integer :picture_id, null: false
t.timestamps null: false
end
add_index :profile_pictures, :user_id, unique: true
add_index :profile_pictures, :picture_id, unique: true
end
end
```
I tried to remove the constraint with the following:
```
class FixProfilePic < ActiveRecord::Migration
def change
change_column :profile_pictures, :picture_id, :integer, unique: false
end
end
```
I still get a unique constraint violation error if I try to use the same picture\_id in more than one place. What is the proper way to remove the uniqueness constraint from picture\_id?
|
There is a problem with the accepted answer: Rollbacks don't work correctly as the unique index is not restored.
You could try this instead:
```
reversible do |dir|
dir.up do
remove_index :profile_pictures, :picture_id
add_index :profile_pictures, :picture_id
end
dir.down do
remove_index :profile_pictures, :picture_id
add_index :profile_pictures, :picture_id, unique: true
end
end
```
|
You must remove your index with:
```
remove_index :profile_pictures, :picture_id
```
and add it again with:
```
add_index :profile_pictures, :picture_id
```
[ActiveRecord::Migration](http://api.rubyonrails.org/classes/ActiveRecord/Migration.html)
|
How can I remove a unique constraint from a database column in Rails?
|
[
"",
"sql",
"ruby-on-rails",
"rails-migrations",
""
] |
I have declared in MSSQL 2 variables that can be `null` so:
```
DECLARE @startCheckDate VARCHAR(10) = '2015-10-20' --this can be null
DECLARE @endCheckDate VARCHAR(10) = '2015-10-31' -- this can be null
```
Variables can be both `null` or both not `null` no other option.
Now, in the `where` condition of my query, I want to check whether those values are `null` using `case`. I'm trying:
```
declare @since varchar(10) = '2015-10-20' --this is never null so it do not matter
select doc.id
from document
where
doc.active = 'T'
and
(
case when @startCheckDate is not null and @endCheckDate is not null
then
doc.modifiedDate > @since or (doc.modifiedDate between @startCheckDate and @endCheckDate)
else
doc.modifiedDate > @since
end)
```
|
If my understanding of this question is correct, you could do this with an `OR` condition in the `WHERE` clause as opposed to using `CASE`.
**NOTE:** You should compare `DATE` values as dates. I've added conversions to change the values from strings to `DATE` values, but it would be easier if the values were of type `DATE` in the first place in the variables and the columns that you are comparing to.
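For instance, the three variables from the question could be declared as `DATE` up front (a sketch reusing the question's values), which makes the later conversions unnecessary:

```sql
DECLARE @since          date = '2015-10-20'; -- never null
DECLARE @startCheckDate date = '2015-10-20'; -- may be null
DECLARE @endCheckDate   date = '2015-10-31'; -- may be null
```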
```
SELECT doc.id
FROM document
WHERE doc.active = 'T'
AND ( -- compare doc.modifiedDate > @since if start and end dates NULL
( @startCheckDate IS NULL
AND @endCheckDate IS NULL
AND CONVERT(DATE, doc.modifiedDate) > CONVERT(DATE, @since)
)
OR ( -- use between if start and end dates not NULL
@startCheckDate IS NOT NULL
AND CONVERT(DATE, doc.modifiedDate)
BETWEEN CONVERT(DATE, @startCheckDate)
AND CONVERT(DATE, @endCheckDate)
)
)
```
Looking at your code though, it looks like you have the `IS NULL` and `IS NOT NULL` checks the wrong way round, which I've swapped around in my sample.
With this, only the portion of the `WHERE` clause that meets the conditions is evaluated.
|
In SQL, when I need to dynamically build a WHERE clause, I build the query as a string and run it using EXEC. It helps me avoid all these CASE statements, which may be expensive in terms of performance. Something similar below; this way you can build the WHERE clause dynamically.
```
declare @since varchar(10) = '2015-10-20' --this is never null so it do not matter
declare @sql nvarchar(max) = ''
if @startCheckDate is not null and @endCheckDate is not null
begin
    set @sql = 'select doc.id from document where doc.active = ''T'' and (doc.modifiedDate > ''' + @since + ''' or doc.modifiedDate between ''' + @startCheckDate + ''' and ''' + @endCheckDate + ''')'
end
else
begin
    set @sql = 'select doc.id from document where doc.active = ''T'' and doc.modifiedDate > ''' + @since + ''''
end
exec (@sql)
```
|
Check if variables are null in where clause using case
|
[
"",
"sql",
"sql-server",
""
] |
I have 2 tables:
Table `objects`:
```
object_id | object_group_id
```
Table `attributes`:
```
attr_id | attr_object_id | attr_property_id | attr_value
```
Now, I want to get all `object_id` where `object_group_id = 1` and filters two attributes:
```
(attr_property_id = 1 AND attr_value <= '100000')
AND
(attr_property_id = 2 AND attr_value > '2000')
```
I was trying to construct some queries, like this:
```
SELECT * FROM objects as o
/* filter1 join */
INNER JOIN
attributes AS f1
ON
o.object_id = f1.attr_object_id
AND
f1.attr_property_id = 1
/* filter2 join */
INNER JOIN
attributes AS f2
ON
f1.attr_object_id = f2.attr_object_id
AND
f2.attr_property_id = 2
WHERE
o.object_group_id = 1
AND
f1.attr_value <= '100000'
AND
f2.attr_value > '2000'
```
... but still can't get what I need.
|
After a couple of hours of combining and trying, I finally did it:
```
SELECT * FROM objects as o
/* filter1 join */
INNER JOIN
attributes AS f1
ON
o.object_id = f1.attr_object_id
AND
f1.attr_property_id = 1
AND
f1.attr_value <= '100000'
/* filter2 join */
INNER JOIN
attributes AS f2
ON
f1.attr_object_id = f2.attr_object_id
AND
f2.attr_property_id = 2
AND
f2.attr_value > '2000'
WHERE
o.object_group_id = 1
```
I was close; I got it working by moving all the filter conditions into the `INNER JOIN` clauses.
|
I've come up with an alternative, possibly more understandable approach.
Let's look at the problem step by step in a simple example. Our initial tables can have the following data:
*objects* table:
```
object_id | object_group_id
1 1
2 1
3 1
4 2
5 1
6 1
```
*attributes* table:
```
attr_id | attr_object_id | attr_property_id | attr_value
1 1 1 50000
2 1 1 75000
3 1 1 150000
4 1 2 1000
5 1 2 5000
6 2 1 30000
7 2 1 200000
8 2 2 7000
9 3 1 500000
10 3 2 1000
11 4 1 90000
12 4 2 6000
13 5 1 150000
14 5 2 3000
15 6 1 70000
16 6 2 1000
```
I. We start working with the second table since the main part of the problem is actually in it, and we apply our filter directly as:
```
SELECT * from attributes
WHERE (attr_property_id = 1 AND attr_value <= 100000) OR (attr_property_id = 2 AND attr_value > 2000)
```
Notice that we use an "OR" between the conditions since we need all the rows from the *attributes* table that satisfy *one* OR the *other* condition
The result is the following:
```
attr_id | attr_object_id | attr_property_id | attr_value
1 1 1 50000
2 1 1 75000
5 1 2 5000
6 2 1 30000
8 2 2 7000
11 4 1 90000
12 4 2 6000
14 5 2 3000
15 6 1 70000
```
II. Now we need to take only those *attr\_object\_id* that have both *attr\_property\_id* "1" and "2" in the above result, i.e. those *attr\_object\_id* adhere to our initial problem's filter. We can achieve that by the following query:
```
SELECT attr_object_id, count(distinct(attr_property_id)) FROM attributes
WHERE (attr_property_id = 1 AND attr_value <= 100000) OR (attr_property_id = 2 AND attr_value > 2000)
GROUP BY attr_object_id
```
The result is:
```
attr_object_id | count
1 2
2 2
4 2
5 1
6 1
```
You can see from the above result that *attr\_object\_id* "1", "2" and "4" satisfy both of our initial problem's filters, but "5" and "6" satisfy only one of them.
Let's filter out "5" and "6" now:
```
SELECT attr_object_id FROM attributes
WHERE (attr_property_id = 1 AND attr_value <= 100000) OR (attr_property_id = 2 AND attr_value > 2000)
GROUP BY attr_object_id
HAVING count(distinct(attr_property_id)) = 2
```
And we're getting:
```
attr_object_id
1
2
4
```
III. At this point, the main part of the problem is solved and we only need to apply the filter from the *objects* table, i.e. *object\_group\_id* = 1. The query for this one is straightforward, we just need to INNER JOIN *objects* and *attributes* tables and add a condition to the WHERE clause:
```
SELECT attr_object_id FROM attributes
INNER JOIN objects on attributes.attr_object_id = objects.object_id
WHERE object_group_id = 1 AND ((attr_property_id = 1 AND attr_value <= 100000) OR (attr_property_id = 2 AND attr_value > 2000))
GROUP BY attr_object_id
HAVING count(distinct(attr_property_id)) = 2
```
The final result for our example is:
```
attr_object_id
1
2
```
|
Filtering EAV table with multiple conditions
|
[
"",
"mysql",
"sql",
"entity-attribute-value",
""
] |
I am trying to find all items on hand from `@Supplier_ID` and summarize any sales since `@Begin_Date`. What is returned are all items on hand that have never been sold and those sold since `@Begin_Date`. Items on hand that were sold *before* `@Begin_Date` are excluded from the results. How do I fix that?
I am using SQL Server 2012 and SSRS v3.
```
SELECT DISTINCT
inventory_supplier.supplier_id AS [Supp ID],
address.name AS Supplier,
inv_loc.location_id AS [Inventory Loc ID],
inv_mast.item_id AS [Item ID],
inv_mast.item_desc AS [Item Desc],
inv_loc.qty_on_hand AS QOH,
inv_loc.moving_average_cost AS MAC,
invoice_line.qty_shipped,
invoice_hdr.customer_id AS [Customer ID],
invoice_hdr.bill2_name AS Customer,
oe_line.source_loc_id AS [Sales Source Loc]
FROM
inventory_supplier
INNER JOIN
inv_mast ON inventory_supplier.inv_mast_uid = inv_mast.inv_mast_uid
INNER JOIN
address ON inventory_supplier.supplier_id = address.id
FULL OUTER JOIN
invoice_line ON inv_mast.inv_mast_uid = invoice_line.inv_mast_uid
FULL OUTER JOIN
inv_loc ON inv_mast.inv_mast_uid = inv_loc.inv_mast_uid
FULL OUTER JOIN
invoice_hdr ON invoice_line.invoice_no = invoice_hdr.invoice_no
FULL OUTER JOIN
oe_line ON invoice_hdr.order_no = oe_line.order_no
AND invoice_line.inv_mast_uid = oe_line.inv_mast_uid
WHERE
(inventory_supplier.supplier_id = @Supplier_ID)
AND (invoice_hdr.invoice_date >= @Begin_Date
OR invoice_hdr.invoice_date IS NULL)
AND (inv_loc.qty_on_hand > 0)
ORDER BY
[Item ID], [Inventory Loc ID], [Customer ID], [Sales Source Loc]
```
|
You could move your `invoice_hdr.invoice_date >= @Begin_Date` to your join statement
```
FULL OUTER JOIN
invoice_hdr ON invoice_line.invoice_no = invoice_hdr.invoice_no
AND invoice_hdr.invoice_date >= @Begin_Date
```
Don't see a lot of `FULL OUTER JOIN`s. Sure you don't want `LEFT JOIN` here?
You might want to separate out the Invoice information from the Inventory information into a subquery, and LEFT JOIN to the Invoice information.
```
SELECT DISTINCT
inventory_supplier.supplier_id AS [Supp ID],
address.name AS Supplier,
inv_loc.location_id AS [Inventory Loc ID],
inv_mast.item_id AS [Item ID],
inv_mast.item_desc AS [Item Desc],
inv_loc.qty_on_hand AS QOH,
inv_loc.moving_average_cost AS MAC,
invoices.qty_shipped,
invoices.customer_id AS [Customer ID],
invoices.bill2_name AS Customer,
invoices.source_loc_id AS [Sales Source Loc]
FROM
inventory_supplier
INNER JOIN
inv_mast ON inventory_supplier.inv_mast_uid = inv_mast.inv_mast_uid
INNER JOIN
address ON inventory_supplier.supplier_id = address.id
INNER JOIN
inv_loc ON inv_mast.inv_mast_uid = inv_loc.inv_mast_uid
LEFT OUTER JOIN
(SELECT
invoice_line.inv_mast_uid,
invoice_line.qty_shipped,
invoice_hdr.customer_id,
invoice_hdr.bill2_name,
oe_line.source_loc_id
FROM
invoice_line
INNER JOIN
invoice_hdr ON invoice_line.invoice_no = invoice_hdr.invoice_no
INNER JOIN
oe_line ON invoice_hdr.order_no = oe_line.order_no
AND invoice_line.inv_mast_uid = oe_line.inv_mast_uid
WHERE
invoice_hdr.invoice_date >= @Begin_Date
) invoices ON invoices.inv_mast_uid = inv_mast.inv_mast_uid
WHERE
inventory_supplier.supplier_id = @Supplier_ID
AND inv_loc.qty_on_hand > 0
ORDER BY
[Item ID], [Inventory Loc ID], [Customer ID], [Sales Source Loc]
```
|
Try changing
```
WHERE
(inventory_supplier.supplier_id = @Supplier_ID)
AND (invoice_hdr.invoice_date >= @Begin_Date
OR invoice_hdr.invoice_date IS NULL)
AND (inv_loc.qty_on_hand > 0)
```
to
```
WHERE
(inventory_supplier.supplier_id = @Supplier_ID)
AND (invoice_hdr.invoice_date >= @Begin_Date)
AND (inv_loc.qty_on_hand > 0)
```
|
SQL: How do I show all Items in Inventory and Sum sales of items sold in a time period?
|
[
"",
"sql",
"sql-server",
"reporting-services",
""
] |
Is it possible in SQL to select values in a column then rename the duplicate ones? (assuming maximum of one possible duplicate only)
Let's say I have a table..
```
| id | name | 0or1_id |
| 0 | Eddy | 0 |
| 1 | Allan | 0 |
| 2 | Eddy | 1 |
| 3 | Allan | 1 |
```
What query can I do to make it like this?
```
| id | name | 0or1_id |
| 0 | Eddy | 0 |
| 1 | Allan | 0 |
| 2 | Eddy-copy | 1 |
| 3 | Allan-copy | 1 |
```
|
Assuming you want to actually change the data, use `update`:
```
update t join
(select name, count(*) as cnt, min(id) as minid
from t
group by name
having cnt > 1
) tt
on t.name = tt.name and t.id <> tt.minid
set name = concat(name, '-copy');
```
If you only want a `select`, then the logic is quite similar.
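A sketch of that `select`-only variant (same hypothetical table `t` as above), which leaves the data unchanged:

```sql
-- Rows that are not the first occurrence of their name get '-copy' appended.
select t.id,
       case when tt.minid is not null and t.id <> tt.minid
            then concat(t.name, '-copy')
            else t.name
       end as name,
       t.`0or1_id`
from t left join
     (select name, min(id) as minid
      from t
      group by name
      having count(*) > 1
     ) tt
     on t.name = tt.name;
```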
|
This will work in SQL Server..
```
select id , name ,0or1_id from (
select id , name ,0or1_id ,row_number() over (partition by name order by id ) as rnm
from table)z1
where rnm =1
union
select id , name + '-copy' as new_name ,0or1_id from (
select id , name ,0or1_id ,row_number() over (partition by name order by id ) as rnm
from table)z2
where rnm > 1
```
|
SQL - Select same values in column then rename them
|
[
"",
"mysql",
"sql",
""
] |
I have a table with the following structure
```
Item Id, Start Date, End Date
1 , 2015-01-01, 2015-06-01
2 , 2015-01-01, 2015-02-01
3 , 2015-03-01, 2015-08-01
4 , 2015-06-01, 2015-10-01
```
I would like to view results so i will have *each month in the column*.
Each row will contain the *id of the item that is within this month*.
Example:
I am asking for all items that are within `2015-01-01` to `2015-03-01`.
The results should display, in columns, all the months within that range. So in this case it's 3 columns, `Jan` `Feb` and `March`.
The number of rows will be the total number of items that are within that range BUT each cell should show value of item id only if that item is within range:
example:
```
2015-01-01, 2015-02-01, 2015-03-01
1 1 1
2 2 NULL
NULL NULL 3
```
|
In order to use pivot, you can create a recursive cte to get each item id and the list of months it covers, then pivot the cte.
```
;WITH cte AS
(
SELECT [Item Id], [Start Date], [End Date]
FROM Table1
WHERE [Start Date] BETWEEN '2015-01-01' AND '2015-03-01' --Date Range you want
OR [End Date] BETWEEN '2015-01-01' AND '2015-03-01' --Date Range you want
UNION ALL
SELECT [Item Id], DATEADD(MONTH, 1, [Start Date]), [End Date]
FROM cte
WHERE DATEADD(MONTH, 1, [Start Date]) <= [End Date]
)
SELECT [2015-01-01],[2015-02-01],[2015-03-01] --List of Dates you want
FROM (
SELECT [Item Id] rn, -- need a unique id here to give one row per record
[Item Id],
CONVERT(VARCHAR(10), [Start Date], 120) [Start Date] -- Format date to yyyy-mm-dd
FROM cte
) t
PIVOT
( MAX([Item Id])
FOR [Start Date] IN ([2015-01-01],[2015-02-01],[2015-03-01])
) p
```
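One caveat with the recursive CTE approach: SQL Server caps recursion at 100 levels by default, so a span of more than about 100 months fails unless the limit is raised on the consuming statement. A self-contained illustration:

```sql
;WITH months AS
(
    SELECT CAST('2015-01-01' AS date) AS d
    UNION ALL
    SELECT DATEADD(MONTH, 1, d) FROM months
    WHERE d < '2025-01-01'  -- 120 steps: over the default limit
)
SELECT d FROM months
OPTION (MAXRECURSION 0);    -- default is 100; 0 removes the cap
```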
|
You most likely need to use dynamic SQL.
> This is your data:
```
declare @first date = '20150101';
declare @last date = '20150301';
Create Table #items(ItemId int, StartDate date, EndDate date);
Insert into #items(ItemId, StartDate, EndDate) values
(1, '2015-01-01', '2015-06-01')
, (2, '2015-01-01', '2015-02-01')
, (3, '2015-03-01', '2015-08-01')
, (4, '2015-06-01', '2015-10-01');
```
> You first need to get the range of values and columns:
```
declare @values varchar(max);
declare @cols varchar(max);
with range(d) as (
Select top(DATEDIFF(month, @first, @last)+1) cast(DATEADD(month, ROW_NUMBER() over(order by (select 0))-1, @first) as varchar(20))
From (
Select 1 From (values(1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) as x1(n)
Cross Join (values(1), (1), (1), (1), (1), (1), (1), (1), (1), (1)) as x2(n)
) as x(n)
)
Select @values = coalesce(''+@values+ ', ', ' ') + '('''+d+''')'
, @cols = coalesce(''+@cols+ ', ', ' ') + '['+left(DATENAME(month, d), 3)+CAST(year(d) as char(4))+']'
From range
;
```
This basically creates a row for each date between @first and @last and concatenates them with parentheses and commas (@values) or brackets (@cols).
> Content in @values and @cols look like this:
```
@values = ('2015-01-01'), ('2015-02-01'), ('2015-03-01')
@cols = [Jan2015], [Feb2015], [Mar2015]
```
> You then create a SQL script using theses 2 variables:
```
declare @sql nvarchar(max);
Set @sql = '
Select *
From (
Select i.ItemId, d = left(DATENAME(month, r.d), 3)+CAST(year(r.d) as char(4))
, id = case when r.d >= i.StartDate and r.d <= i.EndDate then i.ItemId end
From (values'+@values+') as r(d)
Cross Join (Select ItemId, StartDate, EndDate From #items
Where (@first >= StartDate and @first <= EndDate) or (@last >= StartDate and @last <= EndDate)
) i
) as dates
Pivot (
min(id)
For d in('+@cols+')
) as piv
';
```
This is the pivot query.
> Created SQL will look like this in this example:
```
Select *
From (
Select i.ItemId, d = left(DATENAME(month, r.d), 3)+CAST(year(r.d) as char(4))
, id = case when r.d >= i.StartDate and r.d <= i.EndDate then i.ItemId end
From (values ('2015-01-01'), ('2015-02-01'), ('2015-03-01')) as r(d)
Cross Join (Select ItemId, StartDate, EndDate From #items
Where (@first >= StartDate and @first <= EndDate) or (@last >= StartDate and @last <= EndDate)
) i
) as dates
Pivot (
min(id)
For d in( [Jan2015], [Feb2015], [Mar2015])
) as piv
```
> You can finally execute the script:
```
exec sp_executesql @sql, N'@first date, @last date', @first, @last;
```
> Ouput:
```
ItemId Jan2015 Feb2015 Mar2015
1 1 1 1
2 2 2 NULL
3 NULL NULL 3
```
|
SQL - possible pivot issue
|
[
"",
"sql",
"sql-server",
"pivot",
""
] |
Column name: Subject
Value = "Something TEST001 something"
I need to get TEST001 from "Something TEST001 Something".
If there are spaces or any special characters after the 1 of TEST001, they should be removed.
I only have this
```
SELECT
Subject
, REPLACE(SUBSTRING(MailSubject, CHARINDEX('TEST', MailSubject), LEN(MailSubject)),'', '') AS Assingment
FROM
AssingmentEmail
```
The number part of `TEST001` can be longer, but any spaces or non-numeric characters that follow should be removed.
|
Based on the comments, here is an edited solution:
```
DECLARE @t TABLE(v VARCHAR(100))
INSERT INTO @t VALUES
('Something TEST001 something'),
('Something [TEST0001] something'),
('Something TEST001 something 123 something'),
('Something {TEST0001} something 123 something')
;WITH cte AS(SELECT SUBSTRING(v, CHARINDEX('TEST', v), LEN(v)) AS v FROM @t)
SELECT SUBSTRING(v, 1, PATINDEX('%[^TEST0-9]%', v) - 1) AS v FROM cte
```
Output:
```
v
TEST001
TEST0001
TEST001
TEST0001
```
Explanation:
In `cte` you are selecting substrings:
```
TEST001 something
TEST0001] something
TEST001 something 123 something
TEST0001} something 123 something
```
Then you are searching for first occurrence of symbol that is not `T E S T 0 1 2 3...9` with `%[^TEST0-9]%` and get substring till that symbol.
|
I would do this using a subquery. It is easier to handle the intermediate results. First, get rid of the part before the "TEST". Then, take the result up to the first space:
```
select left(ms1, charindex(' ', ms1)) as TestStuff
from (select stuff(MailSubject, 1, charindex(' TEST', MailSubject), '') as ms1
      from AssingmentEmail
) ae
```
|
How do i get a certain string from a column in sql
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I try to do a simple select for some duplicates but my sql statement always end up with this error:
> Command not properly ended
What I have is a table like this
```
EAN STR
============= ====
8030524096397 AAAA
8030524096397 BBBB
8030524096731 XXXX
8030524096731 YYYY
8030524096324 CCCC
```
My select is actually simple
```
SELECT EAN, COUNT(*) FROM ean GROUP BY ean HAVING COUNT(*) > 1;
```
Reults:
```
EAN COUNT(*)
============= ========
8030524096397 2
8030524096731 2
```
Everything is fine until here! Now I want the `STR` of the duplicates and try this
```
SELECT * FROM EAN E
INNER JOIN ( SELECT EAN, COUNT(*) FROM ean GROUP BY ean HAVING COUNT(*) > 1 )
R ON
E.EAN = R.EAN;
```
But this results this error.
It exactly says this:
```
SELECT * FROM EAN E
INNER JOIN ( SELECT EAN, COUNT(*) FROM ean GROUP BY ean HAVING COUNT(*) > 1 ) R ON
^
Error: Command not properly ended
```
What am I doing wrong?
**Database information:** Gupta Centura SQLBase 7.0.1
|
I don't think SQLBase 7.01 supports proper ANSI `JOIN` syntax (aside: what a good reason to use a more modern product). The error indicates a problem on the `INNER JOIN`.
Here are two possible solutions.
First, yucky archaic `join` syntax:
```
SELECT *
FROM EAN E,
( SELECT EAN, COUNT(*) as cnt FROM ean GROUP BY ean HAVING COUNT(*) > 1 )
R
WHERE E.EAN = R.EAN;
```
Second, `IN`:
```
SELECT *
FROM EAN E
WHERE E.EAN IN ( SELECT EAN FROM ean GROUP BY ean HAVING COUNT(*) > 1 )
```
|
Try this (adding aliases to the tables with `AS` keyword)
```
SELECT * FROM EAN AS E
INNER JOIN
(SELECT EAN, COUNT(*) FROM ean GROUP BY ean HAVING COUNT(*) > 1) AS R
ON
E.EAN = R.EAN;
```
|
SQL-Select ends up in an Error when combined
|
[
"",
"sql",
"sqlbase",
"sqltalk",
""
] |
Error:
> Msg 121, Level 15, State 1, Procedure InsertNonExistingNode, Line 5
> The select list for the INSERT statement contains more items than the
> insert list. The number of SELECT values must match the number of
> INSERT columns.
Procedure in SQL Management Studio:
```
USE NWatchEntitiesUnitTest
GO
CREATE PROCEDURE InsertNonExistingNode (@TableVariable dbo.NodeTableTable READONLY, @ScalarParameter nvarchar(255))
AS
BEGIN
INSERT INTO NWatchNodes WITH (ROWLOCK) (
NodeTypeId,
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias)
SELECT
NodeTypeId,
Name,
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias
FROM @TableVariable t
/*Left Join then where ID is null to make sure the record doesn't exists*/
LEFT JOIN NWatchNodes PR WITH (NOLOCK)
ON PR.ID = @ScalarParameter
AND PR.Name = t.Name
WHERE PR.ID IS NULL
END
GO
```
|
You have a missing column in your `INSERT`: the `Name` column appears in your `SELECT` statement but not in the `INSERT` list.
To fix it, just put `Name` in your insert list between `NodeTypeId` and `Location`.
Below is the corrected query. Note that your `SELECT` lists `Name` twice, so either remove the duplicate `Name` from the select list or add it to the insert list; the example below adds the missing column.
```
INSERT INTO NWatchNodes WITH( ROWLOCK )
( NodeTypeId,
      Name, -- This column was missing and caused the error
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias
)
SELECT NodeTypeId,
Name,
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias
```
|
Your select clause has 14 columns
```
NodeTypeId,
Name,
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias
```
But in your INSERT only 13
```
NodeTypeId,
// You are missing Name here
Location,
DisplayName,
AccessLevel,
IsEnabled,
CreatedOn,
CreatedBy,
ModifiedOn,
ModifiedBy,
NativeId,
SourceId,
Name,
Alias
```
|
SQL Procedure Error - INSERT statement contains more items than the insert list
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I wrote a sql query which is as below
```
SELECT ryot_code, ryot_name, variety_group, SUM(total_area)
FROM field_survey_trn
WHERE unit_code = '01'
AND season_cntrl = 20
AND crop_type_code NOT IN (11,12)
AND ryot_code != 0
GROUP BY ryot_code, ryot_name, variety_group;
```
**output**
[](https://i.stack.imgur.com/Zs2WF.gif)
In the **variety\_group** column I am getting different variety codes for the same ryot\_code on different lines.
My question is: how can I write a query that returns all the **variety\_group** values for the same grower in a single row, as in the image below?
[](https://i.stack.imgur.com/hm0np.gif)
|
I wrapped the query in your original question in an outer query which performs a pivot on the `variety_group` column:
```
SELECT t.ryot_code, t.ryot_name,
SUM(CASE WHEN t.variety_group = 10 THEN t.theSum ELSE 0 END) AS '10',
SUM(CASE WHEN t.variety_group = 20 THEN t.theSum ELSE 0 END) AS '20',
SUM(CASE WHEN t.variety_group = 30 THEN t.theSum ELSE 0 END) AS '30'
FROM
(
SELECT ryot_code, ryot_name, variety_group, SUM(total_area) AS theSum
FROM field_survey_trn
WHERE unit_code = '01' AND season_cntrl = 20
AND crop_type_code NOT IN (11,12) AND ryot_code != 0
GROUP BY ryot_code, ryot_name, variety_group
) t
GROUP BY t.ryot_code, t.ryot_name
```
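Since MySQL (and SQLite) have no `PIVOT` operator, the conditional-aggregation approach above is the portable form. A minimal runnable sketch (SQLite via Python; the column names come from the question, but the area values are invented for illustration):

```python
import sqlite3

# In-memory table shaped like the question's aggregated output.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (ryot_code INT, ryot_name TEXT, variety_group INT, total_area REAL)")
conn.executemany("INSERT INTO survey VALUES (?, ?, ?, ?)", [
    (1, "A", 10, 2.0),
    (1, "A", 20, 3.5),
    (2, "B", 10, 1.0),
    (2, "B", 30, 4.0),
])

# One row per grower; each variety_group becomes its own column via CASE.
query = """
SELECT ryot_code, ryot_name,
       SUM(CASE WHEN variety_group = 10 THEN total_area ELSE 0.0 END) AS g10,
       SUM(CASE WHEN variety_group = 20 THEN total_area ELSE 0.0 END) AS g20,
       SUM(CASE WHEN variety_group = 30 THEN total_area ELSE 0.0 END) AS g30
FROM survey
GROUP BY ryot_code, ryot_name
ORDER BY ryot_code
"""
print(conn.execute(query).fetchall())
# -> [(1, 'A', 2.0, 3.5, 0.0), (2, 'B', 1.0, 0.0, 4.0)]
```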
|
you can do this in SQL using `PIVOT`.
```
SELECT * FROM
(
SELECT ryot_code, ryot_name, variety_group, total_area
FROM field_survey_trn
WHERE unit_code = '01'
AND season_cntrl = 20
AND crop_type_code NOT IN (11,12)
AND ryot_code != 0
) as my_name
PIVOT
(
SUM(total_area)
FOR [variety_group] IN ([10], [20], [30])
) piv1;
```
|
Sql: show data in a single row instead of different columns
|
[
"",
"mysql",
"sql",
""
] |
Suppose I have the following table:
[](https://i.stack.imgur.com/uOFXa.png)
My goal is to display a select resultset that looks like this:
[](https://i.stack.imgur.com/qt0ye.png)
The tricky part here is to display the AverageCostPerType column for every single book.
I know how to get the AverageCostPerType, it's simply the following:
```
SELECT avg(bookcost) as AverageCostPerType FROM BOOK GROUP BY BookType;
```
This will display 3 rows since I have 3 distinct types. How can I display an averagecostpertype for each book ?
I'll appreciate your help.
|
You can calculate the average per booktype in a derived table and `join` it to the original table to get the result.
```
select book_num, t.booktype, x.avgcost, bookcost, x.avgcost-bookcost
from tablename t join
(select booktype, avg(bookcost) as avgcost from tablename group by booktype) x
on t.booktype = x.booktype
```
|
you need use [analytic functions](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions004.htm)
AVG per BookType
```
select b.*, avg(bookcost) over (PARTITION BY BookType)
from book b
```
AVG for all books
```
select b.*, avg(bookcost) over ()
from book b
```
|
SQL query scenario.
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I have a table of data which has students and their subject results in it. The students will appear multiple times, once for each subject they have a result for.
```
**tableID,studentID,lastName,firstName,subject,grade**
1,1a,Student1,Name1,English,A
2,1a,Student1,Name1,Maths,A
3,1a,Student1,Name1,Science,A
4,2a,Student2,Name2,English,A
5,2a,Student2,Name2,Maths,B
6,2a,Student2,Name2,Science,A
7,3a,Student3,Name3,English,A
8,3a,Student3,Name3,Maths,A
```
---
Using Microsoft Access SQL, how can I select only the students who have received an A for all of their subjects? E.g. In the above table, I only want to select all instances of Student1 and Student3, I don't want Student2 as they have not received all A's.
|
I was able to get the results I want by using a sub-query:
```
SELECT studentID, lastName, firstName
FROM table
WHERE grade = "A"
AND studentID NOT IN (SELECT studentID FROM table WHERE grade <> "A" GROUP BY studentID)
GROUP BY studentID, lastName, firstName
```
This seems to exclude all students who received a result other than an A.
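The logic can be sanity-checked outside Access with a quick sketch (SQLite via Python, using the sample rows from the question; Access quoting differs slightly, e.g. double quotes around string literals):

```python
import sqlite3

# Load the sample data from the question into an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (studentID TEXT, lastName TEXT, firstName TEXT, subject TEXT, grade TEXT)")
conn.executemany("INSERT INTO results VALUES (?, ?, ?, ?, ?)", [
    ("1a", "Student1", "Name1", "English", "A"),
    ("1a", "Student1", "Name1", "Maths", "A"),
    ("1a", "Student1", "Name1", "Science", "A"),
    ("2a", "Student2", "Name2", "English", "A"),
    ("2a", "Student2", "Name2", "Maths", "B"),
    ("2a", "Student2", "Name2", "Science", "A"),
    ("3a", "Student3", "Name3", "English", "A"),
    ("3a", "Student3", "Name3", "Maths", "A"),
])

# Keep A rows, then drop any student who also appears with a non-A grade.
query = """
SELECT studentID, lastName, firstName
FROM results
WHERE grade = 'A'
  AND studentID NOT IN (SELECT studentID FROM results WHERE grade <> 'A')
GROUP BY studentID, lastName, firstName
"""
print(sorted(conn.execute(query).fetchall()))
# -> [('1a', 'Student1', 'Name1'), ('3a', 'Student3', 'Name3')]
```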
|
Get all students with grade A except students with any other grade
```
SELECT
studentID,lastName,firstName
FROM
(SELECT
studentID,lastName,firstName
FROM
result
WHERE
grade = 'A'
GROUP BY
studentID,lastName,firstName) GradeA
LEFT OUTER JOIN
(SELECT
studentID,lastName,firstName
FROM
result
WHERE
grade <> 'A'
GROUP BY
studentID,lastName,firstName) GradeOther
ON GradeA.studentId = GradeOther.StudentID AND GradeA.LAstName = GradeOther.LastName AND GradeA.FirstName = GradeOther.FirstName
WHERE
GradeOther.StudentID IS NULL
```
|
SQL Select entries where none of the entities have a value in a particular column
|
[
"",
"sql",
"ms-access",
""
] |
I need to do an operation based on an other operations result. Something such as:
```
SELECT
SUM(first_column) as res1,
SUM(second_column) as res2,
res1/res2 as res3
FROM my_table
```
But it seems that it's only possible to do operations on existing columns. What is the workaround for this use case?
|
You can use subquery to achieve that.
Something like this:
```
SELECT res1, res2, res1/res2 as res3
FROM (SELECT SUM(xxx) as res1, SUM(xxx) as res2
      FROM yyy) as subquery
```
|
Try something like:
```
SELECT res1/res2 as res3
FROM (SELECT SUM(xxx) as res1, SUM(xxx) as res2
      FROM yyy) as t
```
|
MySQL query with an operation based on an other operation result
|
[
"",
"mysql",
"sql",
""
] |
Good Afternoon,
I have what seems like a simple problem that has turned out to be not so simple. I have 2 dates. BeginPeriod (2010-06-10) and EndPeriod (2011-06-11).
What I would like to see is whether these dates can be broken down into their respective monthly breakdowns. For the above example, something like:
* 2010/06/10 - 2010/06/30
* 2010/07/01 - 2010/07/31
* 2010/08/01 - 2010/08/31
* ............
* 2011/06/01 - 2011/06/10
I am not particular about the method. CTEs are fine but not preferred. As they say, beggars can't be choosers.
All the best,
George
|
CTE it is.
```
DECLARE @BeginPeriod DATETIME = '2010-06-10',
@EndPeriod DATETIME = '2011-06-11'
;WITH cte AS
(
SELECT DATEADD(month, DATEDIFF(month, 0, @BeginPeriod), 0) AS StartOfMonth,
DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, @BeginPeriod) + 1, 0)) AS EndOfMonth
UNION ALL
SELECT DATEADD(month, 1, StartOfMonth) AS StartOfMonth,
DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, DATEADD(month, 1, StartOfMonth)) + 1, 0)) AS EndOfMonth
FROM cte
WHERE DATEADD(month, 1, StartOfMonth) <= @EndPeriod
)
SELECT
(CASE WHEN StartOfMonth < @BeginPeriod THEN @BeginPeriod ELSE StartOfMonth END) StartOfMonth,
(CASE WHEN EndOfMonth > @EndPeriod THEN @EndPeriod ELSE EndOfMonth END) EndOfMonth
FROM cte
```
The last `EndOfMonth` is the value you used as `@EndPeriod`; set it to `DATEADD(day, -1, @EndPeriod)` if you want the previous day.
You can use this to trim the time.
```
SELECT
CONVERT(VARCHAR(10), (CASE WHEN StartOfMonth < @BeginPeriod THEN @BeginPeriod ELSE StartOfMonth END), 120) StartOfMonth,
CONVERT(VARCHAR(10), (CASE WHEN EndOfMonth > @EndPeriod THEN @EndPeriod ELSE EndOfMonth END), 120) EndOfMonth
FROM cte
```
|
Another options with "Numbers" CTE
```
declare @df datetime, @dt datetime
set @df = '20100610'
set @dt = '20110611'
;WITH e1(n) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 10
e2(n) AS (SELECT ROW_NUMBER() OVER (ORDER BY e1.n) FROM e1 CROSS JOIN e1 AS b) -- 10*10
select
case when e2.n = 1
then @df
else dateadd(day, -day(@df) + 1, dateadd(month, e2.n - 1, @df)) end,
case when e2.n = datediff(month, @df, @dt) + 1
then dateadd(month, e2.n -1 , @df)
else EOMONTH( dateadd(month, e2.n -1 , @df) ) end
from e2
where e2.n <= datediff(month, @df, @dt) + 1
```
Instead of Numbers CTE you can use some other option for example as described here
<http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1>
I often have a Numbers table in my DB for such tasks, primarily because I started using this technique before CTEs were added to MS SQL, but it is also less typing.
|
SQL Split Period by Months
|
[
"",
"sql",
"sql-server",
"t-sql",
"common-table-expression",
""
] |
I have a sql which is failing in a left outer join subquery with
```
ORA-01427: single-row subquery returns more than one row
```
Here is the `left outer join` query fragment:
```
LEFT OUTER JOIN (aa.location) LOCATION
ON (location_info_300.client_num = location.client_num
AND location_info_300.source = location.source
AND location_info_300.location_code = location.location_code
AND 1 =
(SELECT ROW_NUMBER()
OVER(PARTITION BY location_code, client_num, SOURCE
ORDER BY expiry_date DESC)
AS rec_order_by_expiry_desc
FROM aa.location l2
WHERE location.client_num = l2.client_num
AND location.source = l2.source
AND location.location_code = l2.location_code
AND l2.expiry_date >=
TO_DATE('01-JAN-' || location_info_300.reporting_year,
'DD-MON-YYYY')
AND l2.effective_date <=
TO_DATE('31-DEC-' || location_info_300.reporting_year,
'DD-MON-YYYY')))
```
I tried fixing it by doing the following change in the last `AND` criteria:
```
1 =
(SELECT rec_order_by_expiry_desc
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY LOCATION_CODE, CLIENT_NUM, SOURCE ORDER BY EXPIRY_DATE DESC) AS REC_ORDER_BY_EXPIRY_DESC
FROM aa.LOCATION l2
WHERE location.CLIENT_NUM = l2.CLIENT_NUM
AND location.SOURCE = l2.SOURCE
AND location.LOCATION_CODE = l2.LOCATION_CODE
AND l2.EXPIRY_DATE >= TO_DATE('01-JAN-'||location_info_300.REPORTING_YEAR,'DD-MON-YYYY')
AND l2.EFFECTIVE_DATE <= TO_DATE('31-DEC-'||location_info_300.REPORTING_YEAR,'DD-MON-YYYY'))
WHERE rec_order_by_expiry_desc = 1)
```
But now I am getting the following error:
```
ORA-00904: "LOCATION_INFO_300"."REPORTING_YEAR": invalid identifier
```
I am not sure what else to try. I hope someone does though!
|
I think you're basically checking if the row exists in the subquery? If so, then just do an `EXISTS`:
```
LEFT OUTER JOIN (aa.location) LOC
ON (location_info_300.client_num = loc.client_num
AND location_info_300.source = loc.source
AND location_info_300.location_code = loc.location_code
AND exists (SELECT null
FROM aa.location l2
WHERE loc.client_num = l2.client_num
AND loc.source = l2.source
AND loc.location_code = l2.location_code
AND l2.expiry_date >= TO_DATE('01-JAN-' || location_info_300.reporting_year, 'DD-MON-YYYY')
AND l2.effective_date <= TO_DATE('31-DEC-' || location_info_300.reporting_year, 'DD-MON-YYYY')))
```
N.B. I changed the alias of the aa.location table, just to avoid any possible conflicts between the outer and sub-query's aa.location tables (much better to make sure that aliases aren't the same as existing identifier names to avoid any potential scope clash issues. Also, it makes it easier to understand when you read the query).
|
Your first subquery should return exactly one row, but it returns more than one.
This is because you use an analytic function, which returns one row per input row rather than a single aggregated value. The only way to guarantee a single row is to use an aggregate function, or conditions that ensure it.
Regarding the second query, note that you're referencing a column of a table that isn't visible in that context: Oracle only correlates a subquery with the query one level above it, so `location_info_300` cannot be reached from the doubly-nested subquery.
If you're trying to check whether a row exists or not, you should look into the EXISTS operator, or perhaps use COUNT in the subquery.
Edit: here is an example for the second option (the first was already posted):
```
(SELECT COUNT(*)
FROM aa.location l2
WHERE location.client_num = l2.client_num
AND location.source = l2.source
AND location.location_code = l2.location_code
AND l2.expiry_date >= TO_DATE('01-JAN-' || location_info_300.reporting_year, 'DD-MON-YYYY')
AND l2.effective_date <= TO_DATE('31-DEC-' || location_info_300.reporting_year, 'DD-MON-YYYY'))
```
Note that I removed the fields in the PARTITION BY clause and didn't add them in GROUP BY because they conflict with your need of only one row.
|
Oracle issue with a left outer join subquery
|
[
"",
"sql",
"oracle",
"subquery",
"left-join",
"ora-01427",
""
] |
I have a requirement of getting an employee id from department id (lets consider that one department will only contain 1 employee at most)
I can do something like below:
```
Select emp_id from emp where dept_id = 101;
```
Now in case this row was not existing, I won't be getting any value.
However since I am using this query in PL/SQL layer, it will throw a NO\_DATA\_FOUND exception (when no employee exists in db)
Can you refactor this query to return some emp\_id = -1 in case no employee exists in db.
i can do something like below (which results in 2 sql queries, so not efficient)
```
select count(1) INTO temp_count from emp where dept_id = 101; -- or use an exists clause
if (temp_count != 0)
Select emp_id from emp where dept_id = 101;
```
|
Try:
```
select nvl(min(emp_id),-1)
from emp
where dept_id = 101;
```
|
You can try this code block :
```
select case
         when e.emp_id is null then -1
         else e.emp_id
       end as emp_id
from emp e
where e.dept_id = 101
```
|
SQL query to return a particular column or a default value in case of row not existing in the db
|
[
"",
"sql",
"oracle",
""
] |
SQLFiddle : <http://sqlfiddle.com/#!9/9d5b2>
I have the following tables under MYSQL 5.6 :
```
users
id | name
---------
1 | John
groups
id | name
---------
1 | admin
constraints
id | name
------------
1 | cons1
2 | cons2
user_to_group
userid | groupid
----------------
1 | 1
constraint_to_group
constrainid | groupid
----------------------
1 | 1
constraint_to_user
constrainid | userid
----------------------
2 | 1
```
The idea is :
* I have *groups* that have constraints.
* *User* can be member or 0..n *groups*
* *Constraints* can also be directly associated to a *user*
I'm trying to build a query that return, for a particular user,
* All its *groups*
* The *constraints* in those *groups*
* The *constraints* directly associated to him
I done the following :
```
SELECT `users`.*,
`constraints`.id as cons_id,
`constraints`.name as cons_name,
`groups`.id as group_id,
`groups`.name as group_name
FROM `users`
LEFT JOIN `user_to_group`
ON `user_to_group`.userid=`users`.id
LEFT JOIN `groups`
ON `groups`.id=`user_to_group`.groupid
LEFT JOIN `constraint_to_group`
ON `constraint_to_group`.groupid=`user_to_group`.groupid
LEFT JOIN `constraint_to_user`
ON `constraint_to_user`.userid=`users`.id
LEFT JOIN `constraints`
ON (`constraints`.id=`constraint_to_group`.constraintid OR `constraints`.id=`constraint_to_user`.constraintid)
WHERE `users`.id=1
```
This gets me :
```
|id| name |cons_id|cons_name|group_id|group_name
------------------------------------------------
|1 | john |1 |cons1 |1 |admin
|1 | john |2 |cons2 |1 |admin
```
**My problem :**
*cons2* doesn't come from group *admin*, it's directly associated to the user, so I'd like *group\_id* and *group\_name* to be set to NULL (or something else) so I can know if the constraint come from a group or is directly associated to the user
|
You can perform a comparison between `constraint_to_group.constraintid` and `constraints.id`.
So, instead of:
```
`groups`.id as group_id,
`groups`.name as group_name
```
use:
```
IF(`constraint_to_group`.constraintid = `constraints`.id,
`groups`.id, NULL) as group_id,
IF(`constraint_to_group`.constraintid = `constraints`.id,
`groups`.name, NULL) as group_name
```
in the `SELECT` clause.
[**Demo here**](http://sqlfiddle.com/#!9/9d5b2/10)
|
I think I would split it into 2 unioned queries. Without doing too much checking, something like this:
```
SELECT users.id,
users.name,
constraints.id as cons_id,
constraints.name as cons_name,
groups.id as group_id,
groups.name as group_name
FROM users
LEFT JOIN user_to_group ON user_to_group.userid = users.id
LEFT JOIN groups ON groups.id = user_to_group.groupid
LEFT JOIN constraint_to_group ON constraint_to_group.groupid = user_to_group.groupid
LEFT JOIN constraints ON constraints.id = constraint_to_group.constraintid
WHERE users.id=1
UNION
SELECT users.id,
users.name,
constraints.id as cons_id,
constraints.name as cons_name,
NULL as group_id,
NULL as group_name
FROM users
LEFT JOIN constraint_to_user ON constraint_to_user.userid = users.id
LEFT JOIN constraints ON constraints.id = constraint_to_user.constraintid
WHERE users.id=1
```
|
MySQL 5.6 : multiple left join leads to incorrectly filled column
|
[
"",
"mysql",
"sql",
""
] |
I am trying to select a row count from another table even if it's empty: if it's empty it should just show 0, but the main table's rows should still be selected.
Here's my sql:
```
SELECT training.*,
count(distinct training_transactions.training_transaction_course) as completed_training_payments
FROM training
INNER JOIN training_transactions
ON training.course_id = training_transactions.training_transaction_course
WHERE course_main = ?
AND course_enabled = 'enabled'
```
Training table:
```
CREATE TABLE IF NOT EXISTS `training` (
`course_id` int(11) NOT NULL,
`course_user` int(11) NOT NULL,
`course_main` int(11) NOT NULL,
`course_type` varchar(255) NOT NULL,
`course_name` varchar(255) NOT NULL,
`course_description` text NOT NULL,
`course_location` varchar(255) NOT NULL,
`course_duration` varchar(255) NOT NULL,
`course_fitness_type` varchar(255) NOT NULL,
`course_instructor_name` varchar(255) NOT NULL,
`course_price` int(15) NOT NULL,
`course_start_date` date NOT NULL,
`course_max_attendees` int(8) NOT NULL,
`course_accommodation` varchar(255) NOT NULL,
`course_accommodation_price` varchar(255) NOT NULL,
`course_status` varchar(50) NOT NULL,
`course_enabled` varchar(10) NOT NULL DEFAULT 'enabled',
`course_location_name` varchar(255) NOT NULL,
`course_location_street` varchar(255) NOT NULL,
`course_location_town` varchar(255) NOT NULL,
`course_location_county` varchar(255) NOT NULL,
`course_location_postcode` varchar(255) NOT NULL,
`course_location_country` varchar(255) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `training`
--
INSERT INTO `training` (`course_id`, `course_user`, `course_main`, `course_type`, `course_name`, `course_description`, `course_location`, `course_duration`, `course_fitness_type`, `course_instructor_name`, `course_price`, `course_start_date`, `course_max_attendees`, `course_accommodation`, `course_accommodation_price`, `course_status`, `course_enabled`, `course_location_name`, `course_location_street`, `course_location_town`, `course_location_county`, `course_location_postcode`, `course_location_country`) VALUES
(1, 3, 4, 'Health & Safety', 'lol', 'This is just a short description, this can be editted', '1', '13', 'lol', 'lol', 5, '1991-02-12', 4, '1', '4', 'live', 'enabled', '', '', '', '', '', 'United Kingdom'),
(2, 3, 4, 'Working at Height', 'lol', '', '1', '11', 'jkjkj', 'kjkjkj', 124, '0000-00-00', 6, '0', '', 'live', 'enabled', '', '123', '123', '123', 'WN8', 'United Kingdom'),
(3, 3, 4, 'Working at Height', 'lol', '', '1', '11', 'jkjkj', 'kjkjkj', 124, '0000-00-00', 6, '0', '', 'live', 'enabled', '', '123', '123', '123', 'WN8', 'United Kingdom');
```
training\_transactions
```
CREATE TABLE IF NOT EXISTS `training_transactions` (
`training_transaction_id` int(11) NOT NULL,
`training_transaction_user` int(11) NOT NULL,
`training_transaction_course` int(11) NOT NULL,
`training_transaction_status` varchar(50) NOT NULL DEFAULT 'pending',
`training_transaction_payment_status` varchar(50) NOT NULL,
`training_transaction_cost` int(11) NOT NULL,
`training_transaction_enabled` varchar(50) NOT NULL DEFAULT 'enabled',
`training_transaction_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`training_transaction_billing_name` varchar(250) NOT NULL,
`training_transaction_billing_address1` varchar(250) NOT NULL,
`training_transaction_billing_address2` varchar(250) NOT NULL,
`training_transaction_billing_city` varchar(250) NOT NULL,
`training_transaction_billing_state` varchar(250) NOT NULL,
`training_transaction_billing_postcode` varchar(250) NOT NULL,
`training_transaction_billing_country` varchar(250) NOT NULL
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
--
-- Dumping data for table `training_transactions`
--
INSERT INTO `training_transactions` (`training_transaction_id`, `training_transaction_user`, `training_transaction_course`, `training_transaction_status`, `training_transaction_payment_status`, `training_transaction_cost`, `training_transaction_enabled`, `training_transaction_date`, `training_transaction_billing_name`, `training_transaction_billing_address1`, `training_transaction_billing_address2`, `training_transaction_billing_city`, `training_transaction_billing_state`, `training_transaction_billing_postcode`, `training_transaction_billing_country`) VALUES
(1, 3, 1, 'pending', 'complete', 0, 'enabled', '2015-09-17 14:02:29', *removed my address*);
```
At the moment only the training course with id 1 is showing, because a row exists for it in transactions. How can I make all training courses display even when no matching id exists in training\_transactions?
|
Use LEFT JOIN instead of inner join to list all records from the table on the left of the join, plus you need to place the count() in a subquery to get all fields from the training table:
```
SELECT training.*, ifnull(T.transactioncount,0) as transactioncount
FROM training
LEFT JOIN (SELECT training_transaction_course, COUNT(training_transaction_course) as transactioncount FROM training_transactions GROUP BY training_transaction_course) AS T
ON training.course_id = T.training_transaction_course
WHERE course_main = ?
AND course_enabled = 'enabled'
```
|
INNER JOIN only selects records where rows with the join key exist in both tables.
With LEFT JOIN / RIGHT JOIN, selections can be made where records on one side don't exist; their field values will be NULL, and when counted with COUNT(column) they come out as zero, since COUNT skips NULLs.
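The "counted as zero" behaviour is easy to confirm: COUNT(column) skips NULLs, so unmatched left-join rows produce 0. A minimal sketch (SQLite via Python, with the question's tables reduced to their join columns):

```python
import sqlite3

# Simplified shapes of the question's two tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE training (course_id INTEGER);
CREATE TABLE training_transactions (training_transaction_course INTEGER);
INSERT INTO training VALUES (1), (2), (3);
INSERT INTO training_transactions VALUES (1);   -- only course 1 has a transaction
""")

# COUNT(t.training_transaction_course) counts non-NULL values only,
# so courses with no matching transaction row come back as 0.
query = """
SELECT c.course_id, COUNT(t.training_transaction_course) AS n
FROM training c
LEFT JOIN training_transactions t
       ON c.course_id = t.training_transaction_course
GROUP BY c.course_id
ORDER BY c.course_id
"""
print(conn.execute(query).fetchall())   # -> [(1, 1), (2, 0), (3, 0)]
```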
|
Selecting count from another table is only showing 1 row where it exists
|
[
"",
"mysql",
"sql",
""
] |
Normally, in SQL Developer, if one does, let's say, `INSERT` statement, then there is an output in `Script output`:
```
insert into a (a) values (1);
1 rows inserted.
```
If, then, we compile it into procedure:
```
create or replace procedure testp as
begin
insert into a (a) values (1);
end;
PROCEDURE TESTP compiled
```
and make call to it:
```
execute testp
anonymous block completed
```
There is no output about how many rows were inserted. How then, can we turn on detailed output in procedure?
|
If you just want "detailed output" that will be printed out for users running a procedure interactively in an IDE with appropriate settings enabled, you could
```
create or replace procedure testp as
begin
insert into a (a) values (1);
dbms_output.put_line( sql%rowcount || ' row inserted. ' );
end;
```
Assuming that the caller is using an application that knows how to display what is written to `dbms_output` (i.e. an IDE), that they've enabled `dbms_output` (using the `dbms_output` window in SQL Developer), that they've allocated a sufficient buffer, etc., the caller will see "1 row inserted" (you could add more code if you want to handle the grammar of singular/ plural programmatically).
Why do you want to have that sort of output though? That usually implies that you should be doing some real logging to a log table of some sort rather than hoping that an interactive user is running your code and seeing the output.
|
> There is no output about how many rows were inserted. How then, can we turn on detailed output in procedure?
SQL and PL/SQL are not same. In PL/SQL, you could use **SQL%ROWCOUNT** to get the total number of rows affected by the **DML**.
To get this in the output buffer, use **DBMS\_OUTPUT.PUT\_LINE**.
For example,
```
DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows were inserted' );
```
Make sure you enable DBMS\_OUTPUT in your client.
In **SQL\*Plus**, do:
```
set serveroutput on
```
Other GUI based client tools have an option to enable it.
I wouldn't suggest DBMS\_OUTPUT in production though. You could use it for logging purposes, e.g. to log the number of rows affected by a program on a daily basis.
|
Getting output in PLSQL procedure
|
[
"",
"sql",
"oracle",
"plsql",
"oracle-sqldeveloper",
""
] |
I have query like this
```
SELECT P9_IR259_FI, FX_IR259_SYS_PROD_TYP, FX_IR259_SLS_POS, FX_IR259_SLS_CHNL, PX_IR259_OFC_CDE, FX_IR259_SPR_OFC_CDE, FX_IR259_NAME, SUPERNAME, SUPERCDE, SUPERPOS
FROM IR259 K
LEFT OUTER JOIN
(SELECT FX_IR259_NAME AS SUPERNAME ,FX_IR259_SLS_POS AS SUPERPOS, PX_IR259_OFC_CDE AS SUPERCDE
FROM IR259 M
WHERE P9_IR259_FI = 75214 AND FX_IR259_SLS_POS = 2 AND FX_IR259_NAME LIKE 'ME%'
) ON K.FX_IR259_SPR_OFC_CDE = SUPERCDE
WHERE
FX_IR259_SLS_POS = '1'
AND FX_IR259_SLS_CHNL = 'BRO'
AND F9_IR259_CRE_TMS > 00000000000000000
ORDER BY FX_IR259_SLS_POS, FX_IR259_SPR_OFC_CDE, FX_IR259_SLS_CHNL
```
but the result does not seem correct; I want to show SUPERNAME values like 'ME%' only.
[](https://i.stack.imgur.com/vRUsD.png)
The result shows SUPERNAME values like 'ME%', but other records show too.
Thanks for Advice
|
You need to add another filter to the where clause.
```
WHERE
FX_IR259_SLS_POS = '1'
AND FX_IR259_SLS_CHNL = 'BRO'
AND F9_IR259_CRE_TMS > 00000000000000000
AND SUPERNAME LIKE 'ME%'
```
|
You only need to add another condition in the where clause
```
SELECT P9_IR259_FI, FX_IR259_SYS_PROD_TYP, FX_IR259_SLS_POS, FX_IR259_SLS_CHNL, PX_IR259_OFC_CDE, FX_IR259_SPR_OFC_CDE, FX_IR259_NAME, SUPERNAME, SUPERCDE, SUPERPOS
FROM IR259 K
LEFT OUTER JOIN
(SELECT FX_IR259_NAME AS SUPERNAME ,FX_IR259_SLS_POS AS SUPERPOS, PX_IR259_OFC_CDE AS SUPERCDE
FROM IR259 M
WHERE P9_IR259_FI = 75214 AND FX_IR259_SLS_POS = 2 AND FX_IR259_NAME LIKE 'ME%'
) ON K.FX_IR259_SPR_OFC_CDE = SUPERCDE
WHERE
FX_IR259_SLS_POS = '1'
AND FX_IR259_SLS_CHNL = 'BRO'
AND F9_IR259_CRE_TMS > 00000000000000000
AND SUPERNAME LIKE 'ME%'
ORDER BY FX_IR259_SLS_POS, FX_IR259_SPR_OFC_CDE, FX_IR259_SLS_CHNL
```
|
SQL Query Left Outer Join with specific record
|
[
"",
"sql",
"oracle",
""
] |
T-SQL User
SSMS 2008
I am running two (what are supposed to be) effectively *identical* queries that pull records from a table (say, Table A), using a WHERE statement to define the date parameters by which records will be returned. The issue is that the 2 queries are returning different counts. Could anyone help me understand why? As you can see, I am trying to essentially pull September records. Here are the queries...
```
/*Returns 26,310 records*/
select A.*
from A
where A.Date between '9/1/2015' and '9/30/2015'
/*Returns 27,925 records*/
select A.*
from A
where YEAR(A.Date)*100 + MONTH(A.Date) = 201509
```
|
If you're working with datetime data (as opposed to purely dates, with no time components), I'd recommend forgoing between and using an *exclusive* endpoint:
```
select A.*
from A
where A.Date >='20150901' and A.Date < '20151001'
```
Exclusive endpoints tend to be easier to compute *and* you don't have to worry about the precision of the time component.
The problem with your first query is that `'9/30/2015'` is the same as `'9/30/2015 00:00:00'` - so, for instance, a `Date` value of `'9/30/2015 00:00:01'` (and any later time) is greater than that value. Some people try to compute the last moment of the day (as 23:59:59, or 23:59:59.997), but you have to get it precisely right or you'll miss values. Hence my comment above regarding precision.
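The midnight pitfall and the half-open fix can be sketched quickly (SQLite via Python; ISO-formatted text dates stand in for SQL Server datetime values, since both compare chronologically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (d TEXT)")  # ISO datetimes compare correctly as text
conn.executemany("INSERT INTO a VALUES (?)", [
    ("2015-09-01 08:00:00",),
    ("2015-09-30 00:00:00",),
    ("2015-09-30 14:30:00",),   # later on the last day of the month
])

# BETWEEN's upper bound is midnight on the 30th, so the 14:30 row is lost.
between = conn.execute(
    "SELECT COUNT(*) FROM a WHERE d BETWEEN '2015-09-01' AND '2015-09-30 00:00:00'"
).fetchone()[0]

# Half-open range: >= first day of the month, < first day of the next month.
half_open = conn.execute(
    "SELECT COUNT(*) FROM a WHERE d >= '2015-09-01' AND d < '2015-10-01'"
).fetchone()[0]

print(between, half_open)   # -> 2 3
```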
|
Try changing the first query like this. It could be due to `time` part in your `Date` column
```
select A.*
from A
where cast(A.Date as date) between '9/1/2015' and '9/30/2015'
```
|
2 of the "same" WHERE statements returning different count of records
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I have two tables, `customers` and `purchases`. `Purchases` has a `total_price` column and foreign key `cid` referencing `customers.cid`. I need to select the names of customers (cname) who made the highest total\_price purchase.
I'm trying this
```
select
cname
from
customers c
where exists
(select pid
from purchases p
where total_price in (select max(total_price)
from purchases p
where max(total_price) = total_price
and p.cid = c.cid))
```
I get the error, group function is not allowed here -->
```
where max(total_price) = total_price
```
Please help me out
|
You could just sum up the totals for each customer and then order it by highest totals like so:
```
select cname, sum(total_price) as totals
from customers c
inner join purchases p
on c.cid = p.cid
group by cname
order by totals desc
limit 1
```
Here's an example with MySQL database: <http://sqlfiddle.com/#!9/133fa/1>
I have an example with Oracle here: <http://sqlfiddle.com/#!4/9d786/4>
```
with
totals as
(
select cid, sum(total_price) as totals
from purchases
group by cid
),
highest as (
select max(totals) as highest from totals
)
select cname
from customers c
inner join totals t on c.cid = t.cid
inner join highest h on t.totals = h.highest
```
|
You can also store the highest purchase for each customer in a temporary table (in SQL Server, for instance):
```
SELECT c.cid, cname, max(total_price) as total
into #temp_customers_highest_purchase
FROM customers c
JOIN purchases p on c.cid = p.cid
group by c.cid, cname
```
After that you can select the customer with the highest total\_price:
```
select cname, total
from #temp_customers_highest_purchase tmp
join customers c on c.cid = tmp.cid
```
This approach could help with performance since you are not using subqueries.
If you are not using SQL Server you can try using the WITH clause, which is also very helpful for performance.
```
WITH subquery AS (
SELECT c.cid, cname, max(total_price) as total
FROM customers c
JOIN purchases p on c.cid = p.cid
group by c.cid, cname)
SELECT total from subquery sb WHERE sb.cid = 'id you want'
```
|
SQL query to find name with max price purchase made?
|
[
"",
"sql",
"oracle",
""
] |
I'm trying to join 2 tables based on a value that is present in another table, specifically where a field is null.
Depending on if that field is null, I'd like to execute a LEFT JOIN in a different way.
Example:
```
LEFT JOIN (SELECT report_transaction_id, line_number, sale_staff_id, item_type, article_id, sale_count, return_count, line_price_inc_vat, merchandise_category
FROM [CRM].[TransactionItems]) ti
ON th.report_transaction_id = ti.report_transaction_id
CASE ti.article_id WHEN IS NULL THEN
LEFT JOIN (SELECT TOP 1 article_id, merchandise_category, long_desc, division_code, division_desc, group_code, group_desc
FROM [CRM].[Products]) p
ON ti.MERCHANDISE_CATEGORY = p.MERCHANDISE_CATEGORY
ELSE
LEFT JOIN (SELECT article_id, merchandise_category, long_desc, division_code, division_desc, group_code, group_desc
FROM [CRM].[Products]) p
ON ti.article_id = p.article_id
END
```
The CASE section is the part that is erroring, with INCORRECT SYNTAX NEAR CASE. As far as I can see, using CASE isn't a valid way of choosing which of these joins to use for the Products table, so I was wondering what the correct way would be of determining which join needs to be run.
Thanks.
|
I don't think you can use CASE like that, you will have to do something like this:
```
LEFT JOIN (SELECT TOP 1 article_id, merchandise_category, long_desc, division_code, division_desc, group_code, group_desc
           FROM [CRM].[Products]) p
    ON ti.MERCHANDISE_CATEGORY = p.MERCHANDISE_CATEGORY
   AND ti.article_id IS NULL
LEFT JOIN (SELECT article_id, merchandise_category, long_desc, division_code, division_desc, group_code, group_desc
           FROM [CRM].[Products]) q
    ON ti.article_id = q.article_id
```
Join both subqueries, but move the `ti.article_id IS NULL` test into the first join's ON clause; a WHERE clause cannot sit between two joins, and `ti.article_id = q.article_id` already fails to match when `article_id` is NULL, so the second join needs no extra condition.
The second subquery gets a different alias (here 'q'), and in the main SELECT statement, use `ISNULL` or `COALESCE` to return whichever side matched, e.g:
```
SELECT ISNULL(p.article_id, q.article_id)
```
|
A big part of the problem may be in your use of TOP 1 in this subquery:
```
(SELECT TOP 1 article_id, merchandise_category, long_desc, division_code,
division_desc, group_code, group_desc
FROM [CRM].[Products])
```
As written, TOP 1 here means, "Independent of everything else in this query, randomly pick one row from [CRM].[Products], and use that row for my join"
I assume this isn't the intent - in which case, you need to clarify (in English if you don't know the SQL syntax) exactly what you're trying to accomplish with the TOP 1.
This is really a situation where it'd be helpful if you provided some sample data and showed desired output.
|
Joining Same Table using Different Fields Based on Previous Values
|
[
"",
"sql",
"left-join",
""
] |
I have two queries:
```
'SELECT * FROM `table` WHERE weight = 0 OR weight IS NULL'
```
and
```
'SELECT * FROM `table`'
```
The first query returns around 4000 values, the second query returns around 4100.
I'm attempting to create a query that will return the rows which are distinct between the two queries. I'm attempting this with a nested or sub-query, but I'm struggling with the syntax, having only worked with very simple queries before. Could anyone suggest how I might do this?
|
I think this does what you want:
```
SELECT *
FROM `table`
WHERE NOT (weight = 0 OR weight IS NULL);
```
That is more simply written as:
```
SELECT *
FROM `table`
WHERE weight <> 0;
```
|
use MINUS operator as below
```
SELECT * FROM `table`
MINUS
SELECT * FROM `table` WHERE weight = 0 OR weight IS NULL
```
|
Creating a mySQL sub query to list distinct rows from two queries
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to come up with a query that will list all the task_groups where all task_names in a group are performed by the 'AUTO' user, except for the 'Initial' task, which will be manual.
So for the data below I should only see Task_Group '1' in the result and not Task_Group '2'.
```
CREATE TABLE [dbo].[QUERY_TST](
[ID] [int] IDENTITY(1,1) PRIMARY KEY,
[TASK_GROUP] [int] NOT NULL,
[TASK_NAME] [varchar](50) NULL,
[PERFORMED_BY] [varchar](10) NULL
)
--Data
INSERT INTO QUERY_TST VALUES(1, 'INITIAL', 'MANUAL')
INSERT INTO QUERY_TST VALUES(1, 'TASK1', 'AUTO')
INSERT INTO QUERY_TST VALUES(1, 'TASK2', 'AUTO')
INSERT INTO QUERY_TST VALUES(1, 'TASK3', 'AUTO')
INSERT INTO QUERY_TST VALUES(2, 'INITIAL', 'MANUAL')
INSERT INTO QUERY_TST VALUES(2, 'TASK1', 'AUTO')
INSERT INTO QUERY_TST VALUES(2, 'TASK2', 'MANUAL')
INSERT INTO QUERY_TST VALUES(2, 'TASK3', 'AUTO')
```
|
This should get you what you're after:
```
SELECT DISTINCT
task_group
FROM query_tst As a
WHERE (
(NOT EXISTS (
SELECT NULL
FROM query_tst As b
WHERE (
(a.task_group = b.task_group) And
(b.task_name <> 'Initial') And
(b.performed_by <> 'AUTO')
)
))
);
```
|
* All tasks but one are performed by `AUTO`
* The Initial task is performed by `MANUAL`
```
select [TASK_GROUP]
from [dbo].[QUERY_TST]
GROUP BY [TASK_GROUP]
HAVING SUM(CASE WHEN [PERFORMED_BY] = 'AUTO' AND [TASK_NAME] <> 'Initial' THEN 1
ELSE 0 END) = COUNT(*)-1
AND SUM(CASE WHEN [PERFORMED_BY] = 'MANUAL' AND [TASK_NAME] = 'Initial' THEN 1
ELSE 0 END) = 1
```
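To sanity-check the HAVING logic, the sketch below replays it in SQLite via Python. One assumption to note: SQLite string comparison is case-sensitive, so the sketch uses 'INITIAL' consistently, whereas SQL Server's default collation would also match 'Initial':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE query_tst (task_group INT, task_name TEXT, performed_by TEXT)")
conn.executemany("INSERT INTO query_tst VALUES (?, ?, ?)", [
    (1, 'INITIAL', 'MANUAL'), (1, 'TASK1', 'AUTO'),
    (1, 'TASK2', 'AUTO'),     (1, 'TASK3', 'AUTO'),
    (2, 'INITIAL', 'MANUAL'), (2, 'TASK1', 'AUTO'),
    (2, 'TASK2', 'MANUAL'),   (2, 'TASK3', 'AUTO'),
])

# Group 1: 3 AUTO non-initial tasks = COUNT(*)-1, and 1 MANUAL initial -> kept.
# Group 2: only 2 AUTO non-initial tasks -> rejected.
groups = conn.execute("""
    SELECT task_group
    FROM query_tst
    GROUP BY task_group
    HAVING SUM(CASE WHEN performed_by = 'AUTO'
                     AND task_name <> 'INITIAL' THEN 1 ELSE 0 END) = COUNT(*) - 1
       AND SUM(CASE WHEN performed_by = 'MANUAL'
                     AND task_name = 'INITIAL' THEN 1 ELSE 0 END) = 1
""").fetchall()
print(groups)  # [(1,)]
```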
|
SQL query for fetching a value where all but one row matches
|
[
"",
"sql",
"sql-server",
""
] |
So I'm wondering which one is best performance wise and also if using an AND is simply bad practice here.
Compare the two following queries ends :
Using a "WHERE" at the end :
```
select c.cust_last_name,
o.order_total,
oi.quantity
from customers c
join orders o on (c.customer_id = o.customer_id)
join order_items oi on (o.order_id = oi.order_id)
where c.GENDER='M';
```
Using an "AND" at the end :
```
select c.cust_last_name,
o.order_total,
oi.quantity
from customers c
join orders o on (c.customer_id = o.customer_id)
join order_items oi on (o.order_id = oi.order_id and c.GENDER='M');
```
The AND piggybacks on the last ON clause's conditions to retrieve exactly the same dataset as the first query. Is this OK?
|
In this instance, I doubt it would make much difference to Oracle which version of the query you used. You could check by looking at the explain plan for each query.
However, it is only "safe" to move the `c.gender = 'M'` predicate into the join condition here because you're doing an inner join. If you were doing an outer join, you'd see different results depending on whether that predicate was in the where or join clause.
|
It makes no difference whether to put the condition in the WHERE clause or an ON clause.
But yes, what you show *is* bad practise, because `and c.GENDER='M'` has nothing to do with which records to join from table `order_items`. The criteria in an ON clause should always belong with its table.
An example with additional criteria on the order items table would be
```
join order_items oi on (o.order_id = oi.order_id and oi.price > 50)
```
Here it is more or less a matter of personal preference whether you want to see this in the ON clause or the WHERE clause. You could argue that you join the tables on their order IDs and then only *keep* results with a price higher than 50, so the join is on the IDs only. Or you could argue that you *join* order items with a price > 50. Both statements are semantically correct.
However it is a good habit to always have all criteria on a table in its ON clause. When you change
```
inner join order_items oi on (o.order_id = oi.order_id)
where oi.price > 50
```
to
```
left join order_items oi on (o.order_id = oi.order_id)
where oi.price > 50
```
this is effectively an inner join still, because the outer-joined records will have a price of NULL which doesn't meet your WHERE clause criteria so you'd remove the records right after creating them :-) So you would have to move the criteria to the ON clause because of the other join type. Wouldn't it be better to have it there already?
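The difference described above is easy to reproduce. In this sketch (Python's `sqlite3`, invented data), the same `price > 50` predicate keeps the unmatched order when placed in the ON clause, but turns the LEFT JOIN into an effective inner join when placed in WHERE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INT)")
conn.execute("CREATE TABLE order_items (order_id INT, price INT)")
conn.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,)])
conn.execute("INSERT INTO order_items VALUES (1, 30)")  # order 2 has no items

# Predicate in the ON clause: both orders survive, with NULL item columns.
in_on = conn.execute("""
    SELECT o.order_id, oi.price
    FROM orders o
    LEFT JOIN order_items oi ON o.order_id = oi.order_id AND oi.price > 50
    ORDER BY o.order_id
""").fetchall()

# Predicate in WHERE: the NULL price fails the test, so every row vanishes.
in_where = conn.execute("""
    SELECT o.order_id, oi.price
    FROM orders o
    LEFT JOIN order_items oi ON o.order_id = oi.order_id
    WHERE oi.price > 50
""").fetchall()

print(in_on)     # [(1, None), (2, None)]
print(in_where)  # []
```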
|
Using an sql join's "ON" statement with an "AND" instead of a "WHERE"
|
[
"",
"sql",
"oracle",
"join",
""
] |
I have three different models with association
**user.rb**
```
has_many :products
has_many :comments
```
**comment.rb**
```
belongs_to :user
belongs_to :product
```
**Product.rb**
```
has_many :comments
belongs_to :user
def product_comments_object
comments_object = self.comments.all.select(:id,:user_id,:comment)
{comments: comments_object}
end
```
Now, I'm getting the `comments_object` like:
```
"comments": [
{
"id": 1,
"user_id": 1,
"comment": "comment number 1"
},
{
"id": 2,
"user_id": 1,
"comment": "comment number 2"
}
]
```
But how do I get `user_name` in place of `user_id`? I tried using `map` but it didn't help me.
|
You need to parse the comment objects to get the user name: iterate over each comment and call a `parse_comment` method which returns a hash including the user name.
```
def product_comments_object
comments_object = []
self.comments.each do |comment|
comments_object << parse_comment(comment)
end
{comments: comments_object}
end
def parse_comment comment
  {id: comment.id, user_name: comment.user.name, comment: comment.comment}
end
```
|
Try this:
```
def product_comments_object
  comments_object = self.comments.joins(:user)
                        .select('comments.id, users.name AS user_name, comments.comment')
{ comments: comments_object }
end
```
|
How to map user name to each comment
|
[
"",
"sql",
"ruby-on-rails",
"model",
"associations",
""
] |
My connection string is saved in a string variable named str.
What I am trying to do is use input from the user as part of the string.
The parts I want to take from the user are the ID and PASS.
I am simply trying to check the connection status with the ID and the PASS as inputs from the user.
```
Dim str As String = "Data Source=DESKTOP;uid=ID;pwd=PASS;database=DB"
Dim conn As New SqlConnection(str)
Private Sub btnconnect_Click(sender As Object, e As EventArgs) Handles btnconnect.Click
PW = txtadminpass.Text
Try
conn.Open()
conn.Close()
MsgBox("GOOD")
Catch ex As Exception
MsgBox(ex.Message)
End Try
End Sub
```
I haven't had much luck while using + and & for the strings.
any help would be appreciated.
|
So, you need to check if the user is allowed to log into the database or not. The way you have followed looks good, you define the connection string based on the given ID and password, and you try to establish a connection, if it fails, the user can't log in, else he can do that.
However, the way you defined the string is wrong, you must use `concatenation` to preserve the ID and password values, try this,
```
Dim str As String = "Data Source=DESKTOP; uid=" & ID & "; pwd=" & PASS & ";database=DB"
```
|
The [SqlConnectionStringBuilder](https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnectionstringbuilder%28v=vs.110%29.aspx) is an appropriate class to use in this case. You can add parts of the connection string to it via properties, so there is no chance of making mistakes:
```
Imports System.Data.SqlClient
Module Module1
Sub Main()
Dim csb As New SqlConnectionStringBuilder
csb.DataSource = "DESKTOP"
csb.InitialCatalog = "DB"
csb.UserID = "z"
csb.Password = "x"
' output "Data Source=DESKTOP;Initial Catalog=DB;User ID=z;Password=x" '
Console.WriteLine(csb.ToString())
Console.ReadLine()
End Sub
End Module
```
|
i am trying to use an input from the user as part of a connection string to a SQL database
|
[
"",
"sql",
"vb.net",
"string",
""
] |
I'm trying to migrate some data from an old Access database to a new one. Some columns changed; for example, the old "address", which used to be divided into street, number, city and postal code, is now only one column.
So I need to union all those columns into only one before updating the new database.
I get no error and no warning, although it only shows the first column in the updated database.
What I'm trying to do:
```
'Open the connection with the database
oldBuys.Open()
newBuys.Open()
Dim codTable As New DataTable
Dim codTemp As Integer
Dim addressTable As New DataTable
Dim tempValue As String
'Command to find all the client codes on the new database
Dim findAllCodes As New OleDb.OleDbDataAdapter("SELECT Cod FROM Data", newBuys)
findAllCodes.Fill(codTable)
'Search for all the addresses of that code
For i = 0 To codTable.Rows.Count - 1
codTemp = codTable.Rows(i).Item(0)
'Union all the values
Dim findAllAdress As New OleDb.OleDbCommand("SELECT Street FROM Data" & _
" WHERE Cod=@cod1 UNION ALL SELECT Number FROM Data WHERE Cod=@cod2 UNION ALL SELECT City" & _
" FROM Data WHERE Cod=@cod3 UNION ALL SELECT State FROM Data WHERE Cod=@cod4 UNION" & _
" ALL SELECT postalCode FROM Data WHERE Cod=@cod5", cnCompraAntigo)
findAllAdress.Parameters.AddWithValue("@cod1", codTemp)
findAllAdress.Parameters.AddWithValue("@cod2", codTemp)
findAllAdress.Parameters.AddWithValue("@cod3", codTemp)
findAllAdress.Parameters.AddWithValue("@cod4", codTemp)
findAllAdress.Parameters.AddWithValue("@cod5", codTemp)
'Transfer the value to the variable
tempValue = findAllAdress.ExecuteScalar
'Try to update the new database
Dim updateAllAdress As New OleDb.OleDbCommand _
("UPDATE Data SET fullAdress=@end WHERE Cod=@cod", newBuys)
updateAllAdress.Parameters.AddWithValue("@end", tempValue)
updateAllAdress.Parameters.AddWithValue("@cod", codTemp)
updateAllAdress.ExecuteNonQuery()
Next
newBuys.Close()
oldBuys.Close()
```
|
What you are trying to do can be accomplished concatenating columns in a SELECT statement like this:
```
SELECT Street & ' ' & Number & ' ' & PostalCode & ' ' & City & ' ' & State
FROM Data
WHERE Cod=@cod1
```
*This SELECT works under the assumption that all columns are alphanumeric; if some of them are numeric, you have to convert those columns before the concatenation.*
The UNION statement is used to merge the results of different queries, combining the resulting "tables" on one single result set.
For example, you can select all masculine names of a table and in another select statement retrieve all feminine names, using an UNION, you can combine both select statements in a single result.
**EDIT:** I've corrected the statement substituting + for & to adjust it to Access syntax.
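For illustration, the same idea in standard SQL uses `||` for concatenation (Access uses `&`, SQL Server uses `+`). A minimal sketch with Python's `sqlite3` and invented address columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (cod INT, street TEXT, number TEXT, city TEXT)")
conn.execute("INSERT INTO data VALUES (1, 'Main St', '42', 'Springfield')")

# Standard SQL concatenation; in Access the operator would be & instead.
full = conn.execute(
    "SELECT street || ' ' || number || ' ' || city FROM data WHERE cod = ?",
    (1,)
).fetchone()[0]
print(full)  # Main St 42 Springfield
```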
|
I suspect you need just this (`||` is the SQL string concatenation operator):
```
'Union all the values
Dim findAllAdress As New OleDb.OleDbCommand("SELECT Street || Number || City || State || postalCode FROM Data" & _
" WHERE Cod=@cod1", cnCompraAntigo)
findAllAdress.Parameters.AddWithValue("@cod1", codTemp)
```
|
UNION ALL only returns the fist column
|
[
"",
"sql",
"vb.net",
"ms-access",
""
] |
I want to use a variable to determine conditional GROUP BY in my SQL statement like so:
```
group by
case when (@GroupByFirst is not null) then a.Name else b.Name end
```
Unfortunately I get an error that a.Name is
```
invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
```
Is there any solution to make it work? I forgot to mention that the SQL is part of a stored procedure, but I believe that shouldn't make any difference.
|
Your query should look like:
```
select (case when (@GroupByFirst is not null) then a.Name else b.Name end) as name
from . . .
group by (case when (@GroupByFirst is not null) then a.Name else b.Name end)
```
You cannot refer to either `a.Name` or `b.Name` in the `SELECT` (without an aggregation function, that is). You can use the expression.
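To illustrate: grouping by a CASE expression works as long as the SELECT list repeats the exact same expression. This sketch (Python's `sqlite3`, hypothetical table and columns) simulates `@GroupByFirst` with a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a_name TEXT, b_name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [('x', 'p'), ('x', 'q'), ('y', 'q')])

# The CASE expression appears identically in SELECT and GROUP BY;
# the bound parameter plays the role of @GroupByFirst.
sql = """
    SELECT CASE WHEN ? IS NOT NULL THEN a_name ELSE b_name END AS name,
           COUNT(*)
    FROM t
    GROUP BY CASE WHEN ? IS NOT NULL THEN a_name ELSE b_name END
    ORDER BY name
"""
r1 = conn.execute(sql, ('yes', 'yes')).fetchall()  # group by a_name
r2 = conn.execute(sql, (None, None)).fetchall()    # group by b_name
print(r1)  # [('x', 2), ('y', 1)]
print(r2)  # [('p', 1), ('q', 2)]
```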
|
declare @GroupByFirst varchar(30)='s'
select max(someval) from Login_Master
group by
case when @GroupByFirst is not null then a.Name else b.Name end
|
Conditional GROUP BY with custom variable
|
[
"",
"sql",
"t-sql",
"group-by",
""
] |
Allow me to preface this by saying that I am fairly new to sql, and I'm sure there is an easy way to do this that I'm not understanding.
Lets say we have a table:
```
X | Y
2 | 2
3 | 1
3 | 3
3 | 2
```
I am trying to find values of y such that x contains both 2 and 3.
Basically, y = 2 is the only value that satisfies this.
EDIT: I know that in relational algebra this is trivial with division
|
You could probably try this:
```
select y
from test
where x in (2,3)
group by y
having count(*) = 2;
```
EDIT: Note the good recommendation by Juan: in case your data contains duplicate (x, y) rows, a better way of writing the query would be this:
```
select y
from test
where x in (2,3)
group by y
having count(distinct x) = 2;
```
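The difference matters as soon as duplicates appear. In this sketch (Python's `sqlite3`, the question's rows plus one duplicate), `COUNT(*)` misses the answer while `COUNT(DISTINCT x)` still finds it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (x INT, y INT)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [(2, 2), (3, 1), (3, 3), (3, 2), (2, 2)])  # (2, 2) appears twice

q = """SELECT y FROM test WHERE x IN (2, 3)
       GROUP BY y HAVING {} = 2"""
naive = conn.execute(q.format("COUNT(*)")).fetchall()
robust = conn.execute(q.format("COUNT(DISTINCT x)")).fetchall()
print(naive)   # [] -- y=2 now has three matching rows, not two
print(robust)  # [(2,)]
```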
|
Using a conditional `SUM`: if any group of `Y` contains a `2`, the sum will be greater than 0; the same goes for `3`.
```
SELECT Y
FROM YourTable
GROUP BY Y
HAVING SUM(CASE WHEN X = 2 THEN 1 ELSE 0 END) > 0
and SUM(CASE WHEN X = 3 THEN 1 ELSE 0 END) > 0
```
|
Selecting certain values that contain every number but not only one
|
[
"",
"sql",
"oracle",
""
] |
I've written a query to delete all rows from a table and then bulk insert from a txt. If there is a problem with the bulk insert I don't want to loose my old data so I want to rollback the transaction.
This is what I've been trying:
```
BEGIN TRANSACTION
DELETE Users_Login
BULK INSERT Users_Login FROM '\\STRMV1234\ - Some_Table.txt' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2, CODEPAGE = 'ACP')
COMMIT TRANSACTION
```
However if there is an error I'm losing everything from the table. I also tried the code below, which doesn't work either:
```
BEGIN TRY
BEGIN TRANSACTION
DELETE Users_Login
BULK INSERT Users_Login FROM '\\STRMV1234\ - Some_Table.txt' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2, CODEPAGE = 'ACP')
COMMIT
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK
END CATCH
```
|
I would give this a try.
```
BEGIN TRANSACTION
DELETE FROM Users_Login
BULK INSERT Users_Login FROM '\\STRMV1234\ - Some_Table.txt' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2, CODEPAGE = 'ACP')
IF @@ERROR = 0
BEGIN
COMMIT
PRINT 'GOOD'
END
ELSE
BEGIN
ROLLBACK
PRINT 'BAD'
END
```
|
This works fine:
```
Begin Try
Begin Tran
Truncate Table data
BULK INSERT data FROM '...\data.txt' WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', FIRSTROW = 1, CODEPAGE = 'ACP')
if @@TRANCOUNT > 0 Commit
print 'ok'
End Try
Begin Catch
print 'error'
if @@TRANCOUNT > 0 Rollback
End Catch
```
|
SQL how to rollback transaction if bulk insert fails
|
[
"",
"sql",
"sql-server",
""
] |
I have a table `Employee`.
[](https://i.stack.imgur.com/Y9kmN.jpg)
How to get the table with aggregate of a column as separate column as shown in the image?
[](https://i.stack.imgur.com/IplNs.jpg)
|
The exact problem I faced last year. Hope this helps.
```
SELECT e1.EmpId,
       e1.EmpName,
       e1.EmpSalary,
       SUM(e2.EmpSalary) AS Aggregate_Salary
FROM Employee e1 JOIN Employee e2
  ON e1.EmpId >= e2.EmpId
GROUP BY e1.EmpId,
         e1.EmpName,
         e1.EmpSalary
|
Using a correlated sub-query:
```
select EmpId, EmpName, EmpSalary, (select sum(EmpSalary) from Employee e2
where e2.EmpId <= e1.EmpId) as AggregateSalary
from Employee e1
```
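A quick check of the correlated-subquery approach, using Python's `sqlite3` and invented salaries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empid INT, empname TEXT, empsalary INT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, 'Ann', 100), (2, 'Bob', 200), (3, 'Cid', 50)])

# Each row's aggregate is the sum of all salaries up to and including its EmpId.
rows = conn.execute("""
    SELECT empid, empsalary,
           (SELECT SUM(e2.empsalary) FROM employee e2
            WHERE e2.empid <= e1.empid) AS aggregate_salary
    FROM employee e1
    ORDER BY empid
""").fetchall()
print(rows)  # [(1, 100, 100), (2, 200, 300), (3, 50, 350)]
```

On SQL Server 2012 and later (not 2008, which the question targets), `SUM(EmpSalary) OVER (ORDER BY EmpId)` would be the idiomatic alternative.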
|
How to write a single SQL query to get aggregate of column as other column in SQL Server 2008?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"cumulative-sum",
""
] |
I'm currently trying to find all the entries with the same IDs in two sub queries and display the first table. I am having an issue with using aliases.
```
(SELECT *
FROM personTable
WHERE ID IN
( SELECT ID
FROM workerTable
)
AND firstName LIKE 'O%');
(SELECT *
FROM ownsTable
WHERE PhoneNumberID IN
( SELECT ID
FROM phonenumberTable
WHERE Home <>'' AND `Work` <>'' AND Cell <>''
)
);
```
I want to now check the first table's 'ID' against the second table's 'PersonID' and return the rows in the first table where ID and PersonID match.
|
Use `Exists` to do this
```
SELECT *
FROM persontable p
WHERE id IN (SELECT id
FROM workertable)
AND firstname LIKE 'O%'
AND EXISTS (SELECT 1
FROM ownstable o
WHERE phonenumberid IN (SELECT id
FROM phonenumbertable
WHERE home <> ''
AND `work` <> ''
AND cell <> '')
AND p.id = o.personid);
```
Also, if possible, convert the `IN` to `EXISTS` in all the sub-queries; it might be a little more efficient.
|
It sounds like you can just do a simple join for this:
```
SELECT p.*
FROM personTable p
JOIN ownsTable o ON o.id = p.id AND p.firstName LIKE 'O%' AND o.home <> '' AND o.work <> '' AND o.cell <> '';
```
This will select all columns from the first table as long as the id exists in the second table, and the matching rows meet the given requirements.
|
SQL Multiple Sub-query
|
[
"",
"mysql",
"sql",
""
] |
I have **table A** that is linked to **table B**, which in turn is linked to **table C**.
I can only get from A to C through a key in B.
I must get all rows from A where ALL linked rows from C have C.value = 'Y'
I tried the following code, but it already selects the row from A once a match has been found in one of the linked rows in C, not only when ALL linked rows from C match C.value = 'Y'.
```
SELECT * FROM A
LEFT JOIN B ON A.ID1 = B.ID1
LEFT JOIN C ON B.ID2 = C.ID2
WHERE C.value = 'Y'
```
Is there a way to do this in SQL?
[](https://i.stack.imgur.com/UyFgb.png)
[](https://i.stack.imgur.com/UmUXH.png)
|
This query returns all the rows from A where all the linked rows from C have C.value = 'Y', or where there is no link to B or C available. Thanks to JB King for the suggestion.
```
SELECT * FROM A WHERE A.ID1
NOT IN
(
SELECT DISTINCT A.ID1 FROM A
JOIN B ON A.ID1=B.ID1
JOIN C ON B.ID2=C.ID2 WHERE C.value = 'N'
)
```
|
I think I understand the problem now. Try this; there might be a better way, but this should work.
```
SELECT *
FROM A
LEFT JOIN B ON A.ID1 = B.ID1
LEFT JOIN C ON B.ID2 = C.ID2
WHERE C.ID2 in (SELECT ID2 FROM C WHERE VALUE = 'N')
AND C.ID2 not in (SELECT ID2 FROM C WHERE VALUE <> 'N')
```
|
SQL Select rows from table where all joined rows match value
|
[
"",
"sql",
"join",
""
] |
My Tables look like this
```
Table 1 Table 2
Users Options
id name id user_id option
------- --- -------- --------
1 Donald 1 1 access1
2 John 2 1 access2
3 Bruce 3 1 access3
4 Paul 4 2 access1
5 Ronald 5 2 access3
6 Steve 6 3 access1
```
Now, I want to join these to find the users which have only **access1**.
If I do something like
```
select t1.id,t1.name,t2.id,t2.user_id,t2.option
from table1 t1, table2 t2
where t1.id=t2.user_id
and option='access1';
```
This does not give me unique results; as in the example, I need only **user_id=3**. My data has these in the hundreds.
I also tried something like
```
select user_id from table2 where option='access1'
and user_id not in (select user_id from table2 where option<>'access1')
```
There have been other unsuccessful attempts too, but I am stuck here.
|
You can do this using an EXISTS subquery (technically, a left semijoin):
```
SELECT id, name
FROM table1
WHERE EXISTS(
SELECT * FROM table2
WHERE table1.id = table2.user_id
AND table2.option = 'access1'
)
```
If you want only users that have access1 and not any other access, add NOT EXISTS (a left anti-semi-join; there's a term to impress your colleagues!):
```
AND NOT EXISTS (
SELECT * FROM table2
WHERE table1.id = table2.user_id
AND table2.option <> 'access1'
)
```
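Replaying both predicates against the question's rows (Python's `sqlite3`; the column is renamed `opt` here only to sidestep quoting) confirms that only Bruce qualifies:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INT, name TEXT)")
conn.execute("CREATE TABLE options (id INT, user_id INT, opt TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [
    (1, 'donald'), (2, 'john'), (3, 'bruce'),
    (4, 'paul'), (5, 'ronald'), (6, 'steve')])
conn.executemany("INSERT INTO options VALUES (?, ?, ?)", [
    (1, 1, 'access1'), (2, 1, 'access2'), (3, 1, 'access3'),
    (4, 2, 'access1'), (5, 2, 'access3'), (6, 3, 'access1')])

# EXISTS requires access1; NOT EXISTS forbids any other access.
rows = conn.execute("""
    SELECT id, name FROM users
    WHERE EXISTS (SELECT 1 FROM options
                  WHERE options.user_id = users.id AND options.opt = 'access1')
      AND NOT EXISTS (SELECT 1 FROM options
                      WHERE options.user_id = users.id AND options.opt <> 'access1')
""").fetchall()
print(rows)  # [(3, 'bruce')]
```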
|
[`bool_and`](http://www.postgresql.org/docs/current/static/functions-aggregate.html) makes it simple
```
with users (id,name) as ( values
(1,'donald'),
(2,'john'),
(3,'bruce'),
(4,'paul'),
(5,'ronald'),
(6,'steve')
), options (id,user_id,option) as ( values
(1,1,'access1'),
(2,1,'access2'),
(3,1,'access3'),
(4,2,'access1'),
(5,2,'access3'),
(6,3,'access1')
)
select u.id, u.name
from
users u
inner join
options o on o.user_id = u.id
group by 1, 2
having bool_and(o.option = 'access1')
;
id | name
----+-------
3 | bruce
```
|
select distinct from join
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables
```
Table1 (Users)
ID, NAME, COUNTRY
1, Joe Bloggs, 1
2, Joe Jnr, 1
3, Joe Snr, 2
Table2 (Orders)
UsersID, Product
1, Apple TV
1, iPad Pro
1, MacBook Pro
2, iPad Mini
2, iPad Pro
3, iPad Air
3, iPad Pro
```
If i wanted a list of products bought if user lives in Country = 1
```
Select Orders.Product from Users left join Orders on Users.ID = Orders.UsersID WHERE COUNTRY = 1
```
works fine
What if, though, I wanted a list of products bought if the user lives in country = 1 and bought a MacBook Pro (but without showing the MacBook Pro in the list)?
So just show
```
Apple TV
iPad Pro
```
|
Writing in classic SQL you have a layered sub-select.
Code that should work:
```
Select Product from Orders
Where Product <> 'MacBook Pro' -- you don't want MacBook Pro in results
and UsersID in
(Select ID from Users
where country = '1' --get all Country 1 users that Purchased MacBooks
and ID in
(Select B.UsersID
from Orders B --- aliased for clarity
where Product = 'MacBook Pro'))
```
|
The quick answer for that is simple. First, select the users who bought a MacBook Pro and who are living in that country:
```
SELECT T_USERS.id
FROM Users T_USERS INNER JOIN Orders T_ORDERS
ON (T_USERS.id = T_ORDERS.UsersID)
WHERE T_USERS.Country = 1
AND T_ORDERS.product = 'MacBook Pro'
```
And then use it in a subquery, to get the products bought by those users while avoiding the rows containing the MacBook Pro:
```
SELECT Product
FROM Orders
WHERE product <> 'MacBook Pro'
AND UsersID IN (SELECT T_USERS.id
FROM Users T_USERS INNER JOIN Orders T_ORDERS
ON ( T_USERS.id = T_ORDERS.UsersID )
WHERE T_USERS.country = 1
AND T_ORDERS.product = 'MacBook Pro')
```
It works just fine.
|
SQL One to Many Condition Issue
|
[
"",
"sql",
"one-to-many",
""
] |
Suppose I have a table with columns ID and Content populated with data.
```
ID | Content
1 | a
1 | b
1 | c
2 | b
2 | a
3 | b
```
I want to find every ID that has at least one of each 'a', 'b', and 'c' so the returned table would be :
```
ID | Content
1 | a
1 | b
1 | c
```
|
Using a conditional `SUM`, validate that each required element exists at least once inside the `ID` group.
Then select `DISTINCT ID, Content` to eliminate possible duplicates:
```
SELECT DISTINCT ID, Content
From YourTable
WHERE ID IN (SELECT ID
FROM YourTable
GROUP BY ID
HAVING SUM(case when Content = 'a' then 1 else 0 end) >= 1
AND SUM(case when Content = 'b' then 1 else 0 end) >= 1
AND SUM(case when Content = 'c' then 1 else 0 end) >= 1
)
```
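Running the accepted query against the sample rows with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, content TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'b'), (2, 'a'), (3, 'b')])

# Only id 1 has at least one each of 'a', 'b' and 'c'.
rows = conn.execute("""
    SELECT DISTINCT id, content FROM t
    WHERE id IN (SELECT id FROM t GROUP BY id
                 HAVING SUM(CASE WHEN content = 'a' THEN 1 ELSE 0 END) >= 1
                    AND SUM(CASE WHEN content = 'b' THEN 1 ELSE 0 END) >= 1
                    AND SUM(CASE WHEN content = 'c' THEN 1 ELSE 0 END) >= 1)
    ORDER BY id, content
""").fetchall()
print(rows)  # [(1, 'a'), (1, 'b'), (1, 'c')]
```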
|
```
select * from (
    Select ID,
           SUM(case when Content = 'a' then 1 else 0 end) as sum_a,
           SUM(case when Content = 'b' then 1 else 0 end) as sum_b,
           SUM(case when Content = 'c' then 1 else 0 end) as sum_c
    FROM table
    Group by ID) x
where x.sum_a >= 1 and x.sum_b >= 1 and x.sum_c >= 1
```
|
SQL Counting rows with columns that satisfy a condition
|
[
"",
"mysql",
"sql",
""
] |
I am including a SQLFiddle as an example of where I am currently at. In the example image you can see that simply by grouping you get up to two lines per user, depending on their status and how many of each status they have.
[](https://i.stack.imgur.com/vWGS0.png)
<http://sqlfiddle.com/#!3/9aa649/2>
The way I want it to come out is shown in the image below: a single line per user with two totalling columns, one for Fail Total and one for Pass Total. I have been able to come close, but since BOB only has Fails and no Passes, this query leaves BOB out of the results; I want to show BOB as well, with 6 Fail and 0 Pass.
```
select a.PersonID,a.Name,a.Totals as FailTotal,b.Totals as PassTotals from (
select PersonID,Name,Status, COUNT(*) as Totals from UserReport
where Status = 'Fail'
group by PersonID,Name,Status) a
join
(
select PersonID,Name,Status, COUNT(*) as Totals from UserReport
where Status = 'Pass'
group by PersonID,Name,Status) b
on a.PersonID=b.PersonID
```
The below picture is what I want it to look like. Here is another SQL Fiddle that shows the above query in action
<http://sqlfiddle.com/#!3/9aa649/13>
[](https://i.stack.imgur.com/Gi6QT.png)
|
Use conditional aggregation:
```
select PersonID, Name,
sum(case when Status = 'Fail' then 1 else 0 end) as FailedTotal,
sum(case when Status = 'Pass' then 1 else 0 end) as PassedTotal
from UserReport
group by PersonID, Name;
```
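A minimal check of the conditional-aggregation pivot (Python's `sqlite3`, invented rows where Bob has only fails):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE userreport (personid INT, name TEXT, status TEXT)")
conn.executemany("INSERT INTO userreport VALUES (?, ?, ?)",
                 [(1, 'Bob', 'Fail')] * 3 + [(2, 'Amy', 'Pass'), (2, 'Amy', 'Fail')])

rows = conn.execute("""
    SELECT personid, name,
           SUM(CASE WHEN status = 'Fail' THEN 1 ELSE 0 END) AS failtotal,
           SUM(CASE WHEN status = 'Pass' THEN 1 ELSE 0 END) AS passtotal
    FROM userreport
    GROUP BY personid, name
    ORDER BY personid
""").fetchall()
print(rows)  # [(1, 'Bob', 3, 0), (2, 'Amy', 1, 1)]
```

Bob keeps his row even with zero passes, which is exactly what the join-based version loses.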
|
Use conditional aggregation if the number of values for `status` column is fixed.
[Fiddle](http://sqlfiddle.com/#!3/9aa649/19)
```
select PersonID,Name,
sum(case when "status" = 'Fail' then 1 else 0 end) as failedtotal,
sum(case when "status" = 'Pass' then 1 else 0 end) as passedtotals
from UserReport
group by PersonID,Name
```
|
SQL Multiple Rows to Single Row Multiple Columns
|
[
"",
"sql",
"join",
""
] |
Sorry didn't know what title to give to this question as it might get confusing.
My table has only two columns - userName1 and userName2
I have this:
```
SELECT * FROM `Friends` WHERE `userName1` = 'aName'
UNION
SELECT * FROM `Friends` WHERE `userName2` = 'aName'
```
The result of this sql statements brings me the rows that include '`aName`' in either `userName1` or `userName2`.
How do I filter the result to show only the OTHER entry, i.e. avoid returning the column that contains 'aName'?
|
You can use:
```
select userName2 from friends where userName1 = 'aName'
union
select userName1 from friends where userName2 = 'aName';
```
A method without `union`:
```
select distinct (case when userName1 = 'aName' then userName2 else userName1 end)
from friends
where 'aName' in (userName1, userName2);
```
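Both forms are easy to verify; this sketch runs the `CASE` variant with Python's `sqlite3` and invented friend pairs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE friends (username1 TEXT, username2 TEXT)")
conn.executemany("INSERT INTO friends VALUES (?, ?)",
                 [('aName', 'bob'), ('carol', 'aName'), ('dave', 'erin')])

# Whichever column holds 'aName', return the other one.
rows = conn.execute("""
    SELECT DISTINCT CASE WHEN username1 = 'aName'
                         THEN username2 ELSE username1 END AS friend
    FROM friends
    WHERE 'aName' IN (username1, username2)
    ORDER BY friend
""").fetchall()
print(rows)  # [('bob',), ('carol',)]
```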
|
I would try something like this:
```
SELECT userName2 as 'name' FROM Friends WHERE userName1 = 'aName'
UNION
SELECT userName1 as 'name' FROM Friends WHERE userName2 = 'aName'
```
You might also require that they can't be both equal to 'aName'
```
SELECT userName2 as 'name' FROM Friends WHERE userName1 = 'aName' and userName2 != 'aName'
UNION
SELECT userName1 as 'name' FROM Friends WHERE userName2 = 'aName' and userName1 != 'aName'
```
(...which could be improved)
|
simple mysql statement that I can not find
|
[
"",
"mysql",
"sql",
"union",
""
] |
Every month I get sent a file from an external company which needs to be stored in a database, each file containing up to a million records. The main data fields are Month, Year, Postcode and TransactionType.
I was proposing that we should save the data in our database as a new SQL table each month, so we know there is only a finite amount of data in each table. However, one of my colleagues said he was once told that creating a new table every month is bad practice, but he didn't know why.
If I was to have multiple tables, there would only be a maximum of 60 tables, though there may be far fewer (down to 12) dependent on how far into the past my client needs to look. This means that every month I will need to delete a month's worth of data.
However when I do my SQL queries I will only need a single row of data from a single table per query. I would think in theory this would be more efficient than having a single table filled with millions of rows.
I was wondering if anyone had any definitive reasons as to why splitting the data this way would be a bad thing to do?
|
All "like" items should be stored together in a database for the following reasons:
* You should be able to provide any subset of the items using a single `SELECT` statement only by changing the `WHERE` clause of that statement. With separate tables you will have to write code to decompose the request into the parts that compute the table name and the parts that filter that table. And you will have to duplicate that logic in each application, or teach it to each user, that wants to use your database.
* You should not artificially limit the use to which your data can be put. If you have separate monthly tables you have already substantially limited the types of queries you can enter against them without having to write more complex `UNION` queries.
* The addition of more instances of a known data type to your database should not require `ALTER`ing the structure of your database and, as a general principal, regularly-run code should not even have `ALTER` permissions
* If proper indexes are maintained, there is very little performance difference when `SELECT`ing data from a table 60 times the size of a smaller table. (There can be more effect on `INSERT` and `UPDATE` commands but it sound like you'll be doing a bulk update rather than updating the data constantly).
I can think of only two reasons for sharding data into separate tables:
* You discover that you have a performance issue that can't be resolved through better data design.
* You have records with different level of security and are relying on `GRANT SELECT` permissions to allow some users to see the records at higher levels of security.
|
A simpler method would be to add a column to that table which contains a datetime stamp of when each row was loaded into the system. That way you can filter by that particular column to segregate the data into the months/years in which it was loaded.
Another advantage from a performance perspective: if you regularly filter data this way, you can create an index on this date column.
Having multiple tables that contain the same kind of information is not recommended, for performance reasons and because of how information is stored in SQL. Eventually it will take up more space, and if one month's data needs to reference another month's, it will be quite slow.
Hope this helps.
|
Creating sql tables by month and year
|
[
"",
"mysql",
"sql",
"t-sql",
"bigdata",
""
] |
Is it possible to add numbered bullets using Oracle's LISTAGG function?
i.e.:
I have a table:
```
PRODUCT_ID PRODUCT_NAME
1001 Bananas
1002 Apples
1003 Pears
1004 Oranges
```
SQL statement:
```
SELECT LISTAGG('*' || product_name, CHR(13)) WITHIN GROUP (ORDER BY product_name) "Product_Listing"
FROM products;
```
`*` is a numbered bullet that should produce the following:
```
1. Apples
2. Bananas
3. Oranges
4. Pears
```
Also, is it possible to use letters instead of numbers?
|
The `Product_Listing_1` gives you number bullets, the `Product_Listing_2` gives you letter bullets.
```
with products as (
select 1001 as product_id, 'Bananas' as product_name from dual
union all
select 1002 as product_id, 'Apples' as product_name from dual
union all
select 1003 as product_id, 'Pears' as product_name from dual
union all
select 1004 as product_id, 'Oranges' as product_name from dual
),
make_bullets$ as (
select X.*,
row_number() over (partition by null order by product_name) as the_number_bullet,
chr(ascii('a') - 1 + row_number() over (partition by null order by product_name)) as the_letter_bullet
from products X
)
select
listagg(the_number_bullet||'. '||product_name, chr(13)) within group (order by product_name) as "Product_Listing_1",
listagg(the_letter_bullet||'. '||product_name, chr(13)) within group (order by product_name) as "Product_Listing_2"
from make_bullets$
;
```
Result:
```
Product_Listing_1 Product_Listing_2
----------------- -----------------
1. Apples a. Apples
2. Bananas b. Bananas
3. Oranges c. Oranges
4. Pears d. Pears
```
|
You can use row_number() in a subquery to generate the number:
```
with products as
(
select 1001 as product_id, 'Bananas' as product_name from dual union all
select 1002 as product_id, 'Apples' as product_name from dual union all
select 1003 as product_id, 'Pears' as product_name from dual union all
select 1004 as product_id, 'Oranges' as product_name from dual
)
SELECT LISTAGG(rn||'. '|| product_name, CHR(13)) WITHIN GROUP (ORDER BY product_name) "Product_Listing"
FROM
(SELECT product_id, product_name, row_number() over (order by product_name) as rn
FROM products)
```
or rownum
```
SELECT LISTAGG(rownum||'. '|| product_name, CHR(13)) WITHIN GROUP (ORDER BY product_name) "Product_Listing"
FROM
(SELECT product_id, product_name
FROM products
ORDER BY product_name)
```
|
Add numbered bullets using Oracle LISTAGG function
|
[
"",
"sql",
"oracle",
"listagg",
""
] |
```
SELECT TOP 1000 [pk_Id]
,[fk_resumeID]
,[fk_LoginID]
,[fk_CompanyId]
,Convert(nvarchar(11),ViewDate,105)
FROM [RecruitingDB].[Recruiting].[tbl_ViewResumeStatus]
where Convert(nvarchar(10),ViewDate,106)
between Convert(nvarchar(10),'17-10-2015',106) and Convert(nvarchar(10),'23-10-2015',106)
```
I am using the SQL above to get only October records, but I am getting September records too.
Do you have any idea why this is happening?
|
You are comparing alphabetic (string) values. You should avoid that when dealing with numeric or date values.
For example, `17-10-2015` would unintentionally fall between `16-01-2015` and `18-01-2015`.
Use this instead:
```
SELECT TOP 1000 [pk_Id]
,[fk_resumeID]
,[fk_LoginID]
,[fk_CompanyId]
,Convert(char(10),ViewDate,105)
FROM [RecruitingDB].[Recruiting].[tbl_ViewResumeStatus]
WHERE
ViewDate > '2015-10-17' and
ViewDate < dateadd(d, 1, '2015-10-23')
```
If ViewDate is a date, you can replace the *WHERE* clause with this:
```
WHERE
ViewDate BETWEEN '2015-10-17' and '2015-10-23'
```
|
Well, it will do if you convert your dates to strings. Your query is looking for anything alphabetically between '17-10-2015' and '23-10-2015'. So, for example, that would include '18-09-2015'.
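You can demonstrate the string comparison directly (a sketch):

```
SELECT CASE WHEN '18-09-2015' BETWEEN '17-10-2015' AND '23-10-2015'
            THEN 'included' ELSE 'excluded' END
-- returns 'included': '18...' sorts after '17...' and before '23...',
-- even though 18 September is outside the intended date range
```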
|
Getting wrong month
|
[
"",
"sql",
"sql-server",
""
] |
I have the below code that is not working as expected:
```
DECLARE @MySelect varchar(max), @MyRecipients varchar(max), @MyId
char(4),@MyResult varchar(max);
SET @MyId = '1';
SET @MySelect = 'SELECT SQL_Script FROM AutoSendMail.dbo.A01_St WHERE
ID =' + @MyId;
USE msdb
EXEC sp_send_dbmail
@profile_name='Operator',
@recipients='Mail@test.com',
@subject='Mail test',
@body= 'This is a test-mail',
@mailitem_id='1',
@query= @MyResult,
@attach_query_result_as_file = 1
```
I need to save the result of the select query: `@MySelect` into the variable `@MyResult`, because I need to send the result of the `SELECT` via Mail. But I can't find a way to do just that. The query should return a single string value.
|
You can do this without dynamic SQL, like so:
```
DECLARE @MyRecipients VARCHAR(MAX) ,
@MyId CHAR(4) = '1' ,
@MyResult VARCHAR(MAX);
SET @MyId = '1';
SELECT TOP 1
@MyResult = SQL_Script
FROM AutoSendMail.dbo.A01_St
WHERE ID = @MyId
USE msdb
EXEC sp_send_dbmail @profile_name = 'Operator', @recipients = 'Mail@test.com',
@subject = 'Mail test', @body = 'This is a test-mail', @mailitem_id = '1',
@query = @MyResult, @attach_query_result_as_file = 1
```
Not that you need dynamic SQL, but please note that doing this:
```
SET @MySelect = 'SELECT SQL_Script FROM AutoSendMail.dbo.A01_St WHERE ID =' + @MyId;
```
Does not execute any code, it simply sets the value of `@MySelect`. You would need to call an execute command to run the SQL:
```
EXEC SP_EXECUTESQL @MySelect
```
|
You can declare a table variable, insert the results of executing your query into it, and then select into your result variable:
```
DECLARE @t TABLE(SQL_Script varchar(max))
INSERT INTO @t EXEC(@MySelect)
SELECT TOP 1 @MyResult = SQL_Script FROM @t
```
You can also use `sp_executesql` procedure with output parameters like:
```
SET @MyId = '1';
SET @MySelect = 'SELECT @MyResult = SQL_Script FROM AutoSendMail.dbo.A01_St WHERE
ID =' + @MyId;
EXEC sp_executesql @MySelect, N'@MyResult varchar(max) OUTPUT', @MyResult = @MyResult OUTPUT
```
|
Saving the result of a SQL query into a local SQL variable
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have 3 tables in sqlserver :
```
tbl_Users - User_ID, User_Name
tbl_Roles - Role_ID, Role_Name
tbl_Users_Roles_MTM - User_ID, Role_ID
```
A user can have multiple roles assigned to him, and that will show in the Many-To-Many table.
In my stored-procedure I need `Role_Name` which are **NOT** assigned to a specific `User_ID` (which is given as a parameter).
I guess I should use an INNER JOIN (or a LEFT one...).
There are numerous entries in SO and other forums with questions nearly similar to this but not quite. I experimented a lot but by now I completely lost my hands and feet!
Thank you all.
**EDIT :**
With the help of the good people of SO, I got it to work :
```
SELECT r.Role_Name
FROM tbl_Roles r
WHERE NOT EXISTS(
SELECT 1
FROM tbl_Users_Roles_MTM ur
WHERE ur.User_ID = @User_ID
AND ur.Role_ID = r.Role_ID);
```
SO people are awesome!!!!!
|
Try this query:
```
SELECT r.Role_Name
FROM tbl_Roles r
WHERE NOT EXISTS (
SELECT 1
FROM tbl_Users_Roles_MTM ur
WHERE ur.User_ID = @User_ID
AND ur.Role_ID = r.Role_ID);
```
|
You can do this:
```
SELECT *
FROM Roles
WHERE Role_Name = @rolename
AND NOT EXISTS(SELECT 1
FROM tbl_Roles AS r
INNER JOIN tbl_Users_Roles_MTM AS ur ON ur.Role_ID = r.Role_ID
WHERE r.role_Name = @rolename
AND ur.User_ID = @User_ID);
```
The `NOT EXISTS` predicate will ensure that the role is not assigned to any user.
|
sql join Many-To-Many - 3 tables
|
[
"",
"sql",
"sql-server",
"join",
"many-to-many",
""
] |
yeah sorry for the piss poor title.
I created an example DB with sqlfiddle but I am so lost on this one I can't even start with the query.
<http://sqlfiddle.com/#!3/940b7d>
In the example DB I need all account numbers that have equipment. An account may also have equipment "CRAPY", but it should only be returned if it has some other type of equipment as well.
So in the example below the query will return 12345, because that account has other equipment even though it also has "CRAPY".
It will also return 44444 and 66666 because they have equipment.
It will not return 67891 because, even though it has equipment, it only has "CRAPY" equipment.
God I really hope that is clear,
```
create table testdb
(
Account varchar(5),
Equipment varchar(5)
)
insert into testdb (Account,Equipment) values ('12345','CDG12')
insert into testdb (Account,Equipment) values ('12345','CRAPY')
insert into testdb (Account,Equipment) values ('12345','CDG12')
insert into testdb (Account,Equipment) values ('12345','CDG12')
insert into testdb (Account,Equipment) values ('12345','CDG12')
insert into testdb (Account,Equipment) values ('67891','CRAPY')
insert into testdb (Account,Equipment) values ('67891','CRAPY')
insert into testdb (Account,Equipment) values ('67891','CRAPY')
insert into testdb (Account,Equipment) values ('67891','CRAPY')
insert into testdb (Account,Equipment) values ('67891','CRAPY')
insert into testdb (Account,Equipment) values ('44444','YYYYY')
insert into testdb (Account,Equipment) values ('66666','PPPPP')
```
|
If we remove the 'CRAPY' records, any accounts left are ones you need to report (have equipment other than 'CRAPY')
```
select distinct account from testdb where Equipment not like 'CRAPY'
```
If you need to see what equipment they have, or need to do a quick sanity check:
```
select account, Equipment
from testdb
where account in
(
select distinct account from testdb where Equipment not like 'CRAPY'
)
```
|
I think you're suffering from a poor problem statement. A clearer statement is *Return any account number which has non-"CRAPY" equipment*. The existence of "CRAPY" equipment is largely irrelevant.
You can do this with:
```
SELECT DISTINCT Account FROM testdb WHERE Equipment <> 'CRAPY'
```
which will ignore "CRAPY" records and return only accounts with non-"CRAPY" equipment (regardless of whether that account has or does not have "CRAPY" equipment -- we don't care about that).
|
SQL Select Where does not include X only
|
[
"",
"sql",
"t-sql",
""
] |
I have the following SQL query:
```
SELECT users.user_id,
users.first_name,
users.last_name,
roles.role,
roles.role_id,
users.username,
users.description,
users_vs_teams.team_id,
teams.team_name,
teams.status
FROM users
INNER JOIN roles ON users.role_id = roles.role_id
INNER JOIN users_vs_teams ON users.user_id = users_vs_teams.user_id
INNER JOIN teams ON users_vs_teams.team_id = teams.team_id
WHERE( users.role_id = 3 )
```
This query displays all users who have a user role of 3 and the team they have been assigned for a sprint due to their `user_id` appearing in the `users_vs_teams` table. However, I want to display the opposite of this where it displays those users who have a role of 3 but their `user_id` is not found in the `users_vs_teams` table.
Any help would be much appreciated.
|
```
SELECT users.user_id,
users.first_name,
users.last_name,
roles.role,
roles.role_id,
users.username,
users.description
FROM users
INNER JOIN roles ON users.role_id = roles.role_id
LEFT JOIN users_vs_teams ON users.user_id = users_vs_teams.user_id
WHERE users.role_id = 3
AND users_vs_teams.user_id IS NULL
```
|
```
SELECT users.user_id, users.first_name, users.last_name, roles.role, roles.role_id, users.username, users.description, users_vs_teams.team_id, teams.team_name, teams.status
FROM users INNER JOIN roles ON users.role_id = roles.role_id
INNER JOIN users_vs_teams ON users.user_id <> users_vs_teams.user_id
INNER JOIN teams ON users_vs_teams.team_id = teams.team_id
WHERE (users.role_id = 3)
```
|
Select Command Where Users id does not appear in another table
|
[
"",
"sql",
""
] |
During execution of the query below, the output is **2090**.
```
select to_char(to_date('10-OCT-90', 'DD-MON-YY'), 'YYYY') from dual;
```
Required output is **1990**, the date in the snippet is Birth Date.
**EDIT:**
The input date is fetched from a table, so the date is in DD-MON-YY format.
|
For alternative resolving the double-digit years in strings, Oracle has [the `RR` format element](http://docs.oracle.com/database/121/SQLRF/sql_elements004.htm#i116004). Thus, your query would be
```
select to_char(to_date('10-OCT-90', 'DD-MON-RR'), 'YYYY') from dual;
```
From the referenced Oracle doc:
> The RR datetime format element is similar to the YY datetime format element, but it provides additional flexibility for storing date values in other centuries. The RR datetime format element lets you store 20th century dates in the 21st century by specifying only the last two digits of the year.
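A side-by-side comparison (a sketch; the `YY` result assumes the query is run in the 2000s, since `YY` takes the current century):

```
select to_char(to_date('10-OCT-90', 'DD-MON-YY'), 'YYYY') as yy_year,  -- '2090'
       to_char(to_date('10-OCT-90', 'DD-MON-RR'), 'YYYY') as rr_year   -- '1990'
from dual;
```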
|
Use RR instead of YY. See here: <https://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements004.htm#SQLRF00215>
|
Oracle date conversion outputs 2000s instead of required 1990s
|
[
"",
"sql",
"oracle",
"date",
""
] |
I'm using SQL Server 2012.
I have two tables and I'm trying to see how many rows both tables contain. They have no common field on which they can join. Below is my query so far, which obviously doesn't work. How should it be?
```
;with tblOne as
(
select count(*) numRows from tblOne where DateEx = '2015-10-27'
),
tblTwo as
(
select count(*) numRows from tblTwo where Status = 0
)
select tblOne.numRows + tblTwo.numRows
```
|
You don't need CTEs; you can just change them to subqueries:
```
select
(select count(*) numRows from tblOne where DateEx = '2015-10-27') +
(select count(*) numRows from tblTwo where Status = 0)
```
If you REALLY want to use CTEs then just do an implicit CROSS JOIN:
```
with tblOne as
(
select count(*) numRows from tblOne where DateEx = '2015-10-27'
),
tblTwo as
(
select count(*) numRows from tblTwo where Status = 0
)
select tblOne.numRows + tblTwo.numRows
FROM tblOne, tblTwo
```
|
You can use a `cross join`:
```
with tblOne as (
select count(*) as numRows from tblOne where DateEx = '2015-10-27'
),
tblTwo as (
select count(*) as numRows from tblTwo where Status = 0
)
select tblOne.numRows + tblTwo.numRows
from tblOne cross join tblTwo;
```
|
adding the sum of two select count queries
|
[
"",
"sql",
"sql-server",
""
] |
I have a query that's something like this
```
select days, date(date_event) + interval '10' day from tbl_user_marketing_program as programtable
```
Now in place of the '10', I want to add the value present in the "days" column. How can I do that?
I tried `select user_id, date(date_event) + interval user_id day from tbl_user_marketing_program as programtable`
then I got the below error
> ERROR: syntax error at or near "day"
> LINE 1: ...ect user\_id, date(date\_event) + interval user\_id day from t...
|
Unfortunately the "number" for an interval can't be an arbitrary expression, it has to be a string constant (which is a strange choice). You need to use a little workaround:
```
select days, date(date_event) + (days * interval '1' day)
from tbl_user_marketing_program as programtable
```
But `date + integer` is also directly supported, and the unit is days in that case. So you can also write:
```
select days, date(date_event) + days
from tbl_user_marketing_program as programtable
```
|
You can quote `'10 days'`:
```
select days, date(date_event) + interval '10 days'
from tbl_user_marketing_program as programtable
```
`SqlFiddleDemo`
If you want to add a variable/column, use:
> **datetime + variable \* INTERVAL '1 day'**
```
select days, date(date_event) + days * interval '1 day'
from tbl_user_marketing_program as programtable
```
`SqlFiddleDemo`
|
Create interval using a column value postgresql
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables. Both of them contain (dutch) postalcodes.
Those have the format 9999AA and are stored as varchar(6).
In the left table the codes are complete
```
John Smith 1234AB
Drew BarryMore 3456HR
Ted Bundy 3456TX
Henrov 8995RE
My mother 8995XX
```
In the right table the codes can be incomplete
```
1234AB Normal neigbourhood
3456 Bad neighbourhood
8995R Very good neighbourhood
```
I need to join these tables on the postalcodes. In this example the output would have to be
```
John Smith Normal neighbourhood
Drew BarryMore Bad neighbourhood
Ted Bundy Bad neighbourhood
Henrov Very good neighbourhood
My mother -unknown-
```
So I have to join the two tables based on the length of the postal code in the **right** table.
Any suggestions as to how to do this? I could only come up with a CASE in the ON statement but that was not so smart ;)
|
If you have no "duplicates" in the second table, you could use `like`:
```
SELECT t1.*, t2.col2
FROM table1 AS t1
JOIN table2 AS t2
ON t1.postalcode LIKE t2.postalcode + '%';
```
However, this is not going to be efficient. Instead, an index on `table2(postalcode)` and a series of `LEFT JOIN`s is probably faster:
```
SELECT t1.*, COALESCE(t2a.col2, t2b.col2, t2c.col2)
FROM table1 t1
LEFT JOIN table2 t2a ON t2a.postalcode = t1.postalcode
LEFT JOIN table2 t2b ON t2b.postalcode = LEFT(t1.postalcode, LEN(t1.postalcode) - 1)
LEFT JOIN table2 t2c ON t2c.postalcode = LEFT(t1.postalcode, LEN(t1.postalcode) - 2)
```
This can take advantage of an index on `table2(postalcode)`. In addition, it only returns one row, even when there are multiple matches in `table2`, returning the best match.
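Walking the sample rows through the `COALESCE` chain (a sketch; the exact match `t2a` is checked before the shortened prefixes, so the most specific match wins):

```
-- Match resolution for the sample rows:
-- 1234AB : exact match (t2a)               -> 'Normal neigbourhood'
-- 3456HR : LEFT(..., 4) = '3456'  (t2c)    -> 'Bad neighbourhood'
-- 3456TX : LEFT(..., 4) = '3456'  (t2c)    -> 'Bad neighbourhood'
-- 8995RE : LEFT(..., 5) = '8995R' (t2b)    -> 'Very good neighbourhood'
-- 8995XX : no match at any length          -> NULL (wrap in COALESCE(..., '-unknown-') for the default)
```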
|
Use `JOIN`.
**Query**
```
SELECT t1.col1 as name,
coalesce(t2.col2,'-unknown-') as col2
FROM table_1 t1
LEFT JOIN table_2 t2
ON t1.pcode LIKE t2.col1 + '%';
```
[**SQL Fiddle**](http://sqlfiddle.com/#!3/9e93d/1)
|
compare strings of uneven length in TSQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"string-comparison",
""
] |
Sorry for the title, but it is a little difficult to explain the topic in a single row.
I have a table like this and I want to know (for each month in the year) the number of employees who received a bonus for the first time.
```
EMPLOYEE_NAME MONTH BONUS_RECEIVED
AAA 1 1
BBB 1 1
CCC 2 1
AAA 2 1
DDD 2 1
AAA 3 1
BBB 3 1
XXX 3 1
```
So, the result should be
```
MONTH TOTAL_BONUS
1 2
2 2
3 1
```
* Month 1, employee AAA and BBB receive the bonus (so the result is 2)
* Month 2, employee CCC and DD receive bonus (AAA already received across the year), so the result is 2
* Month 3, only employee XXX has received bonus, because AAA and BBB has already received it across the year
|
You could use the **ROW\_NUMBER()** analytic function.
For example,
**Setup**
```
CREATE TABLE t
(EMPLOYEE_NAME varchar2(3), MONTH number, BONUS_RECEIVED number);
INSERT ALL
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('AAA', 1, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('BBB', 1, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('CCC', 2, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('AAA', 2, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('DDD', 2, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('AAA', 3, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('BBB', 3, 1)
INTO t (EMPLOYEE_NAME, MONTH, BONUS_RECEIVED)
VALUES ('XXX', 3, 1)
SELECT * FROM dual;
```
**Query**
```
SELECT MONTH,
       COUNT(rn) total_bonus
FROM
  (SELECT t.*,
          row_number() OVER(PARTITION BY employee_name ORDER BY MONTH) rn
   FROM t
   WHERE BONUS_RECEIVED = 1
  )
WHERE rn = 1
GROUP BY MONTH;
MONTH TOTAL_BONUS
---------- -----------
1 2
2 2
3 1
```
|
A double aggregation solves your problem:
```
select month, count(1) as total_bonus
from (
select employee_name, min(month) as month
from table_like_this
where bonus_received = 1
group by employee_name
)
group by month;
```
First, for each employee you find the first month in which he/she received a bonus. Then you count the number of employees per first-bonus month.
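For the sample data, the inner query alone yields each employee's first bonus month, which the outer query then counts per month:

```
EMPLOYEE_NAME  MONTH
AAA            1
BBB            1
CCC            2
DDD            2
XXX            3
```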
|
Oracle SQL - Sum of distinct value belong year, for the first time
|
[
"",
"sql",
"oracle",
""
] |
I have 2 tables: 1 with tasks, 1 with task\_actions.
I'd like to have the task table with some info from the task\_actions table. That works fine when I use a LEFT JOIN like this:
```
SELECT
*, wp_task_mgr.id AS task_id,
wp_task_actions.id AS action_id,
wp_task_actions.completion_percentage AS lastcompperc
FROM
wp_task_mgr
LEFT JOIN wp_task_actions
ON wp_task_mgr.id = wp_task_actions.id_task
```
However, the problem is: I get the oldest actions towards my tasks with this query, and not the most recent.
I searched for a long time for how to get the most recent records grouped on a certain column, and I was able to get the most recent actions grouped per task\_id. So, searching for one task\_id, I get the most recent action in the table:
```
SELECT *
FROM wp_task_actions
INNER JOIN
(SELECT id_task, MAX(created_taskaction) AS MaxDateTime
FROM wp_task_actions
GROUP BY id_task) mostrecent ON wp_task_actions.id_task = mostrecent.id_task
   AND wp_task_actions.created_taskaction = mostrecent.MaxDateTime
```
Now I need to combine these two queries. I need my full task table with one column from the actions which shows the most recent action for that task\_id. (namely task\_actions.completion\_percentage)
My desired table looks like:
```
task_mgr.id | task_mgr.name | task_actions.completion_percentage
```
(which is the last record towards this task\_mgr.id or task\_actions.task\_id)
Any ideas on how to get this?
Cheers!
|
To solve that, MS SQL 2005+ uses OUTER APPLY, but I'm not sure MySQL supports it:
```
SELECT
wp_task_mgr.*,
wp_task_mgr.id AS task_id,
A.id AS action_id,
A.lastcompperc
FROM
wp_task_mgr
OUTER APPLY (
      SELECT DISTINCT wp.id, wp.id_task,
             (SELECT MAX(created_taskaction)
              FROM wp_task_actions WHERE id_task = wp.id_task) AS lastcompperc
      FROM wp_task_actions wp
      WHERE wp_task_mgr.id = wp.id_task
) AS A
```
But, you can use LEFT JOIN:
```
SELECT
wp_task_mgr.*,
wp_task_mgr.id AS task_id,
A.id AS action_id,
A.lastcompperc
FROM
wp_task_mgr
LEFT JOIN (
      SELECT DISTINCT wp.id, wp.id_task,
             (SELECT MAX(created_taskaction)
              FROM wp_task_actions WHERE id_task = wp.id_task) AS lastcompperc
      FROM wp_task_actions wp
) AS A
ON wp_task_mgr.id = A.id_task
```
|
Have you tried quite literally combining your queries? LEFT JOIN your Max Date list to the wp\_task\_mgr and then LEFT JOIN on the Max Date list to subset the details to only ones that match the Max Date query result?
eg:
```
SELECT
wp_task_mgr.id AS task_id,
wp_task_actions.id AS action_id,
wp_task_actions.completion_percentage AS lastcompperc
FROM
wp_task_mgr
LEFT JOIN
(SELECT id_task, MAX(created_taskaction) AS MaxDateTime
FROM wp_task_actions
GROUP BY id_task) mostrecent
ON wp_task_mgr.id = mostrecent.id_task
LEFT JOIN wp_task_actions
ON mostrecent.id_task = wp_task_actions.id_task
AND wp_task_actions.created_taskaction = mostrecent.MaxDateTime
```
Apologies for formatting, on mobile.
|
How to JOIN two tables based on a WHERE that uses a complex statement in SQL
|
[
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I have this query:
```
SELECT Count(*) AS GesamtanzahlMitarbeiter,
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%80') AS "1980",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%81') AS "1981",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%82') AS "1982",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%83') AS "1983"
FROM EMP
GROUP BY DEPTNO;
```
And I get this Output:
```
GESAMTANZAHLMITARBEITER 1980 1981 1982 1983
----------------------- ---------- ---------- ---------- ----------
6 1 10 2 1
5 1 10 2 1
3 1 10 2 1
```
Now the point is that I don't want to group by deptno. I want to group them all and count all the "GESAMTANZAHLMITARBEITER" like this:
```
GESAMTANZAHLMITARBEITER 1980 1981 1982 1983
----------------------- ---------- ---------- ---------- ----------
14 1 10 2 1
```
But I cannot GROUP BY \* or something..
A workaround would be to give them all a new attribute with the same value and group by that attribute, but that's just not right.
I have no Idea how to do that...
|
Use a conditional aggregation, add `case` in the `SUM`
```
SELECT
COUNT(*) as GesamtanzahlMitarbeiter,
  SUM(CASE WHEN HIREDATE LIKE '%80' THEN 1 ELSE 0 END) as "1980",
  SUM(CASE WHEN HIREDATE LIKE '%81' THEN 1 ELSE 0 END) as "1981",
  SUM(CASE WHEN HIREDATE LIKE '%82' THEN 1 ELSE 0 END) as "1982",
  SUM(CASE WHEN HIREDATE LIKE '%83' THEN 1 ELSE 0 END) as "1983"
FROM EMP
```
**NOTE:** If you add a `GROUP BY` to this query you will get the `COUNT(*)` for each group. In your previous query you got the `COUNT(*)` of the whole table in each row.
|
You can use `count` without `group by` like this:
```
SELECT
(SELECT Count(*) FROM EMP) AS GesamtanzahlMitarbeiter,
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%80') AS "1980",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%81') AS "1981",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%82') AS "1982",
(SELECT Count(*) FROM EMP WHERE HIREDATE LIKE '%83') AS "1983"
FROM DUAL;
```
|
SQL one row output with count()
|
[
"",
"sql",
"oracle",
""
] |
I have searched far and wide, but I can't seem to find a way to convert Julian dates to `yyyy-mm-dd`.
Here is the format of my julian:
The Julian format consists of the year, the first two digits, and the day within the year, the last three digits.
For example, `95076` is `March 17, 1995`. The `95` indicates the year and the
`076` indicates it is the 76th day of the year.
```
15260
```
I have tried this but it isn't working:
```
dateadd(d,(convert(int,LAST_CHANGED_DATE) % 1000)-1, convert(date,(convert(varchar,convert(int,LAST_CHANGED_DATE) /1000 + 1900) + '/1/1'))) as GrgDate
```
|
You can select each part of the date using `datepart()`
```
SELECT DATEPART(yy, 95076), DATEPART(dy, 95076)
```
**EDIT:** I misunderstood something. Here's my correction:
```
SELECT DATEADD(day, CAST(RIGHT('95076',3) AS int) - 1, CONVERT(datetime, LEFT('95076',2) + '0101', 112))
```
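Working through the pieces for `95076` (a sketch):

```
LEFT('95076', 2) + '0101'            -- '950101', converted to the date 1995-01-01
CAST(RIGHT('95076', 3) AS int) - 1   -- 76 - 1 = 75 days to add
DATEADD(day, 75, '1995-01-01')       -- 1995-03-17 (day 76 of the year)
```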
|
**Edit: leaving this answer for Oracle and MySQL users**
This will not work in T-SQL.
Use this:
```
MAKEDATE(1900 + d / 1000, d % 1000)
```
For example:
```
SELECT MAKEDATE(1900 + 95076 / 1000, 95076 % 1000)
```
This returns `March, 17 1995 00:00:00`.
[SQLFiddle](http://sqlfiddle.com/#!9/9eecb7d/29294)
|
Convert Julian Date to YYYY-MM-DD
|
[
"",
"sql",
"sql-server",
"julian-date",
""
] |
I am working with SQL Server 2008 R2.
I have a table `seq_audit` which has an identity column. This is the definition of the table:
```
CREATE TABLE [dbo].[seq_audit]
(
[id] [bigint] IDENTITY(1,1) NOT NULL,
[value] [bit] NULL,
PRIMARY KEY CLUSTERED ([id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
```
The table is empty and never has had any rows in it before.
To check its current identity value, I ran this command:
```
DBCC CHECKIDENT (seq_audit, NORESEED)
GO
```
And this is the result I get:
> Checking identity information: current identity value 'NULL', current
> column value 'NULL'. DBCC execution completed. If DBCC printed error
> messages, contact your system administrator.
I want to set its current identity value to 15953711. So I ran this command:
```
DBCC CHECKIDENT (seq_audit, RESEED, 15953711)
GO
```
And this is the result I get:
> Checking identity information: current identity value 'NULL', current
> column value '15953711'. DBCC execution completed. If DBCC printed
> error messages, contact your system administrator.
I thought it worked so I again check its current identity by running this command:
```
DBCC CHECKIDENT (seq_audit, NORESEED)
GO
```
But I was not expecting the result I got:
> Checking identity information: current identity value 'NULL', current
> column value 'NULL'. DBCC execution completed. If DBCC printed error
> messages, contact your system administrator.
So somehow the setting of current identity value is not working. Why? What am I doing wrong here?
|
This is caused by the fact that your table is empty. Try adding a single record and then everything will work. I have tried this and can confirm that it works.
Also, if you use SQL Server Management Studio you can use the design feature to change the seed values and manually add and delete records.
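A minimal sketch of that workaround (assuming a throwaway row is acceptable):

```
INSERT INTO seq_audit ([value]) VALUES (0);   -- the table has now "had rows"
DELETE FROM seq_audit;
DBCC CHECKIDENT (seq_audit, RESEED, 15953711);
-- the next insert gets 15953711 + 1 (reseed value + increment)
```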
|
<https://msdn.microsoft.com/es-es/library/ms176057(v=sql.120).aspx>
```
DBCC CHECKIDENT ( table_name, RESEED, new_reseed_value )
```
> Current identity value is set to the new\_reseed\_value. If no rows have been inserted into the table since the table was created, or if all rows have been removed by using the TRUNCATE TABLE statement, the first row inserted after you run DBCC CHECKIDENT uses new\_reseed\_value as the identity. Otherwise, the next row inserted uses new\_reseed\_value + the current increment value.
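Per the quoted documentation, on a never-used table the first insert after the reseed takes the reseed value itself (a sketch):

```
DBCC CHECKIDENT (seq_audit, RESEED, 15953711);
INSERT INTO seq_audit ([value]) VALUES (0);
SELECT id FROM seq_audit;   -- 15953711
```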
Also, why don't you set the seed directly in the `IDENTITY` clause of the `CREATE TABLE`?
**[Sql Fiddle Demo](http://sqlfiddle.com/#!6/fc8f6/1)**
```
CREATE TABLE [dbo].[seq_audit](
[id] [bigint] IDENTITY(15953711,1) NOT NULL,
[value] [bit] NULL,
PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY];
```
|
Why setting current identity value is not working for me in SQL Server 2008 R2?
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"identity",
""
] |
I have two tables: followers:
```
followee_id | followers_id
--------------|--------------
1 | 2
1 | 3
2 | 3
3 | 1
```
and users:
```
| user_id | user_name | etc...
|---------|-----------|-------
| 1 | user1 |
| 2 | user2 |
| 3 | user3 |
```
and a query that selects the top 50 users ordered by their number of followers:
```
SELECT followers.followee_id AS user_id, COUNT(*) followers, users.user_name, users.user_profile_picture, users.display_name
FROM followers
INNER JOIN users
ON users.user_id = followers.followee_id
GROUP BY followee_id ORDER BY followers DESC LIMIT 50
```
and a query that selects if you follow them:
```
SELECT followers.followee_id, 1 AS follows_them FROM followers WHERE followers_id = 1
```
and I need to join them so that I have a table (for user 3) that shows:
```
| user_id | user_name | followers | follows_them | follow_you | etc...
----------|-----------|-----------|--------------|------------|-----------
1 | user1 | 2 | 1 | 1 |
2 | user2 | 1 | 1 | 2 |
3 | user3 | 1 | 0 | 0 |
```
where the table is ordered by the number of followers the user has
SQLFiddle:
<http://sqlfiddle.com/#!9/17815>
|
**[SqlFiddleDemo](http://sqlfiddle.com/#!9/18cc8/9)**
Please double-check; my main language isn't English, so I'm not sure who follows whom.
```
SELECT
T.*,
if(f_to.followee_id is null, 'no', 'yes') is_followby_userid,
if(f_him.followee_id is null, 'no', 'yes') is_following_userid
FROM
(
SELECT followers.followee_id AS user_id,
COUNT(*) followers,
users.user_name,
users.user_profile_picture,
users.display_name
FROM followers
INNER JOIN users
ON users.user_id = followers.followee_id
GROUP BY followee_id
ORDER BY followers DESC LIMIT 50
) T
LEFT JOIN followers f_to
ON T.user_id = f_to.followee_id
and f_to.followers_id = 1 -- your @user_id
LEFT JOIN followers f_him
ON T.user_id = f_him.followers_id
and f_him.followee_id = 1 -- your @user_id
```
**Output**
```
| user_id | followers | user_name | user_profile_picture | display_name | is_followby_userid | is_following_userid |
|---------|-----------|-----------|----------------------|--------------|--------------------|---------------------|
| 1 | 2 | User1 | 1_562a7cb9.jpg | User One | no | no |
| 2 | 1 | User2 | 2_562b7cb9.jpg | User Two | no | yes |
| 3 | 1 | User3 | 3_562c7cb9.jpg | User Three | yes | yes |
```
|
Depending on the data:
```
(1,2),
(1,3),
(2,3),
(3,1);
```
Results are:
```
User_id = 1 has 2 followers
User_id = 2 has 1 follower
User_id = 3 has 1 follower
User_id = 1 follows 1 user (user_id = 3)
User_id = 2 follows 1 user (user_id = 1)
User_id = 3 follows 2 users (user_id = 1, user_id = 2)
```
Query:
```
SELECT bb.user_id, bb.user_name,
(
    SELECT COUNT(*) AS a
    FROM followers
    WHERE followee_id = bb.user_id
    GROUP BY followee_id
) AS followers,
(
    SELECT COUNT(*) AS b
    FROM followers
    WHERE followers_id = bb.user_id
    GROUP BY followers_id
) AS follows_them
FROM followers AS aa
INNER JOIN users AS bb
ON aa.followee_id = bb.user_id
GROUP BY bb.user_id
ORDER BY followers DESC
LIMIT 50;
```
Output:
```
user_id user_name followers follows_them
1       User1     2         1
3       User3     1         2
2       User2     1         1
```
UPDATE:
```
SELECT bb.user_id, bb.user_name,
(
SELECT COUNT(*)
FROM followers
WHERE followee_id = bb.user_id
GROUP BY followee_id
) AS followers,
(
SELECT (CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END)
FROM followers
WHERE followers_id = bb.user_id
GROUP BY followers_id
) AS follows_them,
(
SELECT (CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END)
FROM followers
WHERE followee_id = bb.user_id
GROUP BY followee_id
) AS follow_you
FROM followers AS aa
INNER JOIN users AS bb
ON aa.followee_id = bb.user_id
GROUP BY bb.user_id
ORDER BY followers DESC
LIMIT 50;
```
Result:
```
user_id user_name followers follows_them follow_you
1 User1 2 1 1
3 User3 1 1 1
2 User2 1 1 1
```
|
MySQL Followers Query using Joins
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a Deaths table that lists the personID of each person that died. There is also a column for the reason of death and a date for when he/she died.
I need to count all the people that died by Illness, Accident, Suicide, etc.
I want my output to be like this:
```
| Illness | Accident | Suicide |
| 32      | 55       | 3       |
```
I can easily create a view like this:
```
CREATE VIEW viewDeaths AS
SELECT COUNT(personID) AS Illness
FROM Deaths
WHERE Reason = 'Illness';
```
And it will display it correct, but how do I do it with multiple conditions?
The main purpose is to display the different values for each reason on a graph in a C# application
|
Simply use multiple subqueries:
```
CREATE VIEW viewDeaths AS
SELECT Illness = (SELECT COUNT(*) FROM dbo.Deaths d
WHERE d.Reason = 'Illness'),
Accident = (SELECT COUNT(*) FROM dbo.Deaths d
WHERE d.Reason = 'Accident'),
Suicide = (SELECT COUNT(*) FROM dbo.Deaths d
WHERE d.Reason = 'Suicide')
```
|
In this way:
```
CREATE VIEW viewDeaths AS
SELECT Reason, COUNT(personID) AS Total
FROM Deaths
-- WHERE Reason IN (.....)
GROUP BY Reason
```
|
Count in a VIEW
|
[
"",
"sql",
"sql-server",
""
] |
I have a select statement as follows:
```
SELECT
firstName,
lastName,
CASE
WHEN primaryEmail <> 'name@example.com' THEN primaryEmail
WHEN secondaryEmail <> 'name@example.com' THEN secondaryEmail
ELSE ternaryEmail
END AS email
FROM participant
WHERE
primaryEmail <> 'name@example.com'
OR secondaryEmail <> 'name@example.com'
OR ternaryEmail <> 'name@example.com'
```
The problem is that there are some records with empty fields, either firstName, lastName, or the resulting email. I added this to the WHERE clause:
```
firstName IS NOT NULL
AND lastName IS NOT NULL
AND primaryEmail <> // The already existing WHERE clause code.
```
but those fields still show up. I haven't bothered trying to get rid of the email part yet, so there's no code for me to show there.
|
```
SELECT
firstName,
lastName,
CASE
WHEN primaryEmail <> 'name@example.com' THEN primaryEmail
WHEN secondaryEmail <> 'name@example.com' THEN secondaryEmail
ELSE ternaryEmail
END AS email
FROM participant
WHERE
(primaryEmail <> 'name@example.com' AND primaryEmail IS NOT NULL) OR
(secondaryEmail <> 'name@example.com' AND secondaryEmail IS NOT NULL) OR
(ternaryEmail <> 'name@example.com' AND ternaryEmail IS NOT NULL)
```
|
You need `or` logic, not `and`
```
where
(firstName is not null or lastName is not null)
and other conditions
```
|
Remove Empty Fields in Select Statement
|
[
"",
"sql",
"select",
"where-clause",
""
] |
I've got an error with a simple query using a join.
My query :
```
SELECT users_sessions.user_id AS users_sessions_user_id,
users.last_name AS users_last_name,
users.first_name AS users_first_name
FROM prefix_users_sessions AS users_sessions,
prefix_users AS users INNER JOIN
users_sessions
ON users.id = users_sessions.user_id
```
My error :
> SQLSTATE[42000]: Syntax error or access violation: 1066 Not unique
> table/alias: 'users\_sessions'
I don't understand the message... I don't see where the alias is used more than once.
Can anyone help me solve this problem?
Thanks in advance for your help.
--SOLVED--
The working query is the following:
```
SELECT users_sessions.id AS users_sessions_id,
users_sessions.user_id AS users_sessions_user_id,
users.lastname AS users_lastname, users.firstname AS
users_firstname
FROM prefix_users_sessions AS users_sessions
JOIN prefix_users AS users ON users.id = users_sessions.user_id
```
|
Your `from` clause is all messed up. A simple rule: *Never* use commas in the `FROM` clause. *Always* use explicit `JOIN` syntax.
Also, use shorter table aliases so your query is easier to write and to read:
```
SELECT us.user_id AS users_sessions_user_id,
u.last_name AS users_last_name,
u.first_name AS users_first_name
FROM prefix_users_sessions us INNER JOIN
prefix_users u
ON u.id = us.user_id;
```
This assumes that you don't really have a table called `users_sessions`, and the intention is to use a table called `prefix_users_sessions`.
|
You use name `users_sessions` twice.
First as the alias of `prefix_users_sessions` and then as a plain table name in the `INNER JOIN`. Also, prefer explicit `INNER JOIN`s over selecting from two tables simultaneously, e.g. (in your query):
```
FROM prefix_users_sessions AS users_sessions, prefix_users AS users
```
|
SQLSTATE Error Syntax error or access violation: 1066 Not unique table/alias: 'users_sessions'
|
[
"",
"mysql",
"sql",
"select",
"join",
"mysql-error-1066",
""
] |
I've got this table
## beds
id
name
size
room
status
hotel
created_at
updated_at
I need to filter all beds that belong to a certain room. In order to do so, I've coded these lines.
```
public function index()
{
//
$user = JWTAuth::parseToken()->authenticate();
$data = Input::get('room');
if( $data ){
$beds = Bed::where('room', '=', $data )->get();
}else{
$beds = Bed::where('hotel', '=', $user->hostel )->get();
}
foreach( $beds as $bed) {
return $bed->get( array('size','room', 'id') );
}
}
```
So, if I give it the room id, it should return only that room's beds.
The thing is that it's returning all table entries.
Any ideas?
---
UPDATE
Fixed relations and tried this:
```
return Room::with('beds')->findOrFail($data)->beds;
```
Now it gives me the number of items.
How can I get the items?
---
UPDATE
This is the model's code:
```
class Room extends \Eloquent {
protected $fillable = array('beds', 'price', 'name', 'description','hotel');
public function beds(){
return $this->hasMany('Bed', 'id', 'room');
}
}
```
---
UPDATE
The var\_dump for:
```
var_dump( Room::with('beds')->findOrFail($data)->beds );
```
is:
```
int(1)
```
---
UPDATE
So, the final code is the following.
controller
```
public function index()
{
//
$user = JWTAuth::parseToken()->authenticate();
$data = Input::get('room');
if( $data ){
$d = intval( $data );
return Bed::where('room', '=', $d )->get( array('size', 'room', 'id', 'name') );
}else{
return Bed::where('hotel', '=', $user->hostel )->get( array('size', 'room', 'id', 'name') );
}
}
```
model
```
class Room extends \Eloquent {
protected $fillable = array('beds', 'price', 'name', 'description','hotel');
public function camas(){
return $this->hasMany('Bed', 'room', 'id');
}
}
```
Thank you guys!
|
Try this and see if it works. If not, can you provide the `var_dump` of `Input::get('room')` and the structure of the beds table?
```
public function index()
{
//
$user = JWTAuth::parseToken()->authenticate();
$data = Input::get('room');
if( $data ){
$beds = Bed::where('room', '=', $data );
}else{
$beds = Bed::where('hotel', '=', $user->hostel );
}
return $beds->get(['size','room', 'id'])->toArray();
}
```
Better yet if you want to get specific beds in a room and you have your relations set up correctly:
```
return Room::with('beds')->findOrFail($data)->beds;
```
EDIT
I saw your update. Are you sure it's giving you a number of items? Maybe there is one item and the number is its id. Can you verify? Please provide a `var_dump` of it if that's not the case. Also, can you post your code for the relations in the model?
|
You have quite a few issues in your attempts:
```
return $bed->get( array('size', 'room', 'id') );
// runs SELECT size, room, id FROM `beds`
```
so it returns all the beds (why on earth would you want to do this in a foreach anyway?)
---
```
return $this->hasMany('Bed', 'id', 'room');
// should be:
return $this->hasMany('Bed', 'room', 'id');
```
---
```
protected $fillable = array('beds', ...
public function beds(){
```
this is a conflict: you will never get the relation when calling `$room->beds`, since you have a column `beds` on your table.
---
that said, this is what you need:
```
public function index()
{
$user = JWTAuth::parseToken()->authenticate();
if(Input::has('room')){
$query = Bed::where('room', '=', Input::get('room'));
}else{
$query = Bed::where('hotel', '=', $user->hostel);
}
return $query->get(['size', 'room', 'id']); // given you need only these columns
}
```
|
Laravel where query not working
|
[
"",
"sql",
"laravel",
"eloquent",
""
] |
I have two tables, lets say `table1` and `table2`.
```
table1 || table2
--------||-------------
col1 || col1 | col2
--------||------|------
a || a | 4
b || a | 2
c || a | 5
d || b | 1
|| b | 3
|| d | 6
```
With `SELECT table1.col1, table2.col2 FROM table1 LEFT OUTER JOIN table2 ON table1.col1 = table2.col1` I get following:
```
table1.col1 | table2.col2
-------------|-------------
a | 4
a | 2
a | 5
b | 1
b | 3
c | NULL
d | 6
```
How is it possible to achieve this (only get the minimum of `table2.col2` so that there's no entry of `table1.col1` more than once):
```
table1.col1 | table2.col2
-------------|-------------
a | 2
b | 1
c | NULL
d | 6
```
Or is it a wrong approach?
|
Alternative solution, use a correlated sub-query:
```
select col1, (select min(col2) from table2 t2 where t2.col1 = t1.col1)
from table1 t1
```
|
You need to use `MIN`:
```
SELECT
t1.col1,
MIN(t2.col2) AS col2
FROM table1 t1
LEFT JOIN table2 t2
ON t2.col1 = t1.col1
GROUP BY t1.col1
```
[**SQL Fiddle**](http://sqlfiddle.com/#!6/9eecb7d/2545/0)
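If you want to sanity-check the NULL handling (the `c` row with no match), here's the same query run against an in-memory SQLite database via Python's `sqlite3` (a sketch; the data is copied from the question):

```python
import sqlite3

# Recreate table1/table2 from the question and run MIN + LEFT JOIN + GROUP BY.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (col1 TEXT);
CREATE TABLE table2 (col1 TEXT, col2 INTEGER);
INSERT INTO table1 VALUES ('a'), ('b'), ('c'), ('d');
INSERT INTO table2 VALUES ('a',4),('a',2),('a',5),('b',1),('b',3),('d',6);
""")
rows = con.execute("""
    SELECT t1.col1, MIN(t2.col2) AS col2
    FROM table1 t1
    LEFT JOIN table2 t2 ON t2.col1 = t1.col1
    GROUP BY t1.col1
    ORDER BY t1.col1
""").fetchall()
print(rows)  # [('a', 2), ('b', 1), ('c', None), ('d', 6)]
```

`MIN` over a group whose only value is the NULL produced by the outer join stays NULL, which is exactly why `c` survives with an empty col2.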
|
SQL left outer join to a min value of right table
|
[
"",
"sql",
"sql-server",
"join",
""
] |
Consider this example table "Table1".
```
Col1 Col2
A 1
B 1
A 4
A 5
A 3
A 2
D 1
B 2
C 3
B 4
```
I am trying to fetch those values from Col1 which correspond to all values in Col2 (in this case, 1, 2, 3, 4, 5). Here the query should return 'A', as none of the others have all of the values 1, 2, 3, 4, 5 in Col2.
Note that the values in Col2 are decided by other parameters in the query and they will always return some numeric values. Out of those values the query needs to fetch values from Col1 corresponding to all in Col2. The values in Col2 could be 11,12,1,2,3,4 for instance (meaning not necessarily in sequence).
I have tried the following select query:
```
select distinct Col1 from Table1 where Col1 in (1,2,3,4,5);
select distinct Col1 from Table1 where Col1 exists (select distinct Col2 from Table1);
```
and its different variations. But the problem is that I need to apply an 'and' for Col2 not an 'or'.
like Return a value from Col1 where Col2 'contains' all values between 1 and 5.
Appreciate any suggestion.
|
You could use analytic **ROW\_NUMBER()** function.
**[SQL FIddle](http://sqlfiddle.com/#!4/91c29/1)** for a setup and working demonstration.
```
SELECT col1
FROM
(SELECT col1,
col2,
row_number() OVER(PARTITION BY col1 ORDER BY col2) rn
FROM your_table
WHERE col2 IN (1,2,3,4,5)
)
WHERE rn =5;
```
---
**UPDATE** As requested by OP, some explanation about how the query works.
The inner sub-query gives you the following resultset:
```
SQL> SELECT col1,
2 col2,
3 row_number() OVER(PARTITION BY col1 ORDER BY col2) rn
4 FROM t
5 WHERE col2 IN (1,2,3,4,5);
C COL2 RN
- ---------- ----------
A 1 1
A 2 2
A 3 3
A 4 4
A 5 5
B 1 1
B 2 2
B 4 3
C 3 1
D 1 1
10 rows selected.
```
The `PARTITION BY` clause groups the rows for each value of col1, and `ORDER BY` sorts col2 within each group. Thus the sub-query gives you the row number for each row in an ordered way. Now you know that you only need those values of col1 whose row number reaches 5, so in the outer query all you need to do is add `WHERE rn = 5` to filter the rows.
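Although the answer targets Oracle, the same window-function logic runs on SQLite 3.25+ as well, so here's a quick sanity check with Python's `sqlite3` (a sketch using SQLite, not Oracle syntax; the data is copied from the question):

```python
import sqlite3

# Recreate the example table and run the ROW_NUMBER() query on SQLite
# (window functions require SQLite 3.25 or later).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 TEXT, col2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [('A', 1), ('B', 1), ('A', 4), ('A', 5), ('A', 3),
                 ('A', 2), ('D', 1), ('B', 2), ('C', 3), ('B', 4)])
rows = con.execute("""
    SELECT col1 FROM (
        SELECT col1,
               ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) AS rn
        FROM t
        WHERE col2 IN (1, 2, 3, 4, 5)
    ) WHERE rn = 5
""").fetchall()
print(rows)  # [('A',)]
```

Only 'A' accumulates five matching rows, so only its partition ever reaches `rn = 5`.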
|
You can use listagg function, like
```
SELECT Col1
FROM
(select Col1,listagg(Col2,',') within group (order by Col2) Col2List from Table1
group by Col1)
WHERE Col2List = '1,2,3,4,5'
```
|
select query to fetch rows corresponding to all values in a column
|
[
"",
"sql",
"oracle",
""
] |
I'm not a database guru and feel like I'm missing some core SQL knowledge to grok a solution to this problem. Here's the situation as briefly as I can explain it.
**Context:**
I have a SQLite database table that contains timestamped user event records. The records can be uniquely identified by the combination of timestamp and user ID (i.e., when the event took place and who the event is about). I understand this situation is called a "composite primary key." The table looks something like this (with a bunch of other columns removed, of course):
```
sqlite> select Last_Updated,User_ID from records limit 4;
Last_Updated User_ID
------------- --------
1434003858430 1
1433882146115 3
1433882837088 3
1433964103500 2
```
**Question:** How do I `SELECT` a result set containing only the *most recent* record for *each* user?
Given the above example, what I'd like to get back is a table that looks like this:
```
Last_Updated User_ID
------------- --------
1434003858430 1
1433882837088 3
1433964103500 2
```
(Note that the result set only includes user `3`'s most recent record.)
In reality, I have approximately 2.5 million rows in this table.
**Bonus:** I've been reading answers about JOINs, de-dupe procedures, and a bunch more, and I've been googling for tutorials/articles in the hopes that I would find what I'm missing. I have extensive programming background so I could de-dupe this dataset in procedural code like I've done a hundred times before, but I'm tired of writing scripts to do what I believe should be possible in SQL. That's what it's for, right?
So, what do you think is missing from my understand of SQL, conceptually, that I need in order to understand why the solution you've provided to my question actually works? (A reference to a good article that actually explains the theory behind the practice would suffice.) I want to know WHY the solution actually works, not just that it does.
Many thanks for your time!
|
You could try this:
```
select user_id, max(last_updated) as latest
from records
group by user_id
```
This should give you the latest record per user. I assume you have an index on user\_id and last\_updated combined.
In the above query, generally speaking - we are asking the database to group user\_id records. If there are more than 1 records for user\_id 1, they will all be grouped together. From that recordset, maximum last\_updated will be picked for output. Then the next group is sought and the same operation is applied there.
If you have a composite index, sqlite will likely just use the index because the index contains both fields addressed in the query. Indexes are smaller than the table itself, so scanning or seeking is faster.
|
Well, in true "d'oh!" fashion, right after I ask this question, I find [the answer](https://stackoverflow.com/a/3814269/4463672).
For my case, the answer is:
```
SELECT MAX(Last_Updated),User_ID FROM records GROUP BY User_ID
```
I was making this more complicated than it needed to be by thinking I needed to use JOINs and stuff. Applying an aggregate function like `MAX()` is all that's needed to select only those rows whose content matches the function result. That means this statementβ¦
```
SELECT MAX(Last_Updated),User_ID FROM records
```
β¦would therefor return a result set containing only 1 row, the most recent event.
By adding the `GROUP BY` clause, however, the result set contains a row *for each* "group" of results, i.e., for each user. My programmer-brain did not understand that `GROUP BY` is how we say "for each" in SQL. I think I get it now.
Note to self: keep it simple, stupid. :)
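As a quick sanity check of the "for each" reading of `GROUP BY`, the sample rows from the question can be run through an in-memory SQLite database with Python's `sqlite3` (a sketch):

```python
import sqlite3

# Build the sample records table from the question and ask for the
# most recent Last_Updated per user via MAX + GROUP BY.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (Last_Updated INTEGER, User_ID INTEGER)")
con.executemany("INSERT INTO records VALUES (?, ?)",
                [(1434003858430, 1), (1433882146115, 3),
                 (1433882837088, 3), (1433964103500, 2)])
rows = con.execute("""
    SELECT MAX(Last_Updated), User_ID
    FROM records
    GROUP BY User_ID
    ORDER BY User_ID
""").fetchall()
print(rows)
# [(1434003858430, 1), (1433964103500, 2), (1433882837088, 3)]
```

One output row per user, and for user 3 only the later of the two timestamps survives.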
|
SQLite: How to SELECT "most recent record for each user" from single table with composite key?
|
[
"",
"sql",
"sqlite",
"greatest-n-per-group",
""
] |
I am not sure what to call this but basically, my database has games that can be played by 1 to 4 players.
The `Games` table has 4 foreign keys to the `PlayerGames` table. GameFirstPlace, GameSecondPlace... etc. They can be null.
If they are not they point to an entry in the `PlayerGames` table.
The `PlayerGames` table has, among other things, a foreign key to the `players` table. The `players` table has PlayerName.
I want to get the PlayerName for all the participants of a Game where PlayerGame is not null.
That is, if my game looks like:
```
GameFirstPlace GameSecondPlace GameThirdPlace GameFourthPlace
6 7 NULL NULL
Then PlayerGame id 6 has PlayerID = 7 and PlayerGame id 7 has PlayerID 3
Then Player with id 7 PlayerName = 'Jack' and Player with id 3 PlayerName = 'Mary'
```
Then my query might return:
```
GameID FirstPlaceName SecondPlaceName ThirdPlaceName FourthPlaceName
5 'Jack' 'Mary' NULL NULL
```
What might the select query for something like this look like?
|
```
SELECT
Game.GameID
,Player1.Name AS FirstPlaceName
,Player2.Name AS SecondPlaceName
,Player3.Name AS ThirdPlaceName
,Player4.Name AS FourthPlaceName
FROM
Game
LEFT OUTER JOIN PlayerGame as PlayerGame1 ON Game.GameFirstPlace = PlayerGame1.Id
LEFT OUTER JOIN Player AS Player1 ON PlayerGame1.PlayerId = Player1.Id
LEFT OUTER JOIN PlayerGame as PlayerGame2 ON Game.GameSecondPlace = PlayerGame2.Id
LEFT OUTER JOIN Player AS Player2 ON PlayerGame2.PlayerId = Player2.Id
    LEFT OUTER JOIN PlayerGame as PlayerGame3 ON Game.GameThirdPlace = PlayerGame3.Id
LEFT OUTER JOIN Player AS Player3 ON PlayerGame3.PlayerId = Player3.Id
    LEFT OUTER JOIN PlayerGame as PlayerGame4 ON Game.GameFourthPlace = PlayerGame4.Id
LEFT OUTER JOIN Player AS Player4 ON PlayerGame4.PlayerId = Player4.Id
```
|
You're going to have to duplicate all of the join logic four ways.
```
select
g.GameID,
p1.PlayerName as FirstPlaceName,
p2.PlayerName as SecondPlaceName,
p3.PlayerName as ThirdPlaceName,
p4.PlayerName as FourthPlaceName
from
Games g
left outer join PlayerGames pg1 on pg1.PlayerID = g.GameFirstPlace
left outer join Players p1 on p1.PlayerID = pg1.PlayerID
left outer join PlayerGames pg2 on pg2.PlayerID = g.GameSecondPlace
left outer join Players p2 on p2.PlayerID = pg2.PlayerID
left outer join PlayerGames pg3 on pg3.PlayerID = g.GameThirdPlace
left outer join Players p3 on p3.PlayerID = pg3.PlayerID
left outer join PlayerGames pg4 on pg4.PlayerID = g.GameFourthPlace
left outer join Players p4 on p4.PlayerID = pg4.PlayerID
```
I'm guessing that your `PlayerGame` table has a `GameID` which you could take advantage of to simplify the join logic. The output gets a little more complicated in return but the query will probably perform better.
```
SELECT
g.GameID,
min(case when p.PlayerID = g.GameFirstPlace then p.PlayerName end) AS FirstPlaceName,
min(case when p.PlayerID = g.GameSecondPlace then p.PlayerName end) AS SecondPlaceName,
min(case when p.PlayerID = g.GameThirdPlace then p.PlayerName end) AS ThirdPlaceName,
min(case when p.PlayerID = g.GameFourthPlace then p.PlayerName end) AS FourthPlaceName
FROM
Games g
inner join PlayerGames pg on pg.GameID = g.GameID
inner join Players p on p.PlayerID = pg.PlayerID
GROUP BY
g.GameID
```
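A quick sanity check of the conditional-aggregation variant with Python's `sqlite3` (a sketch; note that, to match the schema described in the question, the CASE expressions below compare the `PlayerGames` id rather than the `PlayerID`):

```python
import sqlite3

# Minimal data from the question: game 5, PlayerGame rows 6 and 7,
# players Jack (7) and Mary (3). One join, then CASE inside MIN.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Players (PlayerID INTEGER, PlayerName TEXT);
CREATE TABLE PlayerGames (Id INTEGER, GameID INTEGER, PlayerID INTEGER);
CREATE TABLE Games (GameID INTEGER, GameFirstPlace INTEGER,
                    GameSecondPlace INTEGER, GameThirdPlace INTEGER,
                    GameFourthPlace INTEGER);
INSERT INTO Players VALUES (7, 'Jack'), (3, 'Mary');
INSERT INTO PlayerGames VALUES (6, 5, 7), (7, 5, 3);
INSERT INTO Games VALUES (5, 6, 7, NULL, NULL);
""")
rows = con.execute("""
    SELECT g.GameID,
           MIN(CASE WHEN pg.Id = g.GameFirstPlace  THEN p.PlayerName END),
           MIN(CASE WHEN pg.Id = g.GameSecondPlace THEN p.PlayerName END),
           MIN(CASE WHEN pg.Id = g.GameThirdPlace  THEN p.PlayerName END),
           MIN(CASE WHEN pg.Id = g.GameFourthPlace THEN p.PlayerName END)
    FROM Games g
    JOIN PlayerGames pg ON pg.GameID = g.GameID
    JOIN Players p ON p.PlayerID = pg.PlayerID
    GROUP BY g.GameID
""").fetchall()
print(rows)  # [(5, 'Jack', 'Mary', None, None)]
```

The unfilled third and fourth places fall through every CASE and aggregate to NULL, matching the expected output in the question.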
|
Query that can get player's name if it is not null
|
[
"",
"mysql",
"sql",
""
] |
I currently have a query where it returns the total number of accounts each customer holds, but how do I make it so that it returns only customers who has more than 1 account?
```
SELECT C.customerID, COUNT(O.accNumber) AS "total"
FROM Customer C, Owns O
WHERE C.customerID = O.customerID
GROUP BY C.customerID
```
|
The answer to your question is `HAVING`. However, you need to learn to use proper `JOIN` syntax. Simple rule: *Never* use a comma in the `FROM` clause. *Always* use explicit `JOIN` syntax.
```
SELECT C.customerID, COUNT(O.accNumber) AS total
FROM Customer C JOIN
Owns O
ON C.customerID = O.customerID
GROUP BY C.customerID
HAVING COUNT(*) > 1;
```
Actually, you don't even need the `JOIN`:
```
SELECT o.customerID, COUNT(o.accNumber) AS total
FROM Owns o
GROUP BY o.customerID
HAVING COUNT(*) > 1;
```
That's much simpler.
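If you want to see `HAVING` filter the groups, here's a minimal run with Python's `sqlite3` (a sketch; the account data is made up):

```python
import sqlite3

# Three customers: 1 owns two accounts, 2 owns one, 3 owns three.
# HAVING keeps only the groups with more than one row.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Owns (customerID INTEGER, accNumber INTEGER)")
con.executemany("INSERT INTO Owns VALUES (?, ?)",
                [(1, 100), (1, 101), (2, 102), (3, 103), (3, 104), (3, 105)])
rows = con.execute("""
    SELECT customerID, COUNT(accNumber) AS total
    FROM Owns
    GROUP BY customerID
    HAVING COUNT(*) > 1
    ORDER BY customerID
""").fetchall()
print(rows)  # [(1, 2), (3, 3)]
```

Customer 2, with a single account, is filtered out after aggregation.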
|
Add a HAVING clause
```
SELECT C.customerID, COUNT(O.accNumber) AS "total"
FROM Customer C, Owns O
WHERE C.customerID = O.customerID
GROUP BY C.customerID
HAVING COUNT(*) > 1
```
|
SQL COUNT function for results "greater than or equal to"
|
[
"",
"sql",
"count",
""
] |
I am using the following spooling script with [Oracle SQL Developer 4.1](http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index-097090.html):
```
set echo off
set feedback off
set termout off
spool A.txt
select /*csv*/ * from A where rownum <= 1000;
spool off
spool B.txt
select /*csv*/ * from B where rownum <= 1000;
spool off
...
```
But the spooled output files contain a blank line at the beginning.
This is the same problem detailed in [sqlplus spooling: How to get rid of first, empty line?](https://stackoverflow.com/q/5576901/238421). I tried using `SET NEWPAGE NONE` but, as of SQL Developer 4.1.2, this only results in an error message and no change in the output format:
```
SP2-0158: unknown SET option "newpage"
```
Is there any way to suppress the output of this first, empty line in SQL Developer?
|
two things:
1. it's the default SQL\*Plus behavior, which we try to emulate as closely as possible
2. there's a bug - we're not supporting SET PAGESIZE 0. if you use this in conjunction with SET TRIMSPOOL ON, you'll lose the blank line(s)
we've got it on the list for the next release
2020 Update
Using Version 20.2 of SQL Developer, your script works as expected
[](https://i.stack.imgur.com/SopSz.png)
Unfortunately I see the issue in SQLcl (command line version of SQLDev) version 20.2, but it's fixed for 20.3 thanks to feedback from some folks on Twitter earlier this Summer.
Here's what it'll look like in a month or so when SQLcl 20.3 is released
```
10:38:34 nolog >show version
Oracle SQLDeveloper Command-Line (SQLcl) version: 20.3.0.0 build: 20.3.0.240.1605
10:40:31 nolog >set echo off
10:40:49 nolog >set feedback off
10:40:52 nolog >set termout off
10:40:56 nolog >spool A.txt
10:41:04 nolog >select /*csv*/ * from regions;
"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"
10:41:14 nolog >spool off
10:41:19 nolog >exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
c:\SQLDev\sqlcl\20.3-lines\sqlcl\bin>type A.txt
"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"
c:\SQLDev\sqlcl\20.3-lines\sqlcl\bin>
```
|
That doesn't seem to be possible; it's been asked before but I don't see any resolution. (Tried with `set sqlformat csv` instead of the `/*csv*/` hint, just in case, but the behaviour is the same).
If you export the query results from the grid as CSV (right-click the grid and choose Export...) then you don't get the blank line, but that's obviously a manual process outside your script, so not really equivalent.
Options seem to be to post-process the file to remove the blank line, export from the grid, or run your script using SQL\*Plus instead. None of which seem ideal.
|
SQL Developer spooling: how to get rid of the first, empty line
|
[
"",
"sql",
"oracle",
"output",
"oracle-sqldeveloper",
"flat-file",
""
] |
I am not sure whether this is possible, but I'd appreciate it if anyone can help me with it.
**If a column is blank for all of the output rows, then it should not appear in the output.**
sample data:
```
DECLARE @T TABLE
(
BaseVehicle VARCHAR (50),
SubModel VARCHAR (50),
Make VARCHAR (50),
Years VARCHAR (50),
FromYear VARCHAR (50),
ToYear VARCHAR (50)
)
INSERT @T
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','2005' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','',''
```
Expected Output
```
BaseVehicle |Make|year|toyear
1979 Ford LTD |ford|1979|
1979 Ford LTD |ford|1979|2005
1979 Ford LTD |ford|1979|
1979 Ford LTD |ford|1979|
```
Thanks
|
Compare the count of rows where the column is blank with the total row count. If the two match, the entire column is blank, so leave that column out of the selection.
**Query**
```
create table #T
(
BaseVehicle VARCHAR (50),
SubModel VARCHAR (50),
Make VARCHAR (50),
Years VARCHAR (50),
FromYear VARCHAR (50),
ToYear VARCHAR (50)
)
INSERT into #T
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','2005' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','',''
declare @strsql varchar(2500)
set @strsql = 'select '
set @strsql +=
(select case when (select COUNT(*) from #T where BaseVehicle = '')
<> (select count(*) from #T ) then 'BaseVehicle, ' else '' end)
set @strsql +=
(select case when (select COUNT(*) from #T where SubModel = '')
<> (select count(*) from #T ) then 'SubModel, ' else '' end)
set @strsql +=
(select case when (select COUNT(*) from #T where Make = '')
<> (select count(*) from #T ) then 'Make, ' else '' end)
set @strsql +=
(select case when (select COUNT(*) from #T where Years = '')
<> (select count(*) from #T ) then 'Years, ' else '' end)
set @strsql +=
(select case when (select COUNT(*) from #T where FromYear = '')
<> (select count(*) from #T ) then 'FromYear, ' else '' end)
set @strsql +=
(select case when (select COUNT(*) from #T where ToYear = '')
<> (select count(*) from #T ) then 'ToYear, ' else '' end)
set @strsql = LEFT(@strsql,len(@strsql) -1)
set @strsql += ' from #T'
exec (@strsql)
```
**Result**
```
BaseVehicle Make Years ToYear
1979 Ford LTD FORD 1979
1979 Ford LTD FORD 1979 2005
1979 Ford LTD FORD 1979
1979 Ford LTD FORD 1979
```
You can even build the query dynamically from `INFORMATION_SCHEMA` if the table is a permanent one.
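The same count-and-compare idea, sketched language-neutrally with Python's `sqlite3` (an illustration, not T-SQL; the in-memory table stands in for `#T`):

```python
import sqlite3

# Count blank cells per column; keep a column only if at least one
# row is non-blank, then build the SELECT list dynamically.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE T (BaseVehicle TEXT, SubModel TEXT, Make TEXT,
                               Years TEXT, FromYear TEXT, ToYear TEXT)""")
con.executemany("INSERT INTO T VALUES (?,?,?,?,?,?)", [
    ('1979 Ford LTD', '', 'FORD', '1979', '', ''),
    ('1979 Ford LTD', '', 'FORD', '1979', '', '2005'),
    ('1979 Ford LTD', '', 'FORD', '1979', '', ''),
    ('1979 Ford LTD', '', 'FORD', '1979', '', ''),
])
cols = [r[1] for r in con.execute("PRAGMA table_info(T)")]
total = con.execute("SELECT COUNT(*) FROM T").fetchone()[0]
keep = [c for c in cols
        if con.execute(f"SELECT COUNT(*) FROM T WHERE {c} = ''")
              .fetchone()[0] != total]
print(keep)  # ['BaseVehicle', 'Make', 'Years', 'ToYear']
rows = con.execute(f"SELECT {', '.join(keep)} FROM T").fetchall()
```

SubModel and FromYear are blank in every row, so they drop out, matching the expected output in the question.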
|
Actually you can do this with a temp table, because you can alter it. But I wouldn't recommend doing this:
```
DECLARE @T TABLE(
BaseVehicle VARCHAR (50),
SubModel VARCHAR (50),
Make VARCHAR (50),
Years VARCHAR (50),
FromYear VARCHAR (50),
ToYear VARCHAR (50))
INSERT @T
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','2005' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','','' UNION ALL
SELECT '1979 Ford LTD','','FORD','1979','',''
SELECT * INTO #staging FROM @T
IF(NOT EXISTS(SELECT * FROM #staging WHERE BaseVehicle <> ''))
ALTER TABLE #staging DROP COLUMN BaseVehicle
IF(NOT EXISTS(SELECT * FROM #staging WHERE SubModel <> ''))
ALTER TABLE #staging DROP COLUMN SubModel
IF(NOT EXISTS(SELECT * FROM #staging WHERE Make <> ''))
ALTER TABLE #staging DROP COLUMN Make
IF(NOT EXISTS(SELECT * FROM #staging WHERE Years <> ''))
ALTER TABLE #staging DROP COLUMN Years
IF(NOT EXISTS(SELECT * FROM #staging WHERE FromYear <> ''))
ALTER TABLE #staging DROP COLUMN FromYear
IF(NOT EXISTS(SELECT * FROM #staging WHERE ToYear <> ''))
ALTER TABLE #staging DROP COLUMN ToYear
SELECT * FROM #staging
DROP TABLE #staging
```
Output:
```
BaseVehicle Make Years ToYear
1979 Ford LTD FORD 1979
1979 Ford LTD FORD 1979 2005
1979 Ford LTD FORD 1979
1979 Ford LTD FORD 1979
```
|
if blank value in entire column then remove from output
|
[
"",
"sql",
"sql-server",
"database",
""
] |