| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the table below:
```
create table #t (Id int, Name char)
insert into #t values
(1, 'A'),
(2, 'A'),
(3, 'B'),
(4, 'B'),
(5, 'B'),
(6, 'B'),
(7, 'C'),
(8, 'B'),
(9, 'B')
```
I want to count consecutive values in the `Name` column:
```
+------+------------+
| Name | Repetition |
+------+------------+
| A | 2 |
| B | 4 |
| C | 1 |
| B | 2 |
+------+------------+
```
The best thing I tried is:
```
select Name
, COUNT(*) over (partition by Name order by Id) AS Repetition
from #t
order by Id
```
but it doesn't give me the expected result.
|
One approach is the difference of row numbers:
```
select name, count(*)
from (select t.*,
(row_number() over (order by id) -
row_number() over (partition by name order by id)
) as grp
from t
) t
group by grp, name;
```
The logic is easiest to understand if you run the subquery and look at the values of each row number separately and then look at the difference.
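The effect is easy to check outside the database too; here is a quick sketch using Python's built-in `sqlite3` module (window functions need SQLite 3.25+) with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Id INTEGER, Name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'A'), (2, 'A'), (3, 'B'), (4, 'B'), (5, 'B'),
                  (6, 'B'), (7, 'C'), (8, 'B'), (9, 'B')])

# The difference of the two row numbers is constant within each run of
# equal names, so it works as a group key.
rows = conn.execute("""
    SELECT Name, COUNT(*) AS Repetition
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (ORDER BY Id) -
                 ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Id) AS grp
          FROM t) x
    GROUP BY grp, Name
    ORDER BY MIN(Id)
""").fetchall()
print(rows)  # [('A', 2), ('B', 4), ('C', 1), ('B', 2)]
```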
|
You could use window functions like `LAG` and a running total:
```
WITH cte AS (
SELECT Id, Name, grp = SUM(CASE WHEN Name = prev THEN 0 ELSE 1 END) OVER(ORDER BY id)
FROM (SELECT *, prev = LAG(Name) OVER(ORDER BY id) FROM t) s
)
SELECT name, cnt = COUNT(*)
FROM cte
GROUP BY grp,name
ORDER BY grp;
```
**[db<>fiddle demo](https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=f9f74f6f7e8d81e16bb06aefd2f54d0d)**
The CTE returns a group number:
```
+-----+-------+-----+
| Id | Name | grp |
+-----+-------+-----+
| 1 | A | 1 |
| 2 | A | 1 |
| 3 | B | 2 |
| 4 | B | 2 |
| 5 | B | 2 |
| 6 | B | 2 |
| 7 | C | 3 |
| 8 | B | 4 |
| 9 | B | 4 |
+-----+-------+-----+
```
And the main query groups rows based on the `grp` column calculated earlier:
```
+-------+-----+
| name | cnt |
+-------+-----+
| A | 2 |
| B | 4 |
| C | 1 |
| B | 2 |
+-------+-----+
```
|
Count Number of Consecutive Occurrence of values in Table
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"aggregation",
""
] |
I would like to sum two columns "Immo"+"Conso" group by "ID" in order to create a new variable "Mixte". My new variable "Mixte" is as follow:
* if one ID has (at least) 1 in "Immo" AND 1 in "Conso" then "Mixte" is yes, otherwise "Mixte" is no.
For example:
```
Ident | Immo | Conso | Mixte
---------------------------------
1 | 0 | 1 | yes
1 | 1 | 0 | yes
2 | 1 | 0 | no
3 | 0 | 1 | no
3 | 0 | 1 | no
3 | 0 | 1 | no
4 | 0 | 1 | yes
4 | 0 | 1 | yes
4 | 1 | 0 | yes
```
Thank you for helping me. Do not hesitate to ask me questions if I wasn't clear.
|
```
select ident, result = (case when sum(Immo) > 0 and sum(Conso) > 0 then 'yes'
                             else 'no' end)
from tabname (NOLOCK)
group by ident
```
|
Use a correlated sub-select:
```
select t1.Ident, t1.Immo, t1.Conso,
case when (select max(Immo) + max(Conso) from tablename t2
where t2.Ident = t1.Ident) = 2 then 'yes'
else 'no'
end as Mixte
from tablename t1
```
`Ident` is a reserved word in ANSI SQL, so you may need to delimit it as `"Ident"`.
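A quick way to sanity-check the correlated sub-select is with Python's built-in `sqlite3` module and the example data from the question (the table name `tab` is just for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (Ident INTEGER, Immo INTEGER, Conso INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                 [(1, 0, 1), (1, 1, 0), (2, 1, 0), (3, 0, 1), (3, 0, 1),
                  (3, 0, 1), (4, 0, 1), (4, 0, 1), (4, 1, 0)])

# An Ident is "Mixte" when it has at least one 1 in Immo AND one 1 in Conso,
# i.e. max(Immo) + max(Conso) = 2 for that Ident.
rows = conn.execute("""
    SELECT t1.Ident, t1.Immo, t1.Conso,
           CASE WHEN (SELECT MAX(Immo) + MAX(Conso) FROM tab t2
                      WHERE t2.Ident = t1.Ident) = 2
                THEN 'yes' ELSE 'no' END AS Mixte
    FROM tab t1
""").fetchall()
print(sorted({(r[0], r[3]) for r in rows}))
# [(1, 'yes'), (2, 'no'), (3, 'no'), (4, 'yes')]
```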
|
SQL - Sum two columns group by ID
|
[
"",
"sql",
"sas",
""
] |
I have a table A that looks like this
```
Date Name Value
----------------------------
2015-01-01 A 12
2015-01-01 B 13
2015-01-01 C 10
2015-01-01 D 9
2015-01-01 E 15
2015-01-01 F 11
2015-01-02 A 1
2015-01-02 B 2
2015-01-02 C 3
2015-01-02 D 4
2015-01-02 E 5
2015-01-02 F 6
2015-01-03 A 7
2015-01-03 B 8
2015-01-03 C 9
2015-01-03 D 10
2015-01-03 E 15
2015-01-03 F 16
....
```
Which contains a value for each name for each day. I need a second table which looks like this
```
Date Name ValueDate ValueDate+1 ValueDate+2
--------------------------------------------------------------
2015-01-01 A 12 1 7
2015-01-01 B 13 2 8
2015-01-01 C 10 3 9
2015-01-01 D 9 4 10
2015-01-01 E 15 5 15
2015-01-01 F 11 6 16
2015-01-02 A 1 7 ...
2015-01-02 B 2 8 ...
2015-01-02 C 3 9 ...
2015-01-02 D 4 10 ...
2015-01-02 E 5 15 ...
2015-01-02 F 6 16 ...
```
I tried creating an intermediate table which has all the dates correctly entered
```
Date Name ValueDate ValueDate+1 ValueDate+2
----------------------------------------------------------------
2015-01-01 A 2015-01-01 2015-01-02 2015-01-03
2015-01-01 B 2015-01-01 2015-01-02 2015-01-03
2015-01-01 C 2015-01-01 2015-01-02 2015-01-03
2015-01-01 D 2015-01-01 2015-01-02 2015-01-03
2015-01-01 E 2015-01-01 2015-01-02 2015-01-03
2015-01-01 F 2015-01-01 2015-01-02 2015-01-03
...
```
My idea then was to use some kind of JOIN on table A to map the corresponding values to the dates and use something like
```
CASE WHEN Date = ValueDate THEN Value ELSE NULL END AS ValueDate+1
```
I am struggling to figure out how this can be done in SQL. I essentially need all the values over a window for an initial date sequence. To give some background, I want to see, for a regular time interval, how the value behaves in the following x days. The data types are Date for all the date columns, varchar for the name, and numerics for the values. ValueDate+1 and +2 mean +1/+2 days. Also, it cannot be assumed that the count of names stays constant over time.
thanks
|
I found one way of getting the desired results, by writing a `row_number()` sub-select limited to the desired window size, which gives each entry per date something like this:
```
Date Name Value Row_Num
---------------------------------------
2015-01-01 A 12 0
2015-01-01 A 12 1
2015-01-01 A 12 2
2015-01-01 A 12 3
```
In the next step one can use
```
(Date + Row_Num*INTERVAL'1 DAY')::DATE
```
which then can be joined on the initial table and pivoted. This will allow for any arbitrary combination of Names per date.
|
You just want `lead()`:
```
select a.*,
lead(value) over (partition by name order by date) as value_1,
lead(value, 2) over (partition by name order by date) as value_2
from a;
```
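As a sketch (using Python's built-in `sqlite3`, which also provides `lead()` in SQLite 3.25+, and a trimmed-down version of the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (date TEXT, name TEXT, value INTEGER)")
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [('2015-01-01', 'A', 12), ('2015-01-02', 'A', 1),
                  ('2015-01-03', 'A', 7),  ('2015-01-01', 'B', 13),
                  ('2015-01-02', 'B', 2),  ('2015-01-03', 'B', 8)])

# lead(value, n) pulls the value n rows ahead within the same name,
# ordered by date -- exactly the "ValueDate+n" columns.
rows = conn.execute("""
    SELECT a.*,
           LEAD(value)    OVER (PARTITION BY name ORDER BY date) AS value_1,
           LEAD(value, 2) OVER (PARTITION BY name ORDER BY date) AS value_2
    FROM a
    ORDER BY date, name
""").fetchall()
print(rows[0])  # ('2015-01-01', 'A', 12, 1, 7)
```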
|
JOIN multiple rows to multiple columns in single row Netezza/Postgres
|
[
"",
"sql",
"join",
"netezza",
""
] |
I have been tasked with replacing a costly stored procedure which performs calculations across 10 - 15 tables, some of which contain many millions of rows. The plan is to pre-stage the many computations and store the results in separate tables for speeding reading.
Having quickly created these new tables and inserted all of the necessary pre-staged data as a test case, the execution time of getting the same results is vastly improved, as you would expect.
My question is, **what is the best practice for keeping these new separate tables up to date**?
* A procedure which runs at a specific interval could do it, but there
is a requirement for the data to be live.
* A trigger on each table could do it, but that seems very costly, and
could cause slow-downs for everywhere else that uses these tables.
Are there other alternatives?
|
Have you considered Indexed Views for this? As long as you meet the criteria for creating Indexed Views (no self joins etc) it may well be a good solution.
The downsides of Indexed Views are that when the data in underlying tables is changed (delete, update, insert) then it will have to recalculate the indexed view. This can slow down these types of operations in certain circumstances so you have to be careful. I've put some links to documentation below;
<https://www.brentozar.com/archive/2013/11/what-you-can-and-cant-do-with-indexed-views/>
<https://msdn.microsoft.com/en-GB/library/ms191432.aspx>
<https://technet.microsoft.com/en-GB/library/ms187864(v=sql.105).aspx>
|
> what is the best practice for keeping these new separate tables up to date?

The answer is: it depends. Depends on what?
1. How frequently you will use those computed values.
2. What the acceptable data latency is.
We have the same kind of reporting, where we store computed values in separate tables and use them in reports. In our case we run those stored procedures before sending the reports through SQL Server Agent.
|
Pre-Staging Data Solution
|
[
"",
"sql",
"sql-server",
""
] |
I currently use the following query, which takes about 8 minutes to return the result due to the volume of data (about 14 months). Is there a way I can speed this up, please?
The database in question is MySQL with InnoDb engine
```
select
CUSTOMER as CUST,
SUM(IF(PAGE_TYPE = 'C',PAGE_TYPE_COUNT,0)) AS TOTAL_C,
SUM(IF(PAGE_TYPE = 'D',PAGE_TYPE_COUNT,0)) AS TOTAL_D
from
PAGE_HITS
where
EVE_DATE >= '2016-01-01' and EVE_DATE <= '2016-01-05'
and SITE = 'P'
and SITE_SERV like 'serv1X%'
group by
CUST
```
Data is partitioned by 6 months. Every column that appears in the where clause is indexed. There are quite a few indexes, too many to list here, so I'll summarize: with respect to this query, `EVE_DATE + PAGE_TYPE_COUNT` is one of the composite indexes, and so are `CUST + SITE_SERV + EVE_DATE`, `EVE_DATE + SITE_SERV`, and `EVE_DATE + SITE`.
The primary key is actually a dummy auto-increment number; it isn't used, to be honest. I don't have access to the explain plan. I'll see what best I can do for this.
I'd appreciate any help to improve this one please.
|
Okay, as the table range partition is on EVE\_DATE, the DBMS should easily see which partition to read. So it's all about what index to use then.
There is one column you check for equality (`SITE = 'P'`). This should come first in your index. You can then add `EVE_DATE` and `SITE_SERV` in whatever order I guess. Thus your index should be able to locate the table records in question as fast as possible.
If, however, you add the other fields used in your query to your index, the table wouldn't even have to be read, because all data would be available in the index itself:
```
create index on page_hits(site, eve_date, site_serv, customer, page_type, page_type_count);
```
This should be the optimal index for your query if I am not mistaken.
|
I don't have the data so I can't test the speed of this but I think it would be faster.
```
select
CUSTOMER as CUST,
SUM(PAGE_TYPE_COUNT * (PAGE_TYPE = 'C')) AS TOTAL_C,
SUM(PAGE_TYPE_COUNT * (PAGE_TYPE = 'D')) AS TOTAL_D
from
PAGE_HITS
where
EVE_DATE >= '2016-01-01' and EVE_DATE <= '2016-01-05'
and SITE = 'P'
and SITE_SERV like 'serv1X%'
group by
CUST
```
It worked just fine on my fiddle on MySQL 5.6.
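The boolean-multiplication trick also works in SQLite, so it can be sanity-checked with Python's built-in `sqlite3` module (the data here is made up for the demo, and the `WHERE` clause is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE page_hits
                (customer TEXT, page_type TEXT, page_type_count INTEGER)""")
conn.executemany("INSERT INTO page_hits VALUES (?, ?, ?)",
                 [('cust1', 'C', 3), ('cust1', 'D', 5),
                  ('cust1', 'C', 2), ('cust2', 'D', 7)])

# (page_type = 'C') evaluates to 1 or 0, so multiplying by it keeps only
# the matching rows' counts -- same effect as SUM(IF(...)) in MySQL.
rows = conn.execute("""
    SELECT customer,
           SUM(page_type_count * (page_type = 'C')) AS total_c,
           SUM(page_type_count * (page_type = 'D')) AS total_d
    FROM page_hits
    GROUP BY customer
    ORDER BY customer
""").fetchall()
print(rows)  # [('cust1', 5, 5), ('cust2', 0, 7)]
```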
|
SQL - speed up query
|
[
"",
"mysql",
"sql",
"query-optimization",
""
] |
I have a table of this structure:
```
ID TaskID ResourceID IsActive
--- ----- ---------- --------
1 51 101 1
2 52 101 1
3 53 101 1
4 51 102 0
5 52 102 0
6 53 102 0
7 51 103 1
8 52 103 0
9 53 103 1
```
I want to get the resources whose `IsActive` column is 0 in all records. In this example I want to get ResourceID 102 as the result, since all its `IsActive` values are 0.
I tried doing :
```
select ResourceID
from TableName
where ResourceID <> (SELECT ResourceID
from TableName
group by ResourceID, IsActive
having IsActive = 1)
```
In the subquery, I'm trying to get all Resources who have IsActive = 1. But when none of the records have IsActive = 1, it returns no result. Hence my main query also fails. Any suggestions on how to achieve my result?
Edit :
**Solution** :
```
select distinct ResourceID
from TableName t1
where not exists (select 1 from TableName t2
where t1.ResourceID = t2.ResourceID
and t2.IsActive = 1)
```
Also, I think my question is simple and to the point compared with the suggested "possible duplicate". Future readers might find this question easier to relate to: users are more likely to search for this problem as "SQL Server group by two columns" than as "SQL: Selecting IDs that don't have any rows with a certain value for a column".
|
Return a row as long as no other row with same ResourceID has IsActive = 1:
```
select ResourceID
from TableName t1
where not exists (select 1 from TableName t2
where t1.ResourceID = t2.ResourceID
and t2.IsActive = 1)
```
Perhaps you want to do `select distinct ResourceID` to remove duplicates.
|
```
SELECT ResourceID
FROM
TableName T
WHERE
NOT EXISTS
(
SELECT *
FROM TableName
WHERE
ResourceID = T.ResourceID AND
IsActive = 1
)
```
Or...
```
SELECT ResourceID
FROM
TableName
GROUP BY
ResourceID
HAVING
MAX(IsActive) = 0
```
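For illustration, the `NOT EXISTS` variant can be verified against the question's data with Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TableName
                (ID INTEGER, TaskID INTEGER, ResourceID INTEGER,
                 IsActive INTEGER)""")
conn.executemany("INSERT INTO TableName VALUES (?, ?, ?, ?)",
                 [(1, 51, 101, 1), (2, 52, 101, 1), (3, 53, 101, 1),
                  (4, 51, 102, 0), (5, 52, 102, 0), (6, 53, 102, 0),
                  (7, 51, 103, 1), (8, 52, 103, 0), (9, 53, 103, 1)])

# Keep a ResourceID only if no row for that resource has IsActive = 1.
rows = conn.execute("""
    SELECT DISTINCT ResourceID
    FROM TableName t
    WHERE NOT EXISTS (SELECT 1 FROM TableName t2
                      WHERE t2.ResourceID = t.ResourceID
                        AND t2.IsActive = 1)
""").fetchall()
print(rows)  # [(102,)]
```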
|
SQL Server groupby two columns
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |
I am trying to run this SQL query:
```
SELECT avg(response_seconds) as s FROM
( select time_to_sec( timediff( from_unixtime( floor( UNIX_TIMESTAMP(u.datetime)/60 )*60 ), u.datetime) ) ) as response_seconds
FROM tickets t JOIN ticket_updates u ON t.ticketnumber = u.ticketnumber
WHERE u.type = 'update' and t.customer = 'Y' and DATE(u.datetime) = '2016-04-18'
GROUP BY t.ticketnumber)
AS r
```
but I am seeing this error:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM tickets t JOIN ticket_updates u ON t.ticketnumber = u.ticketnumber WHE' at line 3
```
and I cannot work out where the error is in the query.
|
The extra closing parenthesis in `) ) as response_seconds` is causing the problem; removing it will solve it. For better readability I aligned the code:
```
SELECT avg(response_seconds) AS s
FROM
(
SELECT
time_to_sec(
timediff(
from_unixtime(
floor(
UNIX_TIMESTAMP(u.datetime)/60
)*60
), u.datetime
) -- ) the one more extra parenthesis causing the problem
) as response_seconds
FROM tickets t
JOIN ticket_updates u ON t.ticketnumber = u.ticketnumber
WHERE u.type = 'update' and t.customer = 'Y' and DATE(u.datetime) = '2016-04-18'
GROUP BY t.ticketnumber
) AS r
```
|
Remove the `)` just before `as response_seconds`
```
SELECT avg(response_seconds) as s FROM
( select time_to_sec( timediff( from_unixtime( floor( UNIX_TIMESTAMP(u.datetime)/60 )*60 ), u.datetime) ) as response_seconds
FROM tickets t
JOIN ticket_updates u ON t.ticketnumber = u.ticketnumber
WHERE u.type = 'update'
and t.customer = 'Y'
and DATE(u.datetime) = '2016-04-18'
GROUP BY t.ticketnumber
) AS r
```
You had too many closing brackets on that calculation, which had the effect of closing the sub-select too early.
|
cannot find SQL syntax error
|
[
"",
"mysql",
"sql",
""
] |
I'm working with SQL Server 2012 and wish to query the following:
I've got 2 tables with mostly different columns (one table has 10 columns, the other has 6 columns).
However, they both contain a column with an ID number and another column with a category name.
1. The ID numbers may overlap between the tables (e.g. one table may have 200 distinct IDs and the other 900, but only 120 of the IDs are in both).
2. The category names are different and unique for each table.
Now I wish to have a single table that includes all the rows of both tables, with a single ID column and a single `Category_name` column (a total of 14 columns).
So in case the same ID has 3 records in table 1 and another 5 records in table 2, I wish to have all 8 records (8 rows).
The complex thing here, I believe, is to have a single `Category_name` column.
I tried the following, but when an ID exists in both tables (so neither side is null) I'm getting only one combined record instead of both rows:
```
SELECT isnull(t1.id, t2.id) AS [id]
,isnull(t1.[category], t2.[category_name]) AS [category name]
FROM t1
FULL JOIN t2
ON t1.id = t2.id;
```
Any suggestions on the correct way to have it done?
|
Make your `FULL JOIN` use `ON 1=0`.
This will prevent rows from combining and ensure that you always get 1 copy of each row from each table.
Further explanation:
A `FULL JOIN` gets rows from both tables, whether they have a match or not, but when they do match, it combines them on one row.
You wanted a full join where you never combine the rows, because you wanted every row in both tables to appear one time, no matter what. 1 can never equal 0, so doing a FULL JOIN on 1=0 will give you a full join where none of the rows match each other.
And of course you're already doing the ISNULL to make sure the ID and Name columns always have a value.
|
This demonstrates how you can use a UNION ALL to combine the row sets from two tables, TableA and TableB, and insert the set into TableC.
Create two source tables with some data:
```
CREATE TABLE dbo.TableA
(
id int NOT NULL,
category_name nvarchar(50) NOT NULL,
other_a nvarchar(20) NOT NULL
);
CREATE TABLE dbo.TableB
(
id int NOT NULL,
category_name nvarchar(50) NOT NULL,
other_b nvarchar(20) NOT NULL
);
INSERT INTO dbo.TableA (id, category_name, other_a)
VALUES (1, N'Alpha', N'ppp'),
(2, N'Bravo', N'qqq'),
(3, N'Charlie', N'rrr');
INSERT INTO dbo.TableB (id, category_name, other_b)
VALUES (4, N'Delta', N'sss'),
(5, N'Echo', N'ttt'),
(6, N'Foxtrot', N'uuu');
```
Create TableC to receive the result set. Note that columns other\_a and other\_b allow null values.
```
CREATE TABLE dbo.TableC
(
id int NOT NULL,
category_name nvarchar(50) NOT NULL,
other_a nvarchar(20) NULL,
other_b nvarchar(20) NULL
);
```
Insert the combined set of rows into TableC:
```
INSERT INTO dbo.TableC (id, category_name, other_a, other_b)
SELECT id, category_name, other_a, NULL AS 'other_b'
FROM dbo.TableA
UNION ALL
SELECT id, category_name, NULL, other_b
FROM dbo.TableB;
```
Display the results:
```
SELECT *
FROM dbo.TableC;
```
[](https://i.stack.imgur.com/VPyoT.png)
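As a quick check, the same `UNION ALL` approach can be run with Python's built-in `sqlite3` module (trimmed to two rows per table, and skipping the intermediate TableC):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableA (id INTEGER, category_name TEXT, other_a TEXT);
    CREATE TABLE TableB (id INTEGER, category_name TEXT, other_b TEXT);
    INSERT INTO TableA VALUES (1, 'Alpha', 'ppp'), (2, 'Bravo', 'qqq');
    INSERT INTO TableB VALUES (4, 'Delta', 'sss'), (5, 'Echo', 'ttt');
""")

# Each branch pads the column it lacks with NULL, so both row sets line up
# on the same 4-column shape and every source row survives.
rows = conn.execute("""
    SELECT id, category_name, other_a, NULL AS other_b FROM TableA
    UNION ALL
    SELECT id, category_name, NULL, other_b FROM TableB
    ORDER BY id
""").fetchall()
print(rows)
# [(1, 'Alpha', 'ppp', None), (2, 'Bravo', 'qqq', None),
#  (4, 'Delta', None, 'sss'), (5, 'Echo', None, 'ttt')]
```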
|
How to join two tables together and return all rows from both tables, and to merge some of their columns into a single column
|
[
"",
"sql",
"sql-server",
"database",
"join",
""
] |
I am able to do this in SSMS. I want to do this in SSOE in VS13.
|
View the table in Designer mode, right-click, and try to set the identity. Good luck.
|
Things to check:
If the table has already been created, SSMS is set by default to prevent changes like that (which actually drop and re-create the table behind the scenes). If this is the case for you, in SSMS go to Tools -> Options -> Designers -> uncheck "Prevent saving changes that require table re-creation".
If it's a new table (or you've already done the above), make sure the column in question is of type int. By default, SSMS sets a new column (even one whose name ends with "ID") to be nchar(10), which can be misleading.
|
Identity Increment in SQL Server object explorer is grayed out. How to set is identity = true in sql server object explorer in VS13?
|
[
"",
"sql",
"visual-studio-2013",
"sql-server-2012",
""
] |
I have a column in my SQL table. I am wondering how I can add a leading zero to my column when its value is less than 10. So for example:
```
number result
1 -> 01
2 -> 02
3 -> 03
4 -> 04
10 -> 10
```
|
```
format(number,'00')
```
Works on SQL Server 2012 and later.
|
You can use `RIGHT`:
```
SELECT RIGHT('0' + CAST(Number AS VARCHAR(2)), 2) FROM tbl
```
For `Number`s with length > 2, you use a `CASE` expression:
```
SELECT
CASE
WHEN Number BETWEEN 0 AND 99
THEN RIGHT('0' + CAST(Number AS VARCHAR(2)), 2)
ELSE
CAST(Number AS VARCHAR(10))
END
FROM tbl
```
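SQLite has no `RIGHT()` function, but the same idea can be sketched with `substr()` and a negative start position via Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# substr(s, -2) takes the last two characters, which mimics
# RIGHT('0' + CAST(number AS VARCHAR(2)), 2).
rows = [conn.execute("SELECT substr('0' || ?, -2)", (n,)).fetchone()[0]
        for n in (1, 2, 3, 4, 10)]
print(rows)  # ['01', '02', '03', '04', '10']
```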
|
How to add leading zero when number is less than 10?
|
[
"",
"sql",
"sql-server",
""
] |
I am getting an error while trying to connect to (LocalDB)\MSSQLLocalDB through SQL Server Management Studio. I also tried to log in with the default database set to master; the error is the same.
[](https://i.stack.imgur.com/Q3Gyb.png)
Here are the server details.
[](https://i.stack.imgur.com/dE9Aw.png)
|
**Warning: this will delete all your databases located in MSSQLLocalDB. Proceed with caution.**
The following command through sqllocaldb utility works for me.
```
sqllocaldb stop mssqllocaldb
sqllocaldb delete mssqllocaldb
sqllocaldb start "MSSQLLocalDB"
```
[](https://i.stack.imgur.com/Vqiv6.png)
After that I restarted SQL Server Management Studio, and it successfully established a connection through `(LocalDB)\MSSQLLocalDB`.
|
For this particular error, what gave me access to my MDF in VS2019 was:
1. In Solution Explorer, right click your MDF file
2. Detach
That was it and I now have access. I was expecting to detach and attach, but that wasn't needed.
I also could not get to my (localdb) in SSMS either, so what helped me there was a solution by Leniel Maccaferri. Here is the link to his site, along with the excerpt that helped me:
<https://www.leniel.net/2014/02/localdb-sqlserver-2012-cannot-open-database-requested-by-login-the-login-failed-error-4060.html>
[](https://i.stack.imgur.com/xuIsg.png)
So guess what: the solution is ridiculously easy once you know what to do of course…
Click that Options >> button in Figure 1. Now select the Connection Properties tab.
Figure 2 - SSMS Connect to Server | Connection Properties | Connect to database option
I had to type master in Connect to database field since I did not have it in the list of available databases.
Now click connect and you’re done.
|
Cannot connect to (LocalDB)\MSSQLLocalDB -> Login failed for user 'User-PC\User'
|
[
"",
"sql",
"sql-server",
"ssms",
"localdb",
""
] |
I'm using a Crystal Reports 13 Add Command for record selection from an Oracle database connected through the Oracle 11g Client. The error I am receiving is ORA-00933: SQL command not properly ended, but I can't find anything the matter with my code (incomplete):
```
/* Determine units with billing code effective dates in the previous month */
SELECT "UNITS"."UnitNumber", "BILL"."EFF_DT"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
WHERE "UNITS"."OwnerDepartment" LIKE '580' AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
INNER JOIN
/* Loop through previously identified units and determine last billing code change prior to previous month */
(
SELECT "BILL2"."UNIT_ID", MAX("BILL2"."EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL2"
WHERE TO_CHAR("BILL2"."EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "BILL2"."UNIT_ID"
)
ON "BILL"."UNIT_ID" = "BILL2"."UNIT_ID"
ORDER BY "UNITS"."UnitNumber", "BILL"."EFF_DT" DESC
```
We are a state entity that leases vehicles (units) to other agencies. Each unit has a billing code with an associated effective date. The application is to develop a report of units with billing codes changes in the previous month.
Complicating the matter is that for each unit above, the report must also show the latest billing code and associated effective date prior to the previous month. A brief example:
Given this data and assuming it is now April 2016 (ordered for clarity)...
```
Unit Billing Code Effective Date Excluded
---- ------------ -------------- --------
1 A 04/15/2016 Present month
1 B 03/29/2016
1 A 03/15/2016
1 C 03/02/2016
1 B 01/01/2015
2 C 03/25/2016
2 A 03/04/2016
2 B 07/24/2014
2 A 01/01/2014 A later effective date prior to previous month exists
3 D 11/28/2014 No billing code change during previous month
```
The report should return the following...
```
Unit Billing Code Effective Date
---- ------------ --------------
1 B 03/29/2016
1 A 03/15/2016
1 C 03/02/2016
1 B 01/01/2015
2 C 03/25/2016
2 A 03/04/2016
2 B 07/24/2014
```
Any assistance resolving the error will be appreciated.
|
If the `where` before the `join` really matters to you, use a CTE (employing a `with` clause for a temporary result set and joining on it):
```
With c as (SELECT "UNITS"."UnitNumber", "BILL"."EFF_DT","BILL"."UNIT_ID" -- Correction: Was " BILL"."UNIT_ID" (spacetanker)
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL" -- Returning unit id column too, to be used in join
LEFT OUTER JOIN "MFIVE"."VIEW_ALL_UNITS" "UNITS" ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
WHERE "UNITS"."OwnerDepartment" LIKE '580' AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY'))
select * from c --Filter out your required columns from c and d alias results, e.g c.UNIT_ID
INNER JOIN
-- Loop through previously identified units and determine last billing code change prior to previous month
(
SELECT "BILL2"."UNIT_ID", MAX("BILL2"."EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL2"
WHERE TO_CHAR("BILL2"."EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "BILL2"."UNIT_ID"
) d
ON c."UNIT_ID" = d."UNIT_ID"
order by c."UnitNumber", c."EFF_DT" desc -- Correction: Removed semicolon that Crystal Reports didn't like (spacetanker)
```
It seems this query has lots of scope for tuning though. However, one who has access to the data and requirement specification is the best judge.
**EDIT :**
You are not able to see data prior to the previous month since you are using BILL.EFF\_DT in your original question's select statement, which is filtered to give only dates in the previous month (`..AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')`).
If you want the data shaped the way you described, I guess you have to use the BILL2 section (the d part in my subquery), giving an alias to MAX(EFF\_DT) and using that alias in your select clause.
|
You have a `WHERE` clause before the `INNER JOIN` clause. This is invalid syntax - if you swap them it should work:
```
SELECT "UNITS"."UnitNumber",
"BILL"."EFF_DT"
FROM "MFIVE"."BILL_UNIT_ACCT" "BILL"
LEFT OUTER JOIN
"MFIVE"."VIEW_ALL_UNITS" "UNITS"
ON "BILL"."UNIT_ID" = "UNITS"."UNITID"
INNER JOIN
/* Loop through previously identified units and determine last billing code change prior to previous month */
(
SELECT "UNIT_ID",
MAX("EFF_DT")
FROM "MFIVE"."BILL_UNIT_ACCT"
WHERE TO_CHAR("EFF_DT", 'MMYYYY') < TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
GROUP BY "UNIT_ID"
) "BILL2"
ON "BILL"."UNIT_ID" = "BILL2"."UNIT_ID"
WHERE "UNITS"."OwnerDepartment" LIKE '580'
AND TO_CHAR("BILL"."EFF_DT", 'MMYYYY') = TO_CHAR(ADD_MONTHS(TRUNC(SYSDATE), -1), 'MMYYYY')
ORDER BY "UNITS"."UnitNumber", "BILL"."EFF_DT" DESC
```
Also, you need to move the `"BILL2"` alias outside the `()` brackets as you do not need the alias inside the brackets but you do outside.
Are you really sure you need the double-quotes `""`? Double-quotes enforce case sensitivity in column names - the default behaviour is for Oracle to convert all table and column names to upper case to abstract the case-sensitivity from the user - since you are using both double-quotes and upper-case names the quotes seems redundant.
|
SQL Command Not Properly Ended - Oracle Subquery
|
[
"",
"sql",
"oracle",
"oracle11g",
"crystal-reports",
""
] |
I get this message: `column "mi.*" must appear in the GROUP BY clause or be used in an aggregate function`. What does that mean, and how do I solve it?
I tried changing `GROUP BY m.id` to `GROUP BY m.id, mi.media_id`; still the same error.
As a test, if I remove `GROUP BY m.id ORDER BY COUNT(mua.id)` it works.
data structure
```
[ { id: 54,
media_information: {
id: 1,
media_id: 54
}
},
]
```
query
```
SELECT
m.*,
row_to_json(mi.*) as media_information
FROM media m
LEFT JOIN media_information mi ON mi.media_id = m.id
LEFT JOIN media_user_action mua ON mua.media_id = m.id
GROUP BY m.id
ORDER BY COUNT(mua.id)
...
```
table
```
media
id | ...
1
media_information
id | media_id fk media.id | ...
1 | 1
media_user_action
id | media_id fk media.id | user_id
1 | 1 | 1
2 | 1 | 3
```
Update based on the answer below:
```
Select m2.*
From media m2
LEFT JOIN media_user_action mua ON mua.media_id = m2.id
Where m2.id in (
SELECT
m.*,
row_to_json(mi.*) as media_information
FROM media m
LEFT JOIN media_information mi ON mi.media_id = m.id
)
GROUP BY m2.id
ORDER BY COUNT(mua.id)
```
|
You can get the grouping first and then do a join like
```
SELECT
m.*,
row_to_json(mi.*) as media_information
FROM media m
LEFT JOIN media_information mi ON mi.media_id = m.id
LEFT JOIN (select media_id, COUNT(id) as mua_count
from media_user_action
group by media_id) xxx ON xxx.media_id = m.id
ORDER BY xxx.mua_count;
```
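The pre-aggregate-then-join pattern can be sketched with Python's built-in `sqlite3` module (`row_to_json` is PostgreSQL-specific, so this demo keeps only the counting part, with a minimal made-up schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE media (id INTEGER);
    CREATE TABLE media_user_action
        (id INTEGER, media_id INTEGER, user_id INTEGER);
    INSERT INTO media VALUES (1), (2);
    INSERT INTO media_user_action VALUES (1, 1, 1), (2, 1, 3);
""")

# Aggregate first in a derived table, then join: no GROUP BY is needed in
# the outer query, so m.* can be selected freely.
rows = conn.execute("""
    SELECT m.id, COALESCE(x.mua_count, 0) AS mua_count
    FROM media m
    LEFT JOIN (SELECT media_id, COUNT(id) AS mua_count
               FROM media_user_action
               GROUP BY media_id) x ON x.media_id = m.id
    ORDER BY COALESCE(x.mua_count, 0)
""").fetchall()
print(rows)  # [(2, 0), (1, 2)]
```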
|
Do the group by in a subselect. I don't have your structure, so perhaps it won't work exactly; please treat it as a sample:
```
SELECT
m.*,
row_to_json(mi.*) as media_information
FROM media m
JOIN media_information mi ON mi.media_id = m.id
JOIN media_user_action mua ON mua.media_id = m.id
JOIN (
SELECT
m.id, count(mua.id) as cnt
FROM media m
JOIN media_information mi ON mi.media_id = m.id
JOIN media_user_action mua ON mua.media_id = m.id
GROUP BY m.id) as counts on m.id = counts.id
ORDER BY counts.cnt
```
|
select row_to_json table, error: must appear in the GROUP BY clause or be used in an aggregate function
|
[
"",
"sql",
"postgresql",
""
] |
I have a table looks like:
```
user key
x 1
x 2
y 1
z 1
```
The question is simple: how do I find the users who do not have key 2?
The result should be users y and z.
|
@jarlh's answer is likely the fastest if you have ***two*** tables;
- One with the users
- One with your facts
```
select "users"."user_id"
from "users"
where not exists (select 1 from tablename t2
where t2."user_id" = "users"."user_id"
and t2."key" = 2)
```
That's the structure I would recommend too, having two tables.
For your case, where you only have one table, the following may be a faster alternative; it does not need to join or run a correlated sub-query, but rather scans the whole table just once.
```
SELECT
"user"
FROM
tablename
GROUP BY
"user"
HAVING
MAX(CASE WHEN "key" = 2 THEN 1 ELSE 0 END) = 0
```
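For illustration, here is the `HAVING MAX(CASE ...)` variant run against the question's data via Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE tablename ("user" TEXT, "key" INTEGER)')
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [('x', 1), ('x', 2), ('y', 1), ('z', 1)])

# For each user, MAX(CASE ...) is 1 iff any of their rows has key = 2;
# keeping only the 0s yields the users who never have key 2.
rows = conn.execute("""
    SELECT "user"
    FROM tablename
    GROUP BY "user"
    HAVING MAX(CASE WHEN "key" = 2 THEN 1 ELSE 0 END) = 0
    ORDER BY "user"
""").fetchall()
print([r[0] for r in rows])  # ['y', 'z']
```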
|
Return a user row as long as the same user doesn't have another row with key 2.
```
select user, key
from tablename t1
where not exists (select 1 from tablename t2
where t2.user = t1.user
and t2.key = 2)
```
Note that `user` is a reserved word in ANSI SQL, so you may need to delimit it as `"user"`.
|
PostgreSQL selection
|
[
"",
"sql",
"postgresql",
""
] |
Is there anything wrong with this statement?
```
SELECT *
FROM Movies INNER JOIN
Sessions
ON Movies.MovieID=Sessions.MovieID INNER JOIN
Tickets
ON Sessions.SessionID=Tickets.SessionID;
```
Whenever I run it on Access I get a syntax error: 'Missing Operator'.
Also, are there any alternatives to Access into which I can import data from an Excel spreadsheet?
|
In general, no. In MS Access, yes. It likes extra parentheses, probably because the database developers don't believe in readability:
```
SELECT *
FROM (Movies INNER JOIN
Sessions
ON Movies.MovieID = Sessions.MovieID
) INNER JOIN
Tickets
ON Sessions.SessionID = Tickets.SessionID;
```
|
You could enable OPENROWSET if you have a local instance of SQL, and install MDACs (I would install both x86 and x64 if you have a 64 bit pc). Below is a link to an article that will help you get setup. Also, be sure to run the management studio with elevated privileges.
[How to enable Ad Hoc Distributed Queries](https://stackoverflow.com/questions/14544221/how-to-enable-ad-hoc-distributed-queries)
Below is how the query would look. In my example I use Excel 8.0 instead of 12 because the column names are addressable in my select statements for 8.
```
SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 8.0;Database=C:\Temp\MyExcelDoc.xlsx;',
'SELECT * FROM [Sheet1$]')
```
|
SQL error Statement, Missing Operator
|
[
"",
"sql",
"ms-access",
""
] |
I have encountered this problem with DECIMAL in MySQL many times!
When I declare the type `DECIMAL(10,8)`,
the maximum value allowed is `99.99999999`!
Shouldn't it be `9999999999.99999999`?
I want a maximum decimal value with 8 digits after the decimal point (.).
|
From the [documentation](https://dev.mysql.com/doc/refman/5.7/en/precision-math-decimal-characteristics.html):
> The declaration syntax for a DECIMAL column is DECIMAL(M,D). The ranges of values for the arguments in MySQL 5.7 are as follows:
>
> * M is the maximum number of digits (the precision). It has a range of 1 to 65.
> * D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.
The first value is not the number of digits to the left of the decimal point, but the total number of digits.
That's why the value `9999999999.99999999` with `DECIMAL(10, 8)` is not possible: it is 18 digits long.
|
A `decimal` is defined by two parameters - `DECIMAL(M, D)`, where `M` is the total number of digits, and `D` is number of digits after the decimal point **out of `M`**. To properly represent the number `9999999999.99999999`, you'd need to use `DECIMAL(18, 8)`.
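The rule is easy to express as a small helper; this sketch uses Python's `decimal` module (the function name is just for the demo):

```python
from decimal import Decimal

def decimal_max(m: int, d: int) -> Decimal:
    """Largest value a DECIMAL(M, D) column can hold:
    M total digits, D of them after the decimal point."""
    return Decimal(10**m - 1).scaleb(-d)

print(decimal_max(10, 8))  # 99.99999999  -> only 2 integer digits remain
print(decimal_max(18, 8))  # 9999999999.99999999
```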
|
encountered many times this difficulty of decimal in MySQL
|
[
"",
"mysql",
"sql",
"ddl",
""
] |
I have two tables (1. orders and 2. cars):
**Cars**
[](https://i.stack.imgur.com/N9Lfp.png)
**Orders**
[](https://i.stack.imgur.com/saM9O.png)
I'm trying to find all cars that are available at a given date. In this case I want to find all available cars between 2016-05-03 and 2016-05-05. I check for cars that are `NOT BETWEEN` said date or cars that have not been registered in an order yet (`orders.car_id IS NULL`). Here is the query:
```
SELECT destination, COUNT(destination) AS 'available cars'
FROM cars
LEFT JOIN orders ON cars.id = orders.car_id
WHERE (orders.car_id IS NULL
OR (
date_to NOT BETWEEN '2016-05-03' AND '2016-05-05'
AND date_from NOT BETWEEN '2016-05-03' AND '2016-05-05'
)
)
AND destination = 'Kristiansand' GROUP BY destination
```
The problem is with the **Audi A1 with id = 8**. As you can see, it is registered on two appointments, one from `2016-05-03` to `2016-05-05` and one from `2016-04-29` to `2016-04-30`.
Since the second pair of dates at the end of April are `NOT BETWEEN` the given dates in the query, the A1 is an available car which is far from true.
> I'm trying to fetch all cars available for rental outside of the given
> dates in Kristiansand.
|
Say you have two periods, T1 and T2. To check whether they overlap, test `(T1.start <= T2.end) AND (T1.end >= T2.start)`.
So try the query below; it checks that no order for the same car overlaps the specified period:
```
SET @startdate = '2016-05-03',@enddate = '2016-05-05';
SELECT c.destination,COUNT(c.destination) as available_cars
FROM cars c
WHERE NOT EXISTS (SELECT 1
FROM orders o
WHERE o.car_id = c.id
AND o.date_from <= @enddate
AND o.date_to >= @startdate)
AND c.destination = 'Kristiansand'
GROUP BY c.destination
```
<http://sqlfiddle.com/#!9/9340e3/4>
You can remove the SET statement and hardcode your @enddate and @startdate values.
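To see the `NOT EXISTS` overlap check run end-to-end, here is a minimal sketch using SQLite via Python's `sqlite3` (the table contents are made up for illustration — car 8 has an overlapping order, car 9 has none):

```python
import sqlite3

# A car is available only if NO order for it overlaps the requested period.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cars (id INTEGER, destination TEXT);
CREATE TABLE orders (car_id INTEGER, date_from TEXT, date_to TEXT);
INSERT INTO cars VALUES (8, 'Kristiansand'), (9, 'Kristiansand');
-- car 8 has one overlapping and one non-overlapping order
INSERT INTO orders VALUES
  (8, '2016-05-03', '2016-05-05'),
  (8, '2016-04-29', '2016-04-30');
""")
rows = conn.execute("""
    SELECT c.id
    FROM cars c
    WHERE NOT EXISTS (SELECT 1
                      FROM orders o
                      WHERE o.car_id = c.id
                        AND o.date_from <= '2016-05-05'
                        AND o.date_to   >= '2016-05-03')
""").fetchall()
print(rows)  # [(9,)] -- only car 9 is free
```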
|
Change your thinking from exclusionary:
```
date_to NOT BETWEEN '2016-05-03' AND '2016-05-05'
AND date_from NOT BETWEEN '2016-05-03' AND '2016-05-05'
```
to inclusionary:
```
(date_from < '2016-05-03' AND date_to < '2016-05-03') OR
(date_from > '2016-05-05' AND date_to > '2016-05-05')
```
|
Find available dates
|
[
"",
"mysql",
"sql",
"date",
""
] |
I am struggling with a TSQL query and I'm all out of googling, so naturally I figured I might as well ask on SO.
Please keep in mind that I just began trying to learn SQL a few weeks back and I'm not really sure what rules there are and how you can and can not write your queries / sub-queries.
This is what I have so far:
Edit: Updated with DDL that should help create an example, also commented out unnecessary "Client"-column.
```
CREATE TABLE NumberTable
(
Number varchar(20),
Date date
);
INSERT INTO NumberTable (Number, Date)
VALUES
('55512345', '2015-01-01'),
('55512345', '2015-01-01'),
('55512345', '2015-01-01'),
('55545678', '2015-01-01'),
('55512345', '2015-02-01'),
('55523456', '2015-02-01'),
('55523456', '2015-02-01'),
('55534567', '2015-03-01'),
('55534567', '2015-03-01'),
('55534567', '2015-03-01'),
('55534567', '2015-03-01'),
('55545678', '2015-03-01'),
('55545678', '2015-04-01')
DECLARE
--@ClientNr AS int,
@FromDate AS date,
@ToDate AS date
--SET @ClientNr = 11111
SET @FromDate = '2015-01-01'
SET @ToDate = DATEADD(yy, 1, @FromDate)
SELECT
YEAR(Date) AS [Year],
MONTH(Date) AS [Month],
COUNT(Number) AS [Total Count]
FROM
NumberTable
WHERE
--Client = @ClientNr
Date BETWEEN @FromDate AND @ToDate
AND Number IS NOT NULL
AND Number NOT IN ('888', '144')
GROUP BY MONTH(Date), YEAR(Date)
ORDER BY [Year], [Month]
```
With this I am getting the Year, Month and Total Count.
I'm happy with only getting the top 1 most called number and count each month, but showing top 5 is preferable.
Here's an example of how I would like the table to look in the end (having the months formatted as JAN, FEB etc. instead of numbers is not really important, but would be a nice bonus):
```
╔══════╦═══════╦═════════════╦═══════════╦══════════╦═══════════╦══════════╗
║ Year ║ Month ║ Total Count ║ #1 Called ║ #1 Count ║ #2 Called ║ #2 Count ║
╠══════╬═══════╬═════════════╬═══════════╬══════════╬═══════════╬══════════╣
║ 2016 ║ JAN ║ 80431 ║ 555-12345 ║ 45442 ║ 555-94564 ║ 17866 ║
╚══════╩═══════╩═════════════╩═══════════╩══════════╩═══════════╩══════════╝
```
I was told this was "easily" done with a sub-query, but I'm not so sure...
|
artm's query corrected (PARTITION) and the last step (pivoting) simplified.
```
with data AS
(select '2016-01-01' as called, '111' as number
union all select '2016-01-01', '111'
union all select '2016-01-01', '111'
union all select '2016-01-01', '222'
union all select '2016-01-01', '222'
union all select '2016-01-05', '111'
union all select '2016-01-05', '222'
union all select '2016-01-05', '222')
, ordered AS (
select called
, number
, count(*) cnt
, ROW_NUMBER() OVER (PARTITION BY called ORDER BY COUNT(*) DESC) rnk
from data
group by called, number)
select called, total = sum(cnt)
, n1= max(case rnk when 1 then number end)
, cnt1=max(case rnk when 1 then cnt end)
, n2= max(case rnk when 2 then number end)
, cnt2=max(case rnk when 2 then cnt end)
from ordered
group by called
```
**EDIT** Using setup provided by OP
```
WITH ordered AS(
-- compute order
SELECT
[Year] = YEAR(Date)
, [Month] = MONTH(Date)
, number
, COUNT(*) cnt
, ROW_NUMBER() OVER (PARTITION BY YEAR(Date), MONTH(Date) ORDER BY COUNT(*) DESC) rnk
FROM NumberTable
WHERE Date BETWEEN @FromDate AND @ToDate
AND Number IS NOT NULL
AND Number NOT IN ('888', '144')
GROUP BY YEAR(Date), MONTH(Date), number
)
-- pivot by order
SELECT [Year], [Month]
, total = sum(cnt)
, n1 = MAX(case rnk when 1 then number end)
, cnt1 = MAX(case rnk when 1 then cnt end)
, n2 = MAX(case rnk when 2 then number end)
, cnt2 = MAX(case rnk when 2 then cnt end)
-- n3, cnt3, ....
FROM ordered
GROUP BY [Year], [Month];
```
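The group-then-pivot pattern above can be exercised with SQLite via Python's `sqlite3` (window functions require SQLite 3.25+; the mini data set below is made up, covering only January):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE NumberTable (Number TEXT, Date TEXT);
INSERT INTO NumberTable VALUES
 ('55512345','2015-01-01'), ('55512345','2015-01-01'),
 ('55512345','2015-01-01'), ('55545678','2015-01-01');
""")
rows = conn.execute("""
WITH ordered AS (
  -- rank numbers by call count within each year/month
  SELECT strftime('%Y', Date) AS yr, strftime('%m', Date) AS mon,
         Number, COUNT(*) AS cnt,
         ROW_NUMBER() OVER (PARTITION BY strftime('%Y', Date), strftime('%m', Date)
                            ORDER BY COUNT(*) DESC) AS rnk
  FROM NumberTable
  GROUP BY yr, mon, Number)
-- pivot: top-1 number and its count, plus the month total
SELECT yr, mon, SUM(cnt) AS total,
       MAX(CASE rnk WHEN 1 THEN Number END) AS n1,
       MAX(CASE rnk WHEN 1 THEN cnt END) AS cnt1
FROM ordered
GROUP BY yr, mon
""").fetchall()
print(rows)  # [('2015', '01', 4, '55512345', 3)]
```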
|
This is an interesting one; I believe you can do it with a CTE and PIVOT, but this is off the top of my head, so it may not work verbatim.
```
WITH Rollup_CTE
AS
(
SELECT Client, MONTH(Date) as Month, YEAR(Date) as Year, Number, Count(0) as Calls, ROW_NUMBER() OVER (PARTITION BY Client, MONTH(Date), YEAR(Date) ORDER BY COUNT(0) DESC) as SqNo
from NumberTable
WHERE Number IS NOT NULL AND Number NOT IN ('888', '144')
GROUP BY Client,MONTH(Date), YEAR(Date), Number
)
SELECT * FROM Rollup_CTE Where SqNo <=5
```
You may then be able to pivot the data as you wish using PIVOT
|
How do I select the most frequent value for a specific month and display this value as well as the amount of times it occurs?
|
[
"",
"sql",
"sql-server",
"t-sql",
"ssms",
""
] |
```
SELECT ShopOrder.OrderDate
, Book.BookID
, Book.title
, COUNT(ShopOrder.ShopOrderID) AS "Total number of order"
, SUM (Orderline.Quantity) AS "Total quantity"
, Orderline.UnitSellingPrice * Orderline.Quantity AS "Total order value"
, book.Price * Orderline.Quantity AS "Total retail value"
FROM ShopOrder
JOIN Orderline
ON Orderline.ShopOrderID = ShopOrder.ShopOrderID
JOIN Book
ON Book.BookID = Orderline.BookID
JOIN Publisher
ON Publisher.PublisherID = Book.PublisherID
WHERE Publisher.name = 'Addison Wesley'
GROUP
BY ShopOrder.OrderDate
, Book.BookID
, Book.title
, Orderline.UnitSellingPrice
, Orderline.Quantity, book.Price
, Orderline.Quantity, ShopOrder.ShopOrderID
ORDER
BY ShopOrder.OrderDate
```
[Please look at the picture](https://i.stack.imgur.com/A3s8E.png)
I want the query to group OrderDate by year and month, so that data for the same month is added together.
Thanks a lot for your help
|
You need to extract the year and month from the date and use those in the `select` and `group by` columns. How you do this depends highly on the database. Many support functions called `year()` and `month()`.
Then you need to just aggregate by the fields that you want. Something like this:
```
SELECT YEAR(so.OrderDate) as yyyy, MONTH(so.OrderDate) as mm,
b.BookID, b.title,
COUNT(so.ShopOrderID) AS "Total number of order",
SUM(ol.Quantity) AS "Total quantity",
SUM(ol.UnitSellingPrice * ol.Quantity) AS "Total order value",
SUM(b.Price * ol.Quantity) AS "Total retail value"
FROM ShopOrder so JOIN
Orderline ol
ON ol.ShopOrderID = so.ShopOrderID JOIN
Book b
ON b.BookID = ol.BookID JOIN
Publisher p
ON p.PublisherID = b.PublisherID
WHERE p.name = 'Addison Wesley'
GROUP BY YEAR(so.OrderDate), MONTH(so.OrderDate), b.BookID, b.title
ORDER BY MIN(so.OrderDate)
```
Note the use of table aliases makes the query easier to write and to read.
The above works in MySQL, DB2, and SQL Server. In Postgres and Oracle, you can use `to_char(so.OrderDate, 'YYYY-MM')` instead.
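SQLite has neither `YEAR()` nor `MONTH()`, so an equivalent portable sketch uses `strftime` (runnable via Python's `sqlite3`; the mini schema and amounts below are hypothetical, not the book-shop tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ShopOrder (OrderDate TEXT, amount REAL);
INSERT INTO ShopOrder VALUES
  ('2016-05-01', 10), ('2016-05-20', 15), ('2016-06-03', 7);
""")
# Extract year and month, then aggregate on those parts.
rows = conn.execute("""
    SELECT strftime('%Y', OrderDate) AS yyyy,
           strftime('%m', OrderDate) AS mm,
           SUM(amount) AS total
    FROM ShopOrder
    GROUP BY yyyy, mm
    ORDER BY yyyy, mm
""").fetchall()
print(rows)  # [('2016', '05', 25.0), ('2016', '06', 7.0)]
```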
|
For group, you can use datepart function
```
GROUP BY DATEPART(yyyy, ShopOrder.OrderDate), DATEPART(mm, ShopOrder.OrderDate)
```
|
SQL sorting date by year and month
|
[
"",
"sql",
"pgadmin",
""
] |
I have a table with columns mentioned below:
```
transaction_type transaction_number amount
Sale 2016040433 50
Cancel R2016040433 -50
Sale 2016040434 50
Sale 2016040435 50
Cancel R2016040435 -50
Sale 2016040436 50
```
I want to find net number of rows with only sales which does not include canceled rows.
(Using SQL Only).
|
If you just want to count the sales and subtract the cancels (as suggested by your sample data), you can use conditional aggregation:
```
select sum(case when transaction_type = 'Sale' then 1
when transaction_type = 'Cancel' then -1
else 0
end)
from t;
```
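The conditional aggregation can be checked directly with SQLite via Python's `sqlite3` (loading the sample rows from the question — four sales minus two cancels nets 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (transaction_type TEXT, amount INTEGER);
INSERT INTO t VALUES ('Sale', 50), ('Cancel', -50), ('Sale', 50),
                     ('Sale', 50), ('Cancel', -50), ('Sale', 50);
""")
# Count each sale as +1 and each cancel as -1.
(net,) = conn.execute("""
    SELECT SUM(CASE WHEN transaction_type = 'Sale'   THEN 1
                    WHEN transaction_type = 'Cancel' THEN -1
                    ELSE 0 END)
    FROM t
""").fetchone()
print(net)  # 2
```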
|
```
SELECT transaction_type, count(*) FROM TBL_NAME GROUP BY transaction_type HAVING transaction_type = 'Sale'
```
Just use the `COUNT` function, with `GROUP BY` and `HAVING` to filter.
|
How to find net rows from sales and cancel rows using sql
|
[
"",
"sql",
""
] |
I have a table like below
```
event_date id
---------- ---
2015-11-18 x1
2015-11-18 x2
2015-11-18 x3
2015-11-18 x4
2015-11-18 x5
2015-11-19 x1
2015-11-19 x2
2015-11-19 y1
2015-11-19 y2
2015-11-19 y3
2015-11-20 x1
2015-11-20 y1
2015-11-20 z1
2015-11-20 z2
```
**Question**: How to get unique count of id for every date (such that we get count of only those id which were not seen in the previous records)? Something like this:
```
event_date count(id)
----------- ---------
2015-11-18 5
2015-11-19 3
2015-11-20 2
```
Each ID should only be counted once regardless of whether it occurs within the same date group or otherwise.
|
Here is an answer that will work, although I am not sure I like it:
```
select t.event_date,
count(1)
from (
-- Record first occurrence of each id along with the earliest date occurred
select id,
min(event_date) as event_date
from
mytable
group by id
) t
group by t.event_date;
```
I know it works because I tested with your data to get the results you wanted.
This actually works for this data, but if you had a date group that consisted only of duplicate ids — for example, one more row `('2016-01-01', 'z2')` — this won't display any record for `2016-01-01`, because `z2` is a duplicate. If you need that row in your results:
> 2016-01-01 0
then, you have to use a LEFT JOIN with the GROUP BY.
# [sqlfiddle](http://sqlfiddle.com/#!15/c2e79/6/0) here
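Here is the same first-occurrence query run against the question's data, as a sketch using SQLite via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (event_date TEXT, id TEXT);
INSERT INTO mytable VALUES
 ('2015-11-18','x1'), ('2015-11-18','x2'), ('2015-11-18','x3'),
 ('2015-11-18','x4'), ('2015-11-18','x5'),
 ('2015-11-19','x1'), ('2015-11-19','x2'), ('2015-11-19','y1'),
 ('2015-11-19','y2'), ('2015-11-19','y3'),
 ('2015-11-20','x1'), ('2015-11-20','y1'), ('2015-11-20','z1'),
 ('2015-11-20','z2');
""")
# Each id is attributed only to the first date it was seen.
rows = conn.execute("""
    SELECT event_date, COUNT(*)
    FROM (SELECT id, MIN(event_date) AS event_date
          FROM mytable
          GROUP BY id)
    GROUP BY event_date
    ORDER BY event_date
""").fetchall()
print(rows)  # [('2015-11-18', 5), ('2015-11-19', 3), ('2015-11-20', 2)]
```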
|
You could group by the date and apply a distinct count to the id per group:
```
SELECT event_date, COUNT(DISTINCT id)
FROM mytable
GROUP BY event_date
```
|
how to get unique values in SQL?
|
[
"",
"sql",
"postgresql",
"count",
""
] |
I just stumbled across this gem in our code:
```
my $str_rep="lower(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(field,'-',''),'',''),'.',''),'_',''),'+',''),',',''),':',''),';',''),'/',''),'|',''),'\',''),'*',''),'~','')) like lower('%var%')";
```
I'm not really an expert in DB, but I have a hunch it can be rewritten in a more sane manner. Can it?
|
It depends on the DBMS you are using. I'll post some examples (feel free to edit this answer to add more).
## MySQL
There is really not much to do; the only way to replace all the characters is nesting `REPLACE` functions as it has already been done in your code.
## Oracle DB
Your clause can be rewritten by using the [`TRANSLATE`](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions196.htm) function.
## SQL Server
Like in MySQL there aren't any functions similar to Oracle's `TRANSLATE`. I have found some (much longer) alternatives in the answers to [this question](https://stackoverflow.com/questions/19835090/replace-multiple-characters-from-string-without-using-any-nested-replace-functio). In general, however, queries become very long. I don't see any real advantages of doing so, besides having a more structured query that can be easily extended.
## Firebird
As suggested by Mark Rotteveel, you can use `SIMILAR TO` to rewrite the entire clause.
If you are allowed to build your query string via Perl you can also use a for loop against an array containing all the special characters.
**EDIT**: Sorry I did not see you indicated the DB in the tags. Consider only the last part of my answer.
|
You flagged this as *Perl*, but it's probably not?
Here is a Perl solution anyway:
```
$var =~ s/[\-\.\_\+\,\:\;\/\|\\\*\~]+//g;
```
|
Rewrite this exceedingly long query
|
[
"",
"sql",
"firebird",
"firebird2.5",
""
] |
I met the following MySQL code:
```
SELECT ServiceFee
FROM Shows
WHERE ID = (SELECT ShowID FROM Orders WHERE ID = ?)
```
It makes me wonder because the people who wrote this code usually use SQL joins. I would rewrite it
```
SELECT ServiceFee
FROM Shows
INNER JOIN Orders ON Shows.ID = Orders.ShowID
WHERE Orders.ID = ?
```
My question: Is there any reason why this code was written with a subquery and whether it is completely safe (producing the same result in all situations) to rewrite it with the join?
Are there any caveats?
|
Nope, there are no caveats. As a matter of fact, the [`INNER JOIN`](http://www.codersrevolution.com/blog/MySQL-performance-INNER-JOIN-vs-subselect) query might run faster
|
"Is there any reason why this code was written with a subquery?"
A very long time ago, MySQL joins used to be slow.
|
Weird SQL code: Why do they use a subquery instead of join?
|
[
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
In a database I have two linked tables that store records of fish landings. A business requirement is that these landings are priced once a week and then posted a week later to allow those individuals for whom payment will then be made to check the paperwork. The system has generally worked well over the years but last week a user managed somehow to change a record such that one particular species and size of fish was priced differently, and because of that it prevented other operations from occurring.
I now need to add an extra validation check (even though the chances of a repeat incident are slim). To that end I have contrived some false data replicating the issue and designed the first simple part of a query to pick it up.
This is the first part of the query;
```
SELECT DISTINCT
ld.ProductId, ld.UnitPrice
FROM
LandingDetails ld
JOIN
LandingHeaders lh ON ld.LandingId = lh.LandingId
WHERE
lh.LandingDate1 BETWEEN '20160313' AND '20160319'
```
And here is an example of the type of records returned;
[](https://i.stack.imgur.com/d7E3w.png)
As you can see, there are ProductIds listed with different prices. I would like to amend this so that, instead of returning all the distinct ProductIds and prices from the date period, it returns only those ProductIds that have two distinct prices. What I'm looking for is the most efficient way in SQL to achieve this. I don't need the prices; it's sufficient just to know which ProductIds will need their prices altered in a separate procedure I have yet to compose.
|
The most straightforward method would be to wrap your query with an outer query that utilizes a [HAVING clause](https://msdn.microsoft.com/en-us/library/ms180199.aspx):
```
SELECT q.ProductId
FROM (
SELECT DISTINCT
ld.ProductId, ld.UnitPrice
FROM
LandingDetails ld
JOIN
LandingHeaders lh ON ld.LandingId = lh.LandingId
WHERE
lh.LandingDate1 BETWEEN '20160313' AND '20160319'
) q
GROUP BY q.ProductId
HAVING COUNT(1) >= 2
```
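A sketch of the wrapped `HAVING` approach, runnable with SQLite via Python's `sqlite3` (the prices below are invented — product 1 is the only one with two distinct prices):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (ProductId INTEGER, UnitPrice REAL);
INSERT INTO prices VALUES (1, 2.50), (1, 2.75), (2, 1.00), (2, 1.00), (3, 4.20);
""")
# Deduplicate (ProductId, UnitPrice) pairs, then keep products
# appearing with at least two distinct prices.
rows = conn.execute("""
    SELECT q.ProductId
    FROM (SELECT DISTINCT ProductId, UnitPrice FROM prices) q
    GROUP BY q.ProductId
    HAVING COUNT(1) >= 2
""").fetchall()
print(rows)  # [(1,)]
```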
|
```
SELECT
ld.ProductId
FROM
LandingDetails ld
JOIN
LandingHeaders lh ON ld.LandingId = lh.LandingId
WHERE
lh.LandingDate1 BETWEEN '20160313' AND '20160319'
GROUP BY ld.ProductId
HAVING COUNT(DISTINCT ld.UnitPrice)>1
```
|
What is the most efficient way to identify rows with differing values
|
[
"",
"sql",
"sql-server-2014",
""
] |
In my attempts to edit a procedure using the line
```
CREATE OR DROP PROCEDURE
```
I have created two procedures with the same name, how can I delete them?
The error I receive whenever I attempt to drop it is
> Reference to Routine BT\_CU\_ODOMETER was made without a signature, but the routine is not unique in its schema.
> SQLSTATE = 42725
I am using DB2
|
Assuming this is DB2 for LUW.
DB2 allows you to "overload" procedures with the same name but different number of parameters. Each procedure receives a *specific name*, which can be provided by you or generated by the system and which will be unique.
To determine the specific names of your procedures, run
```
SELECT ROUTINESCHEMA, ROUTINENAME, SPECIFICNAME FROM SYSCAT.ROUTINES
WHERE ROUTINENAME = 'BT_CU_ODOMETER'
```
You can then drop each procedure individually:
```
DROP SPECIFIC PROCEDURE <specific name>
```
|
**PROBLEM**
When multiple stored procedures are created with the same name but with a different number of parameters, then the stored procedure is considered overloaded. When attempting to drop an overloaded stored procedure using the DROP PROCEDURE statement, the following error could result:
```
db2 drop procedure SCHEMA.PROCEDURENAME
```
DB21034E The command was processed as an SQL statement because it was not valid Command Line Processor command. During SQL processing it returned: SQL0476N Reference to routine "SCHEMA.PROCEDURENAME" was made without a signature, but the routine is not unique in its schema. SQLSTATE=42725
**CAUSE**
The error is returned because the stored procedure is overloaded and therefore the procedure is not unique in that schema. To drop the procedure you must specify the data types that were specified on the CREATE PROCEDURE statement or use the stored procedure's specific name per the examples below.
**SOLUTION**
In order to drop an overloaded stored procedure you can use either of the following statements:
```
db2 "DROP PROCEDURE procedure-name(int, varchar(12))"
db2 "DROP SPECIFIC PROCEDURE specific-name"
```
Note: The specific-name can be identified by selecting the SPECIFICNAME column from syscat.routines catalog view.
|
Deleting a non-unique procedure on DB2
|
[
"",
"sql",
"db2",
"procedure",
""
] |
I have an order\_transactions table with 3 relevant columns. `id` (unique id for the transaction attempt), `order_id` (the id of the order for which the attempt is being made), and `success` an int which is 0 if failed, and 1 if successful.
There can be 0 or more failed transactions before a successful transaction, for each `order_id`.
The question is, how do I find:
* The number of orders which never had a successful transaction
* The number of orders which had a transaction with a failure (eventually successful or not)
* The number of orders which never had a failed transaction (success only)
I realize this is some combination of distinct, group by, maybe a subselect, etc, I'm just not well versed in this enough. Thanks.
|
To get the number of orders which never had a successful transaction you can use:
```
SELECT COUNT(*)
FROM (
SELECT order_id
FROM transactions
GROUP BY order_id
HAVING COUNT(CASE WHEN success = 1 THEN 1 END) = 0) AS t
```
[**Demo here**](http://sqlfiddle.com/#!9/0bcbf4/5)
The number of orders which had a transaction with a failure (eventually successful or not) can be obtained using the query:
```
SELECT COUNT(*)
FROM (
SELECT order_id
FROM transactions
GROUP BY order_id
HAVING COUNT(CASE WHEN success = 0 THEN 1 END) > 0) AS t
```
[**Demo here**](http://sqlfiddle.com/#!9/0bcbf4/6)
Finally, to get the number of orders which never had a failed transaction (success only):
```
SELECT COUNT(*)
FROM (
SELECT order_id
FROM transactions
GROUP BY order_id
HAVING COUNT(CASE WHEN success = 0 THEN 1 END) = 0) AS t
```
[**Demo here**](http://sqlfiddle.com/#!9/0bcbf4/7)
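The first of these queries ("never had a successful transaction") can be verified with SQLite via Python's `sqlite3` (the three sample orders below are hypothetical — only order 3 never succeeds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (order_id INTEGER, success INTEGER);
INSERT INTO transactions VALUES
  (1, 0), (1, 1),   -- failed then succeeded
  (2, 1),           -- success only
  (3, 0), (3, 0);   -- never succeeded
""")
# COUNT(CASE ...) counts only the non-NULL branch, i.e. the successes.
never = conn.execute("""
    SELECT COUNT(*)
    FROM (
      SELECT order_id
      FROM transactions
      GROUP BY order_id
      HAVING COUNT(CASE WHEN success = 1 THEN 1 END) = 0) AS t
""").fetchone()[0]
print(never)  # 1
```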
|
You want "counts" of orders that meet specific conditions over multiple rows, so I'd start with a GROUP BY order\_id
```
SELECT ...
FROM mytable t
GROUP BY t.order_id
```
To find out if a particular order ever had a failed transaction, etc. we can use aggregates on expressions that "test" for conditions.
For example:
```
SELECT MAX(t.success=1) AS succeeded
, MAX(t.success=0) AS failed
, IF(MAX(t.success=1),0,1) AS never_succeeded
FROM mytable t
GROUP BY t.order_id
```
The expressions in the SELECT list of that query are MySQL shorthand. We could use longer expressions (MySQL IF() function or ANSI CASE expressions) to achieve an equivalent result, e.g.
```
CASE WHEN t.success = 1 THEN 1 ELSE 0 END
```
We could include the `order\_id` column in the SELECT list for testing. We can compare the results for each order\_id to the rows in the original table, to verify that the results returned meet the specification.
To get "counts" of orders, we can reference the query as an inline view, and use aggregate expressions in the SELECT list.
For example:
```
SELECT SUM(r.succeeded) AS cnt_succeeded
, SUM(r.failed) AS cnt_failed
, SUM(r.never_succeeded) AS cnt_never_succeeded
FROM (
SELECT MAX(t.success=1) AS succeeded
, MAX(t.success=0) AS failed
, IF(MAX(t.success=1),0,1) AS never_succeeded
FROM mytable t
GROUP BY t.order_id
) r
```
Since the expressions in the SELECT list return either 0, 1 or NULL, we can use the SUM() aggregate to get a count. To make use of a COUNT() aggregate, we would need to return NULL in place of a 0 (FALSE) value.
```
SELECT COUNT(IF(r.succeeded,1,NULL)) AS cnt_succeeded
, COUNT(IF(r.failed,1,NULL)) AS cnt_failed
, COUNT(IF(r.never_succeeded,1,NULL)) AS cnt_never_succeeded
FROM (
SELECT MAX(t.success=1) AS succeeded
, MAX(t.success=0) AS failed
, IF(MAX(t.success=1),0,1) AS never_succeeded
FROM mytable t
GROUP BY t.order_id
) r
```
If you want a count of all order\_id, add a COUNT(1) expression in the outer query. If you need percentages, do the division and multiply by 100,
For example
```
SELECT SUM(r.succeeded) AS cnt_succeeded
, SUM(r.failed) AS cnt_failed
, SUM(r.never_succeeded) AS cnt_never_succeeded
, SUM(1) AS cnt_all_orders
, SUM(r.failed)/SUM(1)*100.0 AS pct_with_a_failure
, SUM(r.succeeded)/SUM(1)*100.0 AS pct_succeeded
, SUM(r.never_succeeded)/SUM(1)*100.0 AS pct_never_succeeded
FROM (
SELECT MAX(t.success=1) AS succeeded
, MAX(t.success=0) AS failed
, IF(MAX(t.success=1),0,1) AS never_succeeded
FROM mytable t
GROUP BY t.order_id
) r
```
(The percentages here are a comparison to the count of distinct order\_id values, not as the total number of rows in the table).
|
MySQL -- Finding % of orders with a transaction failure
|
[
"",
"mysql",
"sql",
"select",
""
] |
I've got 3 tables.
Companies, Kommuner and Fylker.
The companies table have an empty field `forretningsadresse_fylke` but an other field `forretningsadresse_kommune` with a value.
So basically, I need to fill in `forretningsadresse_fylke`, based on the value of `forretningsadresse_kommune`.
Now, the value of `forretningsadresse_kommune` and the value I want for `forretningsadresse_fylke` is stored in the Kommuner and Fylker tables.
So I wrote this query, but that doesn't seem to work because after 600 seconds the "MySQL server goes away".
```
UPDATE companies, fylker, kommuner
SET companies.forretningsadresse_fylke = (
SELECT fylkeNavn
FROM fylker
WHERE fylker.fylkeID = kommuner.fylkeID
)
WHERE companies.forretningsadresse_kommune = kommuner.kommuneNavn
```
Here is what the Kommuner and Fylker tables look like.
Kommuner Table
[](https://i.stack.imgur.com/r5ckf.png)
Fylker Table
[](https://i.stack.imgur.com/9f8IV.png)
Companies table
[](https://i.stack.imgur.com/3CDLI.png)
companies Table
```
| forretningsadresse_fylke | forretningsadresse_kommune |
|===========================|============================|
| | |
| | |
| | |
| | |
| | |
| | |
```
So I was wondering if there was something wrong with the query?
Also, it might be good to mention, the table I try to update (Companies) has over 1 million rows.
Thanks in advance!
|
You do not want `fylker` in the `UPDATE` statement. You should also be using a proper `join`. So the first rewrite is:
```
UPDATE companies c JOIN
kommuner k
ON c.forretningsadresse_kommune = k.kommuneNavn
SET c.forretningsadresse_fylke = (SELECT f.fylkeNavn
FROM fylker f
WHERE f.fylkeID = k.fylkeID
);
```
If we assume a single match in `fylker`, then this is fine. If there are multiple matches, then you need to choose one. A simple method is:
```
UPDATE companies c JOIN
kommuner k
ON c.forretningsadresse_kommune = k.kommuneNavn
SET c.forretningsadresse_fylke = (SELECT f.fylkeNavn
FROM fylker f
WHERE f.fylkeID = k.fylkeID
LIMIT 1
);
```
Note: This will update all companies that have a matching "kommuner". If there is no matching "fylker" the value will be set to `NULL`. I believe this is the intent of your question.
Also, table aliases make the query easier to write and to read.
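SQLite does not support MySQL's multi-table `UPDATE ... JOIN` syntax, but the same effect can be sketched with a portable correlated subquery, runnable via Python's `sqlite3` (the rows and the fylke name below are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE companies (forretningsadresse_fylke TEXT,
                        forretningsadresse_kommune TEXT);
CREATE TABLE kommuner (kommuneNavn TEXT, fylkeID INTEGER);
CREATE TABLE fylker (fylkeID INTEGER, fylkeNavn TEXT);
INSERT INTO companies VALUES (NULL, 'Oslo');
INSERT INTO kommuner VALUES ('Oslo', 3);
INSERT INTO fylker VALUES (3, 'Oslo fylke');
""")
# Look up the fylke name via the kommune for each company row.
conn.execute("""
    UPDATE companies
    SET forretningsadresse_fylke =
        (SELECT f.fylkeNavn
         FROM kommuner k
         JOIN fylker f ON f.fylkeID = k.fylkeID
         WHERE k.kommuneNavn = companies.forretningsadresse_kommune)
    WHERE forretningsadresse_fylke IS NULL
""")
result = conn.execute(
    "SELECT forretningsadresse_fylke FROM companies").fetchone()
print(result)  # ('Oslo fylke',)
```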
|
You can refer to this question:
<https://stackoverflow.com/questions/15209414/how-to-use-join-in-update-query>
```
UPDATE companies c
JOIN Kommuner k ON c.kommuneID = k.kommuneID
JOIN fylker f ON f.fylkeID = k.fylkeID
SET c.forretningsadresse_fylke = f.fylkeNavn
```
|
MySQL update with select from another table
|
[
"",
"mysql",
"sql",
""
] |
I am using SQL Server 2014 and I need to add a line to my SQL query that will convert a column called StayDate into the "MMM YYYY" format.
The StayDate column is in `datetime` format (eg: 2016-06-01 00:00:00.000)
Basically, I need the output to be "Jul 2016" (from example above).
I have tried playing around with the following code:
```
Format (StayDate, "MMM DD YYYY")
```
which I converted into: `Format (StayDate, "MMM YYYY")`
But I end up with the following result: Jul YYYY
I like the simplicity of the above code a lot. Is there a workaround using the `Format` syntax?
|
The `format` function takes a .NET format string, so the four digit year part has to be in lowercase, like this:
```
Format(StayDate, "MMM yyyy")
```
(reference: <https://msdn.microsoft.com/de-de/library/hh213505(v=sql.120).aspx>)
|
Try this
```
DECLARE @SYSDATE DATETIME = '2016-06-01 00:00:00.000'
SELECT RIGHT(CONVERT(VARCHAR(11), @SYSDATE, 106) ,8)
--OR
SELECT LEFT(DATENAME(MONTH, @SYSDATE), 3) + ' ' + DATENAME(YEAR, @SYSDATE)
```
|
What is the most simple T-SQL syntax to convert a datetime column into the 'MMM YYYY" format?
|
[
"",
"sql",
"sql-server",
"t-sql",
"date",
"datetime",
""
] |
In this SELECT I need to sum the results of the IIF expressions, but when I execute this query I obtain only the first IIF result. Any suggestions? Thanks
```
SELECT conto, desconto, date, codoperaio, SUM(IIF(totcasse ='1',SUM(totcasse),0)+
IIF(totcasse ='6',SUM(totcasse*3),0)+
IIF(totcasse ='8',SUM(totcasse*4),0)+
IIF(totcasse ='10',SUM(totcasse*5),0)) as Kilogrammi
FROM dbo.Import
where totcasse BETWEEN 1 and 10
Group by conto, desconto, date,codoperaio, totcasse
```
|
If I am not wrong, you are looking for this:
```
SELECT conto,
desconto,
date,
codoperaio,
Sum(CASE totcasse WHEN '1' THEN totcasse ELSE 0 END) +
Sum(CASE totcasse WHEN '6' THEN totcasse * 3 ELSE 0 END) +
Sum(CASE totcasse WHEN '8' THEN totcasse * 4 ELSE 0 END) +
Sum(CASE totcasse WHEN '10' THEN totcasse * 5 ELSE 0 END)
FROM dbo.Import
WHERE totcasse in (1,6,8,10)
GROUP BY conto,
desconto,
date,
codoperaio
```
or
```
SELECT conto,
desconto,
date,
codoperaio,
Sum(CASE totcasse
WHEN '1' THEN totcasse
WHEN '6' THEN totcasse * 3
WHEN '8' THEN totcasse * 4
WHEN '10' THEN totcasse * 5
END)
FROM dbo.Import
WHERE totcasse in (1,6,8,10)
GROUP BY conto,
desconto,
date,
codoperaio
```
|
```
SELECT conto, desconto, date, codoperaio, IIF(totcasse ='1',SUM(totcasse),0)+
IIF(totcasse ='6',SUM(totcasse*3),0)+
IIF(totcasse ='8',SUM(totcasse*4),0)+
IIF(totcasse ='10',SUM(totcasse*5),0) as Kilogrammi
FROM dbo.Import
where totcasse BETWEEN 1 and 10
Group by conto, desconto, date,codoperaio, totcasse
```
|
SUM IIF expression result
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Server 2014 and I need to add a line of code to my SQL query that will filter the data extracted only to those records where the `StayDate` (a column in database) is `greater than or equal to` the `1st day of the current month`.
In other words, the line of code I need is the following:
```
WHERE StayDate >= '1st Day of Current Month'
```
Note: `StayDate` is in the `datetime` format (eg: 2015-12-18 00:00:00.000)
|
Use [**`EOMONTH`**](https://msdn.microsoft.com/en-IN/library/hh213020.aspx) to get the first day of current month
```
WHERE StayDate >= Dateadd(dd, 1, Eomonth(Getdate(), -1))
```
|
SQL Server 2012 and above
```
WHERE StayDate >= DATEADD(DAY, 1, EOMONTH(GETDATE(), -1))
```
Before SQL Server 2012
```
WHERE StayDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)
```
|
T-SQL syntax to filter records where the datetime variable is greater than or equal to the 1st Day of the Current Month
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
"sql-server-2014",
""
] |
I use the code below but it doesn't return what I expect.
The table relationships:
each `gallery` includes multiple `media`, and each `media` includes multiple `media_user_action`.
I want to count, for each `gallery`, how many `media_user_action` rows it has, and order by this count.
```
rows: [
{
"id": 1
},
{
"id": 2
}
]
```
and this query will return duplicate gallery rows something like
```
rows: [
{
"id": 1
},
{
"id": 1
},
{
"id": 2
}
...
]
```
I think this is because the `LEFT JOIN` subquery selects `media_user_action` rows grouped only by `media_id`;
does it need to be grouped by `gallery_id` as well?
```
SELECT
g.*
FROM gallery g
LEFT JOIN gallery_media gm ON gm.gallery_id = g.id
LEFT JOIN (
SELECT
media_id,
COUNT(*) as mua_count
FROM media_user_action
WHERE type = 0
GROUP BY media_id
) mua ON mua.media_id = gm.media_id
ORDER BY g.id desc NULLS LAST OFFSET $1 LIMIT $2
```
table
```
gallery
id |
1 |
2 |
gallery_media
id | gallery_id fk gallery.id | media_id fk media.id
1 | 1 | 1
2 | 1 | 2
3 | 2 | 3
....
media_user_action
id | media_id fk media.id | user_id | type
1 | 1 | 1 | 0
2 | 1 | 2 | 0
3 | 3 | 1 | 0
...
media
id |
1 |
2 |
3 |
```
**UPDATE**
There are more tables I need to select from; this is part of a function like this <https://jsfiddle.net/g8wtqqqa/1/> which builds the query from user input options.
So, to correct my question: when the user wants to count `media_user_action` and order by it, I need a way to put this in a subquery, preferably without changing any other code.
Based on @trincot's answer below I updated the code: I only added `media_count` on top, changed it a little, and put those in a subquery, which is what I want.
Now the rows are grouped by `gallery.id`, but sorting `media_count` desc and asc gives the same result; it's not working and I can't find why.
```
SELECT
g.*,
row_to_json(gi.*) as gallery_information,
row_to_json(gl.*) as gallery_limit,
media_count
FROM gallery g
LEFT JOIN gallery_information gi ON gi.gallery_id = g.id
LEFT JOIN gallery_limit gl ON gl.gallery_id = g.id
LEFT JOIN "user" u ON u.id = g.create_by_user_id
LEFT JOIN category_gallery cg ON cg.gallery_id = g.id
LEFT JOIN category c ON c.id = cg.category_id
LEFT JOIN (
SELECT
gm.gallery_id,
COUNT(DISTINCT mua.media_id) media_count
FROM gallery_media gm
INNER JOIN media_user_action mua
ON mua.media_id = gm.media_id AND mua.type = 0
GROUP BY gm.gallery_id
) gm ON gm.gallery_id = g.id
ORDER BY gm.media_count asc NULLS LAST OFFSET $1 LIMIT $2
```
|
The join with *gallery\_media* table is multiplying your results. The count and grouping should happen after you have made that join.
You could achieve that like this:
```
SELECT g.id,
COUNT(DISTINCT mua.media_id)
FROM gallery g
LEFT JOIN gallery_media gm
ON gm.gallery_id = g.id
LEFT JOIN media_user_action mua
ON mua.media_id = gm.media_id AND mua.type = 0
GROUP BY g.id
ORDER BY 2 DESC
```
If you need the other information as well, you could use the above (in simplified form) as a sub-query, joined with anything else you need, without multiplying the number of rows:
```
SELECT g.*,
row_to_json(gi.*) as gallery_information,
row_to_json(gl.*) as gallery_limit,
media_count
FROM gallery g
LEFT JOIN (
SELECT gm.gallery_id,
COUNT(DISTINCT mua.media_id) media_count
FROM gallery_media gm
INNER JOIN media_user_action mua
              ON mua.media_id = gm.media_id AND mua.type = 0
GROUP BY gm.gallery_id
) gm
ON gm.gallery_id = g.id
LEFT JOIN gallery_information gi ON gi.gallery_id = g.id
LEFT JOIN gallery_limit gl ON gl.gallery_id = g.id
ORDER BY media_count DESC NULLS LAST
OFFSET $1
LIMIT $2
```
The above assumes that *gallery\_id* is unique in the tables *gallery\_information* and *gallery\_limit*.
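To see the difference concretely, here is a runnable sketch of the same pre-aggregation pattern against SQLite from Python. The schema is a simplified, hypothetical stand-in for the tables above (only the columns needed for the count):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gallery(id INTEGER PRIMARY KEY);
CREATE TABLE gallery_media(gallery_id INT, media_id INT);
CREATE TABLE media_user_action(media_id INT, user_id INT, type INT);
INSERT INTO gallery VALUES (1),(2);
INSERT INTO gallery_media VALUES (1,10),(1,11),(2,20);
INSERT INTO media_user_action VALUES (10,1,0),(10,2,0),(11,1,0);
""")

# Count per gallery inside the subquery, then join once per gallery.
rows = con.execute("""
SELECT g.id, COALESCE(gm.media_count, 0) AS media_count
FROM gallery g
LEFT JOIN (
    SELECT gm.gallery_id, COUNT(DISTINCT mua.media_id) AS media_count
    FROM gallery_media gm
    JOIN media_user_action mua
      ON mua.media_id = gm.media_id AND mua.type = 0
    GROUP BY gm.gallery_id
) gm ON gm.gallery_id = g.id
ORDER BY 2 DESC
""").fetchall()
print(rows)  # [(1, 2), (2, 0)]
```

Because the count is computed per `gallery_id` before the join, each gallery appears exactly once, and a gallery with no matching actions gets `NULL`, turned into 0 with `COALESCE`.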
|
You're grouping by `media_id` to get a count, but since one `gallery` can have many `gallery_media`, you still end up with multiple rows for one `gallery`. You can either sum the `mua_count` from your subselect:
```
SELECT g.*, sum(mua_count)
FROM gallery g
LEFT JOIN gallery_media gm ON gm.gallery_id = g.id
LEFT JOIN (
SELECT media_id,
COUNT(*) as mua_count
FROM media_user_action
WHERE type = 0
GROUP BY media_id
) mua ON mua.media_id = gm.media_id
GROUP BY g.id
ORDER BY g.id desc NULLS LAST;
```
```
id | sum
----+-----
2 | 1
1 | 2
```
Or you can just `JOIN` all the way through and group once on `g.id`:
```
SELECT g.id, count(*)
FROM gallery g
JOIN gallery_media gm ON gm.gallery_id = g.id
JOIN media_user_action mua ON mua.media_id = gm.media_id
GROUP BY g.id
ORDER BY count DESC;
```
```
id | count
----+-------
1 | 2
2 | 1
```
|
group twice in one query
|
[
"",
"sql",
"postgresql",
"join",
"group-by",
""
] |
I have a SQL Server table with a few columns.
One of those columns is a `date` and another is `No of Nights`.
Number of nights is always a two character `varchar` column with values like 1N, 2N, 3N etc depending on the number of nights up to 7N.
I want to subtract the 1 part of the 1N column from the date.
For ex: **`25Oct15 - 1N = 24Oct15`**
Obviously I will be replacing the '1N' with the actual column name. I tried doing a trim as:
```
date - left(no of nights, 1)
```
But I get an error
> Conversion failed when converting the varchar value '25Oct16' to data type int.
Sample date below
```
Date | NoofNIghts | Result
2016-04-26 00:00:00.000 | 1N |
2016-04-28 00:00:00.000 | 3N |
```
Where the result column would be the subtracted value. Any help would be great. Thanks.
|
```
SELECT DATEADD ( DAY, - CONVERT(INT, REPLACE(NoofNights, 'N', '')), getdate() ) as Result
```
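The same parse-then-subtract logic can be sanity-checked outside the database. This Python sketch mirrors the `DATEADD`/`REPLACE` expression above; `checkin_date` is a hypothetical helper name, not part of the original query:

```python
from datetime import datetime, timedelta

def checkin_date(checkout, nights_code):
    """Subtract the numeric part of a code like '3N' from a date
    (analogue of DATEADD(DAY, -CONVERT(INT, REPLACE(c,'N','')), d))."""
    nights = int(nights_code.rstrip("N"))
    return checkout - timedelta(days=nights)

print(checkin_date(datetime(2016, 4, 28), "3N"))  # 2016-04-25 00:00:00
```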
|
Try this
```
DECLARE @V_Date DATETIME = '2016-04-26 00:00:00.000'
,@V_NoofNIghts VARCHAR(2) = '1N'
SELECT DATEADD(DAY, CAST(LEFT(@V_NoofNIghts,1) AS INT) *-1 ,@V_Date)
```
|
Date subtraction error
|
[
"",
"sql",
"sql-server",
""
] |
I want to delete specific values/data from one column with the `WHERE` condition.
Table `CIVILITE`:
```
ID_CIVILITE CIV_LIBELLE
1 M.
2 Mme
3 Mlle
4 Aucun
DELETE FROM CIVILITE WHERE CIV_LIBELLE='Aucun';
```
Error:
> The DELETE statement is in conflict with the constraint REFERENCE
> "FK\_PERS\_CIVILITE". The conflict occurred in database "DBDB",
> "dbo.PERSONNE" table, column 'ID\_CIVILITE'.
How can I delete a value where there is a constraint?
|
The reason you are getting this error is that you are trying to delete a row which is referenced by another row, hence the conflict. So either delete the referencing rows or disable the constraint temporarily. You need to first alter your table like this:
```
ALTER TABLE [DBDB].[dbo].[PERSONNE] NOCHECK CONSTRAINT [FK_PERS_CIVILITE]
```
and then you can delete the record.
Make sure that once you delete the record you apply the constraint again.
```
ALTER TABLE [DBDB].[dbo].[PERSONNE] WITH CHECK CONSTRAINT [FK_PERS_CIVILITE]
```
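A small runnable sketch of why the delete fails and how removing the referencing child rows first resolves it, using SQLite from Python with simplified, hypothetical table definitions (this demonstrates the delete-the-referencing-rows option, since SQLite has no per-constraint NOCHECK):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE civilite(id INTEGER PRIMARY KEY, libelle TEXT);
CREATE TABLE personne(id INTEGER PRIMARY KEY,
                      id_civilite INT REFERENCES civilite(id));
INSERT INTO civilite VALUES (4, 'Aucun');
INSERT INTO personne VALUES (1, 4);
""")

blocked = False
try:
    con.execute("DELETE FROM civilite WHERE libelle = 'Aucun'")
except sqlite3.IntegrityError:
    blocked = True  # FOREIGN KEY constraint failed

# Delete the referencing child rows first, then the parent delete succeeds.
con.execute("DELETE FROM personne WHERE id_civilite = 4")
con.execute("DELETE FROM civilite WHERE libelle = 'Aucun'")
remaining = con.execute("SELECT COUNT(*) FROM civilite").fetchone()[0]
print(blocked, remaining)  # True 0
```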
|
It seems CIVILITE.ID_CIVILITE is a primary key; first delete the matching rows in the referencing table, i.e. the foreign key table.
```
-- Run 1st
DELETE FROM [PERSONNE]
WHERE ID_CIVILITE IN
(SELECT ID_CIVILITE FROM CIVILITE WHERE CIV_LIBELLE='Aucun')
-- Run 2nd
DELETE CIVILITE WHERE CIV_LIBELLE='Aucun'
```
|
Delete specific values from column
|
[
"",
"sql",
"sql-server",
"sql-delete",
""
] |
I am trying to export from my `Table` data into `Excel` through `T-SQL` query. After little research I came up with this
```
INSERT INTO OPENROWSET ('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=G:\Test.xls;',
'SELECT * FROM [Sheet1$]')
SELECT *
FROM dbo.products
```
When I execute the above query I get this error:
> Msg 7302, Level 16, State 1, Line 7 Cannot create an instance of OLE
> DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".
So went through internet for solution, got the below link
<https://blogs.msdn.microsoft.com/spike/2008/07/23/ole-db-provider-microsoft-jet-oledb-4-0-for-linked-server-null-returned-message-unspecified-error/>
In the above link they say we need to be an administrator to create a folder in the **C drive** `TEMP` folder, since `OPENROWSET` creates some files or folders inside the `TEMP` folder.
I am doing this on my home PC and I am the administrator. I still get the same error.
**SQL SERVER** details
> Microsoft SQL Server 2016 (RC1) - 13.0.1200.242 (X64) Mar 10 2016
> 16:49:45 Copyright (c) Microsoft Corporation Enterprise Evaluation
> Edition (64-bit) on Windows 10 Pro 6.3 (Build 10586: )
Any pointers to fix the problem will be highly appreciated
**Update :** I have already configured `Ad Hoc Distributed Queries` and
executed the queries below:
```
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.Jet.OLEDB.4.0', N'AllowInProcess', 1
GO
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.Jet.OLEDB.4.0', N'DynamicParameters', 1
GO
```
Now I am getting this error:
> Msg 7438, Level 16, State 1, Line 7 The 32-bit OLE DB provider
> "Microsoft.Jet.OLEDB.4.0" cannot be loaded in-process on a 64-bit SQL
> Server.
|
I have MS Sql server 2012, and Office 2013. This seems to be very finicky, so you may have to adjust to your particular versions.
1. Download the Microsoft.ACE.OLEDB.12.0 for Windows, 64 bit version found here: <https://www.microsoft.com/en-us/download/details.aspx?id=13255>
2. Install it on your server.
3. Check the user running SQL Server and make sure that user has access to the temp directory C:\Windows\ServiceProfiles\LocalService\AppData\Local\Temp if it's a local service account or C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp if it's a network service account.
4. Configure 'Ad Hoc Distributed Queries' and enable the `Microsoft.ACE.OLEDB` files like this:
Here's the SP\_CONFIGURE commands:
```
SP_CONFIGURE 'show advanced options', 1;
GO
RECONFIGURE;
SP_CONFIGURE 'Ad Hoc Distributed Queries', 1;
GO
RECONFIGURE;
EXEC sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1
EXEC sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParam', 1
```
On newer versions (**SQL Server 2014** and later) you have to use `'DynamicParameters'` instead of `'DynamicParam'`:
```
EXEC sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1
```
Make sure you register msexcl40.dll like this:
```
regsvr32 C:\Windows\SysWOW64\msexcl40.dll
```
|
Check out sp\_configure /RECONFIGURE...
```
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
GO
RECONFIGURE;
GO
```
See these links for more info:
<https://technet.microsoft.com/en-us/library/aa259616(v=sql.80).aspx>
<http://blog.sqlauthority.com/2010/11/03/sql-server-fix-error-ms-jet-oledb-4-0-cannot-be-used-for-distributed-queries-because-the-provider-is-used-to-run-in-apartment-mode/>
|
Cannot create an instance of OLE DB provider Microsoft.Jet.OLEDB.4.0 for linked server null
|
[
"",
"sql",
"sql-server",
"export-to-excel",
"openrowset",
""
] |
In this part of query:
```
. . . WHERE col <> 10
```
It matches everything except `10` and `NULL`. While I want to match everything except only `10`. So how can I exclude `NULL` of that?
|
This you can use for MySQL,
```
...WHERE IFNULL(col,0) <> 10
```
If the value of col is `NULL` then `IFNULL(col,0)` converts it to `0` before the comparison, so you get every record except those where col is `10`, including the NULL rows.
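A quick check of the behaviour using SQLite from Python (SQLite also has `IFNULL`; the table is a minimal stand-in):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(col INT);
INSERT INTO t VALUES (10), (5), (NULL);
""")

# col <> 10 silently drops the NULL row; IFNULL(col, 0) keeps it.
plain = con.execute("SELECT COUNT(*) FROM t WHERE col <> 10").fetchone()[0]
nullsafe = con.execute("SELECT COUNT(*) FROM t WHERE IFNULL(col, 0) <> 10").fetchone()[0]
print(plain, nullsafe)  # 1 2
```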
|
The problem is that the condition `col != 10` (in MySQL) means "row where `col` has a value that is not 10". `NULL` is not a value so `NULL` rows aren't matched. If they were, you could have problems with `NULL`s cascading into other parts of your logic messing things up, since they don't use the same equality logic as values.
As far as I understand, doing it in two conditions (`col IS NULL OR col != 10`) is the proper way since by MySQL logic you're asking for two separate things. "rows where `col` has a value that is not 10, or rows where `col` has no value".
|
How can I exclude NULL of not equal operator?
|
[
"",
"mysql",
"sql",
"operators",
"where-clause",
""
] |
I would like to include a column row\_number in my result set with the row number sequence, where 1 is the newest item, without gaps. This works:
```
SELECT id, row_number() over (ORDER BY id desc) AS row_number, title
FROM mytable
WHERE group_id = 10;
```
Now I would like to query for the same data in chunks of 1000 each to be easier on memory:
```
SELECT id, row_number() over (ORDER BY id desc) AS row_number, title
FROM mytable
WHERE group_id = 10 AND id >= 0 AND id < 1000
ORDER BY id ASC;
```
Here the row\_number restarts from 1 for every chunk, but I would like it to be as if it were part of the global query, as in the first case. Is there an easy way to accomplish this?
|
Assuming:
* `id` is defined as `PRIMARY KEY` - which means `UNIQUE` and `NOT NULL`. Else you may have to deal with NULL values and / or duplicates (ties).
* You have no concurrent write access on the table - or you don't care what happens after you have taken your snapshot.
A [`MATERIALIZED VIEW`](https://www.postgresql.org/docs/current/sql-creatematerializedview.html), like you demonstrate [in your answer](https://stackoverflow.com/a/36977574/939860), is a good choice.
```
CREATE MATERIALIZED VIEW mv_temp AS
SELECT row_number() OVER (ORDER BY id DESC) AS rn, id, title
FROM mytable
WHERE group_id = 10;
```
But index and subsequent queries must be on the row number `rn` to get
> data in chunks of 1000
```
CREATE INDEX ON mv_temp (rn);
SELECT * FROM mv_temp WHERE rn BETWEEN 1000 AND 2000;
```
Your implementation would require a guaranteed gap-less `id` column - which would void the need for an added row number to begin with ...
When done:
```
DROP MATERIALIZED VIEW mv_temp;
```
The index dies with the table (materialized view in this case) automatically.
Related, with more details:
* [Optimize query with OFFSET on large table](https://stackoverflow.com/questions/34110504/optimize-query-with-offset-on-large-table/34291099#34291099)
|
You want to have a query for the first 1000 rows, then one for the next 1000, and so on?
Usually you just write one query (the one you already use), have your app fetch 1000 records, do something with them, then fetch the next 1000 and so on. Hence there is no need for separate queries.
However, it would be rather easy to write such partial queries:
```
select *
from
(
SELECT id, row_number() over (ORDER BY id desc) AS rn, title
FROM mytable
WHERE group_id = 10
) numbered
where rn between 1 and 1000; -- <- simply change the row number range here
-- e.g. where rn between 1001 and 2000 for the second chunk
```
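The number-first-then-slice approach is easy to verify with SQLite's window functions (available since SQLite 3.25) from Python; `mytable` below is a minimal stand-in holding ids 1–7:

```python
import sqlite3  # bundled SQLite must be >= 3.25 for window functions

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable(id INTEGER PRIMARY KEY, group_id INT)")
con.executemany("INSERT INTO mytable VALUES (?, 10)", [(i,) for i in range(1, 8)])

def chunk(lo, hi):
    # Number globally first, then slice on the row number.
    return con.execute("""
        SELECT id, rn FROM (
            SELECT id, ROW_NUMBER() OVER (ORDER BY id DESC) AS rn
            FROM mytable WHERE group_id = 10
        ) WHERE rn BETWEEN ? AND ?
        ORDER BY rn
    """, (lo, hi)).fetchall()

print(chunk(1, 3))  # [(7, 1), (6, 2), (5, 3)]
print(chunk(4, 6))  # [(4, 4), (3, 5), (2, 6)] -- numbering continues across chunks
```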
|
Global row numbers in chunked query
|
[
"",
"sql",
"postgresql",
"pagination",
"row-number",
"postgresql-9.4",
""
] |
I want to make a select query which groups rows based on a given column and then sorts by size of such groups.
Let's say we have this sample data:
```
id type
1 c
2 b
3 b
4 a
5 c
6 b
```
**I want to obtain the following** by grouping and sorting the column 'type' in a descending way:
```
id type
2 b
3 b
6 b
1 c
5 c
4 a
```
As of now I am only able to get the count of each group but that is not exactly what I need:
```
SELECT *, COUNT(type) AS typecount
FROM sampletable
GROUP BY type
ORDER BY typecount DESC, type ASC
id type count
2 b 3
1 c 2
4 a 1
```
Can anybody please give me a hand with this query?
Edit:
Made 'b' the biggest group to avoid arriving at the same output by using only ORDER BY
|
It may not be the best way, but it will give you what you want.
You work out the totals for each group and then join that "virtual" table to your original table by the determined counts.
```
SELECT *
FROM sampletable s1
INNER JOIN (SELECT count(type) AS iCount,type
FROM sampletable
GROUP BY type) s2 ON s2.type = s1.type
ORDER BY s2.iCount DESC, s1.type ASC
```
<http://sqlfiddle.com/#!9/f6b0c4/6/0>
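The same join-against-the-counts idea, runnable with SQLite from Python on the sample data (an `id` tiebreaker is added here only to make the row order fully deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sampletable(id INTEGER PRIMARY KEY, type TEXT);
INSERT INTO sampletable VALUES (1,'c'),(2,'b'),(3,'b'),(4,'a'),(5,'c'),(6,'b');
""")

# Join each row to its group's size, then sort by that size.
rows = con.execute("""
SELECT s1.id, s1.type
FROM sampletable s1
JOIN (SELECT type, COUNT(*) AS c FROM sampletable GROUP BY type) s2
  ON s2.type = s1.type
ORDER BY s2.c DESC, s1.type ASC, s1.id ASC
""").fetchall()
print(rows)  # [(2, 'b'), (3, 'b'), (6, 'b'), (1, 'c'), (5, 'c'), (4, 'a')]
```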
|
You can't use a column alias in your `GROUP BY`; just repeat the expression:
```
SELECT type, COUNT(type) AS count
FROM sampletable
GROUP BY type
ORDER BY COUNT(*) DESC, type ASC
```
Note that I changed the `SELECT` clause - you can't use `*` in your `SELECT` either since expressions in the `SELECT` need to either be in the `GROUP BY` clause or an aggregation.
|
SQL Sort by group size
|
[
"",
"sql",
""
] |
Using SQL-Server.
I have a table AVQ. Two of the columns are named Questions and Instructions. I would like to concatenate those two columns and store the result back into Questions.
I've got the concat query `SELECT question + ISNULL(' ' + instructions, '') from AVQ;`
But I'm unsure of how to get the result back into the Questions column.
I've tried to use that query as a subquery like so:
```
update AVQ
set Question = (SELECT question + ISNULL(' ' + instructions, '') from AVQ)
where AVID = 2;
```
but I get the error `Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.`
Could someone point me in the correct direction?
|
Sub-query isn't required here.
```
update AVQ
set Question = question + ISNULL(' ' + instructions, '')
where AVID = 2;
```
|
Here is a method that might actually be more efficient:
```
update AVG
set Question = Question + ' ' + instructions
where AVID = 2 and instructions is not null;
```
Why bother even attempting to do the update if `instructions` is empty?
|
Insert concat of two columns back into one of the original columns
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables both of which have the same two columns: feature ID and language. I want to pull out records where the language has changed for the feature ID. The problem, though, is that there are multiple languages for each feature ID. My tables look like:
Table 1
```
Feature ID | Language
------------------------
001 | 'en'
001 | 'es'
001 | 'pt'
002 | 'es'
002 | 'fr'
```
Table 2
```
Feature ID | Language
-----------------------
001 | 'es'
001 | 'en
001 | 'fr'
002 | 'fr'
002 | 'es'
```
I initially tried something like:
```
SELECT a.feature_id, b.feature_id, a.language, b.language
FROM table 1 a FULL OUTER JOIN table 2 b on a.feature_id = b.feature_id
WHERE a.language <> b.language
```
but this didn't work exactly as I hoped. I'm noticing results like:
```
002 | 002 | 'fr' | 'es'
002 | 002 | 'es' | 'fr'
```
Is there a way in SQL to alleviate this?
Thanks.
|
```
SELECT a.feature_id, b.feature_id, a.language, b.language
FROM [table 1] a FULL OUTER JOIN [table 2] b on a.feature_id = b.feature_id
and a.language = b.language
WHERE a.feature_id is null or b.feature_id is null
```
Change it to a full outer join on language and feature and then a where clause to filter down to those rows without a match.
|
Try this:
```
SELECT a.feature_id, b.feature_id, a.language, b.language
FROM table 1 a FULL OUTER JOIN table 2 b on a.feature_id = b.feature_id
WHERE a.feature_id || a.language <> b.feature_id || b.language
```
Or:
```
SELECT a.feature_id, b.feature_id, a.language, b.language
FROM table 1 a LEFT OUTER JOIN table 2 b
on a.feature_id || a.language <> b.feature_id || b.language
WHERE b.feature_id is NULL
```
|
SQL: Find diff between two tables with non-unique feature IDs?
|
[
"",
"sql",
""
] |
I keep getting a syntax error. I've looked at countless examples and I'm not sure what the problem is...
```
DELIMITER //
CREATE PROCEDURE test()
BEGIN
SELECT * FROM user;
END //
DELIMITER ;
```
SCREENSHOT:
[](https://i.stack.imgur.com/IjsI7.png)
|
Alter the procedure and remove the delimiter, like below:
```
CREATE PROCEDURE test()
BEGIN
SELECT * FROM user;
END
```
|
Change `user` to `[user]`, as user is a reserved keyword:
```
CREATE PROCEDURE test()
BEGIN
SELECT * FROM [user];
END
```
|
creating stored procedure - syntax error
|
[
"",
"mysql",
"sql",
"stored-procedures",
""
] |
I am using SQL Server 2008 and I have two columns in date format:
```
Column_1: [2014-12-19]
Column_2: [2015-08-31]
```
I want to merge them and change the the data type to `NVARCHAR`.
I tried this code
```
CONVERT(NVARCHAR,[ Column_1])+CONVERT(NVARCHAR,[Column_2])AS TEST
```
but I get this result:
```
2014-12-192015-08-31
```
instead of `2014121920150831` without the hyphens.
Could you please help?
Thanks
|
Although you can fiddle around with conversion codes, just use replace:
```
REPLACE(CONVERT(NVARCHAR(255), Column_1) + CONVERT(NVARCHAR(255), Column_2), '-', '') AS TEST
```
Or, if you don't want to be dependent on the local date format:
```
CONVERT(NVARCHAR(255), Column_1, 112) + CONVERT(NVARCHAR(255), Column_2, 112) AS TEST
```
|
`CONVERT` has a third parameter which determines the format of the date/time. See [here](https://msdn.microsoft.com/en-GB/library/ms187928.aspx?f=255&MSPPError=-2147217396) for definition. Code 112 will give you what you want.
You can also use `REPLACE` to remove the hyphens.
|
Convert date to nvarchar and merge two columns
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to calculate the variation of two values between current month and the previous one.
Let's say I have total calls in different months and want the variation of each month from the previous one.
I have a table containing vendor, month, and the calls in each month.
I have tried the following query, but it gives wrong results for a vendor when there is no data in the previous month:
```
select vendor,
nvl(round(sum(calls),0),0.00) as "total_calls",
nvl((((lag(CAST(sum(calls) AS decimal) ,0) over(order by month)) -
(lag(CAST(sum(calls) AS DECIMAL),1) over(order by month))) /
(lag(CAST(sum(calls) AS DECIMAL),1) over(order by month))), 0) as tot_calls_variation
from table_summary
group by full_month,vendor
order by month,vendor
```
The lag() function returns the row at the given offset, but this gives wrong results since the variation is calculated across consecutive rows and not per vendor.
I am wondering if there is any other way to do this. Thanks.
|
Thank you all for your answers. I came up with a solution: I created a temp table and joined it to itself on vendor and month (matching the previous month), as follows:
```
select vendor,month,
nvl(round(sum(calls),2),0.0) as "total_calls"
into temp1
from table_summary
group by month,vendor
order by month,vendor
select tb1.month ,tb1.vendor,
((tb1.total_calls - tb2.total_calls) / nullif(tb2.total_calls,0)) as tot_calls_variation
from temp1 tb1
left join temp1 tb2 on (tb1.month -1) = (tb2.month) and tb1.vendor = tb2.vendor
order by tb1.month ;
drop table temp1;
```
This also works when a vendor has no call data in some months.
|
It's difficult to say without seeing your data and desired results, but perhaps you might be better off just going with a self join instead of window functions:
```
SELECT
month_summary.vendor,
month_summary.calls,
    (month_summary.calls - prev_month_summary.calls) / prev_month_summary.calls as tot_calls_variation
FROM
(SELECT vendor, full_month, sum(calls) as calls FROM table_summary GROUP BY vendor, full_month) as month_summary
INNER JOIN (SELECT vendor, full_month, sum(calls) as calls FROM table_summary GROUP BY vendor, full_month) as prev_month_summary ON
month_summary.vendor = prev_month_summary.vendor AND
    month_summary.full_month - 1 = prev_month_summary.full_month
```
|
Calculate the variation of values in amazon redshift database
|
[
"",
"sql",
"amazon-redshift",
""
] |
I have a table which save xml data. The xml is as follows
```
<responses>
<response>
<id>UniqueRowID</id>
<value>16</value>
</response>
<response>
<id>Language</id>
<value>en-GB</value>
</response>
<response>
<id>UserId</id>
<value>21</value>
</response>
</responses>
```
The next row can have some other data; id and value are common to all, but some ids present in one row may not be in another.
Now I need a table in the following format. If a column is not present in the XML it can be blank. How can I achieve this?
```
UniqueRowID | Language | UserId
------------|----------|-------
16          | en-GB    | 21
```
I saw some similar questions but nothing worked for me, since the XML is represented differently in those examples.
|
I'm posting this as a second answer since it is a completely different approach. With this code you can read any XML of this structure; using dynamic SQL makes it possible to create dynamic column names:
```
CREATE TABLE #tmpTbl (YourXml XML);
INSERT INTO #tmpTbl VALUES
('<responses>
<response>
<id>UniqueRowID</id>
<value>16</value>
</response>
<response>
<id>Language</id>
<value>en-GB</value>
</response>
<response>
<id>UserId</id>
<value>21</value>
</response>
<response>
<id>SomeOther1</id>
<value>SO1</value>
</response>
<response>
<id>SomeOther2</id>
<value>SO2</value>
</response>
</responses>');
DECLARE @columnNames VARCHAR(MAX)=
(
STUFF(
(
SELECT ',[' + B.value('id[1]','varchar(max)')+ ']'
FROM #tmpTbl
CROSS APPLY YourXml.nodes('/responses/response') AS A(B)
FOR XML PATH('')
),1,1,''
)
);
DECLARE @cmd VARCHAR(MAX)=
'SELECT p.*
FROM
(
SELECT B.value(''id[1]'',''varchar(max)'') AS ColumnName
,B.value(''value[1]'',''varchar(max)'') AS ColumnValue
FROM #tmpTbl
CROSS APPLY YourXml.nodes(''/responses/response'') AS A(B)
) AS tbl
PIVOT
(
MIN(ColumnValue) FOR ColumnName IN(' + @columnNames + ')
) As p';
EXEC(@cmd);
GO
DROP TABLE #tmpTbl;
```
The result:
```
UniqueRowID Language UserId SomeOther1 SomeOther2
16 en-GB 21 SO1 SO2
```
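Outside the database, the same id/value pivot is a one-liner. For comparison, this Python sketch flattens the document with the standard library:

```python
import xml.etree.ElementTree as ET

doc = """<responses>
  <response><id>UniqueRowID</id><value>16</value></response>
  <response><id>Language</id><value>en-GB</value></response>
  <response><id>UserId</id><value>21</value></response>
</responses>"""

# Pivot every <response> id/value pair into one flat row (a dict).
row = {r.findtext("id"): r.findtext("value")
       for r in ET.fromstring(doc).iter("response")}
print(row)  # {'UniqueRowID': '16', 'Language': 'en-GB', 'UserId': '21'}
```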
|
This is a simple XML structure. If you need to "flatten" it into a SQL table, you can do so with XPATH syntax as shown below.
```
declare @xml xml = '
<responses>
<response>
<id>UniqueRowID</id>
<value>16</value>
</response>
<response>
<id>Language</id>
<value>en-GB</value>
</response>
<response>
<id>UserId</id>
<value>21</value>
</response>
</responses>
';
select
T.c.value('(id)[1]','nvarchar(100)') as UniqueRowID
,T.c.value('(value)[1]','nvarchar(100)') as Language
from
@xml.nodes('/responses/response') T(c)
```
Which will give you data like this:
```
UniqueRowID | Language
======================
UniqueRowID | 16
Language | en-GB
UserId | 21
```
I'm a little confused about what you mean when you write:
*The next row can have some other data; id and value are common to all, but some ids present in one row may not be in another.*
It also seems that you structured this data in a way that isn't really well suited for XML. Every response element should ideally have the same components.
You might consider something more like this (just food for thought):
```
<responses>
<response>
<UniqueRowID>100</UniqueRowID>
    <UserId>16</UserId>
    <Language>EN-GB</Language>
</response>
<response>
<UniqueRowID>200</UniqueRowID>
    <UserId>17</UserId>
    <Language>ES</Language>
</response>
<response>
<UniqueRowID>300</UniqueRowID>
    <UserId>18</UserId>
    <Language>DE</Language>
</response>
</responses>
```
|
Xml to table in tsql
|
[
"",
"sql",
"xml",
"t-sql",
""
] |
* 1. The difference of the two statements?
* 2. Is the second statemnet faster than the first statement?
First statement :
```
SELECT * FROM students WHERE id = 2197176;
SELECT * FROM students WHERE id = 74877;
```
Second statement:
```
SELECT * FROM students WHERE id IN(2197176, 74877, ...)
```
**UPDATE**:
* 3. If the time complexity of first statement is `m*n`, will the second statement be `m*n`?
> `m`: the time complexity of `SELECT * FROM students WHERE id = 2197176;`.
>
> `n`: the amount of ids.
**UPDATE**:
* 4. In the following two cases, which case is faster? And why?
Assuming the table is as follows:
```
| ID | FLAG |
| ----------|:------:|
| 2197176 | true |
| 74877 | false |
```
First case:
```
List ids = getIds();
for(id in ids){
  result = getResultFromFirstStatement(id); //one sql statement per id, n in total
if(result.flag) { do sth ...}
}
```
Second case:
```
List ids = getIds();
results = getResultFromSecondStatement(ids); //one sql statement covering all ids
for(r in results){
if(r.flag) { do sth ...}
}
```
|
I ran execution plan on 3 different queries.
First query: Using `UNION`
Second query: Using `UNION ALL`
Third query: Using `IN`
```
USE AdventureWorksLT2012
-- First query using UNION
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 716
UNION
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 727
UNION
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 770
-- Second query using UNION ALL
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 716
UNION ALL
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 727
UNION ALL
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID = 770
-- Third query using IN
SELECT ProductID, Name FROM SalesLT.Product WHERE ProductID IN(716, 727, 770)
```
[](https://i.stack.imgur.com/RipLA.png)
As you can see the `UNION` is using **53%** (Because `UNION` tries to delete duplicates), `UNION ALL` is costing **34%** and `IN` costs **14%** of whole batch
|
First query
```
SELECT * FROM students WHERE id = 2197176 ..
```
returns rows whose id column equals a specific value, in this case 2197176; running several such selects returns the separate result sets one after another.
In the second query
```
SELECT * FROM students WHERE id IN (2197176, 74877, ...);
```
returns rows where the id column equals 2197176 or 74877 or ... .
With equal parameters both queries return the same records, but the second query is better in readability and performance.
|
What's the difference of SELECT and SELECT IN in sql?
|
[
"",
"sql",
""
] |
So what I'm looking to do is create a report that shows how many sales a company had on a weekly basis.
So we have a time field called `created` that looks like this:
```
2016-04-06 20:58:06 UTC
```
This field represents when the sale takes place.
Now lets say I wanted to create a report that gives you how many sales you had on a weekly basis. So the above example will fall into something like `Week of 2016-04-03` (it doesn't have to exactly say that, I'm just going for the simplest way to do this)
Anyone have any advice? I imagine it involves using the `UTEC_TO_xxxxxx` functions.
|
you can use [WEEK()](https://cloud.google.com/bigquery/query-reference#week) function - it gives you week number
```
SELECT WEEK('2016-04-06 20:58:06 UTC')
```
if you need first day of the week - you can try something like
```
STRFTIME_UTC_USEC((UTC_USEC_TO_WEEK(TIMESTAMP_TO_USEC(TIMESTAMP('2016-05-02 20:58:06 UTC')), 0)),'%Y-%m-%d')
```
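The week-start computation can be mirrored in plain Python for sanity-checking. `week_start` is a hypothetical helper; the Sunday start assumed here matches the `0` passed to `UTC_USEC_TO_WEEK` above:

```python
from datetime import datetime, timedelta

def week_start(ts, sunday_start=True):
    """Date of the first day of the week containing ts."""
    d = ts.date()
    # Python weekday(): Monday=0 ... Sunday=6; shift so the start day maps to 0.
    offset = (d.weekday() + 1) % 7 if sunday_start else d.weekday()
    return d - timedelta(days=offset)

print(week_start(datetime(2016, 4, 6, 20, 58, 6)))  # 2016-04-03
```

For the sample timestamp from the question this yields 2016-04-03, i.e. the "Week of 2016-04-03" bucket mentioned there.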
|
The documentation advises to use standard SQL functions, like [DATE\_TRUNC()](https://cloud.google.com/bigquery/docs/reference/standard-sql/date_functions):
```
SELECT DATE_TRUNC(DATE '2019-12-25', WEEK) as week;
```
|
Best way to break down by weeks in BigQuery
|
[
"",
"sql",
"google-bigquery",
"bigdata",
""
] |
I am looking for the detail answer with simple examples of these functions.
|
**NVL** checks if first argument is null and returns second argument:
```
select nvl(null, 'arg2') from dual
```
in this example result will be: arg2;
```
select nvl('arg1', 'arg2') from dual
```
and in this one: arg1;
**NVL2** has different logic. If first argument is not null then NVL2 returns second argument, but in other case it will return third argument:
```
select nvl2('arg1', 'arg2', 'arg3') from dual
```
Result: arg2
```
select nvl2(null, 'arg2', 'arg3') from dual
```
Result: arg3
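The two functions are easy to mirror as plain Python helpers, which makes the asymmetry obvious (these are illustrative analogues, not Oracle code):

```python
def nvl(exp1, exp2):
    """NVL: return exp2 when exp1 is null, otherwise exp1."""
    return exp2 if exp1 is None else exp1

def nvl2(exp1, exp2, exp3):
    """NVL2: return exp2 when exp1 is NOT null, otherwise exp3."""
    return exp3 if exp1 is None else exp2

print(nvl(None, "arg2"), nvl("arg1", "arg2"))                    # arg2 arg1
print(nvl2("arg1", "arg2", "arg3"), nvl2(None, "arg2", "arg3"))  # arg2 arg3
```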
|
Both the NVL(exp1, exp2) and NVL2(exp1, exp2, exp3) functions check the value exp1 to see if it is null.
With the NVL(exp1, exp2) function, if exp1 is not null, then the value of exp1 is returned; otherwise, the value of exp2 is returned, cast to the same data type as that of exp1.
With the NVL2(exp1, exp2, exp3) function, if exp1 is not null, then exp2 is returned; otherwise, the value of exp3 is returned.
|
What are the NVL and the NVL2 functions in Oracle? How do they differ?
|
[
"",
"sql",
"oracle",
""
] |
My requirement is to retrieve all the employee names from the `employees` table, and if there are no matching rows in the logs table then the employee name should still be displayed with a count of 0.
```
CREATE TABLE #EMPLOYEES
(
employeeId int,
employeename varchar(50)
)
INSERT INTO #EMPLOYEES VALUES (1,'Dinesh Alla')
INSERT INTO #EMPLOYEES VALUES (2,'ram')
INSERT INTO #EMPLOYEES VALUES (3,'Lakshmi')
INSERT INTO #EMPLOYEES VALUES (4,'sumanth')
CREATE TABLE #LOGS
(
entityID int,
EntityCode int,
employeeID int
)
INSERT INTO #LOGS VALUES (1,201,1)
INSERT INTO #LOGS VALUES (1,201,1)
INSERT INTO #LOGS VALUES (1,201,1)
INSERT INTO #LOGS VALUES (1,201,1)
INSERT INTO #LOGS VALUES (1,201,1)
SELECT
te.employeeID, employeeName,
COUNT(ISNULL(entityCode, 0)) AS caseEntryCount
FROM
#EMPLOYEES Te
LEFT JOIN
#LOGS Tee ON ISNULL(TE.employeeID,0) = ISNULL(Tee.employeeID,0) --OR entityEmployeeID IS NULL
WHERE
entityCode = 201
GROUP BY
te.employeeID, employeename, entityCode
ORDER BY
employeeID
```
Check the below image of my output.
[](https://i.stack.imgur.com/M8J2q.png)
But my expected output would be:
```
employeeID employeeName caseEntry
1 Dinesh Alla 5
2 ram 0
3 Lakshmi 0
4 sumanth 0
```
|
Try this:
```
SELECT te.employeeID, employeeName,
(SELECT COUNT(*) FROM #LOGS Tee
WHERE Te.employeeID = Tee.employeeID AND Tee.entityCode = 201) AS caseEntryCount
FROM #EMPLOYEES Te
ORDER BY Te.employeeID
```
You can rewrite your query with a LEFT OUTER JOIN and a GROUP BY, but that is slower than a simple query on the main table with a count subquery in the select list.
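Here is the correlated-subquery version run end-to-end against the sample data from the question, using SQLite from Python (temp-table `#` prefixes dropped):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employees(employeeId INTEGER PRIMARY KEY, employeename TEXT);
INSERT INTO employees VALUES (1,'Dinesh Alla'),(2,'ram'),(3,'Lakshmi'),(4,'sumanth');
CREATE TABLE logs(entityID INT, EntityCode INT, employeeID INT);
INSERT INTO logs VALUES (1,201,1),(1,201,1),(1,201,1),(1,201,1),(1,201,1);
""")

# Correlated count subquery: every employee keeps a row; missing logs count as 0.
rows = con.execute("""
SELECT e.employeeId, e.employeename,
       (SELECT COUNT(*) FROM logs l
        WHERE l.employeeID = e.employeeId AND l.EntityCode = 201) AS caseEntryCount
FROM employees e
ORDER BY e.employeeId
""").fetchall()
print(rows)
# [(1, 'Dinesh Alla', 5), (2, 'ram', 0), (3, 'Lakshmi', 0), (4, 'sumanth', 0)]
```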
|
Try this.
```
SELECT e.*
, COUNT(l.entityID) AS CaseEntry
FROM #EMPLOYEES e
LEFT JOIN #LOGS l
ON l.employeeID = e.employeeId
GROUP BY e.employeeId
, e.employeename
, l.EntityCode
ORDER BY e.employeeId;
```
Try to avoid using ORDER BY if the order doesn't matter. That would give you better performance of the query.
|
How to get all columns from first table when there is no matching column in SQL Server 2012
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
""
] |
I have the following table for student's fee payments
```
[fee_id] INT
[user_id] INT
[payment] DECIMAL (18, 2)
[date] DATETIME
[fee_remaining] DECIMAL (18, 2)
[year] INT
[payment_method] NVARCHAR (50)
[fee_required] DECIMAL (18, 2)
```
This is my current query to display the number of students who have paid, are yet to pay, or have partially paid their fees for the year:
```
SELECT DISTINCT
(SELECT COUNT(*) AS Expr1
FROM fee_payments
WHERE (fee_remaining = 0)
AND (YEAR = @year)) AS Fully_Paid,
(SELECT COUNT(*) AS Expr1
FROM fee_payments
WHERE (fee_remaining = fee_required)
AND (YEAR = @year)) AS Unpaid,
(SELECT COUNT(*) AS Expr1
FROM fee_payments
WHERE (fee_remaining > 0)
AND (YEAR = @year)
AND (fee_remaining <> fee_required)) AS Partially_Paid
FROM fee_payments AS fee_payments_1
```
This is my output
```
Fully_Paid | Unpaid | Partially_Paid
-------------------------------------
8 | 1 | 5
```
Is it at all possible to have my output displayed as follows?
```
Status | Total
----------------------------
Fully Paid | 8
Unpaid | 1
Partially Paid | 5
```
Any help would be greatly appreciated
|
Use a `case` expression to assign the required status to each row and `group by` the calculated column.
```
select status, count(*) as total
from (
SELECT
case when fee_remaining = 0 then 'fully_paid'
when fee_remaining <> fee_required then 'partially_paid'
when fee_remaining = fee_required then 'unpaid'
end as status
FROM fee_payments
WHERE YEAR = @year) t
group by status
```
Also note this assumes fee\_remaining and fee\_required are non null values. If they can be `null`, use `coalesce` to handle them when comparing.
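A runnable sketch of the bucket-then-group pattern with SQLite from Python, using made-up fee rows (2 fully paid, 1 unpaid, 3 partial). The CASE branches are ordered slightly differently but are equivalent under the assumption that fee_required is never 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fee_payments(fee_remaining REAL, fee_required REAL, year INT)")
con.executemany("INSERT INTO fee_payments VALUES (?, ?, 2016)",
                [(0, 100), (0, 100), (100, 100), (40, 100), (40, 100), (40, 100)])

rows = con.execute("""
SELECT status, COUNT(*) AS total
FROM (
    SELECT CASE WHEN fee_remaining = 0 THEN 'Fully Paid'
                WHEN fee_remaining = fee_required THEN 'Unpaid'
                ELSE 'Partially Paid'
           END AS status
    FROM fee_payments
    WHERE year = 2016
)
GROUP BY status
ORDER BY total DESC
""").fetchall()
print(rows)  # [('Partially Paid', 3), ('Fully Paid', 2), ('Unpaid', 1)]
```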
|
So without *completely* restructuring your query into something more efficient like [kvp's answer](https://stackoverflow.com/a/36990923/74757), you could `UNION` each of the results instead of using them each as a sub-query:
```
SELECT 'Fully Paid' AS Status, COUNT(*) AS Total
FROM fee_payments
WHERE (fee_remaining = 0) AND (YEAR = @year)
UNION
SELECT 'Unpaid', COUNT(*)
FROM fee_payments
WHERE (fee_remaining = fee_required) AND (YEAR = @year)
UNION
SELECT 'Partially Paid', COUNT(*)
FROM fee_payments
WHERE (fee_remaining > 0) AND (YEAR = @year) AND (fee_remaining <> fee_required)
```
|
Rewrite SQL query to return rows for each dataset
|
[
"",
"sql",
"sql-server",
"dataset",
""
] |
I need to check whether the column middle name contain any value or not.
If it is empty, then it should not be concatenated with the name.
```
Select
..
Agent,
FirstName + ' ' + MiddleName + ' ' + LastName as Name,
...
from tbSystemUser
```
In above query it adds space two times in Name if MiddleName is empty.
|
You could use `CASE expression` with `COALESCE` in following:
```
select
..
Agent,
case when coalesce(MiddleName, '') = '' then FirstName + ' ' + LastName
else FirstName + ' ' + MiddleName + ' ' + LastName
end as Name,
...
from tbSystemUser
```
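For a quick sanity check, here is the same `COALESCE`/`CASE` idea run on SQLite, which concatenates strings with `||` instead of `+`; the names are invented:

```python
# Illustrative sketch: skip the middle name when it is NULL or empty.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbSystemUser (FirstName TEXT, MiddleName TEXT, LastName TEXT);
INSERT INTO tbSystemUser VALUES
  ('John', 'Q', 'Public'), ('Jane', NULL, 'Doe'), ('Jim', '', 'Beam');
""")
names = [r[0] for r in con.execute("""
SELECT CASE WHEN COALESCE(MiddleName, '') = ''
            THEN FirstName || ' ' || LastName
            ELSE FirstName || ' ' || MiddleName || ' ' || LastName
       END AS Name
FROM tbSystemUser;
""")]
print(names)  # ['John Q Public', 'Jane Doe', 'Jim Beam']
```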
|
You can use case statement
```
Select
..
Agent,
Case MiddleName
when '' then
FirstName + ' ' + LastName
else
FirstName + ' ' + MiddleName + ' ' + LastName
end
as Name,
...
from tbSystemUser
```
OR
```
Select
..
Agent,
Case
when MiddleName = '' then
FirstName + ' ' + LastName
else
FirstName + ' ' + MiddleName + ' ' + LastName
end
as Name,
...
from tbSystemUser
```
|
How to check values in column in Select query
|
[
"",
"sql",
"sql-server",
""
] |
In my stored procedure, I have several parameters; when one of them is filled in, some of the others are ignored, and vice versa.
```
create procedure FlightReservations
@resDate date = getdate,
@fromdate date = null,
@todate date = null,
-- few more parameters
as
begin
if(@fromdate != null)
begin
--it has more inner joins
select * from tbl1 inner join tbl2 on tbl1.id=tbl2.id where tbl1.fromDate=@fromDate
end
else
begin
select * from tbl1 inner join tbl2 on tbl1.id=tbl2.id where tbl1.date=@resDate
end
end
```
How can I avoid repeating the
```
select * from tbl1 inner join tbl2 on tbl1.id=tbl2.id
```
from my code, and apply the WHERE clause to the resulting rows?
What I need is to have my result set in something like a temporary table and then select from that. However, I do not want to use an actual temporary table, because the result contains several fields and a temporary table is not a good option. What is the alternative?
|
Note that comparisons against NULL must be written as IS NULL or IS NOT NULL, since NULL is not a value and never compares equal to anything.
This should be what you are looking for
```
SELECT * FROM tbl1
INNER JOIN tbl2
ON tbl1.id = tbl2.id
WHERE (@fromdate IS NOT NULL AND tbl1.fromDate = @fromDate)
OR (@fromdate IS NULL AND tbl1.date = @resDate)
```
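As an illustration, the same optional-parameter predicate can be exercised end to end on SQLite, with `:fromdate` standing in for `@fromdate`; the tables and rows below are invented:

```python
# Illustrative sketch: one query whose WHERE branch depends on whether
# the optional parameter is supplied.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl1 (id INT, fromDate TEXT, date TEXT);
CREATE TABLE tbl2 (id INT);
INSERT INTO tbl1 VALUES (1, '2016-05-01', '2016-05-02'),
                        (2, '2016-05-03', '2016-05-02');
INSERT INTO tbl2 VALUES (1), (2);
""")
sql = """
SELECT tbl1.id FROM tbl1 JOIN tbl2 ON tbl1.id = tbl2.id
WHERE (:fromdate IS NOT NULL AND tbl1.fromDate = :fromdate)
   OR (:fromdate IS NULL     AND tbl1.date = :resdate)
"""
with_from = [r[0] for r in con.execute(sql, {"fromdate": "2016-05-01", "resdate": None})]
without   = [r[0] for r in con.execute(sql, {"fromdate": None, "resdate": "2016-05-02"})]
print(with_from, without)  # [1] [1, 2]
```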
|
One option might be to use dynamic query and build only the where clause based on your need.
```
DECLARE @sql nvarchar(max) = '<YOUR SELECT & JOIN STATEMENT>'
```
then construct your where clause based on your options,
```
if(<CONDITION1>)
BEGIN
SET @sql = @sql + '<WHERE CLAUSE FOR CONDITION1>'
END
if(<CONDITION2>)
BEGIN
SET @sql = @sql + '<WHERE CLAUSE FOR CONDITION2>'
END
...
```
You can execute the query using any of the suggested options here,<https://www.mssqltips.com/sqlservertip/1160/execute-dynamic-sql-commands-in-sql-server/>
|
how to avoid repeating multiple selects and inner joins and apply the where clause when its necessary?
|
[
"",
"sql",
"sql-server",
""
] |
I have tried the different solutions found on this site for doing an update-or-insert in SQL. I got the update working, but I can't implement a working insert for rows that don't exist yet. The working update code looks like this; any help on how to make it insert as well would be nice, since I'm an SQL newbie:
```
select top 0 * into #tmp_x from TTT;
select * from #tmp_x;
BULK INSERT #tmp_x FROM 'path\to\file1.csv' WITH (FIELDTERMINATOR =';',ROWTERMINATOR ='\n' ,FIRSTROW = 2);
UPDATE TTT
SET TTT.[Artikelbenämning] = #tmp_x.[Artikelbenämning],TTT.[Count of Ordernummer] = #tmp_x.[Count of Ordernummer],TTT.[Sum of Antal beställda] = #tmp_x.[Sum of Antal beställda],TTT.[Min of Skapat Datum] = #tmp_x.[Min of Skapat Datum],TTT.[Max of Planerat Plockdatum] = #tmp_x.[Max of Planerat Plockdatum],TTT.[Kommentar Inköpsplanerare] = #tmp_x.[Kommentar Inköpsplanerare]
FROM #tmp_x
WHERE TTT.Artikelnummer = #tmp_x.Artikelnummer;
drop table #tmp_x;
print 'dropped table';
```
|
If you have SQL Server 2008 or later, you can make use of the MERGE statement. The following Microsoft TechNet article describes the merge statement pretty well:
<https://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx>
Your query should look something like this:
```
BULK INSERT #tmp_x FROM 'path\to\file1.csv' WITH (FIELDTERMINATOR =';',ROWTERMINATOR ='\n' ,FIRSTROW = 2);
MERGE TTT AS T
USING #tmp_x AS S
ON (T.Artikelnummer = S.Artikelnummer)
WHEN NOT MATCHED BY TARGET
THEN INSERT(
[Artikelnummer]
,[Artikelbenämning]
,[Count of Ordernummer]
,[Sum of Antal beställda]
,[Min of Skapat Datum]
,[Max of Planerat Plockdatum]
,[Kommentar Inköpsplanerare]
)
VALUES(
S.[Artikelnummer]
,S.[Artikelbenämning]
,S.[Count of Ordernummer]
,S.[Sum of Antal beställda]
,S.[Min of Skapat Datum]
,S.[Max of Planerat Plockdatum]
,S.[Kommentar Inköpsplanerare]
)
WHEN MATCHED
THEN UPDATE
SET T.[Artikelbenämning] = S.[Artikelbenämning]
,T.[Count of Ordernummer] = S.[Count of Ordernummer]
,T.[Sum of Antal beställda] = S.[Sum of Antal beställda]
,T.[Min of Skapat Datum] = S.[Min of Skapat Datum]
,T.[Max of Planerat Plockdatum] = S.[Max of Planerat Plockdatum]
,T.[Kommentar Inköpsplanerare] = S.[Kommentar Inköpsplanerare];
-- Example exludes records where [Kommentar Inköpsplanerare] IS NULL from the merge
MERGE TTT AS T
USING (
SELECT
[Artikelnummer]
,[Artikelbenämning]
,[Count of Ordernummer]
,[Sum of Antal beställda]
,[Min of Skapat Datum]
,[Max of Planerat Plockdatum]
,[Kommentar Inköpsplanerare]
FROM #tmp_x
WHERE [Kommentar Inköpsplanerare] IS NOT NULL
)AS S
ON (T.Artikelnummer = S.Artikelnummer)
WHEN NOT MATCHED BY TARGET
THEN INSERT(
[Artikelnummer]
,[Artikelbenämning]
,[Count of Ordernummer]
,[Sum of Antal beställda]
,[Min of Skapat Datum]
,[Max of Planerat Plockdatum]
,[Kommentar Inköpsplanerare]
)
VALUES(
S.[Artikelnummer]
,S.[Artikelbenämning]
,S.[Count of Ordernummer]
,S.[Sum of Antal beställda]
,S.[Min of Skapat Datum]
,S.[Max of Planerat Plockdatum]
,S.[Kommentar Inköpsplanerare]
)
WHEN MATCHED
THEN UPDATE
SET T.[Artikelbenämning] = S.[Artikelbenämning]
,T.[Count of Ordernummer] = S.[Count of Ordernummer]
,T.[Sum of Antal beställda] = S.[Sum of Antal beställda]
,T.[Min of Skapat Datum] = S.[Min of Skapat Datum]
,T.[Max of Planerat Plockdatum] = S.[Max of Planerat Plockdatum]
,T.[Kommentar Inköpsplanerare] = S.[Kommentar Inköpsplanerare];
-- Example ignores updates on the [Kommentar Inköpsplanerare] column if the [Kommentar Inköpsplanerare] IS NULL in the source dataset
MERGE TTT AS T
USING #tmp_x AS S
ON (T.Artikelnummer = S.Artikelnummer)
WHEN NOT MATCHED BY TARGET
THEN INSERT(
[Artikelnummer]
,[Artikelbenämning]
,[Count of Ordernummer]
,[Sum of Antal beställda]
,[Min of Skapat Datum]
,[Max of Planerat Plockdatum]
,[Kommentar Inköpsplanerare]
)
VALUES(
S.[Artikelnummer]
,S.[Artikelbenämning]
,S.[Count of Ordernummer]
,S.[Sum of Antal beställda]
,S.[Min of Skapat Datum]
,S.[Max of Planerat Plockdatum]
,S.[Kommentar Inköpsplanerare]
)
WHEN MATCHED
THEN UPDATE
SET T.[Artikelbenämning] = S.[Artikelbenämning]
,T.[Count of Ordernummer] = S.[Count of Ordernummer]
,T.[Sum of Antal beställda] = S.[Sum of Antal beställda]
,T.[Min of Skapat Datum] = S.[Min of Skapat Datum]
,T.[Max of Planerat Plockdatum] = S.[Max of Planerat Plockdatum]
,T.[Kommentar Inköpsplanerare] = ISNULL(S.[Kommentar Inköpsplanerare], T.[Kommentar Inköpsplanerare]);
```
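`MERGE` is SQL Server syntax. As a hedged aside, the same update-or-insert effect can be sketched on SQLite 3.24+ with an upsert (`INSERT ... ON CONFLICT DO UPDATE`); the column names below are simplified stand-ins for the ones in the answer:

```python
# Illustrative upsert sketch: row 1 is updated, row 2 is inserted.
# Requires SQLite 3.24+ (shipped with all recent Python builds).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ttt (artikelnummer INT PRIMARY KEY, benamning TEXT);
INSERT INTO ttt VALUES (1, 'old name');
""")
con.executemany("""
INSERT INTO ttt (artikelnummer, benamning) VALUES (?, ?)
ON CONFLICT(artikelnummer) DO UPDATE SET benamning = excluded.benamning;
""", [(1, 'new name'), (2, 'inserted')])
rows = con.execute("SELECT * FROM ttt ORDER BY artikelnummer").fetchall()
print(rows)  # [(1, 'new name'), (2, 'inserted')]
```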
|
In Sql server, you can do both insert and update in the same statement, called [`MERGE`](https://msdn.microsoft.com/en-us/library/bb510625.aspx)
```
MERGE TTT AS target
USING #tmp_x AS source
ON (target.Artikelnummer = source.Artikelnummer )
WHEN MATCHED THEN
UPDATE SET [Artikelbenämning] = source.[Artikelbenämning],
[Count of Ordernummer] = source.[Count of Ordernummer],
[Sum of Antal beställda] = source.[Sum of Antal beställda]....
WHEN NOT MATCHED THEN
INSERT (<target's columns list>) -- I got a little lazy here...
VALUES (<source's columns list>)
```
|
SQL UPDATE or INSERT
|
[
"",
"sql",
"sql-server",
"insert",
""
] |
I've got two columns in the same table for my users: `name-displayed` and `short-name`.
`name-displayed` is populated with the full name of the user, for example "John Doe". In `short-name`, there is the short value, e.g. "john-doe" (essentially de-capitalized and hyphenated).
How would I amend the data in `short-name` based on the data in `name-displayed`? I'm sure I could use a self-join based on `UPDATE`, but I'm not sure how to implement a change in data across the columns.
Any help would be hugely appreciated!
|
You need to use the `Lower` and `Replace` functions for this.
See: [Lower](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_lower) and [Replace](http://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_replace) in the docs.
```
Update <table_name>
set `short-name` = REPLACE(LOWER(`name-displayed`), ' ','-')
where <conditions>;
```
In case you want this done automatically, you'll need to write a trigger as Walter\_Ritzel suggests.
```
delimiter //
CREATE TRIGGER auto_set_short_name BEFORE INSERT ON account
FOR EACH ROW
BEGIN
SET NEW.`short-name` = REPLACE(LOWER(NEW.`name-displayed`), ' ','-');
END;//
delimiter ;
```
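A quick runnable check of the `REPLACE(LOWER(...))` expression, using SQLite and hyphen-free column names so no backtick quoting is needed; the data is invented:

```python
# Illustrative sketch of the slug-generating UPDATE.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (name_displayed TEXT, short_name TEXT);
INSERT INTO users (name_displayed) VALUES ('John Doe'), ('Mary Jane Watson');
""")
con.execute("UPDATE users SET short_name = REPLACE(LOWER(name_displayed), ' ', '-')")
shorts = [r[0] for r in con.execute("SELECT short_name FROM users")]
print(shorts)  # ['john-doe', 'mary-jane-watson']
```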
|
You could use triggers: [Triggers](https://www.google.com.br/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwiopdnT173MAhUFGZAKHZW2DVAQFggnMAA&url=https%3A%2F%2Fdev.mysql.com%2Fdoc%2Frefman%2F5.5%2Fen%2Ftrigger-syntax.html&usg=AFQjCNHj2JF0L2PSERatEbgVdbqVEFy1iQ&sig2=BboKED4pgK-H_zEVjCGKBA&bvm=bv.121070826,d.Y2Ia)
A trigger Before Insert/Update could solve that easily.
```
delimiter //
CREATE TRIGGER ins_sum BEFORE INSERT ON table
FOR EACH ROW
begin
SET New.`short-name` = REPLACE(LOWER(NEW.`name-displayed`), ' ','-') ;
end;
//
```
Note that MySQL does not support a single `BEFORE INSERT OR UPDATE` trigger, so create a separate `BEFORE UPDATE` trigger with the same body. Also, quote the column names with backticks (`` ` ``) so the `-` is not interpreted as a minus sign.
|
MySQL: Modify column based on column values in same table
|
[
"",
"mysql",
"sql",
"multiple-columns",
""
] |
```
column name data type
wp_stime DATE
```
QUERY:
```
select wp_stime from workpaths;
```
OUTPUT:
```
29-FEB-12
29-FEB-12
24-FEB-12
24-FEB-12
31-OCT-11
12-DEC-11
12-JAN-11
19-OCT-11
19-OCT-11
11-AUG-11
19-OCT-11
21-NOV-11
28-JUL-11
02-AUG-11
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
01-MAR-12
```
The question is how can I select based on 9AM to 5PM of wp\_stime?
QUERY 2 :
```
select to_char(wp_stime, 'HH24:MI:SS') from workpaths;
```
OUTPUT:
```
00:09:20
00:10:17
00:11:26
00:42:50
00:44:32
00:45:35
00:48:47
00:51:45
00:53:01
00:56:43
01:31:39
01:34:01
14:49:30
01:53:32
02:06:37
02:36:54
02:51:12
```
|
```
select * from workpaths where to_char(wp_stime,'hh24') between 9 and 16;
```
This should help. Oracle extracts the hour part of your date field as a string and, on seeing that you are comparing it with numbers, implicitly converts it to a number. Thus you can compare hours. Effectively, this query returns dates whose time is at or after 9 AM and before 5 PM.
**EDIT :**
17 is replaced with 16, so that values till 16:59:59 would be considered.
**EDIT 2 :**
To explicitly perform string to numeric casting :
```
select * from workpaths where to_number(to_char(wp_stime,'hh24')) between 9 and 16;
```
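For a portable illustration of the same hour-range filter, here is a SQLite sketch where `strftime('%H', ...)` plays the role of `TO_CHAR(..., 'hh24')`; the timestamps are invented boundary cases:

```python
# Illustrative sketch: keep rows whose hour is between 9 and 16 inclusive,
# i.e. times from 09:00:00 up to 16:59:59.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE workpaths (wp_stime TEXT);
INSERT INTO workpaths VALUES
  ('2016-05-02 08:59:59'), ('2016-05-02 09:00:00'),
  ('2016-05-02 16:59:59'), ('2016-05-02 17:00:00');
""")
rows = [r[0] for r in con.execute("""
SELECT wp_stime FROM workpaths
WHERE CAST(strftime('%H', wp_stime) AS INTEGER) BETWEEN 9 AND 16;
""")]
print(rows)  # ['2016-05-02 09:00:00', '2016-05-02 16:59:59']
```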
|
trunc(wp\_stime) will return the same date, with the time portion truncated down to 00:00:00. Then wp\_stime - trunc(wp\_stime) will return the "fractional part" (the time portion of wp\_stime). This is expressed as a number, in days. You want this fractional part to be between 9 and 17 hours, or 9/24 and 17/24 days.
Here is the query, preceded by test data in the WITH clause. For you, with the table existing already, the complete query is just the last line in my code:
```
with workpaths(wp_stime) as (
select to_date('02-MAY-2016 13:30:44', 'dd-mon-yyyy hh24:mi:ss') from dual union all
select to_date('15-FEB-2013 17:43:00', 'dd-mon-yyyy hh24:mi:ss') from dual union all
select to_date('21-DEC-2015 2:15:27', 'dd-mon-yyyy hh24:mi:ss') from dual union all
select to_date('02-JAN-2016 10:22:09', 'dd-mon-yyyy hh24:mi:ss') from dual
)
select wp_stime from workpaths where wp_stime - trunc(wp_stime) between 9/24 and 17/24;
```
**Output**:
```
WP_STIME
------------------
02-MAY-16 13:30:44
02-JAN-16 10:22:09
```
|
oracle database: select between certain time of the day
|
[
"",
"sql",
"oracle",
""
] |
I have following columns in my table:
```
name | columnA | columnB
```
And I am trying to invoke the following query:
```
SELECT name, columnA + columnB AS price
FROM house
WHERE NOT (columnA IS NULL OR columnB IS NULL)
GROUP BY name
ORDER BY price
```
Which throws me:
"house.columnA needs to be in GROUP BY clause", and I am not sure how to interpret that.
What I want to do, is to receive the table, where I will have `name` of `house`, and column `price`, which will equal to `columnA + columnB`, only if both of the columns are not null. And I would like to sort it by the calculated price.
Where am I doing a mistake?
|
There are two options:
Option 1 - the group by is not needed. This will happen in case there is a single row for each name, in this case:
```
SELECT name,columnA+columnB as price
FROM house
WHERE columnA is not null
AND columnB is not null
ORDER BY price
```
Option 2 - the group by is needed, and that means you have more then 1 row for each name, and in this case you should use `SUM` :
```
SELECT name,sum(columnA+columnB) as price
FROM house
WHERE columnA is not null
AND columnB is not null
GROUP BY name
ORDER BY price
```
|
I might be misunderstanding you, but I think what you need is a basic SQL query without a GROUP BY.
Also, SUM is a function that lets you sum values from different rows; creating the sum of values from the same row is trivial.
This is how the aggregate function SUM could be used to calculate the price for all houses per street:
```
select street, sum(columnA + columnB) as price_per_street
from houses
where columnA is not null and columnB is not null
group by street
order by price;
```
this should be what you are looking for:
```
SELECT name, columnA + columnB AS price
FROM house
WHERE columnA IS NOT NULL AND columnB IS NOT NULL
ORDER BY price
```
|
Issue with SUM of columns and group by
|
[
"",
"sql",
"postgresql",
""
] |
I have a date in a column like this (`2016-06-01 21:50:00.000`) - `YYYY-MM-DD HH:MM:SS`
I am trying to convert it to this --> (`01/06/2016 21:50:00.000`) `DD/MM/YYYY HH:MM:SS`
I've tried the following;
```
SELECT CONVERT(VARCHAR(30), CONVERT(DATETIME, MYDATECOLUMN, 101), 103)
FROM MYTABLE
```
**Time part is missing**.
Can anyone please suggest how to achieve this?
I don't have permission to change data in the table
|
Try the code below:
```
select CONVERT(VARCHAR(10), MYDATECOLUMN, 103) + ' ' + convert(VARCHAR(8), MYDATECOLUMN, 14)
FROM MYTABLE
```
If you want to show milliseconds as well, change the varchar size from 8 to 12 in the second part:
```
select CONVERT(VARCHAR(10), MYDATECOLUMN, 103) + ' ' + convert(VARCHAR(12), MYDATECOLUMN, 14)
FROM MYTABLE
```
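As a cross-check of the target format, here is a SQLite sketch; the `CONVERT` styles above are SQL Server specific, but `strftime` produces the same `DD/MM/YYYY hh:mm:ss` shape:

```python
# Illustrative sketch: reformat an ISO datetime as DD/MM/YYYY HH:MM:SS.
import sqlite3

con = sqlite3.connect(":memory:")
out = con.execute(
    "SELECT strftime('%d/%m/%Y %H:%M:%S', '2016-06-01 21:50:00')"
).fetchone()[0]
print(out)  # 01/06/2016 21:50:00
```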
|
You can use `103` for the date part and then `CAST` the date\_column to `TIME` for getting the time part.
**QUERY**
```
SELECT CONVERT(VARCHAR(25), GETDATE(), 103) +
' ' + CAST(CAST(GETDATE() AS TIME) AS VARCHAR(20));
```
**Result**
```
02/05/2016 12:43:05.9930000
```
Or if you want to remove the milliseconds part,
**QUERY**
```
SELECT CONVERT(VARCHAR(25), GETDATE(), 103) +
' ' + CONVERT(VARCHAR(8), GETDATE(), 108);
```
**Result**
```
02/05/2016 12:43:30
```
|
Time part is missing when using convert
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
""
] |
I have been trying to get a subquery to work, but I'm thinking it needs to be a join instead. I'm new to databases and having a hard time wrapping my head around just how it works.
I need to collect rows from userprofile based on three columns in connect. What I have, which doesn't work, is this:
```
SELECT * FROM userprofile WHERE user_id IN
(SELECT connect.user_id, connect.connect_to_id FROM connect WHERE
(user_id = '1' OR connect_to_id = '1') AND is_connected = '1' );
```
Because the second SELECT has two columns listed it errors with
```
#1241 - Operand should contain 1 column(s)
```
Having just one column listed in the subquery doesn't return all entries, only the ones that meet the first test.
Does this have to be done with JOIN and if so how? Every join that I try just results in all entries from the userprofile table being returned.
Thanks for your help.
\*UPDATE
I have two queries working now.
```
SELECT u.* FROM userprofile u
JOIN connect c ON (u.user_id = c.user_id OR u.user_id = c.connect_to_id) WHERE
((c.user_id = '1' OR c.connect_to_id = '1') AND c.is_connected = '1') AND u.user_id != '1'
GROUP BY u.id;
```
And
```
SELECT u.* FROM userprofile u WHERE u.user_id IN
(SELECT connect.user_id FROM connect WHERE connect.connect_to_id = '1' AND connect.is_connected = '1'
UNION ALL
SELECT connect.connect_to_id FROM connect WHERE connect.user_id = '1' AND connect.is_connected = '1')
```
Over 10,000 calls measured with microtime, the second version using the subquery was a tad faster:
```
Join Query:4.6199560165405
Sub Query:4.0528790950775
```
I then decided to do a test to see why and discovered that the GROUP BY command is slowing down the JOIN considerably. So with the GROUP BY removed I get these numbers.
```
Join Query:2.6410460472107
Sub Query:4.1510140895844
```
As it turns out I don't really need the GROUP BY as I'm checking that u.user\_id != '1' and this stops the same problem I was having.
So the question is: is using GROUP BY frowned upon?
|
It can be done either via a `join` or a sub-query. Typically with MySQL joins are more efficient because in many cases MySQL is unable to use indexes with sub-queries. So, in your case the query with a join would be:
```
SELECT u.*
FROM
userprofile u
JOIN connect c ON (u.user_id = c.user_id OR u.user_id = c.connect_to_id)
WHERE
(c.user_id = '1' OR c.connect_to_id = '1') AND
c.is_connected = '1'
```
|
I think you have two problems. One of them is the two-columns issue; the other is a Cartesian product.
Try this:
```
SELECT *
FROM userprofile up
WHERE up.user_id IN (SELECT connect.user_id
FROM connect
WHERE connect.connect_to_id = '1'
UNION ALL
SELECT connect.connect_to_id
FROM connect
WHERE connect.user_id = '1')
AND is_connected = '1' ;
```
|
MYSQL Sub Query with two columns
|
[
"",
"mysql",
"sql",
""
] |
I have a table say,
```
column1 column2
a apple
a ball
a boy
b apple
b eagle
b orange
c bat
c ball
c cork
```
Now I would like to fetch column1 from the rows that don't contain 'apple', and also ignore a column1 value entirely if any of its rows has 'apple'. So in the table above only 'c' must be returned.
I am kind of new to Oracle SQL and I know `Select column1 from table where column2 != 'apple'` will not work. I need some help with this please.
|
Assuming that `column1` is `NOT NULL` you could use:
```
SELECT DISTINCT t.column1
FROM table_name t
WHERE t.column1 NOT IN (SELECT column1
FROM table_name
WHERE column2 = 'apple');
```
To get all columns and rows change `DISTINCT t.column1` to `*`.
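Here is the same `NOT IN` query run on the question's sample data via SQLite, confirming that only `c` is returned:

```python
# Runnable check of the NOT IN approach on the question's sample rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (column1 TEXT, column2 TEXT);
INSERT INTO t VALUES ('a','apple'),('a','ball'),('a','boy'),
                     ('b','apple'),('b','eagle'),('b','orange'),
                     ('c','bat'),('c','ball'),('c','cork');
""")
rows = [r[0] for r in con.execute("""
SELECT DISTINCT column1 FROM t
WHERE column1 NOT IN (SELECT column1 FROM t WHERE column2 = 'apple');
""")]
print(rows)  # ['c']
```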
|
You could use `DISTINCT` with `NOT IN` in following:
**QUERY 1 using NOT IN**
```
select distinct col1
from t
where col1 not in (select col1 from t where col2 = 'apple')
```
**QUERY 2 using NOT EXISTS**
As per @jarlh comment you could use `NOT EXISTS` in following:
```
select distinct col1
from t t1
where not exists (select 1 from t t2 where t2.col2 = 'apple' and t1.col1 = t2.col1)
```
**SAMPLE DATA**
```
create table t
(
col1 nvarchar(60),
col2 nvarchar(60)
)
insert into t values
('a','apple')
,('a','ball')
,('a','boy')
,('b','apple')
,('b','eagle')
,('b','orange')
,('c','bat')
,('c','ball')
,('c','cork')
```
|
Select statement for Oracle SQL
|
[
"",
"sql",
"oracle",
"select",
""
] |
I am trying to get the minimum price per travel and know which travel-details correspond to that minimum price per travel.
For this I have tried all kind of variations with subqueries, joins etc. but since there is not 1 primary key I cannot figure it out.
What I'm trying to achieve is get the travel with the lowest price, and then included in the record the details of the travel with that lowest price.
```
SELECT travel_id, persons, days, MIN(`price`) AS `price`
FROM travel_details
WHERE 1
GROUP BY `travel_id`
HAVING MIN(price);
```
Simple version of my table-columns, columns are:
`travel_id`, `persons`, `days`, `price`
Those columns together form the primary key.
A travel can be booked for various persons, days and prices. It can also occur that there are multiple price-options for the same combination of `travel_id`, `persons`, and `days`.
E.g.,
```
100, 2, 4, **250**
100, 2, 4, **450**
100, 2, **5**, 450
101, 2, 4, 190
101, 2, 5, 185
```
The first rows being `travel_id` 100, for 2 persons and 4 days.
What I would like to achieve is return:
100, 250, and then with correct corresponding values:
100, 2, 4, 250
101, 2, 5, 185
Now my result just mixes all the other data. When I include those columns in the group by, it will not only group on `travel_id` anymore, but also e.g., on `persons`. Then it will return all combinations for a `travel_id` and `persons`.
Any idea how to approach this?
|
You can use `IN()` to do this:
```
SELECT * FROM travel_details t
WHERE (t.travel_id,t.price) IN(SELECT s.travel_id,min(s.price)
FROM travel_details s
GROUP BY s.travel_id)
GROUP BY t.travel_id; -- this last GROUP BY filters duplicates when there are multiple records per travel with the lowest price.
```
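Row-value `IN` is also supported by SQLite (3.15+), so the approach can be checked directly against the sample rows from the question:

```python
# Runnable check of the tuple-IN approach on the question's sample rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE travel_details (travel_id INT, persons INT, days INT, price INT);
INSERT INTO travel_details VALUES
  (100,2,4,250),(100,2,4,450),(100,2,5,450),(101,2,4,190),(101,2,5,185);
""")
rows = con.execute("""
SELECT t.*
FROM travel_details t
WHERE (t.travel_id, t.price) IN (SELECT s.travel_id, MIN(s.price)
                                 FROM travel_details s
                                 GROUP BY s.travel_id)
GROUP BY t.travel_id
ORDER BY t.travel_id;
""").fetchall()
print(rows)  # [(100, 2, 4, 250), (101, 2, 5, 185)]
```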
|
```
Select a.travel_id, a.persons, a.days, a.price from travel_details a
JOIN (Select travel_id,MIN(Price) as p from travel_details group by travel_id) b
on b.travel_id=a.travel_id and b.p=a.price
```
The above query uses self join. Derived table b will contain travel\_id along with min price.
|
Find minimum value AND include correct column-values
|
[
"",
"mysql",
"sql",
"group-by",
"min",
""
] |
let's say i have this query:
```
SELECT COUNT(datum) FROM unique_ip WHERE (datum>='2016-02-01' AND datum<='2016-02-29')
```
I want to do a WHERE LIKE so only the month has to be specific like 01, 02 etc. So my query is only looking for this: \_\_\_\_-02-\_\_ . How can i use WHERE LIKE in this case?
This is my Database scheme:
* ip - INT UNSIGNED - PK NOT NULL
* datum - DATE - NOT NULL (datum means date)
* Telling - INT - NOT NULL DEFAULT = 1 (Telling meaning counting)
i'd be glad to supply you with more information if needed.
|
You can use the `MONTH()` function, e.g. `SELECT COUNT(datum) FROM unique_ip WHERE MONTH(datum) = 2;`, or a pattern match such as `SELECT COUNT(datum) FROM unique_ip WHERE datum LIKE '____-02-__';`.
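As a runnable illustration, the month-extraction idea looks like this on SQLite, where `strftime('%m', ...)` plays the role of MySQL's `MONTH()`; the dates are invented:

```python
# Illustrative sketch: count rows whose month is February.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE unique_ip (datum TEXT);
INSERT INTO unique_ip VALUES ('2016-02-01'), ('2016-02-15'), ('2016-03-01');
""")
cnt = con.execute(
    "SELECT COUNT(datum) FROM unique_ip WHERE strftime('%m', datum) = '02'"
).fetchone()[0]
print(cnt)  # 2
```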
|
Use [MONTH()](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_month) function.
For your query.
```
$month1 = '02';
$month2 = '02';
SELECT COUNT(datum) FROM unique_ip WHERE (MONTH(datum)>='$month1' AND MONTH(datum)<='$month2')
```
|
Mysql only WHERE LIKE on month
|
[
"",
"sql",
""
] |
I'm having difficulty finding a solution to this problem, as searching for merging SQL columns or adding columns from table gives a wide variety of results I don't want. It's much easier for me to illustrate with an example.
I have two tables:
**Table 1**
```
ColA ColB ColC
0 A AL
1 B DZ
```
**Table 2**
```
ColA ColB ColC
2 C IS
3 D KA
```
I want to merge these tables so that the matching columns are combined, giving a new table with the same structure and all of the values. The output would be:
**Output**
```
ColA ColB ColC
0 A AL
1 B DZ
2 C IS
3 D KA
```
The issue is that I want the distinct values across these columns from two similarly structured tables. I cannot see how to use a join for this: if I join on any one column, the other values are lost, and a multiple-field join doesn't seem to work.
|
If you want to keep duplicate rows you could use `UNION ALL` and if you want to remove duplicate rows from result set you could use `UNION` in following:
```
SELECT ColA, ColB, ColC
FROM Table1
UNION ALL
SELECT ColA, ColB, ColC
FROM Table2
```
*Note that `UNION ALL` generally performs faster than `UNION`, since it skips the duplicate-removal step.*
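Here is the `UNION ALL` query run on the question's two tables via SQLite:

```python
# Runnable check: UNION ALL stacks the rows of both tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (ColA INT, ColB TEXT, ColC TEXT);
CREATE TABLE table2 (ColA INT, ColB TEXT, ColC TEXT);
INSERT INTO table1 VALUES (0,'A','AL'), (1,'B','DZ');
INSERT INTO table2 VALUES (2,'C','IS'), (3,'D','KA');
""")
rows = con.execute("""
SELECT ColA, ColB, ColC FROM table1
UNION ALL
SELECT ColA, ColB, ColC FROM table2
ORDER BY ColA;
""").fetchall()
print(rows)  # [(0, 'A', 'AL'), (1, 'B', 'DZ'), (2, 'C', 'IS'), (3, 'D', 'KA')]
```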
|
```
select ColA, ColB, ColC from table1
union
select ColA, ColB, ColC from table2
```
|
Merging columns from identical tables in SQL
|
[
"",
"sql",
"select",
""
] |
I have a table of adverts. These adverts have `start` and `end` columns, which are both of the `DATETIME` type.
I need to select ones that are going to start in the next 24 hours, and, separately, ones that are going to end in the next 24 hours. I wrote the query `select * from slides where enabled = 1 and start <= NOW() + interval 24 hour`, which seemed to work at first.
The problem is, it also selects rows from a long time ago. I need it just to select ones starting between now and 24 hours from now.
What am I doing wrong?
|
So, use two comparisons:
```
select s.*
from slides s
where s.enabled = 1 and
s.start <= NOW() + interval 24 hour and
s.start >= NOW();
```
|
This works in Oracle; you can modify it according to the DB you are using.
```
select * from slides where enabled = 1 and start between sysdate and sysdate+1
```
Here sysdate returns the current date and time; adding 1 to it gives the date and time 24 hours from now.
|
Selecting rows with timestamps set to the future
|
[
"",
"mysql",
"sql",
""
] |
I want my MySQL column 'Time' to be updated as column 'End\_Time' **minus** column 'Start\_Time'.
Note: End\_Time and Start\_Time are in DATETIME format.
Thanks
|
Try this method,
```
SELECT SEC_TO_TIME(TIMESTAMPDIFF(second,Start_Time,End_Time)) --include sec
or
SELECT SEC_TO_TIME(TIMESTAMPDIFF(minute,Start_Time,End_Time)*60) --diff in minute
eg
SELECT SEC_TO_TIME(TIMESTAMPDIFF(minute,'2016-05-04 10:00:00','2016-05-04 11:29:00')*60)
```
Here `TIMESTAMPDIFF(minute, ...)` returns the difference in minutes; multiply it by 60 to get seconds, and use `SEC_TO_TIME` to convert the seconds to a time value.
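As a hedged aside, `TIMESTAMPDIFF` and `SEC_TO_TIME` are MySQL functions; on SQLite the same difference can be sketched with `strftime('%s')` and `time(..., 'unixepoch')`:

```python
# Illustrative sketch: seconds between two datetimes, formatted HH:MM:SS.
import sqlite3

con = sqlite3.connect(":memory:")
out = con.execute("""
SELECT time(strftime('%s', '2016-05-04 11:29:00')
            - strftime('%s', '2016-05-04 10:00:00'), 'unixepoch');
""").fetchone()[0]
print(out)  # 01:29:00
```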
|
Use Datediff function in mysql
[See here for DATEDIFF](http://www.w3schools.com/sql/func_datediff_mysql.asp)
|
set column values equals to columnB minus columnA
|
[
"",
"mysql",
"sql",
""
] |
I have a table like:
```
Name Size
--------------------------------------
backup_20160426000000.comp.trn 1
backup_20160426001000.comp.trn 2
backup_20160426002000.comp.trn 4
(..)
backup_20160426230000.comp.trn 4
backup_20160426231000.comp.trn 5
```
I need to be able to GROUP the text BY "hour" (20160426000000 would be Hour 0, 20160426010000 would be hour 1, etc..) and then sum up the total size.
Output should be:
```
backup_20160426000000.comp.trn 7
(..)
backup_20160426230000.comp.trn 9
```
Currently I have:
```
SELECT
SUBSTRING(dbo.CLName.NAME, PATINDEX('%2016%', dbo.CLName.NAME), 14), size
FROM
dbo.CLName
GROUP BY
substring(Name, 1, 15)
```
|
Since all the strings have the same format, you could just take them apart and reconstruct them using `substring`, as you have partially done. Note that the hour occupies characters 16-17 of the name, so the grouping prefix needs to be 17 characters long:
```
SELECT
    SUBSTRING(name, 1, 17) + '0000.comp.trn', SUM(size)
FROM
    dbo.CLName
GROUP BY
    SUBSTRING(name, 1, 17)
```
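As a runnable cross-check, here is the fixed-prefix grouping idea in SQLite (`substr` in place of `SUBSTRING`, `||` in place of `+`). Characters 1-17 cover the prefix up to and including the hour; the rows reproduce the question's example sizes:

```python
# Runnable check: group backup files by hour and sum their sizes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE clname (name TEXT, size INT);
INSERT INTO clname VALUES
  ('backup_20160426000000.comp.trn', 1),
  ('backup_20160426001000.comp.trn', 2),
  ('backup_20160426002000.comp.trn', 4),
  ('backup_20160426230000.comp.trn', 4),
  ('backup_20160426231000.comp.trn', 5);
""")
rows = con.execute("""
SELECT substr(name, 1, 17) || '0000.comp.trn' AS hour_file, SUM(size)
FROM clname
GROUP BY substr(name, 1, 17)
ORDER BY hour_file;
""").fetchall()
print(rows)  # [('backup_20160426000000.comp.trn', 7), ('backup_20160426230000.comp.trn', 9)]
```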
|
Use `SUBSTRING` function to get the hour.
```
SELECT MIN(Name) AS Name, SUM(Size) AS Size_Sum
FROM TblBackup
GROUP BY SUBSTRING(Name, 16, 2)
```
Output Result:
```
Name Size_Sum
--------------------------------------------
backup_20160426000000.comp.trn 7
backup_20160426230000.comp.trn 9
```
|
SQL Server : grouping on partial text matching
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
"group-by",
""
] |
I have a table of UserIds and rolenames.
For example:
```
UserId Rolename
1 Admin
1 Editor
1 Other
2 Admin
3 Other
```
I want to return a single row per user containing `UserId, IsAdmin, IsEditor`, where the latter two columns are booleans representing whether or not the user has the "Admin" role or "Editor" role.
From the above example I would get the following output:
```
UserId IsAdmin IsEditor
1 True True
2 True False
3 False False
```
Any thoughts? I've been trying all sorts of things with aggregate functions in group by, sub selects etc., but I'm just not getting it.
|
users :
```
UserId UserName
1 amir
2 john
3 sara
```
user roles :
```
UserId RoleName
1 Admin
1 Editor
2 Editor
```
query :
```
select UserId ,
(select count(UserRoles.UserId) from userRoles where userRoles.UserId=users.UserId and RoleName='Admin' ) as IsAdmin ,
(select count(userRoles.UserId) from userRoles where userRoles.UserId=users.UserId and RoleName='Editor' ) as IsEditor
from users;
```
result :
```
UserId IsAdmin IsEditor
1 1 1
2 0 1
3 0 0
```
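The correlated-count idea above can be checked end to end on SQLite with the question's data, using 1/0 in place of True/False:

```python
# Runnable check: one row per user with per-role counts as boolean flags.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (user_id INT);
CREATE TABLE user_roles (user_id INT, role_name TEXT);
INSERT INTO users VALUES (1), (2), (3);
INSERT INTO user_roles VALUES (1,'Admin'),(1,'Editor'),(1,'Other'),
                              (2,'Admin'),(3,'Other');
""")
rows = con.execute("""
SELECT u.user_id,
       (SELECT COUNT(*) FROM user_roles r
         WHERE r.user_id = u.user_id AND r.role_name = 'Admin')  AS is_admin,
       (SELECT COUNT(*) FROM user_roles r
         WHERE r.user_id = u.user_id AND r.role_name = 'Editor') AS is_editor
FROM users u ORDER BY u.user_id;
""").fetchall()
print(rows)  # [(1, 1, 1), (2, 1, 0), (3, 0, 0)]
```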
|
One possible solution:
```
SELECT
UserId,
    CASE WHEN EXISTS (SELECT * FROM UserRoles A WHERE A.UserId = UR.UserId AND A.Rolename = 'Admin') THEN 'True' ELSE 'False' END AS IsAdmin,
    CASE WHEN EXISTS (SELECT * FROM UserRoles E WHERE E.UserId = UR.UserId AND E.Rolename = 'Editor') THEN 'True' ELSE 'False' END AS IsEditor
FROM
UserRoles UR
GROUP BY
UR.UserId
```
Whether or not that's valid syntax depends on what type of SQL you're using - Oracle? You didn't specify.
Another possible solution:
```
SELECT
U.UserId,
CASE WHEN A.UserId IS NOT NULL THEN 'True' ELSE 'False' END AS IsAdmin,
CASE WHEN E.UserId IS NOT NULL THEN 'True' ELSE 'False' END AS IsEditor
FROM
(
SELECT DISTINCT
UserId
FROM
UserRoles UR
) U
LEFT OUTER JOIN UserRoles A ON A.UserId = U.UserId AND A.Rolename = 'Admin'
LEFT OUTER JOIN UserRoles E ON E.UserId = U.UserId AND E.Rolename = 'Editor'
```
These solutions also both assume that you will never have multiple rows for the same user id with the same exact role name. For example, User ID 1 with Admin twice in the table.
|
How to return a single row aggregating data in multiple rows in SQL
|
[
"",
"sql",
"ibm-midrange",
""
] |
Let's say I have a `users` table with two column, `id` and `referer_id`
[](https://i.stack.imgur.com/DReth.png)
If the user was referred by someone else, his referrer will be in `referer_id`. If he signed up by himself, then `referer_id` will be NULL.
I want to count how many referred users a user has.
I have a query like this
```
SELECT `referer_id`,count(`referer_id`) FROM `users`
GROUP BY `referer_id`
```
[](https://i.stack.imgur.com/CRJZ6.png)
As you can see the NULL count is 0, but I have a lot of users who were not referred by anybody. How can I solve this?
|
I don't like this; I think there's a more elegant solution out there, but it works and may help you find that better solution.
```
select
t1.id,
ifnull(t3.ct, 0)
from
temp t1
left join
(select
t2.referer_id,
count(t2.referer_id) as ct
from temp t2
group by t2.referer_id) t3
on t1.id = t3.referer_id;
```
With a little more thought, here's an option that avoids the subselect:
```
select t1.id, ifnull(count(t2.referer_id), 0)
from temp t1
left join temp t2 on t1.id = t2.referer_id
group by t1.id;
```
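Here is the join form run on SQLite with invented ids; users 2 and 3 have no referrals and still appear with 0 (a `COUNT` over a LEFT JOIN already yields 0, so the `ifnull` above is a belt-and-braces guard):

```python
# Runnable check: per-user referral counts via a LEFT self-join.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INT, referer_id INT);
INSERT INTO users VALUES (1, NULL), (2, 1), (3, 1);
""")
rows = con.execute("""
SELECT t1.id, COUNT(t2.referer_id)
FROM users t1
LEFT JOIN users t2 ON t1.id = t2.referer_id
GROUP BY t1.id
ORDER BY t1.id;
""").fetchall()
print(rows)  # [(1, 2), (2, 0), (3, 0)]
```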
|
Even though I can't explain what caused this issue, I figured out another solution, like this ;)
```
SELECT `referer_id`,
if(`referer_id` is null, @num := @num + 1, count(`referer_id`)) as referer_id_cnt
FROM `users`, (select @num := 0) tmp
GROUP BY `referer_id`
```
Hmm, what I wrote above is definitely not a proper answer. Actually, this will help you:
```
SELECT `referer_id`,count(1) FROM `users`
GROUP BY `referer_id`
```
And take a look of this link [How to count NULL values in MySQL?](https://stackoverflow.com/questions/3057746/how-to-count-null-values-in-mysql)
|
MySQL Select return wrong NULL value with COUNT
|
[
"",
"mysql",
"sql",
""
] |
These are my tables: o, p and v corresponding to order, person and village
v corresponds to village
```
vk | vname
1 | v1
2 | v2
3 | v3
```
p corresponds to person in the village. A and B live in V1. C lives in V2.
```
pk | pname | vk
1 | A | 1
2 | B | 1
3 | C | 2
```
o corresponds to order placed by person. Person A from V1 has placed 3 orders, person C from V2 has placed 1 order.
```
ok | pk | cost
1 | 1 | 10
2 | 1 | 100
3 | 1 | 20
4 | 3 | 200
```
I am trying to find out the average number of orders per person in a village.
I have formed the query but it excludes results from people from those villages who have not placed any orders at all.
My query:
```
SELECT
v.vname,
count(ok) / count(DISTINCT o.pk) AS avg
FROM
v,
o,
p
WHERE
p.pk = o.pk
AND p.vk = v.vk
GROUP BY
v.vk;
```
What I want:
```
vname | avg
v1 | 1.5
v2 | 1
v3 | 0
```
What I am getting:
```
vname | avg
v1 | 3
v2 | 2
```
It is not considering the third village, where no orders have taken place and no person exists, but I want it to be shown. Also, person B from village 1 has placed no orders, so only the orders from person A in village 1 are taken into consideration.
With 3 orders from person A from V1 and 0 from person 2 from V1, the average must be 3/2 but it excludes the other person and shows 3/1 which is 3.
Any leads as to where I am going wrong and how I can perfect the query?
|
Use a `LEFT JOIN` together with an `INNER JOIN` (note the join order: `p` must be joined before it is referenced):

```
SELECT v.vname, count(ok)/count(distinct o.pk) AS avg
FROM v
inner join p on p.vk = v.vk
left join o on o.pk = p.pk
group by v.vname
```
|
Try the `LEFT JOIN` instead of `archaic comma join`
```
SELECT v.vname, NULLIF(count(ok)/count(distinct o.pk),0) AS avg
FROM v
LEFT JOIN p ON p.vk = v.vk
LEFT JOIN o ON o.pk = p.pk
GROUP BY v.vname
```
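The `LEFT JOIN` fix can be sketched and checked in SQLite via Python with the question's data; note the `MAX(..., 1)` guard is my own addition, to return 0 instead of NULL for a village with no people:

```python
import sqlite3

# Recreate the question's three tables with its sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE v (vk INTEGER, vname TEXT);
INSERT INTO v VALUES (1,'v1'), (2,'v2'), (3,'v3');
CREATE TABLE p (pk INTEGER, pname TEXT, vk INTEGER);
INSERT INTO p VALUES (1,'A',1), (2,'B',1), (3,'C',2);
CREATE TABLE o (ok INTEGER, pk INTEGER, cost INTEGER);
INSERT INTO o VALUES (1,1,10), (2,1,100), (3,1,20), (4,3,200);
""")

# LEFT JOINs keep villages and persons with no orders; dividing the
# order count by the distinct-person count gives orders per person.
rows = conn.execute("""
SELECT v.vname,
       COUNT(o.ok) * 1.0 / MAX(COUNT(DISTINCT p.pk), 1) AS avg_orders
FROM v
LEFT JOIN p ON p.vk = v.vk
LEFT JOIN o ON o.pk = p.pk
GROUP BY v.vk, v.vname
ORDER BY v.vk
""").fetchall()
print(rows)
```

This reproduces the asker's desired output: 1.5 for v1, 1 for v2, and 0 for the empty village v3.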
|
Query automatically excluding records with no entries
|
[
"",
"mysql",
"sql",
""
] |
I'm having an issue where I am running a script against a database to get the average difference between multiple VARCHARs that need to be converted to DateTimes, and then take the average between all the results.
My code is:
```
SELECT YEAR(b.DateAcknow),AVG(datediff(dd,convert(datetime,b.DateAssign),
convert(datetime,b.DateResolv))) as DayAverage,
AVG(datediff(hh,convert(datetime,b.TimeAcknow),
convert(datetime,b.TimeResolv))) as HourAverage
FROM table AS b
WHERE (x = y)
AND YEAR(DateResolv) >= 2006
AND YEAR(DateResolv) < 2016
AND b.resolution <>''
GROUP BY YEAR(b.DateAcknow)
ORDER BY YEAR(b.DateAcknow)
```
The result I'm getting does not make sense; worse, it includes 1900, which falls outside the parameters of my WHERE clause.
Here it is:
```
NULL 42 NULL
1900 0 12
2006 7 -5
2007 6 1
2008 7 1
2009 4 1
2010 2 0
2011 2 0
2012 2 0
2013 2 0
2014 2 0
2015 2 0
```
Am I converting the `VARCHAR`s wrong?
I doubt that the average for thousands of entries from 2010-2015 are all the same 2 days and 0 hours too, so either I'm doing something wrong or the data is bad.
|
You are filtering by `DateResolv` but grouping by `DateAcknow`.
Filter and group by the same field, and the NULL row and the values outside of the range should disappear.
|
You'll probably want to take away the aggregate part and just run:
```
SELECT YEAR(b.DateAcknow)
, convert(datetime,b.DateAssign) AS DateAssignDateTime
, convert(datetime,b.DateResolv) AS DateResolveDateTime
, datediff(dd,convert(datetime,b.DateAssign), convert(datetime,b.DateResolv)) AS AssignResolveDayDiff
, convert(datetime,b.TimeAcknow) AS TimeAcknowDateTime
, convert(datetime,b.TimeResolv) AS TimeResolveDateTime
, datediff(hh,convert(datetime,b.TimeAcknow), convert(datetime,b.TimeResolv)) AS AcknowResolveHourDiff
FROM table AS b
WHERE (x = y)
AND YEAR(DateAcknow) >= 2006
AND YEAR(DateAcknow) < 2016
AND b.resolution <>''
ORDER BY YEAR(b.DateAcknow)
```
to ensure that all of your conversions make sense first. Then you will have a better understanding of what you're actually averaging.
Afterwards, if it all checks out, then your query should work fine (though, do check that mxix' change from
```
...
AND YEAR(DateResolv) >= 2006
AND YEAR(DateResolv) < 2016
...
```
to
```
...
AND YEAR(b.DateAcknow) >= 2006
AND YEAR(b.DateAcknow) < 2016
...
```
makes sense for you).
If you're looking to increase the precision of the output, then try converting your datediffs like so:
Old: `AVG(datediff(dd,convert(datetime,b.DateAssign), convert(datetime,b.DateResolv)))`
New: `AVG(Convert(Decimal(10, 5), datediff(dd,convert(datetime,b.DateAssign), convert(datetime,b.DateResolv))))`
Your old query is averaging days, rounded to the nearest integer value, giving you values like '2'. This new adjustment will give you answers like "1.51235" days instead.
Since there's 100k records of differences (both plus and minus), there's a good chance the averages will be close to zero if they follow a normal or uniform distribution. Also try:
`AVG(Convert(Decimal(10, 5), ABS(datediff(dd,convert(datetime,b.DateAssign), convert(datetime,b.DateResolv)))))`
if you want absolute difference instead. If your old data had values "5, -3, 4, -1, 3", then the old method would produce an average of 1 (integer division of the sum 8 by 5), but if you had the "ABS" function working on them, it would change the values to "5, 3, 4, 1, 3" and will move your resulting average in the ++ direction (here, it changes to "3", or "3.2", if you did your decimal conversion too).
|
How do I calculate averages of dates formatted as VARCHAR from multiple rows?
|
[
"",
"sql",
"datetime",
"varchar",
"datediff",
""
] |
From what I have read about triggers and how they work, I thought that this trigger would insert data into the relations related to my table phonenumber after I insert values into it. I'm using dbms\_random to create a random 5 digit usageID not already in the usage table (or at least that's what I thought it would do).
```
create or replace TRIGGER addPhoneLine
AFTER INSERT ON phoneNumber
REFERENCING NEW AS NEW
FOR EACH ROW
DECLARE
primNum varchar(12);
acctNum numeric(5);
NEWusageID_new varchar(5);
BEGIN
if :new.primaryNumber is not NULL then
select acctID, primaryNumber into acctNum, primNum
from account A
where A.primaryNumber = :new.primaryNumber;
select to_char(round(dbms_random.value(10000, 99999),0)) into
NEWusageID_new from dual
minus
select usageID from usage;
INSERT INTO acct_num VALUES
(acctNum, primNum, :new.phonenumber);
INSERT INTO phone_usage VALUES
(NEWusageID_new, :new.phonenumber);
end if;
END;
```
But it throws the following errors when I attempt to insert into the phoneNumber table:
> ORA-01403: no data found
>
> ORA-06512: at "ADDPHONELINE", line 9
>
> ORA-04088: error during execution of trigger 'ADDPHONELINE'
The relevant tables were created as follows:
```
create table phoneNumber(phoneNumber varchar(12) PRIMARY KEY, primaryNumber varchar(12));
create table acct_num(acctID numeric(5) references ACCOUNT, primaryNumber varchar(12) references ACCOUNT, phoneNumber varchar(12) references phoneNumber);
create table phone_usage(usageID varchar(5) references USAGE, phoneNumber varchar(12) references PHONENUMBER)
```
|
Your trigger is based on an insert on table `phoneNumber`, and the error "No Data Found" is thrown when a SELECT INTO is used and it doesn't find any rows to select.
So the problem must be this statement.
```
select acctID, primaryNumber into acctNum, primNum
from account A
where A.primaryNumber = :new.primaryNumber;
```
Are you certain that the `:new.primaryNumber` exists in the `account` table when this trigger is active?
Maybe you are only populating the `account` table after this insert is complete?
|
Here I show you what will happen when dbms\_random gives a value (NEWusageID\_new) that already exists in usage:
```
DECLARE
i NUMBER;
BEGIN
SELECT 15 INTO i FROM DUAL
MINUS
SELECT 15 FROM DUAL;
END;
ORA-01403: no data found
ORA-06512: in line 4
```
Use a sequence instead.
|
Why doesn't this trigger work properly?
|
[
"",
"sql",
"oracle",
"plsql",
"triggers",
"oracle-sqldeveloper",
""
] |
I'm currently using the following SQL query which is returning 25 rows. How can I modify it to ignore the first row:
```
SELECT fiscal_year, SUM(total_sales) as sum_of_year, AVG(SUM(total_sales))
OVER () as avg_sum
FROM sales_report
GROUP BY fiscal_year
ORDER BY fiscal_year ASC
```
I'm using SQL Server 2008.
Thanks.
|
You can use `EXCEPT` in SQL Server 2008.
```
SELECT fiscal_year, SUM(total_sales) as sum_of_year, AVG(SUM(total_sales))
OVER () as avg_sum
FROM sales_report
GROUP BY fiscal_year
EXCEPT
SELECT TOP 1 fiscal_year, SUM(total_sales) as sum_of_year, AVG(SUM(total_sales))
OVER () as avg_sum
FROM sales_report
GROUP BY fiscal_year
ORDER BY fiscal_year ASC
```
For SQL Server 2012 and above, you can use `FETCH OFFSET`
|
assuming this is exactly how you'd query it, then:

```
SELECT fiscal_year, SUM(total_sales) as sum_of_year, AVG(SUM(total_sales)) OVER () as avg_sum
FROM sales_report
WHERE fiscal_year <> (SELECT MIN(fiscal_year) FROM sales_report)
GROUP BY fiscal_year
ORDER BY fiscal_year ASC
```
And then you can remove the "order by".
Works on all versions
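A minimal sketch of the min-year filter, using SQLite from Python with made-up data (the windowed `AVG(...) OVER ()` is left out here; only the "exclude the earliest year" part is being checked):

```python
import sqlite3

# Hypothetical sales rows; 2013 is the earliest fiscal year and should
# be excluded from the aggregation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_report (fiscal_year INTEGER, total_sales INTEGER);
INSERT INTO sales_report VALUES
  (2013, 100), (2014, 200), (2014, 50), (2015, 300);
""")

# The subquery picks the minimum year once; the WHERE clause drops it.
rows = conn.execute("""
SELECT fiscal_year, SUM(total_sales) AS sum_of_year
FROM sales_report
WHERE fiscal_year <> (SELECT MIN(fiscal_year) FROM sales_report)
GROUP BY fiscal_year
ORDER BY fiscal_year
""").fetchall()
print(rows)
```

Only 2014 and 2015 survive; the 2013 row never enters the GROUP BY.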
|
Select all rows and ignore the first row
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
Objective:
`Remove all` entries `where` `LastLogin` is `null` `AND` `RegDate` is older than 30 seconds.
I'm trying to use transact SQL to remove an entry from table 'ONE'
```
SELECT *
FROM dbo.ONE
WHERE LastLogin == Null AND RegDate is older than DATETIME.NOW by 30 seconds;
```
I hope the question makes sense. So how would I achieve this?
|
As @Alex K. pointed out check for null and using datediff
```
select * from dbo.ONE where LastLogin is NULL and
datediff(ss,regdate,GetDate())<30
```
|
I'm not 100% sure whether by "older" you mean `<` or `>`, but it should be something like this:
```
DELETE FROM dbo.ONE
WHERE LastLogin IS NULL
AND RegDate < DATEADD(ss, -30, getdate());
```
`getdate()` returns the current time.
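A quick sketch of the delete in SQLite via Python; `datetime('now', '-30 seconds')` stands in for SQL Server's `DATEADD(ss, -30, getdate())`, and the table and rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE one (id INTEGER, last_login TEXT, reg_date TEXT);
INSERT INTO one VALUES
  (1, NULL,  datetime('now', '-60 seconds')),  -- stale, never logged in
  (2, NULL,  datetime('now', '-5 seconds')),   -- still inside the window
  (3, 'set', datetime('now', '-60 seconds'));  -- old but has logged in
""")

# Delete only rows that both have a NULL login AND registered more
# than 30 seconds ago; ISO datetime strings compare chronologically.
conn.execute("""
DELETE FROM one
WHERE last_login IS NULL
  AND reg_date < datetime('now', '-30 seconds')
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM one ORDER BY id")]
print(remaining)
```

Row 1 is removed; rows 2 and 3 each fail one of the two conditions and survive.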
|
How do I Remove an entry which is more than 30 seconds old And has null value
|
[
"",
"sql",
"t-sql",
"datetime",
""
] |
I have a table that holds a list of classes available at a school. Each class can have a number of sessions. And each class can have pupils assigned to it.
What I need to do is get a count of all sessions for each class, as well as the number of students attending the class. I have done the first bit, but if I join to the pupil allocation table, my counts will be wrong.
I have conjured up some fake SQL that you can use.
I'm stuck with efficiently getting a count from the Pupil link table.
```
DECLARE @Class TABLE
(
ClassID INT NOT NULL,
ClassName VARCHAR(20) NOT NULL
)
INSERT INTO @Class VALUES (1, 'English')
INSERT INTO @Class VALUES (2, 'Maths')
DECLARE @ClassSession TABLE
(
ClassSessionID INT NOT NULL,
ClassID INT NOT NULL,
Description VARCHAR(100) NOT NULL
)
INSERT INTO @ClassSession VALUES (1, 1, 'Basic English')
INSERT INTO @ClassSession VALUES (2, 1, 'Advanced English')
INSERT INTO @ClassSession VALUES (3, 1, 'Amazing English')
INSERT INTO @ClassSession VALUES (4, 2, 'Basic English')
INSERT INTO @ClassSession VALUES (5, 2, 'Basic English')
DECLARE @ClassPupil TABLE
(
ClassPupilID INT NOT NULL,
ClassID INT NOT NULL,
PupilID INT NOT NULL -- FK to the Pupils table.
)
INSERT INTO @ClassPupil VALUES (1, 1, 1000)
INSERT INTO @ClassPupil VALUES (2, 1, 1001)
INSERT INTO @ClassPupil VALUES (3, 1, 1002)
INSERT INTO @ClassPupil VALUES (4, 1, 1003)
INSERT INTO @ClassPupil VALUES (5, 1, 1004)
INSERT INTO @ClassPupil VALUES (6, 2, 1005)
INSERT INTO @ClassPupil VALUES (7, 2, 1006)
INSERT INTO @ClassPupil VALUES (8, 2, 1007)
SELECT ClassName, COUNT(*) AS Sessions, '??' AS NumerOfPupils
FROM @Class c
INNER JOIN @ClassSession cs
ON cs.ClassID = c.ClassID
GROUP BY c.ClassID, c.ClassName
```
It can maybe be done with a sub query? Is that the best way?
|
You have two independent dimensions for each class. You need to aggregate them separately:
```
SELECT c.ClassName, cs.Sessions, cp.Pupils
FROM @Class c INNER JOIN
(SELECT ClassId, COUNT(*) as sessions
FROM @ClassSession cs
GROUP BY ClassId
) cs
ON cs.ClassID = c.ClassID INNER JOIN
(SELECT ClassId, COUNT(*) as pupils
FROM @ClassPupil cp
GROUP BY ClassId
) cp
ON cp.ClassId = c.ClassId;
```
|
Another method is to use `CROSS APPLY` to get the count of pupils:
```
SELECT
ClassName, COUNT(*) AS Sessions, cp.NumberOfPupils
FROM @Class c
INNER JOIN @ClassSession cs
ON cs.ClassID = c.ClassID
CROSS APPLY (
SELECT COUNT(*) AS NumberOfPupils
FROM @ClassPupil
WHERE
ClassID = c.ClassID
) cp
GROUP BY c.ClassID, c.ClassName, cp.NumberOfPupils
```
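The pre-aggregation approach can be checked with a small SQLite sketch from Python (simplified, hypothetical versions of the tables; SQLite has no `CROSS APPLY`, so only the subquery variant is shown):

```python
import sqlite3

# Trimmed-down versions of the question's tables: 2 classes, 5
# sessions, 8 pupil links.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class (class_id INTEGER, name TEXT);
INSERT INTO class VALUES (1,'English'), (2,'Maths');
CREATE TABLE class_session (session_id INTEGER, class_id INTEGER);
INSERT INTO class_session VALUES (1,1),(2,1),(3,1),(4,2),(5,2);
CREATE TABLE class_pupil (link_id INTEGER, class_id INTEGER);
INSERT INTO class_pupil VALUES (1,1),(2,1),(3,1),(4,1),(5,1),(6,2),(7,2),(8,2);
""")

# Aggregate each dimension in its own subquery, then join the totals;
# joining the raw tables first would multiply the two counts.
rows = conn.execute("""
SELECT c.name, cs.sessions, cp.pupils
FROM class c
JOIN (SELECT class_id, COUNT(*) AS sessions
      FROM class_session GROUP BY class_id) cs ON cs.class_id = c.class_id
JOIN (SELECT class_id, COUNT(*) AS pupils
      FROM class_pupil GROUP BY class_id) cp ON cp.class_id = c.class_id
ORDER BY c.class_id
""").fetchall()
print(rows)
```

English shows 3 sessions and 5 pupils, Maths 2 and 3, with no cross-product inflation.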
|
COUNT of different tables in GROUP
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
For example, I have a time format: 1000
how do I convert into MySQL Time() 10:00:00
It can also be more complex, since some numbers are 3 digits long, for example:
900 into MySQL Time() 09:00:00
Let me know if this needs more explaining.
|
Use CONVERT() -- (<http://dev.mysql.com/doc/refman/5.7/en/charset-convert.html>)
You'll have to pad it with zeros to show CONVERT() that it is in hours and not minutes.
**Specific to your example:**
```
> SELECT CONVERT(CONCAT('1000','00'), TIME) AS time1;
time1
10:00:00
```
or
```
> SELECT CONVERT(CONCAT(`fieldname`,'00'), TIME) AS time1 FROM `tablename`;
time1
10:00:00
```
|
You just want to use [str\_to\_date()](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date)
Cast your int to a char so it's a string, and pad it with `LPAD` to keep a leading 0 :)
```
SELECT STR_TO_DATE(LPAD(CAST(my_col AS CHAR(25)), 4, '0'), '%H') -- or %k
```
or you can just drop the cast since mysql will do the converting for you
```
SELECT STR_TO_DATE(LPAD(my_col, 4, '0'), '%k')
```
[FIDDLE](http://sqlfiddle.com/#!9/0c07c/1)
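The padding idea can be checked outside MySQL too; here is a hypothetical SQLite sketch via Python, where `printf('%04d', n)` plays the role of `LPAD(n, 4, '0')` and string slicing rebuilds the `HH:MM:SS` form:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Zero-pad each integer to 4 digits, then split it into hour and
# minute parts and append ':00' for the seconds.
rows = conn.execute("""
SELECT n,
       substr(printf('%04d', n), 1, 2) || ':' ||
       substr(printf('%04d', n), 3, 2) || ':00' AS as_time
FROM (SELECT 900 AS n UNION ALL SELECT 1000 UNION ALL SELECT 1530)
ORDER BY n
""").fetchall()
print(rows)
```

900 becomes `09:00:00`, which is exactly the 3-digit case the question worries about.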
|
How to convert int number into time - SQL
|
[
"",
"mysql",
"sql",
""
] |
I have 2 tables
Table1. StudentMaster
```
ROLL CLASS NAME TOTALMARKS STREAMID
---------- ---------- ------------ ---------- ----------
12345 5 Rohit 75 100
12346 7 Suman 95 101
12347 5 Rajib 41 100
12348 10 Rakesh 52 102
12349 10 Himesh 101
12350 7 Mizanur 85 103
42145 5 Mohit 103
```
Table2. Stream
```
STREAMID NAME DURATION FEES
---------- ---------- ---------- ----------
100 electrical 3 4500
102 civil 4 5400
103 mechanical 3 4500
101 ece 2 2500
```
Now I need to get the details of the student who gets the highest marks in each stream.
The output table should look like:
```
Roll Name Stream HighestMarks
---- ------- ------ ------------
12345 Rohit electrical 75
12346 Suman ece 95
12348 Rakesh civil 52
12350 Mizanur mechanical 85
```
Can you please help me with the correct Oracle SQL query to retrieve this? Thanks in advance.
|
I have finally solved this problem in the simplest way possible (I think):
```
select
studentmaster.name,studentmaster.totalmarks highest_marks,stream.name stream
from studentmaster,stream
where studentmaster.streamid=stream.streamid
and totalmarks in(select max(totalmarks) from studentmaster group by streamid);
```
|
You can use this:
```
SELECT *
FROM (SELECT ROW_NUMBER() OVER(PARTITION BY STREAMID ORDER BY TOTALMARKS desc NULLS LAST) AS RANK,
StudentMaster.name,
stream.name AS stream,
totalMarks AS HighestMarks
FROM StudentMaster INNER JOIN STREAM USING (streamId))
WHERE RANK = 1
```
It computes, for every row of the joined tables, the rank of the student in the stream, ordering by mark; the outer query simply filters to get only the students at the first places of every stream.
The ordering is descending, with NULL values placed last.
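A runnable sketch of the `ROW_NUMBER` approach, using SQLite (3.25+ for window functions) from Python with the question's data; the two students with NULL marks are omitted here so the Oracle-only `NULLS LAST` clause isn't needed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE studentmaster (roll INTEGER, name TEXT,
                            totalmarks INTEGER, streamid INTEGER);
INSERT INTO studentmaster VALUES
  (12345,'Rohit',75,100), (12347,'Rajib',41,100),
  (12346,'Suman',95,101), (12348,'Rakesh',52,102),
  (12350,'Mizanur',85,103);
CREATE TABLE stream (streamid INTEGER, name TEXT);
INSERT INTO stream VALUES
  (100,'electrical'),(101,'ece'),(102,'civil'),(103,'mechanical');
""")

# Rank each student within their stream by marks; keep only rank 1.
rows = conn.execute("""
SELECT roll, sname, stream, marks FROM (
  SELECT sm.roll, sm.name AS sname, st.name AS stream,
         sm.totalmarks AS marks,
         ROW_NUMBER() OVER (PARTITION BY sm.streamid
                            ORDER BY sm.totalmarks DESC) AS rn
  FROM studentmaster sm JOIN stream st ON st.streamid = sm.streamid
) WHERE rn = 1 ORDER BY roll
""").fetchall()
print(rows)
```

This reproduces the asker's expected table: one top scorer per stream, without the "marks match the max of some other stream" pitfall of the `IN (SELECT MAX(...))` approach.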
|
Getting Stream wise Highest Marks in SQL
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
""
] |
My query look like:
```
select *
from `games`
inner join `prowizja` on `prowizja`.`id_gry` = `games`.`id`
where `games`.`status` = 2 and `games`.`winner_user_id` != 49
and `games`.`owner_user_id` = 49 or `games`.`join_user_id` = 49
```
The important thing is **`games.winner_user_id != 49`**. But the result of this query is:
[](https://i.stack.imgur.com/jHzvZ.jpg)
Could someone tell me why I get results with winner\_user\_id = 49 when I want only rows where it is not equal?
Thanks
|
You probably have to wrap the last two predicates of the `WHERE` clause in parentheses:
```
select *
from `games`
inner join `prowizja`
on `prowizja`.`id_gry` = `games`.`id`
where `games`.`status` = 2 and
`games`.`winner_user_id` != 49 and
(`games`.`owner_user_id` = 49 or `games`.`join_user_id` = 49)
```
|
`AND` has precedence over `OR`. Hence your query equals:
```
select *
from games
inner join prowizja on prowizja.id_gry = games.id
where (games.status = 2 and games.winner_user_id <> 49 and games.owner_user_id = 49)
or games.join_user_id = 49;
```
where you want it to be
```
select *
from games
inner join prowizja on prowizja.id_gry = games.id
where (games.status = 2 and games.winner_user_id <> 49)
and (games.owner_user_id = 49 or games.join_user_id = 49);
```
Conclusion: Use parentheses whenever mixing `AND` and `OR`.
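The precedence difference is easy to demonstrate; here is a small hypothetical sketch in SQLite via Python running the same WHERE clause with and without parentheses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE games (id INTEGER, status INTEGER, winner_user_id INTEGER,
                    owner_user_id INTEGER, join_user_id INTEGER);
INSERT INTO games VALUES
  (1, 2, 49, 10, 49),   -- won by 49: should be excluded
  (2, 2, 10, 49, 11);   -- owned by 49, won by someone else: keep
""")

# Without parentheses, AND binds first, so the trailing OR lets the
# winner_user_id = 49 row back in.
without_parens = conn.execute("""
SELECT id FROM games
WHERE status = 2 AND winner_user_id <> 49
  AND owner_user_id = 49 OR join_user_id = 49
""").fetchall()

# With parentheses, the winner filter applies to every row.
with_parens = conn.execute("""
SELECT id FROM games
WHERE status = 2 AND winner_user_id <> 49
  AND (owner_user_id = 49 OR join_user_id = 49)
""").fetchall()

print(without_parens, with_parens)
```

Game 1 appears only in the unparenthesized result, which is exactly the bug the asker observed.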
|
Wrong result of (or where query)
|
[
"",
"mysql",
"sql",
""
] |
I currently have a complex SQL query which is inserted into a temp table. The query includes an `OUTER APPLY` as not all returned records will apply to the result set.
I also need to use the `OUTER APPLY` columns in the `WHERE` clause to filter results, but also include the rows which have no match in the `OUTER APPLY`, i.e. all `OUTER APPLY` results = 1 plus the non-matching rows.
This is a simple version of the query layout:
```
INSERT INTO #temp (X, Y, Z, O1, O2)
SELECT
X Y Z
FROM T1
INNER JOIN T2, T etc.
OUTER APPLY (
SELECT O1, O2 FROM XYZ…) OATable
WHERE
OATable.O1 = 1 -- I tried just adding “IN (1, NULL)” but this still excludes the results.
```
Any help would be greatly appreciated.
Platform: SQL Server 2012+
Thank you
|
You can't compare directly to `NULL` because nothing equates to `NULL` (not even `NULL` itself). That precludes using `IN` here. Instead, just use an `OR` statement:
```
INSERT INTO #temp (X, Y, Z, O1, O2)
SELECT
X, Y, Z
FROM T1
INNER JOIN T2 ON ...
OUTER APPLY (SELECT O1, O2 FROM XYZ…) OATable
WHERE
OATable.O1 = 1 OR OATable.O1 IS NULL
```
That assumes that `O1` is a `NOT NULL` column in `XYZ`.
|
I think the answer from Tom H addresses the stated question, but this might be what you are actually looking for:

```
SELECT X, Y, Z, OATable.*
FROM T1
INNER JOIN T2, T etc.
LEFT JOIN XYZ as OATable
    on OATable.O1 = 1
```

With Tom's answer you would need a literal `OATable.O1 IS NULL` check (from my findings), and that is not going to happen if the column is defined as NOT NULL. `OATable.O1 IS NULL` in an OUTER APPLY would only find a literal value of NULL, even if the column allows NULL.
With this answer you get the left side even when there is no match on `OATable.O1 = 1`.
|
Filter OUTER APPLY column in WHERE clause
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
"sql-server-2012",
""
] |
I am developing an Oracle database schema visualizer. So, as a first step, I thought, I will first need to get all the schema details (tables and relationships between tables, constraints also maybe).
To get that information, what is the SQL command that will return the result?
(the `DESCRIBE` command does not return the information about the all the table)
**EDIT #1**:
Actually, what I want to do is to get all the information about all tables as I mentioned (columns, rows, foreign keys, constraints) to store them in a MongoDB database, and then create visualizations (diagrams are not included)
|
I do something similar. I read those things from a SQL Server over OPENQUERY statements directly from Oracle DBs and save results into SQL Server tables to allow analysis of historic comparison schema information and changes.
So what you have to do with the resultsets of the following queries is to store them (regularly) somehow and add some kind of unique / primary key or timestamp, in order to distinguish between your different scans.
Leaving away the SQL Server specific code stuff, those are the basic oracle sql queries I use so far:
```
--Tables
SELECT table_name, owner, Tablespace_name, Num_Rows
FROM all_tables WHERE tablespace_name is not NULL
AND owner not in ('SYS', 'SYSTEM')
ORDER BY owner, table_name;
--Columns
SELECT OWNER, TABLE_NAME, Column_name, Data_type, data_length, data_precision, NULLABLE, character_Set_Name
From all_tab_cols
where USER_GENERATED = 'YES'
AND owner not in ('SYS', 'SYSTEM');
--Indexes
select Owner, index_name, table_name, uniqueness,BLEVEL,STATUS from ALL_INDEXES
WHERE owner not in ('SYS', 'SYSTEM')
--Constraints
select owner, constraint_name, constraint_type, table_name, search_condition, status, index_name, index_owner
From all_constraints
WHERE generated = 'USER NAME'
AND owner not in ('SYS', 'SYSTEM')
--Role Previleges
select grantee, granted_role, admin_option, delegate_option, default_role, common
From DBA_ROLE_PRIVS
--Sys Privileges
select grantee, privilege, admin_option, common
From DBA_SYS_PRIVS
```
|
You could start with:
```
select DBMS_METADATA.GET_DDL ('TABLE', table_name,user) from user_tables;
```
That will reverse engineer all the DDL that would create the tables. It wouldn't give you current table/column statistics though.
Some tools (eg Oracle's Data Modeller / SQL Developer) will allow you to point them at a database and it will reverse engineer the database model into a diagram. That would be a good place to start for the relationships between tables.
|
Oracle : which SQL command to get all details about a table?
|
[
"",
"sql",
"oracle",
"schema",
""
] |
I have 2 tables: `Document_type_de` and `document`. The `document` table stores all the documents, but the type of each document is defined in `document_type_de`, so I need help with a query that finds the count of documents of each type in the `document` table.
columns under document\_type\_de table
```
ID, display name
```
columns under document table
```
documenttypede
```
|
This'll do it:
```
SELECT Type.[Display Name],
COUNT(*) AS [Number of Documents]
FROM Document_type_de Type
JOIN Document D
ON Type.ID = D.documenttypede
GROUP BY Type.[Display Name]
ORDER BY Type.[Display Name]
```
|
This query will give you the count of all the documents grouped by documents\_type\_id. I don't know your tables' primary and foreign key names, so just replace them.

```
SELECT count(d.documents_id)
FROM document d
INNER JOIN Document_type_de dtd ON dtd.document_id = d.document_id
GROUP BY d.documents_type_id
```
Regards.
|
count using inner join from 2 tables
|
[
"",
"sql",
""
] |
Access Query that I need to convert to work under SQL Server:
```
TRANSFORM Sum([T_Leads]![OrderType]='New Order')-1 & " / " & Sum([T_Leads]![OrderType]='Change Order')-1
AS [New / Change]
SELECT Employees.EmployeeName as Name, Count(T_Leads.OrderType) AS Total
FROM Employees INNER JOIN T_Leads ON Employees.EmployeeID = T_Leads.EmployeeID
WHERE (((T_Leads.Date)>Date()-7))
and [Employees.LeadRotation] <> "Inactive"
GROUP BY Employees.EmployeeName
ORDER BY T_Leads.Date
PIVOT T_Leads.Date;
```
The output displays a list of employees currently taking leads (who are not "inactive"). For the column headers, the date is shown for the previous seven days (if a lead was submitted on that day), and two totals are displayed under each date. One for the total number of New Orders received, and another for total number of Change Orders. I've not been able to find any examples that generate the date columns and display two values
under each column.
The Access query currently produces output like this in a GridView:
```
+-------------+-------+----------+----------+----------+----------+-----------+
| Name | Total | 4/5/2016 | 4/6/2016 | 4/7/2016 | 4/8/2016 | 4/11/2016 |
+-------------+-------+----------+----------+----------+----------+-----------+
| Doe, Jane | 9 | 0/1 | 0/2 | 0/3 | / | 0/3 |
+-------------+-------+----------+----------+----------+----------+-----------+
| Guy, Some | 4 | 0/1 | 0/1 | / | / | 0/2 |
+-------------+-------+----------+----------+----------+----------+-----------+
| Doe, John | 10 | 0/1 | 1/1 | 2/1 | 0/3 | 0/1 |
```
Sample Data:
```
| EmployeeID | Customer | Date | OrderType|
+-------------+------------------+------------+----------+
| 1 | Fake Customer | 2016-05-14 | New |
+-------------+------------------+------------+----------+
| 2 | Some Company | 2016-05-13 | Change |
+-------------+------------------+------------+----------+
| 3 | Stuff Inc. | 2016-05-14 | New |
+-------------+------------------+------------+----------+
| 3 | Cool Things | 2016-05-12 | Change |
```
|
```
IF OBJECT_ID('tmpEmployees_Test', 'U') IS NOT NULL DROP TABLE tmpEmployees_Test;
CREATE TABLE tmpEmployees_Test (EmployeeID INT, EmployeeName VARCHAR(255));
INSERT tmpEmployees_Test (EmployeeID, EmployeeName)
VALUES (1, 'Doe, Jane'), (2, 'Doe, John'), (3, 'Guy, Some');
IF OBJECT_ID('tmpOrders_Test', 'U') IS NOT NULL DROP TABLE tmpOrders_Test;
CREATE TABLE tmpOrders_Test (EmployeeID INT, Customer VARCHAR(255), Date DATE, OrderType VARCHAR(255));
INSERT tmpOrders_Test (EmployeeID, Customer, Date, OrderType)
VALUES (1, 'Fake Customer', '2016-05-14', 'New')
, (2, 'Some Company', '2016-05-13', 'Change')
, (3, 'Stuff Inc.', '2016-05-14', 'New')
, (3, 'Cool Things', '2016-05-12', 'Change')
, (3, 'Amazing Things', '2016-05-12', 'Change');
DECLARE @columns NVARCHAR(MAX), @sql NVARCHAR(MAX);
SET @columns = N'';
SELECT @columns += N', p.' + QUOTENAME(Name)
FROM (SELECT distinct CONVERT(nvarchar(30) , p.Date , 101) as Name FROM dbo.tmpOrders_Test AS p where [Date] > GETDATE()-7
) AS x;
-- Kept it for formatting Purpose
DECLARE @columns1 NVARCHAR(MAX)
SET @columns1 = N'';
SELECT @columns1 += N', ISNULL(p.' + QUOTENAME(Name) + ',''/'') AS ' + QUOTENAME(Name)
FROM (SELECT distinct CONVERT(nvarchar(30) , p.Date , 101) as Name FROM dbo.tmpOrders_Test AS p where [Date] > GETDATE()-7
) AS x;
SET @sql = N'
SELECT EmployeeName, Count(*) as Total ' + @columns1 + '
FROM
(
SELECT EmployeeID, EmployeeName' + ''+ @columns1 + '' + '
FROM
(
SELECT o.employeeID,EmployeeName, CAST(COUNT(case WHEN OrderType = ''New'' then 1 end) as varchar(5)) + ''/'' +
CAST(COUNT(case WHEN OrderType = ''Change'' then 1 end) as varchar(5)) as OrderType, CONVERT(nvarchar(30) , p.Date , 101) as Date
FROM dbo.tmpOrders_Test AS p
INNER JOIN dbo.tmpEmployees_Test AS o
ON p.EmployeeID = o.EmployeeID
GROUP BY EmployeeName, Date, o.employeeID
) AS j
PIVOT
(
Max(OrderType) FOR Date IN ('
+ STUFF(REPLACE(@columns, ', p.[', ',['), 1, 1, '')
+ ')
) AS p) as p JOIN tmpOrders_Test as m on p.employeeID = m.employeeID
where [Date] > GETDATE()-7
GROUP BY EmployeeName ' + @columns + '
';
PRINT @sql;
EXEC sp_executesql @sql;
```
This one uses a dynamic PIVOT. You might want to implement this business logic on the application or reporting side instead of in complex SQL.
|
You'd need to dynamically produce your pivot columns then do case statements for each of them. The following is an example of how you could do it:
```
IF OBJECT_ID('tmpEmployees_Test', 'U') IS NOT NULL DROP TABLE tmpEmployees_Test;
CREATE TABLE tmpEmployees_Test (EmployeeID INT, EmployeeName VARCHAR(255));
INSERT tmpEmployees_Test (EmployeeID, EmployeeName)
VALUES (1, 'Doe, Jane'), (2, 'Doe, John'), (3, 'Guy, Some');
IF OBJECT_ID('tmpOrders_Test', 'U') IS NOT NULL DROP TABLE tmpOrders_Test;
CREATE TABLE tmpOrders_Test (EmployeeID INT, Customer VARCHAR(255), Date DATE, OrderType VARCHAR(255));
INSERT tmpOrders_Test (EmployeeID, Customer, Date, OrderType)
VALUES (1, 'Fake Customer', '2016-05-14', 'New')
, (2, 'Some Company', '2016-05-13', 'Change')
, (3, 'Stuff Inc.', '2016-05-14', 'New')
, (3, 'Cool Things', '2016-05-12', 'Change')
, (3, 'Amazing Things', '2016-05-12', 'Change');
DECLARE @startDate DATE = '2016-05-14', @cols VARCHAR(MAX) = '', @cols2 VARCHAR(MAX) = '';
SELECT @cols += ', CONVERT(VARCHAR(255), SUM(CASE WHEN O.Date = ''' + CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate)) + ''' AND O.OrderType = ''New'' THEN 1 ELSE 0 END)) + ''/'' + CONVERT(VARCHAR(255), SUM(CASE WHEN O.Date = ''' + CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate)) + ''' AND O.OrderType = ''Change'' THEN 1 ELSE 0 END)) ' + QUOTENAME(CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate), 103)) + CHAR(10) + CHAR(9) + CHAR(9)
, @cols2 += ', CASE WHEN ' + QUOTENAME(CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate), 103)) + ' = ''0/0'' THEN ''/'' ELSE ' + QUOTENAME(CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate), 103)) + ' END ' + QUOTENAME(CONVERT(VARCHAR(255), DATEADD(DD, X.Y, @startDate), 103)) + CHAR(10) + CHAR(9)
FROM (VALUES (0),(-1),(-2),(-3),(-4),(-5),(-6)) X(Y)
JOIN tmpOrders_Test O ON O.Date = DATEADD(DD, X.Y, @startDate)
GROUP BY X.Y
ORDER BY X.Y;
DECLARE @SQL VARCHAR(MAX) = '
WITH T AS (
SELECT E.EmployeeID
, COUNT(*) Total
' + @cols + '
FROM tmpEmployees_Test E
JOIN tmpOrders_Test O ON O.EmployeeID = E.EmployeeID
WHERE O.Date BETWEEN ''' + CONVERT(VARCHAR(255), DATEADD(dd, -6, @startDate)) + ''' AND ''' + CONVERT(VARCHAR(255), @startDate) + '''
GROUP BY E.EmployeeID)
SELECT E.EmployeeName
, Total
' + @cols2 + '
FROM T
JOIN tmpEmployees_Test E ON E.EmployeeID = T.EmployeeID;'
--PRINT @SQL;
EXEC(@SQL);
```
This mirrors the output you're expecting (as far as I can tell), even if it looks a bit messy. I don't think you can produce your desired output without it being a bit messy, though.
Note: The CTE in the dynamic SQL is just to get rid of all the '0/0' and make them '/' and it seems the easiest way to do it.
|
Convert Access Crosstab/PIVOT query to T-SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"ms-access",
""
] |
I have the following Table.
```
STAFF
STAFFNO STAFFNAME DESIGNATI SALARY DEPTNO
---------- ---------- --------- ---------- ----------
1000 Rajesh Manager 35000 1
1001 Manoj Caretaker 7420.35 1
1002 Swati HR 22500 3
1003 Suresh HR 23400 3
1004 Najim Mangager 17200 2
1006 Ritesh Prgrmr 23500 2
1005 Nisha Prgrmr 24852 1
1007 Rajib Security 6547 3
1008 Neeraj Prgrmr 17300 1
1009 Dushant Prgrmr 16500 1
1010 Pradyut Manager 26300 2
1011 Manisha Prgrmr 21500 2
1012 Janak Security 8500 2
```
Now I want to run a query in Oracle (SQL\*Plus) to retrieve the details of those employees who work in a department with a head count of 5 or more (e.g. deptno 1 and deptno 2 each have 5 employees working in them).
Can you help me with the Oracle query to retrieve that? Thanks in advance.
|
You need to create a subquery or perform a `JOIN`.
With a `JOIN`, first you need to know which departments have 5 or more employees.
```
SELECT DEPTNO
FROM STAFF
GROUP BY DEPTNO
HAVING COUNT(*) >= 5
```
Now join both results:
```
SELECT S.*
FROM STAFF S
JOIN ( SELECT DEPTNO
FROM STAFF
GROUP BY DEPTNO
HAVING COUNT(*) >= 5 ) F
ON S.DEPTNO = F.DEPTNO
```
Subquery version:
```
SELECT S.*
FROM STAFF S
WHERE S.DEPTNO IN ( SELECT DEPTNO
FROM STAFF
GROUP BY DEPTNO
HAVING COUNT(*) >= 5 )
```
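A minimal check of the `IN (... HAVING COUNT(*) >= 5)` pattern in SQLite via Python, with a trimmed-down version of the sample data (only departments 1 and 3, and only the columns the filter needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staff (staffno INTEGER, staffname TEXT, deptno INTEGER);
INSERT INTO staff VALUES
  (1000,'Rajesh',1),(1001,'Manoj',1),(1005,'Nisha',1),
  (1008,'Neeraj',1),(1009,'Dushant',1),
  (1002,'Swati',3),(1003,'Suresh',3),(1007,'Rajib',3);
""")

# Dept 1 has 5 staff and qualifies; dept 3 has only 3 and is dropped.
rows = conn.execute("""
SELECT staffno, staffname, deptno
FROM staff
WHERE deptno IN (SELECT deptno FROM staff
                 GROUP BY deptno HAVING COUNT(*) >= 5)
ORDER BY staffno
""").fetchall()
ids = [r[0] for r in rows]
print(ids)
```

Only the five department-1 employees come back; the HR/security rows in department 3 are filtered out by the HAVING clause.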
|
It should be like this
```
SELECT * FROM STAFF WHERE DEPTNO IN
(SELECT DEPTNO FROM STAFF GROUP BY DEPTNO HAVING COUNT(*)>4)
```
|
SQL Query to Retrieve the details of those employees who works in a department having head count more than 5
|
[
"",
"sql",
"database",
"oracle",
""
] |
I'm trying to rewrite some old SQL queries that look particularly awful. I am wondering if there is a more efficient way to prioritize values in a where statement for order of precedence. Basically the table contains multiple email\_code records per user but I want to prioritize based on what records are preferred. In this case if the email\_code is WORK it should be selected. But if there is no WORK record then HOME should be selected, and so on. Here is an example of what I am working with. There has to be a more graceful way to do this...?
```
select
*
from
email m
where
status_ind='A'
and decode(email_code, 'WORK',1,
'HOME',2,
'ALT1',3,
'ALT2',4,5) = (select
min(decode(email_code, 'WORK',1,
'HOME',2,
'ALT1',3,
'ALT2',4,5))
from
email
where
email_uid = m.email_uid
and status_ind='A');
```
|
Try:
```
SELECT * FROM (
SELECT e.*,
dense_rank() over (PARTITION BY user_id
ORDER BY CASE email_code
WHEN 'WORK' THEN 1
WHEN 'HOME' THEN 2
WHEN 'ALT1' THEN 3
WHEN 'ALT2' THEN 4
ELSE 5
END ) As priority
FROM emails e
WHERE status_ind='A'
)
WHERE priority = 1
```
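A runnable sketch of the `DENSE_RANK` idea in SQLite (3.25+ for window functions) via Python; the `user_id` and address columns are made up, standing in for whatever identifies a person's emails in the real table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emails (user_id INTEGER, email_code TEXT, addr TEXT);
INSERT INTO emails VALUES
  (1,'HOME','u1-home'), (1,'ALT1','u1-alt1'),   -- no WORK: HOME wins
  (2,'WORK','u2-work'), (2,'HOME','u2-home');   -- WORK wins
""")

# Rank each user's emails by the CASE-mapped priority and keep rank 1.
rows = conn.execute("""
SELECT user_id, addr FROM (
  SELECT e.*,
         DENSE_RANK() OVER (PARTITION BY user_id
                            ORDER BY CASE email_code
                                       WHEN 'WORK' THEN 1
                                       WHEN 'HOME' THEN 2
                                       WHEN 'ALT1' THEN 3
                                       WHEN 'ALT2' THEN 4
                                       ELSE 5 END) AS priority
  FROM emails e
) WHERE priority = 1 ORDER BY user_id
""").fetchall()
print(rows)
```

User 1 falls through to HOME because no WORK record exists, which is exactly the precedence behavior the original nested `DECODE` query was emulating.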
|
First of all, it would be better to create a lookup table to hold email types and their priorities. Then you can join this lookup table with your email table. Finally, use one of Oracle's analytic functions (row\_number, rank, dense\_rank, etc.) to get the highest-priority one.
|
oracle select based on priority
|
[
"",
"sql",
"oracle",
"select",
"operator-precedence",
""
] |
I am having different results on what appears to be the same query. I am very confused on why the total differs from the queries. Hopefully someone has an explanation on this issue and maybe say which should be the correct total. Thank you
```
select sum(tm_hours)
FROM bhds_timecard
WHERE (tm_notes = 'Session'
OR tm_notes = 'Intake'
OR tm_notes = 'OT'
or tm_notes = 'PT'
or tm_notes = 'Speech')
AND (tm_date BETWEEN '2016-04-01' AND '2016-04-30')
```
This totals 1792.25
```
SELECT SUM(tm_hours)
FROM bhds_timecard
WHERE (tm_date BETWEEN '2016-04-01' AND '2016-04-30')
AND tm_notes = 'Session'
OR tm_notes = 'Intake'
OR tm_notes = 'OT'
OR tm_notes = 'PT'
OR tm_notes = 'Speech'
```
This totals 1796.25
|
If you understand why 5 + 3 \* 2 = 11 then this will help you understand the source of your problem. Some people may mistakenly think the above calculation should give you 16 because they are unaware of the order of operations. Multiplication takes precedence over addition so it is done first.
Likewise, AND has a higher precedence than OR so it is done first. Check out the [Operator Precedence](http://dev.mysql.com/doc/refman/5.7/en/operator-precedence.html) section in the MySQL reference manual.
This is likely the code you were trying to build:
```
SELECT SUM(tm_hours)
FROM bhds_timecard
WHERE tm_date BETWEEN '2016-04-01' AND '2016-04-30'
AND (tm_notes = 'Session'
OR tm_notes = 'Intake'
OR tm_notes = 'OT'
OR tm_notes = 'PT'
OR tm_notes = 'Speech');
```
In this case, since parentheses override the standard order of operations, everything inside the parentheses gets evaluated first (all of the OR conditions). The entire parenthetical section therefore evaluates to TRUE if at least one of its conditions is TRUE. Next all of your ANDs get evaluated and you have your answer.
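The precedence difference can be demonstrated concretely with Python's sqlite3 (SQLite follows the same AND-before-OR rule; the two rows below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE timecard (tm_notes TEXT, tm_date TEXT, tm_hours REAL)")
conn.executemany("INSERT INTO timecard VALUES (?, ?, ?)", [
    ("Session", "2016-04-15", 1.0),
    ("OT",      "2016-05-10", 4.0),   # outside the April date range
])

# Without parentheses: AND binds first, so the date filter only
# constrains the 'Session' branch and the May 'OT' row slips through.
no_parens = conn.execute(
    "SELECT SUM(tm_hours) FROM timecard "
    "WHERE tm_date BETWEEN '2016-04-01' AND '2016-04-30' "
    "AND tm_notes = 'Session' OR tm_notes = 'OT'").fetchone()[0]

# With parentheses: the date filter applies to every row.
parens = conn.execute(
    "SELECT SUM(tm_hours) FROM timecard "
    "WHERE tm_date BETWEEN '2016-04-01' AND '2016-04-30' "
    "AND (tm_notes = 'Session' OR tm_notes = 'OT')").fetchone()[0]

print(no_parens, parens)  # 5.0 1.0
```

The unparenthesized query counts the out-of-range `OT` row, which is exactly the 4-hour discrepancy pattern in the question.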
|
The second query should be like this:
```
SELECT SUM(tm_hours) FROM bhds_timecard WHERE (tm_date BETWEEN '2016-04-01' AND '2016-04-30')
AND (tm_notes = 'Session' OR tm_notes = 'Intake' OR tm_notes = 'OT' OR tm_notes = 'PT' OR tm_notes = 'Speech')
```
You are missing the parentheses around the second group of conditions.
|
Difference on sum totals on mysql query
|
[
"",
"mysql",
"sql",
"sum",
""
] |
I am writing a query like this
```
SELECT *
FROM table
WHERE id IN (thousands of ids)
```
but this won't work because the limit of values for the `WHERE IN` is 128 on our version of SQL Server.
Is there a way to do this using a sub-query with a join, or maybe a temp table?
Any help is appreciated!
|
You can use [**`VALUES`**](https://msdn.microsoft.com/en-us/library/dd776382.aspx) in order to construct an in-line table containing all the values to be considered and then `JOIN` to this table:
```
SELECT t1.*
FROM table AS t1
INNER JOIN (VALUES (10), (20), (12), ... ) AS t2(id)
ON t1.id = t2.id
```
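The same idea can be sketched with Python's sqlite3 using a keyed temp table to hold the id list (column and table names here are illustrative; the join avoids building a huge `IN (...)` list entirely):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "item%d" % i) for i in range(1, 201)])

# Load the (possibly thousands of) ids into a temp table, then JOIN.
ids = list(range(5, 160))  # 155 ids, more than a 128-entry IN list allows
conn.execute("CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted VALUES (?)", [(i,) for i in ids])

rows = conn.execute(
    "SELECT t1.* FROM items t1 JOIN wanted w ON t1.id = w.id").fetchall()
print(len(rows))  # 155
```

Bulk-loading the ids also lets the engine index them, which tends to beat a long literal list as the count grows.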
|
Place them in temp table and
```
SELECT *
FROM table
WHERE id IN (SELECT id from #Table_of_thousands_of_ids)
```
|
SQL Server : how to work around WHERE IN limit of 128 values
|
[
"",
"sql",
"sql-server",
""
] |
```
student table teacher table sports table parents table
--------- -------------- ------------ ---------------
id name id name id name id stud_id fathername mothername
------------ ------------ ------------ -----------------------------------------
1 S1 1 T1 1 SP1 1 1 xxxxxx yyyyyyy
2 S2 2 T2 2 SP2 2 2 abc aaa
3 S3 3 T3 3 SP3
student_teacher table student_sports table
id stud_id teacher_id id sutd_id sports_id
------------------------------ ------------------------------
1 1 1 1 1 1
2 1 2 2 1 2
3 1 3 3 1 3
4 2 2 4 3 2
5 2 3 5 3 3
```
How to write a query to get information of student S1 from all the table.
For example: the names of the sports student S1 participates in, the names of the teachers who teach student S1, and student S1's parents' information. Here, id in the student, teacher, and parents tables is the primary key; stud\_id, teacher\_id, and sports\_id are foreign keys referring to the primary keys of the student, teacher, and sports tables.
Kindly help me to get the record of Student S1 from table. Thanks in advance.
|
You can simply use a `WHERE` clause:
```
select S.name, T.name, SS.name, P.fathername, P.mothername
from student S, teacher T, sports SS, parents P,student_teacher ST, student_sports SSP
where S.id = ST.stud_id and T.id = ST.teacher_id and
S.id = SSP.stud_id and SS.id = SSP.sports_id and
S.id = P.stud_id
```
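The same join, written with explicit `JOIN` syntax and checked against the sample data via sqlite3 (a sketch; note S1 has 3 teachers and 3 sports, so the cross-product of those relations yields 3 × 3 = 9 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id TEXT, name TEXT);
CREATE TABLE teacher (id TEXT, name TEXT);
CREATE TABLE sports  (id TEXT, name TEXT);
CREATE TABLE parents (id TEXT, stud_id TEXT, fathername TEXT, mothername TEXT);
CREATE TABLE student_teacher (id TEXT, stud_id TEXT, teacher_id TEXT);
CREATE TABLE student_sports  (id TEXT, stud_id TEXT, sports_id TEXT);
INSERT INTO student VALUES ('1','S1'),('2','S2'),('3','S3');
INSERT INTO teacher VALUES ('1','T1'),('2','T2'),('3','T3');
INSERT INTO sports  VALUES ('1','SP1'),('2','SP2'),('3','SP3');
INSERT INTO parents VALUES ('1','1','xxxxxx','yyyyyyy'),('2','2','abc','aaa');
INSERT INTO student_teacher VALUES ('1','1','1'),('2','1','2'),('3','1','3'),
                                   ('4','2','2'),('5','2','3');
INSERT INTO student_sports  VALUES ('1','1','1'),('2','1','2'),('3','1','3'),
                                   ('4','3','2'),('5','3','3');
""")

rows = conn.execute("""
    SELECT s.name, t.name, sp.name, p.fathername, p.mothername
    FROM student s
    JOIN student_teacher st ON st.stud_id = s.id
    JOIN teacher t          ON t.id = st.teacher_id
    JOIN student_sports ss  ON ss.stud_id = s.id
    JOIN sports sp          ON sp.id = ss.sports_id
    JOIN parents p          ON p.stud_id = s.id
    WHERE s.name = 'S1'
""").fetchall()
print(len(rows))  # 9 rows: every (teacher, sport) combination for S1
```

The row multiplication is worth keeping in mind: joining two independent one-to-many relations in one query duplicates each side.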
|
Here is the solution :
```
select st.name as Student, t.name as teacher, sp.name as sports, p.fathername,p.mothername from student st,teacher t,sports sp,parents p,student_teacher s_t,student_sports s_s where s_t.stud_id=st.id and s_t.teacher_id=t.id and p.stud_id=st.id and s_s.stud_id=st.id;
```
|
How to retrieve the student information from the table?
|
[
"",
"mysql",
"sql",
"join",
""
] |
I'm studying for my Database System exam tomorrow, and I'm working on an SQL question. This question is the only one from the paper that doesn't have an answer, but here's the question:
---
We use the following schema:
* *Professor*(name, office, dept, age) (age is key)
* *Course*(cno, title, dept) (cno stands for course number and is a key, title is the name of the course and dept is name of department offering the course)
* *Enrollment*(cno, semester, inst\_name, enrollment) (the key for this is (cno, semester)
**Question: Write the following sql query: Output a table containing the single row "yes" in it if the age difference between the oldest and the youngest professors teaching the Database Systems course between 2000 and 2009 is at most 5 years**
---
I'm not sure my approach is right, since we don't exactly want to output something from the table. Note that I think enrollment corresponds to when the instructor started teaching the course (which isn't the usual definition AFAIK).
My approach is as follows:
```
WITH dbsProfs AS (
SELECT P.age
FROM Professor P, Enroll E, Course C
WHERE P.name = E.inst_name AND C.cno = E.cno AND C.title = "Database Systems"
AND E.enrollment BETWEEN 2000 and 2009
)
SELECT "Yes"
FROM dbsProfs
WHERE MAX(dbsProfs.age) - MIN(dbsProfs.age) <= 5
```
I'm fairly confident with my temporary table. I'm doing a join on all 3 tables and filtering out to include only the ones relevant to my query. It's the other half I'm unsure about.
Any insight on whether this is correct/how to correct this would be much appreciated. I'm not convinced `WHERE MAX(dbsProfs.age) - MIN(dbsProfs.age) <= 5` is valid SQL.
|
With an aggregate function you should use `HAVING`:
```
SELECT "Yes"
FROM dbsProfs
HAVING MAX(dbsProfs.age) - MIN(dbsProfs.age) <= 5
```
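Some engines (SQLite, for example) require a `GROUP BY` before `HAVING`; a portable equivalent computes the spread in a derived table. A small sqlite3 sketch with made-up ages:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbsProfs (age INTEGER)")
conn.executemany("INSERT INTO dbsProfs VALUES (?)", [(35,), (38,), (40,)])

row = conn.execute("""
    SELECT 'Yes'
    FROM (SELECT MAX(age) - MIN(age) AS spread FROM dbsProfs)
    WHERE spread <= 5
""").fetchone()
print(row)  # ('Yes',) because 40 - 35 = 5; would be None if the spread exceeded 5
```

Either way the key point stands: a filter on an aggregate belongs in `HAVING` (or a wrapper query), never in `WHERE`.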
|
```
WITH dbsProfs AS (
SELECT MIN(P.age) as min_age,MAX(P.age) as max_age
FROM Professor P, Enroll E, Course C
WHERE P.name = E.inst_name AND C.cno = E.cno AND C.title = "Database Systems"
AND E.enrollment BETWEEN 2000 and 2009
)
SELECT CASE WHEN min_age<max_age THEN "Yes" END
FROM dbsProfs
```
|
SQL Query outputting "yes"
|
[
"",
"mysql",
"sql",
"view",
""
] |
I am trying to generate a report on a sql server database in asp.net and I am getting the results of some columns as a product of two columns. Here is the code
```
comm.CommandText = "SELECT Count(Courses.CourseID) AS CourseCount, Count(Students.StudentID) AS StudentCount, Schools.Name, Schools.StartDate, Schools.SchoolFees " +
        "FROM Schools " +
        "LEFT JOIN Courses ON (Schools.SchoolID = Courses.SchoolID) " +
"LEFT JOIN Students ON (Schools.SchoolID = Students.SchoolID) " +
"WHERE Schools.Active = 1 " +
"GROUP BY Schools.Name, Schools.StartDate, Schools.SchoolFees";
```
When I run the code, the result displays, but the columns for "CourseCount" and "StudentCount" display a value that is a product of each individual column. "CourseCount" is normally 288 and "StudentCount" is 38 but when I run the code, both "CourseCount" and "StudentCount" display 10944 which is 38 x 288.
Anyway I can make them display the correct values?
|
Changing your code from using a count of all rows, to a count of distinct values only, should return the results you expect
```
comm.CommandText = "SELECT Count(DISTINCT Courses.CourseID) AS CourseCount, Count(DISTINCT Students.StudentID) AS StudentCount, Schools.Name, Schools.StartDate, Schools.SchoolFees " +
        "FROM Schools " +
        "LEFT JOIN Courses ON (Schools.SchoolID = Courses.SchoolID) " +
"LEFT JOIN Students ON (Schools.SchoolID = Students.SchoolID) " +
"WHERE Schools.Active = 1 " +
"GROUP BY Schools.Name, Schools.StartDate, Schools.SchoolFees";
```
The results being returned are technically correct, if all schools have courses, and all courses have students
|
As stated above, the problem is how you are using COUNT(): you are asking it to count all rows, which is why it returns so many. Use a count of just the distinct values in the two columns you want counted.
|
Sql Joins on multiple table returning product of two columns
|
[
"",
"sql",
"asp.net",
"sql-server",
"database",
""
] |
I have following list of `Amount (float)` in my table.
```
Amount
123
123.1
123.0123
123.789456
```
How can i get the number of digits after the decimal point.
**Duplicate?:** I have already checked existing posts, but none of them correctly handles **`float`** numbers **`with or without a decimal part`**.
Result
```
Amount Result
123 0
123.1 1
123.0123 4
123.789456 6
```
**EDIT** :
After spending some valuable time, i have found some relatively simple script to handle this. My answer is [below](https://stackoverflow.com/questions/37024739/get-the-number-of-digits-after-the-decimal-point-of-a-float-with-or-without-dec/37027599#37027599)
|
I found a relatively simple script to handle this.
```
ISNULL(NULLIF(CHARINDEX('.',REVERSE(CONVERT(VARCHAR(50), Amount, 128))),0) - 1,0)
```
Here the `ISNULL(NULLIF(...))` wrapper is only there to handle floats without a decimal part.
If there are no values without a decimal part, then it is very simple:
```
CHARINDEX('.',REVERSE(CONVERT(VARCHAR(50), Amount, 128))) -1
```
Hope this will be helpful to you.
Full script below
```
declare @YourTable table (Amount float)
insert into @YourTable
values(123),(123.1),(123.0123),(123.789456)
SELECT ISNULL(NULLIF(CHARINDEX('.',REVERSE(CONVERT(VARCHAR(50), Amount, 128))),0) - 1,0)
FROM @YourTable
SELECT CHARINDEX('.',REVERSE(CONVERT(VARCHAR(50), Amount, 128))) -1
FROM @YourTable
```
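For comparison, the same reverse-and-find idea expressed outside SQL as a short Python sketch (this relies on `repr()` producing the shortest round-trip string for a float, analogous to the style-128 `CONVERT` above):

```python
def decimal_digits(x):
    """Count digits after the decimal point of a float's shortest repr."""
    s = repr(float(x))
    whole, _, frac = s.partition(".")
    frac = frac.rstrip("0")          # treat 123.0 as having no decimals
    return len(frac)

for v in (123, 123.1, 123.0123, 123.789456):
    print(v, decimal_digits(v))
# 123 0 / 123.1 1 / 123.0123 4 / 123.789456 6
```

As in the T-SQL version, the whole trick is locating the decimal point in a faithful string rendering of the value.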
|
You can do it as follows:
**QUERY**
```
SELECT Amount,
CASE WHEN FLOOR(Amount) <> CEILING(Amount) THEN LEN(CONVERT(INT,CONVERT(FLOAT,REVERSE(CONVERT(VARCHAR(50), Amount, 128))))) ELSE 0 END AS Result
FROM YourTable
```
**OUPUT**
```
Amount Result
123 0
123,1 1
123,0123 4
123,789456 6
```
|
Get the number of digits after the decimal point of a float (with or without decimal part)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Here's where I am:
* `TABLE1.ITM_CD` is VARCHAR2 datatype
* `TABLE2.ITM_CD` is NUMBER datatype
* Executing `left join TABLE2 on TABLE1.ITM_CD = TABLE2.ITM_CD` yields **ORA-01722: invalid number** error
* Executing `left join TABLE2 on to_number(TABLE1.ITM_CD) = TABLE2.ITM_CD` also yields **ORA-01722: invalid number** error.
-- I suspect this is because one of the values in `TABLE1.ITM_CD` is the string "MIXED"
* Executing `left join TABLE2 on TABLE1.ITM_CD = to_char(TABLE2.ITM_CD)` successfully runs, but it returns blank values for the fields selected from `TABLE2`.
Here is a simplified version of my working query:
```
select
A.ITM_CD
,B.COST
,B.SIZE
,B.DESCRIPTION
,A.HOLD_REASON
from
TABLE1 a
left join TABLE2 b on a.ITM_CD = to_char(b.ITM_CD)
```
This query returns a list of item codes and hold reasons, but just blank values for the cost, size, and descriptions. And I did confirm that `TABLE2` contains values for these fields for the returned codes.
**UPDATE:** Here are pictures with additional info.
I selected the following info from `ALL_TAB_COLUMNS`--I don't necessarily know what all fields mean, but thought it might be helpful
[](https://i.stack.imgur.com/cjjlE.png)
**TABLE1** sample data
[](https://i.stack.imgur.com/b1f6l.png)
**TABLE2** sample data
[](https://i.stack.imgur.com/qii1Q.png)
|
You can convert the `TABLE1.ITM_CD` to a number after you strip any leading zeros and filter out the "MIXED" values:
```
select A.ITM_CD
,B.COST
,B.SIZE
,B.DESCRIPTION
,A.HOLD_REASON
from ( SELECT * FROM TABLE1 WHERE ITM_CD <> 'MIXED' ) a
left join TABLE2 b
on TO_NUMBER( LTRIM( a.ITM_CD, '0' ) ) = b.ITM_CD
```
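The same strip-zeros-and-convert idea can be checked quickly with sqlite3 (a sketch; `LTRIM(x, '0')` strips leading zeros there too, and `CAST` stands in for Oracle's `TO_NUMBER`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (itm_cd TEXT, hold_reason TEXT)")
conn.execute("CREATE TABLE t2 (itm_cd INTEGER, cost REAL)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [("001234", "damaged"), ("MIXED", "review")])
conn.execute("INSERT INTO t2 VALUES (1234, 9.5)")

rows = conn.execute("""
    SELECT a.itm_cd, b.cost
    FROM (SELECT * FROM t1 WHERE itm_cd <> 'MIXED') a
    LEFT JOIN t2 b
      ON CAST(LTRIM(a.itm_cd, '0') AS INTEGER) = b.itm_cd
""").fetchall()
print(rows)  # [('001234', 9.5)] -- leading zeros no longer block the match
```

Filtering out the non-numeric `'MIXED'` rows before the conversion is what keeps the numeric cast from failing.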
|
This is a SQL (and really a database) problem, not PL/SQL. You will need to fix this - fight your bosses if you have to. Item code must be a Primary Key in one of your two tables, and a Foreign Key in the other table, pointing to the PK. Or perhaps Item code is PK in another table which you didn't show us, and Item code in both the tables you showed us should be FK pointing to that PK.
You don't have that arrangement now, which is exactly why you are having all this trouble. The data type must match (it doesn't now), and you shouldn't have values like 'MIXED' - unless your business rules allow it, and then the field should be VARCHAR2 and 'MIXED' should be one of the values in the PK column (whichever table that is in).
In your case, the problem is that the codes in VARCHAR2 format start with a leading 0, so if you compare to the numbers converted to strings, you never get a match (and in the outer join, the match is always assumed to be to NULL).
Instead, when you convert your numbers to strings, add leading zero(s) like this:
```
...on a.ITM_CD = TO_CHAR(b.ITM_CD, '099999')
```
|
PL/SQL Developer: joining VARCHAR2 to NUMBER?
|
[
"",
"sql",
"oracle",
""
] |
I created the following tables:
```
create table people (
ID varchar(9),
name varchar(20),
CONSTRAINT pk_ID PRIMARY KEY (ID)
);
create table cars (
license_plate varchar(9),
ID varchar(9),
CONSTRAINT pk_ID PRIMARY KEY (license_plate)
);
create table accidents (
code varchar(9),
license_plate varchar(9),
CONSTRAINT pk_ID PRIMARY KEY (code)
);
```
I inserted the following data:
```
insert into people(ID, name) values('0x1','Louis');
insert into people(ID, name) values('0x2','Alice');
insert into people(ID, name) values('0x3','Peter');
insert into cars(license_plate, ID) values('001','0x1');
insert into cars(license_plate, ID) values('002','0x2');
insert into cars(license_plate, ID) values('003','0x1');
insert into cars(license_plate, ID) values('004','0x3');
insert into accidents(code, license_plate) values('fd1','001');
insert into accidents(code, license_plate) values('fd2','004');
insert into accidents(code, license_plate) values('fd3','002');
```
The question is: **How to select people who haven't had accidents in any of their cars?**
My problem is with `not in`: since "Louis" has at least one car in the `accidents` table, the query shows me "Louis" when it should not.
My query:
```
select ID from people where ID in (select ID from cars where license_plate not in (select license_plate from accidents));
```
Result:
```
+-----+
| ID |
+-----+
| 0x1 |
+-----+
```
|
```
select name from people where ID not in (
select distinct c.ID from
accidents as a inner join cars as c
on a.license_plate = c.license_plate
)
```
Explanation: the subquery joins cars and accidents, giving you the IDs of all owners whose cars had accidents. You can then run a `not in` query against the people table.
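Running both the original and the corrected query against the sample data shows the difference (a quick sqlite3 reproduction; with this data every owner has at least one accident, so the corrected query correctly returns no one while the original wrongly returns Louis):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id TEXT, name TEXT);
CREATE TABLE cars (license_plate TEXT, id TEXT);
CREATE TABLE accidents (code TEXT, license_plate TEXT);
INSERT INTO people VALUES ('0x1','Louis'),('0x2','Alice'),('0x3','Peter');
INSERT INTO cars VALUES ('001','0x1'),('002','0x2'),('003','0x1'),('004','0x3');
INSERT INTO accidents VALUES ('fd1','001'),('fd2','004'),('fd3','002');
""")

# Original query: "owns at least one accident-free car" -- wrong question.
buggy = conn.execute("""
    SELECT id FROM people WHERE id IN
      (SELECT id FROM cars WHERE license_plate NOT IN
        (SELECT license_plate FROM accidents))
""").fetchall()

# Corrected: exclude anyone who owns ANY car that had an accident.
fixed = conn.execute("""
    SELECT name FROM people WHERE id NOT IN
      (SELECT DISTINCT c.id FROM accidents a
       JOIN cars c ON a.license_plate = c.license_plate)
""").fetchall()

print(buggy, fixed)  # [('0x1',)] []
```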
|
You need two subqueries:
```
select id from people
where id not in
(select id from cars where license_plate not in
(select distinct license_plate from accidents))
```
|
how to get the ID card in MYSQL?
|
[
"",
"mysql",
"sql",
""
] |
I have three tables
Table1
```
userid mobile
1 123456789
2 321654987
3 987456321
```
Table2
```
revid userid revdes mobile
1 2 ASD 123456789
2 2 DSA 123456348
3 1 QWE 963258124
```
Table3
```
revid revloc
1 asdf
3 dsaq
```
I want output like this where userid=2
```
userid revid revdes mobile revloc inTable1
2 1 ASD 123456789 asdf true
2 2 DSA 123456348 NULL false
```
**In the above output inTable1 column 1st row element is true because mobile "123456789" is available on Table1**
I am using MySQL.
|
You can achieve what you want using a series of left joins. The tricky part with your query was knowing to join `Table1` and `Table2` using the mobile number, rather than the user id.
```
SELECT t2.userid, t2.revid, t2.revdes, t2.mobile, t3.revloc,
t1.mobile IS NOT NULL AS inTable1
FROM Table2 t2 LEFT JOIN Table1 t1
ON t2.mobile = t1.mobile
LEFT JOIN Table3 t3
ON t2.revid = t3.revid
WHERE t2.userid = 2
```
**Follow the link below for a running demo:**
[# SQLFiddle](http://sqlfiddle.com/#!9/75268c/20)
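For a local check, here is the same query against the sample data via sqlite3 (a sketch; note `t1.mobile IS NOT NULL` evaluates to 1/0 in SQLite rather than true/false):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (userid INTEGER, mobile TEXT);
CREATE TABLE t2 (revid INTEGER, userid INTEGER, revdes TEXT, mobile TEXT);
CREATE TABLE t3 (revid INTEGER, revloc TEXT);
INSERT INTO t1 VALUES (1,'123456789'),(2,'321654987'),(3,'987456321');
INSERT INTO t2 VALUES (1,2,'ASD','123456789'),(2,2,'DSA','123456348'),
                      (3,1,'QWE','963258124');
INSERT INTO t3 VALUES (1,'asdf'),(3,'dsaq');
""")

rows = conn.execute("""
    SELECT t2.userid, t2.revid, t2.revdes, t2.mobile, t3.revloc,
           t1.mobile IS NOT NULL AS inTable1
    FROM t2
    LEFT JOIN t1 ON t2.mobile = t1.mobile
    LEFT JOIN t3 ON t2.revid = t3.revid
    WHERE t2.userid = 2
    ORDER BY t2.revid
""").fetchall()
print(rows)
# [(2, 1, 'ASD', '123456789', 'asdf', 1), (2, 2, 'DSA', '123456348', None, 0)]
```

Both left joins are needed: the mobile match produces the boolean flag, and the revid match pulls in the optional location.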
|
You could use a `left join` and just check if the resulting column is `null` or not:
```
SELECT t2.userid, t2.revid, t2.revdes, t2.mobile, t3.revloc,
t1.mobile IS NOT NULL AS inTable1
FROM table2 t2
JOIN table3 t3 ON t2.revid = t3.revid
LEFT JOIN table1 t1 ON t2.mobile = t1.mobile
```
|
MySQL three table join
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
I thought this would have been an easy google search but couldn't find any solutions. Is there a way to use the like and between together in a query?
Example
```
REASON_CODES
A00 VMC B10
A00 RTD B19
.
.
.
A99 RNT B40
```
I am trying to write a query like:
```
Select count(*) from table_1 where REASON_CODES like between '%A10%' and '%A25%'
```
Is there a solution to do this? I was reading "convert" may do the trick but I had no luck.
Thanks
|
You can use `SUBSTR`:
```
Select count(*) from table_1
where substr(reason_codes, 1,3) between 'A10' and 'A25';
```
|
If you're just trying to match the beginning of the `REASON_CODE` strings, you can do:
```
SELECT COUNT(*)
FROM table_1
WHERE REASON_CODE >= 'A10' AND REASON_CODE < 'A26'
```
This is equivalent to scaisEdge's answer, but it can take advantage of an index on the `REASON_CODE` column, which cannot be used if you first call `SUBSTR()`.
You have to use `>=` and `<` because `BETWEEN` includes both endpoints, and you want to match everything up to, but not including `A26`. You can't use `BETWEEN 'A10' AND 'A25'` because strings beginning with `A25` and having additional characters are higher than that.
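The endpoint subtlety is easy to verify with a quick sqlite3 check (made-up codes; string comparison semantics are the same here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_1 (reason_code TEXT)")
conn.executemany("INSERT INTO table_1 VALUES (?)",
                 [("A09",), ("A10",), ("A15x",), ("A25",), ("A25z",), ("A26",)])

half_open = conn.execute(
    "SELECT COUNT(*) FROM table_1 "
    "WHERE reason_code >= 'A10' AND reason_code < 'A26'").fetchone()[0]
between = conn.execute(
    "SELECT COUNT(*) FROM table_1 "
    "WHERE reason_code BETWEEN 'A10' AND 'A25'").fetchone()[0]
print(half_open, between)  # 4 3 -- BETWEEN misses 'A25z', which sorts after 'A25'
```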
|
Oracle like and between used together
|
[
"",
"sql",
"oracle",
"sql-like",
""
] |
I have an SQL table with 4 columns. The fourth column is FullName. I want this column to fill itself automatically from the results of the 2nd and 3rd columns, i.e. FirstName and MiddleName.
I have tried this code
```
cn.Open()
Dim query As String
query = "Insert into Details(Adm,FirstName,MiddleName,FullName ) VALUES ('" & TextBox1.Text & "' , '" & TextBox2.Text & "', '" & TextBox3.Text & " ', CONCATE(Textbox2.text, ',', Textbox3.Text))"
cmd = New SqlCommand(query, cn)
reader = cmd.ExecuteReader
MessageBox.Show("Data Saved")
```
|
The concatenation section would look like the following:
```
"CONCAT('" & Textbox2.text & "',',','" & Textbox3.Text & "'))"
```
But I wouldn't recommend using it like this, since string concatenation is bad practice. I prefer that you use parameters instead, to avoid SQL injection and to specify the types.
Example:
```
Dim query = "Insert into Details(Adm,FirstName,MiddleName,FullName ) VALUES (" & _
            "@adm,@fName,@mName,CONCAT(@fNameC,',',@mNameC))"
Dim cmd As New SqlCommand(query, cn)
cmd.Parameters.Add("@adm", SqlDbType.VarChar).Value = TextBox1.Text
cmd.Parameters.Add("@fName", SqlDbType.VarChar).Value = TextBox2.Text
cmd.Parameters.Add("@mName", SqlDbType.VarChar).Value = TextBox3.Text
cmd.Parameters.Add("@fNameC", SqlDbType.VarChar).Value = TextBox2.Text
cmd.Parameters.Add("@mNameC", SqlDbType.VarChar).Value = TextBox3.Text
'Execute the query here
```
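The same parameterized idea can be sketched in Python with sqlite3 (SQLite uses `||` for string concatenation; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE details "
             "(adm TEXT, firstname TEXT, middlename TEXT, fullname TEXT)")

# User input goes in as parameters; the database computes the full name.
adm, first, middle = "A1", "John", "Quincy"
conn.execute(
    "INSERT INTO details VALUES (?, ?, ?, ? || ',' || ?)",
    (adm, first, middle, first, middle))

fullname = conn.execute("SELECT fullname FROM details").fetchone()[0]
print(fullname)  # John,Quincy
```

As with the VB.NET version, the values never touch the SQL text itself, so injection is not possible.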
|
Before the query, first store the two textbox values in one variable:
```
cn.Open()
Dim query As String
Dim fullname As String
    fullname = TextBox2.Text & " " & TextBox3.Text
    query = "Insert into Details(Adm,FirstName,MiddleName,FullName ) VALUES ('" & TextBox1.Text & "' , '" & TextBox2.Text & "', '" & TextBox3.Text & " ', '" & fullname & "')"
cmd = New SqlCommand(query, cn)
reader = cmd.ExecuteReader
MessageBox.Show("Data Saved")
```
|
Combining two Columns
|
[
"",
"sql",
"vb.net",
""
] |
This is my first question on Stack Exchange, and I've been struggling with this for a few days:
I want to make a complex query (PL/SQL) on a table that has **col1, col2, col3, col4, col5** holding values like the names below, split one part per column:
```
+------+--------+--------+--------+------+
| ID | Col1 | Col2 | Col3 | Col4 |
+------+--------+--------+--------+------+
| (#1) | Andrew | Joan | Bach | Mike |
| (#2) | Mark | Andrew | Livy | |
| (#3) | Joan | Arch | Donnie | |
| (#4) | Joan | Andrew | Lyx | |
+------+--------+--------+--------+------+
```
The number of name parts varies from 1 to 5.
I want to search in different combinations like this:
* search for *Bach Joan Mike* - get **#1, #3,#4** - in this order
* search for *Andrew Bach* - get the following order **#1,#2,#4** - in this order
I don't like the idea of using a very long query in which I write all possible permutations in order to match each part of my search string.
What I'd like to achieve is this:
* **first set**: match all those n parts (both **Bach** and **Joan** and **Bach** match, in any order)
* **second set**: match n-1 parts (at least **N-1** of my search parts match the row, in any order)
* **third set**: match **n-2** parts
I use an Oracle database and I was thinking of doing it in a stored procedure: match\_my\_set(query\_str, col1, col2, col3, col4, col5). I would write at least 5 nested loops to achieve this, but I doubt that is a professional approach.
Any help is appreciated. Thank you
|
If you're on 11g or higher, you can unpivot your columns into rows; this is using a CTE to provide your sample data:
```
with t (id, col1, col2, col3, col4, col5) as (
select 1, 'Andrew', 'Joan', 'Bach', 'Mike', null from dual
union all select 2, 'Mark', 'Andrew', 'Livy', null, null from dual
union all select 3, 'Joan', 'Arch', 'Donnie', null, null from dual
union all select 4, 'Joan', 'Andrew', 'Lyx' , null, null from dual
)
select * from t
unpivot (name for col_no in (col1 as 1, col2 as 2, col3 as 3, col4 as 4, col5 as 5));
ID COL_NO NAME
---------- ---------- ------
1 1 Andrew
1 2 Joan
1 3 Bach
1 4 Mike
2 1 Mark
2 2 Andrew
2 3 Livy
...
```
Then you can look for matches against the single name column:
```
select distinct id
from (
select * from t
unpivot (name for col_no in (col1 as 1, col2 as 2, col3 as 3, col4 as 4, col5 as 5))
)
where name in ('Bach', 'Joan', 'Mike')
order by id;
ID
----------
1
3
4
```
I *think* you want to make the ordering more complicated though, by counting how many of the terms match in each row. If so you can do:
```
select id, count(*) as cnt
from (
select * from t
unpivot (name for col_no in (col1 as 1, col2 as 2, col3 as 3, col4 as 4, col5 as 5))
)
where name in ('Bach', 'Joan', 'Mike')
group by id;
ID CNT
---------- ----------
1 3
4 1
3 1
```
and then have another level of inline view to order by the count, with some way to break ties:
```
select id
from (
select id, count(*) as cnt
from (
select * from t
unpivot (name for col_no in (col1 as 1, col2 as 2, col3 as 3, col4 as 4, col5 as 5))
)
where name in ('Bach', 'Joan', 'Mike')
group by id
)
order by cnt desc, id;
```
Which gets the same result with your sample data. Changing the `IN` condition to user `('Andrew', 'Bach')` also gets 1,2,4 with both versions.
Depending on how you're getting the values you're searching for, you might want to use an array instead (via a table collection expression and a join), or tokenise a string containing all the search words, or some other variation.
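The ranking logic itself (match count descending, then id) is easy to sanity-check outside the database; a small Python sketch over the same sample data:

```python
rows = {
    1: ["Andrew", "Joan", "Bach", "Mike"],
    2: ["Mark", "Andrew", "Livy"],
    3: ["Joan", "Arch", "Donnie"],
    4: ["Joan", "Andrew", "Lyx"],
}
terms = {"Bach", "Joan", "Mike"}

# Keep rows with at least one match, order by match count desc, then id.
ranked = sorted(
    ((rid, len(terms & set(names))) for rid, names in rows.items()
     if terms & set(names)),
    key=lambda rc: (-rc[1], rc[0]))
print(ranked)  # [(1, 3), (3, 1), (4, 1)] -> ids 1, 3, 4, as the query returns
```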
|
You can do it using Oracle's collections (which should work in 10g or later)
**Oracle Setup**:
```
CREATE TABLE TABLE_NAME( ID, Col1, Col2, Col3, Col4 ) AS
SELECT 1, 'Andrew', 'Joan', 'Bach', 'Mike' FROM DUAL UNION ALL
SELECT 2, 'Mark', 'Andrew', 'Livy', NULL FROM DUAL UNION ALL
SELECT 3, 'Joan', 'Arch', 'Donnie', NULL FROM DUAL UNION ALL
SELECT 4, 'Joan', 'Andrew', 'Lyx', NULL FROM DUAL;
CREATE TYPE stringlist AS TABLE OF VARCHAR2(100);
/
```
**Query**:
```
SELECT id,
col1,
col2,
col3,
col4
FROM (
SELECT t.*,
stringlist( col1, col2, col3, col4 )
MULTISET INTERSECT
stringlist( 'Bach', 'Joan', 'Mike' ) -- Search terms
AS names
FROM TABLE_NAME t
)
WHERE names IS NOT EMPTY
ORDER BY CARDINALITY( names ) DESC, ID;
```
**Output**:
```
ID COL1 COL2 COL3 COL4
---------- ------ ------ ------ ----
1 Andrew Joan Bach Mike
3 Joan Arch Donnie
4 Joan Andrew Lyx
```
|
How to make a query on multiple columns without writting all possible permutations?
|
[
"",
"sql",
"oracle",
""
] |
I have a database with a **Datetime** column containing intervals of +/- 30 seconds and a **Value** column containing random numbers between 10 and 100. My table looks like this:
```
datetime value
----------------------------
2016-05-04 20:47:20 12
2016-05-04 20:47:40 44
2016-05-04 20:48:30 56
2016-05-04 20:48:40 25
2016-05-04 20:49:30 92
2016-05-04 20:49:40 61
2016-05-04 20:50:00 79
2016-05-04 20:51:20 76
2016-05-04 20:51:30 10
2016-05-04 20:51:40 47
2016-05-04 20:52:40 23
2016-05-04 20:54:00 40
2016-05-04 20:54:10 18
2016-05-04 20:54:50 12
2016-05-04 20:56:00 55
```
What I want the following output:
```
datetime max_val min_val
-----------------------------------------
2016-05-04 20:45:00 92 12
2016-05-04 20:50:00 79 10
2016-05-04 20:55:00 55 55
```
Before I can even continue getting the maximum value and the minimum value, I first have to **GROUP** the **datetime** column into 5 minute intervals. According to my research I came up with this:
```
SELECT
time,
value
FROM random_number_minute
GROUP BY
UNIX_TIMESTAMP(time) DIV 300
```
Which actually **GROUPS** the datetime column into 5 minute intervals like this:
```
datetime
-------------------
2016-05-04 20:47:20
2016-05-04 20:50:00
2016-05-04 20:56:00
```
This comes very close as it takes the next closest **datetime** to, in this case, `20:45:00`, `20:50:00`, etc. I would like to **rounddown** the **datetime** to the nearest 5 minutes regardless of the **seconds**, for instance if the minutes are:
```
minutes rounddown
--------------------
10 10
11 10
12 10
13 10
14 10
15 15
16 15
17 15
18 15
19 15
20 20
```
The time could be **14:59** and I would like to **rounddown** to **10:00**. I also tried using this after hours of research:
```
SELECT
time,
time_rounded =
dateadd(mi,(datepart(mi,dateadd(mi,1,time))/5)*5,dateadd(hh,datediff(hh,0,dateadd(mi,1,time)),0))
```
But sadly this did not work. I get this error:
*Incorrect parameter count in the call to native function 'datediff'*
I tried this too:
```
SELECT
time, CASE
WHEN DATEDIFF(second, DATEADD(second, DATEDIFF(second, 0, time_out) / 300 * 300, 0), time) >= 240
THEN DATEADD(second, (DATEDIFF(second, 0, time) / 300 * 300) + 300, 0)
ELSE DATEADD(second, DATEDIFF(second, 0, time) / 300 * 300, 0)
END
```
Returning the same error.
How can I do this? And after the **datetime** is grouped, how can I get the max and min value of the data grouping?
|
Sorry if I'm repeating another answer; I'll delete this if I am.
```
SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(datetime)/300)*300) x
, MIN(value) min_value
, MAX(value) max_value
FROM my_table
GROUP
BY x;
```
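The same floor-to-300-seconds trick can be tried locally through Python's sqlite3 (a sketch; `strftime('%s', ...)` plays the role of `UNIX_TIMESTAMP` and integer division does the flooring):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (dt TEXT, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("2016-05-04 20:47:20", 12), ("2016-05-04 20:48:30", 56),
    ("2016-05-04 20:49:30", 92), ("2016-05-04 20:50:00", 79),
    ("2016-05-04 20:51:30", 10), ("2016-05-04 20:56:00", 55),
])

rows = conn.execute("""
    SELECT datetime((strftime('%s', dt) / 300) * 300, 'unixepoch') AS bucket,
           MIN(val), MAX(val)
    FROM t
    GROUP BY bucket
    ORDER BY bucket
""").fetchall()
print(rows)
# [('2016-05-04 20:45:00', 12, 92),
#  ('2016-05-04 20:50:00', 10, 79),
#  ('2016-05-04 20:55:00', 55, 55)]
```

Every timestamp is floored to its 5-minute bucket first, and then `MIN`/`MAX` aggregate per bucket, matching the expected output in the question.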
|
Use various date partition functions inside a GROUP BY.
Code:
```
SELECT from_unixtime(300 * round(unix_timestamp(r.datetime)/300)) AS 5datetime,
MAX(r.value) AS max_value,
MIN(r.value) As min_value,
(SELECT r.value FROM random_number_minute ra WHERE ra.datetime = r.datetime order by ra.datetime desc LIMIT 1) as first_val
FROM random_number_minute r
GROUP BY UNIX_TIMESTAMP(r.datetime) DIV 300
```
Output:
```
5datetime max_value min_value first_val
May, 04 2016 20:45:00 92 12 12
May, 04 2016 20:50:00 79 10 79
May, 04 2016 20:55:00 55 55 55
```
SQL Fiddle: <http://sqlfiddle.com/#!9/e16b1/17/0>
|
How to group time column into 5 minute intervals and max/min value respectively SQL?
|
[
"",
"mysql",
"sql",
"database",
"select",
""
] |
How might I select the following data in an existing table and order by a mix of letters and numbers. Here is the sample...
```
A-1
A-10
A-2
A-3
A-4
A-5
A-6
A-7
A-8
A-9
A-3a
A-3b
A-3c
B-1
B-10
B-11
B-12
B-12a
B-12b
B-13
B-2
B-3
B-4
B-5
B-6
B-7
B-8
B-9
```
|
I place this as a new answer, as it is not really an answer but rather a comparison of different approaches:
### The conclusion:
* all approaches scale fairly linearly, except XML
* XML is fastest with small row counts but gets worse with high row counts
### Create a test scenario
```
CREATE TABLE #tbl (ID INT IDENTITY,sortColumn VARCHAR(100));
INSERT INTO #tbl VALUES
('A-1')
,('A-10')
,('A-2')
,('A-3')
,('A-4')
,('A-5')
,('A-6')
,('A-7')
,('A-8')
,('A-9')
,('A-3a')
,('A-3b')
,('A-3c')
,('B-1')
,('B-10')
,('B-11')
,('B-12')
,('B-12a')
,('B-12b')
,('B-13')
,('B-2')
,('B-3')
,('B-4')
,('B-5')
,('B-6')
,('B-7')
,('B-8')
,('A-8a')
,('B-8')
,('B-9'); --30 rows
GO 1000 -- x 1.000 = 30.000 rows
```
### Matt's approach (cleaned to the necessary)
* 46 seconds on 3 mio rows
* 4.5 seconds on 300.000 rows
* 1.3 seconds on 30.000 rows
* 0.7 seconds on 3.000 rows
The code
```
SELECT ID,sortColumn
FROM
#tbl
ORDER BY
LEFT(sortColumn,CHARINDEX('-',sortColumn) -1)
,CAST((CASE
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,3)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,3)
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,2)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,2)
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,1)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,1)
ELSE NULL
END) AS INT)
,RIGHT(sortColumn,
LEN(sortColumn) -
LEN(LEFT(sortColumn,CHARINDEX('-',sortColumn) -1))
- LEN(CASE
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,3)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,3)
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,2)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,2)
WHEN ISNUMERIC(SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,1)) = 1 THEN SUBSTRING(sortColumn,CHARINDEX('-',sortColumn) + 1,1)
ELSE NULL
END)
- 1 --the '-'
),ID;
```
### Stepwise calculation in `CROSS APPLY`s, sorting on calculated columns
* 44 seconds on 3 mio rows
* 4.4 seconds on 300.000 rows
* 0.9 seconds on 30.000 rows
* 0.3 seconds on 3.000 rows
The code
```
SELECT ID,sortColumn
FROM #tbl
CROSS APPLY(SELECT CHARINDEX('-',sortColumn) AS posMinus) AS pos
CROSS APPLY(SELECT SUBSTRING(sortColumn,1,posMinus-1) AS part1
,SUBSTRING(sortColumn,posMinus+1,1000) AS part2
) AS parts
CROSS APPLY(SELECT ISNUMERIC(part2) AS p2isnum) AS checknum
CROSS APPLY(SELECT CASE WHEN p2isnum=1 THEN '' ELSE RIGHT(part2,1) END AS part3
,CASE WHEN p2isnum=1 THEN part2 ELSE SUBSTRING(part2,1,LEN(part2)-1) END AS part2New
) AS partsNew
ORDER BY part1,part2new,part3,ID;
```
### Stepwise calculation in `CROSS APPLY`s, sorting on concatenated padded string
* 42 seconds on 3 mio rows
* 4.2 seconds on 300.000 rows
* 0.7 seconds on 30.000 rows
* 0.4 seconds on 3.000 rows
The code
```
SELECT ID,sortColumn
FROM #tbl
CROSS APPLY(SELECT CHARINDEX('-',sortColumn) AS posMinus) AS pos
CROSS APPLY(SELECT SUBSTRING(sortColumn,1,posMinus-1) AS part1
,SUBSTRING(sortColumn,posMinus+1,1000) AS part2
) AS parts
ORDER BY RIGHT('.....' + part1,5) + RIGHT('.....' + part2,5 - ISNUMERIC(RIGHT(part2,1)))
,ID;
```
### Splitting with XML, sorting on concatenated padded string
* 67 seconds on 3 mio rows
* 6.2 seconds on 300.000 rows
* 0.7 seconds on 30.000 rows
* 0.3 seconds on 3.000 rows
The code
```
SELECT ID,sortColumn
FROM
(
SELECT CAST('<r>' + REPLACE(sortColumn,'-','</r><r>') + '</r>' AS XML) AS SortColumnSplitted
,*
FROM #tbl
) AS tbl
ORDER BY RIGHT('.....' + SortColumnSplitted.value('r[1]','varchar(max)'),5) + RIGHT('.....' + SortColumnSplitted.value('r[2]','varchar(max)'),5 - ISNUMERIC(RIGHT(SortColumnSplitted.value('r[2]','varchar(max)'),1)))
,ID;
```
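The sort key all of these approaches compute can be stated compactly outside SQL; as a cross-check, here is a Python sketch of the same (prefix letter, number, trailing letter) ordering:

```python
import re

def mixed_key(code):
    """Split 'B-12a' into ('B', 12, 'a') for natural ordering."""
    prefix, rest = code.split("-", 1)
    m = re.fullmatch(r"(\d+)([a-z]*)", rest)
    return (prefix, int(m.group(1)), m.group(2))

data = ["A-10", "A-2", "A-3b", "A-3", "B-12a", "B-2", "B-12"]
print(sorted(data, key=mixed_key))
# ['A-2', 'A-3', 'A-3b', 'A-10', 'B-2', 'B-12', 'B-12a']
```

Converting the middle segment to an integer before comparing is exactly what the `CAST ... AS INT` / padded-string tricks above simulate in T-SQL.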
|
The most powerful solution is to create an [SQL CLR function](https://stackoverflow.com/a/10840893/5931028). That's a bit tough, though.
Another approach is writing an insert/update trigger that splits the value in the mixed column with TSQL and stores the three parts (character, number, character) in specific helper columns (that you can use to sort).
Based on your examples, you can experiment with the splitting along the lines of this code:
```
declare @value nvarchar(10) = 'B-12b';
-- first part
select substring(@value, 1, 1)
-- second part
select case when isnumeric(right(@value, 1)) = 1
then substring(@value, 3, len(@value) - 2)
else substring(@value, 3, len(@value) - 3)
end
-- third part
select case when isnumeric(right(@value, 1)) = 1
then '_'
else right(@value, 1)
end
```
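The trigger idea above could be sketched like this (hedged: the table name `dbo.MyTable`, the key column `ID`, and the helper columns `sortPart1`–`sortPart3` are assumptions, not part of the original schema; the split logic is the same as in the snippet above):

```sql
CREATE TRIGGER trg_MyTable_SplitSort
ON dbo.MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Recompute the three sortable parts for every inserted/updated row
    UPDATE t
    SET sortPart1 = SUBSTRING(i.sortColumn, 1, 1),
        sortPart2 = CASE WHEN ISNUMERIC(RIGHT(i.sortColumn, 1)) = 1
                         THEN SUBSTRING(i.sortColumn, 3, LEN(i.sortColumn) - 2)
                         ELSE SUBSTRING(i.sortColumn, 3, LEN(i.sortColumn) - 3) END,
        sortPart3 = CASE WHEN ISNUMERIC(RIGHT(i.sortColumn, 1)) = 1
                         THEN '_'
                         ELSE RIGHT(i.sortColumn, 1) END
    FROM dbo.MyTable t
    JOIN inserted i ON i.ID = t.ID;
END
```

With the parts materialized, the expensive splitting happens once per write instead of on every `ORDER BY`.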
|
SQL: Order by substring of mixed letters and numbers
|
[
"",
"sql",
"sql-server-2008",
""
] |
I want to know what's the best datatype to store these:
* `null`
* `0`
* `/* the length of other numbers is always 7 digits */`
* `0000000`
* `0000001`
* `0000010`
* `0000011`
* `/* and so on */`
* `1111111`
I have tested that `INT` works. But since all my numbers are made up of `0` and `1` digits, is there a better *datatype* for this?
|
What you are showing are binary numbers
* 0000000 = 0
* 0000001 = 2^0 = 1
* 0000010 = 2^1 = 2
* 0000011 = 2^0 + 2^1 = 3
So simply store these numbers in an integer data type (which is internally stored with bits as shown of course). You could use BIGINT for this, as recommended in the docs for bitwise operations (<http://dev.mysql.com/doc/refman/5.7/en/bit-functions.html>).
Here is how to set the bitmask to exactly flag n (clearing all other flags):
```
UPDATE mytable
SET bitmask = POW(2, n-1)
WHERE id = 12345;
```
Here is how to add a flag:
```
UPDATE mytable
SET bitmask = bitmask | POW(2, n-1)
WHERE id = 12345;
```
Here is how to check a flag:
```
SELECT *
FROM mytable
WHERE bitmask & POW(2, n-1)
```
But as mentioned in the comments: In a relational database you usually use columns and tables to show attributes and relations rather than an encoded flag list.
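For illustration only (table and column names are hypothetical), the relational alternative stores one row per set flag instead of an encoded bitmask:

```sql
CREATE TABLE entity_flag (
    entity_id INT NOT NULL,
    flag_no   TINYINT NOT NULL,   -- flag position, 1..7
    PRIMARY KEY (entity_id, flag_no)
);

-- "Is flag 3 set for entity 12345?" becomes an EXISTS check
-- instead of bit arithmetic:
SELECT EXISTS (
    SELECT 1
    FROM entity_flag
    WHERE entity_id = 12345 AND flag_no = 3
) AS flag3_is_set;
```

This makes individual flags indexable and queryable without bitwise operators, at the cost of more rows.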
|
As you've said in a comment, the values `01` and `1` should not be treated as equivalent (which rules out binary where they would be), so you could just store as a string.
It might actually be more efficient than storing as a byte plus offset, since that would take up 9 characters, whereas you need a maximum of 7.
Simply store it as a `varchar(7)` or whatever the equivalent is in MySQL. No need to be clever about it, especially since you are interested in extracting positional values.
Don't forget to bear in mind that this takes up a lot more storage than storing as a `bit(7)`, since you are essentially storing 7 bytes (or whatever the storage unit is for each level of precision in a varchar), not 7 bits.
If that's not an issue then no need to over-engineer it.
|
Best datatype to store a long number made of 0 and 1
|
[
"",
"mysql",
"sql",
"sqldatatypes",
""
] |
I have 3 tables.
* InvoiceOriginal
* Invoice
* InvoiceHistory
The Invoice table has a foreign key constraint.
Each entry in the Invoice table has a corresponding entry in InvoiceOriginal.
The InvoiceOriginal table stores the original values of the invoice, and the Invoice table stores the values which have been modified by the user.
This is done to compute differences at the time of submission.
The SQL I am using is:
```
DELETE i
FROM invoice i
INNER JOIN InvoiceHistory aih
ON i.ClientId = aih.HistoryClientNumber
AND i.invoiceNumber = HistoryInvoiceNumber
```
However, deletion is not possible because of the foreign key constraint.
The table is as under:
```
Invoice InvoiceOriginal InvoiceHistory
Id FK_InvoiceId ClientId
ClientId ClientId InvoiceNumber
InvoiceNumber
```
I need to delete the entries in Invoice and InvoiceOriginal once there is an entry for that invoice number in InvoiceHistory for the same ClientId.
|
You cannot issue a `DELETE` statement against more than one table at a time; you need individual `DELETE` statements for each of the related tables, removing the child rows before deleting the parent record(s).
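A sketch under the question's naming (assumption: the foreign key points from `InvoiceOriginal.FK_InvoiceId` to `Invoice.Id`) — delete the child rows first, then the parent:

```sql
BEGIN TRANSACTION;

-- Child rows first, so the FK constraint is satisfied
DELETE io
FROM InvoiceOriginal io
INNER JOIN Invoice i ON io.FK_InvoiceId = i.Id
INNER JOIN InvoiceHistory aih
    ON i.ClientId = aih.HistoryClientNumber
   AND i.InvoiceNumber = aih.HistoryInvoiceNumber;

-- Then the parent rows
DELETE i
FROM Invoice i
INNER JOIN InvoiceHistory aih
    ON i.ClientId = aih.HistoryClientNumber
   AND i.InvoiceNumber = aih.HistoryInvoiceNumber;

COMMIT TRANSACTION;
```

Wrapping both statements in a transaction keeps the two tables consistent if either delete fails.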
|
I'm fairly sure you can't delete from multiple tables with a single statement. I would normally delete the child rows first with one statement and then delete the parent record. You may wish to do this inside a transaction if you might need to roll back on failure.
Alternatively, you could enable CASCADE ON DELETE on the foreign key which would automatically cascade the deletions through the child records if that is something that is suitable for this system.
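The cascade option could look like this (hedged: the constraint name is made up, and the FK is assumed to go from `InvoiceOriginal.FK_InvoiceId` to `Invoice.Id`):

```sql
-- Drop the existing constraint (hypothetical name), then re-add it with cascade
ALTER TABLE InvoiceOriginal
    DROP CONSTRAINT FK_InvoiceOriginal_Invoice;

ALTER TABLE InvoiceOriginal
    ADD CONSTRAINT FK_InvoiceOriginal_Invoice
    FOREIGN KEY (FK_InvoiceId) REFERENCES Invoice(Id)
    ON DELETE CASCADE;
```

With the cascade in place, a single `DELETE` against `Invoice` automatically removes the matching `InvoiceOriginal` rows.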
|
Delete from 2 tables using INNER JOIN
|
[
"",
"sql",
"sql-server",
"sql-delete",
""
] |
I have a table of course :
[](https://i.stack.imgur.com/7jdtU.png)
I want to search the data based on the columns Country, University, Level, Interest and Substream.
The query I've tried, which doesn't give the right result:
```
select *
from edu_college_desc
where (country = @country and
university = @university and
leveln = @level and
interest = @interest and
substream=@substream)
or (country = @country or
university = @university or
leveln = @level or
interest = @interest or
substream = @substream)
```
What I want: if I select only the country, the data should be filtered by country alone; if I select only the stream, by stream alone; and if I select two or more, by all of the selected columns.
How can I get correct results?
|
Try this method,
```
select * from edu_college_desc
where country = ISNULL(@country, country)
and university = ISNULL(@university, university)
and leveln = ISNULL(@level, leveln)
and interest = ISNULL(@interest, interest)
and substream = ISNULL(@substream, substream)
```
In this approach you pass `NULL` for any parameter that is not selected (i.e. if you set a value only for @university and the others are `NULL`, the result is filtered by `university = @university` alone).
|
For each category, check if the parameter is null (not given) or the same as specified:
```
select *
from edu_college_desc
where (@country is null or country = @country)
and (@university is null or university = @university)
and etc...
```
And, of course, you can also use MS SQL Server's `ISNULL`, just as in Abdul Rasheed's answer. Convenient, but less portable.
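A portable sketch using standard `COALESCE` instead (same caveat as the `ISNULL` version: rows where the column itself is `NULL` won't match):

```sql
select *
from edu_college_desc
where country    = coalesce(@country, country)
  and university = coalesce(@university, university)
  and leveln     = coalesce(@level, leveln)
  and interest   = coalesce(@interest, interest)
  and substream  = coalesce(@substream, substream)
```

`COALESCE` is defined in the SQL standard, so this form works unchanged across most databases.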
|
Fetch data from table based on multiple columns from same table
|
[
"",
"sql",
"sql-server",
""
] |
I have this table:
```
// numbers
+---------+------------+
| id | numb |
+---------+------------+
| int(11) | bit(10) |
+---------+------------+
| 1 | 1001100111 |
| 2 | 0111000101 |
| 3 | 0001101010 |
| 4 | 1111111011 |
+---------+------------+
```
Now I'm trying to get third digit *(left to right)* from those number. Example:
```
1001100111
^ // I want to get 0
```
So it is expected result:
```
+--------------------+
| substr(numb, 3, 1) |
+--------------------+
| 0 |
| 1 |
| 0 |
| 1 |
+--------------------+
```
---
Here is my query:
```
SELECT SUBSTR(numb, 3, 1) FROM numbers
```
But it doesn't work. because `bit(10)` isn't string and `SUBSTR()` cannot parse it. Is there any workaround?
|
Although you could use `substr` after converting to `varchar`, a simpler approach for the `BIT(...)` data type is to use bit operators.
Since, according to your comment, it is OK to extract the 8th bit from the right rather than the third bit from the left, this will produce the expected result:
```
select id, (x>>7)&1
from test
```
[Demo.](http://sqlfiddle.com/#!9/7fce0/1/0)
> Is it possible to I update just one of its digits? I mean I want to update seventh digit (right to left) from `1001011101` and make it `0`?
You can set a single bit to zero like this:
```
UPDATE test SET x = x & b'1110111111' WHERE id=3
```
Position of 0 indicates the bit you are setting to zero.
If you want to set it to `1`, use
```
UPDATE test SET x = x | b'0001000000' WHERE id=3
```
You can have more than one zero in the first example if you would like to set multiple bits to zero. Similarly, you can have more than one `1` in the second example if you need to set multiple bits to `1`.
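As a general rule of thumb (a sketch, not part of the original answer): for a `BIT(10)` column, the k-th digit from the *left* is the (10 − k + 1)-th bit from the right, so shifting by `10 - k` generalizes the `x>>7` above:

```sql
-- k = 3 gives the third digit from the left (same as (x>>7)&1):
SELECT id, (numb >> (10 - 3)) & 1 AS third_digit
FROM numbers;
```

Substituting any k from 1 to 10 extracts the corresponding digit without string conversion.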
|
You could convert the `BIT` value to a string of `0`/`1` characters with `BIN()` (left-padded to the full 10-bit width, since `BIN()` drops leading zeros) and then use `SUBSTR`:
```
SELECT SUBSTR(LPAD(BIN(numb), 10, '0'), 3, 1)
FROM numbers
```
Or using `LEFT` and `RIGHT`:
```
SELECT LEFT(RIGHT(LPAD(BIN(numb), 10, '0'), 8), 1)
FROM numbers
```
|
How to use substr(...) for BIT(...) data type columns?
|
[
"",
"mysql",
"sql",
""
] |
I have a table which maps grades to percentage ranges.
I want to run a query against this table that fetches the grade for a given percentage.
For example, if a student gets 72%, I want to show the grade as `C`.
How do I get the grade from the table?
Please refer this table picture:
[](https://i.stack.imgur.com/y5EWw.jpg)
|
```
Drop Table Grades
Drop Table Students
Create Table Students (Name Varchar(200), Percentage Numeric(5,2))
Insert Students Values ('John', 0.00)
Insert Students Values ('Jane', 38.00)
Insert Students Values ('Joe', 45.00)
Insert Students Values ('Greg', 50.00)
Insert Students Values ('Buck', 55.00)
Insert Students Values ('Harold', 60.00)
Insert Students Values ('Jack', 65.00)
Insert Students Values ('Bill', 68.00)
Insert Students Values ('Gerald', 75.00)
Insert Students Values ('Steve', 79.00)
Insert Students Values ('Walter', 85.00)
Insert Students Values ('Mike', 92.00)
Insert Students Values ('Mary', 100.00)
Insert Students Values ('Mark', 101.00)
Select * From Students
Create Table Grades (Grade Char(2), Percentage Numeric(5,2))
Go
Insert Grades Values ('A*', 101.00)
Insert Grades Values ('A', 85.00)
Insert Grades Values ('B', 75.00)
Insert Grades Values ('C', 65.00)
Insert Grades Values ('D', 55.00)
Insert Grades Values ('E', 45.00)
Insert Grades Values ('F', 0.00)
Select S.*, G.Grade
From (
    Select *,
           IsNull(Lead(Percentage) Over (Order By Percentage),
                  (Select Max(Percentage) + .01 From Grades)) As NextPercentage
    From Grades
) G
Join Students S
  On S.Percentage >= G.Percentage And S.Percentage < G.NextPercentage
```
|
`ORDER BY Percentage DESC`, with `<=` the student's percentage in the `WHERE` clause, and `TOP 1 Grade` will give the expected result:
```
CREATE TABLE #GradeMaster (Grade VARCHAR(2), Percentage DECIMAL(5,2))
INSERT INTO #GradeMaster
SELECT 'A*', 101 UNION
SELECT 'A', 85 UNION
SELECT 'B', 75 UNION
SELECT 'C', 65 UNION
SELECT 'D', 55 UNION
SELECT 'E', 45 UNION
SELECT 'F', 0
SELECT TOP 1 Grade
FROM #GradeMaster
WHERE Percentage <= 72
ORDER BY Percentage DESC
DROP TABLE #GradeMaster
```
|
How to get numeric range between row value SQL
|
[
"",
"sql",
""
] |
This is the SQL statement, which I took from debug mode in VB.Net.
```
SELECT dt_Date as 'Date',
s_Time as 'Time',
s_For as 'For',
s_Categ as 'Category',
s_Count as 'Count',
s_Remarks as 'Remarks'
FROM Entry
WHERE (s_ENo = '22' and dt_date BETWEEN '06-05-16' And '27-05-16')
```
I am not sure what's wrong with the above statement, since everything looks fine to my eyes.
**Description of the Error Message**
> Additional information: Conversion failed when converting date and/or
> time from character string.
Any suggestions how to solve this riddle?
|
If you are using MS SQL Server, then use the following instructions.
The basic solution is that you have to provide the date in either `MM/DD/YYYY` or `YYYY-MM-DD` format in an MS SQL query.
So, before passing a date to the query, convert it to one of those formats.
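For example, assuming the literals in the question mean 6–27 May 2016 (the `27` can only be a day, so this reading seems safe), an unambiguous rewrite would be:

```sql
SELECT dt_Date AS 'Date', s_Time AS 'Time', s_For AS 'For',
       s_Categ AS 'Category', s_Count AS 'Count', s_Remarks AS 'Remarks'
FROM Entry
WHERE s_ENo = '22'
  AND dt_date BETWEEN '2016-05-06' AND '2016-05-27';
```

The `YYYYMMDD` form (`'20160506'`) is even safer in SQL Server, as it is interpreted the same way regardless of language and `DATEFORMAT` settings.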
|
Instead of string literals, use strongly-typed parameters. In addition to avoiding data type conversion issues on the SQL Server side, parameters:
* Avoid the need to format date/time string literals in a particular way
* Improve performance by allowing execution plan reuse
* Prevent SQL injection (barring dynamic SQL on the server side)
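On the T-SQL side, the parameterized shape looks like the sketch below (via `sp_executesql`; from VB.Net you would attach `SqlParameter` objects to the command instead — the parameter names and types here are illustrative assumptions):

```sql
EXEC sp_executesql
    N'SELECT dt_Date, s_Time, s_For, s_Categ, s_Count, s_Remarks
      FROM Entry
      WHERE s_ENo = @eno AND dt_date BETWEEN @from AND @to',
    N'@eno varchar(10), @from date, @to date',
    @eno = '22', @from = '20160506', @to = '20160527';
```

Because `@from` and `@to` are typed as `date`, no string-to-date conversion ambiguity can occur.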
|
SQL statement error in VB.Net
|
[
"",
"mysql",
"sql",
"sql-server",
"vb.net",
"visual-studio",
""
] |