| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Hi, I have a weather database in SQL Server 2008 that is filled with weather observations taken every 20 minutes. I want to get the weather records for each hour, not every 20 minutes. How can I filter the results so that only the first observation for each hour is returned?
Example:
```
7:00:00
7:20:00
7:40:00
8:00:00
```
Desired Output
```
7:00:00
8:00:00
```
|
To get exactly what you listed as your desired result (apart from the fact that it's an `INT` instead of a `TIME`; nothing hard to fix):
```
SELECT DISTINCT DATEPART(HOUR, TimeStamp)
FROM Observations
```
You could also add in `CAST(TimeStamp AS DATE)` if you wanted that as well.
---
Assuming you want the full rows as well, however, it depends a little; but for exactly what you've described, the simple solution is just to say:
```
SELECT *
FROM Observations
WHERE DATEPART(MINUTE, TimeStamp) = 0
```
That fails if you have missing data, though, which is pretty common.
---
If you do have some hours where you want data but don't have a row at :00, you could do something like this:
```
WITH cte AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY CAST(TimeStamp AS DATE), DATEPART(HOUR, TimeStamp) ORDER BY TimeStamp) AS n
FROM Observations
)
SELECT *
FROM cte
WHERE n = 1
```
That'll take the first one for any date/hour combination.
Of course, you're still leaving out anything where you had no data for an entire hour. That would require a numbers table, if you even want to return those instances.
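If it helps to see the pattern end to end, here is a minimal runnable sketch of the CTE approach using SQLite through Python's `sqlite3` module in place of SQL Server. The table name and sample data are invented; `date()` and `strftime('%H', …)` stand in for `CAST(… AS DATE)` and `DATEPART(HOUR, …)`, and window functions require SQLite 3.25+ (bundled with Python 3.8+ on most platforms):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Observations(TimeStamp TEXT, Temp REAL);
INSERT INTO Observations VALUES
  ('2016-01-05 07:00:00', 1.0),
  ('2016-01-05 07:20:00', 1.5),
  ('2016-01-05 07:40:00', 2.0),
  ('2016-01-05 08:00:00', 2.5);
""")
# Number the observations within each (date, hour) bucket, keep the first.
rows = con.execute("""
    WITH cte AS (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY date(TimeStamp), strftime('%H', TimeStamp)
                   ORDER BY TimeStamp) AS n
        FROM Observations)
    SELECT TimeStamp FROM cte WHERE n = 1 ORDER BY TimeStamp
""").fetchall()
print([r[0] for r in rows])  # ['2016-01-05 07:00:00', '2016-01-05 08:00:00']
```

Unlike the `WHERE DATEPART(MINUTE, …) = 0` filter, this still returns a row for an hour whose earliest reading is at :20 or :40.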
|
You can use a formula like the following to get the start of the hour for a given time point (in this case `GETUTCDATE()`).
```
SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, 0, GETUTCDATE()) / 60 * 60, 0)
```
Then you can use this formula in the `WHERE` clause of your SQL query to get the data you want.
|
SQL Getting data by the hour
|
[
"",
"sql",
"sql-server",
""
] |
I am new to SQL. I am using SQL Server.
I am writing a query to get top scores (sc) of each user (unique).
I have written a query which results in a table having non-unique values of pname and pid.
I have the following resultant table
```
id pid pname sc
___________________________
1584 268 user1 99
1608 268 user1 99
1756 268 user1 95
1750 268 user1 95
1240 268 user1 94
1272 268 user1 94
1290 268 user1 93
1298 268 user1 93
1177 268 user1 93
1488 268 user1 93
1401 268 user1 92
1407 268 user1 92
1482 268 user1 89
1245 268 user1 89
1705 268 user1 88
2848 310 user2 81
2888 310 user2 81
1178 268 user1 80
2084 50 user3 80
2727 50 user3 80
2729 50 user3 80
2782 50 user3 80
2792 50 user3 79
2848 50 user3 79
2851 310 user2 79
2833 310 user2 78
2851 50 user3 78
2857 50 user3 78
2619 50 user3 77
2890 50 user3 77
2593 310 user2 77
2596 310 user2 77
2792 310 user2 77
2810 310 user2 77
2806 310 user2 76
```
from this query
```
SELECT
t.id,
t.pid,
u.pname,
t.sc
FROM
table t,
table u
WHERE
t.pid=u.pid
GROUP BY
id,
pid,
u.pname
ORDER BY
sc DESC
```
What I want is to have unique pnames in my resultant table.
For example the required output should be:
```
id pid pname sc
___________________________
1584 268 user1 99
2851 310 user2 79
2084 50 user3 80
```
i.e. first maximum 'sc' of each user
Thank you!
|
The typical approach to this problem is not `GROUP BY` but window functions. These are ANSI standard functions that include `ROW_NUMBER()`:
```
SELECT id, pid, pname, sc
FROM (SELECT t.id, t.pid, u.pname, t.sc,
ROW_NUMBER() OVER (PARTITION BY u.pid ORDER BY t.sc DESC) as seqnum
FROM table t JOIN
table u
ON t.pid = u.pid
) tu
WHERE seqnum = 1;
```
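As a rough, runnable illustration of the same greatest-n-per-group pattern, here is a sketch using SQLite via Python's `sqlite3` standing in for SQL Server. The single denormalized table and sample rows are invented from the question's data, and `id` is added as a tie-breaker so ties on `sc` resolve deterministically:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE scores(id INT, pid INT, pname TEXT, sc INT);
INSERT INTO scores VALUES
  (1584, 268, 'user1', 99), (1608, 268, 'user1', 99),
  (2851, 310, 'user2', 79), (2833, 310, 'user2', 78),
  (2084, 50,  'user3', 80), (2792, 50,  'user3', 79);
""")
# One row per pid: the highest score, lowest id on ties.
rows = con.execute("""
    SELECT id, pid, pname, sc
    FROM (SELECT id, pid, pname, sc,
                 ROW_NUMBER() OVER (PARTITION BY pid
                                    ORDER BY sc DESC, id) AS seqnum
          FROM scores) t
    WHERE seqnum = 1
    ORDER BY pid
""").fetchall()
print(rows)
# [(2084, 50, 'user3', 80), (1584, 268, 'user1', 99), (2851, 310, 'user2', 79)]
```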
|
You can try this:
```
select id,pid,pname,sc
from
(
select t.id,t.pid,u.pname,t.sc,
DENSE_RANK() over (partition by pname order by sc desc) as rank
from t, u where t.pid = u.pid
) x
where x.rank=1;
```
I created a table based on your given records; after running the following, I get the output below.
```
select id,pid,pname,sc from
(
select id,pid,pname,sc,
DENSE_RANK() over (partition by pname order by sc desc) as rank
from t
) x
where x.rank=1;
```
Query result:
[](https://i.stack.imgur.com/edghY.png)
|
Select records based on one distinct column
|
[
"",
"sql",
"sql-server",
""
] |
Is it possible to insert multiple values in a table with the same data except from the primary key (`ID`)?
For instance:
```
INSERT INTO apples (name, color, quantity)
VALUES of(txtName, txtColor, txtQuantity)
```
Is it possible to insert 50 red apples with different IDs?
```
ID(PK) |Name | Color | Quantity
1 apple red 1
2 apple red 1
```
Is it possible like this?
|
You can use `INSERT ALL`, or `INSERT ... SELECT` with `UNION ALL`, like this.
```
INSERT ALL
INTO apples (name, color, quantity) VALUES ('apple', 'red', '1')
INTO apples (name, color, quantity) VALUES ('apple', 'red', '1')
INTO apples (name, color, quantity) VALUES ('apple', 'red', '1')
SELECT 1 FROM DUAL;
```
or
```
insert into apples (name, color, quantity)
select 'apple', 'red', '1' from dual
union all
select 'apple', 'red', '1' from dual
```
Prior to Oracle 12c, you can create a `SEQUENCE` and use it to populate your ID column. If you are using Oracle 12c or later, you can make your ID column an identity column:
```
CREATE TABLE apples(ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY);
```
Also if the sequence is not important and you just need a different/unique ID then you can use
```
CREATE TABLE apples( ID RAW(16) DEFAULT SYS_GUID() )
```
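For what it's worth, here is a sketch of the "many identical rows, distinct auto-generated IDs" idea using SQLite through Python's `sqlite3`. An `INTEGER PRIMARY KEY` column plays the role of Oracle's identity column, and a recursive CTE plays the role of row generation (e.g. Oracle's `CONNECT BY LEVEL <= 50`); all names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE apples(
        id INTEGER PRIMARY KEY,   -- auto-assigned, like an identity column
        name TEXT, color TEXT, quantity INT)
""")
# Generate 50 identical rows; only the auto-assigned id differs.
con.execute("""
    WITH RECURSIVE seq(i) AS (
        SELECT 1 UNION ALL SELECT i + 1 FROM seq WHERE i < 50
    )
    INSERT INTO apples(name, color, quantity)
    SELECT 'apple', 'red', 1 FROM seq
""")
total, distinct_ids = con.execute(
    "SELECT COUNT(*), COUNT(DISTINCT id) FROM apples").fetchone()
print(total, distinct_ids)  # 50 50
```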
|
You can use `SEQUENCE`.
```
CREATE SEQUENCE seq_name
START WITH 1
INCREMENT BY 1
```
Then in your `INSERT` statement, use this
```
INSERT INTO apples (id, name, color, quantity)
VALUES (seq_name.NEXTVAL, 'apple', 'red', 1);
```
|
SQL insert same values with different IDs in 1 query
|
[
"",
"sql",
"oracle",
"sql-insert",
""
] |
I have a simple DB which has two tables, serie and season.
Serie has this structure:
```
create table serie(
name varchar2(30) not null,
num_seasons number(2,0),
launch date,
constraint pk_serie primary key(name)
);
```
Whereas season has this other structure:
```
create table season(
name_serie varchar2(30) not null,
num_season number(2,0) not null,
launch date not null,
end date,
constraint pk_season primary key(name_serie,num_season),
constraint fk_season foreign key(name_serie) references serie(name),
constraint check_time check(launch<end)
);
```
For example, for a serie with two seasons (num\_seasons=2), it would have in season table two rows, num\_season=1 and num\_season=2.
I would like the num\_seasons column in table serie to be a count of how many rows are in season table with the name of the serie. In fact, I want that column to depend in changes in the season table, if you insert a new season of a serie, increase the num\_seasons value by 1.
Thank you for your help :)
|
**The other answers are simply wrong**, since they tell you to perform an insert using a select that checks how many seasons exist only once (at insert time).
**What would happen on an UPDATE or DELETE on the season table?**
The answer is obvious: your counters would be out of sync and the data would be unreliable.
For this purpose you have to modify the `serie` table, in particular:
```
num_seasons NUMBER(2,0) DEFAULT 0
```
and create some [TRIGGER](https://docs.oracle.com/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm) on `season` table:
> Triggers are procedures that are stored in the database and are
> implicitly run, or fired, when something happens.
>
> Traditionally, triggers supported the execution of a PL/SQL block when
> an INSERT, UPDATE, or DELETE occurred on a table or view. Triggers
> support system and other data events on DATABASE and SCHEMA. Oracle
> Database also supports the execution of PL/SQL or Java procedures.
```
CREATE OR REPLACE TRIGGER incSeasonNum AFTER INSERT ON season
FOR EACH ROW
BEGIN
  UPDATE serie SET num_seasons = num_seasons + 1
  WHERE name = :NEW.name_serie;
END;
/
```
Another one in case for any rows deletion:
```
CREATE OR REPLACE TRIGGER decSeasonNum AFTER DELETE ON season
FOR EACH ROW
BEGIN
  UPDATE serie SET num_seasons = num_seasons - 1
  WHERE name = :OLD.name_serie;
END;
/
```
And, to be safe against updates that move a season to a different serie:
```
CREATE OR REPLACE TRIGGER incDecSeasonNum AFTER UPDATE OF name_serie ON season
FOR EACH ROW
BEGIN
  UPDATE serie SET num_seasons = num_seasons - 1
  WHERE name = :OLD.name_serie;
  UPDATE serie SET num_seasons = num_seasons + 1
  WHERE name = :NEW.name_serie;
END;
/
```
Hope this helps.
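The same counter-maintenance idea can be sketched in SQLite (via Python's `sqlite3`), where triggers are row-level by default and use `NEW`/`OLD` without the colon prefix. The schema below is a simplified, invented stand-in for the `serie`/`season` tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE serie(name TEXT PRIMARY KEY, num_seasons INT DEFAULT 0);
CREATE TABLE season(name_serie TEXT, num_season INT,
                    PRIMARY KEY(name_serie, num_season));

-- keep serie.num_seasons in sync with the season table
CREATE TRIGGER incSeasonNum AFTER INSERT ON season
BEGIN
    UPDATE serie SET num_seasons = num_seasons + 1 WHERE name = NEW.name_serie;
END;
CREATE TRIGGER decSeasonNum AFTER DELETE ON season
BEGIN
    UPDATE serie SET num_seasons = num_seasons - 1 WHERE name = OLD.name_serie;
END;

INSERT INTO serie(name) VALUES ('Lost');
INSERT INTO season VALUES ('Lost', 1), ('Lost', 2);
DELETE FROM season WHERE num_season = 2;
""")
remaining = con.execute("SELECT num_seasons FROM serie").fetchone()[0]
print(remaining)  # 1
```

After two inserts and one delete, the counter lands on 1 without the application ever touching it.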
|
You can get the number of seasons using a sub-select. No need to store it in serie at all. You could make a view for this.
If you store the seasons anyway, storing the number of seasons is redundant information. You should avoid storing redundant information unless for specific performance reasons.
```
SELECT
s.name,
( SELECT count(*)
FROM season ss
WHERE ss.name_serie = s.name) as season_count
FROM
serie s
```
|
How to update a row on parent table after child table has been changed
|
[
"",
"sql",
"oracle",
""
] |
I have a table where the first column is an integer and the second column is a string. What I want is for the first column to sort by sequence first, and then rows with the same value in the second column should group together one after another. To simulate my idea, please see below. Not sure if this is even possible?
Sorting 2 columns
[](https://i.stack.imgur.com/olrTO.jpg)
You can see the correct order should be as below: the Seq runs in ascending order, but when there is the same phase it picks the rows below it before moving on to the next. So the sequence is arranged correctly, and yet the grouping of phases is also correct.
[
|
This might be close to what you want:
```
SELECT t1.*
FROM PlanActual AS t1
INNER JOIN (
SELECT MIN(Seq) AS minSeq, Phase
FROM PlanActual
GROUP BY Phase
) AS t2 ON t1.Phase = t2.Phase
ORDER BY t2.minSeq, t1.Seq
```
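A small runnable sketch of this min-`Seq`-per-`Phase` ordering, using SQLite via Python's `sqlite3` (the `PlanActual` data here is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE PlanActual(Seq INT, Phase TEXT);
INSERT INTO PlanActual VALUES
  (1, 'A'), (2, 'B'), (3, 'A'), (4, 'C'), (5, 'B');
""")
# Order groups by the smallest Seq in each Phase, then by Seq within a group.
rows = con.execute("""
    SELECT t1.Seq, t1.Phase
    FROM PlanActual t1
    JOIN (SELECT MIN(Seq) AS minSeq, Phase
          FROM PlanActual GROUP BY Phase) t2
      ON t1.Phase = t2.Phase
    ORDER BY t2.minSeq, t1.Seq
""").fetchall()
print(rows)  # [(1, 'A'), (3, 'A'), (2, 'B'), (5, 'B'), (4, 'C')]
```

Phase A (first seen at Seq 1) comes first with both its rows together, then B, then C.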
|
Your requirement can be solved simply by:
```
SELECT * FROM table ORDER BY column1, column2
```
Note that if `column1` is sorted first, then `column2` will only be sorted in a manner that does not violate the `column1` ordering.
consider this table :
```
----------------------------------
empid | empname | salary |
----------------------------------
200 | Johnson | 10000 |
----------------------------------
400 | Adam | 12000 |
----------------------------------
300 | Mike | 11000 |
----------------------------------
100 | Johnson | 17000 |
----------------------------------
500 | Tomyknoker | 10000 |
----------------------------------
```
If you sort by `empid` and `empname`, the output would be as below:
---
```
empid | empname | salary |
----------------------------------
100 | Johnson | 17000 |
----------------------------------
200 | Johnson | 10000 |
----------------------------------
300 | Mike | 11000 |
----------------------------------
400 | Adam | 12000 |
----------------------------------
500 | Tomyknoker | 10000 |
----------------------------------
```
So here, `empid` is sorted first, as `100, 200, 300, 400, 500`.
Now `empid` `100` corresponds to `Johnson -> 17000` and `200` corresponds to `Johnson -> 10000`; only once `empid` is fully ordered does the sort move on to `empname`.
Get the idea?
|
How to sort two columns (integer and string)?
|
[
"",
"sql",
"columnsorting",
""
] |
I am using Delphi RAD Studio 9 and Firebird 2.5
I want to use the count of the number of rows that fit a certain condition.
When I put
```
Select count(*) from VRDB where Lname - 'SMITH'
```
into the SQL property, then upon opening SQLQuery1, I get the error message
> SQLQuery1: Unable to determine field names for %s.
I assume this means Firebird or Delphi doesn't know what to do with the result.
How do I trap the result of the query? (My query statements work fine using isql.)
|
Using a Firebird database in Delphi 10 Seattle, the following works fine for me:
```
procedure TForm2.btnCountClick(Sender: TObject);
begin
SqlQuery3.Sql.Text := 'select count(*) from maimages';
SqlQuery3.Open;
Caption := IntToStr(SqlQuery3.Fields[0].AsInteger);
end;
```
Btw, which Delphi version do you mean by "RAD Studio 9"? In case you mean Delphi 2009, the earliest Delphi version I have after D7 is XE4, and the above code also works fine with that.
|
Try aliasing the count so the field name can be determined:
```
select count(*) CNT from VRDB where Lname = 'SMITH'
```
|
How do I access the result of a Select count(*) using an SQLQuery component in Delphi
|
[
"",
"sql",
"delphi",
"firebird",
"dbexpress",
""
] |
I have a data a set which is already grouped by `Person` and `Class` columns and I use this query for this process:
```
SELECT Person,Class, MAX(TimeSpent) as MaxTimeSpent
FROM Persons
GROUP BY Person,Class
```
Output:
```
Person Class MaxTimeSpent
--------|--------|-------------|
MJ | 0 | 0 |
MJ | 1 | 659 |
MJ | 2 | 515 |
```
What I want to do is to get the row that has the maximum `Class` value in this data set (which is the 3rd row for this example).
How can I do this ? Any help would be appreciated.
|
Try this one
```
SELECT T.*
FROM
(SELECT Person,
Class,
MAX(TimeSpent) AS MaxTimeSpent
FROM Persons AS P
WHERE Person = 'MJ'
GROUP BY Person, Class) AS T
WHERE T.class = (
SELECT MAX(class) FROM Persons AS P
WHERE P.person = T.person)
```
|
You can use cte for that.
```
declare @Persons table (person nvarchar(10),Class int ,TimeSpent int)
insert into @Persons
select 'MJ',0,0 union all
select 'MJ',1,659 union all
select 'MJ',2,515
;with cte
as(
SELECT Person,Class,TimeSpent , row_number() over(partition by Person order by Class desc ) as RN
FROM @Persons
)
select * from cte where RN=1
```
Solution 2: without a CTE:
```
SELECT * FROM (
SELECT Person
,Class
,TimeSpent
,row_number() OVER (PARTITION BY Person ORDER BY Class DESC) AS RN FROM @Persons
) t WHERE t.RN = 1
```
|
SQL:How do i get the row that has the max value of a column in SQL Server
|
[
"",
"sql",
"sql-server",
"group-by",
"greatest-n-per-group",
""
] |
I'm quite new to SQL and coding generally.
I have a SQL query that works fine. All I want to do now is return the number of rows from that query result.
The current SQL query is:
```
SELECT
Progress.UserID, Questions.[Question Location],
Questions.[Correct Answer], Questions.[False Answer 1],
Questions.[False Answer 2], Questions.[False Answer 3]
FROM
Questions
INNER JOIN
Progress ON Questions.[QuestionID] = Progress.[QuestionID]
WHERE
(((Progress.UserID) = 1) AND
((Progress.Status) <> "Correct")
);
```
I know I need to use
```
SELECT COUNT(*)
```
...though not quite sure how I integrate it into the query.
I then intend to use OLEDB to return the result to a VB Windows Form App.
All help is much appreciated.
Thanks! Joe
|
A simple way is to use a subquery:
```
select count(*)
from (<your query here>) as q;
```
In your case, you can also change the `select` to be:
```
select count(*)
```
but that would not work for aggregation queries, where the subquery form is still needed.
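A minimal runnable sketch of the subquery-count idea, using SQLite through Python's `sqlite3` in place of Access/OLEDB (table and sample rows invented, and simplified to the one table that drives the filter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Progress(UserID INT, QuestionID INT, Status TEXT);
INSERT INTO Progress VALUES
  (1, 10, 'Wrong'), (1, 11, 'Correct'), (1, 12, 'Wrong'), (2, 13, 'Wrong');
""")
# Wrap the original query in a derived table and count its rows.
n = con.execute("""
    SELECT COUNT(*)
    FROM (SELECT QuestionID FROM Progress
          WHERE UserID = 1 AND Status <> 'Correct') AS q
""").fetchone()[0]
print(n)  # 2
```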
|
To count all of the records, use a simple subquery; subqueries must have aliases (here I've named your subquery 'subquery').
```
SELECT COUNT(*)
FROM (
SELECT Progress.UserID, Questions.[Question Location],Questions.[Correct Answer], Questions.[False Answer 1],
Questions.[False Answer 2], Questions.[False Answer 3]
FROM Questions
INNER JOIN Progress ON Questions.[QuestionID] = Progress.[QuestionID]
WHERE (((Progress.UserID)=1) AND ((Progress.Status)<>"Correct"))
) AS subquery;
```
|
Using SQL, how do I COUNT the number of results from a query?
|
[
"",
"sql",
"vb.net",
"ms-access",
"oledb",
""
] |
I have two tables, a and b
table a
```
--------------------
|id | item |
--------------------
|1 | apple |
--------------------
|2 | orange |
--------------------
|3 | mango |
--------------------
|4 | grapes |
--------------------
|5 | plum |
--------------------
|6 | papaya |
--------------------
|7 | banana |
--------------------
```
table b
```
----------------------------
user_id | item_id | price |
----------------------------
32 | 3 | 250 |
----------------------------
32 | 6 | 180 |
----------------------------
32 | 2 | 120 |
----------------------------
```
Now I want to join the two tables in MySql so that I get list of all fruits in table a along with their prices as in table b for user 32; something like this:
```
-----------------------------
|id | item | price |
-----------------------------
|1 | apple | |
-----------------------------
|2 | orange | 120 |
------------------------------
|3 | mango | 250 |
------------------------------
|4 | grapes | |
------------------------------
|5 | plum | |
------------------------------
|6 | papaya | 180 |
------------------------------
|7 | banana | |
------------------------------
```
The best I could do was this:
```
SELECT a.id,
a.item,
b.price
FROM a
INNER JOIN b ON a.id = b.item_id
WHERE b.user_id = 32
```
This gives me only the rows whose price have been set, not the ones whose prices have not been set. How do I frame the SQL?
|
Use [**LEFT OUTER JOIN**](http://dev.mysql.com/doc/refman/5.7/en/join.html) instead of **INNER JOIN**
Try this:
```
SELECT a.id, a.item, b.price
FROM a
LEFT OUTER JOIN b ON a.id = b.item_id AND b.user_id = 32;
```
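A quick runnable sketch of why the user filter belongs in the `ON` clause rather than `WHERE` when you want all rows from table `a`, using SQLite via Python's `sqlite3` standing in for MySQL (sample data invented, trimmed to three fruits):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a(id INT, item TEXT);
CREATE TABLE b(user_id INT, item_id INT, price INT);
INSERT INTO a VALUES (1, 'apple'), (2, 'orange'), (3, 'mango');
INSERT INTO b VALUES (32, 3, 250), (32, 2, 120), (99, 1, 999);
""")
# The user_id condition is part of the join, so unmatched fruits survive as NULL.
rows = con.execute("""
    SELECT a.id, a.item, b.price
    FROM a
    LEFT JOIN b ON a.id = b.item_id AND b.user_id = 32
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 'apple', None), (2, 'orange', 120), (3, 'mango', 250)]
```

Note that apple does have a price row, but only for user 99, so it still shows `None` for user 32; had the `b.user_id = 32` condition been in a `WHERE` clause instead, the apple row would have been filtered out entirely.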
|
You have to use an OUTER JOIN instead
```
SELECT a.id, a.item, b.price
FROM a LEFT OUTER JOIN b ON a.id = b.item_id WHERE b.user_id = 32
```
|
MySQL select statement involving join
|
[
"",
"mysql",
"sql",
"select",
"join",
"left-join",
""
] |
I'm trying to use this SQL query to generate some test data
```
INSERT INTO EVENT (ID, SOURCE, TYPE, EVENT_DATE, DESCRIPTION) VALUES (DBMS_RANDOM.Value(5000, 90000), 101, 'WARNING', (SYSDATE - 1/10 + LOOP_COUNTER/1300), DBMS_RANDOM.STRING('X', 5))
CREATE TABLE EVENT(
ID INTEGER NOT NULL,
SOURCE VARCHAR2(50 ),
TYPE VARCHAR2(50 ),
EVENT_DATE DATE,
DESCRIPTION VARCHAR2(100 )
)
```
I get the following error:
> SQL Error: ORA-00984: column not allowed here
> 00984. 00000 - "column not allowed here"
Do you have any idea how I can fix this issue?
|
The column SOURCE is of type VARCHAR2, but you are providing an integer.
|
Such an error appears when you use an unknown identifier in the `VALUES` clause. Probably your variable `LOOP_COUNTER` is not declared, or is written with a typo.
|
SQL Error: ORA-00984: column not allowed here
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have a TEXT column of dates and need to convert them to dates, but the two methods I'm using are not working correctly. See below.
```
SELECT CAST("12/01/2009" as date);
12
```
This only returns the digits before the first '/', where the cast stops parsing.
```
SELECT DATE("12/01/2009");
Returns nothing
```
I also tried CONVERT, but I'm using SQLite and it doesn't appear to support it as I'm getting a syntax error. Any suggestions on how to solve this?
|
Try using the [`STR_TO_DATE`](http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date) function (note that this is a MySQL function):
```
SELECT STR_TO_DATE('12/01/2009','%m/%d/%Y');
```
* **[SQL FIDDLE DEMO](http://www.sqlfiddle.com/#!9/9eecb7d/38058)**
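Since the question is actually about SQLite, where `STR_TO_DATE` does not exist, one possible workaround is to rearrange the `dd/mm/yyyy` text into ISO `yyyy-mm-dd` with `substr`, which SQLite's date functions do understand. A sketch via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Rebuild '12/01/2009' (dd/mm/yyyy) as ISO 'yyyy-mm-dd'.
iso = con.execute("""
    SELECT substr(d, 7, 4) || '-' || substr(d, 4, 2) || '-' || substr(d, 1, 2)
    FROM (SELECT '12/01/2009' AS d)
""").fetchone()[0]
print(iso)  # 2009-01-12
# The ISO form works with SQLite's date functions.
print(con.execute("SELECT date(?, '+1 day')", (iso,)).fetchone()[0])  # 2009-01-13
```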
|
SQLite doesn't have a dedicated date type. You need to do string manipulation to achieve this.
|
SQL: Casting text to date
|
[
"",
"sql",
"sqlite",
""
] |
I am trying to accomplish the following, and I am not sure if it is possible. I have a `SELECT` Statement that contains an inner `SELECT` for two of the table columns like so:
```
SELECT
col1,
col2,
(SELECT SUM(col1)
FROM table2)
AS FirstResultToAdd,
(SELECT SUM(col2)
FROM table3)
AS SecondResultToAdd,
FROM Table1
```
So my question is: is it possible to perform a calculation, such as a `SUM` of "FirstResultToAdd" and "SecondResultToAdd", and return that as a single column in the result for "Table1"? Also, note that I have excluded any joins of the tables to keep the example simple.
|
I believe you want to perform some logic on the result of `Sub-query`
To add the two `sub-query` result
```
SELECT col1,
col2,
(SELECT col1
FROM table2)
AS FirstResultToAdd,
(SELECT col2
FROM table3)
AS SecondResultToAdd,
(SELECT col1
FROM table2)
+
(SELECT col2
FROM table3)
AS total
FROM table1
```
To make the query more readable, you can put the original query in a sub-select and perform the logic in the outer query.
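Here is a minimal runnable sketch of adding the two scalar subqueries, using SQLite via Python's `sqlite3` (tables and sample values invented, with `SUM` restored as in the original question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1(col1 INT, col2 INT);
CREATE TABLE table2(col1 INT);
CREATE TABLE table3(col2 INT);
INSERT INTO table1 VALUES (1, 2);
INSERT INTO table2 VALUES (10), (20);
INSERT INTO table3 VALUES (5);
""")
# Each scalar subquery yields one value; they can be added like any expression.
row = con.execute("""
    SELECT col1, col2,
           (SELECT SUM(col1) FROM table2) AS FirstResultToAdd,
           (SELECT SUM(col2) FROM table3) AS SecondResultToAdd,
           (SELECT SUM(col1) FROM table2) +
           (SELECT SUM(col2) FROM table3) AS total
    FROM table1
""").fetchone()
print(row)  # (1, 2, 30, 5, 35)
```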
|
just nest one more time...
```
select col1, col2, sum( FirstResultToAdd )
from (
SELECT
col1,
col2,
(SELECT col1
FROM table2)
AS FirstResultToAdd,
(SELECT col2
FROM table3)
AS SecondResultToAdd,
FROM Table1
)
```
|
In SQL can I Perform Logic on Multiple Columns, which are SELECT Statements?
|
[
"",
"sql",
""
] |
I want to do something like this:
```
SELECT SUM(COL1) / SUM(COL2) FROM table_name
```
My questions:
1. Error handling. If SUM(COL2) is 0, what will return for most
database implementations? How can I return 0 in this case?
2. Performance tuning. Is it **usually** guaranteed that these 2 SUM()
will be computed in a single loop instead of going through all the
records to compute SUM(COL1) then going through everything again to
compute SUM(COL2)? If it's not guaranteed, how to improve the
performance such that this division can be done in a single loop by the database engine?
**You can answer this question based on the database engine you are familiar with.**
---
*(I have already got lots of excellent answers to Question 1, but it seems that nobody knows the answer to Question 2.)*
|
SQL:
```
DECLARE @Table TABLE (column1 INT, column2 INT)
INSERT INTO @Table
VALUES
(0, 0)
SELECT CASE WHEN SUM(Column2) > 0
THEN
SUM(column1) / SUM(column2)
ELSE 0
END AS [Sum Division]
FROM @Table
```
Will give a 0 value if column2's sum is not greater than 0 (includes nulls).
**EDIT**
This assumes that you won't have negative values in column2. If negative values are possible, you would want to use:
```
SELECT CASE WHEN SUM(Column2) IS NOT NULL AND SUM(Column2) <> 0
THEN
SUM(column1) / SUM(column2)
ELSE 0
END AS [Sum Division]
FROM @Table
```
This will do the calculations for all non 0 **NUMBERS**. But, for **0** or **NULL** values, it will just return 0.
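For question 1, the `NULLIF`/`COALESCE` idiom can be sketched end to end with SQLite through Python's `sqlite3` (table and values invented). One caveat worth noting: with integer columns, many engines perform integer division on `SUM(col1) / SUM(col2)`, so the sketch multiplies by `1.0` to force a real result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(col1 INT, col2 INT);
INSERT INTO t VALUES (1, 0), (2, 0);
""")
q = "SELECT COALESCE(SUM(col1) * 1.0 / NULLIF(SUM(col2), 0), 0) FROM t"
# SUM(col2) = 0 -> NULLIF gives NULL -> division gives NULL -> COALESCE gives 0
zero_denominator = con.execute(q).fetchone()[0]
con.execute("INSERT INTO t VALUES (3, 4)")
# Now SUM(col1) = 6 and SUM(col2) = 4
normal = con.execute(q).fetchone()[0]
print(zero_denominator, normal)  # 0 1.5
```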
|
Here is a SQL Server solution for Question 1.
```
SELECT COALESCE((SUM(COL1) / NULLIF(SUM(COL2),0)),0) FROM table_name
```
I don't know the answer to question 2, however I can say that I've never seen any other alternative to the way you've written the query being chosen over this way, and I find it safe to assume that this is the most efficient way the query can be written.
|
SQL tuning: division between the sum of two columns
|
[
"",
"mysql",
"sql",
"sql-server",
"database",
""
] |
I have an issue with the query below: in the main **SELECT**, the value of **ENTITY\_ID** cannot be retrieved because, since I'm using LIKE, I get more than a single result back.
How can I overcome this?
If I place an inner SELECT such as:
```
(
SELECT
SEC_NN_SERVER_T.SRV_ID
FROM
SEC_NN_SERVER_T
WHERE
UPPER(SEC_NN_SERVER_T.SRV_NAME) LIKE UPPER('%gen%') ) AS ENTITY_ID,
```
I get the same error:
**"ORA-01427: single-row subquery returns more than one row".**
Here is the full query:
```
SELECT DISTINCT
SEC_USER.USR_ID AS USR_ID,
SEC_USER.USR_FIRST_NAME AS USR_FIRST_NAME,
SEC_USER.USR_LAST_NAME AS USR_LAST_NAME,
SEC_USER_PRIV.ROLE_ID AS SYSTEM_ROLE_ID,
21 AS ENTITY_TYP_CODE,
(
SELECT
SEC_NN_SERVER_T.SRV_ID
FROM
SEC_NN_SERVER_T
WHERE
UPPER(SEC_NN_SERVER_T.SRV_NAME) LIKE UPPER('%gen%') ) AS ENTITY_ID
FROM
SEC_USER_PRIV,
SEC_USER
WHERE
SEC_USER.USR_ID = SEC_USER_PRIV.USR_ID
AND ((
SEC_USER_PRIV.ENTITY_TYP_CODE = 21
AND SEC_USER_PRIV.ENTITY_ID IN (
(
SELECT
SERVER_T.SRV_ID
FROM
SERVER_T
WHERE
UPPER(SERVER_T.SRV_NAME) LIKE UPPER('%gen%') ))))
```
Please assist thanks in advance.
|
By repeating the subquery from your WHERE clause in your SELECT, you destroy the coupling between SEC\_USER\_PRIV.ENTITY\_ID and the subquery. A subquery in the SELECT list must return exactly one value, because the SELECT constructs the row to be returned. I expect your problem will be solved by simply selecting SEC\_USER\_PRIV.ENTITY\_ID instead of the subquery.
|
What part of the error message do you not understand? Your subquery is returning more than one row. You can fix the error by using `rownum` or aggregation:
```
(SELECT MAX(SEC_NN_SERVER_T.SRV_ID)
FROM SEC_NN_SERVER_T
WHERE UPPER(SEC_NN_SERVER_T.SRV_NAME) LIKE UPPER('%gen%')
) as ENTITY_ID
```
or perhaps:
```
(SELECT LISTAGG(SEC_NN_SERVER_T.SRV_ID, ', ') WITHIN GROUP (ORDER BY SEC_NN_SERVER_T.SRV_ID)
FROM SEC_NN_SERVER_T
WHERE UPPER(SEC_NN_SERVER_T.SRV_NAME) LIKE UPPER('%gen%')
) as ENTITY_IDS
```
However, this might not be what you really want. You need to review the logic of what you are doing.
|
PLSQL more than single row returned issue in ID field
|
[
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"ora-01427",
""
] |
Actually I want to insert multiple rows to a table.
The Structure of the table is
```
Create Table tbl_username
(id int autoincrement,
username varchar(100),
Primary key(id))
```
and I tried to insert multiple rows like
```
Declare @s as varchar(100)
set @s='(''name1''),(''name2'')'
insert into tbl_username(username)values @s;
```
but I get the output as
```
id username
1 (''name1''),(''name2'')
```
Actually the required output for me is
```
id username
1 name1
2 name2
```
How can I achieve this?
|
Use dynamic SQL
```
Declare @s as varchar(100)
Declare @sql as varchar(max)
set @s='(''name1''),(''name2'')'
set @sql = 'insert into tbl_username(username) values ' + @s;
execute(@sql);
```
However I would avoid dynamic SQL where possible.
The standard way to do it if your values are not in a variable is:
```
INSERT INTO tbl_username(username) values ('name1'),('name2')
```
Or
```
INSERT INTO tbl_username(username) values ('name1')
INSERT INTO tbl_username(username) values ('name2')
```
If possible, opt for one of the above instead of the dynamic option originally mentioned.
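If the names actually originate in application code rather than in a T-SQL variable, a parameterized multi-row insert sidesteps dynamic SQL (and its injection risk) entirely. A sketch using SQLite via Python's `sqlite3` (schema borrowed from the question, with `INTEGER PRIMARY KEY` standing in for the auto-increment id):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE tbl_username(
        id INTEGER PRIMARY KEY,   -- auto-assigned, like IDENTITY
        username TEXT)
""")
names = ["name1", "name2"]
# One parameterized statement executed per row; no string concatenation.
con.executemany("INSERT INTO tbl_username(username) VALUES (?)",
                [(n,) for n in names])
rows = con.execute(
    "SELECT id, username FROM tbl_username ORDER BY id").fetchall()
print(rows)  # [(1, 'name1'), (2, 'name2')]
```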
|
```
insert into tbl_username(username)values ('name1'),('name2'),.....;
```
Because `username` is of type varchar, the whole of `@s` is treated as a single value and inserted into one row.
|
Inserting multiple rows through a variable
|
[
"",
"sql",
"sql-server",
"sql-insert",
""
] |
My update receives the following error when updating to the database:
> ORA-00001: unique constraint (DONALDBURY.BARS\_ID\_PK) violated
I am using the following code as my update. I do not want to get rid of the constraint, but to find a workaround for it. I want to stop duplicate values being inserted.
```
MERGE INTO bars DEST_TABLE
USING (select :P3_filename as filename from dual) SOURCE_TABLE
ON (DEST_TABLE.name = SOURCE_TABLE.filename)
WHEN MATCHED THEN
UPDATE SET image = ORDSYS.ORDImage()
WHEN NOT MATCHED THEN
INSERT (
image_id,
filename,
image,
name,
address,
postcode,
description)
VALUES (:P3_image_id,
:P3_filename,
ORDSYS.ORDImage(),
:P3_NAME,
:P3_ADDRESS,
:P3_POSTCODE,
:P3_DESCRIPTION);
```
|
There is a unique constraint named `BARS_ID_PK` on the `image_id` column; this constraint prohibits multiple rows from having the same value in that column.
You are trying to insert a new row with an `image_id` that already exists in the table.
To avoid this error, simply assign to the `:P3_image_id` placeholder a value that doesn't exist yet in the table.
|
Per the information provided, there is a primary key on the DEST table based on IMAGE\_ID so duplicate IMAGE\_IDs are not allowed. The MERGE statement checks for the existence of record based on filename (DEST\_TABLE.filename). You would need to check on the image\_id instead (or both filename and image\_id). Based on the information provided, it seems that there may be multiple image\_ids with the same file name in your bars table.
|
Update violates primary key
|
[
"",
"sql",
"oracle",
"plsql",
"sql-merge",
""
] |
I have a query which counts the values of a field in different cases.
Here is the request :
```
SELECT SUM(CASE WHEN Reliquat_id = 1 THEN Poids END) AS NbrARRNP,
SUM(CASE WHEN Reliquat_id = 2 THEN Poids END) AS NbrSTNP,
SUM(CASE WHEN Reliquat_id = 3 THEN Nombre END) AS NbrARR,
SUM(CASE WHEN Reliquat_id = 4 THEN Nombre END) AS ST,
SUM(CASE WHEN Reliquat_id = 5 THEN Nombre END) AS NbrCLASS,
SUM(CASE WHEN Reliquat_id = 6 THEN Nombre END) AS NbrINDEX FROM datas WHERE Chantier_id = 4 AND main_id =1;
```
Sometimes I get a problem when there are no records for a case: the return value is null.
* For example: if there are no records where Reliquat\_id = 2, I get null instead of zero.
I saw another question on Stack Overflow which is interesting:
[How do I get SUM function in MySQL to return '0' if no values are found?](https://stackoverflow.com/questions/7602271/how-do-i-get-sum-function-in-mysql-to-return-0-if-no-values-are-found)
I tried to apply those functions to my query, but I don't understand the syntax for my case.
Do you have an idea?
Thanks
|
Just add `ELSE 0`:
```
SELECT SUM(CASE WHEN Reliquat_id = 1 THEN Poids ELSE 0 END) AS NbrARRNP,
SUM(CASE WHEN Reliquat_id = 2 THEN Poids ELSE 0 END) AS NbrSTNP,
SUM(CASE WHEN Reliquat_id = 3 THEN Nombre ELSE 0 END) AS NbrARR,
SUM(CASE WHEN Reliquat_id = 4 THEN Nombre ELSE 0 END) AS ST,
SUM(CASE WHEN Reliquat_id = 5 THEN Nombre ELSE 0 END) AS NbrCLASS,
SUM(CASE WHEN Reliquat_id = 6 THEN Nombre ELSE 0 END) AS NbrINDEX
FROM datas
WHERE Chantier_id = 4 AND main_id = 1;
```
Note: This will still return a row with all `NULL` values if no rows at all match the `WHERE` conditions.
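A small runnable sketch of the `ELSE 0` fix, using SQLite via Python's `sqlite3` standing in for MySQL (sample data invented, trimmed to three of the six buckets; note that `Reliquat_id = 2` has no rows yet still sums to 0):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE datas(Reliquat_id INT, Poids INT, Nombre INT);
INSERT INTO datas VALUES (1, 10, 0), (1, 5, 0), (3, 0, 7);
""")
# Without ELSE 0, the Reliquat_id = 2 bucket would be NULL instead of 0.
row = con.execute("""
    SELECT SUM(CASE WHEN Reliquat_id = 1 THEN Poids ELSE 0 END) AS NbrARRNP,
           SUM(CASE WHEN Reliquat_id = 2 THEN Poids ELSE 0 END) AS NbrSTNP,
           SUM(CASE WHEN Reliquat_id = 3 THEN Nombre ELSE 0 END) AS NbrARR
    FROM datas
""").fetchone()
print(row)  # (15, 0, 7)
```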
|
Use `IFNULL()` or `COALESCE()` function
[SQL NULL Functions-W3 Schools Ref](http://www.w3schools.com/sql/sql_isnull.asp)
`IFNULL(Poids,0)` or `COALESCE(Poids,0)`
|
How to set the SUM (CASE WHEN) function's return to 0 if there are not values found in MYSQL?
|
[
"",
"mysql",
"sql",
""
] |
This is my first question here and I hope you guys can help me with this.
I'm trying to make a view based on a table that has the following columns: `DAY_0`, `DAY_1`, `DAY_2`, `DAY_3`, `DAY_4`, `DAY_5`, `DAY_6`.
The problem is that I only want to compare the column corresponding to the actual weekday and see if it returns the value `0` or `1`.
I was thinking of something like this, but it didn't work:
```
WHERE 'DAY_'+weekday(curdate()) = 1
```
Anyone knows how to help with it?
|
Use the function
```
ELT(weekday(curdate())+1,DAY_0,DAY_1,..)
```
|
You can't use variable column names like that, but you could do something like this:
```
WHERE
CASE WEEKDAY(CURDATE())
WHEN 0 THEN DAY_0
WHEN 1 THEN DAY_1
...
    END = 1
```
You might also want to look into database normalization.
|
Is there a way to choose a different column on a comparison based on a variable?
|
[
"",
"mysql",
"sql",
""
] |
If the current date is 3/12/2015, then I need to get the files from dates 2/12/2015, 3/12/2015, 4/12/2015. Can anyone tell me an idea for how to do it?
```
<%
try
{
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
Connection con = DriverManager.getConnection("jdbc:sqlserver://localhost:1433/CubeHomeTrans","sa","softex");
Statement statement = con.createStatement() ;
ResultSet resultset = statement.executeQuery(
    "select file from tablename " +
    "where date >= DATEADD(day, -1, convert(date, GETDATE())) " +
    "and date <= DATEADD(day, +1, convert(date, GETDATE()))");
while(resultset.next())
{
String datee =resultset.getString("Date");
out.println(datee);
}
}
catch(SQLException ex){
System.out.println("exception--"+ex);
}
%>
```
This is the query I have done, but it's erroneous. I need to get the previous date, current date and next date.
|
Use [**DATE\_ADD()**](http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-add) And [**DATE\_SUB()**](http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-sub) functions:
Try this:
```
SELECT FILE, DATE
FROM ForgeRock
WHERE STR_TO_DATE(DATE, '%d/%m/%Y') >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
AND STR_TO_DATE(DATE, '%d/%m/%Y') <= DATE_ADD(CURRENT_DATE(), INTERVAL 1 DAY);
```
Check the [**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!9/fe53d5/3)
**::OUTPUT::**
```
| file | DATE |
|------|------------|
| dda | 31/12/2015 |
| ass | 01/01/2016 |
| sde | 02/01/2016 |
```
|
The simplest way to get each of these **dates** is as below:
**CURRENT DATE**
```
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
```
**NEXT DAY DATE** *(Adding 1 to the `dateadd` parameter for one day ahead)*
```
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 1)
```
**YESTERDAY DATE** *(Removing 1 from the `datediff` parameter for one day back)*
```
SELECT DATEADD(day, DATEDIFF(day, 1, GETDATE()), 0)
```
If you go through the link [here](https://dba.stackexchange.com/questions/1426/how-to-the-get-current-date-without-the-time-part), you will find a great explanation of how to get the `date` part. It will clear up your logic and be useful for future reference too.
Hope that helps you
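As an illustration only (Python's bundled SQLite rather than SQL Server or MySQL, with hypothetical table and column names, and a fixed "current" date of 2015-12-03), the three-day window looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, d TEXT)")

# Seed one row per day offset around the fixed "current" date.
for off in (-2, -1, 0, 1, 2):
    conn.execute(
        "INSERT INTO files VALUES (?, date('2015-12-03', ? || ' days'))",
        ("file%+d" % off, off),
    )

# Previous day <= d <= next day; ISO dates compare correctly as strings.
names = [r[0] for r in conn.execute(
    """SELECT name FROM files
       WHERE d BETWEEN date('2015-12-03', '-1 days')
                   AND date('2015-12-03', '+1 days')
       ORDER BY d""")]
```

Only the rows for the previous, current and next day survive the filter.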
|
SQL query to find the previous date, current date and next date
|
[
"",
"mysql",
"sql",
"date",
"datetime",
"select",
""
] |
Is there a SQL format to remove leading zeros from the date?
For example, if the date is `01/12/2015` it should be presented as `1/12/2015`, and `01/01/2016` should be shown as `1/1/2016`, etc.
The entire date normally contains `dd/MM/yyyy HH:mm:ss`. I need to remove those redundant leading zeroes without changing the rest of information.
Currently I use query containing something like this:
```
convert(varchar, dateadd(hh, " + 2 + " , o.start_time), 103)) + ' '
left(convert(varchar, dateadd(hh, " + 2 + " , o.start_time), 108), 110)
```
I'm working with SQL Server 2008
|
Not sure why you want to do this. Here is one way.
* Use [**`DAY`**](https://msdn.microsoft.com/en-IN/library/ms176052.aspx) and [**`MONTH`**](https://msdn.microsoft.com/en-us/library/ms187813.aspx) inbuilt `date` functions to extract `day` and `month`
from `date`.
* Both the function's **return type** is `INT` which will remove the unwanted leading zero
* Then concatenate the values back to form the `date`
Try something like this
```
declare @date datetime = '01/01/2016'
select cast(day(@date) as varchar(2))+'/'+cast(month(@date) as varchar(2))+'/'+cast(year(@date) as varchar(4))
```
**Result :** 1/1/2016
* [**DEMO**](http://www.sqlfiddle.com/#!3/9eecb7/6634)
**Note:** Always prefer to store `date` in `date` **datatype**
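If the formatting can happen in application code instead of SQL, the same idea is a one-liner; a Python sketch (illustration only):

```python
from datetime import datetime

d = datetime(2016, 1, 1, 14, 5, 9)
# %d on plain ints carries no leading zeros; the time part keeps its padding
formatted = "%d/%d/%d %02d:%02d:%02d" % (
    d.day, d.month, d.year, d.hour, d.minute, d.second)
```

This avoids the platform-specific `strftime` flags (`%-d` vs `%#d`) entirely.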
|
Try this, format is a much cleaner solution:
```
declare @date datetime = '01/01/2016'
SELECT FORMAT(@date,'M/d/yyyy')
```
**result:** 1/1/2016
* [DEMO](http://www.sqlfiddle.com/#!18/9eecb/54433)
|
How to present Date in SQL without leading zeros
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"date-format",
"date-formatting",
""
] |
I have a column with numeric values in which the last 3 digits should actually be behind the decimal point.
e.g.
* 89302500
* 1260840
* 218580
I need a regular expression that will put the decimal point before the last 3 digits:
* 89302.500
* 1260.840
* 218.580
I'm using the `REGEXP_REPLACE` function to change the format of values, but I can't find a way to do this. Is it possible to write such regular expression and use it to replace value format?
|
If you want to use regex to achieve this, then you can use a regex like this:
```
(\d*)(\d{3})
```
**[Working demo](https://regex101.com/r/rV1bR1/1)**
According to the [documentation](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions130.htm), you can do something like this:
```
SELECT
REGEXP_REPLACE(phone_number,
'([[:digit:]]*)([[:digit:]]{3})',
'\1.\2') "REGEXP_REPLACE"
FROM employees;
```
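The same pattern behaves identically in other regex engines; a quick Python check of the replacement (illustration only):

```python
import re

values = ["89302500", "1260840", "218580"]
# greedy \d* leaves exactly the last three digits for the second group
out = [re.sub(r"(\d*)(\d{3})", r"\1.\2", v) for v in values]
```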
|
You don't use regular expressions on numbers. If you want a *number* with a precision of 3:
```
select cast(col / 1000 as number(18, 3))
```
If you want this expressed as a string, then use `to_char()` on `col / 1000`.
|
Regular expression for turning number into number with decimal precision of 3
|
[
"",
"sql",
"regex",
"replace",
"oracle11g",
"toad",
""
] |
I am looking to compare the results of 2 cells in the same row. The way the data is structured is essentially this:
```
Col_A: table,row,cell
Col_B: row
```
What I want to do is compare when Col\_A 'row' is the same as Col\_B 'row'
```
SELECT COUNT(*) FROM MyTable WHERE Col_A CONTAINS Col_B;
```
sample data:
```
Col_A: a=befd-47a8021a6522,b=7750195008,c=prof
Col_B: b=7750195008
Col_A: a=bokl-e5ac10085202,b=4478542348,c=pedf
Col_B: b=7750195008
```
I am looking to return the number of times the comparison between Col\_A 'b' and Col\_B 'b' is true.
|
This does what I was looking for:
```
SELECT COUNT(*) FROM MyTable WHERE Col_A LIKE CONCAT('%',Col_B,'%');
```
|
I see you answered your own question.
```
SELECT COUNT(*) FROM MyTable WHERE Col_A LIKE CONCAT('%',Col_B,'%');
```
is good from a performance perspective. While normalization is a **very** good idea, it would not improve speed much in this particular case; we must simply scan all strings in the table. The question is whether the query is always correct. It accepts, for example,
```
Col_A: a=befd-47a8021a6522,ab=7750195008,c=prof
Col_B: b=7750195008
```
or
```
Col_A: a=befd-47a8021a6522,b=775019500877777777,c=prof
Col_B: b=7750195008
```
this may be a problem depending on the data format. Solution is quite simple
```
SELECT COUNT(*) FROM MyTable WHERE CONCAT(',',Col_A,',') LIKE CONCAT('%,',Col_B,',%');
```
But this is not the end. The string in LIKE is interpreted as a pattern, and if you can have things like % in your data you have a problem. This should work on MySQL:
```
SELECT COUNT(*) FROM MyTable WHERE LOCATE(CONCAT(',',Col_B,','), CONCAT(',',Col_A,','))>0;
```
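The delimiter-wrapping trick is easy to verify; here is an illustration with Python's bundled SQLite (`||` in place of `CONCAT`), showing the `ab=` false positive that the naive pattern accepts and the wrapped pattern rejects:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col_a TEXT, col_b TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)", [
    ("a=befd,b=7750195008,c=prof", "b=7750195008"),   # genuine match
    ("a=bokl,ab=7750195008,c=pedf", "b=7750195008"),  # 'ab=' should NOT match
])

naive = conn.execute(
    "SELECT count(*) FROM mytable WHERE col_a LIKE '%' || col_b || '%'"
).fetchone()[0]

strict = conn.execute(
    """SELECT count(*) FROM mytable
       WHERE ',' || col_a || ',' LIKE '%,' || col_b || ',%'"""
).fetchone()[0]
```

The naive pattern counts both rows; the comma-wrapped pattern counts only the genuine one.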
|
How do I search for a string in a cell substring containing a string from another cell in SQL
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
I am working on SQL Server. I have a table that has an `int` column `HalfTimeAwayGoals` and I am trying to get the `AVG` with this code:
```
select
CAST(AVG(HalfTimeAwayGoals) as decimal(4,2))
from
testtable
where
AwayTeam = 'TeamA'
```
I get 0.00 as a result, but the correct result should be 0.55.
Do you have any idea what is going wrong ?
|
```
select
AVG(CAST(HalfTimeAwayGoals as decimal(4,2)))
from
testtable
where
AwayTeam = 'TeamA'
```
|
If the field `HalfTimeAwayGoals` is an integer, then the `avg` function does an integer average. That is, the result is 0 or 1, but cannot be in between.
The solution is to convert the value to a number. I often do this just by multiplying by 1.0:
```
select CAST(AVG(HalfTimeAwayGoals * 1.0) as decimal(4, 2))
from testtable
where AwayTeam = 'TeamA';
```
Note that if you do the conversion to a decimal *before* the average, the result will not necessary have a scale of 4 and a precision of 2.
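The truncation is easy to reproduce outside SQL; in Python, floor division mirrors the all-integer AVG, while promoting the values first gives the expected 0.55 (illustration only, with made-up goal counts):

```python
goals = [1] * 11 + [0] * 9          # 11 goals scored over 20 matches

int_avg = sum(goals) // len(goals)  # all-integer arithmetic truncates to 0
dec_avg = sum(g * 1.0 for g in goals) / len(goals)  # promote first: 0.55
```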
|
AVG function not working
|
[
"",
"sql",
"sql-server",
""
] |
For what practical purposes would I'd potentially need to add an index to columns in my table? What are they typically needed for?
|
**Indexing columns speeds up queries on tables with many rows.**
Indexes allow your database to search for the desired row using searching algorithms like binary search.
*This would only be helpful if you had a large number of rows*, for example 16 or more (this number is taken from the quicksort algorithm, which says if sorting 16 or less items, just do an insertion sort). Otherwise there would be negligible performance gain compared to a plain linear search.
If a table had 100 rows and you wanted to find the 80th row, without indexes, it might take 80 operations to find the 80th row. However with indexes, assuming they enable something like binary search, you could find the 80th row in something like 10 or less operations.
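A concrete way to see the effect (Python's bundled SQLite, purely as an illustration) is to compare the query plan before and after adding an index: the plan switches from a full scan to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, "v%d" % i) for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the detail text.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT val FROM t WHERE id = 80")   # full table scan
conn.execute("CREATE INDEX idx_t_id ON t (id)")
after = plan("SELECT val FROM t WHERE id = 80")    # index search
```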
|
Indexes are database structures that improve the speed of retrieving data from the columns they are applied on. The [wikipedia article](https://en.wikipedia.org/wiki/Database_index) on the subject gives a pretty good overview without going into too many implementation-specific details.
|
What is the use of index in table columns?
|
[
"",
"sql",
"database",
"indexing",
"relational-database",
""
] |
I'm working on ASP.NET application whose SQL backend (MySQL 5.6) has 4 tables:
The first table is defined in this way:
```
CREATE TABLE `items` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`descr` varchar(45) NOT NULL,
`modus` varchar(8) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
);
```
These are the items managed in the application.
the second table:
```
CREATE TABLE `files` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`file_path` varchar(255) NOT NULL,
`id_item` int(11) NOT NULL,
`id_type` int(11) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
);
```
These are files that are required for item management. Each 'item' can have 0 or multiple files (the 'id_item' field is filled with a valid 'id' from the 'items' table).
the third table:
```
CREATE TABLE `file_types` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`file_type` varchar(32) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
);
```
This table describes the type of the file.
the fourth table:
```
CREATE TABLE `checklist` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`id_type` int(11) NOT NULL,
`modus` varchar(8) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
);
```
This table, as its name suggests, is a checklist. It describes which types of files need to be collected for a particular 'modus'; the 'modus' field holds the same values as 'modus' in the 'items' table, and 'id_type' holds valid 'id' values from the 'file_types' table.
Let's suppose that the first table holds those items:
```
id descr modus
--------------------
1 First M
2 Second P
3 Third M
4 Fourth M
--------------------
```
The second:
```
id file_path id_item id_type
--------------------------------------
1 file1.jpg 1 1
2 file2.jpg 1 2
3 file3.jpg 2 1
4 file4.jpg 1 4
5 file5.jpg 1 1
--------------------------------------
```
The third:
```
id file_type
--------------
1 red
2 blue
3 green
4 default
--------------
```
The fourth table:
```
id id_type modus
--------------------
1 1 M
2 2 M
3 3 M
4 4 M
5 1 P
6 4 P
--------------------
```
What I need to obtain is a table with such items (referring to id_item = 1):
```
id_item file_path id_type file_type
--------------------------------------------
1 file1.jpg 1 red
1 file5.jpg 1 red
1 file2.jpg 2 blue
1 file4.jpg 4 default
<null> <null> 3 green
--------------------------------------------
```
While the result table for id\_item = 2 should be the following:
```
id_item file_path id_type file_type
--------------------------------------------
2 file3.jpg 1 red
<null> <null> 4 default
--------------------------------------------
```
where of course 'id\_item' is the 'id' of 'items' table, 'id\_type' is the 'id' of the 'types' table etc.
In short I need to have a table that depicts the checklist status for a particular 'item' id, i.e. which files have been collected but also which of them are missing.
I tried to use RIGHT JOIN clause without success:
```
SELECT
items.id AS id_item,
files.file_path AS file_path,
file_types.id AS id_type,
file_types.file_type AS file_type
FROM
files
RIGHT JOIN
checklist ON (files.id_type = checklist.id_type )
INNER JOIN
items ON (files.id_item = items.id)
AND (items.modus = checklist.modus)
INNER JOIN
file_types ON (checklist.id_type = file_types.id)
WHERE (items.id = 1);
```
the result of this query is:
```
id_item file_path id_type file_type
------------------------------------------
1 file1.jpg 1 red
1 file5.jpg 1 red
1 file2.jpg 2 blue
1 file4.jpg 4 default
```
it lacks the last row (the missing file from the checklist).
|
The following query gives you the status of each item (a kind of checklist). I had to change some of the column names which were reserved words in my test environment.
```
select item_id,
fp filepath,
m_type,
item_desc,
modee,
(select t.type from typess t where t.id = m_type)
from (select null item_id,
i.descr item_desc,
c.modee modee,
c.id_type m_type,
null fp
from items i, checklist c
where c.modee = i.modee
and i.id = 0
and c.id_type not in
(select f.id_type from files f where f.id_item = i.id)
union all
select i.id item_id,
i.descr item_desc,
c.modee modee,
c.id_type m_type,
f.file_path fp
from items i, checklist c, files f
where c.modee = i.modee
and i.id = 0
and f.id_item = i.id
and f.id_type = c.id_type)
order by item_id asc, m_type asc
```
|
Try this:
```
SELECT
files.file_path,
types.type
FROM files
LEFT JOIN checklist ON (files.id_type = checklist.id_type )
LEFT JOIN items ON (files.id_item = items.id)
AND (items.mode = checklist.mode)
LEFT JOIN types ON (checklist.id_type = types.id)
WHERE (items.id = 0);
```
|
SQL RIGHT JOIN misunderstanding
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have problem with SQL query.
I have names in column `Name` in `Table_Name`, for example:
```
'Mila', 'Adrianna', 'Emma', 'Edward', 'Adam', 'Piter'
```
I would like to count how many names contain the letter `'A'` and how many contain the letter `'E'`.
The output should be:
```
letter_A ( 5 )| letter_E (3)
```
I tried to do this:
```
SELECT Name,
letter_A = CHARINDEX('A', Name),
letter_E = CHARINDEX('E', Name)
FROM Table_Name
GROUP BY Name
HAVING ( CHARINDEX('A', Nazwisko) != 0
OR ( CHARINDEX('E', Nazwisko) ) != 0 )
```
My query only shows if 'A' or 'E' is in Name :/
Can anyone help? :)
|
You just need to aggregate if you only need the counts.
```
select
sum(case when charindex('a',name) <> 0 then 1 else 0 end) as a_count
,sum(case when charindex('e',name) <> 0 then 1 else 0 end) as e_count
from table_name
```
|
You can use conditional aggregation:
```
select sum(case when Nazwisko like '%A%' then 1 else 0 end) as A_cnt,
sum(case when Nazwisko like '%E%' then 1 else 0 end) as E_cnt
from table_name
where Nazwisko like '%A%' or Nazwisko like '%E%';
```
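The same conditional aggregation can be checked against the sample names with Python's bundled SQLite (`instr()` standing in for `CHARINDEX`, lower-cased to mimic a case-insensitive collation; illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (name TEXT)")
conn.executemany("INSERT INTO table_name VALUES (?)",
                 [("Mila",), ("Adrianna",), ("Emma",),
                  ("Edward",), ("Adam",), ("Piter",)])

a_count, e_count = conn.execute(
    """SELECT sum(CASE WHEN instr(lower(name), 'a') > 0 THEN 1 ELSE 0 END),
              sum(CASE WHEN instr(lower(name), 'e') > 0 THEN 1 ELSE 0 END)
       FROM table_name""").fetchone()
```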
|
SQL Server - count how many names have 'A' and how many have 'E'
|
[
"",
"sql",
"sql-server",
""
] |
I'm thinking about making a project in a database with a large amount of objects / people / animals / buildings, etc.
The application would let the user select two candidates and see which came first. The comparison would be made by date, of course.
MySQL only allows dates after `01/01/1000`.
If one user were to compare which came first: **Michael Jackson** or **Fred Mercury**, the answer would be easy since they came after this year.
But if they were to compare which came first: **Tyranosaurus Rex** or **Dog**, they both came before the accepted date.
How could I make those comparisons considering the SQL limit?
I didn't do anything yet, but this is something I'd like to know before I start doing something that will never work.
***THIS IS NOT A DUPLICATE OF OTHER QUESTIONS ABOUT OLD DATES.***
*In other questions, people are asking about how to store. It would be extremely easy, just make a string out of it. But in my case, I'd need to compare such dates, which they didn't ask for, yet.*
I could store the dates as a string, using `A` for after and `B` for before, as people answered in other questions. There would be no problem. But how could I compare those dates? Which part of the string would I need to break apart?
|
You could take a signed BIGINT field and use it as a UNIX timestamp.
A UNIX timestamp is the number of seconds that passed since January 1, 1970, at 0:00 UTC.
Any point in time would simply be a negative timestamp.
If my amateurish calculation is correct, a BIGINT would be enough to take you 292471208678 years into the past (from 1970) and the same number of years into the future. That ought to be enough for pretty much anything.
That would make dates very easy to compare - you'd simply have to see whether one date is bigger than the other.
The conversion from calendar date to timestamp you'd have to do outside mySQL, though.
Depending on what platform you are using there may be a date library to help you with the task.
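A quick sanity check of that range estimate, plus a comparison of two negative timestamps (the event ages below are rough placeholders, not real data):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600       # same rough, leap-year-free estimate
INT64_MAX = 2 ** 63 - 1

years_each_way = INT64_MAX // SECONDS_PER_YEAR   # ~2.9e11 years from 1970

# Comparing two pre-1970 events is just an integer comparison.
t_rex = -66_000_000 * SECONDS_PER_YEAR   # ~66 million years ago (approx.)
dog = -15_000 * SECONDS_PER_YEAR         # ~15 thousand years ago (approx.)
t_rex_came_first = t_rex < dog
```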
|
Why deal with a static age at time of entry plus an offset? The user is going to want to see a date as a date anyway, and it makes for complex data entry.
Instead, use three fields:
```
year smallint (good for up to -32,768 BC)
month tinyint
day tinyint
```
Then compare with `(y1*10000 + m1*100 + d1) > (y2*10000 + m2*100 + d2)`.
|
Compare creation dates of things that may have been made before christ
|
[
"",
"mysql",
"sql",
"date",
""
] |
I have a table **books** which looks like this:
`___BookTitle______Author
1. Sample Book AuthorA
2. Sample Book AuthorB`
*Sample Book* has been written by both *AuthorA* and *AuthorB*. I want to combine them to get the following result
`___BookTitle______Author
1. Sample Book AuthorA, AuthorB`
I cannot figure out how to do it in SQL.
|
This can be done with the [`group_concat`](http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat) aggregate function:
```
SELECT book_title, GROUP_CONCAT(author SEPARATOR ', ')
FROM books
GROUP BY book_title
```
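As a runnable illustration, SQLite's `group_concat` does the same thing but takes the separator as a second argument rather than a `SEPARATOR` keyword:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (book_title TEXT, author TEXT)")
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [("Sample Book", "AuthorA"), ("Sample Book", "AuthorB")])

rows = conn.execute(
    """SELECT book_title, group_concat(author, ', ')
       FROM books
       GROUP BY book_title""").fetchall()
```

Note that SQLite does not guarantee the concatenation order; MySQL's `GROUP_CONCAT` accepts an `ORDER BY` inside the call if the order matters.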
|
I believe this will do the trick,
```
SELECT BookTitle, GROUP_CONCAT(Author)
FROM books
GROUP BY BookTitle
```
|
How to combine some attributes in two rows into one in sql?
|
[
"",
"mysql",
"sql",
"select",
""
] |
In MySQL I can list all the meta_values of my meta_key `_simple_fields_fieldGroupID_6_fieldID_10_numInSet_0` with
```
select * from pm_postmeta where meta_key LIKE '%_simple_fields_fieldGroupID_6_fieldID_10_numInSet_0%'
```
It's a list of user heights like 1.67, 168, etc.
I want to remove the dots from the numbers... sometimes I have 1.79 and I want 179... how can I do this?
I tried
```
UPDATE pm_postmeta
SET meta_value = REPLACE(REPLACE(meta_value,',00',''),'.','')
WHERE meta_key='_simple_fields_fieldGroupID_6_fieldID_10_numInSet_0';
```
but it deleted the table rows and I needed to import them again...
|
Actually everything depends on the data type you use to store people's heights.
You say "*they are numbers*" and I assume they're stored in a NUMERIC column (you cannot store 1.78 into an INTEGER data type).
So, assuming your original table contains something like:
```
SQL> select * from people ;
id| height
----------+--------
1| 180.00
2| 1.78
3| 165.00
4| 2.01
```
You basically want to update this table, multiplying all heights that have a fractional component by 100:
```
SQL> update people set height = height * 100 where mod(height, 1) > 0 ;
SQL> select * from people ;
id| height
----------+--------
1| 180.00
2| 178.00
3| 165.00
4| 201.00
```
**Edit**
Ok, you say now that the values you want to change contain **either** commas or dots so... the column is a CHAR/VARCHAR. Something like this:
```
SQL> select * from people2;
id|height
----------+----------
1|180
2|1.78
3|165
4|2,01
```
In this case I would use:
```
SQL> update people2 set height = replace(replace(height,'.',''),',','') where height regexp '.*[,.].*';
SQL> select * from people2;
id|height
----------+----------
1|180
2|178
3|165
4|201
```
|
```
SELECT p.*,REPLACE(REPLACE(p.`meta_value`, ',00', ''),'.', '') AS meta_value
FROM `pm_postmeta` p
WHERE p.`meta_key`='_simple_fields_fieldGroupID_6_fieldID_10_numInSet_0';
```
|
sql remove dots and commas from fields
|
[
"",
"mysql",
"sql",
""
] |
I want to get a date from the declared `Month` and `Year` with the below query, but I am getting something like
`Jul 21 1905 12:00AM`
I want it as
`Dec 31 2015`
below is my query
```
declare @actualMonth int
declare @actualYear int
set @actualYear = 2015
set @actualMonth = 12
DECLARE @DATE DATETIME
SET @DATE = CAST(@actualYear +'-' + @actualMonth AS datetime)
print @DATE
```
What is wrong here?
|
This will give you the expected output:
```
DECLARE @actualMonth INT
DECLARE @actualYear INT
SET @actualYear = 2015
SET @actualMonth = 12
DECLARE @DATE DATETIME;
SET @DATE = CAST(
CAST(@actualYear AS VARCHAR)+'-'+CAST(@actualMonth AS VARCHAR)+'-'+'31'
AS DATETIME
);
PRINT Convert(varchar(11),@DATE,109)
```
Try this,
```
SET @DATE = CAST(
CAST(@actualYear AS VARCHAR)+'-'+CAST(@actualMonth AS VARCHAR)+'-'+ Cast(Day(DATEADD(DAY,-1,DATEADD(month,@actualMonth,DATEADD(year,@actualYear-1900,0)))) AS VARCHAR)
AS DATETIME
);
```
Or this one,
```
SET @DATE = CAST(
CAST(@actualYear AS VARCHAR)+'-'+CAST(@actualMonth AS VARCHAR)+'-'+'01'
AS DATETIME
);
PRINT CONVERT(VARCHAR(11), DATEADD(D, -1, DATEADD(M, 1, @DATE)), 109)
```
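If the end-of-month value is ever needed in application code instead, the Python standard library can cross-check the same "last day of the month" logic (illustration only):

```python
import calendar
from datetime import date

actual_year, actual_month = 2015, 12
# monthrange returns (weekday of day 1, number of days in the month)
last_day = calendar.monthrange(actual_year, actual_month)[1]
month_end = date(actual_year, actual_month, last_day)
```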
|
Actually you are adding `actualMonth` and `actualYear` as integers (the `'-'` string converts to 0), so casting the sum to `datetime` will give a different result: 2027 is treated as a day offset from 1900-01-01.
Try it like this:
```
declare @actualMonth int
declare @actualYear int
set @actualYear = 2015
set @actualMonth = 12
DECLARE @DATE DATETIME
SET @DATE = DATEADD(dd,-1,DATEADD(YY,1,CAST(@actualYear AS varchar(20)) ))
select SUBSTRING(convert (varchar,@DATE),0,CHARINDEX(':',convert (varchar,@DATE))-2)
print @DATE
```
|
Want to print declared date and Month
|
[
"",
"sql",
"sql-server-2005",
""
] |
I'm trying to make a blog system of sort and I ran into a slight problem.
Simply put, there's 3 columns in my `article` table:
```
id SERIAL,
category VARCHAR FK,
category_id INT
```
`id` column is obviously the PK and it is used as a global identifier for all articles.
`category` column is well .. category.
`category_id` is used as a `UNIQUE` ID within a category so currently there is a `UNIQUE(category, category_id)` constraint in place.
However, I also want for `category_id` to *auto-increment*.
I want it so that every time I execute a query like
```
INSERT INTO article(category) VALUES ('stackoverflow');
```
I want the `category_id` column to be automatically be filled according to the latest `category_id` of the 'stackoverflow' category.
Achieving this in my logic code is quite easy. I just select latest num and insert +1 of that but that involves two separate queries.
I am looking for a SQL solution that can do all this in one query.
|
## Concept
There are at least several ways to approach this. First one that comes to my mind:
Assign a value for `category_id` column inside a trigger executed for each row, by overwriting the input value from `INSERT` statement.
## Action
Here's the `SQL Fiddle` to see the code in action
---
For a simple test, I'm creating `article` table holding categories and their `id`'s that should be unique for each `category`. I have omitted constraint creation - that's not relevant to present the point.
```
create table article ( id serial, category varchar, category_id int )
```
Inserting some values for two distinct categories using `generate_series()` function to have an auto-increment already in place.
```
insert into article(category, category_id)
select 'stackoverflow', i from generate_series(1,1) i
union all
select 'stackexchange', i from generate_series(1,3) i
```
Creating a trigger function, that would select `MAX(category_id)` and increment its value by `1` for a `category` we're inserting a row with and then overwrite the value right before moving on with the actual `INSERT` to table (`BEFORE INSERT` trigger takes care of that).
```
CREATE OR REPLACE FUNCTION category_increment()
RETURNS trigger
LANGUAGE plpgsql
AS
$$
DECLARE
v_category_inc int := 0;
BEGIN
SELECT MAX(category_id) + 1 INTO v_category_inc FROM article WHERE category = NEW.category;
IF v_category_inc is null THEN
NEW.category_id := 1;
ELSE
NEW.category_id := v_category_inc;
END IF;
RETURN NEW;
END;
$$
```
Using the function as a trigger.
```
CREATE TRIGGER trg_category_increment
BEFORE INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE category_increment()
```
Inserting some more values (post trigger appliance) for already existing categories and non-existing ones.
```
INSERT INTO article(category) VALUES
('stackoverflow'),
('stackexchange'),
('nonexisting');
```
**Query** used to select data:
```
select category, category_id From article order by 1,2
```
**Result for initial** inserts:
```
category category_id
stackexchange 1
stackexchange 2
stackexchange 3
stackoverflow 1
```
**Result after** final inserts:
```
category category_id
nonexisting 1
stackexchange 1
stackexchange 2
stackexchange 3
stackexchange 4
stackoverflow 1
stackoverflow 2
```
|
This has been asked many times and the general idea is **bound to fail in a multi-user environment** - and a blog system sounds like exactly such a case.
So the best answer is: **Don't.** Consider a different approach.
Drop the column `category_id` completely from your table - it does not store any information the other two columns `(id, category)` wouldn't store already.
Your `id` is a `serial` column and already auto-increments in a reliable fashion.
* [Auto increment SQL function](https://stackoverflow.com/questions/9875223/auto-increment-sql-function/9875517#9875517)
If you *need* some kind of `category_id` without gaps per `category`, generate it on the fly with `row_number()`:
* [Serial numbers per group of rows for compound key](https://stackoverflow.com/questions/24918552/serial-numbers-per-group-of-rows-for-compound-key/24918964#24918964)
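For illustration (Python's bundled SQLite, which supports window functions from SQLite 3.25; table shape borrowed from the question), generating `category_id` on the fly looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, category TEXT)")
conn.executemany("INSERT INTO article (category) VALUES (?)",
                 [("stackoverflow",), ("stackexchange",),
                  ("stackoverflow",), ("stackoverflow",)])

# Per-category, gapless sequence derived from the ordinary serial id.
rows = conn.execute(
    """SELECT category,
              row_number() OVER (PARTITION BY category ORDER BY id) AS category_id
       FROM article
       ORDER BY category, category_id""").fetchall()
```

Nothing is stored, so there is no race condition between concurrent inserts.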
|
Custom SERIAL / autoincrement per group of values
|
[
"",
"sql",
"postgresql",
"database-design",
"auto-increment",
""
] |
I would like to ask if it is possible to do this:
For example the search string is '009' -> (consider the digits as string)
Is it possible to have a query that will return any occurrences of this in the database, not considering the order?
for this example it will return
'009'
'090'
'900'
given these exist in the database. Thanks!
|
Use the `Like` operator.
**For example:**
```
SELECT Marks FROM Report WHERE Marks LIKE '%009%' OR Marks LIKE '%090%' OR Marks LIKE '%900%'
```
|
Split the string into individual characters, select all rows containing the first character and put them in a temporary table, then select all rows from the temporary table that contain the second character and put these in a temporary table, then select all rows from *that* temporary table that contain the third character.
Of course, there are probably many ways to optimize this, but I see no reason why it would not be *possible* to make a query like that work.
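Both answers amount to an order-insensitive match; in application code the cheapest way to express that is to compare sorted characters (Python sketch, illustration only):

```python
needle = "009"
candidates = ["009", "090", "900", "099", "123"]

# Two strings are permutations of each other iff their sorted characters match.
matches = [c for c in candidates if sorted(c) == sorted(needle)]
```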
|
SQL - just view the description for explanation
|
[
"",
"sql",
"string",
""
] |
My table has the following columns:
```
gamelogs_id (auto_increment primary key)
player_id (int)
player_name (varchar)
game_id (int)
season_id (int)
points (int)
```
The table has the following indexes
```
+-----------------+------------+--------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-----------------+------------+--------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| player_gamelogs | 0 | PRIMARY | 1 | player_gamelogs_id | A | 371330 | NULL | NULL | | BTREE | | |
| player_gamelogs | 1 | player_name | 1 | player_name | A | 3375 | NULL | NULL | YES | BTREE | | |
| player_gamelogs |          1 | points             |            1 | points             | A         |         506 |     NULL | NULL   | YES  | BTREE      |         |               |
| player_gamelogs | 1 | game_id | 1 | game_id | A | 37133 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | season | 1 | season | A | 30 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | team_abbreviation | 1 | team_abbreviation | A | 70 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | player_id | 1 | game_id | A | 41258 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | player_id | 2 | player_id | A | 371330 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | player_id | 3 | dk_points | A | 371330 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | game_player_season | 1 | game_id | A | 41258 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | game_player_season | 2 | player_id | A | 371330 | NULL | NULL | YES | BTREE | | |
| player_gamelogs | 1 | game_player_season | 3 | season_id | A | 371330 | NULL | NULL | | BTREE | | |
+-----------------+------------+--------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
```
I am trying to calculate the mean of points for a season and player prior to the game being played. So for the 3rd game of the season, avg\_points would be the mean of games 1 and 2. The game numbers are in sequential order such that an earlier game is less than a later game. I also have the option to use a date field but I figured that numeric comparison would be faster?
My query is as follows:
```
SELECT game_id,
player_id,
player_name,
(SELECT avg(points)
FROM player_gamelogs t2
WHERE t2.game_id < t1.game_id
AND t1.player_id = t2.player_id
AND t1.season_id = t2.season_id) AS avg_points
FROM player_gamelogs t1
ORDER BY player_name, game_id;
```
EXPLAIN produces the following output:
```
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-------+------+--------------------------------------+------+---------+------+--------+-------------------------------------------------+
| 1 | PRIMARY | t1 | ALL | NULL | NULL | NULL | NULL | 371330 | Using filesort |
| 2 | DEPENDENT SUBQUERY | t2 | ALL | game_id,player_id,game_player_season | NULL | NULL | NULL | 371330 | Range checked for each record (index map: 0xC8) |
```
I am not sure if it is because of the nature of the task involved or because of an inefficiency in my query. Thanks for any suggestions!
|
Please consider this query:
```
SELECT t1.season_id, t1.game_id, t1.player_id, t1.player_name, AVG(COALESCE(t2.points, 0)) AS average_player_points
FROM player_gamelogs t1
LEFT JOIN player_gamelogs t2 ON
t1.game_id > t2.game_id
AND t1.player_id = t2.player_id
AND t1.season_id = t2.season_id
GROUP BY
t1.season_id, t1.game_id, t1.player_id, t1.player_name
ORDER BY t1.player_name, t1.game_id;
```
Notes:
* To perform optimally, you'd need an additional index on (season\_id, game\_id, player\_id, player\_name)
* Even better would be to have a player table from which to retrieve the name by id. It seems redundant that we have to grab the player name from a log table, especially if it's required in an index.
* `Group by` already sorts by grouped columns. If you can, avoid ordering afterwards as it generates useless overhead. *As outlined in the comments, this is not an official behavior and the outcome of assuming its consistency over time should be pondered vs the risk of suddenly losing sorting.*
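A miniature of that self-join, run on Python's bundled SQLite with made-up numbers (one player, one season, games 1-3 scoring 10/20/30), shows the prior-games average, including the COALESCE turning "no prior games" into 0 instead of NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE player_gamelogs (game_id INT, player_id INT, season_id INT, points INT)")
conn.executemany("INSERT INTO player_gamelogs VALUES (?, 1, 1, ?)",
                 [(1, 10), (2, 20), (3, 30)])

rows = conn.execute(
    """SELECT t1.game_id, AVG(COALESCE(t2.points, 0)) AS avg_prior
       FROM player_gamelogs t1
       LEFT JOIN player_gamelogs t2
              ON t1.game_id > t2.game_id
             AND t1.player_id = t2.player_id
             AND t1.season_id = t2.season_id
       GROUP BY t1.game_id
       ORDER BY t1.game_id""").fetchall()
```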
|
Your query is fine as written:
```
SELECT game_id, player_id, player_name,
(SELECT avg(t2.points)
FROM player_gamelogs t2
WHERE t2.game_id < t1.game_id AND
t1.player_id = t2.player_id AND
t1.season_id = t2.season_id
) AS avg_points
FROM player_gamelogs t1
ORDER BY player_name, game_id;
```
But, for optimal performance you want two composite indexes on it: `(player_id, season_id, game_id, points)` and `(player_name, game_id, season_id)`.
The first index should speed the subquery. The second is for the outer `order by`.
|
MySQL very slow query
|
[
"",
"mysql",
"sql",
""
] |
I have this update that takes data from a view (upd_g307) that counts members in each family and puts it in another table, s_general.
Here is the update
```
Update s_general
Set g307 = (select upd_g307.county
from upd_g307
where upd_g307.id_section = s_general.Id_section
and upd_g307.rec_no = s_general. Rec_no
and upd_g307.f306 = s_general.F306)
Where
g307 is null
and id_section between 14000 and 15000
```
This query is taking very long to run, like half an hour or even more! What should I do to make it faster?
I'm using oracle sql\*
|
This kind of statement is often faster when re-written as a MERGE statement:
```
merge into s_general
using (
select county, id_section, rec_no, f306
from upd_g307
where id_section between 14000 and 15000
) t on (t.id_section = s_general.Id_section and t.rec_no = s_general.rec_no and t.f306 = s_general.F306)
when matched then update
set g307 = t.county
where g307 is null;
```
An index on `upd_g307 (id_section, rec_no, f306, county)` might help, as well as an index on `s_general (id_section, rec_no, f306)`.
|
You may use [updatable join](http://docs.oracle.com/cd/E11882_01/server.112/e25494/views.htm#ADMIN11782) view in Oracle and I will take a look on the performance of the join anyway, because it is IMO the lower bound of the elapsed time you may expect from the update.
The syntax is a follows:
```
UPDATE (
SELECT s_general.g307, upd_g307.county
FROM s_general
JOIN upd_g307 on upd_g307.id_section = s_general.Id_section
and upd_g307.rec_no = s_general. Rec_no
and upd_g307.f306 = s_general.F306
and s_general.g307 is null
and s_general.id_section between 14000 and 15000
)
set g307 = county
```
Unfortunately you'll get (with a high probability)
```
ORA-01779: cannot modify a column which maps to a non key-preserved table
```
if the view `upd_g307` is not found to be [key-preserved](http://docs.oracle.com/cd/E11882_01/server.112/e25494/views.htm#i1006318).
You may or may not be able to resolve it by defining a PK constraint on the source table of the view. As a last resort, define a temporary table (it could be a GTT) with a PK constraint on the join key and use it instead of the view.
```
-- same DDL as upd_g307
CREATE GLOBAL TEMPORARY TABLE upd_g307_gtt
(Id_section number,
Rec_no number,
F306 number,
g307 number,
county number)
;
-- default -> on commit delete
ALTER TABLE upd_g307_gtt
ADD CONSTRAINT pk_upd_g307_gtt
PRIMARY KEY (id_section, Rec_no, F306);
insert into upd_g307_gtt
select * from upd_g307;
UPDATE (
SELECT s_general.g307, upd_g307_gtt.county
FROM s_general
JOIN upd_g307_gtt on upd_g307_gtt.id_section = s_general.Id_section
and upd_g307_gtt.rec_no = s_general. Rec_no
and upd_g307_gtt.f306 = s_general.F306
and s_general.g307 is null
and s_general.id_section between 14000 and 15000
)
set g307 = county
;
```
|
How to speed up this update sql statment
|
[
"",
"sql",
"oracle",
"sql-update",
""
] |
TL;DR: most of this post consists of example that I've included to make as clear as possible, but the core of the question is contained in the middle section "The Actual question" where examples are reduced to the bone.
## My problem:
I have a database which contains data about football matches from which I am trying to extract some stats.
The database contains just one table, called 'allMatches', in which each entry represents a match. The fields of the table (I am including only the fields that are absolutely necessary to give a sense of what the problem is) are:
* ID: int, primary key of the table
* Date: date, the date the match have been played
* HT: varchar, the Home Team
* AT: varchar, the Away Team
* HG: int, the Home Team Score
* AG: int, the Away Team Score
For **each** entry in the database I have to extract some stats about both away and home team. This can be achieved very easily when you are considering stats about ALL previous matches, for example, to obtain goal scored and conceded stats, first I run this query:
```
singleTeamAllMatches=
select ID as MatchID,
Date as Date,
HT as Team,
HG as Scored,
AG as Conceded
from allMatches
UNION ALL
select ID as MatchID,
Date as Date,
AT as Team,
AG as Scored,
HG as Conceded
from allMatches;
```
This is not absolutely necessary, since it simply transforms the original table in this way:
```
this row in allMatches:
|ID |Date | HT |AT |HG | AG|
|42 |2011-05-08 |Genoa |Sampdoria | 2 | 1 |
"becomes" two rows in singleTeamAllMatches:
|MatchID |Date |Team |Scored|Conceded|
|42 |2011-05-08 |Genoa | 2 | 1 |
|42 |2011-05-08 |Sampdoria | 1 | 2 |
```
but allows me to get the stats I need with a very simple query:
```
select a.MatchID as MatchID,
       a.Team as Team,
       Sum(b.Scored) as totalScored,
       Sum(b.Conceded) as totalConceded
from singleTeamAllMatches a, singleTeamAllMatches b
where a.Team = b.Team AND b.Date < a.Date
group by a.MatchID, a.Team
```
I end up with a query that, when run, returns:
* MatchID: the ID of the corresponding match in the original Database
* Team: the team the data in this row is about
* totalScored: the goal scored by team in all matches before the one indicated by ID
* totalConceded: the goal scored by team in all matches before the one indicated by ID
In other words, if in this last query I obtain:
```
|MatchID| Team |totalScored|totalConceded|
|42 | Genoa |38 | 40 |
|42 | Sampdoria |30 | 42 |
```
It means that Genoa and Sampdoria played against each other in the match with ID 42 and, before that match Genoa had scored 38 goals and conceded 40, while Sampdoria had scored 30 and conceded 42.
## The Actual question:
Now, this is very easy because I consider ALL previous matches, what I have no idea how to accomplish is how to obtain the exact same stats considering only the 6 previous matches. For example, let's say that in singleTeamAllMatches I have:
```
|MatchID |Date |Team |Scored|Conceded|
|1 |2011-05-08 |TeamA | 1 | 5 |
|2 |2011-06-08 |TeamA | 0 | 2 |
|3 |2011-07-08 |TeamA | 3 | 0 |
|4 |2011-08-08 |TeamA | 4 | 0 |
|5 |2011-09-08 |TeamA | 1 | 0 |
|6 |2011-10-08 |TeamA | 0 | 1 |
|7 |2011-11-08 |TeamA | 0 | 1 |
|8 |2011-12-08 |TeamA | 1 | 1 |
```
I need to find a way to obtain something like this:
```
|MatchID| Team |totalScored|totalConceded|
|1 | TeamA |0 | 0 |
|2 | TeamA |1 | 5 |
|3 | TeamA |1 | 7 |
|4 | TeamA |4 | 7 |
|5 | TeamA |8 | 7 |
|6 | TeamA |9 | 7 |
|7 | TeamA |9 | 8 |
|8 | TeamA |8 | 4 |
```
Let's have a look at the last two rows in this query:
Row 7 means that in the last 6 matches before match 7 (matches 1-6) teamA scored 9 goals and conceded 8.
Row 8 isn't affected by the goals scored in match 1, because it just tells us that in the last 6 matches before match 8 (matches 2-7) teamA scored 8 goals and conceded 4.
Is there a way to obtain this with sql through the sqldf package?
(edit: actually any solution will do, with the dplyr package the task is almost trivial and efficiently accomplished)
## What I've done and why I don't like it
At the moment the only thing I could come up with was to import the data in R and loop over all the rows in allMatches using sql 'LIMIT' and the sqldf R package.
The following is an adaptation of the code I use to the example above. It is just a sample that gets the stats only for the home team, but the complete code is quite long and wouldn't be useful here.
allMatches and singleTeamAllMatches are dataframes whose structure and content is the same of the table and query I've described above.
```
lastMatchesData <- NULL
for(match in (1:nrow(allMatches))){
matchRow <- allMatches[match,]
T <- matchRow$HT
Date <- matchRow$Date
ID <- matchRow$ID
  lastMatches <- singleTeamAllMatches[singleTeamAllMatches$Team == T & singleTeamAllMatches$Date < Date ,]
TPerformance <- sqldf("select sum(Scored) as Scored,
sum(Conceded) as Conceded
from
(select * from lastMatches order by Date DESC limit 6)")
newRow <- cbind(ID,TPerformance)
lastMatchesData <- rbind(lastMatchesData,newRow)
}
```
I don't like this solution for two reasons: first of all, it is really ugly and messy. Remember that this is just a sample, but in the future I think I'm going to modify this code, and an all-SQL solution would be much better.
The second reason is that it is slow, and I mean really slow; again, an all-SQL solution would be much better.
|
Here is one solution I came up with using dplyr:
```
library(dplyr)
df <- df %>%
  group_by(Team) %>%
  mutate(cumScored = cumsum(Scored),
         totalScored = cumScored - ifelse(row_number() >= 7, lag(cumScored, 6), 0),
         cumConceded = cumsum(Conceded),
         totalConceded = cumConceded - ifelse(row_number() >= 7, lag(cumConceded, 6), 0)) %>%
  select(-cumScored, -cumConceded)
```
The idea is to first compute cumulative sums of goals scored and conceded, and then subtract the 6th lag of the cumulative sum from the current cumulative sum, which gives the partial sum over the last six matches. I could not find a way to do a cumulative sum on the fly over an arbitrary number of lags, so I used the trick of adding helper columns and then deselecting them. Hope this helps.
|
If you're not particular about whether you use `R` or not, this is easily doable in `MS SQL` using `PARTITION`.
You can do it like this:
```
SELECT MatchID, Team,
ISNULL(SUM(Scored) OVER
(PARTITION BY Team ORDER BY MatchID ROWS
BETWEEN 6 PRECEDING AND 1 PRECEDING),0) as TotalScored,
ISNULL(SUM(Conceded) OVER (PARTITION BY Team ORDER BY MatchID ROWS
BETWEEN 6 PRECEDING AND 1 PRECEDING),0) as TotalConceded
FROM singleTeamAllMatches
```
You can check that the result is the same as your desired output.
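sqldf runs on SQLite by default, and SQLite 3.25+ supports the same window-frame syntax, so, assuming a recent enough SQLite, a sketch of the equivalent query would be (with `COALESCE` in place of `ISNULL`):

```sql
SELECT MatchID, Team,
       COALESCE(SUM(Scored) OVER (PARTITION BY Team ORDER BY MatchID
                ROWS BETWEEN 6 PRECEDING AND 1 PRECEDING), 0) AS TotalScored,
       COALESCE(SUM(Conceded) OVER (PARTITION BY Team ORDER BY MatchID
                ROWS BETWEEN 6 PRECEDING AND 1 PRECEDING), 0) AS TotalConceded
FROM singleTeamAllMatches;
```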
|
Getting football(soccer) stats with R
|
[
"",
"sql",
"r",
"limit",
"dplyr",
""
] |
My Database Table values are inserted like below.
```
Userid AttedanceDate SessionType
mazhar 2016-01-02 10:37:22.397 login
mazhar 2016-01-03 10:38:24.970 logout
mazhar 2016-01-02 11:39:22.397 login
mazhar 2016-01-02 11:40:24.970 logout
mazhar 2016-01-03 10:37:22.397 login
mazhar 2016-01-03 10:38:24.970 logout
```
I need result like below.
```
Userid AttedanceDatelogin AttedanceDatelogout Total Hours
mazhar 2016-01-02 10:37:22.397 2016-01-02 11:40:24.970 01:02
mazhar 2016-01-03 10:37:22.397 2016-01-03 10:38:24.970 00:01
```
My select query.
```
select t1.AttendanceDate,t1.Userid,t1.SessionType from Table_Branches_AttendanceInfo as t1 inner join Table_Authorize_Users as t2 on t1.Userid=t2.Userid where (CONVERT(varchar(50), AttendanceDate, 101) BETWEEN @Fromdate AND @Todate)
```
|
You can use *conditional aggregation*:
```
SELECT UserId,
MAX(CASE WHEN SessionType = 'login' THEN AttedanceDate END) AS AttedanceDatelogin,
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END) AS AttedanceDatelogout,
DATEDIFF(hour,
MAX(CASE WHEN SessionType = 'login' THEN AttedanceDate END),
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END)) AS TotalHours
FROM Table_Branches_AttendanceInfo as t
GROUP BY Userid
```
**Edit**:
For multiple logins / logouts per day you can use the following query that considers only the first login and last logout of each day:
```
SELECT UserId,
MIN(CASE WHEN SessionType = 'login' THEN AttedanceDate END) AS AttedanceDatelogin,
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END) AS AttedanceDatelogout,
DATEDIFF(hour,
MIN(CASE WHEN SessionType = 'login' THEN AttedanceDate END),
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END)) AS TotalHours
FROM Table_Branches_AttendanceInfo as t
GROUP BY Userid, CONVERT(DATE, AttedanceDate)
```
**Edit2:**
To get hours / minutes you can use:
```
SELECT UserId,
MIN(CASE WHEN SessionType = 'login' THEN AttedanceDate END) AS AttedanceDatelogin,
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END) AS AttedanceDatelogout,
DATEDIFF(hour,
MIN(CASE WHEN SessionType = 'login' THEN AttedanceDate END),
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END)) AS Hours,
DATEDIFF(minute,
MIN(CASE WHEN SessionType = 'login' THEN AttedanceDate END),
MAX(CASE WHEN SessionType = 'logout' THEN AttedanceDate END)) % 60 AS Minutes
FROM mytable as t
GROUP BY Userid, CONVERT(DATE, AttedanceDate)
```
[**Demo here**](http://sqlfiddle.com/#!6/04e52/9)
|
In *SQL Server* you can achieve it as follows:
**QUERY**
```
select userid
, max(case when SessionType = 'login' then AttedanceDate end) as AttedanceDatelogin
, max(case when SessionType = 'logout' then AttedanceDate end) as AttedanceDatelogout
, datediff(hh, max(case when SessionType = 'login' then AttedanceDate end), max(case when SessionType = 'logout' then AttedanceDate end)) Hours
from #t
group by UserId
```
**SAMPLE DATA**
```
create table #t
(
Userid nvarchar(60),
AttedanceDate datetime,
SessionType nvarchar(60)
)
insert into #t values
('mazhar','2016-01-02 10:37:22.397','login'),
('mazhar','2016-01-03 10:38:24.970','logout'),
('mazhar','2016-01-02 11:39:22.397','login'),
('mazhar','2016-01-02 11:40:24.970','logout'),
('mazhar','2016-01-03 10:37:22.397','login'),
('mazhar','2016-01-03 10:38:24.970','logout')
```
**OUTPUT**
```
userid AttedanceDatelogin AttedanceDatelogout Hours
mazhar 2016-01-02 10:37:22.397 2016-01-02 11:37:24.970 1
```
---
**EDIT #1**
As per your comment, to achieve that you can do the following:
```
select userid
, min(case when SessionType = 'login' then AttedanceDate end) as AttedanceDatelogin
, max(case when SessionType = 'logout' then AttedanceDate end) as AttedanceDatelogout
, convert(char(5),dateadd(ss,datediff(ss, min(case when SessionType = 'login' then AttedanceDate end), max(case when SessionType = 'logout' then AttedanceDate end)),'19000101'),8) Hours
from #t
group by UserId, convert(date, AttedanceDate)
```
**OUTPUT AFTER EDIT**
```
userid AttedanceDatelogin AttedanceDatelogout Hours
mazhar 2016-01-02 10:37:22.397 2016-01-02 11:40:24.970 01:03
mazhar 2016-01-03 10:37:22.397 2016-01-03 10:38:24.970 00:01
```
|
How to divide the SQL single column for multiple use
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
The SQL I want is something like this:
```
SELECT week_no, type,
SELECT count(distinct user_id) FROM group WHERE pts > 0
FROM base_table
GROUP BY week_no, type
```
But I know I can use `COUNT(DISTINCT user_id)` to count the total number of users in each group, but how can I count the number of users who meet the condition 'pts > 0'?
I'm using mysql,
Thanks.
|
You probably want conditional aggregation:
```
SELECT week_no, type, COUNT(DISTINCT CASE WHEN pts > 0 THEN user_id END)
FROM base_table
GROUP BY week_no, type;
```
`SqlFiddleDemo`
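If the total distinct user count per group is also wanted alongside the conditional one, both aggregates can sit in the same query (a sketch; the alias names are made up):

```sql
SELECT week_no, type,
       COUNT(DISTINCT user_id) AS total_users,
       COUNT(DISTINCT CASE WHEN pts > 0 THEN user_id END) AS users_with_pts
FROM base_table
GROUP BY week_no, type;
```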
|
The condition for a group is made with the `HAVING` clause:
`SELECT count(user_id) FROM group GROUP BY user_id HAVING pts > 0`
|
How to select count(distinct) with condition inside groups
|
[
"",
"mysql",
"sql",
"syntax",
""
] |
I have a newbie question about a database I am trying to create. I have a list of publications, with this general order:
UID, Author, URL, Title, Publication, start page, end page, volume, year
Then I realized that there are multiple Authors and I began trying to normalize the Database for multiple Authors. Then I realized that the Order of the authors is important, and that a journal article could also have numerous Authors, between 1, and dozens, or possibly even more.
Should I just create a table with multiple Authors (null columns)(like 12 or something)? Or is there a way to have a variable number of columns depending on the number of authors?
|
## Database model
You basically need a `many-to-many` relationship between Authors and Publications, since one author can write many publications, and one publication can be written by more than one author.
This requires you to have 3 tables.
* Author - general info about every author (without publications\_id)
* Publication - general info about every publication (without author\_id)
* AuthorPublication - columns `author_id` and `publication_id` that are references to tables `Author` and `Publication`.
This way you're not binding a specific author to a publication, but you can have more of them, and the same thing the other way around.
## Additional notes
If you would like to distinguish authors' roles in a particular publication, you could also add a column like `id_role` that would be a reference to a dictionary table stating all possible roles for an author. This way you could differentiate between lead authors, co-authors etc. You could also store information about people handling the translation of the book, but perhaps you should then change the naming of `Author` to something less specific.
## Order of appearance
You can ensure a proper ordering of your authors by adding a column in `AuthorPublication` which you would increment separately for every `Publication`. This way you would be able to preserve the ordering as you need it.
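A minimal DDL sketch of the three tables (names, types and sizes are illustrative, not prescriptive):

```sql
CREATE TABLE Author (
    author_id INT PRIMARY KEY,
    name      VARCHAR(200) NOT NULL
);

CREATE TABLE Publication (
    publication_id INT PRIMARY KEY,
    title          VARCHAR(500) NOT NULL,
    pub_year       INT
);

-- Junction table: one row per (author, publication) pair
CREATE TABLE AuthorPublication (
    author_id      INT NOT NULL,
    publication_id INT NOT NULL,
    author_order   INT NOT NULL,  -- preserves the order of appearance
    PRIMARY KEY (author_id, publication_id),
    FOREIGN KEY (author_id) REFERENCES Author (author_id),
    FOREIGN KEY (publication_id) REFERENCES Publication (publication_id)
);
```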
|
You have a many-to-many relationship between the entity Publication and the entity Author.
A publication can have many authors; an author can have many publications.
So, you should create a table for this relationship, for example a table Authors\_Publications with the columns: UID, author\_id, publication\_id, order
|
SQL Database with variable number of columns
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I just wrote the following SQL query:
```
SELECT * from (SELECT voornaam, achternaam, TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE())
AS Leeftijd FROM leiding)
AS kolommen
HAVING Leeftijd < AVG(Leeftijd)
ORDER BY Leeftijd;
```
It doesn't return any results, but when I execute these queries separately, they do give results:
```
SELECT AVG(Leeftijd) from (SELECT voornaam, achternaam, TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE()) AS Leeftijd FROM leiding) AS kolommen;
```
returns '19.3571', and
```
SELECT Leeftijd from (SELECT voornaam, achternaam, TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE()) AS Leeftijd FROM leiding) AS kolommen;
```
returns a list of ages, ranging from 25 to 18.
Does anyone have an idea what's going wrong? Thanks in advance!
|
The combination of the previous answers should work:
```
SELECT
voornaam,
achternaam,
TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE()) AS Leeftijd
from
leiding
WHERE
TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE()) < (select AVG(TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE())) from leiding)
ORDER BY
Leeftijd
;
```
The first one is using an aggregate function in the WHERE clause, which is forbidden in SQL; the second one is using HAVING as WHERE, which is just bad. Oh, and you fetched TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE()) as Leeftijd, and probably you have a column with that name in your table...
So, this combination should work well for you.
|
Use `WHERE` instead of `HAVING`. `HAVING` is used with the `GROUP BY` keyword, not on its own.
So write this query:
```
SELECT * from (SELECT voornaam, achternaam, TIMESTAMPDIFF(YEAR, geboortedatum, CURDATE())
AS Leeftijd FROM leiding)
AS kolommen
GROUP BY Leeftijd
HAVING Leeftijd < AVG(Leeftijd)
ORDER BY Leeftijd;
```
> The MySQL HAVING clause is used in the SELECT statement to specify
> filter conditions for a group of rows or aggregates.
>
> The MySQL HAVING clause is often used with the GROUP BY clause. When
> using with the GROUP BY clause, we can apply a filter condition to the
> columns that appear in the GROUP BY clause. If the GROUP BY clause is
> omitted, the HAVING clause behaves like the WHERE clause.
<http://www.mysqltutorial.org/mysql-having.aspx>
|
No results when comparing values in SQL
|
[
"",
"mysql",
"sql",
""
] |
I'm creating an application that is essentially an integrity check between two databases - one is MSSQL and one is an old provider Btrieve. As part of the requirements all columns for every table need to be compared to ensure the data matches. Currently we loop through each table, get the basic count of the table in both DBs, and then delve into the columns. For numeric fields we do a simple SUM, and for text fields we sum up the length of the column for every row. If these match in both DBs, it's a good indicator the data has migrated across correctly.
This all works fine, but I need to develop something similar for datetime fields. Obviously we can't really SUM these fields, so I'm wondering if anyone has ideas on the best way to approach this. I was thinking maybe the seconds since a certain date but the number will be huge.
Any other ideas? Thanks!
|
The most straightforward answer to me would be to convert the date or datetime fields to integers with the same format. YYYYMMDD or YYYYMMDDHHmmss work just fine as long as your formats use leading zeroes. In SQL Server, you can do something like:
```
SELECT SUM(CAST(REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(20),DateTimeColumn,120),' ',''),':',''),'-','') AS BIGINT)) .....
```
Alternately, you can convert them to either the number of days from a given date (`'1970-01-01'`), or the number of seconds from a given date (`'1970-01-01 00:00:00'`) if you use time.
```
SELECT SUM(DATEDIFF(DAY,'19700101',DateColumn)) ....
```
I'm not familiar enough with Btrieve to know what kinds of functions are available for formatting dates, however.
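Putting the pieces together, a per-table checksum on the SQL Server side might look like this sketch (table and column names are placeholders; the `BIGINT` cast guards the sum, and the second-level diff stays within `INT` range for dates roughly between 1902 and 2038):

```sql
SELECT COUNT(*) AS row_count,
       SUM(CAST(DATEDIFF(DAY, '19700101', DateColumn) AS BIGINT)) AS date_checksum,
       SUM(CAST(DATEDIFF(SECOND, '19700101', DateTimeColumn) AS BIGINT)) AS datetime_checksum
FROM SomeTable;
```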
|
Using `EXCEPT` in SQL, along the same lines as the numeric fields, you can compare the date counts in both tables. For the old source you may generate the select statement using Excel or in the native database and bring it over to SQL Server. For demonstration purposes I have used two tables and show an `EXCEPT` example below.
```
IF EXISTS (SELECT * FROM sys.objects
WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[DateCompareOld]') AND
TYPE IN (N'U'))
DROP TABLE [dbo].[DateCompareOld]
GO
CREATE TABLE dbo.DateCompareOld
(
AsOf DATETIME
)
INSERT INTO DateCompareOld
SELECT '01/01/2016' UNION ALL
SELECT '01/01/2016' UNION ALL
SELECT '01/01/2016' UNION ALL
SELECT '01/02/2016' UNION ALL
SELECT '01/02/2016' UNION ALL
SELECT '01/02/2016'
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[DateCompareNew]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[DateCompareNew]
GO
CREATE TABLE dbo.DateCompareNew
(
AsOf DATETIME
)
INSERT INTO DateCompareNew
SELECT '01/01/2016' UNION ALL
SELECT '01/01/2016' UNION ALL
SELECT '01/01/2016' UNION ALL
SELECT '01/02/2016' UNION ALL
SELECT '01/02/2016' UNION ALL
SELECT '01/02/2016'
SELECT AsOf,COUNT(*) AsOfCount
FROM DateCompareOld
GROUP BY AsOf
Except
SELECT AsOf,COUNT(*) AsOfCount
FROM DateCompareNew
GROUP BY AsOf
```
|
'Summing' a date field in SQL - any ideas?
|
[
"",
"sql",
"sql-server",
"datetime",
"integrity",
""
] |
I have below sample data:
`03202012` as date but the column datatype is `Varchar`.
I want to convert it to `2012-03-20 00:00:00.000` as `Datetime`.
I tried using
```
CAST(CONVERT(CHAR(10), Column, 101) AS DATETIME)
```
But I get an error:
> The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
Complete code snippet to test:
```
DECLARE @Column VARCHAR(MAX) = '03202012'
SELECT CAST(CONVERT(CHAR(10), @Column, 101) AS DATETIME)
```
|
I found that the script below helped solve my concern.
```
SELECT convert(datetime, STUFF(STUFF('31012016',3,0,'-'),6,0,'-'), 105)
Result: 2016-01-31 00:00:00.000
```
Thanks all for the effort. :D
|
Use yyyyMMdd format, that always works:
```
DECLARE @myDateString varchar(10) = '03202012';
SELECT cast( substring(@myDateString, 5, 4)+
substring(@myDateString, 1, 2)+
substring(@myDateString, 3, 2) AS datetime);
```
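An equivalent sketch that rearranges the string the same way but passes style 112 (the ISO `yyyyMMdd` style) explicitly, which makes the intent a bit clearer:

```sql
DECLARE @myDateString varchar(10) = '03202012';
SELECT CONVERT(datetime,
               SUBSTRING(@myDateString, 5, 4) +  -- year
               SUBSTRING(@myDateString, 1, 2) +  -- month
               SUBSTRING(@myDateString, 3, 2),   -- day
               112);
```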
|
SQL Server: how to change data entry from VARCHAR to DATETIME?
|
[
"",
"sql",
"sql-server",
""
] |
I am attempting to update a set of records that are duplicates in three particular columns. The reason for this update is that there is a conflict when trying to insert this data into an updated database schema. The conflict is caused by a new constraint that has been added on `DM_ID`, `DM_CONTENT_TYPE_ID`, and `DMC_TYPE`. I need to adjust the `DM_CONTENT_TYPE_ID` column to either 1, 3, or 5 based on the row number to get around this. A sample of the duplicate data looks as such. Notice that the first three columns are the same.
```
+--------+--------------------+----------+--------------------------------------+
| DM_ID | DM_CONTENT_TYPE_ID | DMC_TYPE | DMC_PATH |
+--------+--------------------+----------+--------------------------------------+
| 314457 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7897-0.tif |
| 314457 | 1 | TIF | \\DOCIMG\DR\640\0001_640_0001.tif |
| 314458 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7898-0.tif |
| 314458 | 1 | TIF | \\DOCIMG\TD\640\0002_640_0001.tif |
| 314460 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7900-0.tif |
| 314460 | 1 | TIF | \\DOCIMG\ZZ\640\0003_640_0003.tif |
| 314461 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7901-0.tif |
| 314461 | 1 | TIF | \\DOCIMG\ED\6501\03_0001.tif |
| 314461 | 1 | TIF | \\DOCIMG\ZZ\640\0004_640_0004.tif |
+--------+--------------------+----------+--------------------------------------+
```
This is the desired output to get around the constraint issue:
```
+--------+--------------------+----------+--------------------------------------+
| DM_ID | DM_CONTENT_TYPE_ID | DMC_TYPE | DMC_PATH |
+--------+--------------------+----------+--------------------------------------+
| 314457 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7897-0.tif |
| 314457 | 3 | TIF | \\DOCIMG\DR\640\0001_640_0001.tif |
| 314458 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7898-0.tif |
| 314458 | 3 | TIF | \\DOCIMG\TD\640\0002_640_0001.tif |
| 314460 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7900-0.tif |
| 314460 | 3 | TIF | \\DOCIMG\ZZ\640\0003_640_0003.tif |
| 314461 | 1 | TIF | \\DOCIMG\CD\1965\19651227\7901-0.tif |
| 314461 | 3 | TIF | \\DOCIMG\ED\6501\03_0001.tif |
| 314461 | 5 | TIF | \\DOCIMG\ZZ\640\0004_640_0004.tif |
+--------+--------------------+----------+--------------------------------------+
```
The script I have developed is as such:
```
;WITH CTE AS
(SELECT -- Grab the documents that have a duplicate.
DM_ID
,DM_CONTENT_TYPE_ID
,DMC_TYPE
,COUNT(*) 'COUNT'
FROM
[DM_CONTENT]
GROUP BY
DM_ID
,DM_CONTENT_TYPE_ID
,DMC_TYPE
HAVING
COUNT(*) > 1),
CTE2 AS
(SELECT -- Designate the row number for the duplicate documents.
DMC.*
,ROW_NUMBER() OVER(PARTITION BY DMC.DM_ID, DMC.DM_CONTENT_TYPE_ID, DMC.DMC_TYPE ORDER BY DMC.DMC_PATH) AS 'ROWNUM'
FROM
[DM_CONTENT] DMC
JOIN CTE
ON DMC.DM_ID = CTE.DM_ID),
CTE3 AS
(SELECT -- Set the new document type ID based on the row number.
*
,CASE
WHEN ROWNUM = 1
THEN 1
WHEN ROWNUM = 2
THEN 3
WHEN ROWNUM = 3
THEN 5
END AS 'DM_CONTENT_TYPE_ID_NEW'
FROM
CTE2)
UPDATE -- Update the records.
DMC
SET
DMC.DM_CONTENT_TYPE_ID = CTE3.DM_CONTENT_TYPE_ID_NEW
FROM
[DM_CONTENT] DMC
JOIN CTE3
ON DMC.DM_ID = CTE3.DM_ID
```
Now when I execute the script, it says that the appropriate rows have been affected. However, when I check the `[DM_CONTENT]` table, the `DM_CONTENT_TYPE_ID` actually hasn't been updated and still remains at a value of `1`. If I `SELECT` from `CTE3`, the `DM_CONTENT_TYPE_ID_NEW`, is the appropriate new ID. My logic seems to be sound, but I cannot figure out what mistake I am making. Does anyone have any insight? Thanks in advance!
|
This seems much simpler to write as:
```
WITH toupdate AS (
SELECT DMC.*,
ROW_NUMBER() OVER (PARTITION BY DMC.DM_ID, DMC.DM_CONTENT_TYPE_ID, DMC.DMC_TYPE
ORDER BY DMC.DMC_PATH) AS ROWNUM
FROM DM_CONTENT DMC
)
UPDATE toupdate
SET DM_CONTENT_TYPE_ID = (CASE ROWNUM WHEN 2 THEN 3 WHEN 3 THEN 5 END)
WHERE ROWNUM > 1;
```
Now, I find it suspicious that your `join` conditions are only on `DM_ID`. I think the problem is that you are getting multiple matches between the CTE and your table. An arbitrary match is used for the update -- and that happens to be the first one encountered (hence a value of 1).
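If the multi-CTE script is kept instead, one way to apply that observation is to make the final UPDATE join on all the identifying columns, using `DMC_PATH` to disambiguate the duplicates (a sketch against the question's `CTE3`):

```sql
UPDATE DMC
SET    DMC.DM_CONTENT_TYPE_ID = CTE3.DM_CONTENT_TYPE_ID_NEW
FROM   [DM_CONTENT] DMC
JOIN   CTE3
   ON  DMC.DM_ID    = CTE3.DM_ID
  AND  DMC.DMC_TYPE = CTE3.DMC_TYPE
  AND  DMC.DMC_PATH = CTE3.DMC_PATH  -- DMC_PATH tells the duplicate rows apart
```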
|
Try
```
UPDATE CTE3
SET DM_CONTENT_TYPE_ID = DM_CONTENT_TYPE_ID_NEW
```
instead of what you're currently doing.
Updating from a CTE works a little differently than regular table joins.
|
SQL Update with CTEs not updating records
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm looking at an example in my book, *SQL Antipatterns*. The example table is
```
TreePaths
==============================
ancestor | descendant
------------------------------
1 1
1 2
1 3
1 4
1 5
1 6
1 7
2 2
2 3
3 3
4 4
4 5
4 6
4 7
5 5
6 6
6 7
7 7
```
and the query I'm confused about is
```
INSERT INTO TreePaths (ancestor, descendant)
SELECT t.ancestor, 8
FROM TreePaths AS t
WHERE t.descendant = 5
UNION ALL
SELECT 8,8
```
So let me first understand the sub-query
```
SELECT t.ancestor, 8
FROM TreePaths AS t
WHERE t.descendant = 5
```
This should return
```
t.ancestor | ????
---------------------
5 ?
5 ?
5 ?
```
I thought
```
SELECT x, y
```
meant that `x` and `y` are names of columns, and `8` is not the name of a column. What the heck is going on here?
|
In this case, they really just represent the number; these are literals. Since you tagged MySql, you can check it [here](http://dev.mysql.com/doc/refman/5.7/en/literals.html).
So the subquery would return
```
t.ancestor | 8
---------------------
1 8
4 8
5 8
```
Taking the UNION in the account, it would return
```
t.ancestor | 8
---------------------
1 8
4 8
5 8
8 8
```
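For what it's worth, the literal can be given a column alias like any other select-list expression, which makes the INSERT more self-documenting (a sketch):

```sql
SELECT t.ancestor AS ancestor,
       8          AS descendant  -- constant literal, evaluated once per row
FROM TreePaths AS t
WHERE t.descendant = 5;
```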
|
`x` and `y` are *expressions* which are evaluated once for each row in the table, and whose values are taken to be the corresponding columns in the current row. `8` is also an expression, *evaluated once for each row,* whose value is taken to be the constant "8".
In this case, the query will return
```
t.ancestor | 8
------------------
1 8
4 8
5 8
```
as scaisEdge observed.
|
What does it mean to SELECT a number?
|
[
"",
"mysql",
"sql",
"algorithm",
"database-design",
"tree",
""
] |
This is a question from Stanford online database course exercise.
Find the movie(s) with the highest average rating. Return these movie title(s) and their average rating. Use **SQLite**.
I've seen solutions kindly suggested by others, e.g,
1. [fetch the row with max values](https://stackoverflow.com/questions/121387/fetch-the-row-which-has-the-max-value-for-a-column).
2. [get top entries](https://stackoverflow.com/questions/11320039/mysql-get-top-average-entries).
But what I hope to understand here is where and why my current solution, presented here, went wrong.
---
*The movie rating table:*
```
rID mID stars ratingDate
201 101 2 2011-01-22
201 101 4 2011-01-27
203 103 2 2011-01-20
203 108 4 2011-01-12
203 108 2 2011-01-30
204 101 3 2011-01-09
205 103 3 2011-01-27
205 104 2 2011-01-22
...
```
*Note: mID represents movie ID, rID represents reviewer ID, stars represent the scores ranked by reviewers.*
My first thought is to get the average scores for each movie, using code as following:
```
Select mID, avg(stars) AS avg_stars
From Rating
Group by mID
```
The resulting summary table is
```
mID avg_stars
101 3.0
103 2.5
104 2.5
106 4.5
107 4.0
108 3.3
```
---
Then I want to select the max value of the scores column and the associated mIDs
```
Select mID, max(avg_stars) AS Max_score
From (
Select mID, avg(stars) AS avg_stars
From Rating
Group by mID) T
```
I expected to get:
```
mID Max_score
106 4.5
```
But instead, I get:
```
mID Max_score
108 4.5
```
|
You seem to use MySQL as a DBMS, which allows non-Standard syntax:
When you return `mID` without adding it to `GROUP BY`, MySQL returns a single row with the maximum of the averages and an arbitrary value for `mID`.
This is a generic Standard SQL solution:
```
Select mID, avg(stars) AS avg_stars
From Rating
Group by mID
having avg(stars) =
( select max(avg_stars) -- find the max of the averages
from
( Select mID, avg(stars) AS avg_stars
From Rating
Group by mID
) T
)
```
This might be very inefficient, that's why there are several proprietary syntax extensions. Most DBMSes (but not MySQL) support Standard SQL's Windowed Aggregate Functions:
```
select *
from
(
Select mID, avg(stars) AS avg_stars,
max(avg(stars)) as max_avg
From Rating
Group by mID
) T
where avg_stars = max_avg
```
Edit:
As you added SQLite as DBMS, my 2nd query will not work (SQLite doesn't support Analytical Functions, too).
But as `WITH` is supported you can simplify #1 to a query similar to @user3580870's:
```
with cte as
( Select mID, avg(stars) AS avg_stars
From Rating
Group by mID
)
select * from cte
where avg_stars =
( select max(avg_stars) -- find the max of the averages
from cte
);
```
And this is still Standard SQL compliant...
|
Instead of subquery try using order by and limit to first result:
```
SELECT mID, AVG(stars) AS avg_stars
FROM Rating
GROUP BY mID
ORDER BY avg_stars DESC LIMIT 1;
```
|
Find the movies with the highest average rating using SQL max()
|
[
"",
"sql",
"sqlite",
""
] |
I have a few million strings that relate to file paths in my database;
due to a third party program these paths have become nested like below:
```
C:\files\thirdparty\thirdparty\thirdparty\thirdparty\thirdparty\thirdparty\unique_bit_here\
```
I want to update the entries so that `thirdparty\thirdparty\etc` becomes `\thirdparty`.
I have tried this code:
```
UPDATE table
SET Field = REPLACE(Field, 'tables\thirdparty\%thirdparty\%\', 'tables\thirdparty\')
```
|
```
WHILE EXISTS (SELECT * FROM table WHERE Field LIKE '%\thirdparty\thirdparty\%')
BEGIN
UPDATE table SET Field = REPLACE(Field, '\thirdparty\thirdparty\', '\thirdparty\')
END
```
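A defensive variation of the same loop (a sketch; `table` stays as the placeholder name and the cap of 100 is arbitrary) adds an iteration limit in case some unexpected pattern keeps matching:

```sql
DECLARE @i INT = 0;
WHILE EXISTS (SELECT * FROM table WHERE Field LIKE '%\thirdparty\thirdparty\%')
      AND @i < 100  -- arbitrary safety cap
BEGIN
    UPDATE table
    SET Field = REPLACE(Field, '\thirdparty\thirdparty\', '\thirdparty\');
    SET @i = @i + 1;
END
```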
|
So do you want something like this?
```
SELECT SUBSTRING('tables\thirdparty\%thirdparty\%\',0,CHARINDEX('\','tables\thirdparty\%thirdparty\%\',0)) + '\thirdparty\'
```
OR
```
UPDATE table
SET Field = REPLACE(Field, Field, (SELECT SUBSTRING(Field,0,CHARINDEX('\',Field,0)) + '\thirdparty\'))
```
|
How to replace duplicate words in a column with just one word in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have a table trans\_hist in an MS Access database where time is stored in a short "HH:MM" format, e.g.:
[![Image][1]][1]
Now I have created a query which gives the total time for each user (just simply summing the time), but if the total time exceeds 24 hours then it resets and starts over again from 00:15 (all times are stored in 15-minute intervals).
**Problem: For Customer (UID) 1 the total time should have been 32:30, however it shows 8:30**
Current Result:
```
UID Time_Elapsed
1 5:00
1 8:30
1 9:00
1 6:00
2 2:15
2 2:00
3 1:15
5 4:00
1 4:00
```
Result:
[![Image][2]][2]
* DATA

```
Cust_UID  Trans_Date  Agen_Name  Prog_Name  Prime_Serv  Prime_Serv_Time
10014     13-Dec-15   LAC        RA         BMC         01:00
10021     14-Dec-15   LAC        RA         AP          01:00
10022     15-Dec-15   LAC        RA         AP          01:00
10021     16-Dec-15   LAC        RA         SM          00:45
10020     17-Dec-15   LAC        RA         AP          01:00
10027     18-Dec-15   LAC        RA         DA          00:15
10028     18-Dec-15   LAC        RA         DA          00:15
10026     18-Dec-15   LAC        RA         DA          00:15
10029     18-Dec-15   LAC        RA         DA          00:15
10030     18-Dec-15   LAC        RA         DA          00:15
10031     18-Dec-15   LAC        RA         DA          00:15
10023     19-Dec-15   LAC        RA         Clinical    02:00
10023     20-Dec-15   LAC        RA         Clinical    01:30
10023     20-Dec-15   LAC        RA         Clinical    02:00
10020     21-Dec-15   LAC        RA         SM          00:15
10023     21-Dec-15   LAC        RA         SM          00:30
10022     22-Dec-15   LAC        RA         Clinical    00:30
10022     22-Dec-15   LAC        RA         IB          00:30
10021     22-Dec-15   LAC        RA         IB          00:30
10009     22-Dec-15   LAC        RA         IB          00:30
10019     23-Dec-15   LAC        RA         STM         00:45
10009     23-Dec-15   LAC        RA         Staff - In  00:30
10021     23-Dec-15   LAC        RA         Staff - In  00:30
10022     23-Dec-15   LAC        RA         Staff - In  00:30
10024     23-Dec-15   LAC        RA         Staff - In  00:30
10033     23-Dec-15   LAC        RA         Staff - In  00:30
10025     23-Dec-15   LAC        RA         Clinical    00:45
10035     28-Dec-15   LAC        OA         CA          05:00
10040     28-Dec-15   LAC        OA         CA          05:00
10039     28-Dec-15   LAC        OA         CA          05:00
10038     28-Dec-15   LAC        OA         CA          05:00
10042     28-Dec-15   LAC        OA         CA          05:00
10036     28-Dec-15   LAC        OA         CA          05:00
10037     28-Dec-15   LAC        OA         CA          05:00
10006     30-Dec-15   LAC        Test 1     DA          01:45
10005     30-Dec-15   LAC        Test 2     DG          01:45
10015     30-Dec-15   LAC        Test 2     IB          02:15
10015     30-Dec-15   LAC        Test 4     DG          03:15
10019     30-Dec-15   LAC        OA         CA          15:30
10005     31-Dec-15   LAC        OA         CA          12:00
```
[Data][3]
Result
Prog\_Name Prime\_Serv Total\_Serv\_Time
OA CA 62:30
RA AP 3:0
RA BMC 1:0
RA Clinical 7:45
RA DA 2:30
RA IB 2:30
RA Staff - In 2:30
RA SM 2:30
RA STM 1:45
Test 1 DA 2:45
Test 2 DG 2:45
Test 2 IB 2:15
Test 4 DG 3:15
[Result][4]
|
I hope this SQL query helps:
```
SELECT T.UID,
CSTR(total_hours
+ INT(total_min / 60))
+ ":"
+ CSTR(total_min Mod 60) as result
FROM
(SELECT UID,
SUM(HOUR(TH.time_elapsed)) AS total_hours,
SUM(MINUTE(TH.time_elapsed)) AS total_min
FROM trans_hist AS TH
GROUP BY UID) AS T
```
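The underlying arithmetic (sum hours and minutes separately, then normalise, so totals past 24:00 don't wrap) can be sketched in a few lines of Python, using the UID 1 durations from the question:

```python
# Durations for UID 1 from the question; expected total is 32:30.
durations = ["5:00", "8:30", "9:00", "6:00", "4:00"]

# Convert everything to minutes, sum, then format back as H:MM.
total_min = sum(int(h) * 60 + int(m)
                for h, m in (d.split(":") for d in durations))
result = f"{total_min // 60}:{total_min % 60:02d}"
print(result)  # 32:30
```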
|
Use a function like this:
```
Public Function FormatHourMinute( _
ByVal datTime As Date, _
Optional ByVal strSeparator As String = ":") _
As String
' Returns count of days, hours and minutes of datTime
' converted to hours and minutes as a formatted string
' with an optional choice of time separator.
'
' Example:
' datTime: #10:03# + #20:01#
' returns: 30:04
'
' 2005-02-05. Cactus Data ApS, CPH.
Dim strHour As String
Dim strMinute As String
Dim strHourMinute As String
strHour = CStr(Fix(datTime) * 24 + Hour(datTime))
' Add leading zero to minute count when needed.
strMinute = Right("0" & CStr(Minute(datTime)), 2)
strHourMinute = strHour & strSeparator & strMinute
FormatHourMinute = strHourMinute
End Function
```
and a simple query:
```
Select
UID,
FormatHourMinute(Sum([Time_Elapsed])) As TotalTime
From
trans_hist
Group By
UID
```
|
MS Access: Summing time greater than 24 hours in query restarts from 1 again
|
[
"",
"sql",
"vba",
"ms-access-2010",
""
] |
Laravel Framework version 5.2.5
When I updated a record, updated\_at was not changed but created\_at changed to the current timestamp.
Is this correct?
```
MariaDB [moon]> show columns from users;
+----------------+------------------+------+-----+---------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+------------------+------+-----+---------------------+-----------------------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | NULL | |
| email | varchar(255) | NO | UNI | NULL | |
| password | varchar(60) | NO | | NULL | |
| remember_token | varchar(100) | YES | | NULL | |
| created_at | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| updated_at | timestamp | NO | | 0000-00-00 00:00:00 | |
+----------------+------------------+------+-----+---------------------+-----------------------------+
```
```
<?php
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreateUsersTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('users', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->string('email')->unique();
$table->string('password', 60);
$table->rememberToken();
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::drop('users');
}
}
```
|
Looking at your database structure, the problem is `on update CURRENT_TIMESTAMP` on the `created_at` column. It obviously shouldn't be like this. Looking at the migration, it's unlikely it set this automatically, so it was probably set manually in MySQL; you should remove `on update CURRENT_TIMESTAMP` from the `created_at` column.
When you update your model using Eloquent, the `updated_at` column should be updated without a problem (even with the current database schema). If it's not, you are probably not using Eloquent, so it won't be updated automatically. If you are not sure, please show the code you use to update this record.
|
A lot of people have had this issue recently, and you can read the discussion on Github here: <https://github.com/laravel/framework/issues/11518>
> MySQL 5.7 no longer allows 0000-00-00 as a valid timestamp with
> strict mode turned on (which it is by default). So either use
> `->nullableTimestamps()` or `->timestamp()->useCurrent()`.
You can fix this by changing this:
```
$table->timestamps();
```
To either one of these options:
```
// Option 1:
$table->nullableTimestamps();
// Option 2:
$table->timestamp('updated_at')->useCurrent();
$table->timestamp('created_at')->useCurrent();
```
Also, on this MySQL page: <https://dev.mysql.com/doc/refman/5.6/en/timestamp-initialization.html>
> Alternatively, if `explicit_defaults_for_timestamp` is disabled (the default), do either of the following:
>
> Define the column with a DEFAULT clause that specifies a constant default value.
>
> Specify the NULL attribute. This also causes the column to permit NULL values, which means that you cannot assign the current timestamp by setting the column to NULL. Assigning NULL sets the column to NULL.
In either case, the above solution should fix your problem.
|
Is laravel created_at and updated_at inverse?
|
[
"",
"sql",
"laravel",
"eloquent",
"laravel-5",
""
] |
I have two tables:
```
T1
A
B
C
D
T2
A
B
E
F
G
```
Now I want a query that will combine those two tables but exclude the records they have in common. The output should look like:
```
T1T2
C
D
E
F
G
```
How to do that?
|
Looks like you need `FULL OUTER JOIN` and exclude common part. You can simulate it with:
```
SELECT T1.col_name
FROM T1
LEFT JOIN T2
ON T1.col_name = T2.col_name
WHERE T2.col_name IS NULL
UNION
SELECT T2.col_name
FROM T2
LEFT JOIN T1
ON T1.col_name = T2.col_name
WHERE T1.col_name IS NULL;
```
`SqlFiddleDemo`
```
ββββββββββββ
β col_name β
β βββββββββββ£
β C β
β D β
β E β
β F β
β G β
ββββββββββββ
```
---
More info: [Visual Representation of SQL Joins](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins)
[](https://i.stack.imgur.com/cwCjY.png)
```
SELECT <select_list>
FROM Table_A A
FULL OUTER JOIN Table_B B
ON A.Key = B.Key
WHERE A.Key IS NULL OR B.Key IS NULL
```
Unfortunately `MySQL` does not support `FULL OUTER JOIN` so I used union of 2 `LEFT JOIN`.
[](https://i.stack.imgur.com/roWB1.png)
[](https://i.stack.imgur.com/Bs79Y.png)
All images from <http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins>
# Addendum
> But what if I have two different tables with different columns, but both of them have one same column? The used SELECT statements have a different number of columns
You could easily expand it with additional columns.
```
SELECT 'T1' AS tab_name, T1.col_name, T1.col1, NULL AS col2
FROM T1
LEFT JOIN T2
ON T1.col_name= T2.col_name
WHERE T2.col_name IS NULL
UNION
SELECT 'T2' AS tab_name, T2.col_name, NULL, T2.col2
FROM T2
LEFT JOIN T1
ON T1.col_name= T2.col_name
WHERE T1.col_name IS NULL;
```
`LiveDemo`
Output:
```
ββββββββββββ¦βββββββββββ¦βββββββ¦ββββββββββββββββββββββ
β tab_name β col_name β col1 β col2 β
β βββββββββββ¬βββββββββββ¬βββββββ¬ββββββββββββββββββββββ£
β T1 β C β 3 β β
β T1 β D β 4 β β
β T2 β E β β 2016-01-03 00:00:00 β
β T2 β F β β 2016-01-02 00:00:00 β
β T2 β G β β 2016-01-01 00:00:00 β
ββββββββββββ©βββββββββββ©βββββββ©ββββββββββββββββββββββ
```
|
I see two possible solutions.
Using `UNION ALL` together with an outer select with `GROUP BY x HAVING COUNT(x) = 1`:
```
SELECT * FROM (SELECT a FROM t1 UNION ALL SELECT a FROM t2) as t12 GROUP BY a HAVING COUNT(a) = 1
```
Using `UNION` to combine two `SELECT`s filtered by subqueries:
```
(SELECT a FROM t1 WHERE a NOT IN (SELECT a FROM t2)) UNION (SELECT a FROM t2 WHERE a NOT IN (SELECT a FROM t1))
```
Not sure which one is most performant!
[See it in action here](http://sqlfiddle.com/#!9/ff1f3/6).
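As a quick sanity check, both approaches can be run against the sample data using Python's `sqlite3` (an illustration only; SQLite happens to accept the same syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a TEXT)")
conn.execute("CREATE TABLE t2 (a TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("A",), ("B",), ("C",), ("D",)])
conn.executemany("INSERT INTO t2 VALUES (?)", [("A",), ("B",), ("E",), ("F",), ("G",)])

# Method 1: UNION ALL plus HAVING COUNT = 1
grouped = [r[0] for r in conn.execute(
    "SELECT a FROM (SELECT a FROM t1 UNION ALL SELECT a FROM t2) "
    "GROUP BY a HAVING COUNT(a) = 1 ORDER BY a")]

# Method 2: two NOT IN filters combined with UNION
subq = [r[0] for r in conn.execute(
    "SELECT a FROM t1 WHERE a NOT IN (SELECT a FROM t2) "
    "UNION SELECT a FROM t2 WHERE a NOT IN (SELECT a FROM t1) ORDER BY a")]

print(grouped)  # ['C', 'D', 'E', 'F', 'G']
print(subq)     # ['C', 'D', 'E', 'F', 'G']
```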
|
Combine two tables, exclude same records
|
[
"",
"mysql",
"sql",
""
] |
Usually I only run this loop with fewer than 50 items and it runs fine. Now I am trying to scale up, and with 5k+ it takes several minutes.
```
Public Shared Sub Add(type As BufferType, objIDs As List(Of Integer), userToken As String)
Dim timeInUTCSeconds As Integer = Misc.UTCDateToSeconds(Date.Now)
For Each objID As Integer In objIDs
Dim insertStmt As String = "IF NOT EXISTS (SELECT ObjectID From " & TableName(type) & " where ObjectID = " & objID & " and UserToken = '" & userToken.ToString & "')" & _
" BEGIN INSERT INTO " & TableName(type) & "(ObjectID,UserToken,time) values(" & objID & ", '" & userToken.ToString & "', " & timeInUTCSeconds & ") END" & _
" ELSE BEGIN UPDATE " & TableName(type) & " set Time = " & timeInUTCSeconds & " where ObjectID = " & objID & " and UserToken = '" & userToken.ToString & "' END"
DAL.SQL.Insert(insertStmt)
Next
End Sub
```
|
You should always use [**SQLBULKCOPY**](https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy(v=vs.110).aspx) for inserting a large amount of data. Have a look [here](https://stackoverflow.com/questions/287708/best-practices-for-uploading-files-to-database#287749), which is generally considered the best practice for inserting a large amount of data into a table.
The demo code below, taken from [here](http://www.codeproject.com/Articles/18418/Transferring-Data-Using-SqlBulkCopy), makes this clear:
```
private static void PerformBulkCopy()
{
string connectionString =
@"Server=localhost;Database=Northwind;Trusted_Connection=true";
// get the source data
using (SqlConnection sourceConnection =
new SqlConnection(connectionString))
{
SqlCommand myCommand =
new SqlCommand("SELECT * FROM tablename", sourceConnection);
sourceConnection.Open();
SqlDataReader reader = myCommand.ExecuteReader();
// open the destination data
using (SqlConnection destinationConnection =
new SqlConnection(connectionString))
{
// open the connection
destinationConnection.Open();
using (SqlBulkCopy bulkCopy =
new SqlBulkCopy(destinationConnection.ConnectionString))
{
bulkCopy.BatchSize = 500;
bulkCopy.NotifyAfter = 1000;
bulkCopy.SqlRowsCopied +=
new SqlRowsCopiedEventHandler(bulkCopy_SqlRowsCopied);
bulkCopy.DestinationTableName = "Tablename";
bulkCopy.WriteToServer(reader);
}
}
reader.Close();
}
}
```
Also do not forget to read the documentation related to the **[Batchsize](https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.batchsize(v=vs.110).aspx)**
Hope that helps.
|
The best option is to move the whole workload to the SQL side. Pass your *objIDs* list as a table-valued parameter to a stored procedure and use a MERGE..UPDATE..INSERT statement.
If you need VB-side code, there are a number of optimizations you can make:
* Use SQL parameters. A must.
* Check whether the DAL.SQL class handles connections properly.
* Construct a single batch from the commands and run it as one batch.
|
5k+ SQL inserts takes a long time - how can I improve it?
|
[
"",
"sql",
"vb.net",
"sql-server-2012",
"bulkinsert",
"sqlperformance",
""
] |
How to fetch data from 3 tables with max date
```
select
c.model_key, v.id, v.category_Id, v.Text1, v.Numeric1,
c.Name, fx.Currency_code, fx.Rate, fx.effective_dt
from
aps.v_m_fxrates fx
join
aps.t_m_value v on fx.Currency_code = v.Text1
join
aps.t_m_category c on v.Category_Id = c.Id
where
c.Name = 'FXRates'
and c.Model_Key = 25
and v.Text1 = fx.Currency_code
and fx.version = 2
```
Using the above query I get results, but rows appear for every `effective_dt`. Instead, I need to pick only the records with the latest `effective_dt`. In the image below there are 2 records with AED, of which the one with the latest date, 1999-03-31, must be returned. I do not know how to proceed and filter the result further to achieve this output.
I also tried this
```
select
c.model_key, v.id, v.category_Id, v.Numeric1,
c.Name, fx.Currency_code, fx.Rate, fx.effective_dt
from
aps.v_m_fxrates fx
join
aps.t_m_value v on fx.Currency_code = v.Text1
join
aps.t_m_category c on v.Category_Id = c.Id
where
c.Name = 'FXRates'
and c.Model_Key = 25
and v.Text1 = fx.Currency_code
and fx.version = 2
and fx.effective_dt in (select MAX(effective_dt)
from aps.v_m_fxrates)
```
but nothing is being returned.
Actual output:
[](https://i.stack.imgur.com/n4mcw.png)
Expected output:
[](https://i.stack.imgur.com/umQhA.png)
|
Use the ***row\_number()*** function in a subquery like this:
```
select
c.model_key, v.id, v.category_Id, v.Text1, v.Numeric1,
c.Name, fx.Currency_code, fx.Rate, fx.effective_dt
from (
select
Currency_code,Rate,effective_dt
,SeqNo = row_number() over (partition by Currency_code order by effective_dt desc)
from aps.v_m_fxrates
where version = 2
) fx
join
aps.t_m_value v on fx.Currency_code = v.Text1
join
aps.t_m_category c on v.Category_Id = c.Id
where c.Name = 'FXRates'
and c.Model_Key = 25
and v.Text1 = fx.Currency_code
and fx.SeqNo = 1
```
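For illustration, here is the same `row_number()` deduplication run on a cut-down, invented fx table via Python's `sqlite3` (window functions need SQLite 3.25+); the joins are omitted to keep the sketch focused:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v_m_fxrates (Currency_code TEXT, Rate REAL, effective_dt TEXT)")
conn.executemany("INSERT INTO v_m_fxrates VALUES (?, ?, ?)", [
    ("AED", 3.67, "1999-02-28"),
    ("AED", 3.68, "1999-03-31"),   # latest AED row -- should win
    ("EUR", 0.91, "1999-03-15"),
])

# Number rows per currency, newest first, then keep only SeqNo = 1.
rows = conn.execute("""
    SELECT Currency_code, Rate, effective_dt FROM (
        SELECT Currency_code, Rate, effective_dt,
               row_number() OVER (PARTITION BY Currency_code
                                  ORDER BY effective_dt DESC) AS SeqNo
        FROM v_m_fxrates
    ) WHERE SeqNo = 1 ORDER BY Currency_code
""").fetchall()
print(rows)  # [('AED', 3.68, '1999-03-31'), ('EUR', 0.91, '1999-03-15')]
```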
|
Try using the **TOP 1** operator with a corresponding **ORDER BY**.
For example:
```
SELECT TOP 1 C.model_key,
V.id,
...
FROM ...
ORDER BY FX.effective_dt DESC
```
Another option is to use an aggregate function and a correlated subquery in the WHERE clause.
```
select C.model_key,
V.id,
V.category_Id,
V.Text1,
V.Numeric1,
C.Name,
FX.Currency_code,
FX.Rate,
FX.effective_dt
FROM aps.v_m_fxrates AS FX
JOIN aps.t_m_value AS V on FX.Currency_code = V.Text1
JOIN aps.t_m_category AS C on V.Category_Id = C.Id
WHERE C.Name = 'FXRates'
AND C.Model_Key = 25
AND FX.version = 2
AND FX.effective_dt =
(SELECT MAX(effective_dt) FROM aps.v_m_fxrates AS FX2
WHERE FX2.currency_code = FX.currency_code)
```
|
How to filter data based using date column?
|
[
"",
"sql",
"sql-server-2008",
"join",
""
] |
I have a question about SQL Server 2012. I have a table containing a field that stores the systems used, separated by **','**. I want to pass the system names in a parameter and query the related rows:
```
declare @System nvarchar(50)
set @System ='BPM,SEM'
SELECT *
FROM dbo.tblMeasureCatalog t1
where ( ( select Upper(value) from dbo.split(t1.System,','))
= any( select Upper(value) from dbo.split(@System,',')))
```
`dbo.split` is a function that returns the systems as separate rows.
|
Forgetting for a second that storing delimited lists in a relational database is abhorrent, you can do it using a combination of [`INTERSECT`](https://msdn.microsoft.com/en-us/library/ms188055.aspx) and [`EXISTS`](https://msdn.microsoft.com/en-GB/library/ms188336.aspx), for example:
```
DECLARE @System NVARCHAR(50) = 'BPM,SEM';
DECLARE @tblMeasureCatalog TABLE (System VARCHAR(MAX));
INSERT @tblMeasureCatalog VALUES ('BPM,XXX'), ('BPM,SEM'), ('XXX,SEM'), ('XXX,YYY');
SELECT mc.System
FROM @tblMeasureCatalog AS mc
WHERE EXISTS
( SELECT Value
FROM dbo.Split(mc.System, ',')
INTERSECT
SELECT Value
FROM dbo.Split(@System, ',')
);
```
Returns
```
System
---------
BPM,XXX
BPM,SEM
XXX,SEM
```
**EDIT**
Based on your question stating "Any" I assumed that you wanted rows where the terms matched any of those provided, based on your comment I now assume you want records where the terms match all. This is a fairly similar approach but you need to use `NOT EXISTS` and `EXCEPT` instead:
Now *all* is still quite ambiguous: for example, if you search for "BPM,SEM", should it return a record that is "BPM,SEM,YYY"? It does contain all of the searched terms, but it contains additional terms too. So the approach you need depends on your requirements:
```
DECLARE @System NVARCHAR(50) = 'BPM,SEM,XXX';
DECLARE @tblMeasureCatalog TABLE (System VARCHAR(MAX));
INSERT @tblMeasureCatalog
VALUES
('BPM,XXX'), ('BPM,SEM'), ('XXX,SEM'), ('XXX,YYY'),
('SEM,BPM'), ('SEM,BPM,XXX'), ('SEM,BPM,XXX,YYY');
-- METHOD 1 - CONTAINS ALL SEARCHED TERMS BUT CAN CONTAIN ADDITIONAL TERMS
SELECT mc.System
FROM @tblMeasureCatalog AS mc
WHERE NOT EXISTS
(
SELECT Value
FROM dbo.Split(@System, ',')
EXCEPT
SELECT Value
FROM dbo.Split(mc.System, ',')
);
-- METHOD 2 - ONLY CONTAINS ITEMS WITHIN THE SEARCHED TERMS, BUT NOT
-- NECESSARILY ALL OF THEM
SELECT mc.System
FROM @tblMeasureCatalog AS mc
WHERE NOT EXISTS
( SELECT Value
FROM dbo.Split(mc.System, ',')
EXCEPT
SELECT Value
FROM dbo.Split(@System, ',')
);
-- METHOD 3 - CONTAINS ALL ITEMS IN THE SEARCHED TERMS, AND NO ADDITIONAL ITEMS
SELECT mc.System
FROM @tblMeasureCatalog AS mc
WHERE NOT EXISTS
( SELECT Value
FROM dbo.Split(@System, ',')
EXCEPT
SELECT Value
FROM dbo.Split(mc.System, ',')
)
AND LEN(mc.System) = LEN(@System);
```
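The three methods correspond to simple set relations between a record's terms and the searched terms; a quick sketch of the semantics with Python sets:

```python
# Terms of one stored record and the searched terms, split on commas.
record = set("BPM,SEM,XXX".split(","))
search = set("BPM,SEM".split(","))

method1 = search <= record   # record contains ALL searched terms (extras allowed)
method2 = record <= search   # record contains ONLY searched terms (maybe not all)
method3 = record == search   # exactly the searched terms, nothing more or less

print(method1, method2, method3)  # True False False
```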
|
You have a problem with your data structure because you are storing lists of things in a comma-delimited list. SQL has a great data structure for storing lists. It goes by the name "table". You should have a junction table with one row per "measure catalog" and "system".
Sometimes, you are stuck with other people's really bad design decisions. One solution is to use `split()`. Here is one method:
```
select mc.*
from dbo.tblMeasureCatalog mc
where exists (select 1
from dbo.split(t1.System, ',') t1s join
dbo.split(@System, ',') ss
on upper(t1s.value) = upper(ss.value)
);
```
|
compare some lists in where condition sql
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a legacy SQL query that selects bit masks (among other data), something like:
```
1
2
1
```
How do I group this output like:
```
1 or 2 or 1
```
That should be `3`
|
In order to do bit-wise logic you have to do a "bit" of math. (Bad puns are free around here :-).
Oracle defines the BITAND function. To get a bitwise 'or' you can define your own function as:
```
FUNCTION BITOR(n1 IN NUMBER, n2 IN NUMBER)
RETURN NUMBER
IS
BEGIN
RETURN n1 - BITAND(n1, n2) + n2;
END BITOR;
```
And for completeness, BITXOR is
```
FUNCTION BITXOR(n1 IN NUMBER, n2 IN NUMBER)
RETURN NUMBER
IS
BEGIN
RETURN BITOR(n1, n2) - BITAND(n1, n2);
END BITXOR;
```
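The BITOR formula is the standard identity `n1 | n2 = n1 - (n1 & n2) + n2`; a quick Python check against the native bitwise operators:

```python
# Same formulas as the PL/SQL functions above, expressed with Python's
# bitwise AND, then verified against | and ^ over a small range.
def bitor(n1: int, n2: int) -> int:
    return n1 - (n1 & n2) + n2

def bitxor(n1: int, n2: int) -> int:
    return bitor(n1, n2) - (n1 & n2)

ok = all(bitor(a, b) == (a | b) and bitxor(a, b) == (a ^ b)
         for a in range(64) for b in range(64))
print(ok)  # True
```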
Best of luck.
|
If I understand you correctly, you want an aggregate bit-or function. Oracle doesn't provide one, to my knowledge, so you have to roll your own using their ODCI (Oracle Data Cartridge Interface). Here's a working example:
```
CREATE OR REPLACE TYPE matt_bitor_aggregate_impl AS OBJECT
(
result NUMBER,
CONSTRUCTOR FUNCTION matt_bitor_aggregate_impl(SELF IN OUT NOCOPY matt_bitor_aggregate_impl ) RETURN SELF AS RESULT,
-- Called to initialize a new aggregation context
-- For analytic functions, the aggregation context of the *previous* window is passed in, so we only need to adjust as needed instead
-- of creating the new aggregation context from scratch
STATIC FUNCTION ODCIAggregateInitialize (sctx IN OUT matt_bitor_aggregate_impl) RETURN NUMBER,
-- Called when a new data point is added to an aggregation context
MEMBER FUNCTION ODCIAggregateIterate (self IN OUT matt_bitor_aggregate_impl, value IN NUMBER ) RETURN NUMBER,
-- Called to return the computed aggragate from an aggregation context
MEMBER FUNCTION ODCIAggregateTerminate (self IN matt_bitor_aggregate_impl, returnValue OUT NUMBER, flags IN NUMBER) RETURN NUMBER,
-- Called to merge to two aggregation contexts into one (e.g., merging results of parallel slaves)
MEMBER FUNCTION ODCIAggregateMerge (self IN OUT matt_bitor_aggregate_impl, ctx2 IN matt_bitor_aggregate_impl) RETURN NUMBER --,
);
/
CREATE OR REPLACE TYPE BODY matt_bitor_aggregate_impl IS
CONSTRUCTOR FUNCTION matt_bitor_aggregate_impl(SELF IN OUT NOCOPY matt_bitor_aggregate_impl ) RETURN SELF AS RESULT IS
BEGIN
SELF.result := null;
RETURN;
END;
STATIC FUNCTION ODCIAggregateInitialize (sctx IN OUT matt_bitor_aggregate_impl) RETURN NUMBER IS
BEGIN
sctx := matt_bitor_aggregate_impl ();
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateIterate (self IN OUT matt_bitor_aggregate_impl, value IN NUMBER ) RETURN NUMBER IS
BEGIN
IF self.result IS NULL THEN
self.result := value;
ELSE
-- Logic for bitwise OR
-- see also: http://www.oracledba.co.uk/tips/bitwise_ops.htm
self.result := self.result - BITAND(self.result, value) + value;
END IF;
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateTerminate (self IN matt_bitor_aggregate_impl, returnValue OUT NUMBER, flags IN NUMBER) RETURN NUMBER IS
BEGIN
returnValue := result;
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateMerge (self IN OUT matt_bitor_aggregate_impl, ctx2 IN matt_bitor_aggregate_impl) RETURN NUMBER IS
BEGIN
-- Logic for bitwise OR
-- see also: http://www.oracledba.co.uk/tips/bitwise_ops.htm
self.result := self.result - BITAND(self.result, ctx2.result) + ctx2.result;
RETURN ODCIConst.Success;
END;
END;
/
-- Now that you have a TYPE to implement the logic, here is where you define the new aggregate function
CREATE OR REPLACE FUNCTION matt_bitor_aggregate ( input NUMBER) RETURN NUMBER
PARALLEL_ENABLE AGGREGATE USING matt_bitor_aggregate_impl;
/
-- Here's a simple test with your test data set...
with test_data as (
SELECT 1 a FROM dual UNION ALL
select 2 from dual union all
select 1 from dual
)
select matt_bitor_aggregate(a)
from test_data;
-- Here is a more complex test that also highlights the fact that you can use ODCI custom aggregates with window clauses.
with test_data as (
SELECT 1 a FROM dual UNION ALL
select 2 from dual union all
select 1 from dual union all
select 16 from dual union all
SELECT 18 from dual)
select a, matt_bitor_aggregate(a) over ( partition by null order by rownum rows between 1 preceding and 1 following ) from
test_data;
```
|
How do I group "or" in pl/sql
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a database that contains information for a telecommunications company, with the following tables:
* `SUBSCRIBERS (SUB_ID , F_NAME , L_NANE , DATE_OF_BIRTH , COUNTRY)`
* `LINES (LINE_ID , LINE_NUMBER)`
* `SUBSCRIBERS_LINES (SUB_LINE_ID , SUB_ID "foreign key", LINE_ID "foreign key", ACTIVATION_DATE)`
* `CALLS (CALL_ID , LINE_FROM "foreign key", LINE_TO "foreign key" , START_DATE_CALL, END_DATE_CALL)`
I want to retrieve the names of the top 3 subscribers who made the highest number of calls (with a duration of less than 60 seconds per call) on a specific given day.
So I wrote the following query:
```
with TEMPRESULT AS
(
select * from
(
select CALLS.LINE_FROM , count(*) totalcount
from CALLS
where (((END_DATE_CALL-START_DATE_CALL)*24*60*60)<=60 and to_char(START_DATE_CALL,'YYYY-MM-DD')='2015-12-12')
group by CALLS.LINE_FROM
order by totalcount DESC
)
where rownum <= 3
)
select F_NAME,L_NAME
from TEMPRESULT inner join SUBSCRIBERS_LINES on TEMPRESULT.LINE_FROM=SUBSCRIBERS_LINES.line_id inner join SUBSCRIBERS on SUBSCRIBERS_LINES.SUB_ID=SUBSCRIBERS.SUB_ID;
```
But this query will not work if one of the subscribers has more than one line,
for example:
(`X1` has `L1` and `L2` lines
`X2` has `L3`
`X3` has `L4`)
**if** `X1` talks 20 calls from `L1`, and 19 calls from `L2`
`X2` talks 15 calls from `L3`
`X3` talks 10 calls from `L4`
my query will return the following output:
`X1`
`X1`
`X2`
it must return :
`X1`
`X2`
`X3`
How do I modify the query so that it does not return duplicate names?
|
The subquery must GROUP BY SUB\_ID (not LINE\_FROM). This gives the total calls per subscriber rather than per line.
In other words, move the joins into the subquery and group and order by SUB\_ID.
DISTINCT in the main query is too late: you will get no duplicates, but fewer results.
|
Could you try adding the DISTINCT keyword to the SELECT query at the bottom?
Something like this:
```
with TEMPRESULT AS
(
select * from
(
select CALLS.LINE_FROM , count(*) totalcount
from CALLS
where (((END_DATE_CALL-START_DATE_CALL)*24*60*60)<=60 and to_char(START_DATE_CALL,'YYYY-MM-DD')='2015-12-12')
group by CALLS.LINE_FROM
order by totalcount DESC
)
where rownum <= 3
)
select DISTINCT F_NAME,L_NAME
from TEMPRESULT
inner join SUBSCRIBERS_LINES on TEMPRESULT.LINE_FROM = SUBSCRIBERS_LINES.line_id
inner join SUBSCRIBERS on SUBSCRIBERS_LINES.SUB_ID = SUBSCRIBERS.SUB_ID;
```
In theory (I haven't tested it by creating this database) this should show:
X1
X2
X3
|
prevent query from return duplicate results
|
[
"",
"sql",
"oracle",
""
] |
I have this piece in my SELECT portion of my TSQL query:
```
CAST(DATEPART(HOUR,s.Arr) AS VARCHAR) + CAST(DATEPART(MINUTE,s.Arr) AS VARCHAR),
```
The original `s.Arr` field looks like:
```
2016-01-04 04:43:00.000
```
The goal is to make it look like:
```
0443
```
But right now my SQL makes it look like:
```
443
```
Because it isn't grabbing the leading zeroes. How can I grab the leading zeroes? Is there a way to do this without hackish methods like padding zeroes onto it?
Server: 2008 R2
|
With `SQL Server 2012+` you can use `FORMAT`:
```
DECLARE @d DATETIME = '2016-01-04 04:43:00.000';
SELECT FORMAT(@d, 'HHmm');
-- 0443
```
`LiveDemo`
---
`SQL Server 2008`
Without padding with `0`:
```
DECLARE @d DATETIME = '2016-01-04 04:43:00.000';
SELECT REPLACE(LEFT(CONVERT(VARCHAR(100), @d, 14),5), ':', '');
-- 0443
```
`LiveDemo2`
> Style: **14 hh:mi:ss:mmm(24h)**
|
Pad with '00' and take the rightmost two characters:
```
RIGHT('00' + CAST(DATEPART(HOUR,s.Arr) AS VARCHAR),2) + RIGHT('00' + CAST(DATEPART(MINUTE,s.Arr) AS VARCHAR),2)
```
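The `RIGHT('00' + x, 2)` trick is ordinary zero-padding; in Python terms:

```python
# Hour/minute values from the question's example timestamp 04:43.
hour, minute = 4, 43

# Prepend zeros, then keep the last two characters of each part --
# the same effect as RIGHT('00' + CAST(... AS VARCHAR), 2).
padded = ("00" + str(hour))[-2:] + ("00" + str(minute))[-2:]
print(padded)  # 0443
```

(`str(hour).zfill(2)` would do the same padding in idiomatic Python.)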
|
How to get leading zeroes with DatePart?
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
I want to get all the detail about my purchase from the following tables,
```
1) tbl_track
2) tbl_album
3) tbl_purchase
```
tbl\_purchase table :-
```
item_id
type
price
```
tbl\_track table :-
```
id
title
description
```
tbl\_album table :-
```
id
title
description
```
Now, based on the type in tbl\_purchase, I want to join one of the tables:
that is, if `type = 0` then `LEFT JOIN` with `tbl_track`, and if `type = 1` then `LEFT JOIN` with `tbl_album`.
I want to do this with one query only; it is easily done with two queries.
Thanks in advance.
|
With `UNION`:
```
SELECT p.type, t.id, t.title, t.description
FROM tbl_purchase p
JOIN tbl_track t ON t.id = p.item_id AND p.type = 0
UNION
SELECT p.type, a.id, a.title, a.description
FROM tbl_purchase p
JOIN tbl_album a ON a.id = p.item_id AND p.type = 1
```
|
I think you need to left join both tables in any case, so you can use the following query, for example:
```
SELECT P.*,
T.id AS track_id,
A.id AS album_id
FROM tbl_purchase AS P
LEFT JOIN tbl_track AS T ON P.item_id = T.id
AND P.type = 0
LEFT JOIN tbl_album AS A ON P.item_id = A.id
AND P.type = 1
```
|
Left Join based on field value
|
[
"",
"mysql",
"sql",
"select",
"join",
"left-join",
""
] |
I have something like this:
`UPDATE users SET credits=credits-1000 WHERE id=1`
How do I make this fail if credits goes negative? I want to avoid having to query the database twice just to check for that. I need it to FAIL/return false if it goes negative, not set the credits to zero.
|
What about adding a condition?
```
UPDATE users SET credits=credits-1000 WHERE id=1 AND credits >= 1000
```
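After a guarded UPDATE of this shape, the affected-row count tells you whether the debit happened, with no second query. A `sqlite3` sketch (table and numbers invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, credits INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 500)")

# The guard in the WHERE clause refuses the debit when it would go negative.
cur = conn.execute(
    "UPDATE users SET credits = credits - 1000 "
    "WHERE id = 1 AND credits >= 1000")
print(cur.rowcount)  # 0 -> debit refused; credits remain 500
```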
|
I think you have to use a stored procedure for this purpose. For example:
```
CREATE procedure UpdateUserCredit
@user_id INT,
@credits INT,
@op_status BIT OUTPUT
AS
BEGIN
SET @op_status = 0
UPDATE Users
SET credits = credits - @credits
WHERE id = @user_id
AND credits - @credits >= 0
IF ROW_COUNT() = 0
RETURN
ELSE
SET @op_status = 1
END
```
This stored procedure will return `0` in the variable `@op_status` if `credits - @credits` for `@user_id` would be less than zero.
|
How can I make SQL not execute the query if the field goes negative?
|
[
"",
"mysql",
"sql",
""
] |
I am trying to find the very first row where a certain field is null, but the caveat is that there cannot be a non-null value coming after it. If there isn't a null value, or a non-null comes after the null, then I do not want to return that row at all. I am using Teradata SQL, and the following mock dataset should illustrate what I am looking for.
```
ID | Date | Field_Of_Interest
A | 1/1/2015 | 1
A | 2/1/2015 | 1
A | 3/1/2015 |
A | 4/1/2015 |
A | 5/1/2015 |
B | 1/1/2015 | 1
B | 2/1/2015 | 1
B | 3/1/2015 |
B | 4/1/2015 | 1
B | 5/1/2015 |
C | 1/1/2015 | 1
C | 2/1/2015 | 1
C | 3/1/2015 | 1
C | 4/1/2015 | 1
C | 5/1/2015 | 1
D | 1/1/2015 | 1
D | 2/1/2015 | 1
D | 3/1/2015 |
D | 4/1/2015 |
D | 5/1/2015 | 1
```
Desired Result:
```
ID | Date
A | 3/1/2015
B | 5/1/2015
```
Since C and D have a non-null for the last record, I do not want them at all.
Where I run into trouble are situations like B or D where I can't just take the minimum of the date field where Field\_Of\_Interest is null. Another thought I had was to find the min where null and the max where not null and if the date for the min was greater than that of the max use that. The problem there is in B where a non-null came after a null and then it went back to null.
Any ideas?
|
Does this give you what you want?
```
SELECT
T1.ID,
MIN(T1.some_date) AS some_date
FROM
My_Table T1
WHERE
T1.some_column IS NULL AND
NOT EXISTS (SELECT * FROM My_Table T2 WHERE T2.ID = T1.ID AND T2.some_date > T1.some_date AND T2.some_column IS NOT NULL)
GROUP BY
T1.ID
```
Alternatively:
```
SELECT
T1.id,
MIN(T1.some_date) AS some_date
FROM
My_Table T1
LEFT OUTER JOIN My_Table T2 ON
T2.id = T1.id AND
T2.some_date > T1.some_date AND
T2.some_column IS NOT NULL
WHERE
T1.some_column IS NULL AND
T2.id IS NULL
GROUP BY
T1.id
```
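To see the NOT EXISTS query behave on the question's mock data, here is a `sqlite3` sketch (dates rewritten as ISO strings so they compare correctly as text; column names shortened for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id TEXT, dt TEXT, v INTEGER)")

# Rebuild the mock dataset: months 1..5 of 2015 per ID, None for the blanks.
data = []
for id_, vals in [("A", [1, 1, None, None, None]),
                  ("B", [1, 1, None, 1, None]),
                  ("C", [1, 1, 1, 1, 1]),
                  ("D", [1, 1, None, None, 1])]:
    data += [(id_, f"2015-{m:02d}-01", v) for m, v in enumerate(vals, start=1)]
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", data)

# First null per ID, but only if no non-null row comes later for that ID.
result = conn.execute("""
    SELECT t1.id, MIN(t1.dt) FROM t t1
    WHERE t1.v IS NULL
      AND NOT EXISTS (SELECT 1 FROM t t2
                      WHERE t2.id = t1.id AND t2.dt > t1.dt
                        AND t2.v IS NOT NULL)
    GROUP BY t1.id ORDER BY t1.id
""").fetchall()
print(result)  # [('A', '2015-03-01'), ('B', '2015-05-01')]
```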
|
You can do this with a difference of row number or using subqueries. The latter method results in a query like this:
```
select id, min(date)
from t
where t.field_of_interest is null and
not exists (select 1
from t t2
where t2.id = t.id and t2.date > t.date and
t2.field_of_interest is not null
)
group by id;
```
|
Find the first instance of a null value in a group as long as no non null comes after - Teradata SQL
|
[
"",
"sql",
"teradata",
""
] |
I know about DateAdd and DateDiff, but I cannot find any information on how to use these functions on an actual date column rather than something like today's date with SQL Server.
Say I have the following Column
```
Dated
06/30/2015
07/31/2015
```
Now I want to add the following derived column that subtracts one month from every row in the Dated column.
```
Dated Subtracted
06/30/2015 05/31/2015
07/31/2015 06/30/2015
```
Thank you
|
The short answer: I suspect this is what you want:
```
dateadd(day, -datepart(day, Dated), Dated)
```
However, if you want "regular" subtract-one-month behavior in tandem with sticking to the end of month, having June 30 fall back to May 31 is slightly trickier. There's a discrepancy between the title of your question and the example, where it appears you want the final day of month to stay anchored. It would be helpful for you to clarify this.
`dateadd(month, -1, ...)` doesn't handle that when the previous month has more days than the starting month although it works the other way around. If that's truly what you need I think this should handle it:
```
case
when datediff(month, Dated, dateadd(day, 1, Dated)) = 1
then dateadd(day, -datepart(day, Dated), Dated)
else dateadd(month, -1, Dated)
end
```
There's also a flavor of several date functions in that expression and a sense of how this date stuff can get complicated. The condition in the `when` looks to see if `Dated` is the last day of the month by checking that the following day is in a different calendar month. If so we extract the day of month and subtract that many days to jump back to the last day of the previous month. (Months start at one not zero. So for example, counting backward 17 days from the 17th lands in the month before.) Otherwise it uses regular `dateadd(month, -1, ...)` calculations to jump backward to the same day of month.
Of course if *all* your dates fall on the end of the month then this simple version will be adequate by itself because it always returns the last day of the previous month (regardless of where it falls in the starting month):
```
dateadd(day, -datepart(day, Dated), Dated) /* refer back to the top */
dateadd(day, -day(Dated), Dated) /* same thing */
```
And just for fun and practice with date expressions, another approach is that you could start on a known month with 31 days and calculate relative to that:
```
dateadd(month, datediff(month, '20151231', Dated) - 1, '20151231')
```
This finds the number of months between your date and a reference date. It works for all dates since it doesn't matter whether the difference is positive or negative. So then subtracting one from that difference and adding that many months to the reference point is the result you want.
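As a cross-check outside SQL, here is a small Python sketch (an illustration only, not part of the original answer) of the same end-of-month-anchored rule:

```python
import calendar
from datetime import date

def subtract_month_eom_anchored(d: date) -> date:
    """Move back one month; if d is the last day of its month,
    land on the last day of the previous month."""
    prev_year = d.year if d.month > 1 else d.year - 1
    prev_month = d.month - 1 if d.month > 1 else 12
    last_prev = calendar.monthrange(prev_year, prev_month)[1]
    last_cur = calendar.monthrange(d.year, d.month)[1]
    if d.day == last_cur:                 # end of month stays anchored
        return date(prev_year, prev_month, last_prev)
    return date(prev_year, prev_month, min(d.day, last_prev))

print(subtract_month_eom_anchored(date(2015, 6, 30)))  # 2015-05-31
print(subtract_month_eom_anchored(date(2015, 7, 31)))  # 2015-06-30
```

The `min(...)` clamp covers mid-month dates that don't exist in the shorter previous month, mirroring what `dateadd(month, -1, ...)` does.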
People will come up with some pretty crazy stuff and I'm often amazed (for differing reasons) at some of the variations I see. chancrovsky's answer is a good example for closer examination:
```
dateadd(month, datediff(month, -1, Dated) - 1, -1)
```
It relies on the fact that date `-1`, when treated as implicitly converted to `datetime`, is the day before January 1, 1900, which does happen to be a month of 31 days as required. (Note that the `- 1` in the middle is regular arithmetic and not a date value.) I think most people would advise you to be careful with that one as I'm not sure that it is guaranteed to be portable when Microsoft deprecates features in the future.
|
Why don't you just get the last day of the previous month? If this solve your problem, here's the sql server syntax, just replace the variable @yourDate with your column name.
```
DECLARE @yourDate DATE = '20160229'
select DATEADD(MONTH, DATEDIFF(MONTH, -1, @yourDate)-1, -1)
```
|
How to subtract one month from a Date Column
|
[
"",
"sql",
"sql-server",
"dateadd",
""
] |
I have a table named jos\_user\_usergroup\_map that has 2 columns: user\_id and group\_id
A user can be a member of several groups and thus has several rows in this table, each with the group ID, e.g.
```
user_id|group_id
62 | 1
62 | 4
62 | 12
108 | 1
```
I want to find all the user\_id's that do not have a group\_id = 12, but it's giving me a headache...
In the above example, I should find only user\_id = 108
Any ideas how to do this?
Any help is appreciated.
Thanks
|
Try this:
```
SELECT DISTINCT A.user_id
FROM jos_user_usergroup_map A
LEFT OUTER JOIN jos_user_usergroup_map B ON A.user_id = B.user_id AND B.group_id = 12
WHERE B.user_id IS NULL;
```
|
Use post aggregation filtering
```
SELECT user_id FROM jos_user_usergroup_map
GROUP BY user_id
HAVING SUM(group_id=12)=0
```
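This can be sanity-checked on a toy dataset with SQLite via Python's `sqlite3` (shown purely as an illustration; SQLite, like MySQL, evaluates `group_id = 12` as 0 or 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jos_user_usergroup_map (user_id INT, group_id INT);
INSERT INTO jos_user_usergroup_map VALUES (62,1),(62,4),(62,12),(108,1);
""")
# HAVING SUM(group_id = 12) = 0 keeps only users with no row for group 12
rows = conn.execute("""
    SELECT user_id FROM jos_user_usergroup_map
    GROUP BY user_id
    HAVING SUM(group_id = 12) = 0
""").fetchall()
print(rows)  # [(108,)]
```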
|
SQL Query to find users not having specific group id in join table
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
I have a query :
```
select channel, status, sum(quantity::integer)
from sale group by channel,status;
```
this is giving the following output:
```
channel status quantity
Arham Return 1801
Arham DISPATCHED 49934
Arham CANCELLED 1791
Arham DELIVERED 22
```
But I want this output like:
```
channel return DISPATCHED CANCELLED DELIVERED
Arham 1801 49934 1791 22
```
Is it possible in PostgreSQL?
If yes, then how?
|
Exploit Boolean to integer conversion giving either 0 or 1, then multiply by that:
```
select channel
, sum((status = 'Return') :: int * quantity :: int) as return
, sum((status = 'DISPATCHED') :: int * quantity :: int) as DISPATCHED
, sum((status = 'CANCELLED') :: int * quantity :: int) as CANCELLED
, sum((status = 'DELIVERED') :: int * quantity :: int) as DELIVERED
from sale
group by channel
```
An equivalent solution is using `case`/`when`/`then`, for example:
```
sum(case when status = 'Return' then quantity :: int else 0 end)
```
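The conditional-aggregation pivot can be reproduced on a toy dataset with SQLite via Python (an illustration only; the `CASE` form is portable across dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sale (channel TEXT, status TEXT, quantity TEXT);
INSERT INTO sale VALUES
  ('Arham','Return','1801'), ('Arham','DISPATCHED','49934'),
  ('Arham','CANCELLED','1791'), ('Arham','DELIVERED','22');
""")
# One SUM(CASE ...) per status turns rows into columns
row = conn.execute("""
    SELECT channel,
           SUM(CASE WHEN status='Return'     THEN CAST(quantity AS INT) ELSE 0 END),
           SUM(CASE WHEN status='DISPATCHED' THEN CAST(quantity AS INT) ELSE 0 END),
           SUM(CASE WHEN status='CANCELLED'  THEN CAST(quantity AS INT) ELSE 0 END),
           SUM(CASE WHEN status='DELIVERED'  THEN CAST(quantity AS INT) ELSE 0 END)
    FROM sale GROUP BY channel
""").fetchone()
print(row)  # ('Arham', 1801, 49934, 1791, 22)
```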
|
If you don't want to use the `crosstab` function you can do this using filtered aggregates:
```
select channel,
sum(quantity) filter (where status = 'Return') as return_amount,
sum(quantity) filter (where status = 'DISPATCHED') as dispatched,
sum(quantity) filter (where status = 'CANCELLED') as cancelled,
sum(quantity) filter (where status = 'DELIVERED') as delivered
from sale
group by channel;
```
|
Arrange select query in postgresql?
|
[
"",
"sql",
"postgresql",
"pivot",
"crosstab",
""
] |
I am trying to implement a query like below
```
SELECT *
FROM emp
WHERE LOWER (ename) IN LOWER ('A', 'b', 'C', 'd','eF','GG','Hh');
```
but getting error
```
ORA-00909: invalid number of arguments.
```
Is there any specific function by which I can compare multiple arguments in the 'IN' clause having lower case?
|
@Muhammad Muazzam has it right, but if for some reason you really need to use LOWER on the right-hand side with a list of values then you could do this:
```
select ename from emp
where lower(ename) in
(select lower(column_value)
from table(SYS.KU$_VCNT('A','B','C','D'))
);
```
`SYS.KU$_VCNT` is a table of VARCHAR2(4000) type that should already exist in your database.
|
Change query to:
```
SELECT *
FROM emp
WHERE LOWER (ename) IN ('a', 'b', 'c', 'd');
```
|
comparing using multiple arguments in the Lower/Upper function
|
[
"",
"sql",
"oracle",
""
] |
I'm currently working on a Postgres SQL query that will check for rows that match a space separated string (3 words) in any random order.
For example I want to look for a row that matches "lorem ipsum dolor" it should return row id 0.
```
+----+-------------------+
| id | sentence |
| 0 | lorem dolor ipsum |
| 1 | lorem ipsum |
| 2 | ipsum dolor |
| 3 | ipsum dolor |
+----+-------------------+
```
So it has to meet the following conditions:
* the same 3 words in this case
* In any random order:
1. Lorem ipsum dolor
2. dolor Lorem ipsum
3. ipsum dolor Lorem
4. Lorem dolor ipsum
5. ...
If I'm correct, three words can appear in 3! = 6 possible orders, and the number of permutations grows factorially with more words. I can imagine this gets quite intensive when using more words. How can I achieve this without hammering the server, or what is the right direction to look in?
|
The method, which Clodoaldo Neto described, is good for sorting the words. If performance is crucial for you, you can even create an index for that to improve lookup speed. Create a custom function `sortwords` first:
```
CREATE OR REPLACE FUNCTION sortwords (words text) RETURNS text AS
$$ SELECT string_agg(lower(s), ' ' order by s)
FROM regexp_split_to_table($1, '\s+') s(s) $$
LANGUAGE sql IMMUTABLE;
```
The keyword `IMMUTABLE` declares that the function result depends solely on its argument, and hence the function is suitable for creating an index.
Then, create the index:
```
CREATE INDEX mytable_sortwords ON mytable (sortwords(sentence));
```
and perform selects like:
```
SELECT * FROM mytable WHERE sortwords(sentence) = sortwords('some words');
```
This has the advantage that the sorting of the words (which can be quite costly) is only performed once per row (either when the index is created or on row insertion).
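The canonicalization idea itself is language-independent; here it is sketched in plain Python (an illustration only, mirroring the `sortwords` SQL function above):

```python
def sortwords(s: str) -> str:
    # Same canonical form as the SQL function: lowercase, sorted, space-joined
    return " ".join(sorted(s.lower().split()))

rows = {0: "lorem dolor ipsum", 1: "lorem ipsum", 2: "ipsum dolor", 3: "ipsum dolor"}
target = sortwords("lorem ipsum dolor")
matches = [rid for rid, sentence in rows.items() if sortwords(sentence) == target]
print(matches)  # [0]
```

Because both sides are normalized to the same canonical string, the comparison is a single equality instead of checking every permutation.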
|
```
with t(s) as (values
('lorem dolor ipsum'),
('lorem ipsum'),
('ipsum dolor'),
('ipsum dolor')
)
select *
from t
where
(
select string_agg(lower(s), ' ' order by s)
from regexp_split_to_table(s, '\s+') s(s)
)
=
(
select string_agg(lower(s), ' ' order by s)
from regexp_split_to_table('lorem ipsum dolor', '\s+') s(s)
)
;
s
-------------------
lorem dolor ipsum
```
<http://www.postgresql.org/docs/current/static/functions-aggregate.html>
<http://www.postgresql.org/docs/current/static/functions-string.html#FUNCTIONS-STRING-OTHER>
|
Find rows in Postgres where string is in any random order
|
[
"",
"sql",
"postgresql",
""
] |
**MY TABLES:**
```
USERS_1: USERS_2:
+------------+---------+ +------------+---------+
| id |username | | username |claimedBy|
+------------+---------+ +------------+---------+
| 4 | pitiqu | | myUsername | NULL |<- this should become 4
+------------+---------+ +------------+---------+
```
**MY SQL:** (Literally MySQL)
```
UPDATE UL
SET UL.claimedBy = US.username
FROM USERS_1 as UL
INNER JOIN USERS_2 as US
ON US.id = 4
where UL.username="myUsername"
```
It's probably obvious that I want to set table 2's claimedBy (for the username "myUsername") to the username "pitiqu" found in table 1 at id = 4.
I'm sorry if all the "username" columns are confusing. I hope the tables and the SQL clarify my question.
The error that pops out:
**#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM USERS\_1 as UL INNER JOIN USERS\_2 as US ON US.id = 4 where UL' at line 3**
Why is this happening... anyone?
**EDIT** : Excuse me for the incorrect syntax. I've been trying to use [THIS](https://stackoverflow.com/questions/9588423/sql-server-inner-join-when-updating#) example and while editing it I deleted the SET.
|
You could use a update query like this:
```
update
USERS_2
set
claimedBy = (SELECT username FROM USERS_1 WHERE id=4)
where
username="myUsername"
```
If you want a join, the correct syntax is as follows; however, in this particular context it doesn't make much sense and I would suggest you use the first query:
```
UPDATE
    USERS_2 as US INNER JOIN USERS_1 as UL ON UL.id = 4
SET
    US.claimedBy = UL.username
WHERE
    US.username = "myUsername"
```
|
That's the wrong syntax. You should use an update join like:
```
UPDATE USERS_2 u
JOIN USERS_1 US ON US.id = 4
SET u.claimedBy = US.username
where u.username='myUsername';
```
|
SQL Syntax error on JOIN
|
[
"",
"mysql",
"sql",
""
] |
I am new to SQL and I am facing a problem.
I need to create a stored procedure which returns the start and end date of any quarter, given the quarter as a string and the year.
Example: when I pass `exec spname '2014-15','jul-sep'`, it should return the start date of July and the end date of September.
|
It is unclear why you do not provide only year and quarter as input parameters. The following procedure should return the quarter beginning and end:
```
alter procedure spQuarter (
@yearStr VARCHAR(7),
@QuarterStr VARCHAR(20)
)
AS
BEGIN
DECLARE @year INT = CAST(SUBSTRING(@yearStr, 1, 4) AS INT)
DECLARE @QuarterId INT = (CASE
WHEN @QuarterStr = 'jan-mar' THEN 4
WHEN @QuarterStr = 'apr-jun' THEN 1
WHEN @QuarterStr = 'jul-sep' THEN 2
WHEN @QuarterStr = 'oct-dec' THEN 3
END)
DECLARE @startOfYStr VARCHAR(20) = CAST (@year AS VARCHAR) + '01' + '01'
PRINT @yearStr
PRINT @startOfYStr
DECLARE @startDate DATE = CAST(@startOfYStr AS DATE)
DECLARE @startOfQ DATE = DATEADD(quarter, @QuarterId, @startDate)
DECLARE @endOfQ DATE = DATEADD(day, -1, DATEADD(quarter, @QuarterId + 1, @startDate))
SELECT @startOfQ, @endOfQ
END
GO
```
**Tests:**
```
exec spQuarter '2014-15', 'jan-mar' --> 2015-01-01 2015-03-31
exec spQuarter '2014-15', 'apr-jun' --> 2014-04-01 2014-06-30
exec spQuarter '2014-15', 'oct-dec' --> 2014-10-01 2014-12-31
```
Some extra checking should be performed on the input parameters, but it should be a good start for what you need.
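The same mapping can be expressed outside SQL; this Python sketch (an illustration, using the same fiscal-year-starts-in-April assumption and quarter ids as the procedure above) reproduces the test cases:

```python
from datetime import date, timedelta

# Quarter ids follow the procedure above: fiscal year starts in April
QUARTER_ID = {'jan-mar': 4, 'apr-jun': 1, 'jul-sep': 2, 'oct-dec': 3}

def quarter_bounds(year_str: str, quarter: str):
    year = int(year_str[:4])                        # '2014-15' -> 2014
    start_month = 4 + (QUARTER_ID[quarter] - 1) * 3
    y, m = year + (start_month - 1) // 12, (start_month - 1) % 12 + 1
    start = date(y, m, 1)
    ny, nm = y + (m + 2) // 12, (m + 2) % 12 + 1    # first day of next quarter
    end = date(ny, nm, 1) - timedelta(days=1)       # day before next quarter
    return start, end

print(quarter_bounds('2014-15', 'jul-sep'))  # 2014-07-01 .. 2014-09-30
```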
|
You just need to pass the year parameter as '2014-2015' instead of '2014-15', and the stored procedure below will work:
```
create proc spQuarter
(
@YEAR VARCHAR(10),
@Quarter VARCHAR(10)
)
AS
BEGIN
DECLARE @QuarterId INT = (CASE
WHEN @Quarter = 'jan-mar' THEN 1
WHEN @Quarter = 'apr-jun' THEN 2
WHEN @Quarter = 'jul-sep' THEN 3
WHEN @Quarter = 'oct-dec' THEN 4
END)
DECLARE @PreYear VARCHAR(10) = (SELECT top 1 items from dbo.Split(@year,'-'))
DECLARE @NextYear VARCHAR(10)= (SELECT top 1 items from dbo.Split(@year,'-')
where items not in(SELECT top 1 items from dbo.Split(@year,'-')))
DECLARE @SDate Datetime
SET @SDate = (Case @QuarterId When 1 then CONVERT(Datetime,'01-01-'+ @NextYear )
when 2 then CONVERT(Datetime,'01-04-'+ @PreYear)
when 3 then CONVERT(Datetime,'01-07-'+ @PreYear)
when 4 then CONVERT(Datetime,'01-10-'+ @PreYear) end)
SELECT @SDate as StartDate ,DATEADD (dd, -1, DATEADD(qq, DATEDIFF(qq, 0, @SDate) +@QuarterId ,0)) as EndDate
END
```
|
Finding sql date with account year and Quarter
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-date-functions",
"sqldatetime",
""
] |
I would like to know how to do a select in a stored procedure when the filter comes from a website combobox with more than 3 values.
Example: I would like to select a match listing, and the filter is "IsFinish" with the values (All, Yes, No).
```
DECLARE @IsFinish INT -- 1: All 2: Yes 3: No
SELECT * FROM MATCH
WHERE [Status] = ?
```
Status values: F: Finished C: Canceled L: Live N: Non-Live P:Pause X:(Close) Waiting Confirm
When select All the result will return all status.
When select Yes the result will return F & X.
When select No the result will return N, L, C, P.
I would like to filter them with a single select.
How can I do it?
|
Prepare Data
```
CREATE TABLE [Match] ([Status] CHAR(1) PRIMARY KEY, [Wording] VARCHAR(50));
INSERT INTO [Match] ([Status], [Wording]) VALUES
('F', 'Finished'),
('C', 'Canceled'),
('L', 'Live'),
('N', 'Non-Live'),
('P', 'Pause'),
('X', '(Close) Waiting Confirm');
```
Prepare Procedure
```
CREATE PROCEDURE [FilterMatch] (@IsFinish INT = 1 /* 1: All (Default) 2: Yes 3: No */)
AS
SELECT * FROM [Match]
WHERE @IsFinish = 1 OR
(@IsFinish = 2 AND [Status] In ('F','X')) OR
(@IsFinish = 3 AND [Status] In ('N','L','C','P'));
```
Run
```
Exec FilterMatch;
Exec FilterMatch 2;
Exec FilterMatch @IsFinish = 3;
```
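The OR-per-branch predicate works the same way in any engine with parameterized queries; here is a small sanity check with SQLite via Python (an illustration only; the table is renamed `matches` to avoid keyword clashes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE matches (status TEXT PRIMARY KEY);
INSERT INTO matches VALUES ('F'),('C'),('L'),('N'),('P'),('X');
""")

def filter_match(is_finish: int):
    # Same OR-per-branch predicate as the stored procedure above
    return [r[0] for r in conn.execute("""
        SELECT status FROM matches
        WHERE ? = 1
           OR (? = 2 AND status IN ('F','X'))
           OR (? = 3 AND status IN ('N','L','C','P'))
        ORDER BY status
    """, (is_finish, is_finish, is_finish))]

print(filter_match(2))  # ['F', 'X']
```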
|
In *SQL Server* you can achieve it as follows:
```
where [status] LIKE (case @IsFinish when 1 then '%' end) or
[status] = (case @IsFinish when 2 then 'F' end) or
[status] = (case @IsFinish when 2 then 'X' end) or
[status] = (case @IsFinish when 3 then 'N' end) or
[status] = (case @IsFinish when 3 then 'L' end) or
[status] = (case @IsFinish when 3 then 'C' end) or
[status] = (case @IsFinish when 3 then 'P' end)
```
|
How to select on stored procedure with filter have more than 3 value
|
[
"",
"sql",
"select",
"stored-procedures",
""
] |
Using "SELECT [Users],[Stars] FROM [Table]", here is an example result set returned:
```
<table border="1">
<th>Users</th>
<th>Stars</th>
<tr>
<td>Admin</td><td>3</td>
</tr>
<tr>
<td>Worker</td><td>4</td>
</tr>
<tr>
<td>Student</td><td>2</td>
</tr>
</table>
```
How would I show the Admin ONLY if he has more than 3 stars and not be a part of the result set if it is 3 stars or less?
My best guess is a CASE WHEN statement but would it go in the SELECT or a WHERE clause?
I'm still quite new to development and any help is appreciated.
|
This should cover it just using a Where clause
```
SELECT
[Users],
[Stars]
FROM
[Table]
WHERE
[Users] != 'Admin'
OR [Stars] > 3
```
|
This will exclude any user Admin with stars less than 3:
```
SELECT [Users],[Stars]
FROM [Table]
WHERE NOT ([Users] = 'Admin' AND [Stars] <= 3)
```
|
Filtering a result set to show or hide specific result in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Is there any way to write the SQL below as a single SELECT query?
```
SET @T1 = (SELECT COUNT(1) FROM @TableVar)
SET @T2 = (SELECT COUNT(1) FROM @TableVar WHERE bit1 = 1 AND bit2 = 0)
SET @T3 = (SELECT COUNT(1) FROM @TableVar WHERE bit1 = 0 AND bit2 = 1)
SELECT @T1 AS Col1,
@T2 AS Col2,
@T3 AS Col3
```
|
```
SELECT T1 = COUNT(1),
T2 = COUNT(CASE WHEN bit1 = 1 AND bit2 = 0 THEN 1 END),
T3 = COUNT(CASE WHEN bit1 = 0 AND bit2 = 1 THEN 1 END)
FROM @TableVar
```
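The `COUNT(CASE ...)` trick relies on `COUNT` ignoring NULLs (a `CASE` with no `ELSE` yields NULL). Here it is verified on a toy dataset with SQLite via Python (an illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (bit1 INT, bit2 INT);
INSERT INTO t VALUES (0,0),(1,1),(1,0),(1,0),(0,1);
""")
# CASE without ELSE yields NULL, and COUNT skips NULLs
row = conn.execute("""
    SELECT COUNT(*),
           COUNT(CASE WHEN bit1=1 AND bit2=0 THEN 1 END),
           COUNT(CASE WHEN bit1=0 AND bit2=1 THEN 1 END)
    FROM t
""").fetchone()
print(row)  # (5, 2, 1)
```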
|
Try:
[SQL Fiddle](http://sqlfiddle.com/#!3/9eecb7/6723)
**MS SQL Server 2008 Schema Setup**:
**Query 1**:
```
DECLARE @TableVar TABLE
(bit1 bit, bit2 bit)
INSERT INTO @TableVar Values(0,0)
INSERT INTO @TableVar Values(1,1)
INSERT INTO @TableVar Values(1,0)
INSERT INTO @TableVar Values(1,0)
INSERT INTO @TableVar Values(0,1)
SELECT COUNT(1) AS Col1,
SUM(CASE WHEN bit1=1 and bit2=0 THEN 1 ELSE 0 END) AS Col2,
SUM(CASE WHEN bit1=0 AND bit2=1 THEN 1 ELSE 0 END) AS Col3
FROM @TableVar
```
**[Results](http://sqlfiddle.com/#!3/9eecb7/6723/0)**:
```
| Col1 | Col2 | Col3 |
|------|------|------|
| 5 | 2 | 1 |
```
|
Single Select query to Select 3 different variable based on 3 different condition
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which has a column (orderid) whose IDENTITY property is set. Now I would like to turn it off. How can I do that with ALTER COLUMN? Something like this?
```
ALTER TABLE MyTable
ALTER Column MyColumn SET IDENTITY OFF
```
|
Once the identity property is set on a column you **cannot remove** it or turn it off.
You will probably have to drop the column, after first copying the data into some other column (*which does not have the identity*). That is: add a new column
to your table and copy the values of your existing identity column into it. Then drop the old column (*having the identity*) and finally rename the new column to the old column name.
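The add-and-copy step can be illustrated on a toy table (here with SQLite via Python purely as a sketch; in SQL Server you would then follow with `ALTER TABLE ... DROP COLUMN` and `sp_rename`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE MyTable (orderid INTEGER PRIMARY KEY AUTOINCREMENT, item TEXT);
INSERT INTO MyTable (item) VALUES ('a'),('b');
-- step 1: add a plain column and copy the generated values into it
ALTER TABLE MyTable ADD COLUMN orderid_plain INT;
UPDATE MyTable SET orderid_plain = orderid;
""")
rows = conn.execute("SELECT orderid_plain, item FROM MyTable ORDER BY 1").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```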
|
You have to use SET IDENTITY\_INSERT ON. If you set it to ON then you must explicitly pass values for the ID column.
Why do you want to switch off the identity? Maybe you are trying to pass some explicit values.
Please refer the sample demo here.
```
-- Create tool table.
CREATE TABLE dbo.Tool
(
ID INT IDENTITY NOT NULL PRIMARY KEY,
NAME VARCHAR(40) NOT NULL
);
GO
-- Inserting values into products table.
INSERT INTO dbo.Tool
(NAME)
VALUES ('Screwdriver'),
('Hammer'),
('Saw'),
('Shovel');
GO
-- Create a gap in the identity values.
DELETE dbo.Tool
WHERE NAME = 'Saw';
GO
SELECT *
FROM dbo.Tool;
GO
-- Try to insert an explicit ID value of 3;
-- should return a warning.
INSERT INTO dbo.Tool
(ID,
NAME)
VALUES (3,
'Garden shovel');
GO
-- SET IDENTITY_INSERT to ON.
SET IDENTITY_INSERT dbo.Tool ON;
GO
-- Try to insert an explicit ID value of 3.
INSERT INTO dbo.Tool
(ID,
NAME)
VALUES (3,
'Garden shovel');
GO
SELECT *
FROM dbo.Tool;
GO
-- Drop products table.
DROP TABLE dbo.Tool;
GO
```
|
Alter table to set the IDENTITY to off
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
I need to measure the execution time of a query on Apache Spark (Bluemix).
What I tried:
```
import time
startTimeQuery = time.clock()
df = sqlContext.sql(query)
df.show()
endTimeQuery = time.clock()
runTimeQuery = endTimeQuery - startTimeQuery
```
Is this a good way? The time I get looks too small compared to how long it takes for the table to appear.
|
**Update:**
No, using the `time` package is not the best way to measure the execution time of Spark jobs. The most convenient and accurate way I know of is to use the Spark History Server.
On Bluemix, in your notebooks go to the "Palette" on the right side. Choose the "Environment" panel and you will see a link to the Spark History Server, where you can investigate the completed Spark jobs, including computation times.
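For a rough driver-side check, wall-clock timing is still useful; note that `time.clock()` measured CPU time on many platforms (and was removed in Python 3.8), which is one reason the number can look far too small for a job that mostly runs on the executors. A hedged sketch with `time.perf_counter` (the summation below is just a stand-in for `df.show()`):

```python
import time

start = time.perf_counter()                     # wall-clock, monotonic
total = sum(i * i for i in range(200_000))      # stand-in for df.show()
elapsed = time.perf_counter() - start
print(f"query took {elapsed:.4f}s")
```

This only measures driver-side wall time; per-stage executor time still needs the Spark UI or History Server.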
|
To do it in a spark-shell (Scala), you can use `spark.time()`.
See another response by me: <https://stackoverflow.com/a/50289329/3397114>
```
df = sqlContext.sql(query)
spark.time(df.show())
```
The output would be:
```
+----+----+
|col1|col2|
+----+----+
|val1|val2|
+----+----+
Time taken: xxx ms
```
Related: [On Measuring Apache Spark Workload Metrics for Performance Troubleshooting](https://db-blog.web.cern.ch/blog/luca-canali/2017-03-measuring-apache-spark-workload-metrics-performance-troubleshooting).
|
How to measure the execution time of a query on Spark
|
[
"",
"sql",
"time",
"apache-spark",
"ibm-cloud",
""
] |
I need stats on orders, week by week, so I have done this:
```
SELECT YEAR(orders.date), WEEKOFYEAR(orders.date), COUNT(*)
FROM orders
GROUP BY YEAR(orders.date), WEEKOFYEAR(orders.date)
```
It worked for one year, but just now (new year) it does not count the last days of the 53rd week (Jan 1st, 2nd, 3rd). How can I update my query to get the full last week (from Monday `2015-12-28` to Sunday `2016-01-03`)?
|
You need to switch to [`YEARWEEK(orders.date,3)`](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_yearweek) to get the ISO weeks as a single column. Using `WEEK(orders.date,3)` (which is exactly the same as `WEEKOFYEAR`) will return the correct week number, but `YEAR(orders.date)` will return either 2015 or 2016, splitting the week into four days in 2015 and three days in 2016.
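The ISO week rule can be checked with Python's `datetime` (shown as an illustration; `isocalendar()` follows the same ISO-8601 convention as `YEARWEEK(date, 3)`, which would report 201553 for these dates):

```python
from datetime import date

# Jan 1-3, 2016 belong to ISO week 53 of 2015
# (that week runs Mon 2015-12-28 through Sun 2016-01-03)
for d in (date(2015, 12, 28), date(2016, 1, 1), date(2016, 1, 3)):
    iso_year, iso_week, _ = d.isocalendar()
    print(d, iso_year, iso_week)   # all three fall in 2015, week 53
```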
|
As Strawberry mentioned in the comments you're looking for the `WEEK` function. I just checked the documentation at the MySQL website.
**Week(date [,mode])**
> This function returns the week number for date. The two-argument form of WEEK() enables you to specify whether the week starts on Sunday or Monday and whether the return value should be in the range from 0 to 53 or from 1 to 53. If the mode argument is omitted, the value of the default\_week\_format system variable is used
Here's an example
```
SELECT WEEK('2008-12-31',1);
=> 53
```
It should also be noted that this is not the same as the `WEEKOFYEAR` function.
> Returns the calendar week of the date as a number in the range from **1 to 53**. WEEKOFYEAR() is a compatibility function that is equivalent to **WEEK(date,3)**.
We can see that the value of the mode parameter here is 3. Here is the table that shows the values for the modes
```
Mode First day of week Range Week 1 is the first week
0 Sunday 0-53 With a Sunday in this year
1 Monday 0-53 With 4 or more days this year
2 Sunday 1-53 With a Sunday in this year
3 Monday 1-53 With 4 or more days this year
4 Sunday 0-53 With a Sunday in this year
5 Monday 0-53 With 4 or more days this year
6 Sunday 1-53 With a Sunday in this year
7 Monday 1-53 With 4 or more days this year
```
Source
<https://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_week>
|
Group by weekofyear MySQL for the end of the year
|
[
"",
"mysql",
"sql",
""
] |
I have this table in my postgresql database:
```
purchase
userid | date | price
---------------------------
1 | 2016-01-06 | 10
1 | 2016-01-05 | 5
2 | 2016-01-06 | 12
2 | 2016-01-05 | 15
```
I want the sum of the last purchase price of all users. For user 1 the last purchase is on 2016-01-06 and the price is 10. For user 2 the last purchase is on 2016-01-06 and the price is 12. So the result of the SQL query should be `22`.
How can I do that in SQL ?
|
All proposed solutions are good and work, but as my table contains millions of records I had to find the most efficient way to do what I want. It seems the best way is to use the foreign key between the tables `purchase` and `user` (which I didn't mention in my question, my apologies), which is `purchase.user -> user.id`. Knowing this, I can run the following query:
```
select sum(t.price) from (
select (select price from purchase p where p.userid = u.id order by date desc limit 1) as price
from user u
) t;
```
**EDIT**
To answer to @a\_horse\_with\_no\_name here is `explain analyse verbose` for his and my solutions:
His solution:
```
Aggregate (cost=64032401.30..64032401.31 rows=1 width=4) (actual time=566101.129..566101.129 rows=1 loops=1)
Output: sum(purchase.price)
-> Unique (cost=62532271.89..64032271.89 rows=10353 width=16) (actual time=453849.494..566087.948 rows=12000 loops=1)
Output: purchase.userid, purchase.price, purchase.date
-> Sort (cost=62532271.89..63282271.89 rows=300000000 width=16) (actual time=453849.492..553060.789 rows=300000000 loops=1)
Output: purchase.userid, purchase.price, purchase.date
Sort Key: purchase.userid, purchase.date
Sort Method: external merge Disk: 7620904kB
-> Seq Scan on public.purchase (cost=0.00..4910829.00 rows=300000000 width=16) (actual time=0.457..278058.430 rows=300000000 loops=1)
Output: purchase.userid, purchase.price, purchase.date
Planning time: 0.076 ms
Execution time: 566433.215 ms
```
My solution:
```
Aggregate (cost=28366.33..28366.34 rows=1 width=4) (actual time=53914.690..53914.690 rows=1 loops=1)
Output: sum((SubPlan 1))
-> Seq Scan on public.user2 u (cost=0.00..185.00 rows=12000 width=4) (actual time=0.021..3.816 rows=12000 loops=1)
Output: u.id, u.name
SubPlan 1
-> Limit (cost=0.57..2.35 rows=1 width=12) (actual time=4.491..4.491 rows=1 loops=12000)
Output: p.price, p.date
-> Index Scan Backward using purchase_user_date on public.purchase p (cost=0.57..51389.67 rows=28977 width=12) (actual time=4.490..4.490 rows=1 loops=12000)
Output: p.price, p.date
Index Cond: (p.userid = u.id)
Planning time: 0.115 ms
Execution time: 53914.730 ms
```
My table contains 300 million of records.
I don't know if it's relevant but I also have an index on `purchase (userid, date)`.
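The accepted correlated-subquery approach can be reproduced on a toy dataset with SQLite via Python (an illustration only; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchase (userid INT, date TEXT, price INT);
INSERT INTO purchase VALUES
  (1,'2016-01-06',10),(1,'2016-01-05',5),
  (2,'2016-01-06',12),(2,'2016-01-05',15);
CREATE TABLE user (id INT PRIMARY KEY);
INSERT INTO user VALUES (1),(2);
""")
# For each user, pick the price of the most recent purchase, then sum
total = conn.execute("""
    SELECT SUM((SELECT p.price FROM purchase p
                WHERE p.userid = u.id
                ORDER BY p.date DESC LIMIT 1))
    FROM user u
""").fetchone()[0]
print(total)  # 22
```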
|
You can use windowed functions to get rank number and then use normal aggregation with `SUM`:
```
WITH cte AS
(
SELECT *, RANK() OVER(PARTITION BY userid ORDER BY "date" DESC) AS r
FROM purchase
)
SELECT SUM(price) AS total
FROM cte
WHERE r = 1;
```
`SqlFiddleDemo`
Keep in mind that this solution counts ties. To get only one purchase per user you need a column that is distinct per group (like a `datetime`). But even then there is still a possibility of ties.
**EDIT:**
Handling ties:
```
CREATE TABLE purchase(
userid INTEGER NOT NULL
,date timestamp NOT NULL
,price INTEGER NOT NULL
);
INSERT INTO purchase(userid,date,price) VALUES
(1, timestamp'2016-01-06 12:00:00',10),
(1,timestamp'2016-01-05',5),
(2,timestamp'2016-01-06 13:00:00',12),
(2,timestamp'2016-01-05',15),
(2,timestamp'2016-01-06 13:00:00',1000);
```
Note the difference `RANK()` vs `ROW_NUMBER`:
`SqlFiddleDemo_RANK`
`SqlFiddleDemo_ROW_NUMBER`
`SqlFiddleDemo_ROW_NUMBER_2`
Output:
```
╔════════╦══════════════╦══════════════╗
║ RANK() ║ ROW_NUMBER() ║ ROW_NUMBER() ║
╠════════╬══════════════╬══════════════╣
║   1022 ║           22 ║         1010 ║
╚════════╩══════════════╩══════════════╝
```
Without a `UNIQUE` index on `userid/date` there is always a (probably small) possibility of a tie. Any solution based on `ORDER BY` has to work in a stable manner.
|
Sum of last value from users
|
[
"",
"sql",
"postgresql",
"sum",
"greatest-n-per-group",
""
] |
I have a situation where I need to work with EAV data in an existing application where refactoring is not an option, so I can't change the structure of the data. That said, I have a query that returns data like so
```
| account | field | value | group |
|---------|-------|-------|-------|
| 1 | A | 1 | 1 |
| 1 | B | foo | 1 |
| 1 | A | 2 | 2 |
| 1 | B | foo | 2 |
| 1 | A | 2 | 3 |
| 1 | B | foo | 3 |
| 2 | A | 1 | 4 |
| 2 | A | 2 | 5 |
| 2 | A | 1 | 6 |
```
How can I remove groups 3 and 6, based on the fact that groups 2 and 4 have all of the same accounts, fields, and values respectively?
I thought of using something like
```
select account, field, value, rank() over (partition by account, field, value order by group)
```
but the members of the group will have different rankings depending on whether each specific row has been seen before.
In other words, is it possible to get distinct "sets" of rows, eliminating other sets that have the same number of rows and the same values for those rows, using the "group" column, where the number of rows in different groups may differ?
EDIT:
I'm not sure the original example was a very good one. Using distinct/unique will not work, as I'm interested in distinct groups of rows, not distinct rows. As a better example, consider
```
| account | field | value | group |
|---------|-------|-------|-------|
| 1 | A | 1 | 1 |
| 1 | B | foo | 1 |
| 1 | A | 1 | 2 |
| 1 | B | bar | 2 |
| 1 | A | 2 | 3 |
| 1 | B | foo | 3 |
| 2 | A | 1 | 4 |
| 2 | A | 2 | 5 |
| 2 | A | 1 | 6 |
| 3 | A | 1 | 7 |
| 3 | B | foo | 7 |
| 3 | C | bar | 7 |
| 3 | A | 1 | 8 |
| 3 | B | foo | 8 |
| 3 | C | baz | 8 |
| 3 | A | 1 | 9 |
| 3 | B | foo | 9 |
| 3 | C | bar | 9 |
```
In this case, I would like to remove only groups 6 and 9, as they are the same as groups 4 and 7, respectively. I still need to keep all of the information about the other groups, including the fact that they are grouped.
|
I have solved the problem using part of the suggestion by @Hogan
```
WITH hashes AS
(SELECT group,
SUM(Ora_hash(Concat(Concat(account,field), value))) AS hash
FROM table
GROUP BY group)
SELECT account,
field,
value,
group
FROM table
WHERE group IN (SELECT group
FROM (SELECT group,
row_number() over (PARTITION BY hash ORDER BY NULL) AS rn
FROM hashes)
WHERE rn = 1);
```
The downside to this is that I believe there will be issues if the length of the concatenation of fields passed to ora\_hash() surpasses the maximum character length for a varchar2 field. Ora\_hash() is NOT deterministic for LOB fields, so casting to CLOB would not help. In my case, length constraints on the account, field, and value columns in the database will prevent this from occurring. In other cases, it may be useful to look into the dbms\_crypto.hash() function.
EDIT:
It seems the ora\_hash() function has an unacceptably high rate of collisions for this type of problem. Considering that I am already relying on the fact that the concatenations will not pass the varchar2 character limit, it may be better to compare the values directly. This will only fail in the case that the concatenation of two distinct fields yields the same result, e.g. ('a', 'bc') and ('ab', 'c'), as opposed to the possibility for collisions of any values with ora\_hash().
```
WITH group_vals AS
(SELECT group,
listagg(account || field || value, ',') AS vals
FROM table
GROUP BY group)
SELECT account,
field,
value,
group
FROM table
WHERE group IN (SELECT group
FROM (SELECT group,
row_number() over (PARTITION BY vals ORDER BY NULL) AS rn
FROM group_vals)
WHERE rn = 1);
```
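The group-signature idea can be demonstrated on a toy dataset with SQLite via Python (an illustration only; SQLite's `group_concat` stands in for `listagg`, and a `'|'` delimiter between columns sidesteps the ('a','bc') vs ('ab','c') ambiguity mentioned above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE eav (account INT, field TEXT, value TEXT, grp INT);
INSERT INTO eav VALUES (2,'A','1',4),(2,'A','2',5),(2,'A','1',6);
""")
# Build one signature string per group; keep the lowest group per signature
keep = conn.execute("""
    WITH sigs AS (
      SELECT grp,
             group_concat(account || '|' || field || '|' || value, ',') AS sig
      FROM (SELECT * FROM eav ORDER BY account, field, value) s
      GROUP BY grp
    )
    SELECT MIN(grp) FROM sigs GROUP BY sig ORDER BY 1
""").fetchall()
print([g for (g,) in keep])  # [4, 5]  (group 6 duplicates group 4)
```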
|
New answer based on comments:
```
WITH Prior AS
(
-- First find matches to Prior groups
SELECT A.account, A.field, A.value, A.group, MIN(B.group) as Prior_Group
FROM TABLE A
LEFT JOIN TABLE B ON A.account=B.account
AND A.Field = B.field
AND A.value = B.value
AND A.group > B.group
GROUP BY A.account, A.field, A.value, A.group
), Counts AS
(
-- Count group members and priors
-- Using a trick that nulls for Prior_Group won't be counted
SELECT account, field, value, group,
Count(*) AS Group_Count, Count(Prior_Group) as Prior_Count
FROM Prior
GROUP BY account, field, value, group
)
SELECT account, field, value, group
FROM TABLE
WHERE (account, field, value, group) NOT IN
(SELECT account, field, value, group
FROM Counts
WHERE Group_Count = Prior_Count)
```
---
You can use
```
SELECT DISTINCT account, field, value
FROM (
-- PRIOR QUERY
) x
```
or
```
SELECT account, field, value
FROM (
-- PRIOR QUERY
) x
GROUP BY account, field, value
```
Finally if you want to include the "lowest group" in the distinct set you can do this
```
SELECT account, field, value, group
FROM (
SELECT account, field, value, group,
row_number() OVER (PARTITION BY account, field, value ORDER BY group ASC) AS rn
FROM (
-- PRIOR QUERY
) x
) x2
WHERE rn = 1
```
a side note, using the row\_number() trick, you can include any other columns you want without worry if they are part of the distinct partition.
|
How to get distinct sets of rows using SQL (Oracle)?
|
[
"",
"sql",
"oracle",
""
] |
There are some columns that I want to fetch under certain conditions.
**Columns are :**
point\_id, weight(row currentness index 0 to 7, 0 is the most current row), localdate (YYYYmmdd), tmin(minimum temperature), tmax (maximum temperature) and precip\_amount (precipitation amount mm).
**The conditions are :**
(localdate >= 20151201 AND localdate <= 20160104) AND
(tmin < 0 OR tmax < 0) AND precip\_amount > 40
**My aim is to fetch the most current rows based on weight.**
I wrote a SQL query that works fine (2.7s elapsed, 64.4 MB processed).
But is there a way to optimize my SQL so that it runs faster?
**My sql :**
```
select a.point_id as point_id , a.weight as min_weight,a.localdate as local_date, a.tmin as temp_min, a.tmax as temp_max, a.precip_amount as precipitation
from table1 a
join (select point_id, min(weight) as min_weight
from
(select point_id, localdate, tmin, tmax, precip_amount,weight
from table1
where (localdate >= 20151201 and localdate <= 20160104) and (tmin < 0 or tmax < 0) and precip_amount > 40
order by weight)
group by point_id) b
on a.point_id = b.point_id and a.weight = b.min_weight
where (a.localdate >= 20151201 and a.localdate <= 20160104) and (a.tmin < 0 OR a.tmax < 0) and a.precip_amount > 40
order by a.weight, a.localdate
```
|
Try this query. I have moved the minimum-weight calculation into a subquery.
```
SELECT a.point_id as point_id,
(SELECT MIN(A2.weight) FROM Table A2
WHERE A2.point_id = A.point_id
AND A2.localdate >= 20151201
AND A2.localdate <= 20160104
AND (A2.tmin < 0 or A2.tmax < 0)
AND A2.precip_amount > 40) AS min_weight,
A.localdate as local_date,
A.tmin as temp_min,
A.tmax as temp_max,
A.precip_amount as precipitation
FROM Table AS A
WHERE A.localdate >= 20151201
AND A.localdate <= 20160104
AND (A.tmin < 0 OR A.tmax < 0)
AND A.precip_amount > 40
ORDER BY A.weight, A.localdate
```
You can also try use windowed function. For example
```
SELECT T.* FROM
(SELECT A.point_id,
MIN(A.weight) OVER(PARTITION BY A.point_id) AS min_weight,
A.localdate AS local_date,
A.tmin AS temp_min,
A.tmax AS temp_max,
A.precip_amount as precipitation
FROM Table A
WHERE A.localdate >= 20151201
AND A.localdate <= 20160104
AND (A.tmin < 0 OR A.tmax < 0)
AND A.precip_amount > 40) AS T
ORDER BY T.min_weight,
T.localdate
```
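Both the correlated subquery and the windowed `MIN` express the same idea: the smallest weight per `point_id`. As a plain-Python sketch with hypothetical rows:

```python
rows = [("p1", 3), ("p1", 0), ("p2", 5), ("p2", 2)]  # (point_id, weight)

# MIN(weight) OVER (PARTITION BY point_id), computed by hand
min_weight = {}
for point_id, weight in rows:
    min_weight[point_id] = min(weight, min_weight.get(point_id, weight))

print(min_weight)  # {'p1': 0, 'p2': 2}
```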
|
```
select *
from
( select a.point_id as point_id , a.weight as min_weight,a.localdate as local_date, a.tmin as temp_min, a.tmax as temp_max, a.precip_amount as precipitation
, row_number() over (partition by point_id order by weight asc) as rn
from table1 a
where (a.localdate >= 20151201 and a.localdate <= 20160104)
and (a.tmin < 0 OR a.tmax < 0)
and a.precip_amount > 40
) tt
where tt.rn = 1
order by tt.weight, tt.localdate
```
|
Is there a way to optimize my bigquery sql in order to run faster?
|
[
"",
"sql",
"optimization",
"google-bigquery",
""
] |
On applying ABS to evenly divisible data, the whole-number quotient comes back with a fractional error.
```
create table #data ( ConditionValue money, ConditionRateNbr real)
insert into #data values(9665.77,37.61)
select abs(conditionvalue/conditionrateNbr) Using_Abs ,*
from #data
--Using_Abs ConditionValue ConditionRateNbr
--256.999969482422 9665.77 37.61
```
Why does this happen and how can I solve it?
|
From what I have seen, using money is more reliable than decimal.
Here is another example.
Why on earth decimal considers 3/1.5 to be 1.5, I do not know! (Note: a `CAST` to `decimal` without an explicit scale defaults to `decimal(18,0)`, so 1.5 is rounded to 2 first, and 3/2 = 1.5.)
```
declare @real real = 1.5, @realup money=3
select @realUp numerator, @real denominator, @realup /cast (@real as money) money, @realup /cast(@real as decimal) decimal
--numerator denominator money decimal
--3.00 1.5 2.00 1.50000000000000000000000
```
|
It has to do with the "real" data type being an "approximate-number" type, see <https://msdn.microsoft.com/en-us/library/ms173773.aspx>. My knowledge of SQL is not deep enough to give a detailed explanation, but I do have a solution: use the decimal data type instead of real:
```
create table #data ( ConditionValue money, ConditionRateNbr decimal(38,20))
insert into #data values(9665.77,37.61)
select abs(conditionvalue/conditionrateNbr) Using_Abs ,*
from #data
```
IMO it's usually smarter to use decimal than real to avoid this kind of issue.
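The root cause can be reproduced in Python: a single-precision float (like SQL `real`) cannot hold 37.61 exactly, while exact decimal arithmetic divides cleanly. This sketch round-trips through 32-bit floats to mimic `real`:

```python
import struct
from decimal import Decimal

def as_real(x):
    """Round-trip through a 32-bit float, mimicking the SQL `real` type."""
    return struct.unpack("f", struct.pack("f", x))[0]

approx = as_real(9665.77) / as_real(37.61)    # carries single-precision error
exact = Decimal("9665.77") / Decimal("37.61") # exact decimal arithmetic

print(exact)  # 257
```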
|
SQL ABS() is returning unexpected decimal when applied on divisible data
|
[
"",
"sql",
""
] |
In Oracle, I have column named Create\_date with data returning as 1400003659, 1400072380, and 1403796514 as examples. The column type is NOT NULL NUMBER(15). I'm trying to edit my SELECT statement to return these values as dates (or are these dates and times?).
**I've tried the below, but all are returning errors**:
```
SELECT to_date(Create_date, 'YYMMDD'),
SELECT to_date(to_char(Create_date), 'YYMMDD'),
SELECT to_timestamp(Create_date, 'YYMMDD'),
SELECT to_date(LPAD(Create_date, 15, '0'), 'YYMMDD'),
SELECT to_date(LPAD(Create_date), 'YYMMDD'),
```
**An example error message I'm receiving:**
```
SQL Error: ORA-01843: not a valid month
01843. 00000 - "not a valid month"
```
|
This looks like a Unix timestamp, which is the number of seconds since 1970-01-01.
If you want just the date, you need to calculate the number of days and add it to 1/1/1970 like so:
```
to_date('1970-01-01','YYYY-MM-DD') + numtodsinterval(1400003659,'SECOND')
```
If you want to retain the timestamp, you can do so like this:
```
to_char(to_date('1970-01-01','YYYY-MM-DD') + numtodsinterval(1400003659,'SECOND'),'YYYY-MM-DD HH24:MI:SS')
```
[See this for more information.](https://dba.stackexchange.com/questions/16461/convert-a-unix-timestamp-to-a-datetime-in-a-view)
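As a sanity check outside Oracle, the same epoch arithmetic can be done in Python, using the sample value from the question:

```python
from datetime import datetime, timedelta

ts = 1400003659  # seconds since 1970-01-01 00:00:00 (Unix epoch)

converted = datetime(1970, 1, 1) + timedelta(seconds=ts)
print(converted.strftime("%Y-%m-%d %H:%M:%S"))  # 2014-05-13 17:54:19
```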
|
The answers are not precise! UNIX time is seconds since 1970-01-01 00:00:00 **UTC!**
So, unless your database server runs on UTC you should do it like this:
```
SELECT
(TIMESTAMP '1970-01-01 00:00:00' AT TIME ZONE 'UTC'
+ 1400003659 * INTERVAL '1' SECOND) AT LOCAL
FROM dual;
```
or
```
SELECT
(TIMESTAMP '1970-01-01 00:00:00' AT TIME ZONE 'UTC'
+ numtodsinterval(1400003659,'second')) AT LOCAL
FROM dual;
```
or, to get the time in the time zone of the database server's operating system:
```
SELECT
(TIMESTAMP '1970-01-01 00:00:00' AT TIME ZONE 'UTC'
+ numtodsinterval(1400003659,'second')) AT TO_CHAR(SYSTIMESTAMP, 'tzr')
FROM dual;
```
|
Convert number to date
|
[
"",
"sql",
"oracle",
"date",
"select",
""
] |
I am trying to compare data between two ORACLE tables with different table structure, different column names. I need a ORACLE SQL query that compares the data and returns the unmatched data with their IDs. Both the tables have same ID column which can be used as the comparison link. Could anyone please guide me on this.
Example: TABLE A (ID\_A,QTY\_A,DATE\_A)
Example: TABLE B (ID\_B,QTY\_B,DATE\_B) where ID\_A = ID\_B
|
You should be able to do this with a `FULL OUTER JOIN`:
```
SELECT
A.id,
A.qty_a,
B.qty_b,
A.date_a,
B.date_b
FROM
Table_A A
FULL OUTER JOIN Table_B B ON B.id = A.id
WHERE
(
A.qty_a <> B.qty_b OR
(A.qty_a IS NULL AND B.qty_b IS NOT NULL) OR
(A.qty_a IS NOT NULL AND B.qty_b IS NULL)
) OR
(
        A.date_a <> B.date_b OR
(A.date_a IS NULL AND B.date_b IS NOT NULL) OR
(A.date_a IS NOT NULL AND B.date_b IS NULL)
)
```
|
```
(
(select id_a as id, qty_a as qty, date_a as date from table_a)
minus
(select id_b as id, qty_b as qty, date_b as date from table_b)
)
union
(
(select id_b as id, qty_b as qty, date_b as date from table_b)
minus
(select id_a as id, qty_a as qty, date_a as date from table_a)
)
```
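The `MINUS`/`UNION` approach is just a symmetric set difference: rows present on one side but not the other. A small Python sketch with hypothetical rows:

```python
a = {(1, 10, "2016-01-01"), (2, 20, "2016-01-02")}  # table A rows
b = {(1, 10, "2016-01-01"), (2, 25, "2016-01-02")}  # table B rows

# (A MINUS B) UNION (B MINUS A): rows that differ between the tables
discrepancies = (a - b) | (b - a)
```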
|
Need to compare data between two ORACLE tables with different table structure and return the discrepancies
|
[
"",
"sql",
"oracle",
"compare",
""
] |
I am trying to rank my students by the points I've calculated before,
but the problem is that if students have the same points they should both get the same rank.
E.g
Student 1 has full points
Student 2 has full points
they both have to be rank as 1;
[](https://i.stack.imgur.com/IZyNy.png)
Here an example of my database
The query I am trying is (just the select; then I can insert the values into my column):
```
SELECT a.points,
count(b.points)+1 as rank
FROM examresults a left join examresults b on a.points>b.points
group by a.points;
```
**Edit for being more clear:**
* Student 1 points 80
* Student 2 points 77.5
* Student 3 points 77.5
* Student 4 points 77
their ranks should be like
* Student 1 Rank 1
* Student 2 Rank 2
* Student 3 Rank 2
* Student 4 Rank 3
my current query returns a values like
[](https://i.stack.imgur.com/6YMBg.png)
It is missing the third rank (because the second rank has 2 values).
|
This is just a fix of Gordon's solution using variables. The thing is, your desired rank function isn't the way standard RANK works (student 4 would be rank 4 under standard RANK).
**[SQL Fiddle Demo](http://sqlfiddle.com/#!9/ff872c/1)** *You can add more student to improve the testing.*
```
select er.*,
(@rank := if(@points = points,
@rank,
if(@points := points,
@rank + 1,
@rank + 1
)
)
) as ranking
from students er cross join
(select @rank := 0, @points := -1) params
order by points desc;
```
**OUTPUT**
```
| id | points | ranking |
|----|--------|---------|
| 1 | 80 | 1 |
| 2 | 78 | 2 |
| 3 | 78 | 2 |
| 4 | 77 | 3 |
| 5 | 66 | 4 |
| 6 | 66 | 4 |
| 7 | 66 | 4 |
| 8 | 15 | 5 |
```
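To make the difference between the two numbering schemes concrete, both can be reproduced in plain Python over the question's sample points (an illustration, not part of the query):

```python
points = [80, 77.5, 77.5, 77]  # already sorted descending

# standard RANK(): 1 + number of strictly greater values (leaves gaps)
rank = [1 + sum(1 for q in points if q > p) for p in points]

# dense ranking, as requested in the question (no gaps)
distinct = sorted(set(points), reverse=True)
dense = [1 + distinct.index(p) for p in points]

print(rank)   # [1, 2, 2, 4]
print(dense)  # [1, 2, 2, 3]
```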
|
You want a real rank, which is calculated by the ANSI standard `rank()` function. You can implement this in MySQL using this logic:
```
select er.*,
(select 1 + count(*)
from examresults er2
where er2.points > er.points
) as ranking
from exampleresults er;
```
For larger tables, you can do this with variables, but it is a rather awkward:
```
select er.*,
       @rn := @rn + 1 as rn,                 -- increment the row number
       (@rank := if(@points = points,
                    @rank,                   -- same points: keep the current rank
                    if(@points := points,    -- remember these points
                       @rn, @rn              -- new points: rank = row number
                      )
                   )
       ) as ranking
from examresults er cross join
     (select @rn := 0, @rank := 0, @points := -1) params
order by points desc;
```
|
Rank users in mysql by their points
|
[
"",
"mysql",
"sql",
"sorting",
"ranking",
"mysql-variables",
""
] |
I have a MySQL table that holds the configuration of my project. Each configuration change creates a new entry, so that I have a history of all changes and who made them.
```
CREATE TABLE `configurations` (
`name` varchar(255) NOT NULL,
`value` text NOT NULL,
`lastChange` datetime NOT NULL,
`changedBy` bigint(32) NOT NULL,
KEY `lastChange` (`lastChange`),
KEY `name` (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO `configurations` (`name`, `value`, `lastChange`, `changedBy`) VALUES
('activePageLimit', 'activePageLimit-old-value', '2016-01-06 12:25:05', 1096775260340178),
('activePageLimit', 'activePageLimit-new-value', '2016-01-06 12:27:57', 1096775260340178),
('customerLogo', 'customerLogo-old-value', '2016-02-06 00:00:00', 1096775260340178),
('customerLogo', 'customerLogo-new-value', '2016-01-07 00:00:00', 1096775260340178);
```
Right now I have a problem with my select query, which should return all names and their latest value (ordered by `lastChange`).
```
| name | value | lastChange |
|-----------------|---------------------------|---------------------------|
| customerLogo | customerLogo-new-value | January, 07 2016 00:00:00 |
| activePageLimit | activePageLimit-new-value | January, 06 2016 12:27:57 |
```
My current Query is:
```
SELECT `name`, `value`, `lastChange`
FROM (
SELECT `name`, `value`, `lastChange`
FROM `configurations`
ORDER BY `lastChange` ASC
) AS `c`
GROUP BY `name` DESC
```
But unfortunately this does not always return the right values, and I don't like using a subquery; there has to be a cleaner and faster way to do this.
I also created a SQL-Fiddle for you as a playground: <http://sqlfiddle.com/#!9/f1dc9/1/0>
Is there any other clever solution I missed?
|
Your method is documented to return indeterminate results (because you have columns in the `select` that are not in the `group by`).
Here are three alternatives. The first is standard SQL, using an explicit aggregation to get the most recent change.
```
SELECT c.*
FROM configurations c JOIN
(SELECT `name`, MAX(`lastChange`) as maxlc
FROM `configurations`
GROUP BY name
) mc
     ON c.name = mc.name and c.lastchange = mc.maxlc;
```
The second is also standard SQL, using `not exists`:
```
select c.*
from configurations c
where not exists (select 1
from configurations c2
where c2.name = c.name and c2.lastchange > c.lastchange
);
```
The third uses a hack which is available in MySQL (and it assumes that the value does not have any commas in this version and is not too long):
```
select name, max(lastchange),
       substring_index(group_concat(value order by lastchange desc), ',', 1) as value
from configurations
group by name
order by name;
```
Use this version carefully, because it is prone to error (for instance, the intermediate `group_concat()` result could exceed a MySQL parameter, which would then have to be re-set).
There are other methods -- such as using variables. But these three should be sufficient for you to consider your options.
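All three queries implement "row with MAX(lastChange) per name". The same selection in Python, over hypothetical rows (string comparison works here because the timestamps are in sortable ISO-like order):

```python
rows = [
    ("activePageLimit", "old", "2016-01-06 12:25:05"),
    ("activePageLimit", "new", "2016-01-06 12:27:57"),
]

# keep the row with the greatest lastChange per name
latest = {}
for name, value, last_change in rows:
    if name not in latest or last_change > latest[name][1]:
        latest[name] = (value, last_change)

print(latest["activePageLimit"])  # ('new', '2016-01-06 12:27:57')
```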
|
If we want to avoid a subquery, the only other option is a JOIN:
```
SELECT cc.name, cc.value, cc.lastChange FROM configurations cc
JOIN (
SELECT name, value, lastChange
FROM configurations
ORDER BY lastChange ASC
) c on c.value = cc.value
GROUP BY cc.name DESC
```
|
MySQL Query Fixing/Optimisation for my configuration table
|
[
"",
"mysql",
"sql",
"performance",
""
] |
I have the following data.
```
WM_Week POS_Store_Count POS_Qty POS_Sales POS_Cost
------ --------------- ------ -------- --------
201541 3965 77722 153904.67 102593.04
201542 3952 77866 154219.66 102783.12
201543 3951 70690 139967.06 94724.60
201544 3958 70773 140131.41 95543.55
201545 3958 76623 151739.31 103441.05
201546 3956 73236 145016.54 98868.60
201547 3939 64317 127368.62 86827.95
201548 3927 60762 120309.32 82028.70
```
I need to write a SQL query to get the last four weeks of data, and their last four weeks summed for each of the following columns: `POS_Store_Count`,`POS_Qty`,`POS_Sales`, and `POS_Cost`.
For example, if I wanted 201548's data it would contain 201548, 201547, 201546, and 201545's.
The sum of 201547 would contain 201547, 201546, 201545, and 201544.
The query should return 4 rows when ran successfully.
How would I formulate a recursive query to do this? Is there something easier than recursion?
Edit: The version is Azure Sql DW with version number 12.0.2000.
Edit2: The four rows that should be returned would each have the sum of the columns from itself and its three earlier weeks.
For example, if I wanted the figures for 201548 it would return the following:
```
WM_Week POS_Store_Count POS_Qty POS_Sales POS_Cost
------ --------------- ------- -------- --------
201548 15780 274938 544433.79 371166.3
```
Which is the sum of the four (non-identity) columns from `201548`, `201547`, `201546`, and `201545`.
|
Pretty sure this will get you what you want. I'm using CROSS APPLY after ordering the data to apply the SUMs:
```
Create Table #WeeklyData (WM_Week Int, POS_Store_Count Int, POS_Qty Int, POS_Sales Money, POS_Cost Money)
Insert #WeeklyData Values
(201541,3965,77722,153904.67,102593.04),
(201542,3952,77866,154219.66,102783.12),
(201543,3951,70690,139967.06,94724.6),
(201544,3958,70773,140131.41,95543.55),
(201545,3958,76623,151739.31,103441.05),
(201546,3956,73236,145016.54,98868.6),
(201547,3939,64317,127368.62,86827.95),
(201548,3927,60762,120309.32,82028.7)
DECLARE @StartWeek INT = 201548;
WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY [WM_Week] DESC) rn
FROM #WeeklyData
WHERE WM_Week BETWEEN @StartWeek - 9 AND @StartWeek
)
SELECT *
FROM cte c1
CROSS APPLY (SELECT SUM(POS_Store_Count) POS_Store_Count_SUM,
SUM(POS_Qty) POS_Qty_SUM,
SUM(POS_Sales) POS_Sales_SUM,
SUM(POS_Cost) POS_Cost_SUM
FROM cte c2
WHERE c2.rn BETWEEN c1.rn AND (c1.rn + 3)
) ca
WHERE c1.rn <= 4
```
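As a cross-check, the 4-week rolling sum over the question's POS\_Qty column can be computed in Python; the final window reproduces the total shown in the question:

```python
pos_qty = [77722, 77866, 70690, 70773, 76623, 73236, 64317, 60762]  # weeks 201541..201548

window = 4
# sum of each week plus its three predecessors, once a full window exists
rolling = [sum(pos_qty[i - window + 1 : i + 1]) for i in range(window - 1, len(pos_qty))]

print(rolling[-1])  # 274938, the POS_Qty total for 201548 shown in the question
```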
|
You can use `SUM()` in combination with the [OVER Clause](https://msdn.microsoft.com/en-us/library/ms189461.aspx)
Something like:
```
SELECT WM_Week
     , SUM(POS_Store_Count) OVER (ORDER BY WM_Week ROWS BETWEEN 3 PRECEDING AND CURRENT ROW)
FROM Table
```
|
Calculate Sum From Moving 4 Rows in SQL
|
[
"",
"sql",
"sql-server",
"azure-synapse",
""
] |
OK, so I have a table with, amongst others, the columns: name, c\_id, date.
Certain entries in that table obey a certain criteria, which I can effectively select using WHERE, so this is not a problem.
What I would like, then, is a way to group by, in addition to the name and c\_id columns, a *group* of three dates: date - 1, date, and date + 1. In other words, I want each row of the output to represent all entries that have the same name and c\_id as a certain relevant entry and which happened between the day before and the day after that entry, including itself.
How would I go about doing that?
--EDIT:
(EDIT2: The origin table is supposed to be an INNER JOIN of Table1 and Table2 ON Table1.id = Table2.id)
Sample data:
```
Table1:
id | c_id | date | other stuff
-----------------------------------------------------
01 | abc | 2015/12/09 | whatever
02 | abc | 2015/12/09 | whatever
03 | abc | 2015/12/10 | relevant criterion
04 | abc | 2015/12/11 | whatever
05 | def | 2015/11/15 | whatever
06 | def | 2015/11/16 | relevant criterion
07 | abc | 2015/11/17 | whatever
08 | mnc | 2016/01/02 | whatever
09 | mnc | 2016/01/02 | whatever
10 | mnc | 2016/01/03 | whatever
11 | mnc | 2016/01/03 | whatever
12 | mnc | 2016/01/03 | whatever
13 | mnc | 2016/01/04 | relevant criterion
14 | mnc | 2016/01/05 | whatever
15 | mnc | 2016/01/05 | whatever
16 | mnc | 2016/01/06 | whatever
Table2:
id | Name | other stuff
--------------------------------------
01 | John | whatever
02 | John | whatever
03 | John | whatever
04 | John | whatever
05 | Mary | whatever
06 | Mary | whatever
07 | Mary | whatever
08 | Alice | whatever
09 | Alice | whatever
10 | Alice | whatever
11 | Alice | whatever
12 | Alice | whatever
13 | Alice | whatever
14 | Alice | whatever
15 | Alice | whatever
16 | Alice | whatever
```
Sample desired output:
```
Name | c_id | pivot_date | count
------------------------------------------
John | abc | 2015/12/10 | 4
Mary | def | 2015/11/16 | 2
Alice | mnc | 2016/01/04 | 6
```
(The pivot\_date does not necessarily have to be the one matching the relevant criterion; any of the dates involved is fine.)
|
Updated for new sample data:
```
SELECT t.name, t.c_id, t.date pivot_date, COUNT(*) count
FROM record t
JOIN record t2
ON t2.name = t.name
AND t2.c_id = t.c_id
AND t2.date >= t.date - INTERVAL 1 DAY
AND t2.date <= t.date + INTERVAL 1 DAY
WHERE t.other_stuff = 'relevant criterion'
GROUP BY t.name, t.c_id, t.date
```
[See SQLFiddle](http://sqlfiddle.com/#!9/3d5f34/1)
--
Updated for new sample data:
```
SELECT t2.name, t1.c_id, t1.date pivot_date, COUNT(*) count
FROM table1 t1
JOIN table1 to1
ON to1.c_id = t1.c_id
AND to1.date >= t1.date - INTERVAL 1 DAY
AND to1.date <= t1.date + INTERVAL 1 DAY
JOIN table2 t2
ON t2.id = t1.id
WHERE t1.other_stuff = 'relevant criterion'
GROUP BY t2.name, t1.c_id, t1.date
```
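The counting logic for one pivot row can be sketched in Python using the question's mnc sample (criterion row on 2016-01-04, counting rows within one day either side):

```python
from datetime import date

pivot = date(2016, 1, 4)
mnc_dates = [date(2016, 1, 2), date(2016, 1, 2), date(2016, 1, 3),
             date(2016, 1, 3), date(2016, 1, 3), date(2016, 1, 4),
             date(2016, 1, 5), date(2016, 1, 5), date(2016, 1, 6)]

# rows where |date - pivot| <= 1 day
count = sum(1 for d in mnc_dates if abs((d - pivot).days) <= 1)
print(count)  # 6, matching the Alice / mnc row in the desired output
```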
|
This is a way:
```
SET @dt := '2015-12-10';
SELECT Name , c_id, MAX(`date`) maxdate, COUNT(*) countof
FROM table1
WHERE `date` >= @dt - INTERVAL 1 DAY AND `date` < @dt + INTERVAL 2 DAY
GROUP BY Name , c_id
```
Note that MIN(`date`) could be used as an alternative to MAX(`date`).
|
SQL Group By set of dates
|
[
"",
"mysql",
"sql",
"group-by",
""
] |
In SQL Server 2014 I am trying to accumulate a value per game, row by row, like this:
[](https://i.stack.imgur.com/Hqs7x.png)
Does anybody know if this is possible and how?
|
An old solution for this is to use a query like this:
```
SELECT
t1.id, t1.Game, t1.Points, SUM(t2.Points) [Points (Add)]
FROM
yourTable t1 JOIN
yourTable t2 ON t1.Game = t2.Game AND t1.id >= t2.id
GROUP BY
t1.id, t1.Game, t1.Points
```
|
You can use a cumulative sum:
```
select [Order], Game, Points,
sum(Points) over (partition by Game order by [Order]) as CumePoints
from t;
```
You should avoid using reserved words and keywords for table or column names. In other words, `Order` is a bad name for a column name, because it needs to be escaped.
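The window function is just an ordinary running total within each game; for intuition, the same thing with `itertools.accumulate` over hypothetical (game, order, points) rows:

```python
from itertools import accumulate

rows = [("A", 1, 10), ("A", 2, 5), ("B", 1, 7), ("B", 2, 3)]  # (game, order, points)

# running total per game, ordered by the order column within each game
cume = {}
for game in sorted({g for g, _, _ in rows}):
    pts = [p for g, _, p in sorted(rows) if g == game]
    cume[game] = list(accumulate(pts))

print(cume)  # {'A': [10, 15], 'B': [7, 10]}
```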
|
Add per row the total value in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2014",
""
] |
I am trying to query through my db to receive all posts which contain a certain user\_id as owner or participant of the post.
My DB structure is as following:
```
id: INT
media_id: INT
owner_id: INT
participants: STRING eg. [1,2]
comments: TEXT eg. [{},{}]
```
Here are two example entries:
```
id: 1, media_id: 2, owner_id: 1, participants: "[1,2]", comments: "[]"
id: 2, media_id: 3, owner_id: 2, participants: "[2,1]", comments: "[]"
```
What I am trying to achieve is to get all rows where the user with the id `1` is part of the column `owner_id` OR `participants`
My current query looks as following:
```
SELECT * FROM posts WHERE owner_id = 1 OR participants IN "1"
```
And the result which I receive is only the line where `1` is the `owner_id`
I might be understanding the IN comparison wrong, but `1` is part of participants in both rows, so shouldn't I get both rows as the result?
|
If you are willing to change the data structure of the `participants` field to a comma-separated value, e.g. `1,2` or `2,1`, you can then use:
`SELECT * FROM posts WHERE owner_id = 1 OR FIND_IN_SET(1, participants);`
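The distinction between equality and list membership is easy to see in Python: the `IN (...)` clause in the question compares the whole string, it does not search inside it.

```python
participants = "1,2"   # comma-separated list, as suggested above

print("1" in participants.split(","))  # True  -- FIND_IN_SET-style membership test
print(participants == "1")             # False -- what `participants IN ("1")` checks
```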
|
**This will do the trick:**
```
SELECT * FROM posts WHERE owner_id = 1
OR participants like "%,1]"
OR participants like "[1,%"
OR participants like "%,1,%"
```
|
mySQL WHERE clause with column = val1 OR coumn IN values
|
[
"",
"mysql",
"sql",
""
] |
Can a WHERE clause return NULL instead of TRUE or FALSE?
According to the exercise below it is possible, but I can't imagine an example that returns NULL. Is it really possible?
```
4. Which of the following values can NOT be returned after evaluation of WHERE clause
condition?
A. UNKNOWN
B. TRUE
C. FALSE
D. NULL
Answer: A. If the result of the condition in WHERE clause is not known, NULL is returned. In all
other scenarios, either TRUE or FALSE is returned.
```
|
In SQL, all logical operators evaluate to TRUE, FALSE, or UNKNOWN ([Oracle docs](https://docs.oracle.com/cd/B19306_01/server.102/b14200/conditions004.htm)); in MySQL the UNKNOWN result is called NULL ([MySQL docs](https://docs.oracle.com/cd/E17952_01/refman-5.5-en/logical-operators.html)).
According to oracle documentation:
> "To test for nulls, use only the comparison conditions IS NULL and IS
> NOT NULL. If you use any other condition with nulls and the result
> depends on the value of the null, then the result is UNKNOWN."
So only TRUE, FALSE, and UNKNOWN can be returned after evaluation.
About your question:
> "Can a WHERE clause return NULL instead of TRUE or FALSE?"
Strictly speaking, in Oracle - **NO**, because such a result is called UNKNOWN.
But in general the meaning of UNKNOWN and NULL is equivalent in this context; it is just a different name for the same thing.
So the SQL example below (`a.a >= all`) evaluates to UNKNOWN.
```
with table_a as (
select null as a from dual
union all
select 10 as a from dual
union all
select 5 as a from dual),
table_b as (
select null as a from dual
union all
select 10 as a from dual
union all
select 5 as a from dual)
select * from table_a a where a.a >= all(select a from table_b b)
```
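The same three-valued behaviour can be observed with Python's built-in sqlite3 driver (a sketch; SQLite likewise evaluates `NULL = NULL` to NULL/UNKNOWN, and WHERE keeps only rows where the condition is TRUE):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE t (a INTEGER)")
cur.execute("INSERT INTO t VALUES (NULL), (10), (5)")

print(cur.execute("SELECT NULL = NULL").fetchone()[0])           # None (UNKNOWN)
print(cur.execute("SELECT a FROM t WHERE a = NULL").fetchall())  # [] -- UNKNOWN never passes WHERE
```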
|
> Not even a NULL can be equal to NULL.
1. The correct way to understand NULL is that it is not a value. Not
"this is a NULL value" but "this NULL is not a value." Everything
either is a value, or it isn't.
2. When something is a value, it is "1", "hello", "green", "$5.00"
etc. -- but when something isn't a value, it just isn't anything at
all.
3. SQL represents "this has no value" by the special non-value NULL.
When someone says "the NULL value," one should mentally disagree,
because there's no such thing. NULL is the complete, total absence
of any value whatsoever.
***A Non-Technical aspect***
> If you ask two girls how old they are, they may both refuse to answer.
> Both girls are giving you NULL as their age, but that does not mean
> they are the same age. So nothing can be equal to NULL.
`SELECT 0 IS NULL , 0 IS NOT NULL , '' IS NULL , '' IS NOT NULL, NULL != NULL, NULL = NULL, NULL != '', NULL = ''`
|
Can a WHERE clause predicate evaluate to NULL?
|
[
"",
"sql",
"oracle",
"null",
"where-clause",
"clause",
""
] |
I can't connect to SQL Server. The connection string of my project is:
```
<add name="Teleport_DEVEntities" connectionString="metadata=res://*/Data.Model.AdvertisingModel.csdl|res://*/Data.Model.AdvertisingModel.ssdl|res://*/Data.Model.AdvertisingModel.msl;provider=System.Data.SqlClient;provider connection string="data source=*****;initial catalog=****;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework"" providerName="System.Data.EntityClient" />
```
I get this error:
> Keyword not supported: 'metadata'
How can I fix this error?
|
That connection string is only supported by Entity Framework. (To be fair, the keyword "entities" is in the key name!) If you want to use the connection string in an ADO raw connection, remove anything outside the `"` string parts, including the `"`s:
Change it to:
```
<add name="Teleport_DEVEntities"
connectionString="data source=*****;initial catalog=****;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework"
providerName="System.Data.EntityClient"
/>
```
|
It seems the connection string is an Entity Framework one.
**A possible way is to skip the metadata** and then **extract the plain provider connection string.**
> The below code saved my time !!
```
if (connectionString.ToLower().StartsWith("metadata="))
{
    var efBuilder = new System.Data.Entity.Core.EntityClient.EntityConnectionStringBuilder(connectionString);
    connectionString = efBuilder.ProviderConnectionString;
}
```
|
How do I solve "Keyword not supported: 'metadata' "?
|
[
"",
"sql",
"sql-server",
"database-connection",
"connection-string",
""
] |
I have the following records in my database table:
```
Date Credit Debit Description
--------------- ------- ------- ---------------
12-24-2015 5 Purchased credit
12-20-2015 1 Consumed credit
12-15-2015 3 Purchased credit
12-08-2015 1 Consumed credit
12-08-2015 1 Consumed credit
12-07-2015 1 Consumed credit
12-04-2015 1 Consumed credit
12-03-2015 1 Consumed credit
12-01-2015 5 Purchased credit
```
I want to calculate and display the balance for each record as shown below:
```
Date Credit Debit Balance Description
------------ ------- ------- ------- ---------------
12-24-2015 5 0 7 Purchased credit
12-20-2015 1 2 Consumed credit
12-15-2015 3 0 3 Purchased credit
12-08-2015 1 0 Consumed credit
12-08-2015 1 1 Consumed credit
12-07-2015 1 2 Consumed credit
12-04-2015 1 3 Consumed credit
12-03-2015 1 4 Consumed credit
12-01-2015 5 0 5 Purchased credit
```
Can anyone please help me to achieve the above result?
|
To produce the balance, use `sum()` in its analytic (windowed) version.
```
select tdate, credit, debit,
sum(nvl(credit, 0)-nvl(debit, 0)) over (order by rn) balance, description
from (
select tdate, credit, debit, row_number() over (order by tdate) rn, description
from test)
order by rn desc
```
If your table contains an increasing primary key, you can order by that instead of the generated row number.
Test data and output:
```
create table test (tdate date, credit number(6), debit number(6), description varchar2(20));
insert into test values (date '2015-12-24', 5, null, 'Purchased credit');
insert into test values (date '2015-12-20', null, 1, 'Consumed credit');
insert into test values (date '2015-12-15', 3, null, 'Purchased credit');
insert into test values (date '2015-12-08', null, 1, 'Consumed credit');
insert into test values (date '2015-12-08', null, 1, 'Consumed credit');
insert into test values (date '2015-12-07', null, 1, 'Consumed credit');
insert into test values (date '2015-12-04', null, 1, 'Consumed credit');
insert into test values (date '2015-12-03', null, 1, 'Consumed credit');
insert into test values (date '2015-12-01', 5, null, 'Purchased credit');
TDATE CREDIT DEBIT BALANCE DESCRIPTION
----------- ------- ------- ---------- --------------------
2015-12-24 5 7 Purchased credit
2015-12-20 1 2 Consumed credit
2015-12-15 3 3 Purchased credit
2015-12-08 1 0 Consumed credit
2015-12-08 1 1 Consumed credit
2015-12-07 1 2 Consumed credit
2015-12-04 1 3 Consumed credit
2015-12-03 1 4 Consumed credit
2015-12-01 5 5 Purchased credit
```
|
You should be able to use the `LAG` analytic function for this to look at the previous row's data.
```
SELECT Date,
Credit,
Debit,
LAG(Balance, 1, 0) OVER(ORDER BY Date) - Debit + Credit AS Balance,
Description
FROM sometable
```
The `1` argument means that it looks 1 row previous, the `0` argument means if the row at the given offset doesn't exist it will return 0 instead (i.e. for the first row).
Source: <https://oracle-base.com/articles/misc/lag-lead-analytic-functions>
|
Calculate balance from Credit and Debit records
|
[
"",
"sql",
"database",
"oracle",
"oracle11g",
""
] |
I have two columns: an ID, and a value which is either 0 or 1. I am trying to select all rows for any ID that has both a 0 and a 1. For example,
```
RowNumber ------------- ID ------- value
1 ------------------- 001 ------- 1
2 ------------------- 001 ------- 1
3 ------------------- 001 ------- 1
4 ------------------- 002 ------- 1
5 ------------------- 002 ------- 0
6 ------------------- 003 ------- 1
7 ------------------- 003 ------- 1
8 --------------------004 ------- 1
9 -------------------- 004 ------- 0
10 ------------------- 004 ------- 1
```
The result should select rows 4, 5, 8, 9, 10
|
You can use the windowed version of `COUNT`:
```
SELECT RowNumber, ID, value
FROM (
SELECT RowNumber, ID, value,
COUNT(CASE WHEN value = 1 THEN 1 END) OVER (PARTITION BY ID) AS cntOnes,
COUNT(CASE WHEN value = 0 THEN 1 END) OVER (PARTITION BY ID) AS cntZeroes
FROM test
WHERE value IN (0,1) ) AS t
WHERE cntOnes >= 1 AND cntZeroes >= 1
```
Each conditional `COUNT` is at least 1 only if the corresponding value (`0` or `1`) exists within the same `ID` slice, so the outer filter keeps IDs that have *both* values.
|
`DISTINCT` is indeed not allowed in a windowed version of the `COUNT`, so you can use `MIN` and `MAX` instead.
```
DECLARE @T TABLE(RN int, ID int, value int);
INSERT INTO @T (RN, ID, value) VALUES
(1, 001, 1),
(2, 001, 1),
(3, 001, 1),
(4, 002, 1),
(5, 002, 0),
(6, 003, 1),
(7, 003, 1),
(8, 004, 1),
(9, 004, 0),
(10, 004, 1);
WITH
CTE
AS
(
SELECT
RN, ID, value
,MIN(value) OVER (PARTITION BY ID) AS MinV
,MAX(value) OVER (PARTITION BY ID) AS MaxV
FROM @T AS T
)
SELECT RN, ID, value
FROM CTE
WHERE MinV <> MaxV
;
```
**Result**
```
+----+----+-------+
| RN | ID | value |
+----+----+-------+
| 4 | 2 | 1 |
| 5 | 2 | 0 |
| 8 | 4 | 1 |
| 9 | 4 | 0 |
| 10 | 4 | 1 |
+----+----+-------+
```
|
select rows where ID has variation of values
|
[
"",
"sql",
"sql-server",
""
] |
My executed SQL query is as follows:
```
update elements E
set E.END_I = (select n.node_num
from nodes N
where abs(E.X_I - N.XI) < 0.001 and
abs(E.Y_I - N.YI) < 0.001 and
abs(E.Z_I - N.ZI) < 0.001
)
```
It takes about 24 seconds to complete. I read the Firebird troubleshooting guide [Why is my database query slow?](http://www.firebirdfaq.org/faq13/), which instructs you to create indices for the related fields, and I've added descending/ascending indices for the XI, YI, ZI fields in both the Nodes and Elements tables. But the performance is still very slow; there are 6677 rows in the database and I'm using FlameRobin as the SQL editor.
The interesting thing is that the Firebird troubleshooting guide says:
> If you see a NATURAL plan going against a big table, you've found the
> problem
i.e. this is described as a bad case and a source of slowdown, and the recommended solution is to create descending indices for the related fields. But in my case, even after defining the indices, I'm still getting PLAN (N NATURAL), PLAN (E NATURAL), as reported in the FlameRobin output shown below.
How am I supposed to eliminate that?
```
Preparing query: update elements E set E.END_I = (select n.node_num from nodes N
where abs(E.X_I-N.XI)<0.001 and abs(E.Y_I - N.YI)<0.001 and abs(E.Z_I-N.ZI)<0.001 )
Prepare time: 0.004s
PLAN (N NATURAL)
PLAN (E NATURAL)
Executing...
Done.
108818273 fetches, 79227 marks, 4050 reads, 9380 writes.
0 inserts, 6677 updates, 0 deletes, 0 index, 14549183 seq.
Delta memory: 212 bytes.
ELEMENTS: 6677 updates.
6677 rows affected directly.
Total execution time: 24.038s
Script execution finished.
CREATE DESCENDING INDEX IDX_ELEMENTS1 ON ELEMENTS (Z_I);
CREATE DESCENDING INDEX IDX_XI ON ELEMENTS (X_I);
CREATE DESCENDING INDEX IDX_YI ON ELEMENTS (Y_I);
GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE
ON ELEMENTS TO SYSDBA WITH GRANT OPTION;
CREATE DESCENDING INDEX IDX_NODES1_XI ON NODES (XI);
CREATE DESCENDING INDEX IDX_NODES1_YI ON NODES (YI);
CREATE DESCENDING INDEX IDX_NODES1_ZI ON NODES (ZI);
GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE
ON NODES TO SYSDBA WITH GRANT OPTION;
```
|
Your query is slowed down by the `abs()` function, as an index on a bare column cannot be used for expressions.
Try changing the query to at least give the database a chance to use an index:
```
update elements E
set E.END_I = (select n.node_num
from nodes N
where N.XI < E.X_I + 0.001 AND N.XI > E.X_I - 0.001
AND N.YI < E.Y_I + 0.001 AND N.YI > E.Y_I - 0.001
AND N.ZI < E.Z_I + 0.001 AND N.ZI > E.Z_I - 0.001
)
```
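The rewrite relies on a simple equivalence: `abs(x - y) < eps` holds exactly when `y` lies in the open interval `(x - eps, x + eps)`, and only the second form lets the engine do an index range scan. A quick Python check with hypothetical coordinate values:

```python
eps = 0.001
e_x, n_x = 1.2345, 1.2341   # hypothetical E.X_I and N.XI values

original = abs(e_x - n_x) < eps                      # expression the index cannot serve
rewritten = (n_x < e_x + eps) and (n_x > e_x - eps)  # index-friendly range form

print(original, rewritten)  # True True
```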
|
Create indices on columns X\_I, Y\_I, Z\_I, then run this statement:
```
MERGE INTO elements dst
USING (
SELECT e.x_i, e.y_i, e.z_i, n.node_num
FROM nodes N JOIN elements E ON
abs(E.X_I - N.XI) < 0.001 and
abs(E.Y_I - N.YI) < 0.001 and
abs(E.Z_I - N.ZI) < 0.001
) src
ON dst.X_I = src.X_I AND dst.Y_I = src.Y_I AND dst.Z_I = src.Z_I
WHEN MATCHED THEN UPDATE SET dst.END_I = src.NODE_NUM
```
As mentioned in the answer here <https://stackoverflow.com/a/34656659/55350>, you could get rid of the ABS functions and create indices on columns XI, YI, ZI of the NODES table to further speed up the process.
|
Slow query execution in Firebird
|
[
"",
"sql",
"performance",
"firebird",
""
] |
I am new to ORACLE SQL and I am trying to learn it quickly.
I have following table definition:
```
Create table Sales_Biodata
(
Saler_Id INTEGER NOT NULL UNIQUE,
Jan_Sales INTEGER NOT NULL,
Feb_Sales INTEGER NOT NULL,
March_Sales INTEGER NOT NULL
);
Insert into Sales_Biodata (SALER_ID,JAN_SALES,Feb_Sales,March_Sales)
values ('101',22,525,255);
Insert into Sales_Biodata (SALER_ID,JAN_SALES,Feb_Sales,March_Sales)
values ('102',22,55,25);
Insert into Sales_Biodata (SALER_ID,JAN_SALES,Feb_Sales,March_Sales)
values ('103',45545,5125,2865);
```
My objective is the following:
1- Searching the highest sales and second highest sales against each saler\_id.
For example in our above case:
For saler\_id = 101 the highest sales is 525 and the second highest is 255;
similarly, for saler\_id = 102 the highest sales is 55 and the second highest is 25.
For my above approach I am using the following query:
```
Select Saler_Id,
(
CASE
WHEN JAN_SALES>FEB_SALES AND JAN_SALES>MARCH_SALES THEN JAN_SALES
WHEN FEB_SALES>JAN_SALES AND FEB_SALES>MARCH_SALES THEN FEB_SALES
WHEN MARCH_SALES>JAN_SALES AND MARCH_SALES>FEB_SALES THEN MARCH_SALES
WHEN JAN_SALES=FEB_SALES AND JAN_SALES=MARCH_SALES THEN JAN_SALES
WHEN JAN_SALES=FEB_SALES AND JAN_SALES>MARCH_SALES THEN JAN_SALES
WHEN JAN_SALES=MARCH_SALES AND JAN_SALES>FEB_SALES THEN JAN_SALES
WHEN FEB_SALES=JAN_SALES AND FEB_SALES>MARCH_SALES THEN FEB_SALES
WHEN FEB_SALES=MARCH_SALES AND FEB_SALES>JAN_SALES THEN FEB_SALES
WHEN MARCH_SALES=JAN_SALES AND MARCH_SALES>FEB_SALES THEN MARCH_SALES
WHEN MARCH_SALES=FEB_SALES AND MARCH_SALES>JAN_SALES THEN MARCH_SALES
ELSE 'NEW_CASE_FOUND'
END
) FIRST_HIGHEST,
(
CASE
WHEN JAN_SALES>FEB_SALES AND FEB_SALES>MARCH_SALES THEN FEB_SALES
WHEN FEB_SALES>JAN_SALES AND JAN_SALES>MARCH_SALES THEN JAN_SALES
WHEN JAN_SALES>MARCH_SALES AND MARCH_SALES>FEB_SALES THEN MARCH_SALES
ELSE 'NEW_CASE_FOUND'
END
) SECOND_HIGHEST
from
Sales_Biodata;
```
but I am getting the following errors:
```
ORA-00932: inconsistent datatypes: expected NUMBER got CHAR
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 60 Column: 6
```
Please guide me on the following:
1- How to search the data horizontally for maximum and second maximum.
2- Please guide me on alternate approaches for searching data for a row horizontally.
|
Your data model is wrong.
The first thing I would do is to unpivot data using this query:
```
select * from sales_biodata
unpivot (
val for mon in ( JAN_SALES,FEB_SALES,MARCH_SALES )
)
;
```
and after this, getting two top values is relatively easy:
```
SELECT *
FROM (
SELECT t.*,
dense_rank() over (partition by saler_id order by val desc ) x
FROM (
select * from sales_biodata
unpivot (
val for mon in ( JAN_SALES,FEB_SALES,MARCH_SALES )
)
) t
)
WHERE x <= 2
```
the above query will give a result in this format:
```
SALER_ID MON VAL X
---------- ----------- ---------- ----------
101 FEB_SALES 525 1
101 MARCH_SALES 255 2
102 FEB_SALES 55 1
102 MARCH_SALES 25 2
103 JAN_SALES 45545 1
103 FEB_SALES 5125 2
```
If you have more than 3 months, you can easily extend this query by changing this part:
`val for mon in ( JAN_SALES,FEB_SALES,MARCH_SALES, April_sales, MAY_SALES, JUNE_SALES, JULY_SALES, ...... NOVEMBER_SALES, DECEMBER_SALES )`
If you want both two values in one row, you need to pivot data back:
```
WITH src_data AS(
SELECT saler_id, val, x
FROM (
SELECT t.*,
dense_rank() over (partition by saler_id order by val desc ) x
FROM (
select * from sales_biodata
unpivot (
val for mon in ( JAN_SALES,FEB_SALES,MARCH_SALES )
)
) t
)
WHERE x <= 2
)
SELECT *
FROM src_data
PIVOT(
max(val) FOR x IN ( 1 As "First value", 2 As "Second value" )
);
```
This gives a result in this form:
```
SALER_ID First value Second value
---------- ----------- ------------
101 525 255
102 55 25
103 45545 5125
```
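For illustration, the same unpivot-then-rank idea can be sketched outside Oracle. SQLite has no `UNPIVOT`, so the unpivot step is emulated with `UNION ALL` below (sample data from the question; requires SQLite >= 3.25 for window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_biodata (saler_id INTEGER, jan_sales INTEGER,
                            feb_sales INTEGER, march_sales INTEGER);
INSERT INTO sales_biodata VALUES (101, 22, 525, 255),
                                 (102, 22, 55, 25),
                                 (103, 45545, 5125, 2865);
""")

# Emulated unpivot (UNION ALL), then DENSE_RANK per saler_id.
rows = con.execute("""
SELECT saler_id, mon, val
FROM (
  SELECT t.*,
         DENSE_RANK() OVER (PARTITION BY saler_id ORDER BY val DESC) AS x
  FROM (
    SELECT saler_id, 'JAN_SALES'   AS mon, jan_sales   AS val FROM sales_biodata
    UNION ALL
    SELECT saler_id, 'FEB_SALES',   feb_sales   FROM sales_biodata
    UNION ALL
    SELECT saler_id, 'MARCH_SALES', march_sales FROM sales_biodata
  ) t
) ranked
WHERE x <= 2
ORDER BY saler_id, x
""").fetchall()
for r in rows:
    print(r)
```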
---
**EDIT - why `MAX` is used in the PIVOT query**
---
The short answer is: because the syntax requires an aggregate function here.
See this link for the syntax: <http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_10002.htm#CHDCEJJE>
[](https://i.stack.imgur.com/UO8aW.gif)
---
A broader answer:
The PIVOT clause is only syntactic sugar that simplifies the general "classic" pivot query, which uses an aggregate function and a `GROUP BY` clause, like this:
```
SELECT id,
max( CASE WHEN some_column = 'X' THEN value END ) As x,
max( CASE WHEN some_column = 'Y' THEN value END ) As y,
max( CASE WHEN some_column = 'Z' THEN value END ) As z
FROM table11
GROUP BY id
```
You can find more on PIVOT queries on the net; there are a lot of excellent explanations of how they work.
The above pivot query, written in "standard" SQL, is equivalent to this Oracle's query:
```
SELECT *
FROM table11
PIVOT (
max(value) FOR some_column IN ( 'X', 'Y', 'Z' )
)
```
These PIVOT queries transform records like this:
```
ID SOME_COLUMN VALUE
---------- ----------- ----------
1 X 10
1 X 15
1 Y 20
1 Z 30
```
into one record (for each `id`) like this:
```
ID 'X' 'Y' 'Z'
---------- ---------- ---------- ----------
1 15 20 30
```
Please note that the source table contains two values for id=1 and some\_column='X' -> 10 and 15. PIVOT queries use an aggregate function to support the general case, where there can be many source records for one record in the output. In this example the `MAX` function is used to pick the greater value, 15.
However, PIVOT queries also support your specific case, where there is only one source record for each value in the result.
|
Getting the maximum value is simply:
```
select greatest(jan_sales, feb_sales, march_sales)
```
If you want the second value:
```
select (case when jan_sales = greatest(jan_sales, feb_sales, march_sales)
             then greatest(feb_sales, march_sales)
             when feb_sales = greatest(jan_sales, feb_sales, march_sales)
             then greatest(jan_sales, march_sales)
             else greatest(jan_sales, feb_sales)
        end)
```
However, this is the wrong approach to the whole problem. The main issue is that you have the wrong data structure. Store values in *rows* not *columns*. So, you need to unpivot your data and re-aggregate, such as:
```
select saler_id,
max(case when seqnum = 1 then sales end) as sales_1,
max(case when seqnum = 2 then sales end) as sales_2,
max(case when seqnum = 3 then sales end) as sales_3
from (select s.*, dense_rank() over (partition by saler_id order by sales desc) as seqnum
      from (select saler_id, jan_sales as sales from Sales_Biodata union all
            select saler_id, feb_sales from Sales_Biodata union all
            select saler_id, march_sales from Sales_Biodata
) s
) s
group by saler_id;
```
|
Case statement not supporting horizontal search with column name in query
|
[
"",
"sql",
"oracle",
""
] |
I have a table like this:
```
id | name | surname | city|
-------------------------------
'1', 'mohit', 'garg', 'delhi'
'2', 'mohit', 'gupta', 'delhi'
'3', 'ankita', 'gupta', 'jaipur'
'4', 'ankita', 'garg', 'jaipur'
'5', 'vivek', 'garg', 'delhi'
```
I am looking for a query that returns (id,city) grouped by city, with at most two (id) per city, but without using nested queries.
**Expected output**:
```
'1', 'delhi'
'2', 'delhi'
'3', 'jaipur'
'4', 'jaipur'
```
|
Perhaps the only way without subqueries is to use a trick with `substring_index()` and `group_concat()`:
```
select city, substring_index(group_concat(id), ',', 2)
from t
group by city;
```
This puts the ids in a comma-delimited list, rather than in separate rows. Also, you have to be careful about the size of the intermediate results.
Of course, the accepted practice would use either a subquery in the `where` clause or a subquery using variables.
EDIT:
Here is a method for getting two ids per city without listing the cities:
```
select city, min(id) as id
from t
group by city
union
select city, max(id)
from t
group by city;
```
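A runnable sketch of the `MIN`/`MAX` union trick (SQLite via Python's `sqlite3`, using the question's sample rows). Note it returns the lowest and highest id per city, which only matches "the first two ids" when a city has at most two rows; for delhi it returns ids 1 and 5, not 1 and 2:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, name TEXT, surname TEXT, city TEXT);
INSERT INTO t VALUES
  (1,'mohit','garg','delhi'),  (2,'mohit','gupta','delhi'),
  (3,'ankita','gupta','jaipur'),(4,'ankita','garg','jaipur'),
  (5,'vivek','garg','delhi');
""")

# Two ids per city, no subqueries: the per-city minimum union the maximum.
rows = con.execute("""
SELECT city, MIN(id) AS id FROM t GROUP BY city
UNION
SELECT city, MAX(id) FROM t GROUP BY city
ORDER BY id
""").fetchall()
print(rows)
```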
|
You can do this with a `LEFT OUTER JOIN`, although using a subquery will probably be clearer and might be faster. Here's a method using the `JOIN`:
```
SELECT
T1.id,
T1.city
FROM
My_Table T1
LEFT OUTER JOIN My_Table T2 ON T2.city = T1.city AND T2.id <= T1.id
GROUP BY
T1.id,
T1.city
HAVING
COUNT(*) <= 2
```
You're effectively counting, for each row in T1, the rows in the same city with an id less than or equal to its own; when that count is <= 2, the row must be one of the two lowest ids for that city.
|
SQL: Find top n in groupby data
|
[
"",
"mysql",
"sql",
""
] |
Yesterday I asked a question about ranking students by their points:
[Rank users in mysql by their points](https://stackoverflow.com/questions/34637943/rank-users-in-mysql-by-their-points)
I actually solved my problem with the help I received from that question (thanks to everyone who helped me).
Meanwhile I figured out something accidentally: if I try to rank my students by their points and the points column is empty (NULL), my query doesn't give an error, and it still sorts the ranks as 1-2-3-4 even though all the values are **NULL**.
this is the query I've tried
```
select er.*,
(@rank := if(@points = points,
@rank,
if(@points := points,
@rank + 1,
@rank + 1
)
)
) as ranking
from examresults er cross join
(select @rank := 0, @points := -1) params
order by points desc;
```
and this is the result.
[](https://i.stack.imgur.com/zjuC8.png)
So I wonder: aren't **NULL** values the same? Shouldn't this query give **rank 1** to every user in my database? Why does it rank NULL values incrementally?
|
Many people think of `NULL` as meaning `no value` or `non-existent`. Even [Wikipedia](https://en.wikipedia.org/wiki/Null_%28SQL%29) explains it this way.
Think of it as `unknown value` and everything will make more sense.
One cannot compare two `unknown values` because, well, they are not known. They are not equal but they are also not different. None of them is greater than the other.
The result of comparison of two `unknown values` is also an `unknown value` (i.e. `NULL`).
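A tiny demonstration (SQLite via Python's `sqlite3`; the same three-valued logic applies in MySQL): comparing anything with `NULL` yields `NULL` (unknown), while `IS NULL` is a genuine test:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL = NULL and 1 = NULL both evaluate to NULL (shown as None in Python);
# only NULL IS NULL is true.
row = con.execute("SELECT NULL = NULL, 1 = NULL, NULL IS NULL").fetchone()
print(row)  # (None, None, 1)
```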
|
You compare `@points = points` which results in `@points=NULL`.
But any comparison to `NULL` results in `UNKNOWN`:
```
value= NULL -> UNKNOWN
value<>NULL -> UNKNOWN
value< NULL -> UNKNOWN
value> NULL -> UNKNOWN
```
Even `NULL=NULL` results in `UNKNOWN`.
Of course there are exceptions:
`DISTINCT` and `GROUP BY` consider NULLs equal.
|
Why null values are not ranking same in mysql?
|
[
"",
"mysql",
"sql",
"database",
"null",
"ranking",
""
] |
```
SELECT b.first_name, b.last_name, SUM(t.price_total)
FROM brokers b
LEFT OUTER JOIN
trades t
ON b.broker_id=t.broker_id
GROUP BY
b.broker_id
```
My problem question asks to 'Display the total value of each broker's trades'. The answer groups by `b.first_name||' '||b.last_name`, but I think the grouping should be done by the broker's ID (two people with the same name would be grouped together; this wouldn't happen with the broker ID).
Yet when running my code, I get an error:
```
ORA-00979: not a GROUP BY expression
00979. 00000 - "not a GROUP BY expression"
*Cause:
*Action:
Error at Line: 1 Column: 8
```
my question is, why can't I use b.broker\_id as the only column to group by?
|
try this:
```
SELECT b.broker_id, b.first_name, b.last_name, SUM(t.price_total)
FROM brokers b
LEFT OUTER JOIN
trades t
ON b.broker_id=t.broker_id
GROUP BY
b.broker_id, b.first_name, b.last_name
```
When you use GROUP BY, select clause can have only the columns specified in GROUP BY and aggregated/calculated fields for that group like sum, average, min, max etc.
You can use analytic functions to overcome it, this is mainly used to avoid self joins and get the all the fields you require while doing aggregations:
```
SELECT distinct * from (SELECT b.first_name, b.last_name,
SUM(t.price_total) over (partition by b.broker_id) price_total
FROM brokers b
LEFT OUTER JOIN
trades t
ON b.broker_id=t.broker_id);
```
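SQLite is laxer than Oracle here (it tolerates bare columns alongside GROUP BY), so it won't reproduce ORA-00979, but the fully-grouped form is portable and runs anywhere. A sketch with made-up broker/trade rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE brokers (broker_id INTEGER, first_name TEXT, last_name TEXT);
CREATE TABLE trades  (broker_id INTEGER, price_total INTEGER);
INSERT INTO brokers VALUES (1,'Ann','Lee'), (2,'Bob','Kim');
INSERT INTO trades  VALUES (1,100), (1,50), (2,70);
""")

# Every non-aggregated SELECT column also appears in GROUP BY.
rows = con.execute("""
SELECT b.broker_id, b.first_name, b.last_name, SUM(t.price_total)
FROM brokers b
LEFT OUTER JOIN trades t ON b.broker_id = t.broker_id
GROUP BY b.broker_id, b.first_name, b.last_name
ORDER BY b.broker_id
""").fetchall()
print(rows)
```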
|
A slightly different approach is to group the trades by broker ID, then join the result of that to the brokers table:
```
SELECT b.first_name, b.last_name, price_sum
FROM brokers b
LEFT OUTER JOIN (
SELECT broker_id, SUM(price_total) AS price_sum
FROM trades
GROUP BY broker_id
) t
ON b.broker_id=t.broker_id
```
Personally I feel like this expresses more clearly what you are trying to do.
|
Oracle SQL using group by: Can you group by a field used to join two tables?
|
[
"",
"sql",
"oracle",
"group-by",
"outer-join",
""
] |
A tennis database I'm working with contains one row per match. Each row has the player name and a condition (win or loss). I need to display a table with ALL the players, showing the player name, total games, number of wins, number of losses and win percentage.
I haven't figured out how to properly set up the SELECT statement to perform what I need. Somewhere to start would be:
`SELECT player, COUNT(player) FROM tenis_table GROUP BY player`
Grouping by player shows every unique player name, and COUNT(player) gives me the total games played. But how do I properly create an SQL SELECT statement that also gives me a column with the number of wins, losses and win percentage based off a condition that is either "win" or "loss"?
|
Realistically, you should only be asking for player, TotalGames, and Wins. Everything else should be done in your code, but here it is in case you absolutely want to stray from good design:
```
SELECT player,
COUNT(*) AS TotalGames,
SUM(CASE WHEN condition='win' THEN 1 ELSE 0 END) AS Wins,
COUNT(*)-SUM(CASE WHEN condition='win' THEN 1 ELSE 0 END) AS Losses,
-- SUM(CASE WHEN condition='loss' THEN 1 ELSE 0 END) AS Losses,
SUM(CASE WHEN condition='win' THEN 1 ELSE 0 END)*100/COUNT(*) AS WinPercentage
FROM tenis_table
GROUP BY player
```
Also, COUNT(player) counts only the records in the group where `player` is not null. COUNT(\*) I believe is slightly faster because it does not have to check whether each record's `player` is null, so I've switched to that.
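Here is the conditional-aggregation pattern as a runnable sketch (SQLite via Python's `sqlite3`, made-up match data; note SQLite's integer division truncates the percentage, so 2/3 wins shows as 66):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tenis_table (player TEXT, condition TEXT);
INSERT INTO tenis_table VALUES
  ('alice','win'), ('alice','loss'), ('alice','win'), ('bob','loss');
""")

# One CASE expression per statistic, summed per player.
rows = con.execute("""
SELECT player,
       COUNT(*)                                          AS total,
       SUM(CASE WHEN condition='win'  THEN 1 ELSE 0 END) AS wins,
       SUM(CASE WHEN condition='loss' THEN 1 ELSE 0 END) AS losses,
       SUM(CASE WHEN condition='win'  THEN 1 ELSE 0 END)*100/COUNT(*) AS win_pct
FROM tenis_table
GROUP BY player
ORDER BY player
""").fetchall()
print(rows)
```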
|
Probably the simplest way to write this code in MySQL is:
```
SELECT player,
COUNT(*) AS TotalGames,
SUM(condition = 'win' ) AS NumWins,
SUM(condition = 'lose') AS NumLosses,
AVG(condition = 'win') AS WinPercentage
FROM tenis_table
GROUP BY player;
```
This uses a MySQL extension of treating boolean expressions as numbers (I happen to like this extension; `case` is also perfectly acceptable). It simplifies the logic and uses `AVG()` for the computation of percentage.
I'm mostly answering because I disagree with Robert's first sentence: the database is a fine place to do all the calculations.
|
SQL SELECT statement that displays number of wins/losses based off a value ("win" or "loss")
|
[
"",
"mysql",
"sql",
"select",
""
] |
I have a table where I need to get the oldest date from a group and be able to return all rows. I'm finding it difficult since I need to return the system\_id field.
```
AssignedProfsHistory MatterID EffectiveDate
1 33434-3344 08/22/2005
2 33434-3344 07/12/2004
3 33434-3344 07/12/2004
4 21122-323 12/05/2007
5 43332-986 10/18/2014
6 43332-986 03/23/2013
```
So in this example, the rows for systemid 2 & 3 should be returned because they are tied for the earliest date, the row for systemid 4 should be returned, and the row for systemid 6 should be returned.
This is what I have so far. Because I need to include the systemid (AssignedProfHistory), I'm not getting the results I need.
```
SELECT aph.AssignedProfsHistory,
m.MatterID,
Min(aph.EffectiveDate) as 'EffectiveDate'
from AssignedProfsHistory aph
INNER JOIN Matters m
ON aph.Matters = m.Matters
WHERE aph.AssignedType = 'Originating'
Group by m.matters,m.matterid,aph.assignedprofshistory
order by m.MatterID
```
Any idea how to get the results I need?
Thank you in advance.
|
Since you want to keep the ties, I'd do it like this:
```
SELECT t2.AssignedProfsHistory, m.MatterID, t2.EffectiveDate
FROM (
SELECT MatterID, MIN(EffectiveDate) med
FROM AssignedProfsHistory
WHERE AssignedType = 'Originating'
GROUP BY MatterID
) t1
INNER JOIN AssignedProfsHistory t2 ON t2.MatterID = t1.MatterID
and t2.EffectiveDate = t1.med and t2.AssignedType = 'Originating'
INNER JOIN Matters m on m.Matters = t2.Matters
ORDER BY m.MatterId
```
Here is an [SQLFiddle](http://www.sqlfiddle.com/#!6/9c697/1/0) without the `Matters` table that demonstrates it can work, no windowing functions or CTE required, though a CTE would allow you to avoid repeating the `AssignedType='Originating'` condition.
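A runnable sketch of the join-back-to-the-per-group-MIN approach (SQLite via Python's `sqlite3`; the `Matters` join and `AssignedType` filter are dropped, and dates are stored as ISO strings so `MIN` orders them correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE aph (id INTEGER, matterid TEXT, effdate TEXT);
INSERT INTO aph VALUES
  (1,'33434-3344','2005-08-22'), (2,'33434-3344','2004-07-12'),
  (3,'33434-3344','2004-07-12'), (4,'21122-323','2007-12-05'),
  (5,'43332-986','2014-10-18'),  (6,'43332-986','2013-03-23');
""")

# Join each row back to its group's minimum date; ties survive the join.
rows = con.execute("""
SELECT t2.id, t2.matterid, t2.effdate
FROM (SELECT matterid, MIN(effdate) AS med
      FROM aph GROUP BY matterid) t1
JOIN aph t2 ON t2.matterid = t1.matterid AND t2.effdate = t1.med
ORDER BY t2.id
""").fetchall()
print(rows)  # ids 2, 3, 4 and 6: the tie for 33434-3344 is kept
```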
|
```
select AssignedProfsHistory, MatterID, EffectiveDate
from (
SELECT
aph.AssignedProfsHistory,
m.MatterID,
aph.EffectiveDate,
row_number() over(partition by m.MatterID order by aph.EffectiveDate) as rn
from AssignedProfsHistory aph
INNER JOIN Matters m ON aph.Matters = m.Matters
WHERE aph.AssignedType = 'Originating'
) t
where rn = 1;
```
You can use the `row_number` window function to assign row numbers to dates for each matterid. Because the ordering is based on the ascending EffectiveDate, rows with the oldest date get assigned `1` and you select those.
If a matterid can have multiple rows with the oldest dates, you can use `rank` or `dense_rank` to get all the rows for the oldest date.
|
How do I find the oldest date in Group
|
[
"",
"sql",
"sql-server",
""
] |
I have a table that looks like the following -
```
╔═════════╦═══════════════╦════════════════╗
║ QueueID ║ AttributeName ║ AttributeValue ║
║ 123     ║ Domain        ║ Azure          ║
║ 123     ║ Area          ║ EMEA           ║
║ 123     ║ ContractType  ║ Contract1      ║
║ 123     ║ RequestType   ║ Workshop       ║
║ 124     ║ Domain        ║ .NET           ║
║ 124     ║ Area          ║ Asia-Pacific   ║
║ 124     ║ ContractType  ║ Contract2      ║
║ 124     ║ RequestType   ║ Critical       ║
╚═════════╩═══════════════╩════════════════╝
```
I want to find the QueueID for the following combination
```
Domain = .NET; Area = Asia-Pacific; ContractType = Contract2; RequestType = Critical
```
Basically, I'd like to find the QueueId for a specified combination, which in the example above would be 124
|
I haven't tested this, but please give it a try:
```
SELECT QueueID
FROM MyTable
WHERE ( AttributeName = 'Domain' AND AttributeValue = '.NET' )
OR ( AttributeName = 'Area' AND AttributeValue = 'Asia-Pacific' )
OR ( AttributeName = 'ContractType' AND AttributeValue = 'Contract2' )
OR ( AttributeName = 'RequestType' AND AttributeValue = 'Critical' )
GROUP BY QueueID
HAVING Count(*) = 4
```
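The OR-of-pairs plus `HAVING COUNT(*) = 4` trick can be verified directly; a sketch in SQLite using the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE q (queueid INTEGER, attributename TEXT, attributevalue TEXT);
INSERT INTO q VALUES
  (123,'Domain','Azure'),           (123,'Area','EMEA'),
  (123,'ContractType','Contract1'), (123,'RequestType','Workshop'),
  (124,'Domain','.NET'),            (124,'Area','Asia-Pacific'),
  (124,'ContractType','Contract2'), (124,'RequestType','Critical');
""")

# A queue qualifies only if all 4 attribute/value pairs match.
rows = con.execute("""
SELECT queueid
FROM q
WHERE (attributename='Domain'       AND attributevalue='.NET')
   OR (attributename='Area'         AND attributevalue='Asia-Pacific')
   OR (attributename='ContractType' AND attributevalue='Contract2')
   OR (attributename='RequestType'  AND attributevalue='Critical')
GROUP BY queueid
HAVING COUNT(*) = 4
""").fetchall()
print(rows)  # [(124,)]
```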
|
```
SELECT queueid
FROM yourtable
WHERE AttributeName = 'Domain' AND AttributeValue = '.NET'
intersect
SELECT queueid
FROM yourtable
WHERE AttributeName = 'Area'
AND AttributeValue = 'Asia-Pacific'
intersect
SELECT queueid
FROM yourtable
WHERE AttributeName = 'ContractType'
AND AttributeValue = 'Contract2'
intersect
SELECT queueid
FROM yourtable
WHERE AttributeName = 'RequestType'
AND AttributeValue = 'Critical'
```
Try this.
|
SQL - How to return rows with a common column value and also based on other conditions?
|
[
"",
"sql",
"sql-server",
""
] |
I want to get statistics with sql query. My table is like this:
```
ID MATERIAL CREATEDATE DEPARTMENT
1 M1 10.10.1980 D1
2 M2 11.02.1970 D2
2 M3 18.04.1971 D3
.....................
.....................
.....................
```
How can I get a range of data count like this
```
DEPARTMENT AGE<10 10<AGE<20 20<AGE
D1 24 123 324
D2 24 123 324
```
|
Assuming that CREATEDATE is a date column, in PostgreSQL you can use the [AGE](http://www.postgresql.org/docs/8.2/static/functions-datetime.html) function:
```
select DEPARTMENT, age(CREATEDATE) as AGE
from Materials
```
and with date\_part you can get the age in years. To show the data in the format that you want, you could use this GROUP BY query:
```
select
DEPARTMENT,
sum(case when date_part('year', age(CREATEDATE))<10 then 1 end) as "age<10",
sum(case when date_part('year', age(CREATEDATE))>=10 and date_part('year', age(CREATEDATE))<20 then 1 end) as "10<age<20",
sum(case when date_part('year', age(CREATEDATE))>=20 then 1 end) as "20<age"
from
Materials
group by
DEPARTMENT
```
which can be simplified as:
```
with mat_age as (
select DEPARTMENT, date_part('year', age(CREATEDATE)) as mage
from Materials
)
select
DEPARTMENT,
sum(case when mage<10 then 1 end) as "age<10",
sum(case when mage>=10 and mage<20 then 1 end) as "10<age<20",
sum(case when mage>=20 then 1 end) as "20<age"
from
mat_age
group by
DEPARTMENT;
```
if you are using PostgreSQL 9.4 you can use FILTER:
```
with mat_age as (
select DEPARTMENT, date_part('year', age(CREATEDATE)) as mage
from Materials
)
select
DEPARTMENT,
count(*) filter (where mage<10) as "age<10",
count(*) filter (where mage>=10 and mage<20) as "10<age<20",
count(*) filter (where mage>=20) as "20<age"
from
mat_age
group by
DEPARTMENT;
```
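SQLite has no `age()`/`date_part()`, so this sketch precomputes the age in years (made-up numbers) and shows only the bucketing step, which is plain conditional aggregation; note that buckets with no matching rows come back as NULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE materials (department TEXT, age INTEGER);
INSERT INTO materials VALUES
  ('D1',5), ('D1',15), ('D1',25), ('D2',8), ('D2',30);
""")

# Same CASE-per-bucket pattern as the PostgreSQL query above.
rows = con.execute("""
SELECT department,
       SUM(CASE WHEN age < 10 THEN 1 END)               AS "age<10",
       SUM(CASE WHEN age >= 10 AND age < 20 THEN 1 END) AS "10<age<20",
       SUM(CASE WHEN age >= 20 THEN 1 END)              AS "20<age"
FROM materials
GROUP BY department
ORDER BY department
""").fetchall()
print(rows)  # D2 has no rows in the middle bucket, hence None
```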
|
You can use `extract(year FROM age(createdate))` to get the exact age. For example,
```
select extract(year FROM age(timestamp '01-01-1989')) age
```
will give you
Result:
```
age
---
27
```
so you can use the following select statement to get your desired output:
```
SELECT dept
    ,sum(CASE WHEN age < 10 THEN 1 END) "age<10"
,sum(CASE WHEN age >= 10 AND age < 20 THEN 1 END) "10<age<20"
,sum(CASE WHEN age >= 20 THEN 1 END) "20<age"
FROM (
SELECT dept,extract(year FROM age(crdate)) age
FROM dt
) t
GROUP BY dept
```
If you don't want to use a subselect, use this:
```
SELECT dept
    ,sum(CASE WHEN extract(year FROM age(crdate)) < 10 THEN 1 END) "age<10"
,sum(CASE WHEN extract(year FROM age(crdate)) >= 10 AND extract(year FROM age(crdate)) < 20 THEN 1 END) "10<age<20"
,sum(CASE WHEN extract(year FROM age(crdate)) >= 20 THEN 1 END) "20<age"
FROM dt
GROUP BY dept
```
|
SQL query to group by age range from date created
|
[
"",
"sql",
"postgresql",
""
] |
This Code does not work:
```
SELECT
(
SELECT [T_Licence].[isInstalled]
FROM [T_Licence]
WHERE [T_Licence].[System] = [T_System].[ID]
AND [T_Licence].[Software] = 750
) AS [IsInstalled] ,*
FROM [T_System]
WHERE [IsInstalled] = 1
```
I have to do it this way, but it makes the whole code so complicated. I really don't want that:
```
SELECT
(
SELECT [T_Licence].[isInstalled]
FROM [wf_subj_all].[T_Licence]
WHERE [T_Licence].[System] = [T_System].[ID]
AND [T_Licence].[Software] = 750
) AS [IsInstalled] ,*
FROM [wf_subj_it].[T_System]
WHERE
(
SELECT
(
SELECT [T_Licence].[isInstalled]
FROM [wf_subj_all].[T_Licence]
WHERE [T_Licence].[System] = [T_System].[ID]
AND [T_Licence].[Software] = 750
)
) = 1
```
Is there any way to do it as shown in the first code snippet, so that the code stays readable?
Thanks very much.
|
Try this one -
```
SELECT *
FROM wf_subj_it.T_System s
CROSS APPLY (
SELECT /*TOP(1)*/ t.isInstalled
FROM wf_subj_all.T_Licence t
WHERE t.[System] = s.ID
AND t.Software = 750
) t
WHERE t.isInstalled = 1
```
|
Just wrap the query with an outer select and it should work.
```
SELECT *
FROM
(
SELECT
(
SELECT [T_Licence].[isInstalled]
FROM [T_Licence]
WHERE [T_Licence].[System] = [T_System].[ID]
AND [T_Licence].[Software] = 750
) AS [IsInstalled], *
FROM [T_System]
) As tbl1
WHERE [IsInstalled] = 1
```
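The derived-table wrap is portable; here it is as a runnable sketch (SQLite via Python's `sqlite3`, with hypothetical mini versions of the two tables and no schema prefixes, since SQLite has neither schemas nor `CROSS APPLY`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_system  (id INTEGER);
CREATE TABLE t_licence (system INTEGER, software INTEGER, isinstalled INTEGER);
INSERT INTO t_system  VALUES (1), (2);
INSERT INTO t_licence VALUES (1, 750, 1), (2, 750, 0);
""")

# The alias is defined inside the derived table, so the outer WHERE may use it.
rows = con.execute("""
SELECT *
FROM (
  SELECT (SELECT l.isinstalled
          FROM t_licence l
          WHERE l.system = s.id AND l.software = 750) AS isinstalled,
         s.id
  FROM t_system s
) sub
WHERE isinstalled = 1
""").fetchall()
print(rows)  # only system 1 has the software installed
```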
|
WHERE Clause does not accept just defined column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
First of all, I have 3 schemas as follows: MEDICINE, MEDICINE1, MEDICINE2.
**MEDICINE** and **MEDICINE1** have the same tables; **MEDICINE** has updated data and **MEDICINE1** has old data. **MEDICINE2** has no tables, so I want to create each table in **MEDICINE2** as a select of the rows from **MEDICINE** that are not in **MEDICINE1**.
Ex. `create MEDICINE2.table1 as ( select * from MEDICINE.table1 minus select * from MEDICINE1.table1 )`
So if table1 in schema MEDICINE has data (1,2,3,4,5,6) and MEDICINE1 has (1,2,3,4), then table1 will be created in MEDICINE2 with data (5,6).
Because I have many tables, I created this procedure:
```
CREATE OR REPLACE PROCEDURE SYSTEM.create_table_from_schema IS
TYPE own_array IS TABLE OF VARCHAR2(40)
INDEX BY binary_integer;
TYPE tab_array IS TABLE OF VARCHAR2(40)
INDEX BY binary_integer;
v_dml_str VARCHAR2 (400);
v_own_array own_array;
v_tab_array tab_array;
BEGIN
SELECT owner,table_name,BULK COLLECT
INTO v_own_array,v_tab_array
from SYS.all_tables
where global_stats='YES'
And owner = 'MEDICINE';
FOR i IN v_tab_array.first..v_tab_array.last LOOP
v_dml_str := 'Create table MEDICINE2.'
||v_tab_array(i)||'as (select * from '||v_own_array(i)||'.'|| v_tab_array(i)
||' minus select * from MEDICINE1.'|| v_tab_array(i)||' )' ;
EXECUTE IMMEDIATE v_dml_str;
END LOOP;
END;
/
```
but the following error shows:
> PROCEDURE SYSTEM.CREATE\_TABLE\_FROM\_SCHEMA On line: 11 PL/SQL:
> ORA-00904: "BULK": invalid identifier
**Are there any errors in the procedure?**
Thanks and regards,
|
This works fine:
```
SET SERVEROUTPUT ON;
Declare
v_dml_str VARCHAR2 (400);
cursor c1 is
SELECT owner,table_name
from SYS.all_tables
where global_stats='YES'
and owner = 'MEDICINE';
BEGIN
FOR i in c1
LOOP
v_dml_str := 'Create table MEDICINE2.'
||i.table_name||' as (select * from '||i.owner||'.'|| i.table_name
||' minus select * from MEDICINE1.'|| i.table_name||' )' ;
EXECUTE IMMEDIATE v_dml_str;
END LOOP;
END;
/
```
|
Use the following code. It will execute without any errors
```
CREATE OR REPLACE PROCEDURE SYSTEM.create_table_from_schema IS
v_dml_str VARCHAR2 (400);
v_own_array VARCHAR2(128);
v_tab_array VARCHAR2(128);
cursor c1 is
SELECT owner,table_name
from SYS.all_tables
where global_stats='YES'
And owner = 'MEDICINE';
BEGIN
FOR i in c1
LOOP
v_dml_str := 'Create table MEDICINE2.'
||i.table_name||' as (select * from '||i.owner||'.'|| i.table_name
||' minus select * from MEDICINE1.'|| i.table_name||' )' ;
EXECUTE IMMEDIATE v_dml_str;
END LOOP;
END;
/
```
|
Create table as select statement using procedure
|
[
"",
"sql",
"oracle",
"plsql",
"procedure",
""
] |
I have an SQL table with a structure similar to the following:
```
Name Value
(varchar) (bit)
__________________
Val1 1
Val2 1
Val3 0
Val4 0
Val1 0
Val2 0
Val3 0
Val4 1
```
So basically, I have two instances of the same row but with a different bit value. I want to retrieve distinct rows from this table, with the bit value being OR'ed. So for the given example, the result would look like:
```
Name Value
```
---
```
Val1 1 (1 | 0)
Val2 1 (1 | 0)
Val3 0 (0 | 0)
Val4 1 (0 | 1)
```
Can this be achieved?
I am working in Microsoft SQL Server 2012. Any kind of help is appreciated. Please do let me know if any further clarification is required.
|
Observing that "bitwise OR" has the same function as the aggregate function `MAX` (assuming all inputs are either 0 or 1, if any input is 1, the result is 1, otherwise 0) and accounting for the fact that we can't directly aggregate `bit`, this seems to work:
```
declare @t table (Name varchar(17) not null,Value bit not null)
insert into @t(Name,Value) values
('Val1',1),
('Val2',1),
('Val3',0),
('Val4',0),
('Val1',0),
('Val2',0),
('Val3',0),
('Val4',1)
select Name,CONVERT(bit,MAX(CONVERT(int,Value))) as Value
from @t group by Name
```
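A runnable sketch of the MAX-as-bitwise-OR idea (SQLite via Python's `sqlite3`, where the 0/1 values aggregate directly, so no `CONVERT` dance is needed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (name TEXT, value INTEGER);
INSERT INTO t VALUES
  ('Val1',1), ('Val2',1), ('Val3',0), ('Val4',0),
  ('Val1',0), ('Val2',0), ('Val3',0), ('Val4',1);
""")

# For 0/1 inputs, MAX per group is exactly a bitwise OR across the group.
rows = con.execute(
    "SELECT name, MAX(value) FROM t GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # [('Val1', 1), ('Val2', 1), ('Val3', 0), ('Val4', 1)]
```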
|
You can try this, assuming `value` is a numeric field with values 0 and 1:
```
SELECT Name, MAX(Value)
FROM [table]
GROUP BY Name
```
|
How to perform bitwise OR operation between two rows in same SQL table?
|
[
"",
"sql",
"sql-server",
"bitwise-operators",
""
] |
I ran the following query on a previous year's data and it took 3 hours; this year it took 13 days, and I don't know why. Any help would be much appreciated.
I have just tested the queries on the old SQL Server and they complete in 3 hours. Therefore the problem must have something to do with the new SQL Server instance I created. Do you have any ideas what the problem might be?
The query:
```
USE [ABCJan]
CREATE INDEX Link_Oct ON ABCJan2014 (Link_ref)
GO
CREATE INDEX Day_Oct ON ABCJan2014 (date_1)
GO
UPDATE ABCJan2014
SET ABCJan2014.link_id = LT.link_id
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
UPDATE ABCJan2014
SET SumAvJT = ABCJan2014.av_jt * ABCJan2014.n
UPDATE ABCJan2014
SET ABCJan2014.DayType = LT2.DayType
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
```
With the following data structures:
ABCJan2014 (70 million rows - NO UNIQUE IDENTIFIER - Link\_ref & date\_1 together are unique)
```
Link_ID nvarchar (17)
Link_ref int
Date_1 smalldatetime
N int
Av_jt int
SumAvJT decimal(38,14)
DayType nvarchar (50)
```
LookUp\_ABC\_20142015
```
Link_ID nvarchar (17) PRIMARY KEY
Link_ref int INDEXED
Link_metres int
```
ABC\_20142015\_days
```
Date1 smalldatetime PRIMARY KEY & INDEXED
DayType nvarchar(50)
```
EXECUTION PLAN
[](https://i.stack.imgur.com/GJFAJ.png)
It appears to be this part of the query that is taking such a long time.
Thanks again for any help, I'm pulling my hair out.
|
If you look at the execution plan, the time is in the actual update.
Look at the log file:
- Is the log file on a fast disk?
- Is the log file on the same physical disk?
- Is the log file required to grow?
- Size the log file to roughly 1/2 the size of the data file.
As far as indexes, test and tune with this; if the join columns are indexed there's not much to do here:
```
select count(*)
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
select count(*)
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
```
Start with a `top (1000)` to get update tuning working.
For grins, please give this a try and post the query plan
(do NOT add an index to ABCJan2014 link\_id):
```
UPDATE top (1000) ABCJan2014
SET MT.link_id = LT.link_id
FROM ABCJan2014 MT
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
AND MT.link_id <> LT.link_id
```
If LookUp\_ABC\_20142015 is not active then add a nolock
```
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT with (nolock)
```
nvarchar(17) for a PK seems strange to me.
Why `n`? Do you really have some Unicode?
Why not just char(17) and let it allocate the space?
|
Create Index on ABCJan2014 table as it is currently a heap
|
SQL Server Update query very slow
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like so

I want to get the first date (column "dt") of the final "id" sequence.
According to this example the final "id" sequence is 6
I Want to get the date of the first appearance **"2015-01-04"**
|
This approach utilizes two ROW\_NUMBERs; both increment identically for the first *n* rows (in descending date order) that share the same `id`:
```
SELECT MIN(id), MIN(dt)
FROM
(
SELECT
tab.*,
ROW_NUMBER () OVER (ORDER BY dt DESC) AS rn,
ROW_NUMBER () OVER (PARTITION BY id ORDER BY dt DESC) AS rn2
FROM tab
) AS dt
WHERE RN = RN2
```
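A runnable sketch of the double-ROW_NUMBER trick (SQLite >= 3.25 for window functions), on made-up data whose final run is id 6 starting 2015-01-04; note the earlier, non-final id-6 row is correctly excluded:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tab (id INTEGER, dt TEXT);
INSERT INTO tab VALUES
  (3,'2015-01-01'), (6,'2015-01-02'), (3,'2015-01-03'),
  (6,'2015-01-04'), (6,'2015-01-05'), (6,'2015-01-06');
""")

# The two row numbers agree only on the trailing run of the same id.
row = con.execute("""
SELECT MIN(id), MIN(dt)
FROM (
  SELECT tab.*,
         ROW_NUMBER() OVER (ORDER BY dt DESC)                 AS rn,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY dt DESC) AS rn2
  FROM tab
) s
WHERE rn = rn2
""").fetchone()
print(row)  # (6, '2015-01-04')
```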
|
This is a bit of a pain. One method using standard SQL:
```
select min(t.date)
from t cross join
(select top 1 id
from t
order by date desc
) tt
where not exists (select 1
from t t2
where t2.date >= t.date and t2.id <> tt.id
);
```
Another method uses `row_number()` and an observation: if we enumerate the rows by date in descending order, and also enumerate them by date in descending order partitioned by `id`, then the two values are equal exactly for the last sequence of ids. We can use this as a condition to get the minimum date:
```
select min(date)
from (select t.*,
row_number() over (order by date desc) as seqnum,
row_number() over (partition by id order by date desc) as seqnum_id
from t
) t
where seqnum = seqnum_id;
```
|
Get the first line of the final sequence
|
[
"",
"sql",
"sql-server",
""
] |