Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I could not work out the exact query to get the overall totals. I want to get the total for each date in the call\_time column. Here's my query:
```
SELECT call_type, channel, call_time,
count (CASE WHEN upper(status) = upper('no answer') THEN 1 ELSE NULL END) AS cnt_no_answer,
count (CASE WHEN upper(status) = upper('answered') THEN 1 ELSE NULL END) AS cnt_answer,
count (status) AS cnt_total
FROM app_account.cc_call
WHERE channel = 'DAHDI/i1/'
AND call_time BETWEEN ('30-DEC-2013') AND ('04-JAN-2014')
GROUP BY call_type, channel, call_time;
```
Some output of that query:
```
CALL_TYPE CHANNEL CALL_TIME CNT_NO_ANSWER CNT_ANSWERED CNT_TOTAL
LANDLINE DAHDI/i1/ 03-JAN-14 1 0 1
MOBILE-SUN DAHDI/i1/ 03-JAN-14 0 1 1
MOBILE-SUN DAHDI/i1/ 03-JAN-14 1 0 1
LANDLINE DAHDI/i1/ 03-JAN-14 1 0 1
LANDLINE DAHDI/i1/ 03-JAN-14 1 0 1
LANDLINE DAHDI/i1/ 03-JAN-14 1 0 1
MOBILE-SUN DAHDI/i1/ 02-JAN-14 1 0 1
MOBILE-SUN DAHDI/i1/ 02-JAN-14 0 1 1
LANDLINE DAHDI/i1/ 02-JAN-14 0 1 1
LANDLINE DAHDI/i1/ 02-JAN-14 1 0 1
MOBILE-SMART DAHDI/i1/ 02-JAN-14 1 0 1
```
My expected output:
```
CALL_TIME CNT_NO_ANSWER CNT_ANSWERED
03-JAN-14 27 10
02-JAN-14 48 20
```
Please help me.
Thank you! | Try this:
```
SELECT CALL_TYPE, CHANNEL, TRUNC(CALL_TIME)
,COUNT (CASE WHEN UPPER(STATUS) = UPPER('no answer') THEN 1 END) AS CNT_NO_ANSWER
,COUNT (CASE WHEN UPPER(STATUS) = UPPER('answered') THEN 1 END) AS CNT_ANSWER
,COUNT (STATUS) AS CNT_TOTAL
FROM APP_ACCOUNT.CC_CALL
WHERE CHANNEL = 'DAHDI/i1/'
AND CALL_TIME BETWEEN ('30-DEC-2013') AND ('04-JAN-2014')
GROUP BY CALL_TYPE, CHANNEL, TRUNC(CALL_TIME);
```
If `CALL_TIME` contains a time component and you want to `GROUP BY` each date, you should `TRUNC` the `CALL_TIME` down to its date. | Use something like the following:
```
SELECT CALL_TYPE, CHANNEL, TRUNC(CALL_TIME)
, COUNT (CASE UPPER(STATUS)
WHEN UPPER('no answer') THEN 1
ELSE NULL
END) AS CNT_NO_ANSWER
, COUNT (CASE UPPER(STATUS)
WHEN UPPER('answered') THEN 1
ELSE NULL
END) AS CNT_ANSWER
, COUNT (STATUS) AS CNT_TOTAL
FROM APP_ACCOUNT.CC_CALL
WHERE CHANNEL = 'DAHDI/i1/'
AND CALL_TIME BETWEEN TO_DATE('30-DEC-2013')
AND TO_DATE('04-JAN-2014')
GROUP BY CALL_TYPE, CHANNEL, TRUNC(CALL_TIME);
```
The major change I have made is `TRUNC(CALL_TIME)`. Oracle stores dates as datetime values, which carry a time component as well as the date. Hence, when you use `GROUP BY ..., CALL_TIME, ...`, the grouping is done on the datetime values, not the date values. Only calls made at exactly the same time, accurate to a fraction of a second, will be grouped together, which is not the expected behavior. Hence use `GROUP BY TRUNC(CALL_TIME)` when you have to show the grouping by day.
**EDIT:**
To get the overall total for each day, you have already used `COUNT(STATUS) AS CNT_TOTAL` in your query! It gives you the total number of calls provided the column is NOT NULL and a status is recorded for each call. If the column can contain null values, I would suggest you use `COUNT(*) AS CNT_TOTAL`, as it counts all rows without regard to column constraints.
As far as the "for each day" part goes, the `TRUNC(datetime)` function can truncate datetime values from the year down to the minute. This means that if you want the number of calls, or any other statistic, per year, you can simply use `TRUNC(call_time, 'YYYY')`. On the other hand, if you want call statistics per hour, you can use `TRUNC(call_time, 'HH')` or `TRUNC(call_time, 'HH24')`. The same goes for minutes.
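To see concretely why grouping on raw datetime values splits one day into many rows, here is a small runnable sketch using Python's sqlite3, where SQLite's `date()` stands in for Oracle's `TRUNC` (the table and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cc_call (call_time TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO cc_call VALUES (?, ?)",
    [
        ("2014-01-02 09:15:00", "no answer"),
        ("2014-01-02 11:30:00", "answered"),
        ("2014-01-03 08:00:00", "no answer"),
    ],
)

# Grouping on the raw datetime yields one group per distinct timestamp...
raw = conn.execute(
    "SELECT call_time, COUNT(*) FROM cc_call GROUP BY call_time"
).fetchall()
print(len(raw))  # 3

# ...while truncating to the date collapses each day into a single row.
by_day = conn.execute(
    "SELECT date(call_time) AS d, "
    "COUNT(CASE WHEN status = 'no answer' THEN 1 END) AS cnt_no_answer, "
    "COUNT(CASE WHEN status = 'answered' THEN 1 END) AS cnt_answer "
    "FROM cc_call GROUP BY d ORDER BY d"
).fetchall()
print(by_day)  # [('2014-01-02', 1, 1), ('2014-01-03', 1, 0)]
```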
But beware: unless you use the `TO_CHAR` function to display dates, front-end tools like Toad or SQL Developer display datetime values in the DD-MON-YYYY format, discarding the time information. This is what tripped you up in the first place. Hence, if you group by truncating datetimes to an hour or a minute, even though the results are correct, you will see repeated dates in DD-MON-YYYY format. Don't let that confuse you.
For further reading on `TRUNC`, I would suggest [Oracle Docs](http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions209.htm#SQLRF06151) and [this link to techonthenet.com](http://www.techonthenet.com/oracle/functions/trunc_date.php). For `TO_CHAR`, [Oracle Docs here](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions180.htm) has a detailed and easy-to-understand explanation. | How can I get the total? | [
"",
"sql",
"oracle",
"count",
"sum",
"oracle-sqldeveloper",
""
] |
In Oracle database I have one table with primary key GAME\_ID.
I have to insert a copy of a row where game\_name = 'Texas holdem' but it tells me:
> An UPDATE or INSERT statement attempted to insert a duplicate key.
This is query I am using:
```
INSERT INTO GAME (SELECT * FROM GAME WHERE NAME = 'Texas Holdem');
``` | Assuming your `game_id` is generated by a sequence, you can get a new one as part of the SELECT statement:
```
INSERT INTO GAME (game_id, name, col_3)
SELECT seq_game_id.nextval, name, col_3
FROM GAME
WHERE NAME = 'Texas Holdem';
``` | Let me just offer a slightly more abstract point of perspective...
* In a relational database, table is a physical representation of the mathematical concept of relation.
* Relation is a **set** (of tuples, i.e. table rows/records).
* A set either contains given element or it doesn't, it cannot contain the same element multiple times (unlike multiset).
* Therefore, you can never have two identical rows in the table, and still call your database "relational".1
You can insert a *similar* row, though, as other answers have demonstrated.
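A minimal runnable sketch of the fresh-key copy (using Python's sqlite3 and an autoincrementing INTEGER PRIMARY KEY where Oracle would use `seq_game_id.nextval`; the column list is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE game (game_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)
conn.execute("INSERT INTO game (name) VALUES ('Texas Holdem')")

# Copying the row verbatim would reuse game_id and violate the primary key;
# supplying a fresh key (NULL triggers autoincrement here) makes the copy legal.
conn.execute(
    "INSERT INTO game (game_id, name) "
    "SELECT NULL, name FROM game WHERE name = 'Texas Holdem'"
)
rows = conn.execute("SELECT game_id, name FROM game ORDER BY game_id").fetchall()
print(rows)  # [(1, 'Texas Holdem'), (2, 'Texas Holdem')]
```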
---
*1 Although practical DBMS (including Oracle) will typically allow you to create a table without any key, making identical duplicates physically possible. However, consider it a big red flag if you ever find yourself doing that.* | Oracle Database. Insert a copy of a row to the same table (Duplicate key error message) | [
"",
"sql",
"oracle",
""
] |
I am trying to perform the following query:
```
SELECT * FROM table_name_here WHERE Date LIKE %Jan % 2014%
```
Now, the table name is different and hidden here, but it just won't go through. It says there is an error in my syntax around `%Jan % 2014%`
I can get this to work, so I know the connection works: `SELECT * FROM table_name_here`
So the problem lies with the `WHERE` and `LIKE` part.
I also tried to perform this on my hosting site's DB management tool:
```
SELECT *
FROM `table_name`
WHERE `Date` LIKE '%Jan % 2014%'
```
and that one works | You have two syntax errors: first, `Date` is a reserved word, so it needs to be wrapped in backticks; second, you need quotes around your string, like so:
```
SELECT * FROM table_name_here WHERE `Date` LIKE "%Jan % 2014%"
``` | Assuming that `date` is being stored as a date/datetime column, don't use `like` on it. `like` implicitly converts the value to a string, using some local format.
Instead, be explicit:
```
where month(`date`) = 1 and year(`date`) = 2014
```
or
```
where date_format(`date`, '%Y-%m') = '2014-01'
```
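The date-part comparison above can be exercised with Python's sqlite3, where `strftime` plays the role of MySQL's `month()`/`year()`/`date_format()` (sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (d TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("2014-01-15",), ("2014-02-01",), ("2013-01-20",)],
)

# Compare on extracted date parts rather than using LIKE on a formatted string.
jan_2014 = conn.execute(
    "SELECT d FROM t WHERE strftime('%Y-%m', d) = '2014-01'"
).fetchall()
print(jan_2014)  # [('2014-01-15',)]
```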
As for your original question, you discovered that quotes are important around string constants. I would recommend using single quotes (as opposed to double quotes), because single quotes are the ANSI standard string delimiter. | What is the proper syntax for LIKE? | [
"",
"mysql",
"sql",
""
] |
I'm trying to execute a query to return orders where they only have 1 cross reference number.
Something like this (field names and tables changed to protect the innocent ;-P) :
```
SELECT ordernum FROM orders WHERE (COUNT(orderref) = 1) ORDER BY ordernum;
```
The problem is, having an aggregate function in the WHERE clause is not possible in Access (I'm not sure whether it's allowed in standard SQL).
How can I achieve this using Access SQL? | The count(\*) has to be in the HAVING clause since it is calculated. Also, you are missing a GROUP BY clause.
```
-- Updated statement
SELECT ordernum, COUNT(orderref) as Total
FROM orders
GROUP BY ordernum
HAVING COUNT(orderref) = 1
ORDER BY ordernum
```
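Here is a runnable sketch of the GROUP BY/HAVING pattern above (Python's sqlite3 standing in for Access; table and rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernum INTEGER, orderref TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "a"), (1, "b"), (2, "c"), (3, "d")],
)

# Orders with exactly one cross reference: filter the aggregate in HAVING.
single_ref = conn.execute(
    "SELECT ordernum, COUNT(orderref) AS total FROM orders "
    "GROUP BY ordernum HAVING COUNT(orderref) = 1 ORDER BY ordernum"
).fetchall()
print(single_ref)  # [(2, 1), (3, 1)]
```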
Someone emailed me stating that MS Access does not support the HAVING clause. That is news to me. A long time ago I was MOS ACCESS certified.
Let's use the Northwind database for MS Access 2007. I changed the syntax since the column names are different; however, the results are the same.
(Screenshots of the query and its results omitted.) | I'm not sure if it works in Access, but try something like this:
```
SELECT ordernum FROM orders group by orderref having count(*) = 1 ORDER BY ordernum;
``` | SQL Access COUNT in WHERE clause | [
"",
"sql",
"ms-access",
""
] |
I have an SQL query that returns a column like this:
```
foo
-----------
1200
1200
1201
1200
1200
1202
1202
1202
```
It has already been ordered in a specific way, and I would like to perform another query on this result set to ID the repeated data like this:
```
foo ID
---- ----
1200 1
1200 1
1201 2
1200 3
1200 3
1202 4
1202 4
1202 4
```
It's important that the second group of 1200 is identified as separate from the first. Every variation of OVER/PARTITION seems to want to lump both groups together. Is there a way to window the partition to only these repeated groups?
Edit:
This is for Microsoft SQL Server 2012 | This is my solution using a cursor and a temporary table to hold the results.
```
DECLARE @foo INT
DECLARE @previousfoo INT = -1
DECLARE @id INT = 0
DECLARE @getid CURSOR
DECLARE @resultstable TABLE
(
primaryId INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
foo INT,
id int null
)
SET @getid = CURSOR FOR
SELECT originaltable.foo
FROM originaltable
OPEN @getid
FETCH NEXT
FROM @getid INTO @foo
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@foo <> @previousfoo)
BEGIN
SET @id = @id + 1
END
INSERT INTO @resultstable (foo, id) VALUES (@foo, @id)
SET @previousfoo = @foo
FETCH NEXT
FROM @getid INTO @foo
END
CLOSE @getid
DEALLOCATE @getid
``` | Not sure this will be the fastest results...
```
select main.num, main.id from
(select x.num,row_number()
over (order by (select 0)) as id
from (select distinct num from num) x) main
join
(select num, row_number() over(order by (select 0)) as ordering
from num) x2 on
x2.num=main.num
order by x2.ordering
```
Assuming the table "num" has a column "num" that contains your data in the given order; of course, "num" could be made into a view or a "with" clause wrapping your original query.
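The run-numbering being asked for is easy to sanity-check outside the database; this Python sketch makes the intended behavior explicit: a new id starts whenever the value differs from the previous row.

```python
from itertools import groupby

foo = [1200, 1200, 1201, 1200, 1200, 1202, 1202, 1202]

# Number consecutive runs: each change of value starts a new group id,
# so the second run of 1200 gets a different id from the first.
ids = []
for gid, (_, run) in enumerate(groupby(foo), start=1):
    ids.extend(gid for _ in run)

print(list(zip(foo, ids)))
# [(1200, 1), (1200, 1), (1201, 2), (1200, 3), (1200, 3),
#  (1202, 4), (1202, 4), (1202, 4)]
```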
Please see the following [sqlfiddle](http://sqlfiddle.com/#!6/ab579/16) | Identifying repeated fields in SQL query | [
"",
"sql",
"sql-server-2012",
""
] |
This sql code throws an
> aggregate functions are not allowed in WHERE
```
SELECT o.ID , count(p.CAT)
FROM Orders o
INNER JOIN Products p ON o.P_ID = p.P_ID
WHERE count(p.CAT) > 3
GROUP BY o.ID;
```
How can I avoid this error? | Replace `WHERE` clause with `HAVING`, like this:
```
SELECT o.ID , count(p.CAT)
FROM Orders o
INNER JOIN Products p ON o.P_ID = p.P_ID
GROUP BY o.ID
HAVING count(p.CAT) > 3;
```
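A runnable sketch of the same point, using Python's sqlite3 with a single invented table in place of the join: the engine rejects the aggregate in `WHERE` but accepts it in `HAVING`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (order_id INTEGER, cat TEXT)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [(1, "a"), (1, "b"), (1, "c"), (1, "d"), (2, "a")],
)

# WHERE is evaluated per row, before any grouping, so an aggregate there fails.
try:
    conn.execute("SELECT order_id FROM products WHERE COUNT(cat) > 3")
    where_ok = True
except sqlite3.OperationalError:
    where_ok = False  # sqlite reports misuse of an aggregate function

# HAVING filters after GROUP BY, where the aggregate is defined.
rows = conn.execute(
    "SELECT order_id, COUNT(cat) FROM products "
    "GROUP BY order_id HAVING COUNT(cat) > 3"
).fetchall()
print(where_ok, rows)  # False [(1, 4)]
```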
`HAVING` is similar to `WHERE`, that is both are used to filter the resulting records but `HAVING` is used to filter on aggregated data (when `GROUP BY` is used). | Use `HAVING` clause instead of `WHERE`
Try this:
```
SELECT o.ID, COUNT(p.CAT) cnt
FROM Orders o
INNER JOIN Products p ON o.P_ID = p.P_ID
GROUP BY o.ID HAVING cnt > 3
``` | How to avoid error "aggregate functions are not allowed in WHERE" | [
"",
"mysql",
"sql",
"aggregate-functions",
""
] |
Given the data below from the two tables cases and acct\_transaction, how can I include just the acct\_transaction.create\_date of the largest acct\_transaction amount whilst also calculating the sum of all amounts and the value of the largest amount? Platform is t-sql.
```
id amount create_date
---|----------|------------|
1 | 1.99 | 01/09/2009 |
1 | 2.99 | 01/13/2009 |
1 | 578.23 | 11/03/2007 |
1 | 64.57 | 03/03/2008 |
1 | 3.99 | 12/12/2012 |
1 | 31337.00 | 04/18/2009 |
1 | 123.45 | 05/12/2008 |
1 | 987.65 | 10/10/2010 |
```
Result set should look like this:
```
id amount create_date sum max_amount max_amount_date
---|----------|------------|----------|-----------|-----------
1 | 1.99 | 01/09/2009 | 33099.87 | 31337.00 | 04/18/2009
1 | 2.99 | 01/13/2009 | 33099.87 | 31337.00 | 04/18/2009
1 | 578.23 | 11/03/2007 | 33099.87 | 31337.00 | 04/18/2009
1 | 64.57 | 03/03/2008 | 33099.87 | 31337.00 | 04/18/2009
1 | 3.99 | 12/12/2012 | 33099.87 | 31337.00 | 04/18/2009
1 | 31337.00 | 04/18/2009 | 33099.87 | 31337.00 | 04/18/2009
1 | 123.45 | 05/12/2008 | 33099.87 | 31337.00 | 04/18/2009
1 | 987.65 | 10/10/2010 | 33099.87 | 31337.00 | 04/18/2009
```
This is what I have so far, I just don't know how to pull the date of the largest acct\_transaction amount for max\_amount\_date column.
```
SELECT cases.id, acct_transaction.amount, acct_transaction.create_date AS 'create_date', SUM(acct_transaction.amount) OVER () AS 'sum', MIN(acct_transaction.amount) OVER () AS 'max_amount'
FROM cases INNER JOIN
acct_transaction ON cases.id = acct_transaction.id
WHERE (cases.id = '1')
``` | Thanks, that got me on the right track to this which is working:
```
,CAST((SELECT TOP 1 t2.create_date from acct_transaction t2
WHERE t2.case_sk = act.case_sk AND (t2.trans_type = 'F')
order by t2.amount, t2.create_date DESC) AS date) AS 'max_date'
```
It won't let me upvote because I have less than 15 rep :( | ```
;WITH x AS
(
SELECT c.id, t.amount, t.create_date,
s = SUM(t.amount) OVER(),
m = MAX(t.amount) OVER(),
rn = ROW_NUMBER() OVER(ORDER BY t.amount DESC)
FROM dbo.cases AS c
INNER JOIN dbo.acct_transaction AS t
ON c.id = t.id
)
SELECT x.id, x.amount, x.create_date,
[sum] = y.s,
max_amount = y.m,
max_amount_date = y.create_date
FROM x CROSS JOIN x AS y WHERE y.rn = 1;
``` | Choose column based on max() of another column | [
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"max",
""
] |
I have two tables - `artist and album`
columns in `artist - id, name, artist_genre`
columns in `album - id, name, artist_name, album_genre, release_date`
I would like to find all artists in the album table with an album genre that is not among the genres listed for them in the artist table (to be more specific: if artist X has genres 'pop' and 'rock' registered but produces an album Y with genre 'classic', then the artist, album and genre should be listed).
select artist_name, name, album_genre
from album left join artist on artist.name = album.artist_name and album_genre = artist_genre
where artist.id is null
``` | Try this :
```
select artist_name, name, album_genre
from album alb
where album_genre not in (select distinct artist_genre
from artist
where name = alb.artist_name)
```
**[SQL Fiddle Demo](http://sqlfiddle.com/#!2/4edd3/1)** | Sql - data reference does not exist when comparing two tables | [
"",
"mysql",
"sql",
""
] |
I'm creating a new trigger and want to validate :new.SCO\_NUMBER against both a null value and the literal string 'NULL'. I'm getting an error when I use both (as shown below), but when I use ':new.SCO\_NUMBER IS NULL' alone, it works fine. How do I use OR in this validation?
```
CREATE OR REPLACE TRIGGER TRIG_SCONUMBER_INSERT AFTER
INSERT ON S_SYN_EAI_SCO_IN FOR EACH row DECLARE XYZ BEGIN XYZ
SELECT XYZ
FROM xyz
WHERE xyz IF inserting
AND :new.SCO_NUMBER IS (NULL
OR 'NULL') THEN varError_Msg := 'SCO Number cannot be NULL in';
varError_id := 1;
varSucceeded := 'N' ;
varErrorExists :=1;
END IF;
``` | In case of T-SQL,
replace `new.SCO_NUMBER IS (NULL OR 'NULL')` with `new.SCO_NUMBER IS NULL OR new.SCO_NUMBER = 'NULL'`. This should work for you. | Try this
```
IF inserting
AND ( :new.SCO_NUMBER IS NULL
OR :new.SCO_NUMBER = 'NULL') THEN
``` | Inserting null and string 'Null in trigger condition | [
"",
"mysql",
"sql",
"oracle-sqldeveloper",
""
] |
Here's my data:
```
with first_three as
(
select 'AAAA' as code from dual union all
select 'BBBA' as code from dual union all
select 'BBBB' as code from dual union all
select 'BBBC' as code from dual union all
select 'CCCC' as code from dual union all
select 'CCCD' as code from dual union all
select 'FFFF' as code from dual union all
select 'GFFF' as code from dual )
select substr(code,1,3) as r1
from first_three
group by substr(code,1,3)
having count(*) >1
```
This query returns the prefixes that meet the criteria. Now, how do I select from this to get the desired results? Or is there another way?
**Desired Results**
```
BBBA
BBBB
BBBC
CCCC
CCCD
``` | ```
WITH code_frequency AS (
SELECT code,
COUNT(1) OVER ( PARTITION BY SUBSTR( code, 1, 3 ) ) AS frequency
FROM table_name
)
SELECT code
FROM code_frequency
WHERE frequency > 1
``` | ```
WITH first_three AS (
...
)
SELECT *
FROM first_three f1
WHERE EXISTS (
SELECT 1 FROM first_three f2
WHERE f1.code != f2.code
AND substr(f1.code, 1, 3) = substr(f2.code, 1, 3)
)
``` | How to get query to return rows where first three characters of one row match another row? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I want to find all foreign keys in my database that reference to a primary key of a certain table.
For example, I have a column `A` in table `T` which is the primary key. Now I want to find in which tables column `A` is referenced in a foreign key constraint?
One simple way I've considered is to check the database diagram, but this only works if a database is very small. It's not a very good solution for a database that has more than 50 tables.
Any alternatives? | Look at [How to find foreign key dependencies in SQL Server?](https://stackoverflow.com/questions/925738/how-to-find-foreign-key-dependencies-in-sql-server)
You can sort on PK\_Table and PK\_Column to get what you want | On the last line, change [Primary Key Table] to your table name, change [Primary Key Column] to your column name, and execute this script on your database to get the foreign keys for the primary key.
```
SELECT FK.TABLE_NAME as Key_Table,CU.COLUMN_NAME as Foreignkey_Column,
PK.TABLE_NAME as Primarykey_Table,
PT.COLUMN_NAME as Primarykey_Column,
C.CONSTRAINT_NAME as Constraint_Name
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME =Fk.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON C.UNIQUE_CONSTRAINT_NAME=PK.CONSTRAINT_NAME
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
INNER JOIN (
SELECT i1.TABLE_NAME, i2.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON i1.CONSTRAINT_NAME =i2.CONSTRAINT_NAME
WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
) PT ON PT.TABLE_NAME = PK.TABLE_NAME
WHERE PK.TABLE_NAME = '[Primary Key Table]' and PT.COLUMN_NAME = '[Primary Key Column]';
``` | Find all foreign keys constraints in database referencing a certain primary key | [
"",
"sql",
"sql-server",
"foreign-keys",
"primary-key",
"foreign-key-relationship",
""
] |
I just want to drop all tables whose names match "T%".
The db is Netezza.
Does anyone know the sql to do this?
Regards, | With the catalog views and `execute immediate` it is fairly simple to write this in nzplsql. Be careful though, `call drop_like('%')` will destroy a database pretty fast.
```
create or replace procedure drop_like(varchar(128))
returns boolean
language nzplsql
as
begin_proc
declare
obj record;
expr alias for $1;
begin
for obj in select * from (
select 'TABLE' kind, tablename name from _v_table where tablename like upper(expr)
union all
select 'VIEW' kind, viewname name from _v_view where viewname like upper(expr)
union all
select 'SYNONYM' kind, synonym_name name from _v_synonym where synonym_name like upper(expr)
union all
select 'PROCEDURE' kind, proceduresignature name from _v_procedure where "PROCEDURE" like upper(expr)
) x
loop
execute immediate 'DROP '||obj.kind||' '||obj.name;
end loop;
end;
end_proc;
``` | You can create a script and then execute it. something like this...
```
DECLARE @SQL nvarchar(max)
SELECT @SQL = STUFF((SELECT CHAR(10)+ 'DROP TABLE ' + quotename(TABLE_SCHEMA) + '.' + quotename(TABLE_NAME)
+ CHAR(10) + 'GO '
FROM INFORMATION_SCHEMA.TABLES WHERE Table_Name LIKE 'T%'
FOR XML PATH('')),1,1,'')
PRINT @SQL
```
**Result**
```
DROP TABLE [dbo].[tTest2]
GO
DROP TABLE [dbo].[TEMPDOCS]
GO
DROP TABLE [dbo].[team]
GO
DROP TABLE [dbo].[tbInflowMaster]
GO
DROP TABLE [dbo].[TABLE_Name1]
GO
DROP TABLE [dbo].[Test_Table1]
GO
DROP TABLE [dbo].[tbl]
GO
DROP TABLE [dbo].[T]
GO
``` | Drop multiple tables in Netezza | [
"",
"sql",
"netezza",
""
] |
```
Mentor table
+--------------+
| id | name |
+-----+--------+
| 1 | name1 |
| 2 | name2 |
| 3 | name3 |
+-----+--------+
MentorLanguage table
+------------------+
| id | language |
+-----+------------+
| 1 | english |
| 1 | french |
| 1 | german |
| 2 | chinese |
| 2 | english |
| 3 | russian |
| 3 | german |
| 3 | greek |
+-----+------------+
Student table
+--------------+
| id | name |
+-----+--------+
| A | name1 |
| B | name2 |
| C | name3 |
+-----+--------+
StudentLanguage table
+------------------+
| id | language |
+-----+------------+
| A | english |
| A | french |
| B | chinese |
| B | german |
| C | russian |
| C | spanish |
| C | greek |
+-----+------------+
```
I want to match `mentor` with `student` based on the `language`, such that for example:
if `student A` knows `english` and `french`, he will be matched with all `mentors` that know at least `english` or `french`.
```
student A (english, french)
---------------------------------
mentor 1 (english, french, german);
mentor 2 (chinese, english);
```
I tried
```
select * from Mentor m
where m.id =
( select ml.id from MentorLanguage ml, StudentLanguage sl
where ml.language like sl.language
group by ml.id )
```
which doesn't work since the `Subquery returned more than 1 value`. | You could try using the "IN" operator instead of = in your where clause. This allows you to to do a "contains" instead of comparing with a single value.
```
select * from Mentor m
where m.id IN
( select ml.id from MentorLanguage ml, StudentLanguage sl
where ml.language like sl.language
group by ml.id )
``` | There are a ton of ways to do this. I guess it just depends on your preference and/or need in the result set. I've included two ways I would meet this request. Pretty simple. Let me know if you need additional returned results.
```
CREATE TABLE #Mentor ([id] INT Identity, [name] NVARCHAR(20))
GO
INSERT INTO #Mentor(name)
VALUES ('John Smith'),('Jack Smith'),('Jane Smith')
CREATE TABLE #MentorLanguage ([id] INT, [language] NVARCHAR(20))
GO
INSERT INTO #MentorLanguage([id],[language])
VALUES (1,'English'),(1,'French'),(1,'German')
,(2,'Chinese'),(2,'English'),(3,'Russian')
,(3,'German'),(3,'Greek')
CREATE TABLE #Student([id] NVARCHAR(2), [name] NVARCHAR(20))
GO
INSERT INTO #Student ([id],[name])
VALUES ('A','name1'),('B','name2'),('C','name3')
CREATE TABLE #StudentLanguage ([id] NVARCHAR(2),[language] NVARCHAR(20))
GO
INSERT INTO #StudentLanguage ([id],[language])
VALUES ('A','English'),('A','French'),('B','Chinese'),('B','German'),('C','Greek')
/* Inner Join to between #MentorLanguage and #StudentLanguage
would elimate rows where the mentor and student don't match */
SELECT *
FROM #Mentor m
INNER JOIN #MentorLanguage ml ON m.[id] = ml.id
INNER JOIN #StudentLanguage sl ON ml.[language] = sl.[language]
INNER JOIN #Student s ON s.id = sl.id
/* Agg Count of how many students each mentor could teach
based on the languages students know */
SELECT m.name, count(s.id) as [count]
FROM #Mentor m
INNER JOIN #MentorLanguage ml ON m.[id] = ml.id
INNER JOIN #StudentLanguage sl ON ml.[language] = sl.[language]
INNER JOIN #Student s ON s.id = sl.id
GROUP BY m.name
``` | select rows based on WHERE clause which return multiple rows | [
"",
"sql",
"sql-server",
"subquery",
""
] |
Is it possible to get results similar to the Oracle `DESCRIBE` command for a query? E.g. I have a join among several tables with a restriction of the columns that are returned, and I want to write that to a file. I later want to restore that value from a file into its own base table in another DBMS.
I could describe all of the tables individually and manually prune the columns, but I was hoping something like `DESC (select a,b from t1 join t2) as q` would work but it doesn't.
Creating a view isn't going to work if I don't have `create view` privileges, which I don't. Is there no way to describe a query result directly? | If you plan to re-use the query, it may make sense to create a view for it.
You can comment on a database view in the same way that you can for a table:
```
create view TEST_VIEW as select 'TEST' COL1 from dual;
comment on table TEST_VIEW IS 'TEST ONLY';
```
To find comments on a view, execute this:
```
select * from user_tab_comments where table_name='TEST_VIEW';
```
References:
[How to create a comment to an oracle database view](https://stackoverflow.com/questions/10602148/how-to-create-a-comment-to-an-oracle-database-view)
<http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:233014204543>
NOTE: This URL states that the SQLPLUS DESCRIBE command is only supposed to be used with a "table, view or synonym" or "function or procedure". This means that the target of DESCRIBE must be an existing database object.
<http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12019.htm>
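As a side note, database client APIs can describe an arbitrary query's result set without creating any server-side object. Python's DB-API exposes `cursor.description` for this (sketched here with sqlite3; Oracle drivers such as cx_Oracle expose the same attribute):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a INTEGER, b TEXT)")
conn.execute("CREATE TABLE t2 (a INTEGER, c TEXT)")

# Executing the query (even with zero rows returned) populates column metadata.
cur = conn.execute("SELECT t1.a, t1.b FROM t1 JOIN t2 ON t1.a = t2.a")
cols = [col[0] for col in cur.description]
print(cols)  # ['a', 'b']
```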
As an SQLPLUS command, DESCRIBE cannot dynamically parse an SQL statement. All the information returned by DESCRIBE is stored in the data dictionary. | If you have a query that represents a set of data that you want to extract from one database and load into a different database, it would seem eminently sensible to create a view in the source database for that query. Once you have that view, you can `describe` the view or otherwise extract the information you are looking for from the various data dictionary tables.
And I'm assuming that there is a solid reason to prefer a custom file-based solution for replicating data from one database to another over any of the technologies Oracle provides to handle data replication. Materialized views, Streams, GoldenGate, etc. would all generally be a much better solution than writing your own.
If you're not allowed to create objects on the source database, you cannot use the SQL\*Plus `describe` command. You could write an anonymous PL/SQL block that used the `dbms_sql` package to parse and describe a dynamic SQL statement. That's going to be quite a bit more complex than using the `describe` command and you'll have to figure out how you want to format the output. I'd use [this `describe_columns` example](http://www.morganslibrary.org/reference/pkgs/dbms_sql.html#dsql18) as a starting point. | Describing the schema of a query result in Oracle? | [
"",
"sql",
"oracle",
"ddl",
"describe",
""
] |
I have the following query in mysql:
```
SELECT title,
added_on
FROM title
```
The results look like this:
```
Somos Tão Jovens 2013-10-10 16:54:10
Moulin Rouge - Amor em Vermelho 2013-10-10 16:55:03
Rocky Horror Picture Show (Legendado) 2013-10-10 16:58:30
The X-Files: I Want to Believe 2013-10-10 22:39:11
```
I would like to get the count for the titles in each month, so the result would look like this:
```
Count Month
42 2013-10-01
20 3013-09-01
```
The closest I can think of to get this is:
```
SELECT Count(*),
Date(timestamp)
FROM title
GROUP BY Date(timestamp)
```
But this is only grouping by the day, and not the month. How would I group by the month here? | Could you try this?
```
select count(*), DATE_FORMAT(timestamp, "%Y-%m-01")
from title
group by DATE_FORMAT(timestamp, "%Y-%m-01")
```
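A quick check of the year-plus-month grouping (Python's sqlite3, with `strftime` in place of MySQL's `DATE_FORMAT`; sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE title (added_on TEXT)")
conn.executemany(
    "INSERT INTO title VALUES (?)",
    [("2013-10-10 16:54:10",), ("2013-10-11 09:00:00",), ("2013-09-01 12:00:00",)],
)

# Group on a year+month key so that the same month in different years stays separate.
rows = conn.execute(
    "SELECT strftime('%Y-%m-01', added_on) AS month, COUNT(*) "
    "FROM title GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2013-09-01', 1), ('2013-10-01', 2)]
```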
Please note that `MONTH()` can't differentiate between '2013-01-01' and '2014-01-01', as shown below.
```
mysql> SELECT MONTH('2013-01-01'), MONTH('2014-01-01');
+---------------------+---------------------+
| MONTH('2013-01-01') | MONTH('2014-01-01') |
+---------------------+---------------------+
| 1 | 1 |
+---------------------+---------------------+
``` | ```
SELECT Count(*),
Date(timestamp)
FROM title
GROUP BY Month(timestamp)
```
Edit: And obviously if it matters to display the 1st day of the month in the results, you do the `DATE_FORMAT(timestamp, "%Y-%m-01")` thing mentioned in some other answers :) | GROUP BY month on DATETIME field | [
"",
"mysql",
"sql",
""
] |
I am trying to get the lowest amount below an average in SQL.
I have amounts 2000, 2500, 3000. The average is 2500.
I want to build an SQL query to calculate the AVG and to extract the lowest amount from it.
SELECT AVG(Amount) FROM CONTRACT....
I can't figure out how to do the rest
Thanks | Assuming that you are concerned only about the amount field and that the data is not very huge you could try this out -
SELECT MIN(AMOUNT) FROM CONTRACT
WHERE AMOUNT <= (SELECT AVG(AMOUNT) FROM CONTRACT)
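A runnable sketch of this query with the question's numbers (Python's sqlite3; table name kept from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contract (amount REAL)")
conn.executemany("INSERT INTO contract VALUES (?)", [(2000,), (2500,), (3000,)])

# Smallest amount at or below the average (the average here is 2500).
low = conn.execute(
    "SELECT MIN(amount) FROM contract "
    "WHERE amount <= (SELECT AVG(amount) FROM contract)"
).fetchone()[0]
print(low)  # 2000.0
```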
But wouldn't the lowest amount below the average simply be the lowest amount of all? Something like this -
SELECT MIN(AMOUNT) FROM CONTRACT | I think what you're looking for is simply:
```
SELECT MIN(Amount) FROM Contract
```
But your question implies somehow applying AVG to a subset of your data, which I don't really understand. | lowest amount below the average amoutn in SQL | [
"",
"sql",
""
] |
I'm running SQL Server Management Studio and my table/dataset contains approximately 700K rows. The table is a list of accounts each month: at the beginning of each month, a snapshot is taken of all the accounts (and who owns them, etc.) and that is used to update the dataset. The two fields in question are AccountID and Rep (and I guess you could say month). This query really should be pretty easy, but to be honest I have to move on to other things, so I thought I'd throw it up here to get some help.
Essentially, I need to extract distinct AccountIDs that at some point changed reps. See the screenshot below of what I'm thinking:
(Screenshot of sample rows and the expected result omitted.)
Thoughts?
--- I should note, for instance, that AccountID ABC1159 is not included in the results because it appears only once and is never handled by any other rep.
--- Also, another parameter: if the first time an account appears its rep name is in a certain list (for instance "Friendly Account Manager" or "Support Specialist") and the account then moves to another rep, that account can be excluded from the results. So we essentially need a WHERE clause or similar that eliminates an account if its first instance is a rep from that list, followed by one rep change and none after that. The goal is to see whether, after an account received a human rep (a name not in that list), it then changed human reps at some point in time.
```
SELECT AccountID
FROM (SELECT DISTINCT AccountID, Rep
FROM YourTable
WHERE Rep NOT IN ('Support Specialist','Friendly Account Manager')
)sub
GROUP BY AccountID
HAVING COUNT(*) > 1
``` | Try this:
```
SELECT t.AccountID
FROM [table] t
WHERE NOT EXISTS(SELECT * FROM [reps table] r WHERE r.Rep = t.Rep AND r.[is not human])
GROUP BY t.AccountID
HAVING COUNT(DISTINCT t.Rep) > 1;
``` | SQL - Where Field has Changed Over Time | [
"",
"sql",
"time",
"duplicates",
"where-clause",
""
] |
I am using this function to get the difference between two dates. A person with a date joined of 03/03/1993 and a date left of 06/02/2012 should come back as 8yrs 11mths, but mine brings back 9 years 2 months. I also need it adapted so that if the year in Date Joined is after the year in Date Left, one year is subtracted; the same applies for months.
```
ALTER FUNCTION [dbo].[hmsGetLosText]
(@FromDt as datetime,@DateLeftOrg as Datetime)
returns varchar(255)
as
BEGIN
DECLARE @yy AS SMALLINT, @mm AS INT, @dd AS INT,
@getmm as INT, @getdd as INT, @Fvalue varchar(255)
SET @DateLeftOrg = CASE WHEN @DateLeftOrg IS NULL THEN GetDate()
WHEN @DateLeftOrg = '1/1/1900' THEN GetDate()
ELSE @DateLeftOrg
END
SET @DateLeftOrg = CASE WHEN YEAR(@FromDt) > YEAR(@DateLeftOrg) THEN DateAdd(yy, -1, @DateLeftOrg)
else @DateLeftOrg
end
SET @yy = DATEDIFF(yy, @FromDt, @DateLeftOrg)
SET @mm = DATEDIFF(mm, @FromDt, @DateLeftOrg)
SET @dd = DATEDIFF(dd, @FromDt, @DateLeftOrg)
SET @getmm = ABS(DATEDIFF(mm, DATEADD(yy, @yy, @FromDt), @DateLeftOrg))
SET @getdd = ABS(DATEDIFF(dd, DATEADD(mm, DATEDIFF(mm, DATEADD(yy, @yy, @FromDt), @DateLeftOrg), DATEADD(yy, @yy, @FromDt)), @DateLeftOrg))
IF @getmm = 1
set @getmm=0
RETURN (
Convert(varchar(10),@yy) + 'y ' + Convert(varchar(10),@getmm) + 'm ')
END
``` | This is how I get your desired answer:
```
DECLARE @D DATETIME
SET @D = CONVERT(datetime,'06/02/2002', 103) - CONVERT(datetime,'03/03/1993', 103)
SELECT @D
SELECT DATEPART(yyyy,@D) - 1900
SELECT DATEPART(mm,@D) - 1
SELECT DATEPART(day,@D) - 1
``` | This is happening because the caller is somewhere converting the strings "03/03/1993" and "06/02/2012" to dates without being explicit about the date semantics and the months and days are getting swapped.
So "03/03/1993" is (coincidentally) unambiguous as "1993-MAR-03", but if "06/02/2012" is the same as "2012-FEB-06" then their difference is 8 years, 11 months. But if "06/02/2012" is interpreted as "2012-JUN-02", then their difference is 9 years, 2 months.
To fix this, the caller should either use a less ambiguous format, such as "2012-02-06" (which is always Feb 6th) or explicitly qualify and cast it. This is how you would do that in T-SQL:
```
SELECT @date = CONVERT(DATETIME, '06/02/2012', 103)
```
which is also always interpreted as Feb 6th. | Function to find length of date range | [
"",
"sql",
"sql-server",
"user-defined-functions",
""
] |
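The "subtract, then read the parts" idea in the accepted answer can be cross-checked in plain Python. The sketch below computes whole years and leftover months directly, reading the date day-first (6 Feb 2002) as the `CONVERT(..., 103)` call above does:

```python
from datetime import date

def years_months(joined, left):
    """Whole years and leftover months between two dates.

    Borrows a month when the later day-of-month is smaller, and a year
    when the month difference goes negative -- the 'minus one Year'
    rule the question asks for.
    """
    years = left.year - joined.year
    months = left.month - joined.month
    if left.day < joined.day:   # current month not yet complete
        months -= 1
    if months < 0:              # current year not yet complete
        years -= 1
        months += 12
    return years, months

result = years_months(date(1993, 3, 3), date(2002, 2, 6))
print(result)  # (8, 11) -> "8y 11m"
```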
I have a table team (name,id,points).
I want to find the rank of a team based on points.
```
SELECT *FROM team ORDER BY points DESC
```
The above query sorts the result in descending order of team points. Now I want to find the rank of a particular team, i.e. the row number within this result set.
```
SELECT Count(*)
FROM team
WHERE points >= (SELECT points
FROM team
WHERE name = "team_name");
```
(You need to include your own team so the "list" starts at 1.)
In that case, if your team has the same points as another team, this query will rank your team below it. If you want to count only the teams with strictly more points, use this:
```
SELECT Count(*) + 1
FROM team
WHERE points > (SELECT points
FROM team
WHERE name = "team_name");
```
PS: sorry for my English | Try this. It should give you exactly what you need.
```
SELECT
@i:=@i+1 AS rownumber,
t.*
FROM
team AS t,
(SELECT @i:=0) AS test
ORDER BY points DESC
``` | mysql query to find row number of particular record from result set | [
"",
"mysql",
"sql",
""
] |
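The accepted counting approach can be demonstrated with `sqlite3` (team names and points below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE team (name TEXT, id INTEGER, points INTEGER)")
conn.executemany("INSERT INTO team VALUES (?, ?, ?)",
                 [("A", 1, 30), ("B", 2, 50), ("C", 3, 40)])

# Rank of team "C": count every team with at least as many points.
rank = conn.execute("""
    SELECT COUNT(*)
    FROM team
    WHERE points >= (SELECT points FROM team WHERE name = 'C')
""").fetchone()[0]

print(rank)  # 2 -- B (50) and C (40) both qualify, so C is ranked 2nd
```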
Suppose,I have a table named items:
```
sender_id receiver_id goods_id price
2 1 a1 1000
3 1 b2 2000
2 1 c1 5000
4 1 d1 700
2 1 b1 500
```
Here I want to select the sender\_id, goods\_id in descending order of price from the ***items*** table such that no two rows contain the same sender\_id value (here sender\_id 2). I used the following query, but in vain:
```
select distinct sender_id,goods_id from items where receiver_id=1 order by price desc
```
The result shows all five tuples (records), with the tuples containing sender\_id 2 appearing three times, in descending order of price. But what I want is to display only three records, the one with sender\_id 2 having only the highest price of 5000. What should I do?
My expected output is:
```
sender_id goods_id
2 c1
3 b2
4 d1
Please try this:
```
select sender_id,goods_id from items t1
where not exists (select 1 from items t2
where t2.sender_id = t1.sender_id
and t2.receiver_id = t1.receiver_id
and t2.price > t1.price)
and receiver_id = 1
order by price desc
``` | Get the highest price of each group, you could do like below:
```
SELECT T1.*
FROM (
SELECT
MAX(price) AS max_price,
sender_id
FROM items
GROUP BY sender_id
) AS T2
INNER JOIN items T1 ON T1.sender_id = T2.sender_id AND T1.price = T2.max_price
WHERE T1.receiver_id=1
ORDER BY T1.price
``` | Selecting distinct value from a column in MySql | [
"",
"mysql",
"sql",
"select",
"group-by",
"groupwise-maximum",
""
] |
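The accepted `NOT EXISTS` query can be run as-is against the question's sample data using Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE items
                (sender_id INTEGER, receiver_id INTEGER,
                 goods_id TEXT, price INTEGER)""")
conn.executemany("INSERT INTO items VALUES (?, ?, ?, ?)", [
    (2, 1, "a1", 1000),
    (3, 1, "b2", 2000),
    (2, 1, "c1", 5000),
    (4, 1, "d1", 700),
    (2, 1, "b1", 500),
])

# Keep a row only if no other row for the same sender/receiver pair
# has a strictly higher price.
rows = conn.execute("""
    select sender_id, goods_id from items t1
    where not exists (select 1 from items t2
                      where t2.sender_id = t1.sender_id
                        and t2.receiver_id = t1.receiver_id
                        and t2.price > t1.price)
      and receiver_id = 1
    order by price desc
""").fetchall()

print(rows)  # [(2, 'c1'), (3, 'b2'), (4, 'd1')] -- the expected output
```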
I'm using the following T-SQL to obtain role members from my SQL Server 2008 R2 database:
```
select rp.name as database_role, mp.name as database_user
from sys.database_role_members drm
join sys.database_principals rp on (drm.role_principal_id = rp.principal_id)
join sys.database_principals mp on (drm.member_principal_id = mp.principal_id)
order by rp.name
```
When I examine the output I notice that the only role members listed for `db_datareader` are db roles - no user members of `db_datareader` are listed in the query.
Why is that? How can I also list the user members of my db roles?
I guess I should also ask whether the table `sys.database_role_members` actually contains all members of a role? | I've worked out what's going on.
When I queried out the role members I was comparing the output with what SSMS listed as role members in the role's properties dialog - this included users as well as roles, but the users weren't being listed by the query in my question. It turns out that when listing role members, SSMS expands members that are roles to display the members of those roles.
The following query replicates the way in which SSMS lists role members:
```
WITH RoleMembers (member_principal_id, role_principal_id)
AS
(
SELECT
rm1.member_principal_id,
rm1.role_principal_id
FROM sys.database_role_members rm1 (NOLOCK)
UNION ALL
SELECT
d.member_principal_id,
rm.role_principal_id
FROM sys.database_role_members rm (NOLOCK)
INNER JOIN RoleMembers AS d
ON rm.member_principal_id = d.role_principal_id
)
select distinct rp.name as database_role, mp.name as database_user
from RoleMembers drm
join sys.database_principals rp on (drm.role_principal_id = rp.principal_id)
join sys.database_principals mp on (drm.member_principal_id = mp.principal_id)
order by rp.name
```
The above query uses a recursive CTE to expand a role into its user members.
```
SELECT dp.name as RoleName, us.name as UserName
FROM sys.sysusers us
RIGHT JOIN sys.database_role_members rm ON us.uid = rm.member_principal_id
JOIN sys.database_principals dp ON rm.role_principal_id = dp.principal_id
``` | How to list role members in SQL Server 2008 R2 | [
"",
"sql",
"sql-server-2008-r2",
""
] |
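The recursive-CTE expansion in the accepted answer can be mimicked with `sqlite3`, which also supports `WITH RECURSIVE` (SQLite stands in for SQL Server here, and the principal IDs below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# role_principal_id "contains" member_principal_id; a member may itself be a role.
conn.execute("""CREATE TABLE role_members
                (member_principal_id INTEGER, role_principal_id INTEGER)""")
conn.executemany("INSERT INTO role_members VALUES (?, ?)", [
    (20, 10),  # role 20 is a member of role 10
    (30, 20),  # user 30 is a member of role 20
    (40, 10),  # user 40 is a direct member of role 10
])

# Same shape as the answer's CTE: walk nested roles so indirect members
# (user 30) are also attributed to the outer role (role 10).
rows = conn.execute("""
    WITH RECURSIVE members(member_principal_id, role_principal_id) AS (
        SELECT member_principal_id, role_principal_id FROM role_members
        UNION ALL
        SELECT d.member_principal_id, rm.role_principal_id
        FROM role_members rm
        JOIN members d ON rm.member_principal_id = d.role_principal_id
    )
    SELECT DISTINCT member_principal_id, role_principal_id
    FROM members
    ORDER BY role_principal_id, member_principal_id
""").fetchall()

print(rows)  # (30, 10) is the derived, indirect membership
```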
I'm trying to make changes to an existing table and am getting this error when I try to save:
> Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that require the table to be re-created.
I only have one data entry in the database - would deleting this solve the problem or do i have to re-create the tables as the error suggests? (This is on SQL-Server 2008 R2) | The following actions might require a table to be re-created:
1. Adding a new column to the middle of the table
2. Dropping a column
3. Changing column nullability
4. Changing the order of the columns
5. Changing the data type of a column
To change this option, on the Tools menu, click Options, expand Designers, and then click Table and Database Designers. Select or clear the Prevent saving changes that require the table to be re-created check box.
[refer](http://technet.microsoft.com/en-us/library/bb895146.aspx) | you need to change settings to save the changes
1. Open SQL Server Management Studio (SSMS).
2. On the Tools menu, click Options.
3. In the navigation pane of the Options window, click Designers.
4. Select or clear the Prevent saving changes that require the table re-creation check box, and then click OK.
 | SQL server: can't save/change table design | [
"",
"sql",
"sql-server",
""
] |
Trying to figure how how to replace the following, with equivalent left outer join:
```
select distinct(a.some_value)
from table_a a, table_b b
where a.id = b.a_id
and b.some_id = 123
and b.create_date < '2014-01-01'
and b.create_date >= '2013-12-01'
MINUS
select distinct(a.some_value)
from table_a a, table_b b
where a.id = b.a_id
and b.some_id = 123
and b.create_date < '2013-12-01'
```
Can not do "NOT IN", as the second query has too much data. | ```
SELECT * FROM
(
select distinct(a.some_value)
from table_a a, table_b b
where a.id = b.a_id
and b.some_id = 123
and b.create_date < '2014-01-01'
and b.create_date >= '2013-12-01'
) x
LEFT JOIN
(
select distinct(a.some_value)
from table_a a, table_b b
where a.id = b.a_id
and b.some_id = 123
and b.create_date < '2013-12-01'
) y
ON
x.some_value = y.some_value
WHERE
y.some_value IS NULL
``` | Here's what my brain puts out after a beer:
```
select distinct
a.some_value
from
table_a a
join table_b b on a.id = b.a_id
where
b.some_id = 123
and b.create_date < '2014-01-01'
and b.create_date >= '2013-12-01'
and not exists (
select
a2.some_value
from
table_a a2
join table_b b2 on a2.id = b2.a_id
where
b2.some_id = 123
and b2.create_date < '2013-12-01'
)
```
Whether this'll optimize to faster than a left join or not is something I can't think of right now... | How to replace a complex SQL MINUS query with LEFT OUTER JOIN equivalent | [
"",
"sql",
"outer-join",
""
] |
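The left-join anti-join shape of the accepted answer can be sketched with `sqlite3`; the date filters are stripped out so only the `LEFT JOIN ... IS NULL` pattern remains (table names and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE recent  (some_value TEXT);  -- stands in for the first subquery
    CREATE TABLE earlier (some_value TEXT);  -- stands in for the second subquery
    INSERT INTO recent  VALUES ('a'), ('b'), ('c');
    INSERT INTO earlier VALUES ('b');
""")

# Values in the recent window that never appeared earlier:
# LEFT JOIN, then keep the rows with no match (the MINUS equivalent).
rows = conn.execute("""
    SELECT x.some_value
    FROM recent x
    LEFT JOIN earlier y ON x.some_value = y.some_value
    WHERE y.some_value IS NULL
    ORDER BY x.some_value
""").fetchall()

print(rows)  # [('a',), ('c',)] -- 'b' is filtered out by the anti-join
```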
I want to return the numbers 1,2,3,4 from mysql in *different* rows.
If I run
`select 1,2,3,4` then I will get *a single row* with these four numbers.
How can I get four different rows, each with a single number?
Please don't answer by telling me to create a table containing these numbers! Also, the use case is for a Jasper report I want to make.
```
SELECT 1 AS ColumnName
UNION ALL
SELECT 2 AS ColumnName
UNION ALL
SELECT 3 AS ColumnName
UNION ALL
SELECT 4 AS ColumnName
```
`Demo` | Try using UNION
```
select 1 as col_name
UNION
select 2
UNION
select 3
UNION
select 4
```
If some of your values occur more than once (say you have two 1s and you want them both in your returned rows), then you may want to use `UNION ALL` instead of `UNION`. | Mysql return enumeration of numbers in different rows | [
"",
"mysql",
"sql",
""
] |
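The `UNION ALL` trick runs unchanged on most engines; here it is verified with `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    SELECT 1 AS ColumnName
    UNION ALL SELECT 2
    UNION ALL SELECT 3
    UNION ALL SELECT 4
""").fetchall()

print(rows)  # [(1,), (2,), (3,), (4,)] -- four rows, one number each
```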
I've created a SQL query in Teradata which looks at product price changes, but I want to show only the most recent one, using the timestamp. The issue, however, is that the data has instances where the product\_number, price, and timestamp are repeated exactly, giving multiple values. I'm looking to eliminate those duplicates.
```
select a.product_number, a.maxtimestamp, b.product_price
from ( SELECT DISTINCT product_number ,MAX(update_timestamp) as maxtimestamp
FROM product_price
group by product_number) a
inner join product_price b on a.product_number = b.product_number
and a.maxtimestamp = b.update_timestamp;
``` | Simply use a ROW\_NUMBER + QUALIFY
```
select *
from product_price
qualify
row_number()
over (partition by product_number
order by update_timestamp desc) = 1;
``` | You should be able to simply move your `DISTINCT` operator to the outside query, or do a GROUP BY that covers all columns (doing it on just `maxtimestamp` will result in an error).
```
select DISTINCT a.product_number, a.maxtimestamp, b.product_price
from ( SELECT product_number ,MAX(update_timestamp) as maxtimestamp
FROM product_price
group by product_number) a
inner join product_price b on a.product_number = b.product_number
and a.maxtimestamp = b.update_timestamp
```
or
```
select a.product_number, a.maxtimestamp, b.product_price
from ( SELECT DISTINCT product_number ,MAX(update_timestamp) as maxtimestamp
FROM product_price
group by product_number) a
inner join product_price b on a.product_number = b.product_number
and a.maxtimestamp = b.update_timestamp
GROUP BY a.product_number, a.maxtimestamp, b.product_price
```
As an aside, the DISTINCT in the inner subquery is redundant since you already have a GROUP BY. | SQL timestamp, eliminate duplicate rows | [
"",
"sql",
"teradata",
""
] |
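`QUALIFY` is Teradata-specific, but the same ranking can be expressed portably by filtering a `ROW_NUMBER()` subquery — sketched here with `sqlite3` (window functions need SQLite 3.25+; the sample data, including an exact duplicate row, is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_price
                (product_number TEXT, product_price INTEGER,
                 update_timestamp TEXT)""")
conn.executemany("INSERT INTO product_price VALUES (?, ?, ?)", [
    ("p1", 10, "2014-01-01"),
    ("p1", 12, "2014-01-05"),
    ("p1", 12, "2014-01-05"),   # exact duplicate row
    ("p2", 99, "2014-01-03"),
])

# Rank rows per product by timestamp, newest first, and keep rank 1 only.
rows = conn.execute("""
    SELECT product_number, product_price, update_timestamp
    FROM (SELECT *,
                 ROW_NUMBER() OVER (PARTITION BY product_number
                                    ORDER BY update_timestamp DESC) AS rn
          FROM product_price)
    WHERE rn = 1
    ORDER BY product_number
""").fetchall()

print(rows)  # one row per product; the duplicate collapses to a single row
```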
I wanted to understand the UDF WeekOfYear and how it determines the first week. I had to artificially hit a table and run
the query. I wanted to compute the values without hitting the table. Secondly, can I look at the UDF's source code?
```
SELECT weekofyear
('12-31-2013')
from a;
You do not need a table to test a UDF since Hive 0.13.0.
See this Jira: [HIVE-178 SELECT without FROM should assume a one-row table with no columns](https://issues.apache.org/jira/browse/HIVE-178)
**Test:**
```
hive> SELECT weekofyear('2013-12-31');
```
**Result:**
```
1
```
The source code (master branch) is here: [UDFWeekOfYear.java](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFWeekOfYear.java) | If you are Java developer, you can write Junit Test cases and test the UDFs..
you can search the source code of all hive built in functions in **grepcode**.
<http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hive/hive-exec/1.0.0/org/apache/hadoop/hive/ql/udf/UDFWeekOfYear.java> | how can we test HIVE functions without referencing a table | [
"",
"sql",
"hive",
"hiveql",
"hive-udf",
""
] |
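Hive's `weekofyear` result for this date matches ISO-8601 week numbering, which can be sanity-checked without any SQL engine using Python's `date.isocalendar()`:

```python
from datetime import date

# 31 Dec 2013 falls in week 1 of ISO year 2014 -- the same answer
# the Hive query in the accepted answer returns.
iso_year, iso_week, iso_weekday = date(2013, 12, 31).isocalendar()
print(iso_week)  # 1
```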
I would like to know if a user has a privilege on an object or not.
I'm working on SQL Developer.
When I query manually the table **DBA\_TAB\_PRIVS**, I get all the information needed.
However, I need this information to be used in some triggers and functions.
So, I'm writing PL/SQL function that will return 1 if a role has the privilege and 0 otherwise.
```
CREATE OR REPLACE FUNCTION HAS_PRIVILEGE_ON_OBJECT(rolename IN VARCHAR2,
objectname IN VARCHAR2,
objectowner IN VARCHAR2,
privilegename IN VARCHAR2)
RETURN NUMBER
AS
output NUMBER;
BEGIN
SELECT count(*) INTO output
FROM dba_tab_privs
WHERE grantee = rolename
AND owner = objectowner
AND table_name = objectname
AND privilege = privilegename;
IF output > 0 THEN
RETURN 1;
ELSE
RETURN 0;
END IF;
END has_privilege_on_object;
```
The function doesn't compile and says :
> ORA 942 : table or view does not exist.
The connected user has access to the view DBA\_TAB\_PRIVS, since I can query it, but when I try to automate it using a function it doesn't work.
Any ideas please? | I'll wager that you have privileges on `dba_tab_privs` via a role, not via a direct grant. If you want to use a definer's rights stored function, the owner of the function has to have privileges on all the objects granted directly, not via a role.
If you disable roles in your interactive session, can you still query `dba_tab_privs`? That is, if you do
```
SQL> set role none;
SQL> select * from dba_tab_privs
```
do you get the same ORA-00942 error? Assuming that you do
```
GRANT select any dictionary
TO procedure_owner
```
will give the `procedure_owner` user the ability to query any data dictionary table in a stored function. Of course, you could also do a direct grant on just `dba_tab_privs`. | You can use `table_privileges`:
```
select * from table_privileges;
```
This does not require any specific rights from your user. | How to know if a user has a privilege on Object? | [
"",
"sql",
"oracle",
"plsql",
"privileges",
""
] |
Is there a way to retrieve all the sequences defined in an existing oracle-sql db schema?
Ideally I would like to use something like this:
```
SELECT * FROM all_sequences WHERE owner = 'me';
```
which apparently doesn't work. | Try this:
```
SELECT object_name
FROM all_objects
WHERE object_type = 'SEQUENCE' AND owner = '<schema name>'
``` | Yes:
```
select * from user_sequences;
```
Your SQL was almost correct too:
```
select * from all_sequences where sequence_owner = user;
``` | How to check if a sequence exists in my schema? | [
"",
"sql",
"oracle11g",
"sequences",
""
] |
Assume I have an table of orders with the following columns:
`Model`
`Quantity`
`Price`
`ScheduleB`
`OrderID`
There can be multiple `Model` with the same `ScheduleB` classification. I need a SQL statement that will only return a record where the total price (`Quantity` \* `Price`) of all `Model` of similarly grouped `ScheduleB` classifications are greater than $2500 for an `OrderID`. There are hundreds of different `ScheduleB` classifications possible. So an example of what you might see in the table for `OrderID` = 10054 would be:
`+-------------------------------------------+`
`| Model | Qty | Price | ScheduleB | OrderID |`
`+-------------------------------------------+`
`Dr1625, 2, $1298.87, 1029202938, 10054`
`Dr1624, 1, $123.87, 1029202930, 10054`
`Dr1623, 5, $2499.87, 1029202931, 10054`
`Dr1622, 3, $600.87, 1029202938, 10054`
`Dr1621, 1, $3298.87, 1029202938, 10054`
The records with `ScheduleB` equal to 1029202938 have a combined total of greater than $2500, I would want the following returned:
`+-------------------------------------------+`
`| Model | Qty | Price | ScheduleB | OrderID |`
`+-------------------------------------------+`
`Dr1625, 2, $1298.87, 1029202938, 10054`
`Dr1622, 3, $600.87, 1029202938, 10054`
`Dr1621, 1, $3298.87, 1029202938, 10054`
Basically, I only want to show records from a table where the same `ScheduleB` classifications have a total price greater than $2500 for a specific `OrderID`. Can this be done with a SQL statement in MySQL?
EDIT:
Here is the SQL statement that I am using to get the above mentioned columns (plus a few others):
```
select brands.Brand, products.Model_PartNumber, categories.CategoryDescription,
       (Select Sum(Quantity) From orderitems
        where OrderID = 10054 AND ProductID = products.ProductID
        Group BY ProductID) as Quantity,
       orderitems.ItemPrice, brands.CountryOrigin, categories.ScheduleB,
       products.Weight, products.WeightIn
from orderitems
INNER JOIN products ON products.ProductID = orderitems.ProductID
INNER JOIN brands ON products.BrandID = brands.BrandID
LEFT JOIN productcategories ON productcategories.ProductID = products.ProductID
INNER Join categories ON productcategories.CategoryID = categories.CategoryID
WHERE orderitems.OrderID = 10054
Group By products.Model_partNumber
```
```
Select
Model
,Quantity
,Price
,ScheduleB
,OrderID
From Table1
WHERE ScheduleB IN
(SELECT ScheduleB FROM Table1 GROUP BY ScheduleB HAVING SUM(Quantity * Price) > 2500)
``` | The following SQL statement did the trick:
```
Select ScheduleB
from (select (Select Sum(Quantity) From orderitems
              where OrderID = ? AND ProductID = products.ProductID
              Group BY ProductID) as Quantity,
             orderitems.ItemPrice, categories.ScheduleB
      from orderitems
      INNER JOIN products ON products.ProductID = orderitems.ProductID
      LEFT JOIN productcategories ON productcategories.ProductID = products.ProductID
      INNER Join categories ON productcategories.CategoryID = categories.CategoryID
      WHERE orderitems.OrderID = ?
      Group By products.ProductID) as table1
Group by ScheduleB
Having SUM(Quantity * ItemPrice) > 2500;
```
Thanks to both `Cha` and `Gordon` for pointing me in the right direction. | SQL Statement - Based on Total Sum of a Column Type | [
"",
"mysql",
"sql",
""
] |
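The accepted `WHERE ... IN (... HAVING SUM(Quantity * Price) > 2500)` pattern can be sketched with `sqlite3`, using a trimmed-down, invented version of the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders
                (Model TEXT, Qty INTEGER, Price REAL,
                 ScheduleB TEXT, OrderID INTEGER)""")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?, ?)", [
    ("Dr1625", 2, 1298.87, "1029202938", 10054),  # group total 4400.35
    ("Dr1622", 3, 600.87,  "1029202938", 10054),
    ("Dr1624", 1, 123.87,  "1029202930", 10054),  # group total 123.87
])

# The subquery finds ScheduleB groups whose combined Qty * Price exceeds
# 2500; the outer query returns every row belonging to such a group.
rows = conn.execute("""
    SELECT Model, ScheduleB
    FROM orders
    WHERE ScheduleB IN (SELECT ScheduleB FROM orders
                        GROUP BY ScheduleB
                        HAVING SUM(Qty * Price) > 2500)
    ORDER BY Model
""").fetchall()

print(rows)  # only the rows of the qualifying 1029202938 group survive
```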
I have been trying out postgres 9.3 running on an Azure VM on Windows Server 2012. I was originally running it on a 7GB server... I am now running it on a 14GB Azure VM. I went up a size when trying to solve the problem described below.
I am quite new to PostgreSQL by the way, so I am only getting to know the configuration options bit by bit. Also, while I'd love to run it on Linux, my colleagues and I simply don't have the expertise to address issues when things go wrong in Linux, so Windows is our only option.
**Problem description:**
I have a table called test\_table; it currently stores around 90 million rows. It will grow by around 3-4 million rows per month. There are 2 columns in test\_table:
```
id (bigserial)
url (charachter varying 300)
```
I created indexes **after** importing the data from a few CSV files. Both columns are indexed.... the id is the primary key. The index on the url is a normal btree created using the defaults through pgAdmin.
When I ran:
```
SELECT sum(((relpages*8)/1024)) as MB FROM pg_class WHERE reltype=0;
```
... The total size is 5980MB
The indiviual size of the 2 indexes in question here are as follows, and I got them by running:
```
# SELECT relname, ((relpages*8)/1024) as MB, reltype FROM pg_class WHERE
reltype=0 ORDER BY relpages DESC LIMIT 10;
relname | mb | reltype
----------------------------------+------+--------
test_url_idx | 3684 | 0
test_pk | 2161 | 0
```
There are other indexes on other smaller tables, but they are tiny (< 5MB).... so I ignored them here
The trouble when querying the test\_table using the url, particularly when using a wildcard in the search, is the speed (or lack of it). e.g.
```
select * from test_table where url like 'orange%' limit 20;
```
...would take anything from 20-40 seconds to run.
Running explain analyze on the above gives the following:
```
# explain analyze select * from test_table where
url like 'orange%' limit 20;
QUERY PLAN
-----------------------------------------------------------------
Limit (cost=0.00..4787.96 rows=20 width=57)
(actual time=0.304..1898.583 rows=20 loops=1)
-> Seq Scan on test_table (cost=0.00..2303247.60 rows=9621 width=57)
(actual time=0.302..1898
.542 rows=20 loops=1)
Filter: ((url)::text ~~ 'orange%'::text)
Rows Removed by Filter: 210286
Total runtime: 1898.650 ms
(5 rows)
```
Taking another example... this time with the wildcard between american and .com....
```
# explain select * from test_table where url
like 'american%.com' limit 50;
QUERY PLAN
-------------------------------------------------------
Limit (cost=0.00..11969.90 rows=50 width=57)
-> Seq Scan on test_table (cost=0.00..2303247.60 rows=9621 width=57)
Filter: ((url)::text ~~ 'american%.com'::text)
(3 rows)
# explain analyze select * from test_table where url
like 'american%.com' limit 50;
QUERY PLAN
-----------------------------------------------------
Limit (cost=0.00..11969.90 rows=50 width=57)
(actual time=83.470..3035.696 rows=50 loops=1)
-> Seq Scan on test_table (cost=0.00..2303247.60 rows=9621 width=57)
(actual time=83.467..303
5.614 rows=50 loops=1)
Filter: ((url)::text ~~ 'american%.com'::text)
Rows Removed by Filter: 276142
Total runtime: 3035.774 ms
(5 rows)
```
I then went from a 7GB to a 14GB server. Query Speeds were no better.
**Observations on the server**
* I can see that Memory usage never really goes beyond 2MB.
* Disk reads go off the charts when running a query using a LIKE statement.
* Query speed is perfectly fine when matching against the id (primary key)
The postgresql.conf file has had only a few changes from the defaults. Note that I took some of these suggestions from the following blog post: <http://www.gabrielweinberg.com/blog/2011/05/postgresql.html>.
**Changes to conf:**
```
shared_buffers = 512MB
checkpoint_segments = 10
```
(I changed checkpoint\_segments as I got lots of warnings when loading in CSV files... although a production database will not be very write intensive so this can be changed back to 3 if necessary...)
```
cpu_index_tuple_cost = 0.0005
effective_cache_size = 10GB # recommendation in the blog post was 2GB...
```
On the server itself, in the Task Manager -> Performance tab, the following are probably the relevant bits for someone who can assist:
CPU: rarely over 2% (regardless of what queries are run... it hit 11% once when I was importing a 6GB CSV file)
Memory: 1.5/14.0GB (11%)
More details on Memory:
* In use: 1.4GB
* Available: 12.5GB
* Committed 1.9/16.1 GB
* Cached: 835MB
* Paged Pool: 95.2MB
* Non-paged pool: 71.2 MB
**Questions**
1. How can I ensure an index will sit in memory (providing it doesn't get too big for memory)? Is it just configuration tweaking I need here?
2. Is implementing my own search index (e.g. Lucene) a better option here?
3. Are the full-text indexing features in postgres going to improve performance dramatically, even if I can solve the index in memory issue?
Thanks for reading. | Those seq scans make it look like you didn't run `analyze` on the table after importing your data.
<http://www.postgresql.org/docs/current/static/sql-analyze.html>
During normal operation, scheduling to run `vacuum analyze` isn't useful, because the autovacuum periodically kicks in. But it is important when doing massive writes, such as during imports.
On a slightly related note, see this reversed index tip on Pavel's PostgreSQL Tricks site, if you ever need to run queries anchored at the end, rather than at the beginning, e.g. `like '%.com'`
<http://postgres.cz/wiki/PostgreSQL_SQL_Tricks_I#section_20>
---
Regarding your actual questions, be wary that some of the suggestions in that post you linked to are dubious at best. Changing the cost of index use is frequently dubious and disabling seq scan is downright silly. (Sometimes, it *is* cheaper to seq scan a table than it is to use an index.)
With that being said:
1. Postgres primarily caches indexes based on how often they're used, and it will not use an index if the stats suggest that it shouldn't -- hence the need to `analyze` after an import. Giving Postgres plenty of memory will, of course, increase the likelihood it's in memory too, but keep the latter points in mind.
2. and 3. Full text search works fine.
For further reading on fine-tuning, see the manual and:
<http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server>
Two last notes on your schema:
1. Last I checked, bigint (bigserial in your case) was slower than plain int. (This was a while ago, so the difference might now be negligible on modern, 64-bit servers.) Unless you foresee that you'll actually need more than 2.3 billion entries, int is plenty and takes less space.
2. From an implementation standpoint, the only difference between a `varchar(300)` and a `varchar` without a specified length (or `text`, for that matter) is an extra check constraint on the length. If you don't actually *need* data to fit that size and are merely doing so for no reason other than habit, your db inserts and updates will run faster by getting rid of that constraint. | Unless your encoding or collation is C or POSIX, an ordinary btree index cannot efficiently satisfy an anchored like query. You may have to declare a btree index with the varchar\_pattern\_ops op class to benefit. | Postgresql: How do I ensure that indexes are in memory | [
"",
"sql",
"postgresql",
"configuration",
"indexing",
""
] |
I'm new to `SQL` and I want to create a `One-To-Many` relationship between two tables.
I have these two tables created with the following queries:
```
CREATE TABLE Customers
(
CustomerId INT NOT NULL AUTO_INCREMENT,
FirstName VARCHAR(255) NOT NULL,
LastName VARCHAR(255) NOT NULL,
Email VARCHAR(255) NOT NULL,
Address VARCHAR(255) NOT NULL,
PRIMARY KEY(CustomerId)
);
CREATE TABLE Orders
(
OrderId INT NOT NULL AUTO_INCREMENT,
Date DATE NOT NULL,
Quantity INT NOT NULL,
TotalDue FLOAT NOT NULL,
CustomerId INT NOT NULL,
PRIMARY KEY(OrderId),
FOREIGN KEY(CustomerId) REFERENCES Customers(CustomerId)
);
```
However, even though I set `CustomerId` as a foreign key for the `Orders` table, I'm still able to add rows in the `Orders` table with a `CustomerId` that is not present in the `Customers` table. Why is that, and how can I create a real link between the tables?
This is what the tables look like(copy/paste from my `mysql` client):
```
mysql> select * from Customers;
+------------+-----------+----------+-------------------+------------------------+
| CustomerId | FirstName | LastName | Email | Address |
+------------+-----------+----------+-------------------+------------------------+
| 1 | Jacks | James | james98@yahoo.com | Str. Moony, No. 9 |
| 2 | Mock | Grad | rrfuX@yahoo.com | Str. Mars, No. 91 |
| 3 | James | Geremy | gv@yahoo.com | Str. Monday, No. 12 |
| 4 | Joana | Joan | iiogn@yahoo.com | Str. Comete, No. 19 |
| 5 | Granicer | James | gtuawr@yahoo.com | Str. Sydney, No. 651 |
+------------+-----------+----------+-------------------+------------------------+
5 rows in set (0.00 sec)
mysql> select * from Orders;
+---------+------------+----------+----------+------------+
| OrderId | Date | Quantity | TotalDue | CustomerId |
+---------+------------+----------+----------+------------+
| 1 | 2014-01-09 | 10 | 340 | 3 |
| 2 | 2014-01-09 | 1 | 50 | 3 |
| 3 | 2014-01-09 | 11 | 55 | 5 |
| 4 | 2014-01-09 | 11 | 55 | 51 |
+---------+------------+----------+----------+------------+
4 rows in set (0.00 sec)
```
As you can see `OrderId` 4 contains a customer with `CustomerId` 51 which is not in the `Customers` table. | I gave up and started using the web client of MySQL... | You need to set the storage engine...change your query to...
```
CREATE TABLE Orders
(
OrderId INT NOT NULL AUTO_INCREMENT,
Date DATE NOT NULL,
Quantity INT NOT NULL,
TotalDue FLOAT NOT NULL,
CustomerId INT NOT NULL,
PRIMARY KEY(OrderId),
FOREIGN KEY(CustomerId) REFERENCES Customers(CustomerId)
)ENGINE=INNODB;
``` | SQL One-To-Many relationship | [
"",
"mysql",
"sql",
"relational-database",
"one-to-many",
""
] |
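The underlying issue — foreign keys that are silently not enforced unless the storage engine supports them (InnoDB vs. MyISAM in MySQL) — has a close analogue in SQLite, where enforcement is opt-in via a pragma. A sketch of what enforcement looks like once it is switched on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite's analogue of picking InnoDB
conn.executescript("""
    CREATE TABLE Customers (CustomerId INTEGER PRIMARY KEY);
    CREATE TABLE Orders (
        OrderId    INTEGER PRIMARY KEY,
        CustomerId INTEGER NOT NULL REFERENCES Customers(CustomerId)
    );
    INSERT INTO Customers (CustomerId) VALUES (1);
""")

conn.execute("INSERT INTO Orders (CustomerId) VALUES (1)")   # existing customer: OK
try:
    conn.execute("INSERT INTO Orders (CustomerId) VALUES (51)")  # no such customer
    enforced = False
except sqlite3.IntegrityError:
    enforced = True

print(enforced)  # True: the engine actually checks the foreign key
```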
I'm trying to add a computed column to a SQL Server 2008 Express table.
The formula is:
```
case when callrecord_contacttype=1 then 'Routed voice'
else when callrecord_contacttype=2 then 'Direct incoming voice'
else when callrecord_contacttype=3 then 'Direct outgoing voice'
else when callrecord_contacttype=4 then 'Direct internal voice'
else when callrecord_contacttype=5 then 'Routed callback'
else when callrecord_contacttype=6 then 'Routed email'
else when callrecord_contacttype=7 then 'Direct outgoing email'
else when callrecord_contacttype=8 then 'Routed chat' else '' end
```
But I'm getting the error:
> Incorrect syntax near the keyword 'when'. | Try :
```
case callrecord_contacttype
when 1 then 'Routed voice'
when 2 then 'Direct incoming voice'
when 3 then 'Direct outgoing voice'
when 4 then 'Direct internal voice'
when 5 then 'Routed callback'
when 6 then 'Routed email'
when 7 then 'Direct outgoing email'
when 8 then 'Routed chat'
else ''
end
```
See <http://msdn.microsoft.com/en-us/library/ms181765.aspx> for syntax. | Have only 1 `ELSE` in your query:
```
case when callrecord_contacttype=1 then 'Routed voice'
when callrecord_contacttype=2 then 'Direct incoming voice'
when callrecord_contacttype=3 then 'Direct outgoing voice'
when callrecord_contacttype=4 then 'Direct internal voice'
when callrecord_contacttype=5 then 'Routed callback'
when callrecord_contacttype=6 then 'Routed email'
when callrecord_contacttype=7 then 'Direct outgoing email'
when callrecord_contacttype=8 then 'Routed chat'
else '' end
``` | computed column - Incorrect syntax near the keyword 'when' | [
"",
"sql",
"sql-server",
"sql-server-2008",
"case",
"calculated-columns",
""
] |
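The corrected simple-`CASE` form from the accepted answer can be verified with `sqlite3` (only a couple of the contact types are included for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simple CASE: compare one expression against a list of values,
# with a single ELSE at the end.
rows = conn.execute("""
    SELECT n,
           CASE n
               WHEN 1 THEN 'Routed voice'
               WHEN 2 THEN 'Direct incoming voice'
               ELSE ''
           END AS label
    FROM (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 9)
""").fetchall()

print(rows)  # unmatched values (9) fall through to the empty-string ELSE
```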
```
ID Place Name Type Count
--------------------------------------------------------------------------------
7718 | UK1 | Lemuis | ERIS TELECOM | 0
7713 | UK1 | Astika LLC | VERIDIAN | 34
7712 | UK1 | Angel Telecom AG | VIACLOUD | 34
7710 | UK1 | DDC S.r.L | ALPHA UK | 25
7718 | UK1 | Customers | WERTS | 0
```
Basically I have a variable and I want to compare that variable the the `'Type'` column. If the variable matches the type then I want to return all the rows that have the same ID as the variable's ID.
For example, my variable is `'ERIS TELECOM'`, I need to retrieve the ID for `'ERIS TELECOM'` which is `7718`. Then I search the table for rows that have the `ID 7718`.
My desired output should be:
Table Name: `FullResults`
```
ID Place Name Type Count
--------------------------------------------------------------------------------
7718 | UK1 | Lemuis | ERIS TELECOM | 0
7718 | UK1 | Customers | WERTS | 0
```
Is there a query that will do this? | ```
SELECT *
FROM FullResults
WHERE ID = (SELECT ID
FROM FullResults
WHERE Type= @variable);
```
I guess it will be something like this? | Something like this should do the trick, returns all data for all ID's that have a matching type.
```
SELECT *
FROM Table
WHERE ID
IN (SELECT ID from Table where Type = 'ERIS TELECOM')
``` | SQL query to get specific rows based on one value | [
"",
"mysql",
"sql",
"sql-server",
"stored-procedures",
""
] |
I want to return two default column values even if the table has no records. I'm using the following query (thanks to [How to SELECT DEFAULT value of a field](https://stackoverflow.com/a/8266834/466153)):
```
SELECT DEFAULT(membership_credits) AS membership_credits,
DEFAULT(product_credits) AS product_credits
FROM (SELECT 1) AS dummy LEFT JOIN Users ON True LIMIT 1
```
But instead of the default values, I'm getting NULL:
```
membership_credits product_credits
NULL NULL
```
What's the problem?
EDIT:
Adding the table schema as suggested in a comment:
```
CREATE TABLE Users (
user_id BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
user_login VARCHAR(40) NOT NULL UNIQUE,
user_name VARCHAR(100) NOT NULL,
user_email VARCHAR(254) NOT NULL UNIQUE,
user_telephone VARCHAR(100) NOT NULL,
user_password VARCHAR(64) NOT NULL,
user_address VARCHAR(255) NOT NULL,
user_postal_code VARCHAR(100) NOT NULL,
user_district VARCHAR(100) NOT NULL,
user_country VARCHAR(100) NOT NULL,
user_tax_number VARCHAR(20) NOT NULL,
user_billing_email VARCHAR(254) NOT NULL,
company_description TEXT,
company_history TEXT,
company_products TEXT,
public_contact BINARY(1) NOT NULL,
user_active BINARY(1) NOT NULL DEFAULT '0',
user_key VARCHAR(255) NOT NULL,
user_registered TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
unread_messages INT UNSIGNED DEFAULT 0,
membership_credits INT UNSIGNED NOT NULL DEFAULT 0,
product_credits INT UNSIGNED NOT NULL DEFAULT 0
) ENGINE INNODB CHARACTER SET utf8 COLLATE utf8_general_ci;
``` | It all depends on if your columns allow `NULL` or not.
If `NULL` is allowed, a query like the following will work even when the table has no records:
```
SELECT
DEFAULT(`membership_credits`) `membership_credits`,
DEFAULT(`product_credits`) `product_credits`
FROM (SELECT 1) `dummy`
LEFT JOIN `users` ON TRUE
LIMIT 1;
```
`SQL Fiddle demo`
If `NULL` is not allowed and the table has no records, you will get `NULL` from the query above. In this case you would need a query like:
```
SELECT
DEFAULT(`membership_credits`) `membership_credits`,
DEFAULT(`product_credits`) `product_credits`
FROM (SELECT *, COUNT(0)
FROM `users`) `users`;
```
`SQL Fiddle demo`
**UPDATE**
Be careful: MySQL >= 5.6 does not behave the same way, and NULL values are returned.
`SQL Fiddle demo` | if you use following query to create table
```
CREATE TABLE `users` (
`user_id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`user_login` VARCHAR(40) NOT NULL,
`user_name` VARCHAR(100) NOT NULL,
`user_email` VARCHAR(254) NOT NULL,
`user_telephone` VARCHAR(100) NOT NULL,
`user_password` VARCHAR(64) NOT NULL,
`user_address` VARCHAR(255) NOT NULL,
`user_postal_code` VARCHAR(100) NOT NULL,
`user_district` VARCHAR(100) NOT NULL,
`user_country` VARCHAR(100) NOT NULL,
`user_tax_number` VARCHAR(20) NOT NULL,
`user_billing_email` VARCHAR(254) NOT NULL,
`company_description` TEXT NULL,
`company_history` TEXT NULL,
`company_products` TEXT NULL,
`public_contact` BINARY(1) NOT NULL,
`user_active` BINARY(1) NOT NULL DEFAULT '0',
`user_key` VARCHAR(255) NOT NULL,
`user_registered` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`unread_messages` INT(10) UNSIGNED NULL DEFAULT '0',
`membership_credits` INT(10) UNSIGNED NULL DEFAULT '0',
`product_credits` INT(10) UNSIGNED NULL DEFAULT '0',
PRIMARY KEY (`user_id`),
UNIQUE INDEX `user_login` (`user_login`),
UNIQUE INDEX `user_email` (`user_email`)
)
```
And use below query to get default values
```
SELECT if(DEFAULT(membership_credits) is null,0,
DEFAULT(membership_credits))AS membership_credits,if(DEFAULT(product_credits)
is null,0,DEFAULT(product_credits) )AS product_credits FROM
(SELECT 1) AS dummy LEFT JOIN Users ON True LIMIT 1
```
You must note that in the above create query `membership_credits` and `product_credits` modified to allow null | SELECT DEFAULT returns NULL | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I am relatively new to SQL so apologies for any stupid questions, but I can't even get close on this.
I have a data set of customer orders which consists of **Cust\_ID** and **Date**. I want to return a query that has all the customer orders adding two fields, "Date of first order" and "order count"
```
Cust_ID Date FirstOrder orderCount
5001 04/10/13 04/10/13 1
5001 11/10/13 04/10/13 2
5002 11/10/13 11/10/13 1
5001 17/10/13 04/10/13 3
5001 24/10/13 04/10/13 4
5002 24/10/13 11/10/13 2
```
Any pointers would be much appreciated.
Thanks | ```
SELECT foo.Cust_ID
, foo.`Date`
, MIN(p.`Date`) AS FirstOrder
, COUNT(*) AS orderCount
FROM foo
JOIN foo AS p
ON p.Cust_id = foo.Cust_id
AND p.`Date` <= foo.`Date`
GROUP BY foo.Cust_ID, foo.`Date`
ORDER BY foo.`Date`;
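The self-join trick above can be verified on any engine; here is a runnable sketch using Python's `sqlite3` (the table name `foo` is kept from the answer, but the `Date` column is renamed to `OrderDate` since sqlite3 has no backtick quoting — treat it as an illustration, not the exact MySQL query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (Cust_ID INT, OrderDate TEXT)")
rows = [(5001, "2013-10-04"), (5001, "2013-10-11"), (5002, "2013-10-11"),
        (5001, "2013-10-17"), (5001, "2013-10-24"), (5002, "2013-10-24")]
con.executemany("INSERT INTO foo VALUES (?, ?)", rows)

# Join each order to all earlier-or-equal orders of the same customer:
# COUNT(*) gives the running order number, MIN() the first order date.
result = con.execute("""
    SELECT foo.Cust_ID, foo.OrderDate,
           MIN(p.OrderDate) AS FirstOrder,
           COUNT(*)         AS orderCount
    FROM foo
    JOIN foo AS p
      ON p.Cust_ID = foo.Cust_ID AND p.OrderDate <= foo.OrderDate
    GROUP BY foo.Cust_ID, foo.OrderDate
    ORDER BY foo.OrderDate, foo.Cust_ID
""").fetchall()
for r in result:
    print(r)
```

The output matches the expected table in the question: each customer's first order carries count 1, and the count increments per customer as the dates advance.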
``` | If I understood you correctly:
Source data you have:
```
Cust_ID Date
5001 04/10/13
5001 11/10/13
5002 11/10/13
5001 17/10/13
5001 24/10/13
5002 24/10/13
```
Result dataset you expect:
```
Cust_ID Date FirstOrder OrderNumber
5001 04/10/13 04/10/13 1
5001 11/10/13 04/10/13 2
5002 11/10/13 11/10/13 1
5001 17/10/13 04/10/13 3
5001 24/10/13 04/10/13 4
5002 24/10/13 11/10/13 2
```
Then query should be (if using AF):
```
SELECT Cust_ID, Date,
MIN(Date) over ( partition by Cust_ID ) as FirstOrder,
ROW_NUMBER() over ( partition by Cust_ID order by Date asc ) as OrderNumber
FROM Orders
```
Excluding AF, using only standard SQL:
```
SELECT S.Cust_ID, S.Date, MIN(J.Date) as FirstDate, Count(S.Cust_id)
FROM Orders S
INNER JOIN Orders J
ON S.Cust_ID = J.Cust_ID and S.Date >= J.Date
GROUP BY S.Cust_id, S.Date
``` | SQL aggregating running count of records | [
"",
"mysql",
"sql",
""
] |
I've got a database which contains lots of data. I would like to select all data, but Group by one column.
So for example:
```
column a | column b
example | apple
example | pear
example | orange
example | strawberry
```
In the example column A is a column which has duplicate values and needs to be grouped.
Column B however has different values, and I would like to select all these values.
In the end, I would like to get a query that results in:
```
example
- apple
- pear
- orange
- strawberry
```
Where the values of column B are in a jQuery accordion.
I've tried and searched for a couple of hours but still haven't found a way to do this.
**EDIT:**
I've manage to solve the problem using the answer Gordon provided.
Using PHP I exploded the query results and with a while loop I collected all the data from the arrays.
It's working great now, thanks guys! | I think you can get what you want using `group_concat()`:
```
select a, group_concat(b)
from t
group by a;
```
This will create a list of "b"s for each a. In your example:
```
example apple,pear,orange,strawberry
```
You can change the separator using the `SEPARATOR` keyword.
EDIT:
You can use `group_concat()` multiple times:
```
select a, group_concat(b) as bs, group_concat(c) as cs
from t
group by a;
```
Or, combine it with `concat()`:
```
select a, group_concat(concat(b, ':', 'c')) as bcs
from t
group by a;
``` | All SQL systems deal in tables: rectangles of data with rows and columns. Your question asks for a result set which isn't really a rectangle of data, in the sense that it contains "header" rows and "detail" rows.
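SQLite also implements `GROUP_CONCAT`, so the answer can be verified with a small Python `sqlite3` sketch (table and column names follow the question; note the item order inside the concatenated string is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a TEXT, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("example", "apple"), ("example", "pear"),
                 ("example", "orange"), ("example", "strawberry")])

# One row per a-value, with all b-values collapsed into a comma-separated list
rows = con.execute("SELECT a, GROUP_CONCAT(b) FROM t GROUP BY a").fetchall()
print(rows)  # one row: ('example', 'apple,pear,orange,...') — item order may vary
```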
```
Example: (header row)
- apple (detail row)
```
It's common practice to create such header / detail breakout in your client (php) software.
Pro tip: Remember that if you don't specify ORDER BY, MySQL (and all SQLs) are permitted to return the information in your result in any convenient order. Enlarging on Gordon's fine answer, then, you might want:
```
SELECT a,
GROUP_CONCAT(CONCAT(b, ':', 'c') ORDER BY b,c) AS bcs
FROM t
GROUP BY A
ORDER BY A
```
I learned this the hard way when I helped write a SQL app that was really successful. All the ordering worked great until we switched over to higher - capacity clustered access methods. Then lots of "default" ordering broke and our customers saw strange stuff until we fixed it. | MySQL Group by one column, but select all data | [
"",
"mysql",
"sql",
""
] |
I'm using the following query to return the ages (in years) of all stored user date of births. It would be useful if I could essentially tally the ages so I could generate a report. The query is chunky and therefore makes it very difficult for me to understand how I'd do this, especially since aggregate functions cannot be assigned aliases.
```
Select
DateDiff(YEAR, dob, CURRENT_TIMESTAMP) -
CASE WHEN
DATEADD(year,DateDiff(YEAR, dob, CURRENT_TIMESTAMP),CURRENT_TIMESTAMP)
> CURRENT_TIMESTAMP
THEN 1
ELSE 0 END as ageYears
from [database].[dbo].[users]
```
Using the `distinct` keyword I can return the age groups, not the instances, but how would I join these with a `count`?
This output would be ideal:
```
age | numUsers
18 | 1
19 | 3
20 | 7
21 | 1
22 | 9
23 | 2
``` | Just use a CTE or subquery to create the ages first, and then a simple group by query to tally them up:
```
; With Ages as (
Select
DateDiff(YEAR, dob, CURRENT_TIMESTAMP) -
CASE WHEN
DATEADD(year,DateDiff(YEAR, dob, CURRENT_TIMESTAMP),CURRENT_TIMESTAMP)
> CURRENT_TIMESTAMP
THEN 1
ELSE 0 END as ageYears
from [database].[dbo].[users]
)
SELECT ageYears,COUNT(*) as NumUsers
FROM Ages
GROUP BY ageYears
```
The alternative, that doesn't need the subquery or CTE is to repeat the entire complex expression in the GROUP BY clause:
```
Select
DateDiff(YEAR, dob, CURRENT_TIMESTAMP) -
CASE WHEN
DATEADD(year,DateDiff(YEAR, dob, CURRENT_TIMESTAMP),CURRENT_TIMESTAMP)
> CURRENT_TIMESTAMP
THEN 1
ELSE 0 END as ageYears,
COUNT(*) as numUsers
from [database].[dbo].[users]
group by
DateDiff(YEAR, dob, CURRENT_TIMESTAMP) -
CASE WHEN
DATEADD(year,DateDiff(YEAR, dob, CURRENT_TIMESTAMP),CURRENT_TIMESTAMP)
> CURRENT_TIMESTAMP
THEN 1
ELSE 0 END
```
Personally, I prefer the former version. | Can you try this,
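Here is a runnable sketch of the CTE pattern using Python's `sqlite3`. Note that sqlite3 has no `DATEDIFF`/`DATEADD`, so the age expression is rewritten with `strftime` against a hypothetical reference date; only the compute-the-age-in-a-CTE-then-GROUP-BY shape matches the T-SQL above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (dob TEXT)")
con.executemany("INSERT INTO users VALUES (?)",
                [("1996-05-01",), ("1995-03-10",),
                 ("1995-11-30",), ("1994-07-04",)])

# Name the computed age once in the CTE, then tally it with GROUP BY.
# The (month-day < month-day) comparison yields 0/1, subtracting one year
# when the birthday has not yet occurred by the reference date.
rows = con.execute("""
    WITH Ages AS (
        SELECT CAST(strftime('%Y', '2014-01-15') AS INT)
             - CAST(strftime('%Y', dob) AS INT)
             - (strftime('%m-%d', '2014-01-15') < strftime('%m-%d', dob))
               AS ageYears
        FROM users
    )
    SELECT ageYears, COUNT(*) AS NumUsers
    FROM Ages
    GROUP BY ageYears
    ORDER BY ageYears
""").fetchall()
print(rows)
```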
```
Select X.age, sum (X.num) numUsers
From (
Select
DateDiff(YEAR, dob, CURRENT_TIMESTAMP) -
CASE WHEN
DATEADD(year,DateDiff(YEAR, dob, CURRENT_TIMESTAMP),CURRENT_TIMESTAMP)
> CURRENT_TIMESTAMP
THEN 1
ELSE 0 END age,
1 num
from [database].[dbo].[users]
) X
group by X.age
``` | Return number of people of a certain age deduced from stored DOBs | [
"",
"sql",
"sql-server",
"database",
""
] |
I currently have a table of Users, and at what time they connected to a device (e.g. a Wifi Router).
```
+-------------+-----------+---------+------------+---------------------+
| location_id | device_id | user_id | dwell_time | date |
+-------------+-----------+---------+------------+---------------------+
| 14 | 1 | 1 | 27.000000 | 2014-01-04 00:51:12 |
| 15 | 2 | 1 | 12.000000 | 2014-01-04 01:08:56 |
| 16 | 1 | 1 | 12.000000 | 2014-01-04 01:09:26 |
| 17 | 2 | 1 | 318.000000 | 2014-01-04 01:09:38 |
| 18 | 1 | 2 | 20.000000 | 2014-01-04 01:30:03 |
| 19 | 2 | 3 | 20.000000 | 2014-01-04 01:30:03 |
+-------------+-----------+---------+------------+---------------------+
```
I need to write a query title "Get Latest User Connections".
Basically, it needs to go through the history table shown above, and pick the latest record (based on Date) for each user and display it. In the example above, the result should be:
```
+-------------+-----------+---------+------------+---------------------+
| location_id | device_id | user_id | dwell_time | date |
+-------------+-----------+---------+------------+---------------------+
| 17 | 2 | 1 | 318.000000 | 2014-01-04 01:09:38 |
| 18 | 1 | 2 | 20.000000 | 2014-01-04 01:30:03 |
| 19 | 2 | 3 | 20.000000 | 2014-01-04 01:30:03 |
+-------------+-----------+---------+------------+---------------------+
```
Can someone please help me write a SQL statement that does this? | ```
select *
from users
inner join (select user_id,max(date) as maxdate
from users
group by user_id)T1
on T1.user_id = users.user_id
AND T1.maxdate = users.date
```
or if you don't want to have a subquery, you can user @variables like this query below
```
SELECT location_id,device_id,user_id,dwell_time,date,
IF(@prevUserId IS NULL OR @prevUserId != user_id,@row:=1,@row:=@row+1) as row,
@prevUserId := user_id
FROM users
HAVING row = 1
ORDER BY user_id,date DESC
```
here's the [sqlFiddle](http://sqlfiddle.com/#!2/2f26be/1/0) | Assuming the combination of `user_id` and `date` is unique in the table, you could
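The MAX-per-group join from the first query can be checked against the question's sample data with Python's `sqlite3` (the `date` column is stored as text here, which still compares correctly in `YYYY-MM-DD HH:MM:SS` format):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE users
               (location_id INT, device_id INT, user_id INT,
                dwell_time REAL, date TEXT)""")
con.executemany("INSERT INTO users VALUES (?, ?, ?, ?, ?)", [
    (14, 1, 1,  27.0, "2014-01-04 00:51:12"),
    (15, 2, 1,  12.0, "2014-01-04 01:08:56"),
    (16, 1, 1,  12.0, "2014-01-04 01:09:26"),
    (17, 2, 1, 318.0, "2014-01-04 01:09:38"),
    (18, 1, 2,  20.0, "2014-01-04 01:30:03"),
    (19, 2, 3,  20.0, "2014-01-04 01:30:03"),
])

# Join each row against its user's maximum date to keep only the latest row
rows = con.execute("""
    SELECT u.* FROM users u
    JOIN (SELECT user_id, MAX(date) AS maxdate
          FROM users GROUP BY user_id) t
      ON t.user_id = u.user_id AND t.maxdate = u.date
    ORDER BY u.user_id
""").fetchall()
for r in rows:
    print(r)
```

This yields exactly the three rows (ids 17, 18, 19) the question expects.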
```
SELECT
tablename.*
FROM tablename
INNER JOIN (
SELECT user_id, MAX(`date`) AS maxdate
FROM tablename
GROUP BY user_id
) AS selector
ON tablename.user_id=selector.user_id AND tablename.`date`=selector.maxdate
``` | SQL Statement Construction: Selecting Unique Records | [
"",
"mysql",
"sql",
"select",
"group-by",
"unique",
""
] |
I have a schema that has two relations. One is loan which has the attributes loan-number, branch-name and amount. The other is borrower which has customer-name and loan-number as its attributes. These two relations are linked via loan-number.
How would I write a query in relational algebra to find the names of customers with a balance less than 10000?
How would I do this as a SQL query? | Do some research on relational algebra 8 main operators: restriction, projection, cartesian product, join, union, intersection, set difference and division.
To answer your question:
```
loan(loan_number, branch_name, amount)
borrower(customer_name, loan_number)
```
Perform a natural join of the both relations, apply the restriction (balance less than 10000) and then display the names with the use of a projection. The following 2 relational algebra expressions below will both answer your question
*(image: the two equivalent relational algebra expressions)*
Both expressions evaluate to the following SQL query:
```
select customer_name
from borrower b, loan l
where b.loan_number=l.loan_number and amount>10000;
``` | Disclaimer: I am not too familiar with relational algebra.
SQL can be quickly seen as using an implicit inner join and then an filter on loan.amount:
```
SELECT customer-name
FROM Borrower, Loan
WHERE Customer.loan-number = Loan.loan-number
AND Loan.amount > 10000
```
And then translate this into an relational algebra keeping in mind that the following symbols are the only ones needed for this:
The select operation (σ): - to identify a set of tuples which is a part of a relation and to extract only these tuples out. The select operation selects tuples that satisfy a given predicate or condition.
The project operation (Π): - returns its argument relation with certain attributes left out.
```
Π customer-name (σ Borrower.loan-number=Loan.loan-number (σ Loan.amount>10000 (Borrower X Loan)))
``` | SQL Relational Algebra Query | [
"",
"sql",
"relational-algebra",
""
] |
I am trying to return people from a search string that can consist of any variation of a name, such as: 'john', 'john smith', 'smith, john', 'john jason smith', or any other common variation. I can cleanup the string and format as a regular expression if need be.
I am aiming for something similar to this:
```
SELECT firstname,
middlename,
lastname
FROM people
WHERE firstname LIKE 'search'
OR middlename LIKE 'search'
OR lastname LIKE 'search';
```
The problem is that a search string such as 'john smith' will return all **johns** and all **smiths** instead of only **john smith**. Any ideas? | If you want to only return rows that meet all of your supplied criteria then you need to add those filter criteria. There are a couple of ways to do it but you could do something as simple as this:
* Split up your term into the searchable tokens
* Make your search case insensitive by forcing everything to upper case
* Use INTERSECT to only return those rows that match all your tokens
* Be sure to use wildcarding if desired - In this example token1 = '%JOHN%' and token2 = '%SMITH%'
**SQL:**
```
SELECT firstname,
middlename,
lastname
FROM people
WHERE ( Upper(firstname) LIKE token1
OR Upper(middlename) LIKE token1
OR Upper(lastname) LIKE token1 )
INTERSECT
SELECT firstname,
middlename,
lastname
FROM people
WHERE ( Upper(firstname) LIKE token2
OR Upper(middlename) LIKE token2
OR Upper(lastname) LIKE token2 );
``` | Try this dude. I tested this against SQL server and it works. Make the necessary change for it to work against Oracle. I've only tested this with 1 field name. Just edit the code so it suits your needs. It's basically @Carth 's solution implemented only using SQL.
```
DECLARE @stringsearch varchar(255)='stringvalue'
while CHARINDEX(' ',@stringsearch) > 1
begin
SET @stringsearch=REPLACE(@stringsearch,' ',' ')
end
SET @stringsearch=REPLACE(@stringsearch,' ','%'' AND fieldname LIKE ''%')+'%'''
DECLARE @sql varchar(max)=''
SET @sql='SELECT fieldname FROM TableName
WHERE fieldname Like ''%'+ @stringsearch
EXEC (@sql)
``` | Searching names with SQL | [
"",
"sql",
"oracle",
""
] |
I want to insert data into my table, but insert only data that doesn't already exist in my database.
Here is my code:
```
ALTER PROCEDURE [dbo].[EmailsRecebidosInsert]
(@_DE nvarchar(50),
@_ASSUNTO nvarchar(50),
@_DATA nvarchar(30) )
AS
BEGIN
INSERT INTO EmailsRecebidos (De, Assunto, Data)
VALUES (@_DE, @_ASSUNTO, @_DATA)
WHERE NOT EXISTS ( SELECT * FROM EmailsRecebidos
WHERE De = @_DE
AND Assunto = @_ASSUNTO
AND Data = @_DATA);
END
```
And the error is:
> Msg 156, Level 15, State 1, Procedure EmailsRecebidosInsert, Line 11
> Incorrect syntax near the keyword 'WHERE'. | instead of below Code
```
BEGIN
INSERT INTO EmailsRecebidos (De, Assunto, Data)
VALUES (@_DE, @_ASSUNTO, @_DATA)
WHERE NOT EXISTS ( SELECT * FROM EmailsRecebidos
WHERE De = @_DE
AND Assunto = @_ASSUNTO
AND Data = @_DATA);
END
```
replace with
```
BEGIN
IF NOT EXISTS (SELECT * FROM EmailsRecebidos
WHERE De = @_DE
AND Assunto = @_ASSUNTO
AND Data = @_DATA)
BEGIN
INSERT INTO EmailsRecebidos (De, Assunto, Data)
VALUES (@_DE, @_ASSUNTO, @_DATA)
END
END
```
**Updated :** (thanks to @Marc Durdin for pointing)
Note that under high load, this will still sometimes fail, because a second connection can pass the IF NOT EXISTS test before the first connection executes the INSERT, i.e. a race condition. See [stackoverflow.com/a/3791506/1836776](https://stackoverflow.com/a/3791506/1836776) for a good answer on why even wrapping in a transaction doesn't solve this. | For those looking for the *fastest way*, I recently [came across these benchmarks](http://cc.davelozinski.com/sql/fastest-way-to-insert-new-records-where-one-doesnt-already-exist) where apparently using "INSERT SELECT... EXCEPT SELECT..." turned out to be the fastest for 50 million records or more.
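As the update notes, the `IF NOT EXISTS` check and the `INSERT` are separate statements and can race. A single-statement `INSERT ... SELECT ... WHERE NOT EXISTS` variant closes that window; here is a runnable sketch with Python's `sqlite3` (parameter names mirror the procedure's arguments):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EmailsRecebidos (De TEXT, Assunto TEXT, Data TEXT)")

def insert_if_missing(de, assunto, data):
    # The NOT EXISTS check and the INSERT run as one atomic statement,
    # so there is no separate check-then-insert race window.
    con.execute("""
        INSERT INTO EmailsRecebidos (De, Assunto, Data)
        SELECT ?, ?, ?
        WHERE NOT EXISTS (SELECT 1 FROM EmailsRecebidos
                          WHERE De = ? AND Assunto = ? AND Data = ?)
    """, (de, assunto, data, de, assunto, data))

insert_if_missing("a@x", "hi", "2014-01-01")
insert_if_missing("a@x", "hi", "2014-01-01")   # duplicate: skipped
insert_if_missing("b@x", "yo", "2014-01-02")
count = con.execute("SELECT COUNT(*) FROM EmailsRecebidos").fetchone()[0]
print(count)
```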
Here's some sample code from the article (the 3rd block of code was the fastest):
```
INSERT INTO #table1 (Id, guidd, TimeAdded, ExtraData)
SELECT Id, guidd, TimeAdded, ExtraData
FROM #table2
WHERE NOT EXISTS (Select Id, guidd From #table1 WHERE #table1.id = #table2.id)
-----------------------------------
MERGE #table1 as [Target]
USING (select Id, guidd, TimeAdded, ExtraData from #table2) as [Source]
(id, guidd, TimeAdded, ExtraData)
on [Target].id =[Source].id
WHEN NOT MATCHED THEN
INSERT (id, guidd, TimeAdded, ExtraData)
VALUES ([Source].id, [Source].guidd, [Source].TimeAdded, [Source].ExtraData);
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT id, guidd, TimeAdded, ExtraData from #table2
EXCEPT
SELECT id, guidd, TimeAdded, ExtraData from #table1
------------------------------
INSERT INTO #table1 (id, guidd, TimeAdded, ExtraData)
SELECT #table2.id, #table2.guidd, #table2.TimeAdded, #table2.ExtraData
FROM #table2
LEFT JOIN #table1 on #table1.id = #table2.id
WHERE #table1.id is null
``` | SQL Server Insert if not exists | [
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"sql-insert",
""
] |
In SQL Server 2008, this query works:
```
SELECT
SUM(CAST(isredeemed AS TINYINT)) AS totalredeemed,
FROM rewards
GROUP BY merchantid
```
It gives you the number of redeemed rewards by merchant. The `TINYINT` cast is need to avoid the error `Operand data type bit is invalid for sum operator.`
Now I'd like to do a similar query, but one that only finds rewards redeemed in the last few days. I tried this:
```
SELECT
SUM(CAST((isredeemed & ( MIN(dateredeemed) > '2014-01-10 05:00:00')) AS TINYINT)) AS claimedthisweek,
FROM rewards
GROUP BY merchantid
```
and I get the error
> Incorrect syntax near '>'.
I also tried replacing `&` with `&&` and also with `AND`. But those don't work either.
How can I make the example work? | This question lacks detail to give an exact answer but you need to use a derived table or subquery for the calculation.
something like this
```
SELECT r1.merchantid, r2.claimedthisweek
FROM rewards r1
JOIN (
SELECT merchantid, SUM(CAST(isredeemed AS INT)) claimedthisweek
FROM rewards
GROUP BY merchantid
HAVING MIN(dateredeemed) > '20140101'
    ) r2 ON r2.merchantid = r1.merchantid
``` | I think you want this would work but it would count some twice:
```
SELECT
SUM(case when MIN(dateredeemed) > '2014-01-10 05:00:00' then 1 else 0 end )
FROM rewards
GROUP BY merchantid
``` | Using boolean logic inside SUM function | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
We will get an id from the user in this form:
```
"A-123456-14"
```
and want to search in database with simple id in this form :
```
123456
```
I have tried
```
select * from orders where id = '%' + searchId + '%';
```
here id = 123456 in the database and searchId = 'A-123456-14', which we get from the user.
but it's not working properly
(A is the prefix, - is the delimiter and 14 is the postfix)
Please help me to solve this problem. | you can use:
```
SELECT * FROM orders WHERE 'A-123456-14' REGEXP CONCAT('[[:punct:]]', id, '[[:punct:]]');
```
in the above you can replace 'A-123456-14' with the user input searchId
I have tried it and it works fine
SELECT
*
FROM
orders
WHERE
id = SUBSTRING_INDEX(
SUBSTRING_INDEX(searchId, '-', 2),
'-',- 1)
```
EDIT:
If you want to make everything dynamic then you can try this (you should pass the delimiter as a parameter as well, if it is dynamic).
```
SELECT
*
FROM
orders
WHERE
id = SUBSTRING_INDEX(
SUBSTRING_INDEX(searchId, delimiter, 2),
delimiter, - 1)
``` | search database id field with string manipulated id | [
"",
"mysql",
"sql",
"search",
"select",
"sql-like",
""
] |
```
SELECT bm.WinningNumber,bd.BetOnNumber,"WinLossAmount" =
CASE
WHEN 2 in (2,1) THEN ('WIN')
END
FROM BettingMaster bm inner join BettingDetail bd on bm.GameID = bd.GameID
where bd.GameID = 1
```
This works like a charm and I get 'WIN' in WinLossAmount. Now I actually have the value 2 in column WinningNumber (varchar) and the value 2,1 in column BetOnNumber (varchar). I retried the statement as
```
SELECT bm.WinningNumber,bd.BetOnNumber,"WinLossAmount" =
CASE
WHEN bm.WinningNumber in (bd.BetOnNumber) THEN ('WIN')
END
FROM BettingMaster bm inner join BettingDetail bd on bm.GameID = bd.GameID
where bd.GameID = 1
```
This doesn't work. Is it not possible this way?! Any help?? | IN operator in SQL searches values in the row set. But in the second example you try to find INT value in the STRING so you can't use `IN` this way.
In the most SQL systems you can try to use following condition. Where `+` is a concatenate operator in MSSQL (in Oracle `||`, in MySQL `CONCAT()`)
```
WHEN ','+bd.BetOnNumber+',' LIKE '%,'+CAST(bm.WinningNumber AS VARCHAR(10))+',%'
THEN ...
```
Also in MySQL you can use [FIND\_IN\_SET()](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set) function:
```
WHEN FIND_IN_SET(bm.WinningNumber,bd.BetOnNumber) THEN ...
```
PS: Both ways can't use indexes so you shouldn't use them on big tables. | I believe the problem is that in the first statement, SQL is comparing 2 to 1 and then 2 to 2, all integers. Obviously, finding the match and working.
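The delimiter-wrapping `LIKE` trick is easy to get wrong (e.g. winning number `2` must not match bets on `12` or `21`), so here is a quick check with Python's `sqlite3`, using `||` for concatenation as in Oracle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bets (WinningNumber TEXT, BetOnNumber TEXT)")
con.executemany("INSERT INTO bets VALUES (?, ?)",
                [("2", "2,1"), ("2", "12,21"), ("3", "2,1")])

# Wrap both sides in commas so '2' matches ',2,' but not ',12,' or ',21,'
rows = con.execute("""
    SELECT WinningNumber, BetOnNumber,
           CASE WHEN ',' || BetOnNumber || ',' LIKE '%,' || WinningNumber || ',%'
                THEN 'WIN' END AS WinLossAmount
    FROM bets
""").fetchall()
print(rows)
```

Only the first row ("2" against "2,1") is flagged WIN; "2" against "12,21" correctly stays NULL.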
In the second case, you're comparing 2 to a string of "1,2" and 2 does not equal "1,2". You'll need to split the varchar into a series of integers before you compare it. Take a look at:
[Parse comma-separated string to make IN List of strings in the Where clause](https://stackoverflow.com/questions/17481479/parse-comma-separated-string-to-make-in-list-of-strings-in-the-where-clause)
This should help you out. | trouble with IN clause in sql | [
"",
"sql",
"sql-server-2012",
""
] |
Let's say I have a Table that looks like this:
```
id fk value
------------
1 1 'lorem'
2 1 'ipsum'
3 1 'dolor'
4 2 'sit'
5 2 'amet'
6 3 'consetetur'
7 3 'sadipscing'
```
Each **fk** can appear multiple times, and for each **fk** I want to select the last row (or more precise the row with the respectively highest id) – like this:
```
id fk value
------------
3 1 'dolor'
5 2 'amet'
7 3 'sadipscing'
```
I thought I could use the keyword `DISTINCT` here like this:
```
SELECT DISTINCT id, fk, value
FROM table
```
but I am not sure on which row `DISTINCT` will return and it must be the last one.
Is there anything like (*pseudo*)
```
SELECT id, fk, value
FROM table
WHERE MAX(id)
FOREACH DISTINCT(fk)
```
I hope I am making any sense here :)
thank you for your time | ```
SELECT *
FROM table
WHERE id IN (SELECT MAX(id) FROM table GROUP BY fk)
``` | Try this:
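A quick check of the `IN (SELECT MAX(id) ... GROUP BY fk)` approach against the question's sample data, using Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INT, fk INT, value TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 1, "lorem"), (2, 1, "ipsum"), (3, 1, "dolor"),
    (4, 2, "sit"), (5, 2, "amet"),
    (6, 3, "consetetur"), (7, 3, "sadipscing"),
])

# Keep only the row whose id is the per-fk maximum
rows = con.execute("""
    SELECT * FROM t
    WHERE id IN (SELECT MAX(id) FROM t GROUP BY fk)
    ORDER BY fk
""").fetchall()
print(rows)
```

This returns exactly the three expected rows: (3, 1, 'dolor'), (5, 2, 'amet'), (7, 3, 'sadipscing').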
```
SELECT a.id, a.fk, a.value
FROM tableA a
INNER JOIN (SELECT MAX(a.id) id, a.fk FROM tableA a GROUP BY a.fk
) AS b ON a.fk = b.fk AND a.id = b.id;
```
**OR**
```
SELECT a.id, a.fk, a.value
FROM (SELECT a.id, a.fk, a.value FROM tableA a ORDER BY a.fk, a.id DESC) AS a
GROUP BY a.fk;
``` | SELECT only one entry of multiple occurrences | [
"",
"mysql",
"sql",
"select",
"group-by",
"greatest-n-per-group",
""
] |
I am trying to write a SQL query in which I combine the compare operators with substring.
In my column date I have the following value: **09-01-2014 12:02:55**
What I am trying to do now is select all rows which are **>= 09-01-2014** and, for example, **<= 22-01-2014**.
How can I do it?
I have tried, for example, this code:
```
SELECT * From table Where Name= 'Something'
AND SUBSTRING(date,1,10) = '09-01-2014'
AND SUBSTRING(date,1,10) < '22-01-2014'
``` | You can use the `BETWEEN` operator
```
SELECT * FROM table
WHERE Name = 'Something'
AND SUBSTRING(date, 1, 10) BETWEEN '09-01-2014' AND '22-01-2014'
```
EDIT: I'm still leaving this here, but it is not an error proof solution (as pointed out by oerkelens down in the comments) | The `BETWEEN` operator will work, like this:
```
SELECT *
From table
Where Name= 'Something'
AND `date` BETWEEN '2014-01-09' AND '2014-01-23'
```
Working Demo: <http://sqlfiddle.com/#!2/b4d7e> | SQL - Select all rows which is >= and <= | [
"",
"mysql",
"sql",
"datetime",
"select",
""
] |
Say I have a package with a spec defined in **mySpec.pks**
```
CREATE OR REPLACE PACKAGE TEST_PKG AS
PROCEDURE TEST_1 ( asdf int );
PROCEDURE TEST_2 ( asdf int, asdf2 char );
END;
```
Is it possible to split the implementation of each procedure into multiple body "CREATE OR REPLACE PACKAGE BODY" statements?
I'm imagining something like this for the body files:
**test1.pkb** ( only has implementation of procedure \*TEST\_1\* )
```
CREATE OR REPLACE PACKAGE BODY TEST_PKG AS
PROCEDURE TEST_1 ( asdf int ) IS
BEGIN
--do stuff
END;
END;
```
**test2.pkb** (only has implementation of procedure \*TEST\_2\*)
```
CREATE OR REPLACE PACKAGE BODY TEST_PKG AS
PROCEDURE TEST_2 ( asdf int, asdf2 char ) IS
BEGIN
--do stuff
END;
END;
``` | No, it is not. A package body is a single entity, there must be a single `CREATE` statement. Just like the entire implementation of an object has to be in a single file, the entire implementation of a package must be in a single file.
The desire to split the implementation across multiple files would tend to imply to me that the package itself is trying to do too much and that the entire package needs to be refactored into two or more smaller, more self-contained packages. | You can't quite do what you suggest, but it is possible to split the package body *file* into smaller sections, as long as you're using SQL\*Plus to load the package. I'm not suggesting this is a good idea, and I'd generally agree with Justin about refactoring your design if you're doing this for size reasons. But just because you can:
Define the package body in a script, say `test_pkg.pkb`, and 'include' sub-files for each procedure:
```
CREATE OR REPLACE PACKAGE BODY TEST_PKG AS
@test_1.sql
@test_2.sql
END TEST_PKG;
/
```
Then in `test_1.sql`:
```
PROCEDURE TEST_1 ( asdf int ) IS
BEGIN
--do stuff
END TEST_1;
```
And in `test_2.sql`:
```
PROCEDURE TEST_2 ( asdf int, asdf2 char ) IS
BEGIN
--do stuff
END TEST_2;
```
When you execute the script it's still seen as a single `create` statement by the parser.
You can't run the 'included' files separately, they'll only work as part of the package body load - `sqlplus @test_pkg.pkb` - so if your aim is to allow just one procedure to be reloaded independently this won't work.
There is also another downside, in that any compilation error messages will refer to line numbers in the package body overall, so you'd have to work out which of the sub-files the relevant code is in, which is doable but a bit tricky. And in the same vein, what's stored in `user_source` won't match what is in your files. It's kind of like looking at a pre-processed C or Pro\*C file I suppose, related but not always easy to reconcile with the original.
So... probably not worth the effort and potential for confusion. | Define elements of a package spec in multiple scripts? | [
"",
"sql",
"database",
"oracle",
"plsql",
""
] |
I need a little help with a MySQL query.
I have two simple tables: one is a list of products and one is a list of warehouse quantities.
product:
```
product_id product_name
1 name1
2 name2
3 name3
```
warehouse\_product
```
id warehouse_id product_id product_quantity
1 1 1 15
2 2 1 30
3 1 2 100
4 2 2 30
5 1 3 20
6 2 3 40
```
The results Im looking to get from the above data would be
```
product_id product_name product_quantity
1 name1 45
2 name2 130
3 name3 60
```
I've tried many query but it's not working. My query is:
```
SELECT
product_id as product_id, SUM(quantity) as quantity
FROM
(SELECT
p.product_id as product_id, wp.product_quantity as quantity
FROM
product as p
LEFT JOIN warehouse_product as wp ON p.product_id = wp.product_id
WHERE
product_active = 1)
``` | Try this:
```
SELECT p.product_id, p.product_name, SUM(wp.quantity) as quantity
FROM product as p
LEFT JOIN warehouse_product as wp
ON p.product_id = wp.product_id
AND product_active = 1
GROUP BY p.product_id, p.product_name
```
After JOINing the two tables you need to use `GROUP BY` so that `SUM` of quantity can be calculated for each product. | ```
SELECT P.productid,
P.ProductName,
Sum(W.product_quantity) As quantity
FROM Product AS P
LEFT JOIN warehouse_product AS W
ON P.productid = W.productid
GROUP BY P.ProductId , P.ProductName
```
`SQLFiddle` | Join two table. One column using SUM | [
"",
"mysql",
"sql",
""
] |
For the sake of brevity, let's assume we have a `numbers` table with 2 columns: `id` & `number`:
```
CREATE TABLE numbers(
id INT NOT NULL AUTO_INCREMENT,
NUMBER INT NOT NULL,
PRIMARY KEY ( id )
);
```
I want the `number` column to auto-increment, but restart from 1 after hitting 100, satisfying the following requirements:
1. If there are no records yet, set `number` to 1.
2. If there are already records in the table:
1. If the last record's `number` is less than 100, set `number` to that + 1.
2. If the last record's `number` is 100 or more, set `number` to 1.
Here's what I have so far:
```
INSERT INTO `numbers` (`number`)
VALUES (
IF(
IFNULL(
(SELECT `number`
FROM `numbers` as n
ORDER BY `id` DESC
LIMIT 1),
0
) > 99,
1,
IFNULL(
(SELECT `number`
FROM `numbers` as n
ORDER BY `id` DESC
LIMIT 1),
0
) + 1
)
)
```
This works, but uses two identical `SELECT` queries.
Is there any way to remove the duplicate `SELECT`?
---
**P.S.** I want to do all this within a query. No triggers or stored procedures. | Try to use modulo `%` operator
```
INSERT INTO `numbers` (`number`)
VALUES (
IFNULL(
(SELECT `number`
FROM `numbers` as n
ORDER BY `id` DESC
LIMIT 1),
0
) % 100 + 1
)
``` | You could use a temp variable.
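A runnable sketch of the modulo trick with Python's `sqlite3` (`COALESCE` plays the role of MySQL's `IFNULL`, which sqlite3 also supports):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE numbers
               (id INTEGER PRIMARY KEY AUTOINCREMENT, number INT NOT NULL)""")

def insert_next():
    # Empty table: the scalar subquery yields NULL, COALESCE falls back to 0,
    # and 0 % 100 + 1 = 1. A last value of 100 wraps back to 1 via the modulo.
    con.execute("""
        INSERT INTO numbers (number)
        VALUES (COALESCE((SELECT number FROM numbers
                          ORDER BY id DESC LIMIT 1), 0) % 100 + 1)
    """)

insert_next()                                        # empty table -> 1
con.execute("INSERT INTO numbers (number) VALUES (100)")
insert_next()                                        # 100 % 100 + 1 -> 1
vals = [r[0] for r in con.execute("SELECT number FROM numbers ORDER BY id")]
print(vals)
```

The single `SELECT` appears once, satisfying the question's goal of removing the duplicate subquery.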
```
INSERT INTO `numbers`(`number`) VALUES (
IF((@a := IFNULL((SELECT number FROM `numbers` AS n ORDER BY id DESC LIMIT 1), 0)) > 99, 1, @a + 1)
);
``` | Any way to remove the duplicate SELECT statement? | [
"",
"mysql",
"sql",
"select",
"insert",
"select-insert",
""
] |
I want to know what is the meaning or purpose of inserting a value like-
```
insert into table_name (column_name) values('');
```
And what is the difference between the above statement and not inserting a value for the column at all (i.e. letting it be `null`)?
```
select * from tablename where id=''
```
Selects records where ID is NULL
```
select * from tablename where id is null
```
A NULL indicates absence of data, whereas an '' (empty string) is actually a string of length 0.
For your better understanding:
```
SELECT LEN(NULL) AS LEN_NULL, LEN('') AS LEN_EMPTY_STR
```
would give :-
```
LEN_NULL LEN_EMPTY_STR
NULL 0
``` | in case if you set `NOT NULL` as a property of the column - this doesn't give you an error as a result of insert. | difference between select * from tablename where id='' and select * from tablename where id is null | [
"",
"mysql",
"sql",
""
] |
Ok, I am using Mysql DB. I have 2 simple tables.
```
Table1
ID-Text
12-txt1
13-txt2
42-txt3
.....
Table2
ID-Type-Text
13- 1 - MuTxt1
42- 1 - MuTxt2
12- 2 - Xnnn
```
Now I want to join these 2 tables to get all data for `Type=1` in table 2
SQL1:
```
Select * from
Table1 t1
Join
(select * from Table2 where Type=1) t2
on t1.ID=t2.ID
```
SQL2:
```
Select * from
Table1 t1
Join
Table2 t2
on t1.ID=t2.ID
where t2.Type=1
```
These 2 queries give the same result, but which one is faster?
I don't know how MySQL does the join (or how the join works in MySQL), and that's why I wonder about this!
Extra info: now if I don't want `type=1` but instead want `t2.text='MuTxt1'`, SQL2 will become
```
Select * from
Table1 t1
Join
Table2 t2
on t1.ID=t2.ID
where t2.text='MuTxt1'
```
I feel like this query is slower. | Sometimes the MySQL query optimizer does a pretty decent job and sometimes it sucks. Having said that, there are exceptions to my answer where the optimizer optimizes something else better.
Sub-queries are generally expensive as MySQL will need to execute them and store their results separately. Normally, if you could use either a sub-query or a join, the join is faster, especially when the sub-query is part of your where clause without a limit on it.
```
Select *
from Table1 t1
Join Table2 t2 on t1.ID=t2.ID
where t2.Type=1
```
and
```
Select *
from Table1 t1
Join Table2 t2
where t1.ID =t2.ID AND t2.Type=1
```
should perform equally well, while
```
Select *
from Table1 t1
Join (select *
from Table2
where Type=1) t2
on t1.ID=t2.ID
```
most likely is a lot slower as MySQL stores the result of `select * from Table2 where Type=1` into a temporary table.
Generally, joins work by building a table comprised of all combinations of rows from both tables and afterwards removing lines which do not match the conditions. MySQL will of course try to use indexes containing the columns compared in the on clause and specified in the where clause.
If you are interested in which indexes are used, write `EXPLAIN` in front of your query and execute it. | In my view, the 2nd query is better than the first in terms of code readability and performance. You can also include the filter condition in the Join clause, like:
```
Select * from
Table1 t1
Join
Table2 t2 on t1.ID=t2.ID and t2.Type=1
```
You can compare execution times for all the queries in SQL Fiddle here:
[Query 1](http://sqlfiddle.com/#!2/e0c7b/2)
[Query 2](http://sqlfiddle.com/#!2/e0c7b/3)
[My Query](http://sqlfiddle.com/#!2/e0c7b/4) | Which Query is faster if we put the "Where" inside the Join Table or put it at the end? | [
"",
"mysql",
"sql",
""
] |
Okay, so I've got two tables. One table (`table 1`) contains a column `Books_Owned_ID` which stores a series of numbers in the form of `1,3,7`. I have another table (`table 2`) which stores the Book names in one column and the book ID in another column.
What I want to do is write an SQL query which will take the numbers from `Books_Owned_ID` and display the names of those books in a new column. Like so:
```
|New Column |
Book 1 Name
Book 2 Name
Book 3 Name
```
I can't wrap my head around this; it's simple enough, but all the threads I look at get really confusing.
`Table1` contains the following columns:
```
|First_Name| Last_Name| Books_Owned_ID |
```
`Table2` contains the following columns:
```
|Book_Name|Book_ID|
``` | The core of this centers around data normalisation... Each fact is stored only once (and so is "authoritative"). You should also get into the habit of only storing a single fact in any field.
So, imagine the following table layouts...
```
Books
Id, Name, Description
Users
Id, Username, EmailAddress, PasswordHash, etc....
BooksOwned
UserId, BookId
```
So if a single user owns multiple books, there will be multiple entries in the `BooksOwned` table...
```
UserId, BookID
1, 1
1, 2
1, 3
```
Indicates that User 1 owns books 1 through 3.
The reason to do it this way is that it makes it much easier to query in future. You also treat BookId as an Integer instead of a string containing a list - so you don't need to worry about string manipulation to do your query.
The following would return the name of all books owned by the user with `Id = 1`
```
SELECT Books.Name
FROM BooksOwned
INNER JOIN Books
ON BooksOwned.BookId = Books.Id
WHERE BooksOwned.UserId = 1
``` | You need to do an inner join. [This is a great example/reference for these](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
```
SELECT Book_Name FROM Table2
INNER JOIN Table1
ON Table1.Books_Owned_ID = Table2.Book_ID
```
**EDIT** [SQL Fiddle](http://sqlfiddle.com/#!3/6abca/2)
I will work on getting the column comma split working. It won't be a lot extra for this.
**EDIT 2** [See this answer to build a function to split your string.](https://stackoverflow.com/a/878964/300913) Then you can do this:
```
SELECT Book_Name FROM Table2
WHERE Book_ID IN(SELECT FN_ListToTable(',',Table1.Books_Owned_ID) FROM Table1 s)
``` | SQL Cross-Table Referencing | [
"",
"sql",
"sql-server",
""
] |
I'm really confused here. Running the following query:
```
SELECT * FROM `articles` WHERE `form` = 'Depotplåster' AND `size` = 5
```
returns rows that also start with "5", despite my using neither `LIKE` nor a `%` wildcard operator. How come?
The `size` field is of type `VARCHAR`.
 | That is because you're using comparison between numeric and varchar data. MySQL will implicitly convert your column to `double`, resulting in `5`. See this simple test data:
```
mysql> select * from test;
+-----------------+
| name |
+-----------------+
| 5 |
| 5 and some crap |
+-----------------+
2 rows in set (0.00 sec)
```
Now, "good" way: compare strings:
```
mysql> select * from test where name = '5';
+------+
| name |
+------+
| 5 |
+------+
1 row in set (0.00 sec)
```
And "bad" way: compare integers:
```
mysql> select * from test where name = 5;
+-----------------+
| name |
+-----------------+
| 5 |
| 5 and some crap |
+-----------------+
2 rows in set, 1 warning (0.05 sec)
```
-and here is your reason:
```
+---------+------+-----------------------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------------------+
| Warning | 1292 | Truncated incorrect DOUBLE value: '5 and some crap' |
+---------+------+-----------------------------------------------------+
1 row in set (0.00 sec)
```
Finally, to understand why it is so:
```
SELECT
CAST('5' AS DECIMAL) AS 5d,
CAST('5 and some crap' AS DECIMAL) AS 5sd,
CAST('5' AS DECIMAL) = CAST('5 and some crap' AS DECIMAL) AS areEqual;
```
Will result in:
```
+----+-----+----------+
| 5d | 5sd | areEqual |
+----+-----+----------+
| 5 | 5 | 1 |
+----+-----+----------+
1 row in set (0.00 sec)
```
-as you can see, the non-significant part was just truncated (as mentioned in the warning message above) | ```
SELECT * FROM `articles` WHERE `form` = 'Depotplåster' AND `size` = '5'
-- this will compare the string 'size' with the string '5'
SELECT * FROM `articles` WHERE `form` = 'Depotplåster' AND `size` = 5
-- this will convert string 'size' to integer and then compare with the integer 5
```
The conversion of a string to an integer looks for digits at the beginning of the string, and takes the longest numeric prefix up to the first non-numeric character.
```
select '5s4'=5, 's5'=5, '5'=5 -- =>1,0,1
``` | Query returns "LIKE" results despite not having wildcards? | [
"",
"mysql",
"sql",
"equality",
""
] |
I wish to find all records of the current day. I have a field Date of type DATE.
I am getting an error on SQL Server:
```
'DATE' is not a recognized built-in function name.
```
on this line
```
(DATE(EnterDate) = CURDATE() )
``` | As the error states, there is no `DATE` *function* in SQL Server 2008 or 2012 (you tagged both so I'm not sure which you're targeting). You *can*, however, cast to a `date` type in SQL Server 2008 and above:
```
WHERE EnterDate = CONVERT(date,GETDATE())
```
Note that there's no `CURDATE` function either, so I've translated that to `GETDATE()` | Use the following condition in your where cluase
```
WHERE CAST(DateColumn AS DATE) = CAST(GETDATE() AS DATE)
^------------ Your Column Name with `Date` or 'DateTime' data type
```
`CURDATE()` is a MySQL function; in SQL Server we have the `GETDATE()` function to get the current date and time. | 'DATE' is not a recognized built-in function name | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
"ssms",
""
] |
Conceptual stage question:
I have several Tables (**Person**, **Institution**, **Factory**) each has many kinds of **Addresses** (**Mailing**, **Physical**)
Is there a way to create a **single** Address table that contains **all the addresses** of all the Entities?
I'd rather not have a **PersonAddress** and **FactoryAddress** etc set of tables.
Is there another option?
The amount of data will only be several thousand addresses at most, so light in impact. | My proposal relies on the principle that one entity (person, Institution, Factory, etc) can have multiple adresses, which is usually the case (home, business, etc), and that one adress can be shared by entities of different nature:
```
CREATE TABLE ADDRESS
(
ID INT IDENTITY PRIMARY KEY NOT NULL,
.... (your address fields here)
id_Person ... NULL,
id_Institution ... NULL,
id_Factory ... NULL
)
```
The main limit is that 2 different persons cannot share the same address. In such a situation, you'll have to go with an additional "EntityAddress" table, like this:
```
CREATE TABLE ADDRESS
(
ID INT IDENTITY PRIMARY KEY NOT NULL,
.... (your address fields here)
)
CREATE TABLE ENTITY_ADDRESS
(
ID INT IDENTITY PRIMARY KEY NOT NULL
id_Address .... NOT NULL,
id_Person .... NULL,
id_Institution ... NULL,
id_Factory .... NULL
)
```
The last model allows you, for example, to share one address among multiple persons working in the same institution.
**BUT**: in my opinion, the 'better' solution would be to merge your different entities into one table. You will then need:
1. An Entity Table, made for all entities
2. An Entity Type table, that will contain the different entity types.
In your case you have at least 3 rows: persons, factories, institutions.
If one address per entity is enough, you could go for the address details as properties of the Entity table.
If you need multiple addresses per entity, you'll have to go with the Addresses table with an Id\_Entity as a foreign key.
If you want to share one address among multiple entities, each entity having potentially multiple addresses (a many-to-many relation between entities and addresses), then you will need to go for the EntityAddress table in addition to the Entity and Address tables.
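A minimal sketch of that merged model (the table and column names here are illustrative, not from the question):

```sql
CREATE TABLE ENTITY_TYPE
(
    ID INT IDENTITY PRIMARY KEY NOT NULL,
    LABEL VARCHAR(50) NOT NULL -- e.g. 'Person', 'Institution', 'Factory'
)
CREATE TABLE ENTITY
(
    ID INT IDENTITY PRIMARY KEY NOT NULL,
    ID_ENTITY_TYPE INT NOT NULL REFERENCES ENTITY_TYPE(ID),
    NAME VARCHAR(255) NOT NULL
)
CREATE TABLE ADDRESS
(
    ID INT IDENTITY PRIMARY KEY NOT NULL,
    ID_ENTITY INT NOT NULL REFERENCES ENTITY(ID)
    -- ... (your address fields here)
)
```

With this shape, adding a new kind of entity is a new row in ENTITY\_TYPE rather than a new table.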
Your choice between these models will depend on your needs and your business rules. | You need to use abstraction and inheritance.
An `individual` and `institution` (I'd call it organization) are really just concrete representations of an abstract `legal party`.
A `mailing` or `physical address` is the concretion of an abstract `address`, which could also be an email address, telephone number, or web address.
A `legal party` can have zero or more `addresses`.
An `address` can belong to zero or more `legal parties`.
A party could use the same address for multiple roles, such as 'Home' address and 'Work' address.
If a `factory` is big enough, sub-facilities in the factory might have their own addresses, so you might want to consider a hierarchical relationship there. For example, each apartment in a condo has one address each. Each building in a large factory might have their own address.
```
create table party (
party_id identity primary key
);
create table individual (
individual_id int primary key references party(party_id),
...
);
create table organization (
organization_id int primary key references party(party_id),
...
);
create table address (
address_id identity primary key,
...
);
create table mailing_address (
address_id int primary key references address(address_id),
...
);
create table party_address (
party_id int references party(party_id),
address_id int references address(address_id),
role varchar(255), --this should really point to a role table
primary key (party_id, address_id, role)
);
create table facility (
facility_id identity primary key,
address_id int not null references address(address_id),
parent_id int null references facility(facility_id)
);
``` | One Address Table for Many entities? | [
"",
"sql",
"sql-server",
"database-design",
""
] |
Surprisingly, there isn't any straightforward example demonstrating the use of a delimiter in a stored procedure.
I have a string variable as input (with a delimiter) and would like to use it as a condition in a stored procedure. Something like below:
```
CREATE PROCEDURE testing
(
@stringVar NVARCHAR(255) = NULL
)
....
BEGIN
Select * from table where column not in (@stringVar)
.....
END
```
Sample value for `@stringVar` will be `a~b~c~d`
How should I handle such a case? Is there any built-in delimiter function in SQL Server? | It is a bit of a tricky situation and there is no "simple way" of doing it, but I can give you the simplest approach.
You will need to do two things to make this work,
> 1) Create a Split function which takes a parameter of a delimited
> string and splits those values.
> 2) Make your stored procedure in a
> way that the passed delimited string is given to that function and then
> the split values are used in the stored procedure
**Split Function**
```
CREATE FUNCTION [dbo].[FnSplit]
(
@List nvarchar(2000),
@SplitOn nvarchar(5)
)
RETURNS @RtnValue table (Id int identity(1,1),Value nvarchar(100))
AS
BEGIN
While(Charindex(@SplitOn,@List)>0)
Begin
Insert Into @RtnValue (value)
Select Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Return
END
```
**Your Proc**
```
CREATE PROCEDURE testing
(
@stringVar NVARCHAR(255) = NULL
)
....
BEGIN
Select * from table
where column not IN(SELECT Value
FROM dbo.FnSplit(@stringVar,'~'))
.....
END
``` | From SQL Server 2008 on, you can use **table valued parameters**.
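A minimal sketch of the idea; the type name and the table/column names below are made up for illustration:

```sql
-- Hypothetical table type that carries the list of values
CREATE TYPE dbo.StringList AS TABLE (Value NVARCHAR(255));
GO
CREATE PROCEDURE dbo.TestWithTvp
    @values dbo.StringList READONLY
AS
BEGIN
    SELECT * FROM dbo.SomeTable          -- placeholder table
    WHERE SomeColumn NOT IN (SELECT Value FROM @values);
END
GO
-- The caller fills the table variable instead of building 'a~b~c~d':
DECLARE @v dbo.StringList;
INSERT INTO @v (Value) VALUES ('a'), ('b'), ('c'), ('d');
EXEC dbo.TestWithTvp @values = @v;
```

With this approach, no splitting function is needed at all.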
I cannot explain it better than here: <http://www.sommarskog.se/arrays-in-sql.html> | SQL Server Simple example for the use of Delimiter in Stored Procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
"delimiter",
""
] |
How can I get the create table command which is used to create a table in oracle 11g so that I can copy the command and run it in another database? Please help. | ```
select dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE_NAME_GOES_HERE')
from dual;
``` | Try to spool the output of the below query,
```
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
FROM USER_TABLES u;
```
Like,
```
set pagesize 0
set long 90000
set feedback off
set echo off
spool schema.sql
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
FROM USER_TABLES u WHERE TABLE_NAME = '<your_table>';
spool off;
```
Reference: <http://www.dba-oracle.com/oracle_tips_dbms_metadata.htm> | Getting the create table command used in oracle 11g | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
Edit: For anyone facing this problem, don't miss the tips about only using
parameters instead of inserting the values directly into the SQL queries.
---
I'm facing a big problem with my VB.NET project; I've been stuck on this for a week.
I have a combobox item that I need to compare with a number in my Access database to retrieve some information, but I just get an error. No matter what format I convert my combobox item to, it says my data type is incompatible with the expression.
Here's one of the SQL queries from my code:
```
Dim dt1 As New DataTable
'This query selects some items from a row that matches the selected combobox number
Dim find1 As New OleDb.OleDbDataAdapter("SELECT Product, Number," _
& " Customer, Quantity, ProductionDate, AskDay, Pack, Company FROM RegOPE" _
& " WHERE Number ='" & CInt(mycombobox.SelectedItem) & "'", cn)
'I've tried SelectedItem, Item, Text, SelectedValue...
'For conversion I tried Parse, TryParse, conversion...
cn.Open() 'Opens database connection
find1.Fill(dt1) ' <- I got the error here
cn.Close() 'Close database connect
mydatagrid.DataSource = dt1 'Show the result in datagridview
``` | The number criterion should be without quotes:
```
Dim find1 As New OleDb.OleDbDataAdapter("SELECT Product, Number, " _
& "Customer, Quantity, ProductionDate, AskDay, Pack, Company FROM RegOPE " _
& "WHERE Number =" & CInt(mycombobox.SelectedItem), cn)
```
But better always use parameters:
```
Dim comm = New OleDb.OleDbCommand("SELECT Product, Number, " _
& "Customer, Quantity, ProductionDate, AskDay, Pack, Company FROM RegOPE " _
& "WHERE Number =?", cn)
comm.Parameters.AddWithValue("unusedName", mycombobox.SelectedItem)
Dim find1 As New OleDb.OleDbDataAdapter(comm)
``` | In your WHERE clause, have you tried removing the quotes? They are not required if you are looking for a number. | Can't compare vb.net item with access number | [
"",
"sql",
".net",
"database",
"vb.net",
"ms-access",
""
] |
```
create procedure db.Test
@input1 varchar(max), @input2 {Either 'yes' or 'no'}
```
I want the input2 from users to be restricted to either 'yes' or 'no'. Can anyone provide me with the syntax?
I found the syntax for a default value:
```
@input2 varchar(max) default 'yes'
```
But I couldn't find anything for restricted input. | You need to make a check with an `if` statement. There isn't a "restriction" validator for what you need.
```
create procedure db.Test @input1 varchar(max), @input2 varchar(3)
as
begin
if @input2 not in ('yes', 'no')
begin
--raiserror or similar
return
end
(...)
end
go
``` | You can create your own custom error messages for RAISERROR.
```
exec sp_addmessage @msgnum=50010,@severity=1,@msgtext='Invalid input parameters'
```
After this, error id 50010 will point to the message 'Invalid input parameters'.
```
create procedure db.Test @input1 varchar(max), @input2 varchar(3)
as
begin
if @input2 not in ('yes', 'no')
begin
RAISERROR ( 50010 ,1,1)
end
(...)
end
go
``` | Restricted user input values in Stored Procedures SQL | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I want to find how many modules a lecturer taught in a specific year, and I want to select the name of the lecturer and the number of modules for that lecturer.
The problem is that because I am selecting `Name`, I have to group by name to make it work. But what if there are two lecturers with the same name? Then SQL will merge them into one, and that would be wrong output.
So what I really want to do is select `name` but group by `id`, which SQL is not allowing me to do. Is there a way around it?
Below are the tables:
```
Lecturer(lecturerID, lecturerName)
Teaches(lecturerID, moduleID, year)
```
This is my query so far:
```
SELECT l.lecturerName, COUNT(moduleID) AS NumOfModules
FROM Lecturer l , Teaches t
WHERE l.lecturerID = t.lecturerID
AND year = 2011
GROUP BY l.lecturerName --I want lecturerID here, but it doesn't run if I do that
``` | ```
SELECT a.lecturerName, b.NumOfModules
FROM Lecturer a,(
SELECT l.lecturerID, COUNT(moduleID) AS NumOfModules
FROM Lecturer l , Teaches t
WHERE l.lecturerID = t.lecturerID
AND year = 2011
GROUP BY l.lecturerID) b
WHERE a.lecturerID = b.lecturerID
``` | You should probably just group by `lecturerID` and *include it in the select column list.* Otherwise, you're going to end up with two rows containing the same name with no way to distinguish between them.
You raise the problem of "wrong output" when grouping just by name but "undecipherable output" is just as big a problem. In other words, your desired output (grouping by ID but giving name):
```
lecturerName Module
------------ ------
Bob Smith 1
Bob Smith 2
```
is no better than your erroneous output (grouping by, and giving, name):
```
lecturerName Module
------------ ------
Bob Smith 3
```
since, while you now know that *one* of the lecturers taught two modules and the other taught one, you have no idea which is which.
The *better* output (grouping by ID and displaying both ID and name) would be:
```
lecturerId lecturerName Module
---------- ------------ ------
314159 Bob Smith 1
271828 Bob Smith 2
```
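A query shape that produces this better output (a sketch using the tables from the question):

```sql
SELECT l.lecturerID, l.lecturerName, COUNT(t.moduleID) AS NumOfModules
FROM Lecturer l
JOIN Teaches t ON l.lecturerID = t.lecturerID
WHERE t.year = 2011
GROUP BY l.lecturerID, l.lecturerName
```

Grouping by both columns is safe because `lecturerName` is functionally dependent on `lecturerID`, so no rows from distinct lecturers are merged.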
---
And, yes, I'm aware this doesn't answer your specific request but sometimes the right answer to `"How do I do XYZZY?"` is `"Don't do XYZZY, it's a bad idea for these reasons ..."`.
Things like writing operating systems in COBOL, accounting packages in assembler, or *anything* in Pascal come to mind instantly :-) | Group by a field not in select | [
"",
"sql",
"group-by",
""
] |
I am executing a query from which I want to get all the server names installed on a system.
So I did it with this query:
```
select srvname from sysservers
```
It gives me server names, but it is not the correct server name; it gives me the srvname
`WIN-1BKHGVK7J3A\SQLSERVER2008R2`
but this is not my server name. I also tried it another way with this query:
```
Select @@Servername
```
it also gives the above wrong server name `WIN-1BKHGVK7J3A\SQLSERVER2008R2`.
But when I did it with this query
```
SELECT SERVERPROPERTY ('ServerName')
```
it gives the correct server name, which is `AIS-OCTACORE\SQLserver2008r2`,
but the limitation is that it only gives the current server name, and I want to get all server names installed on a system.
Can you please tell me why I'm getting the wrong server name?
Also, how could I get all server names? | To check what the current values are set to:
```
SELECT ServerProperty('machinename') as [machinename]
,ServerProperty('ServerName') as [ServerName]
,@@ServerName as [@@ServerName];
```
To correct the issue run the following:
```
EXEC sp_dropserver 'old_server_name';
GO
EXEC sp_addserver 'new_server_name', 'local';
GO
```
You will need to restart the SQL Server service. | This can happen when the server name is changed after SQL Server was installed.
You could try sp\_dropserver and sp\_addserver to change it back: <http://www.brentozar.com/blitz/servername-not-set/>
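The basic pattern (shown here with the names from the question; restart the SQL Server service afterwards):

```sql
EXEC sp_dropserver 'WIN-1BKHGVK7J3A\SQLSERVER2008R2';
GO
EXEC sp_addserver 'AIS-OCTACORE\SQLserver2008r2', 'local';
GO
```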
I'm not sure I would, though; you don't know why it was changed in the first place, and changing it back might break something. | Getting Wrong Server Name | [
"",
"sql",
"sql-server-2008-r2",
""
] |
Here is table `T`:
```
id num
-------
1 50
2 20
3 90
4 40
5 10
6 60
7 30
8 100
9 70
10 80
```
and the following is a **fictional** SQL query:
```
select *
from T
where sum(num) = '150'
```
The expected result is:
(A)
```
id num
-------
1 50
8 100
```
(B)
```
id num
-------
2 20
7 30
8 100
```
(C)
```
id num
-------
4 40
5 10
8 100
```
The 'A' case is most preferred!
I know this case is related to combinations.
In the real world: a client gets items from a shop, and because of an agreement between him and the shop, he pays every Friday. The payment amount is not the exact total of the items.
For example: he gets 5 books of 50 € (= 250 €), and on Friday he brings 150 €, so the first 3 books are a perfect match: 3 \* 50 = 150. I need to find the IDs of those 3 books!
Any help would be appreciated! | You can use a recursive query in MSSQL to solve this.
`SQLFiddle demo`
The first recursive query builds a tree of items with cumulative sum <= 150. The second recursive query takes the leaves with cumulative sum = 150 and outputs all such paths back to their roots. The final results are also ordered by `ItemsCount`, so you will get the preferred groups (with minimal item count) first.
```
WITH CTE as
( SELECT id,num,
id as Grp,
0 as parent,
num as CSum,
1 as cnt,
CAST(id as Varchar(MAX)) as path
from T where num<=150
UNION all
SELECT t.id,t.num,
CTE.Grp as Grp,
CTE.id as parent,
T.num+CTE.CSum as CSum,
CTE.cnt+1 as cnt,
CTE.path+','+CAST(t.id as Varchar(MAX)) as path
from T
JOIN CTE on T.num+CTE.CSum<=150
and CTE.id<T.id
),
BACK_CTE as
(select CTE.id,CTE.num,CTE.grp,
CTE.path ,CTE.cnt as cnt,
CTE.parent,CSum
from CTE where CTE.CSum=150
union all
select CTE.id,CTE.num,CTE.grp,
BACK_CTE.path,BACK_CTE.cnt,
CTE.parent,CTE.CSum
from CTE
JOIN BACK_CTE on CTE.id=BACK_CTE.parent
and CTE.Grp=BACK_CTE.Grp
and BACK_CTE.CSum-BACK_CTE.num=CTE.CSum
)
select id,NUM,path, cnt as ItemsCount from BACK_CTE order by cnt,path,Id
``` | If you restrict your problem to "which ***two*** numbers add up to a value", the solution is as follows:
```
SELECT t1.id, t1.num, t2.id,t2.num
FROM T t1
INNER JOIN T t2
ON t1.id < t2.id
WHERE t1.num + t2.num = 150
```
If you also want the result for three or more numbers, you can achieve that by using the above query as a base for recursive SQL. Don't forget to specify a maximum recursion depth! | sql server : select rows whose sum matches a value | [
"",
"sql",
"sql-server",
"sql-server-2012",
"subset-sum",
""
] |
It's hard to explain exactly what I mean using words, so I'll write a query similar to what I have at the moment.
```
SELECT *
FROM table t
WHERE t.something = 'something'
AND CASE WHEN t.name = 'john' THEN 'smith' ELSE t.lastname END = t.lastname
```
Basically, depending on whether a field is what I want it to be (name = john), I want to add another condition (lastname = smith). If that field isn't what I want it to be (name != john), then no condition is needed (lastname can be anything).
This code works, but to me it seems like kind of a hack. My question is whether this can be done more easily or more elegantly. Thanks in advance :) | You can do this without the `case`:
```
WHERE t.something = 'something' AND
(t.lastname = 'smith' or t.name <> 'john')
``` | You can place the entire test within the `CASE` statement:
```
AND CASE WHEN t.name = 'john' THEN t.lastname = 'smith' ELSE True
```
(I don't remember my Oracle so well. `True` may not be the correct term here.)
Now the result of the `CASE` is a truth value rather than a value to be tested against `t.lastname`. | oracle query - additional where condition depending on a field | [
"",
"sql",
"oracle",
""
] |
I need to assign a word value based on a numeric one, so I tried the code below, but it's showing the error "Incorrect syntax near the keyword 'case'."
```
declare @ww int
set @ww=1
case @ww
WHEN '1' THEN print='one'
ELSE NULL
```
Where did I make the error? | Please read <http://technet.microsoft.com/en-us/library/ms181765.aspx>
In there it states "CASE can be used in any statement or clause that allows a valid expression. For example, you can use CASE in statements such as SELECT, UPDATE, DELETE and SET, and in clauses such as select\_list, IN, WHERE, ORDER BY, and HAVING. ". This is not the case here.
In response to your comment (as you cannot line break in comments)
```
declare @ww int
declare @result varchar(10)
set @ww=1
if @ww = 1
    SET @result = 'one'
```
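For completeness, the documentation quote above also means `CASE` is valid inside a `SET`, for example:

```sql
DECLARE @ww int, @result varchar(10)
SET @ww = 1
SET @result = CASE @ww WHEN 1 THEN 'one' ELSE NULL END
PRINT @result -- prints: one
```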
Sorry for the editing; my rabbit ran across my keyboard. | **You can't use CASE standalone; instead use it like this:**
```
declare @ww int
set @ww=1
SELECT CASE @ww
WHEN '1' THEN 'one'
ELSE NULL
end
``` | how to use case sql server 2008 | [
"",
"sql",
"sql-server",
""
] |
I have a query like this one:
```
SELECT
type,
count(case when STATUS = 'N/A' then 1 end) as NOTAPPLICABLE,
count(case when STATUS = 'Failed' then 1 end) as FAILED,
count(case when STATUS = 'No Run' then 1 end) as NO_RUN,
count(case when STATUS = 'Not Completed' then 1 end) as NOT_COMPLETE,
count(case when STATUS = 'Blocked' then 1 end) as Blocked,
count(case when STATUS = 'Passed' then 1 end) as PASSED,
count(case when STATUS <> 'N/A' then 1 end) as TOTAL
FROM
table
GROUP BY
type
```
I want to order the results so the rows for the type with the highest percentage of passed are on top.
I thought of something like:
```
ORDER BY
"PASSED"/"TOTAL" DESC
```
But it's not working.
Do you have any idea how to achieve this?
Thanks, | You can use expressions in `ORDER BY`
```
SELECT
type,
count(case when STATUS = 'N/A' then 1 end) as NOTAPPLICABLE,
count(case when STATUS = 'Failed' then 1 end) as FAILED,
count(case when STATUS = 'No Run' then 1 end) as NO_RUN,
count(case when STATUS = 'Not Completed' then 1 end) as NOT_COMPLETE,
count(case when STATUS = 'Blocked' then 1 end) as Blocked,
count(case when STATUS = 'Passed' then 1 end) as PASSED,
count(case when STATUS <> 'N/A' then 1 end) as TOTAL
FROM
table
GROUP BY
type
ORDER BY
count(case when STATUS = 'Passed' then 1 end)/count(case when STATUS <> 'N/A' then 1 end) desc
```
But this can produce a `division by zero` exception; you have to check that count(case when STATUS <> 'N/A' then 1 end) is not zero.
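One common guard is `NULLIF`, which turns a zero divisor into NULL so the row gets a NULL ratio instead of raising an error:

```sql
ORDER BY
    count(case when STATUS = 'Passed' then 1 end)
    / NULLIF(count(case when STATUS <> 'N/A' then 1 end), 0) DESC
```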
The other solution is using sub-queries: you enclose your initial query in a sub-query, and then you can order, limit or filter that sub-query like a simple table in SQL.
```
SELECT *
FROM (
SELECT
type,
count(case when STATUS = 'N/A' then 1 end) as NOTAPPLICABLE,
count(case when STATUS = 'Failed' then 1 end) as FAILED,
count(case when STATUS = 'No Run' then 1 end) as NO_RUN,
count(case when STATUS = 'Not Completed' then 1 end) as NOT_COMPLETE,
count(case when STATUS = 'Blocked' then 1 end) as Blocked,
count(case when STATUS = 'Passed' then 1 end) as PASSED,
count(case when STATUS <> 'N/A' then 1 end) as TOTAL
FROM
table
GROUP BY
type
) AS SUB_DATA
ORDER BY PASSED/TOTAL DESC
```
If you are using PostgreSQL you can use the `WITH` construction (I like it very much).
```
WITH _records as (
SELECT
type,
count(case when STATUS = 'N/A' then 1 end) as NOTAPPLICABLE,
count(case when STATUS = 'Failed' then 1 end) as FAILED,
count(case when STATUS = 'No Run' then 1 end) as NO_RUN,
count(case when STATUS = 'Not Completed' then 1 end) as NOT_COMPLETE,
count(case when STATUS = 'Blocked' then 1 end) as Blocked,
count(case when STATUS = 'Passed' then 1 end) as PASSED,
count(case when STATUS <> 'N/A' then 1 end) as TOTAL
FROM
table
GROUP BY
type
)
SELECT *
FROM _records
ORDER BY PASSED/TOTAL DESC
``` | Column aliases that are defined in the `SELECT` and then referenced in the `ORDER BY` must be used on their own, not in an expression.
You can use a derived table.
```
SELECT *
FROM
(
/* Your Query here*/
) T
ORDER BY PASSED/TOTAL DESC
```
You may also need to cast `PASSED` to numeric to avoid integer division depending on your DBMS. | Order by the result of a division of two counts | [
"",
"sql",
""
] |
I'm currently developing an API for a company that didn't do a very good job of maintaining a good test database with test data. The MySQL database structure is quite big and complicated, and the live database should be around 160-200GB.
And because I'm quite lazy and don't want to create test data for all the tables from scratch, I was wondering what would be the best way to turn such a big database into a smaller test database that keeps all data with their relationships in a correct form. Is there an easy way to do this with some kind of script that checks the database model and knows what kind of data it needs to keep or delete when reducing the database to a smaller size?
Or am I doomed and have to go through the tedious task of creating my own test data? | Take a look at [Jailer](http://sourceforge.net/projects/jailer/) which describes itself as a "Database Subsetting and Browsing Tool". It is specifically designed to select a subset of data, following the database relationships/constraints to include all related rows from linked tables. To limit the amount of data you export, you can set a WHERE clause on the table you are exporting.
The issue of scrubbing your test data to remove customer data is still there, but this will be easier once you have a smaller subset to work with. | I would suggest that, no matter how thorough you are, the risk of getting live customer details into a test database is too high. What happens if you accidentally email or charge a real customer for something you're testing!?
There are a number of products out there such as RedGate's [Data Generator](http://www.red-gate.com/products/sql-development/sql-data-generator/) which will create test data for you based on your schema (there is a free trial I believe so you can check it meets your needs before committing).
Your other alternative? Hire a temp to enter data all day!
ETA: Sorry - I just saw you're looking more at MySQL rather than MSSQL which probably rules out the tool I recommended. A quick google produces [similar results](https://www.google.co.uk/search?q=mysql%20data%20generator). | How to turn a huge live database into a small testing database? | [
"",
"mysql",
"sql",
"database",
"testing",
"database-migration",
""
] |
```
SELECT
INT_VALUE = NULL -- Result Data Type = INT
,STR_VALUE = NULL -- Result Data Type = INT
```
I want to make sure that STR\_VALUE is a varchar NULL and not int NULL. I need this to be done in SELECT statement and in derived column.
How can I achieve that?
Thank you
UPDATE: Thanks guys for really quick feedback. just "fyi" kind of thing, I am trying to create a lookup query for an ETL app and wanted to see if I could create NULL string column in t-sql rather than creating an ETL app based NULL string derived column.
Thx | Just `CAST` it:
```
SELECT
INT_VALUE = CAST(NULL AS int)
,STR_VALUE = CAST(NULL as VARCHAR(10))
``` | `NULL` defaults to an int. You can change it to any other type using `cast()`:
```
SELECT INT_VALUE = NULL, -- Result Data Type = INT
STR_VALUE = cast(NULL as varchar(255)) -- Result Data Type = varchar(255)
``` | T-SQL Derived Column in SELECT | [
"",
"sql",
"sql-server",
""
] |
I have a query where I am using SUM to total up bill detail values. I would like to create a `CASE` statement that works off the results of that sum, rather than the values being summed. Below is a partial example of what my query currently looks like.
```
SELECT
bill.billnumber_id,
billdetail.billclass,
SUM(billdetail.amount) AS Amount
GROUP BY bill.billnumber_id, billdetail.billclass
```
Say this totals 4 lines of `billdetail` (2, 0, 5, 3) for a summed amount of 10. I would like to do a `CASE` statement based on the 10 amount, rather than the individual values. Thanks. | Is this what you want?
```
SELECT bill.billnumber_id, billdetail.billclass, SUM(billdetail.amount) AS Amount,
(case when SUM(billdetail.amount) > 10 then 'BigOne'
else 'Little'
end)
GROUP BY bill.billnumber_id, billdetail.billclass
``` | You need to use a sub-query:
```
SELECT CASE WHEN Amount = 0 THEN 'Nothing' ELSE 'somthing' END AmountComment
FROM(
SELECT bill.billnumber_id, billdetail.billclass, SUM(billdetail.amount) AS Amount
GROUP BY bill.billnumber_id, billdetail.billclass
)x
``` | CASE statement based on SUM results in query | [
"",
"sql",
"sql-server-2008",
"reporting-services",
""
] |
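The accepted pattern above — repeating the aggregate inside a `CASE` in the same grouped query — can be verified with a small runnable sketch (Python's `sqlite3`; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE billdetail (billnumber_id INTEGER, billclass TEXT, amount REAL)")
conn.executemany("INSERT INTO billdetail VALUES (?, ?, ?)",
                 [(1, "A", 2), (1, "A", 0), (1, "A", 5), (1, "A", 3), (2, "B", 12)])

# The CASE is evaluated per group, on the same SUM used in the select list.
rows = conn.execute("""
    SELECT billnumber_id, billclass, SUM(amount) AS Amount,
           CASE WHEN SUM(amount) > 10 THEN 'BigOne' ELSE 'Little' END AS bucket
    FROM billdetail
    GROUP BY billnumber_id, billclass
    ORDER BY billnumber_id
""").fetchall()
print(rows)
```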
I'm trying to use a case statement to update a column based on the value of another column. My table called 'rma' in MySQL is as follows:
```
ID | rma_number | serial_number | ref_status_id
1 | 9898 | 123456 | 19
2 | 7869 | 098768 | 3
```
Here is my stored procedure:
```
CREATE DEFINER=`admin`@`localhost` PROCEDURE `delete_update_rma`(in selectedID int, in selectedRMAID int)
begin
declare rmaStatus int(5);
select ref_status_id into rmaStatus
from rma
where id = selectedRMAID;
case rmaStatus
when 19 then
set ref_status_id = 0;
end case;
delete from dropbox where id = selectedID;
end
```
When I try to create the procedure I receive the error #1193 - Unknown system variable 'ref\_status\_id'.
Can anybody help me identify what may be wrong with my stored procedure? | Try this:
```
CREATE DEFINER=`admin`@`localhost` PROCEDURE `delete_update_rma`(IN selectedID INT, IN selectedRMAID INT)
BEGIN
UPDATE rma
SET ref_status_id =
CASE ref_status_id
WHEN 19 THEN 0
WHEN 3 THEN 2
ELSE ref_status_id
END
WHERE id = selectedRMAID;
DELETE FROM dropbox WHERE id = selectedID;
END
```
Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/643ff/1)
**OUTPUT**
```
| ID | RMA_NUMBER | SERIAL_NUMBER | REF_STATUS_ID |
|----|------------|---------------|---------------|
| 1 | 12345 | 67898 | 0 |
| 2 | 45645 | 89056 | 2 |
``` | The case syntax is correct, however you cannot change a database field inside a case statement, you need to use an update statement for that.
```
UPDATE rma
SET ref_status_id = 0
WHERE ref_status_id = 19
AND id = selectedrmaid;
```
You can only change local vars inside this kind of case statement.
Yours is a typical case of mixing procedural and query logic, a common error when doing stored procedures. | Case statement in stored procedure - Unknown System Variable | [
"",
"mysql",
"sql",
"stored-procedures",
"sql-update",
"case-expression",
""
] |
I have created a Scalar Functions, it was created successfully, but when I call the function using select statement, it says Invalid object name 'dbo.fun\_functional\_score'.
my function:
```
ALTER function [dbo].[fun_functional_score] (@phy_id varchar(20))
returns varchar(50)
as
begin
declare @level_initial int, @level_current int
-- initial functional score
set @level_initial=(SELECT pflag.fun_level
FROM tbl_phy_demographic_details as [phy]
inner join tbl_all_purple_flag_level as [pflag] on phy.Demographic_id=pflag.Id
WHERE phy.Physicion_id=@phy_id
and pflag.visited_count=(select MAX(visited_count)-1 from tbl_all_purple_flag_level ))
-- current functional score
set @level_current=(SELECT pflag.fun_level
FROM tbl_phy_demographic_details as [phy]
inner join tbl_all_purple_flag_level as [pflag] on phy.Demographic_id=pflag.Id
WHERE phy.Physicion_id=@phy_id
and pflag.visited_count=(select MAX(visited_count) from tbl_all_purple_flag_level ))
--to calculate functional score
declare @fun_level varchar(20),@result varchar(50)
set @fun_level=@level_current-@level_initial;
if @fun_level = 0 set @result='Maintained'
if @fun_level = '-1' set @result='Minor Improvement'
if @fun_level = '-2' set @result='Moderate Improvement'
if @fun_level = '-3' set @result='Significant Improvement'
if @fun_level = '-4' set @result='Substantial Improvement'
if @fun_level = '1' set @result='Minor Reduction'
if @fun_level = '2' set @result='Moderate Reduction'
if @fun_level = '3' set @result='Significant Reduction'
if @fun_level = '4' set @result='Substantial Reduction'
return @result
end
```
i used this select to call
```
select * from dbo.fun_functional_score('01091400003') as [er]
```
or
```
select * from dbo.fun_functional_score('01091400003')
```
both showing the error "Invalid object name 'dbo.fun\_functional\_score'."
Where did I make the error? Can anyone help me... | Your syntax is for a table-valued function, which returns a resultset and can be queried like a table. For a scalar function, do:
```
select dbo.fun_functional_score('01091400003') as [er]
``` | You have a scalar valued function as opposed to a table valued function. The from clause is used for tables. Just query the value directly in the column list.
```
select dbo.fun_functional_score('01091400003')
``` | how to call scalar function in sql server 2008 | [
"",
"sql",
"sql-server",
""
] |
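The distinction both answers draw — scalar functions are called in the select list, not the `FROM` clause — has an analogue in most engines. A runnable sketch with Python's `sqlite3` (the function name and logic are simplified stand-ins for the original `fun_functional_score`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def fun_score(level_change):
    # Simplified stand-in for the T-SQL function's IF ladder.
    return "Maintained" if level_change == 0 else "Changed"

conn.create_function("fun_score", 1, fun_score)

# Scalar functions go in the select list; only table-valued
# functions belong in the FROM clause.
row = conn.execute("SELECT fun_score(0) AS er").fetchone()
print(row[0])
```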
I have the below code which is checking physical file counts to confirm that a previous process has generated the correct number of files (in different folders)
```
set NOCOUNT ON
declare @DateOffset int
set @DateOffset=0
-- Create a table variable to store user data
DECLARE @myTable TABLE
(
docID INT IDENTITY(1,1),
docRef VARCHAR(10),
YPTMPID varchar(3),
saveDir VARCHAR(500),
totalLettersExpected int,
actualLetters int
);
insert @myTable SELECT docRef, YPTMPID,
saveDir=max(Save_Directory) + cast(YEAR(GETDATE()-@DateOffset) as varchar(4)) + '\' + datename(month, GETDATE()-@DateOffset) + '\'+SUBSTRING(CONVERT(CHAR(20), GETDATE()-@DateOffset, 101),4, 2) + '.' + LEFT(CONVERT(CHAR(20), GETDATE()-@DateOffset, 101), 2)
+ '.' + SUBSTRING(CONVERT(CHAR(20), GETDATE()-@DateOffset, 101),7, 4),
COUNT(*) as 'Total Letters', null
FROM [alpsMaster].[dbo].[uExtractMonitor]
group by docRef, YPTMPID
order by 1,2
-- Get the number of rows in the looping table
DECLARE @RowCount INT, @SQL nvarchar(4000), @LoopSQL nvarchar(4000), @Date varchar(20)
set @Date=rtrim(CONVERT( CHAR(12), getDate()-@DateOffset, 106)) --'29 Oct 2013'
SET @RowCount = (SELECT COUNT(docID) FROM @myTable)
-- Declare an iterator
DECLARE @I INT
-- Initialize the iterator
SET @I = 1
-- Loop through the rows of table @myTable, and for each docRef, check the file directory for the correct number of files
WHILE (@I <= @RowCount)
BEGIN
DECLARE @docRef VARCHAR(10), @saveDir VARCHAR(500), @TemplateID varchar(3), @letterCount int
DECLARE @statement nvarchar(200), @EQRecordCout int
-- Get the data from table and set to variables
SELECT @docRef = docref FROM @myTable WHERE docID = @I
SELECT @saveDir = saveDir FROM @myTable WHERE docID = @I
SELECT @TemplateID = YPTMPID FROM @myTable WHERE docID = @I
update @myTable set actualLetters = 0 where docRef=@docRef
create table #files (subdirectory varchar(100), depth int, [file] int)
insert into #files EXEC master.sys.xp_dirtree @saveDir,0,1;
-- *** PROBLEM HERE ***
set @statement= 'SELECT count(*) FROM BOCTEST.S653C36C.LIVEBOC_A.' + @docRef
print @statement
exec @EQRecordCout=sp_executesql @statement
print @EQRecordCout
select @letterCount = COUNT(*) from #files
print cast(@letterCount as char(3)) + ' files for ' + @docRef
drop table #files
update @myTable set actualLetters = @letterCount where docRef=@docRef-- and YPTMPID=@TemplateID
-- Increment the iterator
SET @I = @I + 1
END
select * from @myTable
set NOCOUNT OFF
```
Inside my final `WHILE LOOP` I want to get a count of the records in each file on the iSeries remote server using Dynamic SQL (with the table name `docRef` at the end of the SQL string being dynamic)
But even though it executes fine, all I get in the variable **@EQRecordCout** is a zero. This is incorrect: although some of the tables are empty, most have records in them.
What is the correct way to write this to return a value?
```
set @statement= 'SELECT count(*) FROM BOCTEST.S653C36C.LIVEBOC_A.' + @docRef
print @statement
exec @EQRecordCout=sp_executesql @statement
print @EQRecordCout
```
thanks
Philip | You need to use an `output` parameter with `sp_Executesql`, like this:
```
DECLARE @intCount int
SET @statement = 'SELECT @numRecords = count(*) FROM BOCTEST.S653C36C.LIVEBOC_A.' + @docRef
EXEC sp_Executesql @statement, N'@numRecords int output', @intCount OUTPUT
PRINT @intCount
```
**Check here :** [Extract value returned from dynamic SQL](https://stackoverflow.com/questions/12201023/extract-value-returned-from-dynamic-sql) | You could use `sp_executesql` with an `output` parameter. Remember to specify `output` twice:
```
set @statement= 'SELECT @rc = count(*) FROM BOCTEST.S653C36C.LIVEBOC_A.' + @docRef
declare @rc int
exec sp_executesql @statement, N'@rc int output', @rc output;
``` | return a variable from a Dyanmic SQL query | [
"",
"sql",
"t-sql",
"dynamic-sql",
"remote-server",
""
] |
So basically this would be the pseudocode, but I don't know how to do this in SQL; please help.
```
for each row in table1{
loop through each row in table 2 {
if table1's row.column 1 = table2's row.column 2 for this row {
set table1's row.col2 = table2's row.col2
}
}
}
```
Edit: Okay let me be more specific. We are basically switching from hibernate sequence as ids to using guids for the id column. I'm trying to update the foreign keys associated by making a temp of the previous foreign key column and then matching the temporary columns to update the actual columns.
suppose table one had id's and table two had a column for those ids to use as foreign keys. I wanna use the previous values in table 2 to match with the rows in table 1 and set the key values in table 2 to match the new guid of table 1.
so table 2 may have multiple rows of duplicate id's but table 1 will never have duplicates. If that makes any sense. | In SQL Server you can do something like:
```
UPDATE Table_1
SET Column_2 = t2.Column_2
FROM Table_1 AS t1
INNER JOIN Table_2 AS t2 ON t2.Column_1 = t1.Column_1
```
or something like
```
UPDATE Table_1
SET Column_2 = (
SELECT t2.Column_2
FROM Table_2 AS t2
WHERE t2.Column_1 = Table_1.Column_1
)
```
Of course if you have multiple rows in Table\_2, you will get an error.... | The basics of it is:
```
UPDATE Table1 SET col2 =
(select col2 FROM Table2 where Table2.column2 = Table1.column1)
```
But that may not do exactly what you need if there's not a 1-1 correspondence between rows in the two tables - and hence my current comment below your question:
> What should happen if there are more than one matching row in table 2? What should happen if there's no matching row? | In SQL how do you update each row of the table by finding all rows that are equal for a column, then set another column equal to eachother | [
"",
"mysql",
"sql",
"sql-server",
""
] |
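Both answers above can be condensed into a portable correlated-subquery `UPDATE`; here is a runnable sketch with Python's `sqlite3` (column names follow the pseudocode, data is invented — note that rows with no match in the second table are set to NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1 INTEGER, col2 TEXT)")
conn.execute("CREATE TABLE t2 (col1 INTEGER, col2 TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, None), (2, None), (3, None)])
conn.executemany("INSERT INTO t2 VALUES (?, ?)", [(1, "one"), (2, "two")])

# Correlated-subquery form of the update; each row of t1 looks up
# its matching row in t2 by col1.
conn.execute("""
    UPDATE t1
    SET col2 = (SELECT t2.col2 FROM t2 WHERE t2.col1 = t1.col1)
""")
rows = conn.execute("SELECT col1, col2 FROM t1 ORDER BY col1").fetchall()
print(rows)
```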
I need to write a SQL query which displays the names of the courses in which the students have an average greater than 55.
TABLES : STUDENTS,GRADES,COURSES,DEPARTMENTS
WHAT I DID :
```
SQL> SELECT COURSE_NAME
2 FROM COURSES
3 where
4 (select avg(grade)
5 from grades,courses
6 where
7 grades.course_id=courses.course_id)>55;
```
and the result is bad ! (It displays all the courses)
TABLES :
```
create table DEPARTMENTS
(
DEPARTMENT_ID char(2),
NAME varchar2(20),
HEAD varchar2(20));
create table COURSES
(
COURSE_ID char(10),
COURSE_NAME varchar2(20),
TYPE char(6),
POINTS number(2),
DEPARTMENT_ID char(2));
create table GRADES
(
STUDENT_ID number(3),
COURSE_ID char(10),
SEMESTER varchar2(10),
TERM char(1),
GRADE number(3),
GRADE_SEM number(3));
create table STUDENTS
(
STUDENT_ID number(3),
NAME char(15),
CITY char(15));
``` | Your query returns every record or no records from `COURSES` based on whether the subquery returns more than 55 or less than 55, because the outer query is not related to the subquery in any way. You want to `JOIN` the tables, `GROUP BY` the course name, and use a `HAVING` clause to filter:
```
SELECT c.course_name
FROM grades g
JOIN courses c
ON g.course_id = c.course_id
GROUP BY c.course_name
HAVING AVG(grade) > 55
``` | Try this query :
```
select COURSE_NAME, avg(grade) from grades,courses where grades.course_id=courses.course_id
group by COURSE_NAME
having avg(grade) > 55
``` | Bad results in SQL query | [
"",
"sql",
""
] |
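The `JOIN` + `GROUP BY` + `HAVING` pattern from the accepted answer, reduced to a runnable sketch (Python's `sqlite3`; course names and grades are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE courses (course_id TEXT, course_name TEXT)")
conn.execute("CREATE TABLE grades (course_id TEXT, grade INTEGER)")
conn.executemany("INSERT INTO courses VALUES (?, ?)", [("C1", "Math"), ("C2", "History")])
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("C1", 60), ("C1", 70), ("C2", 40), ("C2", 50)])

# HAVING filters on the per-group aggregate, which a WHERE clause cannot do.
rows = conn.execute("""
    SELECT c.course_name
    FROM grades g JOIN courses c ON g.course_id = c.course_id
    GROUP BY c.course_name
    HAVING AVG(g.grade) > 55
""").fetchall()
print(rows)
```

Math (average 65) passes the filter; History (average 45) is dropped.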
I am trying to execute two select statements into a query that pumps data into a temp table. The first query will have 5 columns while the second query will have only one column.
The first can be achieved by:
```
Select a.ID AS [a],
b.ID AS [b],
c.ID AS [c]
INTO #testingTemp
FROM
....
```
Now I have my second query trying to pump data into `#testingTemp`:
```
Select z.ID AS [c]
INTO #testingTemp
FROM
....
```
But my problem is `There is already an object named #testingTemp in the database`?
I tried to search for a solution on the Internet, but mostly people are only facing the problem in my first part; apparently nobody is trying to extend a temp table in a second query? | Change it into an INSERT INTO statement. Otherwise you create the same temp table multiple times, and that is not allowed.
```
Insert into #testingTemp (a,b,c)
Select a.ID AS [a],
b.ID AS [b],
c.ID AS [c]
FROM
``` | The second query should just be a normal insert.
```
INSERT INTO #testingTemp
(a,
b,
c)
select etc.
```
Don't forget to drop the temp table when you are done. | How to insert into an existing temp table in SQL Server | [
"",
"sql",
"sql-server",
"temp-tables",
""
] |
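The create-once-then-insert pattern from both answers, demonstrated with Python's `sqlite3` (`CREATE TABLE ... AS SELECT` stands in for T-SQL's `SELECT ... INTO`; the values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# First statement creates the table (SQLite's analogue of SELECT ... INTO).
conn.execute("CREATE TABLE testingTemp AS SELECT 1 AS a, 2 AS b, 3 AS c")
# Running CREATE again would fail just like a second SELECT ... INTO does,
# so the second result set is appended with a plain INSERT instead.
conn.execute("INSERT INTO testingTemp (c) VALUES (9)")
rows = conn.execute("SELECT a, b, c FROM testingTemp ORDER BY c").fetchall()
print(rows)
```

The second row has NULL in the columns the one-column insert did not supply, matching the question's asymmetric queries.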
I have an oracle SQL query that selects the count of the company from different department.
```
select
a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM'),
count(*)
from
customer a, case b
where
a.cust_nb = b.case_cust_nb
and a.cust_company_nb in
('01062','01602','01603','01604','01605','01606')
and b.case_receive_dt > sysdate -365
and b.case_status_cd = 'CC'
group by
a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM')
order by
a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM')
```
This returns the `count` of `a.cust_nm, a.cust_acct_nb, a.cust_company_nb, to_char(b.case_receive_dt, 'YYYYMM')`
In the same query, I need one more count of all `cust_acct_nb`
Eg:-
```
cust_nm      cust_acct_nb  cust_company_nb  receive_dt   count(*)  Final_Total
KFC 1 12 09-10-1991 12
KFC 1 12 10-10-1991 10
KFC 1 12 11-10-1991 10 32
KFC 2 12 09-10-1991 12
KFC 2 12 10-10-1991 10
KFC 2 12 11-10-1991 15 37
```
How can I get `Final_Total` in same query ?
Please help!! | Use a subquery in the `select` statement as follows:
```
select a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM'),
count(*),
(select count(*)
from customer a1, case b1
where a1.cust_nb = b1.case_cust_nb
and a1.cust_company_nb in ('01062','01602','01603','01604','01605','01606')
and b1.case_receive_dt > sysdate -365
and b1.case_status_cd = 'CC'
and a1.cust_acct_nb = a.cust_acct_nb)
from customer a, case b
where a.cust_nb = b.case_cust_nb
and a.cust_company_nb in ('01062','01602','01603','01604','01605','01606')
and b.case_receive_dt > sysdate -365
and b.case_status_cd = 'CC'
group by a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM')
order by a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM');
```
**OUTPUT:**
```
cust_nm      cust_acct_nb  cust_company_nb  receive_dt   count(*)  Final_Total
KFC 1 12 09-10-1991 12 32
KFC 1 12 10-10-1991 10 32
KFC 1 12 11-10-1991 10 32
KFC 2 12 09-10-1991 12 37
KFC 2 12 10-10-1991 10 37
KFC 2 12 11-10-1991 15 37
``` | Can you try this? I can't test it since I have no data setup.
```
select
a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM'),
count(*)
from
customer a, case b
where
a.cust_nb = b.case_cust_nb
and a.cust_company_nb in
('01062','01602','01603','01604','01605','01606')
and b.case_receive_dt > sysdate -365
and b.case_status_cd = 'CC'
group by rollup
(a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM'))
order by
a.cust_nm,
a.cust_acct_nb,
a.cust_company_nb,
to_char(b.case_receive_dt, 'YYYYMM')
``` | Getting count of count from a Oracle SQL query | [
"",
"sql",
"oracle",
""
] |
Recently I was working on MySql and Sql, where I found I can insert almost every primitive data type, but is it possible to store a jpeg/jpg/bmp/png (image) file in a database? If so, can you please write the code? No need to explain the whole thing, just the key point.
Thanks | Sorry for the late response. However, I found a way to convert the image to a String and then store it in MySQL.
This answer is just for those who are still looking for an easy solution:
code:
```
Bitmap bm = BitmapFactory.decodeFile("/path/to/myImage.jpg");
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] byteArrayImage = baos.toByteArray();
```
and then:
```
String encodedImage = Base64.encodeToString(byteArrayImage, Base64.DEFAULT);
```
Then the encodedImage which is a String format, I can insert into the db.
Hope that helps others who are still looking for the answer. | **1** Perhaps convert the image to a `Byte Array` (if it's not `bmp`, then convert it), then store it in **MySql** as a Blob type, then when reading it, you can create a **Bitmap** through the code (I don't know what language you're using) with the **Byte Array** with what you read, or perhaps just get the code associated with the image and store that and read it.
[Converting an image to a Byte Array](https://stackoverflow.com/questions/3211156/how-to-convert-image-to-byte-array-in-java)
**2** Use an image hosting API (like Imgur) and have the user's image upload to that site, and just read the **URI** from the database whenever you want to use it.
[Imgur API](http://api.imgur.com/), [Android Example](https://github.com/talklittle/ImgurAPIv3ExampleAndroid) | how to insert an image in mysql -android | [
"",
"android",
"mysql",
"sql",
"database",
"image",
""
] |
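Both routes in the answers above — raw bytes in a BLOB column, or a Base64 string — round-trip cleanly. A minimal sketch with Python's `sqlite3` standing in for MySQL (the image bytes are fabricated for illustration):

```python
import base64
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, data BLOB, encoded TEXT)")

fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16  # stand-in for real image bytes
encoded = base64.b64encode(fake_jpeg).decode("ascii")  # the String route
conn.execute("INSERT INTO images (data, encoded) VALUES (?, ?)",
             (fake_jpeg, encoded))

blob, text = conn.execute("SELECT data, encoded FROM images WHERE id = 1").fetchone()
decoded = base64.b64decode(text)
print(blob == fake_jpeg, decoded == fake_jpeg)
```

For large files, storing only a path or URL (the second answer's approach) usually keeps the database smaller and faster.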
I have a query like this,
```
SELECT b.user_id,b.first_name, b.last_name, b.email_id,
COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_receiver_id = b.user_id
AND a.email_sender_id= 20
GROUP BY b.user_id,b.first_name, b.last_name, b.email_id
UNION
SELECT b.user_id,b.first_name, b.last_name, b.email_id,
COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_sender_id = b.user_id
AND a.email_receiver_id= 20
GROUP BY b.user_id,b.first_name, b.last_name, b.email_id;
```
Its output is -
```
user_id first_name last_name email_id sharedFiles
17 Mohan Sharma mohan.guptademo@gmail.com 2
17 Mohan Sharma mohan.guptademo@gmail.com 1
```
It's showing two different records, but I want output like this -
```
user_id first_name last_name email_id sharedFiles
17 Mohan Sharma mohan.guptademo@gmail.com 3
```
I am new in mysql, so please help me how to get this output.
Thank you. | ```
SELECT x.user_id, x.first_name, x.last_name, x.email_id, SUM(x.sharedFiles) AS sharedFiles
FROM(
SELECT b.user_id,b.first_name, b.last_name, b.email_id,
COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_receiver_id = b.user_id
AND a.email_sender_id= 20
GROUP BY b.user_id,b.first_name, b.last_name, b.email_id
UNION
SELECT b.user_id,b.first_name, b.last_name, b.email_id,
COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_sender_id = b.user_id
AND a.email_receiver_id= 20
GROUP BY b.user_id,b.first_name, b.last_name, b.email_id)x
GROUP BY x.user_id, x.first_name, x.last_name, x.email_id;
``` | Using your query as a sub query allows for additional grouping and summartion. Look at the following query:
```
SELECT b.user_id, b.first_name, b.last_name, b.email_id, SUM(b.sharedFiles)
FROM (
SELECT b.user_id,b.first_name, b.last_name, b.email_id, COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_receiver_id = b.user_id
AND a.email_sender_id= 20
GROUP BY b.user_id, b.first_name, b.last_name, b.email_id
UNION
SELECT b.user_id,b.first_name, b.last_name, b.email_id, COUNT(DISTINCT a.file_id) AS sharedFiles
FROM email_history a, users b
WHERE a.email_sender_id = b.user_id
AND a.email_receiver_id= 20
GROUP BY b.user_id, b.first_name, b.last_name, b.email_id
) AS b
GROUP BY b.user_id,b.first_name, b.last_name, b.email_id
``` | how to combine these two results as one in mysql? | [
"",
"mysql",
"sql",
"union",
""
] |
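The pattern in both answers — wrap the `UNION` in a derived table, then `SUM` in an outer `GROUP BY` — reduced to its essentials (Python's `sqlite3`, with literal rows standing in for the sender-side and receiver-side halves; note that `UNION ALL` avoids losing a row when both halves return identical counts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Two halves as the two legs of the UNION would produce for user 17.
rows = conn.execute("""
    SELECT user_id, SUM(sharedFiles) AS sharedFiles
    FROM (
        SELECT 17 AS user_id, 2 AS sharedFiles
        UNION ALL
        SELECT 17, 1
    )
    GROUP BY user_id
""").fetchall()
print(rows)
```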
I am trying to convert this SQL View to sort the month names in order.
Any help would be appreciated:
```
SELECT DATENAME(Month, REQDATE) AS Month, COUNT(WO_NUM) AS Tickets
FROM dbo.TASKS
WHERE (REQDATE >= '6/1/13')
GROUP BY DATENAME(Month, REQDATE)
```
It currently displays the months out of order. | Try this.
```
SELECT MAX(DATENAME(Month, REQDATE)) AS Month, COUNT(WO_NUM) AS Tickets
FROM dbo.TASKS
WHERE (REQDATE >= '6/1/13')
GROUP BY DATEPART(Month, REQDATE)
ORDER BY DATEPART(Month, REQDATE) ASC
```
SQL Fiddle: <http://sqlfiddle.com/#!3/220be/4> | This will display the results with the months listed in calendar order.
```
SELECT DATENAME(Month, REQDATE) AS Month, COUNT(WO_NUM) AS Tickets
FROM dbo.TASKS
WHERE (REQDATE >= '6/1/13')
GROUP BY DATENAME(Month, REQDATE), DATEPART(Month, REQDATE)
ORDER BY DATEPART(Month, REQDATE) ASC
```
I added `DATEPART(Month, REQDATE)` to the GROUP BY clause, which shouldn't affect the grouping, as the result of that DATEPART function will be the same for all dates in each group. | SQL Query Month Name Sort | [
"",
"sql",
"sql-server",
""
] |
I have table:
```
describe tests;
+-----------+-----------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-----------+------+-----+-------------------+-----------------------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| line_id | int(11) | NO | | NULL | |
| test_time | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| alarm_id | int(11) | YES | | NULL | |
| result | int(11) | NO | | NULL | |
+-----------+-----------+------+-----+-------------------+-----------------------------+
```
And I execute query:
```
SELECT avg(result) FROM tests WHERE line_id = 4 ORDER BY test_time LIMIT 5;
```
with which I want to get the average of the 5 latest results.
Still, something is not OK, because the query generates the average of all the table's data.
What can be wrong? | If you want the *last* five rows, then you need to order by the time column in *descending* order:
```
select avg(result)
from (select result
from tests
where line_id = 4
order by test_time desc
limit 5
) t
The earlier answer suggested something like this; for me it works:
```
select avg( id ) from ( select id from rand limit 5) as id;
``` | mysql average latest 5 rows | [
"",
"mysql",
"sql",
""
] |
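The accepted fix — `ORDER BY ... DESC LIMIT 5` in a subquery, then `AVG` outside — is easy to check with Python's `sqlite3` (seven invented rows with results 10 through 70):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (line_id INTEGER, test_time TEXT, result INTEGER)")
conn.executemany("INSERT INTO tests VALUES (4, ?, ?)",
                 [(f"2014-01-0{i}", i * 10) for i in range(1, 8)])

# LIMIT must run before AVG, so it lives in a subquery,
# ordered DESCENDING so the latest rows are picked.
row = conn.execute("""
    SELECT AVG(result) FROM (
        SELECT result FROM tests
        WHERE line_id = 4
        ORDER BY test_time DESC
        LIMIT 5
    )
""").fetchone()
print(row[0])
```

The latest five results are 30, 40, 50, 60, 70, so the average is 50 — not the all-rows average of 40.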
Sorry to bring this topic up again, I've come across several helpful existing posts that are very similar but stackoverflow won't let me post a comment/question on an existing thread because I'm so new (less than 50 points).
Anyways, a lot of the running total posts are great for 1 group / running total only. As soon as I introduce a second group (which I want the running total to start at 0 again) it keeps adding them up.
I've had success using the quirky update, CTE with recursion, etc... but not for multiple groups.
The basic ouput I'm looking for looks like this:
```
NAME DATE DOLLARS RUNNING
John 1/1/2014 5 5
John 1/2/2014 3 8
John 1/2/2014 4 12
John 1/2/2014 8 20
John 1/3/2014 12 32
Matt 1/1/2014 2 2
Matt 1/2/2014 7 9
Matt 1/3/2014 10 19
```
How could I achieve this? Thanks a bunch in advance
Here are some of the articles I've come across that are helpful:
[Partitioning results in a running totals query](https://stackoverflow.com/questions/10832067/partitioning-results-in-a-running-totals-query)
[Using "Update to a local variable" to calculate Grouped Running Totals](https://stackoverflow.com/questions/12698479/using-update-to-a-local-variable-to-calculate-grouped-running-totals) | You can do this using a correlated subquery:
```
select name, date, dollars,
(select sum(dollars)
from table t2
where t2.name = t.name and
t2.date <= t.date
) as running
from table t;
```
EDIT:
If you have multiple rows on the same date, then you need to introduce another ordering criterion. The data in the question has no other order column (such as an id). So, we can create one:
```
with t as (
select t.*, row_number() over (partition by name order by date) as seqnum
from table t
)
select name, date, dollars,
(select sum(dollars)
from t t2
where t2.name = t.name and
t2.seqnum <= t.seqnum
) as running
from t t;
```
The only problem is that the rows within a date will be an in arbitrary order, that could even change between executions of the query. SQL tables are inherently unordered, so you need a column to specify the ordering. | You can use `OUTER APPLY`:
```
SELECT A.[NAME],
A.[DATE],
A.DOLLARS,
SUM(B.[DOLLARS]) RUNNING
FROM YourTable A
OUTER APPLY (SELECT *
FROM YourTable
WHERE [NAME] = A.[NAME]
AND [DATE] <= A.[DATE]) B
GROUP BY A.[NAME],
A.[DATE],
A.DOLLARS
ORDER BY A.[NAME],
A.[DATE],
A.DOLLARS
```
[**Here is a demo**](http://sqlfiddle.com/#!3/ac378/1) for you to try.
The results are:
```
╔══════╦════════════╦═════════╦═════════╗
║ NAME ║ DATE ║ DOLLARS ║ RUNNING ║
╠══════╬════════════╬═════════╬═════════╣
║ John ║ 2014-01-01 ║ 5 ║ 5 ║
║ John ║ 2014-01-02 ║ 3 ║ 8 ║
║ John ║ 2014-01-03 ║ 12 ║ 20 ║
║ Matt ║ 2014-01-01 ║ 2 ║ 2 ║
║ Matt ║ 2014-01-02 ║ 7 ║ 9 ║
║ Matt ║ 2014-01-03 ║ 10 ║ 19 ║
╚══════╩════════════╩═════════╩═════════╝
``` | Running Total (pre 2012) for more than 1 group | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
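The correlated-subquery running total from the accepted answer, with a unique `id` as the tiebreaker it recommends (Python's `sqlite3`; the rows follow the question's example data). The total restarts for each name because the subquery is correlated on it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, name TEXT, dollars INTEGER)")
conn.executemany("INSERT INTO sales (name, dollars) VALUES (?, ?)",
                 [("John", 5), ("John", 3), ("John", 12), ("Matt", 2), ("Matt", 7)])

# Keying on a unique id means rows sharing a date can't be double-counted.
rows = conn.execute("""
    SELECT name, dollars,
           (SELECT SUM(s2.dollars) FROM sales s2
            WHERE s2.name = s.name AND s2.id <= s.id) AS running
    FROM sales s
    ORDER BY s.name, s.id
""").fetchall()
print(rows)
```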
What is the difference between implicit conversion and explicit conversion in SQL Server? | An explicit conversion occurs when you use the `CONVERT` or `CAST` keywords explicitly in your query.
An implicit conversion arises when you have differing datatypes in an expression and SQL Server casts them automatically according to the rules of [datatype precedence](http://technet.microsoft.com/en-us/library/ms190309.aspx).
For example nvarchar has higher precedence than varchar
```
CREATE TABLE Demo
(
X varchar(50) PRIMARY KEY
)
/*Explicit*/
SELECT *
FROM Demo
WHERE CAST(X AS NVARCHAR(50)) = N'Foo'
/*Implicit*/
SELECT *
FROM Demo
WHERE X = N'Foo' /*<-- The N prefix means nvarchar*/
```
The second execution plan shows a predicate of
```
CONVERT_IMPLICIT(nvarchar(50),[D].[dbo].[Demo].[X],0)=[@1]
```
Both the explicit and implicit conversions prevent an index seek in this case. | **Implicit conversions** are not visible to the user. SQL Server automatically converts the data from one data type to another. For example, when a `smallint` is compared to an `int`, the `smallint` is implicitly converted to int before the comparison proceeds.
`GETDATE()` implicitly converts to date style 0. `SYSDATETIME()` implicitly converts to date style 21.
**Explicit conversions** use the `CAST` or `CONVERT` functions.
The CAST and CONVERT functions convert a value (a local variable, a column, or another expression) from one data type to another. For example, the following CAST function converts the numeric value of *$157.27* into a character string of *'157.27'*:
```
CAST ( $157.27 AS VARCHAR(10) )
``` | Difference between Implicit Conversion and Explicit Conversion in SQL Server | [
"",
"sql",
"sql-server",
"implicit-conversion",
""
] |
I have a table that looks something like
```
ID Col1 Col2 Col3 Col4
1 3 5 3 3
```
What I want to do is COUNT the number of `3`s in this particular row.
I have tried the
```
select COUNT(*)
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'TableName' -- but obviously I need WHERE Col1 = 3 OR Col2 = 3...
```
What would be the best way to achieve this? | Based on what OP asked, this can be done
```
select
CASE WHEN Col1 = 3 then 1 ELSE 0 END +
CASE WHEN Col2 = 3 then 1 ELSE 0 END +
CASE WHEN Col3 = 3 then 1 ELSE 0 END +
CASE WHEN Col4 = 3 then 1 ELSE 0 END
From TableName
I don't really enjoy working with PIVOT, so here's a solution using APPLY.
```
SELECT
T.id
, Val
, COUNT(*)
FROM MyTable AS T
CROSS APPLY (
VALUES
(T.C1)
, (T.C2)
, (T.C3)
, (T.C4)
) AS X(Val)
GROUP BY T.Id, X.Val
ORDER BY T.Id, X.val
``` | COUNT the number of columns where a condition is true? SQL Server 2008 R2 | [
"",
"sql",
"sql-server-2008",
"t-sql",
"sql-server-2008-r2",
""
] |
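The accepted CASE-per-column trick above, run against the question's sample row (Python's `sqlite3` standing in for SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, Col1 INTEGER, Col2 INTEGER, Col3 INTEGER, Col4 INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 3, 5, 3, 3)")

# Each CASE contributes 1 when its column equals 3; the sum counts the matches.
rows = conn.execute("""
    SELECT id,
           CASE WHEN Col1 = 3 THEN 1 ELSE 0 END +
           CASE WHEN Col2 = 3 THEN 1 ELSE 0 END +
           CASE WHEN Col3 = 3 THEN 1 ELSE 0 END +
           CASE WHEN Col4 = 3 THEN 1 ELSE 0 END AS threes
    FROM t
""").fetchall()
print(rows)
```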
I need to get a comma separated list of ids as a field for a messy third party api :s This is a simplified version of what I am trying to achieve.
```
| id | name |
|====|======|
| 01 | greg |
| 02 | paul |
| 03 | greg |
| 04 | greg |
| 05 | paul |
SELECT name, {some concatenation function} AS ids
FROM table
GROUP BY name
```
Returning
```
| name | ids |
|======|============|
| greg | 01, 03, 04 |
| paul | 02, 05 |
```
I know MySQL has the GROUP\_CONCAT function, and I was hoping to solve this problem without installing more functions because of the environment. Maybe I can solve this problem using an OVER statement? | A really old question, but as an update, you can use the listagg() function
| id | name |
| --- | --- |
| 01 | greg |
| 02 | paul |
| 03 | greg |
| 04 | greg |
| 05 | paul |
SELECT name, listagg(id) AS ids
FROM table
GROUP BY name
That will return the desire output:
| name | ids |
| --- | --- |
| greg | 01, 03, 04 |
| paul | 02, 05 | | You'll have to use [`OVER()`](https://my.vertica.com/docs/6.1.x/HTML/index.htm#13400.htm) with [`NVL()`](https://my.vertica.com/docs/6.1.x/HTML/index.htm#10435.htm) (you'll have to extend the concatenation for more than 10 instances per name):
```
CREATE TABLE t1 (
id int,
name varchar(10)
);
INSERT INTO t1
SELECT 1 AS id, 'greg' AS name
UNION ALL
SELECT 2, 'paul'
UNION ALL
SELECT 3, 'greg'
UNION ALL
SELECT 4, 'greg'
UNION ALL
SELECT 5, 'paul';
COMMIT;
SELECT name,
MAX(DECODE(row_number, 1, a.id)) ||
NVL(MAX(DECODE(row_number, 2, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 3, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 4, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 5, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 6, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 7, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 8, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 9, ',' || a.id)), '') ||
NVL(MAX(DECODE(row_number, 10, ',' || a.id)), '') id
FROM
(SELECT name, id, ROW_NUMBER() OVER(PARTITION BY name ORDER BY id) row_number FROM t1) a
GROUP BY a.name
ORDER BY a.name;
```
**Result**
```
name | id
------+-------
greg | 1,3,4
paul | 2,5
``` | Concat GROUP BY in Vertica SQL | [
"",
"sql",
"vertica",
""
] |
I have a line chart/graph that I need to include for all service outages we went through in the year. Currently, my SQL can pull data that includes any tickets creating in a ticketing system for these services.
However, if no ticket was created for a service, the service will not show in the rows at all, or it will show but not have 100% uptime for the months where no tickets were created for it.
Example of what I am looking for:
```
MONTH Service1 Service2 Service3
1 100% 99.7% 100%
2 99.8% 100% 96.5%
3 100% 99.8% 100%
```
But what it looks like is this:
```
MONTH Service1 Service2 Service3
1 99.7%
2 99.8% 96.5%
3 99.8%
```
The services get pulled by using a `WHERE [Resource]='Affected Service'` so they are dynamically brought into the table, but no data is pulled for the service if no ticket was created in that month.
Current SQL Coding:
```
WITH rslt (ResourceID, YearNumber, MonthNumber, AvailableMinutes, DowntimeMinutes) AS (
SELECT
ore.ResourceID,
DATEPART(yyyy, ipr.OpenDate_MST) YearNumber,
DATEPART(mm, ipr.OpenDate_MST) MonthNumber,
MAX(CASE
WHEN DATEADD(MONTH, DATEDIFF(MONTH, -1, ipr.OpenDate_MST), -1) = DATEADD(MONTH, DATEDIFF(MONTH, -1, GETDATE()), -1)
THEN (DATEPART(DD, GETDATE()) * 1440.0)
ELSE (DATEPART(DD, DATEADD(MONTH, DATEDIFF(MONTH, -1, ipr.OpenDate_MST), -1)) * 1440.0)
END) AvailableMinutes,
ISNULL(SUM(DATEDIFF(mi, ipr.OutageStartTime, ipr.OutageEndTime)), 0) DowntimeMinutes
FROM
vIncidentProblemRequest ipr
INNER JOIN vOwnedResource ore ON ore.ResourceID = ipr.AffectedServiceID
WHERE
CONVERT(DATETIME, CONVERT(CHAR(10), ipr.OpenDate_MST, 101)) >= '1/1/2013 12:00:00'
AND CONVERT(DATETIME, CONVERT(CHAR(10), ipr.OpenDate_MST, 101)) <= '12/31/2013 11:59:59'
GROUP BY
ore.ResourceID,
DATEPART(yyyy, ipr.OpenDate_MST),
DATEPART(mm, ipr.OpenDate_MST)
),
rslt2 (ResourceID, Application, ResourceClass, YearNumber, MonthNumber, AvailableMinutes, DowntimeMinutes, UptimePercent) AS (
SELECT
ore.ResourceID,
ore.ResourceName Application,
ore.ResourceClass,
rslt.YearNumber,
rslt.MonthNumber,
rslt.AvailableMinutes,
ISNULL(rslt.DowntimeMinutes, 0) DowntimeMinutes,
CASE
WHEN rslt.DowntimeMinutes IS NULL
THEN 1.0
ELSE ((rslt.AvailableMinutes - rslt.DowntimeMinutes)/rslt.AvailableMinutes)
END UptimePercent
FROM
vOwnedResource ore
LEFT OUTER JOIN rslt ON rslt.ResourceID = ore.ResourceID
WHERE
ore.ResourceClass = 'Enterprise Service')
select
MIN(DATEPART(yyyy, d.Date)) Year,
MIN(DATEPART(mm, d.Date)) MonthNum,
SUBSTRING(MIN(DATENAME(mm, d.Date)), 1, 3) Month,
r.Application,
r.ResourceClass,
CASE
WHEN r.UptimePercent IS NULL
THEN 1.0
ELSE r.UptimePercent
END UptimePercent
FROM
DimDate d
INNER JOIN rslt2 r ON r.YearNumber = datepart(yyyy, d.Date) AND r.MonthNumber = datepart(mm, d.Date)
WHERE
d.Date >= '1/1/2013 12:00:00'
AND d.Date <= '12/31/2013 11:59:59'
GROUP BY
datepart(yyyy, d.Date),
datepart(mm, d.Date),
DATENAME(mm, d.Date),
r.Application,
r.ResourceClass,
r.UptimePercent
ORDER BY
4,1,2
``` | How about this-
Create a temp table having all service values as 100% for each month.
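For example, the seeding step might look like this sketch (the hard-coded 12-month list, the temp table name, and the 1.0 starting value are assumptions; `vOwnedResource` and `ResourceClass` come from the question's query):

```
SELECT ore.ResourceID,
       m.MonthNumber,
       CAST(1.0 AS decimal(6,4)) AS UptimePercent  -- start every service at 100%
INTO #uptime
FROM vOwnedResource ore
CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),
                   (7),(8),(9),(10),(11),(12)) AS m(MonthNumber)
WHERE ore.ResourceClass = 'Enterprise Service';
```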
After that, update the temp table with the real values based on your filter condition. | It would really help if you showed the query. However, somewhere along the way, you can solve your problem using `coalesce()`. Assuming the values are really numbers between 0 and 100:
```
select month, coalesce(service1, 100) as service1,
coalesce(service2, 100) as service2, coalesce(service3, 100) as service3
``` | How to select Months for data without any data existing in SQL | [
"",
"sql",
"select",
"common-table-expression",
""
] |
I have spent quite some time dealing with the following:
Imagine that you have **N** groups with multiple records each, and every record has **unique** `starting` and `ending` points.
In other words:
```
ID|GroupName|StartingPoint|EndingPoint|seq(row_number)|desired_seq
__|_________|_____________|___________|_______________|____________
1 | Grp1 |2014-01-06 |2014-01-07 |1 |1
__|_________|_____________|___________|_______________|____________
2 | Grp1 |2014-01-07 | 2014-01-08|2 |2
__|_________|_____________|___________|_______________|____________
3 | Grp2 |2014-01-08 | 2014-01-09|1 |1
__|_________|_____________|___________|_______________|____________
4 | Grp1 |2014-01-09 | 2014-01-10|3 |1
__|_________|_____________|___________|_______________|____________
5 | Grp2 |2014-01-10 | 2014-01-11|2 |1
__|_________|_____________|___________|_______________|____________
```
As you can see, the `starting point` for every consecutive record is the same as the `ending point` of the previous.
Basically, I would like to obtain the minimums and maximums for each group based on the dates. Once a record with a new group name appears, it should be treated as a new group and the sequencing reset.
A single `row_number()` function is not sufficient for this task, since it doesn't reflect the changes in the group names. (I have included a seq column in the sample data which represents the values generated by row_number.)
Desired result based on the sample data:
```
1 Grp1 |2014-01-06 | 2014-01-08
2 Grp2 |2014-01-08 | 2014-01-09
3 Grp1 |2014-01-09 | 2014-01-10
4 Grp2 |2014-01-10 | 2014-01-11
```
What I have tried:
```
;with cte as(
select *
, row_number() over (partition by GroupName order by startingpoint) as seq
from table1
)
select *
into #temp2
from cte t1
left join cte t2 on t1.id=t2.id and t1.seq= t2.seq-1
select *
,(select startingPoint from #temp2 t2 where t1.id=t2.id and t2.seq= (select MIN(seq) from #temp2)) as Oldest,
(select startingPoint from #temp2 t2 where t1.id=t2.id and t2.seq= (select MAX(seq) from #temp2)) as MostRecent
from #temp2 t1
``` | This is a `gaps-and-islands` problem with subgrouping. The trick is grouping by the difference between two ROW\_NUMBER() values, one partitioned and one unpartitioned.
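To see why the difference works, the inner query can be run on its own with both counters visible (a sketch; `#test` is the table name the full query below uses):

```
SELECT
    GroupName,
    StartingPoint,
    ROW_NUMBER() OVER(PARTITION BY GroupName ORDER BY StartingPoint) AS rn_in_group,
    ROW_NUMBER() OVER(ORDER BY StartingPoint) AS rn_overall,
    ROW_NUMBER() OVER(PARTITION BY GroupName ORDER BY StartingPoint)
    - ROW_NUMBER() OVER(ORDER BY StartingPoint) AS SubGroupId
FROM #test;
```

Inside an unbroken run of one group, both counters step together, so their difference is constant; as soon as another group interrupts, the difference changes, giving each island its own `SubGroupId`.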
```
WITH t AS (
SELECT
GroupName,
StartingPoint,
EndingPoint,
ROW_NUMBER() OVER(PARTITION BY GroupName ORDER BY StartingPoint)
- ROW_NUMBER() OVER(ORDER BY StartingPoint) AS SubGroupId
FROM #test
)
SELECT
ROW_NUMBER() OVER (ORDER BY MIN(StartingPoint)) AS SortOrderId,
GroupName AS GroupName,
MIN(StartingPoint) AS GroupStartingPoint,
MAX(EndingPoint) AS GroupEndingPoint
FROM t
GROUP BY GroupName, SubGroupId
ORDER BY SortOrderId
``` | Not sure, but maybe:
```
SELECT DISTINCT
GroupName,
MIN(StartingPoint) OVER (PARTITION BY GroupName ORDER BY Id),
MAX(EndingPoint) OVER (PARTITION BY GroupName ORDER BY Id)
FROM table1
```
Because `partition` does not reduce the number of rows, the result initially contains duplicated entries, which are removed with `distinct`. | Identifying the boundaries of N-groups | [
"",
"sql",
"sql-server",
"sql-server-2008",
"gaps-and-islands",
""
] |
I have a query where I need to check a date between two dates using Oracle. Whenever I run the code I get an ORA-01843: not a valid month error. However, whenever I remove either of the two parameters from the SQL it works fine; trying to use two date parameters throws an error. What am I missing?
```
StringBuilder sql = new StringBuilder();
DateTime yearBegin = new DateTime(Convert.ToInt32(taxYear) + 1, 1, 1);
DateTime yearEnd = new DateTime(Convert.ToInt32(taxYear) + 1, 12, 31);
sql.Append(
"SELECT * FROM TABLE WHERE FIELD = '1099' AND CREATED_DT >= TO_DATE(:createdYearBegin, 'MM/DD/YYYY') AND CREATED_DT <= TO_DATE(:createdYearEnd, 'MM/DD/YYYY') AND SSN = :ssn");
try
{
using (OracleConnection cn = new OracleConnection(ConfigurationManager.AppSettings["cubsConnection"]))
using (OracleCommand cmd = new OracleCommand(sql.ToString(), cn))
{
cmd.Parameters.Add("ssn", ssn);
cmd.Parameters.Add("createdYearBegin", yearBegin.ToShortDateString());
cmd.Parameters.Add("createdYearEnd", yearEnd.ToShortDateString());
cn.Open();
OracleDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
ret = dr.HasRows;
}
}
``` | I think you've got a problem with your parameters' order.
If you don't bind parameters by name, they are bound by position (meaning the order in which you add the parameters is what counts).
Just try to add :
```
cmd.BindByName = true;
``` | You expect the date formatted as `MM/DD/YYYY`, but it is not guaranteed that `ToShortDateString()` returns it in this format. A format specification is needed. But then, I do not even know what programming language you are using, to provide further help... | 2nd Date Parameter Throws ORA-01843: not a valid month error | [
"",
"sql",
"oracle",
"datetime",
"oraclecommand",
""
] |
[**TABLE STRUCTURE AND THE ERROR WHEN EXECUTING THE QUERY ON SQLFIDDLE**](http://sqlfiddle.com/#!2/bd60dc/32)
QUERY:
```
SELECT
n.Type as Type,
n.UserIdn as UserIdn,
u.Username as Username,
n.NewsIdn as NewsIdn,
n.Header as Header,
n.Text as Text,
n.Tags as Tags,
n.ImageLink as ImageLink,
n.VideoLink as VideoLink,
n.DateCreate as DateCreate,
SUM(l.Type = 'up') as Uplikes,
SUM(l.Type = 'down') as Downlikes,
(SUM(l.Type = 'up') - SUM(l.Type = 'down')) as SumLikes
FROM News n
INNER JOIN Users u ON n.UserIdn = u.UserIdn
LEFT JOIN Likes l ON n.NewsIdn = l.NewsIdn
WHERE SumLikes > 20 AND n.DateCreate < STR_TO_DATE('2014-01-03 14:17:43', '%Y-%m-%d %H:%i:%s')
GROUP BY n.id
ORDER by SumLikes DESC
```
When I use the query I get this error:
```
Unknown column 'SumLikes' in 'where clause'...
```
I know the error is with 'SumLikes', but how do I make the SQL query correctly?
Please tell me how to do it. | Wrap it in a subquery:
```
SELECT * FROM
(
SELECT
n.Type as Type,
n.UserIdn as UserIdn,
u.Username as Username,
n.NewsIdn as NewsIdn,
n.Header as Header,
n.Text as Text,
n.Tags as Tags,
n.ImageLink as ImageLink,
n.VideoLink as VideoLink,
n.DateCreate as DateCreate,
SUM(l.Type = 'up') as Uplikes,
SUM(l.Type = 'down') as Downlikes,
(SUM(l.Type = 'up') - SUM(l.Type = 'down')) as SumLikes
FROM News n
INNER JOIN Users u ON n.UserIdn = u.UserIdn
LEFT JOIN Likes l ON n.NewsIdn = l.NewsIdn
WHERE n.DateCreate < STR_TO_DATE('2014-01-03 14:17:43', '%Y-%m-%d %H:%i:%s')
GROUP BY n.id
) Sub
WHERE Sub.SumLikes > 20
ORDER by Sub.SumLikes DESC
``` | You can't use aggregated fields like `sum()` in the where clause; you should use `having`.
The end of your query could look like:
```
GROUP BY n.id
HAVING SumLikes > 20
ORDER by SumLikes DESC
```
Of course, remove `SumLikes > 20` from your `where`.
Check this out for more information: [MySQL Extensions to GROUP BY](http://dev.mysql.com/doc/refman/5.0/en/group-by-extensions.html) | how correctly execute query? | [
"",
"mysql",
"sql",
"select",
"group-by",
"having",
""
] |
I want to know which fields are appropriate to use in a review table.
Requirements for the review table:
```
> review will be given by customer
* review in points
* review in description
* review in videos (instead of text can upload review video in which it present review description)
* comments on reviews.
* restriction to display review in public or not
> review for menuItem
> review for restaurants
> review for packs / coupns available
```
Can anyone who has worked on a review system help me? | As per my senior's suggestion, it needs to be like this:
**user\_reviews**
```
- id
- restaurant_menu_item_id
- restaurant_id
- customer_id
- review_title : title like wow / superb / or ...
- review_points : from 1 - 10
- description : limit it to ...
- review_video_id
- created_by
- created_date
- update_date
- status
```
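Expressed as MySQL DDL, the list above might look like this sketch (the types and sizes are my assumptions, not requirements):

```
CREATE TABLE user_reviews (
    id                      INT AUTO_INCREMENT PRIMARY KEY,
    restaurant_menu_item_id INT NULL,
    restaurant_id           INT NOT NULL,
    customer_id             INT NOT NULL,
    review_title            VARCHAR(100),   -- e.g. wow / superb / ...
    review_points           TINYINT,        -- 1 - 10
    description             VARCHAR(1000),  -- limited length
    review_video_id         INT NULL,
    created_by              INT,
    created_date            DATETIME,
    update_date             DATETIME,
    status                  TINYINT         -- e.g. public / hidden flag
);
```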
I think this works fine for now as well as for the future (I mean it allows me to change it in the future if required). | `Review` Table Contains
```
1. reviewId (INT, Primary key, Auto Increment)
2. restaurantId (INT, ForeignKey of restaurant table)
3. userId (INT, ForeignKey of user_master table)
4. menuItemId (INT, ForeignKey of menuItem table)
5. couponId (INT, ForeignKey of packs / coupons table)
6. title (VARCHAR)
7. comment (VARCHAR or TEXT)
8. reviewePoints (TINYINT)
9. reviewedOn (DATETIME)
10. status (TINYINT)
``` | want to design review table for restaurants | [
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
How do I return all the characters from a string and count each of them in SQL?
If the string is "how are you",
it should return:
```
char count
2
h 1
o 2
w 1
a 1
r 1
e 1
y 1
u 1
``` | You can use this script. It will give you exactly what you need.
This one counts just the letters in the string.
```
declare @c int
declare @ch varchar(10)
declare @str varchar(max)
set @str = 'how are you'
declare @letter int
declare @i int
set @i = 1
create table #tbl(ch varchar(10), cnt int)
while (@i <= len(@str))
begin
set @letter = 0
set @ch = substring(@str, @i, 1)
select @c = count(*) from #tbl
where ch = @ch
if ( (@ch >= 'a' and @ch <= 'z') or (@ch >= 'A' and @ch <= 'Z') )
begin
set @letter = 1
end
if (@c = 0)
begin
if (@letter = 1)
begin
insert into #tbl (ch, cnt) values (@ch, 1)
end
end
else
begin
update #tbl set cnt = cnt + 1 where ch = @ch
end
set @i = @i + 1
end
select * from #tbl
drop table #tbl
```
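As a side note, the same letters-only count can be done set-based, without the `WHILE` loop — a sketch only (it assumes the string fits within the recursion limit, and the `[a-zA-Z]` range check depends on collation):

```
DECLARE @str varchar(max) = 'how are you';

WITH n AS (
    -- generate one row per character position
    SELECT 1 AS i
    UNION ALL
    SELECT i + 1 FROM n WHERE i < LEN(@str)
)
SELECT SUBSTRING(@str, i, 1) AS ch, COUNT(*) AS cnt
FROM n
WHERE SUBSTRING(@str, i, 1) LIKE '[a-zA-Z]'   -- keep letters only
GROUP BY SUBSTRING(@str, i, 1)
OPTION (MAXRECURSION 32767);
```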
And if you want to count all chars (not just letters),
this makes it even easier. Use this script.
```
declare @c int
declare @ch varchar(10)
declare @str varchar(max)
set @str = 'how are you'
declare @i int
set @i = 1
create table #tbl(ch varchar(10), cnt int)
while (@i <= len(@str))
begin
set @ch = substring(@str, @i, 1)
select @c = count(*) from #tbl
where ch = @ch
if (@c = 0)
begin
insert into #tbl (ch, cnt) values (@ch, 1)
end
else
begin
update #tbl set cnt = cnt + 1 where ch = @ch
end
set @i = @i + 1
end
select * from #tbl
drop table #tbl
``` | This will return the result set you have requested. It does this by taking each letter and adding it to a new row within a temporary table and then querying the results to return the counts for each occurrence of the character.
```
DECLARE @individual CHAR(1);
DECLARE @text NVARCHAR(200)
SET @text = 'how are you';
IF OBJECT_ID('tempdb..#tmpTable') IS NOT NULL
DROP TABLE #tmpTable
CREATE TABLE #tmpTable (letter char(1));
WHILE LEN(@text) > 0
BEGIN
SET @individual = SUBSTRING(@text, 1, 2)
INSERT INTO #tmpTable (letter) VALUES (@individual);
SET @text = SUBSTRING(@text, 2, LEN(@text))
END
SELECT letter, COUNT(*) AS [count]
FROM #tmpTable
GROUP BY letter;
``` | return separate character from a string | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I need to reverse the word positions in a sentence or string.
```
For example : "Hello World! I Love StackOverflow", to be displayed as "StackOverflow Love I World! Hello".
```
Can it be done with SQL? The string length is no greater than `VARCHAR2(4000)`, which is the maximum length supported in an Oracle `VARCHAR2` table column.
I only found solutions for reversing a string (characters in reverse order). | *Create a **Function**:*
`REGEXP_SUBSTR('Your text here','[^ ]+', 1, ?)` will extract a word from the text using a space as the delimiter. It returns the original string itself on exception!
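For instance, pulling out the second token on its own:

```
SELECT REGEXP_SUBSTR('Hello World! I Love StackOverflow', '[^ ]+', 1, 2) AS word
FROM dual;
-- WORD
-- ------
-- World!
```

The function below simply loops that occurrence counter until no more words are found, prepending each word to the result.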
```
CREATE OR REPLACE FUNCTION reverse_words (v_STRING IN VARCHAR2)
RETURN VARCHAR2
IS
L_TEMP_TEXT VARCHAR2(4000);
L_FINAL_TEXT VARCHAR2(4000);
V_LOOPCOUNT NUMBER :=0;
T_WORD VARCHAR2(4000);
BEGIN
L_TEMP_TEXT := regexp_replace(V_STRING,'[[:space:]]+',' '); -- Replace multiple spaces as single
LOOP
v_LOOPCOUNT := v_LOOPCOUNT+1;
T_WORD := REGEXP_SUBSTR(L_TEMP_TEXT,'[^ ]+', 1, V_LOOPCOUNT);
L_final_TEXT := T_WORD||' '||L_final_TEXT;
EXIT WHEN T_WORD IS NULL;
END LOOP;
RETURN(TRIM(L_final_TEXT));
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(sqlerrm||chr(10)||dbms_utility.format_error_backtrace);
RETURN V_STRING;
END reverse_words;
/
```
***Sample Result:***
You can call `reverse_words(yourcolumn) from your_table`
```
SQL> select reverse_words('Hello World! I Love StackOverflow') "Reversed" from dual;
Reversed
--------------------------------------------------------------------------------
StackOverflow Love I World! Hello
``` | XML-based version to avoid defining your own function; requires 11g for `listagg()`:
```
select listagg(word, ' ') within group (order by rn desc) as reversed
from (
select word, rownum as rn
from xmltable('for $i in ora:tokenize($STR, " ") return $i'
passing 'Hello World! I Love StackOverflow' as str
columns word varchar2(4000) path '.'
)
);
REVERSED
----------------------------------------
StackOverflow Love I World! Hello
```
The `XMLTable()` does the tokenising, and assigns a row number:
```
select rownum as rn, word
from xmltable('for $i in ora:tokenize($STR, " ") return $i'
passing 'Hello World! I Love StackOverflow' as str
columns word varchar2(4000) path '.'
);
RN WORD
---------- --------------------
1 Hello
2 World!
3 I
4 Love
5 StackOverflow
```
The `listagg()` then pieces it back together in reverse order. | Reverse String Word by Word using SQL | [
"",
"sql",
"oracle",
"plsql",
""
] |
Is there any function similar to the MySQL `BIT_COUNT` function in MSSQL? I want to create a very simple Hamming distance function in MSSQL that I can use in my selects.
Here is what I have for MySQL:
```
CREATE FUNCTION `HAMMINGDISTANCE`(`hasha` BIGINT, `hashb` BIGINT)
RETURNS int(11)
DETERMINISTIC
RETURN
BIT_COUNT(hasha^hashb)
``` | Why not just write your own bit\_count code in T-SQL? There's no need to use SQL CLR if all you need is to count the number of set bits in a bigint. Here is an example:
```
CREATE FUNCTION bit_count
(
@pX bigint
)
RETURNS int
AS
BEGIN
DECLARE @lRet integer
SET @lRet = 0
WHILE (@pX != 0)
BEGIN
SET @lRet = @lRet + (@pX & 1)
SET @pX = @pX / 2
END
return @lRet
END
GO
```
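Combined with XOR, the Hamming distance then becomes a one-liner on top of this function (assuming it was created in the default `dbo` schema):

```
-- 11 = 1011b, 2 = 0010b  ->  11 ^ 2 = 1001b  ->  2 set bits
SELECT dbo.bit_count(11 ^ 2) AS hamming_distance;  -- 2
```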
Also, here's a [fiddle](http://sqlfiddle.com/#!3/f6c09/5) you can try to see this function in action.
Be advised that this algorithm only works on non-negative bigints. If you are looking for an algorithm that works on negative bigints, see [this link](http://www.dbforums.com/microsoft-sql-server/1630934-bit-counting.html). | I just wrote a HAMMINGDISTANCE using the provided code by dcp and it seems to work.
```
CREATE FUNCTION [dbo].[HAMMINGDISTANCE](@hasha BIGINT, @hashb BIGINT)
RETURNS int
AS
BEGIN
DECLARE @pX BIGINT
DECLARE @lRet integer
SET @lRet = 0
SET @pX = @hasha ^ @hashb
WHILE (@pX != 0)
BEGIN
SET @lRet = @lRet + (@pX & 1)
SET @pX = @pX / 2
END
return @lRet
END
GO
``` | MSSQL BIT_COUNT (Hammingdistance) | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a table variable, and when attempting to filter down the data I have a condition that must be met.
User Story -- When HCPC\_CODE Contains LCS AND the Allowed\_AMT > 0 Then display these records
What I am seeing is that I put parentheses around the conditions, but it treats the ALLOWED_AMT > 0 as if it were just another condition in the WHERE clause.
Thus:
```
select * from @billingEdit where (HCPC_CODE NOT IN ('LCS%') AND ALLOWED_AMT > 0)
```
And
```
select * from @billingEdit where ALLOWED_AMT > 0
```
They both return 1,056 records, even though I changed the query to omit the 'NOT' and even used a LIKE instead of the 'IN'.
I must be super rusty with SQL Server, as this should not be hard to do...
**Would I want to do something like this instead?**
```
select * from @billingEdit where
((HCPC_CODE LIKE('LCS%') AND ALLOWED_AMT > 0)
OR
(HCPC_CODE NOT LIKE('LCS%')
))
```
---
```
select * from @billingEdit currently gives 1,069 records
select * from @billingEdit where (HCPC_CODE LIKE ('%LCS%') AND ALLOWED_AMT > 0) gives zero records as in table variable there are NO LCS (changes every hour)
select * from @billingEdit where ALLOWED_AMT > 0 -- gives 1,056 records
select * from @billingEdit where HCPC_CODE like 'LCS%' -- gives 0 records as expected for right now
``` | Your third query should work. Just change `LCS%` to `%LCS%`. This would handle `HCPC_CODE` that begins with `LCS`, ends with `LCS` or has `LCS` in the middle. Considering your `HCPC_CODE` could also be null, your query should look something like this -
```
select * from @billingEdit
where (HCPC_CODE like '%LCS%' and ALLOWED_AMT > 0)
or (isnull(HCPC_CODE,'') not like '%LCS%');
```
Here's the sqlfiddle that I have tried -> `Demo` | Unless I am missing something, it looks like you just need to change the "NOT IN" to "LIKE" and add your wildcard to the beginning of 'LCS'.
Also, you don't need the parentheses around the 'LCS'.
```
select * from @billingEdit where (HCPC_CODE LIKE '%LCS%' AND ALLOWED_AMT > 0) OR HCPC_CODE NOT LIKE '%LCS%'
``` | Sql Server is not combining the results , it is becoming mutually exclusive | [
"",
"sql",
"sql-server",
"t-sql",
"where-clause",
""
] |
I have a requirement where we need to modify a column's default value in a database table. The table already exists in the database, and currently the default value of the column is NULL.
Now, if I add a new default value to this column, then (if I am correct) it updates all the existing NULLs in the column to the new default value. Is there a way to avoid this but still set a new default value on the column?
I mean, I do not want the existing NULLs to be updated; I want them to remain NULLs.
Any help on this is appreciated.
Thanks | Your belief about what will happen is not correct. Setting a default value for a column will not affect the existing data in the table.
I create a table with a column `col2` that has no default value
```
SQL> create table foo(
2 col1 number primary key,
3 col2 varchar2(10)
4 );
Table created.
SQL> insert into foo( col1 ) values (1);
1 row created.
SQL> insert into foo( col1 ) values (2);
1 row created.
SQL> insert into foo( col1 ) values (3);
1 row created.
SQL> select * from foo;
COL1 COL2
---------- ----------
1
2
3
```
If I then alter the table to set a default value, nothing about the existing rows will change
```
SQL> alter table foo
2 modify( col2 varchar2(10) default 'foo' );
Table altered.
SQL> select * from foo;
COL1 COL2
---------- ----------
1
2
3
SQL> insert into foo( col1 ) values (4);
1 row created.
SQL> select * from foo;
COL1 COL2
---------- ----------
1
2
3
4 foo
```
Even if I subsequently change the default again, there will still be no change to the existing rows
```
SQL> alter table foo
2 modify( col2 varchar2(10) default 'bar' );
Table altered.
SQL> select * from foo;
COL1 COL2
---------- ----------
1
2
3
4 foo
SQL> insert into foo( col1 ) values (5);
1 row created.
SQL> select * from foo;
COL1 COL2
---------- ----------
1
2
3
4 foo
5 bar
``` | ```
ALTER TABLE *table_name*
MODIFY *column_name* DEFAULT *value*;
```
worked in Oracle
e.g:
```
ALTER TABLE MY_TABLE
MODIFY MY_COLUMN DEFAULT 1;
``` | Alter table to modify default value of column | [
"",
"sql",
"oracle",
"alter-table",
""
] |
I have a SQL query which brings me info about tablespaces and their sizes. I'm using this query for an Oracle database.
**QUERY**
```
SELECT
df.tablespace_name tablespace,
(df.total_space_mb - fs.free_space_mb) kullanilan_alan,
fs.free_space_mb bos_alan,
ROUND(100 * ((df.total_space - fs.free_space) / df.total_space), 2) doluluk_orani
FROM (
SELECT
tablespace_name,
SUM(bytes) total_space,
ROUND(SUM(bytes) / 1048576) total_space_mb
FROM dba_data_files
GROUP BY tablespace_name) df,
(
SELECT
tablespace_name,
SUM(bytes) free_space,
ROUND(SUM(bytes) / 1048576) free_space_mb
FROM dba_free_space
GROUP BY tablespace_name) fs
WHERE df.tablespace_name = fs.tablespace_name (+)
ORDER BY fs.tablespace_name;
```
This gives me a result like this:
system - 322 - 23 - 92.32
The query above shows me 4 columns. I created a new table which has 5 columns, including a date field. So I need to copy the result of the query above, plus the date. Here is an example:
system - 322 - 23 - 92.32 - 01/09/2014
I'm not good at SQL, so I tried many things but failed. | You can always use a "constant" value in a select (and so in an insert ... select),
so this is perfectly valid:
```
insert into table2 (id, f1, f2, f3, f4, fdate, farbitrary)
select id, field1, field2, field3, field4, SYSDATE, 'I put what I want here'
from table1
```
In this case, id, field1, field2, field3, field4 are fields of table1, while `SYSDATE` is... the current date and time (and `'I put what I want here'` is a "constant" string). | You can get today's date with `SYSDATE` and append it to the select list for inserting into the new table.
```
INSERT INTO newTable(colA,colB,colC,colD,DateColumn)
SELECT df.tablespace_name TABLESPACE,
(df.total_space_mb - fs.free_space_mb) KULLANILAN_ALAN,
fs.free_space_mb BOS_ALAN,
ROUND(100 * ((df.total_space - fs.free_space) / df.total_space),2) DOLULUK_ORANI ,
SYSDATE
FROM (SELECT tablespace_name, SUM(bytes) TOTAL_SPACE,
ROUND(SUM(bytes) / 1048576) TOTAL_SPACE_MB
FROM dba_data_files
GROUP BY tablespace_name) df,
(SELECT tablespace_name, SUM(bytes) FREE_SPACE,
ROUND(SUM(bytes) / 1048576) FREE_SPACE_MB
FROM dba_free_space
GROUP BY tablespace_name) fs
WHERE df.tablespace_name = fs.tablespace_name(+)
ORDER BY fs.tablespace_name;
``` | Inserting columns to another table | [
"",
"sql",
"oracle",
""
] |
I need to write a query to pull out the records as below.
```
membership_id,
person_id,
first_name
last_name
```
who joined or called yesterday; members who didn't join will have only a person\_id.
But the query below is pulling out all the records from the table.
```
SELECT
dbo.pn_membership.membership_id,
dbo.pn.person_id,
dbo.pn.first_name,
dbo.pn.surname,
dbo.pn.create_datetime
FROM
dbo.pn
LEFT OUTER JOIN
dbo.pn_membership
ON dbo.pn.person_id=dbo.pn_membership.person_id AND
dbo.pn.create_datetime >=getdate()-1
```
I need the records only for the day before the run date. | Try this version.
You write in your question "who joined or called yesterday",
but your query does "who joined or called in the last 24 hours"
which is kind of different. Also, as Sparky noted, you had the missing WHERE clause problem. My version does "who joined or called yesterday".
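The `DATEADD`/`DATEDIFF` pair used below truncates a datetime to midnight, which is what makes the day boundaries exact; for example:

```
-- DATEDIFF(Day, 0, GETDATE()) counts whole days since day 0 (1900-01-01);
-- adding them back onto -1 lands on yesterday at 00:00:00.
SELECT DATEADD(Day, DATEDIFF(Day, 0, GETDATE()), -1) AS yesterday_midnight,
       DATEADD(Day, DATEDIFF(Day, 0, GETDATE()), 0) AS today_midnight;
```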
```
SELECT
dbo.pn_membership.membership_id,
dbo.pn.person_id,
dbo.pn.first_name,
dbo.pn.surname,
dbo.pn.create_datetime
FROM
dbo.pn
LEFT OUTER JOIN
dbo.pn_membership
ON dbo.pn.person_id=dbo.pn_membership.person_id
WHERE
dbo.pn.create_datetime >= DATEADD(Day, DATEDIFF(Day, 0, getdate()), -1)
AND
dbo.pn.create_datetime < DATEADD(Day, DATEDIFF(Day, 0, getdate()), 0)
``` | Try this:
```
SELECT
dbo.pn_membership.membership_id,
dbo.pn.person_id,
dbo.pn.first_name,
dbo.pn.surname,
dbo.pn.create_datetime
FROM
dbo.pn
LEFT OUTER JOIN
dbo.pn_membership
ON dbo.pn.person_id=dbo.pn_membership.person_id
WHERE dbo.pn.create_datetime >=getdate()-1
```
Your query says...
* Give me some fields from the pn table.
* Also, if the person has matching membership record, give me that
information
* if they don't give me the fields from the membership table with NULL
values
By moving the date test condition to the WHERE clause, you are reducing the rows from the pn table. By applying date as part of the JOIN, you are only increasing the likelihood of getting more NULL value columns from the membership table.. | T-SQL programming | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I have a table like this
```
| ID | DATE | DURATION | STATUS | DUPLICATION |
|----|---------------------------|----------|------------|-------------|
| 1 | January, 08 2014 19:30:12 | 00:00:03 | Aborted | (null) |
| 2 | January, 08 2014 19:30:12 | 00:00:06 | Dropped | (null) |
| 3 | January, 08 2014 05:25:11 | 00:00:20 | Connected | Multiple |
| 4 | January, 08 2014 05:19:21 | 00:00:21 | Connected | Repeat |
| 5 | January, 08 2014 05:12:56 | 00:00:20 | Connected | Unique |
| 6 | January, 08 2014 04:46:56 | 00:00:41 | Unanswered | Multiple |
| 7 | January, 08 2014 04:42:56 | 00:00:35 | Unanswered | Repeat |
| 8 | January, 08 2014 04:34:56 | 00:00:31 | Unanswered | Unique |
| 9 | January, 09 2014 12:06:32 | 00:00:06 | Dropped | (null) |
| 10 | January, 09 2014 12:06:32 | 00:00:03 | Aborted | (null) |
| 11 | January, 09 2014 05:25:11 | 00:00:20 | Connected | Multiple |
| 12 | January, 09 2014 05:19:21 | 00:00:21 | Connected | Repeat |
| 13 | January, 09 2014 05:12:56 | 00:00:20 | Connected | Unique |
| 14 | January, 09 2014 04:46:56 | 00:00:41 | Unanswered | Multiple |
| 15 | January, 09 2014 04:42:56 | 00:00:35 | Unanswered | Repeat |
| 16 | January, 09 2014 04:34:56 | 00:00:31 | Unanswered | Unique |
| 17 | January, 09 2014 12:19:01 | 00:00:20 | Aborted | (null) |
| 18 | January, 09 2014 12:19:01 | 00:00:19 | Connected | Repeat |
| 19 | January, 09 2014 12:15:30 | 00:00:20 | Aborted | (null) |
| 20 | January, 09 2014 12:15:30 | 00:00:19 | Connected | Unique |
| 21 | January, 09 2014 07:25:11 | 00:00:41 | Connected | Multiple |
| 22 | January, 09 2014 07:19:21 | 00:00:27 | Connected | Repeat |
| 23 | January, 09 2014 07:12:56 | 00:00:20 | Connected | Unique |
| 24 | January, 09 2014 06:46:56 | 00:00:32 | Unanswered | Unique |
| 25 | January, 09 2014 06:42:56 | 00:00:29 | Aborted | (null) |
| 26 | January, 09 2014 06:34:56 | 00:00:27 | Aborted | (null) |
| 27 | January, 09 2014 05:34:56 | 00:00:27 | Aborted | (null) |
| 28 | January, 09 2014 05:25:11 | 00:00:41 | Connected | Multiple |
| 29 | January, 09 2014 05:19:21 | 00:00:27 | Connected | Repeat |
| 30 | January, 09 2014 05:12:56 | 00:00:20 | Connected | Unique |
```
I want to update only the date but don't know where to start. I mean, in the datetime field `date`, I want to update the date '2014-01-08' to '2014-01-01'. How can I do that?
I have tried this:
```
UPDATE calldate cd
INNER JOIN calldate cdl ON cdl.id = cd.id
SET cd.date = '' /* Don't know what to do here */
WHERE DATE(cd.date) = '2014-01-08'
```
Here is the [Fiddle Structure](http://sqlfiddle.com/#!2/92ba6/1) | One option is to use this:
```
UPDATE `calldata`
SET `date` = DATE_SUB(`date`, INTERVAL 7 DAY)
WHERE DATE(`date`) = '2014-01-08'
```
Fiddle: <http://sqlfiddle.com/#!2/28d71/1>
For better performance use the following query because index (if any) can be used by MySQL as `DATE()` is not on the left side of comparison operator:
```
UPDATE `calldata`
SET `date` = DATE_SUB(`date`, INTERVAL 7 DAY)
WHERE `date` >= '2014-01-08'
AND `date` < '2014-01-09'
```
Fiddle: <http://sqlfiddle.com/#!2/7e7b2/1> | Try this:
```
UPDATE calldata
SET date = CONCAT('2014-01-01 ', TIME(date))
WHERE DATE(date) = '2014-01-08'
```
Check the [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/53279/1)
**OUTPUT**
```
| ID | DATE | DURATION | STATUS | DUPLICATION |
|----|--------------------------------|--------------------------------|------------|-------------|
| 1 | January, 01 2014 19:30:12+0000 | January, 01 1970 00:00:03+0000 | Aborted | (null) |
| 2 | January, 01 2014 19:30:12+0000 | January, 01 1970 00:00:06+0000 | Dropped | (null) |
| 3 | January, 01 2014 05:25:11+0000 | January, 01 1970 00:00:20+0000 | Connected | Multiple |
| 4 | January, 01 2014 05:19:21+0000 | January, 01 1970 00:00:21+0000 | Connected | Repeat |
| 5 | January, 01 2014 05:12:56+0000 | January, 01 1970 00:00:20+0000 | Connected | Unique |
| 6 | January, 01 2014 04:46:56+0000 | January, 01 1970 00:00:41+0000 | Unanswered | Multiple |
| 7 | January, 01 2014 04:42:56+0000 | January, 01 1970 00:00:35+0000 | Unanswered | Repeat |
| 8 | January, 01 2014 04:34:56+0000 | January, 01 1970 00:00:31+0000 | Unanswered | Unique |
| 9 | January, 09 2014 12:06:32+0000 | January, 01 1970 00:00:06+0000 | Dropped | (null) |
| 10 | January, 09 2014 12:06:32+0000 | January, 01 1970 00:00:03+0000 | Aborted | (null) |
| 11 | January, 09 2014 05:25:11+0000 | January, 01 1970 00:00:20+0000 | Connected | Multiple |
| 12 | January, 09 2014 05:19:21+0000 | January, 01 1970 00:00:21+0000 | Connected | Repeat |
| 13 | January, 09 2014 05:12:56+0000 | January, 01 1970 00:00:20+0000 | Connected | Unique |
| 14 | January, 09 2014 04:46:56+0000 | January, 01 1970 00:00:41+0000 | Unanswered | Multiple |
| 15 | January, 09 2014 04:42:56+0000 | January, 01 1970 00:00:35+0000 | Unanswered | Repeat |
| 16 | January, 09 2014 04:34:56+0000 | January, 01 1970 00:00:31+0000 | Unanswered | Unique |
| 17 | January, 09 2014 12:19:01+0000 | January, 01 1970 00:00:20+0000 | Aborted | (null) |
| 18 | January, 09 2014 12:19:01+0000 | January, 01 1970 00:00:19+0000 | Connected | Repeat |
| 19 | January, 09 2014 12:15:30+0000 | January, 01 1970 00:00:20+0000 | Aborted | (null) |
| 20 | January, 09 2014 12:15:30+0000 | January, 01 1970 00:00:19+0000 | Connected | Unique |
| 21 | January, 09 2014 07:25:11+0000 | January, 01 1970 00:00:41+0000 | Connected | Multiple |
| 22 | January, 09 2014 07:19:21+0000 | January, 01 1970 00:00:27+0000 | Connected | Repeat |
| 23 | January, 09 2014 07:12:56+0000 | January, 01 1970 00:00:20+0000 | Connected | Unique |
| 24 | January, 09 2014 06:46:56+0000 | January, 01 1970 00:00:32+0000 | Unanswered | Unique |
| 25 | January, 09 2014 06:42:56+0000 | January, 01 1970 00:00:29+0000 | Aborted | (null) |
| 26 | January, 09 2014 06:34:56+0000 | January, 01 1970 00:00:27+0000 | Aborted | (null) |
| 27 | January, 09 2014 05:34:56+0000 | January, 01 1970 00:00:27+0000 | Aborted | (null) |
| 28 | January, 09 2014 05:25:11+0000 | January, 01 1970 00:00:41+0000 | Connected | Multiple |
| 29 | January, 09 2014 05:19:21+0000 | January, 01 1970 00:00:27+0000 | Connected | Repeat |
| 30 | January, 09 2014 05:12:56+0000 | January, 01 1970 00:00:20+0000 | Connected | Unique |
``` | How can I update date of datetime field with mysql only? | [
"",
"mysql",
"sql",
"date",
"datetime",
"sql-update",
""
] |
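A common MySQL idiom for the question above (changing only the date part of a DATETIME while keeping its time of day) is to rebuild the value from the new date plus `TIME(col)`. The sketch below is hedged: it is not necessarily the answer this row contains, it uses Python's built-in sqlite3 module as a stand-in for MySQL, and the table name and sample row are invented for illustration.

```python
import sqlite3

# Sketch only: in MySQL the same idea is typically written as
#   UPDATE calls SET call_date = CONCAT('2014-01-09 ', TIME(call_date)) WHERE id = 1;
# Here sqlite3 (stdlib) stands in, using its time() string function.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, call_date TEXT)")
conn.execute("INSERT INTO calls VALUES (1, '2014-01-01 19:30:12')")

# Replace the date part, keep the time part.
conn.execute("UPDATE calls SET call_date = '2014-01-09 ' || time(call_date) WHERE id = 1")
print(conn.execute("SELECT call_date FROM calls WHERE id = 1").fetchone()[0])
# -> 2014-01-09 19:30:12
```

The same pattern works in reverse (keep the date, replace the time) by swapping the literal and the extracted part.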
How would I calculate the last weekday of the current month given a date using SQL?
I was able to get the last day of the current month, but I'm not sure how to get the last weekday programmatically.
I don't want to generate a calendar look-up table.
Here's the last-day-of-month code I'm currently using:
```
declare @date datetime
set @date='1/4/13'
select DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, @date) + 1, 0))
``` | I know it is not the most intuitive, effective, or easy way of doing it, but here is my solution to find the last **Weekday** of the month...
```
declare @date datetime, @lastDate datetime, @lastWeekDay datetime
set @date='05/4/2014';--'1/1/2014'
set @lastDate = (SELECT DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, @date) + 1, 0)));
/* @dayOfWeek represents 0 = Monday through 6 = Sunday */
declare @dayOfWeek INT = (SELECT DATEDIFF(dd, 0, @lastDate) % 7);
/* If the last date is a Saturday or Sunday, subtract 1 or 2 days to land on Friday */
set @lastWeekDay = (SELECT CASE WHEN @dayOfWeek = 5 THEN DATEADD(dd, -1, @lastDate)
WHEN @dayOfWeek = 6 THEN DATEADD(dd, -2, @lastDate)
ELSE @lastDate END)
SELECT @lastWeekDay;
``` | Quite a neat solution:
```
declare @date1 as date = '20130401'
declare @date as date= eomonth(@date1,0)
set @date = (select case
when datepart(dw,@date) = 1 then dateadd(day,-2,@date) --when the day is a Sunday, subtract 2 days to get to Friday
when datepart(dw,@date) = 7 then dateadd(day,-1,@date) --when the day is a Saturday, subtract 1 day to get to Friday
else @date END)
``` | How to find last weekday of current month using SQL | [
"",
"sql",
"sql-server-2008",
""
] |
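The step-back-from-Saturday/Sunday logic that both answers above rely on can be sanity-checked outside SQL. A minimal Python sketch (the function name and test dates are my own, not taken from the answers):

```python
from datetime import date, timedelta

def last_weekday(d: date) -> date:
    """Last Mon-Fri day of the month containing d."""
    # Last day of the month: first day of the next month minus one day.
    first_of_next = (d.replace(day=1) + timedelta(days=32)).replace(day=1)
    last = first_of_next - timedelta(days=1)
    # date.weekday(): 0 = Monday ... 5 = Saturday, 6 = Sunday.
    if last.weekday() == 5:
        last -= timedelta(days=1)   # Saturday -> Friday
    elif last.weekday() == 6:
        last -= timedelta(days=2)   # Sunday -> Friday
    return last

print(last_weekday(date(2013, 1, 4)))   # Jan 2013 ends on Thu 31st -> 2013-01-31
print(last_weekday(date(2014, 5, 4)))   # May 2014 ends on Sat 31st -> 2014-05-30
```

Note that the T-SQL answers map the day-of-week numbers differently (`DATEDIFF(dd, 0, ...) % 7` gives 0 = Monday; `DATEPART(dw, ...)` depends on `DATEFIRST`), but the subtraction step is the same.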
I wonder how I could select every user who matches the first or second lowest value in a result table?
Table Test
```
NAME VALUE
John 8
Marie 8
Luis 10
Carlos 10
Leo 13
Max 14
```
So the result in this case would be
```
NAME VALUE
John 8
Marie 8
Luis 10
Carlos 10
```
Thanks a lot!
BTW, I did my homework searching on Google and everything; the only approach I came up with was querying the table and then using PHP to filter the results, which is not good for performance.
@Aziz I get the error: #1235 - This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
Thanks
Thank you all for the load of useful answers!
Just in case anyone runs into funny results like I did: I needed more filters and had to add them to the query, as follows:
```
SELECT * FROM `results_temp`
WHERE semana='semana6'
  AND `total_pontos` IN (
    SELECT * FROM (
      SELECT DISTINCT `total_pontos`
      FROM `results_temp`
      WHERE semana='semana6'
      ORDER BY `total_pontos`
      LIMIT 0,2
    ) AS t
  )
```
Regards | Try this:
```
SELECT * FROM Test WHERE `value` IN (
SELECT * FROM (
SELECT DISTINCT `value`
FROM Test
ORDER BY `value`
LIMIT 0,2
) AS t
)
``` | Try this:
```
SELECT t.*
FROM Test t
INNER JOIN (SELECT DISTINCT t.value FROM Test t ORDER BY t.value LIMIT 2
) A ON t.value = A.value;
```
Check this [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/9aafb/1)
**OUTPUT**
```
| NAME | VALUE |
|--------|-------|
| John | 8 |
| Marie | 8 |
| Luis | 10 |
| Carlos | 10 |
``` | mysql select all rows meeting the first and second lowest values? | [
"",
"mysql",
"sql",
"select",
"sql-order-by",
"limit",
""
] |
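The accepted answer's trick (keep every row whose value is among the two distinct lowest values) can be tried without a MySQL server. The sketch below uses Python's built-in sqlite3 module, which, unlike the asker's MySQL version, accepts `LIMIT` directly inside an `IN` subquery; the data mirrors the question's Test table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Test (name TEXT, value INTEGER)")
conn.executemany(
    "INSERT INTO Test VALUES (?, ?)",
    [("John", 8), ("Marie", 8), ("Luis", 10), ("Carlos", 10), ("Leo", 13), ("Max", 14)],
)

# Keep every row whose value is one of the two distinct lowest values.
rows = conn.execute(
    """
    SELECT name, value FROM Test
    WHERE value IN (SELECT DISTINCT value FROM Test ORDER BY value LIMIT 2)
    """
).fetchall()
print(rows)
# -> [('John', 8), ('Marie', 8), ('Luis', 10), ('Carlos', 10)]
```

On MySQL versions that raise error #1235, wrapping the subquery in a derived table (as the asker's final query does) works around the restriction.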
As I was learning SQL statements, I encountered an example (regarding the demo [SCOTT](http://www.orafaq.com/wiki/SCOTT "SCOTT") database) that I have no idea how to solve.
> In which department(s) are all salgrades present?
My most promising approach is to **group all salgrades and departments** in the joined tables **emp**, **dept** and **salgrade**:
```
SELECT s.grade AS "Salgrade",
d.dname AS "Department ID"
FROM emp e INNER JOIN dept d ON(e.deptno = d.deptno)
INNER JOIN salgrade s ON(e.sal BETWEEN s.losal AND s.hisal)
GROUP BY d.dname, s.grade
```
Executing this gives me the following results:

If I could group this result a second time by department, *COUNT(\*)* would give me the **number of different salgrades per department**. Then I could compare this number (with *HAVING*) to the following subselect:
```
(SELECT COUNT(*)
FROM salgrade)
```
Is there any way to group a result set that already contains a *GROUP BY*?
Is there another (better) approach I could use?
---
I am using an Oracle APEX server with *"Application Express 4.2.4.00.07"* | I found an even easier solution now:
```
SELECT d.dname
FROM emp e INNER JOIN dept d ON(e.deptno = d.deptno)
INNER JOIN salgrade s ON(e.sal BETWEEN s.losal AND s.hisal)
GROUP BY d.dname
HAVING COUNT(DISTINCT s.grade) = (SELECT COUNT(*) FROM salgrade);
``` | A minor change from your version: remove the grouping inside. This version first generates the `salgrade` and `department` of all employees, and then does a grouping outside, counting distinct salary grades.
```
SELECT Department_ID
FROM
(
SELECT s.grade AS Salgrade,
d.dname AS Department_ID
FROM emp e
INNER JOIN dept d ON(e.deptno = d.deptno)
INNER JOIN salgrade s ON(e.sal BETWEEN s.losal AND s.hisal)
)
GROUP BY Department_ID
HAVING COUNT(distinct Salgrade) = ( SELECT count(1) FROM salgrade);
``` | "Double Group By" in Oracle SQL? | [
"",
"sql",
"oracle",
"oracle-apex",
""
] |
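The `HAVING COUNT(DISTINCT s.grade) = (SELECT COUNT(*) FROM salgrade)` pattern in the accepted answer is a classic relational-division trick: a group qualifies only when it covers every row of the divisor table. A minimal sketch with Python's built-in sqlite3 module and a toy stand-in for the SCOTT schema (the departments, salaries, and grades below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dept (deptno INT, dname TEXT);
CREATE TABLE emp (ename TEXT, sal INT, deptno INT);
CREATE TABLE salgrade (grade INT, losal INT, hisal INT);
INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
INSERT INTO salgrade VALUES (1, 0, 999), (2, 1000, 1999), (3, 2000, 9999);
-- ACCOUNTING hits grades 1, 2 and 3; RESEARCH only 1 and 2.
INSERT INTO emp VALUES
  ('A', 500, 10), ('B', 1500, 10), ('C', 2500, 10),
  ('D', 800, 20), ('E', 1200, 20);
""")

# A department qualifies only if its employees cover every salgrade.
rows = conn.execute("""
SELECT d.dname
FROM emp e
JOIN dept d ON e.deptno = d.deptno
JOIN salgrade s ON e.sal BETWEEN s.losal AND s.hisal
GROUP BY d.dname
HAVING COUNT(DISTINCT s.grade) = (SELECT COUNT(*) FROM salgrade)
""").fetchall()
print(rows)  # -> [('ACCOUNTING',)]
```

`COUNT(DISTINCT ...)` matters here: one department can have several employees in the same grade, and counting plain rows would overstate its coverage.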
I have a query that I would like to run, but it is not returning the expected results.
So my tables are like this
```
users (has two columns)
user_id,name
users_archive (has the same two columns)
user_id,name
```
I basically want to run a query that lists the user\_id from the respective table where the username matches what I'm searching for.
For my example, I have a user called MikeBOSS in users\_archive with a user\_id of 123 (there is no MikeBOSS in the users table).
```
SELECT users.user_id, users_archive.user_id
FROM users
LEFT JOIN users_archive ON users_archive.name='MikeBOSS'
WHERE users.name='MikeBOSS';
```
but that returns no results
```
SELECT users.user_id, users_archive.user_id
FROM users, users_archive
WHERE (users.name='MikeBOSS' OR users_archive.name='MikeBOSS');
```
That returns a bunch of results from the users table that are incorrect.
Could someone maybe point me in the correct direction? | You do not want a JOIN; you want a UNION. Look:
```
SELECT users.user_id, 'users'
FROM users
WHERE users.name='MikeBOSS'
UNION
SELECT users_archive.user_id, 'archive'
FROM users_archive
WHERE users_archive.name='MikeBOSS';
``` | A join condition normally links two tables. Yours does not:
```
ON users_archive.name='MikeBOSS'
```
A join condition that does link the two tables might look something like:
```
ON users.name = users_archive.name
```
If you wonder about the number of rows this returns, check each table individually. Is there even a row with `name = 'MikeBOSS'` in the `users_archive` table? | MySQL query from 2 tables confusion | [
"",
"mysql",
"sql",
"join",
"multiple-tables",
""
] |
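The accepted answer's UNION can be demonstrated end to end with Python's built-in sqlite3 module (the schema and sample rows below are made up to match the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INT, name TEXT);
CREATE TABLE users_archive (user_id INT, name TEXT);
INSERT INTO users VALUES (7, 'Alice');
INSERT INTO users_archive VALUES (123, 'MikeBOSS');
""")

# Search both tables and tag where each hit came from.
rows = conn.execute("""
SELECT user_id, 'users' AS source FROM users WHERE name = 'MikeBOSS'
UNION
SELECT user_id, 'archive' FROM users_archive WHERE name = 'MikeBOSS'
""").fetchall()
print(rows)  # -> [(123, 'archive')]
```

The literal second column is what makes the result unambiguous: a plain UNION of the two `user_id` lists would not tell you which table each id came from.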