| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I would like to delete all tables sharing the same prefix ('supenh\_agk') from the same database, using one sql command/query.
|
To do this in *one* command you need dynamic SQL with `EXECUTE` in a [`DO`](https://www.postgresql.org/docs/current/sql-do.html) statement (or function):
```
DO
$do$
DECLARE
_tbl text;
BEGIN
FOR _tbl IN
SELECT quote_ident(table_schema) || '.'
|| quote_ident(table_name) -- escape identifier and schema-qualify!
FROM information_schema.tables
WHERE table_name LIKE 'prefix' || '%' -- your table name prefix
AND table_schema NOT LIKE 'pg\_%' -- exclude system schemas
LOOP
RAISE NOTICE '%',
-- EXECUTE
'DROP TABLE ' || _tbl; -- see below
END LOOP;
END
$do$;
```
This includes tables from *all* schemas the current user has access to. I excluded system schemas for safety.
If you do not escape identifiers properly the code fails for any non-standard [identifier](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) that requires double-quoting.
Plus, you run the risk of allowing *SQL injection*. All user input must be sanitized in dynamic code - that includes identifiers potentially provided by users.
**Potentially hazardous!** All those tables are dropped for good. I built in a safety net: inspect the generated statements before you actually execute them by commenting out the `RAISE` and uncommenting the `EXECUTE`.
If any other objects (like views etc.) depend on a table you get an informative error message instead, which cancels the whole transaction. If you are confident that all dependents can die, too, append [`CASCADE`](https://www.postgresql.org/docs/current/sql-droptable.html):
```
'DROP TABLE ' || _tbl || ' CASCADE';
```
Closely related:
* [Update column in multiple tables](https://stackoverflow.com/questions/27527231/update-column-in-multiple-tables/27530863#27530863)
* [Changing all zeros (if any) across all columns (in a table) to... say 1](https://stackoverflow.com/questions/9814195/changing-all-zeros-if-any-across-all-columns-in-a-table-to-say-1/9814845#9814845)
**Alternatively** you could build on the catalog table [`pg_class`](https://www.postgresql.org/docs/current/catalog-pg-class.html), which also provides the `oid` of the table and is faster:
```
...
FOR _tbl IN
SELECT c.oid::regclass::text -- escape identifier and schema-qualify!
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT LIKE 'pg\_%' -- exclude system schemas
AND c.relname LIKE 'prefix' || '%' -- your table name prefix
AND c.relkind = 'r' -- only tables
...
```
System catalog or information schema?
* [How to check if a table exists in a given schema](https://stackoverflow.com/questions/20582500/how-to-check-if-a-table-exists-in-a-given-schema/24089729#24089729)
How does `c.oid::regclass` defend against SQL injection?
* [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349)
**Or** do it all in a **single `DROP` command**. Should be a bit more efficient:
```
DO
$do$
BEGIN
RAISE NOTICE '%', (
-- EXECUTE (
SELECT 'DROP TABLE ' || string_agg(format('%I.%I', schemaname, tablename), ', ')
-- || ' CASCADE' -- optional
FROM pg_catalog.pg_tables t
WHERE schemaname NOT LIKE 'pg\_%' -- exclude system schemas
AND tablename LIKE 'prefix' || '%' -- your table name prefix
);
END
$do$;
```
Related:
* [Is there a postgres command to list/drop all materialized views?](https://stackoverflow.com/questions/23092983/is-there-a-postgres-command-to-list-drop-all-materialized-views/23093450#23093450)
The last example uses the conveniently fitting system catalog `pg_tables`, and `format()` for convenience. See:
* [How to check if a table exists in a given schema](https://stackoverflow.com/questions/20582500/how-to-check-if-a-table-exists-in-a-given-schema/24089729#24089729)
* [Table name as a PostgreSQL function parameter](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349)
|
Suppose the prefix is 'sales\_'
**Step 1:** Get all the table names with that prefix
```
SELECT table_name
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'sales_%';
```
**Step 2:** Click the "Download as CSV" button.
**Step 3:** Open the file in an editor and replace **"sales\_** with **,sales\_**
and **"** with a space
**Step 4:** `DROP TABLE sales_regist, sales_name, sales_info, sales_something;`
|
drop all tables sharing the same prefix in postgres
|
[
"",
"sql",
"postgresql",
"plpgsql",
"dynamic-sql",
""
] |
I have a table `ASSETS` that has a structure as it is shown below :
```
----------------------------------------------------
ID (PK) | DESCRIPTION | TYPE | Do- | Do+ | Dx- | Dx+
----------------------------------------------------
```
`TYPE` column has a *foreign key*, possible values are `SECURITY` or `CURRENCY` (i.e. *FX*), also I have two more tables : `CURRENCIES` (*for example*, `EUR`, `RUB` or `USD`) :
```
--------------------------------------------------------
ID (PK)| FROM (FK ASSETS.ID) | TO (FK ASSETS.ID) | VALUE
--------------------------------------------------------
```
and `SECURITIES` (*for example*, `MTS`, `GAZP` or `VTB`) :
```
----------------------------------------------------------
ID (PK)(FK ASSETS.ID)| CURRENCY (PK)(FK ASSETS.ID) | VALUE
----------------------------------------------------------
```
How can I make a constraint that not only acts like a foreign key on `CURRENCIES.FROM`, `CURRENCIES.TO` and `SECURITIES.CURRENCY`, but also checks that the referenced `ASSETS.TYPE` is `CURRENCY`, and in `SECURITIES` also checks that the referenced `ASSETS.TYPE` for `SECURITIES.ID` is `SECURITY`?
I guess I can write triggers to check the `ASSETS.TYPE` value, but I am searching for another solution right now (*if it is possible, of course*).
*If there are better ways to do the things I want (such as a better database design), please share your ideas.*
*P.S. I guess it is quite a common problem, so if there are articles about it or similar questions asked on this network or some general-case-solutions, feel free to share.*
|
Answer to your original question is to use an additional `CHECK` *constraint* like :
```
CREATE TABLE CURRENCIES (
...
CONSTRAINT c_asset_from CHECK(exists(select 1 from ASSETS a where a.id = "from" and a.type = 'CURRENCY')) -- "from" is a reserved word and must be quoted
);
```
And a similar constraint for the `TO` field, and in `SECURITIES` for the `CURRENCY` field.
But I think your new design, with separate *FK* for `security` and `currency`, is better design.
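Note, though, that many engines (PostgreSQL and MySQL among them) reject subqueries inside `CHECK` constraints. A portable sketch of the same idea, assuming the table layout above, adds a redundant type column pinned to one value and a composite foreign key (the column names `FROM_ID`/`FROM_TYPE` are illustrative):

```sql
-- Make the (ID, TYPE) pair referenceable from child tables
ALTER TABLE ASSETS ADD CONSTRAINT assets_id_type_key UNIQUE (ID, TYPE);

CREATE TABLE CURRENCIES (
    ID        int PRIMARY KEY,
    FROM_ID   int NOT NULL,
    -- redundant column constrained to a single value, so the composite FK
    -- can only match ASSETS rows whose TYPE is 'CURRENCY'
    FROM_TYPE varchar(16) NOT NULL CHECK (FROM_TYPE = 'CURRENCY'),
    FOREIGN KEY (FROM_ID, FROM_TYPE) REFERENCES ASSETS (ID, TYPE)
    -- repeat the pattern for the TO column
);
```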
|
IMO the design could technically be criticized in two categories:
* Having a dual-purpose foreign key in the Asset table called **`type`** ([Polymorphic Association anti-pattern](https://stackoverflow.com/questions/22177776/foreign-key-column-mapped-to-multiple-primary-keys/22179608#22179608)). That violates first normal form (atomicity) and loses referential integrity. A solution could be to simplify the relationship with inheritance: have a base table for the Currency and Security tables called `Money`, containing their shared properties, like `name`. The primary key of the `Money` table will also be the primary key of the `Currency` and `Security` tables, and a foreign key to `Money` inside `Asset` completes the solution.
* Using a [surrogate identifier](http://en.wikipedia.org/wiki/Surrogate_key) on the Asset table, which loses business logic from the schema design. I would prefer a composite primary key in the Asset table, `PK{ID, TYPE(money fk)}`. Then check constraints on `CURRENCIES` and `SECURITIES` will solve the problem:
`CURRENCIES_chk {FK.CURRENCY = FK_TO.Money && FK.CURRENCY = FK_FROM.Money}`
`SECURITIES_chk {FK.SECURITY = FK.Money}`

|
Specify foreign key on one column and the value of another column
|
[
"",
"sql",
"database",
"database-design",
"schema",
"polymorphic-associations",
""
] |
i have query like following
```
select max(id), count(id)
from table
group by col1, col2
having count(col1)>1 and count(col2)>1;
```
which produces output like the below:
```
+---------+-----------+
| max(id) | count(id) |
+---------+-----------+
| 43 | 4 |
| 108 | 2 |
| 171 | 2 |
| 177 | 2 |
| 195 | 2 |
+---------+-----------+
```
This is used to find duplicate records that share the same col1 and col2 across multiple rows.
From this, I have to delete all the records except the max(id) record.
I have tried it like this
```
Delete
from payment_records
group by policy_id, ref_txn_no
having count(policy_id)>1 and count(ref_txn_no)>1
where id !=(select max(id)
from payment_records
group by policy_id,ref_txn_no
having count(policy_id)>1 and count(ref_txn_no)>1)
```
Is this correct?
|
As far as I know, MySQL does not allow selecting from the same table in a subquery of a `DELETE`. You have to use a temporary table.
```
create temporary table t as
select max(id) as id
from payment_records
group by policy_id, ref_txn_no;

delete from payment_records
where id not in (select id from t);

drop temporary table t;
```
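A temp-table-free alternative (a sketch against the same `payment_records` table) is MySQL's multi-table `DELETE` with a self-join: a row is deleted whenever a row with the same `policy_id` and `ref_txn_no` but a larger `id` exists, which keeps exactly the max-`id` row of each duplicate group and never touches singleton groups.

```sql
DELETE p
FROM payment_records p
JOIN payment_records q
  ON q.policy_id  = p.policy_id
 AND q.ref_txn_no = p.ref_txn_no
 AND q.id > p.id;          -- p is not the max of its group
```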
|
The id should be unique. Therefore you can select the data rows to delete by the ids and you do not need grouping etc. The inner select returns multiple values, so you should use the IN statement.
The following code is expected to work, but it does not, because MySql cannot delete from the table nested in the where clause.
```
DELETE FROM payment_records
WHERE NOT id IN (
SELECT max(id)
FROM payment_records
GROUP BY policy_id, ref_txn_no
HAVING count(policy_id) > 1 AND count(ref_txn_no) > 1
)
```
But there is a workaround without creating a temporary table:
```
DELETE FROM payment_records
WHERE NOT id IN (
SELECT max_id
FROM (
SELECT max(id) max_id
FROM payment_records
GROUP BY policy_id, ref_txn_no
HAVING count(policy_id) > 1 AND count(ref_txn_no) > 1
) AS t
)
```
|
SQL : Delete from table where not max in group by
|
[
"",
"mysql",
"sql",
""
] |
I have the following data.
```
FullName
---------------
Ray,John
Daly,Raymond
```
I want to make a search using last name.
I tried this
```
SELECT FullName
FROM Names
WHERE UPPER(FullName) LIKE UPPER('%ray%')
```
This returns both the rows as there is a clear match.
However searching by first name was easier. I just used the following query
```
SELECT FullName
FROM Names
WHERE UPPER(FullName) LIKE UPPER(', %ray%')
```
I am not sure how to accomplish this. Should I split the full name as comma delimited values and ignore the first name. Please help.
|
Assuming the last name is the part before the comma, and you want last names containing the substring 'ray':
```
SELECT FullName FROM Names WHERE UPPER(FullName) LIKE UPPER('%ray%,%')
```
If you want the last name to be exactly 'ray', use
```
SELECT FullName FROM Names WHERE UPPER(FullName) LIKE UPPER('ray,%')
```
If your database has `ILIKE`, using it may help performance, depending on which indexes you have:
```
SELECT FullName FROM Names WHERE FullName ILIKE 'ray,%';
```
|
```
SELECT FullName FROM Names
WHERE UPPER(FullName) LIKE
UPPER('%ray%,%')
```
Just add the comma after the expression!
|
How to search by Last Name in a Full name in SQL?
|
[
"",
"sql",
""
] |
I need MySQL select query like
```
select wt_id,wt_name from work_type where cat_id=1,2,5..;
```
Is it possible?
|
You just have to use the [**IN**](http://dev.mysql.com/doc/refman/5.5/en/comparison-operators.html#function_in) operator:
```
SELECT wt_id, wt_name
FROM work_type
WHERE cat_id IN (1,2,5..);
```
|
Use the `IN` operator, e.g.
```
... where cat_id IN (1,2,5..)
```
|
Mysql select Query with multiple conditions of same column
|
[
"",
"mysql",
"sql",
"select",
"where-clause",
"in-operator",
""
] |
```
Select *
FROM Products as a
join ProductTags as b
on (a.ProductId = b.ProductId)
join Tags as c
on (b.TagId = c.TagId)
where c.TagName = 'FISH'
Select *
FROM Products as a
join ProductTags as b
on (a.ProductId = b.ProductId)
join Tags as c
on ( b.TagId = c.TagId)
where c.TagName = 'STEAK'
```
I know there is probably a fairly easy solution but I cannot figure it out
I want to get all of the products that have an entry in the ProductTags table for both 'Tags'.
New queries:
Original:
```
SELECT *
FROM ProductTags as a
JOIN Tags as b on (a.TagId = b.TagId)
WHERE b.TagName = 'FISH'
SELECT *
FROM ProductTags as a
JOIN Tags as b on (a.TagId = b.TagId)
WHERE b.TagName = 'STEAK'
```
Solution:
```
SELECT a.ProductId
FROM ProductTags as a
JOIN Tags as b on (a.TagId = b.TagId)
WHERE b.TagName in ('FISH', 'STEAK')
```
I think I need a HAVING clause? I don't know.
|
This is something called relational division, which I just brought up on [Meta](https://meta.stackoverflow.com/questions/281023/what-can-we-do-to-prevent-repeated-relational-division-questions) earlier today. You can use a subquery for this. The outer query will pull all rows that match 'Fish' and the inner query will select from that only the rows that also match 'Steak'. Try this:
```
SELECT a.product_id
FROM producttags a
JOIN tags b ON b.tagid = a.tagid
WHERE b.tagname = 'Fish' AND a.product_id IN(
SELECT a.product_id
FROM producttags a
JOIN tags b ON b.tagid = a.tagid
    WHERE b.tagname = 'Steak')
```
**EDIT**
To add other tags, if necessary, you can add more `IN` operators to your `WHERE` clause:
```
SELECT a.product_id
FROM producttags a
JOIN tags b ON b.tagid = a.tagid
WHERE b.tagname = 'Fish'
AND a.product_id IN(
SELECT a.product_id
FROM producttags a
JOIN tags b ON b.tagid = a.tagid
WHERE b.tagname = 'Steak')
AND a.product_id IN(
SELECT a.product_id
FROM producttags a
JOIN tags b ON b.tagid = a.tagid
WHERE b.tagname = 'Chicken')
```
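Since the question mentions `HAVING`: relational division is also commonly written with `GROUP BY`, which scales to any number of tags without nesting more subqueries. A sketch against the same tables:

```sql
SELECT pt.ProductId
FROM ProductTags pt
JOIN Tags t ON t.TagId = pt.TagId
WHERE t.TagName IN ('FISH', 'STEAK')
GROUP BY pt.ProductId
HAVING COUNT(DISTINCT t.TagName) = 2;  -- must match both tags
```

To require a third tag, add it to the `IN` list and change the count to 3.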
|
You just need an `IN` condition:
```
Select *
FROM Products as a
join ProductTags as b
on (a.ProductId = b.ProductId)
join Tags as c
on (b.TagId = c.TagId)
where c.TagName IN ('FISH', 'STEAK')
```
You can add as many literals as you like in the `IN` clause as long as they are comma separated.
**Edit per the updated question:**
If you want to get all `ProductId` values where multiple categories are matched, you could do this:
```
SELECT ProductTags.ProductId
FROM ProductTags
WHERE
-- Pull all matches of FISH
ProductTags.TagId IN (SELECT TagId
FROM Tags
WHERE TagName = 'FISH')
AND
-- Pull all matches of STEAK
ProductTags.TagId IN (SELECT TagId
FROM Tags
WHERE TagName = 'STEAK')
```
|
How can I combine these two queries? (where in clause)?
|
[
"",
"sql",
"join",
""
] |
I'm trying to list some data in a table. The data comes from a database; I have 3 tables in my DB.
I'm trying to join those 3 tables in a SQL statement like this:
```
SELECT * FROM cases
LEFT JOIN users ON cases.fk_supporter = users.user_id
LEFT JOIN users ON cases.fk_creator = users.user_id
```
The `fk_supporter` is a foreign key to a user, and so is `fk_creator`, but not to the same user. How do I do this correctly?
|
You need a unique alias for each use of the same table in the join:
```
SELECT * FROM cases c
LEFT JOIN users u1 ON c.fk_supporter = u1.user_id
LEFT JOIN users u2 ON c.fk_creator = u2.user_id
```
|
You just have to give an **ALIAS** name to each use of the table.
Try this:
```
SELECT *
FROM cases c
LEFT JOIN users u1 ON c.fk_supporter = u1.user_id
LEFT JOIN users u2 ON c.fk_creator = u2.user_id;
```
|
Joining the same table twice
|
[
"",
"mysql",
"sql",
"select",
"join",
"alias",
""
] |
Why are these two SQL queries not equivalent? One uses a correlated subquery, the other uses GROUP BY. The first produces a little over 51,000 rows from my database, the second nearly 66,000. In both cases, I am simply trying to return all the parts meeting the stated condition, current revision only.

A comparison of the output files shows that method #1 (oracle\_test1.txt) fails to return quite a few values. Based on that, I can only assume that method #2 is correct. I have some code that has used method #1 for a long time, but it appears I will have to change it.

My reasoning concerning the correlated subquery was that, as the inner select compares the columns in the self join, it will find the max value of `prev` for all matches, then return that max `prev` value for use in the outer query. I designed that query long ago, before becoming familiar with GROUP BY. Any insights would be appreciated.
Query #1
```
select pobj_name, prev
from pfmc_part
where pmodel in ('PN-DWG', 'NO-DWG') and pstatus = 'RELEASED'
and prev = (select max(prev) from pfmc_part a where a.pobj_name = pfmc_part.pobj_name)
order by pobj_name, prev
```
Query #2
```
select pobj_name, max(prev) prev
from pfmc_part
where pmodel in ('PN-DWG', 'NO-DWG') and pstatus = 'RELEASED'
group by pobj_name
order by pobj_name, prev
```
Sample output:
```
Query #2 Query #1
P538512 B P538512 B
P538513 A P538513 A
P538514 C P538514 C
P538520 B
P538522 B P538522 B
P538525 A P538525 A
P538531 C P538531 C
P538533 A P538533 A
P538538 B
P538541 B
P538542 B
P538553 A P538553 A
P538569 A P538569 A
```
|
Query 1 takes the rows whose `prev` equals the overall max for that part, and then filters those by the `pmodel`/`pstatus` conditions in your WHERE clause.
Query 2, by contrast, first selects all items matching the WHERE clause and then takes the max of each group.
So a part whose overall max row does not satisfy the WHERE clause is omitted from query 1, even though query 2 still returns it (with the max among the filtered rows).
|
There are two differences and the rest of the answers focus on one. The "easy" difference is that the `max()` in the `group by` is affected by the filter clause. The `max()` in the other query has no filter, and so it might return no rows (when `max(prev)` is on a row otherwise filtered out by the `where` conditions).
*In addition*, the `where` version of the query might return duplicate rows when there are multiple rows with the same value of `max(prev)` for a given `pobj_name`. The `group by` will never return duplicate rows.
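To make the correlated version behave like the `GROUP BY` version (up to duplicate `(pobj_name, prev)` rows), the same filters can be repeated inside the subquery so both queries take the max over the same set of rows. A sketch:

```sql
SELECT pobj_name, prev
FROM pfmc_part p
WHERE pmodel IN ('PN-DWG', 'NO-DWG') AND pstatus = 'RELEASED'
  AND prev = (SELECT MAX(a.prev)
              FROM pfmc_part a
              WHERE a.pobj_name = p.pobj_name
                AND a.pmodel IN ('PN-DWG', 'NO-DWG')  -- repeat the outer filters
                AND a.pstatus = 'RELEASED')
ORDER BY pobj_name, prev;
```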
|
Why are queries these not equivalent? (correlated subquery vs. group by)
|
[
"",
"sql",
"sql-server",
""
] |
I am trying to execute a stored procedure as a subquery and treat the result set of the SP as a column of the outer query, something like this:
`Select U.FirstName , (exec SomeSP ) as columnFromSP from User U`
Is this possible? I searched a lot but found nothing on Google.
**Update**
I cannot use a #temp table; I am trying to do this without one.
|
If you are able to convert your USP to a table-valued UDF, you will be able to use the UDF in your `FROM` clause.
```
CREATE FUNCTION dbo.SomeUDF
(
-- Add the parameters for the function here
@param varchar(1000)
)
RETURNS TABLE
AS
RETURN
(
SELECT @param as Value
)
GO
SELECT
a.Value,
'B' as Value2
FROM dbo.SomeUDF('ABC') a
```
|
Not possible directly, but you can work around it:
* Create a temp table and insert the results of the procedure into it.
* Join the `User` table with the temporary table and select the columns you want from both tables.
This assumes, however, that the stored proc returns a joinable expression (one that you can match to a field in the `User` table). If the stored procedure only returns a single row, use a condition of `1=1` or something similar.
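The steps above can be sketched as follows; the column names `SomeKey` and `SomeValue` and the join column `UserId` are hypothetical, standing in for whatever the procedure actually returns:

```sql
-- table shape must match the procedure's result set
CREATE TABLE #sp_out (SomeKey int, SomeValue varchar(100));

-- capture the procedure's result set
INSERT INTO #sp_out (SomeKey, SomeValue)
EXEC SomeSP;

SELECT U.FirstName, t.SomeValue
FROM [User] U
JOIN #sp_out t ON t.SomeKey = U.UserId;

DROP TABLE #sp_out;
```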
|
Combine sp result in select as column
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
I have a select that gives me this result:
```
- ID | System | Type1 | NID | Name_ | Type2__ | Date
- 24 | AA-Tool | PRIV | 816 | Name1 | IMPLICIT | 17.12.2014
- 24 | AA-Tool | PRIV | 816 | Name1 | EXPLICIT | 19.12.2014
- 24 | AA-Tool | PRIV | 816 | Name1 | EXPLICIT | 20.12.2014
- 25 | BB-Tool | PRIV | 817 | Name2 | EXPLICIT | 20.12.2014
- 25 | BB-Tool | PRIV | 817 | Name2 | EXPLICIT | 21.12.2014
```
So `ID`, `System`, `Type1`, `NID` and `Name` should be distinct, and `Type2` and `Date` should come from the last entry by date.
This should be the result:
```
- 24 | AA-Tool | PRIV | 816 | Name1 | EXPLICIT | 20.12.2014
- 25 | BB-Tool | PRIV | 817 | Name2 | EXPLICIT | 21.12.2014
```
I hope that's understandable :)
Thanks,
Michael
|
Try this:
```
SELECT a.ID, a.System, a.Type1, a.NID, a.Name_, a.Type2__, a.Date
FROM tableA a
INNER JOIN (SELECT a.ID, a.System, a.Type1, a.NID, a.Name_, MAX(a.Date) Date
FROM tableA a
GROUP BY a.ID, a.System, a.Type1, a.NID, a.Name_
) b ON a.ID = b.ID AND a.System = b.System AND
a.Type1 = b.Type1 AND a.NID = b.NID AND
a.Name_ = b.Name_ AND a.Date = b.Date;
```
|
Another approach could be:
```
SELECT t.*
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY id, system, type1, nid, name
                                ORDER BY date DESC) AS rn
      FROM tableA t
     ) t
WHERE rn = 1
```
Maybe it will help you.
|
Oracle group part of row and get row with latest timestamp
|
[
"",
"sql",
"oracle",
"select",
"group-by",
"greatest-n-per-group",
""
] |
```
+---+------------+
|sid| discounts |
+---+------------+
| 1 | 10.00 |
| 2 | 20.00 |
| 2 | 20.00      |
+---+------------+
```
I need to get the sum of the discounts column, but if the sid column has duplicate values, only one of those rows should be added to the total, not all of them.
So the result should be 30.00 and not 50.00.
I tried something like this
```
SELECT DISTINCT sid FROM (SELECT SUM(discounts) FROM $table_name)
```
and
```
SELECT SUM(discounts) FROM $table_name WHERE sid IN (SELECT DISTINCT 'sid' FROM $table_name)
```
I am just getting 'You have an error in your SQL syntax'
|
Your second query will work if you remove the single quotes around `sid`:
```
SELECT SUM(discounts) FROM $table_name WHERE sid IN (SELECT DISTINCT sid FROM $table_name)
```
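Assuming each sid always carries the same discount (as in the sample data), another common way is to sum over the distinct (sid, discounts) pairs; a sketch, with `$table_name` substituted as before:

```sql
SELECT SUM(discounts)
FROM (SELECT DISTINCT sid, discounts FROM your_table) t;
```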
|
```
select sum(discounts) from (select sid, max(discounts) as discounts from $table_name group by sid) t
```
|
How do I sum distinct values?
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to join two tables as shown below. In the " \* " section, I'm having trouble with the following
operation: I want to select all columns from the left table and attach just a few relevant columns from the right table. Instead of writing out all the columns like left\_table.column1, left\_table.column2, and so on, is there another method that saves the manual coding?
```
SELECT * FROM nutrients LEFT JOIN measures ON nutrients.name=measures.name
```
|
Yes. Prefix the `*` with the table name to select all columns of that table:
```
SELECT nutrients.*, measures.colX
FROM nutrients
LEFT JOIN measures ON nutrients.name=measures.name
```
|
You can select all columns of one table using `tablename.*`, like the following:
```
SELECT nutrients.*,
measures.name,
measures.col2
FROM nutrients
LEFT JOIN measures ON nutrients.name=measures.name
```
|
MySQL - Left Join, Select all columns left, few columns on the right tables,
|
[
"",
"mysql",
"sql",
"left-join",
""
] |
I would like to know the most efficient way of removing any occurrence of characters like `, ; / "` from a `varchar` column.
I have a function like this but it is incredibly slow. The table has about **20 million records**.
```
CREATE FUNCTION [dbo].[Udf_getcleanedstring] (@s VARCHAR(255))
returns VARCHAR(255)
AS
BEGIN
DECLARE @o VARCHAR(255)
SET @o = Replace(@s, '/', '')
SET @o = Replace(@o, '-', '')
SET @o = Replace(@o, ';', '')
SET @o = Replace(@o, '"', '')
RETURN @o
END
```
|
Whichever method you use, it is probably worth adding a
```
WHERE YourCol LIKE '%[/;"-]%'
```
unless you suspect that a very large proportion of rows will in fact contain at least one of the characters that need to be stripped. (The `-` goes last inside the brackets so it is treated literally rather than as a range.)
As you are using this in an `UPDATE` statement then simply adding the `WITH SCHEMABINDING` attribute can massively improve things and allow the UPDATE to proceed row by row rather than needing to cache the entire operation in a spool first for [Halloween Protection](http://blogs.msdn.com/b/sqlprogrammability/archive/2006/05/12/596424.aspx)
Nested `REPLACE` calls in TSQL are slow anyway though as they involve multiple passes through the strings.
You could knock up a CLR function as below (if you haven't worked with these before then they are very easy to deploy from an SSDT project as long as CLR execution is permitted on the server). The UPDATE plan for this too does not contain a spool.
The Regular Expression uses `(?:)` to denote a non-capturing group, with the various characters of interest separated by the alternation character `|` as `/|-|;|\"` (the `"` needs to be escaped in the C# string literal, so it is preceded by a backslash).
```
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;
public partial class UserDefinedFunctions
{
private static readonly Regex regexStrip =
new Regex("(?:/|-|;|\")", RegexOptions.Compiled);
[SqlFunction]
public static SqlString StripChars(SqlString Input)
{
return Input.IsNull ? null : regexStrip.Replace((string)Input, "");
}
}
```
|
I want to show the huge performance difference between two types of user-defined functions:
1. a table-valued function
2. a scalar function
See the test example:
```
use AdventureWorks2012
go
-- create table for the test
create table dbo.FindString (ColA int identity(1,1) not null primary key,ColB varchar(max) );
declare @text varchar(max) = 'A web server can handle a Hypertext Transfer Protocol request either by reading
a file from its file ; system based on the URL <> path or by handling the request using logic that is specific
to the type of resource. In the case that special logic is invoked the query string will be available to that logic
for use in its processing, along with the path component of the URL.';
-- init process in loop 1,000,000
insert into dbo.FindString(ColB)
select @text
go 1000000
-- use one of the scalar functions posted in the answers in this thread
alter function [dbo].[udf_getCleanedString]
(
@s varchar(max)
)
returns varchar(max)
as
begin
return replace(replace(replace(replace(@s,'/',''),'-',''),';',''),'"','')
end
go
--
-- create a table-valued version of the scalar function above
create function [dbo].[utf_getCleanedString]
(
@s varchar(255)
)
returns table
as return
(
select replace(replace(replace(replace(@s,'/',''),'-',''),';',''),'"','') as String
)
go
--
-- clearing the buffer cache
DBCC DROPCLEANBUFFERS ;
go
-- update process using the TABLE-VALUED FUNCTION
update Dest with(rowlock) set
dest.ColB = D.String
from dbo.FindString dest
cross apply utf_getCleanedString(dest.ColB) as D
go
DBCC DROPCLEANBUFFERS ;
go
-- update process using the SCALAR FUNCTION
update Dest with(rowlock) set
dest.ColB = dbo.udf_getCleanedString(dest.ColB)
from dbo.FindString dest
go
```
And these are the execution plans.
As you can see, the table-valued function performs much better than the scalar function; both do the same string replacement, but one returns a table and the other a scalar value.
Another important measurement for you to look at: `SET STATISTICS IO ON;`
|
Efficiently replacing many characters from a string
|
[
"",
"sql",
"sql-server",
""
] |
I am studying for SQL exam, and I came across this fact, regarding subqueries:
```
2. Main query and subquery can get data from different tables
```
When would this feature be useful? I find it difficult to imagine such a case.
|
Millions of situations call for finding information in different tables; it's the basis of relational data. Here's an example:
Find the emergency contact information for all students who are in a chemistry class:
```
SELECT Emergency_Name, Emergency_Phone
FROM tbl_StudentInfo
WHERE StudentID IN (SELECT b.StudentID
FROM tbl_ClassEnroll b
WHERE Subject = 'Chemistry')
```
|
```
SELECT * FROM tableA
WHERE id IN (SELECT id FROM tableB)
```
|
Why would a SQL query need to be so complicated like this feature allows?
|
[
"",
"mysql",
"sql",
"subquery",
""
] |
Currently I'm working on a project where we just added the ability to have multiple values stored within a field. Previously, it stored a single value, but now it contains multiple. Below is an example of what I'm talking about.
Ex. A person's name is passed in (John Smith). Now the user can pass in multiple people's names, delimited by a ';' (John Smith;John Doe;Jane Tandy).
My issue is that we have a data source for a drop down that currently feeds off this field. Below is the SQL for it.
```
Select Distinct Content
From LIB_DocumentAttribute A
inner join LIB_PublicDocument B on A.PublicDocumentId = B.PublicDocumentId
Where AttributeId = (Select AttributeId From LIB_Attribute Where FieldName='Author')
and B.Status = 'Approved'
```
This somewhat works now. Content is the field that contains the multiple names. Now when the drop-down is loaded, it pulls back the concatenated string of names (the longer one from above). I want to break it apart for the data source. So far, my only idea is to split the data on the ';'. However, I then need to take that split-out data and apply it to the table that returns the rest of the data. Below is where I have gotten to, but I am stuck.
```
CREATE TABLE #Authors
(
Content varchar(MAX)
)
CREATE TABLE #Temp1
(
Content varchar(MAX)
)
CREATE TABLE #Temp2
(
Content varchar(MAX)
)
CREATE TABLE #Temp3
(
Content varchar(MAX)
)
--Load Authors table to store all Authors
INSERT INTO #Authors
Select Distinct Content
From LIB_DocumentAttribute A
inner join LIB_PublicDocument B on A.PublicDocumentId = B.PublicDocumentId
Where AttributeId = (Select AttributeId From LIB_Attribute Where FieldName='Author')
and B.Status = 'Approved'
--Take multiple Authors separated by '; ' and add to Temp1
INSERT INTO #Temp1
SELECT REPLACE(Content, '; ', ';') FROM #Authors WHERE Content LIKE '%; %'
--Remove multiple Authors separated by '; ' from Authors table
DELETE FROM #Authors
WHERE Content LIKE '%; %'
--Take multiple Authors separated by ';' and add to Temp2
INSERT INTO #Temp2
SELECT Content FROM #Authors WHERE Content LIKE '%;%'
--Remove multiple Authors separated by ';' from Authors table
DELETE FROM #Authors
WHERE Content LIKE '%;%'
--Somewhow split data and merge back together
DROP TABLE #Authors
DROP TABLE #Temp1
DROP TABLE #Temp2
DROP TABLE #Temp3
```
Edit:
So in the end, I came up with a solution that utilized some of the pieces Kumar suggested. I created a function for splitting the string as he suggested and added some changes of my own to make it work. Mind you, this is in a table-valued function, with the table called @Authors, which has one column called Content.
```
BEGIN
DECLARE @Temp TABLE
(
Content varchar(MAX)
)
--Load Authors table to store all Authors
INSERT INTO @Authors
Select Distinct Content
From LIB_DocumentAttribute A
inner join LIB_PublicDocument B on A.PublicDocumentId = B.PublicDocumentId
Where AttributeId = (Select AttributeId From LIB_Attribute Where FieldName='Author')
--Take multiple Authors separated by ', ' and add to Temp
INSERT INTO @Temp
SELECT REPLACE(Content, ', ', ',')
FROM @Authors;
--Remove multiple Authors separated by ', ' from Authors table
DELETE FROM @Authors
WHERE Content LIKE '%,%';
--Readd multiple Authors now separated into Authors table
INSERT INTO @Authors
SELECT s.Content
FROM @Temp
OUTER APPLY SplitString(Content,',') AS s
WHERE s.Content <> (SELECT TOP 1 a.Content FROM @Authors a WHERE s.Content = a.Content)
RETURN
END
```
|
Check the demo at the SQL Fiddle link: <http://sqlfiddle.com/#!3/390f8/11>
```
Create table test(name varchar(1000));
Insert into test values('AAA BBB; CCC DDD; eee fff');
CREATE FUNCTION SplitString
(
@Input NVARCHAR(MAX),
@Character CHAR(1)
)
RETURNS @Output TABLE (
Item NVARCHAR(1000)
)
AS
BEGIN
DECLARE @StartIndex INT, @EndIndex INT
SET @StartIndex = 1
IF SUBSTRING(@Input, LEN(@Input), 1) <> @Character -- check the last character
BEGIN
SET @Input = @Input + @Character
END
WHILE CHARINDEX(@Character, @Input) > 0
BEGIN
SET @EndIndex = CHARINDEX(@Character, @Input)
INSERT INTO @Output(Item)
SELECT SUBSTRING(@Input, @StartIndex, @EndIndex - 1)
SET @Input = SUBSTRING(@Input, @EndIndex + 1, LEN(@Input))
END
RETURN
END
Declare @name varchar(100)
Declare @table as table(name varchar(1000))
Declare cur cursor for
Select name from test
Open cur
fetch next from cur into @name
while (@@FETCH_STATUS = 0)
begin
Insert into @table
Select * from dbo.splitstring(@name,';')
fetch next from cur into @name
end
close cur
deallocate cur
Select * from @table
```
|
This might work:
```
drop table authors
GO
create table Authors (Author_ID int identity (1,1),name varchar (255), category varchar(255))
GO
insert into authors
(name,category)
select
'jane doe','nonfiction'
union
select
'Jules Verne; Mark Twain; O. Henry', 'fiction'
union
select
'John Smith; John Doe', 'nonfiction'
GO
DECLARE @table TABLE (
names VARCHAR(255)
,id INT
)
DECLARE @category VARCHAR(255)
SET @category = 'nonfiction'
DECLARE @Author_ID INT
DECLARE AuthorLookup CURSOR
FOR
SELECT Author_ID
FROM authors
WHERE category = @category
OPEN AuthorLookup
FETCH NEXT
FROM AuthorLookup
INTO @Author_ID
WHILE @@FETCH_STATUS = 0
BEGIN
IF (
SELECT CHARINDEX(';', NAME, 0)
FROM authors
WHERE Author_ID = @Author_ID
) = 0
BEGIN
INSERT INTO @table
SELECT NAME
,Author_ID
FROM authors
WHERE Author_ID = @Author_ID
END
ELSE
BEGIN
DECLARE @value VARCHAR(255)
SELECT @value = NAME
FROM authors
WHERE Author_ID = @Author_ID
WHILE len(@value) > 0
BEGIN
INSERT INTO @table
SELECT substring(@value, 0, CHARINDEX(';', @value, 0))
,@Author_ID
SELECT @value = replace(@value, substring(@value, 0, CHARINDEX(';', @value, 0) + 2), '')
IF CHARINDEX(';', @value, 0) = 0
BEGIN
INSERT INTO @table
SELECT @value
,@Author_ID
SET @value = ''
END
END
END
FETCH NEXT
FROM AuthorLookup
INTO @Author_ID
END
CLOSE AuthorLookup
DEALLOCATE AuthorLookup
SELECT *
FROM @table
```
|
Dealing with splitting strings in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I want to write a MySQL query that fetches distinct product\_id values where a combination of attribute\_id conditions is satisfied across the rows sharing that product\_id.
I tried this one:
`SELECT DISTINCT(product_id) FROM oc_product_attribute WHERE (attribute_id = 26 AND CAST(text AS UNSIGNED) >= '1') and (attribute_id = 29 AND text = 'Flange')`
```
product_id attribute_id language_id text
1 12 1 0.25 - 20
1 13 1 2500
1 14 1 30
1 15 1 130-160
1 16 1 1.00
1 17 1 0.50
1 18 1 Medium accuracy
1 19 1 PC mountable
1 20 1 N/A
2 21 1 50
2 22 1 +/- +/-100
2 23 1 +5
2 24 1 2.5 v to +/-0.625
2 25 1 DC to 20
2 26 1 1.00
2 27 1 < 1 % of FS
2 28 1 4 PIN Connector
2 29 1 Flange
2 30 1 N/A
```
So basically, I want a query that evaluates conditions across the different rows of a given product\_id, combining those conditions with logical operators.
Please let me know, if the question is not clear.
|
You need to do EXISTS()...
```
SELECT DISTINCT product_id
FROM oc_product_attribute t1
WHERE EXISTS(SELECT * FROM oc_product_attribute t2
WHERE t1.product_id=t2.product_id
AND attribute_id = 26 AND CAST(text AS UNSIGNED) >= '1'
)
AND EXISTS (SELECT * FROM oc_product_attribute t3
WHERE t1.product_id=t3.product_id
AND attribute_id = 29 AND text = 'Flange')
```
If you need to test for more attribute/text combinations, just add another EXISTS() clause for each one.
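As a quick sanity check of the double-`EXISTS` pattern, the query runs unchanged on sqlite3 from the Python standard library; the table and column names match the question, and only the in-memory database and the trimmed-down sample rows are assumptions:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""CREATE TABLE oc_product_attribute
               (product_id INT, attribute_id INT, language_id INT, text TEXT)""")
# A subset of the sample data: product 1 lacks attributes 26/29, product 2 has both.
con.executemany("INSERT INTO oc_product_attribute VALUES (?,?,?,?)",
                [(1, 12, 1, '0.25 - 20'), (1, 18, 1, 'Medium accuracy'),
                 (2, 26, 1, '1.00'), (2, 29, 1, 'Flange'), (2, 30, 1, 'N/A')])

rows = con.execute("""
    SELECT DISTINCT product_id
    FROM oc_product_attribute t1
    WHERE EXISTS (SELECT * FROM oc_product_attribute t2
                  WHERE t1.product_id = t2.product_id
                    AND t2.attribute_id = 26 AND CAST(t2.text AS INTEGER) >= 1)
      AND EXISTS (SELECT * FROM oc_product_attribute t3
                  WHERE t1.product_id = t3.product_id
                    AND t3.attribute_id = 29 AND t3.text = 'Flange')
""").fetchall()
print(rows)   # only product 2 satisfies both attribute conditions
```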
|
You can use a subquery along with the `IN` operator to solve this. Think of it this way.
First, write a query that pulls all product\_ids that match the first requirement:
```
SELECT DISTINCT product_id
FROM oc_product_attribute
WHERE (attribute_id = 26 AND CAST(text AS UNSIGNED) >= '1');
```
Then, write a second query that pulls all product\_ids that match the second requirement:
```
SELECT DISTINCT product_id
FROM oc_product_attribute
WHERE (attribute_id = 29 AND text = 'Flange');
```
Now, you can write your final query with the idea that you want to select all rows from the first result set whose product\_id value is also in the results of the second set. Try this:
```
SELECT DISTINCT product_id
FROM oc_product_attribute
WHERE (attribute_id = 26 AND CAST(text AS UNSIGNED) >= '1') AND product_id IN(
SELECT DISTINCT product_id
FROM oc_product_attribute
WHERE (attribute_id = 29 AND text = 'Flange'));
```
|
mysql query to run query after grouping rows based on product_id
|
[
"",
"mysql",
"sql",
""
] |
I have a many-to-many relational schema with 3 tables: Users, Teams, and Teamuser.
Since a user can be on many teams and a team can have many users, teamuser is a table that joins users.id to teams.id.
I'd like to do a query that asks "With a given user ID, show me all of the teams in the teams table, with a calculated field called "member" that returns 1 if the given user is a member of that team, and a 0 otherwise"
Is there a way to do this with one query directly in MySQL?
TeamUser:
```
id teamid userid
5 1 [->] 1 [->]
1 1 [->] 2 [->]
2 2 [->] 2 [->]
6 3 [->] 1 [->]
```
teamid is a foreign key joined to Teams.id. userid is a foreign key on Users.id
Teams:
```
id name
1 Whomping Willows
2 Chudley Cannons
```
Users:
```
id username
1 fred
2 finn
3 paul
16 pickles
```
|
Run something like the following to get the Cartesian product, then use a correlated sub-select wrapped in a COALESCE to indicate membership:

```
select u.username, t.teamname,
       COALESCE((select 1 from teamusers itu where itu.teamid = t.teamid and itu.userid = u.userid), 0)
from users u, teams t
where u.username = 'finn'
```

A warning, though: this will not scale well, because it runs the sub-select against teamusers for each row in the outer Cartesian product. More teams means more rows to run the sub-select against.
|
Here is how you can do it:
```
select
t.id,
t.name,
case when x.userid is not null then 1 else 0 end as `member`
from teams t
left join (
select
tu.userid,
tu.teamid
from teamuser tu
left join users u on u.id = tu.userid
where u.id = 1 -- change the user id here which you are looking at
)x
on x.teamid = t.id
```
|
SQL Query to return Yes/No Based on a Subquery with Many-Many Join
|
[
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
How to select people like apple and banana both from the data below?
table: MyTable
```
persons | fruit
-----------------------------
P1 Apple
P1 Banana
P1 Mango
P2 Banana
P2 Apple
P3 Mango
P3 Apple
```
i.e in this case, P1, P2 should be the result.
I tried with
```
select * from MyTable where fruit in("Apple","Banana");
```
This also returns P3, because P3 also has Apple.
Thanks for any help.
|
```
SELECT a.persons
FROM MyTable a JOIN MyTable b on a.persons=b.persons
WHERE a.fruit='Apple' and b.fruit='Banana'
```
|
Try this:
```
SELECT persons
FROM MyTable
WHERE fruit IN ('Apple', 'Banana')
GROUP BY persons
HAVING COUNT(DISTINCT fruit) = 2;
```
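The `GROUP BY` / `HAVING COUNT(DISTINCT ...)` approach can be verified with sqlite3 from the Python standard library; the sample rows come straight from the question, and only the in-memory database is an assumption:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE MyTable (persons TEXT, fruit TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [('P1', 'Apple'), ('P1', 'Banana'), ('P1', 'Mango'),
                 ('P2', 'Banana'), ('P2', 'Apple'),
                 ('P3', 'Mango'), ('P3', 'Apple')])

rows = con.execute("""
    SELECT persons
    FROM MyTable
    WHERE fruit IN ('Apple', 'Banana')
    GROUP BY persons
    HAVING COUNT(DISTINCT fruit) = 2
    ORDER BY persons
""").fetchall()
print(rows)   # P3 is excluded because it only has Apple
```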
|
select persons who like apple and banana both
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"having",
""
] |
Got two tables:
```
tblJumper
JumperID
JumperName
```
and
```
tblWidthScored
ScoreID
fkJumperID
fkScoredWidth
```
Tables related by `tblWidthScored.fkJumperID = tblJumper.JumperID`
Table Contains following data:
```
tblJumper
1 Tom
2 Jerry
3 Bugs
tblWidthScored
1 1 5,72m
2 2 6,13m
3 1 5,80m
4 3 6,40m
5 2 6,30m
6 3 6,20m
```
What I'm trying to get is a list of each jumper's personal best:
```
Tom 5,80m
Jerry 6,30m
Bugs 6,40m
```
I tried SELECT DISTINCT... in various forms but didn't succeed in any way.
Could anyone give a hint, please?
Thanks!
|
Although I think this is just a Join question and appropriate articles could be found on the web, I'll post one of the answers...
```
SELECT
J.JumperName,
S.Score
FROM tblJumper as J
INNER JOIN(
SELECT
fkJumperID,
Max(fkScoredWidth) as Score
FROM tblWidthScored
GROUP BY
fkJumperID
) as S on J.JumperID = S.fkJumperID
```
The hints you're asking for are actually in your question:
You're trying to find personal bests! That means some kind of aggregation: `Maximum`.
You're trying to find personal bests distinctly. Not "the best score amongst all jumpers", but every person's personal best. That means some kind of grouping that lets you partition the score data person by person: the `Group By Clause`.
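The derived-table join from the answer above can be checked with sqlite3 from the Python standard library. One caveat worth noting: `fkScoredWidth` holds strings like `'5,72m'`, so `MAX()` compares them lexically; that happens to work for this sample because every value has the same `d,dd m` shape, but a real schema should store a number:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE tblJumper (JumperID INT, JumperName TEXT)")
con.execute("CREATE TABLE tblWidthScored (ScoreID INT, fkJumperID INT, fkScoredWidth TEXT)")
con.executemany("INSERT INTO tblJumper VALUES (?, ?)",
                [(1, 'Tom'), (2, 'Jerry'), (3, 'Bugs')])
con.executemany("INSERT INTO tblWidthScored VALUES (?, ?, ?)",
                [(1, 1, '5,72m'), (2, 2, '6,13m'), (3, 1, '5,80m'),
                 (4, 3, '6,40m'), (5, 2, '6,30m'), (6, 3, '6,20m')])

# Group to each jumper's best, then join back to get the name.
rows = con.execute("""
    SELECT J.JumperName, S.Score
    FROM tblJumper AS J
    INNER JOIN (SELECT fkJumperID, MAX(fkScoredWidth) AS Score
                FROM tblWidthScored
                GROUP BY fkJumperID) AS S
      ON J.JumperID = S.fkJumperID
    ORDER BY J.JumperID
""").fetchall()
print(rows)   # each jumper's personal best, as in the question
```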
|
You can try this query. For this type of query, you have to use a GROUP BY clause with a subquery.
```
SELECT JumperName, innertblWidthScored
FROM tblJumper INNER JOIN(SELECT fkJumperID,
Max(fkScoredWidth) as innertblWidthScored
FROM tblWidthScored
GROUP BY fkJumperID) WidthScored
on tblJumper.JumperID = WidthScored.fkJumperID
```
[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/74ac5/2)
|
Select records from joint tables
|
[
"",
"sql",
"select",
"distinct",
""
] |
I need to add 40 days to a column which contains a date in d-m-y format, and insert the result into a new column, e.g.
```
$querydate="UPDATE services SET paymentdue_date=payment_date+30 DAY WHERE my condition ";
```
and i tried
```
$querydate="SELECT up_id=".$up_id.",DATE_ADD(payment_date,INTERVAL 30 DAY) AS paymentdue_date FROM up_services";
```
The column structure for payment\_date and paymentdue\_date is varchar(10) utf8\_unicode\_ci.
Any suggestions?
|
I tried running this query and it worked for me; it updated my column with the new values :)
```
$querydate="UPDATE up_services SET paymentdue_date=DATE_ADD(payment_date,INTERVAL 30 DAY) WHERE my condition ";
```
|
The correct query will be:
```
$querydate="SELECT up_id,DATE_ADD(payment_date,INTERVAL 30 DAY) AS paymentdue_date FROM up_services where up_id='$up_id'";
```
You can't put a comparison in the SELECT list like `up_id=".$up_id."`; use WHERE instead.
If you want an update query, you can follow **@saharsh shah**'s answer.
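MySQL's `DATE_ADD(payment_date, INTERVAL 30 DAY)` has a counterpart in sqlite3 (Python standard library), `DATE(payment_date, '+30 day')`, which is enough to illustrate the idea. Note that the column must hold an ISO-format date for either function to work reliably, which is one more reason to avoid VARCHAR date columns:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE up_services (up_id INT, payment_date TEXT, paymentdue_date TEXT)")
con.execute("INSERT INTO up_services (up_id, payment_date) VALUES (1, '2014-12-01')")

# sqlite's DATE(..., '+30 day') plays the role of MySQL's DATE_ADD here.
con.execute("UPDATE up_services SET paymentdue_date = DATE(payment_date, '+30 day')")
print(con.execute("SELECT paymentdue_date FROM up_services").fetchone())
# ('2014-12-31',)
```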
|
Adding days to date column and result in new column?
|
[
"",
"mysql",
"sql",
"date",
"select",
"dateadd",
""
] |
I have a database in Microsoft SQL Server where the table names look like **firstpart.secondpart**. For example, Accounts.BankTransactionBASE or CustomerService.SurveyCommentBASE. I don't think the entire name is the name of the table, because when I find the name of the tables in the database using sys.tables, it only gives the second part of the name which is SurveyCommentBASE. Please see the image below.

I have two questions
1. What is this first part of the name?
2. How do I (change the 'select \* from sys.tables' sql command) to get
the first part of the name along with the second name, so I can
perform my queries?
|
Naming format in SQL Server is: `servername.databasename.schemaname.tablename`. Typically only the last two are needed; the database name and server name are generally omitted as unnecessary unless you're working across multiple databases and/or servers.
In your case, the firstpart is the schema name and the secondpart is the table name. It's considered to be a good practice to use both schema and table name together to refer to tables, views, etc. in SQL Server.
|
As the other answers here have already said, the part before the table name is the schema. A simple edit to your query will get it for you:
```
select schema_name(schema_id), * from sys.tables
```
|
what's the part before the dot in the name of the table?
|
[
"",
"sql",
"sql-server",
""
] |
I have one table as below:
```
ID USER1 USER2
1 Q Y
2 W Y
3 R Y
4 T Y
5 XY Y
```
How can I check whether the values in the USER2 column are ALL the same (all duplicates)? I'm using this code, but it is not working:
```
$res = mysqli_query($conn, "SELECT COMLUMN FROM TABLE");
$result = array_unique($res);
if($result == 1 )
{
echo "Unique";
}
else
{
echo "NOT Unique";
}
```
|
Just do a:
```
SELECT COUNT(USER2) FROM tablename GROUP BY USER2
```
And see if it returns 1 record, or the first record value is equal to the total record count.
Here is an example:
```
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
die("Connection failed: " . $conn->connect_error);
}
$sql = "SELECT (SELECT COUNT(USER2) FROM tablename GROUP BY USER2 LIMIT 1) = (SELECT COUNT(USER2) FROM tablename)";
$result = $conn->query($sql);
if ($result->fetch_row()[0] == 1)
{
// all same
}
else
{
//not same
}
```
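An alternative single-query check, sketched here with sqlite3 from the Python standard library rather than PHP/mysqli: `COUNT(DISTINCT USER2)` equals 1 exactly when every row holds the same value. The in-memory table and sample rows mirror the question:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE t (ID INT, USER1 TEXT, USER2 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 'Q', 'Y'), (2, 'W', 'Y'), (3, 'R', 'Y'),
                 (4, 'T', 'Y'), (5, 'XY', 'Y')])

# One distinct value means the whole column is identical.
(distinct_count,) = con.execute("SELECT COUNT(DISTINCT USER2) FROM t").fetchone()
all_same = (distinct_count == 1)
print(all_same)   # True: every USER2 value is 'Y'
```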
|
If you are trying to find duplicate values in the `USER2` column, then try this query:
```
SELECT tableName.*, COUNT(*) AS duplicate_count FROM tableName
GROUP BY USER2
HAVING duplicate_count > 1
```
This query returns all duplicate `USER2` rows.
|
How to check duplicate value in MySQL
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"having",
""
] |
I have two tables:
**Product**
> productID, productName, productPrice
**Purchased**
> productID, creationDate
Query - return a list of `unique products` that were bought since `June 2nd` with `most expensive product` returned `first`
For this I wrote below query:
```
SELECT Product.productID,
Product.productName,
Product.productPrice,
Purchased.creationDate
FROM Product
LEFT JOIN Purchased
ON Product.productID = Purchased.productID
ORDER BY Product.productPrice DESC
WHERE Purchased.creationDate > 02 / 06 / 2014
```
Could you please confirm if above written query is correct or I am doing any mistake.
|
Try this:
```
SELECT p.productID, p.productName, p.productPrice, pp.creationDate
FROM Product p
LEFT JOIN Purchased pp ON p.productID = pp.productID
WHERE pp.creationDate > '2014-06-02'
ORDER BY p.productPrice DESC;
```
|
And make it shorter using `aliases`:
```
SELECT pr.productID,
pr.productName,
pr.productPrice,
pu.creationDate
FROM Product pr
LEFT JOIN Purchased pu
ON pr.productID = pu.productID
WHERE pu.creationDate > '2014-06-02'
ORDER BY pr.productPrice DESC
```
|
SQL query with left join to get rows in decreasing order from a specific date
|
[
"",
"mysql",
"sql",
"sql-server",
"sqlite",
""
] |
I am trying to display the values in the Working\_On attribute only if they are lower than 2, and hide the rest. I have tried using WHERE clauses but without luck. I am pretty new to MySQL. Can anybody lend a hand?
This is the table I am using:
```
mysql> SELECT*
FROM employee_project;
+------+-------+---------------+
| eNum | pCode | date_assigned |
+------+-------+---------------+
| E246 | P123 | 2014-12-05 |
| E246 | P294 | 2013-11-21 |
| E387 | P123 | 2014-12-20 |
| E403 | P123 | 2015-04-15 |
| E403 | P332 | 2014-04-01 |
| E933 | P294 | 2013-11-21 |
| E933 | P332 | 2014-03-01 |
| E933 | P342 | 2014-12-20 |
+------+-------+---------------+
mysql> SELECT eNum, COUNT(pCOde) as Working_On
FROM employee_project
GROUP BY eNum;
+------+------------+
| eNum | Working_On |
+------+------------+
| E246 | 2 |
| E387 | 1 |
| E403 | 2 |
| E933 | 3 |
+------+------------+
4 rows in set (0.00 sec)
```
|
The query below will help you:
```
SELECT eNum, COUNT(pCode) as Working_On
FROM employee_project
GROUP BY eNum
HAVING COUNT(pCode) < 2;
```
|
To this:
```
SELECT eNum, COUNT(pCOde) as Working_On
FROM employee_project
GROUP BY eNum
```
add this:
```
having count(pCode) < 2
```
after the group by clause. Some database engines will allow you to do this:
```
having working_on < 2
```
but I don't know if mySql is one of those.
|
Don't Show if Count<2
|
[
"",
"mysql",
"sql",
""
] |
Why would adding columns to the select list and GROUP BY in a query increase the number of rows returned? For instance, below is my basic query, but if I add columns, I get more rows returned. The more columns added, the more rows returned. It seems the WHERE clause conditions should be the only thing controlling the rows that are returned.
Example Query:
```
select pobj_name, max(prev) prev
from pfmc_part
where pmodel in ('PN-DWG', 'NO-DWG') and pstatus = 'RELEASED'
group by pobj_name
order by pobj_name
```
|
When you add a `WHERE` clause, it filters the set of rows which will be handed down next to the `GROUP BY`, so `WHERE` is applied *before* groups. The rows which will be grouped and on which aggregates like `SUM(),MAX(),MIN(),COUNT()` will be performed have already been limited to those matching the `WHERE` conditions before `GROUP BY` is applied.
As to why you get more rows when adding columns into `SELECT` and `GROUP BY` -- well, that's how `GROUP BY` aggregates work. If you add additional columns into `SELECT` you must also group them (in most RDBMS) and as long as the values differ across rows, they will result in more grouped rows.
Consider this table:
```
Name Score
John 2
Bill 3
John 1
```
A single `GROUP BY` on `Name` will collapse the `John` rows into one:
```
SELECT Name, SUM(Score) AS total
FROM scores
GROUP BY name
Name total
John 3
Bill 3
```
Now consider this table, which has another column for sport. Here, `John` has 2 different sports represented while Bill has only 1. To include both `Name, Sport` in the `SELECT` list they must both also be in the `GROUP BY`. The similar values across rows collapse into groups, but now `John` has *two* sets of similar values to group:
```
Name Sport Score
John Baseball 3
John Bowling 9
Bill Baseball 10
Bill Baseball 6
John Bowling 12
SELECT Name, Sport, SUM(Score) AS total
FROM scores
GROUP BY Name, Sport
Name Sport total
John Baseball 3
John Bowling 21
Bill Baseball 16
```
So adding additional columns to the `GROUP BY` will result in more output rows if the columns have dissimilar values across rows.
Applying a `WHERE` clause to this second table to look for only `John` rows, would eliminate all `Bill` rows *before* applying groups. The result would be two rows.
```
SELECT Name, Sport, SUM(Score) AS total
FROM scores
-- Filter with a WHERE clause this time
WHERE Name = 'John'
GROUP BY Name, Sport
-- Results in only John's rows
Name Sport Score
John Baseball 3
John Bowling 21
```
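The two grouped scenarios above can be replayed with sqlite3 from the Python standard library to watch the row count change as columns are added to the `GROUP BY`; the in-memory table is an assumption, the data is the answer's sample:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE scores (Name TEXT, Sport TEXT, Score INT)")
con.executemany("INSERT INTO scores VALUES (?, ?, ?)",
                [('John', 'Baseball', 3), ('John', 'Bowling', 9),
                 ('Bill', 'Baseball', 10), ('Bill', 'Baseball', 6),
                 ('John', 'Bowling', 12)])

# Grouping by one column vs. two columns yields different numbers of groups.
by_name = con.execute(
    "SELECT Name, SUM(Score) FROM scores GROUP BY Name").fetchall()
by_name_sport = con.execute(
    "SELECT Name, Sport, SUM(Score) FROM scores GROUP BY Name, Sport").fetchall()
print(len(by_name), len(by_name_sport))   # 2 groups vs 3 groups
```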
|
Adding a new column to the `group by` increases the number of results you see because each `group by` column partitions the data in another way.
If you use only an aggregate function with no grouping, you get a single-row result. The more finely you group the data, the more result rows you will see.
The `where` clause controls the inputs to the grouping performed by the `group by`. A `having` clause, if present, would filter the grouped results returned to the user.
|
Why does adding columns to group by increases rows returned
|
[
"",
"sql",
"sql-server",
""
] |
I have a query which returns the total # of tickets of assignees:
```
SELECT DISTINCT case
when ASSIGNEE LIKE '%e474728%' then 'Jason'
when ASSIGNEE LIKE '%e499653%' then 'Alexis'
when ASSIGNEE LIKE '%e509255%' then 'David'
when ASSIGNEE LIKE '%e533676%' then 'KC Lin'
when ASSIGNEE LIKE '%Desktop_Support%' then 'LIKE Desktop Support'
when ASSIGNEE = 'Desktop_Support' then 'Desktop Support (unknown)'
else 'ASSIGNEE UNKNOWN'
End as ASSIGNEE,
COUNT(STATUS) as tickets
FROM techsupport
WHERE STATUS = 'Closed'
AND ASSIGNEE LIKE '%Desktop_Support%'
GROUP BY ASSIGNEE
```
WHICH produces the result:
```
ASSIGNEE tickets
Desktop Support (unknown) 981
David 445
Alexis 135
Jason 48
KC Lin 20
ASSIGNEE UNKNOWN 19
KC Lin 18
Alexis 15
Alexis 14
KC Lin 12
David 11
ASSIGNEE UNKNOWN 11
Alexis 10
Alexis 8
ASSIGNEE UNKNOWN 7
Alexis 4
Jason 4
ASSIGNEE UNKNOWN 4
```
How do I combine the various Assignees?
So that it shows the combined totals of each Assignee?
```
ASSIGNEE tickets
Desktop Support (unknown) 981
David 456
Jason 52
KC Lin 50
ASSIGNEE UNKNOWN 41
Alexis 186
```
Thanks!
|
You are grouping by ASSIGNEE, not by the result of the case expression (the rename is performed after GROUP BY). Also, DISTINCT is redundant. Here's one way to do it without repeating the case in the GROUP BY clause:
```
SELECT ASSIGNEE, count(status)
FROM (
SELECT case when ASSIGNEE LIKE '%e474728%' then 'Jason'
when ASSIGNEE LIKE '%e499653%' then 'Alexis'
when ASSIGNEE LIKE '%e509255%' then 'David'
when ASSIGNEE LIKE '%e533676%' then 'KC'
when ASSIGNEE LIKE '%Desktop_Support%' then 'LIKE Desktop Support'
when ASSIGNEE = 'Desktop_Support' then 'Desktop Support (unknown)'
else 'ASSIGNEE UNKNOWN'
End as ASSIGNEE
, STATUS
FROM techsupport
WHERE STATUS = 'Closed'
) as X
GROUP BY ASSIGNEE;
```
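The derived-table-then-group pattern can be sketched with sqlite3 from the Python standard library. The assignee codes and the two names here are illustrative stand-ins for the question's data, and the CASE is trimmed to two branches just to show the grouping:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE techsupport (ASSIGNEE TEXT, STATUS TEXT)")
con.executemany("INSERT INTO techsupport VALUES (?, ?)",
                [('e474728_Desktop_Support', 'Closed'),
                 ('e474728_Desktop_Support', 'Closed'),
                 ('e499653_Desktop_Support', 'Closed'),
                 ('e499653_Desktop_Support', 'Open')])

# Map raw codes to names in the inner query, then group on the mapped name.
rows = con.execute("""
    SELECT ASSIGNEE, COUNT(STATUS)
    FROM (SELECT CASE WHEN ASSIGNEE LIKE '%e474728%' THEN 'Jason'
                      WHEN ASSIGNEE LIKE '%e499653%' THEN 'Alexis'
                      ELSE 'ASSIGNEE UNKNOWN'
                 END AS ASSIGNEE, STATUS
          FROM techsupport
          WHERE STATUS = 'Closed') AS X
    GROUP BY ASSIGNEE
    ORDER BY ASSIGNEE
""").fetchall()
print(rows)   # one combined row per person
```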
|
Use a different name for the alias of the `case` to distinguish it from the original column name.
```
SELECT DISTINCT case
when ASSIGNEE LIKE '%e474728%' then 'Jason'
when ASSIGNEE LIKE '%e499653%' then 'Alexis'
when ASSIGNEE LIKE '%e509255%' then 'David'
when ASSIGNEE LIKE '%e533676%' then 'KC'
when ASSIGNEE LIKE '%Desktop_Support%' then 'LIKE Desktop Support'
when ASSIGNEE = 'Desktop_Support' then 'Desktop Support (unknown)'
else 'ASSIGNEE UNKNOWN'
End as ASSIGNEE_GRP,
COUNT(STATUS) as tickets
FROM techsupport
WHERE STATUS = 'Closed'
AND ASSIGNEE LIKE '%Desktop_Support%'
GROUP BY ASSIGNEE_GRP
```
|
How to combine multiple results?
|
[
"",
"sql",
""
] |
I have two tables **Table A** and **Table B**
**Table A**
```
1. *id*
2. *name*
```
**Table B**
```
1. *A.id*
2. *datetime*
```
I want to select
```
1. *A.id*
2. *A.name*
3. *B.datetime*
```
even if table B does not contain a row with *A.id* for a specific day; in that case it should show *NULL* for that column.
e.g
*Table A* contains
```
1. *(1 , Haris)*
2. *(2, Hashsim)*
```
*Table B* Contains following for today's date.
```
1. *(1, '2014-12-26 08:00:00')*
```
It should show 2 results, with ids **1** and **2**, instead of only id **1**.
Using a LEFT OUTER JOIN with a WHERE clause on the right table effectively turns it into an INNER JOIN; how do I work around that?
|
Use [**LEFT OUTER JOIN**](http://www.w3schools.com/sql/sql_join_left.asp) to get all the rows from `Left table` and one that does not have match will have `NULL` values in `Right table` columns
```
SELECT A.id,
A.name,
B.[datetime]
FROM tableA A
LEFT OUTER JOIN tableB B
ON A.Id = B.id
AND B.[datetime] < @date
```
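The key detail, putting the extra condition in the ON clause rather than the WHERE clause, can be demonstrated with sqlite3 from the Python standard library. Left-table rows survive with the ON-clause filter but are discarded by the WHERE version. The column name `dt` stands in for the question's `datetime` column:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE A (id INT, name TEXT)")
con.execute("CREATE TABLE B (id INT, dt TEXT)")
con.executemany("INSERT INTO A VALUES (?, ?)", [(1, 'Haris'), (2, 'Hashsim')])
con.execute("INSERT INTO B VALUES (1, '2014-12-26 08:00:00')")

# Condition in the ON clause: unmatched left rows are kept with NULLs.
on_clause = con.execute("""
    SELECT A.id, A.name, B.dt
    FROM A LEFT OUTER JOIN B
      ON A.id = B.id AND B.dt < '2014-12-27'
    ORDER BY A.id
""").fetchall()
# Same condition in the WHERE clause: NULL dt fails the test, rows vanish.
where_clause = con.execute("""
    SELECT A.id, A.name, B.dt
    FROM A LEFT OUTER JOIN B ON A.id = B.id
    WHERE B.dt < '2014-12-27'
""").fetchall()
print(len(on_clause), len(where_clause))   # 2 rows vs 1 row
```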
|
```
SELECT A.id, A.name, b.datetime
FROM A
LEFT Outer JOIN B on B.id = A.id
```
|
Select using LEFT OUTER JOIN with condition
|
[
"",
"sql",
"sql-server",
""
] |
This is sample information from my database so full picture can be shown as to what I am needing to accomplish
```
Create Table #Information
(
salesID int,
storelocation varchar(100),
salespersonName varchar(100)
)
Insert Into #Information Values
(1, 'New York', 'Michael'),
(2, 'New York', 'Michael'),
(3, 'New York', 'Michael'),
(4, 'New York', 'Michael'),
(5, 'Texas', 'Richard'),
(6, 'Texas', 'Richard'),
(7, 'Texas', 'Richard'),
(8, 'Texas', 'Richard'),
(9, 'Texas', 'Richard'),
(10, 'Texas', 'Richard'),
(11, 'Washington', 'Sam'),
(12, 'Washington', 'Sam'),
(13, 'Washington', 'Sam'),
(14, 'Washington', 'Sam'),
(15, 'Washington', 'Sam')
SELECT storelocation,
COUNT(salesID/storelocation)
FROM #Information
```
I want to take the total count of salesID and divide it by the salesID count for that storelocation. So the divisions I want to happen would be:
```
New York - 15/4 = .266
Texas - 15/6 = .4
Washington - 15/5 = .333
```
The way I have been doing this is like so - but this is not returning accurate results.
```
declare @TotalCount as int
select @TotalCount = convert(decimal(18,4), count(salesID))
from #information
Select
convert(decimal(18,4), Count(salesID))/@TotalCount
From #information
```
|
Make the total Count query as `Subquery` and divide it by `storelocation` group count
```
SELECT storelocation,
(SELECT CONVERT(DECIMAL(18, 4), Count(1))
FROM #Information) / Count(1)
FROM #Information
GROUP BY storelocation
```
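The subquery-total division can be checked with sqlite3 from the Python standard library, following this answer's direction (total divided by per-location count). Two assumptions: the temp table `#Information` becomes a plain `Information` table, and `COUNT(*) * 1.0` forces real division where the T-SQL answers use `CONVERT(DECIMAL...)`:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE Information (salesID INT, storelocation TEXT)")
con.executemany("INSERT INTO Information VALUES (?, ?)",
                [(i, 'New York') for i in range(1, 5)] +
                [(i, 'Texas') for i in range(5, 11)] +
                [(i, 'Washington') for i in range(11, 16)])

# Uncorrelated scalar subquery supplies the grand total for every group.
rows = con.execute("""
    SELECT storelocation,
           (SELECT COUNT(*) * 1.0 FROM Information) / COUNT(*)
    FROM Information
    GROUP BY storelocation
    ORDER BY storelocation
""").fetchall()
print(rows)   # 15/4, 15/6, 15/5 per location
```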
|
```
CREATE TABLE #Information
(
salesID INT,
storelocation VARCHAR(100),
salespersonName VARCHAR(100)
)
INSERT INTO #Information
VALUES
(1, 'New York', 'Michael'),
(2, 'New York', 'Michael'),
(3, 'New York', 'Michael'),
(4, 'New York', 'Michael'),
(5, 'Texas', 'Richard'),
(6, 'Texas', 'Richard'),
(7, 'Texas', 'Richard'),
(8, 'Texas', 'Richard'),
(9, 'Texas', 'Richard'),
(10, 'Texas', 'Richard'),
(11, 'Washington', 'Sam'),
(12, 'Washington', 'Sam'),
(13, 'Washington', 'Sam'),
(14, 'Washington', 'Sam'),
(15, 'Washington', 'Sam')
DECLARE @TotalCount AS INT
SELECT @TotalCount = CONVERT(DECIMAL(18, 4), COUNT(salesID))
FROM #information
SELECT storelocation,
@TotalCount / CONVERT(DECIMAL(18, 4), COUNT(storelocation)) AS Division
FROM #Information
GROUP BY
storelocation
```
|
Division In Query
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
"group-by",
""
] |
I have two tables.
### common\_products
* id
* product
* owner\_uid
* backup\_uid
* manager\_uid
## ss\_users
* userID
* firstName
* lastName
* email
I want to get a name/email list of all the owners, backups and managers.
I am using the query below, but was wondering if there was a more efficient way to go about querying the tables.
WORKING QUERY:
```
SELECT DISTINCT email,
( firstName + ' ' + lastName ) AS userFull,
lastName
FROM common_products cp
LEFT OUTER JOIN ss_users u
ON u.userID = cp.owner_uid
UNION
SELECT DISTINCT email,
( firstName + ' ' + lastName ) AS userFull,
lastName
FROM common_products cp
LEFT OUTER JOIN ss_users u
ON u.userID = cp.backup_uid
UNION
SELECT DISTINCT email,
( firstName + ' ' + lastName ) AS userFull,
lastName
FROM common_products cp
LEFT OUTER JOIN ss_users u
ON u.userID = cp.manager_uid
```
Is there a more optimized way to query the database?
|
I suspect that this version might be faster:
```
select u.email, (u.firstName+ ' '+u.lastName) AS userFull, u.lastName
from ss_users u
where exists (select 1 from common_products cp where u.userID = cp.owner_uid) or
exists (select 1 from common_products cp where u.userID = cp.backup_uid) or
exists (select 1 from common_products cp where u.userID = cp.manager_uid);
```
Then for best performance add three indexes: `common_products(owner_uid)`, `common_products(backup_uid)`, and `common_products(manager_uid)`.
This avoids the duplicate-elimination step (needed because you are using `union`), and the `exists` checks should be at least as fast as the `join`s.
|
Ensure there are indexes on common\_products's owner\_uid, backup\_uid, and manager\_uid fields as well as ss\_users's userID field and you could improve performance a bit further by including the columns needed on the index.
```
SELECT DISTINCT
user_owner.email [OwnerEmail],user_owner.firstName + ' ' + user_owner.lastName [OwnerUserFull], user_owner.lastName [OwnerLastName],
user_backup.email [BackupEmail],user_backup.firstName + ' ' + user_backup.lastName [BackupUserFull], user_backup.lastName [BackupLastName],
user_manager.email [ManagerEmail],user_manager.firstName + ' ' + user_manager.lastName [ManagerUserFull], user_manager.lastName [ManagerLastName]
FROM common_products cp
LEFT OUTER JOIN ss_users user_owner ON user_owner.userID = cp.owner_uid
LEFT OUTER JOIN ss_users user_backup ON user_backup.userID = cp.backup_uid
LEFT OUTER JOIN ss_users user_manager ON user_manager.userID = cp.manager_uid
```
|
Optimizing SQL join single column with multiple columns in another table
|
[
"",
"sql",
"sql-server",
"performance",
""
] |
I don't understand why my SQL is not running;
it pops up a window saying
> "Your query does not include the specified expression `' SUM(SaleRecord.Number)*(product.Price'` as part of an aggregate function"
```
SELECT SUM(SaleRecord.Number)*(Product.Price) AS TotalIncome
FROM Product, SaleRecord
WHERE Product.ProductID=SaleRecord.SaleProduct;
```
|
You asked in my previous answer:
> "thank you, I just make some mistake, now it is working. And sorry to
> bother you more, I want to select the product who sell the most out,
> how can I do it, I try to add MAX(xxx) on it, and it don't work"
Now, I am by no means an expert, but there are two processes going on. Your wording is ambiguous, so I'm going to assume you want to know which product sells the most in dollar terms rather than by count. (For example, you might sell 1,000 $0.50 products, equalling $500 in total sales, or 10 $500 products, totalling $5,000. If you want the count rather than the dollar value, the method changes slightly.)
So the first process is to get the total sales of each product, which I outlined above. Then you want to nest that inside a second query, where you then select the max. I'll give you the code and then explain it:
```
SELECT TOP 1 T.ProductID, T.TotalSale
FROM (
    SELECT P.ProductID, SUM(S.Number) * P.Price AS TotalSale
    FROM Products AS P, SaleRecords AS S
    WHERE P.ProductID = S.SaleProduct
    GROUP BY P.ProductID, P.Price
) AS T
ORDER BY T.TotalSale DESC
```
It's easiest to imagine this as querying a query. Your first query is the one inside the FROM () clause. That runs first and gives you the total sale per product. Then the outer query (the topmost SELECT line) runs, selecting the ProductID and the sale amount that is largest among all the products.
Your teacher may not like this since nesting queries is a little advanced (though completely intuitive IMO). Hopefully this helps!
|
`Product.Price` is not part of the aggregate. Presumably, you intend:
```
SELECT SUM(SaleRecord.Number * Product.Price) AS TotalIncome
FROM Product INNER JOIN
SaleRecord
ON Product.ProductID=SaleRecord.SaleProduct;
```
Note that I also fixed the archaic `join` syntax.
|
Your Query does not include the specified expression, how to fix it?
|
[
"",
"sql",
"ms-access",
""
] |
Let's say I have this table called `mytable`:
```
+-----+-----+-----------+
|ID | ID2 | product |
+-----+-----+-----------+
| 1 | 1 | product1 |
| 2 | 2 | product1 |
| 3 | 1 | product2 |
| 4 | 4 | product2 |
| 5 | 1 | product3 |
| 6 | 1 | product4 |
| 7 | 4 | product4 |
+-----+-----+-----------+
```
I want to select all products that have `id2 = 4` but if they don't have `id2 = 4` then show `id2 = 1`.
The output should be:
```
id product
1 product1
4 product2
5 product3
7 product4
```
Could I do something like this in SQL (I am using mysql)?
|
Another query using conditional aggregation to select products and the id corresponding to id2=4 or, failing that, id2=1:
```
select coalesce(
max(case when id2=4 then id end),
max(case when id2=1 then id end)
) id,
product
from mytable
where id2 in (4,1)
group by product
```
<http://sqlfiddle.com/#!9/47050/1>
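The conditional-aggregation query runs unchanged on sqlite3 from the Python standard library, so it is easy to verify against the question's full sample table (only the in-memory database is an assumption):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE mytable (ID INT, ID2 INT, product TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                [(1, 1, 'product1'), (2, 2, 'product1'), (3, 1, 'product2'),
                 (4, 4, 'product2'), (5, 1, 'product3'), (6, 1, 'product4'),
                 (7, 4, 'product4')])

# Prefer the id2=4 row per product; fall back to the id2=1 row.
rows = con.execute("""
    SELECT COALESCE(MAX(CASE WHEN ID2 = 4 THEN ID END),
                    MAX(CASE WHEN ID2 = 1 THEN ID END)) AS id,
           product
    FROM mytable
    WHERE ID2 IN (4, 1)
    GROUP BY product
    ORDER BY product
""").fetchall()
print(rows)   # matches the expected output in the question
```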
|
For this you can use `OR`. It should work in this case, because if you meet the first condition, you're satisfied. If you don't, you'll fall back on the second one. If you still don't meet it, then it won't be included. Try this:
```
SELECT DISTINCT product
FROM myTable
WHERE id2 = 4 OR id2 = 1;
```
**EDIT**
This doesn't select the id, but I'm not sure how you want to select the id. What if a product has `id2=4` twice? Which `id` value would you select? If this doesn't solve your entire problem, let me know and I will try to help work out the full solution.
|
how to select one thing and if the condition is false then select something else
|
[
"",
"mysql",
"sql",
""
] |
I have one table, "Marks", which contains marks for different subjects. If a mark fits into a particular range, I should pick up the respective rank from a lookup table and insert it into the Marks table itself, in the column 'rank\_sub\_1'. Could you please help me with how to look up the rank and insert it into the column? Below is my table structure.
```
**Marks**
Subject1_Marks Subject2_Marks
------------------------------
71 22
10 40
**LookupTable**
Rank range1 range2
----------------------
9 10 20
8 21 30
7 31 40
6 41 50
5 51 60
4 61 70
3 71 80
2 81 90
1 91 100
```
Now I want to check marks of each subject with lookup table which contains the ranges and ranks for different marks obtained.
```
**Marks**
Subject1_Marks Subject2_Marks Rank_Sub_1 Rank_Sub_2
------------------------------------------------------
71 22
10 40
```
|
(Considering there is no overlapping in range values)
Take two instances of LOOKUPTABLE and join the first with subject1\_marks and the second with subject2\_marks. Here I haven't used LEFT JOINs, as I am assuming each subject mark will fall under exactly one range. If you are not sure about that, please use left joins and handle NULL values as per your requirement for columns RANK\_SUB\_1 and RANK\_SUB\_2.
```
WITH LOOKUPTABLE_TMP AS (SELECT * FROM LOOKUPTABLE)
SELECT M.*, L1.RNK AS RANK_SUB_1, L2.RNK AS RANK_SUB_2
FROM MARKS M, LOOKUPTABLE_TMP L1, LOOKUPTABLE_TMP L2
WHERE M.SUBJECT1_MARKS BETWEEN L1.RANGE1 AND L1.RANGE2
AND M.SUBJECT2_MARKS BETWEEN L2.RANGE1 AND L2.RANGE2
```
Then MERGE the data into table MARKS.
Solution:
```
MERGE INTO MARKS MS
USING
(
SELECT M.SUBJECT1_MARKS, M.SUBJECT2_MARKS, L1.RNK AS RANK_SUB_1, L2.RNK AS RANK_SUB_2
FROM MARKS M , LOOKUPTABLE L1, LOOKUPTABLE L2
WHERE M.SUBJECT1_MARKS BETWEEN L1.RANGE1 AND L1.RANGE2
AND M.SUBJECT2_MARKS BETWEEN L2.RANGE1 AND L2.RANGE2
GROUP BY M.SUBJECT1_MARKS, M.SUBJECT2_MARKS, L1.RNK, L2.RNK
) SUB
ON (MS.SUBJECT1_MARKS=SUB.SUBJECT1_MARKS AND MS.SUBJECT2_MARKS =SUB.SUBJECT2_MARKS)
WHEN MATCHED THEN UPDATE
SET MS.RANK_SUB_1=SUB.RANK_SUB_1, MS.RANK_SUB_2=SUB.RANK_SUB_2;
```
Tested on below schema and data as per your question's details.
```
CREATE TABLE MARKS (SUBJECT1_MARKS NUMBER, SUBJECT2_MARKS NUMBER , RANK_SUB_1 NUMBER, RANK_SUB_2 NUMBER)
INSERT INTO MARKS (SUBJECT1_MARKS , SUBJECT2_MARKS ) VALUES (71, 22);
INSERT INTO MARKS (SUBJECT1_MARKS , SUBJECT2_MARKS ) VALUES (10, 40);
CREATE TABLE LOOKUPTABLE (RNK NUMBER, RANGE1 NUMBER , RANGE2 NUMBER)
INSERT INTO LOOKUPTABLE VALUES (9, 10, 20);
INSERT INTO LOOKUPTABLE VALUES (8, 21, 30);
INSERT INTO LOOKUPTABLE VALUES (7, 31, 40);
INSERT INTO LOOKUPTABLE VALUES (6, 41, 50);
INSERT INTO LOOKUPTABLE VALUES (5, 51, 60);
INSERT INTO LOOKUPTABLE VALUES (4, 61, 70);
INSERT INTO LOOKUPTABLE VALUES (3, 71, 80);
INSERT INTO LOOKUPTABLE VALUES (2, 81, 90);
INSERT INTO LOOKUPTABLE VALUES (1, 91, 100);
```
Thanks!!
|
I think this `update` statement should do what you want:
```
UPDATE Marks m
SET Rank_Sub_1 = (SELECT l.Rank
FROM LookupTable l
WHERE m.Subject1_Marks BETWEEN l.range1 AND l.range2)
WHERE EXISTS (
SELECT 1
FROM LookupTable l
WHERE m.Subject1_Marks BETWEEN l.range1 AND l.range2
);
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!4/bd827/1)
If you want to update the value for `Rank_Sub_2` at the same time, you can do this:
```
UPDATE Marks m
SET Rank_Sub_1 = (SELECT l.Rank
FROM LookupTable l
WHERE m.Subject1_Marks BETWEEN l.range1 AND l.range2)
,Rank_Sub_2 = (SELECT l.Rank
FROM LookupTable l
WHERE m.Subject2_Marks BETWEEN l.range1 AND l.range2)
```
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!4/9affd8/1)
|
How to compare values with lookup table
|
[
"",
"sql",
"plsql",
""
] |
I have two tables:
tournament\_teams:
```
ID TEAM TournamentID
2 Berp 7
3 Dang 7
```
tournament\_pool\_team:
```
ID TournamentID PoolID TeamID VersusTeamID
1 7 5 2 3
```
Now, this query should return nothing, since all teams are in the pool (teamid and versusteamid):
```
SELECT t1.id,
t1.team
FROM tournament_teams t1
WHERE t1.id NOT IN(SELECT t2.id
FROM tournament_pool_team t2
WHERE ( t2.teamid = t1.id
OR t2.versusteamid = t1.id )
AND poolid = '5')
AND t1.tournamentid = '7'
ORDER BY team ASC
```
I only want to show teams that do not exist in the tournament\_pool\_team table; teams are specified in its teamid and versusteamid columns. The query needs to pull the teams from tournament\_teams that aren't referenced in tournament\_pool\_team. With the current entries it should return nothing, since both IDs 2 and 3 appear in tournament\_pool\_team.
|
```
SELECT t1.id,
t1.team
FROM tournament_teams t1
WHERE t1.id NOT IN(SELECT t2.teamid
FROM tournament_pool_team t2
WHERE t2.teamid = t1.id
AND poolid = '5'
UNION
SELECT t2.versusteamid
FROM tournament_pool_team t2
WHERE t2.versusteamid = t1.id
AND poolid = '5')
AND t1.tournamentid = '7'
ORDER BY team ASC
```
I have modified the query according to your approach. You were matching t1.id against t2.id; instead you should match t1.id against t2.teamid and t2.versusteamid.
|
Your subquery returns tournament\_pool\_team's `ID`, which is unrelated to tournament\_teams' `id`, so you are getting the wrong result.
Use the `NOT EXISTS` operator, which finds the tournament\_teams rows that are not present in tournament\_pool\_team's `TeamID` or `VersusTeamID`:
```
SELECT t1.id,
t1.team
FROM tournament_teams t1
WHERE NOT EXISTS (SELECT 1
FROM tournament_pool_team t2
WHERE ( t2.teamid = t1.id
OR t2.versusteamid = t1.id )
AND t2.poolid = '5')
AND t1.tournamentid = '7'
ORDER BY t1.team ASC
```
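As a quick sanity check, here is a SQLite stand-in with the question's rows (the extra 'Newt' team is made up for illustration; MySQL behaves the same way for this query). The `NOT EXISTS` version returns nothing until a team that is not in the pool exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tournament_teams (id INTEGER, team TEXT, tournamentid INTEGER);
    CREATE TABLE tournament_pool_team (id INTEGER, tournamentid INTEGER,
                                       poolid INTEGER, teamid INTEGER,
                                       versusteamid INTEGER);
    INSERT INTO tournament_teams VALUES (2, 'Berp', 7), (3, 'Dang', 7);
    INSERT INTO tournament_pool_team VALUES (1, 7, 5, 2, 3);
""")

query = """
    SELECT t1.id, t1.team
    FROM tournament_teams t1
    WHERE NOT EXISTS (SELECT 1
                      FROM tournament_pool_team t2
                      WHERE (t2.teamid = t1.id OR t2.versusteamid = t1.id)
                        AND t2.poolid = 5)
      AND t1.tournamentid = 7
"""
rows = con.execute(query).fetchall()
print(rows)   # both teams are already in the pool, so: []

# A hypothetical team that is not in the pool yet should now appear.
con.execute("INSERT INTO tournament_teams VALUES (4, 'Newt', 7)")
rows2 = con.execute(query).fetchall()
print(rows2)  # [(4, 'Newt')]
```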
|
SQL (Mysql) NOT IN in combination with OR, wrong results
|
[
"",
"mysql",
"sql",
""
] |
```
select ROUND(5.8990,0) output is 6.000
select ROUND(643/109,0) output is 5
```
Why is there a difference in output?
|
It's due to use of integer division.
```
> select 643/109;
5
> select round(643/109, 0);
5.0
> select 643/109.0;
5.89908256880734
> select round(643/109.0, 0);
6.0
```
The first command shows that integer division is taking place. So, change the divisor or dividend to a float and you will get your expected result as shown in the last command.
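The same behaviour can be reproduced with SQLite as a stand-in: dividing two integers discards the fraction before `ROUND` ever runs, while making either operand a float fixes it.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Integer / integer: the fraction is discarded first.
print(con.execute("SELECT 643 / 109").fetchone()[0])              # 5
# ROUND then rounds the already-truncated quotient.
print(con.execute("SELECT ROUND(643 / 109, 0)").fetchone()[0])    # 5.0
# Float division happens first, so ROUND sees 5.899...
print(con.execute("SELECT ROUND(643 / 109.0, 0)").fetchone()[0])  # 6.0
```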
|
`select 643/109` gives the quotient `5` (integer division). So `select ROUND(5,0)` is `5`.
|
Why is there a difference in output?
|
[
"",
"sql",
"sql-server",
""
] |
I have two tables, T1 and T2. I want to select all rows from table T2 whose t1\_id does not match any id in table T1. This is not working:
```
SELECT t2.id, t2.t1_id, t2.data FROM T2.t2, T1.t1 WHERE t2.t1_id != t1.id
```
|
You could use a subquery with a `NOT IN` clause, as below:
```
SELECT t2.id, t2.t1_id, t2.data
FROM T2 t2
WHERE t2.t1_id NOT IN (SELECT DISTINCT id
FROM T1)
```
|
Use a **LEFT JOIN** to fetch all records from the left table, then add a WHERE condition checking for a NULL value from the second table to find the left table's unmatched records.
Try this:
```
SELECT t2.id, t2.t1_id, t2.data
FROM T2.t2
LEFT OUTER JOIN T1.t1 ON t2.t1_id = t1.id
WHERE t1.id IS NULL
```
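Both forms return the same rows on clean data; here is a SQLite stand-in with made-up sample rows (T2 row 3 references a T1 id that does not exist, so it is the only row either query returns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE T1 (id INTEGER);
    CREATE TABLE T2 (id INTEGER, t1_id INTEGER, data TEXT);
    INSERT INTO T1 VALUES (1), (2);
    INSERT INTO T2 VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 99, 'c');
""")

not_in = con.execute("""
    SELECT t2.id, t2.t1_id, t2.data
    FROM T2 t2
    WHERE t2.t1_id NOT IN (SELECT id FROM T1)
""").fetchall()

anti_join = con.execute("""
    SELECT t2.id, t2.t1_id, t2.data
    FROM T2 t2
    LEFT JOIN T1 t1 ON t2.t1_id = t1.id
    WHERE t1.id IS NULL
""").fetchall()

print(not_in, anti_join)   # both: [(3, 99, 'c')]
```

One caveat worth knowing: if the `NOT IN` subquery can ever return a NULL, it yields no rows at all, while the LEFT JOIN anti-join keeps working.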
|
Select SQL statement on related tables
|
[
"",
"mysql",
"sql",
"select",
"join",
"relational-database",
""
] |
I am using SQL 2012.
I have large Database Structure in my project.
I have around 10,000 stored procedures in my database.
I have to delete one table from the database. Is there any way to directly check how many stored procedures will be affected by deleting that table?
Any help will be appreciated.
|
Using [sp\_depends](http://msdn.microsoft.com/en-us/library/ms189487.aspx):
```
EXEC sp_depends @objname = N'yourtablename'
```
|
If you are using SQL Server Management Studio, you can right click table and select "View Dependencies" to view the dependent objects
|
How check that how many stored procedure will affect after deleting one table in SQL
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm trying to create an SQL query that keeps only "yahoo finance" and removes the other variants in the following dataset. This dataset is part of an InnoDB table.
```
yahoo/finance
yahoo.finance
yahoo. finance
yahoo-finance
yahoo+finance
yahoo finance
yahoo .finance
yahoo . finance
yahoo - finance
yahoo + finance
```
I tried to write an SQL function that removes duplicates by ignoring these characters: /.,+-'. However, I failed, and I'd appreciate any help. Thank you!
|
Try this:
```
SELECT DISTINCT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(columnA, '/', ' '), '.', ' '), ',', ' '), '+', ' '), '-', ' '), ' ', ' '), ' ', ' ')
FROM tableA;
```
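A SQLite stand-in running the same nested `REPLACE` chain over the question's ten variants (two passes of `'  '` -> `' '` are enough here because the worst case, `'yahoo + finance'`, produces three consecutive spaces):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tableA (columnA TEXT)")
variants = ["yahoo/finance", "yahoo.finance", "yahoo. finance", "yahoo-finance",
            "yahoo+finance", "yahoo finance", "yahoo .finance", "yahoo . finance",
            "yahoo - finance", "yahoo + finance"]
con.executemany("INSERT INTO tableA VALUES (?)", [(v,) for v in variants])

rows = con.execute("""
    SELECT DISTINCT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
        columnA, '/', ' '), '.', ' '), ',', ' '), '+', ' '), '-', ' '),
        '  ', ' '), '  ', ' ')
    FROM tableA
""").fetchall()
print(rows)   # all ten variants collapse to one value: [('yahoo finance',)]
```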
|
You may want to consider using `LIKE` with a character-class pattern.
Perhaps a where clause something like:
```
... WHERE Field LIKE 'yahoo[. -+]finance'
```
<http://msdn.microsoft.com/en-us/library/ms179859.aspx>
|
SQL to remove duplicated records by ignoring non-letters and non-digits
|
[
"",
"mysql",
"sql",
"sql-server",
"select",
"replace",
""
] |
I have a column called 'expected\_salary'. By the name, it's supposed to hold a numerical value. But I would also like it to hold values such as "competitive", "commission based", etc., so it became a string type.
Now the dumb problem is, I would like to query it, like, `SELECT all rows where expected_salary >= 3000`. Of course, it won't work with a normal numeric comparison.
Before I convert 'expected\_salary' to an integer type and create an extra column called 'other\_salary', I would like to know if there is a better solution.
|
Sure, create additional columns, probably booleans, which are false by default. `is_commissioned`, `is_competitive`, etc.
Then the proper query is
```
SELECT *
FROM table
WHERE expected_salary >= 3000 AND is_competitive
```
|
A guess:
```
create table salaries (
expected_salary text
);
insert into salaries values ('1000'),('1500,comment'),('3000,comment'),('3500'),
('comment'),('4000'),('4500,comment');
```
You can use the `substring()` function to extract the integer part from the string:
```
select * from (
select substring(expected_salary FROM '[0-9]+')::int sal from salaries
) t where sal >=3000;
```
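The same extract-the-digits idea can be sketched client-side; this Python snippet uses the sample values from the insert above (rows with no digits simply drop out, mirroring the NULL behaviour):

```python
import re

rows = ["1000", "1500,comment", "3000,comment", "3500",
        "comment", "4000", "4500,comment"]

def salary(text):
    """Return the first run of digits as an int, or None if there is none."""
    m = re.search(r"[0-9]+", text)
    return int(m.group()) if m else None

high = [r for r in rows if (s := salary(r)) is not None and s >= 3000]
print(high)   # ['3000,comment', '3500', '4000', '4500,comment']
```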
|
Query a string column
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables, `tblData1` and `tblData2`, and I want to migrate records into them from another database with identity insert, so I am trying to run the commands shown below.
```
SET IDENTITY_INSERT LP1.dbo.tblData1 ON
GO
SET IDENTITY_INSERT LP1.dbo.tblData2 ON
GO
INSERT INTO LP1.DBO.tblData1 (ID,DATA)
SELECT ID,DATA FROM LP.DBO.tblData1
GO
INSERT INTO LP1.DBO.tblData2 (ID,DATA)
SELECT ID,DATA FROM LP.DBO.tblData2
GO
SET IDENTITY_INSERT LP1.dbo.tblData1 OFF
GO
SET IDENTITY_INSERT LP1.dbo.tblData2 OFF
GO
```
But it shows the error below:
> IDENTITY\_INSERT is already ON for table 'Sample\_Training.dbo.tblData1'. Cannot perform SET operation for table 'dbo.tblData2'
Is it possible to have `IDENTITY_INSERT` ON for multiple tables at a time in SQL Server 2008?
|
**At any time, only one table in a session can have the IDENTITY\_INSERT property set to ON.**
So before enabling it for another table, you should turn it off for the one that currently has it, if any.
If only a few tables are involved, you can turn it on and off around each table's operations.
If the table count is huge, you should automate enabling and disabling it around your operations.
|
Did you try changing the order?
```
go
SET IDENTITY_INSERT LP1.dbo.tblData1 ON
INSERT INTO LP1.DBO.tblData1
(ID,DATA)
SELECT ID,DATA
FROM LP.DBO.tblData1
SET IDENTITY_INSERT LP1.dbo.tblData1 OFF
GO
SET IDENTITY_INSERT LP1.dbo.tblData2 ON
INSERT INTO LP1.DBO.tblData2
(ID,DATA)
SELECT ID,DATA
FROM LP.DBO.tblData2
SET IDENTITY_INSERT LP1.dbo.tblData2 OFF
GO
```
|
How to SET IDENTITY_INSERT ON in SQL Server 2008 for multiple tables at once
|
[
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
The Following is the schema
```
+---------+---------+
| Employee Table |
+---------+---------+
| EmpId | Name |
+---------+---------+
| 1 | John |
| 2 | Lisa |
| 3 | Mike |
| | |
+---------+---------+
+---------+-----------------+
| Family Table |
+---------+-----------------+
| EmpId | Relationship |
+---------+-----------------+
| 1 | Father |
| 1 | Mother |
| 1 | Wife |
| 2 | Husband |
| 2 | Child |
+---------+-----------------+
+---------+---------+
| Loan Table |
+---------+--------+
| LoanId | EmpId |
+---------+--------+
| L1 | 1 |
| L2 | 1 |
| L3 | 2 |
| L4 | 2 |
| L5 | 3 |
+---------+--------+
```
* Employee Table and Family Table have a one-to-many relationship
* Employee Table and Loan Table have a one-to-many relationship
I have tried joins, but they produce redundant rows.
Now the desired output will be
```
+---------+---------+--------------+---------+
| EmpId | Name | RelationShip | Loan |
+---------+---------+--------------+---------+
| 1 | John | Father | L1 |
| - | - | Mother | L2 |
| - | - | Wife | - |
| 2 | Lisa | Husband | L3 |
| - | - | Child | L4 |
| 3 | Mike | - | L5 |
| | | | |
+---------+---------+--------------+---------+
```
|
It looks like you are trying to assign the loans "sequentially" to rows in the family table. The approach to solve this is to first get the right rows, and then to get the loans assigned to rows.
The right rows (and first three columns) are:
```
select f.EmpId, e.Name, f.Relationship
from family f join
Employee e
on f.empid = e.empid;
```
Note that this does not put hyphens in the columns for repeated values, it puts in the actual values. Although you can arrange for the hyphens in SQL, it is a bad idea. SQL results are in the form of tables, which are unordered sets with values for each column and each row. When you start putting hyphens in, you are depending on the order.
Now the problem is joining in the loans. This is actually pretty easy, by using `row_number()` to add a `join` key:
```
select e.EmpId, e.Name, f.Relationship, l.LoanId
from Employee e left join
(select f.*, row_number() over (partition by f.EmpId order by (select NULL)) as seqnum
from family f
) f
on f.empid = e.empid left join
(select l.*, row_number() over (partition by l.EmpId order by (select NULL)) as seqnum
from Loan l
) l
on l.EmpId = e.EmpId -- key the loan join on the employee, so EmpIds with no family rows keep their loans
   and l.seqnum = coalesce(f.seqnum, l.seqnum);
```
Note that this does not guarantee the order of assignment of loans for a given employee. Your data does not seem to have enough information to handle a more consistent assignment.
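The pairing idea can be checked with SQLite (3.25+ for window functions) as a stand-in. Here the windows are ordered by `Relationship` / `LoanId` purely to make the output deterministic, and the loan join is keyed on the employee id so that an employee with no family rows (Mike) still keeps his loan; an employee with more loans than family rows would still lose the excess loans under this sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Employee (EmpId INTEGER, Name TEXT);
    CREATE TABLE Family (EmpId INTEGER, Relationship TEXT);
    CREATE TABLE Loan (LoanId TEXT, EmpId INTEGER);
    INSERT INTO Employee VALUES (1,'John'), (2,'Lisa'), (3,'Mike');
    INSERT INTO Family VALUES (1,'Father'), (1,'Mother'), (1,'Wife'),
                              (2,'Husband'), (2,'Child');
    INSERT INTO Loan VALUES ('L1',1), ('L2',1), ('L3',2), ('L4',2), ('L5',3);
""")

rows = con.execute("""
    SELECT e.EmpId, e.Name, f.Relationship, l.LoanId
    FROM Employee e
    LEFT JOIN (SELECT f.*, ROW_NUMBER() OVER (PARTITION BY EmpId
                                              ORDER BY Relationship) AS seqnum
               FROM Family f) f ON f.EmpId = e.EmpId
    LEFT JOIN (SELECT l.*, ROW_NUMBER() OVER (PARTITION BY EmpId
                                              ORDER BY LoanId) AS seqnum
               FROM Loan l) l
           ON l.EmpId = e.EmpId AND l.seqnum = COALESCE(f.seqnum, l.seqnum)
    ORDER BY e.EmpId, f.seqnum
""").fetchall()
for r in rows:
    print(r)
# (1, 'John', 'Father', 'L1') ... (3, 'Mike', None, 'L5')
```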
|
The approach outlined below allows to easily "concatenate" more tables to the result set. It is not limited to two tables.
I'll use table variables to illustrate the solution. In real life these tables would be real tables, of course, not variables, but I'll stick with variables to make this sample script easy to run and try.
```
declare @TEmployee table (EmpId int, Name varchar(50));
declare @TFamily table (EmpId int, Relationship varchar(50));
declare @TLoan table (EmpId int, LoanId varchar(50));
insert into @TEmployee values (1, 'John');
insert into @TEmployee values (2, 'Lisa');
insert into @TEmployee values (3, 'Mike');
insert into @TFamily values (1, 'Father');
insert into @TFamily values (1, 'Mother');
insert into @TFamily values (1, 'Wife');
insert into @TFamily values (2, 'Husband');
insert into @TFamily values (2, 'Child');
insert into @TLoan values (1, 'L1');
insert into @TLoan values (1, 'L2');
insert into @TLoan values (2, 'L3');
insert into @TLoan values (2, 'L4');
insert into @TLoan values (3, 'L5');
```
We'll need a table of numbers.
[SQL, Auxiliary table of numbers](https://stackoverflow.com/questions/10819/sql-auxiliary-table-of-numbers)
<http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html>
<http://dataeducation.com/you-require-a-numbers-table/>
Again, in real life you'll have a proper table of numbers, but for this example I'll use the following:
```
declare @TNumbers table (Number int);
insert into @TNumbers values (1);
insert into @TNumbers values (2);
insert into @TNumbers values (3);
insert into @TNumbers values (4);
insert into @TNumbers values (5);
```
The main idea behind my approach is to first build a helper table containing the correct number of rows for each `EmpId`, and then use this table to get the results efficiently.
We'll start with counting number of relationships and loans for each `EmpId`:
```
WITH
CTE_Rows
AS
(
SELECT Relationships.EmpId, COUNT(*) AS EmpRows
FROM @TFamily AS Relationships
GROUP BY Relationships.EmpId
UNION ALL
SELECT Loans.EmpId, COUNT(*) AS EmpRows
FROM @TLoan AS Loans
GROUP BY Loans.EmpId
)
```
Then we calculate the maximum number of rows for each `EmpId`:
```
,CTE_MaxRows
AS
(
SELECT
CTE_Rows.empid
,MAX(CTE_Rows.EmpRows) AS MaxEmpRows
FROM CTE_Rows
GROUP BY CTE_Rows.empid
)
```
The CTE above has one row for each `EmpId`: `EmpId` itself and a maximum number of relationships or loans for this `EmpId`. Now we need to expand this table and generate the given number of rows for each `EmpId`. Here I'm using the `Numbers` table for it:
```
,CTE_RowNumbers
AS
(
SELECT
CTE_MaxRows.empid
,Numbers.Number AS rn
FROM
CTE_MaxRows
CROSS JOIN @TNumbers AS Numbers
WHERE
Numbers.Number <= CTE_MaxRows.MaxEmpRows
)
```
Then we need to add row numbers to all tables with data, which we'll use for joining later. You can order the row numbers using other columns in your tables. For this example there is not much choice.
```
,CTE_Relationships
AS
(
SELECT
Relationships.EmpId
,ROW_NUMBER() OVER (PARTITION BY Relationships.EmpId ORDER BY Relationships.Relationship) AS rn
,Relationships.Relationship
FROM @TFamily AS Relationships
)
,CTE_Loans
AS
(
SELECT
Loans.EmpId
,ROW_NUMBER() OVER (PARTITION BY Loans.EmpId ORDER BY Loans.LoanId) AS rn
,Loans.LoanId
FROM @TLoan AS Loans
)
```
Now we are ready to join all this together. `CTE_RowNumbers` has exactly the number of rows that we need, so a simple `LEFT JOIN` is enough:
```
,CTE_Data
AS
(
SELECT
CTE_RowNumbers.empid
,CTE_Relationships.Relationship
,CTE_Loans.LoanId
FROM
CTE_RowNumbers
LEFT JOIN CTE_Relationships ON CTE_Relationships.EmpId = CTE_RowNumbers.EmpId AND CTE_Relationships.rn = CTE_RowNumbers.rn
LEFT JOIN CTE_Loans ON CTE_Loans.EmpId = CTE_RowNumbers.EmpId AND CTE_Loans.rn = CTE_RowNumbers.rn
)
```
We are almost done. It is possible that the main `Employee` table has some `EmpIds` with no related data at all (no family rows and no loans). To get these `EmpIds` into the result set I'll left join `CTE_Data` to the main table and replace the `NULLs` with dashes:
```
SELECT
Employees.EmpId
,Employees.Name
,ISNULL(CTE_Data.Relationship, '-') AS Relationship
,ISNULL(CTE_Data.LoanId, '-') AS LoanId
FROM
@TEmployee AS Employees
LEFT JOIN CTE_Data ON CTE_Data.EmpId = Employees.EmpId
ORDER BY Employees.EmpId, Relationship, LoanId;
```
To get the full script just put all code blocks from this post together in the same order as they appear here.
This is the result set:
```
EmpId Name Relationship LoanId
1 John Father L1
1 John Mother L2
1 John Wife -
2 Lisa Child L3
2 Lisa Husband L4
3 Mike - L5
```
|
Concatenate more than two tables horizontally in SQL Server
|
[
"",
"sql",
"sql-server-2008",
"t-sql",
"outer-join",
""
] |
Is there a general rule of thumb or formula as to when it would be useful to use an index?
For example, in the following case, it would be obvious to add an INDEX on the `id` column:
```
SELECT * FROM table WHERE id = '1iuhiasdf89384h'
```
However, the following would probably not useful on `is_qualified` if it has 2 possible values:
```
SELECT * FROM table WHERE is_qualified=1
```
What about a column with 10 possible values [0-9]? 100 values [0-99]? When do you usually want to add an index, and how does this relate to `cardinality`?
|
As a general rule of thumb, the reason the optimizer chooses to use an index in a `where` clause is to reduce the number of data pages being read. So, if your data is spread randomly over the pages, the important questions are how many records are on each page and how many records the filter selects.
Say there are 100 records on each page. Then a *random* selection of one percent of the records will likely touch (almost) all the pages. In this situation, reading the pages and filtering them directly will likely be faster than using an index, because just about all the data pages will be read anyway.
So, for most tables, a query that returns one or a handful of records is going to be better off with an index. Queries that return lots of records may not benefit from an index. A corollary is that for small tables, indexes may never be useful. If the data fits on one page, it might be just as fast to scan the page using the `where` filter as using an index.
That is, if the selectivity of the query is greater than the inverse of the average number of records on a data page, then an index is probably not going to be useful. This comes as close as possible to a "general" rule, but read on.
The *type* of index makes a difference as well. If you have the condition `is_qualified` and only 0.1% of records meet this condition, then the index might prove useful. Or, if 1% are qualified but the records are very large so there are only 10 on a page, then the index is probably useful. Or, if `is_qualified` is the first column in a clustered index, then all the values with `1` are on a handful of pages. With a clustered index, even a 30% selectivity for `is_qualified = 1` is going to mean reading only 30% of the data pages -- which should cut the time for many queries by two-thirds.
Of course, this leaves out the use of indexes for joins and order by -- situations where even 100% selectivity may still benefit from an index. However, your question seemed geared toward filtering in the `where` clause.
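The selectivity point can be seen directly in a SQLite stand-in (table and index names here are made up; the exact `EXPLAIN QUERY PLAN` wording varies slightly between SQLite versions): the planner searches via the index for the selective `id` lookup but falls back to a full scan for the unindexed low-cardinality flag.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (id TEXT, is_qualified INTEGER);
    CREATE INDEX idx_t_id ON t (id);
""")

id_plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = '1iuhiasdf89384h'"
).fetchone()[-1]
flag_plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE is_qualified = 1"
).fetchone()[-1]

print(id_plan)    # e.g. "SEARCH t USING INDEX idx_t_id (id=?)"
print(flag_plan)  # e.g. "SCAN t"
```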
|
I think you need to do some research and reading on using indexes. Even from your own example, you expect an index on the "id" column because you are looking for one specific value. But then it's not important to have an index on IS\_QUALIFIED, because it can have only 2 possible values, whereas the alphanumeric ID can have billions of values.
Indexes are used to quickly narrow down and find records without having to go to the raw data pages to pull qualified records, based on common criteria you expect to query on. Indexes can even span multiple columns, based on the types of common queries you expect to run.
Let's take some of your scenario's columns and assume a table keyed per "id" that also has "is\_qualified" and "othertype" (your 0-9 values) columns, along with some other things, such as a date or a description of the "other type".
If you only had an index on ID, then all the records for a given ID would be grouped together, and that is fine; once you get to them, a quick run through those gets you results.
But now, let's say you are looking for all IDs that have "Is\_Qualified" = 1 AND "othertype" = 3. You can't get them quickly; there is no index correlating one to the other. But if you had a multi-column index on (is\_qualified, othertype, id), then you could jump straight to the records with Is\_Qualified = 1 and othertype = 3, and you would have all the IDs ready to go.
Apply this to a table with a few million records, run queries without indexes, and you will greatly appreciate their purpose, and also learn to design meaningful indexes.
|
When to use an index
|
[
"",
"mysql",
"sql",
""
] |
I've been trying for hours to make a left join query with MySQL.
I have a table called "at\_friends" where I store the relationships between users.
```
id | id_user1 | id_user2 | accepted
1 | 2 | 1 | 1
2 | 1 | 3 | 0
```
It means that user 1 is friend with user 2, and user 1 sent a friend request to user 3.
And here is my table "at\_users"
```
id | name
1 | "John"
2 | "Mike"
3 | "Bob"
```
I've tried this query:
```
SELECT at_users.id, at_users.name
FROM at_users
LEFT JOIN at_friends
ON at_friends.id_user1 = at_users.id
OR at_friends.id_user2 = at_users.id
AND at_friends.accepted = 1
WHERE id_user1 = 1 OR id_user2 = 1
```
"1" is the unique id of the current user (John)
But I get these results
```
id | name
1 | "John"
2 | "Mike"
1 | "John"
```
What I'm trying to get is the list of accepted friends of user no. 1.
By the way, I use Laravel, but I don't know if there is a way to do this without a raw MySQL query.
|
Try this:
```
SELECT u.id, u.name
FROM at_users u
INNER JOIN (SELECT id_user2 AS friendID FROM at_friends WHERE id_user1 = 1 AND accepted = 1
UNION
SELECT id_user1 AS friendID FROM at_friends WHERE id_user2 = 1 AND accepted = 1
) AS A ON u.id = A.friendID
```
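A SQLite stand-in with the question's rows confirms the logic (user 1's only accepted friendship is the row where they appear as `id_user2`, so Mike is the whole result):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE at_users (id INTEGER, name TEXT);
    CREATE TABLE at_friends (id INTEGER, id_user1 INTEGER,
                             id_user2 INTEGER, accepted INTEGER);
    INSERT INTO at_users VALUES (1,'John'), (2,'Mike'), (3,'Bob');
    INSERT INTO at_friends VALUES (1, 2, 1, 1), (2, 1, 3, 0);
""")

rows = con.execute("""
    SELECT u.id, u.name
    FROM at_users u
    JOIN (SELECT id_user2 AS friendID FROM at_friends
          WHERE id_user1 = 1 AND accepted = 1
          UNION
          SELECT id_user1 FROM at_friends
          WHERE id_user2 = 1 AND accepted = 1) a ON u.id = a.friendID
""").fetchall()
print(rows)   # [(2, 'Mike')]  -- Bob's request is still pending
```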
|
An alternative that only requires a single scan of the friends table:
```
SELECT u.id, u.name
FROM at_users u
JOIN at_friends f
ON 1 in (f.id_user1, f.id_user2) and
u.id = case f.id_user1 when 1 then id_user2 else id_user1 end and
f.accepted=1
```
[SQLFiddle here.](http://sqlfiddle.com/#!2/ae1acd/4)
|
Left Join query returns bad results
|
[
"",
"mysql",
"sql",
"select",
"join",
"laravel",
""
] |
Here is my query; however, it raises an error.
The error
> Msg 156, Level 15, State 1, Line 6
> Incorrect syntax near the keyword 'order'.
>
> Msg 156, Level 15, State 1, Line 13
> Incorrect syntax near the keyword 'order'.
>
> Msg 156, Level 15, State 1, Line 20
> Incorrect syntax near the keyword 'order'.
The query
```
(SELECT TOP 20 cl_RooSiteId,
cl_CrawlOrgUrl
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 1
ORDER BY cl_LastCrawlDate ASC)
UNION ALL
(SELECT TOP 200 cl_RooSiteId,
cl_CrawlOrgUrl
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 2
ORDER BY cl_LastCrawlDate ASC)
UNION ALL
(SELECT TOP 50 cl_RooSiteId,
cl_CrawlOrgUrl
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 3
ORDER BY cl_LastCrawlDate ASC)
```
So how should I write it so that it works?
|
Copy each query's results into a temp table, then use `UNION ALL`:
```
SELECT TOP 20 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
INTO #Temp1
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 1
ORDER BY cl_LastCrawlDate ASC
SELECT TOP 200 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
INTO #temp2
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 2
ORDER BY cl_LastCrawlDate ASC
SELECT TOP 50 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
INTO #temp3
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 3
ORDER BY cl_LastCrawlDate ASC
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM #Temp1
UNION ALL
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM #Temp2
UNION ALL
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM #Temp3
Order by cl_LastCrawlDate
```
Or use stacked CTEs:
```
;WITH cte1
AS (SELECT TOP 20 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 1
ORDER BY cl_LastCrawlDate ASC),
cte2
AS (SELECT TOP 200 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 2
ORDER BY cl_LastCrawlDate ASC),
cte3
AS (SELECT TOP 50 cl_RooSiteId,
cl_CrawlOrgUrl,cl_LastCrawlDate
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate)
AND cl_DuplicateUrl_ByCanonical = 0
AND cl_RooSiteId = 3
ORDER BY cl_LastCrawlDate ASC)
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM cte1
UNION ALL
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM cte2
UNION ALL
SELECT cl_RooSiteId,cl_CrawlOrgUrl FROM cte3
ORDER BY cl_LastCrawlDate ASC
```
|
Assuming that you want these in the order you have written them:
```
SELECT cl_RooSiteId, cl_CrawlOrgUrl
FROM ((SELECT TOP 20 cl_RooSiteId, cl_CrawlOrgUrl, cl_LastCrawlDate, 0 as priority
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate) AND
cl_DuplicateUrl_ByCanonical = 0 AND
cl_RooSiteId = 1
) UNION ALL
(SELECT TOP 200 cl_RooSiteId, cl_CrawlOrgUrl, cl_LastCrawlDate, 1 as priority
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate) AND
cl_DuplicateUrl_ByCanonical = 0 AND
cl_RooSiteId = 2
) UNION ALL
(SELECT TOP 50 cl_RooSiteId, cl_CrawlOrgUrl, cl_LastCrawlDate, 2
FROM tblCrawlUrls
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate) AND
cl_DuplicateUrl_ByCanonical = 0 AND
cl_RooSiteId = 3
)
) c
ORDER BY priority, cl_LastCrawlDate ASC
```
Note the inclusion of `priority` and `cl_LastCrawlDate` in the subqueries. I realize that `priority` is redundant because you can use `ORDER BY cl_RooSiteId, cl_LastCrawlDate`.
EDIT:
You can also do this without `union all`:
```
SELECT cl_RooSiteId, cl_CrawlOrgUrl
FROM (SELECT c.*,
ROW_NUMBER() OVER (PARTITION BY cl_RooSiteId ORDER BY cl_LastCrawlDate) as seqnum
FROM tblCrawlUrls c
WHERE Sysutcdatetime() > Dateadd(MINUTE, 50000, cl_LastCrawlDate) AND
cl_DuplicateUrl_ByCanonical = 0
) c
WHERE (cl_RooSiteId = 1 and seqnum <= 20) OR
(cl_RooSiteId = 2 and seqnum <= 200) OR
(cl_RooSiteId = 3 and seqnum <= 50)
ORDER BY cl_RooSiteId, cl_LastCrawlDate;
```
|
How to use union all when using order by clause SQL Server 2014
|
[
"",
"sql",
"sql-server",
"t-sql",
"union",
"sql-server-2014",
""
] |
I spent many hours trying to combine my queries into one. I have 3 database tables. This is the example schema.

I only have the user name from the session. I need to get all tickets from the user's company.
Example:
User A works in company B.
Company B has many workers. The workers create tickets.
After login, user A sees all tickets from their company.
Right now I do this using 2 queries:
- Take user A's company id
- Take the tickets of users who work in that specific company.
|
This should do it:
```
SELECT *
FROM User u1
INNER JOIN User u2 on u2.id_company = u1.id_company
INNER JOIN Ticket t on t.id_author = u2.id_user
WHERE u1.Name = @SessionUser
```
|
Is this what you are looking for?
```
select t.*
from users u join
tickets t
on u.id_user = t.id_author
where u.company_id = (select u2.company_id from users u2 where u2.id_user = 'UserA');
```
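Both answers hinge on the same self-join of the users table on the company id. A SQLite stand-in with made-up table, column, and user names (the question's schema image isn't reproduced here) shows the single-query version: user A and user C share a company, so the session user 'A' sees both authors' tickets at once.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id_user INTEGER, name TEXT, id_company INTEGER);
    CREATE TABLE tickets (id_ticket INTEGER, id_author INTEGER, title TEXT);
    INSERT INTO users VALUES (1,'A',10), (2,'C',10), (3,'X',20);
    INSERT INTO tickets VALUES (1,1,'t1'), (2,2,'t2'), (3,3,'t3');
""")

rows = con.execute("""
    SELECT t.id_ticket, t.title
    FROM users u1
    JOIN users u2 ON u2.id_company = u1.id_company
    JOIN tickets t ON t.id_author = u2.id_user
    WHERE u1.name = ?
    ORDER BY t.id_ticket
""", ('A',)).fetchall()
print(rows)   # [(1, 't1'), (2, 't2')]  -- user X's ticket is excluded
```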
|
Combining 2 sql queries into one
|
[
"",
"mysql",
"sql",
""
] |
If I have a table such as:
```
memberid,time,ip
```
This is an IP history: every IP that has logged in, together with the account id it logged in with.
**Is there any way, with a single query, to check whether any IPs in the table have logged into `> 1` account and, if so, which `memberid`s that IP has logged into?**
I know how to restrict it to unique IP/account pairs using:
```
SELECT DISTINCT memberid,ip FROM iphistory
```
but I'm unsure how to check which memberids share the same IP.
**Edit:** Updated to make it more clear.
|
The question can be phrased as: "members that share the same ip". To determine that, something like:
```
select ip
from T
group by ip
having count(distinct memberid) > 1
```
can be used. If you want the members that are associated with these ip's you can join the original table with that:
```
select distinct x.memberid, y.ip
from T as x
join (
select ip
from T
group by ip
having count(distinct memberid) > 1
) as y
on x.ip = y.ip
```
I've added distinct since a member can log in from one ip several times.
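A SQLite stand-in with made-up rows: ip 10.0.0.1 is used by members 1 and 2, while 10.0.0.2 is used only by member 3 (twice), so only the first ip and its two members come back.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE iphistory (memberid INTEGER, time TEXT, ip TEXT)")
con.executemany("INSERT INTO iphistory (memberid, ip) VALUES (?, ?)",
                [(1, '10.0.0.1'), (2, '10.0.0.1'),
                 (3, '10.0.0.2'), (3, '10.0.0.2')])

rows = con.execute("""
    SELECT DISTINCT x.memberid, y.ip
    FROM iphistory x
    JOIN (SELECT ip FROM iphistory
          GROUP BY ip
          HAVING COUNT(DISTINCT memberid) > 1) y ON x.ip = y.ip
    ORDER BY x.memberid
""").fetchall()
print(rows)   # [(1, '10.0.0.1'), (2, '10.0.0.1')]
```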
|
Say you have an IP address "192.0.100.1" for member "myuser", and you want to check whether other members have logged into the system using the same IP. Then you could use a WHERE clause as below:
```
SELECT DISTINCT memberid
FROM iphistory
WHERE ip = "192.0.100.1"
AND memberid != 'myuser'
```
|
Get accounts that have logged in with the same IP within an IP logging table
|
[
"",
"mysql",
"sql",
""
] |
I need to update multiple rows in the table on the basis of Ids. In the stored procedure, I'm creating a varchar variable that holds the Ids list.
Now, in the table I have entries with Ids 1 and 2. The varchar variable has the value `1,2`, so I'm expecting no rows to be updated by the following query.
```
UPDATE mytbl
SET flag = 1
WHERE Id IN (IdList); -- Here IdList has value '1,2'
```
But here the row with Id `2` is getting updated. A SELECT query also returns the same row.
I tried concatenating `IdList` as `"'1','2'"`; then it returns both rows (with Ids 1 and 2).
The data type of `Id` is `int`. Is there a proper way to maintain an integer list?
|
Try this:
```
UPDATE mytbl
SET flag = 1
WHERE @IdList LIKE CONCAT('%,',Id,',%');
```
Note that your varchar list `@IdList` must start and end with a comma (e.g. `,1,2,20,30,`).
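A SQLite stand-in of the same trick (SQLite's `||` replaces MySQL's `CONCAT`), with made-up ids chosen so that the comma-wrapping matters: without the commas, id 2 would wrongly match the `2` inside `20`.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytbl (Id INTEGER, flag INTEGER DEFAULT 0)")
con.executemany("INSERT INTO mytbl (Id, flag) VALUES (?, 0)",
                [(1,), (2,), (20,)])

id_list = ',1,20,'   # comma-wrapped, as required
con.execute("UPDATE mytbl SET flag = 1 WHERE ? LIKE '%,' || Id || ',%'",
            (id_list,))

result = con.execute("SELECT Id, flag FROM mytbl ORDER BY Id").fetchall()
print(result)   # [(1, 1), (2, 0), (20, 1)] -- id 2 is untouched
```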
|
just use the ids as integers like so:
```
UPDATE mytbl
SET flag = 1
WHERE id in (1,2,3,....)
```
|
Updating multiple rows using list of Ids
|
[
"",
"mysql",
"sql",
""
] |
I have two tables from which I need to get data in the same `SELECT` output. The thing is that I need to limit the amount of results.
Say I have an `ID` column that is unique in `table1`, but in `table2` it has many rows with that `ID`.
Now I just want to list how many different `ID`s I have in `table1` and some other information stored in `table2`.
How can I get the desired output I show in the end?
To make my idea clear I used a "messenger" database for an example.
Tables
T1
```
Id_thread Date
1 13Dic
2 12Dic
```
T2
```
Id_thread Message Name
1 Hi Someone
1 Hi to you Someone
2 Help me? Someother
2 Yes! Someother
```
Desired output
```
T1.Id_thread T2.Name T1.Date
1 Someone 13Dic
2 Someother 12Dic
```
|
Use a `JOIN` and `GROUP BY`:
```
SELECT t1.Id_thread, t2.Name, t1.Date
FROM t1
JOIN t2 ON t1.Id_thread = t2.Id_thread
GROUP BY t1.Id_thread
```
Note that if `Name` is the same for all rows in `t2` that have the same `Id_thread`, that column probably should be in `t1`. If you fix that, you don't need the `JOIN`.
|
I'd join and use `distinct`:
```
SELECT DISTINCT t1.id_thread, t2.name, t1.date
FROM t1
JOIN t2 ON t1.id_thread = t2.id_thread
```
|
SQL SELECT from multiple tables or JOIN
|
[
"",
"mysql",
"sql",
"select",
"join",
"syntax",
""
] |
I have following table
```
Key ID Value
1 1 C
2 1 C
3 1 I
4 2 C
5 2 C
6 2 C
7 1 C
```
If the value of the Value column is I, then I want to update the previous records of the same `ID` to `I`.
For example, `ID` 1 has a record with value `I` (Key 3), so I want to update the first 2 records of `ID` 1 to `I`.
But the value of ID 1 with Key 7 should not change.
I can do a self join and update the previous records where the Key value is less than the current Key, etc.
But the table has a large number of records, so it takes a long time to scan the table for each Id value.
Could I use the LAG function? But the offset would span the entire table.
So I'm not sure which is the best solution, or whether there is any option other than a self join and LAG.
---
|
Try this (or the alternate below the Results):
```
SET NOCOUNT ON;
DECLARE @Data TABLE
(
[Key] INT NOT NULL,
[ID] INT NOT NULL,
[Value] VARCHAR(50) NOT NULL
);
INSERT INTO @Data VALUES (1, 1, 'C');
INSERT INTO @Data VALUES (2, 1, 'C');
INSERT INTO @Data VALUES (3, 1, 'I');
INSERT INTO @Data VALUES (4, 2, 'C');
INSERT INTO @Data VALUES (5, 2, 'C');
INSERT INTO @Data VALUES (6, 2, 'C');
INSERT INTO @Data VALUES (7, 1, 'C');
;WITH cte AS
(
SELECT DISTINCT tmp.ID,
MAX(tmp.[Key]) OVER(PARTITION BY tmp.[ID]) AS [MaxKeyOfI]
FROM @Data tmp
WHERE tmp.[Value] = 'I'
)
UPDATE tbl
SET tbl.[Value] = 'I'
--SELECT tbl.*
FROM @Data tbl
INNER JOIN cte
ON cte.[ID] = tbl.[ID]
AND cte.MaxKeyOfI > tbl.[Key]
WHERE tbl.Value <> 'I';
SELECT *
FROM @Data;
```
Returns:
```
Key ID Value
1 1 I
2 1 I
3 1 I
4 2 C
5 2 C
6 2 C
7 1 C
```
And if you change Key 3 to C and Key 7 to I, it will change Keys 1 - 3, leaving Keys 4 - 6 alone.
**OR**, you could try the following instead, which finds the `[Key]` values and uses them in the UPDATE so that the UPDATE itself is based on the PK field only:
```
;WITH maxkeys AS
(
SELECT DISTINCT tmp.ID,
MAX(tmp.[Key]) OVER(PARTITION BY tmp.[ID]) AS [MaxKeyOfI]
FROM @Data tmp
WHERE tmp.[Value] = 'I'
), others AS
(
SELECT tmp2.[Key],
tmp2.[ID]
FROM @Data tmp2
INNER JOIN maxkeys
ON maxkeys.[ID] = tmp2.[ID]
AND maxkeys.MaxKeyOfI > tmp2.[Key]
WHERE tmp2.Value <> 'I'
)
UPDATE tbl
SET tbl.[Value] = 'I'
--SELECT tbl.*
FROM @Data tbl
INNER JOIN others tmp
ON tmp.[Key] = tbl.[Key];
```
|
This should work:
```
WITH CTE AS
(
SELECT *,
MAX([Key]) OVER(PARTITION BY ID) MaxKey
FROM dbo.YourTable
)
UPDATE A
SET Value = 'I'
FROM CTE A
WHERE EXISTS(SELECT 1 FROM CTE
WHERE [Key] = A.MaxKey
AND Value = 'I');
```
[**Here is a sqlfiddle**](http://sqlfiddle.com/#!3/f90897/1) with a demo of it.
And the results are:
```
╔═════╦════╦═══════╗
║ Key ║ ID ║ Value ║
╠═════╬════╬═══════╣
║ 1 ║ 1 ║ I ║
║ 2 ║ 1 ║ I ║
║ 3 ║ 1 ║ I ║
║ 4 ║ 2 ║ C ║
║ 5 ║ 2 ║ C ║
║ 6 ║ 2 ║ C ║
╚═════╩════╩═══════╝
```
|
SQL Scanning table to update the value
|
[
"",
"sql",
"sql-server",
""
] |
I want to get the distinct `"SkillCategory"` and sort by `id`
here is the script below :
```
SELECT distinct EC.SKILL_CATEGORY from ETOPS_CHECK_FORM E
INNER JOIN EC_FORM_SKILL EC
ON EC.EC_FORM_ID=E.ID group by EC.SKILL_CATEGORY ORDER BY EC.ID
```
What am I doing wrong?
|
The problem is that you need to decide *which* id to sort by. Remove the `distinct` and then use an aggregation function in the `order by`:
```
SELECT EC.SKILL_CATEGORY
FROM ETOPS_CHECK_FORM E INNER JOIN
EC_FORM_SKILL EC
ON EC.EC_FORM_ID = E.ID
GROUP BY EC.SKILL_CATEGORY
ORDER BY MIN(EC.ID);
```
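If the `ORDER BY MIN(EC.ID)` part looks unusual, here is a quick sanity check using Python's built-in sqlite3. The join to `ETOPS_CHECK_FORM` is omitted for brevity, and the sample categories and ids are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ec_form_skill (id INTEGER, skill_category TEXT);
INSERT INTO ec_form_skill VALUES
  (3, 'AVIONICS'), (1, 'ENGINE'), (2, 'AVIONICS'), (4, 'ENGINE');
""")
# one row per category, ordered by the smallest id in each group
rows = con.execute("""
    SELECT skill_category
    FROM ec_form_skill
    GROUP BY skill_category
    ORDER BY MIN(id)
""").fetchall()
print(rows)  # [('ENGINE',), ('AVIONICS',)] -- ENGINE first because its MIN(id) is 1
```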
|
If you only want distinct records with no aggregate functions, use the **DISTINCT** keyword instead of **GROUP BY**. Note that with `DISTINCT`, Oracle only lets you `ORDER BY` expressions that appear in the select list (ordering by `EC.ID` would raise ORA-01791 again):
```
SELECT DISTINCT EC.SKILL_CATEGORY
FROM ETOPS_CHECK_FORM E
INNER JOIN EC_FORM_SKILL EC ON EC.EC_FORM_ID = E.ID
ORDER BY EC.SKILL_CATEGORY;
```
|
ORA-01791-Distinct and order by
|
[
"",
"sql",
"oracle",
"group-by",
"sql-order-by",
"distinct",
""
] |
Imagine the following 3 tables:
train: key, name, departure\_date
group\_train: train, group
group: key, name
The group\_train.train column has a foreign key relation with train.key, and group\_train.group with group.key.
A group can contain multiple trains and a train might be present in multiple groups.
I would like to retrieve the latest train to departure by group. Based on the train.departure\_date
I already tried multiple types of joins, subqueries and group by clauses, all of them without success. It seems a straightforward query, but for some reason I got stuck!
Thank you for your help!
|
Analytic function approach:
```
SELECT train.*
FROM
(
SELECT T.KEY,
T.NAME AS TRAIN_NAME,
T.departure_date,
G.name AS GROUP_NAME,
           RANK() OVER (PARTITION BY GT.group ORDER BY T.departure_date DESC) AS MY_RANK
FROM GROUP_TRAIN GT
INNER JOIN TRAIN T ON (GT.TRAIN = T.KEY)
    INNER JOIN GROUP G ON (GT.group = G.key)
) train
WHERE MY_RANK = 1
```
|
Try this:
```
SELECT g.key, g.name, t.key, t.name, t.departure_date
FROM group g
INNER JOIN group_train gt ON g.key = gt.group
INNER JOIN train t ON gt.train = t.key
INNER JOIN (SELECT gt.group, MAX(t.departure_date) AS departure_date
FROM group_train gt
            INNER JOIN train t ON gt.train = t.key
GROUP BY gt.group
           ) A ON A.group = gt.group AND t.departure_date = A.departure_date;
```
|
unable to retrieve the latest row based on a date
|
[
"",
"sql",
"database",
"oracle",
"group-by",
"greatest-n-per-group",
""
] |
How would I do the following in SQL:
"select all employees, with a count of each employee's tasks per mark value":
```
EmpId EmpName
------ --------
1 tom
2 jerry
3 jack
taskId EmpID mark
------ ----- ------
1 1 5
2 3 0
3 1 10
4 2 5
5 2 10
6 3 5
7 3 5
```
**Result:**
```
EmpName 0 5 10 sum
------ ----- ------ ------ ----
tom 0 1 1 2
jerry 0 1 1 2
jack 1 2 0 3
```
|
Try this:
```
SELECT e.EmpId, e.EmpName,
       SUM(CASE WHEN t.mark = 0 THEN 1 ELSE 0 END) AS Mark0,
       SUM(CASE WHEN t.mark = 5 THEN 1 ELSE 0 END) AS Mark5,
       SUM(CASE WHEN t.mark = 10 THEN 1 ELSE 0 END) AS Mark10,
COUNT(1) AS TotalMark
FROM employee e
INNER JOIN task t ON e.EmpId = t.EmpId
GROUP BY e.EmpId, e.EmpName;
```
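If you want to verify the conditional-aggregation counts, here is a self-contained check with Python's built-in sqlite3, using exactly the sample rows from the question (table names `employee`/`task` as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (EmpId INTEGER, EmpName TEXT);
CREATE TABLE task (taskId INTEGER, EmpId INTEGER, mark INTEGER);
INSERT INTO employee VALUES (1,'tom'), (2,'jerry'), (3,'jack');
INSERT INTO task VALUES (1,1,5), (2,3,0), (3,1,10), (4,2,5), (5,2,10), (6,3,5), (7,3,5);
""")
rows = con.execute("""
    SELECT e.EmpName,
           SUM(CASE WHEN t.mark = 0  THEN 1 ELSE 0 END) AS mark_0,
           SUM(CASE WHEN t.mark = 5  THEN 1 ELSE 0 END) AS mark_5,
           SUM(CASE WHEN t.mark = 10 THEN 1 ELSE 0 END) AS mark_10,
           COUNT(*) AS total
    FROM employee e
    JOIN task t ON e.EmpId = t.EmpId
    GROUP BY e.EmpId, e.EmpName
    ORDER BY e.EmpId
""").fetchall()
print(rows)  # [('tom', 0, 1, 1, 2), ('jerry', 0, 1, 1, 2), ('jack', 1, 2, 0, 3)]
```

The counts match the desired result table in the question.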
|
```
select
e.empname,
(select count(*) from tasks t where e.empid=t.empid and mark=0) as mark0,
(select count(*) from tasks t where e.empid=t.empid and mark=5) as mark5,
(select count(*) from tasks t where e.empid=t.empid and mark=10) as mark10,
(select count(*) from tasks t where e.empid=t.empid) as marksum
from
employees e
```
|
SELECT employees with count of finished tasks
|
[
"",
"sql",
"select",
"group-by",
"sum",
"pivot",
""
] |
I have a table in oracle database where a column ONSITE\_MANAGER has data like
```
ONSITE_MANAGER
--------------
jhon
'bikram'
'hari'singh
"'vijay'kumar"
kumar
```
To select particular data with matching records, I am using the following select query:
```
SELECT *
from (
Select M.*, Rownum R
From IMDB1_FINANCE_ACTUAL M
)
where r > 0
and r <= 10
AND ONSITE_MANAGER=''bikram''
```
this is giving `ORA-00933: SQL command not properly ended`
but when I use
`ONSITE_MANAGER='jhon' OR ONSITE_MANAGER='kumar'` it fetches data without any problem.
Could someone let me know the query to fetch data which is in single quotes, double quotes, and mixed values as shown in the above table?
|
Use the [new quoting method](http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/fundamentals.htm#sthref336) from ora10g onwards
> You choose a character that is not present in the string, and then do
> not need to escape other single quotation marks inside the literal
So your query would be
```
SELECT * from (Select M.*, Rownum R From IMDB1_FINANCE_ACTUAL M) where r > 0 and r <= 10 AND ONSITE_MANAGER=q'!'bikram'!'
```
|
I think you have a syntax error in the query. Also note that the stored value itself contains single quotes, so each embedded quote must be escaped by doubling it:
Try the following:
```
SELECT * from (Select M.*, Rownum R From IMDB1_FINANCE_ACTUAL M) where r > 0 and r <= 10 AND ONSITE_MANAGER='''bikram'''
```
|
select query with where condition in oracle 10g giving ORA-00933: SQL command not properly ended
|
[
"",
"sql",
"oracle",
"jdbc",
""
] |
I've got the tables
```
person
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
```
and
```
chatroom
+-----------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| col1 | int(11) | YES | | NULL | |
| col2 | int(11) | YES | | NULL | |
+-----------+---------+------+-----+---------+----------------+
```
and I want to execute the following statement (for each `person` add a row in `chatroom` with the respective `person.id` and `DEFAULT` and `NULL` values for the `id` and `col2` columns)
```
INSERT INTO chatroom (DEFAULT, col1, NULL) SELECT DEFAULT, id AS col1, NULL FROM person;
```
but it does not work.
Can someone correct the query?
|
Your `id` field is `auto_increment`, so you can just leave it out of the query and it will automatically set the row to the next value. Also, since `col2` has a default value of `NULL`, it too can be left out of the query. It will be set to its default automatically.
```
INSERT INTO chatroom (col1) SELECT id FROM person
```
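A quick illustration of `INSERT INTO … SELECT` relying on auto-generated and default values, using Python's built-in sqlite3, where `INTEGER PRIMARY KEY` plays the role of MySQL's auto_increment (the names 'alice'/'bob' are just sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person   (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE chatroom (id INTEGER PRIMARY KEY, col1 INTEGER, col2 INTEGER);
INSERT INTO person (name) VALUES ('alice'), ('bob');
-- id is auto-assigned, col2 falls back to its default (NULL)
INSERT INTO chatroom (col1) SELECT id FROM person;
""")
rows = con.execute("SELECT id, col1, col2 FROM chatroom ORDER BY id").fetchall()
print(rows)  # [(1, 1, None), (2, 2, None)]
```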
|
Because your `chatroom` table has the `id` column as auto-increment, there is no need to pass a value for this column; it gets an automatic value when a new row is inserted. So no need for `DEFAULT`.
Second, `col2` accepts `NULL` per your table definition, so there is also no need to pass a value for this column unless you have some value to put in it.
You only have to specify the columns for which you have values.
So, if you want to insert a `NULL` value into the `col2` column, use this:
```
INSERT INTO chatroom (col1)
SELECT id FROM person;
```
If you want to insert some value into the `col2` column as well, select it too (here `42` is just a placeholder literal):
```
INSERT INTO chatroom (col1, col2)
SELECT id, 42 FROM person;
```
|
INSERT INTO with SELECT and some DEFAULT values
|
[
"",
"mysql",
"sql",
""
] |
This query runs on an `invoices` table to help me decide who I need to pay
Here's the base table:
The **users** table
```
+---------+--------+
| user_id | name |
+---------+--------+
| 1 | Peter |
| 2 | Lois |
| 3 | Stewie |
+---------+--------+
```
The **invoices** table:
```
+------------+---------+----------+--------+---------------+---------+
| invoice_id | user_id | currency | amount | description | is_paid |
+------------+---------+----------+--------+---------------+---------+
| 1 | 1 | usd | 140 | Cow hoof | 0 |
| 2 | 1 | usd | 45 | Cow tail | 0 |
| 3 | 1 | gbp | 1 | Cow nostril | 0 |
| 4 | 2 | gbp | 1500 | Cow nose hair | 0 |
| 5 | 2 | cad | 1 | eyelash | 1 |
+------------+---------+----------+--------+---------------+---------+
```
I want a resulting table that looks like this:
```
+---------+-------+----------+-------------+
| user_id | name | currency | SUM(amount) |
+---------+-------+----------+-------------+
| 1 | Peter | usd | 185 |
| 2 | Lois | gbp | 1500 |
+---------+-------+----------+-------------+
```
The conditions are:
* Only consider invoices that have not been paid, so where `is_paid = 0`
* Group them by `user_id`, by `currency`
* If the `SUM(amount) < $100` for the user\_id, currency pair then don't bother showing the result, since we don't pay invoices that are less than $100 (or equivalent, based on a fixed exchange rate).
Here's what I've got so far (not working -- which I guess is because I'm filtering by a GROUP'ed parameter):
```
SELECT
users.user_id, users.name,
invoices.currency, SUM(invoices.amount)
FROM
mydb.users,
mydb.invoices
WHERE
users.user_id = invoices.user_id AND
invoices.is_paid != true AND
SUM(invoices.amount) >=
CASE
WHEN invoices.currency = 'usd' THEN 100
WHEN invoices.currency = 'gbp' THEN 155
WHEN invoices.currency = 'cad' THEN 117
END
GROUP BY
invoices.currency, users.user_id
ORDER BY
users.name, invoices.currency;
```
Help?
|
Use a **HAVING** clause instead of putting the **SUM** in the **WHERE** condition.
Try this:
```
SELECT u.user_id, u.name, i.currency, SUM(i.amount) invoiceAmount
FROM mydb.users u
INNER JOIN mydb.invoices i ON u.user_id = i.user_id
WHERE i.is_paid = 0
GROUP BY u.user_id, i.currency
HAVING SUM(i.amount) >= (CASE i.currency WHEN 'usd' THEN 100 WHEN 'gbp' THEN 155 WHEN 'cad' THEN 117 END)
ORDER BY u.name, i.currency;
```
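Using the sample data from the question, the `HAVING` version can be checked end-to-end with Python's built-in sqlite3 (SQLite here stands in for MySQL; the query logic is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (user_id INTEGER, name TEXT);
CREATE TABLE invoices (invoice_id INTEGER, user_id INTEGER, currency TEXT,
                       amount INTEGER, description TEXT, is_paid INTEGER);
INSERT INTO users VALUES (1,'Peter'), (2,'Lois'), (3,'Stewie');
INSERT INTO invoices VALUES
  (1, 1, 'usd', 140,  'Cow hoof',      0),
  (2, 1, 'usd', 45,   'Cow tail',      0),
  (3, 1, 'gbp', 1,    'Cow nostril',   0),
  (4, 2, 'gbp', 1500, 'Cow nose hair', 0),
  (5, 2, 'cad', 1,    'eyelash',       1);
""")
rows = con.execute("""
    SELECT u.user_id, u.name, i.currency, SUM(i.amount)
    FROM users u
    JOIN invoices i ON u.user_id = i.user_id
    WHERE i.is_paid = 0
    GROUP BY u.user_id, u.name, i.currency
    HAVING SUM(i.amount) >= CASE i.currency
                              WHEN 'usd' THEN 100
                              WHEN 'gbp' THEN 155
                              WHEN 'cad' THEN 117
                            END
    ORDER BY u.user_id, i.currency
""").fetchall()
print(rows)  # [(1, 'Peter', 'usd', 185), (2, 'Lois', 'gbp', 1500)]
```

This matches the desired result table in the question: Peter's gbp total (1) and Lois's paid cad invoice are filtered out.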
|
You can't use `SUM` in a `WHERE`. Use `HAVING` instead.
|
Mind numbing SQL madness
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"having",
""
] |
I want to be able to:
* extract specific words from *column1* in *Table1* - but only the words that are matched from *Table2* from a column called *word*,
* perform a(n individual) count of the number of words that have been found, and
* put this information into a permanent table with a format, that looks like:
**Final**
```
Word | Count
--------+------
Test | 7
Blue | 5
Have | 2
```
Currently I have tried this:
```
INSERT INTO final (word, count)
SELECT
extext
, SUM(dbo.WordRepeatedNumTimes(extext, 'test')) AS Count
FROM [dbo].[TestSite_Info], [dbo].[table_words]
WHERE [dbo].[TestSite_Info].ExText = [dbo].[table_words].Words
GROUP BY ExText;
```
The function dbo.WordRepeatedNumTimes is:
```
ALTER function [dbo].[WordRepeatedNumTimes]
(@SourceString varchar(8000),@TargetWord varchar(8000))
RETURNS int
AS
BEGIN
DECLARE @NumTimesRepeated int
,@CurrentStringPosition int
,@LengthOfString int
,@PatternStartsAtPosition int
,@LengthOfTargetWord int
,@NewSourceString varchar(8000)
SET @LengthOfTargetWord = len(@TargetWord)
SET @LengthOfString = len(@SourceString)
SET @NumTimesRepeated = 0
SET @CurrentStringPosition = 0
SET @PatternStartsAtPosition = 0
SET @NewSourceString = @SourceString
WHILE len(@NewSourceString) >= @LengthOfTargetWord
BEGIN
SET @PatternStartsAtPosition = CHARINDEX (@TargetWord,@NewSourceString)
IF @PatternStartsAtPosition <> 0
BEGIN
SET @NumTimesRepeated = @NumTimesRepeated + 1
SET @CurrentStringPosition = @CurrentStringPosition + @PatternStartsAtPosition +
@LengthOfTargetWord
SET @NewSourceString = substring(@NewSourceString, @PatternStartsAtPosition +
@LengthOfTargetWord, @LengthOfString)
END
ELSE
BEGIN
SET @NewSourceString = ''
END
END
RETURN @NumTimesRepeated
END
```
---
When I run the above `INSERT` statement, no record is inserted.
In the table *TestSite\_Info* is a column called *Extext*. Within this column, there is random text - one of the words being 'test'.
In the other table called *Table\_Words*, I have a column called *Words* and one of the words in there is 'Test'. So in theory, as the word is a match, I would pick it up, put it into the table *Final*, and then next to the word (in another column) the count of how many times the word has been found within *TestSite\_Info.Extext*.
**Table\_Words**
```
id|word
--+----
1 |Test
2 |Onsite
3 |Here
4 |As
```
**TestSite\_Info**
```
ExText
-------------------------------------------------
This is a test, onsite test , test test i am here
```
The expected *Final* table has been given at the top.
-- Update
Now that I have run Abecee's block of code, it actually works in terms of bringing back a count column and the id relating to the word.
Here are the results :
```
id|total
--+----
169 |3
170 |0
171 |5
172 |7
173 |1
174 |3
```
Taken from the following text which it is extracting from :
> Test test and I went to and this was a test I'm writing rubbish hello
> but I don't care about care and care seems care to be the word that you will see appear
> four times as well as word word word word word, but a .!
> who knows whats going on here.
So as you can see, the count for ID 172 appears as 7 (as a reference, please see below for what the ID numbers relate to in terms of words), which is incorrect: it should be 6 (it has added +1 for some reason). The same goes for ID 171, which is the word care: it appears 4 times but shows up as 5 in the count. Any ideas why this would be?
Also, what I was really after was a way, as you have quite kindly done, of the table showing the ID and count BUT also showing the word it relates to in the final table, so I don't have to link back through the ID table to see what the actual word is.
```
Word|id
--+----
as |174
here |173
word |172
care |171
hello |170
test |169
```
|
You could work along the updated
```
WITH
Detail AS (
SELECT
W.id
, W.word
, T.extext
, (LEN(REPLACE(T.extext, ' ', ' ')) + 2
- LEN(REPLACE(' '
+ UPPER(REPLACE(REPLACE(REPLACE(REPLACE(T.extext, ' ', ' '), ':', ' '), '.', ' '), ',', ' '))
+ ' ', ' ' + UPPER(W.word) + ' ', '')) - 1
) / (LEN(W.word) + 2) count
FROM Table_Words W
JOIN TestSite_Info T
ON CHARINDEX(UPPER(W.word), UPPER(T.extext)) > 0
)
INSERT INTO Result
SELECT
id
, SUM(count) total
FROM Detail
GROUP BY id
;
```
(Had forgotten to count in the blanks added to the front and the end, missed a sign change, and got mixed up as for the length of the word(s) surrounded by blanks. Sorry about that. Thanks for testing it more thoroughly than I did originally!)
Tested on SQL Server 2008: [Updated SQL Fiddle](http://sqlfiddle.com/#!3/f2625/1) and 2012: [Updated SQL Fiddle](http://sqlfiddle.com/#!6/f2625/1).
And [with your test case](http://sqlfiddle.com/#!6/544fa6/4) as well.
It:
* is pure SQL (no UDF required),
* has room for some tuning:
+ Store words all lower / all upper case, unless case matters (Which would require to adjust the suggested solution.)
+ Store strings to check with all punctuation marks removed.
Please comment if and as further detail is required.
|
From what I could understand, this might do the job. However, things would be clearer if you posted the schema:
```
create table final(
word varchar(100),
count integer
);
insert into final (word, count)
select column1, count(*)
from table1, table2
where table1.column1 = table2.words
group by column1;
```
|
How to count words in specific column against matching words in another table
|
[
"",
"sql",
"sql-server",
"t-sql",
"count",
""
] |
As a follow up to [In SQL / MySQL, what is the difference between "ON" and "WHERE" in a join statement?](https://stackoverflow.com/questions/2722795/in-sql-mysql-what-is-the-difference-between-on-and-where-in-a-join-statem) and [SQL join: where clause vs. on clause](https://stackoverflow.com/questions/354070/sql-join-where-clause-vs-on-clause) - it **does** matter if a condition is placed in the on-clause vs. the where-clause in an outer join.
However, does it matter *which* on-clause the condition is placed in when there are multiple outer joins?
For example, could these produce different results?
```
select * from t1 left join t2 on t1.fid=t2.id and t2.col=val
left join t3 on t2.fid=t3.id;
```
vs:
```
select * from t1 left join t2 on t1.fid=t2.id
left join t3 on t2.fid=t3.id and t2.col=val;
```
|
Absolutely, they are different.
The first query will only include `t2` rows that satisfy `t2.col=val`.
The second query will include all joined `t2` rows, and will only list `t3` rows when `t2.col=val`.
|
The queries are not equivalent. It is easy to construct a counter example:
```
create table t1 (id int not null, val int not null);
create table t2 (id int not null, val int not null);
create table t3 (id int not null, val int not null);
insert into t1 (id, val) values (1,1);
insert into t2 (id, val) values (1,1);
insert into t3 (id, val) values (1,1);
select * from t1
left join t2
on t1.id = t2.id
and t2.val = 2
left join t3
on t2.id = t3.id;
+----+-----+------+------+------+------+
| id | val | id | val | id | val |
+----+-----+------+------+------+------+
| 1 | 1 | NULL | NULL | NULL | NULL |
+----+-----+------+------+------+------+
select * from t1
left join t2
on t1.id = t2.id
left join t3
on t2.id = t3.id
and t2.val = 2;
+----+-----+------+------+------+------+
| id | val | id | val | id | val |
+----+-----+------+------+------+------+
| 1 | 1 | 1 | 1 | NULL | NULL |
+----+-----+------+------+------+------+
```
|
MySQL placement of conditions in on-clauses of multiple outer joins
|
[
"",
"mysql",
"sql",
"join",
"left-join",
"on-clause",
""
] |
Is there any way to figure out when a SQL query will **end** after it starts? In other words, is there any way to figure out how much time a query will take before query execution? (*rough* estimate is fine!)
```
ALTER IGNORE TABLE mytbl ADD UNIQUE (c);
```
I ran this query on an InnoDB table that has 35 million records; `c` is a varchar(255).
This command has been running for 10 hours and is still running. I don't want to cancel it if I'm close to the end.
I appreciate any help. Thank you!
|
Excerpt from the [ServerFault](http://www.serverfault.com/questions/346838/estimating-mysql-query-runtime) solution:
The solution was to measure how fast the query was scanning rows in the table scan of the fact table. This is shown by the Handler\_read\_rnd\_next status variable. Here’s an easy way to watch it (innotop is another handy way):
```
mysqladmin extended -r -i 10 | grep Handler_read_rnd_next
-- ignore the first line of output...
| Handler_read_rnd_next | 429224 |
```
So the server was reading roughly 43K rows per second, and there were 150 million rows in the table. A little math later, and you get 3488 seconds to completion, or a little less than an hour. And indeed the query completed in about 55 minutes.
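The arithmetic in that estimate can be checked directly (the numbers below are the ones from the quoted ServerFault answer):

```python
rows_total = 150_000_000   # rows in the fact table
rows_per_sec = 43_000      # approximate Handler_read_rnd_next rate

# estimated time to completion = remaining rows / scan rate
eta_seconds = rows_total / rows_per_sec
print(round(eta_seconds))       # 3488 seconds
print(round(eta_seconds / 60))  # 58 -- a little less than an hour
```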
|
The big problem here is that the most important factor (by far) in determining how long a query will take has *nothing* to do with the query itself. Rather, the biggest factor in determining the answer is the load placed on the database while the query is running, especially including load from other queries. For complex queries on active systems, this load may even change significantly over the course of the query.
What you can do is get an execution plan for your query, using the `EXPLAIN` keyword. You can use this to get a relative idea, all else being equal, of how much a particular query might cost. Even here, though, MySQL doesn't make it easy, in that it won't just give you a nice plan cost or execution time number you can use, like SQL Server does. You need to infer those numbers based on rows and disk reads.
Again, though, even if you generate an estimated cost and then run the query, you could get results in greatly longer or shorter time than expected because the load changes on the server after the estimate was created and as the query runs.
---
In this case, after 10 hours, it sounds like you might have a blocking/locking issue. If you have another active (possibly long-running or frequently recurring) query that uses this table, try killing it and there's a good chance your query will finish naturally a few minutes later.
|
How to figure out when a SQL query will end?
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have two tables as below:
`ObjName` and `RptName` is a composite key.
**Table 1:**
```
ObjName | RptName | FileName | Success
----------------------------------------
obj1 | rept1 | file1.csv| NULL
```
Table 2:
```
FileName | Success
-------------------
file1.csv | 1
file1.csv | 0
file1.csv | 0
```
Table 2 can have multiple entries, like above.
The tables have to be joined on `FileName`.
I want to update the TABLE 1's `Success` column depending on below condition:
If any of the entry in resultset (got after joining tables) has the `Success` value as "1" then the `SUCCESS` column of table 1 should be updated as "1". Otherwise it should be set to "0".
Please help.
|
Try this:
```
UPDATE a
SET a.Success = COALESCE(b.Success, 0)
FROM table1 a
LEFT OUTER JOIN table2 b ON a.FileName = b.FileName AND b.Success = 1;
```
|
```
update t1
set success =
case
when exists (select * from t2 where FileName = t1.FileName and Success = 1) then 1
else 0
end
```
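The `CASE WHEN EXISTS` pattern above can be verified with Python's built-in sqlite3 (SQLite stands in for SQL Server here; the sample rows are from the question, plus an extra `file2.csv` row to exercise the 0 branch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (FileName TEXT, Success INTEGER);
CREATE TABLE t2 (FileName TEXT, Success INTEGER);
INSERT INTO t1 VALUES ('file1.csv', NULL), ('file2.csv', NULL);
INSERT INTO t2 VALUES ('file1.csv', 1), ('file1.csv', 0), ('file2.csv', 0);
-- set Success to 1 when any joined row has Success = 1, else 0
UPDATE t1
SET Success = CASE
    WHEN EXISTS (SELECT 1 FROM t2
                 WHERE t2.FileName = t1.FileName AND t2.Success = 1) THEN 1
    ELSE 0
END;
""")
rows = con.execute("SELECT FileName, Success FROM t1 ORDER BY FileName").fetchall()
print(rows)  # [('file1.csv', 1), ('file2.csv', 0)]
```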
|
Updating a table using JOIN by calculating result from multiple rows
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"join",
"sql-update",
""
] |
When i write query like:
```
SELECT DISTINCT City FROM Customers;
```
It is returning all different cities in Customers table.
But when i write query like:
```
SELECT DISTINCT City,* FROM Customers;
```
It is returning all the rows in Customers table(with city column first and all columns including city later in the output).
What is the reason for displaying all the records even if we use distinct keyword?
|
The DISTINCT keyword doesn't apply to a single column, it applies to the entire record. If the rows of your table are not identically valued for all columns, they are not distinct.
Let's assume there is some unique column in your customers table, ID. If you just want the latest customer ID per city, you could try:
```
SELECT Customers.* FROM
Customers JOIN (
SELECT city, MAX(ID) as id
FROM Customers
GROUP BY city
) as max_id
on Customers.ID=max_id.id
```
|
It would only display all the rows if each row was unique in the database. The `DISTINCT` keyword says "eliminate exact duplicates" in the output set. In your second query, you're selecting `City`, then via the `*`, all of the columns in the table. Say you had columns `City, State, Zip`, then `City, *` is the same as `City, City, State, Zip`. Now, it would only return the unique combinations of these for all rows in the database. So if you had
```
Name, City, State, Zip
Joe, Chicago, IL, 60290
Steve, Chicago, IL, 60290
Joe, Chicago, IL, 60290
Joe, Los Angeles, CA, 90012
```
And selected `City, *` you would get
```
Joe, Chicago, IL, 60290
Steve, Chicago, IL, 60290
Joe, Los Angeles, CA, 90012
```
with the duplicate row `Joe, Chicago, IL, 60290` eliminated.
If you are getting all rows, I suspect it's because you have a unique column (perhaps a primary key) in the table schema that forces each row to be unique.
What you might want instead is a `GROUP BY` query, but then you'll need to apply an aggregation operator to the remaining columns to choose which values for them you want.
```
SELECT State, COUNT(*) as [Customers in State]
FROM Customers
GROUP BY State
```
That will return a set of distinct states with the count of the number of rows corresponding to that state in the table.
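Here is the same example as a runnable check with Python's built-in sqlite3, using the sample rows from the answer above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (Name TEXT, City TEXT, State TEXT, Zip TEXT);
INSERT INTO Customers VALUES
  ('Joe',   'Chicago',     'IL', '60290'),
  ('Steve', 'Chicago',     'IL', '60290'),
  ('Joe',   'Chicago',     'IL', '60290'),
  ('Joe',   'Los Angeles', 'CA', '90012');
""")
# DISTINCT applies to the whole row, so only the exact duplicate is removed
distinct_rows = con.execute(
    "SELECT DISTINCT City, Name, State, Zip FROM Customers").fetchall()
print(len(distinct_rows))  # 3 -- the duplicate Joe/Chicago row is eliminated

counts = con.execute("""
    SELECT State, COUNT(*) FROM Customers GROUP BY State ORDER BY State
""").fetchall()
print(counts)  # [('CA', 1), ('IL', 3)]
```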
|
Usage of Distinct key word, SQL
|
[
"",
"sql",
"distinct",
""
] |
I have a query:
```
SELECT case
when Submission__bTracking = 'Phone' then 'Phone'
when Submission__bTracking = 'Web' then 'Web'
when Submission__bTracking = 'Email' then 'Email
when Submission__bTracking = 'Live__bTech__bSupport' then '@ Live Tech Support
when Submission__bTracking = 'Verbal' then 'Verbal Request'
when Submission__bTracking = 'Fax__b__f__bform' then 'Fax / Form'
End as Sub_Tracking,
COUNT(Submission__bTracking) as tickets FROM dbo.MASTER30
WHERE mrSUBMITDATE >= (CONVERT (date, CURRENT_TIMESTAMP -1))
AND mrSUBMITDATE < (CONVERT (date, CURRENT_TIMESTAMP))
GROUP BY Submission__bTracking
```
WHICH PRODUCES THE FOLLOWING RESULT:
```
Sub_Tracking tickets
Email 36
Fax / Form 1
@ Live Tech Support 18
Phone 441
Web 41
```
How do I also produce the total sum of all the tickets within the same query?
LIKE BELOW:
```
Sub_Tracking tickets
Email 36
Fax / Form 1
@ Live Tech Support 18
Phone 441
Web 41
ALL 537
```
Thanks for any help!!
|
You can use UNION ALL and just add one more row to your result set:
```
YOUR QUERY
UNION ALL
select 'ALL' as Sub_Tracking,
COUNT(Submission__bTracking) as tickets FROM dbo.MASTER30
WHERE mrSUBMITDATE >= (CONVERT (date, CURRENT_TIMESTAMP -1))
AND mrSUBMITDATE < (CONVERT (date, CURRENT_TIMESTAMP))
```
|
You could do
```
...previous lines omitted
GROUP BY Submission__bTracking
UNION
SELECT 'ALL' as Sub_Tracking, count(Submission__bTracking) as tickets
FROM dbo.MASTER30
WHERE mrSUBMITDATE >= (CONVERT (date, CURRENT_TIMESTAMP -1))
AND mrSUBMITDATE < (CONVERT (date, CURRENT_TIMESTAMP))
```
|
Get SUM of query result WITHIN same query
|
[
"",
"sql",
"sum",
""
] |
Hey, I'm new to the SQL language.
I have a relation `Movie(title, year, length, inColor, studioName, producerC#)`.
The question is: "Which movies are longer than Gone With the Wind?"
I want to know if there is a difference between these two answers, which one is correct, and why:
```
select title
from movie
where length>any(select length from movie where title ='gone with the wind')
select title
from movie
where length>(select length from movie where title ='gone with the wind')
```
Thanks in advance
|
`length > ANY (<select>)` means that `length` needs to be higher than one (or more) of the values returned by the select. This select could return more than one value. Replacing it with `> (<select>)` is only possible if you know that select to be a singleton select (ie it produces only one row).
|
If you know that the title is unique, then use:
```
select title
from movie
where length > (select length from movie where title = 'gone with the wind');
```
The use of `any` is a bit misleading because it implies that there could be more than one match.
If there is more than one match, I would use:
```
select title
from movie
where length > (select min(length) from movie where title = 'gone with the wind');
```
or:
```
where length > (select max(length) from movie where title = 'gone with the wind');
```
Depending on which I really mean.
Personally, I don't see any use for the keywords `any`, `some`, and `all` when used with subqueries. I think the resulting queries are easier to understand using aggregations, which make the particular comparisons explicit.
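SQLite doesn't support `> ANY`, but the equivalence with `MIN()` mentioned above can be illustrated with Python's built-in sqlite3 (the duplicate-title row and the lengths are invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE movie (title TEXT, year INTEGER, length INTEGER);
INSERT INTO movie VALUES
  ('gone with the wind', 1939, 238),
  ('gone with the wind', 1998, 230),
  ('lawrence of arabia', 1962, 216),
  ('hamlet',             1996, 242);
""")
# "> ANY (subquery)" means longer than at least one matching row,
# i.e. longer than the minimum of the (non-empty) subquery result
rows = con.execute("""
    SELECT title FROM movie
    WHERE length > (SELECT MIN(length) FROM movie
                    WHERE title = 'gone with the wind')
""").fetchall()
print(sorted(rows))  # [('gone with the wind',), ('hamlet',)]
```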
|
is using 'ANY' here necessary (sql)
|
[
"",
"sql",
""
] |
I am having trouble writing the following query in MySQL. I have a table called `pizz0r_pizza_ingredients` which looks something like this:
```
| id | pizza_id | ingredient | amount | measure |
+----+----------+------------+--------+---------+
| 6 | 1 | 15 | 3 | 4 |
|178 | 17 | 1 | 160 | 1 |
| 3 | 1 | 20 | 3 | 4 |
```
I want to search for pizzas where the ingredients have specific requirements such as the following:
```
SELECT `pizza_id`
FROM `pizz0r_pizza_ingredients`
WHERE `ingredient` = 15 AND `ingredient` != 20
GROUP BY `pizza_id`
```
I am trying to get entries where `ingredient` is equal to 15, but to skip a pizza\_id if it also has ingredient 20.
The current result is `1`, but in this example nothing should be returned.
|
I like to handle these problems using `group by` and `having`:
```
SELECT `pizza_id`
FROM `pizz0r_pizza_ingredients`
GROUP BY `pizza_id`
HAVING SUM(ingredient = 15) > 0 AND
SUM(ingredient = 20) = 0;
```
You can add as many new requirements as you like. The `SUM()` expression counts the number of ingredients of a certain type. The `> 0` means that there is at least one on the pizza. The `= 0` means that there are none.
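In case the `SUM()` trick looks surprising: the boolean expression `ingredient = 15` evaluates to 0 or 1, which is what makes the `SUM()` count matches. A quick check with Python's built-in sqlite3, using the sample rows from the question (SQLite shares this boolean behavior with MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pizz0r_pizza_ingredients (id INTEGER, pizza_id INTEGER, ingredient INTEGER);
INSERT INTO pizz0r_pizza_ingredients VALUES
  (6, 1, 15), (178, 17, 1), (3, 1, 20);
""")
rows = con.execute("""
    SELECT pizza_id
    FROM pizz0r_pizza_ingredients
    GROUP BY pizza_id
    HAVING SUM(ingredient = 15) > 0 AND
           SUM(ingredient = 20) = 0
""").fetchall()
print(rows)  # [] -- pizza 1 has ingredient 15 but also 20, so it is excluded
```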
|
Try this:
```
SELECT P1.pizza_id
FROM pizz0r_pizza_ingredients P1
LEFT OUTER JOIN pizz0r_pizza_ingredients P2 ON P1.pizza_id = P2.pizza_id AND P2.ingredient IN (20, 21)
WHERE P1.ingredient = 15 AND P2.id IS NULL
GROUP bY P1.pizza_id;
```
|
Multiple row conditions in WHERE clause
|
[
"",
"mysql",
"sql",
"join",
"group-by",
"relational-division",
""
] |
I have not used concatenation in SQL much and don't know how I would do this.
Say I have an Email column containing a few email addresses (varchar).
Then I have another column for LastLogin (datetime).
```
john@gmail.com
Frank@yahoo.com
Donny@apple.com
```
How would a query look for sorting firstly by the lastLogin date, and then by the domain name of the emails?
|
You can use `charindex()` and `substring()`:
```
order by lastLogin,
substring(email, charindex('@', email) + 1, len(email))
```
If you want the *date* component of the login:
```
order by cast(lastLogin as date),
substring(email, charindex('@', email) + 1, len(email))
```
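For a runnable illustration, here is the same idea in Python's built-in sqlite3, which uses `instr()`/`substr()` in place of SQL Server's `charindex()`/`substring()` (the dates are invented sample data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE logins (email TEXT, lastLogin TEXT);
INSERT INTO logins VALUES
  ('john@gmail.com',  '2014-01-02'),
  ('Frank@yahoo.com', '2014-01-01'),
  ('Donny@apple.com', '2014-01-01');
""")
# sort by login date first, then by everything after the '@'
rows = con.execute("""
    SELECT email FROM logins
    ORDER BY lastLogin, substr(email, instr(email, '@') + 1)
""").fetchall()
print(rows)  # [('Donny@apple.com',), ('Frank@yahoo.com',), ('john@gmail.com',)]
```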
|
Assuming you're using MySQL, the following query should work for you:
```
SELECT * FROM YourTable ORDER BY lastLogin, SUBSTRING_INDEX(email, '@', -1);
```
You provide to SUBSTRING\_INDEX the name of the column, your delimiter and the number of occurrences of the delimiter as -1, in order to get everything on the left hand side after the delimiter.
|
How can I order a query by the domains of email addresses?
|
[
"",
"sql",
"sql-server",
"sql-order-by",
"concatenation",
""
] |
Say I have two tables: table1 as follows
```
Name |Surname| col1
------------------------
Bob |Smith | BS1
Mary Jane|Jones | MJ1
```
and table2 as follows:
```
Name |Surname | col2
------------------------------
Bob |Keller Smith | BS2
Mary |Jones | MJ2
```
What I would like is to JOIN these together to obtain:
```
Name |Surname | col1| col2
-------------------------------------
Bob |Keller Smith | BS1 | BS2
Mary |Jones | MJ1 | MJ2
```
I tried:
```
SELECT tableb.Name, tableb.Surname, tablea.col1, tableb.col2
FROM table1 as tablea
LEFT JOIN table2 as tableb
ON '%'+tablea.Name+'%' LIKE '%'+tableb.Name+'%' AND '%'+tablea.Surname+'%' LIKE '%'+tableb.Surname+'%'
```
But these seemed to join everything.
How can I join together columns with wildcards correctly?
|
Try this:
```
SELECT tableb.Name, tableb.Surname, tablea.col1, tableb.col2
FROM table1 as tablea
LEFT JOIN table2 as tableb
ON (tablea.Name LIKE '%' + tableb.Name + '%' OR tableb.Name LIKE '%' + tablea.Name + '%')
AND (tablea.Surname LIKE '%' + tableb.Surname + '%' OR tableb.Surname LIKE '%' + tablea.Surname + '%')
```
|
I think you want an `INNER JOIN`.
Also note that the percent signs are only recognized on the right-hand side of the `LIKE` operator. Joining `Name` with `Surname` also doesn't seem right.
```
SELECT
a.Name, b.Surname, a.col1, b.col2
FROM
table1 AS a
INNER JOIN table2 AS b ON
a.Name LIKE '%' + b.Name + '%'
OR b.Name LIKE '%' + a.Name + '%'
```
|
SQL Join with wildcards
|
[
"",
"sql",
"join",
""
] |
I need help with the algorithm/database design for the web-based application I am currently working on (I apologize in advance for the long question).
**Application description:**
I am building a customer check-in station (it's basically just a monitor that displays a webpage and is connected to a scanner) where customers who come into an office (similar to a library) can scan their office ID card (it has a unique bar code on it) to check in; customer information (first name, last name, date of birth, check-in time...) will be sent/saved to the server, and the office administrator will be able to see who is in the office right now and do stuff...
When creating an ID card for a new customer, the only information needed is first name, last name and date of birth (the customer can be any age, from kids to the elderly) => the system will generate a unique bar-code (16 digits) and print out a new ID card (with only the bar-code on it)
**Problem**:
If a customer forgot/lost their ID card, or the card is too old and the bar-code can't be scanned, the customer can type their first + last name and date of birth into the check-in station; the system will then search for (first name + last name + date of birth), determine whether that customer exists, and check them in. But it is possible that more than one person has the same name + birthday:
- the system could then display all matched people on screen, but how can a customer know which one is themselves?
- or that situation could be avoided if the system did not allow a customer with the same name and dob to be saved to the database in the first place. But then the customer who came "second" will be very upset that he/she cannot have a card :))
***Edit:***
How do I deal with this problem? This is just an office, so we cannot ask for an SSN or driver's license. The check-in process has to be simple and quick: some customers may be kids who don't have any ID or phone (they will come with their parents/guardians), and many are older people (over 70, or even 80) who can't easily type. That is why the ID-card scanning idea is there to help them: all they need to do is scan their card (I don't think they can be expected to remember a "security question"). SMS verification will not work either: phone numbers may change, not all customers have a phone, and the carrier would be involved (coverage, rate charges...). I don't want any of that.
OK, after reading all of your suggestions:
I am thinking about a **4-digit pass-code** (like the 3-digit security code on the back of a credit card). It will be printed on the ID card with the instructions and everything, and it will be different from the unique customer number (used for scanning). Here is how it works:
- when creating a new customer ID card, the system will ask for an additional 4-digit pass-code and print it on the card as well; a check will also be in place to make sure that customers who have the same name + dob can't have the same pass-code.
In case a customer can't use or has lost the card, after they enter name + dob to check in, **the system only asks for this 4-digit passcode if there is more than 1 matched person**; otherwise, if only 1 person matches, the system will check them in right away.
If they ever forget this pass-code, there is nothing else they can do for themselves; the receptionist will have to help them somehow. **What do you guys think? I am still open to suggestions.**
***My final solution:***
Because the cards have nothing on them (no customer information) other than the bar code (customer number), the best way is to preprint (pre-make) them and have them ready at the desk to give to new customers or for card-replacement purposes.
* When creating a new customer, the receptionist will manually input first name + last name + dob + **phone number (optional) + email (optional) + home address** + customer number (the same as on the card they are about to give to the customer), then submit; the system will check everything, and if everything is good the receptionist gives the customer a new card => the customer goes to the check-in station and checks in.
* When customers forget their card => they will need to see the receptionist => do the verification process => the receptionist will check them in manually.
* When a customer loses their card or the card is damaged => they will need to see the receptionist => do the verification process => the receptionist then gives the customer a replacement card => the customer goes back to the check-in station and checks in.
|
Some rambling thoughts:
You could assign users a PIN and use that to make the account unique. Presumably to insure that it is unique you would have to assign it, you can't let them invent one. (You could let them invent one and then verify it's unique. That might be good to allow people to use numbers that have some meaning to them so they can remember them.) The big catch to this is that if the person loses his PIN, you're stuck. If the PIN is printed on the card, then if the card is damaged or worn, yes, they could type in the PIN. But if they lost the card, they might have no other record of the PIN.
You could ask for their phone number and use that as an identifier. My grocery store does that for their discount card. I think almost everyone has a phone these days. For the small number who don't, you could generate a random number that looks like a phone number but is not valid (so it won't coincidentally duplicate any real phone number), and use it like a PIN. For example if you're in the US, you could make generated numbers all start 555-555 and then make the last 4 digits be like a PIN. Then the only people who would be a problem are those who don't have a phone AND who lost their card, which should be a very small number.
Is there any information in this system that is confidential, or are people committing to spending money? I mean, if someone walked up to a kiosk and typed in the name and birth-date of his next door neighbor and accessed that person's account, would that be a problem? You haven't said what the system does. If getting into the system gives someone access to the person's medical records or bank account or transcripts of his last confession to his priest, then you have to take steps to prevent unauthorized access, you can't let just anyone come up and claim to be someone else and get in. I'm reminded of a case a few years ago where a reporter got access to records of some politician's DVD rentals. He was apparently hoping to find that he had rented a lot of vile pornography or some such that he could use to embarrass the guy, though as it turned out it was mostly westerns. My point is that even seemingly innocent information could be embarrassing to someone under the right circumstances, so you have to be careful.
How often do people have lost or damaged cards? And are there clerks available who could help someone in such cases? That is, if 99% of the time someone comes in, swipes his card, and he's in and everything is good, and the number of times that someone has a lost or damaged card is very small, you could say that in those cases they have to go to a clerk and show the damaged card, or if they say they lost their card, show identification. Then the clerk can verify whatever and give them a new card. You could have the clerk search by name and have a screen that shows birth dates and addresses, ask the customer what their birth date and address is and if it matches one, give them a new card, if not, say I'm sorry you're not on file. This is quite different from a security point of view of showing the customer the list of birth dates and addresses and letting them pick one, as a customer could, (a) type in a common or overheard name and then pick any matching entry that shows up, or even (b) use this to find the address of someone they want to harass, and then you could be liable.
|
Have each customer tell you two "security question" style data: Location of birth, favorite dish, ... These can serve as uniquifiers.
You can then prevent duplicates from being entered, because in case there is a colliding registration the customer must simply choose a different question.
|
Will First Name + Last Name + Date Of Birth combination be unique enough?
|
[
"",
"sql",
"asp.net",
"database",
"algorithm",
"database-design",
""
] |
I have 2 MySQL tables consisting of the following information:
**table1** (basic information)
```
name | url
a | www.a.com
b | www.b.com
c | www.c.com
```
**table2** (time series data)
```
name | status | date
a | ok | 22/12/14
b | ok | 22/12/14
c | ok | 22/12/14
a | ok | 21/12/14
b | ok | 21/12/14
c | ok | 21/12/14
etc
```
I need to do a join so I have all the entries from table1 joined with the most recent entries of table2. So the output would look like:
**output**
```
name | url | status | date
a | www.a.com | ok | 22/12/14
b | www.b.com | ok | 22/12/14
c | www.c.com | ok | 22/12/14
```
What query would give the output above?
|
That's a tricky one. What you can do is join the second table twice: once to find the "newest" lines and a second time to get the actual data.
```
SELECT t1.name, t1.url, t2.status, t2.date
FROM table1 t1
LEFT JOIN (SELECT name, max(date) as mx from table2 GROUP BY name) as X ON X.name = t1.name
LEFT JOIN table2 t2 ON t2.name = X.name AND t2.date = X.mx
```
I used name for joining. You'd normally use keys (ids) instead.
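This greatest-row-per-group pattern can be verified with a small Python/SQLite sandbox (made-up data; note the dates are stored in ISO `yyyy-mm-dd` form so that `MAX(date)` compares correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (name TEXT, url TEXT);
CREATE TABLE table2 (name TEXT, status TEXT, date TEXT);
INSERT INTO table1 VALUES ('a','www.a.com'),('b','www.b.com'),('c','www.c.com');
INSERT INTO table2 VALUES
  ('a','ok','2014-12-22'),('b','ok','2014-12-22'),('c','ok','2014-12-22'),
  ('a','down','2014-12-21'),('b','ok','2014-12-21'),('c','ok','2014-12-21');
""")

rows = conn.execute("""
    SELECT t1.name, t1.url, t2.status, t2.date
    FROM table1 t1
    LEFT JOIN (SELECT name, MAX(date) AS mx FROM table2 GROUP BY name) x
           ON x.name = t1.name
    LEFT JOIN table2 t2 ON t2.name = x.name AND t2.date = x.mx
    ORDER BY t1.name
""").fetchall()
print(rows)  # one row per name, each carrying the newest status
```

Only the `2014-12-22` rows survive, even though `a` also has an older `down` entry.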
|
I specialize in such time-sensitive designs and here is what I do. Your second table is a `Versioned` table in that, like source control systems, when the data is changed, the old data remains, just a new copy is made with the date the change was made. A small change can add full bi-temporal functionality, but that's not your question, is it? 8)
If, like I have found to be true, you notice that the overwhelming majority of the queries against this table are for current data, then one thing you may want to consider is creating a view to expose only the current version of each row.
```
create view tab2 as
select *
from table2 t2
where date =(
select max( date )
from table2
where name = t2.name );
```
Then you can simply join the first table with the view for a one-to-one correlation with the data in table1 with only the current data in table2. This allows you to abstract away the time-sensitive nature of the data.
If there are reasons you can't use a view (such as an old-school DBA who has seizures at the thought of joining with a view) then you have to write the whole thing as one query. Fortunately, that's not difficult, but abstraction is handy.
```
select t1.Name, t1.URL, t2.Status, t2.Date
from table1 t1
join table2 t2
on t2.Name = t1.Name
and t2.Date =(
select max( Date )
from table2
where name = t2.name );
```
Some DBMSs do not allow a subquery in the join. In that case, just move it to the WHERE clause:
```
select t1.Name, t1.URL, t2.Status, t2.Date
from table1 t1
join table2 t2
on t2.Name = t1.Name
where t2.Date =(
select max( Date )
from table2
where name = t2.name );
```
If Name and Date form a unique index (either defined explicitly or because they form the PK of the table), you will find performance to be much better than you might at first think. Try it and compare with alternatives.
|
Join one table with the most recent rows of another
|
[
"",
"mysql",
"sql",
"join",
""
] |
I'm trying to replicate a SQL database in Cassandra, but, while I had no problem creating the tables, I found that I cannot find an example easy to understand that shows how I can create foreign keys in Cassandra.
So, If I have this in SQL:
```
CREATE TABLE COOP_USUARIO (
CI VARCHAR2 (13 BYTE) NOT NULL ,
CUENTA VARCHAR2 (20 BYTE) NOT NULL ,
NOMBRE VARCHAR2 (50 BYTE) NOT NULL ,
EMAIL VARCHAR2 (255 BYTE) NOT NULL ,
DIRECCION VARCHAR2 (255 BYTE) ,
CIUDAD NUMBER NOT NULL ,
TELEFONO VARCHAR2 (10 BYTE) NOT NULL ,
TIPO_PERSONA NUMBER (1) NOT NULL ,
);
CREATE UNIQUE INDEX COOP_USUARIO_PK ON COOP_USUARIO(
CI ASC
);
ALTER TABLE COOP_USUARIO ADD CONSTRAINT COOP_USUARIO_PK PRIMARY KEY ( CI ) ;
CREATE TABLE COOP_CIUDADES
(
ID NUMBER NOT NULL ,
NOMBRE VARCHAR2 (25 BYTE) NOT NULL ,
PROVINCIA NUMBER NOT NULL
) ;
CREATE UNIQUE INDEX COOP_CIUDADES_PK ON COOP_CIUDADES
(
ID ASC
);
ALTER TABLE COOP_CIUDADES ADD CONSTRAINT COOP_CIUDADES_PK PRIMARY KEY ( ID ) ;
ALTER TABLE COOP_USUARIO ADD CONSTRAINT COOP_USUARIO_CIUDADES_FK FOREIGN KEY ( CIUDAD ) REFERENCES COOP_CIUDADES ( ID ) ;
```
What is the Cassndra CQL code for the same purpose?
|
Simple answer is: There is no CQL code for the same purpose.
CQL does not have a concept of foreign keys or any concept of constraints between tables in the same way that you can't do joins between tables.
If you need a constraint between tables then you would need to handle this in code.
|
"one table for each expected query). With that type of approach, there is no need for foreign key"
this seems a bit misleading. there are consistency risks with fully denormalizing in
Cassandra... best way to insure against consistency issues is doing statements in batches with an all or nothing commit using the apply batch command
|
How to create foreign key in Cassandra CQL
|
[
"",
"sql",
"cassandra",
"foreign-keys",
"foreign-key-relationship",
"cql",
""
] |
I need to calculate due date / end date for SLAs. As input values I have the start date and a timespan (in minutes). This calculation needs to take into account business hours, weekends, and holidays.
I've seen a lot of examples where the input is start date and end date, but have been struggling finding anything similar to the above input values.
Is there an elegant solution to this problem? Is there a way to calculate due date without using a loop? I can't think of a way to do the calculation without doing something similar to the following terrible algorithm:
1. Create a return variable "due date" and set it to input variable
"start date"
2. Create a control variable "used minutes" and set it to 0
3. Create a loop with the condition "used minutes" <= "input timespan"
4. Inside the loop, add a second to the "due date" return variable
5. Inside the loop, check if the second is within hours of operation
(checking business hours, weekends, and holidays). If so, increment
control variable "used minutes" by 1.
6. Upon exiting the loop, return variable "due date"
|
You need a table with valid business hours, with the weekends and holidays excluded (or marked as weekend/holiday so you can skip them.) Each row represents one day and the number of working hours for that day. Then you query the business hours table from your start date to the first (min) date where the sum(hours\*60) is greater than your minutes parameter, excluding marked weekend/holiday rows. That gives you your end date.
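The per-day accumulation described above can be sketched in Python; the minutes-per-weekday table and holiday set below are hypothetical stand-ins for the real business-hours table, and the point is that the loop advances a day at a time, never a minute at a time:

```python
from datetime import date, timedelta

# Hypothetical working minutes per weekday (Mon=0 .. Sun=6); weekends absent.
MINUTES_PER_DAY = {0: 480, 1: 480, 2: 480, 3: 480, 4: 480}  # Mon-Fri, 8h days
HOLIDAYS = {date(2015, 1, 1)}  # hypothetical holiday table

def due_day(start, minutes):
    """Walk day by day until the SLA budget of working minutes is spent."""
    d = start
    while True:
        avail = 0 if d in HOLIDAYS else MINUTES_PER_DAY.get(d.weekday(), 0)
        if minutes <= avail:
            return d          # budget fits inside this working day
        minutes -= avail      # consume the whole day and move on
        d += timedelta(days=1)

print(due_day(date(2014, 12, 31), 960))
```

Starting Wed 2014-12-31 with a 960-minute (two 8-hour day) SLA, the New Year's holiday is skipped and the due day lands on Fri 2015-01-02.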
Here's the day table:
```
CREATE TABLE [dbo].[tblDay](
[dt] [datetime] NOT NULL,
[dayOfWk] [int] NULL,
[dayOfWkInMo] [int] NULL,
[isWeekend] [bit] NOT NULL,
[holidayID] [int] NULL,
[workingDayCount] [int] NULL,
CONSTRAINT [PK_tblDay] PRIMARY KEY CLUSTERED
(
[dt] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
here's how I populate the table with days:
```
CREATE PROCEDURE [dbo].[usp_tblDay]
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@Dt datetime ,
@wkInMo int,
@firstDwOfMo int,
@holID int,
@workDayCount int,
@weekday int,
@month int,
@day int,
@isWkEnd bit
set @workDayCount = 0
SET @Dt = CONVERT( datetime, '2008-01-01' )
while @dt < '2020-01-01'
begin
delete from tblDay where dt = @dt
set @weekday = datepart( weekday, @Dt )
set @month = datepart(month,@dt)
set @day = datepart(day,@dt)
if @day = 1 -- 1st of mo
begin
set @wkInMo = 1
set @firstDwOfMo = @weekday
end
if ((@weekday = 7) or (@weekday = 1))
set @isWkEnd = 1
else
set @isWkEnd = 0
if @isWkEnd = 0 and (@month = 1 and @day = 1)
set @holID=1 -- new years on workday
else if @weekday= 6 and (@month = 12 and @day = 31)
set @holID=1 -- holiday on sat, change to fri
else if @weekday= 2 and (@month = 1 and @day = 2)
set @holID=1 -- holiday on sun, change to mon
else if @wkInMo = 3 and @weekday= 2 and @month = 1
set @holID = 2 -- mlk
else if @wkInMo = 3 and @weekday= 2 and @month = 2
set @holID = 3 -- President’s
else if @wkInMo = 4 and @weekday= 2 and @month = 5 and datepart(month,@dt+7) = 6
set @holID = 4 -- memorial on 4th mon, no 5th
else if @wkInMo = 5 and @weekday= 2 and @month = 5
set @holID = 4 -- memorial on 5th mon
else if @isWkEnd = 0 and (@month = 7 and @day = 4)
set @holID=5 -- July 4 on workday
else if @weekday= 6 and (@month = 7 and @day = 3)
set @holID=5 -- holiday on sat, change to fri
else if @weekday= 2 and (@month = 7 and @day = 5)
set @holID=5 -- holiday on sun, change to mon
else if @wkInMo = 1 and @weekday= 2 and @month = 9
set @holID = 6 -- Labor
else if @isWkEnd = 0 and (@month = 11 and @day = 11)
set @holID=7 -- Vets day on workday
else if @weekday= 6 and (@month = 11 and @day = 10)
set @holID=7 -- holiday on sat, change to fri
else if @weekday= 2 and (@month = 11 and @day = 12)
set @holID=7 -- holiday on sun, change to mon
else if @wkInMo = 4 and @weekday= 5 and @month = 11
set @holID = 8 -- thx
else if @holID = 8
set @holID = 9 -- dy after thx
else if @isWkEnd = 0 and (@month = 12 and @day = 25)
set @holID=10 -- xmas day on workday
else if @weekday= 6 and (@month = 12 and @day = 24)
set @holID=10 -- holiday on sat, change to fri
else if @weekday= 2 and (@month = 12 and @day = 26)
set @holID=10 -- holiday on sun, change to mon
else
set @holID = null
insert into tblDay select @dt,@weekday,@wkInMo,@isWkEnd,@holID,@workDayCount
if @isWkEnd=0 and @holID is null
set @workDayCount = @workDayCount + 1
set @dt = @dt + 1
if datepart( weekday, @Dt ) = @firstDwOfMo
set @wkInMo = @wkInMo + 1
end
END
```
I also have a holiday table, but everyone's holidays are different:
```
holidayID holiday rule description
1 New Year's Day Jan. 1
2 Martin Luther King Day third Mon. in Jan.
3 Presidents' Day third Mon. in Feb.
4 Memorial Day last Mon. in May
5 Independence Day 4-Jul
6 Labor Day first Mon. in Sept
7 Veterans' Day Nov. 11
8 Thanksgiving fourth Thurs. in Nov.
9 Fri after Thanksgiving Friday after Thanksgiving
10 Christmas Day Dec. 25
```
HTH
|
This is the best I could do, still uses a loop but uses date functions instead of incrementing a minutes variable. Hope you like it.
```
--set up our source data
declare @business_hours table
(
    work_day varchar(10),
    open_time varchar(8),
    close_time varchar(8)
)
insert into @business_hours values ('Monday', '08:30:00', '17:00:00')
insert into @business_hours values ('Tuesday', '08:30:00', '17:00:00')
insert into @business_hours values ('Wednesday', '08:30:00', '17:00:00')
insert into @business_hours values ('Thursday', '08:30:00', '17:00:00')
insert into @business_hours values ('Friday', '08:30:00', '18:00:00')
insert into @business_hours values ('Saturday', '09:00:00', '14:00:00')

declare @holidays table
(
    holiday varchar(10)
)
insert into @holidays values ('2015-01-01')
insert into @holidays values ('2015-01-02')

--I'm going to assume an SLA of 2 standard business days (0900-1700) = 8*60*2 = 960
declare @start_date datetime = '2014-12-31 16:12:47'
declare @time_span int = 960 -- time till due in minutes
declare @true bit = 'true'
declare @false bit = 'false'
declare @due_date datetime --our output

--other variables
declare @date_string varchar(10)
declare @today_closing datetime
declare @is_workday bit = @true
declare @is_holiday bit = @false

--Given our timespan is in minutes, let's also assume we don't care about seconds in start or due dates
set @start_date = DATEADD(ss, datepart(ss, @start_date) * -1, @start_date)

while (@time_span > 0)
begin
    set @due_date = DATEADD(MINUTE, @time_span, @start_date)
    set @date_string = FORMAT(DATEADD(dd, 0, DATEDIFF(dd, 0, @start_date)), 'yyyy-MM-dd')
    set @today_closing = (select convert(datetime, @date_string + ' ' + close_time) from @business_hours where work_day = DATENAME(weekday, @start_date))

    if exists((select work_day from @business_hours where work_day = DATENAME(weekday, @start_date)))
        set @is_workday = @true
    else
        set @is_workday = @false

    if exists(select holiday from @holidays where holiday = @date_string)
        set @is_holiday = @true
    else
        set @is_holiday = @false

    if @is_workday = @true and @is_holiday = @false
    begin
        if @due_date > @today_closing
            set @time_span = @time_span - datediff(MINUTE, @start_date, @today_closing)
        else
            set @time_span = @time_span - datediff(minute, @start_date, @due_date)
    end

    set @date_string = FORMAT(DATEADD(dd, 1, DATEDIFF(dd, 0, @start_date)), 'yyyy-MM-dd')
    set @start_date = CONVERT(datetime, @date_string + ' ' + isnull((select open_time from @business_hours where work_day = DATENAME(weekday, convert(datetime, @date_string))), ''))
end

select @due_date
```
|
Calculating due date using business hours and holidays
|
[
"",
"sql",
"sql-server-2005",
""
] |
I would like to write a regexp\_like function which identifies whether a string consists of two alternating characters. It should only match a string of alternating digits made up of exactly two unique digits; a digit cannot repeat consecutively, it must alternate.
**Requirement :**
The regular expression should match the pattern `787878787`, but it should **NOT** match the pattern `787878788`.
It should NOT match patterns like `000000000` either.
|
I think you want the following:
```
WITH t1 AS (
SELECT '787878787' AS str FROM dual
UNION
SELECT '787878788' AS str FROM dual
UNION
SELECT '7878787878' AS str FROM dual
UNION
SELECT '78' AS str FROM dual
)
SELECT * FROM t1
WHERE REGEXP_LIKE(str, '^(.)(.)(\1\2)*\1?$')
AND SUBSTR(str, 1, 1) != SUBSTR(str, 2, 1)
```
This will cover the case (mentioned in the requirements) where the string ends with the same character with which it begins. If you want only digits, replace the `.` in the regex with `\d`.
**Update:**
Here is how the regex breaks down:
```
^ = start of string
(.) = first character - can be anything - in parentheses to capture it and use it in a backreference
(.) = second character - can be anything
\1 = backreference to first captured group
\2 = backreference to second captured group
(\1\2)* = These should appear together zero or more times
\1? = The first captured group should appear zero or one times
$ = end of the string
```
Hope this helps.
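The same pattern can be exercised in Python, where the `re` module uses the same backreference syntax; `fullmatch` stands in for the `^...$` anchors, and the `SUBSTR` inequality becomes a plain character comparison:

```python
import re

# (.)(.)  capture the two alternating characters
# (\1\2)* the pair repeated zero or more times
# \1?     optionally end on the first character again
PAIR = re.compile(r'(.)(.)(\1\2)*\1?')

def alternates(s):
    """True when s strictly alternates between two different characters."""
    m = PAIR.fullmatch(s)
    return m is not None and s[0] != s[1]

print([alternates(s) for s in ('787878787', '787878788', '000000000', '78')])
```

`787878787` and `78` match; `787878788` breaks the alternation and `000000000` is rejected by the first-versus-second character check.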
|
You might do something like this -
```
SQL> WITH DATA AS(
2 SELECT '787878787' str FROM dual UNION ALL
3 SELECT '787878788' FROM dual
4 )
5 SELECT *
6 FROM DATA
7 WHERE REGEXP_LIKE(str, '(\d+?)\1')
8 AND SUBSTR(str, 1,1) = SUBSTR(str, -1, 1)
9 /
STR
---------
787878787
SQL>
```
Since you are dealing only with digits, I used `\d`.
`\d+?` will match the digits, and, `\1` are the captured digits. The `substr` in the `AND` condition is checking whether the first and last digit of the string are same.
**Edit : Additional requirement by OP**
To avoid the numbers like `00000000`, you need to add a `NOT` condition to the predicate.
```
SQL> WITH DATA AS
2 ( SELECT '787878787' str FROM dual
3 UNION ALL
4 SELECT '787878788' FROM dual
5 UNION ALL
6 SELECT '787878788' FROM dual
7 )
8 SELECT *
9 FROM DATA
10 WHERE REGEXP_LIKE(str, '(\d+?)\1')
11 AND SUBSTR(str, 1,1) = SUBSTR(str, -1, 1)
12 AND SUBSTR(str, 2,1) <> SUBSTR(str, -1, 1)
13 /
STR
---------
787878787
SQL>
```
|
Oracle RegExp_Like for Two Repeating Digits
|
[
"",
"sql",
"regex",
"oracle",
""
] |
I have a table with 20 columns. The first is my primary key. The remaining columns are attributes of the primary key. I need to evaluate each column on a row-by-row basis. If an attribute column has a value other than null, then do some further processing.
The way I am familiar with doing this in T-SQL would be a WHILE loop to step through the rows, capture the values from the columns, and evaluate the values for further processing.
Does anyone have any better ideas?
|
You could unpivot the table and filter out things that are not NULL:
```
select pk, col, val
from (select pk, col, val
from table t
unpivot (val for col in (attr1, attr2, . . . )) as unpivot
) u
where val is not null;
```
This will provide a list of the columns and associated non-NULL values. Note: it assumes that the types of the attribute columns are all the same.
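SQLite (used here via Python as a convenient sandbox) has no `UNPIVOT`, so this sketch gets the same effect with `UNION ALL`; the table and column names (`t`, `pk`, `attr1..attr3`) are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (pk INTEGER PRIMARY KEY, attr1 TEXT, attr2 TEXT, attr3 TEXT);
INSERT INTO t VALUES (1, 'x', NULL, 'y'), (2, NULL, NULL, 'z');
""")

rows = conn.execute("""
    SELECT pk, col, val FROM (
        SELECT pk, 'attr1' AS col, attr1 AS val FROM t
        UNION ALL SELECT pk, 'attr2', attr2 FROM t
        UNION ALL SELECT pk, 'attr3', attr3 FROM t
    ) u
    WHERE val IS NOT NULL
    ORDER BY pk, col
""").fetchall()
print(rows)  # only the non-NULL attribute/value pairs remain
```

Each row fans out to one row per attribute, and the `WHERE val IS NOT NULL` filter keeps exactly the populated attributes.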
|
I think the best approach would be to first define if any column actually has a null value. For that you could use something like the following;
```
Select COALESCE(Attr1, Attr2, ...) -- add all the columns; they must all be of the same type. This returns null only when every column is null.
```
This will return null. Let's say that you have a while loop (you could use a cursor as well, but cursors are considered slower than while loops) that checks each row against the result of this statement; when you find the result to be null, you could then check which columns are null. This should speed up the process a little bit.
Also, this one looks pretty easy about finding the rows that has null values: <https://dba.stackexchange.com/questions/14864/test-if-any-fields-are-null>
|
TSQL Technique for attribute evaluation
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to get the maximum id from a table without the use of any function like max. I need a simple, pure query. Can anyone help me solve this problem?
We simply write
```
select max(id) from table
```
but I don't want to use **max()**.
|
Use `ORDER BY` and `LIMIT`
```
SELECT id
FROM table
ORDER BY id DESC
LIMIT 1
```
|
**ORDER BY** with **LIMIT** will do the job for you just fine
```
SELECT id FROM table
ORDER BY id DESC LIMIT 1;
```
But as you asked the question from an interview point of view, they may even ask you to do the same without using **LIMIT**, **TOP** or ***max()***.
In that case you should go with the *subquery* approach. Here's how you would do it:
```
SELECT id FROM table
WHERE id >= ALL
(SELECT id FROM table)
```
In this query an id is matched with all the ids in the table, and it is returned only if its value is greater than or equal to all the ids in the table. Only the max will satisfy the condition.
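Both approaches can be sketched with Python's SQLite (made-up data). One caveat: SQLite does not support the `ALL` quantifier, so the subquery variant below is written with the equivalent `NOT EXISTS`; the `>= ALL` form shown above runs as-is on MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("CREATE TABLE t (id INTEGER); INSERT INTO t VALUES (3),(7),(5);")

# ORDER BY / LIMIT approach
top = conn.execute("SELECT id FROM t ORDER BY id DESC LIMIT 1").fetchone()[0]

# "greater than or equal to every id" approach, without LIMIT, TOP or max():
# a row survives only if no other row has a larger id.
top2 = conn.execute("""
    SELECT id FROM t a
    WHERE NOT EXISTS (SELECT 1 FROM t b WHERE b.id > a.id)
""").fetchone()[0]
print(top, top2)
```

Both queries return 7 for this table.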
|
How to get Max of ID without use of MAX()
|
[
"",
"mysql",
"sql",
"select",
"sql-order-by",
"max",
""
] |
Given the following table structures:
```
countries: id, name
regions: id, country_id, name, population
cities: id, region_id, name
```
...and this query...
```
SELECT c.name AS country, COUNT(DISTINCT r.id) AS regions, COUNT(s.id) AS cities
FROM countries AS c
JOIN regions AS r ON r.country_id = c.id
JOIN cities AS s ON s.region_id = r.id
GROUP BY c.id
```
How would I add a `SUM` of the `regions.population` value to calculate the country's population? I need to only use the value of each region once when summing, but the un-grouped result has multiple rows for each region (the number of cities in that region).
Example data:
```
mysql> SELECT * FROM countries;
+----+-----------+
| id | name |
+----+-----------+
| 1 | country 1 |
| 2 | country 2 |
+----+-----------+
2 rows in set (0.00 sec)
mysql> SELECT * FROM regions;
+----+------------+-----------------------+------------+
| id | country_id | name | population |
+----+------------+-----------------------+------------+
| 11 | 1 | region 1 in country 1 | 10 |
| 12 | 1 | region 2 in country 1 | 15 |
| 21 | 2 | region 1 in country 2 | 25 |
+----+------------+-----------------------+------------+
3 rows in set (0.00 sec)
mysql> SELECT * FROM cities;
+-----+-----------+---------------------------------+
| id | region_id | name |
+-----+-----------+---------------------------------+
| 111 | 11 | City 1 in region 1 in country 1 |
| 112 | 11 | City 2 in region 1 in country 1 |
| 121 | 12 | City 1 in region 2 in country 1 |
| 211 | 21 | City 1 in region 1 in country 2 |
+-----+-----------+---------------------------------+
4 rows in set (0.00 sec)
```
Desired output with example data:
```
+-----------+---------+--------+------------+
| country | regions | cities | population |
+-----------+---------+--------+------------+
| country 1 | 2 | 3 | 25 |
| country 2 | 1 | 1 | 25 |
+-----------+---------+--------+------------+
```
***I prefer a solution that doesn't require changing the `JOIN` logic.***
The [accepted solution](https://stackoverflow.com/a/2440065/244826) for [this post](https://stackoverflow.com/q/2436284/244826) seems to be in the neighborhood of what I'm looking for, but I haven't been able to figure out how to apply it to my issue.
---
***MY SOLUTION***
```
SELECT c.id AS country_id,
c.name AS country,
COUNT(x.region_id) AS regions,
SUM(x.population) AS population,
SUM(x.cities) AS cities
FROM countries AS c
LEFT JOIN (
SELECT r.country_id,
r.id AS region_id,
r.population AS population,
COUNT(s.id) AS cities
FROM regions AS r
LEFT JOIN cities AS s ON s.region_id = r.id
GROUP BY r.country_id, r.id, r.population
) AS x ON x.country_id = c.id
GROUP BY c.id, c.name
```
---
***Note:*** My actual query is much more complex and has nothing to do with countries, regions, or cities. This is a minimal example to illustrate my issue.
|
First of all, the [other post](https://stackoverflow.com/q/2436284/244826) you reference is not the same situation. In that case, the joins are like [A -> B and A -> C], so the weighted average (which is what that calculation does) is correct. In your case the joins are like [A -> B -> C], so you need a different approach.
The simplest solution that comes to mind right away does involve a subquery, but not a complex one:
```
SELECT
c.name AS country,
COUNT(r.id) AS regions,
SUM(s.city_count) AS cities,
SUM(r.population) as population
FROM countries AS c
JOIN regions AS r ON r.country_id = c.id
JOIN
(select region_id, count(*) as city_count
from cities
group by region_id) AS s
ON s.region_id = r.id
GROUP BY c.id
```
The reason this works is that it resolves the cities to one row per region before joining to the region, thus eliminating the cross join situation.
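Against the question's example data, the subquery approach can be verified end-to-end; this sketch uses SQLite via Python purely as a convenient sandbox for the MySQL query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE countries (id INT, name TEXT);
CREATE TABLE regions (id INT, country_id INT, name TEXT, population INT);
CREATE TABLE cities (id INT, region_id INT, name TEXT);
INSERT INTO countries VALUES (1,'country 1'),(2,'country 2');
INSERT INTO regions VALUES (11,1,'r1',10),(12,1,'r2',15),(21,2,'r1',25);
INSERT INTO cities VALUES (111,11,'c1'),(112,11,'c2'),(121,12,'c1'),(211,21,'c1');
""")

rows = conn.execute("""
    SELECT c.name, COUNT(r.id) AS regions, SUM(s.city_count) AS cities,
           SUM(r.population) AS population
    FROM countries c
    JOIN regions r ON r.country_id = c.id
    JOIN (SELECT region_id, COUNT(*) AS city_count
          FROM cities GROUP BY region_id) s ON s.region_id = r.id
    GROUP BY c.id
    ORDER BY c.id
""").fetchall()
print(rows)
```

Because cities are pre-aggregated to one row per region, each region's population is summed exactly once, matching the desired output table.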
|
How about leaving the rest as it is and just adding one more join for the population:
```
SELECT c.name AS country,
COUNT(distinct r.id) AS regions,
COUNT(s.id) AS cities,
pop_regs.sum as total_population
FROM countries AS c
LEFT JOIN regions AS r ON r.country_id = c.id
LEFT JOIN cities AS s ON s.region_id = r.id
left join
(
select country_id, sum(population) as sum
from regions
group by country_id
) pop_regs on pop_regs.country_id = c.id
GROUP BY c.id, c.name
```
## [SQLFiddle demo](http://sqlfiddle.com/#!8/3dd8b/1)
|
SUM For Distinct Rows
|
[
"",
"mysql",
"sql",
"select",
"group-by",
"sum",
""
] |
I have a requirement where I need to convert all SQL Server stored procedures into HANA stored procedures. I have come across a function ISNUMERIC in T-SQL and I am not getting the equivalent of it in HANA.
After searching the web, I found that HANA does not have a built-in ISNUMERIC equivalent. I then tried writing my own function to achieve this, and got stuck with error handling and regular-expression limitations.
My HANA version is 70.
|
SAP HANA does not come with a `ISNUMERIC()` function.
However, this question had been asked and answered multiple times on SCN:
E.g. <http://scn.sap.com/thread/3449615>
or my approach from back in the days:
<http://scn.sap.com/thread/3638673>
```
drop function isnumeric;
create function isNumeric( IN checkString NVARCHAR(64))
returns isNumeric integer
language SQLSCRIPT as
begin
declare tmp_string nvarchar(64) := :checkString;
declare empty_string nvarchar(1) :='';
/* replace all numbers with the empty string */
tmp_string := replace (:tmp_string, '1', :empty_string);
tmp_string := replace (:tmp_string, '2', :empty_string);
tmp_string := replace (:tmp_string, '3', :empty_string);
tmp_string := replace (:tmp_string, '4', :empty_string);
tmp_string := replace (:tmp_string, '5', :empty_string);
tmp_string := replace (:tmp_string, '6', :empty_string);
tmp_string := replace (:tmp_string, '7', :empty_string);
tmp_string := replace (:tmp_string, '8', :empty_string);
tmp_string := replace (:tmp_string, '9', :empty_string);
tmp_string := replace (:tmp_string, '0', :empty_string);
/*if the remaining string is not empty, it must contain non-number characters */
if length(:tmp_string)>0 then
isNumeric := 0;
else
isNumeric := 1;
end if;
end;
```
Testing this shows:
```
with data as( select '1blablupp' as VAL from dummy
union all select '1234' as VAL from dummy
union all select 'bla123' as val from dummy)
select val, isNumeric(val) from data
VAL ISNUMERIC(VAL)
1blablupp 0
1234 1
bla123 0
```
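For illustration only, the digit-stripping idea above can be sketched outside SQLScript; here is a hypothetical Python equivalent that mirrors the quirks of the original (an empty string counts as numeric, and signs or decimal points like `'-1'` or `'1.5'` count as non-numeric):

```python
def is_numeric(check_string: str) -> int:
    """Return 1 if the string contains only the digits 0-9, else 0.

    Mirrors the SQLScript function above: every digit is replaced by the
    empty string, and anything that remains must be a non-digit character.
    """
    tmp = check_string
    for digit in "0123456789":
        tmp = tmp.replace(digit, "")
    return 0 if len(tmp) > 0 else 1

print(is_numeric("1blablupp"), is_numeric("1234"), is_numeric("bla123"))
# 0 1 0
```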
|
From SAP HANA 1.0 SPS12 you can use regular expressions in SQL. You can use the `LIKE_REGEXPR` function to check whether a specific string contains letters, e.g.:
```
SELECT
CASE WHEN '0001A' LIKE_REGEXPR '[A-Z]' THEN 0 ELSE 1 END
FROM
DUMMY;
```
|
What is the equivalent of T-SQL ISNUMERIC function in HANA Sqlscript?
|
[
"",
"sql",
"sql-scripts",
"hana",
""
] |

How would I SELECT the USER\_ID where there is a value of "Retailer" and "Colorado"? In other words, the query should return 5.
|
You can use aggregation and a `having` clause:
```
select user_id
from table t
group by user_id
having sum(value = 'Retailer') > 0 and
sum(value = 'Colorado') > 0;
```
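The `SUM(condition)` trick works because MySQL evaluates boolean expressions as 0/1; SQLite behaves the same way, so the pattern can be sketched and verified like this (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE t (user_id INTEGER, value TEXT);
INSERT INTO t VALUES
  (1, 'Retailer'), (1, 'Texas'),
  (5, 'Retailer'), (5, 'Colorado'),
  (7, 'Colorado');
""")

# SUM(value = 'Retailer') counts rows where the comparison is true (1),
# so the HAVING clause keeps only users that have BOTH values.
rows = c.execute("""
    SELECT user_id
    FROM t
    GROUP BY user_id
    HAVING SUM(value = 'Retailer') > 0
       AND SUM(value = 'Colorado') > 0
""").fetchall()
print(rows)  # only user 5 has both values
```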
|
You can try this
```
SELECT user_id
FROM table_name T
WHERE value IN ('Retailer','Colorado')
GROUP BY user_id
HAVING COUNT(DISTINCT value) = 2
```
|
SQL Select unique user_id from single table
|
[
"",
"mysql",
"sql",
""
] |
I have a table `hastags`
```
id|tweet_id| tag |time
1 1 hell
2 2 hellyeah
3 3 hell
4 4 stackoverflow
5 5 hell
6 6 hellyeah
7 7 bellrings
8 7 nomorehell
8 7 nomorehell
```
I want to select the count of the *most used* `tags` after a *specific time*.
Now if I limit my query to 3 rows I should get
```
|tag | count|
hell 3
hellyeah 2
nomorehell 2
```
How can I achieve this? Any help is appreciated.
|
I chose `2014-12-29` as an example of a specific time
```
select tag,
sum(`time` > '2014-12-29') as sum_greater,
sum(`time` < '2014-12-29') as sum_smaller
from hashtags
group by tag
order by sum_greater desc, sum_smaller desc
limit 3
```
|
```
SELECT tag, count(*) as anz
FROM hastags
WHERE `time` > '2014-12-29'
GROUP BY tag
ORDER BY anz DESC
LIMIT 3
```
|
Select distinct rows by count in descending order
|
[
"",
"mysql",
"sql",
"select",
"count",
"distinct",
""
] |
I use [this query](https://data.stackexchange.com/stackoverflow/query/201514/find-answers-to-flag) on SEDE. I have just been editing the `WHERE p.CreationDate > '2014-12-21T00:00:00.001'` like every week to select the last week's records only. So for example previous edits were just changing to 12-21 from 12-14 from 12-7, etc..
I'm trying to edit that part so that I don't have to keep editing it every week.
I was thinking I could do something like
```
WHERE DATEDIFF(DAY, p.creationDate, GETDATE()) <= 7
```
which would select only results from the last 7 days.
However, this will only work on like Sunday when SEDE is updated. If I run a query on Wednesday, then this query will be missing three days of results.
How could I write this `where` statement to find the most recent Sunday (or today, if it is Sunday), and then go back one week from there?
|
To find the `previous Sunday`, use this piece of code:
```
select DateAdd(dd, -1, DateAdd(wk, DateDiff(wk, 0, getdate()), 0)) [Previous Sunday]
```
The `Where clause` should be something like:
```
Select ... from tablename
WHERE DATEDIFF(DAY, p.creationDate, DateAdd(dd, -1, DateAdd(wk, DateDiff(wk, 0, getdate()), 0))) <= 7
```
|
There are several similar solutions.
Replace the `GetDate` in your where clause with something like
```
Cast(DateAdd(day,-DatePart(weekday,GetDate())+1,GetDate()) as Date)
```
where the +1 is adjusted larger or smaller to move to the specific day of the week you want to start with. The +1 makes this Sunday.
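The same "most recent Sunday" arithmetic can be sanity-checked with a short Python sketch (dates picked arbitrarily):

```python
import datetime as dt

def most_recent_sunday(d: dt.date) -> dt.date:
    # Python's weekday(): Monday=0 ... Sunday=6, so (weekday + 1) % 7
    # is the number of days elapsed since the last Sunday.
    return d - dt.timedelta(days=(d.weekday() + 1) % 7)

# Wednesday 2014-12-31 maps back to Sunday 2014-12-28;
# a Sunday maps to itself.
print(most_recent_sunday(dt.date(2014, 12, 31)))
print(most_recent_sunday(dt.date(2014, 12, 28)))
```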
|
How to write a query that selects last specific day and goes back 1 week from there?
|
[
"",
"sql",
"sede",
""
] |
I am writing a stored procedure to get the data for the next 6 months based on a particular date field,
```
SELECT CR.[Id] AS ClaimId
,CR.[BOLNumber]
,CR.[PRONumber]
,CR.[ClaimNumber]
,CR.[CompanyName]
,c.NAME AS CarrierName
,CR.[DateFiled]
,CONVERT(VARCHAR(10),CR.[DateFiled], 103) AS DateFiledString
,CR.[ClaimDate] AS ClaimReceivedDate
,CONVERT(VARCHAR(10), CR.[ClaimDate], 103) AS ClaimReceivedDateString
,CR.[AmountFiled]
,CR.[Status] AS StatusId
,CR.[SettledAmount]
FROM CarrierRate.Claims AS CR WITH (NOLOCK)
WHERE CR.CustomerId = @AccountUserId AND
CR.Status = @statusType AND CR.ClaimDate < DATEADD(month,6,CR.ClaimDate)
ORDER BY CR.[Status] ASC
```
The field is ClaimDate. Am I doing it right, or does anything need to be changed?
Please suggest.
|
You can try
```
CR.ClaimDate BETWEEN CONVERT(DATE, GETDATE()) AND DATEADD(MONTH, 6, CONVERT(DATE, GETDATE()))
```
Only the date part of today is considered here. Note that the original comparison `CR.ClaimDate < DATEADD(month, 6, CR.ClaimDate)` is always true, so the range has to be anchored to the current date.
|
Use [**BETWEEN**](http://msdn.microsoft.com/en-IN/library/ms187922.aspx) keyword to specify date range.
```
WHERE CR.CustomerId = @AccountUserId AND
CR.Status = @statusType AND
CR.ClaimDate BETWEEN GETDATE() AND DATEADD(month, 6, GETDATE())
```
|
Adding months in where clause in sql
|
[
"",
"sql",
"where-clause",
""
] |
I'm trying to fill the gaps after using GROUP BY, using an aux table. Can you help?
**aux table to deal with days with no orders**
```
date quantity
2014-01-01 0
2014-01-02 0
2014-01-03 0
2014-01-04 0
2014-01-05 0
2014-01-06 0
2014-01-07 0
```
**group by result from "orders" table**
```
date quantity
2014-01-01 7
2014-01-02 1
2014-01-04 2
2014-01-05 3
```
**desired result joining "orders" table with "aux table"**
```
date quantity
2014-01-01 7
2014-01-02 1
2014-01-03 0
2014-01-04 2
2014-01-05 3
2014-01-06 0
2014-01-07 0
```
|
Without knowing how you create your `group by result` table, what you're looking for is an `outer join`, perhaps with `coalesce`. Something like this:
```
select distinct a.date, coalesce(b.quantity,0) quantity
from aux a
left join yourgroupbyresults b on a.date = b.date
```
Please note, you may or may not need `distinct` -- depends on your data.
---
Edit, given your comments, this should work:
```
select a.date, count(b.date_sent)
from aux a
left join orders b on a.date = date_format(b.date_sent, '%Y-%m-%d')
group by a.date
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!2/3958ba/1)
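The calendar-table pattern behaves the same on other engines; a minimal SQLite sketch of the LEFT JOIN plus COUNT approach (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE aux (date TEXT);
CREATE TABLE orders (date_sent TEXT);
INSERT INTO aux VALUES ('2014-01-01'), ('2014-01-02'), ('2014-01-03');
INSERT INTO orders VALUES ('2014-01-01'), ('2014-01-01'), ('2014-01-03');
""")

# COUNT(b.date_sent) counts only non-NULL values, so days with no
# matching order from the LEFT JOIN come out as 0 instead of being lost.
rows = c.execute("""
    SELECT a.date, COUNT(b.date_sent)
    FROM aux a
    LEFT JOIN orders b ON a.date = b.date_sent
    GROUP BY a.date
    ORDER BY a.date
""").fetchall()
print(rows)
```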
|
Using your results it would be something like:
```
SELECT a.date
,COALESCE(b.quantity,0) as quantity
FROM auxtable a
LEFT JOIN groupbyresult b
ON a.date = b.date
```
You can also do your grouping in the same query as the left join:
```
SELECT a.date
,COALESCE(COUNT(b.somefield),0) as quantity
FROM auxtable a
LEFT JOIN table1 b
ON a.date = b.date
GROUP BY a.date
```
|
MYSQL fill group by "gaps"
|
[
"",
"mysql",
"sql",
"group-by",
"left-join",
""
] |
I have a (MySQL) table that looks like this:
```
`radiostation_id` varchar(36) NOT NULL,
`song_id` varchar(36) NOT NULL,
`date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`length` int(4) NOT NULL
```
What I want to do is find every `song_id` that has been played more than once within an interval of 15 minutes. For example, a song might have been played today at 15:10 and then again at 15:20. I shouldn't need to set the interval boundaries myself; the query should check for any such interval throughout the table and list all the songs and the timestamps at which it happened.
|
It seems to me that you want a song that is played within a certain 15 minutes regardless of the day. There are [date and time functions](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html) you can use to parse the information you want. For example, you can use `TIME()` to extract the time portion of your column. For example, you can search for any song played between 12:00 and 12:15 like this:
```
SELECT DISTINCT id
FROM myTable
WHERE TIME(timeColumn) BETWEEN '12:00:00' AND '12:15:00';
```
Here is a little [SQL Fiddle](http://sqlfiddle.com/#!2/103ead/1) I used to try it.
**EDIT**
Based on our discussions in the comments, you can use the following query. It will self join the table on the condition that the song\_id matches (so you can compare individual songs), the condition that the time is not exactly the same (so no occurrence is counted twice) and on the condition that the interval in the second table is within +/- 15 minutes of the first like this:
```
SELECT m.song_id, m.timecolumn
FROM myTable m
JOIN myTable mt
ON m.song_id = mt.song_id
AND m.timeColumn != mt.timeColumn
AND mt.timeColumn BETWEEN DATE_SUB(m.timeColumn, INTERVAL 15 MINUTE) AND DATE_ADD(m.timeColumn, INTERVAL 15 MINUTE);
```
Here is an updated [Fiddle](http://sqlfiddle.com/#!2/000975/5).
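The self-join with a +/- 15-minute window can be sketched in SQLite, where a `julianday` difference stands in for MySQL's `DATE_SUB`/`DATE_ADD` (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE plays (song_id INTEGER, played_at TEXT);
INSERT INTO plays VALUES
  (1, '2014-12-29 15:10:00'),
  (1, '2014-12-29 15:20:00'),
  (2, '2014-12-29 15:10:00'),
  (2, '2014-12-29 18:00:00');
""")

# julianday() returns a fractional day number, so 15 minutes = 15/1440 day.
rows = c.execute("""
    SELECT DISTINCT a.song_id
    FROM plays a
    JOIN plays b
      ON a.song_id = b.song_id
     AND a.played_at <> b.played_at
     AND ABS(julianday(a.played_at) - julianday(b.played_at)) <= 15.0 / 1440
    ORDER BY a.song_id
""").fetchall()
print(rows)  # only song 1 repeats within 15 minutes
```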
|
It's not entirely clear what you are asking.
If you want to find occurrences of the *same* `song_id`, where a second row with the same `song_id` has a date value within 15 minutes of the first row, one way to get that would be a query like this:
```
SELECT t.song_id
FROM looks_like_this t
WHERE EXISTS ( SELECT 1
FROM looks_like_this s
WHERE s.song_id = t.song_id
AND s.date >= t.date
AND s.date <= t.date + INTERVAL 15 MINUTE
                   AND NOT (s.radiostation_id = t.radiostation_id AND s.date = t.date)
               )
```
If you want to return a unique list of `song_id`, then add either `DISTINCT` keyword or a `GROUP BY t.song_id` clause.
Another way to get the result would be use a JOIN operation:
```
SELECT t.song_id
FROM looks_like_this t
JOIN looks_like_this s
ON s.song_id = t.song_id
AND s.date >= t.date
AND s.date <= t.date + INTERVAL 15 MINUTE
AND NOT (s.radiostation_id = t.radiostation_id AND s.date = t.date)
GROUP BY t.song_id
```
|
Selecting multiple values within a timestamp interval
|
[
"",
"mysql",
"sql",
""
] |
I got this result from a query:
```
TITAL PCS RGH_CTS VALUE
----- --- ------- ------------
SEND 10 49.8550 198742.94400
BID 15 76.7070 327461.76300
FINAL 19 88.5630 337954.15600
```
I want this output - how can I get it?
```
| TITAL | SEND | BID | FINAL |
----------------------------------------------------------
| PCS | 10 | 15 | 19 |
| RGH_CTS | 49.8550 | 76.7070 | 88.5630 |
| VALUE | 198742.94400 | 327461.76300 | 337954.15600 |
```
Can anybody help with this?
|
You need to Combine `Pivot` and `Unpivot` to get the result.
First `Unpivot` the table to get the data in single column using `Cross Apply`.
Next `Pivot` the result of previous step to get the result.
```
SELECT Tital,
[SEND],
[BID],
[FINAL]
FROM (SELECT t.TITAL AS tt,
c.TITAL,
cdata
FROM Tablename t
CROSS apply ( VALUES ('PCS',[PCS]),
('RGH_CTS',[RGH_CTS]),
('VALUE',VALUE) ) c (TITAL, cdata)) a
PIVOT (Max(cdata)
FOR tt IN([SEND],
[BID],
[FINAL])) piv
```
## [SQLFIDDLE DEMO](http://sqlfiddle.com/#!3/4d986/1)
|
As per NoDisplayName's answer, you'll need to do a full transpose, by first UNPIVOTing and then re-PIVOTing the data. You can also UNPIVOT data with the keyword of the same name:
```
SELECT Criteria as TITAL, [SEND], [BID], [FINAL]
FROM
(
SELECT
*
FROM
Result
UNPIVOT
(
SomeColumn
for Criteria in (PCS, RGH_CTS, VALUE)
) unpvt
) X
PIVOT
(
SUM(SomeColumn)
for [TITAL] IN ([SEND], [BID], [FINAL])
)pvt;
```
Note that it is fairly unusual to retain the LHS column name after a transpose (i.e. `TITAL` seems to refer to a different subject entirely in the pre- and post-transposed data).
[SqlFiddle here](http://sqlfiddle.com/#!6/b84030/9)
|
how to create pivot query for this result?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
"pivot",
""
] |
Is there a way to split a string in HANA?
Something similar to the equivalent in SQL Server: `SELECT * FROM dbo.fnSplitString('valueA,valueB', ',')`
|
Try this,
[Refer Here](http://scn.sap.com/thread/3482620)
```
CREATE PROCEDURE SPLIT_TEST(TEXT nvarchar(100))
AS
BEGIN
declare _items nvarchar(100) ARRAY;
declare _text nvarchar(100);
declare _index integer;
_text := :TEXT;
_index := 1;
WHILE LOCATE(:_text,',') > 0 DO
_items[:_index] := SUBSTR_BEFORE(:_text,',');
_text := SUBSTR_AFTER(:_text,',');
_index := :_index + 1;
END WHILE;
_items[:_index] := :_text;
rst = UNNEST(:_items) AS ("items");
SELECT * FROM :rst;
END;
CALL SPLIT_TEST('A,B,C,E,F')
```
|
Another way of splitting a string is with an outbound variable using table types:
```
CREATE TYPE UTIL_VARCHAR_LIST AS TABLE
(
VarcharValue Varchar(5000)
);
CREATE PROCEDURE UTIL_SPLIT_STRING
(
IN iv_split_string Varchar(5000),
IN iv_split_character Varchar(1) DEFAULT ',',
OUT ot_string_list UTIL_VARCHAR_LIST
) LANGUAGE SQLSCRIPT
AS
BEGIN
DECLARE TEMP_STR VARCHAR(5000) := :iv_split_string || :iv_split_character;
DECLARE OUT_VAR VARCHAR(5000) ARRAY;
DECLARE POS INTEGER :=1;
DECLARE FLAG INTEGER := 1;
DECLARE LEFT_STR VARCHAR(5000);
WHILE(LENGTH(:TEMP_STR) > 0 )
DO
LEFT_STR := SUBSTR_BEFORE (:TEMP_STR,:iv_split_character);
TEMP_STR := SUBSTR_AFTER (:TEMP_STR,:LEFT_STR || :iv_split_character);
OUT_VAR[POS] := LEFT_STR;
POS := :POS + 1;
END WHILE;
ot_string_list = UNNEST(:OUT_VAR) AS ("VARCHARVALUE");
END;
```
|
HANA: Split string?
|
[
"",
"sql",
"hana",
""
] |
I've got two tables in MS Access that keep track of class facilitators and the classes they facilitate. The two tables are structured as follows:
tbl\_facilitators
```
facilID -> a unique autonumber to keep track of individual teachers
facilLname -> the Last name of the facilitator
facilFname -> the First name of the facilitator
```
tbl\_facilitatorClasses
```
classID -> a unique autonumber to keep track of individual classes
className -> the name of the class (science, math, etc)
primeFacil -> the facilID from the first table of a teacher who is primary facilitator
secondFacil -> the facilID from the first table of another teacher who is backup facilitator
```
I cannot figure out how to write an Inner Join that pulls up the results in this format:
```
Column 1: Class Name
Column 2: Primary Facilitator's Last Name
Column 3: Primary Facilitator's First Name
Column 4: Secondary Facilitator's Last Name
Column 5: Secondary Facilitator's First Name
```
I am able to pull up and get the correct results if I only request the primary facilitator by itself or only request the secondary facilitator by itself. I cannot get them both to work out, though.
This is my working Inner Join:
```
SELECT tbl_facilitatorClasses.className,
tbl_facilitators.facilLname, tbl_facilitators.facilFname
FROM tbl_facilitatorClasses
INNER JOIN tbl_facilitators
ON tbl_facilitatorClasses.primeFacil = tbl_facilitators.facilID;
```
Out of desperation I also tried a Union, but it didn't work out as I had hoped. Your help is greatly appreciated. I'm really struggling to make any progress at this point. I don't often work with SQL.
**SOLUTION**
Thanks to @philipxy I came up with the following query which ended up working:
```
SELECT tblCLS.className,
tblP.facilLname, tblP.facilFname, tblS.facilLname, tblS.facilFname
FROM (tbl_facilitatorClasses AS tblCLS
INNER JOIN tbl_facilitators AS tblP
ON tblCLS.primeFacil=tblP.facilID)
INNER JOIN tbl_facilitators AS tblS
ON tblCLS.secondFacil=tblS.facilID;
```
When performing multiple Inner Joins in MS Access, parentheses are needed, [as described in this other post.](https://stackoverflow.com/questions/19367565/sql-inner-join-wih-multiple-table)
|
(The following applies when every row is SQL DISTINCT, and outside SQL code similarly treats NULL like just another value.)
Every base table has a statement template, aka *predicate*, parameterized by column names, by which we put a row in or leave it out. We can use a (standard predicate logic) shorthand for the predicate that is like its SQL declaration.
```
-- facilitator [facilID] is named [facilFname] [facilLname]
facilitator(facilID, facilLname, facilFname)
-- class [classID] named [className] has prime [primeFacil] & backup [secondFacil]
class(classID, className, primeFacil, secondFacil)
```
Plugging a row into a predicate gives a statement aka *proposition*. The rows that make a true proposition go in a table and the rows that make a false proposition stay out. (So a table states the proposition of each present row *and* states NOT the proposition of each absent row.)
```
-- facilitator f1 is named Jane Doe
facilitator(f1, 'Jane', 'Doe')
-- class c1 named CSC101 has prime f1 & backup f8
class(c1, 'CSC101', f1, f8)
```
But *every table expression value* has a predicate per its expression. SQL is designed so that if tables `T` and `U` hold the (NULL-free non-duplicate) rows where T(...) and U(...) (respectively) then:
* `T CROSS JOIN U` holds rows where T(...) AND U(...)
* `T INNER JOIN U ON`*`condition`* holds rows where T(...) AND U(...) AND *condition*
* `T LEFT JOIN U ON`*`condition`* holds rows where (for U-only columns U1,...)
T(...) AND U(...) AND *condition*
OR T(...)
AND NOT there EXISTS values for U1,... where [U(...) AND *condition*]
AND U1 IS NULL AND ...
* `T WHERE`*`condition`* holds rows where T(...) AND *condition*
* `T INTERSECT U` holds rows where T(...) AND U(...)
* `T UNION U` holds rows where T(...) OR U(...)
* `T EXCEPT U` holds rows where T(...) AND NOT U(...)
* `SELECT DISTINCT * FROM T` holds rows where T(...)
* `SELECT DISTINCT`*`columns to keep`*`FROM T` holds rows where
there EXISTS values for *columns to drop* where T(...)
* `VALUES (C1, C2, ...)((`*`v1`*`,`*`v2`*`, ...), ...)` holds rows where
C1 = *v1* AND C2 = *v2* AND ... OR ...
Also:
* `(...) IN T` means T(...)
* *`scalar`*`= T` means T(*scalar*)
* T(..., X, ...) AND X = Y means T(..., Y, ...) AND X = Y
So to query we find a way of phrasing the predicate for the rows that we want in natural language using base table predicates, then in shorthand using base table predicates, then in shorthand using aliases in column names except for output columns, then in SQL using base table names plus ON & WHERE conditions etc. If we need to mention a base table twice then we give it aliases.
```
-- natural language
there EXISTS values for classID, primeFacil & secondFacil where
class [classID] named [className]
has prime [primeFacil] & backup [secondFacil]
AND facilitator [primeFacil] is named [pf.facilFname] [pf.facilLname]
AND facilitator [secondFacil] is named [sf.facilFname] [sf.facilLname]
-- shorthand
there EXISTS values for classID, primeFacil & secondFacil where
class(classID, className, primeFacil, secondFacil)
AND facilitator(pf.facilID, pf.facilLname, pf.facilFname)
AND pf.facilID = primeFacil
AND facilitator(sf.facilID, sf.facilLname, sf.facilFname)
AND sf.facilID = secondFacil
-- shorthand using aliases everywhere but result
-- use # to distinguish same-named result columns in specification
there EXISTS values for c.*, pf.*, sf.* where
className = c.className
AND facilLname#1 = pf.facilLname AND facilFname#1 = pf.facilFname
AND facilLname#2 = sf.facilLname AND facilFname#2 = sf.facilFname
AND class(c.classID, c.className, c.primeFacil, c.secondFacil)
AND facilitator(pf.facilID, pf.facilLname, pf.facilFname)
AND pf.facilID = c.primeFacil
AND facilitator(sf.facilID, sf.facilLname, sf.facilFname)
AND sf.facilID = c.secondFacil
-- table names & SQL (with MS Access parentheses)
SELECT className, pf.facilLname, pf.facilFname, sf.facilLname, sf.facilFname
FROM (class JOIN facilitator AS pf ON pf.facilID = primeFacil)
JOIN facilitator AS sf ON sf.facilID = secondFacil
```
OUTER JOIN would be used when a class doesn't always have both facilitators or something doesn't always have all names. (Ie if a column can be NULL.) But you haven't given the specific predicates for your base table and query or the business rules about when things might be NULL so I have assumed no NULLs.
[Is there any rule of thumb to construct SQL query from a human-readable description?](https://stackoverflow.com/a/33952141/3404097)
(Re MS Access JOIN parentheses see [this from SO](https://stackoverflow.com/a/7855015/3404097) and [this from MS](http://msdn.microsoft.com/en-us/library/bb243855%28v=office.12%29.aspx).)
|
I would do it as above, by joining to the tbl\_facilitators table twice, but you might want to make sure that every class really does have a 2nd facilitator; if not, the second join should be an outer join instead. Indeed, it might be safer to assume that it's not a required field.
|
How to get matching data from another SQL table for two different columns: Inner Join and/or Union?
|
[
"",
"sql",
"ms-access",
"inner-join",
"union",
"self-join",
""
] |
I have a fairly complex requirement I would like to solve using SQL in a Postgres DB. I'm sure this would be addressed in any order management system however I cannot find anything of a similar nature.
I have the following table (and values):
```
CREATE TABLE TABLE1 (
ID varchar(8),
ORIG_ID varchar(8),
STATUS varchar(8),
VALIDITY varchar(8)
);
INSERT INTO TABLE1
(ID, ORIG_ID, STATUS, VALIDITY)
VALUES
('1', '1', 'REPLACED','DAY'),
('2', '1', 'REPLACED','DAY'),
('3', '1', 'FILLED','DAY'),
('4', '4', 'REJECTED','DAY'),
('5', '5', 'PARTIAL','GTC'),
('6', '6', 'EXPIRED','GTD'),
('7', '7', 'REPLACED','GTD'),
('8', '7', 'PARTIAL','GTD'),
('9', '9', 'FILLED', 'GTD'),
('10', '10', 'NEW', 'DAY'),
('11', '11', 'NEW', 'GTD'),
('12', '12', 'DFD', 'GTD'),
('13', '13', 'REPLACED', 'GTD'),
('14', '13', 'FILLED', 'GTD')
;
```
N.B -
* Please ignore the data types on the fields
* The final table may have thousands of entries to process
* The above can be pasted directly into SQL Fiddle if required (PostgreSQL 9.3.1)
The requirements I have are:
Delete all entries that have a STATUS of either:
```
FILLED, EXPIRED, REJECTED, CANCELLED
PARTIAL/NEW - If the VALIDITY is not GTD/GTC (i.e. only DAY)
REPLACED - Unless there are other entries with the same ORIG_ID in a PARTIAL/NEW STATUS and not GTD/GTC (still working orders)
TBD - To Be Deleted:
TBD ('1', '1', 'REPLACED','DAY'),
TBD ('2', '1', 'REPLACED','DAY'),
TBD ('3', '1', 'FILLED','DAY'),
TBD ('4', '4', 'REJECTED','DAY'),
('5', '5', 'PARTIAL','GTC'),
TBD ('6', '6', 'EXPIRED','GTD'),
('7', '7', 'REPLACED','GTD'),
('8', '7', 'PARTIAL','GTD'),
TBD ('9', '9', 'FILLED', 'GTD'),
TBD ('10', '10', 'NEW', 'DAY'),
('11', '11', 'NEW', 'GTD'),
('12', '12', 'DFD', 'GTD'),
TBD ('13', '13', 'REPLACED', 'GTD'),
TBD ('14', '13', 'FILLED', 'GTD')
```
I've tried looking and the closest I could find was the following:
[Delete with join on the same table and limit clause](https://stackoverflow.com/questions/18651992/delete-with-join-on-the-same-table-and-limit-clause)
However I couldn't get it to work while incorporating the requirements above.
As this will be run at the end of day, I have had a few thoughts, such as changing all entries with a VALIDITY of DAY to a STATUS of EXPIRED and then deleting them all; but then I still hit the issue of the STATUS of the GTD/GTC orders. I'm also unsure if this would be faster than handling it all under the same logic.
Any help (or new ideas) would be appreciated on how to tackle this issue.
|
```
Delete from table1
WHERE status in ('FILLED','EXPIRED','REJECTED','CANCELLED')
OR (status in ('PARTIAL','NEW') AND validity not in ('GTD','GTC'))
OR (status = 'REPLACED' and orig_ID not in
(select ORig_ID from table1 where status in ('PARTIAL','NEW')));
```
<http://sqlfiddle.com/#!15/9e465/28/0>
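The combined conditions can be verified against the question's sample data on any engine; a SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE table1 (id TEXT, orig_id TEXT, status TEXT, validity TEXT);
INSERT INTO table1 VALUES
  ('1','1','REPLACED','DAY'),  ('2','1','REPLACED','DAY'),
  ('3','1','FILLED','DAY'),    ('4','4','REJECTED','DAY'),
  ('5','5','PARTIAL','GTC'),   ('6','6','EXPIRED','GTD'),
  ('7','7','REPLACED','GTD'),  ('8','7','PARTIAL','GTD'),
  ('9','9','FILLED','GTD'),    ('10','10','NEW','DAY'),
  ('11','11','NEW','GTD'),     ('12','12','DFD','GTD'),
  ('13','13','REPLACED','GTD'),('14','13','FILLED','GTD');
DELETE FROM table1
WHERE status IN ('FILLED','EXPIRED','REJECTED','CANCELLED')
   OR (status IN ('PARTIAL','NEW') AND validity NOT IN ('GTD','GTC'))
   OR (status = 'REPLACED' AND orig_id NOT IN
        (SELECT orig_id FROM table1 WHERE status IN ('PARTIAL','NEW')));
""")
remaining = [r[0] for r in
             c.execute("SELECT id FROM table1 ORDER BY CAST(id AS INTEGER)")]
print(remaining)  # exactly the rows not marked TBD in the question
```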
|
What is your PostgreSQL version?
Anyway, here is what I came up with:
```
delete from TABLE1 where
STATUS in ('FILLED','EXPIRED','REJECTED','CANCELLED') or
STATUS in (select STATUS from TABLE1 where STATUS in ('PARTIAL','NEW')
and VALIDITY!='DAY') OR
STATUS in (select status from TABLE1 where STATUS ='REPLACED'
and orig_ID not in
(select ORig_ID from table1 where status in ('PARTIAL','NEW')))
```
<http://sqlfiddle.com/#!15/9e465/54>
|
Deleting rows in a single database table based on the values of other rows
|
[
"",
"sql",
"postgresql",
"postgresql-9.3",
""
] |
I have the query below; I want it to sort the data by `id`, but it doesn't sort as expected.
```
Select distinct ec.category,ec.id
from print ec
order by ec.id asc
```
What could be the reason?
this is the output :

|
Looking at your data, the column data type is a `varchar`, aka 'text'.
If it is text, it sorts like text, according to the place the character occurs in the character set used.
So each value is sorted on its first character, then the second, etc. That is why `2` comes after `11`.
Either make the column a numeric data type, like `number`, or use `to_number` in the sorting:
```
select distinct ec.category,ec.id
from print ec
order by to_number(ec.id)
```
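The behaviour is easy to reproduce; in SQLite, `CAST(... AS INTEGER)` plays the role of Oracle's `to_number`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE print_t (id TEXT);
INSERT INTO print_t VALUES ('1'), ('2'), ('10'), ('11');
""")

as_text = [r[0] for r in c.execute("SELECT id FROM print_t ORDER BY id")]
as_num  = [r[0] for r in c.execute(
    "SELECT id FROM print_t ORDER BY CAST(id AS INTEGER)")]
print(as_text)  # lexicographic: '10' and '11' sort before '2'
print(as_num)   # numeric order after casting
```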
|
The difference lies in the way `varchar` and `number` are sorted. In your case, since you have used the `varchar` data type to store numbers, the sorting is done on the `ASCII` values.
**NUMBERS when sorted as STRING**
```
SQL> WITH DATA AS(
2 SELECT LEVEL rn FROM dual CONNECT BY LEVEL < = 11
3 )
4 SELECT rn, ascii(rn) FROM DATA
5 order by ascii(rn)
6 /
RN ASCII(RN)
---------- ----------
1 49
11 49
10 49
2 50
3 51
4 52
5 53
6 54
7 55
8 56
9 57
11 rows selected.
SQL>
```
As you can see, the sorting is based on the `ASCII` values.
**NUMBER when sorted as a NUMBER itself**
```
SQL> WITH DATA AS(
2 SELECT LEVEL rn FROM dual CONNECT BY LEVEL < = 11
3 )
4 SELECT rn, ascii(rn) FROM DATA
5 ORDER BY rn
6 /
RN ASCII(RN)
---------- ----------
1 49
2 50
3 51
4 52
5 53
6 54
7 55
8 56
9 57
10 49
11 49
11 rows selected.
SQL>
```
**How to fix the issue?**
Change the data type to `NUMBER`. As a workaround, you could use `to_number`.
Using `to_number` -
```
SQL> WITH DATA AS(
2 SELECT to_char(LEVEL) rn FROM dual CONNECT BY LEVEL < = 11
3 )
4 SELECT rn, ascii(rn) FROM DATA
5 ORDER BY to_number(rn)
6 /
RN ASCII(RN)
--- ----------
1 49
2 50
3 51
4 52
5 53
6 54
7 55
8 56
9 57
10 49
11 49
11 rows selected.
SQL>
```
|
why Order by does not sort?
|
[
"",
"sql",
"oracle",
""
] |
I have the following SQL statement that shows the total sales amount for customers in cities who have placed at least 2 orders. But let's say I only want to show the city/cities where someone has placed at least 2 orders, and two customers live in that city. What I want to do is pick out the city/cities where a customer has placed at least 2 orders, but then show the total sales amount for both customers living in that city, even if the other customer has only placed one order. How is that done? Should there be a comparison statement for the COUNT operation to be able to show the sales amount for all the customers in those cities, and if so, how is it stated?
```
SELECT c.CityName, SUM(p.Price * o2.Orderquantity) AS 'TotalSalesAmount'
FROM Customers c, Order1 o1, Orderrader o2, Products p,
(SELECT o1.CustomerNr
FROM Order1 o1
GROUP BY o1.CustomerNr
HAVING COUNT(o1.CustomerNr) >= 2)
AS a WHERE c.CustomerNr = a.CustomerNr AND c.CustomerNr = o1.CustomerNr
AND o1.Ordernr = o2.Ordernr AND o2.Productnr = p.Productnr
GROUP BY c.CityName
```
The structures of the tables I work with look like:
```
'Customers' has the columns: CustomerNr, City Name
'Order1' has columns: Ordernr, CustomerNr
'Order2' has columns: Ordernr, Productnr, Order quantity
'Products' has columns: ProductNr, Price
```
Sample data: Sample data for the table 'Customers':
```
- CustomerNr CityName:
- 01109 New York
- 01999 Los Angeles
- 20090 New York
```
Sample data for 'Order1':
```
- Ordernr CustomerNr
- 1306 01109
- 1307 01109
- 1308 20090
```
Sample data for 'Order2':
```
- OrderNr ProductNr Order quantity:
- 1306 15-116 3
- 1306 46-701 2
- 1307 15-96 1
- 1308 17-91 1
```
(etc...)
Sample data for 'Products':
```
- ProductNr Price:
- 15-116 44.00
- 15-96 28.50
- 46-701 3000.00
- 17-91 200.00
```
etc...
According to the SQL statement and sample data above, the desired result I want is:
```
- CityName TotalSalesAmount:
- New York 6360.50
```
|
Try using a subquery. I know this looks a bit nasty, but it should work.
**Turns out that you have a duplicate Ordernr (1306) for the same CustomerNr in your Order1 example. I am assuming that this might not be the case in your real data.**
```
SELECT c.CityName,
(Select SUM(order2.quantity * products.Price) from order1
INNER JOIN Customers On Customers.CustomerNr=order1.CustomerNr
INNER JOIN Order2 ON Order2.Ordernr=Order1.Ordernr
INNER JOIN Products ON Products.ProductNr=Order2.ProductNr
WHERE Customers.CityName=c.CityName) AS 'TotalSalesAmount'
FROM Order1 o1
INNER JOIN (SELECT o1.CustomerNr
FROM Order1 o1
GROUP BY o1.CustomerNr
HAVING COUNT(o1.CustomerNr) >= 2
) AS a ON o1.CustomerNr = a.CustomerNr
INNER JOIN Order2 o2 ON o1.Ordernr = o2.Ordernr
INNER JOIN Customers c ON o1.CustomerNr = c.CustomerNr
INNER JOIN Products P ON o2.ProductNr = P.ProductNr
GROUP BY c.CityName;
```
|
I have interpreted your question as "I need the sum of ALL sales by city, for any city where a single customer has placed more than one order."
```
select
c.City,
SUM(p.Price * o2.Orderquantity) AS 'TotalSalesAmount'
from
(select c.City
from @t_Customers c
inner join @t_Order1 o1
on o1.CustomerNr = c.CustomerNr
group by c.City
having count(c.City) > 1) as ct
inner join @t_Customers c
on c.City = ct.City
inner join @t_Order1 o1
on o1.CustomerNr = c.CustomerNr
inner join @t_Order2 o2
on o2.OrderNr = o1.Ordernr
inner join @t_Products p
on p.ProductNr = o2.ProductNr
group by c.City
```
|
Total sales amount for customers in cities
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like so:
```
id min max version data
1 1 10 1 a
2 11 20 1 b
3 21 30 1 c
4 1 10 2 a
5 11 20 2 b
6 21 30 2 c
```
min, max represent values of `key`. Each `(min, max)` row within the given version is guaranteed to have mutually exclusive key intervals.
Suppose I have a key value of 5 and I want the latest version of the data for that key. This means I want to select the row with id = 4.
Normally I want to select the set with the latest version, but sometimes I may specify the version number explicitly.
What I have now is this:
```
select * from range_table where 5 between `min` and `max` and ver = 2;
```
**Question:** is there a way to select the max version automatically (max `ver`), without specifying it explicitly, and efficiently (by "efficiently" I mean without examining all table rows)?
**To Recreate Table**
```
drop table range_table;
CREATE TABLE `range_table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`min` int(11) NOT NULL,
`max` int(11) NOT NULL,
`ver` int(11) NOT NULL default 1,
`data` CHAR NOT NULL,
PRIMARY KEY (`id`),
unique key ver_min_max(ver, `min`, `max`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
insert into range_table (`min`,`max`, ver, data) values
(1, 10, 1, 'a'),
(11, 20, 1, 'b'),
(21, 30, 1, 'c'),
(1, 10, 2, 'a'),
(11, 20, 2, 'b'),
(21, 30, 2, 'd');
```
|
You could take the first row ordered by ver desc...
```
select * from range_table where 5 between `min` and `max` order by ver desc limit 1;
```
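A quick SQLite sketch of the same idea, trimmed to the relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE range_table (min INTEGER, max INTEGER, ver INTEGER, data TEXT);
INSERT INTO range_table VALUES
  (1, 10, 1, 'a'), (11, 20, 1, 'b'),
  (1, 10, 2, 'a'), (11, 20, 2, 'b');
""")

# Among the intervals containing key 5, take the one from the highest version.
row = c.execute("""
    SELECT ver, data FROM range_table
    WHERE 5 BETWEEN min AND max
    ORDER BY ver DESC
    LIMIT 1
""").fetchone()
print(row)
```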
|
If you care about performance then, depending on the size and/or selectivity of the columns, you can add an index on the `min` or `max` column. If the number of versions remains low for each min/max range, your query will be well optimized.
|
Efficiently SELECT a DB row marked as "latest version" and while matching a given interval
|
[
"",
"mysql",
"sql",
"range-query",
""
] |
Here is the query I'm running to get the counts of each value, for each key of a given object:
```
SELECT key, value, COUNT(value)
FROM keyval kv
WHERE object_id = 123456
GROUP BY kv.key, kv.value
ORDER BY kv.key, kv.value;
```
The table I'm querying off of is very simple. It's just:
```
object_id BIGINT
key VARCHAR(45)
value VARCHAR(45)
```
So I get values like:
```
Color Red 26
Color Blue 24
Shape Circle 14
Shape Square 12
```
So I want to filter out the results for Blue and Square, but keep the results for Red and Circle. Is this possible?
|
Try this, but beware the situation where red and blue both have a count of 26: both will be returned.
```
with cte as
(select k, v, count(v) c
 from #kvp
 where object_id = @id
 group by k, v)
select results.*
from
cte results
inner join (select k, max(c) m from cte group by k) filter
on results.k = filter.k
where
filter.m = results.c
```
|
I would use row\_number in a subquery or a cte:
```
select
[key],
value,
cnt
from (
select
[key],
value,
count(*) as cnt,
row_number() over(partition by [key] order by count(*) desc) rnk
from keyval
where object_id = 123456
group by
[key],
value
) kv
where rnk = 1
```
<http://sqlfiddle.com/#!6/4d869/5>
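Both approaches can be sketched in a few lines of Python with SQLite (an assumption for illustration; the question is tagged postgresql/redshift, but the GROUP BY plus per-key max join is portable). `key` is quoted because it is a keyword in most dialects:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE keyval (object_id INT, "key" TEXT, value TEXT);
INSERT INTO keyval VALUES
    (123456, 'Color', 'Red'),    (123456, 'Color', 'Red'),
    (123456, 'Color', 'Blue'),
    (123456, 'Shape', 'Circle'), (123456, 'Shape', 'Circle'),
    (123456, 'Shape', 'Square');
""")

# Keep only the most frequent value per key
rows = con.execute("""
    WITH counts AS (
        SELECT "key", value, COUNT(*) AS cnt
        FROM keyval
        WHERE object_id = 123456
        GROUP BY "key", value
    )
    SELECT c."key", c.value, c.cnt
    FROM counts c
    JOIN (SELECT "key", MAX(cnt) AS m FROM counts GROUP BY "key") t
      ON c."key" = t."key" AND c.cnt = t.m
    ORDER BY c."key"
""").fetchall()
print(rows)  # [('Color', 'Red', 2), ('Shape', 'Circle', 2)]
```

As noted above, ties within a key return multiple rows; the `row_number()` version breaks ties arbitrarily instead.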
|
Is it possible to maximum occurences of a value for each key in my table all in one query?
|
[
"",
"sql",
"postgresql",
"amazon-redshift",
""
] |
So I have a simple table that holds `comments` from a `user` that pertain to a specific blog `post`.
```
id | user | post_id | comment
----------------------------------------------------------
0 | john@test.com | 1001 | great article
1 | bob@test.com | 1001 | nice post
2 | john@test.com | 1002 | I agree
3 | john@test.com | 1001 | thats cool
4 | bob@test.com | 1002 | thanks for sharing
5 | bob@test.com | 1002 | really helpful
6 | steve@test.com | 1001 | spam post about pills
```
I want to get all instances where a user commented on the same post twice (meaning same `user` and same `post_id`). In this case I would return:
```
id | user | post_id | comment
----------------------------------------------------------
0 | john@test.com | 1001 | great article
3 | john@test.com | 1001 | thats cool
4 | bob@test.com | 1002 | thanks for sharing
5 | bob@test.com | 1002 | really helpful
```
I thought `DISTINCT` was what I needed but that just gives me unique rows.
|
You can use `GROUP BY` and `HAVING` to find pairs of `user` and `post_id` that have multiple entries:
```
SELECT a.*
FROM table_name a
JOIN (SELECT user, post_id
FROM table_name
GROUP BY user, post_id
HAVING COUNT(id) > 1
) b
ON a.user = b.user
AND a.post_id = b.post_id
```
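A quick way to verify the GROUP BY / HAVING approach is a small Python + SQLite sketch (an assumption for illustration, with shortened user names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE comments (id INT, user TEXT, post_id INT, comment TEXT);
INSERT INTO comments VALUES
    (0, 'john',  1001, 'great article'),
    (1, 'bob',   1001, 'nice post'),
    (2, 'john',  1002, 'I agree'),
    (3, 'john',  1001, 'thats cool'),
    (4, 'bob',   1002, 'thanks for sharing'),
    (5, 'bob',   1002, 'really helpful'),
    (6, 'steve', 1001, 'spam post about pills');
""")

# Rows whose (user, post_id) pair occurs more than once
rows = con.execute("""
    SELECT a.id, a.user, a.post_id
    FROM comments a
    JOIN (SELECT user, post_id
          FROM comments
          GROUP BY user, post_id
          HAVING COUNT(id) > 1) b
      ON a.user = b.user AND a.post_id = b.post_id
    ORDER BY a.id
""").fetchall()
print([r[0] for r in rows])  # [0, 3, 4, 5]
```

The inner query finds the duplicated pairs; the join pulls back every original row belonging to them.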
|
`DISTINCT` removes all duplicate rows, which is why you're getting unique rows.
You can try using a `CROSS JOIN` (available as of Hive 0.10 according to <https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Joins>):
```
SELECT mt.*
FROM MYTABLE mt
CROSS JOIN MYTABLE mt2
WHERE mt.user = mt2.user
AND mt.post_id = mt2.post_id
AND mt.id <> mt2.id -- exclude each row matching itself
```
The performance might not be the best though. If you wanted to sort it, use `SORT BY` or `ORDER BY`.
|
SQL - find all instances where two columns are the same
|
[
"",
"sql",
"hive",
"hiveql",
""
] |
When I need to update a table through a stored procedure I use something like the following for this which usually works for me (using SQL Server 2012):
```
UPDATE LogTable
SET title = @title,
summary = @summary,
post = @post,
departmentID = @departmentID
WHERE postID = @postID
```
However, now I have the situation that my input is not from single parameters but is instead part of an XML input (defined as `@xmlMain xml`) while postID is a separate parameter (defined as `@postID int`).
I was trying to use something like the following here but then don't know how to apply a FROM statement here (which would be something like `FROM @xmlMain.nodes('/root') AS [Xml_Tab]([Cols])` ):
```
UPDATE LogTable
SET title = [Xml_Tab].[Cols].value('(title)[1]', 'nvarchar(100)'),
summary = [Xml_Tab].[Cols].value('(summary)[1]', 'nvarchar(500)'),
post = [Xml_Tab].[Cols].value('(post)[1]', 'nvarchar(max)'),
departmentID = [Xml_Tab].[Cols].value('(departmentID)[1]', 'int')
WHERE postID = @postID
```
How can I write this properly ?
|
```
UPDATE A
SET title = b.title,
summary = b.summary,
post = b.post,
departmentID = b.departmentID
FROM LogTable A
JOIN (SELECT title=[Xml_Tab].[Cols].value('(title)[1]', 'nvarchar(100)'),
summary=[Xml_Tab].[Cols].value('(summary)[1]', 'nvarchar(500)'),
post=[Xml_Tab].[Cols].value('(post)[1]', 'nvarchar(max)'),
departmentID=[Xml_Tab].[Cols].value('(departmentID)[1]', 'int'),
PostID=[Xml_Tab].[Cols].value('(postID)[1]', 'int')
FROM @xmlMain.nodes('/root') AS [Xml_Tab]([Cols]) ) B
ON a.postID = b.postID
```
|
Try the following query and check whether it works:
```
UPDATE
LogTable
SET
title = Temp.Cols.value('@title','nvarchar(100)'),
summary = Temp.Cols.value('@summary', 'nvarchar(500)'),
post = Temp.Cols.value('@post', 'nvarchar(max)'),
departmentID = Temp.Cols.value('@departmentID', 'int')
FROM
@xmlMain.nodes('/root/[YourXMLElementName]') AS Temp(Cols)
WHERE
postID = Temp.Cols.value('@postID', 'INT')
```
|
How to use XML input for Update / Set statement in SQL Server?
|
[
"",
"sql",
"sql-server",
"xml",
"stored-procedures",
"sql-update",
""
] |
I am having an issue while using `GetDate()`; for some reason it is not returning the right time (it is 7 hours ahead of the actual time). I am using `AZURE` and the database is configured with the right location (West US). I will appreciate any help!
I tried to run this script:
```
SELECT id,
status,
AcceptedDate,
Getdate(),
Datediff(hour, AcceptedDate, Getdate())
FROM orderoffers
WHERE status = 'Accepted'
```
|
Azure SQL Databases are always UTC, regardless of the data center. You'll want to handle time zone conversion at your application.
In this scenario, since you want to compare "now" to a data column, make sure `AcceptedDate` is also stored in UTC.
[Reference](http://blogs.msdn.com/b/cie/archive/2013/07/29/manage-timezone-for-applications-on-windows-azure.aspx)
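A minimal sketch of that application-side conversion in Python, using a hypothetical UTC timestamp and a fixed UTC-7 (Pacific Daylight Time) offset for display only:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical UTC value as it would come back from the Azure database
accepted_utc = datetime(2014, 6, 1, 19, 30, tzinfo=timezone.utc)

# Convert to US Pacific Daylight Time (UTC-7) purely for presentation;
# keep the stored value in UTC
pdt = timezone(timedelta(hours=-7), name="PDT")
local_time = accepted_utc.astimezone(pdt)
print(local_time.isoformat())  # 2014-06-01T12:30:00-07:00
```

In real code you would resolve the offset from a proper time zone database (e.g. the stdlib `zoneinfo` module) rather than hard-coding it, since the offset changes with daylight saving.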
|
The SQL databases on the `Azure` cloud are pegged to `Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC)`, while many applications use `DateTime.Now`, which is the time according to the regional settings specified on the host machine.
This is not an issue when the DateTime is used only for display rather than for time spans or comparisons. However, if you migrate an existing database to SQL Azure with dates populated via `GETDATE()` or `DateTime.Now`, you will have an offset; in your case it's 7 hours during Daylight Saving Time or 8 hours during Standard Time.
|
SQL GetDate() returns wrong time
|
[
"",
"sql",
"azure-sql-database",
"getdate",
""
] |
I need help with a sql query.
I have these 2 tables:
`player_locations:`
```
ID | playerid | location <- unique key
---|-----------------------
1 | 1 | DOWNTOWN
```
and `users:`
```
ID | playername | [..]
----|--------------------
1 | example1 | ...
```
I need a select to get the `users.playername` from the `player_locations.playerid`. I have the unique location to get the `player_locations.playerid`.
Pseudo query:
```
SELECT playername
FROM users
WHERE id = player_locations.playerid
AND player_locations.location = "DOWNTOWN";
```
The output should be `example1`.
|
This is just a simple `INNER JOIN`. The general syntax for a [**JOIN**](http://dev.mysql.com/doc/refman/5.0/en/join.html) is:
```
SELECT stuff
FROM table1
JOIN table2 ON table1.relatedColumn = table2.relatedColumn
```
In your case, you can relate the two tables using the `id` column from users and `playerid` column from `player_locations`. You can also include your `'DOWNTOWN'` requirement in the `JOIN` statement. Try this:
```
SELECT u.playername
FROM users u
JOIN player_locations pl ON pl.playerid = u.id AND pl.location = 'DOWNTOWN';
```
**EDIT**
While I personally prefer the above syntax, I would like you to be aware of another way to write this which is similar to what you have now.
You can also select from multiple tables by using a comma in your `FROM` clause to separate them. Then, in your `WHERE` clause you can insert your conditions:
```
SELECT u.playername
FROM users u, player_locations pl
WHERE u.id = pl.playerid AND pl.location = 'DOWNTOWN';
```
|
Here is the solution.
```
SELECT
playername
FROM users
WHERE id IN (SELECT playerid FROM player_locations WHERE location = 'DOWNTOWN')
```
|
SQL SELECT name by id
|
[
"",
"mysql",
"sql",
"select",
"join",
""
] |
Having two issues at the moment:
1) The below example of wrapping an OS/400 API with an external SQL stored procedure, which is further wrapped in a SQL user defined table function, both compiles and runs without error, **but it returns blanks and zeroes for the job information when passing '\*' for the job name (i.e. current job). Any tips on why would be appreciated.** Note: If I pass a non-existent job, the QUSRJOBI API correctly throws an error, so the code is behaving partially correctly. If I pass a correct active job name, job user, and job number, no error occurs but blanks and zeroes are still returned. I've tried both CHAR(85) and VARCHAR(85) for RECEIVER\_VARIABLE. I'll try BINARY(85) next for RECEIVER\_VARIABLE, but converting from BINARY back to CHAR and INT return columns might prove difficult.
2) Some OS/400 API parameters call for using data structures, which DB2 SQL at V7R1 on the System i does not yet directly support (i.e. no direct support yet for structured types). However, [this article](http://www-01.ibm.com/support/docview.wss?uid=nas8N1017493) says they can be implemented using BINARY strings, but does not provide an example :(. After extensive searching, I've not been able to find an example of wrapping an OS400 api using ONLY SQL objects. **If anyone has any examples of how to form a BINARY string using SQL only that is comprised of a mixture of CHAR and other data types like especially INT, please post one.** The API error code parameter is an example where this is commonly needed. I have the ERROR\_CODE related code commented out since it generates error CPF3CF1 "Error code parameter not valid" if that code is reactivated. **If anyone can tell what is wrong with how the ERROR\_CODE binary string data structure is being formed, please let me know.** I've tried both CHAR(16) and BINARY(16) for the ERROR\_CODE structure. I've tested the current technique of how ERROR\_CODE is being formed dumping the results into a table, and viewing the table results using DSPPFM in hex mode makes it look like the "binary( hex( ERROR\_CODE\_BYTES\_PROVIDED ) )" etc. is working correctly. However, I'm missing something.
I'm aware that there are lots of examples of using RPG to wrap OS/400 api's, but I want to keep these wrappers as SQL code only.
```
create or replace procedure M_GET_JOB_INFORMATION
( out OUT_RECEIVER_VARIABLE char(85)
,in IN_LENGTH_OF_RECEIVER_VARIABLE int
,in IN_FORMAT_NAME char(8)
,in IN_QUALIFIED_JOB_NAME char(26)
,in IN_INTERNAL_JOB_IDENTIFIER char(16)
-- ,inout INOUT_ERROR_CODE binary(16)
)
program type main
external name QSYS/QUSRJOBI
parameter style general
not deterministic
modifies SQL data
specific M_JOBINFO
set option dbgview = *source
,commit = *nc
,closqlcsr = *endmod
,tgtrls = V7R1M0
;
create or replace function M_GET_JOB_INFORMATION_BASIC
( IN_JOB_NAME varchar(10)
,IN_JOB_USER varchar(10)
,IN_JOB_NUMBER varchar(6)
,IN_INTERNAL_JOB_IDENTIFIER varchar(16)
)
returns table( JOB_NAME char(10)
,JOB_USER char(10)
,JOB_NUMBER char(6)
,INTERNAL_JOB_IDENTIFIER char(16)
,JOB_STATUS char(10)
,JOB_TYPE char(1)
,JOB_SUBTYPE char(1)
,RUN_PRIORITY int
,TIME_SLICE int
,DEFAULT_WAIT int
,ELIGIBLE_FOR_PURGE char(10)
)
language SQL
specific M_JOBINFBF
not deterministic
disallow parallel
no external action
modifies SQL data
returns null on null input
not fenced
set option dbgview = *source
,commit = *nc
,closqlcsr = *endmod
,tgtrls = V7R1M0
-- ,output = *PRINT
begin
declare RECEIVER_VARIABLE char(85) default ''; --receives "JOBI0100" format output from API
declare LENGTH_OF_RECEIVER_VARIABLE int default 85; --length of "JOBI0100" Format
declare FORMAT_NAME char(8) default 'JOBI0100'; --basic job information
declare QUALIFIED_JOB_NAME char(26);
declare INTERNAL_JOB_IDENTIFIER char(16);
declare ERROR_CODE binary(16);
--ERROR_CODE "ERRC0100" Format:
declare ERROR_CODE_BYTES_PROVIDED int default 8; --Size of API Error Code data structure passed to API
declare ERROR_CODE_BYTES_RETURNED int default 0; --Number of exception data bytes returned by the API
declare ERROR_CODE_EXCEPTION_ID char(7) default ''; --Exception / error message ID returned by the API
declare ERROR_CODE_RESERVED char(1) default ''; --Reserved bytes
declare ERROR_CODE_EXCEPTION_DATA char(1) default ''; --Exception data returned by the API
if IN_INTERNAL_JOB_IDENTIFIER = '' then
set QUALIFIED_JOB_NAME = char( IN_JOB_NAME, 10 ) || char( IN_JOB_USER, 10 ) || char( IN_JOB_NUMBER, 6 );
set INTERNAL_JOB_IDENTIFIER = '';
else
set QUALIFIED_JOB_NAME = '*INT';
set INTERNAL_JOB_IDENTIFIER = IN_INTERNAL_JOB_IDENTIFIER;
end if;
set ERROR_CODE = binary( hex( ERROR_CODE_BYTES_PROVIDED ) ) ||
binary( hex( ERROR_CODE_BYTES_RETURNED ) ) ||
binary( ERROR_CODE_EXCEPTION_ID ) ||
binary( ERROR_CODE_RESERVED )
-- || binary( ERROR_CODE_EXCEPTION_DATA )
;
call M_GET_JOB_INFORMATION
( RECEIVER_VARIABLE --out
,LENGTH_OF_RECEIVER_VARIABLE --in
,FORMAT_NAME --in
,QUALIFIED_JOB_NAME --in
,INTERNAL_JOB_IDENTIFIER --in
-- ,ERROR_CODE --in/out --Results in error CPF3CF1 "Error code parameter not valid" if code line reactivated
);
return values( char( substr( RECEIVER_VARIABLE, 8, 10 ), 10 ) --JOB_NAME
,char( substr( RECEIVER_VARIABLE, 18, 10 ), 10 ) --JOB_USER
,char( substr( RECEIVER_VARIABLE, 28, 6 ), 6 ) --JOB_NUMBER
,char( substr( RECEIVER_VARIABLE, 28, 16 ), 16 ) --INTERNAL_JOB_IDENTIFIER
,char( substr( RECEIVER_VARIABLE, 50, 10 ), 10 ) --JOB_STATUS
,char( substr( RECEIVER_VARIABLE, 60, 1 ), 1 ) --JOB_TYPE
,char( substr( RECEIVER_VARIABLE, 61, 1 ), 1 ) --JOB_SUBTYPE
,case when substr( RECEIVER_VARIABLE, 64, 4 ) = ''
then 0
else int( substr( RECEIVER_VARIABLE, 64, 4 ) )
end --RUN_PRIORITY
,case when substr( RECEIVER_VARIABLE, 68, 4 ) = ''
then 0
else int( substr( RECEIVER_VARIABLE, 68, 4 ) )
end --TIME_SLICE
,case when substr( RECEIVER_VARIABLE, 72, 10 ) = ''
then 0
else int( substr( RECEIVER_VARIABLE, 72, 4 ) )
end --DEFAULT_WAIT
,char( substr( RECEIVER_VARIABLE, 76, 10 ), 10 ) --ELIGIBLE_FOR_PURGE
)
;
end
;
select * from table( M_GET_JOB_INFORMATION_BASIC( '*', '', '', '' ) ) as JOB_INFO
;
```
|
I use this on i 6.1 to call the QDBRTVFD API:
```
CREATE PROCEDURE SQLEXAMPLE.DBRTVFD (
INOUT FD CHAR(1024) ,
IN SZFD INTEGER ,
INOUT RTNFD CHAR(20) ,
IN FORMAT CHAR(8) ,
IN QF CHAR(20) ,
IN "RCDFMT" CHAR(10) ,
IN OVRPRC CHAR(1) ,
IN SYSTEM CHAR(10) ,
IN FMTTYP CHAR(10) ,
IN ERRCOD CHAR(8) )
LANGUAGE CL
SPECIFIC SQLEXAMPLE.DBRTVFD
NOT DETERMINISTIC
NO SQL
CALLED ON NULL INPUT
EXTERNAL NAME 'QSYS/QDBRTVFD'
PARAMETER STYLE GENERAL ;
```
First, the default is `LANGUAGE C`, and you probably don't want that for QUSRJOBI which is an OPM program. CL-language parameter passing can be a better choice for predictability here.
Also, you probably want to set this as `NO SQL` rather than `modifies SQL data` since you aren't modifying SQL data. It might be necessary to remove the `SET OPTION` in order to get things down to the minimum.
If you make those changes for your M\_GET\_JOB\_INFORMATION procedure, see if it returns useful values. If it doesn't, we can dig a little deeper.
---
For your particular API, I used this code to test results on i 6.1:
```
CREATE PROCEDURE SQLEXAMPLE.M_GET_JOB_INFORMATION (
INOUT OUT_RECEIVER_VARIABLE CHAR(85) ,
IN IN_LENGTH_OF_RECEIVER_VARIABLE INTEGER ,
IN IN_FORMAT_NAME CHAR(8) ,
IN IN_QUALIFIED_JOB_NAME CHAR(26) ,
IN IN_INTERNAL_JOB_IDENTIFIER CHAR(16) ,
IN IN_ERROR_CODE CHAR(8) )
LANGUAGE CL
SPECIFIC SQLEXAMPLE.M_JOBINFO
NOT DETERMINISTIC
NO SQL
CALLED ON NULL INPUT
EXTERNAL NAME 'QSYS/QUSRJOBI'
PARAMETER STYLE GENERAL ;
```
A basic wrapper was created like so:
```
CREATE PROCEDURE SQLEXAMPLE.GENRJOBI (
INOUT JOBI VARCHAR(85) ,
IN QJOB VARCHAR(26) )
LANGUAGE SQL
SPECIFIC SQLEXAMPLE.GENRJOBI
NOT DETERMINISTIC
MODIFIES SQL DATA
CALLED ON NULL INPUT
SET OPTION ALWBLK = *ALLREAD ,
ALWCPYDTA = *OPTIMIZE ,
COMMIT = *NONE ,
DBGVIEW = *LIST ,
CLOSQLCSR = *ENDMOD ,
DECRESULT = (31, 31, 00) ,
DFTRDBCOL = *NONE ,
DLYPRP = *NO ,
DYNDFTCOL = *NO ,
DYNUSRPRF = *USER ,
RDBCNNMTH = *RUW ,
SRTSEQ = *HEX
P1 : BEGIN
DECLARE JOBII CHAR ( 85 ) ;
DECLARE SZJOBI INTEGER ;
DECLARE FORMATI CHAR ( 8 ) ;
DECLARE QJOBI CHAR ( 26 ) ;
DECLARE JOBIDI CHAR ( 16 ) ;
DECLARE ERRCODI CHAR ( 8 ) ;
DECLARE STKCMD CHAR ( 10 ) ;
SET JOBII = X'00000000' ;
SET SZJOBI = 85 ;
SET FORMATI = 'JOBI0100' ;
SET QJOBI = QJOB ;
SET JOBIDI = ' ' ;
SET ERRCODI = X'0000000000000000' ;
SET STKCMD = '*LOG' ;
CALL SQLEXAMPLE . M_GET_JOB_INFORMATION ( JOBII , SZJOBI , FORMATI , QJOBI , JOBIDI , ERRCODI ) ;
CALL SQLEXAMPLE . LOGSTACK ( STKCMD ) ;
SET JOBI = JOBII ;
END P1 ;
```
The wrapper only provides an example of calling the API proc. It does nothing with the returned structure from the API except pass it back out to its caller. Your original question included bits of code to extract sub-fields from a structure, so I didn't see a point to putting similar code here.
The two procs were tested in iNav's 'Run SQL Scripts' to grab info about an interactive job I was running, and the result looked like this:

The output area shows the structure in characters, and the integer sub-fields can be seen mixed with character sub-fields. Deconstruct the structure as needed. I might create an additional proc that takes the structure as input and returns individual structure elements.
|
Were you aware that IBM has already wrapped the Get Job Info API?
All Services by version / release
<https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/DB2%20for%20i%20-%20Services>
Get Job Info
<https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/QSYS2.GET_JOB_INFO()%20%E2%80%93%20user%20defined%20table%20function>
|
Looking for a working example of any OS/400 API wrapped in an external SQL stored procedure wrapped in a user defined SQL function
|
[
"",
"sql",
"api",
"stored-procedures",
"db2",
"ibm-midrange",
""
] |
I am building a SQL query over a large data set, but the query is too slow.
I've got 3 tables; `movies`, `movie_categories`, `skipped_movies`
The `movies` table is normalized and I am trying to query a movie based on a category while excluding ids from `skipped_movies` table.
However, I am trying to use WHERE IN and WHERE NOT IN in my query.
`movies` table has approx. 2 million rows (id, name, score)
`movie_categories` approx. 5 million (id, movie\_id, category\_id)
`skipped_movies` has approx. 1k rows (id, movie\_id, user\_id)
When the `skipped_movies` table is very small 10 - 20 rows the query is quite fast. (about 40 - 50 ms) but when the table gets somewhere around 1k of data I get somewhere around 7 to 8 seconds on the query.
This is the query I'm using.
```
SELECT SQL_NO_CACHE *
FROM movies
WHERE id IN (SELECT movie_id FROM movie_categories WHERE category_id = 1)
  AND id NOT IN (SELECT movie_id FROM skipped_movies WHERE user_id = 1)
  AND score <= 9
ORDER BY score DESC
LIMIT 1;
```
I've tried many ways that came to mind but this was the fastest one. I even tried the `EXISTS` method, to no avail.
I'm using the SQL\_NO\_CACHE just for testing.
And I guess that the ORDER BY statement is running very slow.
|
Assuming that (movie\_id, category\_id) is unique in the movie\_categories table, I'd get the specified result using join operations rather than subqueries.
To exclude "skipped" movies, an anti-join pattern would suffice... that's a left outer join to find matching rows in skipped\_movies, and then a predicate in the WHERE clause to exclude any matches found, leaving only rows that didn't have a match.
```
SELECT SQL_NO_CACHE m.*
FROM movies m
JOIN movie_categories c
ON c.movie_id = m.id
AND c.category_id = 1
LEFT
JOIN skipped_movies s
ON s.movie_id = m.id
AND s.user_id = 1
WHERE s.movie_id IS NULL
AND m.score <= 9
ORDER
BY m.score DESC
LIMIT 1
```
And appropriate indexes will likely improve performance...
```
... ON movie_categories (category_id, movie_id)
... ON skipped_movies (user_id, movie_id)
```
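The anti-join pattern is easy to check on a toy data set with Python + SQLite (an assumption for illustration; the question is MySQL, but `LEFT JOIN ... WHERE ... IS NULL` behaves the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE movies (id INT, name TEXT, score INT);
CREATE TABLE movie_categories (movie_id INT, category_id INT);
CREATE TABLE skipped_movies (movie_id INT, user_id INT);
INSERT INTO movies VALUES (1, 'A', 9), (2, 'B', 8), (3, 'C', 7);
INSERT INTO movie_categories VALUES (1, 1), (2, 1), (3, 2);
INSERT INTO skipped_movies VALUES (1, 1);  -- user 1 skipped movie 1
""")

# Best-scored category-1 movie that user 1 has not skipped
row = con.execute("""
    SELECT m.*
    FROM movies m
    JOIN movie_categories c
      ON c.movie_id = m.id AND c.category_id = 1
    LEFT JOIN skipped_movies s
      ON s.movie_id = m.id AND s.user_id = 1
    WHERE s.movie_id IS NULL
      AND m.score <= 9
    ORDER BY m.score DESC
    LIMIT 1
""").fetchone()
print(row)  # (2, 'B', 8)
```

Movie 1 is excluded by the anti-join even though it has the higher score, so movie 2 wins.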
|
Most IN/NOT IN queries can be expressed using JOIN/LEFT JOIN, which usually gives the best performance.
Convert your query to use joins:
```
SELECT m.*
FROM movies m
JOIN movie_categories mc ON m.id = mc.movie_id AND mc.category_id = 1
LEFT JOIN skipped_movies sm ON m.id = sm.movie_id AND sm.user_id = 1
WHERE sm.movie_id IS NULL
AND score <= 9
ORDER BY score DESC
LIMIT 1
```
|
Mysql very slow subquery optimizing
|
[
"",
"mysql",
"sql",
"database",
"performance",
"subquery",
""
] |
I would like to update the table using case statement the query is like this...
```
select case(@columnname) when 'name1' then
begin
update table
set
pay1=pay1* 100
pay2=pay2*20
pay3=pay3* 100
end
when 'name2' then
begin
update table
set
pay2=pay2*20
pay3=pay3* 100
end
when 'name3' then
begin
update table
set
pay3=pay3* 100
end
end
```
Can you please tell me the correct logic to complete the query using a CASE statement?
|
You'll have to swap the syntax around. The case statement will be applied for every value you want to update...
```
UPDATE table SET
pay1 = CASE WHEN @columnname IN('name1') THEN pay1 * 100 ELSE pay1 END,
pay2 = CASE WHEN @columnname IN('name1', 'name2') THEN pay2 * 20 ELSE pay2 END,
pay3 = CASE WHEN @columnname IN('name1', 'name2', 'name3') THEN pay3 * 100 ELSE pay3 END
```
It looks like what you actually want is an IF statement:
```
IF @columnname = 'name1'
UPDATE table SET pay1 = pay1 * 100, pay2=pay2*20, pay3=pay3* 100
ELSE IF @ColumnName = 'name2'
UPDATE table SET pay2 = pay2 * 20, pay3 = pay3 * 100
ELSE IF @ColumnName = 'name3'
UPDATE table SET pay3 = pay3 * 100
```
Hope that helps
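The CASE-per-column form can be exercised with Python + SQLite (an assumption for illustration; `@columnname` becomes a bound parameter, table and column names are from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE pay (pay1 REAL, pay2 REAL, pay3 REAL);
INSERT INTO pay VALUES (1.0, 1.0, 1.0);
""")

columnname = 'name2'  # stand-in for the @columnname parameter
con.execute("""
    UPDATE pay SET
      pay1 = CASE WHEN :c IN ('name1')                   THEN pay1 * 100 ELSE pay1 END,
      pay2 = CASE WHEN :c IN ('name1', 'name2')          THEN pay2 * 20  ELSE pay2 END,
      pay3 = CASE WHEN :c IN ('name1', 'name2', 'name3') THEN pay3 * 100 ELSE pay3 END
""", {"c": columnname})

result = con.execute("SELECT * FROM pay").fetchone()
print(result)  # (1.0, 20.0, 100.0)
```

With `'name2'` only pay2 and pay3 change, matching the second and third branches of the IF version above.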
|
Use this:
```
update table
set pay1 = CASE WHEN @columnname = 'name1'
                THEN pay1 * 100
                ELSE pay1
           END,
    pay2 = CASE WHEN @columnname = 'name1'
                  OR @columnname = 'name2'
                THEN pay2 * 20
                ELSE pay2
           END,
    pay3 = CASE WHEN @columnname = 'name1' OR
                     @columnname = 'name2' OR
                     @columnname = 'name3'
                THEN pay3 * 100
                ELSE pay3
           END
```
|
updating multiple columns using case statement in sql server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2012",
"sql-update",
""
] |
I have below 2 tables:
**table1**
```
objName | rptName | srcTblName | srcFileName | srcDateColName
--------------------------------------------------------------
obj1 | rpt1 | srcTbl1 | srcFile1.csv| srcDate
```
**table2**
```
FileName | FileSize
------------------------
srcFile1.csv | 2009
```
The below query gives me distinct Table and Date Column names.
```
SELECT DISTINCT a.srcTblName, a.SrcDateColName
FROM table1 a
LEFT JOIN table2 b
ON a.srcFileName LIKE b.FileName
WHERE a.srcTblName is NOT NULL
AND a.srcFileName is NOT NULL
```
**Output**
```
srcTblName | srcDateColName
---------------------------------------------
tableN | EntryDate
tableO | Modified_Date
```
The second column of the output is a **COLUMN\_NAME** in **SrcTblName**, which is a **date**.
I want to find the **max(srcDateColName)** from the respective **srcTblName** in the same query.
Can anyone help me modify the above query?
|
The answers given by Hadi and Sarath seem to work, but the cursor seems to hurt performance, so I did this using a TEMP table as below:
```
DECLARE @OBJECT_NAME VARCHAR(50) = 'ObjName'
BEGIN
DECLARE @Query NVARCHAR(1000),@COUNT INT, @MAX_Count INT, @rptName VARCHAR(255)
DECLARE @tblName varchar(100),@dateColName varchar(100), @max_Date Date,@fileName varchar(100),@refID INT
DECLARE @Temp_Table TABLE([RowNumber] INT,rptName VARCHAR(500),SrcTblName VARCHAR(500), DateColName VARCHAR(100), SrcFileName VARCHAR(600), RefID INT)
INSERT INTO @Temp_Table
SELECT
ROW_NUMBER()OVER(ORDER BY rptName)
,rptName
,srcTblName
,srcDateColName
,srcFileName
,ID
FROM
table1
WHERE
objName = @OBJECT_NAME
SELECT @MAX_Count = MAX(RowNumber) FROM @Temp_Table
SET @COUNT = 1
WHILE (@COUNT <= @MAX_Count)
BEGIN
SELECT
@rptName = rptName,
@tblName = SrcTblName,
@fileName = SrcFileName,
@dateColName = DateColName,
@refID = RefID
FROM
@Temp_Table
WHERE
RowNumber = @COUNT
IF @tblName IS NOT NULL AND @fileName IS NOT NULL AND @dateColName IS NOT NULL
BEGIN
SET @Query = 'SELECT @max_Date = MAX(CONVERT(DATE, ' + @dateColName + ')) FROM ' + @tblName
EXEC SP_EXECUTESQL @Query, N'@max_Date DATE OUTPUT', @max_DATE OUTPUT
END
SET @COUNT = @COUNT + 1
END
END
```
I hope this will help someone who comes searching for a similar solution.
Thanks for your help though :)
|
Since the table name and date column names come from data stored in a table, you have to use dynamic SQL.
```
SELECT * INTO tableN FROM
(
SELECT '01/JAN/2014' EntryDate
UNION ALL
SELECT '24/JAN/2014'
UNION ALL
SELECT '13/MAR/2014'
)TAB
SELECT * INTO tableO FROM
(
SELECT '11/APR/2014' Modified_Date
UNION ALL
SELECT '18/MAY/2014'
UNION ALL
SELECT '22/JUN/2014'
)TAB
SELECT * INTO NEWTBL FROM
(
SELECT 'tableN' srcTblName,'EntryDate' srcDateColName
UNION ALL
SELECT 'tableO' ,'Modified_Date'
)TAB
```
Create a temporary table to get your result
```
CREATE TABLE #TEMP(srcTblName VARCHAR(100),srcDateColName VARCHAR(100),NEWDATE DATE)
```
Now use a cursor and execute it dynamically
```
DECLARE @TABLENAME VARCHAR(100)
DECLARE @COLUMNNAME VARCHAR(100)
-- Here you declare which all columns you need to loop in Cursor
DECLARE CUR CURSOR FOR
-- Use your SELECT here instead of SELECT srcTblName , srcDateColName FROM NEWTBL
--SELECT DISTINCT a.srcTblName, a.SrcDateColName
--FROM table1 a
--LEFT JOIN table2 b
--ON a.srcFileName LIKE b.FileName
--WHERE a.srcTblName is NOT NULL
--AND a.srcFileName is NOT NULL
SELECT srcTblName , srcDateColName
FROM NEWTBL
OPEN CUR
-- Select each tablename and datecolumn and execute dynamically
FETCH NEXT FROM CUR
INTO @TABLENAME,@COLUMNNAME
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @QRY NVARCHAR(MAX)
SET @QRY = '
INSERT INTO #TEMP
SELECT '''+@TABLENAME+''' , '''+@COLUMNNAME+''',
(SELECT MAX(CAST('+@COLUMNNAME+' AS DATE))ENDT FROM '+@TABLENAME+')AA
'
EXEC SP_EXECUTESQL @QRY
-- Fetches next record and increments the loop
FETCH NEXT FROM CUR
INTO @TABLENAME,@COLUMNNAME
END
CLOSE CUR;
DEALLOCATE CUR;
```
* **[SQL FIDDLE](http://sqlfiddle.com/#!3/dbb9d/4)**
|
Finding max(date) from the resultant query output
|
[
"",
"sql",
"sql-server-2008",
"join",
"sqldatetime",
""
] |
In SQL Server I have a table `CustomerAddr` and a column named `Stnumber`. I have non-integer values that I want to delete. I would like to delete the rows of the non-integer values. I use this query that shows the results of the non-integer values:
```
Select *
From [CustomerAddr] [Stnumber]
Where ISNUMERIC ([Stnumber]) = 0
```
I use this query:
```
Delete FROM CustomerAddr
WHERE ISNUMERIC StNumber = '0';
```
Unfortunately `ISNumeric` is the culprit. I receive an error message:
> An expression of non-boolean type specified in a context where a condition is expected, near 'Stnumber'
If I remove `ISNUMERIC` I can only delete one row at a time. Help would be great.
|
Try this. Before deleting, confirm the records using a SELECT:
```
select * From [CustomerAddr] Where ISNUMERIC ([Stnumber]) = 0
```
Then use this:
```
delete From [CustomerAddr] Where ISNUMERIC ([Stnumber]) = 0
```
|
The where clause needs to be rewritten.
> WHERE ISNUMERIC StNumber = '0';
`ISNUMERIC` is a built-in function. `StNumber` is a column that you are comparing to '0'. These are not compatible.
You can write it like this:
```
WHERE ISNUMERIC (StNumber) <> 1;
```
If you meant non-number, then I would use the above.
But hold on! The above will also keep rows containing decimals, money values, etc., because `ISNUMERIC` treats them as numeric, so it may not be what you want. You say this:
> I would like to delete the rows of the non-integer values.
If it is an integer test, `ISNUMERIC` isn't right: `ISNUMERIC` will return 1 (true) for a value that could be a decimal, and a decimal is not an integer.
This may give you what you want if you meant integer:
```
WHERE StNumber NOT LIKE '%[^0-9]%'
```
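The `LIKE '%[^0-9]%'` character-class trick is SQL Server-specific; a rough SQLite analogue (an assumption for illustration) uses `GLOB`, which supports `[^...]` classes. A Python sketch of the digits-only delete:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CustomerAddr (StNumber TEXT);
INSERT INTO CustomerAddr VALUES ('123'), ('12A'), ('4.5'), ('77');
""")

# Delete any row whose StNumber contains a non-digit character
con.execute("DELETE FROM CustomerAddr WHERE StNumber GLOB '*[^0-9]*' OR StNumber = ''")

rows = [r[0] for r in con.execute("SELECT StNumber FROM CustomerAddr ORDER BY StNumber")]
print(rows)  # ['123', '77']
```

Both `'12A'` and `'4.5'` contain a non-digit and are removed, while pure integers survive, which is exactly the stricter-than-`ISNUMERIC` behavior the answer recommends.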
|
SQL Server delete multiple rows
|
[
"",
"sql",
"sql-server",
"database",
"sql-delete",
""
] |
I have this database Diagram :

the diagram is represent a database for Insurance Company.
the `final_cost` table represent the cost that the company should paid to repair a car
the table `car` has the field `car_type` which take one of the following values (1,2,3) where 1 refers to small cars, 2 refers to trucks , 3 refers to buses
I want to retrieve the car type (1, 2, or 3) that has the maximum repair cost during the year 2013.
I wrote The following Query :
```
select innerr.car_type from (
select car_type, sum(final_cost.cost) from car_acc inner join cars on cars.car_id = car_acc.car_id
inner join final_cost on FINAL_COST.CAR_ACC_ID = car_acc.CAR_ACC_ID
where (extract(year from final_cost.fittest_date)=2013)
group by(car_type)) innerr;
```
but I don't know how to get the `car_type` with the maximum repair cost from the `innerr` sub-query!
|
You can have access to anything and everything from a subquery if you use it right. The best way to build a complicated query is to start simply, seeing what data you have and usually the answer, or the next step, will be obvious.
So let's start by displaying all the accidents for 2013. We aren't interested in the individual cars, just the most expensive accidents by type. So...
```
select c.car_type, f.cost
from car_acc a
join cars c
on c.car_id = a.car_id
join final_cost f
on f.car_acc_id = a.car_acc_id
where f.fittest_date >= date '2013-01-01'
and f.fittest_date < date '2014-01-01';
```
I've changed the filtering criteria to a sargable form for efficiency. I don't usually worry about performance early in the design of a query, but when it's this obvious, why not?
Anyway, we now have a list of all 2013 accidents, by car type and the cost of each one. So now we only have to `group by` the type and take the `Max` of the cost of each group.
```
select c.car_type, Max( f.cost ) MaxCost
from car_acc a
join cars c
on c.car_id = a.car_id
join final_cost f
on f.car_acc_id = a.car_acc_id
where f.fittest_date >= date '2013-01-01'
and f.fittest_date < date '2014-01-01'
group by c.car_type;
```
Now we have a list of car types and the most expensive accidents for that type for 2013. With only three rows in the result set, it's easy to see which is the car type we're looking for. Now we just have to isolate that one row. The easiest step from here is to use this query in a CTE.
```
with MaxPerType( car_type, MaxCost )as(
select c.car_type, Max( f.cost ) MaxCost
from car_acc a
join cars c
on c.car_id = a.car_id
join final_cost f
on f.car_acc_id = a.car_acc_id
where f.fittest_date >= date '2013-01-01'
and f.fittest_date < date '2014-01-01'
group by c.car_type
)
select m.car_type, m.MaxCost
from MaxPerType m
where m.MaxCost =(
select Max( MaxCost )
from MaxPerType );
```
So the CTE gives us the largest cost per type and the subquery in the main query gives us the largest cost overall. So the result is the type(s) that match the largest cost overall.
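The whole CTE-plus-scalar-subquery shape can be exercised with Python + SQLite on toy data (an assumption for illustration; the question is Oracle, so the `date '...'` literals become plain strings here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cars (car_id INT, car_type INT);
CREATE TABLE car_acc (car_acc_id INT, car_id INT);
CREATE TABLE final_cost (car_acc_id INT, cost REAL, fittest_date TEXT);
INSERT INTO cars VALUES (1, 1), (2, 2), (3, 3);
INSERT INTO car_acc VALUES (10, 1), (11, 2), (12, 3);
INSERT INTO final_cost VALUES
    (10, 500, '2013-05-01'), (11, 900, '2013-06-01'), (12, 300, '2013-07-01');
""")

# Max cost per type, then the type(s) matching the overall max
row = con.execute("""
    WITH MaxPerType AS (
        SELECT c.car_type, MAX(f.cost) AS MaxCost
        FROM car_acc a
        JOIN cars c ON c.car_id = a.car_id
        JOIN final_cost f ON f.car_acc_id = a.car_acc_id
        WHERE f.fittest_date >= '2013-01-01'
          AND f.fittest_date <  '2014-01-01'
        GROUP BY c.car_type
    )
    SELECT car_type, MaxCost
    FROM MaxPerType
    WHERE MaxCost = (SELECT MAX(MaxCost) FROM MaxPerType)
""").fetchone()
print(row)  # (2, 900.0)
```

Type 2 (trucks) carries the single most expensive 2013 repair, so it is the only row returned.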
|
You could try ORDER BY or, better yet, use the MAX function: <http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions085.htm>
|
Retrieve records from multiple Records returned by Sub-Query
|
[
"",
"sql",
"oracle",
"oracle11g",
"group-by",
"sql-order-by",
""
] |
```
CREATE TABLE Prescription (
pre_id NUMBER(10),
pssn CHAR(11),
phy_ssn CHAR(11),
date CHAR(11),
quantity INTEGER,
trade name CHAR(20),
pharm id CHAR(11),
drop_off_time CHAR(10),
pick_up_time CHAR(10),
status CHAR (15),
PRIMARY KEY(pre_id),
FOREIGN KEY (pssn) REFERENCES Pri_Phy_Patient,
FOREIGN KEY (phy_ssn) REFERENCES Doctor,
FOREIGN KEY (trade_name, pharm_id) REFERENCES Make_Drug);
```
I get `ORA-00904:invalid identifier` error on using the above sql statement.
The other sql statements related to the above one are:
```
CREATE TABLE Doctor (
phy_ssn CHAR(11),
name CHAR(20),
speciality CHAR(30),
exp_years CHAR(10),
PRIMARY KEY (phy_ssn));
CREATE TABLE Pri_Phy_Patient (
pssn CHAR(11),
name CHAR(20),
age INTEGER,
address CHAR(20),
phy ssn CHAR(11),
PRIMARY KEY (pssn),
FOREIGN KEY (phy_ssn) REFERENCES Doctor );
CREATE TABLE Make_Drug (
trade_name CHAR(20),
pharm_id CHAR(11),
formula VARCHAR(100)
PRIMARY KEY (trade_name, pharm_id),
FOREIGN KEY (pharm_id) REFERENCES Pharm_co);
```
Could anyone suggest what I can do about this?
|
> date CHAR(11)

`DATE` is a reserved word in SQL, so it cannot be declared as a column name the way you did.
Change the column name and it will work.
|
If you execute the statement in `SQL*Plus`, you can easily find the error and fix it.
```
SQL> CREATE TABLE Prescription (
2 pre_id NUMBER(10),
3 pssn CHAR(11),
4 phy_ssn CHAR(11),
5 date CHAR(11),
6 quantity INTEGER,
7 trade name CHAR(20),
8 pharm id CHAR(11),
9 drop_off_time CHAR(10),
10 pick_up_time CHAR(10),
11 status CHAR (15),
12 PRIMARY KEY(pre_id),
13 FOREIGN KEY (pssn) REFERENCES Pri_Phy_Patient,
14 FOREIGN KEY (phy_ssn) REFERENCES Doctor,
15 FOREIGN KEY (trade_name, pharm_id) REFERENCES Make_Drug);
date CHAR(11),
*
ERROR at line 5:
ORA-00904: : invalid identifier
```
1. You cannot use `DATE` as a column name directly. Either enclose it in double quotation marks (`"date"`) or, better, avoid using the keyword altogether.
There are more issues with the `create table` script:
2. A column name cannot contain a space, so `trade name` is an invalid column name. Change it to `trade_name`.
3. Similarly, change `pharm id` to `pharm_id`.
4. Why do you use the `CHAR` data type? It is always blank-padded to the right and wastes space. Better to use the `VARCHAR2` data type.
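Putting those fixes together, a corrected version of the failing statement might look like this sketch (the `pre_date` column name is my assumption, and the referenced tables must already exist):
```
CREATE TABLE Prescription (
    pre_id        NUMBER(10),
    pssn          CHAR(11),
    phy_ssn       CHAR(11),
    pre_date      CHAR(11),   -- renamed: DATE is a reserved word
    quantity      INTEGER,
    trade_name    CHAR(20),   -- no space in the identifier
    pharm_id      CHAR(11),   -- no space in the identifier
    drop_off_time CHAR(10),
    pick_up_time  CHAR(10),
    status        CHAR(15),
    PRIMARY KEY (pre_id),
    FOREIGN KEY (pssn) REFERENCES Pri_Phy_Patient,
    FOREIGN KEY (phy_ssn) REFERENCES Doctor,
    FOREIGN KEY (trade_name, pharm_id) REFERENCES Make_Drug);
```
Swapping `CHAR` for `VARCHAR2`, as suggested in point 4, would be a further improvement.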
|
Error while creating a table in oracle 11g
|
[
"",
"sql",
"oracle",
"create-table",
""
] |
I have three tables: `users`, `account` and `accountinfo` and I am trying to make a trigger that will add the id from users to the `UserID` column in the account table. Here is what I tried:
```
CREATE TRIGGER catchUser BEFORE INSERT ON defaultdatabase.users
FOR EACH ROW
BEGIN
INSERT INTO defaultdatabase.account(UserID) VALUES (new.id);
END
```
However, I get an error right after my `INSERT` statement that says,
```
Syntax Error: insert 'semicolon'
```
Why am I getting this error if I have the semicolon? Or is my trigger just wrong?
I'm using MySQL 5.6 if that makes any difference as well.
|
You need to specify the delimiter:
```
delimiter //
CREATE TRIGGER catchUser BEFORE INSERT ON defaultdatabase.users
FOR EACH ROW
BEGIN
INSERT INTO defaultdatabase.account(UserID) VALUES (new.id);
END; //
delimiter ;
```
|
Try this:
```
CREATE TRIGGER catchUser BEFORE INSERT ON defaultdatabase.users
FOR EACH ROW
BEGIN
INSERT INTO defaultdatabase.account(UserID) VALUES (new.id);
END;
```
|
Using MySQL Database Triggers
|
[
"",
"mysql",
"sql",
"database",
"database-trigger",
""
] |
I need to return the unique records between two tables. Ideally, a UNION would solve my problem, but both tables contain an object field, which gives me an error (cannot ORDER objects without MAP or ORDER method) when I do UNION/DISTINCT.
So, I was wondering if I can do a UNION ALL(to avoid the error) to get all the records first then do something to return only the unique records from there. I tried analytic function combined with the UNION ALL query but no luck so far.
```
Select * from Table1
union all
Select * from table2
```
Any help? Note: I need to return all fields.
|
I actually solved the problem using the ROW_NUMBER() analytic function. The query picks the first record from each set of duplicates, hence returning only the unique records.
```
select * from
(
select ua.*,row_number() over (partition by p_id order by p_id ) row_num from
(
select * from table1
union all
select * from table2
)ua
) inner
where inner.row_num=1
```
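The same idea can be illustrated portably. The sketch below uses SQLite via Python purely as a demo (table names, columns, and data are made up), and it partitions by *all* columns so that fully duplicate records collapse to one row; the query above partitions by `p_id` only, which keeps one row per `p_id`:

```python
import sqlite3

# In-memory demo of the UNION ALL + ROW_NUMBER() dedup technique.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE table1 (p_id INTEGER, name TEXT);
    CREATE TABLE table2 (p_id INTEGER, name TEXT);
    INSERT INTO table1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO table2 VALUES (2, 'b'), (3, 'c');
""")
rows = cur.execute("""
    SELECT p_id, name FROM (
        SELECT ua.*,
               ROW_NUMBER() OVER (PARTITION BY p_id, name ORDER BY p_id) AS row_num
        FROM (
            SELECT * FROM table1
            UNION ALL
            SELECT * FROM table2
        ) ua
    )
    WHERE row_num = 1
    ORDER BY p_id
""").fetchall()
print(rows)  # the (2, 'b') duplicate collapses to a single row
```

Partitioning by every column is what makes this equivalent to DISTINCT over the union; in the Oracle case you would still partition only by scalar key columns, since the object column itself is not comparable.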
|
How about this:
```
SELECT DISTINCT A.* FROM
(
Select * from Table1
union all
Select * from table2
) A;
```
(or)
```
SELECT col1,col2,col3...coln FROM
(
Select col1,col2,col3...coln from Table1
union all
Select col1,col2,col3...coln from table2
) A
GROUP BY A.col1,col2,col3...coln;
```
|
How to return unique records between two tables without using distinct and union?
|
[
"",
"sql",
"oracle11g",
""
] |