| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I can't update temp table. This is my query
```
CREATE TABLE #temp_po(IndentID INT, OIndentDetailID INT, OD1 VARCHAR(50), OD2 VARCHAR(50),
OD3 VARCHAR(50), ORD VARCHAR(50), NIndentDetailID INT, ND1 VARCHAR(50), ND2 VARCHAR(50),
ND3 VARCHAR(50), NRD VARCHAR(50), Quantity DECIMAL(15,3))
INSERT INTO #temp_po(IndentID, OIndentDetailID, OD1, OD2, OD3, ORD)
SELECT ID.IndentID, ID.IndentDetailID, ID.D1, ID.D2, ID.D3, ID.RandomDimension
FROM STR_IndentDetail ID WHERE ID.IndentID = @IndentID
UPDATE
t
SET
t.ND1 = CASE WHEN D.D1 = '' THEN NULL ELSE D.D1 END,
t.ND2 = CASE WHEN D.D2 = '' THEN NULL ELSE D.D2 END,
t.ND3 = CASE WHEN D.D3 = '' THEN NULL ELSE D.D3 END,
t.NRD = CASE WHEN D.RandomDim = '' THEN NULL ELSE D.RandomDim END,
t.Quantity = D.PurchaseQty
FROM
#temp_po t INNER JOIN @detail D ON D.IndentDetailID = t.OIndentDetailID
WHERE
t.IndentID = @IndentID
```
But it gives the error
> Cannot resolve the collation conflict between "Latin1\_General\_CI\_AI" and "SQL\_Latin1\_General\_CP1\_CI\_AS" in the equal to operation.
How to resolve this problem?
My `tempdb` collation is `Latin1_General_CI_AI` and my actual database collation is `SQL_Latin1_General_CP1_CI_AS`. | This happens because the collations of `tempdb..#temp_po.OD1` and `STR_IndentDetail.D1` are different (specifically, `tempdb` is a separate system database, so it falls back to its own default collation, unlike your own databases and tables where you may have specified a more specific one).
Since you have control over the creation of the temp table, the easiest way to solve this appears to be to create \*char columns in the temp table with the same collation as your `STR_IndentDetail` table:
```
CREATE TABLE #temp_po(
IndentID INT,
OIndentDetailID INT,
OD1 VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS,
    -- ... same collation for the other *char columns
)
```
If you don't have control over the table creation, another way is to add an explicit `COLLATE` clause in the DML where the errors occur, either `COLLATE SQL_Latin1_General_CP1_CI_AS` or, more simply, `COLLATE DATABASE_DEFAULT`:
```
SELECT * FROM #temp_po t INNER JOIN STR_IndentDetail s
ON t.OD1 = s.D1 COLLATE SQL_Latin1_General_CP1_CI_AS;
```
Or, more simply:
```
SELECT * FROM #temp_po t INNER JOIN STR_IndentDetail s
ON t.OD1 = s.D1 COLLATE DATABASE_DEFAULT;
```
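If you want to see the effect of an explicit `COLLATE` on a join predicate without a SQL Server instance, here is a small SQLite sketch. Hedged: SQLite only ships the `BINARY`, `NOCASE` and `RTRIM` collations, and it resolves collation mismatches silently rather than raising an error, but the explicit override in the join condition behaves the same way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Two tables whose text columns use different collations
# (case-sensitive BINARY vs case-insensitive NOCASE).
cur.execute("CREATE TABLE a (v TEXT COLLATE BINARY)")
cur.execute("CREATE TABLE b (v TEXT COLLATE NOCASE)")
cur.execute("INSERT INTO a VALUES ('ABC')")
cur.execute("INSERT INTO b VALUES ('abc')")

# Force a specific collation at the comparison, analogous to
# "ON t.OD1 = s.D1 COLLATE ..." above.
strict = cur.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.v = b.v COLLATE BINARY").fetchone()[0]
loose = cur.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.v = b.v COLLATE NOCASE").fetchone()[0]
print(strict, loose)  # 0 1
```

Without the explicit `COLLATE`, SQLite quietly picks the left operand's column collation; SQL Server instead refuses to guess and raises the conflict error from the question.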
[SqlFiddle here](http://sqlfiddle.com/#!3/c0cde/4) | Changing the server collation is not a straightforward decision: there may be other databases on the server that would be impacted. Even changing the database collation is not always advisable for an existing, populated database. I think using `COLLATE DATABASE_DEFAULT` when creating the temp table is the safest and easiest option, as it does not hard-code any collation in your SQL. For example:
```
CREATE TABLE #temp_table1
(
column_1 VARCHAR(2) COLLATE database_default
)
``` | Temp Table collation conflict - Error : Cannot resolve the collation conflict between Latin1* and SQL_Latin1* | [
"",
"sql",
"sql-server",
"collation",
"tempdb",
""
] |
The `JOIN` or `INNER JOIN` clauses are usually used with the `=` operator, like this:
```
SELECT * FROM table1 INNER JOIN table2 ON table1.id = table2.id
```
I've noticed that it is possible to use any comparison operator in `JOIN` clauses: `!=`, `>=`, `<=`, etc.
I wonder what use other operators have in `JOIN` clauses. Using e.g. `>=`, `<=`, `>`, `<` doesn't seem very useful to me, but can someone think of an example where they would be? (I mean on a `PRIMARY KEY` column.)
Further, can anyone tell me whether the following query returns *"the rows of both tables that have no relationship with each other"*?
```
SELECT * FROM table1 INNER JOIN table2 ON table1.id != table2.id
``` | A typical example of a **join by `<`** is getting all pairs of values from a single column where the two items of each pair are distinct from each other.
For an example, see <http://www.sqlfiddle.com/#!2/4ed2a/3>
```
create table numbers(val int);
insert into numbers values(1);
insert into numbers values(2);
insert into numbers values(3);
insert into numbers values(4);
```
and the statement:
```
select m.val as first, n.val as second
from numbers m join numbers n on m.val < n.val
```
delivers the six combinations of the numbers 1 to 4 where both numbers are different:
```
1 2
1 3
1 4
2 3
2 4
3 4
```
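The same pairing query runs unchanged on SQLite if you want to try it locally; a quick Python sketch using the table above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE numbers (val INT)")
con.executemany("INSERT INTO numbers VALUES (?)", [(1,), (2,), (3,), (4,)])

# Self-joining on "<" keeps each unordered pair exactly once.
pairs = con.execute(
    "SELECT m.val, n.val FROM numbers m JOIN numbers n ON m.val < n.val "
    "ORDER BY m.val, n.val").fetchall()
print(pairs)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```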
An example for **using `<=` in a join** would be a running sum: Based on the same table, if I want to see the actual value of a record, as well as the sum of all records up to the current row, I could use the following statement:
```
select m.val, sum(n.val) as running_sum
from numbers m join numbers n on n.val <= m.val
group by m.val
order by m.val
```
which returns
```
1 1
2 3
3 6
4 10
```
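The running-sum variant also works as-is in SQLite; a quick Python check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE numbers (val INT)")
con.executemany("INSERT INTO numbers VALUES (?)", [(1,), (2,), (3,), (4,)])

# The "<=" self-join pairs each row with every row up to and including
# itself, so SUM over each group yields a running total.
rows = con.execute(
    "SELECT m.val, SUM(n.val) FROM numbers m JOIN numbers n "
    "ON n.val <= m.val GROUP BY m.val ORDER BY m.val").fetchall()
print(rows)  # [(1, 1), (2, 3), (3, 6), (4, 10)]
```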
In real applications, you would normally have at least two columns in the table, such as an id and a sales value, for which you want to show the actual record value and the running total. But for the sake of simplicity, I stayed with the same single-column table. | ```
SELECT *
FROM
table1
INNER JOIN table2
ON Table1.id = Table2.id
AND Table2.description != 'leave me alone!'
```
---
```
SELECT *
FROM
table1
INNER JOIN table2
ON Table1.id = Table2.id
AND Table2.due_date <= GETDATE()
``` | JOIN with '!=' operator | [
"",
"sql",
"database",
"join",
""
] |
I'm creating an Oracle view like this :
```
SELECT * FROM TABLE;
```
When I create the view, I notice that oracle changes the view query to something like :
```
SELECT FIELD1, FIELD2,... FROM TABLE;
```
My problem is that if I change the TABLE structure, add a new field for instance, the changes are not taken into consideration in the view. I have then to recreate the view, and regrant privileges on this view to the users/roles.
Is there any way to make a view more generic and keep it in the form `SELECT * FROM TABLE`?
Thanks. | You cannot define a simple view that would automatically change its structure when the underlying table changes. When new columns are added, you'll need to touch the view. You'll almost certainly need to do as @GordonLinoff suggests and do a `CREATE OR REPLACE` when the table changes. Given that changes to tables should be rare and should involve proper change control, touching the view as part of the change should be a relatively simple step.
If you're really determined to avoid having to touch the view, there are some alternatives. I generally wouldn't recommend these because they are very likely to increase rather than decrease the complexity of maintaining your system. But if you have a third party system that is generating DDL to add columns on an unpredictable basis, maybe it makes sense.
You could create a DDL trigger that fires in response to statements that `ALTER` the table and that submits a job using `dbms_job` that re-creates the view. That's quite a few moving pieces but it's generally workable.
Alternately, instead of a view, you could create a pipelined table function that returns a variable number of columns. That's going to be [really complicated](http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4843682300346852395#5421020800346627246) but it's also pretty slick. There aren't many places that I'd feel comfortable using that approach simply because there aren't many people that can look at that code and have a chance of maintaining it. But the code is pretty slick. | The `*` is evaluated when the view is created, not when it is executed. In fact, Oracle compiles views for faster execution. It uses the compiled code when the view is referenced. It does not just do a text substitution into the query.
The proper syntax for changing a view is:
```
create or replace view v_table as
select *
from table;
``` | How to keep "*" in VIEW output clause so that columns track table changes? | [
"",
"sql",
"oracle",
""
] |
I have a column called "Patient Type" in a table. I want to make sure that only two values can be inserted into the column, either "opd" or "admitted"; all other inputs are invalid.
Below is an example of what I want

How do I make sure that the column only accepts "opd" or "admitted" as the data for "Patient Type" column. | You need a check constraint.
```
ALTER TABLE [TableName] ADD CONSTRAINT
my_constraint CHECK (PatientType = 'Admitted' OR PatientType = 'OPD')
```
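Here is what the enforcement looks like in practice, sketched in SQLite (which does enforce `CHECK`); the constraint is written with `IN`, which is equivalent to the two `OR`'d equality tests:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Patient (
    PatientType TEXT CHECK (PatientType IN ('Admitted', 'OPD'))
)""")

con.execute("INSERT INTO Patient VALUES ('OPD')")       # accepted
try:
    con.execute("INSERT INTO Patient VALUES ('Ward')")  # rejected by CHECK
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```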
Note that in MySQL specifically, `CHECK` constraints were parsed but silently ignored until version 8.0.16, which is the first version to actually enforce them.
[MySQL CHECK Constraint](https://stackoverflow.com/questions/14247655/mysql-check-constraint/)
[CHECK constraint in MySQL is not working](https://stackoverflow.com/questions/2115497/check-constraint-in-mysql-is-not-working/)
[MySQL CHECK Constraint Workaround](http://blog.christosoft.de/2012/08/mysql-check-constraint/)
If your MySQL version does not enforce it, use a trigger instead of a check constraint. | I'm not a MySQL dev, but I think this might be what you're looking for:
[ENUM](http://dev.mysql.com/doc/refman/5.0/en/enum.html) | Restricting a column to accept only 2 values | [
"",
"mysql",
"sql",
"database",
"input",
""
] |
I have a column in one of my tables which has square brackets around it, `[Book_Category]`, which I want to rename to `Book_Category`.
I tried the following query:
```
sp_rename 'BookPublisher.[[Book_Category]]', 'Book_Category', 'COLUMN'
```
but I got this error:
> Msg 15253, Level 11, State 1, Procedure sp\_rename, Line 105 Syntax
> error parsing SQL identifier 'BookPublisher.[[Book\_Category]]'.
Can anyone help me? | You do it the same way you do to create it:
```
exec sp_rename 'BookPublisher."[Book_Category]"', 'Book_Category', 'COLUMN';
```
---
Here's a little sample I made to test whether this was even possible. At first I just assumed it was a misunderstanding of how `[]` can be used in SQL Server; it turns out I was wrong, it is possible: you have to wrap the bracketed name in double quotes.
```
begin tran
create table [Foo] ("[i]" int);
exec sp_help 'Foo';
exec sp_rename 'Foo."[i]"', 'i', 'COLUMN';
exec sp_help 'Foo';
rollback tran
``` | Double quotes are not required. You simply double up closing square brackets, like so:
```
EXEC sp_rename 'BookPublisher.[[Book_Category]]]', 'Book_Category', 'COLUMN';
```
You can find this out yourself using the `quotename` function:
```
SELECT QuoteName('[Book_Category]');
-- Result: [[Book_Category]]]
```
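The escaping rule `QUOTENAME` applies is simply "double every closing bracket, then wrap the whole identifier in brackets"; a tiny hypothetical helper reproduces it:

```python
def quotename(name):
    # T-SQL QUOTENAME doubles any closing bracket inside the identifier,
    # then wraps the result in a single pair of brackets.
    return "[" + name.replace("]", "]]") + "]"

print(quotename("[Book_Category]"))  # [[Book_Category]]]
```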
This incidentally works for creating columns, too:
```
CREATE TABLE dbo.Book (
[[Book_Category]]] int -- "[Book_Category]"
);
``` | Rename object that has square brackets in the name? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Sorry, I am new to PL/SQL and Toad for Oracle. I have a simple problem. I need to select some columns from the table `JTF_RS_DEFRESOURCES_VL`, and I do that with this script (the column names are just examples):
```
SELECT column1,
column2,
column3,
column4,
column5,
end_date_active
FROM jtf_rs_defresources_vl
```
Then I want an if/else: if `END_DATE_ACTIVE` is null then the row is active, else it is inactive. | It sounds like you want a `CASE` expression:
```
SELECT (CASE WHEN end_date_active IS NULL
THEN 'Active'
ELSE 'Inactive'
END) status,
<<other columns>>
FROM jtf_rs_defresources_vl
``` | You can use a `CASE` expression like @Justin suggested, but in Oracle there's a simpler function: `NVL2`. It takes three arguments and evaluates the first; if it **isn't** `null`, the second argument is returned, and if it is, the third one is returned.
So in your case:
```
SELECT NVL2(end_date_active, 'Active', 'Inactive') AS status,
<<other columns>>
FROM jtf_rs_defresources_vl
``` | Select statement And If/else statement | [
"",
"sql",
"oracle",
"null",
"case",
""
] |
```
ID Title Qty
1 BMW 2
2 VW 3
```
In one SQL query, I want to find the sum of the quantities and divide each row's Qty by it.
For example:
`SUM(Qty) = 5`, so I want to compute the scores 2/5 and 3/5:
```
ID  Title  newscores
1   BMW    2/5
2   VW     3/5
``` | [Demo](http://sqlfiddle.com/#!2/9a7c51/9)
```
SELECT id
,title
,qty/(select sum(qty) from cars)
FROM cars
GROUP BY id
``` | this should work
```
select title , sum (QTY) , t2.total, sum(QTY) / t2.total
from table , (select sum(QTY) as total from table) as t2
group by title
``` | SQL SUM aggragete issue | [
"",
"sql",
""
] |
I have a table which has duplicate values. By eliminating these duplicates I want the sum of the distinct values, but without a per-city `GROUP BY` result.
My table
```
--------------------------------------
ID City collection
---------------------------------------
1 xyz 5000
2 xyz 5000
3 abc 2000
4 pqr 3000
5 xyz 5000
6 pqr 3000
7 abc 2000
-----------------------------------
```
I want the total of the collection column after eliminating the duplicate rows.
Expected result = 10000, not a per-city breakdown like
```
xyz 15000
abc 4000
pqr 6000
```
and not the plain total of 25000; the result should be 10000 after the duplicates are eliminated. | Get the `DISTINCT` combinations of City and Collection first, and then do the `SUM`:
```
select SUM(Collection) as Collection
from
(
select distinct City, Collection
from table
) data
``` | ```
select sum(collection) as output
from
(
select collection, row_number() over(partition by city order by collection) as RN
from yourtable
) as inside
where RN=1
``` | Grand Sum with distinct in SQL Server | [
"",
"sql",
"sql-server-2012",
""
] |
I have a Hive table, titled 'UK.Choices' with a column, titled 'Fruit', with each row as follows:
AppleBananaAppleOrangeOrangePears
BananaKiwiPlumAppleAppleOrange
KiwiKiwiOrangeGrapesAppleKiwi
etc.
etc.
There are 2.5M rows and the rows are much longer than the above.
I want to count the number of instances that the word 'Apple' appears.
For example above, it is:
Number of 'Apple'= 5
My sql so far is:
```
select 'Fruit' from UK.Choices
```
Then in chunks of 300,000 I copy and paste into Excel, where I'm more proficient and able to do this using formulas. The problem is, it takes up to an hour and a half to generate each chunk of 300,000 rows.
Anyone know a quicker way to do this bypassing Excel? I can do simple things like counts using where clauses, but something like the above is a little beyond me right now. Please help.
Thank you. | This is straightforward if you have a delimiter (e.g. a comma) between the fruit names. The idea is to split the column into an array and explode the array into multiple rows using the `explode` function.
```
SELECT fruit, count(1) as count FROM
( SELECT
explode(split(Fruit, ',')) as fruit
FROM UK.Choices ) X
GROUP BY fruit
```
From your example, it looks like the fruits are delimited by capital letters. One idea is to split the column on capital letters, assuming no two fruits share the same suffix.
```
SELECT fruit_suffix, count(1) as count FROM
( SELECT
explode(split(Fruit, '[A-Z]')) as fruit_suffix
FROM UK.Choices ) X
WHERE fruit_suffix <> ''
GROUP BY fruit_suffix
```
The downside is that the output will not contain the first letter of each fruit:
```
pple - 5
range - 4
``` | I think I am 2 years too late. But since I was looking for the same answer and I finally managed to solve it, I thought it was a good idea to post it here.
Here is how I do it.
**Solution 1:**
```
+-----------------------------------+---------------------------+-------------+-------------+
| Fruits | Transform 1 | Transform 2 | Final Count |
+-----------------------------------+---------------------------+-------------+-------------+
| AppleBananaAppleOrangeOrangePears | #Banana#OrangeOrangePears | ## | 2 |
| BananaKiwiPlumAppleAppleOrange | BananaKiwiPlum##Orange | ## | 2 |
| KiwiKiwiOrangeGrapesAppleKiwi | KiwiKiwiOrangeGrapes#Kiwi | # | 1 |
+-----------------------------------+---------------------------+-------------+-------------+
```
Here is the code for it:
```
SELECT length(regexp_replace(regexp_replace(fruits, "Apple", "#"), "[A-Za-z]", "")) as number_of_apples
FROM fruits;
```
You may have numbers or other special characters in your `fruits` column; you can modify the second regexp to incorporate that. Just remember that in Hive, to escape a character you may need to use `\\` instead of a single `\`.
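Both transforms are easy to sanity-check outside Hive; here is a plain-Python rendition using the question's sample rows, with `re.sub` standing in for `regexp_replace`:

```python
import re

rows = [
    "AppleBananaAppleOrangeOrangePears",
    "BananaKiwiPlumAppleAppleOrange",
    "KiwiKiwiOrangeGrapesAppleKiwi",
]

# Solution 1: mark every "Apple" with '#', drop all letters, count the marks.
counts1 = [len(re.sub(r"[A-Za-z]", "", row.replace("Apple", "#")))
           for row in rows]
# Solution 2: splitting on "Apple" yields one more piece than there are matches.
counts2 = [len(row.split("Apple")) - 1 for row in rows]
print(counts1, counts2, sum(counts1))  # [2, 2, 1] [2, 2, 1] 5
```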
**Solution 2:**
```
SELECT size(split(fruits,"Apple"))-1 as number_of_apples
FROM fruits;
```
This first `split`s the string using "Apple" as the separator, producing an array; the `size` function then returns the length of that array. Note that the array size is one more than the number of separators. | Count particular substring text within column | [
"",
"sql",
"database",
"excel",
"hive",
"average",
""
] |
Does anybody know the limit on the number of values one can have in an `IN` clause's list of expressions (to test for a match)? | Yes, there is a limit, but [Microsoft only specifies that it lies "in the thousands"](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/in-transact-sql):
> Explicitly including an extremely large number of values (many thousands of values separated by commas) within the parentheses, in an IN clause can consume resources and return errors 8623 or 8632. To work around this problem, store the items in the IN list in a table, and use a SELECT subquery within an IN clause.
Looking at those errors in details, we see that this limit is not specific to `IN` but applies to query complexity in general:
> Error 8623:
>
> *The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.*
>
> Error 8632:
>
> *Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them.* | The limit is not a specific number; it is related to the query-plan generator exceeding memory limits. I can confirm that with several thousand values it often errors, but this can be resolved by inserting the values into a table first and rephrasing the query as
```
select * from b where z in (select z from c)
```
where the values you want in the `IN` clause are in table `c`. We used this successfully with an `IN` list of one million values. | "IN" clause limitation in Sql Server | [
"",
"sql",
"sql-server",
""
] |
I have two tables:
* `PersonTBL` table : includes the column `HostAddress` which stores the IP address of Person
* `ip-to-country` table: include the range of IP numbers with countries
I am trying to count the persons per country, but I received an error:
> Invalid column name 'CountryName'.
Query:
```
SELECT
Count(HostAddress) as TotalNo,
(select [ip-to-country].CountryName
from [ip-to-country]
where ((CAST(PARSENAME(HostAddress, 4) AS Bigint) * 256 * 256 * 256) +
(CAST(PARSENAME(HostAddress, 3) AS INT) * 256 * 256) +
(CAST(PARSENAME(HostAddress, 2) AS INT) * 256) +
CAST(PARSENAME(HostAddress, 1) AS INT))
BETWEEN [ip-to-country].BegingIP AND [ip-to-country].EndIP) AS CountryName
FROM PersonTBL
GROUP BY
CountryName
```
I also tried :
```
SELECT
Count(HostAddress) as TotalNo,
(SELECT [ip-to-country].CountryName
FROM [ip-to-country]
WHERE
((CAST(PARSENAME(HostAddress, 4) AS Bigint) * 256 * 256 * 256) +
(CAST(PARSENAME(HostAddress, 3) AS INT) * 256 * 256) +
(CAST(PARSENAME(HostAddress, 2) AS INT) * 256) +
CAST(PARSENAME(HostAddress, 1) AS INT)) BETWEEN [ip-to-country].BegingIP AND [ip-to-country].EndIP) as CountryName
FROM PersonTBL
GROUP BY
(SELECT [ip-to-country].CountryName
FROM [ip-to-country]
WHERE
((CAST(PARSENAME(HostAddress, 4) AS Bigint) * 256 * 256 * 256) +
(CAST(PARSENAME(HostAddress, 3) AS INT) * 256 * 256) +
(CAST(PARSENAME(HostAddress, 2) AS INT) * 256) +
CAST(PARSENAME(HostAddress, 1) AS INT)) BETWEEN [ip-to-country].BegingIP AND [ip-to-country].EndIP) as CountryName
```
But I received another error :
> Cannot use an aggregate or a subquery in an expression used for the group by list of a GROUP BY clause.
Can anyone help me group the users by country? | You can't use a column alias in the `GROUP BY` of the same query level where it is defined; either repeat the expression in the `GROUP BY`, or use a derived table or a CTE:
Derived Table:
```
SELECT COUNT(HostAddress) AS TotalNo,
CountryName
FROM ( SELECT HostAddress,
(SELECT [ip-to-country].CountryName
FROM [ip-to-country]
WHERE
((CAST(PARSENAME(HostAddress, 4) AS BIGINT)*256*256*256) +
(CAST(PARSENAME(HostAddress, 3) AS INT)*256*256)
+ (CAST(PARSENAME(HostAddress, 2) AS INT)*256)
+ CAST(PARSENAME(HostAddress, 1) AS INT))
BETWEEN [ip-to-country].BegingIP and [ip-to-country].EndIP) AS CountryName
FROM PersonTBL) T
GROUP BY CountryName
```
CTE (SQL Server 2005+):
```
;WITH CTE AS
(
SELECT HostAddress,
(SELECT [ip-to-country].CountryName
FROM [ip-to-country]
WHERE
((CAST(PARSENAME(HostAddress, 4) AS BIGINT)*256*256*256) +
(CAST(PARSENAME(HostAddress, 3) AS INT)*256*256)
+ (CAST(PARSENAME(HostAddress, 2) AS INT)*256)
+ CAST(PARSENAME(HostAddress, 1) AS INT))
BETWEEN [ip-to-country].BegingIP and [ip-to-country].EndIP) AS CountryName
FROM PersonTBL
)
SELECT COUNT(HostAddress) AS TotalNo,
CountryName
FROM CTE
GROUP BY CountryName
``` | In SQL Server, you can't use column aliases in the `group by` clause. Here is an alternative:
```
with cte as (
SELECT HostAddress,
(select [ip-to-country].CountryName
from [ip-to-country]
where ((CAST(PARSENAME(HostAddress, 4) AS Bigint)*256*256*256) +
(CAST(PARSENAME(HostAddress, 3) AS INT)*256*256) +
(CAST(PARSENAME(HostAddress, 2) AS INT)*256) +
CAST(PARSENAME(HostAddress, 1) AS INT)
) between [ip-to-country].BegingIP and [ip-to-country].EndIP
) as CountryName
FROM PersonTBL
)
select count(HostAddress) as TotalNo, CountryName
from cte
Group By CountryName;
``` | SQL Server : Group by select statment give error | [
"",
"sql",
"sql-server",
""
] |
I have a simple Parts database which I'd like to use for calculating costs of assemblies, and I need to keep a cost history, so that I can update the costs for parts without the update affecting historic data.
So far I have the info stored in 2 tables:
tblPart:
```
PartID | PartName
1 | Foo
2 | Bar
3 | Foobar
```
tblPartCostHistory
```
PartCostHistoryID | PartID | Revision | Cost
1 | 1 | 1 | £1.00
2 | 1 | 2 | £1.20
3 | 2 | 1 | £3.00
4 | 3 | 1 | £2.20
5 | 3 | 2 | £2.05
```
What I want to end up with is just the PartID for each part, and the PartCostHistoryID where the revision number is highest, so this:
```
PartID | PartCostHistoryID
1 | 2
2 | 3
3 | 5
```
I've had a look at some of the other threads on here and I can't quite get it. I can manage to get the PartID along with the highest Revision number, but if I try to then do anything with the PartCostHistoryID I end up with multiple PartCostHistoryIDs per part.
I'm using MS Access 2007.
Many thanks. | Mihai's (very concise) answer will work assuming that the order of both
* [PartCostHistoryID] and
* [Revision] for each [PartID]
are always ascending.
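The join-against-MAX(Revision) pattern used below can be checked against the question's sample data; here is a quick SQLite sketch (with shortened, hypothetical column names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hist (HistID INT, PartID INT, Revision INT)")
con.executemany("INSERT INTO hist VALUES (?, ?, ?)",
                [(1, 1, 1), (2, 1, 2), (3, 2, 1), (4, 3, 1), (5, 3, 2)])

# Join each row against the per-part maximum revision; only the
# row that holds that maximum survives.
rows = con.execute("""
    SELECT h.PartID, h.HistID
    FROM hist h
    JOIN (SELECT PartID, MAX(Revision) AS MaxRev
          FROM hist GROUP BY PartID) m
      ON m.PartID = h.PartID AND m.MaxRev = h.Revision
    ORDER BY h.PartID""").fetchall()
print(rows)  # [(1, 2), (2, 3), (3, 5)]
```

The output matches the table the question asks for: PartIDs 1, 2, 3 paired with history rows 2, 3, 5.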
A solution that does not rely on that assumption would be
```
SELECT
tblPartCostHistory.PartID,
tblPartCostHistory.PartCostHistoryID
FROM
tblPartCostHistory
INNER JOIN
(
SELECT
PartID,
MAX(Revision) AS MaxOfRevision
FROM tblPartCostHistory
GROUP BY PartID
) AS max
ON max.PartID = tblPartCostHistory.PartID
AND max.MaxOfRevision = tblPartCostHistory.Revision
``` | Here is a query:
```
select PartCostHistoryId, PartId from tblCost
where PartCostHistoryId in
(select PartCostHistoryId from
(select * from tblCost as tbl order by Revision desc) as tbl1
group by PartId
)
```
Here is SQL Fiddle <http://sqlfiddle.com/#!2/19c2d/12> | SQL SELECT only rows where a max value is present, and the corresponding ID from another linked table | [
"",
"sql",
"ms-access-2007",
""
] |
I'm struggling to find a `LIKE` pattern that works for the example below.
Words could be
```
MS004 -- GTER
MS006 -- ATLT
MS009 -- STRR
MS014 -- GTEE
MS015 -- ATLT
```
What `LIKE` expression in SQL Server would pull rows that contain both words, e.g. `MS004` and `ATLT`, or any other combination like the above?
I tried using multiple values in one `LIKE`, for example:
```
where column like '%ms004 | atl%'
```
but it didn't work.
**EDIT**
The result should contain both words only. | It seems you are looking for this:
```
where column like '%ms004%' or column like '%atl%'
```
or this
```
where column like '%ms004%atl%'
``` | Try like this
```
select .....from table where columnname like '%ms004%' or columnname like '%atl%'
``` | Like Operator for checking multiple words | [
"",
"sql",
"sql-server",
"sql-like",
""
] |
I created a table with 85 columns but missed one. The missing column should be the 57th one. I don't want to drop the table and create it again; I'm looking to alter the table and add a column at the 57th position.
I tried the following query but it added a column at the end of the table.
```
ALTER table table_name
Add column column_name57 integer
```
How can I insert a column at a specific position? | `ALTER TABLE` by default adds new columns at the end of the table. Use the `AFTER` directive to place the column at a certain position within the table:
```
ALTER table table_name
Add column column_name57 integer AFTER column_name56
```
From mysql doc
> To add a column at a specific position within a table row, use `FIRST` or `AFTER col_name`. The default is to add the column last. You can also use `FIRST` and `AFTER` in `CHANGE` or `MODIFY` operations to reorder columns within a table.
<http://dev.mysql.com/doc/refman/5.1/en/alter-table.html>
I googled for this for PostgreSQL but [it seems to be impossible](https://stackoverflow.com/questions/1243547/how-to-add-a-new-column-in-a-table-after-the-2nd-or-3rd-column-in-the-table-usin). | Try this
```
ALTER TABLE tablename ADD column_name57 INT AFTER column_name56
```
[See here](http://blog.sqlauthority.com/2013/03/11/sql-server-how-to-add-column-at-specific-location-in-table/) | How to insert columns at a specific position in existing table? | [
"",
"mysql",
"sql",
""
] |
I am creating triggers for several tables. The triggers have the same logic, so I want to use a common stored procedure.
But I don't know how to work with the **inserted** and **deleted** tables inside it.
example:
```
SET @FiledId = (SELECT FiledId FROM inserted)
begin tran
update table with (serializable) set DateVersion = GETDATE()
where FiledId = @FiledId
if @@rowcount = 0
begin
insert table (FiledId) values (@FiledId)
end
commit tran
``` | You can use a [table valued parameter](http://msdn.microsoft.com/en-us/library/bb675163%28v=vs.110%29.aspx) to store the inserted / deleted values from triggers and pass them across to the proc, e.g. if all you need in your proc is the unique `FileId` values:
```
CREATE TYPE FileIds AS TABLE
(
FileId INT
);
-- Create the proc to use the type as a TVP
CREATE PROC commonProc(@FileIds AS FileIds READONLY)
AS
BEGIN
UPDATE at
SET at.DateVersion = CURRENT_TIMESTAMP
FROM ATable at
JOIN @FileIds fi
ON at.FileID = fi.FileID;
END
```
And then pass the inserted / deleted ids from the trigger, e.g.:
```
CREATE TRIGGER MyTrigger ON SomeTable FOR INSERT
AS
BEGIN
DECLARE @FileIds FileIDs;
INSERT INTO @FileIds(FileID)
SELECT DISTINCT FileID FROM INSERTED;
EXEC commonProc @FileIds;
END;
``` | You can
```
select * into #Inserted from inserted
select * into #Deleted from deleted
```
and then
use these two temp tables in your stored proc | How use inserted\deleted table in stored procedure? | [
"",
"sql",
"sql-server",
"stored-procedures",
"triggers",
""
] |
I have a flat file that I intend to import into a table via SSIS. The file has a field with dates in the format "d/mm/yyyy". These dates eventually get stored into the Database as "yyyy/mm/dd". I know this because I run a datepart sql query on the table data to find out. I have no problem with how they are stored as I can format them via presentation.
The problem is that for some of the dates, the days are swapped with the month values. ie "3/05/1989" should be stored as "1989/05/03", but it can end up as "1989/03/05" and as such, the data presentation is inconsistent with what is in the CSV files.
I have searched everywhere possible for a solution (trust me, I can put the links here if you want). I tried exporting from one CSV file to another CSV file; that one saved "d/mm/yyyy" the same way, as "d/mm/yyyy".
My last attempt was this
[Date format issue while importing from a flat file to an SQL database](https://stackoverflow.com/questions/9315471/date-format-issue-while-importing-from-a-flat-file-to-an-sql-database)
And as you can see, it hasn't still been marked as an answer. Can anyone help me out here? | Ok so after searching around this is what worked for me:
1. I used a Derived column transformation to transform the date field and split the date string by using the solution here: [SSIS How to get part of a string by separator](https://stackoverflow.com/questions/10921645/ssis-how-to-get-part-of-a-string-by-separator) .
I'm surprised it hasn't been marked as an answer.
2. Since I knew my db will switch the day and month regardless, I switched them myself and passed the result back to the same field in the Derived column transformation like so:
`TOKEN(FlightDate,"/",2) + "/" + TOKEN(FlightDate,"/",1) + "/" + TOKEN(FlightDate,"/",3)`
where 2 is the month, 1 is the day and 3 is the year.
Now the results are being saved in the correct manner and so the presentation matches the source.
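The token-swap logic itself is easy to sanity-check outside SSIS; here is a small Python sketch of the same reordering (`swap_day_month` is a hypothetical helper name):

```python
def swap_day_month(flight_date):
    # Mirrors TOKEN(FlightDate,"/",2) + "/" + TOKEN(FlightDate,"/",1)
    # + "/" + TOKEN(FlightDate,"/",3): swap the first two "/"-separated
    # parts (day and month) and keep the year.
    day, month, year = flight_date.split("/")
    return "/".join([month, day, year])

print(swap_day_month("3/05/1989"))  # 05/3/1989
```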
I know there should be a better way to do this, but it works for me, and I hope it works for anyone else who finds themselves in my situation. | Can you tell us the structure of the destination table? There should not be any problem converting the d/mm/yyyy format into yyyy/mm/dd. Let me know the data type and then I can help you further with resolving the problem.
"",
"sql",
"sql-server",
"date",
"csv",
"ssis",
""
] |
I have a **very** large table containing price history.
```
CREATE TABLE [dbo].[SupplierPurchasePrice](
[SupplierPurchasePriceId] [int] IDENTITY(1,1) PRIMARY KEY,
[ExternalSupplierPurchasePriceId] [varchar](20) NULL,
[ProductId] [int] NOT NULL,
[SupplierId] [int] NOT NULL,
[Price] [money] NOT NULL,
[PreviousPrice] [money] NULL,
[SupplierPurchasePriceDate] [date] NOT NULL,
[Created] [datetime] NULL,
    [Modified] [datetime] NULL
)
```
Per Product(Id) and Supplier(Id) I have hundreds of price records.
Now there is a need to remove the bulk of the data while still keeping some historic data. For each Product(Id) and Supplier(Id) I want to keep, let's say, 14 records; not the first or last 14, but the first record, the last record, and 12 records spread evenly in between. That way I keep some history intact.
I cannot figure out a way to do this directly in a stored procedure instead of through my C# ORM (which is way too slow). | Here is a direct counting approach to solving the problem:
```
select spp.*
from (select spp.*,
sum(12.5 / (cnt - 1)) over (partition by SupplierId, ProductId
order by SupplierPurchasePriceId
) as cum
from (select spp.*,
row_number() over (partition by SupplierId, ProductId
order by SupplierPurchasePriceId
) as seqnum,
                    count(*) over (partition by SupplierId, ProductId) as cnt
from SupplierPurchasePrice spp
) spp
) spp
where seqnum = 1 or seqnum = cnt or cnt <= 14 or
      (floor(cum) <> floor(cum - 12.5/(cnt - 1)));
```
The challenge is deciding where the 12 records in between go. This calculates an average "gap" between kept records, as `12.5/(cnt - 1)`. This constant is then accumulated over the records; it grows from roughly 0 to 12.5 at the last record. The idea is to grab any record where this running total passes an integer value. So, if the cumulative value goes from 2.1 to 2.3, the record is not chosen; if it goes from 2.9 to 3.1, the record is chosen.
The number 12.5 is not magic. Any number between 12 and 13 should do. Except for the issue with choosing the oldest and most recent values. I chose 12.5 to be extra sure that these don't count for the 12.
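You can simulate the accumulation in plain Python to convince yourself that exactly 14 rows get flagged for a large group (`keep_flags` is a hypothetical helper mirroring the query's logic):

```python
import math

def keep_flags(cnt, keep_between=12.5):
    # Mirror the query: accumulate a constant per-row gap and flag a row
    # whenever the running total crosses an integer boundary; always flag
    # the first and last rows, and everything when the group is small.
    gap = keep_between / (cnt - 1)
    flags = []
    cum = 0.0
    for seqnum in range(1, cnt + 1):
        cum += gap
        chosen = (seqnum == 1 or seqnum == cnt or cnt <= 14
                  or math.floor(cum) != math.floor(cum - gap))
        flags.append(chosen)
    return flags

flags = keep_flags(100)
print(sum(flags))  # 14
```

For a group of 100 price records this keeps the first row, the last row, and the 12 rows where the running total crosses an integer, 14 in all; small groups (14 rows or fewer) are kept whole.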
You can see the same logic working [here](http://www.sqlfiddle.com/#!6/d41d8/14097) at SQL Fiddle. The flag column shows which would be chosen and the totflag validates that exactly 14 are chosen. | I would try something like
```
select
to_keep.SupplierPurchasePriceId
from
(select
foo.SupplierPurchasePriceId,
row_number() over (partition by ProductId, SupplierId, tile_num order by Created) as takeme
from
(select
SupplierPurchasePriceId,
ProductId,
SupplierId,
Created,
ntile(13) over(partition by ProductId, SupplierId order by Created) as tile_num
from
SupplierPurchasePrice
) foo
) to_keep
where
to_keep.takeme = 1
union
select distinct
last_value(SupplierPurchasePriceId) over (partition by ProductId, SupplierId order by Created range between UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING) as SupplierPurchasePriceId
from
SupplierPurchasePrice
```
This should give primary keys of rows to be kept. Performance may vary. Not tested. | SQL Server 2012 clean up a set of data | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I am using SQL Server 2008
I have a SQL string in a column with `;`-separated values. How can I trim the value below?
Current string:
`;145615;1676288;178829;`
Output:
`145615;1676288;178829;`
Please help with a SQL query to trim the first `;` from the string.
**Note:** The first character may or may not be `;`; it should be trimmed only when it is `;`.
**Edit:** What I had tried before, although it doesn't make sense after so many good responses.
```
DECLARE
@VAL VARCHAR(1000)
BEGIN
SET @VAL =';13342762;1334273;'
IF(CHARINDEX(';',@VAL,1)=1)
BEGIN
SELECT SUBSTRING(@VAL,2,LEN(@VAL))
END
ELSE
BEGIN
SELECT @VAL
END
END
``` | ```
SELECT CASE WHEN col LIKE ';%'
THEN STUFF(col,1,1,'') ELSE col END
FROM dbo.table;
``` | Just check the first character, and if it matches, start from the second character:
```
SELECT CASE WHEN SUBSTRING(col,1,1) = ';'
THEN SUBSTRING(col,2,LEN(col))
ELSE col
END AS col
``` | Trim Special Char from SQL String | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
We have three tables in a **MySQL** Database -
1. Company
2. Employee
3. Address
Company has Employees. Employees have Address. [Note they can have multiple addresses]
Please take a look at the image below for a general idea of the structure

We have two conditions here -
1.Get all companies whose employee(s) have at least one address listed in the Address table.
Example Query Result should contain the following companies -
Microsoft, Google, IBM
---
2.Get all companies whose employees have no address listed in the Address table.
Example Query Result should contain the following companies -
Yahoo
We have currently written this query which seems to be working for this particular condition-
```
SELECT
company_id,
    companies.company_name
FROM companies
LEFT OUTER JOIN employees ON employees.company_id = companies.company_id
LEFT OUTER JOIN addresses ON addresses.employee_id = employees.employee_id AND addresses.address_id IS NOT NULL
WHERE address_id IS NULL GROUP BY companies.company_id;
```
---
Is there a way to fetch these results using a **single query** to the database without the use of Stored Procedures? It should add a column to the result set (0 or 1) depending on whether a company's employee has an address listed. | 1.Get all companies whose employee(s) have at least one address listed in the Address table.
2.Get all companies whose employees have no address listed in the Address table.
Is there a way to fetch these result using a single query to the database without the use of Stored Procedures?
Try this:
```
SELECT * FROM companies
```
---
Updated answer:
```
Select c.[company_id],c.[company_name], CASE WHEN count(a.address_id)>0 THEN 1 ELSE 0 END as [flag] from Company c
left join Employee e on e.[company_id] = c.[company_id]
left join Address a on a.[employee_id] = e.[employee_id]
group by c.[company_id],c.[company_name]
```
give me result:
```
ID NAME FLAG
2 Google 1
3 IBM 1
1 Microsoft 1
4 Yahoo 0
```
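The same shape can be reproduced with an in-memory SQLite database (schema and sample rows improvised from the question, not the answerer's fiddle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company  (company_id INTEGER, company_name TEXT);
CREATE TABLE employee (employee_id INTEGER, company_id INTEGER);
CREATE TABLE address  (address_id INTEGER, employee_id INTEGER);
INSERT INTO company VALUES (1,'Microsoft'),(2,'Google'),(3,'IBM'),(4,'Yahoo');
INSERT INTO employee VALUES (1,1),(2,2),(3,3),(4,4);
INSERT INTO address VALUES (10,1),(11,2),(12,3);  -- the Yahoo employee has no address
""")
rows = conn.execute("""
    SELECT c.company_id, c.company_name,
           CASE WHEN COUNT(a.address_id) > 0 THEN 1 ELSE 0 END AS flag
    FROM company c
    LEFT JOIN employee e ON e.company_id = c.company_id
    LEFT JOIN address a  ON a.employee_id = e.employee_id
    GROUP BY c.company_id, c.company_name
    ORDER BY c.company_id
""").fetchall()
```

Only Yahoo, whose employee has no address row, gets flag 0.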
sqlfiddle:
<http://sqlfiddle.com/#!6/4163a/3>
Update: sorry, that sqlfiddle is for MSSQL. This one is for MySQL:
<http://sqlfiddle.com/#!2/18d09/1> | I would just add another column to your existing query and remove your test for IS NULL on the address. You would get all companies, and a column (flag) indicating if it has no addresses on file.
```
SELECT
company_id,
companies.company_name,
MAX( CASE WHEN address.address_id IS NULL then 1 else 0 end ) as NoAddressOnFile
FROM
companies
LEFT OUTER JOIN employees
ON companies.company_id = employees.company_id
LEFT OUTER JOIN addresses
ON employees.employee_id = address.employee_id
GROUP BY
companies.company_id;
``` | SELECT query based on multiple tables | [
"",
"mysql",
"sql",
""
] |
I am trying to call a function from Oracle using the OLE DB Command in SSIS. I have the connection set up correctly, but I think my syntax for calling the function is incorrect?
```
EXEC UPDATE_PERSON ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? output
```
I have used this on a SQL stored proc for testing and it has worked. The Oracle connection is third-party, and they have just supplied the function name and expected parameters. I should get 1 return parameter.
The Error:
```
Error at Data Flow Task [OLE DB Command [3013]]: SSIS Error Code DTS_E_OLEDBERROR. AN OLE DB error has occurred. Error code 0x80040e14.
An OLE DB record is available. Source: "Microsoft OLE DB Provider for Oracle" Hresult: 0x80040E14
Description: "ORA-00900: invalid SQL statement".
``` | In your syntax you have to change the command " EXCU " to " EXEC ".
**EXEC** UPDATE\_PERSON ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? OUTPUT
I have to mention that there is no need for { }.
Other than that, you have to be aware of the full path of the SP that you are executing, by means of specifying [databaseName].[dbo]. if needed.
Regards,
S.ANDOURA | Try wrapping it in curly braces:
```
{EXEC UPDATE_PERSON ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? output}
```
I don't think Oracle functions have output parameters - are you absolutely certain this is a function and not a proc? functions don't normally perform updates either. | Execute Oracle Function from SSIS OLE DB Command | [
"",
"sql",
"oracle",
"ssis",
"sqlcommand",
"oledbcommand",
""
] |
I have two SQL tables:
**items** table
```
item_id name timestamp
--------------------------------------------
a apple 2014-01-01
b banana 2014-01-06
c tomato 2013-12-25
d chicken 2014-01-23
e cheese 2014-01-02
f carrot 2014-01-16
```
**items\_to\_categories** table
```
cat_id item_id
--------------------------------------------
1 a
5 c
2 e
3 a
4 f
5 d
5 b
5 a
```
Knowing `cat_id` (*eg 5*), I need to get 2 latest items (based on the `timestamp`) that belongs to that `cat_id`.
If I first get 2 rows from **items\_to\_categories** table:
```
SELECT item_id FROM items_to_categories WHERE cat_id = 5 LIMIT 2;
>> returns 'c' and 'd'
```
And if I then use the returned item ids to query the **items** table, I am not making sure the returned items will be the latest ones (ordered by `timestamp`).
The ideal result what I need to get selecting 2 latest items by `cat_id` (*eg 5*) would be:
```
d chicken 2014-01-23
b banana 2014-01-06
``` | ```
SELECT t1.item_id, t1.name, t1.timestamp
FROM items t1
LEFT JOIN items_to_categories t2 ON t1.item_id = t2.item_id
WHERE cat_id = 5
ORDER BY t1.timestamp DESC
LIMIT 2;
```
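A quick cross-check of this query against the question's sample data, using SQLite in place of MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (item_id TEXT, name TEXT, timestamp TEXT);
INSERT INTO items VALUES
  ('a','apple','2014-01-01'), ('b','banana','2014-01-06'),
  ('c','tomato','2013-12-25'), ('d','chicken','2014-01-23'),
  ('e','cheese','2014-01-02'), ('f','carrot','2014-01-16');
CREATE TABLE items_to_categories (cat_id INTEGER, item_id TEXT);
INSERT INTO items_to_categories VALUES
  (1,'a'),(5,'c'),(2,'e'),(3,'a'),(4,'f'),(5,'d'),(5,'b'),(5,'a');
""")
rows = conn.execute("""
    SELECT t1.item_id, t1.name, t1.timestamp
    FROM items t1
    LEFT JOIN items_to_categories t2 ON t1.item_id = t2.item_id
    WHERE t2.cat_id = 5
    ORDER BY t1.timestamp DESC
    LIMIT 2
""").fetchall()
```

It returns chicken and banana, the two newest items in category 5, matching the question's intended result.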
Use above query. | ```
SELECT top 2 I.item_id, I.name, I.timestamp
FROM items I
JOIN items_to_categories IC ON I.item_id = IC.item_id
WHERE IC.cat_id in (Select top 1 from items_to_categories order by timestamp desc)
ORDER BY I.timestamp DESC
``` | Select rows from a table matching other table records | [
"",
"mysql",
"sql",
"join",
""
] |
I have used the `REPLACE` function in order to delete email addresses from hundreds of records. However, as is known, the semicolon is the separator, usually between each email address and another. The problem is, there are a lot of semicolons left randomly.
**For example:** the field:
```
123@hotmail.com;456@yahoo.com;789@gmail.com;xyz@msn.com
```
Let's say that after I deleted two email addresses, the field content became like:
`;456@yahoo.com;789@gmail.com;`
I need to clean these fields from these extra undesired semicolons to be like
`456@yahoo.com;789@gmail.com`
For double semicolons I have used `REPLACE` as well by replacing each `;;` with `;`
Is there anyway to delete any semicolon that is not preceded or following by any character? | If you only need to replace semicolons at the start or end of the string, using a regular expression with the anchor '^' (beginning of string) / '$' (end of string) should achieve what you want:
```
with v_data as (
select '123@hotmail.com;456@yahoo.com;789@gmail.com;xyz@msn.com' value
from dual union all
select ';456@yahoo.com;789@gmail.com;' value from dual
)
select
value,
regexp_replace(regexp_replace(value, '^;', ''), ';$', '') as normalized_value
from v_data
```
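The same anchored replacements, sketched with Python's `re` module (the helper name is made up):

```python
import re

def strip_outer_semicolons(value):
    # '^;' matches only a leading semicolon, ';$' only a trailing one;
    # semicolons between addresses are untouched
    value = re.sub(r'^;', '', value)
    return re.sub(r';$', '', value)
```

Interior separators survive; only the dangling ones at either end are dropped.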
If you also need to replace stray semicolons from the middle of the string, you'll probably need regexes with lookahead/lookbehind. | You remove leading and trailing characters with TRIM:
```
select trim(both ';' from ';456@yahoo.com;;;789@gmail.com;') from dual;
```
To replace multiple characters with only one occurrence use REGEXP\_REPLACE:
```
select regexp_replace(';456@yahoo.com;;;789@gmail.com;', ';+', ';') from dual;
```
Both methods combined:
```
select regexp_replace( trim(both ';' from ';456@yahoo.com;;;789@gmail.com;'), ';+', ';' ) from dual;
``` | Delete certain character based on the preceding or succeeding character - ORACLE | [
"",
"sql",
"regex",
"oracle",
""
] |
Ok, I'm totally stumped.
I have a table GROCERY\_PRICES.
In it there is data for GROCERY\_ITEM, PRICE\_IN\_2012, and ESTIMATED\_PRICE\_IN\_2042.
I need to write a SQL SELECT statement that finds the items having the highest percentage of price increase, and then sort by GROCERY_ITEM.
I cannot for the life of me figure out how to grab the three items with the highest percentage of change over the time period using the data in the table.
This is the code I have, it returns all the data I need, but for every GROCERY\_ITEM rather than the three items with the highest percentage of change. I know it's 310 but I can't hardcode that number in.
```
SELECT grocery_item,
price_in_2012,
ESTIMATED_PRICE_IN_2042,
sum((ESTIMATED_PRICE_IN_2042 - PRICE_IN_2012) / PRICE_IN_2012) * 100 as Percent_Change
FROM grocery_prices
GROUP BY grocery_item,
price_in_2012,
ESTIMATED_PRICE_IN_2042
ORDER BY grocery_item;
```
I know I need to use a subquery, but I have no idea how to go about it.
Thanks.
EDIT: Some sample data:
 | Are you looking for something like this?
```
SELECT grocery_item, price_in_2012, estimated_price_in_2042, percent_change
FROM
(
SELECT grocery_item, price_in_2012, estimated_price_in_2042,
ROUND((estimated_price_in_2042 - price_in_2012) / price_in_2012 * 100, 2) AS percent_change,
ROW_NUMBER() OVER (ORDER BY ABS((estimated_price_in_2042 - price_in_2012) / price_in_2012 * 100) DESC) AS rank
FROM grocery_prices t
) q
WHERE rank <= 3;
```
Output:
```
| GROCERY_ITEM | PRICE_IN_2012 | ESTIMATED_PRICE_IN_2042 | PERCENT_CHANGE |
|--------------|---------------|-------------------------|----------------|
| B_001 | 0.8 | 3.28 | 310 |
| G_010 | 8 | 32.8 | 310 |
| R_003 | 4 | 16.4 | 310 |
```
Depending on your needs you may want to use `DENSE_RANK()` instead of `ROW_NUMBER()`
```
SELECT grocery_item, price_in_2012, estimated_price_in_2042, percent_change
FROM
(
SELECT grocery_item, price_in_2012, estimated_price_in_2042,
ROUND((estimated_price_in_2042 - price_in_2012) / price_in_2012 * 100, 2) AS percent_change,
DENSE_RANK() OVER (ORDER BY ABS((estimated_price_in_2042 - price_in_2012) / price_in_2012 * 100) DESC) AS rank
FROM grocery_prices t
) q
WHERE rank <= 3;
```
Output:
```
| GROCERY_ITEM | PRICE_IN_2012 | ESTIMATED_PRICE_IN_2042 | PERCENT_CHANGE |
|--------------|---------------|-------------------------|----------------|
| B_001 | 0.8 | 3.28 | 310 |
| G_010 | 8 | 32.8 | 310 |
| R_003 | 4 | 16.4 | 310 |
| E_001 | 0.62 | 1.78 | 187.1 |
| B_002 | 2.72 | 7.36 | 170.59 |
```
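Stripped of SQL, the ranking logic is just "sort by absolute percent change, take three"; a Python restatement with toy prices shaped like the fiddle's (my numbers, not the asker's table):

```python
data = {
    "B_001": (0.80, 3.28), "G_010": (8.00, 32.80), "R_003": (4.00, 16.40),
    "E_001": (0.62, 1.78), "B_002": (2.72, 7.36),
}
# percent change per item, rounded like the SQL, largest magnitude first
changes = sorted(
    ((item, round((new - old) / old * 100, 2)) for item, (old, new) in data.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
top3 = changes[:3]
```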
Here is **[SQLFiddle](http://sqlfiddle.com/#!4/a7ae1/12)** demo | top 3 items having the highest percentage of price increase
```
SELECT GROCERY_ITEM,PRICE_IN_2012,ESTIMATED_PRICE_IN_2042,
ROUND(((`ESTIMATED_PRICE_IN_2042` - `PRICE_IN_2012`) / PRICE_IN_2012) * 100) AS Percent_Change
FROM grocery_prices AS itm
GROUP BY GROCERY_ITEM
ORDER BY Percent_Change DESC LIMIT 3
```
[SQL Fiddle](http://sqlfiddle.com/#!2/c3471b/1)
hope this helps you! | SQL: How to select items from a table with the highest percentage of change | [
"",
"sql",
"oracle",
"subquery",
"max",
""
] |
below is the sample line of csv
```
012,12/11/2013,"<555523051548>KRISHNA KUMAR ASHOKU,AR",<10-12-2013>,555523051548,12/11/2013,"13,012.55",
```
You can see **KRISHNA KUMAR ASHOKU,AR** as a single field, but it is treated as two different fields, KRISHNA KUMAR ASHOKU and AR, because of the comma, even though they are enclosed in double quotes; still no luck.
I tried
```
BULK
INSERT tbl
FROM 'd:\1.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FIRSTROW=2
)
GO
```
is there any solution for it? | The answer is: you can't do that. See <http://technet.microsoft.com/en-us/library/ms188365.aspx>.
"Importing Data from a CSV file
Comma-separated value (CSV) files are not supported by SQL Server bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data into SQL Server. For information about the requirements for importing data from a CSV data file, see Prepare Data for Bulk Export or Import (SQL Server)."
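As a sketch of that conversion (my addition, not from the linked docs): Python's `csv` module understands the quoting rules, so it can rewrite the file with a TAB delimiter, assuming no TABs occur in the data:

```python
import csv
import io

# The sample line from the question, with a quoted field containing a comma
line = '012,12/11/2013,"<555523051548>KRISHNA KUMAR ASHOKU,AR",<10-12-2013>,555523051548,12/11/2013,"13,012.55",\n'

out = io.StringIO()
writer = csv.writer(out, delimiter='\t', lineterminator='\n')
for row in csv.reader(io.StringIO(line)):   # the reader honors the quotes
    writer.writerow(row)

fields = out.getvalue().rstrip('\n').split('\t')
```

The TAB-delimited output can then be loaded with `FIELDTERMINATOR = '\t'`.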
The general solution is that you must convert your CSV file into one that can be successfully imported. You can do that in many ways, such as by creating the file with a different delimiter (such as TAB) or by importing your table using a tool that understands CSV files (such as Excel or many scripting languages) and exporting it with a unique delimiter (such as TAB), from which you can then BULK INSERT. | They added support for this in SQL Server 2017 (14.x) CTP 1.1. You need to use the FORMAT = 'CSV' Input File Option for the BULK INSERT command.
To be clear, here is what the csv looks like that was giving me problems, the first line is easy to parse, the second line contains the curve ball since there is a comma inside the quoted field:
```
jenkins-2019-09-25_cve-2019-10401,CVE-2019-10401,4,Jenkins Advisory 2019-09-25: CVE-2019-10401:
jenkins-2019-09-25_cve-2019-10403_cve-2019-10404,"CVE-2019-10404,CVE-2019-10403",4,Jenkins Advisory 2019-09-25: CVE-2019-10403: CVE-2019-10404:
```
Broken Code
```
BULK INSERT temp
FROM 'c:\test.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a',
FIRSTROW= 2
);
```
Working Code
```
BULK INSERT temp
FROM 'c:\test.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a',
FORMAT = 'CSV',
FIRSTROW= 2
);
``` | sql server Bulk insert csv with data having comma | [
"",
"sql",
"sql-server",
"csv",
"bulkinsert",
""
] |
OK, so here's what I need :
Let's say we've got a table, e.g. `A` and need to get the number of rows.
In that case we'd `SELECT COUNT(*) FROM A`.
Now, what if we have 3 different tables, let's say `A`, `B` and `C`.
How can I get the total number of rows in all three of them? | ```
Select (select count(*) from a) +
(select count(*) from b) +
(select count(*) from c);
```
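The addition of scalar subqueries can be verified in any engine; an SQLite sketch with throwaway tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INTEGER); CREATE TABLE b (x INTEGER); CREATE TABLE c (x INTEGER);
INSERT INTO a VALUES (1),(2);
INSERT INTO b VALUES (1);
INSERT INTO c VALUES (1),(2),(3);
""")
# each parenthesised subquery yields one scalar count; the outer SELECT adds them
total = conn.execute("""
    SELECT (SELECT COUNT(*) FROM a)
         + (SELECT COUNT(*) FROM b)
         + (SELECT COUNT(*) FROM c)
""").fetchone()[0]
```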
is the easiest way if you only have 3 counts every time. | Try something like this:
```
SELECT
((select count(*) from demo1) +
(select count(*) from demo2) +
(select count(*) from demo3)) as Tbl3;
```
Sql fiddle:<http://sqlfiddle.com/#!2/d439f8/5> | Get SUM of COUNTs for several tables | [
"",
"mysql",
"sql",
""
] |
I have a set of one to one mappings A -> apple, B-> Banana and like that..
My table has a column with values as A,B,C..
Now I'm trying to use a select statement which will give me the direct result
```
SELECT
CASE
WHEN FRUIT = 'A' THEN FRUIT ='APPLE'
ELSE WHEN FRUIT ='B' THEN FRUIT ='BANANA'
FROM FRUIT_TABLE;
```
But I'm not getting the correct result, please help me.. | This is just the syntax of the case statement, it looks like this.
```
SELECT
CASE
WHEN FRUIT = 'A' THEN 'APPLE'
WHEN FRUIT = 'B' THEN 'BANANA'
END AS FRUIT
FROM FRUIT_TABLE;
```
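A runnable check of the corrected CASE expression (SQLite stands in for the asker's database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit_table (fruit TEXT)")
conn.executemany("INSERT INTO fruit_table VALUES (?)", [("A",), ("B",)])
rows = conn.execute("""
    SELECT CASE
             WHEN fruit = 'A' THEN 'APPLE'
             WHEN fruit = 'B' THEN 'BANANA'
           END AS fruit
    FROM fruit_table
    ORDER BY rowid
""").fetchall()
```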
As a reminder remember; no assignment is performed the value becomes the column contents. (If you wanted to assign that to a variable you would put it before the CASE statement). | Change to:
```
SELECT
CASE
WHEN FRUIT = 'A' THEN 'APPLE'
WHEN FRUIT = 'B' THEN 'BANANA'
END
FROM FRUIT_TABLE;
``` | SELECT using 'CASE' in SQL | [
"",
"sql",
"select",
"case",
""
] |
How to get the first day of the week and last day of the week when we input any one day of a week?
For example if we enter a date then the first(Monday) and last (Friday) day should be displayed.
That is, if we enter 24-Jan-2014 then 20-Jan-2014 and 24-Jan-2014 should be displayed.
Regards | Here's how you can do it:
```
DECLARE @yourdate date = getdate()
Select dateadd(ww, datediff(ww, 0, @yourdate), 0)
Select dateadd(ww, datediff(ww, 0, @yourdate), 4)
```
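The same Monday/Friday arithmetic in Python, which makes the intent easy to see (`week_monday_and_friday` is an invented name):

```python
from datetime import date, timedelta

def week_monday_and_friday(d):
    # date.weekday(): Monday == 0, so subtracting it always lands on Monday
    monday = d - timedelta(days=d.weekday())
    return monday, monday + timedelta(days=4)

first, last = week_monday_and_friday(date(2014, 1, 24))
```

For the question's example, 24-Jan-2014 yields 20-Jan-2014 and 24-Jan-2014.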
You set `@yourdate` to the date you want. The first SELECT will give you the first day and the second SELECT will give you the last date | This solves it and also wraps around year ends:
```
SELECT DATEADD(wk, DATEDIFF(d, 0, '01 January 2017') / 7, 0)
``` | How to get first day of the week and last day of the week in sql server 2008? | [
"",
"sql",
"sql-server",
"date",
""
] |
I am new to this group. I was trying to think of having a cursor with a conditional select statement.
Somehow like this pseudo code --
```
cursor test_cursor is
if condition == 't11'
then
select * from test1;
else
select * from test2;
end if;
begin
for cursorVal in test_cursor loop
//Doing the actual task on cursor data.
end loop;
commit;
end;
```
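For reference (my sketch, not from the question), the "either table" selection in the pseudo code above can also live in one statement by guarding each branch of a union on the flag; an SQLite version with improvised tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test1 (id INTEGER);
CREATE TABLE test2 (id INTEGER);
INSERT INTO test1 VALUES (1),(2);
INSERT INTO test2 VALUES (9);
""")

def fetch(condition):
    # One statement, two guarded branches; the bound flag picks which one fires
    return conn.execute("""
        SELECT id FROM test1 WHERE ? = 't11'
        UNION ALL
        SELECT id FROM test2 WHERE ? <> 't11'
    """, (condition, condition)).fetchall()
```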
Actually, I came across a scenario where I need to work on two different tables with the same DDL.
Based on some user input, I need to fetch data from one of the tables and manipulate it further in a procedure. As I said, both tables have the same DDL,
so I don't want to create two different cursors; the same business logic will be applied to both tables' data. It's just the user input which decides which table to fetch data from. One can think of this as latest data vs. historical data, given the way the DB is designed.
Hope I am clear with my scenario.
Thanks,
Arfeen. | The cursor can be declared as a union as described below. Depending on the content of variable `condition`, the cursor will either be based on Test1 or Test2.
```
SELECT * FROM Test1 WHERE condition = 't1'
UNION ALL
SELECT * FROM Test2 WHERE condition = 't2'
``` | What you are trying to achieve looks like it could either be achieved by better table or view design or by using a BULK COLLECT.
If you can - always consider database design first over code.
```
BEGIN
if condition == 't11' then
SELECT XXXXXX
BULK COLLECT INTO bulk_collect_ids
FROM your_table1;
else
SELECT XXXXXX
BULK COLLECT INTO bulk_collect_ids
FROM your_table2;
end if;
FOR indx IN 1 .. bulk_collect_ids.COUNT
LOOP
.
//Doing the actual task on bulk_collect_ids data.
.
END LOOP;
END;
``` | Oracle Cursor with conditional Select statement | [
"",
"sql",
"oracle",
""
] |
I have two tables, from which I want to select the **user, system and soft**.
**soft** records should be the ones with the latest "**tstamp2**"
First: table systems
```
USER SYSTEM ltstamp
======-----======----===================
User1 LA1 2013-05-06 11:27:26
User2 LA2 2013-06-07 11:27:26
```
Second: table software
```
Soft SYSTEM tstamp2
=====----=====------===================
Av1 LA1 2013-04-06 10:27:26
Av2 LA1 2013-05-06 11:27:26
Av1 LA2 2013-04-06 10:27:26
Av2 LA2 2013-06-07 11:27:26
``` | ```
SELECT s.user, s.system, sw.max_tstamp, sw2.soft
FROM
systems s INNER JOIN (SELECT system, MAX(tstamp2) AS max_tstamp
FROM software
GROUP BY system) sw
ON s.system = sw.system INNER JOIN software sw2
ON s.system = sw2.system AND sw.max_tstamp=sw2.tstamp2
```
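Cross-checking the groupwise-max join on the question's rows with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE systems  (user TEXT, system TEXT, ltstamp TEXT);
CREATE TABLE software (soft TEXT, system TEXT, tstamp2 TEXT);
INSERT INTO systems VALUES ('User1','LA1','2013-05-06 11:27:26'),
                           ('User2','LA2','2013-06-07 11:27:26');
INSERT INTO software VALUES ('Av1','LA1','2013-04-06 10:27:26'),
                            ('Av2','LA1','2013-05-06 11:27:26'),
                            ('Av1','LA2','2013-04-06 10:27:26'),
                            ('Av2','LA2','2013-06-07 11:27:26');
""")
rows = conn.execute("""
    SELECT s.user, s.system, sw.max_tstamp, sw2.soft
    FROM systems s
    JOIN (SELECT system, MAX(tstamp2) AS max_tstamp
          FROM software GROUP BY system) sw ON s.system = sw.system
    JOIN software sw2 ON s.system = sw2.system AND sw.max_tstamp = sw2.tstamp2
    ORDER BY s.system
""").fetchall()
```

Each system comes back paired with the soft row carrying its newest tstamp2 (Av2 in both cases).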
Please see fiddle [here](http://sqlfiddle.com/#!2/2bf0b/3). | You need a sub request to do it. For example :
```
select * from systems
where ltstamp = (select top 1 ltstamp from systems order by ltstamp desc)
``` | Select records with latest timestamps | [
"",
"mysql",
"sql",
"database",
"time",
"timestamp",
""
] |
I have the following rows in my table; I want to remove the duplicate semicolons (;).
```
ColumnName
Test;Test2;;;Test3;;;;Test4;
Test;;;Test2;Test3;;Test4;
Test;;;;;;;;;;;;;;;;;Test2;;;;Test3;;;;;Test4;
```
From the above rows, I want to remove the duplicate semicolons (;) and keep only one semicolon (;)
Like below
```
ColumnName
Test;Test2;Test3;Test4
Test;Test2;Test3;Test4
Test;Test2;Test3;Test4
```
Thanks
Rajesh | I like to use this pattern. AFAIK, originally posted by Jeff Moden on SQLServerCentral ([link](http://www.sqlservercentral.com/articles/T-SQL/68378/)).
I've left the terminal semi-colon in all the rows as they are single-occurrence items. Use `SUBSTRING()` or `LEFT()` to remove it.
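The three nested REPLACEs in the pattern below can be mirrored in Python to see why it works, assuming `^|` never occurs in the data:

```python
def collapse_semicolons(s):
    # ';' -> '^|', drop '|^' (the interior run boundaries), then '^|' -> ';'
    return s.replace(';', '^|').replace('|^', '').replace('^|', ';')

samples = [
    'Test;Test2;;;Test3;;;;Test4;',
    'Test;;;Test2;Test3;;Test4;',
    'Test;;;;;;;;;;;;;;;;;Test2;;;;Test3;;;;;Test4;',
]
results = [collapse_semicolons(s) for s in samples]
```

Every run of semicolons, however long, collapses to a single one.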
```
CREATE TABLE #MyTable (MyColumn VARCHAR(500))
INSERT INTO #MyTable
SELECT 'Test;Test2;;;Test3;;;;Test4;' UNION ALL
SELECT 'Test;;;Test2;Test3;;Test4;' UNION ALL
SELECT 'Test;;;;;;;;;;;;;;;;;Test2;;;;Test3;;;;;Test4;'
 -- Where '^|' and its reverse '|^' are character sequences that do not occur in the table/field
SELECT REPLACE(REPLACE(REPLACE(MyColumn, ';', '^|'), '|^', ''), '^|', ';')
FROM #MyTable
-- If you MUST remove terminal semi-colon
SELECT CleanText2 = LEFT(CleanText1, LEN(CleanText1)-1)
FROM
(
SELECT CleanText1 = REPLACE(REPLACE(REPLACE(MyColumn, ';', '^|'), '|^', ''), '^|', ';')
FROM #MyTable
)DT
``` | ```
DECLARE @TestTable TABLE(Column1 NVARCHAR(MAX))
INSERT INTO @TestTable VALUES
('Test;Test2;;;Test3;;;;Test4;'),
('Test;;;Test2;Test3;;Test4;'),
('Test;;;;;;;;;;;;;;;;;Test2;;;;Test3;;;;;Test4;')
SELECT replace(
replace(
replace(
LTrim(RTrim(Column1)),
';;',';|'),
'|;',''),
'|','') AS Fixed
FROM @TestTable
```
**Result Set**
```
╔═════════════════════════╗
║ Fixed ║
╠═════════════════════════╣
║ Test;Test2;Test3;Test4; ║
║ Test;Test2;Test3;Test4; ║
║ Test;Test2;Test3;Test4; ║
╚═════════════════════════╝
``` | SQL Server Query to replace extra ";;;" characters | [
"",
"sql",
"sql-server",
""
] |
In SQL Server, if I write a command like this -
> EXEC SP\_HELP EmployeeMaster
it'll return my table fields and all the default constraints defined on it. Now I just want to know: is there any similar command in Postgres?
I know that to get just the table fields in Postgres, you can use the query below.
```
select column_name from information_schema.columns where table_name = 'EmployeeMaster'
``` | No, there's nothing like that *server-side*. Most of PostgreSQL's descriptive and utility commands are implemented in the `psql` client, meaning they are not available to other client applications.
You will need to query the `information_schema` to collect the information you require yourself if you need to do this in a client app.
If you're using PgAdmin-III, well, `psql` is a useful and powerful tool, well worth learning. | You can use the **pg\_constraint, pg\_class, pg\_attribute** to get the result. See [here](http://postgresql.org) in detail | Is there any equivalent of sp_help in postgres | [
"",
"sql",
"database",
"postgresql",
""
] |
I need to delete a bunch of records from a table. I have found a query that will work to do the job, but I am told that sub-queries are not supported, but joins are.. Is it possible to convert the following query to a join. If so how?
```
SELECT *
FROM PRODUCT
WHERE PROD_NAME IN
(SELECT PROD_NAME
FROM PRODUCT
WHERE BRAND = 'Apt88'
AND NAME = 'Version'
AND VALUE IN ('3.7', '3.8'))
```
Any help appreciated,
Ted | Here is the proper equivalent:
```
SELECT p.*
FROM PRODUCT p join
(SELECT distinct PROD_NAME
FROM PRODUCT
WHERE BRAND = 'Apt88' AND NAME = 'Version' AND VALUE IN ('3.7', '3.8')
) pn
on p.prod_name = pn.prod_name;
```
Note the use of `distinct`. If your rows are distinct (but not necessarily `prod_name`) then you can do:
```
SELECT distinct p.*
FROM PRODUCT p join
PRODUCT pn
     on p.prod_name = pn.prod_name
WHERE pn.BRAND = 'Apt88' AND pn.NAME = 'Version' AND pn.VALUE IN ('3.7', '3.8');
``` | You are right - deleting from the same table you are selecting from is not supported in MySQL. But you can trick that with another subquery
```
DELETE FROM PRODUCT
WHERE PROD_NAME IN
(
select * from
(
SELECT PROD_NAME
FROM PRODUCT
WHERE BRAND = 'Apt88' AND NAME = 'Version' AND VALUE IN ('3.7', '3.8')
) x
)
``` | Is it possible to convert SELECT-IN-SELECT query to a JOIN Query? | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I'm building up intelligence on tables and generating values, and I'd like to process those values. This is not production code, but a debug script, so I'd simply like to do the following in MS SQL Server:
```
@declare @Value1 int;
set @Value1 = 1;
@declare @Value2 int;
set @Value2 = 2;
@declare @Value3 int;
@Value3 = @Value1 + @Value2
```
How can I perform this - no select is required at all. I've not seen anything on how to do this. | Maybe this?
```
declare @Value1 int;
set @Value1 = 1;
declare @Value2 int;
set @Value2 = 2;
declare @Value3 int;
set @Value3 = @Value1 + @Value2
```
Or even: (For Sql Server 2008+)
```
declare @Value1 int= 1;
declare @Value2 int = 2;
declare @Value3 int = @Value1 + @Value2
``` | After you do the calculation, you can do:
```
Set @Value3 = @Value1 + @Value2
Print Convert(varchar, @Value3)
```
to see the result | Math operation in SQL without select? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I currently have 2 tables which needs to be joined.
The first table is a big table,millions of records. The second is a matching table.
I would want to join the 2 table. A table of the intended result is shown below.
I would need tips on how to write the SQL to join the 2 tables.
```
Table 1
ID X Column X
1 X1
2 X2
3 X3
4 X4
5 X5
6 X6
... ...
the list goes on
Table 2
ID Column Y
1 Y1
3 Y2
6 Y3
11 Y4
Intended result
ID X Column X Column Y
1 X1 Y1
2 X2 Y1
3 X3 Y2
4 X4 Y2
5 X5 Y2
6 X6 Y3
7 X7 Y3
8 X8 Y3
9 X9 Y3
10 X10 Y3
11 X11 Y4
12 X12 Y4
``` | ```
SELECT idX, columnX, columnY
FROM (
SELECT id, columnY, @prevID AS prevID, @prevID := id
FROM table2
CROSS JOIN (SELECT @prevID := NULL) init
ORDER BY id DESC) AS t2
JOIN table1 AS t1 ON t1.idX >= t2.id AND (t2.prevID IS NULL OR t1.idX < t2.prevID)
```
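What the variable trick computes, namely that each `idX` takes the `columnY` of the largest Table 2 id at or below it, restated in Python on the question's data:

```python
table2 = [(1, "Y1"), (3, "Y2"), (6, "Y3"), (11, "Y4")]   # (id, columnY), ascending

def label_for(id_x):
    match = None
    for threshold, column_y in table2:
        if threshold <= id_x:            # last threshold not exceeding idX wins
            match = column_y
    return match

labels = {id_x: label_for(id_x) for id_x in range(1, 13)}
```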
[DEMO](http://www.sqlfiddle.com/#!2/cdbd1/1) | ```
SELECT T1.IDX, T1.ColumnX , T2.ColumnY FROM TABLE1 T1
INNER JOIN TABLE2 T2 ON T1.IDX = T2.ID
``` | Joining table with range ID matching | [
"",
"mysql",
"sql",
"join",
""
] |
Suppose my database table structure is like this
```
id name college address
1 xxx nnn xn
2 yyy nnm yn
3 zzz nnz zn
```
If I want to get the student details based on the name, in SQL it is like this:
select \* from student where name = 'xxx'
So how is it possible in a Redis database? | Redis, like other NoSQL datastores, has different requirements based on what you are going to be doing.
Redis has several data structures that could be useful depending on your need. For example, given your desire for a `select * from student where name = 'xxx'` you could use a Redis `hash`.
```
redis 127.0.0.1:6379> hmset student:xxx id 1 college nnn address xn
OK
redis 127.0.0.1:6379> hgetall student:xxx
1) "id"
2) "1"
3) "college"
4) "nnn"
5) "address"
6) "xn"
```
If you have other queries though, like you want to do the same thing but select on `where college = 'nnn'` then you are going to have to denormalize your data. Denormalization is usually a bad thing in SQL, but in NoSQL it is very common.
If your primary query will be against the name, but you may need to query against the college, then you might do something like adding a `set` in addition to the hashes.
```
redis 127.0.0.1:6379> sadd college:nnn student:xxx
(integer) 1
redis 127.0.0.1:6379> smembers college:nnn
1) "student:xxx"
```
With your data structured like this, if you wanted to find all information for names going to college nnn, you would first select the `set`, then select each `hash` based on the name returned in the `set`.
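The hash-plus-set layout can be modeled with plain Python dicts and sets (a stand-in for the Redis commands, not a real client):

```python
# One dict per Redis hash (HGETALL student:<name>), plus a set index per college
store = {
    "student:xxx": {"id": "1", "college": "nnn", "address": "xn"},
    "student:yyy": {"id": "2", "college": "nnm", "address": "yn"},
}
college_index = {"college:nnn": {"student:xxx"}, "college:nnm": {"student:yyy"}}

# select * from student where name = 'xxx'  ->  direct key lookup
row = store["student:xxx"]

# where college = 'nnn'  ->  SMEMBERS on the index, then one hash fetch per member
rows = [store[key] for key in college_index["college:nnn"]]
```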
Your requirements will generally drive the design and the structures you use. | With just 6 principles (which I collected [here](https://github.com/MehmetKaplan/Redis_Table)), it is very easy for a SQL minded person to adapt herself to Redis approach. Briefly they are:
> 1. The most important thing is that, don't be afraid to generate lots of key-value pairs. So feel free to store each row of the table in a different key.
> 2. Use Redis' hash map data type
> 3. Form key name from primary key values of the table by a separator (such as ":")
> 4. Store the remaining fields as a hash
> 5. When you want to query a single row, directly form the key and retrieve its results
> 6. When you want to query a range, use wild char "\*" towards your key.
The link just gives a simple table example and how to model it in Redis. Following those 6 principles you can continue to think like you do for normal tables. (Of course without some not-so-relevant concepts as CRUD, constraints, relations, etc.) | Design Redis database table like SQL? | [
"",
"sql",
"redis",
"nosql",
""
] |
My SQL skills have atrophied and I need some help connecting two tables through a third one that contains foreign keys to those two.
The Customer table has data I need. The Address table has data I need. They are not directly related to each other, but the CustomerAddress table has both CustomerID and AddressID columns.
Specifically, I need from the Customer table:
```
FirstName
MiddleName
LastName
```
...and from the Address table:
```
AddressLine1
AddressLine2
City
StateProvince,
CountryRegion
PostalCode
```
Here is my awkward attempt, which syntax LINQPad does not even recognize ("*Incorrect syntax near '='*").
```
select C.FirstName, C.MiddleName, C.LastName, A.AddressLine1, A.AddressLine2, A.City, A.StateProvince,
A.CountryRegion, A.PostalCode
from SalesLT.Customer C, SalesLT.Address A, SalesLT.CustomerAddress U
left join U.CustomerID = C.CustomerID
where A.AddressID = U.AddressID
```
Note: This is a SQL Server table, specifically AdventureWorksLT2012\_Data.mdf | ```
select C.FirstName, C.MiddleName, C.LastName, A.AddressLine1, A.AddressLine2, A.City, A.StateProvince,
A.CountryRegion, A.PostalCode
from SalesLT.CustomerAddress U INNER JOIN SalesLT.Address A
ON A.AddressID = U.AddressID
INNER JOIN SalesLT.Customer C
ON U.CustomerID = C.CustomerID
```
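A miniature runnable version of the junction-table join (SQLite with made-up rows, rather than the AdventureWorks schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (CustomerID INTEGER, FirstName TEXT);
CREATE TABLE Address  (AddressID INTEGER, City TEXT);
CREATE TABLE CustomerAddress (CustomerID INTEGER, AddressID INTEGER);
INSERT INTO Customer VALUES (1,'Ann'),(2,'Bob');
INSERT INTO Address  VALUES (10,'Seattle'),(20,'Boston');
INSERT INTO CustomerAddress VALUES (1,10),(2,20);
""")
rows = conn.execute("""
    SELECT c.FirstName, a.City
    FROM CustomerAddress u
    JOIN Address  a ON a.AddressID  = u.AddressID
    JOIN Customer c ON c.CustomerID = u.CustomerID
    ORDER BY c.CustomerID
""").fetchall()
```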
I have only used `INNER JOINS` but obviously you can replace them with `LEFT` or `RIGHT` joins depending on your requirements. | ```
SELECT
c.FirstName,
c.MiddleName,
c.LastName,
a.AddressLine1
a.AddressLine2
a.City
a.StateProvince,
a.CountryRegion
a.PostalCode
FROM Address a
JOIN CustomerAddress ca
ON ca.AddressID = a.AddressID
JOIN Customer c
ON c.CustomerID = ca.CustomerID
WHERE ...
``` | How can I relate two tables in SQL that are related through a third one? | [
"",
"sql",
"sql-server",
"t-sql",
"join",
"relational",
""
] |
I am a bit confused by the nature and working of a query. I tried to access a database which contains each name more than once with the same EMPid, so when I accessed it in my DROP DOWN LIST the same repetition was in there too. I tried to remove the repetition by putting DISTINCT in the query, but that didn't work. Later I modified it another way and that worked, but **WHY THAT WORKED, I DON'T UNDERSTAND?**
QUERY THAT DIDN'T WORK
```
var names = (from n in DataContext.EmployeeAtds select n).Distinct();
```
QUERY THAT WORKED (though I don't know how):
```
var names = (from n in DataContext.EmployeeAtds select new {n.EmplID, n.EmplName}).Distinct();
```
Why did the 2nd query work exactly like I wanted (picking each name one time)?
I'm using MVC 3 and LINQ to SQL, and I am a newbie. | Both queries are different. I will explain both queries in SQL, which will help you understand them.
Your first query is:
```
var names = (from n in DataContext.EmployeeAtds select n).Distinct();
```
SQL:-
> SELECT DISTINCT [t0].[EmplID], [t0].[EmplName], [t0].[Dept]
> FROM [EmployeeAtd] AS [t0]
Your second query is:
```
(from n in EmployeeAtds select new {n.EmplID, n.EmplName}).Distinct()
```
SQL:-
> SELECT DISTINCT [t0].[EmplID], [t0].[EmplName] FROM [EmployeeAtd] AS
> [t0]
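The effect of the two generated statements is easy to reproduce. Here is a hedged SQLite sketch (invented data) showing that DISTINCT over every column keeps rows that differ only in a non-selected column, while DISTINCT over the projected pair collapses them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE EmployeeAtd (EmplID INTEGER, EmplName TEXT, Dept TEXT);
    -- the same (EmplID, EmplName) pair repeated with a different Dept
    INSERT INTO EmployeeAtd VALUES (1, 'Ali', 'HR'), (1, 'Ali', 'IT'), (2, 'Sam', 'HR');
""")

# DISTINCT on all columns: the Dept values differ, so both 'Ali' rows survive
all_cols = conn.execute(
    "SELECT DISTINCT EmplID, EmplName, Dept FROM EmployeeAtd").fetchall()

# DISTINCT on the projection only: one row per (EmplID, EmplName)
pair_only = conn.execute(
    "SELECT DISTINCT EmplID, EmplName FROM EmployeeAtd").fetchall()
```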
Now you can see the SQL for both queries. The first query applies DISTINCT to all columns of the table, but the second query applies DISTINCT only to the required columns, so it gives you the desired result. | Try this:
```
var names = DataContext.EmployeeAtds.Select(x => x.EmplName).Distinct().ToList();
```
**Update:**
```
var names = DataContext.EmployeeAtds
.GroupBy(x => x.EmplID)
.Select(g => new { EmplID = g.Key, EmplName = g.FirstOrDefault().EmplName })
.ToList();
``` | nature of SELECT query in MVC and LINQ TO SQL | [
"",
"asp.net",
"sql",
"asp.net-mvc",
"linq-to-sql",
""
] |
I am trying to use a login form with SQL Express 2012 and vb.net. I have the db connection, now I have the following problem;
I get the error "Incorrect syntax near '='" at the line `data = command.ExecuteReader`.
Any suggestions? Here is the code
Thanks!!!!!!!
```
Imports System.Data.SqlClient
Imports System.Data.OleDb
Public Class login
Private Sub login_user_Click(sender As Object, e As EventArgs) Handles login_user.Click
Dim conn As New SqlConnection
If conn.State = ConnectionState.Closed Then
conn.ConnectionString = ("Server=192.168.0.2;Database=Sunshinetix;User=sa;Password=sunshine;")
End If
Try
conn.Open()
Dim sqlquery As String = "SELECT = FROM Users Where Username = '" & username_user.Text & "';"
Dim data As SqlDataReader
Dim adapter As New SqlDataAdapter
Dim command As New SqlCommand
command.CommandText = sqlquery
command.Connection = conn
adapter.SelectCommand = command
data = command.ExecuteReader()
While data.Read
If data.HasRows = True Then
If data(2).ToString = password_user.Text Then
MsgBox("Sucsess")
Else
MsgBox("Login Failed! Please try again or contact support")
End If
Else
MsgBox("Login Failed! Please try again or contact support")
End If
End While
Catch ex As Exception
End Try
End Sub
```
End Class | The problem is that your query says `SELECT = FROM`, which is obviously a typo; the correct syntax is `SELECT * FROM`.
See my code to avoid `SqlInjection`

Try this code:
```
Dim conn As New SqlConnection
If conn.State = ConnectionState.Closed Then
conn.ConnectionString = ("Server=192.168.0.2;Database=Sunshinetix;User=sa;Password=sunshine;")
End If
Try
conn.Open()
Dim sqlquery As String = "SELECT * FROM Users Where Username = @user;"
Dim data As SqlDataReader
Dim adapter As New SqlDataAdapter
Dim parameter As New SqlParameter
Dim command As SqlCommand = New SqlCommand(sqlquery, conn)
With command.Parameters
.Add(New SqlParameter("@user", username_user.Text)) ' the @user parameter filters Username, so bind the username textbox
End With
command.Connection = conn
adapter.SelectCommand = command
data = command.ExecuteReader()
While data.Read
If data.HasRows = True Then
If data(2).ToString = password_user.Text Then
MsgBox("Success")
Else
MsgBox("Login Failed! Please try again or contact support")
End If
Else
MsgBox("Login Failed! Please try again or contact support")
End If
End While
Catch ex As Exception
End Try
```
I would recommend using a parameterized query to avoid [SQL Injection](http://en.wikipedia.org/wiki/SQL_injection).
`SELECT = FROM Users ....`
to
`SELECT * FROM Users ....` | SQL Command.ExecuteReader vb.net | [
"",
"sql",
"vb.net",
""
] |
How would I add "OR" to this where statement when setting my instance variable in a Rails controller?
`@activities = PublicActivity::Activity.order("created_at DESC").where(owner_type: "User", owner_id: current_user.followed_users.map {|u| u.id}).where("owner_id IS NOT NULL").page(params[:page]).per_page(20)`
I want to set @activities to records where owner\_id is equal to either current\_user.id or the current\_user.followed\_users. I tried adding `.where(owner_id: current_user.id)` but that seems to negate the entire query and I get no results at all. The query looked like this after I added to it:
`@activities = PublicActivity::Activity.order("created_at DESC").where(owner_type: "User", owner_id: current_user.followed_users.map {|u| u.id}).where(owner_id: current_user.id).where("owner_id IS NOT NULL").page(params[:page]).per_page(20)`
How can I add an OR condition so that I pull records where owner\_id is either current\_user or current\_user.followed\_users?
Thanks! | The quick fix is to include current\_user's id in the array.
```
# Also use pluck instead of map
ids = current_user.followed_users.pluck(:id) << current_user.id
PublicActivity::Activity.order("created_at DESC")
.where(owner_type: "User", owner_id: ids).page(params[:page])
``` | Rails doesn't support "OR" directly in this context, so you either have to reconstruct your query to avoid it (as shown in [the answer from Billy Chan](https://stackoverflow.com/a/21208348/1008891)) or provide the SQL as an argument to the `where` as in:
```
.where("owner_type = 'User' and (owner_id = ? OR owner_id in (?))", current_user.id,
current_user.followed_users.pluck(:id))
``` | How to add OR to a where statement when setting Rails instance variable? | [
"",
"sql",
"ruby-on-rails",
"postgresql",
""
] |
I'm trying to write a small trigger for my DB, but I'm getting the error:
SQLException: You can't specify target table 'p' for update in FROM clause
The query I'm using is:
```
create trigger CheckVendibilitaPP
after update on hotel
for each row
if (old.IN_VENDITA=true and new.IN_VENDITA=false) then
update pacchettopreconfigurato as p
set IN_VENDITA=false
where p.id_pp in (select p2.id_pp
from pacchettopreconfigurato as p2 join hotel as h on p2.DESTINAZIONE_PACCHETTO = h.LUOGO
where h.id = new.id)
and (select count(*)
from pacchettopreconfigurato as p3 join hotel as h2 on p3.DESTINAZIONE_PACCHETTO = h2.LUOGO
where p3.id_pp = p.id_pp) > 0;
end if;;
```
I saw some answers to similar questions and tried some of the techniques shown there, but none of them was effective. I also tried wrapping the first part of the query in a "SELECT id\_pp FROM (select...) AS c" block, but that wasn't working either.
How should I rewrite the query so that it stops throwing the error? | As suggested in the other answers, the solution was to break the WHERE clause into sub-parts.
The need for parameters to be used in the sub-queries led me to write two Boolean functions taking parameters `p.id_pp, new.id` and `p.id_pp` respectively.
The where clause thus became a simple `func_1(p.id_pp,new.id) and func_2(p.id_pp)` | The MySQL docs say that:
> Currently, you cannot update a table and select from the same table in
> a subquery
See second to last line [here](http://dev.mysql.com/doc/refman/5.0/en/update.html).
What you could do is run the two sub-queries separately, putting their results into variables. Then substitute the variable for the suq-queries in the UPDATE statement. | SQLException: You can't specify target table 'p' for update in FROM clause | [
"",
"mysql",
"sql",
""
] |
Table and (columns) in question are:
```
Attachment (att_id)
Assignment (att_id, ctg_id, and itm_id)
```
I have tried for several hours to call the data I am looking for, but to no avail. I can't figure out the logic behind it, and it seems so simple.
I need to call all the rows in the Attachment Table where the att\_id **is not** linked to a ctg\_id **or** itm\_id in the Assignment table.
I make the join on att\_id = att\_id but that brings up all the rows in the Attachment table that **are** linked to a ctg\_id or itm\_id, when I need just the opposite.
Very frustrating. Any advice/help is greatly appreciated. | This should select all rows in Attachment that are not referenced by att\_id in Assignment.
```
SELECT *
FROM Attachment
WHERE att_id NOT IN (SELECT att_id FROM Assignment)
```
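The same pattern in a runnable form: a minimal SQLite sketch with invented ids. Note that `NOT IN` returns no rows at all if the subquery can yield a NULL, so adding `WHERE att_id IS NOT NULL` inside the subquery is a sensible habit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Attachment (att_id INTEGER);
    CREATE TABLE Assignment (att_id INTEGER, ctg_id INTEGER, itm_id INTEGER);

    INSERT INTO Attachment VALUES (1), (2), (3);
    INSERT INTO Assignment VALUES (1, 10, 100);   -- only att_id 1 is linked
""")

unlinked = conn.execute("""
    SELECT att_id FROM Attachment
    WHERE att_id NOT IN (SELECT att_id FROM Assignment WHERE att_id IS NOT NULL)
    ORDER BY att_id
""").fetchall()
```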
JOIN is typically used to find links, not to find non-links. WHERE x NOT IN ([blah]) is used to find missing links. | A `LEFT OUTER JOIN` is an easy way to find non-matches:
```
select at.*
from Attachment at
left outer join Assignment as on at.att_id = as.att_id
where as.att_id is null
``` | Select Data in SQL from No links between another table/column | [
"",
"sql",
"sql-server-2008",
"select",
"join",
""
] |
Is there any alternative to using a CASE statement in T-SQL?
for example :
```
select case day
when 0
then 'sun'
when 1
then 'mon' ...
else 'sat'
end as dayName
from table
```
Now I want to know whether there is a better way than a CASE statement, such as a struct in C#, a simple function, or a dictionary: you send a key and use the value member instead of a CASE expression. | If I've understood your question correctly... then yes, there are other ways.
Whether they are better or not depends largely on context!
A dictionary is a collection with an index of sorts. In this example, `#DayTable` plays that role.
If you are asking about conditional programming constructs, then within a `SELECT` statement, I think the answer is no, prior to SQL Server 2012 which has `IIF()` [BOL Ref](http://technet.microsoft.com/en-us/library/hh213574.aspx)
This differs somewhat to the `IF` controlling logic flow - outside of a `SELECT` statement.
```
IF OBJECT_ID('tempdb..#DayTable')IS NOT NULL DROP TABLE #DayTable
IF OBJECT_ID('tempdb..#ArbEvent')IS NOT NULL DROP TABLE #ArbEvent
CREATE TABLE #DayTable (DayID INT, DayTextName VARCHAR(30))
INSERT INTO #DayTable
(DayID, DayTextName)
VALUES
(1, 'Monday'), (2, 'Tuesday'), (3, 'Wednesday'), (4, 'Thursday'), (5, 'Friday'), (6, 'Saturday'), (7, 'Sunday')
CREATE TABLE #ArbEvent (EventName VARCHAR(30), DayID INT)
INSERT INTO #ArbEvent
(EventName, DayID)
VALUES
('John''s Birthday', 1), ('Susan''s Funeral', 2), ('Pick Jessica up', 6)
SELECT AE.EventName
,AE.DayID
,DT.DayTextName
FROM #ArbEvent AE
JOIN #DayTable DT ON AE.DayID = DT.DayID
``` | You are asking for another way. Here is another way:
```
select left(datename(dw, dateadd(day, [day], '2014-01-25')), 3)
```
The constant `'2014-01-25'` is simply your favorite Saturday.
I'm not advocating using this. I think the `case` statement is much clearer in intention. | Another way instead Select Case | [
"",
"sql",
"t-sql",
""
] |
I have a select statement which returns duplicates in a table. I want to now be able to delete these duplicates. How can I do this with ORACLE SQL?
```
SELECT
c1.name AS c1name,
c2.name AS c2name,
c1.cy_code AS c1country,
c2.cy_code AS c2country,
c1.latitude AS c1lat,
c1.longitude AS c1long,
c2.latitude AS c2lat,
c2.longitude AS c2long,
calc_distance(c1.latitude,
c1.longitude,
c2.latitude,
c2.longitude) AS distance
FROM city_temp c1,
city_temp c2
WHERE c2.name = c1.name
AND c2.cy_code = c1.cy_code
AND calc_distance(c1.latitude, c1.longitude, c2.latitude, c2.longitude) = 0
AND c1.latitude = c2.latitude
AND c1.longitude = c2.longitude;
``` | You would use an exists clause to delete all rows that have equals with higher rowids:
```
delete from city_temp c1
where exists
(
select *
from city_temp c2
where c2.name = c1.name
and c2.cy_code = c1.cy_code
and c2.latitude = c1.latitude
and c2.longitude = c1.longitude
and calc_distance(c1.latitude, c1.longitude, c2.latitude, c2.longitude) = 0
and c2.rowid > c1.rowid
);
``` | you can try this if you could change the function calc\_distance
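The rowid trick can be tried outside Oracle too; SQLite also exposes an implicit `rowid`, so here is a hedged sketch with simplified columns (no `calc_distance`, just exact duplicates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE city_temp (name TEXT, cy_code TEXT, latitude REAL, longitude REAL);
    INSERT INTO city_temp VALUES
        ('Oslo',   'NO', 59.9, 10.7),
        ('Oslo',   'NO', 59.9, 10.7),   -- exact duplicate of the first row
        ('Bergen', 'NO', 60.4,  5.3);
""")

# delete any row for which a duplicate with a higher rowid exists,
# which keeps exactly one copy of each group
conn.execute("""
    DELETE FROM city_temp
    WHERE EXISTS (
        SELECT 1 FROM city_temp c2
        WHERE c2.name      = city_temp.name
          AND c2.cy_code   = city_temp.cy_code
          AND c2.latitude  = city_temp.latitude
          AND c2.longitude = city_temp.longitude
          AND c2.rowid > city_temp.rowid
    )
""")

remaining = conn.execute("SELECT name FROM city_temp ORDER BY name").fetchall()
```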
```
DELETE FROM city_temp
WHERE rowid not in
(SELECT MIN(rowid)
FROM city_temp
WHERE calc_distance(latitude, longitude)=0
GROUP BY name, cy_code, latitude,longitude
)
``` | How can I turn this into a DELETE statement in ORACLE? | [
"",
"sql",
"oracle",
"duplicates",
""
] |
This is a research database (yes, I know, poorly designed) which includes a questionnaire for participants. This questionnaire is given to each participant at three points in the study (P1/P2/P3).
The table structure for the questionnaire:
Questionnaire 1.
```
+--+-------+--------+--+--+--+--+--+---
|PK|Part_ID|Phase_Id|Q1|q2|q3|q4|q5|...
+--+-------+--------+--+--+--+--+--+---
|1 |A010 |P1 |1 |0 |1 |0 |1 |...
+--+-------+--------+--+--+--+--+--+---
|2 |A010 |P2 |0 |1 |0 |1 |0 |...
+--+-------+--------+--+--+--+--+--+---
|3 |A010 |P3 |1 |0 |1 |0 |1 |...
+--+-------+--------+--+--+--+--+--+---
```
and so forth for each participant.
All participants have a maximum of 3 phases, but may have fewer. Due to the table structure, the column count is always the same.
The researcher wants to get all the data onto a single line for input into an analytical program (SPSS), so the output structure needs to look like this:
```
+-------+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|Part_ID|P1_Q1|P1_q2|P1_q3|P1_q4|P1_q5|P2_Q1|P2_q2|P2_q3|P2_q4|P2_q5|P3_Q1|P3_q2|P3_q3|P3_q4|P3_q5|
+-------+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|A010 |1 |0 |1 |0 |1 |0 |1 |0 |1 |0 |1 |0 |1 |0 |1 |
+-------+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
```
Using SQL Server 2008 Express.
I had a look at a few pivot and unpivot examples and don't think they are the right way to go (no aggregation required).
So any pointers would be great.
Regards
Roger
(I hope the demo format comes out OK.) | You can `UNPIVOT` your table first and then `PIVOT` it again:
```
SELECT Part_Id,
P1_Q1, P1_Q2, P1_Q3, P1_Q4, P1_Q5,
P2_Q1, P2_Q2, P2_Q3, P2_Q4, P2_Q5,
P3_Q1, P3_Q2, P3_Q3, P3_Q4, P3_Q5
FROM
(
SELECT Part_ID, Phase_Id + '_' + Question Question, Value
FROM
(
SELECT Part_ID, Phase_Id, q1, q2, q3, q4, q5
FROM Table1
) s
UNPIVOT
(
Value FOR Question IN (q1, q2, q3, q4, q5)
) u
) s
PIVOT
(
MAX(Value) FOR Question IN
(
P1_Q1, P1_Q2, P1_Q3, P1_Q4, P1_Q5,
P2_Q1, P2_Q2, P2_Q3, P2_Q4, P2_Q5,
P3_Q1, P3_Q2, P3_Q3, P3_Q4, P3_Q5
)
) p
```
or by using conditional aggregation
```
SELECT Part_Id,
MAX(CASE WHEN Phase_id = 'P1' THEN Q1 END) P1_Q1,
MAX(CASE WHEN Phase_id = 'P1' THEN Q2 END) P1_Q2,
MAX(CASE WHEN Phase_id = 'P1' THEN Q3 END) P1_Q3,
MAX(CASE WHEN Phase_id = 'P1' THEN Q4 END) P1_Q4,
MAX(CASE WHEN Phase_id = 'P1' THEN Q5 END) P1_Q5,
MAX(CASE WHEN Phase_id = 'P2' THEN Q1 END) P2_Q1,
MAX(CASE WHEN Phase_id = 'P2' THEN Q2 END) P2_Q2,
MAX(CASE WHEN Phase_id = 'P2' THEN Q3 END) P2_Q3,
MAX(CASE WHEN Phase_id = 'P2' THEN Q4 END) P2_Q4,
MAX(CASE WHEN Phase_id = 'P2' THEN Q5 END) P2_Q5,
MAX(CASE WHEN Phase_id = 'P3' THEN Q1 END) P3_Q1,
MAX(CASE WHEN Phase_id = 'P3' THEN Q2 END) P3_Q2,
MAX(CASE WHEN Phase_id = 'P3' THEN Q3 END) P3_Q3,
MAX(CASE WHEN Phase_id = 'P3' THEN Q4 END) P3_Q4,
MAX(CASE WHEN Phase_id = 'P3' THEN Q5 END) P3_Q5
FROM Table1
GROUP BY Part_Id;
```
Output:
```
| PART_ID | P1_Q1 | P1_Q2 | P1_Q3 | P1_Q4 | P1_Q5 | P2_Q1 | P2_Q2 | P2_Q3 | P2_Q4 | P2_Q5 | P3_Q1 | P3_Q2 | P3_Q3 | P3_Q4 | P3_Q5 |
|---------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| A010 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
```
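The conditional-aggregation version is portable; here is a hedged SQLite sketch with a cut-down two-question table confirming the flattening:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Questionaire (Part_ID TEXT, Phase_Id TEXT, q1 INTEGER, q2 INTEGER);
    INSERT INTO Questionaire VALUES
        ('A010', 'P1', 1, 0),
        ('A010', 'P2', 0, 1);
""")

# one CASE per (phase, question) pair; MAX ignores the NULLs from other phases
row = conn.execute("""
    SELECT Part_ID,
           MAX(CASE WHEN Phase_Id = 'P1' THEN q1 END) AS P1_q1,
           MAX(CASE WHEN Phase_Id = 'P1' THEN q2 END) AS P1_q2,
           MAX(CASE WHEN Phase_Id = 'P2' THEN q1 END) AS P2_q1,
           MAX(CASE WHEN Phase_Id = 'P2' THEN q2 END) AS P2_q2
    FROM Questionaire
    GROUP BY Part_ID
""").fetchone()
```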
Here is a **[SQLFiddle](http://sqlfiddle.com/#!3/8daf7/12)** demo. | If the number of phases is at most 3, then you can use `LEFT JOIN`s, for example something like:
```
SELECT p1.Part_ID, p1.q1, p1.q2, p1.q3, p1.etc,
p2.Part_ID, p2.q1, p2.q2, p2.q3, p2.etc,
p3.Part_ID, p3.q1, p3.q2, p3.q3, p3.etc
FROM Questionaire p1
LEFT JOIN Questionaire p2 ON p1.Part_ID = p2.Part_ID AND p1.PhaseId='P1' AND p2.PhaseId='P2'
LEFT JOIN Questionaire p3 ON p2.Part_ID = p3.Part_ID AND p3.PhaseId='P3'
```
I haven't tried this so the syntax might not be quite right (e.g. you may need some brackets). | How do I convert multiple rows into a single row | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I'm working on automated tests for particular web app. It uses database to persist data (SQL Server in this case).
In our automated tests we perform several database changes (inserts, updates). And after tests have been executed we want to restore database to original state.
Steps will be something like this:
1. Create somehow backup
2. Execute tests
3. Restore data from backup
The first version was pretty simple: create a table backup and then restore it. But we encountered an issue with referential integrity.
After that we decided to use a full database backup, but I don't like this idea.
We also considered tracking all references and backing up only the needed tables, not the whole database.
Our last thought was to somehow log our actions (inserts, updates) and then perform the reverse actions (deletes for inserts, updates with the old data for updates), but that looks rather complicated.
Maybe there is another solution? | I found a simple solution: renaming the table.
The algorithm is pretty simple:
1. Rename table to another table, e.g. "table" to "table-backup" (references will refer to that backup table)
2. Create "table" from "table-backup" ("table" will not have any dependencies)
3. Perform any actions in the application
4. Drop "table" with dirty data (will not break references integrity)
5. Rename "table-backup" to "table" (references integrity will be kept).
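The five steps can be sketched end-to-end; this is a hedged SQLite illustration (SQL Server syntax differs, e.g. `sp_rename`, and note that `CREATE TABLE ... AS SELECT` does not copy constraints or indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'clean')")

# 1 + 2: move the real table aside, then recreate a throwaway copy under the old name
conn.execute("ALTER TABLE t RENAME TO t_backup")
conn.execute("CREATE TABLE t AS SELECT * FROM t_backup")

# 3: the test run dirties the copy
conn.execute("INSERT INTO t VALUES (2, 'dirty')")

# 4 + 5: drop the dirty copy and give the untouched backup its name back
conn.execute("DROP TABLE t")
conn.execute("ALTER TABLE t_backup RENAME TO t")

rows = conn.execute("SELECT id, val FROM t").fetchall()
```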
Thanks | Actually, there is no need to restore the database in native SQL Server terms, nor to track the changes and then revert them back
You can use **[ApexSQL Restore](http://www.apexsql.com/sql_tools_restore.aspx)** – a SQL Server tool that **attaches both native and natively compressed SQL database backups and transaction log backups as live databases**, accessible via SQL Server Management Studio, Visual Studio or any other third-party tool. It allows attaching single or multiple full, differential and transaction log backups
For more information on how to use the tool in your scenario check the **[Using SQL database backups instead of live databases in a large development team](http://solutioncenter.apexsql.com/using-sql-database-backups-instead-of-live-databases/)** online article
Disclaimer: I work as a Product Support Engineer at ApexSQL | Ways to backup database data | [
"",
"sql",
"sql-server",
"sql-server-2008",
"backup",
""
] |
First of all, please correct me if "alias" is the wrong word. I am talking about renaming the column with the `AS` operator.
So I'm trying to calculate an average like this:
```
SELECT
users.username AS player_name,
COUNT(*) AS total_games,
SUM(games.points) AS total_points,
(total_points / total_games) AS average_points
FROM games,
INNER JOIN users
ON games.player_id = users.id
GROUP BY games.player_id
```
*(the query might be wrong, it's just a quick example)*
in this query, the line
```
(total_points / total_games) AS average_points
```
gives an error : `unknown column total_points`
So how can I fix this to keep using the aliases, instead of writing this:
```
(SUM(games.points) / COUNT(*) ) AS average_points
```
Thanks for any help ! | I'm fairly sure it's not possible to use aliases like that. You will have to do it the 'long' way...
```
(SUM(games.points) / COUNT(*) ) AS average_points
```
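One common workaround is to compute the aliases in a derived table and reuse them one level up; here is a hedged SQLite sketch with invented data showing the inner query naming the aggregates and the outer query reusing those names freely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE games (player TEXT, points INTEGER);
    INSERT INTO games VALUES ('ann', 10), ('ann', 20), ('bob', 5);
""")

rows = conn.execute("""
    SELECT player,
           total_points,
           total_games,
           total_points * 1.0 / total_games AS average_points  -- aliases are visible here
    FROM (
        SELECT player,
               COUNT(*)    AS total_games,
               SUM(points) AS total_points
        FROM games
        GROUP BY player
    )
    ORDER BY player
""").fetchall()
```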
**Edit:** I wanted to edit this answer to second **@jbeldock** 's answer as a solution I have since used many times myself. I found myself writing out formulas and re-using them in other parts of queries, leading to big messy queries with copy/pasted sections. Putting your forumlas in a subquery allows you to use their results in the outer query and make things more elegant. | Unfortunately, you cannot use column aliases this way. They are only available *after* the `SELECT` statement is processed. However, you can do it in a subquery or a common table expression (CTE). Here is the simple subquery:
```
SELECT player_name,
total_games,
total_points,
(total_points / total_games) AS average_points
FROM
(SELECT
users.username AS player_name,
COUNT(*) AS total_games,
SUM(games.points) AS total_points
FROM games
INNER JOIN users
ON games.player_id = users.id
GROUP BY games.player_id) as InnerQuery
``` | How to use aliases in math operators in SQL? | [
"",
"mysql",
"sql",
"join",
"alias",
""
] |
I want to delete data between some years. The table name is `t1`, with columns `id`, `name`, `date`.
The values look like this:
```
+--+----+----------+
|id|name|date |
+--+----+----------+
|1 |Raj |2013-03-01|
+--+----+----------+
|2 |Raja|2014-04-05|
+--+----+----------+
```
Now I want to delete these records by year. I tried this:
```
delete from appsetup.company2 where compno=2 and date_trunc('year',acfrom)
between '2013' and '2015'
```
It shows the error **invalid input syntax for type timestamp: "2013"**. How can I achieve this?
I am using PostgreSQL 9.1. | Try entering the date in YYYY-MM-DD format, like this: `'2015-12-31'`
So, instead of this:
```
between '2013' and '2015'
```
Try this:
```
between '2013-01-01' and '2015-12-31'
```
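A quick, hedged sketch (SQLite, invented rows) showing that ISO-format date strings compare correctly, so two endpoints cover a whole year range:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER, name TEXT, date TEXT);
    INSERT INTO t1 VALUES
        (1, 'Raj',  '2013-03-01'),
        (2, 'Raja', '2014-04-05'),
        (3, 'Ram',  '2016-01-01');
""")

# rows inside the 2013-2015 window are removed; 2016 survives
conn.execute("DELETE FROM t1 WHERE date BETWEEN '2013-01-01' AND '2015-12-31'")
left = conn.execute("SELECT name FROM t1").fetchall()
```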
More information on how to enter literal values here:
<http://dev.mysql.com/doc/refman/5.0/en/date-and-time-literals.html>
Edit: Sorry, I gave you the MySQL link because you tagged your question "mysql". Here's the equivalent for PostgreSQL:
<http://www.postgresql.org/docs/9.1/static/datatype-datetime.html> | I got the answer: it work fine for me
```
delete from appsetup.company2 where cast(to_char(acfrom,'yyyy') as integer)
between 2013 and 2015
``` | Between use two years in postgresql | [
"",
"sql",
"postgresql",
""
] |
I'm thinking about the best way (considering **execution time**) of doing a join between 2 or more tables with some conditions. I have these three ways:
FIRST WAY:
```
select * from
TABLE A inner join TABLE B on A.KEY = B.KEY
where
B.PARAM=VALUE
```
SECOND WAY
```
select * from
TABLE A inner join TABLE B on A.KEY = B.KEY
and B.PARAM=VALUE
```
THIRD WAY
```
select * from
TABLE A inner join (Select * from TABLE B where B.PARAM=VALUE) J ON A.KEY=J.KEY
```
Consider that the tables have more than 1 million rows.
What is your opinion? Which is the right way, if one exists? | Usually, for inner joins, putting the condition in the WHERE clause or in the join condition makes no noticeable difference.
If you are using outer joins, putting the condition in the WHERE clause can improve query time, because rows which don't meet the condition are deleted from the result set, so the result set becomes smaller.
But if you put the condition in the join clause of a left outer join, no rows are deleted, and the result set is bigger compared to putting the condition in the WHERE clause.
For more clarification, follow the example.
```
create table A
(
  ano    NUMBER,
  aname  VARCHAR2(10),
  rdate  DATE
);

-- A data
insert into A select 1, 'Amand', to_date('20130101','yyyymmdd') from dual;
commit;
insert into A select 2, 'Alex',  to_date('20130101','yyyymmdd') from dual;
commit;
insert into A select 3, 'Angel', to_date('20130201','yyyymmdd') from dual;
commit;

create table B
(
  bno    NUMBER,
  bname  VARCHAR2(10),
  rdate  DATE
);

-- B data
insert into B select 3, 'BOB', to_date('20130201','yyyymmdd') from dual;
commit;
insert into B select 2, 'Br',  to_date('20130101','yyyymmdd') from dual;
commit;
insert into B select 1, 'Bn',  to_date('20130101','yyyymmdd') from dual;
commit;
```
First of all, we have a normal query which joins the 2 tables with each other:
```
select * from a inner join b on a.ano=b.bno
```
The result set has 3 records.
Now please run the two queries below:
```
select * from a inner join b on a.ano=b.bno and a.rdate=to_date('20130101','yyyymmdd')
select * from a inner join b on a.ano=b.bno where a.rdate=to_date('20130101','yyyymmdd')
```
As you can see, the row counts of the two results above do not differ, and in my experience there is no noticeable performance difference even for large data volumes.
Now please run the query below:
```
select * from a left outer join b on a.ano=b.bno and a.rdate=to_date('20130101','yyyymmdd')
```
In this case, the count of output records will be equal to the number of records in table A.
```
select * from a left outer join b on a.ano=b.bno where a.rdate=to_date('20130101','yyyymmdd')
```
In this case, the records of A which didn't meet the condition are deleted from the result set, and as I said, the result set has fewer records (here, 2 records).
From the examples above we can draw the following conclusions:
1. When using inner joins, there is no special difference between putting the condition in the WHERE clause or the join clause, but try to order the tables in the FROM clause so that intermediate results have minimal row counts:
(<http://www.dba-oracle.com/art_dbazine_oracle10g_dynamic_sampling_hint.htm>)
2. When using outer joins, whenever you don't care about the exact result row count (that is, you don't mind losing the records of table A which have no paired record in table B, where the fields of table B would be NULL in the result set), put the condition in the WHERE clause; it removes the rows which don't meet the condition and improves query time by decreasing the result row count.
But in special cases you HAVE TO put the condition in the join part: for example, if you want your result row count to equal the row count of table 'A' (common in ETL processes), you HAVE TO put the condition in the join clause.
3. Avoiding subqueries is recommended by many reliable resources and expert programmers. A subquery usually increases query time; use one only when its result set is small.
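The conclusions about condition placement can be verified mechanically; a hedged SQLite sketch (invented two-row tables) that counts result rows for each placement of the extra condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (ano INTEGER, rdate TEXT);
    CREATE TABLE b (bno INTEGER);
    INSERT INTO a VALUES (1, '2013-01-01'), (2, '2013-02-01');
    INSERT INTO b VALUES (1), (2);
""")

def count(sql):
    return len(conn.execute(sql).fetchall())

# inner join: ON vs WHERE placement returns the same rows
inner_on    = count("SELECT * FROM a JOIN b ON a.ano = b.bno AND a.rdate = '2013-01-01'")
inner_where = count("SELECT * FROM a JOIN b ON a.ano = b.bno WHERE a.rdate = '2013-01-01'")

# left join: ON keeps every row of a (NULL-filled), WHERE filters rows out
left_on    = count("SELECT * FROM a LEFT JOIN b ON a.ano = b.bno AND a.rdate = '2013-01-01'")
left_where = count("SELECT * FROM a LEFT JOIN b ON a.ano = b.bno WHERE a.rdate = '2013-01-01'")
```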
I hope this will be useful:) | 1M rows really isn't *that* much - especially if you have sensible indexes. I'd start off with making your queries as readable and maintainable as possible, and only start optimizing if you notice a perforamnce problem with the query (and as Gordon Linoff said in his comment - it's doubtful there would even be a difference between the three).
It may be a matter of taste, but to me, the third way seems clumsy, so I'd cross it out. Personally, I prefer using `JOIN` syntax for the joining logic (i.e., how A and B's rows are matched) and `WHERE` for filtering (i.e., once matched, which rows interest me), so I'd go for the first way. But again, it really boils down to personal taste and preferences. | SQL Join between tables with conditions | [
"",
"sql",
"oracle",
"join",
""
] |
So I looked this up and this question is very similar but it's missing a key piece: [SQL Server count number of distinct values in each column of a table](https://stackoverflow.com/questions/12625729/sql-server-count-number-of-distinct-values-in-each-column-of-a-table)
So in that question they want the distinct count for each column. What I am looking to do is to get a count of each distinct value for each column in a table (and I'm doing this for all the tables in a particular database which is why I'm looking to try to automate this as much as possible). Currently my code looks like this which I have to run for each column:
```
select mycol1, COUNT(*) as [Count]
from mytable
group by mycol1
order by [Count] desc
```
Ideally my output would look like this:
```
ColumnName1 Count
val1 24457620
val2 17958530
val3 13350
ColumnName2 Count
val1 24457620
val2 17958530
val3 13350
val4 12
```
and so on for all the columns in the table
This answer below (provided by @beargle) from that previous question is *really* close to what I'm looking to do but I can't seem to figure out a way to get it to work for what I am trying to do so I would appreciate any help.
```
DECLARE @Table SYSNAME = 'TableName';
-- REVERSE and STUFF used to remove trailing UNION in string
SELECT REVERSE(STUFF(REVERSE((SELECT 'SELECT ''' + name
+ ''' AS [Column], COUNT(DISTINCT('
+ QUOTENAME(name) + ')) AS [Count] FROM '
+ QUOTENAME(@Table) + ' UNION '
-- get column name from sys.columns
FROM sys.columns
WHERE object_id = Object_id(@Table)
-- concatenate result strings with FOR XML PATH
FOR XML PATH (''))), 1, 7, ';'));
``` | You could use:
```
DECLARE @Table SYSNAME = 'TableName';
DECLARE @SQL NVARCHAR(MAX) = ''
SELECT @SQL = STUFF((SELECT ' UNION SELECT ''' + name
+ ''' AS [Column], '
+ 'CAST(' + QUOTENAME(Name)
+ ' AS NVARCHAR(MAX)) AS [ColumnValue], COUNT(*) AS [Count] FROM '
+ QUOTENAME(@Table) + ' GROUP BY ' + QUOTENAME(Name)
FROM sys.columns
WHERE object_id = Object_id(@Table)
-- concatenate result strings with FOR XML PATH
FOR XML PATH ('')), 1, 7, '');
EXECUTE sp_executesql @SQL;
```
Which will produce SQL Like the following for a table with two columns (Column1 and Column2)
```
SELECT 'Column1' AS [Column],
CAST([Column1] AS NVARCHAR(MAX)) AS [ColumnValue],
COUNT(*) AS [Count]
FROM [TableName]
GROUP BY [Column1]
UNION
SELECT 'Column2' AS [Column],
CAST([Column2] AS NVARCHAR(MAX)) AS [ColumnValue],
COUNT(*) AS [Count]
FROM [TableName]
GROUP BY [Column2]
```
---
**EDIT**
If you want a new result set for each column then use:
```
DECLARE @Table SYSNAME = 'TableName';
DECLARE @SQL NVARCHAR(MAX) = '';
SELECT @SQL = (SELECT ' SELECT ' + QUOTENAME(Name)
+ ', COUNT(*) AS [Count] FROM '
+ QUOTENAME(@Table) + ' GROUP BY ' + QUOTENAME(Name) + ';'
FROM sys.columns
WHERE object_id = Object_id(@Table)
-- concatenate result strings with FOR XML PATH
FOR XML PATH (''));
EXECUTE sp_executesql @SQL;
```
Which would produce SQL Like:
```
SELECT [Column1],
COUNT(*) AS [Count]
FROM [callsupplier]
GROUP BY [Column1];
SELECT [Column2],
COUNT(*) AS [Count]
FROM [callsupplier]
GROUP BY [Column2];
``` | thought i would take a stab at this whilst waiting for a backup to restore
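The same generate-then-execute idea works from any client language; a hedged Python/SQLite sketch that discovers the columns from the catalog and issues one GROUP BY per column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (col1 TEXT, col2 TEXT);
    INSERT INTO tbl VALUES ('x', 'a'), ('x', 'b'), ('y', 'a');
""")

# column names come from the catalog, mirroring the sys.columns lookup
columns = [r[1] for r in conn.execute("PRAGMA table_info(tbl)")]

counts = {
    col: conn.execute(
        f'SELECT "{col}", COUNT(*) FROM tbl GROUP BY "{col}" ORDER BY "{col}"'
    ).fetchall()
    for col in columns
}
```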
hope this does what you require
```
create Table #Temp
(tableName varchar(100),
columnName varchar(100),
value varchar(1000),
distinctItems int)
Declare @tabName as varchar(100)
Declare @colName as varchar(100)
Declare @tabid as int
Declare cursorTables Cursor
for
select t.object_id , t.name , c.name from sys.tables t inner join sys.columns c on t.object_id = c.object_id
open cursorTables
Fetch Next from cursorTables into
@tabid,@tabName,@colName
while @@Fetch_Status = 0
Begin
declare @query as nVarchar(1000)
set @query = 'Insert into #Temp SELECT ''' + @tabName + ''' , '''+ @colName +''', ' + @colName + ', COUNT([' + @colName +']) AS Expr1 FROM [' + @tabName+ '] group by [' + @colName + ']'
print @query
exec sp_executesql @query
Fetch Next from cursorTables into
@tabid,@tabName,@colName
End
Close cursorTables
Deallocate cursorTables
select * from #temp
drop table #temp
```
This produces some not very useful results on PK values, and I suspect it would not work on columns larger than varchar(1000), but it works on a few of my DBs. | get a count of each value from every column in a table SQL Server | [
"",
"sql",
"sql-server",
""
] |
I am having some problems understanding how this works.
So I have 2 tables Students and Enrolled like this:
```
CREATE TABLE Students
(sid CHAR(20),
name CHAR(50),
email CHAR(30),
age INTEGER,
gr INTEGER)
CREATE TABLE Enrolled
(sid CHAR(20),
cid CHAR(20),
grade CHAR(2),
PRIMARY KEY (sid,cid))
```
So I don't understand this particular row: `PRIMARY KEY (sid,cid)`
Can someone explain to me how it works? I should mention that I have another table, Courses, which is where `cid` comes from.
Is it equivalent to saying this:
```
CREATE TABLE Enrolled
(sid CHAR(20) foreign key references Students(sid),
cid CHAR(20) foreign key references Courses(cid)
)
``` | A `PRIMARY KEY` is used to identify rows in a table. A field or column that is defined as a `PRIMARY KEY` will contain a different value on each row in that table, and is required to have a value (so, `PRIMARY KEY` is equivalent to `UNIQUE` plus `NOT NULL`).
A `PRIMARY KEY` can be a single field or multiple fields, but it always satisfies the rule that "each row will have a different `PRIMARY KEY` value".
If you declare as `PRIMARY KEY` a combination of 2 columns, you will be able to have this for example:
```
CREATE TABLE Enrolled
(sid CHAR(20),
cid CHAR(20),
grade CHAR(2),
PRIMARY KEY (sid,cid)) --PRIMARY KEY with combination of 2 columns
```
---
```
sid | cid | grade
1 1 XX
1 2 XX
2 1 XX
2 2 XX
2 3 XX
```
In this example, you can see that the column `sid` or the column `cid` has repeated values individually, but there isn't a combination of `(sid, cid)` that was repeated.
Like a `PRIMARY KEY` is used to identify a row in a table, when we want to relate two tables we can define a `FOREIGN KEY` in one table to link this one with the other table.
Your case is that the `ENROLLED` table is identified by a composite `PRIMARY KEY` to represent a many-to-many relationship. That is the way to say that:
* A student can be enrolled in many courses.
* A course can have enrolled many students.
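That many-to-many behaviour can be demonstrated directly; a hedged SQLite sketch where repeating `sid` or `cid` alone succeeds but repeating the pair is rejected:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Enrolled (
        sid   TEXT,
        cid   TEXT,
        grade TEXT,
        PRIMARY KEY (sid, cid)   -- only the *pair* must be unique
    )
""")

conn.execute("INSERT INTO Enrolled VALUES ('s1', 'c1', 'A')")
conn.execute("INSERT INTO Enrolled VALUES ('s1', 'c2', 'B')")  # sid repeated: fine
conn.execute("INSERT INTO Enrolled VALUES ('s2', 'c1', 'C')")  # cid repeated: fine

try:
    conn.execute("INSERT INTO Enrolled VALUES ('s1', 'c1', 'F')")  # duplicate pair
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```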

Note: It is a best practice to define `PRIMARY KEY`s as numeric values, such as `integer`, `bigint`, etc., because it improves index performance (every `PRIMARY KEY` inherently has an `INDEX` defined, and indexes are faster with numeric values than with string values). | `PRIMARY KEY` means both `UNIQUE` and `NOT NULL`.
If you want `sid` and `cid` to also be `FOREIGN KEY` you have to specify that separately.
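A sketch of declaring the composite key and the two separate foreign keys together, using SQLite (note SQLite only enforces foreign keys when the pragma is enabled; exact DDL syntax varies by engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.executescript("""
    CREATE TABLE Students (sid TEXT PRIMARY KEY);
    CREATE TABLE Courses  (cid TEXT PRIMARY KEY);
    CREATE TABLE Enrolled (
        sid   TEXT REFERENCES Students(sid),
        cid   TEXT REFERENCES Courses(cid),
        grade TEXT,
        PRIMARY KEY (sid, cid));
    INSERT INTO Students VALUES ('1');
    INSERT INTO Courses  VALUES ('1');
""")
conn.execute("INSERT INTO Enrolled VALUES ('1', '1', 'XX')")      # ok
try:
    conn.execute("INSERT INTO Enrolled VALUES ('9', '1', 'XX')")  # unknown student
    fk_rejected = False
except sqlite3.IntegrityError:
    fk_rejected = True
```

The insert referencing a non-existent student is rejected by the foreign key, independently of the composite primary key.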
Having two fields for a primary key is often used for tables that are the physical representation of a many-to-many relation. In your database design diagram you would have `STUDENT` and `COURSE` as entities and `ENROLLMENT` as a many-to-many relationship between them.
In the physical database diagram many-to-many relationships are modelled as tables, often with a composite `PRIMARY KEY` and with `FOREIGN KEY` constraints to the entity tables. | Primary keys in sql | [
"",
"mysql",
"sql",
""
] |
Basically I need the distinct values from a column to be used as column names, via a query.
Example is below
```
id status userId date
------------------------
1 p u12 01/01/2014
2 p u13 01/01/2014
3 a u14 01/01/2014
4 hd u15 01/01/2014
5 a u16 01/01/2014
6 p u12 01/02/2014
7 a u13 01/02/2014
8 p u14 01/02/2014
```
output
```
date p a hd
------------------------
01/01/2014 2 2 1
02/01/2014 2 1 0
```
The status 'p','a','hd' are used as column name and it is grouped by date | Try this:
```
SELECT a.date, SUM(a.status='p') AS p, SUM(a.status='a') AS a,
SUM(a.status='hd') AS hd
FROM tableA a
GROUP BY a.date
```
Check this [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!2/7cbb5/1)
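The trick works because a comparison such as `a.status='p'` evaluates to 1 or 0 in MySQL, so summing it counts the matching rows. A quick sketch of the same idea in SQLite, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (id INTEGER, status TEXT, userId TEXT, date TEXT)")
rows = [(1, 'p', 'u12', '01/01/2014'), (2, 'p', 'u13', '01/01/2014'),
        (3, 'a', 'u14', '01/01/2014'), (4, 'hd', 'u15', '01/01/2014'),
        (5, 'a', 'u16', '01/01/2014'), (6, 'p', 'u12', '01/02/2014'),
        (7, 'a', 'u13', '01/02/2014'), (8, 'p', 'u14', '01/02/2014')]
conn.executemany("INSERT INTO tableA VALUES (?,?,?,?)", rows)

# Each SUM counts the rows where the comparison is true (1) within the group.
pivot = conn.execute("""
    SELECT date,
           SUM(status='p')  AS p,
           SUM(status='a')  AS a,
           SUM(status='hd') AS hd
    FROM tableA
    GROUP BY date
    ORDER BY date""").fetchall()
```

This reproduces the expected report: 2/2/1 for the first date and 2/1/0 for the second.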
**OUTPUT**
```
| DATE | P | A | HD |
|--------------------------------|---|---|----|
| January, 01 2014 00:00:00+0000 | 2 | 2 | 1 |
| January, 02 2014 00:00:00+0000 | 2 | 1 | 0 |
``` | Try this query, but it forces you to know all possible statuses (result field names) -
```
SELECT
date,
COUNT(IF(status = 'p', status, NULL)) AS p,
COUNT(IF(status = 'a', status, NULL)) AS a,
COUNT(IF(status = 'hd', status, NULL)) AS hd
FROM
table
GROUP BY
date;
```
You also can try to automate this logic, have a look at this link - [Dynamic pivot tables (transform rows to columns)](http://buysql.com/mysql/14-how-to-automate-pivot-tables.html). | Group by two column and use the distinct values as column name in MYSQL | [
"",
"mysql",
"sql",
"select",
"count",
"group-by",
""
] |
Consider the following:
eventTypes table has 163 rows.
events has 43,000 rows.
```
SELECT events.eventTypeID, eventTypes.eventTypeName
FROM events
LEFT JOIN eventTypes ON events.eventTypeID = eventTypes.eventTypeID
```
This returns 163 rows. However, if I add "WHERE events.eventID >= 0"
```
SELECT events.eventTypeID, eventTypes.eventTypeName
FROM events
LEFT JOIN eventTypes ON events.eventTypeID = eventTypes.eventTypeID
WHERE events.eventID >= 0
```
I get all 43,000 rows. I would expect the lack of a WHERE clause would give me everything. Am I thinking about this wrong?
Update: I just tried this on another server and same result. My exact query copied and pasted is:
```
SELECT events.eventTypeID, eventTypes.eventTypeName FROM events LEFT JOIN eventTypes ON events.eventTypeID = eventTypes.eventTypeID
```
This only returns the first 163 records. MySQL Versions are 5.5.29 and 5.1.61. I looked in the bug list and found nothing.
Update #2: EXPLAIN gives the same output with either query (i.e. with or without WHERE 1=1)
```
mysql> EXPLAIN(SELECT events.eventTypeID, eventTypes.eventTypeName FROM events LEFT JOIN eventTypes ON events.eventTypeID = eventTypes.eventTypeID);
+----+-------------+------------+--------+---------------+-------------+---------+-------------------------------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+--------+---------------+-------------+---------+-------------------------------+-------+-------------+
| 1 | SIMPLE | events | index | NULL | eventTypeID | 4 | NULL | 37748 | Using index |
| 1 | SIMPLE | eventTypes | eq_ref | PRIMARY | PRIMARY | 4 | casefriend.events.eventTypeID | 1 | |
+----+-------------+------------+--------+---------------+-------------+---------+-------------------------------+-------+-------------+
```
Update#3 Testing on a 3rd system produces results I expect though I have no idea why. The 3rd system is another CentOS6 running MySQL 5.1.69. I imported the exact dumps from my development system that I imported into the 2nd test system that did not produce the correct results.
### Update#4 Found the issue. This is NOT a MySQL issue. This is a phpMyAdmin issue. When testing on the command line with mysql client I get the correct results on all systems. | I tried this at SqlFiddle: <http://sqlfiddle.com/#!2/c9908b/1>
```
create table event (id int,type_id int);
create table type (type_id int, type_name varchar(30));
insert into type values(1, 'type 1');
insert into type values(2, 'type 2');
insert into type values(3, 'type 3');
insert into type values(4, 'type 4');
insert into type values(5, 'type 5');
insert into event values( 1,1);
insert into event values( 2,1);
insert into event values( 3,1);
insert into event values( 4,1);
insert into event values( 5,2);
insert into event values( 6,2);
insert into event values( 7,2);
insert into event values( 8,2);
insert into event values( 9,3);
insert into event values(10,3);
insert into event values(11,3);
insert into event values(12,3);
insert into event values(13,4);
insert into event values(14,4);
insert into event values(15,4);
insert into event values(16,4);
insert into event values(17,5);
insert into event values(18,5);
insert into event values(19,5);
insert into event values(20,5);
select event.id, type.type_name from event left join type
on event.type_id=type.type_id
```
I get 20 rows back as expected | ```
I would expect the lack of a WHERE clause would give me everything.
Am I thinking about this wrong?
```
Yes, you are right: your first query is correct, and it should return what you are expecting.
You are using a `LEFT OUTER JOIN`, which returns all records from the left table plus the matching records from the right table.
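That behaviour is easy to confirm on a tiny dataset; a sketch in SQLite (same shape as the fiddle above), showing that every left-table row comes back, with NULL where there is no match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE event (id INT, type_id INT);
    CREATE TABLE type (type_id INT, type_name TEXT);
    INSERT INTO type VALUES (1, 'type 1'), (2, 'type 2');
    INSERT INTO event VALUES (1, 1), (2, 2), (3, 99);  -- 99 has no match
""")
rows = conn.execute("""
    SELECT e.id, t.type_name
    FROM event e LEFT JOIN type t ON e.type_id = t.type_id
    ORDER BY e.id""").fetchall()
# All 3 event rows are returned; the unmatched one carries NULL.
```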
As far as I know, this issue is not related to the MySQL version.
My suggestion is to check the relation (foreign key and primary key) on which you are joining both tables. | Why does MySQL LEFT JOIN not return all rows unless there is WHERE clause - phpMyAdmin issue | [
"",
"mysql",
"sql",
"phpmyadmin",
""
] |
I have two tables as follows:
**TABLE marklist:**
```
student_id class_id subject_1 subject_2 subject_3 subject_4 subject_5
----------- ----------- ----------- ----------- ----------- ----------- ----------
1 9 78 87 95
2 9 67 95 87
3 9 85 84 85
4 10 70 65 78
5 10 75 80 81
6 10 80 75 82
```
**Table subject\_names**
```
column_name subject_name
--------------- -------------
subject_1 English
subject_2 Chemistry
subject_3 Economics
subject_4 Accounts
subject_5 Biology
```
Now, I need to generate a report like this for class\_id = 9
```
column_name subject_name no_of_students
--------------- ------------- --------------
subject_1 English 3
subject_2 Chemistry 3
subject_3 Economics 0
subject_4 Accounts 0
subject_5 Biology 3
```
In short, I have to generate a report with column\_names, subject\_name and the number of students from class\_id = 9 (or 10, whatever) who have appeared for that subject.
All that I have managed to do is
1.
```
SELECT sn.column_name, sn.subject_name FROM subject_names sn;
```
and
2.
```
SELECT ml.class_id,
       count(ml.subject_1) AS s1,
       count(ml.subject_2) AS s2,
       count(ml.subject_3) AS s3,
       count(ml.subject_4) AS s4,
       count(ml.subject_5) AS s5
FROM marklist ml
WHERE ml.class_id = 9;
```
I don't understand how do I go ahead and pivot the results of query 2 with query one. I may be going into the wrong direction but I don't have an idea. | I have found the answer. You can skip the CTE and check the main query out in the following:
```
/* -- This WITH clause is just for your reference.
WITH marklist AS (
SELECT 1 student_id, 9 class_id, 78 subject_1,87 subject_2,
null subject_3, null subject_4, 95 subject_5 FROM dual
UNION ALL SELECT 2, 9, 67,95, null, null, 87 FROM dual
UNION ALL SELECT 3, 9, 85,84, null, null, 85 FROM dual
UNION ALL SELECT 4, 10,70, null, 65,78, null FROM dual
UNION ALL SELECT 5, 10,75, null, 80,81, null FROM dual
UNION ALL SELECT 6, 10,80, null, 75,82, null FROM dual
)
, subject_names AS (
SELECT 'subject_1' column_name, 'English' subject_name FROM dual
UNION ALL SELECT 'subject_2', 'Chemistry' FROM dual
UNION ALL SELECT 'subject_3', 'Economics' FROM dual
UNION ALL SELECT 'subject_4', 'Accounts' FROM dual
UNION ALL SELECT 'subject_5', 'Biology' FROM dual
) -- */
SELECT sn.column_name, sn.subject_name,
(SELECT COUNT(CASE sn.column_name
WHEN 'subject_1' THEN ml.subject_1
WHEN 'subject_2' THEN ml.subject_2
WHEN 'subject_3' THEN ml.subject_3
WHEN 'subject_4' THEN ml.subject_4
WHEN 'subject_5' THEN ml.subject_5
END)
FROM marklist ml
WHERE ml.class_id = 9) AS no_of_students
FROM subject_names sn;
```
**OUTPUT:**
```
column_name subject_name no_of_students
-------------- --------------- --------------
subject_1 English 3
subject_2 Chemistry 3
subject_3 Economics 0
subject_4 Accounts 0
subject_5 Biology 3
``` | You can [unpivot](http://www.oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1.php) your marklist table and outer join it with subject\_names table.
```
with unpivot_x(student_id,class_id,subject_code,marks) as (
select * from marklist
unpivot (marks for subject_code in ( subject_1 as 'subject_1',
subject_2 as 'subject_2',
subject_3 as 'subject_3',
subject_4 as 'subject_4',
subject_5 as 'subject_5'
)
))
select a.column_name,a.subject_name, count(b.student_id)
from subject_names a left outer join unpivot_x b
on a.column_name = b.subject_code and b.class_id = 9
group by a.column_name,a.subject_name
order by 1;
```
Demo at [sqlfiddle](http://sqlfiddle.com/#!4/6e3f8/3). | Column name with column caption and count / Pivot based on column label in other table | [
"",
"sql",
"oracle",
"pivot",
""
] |
I'm using DB2 and trying to restrict a query which uses group by to only include counts larger than some threshold but cannot get past the syntax errors
```
select val, count(*) as c from sample_table
where c > 20
group by val;
```
It doesn't like the fact that I'm referencing the alias `c` in the WHERE clause.
How do I achieve something like this? | You could do this:
```
select *
from (
select val, count(*) as c
from sample_table
group by val
)
where c > 20
```
but better to do:
```
select val, count(*) as c
from sample_table
group by val
having count(*) > 20
``` | You can use `HAVING`:
```
select val, count(*) as c
from sample_table
group by val
having count(*) > 20
``` | how to reference count(*) as a variable in where clause along with group by clause | [
"",
"sql",
"count",
"group-by",
"db2",
""
] |
```
SELECT
*
FROM (
SELECT
row_number() OVER (ORDER BY ID) AS [rownum],
UserID, wfoKey
FROM
tbl_user_statistics2 Where (wfoKey = '4b1fbf7aa7')
) T
WHERE
rownum BETWEEN (@opt_start) AND (@opt_end)
```
The above query works and pulls all UserID's from rows 1 (opt\_start) to 10,000 (opt\_end). In that data set those UserID's are duplicated. I would like to now pull out the unique UserID's for the records in that range of rows (1 to 10,000). The user can select any range, so they could pick 10,000 to 20,000, and the query should get the unique UserID's in that range, and so on. I don't need the rownum in the result.
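To make the goal concrete, here is a small sketch in SQLite (the user IDs are made up, and the `id` column stands in for `rownum` for brevity): select a range of rows, then deduplicate just that range.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_user_statistics2 (id INTEGER PRIMARY KEY, UserID TEXT, wfoKey TEXT);
    INSERT INTO tbl_user_statistics2 (UserID, wfoKey) VALUES
        ('u-aaa', '4b1fbf7aa7'), ('u-aaa', '4b1fbf7aa7'),
        ('u-bbb', '4b1fbf7aa7'), ('u-bbb', '4b1fbf7aa7');
""")
opt_start, opt_end = 1, 3  # rows 1..3 contain u-aaa twice and u-bbb once
unique_users = conn.execute("""
    SELECT DISTINCT UserID, wfoKey
    FROM tbl_user_statistics2
    WHERE wfoKey = ? AND id BETWEEN ? AND ?
    ORDER BY UserID""", ('4b1fbf7aa7', opt_start, opt_end)).fetchall()
```

The duplicated IDs inside the selected range collapse to one row each.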
Sample data:
```
1 dced6f0d-3a67-456d78cb7-fb73c8db0c13 4b1fbf7aa7
2 dced6f0d-3a67-456d78cb7-fb73c8db0c13 4b1fbf7aa7
3 6360f485-4f4d-4c9a98b0e-74fea312a25d 4b1fbf7aa7
4 6360f485-4f4d-4c9a98b0e-74fea312a25d 4b1fbf7aa7
```
The result would be:
```
dced6f0d-3a67-456d78cb7-fb73c8db0c13 4b1fbf7aa7
6360f485-4f4d-4c9a98b0e-74fea312a25d 4b1fbf7aa7
```
Not sure how to proceed or perhaps there is a better way than row\_number() over. Any suggestions would be much appreciated. | This can be solved by taking your existing query and turning it into an in-line view (`T2`) and then querying the distinct records from it:
```
SELECT DISTINCT UserID, wfoKey FROM
(SELECT
*
FROM (
SELECT
row_number() OVER (ORDER BY ID) AS [rownum],
UserID, wfoKey
FROM
tbl_user_statistics2 Where (wfoKey = '4b1fbf7aa7')
) T
WHERE
rownum BETWEEN (@opt_start) AND (@opt_end)) T2
``` | You can add a piece called partition by userID
```
;with cte as(
SELECT
*
FROM (
SELECT
row_number() OVER (partition by USerID ORDER BY ID) AS [rownum],
UserID, wfoKey
FROM
tbl_user_statistics2 Where (wfoKey = '4b1fbf7aa7')
) T
WHERE
rownum BETWEEN (@opt_start) AND (@opt_end)
)
Select * from cte
where rownum =1
``` | SQL Server unique records in row_number() range of records | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
DECLARE @Plus varchar(1) = '+'
,@Minus varchar(1) = '-'
```
-- My Try
```
SELECT 5 + (@Plus*2) AS [Should_be_7], 5 + (@Minus*2) AS [Should_be_3], 5 - (@Plus*2) AS [Should_be_3], 5 - (@Minus*2) AS [Should_be_7]
```
/\*
My Output
```
Should_be_7 Should_be_3 Should_be_3 Should_be_7
5 5 5 5
```
My goal: to use signs from a variable and do calculations. (I know I am trying to use characters with ints in my math while expecting int output, and that's a no-no at the most fundamental level of math...the query is just to show you what I am trying to do.)
I am open for alternative datatypes for INT as long as I can do CAST/CONVERT +/- to that datatype.
Expected output:
```
Should_be_7 Should_be_3 Should_be_3 Should_be_7
7 3 3 3
```
\*/
UPDATE (kind of different Q...so sorry):
There can be a SINGLE variable which can be + or - at any time, and I need to use just that variable to get the expected output...is that possible?
```
DECLARE @Sign varchar(1) = '+' -- or can be '-"
```
-- My Try
-- WHEN SIGN IS '+'
SELECT 5 + (@Sign\*2) AS [Should\_be\_7], 5 - (@Sign\*2) AS [Should\_be\_7]
-- WHEN SIGN IS '-'
SELECT 5 + (@Sign\*2) AS [Should\_be\_3], 5 - (@Sign\*2) AS [Should\_be\_3]
/\*
Expected output:
-- WHEN SIGN IS '+'
```
Should_be_7 Should_be_7
7 3
```
-- WHEN SIGN IS '-'
```
Should_be_3 Should_be_3
3 7
```
\*/ | SQL Server converts strings to numbers in an arithmetic context. So,
```
select '-1' - 2, '+1' + 2;
```
Produces the expected value of -3 and 3 respectively.
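SQLite applies a similar silent coercion, which makes the effect easy to reproduce (SQL Server's exact conversion rules differ, and it can raise errors for some strings): a numeric-looking string is converted to its number, while a bare sign character has no numeric prefix and coerces to 0.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# '-1' is coerced to the number -1 in an arithmetic context...
a = conn.execute("SELECT '-1' - 2").fetchone()[0]
# ...while a lone '-' has no numeric prefix and coerces to 0.
b = conn.execute("SELECT '-' - 2").fetchone()[0]
```

Here `a` is -3 and `b` is -2, mirroring why the sign-character variables contribute nothing to the arithmetic.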
What you are observing as variables is that `'-'` and `'+'` are turned into 0s. That is very reasonable behavior. All 4 of your expressions should return 5 and they do, on SQL Fiddle in SQL Server 2008 and 2012. | Why not just make the variables ints and assign 1 and -1 to them? | Signed variables - T/SQL | [
"",
"sql",
"sql-server",
""
] |
```
sel cast(9.1 as integer)as inttt;
Result:
9
sel cast(9.9 as integer)as inttt;
Result:
9
```
I executed the above queries on Teradata, and both of the queries resulted in the floor value.
Is this a hard rule for casting from DEC to INT? That is, does it always return the floor value? | ## Yes it is.
If you cast a decimal value into an integer the decimal places get lost. That is in all languages (I know of) the case. | Following code can be used to round decimal number:
```
Select CAST(number + 0.5 AS integer) from sometable
``` | converting decimal to integer in teradata | [
"",
"sql",
"teradata",
""
] |
I have a simple SQL question.
I have 3 tables
```
Books
--------------------
Book_ID(pk) | Title | Price
--------------------
Clients
---------------------
Client_ID(pk) | Name |
CardsOfReaders
---------------------
Book_ID(pk,fk)| Client_ID(pk, fk)
```
So the question is: how do I display the name of the reader who borrowed the book with the highest price (with the price column displayed as well)?
Thanks in advance! | ```
select top 1 c.name, b.price
from books b
inner join cardsofreaders cr on cr.book_id = b.book_id
inner join clients c on c.client_id = cr.client_id
order by b.price desc
``` | Taking a slightly different spin on your question, given your table structure, note that it is possible for multiple clients to borrow the book with the most expensive price. To find all such clients:
```
SELECT c.Name, b.Price
FROM
(
SELECT TOP 1 Book_ID, Price
FROM BOOKS
ORDER BY Price desc
) b
INNER JOIN CardsOfReaders car
on b.BOOK_ID = car.BOOK_ID
INNER JOIN Clients c
ON car.Client_ID = c.Client_ID;
``` | SQL simple query issue | [
"",
"sql",
"sql-server",
""
] |
I have a table
```
ID P_ID Cost
1 101 1000
2 101 1050
3 101 1100
4 102 5000
5 102 2000
6 102 6000
7 103 3000
8 103 5000
9 103 4000
```
I want to use the 'Cost' column twice, to fetch the first and the last inserted Cost value for each P\_ID.
I want output as:
```
P_ID First_Cost Last_Cost
101 1000 1100
102 5000 6000
103 3000 4000
``` | ```
;WITH t AS
(
SELECT P_ID, Cost,
f = ROW_NUMBER() OVER (PARTITION BY P_ID ORDER BY ID),
l = ROW_NUMBER() OVER (PARTITION BY P_ID ORDER BY ID DESC)
FROM dbo.tablename
)
SELECT t.P_ID, t.Cost, t2.Cost
FROM t INNER JOIN t AS t2
ON t.P_ID = t2.P_ID
WHERE t.f = 1 AND t2.l = 1;
```
In 2012 you will be able to use `FIRST_VALUE()`:
```
SELECT DISTINCT
P_ID,
FIRST_VALUE(Cost) OVER (PARTITION BY P_ID ORDER BY ID),
FIRST_VALUE(Cost) OVER (PARTITION BY P_ID ORDER BY ID DESC)
FROM dbo.tablename;
```
You get a slightly more favorable plan if you remove the `DISTINCT` and instead use `ROW_NUMBER()` with the same partitioning to eliminate multiple rows with the same `P_ID`:
```
;WITH t AS
(
SELECT
P_ID,
f = FIRST_VALUE(Cost) OVER (PARTITION BY P_ID ORDER BY ID),
l = FIRST_VALUE(Cost) OVER (PARTITION BY P_ID ORDER BY ID DESC),
r = ROW_NUMBER() OVER (PARTITION BY P_ID ORDER BY ID)
FROM dbo.tablename
)
SELECT P_ID, f, l FROM t WHERE r = 1;
```
Why not `LAST_VALUE()`, you ask? Well, it doesn't work like you might expect. For more details, [see the comments under the documentation](http://technet.microsoft.com/en-us/library/hh231517.aspx). | ```
SELECT t.P_ID,
SUM(CASE WHEN ID = t.minID THEN Cost ELSE 0 END) as FirstCost,
SUM(CASE WHEN ID = t.maxID THEN Cost ELSE 0 END) as LastCost
FROM myTable
JOIN (
SELECT P_ID, MIN(ID) as minID, MAX(ID) as maxID
FROM myTable
GROUP BY P_ID) t ON myTable.ID IN (t.minID, t.maxID)
GROUP BY t.P_ID
```
Admittedly, @AaronBertrand's approach is cleaner here. However, this solution will work on older versions of SQL Server (that don't support CTE's or window functions), or on pretty much any other DBMS. | how to use same column twice with different criteria with one common column in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a Table `SMTP` (simplified)
**INITIAL STAGE**
```
| Id | Server | ConnectionRequired |
---------------------------------------------
| 0 | smtp.tea.com | '1' |
| 1 | smtp.juice.com | '0' |
| 2 | smtp.coffee.com | NULL |
| 3 | smtp.milk.org | '1' |
```
`ConnectionRequired` is a `CHAR(1)`
I am trying to add:
`ConnectionType` of type `INT`
Whose initial value depends on `ConnectionRequired`:
```
'1' becomes 1 otherwise it becomes 0
```
**MIDDLE STAGE**
```
| Id | Server | ConnectionRequired | ConnectionType |
--------------------------------------------------------------
| 0 | smtp.tea.com | '1' | 1 |
| 1 | smtp.juice.com | '0' | 0 |
| 2 | smtp.coffee.com | NULL | 0 |
| 3 | smtp.milk.org | '1' | 1 |
```
THEN I want to `DROP` `ConnectionRequired`
**ENDING STAGE**
```
| Id | Server | ConnectionType |
----------------------------------------|
| 0 | smtp.tea.com | 1 |
| 1 | smtp.juice.com | 0 |
| 2 | smtp.coffee.com | 0 |
| 3 | smtp.milk.org | 1 |
```
What I have so far:
```
ALTER TABLE SMTP ADD ConnectionType INT NOT NULL
MIDDLE STAGE.... HAPPENS HERE
ALTER TABLE SMTP DROP COLUMN ConnectionRequired
``` | and working on both (Sql server and Oracle)
```
update smtp set connectionType = (case when connectionRequired = '1' then 1 else 0 end);
``` | Are you looking for the middle stage? If so, I would try
update smtp set connectionType = decode(connectionRequired, '1', 1, 0); | Alter a Table by adding a new column whose value depends on another column | [
"",
"sql",
"sql-server",
""
] |
User no. 1 is `Aa`:
```
CREATE TABLE if not exists tblA
(
id int(11) NOT NULL auto_increment ,
sender varchar(255),
receiver varchar(255),
msg varchar(255),
date timestamp,
PRIMARY KEY (id)
);
CREATE TABLE if not exists tblB
(
id int(11) NOT NULL auto_increment ,
sno varchar(255),
name varchar(255),
PRIMARY KEY (id)
);
INSERT INTO tblA (sender, receiver,msg,date ) VALUES
('1', '2', 'buzz ...','2011-08-21 14:11:09'),
('1', '2', 'test ...','2011-08-21 14:12:19'),
('1', '2', 'check ...','2011-08-21 14:13:29'),
('1', '1', 'test2 ...','2011-08-21 14:14:09'),
('2', '1', 'check2 ...','2011-08-21 14:15:09'),
('2', '1', 'test3 ...','2011-08-21 14:16:09'),
('1', '2', 'buzz ...','2011-08-21 14:17:09'),
('1', '2', 'test ...','2011-08-21 14:18:19'),
('1', '2', 'check ...','2011-08-21 15:19:29'),
('1', '1', 'test2 ...','2011-08-21 14:10:09'),
('3', '1', 'check2 ...','2011-08-21 14:21:09'),
('3', '1', 'test3 ...','2011-08-21 14:22:09'),
('3', '2', 'buzz ...','2011-08-21 14:24:09'),
('3', '2', 'test ...','2011-08-21 14:25:19'),
('1', '3', 'check ...','2011-08-21 14:26:29'),
('1', '3', 'test2 ...','2011-08-21 14:27:09'),
('2', '3', 'check2 ...','2011-08-21 14:28:09'),
('2', '3', 'test3 ...','2011-08-21 14:29:09'),
('1', '2', 'check3 ...','2011-08-21 14:23:09'),
('1', '4', 'test2 ...','2011-08-21 14:27:09'),
('1', '5', 'test2 ...','2011-08-21 14:27:09'),
('2', '6', 'check2 ...','2011-08-21 14:28:09'),
('1', '7', 'test3 ...','2011-08-21 14:29:09'),
('8', '2', 'check3 ...','2011-08-21 14:23:09');
INSERT INTO tblB (sno, name ) VALUES
('1', 'Aa'),
('2', 'Bb'),
('3', 'Cc'),
('4', 'Dd'),
('5', 'Ee'),
('6', 'Ff'),
('7', 'Gg'),
('8', 'Hh');
```
How do I get the latest communication time between 2 users?
The query at <http://www.sqlfiddle.com/#!2/ed676/1> is not giving the latest times of communication.
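A common way to treat (sender, receiver) and (receiver, sender) as the same conversation is to normalize the pair ordering before grouping; a sketch in SQLite with trimmed sample data, where the two-argument `MIN`/`MAX` play the role of MySQL's `LEAST`/`GREATEST`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblA (sender TEXT, receiver TEXT, date TEXT);
    INSERT INTO tblA VALUES
        ('1', '2', '2011-08-21 14:11:09'),
        ('2', '1', '2011-08-21 14:15:09'),
        ('1', '3', '2011-08-21 14:26:29');
""")
# Normalize each pair to (low, high), then take the latest date per pair.
latest = conn.execute("""
    SELECT MIN(sender, receiver) AS a,
           MAX(sender, receiver) AS b,
           MAX(date)
    FROM tblA
    GROUP BY a, b
    ORDER BY a, b""").fetchall()
```

The 1↔2 messages collapse into one pair whose latest time is 14:15:09.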
I really appreciate any help.Thanks in Advance. | Is this what you want?
```
select b.name, max(date)
from tblA a join
tblB b
on b.sno in (a.receiver, a.sender)
where b.sno <> '1' and
exists (select 1
from tblB b2
where b2.sno = '1' and
b2.sno in (a.receiver, a.sender)
) or
(a.sender = 1 and a.receiver = 1)
group by b.name
order by max(date) desc;
```
This returns the latest time of communication for each "other person" talking to `1`. Your original query didn't put `max(date)` in the select list. | Based on the discussion on my previous answer, I think what you want is something like this:
```
SELECT
tblA.id,
sender.name AS sender_name,
receiver.name AS receiver_name,
tblA.msg,
tblA.date
FROM (
SELECT
LEAST(sender, receiver) AS a,
GREATEST(sender, receiver) AS b,
MAX(id) AS id
FROM tblA
GROUP BY a, b
) AS t
JOIN tblA ON t.id = tblA.id
JOIN tblB AS sender ON tblA.sender = sender.sno
JOIN tblB AS receiver ON tblA.receiver = receiver.sno
```
So the basic idea is:
1. Choose the latest message `id` for each conversation pair; an `auto_increment` (or some other continuously incrementing numeric `PRIMARY KEY`) is assumed for the messages (`tblA`) table. This finds the IDs of those messages without regard to `sender`/`receiver` ordering.
2. Based on the message `id` for each message, select its full detail.
3. Join the message rows against the user (`tblB`) records twice, once to get `sender` details, and once for `receiver` details. | Select the Latest Message Between the communication of two users and group them | [
"",
"mysql",
"sql",
""
] |
I have to check whether a worker has all required skills. This is done by comparing the worker's set of skills with a set of skills that is required. So, to make it clearer, here is the DDL of the tables I have:
```
CREATE TABLE [WorkerSkills](
[WorkerId] [bigint] NOT NULL,
[SkillName] [varchar](100) NOT NULL
) GO
CREATE TABLE [SkillCombinator](
[SetId] [int] NOT NULL,
[SkillCombinator] [varchar](5) NOT NULL
) GO
CREATE TABLE [RequiredSkills](
[SetId] [int] NOT NULL,
[SkillName] [varchar](100) NOT NULL
) GO
```
and here is the sample data:
```
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (1, 'A')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (1, 'B')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (1, 'C')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (2, 'D')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (2, 'X')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (3, 'E')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (4, 'A')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (4, 'B')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (4, 'H')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (4, 'I')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'A')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'B')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'C')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'E')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'G')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'H')
INSERT [WorkerSkills] ([WorkerId], [SkillName]) VALUES (5, 'I')
INSERT [SkillCombinator] ([SetId], [SkillCombinator]) VALUES (1, 'AND')
INSERT [SkillCombinator] ([SetId], [SkillCombinator]) VALUES (2, 'OR')
INSERT [SkillCombinator] ([SetId], [SkillCombinator]) VALUES (3, 'AND')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (1, 'A')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (1, 'B')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (1, 'C')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (2, 'D')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (2, 'E')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (2, 'F')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (3, 'G')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (3, 'H')
INSERT [RequiredSkills] ([SetId], [SkillName]) VALUES (3, 'I')
```
This means that there are 3 sets with each having 3 skills defined.
```
set 1: A and B and C
set 2: D or E or F
set 3: G and H and I
```
And there are workers with following skills:
```
worker 1: A, B, C
worker 2: D, X
worker 3: E
worker 4: A, B, H, I
worker 5: A, B, C, E, G, H, I
```
Now, the problem is to write a function in SQL Server 2008 that accepts **WorkerId** and **SetCombinator** parameters and returns a value indicating whether the Worker has all required skills.
*Sample input 1:*
```
WorkerId: 1
SetCombinator: OR
```
This means that all sets should have **OR** combinator, i.e.:
```
set 1: A and B and C
OR
set 2: D or E or F
OR
set 3: G and H and I
```
The result should be **true** since Worker has skills that correspond to set #1.
*Sample input 2:*
```
WorkerId: 4
SetCombinator: OR
```
The result should be **false**.
*Sample input 3:*
```
WorkerId: 1
SetCombinator: AND
```
This means that all sets should have **AND** combinator, i.e.:
```
set 1: A and B and C
AND
set 2: D or E or F
AND
set 3: G and H and I
```
The result should be **false** since Worker has skills that correspond only to set #1, but not for 2nd and 3rd sets.
*Sample input 4:*
```
WorkerId: 5
SetCombinator: AND
```
The result should be **true** since Worker has skills that correspond for all sets.
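The matching rules themselves are easy to prototype outside of SQL; a minimal Python sketch using the sample data above (the helper names are made up):

```python
# Prototype of the matching rules in plain Python (data copied from the samples).
SETS = {
    1: ("AND", {"A", "B", "C"}),
    2: ("OR",  {"D", "E", "F"}),
    3: ("AND", {"G", "H", "I"}),
}
WORKERS = {
    1: {"A", "B", "C"},
    2: {"D", "X"},
    3: {"E"},
    4: {"A", "B", "H", "I"},
    5: {"A", "B", "C", "E", "G", "H", "I"},
}

def set_matches(worker_skills, combinator, required):
    # AND: the worker needs every skill in the set; OR: at least one.
    if combinator == "AND":
        return required <= worker_skills
    return bool(required & worker_skills)

def worker_matches(worker_id, set_combinator):
    results = [set_matches(WORKERS[worker_id], comb, req)
               for comb, req in SETS.values()]
    return all(results) if set_combinator == "AND" else any(results)
```

Running it against the four sample inputs reproduces the expected true/false/false/true pattern, which gives a reference to validate the T-SQL function against.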
Any ideas how this function should look like?
**UPDATE:** I forgot to mention that skills in RequiredSkills table are not constants, they will be often modified, and the number of these skills is also dynamic. So the solution with hardcoded values will not work. | Ok, so here is the best solution I've found so far.
```
create function fnMatchedToSkillsSet
(
@WorkerId int,
@Condition varchar(3)
)
returns table
as
return (
with x as
(
select
sc.SetId,
nullif(case
when sc.SkillCombinator = 'AND' and count(distinct ws.SkillName) = count(*) then count(distinct ws.SkillName)
when sc.SkillCombinator = 'OR' then count(distinct ws.SkillName)
end, 0) as SkillsCount
from
dbo.SkillCombinator sc join
dbo.RequiredSkills rs on rs.SetId = sc.SetId left join
dbo.WorkerSkills ws on ws.WorkerId = @WorkerId and ws.SkillName = rs.SkillName
group by
sc.SetId, sc.SkillCombinator
)
select
case
when @Condition = 'AND' and count(SkillsCount) = (select count(*) from dbo.SkillCombinator) then 1
when @Condition = 'OR' and count(SkillsCount) > 0 then 1
else 0
end as Result
from
x
);
``` | might not be the most efficient but it works for any number of skills and combinations:
```
declare @workerid int =5 -- input param
declare @setcombinator varchar(3) ='AND' -- input param
declare @skillsleft varchar(max)
declare @result varchar(10)
declare @getsets cursor
declare @set int
set @getsets = CURSOR FOR SELECT distinct SetId FROM SkillCombinator
OPEN @getsets
FETCH NEXT FROM @getsets INTO @set
-- set result by default
IF(@setcombinator='OR') set @result='FALSE' else set @result='TRUE'
WHILE @@FETCH_STATUS = 0
BEGIN -- compares each skillset against worker's and determine match or mismatch
if (select skillcombinator from skillcombinator where setid=@set) = 'AND'
BEGIN --needs all the skills in the set
SET @skillsleft=
(select count(*) from
(select skillname from RequiredSkills a
join SkillCombinator b on a.setid=b.setid
where a.setid=@set
except
select skillname from workerskills where workerid=@workerid) t)
-- override default depending on the logical combinations
if (@skillsleft = 0 and @setcombinator='OR') set @result='TRUE'
if (@skillsleft > 0 and @setcombinator='AND')set @result='FALSE'
END
if (select skillcombinator from skillcombinator where setid=@set) = 'OR'
BEGIN --needs at least one of the skills in the set
SET @skillsleft=
(select count(*) from
(select skillname from RequiredSkills a
join SkillCombinator b on a.setid=b.setid
where a.setid=@set
intersect
select skillname from workerskills where workerid=@workerid) t)
-- override default depending on the logical combinations
if (@skillsleft > 0 and @setcombinator='OR') set @result='TRUE'
if (@skillsleft = 0 and @setcombinator='AND') set @result='FALSE'
END
FETCH NEXT FROM @getsets INTO @set
END
select @result
CLOSE @getsets
DEALLOCATE @getsets
``` | How to check for logical item combinations using t-sql? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Right now, I have a script that does the following in python:
```
infinite loop:
select 1 row from MySQL
works with row data
updates 1 row if needed in MySQL (<10% of the time)
```
Would selecting and updating more than 1 row at a time speed up my script and process more rows? If so, how many rows should I pull down? I have a small VPS with 512 MB RAM. Would 1000 rows at a time be too many? Too little?
The new script logic would be:
```
infinite loop:
select 1000 rows from MySQL
loop 1000 times:
work with row data
appends mysql statement to string if update is required
runs the single update statement containing 10-50 commands
```
The new logic would have 1 big select request and 1 big update request. Would this be better than many little select and update requests?
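The batched logic can be sketched with Python's sqlite3 module as a stand-in for MySQL (the table, batch size, and update rule are made up; `fetchmany` plays the role of the big select, and `executemany` the single batched update):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO items (value) VALUES (?)", [(i,) for i in range(100)])

BATCH = 25
cur = conn.execute("SELECT id, value FROM items")  # one big select
pending = []
while True:
    rows = cur.fetchmany(BATCH)                    # consume it in chunks
    if not rows:
        break
    # "work with row data": here, ~10% of the rows need an update
    pending.extend((row_id,) for row_id, value in rows if value % 10 == 0)

# One batched update pass instead of one statement per row.
conn.executemany("UPDATE items SET value = value + 1 WHERE id = ?", pending)
conn.commit()
```

With a real MySQL driver the same shape applies: the win comes from fewer round-trips, and the right batch size is something to measure rather than guess.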
I couldn't find much info about this. Thanks so much! | In general, you want to make as few calls to the database as you possibly can. As far as how many records to get per call is all up to your system.
My method is to just mess around with the number until I'm satisfied. | Regarding performance, no assumptions should be made. You should measure things in order to make a decision. You should define **performance goals** and then test your system. I would suggest using some king of **load test** tools like [mysqlslap](http://dev.mysql.com/doc/refman/5.1/en/mysqlslap.html) or [hammerdb](http://hammerora.sourceforge.net/). If you have problems, then you can use some kind of **profiler** like [neor](http://www.profilesql.com/).
Hope I helped! | Efficiency: How many MySQL rows should I pull at a time? | [
"",
"mysql",
"sql",
""
] |
I have the sql create and insert scripts :
```
CREATE TABLE NINJA_TYPE (
NINJA_TYPE_ID int(10) unsigned NOT NULL AUTO_INCREMENT,
DESCRIPTION varchar(30) NOT NULL,
PRIMARY KEY (NINJA_TYPE_ID)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO NINJA_TYPE VALUES (1, 'Genin');
INSERT INTO NINJA_TYPE VALUES (2, 'Chunin');
INSERT INTO NINJA_TYPE VALUES (3, 'Jounin');
INSERT INTO NINJA_TYPE VALUES (4, 'Kage');
CREATE TABLE NINJA (
NINJA_ID int(10) unsigned NOT NULL AUTO_INCREMENT,
NINJA_TYPE_ID int(10) unsigned NOT NULL,
NAME varchar(30) NOT NULL,
PRIMARY KEY (NINJA_ID),
KEY FK_NINJA_TYPE (NINJA_TYPE_ID),
CONSTRAINT FK_NINJA_TYPE FOREIGN KEY (NINJA_TYPE_ID) REFERENCES NINJA_TYPE (NINJA_TYPE_ID)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO NINJA VALUES (1, 1, 'Fransie');
INSERT INTO NINJA VALUES (2, 1, 'Mary Grace');
INSERT INTO NINJA VALUES (3, 1, 'Hannah');
INSERT INTO NINJA VALUES (4, 1, 'Chinita');
INSERT INTO NINJA VALUES (5, 2, 'Nookie');
INSERT INTO NINJA VALUES (6, 2, 'Ruth');
INSERT INTO NINJA VALUES (7, 2, 'Rose');
INSERT INTO NINJA VALUES (8, 2, 'Irish');
INSERT INTO NINJA VALUES (9, 3, 'Tokmol');
INSERT INTO NINJA VALUES (10, 3, 'JG');
INSERT INTO NINJA VALUES (11, 3, 'JM');
INSERT INTO NINJA VALUES (12, 3, 'Jose');
INSERT INTO NINJA VALUES (13, 4, 'Tom');
INSERT INTO NINJA VALUES (14, 4, 'Johan');
INSERT INTO NINJA VALUES (15, 4, 'Christian');
INSERT INTO NINJA VALUES (16, 4, 'Lemuel');
```
I have the sql select :
```
SELECT DISTINCT NAME, DESCRIPTION FROM NINJA N
INNER JOIN NINJA_TYPE NT ON N.NINJA_TYPE_ID = NT.NINJA_TYPE_ID ;
```
The result :
```
+------------+-------------+
| NAME | DESCRIPTION |
+------------+-------------+
| Fransie | Genin |
| Mary Grace | Genin |
| Hannah | Genin |
| Chinita | Genin |
| Nookie | Chunin |
| Ruth | Chunin |
| Rose | Chunin |
| Irish | Chunin |
| Tokmol | Jounin |
| JG | Jounin |
| JM | Jounin |
| Jose | Jounin |
| Tom | Kage |
| Johan | Kage |
| Christian | Kage |
| Lemuel | Kage |
+------------+-------------+
```
I also tried group by :
```
SELECT DISTINCT NAME, DESCRIPTION FROM NINJA N
INNER JOIN NINJA_TYPE NT ON N.NINJA_TYPE_ID = NT.NINJA_TYPE_ID GROUP BY DESCRIPTION;
```
The result :
```
+---------+-------------+
| NAME | DESCRIPTION |
+---------+-------------+
| Nookie | Chunin |
| Fransie | Genin |
| Tokmol | Jounin |
| Tom | Kage |
+---------+-------------+
```
What I want to happen is :
```
+------------+-------------+
| NAME | DESCRIPTION |
+------------+-------------+
| Fransie | Genin |
| Mary Grace | |
| Hannah | |
| Chinita | |
| Nookie | Chunin |
| Ruth | |
| Rose | |
| Irish | |
| Tokmol | Jounin |
| JG | |
| JM | |
| Jose | |
| Tom | Kage |
| Johan | |
| Christian | |
| Lemuel | |
+------------+-------------+
```
Please don't ask me why I'm doing this. I have a reason. I just want to know how to deal with it. Please help.
PS: I may not be able to answer probing questions today because I have to go somewhere; I'll be back tomorrow. Please feel free to give answers, comments, and suggestions.
Thanks. | So just do:
```
SELECT
dist.name,
IF(grouped.description IS NULL, '', grouped.description) AS description
FROM
(SELECT
DISTINCT NAME,
DESCRIPTION
FROM
NINJA AS N
INNER JOIN NINJA_TYPE AS NT
ON N.NINJA_TYPE_ID = NT.NINJA_TYPE_ID) AS dist
LEFT JOIN
(SELECT
DISTINCT NAME,
DESCRIPTION
FROM
NINJA AS N
INNER JOIN NINJA_TYPE AS NT
ON N.NINJA_TYPE_ID = NT.NINJA_TYPE_ID
GROUP BY
DESCRIPTION) AS grouped
ON
dist.name=grouped.name
```
you'll see the result as [expected](http://sqlfiddle.com/#!2/a5d9d/4). Note that for formatting purposes, application-level post-processing is a much better idea. | While I completely agree with Mihai, you *can* also do something like this...
```
SELECT * FROM users;
+----+----------+-----------+
| id | name | surname |
+----+----------+-----------+
| 1 | John | Lennon |
| 2 | Paul | McCartney |
| 3 | George | Harrison |
| 4 | Ringo | Starr |
| 5 | Yoko Ono | Lennon |
| 6 | Linda | McCartney |
| 7 | Olivia | Harrison |
| 8 | Barbara | Bach |
| 9 | Stella | McCartney |
+----+----------+-----------+
SELECT u.name
, IF(u.surname=@last,'',@last:=u.surname) surname
FROM users u, (SELECT @last='')x
ORDER
BY u.surname;
+----------+-----------+
| name | surname |
+----------+-----------+
| Barbara | Bach |
| George | Harrison |
| Olivia | |
| John | Lennon |
| Yoko Ono | |
| Paul | McCartney |
| Linda | |
| Stella | |
| Ringo | Starr |
+----------+-----------+
``` | mysql : How make recurring fields appear only once? | [
"",
"mysql",
"sql",
"select",
""
] |
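The accepted answer's note that application-level post-processing is often the better place for this kind of display formatting can be illustrated in Python; the sample rows below are a trimmed, invented version of the Beatles data, with sqlite3 standing in for MySQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, surname TEXT);
INSERT INTO users (name, surname) VALUES
  ('John', 'Lennon'), ('Yoko Ono', 'Lennon'),
  ('Paul', 'McCartney'), ('Linda', 'McCartney'),
  ('Ringo', 'Starr');
""")

rows = conn.execute("SELECT name, surname FROM users ORDER BY surname, id").fetchall()

# Blank out a surname whenever it repeats the one on the previous row.
display = []
last = None
for name, surname in rows:
    display.append((name, surname if surname != last else ""))
    last = surname
```

This keeps the SQL simple and leaves the purely presentational "show each group label once" logic in the application.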
I have following two tables **TableA** and **TableB**
**TableA**
```
Id Month_Id Customer_Id Total_Amount
1 1 1 50
2 2 1 150
3 3 1 200
4 1 2 75
5 2 2 100
6 1 3 400
7 2 3 200
```
**TableB**
```
Id Month_Id Customer_Id Total_Amount
1 1 1 50
2 2 1 150
3 1 2 75
```
I want to compare **Month\_Id**, **Customer\_Id**, and **Total\_Amount** in both tables and select **Id** from **TableA**. The output should be as follows.
**Output**
```
Id
3
5
6
7
```
My concept is:
```
SELECT TableA.Id FROM TableA
WHERE TableA.Month_Id <> TableB.MonthId AND
TableA.Customer_Id <> TableB.Customer_Id AND
TableA.Total_Amount <> TableB.Total_Amount
``` | ```
SELECT TableA.Id
FROM TableA
WHERE NOT EXISTS (
SELECT 1
FROM TableB
WHERE TableB.Month_Id = TableA.Month_Id
AND TableB.Customer_Id = TableA.Customer_Id
AND TableB.Total_Amount = TableA.Total_Amount
)
``` | ```
select Id
from (
select Id, Month_Id, Customer_Id, Total_Amount from TableA
except
select Id, Month_Id, Customer_Id, Total_Amount from TableB
) q
``` | Select rows not in another table by comparing two table | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
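The accepted `NOT EXISTS` pattern can be verified against the question's sample data; a quick self-contained check using Python's sqlite3 (standing in here for SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableA (Id INTEGER, Month_Id INTEGER, Customer_Id INTEGER, Total_Amount INTEGER);
CREATE TABLE TableB (Id INTEGER, Month_Id INTEGER, Customer_Id INTEGER, Total_Amount INTEGER);
INSERT INTO TableA VALUES (1,1,1,50),(2,2,1,150),(3,3,1,200),(4,1,2,75),
                          (5,2,2,100),(6,1,3,400),(7,2,3,200);
INSERT INTO TableB VALUES (1,1,1,50),(2,2,1,150),(3,1,2,75);
""")

# Keep TableA rows whose (month, customer, amount) triple has no match in TableB.
missing = [row[0] for row in conn.execute("""
    SELECT a.Id FROM TableA a
    WHERE NOT EXISTS (
        SELECT 1 FROM TableB b
        WHERE b.Month_Id = a.Month_Id
          AND b.Customer_Id = a.Customer_Id
          AND b.Total_Amount = a.Total_Amount
    )
    ORDER BY a.Id
""")]
```

With the sample data this produces exactly the expected output: ids 3, 5, 6, 7.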
I have a database table that contains two scores:
* scoreA
* scoreB
I am trying to make a SQL query by adding these two values, such as
```
SELECT *,(scoreA+scoreB) as scoreC FROM data WHERE scoreC > 100 ORDER BY scoreC DESC
```
However, it shows an error:
```
ERROR: Unknown column 'scoreC' in 'where clause'
```
Is there any way to work around this?
P.S. the reason I don't add a new column for scoreC is because scoreA/scoreB are updated continuously by other cron jobs and if I want to calculate scoreC, I need to make extra queries/updates for scoreC, which I would like to avoid to save system resources. However, if it is the only way to calculate scoreC by another cron job, I am also ok with it if it's the only solution. Thanks. | MySQL supports a non-standard extension that allows you to use the `having` clause in this case:
```
SELECT *, (scoreA+scoreB) as scoreC
FROM data
HAVING scoreC > 100
ORDER BY scoreC DESC;
```
Let me emphasize that this is MySQL-specific. For a simple expression such as this, I would just put the formula in the `where` clause. But sometimes the expression can be quite complicated -- say a distance formula or complex `case` or subquery. In those cases, the `having` extension is actually useful.
And, you don't need the formula in the `order by` clause. MySQL allows column aliases there. | In most ANSI compliant RDBMS, you won't be able to use the derived column `ScoreC` in the where clause. However, you can do this:
```
SELECT *
FROM
(
SELECT *, (scoreA + scoreB) as scoreC
FROM data
) SummedScores
WHERE SummedScores.scoreC > 100
ORDER BY SummedScores.scoreC DESC;
```
where `SummedScores` is a [derived table](https://dev.mysql.com/doc/refman/8.0/en/derived-tables.html)
[Fiddle here](http://www.sqlfiddle.com/#!2/7f0120/1) | SQL query by adding two columns in where clause? | [
"",
"mysql",
"sql",
""
] |
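The derived-table workaround from the second answer is portable across engines, so it can be checked quickly with Python's sqlite3; the score values below are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, scoreA INTEGER, scoreB INTEGER)")
conn.executemany("INSERT INTO data (scoreA, scoreB) VALUES (?, ?)",
                 [(90, 30), (40, 20), (70, 80)])

# Wrap the computed column in a derived table so the outer WHERE can see it.
rows = conn.execute("""
    SELECT id, scoreC FROM
        (SELECT id, scoreA + scoreB AS scoreC FROM data) t
    WHERE scoreC > 100
    ORDER BY scoreC DESC
""").fetchall()
```

Only the rows whose computed `scoreC` exceeds 100 survive, in descending order.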
this is my query
```
SELECT * FROM ".TB_PREFIX."wdata
left JOIN ".TB_PREFIX."vdata
ON ".TB_PREFIX."vdata.wref = ".TB_PREFIX."wdata.id
where ".TB_PREFIX."wdata.id = $id
```
and this is the result of slow query:
> Query\_time: 0.005000 Lock\_time: 0.000000 Rows\_sent: 1 Rows\_examined: 21184
How can I optimize this code to execute faster?
EDIT:
tables structure:
```
CREATE TABLE IF NOT EXISTS `tss1_wdata` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`fieldtype` tinyint(2) unsigned NOT NULL,
`oasistype` tinyint(2) unsigned NOT NULL,
`x` smallint(3) NOT NULL,
`y` smallint(3) NOT NULL,
`occupied` tinyint(1) NOT NULL,
`image` char(3) COLLATE utf8_persian_ci NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id` (`id`),
KEY `id_2` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=40402 ;
CREATE TABLE IF NOT EXISTS `tss1_vdata` (
`wref` int(11) unsigned NOT NULL,
`owner` int(11) unsigned NOT NULL,
`name` char(100) COLLATE utf8_persian_ci NOT NULL,
`capital` tinyint(1) unsigned NOT NULL,
`pop` mediumint(11) unsigned NOT NULL,
`cp` mediumint(11) unsigned NOT NULL,
`celebration` int(11) NOT NULL DEFAULT '0',
`type` int(11) NOT NULL DEFAULT '0',
`wood` float(12,2) NOT NULL,
`clay` float(12,2) NOT NULL,
`iron` float(12,2) NOT NULL,
`maxstore` int(11) unsigned NOT NULL,
`crop` float(12,2) NOT NULL,
`maxcrop` int(11) unsigned NOT NULL,
`lastupdate` int(11) unsigned NOT NULL,
`loyalty` tinyint(3) NOT NULL DEFAULT '100',
`exp1` int(11) NOT NULL,
`exp2` int(11) NOT NULL,
`exp3` int(11) NOT NULL,
`created` int(11) NOT NULL,
`natar` tinyint(1) unsigned NOT NULL,
`starv` int(11) unsigned NOT NULL,
`starvupdate` int(11) unsigned NOT NULL,
PRIMARY KEY (`wref`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
``` | You are looking for a specific ID, so you can limit the result to 1 row. Also, you can replace wdata.id with the actual id in the ON statement.
```
SELECT * FROM ".TB_PREFIX."wdata
LEFT JOIN ".TB_PREFIX."vdata
ON ".TB_PREFIX."vdata.wref = '$id'
WHERE ".TB_PREFIX."wdata.id = '$id'
LIMIT 1
```
You have the right indices, so there's nothing to be done there. | It looks like you have all necessary indexes and query itself is built just right, i.e. there is nothing to improve in SQL. Actually, I would not consider 5ms query as a slow one
If it is a critical query executed thousands of times a second, you can still cache the result at the application level or consider a NoSQL solution. I personally would try caching first. | How can I optimize this MySQL query to execute faster? | [
"",
"mysql",
"sql",
"optimization",
"left-join",
""
] |
I'm trying to create an Oracle SQL\*Plus table, but I'm having trouble with a constraint using regular expressions.
The constraint I'm trying to implement is that the field is 4 characters long and cannot contain the following characters: ,.!£$\*<>"=
In reality I would be enough to just only allow alphanumeric characters, so I tried the following regular expression constraint:
```
CONSTRAINT CHK_Foo CHECK (Foo LIKE '[[:alnum:]]'),
```
However when I attempted to enter `AAAA`field it said I had violated the constraint.
I'm a complete newcomer to Regexp, and a relative newcomer to SQL, so any explanations as to why it doesn't work as well as just an alternative would be immensely appreciated! | Two options:
1. `CHECK (REGEXP_LIKE( foo, '[^,.!£$*<>"=]{4}' ) )` will accept all 4-character strings that do not contain `,.!£$*<>"=`
2. `CHECK (REGEXP_LIKE( bar, '[a-zA-Z0-9]{4}' ) )` will accept all 4-character strings that only contain `a-zA-Z0-9`
(If you want strings which are less than or equal to 4 characters in length, then you can replace `{4}` with `{0,4}`)
[SQL Fiddle](http://sqlfiddle.com/#!4/850c2/10)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE test (
foo CHAR(4) NOT NULL,
bar CHAR(4) NOT NULL,
CONSTRAINT chk_foo CHECK (REGEXP_LIKE( foo, '[^,.!£$*<>"=]{4}' ) ),
CONSTRAINT chk_bar CHECK (REGEXP_LIKE( bar, '[a-zA-Z0-9]{4}' ) )
);
```
**Query 1**:
```
INSERT INTO test VALUES ( 'AAAA', '"AAA' );
```
**Result 1**:
```
ORA-02290: check constraint (USER_4_850C2.CHK_BAR) violated : INSERT INTO test VALUES ( 'AAAA', '"AAA' )
```
**Query 2**:
```
INSERT INTO test VALUES ( '$AAA', 'AAAA' );
```
**Result 2**:
```
ORA-02290: check constraint (USER_4_850C2.CHK_FOO) violated : INSERT INTO test VALUES ( '$AAA', 'AAAA' )
```
**Query 3**:
```
INSERT INTO test VALUES ( 'AAAA', 'AAAA' )
```
**Result 3**: | ```
CONSTRAINT CHK_Foo CHECK (regexp_LIKE(Foo, '^\w{4}$'))
``` | Exclude certain characters using a sql constraint (with regexp)? | [
"",
"sql",
"regex",
"oracle",
"sqlplus",
""
] |
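The character classes from the accepted answer can be sanity-checked outside the database with Python's `re` module. Note one assumed difference: `fullmatch` below anchors the pattern to the whole string, whereas Oracle's `REGEXP_LIKE` is unanchored by default (it matches on any substring), so this is an illustration of the character classes rather than an exact emulation.

```python
import re

# Class 1: any 4 characters except the forbidden set ,.!£$*<>"=
no_forbidden = re.compile(r'[^,.!£$*<>"=]{4}')
# Class 2: exactly 4 alphanumeric characters
alnum_only = re.compile(r'[a-zA-Z0-9]{4}')

ok_plain = bool(no_forbidden.fullmatch("AAAA"))   # no forbidden chars: accepted
bad_dollar = bool(no_forbidden.fullmatch("$AAA")) # '$' is forbidden: rejected
ok_alnum = bool(alnum_only.fullmatch("AB12"))     # all alphanumeric: accepted
bad_quote = bool(alnum_only.fullmatch('"AAA'))    # '"' not alphanumeric: rejected
```

This mirrors the pass/fail behaviour shown in the SQL Fiddle results above.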
Ok, I'm really not sure how to tackle this SQL problem so I'll just go ahead and explain it...
I have a table with the following columns :
```
gameId, winnerPlayerId, winnerFactionId, loserPlayerId, loserFactionId
```
Now, I'd like to build a query that, given a factionId, outputs the following data :
```
playerId, totalGamesPlayedAsFaction
```
What stumps me here is that each row in the table needs to be counted TWICE, once for the loser and once for the winner... Therefore I can't use a simple GROUP BY winnerPlayerId.
I feel as if my question is not particularly clear and that the solution is probably quite simple... | You need to "duplicate" the data somehow. The most typical way would use `union all`:
```
select . . .
from ((select gameId, winnerPlayerId as PlayerId, winnerFactionId as FactionId
from table t
) union all
(select gameId, loserPlayerId as PlayerId, loserFactionId as FactionId
from table t
)
) t
group by PlayerId, FactionId;
``` | Looks like a job for the ever popular self join
```
select playerid, count(*) total
from player p left join game winner on p.playerid = winner.winnerplayerid
left join game loser on p.playerid = loser.loserplayerid
etc
```
This will get you started. You'll have to contend with filters and null values on your own. | GROUP BY, value could be in either of 2 columns | [
"",
"sql",
"count",
"group-by",
""
] |
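The `UNION ALL` duplication trick from the accepted answer can be checked with Python's sqlite3 and a few invented games (the player ids, faction ids, and the filter on faction 1 are all made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE games (gameId INTEGER, winnerPlayerId INTEGER, winnerFactionId INTEGER,
                    loserPlayerId INTEGER, loserFactionId INTEGER);
INSERT INTO games VALUES (1, 10, 1, 20, 2),
                         (2, 20, 1, 10, 2),
                         (3, 10, 1, 20, 1);
""")

# Each game contributes one row for the winner and one for the loser,
# so every game is counted once per participating player.
rows = conn.execute("""
    SELECT PlayerId, COUNT(*) AS played FROM (
        SELECT winnerPlayerId AS PlayerId, winnerFactionId AS FactionId FROM games
        UNION ALL
        SELECT loserPlayerId AS PlayerId, loserFactionId AS FactionId FROM games
    ) t
    WHERE FactionId = 1
    GROUP BY PlayerId
    ORDER BY PlayerId
""").fetchall()
```

Player 10 played faction 1 in games 1 and 3 (both as winner); player 20 played it in game 2 (winner) and game 3 (loser), so both show two games.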
I have the following table:
```
create table Person(
FName varchar(15) not null
,LName varchar(25) not null
,Address varchar(100)
,City varchar(30)
,State varchar(30)
,Zip varchar(10)
,Phone varchar(15)
,Email varchar(60) unique
,UserID varchar(30)
,Password varchar(30) not null
,CCtype varchar(8)
,CCNumber varchar(16)
,primary key(UserID)
);
```
And I want to write a restriction where if there is CCNumber, the CCType can not be empty. The other way around is fine. If I have type I may not have a number, and that is ok.
I have tried these constraints, but both are not working as expected when I test.
```
,constraint CCNumber_no_CCType check((CCNumber is null) or (CCType is null))
```
or
```
,constraint CCNumber_no_CCType check((CCNumber is not null) and (CCType is null))
``` | The constraint that you want is:
```
constraint CCNumber_no_CCType check((CCNumber is null) or (CCType is not null))
```
Here is a more general way to think of this. If you have "if A --> B", then this is true when either B is true or not A is true. | Try like so:
```
constraint CCNumber_no_CCType check((CCNumber is null)
or (CCNumber is not null and CCType is not null))
```
As others have pointed out, [K map](http://en.wikipedia.org/wiki/Karnaugh_map) simplification should mean that the `or (CCNumber is not null ..` becomes redundant. | SQL Server restriction between columns in one table | [
"",
"sql",
"sql-server",
""
] |
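The accepted constraint can be exercised directly, since SQLite also supports `CHECK` constraints; a minimal sketch via Python's sqlite3, with the table trimmed to the two relevant columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE Person (
    CCtype TEXT,
    CCNumber TEXT,
    CONSTRAINT CCNumber_no_CCType
        CHECK (CCNumber IS NULL OR CCtype IS NOT NULL)
)""")

conn.execute("INSERT INTO Person VALUES ('VISA', '4111111111111111')")  # number with type: ok
conn.execute("INSERT INTO Person VALUES ('VISA', NULL)")                # type alone: ok
conn.execute("INSERT INTO Person VALUES (NULL, NULL)")                  # neither: ok
try:
    conn.execute("INSERT INTO Person VALUES (NULL, '4111111111111111')")  # number without type
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

accepted = conn.execute("SELECT COUNT(*) FROM Person").fetchone()[0]
```

Only the "number without type" row violates the implication "if CCNumber then CCtype" and is rejected.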
```
DELETE FROM (SELECT * FROM orders1 o,order_item1 i
WHERE i.order_id = o.order_id );
```
The above query does not name a particular table to delete rows from; instead, I have used a subquery.
But this query executes without any error and deletes rows from the `order_item1` table.
How does the Oracle server decide which table's rows should be deleted?
Why can't rows from the `orders1` table be deleted? | In your case, the subquery's table `orders1` almost certainly has a primary key
(`order_id` ) and the table `order_item1` has the corresponding foreign key `order_id`.
Thus, Oracle turns the table `order_item1` into a **key preserved table** and is able to delete from this table in the outer `delete` statement.
The concept of *key preserved tables* is further explained at [This tahiti documentation link](http://docs.oracle.com/cd/E11882_01/server.112/e25494/views.htm#ADMIN11783).
If you want to find out what is deleted you can go like so
These are the tables that are joined in your subquery:
```
create table tq84_orders1 (
id number primary key,
col_1 varchar2(10),
col_2 date
);
create table tq84_order_item1 (
order_id references tq84_orders1,
col_3 varchar2(10),
col_4 date
);
```
This view emulates the subquery:
```
create view tq84_orders_v as (
select *
from
tq84_orders1 o,
tq84_order_item1 i
where
o.id = i.order_id
);
```
This query (on `user_updateable_columns`) now finds which columns actually get deleted (or can be deleted):
```
select
table_name,
column_name,
--updatable,
--insertable,
deletable
from
user_updatable_columns
where
table_name = 'TQ84_ORDERS_V';
```
The result shows that in effect the three columns ORDER\_ID, COL\_3 and COL\_4 can be deleted, all of which stem from TQ84\_ORDER\_ITEM1.
```
TABLE_NAME COLUMN_NAME DEL
------------------------------ ------------------------------ ---
TQ84_ORDERS_V ID NO
TQ84_ORDERS_V COL_1 NO
TQ84_ORDERS_V COL_2 NO
TQ84_ORDERS_V ORDER_ID YES
TQ84_ORDERS_V COL_3 YES
TQ84_ORDERS_V COL_4 YES
``` | I think that if you want to Delete rows from `just ONE` table you can use this query:
```
DELETE FROM orders1 WHERE order_id in
(SELECT o.order_id FROM orders1 o,order_item1 i
WHERE i.order_id = o.order_id );
```
But if you want to delete rows from both tables you can do something like this:
```
CREATE TABLE TEMP_TAB AS SELECT o.order_id FROM orders1 o,order_item1 i
WHERE i.order_id = o.order_id;
DELETE FROM order_item1 WHERE order_id in
(SELECT TEMP_TAB.order_id FROM TEMP_TAB);
DELETE FROM orders1 WHERE order_id in
(SELECT TEMP_TAB.order_id FROM TEMP_TAB);
DROP TABLE TEMP_TAB;
``` | DELETE FROM <subquery> | [
"",
"sql",
"oracle",
"oracle11g",
"oracle10g",
""
] |
Is there any way to do this query in a more optimized way?
```
select SQL_NO_CACHE count(*) from products p
INNER JOIN `products_categories` AS `pc` ON p.id = pc.products_id
where pc.categories_id = 87
```
My schema is simple: products, categories and a N:N join table: products\_categories. Products are about 400000 rows. products\_categories is about 600000. Products with category = 87 are about 18000. Using explain gives:
```
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE pc index products_id products_id 8 NULL 612469 Using where; Using index
1 SIMPLE p eq_ref PRIMARY PRIMARY 4 stagingbagthis.pc.products_id 1 Using index
```
It seems to me that the first line where rows = 612469 is not a very good sign. So, can this query be optimized in any way possible? | You need an index on `products_categories.categories_id` so that the `WHERE` clause can be optimized. | What if you removed the `products` table:
```
select SQL_NO_CACHE count(*)
from products_categories pc
where pc.categories_id = 87;
```
For this, you would need an index on `products_categories(categories_id)`, or similar index where `categories_id` is the first column. | Optimize mysql query: products <- N:N -> categories | [
"",
"mysql",
"sql",
""
] |
```
( SELECT 'no' AS aff,
ss.orderid, ss.secondaff, ss.saletype, ss.price,
ss.affiliate, ss.vendor,
ss.cpa,ss.rebill,ss.salests,ss.fname,ss.lname
FROM sales ss
WHERE 1=1
AND ( ss.vendor='3kpertrade' OR ss.affiliate='3kpertrade')
ORDER BY ss.salests DESC )
UNION ALL
( SELECT 'yes' AS aff, ss.orderid,ss.secondaff,ss.saletype,sf.price,
ss.affiliate,ss.vendor,ss.cpa,ss.rebill,ss.salests,ss.fname,ss.lname
FROM salesaff sf
INNER JOIN sales ss ON ss.orderid=sf.orderid
WHERE 1=1
AND ( ss.vendor='3kpertrade' OR ss.affiliate='3kpertrade')
ORDER BY sf.salests DESC )
ORDER BY salests DESC
```
I need to select data from the `sales` table with the given condition, and also select data from the `salesaff` table if vendor=affiliate in the sales table and `ss.orderid = sf.orderid`, as in the SQL above.
I wrote this with some help, but the query takes 195 sec to load. The `sales` table contains 8000 rows and is indexed by salests, vendor, affiliate, and orderid. The `salesaff` table contains 6000 rows and is indexed by orderid.
Both tables contain the same columns.
Some facts:
-> If I remove the custom column from both parts ('yes' as aff, 'no' as aff), then the query will execute if I use LIMIT 100 or so; otherwise it takes 195 sec to load.
If I use LIMIT 100, the query executes in 2 sec. But without the limit, it gives error code 500.
Moreover, I need total sum of price too. Can anybody help me? | The first efficiency is that the order by in the subqueries is not necessary, unless you are putting limit clauses there. So try this:
```
(SELECT 'no' AS aff, ss.orderid, ss.secondaff, ss.saletype, ss.price, ss.affiliate,
ss.vendor, ss. cpa, ss.rebill, ss.salests, ss.fname, ss.lname
FROM sales ss
WHERE 1=1 AND ( ss.vendor='3kpertrade' OR ss.affiliate='3kpertrade')
)
UNION ALL
( SELECT 'yes' AS aff, ss.orderid, ss.secondaff, ss.saletype, sf.price, ss.affiliate,
ss.vendor, ss.cpa, ss.rebill, ss.salests, ss.fname, ss.lname
FROM salesaff sf INNER JOIN
sales ss
ON ss.orderid = sf.orderid
WHERE 1=1 AND ( ss.vendor='3kpertrade' OR ss.affiliate='3kpertrade')
)
ORDER BY salests DESC
``` | Heh heh. This is a hard one. You're probably getting correct results, but your hosting provider is timing out your query, so you can't tell. These two tables don't seem very large to me.
First, increase the timeout time if you can, especially for debugging. It's difficult to ensure correctness if your inefficient queries don't finish. The most efficient query in the world is worthless if it's wrong. Ask your hosting provider support person for help.
Second, you will get a timeout if it takes too long for the query to complete and return the first row of the resultset. You might also get a timeout if your host language is slow to consume the entire resultset, so make sure you're doing that efficiently.
Third, let's break it down. You're looking up a particular set of sales records. You're doing an OR operation. Most likely that guarantees a full table scan, which isn't so good for performance.
You may as well skip the `1=1` in your `WHERE` clauses. That's usually put in there as a hack to make the construction of code by a host language a little simpler. It shouldn't hurt performance, but it won't help.
Making a couple of guesses: assuming `ss.orderid` is a unique (autoincrement) primary key field, you might try adding compound indexes -- covering indexes -- on
```
(vendor, orderid)
```
and
```
(affiliate,orderid)
```
then using the following query to retrieve the list of orderid values you need to look up.
```
SELECT orderid FROM sales WHERE vendor = '3kpertrade'
UNION
SELECT orderid FROM sales WHERE affiliate = '3kpertrade'
```
This subquery efficiently looks up the set of `orderid` values you need. The two covering indexes I suggested will improve performance pretty significantly. (Don't use `UNION ALL` here, use `UNION`, or you may get some duplicated data.)
Then you can use this query to get some of your order data -- the first part of your `UNION ALL`.
```
SELECT 'no' AS aff,
ss.orderid, ss.secondaff, ss.saletype, ss.price,
ss.affiliate, ss.vendor,
ss.cpa,ss.rebill,ss.salests,ss.fname,ss.lname
FROM sales ss
JOIN (
SELECT orderid FROM sales WHERE vendor = '3kpertrade'
UNION
SELECT orderid FROM sales WHERE affiliate = '3kpertrade'
) ids ON ss.orderid = ids.orderid
```
I suggest you debug your app with just this much of your query. Make sure it's displaying the correct stuff.
You appear to be implementing a shadow-booking system in your query. That is, when a particular order appears in your `salesaff` table you're *duplicating* it in this query's result set. It looks like the only information you retrieve from `salesaff` is the price. Is that correct? Do affiliate sales show up more than once in your `UNION ALL` result set?
Along these same lines, second half of your query will look like this:
```
SELECT 'yes' AS aff, ss.orderid, ss.secondaff, ss.saletype,
sf.price,
ss.affiliate, ss.vendor, ss.cpa, ss.rebill, ss.salests, ss.fname, ss.lname
FROM sales AS ss
JOIN (
SELECT orderid FROM sales WHERE vendor = '3kpertrade'
UNION
SELECT orderid FROM sales WHERE affiliate = '3kpertrade'
) ids ON ss.orderid = ids.orderid
JOIN salesaff sf ON ss.orderid=sf.orderid
```
If the only information you're pulling from `salesaff` is in fact the price, then a covering index
```
(orderid, price)
```
will help this query a bit.
Finally, do the UNION ALL and the `ORDER BY`.
An observation. If `sales.salests` is a timestamp, and if `sales.orderid` is an auto-incrementing index, and if you never update `sales.salests` but only insert it, then `ORDER BY orderid DESC` will do the same thing as `ORDER BY salests DESC`. That might save a little time. | query took too much time to load | [
"",
"mysql",
"sql",
""
] |
I have deleted a record with EmpID = 13 from the HrEmployee table. When I use a select query to filter EmpID = 13 it shows no result, which is good, but the query below still shows EmpID = 13 records. Why is this so?
```
SELECT distinct
dbo.HrEmployee.EmplID,
dbo.HrEmployee.EmplName,
dbo.AtdRecord.RecDate,
dbo.AtdRecord.RecTime,
dbo.HrDept.DeptName
FROM dbo.HrDept
inner JOIN dbo.HrEmployee
ON dbo.HrDept.DeptID = dbo.HrEmployee.DeptID
inner JOIN dbo.AtdRecord
ON dbo.HrEmployee.EmplID = dbo.AtdRecord.EmplID
where HrEmployee.EmplID = 13
``` | Because some record still exists with EmpID = 13 in the `dbo.AtdRecord` table and you are using an `INNER JOIN`, try this:
```
SELECT distinct
dbo.HrEmployee.EmplID,
dbo.HrEmployee.EmplName,
dbo.AtdRecord.RecDate,
dbo.AtdRecord.RecTime,
dbo.HrDept.DeptName
FROM dbo.HrDept
left outer JOIN dbo.HrEmployee
ON dbo.HrDept.DeptID = dbo.HrEmployee.DeptID
left outer JOIN dbo.AtdRecord
ON dbo.HrEmployee.EmplID = dbo.AtdRecord.EmplID
where HrEmployee.EmplID = 13
``` | This will be linked to the problem in your other post,
[select statement working unexpectedly](https://stackoverflow.com/questions/21349391/select-statement-working-unexpectedly/21350871#21350871)
You have two copies of the same table, the original intended table and one in the [master] database.
You are probably running this query against the master database, where the record EmpID 13 still exists, but you deleted it from the original table. | record still appears after deleting | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table called **real\_estate**; its structure and data are as follows:
```
| id | user_id | details | location | worth
| 1 | 1 | Null | Null | 10000000
| 2 | 1 | Null | Null | 20000000
| 3 | 2 | Null | Null | 10000000
```
My query is the following:
```
SELECT * , SUM( worth ) as sum
FROM real_estate
WHERE user_id = '1'
```
The result which I get from this query is
```
| id | user_id | details | location | worth | sum
| 1 | 1 | Null | Null | 10000000 | 30000000
```
I want result to be like
```
| id | user_id | details | location | worth | sum
| 1 | 1 | Null | Null | 10000000 | 30000000
| 2 | 1 | Null | Null | 20000000 | 30000000
```
Is there any way to get the result the way I want or should I write 2 different queries?
1)To get the **sum** of worth
2)To get **all the rows** for that user | You need to use a subquery that calculates the sum for every user, and then JOIN the result of the subquery with your table:
```
SELECT real_estate.*, s.user_sum
FROM
real_estate INNER JOIN (SELECT user_id, SUM(worth) AS user_sum
FROM real_estate
GROUP BY user_id) s
ON real_estate.user_id = s.user_id
WHERE
user_id = '1'
```
but if you just need to return records for a single user, you could use this:
```
SELECT
real_estate.*,
(SELECT SUM(worth) FROM real_estate WHERE user_id='1') AS user_sum
FROM
real_estate
WHERE
user_id='1'
``` | You can do your sum in a subquery like this
```
SELECT * , (select SUM(worth) from real_estate WHERE user_id = '1' ) as sum
FROM real_estate WHERE user_id = '1'
``` | Get all rows from a table for a particular user along with sum | [
"",
"mysql",
"sql",
""
] |
I just re-opened a package that I had been working on in SSIS 2008, and when I right click on one of the tasks, the "execute task" option is not showing up:

Just a few minutes ago it was showing up (before re-opening it). What am I doing wrong?
Perhaps a little bit more information would be helpful in getting your guidance:
1. "execute task" was showing up without problems
2. i closed to the package, and re-opened as administrator
3. "execute task" disappeared (even when i opened with regular permissions)
Here are the properties of this specific task. Please note that NONE of the tasks have "execute task" when right clicking:
 | The most common reason for this is that you have opened the individual Package.
The Execute Task command only appears when you have first opened a Project or Solution, then opened your Package. | I was experiencing that kind of issue and the solution was going to `File > Open > Project/Solution...`, selecting my `.sln` file (Visual studio Solution), and that would open my SSIS solution from Microsoft SQL Server Data Tools for Visual Studio. | why isn't "execute task" showing up when i right click on the task? | [
"",
"sql",
"visual-studio",
"visual-studio-2008",
"ssis",
""
] |
I was trying to execute an SQL query on a sample table I made that looks like this:
```
name text number
----------------------------------------------------------
target the cell to operate on 5
cruft cell here to simulate stuff happening 7
```
This is the SQL query I am trying to execute:
```
UPDATE data SET 'name'='target', 'number'=2 WHERE 'name'='target';
```
I expected this to change the 5 in the first row to a 2.
When I tried to execute this in [SQLite Database Browser](http://sqlitebrowser.sourceforge.net/) and eventually in the command-line sqlite version 3.7.13, it reported no error but didn't change the requested entry either. Oddly, if the quotes are removed from either string 'target' I get an error message saying `no such column: target` even though 'target' isn't being used as a column name. Removing the `'name'='target',` portion doesn't change anything.
Eventually I solved the immediate problem by removing the single quotes from `'name'` in the WHERE clause only, which is confounding.
```
UPDATE data SET 'name'='target', 'number'=3 WHERE name='target';
```
Does anyone know why this happens, or how I can work the quotes back in to protect against a hypothetical column name with a space in it? | You were testing whether the string `'name'` was equal to the string `'target'`.
You should use double quotes for column-names:
```
UPDATE data SET "name"='target', "number"=3 WHERE "name"='target';
```
Reference: <https://www.sqlite.org/lang_keywords.html> | Use single quotes for string literals i.e. the value you want to assign to a column. Use double quotes for identifiers i.e. the names of columns or tables. See <http://sqlite.org/faq.html#q24>
So for your example it should read
```
UPDATE data SET "name"='target', "number"=2 WHERE "name"='target';
``` | SQLite UPDATE statement is doing nothing if column is quoted | [
"",
"sql",
"sqlite",
""
] |
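The literal-versus-identifier behaviour described above is easy to reproduce from Python's sqlite3 module; the sample table mirrors the one in the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data (name TEXT, number INTEGER);
INSERT INTO data VALUES ('target', 5), ('cruft', 7);
""")

# Single quotes make 'name' a string literal: 'name' = 'target' is always false,
# so the UPDATE silently touches zero rows.
cur = conn.execute("UPDATE data SET number = 2 WHERE 'name' = 'target'")
no_change = cur.rowcount

# Double quotes make "name" an identifier, so the row is actually matched.
cur = conn.execute('UPDATE data SET number = 2 WHERE "name" = \'target\'')
changed = cur.rowcount

value = conn.execute("SELECT number FROM data WHERE name = 'target'").fetchone()[0]
```

This reproduces the question's symptom exactly: no error, no rows updated, until the column name is double-quoted (or left unquoted).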
Below I have sql select to retrieve values from a table. I want to retrieve the values from tableA regardless of whether or not there are matching rows in tableB. The below gives me both non-null values and null values. How do I filter out the null values if non-null rows exist, but otherwise keep the null values?
```
SELECT a.* FROM
(
SELECT
id,
col1,
coll2
FROM tableA a LEFT OUTER JOIN tableB b ON b.col1=a.col1 and b.col2='value'
WHERE a.id= @id
AND a.col2= @arg
) AS a
ORDER BY col1 ASC
``` | You can do this by counting the number of matches using a window function. Then, either return all rows in `A` if there are no matching `B` rows, or only return the rows that do match:
```
select id, col1, col2
from (SELECT a.id, a.col1, a.coll2, b.id as b_id,
count(b.id) over () as numbs
FROM tableA a LEFT OUTER JOIN tableB b ON b.col1=a.col1 and b.col2='value'
WHERE a.id = @id AND a.col2= @arg
) ab
where numbs = 0 or b_id is not null;
``` | Filter them out in WHERE clause
```
SELECT
id,
col1,
coll2
FROM tableA a LEFT OUTER JOIN tableB b ON b.col1=a.col1 and b.col2='value'
WHERE a.id= @id
AND a.col2= @arg
AND A.Col1 IS NOT NULL -- HERE
) AS a
ORDER BY col1 ASC
``` | Filtering out null values only if they exist in SQL select | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
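The window-function approach can be checked with sqlite3 (SQLite 3.25+ supports window functions). The tables and the literal filter values below are invented stand-ins for the question's `@id`/`@arg` parameters. One point worth noting: the outer `WHERE` can only see columns the derived table exposes, so `b.id` has to be selected out (here as `b_id`) for the match test to work.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (id INTEGER, col1 TEXT, col2 TEXT);
CREATE TABLE tableB (id INTEGER, col1 TEXT, col2 TEXT);
INSERT INTO tableA VALUES (1, 'x', 'arg'), (1, 'y', 'arg');
INSERT INTO tableB VALUES (10, 'x', 'value');
""")

query = """
    SELECT id, col1 FROM (
        SELECT a.id, a.col1, b.id AS b_id,
               COUNT(b.id) OVER () AS numbs
        FROM tableA a LEFT OUTER JOIN tableB b
             ON b.col1 = a.col1 AND b.col2 = 'value'
        WHERE a.id = 1 AND a.col2 = 'arg'
    ) ab
    WHERE numbs = 0 OR b_id IS NOT NULL
    ORDER BY col1
"""

with_match = conn.execute(query).fetchall()     # a match exists: keep only matching rows
conn.execute("DELETE FROM tableB")
without_match = conn.execute(query).fetchall()  # no matches at all: keep every row
```

This shows both requested behaviours: nulls are filtered out when non-null rows exist, but kept when the join finds nothing.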
I'm trying to get some records from a query in SQL Server 2008.
The way the table is stored is like this:
```
P_Id Activity_Id
-----------------------
1234 53
1234 510
4567 65
4567 510
3456 53
3456 540
5678 53
5678 510
```
Now what I need to get is which P\_Id booked Activity 53 & 510 or 65 & 510
If I do something like this, it gives me no records
```
SELECT P_Id FROM ActivitiesPerPerson WHERE Activity_Id = 53 AND Activity_Id = 510
```
If I do this
```
SELECT P_Id FROM ActivitiesPerPerson WHERE Activity_Id IN (53, 65, 510)
```
It gives me some results, but it gives me people that booked activity 53 only, together with the rest.
I need to get the person id that booked 53 & 510 together and the person id that booked 65 & 510 together. All on the same result if possible.
I've got an sql fiddle with this
<http://sqlfiddle.com/#!3/b4c11/2>
Thanks in advance | You would scan the table once and group by P\_Id. Having done so, you need an aggregate to tell you whether there is an Activity\_Id x for that P\_Id. You can simply sum with a case clause for that. Then take only those P\_Id matching your criteria:
```
SELECT P_Id
FROM ActivitiesPerPerson
WHERE Activity_Id IN (53, 65, 510)
GROUP BY p_Id
HAVING
(
SUM( CASE WHEN Activity_Id = 53 THEN 1 ELSE 0 END ) > 0
AND
SUM( CASE WHEN Activity_Id = 510 THEN 1 ELSE 0 END ) > 0
)
OR
(
SUM( CASE WHEN Activity_Id = 65 THEN 1 ELSE 0 END ) > 0
AND
SUM( CASE WHEN Activity_Id = 510 THEN 1 ELSE 0 END ) > 0
);
```
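As a sanity check, the same conditional-aggregation pattern can be run against the question's sample data (a Python/SQLite sketch; the `SUM(CASE ...)`/`HAVING` syntax is identical here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ActivitiesPerPerson(P_Id INT, Activity_Id INT)")
con.executemany("INSERT INTO ActivitiesPerPerson VALUES (?, ?)",
                [(1234, 53), (1234, 510), (4567, 65), (4567, 510),
                 (3456, 53), (3456, 540), (5678, 53), (5678, 510)])

# flag each wanted activity with SUM(CASE ...), then test the flags in HAVING
rows = con.execute("""
    SELECT P_Id
    FROM ActivitiesPerPerson
    WHERE Activity_Id IN (53, 65, 510)
    GROUP BY P_Id
    HAVING (SUM(CASE WHEN Activity_Id = 53 THEN 1 ELSE 0 END) > 0
            AND SUM(CASE WHEN Activity_Id = 510 THEN 1 ELSE 0 END) > 0)
        OR (SUM(CASE WHEN Activity_Id = 65 THEN 1 ELSE 0 END) > 0
            AND SUM(CASE WHEN Activity_Id = 510 THEN 1 ELSE 0 END) > 0)
    ORDER BY P_Id
""").fetchall()
print([r[0] for r in rows])  # [1234, 4567, 5678]
```

Note that 3456 is correctly excluded: its activity 540 is filtered out by the `WHERE`, and 53 alone satisfies neither pair.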
See <http://sqlfiddle.com/#!3/b4c11/38>. | Interpreting your question as 'find all persons who have done ALL activities in a given set', you can do like so:
```
SELECT P_Id, COUNT(*)
FROM ActivitiesPerPerson
WHERE Activity_Id IN (50, 510, 800)
GROUP BY P_Id
HAVING COUNT(*) = 3;
```
(Adjusted for your fiddle data. This assumes that `ActivitiesPerPerson` is unique per `(PersonId, ActivityID)` pair - i.e. that the same person can't do the same activity more than once.) You'll also need to adjust the `count = x` every time you change the number of criteria.
If you need to exclude the possibility of duplicates of the same `(person, activity)` combination giving false positives, distinct them like so:
```
SELECT P_Id, COUNT(*)
FROM
(
SELECT DISTINCT P_Id, Activity_ID
FROM ActivitiesPerPerson
WHERE Activity_Id IN (50, 510, 800)
) X
GROUP BY P_Id
HAVING COUNT(*) = 3;
```
The `Activities` criteria set can be expressed as a table variable or table valued parameter - this helps with maintainablity of this approach (notably the count = x variation). [Another Fiddle](http://sqlfiddle.com/#!3/0d537/1) | Getting single records from multiple rows by id in one column | [
"",
"sql",
"sql-server",
"database",
"resultset",
""
] |
I am not quite understanding the `cast` function and `decimal` here. What does `decimal(7,2)` actually mean (the first and the second argument of the function)?
Why do I need to `cast/convert` my floating point value to decimal here?
```
mysql> SELECT productCode, MAX(price), MIN(price),
CAST(AVG(price) AS DECIMAL(7,2)) AS `Average`,
CAST(STD(price) AS DECIMAL(7,2)) AS `Std Dev`,
SUM(quantity)
FROM products
GROUP BY productCode;
+-------------+------------+------------+---------+---------+---------------+
| productCode | MAX(price) | MIN(price) | Average | Std Dev | SUM(quantity) |
+-------------+------------+------------+---------+---------+---------------+
| PEC | 0.49 | 0.48 | 0.49 | 0.01 | 18000 |
| PEN | 1.25 | 1.23 | 1.24 | 0.01 | 15000 |
+-------------+------------+------------+---------+---------+---------------+
```
Below is an SQL fiddle for the same:
<http://sqlfiddle.com/#!2/1ed51b/1/0>
My questions again, repeated:
1. What does `decimal(7,2)` actually mean (the first and the second argument of the function)?
2. Why do I need to `cast/convert` my floating point value to decimal here? | `DECIMAL(7,2)` means a number with a fixed decimal point, having 7 digits altogether, with 2 of them to the right of the decimal point. (So, 5 left, 2 right.)
You don't *need* to cast unless you want the behavior of DECIMAL types (typically, fixed point) rather than the behavior of FLOAT types (useful approximations with a relatively unpredictable number of digits right of the decimal point). For example, your average for product code "PEC" *with* the cast is 33333.65; *without* the cast it's 33333.653333.
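A rough Python analogy for what the cast does - the `decimal` module's `quantize` plays the role of `DECIMAL(7,2)`'s two fractional digits (MySQL's exact rounding mode may differ; this just illustrates fixed-point behavior):

```python
from decimal import Decimal, ROUND_HALF_UP

raw_avg = 33333.653333  # what AVG(price) might return as a float
fixed = Decimal(str(raw_avg)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(fixed)  # 33333.65
```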
If you're routinely using *all* the digits, you should probably increase the total number of digits in the cast. Say, to something like `DECIMAL (14,2)`. | Why? Because if the intention is just to display the values, then casting to a DECIMAL with two decimal places will make it easier to read. If you take out the CASTs you will be shown values like `AVERAGE 33333.653333 STD DEV 47140.218734`. | casting a floating to decimal in mysql | [
"mysql",
"sql"
] |
Hi, I'm new to SQL; I only know simple queries.
My question: is it possible to `SELECT the skipped check numbers`? E.g. given `EX2001, EX2002, EX2004`,
the `select result will show EX2003`.
Thanks in advance, and sorry for my English.
You can answer with just the algorithm; I'll try to implement it in SQL.
example:
SELECT \* FROM SETTLEMENT WHERE checkno not in (between ex2001 and ex2900)
Is it possible like this? I'm using MS SQL 2008. | ```
create table sequence(st varchar(50))
insert into sequence values('EX2001');
insert into sequence values('EX2002');
insert into sequence values('EX2004');
insert into sequence values('EX2005');
insert into sequence values('EX2008');
```
Assuming your original table name is **sequence** with only one field, you can modify as per your needs
Try below
```
DECLARE @all TABLE
(
st varchar(20)
)
declare @start int
declare @end int
declare @str varchar(20)
set @start=2000 -- define starting point
set @end=2010   -- define end point
while(@start<@end)
BEGIN
SET @start=@start+1
set @str='EX'+cast(@start as varchar(20))
INSERT INTO @all VALUES (''+@str+'')
END
SELECT * from @all
except
select * from sequence
```
output
```
st
EX2003
EX2006
EX2007
EX2009
EX2010
``` | If you need hard-coded values in the query, it can be done like this (since you did not post any code, here is a simple query). You can use `IN()` or `NOT IN()` in SQL:
```
select * from table where check_number not in ('EX2001','EX2002','EX2004' );
``` | SQL query to know skipped number | [
"sql",
"sql-server"
] |
I have the following table
```
PnlId LineTotalisationId Designation Totalisation
1 A Gross Fees Formule A01+A02+A03+A04+A05
2 A01 GF1 Comptes B01+B02+B03+B04+B05
3 A02 GF2 Comptes C01+C02+C03+C04+C05
4 A03 GF3 Comptes 99991
5 A04 GF4 Comptes 99996
6 A05 GF5 Comptes 999995
14 B1 Perm Comptes 12+14+25
```
I am looking to get the following: when Designation = 'Formule', the Totalisation column holds a formula whose parts are LineTotalisationId names, so I should concatenate the Totalisation values of all the referenced rows; when Totalisation contains plain numbers I simply keep the row as it is.
I tried the following code:
```
SELECT Ref.*,
CASE
WHEN Charindex('+', Ref.totalisation) > 0 THEN '+'
WHEN Charindex('/', Ref.totalisation) > 0 THEN '/'
WHEN Charindex('*', Ref.totalisation) > 0 THEN '*'
WHEN Charindex('*', Ref.totalisation) > 0 THEN '-'
END AS Operator
INTO ##ttresults
FROM [dbo].[pnlreference] Ref
WHERE [typetotalisation] = 'Formule'
AND totalisation <> ''
AND designation NOT LIKE '%[0-9]%'
AND designation != ''
SELECT split.linetotalisationid AS PNLParentId,
NULL AS Totalisation
INTO ##tempresults
FROM (SELECT tbl.designation,
tbl.linetotalisationid,
tbl.typetotalisation,
tbl.pnlid,
tbl.totalisation,
Cast('<t>'
+ Replace(tbl.totalisation, tbl.operator, '</t><t>')
+ '</t>' AS XML) x,
tbl.operator
FROM ##ttresults AS tbl) split
CROSS apply x.nodes('/t') t(c)
INNER JOIN [dbo].[pnlreference] Ref
ON Ref.linetotalisationid = t.c.value('.', 'nvarchar(255)')
WHERE Ref.designation LIKE '%[0-9]%'
OR Ref.designation = ''
GROUP BY split.linetotalisationid ;
```
The code above returns this result:
```
PNLParentId
A
```
Is there a way to modify it to get the totalisation as described? | If your sample data is representative, you can try the SQL below.
Assuming `Panel` is your table name and the `Line` field has only one leading letter.
```
Declare @sql nvarchar(max);
;
set @sql =
'with cte as
(
select
case
when patindex(''%[A-Z]%'',Line) > 0
then
substring(line,patindex(''%[A-Z]%'',Line),1)
end as LineId
,Totalisation
from panel
where line in (' +''''+ (select replace(a.totalisation,'+',''',''') from Panel a where Designation='formule' ) + '''' +')
union all
select
Line as LineId,
Totalisation
from panel
where line not in (' +''''+ (select replace(a.totalisation,'+',''',''') from Panel a where Designation='formule' ) + '''' +')
and Designation <> ''Formule''
'
+ ')
select
distinct
p.pnlId, p.Line, p.TotalisationId--, p.Designation ,P.Totalisation
,LineId, LTRIM(substring(stuff
(
(
select '' | '' + c2.Totalisation from cte c2 where c.LineId = c2.LineId for xml path('''')
)
,1,0,''''
),3,len(stuff
(
(
select '' | '' + c2.Totalisation from cte c2 where c.LineId = c2.LineId for xml path('''')
)
,1,0,''''
)))
) as Totalisation
from cte c
right join panel p on c.LineId = p.Line
where c.Totalisation is not null
'
;
exec(@sql)
/*
RESULT
pnlId Line TotalisationId LineId Totalisation
----------- ----- -------------- ------ --------------------------------------
1 A Gross Fees A 99999 | 99998 | 99991 | 99996 | 999995
14 B1 Perm B1 12+14+25
*/
```
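The core move in both answers is "expand a Formule row into the Totalisation values of the rows it references, joined with a separator". A minimal sketch of that idea (Python/SQLite, with SQLite's `group_concat` standing in for the `FOR XML PATH` trick; table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE panel(Line TEXT, Designation TEXT, Totalisation TEXT);
INSERT INTO panel VALUES
  ('A',   'Formule', 'A01+A02+A03'),
  ('A01', 'Comptes', '99991'),
  ('A02', 'Comptes', '99996'),
  ('A03', 'Comptes', '999995');
""")

# expand the formula row: pull in every Comptes row whose Line appears as a
# '+'-delimited token of the formula (padding with '+' avoids prefix matches)
row = con.execute("""
    SELECT f.Line, group_concat(p.Totalisation, ' | ')
    FROM panel f
    JOIN panel p
      ON p.Designation = 'Comptes'
     AND instr('+' || f.Totalisation || '+', '+' || p.Line || '+') > 0
    WHERE f.Designation = 'Formule'
    GROUP BY f.Line
""").fetchone()
print(row)  # ('A', <the three totalisations joined by ' | '>)
```

`group_concat` does not guarantee an ordering, which is one reason the T-SQL answers above go through `FOR XML PATH` / an explicit sort.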
**UPDATED**
To use @roopesh's sample data, where B1 is also a 'Formule':
```
declare @formula nvarchar(max);
;
with cte as
(
select
distinct 1 as id, p.Totalisation
from
panel2 p
where Designation = 'formule'
)
select
distinct @formula = '''' + Replace(replace(substring(stuff
(
(
select ',' + c2.Totalisation from cte c2 where c.id = c2.id for xml path('')
)
,1,0,''
),2,len(stuff
(
(
select ',' + c2.Totalisation from cte c2 where c.id = c2.id for xml path('')
)
,1,0,''
))),',',''','''),'+',''',''') + ''''
from cte c
;
Declare @sql nvarchar(max);
;
set @sql =
'
;with cte as
(
select
case
when patindex(''%[A-Z]%'',Line) > 0
then
substring(line,patindex(''%[A-Z]%'',Line),1)
end as LineId
,Totalisation
from panel2
where line in (' + @formula +')
union all
select
Line as LineId,
Totalisation
from panel2
where line not in (' + @formula +')
and Designation <> ''Formule''
'
+ ')
select
distinct
p.pnlId, p.Line, p.TotalisationId--, p.Designation , p.totalisation
,LineId, Case when c.totalisation is null and p.designation=''Formule'' then p.totalisation
else
LTRIM(substring(stuff
(
(
select '' | '' + c2.Totalisation from cte c2 where c.LineId = c2.LineId for xml path('''')
)
,1,0,''''
),3,len(stuff
(
(
select '' | '' + c2.Totalisation from cte c2 where c.LineId = c2.LineId for xml path('''')
)
,1,0,''''
)))
)
end as Totalisation
from cte c
right join panel2 p on c.LineId = p.Line
where p.Designation = ''Formule''
'
;
exec(@sql)
``` | This stored procedure solves your problem using a cursor. There may be a way to remove the cursor, but I have not found one yet, so this is the solution I arrived at.
```
CREATE Procedure [dbo].[spGetResult]
As
Begin
declare @curPNL cursor
declare @pnlid int
declare @Line varchar(10), @TotalisationId varchar(20), @Totalisation varchar(50)
declare @spresult table(PNLId int, Line varchar(10), TotalisationId varchar(20), result varchar(4000));
--declare the cursor
set @curPNL = cursor
for select PnlId, Line, TotalisationId, totalisation
from PNLTable where designation = 'Formule'
open @curPNL
Fetch Next From @curPNL into @pnlId, @Line, @TotalisationId, @Totalisation
While @@FETCH_STATUS = 0
Begin
declare @nsql nvarchar(4000);
declare @table table(tname varchar(50));
declare @result varchar(4000)
delete from @table
--get the totalisation data for specific column
set @nsql = 'select totalisation from PNLTable Where Line in (''' + replace(@Totalisation,'+',''',''') + ''')';
print 'Calling child'
insert into @table
exec(@nsql);
set @result = '';
if not exists (select 1 from @table)
Begin
set @result = replace(@Totalisation,'+','|')
End
else
Begin
--get the values of totalisation in a pipe separated string
select @result = case when @result = '' then '' else @result + '|' end + tname from @table;
End
--insert the values in the temporary table
insert into @spresult(PNLId, Line, TotalisationId, result)
select @pnlid, @Line, @TotalisationId, @result
Fetch Next From @curPNL into @pnlId, @Line, @TotalisationId, @Totalisation
End
close @curPNL
deallocate @curPNL
select * from @spresult;
End
```
The table structure was not entirely clear to me, so I used the following script to create the table and insert the data.
```
CREATE TABLE [dbo].[PNLTable](
[PnlId] [int] NOT NULL,
[Line] [varchar](10) NULL,
[TotalisationId] [varchar](20) NULL,
[Designation] [varchar](20) NULL,
[Totalisation] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
[PnlId] ASC
)
)
```
--insert data
```
INSERT [PNLTable]
([PnlId], [Line], [TotalisationId], [Designation], [Totalisation])
VALUES (1, N'A', N'Gross Fees', N'Formule', N'A01+A02+A03+A04+A05'), (2, N'A01', N'GF1', N'Comptes', N'99999')
,(3, N'A02', N'GF2', N'Comptes', N'99998'), (4, N'A03', N'GF3', N'Comptes', N'99991'), (5, N'A04', N'GF4', N'Comptes', N'99996')
, (6, N'A05', N'GF5', N'Comptes', N'999995'), (14, N'B1', N'Perm', N'Formule', N'12+14+25')
``` | Dynamically evaluate an expression stored in a table column | [
"sql",
"sql-server"
] |
I am working on a development server named `SQLDEV01`; the database name is `University` and the table name is `cse.students`. While working on `cse.students` I lost some rows, so I need to get the exact data from the production server. The production server name is `SQLPROD01`.
How can I query to get the production data without using SSIS? | There are a few ways to do this. One that is pretty much failsafe is the following:
1. In Microsoft SQL Management Studio, expand the server node for the source server, then expand the Databases node.
2. Right-click the source database and select Tasks -> Generate Scripts. When the dialog pops up, click Next.
3. Select the "Select specific database objects" radio button.
4. Expand the Tables node and check the table you want to copy, then click Next at the bottom.
5. Click the Advanced button. In the options that pop up, for "Script DROP and CREATE" select "Script DROP and CREATE" on the right, and for "Type of data to script" select "Schema and data" on the right. Click OK.
6. Back on the main dialog, select "Save to Clipboard" or "Save to new query window". I usually select clipboard because I am usually going to a different server, but select what works best for you. Click Next.
7. Click Next again and the script will generate according to your selections.
8. Now just run that script on the destination database.
 | Set up a linked server from the development server to the production server before doing the following from SSMS. All code should be executed on the development server.
<http://technet.microsoft.com/en-us/library/ms188279.aspx>
```
-- On Development server [SQLDEV01]
TRUNCATE TABLE [University].[cse].[students];
GO
-- Use link server to move data
INSERT INTO
[University].[cse].[students]
SELECT
*
FROM
[SQLPROD01].[University].[cse].[students]
GO
```
This assumes there are no identity columns on the target.
If you do have identity columns, turn on/off allow inserts before/after executing the above insert.
```
-- Before Insert, execute this statement
SET IDENTITY_INSERT [University].[cse].[students] ON
GO
-- After Insert, execute this statement
SET IDENTITY_INSERT [University].[cse].[students] OFF
GO
```
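The qualifying pattern - a `[server].[database].[schema].[table]` name on the source side of an `INSERT ... SELECT` - is the whole trick. As a conceptual sketch only, SQLite's `ATTACH` makes a second database addressable by name in the same way (SQLite has no linked servers, so this is an analogy, not the same feature):

```python
import sqlite3

dev = sqlite3.connect(":memory:")                  # stands in for SQLDEV01
dev.execute("ATTACH DATABASE ':memory:' AS prod")  # stands in for the linked server
dev.execute("CREATE TABLE students(id INT, name TEXT)")
dev.execute("CREATE TABLE prod.students(id INT, name TEXT)")
dev.executemany("INSERT INTO prod.students VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob")])

# same shape as the linked-server copy: INSERT local, SELECT from qualified source
dev.execute("INSERT INTO students SELECT * FROM prod.students")
print(dev.execute("SELECT COUNT(*) FROM students").fetchone()[0])  # 2
```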
You can also create an ad-hoc connection using the OPENROWSET command.
<http://technet.microsoft.com/en-us/library/ms190312.aspx>
```
INSERT INTO
[University].[cse].[students]
SELECT
PRD.*
FROM
OPENROWSET('SQLNCLI', 'Server=SQLPROD01;Trusted_Connection=yes;',
'SELECT * FROM [University].[cse].[students]') AS PRD;
```
I did not check the syntax for your environment, please check.
If ad hoc queries are set off (0), have the DBA turn them on (1) temporarily. Execute the following on the production server.
<http://msdn.microsoft.com/en-us/library/ms187569.aspx>
```
-- Show all settings
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- What is the current setting?
sp_configure
GO
-- Allow ad hoc queries
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO
```
If (s)he does not allow that, you are stuck with a physical linked server.
If you are using **delegation** - choice #3, you have to watch out for the double hop issue.
<http://blogs.msdn.com/b/sql_protocols/archive/2006/08/10/694657.aspx>
I would configure the linked server with a specific account on production that has rights to select the data. See choice #4.
 | How to copy table from Production to Development? | [
"sql",
"sql-server",
"t-sql",
"sql-server-2012"
] |
I am trying to run this SQL to update a table. The issue is that I need a join in order to update the correct record. This is what I have come up with so far:
```
Update UserProfile
set UserProfile.PropertyValue = 'Test'
join ProfilePropertyDefinition
on ProfilePropertyDefinition.Id = UserProfile.PropertyDefinitionId
where UserProfile.UserId = 11
and ProfilePropertyDefinition.PortalID = 0
and ProfilePropertyDefinition.PropertyName = 'Address1'
and ProfilePropertyDefinition.PropertyCategory = 'Address'
```
This is the message I get:
> Incorrect syntax near the keyword 'join'.
```
Update UserProfile
set UserProfile.PropertyValue = 'Test'
from UserProfile
join ProfilePropertyDefinition
on ProfilePropertyDefinition.Id = UserProfile.PropertyDefinitionId
where UserProfile.UserId = 11
and ProfilePropertyDefinition.PortalID = 0
and ProfilePropertyDefinition.PropertyName = 'Address1'
and ProfilePropertyDefinition.PropertyCategory = 'Address'
``` | ```
Update UserProfile
set UserProfile.PropertyValue = 'Test'
from UserProfile
join ProfilePropertyDefinition
on ProfilePropertyDefinition.Id = UserProfile.PropertyDefinitionId
where UserProfile.UserId = 11
and ProfilePropertyDefinition.PortalID = 0
and ProfilePropertyDefinition.PropertyName = 'Address1'
and ProfilePropertyDefinition.PropertyCategory = 'Address'
```
You have to repeat the table to update in the `from` clause - even if this syntax looks a little strange. | How to write Update with join in where clause | [
"sql",
"sql-server-2008-r2"
] |
I have Table with two cols ID\_COL and VALUE\_COL.
```
ID_COL VALUE_OUT
====== =========
15 10
16 10
16 11
17 10
17 11
17 12
18 10
18 12
19 11
20 11
20 12
21 12
```
This table is populated with some combinations based on our business rules. Now I want to find the output as follows.
The input from the application is 11,12.
We need to search the above table and find the ID\_COL. In this case I need to return the value 20 from ID\_COL (this is the one exact match of 11,12; the others have extra values).
It is not always two values to match; sometimes it might be a single value as well. If I pass 12, I need to return ID\_COL 21. | ```
select ID_COL
from table
group by ID_COL
having
count(0) = 2 and
count(case when VALUE_OUT in (11,12) then 1 end) = 2
``` | If I got your question correctly, you want to find the values in *ID\_COL* that satisfy both of the input values passed from the application for *VALUE\_OUT*.
You can use the below query for that :
```
select ID_COL from table where value_out in (11,12);
```
Hope I understood your question correctly.
Hope it helps,
Vishad | Query to find unique value for the combinations | [
"sql",
"oracle"
] |
My select statement is:
`SELECT date FROM user WHERE date BETWEEN '13/01/2011' AND '28/01/2014'`
The dates in my `user` table are stored in this format:
`2013-12-10 00:39:22`
So my `SELECT` statement therefore returns `0` records even though there are definitely matches in the table. Is there a way to format it so that it recognizes my date format and returns results in my format? | Try this
```
SELECT date FROM user
WHERE date
BETWEEN STR_TO_DATE('13/01/2011','%d/%m/%Y %h:%i:%s')
AND STR_TO_DATE('28/01/2014','%d/%m/%Y %h:%i:%s')
```
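For comparison, the same day/month/year parsing outside of MySQL - Python's `strptime` uses near-identical format tokens (`%d/%m/%Y`), which can help when sanity-checking the format string:

```python
from datetime import datetime

d = datetime.strptime("13/01/2011", "%d/%m/%Y")
print(d.date())  # 2011-01-13
```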
# [STR\_TO\_DATE](http://www.mysqltutorial.org/mysql-str_to_date/) | Why not using the function [str-to-date](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date)? | select fields using formatted date | [
"mysql",
"sql"
] |
I have a table which logs users each time they log in. The table is like this:
```
id | user | cmd | ts
1 | john | login | 2014-01-19 18:26:01
2 | mark | login | 2014-01-19 18:36:00
3 | john | login | 2014-01-19 18:40:13
4 | dave | login | 2014-01-19 18:41:58
5 | john | login | 2014-01-19 18:50:59
6 | mark | login | 2014-01-19 19:03:23
7 | dave | login | 2014-01-19 19:15:13
8 | mark | login | 2014-01-19 19:26:58
```
I need to select only the latest login per user from this table, so the output I expect is:
```
id | user | cmd | ts
5 | john | login | 2014-01-19 18:50:59
7 | dave | login | 2014-01-19 19:15:13
8 | mark | login | 2014-01-19 19:26:58
```
This is what I tried:
```
SELECT * FROM `logins` group by user having count(*) <= 1 order by ts
```
I got zero rows. | ```
SELECT l.id, l.user, l.cmd, l.ts
FROM `logins` l
JOIN (SELECT user, MAX(ts) AS ts FROM `logins` GROUP BY user) m
  ON m.user = l.user AND m.ts = l.ts
``` | Your order is missing a direction. Try adding either ASC or DESC. In your case it would be DESC
`order by ts DESC` | select top records from multiple entries | [
"mysql",
"sql",
"greatest-n-per-group"
] |
```
SELECT bau.username, cc.key
FROM ba_users bau
INNER JOIN billings ba
ON ba.id = bau.billing_account_id
INNER JOIN accc_once_only_cc_map acooc
ON acooc.billing_account_id = ba.id
INNER JOIN account_components_map acm
on acooc.account_component_id = acm.account_component_id
INNER JOIN cuoffers cuo
ON cuo.id = acm.customized_offer_id
INNER JOIN charges cc
ON cc.key = acooc.charge_cluster_key;
```
result is for example
```
USERNAME KEY
1182925980_0 RT_FEE
1182926277_0 RT_FEE
1182926574_0 RT_FEE
1182924785_0 RT_FEE
1182925980_0 RT_FEE2
1182923514_0 RT_FEE
1182932481_0 RT_FEE
1182936581_0 RT_FEE
1182941631_0 RT_FEE
```
Now I would like to find which users have more than one key, and print out each such user with their keys,
like:
```
1182925980_0
```
has keys
```
RT_FEE and RT_FEE2
```
How do I do this with PL/SQL? | The first part (finding users with > 1 key):
```
WITH CTE AS
(
SELECT bau.username, cc.key
FROM ba_users bau
INNER JOIN billings ba
ON ba.id = bau.billing_account_id
INNER JOIN accc_once_only_cc_map acooc
ON acooc.billing_account_id = ba.id
INNER JOIN account_components_map acm
on acooc.account_component_id = acm.account_component_id
INNER JOIN cuoffers cuo
ON cuo.id = acm.customized_offer_id
INNER JOIN charges cc
ON cc.key = acooc.charge_cluster_key
)
SELECT username, COUNT(*)
FROM CTE
GROUP BY username
HAVING COUNT(*) > 1;
```
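Run against the sample rows, this first part picks out exactly the duplicated user (a Python/SQLite sketch of the grouped result, with `group_concat` previewing the key aggregation the second part needs; the `user_keys` table stands in for the five-way join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_keys(username TEXT, key TEXT)")
con.executemany("INSERT INTO user_keys VALUES (?, ?)",
                [("1182925980_0", "RT_FEE"), ("1182925980_0", "RT_FEE2"),
                 ("1182926277_0", "RT_FEE"), ("1182926574_0", "RT_FEE")])

rows = con.execute("""
    SELECT username, group_concat(key, ', ')
    FROM user_keys
    GROUP BY username
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # only 1182925980_0 has two keys
```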
The second part of your query (showing a list of all keys projected in a single column) will depend on whether you can use [listagg](https://stackoverflow.com/questions/16771086/is-there-any-function-in-oracle-similar-like-group-concat-of-mysql) or not.
```
WITH CTE AS
(
SELECT bau.username, cc.key
FROM ba_users bau
INNER JOIN billings ba
ON ba.id = bau.billing_account_id
INNER JOIN accc_once_only_cc_map acooc
ON acooc.billing_account_id = ba.id
INNER JOIN account_components_map acm
on acooc.account_component_id = acm.account_component_id
INNER JOIN cuoffers cuo
ON cuo.id = acm.customized_offer_id
INNER JOIN charges cc
ON cc.key = acooc.charge_cluster_key
)
SELECT username, LISTAGG(key, ', ') WITHIN GROUP (ORDER BY key) the_keys
FROM CTE
GROUP BY username
HAVING COUNT(*) > 1;
``` | You can try something like this:
```
SELECT username,key,COUNT(*)
FROM ba_users
GROUP BY username,key
HAVING COUNT(*) > 1
```
This should display the username with more than one key, hope this helps. | How do I find user with more keys in a table in Oracle? | [
"sql",
"oracle",
"plsql"
] |
I have a simple table that is ordered by column ind and contains
```
create table #test (Ind int, Value int);
insert into #test (Ind, Value) values
(1, 3),
(2, 3),
(3, 3),
(4, 5),
(5, 6),
(6, 6),
(7, 3),
(8, 3);
```
Now I need a query that returns three columns: Ind, Value and FirstInd. FirstInd should be the first "Ind" value within each group of consecutive equal "Value" rows:
```
Ind Value FirstInd
1 3 1
2 3 1
3 3 1
4 5 4
5 6 5
6 6 5
7 3 7
8 3 7
```
I already tried several things, and as I'm using SQL Server 2012 I could also use "lag(Ind) over (Group by Ind)" for FirstInd, but in the end I didn't find a solution to what at first looked like a simple problem to me. | Try this:
```
select ind, value, min(ind) over (partition by value, grp) as FirstInd
from (select t.*, sum(GrpStart) over (order by Ind) as Grp
from (select t.*,
(case when lag(value) over (order by ind) = value then 0 else 1
end) as GrpStart
from t
) t
) t;
```
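Run against the sample rows, this gaps-and-islands query produces exactly the requested FirstInd (an SQLite sketch for illustration - SQLite 3.25+ supports the same `LAG`/`SUM ... OVER` window functions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test(Ind INTEGER, Value INTEGER)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(1, 3), (2, 3), (3, 3), (4, 5), (5, 6), (6, 6), (7, 3), (8, 3)])

rows = con.execute("""
    SELECT Ind, Value, MIN(Ind) OVER (PARTITION BY Value, Grp) AS FirstInd
    FROM (SELECT t.*, SUM(GrpStart) OVER (ORDER BY Ind) AS Grp
          FROM (SELECT t.*,
                       CASE WHEN LAG(Value) OVER (ORDER BY Ind) = Value
                            THEN 0 ELSE 1 END AS GrpStart
                FROM test t) t) t
    ORDER BY Ind
""").fetchall()
for r in rows:
    print(r)  # (1,3,1) (2,3,1) (3,3,1) (4,5,4) (5,6,5) (6,6,5) (7,3,7) (8,3,7)
```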
The idea is to find where groups of consecutive values start. You can do this by comparing the current `value` to the previous `value` using `lag()`. When they are different, a new group starts there. This is a flag on one row.
Then, for any given row, sum up these flag values on the previous rows. This produces a `grp` identifier for consecutive values. The outer query just finds the minimum ind in that group. | Please try..
```
SELECT MIN(IND) OVER(PARTITION BY Value ORDER BY Value) AS FirstInd, Ind, Value FROM #test
``` | how to repeat first value of a (ordered) column grouping by another column | [
"sql",
"sql-server-2012"
] |
I have a table which contains records of people by month and year.
The table has multiple columns, two of which are fldmonth & fldyear, containing the month number and the year respectively.
Now I want to fetch data between months of different years (e.g. 3-2012 to 6-2013).
I am using the following query, but I am not getting the proper records.
```
SELECT * FROM 'table' WHERE
user_id = 'id' AND
STR_TO_DATE(CONCAT('fldyear', 'fldmonth', '01'), '%Y%m%d') BETWEEN
STR_TO_DATE(CONCAT('2012', '3', '01'), '%Y%m%d') AND
STR_TO_DATE(CONCAT('2013', '6','01'), '%Y%m%d');
```
**Table Schema :**
```
user_id varchar(100), fldmonth smallint(2), fldyear mediumint(4)
```
(The table name & user id given here are just examples.)
Please help.
**Note:** I also used %c in the date format, because the months are in 1,2,..,12 format, but I am still getting an empty result set. | ```
SELECT * FROM tbl
WHERE USERID=1 and
STR_TO_DATE(CONCAT(fldyear,'-',LPAD(fldmonth,2,'00'),'-',LPAD(fldate,2,'00')), '%Y-%m-%d')
BETWEEN
STR_TO_DATE(CONCAT(2012,'-',LPAD(03,2,'00'),'-',LPAD(01,2,'00')), '%Y-%m-%d') AND
STR_TO_DATE(CONCAT(2013,'-',LPAD(06,2,'00'),'-',LPAD(01,2,'00')), '%Y-%m-%d');
```
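An alternative technique that avoids building date strings entirely is to compare on `fldyear * 100 + fldmonth`, which orders correctly with plain integer arithmetic (sketched here in Python/SQLite; the same expression works in MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(user_id TEXT, fldmonth INT, fldyear INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [("id", 2, 2012), ("id", 3, 2012), ("id", 12, 2012),
                 ("id", 6, 2013), ("id", 7, 2013)])

# 3-2012 .. 6-2013 becomes the integer range 201203 .. 201306
rows = con.execute("""
    SELECT fldyear, fldmonth FROM t
    WHERE user_id = 'id'
      AND fldyear * 100 + fldmonth BETWEEN 201203 AND 201306
    ORDER BY fldyear, fldmonth
""").fetchall()
print(rows)  # [(2012, 3), (2012, 12), (2013, 6)]
```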
# [Working Fiddle](http://sqlfiddle.com/#!2/4a277/51) | Remove the single quotes from the column names.
Try this:
```
SELECT *
FROM table
WHERE user_id = 'id' AND
STR_TO_DATE(CONCAT(fldyear, fldmonth, '01'), '%Y%c%d') BETWEEN '2012-03-01' AND '2013-06-01';
``` | select records between two months with year difference | [
"mysql",
"sql",
"select",
"between"
] |
I've got the following three tables defined:
```
table 'A':
-------------------
majorID | bigint (primary key)
-------------------
table 'B':
-------------------
majorID | bigint (foreign key to table 'A's majorID)
minorID | bigint (primary key)
totalSize | bigint
-------------------
table 'C':
-------------------
objectID | bigint (primary key)
majorID | bigint (foreign key to table 'A's majorID)
minorID | bigint (foreign key to table 'B's minorID)
startPoint | bigint
length | bigint
-------------------
```
What I'm looking to do is get a list of all rows in table 'B', but show how much space is left for each row.
The remaining space can be found by finding the highest "startPoint", adding the value of the "length" column for the row containing the highest "startPoint", then subtracting that combined value from the "totalSize" column in table 'B'.
I am currently able to achieve this using the following code:
```
create table #results (MinorID bigint, MajorID bigint, RemainingSpace bigint)
DECLARE @MinorID bigint
DECLARE @TotalSpace bigint
DECLARE @MajorID bigint
DECLARE cur CURSOR FOR
SELECT MinorID, MajorID, TotalSize FROM B
OPEN cur
FETCH NEXT FROM cur INTO @MinorID,@MajorID, @TotalSpace
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @UsedSize bigint
SELECT TOP 1 @UsedSize = StartPoint + [length] FROM C
WHERE MinorID = @MinorID AND MajorID = @MajorID
ORDER BY StartPoint DESC
INSERT INTO #results VALUES (@MinorID,@MajorID,@TotalSpace - @UsedSize)
FETCH NEXT FROM cur INTO @MinorID,@MajorID, @TotalSpace
END
CLOSE cur
DEALLOCATE cur
SELECT * FROM #results
drop table #results
```
The problem is that I expect these tables are going to get VERY large, and I realise that running a cursor over the tables probably isn't the fastest way to achieve what I want.
However, I'm struggling to find a better solution (Monday morning blues), and was hoping someone more awake / better at SQL than me can suggest a solution!
*note: The table designs are not "set in stone", so if the only solution is to denormalize the data so that table 'B' keeps a record of its "taken space", then I'm open to that...*
EDIT:
I went with a modified version of the accepted answer, as follows:
```
SELECT B.*, coalesce(C.StartPoint + C.Length,0) AS UsedSize
FROM TableB B
LEFT JOIN
(
SELECT *, DENSE_RANK() OVER(PARTITION BY C.MajorID, C.MinorID ORDER BY C.StartPoint DESC) AS Rank
FROM TableC C
) C
ON C.MajorID = B.MajorID
AND C.MinorID = B.MinorID
AND C.Rank = 1
``` | Maybe you can work with the [DENSE\_RANK](http://technet.microsoft.com/en-us/library/ms173825.aspx).
In this query I am joining table C with an extra column, Rank. This column is given the value 1 for the row with the highest startPoint. With the condition (AND C.Rank = 1) we extract only that row.
```
SELECT B.*, (C.StartPoint + C.Length) AS UsedSize
FROM TableB B
INNER JOIN
(
SELECT *, DENSE_RANK() OVER(PARTITION BY C.MajorID, C.MinorID ORDER BY C.StartPoint DESC) AS Rank
FROM TableC C
) C
ON C.MajorID = B.MajorID
AND C.MinorID = B.MinorID
AND C.Rank = 1
``` | ```
WITH UsedSpace AS
(
SELECT minorID, MAX(startPoint + length) AS used
FROM C
GROUP BY minorID
)
SELECT B.minorID, totalSize - COALESCE(UsedSpace.used, 0)
FROM B LEFT JOIN UsedSpace ON B.minorID = UsedSpace.minorID
``` | Best way to get single MAX value + another column value from SQL | [
"sql",
"sql-server",
"database"
] |
I have these 2 tables that I need the boat and image info from to show in the same sql/loop.
Is this possible?
With inner join?
```
$info1 = mysql_query(" SELECT image1 as image, boat1 as boat FROM all_images ");
$info2 = mysql_query(" SELECT image2 as image, boat2 as boat FROM all_boats_images ");
while($b = mysql_fetch_array($????????)){
echo $b['image'].$b['boat'];
}
``` | I think you want to use a UNION, since you commented that the tables are not related:
```
SELECT image1 as image, boat1 as boat FROM all_images
union all
SELECT image2 as image, boat2 as boat FROM all_boats_images
``` | Just prepend the database name to your table name;
i.e. if the database name is db1 and the table name is tb1, then:
```
SELECT * FROM db1.tb1;
``` | How can I get info from 2 tables into one sql query? | [
"mysql",
"sql",
"union",
"union-all"
] |
Please help me rewrite the query below.
We are getting performance issues due to it; the query uses sub-queries.
Please help me rewrite it without a sub-query.
The query is:
```
select
sum(a.order_count)
from
(select
count(cart_id) as order_count, user_id
from
carts_archive
where
order_date > '2013-01-21 00:00:01'
and user_id is not null
group by user_id
order by order_count desc) a
where
a.order_count > 1;
```
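One observation before any deeper rewrite: the `ORDER BY order_count DESC` inside the derived table cannot change the outer `SUM`, yet it forces the filesort visible in the plan below, and the outer `> 1` filter can move into `HAVING`. A sketch of the equivalent, slimmer query (Python/SQLite for illustration; it still uses one derived table, just without the sort):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE carts_archive(cart_id INT, user_id INT, order_date TEXT)")
con.executemany("INSERT INTO carts_archive VALUES (?, ?, ?)",
                [(1, 10, '2013-02-01'), (2, 10, '2013-02-02'),  # user 10: 2 orders
                 (3, 20, '2013-02-03'),                         # user 20: 1 order
                 (4, None, '2013-02-04')])                      # anonymous cart

total = con.execute("""
    SELECT SUM(order_count)
    FROM (SELECT COUNT(cart_id) AS order_count
          FROM carts_archive
          WHERE order_date > '2013-01-21 00:00:01' AND user_id IS NOT NULL
          GROUP BY user_id
          HAVING COUNT(cart_id) > 1)
""").fetchone()[0]
print(total)  # 2: only user 10's two orders are counted
```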
Even retrieving the explain plan is slow (it took almost 3 minutes).
Explain plan:
```
+----+-------------+---------------+-------+--------------------------------------------+------------------------+---------+------+---------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------+-------+--------------------------------------------+------------------------+---------+------+---------+----------------------------------------------+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 1436430 | Using where |
| 2 | DERIVED | carts_archive | range | nk_cart_ach_user_id,pk_cart_ach_order_date | pk_cart_ach_order_date | 9 | NULL | 3552006 | Using where; Using temporary; Using filesort |
+----+-------------+---------------+-------+--------------------------------------------+------------------------+---------+------+---------+----------------------------------------------+
2 rows in set (2 min 50.33 sec)
```
Table structure:
```
mysql> show create table carts_archive\G
*************************** 1. row ***************************
Table: carts_archive
Create Table: CREATE TABLE `carts_archive` (
`row_mod` datetime DEFAULT NULL,
`row_create` datetime DEFAULT NULL,
`order_date` datetime DEFAULT NULL,
`billing_zip` varchar(10) COLLATE latin1_bin DEFAULT NULL,
`billing_address` varchar(200) COLLATE latin1_bin DEFAULT NULL,
`billing_home_phone` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`billing_email` varchar(100) COLLATE latin1_bin DEFAULT NULL,
`status` varchar(30) COLLATE latin1_bin DEFAULT NULL,
`website_id` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`discount_program` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`credit_card_exp_year` varchar(4) COLLATE latin1_bin DEFAULT NULL,
`billing_country` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`add_client_flag` varchar(1) COLLATE latin1_bin DEFAULT NULL,
`billing_last_name` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`billing_work_phone` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`total_charge` float(10,2) DEFAULT NULL,
`add_newsletter_flag` varchar(1) COLLATE latin1_bin DEFAULT NULL,
`cart_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
`discount_first_name` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`markcode` int(11) DEFAULT NULL,
`discount_account_junk` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`gift_cert_junk` varchar(30) COLLATE latin1_bin DEFAULT NULL,
`credit_card_name` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`billing_work_phone_ext` varchar(10) COLLATE latin1_bin DEFAULT NULL,
`billing_state` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`billing_first_name` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`order_id` varchar(30) COLLATE latin1_bin DEFAULT NULL,
`discount_last_name` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`credit_card_exp_month` varchar(2) COLLATE latin1_bin DEFAULT NULL,
`credit_card_number` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`billing_city` varchar(100) COLLATE latin1_bin DEFAULT NULL,
`credit_card_type` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`discount_account` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`asnbuyer` varchar(70) COLLATE latin1_bin DEFAULT NULL,
`buyercookie` varchar(70) COLLATE latin1_bin DEFAULT NULL,
`expire_date` datetime DEFAULT NULL,
`security_string` varchar(255) COLLATE latin1_bin DEFAULT NULL,
`user_information` varchar(255) COLLATE latin1_bin DEFAULT NULL,
`track_id` varchar(255) COLLATE latin1_bin DEFAULT NULL,
`gift_cert` varchar(255) COLLATE latin1_bin DEFAULT NULL,
`billing_address2` varchar(100) COLLATE latin1_bin DEFAULT NULL,
`visited_signup` varchar(1) COLLATE latin1_bin DEFAULT NULL,
`paypal_email_junk` varchar(127) COLLATE latin1_bin DEFAULT NULL,
`used_saved_cc_flag` varchar(1) COLLATE latin1_bin DEFAULT NULL,
`paypal_auth_amount_junk` float DEFAULT NULL,
`bml_auth_code_junk` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`bml_approve_amount_junk` float DEFAULT NULL,
`bml_account_id_junk` varchar(30) COLLATE latin1_bin DEFAULT NULL,
`bml_order_number_junk` varchar(22) COLLATE latin1_bin DEFAULT NULL,
`alt_reference_num` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`alt_account_id` varchar(200) COLLATE latin1_bin DEFAULT NULL,
`alt_approval_amount` float DEFAULT NULL,
`alt_points_used` float DEFAULT NULL,
`alt_auth_id` varchar(30) COLLATE latin1_bin DEFAULT NULL,
`alt_redemption_rate` float DEFAULT NULL,
`cvv_response_code` varchar(50) COLLATE latin1_bin DEFAULT NULL,
`cvv_indicator_value` varchar(1) COLLATE latin1_bin DEFAULT NULL,
`language_id` varchar(10) COLLATE latin1_bin DEFAULT 'ENUS',
`gc_trans_id` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`gc_auth_date` datetime DEFAULT NULL,
`gc_pin` varchar(20) COLLATE latin1_bin DEFAULT NULL,
`gift_cert_value` float DEFAULT NULL,
UNIQUE KEY `pk_cart_ach_cart_id` (`cart_id`),
KEY `nk_cart_ach_user_id` (`user_id`),
KEY `pk_cart_ach_order_date` (`order_date`),
KEY `carts_archive_n1` (`row_mod`),
KEY `order_id_n1` (`order_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
1 row in set (0.00 sec)
``` | Try creating index as below and then see `EXPLAIN` plan
```
CREATE INDEX idx_order_date_user_id ON carts_archive(order_date,user_id);
``` | I don't see any use of having an ORDER BY clause in your subquery.
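As a quick sanity check of that aggregate logic, here is a tiny runnable sketch (SQLite via Python's `sqlite3` as a stand-in for MySQL; the rows are invented) that builds the suggested composite index and computes the same total with `HAVING` doing the `order_count > 1` filter:

```python
import sqlite3

# Tiny SQLite stand-in for the MySQL table (invented data; only the
# columns the query touches are reproduced).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carts_archive (cart_id INTEGER, user_id INTEGER, order_date TEXT)")
conn.executemany("INSERT INTO carts_archive VALUES (?, ?, ?)", [
    (1, 10, "2013-02-01 10:00:00"),
    (2, 10, "2013-02-02 10:00:00"),    # user 10: two orders -> counted
    (3, 11, "2013-02-03 10:00:00"),    # user 11: one order  -> filtered out
    (4, None, "2013-02-04 10:00:00"),  # NULL user_id        -> filtered out
    (5, 12, "2012-12-01 10:00:00"),    # before the cutoff   -> filtered out
])

# The composite index keeps the range scan and the grouping column together.
conn.execute("CREATE INDEX idx_order_date_user_id ON carts_archive(order_date, user_id)")

# Same result as the original query, with HAVING instead of an outer
# SELECT over the derived table.
total = conn.execute("""
    SELECT SUM(c) FROM (
        SELECT COUNT(cart_id) AS c
        FROM carts_archive
        WHERE order_date > '2013-01-21 00:00:01' AND user_id IS NOT NULL
        GROUP BY user_id
        HAVING COUNT(cart_id) > 1
    )
""").fetchone()[0]
print(total)
```

Only user 10 clears the `HAVING` filter here, so the sum is 2.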
Information about the number of records in the table and the indexes on it would help, as would the explain plan.
Also try the slightly modified SQL below:
```
select
sum(a.order_count)
from
(select
count(cart_id) as order_count, user_id
from
carts_archive
where
order_date > '2013-01-21 00:00:01'
and user_id is not null
group by user_id
having count(cart_id) > 1) a
``` | performance degraded due to query? | [
"",
"mysql",
"sql",
"indexing",
"query-performance",
""
] |
I have the following query and I can't for love nor money work out where the error is in my `UPDATE` statement. I only started playing with this last week, so I'm on a big learning curve!
```
Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
    'Declarations for calling on
Dim AnimalHouse As String
AnimalHouse = "TestText"
Dim AddressForAssingment As Integer
AddressForAssingment = 1
Dim IDCheckAssignment As Integer
IDCheckAssignment = 1
Dim CommandText As String = _
"UPDATE IOInformation.Description = @animalHouse, WHERE ID_number = @addrForAssn AND ID_Check = @Id;"
'Connection Information
Dim myConnection As New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + ProjectDirectory.Text)
Dim myCommand As New OleDbCommand(CommandText)
myCommand.Parameters.AddWithValue("@animalHouse", AnimalHouse)
myCommand.Parameters.AddWithValue("@addrForAssn", AddressForAssingment)
myCommand.Parameters.AddWithValue("@Id", IDCheckAssignment)
myCommand.Connection = myConnection
myConnection.Open()
myCommand.ExecuteNonQuery()
myConnection.Close()
End Sub
``` | You have a couple of errors in your syntax.
The correct syntax for an UPDATE query is
```
UPDATE table SET field1=value1, field2=value2, ...... WHERE keyfield=value
```
so you should write
```
Dim CommandText As String = _
"UPDATE IOInformation SET Description = @animalHouse " &
"WHERE ID_number = @addrForAssn AND ID_Check = @Id;"
```
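A minimal runnable check of the corrected statement shape (SQLite via Python's `sqlite3`, which uses `?` placeholders where OleDb uses named `@` parameters; the table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IOInformation (ID_number INTEGER, ID_Check INTEGER, Description TEXT)")
conn.executemany("INSERT INTO IOInformation VALUES (?, ?, ?)",
                 [(1, 1, "old"), (2, 1, "old")])

# UPDATE <table> SET <field> = <value> WHERE <key> = <value>
# -- SET between the table name and the field list, no comma before WHERE.
conn.execute(
    "UPDATE IOInformation SET Description = ? WHERE ID_number = ? AND ID_Check = ?",
    ("TestText", 1, 1),
)

rows = conn.execute(
    "SELECT Description FROM IOInformation ORDER BY ID_number").fetchall()
print(rows)
```

Only the row matching both key conditions is changed.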
Separate the table name from the field name, add a SET before the first fieldname and remove the comma before the WHERE | You should use the following syntax for update: `UPDATE tablename SET columnName='value', columnName1='adsf' WHERE ...`, with no ',' before the WHERE. `UPDATE IOInformation SET Description = ... WHERE ID_number = ... AND ID_Check = ...;` | Syntax Error in UPDATE statement in VB.net | [
"",
"sql",
"vb.net",
"syntax",
"oledb",
""
] |
I have a huge problem and I don't know what is going on. This is my
**`table1`**
```
+-----+-----------------------+--------------+
| id | invoice_number | invoice_date |
+-----+-----------------------+--------------+
| 12 | 12536801244 | 2009-09-23 |
| 38 | 12585302890 | 2009-11-18 |
| 37 | 12584309829 | 2009-11-16 |
| 123 | 12627605146 | 2010-01-06 |
| 191 | 12663105176 | 2010-02-16 |
+-----+-----------------------+--------------+
```
and this is my second
**`table2`**
```
+-----+-----------------------+--------------+
| id | invoice_number | invoice_date |
+-----+-----------------------+--------------+
| 12 | 1t657801244 | 2009-09-23 |
| 20 | 12585302890 | 2009-11-18 |
| 37 | 1ss58430982 | 2009-11-16 |
| 103 | 12627605146 | 2010-01-06 |
| 121 | 12346310517 | 2010-02-16 |
+-----+-----------------------+--------------+
```
What I want is to get all invoice_numbers from **`table1`** that are not in **`table2`**.
This is my SQL query.
```
select t2.invoice_number FROM table1 t1
JOIN table2 t2 ON t2.invoice_number != t1.invoice_number;
```
But I get a different result. Does anybody know what is wrong with the SQL code? | > Performance-wise you can use a LEFT JOIN or NOT EXISTS
**Try with Left Join**
```
Select t1.invoice_number
FROM table1 t1
LEFT JOIN table2 t2
ON t2.invoice_number = t1.invoice_number
WHERE t2.invoice_number IS NULL;
```
**Try with NOT EXISTS**
```
Select t1.invoice_number
FROM table1 t1 Where NOT EXISTS
(
SELECT NULL
FROM table2 t2
WHERE t2.invoice_number = t1.invoice_number
)
```
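A quick runnable check (SQLite via Python's `sqlite3` as a stand-in for MySQL; the rows are taken from the question) that the LEFT JOIN and NOT EXISTS rewrites, together with the NOT IN form, all return the same missing invoices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (invoice_number TEXT);
    CREATE TABLE table2 (invoice_number TEXT);
    INSERT INTO table1 VALUES ('12536801244'), ('12585302890'), ('12627605146');
    INSERT INTO table2 VALUES ('12585302890'), ('12627605146');
""")

# Anti-join via LEFT JOIN: unmatched rows surface as NULL on the right side.
left_join = conn.execute("""
    SELECT t1.invoice_number FROM table1 t1
    LEFT JOIN table2 t2 ON t2.invoice_number = t1.invoice_number
    WHERE t2.invoice_number IS NULL
""").fetchall()

# Anti-join via NOT EXISTS: correlated subquery per row.
not_exists = conn.execute("""
    SELECT t1.invoice_number FROM table1 t1
    WHERE NOT EXISTS (SELECT NULL FROM table2 t2
                      WHERE t2.invoice_number = t1.invoice_number)
""").fetchall()

# Anti-join via NOT IN (beware NULLs in the subquery column in real data).
not_in = conn.execute("""
    SELECT t1.invoice_number FROM table1 t1
    WHERE t1.invoice_number NOT IN (SELECT invoice_number FROM table2)
""").fetchall()

print(left_join, not_exists, not_in)
```

All three forms agree on the single invoice present only in `table1`.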
**Try with Not IN**
```
Select t1.invoice_number
FROM table1 t1
Where t1.invoice_number NOT IN
(
SELECT t2.invoice_number
FROM table2 t2
)
``` | Why don't you simply try:
```
Select * from Table1
Where Table1.invoice_number NOT IN (select invoice_number from Table2)
``` | mysql not equal is not working | [
"",
"mysql",
"sql",
""
] |
I have one table:
```
ID | domain | address
=======================================
1 | example.com | example.com/siteA
2 | example.com | example.com/siteB
3 | example.com | sub.example.com
4 | example.com | sub.example.com/dir
5 | foobar.com | foobar.com/site123
6 | foobar.com | foobar.com/site
...
```
Now I'd like to know how many "addresses" each domain has, like
```
ID | domain | count
==========================
1 | example.com | 4
5 | foobar.com | 4
```
my statement is currently
```
SELECT * FROM table GROUP BY domain
```
I know this is not much, but is one table enough or should I split it into two? | ```
SELECT domain, COUNT(domain) as Count FROM table GROUP BY domain
```
**UPDATE : To get the Id**
```
SELECT MIN(ID) ID, domain, COUNT(domain) as Count FROM table GROUP BY domain
```
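The grouping above, run against the question's sample rows (a SQLite stand-in via Python's `sqlite3`; the table is named `sites` here since `table` is a keyword, and `cnt` stands in for the `Count` alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (ID INTEGER, domain TEXT, address TEXT)")
conn.executemany("INSERT INTO sites VALUES (?, ?, ?)", [
    (1, "example.com", "example.com/siteA"),
    (2, "example.com", "example.com/siteB"),
    (3, "example.com", "sub.example.com"),
    (4, "example.com", "sub.example.com/dir"),
    (5, "foobar.com", "foobar.com/site123"),
    (6, "foobar.com", "foobar.com/site"),
])

# MIN(ID) keeps one representative id per group alongside the count.
rows = conn.execute("""
    SELECT MIN(ID) AS ID, domain, COUNT(domain) AS cnt
    FROM sites GROUP BY domain ORDER BY ID
""").fetchall()
print(rows)
```

With these six rows, `example.com` counts 4 addresses and `foobar.com` counts 2.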
OR, another way:
```
SELECT
(SELECT MIN(ID) FROM table WHERE DOMAIN = A.DOMAIN) AS ID,
domain,
COUNT(domain) as Count
FROM table A GROUP BY domain
``` | **[Read Group by](http://www.tutorialspoint.com/sql/sql-group-by.htm)**
```
select min(ID) as ID,domain,count(1) as Count
from table
group by domain
``` | SQL with group by and number of rows | [
"",
"sql",
""
] |
When I execute the following query, I get this message:
> "Ora-01427 single-row subquery returns more than one row"
```
SELECT E.I_EmpID AS EMPID,
E.I_EMPCODE AS EMPCODE,
E.I_EmpName AS EMPNAME,
REPLACE(TO_CHAR(A.I_REQDATE, 'DD-Mon-YYYY'), ' ', '') AS FROMDATE,
REPLACE(TO_CHAR(A.I_ENDDATE, 'DD-Mon-YYYY'), ' ', '') AS TODATE,
TO_CHAR(NOD) AS NOD,
DECODE(A.I_DURATION,
'FD',
'FullDay',
'FN',
'ForeNoon',
'AN',
'AfterNoon') AS DURATION,
L.I_LeaveType AS LEAVETYPE,
REPLACE(TO_CHAR((SELECT C.I_WORKDATE
FROM T_COMPENSATION C
WHERE C.I_COMPENSATEDDATE = A.I_REQDATE
AND C.I_EMPID = A.I_EMPID),
'DD-Mon-YYYY'),
' ',
'') AS WORKDATE,
A.I_REASON AS REASON,
AP.I_REJECTREASON AS REJECTREASON
FROM T_LEAVEAPPLY A
INNER JOIN T_EMPLOYEE_MS E
ON A.I_EMPID = E.I_EmpID
AND UPPER(E.I_IsActive) = 'YES'
AND A.I_STATUS = '1'
INNER JOIN T_LeaveType_MS L
ON A.I_LEAVETYPEID = L.I_LEAVETYPEID
LEFT OUTER JOIN T_APPROVAL AP
ON A.I_REQDATE = AP.I_REQDATE
AND A.I_EMPID = AP.I_EMPID
AND AP.I_APPROVALSTATUS = '1'
WHERE E.I_EMPID <> '22'
ORDER BY A.I_REQDATE DESC
```
When I execute this without ORDER BY A.I_REQDATE DESC it returns 100 rows... | Use the following query:
```
SELECT E.I_EmpID AS EMPID,
E.I_EMPCODE AS EMPCODE,
E.I_EmpName AS EMPNAME,
REPLACE(TO_CHAR(A.I_REQDATE, 'DD-Mon-YYYY'), ' ', '') AS FROMDATE,
REPLACE(TO_CHAR(A.I_ENDDATE, 'DD-Mon-YYYY'), ' ', '') AS TODATE,
TO_CHAR(NOD) AS NOD,
DECODE(A.I_DURATION,
'FD',
'FullDay',
'FN',
'ForeNoon',
'AN',
'AfterNoon') AS DURATION,
L.I_LeaveType AS LEAVETYPE,
REPLACE(TO_CHAR((SELECT max(C.I_WORKDATE)
FROM T_COMPENSATION C
WHERE C.I_COMPENSATEDDATE = A.I_REQDATE
AND C.I_EMPID = A.I_EMPID),
'DD-Mon-YYYY'),
' ',
'') AS WORKDATE,
A.I_REASON AS REASON,
AP.I_REJECTREASON AS REJECTREASON
FROM T_LEAVEAPPLY A
INNER JOIN T_EMPLOYEE_MS E
ON A.I_EMPID = E.I_EmpID
AND UPPER(E.I_IsActive) = 'YES'
AND A.I_STATUS = '1'
INNER JOIN T_LeaveType_MS L
ON A.I_LEAVETYPEID = L.I_LEAVETYPEID
LEFT OUTER JOIN T_APPROVAL AP
ON A.I_REQDATE = AP.I_REQDATE
AND A.I_EMPID = AP.I_EMPID
AND AP.I_APPROVALSTATUS = '1'
WHERE E.I_EMPID <> '22'
ORDER BY A.I_REQDATE DESC
```
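To see why the aggregate helps: with two compensation rows matching the same request date, a bare scalar subquery is ambiguous (Oracle raises ORA-01427), while `MAX()` always collapses the match to a single value. A small sketch (SQLite via Python's `sqlite3`, with made-up rows; note SQLite itself silently takes one row rather than erroring, so only the aggregate behaviour is shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_compensation (i_empid INTEGER, i_compensateddate TEXT, i_workdate TEXT)")
conn.executemany("INSERT INTO t_compensation VALUES (?, ?, ?)", [
    (7, "2013-03-01", "2013-03-09"),
    (7, "2013-03-01", "2013-03-16"),  # second match for the same request date
])

# MAX() guarantees the scalar subquery yields exactly one value.
workdate = conn.execute("""
    SELECT (SELECT MAX(c.i_workdate) FROM t_compensation c
            WHERE c.i_compensateddate = '2013-03-01' AND c.i_empid = 7)
""").fetchone()[0]
print(workdate)
```

The subquery deterministically returns the later of the two work dates.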
The trick is to force the inner query to return only one record by adding an aggregate function (I have used max() here). This will work perfectly as far as the query is concerned but, honestly, the OP should investigate why the inner query returns multiple records by examining the data. Are these multiple records really relevant business-wise? | The only subquery appears to be this - try adding a `ROWNUM` limit to the where to be sure:
```
(SELECT C.I_WORKDATE
FROM T_COMPENSATION C
WHERE C.I_COMPENSATEDDATE = A.I_REQDATE AND ROWNUM <= 1
AND C.I_EMPID = A.I_EMPID)
```
You do need to investigate why this isn't unique, however - e.g. the employee might have had more than one `C.I_COMPENSATEDDATE` on the matched date.
For performance reasons, you should also see if the lookup subquery can be rearranged into an inner / left join, i.e.
```
SELECT
...
REPLACE(TO_CHAR(C.I_WORKDATE, 'DD-Mon-YYYY'),
' ',
'') AS WORKDATE,
...
INNER JOIN T_EMPLOYEE_MS E
...
LEFT OUTER JOIN T_COMPENSATION C
ON C.I_COMPENSATEDDATE = A.I_REQDATE
AND C.I_EMPID = A.I_EMPID
...
``` | How to fix Ora-01427 single-row subquery returns more than one row in select? | [
"",
"sql",
"oracle",
""
] |
I'm using LOAD DATA INFILE to import a CSV file. The file's date format is `29/11/2010` and the database format is `2010-11-29`. What can I use to format the date inside the query?
I've tried STR_TO_DATE:
```
SET date_start = STR_TO_DATE(@from_date,'%Y-%m-%d'),
```
but that only inserts `0000-00-00` | **MySQL 4.x**
`LOAD DATA` will try to insert your dates as they are. It isn't aware of the format, and in the general case you cannot apply post-processing to your fields (beyond the formatting that is allowed inside the `LOAD DATA` syntax itself) - and you cannot adjust your values via the `SET` keyword like in `MySQL 5.x`.
Instead, you can do the following steps:
* Declare your table's column as `VARCHAR`. Let it name be `record_date`
* Do your `LOAD DATA` query. It will load your dates into `record_date` column
* Add new column to your table, let it be `temp_date` - with type `DATE`: `ALTER TABLE t ADD temp_date DATE`
* Update your `temp_date` column: `UPDATE t SET temp_date = STR_TO_DATE(record_date, '%d/%m/%Y')`
* Drop your `VARCHAR` date column: `ALTER TABLE t DROP record_date`
* Finally, rename column with correct `DATE` type to original one: `ALTER TABLE t CHANGE temp_date record_date DATE`
As result, you'll have your dates loaded into your table as `DATE` date type. Replace `record_date` to the name which your original column has.
**MySQL 5.x**
You can use the [`SET`](http://dev.mysql.com/doc/refman/5.1/en/load-data.html) keyword to replace the procedure described above natively. So just do:
```
LOAD DATA INFILE 'file.csv'
INTO TABLE t
(@date)
SET record_date=STR_TO_DATE(@date, '%d/%m/%Y')
```
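What `STR_TO_DATE(@date, '%d/%m/%Y')` does can be reproduced in Python for a quick check: parse the file's day-first value and emit MySQL's native `YYYY-MM-DD` form.

```python
from datetime import datetime

# Parse the CSV's day-first date, then format it the way MySQL stores DATE.
raw = "29/11/2010"
converted = datetime.strptime(raw, "%d/%m/%Y").strftime("%Y-%m-%d")
print(converted)
```

The key point is that the parse pattern must match the *input* (`%d/%m/%Y`), not the desired output; the question's attempt used the output pattern, which is why it produced `0000-00-00`.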
The sample above is for one column; you'll need to add the others (if they exist). The date column name is also `record_date`, so change it to your actual name too. | Try something like
```
update tablename SET date_start = date_format(@from_date,'%Y-%m-%d')
``` | Format date for mysql insert | [
"",
"mysql",
"sql",
"date",
""
] |
If I have a view:
```
Movie Genre Actor
-------------------------------------------
Ocean's Twelve Crime George Clooney
Ocean's Twelve Crime Julia Roberts
Ocean's Twelve Crime Brad Pitt
Forrest Gump Drama Tom Hanks
```
How would I group by the movie title, but flatten the other columns like so:
```
Movie Genre Actor
-------------------------------------------
Ocean's Twelve Crime George Clooney, Julia Roberts, Brad Pitt
Forrest Gump Drama Tom Hanks
```
Note that if an element is equivalent, it is not repeated (e.g. `Crime`) | **MySQL**
Use `GROUP_CONCAT()` function:
```
SELECT movie, Genre, GROUP_CONCAT(Actor) AS Actor
FROM tableA
GROUP BY movie, Genre
```
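The MySQL query above can be tried out on the question's rows with a SQLite stand-in (Python's `sqlite3`; SQLite's `GROUP_CONCAT` takes the separator as a second argument rather than via `SEPARATOR`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableA (Movie TEXT, Genre TEXT, Actor TEXT)")
conn.executemany("INSERT INTO tableA VALUES (?, ?, ?)", [
    ("Ocean's Twelve", "Crime", "George Clooney"),
    ("Ocean's Twelve", "Crime", "Julia Roberts"),
    ("Ocean's Twelve", "Crime", "Brad Pitt"),
    ("Forrest Gump", "Drama", "Tom Hanks"),
])

# One output row per (Movie, Genre) group; actors flattened into one string.
rows = conn.execute("""
    SELECT Movie, Genre, GROUP_CONCAT(Actor, ', ') AS Actors
    FROM tableA GROUP BY Movie, Genre ORDER BY Movie
""").fetchall()
print(rows)
```

Each group collapses to a single row with the actor names joined by `', '` (the order within the concatenated string is not guaranteed by SQLite).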
**SQL SERVER**
```
SELECT A.movie, A.Genre, MAX(STUFF(B.ActorNames, 1, 1, '')) AS Actor
FROM tableA A
CROSS APPLY(SELECT ' ' + Actor + ',' FROM tableA B
WHERE A.movie = B.movie AND A.Genre = B.Genre
FOR XML PATH('')
) AS B (ActorNames)
GROUP BY A.movie, A.Genre
``` | You are looking for [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat)
```
select Movie, Genre, group_concat(Actor separator ', ') as `Actors`
from
movies
group by Movie, Genre;
``` | MySQL: Group and flatten elements on a column | [
"",
"mysql",
"sql",
"select",
"group-by",
"group-concat",
""
] |
I'm looking at 5 different columns (a badly designed db, unfortunately). If, across the five columns, there are two "1" values and one "2" value, I want the record excluded from the results. However, if it has only one of the two values I want it included.
I have this so far, but I'm certain it will not include the record if it has even one of the two values.
```
NOT ((Ew.DocRecvd1 = 10 OR Ew.DocRecvd1 = 11) OR
(Ew.DocRecvd2 = 10 OR Ew.DocRecvd2 = 11) OR
(Ew.DocRecvd3 = 10 OR Ew.DocRecvd3 = 11) OR
(Ew.DocRecvd4 = 10 OR Ew.DocRecvd4 = 11) OR
(Ew.DocRecvd5 = 10 OR Ew.DocRecvd5 = 11))
```
Thanks. | I would suggest that you count the number of values in each group that you want. And, I would do it in a subquery, just because that makes the code more readable and maintainable.
Here is an example:
```
from (select t.*,
       ((case when Ew.DocRecvd1 in (10, 11) then 1 else 0 end) +
        (case when Ew.DocRecvd2 in (10, 11) then 1 else 0 end) +
        (case when Ew.DocRecvd3 in (10, 11) then 1 else 0 end) +
        (case when Ew.DocRecvd4 in (10, 11) then 1 else 0 end) +
        (case when Ew.DocRecvd5 in (10, 11) then 1 else 0 end)
       ) as Num1s,
<something similar> as Num2s
from table t
) t
where not (Num1s = 2 and Num2s = 1);
``` | You state the filter conditions simply in the `where` clause. Given a table
```
create table foobar
(
id int not null primary key ,
c1 int not null ,
c2 int not null ,
c3 int not null ,
c4 int not null ,
c5 int not null ,
)
go
```
You can say
```
select *
from foobar
where not ( 2 = case c1 when 1 then 1 else 0 end
+ case c2 when 1 then 1 else 0 end
+ case c3 when 1 then 1 else 0 end
+ case c4 when 1 then 1 else 0 end
+ case c5 when 1 then 1 else 0 end
and 1 = case c1 when 2 then 1 else 0 end
+ case c2 when 2 then 1 else 0 end
+ case c3 when 2 then 1 else 0 end
+ case c4 when 2 then 1 else 0 end
+ case c5 when 2 then 1 else 0 end
)
```
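A quick runnable check of the CASE-sum filter above (SQLite via Python's `sqlite3`; the three rows are invented to hit each branch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foobar (id INTEGER, c1 INT, c2 INT, c3 INT, c4 INT, c5 INT)")
conn.executemany("INSERT INTO foobar VALUES (?, ?, ?, ?, ?, ?)", [
    (1, 1, 1, 2, 0, 0),  # two 1s and one 2 -> excluded
    (2, 1, 0, 0, 0, 0),  # only a single 1  -> kept
    (3, 2, 2, 0, 0, 0),  # two 2s, no 1     -> kept
])

# Each CASE contributes 1 when the column holds the tested value; the sums
# count occurrences, and NOT(...) drops rows with exactly two 1s and one 2.
kept = conn.execute("""
    SELECT id FROM foobar
    WHERE NOT ( 2 = (CASE c1 WHEN 1 THEN 1 ELSE 0 END)
                  + (CASE c2 WHEN 1 THEN 1 ELSE 0 END)
                  + (CASE c3 WHEN 1 THEN 1 ELSE 0 END)
                  + (CASE c4 WHEN 1 THEN 1 ELSE 0 END)
                  + (CASE c5 WHEN 1 THEN 1 ELSE 0 END)
            AND 1 = (CASE c1 WHEN 2 THEN 1 ELSE 0 END)
                  + (CASE c2 WHEN 2 THEN 1 ELSE 0 END)
                  + (CASE c3 WHEN 2 THEN 1 ELSE 0 END)
                  + (CASE c4 WHEN 2 THEN 1 ELSE 0 END)
                  + (CASE c5 WHEN 2 THEN 1 ELSE 0 END)
    )
    ORDER BY id
""").fetchall()
print(kept)
```

Only the first row matches the exclusion pattern; the other two survive the filter.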
The other approach, which might run faster, is to use a mask table containing the conditions you want to exclude. Something like this one:
```
create table mask
(
c1 tinyint null ,
c2 tinyint null ,
c3 tinyint null ,
c4 tinyint null ,
c5 tinyint null ,
unique clustered ( c1,c2,c3,c4,c5) ,
)
```
In your case, there are only 30 conditions to be excluded:
```
c1 c2 c3 c4 c5
---- ---- ---- ---- ----
NULL NULL 1 1 2
NULL NULL 1 2 1
NULL NULL 2 1 1
NULL 1 NULL 1 2
NULL 1 NULL 2 1
NULL 1 1 NULL 2
NULL 1 1 2 NULL
NULL 1 2 NULL 1
NULL 1 2 1 NULL
NULL 2 NULL 1 1
NULL 2 1 NULL 1
NULL 2 1 1 NULL
1 NULL NULL 1 2
1 NULL NULL 2 1
1 NULL 1 NULL 2
1 NULL 1 2 NULL
1 NULL 2 NULL 1
1 NULL 2 1 NULL
1 1 NULL NULL 2
1 1 NULL 2 NULL
1 1 2 NULL NULL
1 2 NULL NULL 1
1 2 NULL 1 NULL
1 2 1 NULL NULL
2 NULL NULL 1 1
2 NULL 1 NULL 1
2 NULL 1 1 NULL
2 1 NULL NULL 1
2 1 NULL 1 NULL
2 1 1 NULL NULL
(30 row(s) affected)
```
The actual query is trivial then (and if you have a *covering index* on the columns to be tested, the test is done with index seeks and so should perform extremely well:
```
select *
from dbo.foobar t
where not exists ( select *
from mask m
where t.c1 = m.c1
and t.c2 = m.c2
and t.c3 = m.c3
and t.c4 = m.c4
and t.c5 = m.c5
)
```
The advantage of this approach is that the ruleset is *table-driven*, meaning future changes to the rules are just data modifications to your mask table.
You could also use a positive set of rules, but in your case, the set is bigger (>200 positive cases as opposed to the 30 negative cases). | Using Boolean to determine 5-way Where clause | [
"",
"sql",
"boolean",
"logic",
"where-clause",
""
] |