| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I get this error at this code:
```
SELECT "LastUpdate" ;
FROM "xx_yy";
```
Is `LastUpdate` a reserved word?
I tried changing the double quotes (`" "`) to backticks or deleting them, but I don't really know the right combination to make it work. I'm a beginner at this. | A semicolon (`;`) signifies the end of a statement. So you actually have two separate, distinct statements:
1. `SELECT "LastUpdate"`
2. `FROM xx_yy`
The second statement is not valid, which is why you are seeing the error.
Solution: Remove the semicolon at the end of the first line:
```
SELECT "LastUpdate"
FROM "xx_yy";
```
---
Also note if the [`ANSI_QUOTES` sqlmode](http://dev.mysql.com/doc/refman/5.1/en/sql-mode.html#sqlmode_ansi_quotes) is not enabled, MySQL treats double-quotes as *string literals* (the same as single quotes). You may need to change these to the MySQL-specific backtick, or remove them entirely:
```
SELECT `LastUpdate`
FROM `xx_yy`;
``` | Get rid of the quotes around your column identifier and table name. That makes them strings instead of identifiers. Either use backticks or nothing at all. Also, ditch the semicolon after the first line, as it terminates your query before it reaches the `FROM` clause.
```
SELECT `LastUpdate`
FROM `xx_yy`;
```
or
```
SELECT LastUpdate
FROM xx_yy;
``` | 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM | [
"mysql",
"sql"
] |
I'm sure there is a better way of doing this, as I am running the same table-valued function several times. Is there a way to get the table from the TVF just once and then run the queries against it to set my variables?
```
DECLARE @RUNS INT
DECLARE @WON INT
DECLARE @PERC FLOAT
SET @RUNS = (SELECT COUNT(*) FROM tvf_horse_going('Allow Me', 'HEAVY'))
SET @WON = (SELECT COUNT(*) FROM tvf_horse_going('Allow Me', 'HEAVY') WHERE PosNum = '1')
SET @PERC = (@WON * 100) / @RUNS
SELECT @RUNS As Runs, @WON As Won, @PERC As Perc
``` | ```
select
@RUNS = count(*),
@WON = sum(case when PosNum = 1 then 1 else 0 end)
from tvf_horse_going('Allow Me', 'HEAVY')
set @PERC = (@WON * 100.0) / @RUNS -- 100.0 avoids integer division
``` | You can also use a temporary table (sometimes called a hash table).
```
DECLARE @RUNS INT
DECLARE @WON INT
DECLARE @PERC FLOAT
SELECT * INTO #TEMP FROM tvf_horse_going('Allow Me', 'HEAVY')
SET @RUNS = (SELECT COUNT(*) FROM #TEMP)
SET @WON = (SELECT COUNT(*) FROM #TEMP WHERE PosNum = '1')
SET @PERC = (@WON * 100.0) / @RUNS
SELECT @RUNS As Runs, @WON As Won, @PERC As Perc
``` | SQL: Use table from table valued function | [
"sql",
"sql-server"
] |
Hi, I have a query returning this:
```
member_id question_variable response_id label
----------- -------------------------------------------------- ----------- ----------------------------
35 area 15 Sydney (Metro)
35 relationship_status 8 Single
35 education 31 Bachelor Degree
35 house_hold_income 4 $75,001 to $100,000
35 pets 36 Dog
35 pets 37 Fish
```
How do I detect duplicate results such as pets and combine them to have response\_id = 36,37 and label = Dog, Fish,
like so
```
member_id question_variable response_id label
----------- -------------------------------------------------- ----------- ----------------------------
35 area 15 Sydney (Metro)
35 relationship_status 8 Single
35 education 31 Bachelor Degree
35 house_hold_income 4 $75,001 to $100,000
35 pets 36,37 Dog,Fish
``` | You can use the **STUFF** function (with `FOR XML PATH`) to get the above result.
QUERY:
```
SELECT DISTINCT T1.MEMBER_ID,T1.QUESTION_VARIABLE,
STUFF((SELECT DISTINCT ',' + T2.RESPONSE_ID
FROM TEST T2
WHERE T1.QUESTION_VARIABLE = T2.QUESTION_VARIABLE
FOR XML PATH('') ),1,1,'') AS RESPONSE_ID,
STUFF((SELECT DISTINCT ',' + T2.LABEL
FROM TEST T2
WHERE T1.QUESTION_VARIABLE = T2.QUESTION_VARIABLE
FOR XML PATH('') ),1,1,'') AS LABEL
FROM TEST T1
;
```
> **HERE IS THE LINK TO SQL FIDDLE**
> <http://sqlfiddle.com/#!3/64515/3> | This can be achieved in the following way as well. I haven't had a chance to test this against a large data set.
If you want to check the performance, please turn on the following
```
SET STATISTICS IO ON
SET STATISTICS TIME ON
```
**Query:**
```
SELECT Main.member_id,
Main.question_variable,
STUFF(SubResponse.response_id,1,1,'') AS response_id,
STUFF(SubLebel.label,1,1,'') AS label
FROM Member Main
CROSS APPLY
(
SELECT ',' + response_id
FROM Member
WHERE member_id = Main.member_id AND question_variable = Main.question_variable
FOR XML PATH('')
) SubResponse (response_id)
CROSS APPLY
(
SELECT ',' + label
FROM Member
WHERE member_id = Main.member_id AND question_variable = Main.question_variable
FOR XML PATH('')
) SubLebel (label)
GROUP By Main.member_id,
Main.question_variable,
SubResponse.response_id,
SubLebel.label
``` | Sql Issue with Survey results | [
"sql",
"sql-server"
] |
I have entries like:
```
first_col, second_col
'john' , 'chips'
'john' , 'candy bars'
'luke' , 'pop corn'
```
so that some values for first\_col can have (but don't need to) several values in second\_col (and the multiplicity is not constant). I would like to retrieve a unique value of second\_col associated with first\_col, and it should be selected randomly from the existing possibilities. For example, for 'John' I'd randomly pick among 'chips' and 'candy bars'.
I'm using Teradata, if that helps.
Thanks. | Unfortunately, I don't have Teradata at home, but try this solution (written on Oracle). Simply, I add a random value to each row, sort by it, and get the first row for each group:
```
SELECT first_col,
MAX(second_col) KEEP (DENSE_RANK FIRST ORDER BY num) as rand_second_col
FROM
(SELECT first_col, second_col,dbms_random.value() as num
FROM table) tmp
GROUP BY first_col
```
Best regards | You can try this one:
```
SELECT [col1],min([col2]) as col2,NEWID()
FROM [testtable] group by [col1] order by NEWID()
```
I have made a table with the below data and it works:
```
col1 col2
john chips
john candy bars
luke pop corn
john ice
```
and the output is:
```
col1 col2 (No column name)
luke pop corn E8CCD6A2-27A8-4728-B16F-2B1EFEAFA8A1
john candy bars D135E1E0-5193-41F1-A5F9-B55F68CF6156
``` | selecting unique random values from different rows | [
"sql",
"teradata"
] |
I have the following code; it prints 0. I need the correct result (3.571428571428571) instead.
What is wrong with this code?
```
declare @result decimal;
declare @a int; set @a = 56;
declare @b int; set @b = 2;
declare @p int; set @p = 100;
set @result = CAST(((@b / @a)*100) as decimal);
print @result
``` | That's called integer division.
> "If an integer dividend is divided by an integer divisor, the result
> is an integer that has any fractional part of the result truncated."
[`/` (Divide) (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms175009.aspx)
So use a decimal instead. For example:
```
declare @result decimal(27,17)
declare @a decimal(25,15)
set @a = 56
declare @b decimal(25,15)
set @b = 2
set @result = (@b / @a) * 100
select @result
```
`Demo`
If you need the result as percent rounded to two decimal places you can use `decimal(5, 2)` and `ROUND`:
```
declare @percent decimal(5,2)
declare @a decimal(25,15)
set @a = 56
declare @b decimal(25,15)
set @b = 2
set @percent = ROUND((@b / @a) * 100, 2)
select @percent
```
`Demo` | If you really need the `cast`, use this:
```
declare @result decimal;
declare @a int; set @a = 56;
declare @b int; set @b = 2;
declare @p int; set @p = 100;
set @result = (cast(@b as decimal) / @a)*100;
print @result
```
It casts the variable `@b` to a `decimal` so the division is done over decimals.
I suggest to convert `a`, `b` and `p` to decimals if possible and cast the end result to an integer if you need to. | Cannot cast of a division to decimal | [
"sql",
"sql-server"
] |
First, this question relates to **Oracle SQL Developer 3.2**, not SQL\*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
```
scripts/A/runAll.sql
| /A1.sql
| /A2.sql
|
/B/runAll.sql
/B1.sql
/B2.sql
```
I would like to have a file `scripts/runEverything.sql` something like this:
```
@@/A/runAll.sql
@@/B/runAll.sql
```
scripts/A/runAll.sql:
```
@@/A1.sql
@@/A2.sql
```
where `"@@"`, I gather, means relative path in SQL\*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using `'&1'` and passing in the root directory. I.e.:
scripts/runEverything.sql:
```
@'&1/A/runAll.sql' '&1/A'
@'&1/B/runAll.sql' '&1/B'
```
and call it by executing this:
```
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
```
But the problem here has been that `B/runAll.sql` gets called with the path: `c:/.../scripts/A/B`.
So, is it possible **with SQL Developer** to make nested calls, and how? | This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
1. Set up your scripts' default folder. On the SQL Developer toolbar, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
2. Create a script folder and place all associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
```
@A1.sql;
@A2.sql;
```
To test this approach, in SQL Developer, click on File, then navigate to and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script". | I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. `&1`) are redefined by each `start` or `@`. So after your first `@runAll`, the parent script sees the same `&1` that the last child saw, which now includes the `/A`.
You can avoid that by defining your own variable in the master script:
```
define path=&1
@'&path/A/runAll.sql' '&path/A'
@'&path/B/runAll.sql' '&path/B'
```
As long as `runAll.sql`, and anything it runs, does not also (re)define `path`, this should work; you just need to choose a unique name if there is a risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past... | Execute scripts by relative path in Oracle SQL Developer | [
"sql",
"oracle",
"oracle-sqldeveloper",
"relative-path",
"sql-scripts"
] |
I read something like the below about 1NF (first normal form) in DBMS.
There was a sentence as follows:
> "Every column should be atomic."
Can anyone please explain it to me thoroughly with an example? | > "Every column should be atomic."
Chris Date says, "*Please note very carefully that it is not just simple things like the integer 3 that are legitimate values.* On the contrary, values can be arbitrarily complex; for example, a value might be a geometric point, or a polygon, or an X ray, or an XML document, or a fingerprint, or an array, or a stack, or a list, or a relation (and so on)."[1]
He also says, "A relvar is in 1NF if and only if, in every legal value of that relvar, every tuple contains exactly one value for each attribute."[2]
He generally discourages the use of the word *atomic*, because it has confusing connotations. *Single value* is probably a better term to use.
For example, a date like '2014-01-01' is a single value. It's not indivisible; on the contrary, it quite clearly *is* divisible. But the dbms does one of two things with single values that have parts. The dbms either returns those values as a whole, or *the dbms* provides functions to manipulate the parts. (Clients don't have to write code to manipulate the parts.)[3]
In the case of dates, SQL can
* return dates as a whole (`SELECT CURRENT_DATE`),
* return one or more parts of a date (`EXTRACT(YEAR FROM CURRENT_DATE)`),
* add and subtract intervals (`CURRENT_DATE + INTERVAL '1' DAY`),
* subtract one date from another (`CURRENT_DATE - DATE '2014-01-01'`),
and so on. In this (narrow) respect, SQL is quite relational.
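For instance, those operations can appear together in one query (a sketch in standard-SQL flavour; some dialects require a `FROM` clause or a dummy table such as Oracle's `DUAL`):
```
SELECT CURRENT_DATE                      AS whole_value,    -- the date as a whole
       EXTRACT(YEAR FROM CURRENT_DATE)   AS one_part,       -- one part of it
       CURRENT_DATE + INTERVAL '1' DAY   AS plus_interval,  -- interval arithmetic
       CURRENT_DATE - DATE '2014-01-01'  AS days_since;     -- date subtraction
```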
---
1. *An Introduction to Database Systems*, 8th ed, p 113. Emphasis in the original.
2. *Ibid*, p 358.
3. In the case of a "user-defined" type, the "user" is presumed to be a database programmer, not a client of the database. | **Re "atomic"**
In Codd's original 1969 and [1970](http://www.seas.upenn.edu/%7Ezives/03f/cis550/codd.pdf) papers he defined relations as having a value for every attribute in a row. The value could be anything, including a relation. This used no notion of "atomic". He explained that "atomic" meant *not relation-valued* (ie not table-valued):
> So far, we have discussed examples of relations which are defined on
> simple domains--domains whose elements are atomic (nondecomposable)
> values. Nonatomic values can be discussed within the relational
> framework. Thus, some domains may have relations as elements.
He used "simple", "atomic" and "nondecomposable" as informal expository notions. He understood that a relation has rows of which each column has an associated name and value; attributes are by definition "single-valued"; the value is of any type. The only structural property that matters relationally is being a relation. It is also just a value, *but you can query it relationally*. Then he used "nonsimple" etc *meaning* relation-valued.
By the time of Codd's 1990 book [The Relational Model for Database Management: Version 2](http://www.amazon.ca/The-Relational-Model-Database-Management/dp/0201141922):
> From a database perspective, data can be classified into two types:
> atomic and compound. Atomic data cannot be decomposed into smaller
> pieces by the DBMS (excluding certain special functions). Compound
> data, consisting of structured combinations of atomic data, can be
> decomposed by the DBMS.
>
> In the relational model there is only one type of compound data: the
> relation. The values in the domains on which each relation is defined
> are required to be atomic with respect to the DBMS. A relational
> database is a collection of relations of assorted degrees. All of the
> query and manipulative operators are upon relations, and all of them
> generate relations as results. Why focus on just one type of compound
> data? The main reason is that any additional types of compound data
> add complexity without adding power.
*"In the relational model there is only one type of compound data: the relation."*
Sadly, "atomic = non-relation" is not what you're going to hear. (Unfortunately Codd was not the clearest writer and his expository remarks get confused with his bottom line.) Virtually all presentations of the relational model get no further than what was for Codd merely a stepping stone. They promote an unhelpful confused fuzzy notion canonicalized/canonized as "atomic" determining "normalized". Sometimes they wrongly use it to *define* relation. Whereas Codd used everyday "nonatomic" to introduce defining relational "nonatomic" as relation-valued and defined "normalized" as free of relation-valued domains.
(Neither is "not a repeating group" helpful as "atomic", defining it as not something that is not even a relational notion. And sure enough in 1970 Codd says "terms attribute and repeating group in present database terminology are roughly analogous to simple domain and nonsimple domain, respectively".)
Eg: This misinterpretation was promoted for a long time from early on by Chris Date, honourable early relational explicator and proselytizer, primarily in his seminal still-current book An Introduction to Database Systems. Which now (2004 8th edition) thankfully presents the helpful relationally-oriented extended notion of distinguishing relation, row and "scalar" (non-relation non-row) domains:
> This definition merely states that all [relation variables] are in 1NF
Eg: Maier's classic [The Theory of Relational Databases (1983)](http://web.cecs.pdx.edu/%7Emaier/TheoryBook/TRD.html):
> The definition of atomic is hazy; a value that is atomic in one application could be non-atomic in another. For a general guideline, a value is non-atomic if the application deals with only a part of the value.
Eg: The current Wikipedia article on First NF (Normal Form) section Atomicity actually quotes from the *introductory* parts above. And then *ignores* the precise meaning. (Then it says something unintelligible about when the nonatomic turtles should stop.):
> Codd states that the "values in the domains on which each
> relation is defined are required to be atomic with respect to the
> DBMS." Codd defines an atomic value as one that "cannot be decomposed
> into smaller pieces by the DBMS (excluding certain special functions)"
> meaning a field should not be divided into parts with more than one
> kind of data in it such that what one part means to the DBMS depends
> on another part of the same field.
**Re "normalized" and "1NF"**
When Codd used "normalize" in 1970, he meant eliminate relation-valued ("non-simple") domains from a relational database:
> For this reason (and others to be cited below) the possibility of
> eliminating nonsimple domains appears worth investigating. There is,
> in fact, a very simple elimination procedure, which we shall call
> normalization.
Later the notion of "higher NFs" (involving FDs (functional dependencies) & then JDs (join dependencies)) arose and "normalize" took on a different meaning. Since Codd's original normalization paper, normalization theory has always given results relevant to all relations, not just those in Codd's 1NF. So one can "normalize" in the original sense of going from just relations to a "normalized" "1NF" without relation-valued columns. And one can "normalize" in the normalization-theory sense of going from a just-relations "1NF" to higher NFs while ignoring whether domains are relations. And "normalization" is commonly also used for the "hazy" notion of eliminating values with "parts". And "normalization" is also wrongly used for designing a relational version of a non-relational database (whether just relations and/or some other sense of "1NF").
Relational *spirit* is to eschew multiple columns with the same meaning or domains with interesting parts in favour of another base table. But we must always come to an *informal* ergonomic decision about when to stop representing parts and just *treat* a column as "atomic" (non-relation-valued) vs "nonatomic" (relation-valued).
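As a hypothetical sketch of that choice — the "interesting parts" moved into another base table rather than packed into one column (all table and column names here are invented for illustration):
```
-- Packed: one column whose value has parts the DBMS cannot address
CREATE TABLE PersonPacked (
    person_id INT PRIMARY KEY,
    pets      VARCHAR(100)  -- e.g. 'Dog,Fish'
);

-- Relational: the parts become rows of another base table
CREATE TABLE PersonPet (
    person_id INT,
    pet       VARCHAR(50),
    PRIMARY KEY (person_id, pet)
);
```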
[Normalization in database management system](https://stackoverflow.com/a/40640962/3404097) | What is atomicity in dbms | [
"sql",
"database",
"atomic",
"database-normalization"
] |
Is there a way for me to extract data that only contains certain values?
Ex:
```
Contact Asset Status
AB 1 Cancelled
AB 2 Cancelled
AB 3 Cancelled
AB 4 Cancelled
CD 5 Cancelled
CD 6 Active
CD 7 Cancelled
CD 8 Active
```
What I want to get are only those contacts that contain cancelled assets ONLY (like Contact AB), and not those with both cancelled and active assets (like Contact CD). | The pure relational logic way is easier to understand, but will perform less well, requiring some kind of join to work. Let's satisfy the conditions
1. There is at least one status `Cancelled` for each `Contact`
2. But there are 0 statuses for that same `Contact` that aren't `Cancelled`
in a query like so:
```
SELECT DISTINCT
CS.Contact
FROM
ContactStatus CS
WHERE
CS.Status = 'Cancelled' -- at least one cancelled
AND NOT EXISTS ( -- but there are none of...
SELECT *
FROM ContactStatus CS2 -- contacts in the same table
WHERE
CS.Contact = CS2.Contact -- for that same contact
        AND CS2.Status <> 'Cancelled' -- that aren't cancelled
)
;
```
But we can do this with an aggregate, that will take only a single scan of the table, by using a little thought:
```
SELECT
Contact
FROM
ContactStatus
GROUP BY
Contact
HAVING
Count(*) = Count(CASE WHEN Status = 'Cancelled' THEN 1 END)
;
```
Other aggregate expressions in the `HAVING` clause are possible, such as:
```
Count(CASE WHEN Status <> 'Cancelled' THEN 1 END) = 0 -- or Min()
Min(Status) = 'Cancelled' AND Max(Status) = 'Cancelled'
Max(CASE WHEN Status = 'Cancelled' THEN 0 ELSE 1 END) = 0
 Sum(CASE WHEN Status <> 'Cancelled' THEN 1 ELSE 0 END) = 0
```
All of these would do the trick in this case; pick the one that makes the most sense to you. | You can do this with `group by` and a `having` clause:
```
select contact
from table t
group by contact
having min(status) = 'Cancelled' and max(status) = 'Cancelled';
```
This works for the data in your example. If `status` could be `NULL` and you want to count that as a different value, then the logic would be slightly more complicated. | Are there way/s for me to extract data that only contain a certain values in sql? | [
"sql"
] |
```
SELECT TOP @columnCount @columnName
FROM @tableName
```
I get the following error
> Incorrect syntax near '@columnCount'.
What could be wrong?
If I change to
```
SELECT TOP (@columnCount) @columnName
FROM @tableName
```
I get the following error
> Must declare the table variable "@tableName".
I run it from C#. | A safe and secure way would be:
```
DECLARE @columnCount INT = 100
DECLARE @columnName NVARCHAR(128) = 'YourColumnName'
DECLARE @tableName NVARCHAR(128) = 'YourTableName'
DECLARE @Sql NVARCHAR(MAX);
SET @Sql = N'SELECT TOP (@columnCount) ' + QUOTENAME(@columnName) + N'
FROM ' + QUOTENAME(@tableName)
EXECUTE sp_executesql @Sql
,N'@columnCount INT'
,@columnCount
``` | You need dynamic SQL to accomplish what you're trying to do.
```
DECLARE @sql VARCHAR(max);
SET @sql = 'SELECT TOP ' + CAST(@columnCount AS VARCHAR(10)) + ' ' + @columnName + ' FROM ' + @tableName;
EXEC(@sql);
```
The non-string variables (such as `@columnCount`) need to be converted to strings before concatenation.
Read more in the [documentation](http://msdn.microsoft.com/en-us/library/ms188001.aspx) | Parametrize query in t-sql | [
"sql",
"sql-server",
"t-sql",
"prepared-statement"
] |
Let's say I have a database with a table with some columns that can be repeated often e.g. "Country" or "Event" (click,press,etc.), and I want to give the users the option to build their own query in a web page. Now I want to populate a dropdown with the available values. One option is to store the Countries and events in separate tables and build a relation between the main table and those two, then just do `select *` from the tables to populate the dropdown. The other option is to just leave them in the main table and perform a Select distinct on the fields that I want to get from the main table. Which of those two is the more robust way performance wise to achieve what I want? | Over time, the `select distinct` you're thinking of using to display the country/event type would become slower and slower since your main table will grow with data.
I suggest you read up on database normalization and best practices to learn more about what would work or not.
The quick answer: Use 2 tables, one to store the country, one to store the event type. Then use relationships to the main table to define it. | It's better to have them on a separate lookup table. It's called database normalization.
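A minimal sketch of that normalized lookup-table design (all table and column names here are hypothetical):
```
CREATE TABLE Country (
    CountryID INT PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL
);

CREATE TABLE Event (
    EventID INT PRIMARY KEY,
    Name    VARCHAR(50) NOT NULL  -- e.g. 'click', 'press'
);

CREATE TABLE MainTable (
    ID        INT PRIMARY KEY,
    CountryID INT REFERENCES Country (CountryID),
    EventID   INT REFERENCES Event (EventID)
);

-- The dropdowns are then populated from the small lookup tables
SELECT CountryID, Name FROM Country;
SELECT EventID, Name FROM Event;
```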
For further read about the term: <http://databases.about.com/od/specificproducts/a/normalization.htm> | SQL select distinct or a new table? | [
"sql",
"select",
"distinct"
] |
```
INSERT INTO <TABLED>
SELECT A.* FROM
<TABLEA> A WHERE A.MED_DTL_STATUS='0'
AND A.TRANS_ID
NOT IN
(
SELECT DISTINCT TRANS_ID_X_REF FROM <TABLEB>
UNION
SELECT DISTINCT TRANS_ID FROM <TABLEA> WHERE ADJUSTMENT_TYPE='3'
);
```
The table has more than 250 columns.
The SELECT statement will return more than 300,000 records. The above query has been running for a long time. I have never worked on performance tuning. Could someone please help me with tuning this, or give me some good links on how to tune Oracle queries?
Thanks in advance. | I find that NOT IN clauses are really slow. I would rewrite the query with NOT EXISTS instead.
```
INSERT INTO <TABLED>
SELECT A.* FROM <TABLEA> A
WHERE A.MED_DTL_STATUS='0'
AND NOT EXISTS (
SELECT B.TRANS_ID_X_REF
FROM <TABLEB> B
WHERE B.TRANS_ID_X_REF = A.TRANS_ID
)
AND NOT EXISTS (
SELECT A2.TRANS_ID
FROM <TABLEA> A2
WHERE A2.TRANS_ID = A.TRANS_ID
AND A2.ADJUSTMENT_TYPE='3'
);
```
The query above assumes there are indexes on TRANS\_ID on TableA and TableB. This may not really solve your problem, but without knowing the data model and indexes it may be worth a shot. | Apart from the good suggestions already given, whenever you are inserting a large number of records into a table it is best practice to drop the indexes on that table. When the INSERT process has finished, then recreate the indexes. | Tuning And Performance | [
"sql",
"performance",
"oracle"
] |
MS SQL
Have a table containing data like 'SeqXXXX: hello world'
The XXXX of course being incremented numbers.
I would like to replace those with a blank. Is there a way to replace 'Seq' + 4 to the right with a blank in an update statement? | Try this:
```
UPDATE YourTable
SET YourColumn = 'Seq'
WHERE LEN(YourColumn) = 7 AND
PATINDEX('%[0-9]%',RIGHT(YourColumn,4)) = 1 AND
LEFT(YourColumn, 3) = 'Seq'
```
As per your comment, alter it as below
```
UPDATE YourTable
SET YourColumn = SUBSTRING(YourColumn, 8, LEN(YourColumn))
WHERE PATINDEX('%[0-9]%',RIGHT(LEFT(YourColumn, 7),4)) = 1 AND
LEFT(YourColumn, 3) = 'Seq'
``` | This will change
```
Seq1234: hello world
```
Into
```
: hello world
```
-
```
UPDATE yourtable
SET yourcolumn = stuff(yourcolumn, 1,7,'')
WHERE yourcolumn like 'seq[0-9][0-9][0-9][0-9]%'
``` | Replace string using wildcards | [
"sql",
"sql-server"
] |
I have a table with a primary key, auto-incremented column, let's say "HeaderFieldID".
Now I want to get the records in the order of the given HeaderFieldID values.
Ex:
```
select *
from tblHeaderField
where HeaderFieldID in (2,1,3,4,6,5)
```
But, by default, I am getting the records in HeaderFieldID ascending order, whereas I want the records in the order of the given HeaderFieldIDs.
Original Table
```
HeaderFieldID HFName DisplayName
1 OrgName1 disp1
2 OrgName2 disp2
3 OrgName3 disp3
4 OrgName4 disp4
5 OrgName5 disp5
6 OrgName6 disp6
```
Thanks in Advance | I have solved my query.
```
SELECT * FROM tblHeaderField
WHERE HeaderFieldID in (5,6,2,1,3,4,7,8,9,10,11,12,13,14,15)
ORDER BY CHARINDEX(CAST(HeaderFieldID AS VARCHAR), '5,6,2,1,3,4,7,8,9,10,11,12,13,14,15')
``` | I don't know if you can order by IN, because IN does not define an order.
So first I would split the IN list into rows and then join it to your table.
```
DECLARE @table TABLE (ID INT IDENTITY(1,1) NOT NULL, NR INT)
--Provide data to look up
DECLARE @givenText VARCHAR(100) = '2,1,3,4,5,6,7,8,9,10,11,12,13,14,15'
-- Split requested string into rows and add unique number
;WITH xmlData (xmlData) AS (
SELECT CAST('<x>'+REPLACE(@givenText, ',', '</x><x>')+'</x>' AS XML) AS xmlData
)
INSERT INTO @table (NR)
SELECT x.value('.','INT') AS NR
FROM xmlData
CROSS APPLY xmlData.xmlData.nodes('//x') AS func(x)
--Join tables to get result
SELECT tHF.*
FROM tblHeaderField AS tHF
INNER JOIN @table AS T
ON T.NR = tHF.HeaderFieldID
ORDER BY T.ID
``` | Get the Records as per the given OrderId only | [
"sql",
"sql-server",
"sql-server-2008-r2",
"sql-server-2012"
] |
I have the following real-world situation:
There are cabins, identified by 'house'. People can live in cabins. 'person\_age' identifies each person's age.
I am trying to find out how many people older than 35 live in each cabin, and what percentage of people in that cabin are older than 35, with the condition that I only want to consider people who moved into the cabin in a particular date range, and I also want to know results per cabin.
I have the following query:
```
select
[house]
,count(case when person_age > 35 then 1 end) as [older than 35]
,(cast(count(case when person_age > 35 then 1 end) as float))/(count(case when person_age > 35 then 1 else 1 end)) as [percent older than 35]
from cabins
where
move_in_date >= '2014-02-01'
and move_in_date <= '2014-03-01'
group by [house]
```
Is there a better way to calculate the percentage field, such as a specific function or something? It works, I think, but it's so ugly.
Am I using the count function correctly? It gives me what I think I want, but I'm not sure. | Your query's logic looks OK. I would write it like this instead:
```
select [house],
sum(case when person_age > 35 then 1 else 0 end) as [older than 35],
avg(case when person_age > 35 then cast(1 as float) else 0
end) as [percent older than 35]
from cabins
where move_in_date >= '2014-02-01' and move_in_date <= '2014-03-01'
group by [house];
```
In other words, you can just create an indicator that has a value of 0.0 or 1.0 and use the `avg()` function. | You are selecting a non-aggregate along with two aggregates, so you'll need a `GROUP BY`, and you can use `OVER()` on the `SUM()` to get the `Percent of total`:
```
SELECT [house]
, COUNT(CASE WHEN person_age > 35 THEN 1 END) AS [older than 35]
, COUNT(CASE WHEN person_age > 35 THEN 1 END)*1.0/SUM(COUNT(CASE WHEN person_age > 35 THEN 1 END))OVER() AS [percent older than 35]
FROM cabins
WHERE move_in_date >= '2014-02-01'
AND move_in_date <= '2014-03-01'
GROUP BY [house]
```
Update: Re-reading I think you want the percent of people over 35 per house, so simply:
```
SELECT [house]
, COUNT(CASE WHEN person_age > 35 THEN 1 END) AS [older than 35]
, COUNT(CASE WHEN person_age > 35 THEN 1 END)*1.0/COUNT(person_age) AS [percent older than 35]
FROM cabins
WHERE move_in_date >= '2014-02-01'
AND move_in_date <= '2014-03-01'
GROUP BY [house]
``` | SQL Server - Counting a column with conditions and also outputing percentages | [
"sql",
"sql-server"
] |
How can I specify a WHERE clause, so that it returns a row if there is an intersection between the column's value (list of strings) and another list of strings provided?
Something like:
```
SELECT id, names
FROM table
WHERE ANY(names) = ANY(('John', 'Alice', 'Bob'))
```
So if the value of the `names` column is, e.g., `['George', 'Bob']`, the row should be returned. | If you really can't change your design (which I would recommend, as mentioned by Craig Ringer)
You may use regexp\_split\_to\_table:
```
SELECT id, names
from (
SELECT
id,
names,
regexp_split_to_table(names, ', ') as splitted_value
from <yourTable>) t
where splitted_value in ('John', 'Alice', 'Bob')
group by id, names;
```
or more complicated, with your sample
```
SELECT id, names
from (
SELECT
id,
names,
regexp_split_to_table(replace(replace(names, '[''', ''), ''']', ''), ''', ''') as splitted_value
from <yourTable>) t
where splitted_value in ('John', 'Alice', 'Bob')
group by id, names;
```
Another ugly way, using some JSON functions (as your column data looks like JSON):
**"Detail"**: I'm not a PostgreSQL expert, least of all in the JSON parts, so there may be a much better way to do this.
```
select id, names
from
(select
id,
names,
replace(cast(json_array_elements(cast(replace(names, '''', '"') as json)) as text), '"', '') as elem
from <yourTable>) t
where elem in ('George', 'Bob');
``` | [You should really use arrays or a table of records for this](https://dba.stackexchange.com/q/55871/7788).
You can work around your design by [splitting strings into arrays at runtime](http://www.postgresql.org/docs/current/static/functions-string.html) and [using PostgreSQL's array features](http://www.postgresql.org/docs/current/static/functions-array.html).
```
SELECT id, names
FROM table
WHERE string_to_array(names, ',') && ARRAY['John', 'Alice', 'Bob'];
```
If your comma separated values have spaces, etc, you might want `regexp_split_to_array` instead of `string_to_array`. | SQL WHERE clause for column of lists | [
"",
"sql",
"postgresql",
""
] |
I have a tutorial I've been solving slowly and need help with this because HAVING is not one of my strongest suits.
This is the site:
[Tutorial](http://sqlzoo.net/wiki/The_nobel_table_can_be_used_to_practice_more_SUM_and_COUNT_functions. "The tutorial").
The problem is 9th question:
> Show the years in which three prizes were given for Physics.
```
Table: nobel(yr, subject, winner)
```
Can you tell me how to solve this? | I think you should count winners in the query, like this:
```
SELECT yr
FROM nobel
WHERE subject = 'Physics'
GROUP BY yr
HAVING count(winner) = 3
```
Remember to use having with an aggregate function (sum, avg, count etc.)
The order in which you write where, group by and having is important.
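The same `GROUP BY`/`HAVING` pattern can be checked quickly with SQLite from Python (a small sketch with made-up winners, not the tutorial's real data):

```python
import sqlite3

# In-memory stand-in for the nobel(yr, subject, winner) table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nobel (yr INT, subject TEXT, winner TEXT)")
conn.executemany(
    "INSERT INTO nobel VALUES (?, ?, ?)",
    [
        (1903, "Physics", "Becquerel"),
        (1903, "Physics", "Pierre Curie"),
        (1903, "Physics", "Marie Curie"),
        (1904, "Physics", "Rayleigh"),
    ],
)

# WHERE filters rows first, GROUP BY buckets them, HAVING filters the buckets
rows = conn.execute(
    """
    SELECT yr
    FROM nobel
    WHERE subject = 'Physics'
    GROUP BY yr
    HAVING COUNT(winner) = 3
    """
).fetchall()

print(rows)  # only 1903 had exactly three Physics winners here
```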
If you have problems with other tutorials, ask here. | This will give you only rows where the subject is Physics. Then they are grouped by the year, having a count equal to three in that year.
```
SELECT yr FROM nobel
WHERE subject = 'Physics'
GROUP BY yr HAVING COUNT(*) = 3
```
The `HAVING` keyword is fairly clearly [explained here](http://www.w3schools.com/sql/sql_having.asp). It essentially can be looked at as the `WHERE` clause of a `GROUP BY`. If that helps you conceptually understand it. | SQL query from a tutorial table using HAVING | [
"",
"sql",
""
] |
I have created a temporary table which takes in information from two joined tables. I am then trying to insert into a fourth table by selecting the attributes from one table whose `IDs` match the ones in the temp table.
At the moment I have tried:
```
INSERT INTO TableX
SELECT attributeID, attribute1, attribute2
FROM Table1 WHERE attributeID = attributeID IN
#TempTable
```
But I'm being told 'Incorrect syntax near 'IN''. | Change your query to this:
```
INSERT INTO TableX
SELECT attributeID, attribute1, attribute2
FROM Table1
WHERE attributeID IN
(SELECT DISTINCT attributeID FROM #TempTable)
```
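As a quick runnable check of this pattern (SQLite from Python, with a plain temp table standing in for SQL Server's `#TempTable` and made-up values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE Table1 (attributeID INT, attribute1 TEXT, attribute2 TEXT);
    CREATE TABLE TableX (attributeID INT, attribute1 TEXT, attribute2 TEXT);
    CREATE TEMP TABLE TempTable (attributeID INT);

    INSERT INTO Table1 VALUES (1, 'a', 'b'), (2, 'c', 'd'), (3, 'e', 'f');
    INSERT INTO TempTable VALUES (1), (3), (3);
    """
)

# Copy only the rows whose id appears in the temp table
conn.execute(
    """
    INSERT INTO TableX
    SELECT attributeID, attribute1, attribute2
    FROM Table1
    WHERE attributeID IN (SELECT DISTINCT attributeID FROM TempTable)
    """
)

copied = conn.execute("SELECT attributeID FROM TableX ORDER BY attributeID").fetchall()
print(copied)  # rows 1 and 3 were copied, each only once
```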
The `IN` keyword requires either a list of values or a subquery after it. | The IN operator allows you to specify multiple values in a WHERE clause, but not the way you specified.
`INSERT INTO TableX
SELECT attributeID, attribute1, attribute2 FROM Table1 WHERE attributeID IN (select attributeID from #TempTable)`
`IN looks for the match in selected attributeID list from #TempTable` | Retrieving data from temporary tables | [
"",
"sql",
""
] |
I have a table like : session is the name of the table for example
With columns: Id, sessionDate, user\_id
What I need:
Delta should be a new calculated column
```
Id | sessionDate | user_id | Delta in days
------------------------------------------------------
1 | 2011-02-20 00:00:00 | 2 | NULL
2 | 2011-03-21 00:00:00 | 2 | NULL
3 | 2011-04-22 00:00:00 | 2 | NULL
4 | 2011-02-20 00:00:00 | 4 | NULL
5 | 2011-03-21 00:00:00 | 4 | NULL
6 | 2011-04-22 00:00:00 | 4 | NULL
```
Delta is the Difference between the timestamps
What I want is the delta timestamp (in days) between the previous row and the current row, grouped by user\_id.
this should be the result:
```
Id | sessionDate | user_id | Delta in Days
------------------------------------------------------
1 | 2011-02-20 00:00:00 | 2 | NULL
2 | 2011-02-21 00:00:00 | 2 | 1
3 | 2011-02-22 00:00:00 | 2 | 1
4 | 2011-02-20 00:00:00 | 4 | NULL
5 | 2011-02-23 00:00:00 | 4 | 3
6 | 2011-02-25 00:00:00 | 4 | 2
```
I already have a solution for a specific user\_id:
```
SELECT user_id, sessionDate,
abs(DATEDIFF((SELECT MAX(sessionDate) FROM session WHERE sessionDate < t.sessionDate and user_id = 1), sessionDate)) as Delta_in_days
FROM session AS t
WHERE t.user_id = 1 order by sessionDate asc
```
But for multiple user\_ids I didn't find any solution.
Hope somebody can help me. | Try this:
```
drop table a;
create table a( id integer not null primary key, d datetime, user_id integer );
insert into a values (1,now() + interval 0 day, 1 );
insert into a values (2,now() + interval 1 day, 1 );
insert into a values (3,now() + interval 2 day, 1 );
insert into a values (4,now() + interval 0 day, 2 );
insert into a values (5,now() + interval 1 day, 2 );
insert into a values (6,now() + interval 2 day, 2 );
select t1.user_id, t1.d, t2.d, datediff(t2.d,t1.d)
from a t1, a t2
where t1.user_id=t2.user_id
and t2.d = (select min(d) from a t3 where t1.user_id=t3.user_id and t3.d > t1.d)
```
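The same adjacent-row logic can be run against SQLite from Python, with `julianday` standing in for MySQL's `DATEDIFF` (a sketch with the sample data, not the original schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session (id INT, sessionDate TEXT, user_id INT)")
conn.executemany(
    "INSERT INTO session VALUES (?, ?, ?)",
    [
        (1, "2011-02-20", 2), (2, "2011-02-21", 2), (3, "2011-02-22", 2),
        (4, "2011-02-20", 4), (5, "2011-02-23", 4), (6, "2011-02-25", 4),
    ],
)

# For each row, find the latest earlier date for the same user and diff in days;
# the first session per user has no earlier date, so the delta stays NULL
rows = conn.execute(
    """
    SELECT t1.id,
           CAST(julianday(t1.sessionDate) - julianday(
                (SELECT MAX(t2.sessionDate) FROM session t2
                 WHERE t2.user_id = t1.user_id
                   AND t2.sessionDate < t1.sessionDate)) AS INT) AS delta
    FROM session t1
    ORDER BY t1.id
    """
).fetchall()

print(rows)
```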
Which means: join your table to itself on user\_ids and adjacent datetime entries and compute the difference. | If `id` is really sequential (as in your sample data), the following should be quite efficient:
```
select t.id, t.sessionDate, t.user_id, datediff(t2.sessiondate, t.sessiondate)
from table t left outer join
table tprev
on t.user_id = tprev.user_id and
t.id = tprev.id + 1;
```
There is also another efficient method using variables. Something like this should work:
```
select t.id, t.sessionDate, t.user_id, datediff(prevsessiondate, sessiondate)
from (select t.*,
if(@user_id = user_id, @prev, NULL) as prevsessiondate,
@prev := sessiondate,
@user_id := user_id
from table t cross join
(select @user_id := 0, @prev := 0) vars
order by user_id, id
) t;
```
(There is a small issue with these queries where the variables in the `select` clause may not be evaluated in the order we expect them to. This is possible to fix, but it complicates the query and this will usually work.) | Calculate delta(difference of current and previous row) mysql group by specific column | [
"",
"mysql",
"sql",
"timestamp",
"datediff",
"difference",
""
] |
Hi I'm new here. I have a problem selecting results of a query into another database.
```
<?php
include_once("config.php");
$connectionInfo = array( "Database"=>"rohanstat", "UID"=>$UID, "PWD"=>$PASS);
$conn = sqlsrv_connect( $serverName, $connectionInfo);
$conn = sqlsrv_connect( $serverName, $connectionInfo);
$sql2 = "SET ROWCOUNT 15 SELECT attacker, COUNT(attacker) AS dupe_cnt FROM [rohanstat].[dbo].[TPKill]
GROUP BY attacker
HAVING COUNT(attacker) > 0
ORDER BY COUNT(attacker) DESC";
$stmt2 = sqlsrv_query( $conn, $sql2);
while($rows = sqlsrv_fetch_array($stmt2))
echo $rows ['attacker']."<br>";
?>
```
I just want to use `$rows['attacker']` to select into another database, which means I want to use it like this:
```
sql = "select * from [RohanGame].[dbo].[TCharacter] where name = $rows ['attacker'];
```
Thanks in advance. | Your question is tagged sql and sql-server, but the approach is the same for MySQL, Oracle and other DBs.
I answered a similar question in T-SQL.
You can do this in two ways:
in PHP, read the results from DB1 into memory, then open a connection and apply your logic to DB2;
or, in T-SQL, you can do it in one query.
[COPY data from table1 to table2 with date inserted as 7 days back or 14 days back?](https://stackoverflow.com/questions/20528406/copy-data-from-table1-to-table2-with-date-inserted-as-7-days-back-or-14-days-bac/20528778#20528778) | You can simply use `IN`:
```
SELECT *
FROM [RohanGame].[dbo].[TCharacter]
WHERE name IN
( SELECT TOP 15 attacker
FROM [rohanstat].[dbo].[TPKill]
GROUP BY attacker
HAVING COUNT(attacker) > 0
ORDER BY COUNT(attacker) DESC
);
```
Or if you need the COUNT you can JOIN your results together:
```
SELECT tc.*, a.Dupe_cnt
FROM [RohanGame].[dbo].[TCharacter] tc
INNER JOIN
( SELECT TOP 15 attacker, COUNT(attacker) AS dupe_cnt
FROM [rohanstat].[dbo].[TPKill]
GROUP BY attacker
HAVING COUNT(attacker) > 0
ORDER BY COUNT(attacker) DESC
) a
ON a.Attacker = tc.name;
```
I don't think it will make much, if any difference, but you can swap out:
```
HAVING COUNT(attacker) > 0
```
with
```
WHERE attacker IS NOT NULL
```
Since you are already grouping by attacker, the only group where `COUNT(attacker)` will not be null is the group where attacker is null:
```
SELECT tc.*, a.Dupe_cnt
FROM [RohanGame].[dbo].[TCharacter] tc
INNER JOIN
( SELECT TOP 15 attacker, COUNT(*) AS dupe_cnt
FROM [rohanstat].[dbo].[TPKill]
WHERE attacker IS NOT NULL
GROUP BY attacker
ORDER BY COUNT(*) DESC
) a
ON a.Attacker = tc.name;
``` | How to use result of query to select in to another table in Sql Server | [
"",
"sql",
"sql-server",
""
] |
I have a simple task of copying Excel data to SQL tables.
I am executing one stored procedure initially to delete tables entries. Then I have Excel input from which I am copying data to the SQL tables using tMap.
I have 20 tables to copy data to, and a relatively small number of entries (10-100) per table.
Still, when I execute my task, it takes a very long time (5-10 mins), and after copying 12 tables' entries it runs out of memory.
My work flow is..
(stored procedure ->(on subjob ok) -> excel input -> tmap -> tMSSqlOutput -> (on component ok) -> excel input -> tmap -> tMSSqlOutput (on component ok) - > ...... -> excel input -> tmap -> tMSSqlOutput)
My Excel sheet is on my local machine where as I am copying data to SQL tables on a server.
I have kept my run/debug settings as Xms 1024M, Xmx 8192m. But still its not working.
May I know what can I do to solve this issue?
I am running my talend on a VM (Virtual Machine).
I have attached the screenshot of my job.
 | You should be running all of these separate steps in separate subjobs, using "on subjob ok" to link them, so that the Java garbage collector can better reallocate memory between steps.
If this still doesn't work you could separate them into completely separate jobs and link them all using tRunJob components and make sure to select to tick "Use an independent process to run subjob":

This will spawn a completely new JVM instance for the process and thus not be memory tied by the JVM. That said, you should be careful not to spawn too many JVM instances as there will be some overhead in the start up of the JVM and obviously you are still limited by any physical memory constraints.
It belongs in a separate question really but you may also find some benefit to using parallelisation in your job to improve performance. | Use onSubJobOK on the excelInput to connect to the next ExcelInput. This would change the whole codegeneration.
The generated code is a function for every subjob. The difference in code generation between OnSubjobOk and OnComponentOk is that OnComponentOk will call the next function immediately, while OnSubjobOk waits for the current subjob/function to finish. The latter lets the Garbage Collector work better.
If that doesn't solve the problem, create subjobs which each contain one Excel-to-DB output, then link these jobs with OnSubjobOk in a master job.
"",
"sql",
"performance",
"memory",
"talend",
"long-running-processes",
""
] |
I have an SQL statement and I'd like to "convert" it into Rails (ActiveRecord) method calls.
This is my query
```
'SELECT * FROM clients WHERE company_id IN (SELECT company_id FROM companies_projects WHERE project_id= ? )
```
* *companies\_projects* is a join table for an n:n relation of *companies* and *projects*
* *clients* belong to companies (1:n)
* *project* is an external resource and has no *has\_many companies*, so I can't go from that direction
* I want to get all *clients* that belong to *companies* that belong to one *project*, so I can list them in the index-page
My models
```
class Client < ActiveRecord::Base
belongs_to :company
end
class Company < ActiveRecord::Base
has_many :companies_projects
has_many :clients
has_many :projects, :through => :companies_projects
end
```
I checked the statement in rails console and it works.
I have two problems implementing this query.
**1. find\_by\_sql**
I tried this method
```
Client.find_by_sql('SELECT * FROM clients WHERE company_id IN (SELECT company_id FROM companies_projects WHERE project_id= ? )',project.id)
```
But it throws an InvalidStatement Exception, MySQL Syntax Error near "?"
I also tried to put the sql and bindings into an array [sql,bind1], that works but I get an array and need an ActiveRecordRelation
**2. where**
I'm new to rails and can't figure out a valid method chain for such a query.
Could someone point me in the right direction?
I would prefer using ActiveRecord methods for the query, but I just don't know which methods to use for the nested selects. | ```
Client.where(company_id: CompanyProject.where(project_id: project.id).pluck(:company_id))
```
Or you can use JOIN
```
Client.joins(:company_project).where('companies_projects.project_id = ?', project.id)
```
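The nested SELECT that either form compiles down to can be sketched and run against SQLite from Python (made-up ids, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE clients (id INT, company_id INT);
    CREATE TABLE companies_projects (company_id INT, project_id INT);

    INSERT INTO clients VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO companies_projects VALUES (10, 5), (30, 5), (20, 6);
    """
)

# All clients of companies that belong to project 5
rows = conn.execute(
    """
    SELECT id FROM clients
    WHERE company_id IN (
        SELECT company_id FROM companies_projects WHERE project_id = ?
    )
    ORDER BY id
    """,
    (5,),
).fetchall()

print(rows)  # clients 1 and 3
```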
But the best solution was proposed by @arup-rakshit | You should have the following associations between your models:
```
class Client < ActiveRecord::Base
belongs_to :company
end
class Company < ActiveRecord::Base
has_and_belongs_to_many :projects
has_many :clients
end
class Project < ActiveRecord::Base
has_and_belongs_to_many :companies
has_many :clients, through: :companies
end
```
Then it is simply:
```
project.clients
``` | How to get this SQL query into rails (3) syntax | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I have this query:
```
update prices_types_company1 set 'date'=DATE_ADD('date',INTERVAL 1 year)
```
which I am trying to execute directly in phpMyadmin to increase all date fields with 1 year but it returns error:
> 1064 - You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use
> near ''date'=DATE\_ADD('date',INTERVAL 1 year)' at line 1
what is wrong with it and what other query I can execute to increase the date with 1 year. Field "date" is type date..
Thank you | MySQL behaves rather weird in a number of situations, update is one of them. You will have to do something like:
```
update prices_types_company1
set date=DATE_ADD(date,INTERVAL 1 year)
order by date desc;
```
to avoid duplicate key error. Example:
```
create table t (d date not null primary key);
insert into t (d) values ('2014-06-05 12:00:00'),('2014-06-06 12:00:00');
update t set d = DATE_ADD(d, interval 1 day);
ERROR 1062 (23000): Duplicate entry '2014-06-06' for key 'PRIMARY'
update t set d = DATE_ADD(d, interval 1 day) order by d desc;
Query OK, 2 rows affected (0.01 sec)
Rows matched: 2 Changed: 2 Warnings: 0
``` | Remove single quotes and use backticks like this
```
UPDATE prices_types_company1 SET `date`=DATE_ADD(`date`,INTERVAL 1 YEAR);
``` | Mysql updating date field with 1 year | [
"",
"mysql",
"sql",
"date",
"sql-update",
""
] |
I have a requirement in my project where I have this data:
```
C1 | C2 | C3 | C4
A | B | 2 | X
A | B | 3 | Y
C | D | 4 | Q
C | D | 1 | P
```
Where C1, C2, C3 and C4 are columns name in Database
And I need to show the data like this:
```
C1 | C2 | C3 | C4
A | B | 5 | X
C | D | 5 | Q
``` | The answer to this is fairly simple. Just follow my solution below:
```
--CREATE THE SAMPLE TABLE
CREATE TABLE TABLE1 (C1 char(1) NULL, C2 char(1) NULL, C3 int NULL, C4 char(1) NULL);
GO
--INSERT THE SAMPLE VALUES
INSERT INTO TABLE1 VALUES ('A', 'B', 2, 'X'), ('A', 'B', 3, 'Y'), ('C', 'D', 4, 'Q'), ('C','D', 1, 'P');
GO
--SELECT SUM(C3) AND GROUP BY ONLY C1 AND C2, THEN SELECT TOP 1 ONLY FROM C4
SELECT
C1,
C2,
SUM(C3) AS C3,
(SELECT TOP(1) C4 FROM TABLE1 AS B WHERE A.C1 = B.C1) AS C4
FROM
TABLE1 AS A
GROUP BY
C1,
C2;
GO
--CLEAN UP THE DATABASE, DROP THE SAMPLE TABLE
IF EXISTS(SELECT name FROM sys.tables WHERE object_id = OBJECT_ID(N'TABLE1')) DROP TABLE TABLE1;
GO
```
Let me know if this helps. | Assuming you mean the first record ordered by `c4` (grouped by `c1` and `c2`), then this will work establishing a `row_number` and using `max` with `case`:
```
with cte as (
select *,
row_number() over (partition by c1, c2 order by c4) rn
from yourtable
)
select c1, c2, sum(c3), max(case when rn = 1 then c4 end) c4
from cte
group by c1, c2
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/e9bfe/1)
However, if you don't want to order by `c4`, then you need some other column to ensure the correct order of the results. Without an order by clause, there's no guarantee on how they are returned. | Select first row in each GROUP BY group | [
"",
"sql",
"sql-server",
"greatest-n-per-group",
""
] |
I have a table (let's call it log) with a few millions of records. Among the fields I have Id, Count, FirstHit, LastHit.
* Id - The record id
* Count - number of times this Id has been reported
* FirstHit - earliest timestamp with which this Id was reported
* LastHit - latest timestamp with which this Id was reported
This table only has one record for any given Id
Everyday I get into another table (let's call it feed) with around half a million records with these fields among many others:
* Id
* Timestamp - Entry date and time.
This table can have many records for the same id
What I want to do is to update log in the following way.
Count - log count value, plus the count() of records for that id found in feed
FirstHit - the earliest of the current value in log or the minimum value in feed for that id
LastHit - the latest of the current value in log or the maximum value in feed for that id.
It should be noticed that many of the ids in feed are already in log.
The simple thing that worked is to create a temporary table and insert into it the union of both as in
```
Select Id, Min(Timestamp) As FirstHit, MAX(Timestamp) as LastHit, Count(*) as Count FROM feed GROUP BY Id
UNION ALL
Select Id, FirstHit,LastHit,Count FROM log;
```
From that temporary table I do a select that aggregates Min(firsthit), max(lasthit) and sum(Count)
```
Select Id, Min(FirstHit),Max(LastHit),Sum(Count) FROM @temp GROUP BY Id;
```
and that gives me the end result. I could then delete everything from log and replace it with everything with temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient.
Is there a more efficient way of doing this? Perhaps doing the update in place in the log table? | The keyword here is `EVERYDAY`. You should have a (batch) job which runs the process at the end of each day. The idea is to process only the records from **yesterday**; this is way better than processing the whole `Feed` table.
## Updated information:
The Feed table contains only the hits since the last run date, which makes updating the `Log` table with MERGE much easier:
Notice: `FirstHit` will never be updated on a match; only `LastHit` and `Count` are. Improved from @dened's answer.
```
MERGE INTO log l
USING (SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, Count(*) as TodayHit FROM feed GROUP BY Id) f
ON l.Id = f.Id
WHEN MATCHED THEN
UPDATE SET
LastHit = f.LastHit,
Count = l.Count + f.TodayHit
WHEN NOT MATCHED THEN
INSERT (Id, FirstHit, LastHit, Count)
VALUES (f.Id, f.FirstHit, f.LastHit, f.TodayHit);
``` | If your SQL Server version is 2008 or later then you can try this:
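Where `MERGE` isn't available, the same aggregate-then-upsert idea can be sketched with SQLite's `INSERT ... ON CONFLICT` (SQLite 3.24+; made-up column names, run from Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE log (Id INT PRIMARY KEY, FirstHit TEXT, LastHit TEXT, Cnt INT);
    CREATE TABLE feed (Id INT, Ts TEXT);

    INSERT INTO log VALUES (1, '2014-01-01', '2014-01-05', 4);
    INSERT INTO feed VALUES (1, '2014-01-06'), (1, '2014-01-07'), (2, '2014-01-06');
    """
)

# Aggregate the feed once, then upsert: update the matching row, insert the rest.
# The WHERE clause disambiguates the upsert clause from a join ON clause.
conn.execute(
    """
    INSERT INTO log (Id, FirstHit, LastHit, Cnt)
    SELECT Id, MIN(Ts), MAX(Ts), COUNT(*) FROM feed WHERE Id IS NOT NULL GROUP BY Id
    ON CONFLICT(Id) DO UPDATE SET
        LastHit = excluded.LastHit,
        Cnt = Cnt + excluded.Cnt
    """
)

rows = conn.execute("SELECT Id, FirstHit, LastHit, Cnt FROM log ORDER BY Id").fetchall()
print(rows)  # FirstHit of Id 1 is untouched; LastHit and Cnt are merged in
```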
```
MERGE INTO log l
USING (SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, Count(*) as Count FROM feed GROUP BY Id) f
ON l.Id = f.Id
WHEN MATCHED THEN
UPDATE SET
FirstHit = CASE WHEN l.FirstHit < f.FirstHit THEN l.FirstHit ELSE f.FirstHit END,
LastHit = CASE WHEN l.LastHit > f.LastHit THEN l.LastHit ELSE f.LastHit END,
Count = l.Count + f.Count
WHEN NOT MATCHED THEN
INSERT (Id, FirstHit, LastHit, Count)
VALUES (f.Id, f.FirstHit, f.LastHit, f.Count);
``` | merging two tables, while applying aggregates on the duplicates (max,min and sum) | [
"",
"sql",
"sql-server",
"database",
""
] |
How do I convert the varchar '1,760,862.8185919' to float or decimal in SQL Server 2012?
```
DECLARE @n varchar(100) = '1,760,862.8185919'
DECLARE @f float = ??
``` | You need to cast it, but remove the commas first:
```
DECLARE @n varchar(100) = '1,760,862.8185919'
```
To convert with `CAST`, do this:
```
DECLARE @f float = CAST(REPLACE(@n, ',', '') AS FLOAT)
```
Alternatively, use `CONVERT`:
```
DECLARE @f float = CONVERT(float, REPLACE(@n, ',', ''))
```
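The comma-stripping step is easy to sanity-check from Python with SQLite's `REPLACE` and `CAST` (an analogous sketch, not SQL Server itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
n = "1,760,862.8185919"

# Strip the thousands separators, then cast the remaining text to a float
(f,) = conn.execute("SELECT CAST(REPLACE(?, ',', '') AS REAL)", (n,)).fetchone()

print(f)
```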
And you can convert back (with loss of precision) by first converting to `MONEY` then `VARCHAR`:
```
DECLARE @new_n VARCHAR(100) = CONVERT(VARCHAR, CAST(@f AS MONEY), 1)
```
```
Declare @n varchar(100)
set @n = '1,760,862.8185919'
select convert(float,Replace(@n,',',''))
``` | How to convert varchar 1,760,862.8185919 to float in SQL Server 2012? | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
A very simple SQL statement is returning two rows instead of the one I would expect.
```
select distinct(F_NAME) from tab2 where F_NAME like '%SMITH%'
F_NAME
CLAIRE_SMITH
CLAIRE_SMITH
```
The text is the same, tried lowercase, trim and some other text functions to see if i can find a difference, but no joy. I've also re-entered the data by hand and by using an update function. I've also checked character encoding to make sure nothing weird was happening, both are `varchar(90)` and `latin1_general_ci`
The name actually exists in two tables I am looking at with around 5 rows in the `tab1` table and 100 row in `tab2`
The problem came to light when joining^1 `tab1` and `tab2` together where `F_NAME=F_NAME`: CLAIRE\_SMITH didn't appear in the results, yet every other person in `tab1` and `tab2` was returned.
^1 right join, implicit join, left join, right join, left outer join and right outer join.
```
tab1
F_NAME, F_DEPTNO, F_AGE
CLAIRE_SMITH, 1, 17
BOB_JONES, 2, 37,
SUE_JENKINS, 2, 29,
tab2
F_ID, F_NAME, F_VALUE1, F_VALUE2, F_VALUE3
1, CLAIRE_SMITH, 10, 11, 15
2, BOB_JONES, 15, 11, 15
3, SUE_JENKINS, 20, 13, 14
4, CLAIRE_SMITH, 10, 11, 15
5, BOB_JONES, 15, 11, 15
6, SUE_JENKINS, 20, 13, 14
```
What I am trying to do is sum the values in `tab2` and group by `F_NAME`, whilst adding in some values from `tab1`. Unfortunately there is no foreign key in `tab2` that I can use to aid the join, and the data, provided as-is, is all I have to work with.
My query outputs correctly for all people except CLAIRE\_SMITH, who does not appear.
```
SELECT a.F_DEPTNO, a.F_NAME, sum(b.F_VALUE1), sum(b.F_VALUE2), sum(b.F_VALUE3)
FROM TAB1 a, TAB2 b WHERE a.F_NAME=b.F_NAME
GROUP BY a.F_ID, a.F_NAME
ORDER BY a.F_ID ASC
```
Can anyone explain what might be happening?
1) the name appears to be the same but is returned as two distinct instances
2) the join excludes this person
thanks | Use
```
select hex(F_NAME) from tab2 where F_NAME like '%SMITH%'
```
in order to find the difference.
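The idea is easy to see in Python (an illustration only; MySQL's `HEX()` reveals the bytes in the same spirit):

```python
# Two values that print identically but compare unequal
a = "CLAIRE_SMITH"
b = "CLAIRE_SMITH "  # trailing space, invisible on screen

equal = (a == b)
hex_a = a.encode("utf-8").hex()
hex_b = b.encode("utf-8").hex()

print(equal)   # False
print(hex_a)
print(hex_b)   # ends in '20', the hidden space made visible
```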
[hex function](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_hex) returns a hexadecimal string representation, so any invisible character (e.g. a space) becomes visible in the string. | My reputation is too low to comment. This query might be interesting:
```
select id1.f_name=id4.f_name
from ( select f_name from tab2 where id=1 ) id1
join ( select f_name from tab2 where id=4 ) id4 using (f_name)
```
This would indicate whether the db considers those two values equal (query will return 1) or not (query will return 0) | how to avoid missing rows? and strange case of Distinct not working in mysql | [
"",
"mysql",
"sql",
""
] |
I have the following SQL question:
How to divide a column (text inside) using the SELECT command into two separate columns with split text?
I need to separate the text-data, using the space character.
I know it is better to put an example to make it easy. So:
```
SELECT COLUMN_A FROM TABLE1
```
output:
```
COLUMN_A
-----------
LORE IPSUM
```
desired output:
```
COLUMN_A COLUMN_B
--------- ----------
LORE IPSUM
```
Thank you all for the help. | Depends on the consistency of the data - assuming a single space is the separator between what you want to appear in column one vs two:
```
WITH TEST_DATA AS
(SELECT 'LOREM IPSUM' COLUMN_A FROM DUAL)
SELECT SUBSTR(t.COLUMN_A, 1, INSTR(t.COLUMN_A, ' ')-1) AS COLUMN_A,
SUBSTR(t.COLUMN_A, INSTR(t.COLUMN_A, ' ')+1) AS COLUMN_B
FROM test_data T;
```
You can also use below query with REGEX:
```
WITH TEST_DATA AS
(SELECT 'LOREM IPSUM' COLUMN_A FROM DUAL)
SELECT REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, 1) COLUMN_A,
REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, 2) COLUMN_B
FROM test_data T;
```
Oracle 10g+ has regex support, allowing more flexibility depending on the situation you need to solve. It also has a regex substring method...
**EDIT:**
3 WORDS SPLIT:
```
WITH TEST_DATA AS
(SELECT 'LOREM IPSUM DIMSUM' COLUMN_A FROM DUAL)
SELECT REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, 1) COLUMN_A,
REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, 2) COLUMN_B,
       REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, 3) COLUMN_C
FROM test_data T;
```
Reference:
* [SUBSTR](http://www.techonthenet.com/oracle/functions/substr.php)
* [INSTR](http://www.techonthenet.com/oracle/functions/instr.php) | The solution can be generalized using a counter and the `PIVOT` operator, the counter to get the word number and the `PIVOT` to change rows to columns
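The same `SUBSTR`/`INSTR` arithmetic also runs unchanged on SQLite, which makes it easy to verify from Python (a sketch, not Oracle itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Everything before the first space goes to column A, the rest to column B
row = conn.execute(
    """
    SELECT SUBSTR(t.a, 1, INSTR(t.a, ' ') - 1) AS col_a,
           SUBSTR(t.a, INSTR(t.a, ' ') + 1)    AS col_b
    FROM (SELECT 'LOREM IPSUM' AS a) t
    """
).fetchone()

print(row)  # ('LOREM', 'IPSUM')
```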
```
WITH Counter (N) AS (
SELECT LEVEL FROM DUAL
CONNECT BY LEVEL <= (SELECT MAX(regexp_count( COLUMN_A, ' ')) + 1
FROM Table1)
)
SELECT Word_1, Word_2, Word_3, Word_4
FROM (SELECT t.COLUMN_A
, c.N N
, REGEXP_SUBSTR(t.COLUMN_A, '[^ ]+', 1, c.N) Word
FROM Table1 t
LEFT JOIN Counter c ON c.N <= regexp_count( COLUMN_A, ' ') + 1) b
PIVOT
(MAX(Word) FOR N IN (1 Word_1, 2 Word_2, 3 Word_3, 4 Word_4)) pvt
```
`SQLFiddle demo`
But that have a fixed columns list in the `PIVOT` definition, to really have a general query a dynamic pivot or a `PIVOT XML` is needed | SQL - Divide single column in multiple columns | [
"",
"sql",
"oracle",
"select",
"split",
""
] |
Can I write something like this:
```
cmd = new OleDbCommand("select * from cmn_mst; select * from cmn_typ", oledbCon);
```
But this is showing an error. Is there any other way to write multiple selects into a dataset? | Write a stored procedure that outputs 2 ref cursors and call it in your .NET code.
A detailed answer would require knowing the type of provider you're using, but this should give you something to Google.
This [article](http://support.microsoft.com/kb/322160) might help. | You can't do that in one OleDbCommand; you need to split the queries into two commands, so 2 datasets. | Multiple select query in .net for a dataset | [
"",
"sql",
".net",
"oracle",
""
] |
How would you get the average price in each category, compare it against each item of that category and get the percentage difference?
For example,in the table below, the average price of a doll is 24, so item B ($18) should be **-25%**, which is cheaper than the average of the category. ((18-24) / 24) \* 100).
How can I join these two queries and then do the calculation?
```
SELECT price from `product` GROUP BY name
SELECT AVG(price) AS average_Price from `product` GROUP BY category
(price - AVG(price))/AVG(price)*100
```
Table Example:
```
ID name category price
1 A Puzzle 5
2 B Doll 18
3 C Puzzle 10
4 D Outdoor 20
5 E Brainteaser 2
6 F Outdoor 40
7 G Doll 30
8 H Brainteaser 9
``` | You can do so ,use a self join with a subquery ,this subquery will get the average price for each category and then in select part perform your calculation
```
select t.*,tt.avg_price
,((t.price - tt.avg_price ) / tt.avg_price *100) precent_diff
from
t
join (select category ,avg(price) avg_price from t group by category) tt
on(t.category = tt.category)
```
## [Demo](http://sqlfiddle.com/#!2/81f0af/4)
To get a percent sign, use CONCAT with the end result of the calculation:
```
select t.*,tt.avg_price
,
concat(
((t.price - tt.avg_price ) / tt.avg_price *100)
,'%')
precent_diff
from
t
join (select category ,avg(price) avg_price from t group by category) tt
on(t.category = tt.category)
```
## [Demo with sign](http://sqlfiddle.com/#!2/81f0af/5) | Have an inner query to get AVG per category and then use a `JOIN`, like this:
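The derived-table join is easy to verify with SQLite from Python (a sketch with just the two Doll rows from the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INT, name TEXT, category TEXT, price REAL)")
conn.executemany(
    "INSERT INTO product VALUES (?, ?, ?, ?)",
    [(2, "B", "Doll", 18.0), (7, "G", "Doll", 30.0)],
)

# Join each row to its category average, then compute the percent difference
rows = conn.execute(
    """
    SELECT p.name,
           ROUND((p.price - a.avg_price) / a.avg_price * 100, 1) AS pct
    FROM product p
    JOIN (SELECT category, AVG(price) AS avg_price
          FROM product GROUP BY category) a
      ON a.category = p.category
    ORDER BY p.name
    """
).fetchall()

print(rows)  # B is 25% below the Doll average of 24, G is 25% above
```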
```
SELECT name, product.category, price, ((price - avg_price)/avg_price)*100 as diff
FROM product
JOIN (
SELECT category, AVG(price) as avg_price
FROM product
GROUP BY category
) t
ON t.category = product.category
```
Working demo: <http://sqlfiddle.com/#!2/f25205/5> | Get percentage difference of each item's price by comparing them with the average value of a field | [
"",
"mysql",
"sql",
""
] |
**I have:** something like
```
UPDATE table
SET field = (SELECT field FROM another_table WHERE id = @id);
```
**Problem:** `SELECT field FROM another_table WHERE id = @id` subquery can return one field or *EMPTY SET*.
**Question:** *How to handle situation when subquery returns empty set?*
**Updated:**
```
UPDATE table t
SET field = IF((SELECT field FROM another_table WHERE id = @id) IS NOT NULL, -- select field
(SELECT field FROM another_table WHERE id = @id), -- Problem #1: select field AGAIN!
(SELECT field FROM table WHERE id = t.id) -- Problem #2: try to not change value, so select the current field value!!
);
``` | If function can be useful:
```
UPDATE table
SET field = if((SELECT field FROM another_table WHERE id = @id) IS NULL,true,false);
``` | You can add the conditional:
```
WHERE (SELECT COUNT(*) FROM another_table WHERE id = @id) > 0
```
This will make sure that at least one row exists in `another_table` with the `id`. See my [SQL Fiddle](http://sqlfiddle.com/#!2/fadce) as an example.
Note: this may not be the most efficient because it does a count on `another_table`, and if it is greater than 1 it will do another `SELECT` (two sub-queries). Instead, you can do an `INNER JOIN`:
```
UPDATE table
INNER JOIN another_table ON table.id=another_table.id
SET table.field = another_table.field
WHERE another_table.id = @id;
```
See this [SQL Fiddle](http://sqlfiddle.com/#!2/098cc). The reason why I saved this as a second option, is not all SQL languages can `UPDATE` with joins (MySQL can). Also, you need some way to relate the tables..in this case I said that the `table.id` we are updating is equal to `another_table.id` we are taking the data from. | MySQL: handle EMPTY SET in UPDATE statement | [
"",
"mysql",
"sql",
""
] |
I have an SQL statement
```
SELECT NO, MEMBID, DATEFROM, DATETO from Member where MEMBID ='xxyy'
```
I get a return where the dates are in DateTime format. I only want them to be in Date format.
I have tried:-
```
SELECT NO, MEMBID, CONVERT(DATEFROM, GETDATE()), DATETO from Member where MEMBID ='xxyy'
SELECT NO, MEMBID, CAST(DATEFROM as DATE()), DATETO from Member where MEMBID ='xxyy'
```
these don't seem to be working.
I need to convert inside the SQL Statement itself. Been looking around on Google but can't seem to find anything. Any insights?
**EDIT :-**
What finally worked for me is the following conversion technique.
CONVERT(VARCHAR(10), DATEFROM, 101) | There is no format in a `DateTime` or `Date` object. That only comes with the display of the data.
You were very close with the Convert, but needed just the [output format](http://msdn.microsoft.com/en-us/library/ms187928.aspx).
```
CONVERT(VARCHAR(10), DATEFROM, 101)
```
This will output the `DATEFROM` column as `mm/dd/yyyy` | Here's a query that casts date time in date.
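For comparison, other engines do the same with a format function; e.g. SQLite's `strftime` (shown here from Python, purely as an analogous sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# %m/%d/%Y plays the role of style 101 (mm/dd/yyyy)
(formatted,) = conn.execute(
    "SELECT strftime('%m/%d/%Y', '2014-06-04 13:45:00')"
).fetchone()

print(formatted)  # 06/04/2014
```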
```
select cast('2014-06-04 00:00:00.000' as date)
``` | Convert DateTime into Date only, inside of the SELECT SQL statement | [
"",
"sql",
"sql-server",
"datetime",
"sql-server-2012",
""
] |
I need some help in fixing a data aberration. I create a view based on two tables with Left Join and the result has some duplicates (as given in the logic section)
**Data Setup:**
```
*******************
TEST1
*******************
PRODUCT VALUE1 KEY
1 2 12
1 3 13
1 4 14
1 5 15
*******************
TEST2
*******************
KEY ATTRIBUTE
12 DESC
13 (null)
14 DESC
15 (null)
```
**What I tried so far**
```
SELECT
B.KEY,
B.ATTRIBUTE,
A.PRODUCT
A.VALUE1
FROM TEST2 B LEFT JOIN TEST1 A ON TEST2.KEY = TEST1.KEY;
```
**What I get with above SQL is**
```
KEY ATTRIBUTE PRODUCT VALUE1
12 DESC 1 2
13 (null) 1 3
14 DESC 1 4
15 (null) 1 5
```
**What I need to get**
```
KEY ATTRIBUTE PRODUCT VALUE1
12 DESC 1 2
13 DESC 1 3
14 DESC 1 4
15 DESC 1 5
```
**Logic**:
Since all products with id 1 are the same, I need to fill in the attribute where it is NULL. So doing a distinct of PRODUCT and ATTRIBUTE will always give 1 row per product id. Test1 has more than 100 products and Test2 has corresponding descriptions.
**Note:** This is not a normalized design since it is data warehousing. So no complaints on design please
I would like to have a CASE statement in the attribute field.
```
CASE
WHEN ATTRIBUTE IS NULL THEN {fix goes here}
ELSE ATTRIBUTE
END AS ATTRIBUTE
```
Some one needs to see fiddle, then go [here](http://www.sqlfiddle.com/#!4/d3d30/1/0) | It's not clear but if you say that for each product can be only one attribute then try to use `MAX() OVER`
```
SELECT
TEST1.Product,
TEST1.value1,
TEST2.KEY,
MAX(ATTRIBUTE) OVER (PARTITION BY test1.Product) ATTR
FROM TEST2
LEFT JOIN
TEST1 ON TEST2.KEY = TEST1.KEY
```
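The `MAX() OVER` approach is easy to try outside Oracle as well. Here is a sketch using Python's built-in `sqlite3` module (window functions require SQLite 3.25 or newer), loaded with the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
con.executescript("""
CREATE TABLE test1 (product INT, value1 INT, key INT);  -- 'key' is a valid name in SQLite
CREATE TABLE test2 (key INT, attribute TEXT);
INSERT INTO test1 VALUES (1,2,12),(1,3,13),(1,4,14),(1,5,15);
INSERT INTO test2 VALUES (12,'DESC'),(13,NULL),(14,'DESC'),(15,NULL);
""")

# MAX ignores NULLs, so every row in the partition picks up 'DESC'.
rows = con.execute("""
    SELECT test2.key,
           MAX(attribute) OVER (PARTITION BY test1.product) AS attr,
           test1.product, test1.value1
    FROM test2 LEFT JOIN test1 ON test2.key = test1.key
    ORDER BY test2.key
""").fetchall()
for row in rows:
    print(row)   # every row now carries attr = 'DESC'
```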
`SQLFiddle demo` | [**SQL Fiddle**](http://www.sqlfiddle.com/#!4/d3d30/45/0):
```
SELECT B.KEY,
CASE WHEN B.ATTRIBUTE IS NULL THEN
(
SELECT s2.ATTRIBUTE
FROM test2 s2
LEFT JOIN TEST1 s1 ON s1.KEY = s2.KEY
WHERE s1.PRODUCT = A.PRODUCT
AND s2.ATTRIBUTE IS NOT NULL
AND ROWNUM = 1
) ELSE B.ATTRIBUTE END AS ATTRIBUTE,
A.PRODUCT, A.VALUE1
FROM TEST2 B
LEFT JOIN TEST1 A ON A.KEY = B.KEY;
``` | CASE statement when using LEFT JOIN | [
"",
"sql",
"oracle",
"join",
"kognitio",
"kognitio-wx2",
""
] |
Long time lurker, first time poster.
I have two tables 'case' and 'case\_char'.
**case**
```
case_id | status | date
1 | closed | 01/01/2014
2 | open | 02/01/2014
```
**case\_char**
```
case_id | property_key | value
1 | email | xx@xx.com
1 | phone | 1234567
2 | email | x2@xx.com
2 | phone | 987654
2 | issue | Unhappy
```
Say I want to return the 'issue' for each case. Not all cases have issues, so I will need to do a left outer join. Unfortunately it is not working for me; it is returning only cases with the 'issue' characteristic. I need it to return all cases regardless of whether the 'issue' characteristic exists for a case in the case\_char table.
Below is an example of the way I have written the code (bearing in mind I am using an Oracle DB).
Could any of you whizzes help a brother out?
```
SELECT c.case_id, char.value
FROM case c, case_char char
WHERE c.case_id = char.case_id (+)
AND char.property_key = 'issue'
``` | Just add a Join(+) to your property key as below:
```
SELECT C.CASE_ID, CHAR.VALUE
FROM CASE C, CASE_CHAR CHAR
WHERE
C.CASE_ID = CHAR.CASE_ID (+)
AND
CHAR.PROPERTY_KEY(+) = 'ISSUE';
^
|
``` | You should use an explicit join, and put the `property_key` in the `ON` clause.
```
SELECT c.case_id, char.value
FROM case c
LEFT JOIN case_char char ON c.case_id = char.case_id AND char.property_key = 'issue'
```
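The key point (the extra filter must live in the `ON` clause, not in `WHERE`, or the outer join degenerates into an inner join) holds in any engine. A sketch with Python's built-in `sqlite3` and the question's sample data; the table is renamed `case_t` here because `CASE` is a reserved word:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE case_t (case_id INT, status TEXT);
CREATE TABLE case_char (case_id INT, property_key TEXT, value TEXT);
INSERT INTO case_t VALUES (1,'closed'),(2,'open');
INSERT INTO case_char VALUES
  (1,'email','xx@xx.com'),(1,'phone','1234567'),
  (2,'email','x2@xx.com'),(2,'phone','987654'),(2,'issue','Unhappy');
""")

# Condition in ON: every case is kept, value is NULL where no issue exists.
good = con.execute("""
    SELECT c.case_id, cc.value
    FROM case_t c
    LEFT JOIN case_char cc
      ON c.case_id = cc.case_id AND cc.property_key = 'issue'
    ORDER BY c.case_id
""").fetchall()
print(good)  # [(1, None), (2, 'Unhappy')]

# Condition in WHERE: the NULL rows are filtered out, so case 1 disappears.
bad = con.execute("""
    SELECT c.case_id, cc.value
    FROM case_t c
    LEFT JOIN case_char cc ON c.case_id = cc.case_id
    WHERE cc.property_key = 'issue'
    ORDER BY c.case_id
""").fetchall()
print(bad)   # [(2, 'Unhappy')]
```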
I'm not very familiar with the syntax for implicit out joins. My guess is you need to put `(+)` after `char.property_key = 'issue'` to keep it from filtering out the null rows. | Left Outer Joining Tables with Multiple Property Keys | [
"",
"sql",
"oracle",
"left-join",
""
] |
I have a SQL Server query that looks like this:
```
SELECT ListingId, hh.DateAltered
FROM Listings (NOLOCK)
LEFT JOIN (
SELECT
h.ParentId,
h.DateAltered
FROM
History AS h
WHERE h.ParentType = 'Listing'
) hh
ON hh.ParentId = Listings.ListingId
WHERE ListingId = 56082
```
So basically I have two tables, `Listings` and `History`. The `Listings` table only has 1 row. The `History` table has 5 rows which are linked to the `Listings` table record (as you can tell from the `LEFT JOIN`).
When I run the above query then it returns 5 rows. This is because the `History` table has 5 rows that are linked to the 1 row in the `Listings` table.
```
ListingId DateAltered
56082 2013-11-06 09:27:29.647
56082 2013-11-08 14:30:42.543
56082 2013-11-08 15:11:30.390
56082 2013-11-14 09:54:21.060
56082 2014-01-09 16:23:52.440
```
But I only need 1 row from the `History` table, so I can see what the last `DateAltered` is.
So I thought it would be as easy as adding a `TOP(1)` to the `LEFT JOIN` query:
```
SELECT ListingId, hh.DateAltered
FROM Listings (NOLOCK)
LEFT JOIN (
SELECT TOP(1)
h.ParentId,
h.DateAltered
FROM
History AS h
WHERE h.ParentType = 'Listing'
) hh
ON hh.ParentId = Listings.ListingId
WHERE ListingId = 56082
```
It does return 1 row now (which is what I want), but now the `DateAltered` column is `NULL`:
```
ListingId DateAltered
56082 NULL
```
Why is this happening? And how can I solve this problem? | It is happening because the `top 1` record you have selected doesn't have the same ID as your listings table.
You want the most recent record from the history table, where the ID matches. You can get this using `row_number`
```
SELECT ListingId, hh.DateAltered
FROM Listings (NOLOCK)
LEFT JOIN (
select *
from
(
SELECT
h.ParentId,
h.DateAltered,
ROW_NUMBER() over (partition by parentid order by datealtered desc) rn
FROM
History AS h
WHERE h.ParentType = 'Listing'
) h
where rn=1
) hh
ON hh.ParentId = Listings.ListingId
WHERE ListingId = 56082
``` | if you want last `DateAltered` value, you should use `MAX` and `GROUP BY`:
```
SELECT ListingId, hh.DateAltered
FROM Listings (NOLOCK)
LEFT JOIN (
SELECT
h.ParentId,
MAX(h.DateAltered) AS DateAltered
FROM
History AS h
WHERE h.ParentType = 'Listing'
GROUP BY h.ParentId
) hh
ON hh.ParentId = Listings.ListingId
WHERE ListingId = 56082
``` | Left joining with one row | [
"",
"sql",
"sql-server",
"left-join",
""
] |
Below is the dataset I've got.
Person, Format and different roles (bit).
I would like to get all the rows where each *Format* group has all 3 roles selected. For example, the Novel format does not have the Editor role selected, so I would like to get data that does not contain *Novel* records.
How can I achieve that?
 | Your first question is "I would like to get all the rows where each Format group has all 3 roles selected." You can approach this with window functions:
```
select name, format, writer, director, editor
from (select t.*,
max(cast(writer as int)) over (partition by format) as maxwriter,
max(cast(director as int)) over (partition by format) as maxdirector,
max(cast(editor as int)) over (partition by format) as maxeditor
from table t
) t
where maxwriter = 1 and maxdirector = 1 and maxeditor = 1;
```
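The same idea can be checked quickly with Python's built-in `sqlite3` (window functions need SQLite 3.25+); the table and data below are invented to mirror the screenshot:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (name TEXT, format TEXT, writer INT, director INT, editor INT);
INSERT INTO t VALUES
  ('Ann','Film',1,0,0),('Bob','Film',0,1,0),('Cat','Film',0,0,1),
  ('Dan','Novel',1,0,0),('Eve','Novel',0,1,0);
""")

# Keep only groups whose per-format maximum is 1 for every role.
rows = con.execute("""
    SELECT name, format, writer, director, editor
    FROM (SELECT t.*,
                 MAX(writer)   OVER (PARTITION BY format) AS maxwriter,
                 MAX(director) OVER (PARTITION BY format) AS maxdirector,
                 MAX(editor)   OVER (PARTITION BY format) AS maxeditor
          FROM t) s
    WHERE maxwriter = 1 AND maxdirector = 1 AND maxeditor = 1
""").fetchall()
print([r[1] for r in rows])   # ['Film', 'Film', 'Film']; Novel has no editor
```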
If you want to get rows where there is no editor, you can use a similar approach, just change the `where` clause:
```
where maxwriter = 1 and maxdirector = 1 and maxeditor = 0;
``` | ```
select format
from your_table
group by format
having sum(case when writer = 1 then 1 else 0 end) > 0
and sum(case when director = 1 then 1 else 0 end) > 0
and sum(case when editor = 1 then 1 else 0 end) > 0
```
and if you need the complete row instead of only `format` then you can do
```
select * from your_table
where format in
(
select format
from your_table
group by format
having sum(case when writer = 1 then 1 else 0 end) > 0
and sum(case when director = 1 then 1 else 0 end) > 0
and sum(case when editor = 1 then 1 else 0 end) > 0
)
``` | Filter groups of rows | [
"",
"sql",
"sql-server",
""
] |
```
SELECT SUM(SALES_AMOUNT)SALES,
YEAR,
MONTH,
CATAGORY_ID,
SALES_PERSON_ID,
ITEM_TYPE_ID
FROM APEX_FINAL
where sales_amount is not null
and catagory_id is not null
GROUP BY (YEAR,MONTH,CATAGORY_ID,SALES_PERSON_ID,ITEM_TYPE_ID)
union all
SELECT SUM(SALES_AMOUNT)SALES,
YEAR,
MONTH,
CATAGORY_ID,
'all others' SALES_PERSON_ID,
ITEM_TYPE_ID
FROM APEX_FINAL
where sales_amount is not null
and sales_person_id is null
GROUP BY (YEAR,MONTH,CATAGORY_ID,SALES_PERSON_ID,ITEM_TYPE_ID)
```
This is my code. Please help me resolve the problem: I am getting a data type mismatch error, even though all the data types appear to be the same.
May be you can use `TO_CHAR(SALES_PERSON_ID)` in the first Union Query as below. It would help you solve the issue
```
SELECT SUM(SALES_AMOUNT)SALES,
YEAR,
MONTH,
CATAGORY_ID,
TO_CHAR(SALES_PERSON_ID) SALES_PERSON_ID,
ITEM_TYPE_ID
FROM APEX_FINAL
where sales_amount is not null
and catagory_id is not null
GROUP BY (YEAR,MONTH,CATAGORY_ID,SALES_PERSON_ID,ITEM_TYPE_ID)
union all
SELECT SUM(SALES_AMOUNT)SALES,
YEAR,
MONTH,
CATAGORY_ID,
'all others' SALES_PERSON_ID,
ITEM_TYPE_ID
FROM APEX_FINAL
where sales_amount is not null
and sales_person_id is null
GROUP BY (YEAR,MONTH,CATAGORY_ID,SALES_PERSON_ID,ITEM_TYPE_ID)
``` | I was struggling for hours on a similar issue and realized that the column order of the two queries being unioned should be the same!
It still gives you the same error, so make sure the columns are in the same order :) | oracle pl/sql ORA-01790: expression must have same datatype as corresponding expression | [
"",
"sql",
"oracle",
""
] |
There are quite a few similar Questions, however i didn't quite find what i was looking for.
Since using dynamic SQL in a stored procedure can quickly get cumbersome, I want to pass a table name (Varchar) to a stored procedure, turn that Tablename into a Tablevariable and afterwards work with this Tablevariable for the rest of the procedure.
I can't figure out the code for this.
I'm working in SSMS on SQL Server 2008 R2. Currently my code looks similar to this; I lack the middle part that creates the @Table table variable from the @TableName varchar variable:
```
CREATE Procedure [dbo].StoredProc(@Tablename Varchar)
AS
Begin
Declare @Table Table (ColA Varchar, ColB Float)
Declare @result float
-- Something like Insert @Table Select * From @Tablename using Dynamic sql or sth. similar
Select @result = Select sum(ColB) From @Table
End
``` | You can combine dynamic SQL and Temporary table storage the following way:
```
CREATE Procedure [dbo].StoredProc(@Tablename Varchar(100))
AS
Begin
create table #TempTbl (ColA Varchar(100), ColB Float);
Declare @result float
declare @dynSQL varchar(max);
select @dynSQL = 'insert into #TempTbl select
cast(val1 as varchar(100)) as ColA,
cast(val2 as float) as ColB from ' + COALESCE( @Tablename, 'NULL');
-- Tablename should contain schema name, 'dbo.' for example
exec( @dynSQL );
Select @result = sum(ColB) From #TempTbl
drop table #TempTbl;
return @Result;
End
``` | You should set the statement you need in a variable:
```
SET @sqlString='INSERT ' + @Table + ' SELECT * FROM ' + @Tablename
```
and then execute it:
```
EXEC sp_executesql @sqlString
``` | Stored Procedure: turn Table name into Table Variable | [
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I've found another thread on this question, but I wasn't able to use its solutions, so I thought I'd ask with more clarity and detail.
I have a large MySQL database representing a vBulletin forum. For several years, this forum has had an error generated on each view, each time creating a new table named `aagregate_temp_1251634200`, `aagregate_temp_1251734400`, etc etc. There are about 20,000 of these tables in the database, and I wish to delete them all.
I want to issue a command that says the equivalent of `DROP TABLE WHERE TABLE_NAME LIKE 'aggregate_temp%';`.
Unfortunately this command doesn't work, and the Google results for this problem are full of elaborate stored procedures beyond my understanding and all seemingly tailored to the more complex problems of different posters.
Is it possible to write a simple statement that drops multiple tables based on a `name like` match? | There's no single statement to do that.
The simplest approach is to generate a set of statements, and execute them individually.
We can write a simple query that will generate the statements for us:
```
SELECT CONCAT('DROP TABLE `',t.table_schema,'`.`',t.table_name,'`;') AS stmt
FROM information_schema.tables t
WHERE t.table_schema = 'mydatabase'
AND t.table_name LIKE 'aggregate\_temp%' ESCAPE '\\'
ORDER BY t.table_name
```
The SELECT statement returns a rowset, but each row conveniently contains the exact SQL statement we need to execute to drop a table. (Note that `information_schema` is a built-in database that contains metadata. We'd need to replace **`mydatabase`** with the name of the database we want to drop tables from.)
We can save the resultset from this query as a plain text file, remove any heading line, and voila, we've got a script we can execute in our SQL client.
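The same generate-then-execute pattern can be sketched with Python's built-in `sqlite3`, reading table names from `sqlite_master` (SQLite's analogue of `information_schema`); the table names here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
for suffix in ("1251634200", "1251734400"):
    con.execute(f"CREATE TABLE aggregate_temp_{suffix} (id INT)")
con.execute("CREATE TABLE keep_me (id INT)")

# Step 1: generate one DROP statement per matching table
# (escape the underscore, since _ is a single-character wildcard in LIKE).
stmts = [f'DROP TABLE "{name}"' for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' "
    "AND name LIKE 'aggregate\\_temp%' ESCAPE '\\'"
)]

# Step 2: execute them individually.
for stmt in stmts:
    con.execute(stmt)

left = [n for (n,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(left)   # ['keep_me']
```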
There's no need for an elaborate stored procedure. | A little googling found this:
```
SELECT 'DROP TABLE "' + TABLE_NAME + '"'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'prefix%'
```
This should generate a script.
**Source**: [Drop all tables whose names begin with a certain string](https://stackoverflow.com/questions/4393/drop-all-tables-whose-names-begin-with-a-certain-string) | How to delete all MySQL tables beginning with a certain prefix? | [
"",
"mysql",
"sql",
""
] |
I have a little string:
```
JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql
```
and I need to make a select like this:
```
SELECT regexp_split_to_table('JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql', '[0-9A-z]*.sql');
```
And i would like expect:
```
JIW.sql
saasa.sql
lok.SQL
aas.sql
```
but now i have
```
---------------------------
;XXX.txt;Qwij.DLL;
;ttt.txt;lok.SQL;ddd.jpg;
(4 rows)
```
How to change this ? | I think you are looking for:
```
SELECT regexp_matches('JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql', '[0-9A-z]*.sql', 'g');
```
instead, i.e. **regexp_matches** instead of **regexp_split_to_table**. | You're using the `regexp_split_to_table` function, which **splits** your string when it matches the pattern.
You'd use `regexp_matches` to find matches for your regex pattern:
```
SELECT
regexp_matches(
'JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql',
E'([^;]+\.sql)',
'gi'
);
```
Where, the `gi` parameter are the modifiers for **global** and **case-insensitive** search. The result would be something [like this](http://sqlfiddle.com/#!15/d41d8/2136).
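If it helps to sanity-check the pattern outside the database, the same matches can be reproduced with Python's `re` module, where the `g` flag corresponds to `findall` and the `i` flag to `re.IGNORECASE`:

```python
import re

s = "JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql"
# One or more non-semicolon characters, followed by a literal ".sql".
matches = re.findall(r"[^;]+\.sql", s, flags=re.IGNORECASE)
print(matches)   # ['JIW.sql', 'saasa.sql', 'lok.SQL', 'aas.sql']
```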
You can also use the `unnest` function to get the [result returned as rows instead](http://sqlfiddle.com/#!15/d41d8/2139). The query would then be:
```
SELECT
unnest(
regexp_matches(
'JIW.sql;XXX.txt;Qwij.DLL;saasa.sql;ttt.txt;lok.SQL;ddd.jpg;aas.sql',
E'([^;]+\.sql)',
'gi'
)
);
``` | My regexp not working correct how to change this? | [
"",
"sql",
"regex",
"postgresql",
""
] |
Afternoon, and apologies for the newbie question. I've written the following SQL code using temp tables and I want to use it as a view. I'm sure it can be written without temp tables, so if someone could point me in the right direction as to what I need to do, that would be greatly appreciated. SQL Server 2008 R2.
```
select R.Code, R.Name, R.[Department Code], R.[Line Role]
into #first
from [Employment Role] R
select R.Code, R.Name, R.[Department Code], R.[Line Role]
into #second
from [Employment Role] R
select R.Code, R.Name, R.[Department Code], R.[Line Role]
into #senior
from [Employment Role] R
select emp.[First Name], emp.[Last Name], f.Name AS [Employee Job Title], f.[Department Code] as [Employee Department],
s.Name as [Manager Job Title], s.[Department Code] as [Manager Department], snr.Name as [Senior Manager Job Title],
snr.[Department Code] as [Senior Manager Department]
from #first f
join #second s on f.[Line Role] = s.Code
join #senior snr on s.[Line Role] = snr.Code
join [Employee] Emp on Emp.[Role Name] = f.Name
drop table #first
drop table #second
drop table #senior
``` | It looks like your temp tables all have the same data, so you can just use one cte and join it to itself:
```
WITH cte
AS ( SELECT R.Code ,
R.Name ,
R.[Department Code] ,
R.[Line Role]
FROM [Employment Role] R
)
SELECT emp.[First Name] ,
emp.[Last Name] ,
f.Name AS [Employee Job Title] ,
f.[Department Code] AS [Employee Department] ,
s.Name AS [Manager Job Title] ,
s.[Department Code] AS [Manager Department] ,
snr.Name AS [Senior Manager Job Title] ,
snr.[Department Code] AS [Senior Manager Department]
FROM cte f
INNER JOIN cte s ON f.[Line Role] = s.Code
INNER JOIN cte snr ON s.[Line Role] = snr.Code
INNER JOIN [Employee] Emp ON Emp.[Role Name] = f.Name
``` | This looks like it would work with just aliases, without using CTE.
```
select emp.[First Name], emp.[Last Name], f.Name AS [Employee Job Title],
f.[Department Code] as [Employee Department],
s.Name as [Manager Job Title],
s.[Department Code] as [Manager Department],
snr.Name as [Senior Manager Job Title],
snr.[Department Code] as [Senior Manager Department]
from [Employment Role] f
join [Employment Role] s on f.[Line Role] = s.Code
join [Employment Role] snr on s.[Line Role] = snr.Code
join [Employee] Emp on Emp.[Role Name] = f.Name
``` | Alternative to using temp tables in a view | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2008-r2",
""
] |
Hello All,
This sounds very simple, but I'm having a problem here.
There are tables:
```
table_A
TA_id | TA_user | TA_hour | TA_date
```
and
```
table_B
TB_id | TB_name | TB_adress
```
Here is the query:
```
$sql = mysql_query("SELECT
DISTINCT TA_user
FROM table_A
INNER JOIN table_B ON table_B.id = table_A.TA_user
WHERE TA_date LIKE '%$vardate%'
ORDER BY TA_user ASC ");
```
When I run the while loop over the result array, the `TB_name` that should print does not, because the inner join doesn't work.
Any ideas ? | It appears to me that you have multiple table A records for a given table B entity, is that right?
By the way, your column names are horrible...table\_B.id = table\_A.TA\_user???? how is someone examining your database supposed to know that relationship exists? But that's beside the point.
Anyway, I guess that you're trying to return the table B entities that have records in table\_A for a given date?
If so, distinct should work for you. I'd probably use group by instead because it's generally faster. But this should do what you want, if I understand you correctly.
```
SELECT DISTINCT table_B.TB_name
FROM table_A
INNER JOIN table_B
ON table_B.TB_id = table_A.TA_user
WHERE TA_date LIKE '%$vardate%'
ORDER BY TA_user ASC;
```
And, as @Gordon Linoff says above, the LIKE with a date field is suspicious. His idea there seems appropriate to me. | First, don't use deprecated interfaces like "mysql\_". Use mysqli or PDO. But, focusing on your query:
```
SELECT DISTINCT TA_user
FROM table_A INNER JOIN
table_B
ON table_B.id = table_A.TA_user
WHERE TA_date LIKE '%$vardate%'
ORDER BY TA_user ASC;
```
One possibility is that the join key is not correct. Assuming it is, the other big issue is using `like` with a date. It is not clear exactly what you want, but I am guessing that it is something like this:
```
SELECT DISTINCT TA_user
FROM table_A INNER JOIN
table_B
ON table_B.id = table_A.TA_user
WHERE TA_date = date('$vardate')
ORDER BY TA_user ASC;
```
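A sketch of the parameterized approach suggested here, using Python's built-in `sqlite3` with invented data (the original uses PHP's old `mysql_*` API, where PDO or mysqli would provide the equivalent placeholders):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_A (TA_id INT, TA_user INT, TA_hour INT, TA_date TEXT);
CREATE TABLE table_B (TB_id INT, TB_name TEXT, TB_adress TEXT);
INSERT INTO table_A VALUES (1, 10, 9, '2014-06-01'), (2, 11, 10, '2014-06-02');
INSERT INTO table_B VALUES (10, 'Alice', 'addr1'), (11, 'Bob', 'addr2');
""")

vardate = "2014-06-01"  # supplied by the application, never interpolated
rows = con.execute("""
    SELECT DISTINCT b.TB_name
    FROM table_A a
    INNER JOIN table_B b ON b.TB_id = a.TA_user
    WHERE a.TA_date = ?
""", (vardate,)).fetchall()
print(rows)   # [('Alice',)]
```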
You will want `$vardate` to be in the format of "YYYY-MM-DD" to ensure proper conversion. Or, better yet, make it a parameter instead of an embedded constant. | SELECT DISTINCT with INNER JOIN | [
"",
"mysql",
"sql",
""
] |
I have a query where one table has ~10 million rows and the other two are <20 in each table.
```
SELECT a.name, b.name, c.total
FROM smallTable1 a, smallTable2 b, largeTable c
WHERE c.id1 = a.id AND c.id2 = b.id;
```
`largeTable` has columns `(id, id1, id2, total)` and ~10 million rows
`smallTable1` has columns `(id, name)`
`smallTable2` has columns `(id, name)`
Right now it takes 5 seconds to run.
Is it possible to make it much faster? | Create indexes - they are the reason why querying is fast. Without indexes, we would be stuck with CPU-only solutions.
So:
1. Create index for SmallTable1(id)
2. Create index for SmallTable2(id)
3. Create index for LargeTable(id1) and LargeTable(id2)
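Whether the planner actually picks such an index up can be verified. Here is a sketch using Python's built-in `sqlite3` and `EXPLAIN QUERY PLAN`; the question is tagged PostgreSQL, where plain `EXPLAIN` plays the same role, so this only illustrates the principle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE largeTable (id INT, id1 INT, id2 INT, total REAL);
CREATE TABLE smallTable1 (id INT, name TEXT);
CREATE INDEX idx_large_id1 ON largeTable (id1);
""")

# Ask the planner how it would execute the join.
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT a.name, c.total
    FROM smallTable1 a JOIN largeTable c ON c.id1 = a.id
""").fetchall()
for row in plan:
    print(row[-1])   # one line should mention: SEARCH ... USING INDEX idx_large_id1
```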
**Important**: You can create index for more than one column at the same time, like this LargeTable(id1,id2) <--- DO NOT DO THAT because it does not make sense in your case.
**Next**, your query is not out of the box wrong, but it does not follow the best practice querying. Relational databases are based on [Set theory](http://en.wikipedia.org/wiki/Relational_model). Therefore, you must think in terms of "bags with marbles" instead of "cells in a table".
Roughly, your initial query translates to:
1. Get EVERYTHING from LargeTable c, SmallTable1 a and SmallTable2 b
2. Now when you have all this information, find items where c.id1 = a.id AND c.id2 = b.id; (there goes your 5+ seconds because this is semi-resource intensive)
Ambrish has suggested the correct query, use that although this will not be faster.
Why? Because in the end, you still pull all the data from the table out of the database.
As far as the data itself goes: 10 million records is not a ridiculously large table, but it is not small either. In data warehouses, the [star schema](http://en.wikipedia.org/wiki/Star_schema) is a standard, and you basically have a star schema. The problem you are actually facing is that the result has to be calculated on the fly, and that takes time. The reason I'm telling you this is that in corporate environments, engineers face these problems on a daily basis. The solution is OLAP (basically pre-calculated, pre-aggregated, pre-summarized, pre-everything data). The end users then just query this precalculated data and the query seems very fast, but it is never 100% correct, because there is a delay between OLTP (on-line **transactional** processing = the day-to-day database) and OLAP (on-line **analytical** processing = the reporting database).
The indexes will help with queries such as WHERE id = 3 etc. But when you are cross joining and basically pulling everything from DB, it probably wouldn't play a significant role in your case.
So to make long story short: if your only options are queries, it will be hard to make an improvement. | There is one circumstance under which separately indexing `ID1` and `ID2` in the large table will make less of a difference. If there are 9,000,000 rows with `ID1` matching `SmallTable1.id` and 200 rows with `ID2` matching `SmallTable2.id`, with the 200 being the only rows where both exist at the same time, you will still be doing almost a complete table/index scan. If that is the case, creating an index on *both* `ID1` *and* `ID2` should speed things up as it can then locate those 200 rows with index seeks.
If that works, you may want to include `Total` in that index to make it a covering index for that table.
This solution (assuming it is one) would be extremely data-centric and thus the execution would change if the data changes significantly.
Whatever you decide to do, I would suggest you make *one* change (create an index or whatever) then check the execution plan. Make another change and check the execution plan. Make another change and check the execution plan. Repeat or rewind as needed. | Optimizing simple SQL query for large table | [
"",
"sql",
"postgresql",
"query-optimization",
""
] |
I need to understand the difference between super key and composite key. The examples I found made more confused. Can you please simply clarify what is the difference? Thanks | The accepted answer is not entirely accurate...
* A **superkey** is any set of columns that, combined together, are unique. There are typically many superkeys per table and same column may be shared by many superkeys. They are not very useful by themselves, but are more of a mental tool for identifying candidate keys (see below).
* A **candidate key** is a minimal superkey - if any column is removed it would no longer be unique. There are typically significantly fewer candidate keys than superkeys.
* A **key** is just a synonym for a candidate key.
* A **composite1 key** is a key that has more than one column. In other words, it's a minimal superkey that has multiple columns.
Few more points:
* Every key is unique, so calling it "unique key" is redundant. Just "key" is enough.
* At the DBMS level, a key is enforced through a PRIMARY KEY or UNIQUE2 constraint.
* An index is usually present underneath the key (PRIMARY KEY or UNIQUE constraint), for performance reasons. But despite often going together, key and index are separate concepts: key is a logical concept (changes the meaning of data) and index is a physical concept (doesn't change the meaning of data, just performance).
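A minimal sketch of that last pair of points, using Python's built-in `sqlite3` and a hypothetical `enrollment` table where no single column is unique but the pair of columns is:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical table: no single column is unique, but the pair is.
con.execute("""
    CREATE TABLE enrollment (
        student_id INT,
        course_id  INT,
        grade      TEXT,
        UNIQUE (student_id, course_id)  -- the composite key, enforced by the DBMS
    )
""")
con.execute("INSERT INTO enrollment VALUES (1, 101, 'A')")
con.execute("INSERT INTO enrollment VALUES (1, 102, 'B')")  # same student: fine
con.execute("INSERT INTO enrollment VALUES (2, 101, 'C')")  # same course: fine
try:
    con.execute("INSERT INTO enrollment VALUES (1, 101, 'F')")  # duplicate pair
    duplicate_rejected = False
except sqlite3.IntegrityError as err:
    duplicate_rejected = True
    print("rejected:", err)
```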
---
*1 Aka. compound, complex or concatenated.*
*2 On NOT NULL columns.* | Yes, I agree with @Branko: the accepted answer is not accurate and not clear.
I'll take example of an Employee table:
```
CREATE TABLE Employee (
Employee ID,
FullName,
SSN,
DeptID
);
```
And to know the difference between Super and Candidate keys, let's first review the other types of keys:
**1. Candidate Key:** an individual column in a table that qualifies for uniqueness across all the rows. Here in the Employee table, EmployeeID and SSN are Candidate keys.
**2. Primary Key:** is the columns you choose to maintain uniqueness in a table. Here in Employee table you can choose either EmployeeID or SSN columns, EmployeeID is preferable choice, as SSN is a secure value.
**3. Alternate Key:** a Candidate key other than the Primary key; e.g. if EmployeeID is the PK, then SSN would be the Alternate key.
**4. Super Key:** if you add any other column/attribute to a Primary Key, it becomes a Super key, like EmployeeID + FullName.
**5. Composite Key:** if a table doesn't have any individual column that qualifies as a Candidate key, then you have to select 2 or more columns together to make a row unique. For example, if there were no EmployeeID or SSN columns, you could make FullName + DateOfBirth a composite primary key. But there can still be a narrow chance of a duplicate row.
[Reference](http://sqlwithmanoj.com/2014/09/15/db-basics-what-are-candidate-primary-composite-super-keys-and-difference-between-them/) | Difference between super key and composite key | [
"",
"sql",
"database",
"database-design",
"relational-database",
""
] |
I am getting this error with the following code. I also get the error for column 3 of 'c'
Here is a snapshot of the errors I am getting with this code. 
I have updated my SQL below. This is as of 6/5/14
```
USE SYNLIVE
SELECT
INLOC.Itemkey, l.Description, INLOC.Location, INLOC.Qtyonhand,
sum(l.POqtyRemn) [POqtyRemn],
SUM(c.Qtyord) AS [COqtyOrd], h.Statusflg
FROM
INLOC INLOC
INNER JOIN
(SELECT
POLIN.Itemkey, POLIN.Description, POLIN.Location, POLIN.Pono,
SUM(POLIN.Qtyremn) AS [POqtyRemn]
FROM
POLIN POLIN
GROUP BY
POLIN.Itemkey, POLIN.Description, POLIN.Location, POLIN.Pono) l ON INLOC.Itemkey = l.Itemkey
INNER JOIN
(SELECT
POHDR.Statusflg, POHDR.Pono
FROM
POHDR POHDR
WHERE POHDR.Statusflg = 'NEW' OR POHDR.Statusflg = 'OPEN'
GROUP BY
POHDR.Statusflg, POHDR.Pono) poh ON l.Pono = poh.Pono
JOIN
OELIN c ON INLOC.Itemkey = c.Itemkey
INNER JOIN
(SELECT
OEHDR.Statusflg, OEHDR.Ordno
FROM
OEHDR
WHERE OEHDR.Statusflg = 'NEW' OR OEHDR.Statusflg = 'OPEN'
GROUP BY
OEHDR.Statusflg, OEHDR.Ordno) h ON c.Ordno = h.Ordno
WHERE
((INLOC.Location = 'SPL') AND (l.POqtyRemn > 0)) OR ((INLOC.Location = 'SPL') AND (c.Qtyord > 0))
GROUP BY
INLOC.Itemkey, l.Description, INLOC.Location, h.Statusflg, inloc.Qtyonhand
/* Add other fields that you are pulling -- you must group by all fields (or have a calc on them .. i.e. Sum(field) */
ORDER BY INLOC.Itemkey
``` | **Update: I see what you're after now... woah... you have to group by the first 3 fields and sum the last 3 fields in your first select statement... I updated the sql below.**
Note the change to the final "group by". It has to include all fields... otherwise... run the individual select statements by themselves and make sure they are valid... seems ok other than the "group by".
As for your error... run the select statement on its own that creates the l table. Based on that error, I believe that's what is giving you trouble... also add [] square brackets around the word "Description" as it's a keyword... it shouldn't hurt you... but something is, and that might be a good place to start. Once you have this statement working... and fix the group by... let us know what happens.
```
SELECT
POLIN.Itemkey, POLIN.Description, POLIN.Location,
SUM(POLIN.Qtyremn) AS [POqtyRemn]
FROM
X.dbo.POLIN POLIN
GROUP BY
Itemkey, Description, Location
```
---
-- After a few more tweaks -- I flatlined it... so you no longer need the group by... I also added the where clauses to the individual subqueries (the inner join select statements).
Without the table schema, I can't guarantee I have all the syntax correct, but take a look... also as I mentioned... make sure you run the subqueries and compare them to the totals... for the ItemKey's (do a spot check) and the overall totals...
```
SELECT
INLOC.Itemkey, l.Description, INLOC.Location, INLOC.Qtyonhand,
l.POqtyRemn,
c.Qtyord, h.Statusflg
FROM
(
select Itemkey, sum(QtyOnHand) [QtyOnHand]
from INLOC
where Location = 'SPL'
group by ItemKey
) INLOC
INNER JOIN
(
SELECT Itemkey, Description, Location, Pono,SUM(Qtyremn) AS [POqtyRemn]
FROM POLIN
GROUP BY Itemkey, Description, Location, Pono
having SUM(Qtyremn) > 0 --This will only return an ItemKey if it has remaining Qty on the PO [POQtyRemn]
) l ON INLOC.Itemkey = l.Itemkey
INNER JOIN
(
SELECT Statusflg, Pono
FROM POHDR
WHERE Statusflg = 'NEW' OR Statusflg = 'OPEN'
GROUP BY
Statusflg, Pono
) poh ON l.Pono = poh.Pono
JOIN
OELIN c ON INLOC.Itemkey = c.Itemkey
INNER JOIN
(
SELECT Statusflg, Ordno
FROM OEHDR
WHERE Statusflg = 'NEW' OR Statusflg = 'OPEN'
GROUP BY Statusflg, Ordno
) h ON c.Ordno = h.Ordno
WHERE
(l.POqtyRemn > 0) OR (c.Qtyord > 0)
ORDER BY INLOC.Itemkey
``` | Looks like [this issue with later versions of office](http://www.tachytelic.net/2013/07/alias-names-in-microsoft-query-no-longer-work-in-recent-versions-of-office/).
Namely that the Aliases are confusing MS Query. Try replacing your
```
SUM(somefield) as SomeAlias
```
with
```
SUM(somefield) as [SomeAlias]
``` | No Column name was specified for column 4 of 'l' | [
"",
"sql",
""
] |
I have a `datagridview` in which a user is allowed to retrieve either `n` records (using a textbox) or all records by clicking on a `button`.
Now I want to retrieve the `Top N` records or all records with a single query.
Right now I'm using 2 different queries to achieve this.
```
-- retrieving Top N records
SELECT TOP (@Rows) * FROM CaseDetails -- @Rows is a parameter
```
And
```
-- retrieving all records
SELECT TOP (SELECT COUNT(*) FROM CaseDetails) * FROM CaseDetails
```
How can I use a single query in `SQL Server` to perform these 2 options? | **This is working fine**
Declare parameters
```
Create Procedure uspRetrieve
@Rows int =NULL
AS
BEGIN
SET NOCOUNT ON;
SET ROWCOUNT @Rows
SELECT * FROM CaseDetails
End
```
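For comparison only (this is not SQL Server syntax): SQLite expresses the same single-query idea with `LIMIT`, where a negative limit means no limit. A sketch with Python's built-in `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CaseDetails (id INT)")
con.executemany("INSERT INTO CaseDetails VALUES (?)", [(i,) for i in range(10)])

def retrieve(rows=None):
    # None -> all rows; SQLite treats a negative LIMIT as "no limit".
    limit = -1 if rows is None else rows
    return con.execute("SELECT id FROM CaseDetails LIMIT ?", (limit,)).fetchall()

print(len(retrieve(3)))    # 3
print(len(retrieve()))     # 10
```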
If you supply @Rows = 0 you will get all records; otherwise @Rows is your limit. | This is a tricky one.
Really you'd probably pass `@Rows` in all cases, or `NULL` to select all rows, then null coalesce via `ISNULL` to select the count from the table to get all rows.
```
-- Set the @Rows parameter first
SELECT @Rows = ISNULL(@Rows, (SELECT COUNT(*) FROM CaseDetails))
-- @Rows is a parameter
SELECT TOP (@Rows) * FROM CaseDetails
``` | How to Retrieve either Top N rows OR all rows from a table in SQL Server | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a single table with rows like this: (Date, Score, Name)
The Date field has two possible dates, and it's possible that a Name value will appear under only one date (if that name was recently added or removed).
I'm looking to get a table with rows like this: (Delta, Name), where delta is the score change for each name between the earlier and later dates. In addition, only a negative change interests me, so if Delta>=0, it shouldn't appear in the output table at all.
My main challenge is calculating the Delta field.
As stated in the title, it should be an SQL query.
Thanks in advance for any help! | I assumed that each name can have its own start/end dates. It can be simplified significantly if there are only two possible dates for the entire table.
I tried this out in SQL Fiddle **[here](http://sqlfiddle.com/#!2/f599db/10/0)**
```
SELECT (score_end - score_start) delta, name_start
FROM
( SELECT date date_start, score score_start, name name_start
FROM t t
WHERE NOT EXISTS
( SELECT 1
FROM t x
WHERE x.date < t.date
AND x.name = t.name
)
) AS start_date_t
JOIN
( SELECT date date_end, score score_end, name name_end
FROM t t
WHERE NOT EXISTS
( SELECT 1
FROM t x
WHERE x.date > t.date
AND x.name = t.name
)
) end_date_t ON start_date_t.name_start = end_date_t.name_end
WHERE score_end-score_start < 0
``` | The following uses a "trick" in MySQL that I don't really like using, because it turns the score into a string and then back into a number. But, it is an easy way to get what you want:
```
select t.name, (lastscore - firstscore) as diff
from (select t.name,
substring_index(group_concat(score order by date asc), ',', 1) as firstscore,
substring_index(group_concat(score order by date desc), ',', 1) as lastscore
from table t
group by t.name
) t
where lastscore - firstscore < 0;
```
If MySQL supported window functions, such tricks wouldn't be necessary. | SQL query that finds a negative change between two rows with the same name field | [
"",
"mysql",
"sql",
""
] |
My table name is xyztable, and in this table I have 3 columns:
A: id
B: Name
C: Order\_number
I have sample data like this:
```
Id Name order_nmber
1 A 1
2 K 0
3 B 6
4 c 3
5 P 0
6 d 5
7 e 2
8 O 0
```
The query I wrote is:
```
select * from xyztable order by order_number ASC;
```
That gives me this result:
```
Id Name order_nmber
2 K 0
5 P 0
8 O 0
1 A 1
7 e 2
4 c 3
6 d 5
3 B 6
```
But I want the result like this:
```
Id Name order_nmber
1 A 1
7 e 2
4 c 3
6 d 5
3 B 6
2 K 0
5 P 0
8 O 0
```
So how can I get this in a single query? Please help! | You could do it like below:
```
select * from xyztable order by order_number = 0, order_number ASC;
``` | ```
select * from xyztable
order by case when order_number=0 then 1 else 0 end ASC, order_number ASC;
``` | How to Put condition in order by sql Query | [
"",
"mysql",
"sql",
""
] |
I want to select data from **Oracle**. However, I have a column - `CHILD_IDN` - that decides which column is right for me. If the column has an idn then I need to pick `CHILD` from the table; otherwise, by default, I should pick the `PARENT`. MSSQL allows a simple CASE statement to work.
But what about Oracle? Is there a query for this that is compatible with both Oracle and MSSQL? I think it's non-normalized data, hence I need advice.
For example
```
TableA
1 PARENT CHILD_IDN CHILD
2 Okay
3 Cool 1 PickMe1
4 Fine
5 Test
6 Bar 2 Pickme2
7 Hello
Now the result expected is
Okay
PickMe1
Fine
Test
Pickme2
Hello
``` | You can use `CASE` statement in Oracle too. like,
```
SELECT CASE WHEN child_idn IS NOT NULL THEN child
ELSE parent
END
FROM tableA;
``` | You can do this with a `case`:
```
select (case when child_idn = 2 then child else parent
end)
from table t;
```
This is standard SQL and should work in either database.
Note: this assumes that the "blank" values are `NULL` and uses the fact that the condition will fail on a `NULL` value. You can also be explicit if you want:
```
select (case when child_idn = 1 then parent
when child_idn = 2 then child
else parent
end)
``` | How to select columns conditionally in Oracle? | [
"",
"sql",
"oracle",
""
] |
Thanks for your time! For each user, I am looking to output a single column which contains their earliest effective date and latest end date, along with other columns from this and other tables (this stuff is consistent for each user). Below is the format for the input data.
```
User Eff_Date End_Date Otherstuff...
----|-------------|------------|---------
001 | 20140101 | 20140106
001 | 20140107 | 99990101
002 | 20140201 | 20140305
002 | 20140306 | 20140319
002 | 20140320 | 99990101
003 | 20140401 | 20140402
004 | 20140501 | 20250901
```
This is basically what I would prefer as a result:
```
User Eff_Date End_Date Otherstuff...
----|-------------|------------|---------
001 | 20140101 | 99990101
002 | 20140201 | 99990101
003 | 20140401 | 20140402
004 | 20140501 | 20250901
```
Here is what I tried:
```
SELECT DISTINCT M.user, T.mineffdate, T.maxenddate, A.otherstuff
FROM tbluser M
LEFT JOIN otherstuff A ON A.[user]=M.[user]
INNER JOIN (SELECT user, MAX(m.end_date) as maxenddate, MIN(m.eff_date) as mineffdate FROM tbluser M GROUP BY user) T ON T.user = M.user AND T.maxenddate = m.end_date AND T.mineffdate = M.eff_date
```
When I ran this, users like 003 and 004 above showed up alright, but users like 001 and 002 failed to show up at all.
I am fairly new to SQL, so I might be making a very basic mistake. Feel free to let me know if that is the case. Additionally, I have no control over the data source, so I cannot fix this at the source. The only reason I found out about this was by using SQL to output every user record and then using VBA to assemble the records. This is unfortunately not a long-term solution.
Please let me know if you need any more info, and I appreciate everything, a nudge in the right direction might be enough to help me solve this. Thank you again for your time! | How about using windowing functions?
Try this:
```
Select distinct t.[user],
Min(t.eff_date) Over (Partition By [user]) as EffDate,
Max(t.End_Date) Over (Partition By [user]) as EndDate,
'Other' as [Other]
From tblUser t
```
EDIT: Although keep in mind if the dates are stored as strings MIN/MAX will work on the strings, not the dates. May need to convert to `Date` if they're stored as strings. | Try to use a `NOT EXISTS` clause; the example may not be exactly what you want, but it shows the clause, which may help you
```
SELECT M.user, M.Eff_Date, M.End_Date, M.otherstuff
FROM tbluser M
WHERE NOT EXISTS(
SELECT * FROM tbluser N
WHERE M.user=N.user AND (N.Eff_Date > M.Eff_Date OR N.Eff_Date = M.Eff_Date AND N.End_Date > M.End_Date)
)
``` | SQL Selecting Max and Min Dates in Different Rows | [
"",
"sql",
"sql-server",
"date",
"max",
"min",
""
] |
I have a table that looks like this:
```
identifier | value | tstamp
-----------+-------+---------------------
abc | 21 | 2014-01-05 05:24:31
xyz | 16 | 2014-01-11 03:32:04
sdf | 11 | 2014-02-06 07:04:24
qwe | 24 | 2014-02-14 02:12:07
abc | 23 | 2014-02-17 08:45:24
sdf | 15 | 2014-03-21 11:23:17
xyz | 19 | 2014-03-27 09:52:37
```
I know how to get the most recent value for a single identifier:
```
select * from table where identifier = 'abc' order by tstamp desc limit 1;
```
But I want to get the most recent value for all identifiers. How can I do this? | The simplest (and often fastest) way is `DISTINCT ON` in Postgres:
```
SELECT DISTINCT ON (identifier) *
FROM tbl
ORDER BY identifier, tstamp DESC;
```
This also returns an ordered list.
[SQLFiddle.](http://sqlfiddle.com/#!15/ecc8f/1)
Details:
[Select first row in each GROUP BY group?](https://stackoverflow.com/questions/3800551/select-first-row-in-each-group-by-group/7630564#7630564) | ```
SELECT *
FROM ( SELECT *,
ROW_NUMBER() OVER(PARTITION BY identifier
ORDER BY tstamp DESC) AS RN
FROM YourTable) AS T
WHERE RN = 1
```
[**Here is**](http://sqlfiddle.com/#!15/ebe57/2) an sqlfiddle with a demo of this.
The results are:
```
╔════════════╦═══════╦═════════════════════════════════╦════╗
║ IDENTIFIER ║ VALUE ║ TSTAMP ║ RN ║
╠════════════╬═══════╬═════════════════════════════════╬════╣
║ abc ║ 23 ║ February, 17 2014 08:45:24+0000 ║ 1 ║
║ qwe ║ 24 ║ February, 14 2014 02:12:07+0000 ║ 1 ║
║ sdf ║ 15 ║ March, 21 2014 11:23:17+0000 ║ 1 ║
║ xyz ║ 19 ║ March, 27 2014 09:52:37+0000 ║ 1 ║
╚════════════╩═══════╩═════════════════════════════════╩════╝
``` | Select only the most recent value for each identifier in a table | [
"",
"sql",
"database",
"postgresql",
"select",
"greatest-n-per-group",
""
] |
**Context:**
The best example is AirBnB. Let's say I have 5 apartments. Each apartment has a calendar that represents it's availability. When a vacationer travels to my city and searches for apartments using a given start date and end date, if that period of time shows up as available on the calendar for any of my apartments, I want those apartments to be shown in search results for the vacationer.
**One bit at a time:**
Obviously there's a lot in the above. The scope of this question is how I should set up database for the list of apartments that includes their availability. Before building a database, I spent some time manually coordinating in Excel just to get a clearer picture in my head of what everything should look like. In the Excel, what I found worked to be column headers for table are:
* apartment\_name
* owner\_id
* apartment\_description
* calendar
Calendar right now is what I'm having trouble with. Literally in my Excel, the columns are just dates going on to eternity. Whenever a vacationer submits a request, I find all the apartments for which each date cell is empty (e.g., available). Then I send the vacationer these apartments. When s/he makes a booking, I go back to the Excel and mark unavailable in each date cell for the specific apartment chosen.
I want to get more opinions... is this the right way I should imagine my database in PostGreSQL? And if so... can I just make a migration that looks like below?
```
class CreateApartments < ActiveRecord::Migration
def change
create_table :apartments do |t|
t.string :apt_name
t.integer :apt_owner
t.text :apt_description
Date.today..Date.new(2034, 12, 31)).each do |date|
t.date :date
end
t.timestamps
end
end
end
``` | You should not store availability but the opposite (apartment is booked for a specific date). Without any deeper analysis I would do something as simple as:
```
owner
owner_id
owner_name
apartment
apartment_id
apartment_name
apartment_description
owner_id
customer
customer_id
customer_name
booking
booking_id
customer_id
apartment_id
booking_start
booking_end
```
In case when one can book disjoint days:
```
booking
booking_id
customer_id
apartment_id
booking_calendar
booking_id
booking_date
```
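To make the first variant concrete, here is a runnable miniature using Python's sqlite3 with made-up apartments and bookings. Note the overlap check below uses the standard interval test (booking_start <= required_end AND booking_end >= required_start), which also catches a booking that lies entirely inside the requested window:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE apartment (apartment_id INTEGER PRIMARY KEY, apartment_name TEXT);
CREATE TABLE booking (booking_id INTEGER PRIMARY KEY, apartment_id INTEGER,
                      booking_start TEXT, booking_end TEXT);
INSERT INTO apartment VALUES (1, 'Sea view'), (2, 'City loft'), (3, 'Garden flat');
INSERT INTO booking VALUES
  (1, 1, '2014-06-10', '2014-06-20'),   -- overlaps the request below
  (2, 2, '2014-07-01', '2014-07-05');   -- outside the request
""")

required_start, required_end = '2014-06-15', '2014-06-25'
rows = con.execute("""
SELECT a.apartment_name
FROM apartment a
WHERE NOT EXISTS (
    SELECT 1 FROM booking b
    WHERE b.apartment_id = a.apartment_id
      AND b.booking_start <= :req_end
      AND b.booking_end   >= :req_start
)
ORDER BY a.apartment_id
""", {"req_start": required_start, "req_end": required_end}).fetchall()

print(rows)  # 'Sea view' is booked for the requested window, so it is excluded
```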
In any case you will be able to return a list of available apartments quite easily.
```
select
*
from
apartments a
where not exists
(select
1
from
bookings b
where
a.apartment_id = b.apartment_id
and (
<<required_start>> between booking_start and booking_end
or
<<required_end>> between booking_start and booking_end
  )
)
``` | I think you will find PostgreSQL's support for [range types](http://www.postgresql.org/docs/current/static/rangetypes.html) and associated [overlapping tests](http://www.postgresql.org/docs/current/static/functions-range.html) useful here.
You can index on ranges and even use a [constraint](http://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-CONSTRAINT) to prevent an apartment being booked twice on the same day.
With ranges you can store bookings or availability as you prefer. | How to create a database that shows a calendar of availability for a set of apartments? | [
"",
"sql",
"ruby-on-rails",
"database",
"postgresql",
"database-design",
""
] |
I am trying to wrap my head around the correct way of connecting `Employees` to `Projects`, but for some reason I am having a hard time with this. I have the following so far:
```
-------------------
| Employee |
-------------------
| EmployeeID | PK |
-------------------
| Name | |
-------------------
| Position | |
-------------------
--------------------
| Project |
--------------------
| ProjectID | PK |
--------------------
| Name | |
--------------------
| Description | |
--------------------
```
I am going to have many `Employees` and many `Projects` and each employee can be a part of many projects while each project would have many employees attached to it. I am having an issue with how to make the connection between the two. Can someone please help talk me through this? Thanks! | You need a joining table to create two one-to-many relationships:
```
employee
employee_id PK
```
Relates to:
```
employee_project
employee_id PK
project_id PK
```
With project also related to the above:
```
project
project_id PK
```
So your employees can be related to your employee\_project table but your projects can be related to your employee\_project table too.
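As a concrete, hypothetical sketch (SQLite via Python; the names are invented, and SQLite only enforces the REFERENCES clauses when PRAGMA foreign_keys = ON):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (employee_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE project  (project_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employee_project (
    employee_id INTEGER REFERENCES employee(employee_id),
    project_id  INTEGER REFERENCES project(project_id),
    PRIMARY KEY (employee_id, project_id)   -- composite primary key
);
INSERT INTO employee VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO project  VALUES (10, 'Alpha'), (20, 'Beta');
INSERT INTO employee_project VALUES (1, 10), (1, 20), (2, 10);
""")

rows = con.execute("""
SELECT e.name, p.name
FROM employee e
JOIN employee_project ep ON ep.employee_id = e.employee_id
JOIN project p          ON p.project_id   = ep.project_id
ORDER BY e.name, p.name
""").fetchall()

print(rows)  # each employee paired with each of their projects
```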
Having two primary keys on a table is called a [Composite Primary Key](http://www.techopedia.com/definition/6572/composite-key) (two foreign keys in this case). | Create a Many-2-Many relationships Table like so.
```
--------------------------
| EmployeeProject |
--------------------------
| EmployeeID | PK |
-------------------
| ProjectID | PK |
--------------------------
```
Your PK on this table will be the combination of `EmployeeID + ProjectID` (called a Composite Primary Key).
To get Employees that have Projects, your SQL will look like so.
```
SELECT emp.*
FROM Employee emp
INNER JOIN Project prj ON emp.EmployeeID = prj.EmployeeID
```
If you want to get Employees who *do not have any projects* assigned, your SQL will look like so.
```
SELECT emp.*
FROM Employee emp
LEFT JOIN Project prj ON emp.EmployeeID = prj.EmployeeID
WHERE prj.EmployeeID IS NULL
``` | How to create a relationship in my Database for Employees that have Projects | [
"",
"sql",
"database",
"ssms",
""
] |
I am looking for all the records for CollA and CollB. I also want to prepend the base records to each collection, at the same time shifting the sequence numbers of the base collection back by 3 (a static number; the query doesn't have to count the rows) so that, when sorted, the base rows appear first.
I have tried searching for similar queries but haven't found anything, mostly, I think, because I can't come up with the technical name for what I am trying to do. Is there a name for this sort of thing?
The query I have that does not include the base records looks like this:
```
SELECT *
FROM NamedFieldCollections
WHERE CollectionName IN (SELECT CollectionName FROM NamedCollections)
CollectionName | FieldName | Sequence
-----------------------------------------------
CollA Field1 0
CollA Field2 1
CollA Field3 2
CollB FieldA 0
CollB FieldB 1
Base F1 0
Base F2 1
Base F3 2
```
I am trying to end up with a query where the results look like this:
```
CollectionName | FieldName | Sequence
-----------------------------------------------
CollA F1 -3
CollA F2 -2
CollA F3 -1
CollA Field1 0
CollA Field2 1
CollA Field3 2
CollB F1 -3
CollB F2 -2
CollB F3 -1
CollB FieldA 0
CollB FieldB 1
``` | ```
SELECT CollectionName, FieldName, Sequence
FROM
(
SELECT T1.CollectionName, T1.FieldName, T1.Sequence
FROM NamedFieldCollections T1
WHERE T1.CollectionName <> 'Base'
UNION
SELECT T3.CollectionName, T2.FieldName, T2.Sequence - 3
FROM NamedFieldCollections T2
CROSS JOIN (
SELECT DISTINCT T4.CollectionName
FROM NamedFieldCollections T4
WHERE T4.CollectionName <> 'Base'
) T3
WHERE T2.CollectionName = 'Base'
) T
ORDER BY T.CollectionName, T.Sequence
``` | You can try this query:-
```
SELECT *
FROM NamedFieldCollections
WHERE CollectionName IN (SELECT CollectionName FROM NamedCollections)
order by collectionName, FieldName;
```
Hope this can help you. | SQL Self Join combining collections | [
"",
"sql",
"sql-server",
"sql-server-2008",
"self-join",
""
] |
Is there a way to group by a unique (primary) key, essentially giving an implicit guarantee that the other columns from that table will be well-defined?
```
SELECT myPrimaryKey, otherThing
FROM myTable
GROUP BY myPrimaryKey
```
I know that I can add the other columns to the statement (`GROUP BY myPrimaryKey,otherThing`), but I'm trying to avoid that. If you're curious why, read on:
---
I have a statement which is essentially doing this:
```
SELECT nodes.node_id, nodes.node_label, COUNT(1)
FROM {a couple of joined tables}
INNER JOIN nodes USING (node_id)
GROUP BY nodes.node_id, nodes.node_label
```
which works fine, but is a bit slow in MySQL. If I remove `nodes.node_label` from the `GROUP BY`, it runs about 10x faster (according to `EXPLAIN`, this is because one of the earlier joins starts using an index when previously it didn't).
We're in the process of migrating to Postgres, so all new statements are supposed to be compatible with *both* MySQL *and* Postgres when possible. Now in Postgres, the original statement runs fast, but the new statement (with the reduced group by) won't run (because Postgres is stricter). In this case, it's a false error because the statement is actually well-defined.
Is there a syntax I can use which will let the same statement run in both platforms, while letting MySQL use just one column in the group by for speed? | You could try converting the other columns into aggregates:
```
SELECT myPrimaryKey, MAX(otherThing)
FROM myTable
GROUP BY myPrimaryKey
``` | In more recent versions of MySql you might have `sql_mode=only_full_group_by` enabled which doesn't allow to select non-aggregated columns when using `group by` i.e. it forces you to use a function like `max()` or `avg()` or `group_concat()`, sometimes you just want any value.
This flag is enabled by default in MySql 5.7.
The function `any_value()` is available when that flag is enabled.
> You can achieve the same effect without disabling ONLY\_FULL\_GROUP\_BY
> by using ANY\_VALUE() to refer to the nonaggregated column.
```
select t.index, any_value(t.insert_date)
from my_table t
group by t.index;
```
More information here:
<https://dev.mysql.com/doc/refman/5.7/en/sql-mode.html#sqlmode_only_full_group_by>
and here:
<https://dev.mysql.com/doc/refman/5.7/en/group-by-handling.html> | GROUP BY only primary key, but select other values | [
"",
"mysql",
"sql",
"postgresql",
"group-by",
""
] |
I can successfully create a composite primary key in SQL Server Management Studio 2012 by selecting two columns (OrderId, CompanyId), right-clicking, and setting them as the primary key. But I don't know how to create a foreign key on two columns (OrderId, CompanyId) in another table using SQL Server Management Studio 2012. | In Object Explorer, go to your table and select `Keys > New Foreign Key` from the context menu:

From the dialog box that pops up, click on the `Add` button to create a new foreign key:

Give it a meaningful name and then click on the `...` button to open the `Tables and Columns specification` dialog box:

Fill in the necessary columns for the parent and the child tables, click `OK` and **you're done!**
Or **much easier and more efficiently** - use a T-SQL script!
```
ALTER TABLE dbo.OtherTable
ADD CONSTRAINT FK_OtherTable_ParentTable
FOREIGN KEY(OrderId, CompanyId) REFERENCES dbo.ParentTable(OrderId, CompanyId)
``` | If you open the submenu for a table in the table list in Management Studio, there is an item `Keys`. If you right-click this, you get `New Foreign Key` as an option. If you select this, the Foreign Key Relationships dialogue opens. In the section (General), you will find `Tables And Columns Specifications`. If I open this, I can select multiple columns.
"",
"sql",
"sql-server-2012",
"composite-key",
""
] |
I have data like this:
```
idOrder | transactionDate
31 | 04/06/2014 7:58:38
32 | 05/06/2014 8:00:08
33 | 05/06/2014 14:23:51
```
I want to build a stored procedure that produces invoice numbers automatically, e.g.:
On 4 June, there is 1 invoice number: 201406041 (20140604 from the date, and 1 at the end from the count of orders that happened that day).
So, on 5 June, there are 2 invoice numbers: 201406051 and 201406052.
I am lacking in SQL scripting, except for this query to count transactions per day:
```
SELECT
count(idOrder) as total
FROM
TR_Order
GROUP BY
Convert(char(8), transactionDate, 111);
```
Thanks in advance. | Try this
```
select * from test
select
idorder, transactiondate,
REPLACE(CAST(transactionDate as DATE),'-','') +
CAST(ROW_NUMBER() OVER(PARTITION BY CAST(transactionDate AS DATE)
ORDER BY idorder) AS varchar(8)) AS InvoiceNumber
from test
```
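The same idea can be sanity-checked with Python's sqlite3 (SQLite 3.25+ for ROW_NUMBER; SQLite concatenates strings with || rather than +, and the sample rows are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TR_Order (idOrder INTEGER, transactionDate TEXT);
INSERT INTO TR_Order VALUES
  (31, '2014-06-04 07:58:38'),
  (32, '2014-06-05 08:00:08'),
  (33, '2014-06-05 14:23:51');
""")

rows = con.execute("""
SELECT idOrder,
       replace(date(transactionDate), '-', '')
         || ROW_NUMBER() OVER (PARTITION BY date(transactionDate)
                               ORDER BY idOrder) AS InvoiceNumber
FROM TR_Order
ORDER BY idOrder
""").fetchall()

print(rows)  # yyyymmdd of each order plus its per-day running number
```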
 | ```
select convert(VARCHAR(8), transactionDate, 112)
+ CONVERT(VARCHAR(100), RANK()
over (partition by convert(date, transactionDate) order by idorder))
as 'invoice no'
, *
from TR_Order
``` | Create invoice number in sql server stored procedure | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I think it is a common situation, but I am not able to get the logic.
I have a table as follows.
```
PersonID SchoolID EndDate
-------- -------- -------
1 ABC 2013
1 DEF 2014
1 GHI 2010
2 XYZ 2013
2 UVW 2011
```
I want the following output
```
PersonID SchoolID EndDate
-------- -------- -------
1 DEF 2014
2 XYZ 2013
```
Basically, I want the latest school for each person. Hence, I try to do something like
```
SELECT SchoolID, PersonID,EndDate FROM tbl
GROUP BY PersonID
HAVING EndDate = MAX(ENDDATE)
ORDER BY EndDate DESC
```
But I got an error saying EndDate is invalid in a HAVING clause because it is not contained in an aggregate function or group by clause.
I tried doing this
```
SELECT SchoolID, PersonID,MAX(EndDate) FROM tbl
GROUP BY PersonID
ORDER BY EndDate DESC
```
I get an error saying SchoolID is invalid in the select list because of the same reason.
What am I missing here? | ```
with cte as (SELECT *,
ROW_NUMBER() OVER(PARTITION BY PersonID
ORDER BY EndDate DESC) AS RN
FROM Table1)
select PersonId, SchoolId, EndDate from cte
where RN = 1
```
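The CTE can be checked on the sample rows with Python's sqlite3 (SQLite 3.25+ for ROW_NUMBER; table name `Table1` as in the fiddle):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (PersonID INTEGER, SchoolID TEXT, EndDate INTEGER);
INSERT INTO Table1 VALUES
  (1, 'ABC', 2013), (1, 'DEF', 2014), (1, 'GHI', 2010),
  (2, 'XYZ', 2013), (2, 'UVW', 2011);
""")

rows = con.execute("""
WITH cte AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY PersonID
                                 ORDER BY EndDate DESC) AS RN
    FROM Table1)
SELECT PersonID, SchoolID, EndDate
FROM cte
WHERE RN = 1
ORDER BY PersonID
""").fetchall()

print(rows)  # the latest school for each person
```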
see [SqlFiddle](http://sqlfiddle.com/#!3/43003/4) | You have to wrap `MAX(Date)` in a subquery.
```
SELECT SchoolID, PersonID, Date
FROM table1 t
WHERE Date =
(SELECT MAX(Date) FROM table1
WHERE PersonID = t.PersonID);
```
Note: this will give multiple rows for one PersonID if there are multiple dates tied for the max. | Select top row based on grouping | [
"",
"sql",
"sql-server-2008-r2",
""
] |
Is there a *simple* way to get `1` for a non-`NULL` column and `0` for a `NULL` column (e.g., in a `SELECT`), without adding a scalar function to do it? | Three basic options:
1. `CASE WHEN`:
```
CASE WHEN [TheColumn] IS NULL THEN 0 ELSE 1 END AS [MyFlag]
```
or if you want it as a `BIT`:
```
CAST(CASE WHEN [TheColumn] IS NULL THEN 0 ELSE 1 END AS BIT) AS [MyFlag]
```
Whether that's "simple" is a matter of opinion...
2. **If** you know non-`NULL` values will never be `0` (for an `INT` column) or `'FALSE'` (for a character column), you can shorten that a bit:
```
CAST(COALESCE([TheColumn], 0) AS BIT) AS [MyFlag]
```
...but again note the assumption about `0`/`'FALSE'`.
3. On SQL Server 2012+, you can use `IIF` *(thank you, [Martin Smith](https://stackoverflow.com/users/73226/martin-smith))*:
```
IIF(TheColumn IS NULL, 0, 1) AS [MyFlag]
``` | My first option would be the already posted answer by the OP, but an alternative is
```
select isnull(column * 0 + 1, 0)
```
Basically, if `column` is not `NULL`, then `column * 0 + 1` will be `1`, otherwise it will be `NULL`. I'm assuming `column` is an integer column, as suggested in the comments on the question, but it's usable for any type so long as you've got a function that converts that type to an integer (and returns `NULL` if and only if its input is).
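A quick way to see the NULL propagation at work, using Python's sqlite3 (COALESCE is the portable spelling of SQL Server's ISNULL; the three test values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL * 0 + 1 stays NULL, so COALESCE turns it into 0;
# any non-NULL value becomes 1.
rows = con.execute("""
SELECT id, coalesce(v * 0 + 1, 0) AS flag
FROM (SELECT 1 AS id, 42 AS v
      UNION ALL SELECT 2, 0
      UNION ALL SELECT 3, NULL) AS vals
ORDER BY id
""").fetchall()

print(rows)  # flag is 1 for 42 and 0, and 0 for NULL
```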
It's similar in spirit to Jayvee's answer, but avoids any problems with any division by zero, and should be a bit easier to understand. | Simple conversion of possibly-NULL value to flag | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SampleID | MemberId | MemberType | Status | ExpDate
1 | 222 | AA | Active | NULL
2 | 222 | AA | Pending | NULL
3 | 222 | AA | Cancelled | 2014-06-04 13:35:04.267
4 | 333 | AA | Pending | NULL
5 | 333 | AA | Cancelled | 2014-06-04 13:35:04.267
6 | 444 | AA | Cancelled | 2014-06-04 13:35:04.267
```
In the above table there are 3 rows for Member ID 222 with multiple status like Active, Pending and Cancelled.
If `MemberId` has more than one status I need to order them having Active first, then Pending, and Cancelled last. The order should stand even if not all the status values are there. | Try to use [ROW\_NUMBER()](http://msdn.microsoft.com/en-us/library/ms186734.aspx) with a CASE for `Status` in the ORDER BY clause:
```
SELECT * FROM
(
SELECT T.*,
ROW_NUMBER() OVER (PARTITION BY MemberID
ORDER BY CASE Status
WHEN 'Active' THEN 1
WHEN 'Pending' THEN 2
WHEN 'Cancelled' THEN 3
END ) as RN
FROM T) as T1
WHERE RN=1
```
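A quick check on the sample rows with Python's sqlite3 (SQLite 3.25+ for ROW_NUMBER; the NULL ExpDate column is omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T (SampleID INTEGER, MemberId INTEGER, Status TEXT);
INSERT INTO T VALUES
  (1, 222, 'Active'), (2, 222, 'Pending'), (3, 222, 'Cancelled'),
  (4, 333, 'Pending'), (5, 333, 'Cancelled'),
  (6, 444, 'Cancelled');
""")

rows = con.execute("""
SELECT MemberId, Status FROM (
    SELECT T.*, ROW_NUMBER() OVER (
        PARTITION BY MemberId
        ORDER BY CASE Status WHEN 'Active'    THEN 1
                             WHEN 'Pending'   THEN 2
                             WHEN 'Cancelled' THEN 3 END) AS RN
    FROM T) AS ranked
WHERE RN = 1
ORDER BY MemberId
""").fetchall()

print(rows)  # best-ranked status per member
```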
`SQLFiddle demo` | I suggest you use a subquery using a case for the status and append a number to the status as 1. Active, 2. Pending, 3. Cancelled since they are not alphabetical, and then you can do a max on the status to get what you want.
Lidia D. | Select row one by one based on hierarchy using group by | [
"",
"sql",
"sql-server-2012",
""
] |
I'm looking to achieve my goals as described below using one single query, as opposed to multiple queries as I currently have to use.
The problem I am having is that data from the `wp_usermeta` table is stored as `meta_key`/`meta_value` pairs in two columns, as opposed to each type of data having its own column. While the reasoning for this makes sense, it does mean I'm stumped at this point.
The database in question is for a WordPress site, should anyone wish to replicate this.
# My goals
For each user who has at least one image/video post I need to grab the following details -
```
+-----------------------+---------------+------------------------------------------------+
| Description | Table | Column |
+-----------------------+---------------+------------------------------------------------+
| User ID | `wp_users` | `ID` |
| Display Name | `wp_users` | `display_name` |
| First Name | `wp_usermeta` | `meta_value` (WHERE `meta_key` = 'first_name' |
| Description | `wp_usermeta` | `meta_value` (WHERE `meta_key` = 'description' |
| Facebook profile link | `wp_usermeta` | `meta_value` (WHERE `meta_key` = 'facebook' |
| Google+ profile link | `wp_usermeta` | `meta_value` (WHERE `meta_key` = 'google_plus' |
| Twitter profile link | `wp_usermeta` | `meta_value` (WHERE `meta_key` = 'twitter' |
+-----------------------+---------------+------------------------------------------------+
```
# My current solution
First I select the ID and display name of all users who have at least 1 image/video post (this is one single query) -
```
SELECT DISTINCT `wp_users`.`ID`, `wp_users`.`display_name`
FROM `wp_posts`
INNER JOIN `wp_users`
WHERE `wp_posts`.`post_type` = "attachment"
AND `wp_posts`.`post_status` = "inherit"
AND `wp_posts`.`post_author` = `wp_users`.`ID`
AND (
`wp_posts`.`post_mime_type` LIKE "image%"
OR `wp_posts`.`post_mime_type` LIKE "video%"
)
```
Next, I have to loop through each result from the first query and select the first name, description and social media links for each (this is one example, for `user_id = 2`) -
```
SELECT `wp_usermeta`.`meta_key`, `wp_usermeta`.`meta_value`
FROM `wp_usermeta`
WHERE `wp_usermeta`.`user_id` = 2
AND (
`wp_usermeta`.`meta_key` = 'first_name'
OR `wp_usermeta`.`meta_key` = 'description'
OR `wp_usermeta`.`meta_key` = 'facebook'
OR `wp_usermeta`.`meta_key` = 'google_plus'
OR `wp_usermeta`.`meta_key` = 'twitter'
);
```
As part of the loop where the second query is run I also have to insert those results into the results from the first. All of this not only means extra code, but leads to a longer execution time.
My full code can be found here - <http://pastebin.com/P2jv3WTt>
Is this goal achievable with MySQL, or is it simply not something it is able to do? Thanks.
# What I've tried
I have tried to join to the `wp_usermeta` table as follows, outputting the results as a named column, but there is an issue - the only results are for users who have an entry for every single `meta_key` in the `wp_usermeta` table, but some don't (no Twitter profile link, for example).
```
SELECT DISTINCT
u.`ID`,
u.`display_name`,
m1.`meta_value` AS first_name,
m2.`meta_value` AS description,
m3.`meta_value` AS facebook,
m4.`meta_value` AS google_plus,
m5.`meta_value` AS twitter
FROM `wp_users` u
JOIN `wp_posts` p
JOIN `wp_usermeta` m1 ON (m1.user_id = u.id AND m1.meta_key = 'first_name')
JOIN `wp_usermeta` m2 ON (m2.user_id = u.id AND m2.meta_key = 'description')
JOIN `wp_usermeta` m3 ON (m3.user_id = u.id AND m3.meta_key = 'facebook')
JOIN `wp_usermeta` m4 ON (m4.user_id = u.id AND m4.meta_key = 'google_plus')
JOIN `wp_usermeta` m5 ON (m5.user_id = u.id AND m5.meta_key = 'twitter')
WHERE p.`post_type` = "attachment"
AND p.`post_status` = "inherit"
AND p.`post_author` = u.`ID`
AND (
p.`post_mime_type` LIKE "image%"
OR p.`post_mime_type` LIKE "video%"
)
ORDER BY RAND()
``` | Most straightforward solution would be like this. Simply do a join for each meta-value you need.
By using `LEFT OUTER JOIN` instead of `INNER JOIN` to join the meta-data, you will still keep users for which some of these meta-data don't exist.
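Here is a toy illustration of why the OUTER join matters, runnable with Python's sqlite3 (a cut-down, hypothetical version of the schema with just two meta keys):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wp_users (ID INTEGER, display_name TEXT);
CREATE TABLE wp_usermeta (user_id INTEGER, meta_key TEXT, meta_value TEXT);
INSERT INTO wp_users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO wp_usermeta VALUES
  (1, 'first_name', 'Alice'), (1, 'twitter', '@alice'),
  (2, 'first_name', 'Bob');               -- bob has no twitter row
""")

rows = con.execute("""
SELECT u.display_name, fn.meta_value, tw.meta_value
FROM wp_users u
LEFT OUTER JOIN wp_usermeta fn ON fn.user_id = u.ID AND fn.meta_key = 'first_name'
LEFT OUTER JOIN wp_usermeta tw ON tw.user_id = u.ID AND tw.meta_key = 'twitter'
ORDER BY u.ID
""").fetchall()

print(rows)  # bob still appears, with NULL for the missing twitter meta
```

The same pattern, written out for the real tables: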
```
SELECT DISTINCT u.ID, u.DisplayName,
fn.meta_value AS firstname, fb.meta_value AS facebook, (etc.)
FROM wp_Users u
INNER JOIN wp_posts p ON p.post_author = u.ID
LEFT OUTER JOIN wp_usermeta fn ON fn.UserId = u.ID AND fn.meta_key = 'first_name'
LEFT OUTER JOIN wp_usermeta fb ON fb.UserId = u.ID AND fb.meta_key = 'facebook'
--- (etc. for each meta column you need to join)
WHERE p.post_type = 'attachment'
AND p.post_status = 'inherit'
AND (p.post_mime_type LIKE 'image%'
OR p.post_mime_type LIKE 'video%')
``` | I would go with something like this:
```
SELECT DISTINCT
`wp_users`.`ID`,
`wp_users`.`display_name`,
`wp_usermeta`.`meta_key`,
`wp_usermeta`.`meta_value`
FROM
`wp_posts`
INNER JOIN
`wp_users` ON (`wp_posts`.`post_author` = `wp_users`.`ID`)
LEFT JOIN
`wp_usermeta` ON (`wp_usermeta`.`user_id` = `wp_users`.`ID`)
WHERE
`wp_posts`.`post_type` = 'attachment'
AND `wp_posts`.`post_status` = 'inherit'
AND (`wp_posts`.`post_mime_type` LIKE 'image%'
OR `wp_posts`.`post_mime_type` LIKE 'video%')
AND `wp_usermeta`.`meta_key`
IN ('first_name', 'description', 'facebook', 'google_plus', 'twitter');
```
LEFT JOIN must return you posts and users even if there are no meta associated with users. | Consolidate many queries into one | [
"",
"mysql",
"sql",
"wordpress",
""
] |
I researched this and found that a `text` column in SQL Server can store a lot more than 8000 characters. But when I run the following insert in the text column, it only inserts 8000 characters:
```
UPDATE a
SET [File] = b.Header + CHAR(13) + CHAR(10) + d.Detail + c.Trailer + CHAR(13) + CHAR(10) + CHAR(26)
FROM Summary a
JOIN #Header b ON b.SummaryId = a.SummaryId
JOIN #Trailer c ON c.SummaryId = a.SummaryId
JOIN #Detail d ON d.SummaryId = a.SummaryId
WHERE
a.SummaryId = @SummaryId
```
I am trying to generate a fixed width flat file and every row should be 3900 characters long, and they are in the respective temp tables. But when I do the insert in the permanent table, the Trailer data gets truncated.
I am adding char(10) + char(13) to add carriage return and line feed and char(26) for end of file, and it seems like they are adding characters to the fixed width layout. | The problem with your code is not the data type of the field that you store the value in, it's the type of the value that you put together to store in it.
The type of `b.Header` is not `text` but `varchar`, which is used as type for the whole expression. When the strings are concatenated, the result will be truncated to fit in a `varchar` value.
If you cast the first string to `text`, the whole expression gets that type, and can become longer than 8000 characters:
```
SET [File] = cast(b.Header as text) + CHAR(13) + CHAR(10) + d.Detail + c.Trailer + CHAR(13) + CHAR(10) + CHAR(26)
```
Naturally you should transition into using the new type `varchar(max)` instead of `text`, but that is not the reason for your problem. | According to <http://msdn.microsoft.com/en-us/library/ms187993.aspx> `TEXT` fields are deprecated. Use `VARCHAR(MAX)` fields instead. They should support 2GB in text. | Text column not storing more than 8000 characters | [
"",
"sql",
"sql-server-2008",
"flat-file",
""
] |
Is there a way to retrieve the last X number of results from a query?
For example - if I want the first ten results, I see that the example here works: [setMaxResults for Spring-Data-JPA annotation?](https://stackoverflow.com/questions/9314078/setmaxresults-for-spring-data-jpa-annotation)
```
public interface UserRepository extends Repository<User, Long> {
List<User> findByUsername(String username, Pageable pageable);
}
//and then I could call it like this
Pageable topTen = new PageRequest(0, 10);
List<User> result = repository.findByUsername("Matthews", topTen);
```
But how do I get the LAST ten records?
The only way I could think of doing it would be to flip the order in the query (findByUsernameDesc, assuming original results were ascending) and then iterate through the list backwards so I can process it in the order I wanted (ascending).
That seems like an ugly way to do it. Is there a way to have the query give me the last X results in the order I want? | The question is how efficient such an option would be, especially against large data sets.
I would go for a descending index, which I could query using the maxResult support, as you already figured it out.
This is in no way a hack. If you were to match 100M results only to get the last X ones, this method would yield the best results. | Spring Data JPA 1.7 has introduced 'top' and 'first' as keywords in derived queries, so now we can do:
```
public interface UserRepository extends Repository<User, Long> {
List<User> findFirst10ByUsername(String username);
}
```
Check it out - [Spring Data Release Train Evans Goes GA](https://spring.io/blog/2014/09/09/spring-data-release-train-evans-goes-ga) | Spring Data JPA Java - get Last 10 records from query | [
"",
"sql",
"spring",
"hibernate",
"jpa",
"spring-data-jpa",
""
] |
I have actually three tables. They are categories, users and userCategories. As you can see a user can be assigned to multiple categories through the m-n userCategories table.
I want to select all categories for a specific user. If the category is assigned to that given user, the userFk-column should be the userId (for example 5), otherwise NULL.
I've got a similar result with this query. But is there a way to simplify this query?
```
select *
from (
SELECT `categoryId`, `category`, userFk FROM `category` c
left join usercategories uc on c.categoryId = uc.catFk
where userFk = 5
union
SELECT `categoryId`, `category`, userFk FROM `category` c
left join usercategories uc on c.categoryId = uc.catFk
where userFk != 5 OR userFk is NULL
) as result
group by categoryId
``` | If you put the userid in the left join clause, it would save you the union
```
SELECT `categoryId`, `category`, userFk
FROM `category` c
LEFT JOIN usercategories uc on c.categoryId = uc.catFk AND userFk = 5
```
That way, the only possible outcome of the query would be `userFK=5` or `NULL`, thus saving you the WHERE clause too.
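To see the difference concretely, here is a runnable miniature of the two tables (hypothetical names and rows, using Python's sqlite3 as a stand-in for MySQL):

```python
import sqlite3

# Miniature versions of the category / usercategories tables (hypothetical
# rows), using Python's sqlite3 as a stand-in for MySQL.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category (categoryId INTEGER, category TEXT);
CREATE TABLE usercategories (catFk INTEGER, userFk INTEGER);
INSERT INTO category VALUES (1, 'news'), (2, 'sports'), (3, 'music');
INSERT INTO usercategories VALUES (1, 5), (2, 7);
""")

# The user filter lives in the ON clause, so categories the user does not
# have still appear, with userFk = NULL.
rows = con.execute("""
    SELECT c.categoryId, c.category, uc.userFk
    FROM category c
    LEFT JOIN usercategories uc
      ON c.categoryId = uc.catFk AND uc.userFk = 5
    ORDER BY c.categoryId
""").fetchall()
print(rows)
```

If the `userFk = 5` test were moved to a WHERE clause instead, the NULL rows would be filtered out and the unassigned categories would vanish from the result.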
Since the categories your user doesn't have can only show up once in a `userFK = NULL` tuple, you don't need the `group by` either, unless there's some repetition I'm not seeing. | If you want the categories of a user you could use:
```
SELECT `categoryId`, `category`, userFk FROM `category` c
left join usercategories uc on c.categoryId = uc.catFk
where userFk = 5
```
I suppose userCategories has no NULL values; a user without categories should not be in the userCategories table. | Simplify SQL UNION Query | [
"",
"sql",
"left-join",
"union",
""
] |
I am having a bit of trouble with getting some MS Access SQL to work. Here is the high level:
I have values in one table, `by15official` that I need to use to update related records in another table, `investmentInfo`. Pretty straight forward except there are quite a few joins I need to perform to make sure the right record is updated in the `investmentTable` and I think I could figure this out with regular sql, but Access is not playing nicely. The following is my sql I am trying to use (which results in this error: "Syntax error (missing operator) in query expression ..."
```
update ii
set ii.investmentType = by15.InvestmentType
from investmentInfo as ii
inner join
(select by15Official.InvestmentType, by15Official.InvestmentNumber
from (((by15official left join investmentinfo on by15official.InvestmentNumber = investmentInfo.investID)
left join fundingSources on fundingSources.investId = investmentInfo.id)
left join budgetInfo on budgetInfo.fundingID = fundingSources.id)
where investmentinfo.submissionType = 2
and budgetInfo.byYear = 2015
and budgetInfo.type = 'X') as by15
on by15.InvestmentNumber = ii.investID
```
This seems like it should work, I am trying to join this group of tables that provide the `investmentType` which is what I want to update in the main table `investmentInfo`. Thoughts? Can this be done in Access? I have googled around and found the above which I adapted to meet my needs (actually I am pretty sure I found the above on SO).
Thoughts?
Thank you very much! | I did get some help from someone over at the MS forums. The solution was to format my SQL slightly differently. Here is the code that eventually worked.
```
UPDATE
(
(
by15official LEFT JOIN investmentinfo
ON by15official.InvestmentNumber = investmentInfo.investID
)
LEFT JOIN
fundingSources
ON investmentInfo.id = fundingSources.investId
)
LEFT JOIN budgetInfo
ON fundingSources.id = budgetInfo.fundingID
SET investmentInfo.investmentType = by15official.InvestmentType
WHERE (investmentinfo.submissionType = 2)
And (budgetInfo.byYear = 2015)
```
Perhaps the above Access SQL can help others.
Basically you want to do the update and the joins before doing the set and where clauses. Makes sense, and I am sure if I were better skilled at writing SQL I would have known that. | I know this doesn't use the original code, but this is a generic example with the proper syntax:
```
UPDATE ([My First Table]
LEFT JOIN [My Second Table] ON [My Second Table].[RequestID] =
[My First Table].[ID])
INNER JOIN [My Third Table] ON [My Second Table].[Some Field] =
[My Third Table].[Matching Field]
SET
[My First Table].[Approved By] = [My Third Table].[Approver],
[My First Table].[Approval Date] = [My Second Table].[Modified]
WHERE [My First Table].[Review Status] LIKE '*Approved*'
``` | Using MS Access how to perform an update with multiple joins and where clauses? | [
"",
"sql",
"ms-access-2010",
""
] |
I have two tables: **WebPages**(including parent site ID) and **Results**(including parent webpage ID).
I want to write a stored procedure that updates the column "**FirstSeen**" *for all records with a specific site ID*.
For example (this code is not working):
```
CREATE PROCEDURE [dbo].[MySP]
@SiteId int
AS
BEGIN
SET NOCOUNT ON;
UPDATE [dbo].[Results]
SET [dbo].[Results].[FirstSeen] = GetDate()
WHERE [dbo].[WebPages].[Id] = [dbo].[Results].[WebPages_Id] AND [dbo].[WebPages].[WebSites_Id]=@SiteId
END
GO
```
I'm getting these errors:
```
Msg 4104, Level 16, State 1, Procedure MarkAllResultsAsReaded, Line 10
The multi-part identifier "dbo.WebPages.Id" could not be bound.
Msg 4104, Level 16, State 1, Procedure MarkAllResultsAsReaded, Line 10
The multi-part identifier "dbo.WebPages.WebSites_Id" could not be bound.
```
Do you know how to solve this?
I'm using SQL-Server 2008. | You cannot reference tables other than the one being updated in an UPDATE without explicitly joining them, like this:
```
UPDATE r
SET r.[FirstSeen] = GetDate()
FROM [dbo].[Results] r
INNER JOIN [dbo].[WebPages] p ON p.[Id] = r.[WebPages_Id]
WHERE p.[WebSites_Id]=@SiteId
```
The reason why you need a join is that your `UPDATE` references two tables - `Results` and `WebPages`. | You want to join both tables and do the UPDATE like below, because you are actually trying to update all the rows in the `Results` table that have a matching record in the `WebPages` table.
```
UPDATE r
SET [FirstSeen] = GetDate()
FROM [dbo].[Results] r
JOIN [dbo].[WebPages] w
ON w.[Id] = r.[WebPages_Id]
AND w.[WebSites_Id]=@SiteId
``` | Update table with complex WHERE statement | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've come across many similar posts on this but none I've found got this specific.
Here's my sample data:
```
ID CID NARID NATID NADate EID AEDate
1 1655 1 4 12/1/12 202 6/4/14 11:37:01
2 1655 1 7 12/1/12 202 6/4/14 11:37:12
5 1655 2 65 1/13/14 587 6/4/14 11:37:00
29 3165 1 6 4/15/14 7 6/4/14 11:37:00
300 3165 1 6 6/30/14 7 6/4/14 11:33:50
295 3165 2 64 6/11/14 7 6/4/14 11:37:00
302 3165 2 63 7/24/14 7 6/4/14 11:41:24
303 3165 2 67 7/24/14 7 6/4/14 15:59:06
```
I first am looking to get the max NADate for each CID & NARID:
```
ID CID NARID NATID NADate EID AEDate
1 1655 1 4 12/1/12 202 6/4/14 11:37:01
2 1655 1 7 12/1/12 202 6/4/14 11:37:12
5 1655 2 65 1/13/14 587 6/4/14 11:37:00
300 3165 1 6 6/30/14 7 6/4/14 11:33:50
302 3165 2 63 7/24/14 7 6/4/14 11:41:24
303 3165 2 67 7/24/14 7 6/4/14 15:59:06
```
Then from these results, get the record with the max AEDate (along with all other corresponding fields):
```
ID CID NARID NATID NADate EID AEDate
2 1655 1 7 12/1/12 202 6/4/14 11:37:12
5 1655 2 65 1/13/14 587 6/4/14 11:37:00
300 3165 1 6 6/30/14 7 6/4/14 11:33:50
303 3165 2 67 7/24/14 7 6/4/14 15:59:06
```
The database type is MSSQL 2005. | I think the easiest way is to use `dense_rank()`:
```
select t.*
from (select t.*,
dense_rank() over (partition by cid, narid
order by nadate desc, cast(aedate as date) desc
) as seqnum
from table t
) t
where seqnum = 1;
```
You need the `cast(aedate as date)` so the query will consider only the date portion of `aedate`. You need the `dense_rank()` so the query will return all rows on the most recent date. | You can use `row_number()` to assign numbers within each `(cid, narid)` group. If you assign the row numbers ordered by `nadate desc, aedate desc`, the rows with row number `1` will be the rows you're looking for:
```
select *
from (
select row_number() over (
partition by cid, narid
order by nadate desc, aedate desc) as rn
, *
from YourTable
) as SubQueryAlias
where rn = 1
``` | MSSQL Get max value from multiple columns by an ID | [
"",
"sql",
"sql-server",
"greatest-n-per-group",
"top-n",
""
] |
We have a stored procedure that is used to allow users to search in a table with 20 million records and 40 columns wide. There are about 20 different columns they can search on (any combination) from and all those columns are in the `WHERE` clause.
Furthermore, each column is checked for NULL and needs to support searching with just part of the data.
Here is an example
```
(
@FirstName IS NULL
OR (RTRIM(UPPER(FirstName)) LIKE RTRIM(UPPER(@FirstName)) + '%')
)
AND (@LastName IS NULL)
```
What is the best way to rewrite this stored procedure? Should I break this stored procedure into multiple small stored procedures? If so, how? I will need to allow users to search on any combination of those columns.
When I look at the execution plan, regardless of what columns are passed, it always does an index scan. | I had exactly this situation years ago, with millions of rows and numerous filter parameters, and the best method is to use dynamic SQL. Construct a SQL statement based on the parameters that have values, then execute the SQL statement. (EXEC sp_executesql @sql)
The select clause of the sql statement is static but the from clause and the where clause is based on the parameters.
```
CREATE PROCEDURE dbo.DynamicSearch
@FirstName VARCHAR(20),
@LastName VARCHAR(20),
@CompanyName VARCHAR(50)
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX) = N''
DECLARE @Select NVARCHAR(MAX) = N'SELECT ColA, ColB, ColC, ColD '
DECLARE @From NVARCHAR(MAX) = N'From Person'
DECLARE @Where NVARCHAR(MAX) = N''
IF @FirstName IS NOT NULL
Begin
Set @Where = @Where + 'FirstName = ''' + @FirstName + ''''
End
IF @LastName IS NOT NULL
Begin
if len(@Where) > 0
Begin
Set @Where = @Where + ' AND '
End
Set @Where = @Where + 'LastName = ''' + @LastName + ''''
End
IF @CompanyName IS NOT NULL
Begin
if len(@Where) > 0
Begin
Set @Where = @Where + ' AND '
End
Set @From = @From + ' inner join Company on person.companyid = company.companyid '
Set @Where = @Where + 'company.CompanyName = ''' + @CompanyName + ''''
End
If Len(@Where) > 0
Begin
Set @Where = ' Where ' + @Where
End
Set @SQL = @Select + @From + @Where
EXECUTE sp_executesql @sql
END
``` | To go down the dynamic SQL route you would use something like:
```
CREATE PROCEDURE dbo.SearchSomeTable
@FirstName VARCHAR(20),
@LastName VARCHAR(20),
@AnotherCol INT
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX) = N'SELECT SomeColumn FROM SomeTable WHERE 1 = 1',
@ParamDefinition NVARCHAR(MAX) = N'@FirstName VARCHAR(20),
@LastName VARCHAR(20),
@AnotherCol INT';
IF @FirstName IS NOT NULL
SET @SQL = @SQL + ' AND FirstName = @FirstName';
IF @LastName IS NOT NULL
SET @SQL = @SQL + ' AND LastName = @LastName';
IF @AnotherCol IS NOT NULL
SET @SQL = @SQL + ' AND AnotherCol = @AnotherCol';
EXECUTE sp_executesql @sql, @ParamDefinition, @FirstName, @LastName, @AnotherCol;
END
```
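The same optional-filter idea can be sketched outside T-SQL. This hypothetical Python/sqlite3 version appends a predicate only for each parameter that was supplied, while keeping the values themselves as bound parameters rather than concatenated strings:

```python
import sqlite3

# Hypothetical optional-filter search: predicates are appended only for the
# parameters that were supplied; values stay as bound parameters.
def search(con, first_name=None, last_name=None):
    sql = "SELECT id FROM people WHERE 1 = 1"
    params = []
    if first_name is not None:
        sql += " AND first_name = ?"
        params.append(first_name)
    if last_name is not None:
        sql += " AND last_name = ?"
        params.append(last_name)
    return con.execute(sql + " ORDER BY id", params).fetchall()

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (id INT, first_name TEXT, last_name TEXT);
INSERT INTO people VALUES (1, 'Ann', 'Smith'), (2, 'Bob', 'Smith');
""")
result_one = search(con, first_name='Ann')
result_both = search(con, last_name='Smith')
print(result_one, result_both)
```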
Otherwise you will need to use the `OPTION (RECOMPILE)` [query hint](http://msdn.microsoft.com/en-gb/library/ms181714.aspx) to force the query to recompile each time it is run to get the optimal plan for the particular parameters you have passed. | Search from Multiple columns in Where clause for SQL Server | [
"",
"sql",
"sql-server",
"stored-procedures",
"sql-execution-plan",
""
] |
I'm trying to make a summary report based off my database. It must be grouped by location by week by year and I want to total the amount of orders in that grouping.
It needs to look something like:
```
Year Week Location Total Amount
2014 1 Atlanta 22,000
2014 1 Schaumberg 32,566
2014 1 Dallas 32,567
2014 1 New York 32,356
2014 2 Atlanta 22,000
2014 2 Schaumberg 32,566
2014 2 Dallas 32,567
2014 2 New York 32,356
```
My table (system\_order) structure is setup like this:
```
Order Amount Location Week Year
1 1895 Schaumberg 1 2014
2 1295 Atlanta 1 2014
3 1895 Atlanta 1 2014
4 1895 New York 1 2014
5 1495 Dallas 2 2014
6 1695 Schaumberg 2 2014
7 1895 Schaumberg 2 2014
8 1895 Dallas 2 2014
9 1895 New York 2 2014
```
Can this be done in one sql statement? | ```
SELECT Year, Week, Location, sum(Amount) as 'Total Amount'
FROM [system_order]
GROUP BY Location, Week, Year
``` | Usually you have to put the columns you select into the group by clause. The exception is if they are inside an aggregate function in the select clause.
This right here should work.
```
Select Year
,Week
,Location
,sum(amount) as 'Total_Amount'
from system_order
group by Year
,Week
,Location
``` | Making a sql statement with multiple grouping | [
"",
"mysql",
"sql",
""
] |
I want to add the values in the columns `cost` and `amt` when there are rows with `flag` 1 and 2 for the same person `id` on the same date. Please help. Thank you. The columns are:
```
id date cost amt flag
455 05/25/2013 150 110 1
455 05/25/2013 20 45 2
456 08/17/2013 140 60 1
456 08/17/2013 15 20 2
457 09/28/2013 135 10 1
457 09/28/2013 8 40 2
458 11/09/2013 10 30 1
```
output should be:
```
id date cost amt flag
455 05/25/2013 170 155 1
456 08/17/2013 155 80 1
457 09/28/2013 143 50 1
458 11/09/2013 10 30 1
``` | Just for diversity, check out my solution. It uses `over (partition by )` for calculation and `distinct` for filtering out the duplicates.
```
select distinct o.ID, o.Date,
SUM(o.COST) OVER(PARTITION BY o.ID, o.Date) as cost
,SUM(o.AMT) OVER(PARTITION BY o.ID, o.Date) as amt
,MIN(FLAG) OVER(PARTITION BY o.ID, o.Date) as flag
from orders o
order by o.ID, o.Date
```
[SqlFiddle proof](http://sqlfiddle.com/#!3/181a3/26)
It's inspired by [this article](http://realworldsql.com/tag/sql-sum-partition-by/). | Not really sure what you want to do with flag, but you need `GROUP BY` like:
```
SELECT id, date, SUM(cost), Sum(amt), 1 as flag
FROM yourTable
GROUP BY id,date
``` | sum the values in column for same date and id | [
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
```
IF((SELECT COUNT(*) FROM IPA_PRCADJ_HDR WHERE TRM_CODE = 41) = 0)
THEN
select '111111' from dual;
ELSE
SELECT '0000000' FROM DUAL;
END IF;
```
showing error..
Error starting at line 73 in command:
```
END IF
ERROR REPORT:
Unknown Command
``` | You can't build conditions like this. Try to build them inside the `select` statement like this:
```
select case
when exists
( select 1
from IPA_PRCADJ_HDR
where TRM_CODE = 41
)
then '111111'
else '0000000'
end
from dual
``` | This takes into account the scenario when the table is empty.
```
SELECT CASE WHEN MAX(TRM_CODE) IS NULL THEN 'FALSE' ELSE 'TRUE' END FROM (SELECT * FROM IPA_PRCADJ_HDR WHERE TRM_CODE = 41)
``` | Check if table is empty in oracle | [
"",
"sql",
"oracle",
""
] |
I need a query that will produce a non duplicate list of all of our members and their corresponding states & countries (along with some other data that is joined from other tables). Each member may have 0-many MemberAddress records. If a member has MemberAddress records, I would like to join only to the record that has been modified most recently. If the member does not have any associated MemberAddress records, I still want the member to show in the list, but the state and country would then be NULL values.
```
SELECT m.member, ma.state, ma.country FROM Member m
LEFT OUTER JOIN MemberAddress ma ON m.member = ma.member
INNER JOIN (SELECT Member, MAX(Modified) AS MaxDate
FROM MemberAddress
GROUP BY Member) AS m2
ON (ma.Member = m2.Member AND ma.Modified = m2.MaxDate)
```
This query removes the duplicates caused when a member has multiple MemberAddress records, however it does not allow for members that do not have any MemberAddress records.
How can I alter this query to also show members that do not have any MemberAddress records?
Thanks!!
Edited to add: I'm using SQL 2005 | You were on the right track, but the join between ma and m2 has to, itself, be an entire subquery. The problem is that your INNER JOIN applies to the whole query, not just to the relationship between ma and m2:
```
SELECT m.member, mx.state, mx.country
FROM Member m
LEFT OUTER JOIN (
SELECT ma.state, ma.country, ma.member from MemberAddress ma
INNER JOIN (SELECT Member, MAX(Modified) AS MaxDate
FROM MemberAddress
GROUP BY Member) AS m2
ON (ma.Member = m2.Member AND ma.Modified = m2.MaxDate)
) mx ON m.member = mx.member
```
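A runnable miniature of this pattern (hypothetical tables and rows, Python's sqlite3 standing in for SQL Server 2005):

```python
import sqlite3

# Member 1 has two addresses (we want only the most recently modified one);
# member 2 has none and must still appear with a NULL state.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Member (member INT);
CREATE TABLE MemberAddress (member INT, state TEXT, modified TEXT);
INSERT INTO Member VALUES (1), (2);
INSERT INTO MemberAddress VALUES
  (1, 'OLD', '2013-01-01'),
  (1, 'NEW', '2014-01-01');
""")
rows = con.execute("""
    SELECT m.member, mx.state
    FROM Member m
    LEFT JOIN (
        SELECT ma.member, ma.state
        FROM MemberAddress ma
        JOIN (SELECT member, MAX(modified) AS maxdate
              FROM MemberAddress GROUP BY member) m2
          ON ma.member = m2.member AND ma.modified = m2.maxdate
    ) mx ON m.member = mx.member
    ORDER BY m.member
""").fetchall()
print(rows)
```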
Assuming I didn't typo anything (except those parentheses, which I fixed). | Your version is quite close. You can do it using two left joins:
```
SELECT m.member, ma.state, ma.country
FROM Member m LEFT OUTER JOIN
MemberAddress ma
ON m.member = ma.member LEFT OUTER JOIN
(SELECT Member, MAX(Modified) AS MaxDate
FROM MemberAddress
GROUP BY Member
) m2
ON (ma.Member = m2.Member AND ma.Modified = m2.MaxDate);
``` | SELECT only rows with either the MAX date or NULL | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
I have a dob field in my MySQL table that's of type `date`. Just a small, trivial example is something like this:
```
mysql> select dob from players limit 5;
+------------+
| dob |
+------------+
| 1983-12-02 |
| 1979-01-01 |
| 1989-05-11 |
| 1976-03-24 |
| 1989-09-12 |
+------------+
```
I am trying to calculate ages with decimal points using today's date. So if your birthday is June 1, 1981 (1981-06-01) and today is June 7, that makes you 33; you'd be 33 + (6/365), or about 33.02 years old. What's the easiest way to calculate this using SQL? | Usually DOB calculation is pretty easy in MySQL when you want to calculate the years without any fraction, something like:
```
mysql> select timestampdiff(YEAR,'1981-06-01',now());
+----------------------------------------+
| timestampdiff(YEAR,'1981-06-01',now()) |
+----------------------------------------+
| 33 |
+----------------------------------------+
```
But since you need the fraction also then this should do the trick
```
mysql> select format(datediff(curdate(),'1981-06-01') / 365.25,2);
+-----------------------------------------------------+
| format(datediff(curdate(),'1981-06-01') / 365.25,2) |
+-----------------------------------------------------+
| 33.02 |
+-----------------------------------------------------+
```
A year is counted as 365.25 days to average out leap years.
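The same arithmetic can be checked in plain Python with a fixed "today", so the result is stable:

```python
from datetime import date

# Same arithmetic as the MySQL expression (days since birth / 365.25),
# evaluated in plain Python with a fixed "today" for a reproducible result.
def fractional_age(dob: date, today: date) -> float:
    return round((today - dob).days / 365.25, 2)

result = fractional_age(date(1981, 6, 1), date(2014, 6, 7))
print(result)
```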
So in your case you may have the query as
```
select
format(datediff(curdate(),dob) / 365.25,2) as dob
from players limit 5;
``` | You can use the `to_days` function to calculate the days between the year zero and someone's birthday. Then subtract from today that number of days. That should give you the birthday as if someone was born in the year zero:
```
select year(subdate(now(), to_days(dob)))
```
[Example at SQL Fiddle.](http://sqlfiddle.com/#!2/1ce96/1/0) | Calculate age with decimals from date of birth | [
"",
"mysql",
"sql",
""
] |
How do I actually use if/else with UPDATE in SQL?
I have tried a lot of ways but it still doesn't work.
This is the question:
Add a column called ‘Status’ to a relational table Customer and use one UPDATE statement to fill the column with information about the customer status. Mark ‘Y’ if the customers had placed any orders otherwise ‘X’ will be initialised.
I am stuck at the update part.
```
UPDATE CUSTOMER
IF(customer.customerID IN (SELECT customerID from invoice))
( set status = 'y' )
else
( set status = 'x' )
endif
where status is null;
``` | You could use a case statement
```
UPDATE CUSTOMER
SET Status = CASE WHEN CUSTOMER.CUSTOMERId IN (SELECT customerID from invoice)
THEN 'y'
ELSE 'x' END
where status is null;
``` | What you want to do is a case statement because that's the closest you can get to an IF in sql.
```
update customer
set status =
case
when exists (select 1 from invoice where invoice.customerID = customer.customerID)
then
'y'
else
'x'
end
where status is null
``` | Using if else on update sqlplus | [
"",
"sql",
"if-statement",
"updates",
"sqlplus",
""
] |
I have a small query, and a union to put another small query next to it. However, the union has a syntax error in it.
```
Select <column1>
,<column2>
,<column3>
From <Table1>
<Some joins in there>
where <conditions>
order by <column2>
union
select <column2>
,<column3>
,<column4>
from <Table2>
<Some more joins here>
where <conditions>
order by <column2>
```
This is the Error I receive
```
ERROR: Syntax error at or near 'union'
``` | I see what was wrong. You have to place the ORDER BY at the end of the query, and only at the end. It gave me an error because it thought the query had ended.
```
Select <column1>
,<column2>
,<aggregate column3>
From <Table1>
<Some joins in there>
Where <conditions>
group by <column2>, <column1>
union
select <column2>
,<column3>
,<aggregate column4>
From <Table2>
<Some more joins here>
Where <conditions>
group by <column2>, <column3>
order by <column2>
```
That did the trick. | Short answer: `(SELECT... ORDER BY..) UNION (SELECT .. ORDER BY...)` does work.
See the [documentation](http://www.postgresql.org/docs/current/static/sql-select.html#SQL-UNION) about `UNION`:
> UNION Clause
>
> The UNION clause has this general form:
>
> select\_statement UNION [ ALL | DISTINCT ] select\_statement
>
> select\_statement is any SELECT statement without an ORDER BY, LIMIT,
> FOR NO KEY UPDATE, FOR UPDATE, FOR SHARE, or FOR KEY SHARE clause.
> (**ORDER BY and LIMIT can be attached to a subexpression if it is
> enclosed in parentheses**. Without parentheses, these clauses will be
> taken to apply to the result of the UNION, not to its right-hand input
> expression.) | PostgreSQL syntax error at or near 'union' | [
"",
"sql",
"postgresql",
"sql-order-by",
"union",
""
] |
Hi, I currently have a table with a column that I would like to split.
```
ID Serial
1 AAA"-A01-AU-234-U_xyz(CY)(REV-002)
2 AAA"-A01-AU-234-U(CY)(REV-1)
3 AAA"-A01-AU-234-U(CY)(REV-101)
4 VVV"-01-AU-234-Z_ww(REV-001)
5 VVV"-01-AU-234-Z(REV-001)_xyz(CY)
6 V-VV"-01-AU-234-Z(REV-03)_xyz(CY)
7 V-VV"-01-AU-234-Z-ZZZ(REV-004)_xyz(CY)
```
I would like to split this column into 2 field via a select statement
The first field would consist of the text from the start and end when this scenario is satisfied
1. After the first "-
2. Take all text up to the next 3 hyphens (-)
3. Take the first letter after the last hyphen (-)
The second field would store the value (int) inside the (REV) bracket. Rev is always stored inside an enclosing bracket (Rev-xxx); the number may range from 0-999 and have different forms of representation
Example of output
```
Field 1 Field 2
AAA"-A01-AU-234-U 2
AAA"-A01-AU-234-U 1
AAA"-A01-AU-234-U 101
VVV"-01-AU-234-Z 1
VVV"-01-AU-234-Z 1
V-VV"-01-AU-234-Z 3
V-VV"-01-AU-234-Z 4
``` | Maybe it is possible to make it better and faster, but at least it does work. If I have some more time I will look at this again and think of a better solution, but it does the job.
```
create table #t
(
id int,
serial nvarchar(255)
)
go
insert into #t values (1, 'AAA"-A01-AU-234-U_xyz(CY)(REV-002)')
insert into #t values (2, 'AAA"-A01-AU-234-U(CY)(REV-1)')
insert into #t values (3, 'AAA"-A01-AU-234-U(CY)(REV-101)')
insert into #t values (4, 'VVV"-01-AU-234-Z_ww(REV-001)')
insert into #t values (5, 'VVV"-01-AU-234-Z(REV-001)_xyz(CY)')
insert into #t values (6, 'VVV"-01-AU-234-Z(REV-03)_xyz(CY)')
insert into #t values (7, 'VVV"-01-AU-234-Z(REV-004)_xyz(CY)')
go
select id, serial,
left(serial,charindex('-', serial, charindex('-', serial, charindex('-', serial, charindex('"',serial) + 2) +1) + 1) + 1) as 'Field2'
,cast( replace(left(right(serial, len(serial) - charindex('REV',serial) +1 ), CHARINDEX(')',right(serial, len(serial) - charindex('REV',serial) +1 )) - 1), 'REV-', '')as int) as 'Field1'
from #t
go
```
gives me:
```
id serial Field2 Field1
1 AAA"-A01-AU-234-U_xyz(CY)(REV-002) AAA"-A01-AU-234-U 2
2 AAA"-A01-AU-234-U(CY)(REV-1) AAA"-A01-AU-234-U 1
3 AAA"-A01-AU-234-U(CY)(REV-101) AAA"-A01-AU-234-U 101
4 VVV"-01-AU-234-Z_ww(REV-001) VVV"-01-AU-234-Z 1
5 VVV"-01-AU-234-Z(REV-001)_xyz(CY) VVV"-01-AU-234-Z 1
6 VVV"-01-AU-234-Z(REV-03)_xyz(CY) VVV"-01-AU-234-Z 3
7 VVV"-01-AU-234-Z(REV-004)_xyz(CY) VVV"-01-AU-234-Z 4
``` | Try this solution. It uses a combination of charindex and the substring function.
```
DECLARE @TempTable table
(
id int,
serial nvarchar(255)
)
insert into @TempTable values (1, 'AAA"-A01-AU-234-U_xyz(CY)(REV-002)')
insert into @TempTable values (2, 'AAA"-A01-AU-234-U(CY)(REV-1)')
insert into @TempTable values (3, 'AAA"-A01-AU-234-U(CY)(REV-101)')
insert into @TempTable values (4, 'VVV"-01-AU-234-Z_ww(REV-001)')
insert into @TempTable values (5, 'VVV"-01-AU-234-Z(REV-001)_xyz(CY)')
insert into @TempTable values (6, 'VVV"-01-AU-234-Z(REV-03)_xyz(CY)')
insert into @TempTable values (7, 'VVV"-01-AU-234-Z(REV-004)_xyz(CY)')
select
id,
serial,
substring(serial, 1, P4.Pos+1) as field1,
convert(int, substring(Serial, P6.Pos , P7.Pos - P6.Pos)) as field2
from @TempTable
cross apply (select (charindex('-', Serial))) as P1(Pos)
cross apply (select (charindex('-', Serial, P1.Pos+1))) as P2(Pos)
cross apply (select (charindex('-', Serial, P2.Pos+1))) as P3(Pos)
cross apply (select (charindex('-', Serial, P3.Pos+1))) as P4(Pos)
cross apply (select (charindex('REV-', Serial,P1.Pos+1)+4)) as P6(Pos)
--+4 because 'REV-' is 4 chars long
cross apply (select (charindex(')', Serial,P6.Pos+1))) as P7(Pos);
``` | Splitting of string by fixed keyword | [
"",
"sql",
"sql-server",
""
] |
I have a schema with `Users`, `Reviews`, and a many-to-many relation between them, which records who rated which review as `Helpful`. Over time the helpful table grew to about 10 million rows, so I decided to cache the `Count()` result in the `Reviews` table, so I can easily tell a user how many times a certain review got rated. The problem is that when I do it using the following correlated query it takes ages.
```
UPDATE EXT.REVIEWS AS R
SET HELPFUL_COUNTER =
(SELECT COUNT (*)
FROM EXT.USERS_REVIEWS_HELPFUL AS H
WHERE R.PK = H.REVIEW_FK)
```
Is there any way to speed it up? | One way to speed up such a query is to use an index. In this case, the appropriate index is `USERS_REVIEWS_HELPFUL(REVIEW_FK)`. | For readers whose DB2 server is running under **IBM i** OS, you may consider using an EVI.
```
CREATE ENCODED VECTOR INDEX EXT.USERS_REVIEWS_HELPFUL_EV1
on EXT.USERS_REVIEWS_HELPFUL (REVIEW_FK)
INCLUDE ( count(*) )
```
I would do this in addition to creating a "normal" index, known as a radix index, as recommended by Gordon.
When running your query, the system need only probe (ie. read) a single index entry for each REVIEW\_FK value, dramatically improving performance of querying the `count(*)`. The maintenance cost of this index is so minimal, especially compared to index maintenance on other platforms, that it should generally not be a concern, unless INSERT performance is already pressing against acceptable limits.
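The correlated-count UPDATE itself is portable; here is a hypothetical miniature in Python's sqlite3 (with a plain index rather than an EVI, since EVIs are IBM i specific) showing the shape of the statement:

```python
import sqlite3

# Miniature sqlite3 version of the correlated-count UPDATE from the question,
# with an index on the foreign key so each COUNT(*) is an index lookup.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reviews (pk INT, helpful_counter INT);
CREATE TABLE users_reviews_helpful (review_fk INT);
CREATE INDEX idx_urh_fk ON users_reviews_helpful (review_fk);
INSERT INTO reviews VALUES (1, 0), (2, 0);
INSERT INTO users_reviews_helpful VALUES (1), (1), (1);
""")
con.execute("""
    UPDATE reviews
    SET helpful_counter = (SELECT COUNT(*)
                           FROM users_reviews_helpful h
                           WHERE reviews.pk = h.review_fk)
""")
rows = con.execute("SELECT pk, helpful_counter FROM reviews ORDER BY pk").fetchall()
print(rows)
```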
Sadly, DB2 for LUW or z/OS do not (yet?) support this type of index. | Speed up correlated query with UPDATE and COUNT | [
"",
"sql",
"db2",
""
] |
I am creating a smaller sized database in Microsoft SQL Server 2012 to keep run data from machines. The company has production machines and R & D machines. I would like to use the same table for production and R&D with a Type field specifying what the run was for simplicity. I have two schemas (prod and r\_d). The permissions for the production and r\_d schemas will be different. Is it possible to create a table that belongs to more than one schema? I know you can have the same table name in multiple schemas, but this creates separate objects. I would like to have the one table object to belong to multiple schemas.
Example:
CREATE TABLE db\_name.prod.r\_d.table\_name | Consider creating a synonym in one the of schemas, referencing the other schema table:
```
CREATE SYNONYM r_d.table_name FOR prod.table_name;
``` | No, but you can create a [`view`](http://msdn.microsoft.com/en-gb/library/ms187956.aspx) in each schema on to a single table that filters the rows | SQL Server - Same table in multiple schemas | [
"",
"sql",
"sql-server",
"database",
"schema",
"sql-server-2012-express",
""
] |
I have 3 separate select statements that I need to union, but all of them need to be ordered by a different column.
I tried doing this:
```
select * from(
select * from (select columns from table1 order by column1 ) A
UNION
select * from (select columns from table2 order by column2 ) B
UNION
select * from (select columns from table3 order by column3 ) C
) Table
```
but this doesn't work.
Does anyone have any experience with this? | You can do something like this:
```
select *
from((select columns, 'table1' as which from table1 )
UNION ALL
(select columns, 'table2' from table2 )
UNION ALL
(select columns, 'table3' from table3 )
) t
order by which,
(case when which = 'table1' then column1
when which = 'table2' then column2
when which = 'table3' then column3
end);
```
This assumes that the columns used for ordering are all of the same type.
Note that this query uses `union all` instead of `union`. I see no reason why you would want to eliminate duplicates if you want the results from the three subqueries ordered independently.
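A self-contained check of the tagged-union ordering trick (hypothetical one-column tables, Python's sqlite3):

```python
import sqlite3

# Three hypothetical one-column tables; each source is ordered by its own
# column and direction via the tagged-union + CASE trick described above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (c INT); CREATE TABLE t2 (c INT); CREATE TABLE t3 (c INT);
INSERT INTO t1 VALUES (2), (1);
INSERT INTO t2 VALUES (9), (8);
INSERT INTO t3 VALUES (5), (6);
""")
rows = con.execute("""
    SELECT c, which FROM (
        SELECT c, 'table1' AS which FROM t1
        UNION ALL
        SELECT c, 'table2' FROM t2
        UNION ALL
        SELECT c, 'table3' FROM t3
    )
    ORDER BY which,
             CASE WHEN which = 'table1' THEN c END ASC,
             CASE WHEN which = 'table2' THEN c END DESC,
             CASE WHEN which = 'table3' THEN c END ASC
""").fetchall()
print(rows)
```

Within each source group the other CASE expressions are all NULL, so only that group's own CASE affects the ordering.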
EDIT:
You can also express the `order by` separately for each table:
```
order by which,
(case when which = 'table1' then column1 end) ASC,
(case when which = 'table2' then column2 end) DESC,
(case when which = 'table3' then column3 end)
``` | You should combine these columns into one common column and then order by it:
```
SELECT * FROM
(
SELECT A.*,columnA as ORDER_COL FROM A
UNION ALL
SELECT B.*,columnB as ORDER_COL FROM B
UNION ALL
SELECT C.*,columnC as ORDER_COL FROM C
) as T1
ORDER BY ORDER_COL
``` | How to use union if i need to "order by" all selects | [
"",
"sql",
""
] |
I basically have a two column table containing a primary key and names of companies with about 20,000 rows.
My task is to find all duplicate entries.
I originally tried using soundex, but it would match companies that were completely different, just because they had similar first words. So this led me on to the levenshtein distance algorithm.
The problem is, the query takes an indefinite amount of time. I've left it for about 10 hours now, it still hasn't responded.
Here is the query:
```
SELECT *
FROM `Companies` a, `Companies` b
WHERE levenshtein(a.name, b.name)<5
AND a.id<>b.id
```
And here is the levenshtein function I'm using (got it from [this post](https://stackoverflow.com/questions/13909885/how-to-add-levenshtein-function-in-mysql))
```
DELIMITER $$
CREATE FUNCTION levenshtein( s1 VARCHAR(255), s2 VARCHAR(255) )
RETURNS INT
DETERMINISTIC
BEGIN
DECLARE s1_len, s2_len, i, j, c, c_temp, cost INT;
DECLARE s1_char CHAR;
-- max strlen=255
DECLARE cv0, cv1 VARBINARY(256);
SET s1_len = CHAR_LENGTH(s1), s2_len = CHAR_LENGTH(s2), cv1 = 0x00, j = 1, i = 1, c = 0;
IF s1 = s2 THEN
RETURN 0;
ELSEIF s1_len = 0 THEN
RETURN s2_len;
ELSEIF s2_len = 0 THEN
RETURN s1_len;
ELSE
WHILE j <= s2_len DO
SET cv1 = CONCAT(cv1, UNHEX(HEX(j))), j = j + 1;
END WHILE;
WHILE i <= s1_len DO
SET s1_char = SUBSTRING(s1, i, 1), c = i, cv0 = UNHEX(HEX(i)), j = 1;
WHILE j <= s2_len DO
SET c = c + 1;
IF s1_char = SUBSTRING(s2, j, 1) THEN
SET cost = 0; ELSE SET cost = 1;
END IF;
SET c_temp = CONV(HEX(SUBSTRING(cv1, j, 1)), 16, 10) + cost;
IF c > c_temp THEN SET c = c_temp; END IF;
SET c_temp = CONV(HEX(SUBSTRING(cv1, j+1, 1)), 16, 10) + 1;
IF c > c_temp THEN
SET c = c_temp;
END IF;
SET cv0 = CONCAT(cv0, UNHEX(HEX(c))), j = j + 1;
END WHILE;
SET cv1 = cv0, i = i + 1;
END WHILE;
END IF;
RETURN c;
END$$
DELIMITER ;
```
What can I do to speed up the query? | So I implemented a bunch of suggestions in this thread to reduce my query time.
I indexed the name column, changed a.id <> b.id to a.id < b.id to avoid re-comparing already compared rows, and added LEFT(a.name, 3) = LEFT(b.name, 3) to avoid executing the heavy levenshtein function on rows that can be easily excluded by the first 3 characters.
This was the query I used:
```
SELECT *
FROM `Companies` a, `Companies` b
WHERE LEFT(a.name, 3) = LEFT(b.name, 3)
AND a.id < b.id
AND levenshtein(a.name, b.name)<3
```
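As an aside, the filter-then-compare idea is easy to sketch in plain Python (illustrative only; the function names are made up, and the threshold mirrors the query above):

```python
def levenshtein(s1, s2):
    # Classic dynamic-programming edit distance, two rows at a time.
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        current = [i]
        for j, c2 in enumerate(s2, 1):
            current.append(min(previous[j] + 1,                   # deletion
                               current[j - 1] + 1,                # insertion
                               previous[j - 1] + (c1 != c2)))     # substitution
        previous = current
    return previous[-1]


def near_duplicates(names):
    # Mirror of the SQL: only compare pairs sharing the first 3 characters,
    # and only in one direction (i < j) so swapped pairs aren't re-checked.
    pairs = []
    for i, a in enumerate(names):
        for j in range(i + 1, len(names)):
            b = names[j]
            if a[:3] == b[:3] and levenshtein(a, b) < 3:
                pairs.append((a, b))
    return pairs


print(near_duplicates(["Baxco", "Baxyl", "Body FX"]))  # [('Baxco', 'Baxyl')]
```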
This took about 2 hours to complete, and gave me 964 results. After that, I exported the results as a .csv and imported them into another table, TABLE 2.
TABLE 2 is structured like this:
```
COL 1, COL 2, COL 3, COL 4
a.id, a.name, b.id, b.name
```
I noticed that there were a lot of results in TABLE 2 that were actually different companies, but were only a couple of characters apart, making the Levenshtein distance ineffective at telling them apart. For example: "Body FX", "Body Fit", or "Baxco", "Baxyl".
I attempted to filter out more names by comparing RIGHT() on the last 2 characters of the string, but ran into problems as some names were plural, like "Aroostock Medical Center" and "Aroostock Medical Centers". So I wrote my own RIGHT\_PLURAL() function that ignored the plural characters.
```
DROP FUNCTION IF EXISTS RIGHT_PLURAL;
DELIMITER $$
CREATE FUNCTION RIGHT_PLURAL(input VARCHAR(50), right_input INT)
RETURNS VARCHAR(50)
BEGIN
DECLARE length INT;
SET length = LENGTH(input);
IF RIGHT(input, 2)="'s" THEN
RETURN SUBSTR(input, length-right_input-1, right_input);
ELSEIF RIGHT(input, 1)="s" THEN
RETURN SUBSTR(input, length-right_input, right_input);
ELSE
RETURN RIGHT(input, right_input);
END IF;
END;
$$
DELIMITER ;
```
I ran
```
SELECT *
FROM `TABLE 2`
WHERE RIGHT_PLURAL(
`COL 2` , 2
) = RIGHT_PLURAL(
`COL 4` , 2
)
```
and was down to 893 duplicates. I was satisfied. I copied over the result set to TABLE 3, and ran the following.
```
DELETE
FROM `Companies`
WHERE `id` IN ( SELECT `COL 1` FROM `TABLE 3` )
```
My database was now largely duplicate-free! The few strays left were due to serious misspellings of names. | I know at least one optimization that might cut the running time in half:
```
AND a.id < b.id
```
This prevents you from testing a=1, b=2 when you've already tested a=2, b=1.
It's still gonna be O(n^2) though, but I can't see how you can do much about that. | mySQL: Using Levenshtein distance to find duplicates in 20,000 rows | [
"",
"mysql",
"sql",
"levenshtein-distance",
""
] |
I have two tables:
Topic table:
```
topic_id
2
3
4
```
like table:
```
topic_id user_id
2 4
2 6
3 1
4 2
```
For a logged in user (use $user\_id to represent), I need to return unique topics and a flag to indicate whether this user has liked this post or not.
For the above example, if the user id for the logged in user is 2, the return should be:
```
topic_id liked
2 0 or NULL
3 0 or NULL
4 1
```
I have tried to leftJoin the topic and like tables with "like.user\_id = $user\_id OR like.user\_id is NULL", but that leaves out topics that have likes but weren't liked by this user. Could anyone help me with the query? Thanks very much. | You can list all topics from the topic table and use `LEFT JOIN` to merge them with rows from the like table. After that you can filter on the `user_id` column with `IF`: if it is `null`, there are no records for this topic related to this user, so the flag is `0`; otherwise the flag is `1`.
```
SELECT t.topic_id
, IF(l.user_id IS NULL, 0, 1) AS liked
FROM topic t LEFT JOIN `like` l ON t.topic_id = l.topic_id AND l.user_id = ?
```
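If you want to sanity-check the pattern outside MySQL, here is an illustrative Python/SQLite version (SQLite has no `IF()`, so it uses `CASE`, and the table is named `like_t` here because `LIKE` is a reserved word there too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE topic (topic_id INTEGER);
    CREATE TABLE like_t (topic_id INTEGER, user_id INTEGER);
    INSERT INTO topic VALUES (2), (3), (4);
    INSERT INTO like_t VALUES (2, 4), (2, 6), (3, 1), (4, 2);
""")

# Same idea as the MySQL query: LEFT JOIN restricted to one user,
# then turn "matched or not" into a 0/1 flag.
rows = conn.execute("""
    SELECT t.topic_id,
           CASE WHEN l.user_id IS NULL THEN 0 ELSE 1 END AS liked
    FROM topic t
    LEFT JOIN like_t l ON t.topic_id = l.topic_id AND l.user_id = ?
    ORDER BY t.topic_id
""", (2,)).fetchall()

print(rows)  # [(2, 0), (3, 0), (4, 1)]
```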
[**SQLFiddle**](http://www.sqlfiddle.com/#!2/17861/9) | You can do it by putting your user-id condition inside `SUM`: it gives the count of likes by that user for each topic, and 0 if the user hasn't liked it
```
select t.*,sum(lk.user_id = 2) liked from topic t
left join like_t lk on(t.topic_id= lk.topic_id)
group by t.topic_id
```
## [Demo](http://www.sqlfiddle.com/#!2/17861/3) | Mysql left join to get unique topics | [
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I have a table which contains all the information about the artists. I want to retrieve the details of those artists who are deceased and calculate their age.
```
select artistName, dateDeceased - dataOfBirth as Age
From my.artist
Where artistID in (select artistID from My.artist where dateDeceased != Null);
```
The problem is that I get 0 results, and when I remove the subquery the results come out but include artists who are still alive, which I don't need.
So I believe there is something wrong with the NULL condition but I can't figure it out. | No need for a subquery at all:
```
SELECT artistName, dateDeceased - dataOfBirth AS Age
FROM my.artist
WHERE dateDeceased IS NOT NULL
```
Since `NULL` is a non-value (absence of a value), you cannot use the usual comparison operators - you need to check using `IS NULL` or `IS NOT NULL`
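The three-valued logic behind this is easy to demonstrate; here is a quick illustrative check using Python's sqlite3 (the semantics are the same in standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# '=' and '!=' against NULL yield NULL (unknown), never true, so a
# WHERE clause written as `dateDeceased != NULL` matches no rows at all.
row = conn.execute("SELECT NULL = NULL, NULL != NULL, NULL IS NULL").fetchone()
print(row)  # (None, None, 1)
```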
**Update:** as @GarethD points out - should `ArtistID` *not* be your primary key, e.g. if there could be *multiple* entries with the same `ArtistID`, of which only a few are deceased and you'd want to get all of those - then you'd need to use the subquery you already have - adapted to using the `IS NOT NULL`:
```
SELECT artistName, dateDeceased - dataOfBirth AS Age
FROM my.artist
WHERE ArtistId IN (SELECT ArtistId FROM my.artist WHERE dateDeceased IS NOT NULL)
``` | You should use "IS NOT NULL" instead of "!= NULL" | In SQL, if i set a condition to null then I get 0 results | [
"",
"sql",
""
] |
I have two SQL Server tables. The first table is a customer table with customer number, name, etc. The second table contains the customer's service dates. The customer can have multiple service dates. Here is an example of the service date table:
```
custnmbr DateIn DateOut
------------------------------------
78001 1991-02-10 2001-12-07
78001 2002-08-03 2003-06-17
78001 2006-11-22 NULL
```
I want to select the earliest DateIn and the most recent DateOut. In the example above, I would like to return the DateIn as 1991-02-10 and since the customer is currently active, I would want to return the DateOut as NULL.
This is what I have tried, but no luck
```
SELECT
SM.Custnmbr,
CONVERT (VARCHAR(10), MAX(LH.DateIn), 101) AS DateIn,
CONVERT (VARCHAR(10), MAX(LH.DateOut), 101) AS DateOut
FROM
dbo.toCustomer SM
LEFT OUTER JOIN
dbo.toLocCustHist LH ON SM.CustomerId = LH.CustomerId
GROUP BY
SM.CustNmbr, SM.CustName, LH.LocationId
```
When I run the query the DateIn is correct with 1991-02-10, but the DateOut has 2003-06-17, which is wrong. | ```
WITH TEMP AS
(
SELECT SM.Custnmbr AS Custnmbr,
CONVERT (VARCHAR(10), MIN(LH.DateIn), 101) AS DateIn,
CONVERT (VARCHAR(10), MAX(ISNULL(LH.DateOut,'9999-12-12')), 101) AS DateOut
FROM dbo.toCustomer SM
LEFT OUTER JOIN dbo.toLocCustHist LH ON SM.CustomerId = LH.CustomerId
GROUP BY SM.CustNmbr, SM.CustName, LH.LocationId
)
SELECT Custnmbr,DateIn,
CASE WHEN DateOut = '12/12/9999' THEN NULL -- style 101 renders the sentinel as MM/DD/YYYY
     ELSE DateOut
END AS DateOut
FROM TEMP;
``` | `NULL` values aren't considered in aggregate functions, so you need to set the `NULL` to something else, and you shouldn't have the extra elements in your `GROUP BY` statement:
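The sentinel trick can also be checked portably. Here is an illustrative Python/SQLite sketch of the same pattern, with `COALESCE`/`NULLIF` standing in for `ISNULL` and the `CASE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hist (custnmbr INTEGER, date_in TEXT, date_out TEXT);
    INSERT INTO hist VALUES
        (78001, '1991-02-10', '2001-12-07'),
        (78001, '2002-08-03', '2003-06-17'),
        (78001, '2006-11-22', NULL);
""")

# MAX() skips NULLs, so a plain MAX(date_out) is '2003-06-17' even though
# the customer is still active.  Mapping NULL to a far-future sentinel
# first, then mapping it back, yields NULL for the open-ended row.
row = conn.execute("""
    SELECT MIN(date_in),
           NULLIF(MAX(COALESCE(date_out, '9999-12-31')), '9999-12-31')
    FROM hist
    GROUP BY custnmbr
""").fetchone()

print(row)  # ('1991-02-10', None)
```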
```
SELECT
SM.Custnmbr,
CONVERT (VARCHAR(10), MAX(LH.DateIn), 101) AS DateIn,
CONVERT (VARCHAR(10), MAX(ISNULL(LH.DateOut,'2099-01-01')), 101) AS DateOut
FROM
dbo.toCustomer SM
LEFT OUTER JOIN
dbo.toLocCustHist LH ON SM.CustomerId = LH.CustomerId
GROUP BY
SM.CustNmbr
``` | How to SELECT records using MIN and MAX when multiple records exist? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have several tables which all have a unique ID field. I want to join them on this ID field. More precisely, I want to extract a column from each table, and line all the extracted columns with the ID value. The tables do not have entries for all ID values; I want the result to have a single row for each ID value that is present in at least one table. If an Id is missing from one of the tables, the result should have a `null`. I don't care what happens if Id values aren't unique within a table (in my data, they are).
Toy example: I have a `plant` table
```
Id Name Class
1 larch pinopsida
3 horse chestnut angiosperms
```
and an `animal` table
```
Id Name Cry
1 cat meow
2 dog bow wow
4 carp
```
and a `mineral` table
```
Id Name Color
2 diamond white
3 emerald green
```
and I want to combine them into
```
Id Plant Animal Mineral
1 larch cat (null)
2 dog diamond
3 horse chestnut (null) emerald
4 (null) carp (null)
```
As far as I can tell, this isn't a straight `inner join` because that would only retain Ids present in all the databases, nor a straight `left join` or `right join` because that would only retain the Ids present in one of the tables (none of the tables contain all Id values), nor a straight `outer join` or `cross join` because those would spread Ids over multiple rows.
The database is SQL Server 2012. I'm accessing it directly, not through another programming language. The real query is <https://data.stackexchange.com/cs/query/36599/show-all-types> — there's got to be a better way than listing all the pairs of `Id` columns that can be equal. | You can use the `FULL JOIN`
```
Select COALESCE(p.Id, a.Id, m.Id) Id
, MAX(p.Name) Plant
, MAX(a.Name) Animal
, MAX(m.Name) Mineral
FROM Plant p
FULL JOIN Animal a ON p.Id = a.Id
FULL JOIN Mineral m ON p.Id = m.Id
GROUP BY COALESCE(p.Id, a.Id, m.Id)
ORDER BY COALESCE(p.Id, a.Id, m.Id)
```
`SQLFiddle demo`
---
If you prefer not to have `COALESCE` and `GROUP BY` in the query, it's possible to get the Ids first and then `JOIN` the tables
```
WITH Ids AS (
SELECT Id FROM Plant
UNION
SELECT Id FROM Animal
UNION
SELECT Id FROM Mineral
)
SELECT Ids.Id
, p.Name Plant
, a.Name Animal
, m.Name Mineral
FROM Ids
LEFT JOIN Plant p ON Ids.Id = p.Id
LEFT JOIN Animal a ON Ids.Id = a.Id
LEFT JOIN Mineral m ON Ids.Id = m.Id
```
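As a sanity check of this Ids-spine variant, here is an illustrative Python/SQLite version (SQLite historically lacks `FULL JOIN`, which makes this the portable form):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plant   (id INTEGER, name TEXT);
    CREATE TABLE animal  (id INTEGER, name TEXT);
    CREATE TABLE mineral (id INTEGER, name TEXT);
    INSERT INTO plant   VALUES (1, 'larch'), (3, 'horse chestnut');
    INSERT INTO animal  VALUES (1, 'cat'), (2, 'dog'), (4, 'carp');
    INSERT INTO mineral VALUES (2, 'diamond'), (3, 'emerald');
""")

# Collect every Id exactly once, then LEFT JOIN each table onto that spine.
rows = conn.execute("""
    WITH ids AS (
        SELECT id FROM plant
        UNION SELECT id FROM animal
        UNION SELECT id FROM mineral
    )
    SELECT ids.id, p.name, a.name, m.name
    FROM ids
    LEFT JOIN plant   p ON ids.id = p.id
    LEFT JOIN animal  a ON ids.id = a.id
    LEFT JOIN mineral m ON ids.id = m.id
    ORDER BY ids.id
""").fetchall()

print(rows)
# [(1, 'larch', 'cat', None), (2, None, 'dog', 'diamond'),
#  (3, 'horse chestnut', None, 'emerald'), (4, None, 'carp', None)]
```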
`SQLFiddle demo` | ```
SELECT COALESCE(t.id, a.id, m.id) AS [ID]
, MAX(t.name) AS Plant
, MAX(a.name) AS Animal
, MAX(m.name) AS Mineral
FROM plant t FULL OUTER JOIN mineral m ON m.id = t.id
FULL OUTER JOIN animal a ON a.id = t.id
GROUP BY COALESCE(t.id, a.id, m.id)
ORDER BY [ID]
```
[**Example**](http://sqlfiddle.com/#!3/92c92/17) | Join tables on a unique ID field, returning exactly one result row per Id value | [
"",
"sql",
"sql-server",
"join",
""
] |
I have 2 tables here that I need to merge.
table1
pk1
appt\_id
other\_fields\_here
table2
pk2
appt\_id
other\_fields\_here (mostly different name but there are some with the same name from table1)
The tables are going to be merged using appt\_id as the matching key. Any record that doesn't have a match still gets carried over to the new table.
Please see the example below to give a better idea on what I want to accomplish.
Thanks a lot.
---
**example: There are only 2 records in each table. The new table will have 3 records after the merge.**
**scenario1**: matching appt\_id (This will create a single record on the new table.)
```
table1: pk1=1 | appt_id=1 | field1=test1
table2: pk2=1 | appt_id=1 | field2=test1
new pk = autoincrement | appt=1 | field1=test1 | field2=test1
```
---
**scenario2**: appt\_id has no matching records (This will create 2 new records in the new table)
```
table1: pk1=293454 | appt_id=34535 | field1=test34535
table2: pk2=735353 | appt_id=88888 | field2=test88888
new pk = autoincrement | appt=34535 | field1=test34535 | field2=null
new pk = autoincrement | appt=88888 | field1=null | field2=test88888
``` | Use join and UNION both. Check this [demo](http://sqlfiddle.com/#!2/6706b5/18/0)
```
SELECT t.appt_id,t.field1,t2.field2 FROM table1 t LEFT JOIN table2 t2 ON t.appt_id = t2.appt_id
UNION
SELECT t2.appt_id,t.field1,t2.field2 FROM table2 t2 LEFT JOIN table1 t ON t.appt_id = t2.appt_id
``` | This thing is called `FULL OUTER JOIN`
Unfortunately, MySQL does not support this kind of join.
You can emulate its result with the following query:
```
SELECT
* -- list required fields here!
FROM
table1 T1
LEFT JOIN table2 T2
ON T1.appt_id = T2.appt_id
UNION
SELECT
* -- list required fields here!
FROM
table1 T1
RIGHT JOIN table2 T2
ON T1.appt_id = T2.appt_id
``` | How to merge 2 tables with different structures into 1 table using MySQL? | [
"",
"mysql",
"sql",
""
] |
I want to fetch all customer\_details rows with all fields, together with the bill count and the sum of bill amounts, stored in the `customer_details(id,name,address,...)` and `bill_details(bill_no,customer_id,price,...)` tables.
Basic query to fetch both are first this:
```
select cd.* from customer_details as cd
```
Then this:
```
select customer_id,count(*) as billCount,sum(total) price from bill_details group by customer_id;
```
I am not able to combine these two in one query to fetch all customer details with bill sum and count. | Try using `JOIN`:
```
SELECT cd.*,IFNULL(T.billCount,0) as billCount,IFNULL(T.price,0) as price
FROM customer_details as cd LEFT JOIN
(SELECT customer_id,count(*) as billCount,sum(total) price
FROM bill_details
GROUP BY customer_id) T ON cd.id=T.customer_id
``` | I tried the following, which gives the correct result. Can I use this?
```
select cd.*,IFNULL(count(*),0) as count,IFNULL(sum(bd.total),0) as total from customer_details as cd
left join bill_details as bd
on cd.id=bd.customer_id
group by cd.id;
``` | Not able to combine two queries in one | [
"",
"mysql",
"sql",
""
] |
Here is my data:
```
INT CHAR(2) CHAR(4)
+------+--------------+------------+
| ID | SHORT_CODE | LONG_CODE |
+------+--------------+------------+
| 1 01 0100
| 2 01 0110
| 3 01 0120
| 4 02 0200
| 5 02 0212
| 6 02 0299
| 7 02 0211
```
I'm looking for a query that will output this result:
```
+--------------+------------+-------------+
| SHORT_CODE | LONG_CODE | IS_FIRST |
+--------------+------------+-------------+
| 02 0200 false
| 02 0211 true
| 02 0212 false
| 02 0299 false
```
Here is what I tried
```
SELECT
short_code,
long_code,
CASE
WHEN long_code LIKE '021%'
THEN 'true'
ELSE 'false'
END as is_first
FROM
MY_TABLE
WHERE
short_code='02'
ORDER BY
long_code ASC;
```
This query also prints `true` for `0212`, but I want `true` for only the first matching row.
How can I achieve that?
The sample code in this question is available [here](http://sqlfiddle.com/#!2/3563f/1/0).
**Oracle 10gR2** | You can create an extra column `is_match` that shows if the row matches the `like '021%'` condition. Then you can assign an ascending row number `rn` within both the partition of rows that match and the partition of rows that do not match. The row that matches and has row number one is the row you're looking for.
```
SELECT short_code
, long_code
, CASE
WHEN is_match = 1 and rn = 1 THEN 'true'
ELSE 'false'
END as is_first
FROM (
SELECT short_code
, long_code
, is_match
, row_number() over (
partition by is_match
order by long_code) as rn
FROM (
SELECT short_code
, long_code
, case
when long_code like '%021%' then 1
else 0
end as is_match
FROM MY_TABLE
WHERE short_code = '02'
) s1
) s2
ORDER BY
long_code;
```
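For intuition, the partition/`row_number` trick boils down to a single ordered pass; here is an illustrative Python sketch of the same logic (the function name and row shapes are made up):

```python
def flag_first_match(rows, prefix="021"):
    # rows: (short_code, long_code) pairs, already filtered to short_code='02'.
    # Sort by long_code, then flag only the first row whose long_code starts
    # with the prefix -- the same effect as rn = 1 on the matching partition.
    out, seen = [], False
    for short_code, long_code in sorted(rows, key=lambda r: r[1]):
        matches = long_code.startswith(prefix)
        out.append((short_code, long_code, matches and not seen))
        seen = seen or matches
    return out


rows = [("02", "0200"), ("02", "0212"), ("02", "0299"), ("02", "0211")]
print(flag_first_match(rows))
# [('02', '0200', False), ('02', '0211', True), ('02', '0212', False), ('02', '0299', False)]
```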
[See it working at SQL Fiddle.](http://sqlfiddle.com/#!4/3563f/8/0) | You could jimmy-rig `row_number()` to do this, but it may be a bit cumbersome.
The best idea I could find was to create a pseudo-column `prefix` so I could sort all the matching rows first, and then sort by `long_code`:
```
SELECT short_code,
long_code,
CASE row_number() over (order by prefix desc, long_code asc)
WHEN 1 THEN 'true'
ELSE 'false'
END AS is_first
FROM (SELECT short_code,
long_code,
CASE
WHEN long_code LIKE '021%' THEN 1
ELSE 0
END as prefix
FROM my_table
WHERE short_code='02'
)
ORDER BY long_code ASC;
```
Results also shown in this [SQLFiddle](http://sqlfiddle.com/#!4/3563f/11/0). | How to match only the first row matching a condition? | [
"",
"sql",
"oracle",
""
] |
I want to build a "check in" service like [FourSquare](https://foursquare.com/) or [Untappd](https://untappd.com/).
How do I design a suitable database schema for storing check-ins?
For example, suppose I'm developing "CheeseSquare" to help people keep track of the delicious cheeses they've tried.
The table for the items into which one can check in is fairly simple and would look like
```
+----+---------+---------+-------------+--------+
| ID | Name | Country | Style | Colour |
+----+---------+---------+-------------+--------+
| 1 | Brie | France | Soft | White |
| 2 | Cheddar | UK | Traditional | Yellow |
+----+---------+---------+-------------+--------+
```
I would also have a table for the users, say
```
+-----+------+---------------+----------------+
| ID | Name | Twitter Token | Facebook Token |
+-----+------+---------------+----------------+
| 345 | Anne | qwerty | poiuyt |
| 678 | Bob | asdfg | mnbvc |
+-----+------+---------------+----------------+
```
What's the best way of recording that a user has checked in to a particular cheese?
For example, I want to record how many French cheeses Anne has checked-in. Which cheeses Bob has checked into etc. If Cersei has eaten Camembert more than 5 times etc.
Am I best putting this information in the user's table? E.g.
```
+-----+------+------+--------+------+------+---------+---------+
| ID | Name | Blue | Yellow | Soft | Brie | Cheddar | Stilton |
+-----+------+------+--------+------+------+---------+---------+
| 345 | Anne | 1 | 0 | 2 | 1 | 0 | 5 |
| 678 | Bob | 3 | 1 | 1 | 1 | 1 | 2 |
+-----+------+------+--------+------+------+---------+---------+
```
That looks rather ungainly and hard to maintain. So should I have separate tables for recording check-ins? | No, don't put it into the `users` table. That information is better stored in a join table which represents a *many-to-many* relationship between users and cheeses.
The join table (we'll call `cheeses_users`) must have at least two columns (`user_ID, cheese_ID`), but a third (a timestamp) would be useful too. If you default the timestamp column to `CURRENT_TIMESTAMP`, you need only insert the `user_ID, cheese_ID` into the table to log a checkin.
```
cheeses (ID) ⇒ (cheese_ID) cheeses_users (user_ID) ⇐ users (ID)
```
Created as:
```
CREATE TABLE cheeses_users (
cheese_ID INT NOT NULL,
user_ID INT NOT NULL,
-- timestamp defaults to current time
checkin_time DATETIME DEFAULT CURRENT_TIMESTAMP,
-- (add any other column *specific to* this checkin (user+cheese+time))
--The primary key is the combination of all 3
-- It becomes impossible for the same user to log the same cheese
-- at the same second in time...
PRIMARY KEY (cheese_ID, user_ID, checkin_time),
-- FOREIGN KEYs to your other tables
FOREIGN KEY (cheese_ID) REFERENCES cheeses (ID),
FOREIGN KEY (user_ID) REFERENCES users (ID),
) ENGINE=InnoDB; -- InnoDB is necessary for the FK's to be honored and useful
```
To log a checkin for Bob & Cheddar, insert with:
```
INSERT INTO cheeses_users (cheese_ID, user_ID) VALUES (2, 678);
```
To query them, you join through this table. For example, to see the number of each cheese type for each user, you might use:
```
SELECT
u.Name AS username,
c.Name AS cheesename,
COUNT(*) AS num_checkins
FROM
users u
JOIN cheeses_users cu ON u.ID = cu.user_ID
JOIN cheeses c ON cu.cheese_ID = c.ID
GROUP BY
u.Name,
c.Name
```
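As a quick end-to-end check, here is an illustrative in-memory SQLite version of the schema and the counts query (the composite primary key is left out of this sketch so two same-second check-ins don't collide):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users   (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE cheeses (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE cheeses_users (
        cheese_ID INTEGER NOT NULL REFERENCES cheeses (ID),
        user_ID   INTEGER NOT NULL REFERENCES users (ID),
        checkin_time TEXT DEFAULT CURRENT_TIMESTAMP
    );
    INSERT INTO users   VALUES (345, 'Anne'), (678, 'Bob');
    INSERT INTO cheeses VALUES (1, 'Brie'), (2, 'Cheddar');
    -- Logging a check-in only needs the two ids; the timestamp defaults.
    INSERT INTO cheeses_users (cheese_ID, user_ID) VALUES (2, 678), (2, 678), (1, 345);
""")

rows = conn.execute("""
    SELECT u.Name, c.Name, COUNT(*)
    FROM users u
    JOIN cheeses_users cu ON u.ID = cu.user_ID
    JOIN cheeses c ON cu.cheese_ID = c.ID
    GROUP BY u.Name, c.Name
    ORDER BY u.Name
""").fetchall()

print(rows)  # [('Anne', 'Brie', 1), ('Bob', 'Cheddar', 2)]
```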
To get the 5 most recent checkins for a given user, something like:
```
SELECT
c.Name AS cheesename,
cu.checkin_time
FROM
cheeses_users cu
JOIN cheeses c ON cu.cheese_ID = c.ID
WHERE
-- Limit to Anne's checkins...
cu.user_ID = 345
ORDER BY checkin_time DESC
LIMIT 5
``` | Let's define more clearly, so you can tell me if I'm wrong:
* Cheese instances exist and aren't divisible ("Cheddar/UK/Traditional/Yellow" is a valid checkinable cheese, but "Cheddar" isn't, nor is "Yellow" or "Cheddar/France/...)
* Users check into a single cheese instance at a given time
* Users can re-check into the same cheese instance at a later date.
If this is the case, then to store fully normalized data, and to be able to retrieve that data's history, you need a third relational table linking the two existing tables.
```
+-----+------------+---------------------+
| uid | cheese_id | timestamp |
+----+-------------+---------------------+
| 345 | 1 | 2014-05-04 19:04:38 |
| 345 | 2 | 2014-05-08 19:04:38 |
| 678 | 1 | 2014-05-09 19:04:38 |
+-----+------------+---------------------+
```
etc. You can add extra columns to correspond to the cheese data, but strictly speaking you don't need to.
By putting all this in a third table, you potentially improve both performance and flexibility. You can always reconstruct the additions to the users table you mooted, using aggregate queries.
If you really decide you don't need the timestamps, then you'd replace them with basically the equivalent of a COUNT(\*) field:
```
+-----+------------+--------------+
| uid | cheese_id | num_checkins |
+----+-------------+--------------+
| 345 | 1 | 15 |
| 345 | 2 | 3 |
| 678 | 1 | 8 |
+-----+------------+--------------+
```
That would dramatically reduce the size of your joining table, although obviously there's less of a "paper trail", should you need to reconstruct your data (and possibly say to a user "oh, yeah, we forgot to record your checkin on such-a-date.") | How To Design A Database for a "Check In" Social Service | [
"",
"mysql",
"sql",
"database",
"database-design",
"social-networking",
""
] |
I've created a banking DB; here is a link to the ER diagram: <http://goo.gl/Auye7X> (sorry that I couldn't just post the image in the question). What I need is a query that returns all customers who have a current account but not a savings account. It needs to be written out to a file using spool. So far, this is what I've got:
```
repheader 'Customer report'
repfooter 'Author Theo'
column customerID heading 'Customer ID'
column customerID format a10
column name heading 'Customer Name'
column name format a10
column current_acc heading 'Current Accounts'
column current_acc format 09999
set feedback off
spool customers.txt
select c.customerID, c.name, a.UAN, a.balance, a.overdraft
From customer c, current_acc a
Where c.customerID = a.customerID;
spool off
```
This does generate an output, but it has people who also have a savings account, and I'm just not sure what sort of logic to put in, so it only shows people who only have a current account. This is SQLPLUS, and I'm open to using PLSQL. Also, can someone just confirm this is the correct way to spool.
All help greatly appreciated! | try this!
```
select c.customerID, c.name, a.UAN, a.balance, a.overdraft
From customer c join current_acc a
on c.customerID = a.customerID and a ='current account'
``` | An inline view perhaps? Something like this:
```
select c.customerID, c.name, ca.UAN, ca.balance, ca.overdraft
from customer c, (select * from current_acc
where customer_id not in
(select customer_id from savings_acc)
) ca
where c.customerID = ca.customerID;
``` | SQL Bank DB: return only people with a given property | [
"",
"sql",
"database",
"oracle",
"plsql",
"sqlplus",
""
] |
this is my variable
```
declare @fecha varchar(50)
set @fecha='29:14:2'
select
horas= CONVERT(int,SUBSTRING (@fecha,1,
charindex(':',@fecha)-1))
```
@fecha could have this format:
```
set @fecha='9:4:2' or set @fecha='29:59:59'
```
I need to get hour=29, minutes=14, seconds=2 with the simplest select.
As you can see, I could only get the hour; the other two confused me. :s
I know I need to find the index of the first : and then the next :, so I could get the minutes, but this confused me and I only got errors converting "4:", ":", ":1" | This will do it:
```
declare @time varchar(32) = '29:14:2'
select hh = convert(int,
left( @time ,
charindex(':',@time)
- 1
)
) ,
mm = convert(int,
left( right(@time,len(@time)-charindex(':',@time)) ,
charindex(':', right(@time,len(@time)-charindex(':',@time)) )
- 1
)
) ,
ss = convert(int ,
right( right(@time,len(@time)-charindex(':',@time) ) ,
len( right(@time,len(@time)-charindex(':',@time) ) )
- charindex( ':' , right(@time,len(@time)-charindex(':',@time) ) )
)
)
```
And it will almost certainly break if — **when** — your data isn't clean. | [`charindex`](http://msdn.microsoft.com/en-us/library/ms186323.aspx) lets you give a starting position from which to search
> Syntax
>
> CHARINDEX ( expressionToFind ,expressionToSearch [ , start\_location ] )
So you can save the position of the first colon `:` and start one character later for the second colon
```
declare @fecha varchar(50) = '29:14:2'
declare @pos1 int = charindex(':', @fecha)
declare @pos2 int = charindex(':', @fecha, @pos1 + 1)
select horas = CONVERT(int, SUBSTRING(@fecha,1, @pos1 - 1)),
minutas = CONVERT(int, SUBSTRING(@fecha, @pos1 + 1, @pos2 - @pos1 - 1))
```
[SQLFiddle](http://sqlfiddle.com/#!6/bd6e0/17) | how to get some substring in sql separate of ":" with the simplest query? | [
"",
"sql",
"t-sql",
"select",
"substring",
"charindex",
""
] |
I have strings like:
```
t.reported_name
-------------------------
D3O using TM-0549 - Rev # 6
D3O using TM-0549 - Rev # 6
Water using TM-0415 - Rev #10
Water using TM-0449 - Rev # 10
Decanoic Acid using LL-1448 - Rev# 2
DBE-821 using QCRM-0015 - Rev#1
Water using TM-0441 Rev # 10
FC Sessile Drop Contact Angle using MTM-017_REV_B - Rev # 1
IPA using QCRM-0017
Norbloc using TM-0501 - Rev # 5
DK (non-edge corrected) using TM-0534 - Rev # 3
Decanoic Acid_L3 using LL-1448_L3
Decanoic Acid_L4 using LL-1448_L4
MXP7-1911 using CRM-0239
TMPTMA using TM-0515 - Rev# 8
DK (edge corrected) using MTM-09 - Rev# 0
```
I need to extract the test method (the code after 'using'), e.g. `TM-0549`,
and the revision number, e.g. `Rev # 6`.
```
select distinct
case when REGEXP_LIKE(t.reported_name,'TM-', 'c') THEN SUBSTR(t.reported_name, INSTR(t.reported_name, 'TM'), 7) END test_method,
case when INSTR(t.reported_name,'Rev #') = 1 THEN SUBSTR(t.reported_name, INSTR(t.reported_name, 'Rev'), 7) END revision_number
from test s
```
From the above data I want:
```
test_method revision_number
```
---
```
TM-0549 Rev # 6
TM-0549 Rev # 6
TM-0415 Rev #10
TM-0449 Rev # 10
LL-1448 Rev# 2
QCRM-0015 Rev#1
TM-0441 Rev # 10
MTM-017_REV_B Rev # 1
QCRM-0017 null
TM-0501 Rev # 5
TM-0534 Rev # 3
LL-1448_L3 null
LL-1448_L4 null
CRM-0239 null
TM-0515 Rev# 8
MTM-09 Rev# 0
``` | Assuming
* for test\_method, we want to match `TM-<number>`
* for revision, we want to match `Rev # <number>` (note the spaces around #)
Then here's a solution with `REGEXP_SUBSTR`:
```
select
regexp_substr(reported_name, 'TM\-[0-9]+') as test_method_regexsub,
regexp_substr(reported_name, 'Rev # [0-9]+') as revision_regexsub
from test t
```
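For intuition, the two patterns behave the same way in any regex engine; here is an illustrative check in Python's `re` (close enough to Oracle's POSIX-style syntax for these patterns):

```python
import re

names = [
    "D3O using TM-0549 - Rev # 6",
    "Water using TM-0415 - Rev #10",
    "IPA using QCRM-0017",
]

results = []
for name in names:
    tm = re.search(r"TM-[0-9]+", name)
    rev = re.search(r"Rev # [0-9]+", name)
    results.append((tm.group(0) if tm else None,
                    rev.group(0) if rev else None))

print(results)
# [('TM-0549', 'Rev # 6'), ('TM-0415', None), (None, None)]
# note: 'Rev #10' (no space after #) is missed, per the stated assumption
```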
And here's another one with `REGEXP_REPLACE`; we have to use the CASE/REGEXP\_LIKE workaround to return an empty string if the regex doesn't match, because REGEXP\_REPLACE returns the whole string unchanged if no match is found:
```
select
(case
when regexp_like(reported_name, '.*(TM\-[0-9]+).*')
then regexp_replace(reported_name, '.*(TM\-[0-9]+).*', '\1')
else ''
end) as test_method_regexrepl,
(case
when regexp_like(reported_name, '.*(Rev # [0-9]+).*')
then regexp_replace(reported_name, '.*(Rev # [0-9]+).*', '\1')
else ''
end) as revision_regexrepl
from test t
```
The second approach uses a capturing group `(Rev # [0-9]+)` and replaces the whole string with its contents `\1`.
**2nd UPDATE**
Assuming
* everything in front of `using` should be ignored
* everything up to an optional `Rev` is the test method name
* a revision consists of `Rev # <number>`, where the first space is optional
this should work:
```
select reported_name,
(case
when regexp_like(reported_name, '.* using (.*)( - Rev.*)')
then regexp_replace(reported_name, '.* using (.*)( - Rev.*)', '\1')
when regexp_like(reported_name, '.* using (.*)')
then regexp_replace(reported_name, '.* using (.*)', '\1')
else '' end) as test_method_regexrepl,
(case when regexp_like(reported_name, '.* - (Rev[ ]?# [0-9]+)')
then regexp_replace(reported_name, '.*(Rev[ ]?# [0-9]+)', '\1')
else '' end) as revision_regexrepl
from test t
```
Explanation:
* `.* using (.*)( - Rev.*)` is our regex for a test method that has a revision. It matches
+ an arbitrary string `.*`
+ the string `using` (note the two spaces)
+ an arbitrary string `(.*)` - we use the parentheses `()` to capture this part of the match in a group
+ the string `- Rev`, followed by an arbitrary string; again, we use parentheses to capture the string in a group (although we don't really need that)
If we have a match, we replace the whole string with the first capturing group `\1` (this contains the part between `using` and `Rev`
* `.* using (.*)` is our fallback for a test method without the revision; it matches
+ an arbitrary string `.*`
+ the string `using` (note the two spaces)
+ an arbitrary string `(.*)` - we use the parentheses `()` to capture this part of the match in a group
If we have a match, we replace the whole string with the first capturing group `\1` (this contains the part between `using` and `Rev`
* `.* - (Rev[ ]?# [0-9]+)` is our regex for the revision part. It matches
+ an arbitrary string followed by a hyphen surrounded by spaces `.* -`
+ the word `Rev`
+ an optional space `[ ]?`
+ a hash sign `#` followed by a space
+ one or more digits `[0-9]+`
and again uses a capturing group `(Rev...)` for the "interesting" part
If we have a match, we replace the whole string with the first capturing group `\1` (this contains the part between `Rev` and the last digit)
[SQL Fiddle](http://sqlfiddle.com/#!4/d41d8/30817) | As for me, for this particular situation the query can look like:
```
WITH j
AS (SELECT 'D3O using TM-0549 - Rev # 6' str FROM DUAL
UNION ALL
SELECT 'D3O using TM-0549 - Rev # 6' FROM DUAL
UNION ALL
SELECT 'Water using TM-0415 - Rev #10' FROM DUAL
UNION ALL
SELECT 'Water using TM-0449 - Rev # 10' FROM DUAL
UNION ALL
SELECT 'Decanoic Acid using LL-1448 - Rev# 2' FROM DUAL
UNION ALL
SELECT 'DBE-821 using QCRM-0015 - Rev#1' FROM DUAL
UNION ALL
SELECT 'Water using TM-0441 Rev # 10' FROM DUAL
UNION ALL
SELECT 'FC Sessile Drop Contact Angle using MTM-017_REV_B - Rev # 1' FROM DUAL
UNION ALL
SELECT 'IPA using QCRM-0017' FROM DUAL
UNION ALL
SELECT 'Norbloc using TM-0501 - Rev # 5' FROM DUAL
UNION ALL
SELECT 'DK (non-edge corrected) using TM-0534 - Rev # 3' FROM DUAL
UNION ALL
SELECT 'Decanoic Acid_L3 using LL-1448_L3' FROM DUAL
UNION ALL
SELECT 'Decanoic Acid_L4 using LL-1448_L4' FROM DUAL
UNION ALL
SELECT 'MXP7-1911 using CRM-0239' FROM DUAL
UNION ALL
SELECT 'TMPTMA using TM-0515 - Rev# 8' FROM DUAL
UNION ALL
SELECT 'DK (edge corrected) using MTM-09 - Rev# 0' FROM DUAL)
SELECT TRIM(RTRIM(TRIM (SUBSTR (clear_str, 0, INSTR (clear_str, ' ') + LENGTH (' '))),'-')) AS left_str,
TRIM(LTRIM(TRIM (SUBSTR (clear_str, INSTR (clear_str, ' ') + LENGTH (' '))),'-')) AS right_str
FROM (SELECT TRIM (SUBSTR (str, INSTR (str, 'using') + LENGTH ('using'))) || ' ' clear_str FROM j)
```
**UPD.**
Also, this solution does not depend on business data like 'Rev' or anything else. But it is sensitive to spaces in our test\_method value | What's the best way to check if characters exist in string in SQL? | [
"",
"sql",
"regex",
"oracle",
""
] |
My company uses internal management software for storing products. They want to transpose all the products into a MySQL database so they can make their products available on the company website.
Notice: they will continue to use their own internal software. This software can export all the products in various file formats (including XML).
The synchronization doesn't have to be in real time; they are satisfied with synchronizing the MySQL database once a day (late at night).
Also, each product in their software has one or more images, so I have to make the images available on the website as well.
Here is an example of an XML export:
```
<?xml version="1.0" encoding="UTF-8"?>
<export_management userid="78643">
<product id="1234">
<version>100</version>
<insert_date>2013-12-12 00:00:00</insert_date>
<warrenty>true</warrenty>
<price>139,00</price>
<model>
<code>324234345</code>
<model>Notredame</model>
<color>red</color>
<size>XL</size>
</model>
<internal>
<color>green</color>
<size>S</size>
</internal>
<options>
<s_option>aaa</s_option>
<s_option>bbb</s_option>
<s_option>ccc</s_option>
<s_option>ddd</s_option>
<s_option>eee</s_option>
<s_option>fff</s_option>
...
<extra_option>ggg</extra_option>
<extra_option>hhh</extra_option>
<extra_option>jjj</extra_option>
<extra_option>kkk</extra_option>
...
</options>
<images>
<image>
<small>1234_0.jpg</small>
</image>
<image>
<small>1234_1.jpg</small>
</image>
</images>
</product>
<product id="5321">
...
</product>
<product id="2621">
...
</product>
...
</export_management>
```
Any ideas on how I can do it?
Please let me know if my question is not clear. Thanks
---
**EDIT:**
I used a SQL like this for each table to fill them with the XML datas:
```
LOAD XML LOCAL INFILE '/products.xml' INTO TABLE table_name ROWS IDENTIFIED BY '<tag_name>';
```
Then, checking the tables' content, I can see that the "id" field (primary key) has automatically stayed the same for each respective product row in each table. That's correct and surprisingly awesome!
The problem now is the `<options>` parameter, because it contains sub-parameters with the same name (`<s_option>` and `<extra_option>`). The values of these tags are always different (that is, there is no specific list of values; they are entered manually by an employee), and I also don't know how many there are for each product. I read that storing them as an array is not so good, but if it's the only simple solution I can accept it. | The way that I would approach the problem in your case is:
1. Create a corresponding set of tables in the database that represent the company's product model, extracted from your given XML.
2. Create and use a scheduled daily synchronization job that executes a few SQL commands in order to refresh the data or introduce new data by **parsing** the products XMLs into the created tables.
**To be more practical about it all:**
* As for the database's **tables**, I can easily identify three tables to be created based on your XML, look at the yellow marked elements:
1. `Products`
2. `ProductsOptions`
3. `ProductsImages`

(*This diagram created based on an [XSD](http://pastebin.com/KY31Mb0n) that was generated from your XML*)
---
All the rest can be considered regular columns in the `Products` table, since they constitute a 1-1 relationship only.
Next, create the required tables in your database (you can use an XSD2DB Schema converter tool to create the DDL script, I did it manually):
**companydb.products**
```
CREATE TABLE companydb.products (
Id INT(11) NOT NULL,
Version INT(11) DEFAULT NULL,
InsertDate DATETIME DEFAULT NULL,
Warrenty TINYINT(1) DEFAULT NULL,
Price DECIMAL(19, 2) DEFAULT NULL,
ModelCode INT(11) DEFAULT NULL,
ModelColor VARCHAR(10) DEFAULT NULL,
Model VARCHAR(255) DEFAULT NULL,
ModelSize VARCHAR(10) DEFAULT NULL,
InternalColor VARCHAR(10) DEFAULT NULL,
InternalSize VARCHAR(10) DEFAULT NULL,
PRIMARY KEY (Id)
)
ENGINE = INNODB
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Company''s Products';
```
**companydb.productsimages**
```
CREATE TABLE companydb.productimages (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Size VARCHAR(10) DEFAULT NULL,
FileName VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_productsimages_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci
COMMENT = 'Products'' Images';
```
**companydb.productsoptions**
```
CREATE TABLE companydb.productoptions (
Id INT(11) NOT NULL AUTO_INCREMENT,
ProductId INT(11) DEFAULT NULL,
Type VARCHAR(255) DEFAULT NULL,
`Option` VARCHAR(255) DEFAULT NULL,
PRIMARY KEY (Id),
CONSTRAINT FK_producstsoptions_products_Id FOREIGN KEY (ProductId)
REFERENCES companydb.products(Id) ON DELETE RESTRICT ON UPDATE RESTRICT
)
ENGINE = INNODB
AUTO_INCREMENT = 1
CHARACTER SET utf8
COLLATE utf8_general_ci;
```
---
* As for the **synchronization job process**, you can easily create a MySQL [**event**](http://dev.mysql.com/doc/refman/5.1/en/events.html) and use the [Event Scheduler](http://dev.mysql.com/doc/refman/5.1/en/events.html) to control it. I created the required `event`, which calls the stored procedure you'll find below (`SyncProductsDataFromXML`), look:
> CREATE DEFINER = 'root'@'localhost' EVENT companydb.ProductsDataSyncEvent
> ON SCHEDULE EVERY '1' DAY STARTS '2014-06-13 01:27:38'
> COMMENT 'Synchronize Products table with Products XMLs'
> DO BEGIN
> SET @productsXml = LOAD_FILE('C:/MySqlXmlSync/products.xml');
> CALL SyncProductsDataFromXML(@productsXml);
> END;
>
> ALTER EVENT companydb.ProductsDataSyncEvent ENABLE;
Now comes the interesting part: here is the synchronization stored procedure (note how the `event` above calls it):
```
CREATE DEFINER = 'root'@'localhost'
PROCEDURE companydb.SyncProductsDataFromXML(IN productsXml MEDIUMTEXT)
BEGIN
DECLARE totalProducts INT;
DECLARE productIndex INT;
SET totalProducts = ExtractValue(productsXml, 'count(//export_management/product)');
SET productIndex = 1;
WHILE productIndex <= totalProducts DO
SET @productId = CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/@id') AS UNSIGNED);
INSERT INTO products(`Id`, `Version`, InsertDate, Warrenty, Price, ModelCode, Model, ModelColor, ModelSize, InternalColor, InternalSize)
VALUES(
@productId,
ExtractValue(productsXml, 'export_management/product[$productIndex]/version'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/insert_date'),
CASE WHEN (ExtractValue(productsXml, 'export_management/product[$productIndex]/warrenty')) <> 'false' THEN 1 ELSE 0 END,
CAST(ExtractValue(productsXml, 'export_management/product[$productIndex]/price') as DECIMAL),
ExtractValue(productsXml, 'export_management/product[$productIndex]/model/code'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/model/model'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/model/color'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/model/size'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/color'),
ExtractValue(productsXml, 'export_management/product[$productIndex]/internal/size')
);
SET @totalImages = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/images/image)');
SET @imageIndex = 1;
WHILE (@imageIndex <= @totalImages) DO
INSERT INTO productimages(ProductId, Size, FileName) VALUES(@productId, 'small', EXTRACTVALUE(productsXml, 'export_management/product[$productIndex]/images/image[$@imageIndex]/small'));
SET @imageIndex = @imageIndex + 1;
END WHILE;
SET @totalStandardOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/s_option)');
SET @standardOptionIndex = 1;
WHILE (@standardOptionIndex <= @totalStandardOptions) DO
INSERT INTO productoptions(ProductId, `Type`, `Option`) VALUES(@productId, 'Standard Option', EXTRACTVALUE(productsXml, 'export_management/product[$productIndex]/options/s_option[$@standardOptionIndex]'));
SET @standardOptionIndex = @standardOptionIndex + 1;
END WHILE;
SET @totalExtraOptions = ExtractValue(productsXml, 'count(//export_management/product[$productIndex]/options/extra_option)');
SET @extraOptionIndex = 1;
WHILE (@extraOptionIndex <= @totalExtraOptions) DO
INSERT INTO productoptions(ProductId, `Type`, `Option`) VALUES(@productId, 'Extra Option', EXTRACTVALUE(productsXml, 'export_management/product[$productIndex]/options/extra_option[$@extraOptionIndex]'));
SET @extraOptionIndex = @extraOptionIndex + 1;
END WHILE;
SET productIndex = productIndex + 1;
END WHILE;
END
```
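If you want to sanity-check the XPath extraction logic outside MySQL first, here is a rough Python sketch using only the standard library (the function and variable names are illustrative, and the sample XML is trimmed to a few fields from the question):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<export_management userid="78643">
  <product id="1234">
    <version>100</version>
    <options>
      <s_option>aaa</s_option>
      <s_option>bbb</s_option>
      <extra_option>ggg</extra_option>
    </options>
    <images>
      <image><small>1234_0.jpg</small></image>
      <image><small>1234_1.jpg</small></image>
    </images>
  </product>
</export_management>"""

def parse_products(xml_text):
    """Extract the same product/option/image rows the stored procedure builds."""
    root = ET.fromstring(xml_text)
    rows = []
    for product in root.findall("product"):
        options = [("Standard Option", o.text)
                   for o in product.findall("options/s_option")]
        options += [("Extra Option", o.text)
                    for o in product.findall("options/extra_option")]
        images = [img.text for img in product.findall("images/image/small")]
        rows.append({"id": int(product.get("id")),
                     "version": int(product.findtext("version")),
                     "options": options,
                     "images": images})
    return rows

rows = parse_products(SAMPLE)
print(rows[0]["id"], len(rows[0]["options"]), rows[0]["images"])
```

Once the extracted rows look right, the same XPath expressions can be ported to the `ExtractValue()` calls above.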
And you're done; these are the final expected results from this process:

---

---

**NOTE:** I've committed the entire code to one of my GitHub repositories: [**XmlSyncToMySql**](https://github.com/ynevet/XmlSyncToMySql)
**UPDATE:**
Because your XML data might be larger than the maximum allowed for a `TEXT` field, I've changed the `productsXml` parameter to a `MEDIUMTEXT`. Look at this answer, which outlines the maximum allowed size of the various text datatypes:
[Maximum length for MYSQL type text](https://stackoverflow.com/a/6766854/952310) | As this smells like integration work, I would suggest a multi-pass, multi-step procedure with an interim format that is not only easy to import into mysql but which also helps you to wrap your mind around the problems this integration ships with and test a solution in small steps.
This procedure works well if you can flatten the tree structure that can or could be expressed within the XML export into a list of products with fixed named attributes.
* query all product elements with an xpath query from the XML, iterate the result of products
* query all product attributes relative to the context node of the product from the previous query. Use one xpath per each attribute again.
* store the result of all attributes per each product as one row into a CSV file.
* store the filenames in the CSV as well (the basenames), but put the files into a folder of their own
* create the DDL of the mysql table in form of an .sql file
* run that .sql file against mysql commandline.
* import the CSV file into that table via mysql commandline.
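The flattening steps above can be sketched in Python (names and the trimmed sample are illustrative); multi-valued options are collapsed into a JSON string, as suggested below, so each product stays one CSV row:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

xml_text = """<export_management>
  <product id="1234">
    <version>100</version>
    <options><s_option>aaa</s_option><s_option>bbb</s_option></options>
  </product>
</export_management>"""

root = ET.fromstring(xml_text)
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "version", "s_options"])  # header row
for product in root.findall("product"):
    writer.writerow([
        product.get("id"),
        product.findtext("version"),
        # multiple <s_option> values become one JSON-encoded cell
        json.dumps([o.text for o in product.findall("options/s_option")]),
    ])
csv_text = buf.getvalue()
print(csv_text)
```

The resulting file can then be fed to `LOAD DATA INFILE` (or the mysql command line) as described in the steps above.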
You should get quick results within hours. If it turns out that products cannot be mapped onto a single row because some attributes have multiple values (what you call an array in your question), consider turning these into JSON strings if you cannot drop them entirely (just hope you don't need to display complex data in the beginning). Doing so violates normal form; however, as you describe, the MySQL table is only intermediate here as well, so I would aim for simplicity of the data structure in the database, as otherwise queries for a simple and fast display on the website will become the next burden.
So my suggestion here basically is: Turn the tree structure into a (more) flat list for both simplification of transition and easier templating for display.
Having an intermediate format here also allows you to replay in case things are going wrong.
It also allows you to mock the whole templating more easily.
Alternatively, it is also possible to store the XML of each product inside the database (keep the chunks in a second table so you can keep varchar (variable length) fields out of the first table) and keep some other columns as (flat) reference columns to query against. For templating needs, turning the XML into a SimpleXMLElement is often very nice, since it gives you a structured, non-primitive data type as a view object you can traverse and loop over. It would work similarly with JSON; however, keeping the XML would not break a format boundary, and XML can also express more structure than JSON. | Synchronizing XML file to MySQL database | [
"",
"mysql",
"sql",
"xml",
"synchronization",
"database-schema",
""
] |
I have two tables with two attributes. I have written a procedure which should take values from table 1 and update them in table 2.
```
MEDICINE
BARCODE varchar2 13 byte
PDF_KUB_PATH varchar2 1000 byte
PARSED_ILAC_REHBERI
BARCODE varchar2 13 byte
PDF_KUB_PATH varchar2 1000 byte
```
This is my procedure in PL/SQL. I can't find where my mistake is; thanks in advance.
```
CREATE OR REPLACE PROCEDURE Deneme
IS
BARCODETEMP VARCHAR2(13 BYTE);
S_KUB VARCHAR2 (1000 BYTE);
CURSOR c1 IS
SELECT a.PDF_KUB_PATH, a.BARCODE
FROM medicine b,parsed_ilac_rehberi a
WHERE a.BARCODE = b.barcode;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO S_KUB , BARCODETEMP,
EXIT WHEN c1%NOTFOUND;
UPDATE medicine
SET PDF_KUB_PATH = S_KUB
WHERE BARCODE = BARCODETEMP;
END LOOP;
CLOSE c1;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20001,'An error was encountered - '
||SQLCODE||' -ERROR'||SQLERRM);
end;
``` | You don't need to use a procedure; you can solve this with a plain SQL statement.
```
update medicine m
set m.PDF_KUB_PATH = (select p.PDF_KUB_PATH from parsed_ilac_rehberi p where m.BARCODE = p.BARCODE);
``` | Shorter version (you don't need MERGE for 1 insert), and you don't need to redefine the exception (as noted by Ben):
```
CREATE OR REPLACE PROCEDURE Deneme
IS
UPDATE (select a.PDF_KUB_PATH src, m.PDF_KUB_PATH dst
FROM medicine m INNER JOIN parsed_ilac_rehberi a
ON m.BARCODE = a.BARCODE)
SET dst = src;
END;
``` | Oracle procedure to update values of one table to another another table field | [
"",
"sql",
"oracle",
"stored-procedures",
"oracle11g",
""
] |
I have a table named `article` with fields `day`, `month`, `year`, `title`. So the `date` of an article in the table would be `day-month-year`, but it has been stored in the form of 3 different columns. I need to obtain the titles of all those articles from the table which have been published between a given `start_date` and `end_date`.
Assume that `start_date` also is given in form of 3 variables -
`sday` , `smonth` , `syear`
And `end_date` as 3 variables -
`eday` , `emonth` , `eyear`
How can I write the MySQL query for the above? Thanks in advance. | I assume your fields are numeric.
So you can use the following code:
```
select *
from article
where (year*10000+month*100+day) between (syear*10000+smonth*100+sday) and (eyear*10000+emonth*100+eday)
``` | The simplest way seems to convert the years, months and days to real dates. Then the database can do the hard work. You can convert the components to a date like:
```
cast('2014-04-01' as datetime)
```
In a query, this would look like:
```
select *
from YourTable
where cast(concat(`year`, '-', `month`, '-', `day`) as datetime)
<
cast(concat(2014, '-', 4, '-', 30) as datetime)
```
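`CAST(... AS DATETIME)` and `CONCAT` are MySQL syntax. Purely as an illustration of the same build-a-real-date idea, here is a SQLite sketch (run through Python's `sqlite3`; table and data are made up) that assembles zero-padded ISO-8601 strings with `printf`, which then compare in correct date order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (day INT, month INT, year INT, title TEXT)")
conn.executemany("INSERT INTO article VALUES (?,?,?,?)",
                 [(5, 1, 2014, "early"), (15, 4, 2014, "spring"), (1, 12, 2014, "late")])

# zero-padded ISO-8601 strings sort the same way real dates do
rows = conn.execute("""
    SELECT title FROM article
    WHERE printf('%04d-%02d-%02d', year, month, day)
          BETWEEN printf('%04d-%02d-%02d', ?, ?, ?)
              AND printf('%04d-%02d-%02d', ?, ?, ?)
    ORDER BY title
""", (2014, 1, 1, 2014, 4, 30)).fetchall()
print(rows)  # [('early',), ('spring',)]
```

The same trick works in any engine that can format fixed-width strings, because lexicographic order then matches chronological order.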
[See it working at SQL Fiddle.](http://sqlfiddle.com/#!2/46ecb/8/0) | How to put If else statement in SQL where condition? | [
"",
"mysql",
"sql",
""
] |
I have a table in SQL Server 2012 which has these values (they are all integers).
```
a b
1 1
1 0
-1 1
2 1
```
This query works OK.
```
select * from T1
where
b <> 0 and a / b > 0
```
This query is failing.
```
select * from T1
where
b * b * b <> 0 and a / b > 0
```
I think I have a slight idea why but still... Why? Could you clearly explain (or point me to some official docs) what SQL Server short-circuits and what it doesn't?
I find statements saying that T-SQL does support short-circuit but if the evaluation order is not guaranteed then... isn't the short-circuiting process ambiguous or let's say not well defined?
I am kind of confused about this. | I learned about this problem from the book Inside Microsoft SQL Server: T-SQL Querying. I copy the book content and post it here. Your question is answered after the
first example.
**Key concept - All-At-Once Operations**
SQL supports a concept called all-at-once operations, which means that all expressions that appear in the same logical query processing phase are evaluated as if at the same point in time.
This concept explains why, for example, you cannot refer to column aliases assigned in the SELECT clause within the same SELECT clause, even if it seems intuitively that you should be able to. Consider the following query:
```
SELECT
orderid,
YEAR(orderdate) AS orderyear,
orderyear + 1 AS nextyear
FROM Sales.Orders;
```
The reference to the column alias orderyear is invalid in the third expression in the SELECT list, even though the referencing expression appears "after" the one where the alias is assigned. The reason is that logically there is no order of evaluation of the expressions in the SELECT list—it’s a set of expressions. At the logical level all expressions in the SELECT list are evaluated at the same point in time. Therefore this query generates the following error:
```
Msg 207, Level 16, State 1, Line 4
Invalid column name 'orderyear'.
```
Here’s another example of the relevance of all-at-once operations: Suppose you had a table called T1 with two integer columns called col1 and col2, and you wanted to return all rows where col2/col1 is greater than 2. Because there may be rows in the table where col1 is equal to 0, you need to ensure that the division doesn’t take place in those cases—otherwise, the query fails because of a divide-by-zero error. So if you write a query using the following format:
```
SELECT col1, col2
FROM dbo.T1
WHERE col1 <> 0 AND col2/col1 > 2;
```
You assume that SQL Server evaluates the expressions from left to right, and that if the expression col1 <> 0 evaluates to FALSE, SQL Server will short-circuit; that is, it doesn’t bother to evaluate the expression col2/col1 > 2 because at this point it is known that the whole expression is FALSE. So you might think that this query never produces a divide-by-zero error.
**SQL Server does support short circuits, but because of the all-at-once operations concept in ANSI SQL, SQL Server is free to process the expressions in the WHERE clause in any order that it likes. SQL Server usually makes decisions like this based on cost estimations, meaning that typically the expression that is cheaper to evaluate is evaluated first.** *You can see that if SQL Server decides to process the expression col2/col1 > 2 first, this query might fail because of a divide-by-zero error.*
You have several ways to try and avoid a failure here. For example, the order in which the WHEN clauses of a CASE expression are evaluated is guaranteed. So you could revise the query as follows:
```
SELECT col1, col2
FROM dbo.T1
WHERE
CASE
WHEN col1 = 0 THEN 'no' -- or 'yes' if the row should be returned
WHEN col2/col1 > 2 THEN 'yes'
ELSE 'no'
END = 'yes';
```
In rows where col1 is equal to zero, the first WHEN clause evaluates to TRUE and the CASE expression returns the string ‘no’ (replace with ‘yes’ if you want to return the row when col1 is equal to zero). Only if the first CASE expression does not evaluate to TRUE—meaning that col1 is not 0—does the second WHEN clause check whether the expression col2/col1 > 2 evaluates to TRUE. If it does, the CASE expression returns the string ‘yes.’ In all other cases, the CASE expression returns the string ‘no.’ The predicate in the WHERE clause returns TRUE only when the result of the CASE expression is equal to the string ‘yes.’ This means that there will never be an attempt here to divide by zero.
This workaround turned out to be quite convoluted, and in this particular case we can use a simpler mathematical workaround that avoids division altogether:
```
SELECT col1, col2
FROM dbo.T1
WHERE col1 <> 0 and col2 > 2*col1;
```
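The division-free predicate is easy to verify. A sketch using Python's `sqlite3` (sample data invented; note that for positive col1 the rewritten form matches col2/col1 > 2, and SQL Server semantics differ in other details, e.g. SQLite yields NULL rather than an error on division by zero):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (col1 INT, col2 INT)")
conn.executemany("INSERT INTO T1 VALUES (?,?)",
                 [(0, 10), (2, 10), (5, 4), (3, 9)])

# division-free form of "col2/col1 > 2": safe regardless of evaluation order
rows = conn.execute(
    "SELECT col1, col2 FROM T1 WHERE col1 <> 0 AND col2 > 2*col1 ORDER BY col1"
).fetchall()
print(rows)  # [(2, 10), (3, 9)]
```

The (0, 10) row is filtered out without any division ever being attempted.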
I included this example to explain the unique and important all-at-once operations concept, and the fact that SQL Server guarantees the processing order of the WHEN
clauses in a CASE expression.
There is more in this link - <http://social.technet.microsoft.com/wiki/contents/articles/20724.all-at-once-operations-in-t-sql.aspx> | The specification of SQL short-circuits in SQL server is very blurry. From what I've heard, the only time you can be sure that your query will be lazy-evaluated is CASE instruction with multiple WHEN entries. There is no guarantee even when you are using simple OR/AND expressions. See [this article](http://weblogs.sqlteam.com/jeffs/archive/2008/02/22/sql-server-short-circuit.aspx) if you want to know more. | SQL / T-SQL short-circuit | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have this SQL:
```
SELECT
c.customer_code,
SUM(units) AS tot_units,
SUM(total_amount) AS tot_money,
null as units_to_date,
null as amount_to_date,
FROM customers c
join transactions t on t.customer_code = c.customer_code
WHERE customer_active='S'
GROUP BY c.customer_code
UNION
SELECT
c.customer_code,
null AS tot_units,
null AS tot_money,
SUM(units) as units_to_date,
SUM(total_amount) as amount_to_date,
FROM customers c
join transactions t on t.customer_code = c.customer_code
WHERE customer_active='S' and t.transaction_date >= (trunc(current_date) - (60 * INTERVAL '1' DAY))
GROUP BY c.customer_code;
```
The result is:
```
CUSTOMER_CODE, TOT_UNITS, TOT_AMOUNT, TO_DATE_UNITS, TO_DATE_AMOUNT
0000001 450 300 null null
0000001 null null 30 15
```
and I need the result to be :
```
CUSTOMER_CODE, TOT_UNITS, TOT_AMOUNT, TO_DATE_UNITS, TO_DATE_AMOUNT
0000001 450 300 30 15
```
I've tried with UNION but it doesn't work. | Extending @VJHil's answer, it *is* possible to get rid of the `union`. The trick is to filter out everything outside of your desired date range using `case`:
```
SELECT c.customer_code,
SUM (units) AS tot_units,
SUM (total_amount) AS tot_money,
SUM (
CASE
WHEN t.transaction_date >=
(TRUNC (CURRENT_DATE) - (60 * INTERVAL '1' DAY)) THEN
units
ELSE
NULL
END)
AS units_to_date,
SUM (
CASE
WHEN t.transaction_date >=
(TRUNC (CURRENT_DATE) - (60 * INTERVAL '1' DAY)) THEN
total_amount
ELSE
NULL
END)
AS amount_to_date
FROM customers c JOIN transactions t ON t.customer_code = c.customer_code
WHERE customer_active = 'S'
GROUP BY c.customer_code
```
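The conditional-aggregation pattern above can be sketched and checked in Python's `sqlite3` (illustrative schema; the 60-day date filter is reduced to a simple `recent` flag for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions
                (customer_code TEXT, units INT, total_amount INT, recent INT)""")
conn.executemany("INSERT INTO transactions VALUES (?,?,?,?)", [
    ("0000001", 420, 285, 0),  # older transaction
    ("0000001", 30, 15, 1),    # within the last 60 days
])

# one pass over the data: plain SUMs plus CASE-filtered SUMs
row = conn.execute("""
    SELECT customer_code,
           SUM(units) AS tot_units,
           SUM(total_amount) AS tot_money,
           SUM(CASE WHEN recent = 1 THEN units END) AS units_to_date,
           SUM(CASE WHEN recent = 1 THEN total_amount END) AS amount_to_date
    FROM transactions
    GROUP BY customer_code
""").fetchone()
print(row)  # ('0000001', 450, 300, 30, 15)
```

A `CASE` without an `ELSE` yields NULL, which `SUM` ignores, so the "to date" columns only count recent rows.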
This should perform better than any solution where you access the data twice. | Maybe split it up as two views and join them.
Then use either `NVL()` or `MAX()`, whichever suits best.
```
WITH V1 AS (
SELECT
c.customer_code,
SUM(units) AS tot_units,
SUM(total_amount) AS tot_money,
null as units_to_date,
null as amount_to_date,
FROM customers c
join transactions t on t.customer_code = c.customer_code
WHERE customer_active='S'
GROUP BY c.customer_code
),
V2 AS (
SELECT
c.customer_code,
null AS tot_units,
null AS tot_money,
SUM(units) as units_to_date,
SUM(total_amount) as amount_to_date,
FROM customers c
join transactions t on t.customer_code = c.customer_code
WHERE customer_active='S' and t.transaction_date >= (trunc(current_date) - (60 * INTERVAL '1' DAY))
GROUP BY c.customer_code)
SELECT
V1.CUSTOMER_CODE, NVL(V1.TOT_UNITS,V2. TOT_UNITS), NVL(V1.TOT_AMOUNT,V2. TOT_AMOUNT), NVL(V1.TO_DATE_UNITS,V2. TO_DATE_UNITS) TO_DATE_AMOUNT
FROM V1, V2
WHERE V1.CUSTOMER_CODE = V2.CUSTOMER_CODE
``` | ORACLE merge two SELECTs with UNION adding up | [
"",
"sql",
"oracle",
"select",
"union",
""
] |
Say we have a table:
```
CREATE TABLE p
(
id serial NOT NULL,
val boolean NOT NULL,
PRIMARY KEY (id)
);
```
Populated with some rows:
```
insert into p (val)
values (true),(false),(false),(true),(true),(true),(false);
```
```
ID VAL
1 1
2 0
3 0
4 1
5 1
6 1
7 0
```
I want to determine when the value has been changed. So the result of my query should be:
```
ID VAL
2 0
4 1
7 0
```
I have a solution with joins and subqueries:
```
select min(id) id, val from
(
select p1.id, p1.val, max(p2.id) last_prev
from p p1
join p p2
on p2.id < p1.id and p2.val != p1.val
group by p1.id, p1.val
) tmp
group by val, last_prev
order by id;
```
But it is very inefficient and will run extremely slowly for tables with many rows.
I believe there could be more efficient solution using PostgreSQL window functions?
[SQL Fiddle](http://sqlfiddle.com/#!15/962ac/1) | This is how I would do it with an analytic:
```
SELECT id, val
FROM ( SELECT id, val
,LAG(val) OVER (ORDER BY id) AS prev_val
FROM p ) x
WHERE val <> COALESCE(prev_val, val)
ORDER BY id
```
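SQLite (3.25+) implements the same window function, so the query can be checked end-to-end against the sample data; the sketch below runs it through Python's `sqlite3` purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p (id INTEGER PRIMARY KEY, val INT)")
conn.executemany("INSERT INTO p (val) VALUES (?)",
                 [(1,), (0,), (0,), (1,), (1,), (1,), (0,)])

# keep rows whose value differs from the previous row's value
rows = conn.execute("""
    SELECT id, val
    FROM ( SELECT id, val,
                  LAG(val) OVER (ORDER BY id) AS prev_val
           FROM p ) x
    WHERE val <> COALESCE(prev_val, val)
    ORDER BY id
""").fetchall()
print(rows)  # [(2, 0), (4, 1), (7, 0)]
```

This reproduces exactly the result set requested in the question.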
**Update (some explanation):**
Analytic functions operate as a post-processing step. The query result is broken into groupings (`partition by`) and the analytic function is applied within the context of a grouping.
In this case, the query is a selection from `p`. The analytic function being applied is `LAG`. Since there is no `partition by` clause, there is only one grouping: the entire result set. This grouping is ordered by `id`. `LAG` returns the value of the previous row in the grouping using the specified order. The result is each row having an additional column (aliased prev\_val) which is the `val` of the preceding row. That is the subquery.
Then we look for rows where the `val` does not match the `val` of the previous row (prev\_val). The `COALESCE` handles the special case of the first row which does not have a previous value.
Analytic functions may seem a bit strange at first, but a search on analytic functions finds a lot of examples walking through how they work. For example: <http://www.cs.utexas.edu/~cannata/dbms/Analytic%20Functions%20in%20Oracle%208i%20and%209i.htm> Just remember that it is a post-processing step. You won't be able to perform filtering, etc on the value of an analytic function unless you subquery it. | ### Window function
Instead of calling `COALESCE`, you can provide a default from the window function [**`lag()`**](http://www.postgresql.org/docs/current/interactive/functions-window.html) directly. A minor detail in this case since all columns are defined `NOT NULL`. But this may be essential to distinguish "no previous row" from "NULL in previous row".
```
SELECT id, val
FROM (
SELECT id, val, lag(val, 1, val) OVER (ORDER BY id) <> val AS changed
FROM p
) sub
WHERE changed
ORDER BY id;
```
Compute the result of the comparison immediately, since the previous value is not of interest per se, only a possible change. Shorter and may be a tiny bit faster.
**If** you consider the ***first row*** to be "changed" (unlike your demo output suggests), you need to observe `NULL` values - even though your columns are defined `NOT NULL`. Basic `lag()` returns `NULL` in case there is no previous row:
```
SELECT id, val
FROM (
SELECT id, val, lag(val) OVER (ORDER BY id) IS DISTINCT FROM val AS changed
FROM p
) sub
WHERE changed
ORDER BY id;
```
*Or* employ the additional parameters of `lag()` once again:
```
SELECT id, val
FROM (
SELECT id, val, lag(val, 1, NOT val) OVER (ORDER BY id) <> val AS changed
FROM p
) sub
WHERE changed
ORDER BY id;
```
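To see the difference the default argument makes, here is the same variant sketched in Python's `sqlite3` (SQLite also implements the three-argument `lag()`); with the inverted default, row 1 is flagged as changed too:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p (id INTEGER PRIMARY KEY, val INT)")
conn.executemany("INSERT INTO p (val) VALUES (?)",
                 [(1,), (0,), (0,), (1,), (1,), (1,), (0,)])

# the default (NOT val) guarantees the first row compares as "changed"
rows = conn.execute("""
    SELECT id, val
    FROM ( SELECT id, val,
                  LAG(val, 1, (NOT val)) OVER (ORDER BY id) AS prev_val
           FROM p ) sub
    WHERE prev_val <> val
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 1), (2, 0), (4, 1), (7, 0)]
```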
### Recursive CTE
As proof of concept. :)
Performance won't keep up with posted alternatives.
```
WITH RECURSIVE cte AS (
SELECT id, val
FROM p
WHERE NOT EXISTS (
SELECT 1
FROM p p0
WHERE p0.id < p.id
)
UNION ALL
SELECT p.id, p.val
FROM cte
JOIN p ON p.id > cte.id
AND p.val <> cte.val
WHERE NOT EXISTS (
SELECT 1
FROM p p0
WHERE p0.id > cte.id
AND p0.val <> cte.val
AND p0.id < p.id
)
)
SELECT * FROM cte;
```
With an improvement from @wildplasser.
[SQL Fiddle](http://sqlfiddle.com/#!15/962ac/37) demonstrating all. | PostgreSQL - column value changed - select query optimization | [
"",
"sql",
"postgresql",
"window-functions",
"gaps-and-islands",
""
] |
My question is somewhat different.
People here are asking how to insert a date along with the time, but my question is how to insert only the date (DD-MM-YYYY). I do not require the time to be inserted. Because of differences in milliseconds I am getting an exception, so I want to avoid that.
The sysdate and current\_time keywords in Oracle insert the date along with the time.
The requirement is that I have to insert 1000 rows where the timestamp (I don't want a single millisecond of difference) should be the same for each row. | So you just want to insert a lot of records with the same timestamp?
First, here's what **doesn't** work:
```
CREATE TABLE test_table (tstamp TIMESTAMP);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
INSERT INTO test_table VALUES (systimestamp);
SELECT tstamp FROM test_table;
TSTAMP
----------------------------
06-JUN-14 12.24.28.659493 PM
06-JUN-14 12.24.28.663258 PM
06-JUN-14 12.24.28.666936 PM
06-JUN-14 12.24.28.671949 PM
06-JUN-14 12.24.28.676808 PM
06-JUN-14 12.24.28.680507 PM
06-JUN-14 12.24.28.684501 PM
06-JUN-14 12.24.28.688620 PM
06-JUN-14 12.24.28.694491 PM
06-JUN-14 12.24.28.698288 PM
```
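The root cause is that the timestamp expression is re-evaluated per statement. The capture-once idea behind the fix is not Oracle-specific; as a hedged illustration, the same pattern in Python's `sqlite3` (assumed table and column names) stores one value and reuses it for every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (tstamp TEXT)")

# capture the timestamp once, then reuse the same value for every row
right_now = conn.execute(
    "SELECT strftime('%Y-%m-%d %H:%M:%f', 'now')").fetchone()[0]
conn.executemany("INSERT INTO test_table VALUES (?)", [(right_now,)] * 10)

stamps = [r[0] for r in conn.execute("SELECT tstamp FROM test_table")]
print(len(set(stamps)))  # 1 -> all ten rows share one timestamp
```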
**The solution**: do your INSERTs in a PL/SQL block. First get the `systimestamp` and store it in a variable, such as `v_right_now`. Then you can get all your records to have the exact same date/time.
```
DECLARE
v_right_now TIMESTAMP := systimestamp;
BEGIN
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
INSERT INTO test_table VALUES (v_right_now);
END;
/
SELECT tstamp FROM test_table;
TSTAMP
----------------------------
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
06-JUN-14 12.24.48.263444 PM
``` | In Oracle use a date with time "00:00:00", for example: `trunc(sysdate)`.
There isn't a date-only type (without time). | How to insert multiple records into Oracle with same timestamp | [
"",
"sql",
"oracle",
"date",
""
] |
For example, I have a MySQL table named "Qbank" which has the following columns:
* ID (int AUTO\_INCREMENT)
* Question (longtext)
* Repeated (int, default 0)
I know how to get counts of repeated rows (Questions) -
```
SELECT Question,
Repeated,
count(ID) as cnt
FROM Qbank
GROUP BY Question
HAVING cnt > 1
```
And also, to get a list of all duplicate rows -
```
SELECT ID, Qbank.Question, Repeated FROM Qbank
INNER JOIN (
SELECT Question
FROM Qbank
GROUP BY Question
HAVING count(ID) > 1
) dup ON Qbank.Question = dup.Question
ORDER BY Question
```
**But what I want to do is - to SET the "Repeated" value of all the duplicates (having same Question) to 1.**
I.e. so that these can be differentiated from remaining non-duplicate questions, which have Repeated value 0 by default.
### What would be the structure of such query?
Sorry for the elaboration; that was the only way to show that I have searched a lot before asking and have not found anything yet.
Thanks.
Regards,
Dr. Atul | Try using `JOIN` in `UPDATE`:
```
UPDATE Qbank T1 JOIN
(SELECT Question FROM Qbank
GROUP BY Question HAVING count(ID) > 1) dup ON T1.Question = dup.Question
SET T1.Repeated = 1
``` | Update duplicates using `GROUP BY`:
```
START TRANSACTION;
UPDATE wlvalidation wvl
INNER JOIN
(SELECT w.`id`,w.`active`,w.`isused` ,w.wl_key,COUNT(w.`wl_key`) AS ked FROM wlvalidation w WHERE w.`isused`=0 AND w.`wl_company`='afr'
GROUP BY w.`wl_key`
HAVING COUNT(w.`wl_key`) > 1)drs
ON wvl.`id`=drs.`id`
SET wvl.`active`=0;
``` | How to UPDATE a column of all duplicate records in MySQL? | [
"",
"mysql",
"sql",
""
] |
I have a SQL query and I am trying to display only distinct records, i.e. one post per user, ordered by latest post descending. How can I do that? I tried GROUP BY and it failed.
```
SELECT jca.id,
ju.name,
jca.actor,
jca.title as title,
jca.created as postedDate
FROM community_activities jca
left join users ju on jca.actor=ju.id
left join community_users jcu on jcu.userid=ju.id
ORDER BY jca.id, jca.postedDate DESC LIMIT 0 , 50;
id name actor title
200 chandra 12 hello man
201 chandra 12 hey man
202 shayam 13 hello hero
203 chandra 12 hello yoyo
204 kokila 16 yep
205 chandra 12 hello y
206 chandra 12 hello abc
``` | To display only the latest post of each user, create a derived table that consists of only the latest post id of each user and join `community_activities` to that table, so only those results will be displayed.
```
SELECT jca.id,
ju.name,
jca.actor,
jca.title as title,
jca.created as postedDate
FROM community_activities jca
JOIN (SELECT MAX(id) max_id
FROM community_activities
GROUP BY actor) t1 on t1.max_id = jca.id
LEFT JOIN users ju on jca.actor=ju.id
LEFT JOIN community_users jcu on jcu.userid=ju.id
ORDER BY jca.id, jca.postedDate DESC LIMIT 0 , 50;
``` | ```
SELECT
MAX(jca.id), -- just selects maximum of each column ,
DISTINCT ju.name, -- max() may be wrong for your scenario .
MAX(jca.actor),
MAX(jca.title) as title,
MAX(jca.created) as postedDate
FROM
community_activities jca
left join
users ju on jca.actor=ju.id
left join
community_users jcu on jcu.userid=ju.id
GROUP BY
ju.name;
```
output:
```
202 shayam 13 hello hero
204 kokila 16 yep
206 chandra 12 hello abc
``` | Sql Query to show distinct records | [
"",
"sql",
"sql-server",
"distinct",
""
] |
I have an index defined on a single column of a table. I am using the following query to determine whether an index exists on a given column or not. This seems to work, but is there a better way to do this? I looked at other solutions on Stack Overflow but they did not seem to work well.
```
SELECT MIN(ColumnsCount) FROM
(
SELECT COUNT(*) AS ColumnsCount,
SysIndex.name
FROM sys.indexes As SysIndex
Inner Join sys.index_columns As SysIndexCol On SysIndex.object_id = SysIndexCol.object_id And SysIndex.index_id = SysIndexCol.index_id
Inner Join sys.columns As SysCols On SysIndexCol.column_id = SysCols.column_id And SysIndexCol.object_id = SysCols.object_id
WHERE SysIndex.name
in
(
Select
SysIndex.name
From
sys.indexes As SysIndex
Inner Join sys.index_columns As SysIndexCol On SysIndex.object_id = SysIndexCol.object_id And SysIndex.index_id = SysIndexCol.index_id
Inner Join sys.columns As SysCols On SysIndexCol.column_id = SysCols.column_id And SysIndexCol.object_id = SysCols.object_id
Where
type <> 0
And SysIndex.object_id in (Select systbl.object_id from sys.tables as systbl Where SysTbl.name = 'TableName')
And SysCols.name = 'ColName'
)
GROUP BY SysIndex.name) A
``` | What you are retrieving is the lowest number of columns used in an index on a given table, where the index includes a given column. Your query can be simplified to:
```
SELECT TOP 1 ColumnsCount = COUNT(*)
FROM sys.index_columns AS ic
INNER JOIN sys.indexes AS i
ON ic.[object_id] = i.[object_id]
AND ic.index_id = i.index_id
INNER JOIN sys.columns AS c
ON ic.[object_id] = c.[object_id]
AND ic.column_id = c.column_id
WHERE ic.[object_id] = OBJECT_ID(N'dbo.YourTableName')
AND i.[type] != 0
AND ic.is_included_column = 0
GROUP BY i.index_id
HAVING COUNT(CASE WHEN c.Name = 'YourColumnName' THEN 1 END) > 0
ORDER BY ColumnsCount;
```
I've added the condition `ic.is_included_column = 0`, on the assumption that you don't want non-key (included) columns in the count, nor are you interested in indexes where the given column is only an included column. If this assumption is incorrect, remove that predicate.
However, if your current query works, I don't see much benefit in optimising a query against the system catalogs. It is unlikely to be a performance killer.
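If all you need is a yes/no answer — does any index use the column as a key column — the query can collapse to a simple `EXISTS` check. This is a sketch; `dbo.YourTableName` and `YourColumnName` are placeholders for your actual table and column:

```sql
-- True if the column is a key column in at least one index on the table
IF EXISTS (
    SELECT 1
    FROM sys.index_columns AS ic
    WHERE ic.[object_id] = OBJECT_ID(N'dbo.YourTableName')
      AND ic.is_included_column = 0
      AND COL_NAME(ic.[object_id], ic.column_id) = N'YourColumnName'
)
    PRINT 'Column is a key column in at least one index';
ELSE
    PRINT 'No index keys on that column';
```

`COL_NAME()` avoids the join to `sys.columns` entirely; join back to `sys.indexes` only if you also need index names or types.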
```
SELECT *
FROM sys.indexes
WHERE name='idx123'
AND object_id = OBJECT_ID('table1')
``` | Check if an index exists on table column | [
"",
"sql",
"sql-server",
""
] |