Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I need to parse the string and I am having trouble identifying the order number.
Here are a few examples with the expected answer. I need an Oracle SQL expression to return the value.
```
SOURCE_COLUMN PARAMETER RETURN_VALUE
AAA_BBB_CCC_DDD AAA 1
AAA_BBB_CCCC_DDD BBB 2
AAA_BBB_CC_DDD CC 3
AAA_BBBB_CCC_DDD DDD 4
AAA_BBB_CCC_DDD EEE 0
```
Here is SQL to generate first two columns
```
select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'AAA' PARAM FROM DUAL UNION ALL
select 'AAA_BBB_CCCC_DDD' SOURCE_COLUMN, 'BBB' PARAM FROM DUAL UNION ALL
select 'AAA_BBB_CC_DDD' SOURCE_COLUMN, 'CC' PARAM FROM DUAL UNION ALL
select 'AAA_BBBB_CCC_DDD' SOURCE_COLUMN, 'DDD' PARAM FROM DUAL UNION ALL
select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'EEE' PARAM FROM DUAL
``` | This query does what you want:
```
select (case when source_column like '%'||param||'%'
then 1 +
coalesce(length(substr(source_column, 1, instr(source_column, param) - 1)) -
length(replace(substr(source_column, 1, instr(source_column, param) - 1), '_', '')),
0)
else 0
end) as pos
from t;
```
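That counting trick is easy to sanity-check outside the database. Here is a small Python sketch of the same logic (the function name and sample rows are just for illustration):

```python
def field_position(source: str, param: str) -> int:
    """Mirror of the SQL expression: 1 + number of '_' characters before
    the first occurrence of param, or 0 if param does not occur at all."""
    idx = source.find(param)       # INSTR(source_column, param) - 1
    if idx == -1:                  # the LIKE '%'||param||'%' guard
        return 0
    prefix = source[:idx]          # SUBSTR(source_column, 1, instr - 1)
    # LENGTH(prefix) - LENGTH(REPLACE(prefix, '_', '')) counts the underscores
    return 1 + prefix.count("_")

# Sample rows from the question
rows = [("AAA_BBB_CCC_DDD", "AAA"), ("AAA_BBB_CCCC_DDD", "BBB"),
        ("AAA_BBB_CC_DDD", "CC"), ("AAA_BBBB_CCC_DDD", "DDD"),
        ("AAA_BBB_CCC_DDD", "EEE")]
print([field_position(s, p) for s, p in rows])  # expected [1, 2, 3, 4, 0]
```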
The idea is much simpler than the query looks. It finds the matching parameter and then takes the initial substring up to that point. You can count the number of `'_'` by using a trick: take the length of the string and then subtract the length of the string when you replace the `'_'` with `''`. The value you want is actually one more than this value. And, if the pattern is not found, then return `0`. | For your particular example (stable string patterns):
```
SQL> with t as (
2 select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'AAA' PARAM FROM DUAL UNION ALL
3 select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'BBB' PARAM FROM DUAL UNION ALL
4 select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'CCC' PARAM FROM DUAL UNION ALL
5 select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'DDD' PARAM FROM DUAL UNION ALL
6 select 'AAA_BBB_CCC_DDD' SOURCE_COLUMN, 'EEE' PARAM FROM DUAL
7 )
8 select SOURCE_COLUMN, PARAM, floor((instr(SOURCE_COLUMN,param)+3)/4) p from t;
SOURCE_COLUMN PAR P
--------------- --- ----------
AAA_BBB_CCC_DDD AAA 1
AAA_BBB_CCC_DDD BBB 2
AAA_BBB_CCC_DDD CCC 3
AAA_BBB_CCC_DDD DDD 4
AAA_BBB_CCC_DDD EEE 0
``` | Returning the order number of sub-string in a text | [
"",
"sql",
"oracle",
""
] |
I keep getting the error:
```
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to numeric.
```
When executing this query:
```
SELECT A.EMPLID, A.EMPL_RCD, A.DZ_PRESENCE_TYPE, A.DZ_DATE, A.DZ_TIME_HOURS
FROM PS_DZ_TIME_VW A
WHERE A.DZ_PRESENCE_TYPE IN ('P50', 'FORF') AND A.DZ_TIME_HOURS > 0
```
`A.DZ_TIME_HOURS` is a decimal value; when I execute the query without the `AND A.DZ_TIME_HOURS > 0` part, the query runs fine and returns values like `0.00000` or `3.11400`.
The goal is to only display values above 0.
Edit: I forgot to add that `EXEC sp_help PS_DZ_TIME_VW` tells me `A.DZ_TIME_HOURS` is a decimal type with length 5, precision 7 and scale 5.
 | A colleague came up with a solution/workaround:
```
SELECT * FROM
(
SELECT EMPLID, EMPL_RCD, DZ_PRESENCE_TYPE, DZ_DATE, SUM(DZ_TIME_HOURS) HOURS
FROM PS_DZ_TIME_VW
GROUP BY EMPLID, EMPL_RCD, DZ_PRESENCE_TYPE, DZ_DATE
) A
WHERE A.HOURS > 0
```
He thinks this has something to do with how the database handles the query.
In our Oracle db the query ran without problems.
Thank you all for your help! | Try forcing the cast (you may need to specify length, precision, scale, etc.):
```
CAST(A.DZ_TIME_HOURS AS DECIMAL) > 0
```
Just noticed your collation; it seems unfamiliar to me. This could have an effect, as different datatypes are treated differently depending on locale. But DECIMAL/VARCHAR issues seem weird... | Error selecting rows where decimal > 0 | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Suppose I have a table with column A like following :
```
a
--
x
y
m
x
n
y
```
I want to delete all rows that have duplicate a column value and keep just one value.
After this operation, my column would look like the result of:
```
select distinct a from A;
```
I [know how to select rows with repeated a column values](https://stackoverflow.com/questions/16291075/oracle-duplicate-rows-based-on-a-single-column) But I can't just replace select with DELETE because it would delete the unique values too.
Any help would be greatly appreciated. | In Oracle, you can do this by using the `rowid` pseudocolumn and a correlated subquery:
```
delete from a
where rowid > (select min(rowid)
from a a2
where a.a = a2.a
);
```
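SQLite exposes the same `rowid` pseudocolumn, so the technique is easy to try out; a minimal Python/sqlite3 sketch using the sample values from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (a TEXT)")
con.executemany("INSERT INTO a VALUES (?)",
                [("x",), ("y",), ("m",), ("x",), ("n",), ("y",)])

# Keep only the row with the smallest rowid per value of column a
con.execute("""
    DELETE FROM a
    WHERE rowid > (SELECT MIN(rowid) FROM a a2 WHERE a.a = a2.a)
""")
print(sorted(r[0] for r in con.execute("SELECT a FROM a")))
# expected ['m', 'n', 'x', 'y']
```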
Alternatively, you can phrase this as a `not in`:
```
delete from a
where rowid not in (select min(rowid)
from a a2
group by a2.a
);
``` | You can use combination of CTE and [**Ranking function**](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm)
```
;With cte As
(
Select ROW_NUMBER() OVER (PARTITION BY colA ORDER BY colA) as rNum
From yourTable
)
Delete From cte
Where rNum<>1
``` | remove rows with some duplicate column value | [
"",
"sql",
"oracle",
"oracle9i",
""
] |
Is it possible to select the value of the second column using something like an index or column position if I don't know the name of the column?
```
Select col(2) FROM (
Select 'a', 'b',' c', 'd' from dual
)
``` | Is it possible? Sure. You could write a PL/SQL block that used `dbms_sql` to open a cursor using the actual query against `dual`, describe the results, bind a variable to whatever you find the second column to be, fetch from the cursor, and then loop. That would be a terribly involved and generally rather painful process but it could be done.
The SQL language does not define a way to do this in a static SQL statement and Oracle does not provide an extension that would allow this. I'd be rather concerned about the underlying problem that you're trying to solve, though, if you somehow know that you want the second column but don't know what that column represents. That's not something that makes a lot of sense in a relational database. | ```
SELECT ORDINAL_POSITION, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'dual'
```
not sure if this is what you need. | Oracle SQL - Selecting column without knowing column name | [
"",
"sql",
"oracle",
""
] |
I'm working on automating the import of CSV data to a database. At this time I have code that results in a table called `#temp1` which looks like this:
```
CompetitionID DateKickOff TimeKickOff TeamIDHome TeamIDAway ScoreHome ScoreAway
--------------- -------------------------- ------------------- ----------- ------------ ----------- ------------
2 2013-03-08 00:00:00.000 14:02:00.0000000 21 43 0 4
```
At this point I need to check whether an identical combination of `DateKickOff` and `TeamIDHome` values already exists in the database, so as to avoid importing duplicate data. If this combination already exists on a row of table `Match` then I need to effectively skip the following code:
```
INSERT INTO Match
SELECT *
FROM #temp1
--Add MatchID column to Temp Data file and fill it with the most recent match ID
ALTER TABLE #CSVTest_Data
ADD MatchID INT
UPDATE #CSVTest_Data
SET MatchID = (SELECT TOP 1 MatchID
FROM Match
ORDER BY MatchID DESC)
INSERT INTO Data (MatchID,
OddsFirstTimeTaken,
OddsLastTimeTaken,
MarketName,
Outcome,
Odds,
NumberOfBets,
VolumeMatched,
InPlay)
SELECT MatchID,
FirstTimeTaken,
LatestTimeTaken,
Market,
Outcome,
Odds,
NumberOfBets,
VolumeMatched,
InPlay
FROM #CSVTest_Data
```
Obviously, if the data is not a duplicate entry then the code above needs to be run. I would really appreciate some help with this.
EDIT: Just to clarify, the comparison of data needs to take place before the 'INSERT INTO Match' code occurs. If the data is not duplicate, SQL Server will increment the Primary Key of MatchID in the Match table. I then get this new MatchID value and write it to my second temporary table before writing to my 'Data' table. If no new entry was added to the 'Match' table, then no data must be written to the 'Data' table. | This works by carrying out a separate check for duplicate data for each table:
```
INSERT INTO Match
SELECT *
FROM #temp1
EXCEPT
SELECT CompetitionID, DateKickOff, TimeKickOff, TeamIDHome, TeamIDAway, ScoreHome, ScoreAway
FROM Match
DELETE #CSVTest_Data
FROM #CSVTest_Data d
WHERE EXISTS( SELECT * from Data d2 WHERE
d.FirstTimeTaken = d2.OddsFirstTimeTaken AND
d.LatestTimeTaken = d2.OddsLastTimeTaken AND
d.Market = d2.MarketName AND
d.Outcome = d2.Outcome AND
d.Odds = d2.Odds AND
d.NumberOfBets = d2.NumberOfBets AND
d.VolumeMatched = d2.VolumeMatched AND
d.InPlay = d2.InPlay)
--Add MatchID column to Temp Data file and fill it with the most recent match ID
ALTER TABLE #CSVTest_Data ADD MatchID INT
update #CSVTest_Data
Set MatchID = (SELECT TOP 1 MatchID FROM BetfairFootballDB..Match
ORDER BY MatchID DESC)
INSERT INTO BetfairFootballDB..Data (MatchID, OddsFirstTimeTaken, OddsLastTimeTaken, MarketName, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay)
SELECT MatchID, FirstTimeTaken, LatestTimeTaken, Market, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay
FROM #CSVTest_Data
``` | You can use the `EXCEPT` keyword:
```
WITH NewData AS (
SELECT FirstTimeTaken
, LatestTimeTaken
, Market
, Outcome
, Odds
, NumberOfBets
, VolumeMatched
, InPlay
FROM #CSVTest_Data -- Coming data
EXCEPT --Minus
SELECT FirstTimeTaken
, LatestTimeTaken
, Market
, Outcome
, Odds
, NumberOfBets
, VolumeMatched
, InPlay
FROM Data --Existing Data
)
INSERT INTO Data (OddsFirstTimeTaken, OddsLastTimeTaken, MarketName, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay)
SELECT FirstTimeTaken, LatestTimeTaken, Market, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay
FROM NewData --Insert New Data only
```
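The `EXCEPT`-before-`INSERT` pattern generalizes well. Here is a minimal Python/sqlite3 sketch of the idea (the table and column names are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (x INTEGER, y TEXT)")
con.execute("CREATE TABLE staging (x INTEGER, y TEXT)")
con.executemany("INSERT INTO target VALUES (?, ?)", [(1, "a"), (2, "b")])
con.executemany("INSERT INTO staging VALUES (?, ?)",
                [(2, "b"), (3, "c")])   # (2, 'b') is already in target

# Insert only the staged rows that are not already present in the target
con.execute("""
    INSERT INTO target
    SELECT x, y FROM staging
    EXCEPT
    SELECT x, y FROM target
""")
print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # expected 3
```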
**EDIT :**
If you have primary identity keys, you don't need to use them at all even in insert statements. SQL Server will figure out how to handle them. | Check if data exists (avoid duplication) in database before running some SQL Server code | [
"",
"sql",
"sql-server",
""
] |
I want to convert minutes to an hours:minutes format, but my solution adds the seconds as well. Can anyone help me remove the seconds?
Current Result: 02:02:00
Need Result Like this: 02:02
```
SELECT CONVERT(varchar(12), DATEADD(minute,122, 0), 108)
``` | try
```
SELECT CONVERT(varchar(5), DATEADD(minute,122, 0), 108)
``` | try this
```
SELECT SUBSTRING(CONVERT(varchar(12), DATEADD(minute,122, 0), 108),1,5)
``` | Convert minutes in hours | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table as below:
```
paper_id author_id author_name author_affiliation
1 521630 Ayman Kaheel Cairo Microsoft Innovation Lab
1 972575 Mahmoud Refaat Cairo Microsoft Innovation Lab
3 1528710 Ahmed Abdul-hamid Harvard
```
Now I am finding multiple pairs of `author_id`, `author_name` and `author_affiliation` occurring together. For example:
```
author_id author_name author_affiliation count
1 Masuo Fukui <NA> 4
4 Yasusada Yamada <NA> 8
```
I am using the following query:
```
statement<-"select author_id,author_name,author_affiliation,count(*)
from paper_author
GROUP BY author_id,author_name,author_affiliation
HAVING (COUNT(*)>1)"
```
Now I want to know how many author\_ids are present in this. I am doing this:
```
statement<-"select distinct author_id
from paper_author
where author_id in (
select author_id,author_name,author_affiliation,count(*)
from paper_author
GROUP BY author_id,author_name,author_affiliation
HAVING (COUNT(*)>1)
)"
```
I am not able to get the desired result.
Also, how can I get the number of paper ids in the above result?
Thanks. | This will do the job, I think:
```
statement<-"select distinct author_id
from paper_author
where author_id in (
select author_id
from paper_author
GROUP BY author_id,author_name,author_affiliation
HAVING (COUNT(*)>1)
)"
``` | If you just want to know how many authors have more than one paper, use this query:
```
SELECT COUNT(*)
FROM (SELECT author_id, author_affiliation, COUNT(*)
FROM paper_author
GROUP BY author_id, author_affiliation
      HAVING COUNT(*) > 1) AS dup;
```
This assumes that `author_id` is a unique identifier for `author_name`. If the id selects for the `author_name, author_affiliation` combination (i.e. an author producing papers for different institutions has multiple id's, one for each affiliation) then you can also strike `author_affiliation` from the sub-query. | sql query for finding multiple duplicate pairs in columns | [
"",
"sql",
"postgresql",
""
] |
I am having an issue with this query that I am trying to write.
**Example table**
`Table1`:
```
ID | Type
--------------------
111 | beer
111 | Wine
222 | Wine
333 | Soda
```
I am trying to query those who bought wine but didn't buy beer.
I am at
```
select ID
from table1
where type <>'beer'
and type = 'wine'
```
which does not work. Any thoughts? | Your query looks for individual records whose type is simultaneously not beer and equal to wine, which is not what you want. You should instead be looking for IDs for which certain records do or do not exist.
Usually you would have a persons table associated:
```
select *
from persons
where id in (select id from table1 where type = 'wine')
and id not in (select id from table1 where type = 'beer');
```
or with exists
```
select *
from persons
where exists (select * from table1 where id = persons.id and type = 'wine')
and not exists (select * from table1 where id = persons.id and type = 'beer');
```
Without a persons table you would simple select IDs:
```
select id
from table1 wine_buyers
where type = 'wine'
and not exists (select * from table1 where id = wine_buyers.id and type = 'beer');
```
or
```
select id
from table1
where type = 'wine'
and id not in (select id from table1 where type = 'beer');
```
Some dbms offer set methods:
```
select id from table1 where type = 'wine'
minus
select id from table1 where type = 'beer';
```
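These set operations are easy to verify with a quick script; in SQLite the set difference is spelled `EXCEPT` rather than `MINUS`. A sketch with the sample rows (using `lower()` only to sidestep the mixed-case 'Wine' in the data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (id INTEGER, type TEXT)")
con.executemany("INSERT INTO table1 VALUES (?, ?)",
                [(111, "beer"), (111, "Wine"), (222, "Wine"), (333, "Soda")])

# Wine buyers minus beer buyers
rows = con.execute("""
    SELECT id FROM table1 WHERE lower(type) = 'wine'
    EXCEPT
    SELECT id FROM table1 WHERE lower(type) = 'beer'
""").fetchall()
print(rows)  # expected [(222,)]
```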
EDIT: Just thought I should add a way where you scan the table just once. You'd kind of create a flag for wine and beer per id:
```
select id
from table1
group by id
having max(case when type = 'wine' then 1 else 0 end) = 1
and max(case when type = 'beer' then 1 else 0 end) = 0;
``` | ```
SELECT id
FROM table1
WHERE type = 'wine'
AND id NOT IN
(SELECT id
FROM table1
WHERE type = 'beer');
```
The nested query selects those who have purchased beer. The outer query selects those who have purchased wine but do not have their id in the list of those who have purchased beer. | SQL - Multiple Clause | [
"",
"sql",
""
] |
How can I connect to SQL Server from command prompt using Windows authentication?
This command
```
Sqlcmd -u username -p password
```
assumes a username & password for the SQL Server are already set up.
Alternatively how can I setup a user account from command prompt?
I've SQL Server 2008 Express on a Windows Server 2008 machine, if that relates. | You can use different syntax to achieve different things. If it is windows authentication you want, you could try this:
```
sqlcmd /S <server> /d <database> -E
```
If you want to use SQL Server authentication you could try this:
```
sqlcmd /S <server> /d <database> -U <user> -P <password>
```
Definitions:
```
/S = the servername/instance name. Example: Pete's Laptop/SQLSERV
/d = the database name. Example: Botlek1
-E = Windows authentication.
-U = SQL Server authentication/user. Example: Pete
-P = password that belongs to the user. Example: 1234
```
Hope this helps! | Try This :
--Default Instance
```
SQLCMD -S SERVERNAME -E
```
--OR
--Named Instance
```
SQLCMD -S SERVERNAME\INSTANCENAME -E
```
--OR
```
SQLCMD -S SERVERNAME\INSTANCENAME,1919 -E
```
More details can be found [here](http://msdn.microsoft.com/en-us/library/ms191518.aspx) | How to connect to SQL Server from command prompt with Windows authentication | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
"command-prompt",
""
] |
I've got a table that has some course labels such as
```
Subject | Course |
ACC | 201
ACC | 843
ACC | 843I
ACC | 850
ACC | 930
```
I'm using this SQL to get ACC 843, ACC 843I and ACC 850:
```
select Subj_Code, crse_code
from section_info
Where (crse_code between '800' and '899') and subj_code = 'ACC'
order by crse_code
```
But somehow, this misses the 843I. How can I get this check to work?
Thanks. | ```
select Subj_Code, crse_code
from section_info
Where (cast(left(crse_code,3) as int) between 800 and 899) and subj_code = 'ACC'
order by crse_code
```
Hope this works, try it out and let me know. Thanks! | Assuming that course name has a fixed format of 3 digits followed by an optional letter, you could simply compare on the first 3 characters of the column. Additionally, since the first 3 characters are always digits, you may want to convert them to numbers before comparing. Your query will then become something like this:
```
select Subj_Code, crse_code
from section_info
Where (cast(left(crse_code,3) as int) between 800 and 899) and subj_code = 'ACC'
order by crse_code
``` | String comparison in ms sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
Okay, so I'm working on a database where I have basically a list of ownerships. And sometimes I receive almost duplicate rows, differing only in the amount of the property. I'd like to merge them into one, setting the amount as the sum of the two. Example:
```
Name Surname Amount
Ann Evans 4
Ann Evans 7
```
And I'd like to have just:
```
Ann Evans 11
```
How do I merge two rows having a common tuple? | Use [`SUM()`](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) aggregate function along with [`GROUP BY`](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html) clause:
```
SELECT Name,
Surname,
SUM(Amount) AS total_amount
FROM tbl
GROUP BY
Name,
Surname;
```
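A quick way to convince yourself before touching real data is to run the aggregation in an in-memory SQLite database; a small sketch with the sample values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (Name TEXT, Surname TEXT, Amount INTEGER)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                [("Ann", "Evans", 4), ("Ann", "Evans", 7)])

# Collapse the duplicate (Name, Surname) rows into one summed row
rows = con.execute("""
    SELECT Name, Surname, SUM(Amount)
    FROM tbl
    GROUP BY Name, Surname
""").fetchall()
print(rows)  # expected [('Ann', 'Evans', 11)]
```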
UPD.
**Michał Szydłowski**: Okay, but I don't want to select, I want to run an update query, that will permanently modify the table.
The best option I see here, is to use [`INSERT INTO ... SELECT`](http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-select-into-table.html):
```
CREATE TABLE temp_tbl LIKE tbl;
INSERT INTO temp_tbl (Name, Surname, Amount)
SELECT Name,
Surname,
SUM(Amount)
FROM tbl
GROUP BY
Name,
Surname;
TRUNCATE tbl;
INSERT INTO tbl (Name, Surname, Amount)
SELECT Name,
Surname,
Amount
FROM temp_tbl;
``` | Use Aggregate function [SUM()](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_sum) with `GROUP BY` clause, group by the number of columns you want along with result
```
select Name,Surname,sum(Amount) as Amount from table group by Name,Surname
``` | SQL merging rows differing by 1 parameter | [
"",
"mysql",
"sql",
""
] |
Table 'section':

I am trying to create a query that does the following:
```
SELECT stop
FROM section
WHERE the id is the highest and the section is JPS_44-300-A-y10-1-1-1-H.
```
In the table above, the result would be 1900HA080909. How can I do this using SQL? | You can do this with the `LIMIT` clause:
```
SELECT stop
FROM section
WHERE section='JPS_44-300-A-y10-1-1-1-H'
ORDER BY ID DESC
LIMIT 1;
``` | Please try:
```
select * From YourTable a
where a.ID=(select MAX(ID) from YourTable b where b.Section=a.Section)
``` | SQL query to select highest ID only where field is equal to value | [
"",
"mysql",
"sql",
""
] |
I'm searching for an SQL query that can map a set of items of individual sizes to a set of buckets of individual sizes.
I would like to satisfy the following conditions:
* The size of a bucket has to be bigger or equal the size of an item.
* Every bucket can contain only one item or it is left empty.
* Every item can only be placed in one bucket.
* No item can be split to multiple buckets.
* I want to fill the buckets in a way that the smallest unused buckets are filled first.
* The initial item and bucket sets can be ordered by size or id, but their values are not incremental
* Sizes and ids of initial bucket and item sets can be arbitrary and do not start at a known minimum value
* The result always has to be correct when there is a valid mapping
* The result is allowed to be incorrect if there is no valid mapping (for example, if there are more items than buckets), but I would appreciate it if the result were an empty set or had some other property/signal that indicates an incorrect result.
To give you an example, let's say my bucket and items tables look like that:
```
Bucket: Item:
+---------------------+ +---------------------+
| BucketID | Size | | ItemID | Size |
+---------------------+ +---------------------+
| 1 | 2 | | 1 | 2 |
| 2 | 2 | | 2 | 2 |
| 3 | 2 | | 3 | 5 |
| 4 | 4 | | 4 | 11 |
| 5 | 4 | | 5 | 12 |
| 6 | 7 | +---------------------+
| 7 | 9 |
| 8 | 11 |
| 9 | 11 |
| 10 | 12 |
+---------------------+
```
Then, I'd like to have a mapping that is returning the following result table:
```
Result:
+---------------------+
| BucketID | ItemID |
+---------------------+
| 1 | 1 |
| 2 | 2 |
| 3 | NULL |
| 4 | NULL |
| 5 | NULL |
| 6 | 3 |
| 7 | NULL |
| 8 | 4 |
| 9 | NULL |
| 10 | 5 |
+---------------------+
```
Since there is no foreign key relation or anything else by which I could tie the columns to their corresponding bucket (only the relation Bucket.Size >= Item.Size), I'm having a lot of trouble describing the result with a valid SQL query. Whenever I use joins or subselects, I get items in buckets that are too big (like having an item of size 2 in a bucket of size 12 while a bucket of size 2 is still available), or I get the same item in multiple buckets.
I have spent some time trying to find the solution myself, and I am close to saying that it is better not to express the problem in SQL but in an application that just fetches the tables.
Do you think this task is possible in SQL? And if so, I would really appreciate if you can help me out with a working query.
Edit: The query should be compatible with at least Oracle, Postgres and SQLite databases.
Edit II: Here is an SQL Fiddle with the given test set above and an example query that returns a wrong result but is close to what the result could look like: <http://sqlfiddle.com/#!15/a6c30/1> | Using the table definition from @SoulTrain (but requiring the data to be sorted in advance):
```
; WITH ORDERED_PAIRINGS AS (
SELECT i.ITEMID, b.BUCKETID, ROW_NUMBER() OVER (ORDER BY i.SIZE, i.ITEMID, b.SIZE, b.BUCKETID) AS ORDERING, DENSE_RANK() OVER (ORDER BY b.SIZE, b.BUCKETID) AS BUCKET_ORDER, DENSE_RANK() OVER (PARTITION BY b.BUCKETID ORDER BY i.SIZE, i.ITEMID) AS ITEM_ORDER
FROM @ITEM i
JOIN @BUCKET b
ON i.SIZE <= b.SIZE
), ITEM_PLACED AS (
SELECT ITEMID, BUCKETID, ORDERING, BUCKET_ORDER, ITEM_ORDER, CAST(1 as int) AS SELECTION
FROM ORDERED_PAIRINGS
WHERE ORDERING = 1
UNION ALL
SELECT *
FROM (
SELECT op.ITEMID, op.BUCKETID, op.ORDERING, op.BUCKET_ORDER, op.ITEM_ORDER, CAST(ROW_NUMBER() OVER(ORDER BY op.BUCKET_ORDER) as int) as SELECTION
FROM ORDERED_PAIRINGS op
JOIN ITEM_PLACED ip
ON op.ITEM_ORDER = ip.ITEM_ORDER + 1
AND op.BUCKET_ORDER > ip.BUCKET_ORDER
) AS sq
WHERE SELECTION = 1
)
SELECT *
FROM ITEM_PLACED
``` | Try this...
I was able to implement this using a recursive CTE, all in a single **SQL** statement.
The only assumption I made was that the **Bucket and Item** data sets are sorted.
```
DECLARE @BUCKET TABLE
(
BUCKETID INT
, SIZE INT
)
DECLARE @ITEM TABLE
(
ITEMID INT
, SIZE INT
)
;
INSERT INTO @BUCKET
SELECT 1,2 UNION ALL
SELECT 2,2 UNION ALL
SELECT 3,2 UNION ALL
SELECT 4,4 UNION ALL
SELECT 5,4 UNION ALL
SELECT 6,7 UNION ALL
SELECT 7,9 UNION ALL
SELECT 8, 11 UNION ALL
SELECT 9, 11 UNION ALL
SELECT 10,12
INSERT INTO @ITEM
SELECT 1,2 UNION ALL
SELECT 2,2 UNION ALL
SELECT 3,5 UNION ALL
SELECT 4,11 UNION ALL
SELECT 5,12;
WITH TOTAL_BUCKETS
AS (
SELECT MAX(BUCKETID) CNT
FROM @BUCKET
) -- TO GET THE TOTAL BUCKETS COUNT TO HALT THE RECURSION
, CTE
AS (
--INVOCATION PART
SELECT BUCKETID
, (
SELECT MIN(ITEMID)
FROM @ITEM I2
WHERE I2.SIZE <= (
SELECT SIZE
FROM @BUCKET
WHERE BUCKETID = (1)
)
) ITEMID --PICKS THE FIRST ITEM ID MATCH FOR THE BUCKET SIZE
, BUCKETID + 1 NEXT_BUCKETID --INCREMENT FOR NEXT BUCKET ID
, (
SELECT ISNULL(MIN(ITEMID), 0)
FROM @ITEM I2
WHERE I2.SIZE <= (
SELECT SIZE
FROM @BUCKET
WHERE BUCKETID = (1)
)
) --PICK FIRST ITEM ID MATCH
+ (
CASE
WHEN (
SELECT ISNULL(MIN(ITEMID), 0)
FROM @ITEM I3
WHERE I3.SIZE <= (
SELECT SIZE
FROM @BUCKET
WHERE BUCKETID = (1)
)
) IS NOT NULL
THEN 1
ELSE 0
END
) NEXT_ITEMID --IF THE ITEM IS PLACED IN THE BUCKET THEN INCREMENTS THE FIRST ITEM ID
, (
SELECT SIZE
FROM @BUCKET
WHERE BUCKETID = (1 + 1)
) NEXT_BUCKET_SIZE --STATES THE NEXT BUCKET SIZE
FROM @BUCKET B
WHERE BUCKETID = 1
UNION ALL
--RECURSIVE PART
SELECT NEXT_BUCKETID BUCKETID
, (
SELECT ITEMID
FROM @ITEM I2
WHERE I2.SIZE <= NEXT_BUCKET_SIZE
AND I2.ITEMID = NEXT_ITEMID
) ITEMID -- PICKS THE ITEM ID IF IT IS PLACED IN THE BUCKET
, NEXT_BUCKETID + 1 NEXT_BUCKETID --INCREMENT FOR NEXT BUCKET ID
, NEXT_ITEMID + (
CASE
WHEN (
SELECT I3.ITEMID
FROM @ITEM I3
WHERE I3.SIZE <= NEXT_BUCKET_SIZE
AND I3.ITEMID = NEXT_ITEMID
) IS NOT NULL
THEN 1
ELSE 0
END
) NEXT_ITEMID --IF THE ITEM IS PLACED IN THE BUCKET THEN INCREMENTS THE CURRENT ITEM ID
, (
SELECT SIZE
FROM @BUCKET
WHERE BUCKETID = (NEXT_BUCKETID + 1)
) NEXT_BUCKET_SIZE --STATES THE NEXT BUCKET SIZE
FROM CTE
WHERE NEXT_BUCKETID <= (
SELECT CNT
FROM TOTAL_BUCKETS
) --HALTS THE RECURSION
)
SELECT
BUCKETID
, ITEMID
FROM CTE
``` | Mapping fields of weakly related tables in SQL | [
"",
"sql",
"database",
"sorting",
"mapping",
"bucket-sort",
""
] |
My goal is to keep SQL Server stored procedures under source control. I also want to stop using SQL Server Management Studio and use only Visual Studio for SQL related development.
I've added a new SQL Server Database project to my solution. I have successfully imported my database schema into the new project, and all the SQL objects (tables, stored procedures) are there in their own files.

I know that now if I run (with F5) the .sql files then my changes will be applied to my `(LocalDB)`. This is fine, but what if I want to very quickly run something on another machine (like a dedicated SQL Server shared by the entire team)? **How can I change the connection string of the current .sql file in the Sql Server Data Tools editor**?
I have the latest version of Sql Server Data Tools extension for Visual Studio 2012 (SQL Server Data Tools 11.1.31203.1). I don't know if this is related to the current version, but I cannot find anymore the Transact-SQL Editor Toolbar.
I have also tried to Right-click on the sql editor, choose Connection -> Disconnect. If I do the reverse (Connection -> Connect...) the editor directly connects automatically (probably to my LocalDB), without asking me a dialog to choose my connection.
Another strange thing I've observed: if I try to run a simple SQL query (like `select * from dbo.ApplicationUser`) I receive the following message (even though autocomplete works):

Thanks.
*(Note: I have the same issue with Visual Studio 2013)* | This *should* be a fairly simple and straight-forward thing to do, that is, if you are using SSDT version 12.0.41025.0 (or newer, one would suppose):
1. Do either:
1. Go to the `SQL` menu at the top of the Visual Studio window
2. Right-click inside of the SQL editor tab
2. Go to `Connection ->`
3. Select `Change Connection`
Then it will display the "Connect to Server" modal dialog window.
If you do not see the options for "Disconnect All Queries" and "Change Connection...", then you need to upgrade your SSDT via either:
* Visual Studio:
Go to the "TOOLS" menu and then "Extensions and Updates..."
* Direct download:
Go to: <http://msdn.microsoft.com/en-us/data/tools.aspx> | Inspired by srutzky's comments, I installed the latest SSDT pack (12.0.41025). And bingo, like srutzky said there is a Change Connection option. But what's more, you can specify your Target DB by right clicking on the Project in the Solution Explorer, and going to Properties->Debug and changing the Target Connection String! If you're stuck on an older SSDT, then the below instructions will still work.
---
**For SSDT 12.0.3-**
I've also been plagued by this problem! My solution is below, but it has some Pros and Cons to it...
**SOLUTION**
1. I'm assuming that you are using a SQL Server Project in VS (I'm using VS2013 and SQL Server 2012).
2. Right click on your .sql file in the Solution Explorer and view Properties.
3. Change Build Action to None.
4. If the file is open for editing, then close it.
5. Reopen the file, and the T-SQL Editor should appear at the top.
6. Click Connect, and you will connect to your (localdb).
7. Click Disconnect.
8. Click Connect again and the SQL Server Connection dialog should appear.
9. Switch the connect string of '(localdb)\Whatever' to '.' (for some reason using '(localhost)' didn't work for me).
Voila, you should now be able to query against your SQL Server DBs! Rinse and repeat for every file you want this capability with... :/
**PROS**
* You can finally run queries directly against your SQL Server DB
* Your code can be organized in a nice VS solution (SSMS doesn't allow folders! :/)
* You can (after switching Build Action setting back) Build the project
**CONS**
* I'm not seeing any autocomplete/intellisense against the remote DB, although if you import your DB, then you could gain the intellisense from that
* Requires each file to switch Build Action to None | How to change the connection in Sql Server Data Tools Editor in Visual Studio | [
"",
"sql",
"sql-server",
"visual-studio-2012",
"visual-studio-2013",
"sql-server-2012-datatools",
""
] |
Suppose I need to print
```
select field1,max(field2) from table1
```
Here, by field1 I mean the value of field1 that corresponds to the tuple containing max(field2).
How can this be done?
For example:
```
| 49 | 2000 |
| 50 | 2001 |
| 63 | 2002 |
| 79 | 2003 |
```
Here the maximum value is 79, which belongs to year 2003, but the above select statement will return
`79 2000`
How can I get the output like
`79 2003`
This is an example; I am looking for a generic method.
I know I could first use an `order by` clause and then `limit 1`, or I could write a subquery like `having abc >= ALL (some subquery)`. Isn't there any other method that is less expensive? | I guess you are trying to find the maximum value of a certain column along with its corresponding field value. There are different ways of doing this. Try the query below:
```
SELECT article, dealer, price
FROM shop
WHERE price=(SELECT MAX(price) FROM shop);
```
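The same pattern carries over directly to the example data from the question; a small Python/sqlite3 sketch (the table name `t` is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (field1 INTEGER, field2 INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(49, 2000), (50, 2001), (63, 2002), (79, 2003)])

# The row(s) whose field1 equals the overall maximum of field1
rows = con.execute("""
    SELECT field1, field2
    FROM t
    WHERE field1 = (SELECT MAX(field1) FROM t)
""").fetchall()
print(rows)  # expected [(79, 2003)]
```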
You can use the above query for reference and adjust your query accordingly.
Hope this helps. | Try this:
```
;With Cte as
(
select max(F1) mf from F
)
select f1,f2 from CTE
inner join F on mf=f1
``` | How to select a field corresponding to the tuple in which max(anotehrField) exists | [
"",
"mysql",
"sql",
""
] |
This is my test table in SQL Server.
The table structure is this:
Table A
```
status id Col1 Col2 Col3 Date
A 111 Name1 Name4 Name6 2/18/14 5:54
B 111 Name1 Name5 Name6 3/18/14 5:54
B 111 Name2 Name6 Name6 5/12/14 5:54
B 111 Name2 Name6 Name7 10/11/13 3:34
```
required result:
```
status id Col1 Col2 Col3 Date
A 111 Name1 Name4 Name6 2/18/14 5:54
B 111 Name1 Name5 Name6 3/18/14 5:54
```
The answer has to be a generic solution, as I don't know my client's RDBMS.
I did this in SQL Server:
```
select *
from (select *,
row_number() over(partition by status order by Date desc) as rn
from table
     ) t
where rn=1
```
Any help is much appreciated! | ```
select t1.*
from table t1
join (select status, MAX(Date) as maxDate
from table
group by status) m
on m.status = t1.status and m.maxDate = t1.Date
``` | Assuming that you have only one maximum date per status, the following is probably the most efficient method of doing what you want:
```
select t.*
from table t
where not exists (select 1 from table t2 where t2.status = t.status and t2."date" > t."date");
```
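The `NOT EXISTS` pattern is portable across databases. A quick SQLite check using ISO-formatted date strings (the date column is renamed `d` here purely to avoid quoting):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (status TEXT, id INTEGER, d TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [("A", 111, "2014-02-18"), ("B", 111, "2014-03-18"),
                 ("B", 111, "2014-05-12"), ("B", 111, "2013-10-11")])

# One row per status: the one for which no later date exists
rows = con.execute("""
    SELECT status, d
    FROM t
    WHERE NOT EXISTS (SELECT 1 FROM t t2
                      WHERE t2.status = t.status AND t2.d > t.d)
    ORDER BY status
""").fetchall()
print(rows)  # expected [('A', '2014-02-18'), ('B', '2014-05-12')]
```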
For best performance, build an index on `table(status, "date")`.
This is saying: "Get me all rows from the table where the `date` has no larger value for that `status`". That is a fancy way of saying, "get me the row for each `status` that has the maximum `date`." | Top 2 distinct in sql | [
"",
"sql",
""
] |
I have two tables:
`ci_categories`
with fields `cat_id`, `cat_name`, `cat_slug` etc.
`ci_albums`
with fields `cat_id`, `album_id`, `album_name` etc.
I want to join these two tables, display category listing, and COUNT albums in each category. Like:
* Category 1 - 20 Albums
* Category 2 - 25 Albums
How is this done properly in PostgreSQL? | The following query should get you what you want:
```
SELECT c.cat_id, COUNT(a.album_id)
FROM ci_categories AS c
LEFT JOIN ci_albums AS a ON c.cat_id = a.cat_id
GROUP BY c.cat_id
```
We use a `LEFT JOIN` here due to the possibility of categories which contain no albums. If you don't care if those results show up, just change this to an `INNER JOIN`.
We use the aggregate `COUNT(a.album_id)` function to count the number of records. By grouping by `c.cat_id`, we make sure this count is done only over records of the same type. If there is no album for that category, `a.album_id` will be NULL, and so `COUNT` will not include it in the total count. | If you are satisfied with `cat_id` and the count of albums in the result, you don't need the table `ci_categories` in your query at all. Use this simpler and faster query:
```
SELECT cat_id, count(*) AS albums
FROM ci_albums
GROUP BY 1;
```
Assuming proper table definitions without duplicate entries.
This does not include categories with no albums at all. Only if you need those too, or if you need additional information from the table `ci_categories` itself, do you need to join to that table. But since we are counting *everything*, it is faster to *aggregate first* and join later:
```
SELECT c.cat_id, c.cat_name, COALESCE(a.albums, 0) AS albums
FROM ci_categories c
LEFT JOIN (
SELECT cat_id, count(*) AS albums
FROM ci_albums
GROUP BY 1
) a USING (cat_id);
```
[`LEFT JOIN`](http://www.postgresql.org/docs/current/interactive/queries-table-expressions.html#QUERIES-FROM) if you want categories with 0 albums, too. Else, just `JOIN`.
[`COALESCE`](http://www.postgresql.org/docs/current/interactive/functions-conditional.html#FUNCTIONS-COALESCE-NVL-IFNULL) is only needed if you need `0` instead of `NULL` in the result.
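A quick SQLite check of the aggregate-then-join shape, run from Python (category and album rows invented, including an empty category):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ci_categories (cat_id INT, cat_name TEXT);
    CREATE TABLE ci_albums (cat_id INT, album_id INT);
    INSERT INTO ci_categories VALUES (1, 'Rock'), (2, 'Jazz'), (3, 'Empty');
    INSERT INTO ci_albums VALUES (1, 10), (1, 11), (2, 12);
""")

# Aggregate first, then LEFT JOIN; COALESCE turns NULL into 0 for empty categories.
result = con.execute("""
    SELECT c.cat_id, c.cat_name, COALESCE(a.albums, 0) AS albums
    FROM ci_categories c
    LEFT JOIN (SELECT cat_id, COUNT(*) AS albums
               FROM ci_albums GROUP BY cat_id) a USING (cat_id)
    ORDER BY c.cat_id
""").fetchall()
print(result)  # [(1, 'Rock', 2), (2, 'Jazz', 1), (3, 'Empty', 0)]
```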
Test performance with [`EXPLAIN ANALYZE`](http://www.postgresql.org/docs/current/interactive/sql-explain.html) if you care. | Count records in each category | [
"",
"sql",
"postgresql",
"count",
"aggregate-functions",
""
] |
I am trying to import a CSV file like the one in this image.
What the image means: when I import this file, line 1 is read and saved into the table SeniorHighSchool,
and it will get:
1. name : Alexander
2. age : 15
3. muchcourse : 3
After that I want to add a condition: when "muchcourse" is filled, it should read the next rows.
For example:
in this case "muchcourse" is "3", so (look at the image) rows 2, 3 and 4 (3 lines) will be inserted into the other table, because "muchcourse" is "3".

This is the code I have tried:
```
def upload = {
withForm{
def f = request.getFile('filecsv')
def orifilename = f.getOriginalFilename()
def homeDir = new File(System.getProperty("user.home"))
def homeurl = "Documents/Uploads/"
File fileDest = new File(homeDir,homeurl+orifilename)
f.transferTo(fileDest)
        fileDest.splitEachLine(',') {fields ->
def student= new SeniorHighSchool(
name: fields[0].trim(),
age: fields[1].trim(),
muchcourse: fields[2].trim()
)
if (student.hasErrors() || student.save(flush: true) == null)
{
log.error("Could not import domainObject ${student.errors}")
}
}
redirect(action:"list")
}
}
```
I am confused about how to make the condition:
```
def upload = {
withForm{
def f = request.getFile('filecsv')
def orifilename = f.getOriginalFilename()
def homeDir = new File(System.getProperty("user.home"))
def homeurl = "Documents/Uploads/"
File fileDest = new File(homeDir,homeurl+orifilename)
f.transferTo(fileDest)
        fileDest.splitEachLine(',') {fields ->
def student= new SeniorHighSchool(
name: fields[0].trim(),
age: fields[1].trim(),
muchcourse: fields[2].trim()
)
if (student.hasErrors() || student.save(flush: true) == null)
{
log.error("Could not import domainObject ${student.errors}")
}
if(fields[2]) {
def score = new Score(
                    course: //the problem is at this line - how?
                    //it should insert 3 times, then go back to row 5 to insert into "Student" again
)
}
}
redirect(action:"list")
}
}
``` | ```
if(fields.size()>2){
store 3 values in one table(student)
}
else{
store 2 values in another table(score)
}
```
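The same branching can be sketched in Python; the CSV content and field names here are invented for illustration:

```python
import csv
import io

# Hypothetical CSV mixing 3-field student rows with 2-field course/score rows.
data = "Alexander,15,3\nMath,80\nPhysics,75\nBiology,90\nBrenda,16,1\nArt,95\n"

students, scores = [], []
for fields in csv.reader(io.StringIO(data)):
    if len(fields) > 2:                      # student row -> SeniorHighSchool
        students.append({"name": fields[0], "age": fields[1],
                         "muchcourse": fields[2]})
    else:                                    # course row -> Score table
        scores.append({"course": fields[0], "score": fields[1]})

print(len(students), len(scores))  # 2 4
```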
If the muchcourse field is present, the fields size is 3, so save the three values in one table; otherwise the size is 2, so save the two values in the other table. I think this will solve your problem. | @th3morg like this?
```
request.getFile(new File(fileDest)).InputStream.splitEachLine(',') {fields ->
if(fields.size()>2){
def student= new SeniorHighSchool(
name: fields[0].trim(),
age: fields[1].trim(),
muchcourse: fields[2].trim()
)
}
else{
def score = new Score(
course:fields[0].trim(),
score:fields[1].trim()
)
}
}
``` | get next Row when importing file CSV with grails | [
"",
"mysql",
"sql",
"grails",
"csv",
"groovy",
""
] |
I have a table test1 with one column DOJ of timestamp datatype, containing a few records as shown below.
I need the difference in milliseconds between DOJ and SYSTIMESTAMP.
```
SELECT DOJ FROM TEST1;
DOJ
----------------------------
21-MAR-14 09.25.34.514526000
21-MAR-14 09.25.34.520345000
22-MAR-14 09.25.34.523144000
22-MAR-14 09.25.34.527770000
23-MAR-14 09.25.34.532482000
23-MAR-14 09.25.34.535603000
24-MAR-14 09.25.34.538556000
24-MAR-14 09.25.34.541729000
SELECT SYSTIMESTAMP FROM DUAL;
SYSTIMESTAMP
--------------
24-MAR-14 09.48.10.251556000 +00:00
```
Can someone please help me with this? | ```
select
extract(second from systimestamp - doj) * 1000
from
test1;
``` | Expanding René's answer a bit, if you want the total milliseconds then you need to extract and combine all of the elements from the interval that's produced by subtracting one timestamp from another:
```
select doj, systimestamp - doj,
trunc(1000 * (extract(second from systimestamp - doj)
+ 60 * (extract(minute from systimestamp - doj)
+ 60 * (extract(hour from systimestamp - doj)
+ 24 * (extract(day from systimestamp - doj) ))))) as milliseconds
from test1;
DOJ SYSTIMESTAMP-DOJ MILLISECONDS
---------------------------- ---------------- ----------------
21-MAR-14 09.25.34.514526000 3 2:9:8.785713 266948785
21-MAR-14 09.25.34.520345000 3 2:9:8.779894 266948779
22-MAR-14 09.25.34.523144000 2 2:9:8.777095 180548777
22-MAR-14 09.25.34.527770000 2 2:9:8.772469 180548772
23-MAR-14 09.25.34.532482000 1 2:9:8.767757 94148767
23-MAR-14 09.25.34.535603000 1 2:9:8.764636 94148764
24-MAR-14 09.25.34.538556000 0 2:9:8.761683 7748761
24-MAR-14 09.25.34.541729000 0 2:9:8.75851 7748758
```
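The same day/hour/minute/second roll-up can be cross-checked in Python (timestamps invented, fractional seconds truncated to whole milliseconds):

```python
from datetime import datetime

doj = datetime(2014, 3, 21, 9, 25, 34, 514526)
now = datetime(2014, 3, 24, 9, 48, 10, 251556)

delta = now - doj
# Mirrors 1000 * (seconds + 60*(minutes + 60*(hours + 24*days))), truncated.
ms = delta.days * 86_400_000 + delta.seconds * 1000 + delta.microseconds // 1000
print(ms)  # 260555737
```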
[SQL Fiddle](http://sqlfiddle.com/#!4/615cf/1), including the Unix epoch date for comparison, though you'd need to adjust that for your server time zone. | Difference Between Timestamps in Milliseconds in Oracle | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
"oracle10g",
""
] |
I wish to convert a single Oracle datetime value to minutes.
As an example, I need to convert the current date/time alone into minutes, i.e.:
```
select (sysdate)*24*60 from dual
```
but this is giving me an error.
I basically need to perform a check to see that a certain operation cannot be performed until 30 minutes before a particular date/start time, that exists in the database.
So if the Start Time in the DB is:
`24/04/2014 22:00:00 and the current date/time (SYSDATE) is 24/04/2014 21:29:59`,
then operation CANNOT be performed but if the current date/time (SYSDATE) is:
`24/04/2014 21:30:00`,
then operation CAN be performed. | You probably want something like
```
startTime - interval '30' minute >= sysdate
```
or
```
startTime >= sysdate + interval '30' minute
```
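The 30-minute rule itself is easy to mirror in application code; here is a Python sketch using the timestamps from the question:

```python
from datetime import datetime, timedelta

start_time = datetime(2014, 4, 24, 22, 0, 0)  # start time from the question

def can_perform(now):
    # Allowed only from 30 minutes before the start time onwards.
    return now >= start_time - timedelta(minutes=30)

print(can_perform(datetime(2014, 4, 24, 21, 29, 59)))  # False
print(can_perform(datetime(2014, 4, 24, 21, 30, 0)))   # True
```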
You could also subtract the two `date` values which gives you the number of days between them and multiply
```
(startTime - sysdate)*24*60 >= 30
```
but I generally find the interval notation clearer and easier to read. It's also easier to structure in a way that allows you to use indexes on columns like `startTime`. | `select (sysdate - trunc(sysdate)) * 24 * 60 from dual` | How to convert a single Oracle datetime into minutes? | [
"",
"sql",
"date",
"plsql",
"oracle11g",
""
] |
Okay, this is a little hard to explain, but I'll give it my best shot.
I've got two tables, we'll call them table1 and table2. table1 looks something like this:
```
ID | CampaignID | Package | GroupID
1 | 1 | 1 | 1
2 | 1 | 1 | 2
3 | 1 | 2 | 2
4 | 2 | 1 | 3
5 | 2 | 2 | 3
6 | 2 | 3 | 3
```
etc
Table2 looks something like this:
```
ID | ClientID | ClientName | Package | OrderID
1 | 1111 | John Smith | 1 | 155
2 | 1111 | John Smith | 2 | 155
4 | 2222 | Dave Jones | 1 | 177
5 | 2222 | Dave Jones | 2 | 178
6 | 2222 | Dave Jones | 3 | 179
```
What I'm trying to do is see if, for example, John Smith has any orders with sets of packages that match one of the campaign groups in table1. For the above example, John Smith's order 155 would match CampaignID 1, GroupID 2. Dave Jones's order 177 matches CampaignID 1, GroupID 1. However, orders 178 and 179 don't match anything. So each set of packages in an order needs to contain all the packages for a given group in order to match it.
For the purposes of the select statement I have the client's id along with the orderID, and I'm simply trying to see if the packages in his order match the criteria for any campaigns.
I know I probably haven't explained this too well, so let me know what needs clarifying.
EDIT:
If, let's say, we search for orderID 155, clientID 1111, then the desired result would be:
```
CampaignID | GroupID
1 | 2
```
Perhaps given that GroupID 1 also qualifies, it could return the groupID that qualifies with the largest number of packages. | I think I figured it out:
```
SELECT top 1 pcr1.GroupID FROM table1 pcr1
Where Not Exists (Select CampaignID from table1 as pcr2
where CampaignID = @campaignID and pcr2.GroupID= pcr1.GroupID Except
Select t2.Package from table2 as t2 where t2.OrderID = @ordID)
GROUP BY pcr1.GroupID
ORDER BY COUNT(pcr1.GroupID) DESC
```
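In plain set terms, the matching rule can be sketched in Python with the sample data from the question: a group qualifies when every one of its packages appears in the order, and the largest qualifying group wins.

```python
# Group -> required packages, order -> packages, taken from the question's tables.
groups = {(1, 1): {1}, (1, 2): {1, 2}, (2, 3): {1, 2, 3}}   # key: (CampaignID, GroupID)
orders = {155: {1, 2}, 177: {1}, 178: {2}, 179: {3}}

def best_match(order_id):
    # A group qualifies when all of its packages are present in the order.
    candidates = [g for g, pkgs in groups.items() if pkgs <= orders[order_id]]
    # Prefer the qualifying group with the most packages.
    return max(candidates, key=lambda g: len(groups[g]), default=None)

print(best_match(155))  # (1, 2)
print(best_match(178))  # None
```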
Given a known order and a campaign the client is trying to apply for, this gives me the group id that best matches the given order (in regard to packages), if any exists. And just because people seem to be asking: the way the tables appear here isn't exactly the way they're implemented; it's just a simplified equivalent for the purposes of this question. | Is this what you want?
```
select table1.CampaignID from Table2
left join table1 on table1.Package =table2.Package
where Table2.ClientID =@ClientID
and Table2.OrderID =@OrderID
``` | SQL Match exact set of values between two tables | [
"",
"sql",
""
] |
Here is an interesting challenge.
I have got one stored procedure with a select statement like this:
> select * from employee

What I want is that a user can call this stored procedure with or without an employee type parameter.
Let's say, for example, employee type allows only the values 'Internal' and 'External'. It is an optional parameter.
We want to call this SP, and the SP should execute only one select statement,
so I want:
```
select * from employee
or
select * from employee where employeetype in ('Internal')
```
I want to combine this into one.
I don't want to use If Statement.
If the EmployeeType is not provided, it should return the entire employee list.
Please note that I can have more than one employee type in the variable, so it has to be like
`@employeeType = 'Internal,External,Officer'`
```
select * from Employee
where
EmployeeType in (@employeeType)
-- run this when EmployeeType is null
or EmployeeType = EmployeeType
``` | You can use a parameter to attain the result. Add an input parameter which accepts the values 'Internal', 'External', or empty. If it is empty, the full data should be returned. For this you can use a case statement. See the sample below:
```
select *
from
employee
where
employeetype =case when isnull(@var, '')='' then employeetype else @var end
```
Note: variable `@var` accepts the values 'Internal' or 'External'. To get the full data, assign empty.
If variable `@var` comes as NULL, then the query below can be used.
```
select *
from
employee
where
employeetype =isnull(@var, employeetype)
```
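The optional-filter trick can be exercised against SQLite from Python (table name and rows invented); `COALESCE(?, employeetype)` plays the role of `isnull(@var, employeetype)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, employeetype TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [("Ann", "Internal"), ("Bob", "External"), ("Cat", "Officer")])

def fetch(etype):
    # A NULL parameter disables the filter: COALESCE(NULL, col) = col matches all rows.
    return con.execute(
        "SELECT name FROM employee "
        "WHERE employeetype = COALESCE(?, employeetype)", (etype,)).fetchall()

print(fetch("Internal"))  # [('Ann',)]
print(len(fetch(None)))   # 3
```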
For comma-separated values, try:
```
select *
from
employee
where
    employeetype in (select col from fnSplit(case when isnull(@var,'')=''
then employeetype else @var end, ','))
``` | Let's say you have an SP taking one parameter, @EType.
Then send your parameter to the SP either with a value or NULL, and modify your SP like this:
```
Select * From Employees
Where (EmployeeType = @eType OR @eType is NULL)
``` | Merge two select statements into one; I could not figure out how to do... looks very easy though | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I would like to optimize (read: make feasible at all) an SQL query.
The following PostgreSQL query retrieves the records I need. I (believe that I) could confirm that by running the query on a small subset of the actual DB.
```
SELECT B.*, A1.foo, A1.bar, A2.foo, A2.bar FROM B LEFT JOIN A as A1 on B.n1_id = A1.n_id LEFT JOIN A as A2 on B.n2_id = A2.n_id WHERE B.l_id IN (
SELECT l_id FROM C
WHERE l_id IN (
SELECT l_id FROM B
WHERE n1_id IN (SELECT n_id FROM A WHERE foo BETWEEN foo_min AND foo_max AND bar BETWEEN bar_min AND bar_max)
UNION
SELECT l_id FROM B
WHERE n2_id IN (SELECT n_id FROM A WHERE foo BETWEEN foo_min AND foo_max AND bar BETWEEN bar_min AND bar_max)
)
AND (property1 = 'Y' OR property2 = 'Y')
)
```
The relevant part of the DB looks as follows:
```
table A:
n_id (PK);
foo, int (indexed);
bar, int (indexed);
table B:
l_id (PK);
n1_id (FK, indexed);
n2_id (FK, (indexed);
table C:
l_id (PK, FK);
property1, char (indexed);
property2, char (indexed);
```
`EXPLAIN` tells me this:
```
"Merge Join (cost=6590667.27..10067376.97 rows=453419 width=136)"
" Merge Cond: (A2.n_id = B.n2_id)"
" -> Index Scan using pk_A on A A2 (cost=0.57..3220265.29 rows=99883648 width=38)"
" -> Sort (cost=6590613.72..6591747.27 rows=453419 width=98)"
" Sort Key: B.n2_id"
" -> Merge Join (cost=3071304.25..6548013.91 rows=453419 width=98)"
" Merge Cond: (A1.n_id = B.n1_id)"
" -> Index Scan using pk_A on A A1 (cost=0.57..3220265.29 rows=99883648 width=38)"
" -> Sort (cost=3071250.74..3072384.28 rows=453419 width=60)"
" Sort Key: B.n1_id"
" -> Hash Semi Join (cost=32475.31..3028650.92 rows=453419 width=60)"
" Hash Cond: (B.l_id = C.l_id)"
" -> Seq Scan on B B (cost=0.00..2575104.04 rows=122360504 width=60)"
" -> Hash (cost=26807.58..26807.58 rows=453419 width=16)"
" -> Nested Loop (cost=10617.22..26807.58 rows=453419 width=16)"
" -> HashAggregate (cost=10616.65..10635.46 rows=1881 width=8)"
" -> Append (cost=4081.76..10611.95 rows=1881 width=8)"
" -> Nested Loop (cost=4081.76..5383.92 rows=1078 width=8)"
" -> Bitmap Heap Scan on A (cost=4081.19..4304.85 rows=56 width=8)"
" Recheck Cond: ((bar >= bar_min) AND (bar <= bar_max) AND (foo >= foo_min) AND (foo <= foo_max))"
" -> BitmapAnd (cost=4081.19..4081.19 rows=56 width=0)"
" -> Bitmap Index Scan on A_bar_idx (cost=0.00..740.99 rows=35242 width=0)"
" Index Cond: ((bar >= bar_min) AND (bar <= bar_max))"
" -> Bitmap Index Scan on A_foo_idx (cost=0.00..3339.93 rows=159136 width=0)"
" Index Cond: ((foo >= foo_min) AND (foo <= foo_max))"
" -> Index Scan using nx_B_n1 on B (cost=0.57..19.08 rows=19 width=16)"
" Index Cond: (n1_id = A.n_id)"
" -> Nested Loop (cost=4081.76..5209.22 rows=803 width=8)"
" -> Bitmap Heap Scan on A A_1 (cost=4081.19..4304.85 rows=56 width=8)"
" Recheck Cond: ((bar >= bar_min) AND (bar <= bar_max) AND (foo >= foo_min) AND (foo <= foo_max))"
" -> BitmapAnd (cost=4081.19..4081.19 rows=56 width=0)"
" -> Bitmap Index Scan on A_bar_idx (cost=0.00..740.99 rows=35242 width=0)"
" Index Cond: ((bar >= bar_min) AND (bar <= bar_max))"
" -> Bitmap Index Scan on A_foo_idx (cost=0.00..3339.93 rows=159136 width=0)"
" Index Cond: ((foo >= foo_min) AND (foo <= foo_max))"
" -> Index Scan using nx_B_n2 on B B_1 (cost=0.57..16.01 rows=14 width=16)"
" Index Cond: (n2_id = A_1.n_id)"
" -> Index Scan using pk_C on C (cost=0.57..8.58 rows=1 width=8)"
" Index Cond: (l_id = B.l_id)"
" Filter: ((property1 = 'Y'::bpchar) OR (property2 = 'Y'::bpchar))"
```
All three tables have millions of rows. I cannot change the table definitions.
The `WHERE l_id IN ( SELECT l_id FROM B...UNION...)` is very restrictive and returns < 100 results.
What can I do to make the query execute in a reasonable amount of time (a few seconds max)?
**EDIT**: I forgot to select two columns in the outermost `SELECT`. That should not change this question, though.
**UPDATE**
This seems to be a difficult question, possibly due to lack of information on my part. I would love to give more information; however, the database is proprietary and confidential.
I can retrieve the rows of B with all conditions reasonably fast (0.1 s) with the following query:
```
WITH relevant_a AS (
SELECT * FROM A
WHERE
foo BETWEEN foo_min AND foo_max
AND
bar BETWEEN bar_min AND bar_max
),
relevant_c AS (
SELECT * FROM C
WHERE l_id IN (
SELECT l_id FROM B
WHERE n1_id IN (
SELECT n_id FROM relevant_a
)
UNION
SELECT l_id FROM B
WHERE n2_id IN (
SELECT n_id FROM relevant_a
)
)
AND
(property1 = 'Y' OR property2= 'Y')
),
relevant_b AS (
SELECT * FROM B WHERE l_id IN (
SELECT l_id FROM relevant_c
)
)
SELECT * FROM relevant_b
```
The join with A is the part where it becomes slow. The query returns < 100 records, so why does the join with A make it so slow? Do you have any ideas how to make this simple join faster? It should not be that costly to simply add four columns of information from another table. | I could make it work with the following query:
```
WITH relevant_a AS (
SELECT * FROM A
WHERE
foo BETWEEN foo_min AND foo_max
AND
bar BETWEEN bar_min AND bar_max
),
relevant_c AS (
SELECT * FROM C
WHERE l_id IN (
SELECT l_id FROM B
WHERE n1_id IN (
SELECT n_id FROM relevant_a
)
UNION
SELECT l_id FROM B
WHERE n2_id IN (
SELECT n_id FROM relevant_a
)
)
AND
(property1 = 'Y' OR property2= 'Y')
),
relevant_b AS (
SELECT * FROM B WHERE l_id IN (
SELECT l_id FROM relevant_c
)
),
a1_data AS (
SELECT A.n_id, A.foo, A.bar
FROM A
WHERE A.n_id IN (
SELECT n1_id FROM relevant_b
)
),
a2_data AS (
SELECT A.n_id, A.foo, A.bar
FROM A
WHERE A.n_id IN (
SELECT n2_id FROM relevant_b
)
)
SELECT relevant_b.*, a1_data.foo, a1_data.bar, a2_data.foo, a2_data.bar
FROM relevant_b
LEFT JOIN a1_data ON relevant_b.n1_id = a1_data.n_id
LEFT JOIN a2_data ON relevant_b.n2_id = a2_data.n_id
```
I do not like this solution as it seems forced and redundant. However, it gets the job done in < 0.1 s.
I still refuse to believe that an SQL noob like me can (quite easily in retrospect) come up with a statement that forces the optimizer to use a strategy much better than the one it comes up with on its own. There has to be a better solution but I will not be looking for one anymore.
Regardless, thank you all for your suggestions; I definitely learned a few things along the way. | Or something like this:
```
SELECT B.*, A1.foo, A2.bar
FROM B
LEFT JOIN A as A1 on B.n1_id = A1.n_id
LEFT JOIN A as A2 on B.n2_id = A2.n_id
INNER JOIN C on (C.l_id = B.l_id)
where
A1.foo between A1.foo_min AND A1.foo_max AND
A2.bar BETWEEN A2.bar_min AND A2.bar_max and
b.foo between b.foo_min AND b.foo_max AND
b.bar BETWEEN b.bar_min AND bar_max AND
(C.property1 = 'Y' OR C.property2 = 'Y')
``` | How to make this SQL query reasonably fast | [
"",
"sql",
"postgresql",
"join",
"left-join",
"query-performance",
""
] |
I have the following table content:
```
> id text time
> 1 Hi 2
> 2 Hello 1
> 1 Mr. SP 3
> 3 KK 1
> 2 TT 2
> 1 Sample 1
> 1 App 4
```
and I need to select distinct values for id with text; hence the output should be:
> ```
> id text time
> 1 App 4
> 2 TT 2
> 3 KK 1
> ```
Can anyone help me out with this query.
I'm using SQLite.
Thanks | ```
select b.*
from
(
select id, max(time) as time
from yourtable
group by id) a inner join yourtable b on a.time = b.time and a.id = b.id
```
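Here is the same join-on-max query run against the question's rows in SQLite via Python (the `text` column is renamed to `txt` here to avoid any keyword confusion):

```python
import sqlite3

# Rows from the question.
rows = [(1, "Hi", 2), (2, "Hello", 1), (1, "Mr. SP", 3), (3, "KK", 1),
        (2, "TT", 2), (1, "Sample", 1), (1, "App", 4)]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE msg (id INT, txt TEXT, time INT)")
con.executemany("INSERT INTO msg VALUES (?, ?, ?)", rows)

# Derived table of (id, max time), joined back to pick the matching text.
latest = con.execute("""
    SELECT b.id, b.txt, b.time
    FROM (SELECT id, MAX(time) AS time FROM msg GROUP BY id) a
    JOIN msg b ON a.id = b.id AND a.time = b.time
    ORDER BY b.id
""").fetchall()
print(latest)  # [(1, 'App', 4), (2, 'TT', 2), (3, 'KK', 1)]
```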
working demo: <http://sqlfiddle.com/#!5/fbb87/5> | You seem to want the value with the maximum time. The following is often an efficient way to get this:
```
select t.*
from table t
where not exists (select 1 from table t2 where t2.id = t.id and t2.time > t.time);
```
What this is saying is: "Get me all rows from the table where the `id` does not have a row with a larger `time`". With an index on `table(id, time)`, this is probably the most efficient way to express this query. | Select Top 1 row for distinct id sqlite | [
"",
"mysql",
"ios",
"sql",
"database",
"sqlite",
""
] |
When I want to **Save** currently editing article i have this error:
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'AND `sha1_hash` = '2767de6c4360cd17f82bc9fe15203dbd6337c785' LIMIT 0, 1' at line 3 SQL=SELECT * FROM `pdg_ucm_history` WHERE `ucm_item_id` = 103 AND `ucm_type_id` = AND `sha1_hash` = '2767de6c4360cd17f82bc9fe15203dbd6337c785' LIMIT 0, 1 "
```
Any ideas how to solve? | 1. Check your MySQL version and ensure it corresponds to the Joomla 3.x requirements.
2. Try repairing the database tables as mentioned in the error you received.
3. In the Joomla backend, try going to **Extensions** >> **Extension Manager** >> **Database** and check everything is up to date. If it says it is not up to date, click the Fix button.
4. It could be a possible bug with Joomla's article versioning, which I have already asked about and am awaiting a reply. | I just went through this myself.
The value for `ucm_type_id` is not being added to the SQL, hence the broken syntax. Joomla grabs this data from the `content_types` table, so check that and verify that you have an entry for the type of content you're attempting to save.
For some reason I was missing a bunch of the basics, like "Article". | Editing article in Joomla 3.2 SQL error | [
"",
"mysql",
"sql",
"joomla",
""
] |
I have an SQL database that is reported as being 40Mb in size when I look in cPanel. If I try and export the database using PHPMyAdmin, the export file is only 900Kb in size.
The strange thing is, if I import this file into a clean database then everything does appear to be there.
Does anyone have any ideas what could be causing this size issue? Maybe it is just being incorrectly reported? | The two answers prior to mine are pretty good, but I have a bit to add. Depending on how cPanel is computing the space, there's a very simple explanation. Basically, certain table types don't reclaim disk space after you delete records. If, at one time, you stored 40Mb worth of data, then dropped a lot of it, that's a very reasonable cause for you to see it reported that the database still takes up 40Mb of space on disk. | That's because the database and the export are two completely different things.
The export contains only raw data and table structure, while the database itself also contains extra data that is used for indexes. Apart from that, the format in which the data is stored also doesn't have to be the same. An export is probably optimized for either size or recoverability (or both), while the data in the production database is optimized for speed. | SQL database export different size | [
"",
"mysql",
"sql",
"phpmyadmin",
"cpanel",
""
] |
I have the following relation in sql server database:

Assume I want to do some modifications in the relation. I want to remove the CPU\_TYPE table and add one column 'cpu' in the computer table.
Now I want to use the 'with' clause in sql server to query the data joined from the two tables and do the update statement to update the computer table.
This SQL Server query:
```
SELECT COMPUTER.compID, COMPUTER.serialNo, COMPUTER.cpuId,
CPU_TYPE.name, CPU_TYPE.cpuSpeed, COMPUTER.cpu
FROM COMPUTER LEFT OUTER JOIN
CPU_TYPE ON COMPUTER.cpuId = CPU_TYPE.cpuId
```
And the results as follows:

Now I will create a dataset using the 'with' clause as follows:
```
with ds as
(
SELECT COMPUTER.compID, COMPUTER.serialNo, COMPUTER.cpuId,
CPU_TYPE.name, CPU_TYPE.cpuSpeed, COMPUTER.cpu
FROM COMPUTER LEFT OUTER JOIN
CPU_TYPE ON COMPUTER.cpuId = CPU_TYPE.cpuId
)
update ds set cpu = name +', '+ cpuSpeed;
```
And the results are as follows:

The query is executed successfully but the problem is that the COMPUTER table has not changed!
How to solve this problem? | Your code is fine. Your data is off. `CPUSpeed` is `NULL` so the result of the concatenation will always be `NULL`. So, use `coalesce()`:
```
with ds as (
SELECT COMPUTER.compID, COMPUTER.serialNo, COMPUTER.cpuId,
CPU_TYPE.name, CPU_TYPE.cpuSpeed, COMPUTER.cpu
FROM COMPUTER LEFT OUTER JOIN
CPU_TYPE
ON COMPUTER.cpuId = CPU_TYPE.cpuId
)
update ds
set cpu = name +', '+ coalesce(cpuSpeed, '');
``` | As I am sure many will note here in their answers, you do not *have* to use a Common Table Expression to accomplish this UPDATE.
That said, however, you could (in case you are looking to practice your CTE skills):
```
WITH cte AS
(
SELECT c1.compID, c2.name, c2.cpuSpeed
FROM [COMPUTER] AS c1
LEFT OUTER JOIN [CPU_TYPE] AS c2
    ON ( c1.cpuId = c2.cpuId )
)
UPDATE [COMPUTER]
SET cpu = cte.name + ', ' + cte.cpuSpeed
FROM [COMPUTER] AS comp
INNER JOIN cte ON ( comp.compId = cte.compId )
```
If you wanted to filter your CTEs result then you could add a WHERE clause to your CTE like:
```
WHERE c1.cpuId IS NOT NULL
```
Or anything else for that matter.
**Update**
If you do, in fact, have NULL data in your `cpuSpeed` column, then you may want to consider the following `SET` statement instead of the `SET` listed above:
```
SET cpu = cte.name + Coalesce(', ' + cte.cpuSpeed, '')
```
...or alternatively:
```
SET cpu = cte.name + IsNull(', ' + cte.cpuSpeed, '')
```
This way if `cpuSpeed` is NULL, then you still have the data from the `name` column without the comma (",") appended at the end; if `name` is also NULL (or NULL instead is `cpuSpeed`), then `cpu` will be set to NULL.
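The NULL behaviour that `Coalesce`/`IsNull` guards against can be seen directly in SQLite (run from Python here); `||` propagates NULL just like T-SQL's `+`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
bad, fixed = con.execute("""
    SELECT 'i7' || ', ' || NULL,               -- NULL swallows the whole value
           'i7' || COALESCE(', ' || NULL, '')  -- the name survives
""").fetchone()
print(bad, fixed)  # None i7
```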
Remember, you could always add an even more protective filter to your CTE, like:
```
WHERE c1.cpuId IS NOT NULL AND c2.name IS NOT NULL
```
Naturally, you should adjust the SQL to suit your specific requirements.
I hope this helps. Good luck! | Using 'with' clause for update purpose | [
"",
"sql",
"sql-server",
"t-sql",
"common-table-expression",
""
] |
I have a table column that contains City, State, and Zip. I would like to split this into 3 separate columns.
I'm wondering if I'm going about this wrong. Here is my attempt at extracting all 3.
```
SELECT [City State Zip]
,CHARINDEX(',',[City State Zip]) AS [Comma location]
,SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13) AS [State and Zip]
,SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9)
-- Below code attempts to add a dash to the 9 digit zip codes but appears to only be doing it to some of them
,CASE LEN(SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9))
WHEN 9
THEN
STUFF(SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9),6,0,'-')
ELSE
SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9)
END AS Zip
,SUBSTRING([City State Zip],0,CHARINDEX(',',[City State Zip])) AS City
-- This code for extracting the STATE is producing an error "Invalid length parameter passed to the SUBSTRING function"
,SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip])+1,LEN([City State Zip])- (CHARINDEX(',',[City State Zip])+1 + 5))
FROM dbo.foo
```
SO now how do I extract the State? Currently it fails with "Invalid length parameter passed to the SUBSTRING function"
The City will always be followed by a comma, and the state will always be 2 digits.
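As a cross-check of that split rule, here is a Python sketch (assuming exactly one ", " after the city and a single space before the zip):

```python
def split_csz(s):
    # "City, ST ZIP" -> (city, state, zip), adding a dash to 9-digit zips.
    city, _, rest = s.partition(", ")
    state, _, zip_code = rest.partition(" ")
    if len(zip_code) == 9:
        zip_code = zip_code[:5] + "-" + zip_code[5:]
    return city, state, zip_code

print(split_csz("Georgetown, DE 19947"))     # ('Georgetown', 'DE', '19947')
print(split_csz("Greenwood, DE 199502039"))  # ('Greenwood', 'DE', '19950-2039')
```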
Sample data below:
Georgetown, DE 19947
Greenwood, DE 199502039
Dover, DE 19901
New Castle, DE 197205069
Lewes, DE 199581984
Newark, DE 197118734
Smyrna, DE 19904
Baltimore, MD 21020
Dover, DE 19901 | I would break up your logic and use a query like this:
```
SELECT
[city],
[state],
CASE LEN([zip])
WHEN 5 THEN [zip]
WHEN 9 THEN STUFF([zip],6,0,'-')
END [zip]
FROM (
SELECT
LEFT([City State Zip],CHARINDEX(',',[City State Zip])-1) [city],
SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip])+2,2) [state],
RIGHT([City State Zip],CHARINDEX(' ',REVERSE([City State Zip]))-1) [zip]
FROM dbo.foo
) A
``` | Solved it! Here is the working code to split out a column of City, State, Zip into 3 separate columns, in addition to adding dashes to the zips that are 9 digits.
```
SELECT
CASE LEN(SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9))
WHEN 9
THEN
STUFF(SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9),6,0,'-')
ELSE
SUBSTRING(SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip]),13),5,9)
END AS Zip
,SUBSTRING([City State Zip],0,CHARINDEX(',',[City State Zip])) AS City
,SUBSTRING([City State Zip],CHARINDEX(',',[City State Zip])+1,4) AS STATE
FROM dbo.foo
``` | Split City, State, Zip from one column into 3 separate, also add dash to 9 digit zips | [
"",
"sql",
"t-sql",
"sql-server-2008-r2",
""
] |
I have several million filenames I need to search through. They look like this:
```
LG_MARGINCALL_HD2CH_127879834_EN.mov
```
If someone searches for any of the following, it should match:
* margin
* margin call
* margin call mov
* margin call hd en
* margin call hd en mov
What I am currently using is a mysql %LIKE% search. Something like:
```
SELECT filename FROM path WHERE filename LIKE '%margin%' AND filename LIKE '%mov%'
```
It is deathly slow (can take up to ten seconds for a search). Note that it **does work** though.
What would be a better way to do the above search? Either using mysql or another program. | Your search strategy is, as you noticed, slow. It's slow because
```
LIKE '%something%'
```
has to scan the table to find matches. Leading % signs on `LIKE` searches are an excellent way to wreck performance.
I don't know how many columns are in your `path` table. If there are a *lot of columns* you could do two quick things to improve performance:
1. get rid of `SELECT *` and list the names of the columns you want in your resultset.
2. create a compound index consisting of your `filename` column followed by the other columns you need to retrieve.
(This won't help if you only have a few columns in your table.)
You can't use straight-out-of-the-software-package `FULLTEXT` searching for this stuff, because that's designed for language text.
If I had to make this work fast for production, I would do this:
First, create a new table called "searchterm" containing
```
filename_id INT the id number of a row in your path table
searchterm VARCHAR(20) a fragment of a filename.
```
Second, write a program that reads the `filename_id` and `filename` values, and inserts a bunch of different rows for each one into `searchterm`. For the item you've shown the values should be:
```
LG_MARGINCALL_HD2CH_127879834_EN.mov (original)
LG MARGINCALL HD2CH 127879834 EN mov (split on punctuation)
HD 2 CH (split on embedded numerics)
MARGIN CALL (split on an app-specific list of words)
```
So, you'd have a bunch of entries in your searchterm table, all with the same `filename_id` value and lots of different little chunks of text.
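A rough Python sketch of that term-generating program; the exact splitting rules are guesses based on the example above, and the app-specific word list ("MARGIN CALL") is omitted:

```python
import re

def terms(filename):
    # Original name, punctuation-split chunks, and numeric-boundary splits.
    out = {filename}
    chunks = re.split(r"[_.]", filename)      # split on punctuation
    out.update(chunks)
    for c in chunks:                          # split on embedded numerics
        out.update(p for p in re.split(r"(\d+)", c) if p)
    return out

t = terms("LG_MARGINCALL_HD2CH_127879834_EN.mov")
print(sorted(t))
```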
Finally, when searching you could do this.
```
SELECT path.id, path.filename, path.whatever,
       COUNT(DISTINCT searchterm.term) AS termcount
FROM path
JOIN searchterm ON path.filename_id = searchterm.filename_id
WHERE searchterm.term IN ('margin','call','hd','en', 'mov')
GROUP BY path.id, path.filename, path.whatever
ORDER BY path.filename, COUNT(DISTINCT searchterm.term) DESC
```
This little query finds all the matching fragments to what you're search for. It returns multiple file names, and it presents them in order of what matches the most terms.
What I'm suggesting is that you create your own application-specific kinda-sorta full-text search system. If you really have several million multimedia files, this is surely worth your effort. | It seems clear that you need **full-text search** functionality.
There are multiple solutions out there that can respond to this, one of the best at the moment being **[Elastic Search](http://www.elasticsearch.org/)**.
It has all the capabilities to handle real time full-text search.
And it goes largely beyond just this by providing auto-suggestions, autocomplete, etc.
And it's open source. | Improving filepath search in mysql | [
"",
"mysql",
"sql",
"unix",
"search",
"full-text-search",
""
] |
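The accepted answer's fragment-table approach can be sketched end to end with SQLite in Python. The schema, splitting rules, and sample data below are simplified assumptions for illustration, not the original system:

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE path (filename_id INTEGER PRIMARY KEY, filename TEXT);
CREATE TABLE searchterm (filename_id INTEGER, searchterm TEXT);
""")
con.execute("INSERT INTO path VALUES (1, 'LG_MARGINCALL_HD2CH_127879834_EN.mov')")

# Build the fragment table: split on punctuation, then on letter/digit boundaries.
for fid, name in con.execute("SELECT filename_id, filename FROM path").fetchall():
    terms = {t.lower() for t in re.split(r"[^A-Za-z0-9]+", name) if t}
    for t in list(terms):
        terms.update(p.lower() for p in re.findall(r"[A-Za-z]+|[0-9]+", t))
    con.executemany("INSERT INTO searchterm VALUES (?, ?)",
                    [(fid, t) for t in terms])

# Rank files by how many of the query terms they match.
rows = con.execute("""
    SELECT p.filename, COUNT(DISTINCT s.searchterm) AS termcount
    FROM path p JOIN searchterm s ON p.filename_id = s.filename_id
    WHERE s.searchterm IN ('margin', 'call', 'hd', 'en', 'mov')
    GROUP BY p.filename_id, p.filename
    ORDER BY termcount DESC
""").fetchall()
print(rows)
```

Note that the regex split alone does not produce `margin`/`call`; as the answer says, that step would need an app-specific word list, so the sample file matches three of the five terms.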
I am just getting started with databases, so please be kind.
Here is my problem:
I have this:
Table: Employee
```
# | Column | Type | Collation | Attributes | Null | Default | Extra | Action
1 | Id | int(11) | no | None | AUTO_INCREMENT
2 | Name | varchar(50) | latin1_swedish_ci | yes | NULL
3 | Firstname | varchar(50) | latin1_swedish_ci | yes | NULL |
4 | IdNumber | int(11) | yes | NULL
5 | Mail | varchar(50) | latin1_swedish_ci | no | None |
6 | PassWord | varchar(50) | latin1_swedish_ci | no | None |
7 | Site | varchar(50) | latin1_swedish_ci | yes | NULL |
8 | Entrance | date | yes | NULL |
9 | Departure | date | yes | NULL |
10 | Car_Id | int(11) | yes | NULL |
11 | Profil_Id | int(11) | yes | NULL |
```
Table : Imputation
```
# | Column | Type | Collation | Attributes | Null | Default | Extra | Action
1 | Id | int(11) | | | no | None | AUTO_INCREMENT
2 | Hours | int(11) | | | yes | NULL |
3 | Description | varchar(256) | latin1_swedish_ci | | yes | NULL |
4 | ToBeBilled | tinyint(1) | | | yes | 1 |
5 | BillNumber | int(11) | | | yes | NULL |
6 | Day | date | | | yes | NULL |
7 | TimeSheet_Id | int(11) | | | no | None |
8 | Project_Id | int(11) | | | no | None |
9 | automatic | tinyint(1) | | | no | 0 |
```
Table : TimeSheet
```
# | Column | Type | Collation | Attributes | Null | Default | Extra | Action
1 | Id | int(11) | | | no | None | AUTO_INCREMENT
2 | Month | int(2) | | | yes | NULL |
3 | Year | int(4) | | | yes | NULL |
4 | Filled | tinyint(1) | | | yes | 0 |
5 | Closed | tinyint(1) | | | yes | 0
6 | Employee_Id | int(11) | | | no | None |
```
And I want to achieve the following result :
```
________________________________________________________
Name | Billable hours | Non-billable hours | Total hours
________________________________________________________
John Doe | 872 | 142 | 1014
________________________________________________________
```
Billable hours are those with ToBeBilled lines = true. Non-billable hours are ToBeBilled lines = false.
Here is my SQL query that I'm currently working on (I use [FlySpeed SQL Query](http://www.activedbsoft.com/overview-querytool.html) tool to help me build my SQL queries) :
```
Select
Employee.Name,
Sum( Imputation.Hours),
Imputation.ToBeBilled
From
Employee Inner Join
TimeSheet On TimeSheet.Employee_Id = Employee.Id,
Imputation
Where
Imputation.ToBeBilled = 'true'
Group By
Employee.Name, Imputation.ToBeBilled
Order By
Employee.Name
```
---
After help, here is the final query :
```
Select
Employee.Name As Name,
Sum(Case When Imputation.ToBeBilled = '1' Then Imputation.Hours End) As `Billable`,
Sum(Case When Imputation.ToBeBilled = '0' Then Imputation.Hours End) As `NonBillable`,
Sum(Imputation.Hours) As `Total`
From
Employee Inner Join
TimeSheet On TimeSheet.Employee_Id = Employee.Id Inner Join
Imputation On Imputation.TimeSheet_Id = TimeSheet.Id
Group By
Employee.Name, Employee.Id
Order By
Name
``` | Start by doing the proper joins between all the tables. Just *never* use a comma in a `from` statement. Then do conditional aggregation -- nesting `case` statements in the `sum()`:
```
Select e.Name, Sum(i.Hours) as TotalHours,
       sum(case when i.ToBeBilled = 1 then i.Hours else 0 end) as Billable,
       sum(case when i.ToBeBilled = 0 then i.Hours else 0 end) as NonBillable
From Employee e Inner Join
     TimeSheet ts
     On ts.Employee_Id = e.Id Inner Join
     Imputation i
     on i.TimeSheet_Id = ts.Id
Group By e.Name, e.id
Order By e.Name;
```
This assumes that the values taken by `ToBeBilled` are `0` and `1`. The inclusion of `e.Id` in the `group by` is to handle the situation where two employees have the same name. | ```
SELECT
Employee.Name,
Sum(CASE WHEN Imputation.ToBeBilled = 'true' THEN Imputation.Hours END) As Billable_hours,
Sum(CASE WHEN Imputation.ToBeBilled != 'true' THEN Imputation.Hours END) As NonBillable_hours,
SUM(Imputation.Hours) As total_hours
FROM
Employee INNER JOIN TimeSheet
On TimeSheet.Employee_Id = Employee.Id
INNER JOIN Imputation
ON Imputation.TimeSheet_Id=TimeSheet.id
GROUP BY
Employee.Id, Employee.Name
ORDER BY
Employee.Name
``` | Select the amount of billable hours and non-billable hours | [
"",
"mysql",
"sql",
""
] |
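The conditional-aggregation pattern from the accepted answer above can be demonstrated with an in-memory SQLite database. The schema is trimmed to the essential columns and the data is invented to reproduce the expected output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Employee (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE TimeSheet (Id INTEGER PRIMARY KEY, Employee_Id INTEGER);
CREATE TABLE Imputation (Id INTEGER PRIMARY KEY, Hours INTEGER,
                         ToBeBilled INTEGER, TimeSheet_Id INTEGER);
INSERT INTO Employee VALUES (1, 'John Doe');
INSERT INTO TimeSheet VALUES (10, 1);
INSERT INTO Imputation VALUES (100, 872, 1, 10), (101, 142, 0, 10);
""")

# SUM over a CASE expression splits one total into per-condition totals.
rows = con.execute("""
    SELECT e.Name,
           SUM(CASE WHEN i.ToBeBilled = 1 THEN i.Hours ELSE 0 END) AS Billable,
           SUM(CASE WHEN i.ToBeBilled = 0 THEN i.Hours ELSE 0 END) AS NonBillable,
           SUM(i.Hours) AS Total
    FROM Employee e
    JOIN TimeSheet ts ON ts.Employee_Id = e.Id
    JOIN Imputation i ON i.TimeSheet_Id = ts.Id
    GROUP BY e.Id, e.Name
""").fetchall()
print(rows)  # [('John Doe', 872, 142, 1014)]
```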
Here is a screenshot of the SQL command results:

Here is the SQL command:
```
SELECT inl_cbsubs_subscriptions.user_id AS cbsubsUserID, inl_cbsubs_subscriptions.id AS cbsubsId, inl_cbsubs_subscriptions.status AS status, inl_cbsubs_payment_items.subscription_id AS paymentSubID, inl_cbsubs_payment_items.stop_date AS paymentStopDate, inl_cbsubs_payment_items.id AS paymentID
FROM inl_cbsubs_subscriptions
INNER JOIN inl_cbsubs_payment_items
ON inl_cbsubs_subscriptions.id=inl_cbsubs_payment_items.subscription_id
WHERE status='C'
ORDER BY paymentID DESC;
```
I am looking to adjust this command so that I have only the most recent result showing on a per user basis. So in other words, this is what the table should resemble in this case:

As you can see, the `cbsubsUserID` only shows one result per ID whereas before there were multiple results for the `596` id. | If you want the most recent result, the best way to do it is with `not exists`:
```
SELECT s.user_id AS cbsubsUserID, s.id AS cbsubsId, s.status AS status,
i.subscription_id AS paymentSubID, i.stop_date AS paymentStopDate, i.id AS paymentID
FROM inl_cbsubs_subscriptions s INNER JOIN
inl_cbsubs_payment_items i
ON s.id = i.subscription_id
WHERE s.status = 'C' and
not exists (select 1
from inl_cbsubs_payment_items i2
where i2.subscription_id = i.subscription_id and
i2.id > i.id
)
ORDER BY paymentID DESC;
```
You will want an index on `inl_cbsubs_payment_items(subscription_id, paymentid)`.
What this is saying is: "Get me all the items for a given subscription id that have no bigger payment id for that subscription". It is a fancy way of saying "get me the most recent for each subscription id", but it tends to work best in the database.
```
SELECT inl_cbsubs_subscriptions.user_id AS cbsubsUserID, inl_cbsubs_subscriptions.id AS cbsubsId, inl_cbsubs_subscriptions.status AS status, inl_cbsubs_payment_items.subscription_id AS paymentSubID, inl_cbsubs_payment_items.stop_date AS paymentStopDate, inl_cbsubs_payment_items.id AS paymentID
FROM inl_cbsubs_subscriptions
INNER JOIN inl_cbsubs_payment_items
ON inl_cbsubs_subscriptions.id=inl_cbsubs_payment_items.subscription_id
WHERE status='C'
GROUP BY inl_cbsubs_subscriptions.user_id
ORDER BY paymentID DESC;
``` | How to filter MySQL results to display LIMIT 1 on per user basis? | [
"",
"mysql",
"sql",
"filter",
"greatest-n-per-group",
""
] |
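The `NOT EXISTS` technique from the accepted answer above can be sketched in isolation with SQLite. The table and data are a minimal stand-in for the payment-items table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE payment_items (id INTEGER, subscription_id INTEGER);
INSERT INTO payment_items VALUES (1, 596), (2, 596), (3, 700);
""")

# Keep only the row per subscription for which no row with a bigger id exists.
rows = con.execute("""
    SELECT i.subscription_id, i.id
    FROM payment_items i
    WHERE NOT EXISTS (SELECT 1 FROM payment_items i2
                      WHERE i2.subscription_id = i.subscription_id
                        AND i2.id > i.id)
    ORDER BY i.subscription_id
""").fetchall()
print(rows)  # one row per subscription: its highest id
```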
I'm working on a query to get a list of all computers that have more than one antivirus installed. Unfortunately I'm unable to get my query to return anything. As you can see in the screenshots below I have setup the data so that it when I get the query working my result set will contain data on devices 1, 2, and 3.


Here is what I've got currently:
```
SELECT d.ip_address AS "IP Address",
d.name AS "Computer Name",
av.name AS "Antivirus Name",
COUNT(av.deviceID) AS "# of AV Installed"
FROM devices d JOIN anti_virus_products av
ON d.deviceID = av.deviceID
GROUP BY d.ip_address, d.name, av.name
HAVING COUNT(av.deviceID) > 1;
``` | You need to remove the antivirus name `av.name` from the `SELECT` and `GROUP BY`. By including that in the results, the `GROUP BY` will create a new row for each computer and antivirus name -- because those values create a unique group.
```
SELECT d.ip_address AS "IP Address",
d.name AS "Computer Name",
COUNT(av.deviceID) AS "# of AV Installed"
FROM devices d JOIN anti_virus_products av
ON d.deviceID = av.deviceID
GROUP BY d.ip_address, d.name
HAVING COUNT(av.deviceID) > 1;
```
Update based on comment to expand query to list names of all AV software installed on the devices:
You could partition the `COUNT` function. I couldn't find a link to the Oracle documentation for this, but the SQL Server syntax for the [OVER Clause](http://technet.microsoft.com/en-us/library/ms189461.aspx) seems to work with Oracle. See this example query on `SQL Fiddle`
```
SELECT d.ip_address AS "IP Address",
d.name AS "Computer Name",
d.deviceid,
av.name AS "Antivirus Name",
COUNT(*) OVER (PARTITION BY d.ip_address, d.name) AS "Total # of AV Installed"
FROM devices d JOIN anti_virus_products av
ON d.deviceID = av.deviceID
GROUP BY d.ip_address, d.name, d.deviceid, av.name;
```
This example query will return one row for each device and antivirus pair and include the total number of AV installed on the device.
I'm not sure if this will solve your problem, but it provides the additional data you're looking for. | Just remove `av.name` from the `group by` clause. Your query returns computers that have a the same anti-virus software installed multiple times -- an unlikely scenario:
```
SELECT d.ip_address AS "IP Address",
d.name AS "Computer Name",
COUNT(av.deviceID) AS "# of AV Installed"
FROM devices d JOIN
anti_virus_products av
ON d.deviceID = av.deviceID
GROUP BY d.ip_address, d.name
HAVING COUNT(av.deviceID) > 1;
```
If you want the names of the antivirus software on a single row, you can use `listagg()`. Here is the Oracle documentation page for listagg: <http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions089.htm#SQLRF30030> | SQL Query returns no rows | [
"",
"sql",
"oracle",
""
] |
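Both answers above hinge on the same point: grouping by the AV name fragments each device into one group per product. A minimal SQLite reproduction (invented device and AV names) shows the corrected grouping:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE devices (deviceID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE anti_virus_products (deviceID INTEGER, name TEXT);
INSERT INTO devices VALUES (1, 'pc1'), (2, 'pc2');
INSERT INTO anti_virus_products VALUES (1, 'AV-A'), (1, 'AV-B'), (2, 'AV-A');
""")

# Group only by the device; HAVING filters to devices with more than one AV.
rows = con.execute("""
    SELECT d.name, COUNT(av.deviceID) AS n
    FROM devices d JOIN anti_virus_products av ON d.deviceID = av.deviceID
    GROUP BY d.deviceID, d.name
    HAVING COUNT(av.deviceID) > 1
""").fetchall()
print(rows)  # only pc1 has two AV products installed
```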
I have declared a temp table to hold all the required values as follows:
```
DECLARE @temp TABLE
(
Password INT,
IdTran INT,
Kind VARCHAR(16)
)
INSERT INTO @temp
SELECT s.Password, s.IdTran, 'test'
from signal s inner join vefify v
on s.Password = v.Password
and s.IdTran = v.IdTran
and v.type = 'DEV'
where s.[Type] = 'start'
AND NOT EXISTS (SELECT * FROM signal s2
WHERE s.Password = s2.Password
and s.IdTran = s2.IdTran
and s2.[Type] = 'progress' )
INSERT INTO @temp
SELECT s.Password, s.IdTran, 'test'
FROM signal s inner join vefify v
on s.Password = v.Password
and s.IdTran = v.IdTran
and v.type = 'PROD'
where s.[Type] = 'progress'
AND NOT EXISTS (SELECT * FROM signal s2
WHERE s.Password = s2.Password
and s.IdTran = s2.IdTran
and s2.[Type] = 'finish' )
```
Now I need to loop through the rows in the @temp table and, for each row, call a sp that takes all the columns of the @temp table as input.
How can I achieve this? | you could use a cursor:
```
DECLARE @id int
DECLARE @pass varchar(100)
DECLARE cur CURSOR FOR SELECT Id, Password FROM @temp
OPEN cur
FETCH NEXT FROM cur INTO @id, @pass
WHILE @@FETCH_STATUS = 0 BEGIN
EXEC mysp @id, @pass ... -- call your sp here
FETCH NEXT FROM cur INTO @id, @pass
END
CLOSE cur
DEALLOCATE cur
``` | Try returning the dataset from your stored procedure to your datatable in C# or VB.Net. Then the large amount of data in your datatable can be copied to your destination table using a Bulk Copy. I have used BulkCopy for loading large datatables with thousands of rows, into Sql tables with great success in terms of performance.
You may want to experiment with BulkCopy in your C# or VB.Net code. | Loop through all the rows of a temp table and call a stored procedure for each row | [
"",
"sql",
"sql-server",
"stored-procedures",
"foreach",
""
] |
The result I want is simply the number of users who voted for rateable\_id = '3'.
Example: As you can see in my table rates, rater\_id: 1 and rater\_id: 8 have voted for rateable\_id = 3. This makes 2 users.
My question is how to display that in the view?
This is in my ranking\_controller.rb:
```
class RankingController < ApplicationController
def index
@rankings = Rate.find(:all)
end
end
```
This is in my table rates:
```
- !ruby/object:Rate
attributes:
id: 11
rater_id: 1
rateable_id: 3
rateable_type: Bboy
stars: 5.0
dimension: foundation
created_at: 2014-02-25 09:33:23.000000000 Z
updated_at: 2014-02-25 09:33:23.000000000 Z
- !ruby/object:Rate
attributes:
id: 12
rater_id: 1
rateable_id: 3
rateable_type: Bboy
stars: 5.0
dimension: originality
created_at: 2014-02-25 09:33:24.000000000 Z
updated_at: 2014-02-25 09:33:24.000000000 Z
- !ruby/object:Rate
attributes:
id: 13
rater_id: 1
rateable_id: 3
rateable_type: Bboy
stars: 5.0
dimension: dynamics
created_at: 2014-02-25 09:33:25.000000000 Z
updated_at: 2014-02-25 09:33:25.000000000 Z
- !ruby/object:Rate
attributes:
id: 14
rater_id: 1
rateable_id: 3
rateable_type: Bboy
stars: 5.0
dimension: execution
created_at: 2014-02-25 09:33:26.000000000 Z
updated_at: 2014-02-25 09:33:26.000000000 Z
- !ruby/object:Rate
attributes:
id: 15
rater_id: 1
rateable_id: 3
rateable_type: Bboy
stars: 5.0
dimension: battle
created_at: 2014-02-25 09:33:27.000000000 Z
updated_at: 2014-02-25 09:33:27.000000000 Z
- !ruby/object:Rate
attributes:
id: 16
rater_id: 1
rateable_id: 5
rateable_type: Bboy
stars: 5.0
dimension: foundation
created_at: 2014-02-25 09:36:30.000000000 Z
updated_at: 2014-02-25 09:36:30.000000000 Z
- !ruby/object:Rate
attributes:
id: 17
rater_id: 1
rateable_id: 5
rateable_type: Bboy
stars: 5.0
dimension: originality
created_at: 2014-02-25 09:36:31.000000000 Z
updated_at: 2014-02-25 09:36:31.000000000 Z
- !ruby/object:Rate
attributes:
id: 18
rater_id: 1
rateable_id: 5
rateable_type: Bboy
stars: 5.0
dimension: dynamics
created_at: 2014-02-25 09:36:31.000000000 Z
updated_at: 2014-02-25 09:36:31.000000000 Z
- !ruby/object:Rate
attributes:
id: 19
rater_id: 1
rateable_id: 5
rateable_type: Bboy
stars: 5.0
dimension: battle
created_at: 2014-02-25 09:36:32.000000000 Z
updated_at: 2014-02-25 09:36:32.000000000 Z
- !ruby/object:Rate
attributes:
id: 25
rater_id: 8
rateable_id: 3
rateable_type: Bboy
stars: 1.0
dimension: foundation
created_at: 2014-03-04 14:06:46.000000000 Z
updated_at: 2014-03-04 14:06:46.000000000 Z
- !ruby/object:Rate
attributes:
id: 26
rater_id: 8
rateable_id: 3
rateable_type: Bboy
stars: 1.0
dimension: originality
created_at: 2014-03-04 14:06:49.000000000 Z
updated_at: 2014-03-04 14:06:49.000000000 Z
- !ruby/object:Rate
attributes:
id: 27
rater_id: 8
rateable_id: 3
rateable_type: Bboy
stars: 1.0
dimension: dynamics
created_at: 2014-03-04 14:06:51.000000000 Z
updated_at: 2014-03-04 14:06:51.000000000 Z
- !ruby/object:Rate
attributes:
id: 28
rater_id: 8
rateable_id: 3
rateable_type: Bboy
stars: 1.0
dimension: execution
created_at: 2014-03-04 14:06:53.000000000 Z
updated_at: 2014-03-04 14:06:53.000000000 Z
- !ruby/object:Rate
attributes:
id: 29
rater_id: 8
rateable_id: 3
rateable_type: Bboy
stars: 1.0
dimension: battle
created_at: 2014-03-04 14:06:54.000000000 Z
updated_at: 2014-03-04 14:06:54.000000000 Z
``` | You could use it as below:
```
@raters_count = Rate.select(:rater_id).where(rateable_id: 3, rateable_type: 'Bboy' ).distinct.count
``` | There are many ways to accomplish this.
One would be
```
@ranking_count = Rate.where(rateable_id: 3).pluck(:rater_id).uniq.count
``` | Rails ActiveRecord sum of users voted? | [
"",
"sql",
"ruby-on-rails",
"ruby",
"activerecord",
""
] |
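Under the hood, both Rails snippets above compile down to a distinct count. The equivalent raw SQL can be sketched with SQLite (sample rows invented from the question's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rates (rater_id INTEGER, rateable_id INTEGER, dimension TEXT);
INSERT INTO rates VALUES (1, 3, 'foundation'), (1, 3, 'battle'),
                         (8, 3, 'foundation'), (1, 5, 'foundation');
""")

# COUNT(DISTINCT ...) collapses the five-dimensions-per-rater rows to one per user.
n = con.execute(
    "SELECT COUNT(DISTINCT rater_id) FROM rates WHERE rateable_id = 3"
).fetchone()[0]
print(n)  # 2
```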
I have a table like below:
```
Book BookSales Date
BookA $1 2013-03-04
BookB $2 2013-03-04
BookA $3 2013-03-05
```
...
...
I run the query -
```
select book,booksales,date
from tblBook
where date>='03/03/2013'
and date<='03/05/2013'
and book in ('BookA','BookB')
order by date,book
```
Now it returns the result as below
```
BookA $1 2013-03-04
BookB $2 2013-03-04
BookA $3 2013-03-05
```
But what I want is to also return an empty result ($0) for BookB if it does not exist for that date (as shown below). Here, the record for BookB does not exist for date 2013-03-05.
```
BookA $1 2013-03-04
BookB $2 2013-03-04
BookA $3 2013-03-05
BookB $0 2013-03-05 <-- I want to include this record
```
How can I achieve this ? If it is not possible to achieve this way, what other approach can I consider? | There must be an easier way of doing this, but here's what I came up with. Use common table expressions to get a distinct list of dates and a distinct list of books, then `CROSS JOIN` them to get the full matrix of dates and books. Once you've got the full matrix, you can `OUTER JOIN` your table to it, replacing any nulls with 0.
Cross joins can get messy pretty quickly, but if it's just a list of dates and list of books it should work fine.
```
;with dates AS
(
select distinct date
from tblBook
), books AS
(
select distinct book
from tblBook
)
select b.book, coalesce(tb.booksales, 0), d.[date]
from dates as d
cross join books as b
left outer join tblBook as tb
on d.[date] = tb.[date]
and b.book = tb.book
where d.[date] >='03/03/2013'
and d.[date] <='03/05/2013'
and b.book in ('BookA','BookB')
order by d.[date], b.[book]
```
[Working SQLFiddle here](http://sqlfiddle.com/#!3/536f1/2). | # Resolving Missing Aggregation Data With Dimensional Sub Queries and Outer Join SQL
You can solve your problem by using a sub query that produces all the distinct values that you need to report on. I assume that the output is supposed to provide a way to track the sales of each book on each day, even if no sales transactions have been entered into the table. Here is one possible solution and the output:
## Problem Analysis:
The request to see a data value for which there is no physical representation in the queried table is possible by constructing a `sub query` that represents all possible values whether or not they already exist in the table.
In this case, the two dimensions reported on are: "book" and "date". There must be an entry for total sales for each book for each day of transactions recorded in the book sales table.
**Assumptions:** It wasn't clear in the example if `tblBook` represents already aggregated sales, or if there can be multiple records for a given "book" and "date" combination... as in the style of a transaction register that records each sale as they happen. The solution query aggregates sales amounts to provide a single record for each book on each day.
> **Note:** This example was tested on a MySQL RDBMS. The SQL used in this example is ANSI standard, so it should work on a SQLServer database as well. You may have to find the equivalent to the `ifNULL()` function used in this solution
**The SQL Code**
```
SELECT REPORTING_DIMENSIONS.book,
sum(ifNULL(sales_data.bookSales,0)) as totalSales,
REPORTING_DIMENSIONS.date
FROM
(SELECT book_list.book, date_list.date
FROM
(SELECT distinct book
FROM tblBook
) book_list
CROSS JOIN
(SELECT distinct date
FROM tblBook
) date_list
) REPORTING_DIMENSIONS
LEFT JOIN tblBook SALES_DATA
ON REPORTING_DIMENSIONS.book = SALES_DATA.book
AND REPORTING_DIMENSIONS.date = SALES_DATA.date
GROUP BY REPORTING_DIMENSIONS.book,
REPORTING_DIMENSIONS.date
```
**The Output:**
```
| BOOK | TOTALSALES | DATE |
|-------|------------|------------------------------|
| BookA | 2 | March, 04 2013 00:00:00+0000 |
| BookA | 6 | March, 05 2013 00:00:00+0000 |
| BookB | 4 | March, 04 2013 00:00:00+0000 |
| BookB | 0 | March, 05 2013 00:00:00+0000 |
```
## Discussion of Results
The value of zero for BookB on 2013-03-05 actually was a NULL value. When an OUTER (i.e., LEFT) join is made for which there is no match on the joining values, a null is returned.
The only fancy step was the `CROSS JOIN` designation. Also known as a `CARTESIAN PRODUCT`, this generates a record for each possible "book" and "date" combination recorded in the existing table.
The sub query aliased as `REPORTING_DIMENSIONS` is the dynamic feature of the query. It allows the query to adjust its output as new books or dates are added to its scope.
**Additional Considerations:**
> It is possible to have NO SALES for NONE of the products on one, all or many of the calendar days contained in the range of your report query. This again will affect the output of the original report query because there will be holes in the output of results.
This can be fixed by creating a fixed dimension table just for date values. Pre populate it with a record for each discrete calendar day for a few years before and after the scope of your sales data. A simple T/SQL block with a looping insert operation can easily populate this table. Once it's been populated, it is very unlikely it needs to be touched. | Returning all results in where in clause (Sql Server) | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
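The cross-join-then-outer-join pattern from the accepted answer above runs unchanged on SQLite, which makes it easy to verify with the question's own data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblBook (book TEXT, booksales INTEGER, date TEXT);
INSERT INTO tblBook VALUES ('BookA', 1, '2013-03-04'),
                           ('BookB', 2, '2013-03-04'),
                           ('BookA', 3, '2013-03-05');
""")

# Build the full date x book matrix, then outer-join the real sales onto it.
rows = con.execute("""
    SELECT b.book, COALESCE(t.booksales, 0) AS sales, d.date
    FROM (SELECT DISTINCT date FROM tblBook) d
    CROSS JOIN (SELECT DISTINCT book FROM tblBook) b
    LEFT JOIN tblBook t ON t.date = d.date AND t.book = b.book
    ORDER BY d.date, b.book
""").fetchall()
print(rows)  # BookB now appears on 2013-03-05 with 0 sales
```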
I have three tables A, B, C. Their relation is: A.id is B's foreign key and B.id is C's foreign key. I need the count where B.id = C.id plus the count where A.id = B.id; I can get each count by querying twice. But now I need some way to compute the summation in just one query!
### My inefficient solution
```
select count(C.id) from C,B where C.id = B.id; //return the value X
select count(A.id) from C,B where A.id = B.id; //return the value Y
select X + Y; // count the summation of X and Y
```
How can I optimize ? Thks! :)
### PS:
My question is from [GalaXQL](http://sol.gfxile.net/galaxql.html),which is a SQL interactive tutorial.I have abstract the problem,more detail you can check the section 17.SELECT...GROUP BY... Having... | You can do these things in one query. For instance, something like this:
```
select (select count(*) from C join B on C.id = B.id) +
(select count(*) from C join A on C.id = A.id)
```
(Your second query will not parse because `A` is not a recognized table alias.)
In any case, if you are learning SQL, the first thing you should learn is modern `join` syntax. The implicit joins that you are using were out of date 15 years ago and have been part of the ANSI standard for over 20 years. Learn proper `join` syntax. | Try Like This
```
select sum(cid) (
select count(*) as cid from C join B on C.id = B.id
union all
select count(*) as cid from A join B on A.id = B.id ) as tt
``` | how to count the two count function's return value in once sql query | [
"",
"mysql",
"sql",
"sql-server",
""
] |
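The accepted answer's single-query trick — adding two scalar subqueries — can be verified with a tiny SQLite setup (the ids below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (id INTEGER);
CREATE TABLE B (id INTEGER);
CREATE TABLE C (id INTEGER);
INSERT INTO A VALUES (1), (2);
INSERT INTO B VALUES (1), (2), (3);
INSERT INTO C VALUES (2), (3);
""")

# Each parenthesized subquery yields one scalar; SQL adds them like numbers.
total = con.execute("""
    SELECT (SELECT COUNT(*) FROM C JOIN B ON C.id = B.id) +
           (SELECT COUNT(*) FROM A JOIN B ON A.id = B.id)
""").fetchone()[0]
print(total)  # 2 + 2 = 4
```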
Here is the code to create the database:
```
CREATE TABLE foo (
id TEXT PRIMARY KEY,
value TEXT
);
INSERT INTO foo VALUES(1, 10), (2, 20), (3, 30), (5, 50);
```
Now I have a set of rows and I want back 0 if the row doesnt exist, 1 if the row exists but is not the same, and 2 if the row exists exactly.
So the result of the query on `(1, 11), (2, 20), (4, 40)` should be `1, 2, 0`.
The reason I want this is to know what query to use to insert the data into the database. If it is a 0, I do a normal insert, if it is a 1 I do an update, and if it is a 2 I skip the row. I know that `INSERT OR REPLACE` will result in nearly the same rows, but the problem is that it doesnt trigger the correct triggers (it will always trigger an on insert trigger instead of an update trigger or no trigger if the row exists exactly).
Also, I want to do one query with all of the rows, not one query per row. | The idea is to use an aggregation query. Count the number of times that the id matches. If there are none, then return `0`. Then check the value to distinguish between 1 and 2:
```
select (case when max(id = 1) = 0 then 0
when max(id = 1 and value = 11) = 0 then 1
else 2
end) as flag
from table t;
```
You need to plug the values into the query.
EDIT:
If you want to match a bunch of rows, do something like this:
```
select testvalues.id,
       (case when max(t.id = testvalues.id) = 0 then 0
             when max(t.id = testvalues.id and t.value = testvalues.value) = 0 then 1
             else 2
        end) as flag
from table t cross join
     (select 1 as id, 11 as value union all
      select 2, 20 union all
      select 4, 40
     ) as testvalues
group by testvalues.id;
``` | You can use the EXISTS argument in Transact-SQL. [MSDN Documentation](http://msdn.microsoft.com/en-us/library/ms188336.aspx).
This returns true if a row exists.
You can then use an If statement within that to check if the row is the same or different, and if true, use the RETURN argument with your specified values. [MSDN Documentation](http://technet.microsoft.com/en-us/library/ms174998.aspx). | What SQL query can answer "Do these rows exist?" | [
"",
"sql",
"sqlite",
""
] |
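The 0/1/2 classification from the accepted answer can be checked against the question's own test set using SQLite. For simplicity the columns are declared INTEGER here (the question used TEXT), which sidesteps type-affinity details:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE foo (id INTEGER PRIMARY KEY, value INTEGER);
INSERT INTO foo VALUES (1, 10), (2, 20), (3, 30), (5, 50);
""")

# MAX over boolean expressions answers "did any row match?" per candidate.
rows = con.execute("""
    SELECT tv.id,
           CASE WHEN MAX(f.id = tv.id) = 0 THEN 0
                WHEN MAX(f.id = tv.id AND f.value = tv.value) = 0 THEN 1
                ELSE 2
           END AS flag
    FROM (SELECT 1 AS id, 11 AS value UNION ALL
          SELECT 2, 20 UNION ALL
          SELECT 4, 40) tv
    CROSS JOIN foo f
    GROUP BY tv.id
    ORDER BY tv.id
""").fetchall()
print(rows)  # matches the expected 1, 2, 0 from the question
```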
I am totally new to databases. I would like to create a database; I am going to make a small project which is going to use a DB. I am going to use MariaDB as it is totally free for commercial use.
The question is: Can I use MySQL workbench program to create a database and then transform/change it to MariaDB? | From my experience -- Sure, you can use MySQL Workbench with MariaDB.
However, I have tried basic functionalities only, like queries, schema design etc. Not sure about compatibility of advanced features. | So my experiences are, yes you can use MySQL Workbench for MariaDB database designs.
However I needed to change the "Default Target MySQL Version" to `5.7`.
This can be done by going to: Edit->Preferences in the menu. And finally to Modeling->MySQL.
Since the latest MySQL version, v8.x, the SQL statements are not compatible with MariaDB statements (like creating an index). MariabDB creating an index on a table:
```
INDEX `fk_rsg_sub_level_rsg_top_level1_idx` (`rgs_top_level_id` ASC)
```
vs
MySQL:
```
INDEX `fk_rsg_sub_level_rsg_top_level1_idx` (`rgs_top_level_id` ASC) VISIBLE
```
MariaDB can't handle this `VISIBLE` keyword in this example. Using an old MySQL Version, MySQL Workbench will forward engineer a compatible MariaDB SQL file.
Currently (Oct 2019) the generated SQL\_MODE output is still compatible with MariaDB. Just like InnoDB, which is also preferred when using MariaDB in most cases. | Can I use MySQL Workbench to create MariaDB? | [
"",
"mysql",
"sql",
"mariadb",
"workbench",
""
] |
I am having a hard time figuring out a SQL query.
I want to pick all companies that only has licenses with a name containing "Test".
```
Company License
|pkId|Name | |pkId|CompanyId|Name |
----------------- -----------------------------
|1 |Microsoft | |1 |1 |License Test|
|2 |Apple | |2 |1 |Commercial |
|3 |2 |License Test|
|4 |2 |License Test|
```
So, in the example Microsoft has 2 licenses. One test and one commercial
so I don't want that company.
But all of Apples licenses are test licenses so I want to select Apple.
What I'm thinking of is:
```
SELECT Company.Name, COUNT(Company.Name)
FROM Company INNER JOIN License ON License.CompanyId = Company.pkId
WHERE License.Name LIKE '%Test%'
GROUP BY Company.Name
```
to get how many rows containing "Test" for each company and compare it to
```
SELECT Company.Name, COUNT(Company.Name)
FROM Company INNER JOIN License ON License.CompanyId = Company.pkId
GROUP BY Company.Name
```
And if there is no difference in the count, I have a company with only test licenses.
But I have no idea how to but it all together or if there is a better way. | What you want is to subtract one set from another (that is, return one set, but without rows that appear in another set), and this is what the [EXCEPT](https://en.wikipedia.org/wiki/Union_%28SQL%29#EXCEPT_operator) (or MINUS in Oracle) operator does. This query should give you what you want:
```
SELECT c.pkId, c.Name FROM Company c
INNER JOIN License l ON c.pkId=l.CompanyId
WHERE l.Name LIKE '%Test%'
EXCEPT
SELECT c.pkId, c.Name FROM Company c
INNER JOIN License l ON c.pkId=l.CompanyId
WHERE l.Name NOT LIKE '%Test%'
```
There are other ways of doing this, but this should be the most straightforward.
[Sample SQL Fiddle](http://www.sqlfiddle.com/#!3/d41d8/31894). | This is an attempt if it is SQL Server Database:-
```
declare @company table (pkid int, name varchar(20))
insert into @company values (1,'microsoft')
insert into @company values (2,'apple')
declare @license table (pkid int, companyid int, name varchar(20))
insert into @license values (1,1,'license test')
insert into @license values (2,1,'comm')
insert into @license values (3,2,'license test')
insert into @license values (4,2,'license test')
select * from @company
select * from @license
select c.name
from @license l
inner join @company c
on c.pkid = l.companyid
where l.companyid not in
(select companyid from @license l1
where l1.name not like '%test%')
group by c.name
``` | Select only if related entities column contains specific value | [
"",
"sql",
"database",
"select",
"sql-server-2008-r2",
""
] |
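The set-subtraction idea from the accepted answer works in SQLite too, so the question's exact data can be used to confirm that only Apple survives the `EXCEPT`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Company (pkId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE License (pkId INTEGER PRIMARY KEY, CompanyId INTEGER, Name TEXT);
INSERT INTO Company VALUES (1, 'Microsoft'), (2, 'Apple');
INSERT INTO License VALUES (1, 1, 'License Test'), (2, 1, 'Commercial'),
                           (3, 2, 'License Test'), (4, 2, 'License Test');
""")

# Companies with a Test license, minus companies with any non-Test license.
rows = con.execute("""
    SELECT c.pkId, c.Name FROM Company c
    JOIN License l ON c.pkId = l.CompanyId
    WHERE l.Name LIKE '%Test%'
    EXCEPT
    SELECT c.pkId, c.Name FROM Company c
    JOIN License l ON c.pkId = l.CompanyId
    WHERE l.Name NOT LIKE '%Test%'
""").fetchall()
print(rows)  # Microsoft is removed because it also holds a Commercial license
```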
This is what my table looks like:
```
Employee_Number | DateTime
00000001 | 2014/01/14 09:20
00000001 | 2014/01/14 12:35
00000001 | 2014/01/14 13:35
00000002 | 2014/01/14 09:20
00000001 | 2014/01/14 22:49
00000001 | 2014/01/15 09:35
00000001 | 2014/01/15 10:35
00000001 | 2014/01/15 17:35
00000002 | 2014/01/14 12:34
00000002 | 2014/01/14 17:34
```
I want to do a select statement where I will get the min and the max datetime of employee per day something like this:
```
Employee_Number | DateTime MIN MAX
00000001 | 2014/01/14 2014/01/14 09:20 2014/01/14 22:49
00000001 | 2014/01/15 2014/01/15 09:35 2014/01/15 17:35
00000002 | 2014/01/14 2014/01/14 09:20 2014/01/14 17:34
```
I already searched google to find answer to my dilemma but the very near sql statement that I can have is this:
```
declare @tmp table (
tranDate int,
tranTime datetime
)
insert into @tmp
select Convert(int, convert(nvarchar(100), DateTime,112)) ,DateTime from tblExtract
select tranDate, min(tranTime) as 'min' , max(tranTime) as 'max' from @tmp
group by tranDate
```
The problem is it only shows the min and max of the day not per employee\_number. How can I solve this? | First of all, you must add Employee\_Number to the tmp table.
Next, In the last statement, you must write
```
group by tranDate, Employee_Number
```
The reason is that you will get the minimum corresponding to each tranDate and Employee\_Number pair, as required. | try this, assuming that the `DateTime` column is not stored as a string
```
select Employee_Number, Cast([DateTime] as Date) as 'DateTime', MIN([DateTime]) as 'MIN', MAX([DateTime]) as 'MAX'
from Employee_Table
group by Employee_Number, Cast([DateTime] as Date)
``` | How can I get the min and max date per date of employees? | [
"",
"sql",
"sql-server",
""
] |
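The fix above — grouping by both the date and the employee — can be sketched with SQLite, whose `date()` function plays the role of the `Convert` call (column names here are simplified assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE punches (emp TEXT, ts TEXT);
INSERT INTO punches VALUES
  ('00000001', '2014-01-14 09:20'), ('00000001', '2014-01-14 22:49'),
  ('00000001', '2014-01-15 09:35'), ('00000002', '2014-01-14 09:20'),
  ('00000002', '2014-01-14 17:34');
""")

# Group by (employee, calendar day) so MIN/MAX are computed per pair.
rows = con.execute("""
    SELECT emp, date(ts) AS day, MIN(ts), MAX(ts)
    FROM punches
    GROUP BY emp, date(ts)
    ORDER BY emp, day
""").fetchall()
print(rows)  # one row per employee per day, with first and last punch
```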
```
SELECT *
FROM Products,
ProductDetails
WHERE Products.ProductID IN (111,222,333,444,555)
AND Products.ProductID = ProductDetails.ProductID
```
My current SQL is like the above. Now, there is a new requirement, where I want the ProductID from the comma separated list to still be available in the recordset even if it's not found in the Products and ProductDetails tables.
How can I do this? Maybe can achieve that using LEFT JOIN? I am not really good with SQL, hope someone can help with working codes.
Thanks. | I suggest you should create a **'Table-Values Function'**, which will return you a table based result set from your comma separate string
```
CREATE FUNCTION [dbo].[funcSplit]
(
@param NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS @t TABLE (val NVARCHAR(MAX))
AS
BEGIN
SET @param += @delimiter
;WITH a AS
(
SELECT CAST(1 AS BIGINT) f,
CHARINDEX(@delimiter, @param) t,
1 seq
UNION ALL
SELECT t + 1,
CHARINDEX(@delimiter, @param, t + 1),
seq + 1
FROM a
WHERE CHARINDEX(@delimiter, @param, t + 1) > 0
)
INSERT @t
SELECT SUBSTRING(@param, f, t - f)
FROM a
OPTION(MAXRECURSION 0)
RETURN
END
```
and use a LEFT JOIN, so you will always get a row for every product ID in your input string. Below is an example query. *(You don't need to write a Stored Procedure; you can just use the query and pass your ProductIds in the @str variable.)*
```
DECLARE @str nvarchar(MAX)
SET @str='111,222,333,444,555'
SELECT val , p.ProductID, pd.ProductID from dbo.[funcSplit](@str,',') a
LEFT JOIN Products p ON p.ProductID=a.val
LEFT JOIN ProductDetails pd ON pd.ProductID=a.val
```
You will get output like
```
val p.ProductID pd.ProductID
111 111 NULL
222 222 222
333 333 NULL
444 NULL NULL
```
Hope this will work!
Thanks
Suresh | Your `IN` clause will not help you in this case. You have to provide a table which contains the IDs. One way was introduced already: a UDF which converts a comma separated list into a table.
There is another quite simple way to get what you want:
```
select *
from (
SELECT * FROM (VALUES(111),(222),(333),(444),(555)) as x(ID)
) x
left join Products on x.ID = Products.ProductID
left join ProductDetails
on Products.ProductID = ProductDetails.ProductID
```
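As a cross-check of the left-join behaviour, here is a small SQLite sketch with made-up IDs (not the actual Products schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Products (ProductID INTEGER);
    INSERT INTO Products VALUES (111), (222), (333);
""")
rows = conn.execute("""
    SELECT x.ID, p.ProductID
    FROM (SELECT 111 AS ID UNION ALL SELECT 222 UNION ALL SELECT 444) x
    LEFT JOIN Products p ON p.ProductID = x.ID
    ORDER BY x.ID
""").fetchall()
print(rows)  # [(111, 111), (222, 222), (444, None)] -- 444 survives with NULL
```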
This lists all IDs from the list. In case there is a matching `Product` it will list that Product. And in case there are `ProductDetails` for that product, it will list those details. | Join tables with comma separated list | [
"",
"sql",
"sql-server",
""
] |
I am learning `SQL` online from [w3school.com](http://www.w3schools.com/sql/).
For dropping a column, the command
`ALTER TABLE table_name
DROP COLUMN column_name` is given on the [SQL ALTER](http://www.w3schools.com/sql/sql_alter.asp) page.
I tried it in the [Try it yourself](http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_all) section provided by w3schools, but it's not working.
I used
```
ALTER TABLE Customers
DROP COLUMN City;
```
I also tried this query on [compile online/sql](http://www.compileonline.com/execute_sql_online.php), but again it's not working.
So, can anyone explain where I am going wrong? | Did you notice what it says in the brackets?
`To delete a column in a table, use the following syntax (notice that some database systems don't allow deleting a column)`
See the answers to these questions:
[websql drop column with javascript](https://stackoverflow.com/questions/15108204/websql-drop-column-with-javascript)
[How to delete or add column in SQLITE?](https://stackoverflow.com/questions/8442147/how-to-delete-or-add-column-in-sqlite)
W3schools uses websql. Try [SQLFiddle](http://sqlfiddle.com/#!8/4d4e1/1) instead. | try like this:
```
USE DatabaseName;
ALTER TABLE Customers
DROP COLUMN City;
``` | SQL ALTER DROP query not working | [
"",
"sql",
"alter",
"sql-drop",
""
] |
I am struggling a little bit with an SQL Statement and was hoping if someone would be willing to point me in the right direction.
Below is a picture of my table:

As you can see here I have a column called 'State'. Basically I want to group all 'State' associated with a particular BuildID and display them in an extra column on my SQL output.
I am close but not quite there.
In the next picture below is an example of the statement I have used to try and achieve this:

As you can see here, it has done SUM and added the TotalTime and created the extra columns I require. However instead of grouping it all into one record, it has created 2 extra lines per BuildID. One with the value for the 'Running' state, another record for the value State of 'Break' and another record that contains the value 0 on both States.
Now what I want it to do is group it into one record for each BuildID. Like the picture I have added below:

The above image is how I want the records displayed. But the problem is I have added the WHERE [State] = Running, which I know is wrong but was just using this as an example. As you can see the 'Break' column has no value.
I hope this makes sense and was hoping if someone could point me in the right direction?
Here is an example on SQL fiddle <http://sqlfiddle.com/#!3/7b6b9/2>
Thanks for taking the time to read this :) | Have refined the sql a bit, here you go:
```
SELECT BuildID,Product, Program,
sum(CASE WHEN State = 'Running' THEN cast(TotalTime as INT) ELSE 0 END) AS Running
, sum(CASE WHEN State = 'Break' THEN cast(TotalTime as INT) ELSE 0 END) AS Breakt
FROM Line1Log
GROUP BY BuildID,Product, Program
```
Please check [SQLFiddle](http://www.sqlfiddle.com/#!3/e460a/10)
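As a quick sanity check, the conditional-aggregation pattern can be reproduced in SQLite with toy rows (made-up data, not the poster's Line1Log):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Line1Log (BuildID INTEGER, State TEXT, TotalTime INTEGER);
    INSERT INTO Line1Log VALUES
        (1, 'Running', 100), (1, 'Break', 20), (1, 'Running', 50);
""")
rows = conn.execute("""
    SELECT BuildID,
           SUM(CASE WHEN State = 'Running' THEN TotalTime ELSE 0 END) AS Running,
           SUM(CASE WHEN State = 'Break'   THEN TotalTime ELSE 0 END) AS Break_
    FROM Line1Log
    GROUP BY BuildID
""").fetchall()
print(rows)  # [(1, 150, 20)] -- one row per BuildID, states pivoted into columns
```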
Not sure if I am missing something :-), but the solution looks pretty straightforward. Let me know. | Move your SUM() OVER(PARTITION BY...) out from the CASE:
[SQL fiddle](http://sqlfiddle.com/#!3/7b6b9/11)
```
select BuildID, Product, Program, Sum(Running) Running, Sum([Break]) [Break]
from (
SELECT Distinct BuildID, Product, Program,
Sum(Case when [State]='Running' Then TotalTime Else 0 END) OVER (Partition by [State], [BuildID]) Running,
Sum(Case when [State]='Break' Then TotalTime Else 0 END) OVER (Partition by [State], [BuildID]) [Break]
From Line1Log) T
group by BuildID, Product, Program
``` | SQL Group By and window function | [
"",
"sql",
"sql-server",
"sql-server-2008",
"window-functions",
""
] |
I have been looking at this statement for ages and simply cannot find the error. Can you guys help?
```
SELECT XD.*, UhED.row_class,
(SELECT id
FROM Comment C
WHERE C.Excel_Data_Excel_Lists_id = XD.Excel_Lists_id
AND C.Excel_Data_row = XD.row
LIMIT 1
) AS has_activity
FROM User_has_Excel_Lists UhXL
JOIN Excel_Lists XL
ON XL.id = UhXL.Excel_Lists_id
JOIN Excel_Data XD
ON XD.Excel_Lists_id = XL.id
LEFT JOIN User_has_Excel_Data UhED
ON UhED.Excel_Data_Excel_Lists_id = XL.id
AND UhED.Excel_Data_row = XD.row
AND UhED.User_id = 1
WHERE UhXL.User_id = 1
AND XL.created > DATE_SUB(DATE(now()), INTERVAL 2)<-- it says that the error is here
GROUP BY XD.telephone
ORDER BY last_name ASC, first_name ASC
``` | ```
AND XL.created > DATE_SUB(DATE(now()), INTERVAL 2 DAY)
```
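For a feel of what the two-day window evaluates to, here is a SQLite sketch of equivalent date arithmetic (SQLite uses date modifiers instead of MySQL's `INTERVAL`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# date(..., '-2 day') plays the role of DATE_SUB(..., INTERVAL 2 DAY)
row = conn.execute(
    "SELECT date('2014-01-10'), date('2014-01-10', '-2 day')"
).fetchone()
print(row)  # ('2014-01-10', '2014-01-08')
```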
Try this; it may help you. | You forgot to specify the interval **unit**. Something like `INTERVAL 2 DAY` or `INTERVAL 2 HOUR` maybe? | SQL Date_Sub syntax error | [
"",
"mysql",
"sql",
"syntax",
""
] |
Consider a table with just two columns, say `col1` and `col2`.
I would like to find the fastest way to compute all the rows having `col2` equal to the `col2` column of a row having `col1` equal to a given value.
For example, consider the case where you want to find the names of all the people having the same surname of a person whose name is 'John' | ```
SELECT
t.*
FROM table t
JOIN table t2 ON t2.col2 = t.col2
WHERE t2.col1 = 'some value'
```
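A minimal SQLite reproduction of the self-join, using the surname example from the question (toy data, with `col1` as name and `col2` as surname):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (col1 TEXT, col2 TEXT);  -- name, surname
    INSERT INTO people VALUES
        ('John', 'Smith'), ('Anna', 'Smith'), ('Bob', 'Jones');
""")
rows = conn.execute("""
    SELECT t.col1
    FROM people t
    JOIN people t2 ON t2.col2 = t.col2
    WHERE t2.col1 = 'John'
    ORDER BY t.col1
""").fetchall()
print(rows)  # [('Anna',), ('John',)] -- everyone sharing John's surname
```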
You will need indexes on both columns to get good performance. | Basically you need this query:
```
select
t1.col1
from
table t1,
table t2
where
t2.col1='John' and
t1.col2=t2.col2
```
If you add an index to the col1, col2 fields it should be fast enough.
If you would like to generate an output table for huge amount of data, then probably an interim create table would be the fastest. | mysql - fastest way to find rows having common field | [
"",
"mysql",
"sql",
""
] |
I have 2 tables with a join and a where clause. Example contents of the 2 tables:
```
Id FieldA Id FieldB
1 100 1 Yellow
2 100 2 Green
3 200 3 Green
4 200 4 Blue
5 300 5 Yellow
6 300 6 Orange
```
I am trying to return everything except where fieldA = 200 AND fieldB = Green. So it should still return line 2 that has fieldA = 100 and FieldB = Green. However, here is my query and it is not working. It is excluding all rows with 200 and green in them:
```
select t1.FieldA, t2.FieldB
FROM test1 t1
JOIN test2 t2 ON t1.Id = t2.Id
WHERE (t1.FieldA <> 200 AND t2.FieldB <> 'Green')
```
The way I see it, after running this query the only row excluded should be row 3 because it has fielda = 200 and fieldb = green, but instead it only returns row1, row 5 & row6. It seems to me that it should only do that if I am using an OR.
Let me know where I am going wrong, and here is some DDL so you can play with it:
```
create table dbo.test1
(
Id int not null,
FieldA int
)
create table dbo.test2
(
Id int not null,
FieldB varchar(10)
)
INSERT INTO test1 (Id, FieldA)
VALUES
(1,100),
(2,100),
(3,200),
(4,200),
(5,300),
(6,300)
INSERT INTO test2 (Id, FieldB)
VALUES
(1,'Yellow'),
(2,'Green'),
(3,'Green'),
(4,'Blue'),
(5,'Yellow'),
(6,'Orange')
``` | Each condition is being evaluated independently against the whole set of rows. To combine them, flip your operators and negate the combination, like so:
```
select t1.FieldA, t2.FieldB
FROM test1 t1
JOIN test2 t2 ON t1.Id = t2.Id
WHERE not (t1.FieldA = 200 AND t2.FieldB = 'Green')
```
Your original query was basically saying, first eliminate all the rows where FieldA is not 200, and then, from the rows that remain, eliminate all the ones where FieldB is not 'Green'.
When you want both conditions to apply for a given row, you first select for the conditions you want to exclude, which is why you switch from `<>` to `=`, then make your `WHERE` clause exclude the whole thing by applying the `NOT` operator.
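Using the DDL from the question, a quick SQLite check (driven from Python here) confirms that only row 3 is excluded:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test1 (Id INTEGER NOT NULL, FieldA INTEGER);
    CREATE TABLE test2 (Id INTEGER NOT NULL, FieldB TEXT);
    INSERT INTO test1 VALUES (1,100),(2,100),(3,200),(4,200),(5,300),(6,300);
    INSERT INTO test2 VALUES (1,'Yellow'),(2,'Green'),(3,'Green'),
                             (4,'Blue'),(5,'Yellow'),(6,'Orange');
""")
rows = conn.execute("""
    SELECT t1.Id, t1.FieldA, t2.FieldB
    FROM test1 t1
    JOIN test2 t2 ON t1.Id = t2.Id
    WHERE NOT (t1.FieldA = 200 AND t2.FieldB = 'Green')
    ORDER BY t1.Id
""").fetchall()
print(rows)  # rows 1, 2, 4, 5, 6 -- only row 3 (200, 'Green') is excluded
```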
**EDIT re: comment**
I think the confusion about the results returned by your original query and the idea that conditions in parenthesis are "evaluated as one" might stem from the fact that logical negation is not distributive, i.e. the negation of `A && B` is not `~A && ~B`, but rather `~(A && B)`.
Your first sentence describing the results you want is pretty close to the correct t-sql for the query. You say "I am trying to return everything except where fieldA = 200 AND fieldB = Green." The last part of the sentence is your where clause, i.e.
```
except where fieldA = 200 AND fieldB = Green
```
Substitute "not" for "except"
```
not where fieldA = 200 AND fieldB = Green
-- or, to make the grouping explicit
not (where fieldA = 200 AND fieldB = Green)
```
and clean it up to be valid t-sql syntax
```
where not (fieldA = 200 AND fieldB = Green)
```
By contrast, the English equivalent to `WHERE (t1.FieldA <> 200 AND t2.FieldB <> 'Green')` might be: return everything where field1 is anything but 200 and field1 is anything but green. In which case a match for either 200 or green would be sufficient to exclude the row.
To see why rows 2 and 4 were erroneously excluded, consider the truth table for your original where clause:
```
Field1 <> 200
T F
-----------------
T | T | F |
| | row 4 |
Field2 <> 'Green' -----------------
F | F | F |
| row 2 | |
-----------------
```
In other words, row 2 gets excluded because `Field2 = 'Green'`, making the condition `Field2 <> 'Green'` evaluate to `FALSE`, so it doesn't matter what `Field1` is, because `FALSE` and any other value is always `FALSE`. | When you do a join on Id, it creates a temporary table as follows
```
Id FieldA FieldB
1 100 Yellow
2 100 Green
3 200 Green
4 200 Blue
5 300 Yellow
6 300 Orange
```
When you say FieldA <> 200, rows 3 and 4 get excluded and the remaining rows will be 1, 2, 5, 6.
Now when you say FieldB <> Green, row 2 gets excluded, resulting in rows 1, 5, 6.
**Note:** Conditions in 'where' Clause are not applied based on the order you specify. Rather, it is applied on the sql execution plan at run time and the order of specifying the where conditions will not have any impact on the result.
To get the result, use the below condition
```
select t1.FieldA, t2.FieldB
FROM test1 t1
JOIN test2 t2 ON t1.Id = t2.Id
WHERE NOT (t1.FieldA = 200 AND t2.FieldB = 'Green')
``` | SQL Where clause excluding more than it should | [
"",
"sql",
"t-sql",
"where-clause",
""
] |
If I have a really big table, will this query load the whole table in memory before it filters the results:
```
with parent as
(
select * from a101
)
select * from parent
where value1 = 159
```
As you can see, the parent query references the whole table. Will this be loaded in memory? This is a very simplified version of the query; the real query has a few joins to other tables. I am evaluating SQL Server 2012 and PostgreSQL. | In PostgreSQL (true as of 9.4, at least) [CTEs act as *optimisation fences*](https://dba.stackexchange.com/q/27425/7788).
The query optimiser will not flatten CTE terms into the outer query, push down qualifiers, or pull up qualifiers, even in trivial cases. So an unqualified `SELECT` inside a CTE term will always do a full table scan (or an index-only scan if there's a suitable index).
Thus, in PostgreSQL, these two things are very different indeed, as a simple `EXPLAIN` would show:
```
with parent as
(
select * from a101
)
select * from parent
where value1 = 159
```
and
```
SELECT *
FROM
(
SELECT * FROM a101
) AS parent
WHERE value1 = 159;
```
However, that "will scan the whole table" doesn't necessarily mean "will load the whole table in memory". PostgreSQL will use a TupleStore, which will transparently spill to a tempfile on disk as it gets larger.
The original justification was that DML in CTE terms was planned (and later implemented). If there's DML in a CTE term it's vital that its execution be predictable and complete. This may also be true if the CTE calls data-modifying functions.
Unfortunately, nobody seems to have thought *"... but what if it's just a SELECT and we want to inline it?"*.
Many in the community appear to see this as a *feature* and regularly promulgate it as a workaround for optimiser issues. I find this attitude utterly perplexing. As a result, it's going to be really hard to fix this later, because people are intentionally using CTEs when they want to prevent the optimiser from altering a query.
In other words, PostgreSQL abuses CTEs as pseudo-query-hints (along with the `OFFSET 0` hack), because project policy says real query hints aren't desired or supported.
AFAIK MS SQL Server may optimise CTE barriers, but may also choose to materialise a result set. | I just made `EXPLAIN` for this query in PostgreSQL. Surprisingly it does sequence scan instead of index lookup:
```
CTE Scan on parent (cost=123.30..132.97 rows=2 width=1711)
Filter: (value1 = 159)
CTE parent
-> Seq Scan on a101 (cost=0.00..123.30 rows=430 width=2060)
```
I have a primary key index on `value1` and it is used for simple `select * from a101 where value1 = 159` query.
So, the answer is that it will scan the whole table. I am surprised; I thought it would work like a view or subquery, but it does not. You can use this to make use of the index:
```
select * from (select * from a101) parent
where value1 = 159
``` | Will this query load the whole table in memory | [
"",
"sql",
"sql-server",
"postgresql",
""
] |
I have a table named test(id, params)
which has the value (1, $ch=20$ph=9875567$ng=hutdj).
I want to extract only the ph value; that means the output should be 9875567 | You can do it like this:
```
DECLARE @searchString NVARCHAR(50) = '$ch=20$ph=9875567$ng=hutdj'
DECLARE @startFrom INT= CHARINDEX('ph=',@searchstring) + 3
DECLARE @length INT = CHARINDEX('$ng',@searchstring) - @startFrom
SELECT SUBSTRING(@searchstring,@startFrom, @length) AS RESULT
```
Demo **[SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/31892)**
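The same CHARINDEX/SUBSTRING arithmetic can be sanity-checked in Python; note Python indexing is 0-based, but the offsets cancel out in the subtraction:

```python
s = '$ch=20$ph=9875567$ng=hutdj'

start = s.index('ph=') + 3        # CHARINDEX('ph=', @searchstring) + 3
length = s.index('$ng') - start   # CHARINDEX('$ng', @searchstring) - @startFrom
print(s[start:start + length])    # 9875567
```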
The inline version:
```
DECLARE @searchString NVARCHAR(50) = '$ch=20$ph=9875567$ng=hutdj'
SELECT SUBSTRING(@searchstring,
CHARINDEX('ph=',@searchstring) + 3,
CHARINDEX('$ng',@searchstring) - CHARINDEX('ph=',@searchstring) - 3) AS RESULT
```
Demo **[SQL Fiddle](http://sqlfiddle.com/#!3/d41d8/31896)** | Try this...
The first SUBSTRING returns all characters after $ph=, and the second SUBSTRING returns the characters of that result up to the next $.
```
SET @params = SUBSTRING(
SUBSTRING(@s, PATINDEX('%$ph%',@s) + 4 ,
LEN(@S)- PATINDEX('%$ph%',@s)-3),
0,
PATINDEX('%$%', SUBSTRING(@s, PATINDEX('%$ph%',@s) + 4 ,
LEN(@S)- PATINDEX('%$ph%',@s)-3)))
``` | how to split values in column in SQL based on string | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm asking for direction.
I have a .sql file of tables - customer, product and shopping cart. The shopping cart has the customerID and productID both auto incremented and both as the primary keys. I INSERTed a record into the shopping cart for a specific customerID, specific productID with a quantity of 1. I INSERTed another record into the shopping cart for the same customer.
When trying to import into phpMyAdmin, I get an error: #1062 - Duplicate entry '0-0' for key 'PRIMARY'
How do I add more than one record to the shopping cart with the same customerID?
```
CREATE TABLE Shopping_Cart(
CustID INTEGER UNSIGNED NOT NULL,
ProdID INTEGER UNSIGNED NOT NULL,
Quantity INTEGER UNSIGNED NOT NULL,
PRIMARY KEY(CustID, ProdID));
INSERT INTO Shopping_Cart VALUES("BostockG12345A", "DULL12275226", 2);
INSERT INTO Shopping_Cart VALUES("BostockG12345A", "jPET1224108", 1);
```
Thank you all. I will change it from an integer to VARCHAR when I get home. I created the table before I assigned IDs. But that leads me to another question.
For the shopping cart I made up the CustID and ProdID even though they are auto\_increment in the Customer table and Product table. I'm getting a little confused about which comes first, the chicken or the egg (the auto\_increment CustID and ProdID, or the INSERT's CustID and ProdID). How do I know what the auto\_increment ID number is for the shopping cart's INSERT if it's all in the same file? Should I do a query for the ID of the customer whose name is Bostock and the ID of that specific product, then do the INSERT? | ```
CustID INTEGER UNSIGNED NOT NULL,  -- integer column
ProdID INTEGER UNSIGNED NOT NULL,  -- integer column

INSERT INTO Shopping_Cart VALUES("BostockG12345A", "DULL12275226", 2);
INSERT INTO Shopping_Cart VALUES("BostockG12345A", "jPET1224108", 1);

-- The problem may be that you are inserting string data into integer columns.
``` | `CustID` and `ProdID` are integer columns in the database, but you are inserting strings ("BostockG12345A", "DULL12275226"), which MySQL tries to convert to numbers. The result of the conversion is 0, so you end up inserting 2 records with CustID = 0 and ProdID = 0. | sql cannot insert same customer ID into shopping cart | [
"",
"mysql",
"sql",
"auto-increment",
"shopping-cart",
""
] |
Due to a recent change in business rules, a certain set of data used to be stored in one database and is now stored in a different database on the same server. As things are currently set up, if a user wants to query data from a range of dates which overlaps the time when business rules were changed, they're forced to use an IF statement in their code.
I would like to create a table-valued function that will abstract this change in business rules for users making such a query. Currently I'm trying to execute code similar to this:
```
CREATE FUNCTION dbo.someFunction (date DATETIME)
RETURNS TABLE
AS
RETURN
IF (date <= business_rules_change_date)
BEGIN
SELECT (A select statement)
FROM (Old Database)
WHERE (criteria)
END
ELSE
SELECT (A similar select statement)
FROM (New Database)
JOIN (A Join Statement)
WHERE (criteria)
GO
```
and I'm being told that there is a syntax error around the IF statement. Is there a good solution to my problem? | Or, as Yuriy has suggested, you can make use of a multi-statement table-valued function, something like this...
```
CREATE FUNCTION dbo.someFunction (date DATETIME)
RETURNS @TABLE TABLE
(
-- Define the structure of table here
)
AS
BEGIN
IF (date <= business_rules_change_date)
BEGIN
INSERT INTO @TABLE
SELECT (A select statement)
FROM (Old Database)
WHERE (criteria)
END
ELSE
BEGIN
INSERT INTO @TABLE
SELECT (A similar select statement)
FROM (New Database)
JOIN (A Join Statement)
WHERE (criteria)
END
RETURN
END
``` | You can try something like this
```
CREATE FUNCTION dbo.someFunction (date DATETIME)
RETURNS @RtnTable TABLE (SomeValue varchar(250))
AS
BEGIN
IF (date <= business_rules_change_date)
BEGIN
INSERT INTO @RtnTable
SELECT (A select statement)
FROM (Old Database)
WHERE (criteria)
END
ELSE
BEGIN
INSERT INTO @RtnTable
SELECT (A similar select statement)
FROM (New Database)
JOIN (A Join Statement)
WHERE (criteria)
END
RETURN
END
```
This example assumes that your function returns a resultset with a single varchar column. Based on your IF condition you can populate it with results of different SELECT statement. Adjust it as needed. | If statement to determine which SELECT to return in Table-Valued Function | [
"",
"sql",
"sql-server",
""
] |
I have a problem with this query; I want to retrieve records, limited to the first twenty.
The error is: {"Incorrect syntax near 'LIMIT'."}
```
"SELECT * FROM [upload_news] WHERE [country]='" + country.Text + "' ORDER BY [upload_time] DESC LIMIT 20";
``` | You can't use `LIMIT` with SQL Server. You can use [`Top 20`](http://technet.microsoft.com/en-us/library/ms189463.aspx). Or you can use [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx) and then filter based on that.
Also, you should [parameterize your query](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.aspx); your current query is prone to [SQL Injection](http://en.wikipedia.org/wiki/SQL_injection).
```
using (SqlCommand cmd = new SqlCommand(@"SELECT TOP 20 *
FROM [upload_news]
WHERE [country]=@country ORDER BY [upload_time] DESC", connection))
{
cmd.Parameters.AddWithValue("@country", country.Text);
//,.... rest of the code
}
``` | If it's SQL Server, you need to use Top N.
```
SELECT TOP 20 * FROM [upload_news] WHERE [country]='" + country.Text + "'
ORDER BY [upload_time] DESC
``` | Adding LIMIT with query in asp.net | [
"",
"asp.net",
"sql",
"sql-server",
""
] |
I am trying to create a unique value field which will hold the current ID and the current date. @QuoteNumber is the unique field I want to create.
So far I think I'm on the right track, but I'm getting an error when trying to add the date into my nvarchar.
```
DECLARE @QuoteNumber nvarchar(50)
DECLARE @Date varchar(10)
SELECT @Date = (LEFT(CONVERT(VARCHAR, GETDATE(), 120), 10))
SET @QuoteNumber = 'QU-' + @Date + '-' + SCOPE_IDENTITY()
```
Error:
> Error converting data type varchar to numeric.
Edit
---
I've also tried:
```
SET @Date = CONVERT(varchar(10), GETDATE(), 20)
``` | ```
DECLARE @QuoteNumber nvarchar(50)
DECLARE @Date varchar(10)
SELECT @Date = CONVERT(VARCHAR(10),GETDATE(),121)
SET @QuoteNumber = 'QU-' + @Date + '-' + CAST(SCOPE_IDENTITY() AS VARCHAR(10))
```
It's not the date that is causing the issue; it is the `SCOPE_IDENTITY()` function, which returns INT data, and you are trying to concatenate an INT value with a string. Because INT is a data type with higher precedence, SQL Server tries to convert the string to INT, fails, and you get the error you see. | Try doing it like this:
```
SELECT @Date = CONVERT(VARCHAR(10), GETDATE(), 120);
```
You should always use a length with `varchar()`, although that doesn't affect this query.
Then, `scope_identity()` returns a number, so you need to convert that:
```
SET @QuoteNumber = 'QU-' + @Date + '-' + cast(SCOPE_IDENTITY() as varchar(255));
``` | Convert Date into nvarchar? | [
"",
"sql",
"sql-server",
"date",
""
] |
Part of the task I have been given involves performing calculations on a few columns, 2 of which are in the format of hh.mi.ss and they're varchar. In order for the calculations to work, I need to get them into a time decimal format, whereby 1:30 would be 1.5 . Since I'm currently using SQL Server 2005, I don't have the time or data types built-in and I'm unable to get an upgraded version (not my choice). Working with what I have, I've searched around online and tried to convert it but the result isn't accurate. For example, 13.28 becomes (roughly) 13.5, which is great, however, the seconds go to 100 instead of ending at 60 (since I'm converting it to a float).
For example, using 12.57.46,
```
CAST(DATEPART(HH, CAST(REPLACE([OASTIM], '.', ':') AS DATETIME)) AS FLOAT) +
(CAST(DATEPART(MI, CAST(REPLACE([OASTIM], '.', ':') AS DATETIME)) AS FLOAT)/60) +
(CAST(DATEPART(SS, CAST(REPLACE([OASTIM], '.', ':') AS DATETIME)) AS FLOAT)/3600)
```
gave me 12.962...
whereas
```
CAST(SUBSTRING([OASTIM], 1, 2) AS FLOAT) +
((CAST(SUBSTRING([OASTIM], 4, 5) AS FLOAT) +
CAST(SUBSTRING([OASTIM], 7, 8) AS FLOAT)/60)/60)
```
gave me 12.970....
and when I tried something simpler,
```
DATEPART(HOUR, CAST(REPLACE([OASTIM], '.', ':') AS DATETIME))+
(DATEPART(MINUTE, CAST(REPLACE([OASTIM], '.', ':') AS DATETIME))/60)
```
flopped out and gave me only 12
It's my first exposure to Windows SQL and T-SQL, and I've been struggling with this for a few hours. As horrible as it sounds, I'm at the point where I'd be happy with it working even if it means sacrificing performance. | You don't explain what "time decimal" format is. From your example, I'll guess that you mean decimal hours.
A key function in SQL Server for date differences is `datediff()`. You can convert the time to seconds using a trick. Add the time to a *date*, then use `datediff()` to get the number of seconds after midnight. After that, the conversion to decimal hours is just arithmetic.
Here is an example:
```
select datediff(second,
cast('2000-01-01' as datetime),
cast('2000-01-01 ' + '00:00:59' as datetime)
)/3600.0 as DecimalHours
```
Note the use of the constant `3600.0`. The decimal point is quite important, because SQL Server does integer division on integer inputs. So, `1/2` is `0`, rather than `0.5`. | You said,
```
CAST(SUBSTRING([OASTIM], 1, 2) AS FLOAT) +
((CAST(SUBSTRING([OASTIM], 4, 5) AS FLOAT) +
CAST(SUBSTRING([OASTIM], 7, 8) AS FLOAT)/60)/60)
```
gave me 12.970....
12.970 is wrong for an input of '12.57.46'. The problem is that you are using the SUBSTRING function incorrectly. The 3rd argument represents the number of characters, not the ending character position.
Take a look at this code:
```
Declare @Sample varchar(20)
Set @Sample = '12.57.46'

select CAST(SUBSTRING(@Sample, 1, 2) AS FLOAT) +
CAST(SUBSTRING(@Sample, 4, 5) AS FLOAT) / 60 +
CAST(SUBSTRING(@Sample, 7, 8) AS FLOAT) / 60 / 60,
SUBSTRING(@Sample, 1, 2),
SUBSTRING(@Sample, 4, 5),
SUBSTRING(@Sample, 7, 8),
CAST(SUBSTRING(@Sample, 1, 2) AS FLOAT) +
CAST(SUBSTRING(@Sample, 4, 2) AS FLOAT) / 60 +
CAST(SUBSTRING(@Sample, 7, 2) AS FLOAT) / 60 / 60
```
Notice that the minutes come out as 57.46 because you are asking for 5 characters. The seconds come out correctly because even though you are asking for 8 characters, there are only 2 characters left in the string, so only those 2 characters are returned.
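For reference, the underlying hh.mi.ss to decimal-hours arithmetic, sketched in plain Python (a hypothetical helper, just to show the expected numbers):

```python
def to_decimal_hours(hhmiss: str) -> float:
    """Convert 'hh.mi.ss' to decimal hours, e.g. '12.57.46' -> ~12.9628."""
    h, m, s = (int(part) for part in hhmiss.split('.'))
    return h + m / 60 + s / 3600

print(round(to_decimal_hours('12.57.46'), 4))  # 12.9628
```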
BTW, I would solve this problem the same way that Gordon did, except I would remove the date stuff so it would look like this:
```
Select DateDiff(Second,
0,
Convert(DateTime, Replace([OASTIM], '.',':'))) / 3600.0
``` | Trouble converting from time to decimal time | [
"",
"sql",
"sql-server",
"t-sql",
"datetime-conversion",
""
] |
I would like to filter the result set on the variables that are listed in the CASE statements.
```
SELECT u.id,
max(t.request_at) AS "Date",
sum(CASE
WHEN t.view = 1 THEN 1
ELSE 0 END) AS ONE,
sum(CASE
WHEN t.view = 2 THEN 1
ELSE 0 END) AS TWO,
sum(CASE
WHEN t.view = 3 THEN 1
ELSE 0 END) AS THREE
FROM users u
JOIN t ON u.id = t.uid
WHERE u.signup_city_id = 18
AND u.creationtime BETWEEN '2013-12-01' AND '2014-01-01'
group by 1
```
I would really like to filter something along the lines of: WHERE ONE < 3
i.e. where the column ONE is smaller than 3. | You would use a `having` clause:
```
SELECT u.id, max(t.request_at) AS "Date",
sum(CASE WHEN t.view = 1 THEN 1 ELSE 0 END) AS ONE,
sum(CASE WHEN t.view = 2 THEN 1 ELSE 0 END) AS TWO,
sum(CASE WHEN t.view = 3 THEN 1 ELSE 0 END) AS THREE
FROM users u JOIN
t
ON u.id = t.uid
WHERE u.signup_city_id = 18 AND u.creationtime BETWEEN '2013-12-01' AND '2014-01-01'
group by 1
HAVING sum(CASE WHEN t.view = 1 THEN 1 ELSE 0 END) < 3;
```
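A tiny SQLite illustration of filtering on an aggregate with `HAVING` (toy data, not the poster's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (uid INTEGER, v INTEGER);
    INSERT INTO t VALUES (1,1),(1,1),(1,1),(2,1),(2,2);
""")
rows = conn.execute("""
    SELECT uid,
           SUM(CASE WHEN v = 1 THEN 1 ELSE 0 END) AS one
    FROM t
    GROUP BY uid
    HAVING SUM(CASE WHEN v = 1 THEN 1 ELSE 0 END) < 3
""").fetchall()
print(rows)  # [(2, 1)] -- uid 1 has three v=1 rows, so it is filtered out
```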
Or use a subquery:
```
SELECT t.*
FROM (SELECT u.id, max(t.request_at) AS "Date",
sum(CASE WHEN t.view = 1 THEN 1 ELSE 0 END) AS ONE,
sum(CASE WHEN t.view = 2 THEN 1 ELSE 0 END) AS TWO,
sum(CASE WHEN t.view = 3 THEN 1 ELSE 0 END) AS THREE
FROM users u JOIN
t
ON u.id = t.uid
WHERE u.signup_city_id = 18 AND u.creationtime BETWEEN '2013-12-01' AND '2014-01-01'
group by 1
) t
WHERE ONE < 3;
``` | You need to wrap it into a derived table:
```
select *
from (
SELECT u.id,
max(t.request_at) AS "Date",
sum(CASE
WHEN t.view = 1 THEN 1
ELSE 0 END) AS ONE,
sum(CASE
WHEN t.view = 2 THEN 1
ELSE 0 END) AS TWO,
sum(CASE
WHEN t.view = 3 THEN 1
ELSE 0 END) AS THREE
FROM users u
JOIN t ON u.id = t.uid
WHERE u.signup_city_id = 18
AND u.creationtime BETWEEN '2013-12-01' AND '2014-01-01'
group by 1
) t
WHERE ONE < 3
``` | Using WHERE to filter a CASE statement | [
"",
"sql",
"postgresql",
""
] |
When I create a new comment the text saves fine but the commenter is not being saved. I have verified that the column for commenter exists and that the parameter is being passed. The field just isn't inserting into the table and I have no clue why.
Form:
```
<%= form_for @comment do |f| %>
<p>
<%= f.label :new_comment %>
<%= f.text_field :text %>
<%= hidden_field_tag :user_id, params[:user_id] %>
<%= hidden_field_tag :post_id, params[:id] %>
<%= f.submit "Comment" %>
</p>
<% end %>
```
Action:
```
def create
@user = User.find(params[:user_id])
@post = @user.posts.find(params[:post_id])
@comment = @post.comments.create(params[:comment].permit(:text, :commenter))
redirect_to show_post_path(@user, @post)
end
``` | Add one more field to the form, which will pass the commenter for the comment:
```
<%= form_for @comment do |f| %>
<p>
<%= f.label :new_comment %>
<%= f.text_field :text %>
<%= hidden_field_tag :user_id, params[:user_id] %>
<%= hidden_field_tag :post_id, params[:id] %>
<%= hidden_field_tag 'comment[commenter]', params[:user_id] %>
<%= f.submit "Comment" %>
</p>
<% end %>
```
OR
Change controller code to
```
def create
@user = User.find(params[:user_id])
@post = @user.posts.find(params[:post_id])
@comment = @post.comments.new(params[:comment].permit(:text, :commenter))
@comment.commenter = @user.id
@comment.save
redirect_to show_post_path(@user, @post)
end
``` | I'm guessing you want your `commenter_id` in your form instead of `user_id`. Change your `user_id` line in your form to:
```
<%= hidden_field_tag :commenter_id, params[:commenter_id] %>
```
And then change the third line in your `create` function:
```
@comment = @post.comments.create(params[:comment].permit(:text, :commenter_id))
``` | Rails 4 - comment.create saves :text but not :commenter | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
I have 3 tables: medicationorder, Ordercatalog and Person
```
medicationOrder (
startdate,
enddate,
catalogId,
orderId,
personId)
ordercatalog (
catalogId,
drugId,
IsGeneric,
Brand)
Person (firstname,
lastname,
dob,
personId)
```
I want to retrieve all medication orders for the patient with last name "cat". I tried this; can you please tell me where I'm going wrong? Thanks.
```
select *
from ordercatalog as o, person as p, medicationorder as m
join ordercatalog on o.CatalogId= m.CatalogId,
where (p.PersonId= m.PersonId and p.LastName= "Cat");
``` | You've combined two different SQL standards.
The older standard called SQL89:
```
select
*
from
ordercatalog as o,
person as p,
medicationorder as m
where
o.CatalogId= m.CatalogId
and p.PersonId= m.PersonId
and p.LastName= "Cat"
```
Notice that all the joins are handled in your `where` clause.
Conversely, you can do it based on the [SQL92](http://en.wikipedia.org/wiki/SQL-92) standard that puts your joins in the `from` clause:
```
select
*
from
ordercatalog o
inner join medicationorder m on o.CatalogId = m.CatalogId
inner join person p on m.PersonId = p.PersonId and p.LastName = "Cat"
```
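Both spellings return the same rows on engines that support them; here is a small SQLite check with toy tables loosely named after the question's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Person (PersonId INTEGER, LastName TEXT);
    CREATE TABLE MedicationOrder (OrderId INTEGER, PersonId INTEGER);
    INSERT INTO Person VALUES (1, 'Cat'), (2, 'Dog');
    INSERT INTO MedicationOrder VALUES (10, 1), (11, 1), (12, 2);
""")
sql89 = """SELECT m.OrderId FROM Person p, MedicationOrder m
           WHERE p.PersonId = m.PersonId AND p.LastName = 'Cat'
           ORDER BY m.OrderId"""
sql92 = """SELECT m.OrderId FROM Person p
           JOIN MedicationOrder m ON p.PersonId = m.PersonId
           WHERE p.LastName = 'Cat'
           ORDER BY m.OrderId"""
assert conn.execute(sql89).fetchall() == conn.execute(sql92).fetchall()
print(conn.execute(sql92).fetchall())  # [(10,), (11,)]
```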
You should find that every platform out there these days supports the SQL92 standard, but most will support both. | ```
SELECT * FROM
ORDERCATALOG O INNER JOIN MEDICATIONORDER M
ON O.CATALOGID=M.CATALOGID
INNER JOIN PERSON P
ON P.PERSONID=M.PERSONID
AND P.LASTNAME='CAT';
``` | sql joins how to join multiple tables | [
"",
"sql",
"join",
""
] |
I have a simple Database I use for storing the Url, Title and number of downloads of a youtube video. I'm now using this basic query to insert the Title and Url:
```
$query = "INSERT INTO youtube (ID, Titel, Downloads, Url) VALUES (NULL, '".$title."', '1', '".$my_id."')";
$mysqli->query($query);
```
In the 'Downloads' column I am currently just inserting 1.
My question is: how can I check if the url already exists in the DB, and if it does, update that record so it adds 1 to 'Downloads'? | ```
INSERT INTO youtube VALUES (NULL, 'title', '1', 'id')
ON DUPLICATE KEY update Downloads = Downloads +1
```
Note that you should use prepared statements for all database interactions.
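The same insert-or-increment behaviour can be sketched with SQLite's upsert syntax (SQLite 3.24+ spells it `ON CONFLICT` rather than MySQL's `ON DUPLICATE KEY`; it requires a unique key on the url column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE youtube (Url TEXT PRIMARY KEY, Downloads INTEGER)")

def record_download(url):
    # Insert with Downloads = 1, or bump the counter if the url already exists.
    conn.execute("""
        INSERT INTO youtube (Url, Downloads) VALUES (?, 1)
        ON CONFLICT(Url) DO UPDATE SET Downloads = Downloads + 1
    """, (url,))

record_download("abc123")
record_download("abc123")
print(conn.execute("SELECT Downloads FROM youtube").fetchone())  # (2,)
```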
Here is the code using my safemysql lib for example
```
$sql = "INSERT INTO youtube VALUES (NULL, ?s, 1, ?s)
ON DUPLICATE KEY update Downloads = Downloads + 1";
$db->query($sql, $title, $my_id);
``` | MySQL offers the ON DUPLICATE clause on inserts:
```
INSERT INTO youtube VALUES (NULL, '".$title."', '1', '".$my_id."')
ON DUPLICATE KEY UPDATE downloads = downloads + 1;
``` | Check if record exists if it does add 1 | [
"",
"mysql",
"sql",
""
] |
I don't dabble in SQL queries much and rely on google when I need something more than the basics, and have come up with a problem.
I am trying to calculate a value and it returns a result rounded down to the nearest integer.
To test this out, I wrote the following query:
```
select ELAPTIME AS "ELAPSEC", ELAPTIME/60 AS "ELAPMIN" from CMR_RUNINF
```
The result is:
```
+-----------+-----------+
|ELAPSEC |ELAPMIN |
+-----------+-----------+
|258 |4 |
+-----------+-----------+
|0 |0 |
+-----------+-----------+
|2128 |35 |
+-----------+-----------+
|59 |0 |
+-----------+-----------+
```
I'm trying to do a bit more than this, but I've simplified it to make it easier to explain the problem. How do I ensure that this calculation returns the decimal point? | Your SQL product performs integral division because both operands are integers. `ELAPTIME`'s integer type is determined by the table structure, and `60` is automatically assumed to be an integer because it has no decimal point.
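The effect is easy to reproduce (here with SQLite through Python, but any SQL engine with integer division behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# both operands integer -> integer division; one non-integer -> true division
int_div = conn.execute("SELECT 258 / 60").fetchone()[0]
float_div = conn.execute("SELECT 258 / 60.0").fetchone()[0]
print(int_div, float_div)  # 4 4.3
```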
There are two methods of resolving the issue:
1. Convert either operand to a non-integer numeric type explicitly:
```
CAST(ELAPTIME AS float) / 60
```
2. Write `60.0` instead of `60` so that the parser can see you are not dividing an integer by an integer:
```
ELAPTIME / 60.0
``` | ```
postgres=# SELECT 258/60::float;
?column?
----------
4.3
(1 row)
``` | My PostgreSQL calculations don't include decimals | [
"",
"sql",
"decimal",
""
] |
```
select DISTINCT po. *
from purchaseorder po
inner join poTask pt on (po.purchaseorderid = pt.purchaseorderid)
inner join poTaskline ptl1 on (pt.potaskid = ptl1.potaskid) and ptl1.poTasklinetype = 'M'
inner join poTaskline ptl2 on (pt.potaskid = ptl2.potaskid) and ptl2.poTasklinetype = 'D'
where po.PoStatus = 6
```
This is giving me all purchase orders that have `Tasklinetypes` of both D and M.
But how do I get it so that it only shows purchase orders that have ONLY a D and an M and no more?
I'm very new and it took me the better part of a day to figure out the above. So I'm REALLY stumped now. I hope you don't need more context than that. | I'd use `(NOT) EXISTS` to check for the presence of those entries in the `poTaskline` table:
```
SELECT po.*
FROM purchaseorder po
INNER JOIN poTask pt ON po.purchaseorderid = pt.purchaseorderid
WHERE po.PoStatus = 6 -- PoStatus of 6
AND EXISTS (SELECT 1 FROM poTaskLine ptl WHERE ptl.potaskid = pt.potaskid AND ptl.poTasklinetype = 'M') -- there's a poTaskLine entry for 'M' for this potaskid
AND EXISTS (SELECT 1 FROM poTaskLine ptl WHERE ptl.potaskid = pt.potaskid AND ptl.poTasklinetype = 'D') -- there's a poTaskLine entry for 'D' for this potaskid
AND NOT EXISTS (SELECT 1 FROM poTaskLine ptl WHERE ptl.potaskid = pt.potaskid AND ptl.poTasklinetype NOT IN ('D', 'M')) -- there aren't any poTaskLine entries for anything else
``` | You can use the `HAVING` clause to filter your groups.
```
select po. *
from purchaseorder po
inner join poTask pt
on po.purchaseorderid = pt.purchaseorderid
inner join poTaskline ptl1
on pt.potaskid = ptl1.potaskid
and
(ptl1.poTasklinetype = 'M'
OR ptl1.poTasklinetype = 'D')
where po.PoStatus = 6
GROUP BY po.col1, po.col2... -- You must mention here all necessary fields from
-- purchaseorder which must be in select list
HAVING COUNT(po.id) = 2
``` | COUNT where only two rows | [
"",
"sql",
"join",
"count",
""
] |
I cannot figure out how to form a proper SQL statement for this. I want to get every row from products table but filter the results if certain value in other table is larger than something.
My current statement returns every row that I want to filter correctly but I was unable to make it instead filter the current result and print everything else
```
SELECT p.*
FROM products p
LEFT JOIN orders o ON o.product_id = p.product_id AND order_date < somedate
WHERE o.id IS NULL
```
I want it to print whole products table except filter row if orders table column order\_date is smaller than the given date,
Edit: thanks to everyone who helped me with this! | You are overcomplicating it. You want records from the products table for which a certain order does not exist. So say NOT EXISTS in your WHERE clause.
```
SELECT *
FROM products
WHERE NOT EXISTS
(
SELECT *
FROM orders
WHERE product_id = products.product_id
AND order_date < somedate
);
```
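A runnable sketch of this `NOT EXISTS` approach, using Python's sqlite3 with made-up sample rows (table names from the question, data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (product_id INTEGER, name TEXT);
CREATE TABLE orders (id INTEGER, product_id INTEGER, order_date TEXT);
INSERT INTO products VALUES (1, 'widget'), (2, 'gadget'), (3, 'gizmo');
-- product 1 has an old order, product 2 only a recent one, product 3 none at all
INSERT INTO orders VALUES (10, 1, '2014-01-01'), (11, 2, '2014-06-01');
""")
rows = conn.execute("""
    SELECT name FROM products
    WHERE NOT EXISTS (
        SELECT * FROM orders
        WHERE orders.product_id = products.product_id
          AND orders.order_date < '2014-03-01'
    )
    ORDER BY product_id
""").fetchall()
print([r[0] for r in rows])  # ['gadget', 'gizmo']
```

Only the product with an order before the cutoff date is filtered out; everything else, including products with no orders at all, is kept.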
(You can solve this with an outer join, which always looks a bit quirky in my eyes. Be aware: when outer joining, don't use the outer-joined table's fields in the WHERE clause, because they would of course be NULL for an outer-joined record.) | ```
SELECT p.*
FROM products p
LEFT JOIN orders o ON o.product_id = p.product_id AND order_date < somedate and o.id IS NULL
``` | SQL statement returns only rows that exist also in the joined table | [
"",
"mysql",
"sql",
""
] |
There are 2 sql tables:
1) "users" with the following structure and data:
user\_id, user\_name
```
1, John
2, Mike
3, Chris
4, Paul
5, Kelly
6, Kevin
```
2) "userfriends" which contains friendship relations, with the following structure and data:
userfriends\_user\_id, userfriends\_friend\_id
```
1, 2 => [ John is friend with Mike ]
2, 3 => [ Mike is friend with Chris]
2, 4 => [ Mike is friend with Paul ]
5, 1 => [ Kelly is friend with John]
6, 5 => [ Kevin is friend with Kelly]
```
I want to make a SELECT (user\_name, user\_id) to get those users with whom John has friends in common, so the output should be Chris, Paul, Kevin. Can this be done in only ONE select statement? | Well, I still don't know if I am overcomplicating this. Here is what I've come up with:
```
select * from users
where user_id in
(
select user1_id
from
( -- users except John and their friends (again except John)
select userfriends_user_id as user1_id, userfriends_friend_id as user2_id
from userfriends where userfriends_user_id != 1 and userfriends_friend_id != 1
union
select userfriends_friend_id as user1_id, userfriends_user_id as user2_id
from userfriends where userfriends_user_id != 1 and userfriends_friend_id != 1
) others
where user2_id in
( -- friends of John's
select userfriends_user_id from userfriends where userfriends_friend_id = 1
union
select userfriends_friend_id from userfriends where userfriends_user_id = 1
)
);
```
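To sanity-check the logic, here is the question's sample data run through essentially the same query with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (user_id INTEGER, user_name TEXT);
CREATE TABLE userfriends (userfriends_user_id INTEGER, userfriends_friend_id INTEGER);
INSERT INTO users VALUES (1,'John'),(2,'Mike'),(3,'Chris'),(4,'Paul'),(5,'Kelly'),(6,'Kevin');
INSERT INTO userfriends VALUES (1,2),(2,3),(2,4),(5,1),(6,5);
""")
rows = conn.execute("""
select user_name from users
where user_id in
(
  select user1_id from
  ( -- users except John and their friends (again except John)
    select userfriends_user_id as user1_id, userfriends_friend_id as user2_id
      from userfriends where userfriends_user_id != 1 and userfriends_friend_id != 1
    union
    select userfriends_friend_id as user1_id, userfriends_user_id as user2_id
      from userfriends where userfriends_user_id != 1 and userfriends_friend_id != 1
  ) others
  where user2_id in
  ( -- friends of John's
    select userfriends_user_id from userfriends where userfriends_friend_id = 1
    union
    select userfriends_friend_id from userfriends where userfriends_user_id = 1
  )
)""").fetchall()
print(sorted(r[0] for r in rows))  # ['Chris', 'Kevin', 'Paul']
```

The output matches the expected Chris, Paul, Kevin.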
Here is the SQL fiddle: <http://sqlfiddle.com/#!2/338d87/1>. | You can do this by getting all the friends of your friends. This requires a join from the `userfriends` table back to itself.
Then, just look at the "friends of friends" that appear more than once. You can do this with `group by` and `having`:
```
select uff.userfriends_friend_id
from userfriends uf join
userfriends uff
on uf.userfriends_friend_id = uff.userfriends_user_id
where uf.userfriends_user_id = 1
group by uff.userfriends_friend_id
having count(*) > 1;
``` | Get users which I have friends in common | [
"",
"mysql",
"sql",
""
] |
I am trying to write a query in sql server to grab these sample values:
```
firstname,
lastname,
count(reports where status =1),
count(reports where status =2)
```

how do I get these values in one sql statement?
here is sample output
 | I think you can accomplish it with a PIVOT as well. I'm not sure you would want to:
```
DECLARE @tbl TABLE(UserName VARCHAR(10), Report VARCHAR(10), StatusId INT)
INSERT INTO @tbl VALUES ('bob','report1',1)
,('bob','report2',1)
,('bob','report3',2)
,('jim','report2',1)
,('joe','report3',3)
SELECT UserName, [1] Status1_Count,[2] Status2_Count,[3] Status3_Count
FROM (SELECT UserName
,StatusId
,COUNT(*) ReportCount
FROM @tbl
GROUP BY UserName
,StatusId) src
PIVOT (MAX(ReportCount)
FOR StatusId IN ([1],[2],[3])) pvt
```
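`PIVOT` is T-SQL-specific; the same one-row-per-user result can be had portably with conditional aggregation, sketched here with Python's sqlite3 (reusing the sample rows from the snippet above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (username TEXT, report TEXT, statusid INTEGER)")
conn.executemany("INSERT INTO reports VALUES (?, ?, ?)",
                 [("bob", "report1", 1), ("bob", "report2", 1), ("bob", "report3", 2),
                  ("jim", "report2", 1), ("joe", "report3", 3)])
rows = conn.execute("""
    SELECT username,
           SUM(CASE WHEN statusid = 1 THEN 1 ELSE 0 END) AS status1_count,
           SUM(CASE WHEN statusid = 2 THEN 1 ELSE 0 END) AS status2_count,
           SUM(CASE WHEN statusid = 3 THEN 1 ELSE 0 END) AS status3_count
    FROM reports
    GROUP BY username
    ORDER BY username
""").fetchall()
print(rows)  # [('bob', 2, 1, 0), ('jim', 1, 0, 0), ('joe', 0, 0, 1)]
```

Each `SUM(CASE ...)` plays the role of one pivoted column.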
The GROUP BY and COUNT answer is better in general; it just doesn't do what you asked for, which was to have all the status counts for a user in a single row. | Looks like XY problem to me...
```
select firstname, lastname, status, count(status)
from report
group by firstname, lastname, status
order by firstname, lastname, status
``` | report sql query multiple wheres | [
"",
"sql",
"sql-server",
""
] |
Say that my database was like IMDb, a huge collection of movie titles and their release dates.
```
TITLE DATE
Terminator 2 1991
Tron 1982
Karate Kid 1984
Silence of the Lambs 1991
```
and I want to issue a query that will return to me data in the form
```
1991 2
1982 1
1984 1
```
Meaning that there are two rows that have '1991' in the year field, one row that has '1982' in that field, etc.
Is there a way I can do this purely with an SQL query, or should I be writing something in my program itself to generate this data? | Instead of DISTINCT + COUNT, it would be GROUP BY:
```
select field, count(*) from table group by field
``` | When you do a *COUNT* and *GROUP BY*, the *DISTINCT* isn't necessary because all the duplicates are already eliminated. | Is it possible to combine 'DISTINCT' and 'COUNT' queries, so that I can see how many times each distinct value appears? | [
"",
"mysql",
"sql",
""
] |
Is it possible to make phpMyAdmin rearrange the IDs of a table?
Like when I got `1, 2, 3, 5, 6, ...` PMA makes it `1, 2, 3, 4, 5, 6, ..`?
Thanks | The best way I can think of to renumber data is to use an auto\_increment. Say you want to reorder table `t` with out-of-order / holey ID column `id`.
```
CREATE TABLE `t` ( id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, ... );
```
The first step is to create a new numbering and associate each existing ID with a new ID counting up from 1.
```
-- this could be a temporary table
CREATE TABLE `new_t_ids` (
new_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
current_id INT UNSIGNED NOT NULL
);
INSERT INTO new_t_ids (current_id) SELECT id FROM t ORDER BY id ASC;
```
now we can update `t`:
```
UPDATE t JOIN new_t_ids ON t.id = new_t_ids.current_id SET t.id = new_id;
```
finally we set the auto\_increment. This ends up being excessively tricky because the ALTER can't be set to a variable directly, but it's a good pattern to show off on Stack Overflow anyway (FYI, [I had to look it up](http://forums.mysql.com/read.php?10,277859,277863#msg-277863))
```
SET @next_id = ( SELECT MAX(ID) + 1 from t );
set @s = CONCAT("alter table t auto_increment = ", @next_id);
prepare stmt from @s;
execute stmt;
deallocate prepare stmt;
```
Now your IDs are all renumbered from 1.
We can leave the new\_t\_ids table around for reference, or
```
DROP TABLE new_t_ids;
```
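The whole renumbering procedure can be exercised end to end. Here is a sketch with Python's sqlite3 standing in for MySQL (the AUTO_INCREMENT reset and prepared-statement steps are MySQL-specific and omitted, and SQLite's `UPDATE` lacks a `JOIN`, so the mapping is applied through a correlated subquery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO t (id, name) VALUES (1,'a'), (2,'b'), (3,'c'), (5,'d'), (6,'e');

-- map every existing id to a new gap-free id counting up from 1
CREATE TABLE new_t_ids (
    new_id INTEGER PRIMARY KEY AUTOINCREMENT,
    current_id INTEGER NOT NULL
);
INSERT INTO new_t_ids (current_id) SELECT id FROM t ORDER BY id ASC;

-- rewrite t.id through the mapping (correlated subquery instead of UPDATE...JOIN)
UPDATE t SET id = (SELECT new_id FROM new_t_ids WHERE current_id = t.id);

DROP TABLE new_t_ids;
""")
print([r[0] for r in conn.execute("SELECT id FROM t ORDER BY id")])
```

The gap at 4 closes, leaving ids 1 through 5; the caveats below about foreign keys and concurrent access still apply in full.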
That having been said, this is a bad idea. The numbers don't have to be gap-free. My big concern is that if anything relates to this data and your foreign keys aren't properly defined, you're going to end up breaking all those associations.
Furthermore, if you're doing this to the table online, you'll want to lock it so that the content doesn't change while you're working. Better really to just take the database out of service.
So my *recommendation* is to try to let go of your rather human perspective on how data should look in a computer:) Holes are a lot more OK generally speaking than broken relational data. | If they're subject to change regularly, they're not really IDs. Imagine if your Social Security Number or credit card number changed every time someone died. | Order IDs in a table | [
"",
"mysql",
"sql",
""
] |
I'm new to SQL and I'm stuck with a query.
I have 3 tables employees, departments and salary\_paid.
I'm trying to update bonus column in salary\_paid table by giving this condition
```
give 10% bonus on total salary to the employees who are not in IT departments.
```
I came up with this query
```
update salary_paid
set bonus=(select (0.1*total_salary) "Bonus"
from salary_paid, departments, employees
where
employees.department_id=departments.department_id and
employees.employee_id=salary_paid.employee_id and
departments.department_name!='IT')
;
```
However it returns this error
> ORA-01427: single-row subquery returns more than one row
I'm completely clueless on this, please help.
Thanks in advance | Your inner query (`select (0.1*total_salary) "Bonus" from salary_paid ...`) returns more than one row, so its result can't be assigned to the bonus column.
Instead, try updating using joins, like this:
```
UPDATE
(SELECT salary_paid.bonus as oldBonus, 0.1*salary_paid.total_salary as newBonus
FROM salary_paid
INNER JOIN employees
ON salary_paid.employee_id = employees.employee_id
INNER JOIN departments
ON departments.department_id = employees.department_id
WHERE departments.department_name != 'IT'
) t
SET t.oldBonus = t.newBonus
``` | Try this:
```
UPDATE
(
SELECT *
FROM employees e LEFT JOIN salary_paid sp ON e.employee_id = sp.employee_id
LEFT JOIN departments d ON d.department_id = e.department_id
) t
SET t.bonus = 0.1 * t.total_salary
WHERE t.department_name != 'IT';
```
Your query was updating all the rows in the table with the result of the sub-query. Also, the sub-query was returning more than one row. When setting a value, the sub-query should always return a single row with a single column.
In Oracle, these problems are solved by using join, as shown above. This will update the `bonus` column using values from the respective `total_salary` columns. No need to use sub-query. | Update statement returning single-row subquery returns more than one row | [
"",
"mysql",
"sql",
"oracle",
""
] |
I have data like:
```
=> #<Item id: 8749, asin: "B000V2ACH8", created_at: "2014-03-24 00:15:24">
=> #<Item id: 8750, asin: "B000V2ACH8", created_at: "2014-03-24 14:35:24">
=> #<Item id: 8751, asin: "B000V2ACH8", created_at: "2014-03-24 19:35:24">
=> #<Item id: 8752, asin: "B000V2ACH8", created_at: "2014-03-24 17:35:24">
...
=> #<Item id: 8753, asin: "7770V2A888", created_at: "2014-03-24 21:58:24">
=> #<Item id: 8754, asin: "7770V2A888", created_at: "2014-03-24 05:24:24">
...
=> #<Item id: 8755, asin: "7770V2A888", created_at: "2014-03-23 23:58:24">
=> #<Item id: 8756, asin: "7770V2A888", created_at: "2014-03-23 22:58:24">
...
=> #<Item id: 8757, asin: "7770V2A888", created_at: "2014-03-22 19:58:24">
=> #<Item id: 8759, asin: "7770V2A888", created_at: "2014-03-22 10:58:24">
=> #<Item id: 8760, asin: "7770V2A888", created_at: "2014-03-22 23:33:33">
```
How can I get max(created\_at) per asin and year-month-day, and delete the rest with the same asin and year-month-day (lower than the max for that date)?
Want to get values like ⬇
```
=> #<Item id: 8751, asin: "B000V2ACH8", created_at: "2014-03-24 19:35:24">
=> #<Item id: 8753, asin: "7770V2A888", created_at: "2014-03-24 21:58:24">
=> #<Item id: 8755, asin: "7770V2A888", created_at: "2014-03-23 23:58:24">
=> #<Item id: 8760, asin: "7770V2A888", created_at: "2014-03-22 23:33:33">
``` | Definitely not the best answer, but it works for me :)
Example :)
```
@item.try(:each) do |i|
for j in 0..10
group_records = Item.where(:asin => i.asin, :domain => i.domain, :user_id => i.user_id).where(:created_at => Date.today.beginning_of_day - j.day..Date.today.end_of_day - j.day)
max_date = group_records.select('max(created_at) as m').first.m
group_records.where('created_at != ?', max_date).delete_all
end
end
``` | Scope the records first with `group_records = Item.where('id > 0')` (you can put any conditions in the `where` clause), then:
```
max_date = group_records.select('max(created_at) as m').first.m
group_records.where('created_at < ?', max_date).delete_all
``` | How to delete data from DB with the same dates except the biggest date. Rails 3 | [
"",
"sql",
"ruby-on-rails",
"ruby-on-rails-3",
"postgresql",
"activerecord",
""
] |
I made this using Spring Data. I can't figure out what's wrong with this SQL statement; it was auto-generated by Spring Data.
It says:
```
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'database bit not null, designation
varchar(255), equipment varchar(255), hardwar' at line 1
```
My query:
```
create table joborder (id bigint not null auto_increment,
barPrinter bit not null,
birPermit bit not null,
cabling bit not null,
cashDrawer bit not null,
cashierPrinter bit not null,
ccccods bit not null,
configuration bit not null,
cpu bit not null,
database bit not null,
designation varchar(255),
equipment varchar(255),
hardwareActionTaken varchar(255),
hardwareOthers bit not null,
hardwareRemarks varchar(255),
invoiceNumber varchar(255),
keyboard bit not null,
kitchenPrinter bit not null,
logNumber varchar(255),
magneticCardReader bit not null,
mallInterface bit not null,
menu bit not null,
monitor bit not null,
mouse bit not null,
operatingSystem bit not null,
othersActionTaken varchar(255),
othersOthers bit not null,
othersRemarks varchar(255),
powerSupply bit not null,
reportProblemTask varchar(255),
scanner bit not null,
softwareActionTaken varchar(255),
softwareOthers bit not null,
softwareRemarks varchar(255),
storeLocation varchar(255),
storeRepresentative varchar(255),
technician varchar(255),
timeIn varchar(255),
timeOut varchar(255),
transightECLM bit not null,
transightHQ bit not null,
transightPOS bit not null,
uat bit not null,
store_id bigint,
primary key (id))
``` | Use this query
```
CREATE TABLE `joborder` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`barPrinter` BIT(1) NOT NULL,
`birPermit` BIT(1) NOT NULL,
`cabling` BIT(1) NOT NULL,
`cashDrawer` BIT(1) NOT NULL,
`cashierPrinter` BIT(1) NOT NULL,
`ccccods` BIT(1) NOT NULL,
`configuration` BIT(1) NOT NULL,
`cpu` BIT(1) NOT NULL,
`database` BIT NOT NULL,
`designation` VARCHAR(255) DEFAULT NULL,
`equipment` VARCHAR(255) DEFAULT NULL,
`hardwareActionTaken` VARCHAR(255) DEFAULT NULL,
`hardwareOthers` BIT(1) NOT NULL,
`hardwareRemarks` VARCHAR(255) DEFAULT NULL,
`invoiceNumber` VARCHAR(255) DEFAULT NULL,
`keyboard` BIT(1) NOT NULL,
`kitchenPrinter` BIT(1) NOT NULL,
`logNumber` VARCHAR(255) DEFAULT NULL,
`magneticCardReader` BIT(1) NOT NULL,
`mallInterface` BIT(1) NOT NULL,
`menu` BIT(1) NOT NULL,
`monitor` BIT(1) NOT NULL,
`mouse` BIT(1) NOT NULL,
`operatingSystem` BIT(1) NOT NULL,
`othersActionTaken` VARCHAR(255) DEFAULT NULL,
`othersOthers` BIT(1) NOT NULL,
`othersRemarks` VARCHAR(255) DEFAULT NULL,
`powerSupply` BIT(1) NOT NULL,
`reportProblemTask` VARCHAR(255) DEFAULT NULL,
`scanner` BIT(1) NOT NULL,
`softwareActionTaken` VARCHAR(255) DEFAULT NULL,
`softwareOthers` BIT(1) NOT NULL,
`softwareRemarks` VARCHAR(255) DEFAULT NULL,
`storeLocation` VARCHAR(255) DEFAULT NULL,
`storeRepresentative` VARCHAR(255) DEFAULT NULL,
`technician` VARCHAR(255) DEFAULT NULL,
`timeIn` VARCHAR(255) DEFAULT NULL,
`timeOut` VARCHAR(255) DEFAULT NULL,
`transightECLM` BIT(1) NOT NULL,
`transightHQ` BIT(1) NOT NULL,
`transightPOS` BIT(1) NOT NULL,
`uat` BIT(1) NOT NULL,
`store_id` BIGINT(20) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=INNODB DEFAULT CHARSET=latin1;
``` | **Database** is a reserved **keyword**
you have to escape it with backticks, as in the statement below.
Try this:
```
create table joborder (id bigint not null auto_increment,
barPrinter bit not null,
birPermit bit not null,
cabling bit not null,
cashDrawer bit not null,
cashierPrinter bit not null,
ccccods bit not null,
configuration bit not null,
cpu bit not null,
`database` bit not null,
designation varchar(255),
equipment varchar(255),
hardwareActionTaken varchar(255),
hardwareOthers bit not null,
hardwareRemarks varchar(255),
invoiceNumber varchar(255),
keyboard bit not null,
kitchenPrinter bit not null,
logNumber varchar(255),
magneticCardReader bit not null,
mallInterface bit not null,
menu bit not null,
monitor bit not null,
mouse bit not null,
operatingSystem bit not null,
othersActionTaken varchar(255),
othersOthers bit not null,
othersRemarks varchar(255),
powerSupply bit not null,
reportProblemTask varchar(255),
scanner bit not null,
softwareActionTaken varchar(255),
softwareOthers bit not null,
softwareRemarks varchar(255),
storeLocation varchar(255),
storeRepresentative varchar(255),
technician varchar(255),
timeIn varchar(255),
timeOut varchar(255),
transightECLM bit not null,
transightHQ bit not null,
transightPOS bit not null,
uat bit not null,
store_id bigint,
primary key (id))
``` | mysql spring data error (QUERY) | [
"",
"mysql",
"sql",
"spring-data",
""
] |
I'm trying to update a single record which is the exact copy of another record. Is there any way to limit or select only 1 record while updating?
Thanks | You can use the `FETCH FIRST n ROWS` clause:
```
UPDATE
( SELECT colA FROM tableName t WHERE <where condition> FETCH FIRST 1 ROW ONLY
)
SET t.colA= 'newvalue';
``` | Just curious why exactly you have an exact copy of a record? Do you not have some sort of ID field? Can you show what you've tried?
Really, the best way to avoid this issue must be to have an ID field. | Update only 1 record in DB2 from multiple identical records? | [
"",
"sql",
"db2",
"sql-update",
"db2-luw",
""
] |
I have two tables:
```
groups group_members
v-------------------------------v
+----+------------+ +----+----------+------+
| id | group_name | | id | group_id | name |
+----+------------+ +----+----------+------+
| 1 | admin | | 1 | 1 | Dave |
+----+------------+ +----+----------+------+
| 2 | moderator | | 2 | 1 | Bob |
+----+------------+ +----+----------+------+
| 3 | vip | | 3 | 2 | Sue |
+----+------------+ +----+----------+------+
| 4 | user | | 4 | 2 | Jill |
+----+------------+ +----+----------+------+
| 5 | 2 | Jack |
+----+----------+------+
| 6 | 3 | Bill |
+----+----------+------+
| 7 | 4 | Sam |
+----+----------+------+
```
`group_members` has a foreign key reference from `group_id` to `id` in `groups`.
I want to obtain the roster of members found in a group, but also return the `group_name` in the process. I would like to do this in one query only.
I've tried this query but it returns the group name for every row, I'd like to see the `group_name` as its own row.
```
SELECT groups.group_name,
group_members.name
FROM groups,
group_members
WHERE group_members.group_id = groups.id
AND groups.id = 1
```
The result is:
```
+-------------------+--------------------+
| groups.group_name | group_members.name |
+-------------------+--------------------+
| admin | Dave |
+-------------------+--------------------+
| admin | Bob |
+-------------------+--------------------+
```
I'd like to see:
```
either this | or this |
v v
+-------------------+ +---------------------+
| groups.group_name | | groups_members.name |
+-------------------+ +---------------------+
| admin | | admin |
+-------------------+ +---------------------+
| Dave |
+---------------------+ +---------------------+
| groups_members.name | | Bob |
+---------------------+ +---------------------+
| Dave |
+---------------------+
| Bob |
+---------------------+
```
Would that be possible? | ```
SELECT if(gm.name='',g.group_name,gm.name) as name
FROM groups g
join (select group_id, name from group_members gm
union all
select 1 as group_id, '' as name) gm on
gm.group_id = g.id
WHERE
  g.id = 1
```
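A runnable sketch of the artificial-row trick with Python's sqlite3 (SQLite has no `if()`, so a `CASE` expression stands in, the keyword-ish table name is quoted, and an `ORDER BY` that the original lacks is added to guarantee the group name comes out first):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE "groups" (id INTEGER, group_name TEXT);
CREATE TABLE group_members (id INTEGER, group_id INTEGER, name TEXT);
INSERT INTO "groups" VALUES (1, 'admin'), (2, 'moderator');
INSERT INTO group_members VALUES (1, 1, 'Dave'), (2, 1, 'Bob'), (3, 2, 'Sue');
""")
rows = conn.execute("""
    SELECT CASE WHEN gm.name = '' THEN g.group_name ELSE gm.name END AS name
    FROM "groups" g
    JOIN (SELECT group_id, name FROM group_members
          UNION ALL
          SELECT 1 AS group_id, '' AS name) gm   -- the one artificial header row
      ON gm.group_id = g.id
    WHERE g.id = 1
    ORDER BY gm.name = '' DESC, gm.name
""").fetchall()
print([r[0] for r in rows])  # ['admin', 'Bob', 'Dave']
```

The empty-name sentinel row is swapped for the group name, giving the single-column output the question asked for.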
We can add one more artificial row and choose either group name (for the additional row) or group member name | ```
SELECT groups.group_name
FROM groups,
group_members
WHERE group_members.group_id = groups.id
AND groups.id = 1
UNION
SELECT group_members.name
FROM groups,
group_members
WHERE group_members.group_id = groups.id
AND groups.id = 1
``` | Selecting one cell once, but everything else | [
"",
"mysql",
"sql",
""
] |
```
+------------+--------------------+----------+-------------+
| id_product | name | id_brand | id_category |
+------------+--------------------+----------+-------------+
| 1 | Nokia E55 | 120 | 1 |
| 2 | Nokia E75 (Red) | 101 | 1 |
| 3 | Nokia N86 | 105 | 2 |
| 4 | Nokia 6700 Classic | 110 | 2 |
| 5 | Nokia 6260 Slide | 120 | 1 |
+------------+--------------------+----------+-------------+
```
What I want to do is, I have the following data
```
id_category = 1,2
id_brand = 110,105
```
Now I want the following products
```
Nokia 6700 Classic
Nokia N86
```
only
because there is no brand (110,105) in id\_category 1....... | I believe you are looking for this:
```
WHERE id_category IN (1,2) AND id_brand IN (110,105)
``` | Try this:
```
WHERE id_category in (1,2) and id_brand in (110,105)
``` | Multiple AND conditions in a mysql query | [
"",
"mysql",
"sql",
""
] |
I have a simple query as follows:
```
$q = "SELECT * FROM blah WHERE disabled = '0'";
```
Now for pagination, I need to add `LIMIT` to my query, so:
```
$q = "SELECT * FROM blah WHERE disabled = '0' LIMIT 10,20";
```
And still, I want to know about the number of all rows with `mysql_num_rows`, but in the above query it is always 10, since I'm limiting the results, so for the number of all rows I need to do the same query again without `LIMIT` statement.
And it's somehow stupid to run the same query twice just to get the number of all rows; does anybody have a better solution?
Thanks | MySQL [supports a `FOUND_ROWS()` function](https://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows) to find the unlimited number of rows that would have been returned from the previous limited query.
```
SELECT SQL_CALC_FOUND_ROWS * FROM blah WHERE disabled = '0' LIMIT 10,20
SELECT FOUND_ROWS();
```
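If you would rather avoid the MySQL-only extension, a portable alternative is a window function (MySQL 8+, SQLite 3.25+, and most other engines support it). A sketch with Python's sqlite3 and made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blah (id INTEGER, disabled INTEGER)")
conn.executemany("INSERT INTO blah VALUES (?, 0)", [(i,) for i in range(50)])
rows = conn.execute("""
    SELECT id, COUNT(*) OVER () AS total_rows  -- full count, computed before LIMIT
    FROM blah
    WHERE disabled = 0
    ORDER BY id
    LIMIT 20 OFFSET 10
""").fetchall()
print(len(rows), rows[0][1])  # 20 20 rows of the page, 50 total
```

Each returned row carries the unlimited total alongside the page data, so one round trip suffices.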
Note that (a) you need to include the `SQL_CALC_FOUND_ROWS` option, and (b) that this is a specific MySQL extension that won't work on another RDBMS (though they each *may* have their own way of doing this.)
This isn't necessarily the best way of doing things, even if it might feel like it; you still have to issue two statements, you're introducing non-standard SQL, and the actual `COUNT`ing is likely to be a similar speed to a simple `SELECT COUNT(*)...` anyway. I'd be inclined to stick to the standard way of doing it, myself. | For pagination, you have to run a count query to get the total first.
```
$q = "SELECT count(*) FROM blah WHERE disabled = '0'";
```
Two queries are **necessary**. | Get count query results with ignoring the LIMIT statement | [
"",
"mysql",
"sql",
""
] |
I am using SQL Server 2008 and I have data like below. The row count is uncertain: it is 44 for this sample, but it can be more or less. The scale is always constant.
```
+----------------------------------------------------+-------+
| Level | Count |
+----------------------------------------------------+-------+
| 0 - 10 | 49 |
| 11 - 20 | 11 |
| 21 - 30 | 15 |
| 31 - 40 | 19 |
| 41 - 50 | 18 |
| 51 - 60 | 9 |
| 61 - 70 | 0 |
| 71 - 80 | 2 |
| 81 - 90 | 2 |
| 91 - 100 | 1 |
| 101 - 9999 | 9 |
| 0 - 10 | 47 |
| 11 - 20 | 6 |
| 21 - 30 | 5 |
| 31 - 40 | 3 |
| 41 - 50 | 3 |
| 51 - 60 | 5 |
| 61 - 70 | 9 |
| 71 - 80 | 5 |
| 81 - 90 | 8 |
| 91 - 100 | 14 |
| 101 - 9999 | 30 |
| 0 - 10 | 46 |
| 11 - 20 | 3 |
| 21 - 30 | 4 |
| 31 - 40 | 4 |
| 41 - 50 | 4 |
| 51 - 60 | 1 |
| 61 - 70 | 7 |
| 71 - 80 | 14 |
| 81 - 90 | 13 |
| 91 - 100 | 15 |
| 101 - 9999 | 24 |
| 0 - 10 | 43 |
| 11 - 20 | 4 |
| 21 - 30 | 3 |
| 31 - 40 | 1 |
| 41 - 50 | 7 |
| 51 - 60 | 3 |
| 61 - 70 | 8 |
| 71 - 80 | 15 |
| 81 - 90 | 13 |
| 91 - 100 | 17 |
| 101 - 9999 | 21 |
+----------------------------------------------------+-------+
```
I would like it to be like below,
```
+--------+---------+---------+---------+---------+---------+---------+---------+---------+----------+-------+
| 0 - 10 | 11 - 20 | 21 - 30 | 31 - 40 | 41 - 50 | 51 - 60 | 61 - 70 | 71 - 80 | 81 - 90 | 91 - 100 | 101 - |
+--------+---------+---------+---------+---------+---------+---------+---------+---------+----------+-------+
| 49 | 11 | 15 | 19 | 18 | 9 | 0 | 2 | 2 | 1 | 9 |
| 47 | 6 | 5 | 3 | 3 | 5 | 9 | 5 | 8 | 14 | 30 |
| 46 | 3 | 4 | 4 | 4 | 1 | 7 | 14 | 13 | 15 | 24 |
| 43 | 4 | 3 | 1 | 7 | 3 | 8 | 15 | 13 | 17 | 21 |
+--------+---------+---------+---------+---------+---------+---------+---------+---------+----------+-------+
```
How can I do this? | You can use the [pivot](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) keyword to pivot a row entry into a column.
```
SELECT [0 - 10],
[11 - 20],
[21 - 30],
[31 - 40],
[41 - 50],
[51 - 60],
[61 - 70],
[71 - 80],
[81 - 90],
[91 - 100],
[101 - 9999]
FROM (
SELECT row_number() over(partition by [Level] order by [Count]) as [RowNumber],
[Level],
[Count]
FROM TableName
) as t
PIVOT (MIN([Count])
FOR [Level] IN ([0 - 10], [11 - 20], [21 - 30], [31 - 40], [41 - 50], [51 - 60], [61 - 70], [71 - 80], [81 - 90], [91 - 100], [101 - 9999])
) as p
``` | ```
SELECT MAX(CASE WHEN LEVEL='0 - 10' THEN Count ELSE 0 END) as '0 - 10',
MAX(CASE WHEN LEVEL='11 - 20' THEN Count ELSE 0 END) as '11 - 20',
....
FROM table
GROUP BY LEVEL
``` | How to convert my data from row to column ? | [
"",
"sql",
"sql-server",
""
] |
I am trying to loop through a list of email addresses with a SQL stored procedure:
-Get the first part of the email addresses and query another table where that part matches a field
```
DECLARE @channelName char(100)
DECLARE @EmailAddress char(100)
DECLARE @ParsedEmailID char(100)
DECLARE gpluscursor CURSOR for Select email from Accounts
OPEN gpluscursor
Fetch NEXT from gpluscursor
into @EmailAddress
While @@Fetch_Status=0
Begin
While @EmailAddress is not NULL
Begin
PRINT @EmailAddress
set @ParsedEmailID = (SELECT SUBSTRING(email,0,CHARINDEX('-',email)) from Accounts where email = @EmailAddress)
PRINT @ParsedEmailID
set @channelName = (SELECT channelname FROM channel where channelname like '%'+@ParsedEmailID+'%')
PRINT 'mychannel'+@channelName
update Accounts set channel = @channelName where email like @EmailAddress
End
Fetch Next from gpluscursor
into @EmailAddress
PRINT @EmailAddress
End
close gpluscursor
```
Here is the result from the output:
```
test-9724@pages.plusgoogle.com
test
test-9724@pages.plusgoogle.com
test
test-9724@pages.plusgoogle.com
test
```
It appears that the cursor is not moving to the next record. What am I doing wrong here? | ```
SELECT channelName,SUBSTRING(email,0,CHARINDEX('-',email))
from Accounts as A
INNER JOIN channel AS c ON channelname like SUBSTRING(email,0,CHARINDEX('-',email))
```
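The same idea as a single set-based statement, sketched with Python's sqlite3 (`instr`/`substr` standing in for T-SQL's `CHARINDEX`/`SUBSTRING`, with made-up sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Accounts (email TEXT, channel TEXT);
CREATE TABLE channel (channelname TEXT);
INSERT INTO Accounts (email) VALUES ('test-9724@pages.plusgoogle.com');
INSERT INTO channel VALUES ('my-test-channel');
""")
# one UPDATE handles every row: no cursor, no explicit loop
conn.execute("""
    UPDATE Accounts
    SET channel = (SELECT channelname FROM channel
                   WHERE channelname LIKE
                       '%' || substr(Accounts.email, 1, instr(Accounts.email, '-') - 1) || '%')
""")
print(conn.execute("SELECT channel FROM Accounts").fetchone()[0])  # my-test-channel
```

The prefix before the first `-` ('test' here) is extracted per row and matched against the channel names in one pass.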
It isn't the most efficient, but it will get you there. It would be better if we could remove the LIKE comparison, but if that's what you have, that is what we will use. | As far as your cursor is concerned, change your `WHILE` to an `IF`. Your code is getting into an infinite loop...
But as others have mentioned, a more optimized approach would be to not use a `Cursor`
```
DECLARE @channelName char(100)
DECLARE @EmailAddress char(100)
DECLARE @ParsedEmailID char(100)
DECLARE gpluscursor CURSOR for Select email from Accounts
OPEN gpluscursor
Fetch NEXT from gpluscursor
into @EmailAddress
While @@Fetch_Status=0
Begin
IF (@EmailAddress is not NULL)
Begin
PRINT @EmailAddress
set @ParsedEmailID = (SELECT SUBSTRING(email,0,CHARINDEX('-',email)) from Accounts where email = @EmailAddress)
PRINT @ParsedEmailID
set @channelName = (SELECT channelname FROM channel where channelname like '%'+@ParsedEmailID+'%')
PRINT 'mychannel'+@channelName
update Accounts set channel = @channelName where email like @EmailAddress
End
Fetch Next from gpluscursor
into @EmailAddress
PRINT @EmailAddress
End
close gpluscursor
``` | How to loop through a list of strings with a SQL Server stored procedure and update a table | [
"",
"sql",
"sql-server",
""
] |
My tables are:
```
Parent_Child (Parent_SSN, Child_SSN)
Person (SSN, Name, age, sex)
School (Child_SSN, School_Name)
```
I want to select the parents (female, male) who have at least one of their children in a particular school `'X'`. I have a working query, and my MySQL query is:
```
select group_concat(p.name) from person p,parentchild pc,school s
where s.schoolname='X' and s.childssn=pc.childssn and p.ssn=pc.parentssn
group by pc.childssn
```
This displays the result as `parent(male,female)`, but I want the result in `(female,male)` form, and if I group it by parent.sex it displays the results in individual rows and not in a single row. I am out of ideas.
Sample desired output:
```
name
Angela,Jim
```
Output of my above existing query:
```
name
Jim,Angela
``` | Here's a [Sql Fiddle](http://sqlfiddle.com/#!9/76ba4/15) for you.
```
SELECT DISTINCT group_concat(p.name ORDER BY p.sex Asc)
FROM Person p JOIN Parent_Child pc ON p.ssn=pc.Parent_SSN
JOIN School s ON s.Child_SSN = pc.Child_SSN
WHERE s.School_Name='X'
GROUP By pc.Child_SSN;
``` | Try this; it might be possible using sort:
```
SELECT GROUP_CONCAT( p.name ORDER BY p.name DESC ) from person p,parentchild pc,school s
where s.schoolname='X' and s.childssn=pc.childssn and p.ssn=pc.parentssn
group by pc.childssn
``` | Group_concat with columns of a particular type in the first | [
"",
"mysql",
"sql",
""
] |
I was trying to create a procedure as follows:
```
CREATE PROC prc_invoice_add
AS
BEGIN
insert into INVOICE
(INV_NUMBER,CUS_CODE,INV_DATE)
VALUES (8006,1000,cast('30-APRL-08 00:00:00 AM ' as DATETIME2))
END
```
But whenever I execute this procedure this error message pops up :
> "Conversion failed when converting date and/or time from character string". | The problem is in **Aprl**
Pass it as **April** or **Apr**
try this
```
CREATE PROC prc_invoice_add
AS
BEGIN
insert into INVOICE
(INV_NUMBER,CUS_CODE,INV_DATE)
VALUES (8006,1000,cast('30-APRIL-08 00:00:00 AM ' as DATETIME2))
END
``` | Try this:
```
CREATE PROC prc_invoice_add
AS
BEGIN
insert into INVOICE
(INV_NUMBER,CUS_CODE,INV_DATE)
VALUES (8006,1000,cast('30-APR-08 00:00:00 AM ' as DATETIME2))
END
```
Your date format is dd-MMM-yy; in this format, April is written as APR. | Conversion failed when converting date and / or time from character string? | [
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
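Both answers above come down to the month token: 'APR' (or the full 'APRIL') is a valid month name, while the misspelled 'APRL' is not. Python's `strptime` exhibits the same behaviour; this sketch ignores the time-of-day part for brevity:

```python
from datetime import datetime

def parse_inv_date(text: str) -> datetime:
    # dd-MMM-yy with an abbreviated English month name, e.g. 30-APR-08
    return datetime.strptime(text, "%d-%b-%y")

print(parse_inv_date("30-APR-08"))  # 2008-04-30 00:00:00

try:
    parse_inv_date("30-APRL-08")  # the misspelling from the question
except ValueError as exc:
    print("rejected:", exc)
```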
I have two columns of "datetime" type in SQL Server. However, one column contains the valid date I need, and the other one contains only the time (hour, min, ...).
How can I extract the date from the first column and the time from the last column and concatenate them together into a valid datetime column.
What I have tried:
```
select
[Created on],
[Created At],
LEFT([Created On],11),
RIGHT([Created At],8),
LEFT([Created On],11) + RIGHT([Created At],8)
from..
```
The output makes sense, but I want it to be a valid timestamp type in military (24-hour) format, like the first and second columns.
How can I do it?
 | I did some google and this solution below works for me.
```
select
[Created on],
[Created At],
CAST(Convert(date, [Created on]) as datetime) + CAST(Convert(time, [Created At]) as datetime),
LEFT([Created On],11) + RIGHT([Created At],8)
from [VERICAL$Purch_ Inv_ Header]
where [Posting Date] > '2014-02-01'
```
 | try using this expression:-
```
select
DATEADD(MILLISECOND,datepart(MILLISECOND,created_at),
DATEADD(SECOND,datepart(SECOND,created_at),
DATEADD(minute,datepart(minute,created_at),
DATEADD(HOUR,datepart(hour,created_at),created_on))))
from ..
```
It finds the time part from the `created at` column and adds it to `created on`. | SQL server concatenate date and time | [
"",
"sql",
"sql-server",
""
] |
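The accepted answer above casts the date part and the time part separately and adds them. The same combine step, expressed with Python's `datetime` (sample values are made up):

```python
from datetime import datetime

created_on = datetime(2014, 2, 5, 9, 30, 0)    # only the date part matters here
created_at = datetime(1900, 1, 1, 17, 45, 12)  # only the time part matters here

combined = datetime.combine(created_on.date(), created_at.time())
print(combined)  # 2014-02-05 17:45:12
```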
So I am pretty new to SQL and databases in general (I've only designed a very simple one for a minimal site), and I'm trying to work out the best way to design some models for a heavily DB-driven site.

Take, for example, a user-uploaded gallery. I have a gallery table with sensible columns like date uploaded, name, etc., and galleries can belong to one category, of which there will not be many (at most about 6). Should the category be a column of the gallery table? Or should I have a separate table for categories with a many-to-one relationship between the category and gallery tables?

I would like to do things in my views like sorting all galleries in a category by date uploaded. Is there a performance/convenience difference between these? Having the category be a column of the Gallery table certainly seems easier to deal with, but I'm not sure what the best practice is. Thanks.
As a rule of thumb, you are safe to consider the following equivalence:
> Table ~~~ Entity
>
> Column ~~~ Attribute
So, when you need to add a new piece of data, in relation to an Entity (an existing Table), the question you can ask yourself is:
> Is this piece of data an **attribute** of the entity?
If the answer is yes, then you need a new column.
For instance, say you have a table describing the Student entity:
```
Table Student:
[PK] Id
[FK] IdClass
Name
Surname
```
Say you want to also add the GPA of each student. This is obviously an attribute of the student, so you can add the GPA column in the Student table.
If however you wish to define the Department for each Student, you will see that the department **is not** an attribute of a Student. The Department is an entity, it exists and has its own attributes ***outside*** the scope of the student.
Therefore, the attribute of the student is the affiliation to a certain Department, but not the department itself.
So, you will create a new Department table, and use the `Department.Id` as a FK in the Students table.
I hope this helps. Cheers. | If you have a one to many relationship between categories and galleries, you want category to be a separate table. | SQL database - when to use a separate table vs a column of an existing one? | [
"",
"sql",
"database-design",
""
] |
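The separate-table design from the accepted answer above can be sketched end to end with SQLite (table and column names are illustrative): a `category` table, a foreign key on `gallery`, and the "galleries in a category sorted by upload date" query from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE gallery  (id INTEGER PRIMARY KEY, name TEXT, uploaded TEXT,
                           category_id INTEGER REFERENCES category(id));
    INSERT INTO category VALUES (1, 'Nature'), (2, 'Urban');
    INSERT INTO gallery VALUES
        (1, 'Forests', '2014-01-10', 1),
        (2, 'Rivers',  '2014-03-02', 1),
        (3, 'Streets', '2014-02-01', 2);
""")

rows = conn.execute("""
    SELECT g.name FROM gallery g
    JOIN category c ON c.id = g.category_id
    WHERE c.name = 'Nature'
    ORDER BY g.uploaded
""").fetchall()
print([r[0] for r in rows])  # ['Forests', 'Rivers']
```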
I have a t-sql query where sum function is duplicated.
How to avoid duplicating those statements?
```
select
Id,
sum(Value)
from
SomeTable
group by
Id
having
sum(Value) > 1000
``` | It looks like table aliasing is not supported there.
I think `with` should work:
```
with tmptable (id,sumv)
as
(select
Id,
sum(Value) as sumv
from
SomeTable
group by
Id
)
select
id,
sumv
from
tmptable
where
sumv>1000
```
And a fiddle:
<http://sqlfiddle.com/#!6/0d3f2/2> | You need to remove the sum(Value) from the group by clause. | How to avoid duplicating statements with grouping functions? | [
"",
"sql",
"group-by",
"having-clause",
""
] |
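The accepted answer's CTE and the original `HAVING` version above return the same rows; here is a quick SQLite check with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SomeTable (Id INTEGER, Value INTEGER);
    INSERT INTO SomeTable VALUES (1, 600), (1, 700), (2, 300), (3, 2000);
""")

having = conn.execute("""
    SELECT Id, SUM(Value) FROM SomeTable
    GROUP BY Id HAVING SUM(Value) > 1000 ORDER BY Id
""").fetchall()

cte = conn.execute("""
    WITH totals(Id, sumv) AS (
        SELECT Id, SUM(Value) FROM SomeTable GROUP BY Id
    )
    SELECT Id, sumv FROM totals WHERE sumv > 1000 ORDER BY Id
""").fetchall()

print(having, cte)  # both are [(1, 1300), (3, 2000)]
```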
I have character data stored in a column that was imported from a data file. The character data represents an integer value, but the last (rightmost) character isn't always a digit character. I'm attempting to convert the character data into the integer value using a SQL expression, but it's not working.
My attempt at a SQL statement is shown below, and a test case that demonstrates it's not working. My approach is to split off the rightmost character from the string, do the appropriate conversion, and then string it back together and cast to integer.
**Q:** How can I fix my SQL expression to convert this correctly, or what SQL expression can be used to do the conversion?
**DETAILS**
The rightmost character in the string can be one of the values in the "Code" column below. The "Digit" column shows that actual integer value represented by the character, and the "Sign" column shows whether the overall string is to be interpreted as a negative value, or a positive value.
For example, the string value `'023N'` represents an integer value of `-235`. (The rightmost 'N' character represents a digit value of 5, with a negative sign.) The string value of `'104}'` represents an integer value of `-1040`. (The rightmost `'}'` character represents a digit value of '0' and makes the overall integer value negative.)
Here's the table that shows the required conversion.
```
Code Digit Sign
'}' '0' -
'J' '1' -
'K' '2' -
'L' '3' -
'M' '4' -
'N' '5' -
'O' '6' -
'P' '7' -
'Q' '8' -
'R' '9' -
'{' '0' +
'A' '1' +
'B' '2' +
'C' '3' +
'D' '4' +
'E' '5' +
'F' '6' +
'G' '7' +
'H' '8' +
'I' '9' +
```
Here's a table of example values:
```
Create Table #Punch
(
aa varchar(20)
)
Insert Into #Punch values ('046')
Insert into #Punch values ('027')
Insert into #Punch values ('004')
Insert into #Punch values ('020')
Insert into #Punch values ('090')
```
And this is the SQL statement to do the conversion, but it's not working correctly for character strings that have just regular digit characters. (The sample table above shows examples of other character strings that should be converted to an integer value.)
This SQL statement is returning an integer value of `184` for the character string `046`, when I expect it to return `46`.
**Q:** Why is my SQL statement returning an integer value of `184` instead of `46` for the character string `'046'`?
```
select
aa, Answervalue =
(cast(
substring(aa, 1, len(aa)-1) +
case
when right(aa,1) in ('{','}','0') then '0'
when right(aa,1) between 'A' and 'I' then cast(ascii(right(aa,1))-64 as char(1))
when right(aa,1) between 'J' and 'R' then cast(ascii(right(aa,1))-73 as char(1))
else ''
end
as int) *
case
when right(aa,1) in ('{','0') or right(aa,1) between 'A' and 'I' then 1
when right(aa,1) in ('}') or right(aa,1) between 'J' and 'R' then -1
when aa in (aa) then aa
end)
from
(
select aa from #Punch
) bb
```
---
For the given inserted values, the result for "046" comes out as "184"; it should be "46". For "004" the result comes out as "0"; it should be "4". Other than these issues, the logic works fine. If the column value aa is numeric and there are no code characters (for example {, A, N, B, etc.) in the value, I want to keep the original value. So if it is 046, the value should be 46.
Thank you in advance! | I couldn't edit my original (I missed your point about the numeric), so here it is again:
It looks to me like you inserted the wrong data into your table.
Try:
```
Insert Into #Punch values ('04O')
Insert into #Punch values ('02P')
Insert into #Punch values ('00D')
Insert into #Punch values ('02{')
Insert into #Punch values ('09}')
```
As far as checking to see if the value is numeric, that's another issue. Try using:
```
select
aa, Answervalue = CASE WHEN IsNumeric(aa) = 1 THEN aa ELSE
(cast(
substring(aa, 1, len(aa)-1) +
case
when right(aa,1) in ('{','}','0') then '0'
when right(aa,1) between 'A' and 'I' then cast(ascii(right(aa,1))-64 as char(1))
when right(aa,1) between 'J' and 'R' then cast(ascii(right(aa,1))-73 as char(1))
else ''
end
as int) *
case
when right(aa,1) in ('{','0') or right(aa,1) between 'A' and 'I' then 1
when right(aa,1) in ('}') or right(aa,1) between 'J' and 'R' then -1
when aa in (aa) then aa
end) END
from
(
select aa from #Punch
) bb
``` | It looks like one of your problems is this line:
```
when aa in (aa) then aa
```
With a value of `'046'`, the leftmost two characters are `'04'` (integer value 4); multiplying that by `'046'` (integer value 46) gives the integer value 184.
I would do the test of that rightmost character one time, with just one `CASE` expression, rather than checking the same thing in multiple CASE expressions and doing the multiplication.
The original statement is just too much work to figure out what it's doing, with the multiple CASE expressions and the multiplication and the CAST.
Using a single CASE expression makes for a much more straightforward SQL statement; by getting the return expression as a single expression, that's much easier to decipher, even if it means repeating a bit of similar code.
For SQL Server, a somewhat lengthier expression would be MUCH easier to decipher, and would make it much easier for the reader to understand what the expression is doing:
```
SELECT aa
, Answervalue =
CASE RIGHT(aa,1)
WHEN '{' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'0') AS INT)
WHEN 'A' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'1') AS INT)
WHEN 'B' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'2') AS INT)
WHEN 'C' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'3') AS INT)
WHEN 'D' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'4') AS INT)
WHEN 'E' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'5') AS INT)
WHEN 'F' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'6') AS INT)
WHEN 'G' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'7') AS INT)
WHEN 'H' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'8') AS INT)
WHEN 'I' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'9') AS INT)
WHEN '}' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'0') AS INT) * -1
WHEN 'J' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'1') AS INT) * -1
WHEN 'K' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'2') AS INT) * -1
WHEN 'L' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'3') AS INT) * -1
WHEN 'M' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'4') AS INT) * -1
WHEN 'N' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'5') AS INT) * -1
WHEN 'O' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'6') AS INT) * -1
WHEN 'P' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'7') AS INT) * -1
WHEN 'Q' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'8') AS INT) * -1
WHEN 'R' THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'9') AS INT) * -1
ELSE CAST(aa AS INT)
END
FROM #Punch
```
Or, you could do something like this:
```
SELECT aa
, Answervalue =
CASE
WHEN RIGHT(aa,1) IN ('{')
THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'0') AS INT)
WHEN RIGHT(aa,1) BETWEEN 'A' AND 'I'
THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),CAST(ASCII(RIGHT(aa,1))-64 AS CHAR(1))) AS INT)
WHEN RIGHT(aa,1) IN ('}')
THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),'0') AS INT) * -1
WHEN RIGHT(aa,1) BETWEEN 'J' AND 'R'
THEN CAST(CONCAT(LEFT(aa,LEN(aa)-1),CAST(ASCII(RIGHT(aa,1))-73 AS CHAR(1))) AS INT) * -1
ELSE
CAST(aa AS INT)
END
FROM Punch#
```
For MySQL, that would look something like this:
```
SELECT aa
, CASE
WHEN RIGHT(aa,1) IN ('{')
THEN CONCAT(LEFT(aa,CHAR_LENGTH(aa)-1),'0') + 0
WHEN RIGHT(aa,1) BETWEEN 'A' AND 'I'
THEN CONCAT(LEFT(aa,CHAR_LENGTH(aa)-1),CAST(ASCII(RIGHT(aa,1))-64 AS CHAR(1))) + 0
WHEN RIGHT(aa,1) IN ('}')
THEN CONCAT(LEFT(aa,CHAR_LENGTH(aa)-1),'0') * -1 + 0
WHEN RIGHT(aa,1) BETWEEN 'J' AND 'R'
THEN CONCAT(LEFT(aa,CHAR_LENGTH(aa)-1),CAST(ASCII(RIGHT(aa,1))-73 AS CHAR(1))) * -1 + 0
ELSE aa + 0
END AS Answervalue
FROM Punch#
```
**NOTE:** In MySQL, we can replace the `CAST( x AS INT)` with either `CAST( x AS SIGNED)`, or we can do an addition operation to cause an implicit conversion to numeric.
---
I'm not entirely comfortable subtracting 64 or 73 from the ASCII value. (Because I haven't tested that to ensure that works with all character sets.)
I'd actually be tempted to set up a lookup table, and use an outer join operation. Something like this:
```
CREATE TABLE _convert_zoned_decimal
( `zdigit` CHAR(1) NOT NULL PRIMARY KEY
, `rdigit` CHAR(1) NOT NULL
, `rsign` TINYINT NOT NULL
);
INSERT INTO _convert_zoned_decimal VALUES
('}','0',-1),('J','1',-1),('K','2',-1),('L','3',-1),('M','4',-1)
,('N','5',-1),('O','6',-1),('P','7',-1),('Q','8',-1),('R','9',-1)
,('{','0',+1),('A','1',+1),('B','2',+1),('C','3',+1),('D','4',+1)
,('E','5',+1),('F','6',+1),('G','7',+1),('H','8',+1),('I','9',+1)
;
```
With that table, I could use an outer join operation and do the replacement, something like this for MySQL:
```
SELECT aa
, CASE
WHEN z.zdigit IS NOT NULL
THEN CONCAT(LEFT(aa,CHAR_LENGTH(aa)-1),z.rdigit) * z.rsign
ELSE aa + 0
END AS Answervalue
FROM Punch# t
LEFT
JOIN _convert_zoned_decimal z
ON z.zdigit = RIGHT(t.aa,1)
``` | SQL Logic doesn't work | [
"",
"sql",
"sql-server",
""
] |
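The conversion table in the question above is a classic "overpunch" encoding, and the digit/sign mapping that both answers implement can be sanity-checked with a short Python decoder:

```python
POSITIVE = dict(zip("{ABCDEFGHI", "0123456789"))
NEGATIVE = dict(zip("}JKLMNOPQR", "0123456789"))

def decode(raw: str) -> int:
    # The last character may carry both the final digit and the sign;
    # purely numeric strings (e.g. '046') pass through unchanged.
    last = raw[-1]
    if last in POSITIVE:
        return int(raw[:-1] + POSITIVE[last])
    if last in NEGATIVE:
        return -int(raw[:-1] + NEGATIVE[last])
    return int(raw)

print(decode("104}"), decode("00D"), decode("046"))  # -1040 4 46
```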
Hi all, I have no code to show here because I can't wrap my head around a clean way to do this. In our table I have a birthday column that is a timestamp, and I need to find everyone whose birthday occurs within the next 3 months.
So I know I can add 3 months to the sysdate, but how to do the date comparison while ignoring the year has me thrown for a loop. | The challenge is to get the number of months between now and the next birthday interval.
Using `CEIL` we would take the difference between the age-to-be on the next birthday and the present age and use that in the predicate. In this case the result is in decimal years, so we need to compare to a quarter year (3/12):
```
where (ceil(months_between(sysdate, b.birthday)/12) -
months_between(sysdate, b.birthday)/12) <= 3/12;
```
That's a bit clunky. Alternatively, `MOD` would give us the remainder of months to go, so subtracting `MOD` from 12 gives us the months left to go:
```
where (12 - mod(months_between(sysdate, b.birthday),12)) <= 3;
```
Here it is in action with sample data:
```
with b as
(select date '1990-05-10' birthday from dual union all
select date '1970-09-23' from dual union all
select date '2000-04-02' from dual union all
select date '1948-11-12' from dual)
select
b.birthday,
months_between(sysdate, b.birthday) months,
months_between(sysdate, b.birthday)/12 years,
ceil(months_between(sysdate, b.birthday)/12) age_to_be,
ceil(months_between(sysdate, b.birthday)/12) - months_between(sysdate, b.birthday)/12 period_to_birthday,
mod(months_between(sysdate, b.birthday),12) mod_months,
12 - mod(months_between(sysdate, b.birthday),12) months_diff
from b
where (12 - mod(months_between(sysdate, b.birthday),12)) <= 3;
-- Or
--where (ceil(months_between(sysdate, b.birthday)/12) -
-- months_between(sysdate, b.birthday)/12) <= 3/12;
```
---
**EDIT:** I'll leave my old and incorrect answer here for reference and I'll just look the other way.
Just use `ADD_MONTHS()` and `LAST_DAY()` to find all rows where the birthday is inside of a calculated value. No need to worry about the year component, you are not working with string literals.
This example casts `sysdate` as a `timestamp`, then gets the last day of the current month and adds three months to that:
```
select add_months(last_day(cast(sysdate as timestamp)),3) from dual;
```
So using that logic, you can:
```
select *
from your_table
where birthday <= add_months(last_day(cast(sysdate as timestamp)),3);
```
Or just:
```
select *
from your_table
where birthday <= add_months(last_day(sysdate),3);
```
Oracle nicely handles the data type conversion for you. | A person's birthday (anniversary) in the current year, given their date of birth is:
```
TRUNC(SYSDATE,'Y') + (dob - TRUNC(dob,'Y'))
```
You can tell if this is within the next three months with something like:
```
SELECT 'Yes' FROM dual
WHERE TRUNC(SYSDATE,'Y') + (dob - TRUNC(dob,'Y'))
BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE,3);
```
There is only one small problem: if the person was born on 31 December in a leap year, this method has an off-by-one error when you run it in a non-leap year (i.e. it will not identify that person as being within the next three months until the 1st of October). | Oracle - Birthday Date Math | [
"",
"sql",
"oracle",
"date-arithmetic",
""
] |
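The date arithmetic in the answers above can be cross-checked procedurally. This Python sketch uses a fixed "today" so it is reproducible, and approximates three months as 92 days (Feb 29 birthdays are not handled):

```python
from datetime import date, timedelta

def birthday_within_3_months(dob: date, today: date) -> bool:
    # Next anniversary of the date of birth, then check it against today + ~3 months.
    anniversary = dob.replace(year=today.year)
    if anniversary < today:
        anniversary = dob.replace(year=today.year + 1)
    return anniversary <= today + timedelta(days=92)

today = date(2014, 3, 21)  # stand-in for SYSDATE, fixed for reproducibility
print(birthday_within_3_months(date(1990, 5, 10), today))  # True
print(birthday_within_3_months(date(1970, 9, 23), today))  # False
```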
I have a query like
```
select * from emp where emp.id in(1,2,3) and emp.sal>10000
```
I need to add logic so that the check emp.sal>10000 applies only if design.type='Labor'. I tried the below, but it is not working:
```
select * from emp,design where emp.id in(1,2,3) and case when design.type='Labor' then emp.sal>10000
```
Can we implement this? | Simple:
```
select * from emp where emp.id in(1,2,3) and (design.type<>'Labor' OR emp.sal>10000)
```
If design.type='Labor' then emp.sal must be greater than 10000 for the condition to be true. | This should work for you...
```
SELECT * FROM emp
WHERE emp.id IN (1,2,3)
AND ((design.type = 'Labor' AND emp.sal>10000)
OR (design.type <> 'Labor'))
```
The simplification of the OR condition would be..
```
SELECT * FROM emp
WHERE emp.id IN (1,2,3)
AND (design.type <> 'Labor' OR emp.sal > 10000)
``` | Condition in oracle sql query | [
"",
"sql",
"oracle",
""
] |
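The rewritten predicate in both answers above relies on a plain OR; its truth table is easy to verify in Python (the `in (1, 2, 3)` filter is kept for completeness):

```python
def row_matches(emp_id: int, sal: int, design_type: str) -> bool:
    # salary threshold only applies when design.type = 'Labor'
    return emp_id in (1, 2, 3) and (design_type != "Labor" or sal > 10000)

print(row_matches(1, 5000, "Labor"))   # False: Labor and below the threshold
print(row_matches(1, 5000, "Office"))  # True: threshold does not apply
print(row_matches(1, 20000, "Labor"))  # True
print(row_matches(4, 20000, "Labor"))  # False: id not in the list
```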
I have a script which uploads a file and stores the details of the file name in the database. When a document gets uploaded, I want to be able to update the name of the file in the database so that it is followed by an incremental number such as \_1, \_2, \_3 (before the file extension) if the DOCUMENT\_ID already exists. The table structure looks like this:
```
ID | DOCUMENT_ID | NAME | MODIFIED | USER_ID
33 | 81 | document.docx | 2014-03-21 | 1
34 | 82 | doc.docx | 2014-03-21 | 1
35 | 82 | doc.docx | 2014-03-21 | 1
36 | 82 | doc.docx | 2014-03-21 | 1
```
So in the case above I would want ID 35 NAME to be doc\_1.docx and ID 36 NAME to be doc\_2.docx.
This is where I have got to so far. I have retrieved the last file details that have been uploaded:
```
$result1 = mysqli_query($con,"SELECT ID, DOCUMENT_ID, NAME, MODIFIED
FROM b_bp_history ORDER BY ID DESC LIMIT 1");
while($row = mysqli_fetch_array($result1))
{
$ID = $row['ID'];
$documentID = $row['DOCUMENT_ID'];
$documentName = $row['NAME'];
$documentModified = $row['MODIFIED'];
}
```
So this will give me the details I need to see whether the DOCUMENT\_ID exists already. Now I thought it would be best to see if it does exist then by carrying out the following:
```
$sql = "SELECT ID, DOCUMENT_ID
FROM b_bp_history WHERE DOCUMENT_ID = $documentID";
$result2 = mysqli_query($sql);
if(mysqli_num_rows($result2) >0){
/* This is where I need my update */
} else {
/* I don't need an update in here as it will automatically add to the database
table with no number after it. Not sure if I should always add the first one
with a _1 after it so the increment is easy? */
}
```
As you can see from the above, I need an update in there that checks whether a number already exists after the name and, if it does, increments it by one. In the else branch, i.e. if the DOCUMENT\_ID doesn't already exist, I could add the first one with \_1.docx so that the increment will be easier?
If the DOCUMENT\_ID does already exist, the update in the first half will need to check the last number before the extension and increment it by 1, so if it's \_1 then the next will be \_2. Not sure how to do this either. The end result I want is:
```
ID | DOCUMENT_ID | NAME | MODIFIED | USER_ID
33 | 81 | document.docx | 2014-03-21 | 1
34 | 82 | doc.docx | 2014-03-21 | 1
35 | 82 | doc_1.docx | 2014-03-21 | 1
36 | 82 | doc_2.docx | 2014-03-21 | 1
``` | # Generating a Sequence ID Value in MySQL to Represent a Revision ID Based Naming Convention
> I used `MySQL 5.5.32` to develop and test this solution. Be sure to review the bottom section of my solution for a few homework assignments for future consideration in your overall design approach.
## Summary of Requirements and Initial Comments
An external script writes to a document history table. Meta information about a user-submitted file is kept in this table, including its user-assigned name. The OP requests a SQL update statement or procedural block of DML operations that will reassign the original document name to one that represents the concept of a discrete `REVISION ID`.
* The original table design contains an independent primary key: `ID`
* An implied business key also exists in the relationship between `DOCUMENT_ID` (a numerical id possibly assigned externally by the script itself) and `MODIFIED` (a DATE typed value representing when the latest revision of a document was submitted/recorded).
> Although other RDBMS systems have useful objects and built-in features such as Oracle's SEQUENCE object and ANALYTICAL FUNCTIONS, there are options available with MySQL's SQL-based capabilities.
## Setting up a Working Schema
Below is the DDL script used to build the environment discussed in this solution. It should match the OP description with an exception (discussed below):
```
CREATE TABLE document_history
(
id int auto_increment primary key,
document_id int,
name varchar(100),
modified datetime,
user_id int
);
INSERT INTO document_history (document_id, name, modified,
user_id)
VALUES
(81, 'document.docx', convert('2014-03-21 05:00:00',datetime),1),
(82, 'doc.docx', convert('2014-03-21 05:30:00',datetime),1),
(82, 'doc.docx', convert('2014-03-21 05:35:00',datetime),1),
(82, 'doc.docx', convert('2014-03-21 05:50:00',datetime),1);
COMMIT;
```
The table `DOCUMENT_HISTORY` was designed with a `DATETIME` typed column for the column called `MODIFIED`. Entries into the document\_history table would otherwise have a high likelihood of returning multiple records for queries organized around the composite business key combination of: `DOCUMENT_ID` and `MODIFIED`.
## How to Provide a Sequenced Revision ID Assignment
A creative solution to SQL based, partitioned row counts is in an older post: [ROW\_NUMBER() in MySQL](https://stackoverflow.com/questions/1895110/row-number-in-mysql) by @bobince.
**A SQL query adapted for this task:**
```
select t0.document_id, t0.modified, count(*) as revision_id
from document_history as t0
join document_history as t1
on t0.document_id = t1.document_id
and t0.modified >= t1.modified
group by t0.document_id, t0.modified
order by t0.document_id asc, t0.modified asc;
```
**The resulting output of this query using the supplied test data:**
```
| DOCUMENT_ID | MODIFIED | REVISION_ID |
|-------------|------------------------------|-------------|
| 81 | March, 21 2014 05:00:00+0000 | 1 |
| 82 | March, 21 2014 05:30:00+0000 | 1 |
| 82 | March, 21 2014 05:35:00+0000 | 2 |
| 82 | March, 21 2014 05:50:00+0000 | 3 |
```
Note that the revision id sequence follows the correct order that each version was checked in and the revision sequence properly resets when it is counting a new series of revisions related to a different document id.
> **EDIT:** A good comment from @ThomasKöhne is to consider keeping this `REVISION_ID` as a persistent attribute of your version tracking table. This could be derived from the assigned file name, but it may be preferred because an index optimization to a single-value column is more likely to work. The Revision ID alone may be useful for other purposes such as creating an accurate `SORT` column for querying a document's history.
## Using MySQL String Manipulation Functions
Revision identification can also benefit from an additional convention: the column name width should be sized to also accommodate for the appended revision id suffix. Some MySQL string operations that will help:
```
-- Resizing String Values:
SELECT SUBSTR('EXTRALONGFILENAMEXXX',1,17) FROM DUAL
| SUBSTR('EXTRALONGFILENAMEXXX',1,17) |
|-------------------------------------|
| EXTRALONGFILENAME |
-- Substituting and Inserting Text Within Existing String Values:
SELECT REPLACE('THE QUICK <LEAN> FOX','<LEAN>','BROWN') FROM DUAL
| REPLACE('THE QUICK <LEAN> FOX','<LEAN>','BROWN') |
|--------------------------------------------------|
| THE QUICK BROWN FOX |
-- Combining Strings Using Concatenation
SELECT CONCAT(id, '-', document_id, '-', name)
FROM document_history
| CONCAT(ID, '-', DOCUMENT_ID, '-', NAME) |
|-----------------------------------------|
| 1-81-document.docx |
| 2-82-doc.docx |
| 3-82-doc.docx |
| 4-82-doc.docx |
```
## Pulling it All Together: Constructing a New File Name Using Revision Notation
Using the previous query from above as a base, inline view (or sub query), this is a next step in generating the new file name for a given revision log record:
**SQL Query With Revised File Name**
```
select replace(docrec.name, '.', CONCAT('_', rev.revision_id, '.')) as new_name,
rev.document_id, rev.modified
from (
select t0.document_id, t0.modified, count(*) as revision_id
from document_history as t0
join document_history as t1
on t0.document_id = t1.document_id
and t0.modified >= t1.modified
group by t0.document_id, t0.modified
order by t0.document_id asc, t0.modified asc
) as rev
join document_history as docrec
on docrec.document_id = rev.document_id
and docrec.modified = rev.modified;
```
**Output With Revised File Name**
```
| NEW_NAME | DOCUMENT_ID | MODIFIED |
|-----------------|-------------|------------------------------|
| document_1.docx | 81 | March, 21 2014 05:00:00+0000 |
| doc_1.docx | 82 | March, 21 2014 05:30:00+0000 |
| doc_2.docx | 82 | March, 21 2014 05:35:00+0000 |
| doc_3.docx | 82 | March, 21 2014 05:50:00+0000 |
```
These (`NEW_NAME`) values are the ones required to update the `DOCUMENT_HISTORY` table. An inspection of the `MODIFIED` column for `DOCUMENT_ID` = 82 shows that the check-in revisions are numbered in the correct order with respect to this part of the composite business key.
**Finding Un-processed Document Records**
If the file name format is fairly consistent, a SQL `LIKE` operator may be enough to identify the record names which have been already altered. MySQL also offers filtering capabilities through `REGULAR EXPRESSIONS`, which offers more flexibility with parsing through document name values.
What remains is figuring out how to update just a single record or a set of records. The appropriate place to put the filter criteria would be on the outermost part of the query right after the join between aliased tables:
```
...
and docrec.modified = rev.modified
WHERE docrec.id = ??? ;
```
There are other places where you can optimize for faster response times, such as within the internal sub query that derives the revision id value... the more you know about the specific set of records that you are interested in, you can segment the beginning SQL statements to look only at what is of interest.
## Homework: Some Closing Comments on the Solution
This stuff is purely optional and they represent some side thoughts that came to mind on aspects of design and usability while writing this up.
**Two-Step or One-Step?**
With the current design, there are two discrete operations per record: `INSERT` by a script and then `UPDATE` of the value via a SQL DML call. It may be annoying to have to remember two SQL commands. Consider building a second table built for insert only operations.
* Use the second table (`DOCUMENT_LIST`) to hold nearly identical information, except possibly two columns:
1. `BASE_FILE_NAME` (i.e., doc.docx or document.docx) which may apply for multiple HISTORY\_ID values.
2. `FILE_NAME` (i.e., doc\_1.docx, doc\_2.docx, etc.) which will be unique for each record.
* Set a database `TRIGGER` on the source table: `DOCUMENT_HISTORY` and put the SQL query we've developed inside of it. This will automatically populate the correct revision file name at roughly the same moment after the script fills the history table.
> **WHY BOTHER?** This suggestion mainly fits under the category of `SCALABILITY` of your database design. The assignment of a revision name is still a two step process, but the second step is now handled automatically within the database, whereas you'd have to remember to include it everywhere you invoked a DML operation on top of the history table.
**Managing Aliases**
I didn't see it anywhere, but I assume that the `USER` initially assigns some name to the file being tracked. In the end, it appears that it may not matter as it is an internally tracked thing that the end user of the system would never see.
> For your information, this information isn't portrayed to the customer, it is saved in a table in the database as a version history...
Reading the history of a given document would be easier if the "base" name was kept the same once it has been given:

In the data sample above, unless the `DOCUMENT_ID` is known, it may not be clear that all the file names listed are related. This may not necessarily be a problem, but it is a good practice from a semantic point of view to separate user assigned file names as `ALIASES` that can be changed and assigned at will at any time.
Consider setting up a separate table for tracking the "User-Friendly" name given by the end user, and associating it with the document id it is supposed to represent. A user may make hundreds or thousands of rename requests... while the back end file system uses a simpler, more consistent naming approach. | I had similar trouble recently, but I'm using MSSQL and I don't know MySQL syntax, so here is T-SQL code. Hope it will help you!
```
declare
@id int,
@document_id int,
@document_name varchar(255),
@append_name int,
@name varchar(255),
@extension varchar(10)
set @append_name = 1
select top 1
@id = ID,
@document_id = DOCUMENT_ID,
@document_name = NAME
from
b_bp_history
while exists (
select *
from b_bp_history
where
NAME = @document_name and
DOCUMENT_ID = @document_id and
ID <> @id)
begin
set @name = ''
set @extension = ''
declare @dot_index int -- index of dot-symbol in document name
set @dot_index = charindex('.', reverse(@document_name))
if (@dot_index > 0)
begin
set @name = substring(@document_name, 0, len(@document_name) - @dot_index + 1)
set @extension = substring(@document_name, len(@document_name) - @dot_index + 2, len(@document_name) - len(@name))
end
else
set @name = @document_name
if (@append_name > 1) -- if not first try to rename file
begin
if (right(@name, len(cast(@append_name - 1 as varchar)) + 1)) = '_' + cast(@append_name - 1 as varchar)
begin
set @name = substring(@name, 0, len(@name) - (len(cast(@append_name - 1 as varchar))))
end
end
set @name = @name + '_' + cast(@append_name as varchar)
if (len(@extension) > 0)
set @document_name = @name + '.' + @extension
else
set @document_name = @name
set @append_name = @append_name + 1
end
update b_bp_history
set NAME = @document_name
where ID = @id
``` | Increment File Name Before Extension By 1 in the Database | [
"",
"mysql",
"sql",
""
] |
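Stripped of the SQL details, the renaming rule discussed above ("keep the base name; append _1, _2, ... before the extension") is just string manipulation; a small Python sketch:

```python
import os

def next_revision_name(base_name: str, existing_count: int) -> str:
    # First upload keeps the plain name; later uploads get _1, _2, ...
    if existing_count == 0:
        return base_name
    stem, ext = os.path.splitext(base_name)
    return f"{stem}_{existing_count}{ext}"

print([next_revision_name("doc.docx", n) for n in range(3)])
# ['doc.docx', 'doc_1.docx', 'doc_2.docx']
```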
I have a database named "mig"; it has 10 tables. Now I want to create the same database on another system, so I am using the `mysqldump` command, but it shows an error.
I entered command as follows :
```
mysqldump -u root -p root mig >file.sql;
```
This is the error I got:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use near 'mysql
> dump -u root -p root mig >file.sql' at line 1
I am getting the same error when I use:
```
mysqldump -u root -proot mig >file.sql;
```
How can I fix this? | Simply try:
```
mysqldump -u root mig > file.sql
```
**Edit**
`mysqldump` is not a MySQL command; it is a command-line utility. You must call it from your shell command line, not from the MySQL prompt. | When providing the password on the command line you should leave no space after `-p`.
It should look something like:
```
mysqldump -u root -proot mig >file.sql;
``` | Error in Mysqldump command | [
"",
"sql",
"mysql",
""
] |
```
CREATE PROCEDURE findlist @comedianname1 nvarchar(30),@comedianname2 nvarchar(30)
AS
SELECT comedian
FROM Comedian
if @comedianname2!=null
begin
WHERE comedian != @comedianname1 and comedian != @comedianname2
end
else
begin
WHERE comedian != @comedianname1
end
GO
```
I am getting this error : Msg 156, Level 15, State 1, Procedure findlist, Line 7
Incorrect syntax near the keyword 'WHERE'. | You cannot use `IF` inside a `SELECT` statement. You could do this instead in your case:
```
SELECT comedian
FROM Comedian
WHERE
    comedian != @comedianname1 AND
    (comedian != @comedianname2 OR @comedianname2 IS NULL)
```
What does it do? It always filters out `@comedianname1`. It filters out `@comedianname2` only when that parameter is not null; when it is null, the `OR @comedianname2 IS NULL` branch is true for every row, so the second filter is effectively disabled. | You need to use the below query
```
SELECT comedian
FROM Comedian
WHERE comedian != @comedianname1 and (comedian != @comedianname2 OR @comedianname2 IS NULL)
``` | how to use if else in sql server with where clause | [
"",
"sql",
""
] |
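The optional-parameter filter from the record above can be sketched in runnable form. This demo uses Python's stdlib `sqlite3` rather than SQL Server, and the table and data are invented; the point is the NULL-safe `(col <> ? OR ? IS NULL)` pattern, which disables the second filter when the parameter is not supplied:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Comedian (comedian TEXT)")
conn.executemany("INSERT INTO Comedian VALUES (?)",
                 [("Adam",), ("Ben",), ("Carl",)])

def find_list(name1, name2):
    # The OR ... IS NULL branch makes the second comparison true for
    # every row when the optional parameter is NULL.
    rows = conn.execute(
        "SELECT comedian FROM Comedian "
        "WHERE comedian <> ? AND (comedian <> ? OR ? IS NULL)",
        (name1, name2, name2)).fetchall()
    return sorted(r[0] for r in rows)

print(find_list("Adam", "Ben"))  # both names excluded
print(find_list("Adam", None))   # only the first name excluded
```

The same pattern translates directly to the T-SQL procedure, with `@comedianname2` in place of the bound `?` parameters.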
```
strSQL = "INSERT INTO Accounts UserName, Password VALUES ('" & txtUsername.Text & "', '" & txtEncryptedPassword & "');"
```
When the code is executed an error is thrown, but there is no visible problem that I can see. Help! | The word PASSWORD is reserved in MS Access.
You need to use square brackets around that name (Or change it to something different)
```
strSQL = "INSERT INTO Accounts (UserName, [Password]) VALUES (......
```
That said, please use a parameterized query to build SQL commands.
A string concatenation like yours is easily attacked by hackers using [SQL Injection](https://stackoverflow.com/questions/332365/how-does-the-sql-injection-from-the-bobby-tables-xkcd-comic-work)
Also, if the username or password contains a single quote, the resulting sql text built using string concatenation will be invalid.
```
strSQL = "INSERT INTO Accounts (UserName, [Password]) VALUES (?, ?)"
OleDbCommand cmd = new OleDbCommand(strSQL, connection);
cmd.Parameters.AddWithValue("@p1",txtUsername.Text);
cmd.Parameters.AddWithValue("@p2",txtEncryptedPassword);
cmd.ExecuteNonQuery();
``` | You forgot parentheses:
```
strSQL = "INSERT INTO Accounts (UserName, Password) VALUES ('" & txtUsername.Text & "', '" & txtEncryptedPassword & "');"
``` | SQL Insert into statement | [
"",
"sql",
"vb.net",
"insert",
""
] |
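The parameterized-query advice in the record above can be shown end to end. This is a minimal sketch using Python's stdlib `sqlite3` (placeholders are `?`, as with `OleDbCommand`) and an invented in-memory `Accounts` table; it shows that names containing single quotes, including a classic injection attempt, are stored safely without string concatenation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "Password" is quoted here, just as the answer brackets it for Access.
conn.execute('CREATE TABLE Accounts (UserName TEXT, "Password" TEXT)')

def add_account(user, pwd):
    # Placeholders keep quotes in the data from breaking the SQL text.
    conn.execute('INSERT INTO Accounts (UserName, "Password") VALUES (?, ?)',
                 (user, pwd))

add_account("O'Brien", "s3cret")                  # single quote in the name
add_account("x'); DROP TABLE Accounts;--", "pw")  # injection attempt, stored as data

rows = conn.execute(
    "SELECT UserName FROM Accounts ORDER BY UserName").fetchall()
print(rows)  # both rows stored verbatim, table still exists
```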
```
create table approval
(
app_id smallint not null,
visitor_id smallint(5),
approve varchar(5),
primary key (app_id) auto increment
foreign key (visitor_id) references trans_req_visitor(visitor_id)
constraint approval check (approval in ('TRUE', 'FALSE'))
);
```
I can't find what causes the error. Where am I making a mistake? | ```
create table approval(
    app_id smallint not null auto_increment,
    visitor_id smallint(5),
    approve varchar(5),
    primary key (app_id),
    FOREIGN KEY (visitor_id) REFERENCES trans_req_visitor(visitor_id),
    constraint approval check (approve in ('TRUE', 'FALSE'))
);
``` | You are missing a comma after
```
primary key (app_id) auto increment
```
Change your query to
```
create table approval(
app_id smallint not null,
visitor_id smallint(5),
approve varchar(5),
primary key (app_id) auto_increment,
FOREIGN KEY (visitor_id) REFERENCES trans_req_visitor(visitor_id),
constraint approval check (approval in ('TRUE', 'FALSE'))
);
``` | Error code 1064 raises, maybe syntax error | [
"",
"mysql",
"sql",
""
] |
```
public partial class _Default : System.Web.UI.Page
{
SqlConnection con = new SqlConnection("Data Source=LENOVO;Initial Catalog=dbMACARON;Integrated Security=True"); SqlCommand com = new SqlCommand("Select * from PRODUCT");
public void Bind()
{
SqlDataAdapter da = new SqlDataAdapter(com);
DataSet ds = new DataSet();
con.Open();
com.Connection = con;
com.ExecuteNonQuery();
da.Fill(ds, "PRODUCT");
GridView1.DataSource = ds;
GridView1.DataBind();
con.Close();
}
protected void Page_Load(object sender, EventArgs e)
{
con.Open();
com.Connection = con;
SqlDataReader dr;
dr = com.ExecuteReader();
string id = "0";
Source Error:
Line 31: protected void Page_Load(object sender, EventArgs e)
Line 32: {
Line 33: con.Open();
Line 34: com.Connection = con;
Line 35: SqlDataReader dr;
```
It says the error is on `con.Open`: it could not open the database. What did I do wrong?
Sorry, I'm new to this, so I need guidance.
The source error above is what it tells me. | If you are using SQL Server Express edition, try this:
```
SqlConnection con = new SqlConnection(@"Data Source=.\SQLExpress;Initial Catalog=dbMACARON;Integrated Security=True");
``` | 1. Try using port number with the DB server name
2. What is the authentication mode? Windows Authentication or User id/Password based?
If it is userId/password based, then you need to provide password also. | I could not connect to database | [
"",
"asp.net",
"sql",
"database",
""
] |
I have 2 columns:
1. `CategoryID`
2. `Name`
How can I write a `where` clause so that a row is hidden only when `CategoryID = 1` and `Name` has a NULL value?
```
where not (name IS NULL or categoryID = 1)
```
I tried this but it's not working. | Since you indicated that records should not be displayed when "name *only* in a CategoryID = 1 *and* has a NULL value", try this:
```
WHERE CategoryID <> 1 OR
(CategoryID = 1 AND Name IS NOT NULL)
```
This will display records with any `CategoryID`, just not records where `CategoryID = 1` *and* `Name IS NULL`. | If you are asking where name is not null and the categoryID is 1 then this will work.
where (**NOT** name IS NULL **AND** categoryID = 1) | How can I use where with two condition | [
"",
"sql",
"sql-server-2008",
""
] |
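The condition from the record above is easy to verify with a tiny runnable sketch. This uses Python's stdlib `sqlite3` and invented sample rows; only the row with `CategoryID = 1` and a NULL `Name` should be hidden:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (CategoryID INTEGER, Name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "kept"),   # category 1 with a name  -> shown
    (1, None),     # category 1, NULL name   -> hidden
    (2, None),     # other category, NULL ok -> shown
    (2, "also"),
])

rows = conn.execute(
    "SELECT CategoryID, Name FROM t "
    "WHERE CategoryID <> 1 OR (CategoryID = 1 AND Name IS NOT NULL)"
).fetchall()
print(rows)  # only the (1, NULL) row is filtered out
```

An equivalent, shorter form is `WHERE NOT (CategoryID = 1 AND Name IS NULL)`.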
How do we know the first inserted record in a table? Let's consider that we have loaded 1 million records into a table and we don't have any timestamp columns.
How can we find it? | If you have no column that indicates insert order, like a timestamp or a serial field (e.g. a primary key), you have no way to find which row was inserted first. The SQL standard says clearly that data sets are not ordered (there is no "natural" order). Hacks like rownum, rowid, etc. are not guaranteed to survive reindexing the database, reloading the database, and similar operations. | Not 100% accurate (as **piotrpo** pointed out), but if you are OK with the accuracy of the value provided by `ORA_ROWSCN` ([docs](http://docs.oracle.com/cd/B19306_01/server.102/b14200/pseudocolumns007.htm#BABFAFIC)), you can find out which row was inserted first (surely, if it wasn't updated later). To be precise, not even the row, but the block where the row resides - you can use `DBMS_ROWID` functions to drill down to row level (again, not 100% precise):
```
SELECT DBMS_ROWID.ROWID_ROW_NUMBER(ROWID),
SCN_TO_TIMESTAMP(ORA_ROWSCN)
FROM table1 ORDER BY 2 DESC, 1 ASC
``` | First inserted record in a oracle table? | [
"",
"sql",
"oracle",
""
] |
I am trying to understand the update statement in SQL. I have tried many different ways of doing this work but nothing seems to be working. I have to join two tables and, ONLY on the joined rows, update a column's text from "and" to "is".
This is exactly what I am trying to do. I hope this is making sense:
```
update (select t2.text from rules t1 inner join rules t2 on t1.parentid = t2.childid
where t1.parentid > 0 and
t2.value = to_char (t1.position))
set text = replace(text, 'and', 'is');
```
I would really appreciate all your help. | If I have the logic correct, you want to do the replacement in the first table, when the id exists in the second table. Use a `where` clause instead:
```
update table1
set text = replace(text, 'and', 'is')
where exists (select 1
from table2 t2
where t2.id = table1.id
);
```
I am assuming that the condition `t2.id > 0` is a redundant way of specifying a match. Because the two ids are the same, I would use `table1.id > 0`:
```
update table1
set text = replace(text, 'and', 'is')
where id > 0 and
exists (select 1
from table2 t2
where t2.id = table1.id
);
``` | first problem you have, is that you're using `table` keyword in your `select` this is a wrong syntax.
Second problem is that you're telling oracle `set t1.text` when your query result, the `select` doesn't have `t1.text` but has `text`
this query works:
```
update (select t1.text from t1 inner join t2
on t1.id = t2.id
where t2.id > 0)
set text = replace(text, 'is', 'and');
```
Here's a working [fiddle](http://sqlfiddle.com/#!4/d6bf9/2) | Updating only the joined rows | [
"",
"sql",
"oracle",
""
] |
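The `UPDATE ... WHERE EXISTS` pattern from the record above can be demonstrated with a correlated subquery in any SQL engine. This sketch uses Python's stdlib `sqlite3`, invented two-table data, and follows the question's replacement direction ("and" becomes "is"); only the row with a match in `table2` is rewritten:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, text TEXT);
CREATE TABLE table2 (id INTEGER);
INSERT INTO table1 VALUES (1, 'black and white'), (2, 'up and down');
INSERT INTO table2 VALUES (1);   -- only id 1 has a match
""")

# Only rows of table1 whose id exists in table2 are updated.
conn.execute("""
    UPDATE table1
    SET text = REPLACE(text, 'and', 'is')
    WHERE EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = table1.id)
""")

rows = conn.execute("SELECT id, text FROM table1 ORDER BY id").fetchall()
print(rows)
```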
I need to analyze my database. I want to get all table names, their record counts and actual data size of these tables. As you know, record count sometimes may be low but actual size of table can be very high.
Do you have such a script? | Will this work for you? It's a script I use:
```
SELECT
S.name +'.'+ T.name as TableName,
Convert(varchar,Cast(SUM(P.rows) as Money),1) as [RowCount],
Convert(varchar,Cast(SUM(a.total_pages) * 8 as Money),1) AS TotalSpaceKB,
Convert(varchar,Cast(SUM(a.used_pages) * 8 as Money),1) AS UsedSpaceKB,
(SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM sys.tables T
INNER JOIN sys.partitions P ON P.OBJECT_ID = T.OBJECT_ID
INNER JOIN sys.schemas S ON T.schema_id = S.schema_id
INNER JOIN sys.allocation_units A ON p.partition_id = a.container_id
WHERE T.is_ms_shipped = 0 AND P.index_id IN (1,0)
GROUP BY S.name, T.name
ORDER BY SUM(P.rows) DESC
```
By the way, I cast the counts as money so I can get commas. | Another method using CURSOR and temp table.
```
IF OBJECT_ID(N'tempdb..[#TableSizes]') IS NOT NULL
DROP TABLE #TableSizes ;
GO
CREATE TABLE #TableSizes
(
TableName nvarchar(128)
, [RowCount] int
, ReservedSpaceKB int
, DataSpaceKB int
, IndexSizeKB int
, UnusedSpaceKB int
) ;
GO
DECLARE RecCountCursor CURSOR FOR
SELECT S.name+'.'+T.name AS TableName
FROM Sys.tables T INNER JOIN sys.schemas S ON (S.schema_id = T.schema_id)
WHERE S.principal_id = 1
ORDER BY TableName
OPEN RecCountCursor ;
DECLARE @TableName nvarchar(128) ;
FETCH RecCountCursor INTO @TableName ;
WHILE ( @@FETCH_STATUS = 0 )
BEGIN
CREATE TABLE #TempTableSizes
(
[TableName] nvarchar(128)
, [RowCount] char(11)
, [ReservedSpace] varchar(18)
, [DataSpace] varchar(18)
, [IndexSize] varchar(18)
, [UnusedSpace] varchar(18)
) ;
INSERT INTO #TempTableSizes
exec sp_spaceused @objname = @TableName;
UPDATE #TempTableSizes SET [TableName] = @TableName;
INSERT INTO #TableSizes
SELECT [TableName], [RowCount],
SUBSTRING([ReservedSpace], 1, CHARINDEX(' KB', [ReservedSpace])),
SUBSTRING([DataSpace], 1, CHARINDEX(' KB', [DataSpace])),
SUBSTRING([IndexSize], 1, CHARINDEX(' KB', [IndexSize])),
SUBSTRING([UnusedSpace], 1, CHARINDEX(' KB', [UnusedSpace]))
FROM #TempTableSizes
DROP TABLE #TempTableSizes;
FETCH RecCountCursor INTO @TableName ;
end
CLOSE RecCountCursor ;
DEALLOCATE RecCountCursor ;
SELECT *
FROM [#TableSizes]
ORDER BY [ReservedSpaceKB] DESC ;
DROP TABLE #TableSizes ;
``` | How can I get all table names, row counts and their data sizes in a SQL Server 2012 database? | [
"",
"sql",
"sql-server",
""
] |
I got stuck on this because I have a multiple select like this:
```
SELECT * FROM
(
SELECT CASE
WHEN co.grdtxtCertificationOther LIKE '%อย.%' THEN 2
WHEN co.grdtxtCertificationOther LIKE '%มผช%' THEN 3
WHEN co.grdtxtCertificationOther LIKE '%มอก.%' THEN 4
WHEN co.grdtxtCertificationOther LIKE '%ฮาลาล%' THEN 5
WHEN co.grdtxtCertificationOther LIKE '%Q%' THEN 6
WHEN co.grdtxtCertificationOther LIKE '%GMP%' THEN 7
WHEN co.grdtxtCertificationOther LIKE '%GAP%' THEN 8
WHEN co.grdtxtCertificationOther LIKE '%HACCP%' THEN 9
WHEN co.grdtxtCertificationOther LIKE '%เกษตรอินทรีย์%' THEN 10
ELSE 1 END As ID
, co.[grdtxtCertificationOther] AS STDName
, co.[grdtxtCertificationNumberOther] AS STDNumber
, CONVERT(VARCHAR(24),
CASE
WHEN LEN(co.grddatIssueDateOther) >4
THEN co.grddatIssueDateOther
END,109) As SentDate
, dp.libtxtUserID As ParentID
, 1 AS Displayorder
, 0 AS isDisable
FROM Custom.tblR_docProduct dp
INNER JOIN [Custom].[tblR_docProduct_grdCertificationOther] co
ON dp.Id = co._Parent) t
WHERE NOT t.SentDate IS NULL
AND NOT t.STDName IS NULL
AND NOT t.STDNumber IS NULL
```
I need to change the value contained in STDName to NULL for rows whose ID is 2-9,
but I can't use an expression on the result before I've named the subquery as t.
How can I add more if clauses or cases?
I need to add an expression like this:
```
if t.id <> 1 then t.STDName = NULL
``` | Perhaps this will help you construct your query.
```
SELECT star
, is
, evil
, CASE WHEN evil = 1 THEN star ELSE NULL END As we_can_refer_to_this_now
FROM (
SELECT star
, is
, CASE WHEN column = 'a' THEN 1 ELSE 0 END As evil
FROM ...
) As a_subquery
WHERE evil IN (1, 0)
```
Once you have made the inner query a subquery (wrap it in brackets and give it an alias) the columns within are accessible to the outer query. | You just need to repeat the whole `CASE` again, this time when selecting `STDName`
```
SELECT * FROM
(
SELECT CASE
WHEN co.grdtxtCertificationOther LIKE '%อย.%' THEN 2
WHEN co.grdtxtCertificationOther LIKE '%มผช%' THEN 3
WHEN co.grdtxtCertificationOther LIKE '%มอก.%' THEN 4
WHEN co.grdtxtCertificationOther LIKE '%ฮาลาล%' THEN 5
WHEN co.grdtxtCertificationOther LIKE '%Q%' THEN 6
WHEN co.grdtxtCertificationOther LIKE '%GMP%' THEN 7
WHEN co.grdtxtCertificationOther LIKE '%GAP%' THEN 8
WHEN co.grdtxtCertificationOther LIKE '%HACCP%' THEN 9
WHEN co.grdtxtCertificationOther LIKE '%เกษตรอินทรีย์%' THEN 10
ELSE 1 END As ID,
CASE
WHEN co.grdtxtCertificationOther LIKE '%อย.%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%มผช%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%มอก.%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%ฮาลาล%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%Q%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%GMP%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%GAP%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%HACCP%' THEN NULL
WHEN co.grdtxtCertificationOther LIKE '%เกษตรอินทรีย์%' THEN NULL
ELSE co.[grdtxtCertificationOther]
END AS STDName
, co.[grdtxtCertificationNumberOther] AS STDNumber
, CONVERT(VARCHAR(24),
CASE
WHEN LEN(co.grddatIssueDateOther) >4
THEN co.grddatIssueDateOther
END,109) As SentDate
, dp.libtxtUserID As ParentID
, 1 AS Displayorder
, 0 AS isDisable
FROM Custom.tblR_docProduct dp
INNER JOIN [Custom].[tblR_docProduct_grdCertificationOther] co
ON dp.Id = co._Parent) t
WHERE NOT t.SentDate IS NULL
AND NOT t.STDName IS NULL
AND NOT t.STDNumber IS NULL
``` | I got stuck on Multiple Select | [
"",
"sql",
"sql-server-2008",
""
] |
How can I update and replace part of string/text in SQL?
I want to be able to update my `Article` table, and substitute all of `’` with `'` in `Description` field. How can something like this be done? | ```
UPDATE Article
SET Description = REPLACE(Description , '’', '''')
``` | `REPLACE(COLUMN_NAME,'’','''')` | How to replace part of text in SQL server? | [
"",
"sql",
"sql-server",
""
] |
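The quoting in the record above (`''''` is a single-quote character inside a SQL string literal) can be checked with a runnable sketch. This uses Python's stdlib `sqlite3` and an invented one-row `Article` table; `REPLACE` behaves the same way here as in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Article (Description TEXT)")
conn.execute("INSERT INTO Article VALUES ('It’s done')")

# '’' is the curly apostrophe; the doubled '' inside the SQL literal
# escapes a single straight quote.
conn.execute(
    "UPDATE Article SET Description = REPLACE(Description, '’', '''')")

fixed = conn.execute("SELECT Description FROM Article").fetchone()[0]
print(fixed)
```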
I have a listing by month and year, e.g. May 2014. The created date is stored as an int(11) Unix timestamp. How do I compare the date formats?
```
SELECT DATE_FORMAT(created, '%d-%m-%Y') AS Datetd
FROM `tbl_users_details` where DATE_FORMAT(created, '%m-%Y')='11-2011';
```
My created date is saved in this form: '1357151400'.
But this is not working: it shows only NULL values and returns no records. | You should first try to get a result from dual:
```
SELECT DATE_FORMAT(TO_CHAR(unixts_to_date(1357151400), 'DD-MM-YYYY HH:MI:SS'),'%d-%m-%Y') FROM dual;
```
Try this Query
```
SELECT DATE_FORMAT(TO_CHAR(unixts_to_date(1357151400), 'DD-MM-YYYY HH:MI:SS'),'%d-%m-%Y') AS Datetd
FROM `tbl_users_details` where DATE_FORMAT(TO_CHAR(unixts_to_date(1357151400), 'DD-MM-YYYY HH:MI:SS'), '%m-%Y') = '11-2011';
``` | You can retrieve de raw data string or timestamp from database and format it with PHP would be something like this:
```
SELECT date_field FROM `tbl_users_details` where something';
```
To format with PHP:
```
<?php echo date('d/m/Y', strtotime($row['date_field'])); ?>
``` | how to convert sql date formate unix time stamp to date format as dd mm yy format? | [
"",
"mysql",
"sql",
"date",
""
] |
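The core issue in the record above is that a date-format function is applied to a raw epoch integer; the timestamp must be converted to a date first (in MySQL, `FROM_UNIXTIME` does this, so the filter would be `DATE_FORMAT(FROM_UNIXTIME(created), '%m-%Y')`). A minimal runnable sketch of the conversion, using Python's stdlib `sqlite3` and its `'unixepoch'` modifier instead of MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ts = 1357151400  # the sample value from the question

# Convert seconds-since-epoch to a date before formatting it;
# formatting the raw integer directly yields nothing usable.
converted = conn.execute(
    "SELECT strftime('%d-%m-%Y', ?, 'unixepoch')", (ts,)
).fetchone()[0]
print(converted)
```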
I'm evaluating various options to run a bunch of high-performing queries against a single temporary data set in Oracle. In T-SQL, I'd probably use in-memory temporary tables, but Oracle doesn't have an exact equivalent of this feature.
I'm currently seeing these options:
### 1. Global temporary tables
```
CREATE GLOBAL TEMPORARY TABLE test_temp_t (
n NUMBER(10),
s VARCHAR2(10)
) ON COMMIT DELETE ROWS; -- Other configurations are possible, too
DECLARE
t test_t;
n NUMBER(10);
BEGIN
-- Replace this with the actual temporary data set generation
INSERT INTO test_temp_t
SELECT MOD(level, 10), '' || MOD(level, 12)
FROM dual
CONNECT BY level < 1000000;
-- Replace this example query with more interesting statistics
SELECT COUNT(DISTINCT t.n)
INTO n
FROM test_temp_t t;
DBMS_OUTPUT.PUT_LINE(n);
END;
```
Plan:
```
----------------------------------------------------
| Id | Operation | A-Rows | A-Time |
----------------------------------------------------
| 0 | SELECT STATEMENT | 1 |00:00:00.27 |
| 1 | SORT AGGREGATE | 1 |00:00:00.27 |
| 2 | VIEW | 10 |00:00:00.27 |
| 3 | HASH GROUP BY | 10 |00:00:00.27 |
| 4 | TABLE ACCESS FULL| 999K|00:00:00.11 |
----------------------------------------------------
```
### 2. Unnesting of PL/SQL table type variables
```
CREATE TYPE test_o AS OBJECT (n NUMBER(10), s VARCHAR2(10));
CREATE TYPE test_t AS TABLE OF test_o;
DECLARE
t test_t;
n NUMBER(10);
BEGIN
-- Replace this with the actual temporary data set generation
SELECT test_o(MOD(level, 10), '' || MOD(level, 12))
BULK COLLECT INTO t
FROM dual
CONNECT BY level < 1000000;
-- Replace this example query with more interesting statistics
SELECT COUNT(DISTINCT n)
INTO n
FROM TABLE(t) t;
DBMS_OUTPUT.PUT_LINE(n);
END;
```
Plan:
```
------------------------------------------------------------------
| Id | Operation | A-Rows | A-Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | 1 |00:00:00.68 |
| 1 | SORT GROUP BY | 1 |00:00:00.68 |
| 2 | COLLECTION ITERATOR PICKLER FETCH| 999K|00:00:00.22 |
------------------------------------------------------------------
```
### 3. Materialised views
I'm ruling them out for this use-case, because the temporary data set in question is rather complex and the implications on updating the materialised view would be too significant.
### Real data considerations
The above are examples of what I'm trying to do. The real data sets involve:
* The temp data is denormalised from around 15 joined tables.
* It is produced around 2-20x / second.
* The actual amount of rows per temp data set is around 10-200 (not as big as in the above example).
* Each user of the system has their own temp data set (1M users overall, 10k concurrent users).
* Once the data set is established, around 10-50 analytical queries should be run against it.
* These analyses must be run online, i.e. they cannot be deferred to a batch job.
### Questions
From my intuition, the temp table query *"should"* be slower because it (probably) involves I/O and disk access, whereas the PL/SQL collection query is a mere in-memory solution. But in my trivial benchmark, this is not the case as the temp table query beats the PL/SQL collection query by factor 3x. Why is this the case? Is there some PL/SQL <-> SQL context switch happening?
Do I have other options for fast (yet extensive) "in-memory" data analysis on a well-defined temporary data set? Are there any significant publicly available benchmarks comparing the various options? | Temporary tables are effectively the same as in-memory tables thanks to caching and asynchronous I/O, and the temporary table solution does not require any overhead for converting between SQL and PL/SQL.
**Confirming the results**
Comparing the two versions with RunStats, the temporary table version *looks* much worse. All that junk for the temporary table version in Run1, and only a little extra memory for the PL/SQL version in Run2. At first it seems like PL/SQL should be the clear winner.
```
Type Name Run1 (temp) Run2 (PLSQL) Diff
----- -------------------------------- ------------ ------------ ------------
...
STAT physical read bytes 81,920 0 -81,920
STAT physical read total bytes 81,920 0 -81,920
LATCH cache buffers chains 104,663 462 -104,201
STAT session uga memory 445,488 681,016 235,528
STAT KTFB alloc space (block) 2,097,152 0 -2,097,152
STAT undo change vector size 2,350,188 0 -2,350,188
STAT redo size 2,804,516 0 -2,804,516
STAT temp space allocated (bytes) 12,582,912 0 -12,582,912
STAT table scan rows gotten 15,499,845 0 -15,499,845
STAT session pga memory 196,608 19,857,408 19,660,800
STAT logical read bytes from cache 299,958,272 0 -299,958,272
```
But at the end of the day only the wall clock time matters. Both the loading and the querying steps run much faster with temporary tables.
The PL/SQL version can be improved by replacing the `BULK COLLECT` with `cast(collect(test_o(MOD(a, 10), '' || MOD(a, 12))) as test_t) INTO t`. But it's still significantly slower than the temporary table version.
**Optimized Reads**
Reading from the small temporary table only uses the buffer cache, which is in memory. Run only the query part many times, and watch how the `consistent gets from cache` (memory) increase while the `physical reads cache` (disk) stay the same.
```
select name, value
from v$sysstat
where name in ('db block gets from cache', 'consistent gets from cache',
'physical reads cache');
```
**Optimized Writes**
Ideally there would be no physical I/O, especially since the temporary table is `ON COMMIT DELETE ROWS`. And it sounds like the next version of Oracle may introduce such a mechanism. But it doesn't matter much in this case, the disk I/O does not seem to slow things down.
Run the load step multiple times, and then run `select * from v$active_session_history order by sample_time desc;`. Most of the I/O is `BACKGROUND`, which means nothing is waiting on it. I assume the temporary table internal logic is just a copy of regular DML mechanisms. In general, new table data *may* need to be written to disk, if it's committed. Oracle may start working on it, for example by moving data from the log buffer to disk, but there is no rush until there is an actual `COMMIT`.
**Where does the PL/SQL time go?**
I have no clue. Are there multiple context switches, or a single conversion between the SQL and PL/SQL engines? As far as I know none of the available metrics show the *time* spent on switching between SQL and PL/SQL.
We may never know exactly why PL/SQL code is slower. I don't worry about it too much. The general answer is, the vast majority of database work has to be done in SQL anyway. It would make a lot of sense if Oracle spent more time optimizing the core of their database, SQL, than the add-on language, PL/SQL.
**Additional notes**
For performance testing it can be helpful to remove the `connect by` logic into a separate step. That SQL is a great trick for loading data, but it can be very slow and resource intensive. It's more realistic to load a sample table once with that trick, and then insert from that table.
I tried using the new Oracle 12c feature, temporary undo, and the new 18c feature, private temporary tables. Neither one improved performance over regular temporary tables.
I wouldn't bet on it, but I can see a way that the results would completely change as the data gets larger. The log buffer and the buffer cache can only get so large. And eventually that background I/O could add up and overwhelm some processes, turning the `BACKGROUND` wait into a `FOREGROUND` wait. On the other hand, there's only so much PGA memory for the PL/SQL solution, and then things crash.
Finally, this partially confirms my skepticism of "in-memory databases". Caching is nothing new, databases have been doing it for decades. | The reason why GTTs are faster than PLSQL collection may be the context switch for selecting from dual(or your real tables) and the fact that tables, being small, are almost sure in buffer cache(no I/O)
Because your tables are small I would use GTT with CACHE option, to increase the chance that data is in SGA.
(But this type of option should be used carefully, as it does not what appears to do.
Please read [Ask Tom about alter table cache](https://asktom.oracle.com/pls/asktom/f?p=100:11:0%3a%3a%3a%3aP11_QUESTION_ID:253415112676) and [ask Tom about pools](http://asktom.oracle.com/pls/apex/f?p=100:11:0%3a%3a%3a%3ap11_question_id:1590999000346302363)) | Performance considerations for temporary data in Oracle | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I have two tables; they share a column `id`, but `table1` has more `id`s than `table2`. Now I want to find those `id`s that are in `table1` but do not exist in `table2`, `insert` them into `table2`, and set their `count` value to 0.
I tried the following code but it says `syntax error, unexpected IF`.
```
if not exists(select * from table1 where table1.id = table2.id)
begin
insert into table2 (id, count) values (table1.id, 0)
end
``` | You can do this with a single `insert . . . select` statement:
```
insert into table2(id, count)
select id, 0
from table1 t1
where not exists (select 1 from table2 t2 where t2.id = t1.id);
```
I am guessing that you are using MySQL if you are getting an error on `if` (`if` is only allowed in procedure/function/trigger code). But even if `if` where allowed, the query in `exists` references `table2.id` and there is no `table2` in the `from` clause. So that would be the next error. | LEFT JOIN also could be used here
```
insert into table2(id, count)
select t1.id, 0
from table1 t1 left join table2 t2
on t1.id = t2.id
where t2.id is null;
``` | Insert into table when one value not exist in another table? | [
"",
"mysql",
"sql",
"if-statement",
""
] |
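The `INSERT ... SELECT ... WHERE NOT EXISTS` statement from the record above runs unchanged on most engines. A minimal sketch using Python's stdlib `sqlite3` with invented data; only the ids missing from `table2` are copied in, with `count` set to 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER);
CREATE TABLE table2 (id INTEGER, count INTEGER);
INSERT INTO table1 VALUES (1), (2), (3);
INSERT INTO table2 VALUES (1, 5);   -- id 1 is already present
""")

# Copy every id from table1 that table2 does not have yet.
conn.execute("""
    INSERT INTO table2 (id, count)
    SELECT id, 0 FROM table1 t1
    WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.id = t1.id)
""")

rows = conn.execute("SELECT id, count FROM table2 ORDER BY id").fetchall()
print(rows)  # id 1 keeps its old count; ids 2 and 3 arrive with 0
```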
I have (a more complex version of) the following schema:
```
CREATE TABLE Messages (
ItemId INT IDENTITY PRIMARY KEY,
Name VARCHAR(20) NOT NULL,
DeliveryType INT NOT NULL
);
CREATE TABLE NotificationsQueue (
ItemId INT NOT NULL,
DeliveryType INT NOT NULL
);
```
The possible values for DeliveryType are:
```
0 -> none
1 -> mail
2 -> email
3 -> both
```
Now, having some entries in the Messages table:
```
INSERT INTO Messages(Name, DeliveryType) Values
('Test0', 0),
('Test1', 1),
('Test2', 2),
('Test3', 3)
```
I need to populate the NotificationsQueue table like so:
* when DeliveryType = 0, don't insert anything
* when DeliveryType = 1 OR DeliveryType = 2, insert a tuple: (ItemId, DeliveryType)
* when DeliveryType = 3, insert two tuples: (ItemId, 1), (ItemId, 2)
My attempt to do so resulted in these two queries:
```
INSERT INTO NotificationsQueue (ItemId, DeliveryType) SELECT ItemId, 1 AS DeliveryType FROM Messages WHERE DeliveryType IN (1, 3)
INSERT INTO NotificationsQueue (ItemId, DeliveryType) SELECT ItemId, 2 AS DeliveryType FROM Messages WHERE DeliveryType IN (2, 3)
```
Even though this works, the select statements for the inserts are much more complex in practice, so I was wondering if there's any good way to merge them in a single query, in order to avoid duplication.
PS: I am not allowed to change the schema or add duplicate items in the `Messages` table. | If you can use trigger you can do this
```
CREATE TRIGGER dbo.Messages_Inserted ON dbo.[Messages]
FOR INSERT
AS
INSERT INTO dbo.[NotificationsQueue]
( ItemId
,DeliveryType
)
SELECT ItemId
,1
FROM INSERTED
WHERE DeliveryType IN ( 1, 3 )
UNION ALL
SELECT ItemId
,2
FROM INSERTED
WHERE DeliveryType IN ( 2, 3 )
GO
INSERT INTO dbo.[Messages](Name, DeliveryType) Values
('Test0', 0),
('Test1', 1),
('Test2', 2),
('Test3', 3)
SELECT * FROM dbo.[Messages]
SELECT * FROM dbo.NotificationsQueue
``` | If you just want everything in one select, I would create a look up table to translate the type, like this
```
;WITH cte AS (SELECT * FROM (VALUES (1,1),(2,2),(3,1),(3,2)) a(DeliveryType,type))
SELECT ItemId, t.type AS DeliveryType
FROM Messages m
INNER JOIN cte t
ON t.DeliveryType = m.DeliveryType
``` | Insert multiple rows from a single row inside an insert into select query | [
"",
"sql",
"sql-server",
"select",
"insert",
"bulkinsert",
""
] |
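The fan-out logic from the record above (delivery type 3 producing two queue rows) can be exercised with the `UNION ALL` insert directly. This sketch uses Python's stdlib `sqlite3` and the question's own schema and sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Messages (
    ItemId INTEGER PRIMARY KEY, Name TEXT, DeliveryType INTEGER);
CREATE TABLE NotificationsQueue (ItemId INTEGER, DeliveryType INTEGER);
INSERT INTO Messages (Name, DeliveryType) VALUES
  ('Test0', 0), ('Test1', 1), ('Test2', 2), ('Test3', 3);
""")

# Type 3 matches both branches, so it is inserted twice: once as
# mail (1) and once as email (2). Type 0 matches neither branch.
conn.execute("""
    INSERT INTO NotificationsQueue (ItemId, DeliveryType)
    SELECT ItemId, 1 FROM Messages WHERE DeliveryType IN (1, 3)
    UNION ALL
    SELECT ItemId, 2 FROM Messages WHERE DeliveryType IN (2, 3)
""")

rows = conn.execute(
    "SELECT ItemId, DeliveryType FROM NotificationsQueue "
    "ORDER BY ItemId, DeliveryType").fetchall()
print(rows)
```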
I have table:
```
opendt enddt Id
------------------------------
2013-01-20 2013-02-20 1
2013-02-20 2014-02-06 1
2013-02-28 NULL 1
```
Ideally for record #2 we should have `enddt = 2013-02-28` but instead it is `2014-02-06` by mistake. I want to find such records where `enddt` is not equal to the next row's `opendt` for the same `ID`. I know I can try finding this by using `temp` tables. Is there a way to do this without a `temp` table? SQL Server 2012. | You are using SQL Server 2012, so I would recommend using `lead()`:
```
select t.*
from (select t.*, lead(opendt) over (partition by id order by opendt) as next_opendt
from table t
) t
where next_opendt <> enddt;
``` | Try this,
```
;With CTE as
(
Select *,row_number() over (partition by id order by opendt asc) as rNo from myTable as t1
)
select * from CTE as a
left join CTE as b on a.rNo =b.rNo + 1 and a.id=b.id
where b.opendt <> a.enddt
``` | SQL query to do row by row comparison | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
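The `lead()` approach from the record above can be reproduced on any engine with window-function support. This sketch uses Python's stdlib `sqlite3` (which needs SQLite 3.25 or newer for window functions) and the question's sample rows; only the row whose `enddt` disagrees with the next row's `opendt` is flagged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE spans (opendt TEXT, enddt TEXT, id INTEGER);
INSERT INTO spans VALUES
  ('2013-01-20', '2013-02-20', 1),
  ('2013-02-20', '2014-02-06', 1),   -- wrong: next row opens 2013-02-28
  ('2013-02-28', NULL,         1);
""")

bad = conn.execute("""
    SELECT opendt, enddt FROM (
        SELECT t.*,
               LEAD(opendt) OVER (PARTITION BY id ORDER BY opendt)
                   AS next_opendt
        FROM spans t
    ) WHERE next_opendt <> enddt
""").fetchall()
print(bad)  # the last row drops out because its next_opendt is NULL
```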
I have below two queries by using which we can get no of Serious, Fatal accidents per month.
How can we optimize this and get the result in one query for different accident types?
```
SELECT
COUNT(ICT.ID) NoOfAccident,
YEAR(ICT.[Date]) AccidentYear,
Month(ICT.[Date]) AccidentMonth,
MAX(ICT.[Date]) AS AccidentDate
FROM
Accidents ICT
Where
ICT.AccidentType = "Serious"
AND
ICT.[Date] > CONVERT(DATETIME, '09/20/13', 1)
Group By
YEAR(ICT.[Date]),
Month(ICT.[Date])
ORDER BY
IncidentDate ASC
SELECT
COUNT(ICT.ID) NoOfAccident,
YEAR(ICT.[Date]) AccidentYear,
Month(ICT.[Date]) AccidentMonth,
MAX(ICT.[Date]) AS AccidentDate
FROM
Accidents ICT
Where
ICT.AccidentType = "Fatal"
AND
ICT.[Date] > CONVERT(DATETIME, '09/20/13', 1)
Group By
YEAR(ICT.[Date]),
Month(ICT.[Date])
ORDER BY
IncidentDate ASC
```
How can we optimize this and get the result in one query like:
```
NoOfSeriousAccident
NoOfFatalAccident
AccidentYear
AccidentMonth
AccidentDate
``` | ```
SELECT
ICT.AccidentType,
COUNT(ICT.ID) NoOfAccident,
YEAR(ICT.[Date]) AccidentYear,
Month(ICT.[Date]) AccidentMonth,
MAX(ICT.[Date]) AS AccidentDate
FROM
Accidents ICT
Where
    ICT.AccidentType IN ('Serious', 'Fatal')
AND
ICT.[Date] > CONVERT(DATETIME, '09/20/13', 1)
Group By
    ICT.AccidentType,
YEAR(ICT.[Date]),
Month(ICT.[Date])
ORDER BY
    AccidentDate ASC
```
EDIT: Based on updated requirement, you can use PIVOT to get separate columns for fatal and serious accident counts as follows:
```
;with pivoted as
(select
accidentyear,
accidentmonth,
serious as NoOfSeriousAccident,
fatal as NoOfFatalAccident from
(SELECT
ICT.AccidentType,
COUNT(ICT.ID) cnt,
YEAR(ICT.[accidentdate]) AccidentYear,
Month(ICT.[accidentdate]) AccidentMonth
FROM
Accident ICT
Where
ICT.AccidentType IN ('Serious','Fatal')
AND
ICT.[accidentdate] > CONVERT(DATETIME, '09/20/13', 1)
Group By
ICT.AccidentType,
YEAR(ICT.[accidentdate]),
Month(ICT.[accidentdate])) as s
pivot
(
max(cnt)
for accidenttype in ([serious] ,[fatal])
) as pvt
)
select
x.accidentyear,
x.accidentmonth,
max(a.accidentdate),
x.NoOfSeriousAccident,
    x.NoOfFatalAccident
from pivoted x
inner join accident a
on month(a.accidentdate) = x.accidentmonth
and year(a.accidentdate) = x.accidentyear
group by x.accidentmonth, x.accidentyear, x.seriouscount, x.fatalcount
order by max(a.accidentdate)
``` | Trivial - group not only by year and month but also by AccidentType (and remove the filter of one accidedenttype per query).
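As a side note, the PIVOT step can often be replaced by conditional aggregation (`SUM(CASE WHEN …)`), which produces the two count columns in a single grouped query without the second join back to the table. A sketch using SQLite through Python (hypothetical schema and sample data, not the exact T-SQL above):

```python
import sqlite3

# Hypothetical simplified schema and sample data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Accidents (ID INTEGER, AccidentType TEXT, AccidentDate TEXT)")
conn.executemany(
    "INSERT INTO Accidents VALUES (?, ?, ?)",
    [
        (1, "Serious", "2013-10-05"),
        (2, "Fatal",   "2013-10-12"),
        (3, "Serious", "2013-10-20"),
        (4, "Fatal",   "2013-11-03"),
    ],
)

# One row per year/month, with one count column per accident type
rows = conn.execute("""
    SELECT strftime('%Y', AccidentDate) AS AccidentYear,
           strftime('%m', AccidentDate) AS AccidentMonth,
           SUM(CASE WHEN AccidentType = 'Serious' THEN 1 ELSE 0 END) AS NoOfSeriousAccident,
           SUM(CASE WHEN AccidentType = 'Fatal' THEN 1 ELSE 0 END)   AS NoOfFatalAccident,
           MAX(AccidentDate) AS AccidentDate
    FROM Accidents
    WHERE AccidentDate > '2013-09-20'
    GROUP BY strftime('%Y', AccidentDate), strftime('%m', AccidentDate)
    ORDER BY AccidentDate
""").fetchall()

for row in rows:
    print(row)  # e.g. ('2013', '10', 2, 1, '2013-10-20')
```

The same `SUM(CASE WHEN AccidentType = 'Serious' THEN 1 ELSE 0 END)` pattern works directly in SQL Server in place of the PIVOT.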
THen you get 2 rows per year/month - one per accident type. | SQL query optimization to get result in one query | [
"sql",
"sql-server",
"sql-server-2008",
"t-sql"
] |
Please have a look at [this](http://www.sqlfiddle.com/#!4/c58f3/3/0). The result is, in effect, a Cartesian product of the two sets. I want the output as follows, i.e. no Cartesian product.
```
ID_1 TYPE_1 NAME_1 ID_2 TYPE_2 NAME_2
===============================================================
TP001 1 Adam Smith TV001 2 Leon Crowell
TP002 1 Ben Conrad TV002 2 Chris Hobbs
TP003 1 Lively Jonathan
```
I used one of the solution, `join`, known to me to select rows as columns but i need results in required format while `join` is not mandatory. | You need an artificial column as id. Use rownum for that on both types of teachers.
Because you do not know if there are more Teachers of type 1 or of Type 2, you must do a full outer join to combine both sets.
```
SELECT *
FROM (SELECT ROWNUM AS cnt, teacherid
, teachertype, teachername
FROM teachers
WHERE teachertype = 1) qry1
FULL OUTER JOIN (SELECT ROWNUM AS cnt, teacherid
, teachertype, teachername
FROM teachers
WHERE teachertype = 2) qry2
ON qry1.cnt = qry2.cnt
```
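The pairing logic of this full outer join on ROWNUM is the same as zipping two sequences of unequal length and padding the shorter one with NULLs. A small Python illustration of just that pairing (names taken from the expected output; this is an analogy, not Oracle code):

```python
from itertools import zip_longest

# The two subquery results: one list of names per teacher type
type1 = ["Adam Smith", "Ben Conrad", "Lively Jonathan"]
type2 = ["Leon Crowell", "Chris Hobbs"]

# zip_longest pads the shorter side with None, like the FULL OUTER JOIN on cnt
paired = list(zip_longest(type1, type2))
for name1, name2 in paired:
    print(name1, "|", name2)  # last row: Lively Jonathan | None
```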
In general, databases think in rows, not in columns. In your example you are lucky: you only have two types of teachers. For every new teacher type you would have to alter your statement and append another full outer join, only to present the output of your query in a special way, one set per column.
But with a simple select you retrieve the same information, and it works regardless of how many teacher types you have.
SQL is somewhat limited in presenting data; I would leave that to the client retrieving the data, or use PL/SQL for a more generic approach. | There should be some constraint of keys on which you join the table or tables. If there is no constraint, it will always result in a Cartesian product, i.e. the number of rows of the first table times the number of rows of the second table. | Rows as Columns without Join | [
"sql",
"oracle",
"oracle11g"
] |