| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am currently working with a large dataset of strings. I appreciate SQL isn't ideal for string manipulation but it's my only option.
My original table essentially took a string of the form `'company name [+place name] [+ other text]'` and matched it against a list of place names. If there was a match, it stripped the place name and put it in a separate column, i.e.
```
Daves butchers London | Daves butchers | London
```
This works fine for all my entries except those which are of the form
```
Duke of Westminster | Duke of | Westminster
```
Is there a way of querying a table so that when column 2 ends with the word 'of' I search column one for the word after of and insert it at the end of column 2?
```
Duke of Westminster | Duke of Westminster | Westminster
```
In particular, it is often the case that the place name is not at the end of column 1, i.e.
```
Duke of Westminster 0177263774 | Duke of | Westminster
```
Is it still possible in this case?
My dataset has 40,000,000 rows in it and I'm not in a position to rebuild it - I simply have to try and fix the problem. | ```
select 'This is a test of concept with other words' as [Column1]
,'This is a test of ' as [Column2]
Into #Tmp
SELECT *
,CASE WHEN RIGHT(RTRIM(Column2),3)=' of'
 THEN Column2 + SUBSTRING(Column1, PATINDEX('% of %',Column1)+4,
      CHARINDEX(' '
               ,Column1 + ' '
               ,PATINDEX('% of %',Column1)+4) - (PATINDEX('% of %',Column1)+4))
ELSE Column2 END
FROM #tmp
drop table #tmp
```
This assumes that you are working with the first occurrence of ' of ' in Column1. | Maybe give the following function a try.
```
CREATE FUNCTION SplitInput
(
@input NVARCHAR(MAX)
)
RETURNS @output TABLE(name NVARCHAR(MAX), place NVARCHAR(MAX))
BEGIN
DECLARE @index1 INT
DECLARE @index2 INT
SET @index1 = patindex('% of %', @input)
IF @index1 > 0
BEGIN
SET @index2 = patindex('% %', substring(@input, @index1 + 4, len(@input))) + @index1 + 3
IF @index2 > @index1 + 3
INSERT INTO
@output
VALUES
(left(@input, @index2 - 1), substring(@input, @index1 + 4, @index2 - @index1 - 4))
ELSE
INSERT INTO
@output
VALUES
(@input, substring(@input, @index1 + 4, len(@input)))
END
RETURN
END
SELECT * FROM SplitInput('Duke of Westminster 12345')
SELECT * FROM SplitInput('King of Scotland')
``` | Select next word after string (SQL) | [
"",
"sql",
"sql-server",
"database",
"string",
"t-sql",
""
] |
I'm trying to count rows where state = 0 and rows where state = 1, and to show 0 when the other state doesn't have a value.
My tables:
```
|policies|
|id| |client| |policy_business_unit_id| |cia_ensure_id| |state|
1 MATT 1 1 0
2 STEVE 2 1 0
3 BILL 3 2 1
4 LARRY 4 2 1
|policy_business_units|
|id| |name| |comercial_area_id|
1 LIFE 2
2 ROB 1
3 SECURE 2
4 ACCIDENT 1
|comercial_areas|
|id| |name|
1 BANK
2 HOSPITAL
|cia_ensures|
|id| |name|
1 SPRINT
2 APPLE
```
Here is the information:
```
http://sqlfiddle.com/#!2/54d2f/15
```
I'm trying to get states = 0, 1, 2 and count them, or show 0 if a state doesn't exist
```
Select p.id, p.client,
pb.name as BUSINESS_UNITS,
ce.name as CIA,ca.name as COMERCIAL_AREAS,
count(p.state) as status_0,count(p.state) as status_1,count(p.state) as status_2
From policies p
INNER JOIN policy_business_units pb ON pb.id = p.policy_business_unit_id
INNER JOIN comercial_areas ca ON ca.id = pb.comercial_area_id
INNER JOIN cia_ensures ce ON ce.id = p.cia_ensure_id
```
I'm getting this result:
```
ID CLIENT BUSINESS_UNITS CIA COMERCIAL_AREAS STATE_0 STATE_1 STATUS_2
1 MATT LIFE SPRINT HOSPITAL 4 4 4
```
I'm trying to count where state = 0, 1, 2, and otherwise show 0 for the state that doesn't have a value
How can I do to have this result?
```
ID CLIENT BUSINESS_UNITS CIA COMERCIAL_AREAS STATE_0 STATE_1 STATE_2
1 MATT LIFE SPRINT HOSPITAL 1 0 0
2 STEVE ROB SPRINT BANK 1 0 0
3 BILL SECURE APPLE HOSPITAL 0 1 0
4 LARRY ACCIDENT APPLE BANK 0 1 0
```
I will appreciate any kind of help.
Thanks. | This should do the trick: do an IF conditional. If the state = 0 then count, else put in a 0.
see working [FIDDLE](http://sqlfiddle.com/#!2/54d2f/26)
```
SELECT
p.id,
p.client,
pb.name AS BUSINESS_UNITS,
ce.name AS CIA,ca.name AS COMERCIAL_AREAS,
IF (p.state = 0, count(p.state), 0) AS state_0,
IF (p.state = 1, count(p.state), 0) AS state_1,
IF (p.state = 2, count(p.state), 0) AS state_2
FROM policies p
INNER JOIN policy_business_units pb ON pb.id = p.policy_business_unit_id
INNER JOIN comercial_areas ca ON ca.id = pb.comercial_area_id
INNER JOIN cia_ensures ce ON ce.id = p.cia_ensure_id
GROUP BY pb.id
ORDER BY p.id;
```
---
If you want to account for null states, this would be the query: [FIDDLE](http://sqlfiddle.com/#!2/7d6666/1)
```
SELECT
p.id,
p.client,
pb.name AS BUSINESS_UNITS,
ce.name AS CIA,ca.name AS COMERCIAL_AREAS,
IF (p.state = 0, count(p.state), 0) AS state_0,
IF (p.state = 1, count(p.state), 0) AS state_1,
IF (p.state IS NULL, count(p.id), 0) AS state_null
FROM policies p
INNER JOIN policy_business_units pb ON pb.id = p.policy_business_unit_id
INNER JOIN comercial_areas ca ON ca.id = pb.comercial_area_id
INNER JOIN cia_ensures ce ON ce.id = p.cia_ensure_id
GROUP BY pb.id
ORDER BY p.id;
``` | ```
SELECT SUM( CASE WHEN [condition] THEN 1
ELSE 0 END) AS count
FROM [your tables]
``` | How can count columns according a condition? | [
"",
"mysql",
"sql",
""
] |
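The `SUM(CASE WHEN ... THEN 1 ELSE 0 END)` pattern from the second answer above generalizes directly to the per-row flags the asker wants. A minimal sketch, run against an in-memory SQLite copy of the question's sample data via Python (only `id`, `client` and `state` are kept; the join tables are omitted for brevity):

```python
import sqlite3

# In-memory copy of the relevant columns from the question's policies table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, client TEXT, state INTEGER)")
conn.executemany(
    "INSERT INTO policies VALUES (?, ?, ?)",
    [(1, "MATT", 0), (2, "STEVE", 0), (3, "BILL", 1), (4, "LARRY", 1)],
)

# One row per policy; each state_N column counts rows matching that state
# and falls back to 0 when the state has no matching rows.
rows = conn.execute("""
    SELECT id, client,
           SUM(CASE WHEN state = 0 THEN 1 ELSE 0 END) AS state_0,
           SUM(CASE WHEN state = 1 THEN 1 ELSE 0 END) AS state_1,
           SUM(CASE WHEN state = 2 THEN 1 ELSE 0 END) AS state_2
    FROM policies
    GROUP BY id, client
    ORDER BY id
""").fetchall()
print(rows)
```

Unlike MySQL's `IF(...)`, this conditional-sum form is portable across engines.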
I have a table like this:
```
Name CPP Java Python Age
David 1 1 0 40
Mike 1 0 1 50
```
I want to generate a row for each non-zero CPP/Java/Python value with other non-zero values cleared:
```
Name CPP Java Python Age
David 1 0 0 40
David 0 1 0 40
Mike 1 0 0 50
Mike 0 0 1 50
```
How can I do that? | The simple way would be to do a union, e.g.:
```
SELECT name, cpp, 0 as java, 0 as python, age
FROM {table}
WHERE cpp = 1
UNION ALL
SELECT name, 0 as cpp, java, 0 as python, age
FROM {table}
WHERE java = 1
UNION ALL
SELECT name, 0 as cpp, 0 as java, python, age
FROM {table}
WHERE python = 1
``` | this should do the magic:
```
SELECT name, cpp, 0,0, Age FROM table WHERE cpp = 1 UNION ALL
SELECT name, 0, Java,0, Age FROM table WHERE Java = 1 UNION ALL
SELECT name, 0, 0,Python, Age FROM table WHERE Python = 1
``` | How to convert one row into multiple rows in SQL like this | [
"",
"sql",
"sql-server",
""
] |
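The `UNION ALL` approach from the answers above can be verified end to end. A sketch using SQLite from Python; the table name `skills` is made up, since the question only writes `{table}`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE skills (name TEXT, cpp INTEGER, java INTEGER, python INTEGER, age INTEGER)"
)
conn.executemany("INSERT INTO skills VALUES (?, ?, ?, ?, ?)",
                 [("David", 1, 1, 0, 40), ("Mike", 1, 0, 1, 50)])

# One branch per flag column; each branch zeroes out the other flags.
rows = conn.execute("""
    SELECT name, cpp, 0 AS java, 0 AS python, age FROM skills WHERE cpp = 1
    UNION ALL
    SELECT name, 0, java, 0, age FROM skills WHERE java = 1
    UNION ALL
    SELECT name, 0, 0, python, age FROM skills WHERE python = 1
    ORDER BY name, java, python
""").fetchall()
print(rows)
```

The result matches the desired output: one row per non-zero flag, other flags cleared.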
I'm new to SQL and I want to know the approach to solve this small problem
```
Select * from ApplicationData where ApplicationId = @AppID
```
AppID can be null or it can contain a value. When a null value is received, it should return all applications. Is there any way we can alter the WHERE clause?
Example
```
Select * from ApplicationData where Case When <some condition> then
ApplicationId = @AppID else ApplicationId is null;
```
Thanks | This should work:
```
SELECT * FROM ApplicationData
WHERE (ApplicationId IS NULL AND @AppID IS NULL) OR ApplicationId = @AppID
```
This is an alternate approach:
```
SELECT * FROM ApplicationData
WHERE ISNULL(ApplicationId, -1) = ISNULL(@AppID, -1)
``` | If you're doing this in a stored procedure then use logic to switch between the two filter requirements. However this will only give you optimal code if the ApplicationId column is included, preferably as the first key, in an index.
```
IF @AppID IS NULL
BEGIN
SELECT * FROM ApplicationData
END
ELSE
BEGIN
SELECT * FROM ApplicationData WHERE ApplicationId = @AppID
END
```
Why is this the best solution? Because the SQL engine will create two query plans for this stored procedure. The query plans will give you the optimal solution for the two filtering requirements. As they are small statements and you only require two possible outcomes you're not creating any burden on the query cache and your code is very readable. If your requirements were more complex, with many variation of filtering then you should consider Dynamic SQL.
If you are not using stored procedures then your application layer should dynamically create the two separate query strings.
There has been many articles written on this subject. I could dig out a few good ones for you if you need. | Comparing a parameter against null values | [
"",
"sql",
"sql-server",
"database",
"t-sql",
"where-clause",
""
] |
How can I replace/convert an empty string with `NULL` in a `SELECT` query, without altering or updating the table? | The [`NULLIF`](http://en.wikipedia.org/wiki/Null_%28SQL%29#NULLIF) function is supported on most major RDBMSs.
```
SELECT NULLIF(MyColumn, '') FROM MyTable
```
This will return `NULL` for any row if `MyColumn` is equal to the empty string. | ```
SELECT CASE RTRIM(Column) WHEN '' THEN NULL ELSE Column END AS [Column]
``` | How to convert empty string to null in SQLite | [
"",
"sql",
"sqlite",
""
] |
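`NULLIF` is indeed available in SQLite, so the accepted answer can be checked directly; empty strings come back as `NULL` (Python `None`) without touching the stored data. A small sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (MyColumn TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [("abc",), ("",), ("xyz",)])

# NULLIF(x, y) returns NULL when x = y, otherwise x; the table is unchanged.
rows = conn.execute(
    "SELECT NULLIF(MyColumn, '') FROM MyTable ORDER BY rowid"
).fetchall()
print(rows)  # [('abc',), (None,), ('xyz',)]
```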
I'm about to insert millions of rows into a database. Is having one table with lots of columns more or less practical than having multiple tables with lots of rows?
The data sort of looks like this:
```
user Jan01 Jan02 Jan03 ...
abcdef 459232 958394 319348
ghijkl 583941 813941 438923
mnopqr 681294 249393 934304
...
```
I thought about breaking it up by month but then I would end up with about 60 tables with millions of rows. Breaking it up by year would still make the tables result in ~365 columns each. If I'm going to that extent, I figure I might as well just have one table because then it would save space and completely remove any redundancy (which is always good in programming).
However, having one table with ~365\*~60 columns sounds so crazy.
Is there a feature or something that would solve my problems that I don't know about? What would you do? | You should make the table long rather than wide.
If you make the table:
```
user | date | data
```
then your queries will be a lot faster.
You also need to make sure you index the columns correctly.
Finally, don't use `varchar` etc. if you can avoid it; if you know a column will always hold an `integer`, make sure its type is `integer` (length 11, or whatever), as this will massively speed up queries.
**EDIT**
to make this easier to understand your table should be
```
user | date | data
abcdef Jan01 459232
abcdef Jan02 958394
abcdef Jan03 319438
ghijkl Jan01 583941
ghijkl Jan02 813941
ghijkl Jan03 438923
mnopqr Jan01 681294
mnopqr Jan02 249393
mnopqr Jan03 934304
```
This allows you to query the data effectively, insert and update data more easily, and is how databases are designed to be used (long, not wide).
**even with 6 million rows it will still be faster than a table 60 columns wide with 100,000 rows.** | Make one table with lots of rows. Don't break it up by anything.
If it really gets too big, you can [partition](https://dev.mysql.com/doc/refman/5.6/en/partitioning.html) it. | Options for optimizing a table with millions of rows and lots of columns | [
"",
"mysql",
"sql",
""
] |
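The "long not wide" layout the accepted answer recommends is easy to sketch. Column names follow the answer's `user | date | data` shape (renamed to `user_id` here purely to avoid keyword ambiguity in other engines), with a composite primary key doubling as the lookup index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        user_id TEXT    NOT NULL,
        date    TEXT    NOT NULL,
        data    INTEGER NOT NULL,
        PRIMARY KEY (user_id, date)  -- also serves as the lookup index
    )
""")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", [
    ("abcdef", "2014-01-01", 459232),
    ("abcdef", "2014-01-02", 958394),
    ("ghijkl", "2014-01-01", 583941),
])

# One user's series is a simple indexed range scan, not a 365-column row.
series = conn.execute(
    "SELECT date, data FROM readings WHERE user_id = ? ORDER BY date",
    ("abcdef",),
).fetchall()
print(series)
```

Adding a new day is an `INSERT`, not an `ALTER TABLE`, which is the main practical win of the long layout.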
I have a table as shown below:
```
number Value
77 ss
74 aa
77 ww
78 ee
77 ii
74 pp
```
I am writing a range query to get the nearest value:
```
cursor=db.rawQuery("SELECT * FROM table WHERE number BETWEEN -5 AND +5 ORDER BY ABS( "+inputNumber+" - number) ", null);
```
If the input number is 76 then I will get all the values of 77, 78 and 74, but I want only the nearest values. Here the nearest value of 76 is 77, and I want all the values of 77 (i.e. 3 values in this table), not the values of 74 and 78. Can anyone suggest how to update the above query? Thanks in advance. | You can use a subquery to find the nearest number in the table (the one with the lowest absolute difference after subtracting) and then, in the main query, get the rows with this number (the same absolute difference after subtracting).
```
cursor=db.rawQuery("SELECT * FROM table where ABS("+inputNumber+"-number)=(SELECT MIN(ABS("+inputNumber+"-number)) FROM table)", null);
```
If you only want numbers where the difference with the input is +-5 you can check that ABS(inputNumber-number) <= 5 like this:
```
cursor=db.rawQuery("SELECT * FROM table where ABS("+inputNumber+"-number)=(SELECT MIN(ABS("+inputNumber+"-number)) FROM table) AND ABS("+inputNumber+"-number)<=5", null);
Since you are using the offsets -5 and +5, your range will be from 76-5 = 71 to 76+5 = 81. Therefore you will retrieve all values within that range.
Instead of using the range, you can try using the TOP keyword:
```
cursor=db.rawQuery("SELECT TOP 3 number, Value FROM table ORDER BY ABS("+inputNumber+" - number) ", null);
``` | Range Query in Android | [
"",
"android",
"sql",
"sqlite",
""
] |
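The accepted subquery can be exercised against the question's table in SQLite. One deliberate change in this sketch: the input number is passed as a bound parameter rather than concatenated into the SQL string, which is the safer pattern on Android:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (number INTEGER, value TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(77, "ss"), (74, "aa"), (77, "ww"),
                  (78, "ee"), (77, "ii"), (74, "pp")])

input_number = 76
# Keep only rows whose distance to the input equals the minimum distance.
rows = conn.execute("""
    SELECT number, value FROM t
    WHERE ABS(? - number) = (SELECT MIN(ABS(? - number)) FROM t)
""", (input_number, input_number)).fetchall()
print(rows)
```

For input 76 the minimum distance is 1 (to 77), so all three 77 rows come back and none of the 74/78 rows do.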
How to select all the records from the table, except for null values from value-431 in col-B?
```
COLUMN-A COLUMN-B
--------------------
N 433
N 431
Y 431
431
431
Y 431
N 431
N 520
520
N 304
390
N 410
433
```
Desired Output:
```
COLUMN-A COLUMN-B
--------------------
N 433
N 431
Y 431
Y 431
N 431
N 520
520
N 304
390
N 410
433
``` | Try this:
```
Select * from
(select * from table where column_b = 431 and column_a is not null
union all
select * from table where column_b <> 431) A
``` | ```
(select * from table where column_b <> 431 )
union
(select * from table where column_b = 431 and column_a is not null);
``` | Avoid null values for a specific record | [
"",
"sql",
"oracle",
""
] |
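Both answers above union two filtered subsets; the same result can also be had with a single predicate that excludes only the 431 rows whose COLUMN-A is null (mind SQL's three-valued logic: rows where `column_b <> 431` pass the `NOT (...)` test regardless of `column_a`). A sketch with a hypothetical table name `t`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column_a TEXT, column_b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("N", 433), ("N", 431), ("Y", 431), (None, 431), (None, 431),
    ("Y", 431), ("N", 431), ("N", 520), (None, 520), ("N", 304),
    (None, 390), ("N", 410), (None, 433),
])

# Exclude only rows that are BOTH column_b = 431 AND column_a IS NULL.
rows = conn.execute(
    "SELECT column_a, column_b FROM t "
    "WHERE NOT (column_b = 431 AND column_a IS NULL)"
).fetchall()
print(len(rows))  # 11 of the 13 rows survive
```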
I have a Table\_4
```
accid0v subst0v actd0v pric0v
12001 10 11/19/2013 10.99
12002 10 11/20/2013 10.99
12003 10 11/21/2013 10.99
12004 20 11/21/2013 20.99
12005 10 11/21/2013 10.99
12006 30 11/26/2013 20.99
12007 40 11/26/2013 10.99
12008 10 11/26/2013 5.99
```
I want this output:
```
actd0v pric0v
11/19/2013 10.99
11/20/2013 10.99
11/21/2013 42.97
11/26/2013 36.98
```
I am using Microsoft SQL Server 2008. | ```
select actd0v, sum(pric0v) from Table_4 group by actd0v
``` | ```
select actd0v, sum(pric0v) from Tablename GROUP BY actd0v
``` | sum of value with distinct date sql server | [
"",
"sql",
"sql-server-2008",
""
] |
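The `GROUP BY` / `SUM` shape of the answers above can be run against the question's data; a sketch in SQLite via Python (dates stored as ISO text for easy sorting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Table_4 (accid0v INTEGER, subst0v INTEGER, actd0v TEXT, pric0v REAL)"
)
conn.executemany("INSERT INTO Table_4 VALUES (?, ?, ?, ?)", [
    (12001, 10, "2013-11-19", 10.99), (12002, 10, "2013-11-20", 10.99),
    (12003, 10, "2013-11-21", 10.99), (12004, 20, "2013-11-21", 20.99),
    (12005, 10, "2013-11-21", 10.99), (12006, 30, "2013-11-26", 20.99),
    (12007, 40, "2013-11-26", 10.99), (12008, 10, "2013-11-26", 5.99),
])

# One output row per distinct date, with the prices summed.
rows = conn.execute(
    "SELECT actd0v, SUM(pric0v) FROM Table_4 GROUP BY actd0v ORDER BY actd0v"
).fetchall()
print(rows)
```

Four groups come back, e.g. 2013-11-21 sums to 10.99 + 20.99 + 10.99 = 42.97.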
I have a recipe table with recipe numbers and a list of ingredients. I want to select recipe numbers that have (a list of ingredients) AND (do not have another list of ingredients).
Thanks in advance for any direction.
```
CREATE TABLE recipe (
id INT PRIMARY KEY,
recipe_num INT,
ingredient VARCHAR(20)
);
INSERT INTO recipe VALUES (1,1,'salt'),(2,1,'pork'),(3,1,'pepper'),(4,1,'milk'),(5,1,'garlic'),
(6,2,'steak'), (7,2,'pepper'),(8,2,'ketchup'),
(9,3,'fish'),(10,3,'lemon'),(11,3,'cheese'),
(12,4,'veal'),(13,4,'cream'),(14,4,'salt'),(15,4,'garlic');
select * from recipe;
+----+------------+------------+
| id | recipe_num | ingredient |
+----+------------+------------+
| 1 | 1 | salt |
| 2 | 1 | pork |
| 3 | 1 | pepper |
| 4 | 1 | milk |
| 5 | 1 | garlic |
| 6 | 2 | steak |
| 7 | 2 | pepper |
| 8 | 2 | ketchup |
| 9 | 3 | fish |
| 10 | 3 | lemon |
| 11 | 3 | cheese |
| 12 | 4 | veal |
| 13 | 4 | cream |
| 14 | 4 | salt |
| 15 | 4 | garlic |
+----+------------+------------+
15 rows in set (0.00 sec)
```
I can select all recipe\_num that contain meat and dairy. How do I query all recipe\_num that contain meat but not dairy? It seems like it should be easy.
```
SELECT
meat.recipe_num, meat.ingredient as meat, dairy.recipe_num, dairy.ingredient as dairy
FROM
recipe as meat, recipe as dairy
WHERE
meat.ingredient IN ('pork' , 'steak','chicken','veal')
AND dairy.ingredient IN ('milk' , 'cheese')
AND meat.recipe_num = dairy.recipe_num;
+------------+------+------------+-------+
| recipe_num | meat | recipe_num | dairy |
+------------+------+------------+-------+
| 1 | pork | 1 | milk |
+------------+------+------------+-------+
```
1 row in set (0.00 sec) | Use a `LEFT JOIN` to match the two sub-tables, and then test for `NULL` in the dairy column to find recipes with no match.
```
SELECT DISTINCT meat.recipe_num
FROM recipe AS meat
LEFT JOIN recipe AS dairy
ON meat.recipe_num = dairy.recipe_num AND dairy.ingredient IN ('milk', 'cheese')
WHERE meat.ingredient IN ('pork' , 'steak','chicken','veal')
AND dairy.recipe_num IS NULL
```
Alternatively you can use a more intuitive `NOT IN` query. However, this may not perform as well.
```
SELECT DISTINCT recipe_num
FROM recipe
WHERE ingredient IN ('pork' , 'steak','chicken','veal')
AND recipe_num NOT IN (SELECT recipe_num
FROM recipe
WHERE ingredient IN ('milk', 'cheese'))
``` | While probably futile, the last part of this problem has been left as an exercise for the reader...
```
SELECT *
FROM recipe a
JOIN recipe b
ON b.recipe_num = a.recipe_num
LEFT
JOIN recipe c
ON c.recipe_num = a.recipe_num
AND c.ingredient IN ('milk','cream')
WHERE a.ingredient IN ('pork','steak','veal');
``` | MySQL select with IN (include) and NOT IN (exclude) | [
"",
"mysql",
"sql",
""
] |
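The `LEFT JOIN` anti-join from the accepted answer above can be exercised directly against the question's schema. With the answer's ingredient lists, recipes 2 and 4 qualify (note that 'cream' is not in the `('milk', 'cheese')` dairy list used there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE recipe (id INTEGER PRIMARY KEY, recipe_num INTEGER, ingredient TEXT)"
)
conn.executemany("INSERT INTO recipe VALUES (?, ?, ?)", [
    (1, 1, "salt"), (2, 1, "pork"), (3, 1, "pepper"), (4, 1, "milk"), (5, 1, "garlic"),
    (6, 2, "steak"), (7, 2, "pepper"), (8, 2, "ketchup"),
    (9, 3, "fish"), (10, 3, "lemon"), (11, 3, "cheese"),
    (12, 4, "veal"), (13, 4, "cream"), (14, 4, "salt"), (15, 4, "garlic"),
])

# Meat rows with no matching dairy row: the dairy side is NULL after the LEFT JOIN.
rows = conn.execute("""
    SELECT DISTINCT meat.recipe_num
    FROM recipe AS meat
    LEFT JOIN recipe AS dairy
      ON meat.recipe_num = dairy.recipe_num
     AND dairy.ingredient IN ('milk', 'cheese')
    WHERE meat.ingredient IN ('pork', 'steak', 'chicken', 'veal')
      AND dairy.recipe_num IS NULL
    ORDER BY meat.recipe_num
""").fetchall()
print(rows)  # [(2,), (4,)]
```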
I have a sql select command with grouping and I want to get the number of total rows. How do I achieve that?
My sql command:
```
select p.UserName, p.FirstName + ' ' + p.LastName as [FullName]
,count(b.billid) as [Count], sum(b.PercentRials) as [Sum] from Bills b
inner join UserProfiles p on b.PayerUserName=p.UserName
where b.Successful=1
group by p.UserName, p.FirstName + ' ' + p.LastName
```
I have tried these with no luck:
```
select count(*) from (select ...)
```
and
```
select count(select ...)
```
**EDIT**
this is the complete sql statement that I want to run:
```
select count(*) from ( select p.UserName, p.FirstName + ' ' + p.LastName as [FullName]
,count(b.billid) as [Count], sum(b.PercentRials) as [Sum] from Bills b
inner join UserProfiles p on b.PayerUserName=p.UserName
where b.Successful=1
group by p.UserName, p.FirstName + ' ' + p.LastName)
```
and I get this error on the last line:
```
Incorrect syntax near ')'.
``` | ```
SELECT COUNT(*)
FROM
(
select p.UserName, p.FirstName + ' ' + p.LastName as [FullName]
,count(b.billid) as [Count], sum(b.PercentRials) as [Sum] from Bills b
inner join UserProfiles p on b.PayerUserName=p.UserName
where b.Successful=1
group by p.UserName, p.FirstName + ' ' + p.LastName --<-- Removed the extra comma here
) A --<-- Use an Alias here
```
As I expected from your shown attempt, you were missing an alias:
```
select count(*)
from (select ...) Q --<-- This sub-query in From clause needs an Alias
```
**Edit**
If you only need to know the rows returned by this query and you are executing this query anyway somewhere in your code, you could simply make use of the [`@@ROWCOUNT`](https://msdn.microsoft.com/en-gb/library/ms187316.aspx "MSDN Documentation") function. Something like....
```
SELECT ...... --<-- Your Query
SELECT @@ROWCOUNT --<-- This will return the number of rows returned
-- by the previous query
``` | Try this code:
```
SELECT COUNT(*)
FROM (
SELECT p.UserName
,p.FirstName + ' ' + p.LastName AS [FullName]
,COUNT(b.billid) AS [Count]
,SUM(b.PercentRials) AS [Sum]
FROM Bills b
INNER JOIN UserProfiles p
ON b.PayerUserName = p.UserName
WHERE b.Successful = 1
GROUP BY p.UserName
,p.FirstName + ' ' + p.LastName
) a
```
Based on your edit: you were missing the derived table's alias.
If you look at `FROM` clause syntax you will see
> | **derived\_table [ AS ] table\_alias** [ ( column\_alias [ ,...n ] ) ]
>
> When a derived table, rowset or table-valued function, or operator
> clause (such as PIVOT or UNPIVOT) is used, the **required table\_alias** at
> the end of the clause is the associated table name for all columns,
> including grouping columns, returned.
<http://technet.microsoft.com/en-us/library/ms177634.aspx> | Sql: Select count(*) from (select ...) | [
"",
"sql",
"sql-server",
"count",
"subquery",
""
] |
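The fix both answers above make, wrapping the grouped query and giving the derived table an alias, can be sketched generically. This uses a simplified, hypothetical `Bills` table in SQLite (which happens to tolerate a missing alias, but SQL Server does not, hence the error in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Bills (PayerUserName TEXT, Successful INTEGER, PercentRials REAL)"
)
conn.executemany("INSERT INTO Bills VALUES (?, ?, ?)", [
    ("anna", 1, 10.0), ("anna", 1, 5.0), ("bob", 1, 7.5), ("carl", 0, 2.0),
])

# COUNT(*) over the grouped subquery = number of groups (distinct payers
# with a successful bill). The derived-table alias "A" is what SQL Server requires.
n = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT PayerUserName, SUM(PercentRials)
        FROM Bills
        WHERE Successful = 1
        GROUP BY PayerUserName
    ) A
""").fetchone()[0]
print(n)  # 2
```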
I am trying to select from `table_1` where `ITEM_FIELD_A` is not contained in `ITEM_FIELD_B`. The `ITEM_FIELD_B` values look as below. I was expecting that no `COVER_TAPE` or `SHIPPING_REELS` rows would be selected, but unfortunately it's not working.

**The sql I used to select the table**
```
select * from table_1 where MST.ITEM_FIELD_A not in ITEM_FIELD_B
```
Question:
In Oracle, is there any function to decode the string so that the above select statement will not return `COVER_TAPE` and `SHIPPING_REELS`? | The `IN` operator is used when you wish to compare (or negate) against one item in a list, such as
```
WHERE ITEM_FIELD_A NOT IN ('COVER_TAPE', 'SHIPPING_REELS', '')
```
What you want is the `LIKE` operator:
```
WHERE ITEM_FIELD_B NOT LIKE '%' || ITEM_FIELD_A || '%'
```
Apologies if I got the wildcard wrong; it's been a while since I last touched Oracle. | Check out the query below:
```
WITH TAB1 AS
( SELECT 'COVER_TAPE' ITEM_A FROM DUAL
UNION
SELECT 'CARRIER_TAPE' ITEM_A FROM DUAL
UNION
SELECT 'SHIPPING_REELS' ITEM_A FROM DUAL
),
TAB2 AS
(
SELECT 'COVER_TAPE,SHIPPING_REELS' ITEM_B FROM DUAL
)
SELECT ITEM_A, ITEM_B FROM TAB1, TAB2 WHERE INSTR(ITEM_B, ITEM_A) <=0
```
INSTR will return > 0 if the same sequence of characters is present. | How to convert String in SQL (ORACLE) | [
"",
"sql",
"oracle",
""
] |
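The `NOT LIKE '%' || ITEM_FIELD_A || '%'` containment test from the accepted answer behaves the same in SQLite, so it can be sketched there. One caveat worth knowing: `_` inside a `LIKE` pattern is a single-character wildcard, which happens to be harmless for these particular values but could cause false matches with other data (the `INSTR` approach avoids that):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (item_a TEXT)")
conn.execute("CREATE TABLE tab2 (item_b TEXT)")
conn.executemany("INSERT INTO tab1 VALUES (?)",
                 [("COVER_TAPE",), ("CARRIER_TAPE",), ("SHIPPING_REELS",)])
conn.execute("INSERT INTO tab2 VALUES ('COVER_TAPE,SHIPPING_REELS')")

# Keep only item_a values NOT contained in the comma-joined item_b string.
rows = conn.execute("""
    SELECT item_a FROM tab1, tab2
    WHERE item_b NOT LIKE '%' || item_a || '%'
""").fetchall()
print(rows)  # [('CARRIER_TAPE',)]
```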
```
create table myTable (i int, user_id int, a int, b int, c int)
```
The values are like this.
```
1, 1, 1, 1, 0
2, 1, 2, 2, 0
3, 2, 3, 3, 0
4, 2, 4, 4, 0
```
I want column "c" to be updated as "a" + "b" for user\_id = 1. How should I do this? | Select query
```
SELECT a, b, (a+b) AS c FROM mytable where user_id=1;
```
UPDATE query
```
UPDATE mytable SET c = a+b where user_id=1;
``` | Try this,
```
update myTable set c = a+b where user_id =1;
```
[**Demo**](http://sqlfiddle.com/#!2/bafcff/1) | How do I calculate a column value based on the existin columns | [
"",
"mysql",
"sql",
"sql-update",
""
] |
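The accepted `UPDATE` runs as-is in SQLite; a sketch with the question's rows, confirming only the user_id = 1 rows are touched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE myTable (i INTEGER, user_id INTEGER, a INTEGER, b INTEGER, c INTEGER)"
)
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?, ?, ?)", [
    (1, 1, 1, 1, 0), (2, 1, 2, 2, 0), (3, 2, 3, 3, 0), (4, 2, 4, 4, 0),
])

# Compute c from the other columns, but only for user 1.
conn.execute("UPDATE myTable SET c = a + b WHERE user_id = 1")

rows = conn.execute("SELECT i, c FROM myTable ORDER BY i").fetchall()
print(rows)  # [(1, 2), (2, 4), (3, 0), (4, 0)]
```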
```
1) user- id | nickname
5 hello
6 ouuu
7 youyou
2) team_leader- id | user_id | team_name | game |position
2 5 haha M.A Leader
3 7 nono M.A Leader
3) team_member- id | user_id | team_leader_id| game |position
1 6 2 M.A Member
4) user_game- id | user_id | game | character_game
1 5 M.A wahaha
2 6 M.A kiki
3 7 M.A popo
```
I want to display the team for the team\_leader with id = 2,
so the output should be:
Nickname Character name Position
hello wahaha Leader
ouuu kiki Member | First, you should **never use the `*` in queries**, as long as they aren't only for temporary testing purposes. Even if everything works as expected it can lead to unexpected errors later, e.g. when table schemata are changed. Always explicitly list the fields you wish to include in your query.
The problem with your query is that you got **name clashes** in the joined tables. Every table has an `id` field, so you have to **use alias names** and address them via qualified names to be able to clearly distinguish them.
```
SELECT u.id, u.nickname, t.id, t.user_id, t.team_name, t.game,
tm.id, tm.user_id, tm.team_id, ug.id, ug.user_id, ug.game, ug.character_game
FROM user AS u
JOIN user_game AS ug
ON ug.user_id = u.id
JOIN team AS t
ON t.user_id = u.id
JOIN team_member AS tm
ON t.id = tm.team_id
WHERE t.id = 2 AND ug.game = t.game
AND tm.team_id = 2;
```
**BTW:** If your application design allows that a user can have only a part of the "subtables" referencing user\_id or even none of them, you should think of using a `LEFT OUTER JOIN`. That would ensure that every user will have a row in the result, even those who aren't a team member, e.g.
```
SELECT u.id, u.nickname, t.id, t.user_id, t.team_name, t.game,
tm.id, tm.user_id, tm.team_id, ug.id, ug.user_id, ug.game, ug.character_game
FROM user AS u
LEFT OUTER JOIN user_game AS ug
ON ug.user_id = u.id
LEFT OUTER JOIN team AS t
ON t.user_id = u.id
LEFT OUTER JOIN team_member AS tm
ON t.id = tm.team_id
WHERE t.id = 2 AND ug.game = t.game
AND tm.team_id = 2;
``` | ```
select * from user u
inner join team t on u.id = t.user_id
inner join team_member tm on t.user_id = tm.user_id
inner join user_game ug on t.user_id = ug.user_id and t.game = ug.game where tm.team_id = 2
```
Pay attention to learning SQL joins. | SQL join 4 table | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I have a stored procedure that takes in two parameters. I can execute it successfully in Server Management Studio. It shows me the results which are as I expect. However it also returns a Return Value.
It has added this line,
```
SELECT 'Return Value' = @return_value
```
I would like the stored procedure to return the table it shows me in the results, not the return value, as I am calling this stored procedure from MATLAB and all it returns is true or false.
Do I need to specify in my stored procedure what it should return? If so how do I specify a table of 4 columns (varchar(10), float, float, float)? | A procedure can't return a table as such. However you can select from a table in a procedure and direct it into a table (or table variable) like this:
```
create procedure p_x
as
begin
declare @t table(col1 varchar(10), col2 float, col3 float, col4 float)
insert @t values('a', 1,1,1)
insert @t values('b', 2,2,2)
select * from @t
end
go
declare @t table(col1 varchar(10), col2 float, col3 float, col4 float)
insert @t
exec p_x
select * from @t
``` | I do this frequently using Table Types to ensure more consistency and simplify code. You can't technically return "a table", but you can return a result set and using `INSERT INTO .. EXEC ...` syntax, you can clearly call a PROC and store the results into a table type. In the following example I'm actually passing a table into a PROC along with another param I need to add logic, then I'm effectively "returning a table" and can then work with that as a table variable.
```
/****** Check if my table type and/or proc exists and drop them ******/
IF EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'returnTableTypeData')
DROP PROCEDURE returnTableTypeData
GO
IF EXISTS (SELECT * FROM sys.types WHERE is_table_type = 1 AND name = 'myTableType')
DROP TYPE myTableType
GO
/****** Create the type that I'll pass into the proc and return from it ******/
CREATE TYPE [dbo].[myTableType] AS TABLE(
[someInt] [int] NULL,
[somenVarChar] [nvarchar](100) NULL
)
GO
CREATE PROC returnTableTypeData
@someInputInt INT,
@myInputTable myTableType READONLY --Must be READONLY: table-valued parameters cannot be modified inside the proc
AS
BEGIN
--Return the subset of data consistent with the type
SELECT
*
FROM
@myInputTable
WHERE
someInt < @someInputInt
END
GO
DECLARE @myInputTableOrig myTableType
DECLARE @myUpdatedTable myTableType
INSERT INTO @myInputTableOrig ( someInt,somenVarChar )
VALUES ( 0, N'Value 0' ), ( 1, N'Value 1' ), ( 2, N'Value 2' )
INSERT INTO @myUpdatedTable EXEC returnTableTypeData @someInputInt=1, @myInputTable=@myInputTableOrig
SELECT * FROM @myUpdatedTable
DROP PROCEDURE returnTableTypeData
GO
DROP TYPE myTableType
GO
``` | SQL server stored procedure return a table | [
"",
"sql",
"sql-server",
"matlab",
"stored-procedures",
""
] |
Thanks in advance.
I want to show the rows with the most repeated values first, like this
```
col1 col2
1 A
1 B
1 C
1 D
2 A
2 B
2 C
4 D
4 E
3 A
```
In `col1`, since '1' is repeated four times it should come first, and '2', repeated thrice, comes second.
I need to write a SQL query to get this result.
Please help me. | I propose you this solution :
```
WITH temp AS (
SELECT col1, count(col2) nb_occurences
FROM tab
GROUP BY
col1)
SELECT
tab.col1, tab.col2, nb_occurences
FROM tab
INNER JOIN
temp
ON temp.col1 = tab.col1
ORDER BY nb_occurences DESC
```
I hope this will help you :)
Good Luck | Assuming SQL Server 2005 or newer you can make use of the `OVER()` clause:
```
SELECT *
FROM Table1
ORDER BY COUNT(*) OVER (PARTITION BY col1) DESC
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!3/82a2b/3/0) | Show most repeated columns first in sql server | [
"",
"sql",
"sql-server",
""
] |
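The CTE approach from the first answer above runs unchanged on SQLite (3.8.3+), so it can be sketched with the question's data. A secondary sort on `col1`/`col2` is added here only to make the output deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 INTEGER, col2 TEXT)")
conn.executemany("INSERT INTO tab VALUES (?, ?)", [
    (1, "A"), (1, "B"), (1, "C"), (1, "D"),
    (2, "A"), (2, "B"), (2, "C"),
    (4, "D"), (4, "E"), (3, "A"),
])

# Count occurrences per col1 value, join the count back, sort by it descending.
rows = conn.execute("""
    WITH temp AS (
        SELECT col1, COUNT(col2) AS nb_occurences
        FROM tab
        GROUP BY col1
    )
    SELECT tab.col1, tab.col2
    FROM tab
    INNER JOIN temp ON temp.col1 = tab.col1
    ORDER BY temp.nb_occurences DESC, tab.col1, tab.col2
""").fetchall()
print([c for c, _ in rows])  # [1, 1, 1, 1, 2, 2, 2, 4, 4, 3]
```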
In my database table, fields are saved as 2013-02-15 00:00:00.000. I want them to be 2013-02-15 23:59:59.999. So how do I convert 2013-02-15 00:00:00.000 to 2013-02-15 23:59:59.999? In other words, only change the minimum time to the maximum time. | ```
DECLARE @Time TIME = '23:59:59.999'
SELECT dateColumn + @Time
FROM tableName
```
[**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/c8eb8/1)
Edit
Cast `@Time` to DATETIME before adding:
```
DECLARE @Time TIME = '23:59:59.999'
SELECT dateColumn + CAST(@Time as DATETIME)
FROM tableName
``` | Easily done:
```
SELECT dateCol + '23:59:59'
``` | Convert DateTime Time from 00:00:00 to 23:59:59 | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
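The idea in both answers, keep the date and replace the time-of-day, looks like this outside SQL; a plain Python sketch for illustration. (One SQL Server caveat worth remembering: the legacy `datetime` type has 1/300-second precision, so `.999` rounds up to the next day's midnight; `.997` is the real maximum for that type.)

```python
from datetime import datetime, time

stored = datetime(2013, 2, 15, 0, 0, 0)  # column value 2013-02-15 00:00:00.000

# Keep the date part, swap in the end-of-day time.
end_of_day = datetime.combine(stored.date(), time(23, 59, 59))
print(end_of_day)  # 2013-02-15 23:59:59
```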
The title is pretty much explicit; my question is: if I get two dates with times:
* 01-10-2014 10:00:00
* 01-20-2014 20:00:00
Is it possible to pick a random datetime between these two datetimes?
I tried with the random() function but I don't really get how to use it with datetimes.
Thanks
Matthiew | You can do almost everything with the [date/time operators](http://www.postgresql.org/docs/current/static/functions-datetime.html):
```
select timestamp '2014-01-10 10:00:00' +
       random() * (timestamp '2014-01-20 20:00:00' -
                   timestamp '2014-01-10 10:00:00')
``` | I adapted @pozs answer, since I didn't have timestamps to go off of.
`90 days` is the time window you want and the `30 days` is how far out to push the time window. This is helpful when running it via a job instead of at a set time.
```
select NOW() + (random() * (NOW()+'90 days' - NOW())) + '30 days';
``` | PostgreSQL Get a random datetime/timestamp between two datetime/timestamp | [
"",
"sql",
"database",
"postgresql",
"datetime",
"timestamp",
""
] |
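The `start + random() * (end - start)` arithmetic from the accepted answer works the same anywhere; a Python sketch with the question's two datetimes:

```python
import random
from datetime import datetime

start = datetime(2014, 1, 10, 10, 0, 0)
end = datetime(2014, 1, 20, 20, 0, 0)

# Scale the full interval by a uniform factor in [0, 1).
picked = start + (end - start) * random.random()
print(picked)
```

Because `random.random()` is in `[0, 1)`, the result is always within the bounds (and never exactly `end`).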
I'm not very experienced when it comes to joining tables, so this may be the result of the way I'm joining them. I don't quite understand why this query is duplicating results. For instance, this should only return 3 results because I only have 3 rows for that specific job and revision, but it's returning 6; the duplicates are exactly the same as the first 3.
```
SELECT
checklist_component_stock.id,
checklist_component_stock.job_num,
checklist_revision.user_id,
checklist_component_stock.revision,
checklist_category.name as category,
checklist_revision.revision_num as revision_num,
checklist_revision.category as rev_category,
checklist_revision.per_workorder_number as per_wo_num,
checklist_component_stock.wo_num_and_date,
checklist_component_stock.posted_date,
checklist_component_stock.comp_name_and_number,
checklist_component_stock.finish_sizes,
checklist_component_stock.material,
checklist_component_stock.total_num_pieces,
checklist_component_stock.workorder_num_one,
checklist_component_stock.notes_one,
checklist_component_stock.signoff_user_one,
checklist_component_stock.workorder_num_two,
checklist_component_stock.notes_two,
checklist_component_stock.signoff_user_two,
checklist_component_stock.workorder_num_three,
checklist_component_stock.notes_three,
checklist_component_stock.signoff_user_three
FROM checklist_component_stock
LEFT JOIN checklist_category ON checklist_component_stock.category
LEFT JOIN checklist_revision ON checklist_component_stock.revision = checklist_revision.revision_num
WHERE checklist_component_stock.job_num = 1000 AND revision = 1;
```
Tables structure:
**checklist\_category**

**checklist\_revision**

**checklist\_component\_stock**
 | The line
```
LEFT JOIN checklist_category ON checklist_component_stock.category
```
was certainly supposed to be something like
```
LEFT JOIN checklist_category ON checklist_component_stock.category = checklist_category.category
```
Most other DBMSs would have reported a syntax error, but MySQL treats checklist\_component\_stock.category as a boolean. For MySQL a boolean is a number, which is 0 for FALSE and != 0 for TRUE. So every checklist\_component\_stock row with category != 0 is being connected to all records in checklist\_category. | As well as fixing Thorsten Kettner's suggestion, my foreign keys for the revisions were off. I was referencing checklist\_component\_stock.revision to checklist\_revision.revision\_num when instead I should have referenced it to checklist\_revision.id. | Why is this query duplicating results? | [
"",
"mysql",
"sql",
""
] |
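The diagnosis in the accepted answer, that a bare column in the `ON` clause is evaluated as a boolean, can be reproduced in SQLite as well (which shares MySQL's numeric-truthiness behavior here). Tiny hypothetical tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (id INTEGER, category INTEGER)")
conn.execute("CREATE TABLE category (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO stock VALUES (?, ?)", [(1, 2), (2, 0)])
conn.executemany("INSERT INTO category VALUES (?, ?)", [(1, "A"), (2, "B")])

# Buggy join: the ON clause is just a truthiness test, so every stock row with
# a non-zero category matches EVERY category row (duplicates!).
buggy = conn.execute(
    "SELECT COUNT(*) FROM stock LEFT JOIN category ON stock.category"
).fetchone()[0]

# Correct join: actually compare the key columns.
fixed = conn.execute(
    "SELECT COUNT(*) FROM stock LEFT JOIN category ON stock.category = category.id"
).fetchone()[0]
print(buggy, fixed)  # 3 2
```

The stock row with category 2 matches both category rows under the buggy condition, which is exactly the row-duplication the asker observed.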
I have a table which has two columns, id and name. I want to use the COUNT function in an insert so that I can use a query like:
```
Insert into table1(ID,Name) Values (Count + 1,'Name')
``` | Try this then,
```
INSERT INTO table1 (ID, Name)
SELECT COUNT(*) + 1, 'Name'
FROM table1;
```
Why not use SQL Server's built-in identity feature (the IsIdentity property), which would automatically increment the value for you? Then you won't even have to run a query to increment the value.
<http://msdn.microsoft.com/en-us/library/ms186775.aspx> | Nearly all database management systems have built in some magic for this. MySQL's using AUTO\_INCREMENT, Postgres sequences, MS SQL Server identity columns, ...
It depends of your database. | How to use count and insert in sql? | [
"",
"sql",
"insert",
"count",
""
] |
I have a simple table with only 4 fields.
<http://sqlfiddle.com/#!3/06d7d/1>
```
CREATE TABLE Assessment (
id INTEGER IDENTITY(1,1) PRIMARY KEY,
personId INTEGER NOT NULL,
dateTaken DATETIME,
outcomeLevel VARCHAR(2)
)
INSERT INTO Assessment (personId, dateTaken, outcomeLevel)
VALUES (1, '2014-04-01', 'L1')
INSERT INTO Assessment (personId, dateTaken, outcomeLevel)
VALUES (1, '2014-04-05', 'L2')
INSERT INTO Assessment (personId, dateTaken, outcomeLevel)
VALUES (2, '2014-04-03', 'E3')
INSERT INTO Assessment (personId, dateTaken, outcomeLevel)
VALUES (2, '2014-04-07', 'L1')
```
I am trying to select for each "personId" their latest assessment result based on the dateTaken.
So my desired output for the following data would be.
```
[personId, outcomeLevel]
[1, L2]
[2, L1]
```
Thanks,
Danny | Try this:
```
;with cte as
(select personId pid, max(dateTaken) maxdate
from assessment
group by personId)
select personId, outcomeLevel
from assessment a
inner join cte c on a.personId = c.pid
where c.maxdate = a.dateTaken
order by a.personId
``` | Here is a possible solution using common table expression:
```
WITH cte AS (
SELECT
ROW_NUMBER() OVER (PARTITION BY personId ORDER BY dateTaken DESC) AS rn
, personId
, outcomeLevel
FROM
[dbo].[Assessment]
)
SELECT
personId
, outcomeLevel
FROM
cte
WHERE
rn = 1
```
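If two assessments for the same person can share a dateTaken, ROW_NUMBER picks one of them arbitrarily. A variant using RANK returns every tied row instead (a sketch, in case that behaviour is preferred):
```
SELECT personId, outcomeLevel
FROM (
    SELECT
        personId
        , outcomeLevel
        , RANK() OVER (PARTITION BY personId ORDER BY dateTaken DESC) AS rk
    FROM [dbo].[Assessment]
) t
WHERE rk = 1
```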
**About CTEs**
> A common table expression (CTE) can be thought of as a temporary result set that is defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. Unlike a derived table, a CTE can be self-referencing and can be referenced multiple times in the same query. [From MSDN: Using Common Table Expressions](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx) | select the latest result based on DateTime field | [
"",
"sql",
"sql-server-2008",
""
] |
I need to execute the following query:
```
SELECT (COL1,COL2, ClientID)
FROM Jobs
Union
SELECT (ClientID,COL2,COL3)
FROM Clients WHERE (the ClientID= ClientID my first select)
```
I'm really stuck, I've been trying joins and unions and have no idea how to do this.
**EDIT** Query to create jobs table:
```
CREATE TABLE IF NOT EXISTS `jobs` (
`JobID` int(11) NOT NULL AUTO_INCREMENT,
`Title` varchar(32) NOT NULL,
`Trade` varchar(32) NOT NULL,
`SubTrade` varchar(300) NOT NULL,
`Urgency` tinyint(4) NOT NULL,
`DatePosted` int(11) NOT NULL,
`Description` varchar(500) NOT NULL,
`Photo` longblob,
`Photo2` longblob,
`Address` varchar(600) NOT NULL,
`ShowAddress` tinyint(4) NOT NULL,
`ShowExact` tinyint(4) NOT NULL,
`JobStatus` tinyint(4) NOT NULL,
`Longitude` double NOT NULL,
`Latitude` double NOT NULL,
`ClientID` int(11) NOT NULL,
`TradesmanID` int(11) DEFAULT NULL,
PRIMARY KEY (`JobID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=171 ;
```
and the query to create the clients table is:
```
CREATE TABLE IF NOT EXISTS `clients` (
`ClientID` int(11) NOT NULL AUTO_INCREMENT,
`FName` varchar(32) NOT NULL,
`SName` varchar(32) NOT NULL,
`Email` varchar(32) NOT NULL,
`HomePhone` int(11) NOT NULL,
`Mobile` varchar(30) NOT NULL,
`Address` varchar(100) NOT NULL,
`County` varchar(32) NOT NULL,
`PostCode` varchar(32) NOT NULL,
`UserName` varchar(32) NOT NULL,
`Password` varchar(32) NOT NULL,
`NotificationID` varchar(255) NOT NULL,
PRIMARY KEY (`ClientID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=96 ;
``` | ```
SELECT
Clients.ClientID
,Clients.NotificationID
,Clients.Email
,Clients.Mobile
,Clients.HomePhone
,Jobs.JobID
,Jobs.Title
,Jobs.Trade
,Jobs.Address AS JobAddress
,Jobs.Urgency
,Jobs.DatePosted
,Jobs.Description
,Jobs.Photo
,Jobs.Photo2
,Jobs.ShowAddress
,Jobs.ShowExact
,Jobs.JobStatus
,Jobs.TradesmanID
,Jobs.Longitude
,Jobs.Latitude
FROM
Clients
INNER JOIN
Jobs
on Clients.ClientId = Jobs.ClientId
``` | How about this? There is a bit of duplication (the select clause in the second union replicates the first statement), but it will work.
```
SELECT COL1,COL2, ClientID
FROM Jobs
Union
SELECT ClientID,COL2,COL3
FROM Clients WHERE ClientID IN (SELECT ClientID FROM Jobs)
``` | SQL UNION with Where clause a result of the first select | [
"",
"mysql",
"sql",
"join",
"union",
""
] |
I have an issue in Teradata where I am trying to build a historical contract table that lists a system, its corresponding contracts, and the start and end dates of each contract. This table would then be queried for reporting as a point-in-time table. Here is some code to better explain.
```
CREATE TABLE TMP_WORK_DB.SOLD_SYSTEMS
(
SYSTEM_ID varchar(5),
CONTRACT_TYPE varchar(10),
CONTRACT_RANK int,
CONTRACT_STRT_DT date,
CONTRACT_END_DT date
);
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('AAA', 'BEST', 10, '2012-01-01', '2012-06-30');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('AAA', 'BEST', 9, '2012-01-01', '2012-06-30');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('AAA', 'OK', 1, '2012-08-01', '2012-12-30');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('BBB', 'BEST', 10, '2013-12-01', '2014-03-02');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('BBB', 'BETTER', 7, '2013-12-01', '2017-03-02');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('BBB', 'GOOD', 4, '2016-12-02', '2017-12-02');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('CCC', 'BEST', 10, '2009-10-13', '2014-10-14');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('CCC', 'BETTER', 7, '2009-10-13', '2016-10-14');
INSERT INTO TMP_WORK_DB.SOLD_SYSTEMS VALUES ('CCC', 'OK', 2, '2008-10-13', '2017-10-14');
```
The required output would be:
```
SYSTEM_ID CONTRACT_TYPE CONTRACT_STRT_DT CONTRACT_END_DT CONTRACT_RANK
AAA BEST 01/01/2012 06/30/2012 10
AAA OK 08/01/2012 12/30/2012 1
BBB BEST 12/01/2013 03/02/2014 10
BBB BETTER 03/03/2014 03/02/2017 7
BBB GOOD 03/03/2017 12/02/2017 4
CCC OK 10/13/2008 10/12/2009 2
CCC BEST 10/13/2009 10/14/2014 10
CCC BETTER 10/15/2014 10/14/2016 7
CCC OK 10/15/2016 10/14/2017 2
```
I'm not necessarily looking to reduce rows, but am looking to get the correct state of the system\_id at any given point in time. Note that when a higher ranked contract ends while a lower ranked contract is still active, the lower ranked one picks up where the higher one left off.
We are using TD 14. I have been able to handle the easy records, where the dates flow sequentially and are of higher rank, but am having trouble with the overlaps, where two differently ranked contracts cover multiple date spans.
I found this blog post ([Sharpening Stones](http://walkingoncoals.blogspot.com/2009/12/fun-with-recursive-sql-part-3.html)) and got it working for the most part but am still having trouble setting the new start dates for the overlapping contracts.
Any help would be appreciated. Thanks.
---
**UPDATE 04/04/2014**
I came up with the following code which gives me exactly what I want, but I'm not sure of the performance. It works on smaller data sets of a few hundred rows, but I haven't tested it on several million:
**UPDATE 04/07/2014**
Updated the date subquery due to spool issues. This query explodes all days where the contract is possibly active and then uses the ROW\_NUMBER function to get the highest ranked CONTRACT\_TYPE per day. The MIN/MAX functions are then partitioned over the system and contract type to pick up when the highest ranked contract type changes.
**UPDATE - 2 - 04/07/2014**
I cleaned up the query and it seems to perform a little better.
```
SELECT
SYSTEM_ID
, CONTRACT_TYPE
, MIN(CALENDAR_DATE) NEW_START_DATE
, MAX(CALENDAR_DATE) NEW_END_DATE
, CONTRACT_RANK
FROM (
SELECT
CALENDAR_DATE
, SYSTEM_ID
, CONTRACT_TYPE
, CONTRACT_RANK
, ROW_NUMBER() OVER (PARTITION BY SYSTEM_ID, CALENDAR_DATE ORDER BY CONTRACT_RANK DESC, CONTRACT_STRT_DT DESC, CONTRACT_END_DT DESC) AS RNK
FROM SOLD_SYSTEMS t1
JOIN (
SELECT CALENDAR_DATE
FROM FULL_CALENDAR_TABLE ia
WHERE CALENDAR_DATE > DATE'2013-01-01'
)dt
ON CALENDAR_DATE BETWEEN CONTRACT_STRT_DT AND CONTRACT_END_DT
QUALIFY RNK = 1
)z1
GROUP BY 1,2,5
``` | Following approach uses the new PERIOD functions in TD13.10.
```
-- 1. TD_SEQUENCED_COUNT can't be used in joins, so create a Volatile Table
-- 2. TD_SEQUENCED_COUNT can't use additional columns (e.g. CONTRACT_RANK),
-- so simply create a new row whenever a period starts or ends without
-- considering CONTRACT_RANK
CREATE VOLATILE TABLE vt AS
(
WITH cte
(
SYSTEM_ID
,pd
)
AS
(
SELECT
SYSTEM_ID
-- PERIODs can easily be constructed on-the-fly, but the end date is not inclusive,
-- so I had to adjust to your implementation, CONTRACT_END_DT +/- 1:
,PERIOD(CONTRACT_STRT_DT, CONTRACT_END_DT + 1) AS pd
FROM SOLD_SYSTEMS
)
SELECT
SYSTEM_ID
,BEGIN(pd) AS CONTRACT_STRT_DT
,END(pd) - 1 AS CONTRACT_END_DT
FROM
TABLE (TD_SEQUENCED_COUNT
(NEW VARIANT_TYPE(cte.SYSTEM_ID)
,cte.pd)
RETURNS (SYSTEM_ID VARCHAR(5)
,Policy_Count INTEGER
,pd PERIOD(DATE))
HASH BY SYSTEM_ID
LOCAL ORDER BY SYSTEM_ID ,pd) AS dt
)
WITH DATA
PRIMARY INDEX (SYSTEM_ID)
ON COMMIT PRESERVE ROWS
;
-- Find the matching CONTRACT_RANK
SELECT
vt.SYSTEM_ID
,t.CONTRACT_TYPE
,vt.CONTRACT_STRT_DT
,vt.CONTRACT_END_DT
,t.CONTRACT_RANK
FROM vt
-- If both vt and SOLD_SYSTEMS have a NUPI on SYSTEM_ID this join should be
-- quite efficient
JOIN SOLD_SYSTEMS AS t
ON vt.SYSTEM_ID = t.SYSTEM_ID
AND ( t.CONTRACT_STRT_DT, t.CONTRACT_END_DT)
OVERLAPS (vt.CONTRACT_STRT_DT, vt.CONTRACT_END_DT)
QUALIFY
-- As multiple contracts for the same period are possible:
-- find the row with the highest rank
ROW_NUMBER()
OVER (PARTITION BY vt.SYSTEM_ID,vt.CONTRACT_STRT_DT
ORDER BY t.CONTRACT_RANK DESC, vt.CONTRACT_END_DT DESC) = 1
ORDER BY 1,3
;
-- Previous query might return consecutive rows with the same CONTRACT_RANK, e.g.
-- BBB BETTER 2014-03-03 2016-12-01 7
-- BBB BETTER 2016-12-02 2017-03-02 7
-- If you don't want that you have to normalize the data:
WITH cte
(
SYSTEM_ID
,CONTRACT_STRT_DT
,CONTRACT_END_DT
,CONTRACT_RANK
,CONTRACT_TYPE
,pd
)
AS
(
SELECT
vt.SYSTEM_ID
,vt.CONTRACT_STRT_DT
,vt.CONTRACT_END_DT
,t.CONTRACT_RANK
,t.CONTRACT_TYPE
,PERIOD(vt.CONTRACT_STRT_DT, vt.CONTRACT_END_DT + 1) AS pd
FROM vt
JOIN SOLD_SYSTEMS AS t
ON vt.SYSTEM_ID = t.SYSTEM_ID
AND ( t.CONTRACT_STRT_DT, t.CONTRACT_END_DT)
OVERLAPS (vt.CONTRACT_STRT_DT, vt.CONTRACT_END_DT)
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY vt.SYSTEM_ID,vt.CONTRACT_STRT_DT
ORDER BY t.CONTRACT_RANK DESC, vt.CONTRACT_END_DT DESC) = 1
)
SELECT
SYSTEM_ID
,CONTRACT_TYPE
,BEGIN(pd) AS CONTRACT_STRT_DT
,END(pd) - 1 AS CONTRACT_END_DT
,CONTRACT_RANK
FROM
TABLE (TD_NORMALIZE_MEET
(NEW VARIANT_TYPE(cte.SYSTEM_ID
,cte.CONTRACT_RANK
,cte.CONTRACT_TYPE)
,cte.pd)
RETURNS (SYSTEM_ID VARCHAR(5)
,CONTRACT_RANK INT
,CONTRACT_TYPE VARCHAR(10)
,pd PERIOD(DATE))
HASH BY SYSTEM_ID
LOCAL ORDER BY SYSTEM_ID, CONTRACT_RANK, CONTRACT_TYPE, pd ) A
ORDER BY 1, 3;
```
Edit: This is another way to get the result of the 2nd query without Volatile Table and TD\_SEQUENCED\_COUNT:
```
SELECT
t.SYSTEM_ID
,t.CONTRACT_TYPE
,BEGIN(CONTRACT_PERIOD) AS CONTRACT_STRT_DT
,END(CONTRACT_PERIOD)- 1 AS CONTRACT_END_DT
,t.CONTRACT_RANK
,dt.p P_INTERSECT PERIOD(t.CONTRACT_STRT_DT,t.CONTRACT_END_DT + 1) AS CONTRACT_PERIOD
FROM
(
SELECT
dt.SYSTEM_ID
,PERIOD(d, MIN(d)
OVER (PARTITION BY dt.SYSTEM_ID
ORDER BY d
ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)) AS p
FROM
(
SELECT
SYSTEM_ID
,CONTRACT_STRT_DT AS d
FROM SOLD_SYSTEMS
UNION
SELECT
SYSTEM_ID
,CONTRACT_END_DT + 1 AS d
FROM SOLD_SYSTEMS
) AS dt
QUALIFY p IS NOT NULL
) AS dt
JOIN SOLD_SYSTEMS AS t
ON dt.SYSTEM_ID = t.SYSTEM_ID
WHERE CONTRACT_PERIOD IS NOT NULL
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY dt.SYSTEM_ID,p
ORDER BY t.CONTRACT_RANK DESC, t.CONTRACT_END_DT DESC) = 1
ORDER BY 1,3
```
And based on that you can also include the normalization in a single query:
```
WITH cte
(
SYSTEM_ID
,CONTRACT_TYPE
,CONTRACT_STRT_DT
,CONTRACT_END_DT
,CONTRACT_RANK
,pd
)
AS
(
SELECT
t.SYSTEM_ID
,t.CONTRACT_TYPE
,BEGIN(CONTRACT_PERIOD) AS CONTRACT_STRT_DT
,END(CONTRACT_PERIOD)- 1 AS CONTRACT_END_DT
,t.CONTRACT_RANK
,dt.p P_INTERSECT PERIOD(t.CONTRACT_STRT_DT,t.CONTRACT_END_DT + 1) AS CONTRACT_PERIOD
FROM
(
SELECT
dt.SYSTEM_ID
,PERIOD(d, MIN(d)
OVER (PARTITION BY dt.SYSTEM_ID
ORDER BY d
ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING)) AS p
FROM
(
SELECT
SYSTEM_ID
,CONTRACT_STRT_DT AS d
FROM SOLD_SYSTEMS
UNION
SELECT
SYSTEM_ID
,CONTRACT_END_DT + 1 AS d
FROM SOLD_SYSTEMS
) AS dt
QUALIFY p IS NOT NULL
) AS dt
JOIN SOLD_SYSTEMS AS t
ON dt.SYSTEM_ID = t.SYSTEM_ID
WHERE CONTRACT_PERIOD IS NOT NULL
QUALIFY
ROW_NUMBER()
OVER (PARTITION BY dt.SYSTEM_ID,p
ORDER BY t.CONTRACT_RANK DESC, t.CONTRACT_END_DT DESC) = 1
)
SELECT
SYSTEM_ID
,CONTRACT_TYPE
,BEGIN(pd) AS CONTRACT_STRT_DT
,END(pd) - 1 AS CONTRACT_END_DT
,CONTRACT_RANK
FROM
TABLE (TD_NORMALIZE_MEET
(NEW VARIANT_TYPE(cte.SYSTEM_ID
,cte.CONTRACT_RANK
,cte.CONTRACT_TYPE)
,cte.pd)
RETURNS (SYSTEM_ID VARCHAR(5)
,CONTRACT_RANK INT
,CONTRACT_TYPE VARCHAR(10)
,pd PERIOD(DATE))
HASH BY SYSTEM_ID
LOCAL ORDER BY SYSTEM_ID, CONTRACT_RANK, CONTRACT_TYPE, pd ) A
ORDER BY 1, 3;
``` | ```
SEL system_id,contract_type,MAX(contract_rank),
CASE WHEN contract_strt_dt<prev_end_dt THEN prev_end_dt+1
ELSE contract_strt_dt
END AS new_start ,contract_strt_dt,contract_end_dt,
MIN(contract_end_dt) OVER (PARTITION BY system_id
ORDER BY contract_strt_dt,contract_end_dt ROWS BETWEEN 1 PRECEDING
AND 1 PRECEDING) prev_end_dt
FROM sold_systems
GROUP BY system_id,contract_type,contract_strt_dt,contract_end_dt
ORDER BY contract_strt_dt,contract_end_dt,prev_end_dt
``` | Create Historical Table from Dates with Ranked Contracts (Gaps and Islands?) | [
"",
"sql",
"date",
"teradata",
"gaps-and-islands",
""
] |
I have a table that stores information about transactions, where the KeyInfo column is not always available but a GUID is generated for all the entries in the same transaction.
```
GUID | KeyInfo | Message
================================================
123456 | No Info | Sample message 1
123456 | No Info | Sample message 2
123456 | Test-1 | Sample message 3
123456 | No Info | Sample message 4
321654 | No Info | Sample message 5
321654 | No Info | Sample message 6
321654 | Test-2 | Sample message 7
321654 | No Info | Sample message 8
789456 | Test-1 | Sample message 1
789456 | No Info | Sample message 2
789456 | Test-1 | Sample message 3
789456 | No Info | Sample message 4
```
Currently I can do a search like this:
```
select GUID, KeyInfo, Message from MyTable where KeyInfo = 'Test-1'
```
This only returns two rows
```
GUID | KeyInfo | Message
================================================
123456 | Test-1 | Sample message 3
789456 | Test-1 | Sample message 3
```
But I need a query that returns all the rows that belong to one transaction (same GUID), something like this:
```
GUID | KeyInfo | Message
================================================
123456 | Test-1 | Sample message 1
123456 | Test-1 | Sample message 2
123456 | Test-1 | Sample message 3
123456 | Test-1 | Sample message 4
789456 | Test-1 | Sample message 1
789456 | Test-1 | Sample message 2
789456 | Test-1 | Sample message 3
789456 | Test-1 | Sample message 4
```
Any ideas on how to achieve this? | ```
select GUID, KeyInfo, Message from MyTable where GUID
IN(SELECT GUID from MyTable Where KeyInfo = 'Test-1')
```
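An equivalent formulation with EXISTS, shown as a sketch (most optimizers plan the two the same way):
```
SELECT t.GUID, t.KeyInfo, t.Message
FROM MyTable t
WHERE EXISTS (
    SELECT 1
    FROM MyTable s
    WHERE s.GUID = t.GUID
      AND s.KeyInfo = 'Test-1'
)
```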
I think the `IN` query above will have better performance. | Here is one way to do it...
```
SELECT *
FROM MyTable
WHERE GUID IN (
SELECT GUID
FROM MyTable
WHERE KeyInfo = 'Test-1'
)
```
Unlike JOIN, you don't have to worry whether there is more than one row with `KeyInfo = 'Test-1'`. | SQL join within same table | [
"",
"sql",
"oracle",
""
] |
I have 2 tables and I'm inner joining them using EID.
```
CSCcode Description BNO BNO-CSCcode E_ID
05078 blah1 5430 5430-05078 1098
05026 blah2 5431 5431-05026 1077
05026 blah3 5431 5431-05026 3011
04020 blah4 8580 8580-04020 3000
07620 blah5 7560 7560-07620 7890
07620 blah6 7560 7560-07620 8560
05020 blah1 5560 5560-04020 1056
```
Second table
```
y/n EID
y 1056
n 1098
y 1077
n 3011
y 3000
n 7890
n 8560
```
I'm selecting all fields from table 1 and the y/n field from table 2, but it retrieves everything from table 2 including EID. I don't want to retrieve EID from table2 because the result would then have two EID columns.
My query
```
SELECT *, table2 .EID
FROM table1 INNER JOIN table2 ON table1 .E_ID = table2 .EID;
``` | > "I'm selecting all fields from table one"
No, you are selecting all fields from all tables. You need to specify the table if you only want all fields from one table:
```
SELECT table1.*, table2.EID
```
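Since the goal was the y/n column from table 2 rather than its EID, the full query might look like this (bracket-quoting of the y/n column name is assumed for Access):
```
SELECT table1.*, table2.[y/n]
FROM table1
INNER JOIN table2 ON table1.E_ID = table2.EID;
```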
However, using `*` is not good practice. It's better to specify the fields that you want, so that any field that you add to the table isn't automatically included, as that might break your queries. | You can't do things like 'SELECT \*, table2.EID'; you should list all the fields from table 1 explicitly. However, it is not good practice even if you are selecting from one table.
```
SELECT
table1.CSCcode,
    table1.Description,
    table1.BNO,
    table1.[BNO-CSCcode],
    table1.E_ID,
    table2.EID
FROM
    table1 INNER JOIN table2 ON table1.E_ID = table2.EID
``` | Inner join returns same columns from two tables access sql | [
"",
"sql",
"ms-access",
"join",
""
] |
I have the following query which isn't very efficient and a lot of the time brings back an out-of-memory message. Can anyone make any recommendations to help speed it up?
Thanks
Jim
```
DECLARE @period_from INT
SET @period_from = 201400
DECLARE @period_to INT
SET @period_to = 201414
Declare @length INT
Set @length = '12'
DECLARE @query VARCHAR(MAX)
SET @query = '%[^-a-zA-Z0-9() ]%'
SELECT 'dim_2' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_2 LIKE @query
UNION
SELECT 'dim_3' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_3 LIKE @query
UNION
SELECT 'dim_4' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_4 LIKE @query
UNION
SELECT 'dim_5' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_5 LIKE @query
UNION
SELECT 'dim_6' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_6 LIKE @query
UNION
SELECT 'dim_7' AS field, NULL AS Length,* FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_7 LIKE @query
UNION
SELECT 'ext_inv_ref' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND ext_inv_ref LIKE @query
UNION
SELECT 'ext_ref' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND ext_ref LIKE @query
UNION
SELECT 'description' AS field, NULL AS Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND description LIKE @query
UNION
SELECT 'Length dim_2' AS field,LEN(dim_2) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_2 is not null and len(dim_2) >@length
UNION
SELECT 'Length dim_3' AS field, LEN(dim_3) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_3 is not null and len(dim_3) >@length
UNION
SELECT 'Length dim_4' AS field, LEN(dim_4) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_4 is not null and len(dim_4) >@length
UNION
SELECT 'Length dim_5' AS field, LEN(dim_5) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_5 is not null and len(dim_5) >@length
UNION
SELECT 'Length dim_6' AS field, LEN(dim_6) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_6 is not null and len(dim_6) >@length
UNION
SELECT 'Length dim_7' AS field, LEN(dim_7) as Length, * FROM table1 WHERE client = 'CL'AND period >= @period_from AND @period_to <= @period_to AND dim_7 is not null and len(dim_7) >@length
``` | You can reduce the number of unions significantly, but the work then goes into the WHERE clause. The SQL Query optimiser should figure out that you only need to go through the rows in the table once for each union statement, so it should be quicker. Try it like this and see!
```
SELECT
CASE
WHEN dim_2 like @query Then 'dim_2'
WHEN dim_3 like @query Then 'dim_3'
WHEN dim_4 like @query Then 'dim_4'
WHEN dim_5 like @query Then 'dim_5'
WHEN dim_6 like @query Then 'dim_6'
WHEN dim_7 like @query Then 'dim_7'
WHEN ext_inv_ref LIKE @query Then 'ext_inv_ref'
    WHEN ext_ref LIKE @query Then 'ext_ref'
    WHEN description LIKE @query Then 'description'
  END AS field,
NULL AS Length,
*
FROM table1
WHERE client = 'CL' AND period >= @period_from AND period <= @period_to
AND (dim_2 LIKE @query
OR dim_3 LIKE @query
OR dim_4 LIKE @query
OR dim_5 LIKE @query
OR dim_6 LIKE @query
OR dim_7 LIKE @query
OR ext_inv_ref LIKE @query
OR ext_ref LIKE @query
OR description LIKE @query)
```
UNION
```
SELECT
CASE
WHEN dim_2 is not null and len(dim_2) >@length Then 'Length dim_2'
WHEN dim_3 is not null and len(dim_3) >@length Then 'Length dim_3'
....
END AS field,
LEN(dim_2) as Length,
*
FROM table1
WHERE client = 'CL' AND period >= @period_from AND period <= @period_to
AND ((dim_2 is not null and len(dim_2) >@length)
OR
(dim_3 is not null and len(dim_3) >@length)
OR ....
)
``` | I don't think you can optimize this much. The database has dim1 to dim7 as columns of one table. Now you want to treat them as if they were stand-alone columns. So the database design doesn't meet your requirement. If this is just an exception then you will have to live with it. If, however, this usage becomes typical, then one should think about changing the database design and having an additional table for the dimensions.
The one thing you unnecessarily do is to use **UNION** which lets the dbms look out for duplicates. As your records start with a different constant per union group there will be none. Use **UNION ALL** instead. | Run Speed (inefficient query) | [
"",
"sql",
"performance",
"t-sql",
""
] |
In a MySQL database, I have two tables. I need to count how many rows in `tbl1` have matches in `tbl2`. Here is the simplified relevant table structure:
```
tbl1
code (pk)
label
tbl2
id
code1 (fk to tbl1.code if there is a match for a given row)
reltype
```
I have the following sql to start with:
```
SELECT code, label FROM tbl1 INNER JOIN tbl2 ON tbl1.code=tbl2.code1
WHERE reltype='desiredtype';
```
I think this tells me how many rows in `tbl2` have matches in tbl1. Is this correct? Also, the reason for my posting is, **how can I change this to determine how many rows in `tbl1` have matches in `tbl2`?** | ```
select count(*) from tbl1
where code in (select code1 from tbl2)
```
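To also apply the `reltype` filter from the original query, push it into the subquery (a sketch):
```
select count(*) from tbl1
where code in (select code1 from tbl2 where reltype = 'desiredtype')
```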
That should do it; you might need `DISTINCT code1` in the subquery for optimisation. | ```
SELECT
COUNT(tbl1.code)
FROM
tbl1 INNER JOIN tbl2 ON tbl1.code=tbl2.code1
WHERE
tbl2.reltype='desiredtype'
GROUP BY
tbl1.code
``` | counting rows that have join matches | [
"",
"mysql",
"sql",
"join",
""
] |
I have written this query:
```
SELECT
MemberId,
FirstName,LastName,
[DateOfChange]
,(select title from StatusList where idStatus=[OldStatus]) as [OldStatus]
,(select title from StatusList where idStatus=[NewStatus]) as [NewStatus]
FROM [statusLog] , Users where
statusLog.IdUser=Users.IdUser
```
This query gives me following result:

I have OldStatus, NewStatus and DateOfChange as columns.
I just want the difference in hours between status changes.
I.e. for the change from the old status OnCall to Patrol, I want to find the difference between the two dates:
```
2014-04-04 16:13:33:000 and 2014-04-04 16:13:44:000
```
I tried:
```
SELECT
MemberId,
FirstName,LastName,
[DateOfChange],
DATEDIFF(HOUR,select [DateOfChange] from statusLog,Users where idstatus=[OldStatus]
,select [DateOfChange] from statusLog,Users where idstatus=[NewStatus])
,(select title from StatusList where idStatus=[OldStatus]) as [OldStatus]
,(select title from StatusList where idStatus=[NewStatus]) as [NewStatus]
FROM [statusLog] , Users where
statusLog.IdUser=Users.IdUser
```
But this didn't work.
The two tables I have joined are:
Users:

statusLog:

Please help me.
How can I get the difference in hours in the above query?
**Edit:**
```
SELECT
MemberId,
FirstName,LastName,
[DateOfChange] ,
(SELECT
DATEDIFF(HOUR, SL.DateOfChange, SLN.StatusTo) AS StatusDuration
FROM
StatusLog SL
OUTER APPLY (
SELECT TOP(1)
DateOfChange AS StatusTo
FROM
StatusLog SLT
WHERE
SL.IdUser = SLT.IdUser
AND SLT.DateOfChange > SL.DateOfChange
ORDER BY
SLT.DateOfChange ASC
) SLN) Hourss
,(select title from StatusList where idStatus=[OldStatus]) as [OldStatus]
,(select title from StatusList where idStatus=[NewStatus]) as [NewStatus]
FROM [statusLog] , Users where
statusLog.IdUser=Users.IdUser
```
**Edit 2:**
 | Here is a SQL query returning the desired result simply using a sub select:
```
SELECT [users].MemberId,
[users].FirstName,
[users].LastName,
thisLog.DateOfChange,
statusList1.title as OldStatus,
statuslist2.title as NewStatus,
(SELECT TOP 1 DATEDIFF(hour,lastLog.DateOfChange,thisLog.DateOfChange)
from [dbo].[statusLog] lastLog WHERE lastLog.IdUser = thisLog.IdUser AND lastLog.DateOfChange < thisLog.DateOfChange
ORDER BY DateOfChange desc ) AS HoursSinceLastChange
FROM [dbo].[statusLog] thisLog
INNER JOIN [users] ON [users].IdUser=thisLog.IdUSer
INNER JOIN StatusList statusList1 ON statusList1.idStatus=thisLog.OldStatus
INNER JOIN StatusList statusList2 on statusList2.idStatus=thisLog.Newstatus
order by DateOfChange desc
```
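As an aside: on SQL Server 2012 or later, LAG would replace the correlated subquery entirely. It is not available on the 2008 R2 version tagged in the question, so this is only a sketch:
```
SELECT u.MemberId, u.FirstName, u.LastName, sl.DateOfChange,
       DATEDIFF(hour,
                LAG(sl.DateOfChange) OVER (PARTITION BY sl.IdUser
                                           ORDER BY sl.DateOfChange),
                sl.DateOfChange) AS HoursSinceLastChange
FROM [dbo].[statusLog] sl
INNER JOIN [users] u ON u.IdUser = sl.IdUser
ORDER BY sl.DateOfChange DESC
```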
Hopefully I got all your column and table names correct. | Start with this one (you still have to join Users and select the columns you want):
```
SELECT
SL.DateOfChange AS StatusFrom
, SLN.StatusTo AS
, DATEDIFF(HOUR, SL.DateOfChange, SLN.StatusTo) AS StatusDuration
FROM
StatusLog SL
OUTER APPLY (
SELECT TOP(1)
DateOfChange AS StatusTo
FROM
StatusLog SLT
WHERE
SL.IdUser = SLT.IdUser
AND SLT.DateOfChange > SL.DateOfChange
ORDER BY
SLT.DateOfChange ASC
) SLN
INNER JOIN Users U
ON SL.IdUser = U.IdUser
INNER JOIN StatusList SLO -- Old status
ON SL.OldStatus = SLO.idStatus
INNER JOIN StatusList SLC -- Current status
ON SL.NewStatus = SLC.idStatus
```
> The APPLY operator allows you to invoke a table-valued function for each row returned by an outer table expression of a query. The table-valued function acts as the right input and the outer table expression acts as the left input. The right input is evaluated for each row from the left input and the rows produced are combined for the final output. The list of columns produced by the APPLY operator is the set of columns in the left input followed by the list of columns returned by the right input. [From MSDN: Using APPLY](http://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx)
**As a side note:** The table-valued function could be a subquery too.
I suggest you that use the explicit join syntax (`INNER JOIN`) instead of the implicit one (list the tables and use the WHERE condition). | Find Difference between hours | [
"",
"sql",
"database",
"sql-server-2008-r2",
""
] |
Employees of the company are divided into categories A, B and C regardless of the division they work in (Finance, HR, Sales...)
How can I write a query (Access 2010) in order to retrieve the number of employees for each category and each division?
The final output will be an excel sheet where the company divisions will be in column A, category A in column B, category B in column C and category C in column D.
I thought an `IIF()` nested in a `COUNT()` would do the job but it actually counts the total number of employees instead of giving the breakdown by category.
Any idea?
```
SELECT
tblAssssDB.[Division:],
COUNT( IIF( [Category] = "A", 1, 0 ) ) AS Count_A,
COUNT( IIF( [Category] = "B", 1, 0 ) ) AS Count_B,
COUNT( IIF( [ET Outcome] = "C", 1, 0 ) ) AS Count_C
FROM
tblAssssDB
GROUP BY
tblAssssDB.[Division:];
```
My aim is to code a single sql statement and avoid writing sub-queries in order to calculate the values for each division. | `Count` counts every non-Null value ... so you're counting 1 for each row regardless of the `[Category]` value.
If you want to stick with `Count` ...
```
Count(IIf([Category]="A",1,Null))
```
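Plugged into the query from the question, the Count form would read as follows (the third test is assumed to be `[Category] = "C"` rather than the `[ET Outcome]` column that appears in the question's code):
```
SELECT [Division:],
       Count(IIf([Category]="A",1,Null)) AS Count_A,
       Count(IIf([Category]="B",1,Null)) AS Count_B,
       Count(IIf([Category]="C",1,Null)) AS Count_C
FROM tblAssssDB
GROUP BY [Division:];
```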
Otherwise switch to `Sum` ...
```
Sum(IIf([Category]="A",1,0))
``` | Use `GROUP BY` instead of `IIF`. Try this:
```
SELECT [Division:], [Category], Count([Category]) AS Category_Count
FROM tblAssssDB
GROUP BY [Division:], [Category];
``` | Count + IIF - Access query | [
"",
"sql",
"ms-access",
"count",
"iif",
""
] |
Silly question but I can't find a reasonable answer.
I need to order a field containing **hexadecimal** values like:
```
select str from
(
select '2212A' str from dual union all
select '2212B' from dual union all
select '22129' from dual union all
select '22127' from dual union all
select '22125' from dual union all
select '22126' from dual
) t
order by str asc;
```
This query gives:
```
STR
------------
2212A
2212B
22125
22126
22127
22129
```
I would like
```
STR
------------
22125
22126
22127
22129
2212A
2212B
```
How can I do that? | Are these HEX numbers? Will the max letter be F? Then convert hex to decimal:
```
select str
from t
order by to_number(str,'XXXXXXXXXXXX');
```
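Applied to a few of the sample rows from the question (the format mask just needs at least as many X's as the longest value):
```
select str from
(
  select '2212A' str from dual union all
  select '22125' from dual union all
  select '2212B' from dual
) t
order by to_number(str, 'XXXXX');
-- returns 22125, 2212A, 2212B
```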
EDIT: Stupid me. The title says it's hex numbers :P So this solution should work for you. | You have to clarify further what you want to achieve, but in general, you can sort your table on that column and the first row is the one with smallest value:
```
SELECT * FROM mytable ORDER BY mycolumn
```
If you want one record only:
```
SELECT * FROM (SELECT * FROM mytable ORDER BY mycolumn) WHERE rownum = 1
``` | Oracle : how to order hexadecimal field | [
"",
"sql",
"oracle",
"sql-order-by",
"alphanumeric",
""
] |
I'm trying to get the gridview details into textboxes for better viewing and editing.
I'm listing the following code to create the gridview:
```
'Finds all cases that are not closed
Protected Sub listAllCases()
sqlCommand = New SqlClient.SqlCommand("SELECT TC.caseId,TS.subName,TSU.userName,TC.caseType,TC.caseRegBy,TC.caseTopic,TC.caseDesc,TC.caseSolu,TC.caseDtCreated, TC.caseStatus FROM TBL_CASE TC INNER JOIN TBL_SUBSIDIARY_USER TSU ON TC.caseUser = TSU.userID INNER JOIN TBL_SUBSIDIARY TS on TSU.usersubId = TS.subId WHERE TC.caseStatus = 0 order by caseId")
sqlCommand.Connection = sqlConnection
sqlConnection.Open()
'sqlCommand.Parameters.AddWithValue("@subID", Me.caseSub.SelectedItem.Value)
Dim dr As SqlClient.SqlDataReader
dr = sqlCommand.ExecuteReader
If dr.HasRows Then
allCases.DataSource = dr
allCases.DataBind()
Else
allCases.DataSource = Nothing
allCases.DataBind()
End If
dr.Close()
sqlConnection.Close()
End Sub
```
Then I handle the gridview's OnSelectedIndexChanged event and write this:
```
Protected Sub OnSelectedIndexChanged(sender As Object, e As EventArgs)
Dim row As GridViewRow = allCases.SelectedRow
txtcase.Text = row.Cells(1).Text()
txtsub.Text = row.Cells(2).Text
txtuser.Text = row.Cells(3).Text
oDato.Text = row.Cells(9).Text
lDato.Text = "Saken er ikke lukket!"
txttype.Text = row.Cells(4).Text.ToString
txtregBy.Text = row.Cells(5).Text.ToString
txttopic.Text = row.Cells(6).Text
txtDesc.Text = row.Cells(7).Text
txtSolu.Text = row.Cells(8).Text
lblinfo.Text = row.Cells(6).Text
End Sub
```
I can't get all of cells 1 to 9 to display: cells 4 to 8 are not listed, or come back blank, even though I know they contain data.
Any tips or clues are much appreciated! | OK, I finally got it right. The code I used to get it working is as follows:
```
Protected Sub allCases_OnSelectedIndexChanged(sender As Object, e As EventArgs)
Dim row As GridViewRow = allCases.SelectedRow
txtcase.Text = row.Cells(1).Text()
txtsub.Text = row.Cells(2).Text.ToString
txtuser.Text = row.Cells(3).Text
oDato.Text = row.Cells(9).Text
txtDesc.Text = TryCast(row.FindControl("lblcaseDesc"), Label).Text
txtSolu.Text = TryCast(row.FindControl("lblcaseSolu"), Label).Text
end sub
```
This means I need to fetch the BoundFields and TemplateFields differently.
Anyhow, thanks for the responses. | Like this?
```
Private Sub DataGridView1_CellContentClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.DataGridViewCellEventArgs) Handles DataGridView1.CellContentClick
TextBox1.Text = DataGridView1.Rows(e.RowIndex).Cells(0).Value.ToString
TextBox2.Text = DataGridView1.Rows(e.RowIndex).Cells(1).Value.ToString
TextBox3.Text = DataGridView1.Rows(e.RowIndex).Cells(2).Value.ToString
TextBox4.Text = DataGridView1.Rows(e.RowIndex).Cells(3).Value.ToString
TextBox5.Text = DataGridView1.Rows(e.RowIndex).Cells(4).Value.ToString
TextBox6.Text = DataGridView1.Rows(e.RowIndex).Cells(5).Value.ToString
TextBox7.Text = DataGridView1.Rows(e.RowIndex).Cells(6).Value.ToString
TextBox8.Text = DataGridView1.Rows(e.RowIndex).Cells(7).Value.ToString
TextBox9.Text = DataGridView1.Rows(e.RowIndex).Cells(8).Value.ToString
TextBox10.Text = DataGridView1.Rows(e.RowIndex).Cells(9).Value.ToString
End Sub
```
This will display all the information in the row by clicking the record... | Adding gridview line data into textboxes and labels | [
"",
"asp.net",
"sql",
"vb.net",
"gridview",
""
] |
I am a complete beginner at SQL Server. I was given a database that was in SQL Server backup file format. I figured out how to restore the databases, but now I am looking to export the tables (eventually to Stata .dta files).
I am confused about how to view and extract any metadata my SQL Server database might contain. For example, I have one column labeled `Sex` and the values are 1 and 2. However, I have no idea which number refers to male and which refers to female. How would I view the column description (if it exists) to see if there is any labeling that might clarify this issue?
Edit: Quick question. If I use the Import/Export Wizard, will that automatically extract the meta-data? | This is by far the best post for exporting to excel from SQL:
<http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=49926>
**To export data to a new EXCEL file with headings (column names)**, create the following procedure:
```
create procedure proc_generate_excel_with_columns
(
@db_name varchar(100),
@table_name varchar(100),
@file_name varchar(100)
)
as
--Generate column names as a recordset
declare @columns varchar(8000), @sql varchar(8000), @data_file varchar(100)
select
@columns=coalesce(@columns+',','')+column_name+' as '+column_name
from
information_schema.columns
where
table_name=@table_name
select @columns=''''''+replace(replace(@columns,' as ',''''' as '),',',',''''')
--Create a dummy file to have actual data
select @data_file=substring(@file_name,1,len(@file_name)-charindex('\',reverse(@file_name)))+'\data_file.xls'
--Generate column names in the passed EXCEL file
set @sql='exec master..xp_cmdshell ''bcp " select * from (select '+@columns+') as t" queryout "'+@file_name+'" -c'''
exec(@sql)
--Generate data in the dummy file
set @sql='exec master..xp_cmdshell ''bcp "select * from '+@db_name+'..'+@table_name+'" queryout "'+@data_file+'" -c'''
exec(@sql)
--Copy dummy file to passed EXCEL file
set @sql= 'exec master..xp_cmdshell ''type '+@data_file+' >> "'+@file_name+'"'''
exec(@sql)
--Delete dummy file
set @sql= 'exec master..xp_cmdshell ''del '+@data_file+''''
exec(@sql)
```
After creating the procedure, execute it by supplying database name, table name and file path
```
EXEC proc_generate_excel_with_columns 'your dbname', 'your table name','your file path'
``` | If your database admin and / or the developer have used what is called the **extended properties** to document the database tables, then you can retrieve those descriptions with this T-SQL statement:
```
SELECT
TableName = t.Name,
ColumnName = c.Name,
ep.class_desc,
ep.value
FROM
sys.extended_properties ep
INNER JOIN
sys.tables t ON ep.major_id = t.object_id
INNER JOIN
sys.columns c ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
```
It uses some of the SQL Server **catalog views** (`sys.tables` etc.) that contain all the valuable metadata information about SQL Server database objects. | How do I read meta-data from a SQL Server database table? | [
"",
"sql",
"sql-server",
"database",
"metadata",
""
] |
I have a TimeStamp field in a MySQL database that I'm trying to pull data from. I'm trying to get it as a string, so I've been using the following query:
`select CONVERT(VARCHAR, date_created, 120) from junk;`
It throws the error:
`ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'VARCHAR, date_modified, 120) from junk limit 10' at line 1`
Can someone please tell me what I'm doing wrong? | If you want to format explicitly to yyyymmdd (style 112 in SQL Server):
```
Select DATE_FORMAT(somedate, '%Y%m%d') From SomeTable
``` | `CONVERT()` in `MySQL` is used to convert between different character sets; you need `DATE_FORMAT()`:
```
SELECT DATE_FORMAT(date_created, '%Y%m%d %H%i%S')
FROM Junk
```
Update: Originally had `CAST()` incorrectly using `VARCHAR()`, but `CAST()` will also work:
```
SELECT CAST(date_created AS CHAR(10))
FROM Junk
```
[DATE\_FORMAT() options](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format) | CONVERT or CAST field in SQL Query | [
"",
"mysql",
"sql",
"casting",
""
] |
I need help with an SQL query. The following table is a result of several joined tables and filtered by some columns. It is now in a (hopefully correct) simplified state.
```
----------------------
| A | B | C | D |
|----+-----+---+-----|
| 81 | 651 | n | 656 |
|----+-----+---+-----|
| 81 | 651 | j | 658 |
|----+-----+---+-----|
| 81 | 804 | n | 659 |
|----+-----+---+-----|
| 81 | 651 | n | 660 |
|----+-----+---+-----|
| 81 | 512 | j | 660 |
|----+-----+---+-----|
| 81 | 670 | j | 660 |
|----+-----+---+-----|
| 81 | 512 | n | 668 |
|----+-----+---+-----|
| 81 | 651 | n | 668 |
|----+-----+---+-----|
| 81 | 670 | n | 668 |
|----+-----+---+-----|
| 81 | 651 | n | 414 |
----------------------
```
I now need to further define the result.
* Have only one row per value in D
* If there is one where `C='j'` take one of these.
The new result should look like this:
```
----------------------
| A | B | C | D |
|----+-----+---+-----|
| 81 | 651 | n | 656 |
|----+-----+---+-----|
| 81 | 651 | j | 658 |
|----+-----+---+-----|
| 81 | 804 | n | 659 |
|----+-----+---+-----|
| 81 | 512 | j | 660 |
|----+-----+---+-----|
| 81 | 512 | n | 668 |
|----+-----+---+-----|
| 81 | 651 | n | 414 |
----------------------
```
As can be seen for `D='660'` there are two rows with `C='j'`. I took the first of them.
For `D='668'` there is no row with `C='j'`. So I don't care which one should stay. I took the first of them.
So how can I achieve this? | Try this:
```
;with cte as
(select a,b,c,d, row_number() over (partition by d order by c,b) rn
from your_derived_resultset)
select a,b,c,d
from cte
where rn = 1
``` | Try this query my friend:
```
with t1 as
(select * from table1 where D not in
(select D from table1 where c = 'j')
union all
select * from table1 where D in
(select D from table1 where c = 'j')
and c = 'j')
select a,min(b) b,c,d from t1 group by a,c,d;
```
## [SQL Fiddle](http://sqlfiddle.com/#!4/d75b3/8) | Conditionally keep Rows in Selection | [
"",
"sql",
"oracle11g",
""
] |
I have 2 tables. One is a list of products, and the second is a list of images connected to each product.
PRODUCTS as P
```
ID | NAME
1 | apple
2 | plum
3 | carrot
4 | strawberry
5 | pie
```
IMAGES as IM
```
PRODUCT_ID | IMAGE | I_NAME
1 | 1 | app_abc.jpg
1 | 2 | apple.jpg
1 | 3 | appleonemoretime.jpg
2 | 1 | plum.jpg
2 | 2 | plum2.jpg
2 | 3 | plum3.jpg
2 | 4 | plum4.jpg
3 | 1 | somecarrot.jpg
4 | 1 | s1.jpg
```
etc...
Additional info:
- Each product has min 1 image.
- The max number of images connected with one product is 60. I would like to get a list of products with image names **(one row = one product)**.
- I will search products by product.id - I want to **get images in one column**, separated by commas; **I do not want to get 60 'null' columns.**
For instance: if I search of p.id (1, 3) I would like to get something like:
```
P.ID | IM1.I_NAME
1 | app_abc.jpg, apple.jpg, appleonemoretime.jpg
3 | somecarrot.jpg
```
Is there a way? Is 'COALESCE' a good fit for this?
What I have now is:
```
select p.id
from products p
join images im on im.product_id = p.id
where p.id in (1, 3)
``` | [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) is your best friend; your query should look like this:
```
SELECT p.id, GROUP_CONCAT( im.i_name SEPARATOR '; ' ) AS images
FROM products p
LEFT JOIN images im ON (im.product_id = p.id)
WHERE p.id IN (1, 3) GROUP BY p.id
``` | You can use `group_concat` in your query. Try this:
```
select p.id, group_concat(im.I_NAME)
from products p
join images im on im.product_id = p.id
where p.id in (1, 3)
group by p.id
``` | MySQL \ join 2 tables and show results from 2 rows in one column | [
"",
"mysql",
"sql",
"join",
"subquery",
""
] |
I have created a view without any problem in Oracle, but I cannot update it. I want to update the view in Oracle; please help. Is it possible? | You can create an `instead of` trigger on the view, [as described in the documentation](http://docs.oracle.com/cd/E11882_01/appdev.112/e10766/tdddg_triggers.htm#TDDDG52800):
> A view presents the output of a query as a table. If you want to change a view as you would change a table, you must create `INSTEAD OF` triggers. Instead of changing the view, they change the underlying tables.
Once you have that trigger in place, you can update the view using the same syntax as if it was a table.
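For a concrete feel of the mechanism, here is a hedged sketch using Python's `sqlite3` (SQLite also supports `INSTEAD OF` triggers on views); every table, view, and trigger name below is invented, since the real definitions aren't shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO emp VALUES (1, 'alice');
    CREATE VIEW emp_v AS SELECT id, name FROM emp;
    -- Without this trigger, UPDATE against the view would fail.
    CREATE TRIGGER emp_v_upd INSTEAD OF UPDATE ON emp_v
    BEGIN
        UPDATE emp SET name = NEW.name WHERE id = OLD.id;
    END;
""")
# The view now accepts UPDATE with the same syntax as a table:
cur.execute("UPDATE emp_v SET name = 'bob' WHERE id = 1")
print(cur.execute("SELECT name FROM emp").fetchone()[0])  # bob
```

The trigger rewrites the view update into an update on the base table, which is the same idea the Oracle documentation's example uses.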
You haven't shown your view or table definitions so there isn't enough information to provide a useful example; fortunately the documentation has one you can use as a starting point. | View is a select of your table, my friend; therefore if you want to update your view you must update your table, and then you will see the changes in your view. | How to update a view in oracle | [
"",
"sql",
"oracle",
"view",
""
] |
For example, I need to select all of one user's files' IDs and use those IDs in several updates in a row. Is there a way to use WITH in multiple updates other than copying and pasting it into each update query? | No, it's not really possible. See [this link](http://technet.microsoft.com/en-us/library/ms175972%28v=sql.105%29.aspx) (for TSQL):
> This is derived from a simple query and defined within the execution scope of a single SELECT, INSERT, UPDATE, MERGE, or DELETE statement.
In PostgreSQL, it is the same ([see here](http://www.postgresql.org/docs/9.1/static/queries-with.html)):
> These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query.
Simplest way to change WITH to make it work with more queries would be rewriting it to `SELECT INTO` queries:
```
SELECT a,b,c
INTO #TemporaryTable
FROM YourTable -- hypothetical source table; the original snippet omitted the FROM clause
WHERE a<>b
(do something)
DROP TABLE #TemporaryTable
``` | Are you running all your UPDATE queries in a single stored procedure / transaction? If so, just write the results of your CTE to a temp table (or a table variable, if you're savvy - but then they're harder to debug - ahh well) for re-use and then drop it when you're done. Sure you'll suffer a teensy bit of IO performance but it'll keep your code a lot cleaner. It's basically what temp tables are for - intermediate data storage. | Is it possible to use a WITH clause in multiple updates without copying it to each one? | [
"",
"sql",
"postgresql",
"sql-update",
"with-statement",
""
] |
I would like to know if it is possible to select the two most recent dates from a column in a table. Please see the simple example below. I know to get the max date I can use the max function. I'm also aware that I could then do another max statement with a where condition that states it must be less than the first date returned from my first max query. I was wondering though if there was a way of doing this in one query?
```
Name DateAdded
ABC 2014-04-20
ABC 2014-04-20
ABC 2014-03-01
ABC 2014-03-01
ABC 2014-02-25
ABC 2014-05-22
ABC 2014-04-01
```
The two dates that should be returned are the two most recent, i.e. 2014-05-22 & 2014-04-20.
**EDIT**
Sorry, I should have mentioned that yes, I want two distinct dates. The table is large and the dates are not sorted. I think sorting the table could be quite slow. | ```
SELECT distinct top 2 Dateadded
FROM table
ORDER BY Dateadded desc
``` | Try This :
```
select distinct top(2) format(Dateadded ,'yyyy-MM-dd') as Dateadded
from TableName
order by Dateadded DESC
``` | SQL server select max two dates from a table | [
"",
"sql",
"sql-server",
""
] |
I have to update some rows in a table, but I need to get the referring id from another table. The table structure is similar to below:
```
CoInvDept
+--------------+--------+--------+
| coinvdeptid | coinv | code |
+--------------+--------+--------+
Coinv
+--------------+-------------+
| coinv | itemnumber |
+--------------+-------------+
```
I need to update the code, but I have the itemnumber. I am using SQL SERVER 2008.
So far
```
UPDATE coinvdept
SET code = 'ABC123'
WHERE (SELECT coinvid WHERE itemnumber = '12345678')
``` | I think this is what you are after:
```
UPDATE coinvdept
SET code = 'ABC123'
WHERE coinv IN (SELECT coinvid FROM coinv WHERE itemnumber = '12345678')
``` | ```
UPDATE coinvdept
SET code = 'ABC123'
WHERE coinv = (SELECT coinvid FROM coinv WHERE itemnumber = '12345678')
```
OR
```
UPDATE T1
SET code='ABC123'
FROM coinvdept as T1
INNER JOIN Coinv as T2 ON T1.coinv=T2.coinv
AND T2.itemnumber='12345678'
```
Both should work :) | SQL Server Updating getting id from another table | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a table with 3 fields: Id(PK,AI), Name(varchar(36)), LName(varchar(36)).
I have to insert the name and last name; the Id is inserted automatically because of its constraints.
Is there a way to jump the id auto-increment value when it reaches 6?
For instance, do this 7 times:
Insert Into table(Name, LName) Values ('name1', 'lname1') "And jump id to 7 if it is going to be 6"
It may sound stupid to do this, but I am not sure about it.
Also, jump and do not record id 6;
record only 1-5, 7, 8, 9 and so on.
**What I want to achieve starts from a Union:**
```
Select * From TableNames
Union All
Select * From TableNames_general
```
In TableNames\_general I assign its first value, so that when the user sees the table for the first time, the record I inserted will be displayed.
The problem comes when the user inserts a new record: if the Id of the inserted record is the same as the one I have inserted, it will be duplicated. That is why I want this: when the user inserts a record and the last insert id already exists, just jump that id. This is because I must have different ids due to their relationships with child tables. | Identity columns generate values for you, and it's best left this way. You have the ability to insert specific values in an identity column, but it's best left alone; let it generate values for you.
Imagine you have inserted a value explicitly in an identity column and then, later on, the identity column generates the same value for you; you will end up with duplicates.
If you want to put your own input in that column, then why bother with an identity column anyway?
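For what it's worth, the "jump over a value" idea can also be sketched portably. This is a hedged Python/`sqlite3` illustration, with AUTOINCREMENT standing in for IDENTITY and all names invented; bumping the engine's internal sequence makes it skip the value 6:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
for i in range(5):                                  # ids 1..5 are auto-generated
    cur.execute("INSERT INTO t (name) VALUES (?)", ("name%d" % (i + 1),))
# Bump the internal sequence so the value 6 is never handed out
# (a rough analog of DBCC CHECKIDENT ... RESEED shown below):
cur.execute("UPDATE sqlite_sequence SET seq = 6 WHERE name = 't'")
cur.execute("INSERT INTO t (name) VALUES ('name6')")
print(cur.execute("SELECT max(id) FROM t").fetchone()[0])  # 7: the value 6 was skipped
```

The SQL Server equivalent is the RESEED trick in the answer's own example below.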
Well this is not the best practice but you can jump to a specific number by doing as follows:
**MS SQL SERVER 2005 and Later**
```
-- Create test table
CREATE TABLE ID_TEST(ID INT IDENTITY(1,1), VALUE INT)
GO
-- Insert values
INSERT INTO ID_TEST (VALUE) VALUES
(1),(2),(3)
GO
-- Set identity insert on to insert values explicitly in identity column
SET IDENTITY_INSERT ID_TEST ON;
INSERT INTO ID_TEST (ID, VALUE) VALUES
(6, 6),(8,8),(9,9)
GO
-- Set identity insert off
SET IDENTITY_INSERT ID_TEST OFF;
GO
-- 1st reseed the value of identity column to any smallest value in your table
-- below I reseeded it to 0
DBCC CHECKIDENT ('ID_TEST', RESEED, 0);
-- execute the same command without any seed value; it will reset it to the
-- next highest identity value
DBCC CHECKIDENT ('ID_TEST', RESEED);
GO
-- final insert
INSERT INTO ID_TEST (VALUE) VALUES
(10)
GO
-- now select data from table and see the gap
SELECT * FROM ID_TEST
``` | If you query the database to get the last inserted ID, then you can check if you need to increment it, by using a parameter in the query to set the correct ID. | Insert only when auto-increment id is not equal 6(for example)? | [
"",
"mysql",
"sql",
"sql-server-2008",
""
] |
More of a theory/logic question but what I have is two tables: `links` and `options`. Links is a table where I add rows that represent a link between a product ID (in a separate `products` table) and an option. The `options` table holds all available options.
What I'm trying to do (but struggling to create the logic for) is to join the two tables, returning only the rows where there is no option link in the `links` table, therefore representing which options are still available to add to the product.
Is there a feature of SQL that might help me here? I'm not tremendously experienced with SQL yet. | Your table design sounds fine.
If this query returns the `id` values of the "options" linked to a particular "product"...
```
SELECT k.option_id
FROM links k
WHERE k.product_id = 'foo'
```
Then this query would get the details of all the options related to the "product"
```
SELECT o.id
, o.name
FROM options o
JOIN links k
ON k.option_id = o.id
WHERE k.product_id = 'foo'
```
Note that we can actually move the `"product_id='foo'"` predicate from the WHERE clause to the ON clause of the JOIN, for an equivalent result, e.g.
```
SELECT o.id
, o.name
FROM options o
JOIN links k
ON k.option_id = o.id
AND k.product_id = 'foo'
```
(Not that it makes any difference here, but it would make a difference if we were using an OUTER JOIN (in the WHERE clause, it would negate the "outer-ness" of the join, and make it equivalent to an INNER JOIN.)
But, none of that answers your question, it only sets the stage for answering your question:
**How do we get the rows from "options" that are NOT linked to particular product?**
The most efficient approach is (usually) an **anti-join** pattern.
What that is, we will get all the rows from "options", along with any matching rows from "links" (for a particular product\_id, in your case). That result set will include the rows from "options" that don't have a matching row in "links".
The "trick" is to filter out all the rows that had matching row(s) found in "links". That will leave us with **only** the rows that didn't have a match.
And the way we filter those rows, we use a predicate in the WHERE clause that checks whether a match was found. We do that by checking a column that we know for certain will be **NOT NULL** if a matching row was found. And we *know* for certain that column will be **NULL** if a matching row was **NOT** found.
Something like this:
```
SELECT o.id
, o.name
FROM options o
LEFT
JOIN links k
ON k.option_id = o.id
AND k.product_id = 'foo'
WHERE k.option_id IS NULL
```
The `"LEFT"` keyword specifies an "outer" join operation, we get all the rows from "options" (the table on the "left" side of the JOIN) even if a matching row is not found. (A normal inner join would filter out rows that didn't have a match.)
The "trick" is in the WHERE clause... if we found a matching row from links, we know that the `"option_id"` column returned from `"links"` would not be NULL. It can't be NULL if it "equals" something, and we know it had to "equals" something because of the predicate in the ON clause.
So, we know that the rows from options that didn't have a match will have a NULL value for that column.
It takes a bit to get your brain wrapped around it, but the anti-join quickly becomes a familiar pattern.
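You can watch the pattern work with a tiny runnable sketch (Python's `sqlite3` standing in for MySQL; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE options (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE links (product_id TEXT, option_id INTEGER);
    INSERT INTO options VALUES (1, 'gift wrap'), (2, 'engraving'), (3, 'express');
    INSERT INTO links VALUES ('foo', 1), ('foo', 3), ('bar', 2);
""")
# Anti-join: options with no matching link for product 'foo'
rows = cur.execute("""
    SELECT o.id, o.name
      FROM options o
      LEFT JOIN links k
        ON k.option_id = o.id
       AND k.product_id = 'foo'
     WHERE k.option_id IS NULL
""").fetchall()
print(rows)  # only 'engraving' is still available to add to product 'foo'
```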
---
The "anti-join" pattern isn't the only way to get the result set. There are a couple of other approaches.
One option is to use a query with a `"NOT EXISTS"` predicate with a correlated subquery. This is somewhat easier to understand, but doesn't usually perform as well:
```
SELECT o.id
, o.name
FROM options o
WHERE NOT EXISTS ( SELECT 1
FROM links k
WHERE k.option_id = o.id
AND k.product_id = 'foo'
)
```
That says get me all rows from the options table. But for each row, run a query, and see if a matching row "exists" in the links table. (It doesn't matter what is returned in the select list, we're only testing whether it returns at least one row... I use a "1" in the select list to remind me I'm looking for "1 row".)
This usually doesn't perform as well as the anti-join, but sometimes it does run faster, especially if other predicates in the WHERE clause of the outer query filter out nearly every row, and the subquery only has to run for a couple of rows. (That is, when we only have to check a few needles in a haystack. When we need to process the whole stack of hay, the anti-join pattern is usually faster.)
And the beginner query you're most likely to see is a `NOT IN (subquery)`. I'm not even going to give an example of that. If you've got a list of literals, then by all means, use a NOT IN. But with a subquery, it's rarely the best performer, though it does seem to be the easiest to understand.
Oh, what the hay, I'll give a demo of that as well (not that I'm encouraging you to do it this way):
```
SELECT o.id
, o.name
FROM options o
WHERE o.id NOT IN ( SELECT k.option_id
FROM links k
WHERE k.product_id = 'foo'
AND k.option_id IS NOT NULL
GROUP BY k.option_id
)
```
That subquery (inside the parens) gets a list of all the option\_id values associated with a product.
Now, for each row in options (in the outer query), we can check the id value to see if it's in that list returned by the subquery.
If we have a guarantee that option\_id will never be NULL, we can omit the predicate that tests for `"option_id IS NOT NULL"`. (In the more general case, when a NULL creeps into the resultset, the outer query can't tell if o.id is in the list or not, and the query doesn't return any rows; so I usually include that, even when it's not required.) The `GROUP BY` isn't strictly necessary either, especially if there's a unique constraint (guaranteed uniqueness) on the (product\_id,option\_id) tuple.
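That NULL behavior is easy to demonstrate. In this hedged `sqlite3` sketch (invented data), a single NULL in the subquery result empties the `NOT IN` result entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE options (id INTEGER)")
cur.executemany("INSERT INTO options VALUES (?)", [(1,), (2,), (3,)])
# A single NULL in the subquery makes every NOT IN comparison UNKNOWN:
empty = cur.execute(
    "SELECT id FROM options WHERE id NOT IN (SELECT 2 UNION SELECT NULL)"
).fetchall()
ok = cur.execute("SELECT id FROM options WHERE id NOT IN (SELECT 2)").fetchall()
print(empty, ok)  # [] [(1,), (3,)]
```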
But, again, don't use that `NOT IN (subquery)`, except for testing, unless there's some compelling reason to (for example, it manages to perform better than the anti-join.)
You're unlikely to notice any performance differences with small sets, the overhead of transmitting the statement, parsing it, generating an access plan, and returning results dwarfs the actual "execution" time of the plan. It's with larger sets that the differences in "execution" time become apparent.
`EXPLAIN SELECT ...` is a really good way to get a handle on the execution plans, to see what MySQL is really doing with your statement.
Appropriate indexes, especially covering indexes, can noticeably improve performance of some statements. | Yes, you can do a `LEFT JOIN` (if MySQL; there are variations in other dialects) which will include rows in links which do NOT have a match in options. Then test if `options.someColumn` `IS NULL` and you will have exactly the rows in links which had no "matching" row in options. | SQL JOIN Query to return rows where we did NOT find a match in joined table | [
"",
"mysql",
"sql",
"join",
""
] |
Having the following table:
```
ID | ShopId | GroupID | Vid
1 | 10 | 646 |248237
2 | 5 | 646 |248237
3 | 7 | 646 |248237
4 | 5 | 700 |248237
5 | 7 | 700 |248237
```
I want to add a column that contains the number of Vid values in each GroupId. Something like:
```
ID | ShopId | GroupID | Vid | Occurrences
1 | 10 | 646 |248237 | 3
2 | 5 | 646 |248237 | 3
3 | 7 | 646 |248237 | 3
4 | 5 | 700 |248237 | 2
5 | 7 | 700 |248237 | 2
``` | if you only want the VID counts no matter their value you can write
```
Select *, (select count(1) from table t1 where t1.GroupID = t2.GroupID) Occurences
From table t2
```
But if you want the the count of Similar VIDs in each group you can write
```
Select table.*, t.cnt as Occurences
from table
inner join (select count(1) cnt, groupID, VID from table group by groupID, VID) t on t.groupID = table.groupID and t.VID = table.VID
```
p.s. You can use the second query without grouping by VID as first one too but it is more complicated | Try This one
```
Select ID,ShopId,GroupID,Vid,
(select count(GroupID) from table_name where GroupID=tb.GroupID) as Occurences
From table_name as tb
``` | Correct SQL count occurrences | [
"",
"sql",
"sql-server-2008",
""
] |
my tables look like this:
**news**
```
id_news
id_gallery
headline
content
date
```
**galleries**
```
id_gallery
gallery_name
```
**images**
```
id_image
id_gallery
```
And I need to select every image for certain news. I am very unfamiliar with sql, so I would really appreciate every kind of help. I have searched, but when I try to use queries that I find, it just doesn't work; I'm doing something really wrong. Help!
Thank you for Your time | ```
SELECT i.*
FROM images AS i
JOIN news AS n on n.id_gallery = i.id_gallery
WHERE n.id_news = <selected news>
``` | ```
SELECT a.*
FROM images a
JOIN news b on b.id_gallery = a.id_gallery
```
Hope this helps. :D | SELECT QUERY FROM three tables SQL | [
"",
"mysql",
"sql",
""
] |
I am working on the redesign of an old database which started off small and is now really bloated and slow due to years of quick fixes when system changes occurred.
No matter how well it is designed this time, there will of course be unforeseen changes so I am looking for some general tips on how best to prepare for such changes along with general advice on whether or not I am on the right track.
I am very new to the software development / database design world so please forgive me if there are some glaringly obvious issues here or I am being a little too vague.... I'm trying my best :)
To be specific:
A reservation will be made on a website. At the time of booking, some extras / requirements may be added, e.g. a carpark space is booked - user will indicate whether or not a disabled space is required.
I am going to create another table, 'DisabledSpacesRequired', which will have one column - the bookingIDs of those where a disabled space was required. Is this "better" than having a flag in the booking table indicating whether or not the space is required?
Similarly, a booking may be cancelled - so there will be a table of cancelled bookings. For searching later, would it be better to simply search the cancelled bookings table for the bookingID? Or have a flag in the booking table indicating whether or not it was cancelled? (The 'CancelledBookings' table will be necessary anyway but should a flag also be included?)
What has got me thinking about such issues is the fact that there seems to be lots of add-ons currently in the database - e.g. there is a 'Subscribers' table, and there is a 'SubscribersTwitterHandles' table which was added later - is it good practice to separate out types of subscribers in this way? Or add flags to the existing table?
I've had a look for some similar questions and going by
[Implementing Review flags in Databases; best practices](https://stackoverflow.com/questions/10915137/implementing-review-flags-in-databases-best-practices)
I think it is best to separate out variables to prepare for changes that might be made in the future. (For example we might want to add some information related to the disabled parking space required.)
Hope I am clear - any advice is greatly appreciated. | There are lots of opinions about flags in databases. So the common answer is "well, it depends what you want your RDBMS to be doing".
The student information system I work with on a daily basis has a status flag in the base student table. The legal values are A - Active, I - Inactive, P - Pre-registered, and G - Graduated. There's no validation table or lookup table for this. It's hard-coded in the application. While relationally that's a problem, the application works perfectly. A student always has one and exactly one status, and there's no situation not covered by the existing status lists. You could add a `regtb_status` lookup table and add a foreign key constraint to the student registration table, but it doesn't add much to this application.
For your Booking example, I would have a current status field in the Booking table itself. I would prefer to use a character field so I could support the statuses that I know I might need: A - Active, C - Cancelled by Customer, I - Invalid, D - Deleted by Staff, etc. You can even allow the customer to have access to the validation table so they can create custom statuses if they want. It depends on the workflow you're envisioning and your customers want.
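As a hedged sketch of that shape (`sqlite3` here, and every name is hypothetical), the status codes live in a small validation table that the booking row references, so cancelling is just flipping the flag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE booking_status (code TEXT PRIMARY KEY, label TEXT);
    INSERT INTO booking_status VALUES
        ('A', 'Active'), ('C', 'Cancelled by Customer'), ('D', 'Deleted by Staff');
    CREATE TABLE booking (
        id INTEGER PRIMARY KEY,
        status TEXT NOT NULL DEFAULT 'A' REFERENCES booking_status(code)
    );
    INSERT INTO booking (id) VALUES (1);
    UPDATE booking SET status = 'C' WHERE id = 1;  -- cancel: just flip the flag
""")
row = conn.execute("""
    SELECT b.id, s.label FROM booking b
    JOIN booking_status s ON s.code = b.status
""").fetchone()
print(row)  # (1, 'Cancelled by Customer')
```

Because the statuses sit in their own table, the customer can add new ones without a schema change.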
Elsewhere in the same system, there are a lot of status flag fields that are hard coded `CHAR(1)` fields that are Y - Yes and N - No. You probably should use your RDBMS's boolean types for these flags, but unless you're talking ridiculous numbers of records or need to worry about internationalization, it's not going to be an issue. These types of tables are typically also functioning as junction tables. For example, the table that relates students to contacts includes status flags for whether the contact is living with the student, the type of contact (guardian, emergency contact), what the contacts relationship is to the student (mother, father, aunt, etc.), whether or not that contact should have access to the student in the parent website, the order of priority of the contacts, whether the parent should receive report cards in the mail, etc. This particular table is somewhat cumbersome simply because there are over a dozen flag fields in this table, but the multiple flag options relationship type are completely configurable in validation/lookup tables within the application and the column names are, at least in part, self-documenting. From a report-writing standpoint that's invaluable.
We have a few fields that are stored in user-defined tables, which actually store everything in an EAV table in the DB. These cause a problem because, often, the particular EAV record doesn't exist until the school explicitly sets it. The application behaves as though null = No, but it can make writing reports and even searching in the application difficult. You can't look for `field = 'N'`. You have to look for `field = 'N' OR field IS NULL`. In the application's search system, you have to specify `field <> 'Y'` because it doesn't handle nulls well in all cases. This is very confusing for users that can't wrap their heads around three valued logic. It's also fairly irritating for a DBA because the best way to view the data, a view, is not easily updated.
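The three-valued-logic trap described above is easy to reproduce. In this hedged `sqlite3` sketch (invented data), the row whose flag is NULL silently disappears from a `field <> 'Y'` search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE prefs (student TEXT, mailing TEXT)")  # 'Y'/'N'/NULL flag
cur.executemany("INSERT INTO prefs VALUES (?, ?)",
                [("ann", "Y"), ("ben", "N"), ("cat", None)])
# NULL <> 'Y' evaluates to UNKNOWN, so 'cat' drops out of this result:
naive = cur.execute("SELECT student FROM prefs WHERE mailing <> 'Y'").fetchall()
# The NULL row has to be asked for explicitly:
full = cur.execute(
    "SELECT student FROM prefs WHERE mailing <> 'Y' OR mailing IS NULL"
).fetchall()
print(naive, full)  # [('ben',)] [('ben',), ('cat',)]
```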
In my experience, bitmasks are *almost always* incorrect. They're very cumbersome and expensive to query against, not self-documenting, and generally a tremendous pain in the tail. I would rather see a series of `BIT`/`BOOLEAN` or `CHAR` fields any day than a bitmask. If it has multiple attributes in a single field, it's going to be a tremendous problem.
For your SubscribersTwitterHandles question, I guess I'm a little confused. Why didn't they just add a column to the existing table? Is it a one-to-many relationship, or are there multiple Twitter Handle fields? Either your customers haven't given you their handle -- in which case it's explicitly `''` -- or it's the handle they gave you.
I guess my real question from a design standpoint: Are we creating *flags* or *tags*? In my mind a flag is something that has a one-to-one relationship with an existing entity in the database. That entity might be the junction between two entities, or it might be on the entity itself, but it always has a non-null value.
Tags, on the other hand, are arbitrary, potentially many-to-one or many-to-many, and in most situations are completely defined by the customer as an ad hoc means to group records. | I am trying to share my opinion from the perspective of Database designing,
* Please try to think about your entities and their properties. In relational database design, properties map to columns and entities map to tables.
* If you agree that a newly added subject can be an entity in itself, then it is better to create a new table for it; for its relation to the others you can use either a foreign key or perhaps another table for keeping the relation.
* If you think that it is solely another property of an existing entity, then it is better to add a column to that table.
These are very basic database design techniques but people also sometimes do trade-offs other than doing this for easier coding/query. But I consider that might be a different story. | Database design - adding flags for exceptions/extras | [
"",
"sql",
"database",
"schema",
""
] |
I'm trying to look up two tables where tableA has an ID from tableB and have it return the name of tableB, but if the ID is NULL just return NULL. As of now I have:
```
SELECT vehicle.Id, vehicle.Registration, (case when vehicle.ChecklistId != NULL then
checklist.Name else NULL end) FROM vehicle, checklist WHERE vehicle.ChecklistId =
checklist.Id OR vehicle.ChecklistId IS NULL"
```
However this just return nothing as there's currently nothing in the checklist table. Am I missing something or just doing it wrong completely? | Either simply select the value:
```
select
vehicle.id,
vehicle.registration,
(select checklist.name from checklist where checklist.id = vehicle.checklistid )
from vehicle;
```
Or use an outer join:
```
select
vehicle.id,
vehicle.registration,
checklist.name
from vehicle
left join checklist on checklist.id = vehicle.checklistid;
```
EDIT: As to your statement: You should not use that old join syntax where you just list the tables comma-separated. It is prone to errors. What you are doing is this:
* join each vehicle where vehicle.ChecklistId IS NOT NULL with the matching checklist, if any
* join each vehicle where vehicle.ChecklistId IS NULL with *all* checklists
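The blow-up can be demonstrated with SQLite (sample rows assumed, unlike the asker's empty checklist table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vehicle (Id INTEGER, Registration TEXT, ChecklistId INTEGER);
CREATE TABLE checklist (Id INTEGER, Name TEXT);
INSERT INTO vehicle VALUES (1, 'AA-1', 100), (2, 'BB-2', NULL);
INSERT INTO checklist VALUES (100, 'daily'), (200, 'weekly');
""")

old_style = con.execute("""
    SELECT v.Id, c.Name FROM vehicle v, checklist c
    WHERE v.ChecklistId = c.Id OR v.ChecklistId IS NULL
""").fetchall()   # vehicle 2 pairs with *every* checklist: 3 rows

left_join = con.execute("""
    SELECT v.Id, c.Name FROM vehicle v
    LEFT JOIN checklist c ON c.Id = v.ChecklistId
""").fetchall()   # vehicle 2 appears once, with a NULL name: 2 rows
```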
In your case checklist is empty so both result sets are empty. If there were matches in checklist, however, you would get too many records, because you would cross join all your vehicles where ChecklistId IS NULL with the checklist table. | You need to use LEFT OUTER JOIN
```
SELECT vehicle.Id, vehicle.Registration, (case when vehicle.ChecklistId != NULL then
checklist.Name else NULL end) FROM vehicle LEFT OUTER JOIN checklist
ON vehicle.ChecklistId =checklist.Id OR vehicle.ChecklistId IS NULL
``` | Checking two tables where 1 result can be NULL in mySQL | [
"",
"mysql",
"sql",
"null",
""
] |
I run this query in SQL Server 2008 R2; it takes about 6 seconds and returns around 8000 records. `OrderItemView` is a view and `DocumentStationHistory` is a table.
```
SELECT o.Number, dsh.DateSend AS Expr1
FROM OrderItemView AS o INNER JOIN
DocumentStationHistory AS dsh ON dsh.DocumentStationHistoryId =
(SELECT TOP (1) DocumentStationHistoryId
FROM DocumentStationHistory AS dsh2
WHERE (o.DocumentStationId = ToStationId) AND
(DocumentId = o.id)
ORDER BY DateSend DESC)
WHERE (o.DocumentStationId = 10)
```
But when I run the same query with an `o.DocumentStationId = 8` where clause, it returns around 200 records but takes about 90 seconds!
Any idea where the problem is? | I rebuilt the index on `o.DocumentStationId` and that solved the problem. | I suppose the index is the issue, but not just for `o.DocumentStationId`: for all the fields that are joined using it.
Try to see how your inner query is working by checking the execution plan;
it may need some performance tuning.
Also, try adding indexes on `ToStationId` and `DateSend`, and see if you can modify the inner query.
Other than these I don't see any suggestions.
Also post your execution plan. | Different execution time for same query - SQL Server 2008 R2 | [
"",
"sql",
"sql-server",
"performance",
""
] |
Using SQL Server 2008, I want to query a table like so:
```
| ID | Number
-------------
| 1 | 0
| 2 | 0
| 3 | 1
| 4 | 0
| 5 | 0
| 6 | 1
| 7 | 1
| 8 | 1
```
The result should be the same table with an additional column that counts.
The method of counting: if the value in "Number" equals 1, increment the counter by one starting from the **next** line.
An example of result for the provided table:
```
| ID | Number | Counter
-----------------------
| 1 | 0 | 1
| 2 | 0 | 1
| 3 | 1 | 1
| 4 | 0 | 2
| 5 | 0 | 2
| 6 | 1 | 2
| 7 | 1 | 3
| 8 | 1 | 4
```
How can this be achieved? | ```
select [ID], [Number],
isnull(1+(select sum([Number]) from Table1 t2 where t2.ID<t1.Id),1)
from Table1 t1
```
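The correlated-subquery approach can be checked with SQLite (`isnull` translated to the portable `COALESCE`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (ID INTEGER, Number INTEGER)")
con.executemany("INSERT INTO Table1 VALUES (?, ?)",
                [(1, 0), (2, 0), (3, 1), (4, 0), (5, 0), (6, 1), (7, 1), (8, 1)])

# Counter = 1 + sum of Number over all earlier rows (COALESCE covers row 1,
# where the correlated SUM over an empty set is NULL).
rows = con.execute("""
    SELECT ID, Number,
           COALESCE(1 + (SELECT SUM(Number) FROM Table1 t2 WHERE t2.ID < t1.ID), 1)
    FROM Table1 t1 ORDER BY ID
""").fetchall()
counters = [r[2] for r in rows]
```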
**[SQL Fiddle to test](http://www.sqlfiddle.com/#!3/21a96/1)** | This is not too hard to do. What you are looking for is very much like the running total, which you get with sum and a windowing clause.
```
select id, num, 1 + sum(num) over (order by id) - num as counter
from mytable
order by id;
```
Here is an SQL fiddle: <http://sqlfiddle.com/#!4/958e2a/1>. | Increment variable in sql query | [
"",
"sql",
"sql-server-2008",
"t-sql",
""
] |
I am trying to insert 10 values of the format `"typename_" + i` where `i` is the counter of the loop in a table named `roomtype` with attributes `typename` (primary key of SQL type character varying (45)) and `samplephoto` (it can be NULL and I am not dealing with this for now). What seems strange to me is that the tuples are inserted in different order than the loop counter increments. That is:
```
typename_1
typename_10
typename_2
typename_3
...
```
I suppose it's not very important but I can't understand why this is happening. I am using PostgreSQL 9.3.4, pgAdmin III version 1.18.1 and Eclipse Kepler.
The Java code that creates the connection (using JDBC driver) and makes the query is:
```
import java.sql.*;
import java.util.Random;
public class DBC{
Connection _conn;
public DBC() throws Exception{
try{
Class.forName("org.postgresql.Driver");
}catch(java.lang.ClassNotFoundException e){
java.lang.System.err.print("ClassNotFoundException: Postgres Server JDBC");
java.lang.System.err.println(e.getMessage());
throw new Exception("No JDBC Driver found in Server");
}
try{
_conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/hotelreservation","user", "0000");
ZipfGenerator p = new ZipfGenerator(new Random(System.currentTimeMillis()));
_conn.setCatalog("jdbcTest");
Statement statement = _conn.createStatement();
String query;
for(int i = 1; i <= 10; i++){
String roomtype_typename = "typename_" + i;
query = "INSERT INTO roomtype VALUES ('" + roomtype_typename + "','" + "NULL" +"')";
System.out.println(i);
statement.execute(query);
}
}catch(SQLException E){
java.lang.System.out.println("SQLException: " + E.getMessage());
java.lang.System.out.println("SQLState: " + E.getSQLState());
java.lang.System.out.println("VendorError: " + E.getErrorCode());
throw E;
}
}
}
```
But what I get in pgAdmin table is:
[](https://i.stack.imgur.com/t8eNf.jpg) | This is a misunderstanding. There is no "natural" order in a relational database table. While rows are normally inserted in sequence to the physical file holding a table, a wide range of activities can reshuffle physical order. And queries doing anything more than a basic (non-parallelized) sequential scan may return rows in any opportune order. That's according to standard SQL.
***The order you see is arbitrary unless you add `ORDER BY` to the query.***
pgAdmin3 by default orders rows by the primary key (unless specified otherwise). Your column is of type [`varchar`](https://www.postgresql.org/docs/current/datatype-character.html) and rows are ordered alphabetically (according to your current locale). All by design, all as it should be.
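A SQLite sketch of the same effect; `ORDER BY typename` sorts the strings alphabetically, just as pgAdmin does on this primary key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE roomtype (typename TEXT PRIMARY KEY)")
con.executemany("INSERT INTO roomtype VALUES (?)",
                [("typename_%d" % i,) for i in range(1, 11)])

ordered = [r[0] for r in
           con.execute("SELECT typename FROM roomtype ORDER BY typename")]
# Alphabetical, not numeric: typename_1, typename_10, typename_2, ...
```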
To sort rows like you seem to be expecting, you could pad some '0' in your text:
```
...
typename_0009
typename_0010
...
```
The **proper solution** would be to have a numeric column with just the number, though.
You may be interested in [natural-sort](/questions/tagged/natural-sort "show questions tagged 'natural-sort'"). You may also be interested in a [`serial`](https://stackoverflow.com/questions/9875223/auto-increment-sql-function/9875517#9875517) column. | You are not seeing the order in which PostgreSQL stores the data, but rather the order in which pgadmin displays it.
The `edit table` feature of pgadmin automatically sorts the data by the primary key by default. that is what you are seeing.
In general, databases store table data in whatever order is convenient. Since you did not intentionally supply an ORDER BY you have no right to care what order it is actually in. | Tuples are not inserted sequentially in database table? | [
"",
"sql",
"postgresql",
"jdbc",
"insertion",
"pgadmin",
""
] |
I can't find a way to insert Postgres' array type with Clojure.
```
(sql/insert! db :things {:animals ["cow" "pig"]})
```
Didn't work which I kind of expected. Error message:
```
PSQLException Can't infer the SQL type to use for an instance of clojure.lang.PersistentVector. Use setObject() with an explicit Types value to specify the type to use. org.postgresql.jdbc2.AbstractJdbc2Statement.setObject (AbstractJdbc2Statement.java:1936)
```
Even the most direct access to SQL that I could find didn't work:
```
(sql/execute! db "INSERT INTO things (animals) VALUES ('{\"cow\", \"pig\"}')")
```
Don't really know what's going on here:
```
ClassCastException java.lang.Character cannot be cast to java.lang.String clojure.java.jdbc/prepare-statement (jdbc.clj:419)
```
Surely it must be possible somehow? If not by the helper functions, then by somehow executing raw SQL. | To use *insert!* to insert a vector of strings you must create an object (from the vector of strings) that implements java.sql.Array. You can use [java.sql.Connection.createArrayOf](http://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#createArrayOf%28java.lang.String,%20java.lang.Object%5B%5D%29) to create such an object:
```
(def con (sql/get-connection db))
(def val-to-insert
  (.createArrayOf con "varchar" (into-array String ["cow", "pig"])))
(sql/insert! db :things {:animals val-to-insert})
```
And clojure.java.jdbc's docs on [*execute!*](http://clojure.github.io/java.jdbc/#clojure.java.jdbc/execute!) say:
```
(execute! db-spec [sql & params] :multi? false :transaction? true)
(execute! db-spec [sql & param-groups] :multi? true :transaction? true)
```
You must put your SQL string in a vector to make it work.
```
(sql/execute! db ["INSERT INTO things (animals) VALUES ('{\"cow\", \"pig\"}')"])
``` | You can make clojure.java.jdbc automatically convert between Clojure vectors and SQL arrays by extending two protocols. This can be done from your own code:
```
(extend-protocol clojure.java.jdbc/ISQLParameter
clojure.lang.IPersistentVector
(set-parameter [v ^java.sql.PreparedStatement stmt ^long i]
(let [conn (.getConnection stmt)
meta (.getParameterMetaData stmt)
type-name (.getParameterTypeName meta i)]
(if-let [elem-type (when (= (first type-name) \_) (apply str (rest type-name)))]
(.setObject stmt i (.createArrayOf conn elem-type (to-array v)))
(.setObject stmt i v)))))
(extend-protocol clojure.java.jdbc/IResultSetReadColumn
java.sql.Array
(result-set-read-column [val _ _]
(into [] (.getArray val))))
```
REPL Example:
```
user> (def db (clj-postgresql.core/pool :dbname "test"))
#'user/db
user> (clojure.java.jdbc/query db ["SELECT ?::text[], ?::int[]" ["foo" "bar"] [1 2 3]])
({:int4 [1 2 3], :text ["foo" "bar"]})
```
I'm currently working on a library that will support PostgreSQL, and PostGIS types automatically. It's still very much work in process though <https://github.com/remodoy/clj-postgresql> | Inserting PostgreSQL arrays with Clojure | [
"",
"sql",
"postgresql",
"jdbc",
"clojure",
""
] |
I inherited a table that has a column containing hand-entered award numbers. It has been used for many years by many people. The award numbers in general look like this:
```
R01AR012345-01
R01AR012345-02
R01AR012345-03
```
Award numbers get assigned each year. Because so many different people have had their hands in this in the past, there isn't a lot of consistency in how these are entered. For instance, an award sequence may appear like this:
```
R01AR012345-01
1 RO1AR012345-02
12345-03
12345-05A1
1234506
```
The rule I've been given is to return any record in which 5 consecutive integers from that column match those of another record.
I know how to match a given string, but am at a loss when the 5 consecutive integers are unknown.
Here's a sample table to make what I'm looking for more clear:
```
+----------------------+
| table: AWARD |
+-----+----------------+
| ID | AWARD_NO |
+-----+----------------+
| 12 | R01AR015123-01 |
+-----+----------------+
| 13 | R01AR015124-01 |
+-----+----------------+
| 14 | 15123-02A1 |
+-----+----------------+
| 15 | 1 Ro1XY1512303 |
+-----+----------------+
| 16 | R01XX099232-01 |
+-----+----------------+
```
In the above table, the following IDs would be returned: 12,13,14,15
The five consecutive integers that match are:
```
12,13: 01512
12,14: 15123
12,15: 15123
```
In our specific case, ID 13 is a false positive... but we're willing to deal with those on a case-by-case basis.
Here's the desired return set for the above table:
```
+-----+-----+----------------+----------------+
| ID1 | ID2 | AWARD_NO_1 | AWARD_NO_2 |
+-----+-----+----------------+----------------+
| 12 | 13 | R01AR015123-01 | R01AR015124-01 |
+-----+-----+----------------+----------------+
| 12 | 14 | R01AR015123-01 | 15123-02A1 |
+-----+-----+----------------+----------------+
| 12 | 15 | R01AR015123-01 | 1 Ro1XY1512303 |
+-----+-----+----------------+----------------+
```
Now... I'm OK with false positives (like 12 matching 13) and duplicates (because if 12 matches 14, then 14 also matches 12). We're looking through something like 18,000 rows. Optimization isn't really necessary in this situation, because it's only needed to be run one time. | This should handle removing duplicates and most false-positives:
```
DECLARE @SPONSOR TABLE (ID INT NOT NULL PRIMARY KEY, AWARD_NO VARCHAR(50))
INSERT INTO @SPONSOR VALUES (12, 'R01AR015123-01')
INSERT INTO @SPONSOR VALUES (13, 'R01AR015124-01')
INSERT INTO @SPONSOR VALUES (14, '15123-02A1')
INSERT INTO @SPONSOR VALUES (15, '1 Ro1XY1512303')
INSERT INTO @SPONSOR VALUES (16, 'R01XX099232-01')
;WITH nums AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS [Num]
FROM sys.objects
),
cte AS
(
SELECT sp.ID,
sp.AWARD_NO,
SUBSTRING(sp.AWARD_NO, nums.Num, 5) AS [TestCode],
SUBSTRING(sp.AWARD_NO, nums.Num + 5, 1) AS [FalsePositiveTest]
FROM @SPONSOR sp
CROSS JOIN nums
WHERE nums.Num < LEN(sp.AWARD_NO)
AND SUBSTRING(sp.AWARD_NO, nums.Num, 5) LIKE '%[1-9][0-9][0-9][0-9][0-9]%'
-- AND SUBSTRING(sp.AWARD_NO, nums.Num, 5) LIKE '%[0-9][0-9][0-9][0-9][0-9]%'
)
SELECT sp1.ID AS [ID1],
sp2.ID AS [ID2],
sp1.AWARD_NO AS [AWARD_NO1],
sp2.AWARD_NO AS [AWARD_NO2],
sp1.TestCode
FROM cte sp1
CROSS JOIN @SPONSOR sp2
WHERE sp2.AWARD_NO LIKE '%' + sp1.TestCode + '%'
AND sp1.ID < sp2.ID
--AND 1 = CASE
-- WHEN (
-- sp1.FalsePositiveTest LIKE '[0-9]'
-- AND sp2.AWARD_NO NOT LIKE
-- '%' + sp1.TestCode + sp1.FalsePositiveTest + '%'
-- ) THEN 0
-- ELSE 1
-- END
```
**Output:**
```
ID1 ID2 AWARD_NO1 AWARD_NO2 TestCode
12 14 R01AR015123-01 15123-02A1 15123
12 15 R01AR015123-01 1 Ro1XY1512303 15123
14 15 15123-02A1 1 Ro1XY1512303 15123
```
If IDs 14 and 15 should *not* match, we might be able to correct for that as well.
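As a plain-Python restatement of the matching rule (not the T-SQL above), which also reproduces the tolerated 12/13 false positive: collect every 5-digit substring of each value's digit runs and pair up the IDs whose sets intersect.

```python
import re
from itertools import combinations

awards = {12: "R01AR015123-01", 13: "R01AR015124-01", 14: "15123-02A1",
          15: "1 Ro1XY1512303", 16: "R01XX099232-01"}

def five_grams(s):
    """All 5-digit substrings taken from the digit runs of s."""
    return {run[i:i + 5]
            for run in re.findall(r"\d+", s)
            for i in range(len(run) - 4)}

matches = [(a, b) for a, b in combinations(sorted(awards), 2)
           if five_grams(awards[a]) & five_grams(awards[b])]
```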
**EDIT:**
Based on the comment from @Serpiton I commented out the creation and usage of the `[FalsePositiveTest]` field since changing the initial character range in the LIKE clause on the SUBSTRING to be `[1-9]` accomplished the same goal and slightly more efficiently. However, this change assumes that no valid Award # will start with a 0 and I am not sure that this is a valid assumption. Hence, I left the original code in place but just commented out. | You want to use the LIKE command in your where clause and use a pattern to look for the 5 numbers. See this post [here](http://technet.microsoft.com/en-us/library/ms179859.aspx):
There are probably better ways of representing this but the below example looks for 5 digits from 0-9 next to each other in the data anywhere in your column value. This could perform quite slowly however...
```
Select *
from blah
Where column LIKE '%[0-9][0-9][0-9][0-9][0-9]%'
``` | SQL Server - Find records with identical substrings | [
"",
"sql",
"sql-server",
"substring",
""
] |
When you have two tables something like this
```
SELECT Customers.CustomerName,
FROM Customers
LEFT JOIN Orders
ON Customers.CustomerID=Orders.CustomerID
ORDER BY Customers.CustomerName;
```
do both tables have to share a column name , like in this example they share CustomerID column? | First of all, that's not a `UNION`, it's a `JOIN`. There is a `UNION` operation in SQL, but it means something different.
If your question is "Do the columns on which I `JOIN` my data have to have the same name" the answer is no. You could write:
```
SELECT Customers.CustomerName,
FROM Customers
LEFT JOIN Orders
ON Customers.CustomerID = Orders.PurchasedByID
ORDER BY Customers.CustomerName;
```
if those were the names in your two tables.
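A minimal SQLite sketch of joining on differently-named columns (table contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Customers (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE Orders (OrderID INTEGER, PurchasedByID INTEGER);
INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Orders VALUES (10, 1);
""")

# CustomerID joins against PurchasedByID; the names need not match.
rows = con.execute("""
    SELECT c.CustomerName, o.OrderID
    FROM Customers c LEFT JOIN Orders o ON c.CustomerID = o.PurchasedByID
    ORDER BY c.CustomerName
""").fetchall()
```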
On the other hand, if you question is "Do I need to connect the two tables together with a column from one table matching the value in a column in the other table when I do a `JOIN`?" then the answer is also no. However, a `JOIN` without linked columns will probably not give the results expected -- the number of records in the result set will be equal to the number of records in the first table *multiplied by* the number of records in the second table. This is called the Cartesian Product of the two tables. | The column name does not matter - you can join on any existing columns. That said, it is a good practice to make sensible joins obvious - you don't want to have a column in one table called "CustomerDataNumbers" that joins to a column in another table called "CustomerID", since it would make a lot more sense to just call both columns "CustomerID" (or something similar). Joins are incredibly powerful... pretty much anything will work, as long as the syntax is correct.
You should note that a join is different from a union. A join will merge the *columns* from two sets of data given a set of rules that govern the join - a union will merge the *rows* from each data set together. | When you join tables, do the tables have to have the same column name? | [
"",
"sql",
""
] |
Let's say you have the following tables:
```
CREATE TABLE users
(
id INT(11) NOT NULL,
username VARCHAR(45) NOT NULL
)
CREATE TABLE accounts
(
id INT(11) NOT NULL,
accountname VARCHAR(45) NOT NULL
)
CREATE TABLE users_accounts
(
user_id INT(11) NOT NULL,
account_id INT(11) NOT NULL
)
CREATE TABLE events
(
id INT(11) NOT NULL,
eventdata VARCHAR(45) NOT NULL,
eventtype VARCHAR(45) NOT NULL,
date DATE NOT NULL
)
create table events_accounts
(
event_id INT(11),
account_id INT(11)
)
```
Now I want a user to be able to query the events table, but I want to restrict them to view events only if they also have access to all of the accounts associated with the event.
I've come up with a couple of ways of doing the query, but they all seem inefficient. Example:
```
' View shows number of accounts associated with each event
CREATE VIEW event_account_count AS
SELECT e.id AS event_id, count(1) AS account_count
FROM events e JOIN event_accounts ea ON e.id = ea.event_id
GROUP BY e.id
```
secondly
```
' View shows the number of accounts each user can see of each event
CREATE VIEW event_account_user_count AS
SELECT e.id AS event_id, ua.user_id, count(1) AS account_count
FROM events e
JOIN event_accounts ea ON e.id = ea.event_id
JOIN users_accounts ua ON ea.account_id = ua.account_id
GROUP BY e.id, ua.user_id
```
and then finally:
```
' Select all the events that this user can see where the number
' of accounts of the event they can see is equal to the number of accounts in the event
SELECT e.*
FROM events e
JOIN event_account_count eac ON e.id = eac.event_id
JOIN event_account_user_count eaus ON e.id = eaus.event_id
WHERE eac.account_count = eaus.account_count AND
eaus.user_id = @user_id
```
I know that as the number of events gets large, this is going to become very inefficient. I've thought about materializing the first view, i.e. add an account\_count to the event table, which would cut out one of the queries, but I'm not sure how to get around the second view.
I'm leaning towards dropping a feature to allow access to events by account permissions because I know that this could become a bottleneck for any query against the events table, particularly if the queries filter on fields in the events table. Once you're filtering on the events table, I don't think I can use views because predicates won't get pushed.
Any ideas on other ways to implement this? | Try double negation - don't find nothing who youw don't want find ;)
First, select all the accounts that are not the user, and then select all events not on these accounts.
```
SELECT * FROM events where id NOT IN(
SELECT event_id FROM events_accounts WHERE account_id NOT IN(
SELECT account_id FROM users_accounts WHERE user_id = @user_id
))
```
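A quick SQLite sanity check of the double negation (sample data assumed: user 1 may see accounts 100 and 200, event 10 touches only those, event 20 also touches account 300):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users_accounts (user_id INTEGER, account_id INTEGER);
CREATE TABLE events (id INTEGER);
CREATE TABLE events_accounts (event_id INTEGER, account_id INTEGER);
INSERT INTO users_accounts VALUES (1, 100), (1, 200);
INSERT INTO events VALUES (10), (20);
INSERT INTO events_accounts VALUES (10, 100), (10, 200), (20, 100), (20, 300);
""")

visible = [r[0] for r in con.execute("""
    SELECT id FROM events WHERE id NOT IN (
        SELECT event_id FROM events_accounts WHERE account_id NOT IN (
            SELECT account_id FROM users_accounts WHERE user_id = 1
        )
    )
""")]
# Only event 10 survives: all of its accounts belong to user 1.
```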
I do not think you will need more time to join the tables and check the condition, but check for yourself. (Untested code.) | The solution of @bato3 is very nice, but the following reusable views may also be helpful:
```
/* View shows events inaccessible to users */
CREATE VIEW users_inaccessible_events AS
SELECT DISTINCT u.id AS user_id, event_id
FROM users u
JOIN events_accounts ea ON 1
LEFT JOIN users_accounts ua
ON u.id = ua.user_id AND ea.account_id = ua.account_id
WHERE ua.account_id IS NULL
/* View shows events accessible to users */
CREATE VIEW users_accessible_events AS
SELECT u.id AS user_id, e.id AS event_id
FROM users u
JOIN events e ON 1
LEFT JOIN users_inaccessible_events AS uie
ON u.id = uie.user_id AND e.id = uie.event_id
WHERE uie.event_id IS NULL
```
So, the query which gives the results you need, could be as simple as:
```
SELECT e.*
FROM users_accessible_events, events e
WHERE user_id = @user_id AND event_id = e.id
```
And if you need, for example, the list of users which can access a particular event, or the list of events which can't be accessed by a specific user, etc., an applicable query will be also very simple and straightforward. | Row permissions based on child rows | [
"",
"mysql",
"sql",
"group-by",
""
] |
I have a table like this (this is really an example only):
```
+-------------+---------------------+---------------------+
| status | open_date | close_date |
+-------------+---------------------+---------------------+
| closed | 01-11-2014 19:32:44 | 01-11-2014 20:32:44 |
| open | 01-12-2014 22:33:49 | 02-12-2014 22:33:49 |
| open | 01-23-2014 22:08:24 | 03-23-2014 22:08:24 |
| closed | 02-01-2014 22:33:57 | 03-01-2014 22:33:57 |
| open | 02-01-2013 22:37:34 | 02-01-2013 23:37:34 |
| closed | 04-20-2013 15:23:00 | 05-20-2013 15:23:00 |
| open | 04-20-2013 12:21:49 | 05-20-2013 12:21:49 |
| closed | 04-25-2013 11:22:00 | 06-25-2013 11:22:00 |
| closed | 05-20-2013 14:23:49 | 10-20-2013 14:23:49 |
| closed | 04-20-2013 16:33:49 | 04-25-2013 16:33:49 |
+-------------+---------------------+---------------------+
```
And want to list all opened and closed cases by Year and Month, like this:
```
+------+-------+--------------+--------------+
| Year | Month | Opened Cases | Closed Cases |
+------+-------+--------------+--------------+
| 2014 |     4 |           10 |            5 |
| 2014 |     3 |            9 |            7 |
| 2014 |     2 |           15 |           10 |
| 2014 |     1 |           12 |            1 |
| 2013 |    12 |           30 |            9 |
| 2013 |    11 |            5 |           50 |
+------+-------+--------------+--------------+
```
I have a select like this:
```
SELECT
YEAR(open_date) AS TheYear,
MONTH(open_date) AS TheMonth,
sum(CASE WHEN open_date = ??? THEN 1 ELSE 0 END) TheOpened
sum(CASE WHEN close_date = ??? THEN 1 ELSE 0 END) TheClosed
FROM
TABLE
WHERE
CASEGROUP= 'SUPPORT'
GROUP BY
MONTH(open_date),
YEAR(open_date)
ORDER BY
TheYear DESC,
TheMonth ASC
``` | Found it.
```
SELECT
YEAR(open_time) AS TheYear,
MONTH(open_time) AS TheMonth,
sum(CASE WHEN DATEPART(YYYY, open_time)= YEAR(open_time) AND DATEPART(MM, open_time)= MONTH(open_time) THEN 1 ELSE 0 END) TheOpened,
sum(CASE WHEN status = 'closed' THEN 1 ELSE 0 END) TheClosed
FROM
TABLE
WHERE
CASEGROUP= 'SUPPORT'
GROUP BY
MONTH(open_time),
YEAR(open_time)
ORDER BY
TheYear DESC,
TheMonth ASC
``` | Try this
```
SELECT
YEAR(open_date) AS TheYear,
MONTH(open_date) AS TheMonth,
sum(CASE WHEN open_status= 'Closed' THEN 0 ELSE 1 END) TheOpened,
sum(CASE WHEN open_status= 'Closed' THEN 1 ELSE 0 END) TheClosed
FROM
TABLE
WHERE
CASEGROUP= 'SUPPORT'
GROUP BY
MONTH(open_date),
YEAR(open_date)
ORDER BY
TheYear DESC,
TheMonth ASC
``` | Select and group by Year, Month and Condition | [
"",
"sql",
"sql-server",
"select",
"group-by",
""
] |
I have a maybe really tricky question. I have a table with costs, very simple:
```
Cost name, cost value
```
And I want to output the `TOP 10` costs values with the name, which is no problem, BUT: as 11th row, I want to output all others sum as "Others" marked ... Is that possible with a SQL query?
I hope you understand my question and I'm very thankful for all helpful answers.
Best regards,
Tobias
UPDATE:

<< Data Example >> | Please try:
```
;with T as(
select *, ROW_NUMBER() over (order by value desc) RNum
from YourTable
)
select value, name from T
where RNum<=10
union all
select sum(value), 'Others'
from T
where RNum>10
``` | Perhaps something like this?
```
select * from (select top (10) name, value from costs order by value) s1
UNION (
select 'other', sum(value) from costs
where name not in (select top 10 Name from costs order by value)
)
```
This assumes Name is a PK on costs. | Combine Row Values to "Others" | [
"",
"sql",
"sql-server",
""
] |
I have following MySQL Tables:
```
+--------------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------------+----------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| firstName | longtext | NO | | NULL | |
| lastName | longtext | NO | | NULL | |
| username | longtext | NO | | NULL | |
+--------------------+----------+------+-----+---------+----------------+
```
And:
```
+------------------------+----------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------------------+----------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| trainee_id | int(11) | NO | MUL | NULL | |
| date | date | NO | | NULL | |
| duration | time | NO | | NULL | |
| educationDepartment | longtext | NO | | NULL | |
| completedtasks | longtext | NO | | NULL | |
+------------------------+----------+------+-----+---------+----------------+
```
I have two conditions. I have a start and a end-date from my second table and I want the data from a specific user which I just can verify through the "username" of the first table. It looks like this: username is known > I will get the id (Foreign Key) of the user > now I can use this Foreign Key to get userspecific data from the second table. I started my MySQL query and it looks like this atm:
```
SELECT date, duration, educationDepartment, completedtasks FROM programm_completedtask WHERE date BETWEEN "2014-04-07" AND "2014-04-11";
```
But how can I make it now user specific ? IMPORTANT: I just want a MySQL query, php or something like that won't help me :/ | This limits the results to one particular user:
```
SELECT date, duration, educationDepartment, completedtasks
FROM programm_completedtask
WHERE date BETWEEN '2014-04-07' AND '2014-04-11'
AND trainee_id = (select id from users where username = 'XYZ');
``` | Something like this?
```
SELECT
    table2.username,
    table1.date,
    table1.duration,
    table1.educationDepartment,
    table1.completedtasks
FROM table1
LEFT JOIN table2 ON table2.id = table1.trainee_id
WHERE
    table1.date BETWEEN "2014-04-07" AND "2014-04-11" AND
    table1.username = 'someuser';
```
"",
"mysql",
"sql",
"join",
"left-join",
"inner-join",
""
] |
This may be a silly question, but null seems to neither equal nor unequal any empty string "".
I have a table with the following values:
```
id field1 field2 field3
1 a b c
2 null b c
3 b c
4 a b c
```
My query
```
select * from table where field1 = ""
```
does not return row 2 where the value of field1 is null. This makes absolute sense, as null is not equal to an empty string.
But,
```
select * from table where field1 != ""
```
doesn't return row 2 either.
Does anyone have an explanation for the historic origin of this? Is it because the value null means that we do not know the value and hence it is unknown whether field1 is equal or unequal to an empty string for row 2? | > Is it because the value null means that we do not know the value and
> hence it is unknown whether field1 is equal or unequal to an empty
> string for row 2?
You are correct. Whenever you perform a comparison against `NULL`, the result is `NULL`.
You can think of `NULL` as meaning "unknown". When `NULL` is stored in a column for a record, it doesn't mean that it doesn't have a value, it means it hasn't been entered in the database.
For example, you might have a "Person" record, with a "Date of Birth" column. If the value is `NULL`, it doesn't mean that person wasn't born. It just means it hasn't been entered, so, according to the database, that person's birth date is "unknown".
If you don't know the person's date of birth, you can't answer either of these questions:
> Was the person born on April 1st?
>
> Was the person not born on April 1st?
The answer to both is "unknown".
You also can't answer:
> Was the person born after April 1st?
>
> Was the person not born before April 1st?
Whenever you compare a known value against "unknown", the answer is going to be "unknown".
Further, if two people's dates of birth are both `NULL` or "unknown", you also can't answer these:
> Were the two people born on the same day?
>
> Were the two people not born on the same day?
>
> Was person one born after person two?
>
> Was person two born after person one?
Whenever you compare an "unknown" value to another "unknown" value, the answer is "unknown".
Comparing anything against an "unknown" value yields "unknown".
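The same three-valued logic is easy to demonstrate with SQLite, using a table shaped like the asker's (sample data assumed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, field1 TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, None), (3, "")])

eq  = [r[0] for r in con.execute("SELECT id FROM t WHERE field1 = ''")]
ne  = [r[0] for r in con.execute("SELECT id FROM t WHERE field1 != ''")]
nul = [r[0] for r in con.execute("SELECT id FROM t WHERE field1 IS NULL")]
# The NULL row answers neither = '' nor != ''; only IS NULL finds it.
```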
You can, however, always answer the following:
> Do I know the person's date of birth?
>
> Do I not know the person's date of birth?
To ask that question in MySQL, you use `IS NOT NULL` and `IS NULL`. | `NULL` is not equal to an empty. `NULL` is not equal to *anything*, including `NULL`. To compare to `NULL` you need to use `IS NULL` or `IS NOT NULL`
```
SELECT NULL = NULL,
NULL != NULL,
NULL IS NULL,
NULL IS NOT NULL
NULL = NULL NULL != NULL NULL IS NULL NULL IS NOT NULL
(null) (null) 1 0
```
[SQL Fiddle](http://sqlfiddle.com/#!2/d41d8/35250)
```
select * from table where field1 IS NULL
```
or
```
select * from table where field1 IS NOT NULL
``` | MySQL - different behaviour for = "" and != "" with regards to null | [
"",
"mysql",
"sql",
"null",
"string",
""
] |
I have the following query:
```
DELETE FROM [DSPCONTENT01].[dbo].[Contact Center]
WHERE [Contact Center] IS NULL
AND [F2] IS NULL
```
How can I modify the query so after execution ONLY row 15 is still showing and every other row is deleted? | Some other way to delete except the specific row:
```
DELETE FROM [DSPCONTENT01].[dbo].[Contact Center] WHERE F10 NOT IN (2096)
```
You can also `Select` the particular record into a temporary table, then truncate the original table and finally copy the data
from the temporary table back to the original table. Something like this:
select * into #ContactCenter
from [DSPCONTENT01].[dbo].[Contact Center] where F10 = 2096
truncate table [DSPCONTENT01].[dbo].[Contact Center]
insert into [DSPCONTENT01].[dbo].[Contact Center]
select * from #ContactCenter
``` | As of now you could simply do this, **assuming this record is the only one having a F10 value of 2096**:
```
DELETE FROM [DSPCONTENT01].[dbo].[Contact Center]
WHERE F10 <> 2096
```
However, in the long run you are bound to have other problems with such a design. This table needs a [primary key](http://msdn.microsoft.com/en-us/library/bb726011.aspx) and proper field definitions. Urgently. | How to delete all rows in sql except one | [
"",
"sql",
"sql-server",
""
] |
I have a table with 3 columns (in SQL Server 2012). One of the columns is a date column. What I would like to do is split the table for two specified dates and merge them into one table with an extra field. Hopefully the example below will explain.
Example of what I currently have.
```
Company date no_employees
ABC 2014-05-30 35
DEF 2014-05-30 322
GHI 2014-05-30 65
JKL 2014-05-30 8
MNO 2014-05-30 30
ABC 2014-01-01 33
DEF 2014-01-01 301
GHI 2014-01-01 70
MNO 2014-01-01 30
```
What I would like a query to return for me (not sure if its possible),
```
Company start date no_employees end date no_employees diff
ABC 33 35 2
DEF 301 322 21
GHI 70 65 -5
JKL 0 8 8
MNO 30 30 0
``` | ```
declare @lowdate date = '2014-01-01'
declare @highdate date = '2014-05-30'
;with x as
(
select company, min(no_employees) no_employees
from @t records
where recorddate = @lowdate
group by company
), y as
(
select company, max(no_employees) no_employees
from @t records
where recorddate = @highdate
group by company
)
select coalesce(x.company, y.company) company,
coalesce(x.no_employees, 0) start_no_employees,
coalesce(y.no_employees, 0) end_no_employees,
coalesce(y.no_employees, 0) - coalesce(x.no_employees, 0) diff
from
x full outer join y
on
x.company = y.company
``` | [`PIVOT`](http://technet.microsoft.com/en-us/library/ms177410%28v=sql.105%29.aspx) (and [`COALESCE`](http://technet.microsoft.com/en-us/library/ms190349%28v=SQL.105%29.aspx) to generate the 0s) seems to do it:
```
declare @t table (Company char(3),[date] date,no_employees int)
insert into @t(Company,[date],no_employees) values
('ABC','2014-05-30',35 ),
('DEF','2014-05-30',322 ),
('GHI','2014-05-30',65 ),
('JKL','2014-05-30',8 ),
('MNO','2014-05-30',30 ),
('ABC','2014-01-01',33 ),
('DEF','2014-01-01',301 ),
('GHI','2014-01-01',70 ),
('MNO','2014-01-01',30 )
select Company,
COALESCE(start,0) as start,
COALESCE([end],0) as [end],
COALESCE([end],0)-COALESCE(start,0) as diff
from
(select
Company,
CASE WHEN [date]='20140530' THEN 'end'
ELSE 'start' END as period,
no_employees
from @t
where [date] in ('20140101','20140530')
) t
pivot (MAX(no_employees) for period in ([start],[end])) u
```
Result:
```
Company start end diff
------- ----------- ----------- -----------
ABC 33 35 2
DEF 301 322 21
GHI 70 65 -5
JKL 0 8 8
MNO 30 30 0
```
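SQLite has no `PIVOT`, but the same reshaping can be prototyped with conditional aggregation (a sketch using the sample data from the question; `t`, `d`, and `n` are assumed names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (company TEXT, d TEXT, n INTEGER)")
rows = [("ABC", "2014-05-30", 35), ("DEF", "2014-05-30", 322),
        ("GHI", "2014-05-30", 65), ("JKL", "2014-05-30", 8),
        ("MNO", "2014-05-30", 30), ("ABC", "2014-01-01", 33),
        ("DEF", "2014-01-01", 301), ("GHI", "2014-01-01", 70),
        ("MNO", "2014-01-01", 30)]
con.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
# CASE picks each date's value into its own column; COALESCE turns the
# missing JKL start value into 0 instead of NULL.
result = con.execute("""
    SELECT company,
           COALESCE(SUM(CASE WHEN d = '2014-01-01' THEN n END), 0) AS start_n,
           COALESCE(SUM(CASE WHEN d = '2014-05-30' THEN n END), 0) AS end_n,
           COALESCE(SUM(CASE WHEN d = '2014-05-30' THEN n END), 0)
         - COALESCE(SUM(CASE WHEN d = '2014-01-01' THEN n END), 0) AS diff
    FROM t
    WHERE d IN ('2014-01-01', '2014-05-30')
    GROUP BY company
    ORDER BY company
""").fetchall()
```

The result matches the table above, including the JKL row with a defaulted 0.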
This could easily be parameterized for the specific start and end dates to use.
Also, at the moment I'm using `MAX` because we have to have an aggregate in `PIVOT`, even though here the sample data contains a maximum of one row. If there's a possibility of multiple rows existing for the start or end date, we'd need to know how you want that handled. | SQL Server creating two tables and comparing them | [
"",
"sql",
"sql-server",
""
] |
Suppose I get this table:
```
MyTable
+----+-------+
| ID | count |
+----+-------+
| a | 2 |
| b | 6 |
| c | 4 |
| d | 6 |
| e | 2 |
+----+-------+
```
Now I want this back:
```
Result
+----+-------+
| ID | count |
+----+-------+
| b | 6 |
| d | 6 |
+----+-------+
```
I want the IDs that have the most counted values. So if there are multiple maximum values, I want them all. Also I don't know if there *will* be multiple values and if there are, how many it will be. | You can get the greatest value inside the subquery. Eg,
```
SELECT *
FROM MyTable
WHERE count =
(
SELECT MAX(count)
FROM MyTable
)
``` | ```
SELECT
Id, count
FROM MyTable
WHERE count = (SELECT MAX(count) FROM MyTable)
``` | MySQL SELECT multiple/all max values in the same column | [
"",
"mysql",
"sql",
""
] |
I have a column called tidpunkt in the database that stores the dates in timestamp formats. How do I extract only the year with SQL? I still want the extracted year to be in timestamp.
I've tried like this:
```
SELECT EXTRACT(YEAR FROM tidpunkt) as tidpunkt FROM ringupp
```
But it does not work. Am I doing something wrong? | ```
SELECT YEAR(tidpunkt) as tidpunkt FROM ringupp
```
If the column instead stores a Unix timestamp (an integer number of seconds), do this:
```
SELECT FROM_UNIXTIME(tidpunkt , '%Y') AS tidpunkt FROM ringupp
```
**MySQL** `FROM_UNIXTIME()` returns a date/datetime from a Unix timestamp value. The return value is in ‘YYYY-MM-DD HH:MM:SS’ format.
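The Unix-timestamp case can be sanity-checked with SQLite's `strftime` (a sketch; the `'unixepoch'` modifier plays the role of `FROM_UNIXTIME` here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 1 June 2013 00:00:00 UTC as a Unix timestamp.
ts = 1370044800
# strftime('%Y', ..., 'unixepoch') extracts the year from the epoch seconds.
year = con.execute("SELECT strftime('%Y', ?, 'unixepoch')", (ts,)).fetchone()[0]
```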
See [**DOCUMENTATION**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_from-unixtime) . Also see this [**Example**](http://www.w3resource.com/mysql/date-and-time-functions/mysql-from_unixtime-function.php) | Try this:
if you require 4 digit year
```
SELECT DATEPART('yyyy',tidpunkt) as tidpunkt FROM ringupp
```
and if 2 digit year is required:
```
SELECT DATEPART('yy',tidpunkt) as tidpunkt FROM ringupp
```
Documentation [here](http://msdn.microsoft.com/en-us/library/ms174420.aspx) | Extract year from UNIX timestamp with SQL | [
"",
"sql",
"time",
"timestamp",
"extract",
""
] |
Ok, my title looks a little bit weird but I could find a better way to title my question.
My current SQL statement looks like this:
```
SELECT
ActionBy,
DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0) AS 'dateStart',
CONVERT(CHAR(3), (DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0)), 100) AS 'Month' ,
YEAR(ActionCreateDate) AS 'YEAR',
SUM([TimeTakenToComplete]) AS TOTAL
FROM
[myTable]
WHERE
[TimeTakenToComplete] IS NOT NULL
AND DeleteDate IS NULL
AND ActionBy IS NOT NULL
GROUP BY
ActionBy, DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0),
YEAR(ActionCreateDate)
ORDER BY
DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0) ASC
```
The end result looks something like this:
```
user1 , 2013-06-01 00:00:00.000 , Jun , 2013 , 1000
user2 , 2013-06-01 00:00:00.000 , Jun , 2013 , 998
user3 , 2013-06-01 00:00:00.000 , Jun , 2013 , 600
user1 , 2013-07-01 00:00:00.000 , Jul , 2013 , 1110
user3 , 2013-07-01 00:00:00.000 , Jul , 2013 , 2330
```
My problem is that I want to have a record for all users, for all months with a default value of 0.
So my desired result would look something like this:
```
user1 , 2013-06-01 00:00:00.000 , Jun , 2013 , 1000
user2 , 2013-06-01 00:00:00.000 , Jun , 2013 , 998
user3 , 2013-06-01 00:00:00.000 , Jun , 2013 , 600
user1 , 2013-07-01 00:00:00.000 , Jul , 2013 , 1110
user2 , 2013-07-01 00:00:00.000 , Jul , 2013 , 0
user3 , 2013-07-01 00:00:00.000 , Jul , 2013 , 2330
```
Is there any way that I can achieve this result or should I go and solve this issue programmatically? | Check this solution out. It gives the desired output.
Also, I have used table variables to simulate your setup, and an `@Users` table on the assumption that you have a users table. If you do not want to use a users table then you can use `mytable` as well, but there must be at least 1 entry for the missing user in some other month.
```
--Simulation of your myTable
DECLARE @myTable TABLE
(
ActionBy VARCHAR(10),
ActionCreateDate DATETIME,
TimeTakenToComplete INT
)
--Simulation of users table
DECLARE @Users TABLE
(
ActionBy VARCHAR(10)
)
--Dummy data for testing
INSERT INTO @myTable VALUES ('user1' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user2' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-06-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user1' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-07-01', 100);
INSERT INTO @myTable VALUES ('user3' , '2013-07-01', 100);
INSERT INTO @Users VALUES ('user1');
INSERT INTO @Users VALUES ('user2');
INSERT INTO @Users VALUES ('user3');
--Actual solution starts from here
; WITH MYCTE AS
(SELECT
ActionBy,
DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0) AS 'dateStart',
CONVERT(CHAR(3), (DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0)), 100) AS 'Month' ,
YEAR(ActionCreateDate) AS 'YEAR',
SUM([TimeTakenToComplete]) AS TOTAL
FROM
@myTable
WHERE
[TimeTakenToComplete] IS NOT NULL
--AND DeleteDate IS NULL --uncomment this on using with your code
AND ActionBy IS NOT NULL
GROUP BY
ActionBy, DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0),
YEAR(ActionCreateDate)
)
SELECT
ISNULL(M1.ActionBy, L1.ActionBy) ,
ISNULL(M1.dateStart, L1.dateStart),
ISNULL(M1.Month, L1.Month),
ISNULL(M1.YEAR, L1.YEAR),
ISNULL(M1.TOTAL, 0)
FROM MYCTE M1
RIGHT OUTER JOIN
(
SELECT * FROM
(
(SELECT DISTINCT [Month], dateStart, YEAR FROM MYCTE) m
CROSS JOIN (SELECT ActionBy FROM @Users) U
)
)L1
ON M1.[Month] = L1.[Month]
AND M1.ActionBy = L1.ActionBy
```
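The core idea — cross join the full user/month grid, then outer join the aggregates and COALESCE the gaps to 0 — can be sketched compactly (illustrative schema, not the poster's real tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE totals (username TEXT, ym TEXT, total INTEGER)")
con.executemany("INSERT INTO totals VALUES (?, ?, ?)",
                [("user1", "2013-06", 1000), ("user2", "2013-06", 998),
                 ("user3", "2013-06", 600), ("user1", "2013-07", 1110),
                 ("user3", "2013-07", 2330)])
# Cross join every user with every month, then LEFT JOIN the real totals
# so missing combinations surface as 0 instead of disappearing.
result = con.execute("""
    SELECT u.username, m.ym, COALESCE(t.total, 0)
    FROM (SELECT DISTINCT username FROM totals) u
    CROSS JOIN (SELECT DISTINCT ym FROM totals) m
    LEFT JOIN totals t ON t.username = u.username AND t.ym = m.ym
    ORDER BY m.ym, u.username
""").fetchall()
```

All six user/month combinations come back, with `('user2', '2013-07', 0)` filled in.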
Hope this helps | actually I was waiting for @Hamlet's solution..
but here is what I have in mind (I haven't tried to run it).
Note that if no user has any data in a given month, that month will not appear in the output at all.
```
SELECT a.username,
b.dateStart,
b.Month ,
b.YEAR,
ISNULL(result.TOTAL,0)
FROM
(SELECT username FROM tableUser) a CROSS JOIN
(SELECT DISTINCT dateStart, CONVERT(CHAR(3), dateStart, 100) AS 'Month', YEAR(dateStart) AS 'YEAR' FROM result) b
LEFT JOIN result
ON a.username=result.ActionBy AND b.dateStart=result.dateStart
```
note: I called the table from your original query as `result`, and in your original query, you can just have this selection:
```
SELECT
ActionBy,
DATEADD(MONTH, DATEDIFF(MONTH, 0, ActionCreateDate), 0) AS 'dateStart',
YEAR(ActionCreateDate) AS 'YEAR',
SUM([TimeTakenToComplete]) AS TOTAL
...
``` | Query by user, by month including a default value for not existing records | [
"",
"sql",
"sql-server",
""
] |
I have a string like a address.
How delete all symbols (number, space, dot, comma) from start and end of string?
*For example:*
Obere Str**. 57**
**120** Hanover Sq**.**
*to->*
Obere Str
Hanover Sq
Thanks! | I found the answer:
```
select regexp_replace(regexp_replace(address, '[0-9\. ,]+$',''),'^[0-9\. ,]+','')
from customers
``` | example :
```
SELECT employee_id, REPLACE(job_id, 'SH', 'SHIPPING') FROM employees
WHERE SUBSTR(job_id, 1, 2) = 'SH';
```
<http://docs.oracle.com/cd/B25329_01/doc/appdev.102/b25108/xedev_sql.htm>
Chapter 3-18 | Delete symbol (number, space, dot, comma) in string SQL Oracle | [
"",
"sql",
"database",
"oracle",
""
] |
I have the following query in SQL:
```
SELECT TOP 1000 [Date]
,[APDERM]
,[IN HOUSE]
,[RAD EMR ORDERS]
,[EMR ORDERS]
FROM [database].[dbo].[MYTABLE]
```
Which produces this:

How can I write a query which will add up each column and insert the totals as a new row under `[MYTABLE]`?
Something like this:
 | Use `UNION` operator:
```
SELECT [Date]
,[APDERM]
,[IN HOUSE]
,[RAD EMR ORDERS]
,[EMR ORDERS]
FROM [database].[dbo].[MYTABLE]
UNION
SELECT NULL AS [Date]
, SUM([APDERM])
, SUM([IN HOUSE])
, SUM([RAD EMR ORDERS])
, SUM([EMR ORDERS])
FROM [database].[dbo].[MYTABLE]
``` | you can try this to do it in a single statement:
```
SELECT TOP 1000 [Date]
,SUM([APDERM]) AS [APDERM]
,SUM([IN HOUSE]) AS [IN HOUSE]
,SUM([RAD EMR ORDERS]) AS [RAD EMR ORDERS]
,SUM([EMR ORDERS]) AS [EMR ORDERS]
FROM [database].[dbo].[MYTABLE]
GROUP BY [Date]
WITH ROLLUP
``` | How to SUM the values in a column in SQL | [
"",
"sql",
"sql-server",
""
] |
I have a column containing value like:
"AAAA\BBBBBBBB\CCC" (A, B, C part length are not fixed.)
I need to trim out the \CCC part if it exists, and leave it alone if not exist.
For example:
AAA\BBBBBB\CCC -> AAA\BBBBBB
AA\BBBB -> AA\BBBB
AA -> AA
**Sorry I am not clear enough, the A, B, C parts are not literally ABC, they could be any content.**
**Also \DDD\EEEE(etc.) should be removed as well** | While there's certainly a way to do it in pure T-SQL, it might not be as clear as it could be.
You may want to consider using a [SQLCLR](http://msdn.microsoft.com/en-us/library/ms254963%28v=vs.110%29.aspx)-based user defined function (UDF) instead. With that you'll be able to benefit from the power and clarity of [c#](/questions/tagged/c%23 "show questions tagged 'c#'") (or [vb.net](/questions/tagged/vb.net "show questions tagged 'vb.net'")) within Sql Server.
Just make it so that your function will receive your string as a param and return the output that you want as a scalar value. From then on you'll be able to use that function just like any other UDF you may already have. This way your code would be much easier to write (and read / maintain afterwards).
Your [sqlclr](/questions/tagged/sqlclr "show questions tagged 'sqlclr'") function could be as easy to write as this (pseudo code):
```
public static string SpecialSubstring(string input)
{
if (input == null) return null;
var builder = new StringBuilder();
var occurences = 0;
for (var i = 0; i < input.Length; i++)
{
var current = input[i];
if (current == '\\') occurences += 1;
if (occurences >= 2) break;
builder.Append(current);
}
return builder.ToString();
}
```
Then, from T-SQL:
```
SELECT
[owningschema].SpecialSubstring('AAA\BBBBBB\CCC'), -- returns 'AAA\BBBBBB'
[owningschema].SpecialSubstring('AA\BBBB'), -- returns 'AA\BBBB'
[owningschema].SpecialSubstring('AA') -- returns 'AA'
```
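The truncation rule itself is easy to prototype outside the database before committing to a SQLCLR or T-SQL version (a sketch in Python; `first_two_parts` is a hypothetical helper name):

```python
def first_two_parts(value, sep="\\"):
    """Keep only the first two sep-separated parts, if present."""
    if value is None:
        return None
    # split() yields all parts; slicing to [:2] drops \CCC, \DDD, \EEEE, etc.
    return sep.join(value.split(sep)[:2])

examples = [first_two_parts(s) for s in ("AAA\\BBBBBB\\CCC", "AA\\BBBB", "AA")]
```

This also covers the asker's follow-up: every part after the second one is dropped, not just the third.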
This page will give you pretty much all you need to get started:
[SQL Server Common Language Runtime Integration](http://msdn.microsoft.com/en-us/library/ms254963%28v=vs.110%29.aspx) | Here is a solution to remove the last part if there are two or more parts (separated by \)
```
DECLARE @var VARCHAR(32) = 'AAAA\BBBBBBBB\CCC'
SELECT
LEN(@var) - LEN(REPLACE(@var, '\', '')) -- Number of occurences of the character \
, CHARINDEX('\', @var) -- Position of the first occurence
, LEN(@var) - CHARINDEX('\', REVERSE(@var)) -- Position of the last occurence
, CASE WHEN LEN(@var) - LEN(REPLACE(@var, '\', '')) > 1 AND CHARINDEX('\', @var) != LEN(@var) - CHARINDEX('\', REVERSE(@var))
THEN SUBSTRING(@var, 1, LEN(@var) - CHARINDEX('\', REVERSE(@var)))
ELSE @var
END
``` | How to get SQL substring? | [
"",
"sql",
"sql-server",
"t-sql",
"substring",
""
] |
Let's say I have a password that looks like this: password;
How can I get it to work with a semicolon as the last character. The password works in SSMS and ODBC, but not with the connection string. I've tried `"` around it but that does not seem to work.
```
<add name="DbConn" connectionString="Data Source=LOCALHOST;Database=MYDB;Trusted_Connection=no;User Id=myuser;Password=password;" providerName="System.Data.SqlClient" />
```
This is for an ASP.NET web application. As far as I can tell, it is impossible. UPDATE: It IS possible! | Encapsulate your password in single quotes. e.g. given the password `iloveachallenge;` your connection string should contain `Password='iloveachallenge;';`.
I am using the following code to connect to SQL Server 2008 R2.
```
var sqlConnection = new SqlConnection()
{
ConnectionString = "Server=DT2719MOD;Database=abs2;User Id=TestUserLogon;Password='iloveachallenge;';"
};
sqlConnection.Open();
Console.WriteLine(sqlConnection.State);
sqlConnection.Close();
```
Edit:
Also tried to use the connection string that you have and it works on my machine.
```
ConnectionString="Data Source=DT2719MOD;Database=abs2;Trusted_Connection=no;User Id=TestUserLogon;Password='iloveachallenge;';"
``` | To include values that contain a semicolon, single-quote character, or double-quote character, the value must be enclosed in double quotes.
From: <http://msdn.microsoft.com/en-us/library/windows/desktop/ms722656(v=vs.85).aspx>
Basically you have there all the escaping procedures of a connection string. | SQL Server 2008 password ending in a semicolon | [
"",
"sql",
"sql-server-2008",
""
] |
I have a query that produces one row. I would like to output this one row multiple times, determined by a constant in this query. The query is on the format:
```
select A.a, B.b, C.c from A
inner join B ..
inner join C ..
where A.a =.. and B.b = .. and C.c = ..
```
This gives the result:
```
[A.a, B.b, C.c]
```
but what I want is, given that the constant is 3:
```
[A.a, B.b, C.c]
[A.a, B.b, C.c]
[A.a, B.b, C.c]
```
Is there an easy way to do this? Using UNION ALL with the query repeated doesn't work because I need to use a numeric value to specify the number of times the value should be repeated. This value will be a parameter for a report made with *iReport* with this query. If it could be done in *iReport* that would also work, but it has to be possible in a very old version of *iReport (3.0.0)*. Any help would be greatly appreciated, I've been googling this for hours but can't find a solution that works. It seems like it should be a simple task. | Why do you need this? One way I am thinking of is to have number table with 100 rows (numbers from 1 to 100) and cross join with the original query
```
select t1.* from
(
select A.a, B.b, C.c from A
inner join B ..
inner join C ..
where A.a =.. and B.b = .. and C.c = ..
) as t1 inner join number_table on 1=1
where number<=3
``` | You can use recursive SQL to generate a "table" with as many rows as you want, then join it with your original query, something like that:
```
with somerows(i) as (
select 1 from sysibm.sysdummy1
union all
select i+1 from somerows
where i < ? -- this host variable would specify the number of rows
)
select A.a, B.b, C.c from
somerows,
A
inner join B ..
inner join C ..
where A.a =.. and B.b = .. and C.c = ..
``` | Select same row multiple times | [
"",
"sql",
"jasper-reports",
"db2",
"ireport",
""
] |
from millions of records in a table if i want to select few records depending on several where conditions, are there any facts to be considered for the sequence of where conditions?
for example, in a table of students in a state where conditions might include the following:
1. Institute Id,
2. DeptId (unique for each institute, i.e. a deptId in one institute cannot be present in another institute.)
Now, if I have to select a list of students in a particular dept, deptId is enough for the WHERE condition, because the students' deptId will be present only in one particular institute.
But will it improve performance if I also include instituteId before the deptId in the WHERE conditions, so that records can be filtered based on institute first, and then based on deptId?
Will the order of where conditions have impact on query performance? Thanks. | Order of where conditions won't have impact on query performance. Because your RDBMS will reorganise depending on best indexed columns.
Anyway, if you have indexes on both columns you should use only `DeptId`. Otherwise RDBMS will perform 2 filters instead of 1, in theory it could be slower to use more conditions (depending on indexes).
But, you can try both ways to check excecution time, many things can affect performance (specially with huge bulk of data) so just test it.
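One concrete way to "just test it" is to compare the execution plans for both predicate orders (a sketch with SQLite; the real RDBMS would expose the same idea through EXPLAIN / EXPLAIN PLAN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (institute_id INT, dept_id INT, name TEXT)")
con.execute("CREATE INDEX idx_inst_dept ON students (institute_id, dept_id)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable detail line.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

p1 = plan("SELECT * FROM students WHERE institute_id = 1 AND dept_id = 2")
p2 = plan("SELECT * FROM students WHERE dept_id = 2 AND institute_id = 1")
# Both predicate orders yield the identical plan: the optimizer reorders them.
```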
Consider creating a single composite index covering both columns; in that case it could be worthwhile to use both conditions (depending on the RDBMS). | Not per se, but it depends on the index: if you have an index on (Institute ID, DeptID), then using only DeptID in the WHERE condition will not use the index and will perform a table scan (there is a lot more to this, but that's the basics). Always try to write a WHERE condition covered by the PK or by another index on the table, using every column in that index: if you have an index on a, b and c and a WHERE on a and c, it will not use the index.
The order of the column in the where condition will be re-organized by the DB to fit the index definition | performance of select queries oracle | [
"",
"sql",
"oracle",
"select",
""
] |
I was writing a simple MySQL query along these lines today:
```
SELECT * FROM table_name WHERE column_name IS NOT NULL and column_name !='foo';
```
This returned the expected number of results. But I didn't love the syntax, and tried to make it more elegant:
```
SELECT * FROM table_name WHERE column_name NOT IN (NULL, 'foo');
```
Of course, that returned 0 results.
My question is this: **Can you explain why a null value would not be in `(NULL, 'bar')`?** I think it's because you can't compare `NULL` with `NULL`, at least philosophically. But why not?
Consider this:
```
# ruby
nil == nil
# => true
/* JavaScript */
undefined === undefined
// true
```
In those languages, a nil or undefined value is equal to any other nil or undefined value. Why not in SQL?
(Bonus points about close-to-the-metal implementation details of SQL, or philosophical differences in languages?) | SQL, the structured query language for databases, implements **Three-Valued Logic** as a means of handling comparisons involving NULL field content.
```
True
False
Unknown
```
The original intent of NULL in SQL was to represent missing data in a database, i.e. the assumption that an actual value exists, but that the value is not currently recorded in the database.
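The three-valued outcome is easy to observe directly (a sketch with `sqlite3`; MySQL behaves identically here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# 'x NOT IN (NULL, ...)' can never be TRUE: even when x matches nothing,
# the comparison against NULL makes the whole expression NULL (None),
# so it never passes a WHERE clause.
not_in, plain_in = con.execute(
    "SELECT 1 NOT IN (NULL, 2), 2 IN (NULL, 2)"
).fetchone()
```

`not_in` is `None` (unknown), while `plain_in` is `1` because an actual match was found.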
So a comparison yielding the UNKNOWN value gives an indeterminate result, which is treated as FALSE in a WHERE clause. | In SQL, direct comparisons with NULL are neither true nor false, so the `IN` clause will not work. This is a fundamental feature1 of the ISO SQL standard.
See this [Wikipedia](http://en.wikipedia.org/wiki/Null_%28SQL%29#Comparisons_with_NULL_and_the_three-valued_logic_.283VL.29) entry:
> Since Null is not a member of any data domain, it is not considered a "value", but rather a marker (or placeholder) indicating the absence of value. Because of this, comparisons with Null can never result in either True or False, but always in a third logical result, Unknown.
In this way, the concept of NULL in SQL is very different from Ruby's `nil`, or JavaScript's `undefined`. Ruby's `nil` is a 'value', so `nil == nil` is *true*. However, SQL's `NULL` is not a 'value', so `NULL = NULL` is *unknown* (but so is `NULL <> NULL`). For this reason, SQL provides a different operator for comparing `NULL`'s—`NULL IS NULL` is *true*.
1: Some may disagree that this is in fact a feature. | MySQL: Why can't I "column_name not in (null, 'foo')"? | [
"",
"mysql",
"sql",
""
] |
I have two threads, both need to update the same row.
The row looks like:
* (id, firstName, lastName)
* (1, "xxxx", null)
* 1 is the primary key value
Update looks like:
```
Update table set lastName = "yyy" where id = 1 and lastName = null;
```
What I want is: if one thread successfully updates the null lastName to a new value, I want the second thread to fail and throw some sort of exception back to the caller. I need a way to know during the update that the column is not null anymore (which means it was updated by the first thread).
Which update statement will solve my problem? (`select for update`, `coalesce` etc.) | You can use it security in default read committed isolation.
for exp :
```
SESSION A :
digoal=# create table t_curr(id int, c1 text, c2 text);
CREATE TABLE
digoal=# insert into t_curr values (1,'t',null);
INSERT 0 1
digoal=# begin;
BEGIN
digoal=# update t_curr set c2='ttt' where id=1 and c2 is null;
UPDATE 1
SESSION B :
digoal=# update t_curr set c2='ttwt' where id=1 and c2 is null;
SESSION A :
digoal=# end;
COMMIT
SESSION B :
UPDATE 0
digoal=# select * from t_curr ;
id | c1 | c2
----+----+-----
1 | t | ttt
(1 row)
```
but if you like to raise error when update zero rows, you shold use repeatable read isolation.
for exp :
```
SESSION A :
digoal=# update t_curr set c2 = null;
UPDATE 1
digoal=# begin transaction isolation level repeatable READ ;
BEGIN
digoal=# update t_curr set c2='ttt' where id=1 and c2 is null;
UPDATE 1
SESSION B :
digoal=# begin transaction isolation level repeatable READ ;
BEGIN
digoal=# update t_curr set c2='ttt' where id=1 and c2 is null;
SESSION A :
digoal=# end;
COMMIT
SESSION B :
ERROR: could not serialize access due to concurrent update
```
or you shold use function to update , and raise error.
for exp :
```
SESSION a :
digoal=# update t_curr set c2 = null;
UPDATE 1
digoal=# begin;
BEGIN
digoal=# do language plpgsql $$
declare
begin
update t_curr set c2='ttt' where id=1 and c2 is null;
if not found then
raise 'update 0';
end if;
end;
$$;
DO
SESSION B :
digoal=# begin;
BEGIN
digoal=# do language plpgsql $$
declare
begin
update t_curr set c2='ttt' where id=1 and c2 is null;
if not found then
raise 'update 0';
end if;
end;
$$;
SESSION A :
digoal=# end;
COMMIT
SESSION B :
ERROR: update 0
digoal=# end;
ROLLBACK
``` | ### Isolation level
This can be done with regular `UPDATE` commands - in the [*default* isolation level `READ COMMITTED`](http://www.postgresql.org/docs/current/interactive/transaction-iso.html). All of this is safe for concurrent use.
You *could* use `TRANSACTION ISOLATION LEVEL SERIALIZABLE`. This would generate a serialization failure exception automatically. But that makes the whole transactions much more expensive. Use one of the following methods in default isolation level instead:
### Faulty command
Your original command had two errors:
* *Values* are enclosed in single quotes, not double quotes: `'yyy'`.
* `something = NULL` is always `NULL`. Use `something IS NULL` instead. [Details in the manual.](http://www.postgresql.org/docs/current/interactive/functions-comparison.html)
```
UPDATE tbl
SET lastname = 'yyy'
WHERE tbl_id = 1
AND lastname IS NULL;
```
### [Command tag](http://www.postgresql.org/docs/current/interactive/sql-update.html#AEN82079)
It tells you how many rows were affected:
```
UPDATE count
```
Only the first `UPDATE` actually gets to update the row. All following attempts do nothing and return `UPDATE 0`.
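Driver code can react to that affected-row count without any exception machinery (a sketch with `sqlite3`; PostgreSQL drivers expose the same count, e.g. psycopg2's `cursor.rowcount`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (tbl_id INTEGER PRIMARY KEY, lastname TEXT)")
con.execute("INSERT INTO tbl VALUES (1, NULL)")

def claim(name):
    # rowcount is the driver-level equivalent of the UPDATE command tag:
    # 1 means this caller won the race, 0 means the column was already set.
    cur = con.execute(
        "UPDATE tbl SET lastname = ? WHERE tbl_id = 1 AND lastname IS NULL",
        (name,))
    return cur.rowcount

first = claim("yyy")
second = claim("zzz")
```

`first` is 1 and `second` is 0; only the first writer's value survives.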
### [`RETURNING` clause](http://www.postgresql.org/docs/current/interactive/sql-update.html)
An `UPDATE` can return rows directly:
```
UPDATE tbl
...
RETURNING lastname; -- value from *new* row
```
Only the first `UPDATE` returns a value. All following return *no row*.
### PL/pgSQL function raising exception
If you actually *need* an exception:
```
CREATE OR REPLACE FUNCTION f_upd(_tbl_id int, _lastname text)
RETURNS void AS
$func$
BEGIN
UPDATE tbl
SET lastname = $2
WHERE tbl_id = $1
AND lastname IS NULL;
IF NOT FOUND THEN
RAISE EXCEPTION 'UPDATE found no row.';
END IF;
END
$func$ LANGUAGE plpgsql;
``` | select for update if not null with exception or return value | [
"",
"sql",
"postgresql",
"concurrency",
"sql-update",
""
] |
I'm trying to create a table:
```
CREATE TABLE [MyTable]
(
[Id] [int] IDENTITY,
[Column1] [int] NOT NULL,
[Column2] [int] NOT NULL
CONSTRAINT [PK_MyTable_Id] PRIMARY KEY ([Id])
CONSTRAINT [UQ_MyTable_Column1_Column2] UNIQUE ([Column1], [Column2])
)
```
This script fails with the error:
> Both a PRIMARY KEY and UNIQUE constraint have been defined for column
> 'Column2', table 'MyTable'. Only one is allowed.
Why is this restriction enforced? How can I create a table with these properties? | You are missing the commas that separate the column definitions from the constraint definitions.
```
CREATE TABLE [MyTable]
(
[Id] [int] IDENTITY,
[Column1] [int] NOT NULL,
[Column2] [int] NOT NULL,
CONSTRAINT [PK_MyTable_Id] PRIMARY KEY ([Id]),
CONSTRAINT [UQ_MyTable_Column1_Column2] UNIQUE ([Column1], [Column2])
)
``` | ```
CREATE TABLE [MyTable]
(
[Id] [int] IDENTITY,
[Column1] [int] NOT NULL,
[Column2] [int] NOT NULL,
CONSTRAINT [PK_MyTable_Id] PRIMARY KEY ([Id])
)
ALTER TABLE [MyTable] ADD CONSTRAINT [UQ_MyTable_Column1_Column2] UNIQUE ([Column1], [Column2])
``` | Why can't I specify both a Primary Key and a Unique Constraint on the same table? | [
"",
"sql",
"sql-server",
"primary-key",
"unique-constraint",
""
] |
I have been using a Django database for a while without any major issues. Today I needed to set default values for a few tables for the first time. I created a fixtures directory in the top-level Django directory. Then I created the files for the default values. However, I keep getting error messages and I'm not sure why.
First I tried to use .sql files. It is worth noting that these tables are very simple; they only have one value, "name". My SQL file looked like this:
```
INSERT INTO MyTable (name) VALUES ('Default');
```
I saved this as `MyTable.sql`. When I ran the command `python manage.py loaddata fixtures/MyTable.sql`, I got this error message:
```
CommandError: Problem installing fixture 'MyTable': sql is not a known serialization format.
```
(Note: I also tried without the fixtures/ part, for the above example and the next, and got identical results).
I asked my project lead and he said he doesn't think SQL files can be used for this. So, I tried JSON files. My `MyTable.json` looked like this:
```
[
  {
    "model": "mydatabase.MyTable",
    "pk": 1,
    "fields": {
      "name": "Default"
    }
  }
]
```
I'll be very upfront to admit I've never worked with JSON in this context before, only in web development, so I don't know if the issue may be something I'm doing wrong here. I tried to base it on the formatting I found [here](https://docs.djangoproject.com/en/dev/howto/initial-data/). When I ran this through the loaddata function again, I got this error message:
```
C:\Python27\lib\site-packages\django-1.6.1-py2.7.egg\django\core\management\commands\loaddata.py:216: UserWarning: No fixture named 'fixtures/MyTable' found.
```
This is my first time doing this and I've had a bit of a hard time finding documentation to figure out what I'm doing wrong. Could anyone please offer advice? Thanks! | To create your fixture files start with an empty database and then add some data to your database using the djanog-admin or the django shell or even pure SQL. After that you can do a
```
python manage.py dumpdata # to dump all your data, or
python manage.py dumpdata app_name # to dump all data of a specific app, or
python manage.py dumpdata app_name.model_name # to dump all data of a specific model
```
The above will print data to your stdout. In order to write it to a file use a redirect (>), for instance
```
python manage.py dumpdata auth.User > user_fixture.json
```
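If hand-written JSON keeps tripping you up, you can generate the fixture programmatically and only then save it (a sketch; `mydatabase.mytable` is the assumed `app_label.modelname` pair):

```python
import json

# One serialized object per dict; Django expects "model", "pk" and "fields" keys.
fixture = [
    {"model": "mydatabase.mytable", "pk": 1, "fields": {"name": "Default"}},
]
text = json.dumps(fixture, indent=2)
# Save `text` as e.g. myapp/fixtures/mytable.json, then run:
#   python manage.py loaddata mytable
```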
*Update:* I just saw that you are using Windows -- remember that fixture paths there use a backslash (`\`). | In my case, the problem was the suffix of the output file.
```
python manage.py dumpdata -o my_dump
python manage.py loaddata < my_dump # it fails
# change the file name my_dump to my_dump.json
python manage.py loaddata < my_dump.json # it works
```
---
So, I guess the `dumpdata` implicitly use `json` as output format. While `loaddata` needs a format tip from file name suffix. | Django - cannot get loaddata to work | [
"",
"sql",
"json",
"django",
"default",
""
] |
In my assignment with Oracle 11g, I am asked to make a table with column has this structure:
```
[NL|TE|][0-9]^10
```
Where NL or TE is inputed when INSERT row and [0-9]^10 is an auto-increment 10 digits number.
Example:
```
NL1234567890 or TE0253627576
```
When INSERT, the user should only write this:
```
INSERT INTO TableA VALUES ('NL');
```
And the DBMS takes care of the rest. So how can I do that? I'm still a newbie at this. | ```
CREATE SEQUENCE your_seq;
/
CREATE OR REPLACE TRIGGER your_tablename_BI
BEFORE INSERT
ON your_tablename
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
:NEW.your_col := :NEW.your_col || trim(to_char(your_seq.nextval, '0000000000'));
END your_tablename_BI;
/
``` | Sample code?
```
'NL' || to_char(yoursequence.nextval)
``` | Oracle - How to make auto-increment column with varchar type? | [
"",
"sql",
"oracle",
"auto-increment",
""
] |
I want to write a sql statement:
```
Select * FROM table1
inner join table2 AS t2
ON
inner join table3 AS t3
ON
inner join table4 AS t4
ON
Where FK_Client_ID = 4
```
All tables have the client ID in common, so I'm not sure what to put in the `ON` clauses.
Will it be something like `ON t2.FK_Client_ID = ...`? I'm not sure.
So I just want all the data from those tables that has FK\_Client\_ID in common. | ```
Select *,
(Select FK_Client_ID from table2 where FK_Client_ID = t1.FK_Client_ID )As TID1,
(Select FK_Client_ID from table3 where FK_Client_ID = t1.FK_Client_ID )As TID2,
(Select FK_Client_ID from table4 where FK_Client_ID = t1.FK_Client_ID )As TID3
FROM table1 t1
``` | Try this
```
Select * FROM table1 t1
inner join table2 AS t2
ON t2.FK_Client_ID = t1.FK_Client_ID
inner join table3 AS t3
ON t3.FK_Client_ID = t1.FK_Client_ID
inner join table4 AS t4
ON t4.FK_Client_ID = t1.FK_Client_ID
Where t1.FK_Client_ID = 4
``` | SQL Join multiple tables using where clause | [
"",
"sql",
"sql-server",
""
] |
I have created two tables in SQL Server 2008 
as shown in diagram
I have added some data in product table
```
insert into tblproduct values ('I PHONE');
insert into tblproduct values ('SAMSUNG');
insert into tblproduct values ('SONY');
...etc
```
Now my question is: how do I write an SQL statement for inserting data into tblOrder
so that Pid will be automatically filled in tblOrder. | Well, to answer your question: SQL data is manipulated with the SQL language, while SQL Server is just a way to store structured data.
So when you are building an application and listing the products available to order, you know the ids of those products, and then you can do something like:
```
INSERT INTO tblOrder(OName, Pid)
VALUES ('name_choosen', [product_id_selected])
```
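As a sanity check, here is the lookup-insert pattern run end-to-end on an in-memory SQLite database (SQLite standing in for SQL Server; the data mirrors the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblProduct (Pid INTEGER PRIMARY KEY, PName TEXT)")
con.execute("CREATE TABLE tblOrder (OName TEXT, Pid INTEGER)")
con.executemany("INSERT INTO tblProduct (PName) VALUES (?)",
                [("I PHONE",), ("SAMSUNG",), ("SONY",)])

# Insert an order, resolving Pid from the product name in one statement.
con.execute("""
    INSERT INTO tblOrder (OName, Pid)
    SELECT 'first order', Pid FROM tblProduct WHERE PName = 'SONY'
""")
print(con.execute("SELECT OName, Pid FROM tblOrder").fetchall())
# [('first order', 3)]
```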
In your case, for SQL training, you do:
```
INSERT INTO tblOrder(OName, Pid)
SELECT 'name_of_order', Pid
FROM tblProduct
WHERE PName='IPhone'
```
Or
```
INSERT INTO tblOrder(OName, Pid)
SELECT 'name_of_order'
, (SELECT Pid from tblProduct WHERE PName='IPhone')
```
Or
```
INSERT INTO tblOrder(OName, Pid)
VALUES ('name_of_order'
, (SELECT Pid from tblProduct WHERE PName='IPhone'))
``` | Hope this helps
<http://www.codeproject.com/Articles/620086/CASCADE-in-SQL-Server-with-example>
Happy coding.
Try using the INSERT ON CASCADE. | Perform Insert operation when two tables are related by foreign key? | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I am trying to build a site, and I have a database that I am accessing with LINQ. The user is directed to the site, and initially the query uses the id provided in the link to query the database. Then there are some drop-down combo boxes that can have multiple items selected. On postback I need to use the checked items to query the database and return a new result based on the user's selections.
I have tried creating a string with all the text from each selection and then using something similar to the following bit of code.
```
q = from db.database
Where StringOfValues.Contains(db.items)
```
The problem is that it is not exact and returns extra items that the user would not want. So what would be the best way to build the LINQ query if I never know how many values the user will choose? | Just do a series of ifs.
```
var query = myBaseQuery();
if(!string.IsNullOrWhiteSpace(name))
{
query = query.Where(item => item.Name == name);
}
```
and so on. As long as your query is still of type IQueryable you can add wheres, group bys, order bys, and other LINQ methods. Just check what values the user provided and add them to the base query. | Since your query is going to be dynamic, maybe LINQ is not the best way to retrieve the data and good old ADO.NET is a better approach, where you can build a normal SQL query dynamically.
If you do need to use LINQ I suggest looking into [Dynamic LINQ](http://dynamiclinq.codeplex.com/) library - it looks like it has what you need. | Best way to build a linq query based on user selection in VB | [
"",
"sql",
"vb.net",
"linq",
"telerik-grid",
""
] |
I have a table named `"Orders"` with **1-1000** rows and 3 columns (`S.no`, `Order` and `Status`). I need to fetch `Order` from `50-1000` which has its `Status` as `"Cancelled"`. How can I do this in SQL Server? | *Logic operator*:
```
SELECT Order
FROM Orders
WHERE Status = 'Cancelled'
AND (S.no > 50 AND S.no < 1000)
```
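One detail worth noting: `>`/`<` excludes the endpoints, while SQL's `BETWEEN` includes them. A quick check on SQLite (the column is renamed to `sno`, since `S.no` is not a usable column name as-is):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (sno INTEGER, status TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(49, "Cancelled"), (50, "Cancelled"), (500, "Open"),
                 (500, "Cancelled"), (1000, "Cancelled")])

strict = con.execute("SELECT sno FROM orders WHERE status = 'Cancelled' "
                     "AND sno > 50 AND sno < 1000 ORDER BY sno").fetchall()
between = con.execute("SELECT sno FROM orders WHERE status = 'Cancelled' "
                      "AND sno BETWEEN 50 AND 1000 ORDER BY sno").fetchall()
print(strict)   # [(500,)]  (endpoints excluded)
print(between)  # [(50,), (500,), (1000,)]  (endpoints included)
```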
`BETWEEN`:
```
SELECT Order
FROM Orders
WHERE Status = 'Cancelled'
AND (S.no BETWEEN 50 and 1000)
``` | You can try something like this:
```
SELECT *
FROM Orders
WHERE (S.no BETWEEN 50 AND 1000) AND (Status = 'Cancelled')
```
Hope this helps | Fetch data from table using SQL | [
"",
"sql",
"sql-server",
"database",
""
] |
Is there a way to get a list of months back if a table has this structure:
```
Table
id | payment | date
1001 200.00 2013-10-11
1001 100.00 2013-11-02
1002 250.00 2014-01-23
1003 320.00 2014-02-02
1004 300.00 2014-03-04
1004 90.00 2014-03-05
1004 50.00 2014-04-21
1005 400.00 2014-04-21
```
I want to get a list back where the month is **unique** like so:
```
Months
2013-10-01
2013-11-01
2014-01-01
2014-02-01
2014-03-01
2014-04-01
```
Is that doable? Or would I have to have some other kind of query and play with it in PHP? | ```
SELECT DATE_FORMAT(date, '%Y-%m-01') as Months from Table GROUP BY Months;
```
This:
* Selects the date in the format you specified (i.e., as first day of month)
* Groups by month so that you get one record returned for each month | try this:
`select DATE_FORMAT(date, '%Y-%m-01') as Months from Table group by Months`
based off this question: [MySQL round date to the start of the week and month](https://stackoverflow.com/questions/11461420/mysql-round-date-to-the-start-of-the-week-and-month)
and this doc: <http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format> | MySQL query to return a list of months | [
"",
"mysql",
"sql",
""
] |
I'm using SQL Server and I have two tables
`player`:
```
player guildId
--------------------
a 1
b 2
c 2
d 2
e 1
f 1
g 1
```
`game`:
```
player gameId
--------------------
a 4
b 1
c 2
d 1
e 3
f 2
g 2
```
I want to create a view called view\_test,
The view's result :
```
select * from view_test where guildId = 2 and gameId = 2
```
it shows
```
player joined
----------------
b false
c true
d false
select * from view_test where guildId = 2 and gameId = 1
player joined
-----------------
b true
c false
d true
select * from view_test where guildId = 2 and gameId = 3
player joined
----------------
b false
c false
d false
select * from view_test where guildId = 1 and gameId = 4
player joined
----------------
a true
e false
f false
g false
```
How can I do this in SQL?
Thanks | You need to cross join players with all game IDs to get all possible combinations first. Then you look the combination up in the games table. So base the view on:
```
select player.player, player.guildid, gameids.gameid,
case when
(
select count(*)
from game
where game.player = player.player
and game.gameid = gameids.gameid
) > 0 then 'true' else 'false' end as joined
from player
cross join (select distinct gameid from game) gameids;
```
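Here is the same view logic verified on an in-memory SQLite database (a stand-in for SQL Server; `'true'`/`'false'` stay plain strings, as above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE player (player TEXT, guildId INTEGER);
CREATE TABLE game   (player TEXT, gameId INTEGER);
INSERT INTO player VALUES ('a',1),('b',2),('c',2),('d',2),('e',1),('f',1),('g',1);
INSERT INTO game   VALUES ('a',4),('b',1),('c',2),('d',1),('e',3),('f',2),('g',2);
CREATE VIEW view_test AS
  SELECT p.player, p.guildId, g.gameId,
         CASE WHEN (SELECT COUNT(*) FROM game
                    WHERE game.player = p.player
                      AND game.gameId = g.gameId) > 0
              THEN 'true' ELSE 'false' END AS joined
  FROM player p
  CROSS JOIN (SELECT DISTINCT gameId FROM game) g;
""")
rows = con.execute("SELECT player, joined FROM view_test "
                   "WHERE guildId = 2 AND gameId = 2 ORDER BY player").fetchall()
print(rows)  # [('b', 'false'), ('c', 'true'), ('d', 'false')]
```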
The statement
```
select * from view_test where guildId=2 and gameId=2
```
would result in
```
player guildId gameid joined
---------------------------------
b 2 2 false
c 2 2 true
d 2 2 false
``` | Try Like This
```
select id,case when count(*)=2 then 'true' else
'false' end from (
select id from player where guildId=2
union all
select id from game where gameId=2
) as tt group by id
``` | SQL : search from two tables | [
"",
"sql",
"sql-server",
""
] |
Table `FOO` has a column `FILEPATH` of type `VARCHAR(512)`. Its entries are absolute paths:
```
FILEPATH
------------------------------------------------------------
file://very/long/file/path/with/many/slashes/in/it/foo.xml
file://even/longer/file/path/with/more/slashes/in/it/baz.xml
file://something/completely/different/foo.xml
file://short/path/foobar.xml
```
There's ~50k records in this table and I want to know all distinct filenames, not the file paths:
```
foo.xml
baz.xml
foobar.xml
```
This looks easy, but I couldn't find a DB2 scalar function that allows me to search for the last occurrence of a character in a string. Am I overlooking something?
I could do this with a recursive query, but this appears to be overkill for such a simple task and (unsurprisingly) is extremely slow:
```
WITH PATHFRAGMENTS (POS, PATHFRAGMENT) AS (
SELECT
1,
FILEPATH
FROM FOO
UNION ALL
SELECT
POSITION('/', PATHFRAGMENT, OCTETS) AS POS,
SUBSTR(PATHFRAGMENT, POSITION('/', PATHFRAGMENT, OCTETS)+1) AS PATHFRAGMENT
FROM PATHFRAGMENTS
)
SELECT DISTINCT PATHFRAGMENT FROM PATHFRAGMENTS WHERE POS = 0
``` | I think what you're looking for is the [`LOCATE_IN_STRING()` scalar function](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0054098.html). This is what Info Center has to say if you use a negative start value:
> If the value of the integer is less than zero, the search begins at
> LENGTH(source-string) + start + 1 and continues for each position to
> the beginning of the string.
Combine that with the [`LENGTH()`](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000818.html) and [`RIGHT()`](https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000844.html) scalar functions, and you can get what you want:
```
SELECT
RIGHT(
FILEPATH
,LENGTH(FILEPATH) - LOCATE_IN_STRING(FILEPATH,'/',-1)
)
FROM FOO
``` | One way to do this is by taking advantage of the power of DB2's [XQuery](http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.xml.doc/doc/xqrcontainer.html) engine. The following worked for me (and fast):
```
SELECT DISTINCT XMLCAST(
XMLQuery('tokenize($P, ''/'')[last()]' PASSING FILEPATH AS "P")
AS VARCHAR(512) )
FROM FOO
```
Here I use [tokenize](http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.xml.doc/doc/xqrfntkz.html) to split the file path into a sequence of tokens and then select the last of these tokens. The rest is only conversion from SQL to XML types and back again. | Looking for a scalar function to find the last occurrence of a character in a string | [
"",
"sql",
"db2",
""
] |
I have two tables, with same schema -
```
create table test1 (
a INT NOT NULL ,
b INT NOT NULL ,
c INT,
PRIMARY KEY (a,b)
);
create table test2 (
a INT NOT NULL ,
b INT NOT NULL ,
c INT,
PRIMARY KEY (a,b)
);
```
I want to insert values from the test2 table into test1, but if a row with the same primary key already exists, update it. I know in MySQL you can do a similar thing with ON DUPLICATE KEY UPDATE, like:
```
INSERT INTO test1 VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=3;
```
But I don't know how to do the above query with a SELECT from another table. What I am looking for is a query of the form:
```
INSERT INTO test2
SELECT a, b, c FROM test1
ON DUPLICATE KEY UPDATE c = c + t.c
(Select a, b, c from tests1)t;
```
This query is obviously invalid. I would appreciate it if somebody could make a valid query out of it. | This should work for you:
```
INSERT INTO test2
SELECT a, b, c as c1 FROM test1
ON DUPLICATE KEY UPDATE c = c + c1
``` | You could do it with this SQL:
```
INSERT INTO test1 (a, b, c)
SELECT t.a as a, t.b as b, t.c AS c FROM test2 AS t
ON DUPLICATE KEY UPDATE c=t.c;
``` | INSERT or UPDATE from another table in mysql | [
"",
"mysql",
"sql",
""
] |
I'm in need of a better way of retrieving the top 10 distinct UIDs from some tables I have.
The setup:
* Table user\_view\_tracker
* Contains pairs of {user id (uid), timestamp (ts)}
* Is growing every day (today it's 41k entries)
My goal:
* To produce a top 10 of most viewed user id's in the table user\_view\_tracker
My current code is working, but killing the database slowly:
```
select
distinct uvt.uid as UID,
(select count(*) from user_view_tracker temp where temp.uid=uvt.uid and temp.ts>date_sub(now(),interval 1 month)) as CLICK
from user_view_tracker uvt
order by CLICK
limit 10
```
It's quite obvious that a different data structure would help, but I can't do that as of now. | First of all, delete that subquery; this should be enough ;)
```
select
uvt.uid as UID
,count(*) as CLICK
from
user_view_tracker uvt
where
uvt.ts > date_sub(now(),interval 1 month)
group by
uvt.uid
order by CLICK DESC
limit 10
``` | Try:
```
select uid, count(*) as num_stamps
from user_view_tracker
where ts > date_sub(now(), interval 1 month)
group by uid
order by 2 desc limit 10
```
I kept your criteria as far as getting the count for just the past month. You can remove that line if you want to count all.
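A minimal SQLite check of the group-and-count approach (made-up data; the month filter is left out, since it doesn't change the grouping logic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_view_tracker (uid INTEGER, ts TEXT)")
con.executemany("INSERT INTO user_view_tracker (uid) VALUES (?)",
                [(1,), (1,), (1,), (2,), (2,), (3,)])
top = con.execute("SELECT uid, COUNT(*) AS num_stamps FROM user_view_tracker "
                  "GROUP BY uid ORDER BY num_stamps DESC LIMIT 2").fetchall()
print(top)  # [(1, 3), (2, 2)]
```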
The removal of DISTINCT should improve performance. It is not necessary if you aggregate in your outer query and group by uid, as that will aggregate the data to one row per uid with the count. | Top-10 mysql query | [
"",
"mysql",
"sql",
"performance",
""
] |
I'm trying to reinstall MySQL on my computer (OS X Mavericks), and I've done the following, with the following errors:
```
bash <(curl -Ls http://git.io/eUx7rg)
```
Error:
```
Starting MySQL
... ERROR! The server quit without updating PID file (/usr/local/mysql/data/christians-mbp.saumag.edu.pid).
/usr/local/mysql/support-files/mysql.server: line 362: pidof: command not found
/dev/fd/63: line 119: SORRY, MySQL IS NOT RUNNING ... THERE MUST BE A PROBLEM: command not found
```
So I decided to uninstall it in case it was already there:
```
brew uninstall mysql
bash <(curl -Ls http://git.io/eUx7rg)
```
This says that mysql is currently still installed, so I did the following:
```
sudo rm /usr/local/mysql
sudo rm -rf /usr/local/mysql*
sudo rm -rf /Library/StartupItems/MySQLCOM
sudo rm -rf /Library/PreferencePanes/My*
edit /etc/hostconfig and remove the line MYSQLCOM=-YES-
sudo rm -rf /Library/Receipts/mysql*
sudo rm -rf /Library/Receipts/MySQL*
sudo rm -rf /var/db/receipts/com.mysql.*
bash<(curl -Ls http://git.io/eUx7rg)
```
This gave me the same error messages as before, so I decided to redo all the rm commands above, including the edit, and run the following commands:
```
ps -ax | grep mysql
brew cleanup
sudo rm ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
sudo rm -rf /Library/StartupItems/MySQLCOM
brew doctor
brew update
brew install mysql
unset TMPDIR
mysql_install_db --verbose --user=`yyttr3` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp
```
After the last command I got the following error message:
```
dyld: Library not loaded: @@HOMEBREW_PREFIX@@/opt/openssl/lib/libssl.1.0.0.dylib
Referenced from: /usr/local/opt/mysql/bin/my_print_defaults
Reason: image not found
FATAL ERROR: Neither host 'christians-mbp.saumag.edu' nor 'localhost' could be looked up with
/usr/local/opt/mysql/bin/resolveip
Please configure the 'hostname' command to return a correct
hostname.
If you want to solve this at a later stage, restart this script
with the --force option
```
I don't really know what the problem is; I'm not sure if I'm even installing it right. From what I have read, this should be a good way to install MySQL. If you have any ideas, I would greatly appreciate it. | I'm sure you've solved this by now, but I just had the same problem. For anyone else who stumbles across this:
For me, it was an issue with OpenSSL.
```
resolveip `hostname`
```
If it says OpenSSL isn't loaded,
```
brew uninstall openssl
brew install openssl
```
then try again. Hope this helps. | I had the same problem. Only solution that I found is to install mysql from source code. The reason for this error to occur and how to solve it is explained in full detail in the following link: <http://jaitechwriteups.blogspot.com.au/2010/08/mysql-installation-error-neither-host.html> | Trouble reinstalling mysql on OSX Mavericks | [
"",
"mysql",
"sql",
"database",
"macos",
"terminal",
""
] |
```
var convID = 2525;
var contactUsername = 'blabla';
var userId = 100;
var contactId = 200;
var sql = "INSERT INTO contacts (FK_OWNERID, FK_USERID, FC_CONTACTNAME, `CONVERSATION_ID`)
VALUES (" + mysql.escape(userId) + "," + mysql.escape(contactId) + "," + mysql.escape(contactUsername) + "," + convID + "),
(" + mysql.escape(contactId) + "," + mysql.escape(userId) + "," + mysql.escape(username) + "," + convID + ")";
```
Error: `ER_BAD_FIELD_ERROR: Unknown column '2525' in 'field list'`
Fields are varchar(32) for both CONVERSATION\_ID and FC\_CONTACTNAME and ints for the 2 others.
What's wrong with this query? I even put the backticks around `CONVERSATION_ID` and it thinks the `convID` variable is a column...
Edit:
If my query is just that, then it works:
```
var sql = "INSERT INTO contacts (`FK_OWNERID`, `FK_USERID`, `FC_CONTACTNAME`)
VALUES (" + mysql.escape(userId) + "," + mysql.escape(contactId) + "," + mysql.escape(contactUsername) + "),
(" + mysql.escape(contactId) + "," + mysql.escape(userId) + "," + mysql.escape(username) + ")";
``` | make
```
" + convID + "
```
to
```
'" + convID + "'
```
Most likely it's a string and the data type is varchar, so you need to enclose it within single quotes. | I had a similar problem myself; I also didn't have all of my column names specified correctly. Here is the code that finally worked.
```
var queryString = "INSERT INTO " + table + " (columnText, columnBool) VALUES ('" + val + "', FALSE);"
``` | MySQL insert error : ER_BAD_FIELD_ERROR: Unknown column '2525' in 'field list' | [
"",
"mysql",
"sql",
""
] |
I have the following tables:
Tender Master:
```
tenderid tendername
1 kjnj
2 hjbhb
3 abc
```
Bid Master:
```
bidid tenderid userid status
1 1 100 unsold
2 1 101 unsold
3 1 102 unsold
4 1 104 sold
5 2 100 unsold
6 2 101 sold
7 3 100 unsold
8 3 101 unsold
9 3 102 unsold
```
Now I want to display the names of only those tenders which are not sold, i.e. unsold.
I want output like this:
```
tendername
abc
```
Here only the name of the third tender will come, as it is not sold to anyone.
Here tender 1 is sold to userid 104, so it will not come, and tender 2 is sold to userid 101,
so it will also not come.
I have written a query like this:
```
select t.tendername
from tender_master t,bid master b
where t.tenderid==b.tenderid
and b.status=='unsold'
```
but it is giving me the names of all tenders.
Can anyone please provide the correct query? | Try this:
```
SELECT
tendername
FROM [Tender Master] TM
WHERE NOT EXISTS
(
SELECT 1 FROM [Bid Master] BM
WHERE TM.tenderid = BM.tenderid
AND BM.status = 'sold'
)
``` | Try it with a single `=` instead of `==`. | sql query to display only unsold tender | [
"",
"sql",
"sql-server",
""
] |
I am trying to update some fields based on their occurrence. If they only occur once, I am updating a status field.
My current code is as follows:
```
UPDATE table1
SET statusField = 1
WHERE someID = (
SELECT someID
FROM table1
GROUP BY someID HAVING COUNT(*) = 1
)
```
This returns an error like the one in the title: `Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.`
Is there another solution that is just as readable/simple? | Use the `IN` keyword instead of the equals operator, like so:
```
UPDATE table1
SET statusField = 1
WHERE someID IN (
SELECT someID
FROM table1
GROUP BY someID HAVING COUNT(*) = 1
)
```
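The `IN`-based update can be verified quickly on SQLite (table and column names mirror the question):

```python
import sqlite3

# Only someIDs that occur exactly once get statusField set to 1.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (someID INTEGER, statusField INTEGER DEFAULT 0)")
con.executemany("INSERT INTO table1 (someID) VALUES (?)",
                [(10,), (10,), (20,), (30,), (30,)])
con.execute("""
    UPDATE table1 SET statusField = 1
    WHERE someID IN (SELECT someID FROM table1
                     GROUP BY someID HAVING COUNT(*) = 1)
""")
print(con.execute("SELECT someID, statusField FROM table1 "
                  "ORDER BY someID").fetchall())
# [(10, 0), (10, 0), (20, 1), (30, 0), (30, 0)]
```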
Using `=` requires that exactly one result is returned by the subquery; the `IN` keyword works on a list. | You should join your tables in the subselect. It is possible to use `IN`, but in your case I would use `EXISTS`:
```
UPDATE table1 x
SET statusField = 1
WHERE exists (
SELECT null
FROM table1
WHERE x.someID = someID
GROUP BY someID
HAVING COUNT(*) = 1
)
```
For better performance I would use this script instead (sqlserver-2008+):
```
;WITH x as
(
SELECT rc = count(*) over (partition by someID), statusField
FROM table1
)
UPDATE x
SET statusField = 1
WHERE rc = 1
``` | SQL Server Subquery returned more than 1 value. This is not permitted when the subquery follows (chars) or when the subquery is used as an expression | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table in a SQL db that I want to remove the data from? I want to keep the columns though.
e.g. my table has 3 columns, Name, Age, Date. I don't want to remove these, i just want to remove the data.
Should I should Truncate, Delete or Drop? | Don't drop - it will delete the data and the definition.
If you delete - the data is gone and auto-increment values go on from the last value.
If you truncate - then it is like you just created the table: no data, and all counters reset. | **Truncate** is very fast - like a quick format of the table. It does not require any extra space when deleting. You cannot roll back this operation. You cannot specify conditions. This is the best choice for deleting all data from a table.
**Delete** is much slower, and you need extra space for this operation because you must be able to roll back the data. If you need to delete all data, use truncate. If you need to specify conditions, use delete.
**Drop** table - you can delete data from a table by dropping and creating it again, like you would with truncate, but it is slower and you can have other problems, like foreign key dependencies. I definitely don't recommend this operation. | SQL Truncate, Delete, Drop advise | [
"",
"sql",
"database",
"sql-delete",
"truncate",
""
] |
I'm designing a basic database for a bank. I wondered: what is the best way to add a constraint so that each manager of a bank is an employee of that bank?
Here is the data structure so far:
```
SQL> CREATE TABLE Employee (
2 BSB# CHAR(6), /* Bank BSB number */
3 E# CHAR(5) NOT NULL, /* Employee number */
4 Name VARCHAR2(30) NOT NULL, /* Employee name */
5 DOB Date, /* Date of birth */
6 Address VARCHAR2(50), /* Home address */
7 Sex CHAR, /* M-Male, F-Female */
8 Salary NUMBER(7,2), /* Salary */
9 Supervisor# CHAR(5), /* Supervisor number */
10 CONSTRAINT Employee_PK PRIMARY KEY(BSB#, E#),
11 CONSTRAINT Employee_FK1 FOREIGN KEY (BSB#, Supervisor#) REFERENCES Employee(BSB#, E#),
12 CONSTRAINT Employee_FK2 FOREIGN KEY (BSB#) REFERENCES Bank (BSB#)
13 );
CREATE TABLE Bank (
2 BSB# CHAR(6) NOT NULL, /* Bank BSB number */
3 Name VARCHAR2(40) NOT NULL, /* Bank name */
4 Address VARCHAR2(50) NOT NULL, /* Bank location */
5 Manager# CHAR(5) NOT NULL, /* Department manager number */
6 MSDate DATE, /* Manager start date */
7 Phone# VARCHAR2(15) NOT NULL, /* Contact phone number */
8 CONSTRAINT Bank_PK PRIMARY KEY(BSB#),
9 CONSTRAINT Bank_CK1 UNIQUE(Name)
10 );
``` | If you simply want to make sure that a manager is a valid employee, you could use a simple foreign key:
```
ALTER TABLE bank
ADD CONSTRAINT bank_manager_fk
FOREIGN KEY (manager#) REFERENCES employee(e#)
```
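To illustrate what the constraint buys you, here is a stripped-down SQLite version (simplified single-column keys, not the Oracle schema verbatim; note that SQLite needs foreign keys switched on explicitly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # opt-in on SQLite
con.execute("CREATE TABLE employee (e INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE bank (bsb INTEGER PRIMARY KEY, "
            "manager INTEGER REFERENCES employee(e))")
con.execute("INSERT INTO employee VALUES (7)")
con.execute("INSERT INTO bank VALUES (1, 7)")       # manager 7 exists: OK
try:
    con.execute("INSERT INTO bank VALUES (2, 99)")  # no employee 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # e.g. FOREIGN KEY constraint failed
```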
If you want to ensure that the bank manager is indeed an employee of **the same bank**, you'd need a fancier foreign key:
```
ALTER TABLE bank
ADD CONSTRAINT bank_manager_fk
FOREIGN KEY (manager#, bsb#) REFERENCES employee(e#, bsb#)
``` | You're basically looking for a foreign key. There's a chicken-and-egg problem here. Each employee refers to a bank via the `Employee.BSB#` number column, but each bank refers to an employee via `Bank.Manager#`.
I think you might do better with a separate table `BankManager` containing `BSB#` and `Manager#` columns, plus the `MSDate` value.
Anyway, in most DBMS, you can't create a table `Employee` with FK constraints to table `Bank` before table `Bank` exists, nor vice versa. So, with the two table design, you will probably have to create one of the tables without its FK constraints, and then add the FK constraints with ALTER TABLE.
With the three table design, you can create `Bank`, then `Employee`, then `BankManager`. | What is the constraint to force a column of a table to be a member of another table? | [
"",
"sql",
"oracle",
""
] |
I can't get my head around this, and wondered if anyone might be able to help me.
I have 2 Tables called `RES_DATA` and `INV_DATA`
`RES_DATA` Contains my Customer as below
```
CUSTOMER ID | NAME
1, Robert
2, John
3, Peter
```
`INV_DATA` Contains their INVOICES as Below
```
INVOICE ID | CUSTOMER ID | AMOUNT
100, 1, £49.95
200, 1, £105.95
300, 2, £400.00
400, 3, £150.00
500, 1, £25.00
```
I am trying to write a `SELECT` statement which will give me the results below.
```
CUSTOMER ID | NAME | TOTAL AMOUNT
1, Robert, £180.90
2, John, £400.00
3, Peter, £150.00
```
I think I need 2 INNER JOINs somehow to join the tables and SUM values of the INVOICES table grouped by the customer table, but I honestly think I am missing something. I can't even get close to the results I need. | This should work.
```
SELECT a.[CUSTOMER ID], a.[NAME], SUM(b.[AMOUNT]) AS [TOTAL AMOUNT]
FROM RES_DATA a INNER JOIN INV_DATA b
ON a.[CUSTOMER ID]=b.[CUSTOMER ID]
GROUP BY a.[CUSTOMER ID], a.[NAME]
```
I tested it with SQL Fiddle against SQL Server 2008: <http://sqlfiddle.com/#!3/1cad5/1>
Basically what's happening here is that, because of the join, you are getting the same row on the "left" (i.e. from the `RES_DATA` table) for every row on the "right" (i.e. the `INV_DATA` table) that has the same `[CUSTOMER ID]` value. When you group by just the columns on the left side, and then do a sum of just the `[AMOUNT]` column from the right side, it keeps the one row intact from the left side, and sums up the matching values from the right side. | Two ways to do it...
GROUP BY
```
SELECT RES.[CUSTOMER ID], RES.NAME, SUM(INV.AMOUNT) AS [TOTAL AMOUNT]
FROM RES_DATA RES
JOIN INV_DATA INV ON RES.[CUSTOMER ID] = INV.[CUSTOMER ID]
GROUP BY RES.[CUSTOMER ID], RES.NAME
```
OVER
```
SELECT RES.[CUSTOMER ID], RES.NAME,
SUM(INV.AMOUNT) OVER (PARTITION BY RES.[CUSTOMER ID]) AS [TOTAL AMOUNT]
FROM RES_DATA RES
JOIN INV_DATA INV ON RES.[CUSTOMER ID] = INV.[CUSTOMER ID]
``` | Trying to use INNER JOIN and GROUP BY SQL with SUM Function, Not Working | [
"",
"sql",
"select",
"join",
"group-by",
"inner-join",
""
] |
I would like to format numbers as HH:mm in XSL. The main problem is that the input number is like 810 (-> 8:10) or 1530 (-> 15:30).
How can i do it?
I tried to use this:
```
<xsl:choose>
<xsl:when test="string-length(IND)=4">
<!--<td><xsl:value-of select="IND"></xsl:value-of></td>-->
<td><xsl:value-of select="format-time(IND, 'HH:mm')"/></td>
</xsl:when> ...
```
But it throws an error:
XML-22018: (Error) Parse Error in format-time function. | How about this:
```
<xsl:variable name="hrLen" select="string-length(IND) - 2" />
<td><xsl:value-of select="concat(substring(IND, 1, $hrLen),
':',
substring(IND, $hrLen + 1))" />
``` | Try:
```
<xsl:value-of select="translate(format-number(IND div 100,'00.00'), '.', ':')"/>
``` | How can i do Time formatting in XSL | [
"",
"sql",
"xml",
"xslt",
""
] |
I write
`ORDER BY` column `ASC`
but my column is `VARCHAR`, so it sorts incorrectly. I want to sort by numeric value, but the datatype is varchar. The values look like this:
```
1.2.840.113619.2.55.3.163578213.42.1355218116.691.1
1.2.840.113619.2.55.3.163578213.42.1355218116.691.10
1.2.840.113619.2.55.3.163578213.42.1355218116.691.100
1.2.840.113619.2.55.3.163578213.42.1355218116.691.101
1.2.840.113619.2.55.3.163578213.42.1355218116.691.2
1.2.840.113619.2.55.3.163578213.42.1355218116.691.20
```
but I want them in this sequence:
```
1.2.840.113619.2.55.3.163578213.42.1355218116.691.1
1.2.840.113619.2.55.3.163578213.42.1355218116.691.2
1.2.840.113619.2.55.3.163578213.42.1355218116.691.10
1.2.840.113619.2.55.3.163578213.42.1355218116.691.20
1.2.840.113619.2.55.3.163578213.42.1355218116.691.100
1.2.840.113619.2.55.3.163578213.42.1355218116.691.101
```
I also have strings like this:
```
1.2.840.114257.0.10325113632288210457800001002296133400001
1.2.840.114257.0.10379710976288210457800000002296491200000
1.2.840.114257.0.10328923264288210457800000002296158400001
```
I also want to sort this ... | For your specific example, you can do:
```
ORDER BY length(col), col
```
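The same ordering can be reproduced in Python; note that the trick relies on the values sharing an equal-length prefix, which holds for this example:

```python
# Sort by (length, value): shorter strings first, then lexicographically,
# which yields the numeric order of the final component here.
vals = [
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.1",
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.10",
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.100",
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.101",
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.2",
    "1.2.840.113619.2.55.3.163578213.42.1355218116.691.20",
]
ordered = sorted(vals, key=lambda v: (len(v), v))
print([v.rsplit(".", 1)[-1] for v in ordered])
# ['1', '2', '10', '20', '100', '101']
```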
If you have other examples, you might need to "parse" the values using `substring_index()`. | I have tried the following and it is working fine.
First remove the `.` from the string so you have only numerics, then sort using the `ABS()` function so the value will not be truncated.
`SELECT yourcol AS v FROM test ORDER BY ABS(REPLACE(yourcol, '.', '')), yourcol;` | how to sort varchar numeric columns? | [
"",
"mysql",
"sql",
""
] |
I have a table Machine\_Mode\_Duration:

I need a query so that it will be displayed as follows:

Suggestions are appreciated! | You need a `GROUP BY`.
Assuming that you have exactly 3 modes and that in case of duplicate `(Machine_id, INTERNAL_MODES)` tuples it is okay to sum up their `INTERNAL_MODE_DURATION`:
```
SELECT
Machine_Id,
SUM(CASE WHEN INTERNAL_MODES = 1 THEN INTERNAL_MODE_DURATION ELSE 0 END) AS Mode_1,
SUM(CASE WHEN INTERNAL_MODES = 2 THEN INTERNAL_MODE_DURATION ELSE 0 END) AS Mode_2,
SUM(CASE WHEN INTERNAL_MODES = 3 THEN INTERNAL_MODE_DURATION ELSE 0 END) AS Mode_3
FROM t
GROUP BY
Machine_Id;
``` | you can use `PIVOT` like this
```
SELECT Machine_ID,
[1] as Mode_1, [2] as Mode_2, [3] as Mode_3 FROM
(SELECT Machine_ID, Internal_Mode_Duration , InternalModes
FROM Machine_Mode_Duration) AS SourceTable
PIVOT
(
Sum(Internal_Mode_Duration)
FOR InternalModes IN ([1], [2], [3])
) AS PivotTable;
```
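For reference, the conditional-aggregation form of the same pivot can be checked on SQLite, which has no `PIVOT` (the short column names and durations here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m (machine_id INTEGER, mode INTEGER, dur INTEGER)")
con.executemany("INSERT INTO m VALUES (?, ?, ?)",
                [(1, 1, 5), (1, 2, 7), (1, 3, 2), (2, 1, 4), (2, 3, 9)])
rows = con.execute("""
    SELECT machine_id,
           SUM(CASE WHEN mode = 1 THEN dur ELSE 0 END) AS mode_1,
           SUM(CASE WHEN mode = 2 THEN dur ELSE 0 END) AS mode_2,
           SUM(CASE WHEN mode = 3 THEN dur ELSE 0 END) AS mode_3
    FROM m GROUP BY machine_id ORDER BY machine_id
""").fetchall()
print(rows)  # [(1, 5, 7, 2), (2, 4, 0, 9)]
```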
You can also use the old-fashioned method via `CASE`, like in the days when there was no `PIVOT` command ;) | Query to change vertical to horizontal | [
"",
"sql",
"sql-server",
"database",
"pivot",
""
] |
I am wanting to look up data from one of my database tables based on a varchar stored in another table.
We have a table of `Manufacturers (M)` and a table of `Parameters (P),` and rather than having to have a parameter per Manufacturer (as there is a lot of crossover between 60% of them), we thought it would be cleaner to have parameters per manufacturer where required, and for the other ones just have a set of default parameters.
This means that I cannot store `ManufacturerID` in both tables and create a simple link; instead I need to link using the `ManufacturerName`.
So I need to know if it is possible to Link via `ManufacturerName`, and where there is no match look in the `Parameters` table (P) for Manufacturer 'Default'.
As part of the link, we are also including:
> P.CategoryID = M.CategoryID AND (P.PriceFrom <= M.BasePrice AND
> P.PriceTo >= M.BasePrice)
Here is the data structures:
> **Manufacturers (M):**
>
> ManufacturerID, ManufacturerName, CategoryID, BasePrice
>
> **Parameters (P)**
>
> CategoryID, ManufacturerName, PriceFrom, PriceTo, Percentage
Here is some sample data:
**Manufacturers (M):**
```
ManufacturerID | ManufacturerName | CategoryID | BasePrice
3 | Apple | 1 | 150.00
3 | Apple | 9 | 99.99
10 | HTC | 1 | 50.00
15 | Nokia | 1 | 25.00
19 | Samsung | 1 | 80.00
```
**Parameters (P):**
```
CategoryID | ManufacturerName | PriceFrom | PriceTo | Percentage |
1 | Samsung | 0.00 | 99.99 | 50% |
1 | Apple | 0.00 | 99.99 | 55% |
1 | Apple | 100.00 | 149.99 | 45% |
9 | Apple | 0.00 | 99.99 | 65% |
1 | Default | 0.00 | 99.99 | 60% |
```
So we still need to just return 1 result in each link.
Any suggestions of guidance much appreciated.
Thanks in advance. | As per the conditions you have posted a `Left Join` should serve the purpose:
```
select M.ManufacturerID, P.ManufacturerName, M.CategoryID
from Manufacturers M
left join Parameters P On P.CategoryID = M.CategoryID AND (P.PriceFrom <= M.BasePrice AND P.PriceTo >= M.BasePrice)
```
Please post your table structure here if I have missed something: [SQL Fiddle](http://sqlfiddle.com/#!3/4c8e0/2) | Without a table structure to work with it is difficult, but something along the lines of
```
SELECT m.name Manufacturer, p.name, p.value, p.Manufacturer alias
FROM Manufacturers m
INNER JOIN Parameter p ON (m.name = p.Manufacturer)
UNION ALL
SELECT m.name Manufacturer, p.name, p.value, p.Manufacturer alias
FROM Manufacturers m
LEFT JOIN Parameter p on p.Manufacturer = 'default'
WHERE m.name + p.name not in (SELECT m.name + p.name
FROM Manufacturers m
INNER JOIN Parameter p
ON (m.name = p.Manufacturer)
)
ORDER BY 1, 2
```
can get what you need. The first part gets you the linked rows; the second part gets the default value only for the parameter names that don't already have a match.
This is a [SQLFiddle](http://www.sqlfiddle.com/#!3/6fe6f/6) with some data and the query | SQL Server - Select data from one table based on a string value | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This is the code I've got so far:
```
SELECT users_ID,Problem_ID
FROM 'submission'
WHERE Status = "AC"
GROUP BY users_ID,Problem_ID
```
I am getting these results
```
+----------+------------+
| Users_ID | Problem_ID |
+----------+------------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 2 | 1 |
| 2 | 3 |
+----------+------------+
```
## I only want to get
```
+----------+------------+
| Users_ID | Problem_ID |
+----------+------------+
| 1 | 3 | -- so because there are 3 results for user_ID 1
| 2 | 2 | -- and there are 2 results for user_ID 2
+----------+------------+
```
So the `Problem_ID` column should hold the number of rows my query returns for each user.
But how do I accomplish this?
## Edit:
I forgot to mention that the table contains duplicates of the same problem. For example,
I have a `Problem` with an `ID` of `1`, and the database could contain two rows for the same user with status `"AC"`, but I want to count only one of them.
```
SELECT users_ID, COUNT(DISTINCT Problem_ID)
FROM `submission`
WHERE Status = 'AC'
GROUP BY users_ID
``` | ```
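For anyone wanting to verify the accepted query locally, here is a minimal in-memory SQLite sketch; the table contents are invented for illustration:

```python
import sqlite3

# Hypothetical contents for the `submission` table from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE submission (users_ID INTEGER, Problem_ID INTEGER, Status TEXT);
INSERT INTO submission VALUES
  (1, 1, 'AC'), (1, 2, 'AC'), (1, 3, 'AC'),
  (1, 3, 'AC'),                -- duplicate accepted solution for the same problem
  (2, 1, 'AC'), (2, 3, 'AC'),
  (2, 2, 'WA');                -- non-accepted rows are filtered out
""")

# COUNT(DISTINCT ...) collapses the duplicate (1, 3, 'AC') row.
rows = conn.execute("""
    SELECT users_ID, COUNT(DISTINCT Problem_ID)
    FROM submission
    WHERE Status = 'AC'
    GROUP BY users_ID
    ORDER BY users_ID
""").fetchall()
print(rows)  # [(1, 3), (2, 2)]
```

Without `DISTINCT`, user 1 would be counted as 4 because of the duplicate accepted row.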
SELECT users_ID, count(Problem_ID) as `problem_count`
FROM `submission`
WHERE Status = 'AC'
GROUP BY users_ID;
``` | SQL COUNT on GROUP BY | [
"",
"mysql",
"sql",
""
] |
I've got a query in which I want to exclude a specific value (in this example "Mat")
The query works perfect without the sub-query in it:
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name NOT IN (
SELECT *
FROM test
WHERE test.NAME = "Mat"
)
group by SAME , name
order by same desc
```
Example here:
<http://www.sqlfiddle.com/#!2/1f1fb/28> | you can do this, if you want to match exact name with **Mat**:
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name != "Mat"
group by SAME , name
order by same desc
```
if you want to exclude all that contains **Mat** then:
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name NOT LIKE '%Mat%'
group by SAME , name
order by same desc
```
if you want to exclude that starts with **Mat** then:
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name NOT LIKE 'Mat%'
group by SAME , name
order by same desc
``` | You can use the `NOT LIKE` [statement](http://www.w3schools.com/sql/sql_like.asp):
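To make the difference between the three filters concrete, here is a small SQLite sketch; the sample rows are invented, and note that SQLite's `LIKE` is case-insensitive for ASCII, matching MySQL's default behaviour:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (SAME INTEGER, Name TEXT);
INSERT INTO test VALUES (555, 'Mat'), (555, 'Matthew'), (555, 'John');
""")

# Exact-match exclusion: only the row whose Name is exactly 'Mat' is dropped.
exact = [r[0] for r in conn.execute(
    "SELECT Name FROM test WHERE SAME = 555 AND Name != 'Mat' ORDER BY Name")]

# Substring exclusion: anything containing 'Mat' is dropped as well.
contains = [r[0] for r in conn.execute(
    "SELECT Name FROM test WHERE SAME = 555 AND Name NOT LIKE '%Mat%' ORDER BY Name")]

print(exact)     # ['John', 'Matthew']
print(contains)  # ['John']
```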
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name not like 'Mat'
group by SAME , name
order by same desc
```
[**SQLFiddle**](http://www.sqlfiddle.com/#!2/1f1fb/33)
Or the `<>` or `!=` [operator](http://www.w3schools.com/sql/sql_where.asp):
```
SELECT test.SAME , test.Nationality, test.Name, coalesce(test2.Job,'')jobb
FROM test
LEFT JOIN test2
ON test.noID=test2.noID
WHERE test.SAME = 555 and test.Name <> 'Mat'
group by SAME , name
order by same desc
```
[**SQLFiddle**](http://www.sqlfiddle.com/#!2/1f1fb/37) | SQL Query Excluding value | [
"",
"sql",
""
] |
I have 2 tables. (hotels, locations).
I want to retrieve all the locations with a count of number of Hotels in that location.
Right now I have the following query:
```
SELECT locations.*,
COUNT(*) as no_of_hotels
FROM locations RIGHT JOIN hotels ON hotels.location_id = locations.id
GROUP BY locations.id
ORDER BY no_of_hotels DESC
```
The query works fine, but the problem is, It gives me only the locations with 1 or more hotels. I want to show all the locations (even though the location has no hotels). | You should change to left join:
```
SELECT locations.*,
COUNT(distinct hotels.id) as no_of_hotels
FROM locations
LEFT JOIN hotels ON hotels.location_id = locations.id
GROUP BY locations.id
ORDER BY no_of_hotels DESC
```
Alternatively, this is probably easier to understand:
```
select
locations.*,
(select count(*) from hotels where hotels.location_id = locations.id) as no_of_hotels
from
locations
order by no_of_hotels desc
``` | ```
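The key point — locations with no hotels still appearing, with a count of zero — can be checked with an SQLite sketch (table data invented here); counting a column from the joined table, rather than `COUNT(*)`, is what makes hotel-less locations come out as 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE hotels (id INTEGER PRIMARY KEY, location_id INTEGER);
INSERT INTO locations VALUES (1, 'Paris'), (2, 'Oslo'), (3, 'Lima');
INSERT INTO hotels VALUES (10, 1), (11, 1), (12, 2);   -- Lima has no hotels
""")

# LEFT JOIN keeps every location; COUNT(hotels.id) ignores the NULLs
# produced for locations with no matching hotel.
rows = conn.execute("""
    SELECT locations.name, COUNT(hotels.id) AS no_of_hotels
    FROM locations
    LEFT JOIN hotels ON hotels.location_id = locations.id
    GROUP BY locations.id
    ORDER BY no_of_hotels DESC
""").fetchall()
print(rows)  # [('Paris', 2), ('Oslo', 1), ('Lima', 0)]
```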
SELECT locations.*, (SELECT count(*) FROM hotels WHERE hotels.location_id = locations.id) AS no_of_hotels FROM locations ORDER BY no_of_hotels DESC
``` | MySQL Join Query GROUP BY | [
"",
"mysql",
"sql",
""
] |
Good Day!
Would there be a simple but effective SQL syntax to summarize data from within a table based on Primary Key?
For example, I have the table SampleTable1 on database SampleDB:
```
|PermaID |Name_of_Person | Cash |
|111 |Michael | 200 |
|111 |Mikel | 100 |
|222 |Kid | 200 |
|222 |Kidd | 200 |
|222 |Kidd | 200 |
|222 |Kid | 200 |
|333 |Him | 100 |
|333 |Him | 200 |
```
Is there an SQL statement that would output the summary based on **PermaID**, and not on the Name?
```
|PermaID |Name_of_Person | Cash |
|111 |Michael | 300 |
|222 |Kid | 800 |
|333 |Him | 300 |
```
Assume that the correct name is the first name SQL detects, and that there are actually more than three columns in my actual table.
If there isn't, should I output the raw rows in my UI and then summarize them myself by looping over arrays? | If you need the first record's name to be selected
```
;WITH Cte AS (
    SELECT PermaID, Name_of_Person, Cash,
           C = ROW_NUMBER() OVER (PARTITION BY PermaID ORDER BY PermaID)
    FROM SampleTable1
)
SELECT Cte.PermaID, Cte.Name_of_Person, SUM(T.Cash)
FROM Cte
INNER JOIN SampleTable1 T ON T.PermaID = Cte.PermaID
WHERE C = 1
GROUP BY Cte.PermaID, Cte.Name_of_Person
```
Fiddle sample [here](http://sqlfiddle.com/#!6/864e4/1) | This should give you the result you are looking for
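The same idea — pick one representative name per `PermaID`, then sum all of that ID's cash — can be sketched in SQLite (3.25+ for window functions). One assumption here: the tie between names is broken deterministically by insertion order (`rowid`), whereas the original `ROW_NUMBER` ordering leaves that to the engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SampleTable1 (PermaID INTEGER, Name_of_Person TEXT, Cash INTEGER);
INSERT INTO SampleTable1 VALUES
  (111, 'Michael', 200), (111, 'Mikel', 100),
  (222, 'Kid', 200), (222, 'Kidd', 200), (222, 'Kidd', 200), (222, 'Kid', 200),
  (333, 'Him', 100), (333, 'Him', 200);
""")

# Row 1 of each partition supplies the name; the join + SUM supplies the total.
rows = conn.execute("""
    WITH Cte AS (
        SELECT PermaID, Name_of_Person,
               ROW_NUMBER() OVER (PARTITION BY PermaID ORDER BY rowid) AS C
        FROM SampleTable1
    )
    SELECT Cte.PermaID, Cte.Name_of_Person, SUM(T.Cash)
    FROM Cte
    JOIN SampleTable1 T ON T.PermaID = Cte.PermaID
    WHERE Cte.C = 1
    GROUP BY Cte.PermaID, Cte.Name_of_Person
    ORDER BY Cte.PermaID
""").fetchall()
print(rows)  # [(111, 'Michael', 300), (222, 'Kid', 800), (333, 'Him', 300)]
```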
```
SELECT PermaID,MIN(Name_of_Person),SUM(Cash)
FROM SampleTable1
GROUP BY PermaID
``` | Summary from within the Table having same key in Database | [
"",
"sql",
"sql-server",
""
] |
I have the following table which lists the employees and their corresponding managers:
```
id | employeeid | managerid
1 | 34256 | 12789
2 | 21222 | 34256
3 | 12435 | 34256
.....
.....
```
What is the recommended way to list all distinct employee IDs in a single list?
Note that not all managers may be listed in the employeeid column (as a manager may not have a manager in turn). | If I understand this correctly:
This will unite all distinct employee IDs, avoiding duplicates between the two columns (`UNION` removes them):
```
SELECT employeeid AS Employee
FROM tableA
UNION
SELECT managerid AS Employee
FROM tableA
``` | This should d it :
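A quick SQLite sketch using the question's sample data shows why plain `UNION` (not `UNION ALL`) is the right tool here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (id INTEGER, employeeid INTEGER, managerid INTEGER);
INSERT INTO tableA VALUES (1, 34256, 12789), (2, 21222, 34256), (3, 12435, 34256);
""")

# UNION (unlike UNION ALL) de-duplicates across both columns, so the top
# manager 12789 appears even though it never occurs as an employeeid.
ids = [r[0] for r in conn.execute("""
    SELECT employeeid FROM tableA
    UNION
    SELECT managerid FROM tableA
    ORDER BY 1
""")]
print(ids)  # [12435, 12789, 21222, 34256]
```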
```
SELECT DISTINCT employeeid FROM yourtablename
```
But seriously, by googling the keyword "distinct" you could have found this out very easily yourself! Or did I miss something?
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I've got a MySql function called Base52Encode, which takes in a bigint and returns a varchar(12).
I've got a table called Things that has a bigint auto\_increment for the primary key, called ThingId, and a second column of type varchar(12), called ShortCode.
I want to set the value of ShortCode to the base-52 encoded value of the ThingId primary key, which gets created via the auto\_increment.
```
DELIMITER $$
CREATE TRIGGER BeforeInsertThings
BEFORE INSERT ON Things
FOR EACH ROW
BEGIN
set new.ShortCode = Base52Encode(new.ThingId);
END $$
```
But whenever I insert a row, the ShortCode value always gets set to 0 (zero), which means the value being passed into Base52Encode is also a zero.
I'm assuming the issue is that the auto\_increment value hasn't kicked in yet when this trigger runs.
So how exactly can I solve this issue? | As explained in [this similar question on SO](https://stackoverflow.com/questions/18400436/mysql-trigger-before-insert-update-column-with-new-auto-increment-id), what I want to do is impossible within the context of triggers in MySQL, and the only solution is to do what I'm already doing to work around this issue: handle the extra logic that needs to run directly after my existing INSERT code in my stored procedure. Not ideal, but it is what it is. | Try this
```
DELIMITER $$
CREATE TRIGGER BeforeInsertThings
BEFORE INSERT ON Things
FOR EACH ROW
BEGIN
Select ThingId Into largestId From Things Order By ThingId Desc Limit 1;
Set newThingId = largestId + 1;
set new.ShortCode = Base52Encode(newThingId);
END $$
``` | Why is this BEFORE INSERT trigger not working in MySQL? | [
"",
"mysql",
"sql",
"triggers",
""
] |
Can someone give me a select statement that would retrieve only the rows with the highest (max) YEAR for each user?
```
Year User_ID Name City
===== ======= ===== =====
2001 1 Bob Mpls
2002 1 Bob Mpls
2003 1 Bob St Paul
2005 2 Mary New York
2010 2 Mary L.A.
```
...so the result set I would want is:
```
Year User_ID Name City
===== ======= ==== =====
2003 1 Bob St Paul
2010 2 Mary L.A.
``` | Since you did not mention any RDBMS, this query will work on almost all RDBMS.
```
SELECT a.*
FROM TableName a
INNER JOIN
(
SELECT Name, MAX(Year) Year
FROM TableName
GROUP BY Name
) b ON a.Name = b.Name
AND a.Year = b.Year
```
However, if your RDBMS supports window functions, you can use `ROW_NUMBER()`
```
SELECT Year, User_ID, Name
FROM
(
SELECT Year, User_ID, Name,
           ROW_NUMBER() OVER (PARTITION BY Name
                              ORDER BY Year DESC) rn
FROM TableName
) x
WHERE x.rn = 1
``` | ```
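Here is a runnable sketch of the window-function variant against the question's sample data, using SQLite (3.25+ required for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableName (Year INTEGER, User_ID INTEGER, Name TEXT, City TEXT);
INSERT INTO TableName VALUES
  (2001, 1, 'Bob', 'Mpls'), (2002, 1, 'Bob', 'Mpls'), (2003, 1, 'Bob', 'St Paul'),
  (2005, 2, 'Mary', 'New York'), (2010, 2, 'Mary', 'L.A.');
""")

# Row number 1 within each name is the row with the highest year.
rows = conn.execute("""
    SELECT Year, User_ID, Name, City
    FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Year DESC) AS rn
        FROM TableName
    )
    WHERE rn = 1
    ORDER BY User_ID
""").fetchall()
print(rows)  # [(2003, 1, 'Bob', 'St Paul'), (2010, 2, 'Mary', 'L.A.')]
```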
SELECT *
FROM Table a
WHERE a.Year = (
SELECT TOP 1 Year
FROM Table b
WHERE a.name = b.name
ORDER BY year DESC
)
```
Not the best, I know, but simple.
"",
"sql",
"sql-server",
"group-by",
""
] |
Hello, I have a problem while deleting rows.
I want to keep two rows in the database and delete all the other rows.
I found a query at this link [for deleting rows](https://stackoverflow.com/questions/13154604/delete-from-database-all-selected-rows-except-first-two),
but it uses MySQL's `LIMIT` syntax, and Oracle has no `LIMIT` clause.
How can I do the same in Oracle?
```
DELETE FROM Table
WHERE DateColumn NOT IN (SELECT DateColumn
FROM Table
GROUP BY date(DateColumn)
ORDER BY DateColumn DESC
LIMIT 2);
``` | This cab be achieved through this query
```
DELETE FROM TABLE
WHERE ROWID NOT IN
(SELECT ROWIDS FROM (SELECT MIN(ROWID) ROWIDS
FROM TABLE
GROUP BY DATECOLUMN
ORDER BY DATECOLUMN DESC)
WHERE ROWNUM <= 2);
```
The innermost query returns the minimum ROWID per date from the table, and the immediate outer query then selects only the two rows that the outermost DELETE will keep.
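The keep-N-rows pattern itself is easy to try outside Oracle. In this sketch SQLite's hidden `rowid` plays the role of Oracle's `ROWID` (the dates are invented, and SQLite's `LIMIT`-in-subquery replaces the ROWNUM handling):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (DateColumn TEXT);
INSERT INTO t VALUES ('2014-01-01'), ('2014-01-02'), ('2014-01-03'), ('2014-01-04');
""")

# Delete everything except the two most recent rows.
conn.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT rowid FROM t ORDER BY DateColumn DESC LIMIT 2)
""")
kept = [r[0] for r in conn.execute("SELECT DateColumn FROM t ORDER BY DateColumn")]
print(kept)  # ['2014-01-03', '2014-01-04']
```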
```
delete from table WHERE rowid NOT IN
(SELECT rowid FROM (SELECT rowid
FROM Table
ORDER BY DateColumn DESC)
WHERE rownum <= 2);
```
Notice the "double" select is needed because a ROWNUM filter in the same query block interferes with the **order by**: ROWNUM is assigned before the ordering is applied, so the ordering must happen in an inner query.
If the dates don't matter and you just want to keep two rows (random) it can be simplified to
```
delete from table WHERE rowid NOT IN
(SELECT rowid FROM table WHERE rownum <= 2);
``` | Deleting all rows from database except first two rows in oracle | [
"",
"sql",
"oracle",
""
] |