| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm passing a where-clause value plus a quantity and a role (to increase or decrease the quantity).
My stored procedure is:
```
ALTER PROCEDURE [dbo].[spUpdateQuantity]
(
@WhereClause AS NVARCHAR,
@Qty AS INT,
@Role AS INT
)
AS
BEGIN
DECLARE @CurrentQty AS INT;
SELECT @CurrentQty = Quantity
FROM ProductMaster
WHERE SKUCode = @WhereClause;
IF(@Role = 1)
BEGIN
UPDATE ProductMaster
SET Quantity = (@CurrentQty + @Qty)
WHERE SKUCode = @WhereClause;
END
IF(@Role = 2)
BEGIN
UPDATE ProductMaster
SET Quantity = (@CurrentQty - @Qty)
WHERE SKUCode = @WhereClause;
END
END
```
Current status is
```
| ProductId  | SKUCode  | Quantity |
|------------|----------|----------|
| GA10000005 | GA.42205 | 5        |
```
I'm updating the stock with this query:
```
DECLARE @return_value int
EXEC @return_value = [dbo].[spUpdateQuantity]
@WhereClause = N'GA.42205',
@Qty = 3,
@Role = 1
SELECT 'Return Value' = @return_value
GO
```
**I got 0 as the result** and the row is unchanged.
Does anyone have an idea how to update the stock quantity this way?
|
You should add an `OUTPUT` parameter to the stored procedure if you want to get a value back; the return value is meant for reporting the execution status of the procedure:
```
ALTER PROCEDURE [dbo].[spUpdateQuantity]
(
@WhereClause AS NVARCHAR(100),
@Qty AS INT,
@Role AS INT,
@NewQty INT OUTPUT
)
AS
BEGIN
DECLARE @CurrentQty AS INT;
SELECT @CurrentQty = Quantity FROM ProductMaster WHERE SKUCode = @WhereClause;
IF(@Role = 1)
BEGIN
SET @NewQty = @CurrentQty + @Qty
UPDATE ProductMaster
SET Quantity = @NewQty
WHERE SKUCode = @WhereClause;
END
IF(@Role = 2)
BEGIN
SET @NewQty = @CurrentQty - @Qty
UPDATE ProductMaster
SET Quantity = @NewQty
WHERE SKUCode = @WhereClause;
END
END
```
And call it like this:
```
DECLARE @return_value int, @NewQty INT
EXEC @return_value = [dbo].[spUpdateQuantity]
@WhereClause = N'GA.42205',
@Qty = 3,
@Role = 1,
@NewQty = @NewQty OUTPUT
SELECT @NewQty AS 'Return Value'
```
This addresses the 0 result. The second part of the question is straightforward: change the parameter type to `@WhereClause AS NVARCHAR(100)`. Otherwise it is treated as `@WhereClause AS NVARCHAR(1)` and your passed value is truncated to `N'G'`.
<https://msdn.microsoft.com/en-us/library/ms186939.aspx>
> nvarchar [ ( n | max ) ]
> When n is not specified in a data definition or variable declaration
> statement, the default length is 1. When n is not specified when using
> the CAST and CONVERT functions, the default length is 30.
|
This should work:
```
ALTER PROCEDURE [dbo].[spUpdateQuantity]
(
@WhereClause AS NVARCHAR,
@Qty AS INT,
@Role AS INT
)
AS
BEGIN
UPDATE ProductMaster
SET Quantity = Quantity + case when @Role = 1 then @Qty else (@Qty*-1) end
WHERE SKUCode = @WhereClause;
END
```
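The single-statement approach above can be sanity-checked outside SQL Server. Below is a minimal sketch using Python's `sqlite3` with hypothetical data (note SQLite ignores declared column lengths, so it will not reproduce the `NVARCHAR` truncation pitfall):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ProductMaster (ProductId TEXT, SKUCode TEXT, Quantity INTEGER)")
cur.execute("INSERT INTO ProductMaster VALUES ('GA10000005', 'GA.42205', 5)")

def update_quantity(sku, qty, role):
    # Role 1 adds stock, role 2 subtracts it, mirroring the stored procedure.
    cur.execute(
        "UPDATE ProductMaster SET Quantity = Quantity + "
        "CASE WHEN ? = 1 THEN ? ELSE -? END WHERE SKUCode = ?",
        (role, qty, qty, sku),
    )

update_quantity("GA.42205", 3, 1)   # 5 + 3
after_add = cur.execute("SELECT Quantity FROM ProductMaster").fetchone()[0]
update_quantity("GA.42205", 3, 2)   # 8 - 3
after_sub = cur.execute("SELECT Quantity FROM ProductMaster").fetchone()[0]
```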
|
GET AND UPDATE Quantity in SQL using stored procedure
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
I'm attempting to create a report that presents filtered information depending on the user ID. Lecturers can log in to the system and see all of the modules they lead; the login ID is the same as the lecturer code used in the module table.
Here is my SQL query.
```
SELECT m.module_code, m.module_name, m.lecturer_code,
m.module_number_assessments, m.module_moderator FROM module m, lecturer l
WHERE m.lecturer_code = [Select l.lecturer_code from lecturer l where
l.lecturer_code = UserInfo.getUserId()];
```
I receive the following error "Query cannot be parsed, please check the syntax of your query. (ORA-00936: missing expression)" when attempting to use this as the source for the report.
I am fairly new to APEX. Any ideas why this code is invalid?
|
In Apex you retrieve the user by using `:APP_USER`. This bind variable contains the value the user used to log in to the application (i.e. the value from `:P101_USERNAME` on the login page).
If you need an actual ID or any other related info pertaining to the user you should create some application items to store that info. In the post-authentication procedure of the authentication scheme you can then retrieve the values into the application items, based on the used login name.
|
If I understand your intention, the proper SQL would be:
```
SELECT m.module_code, m.module_name, m.lecturer_code,
m.module_number_assessments, m.module_moderator
FROM module m JOIN
lecturer l
ON m.lecturer_code = l.lecturer_code
WHERE l.lecturer_code = UserInfo.getUserId();
```
`UserInfo.getUserId()` is not a standard part of Oracle (as far as I know). You can use `USER` or a similar function to get the user id inside the database. A general function for getting system information is `SYS_CONTEXT()`, which is documented [here](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions184.htm#SQLRF06117). You probably want "SYSTEM\_USER": `SYS_CONTEXT('USERENV', 'SESSION_USER')`.
In your case, you might just want `EXISTS`:
```
SELECT m.*
FROM module m
WHERE EXISTS (SELECT 1
FROM lecture l
WHERE m.lecturer_code = l.lecturer_code AND
l.lecturer_code = UserInfo.getUserId()
);
```
Neither of these address the issue that `UserInfo.getUserId()` is not understood by Oracle.
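As a rough sanity check of the `EXISTS` pattern, here is a sketch in Python's `sqlite3`, with a bound parameter standing in for `UserInfo.getUserId()` / `:APP_USER` and made-up table contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE lecturer (lecturer_code TEXT);
CREATE TABLE module (module_code TEXT, lecturer_code TEXT);
INSERT INTO lecturer VALUES ('L1'), ('L2');
INSERT INTO module VALUES ('M101', 'L1'), ('M102', 'L2'), ('M103', 'L1');
""")

current_user = "L1"  # stands in for UserInfo.getUserId() / :APP_USER
rows = cur.execute("""
    SELECT m.module_code
    FROM module m
    WHERE EXISTS (SELECT 1 FROM lecturer l
                  WHERE m.lecturer_code = l.lecturer_code
                    AND l.lecturer_code = ?)
    ORDER BY m.module_code
""", (current_user,)).fetchall()
```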
|
APEX - Compare value against current user ID
|
[
"",
"sql",
"database",
"oracle",
"oracle-apex",
""
] |
I'm trying to delete duplicate rows according to some fields.
When I'm running the query below:
```
delete
from slowmo_vid as sv1, slowmo_vid as sv2
where sv1.video_id = '2luh6g3ni5ex'
and sv1.slowmo_end_t<=sv2.slowmo_end_t;
```
I'm getting the error:
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'as sv1, slowmo_vid as sv2
where sv1.video_id = '2luh6g3ni5ex'
and sv1.slowmo_end' at line 2
```
The fields of the table are: id, video\_id, internal\_uri, slowmo\_end\_t
|
You seem to be trying to do an ANSI-92 style inner join inside a `DELETE` statement. But the `WHERE` clause cannot simultaneously be used to enforce the join *and* enforce a restriction on a result set. Instead, do the following explicit `INNER JOIN` to remove the records you want. Notice that it is clear what role the `WHERE` clause is playing.
Update: If you want to delete all records *except* for the one containing the max `video_id` then you can add a nested subquery to the `WHERE` clause.
```
DELETE sv1.*
FROM slowmo_vid sv1
INNER JOIN slowmo_vid sv2 ON sv1.slowmo_end_t <= sv2.slowmo_end_t
WHERE sv1.video_id = '2luh6g3ni5ex' AND
sv1.video_id <> (SELECT x.id
FROM (SELECT MAX(t.video_id) AS id
FROM slowmo_vid t) x)
```
|
> You can specify multiple tables in a DELETE statement to delete rows
> from one or more tables depending on the particular condition in the
> WHERE clause. However, you cannot use ORDER BY or LIMIT in a
> multiple-table DELETE. The table\_references clause lists the tables
> involved in the join. Its syntax is described in Section 12.2.8.1,
> “JOIN Syntax”.
<http://dev.mysql.com/doc/refman/5.6/en/delete.html>
The example in the manual is:
```
DELETE t1, t2 FROM t1 INNER JOIN t2 INNER JOIN t3
WHERE t1.id=t2.id AND t2.id=t3.id;
```
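SQLite has no multi-table `DELETE`, but the intended effect (keep only one row per duplicated `video_id`) can be sketched with a subquery that MySQL also accepts; the data here is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE slowmo_vid (id INTEGER, video_id TEXT, internal_uri TEXT, slowmo_end_t REAL);
INSERT INTO slowmo_vid VALUES
  (1, '2luh6g3ni5ex', 'a.mp4', 1.0),
  (2, '2luh6g3ni5ex', 'a.mp4', 2.0),
  (3, '2luh6g3ni5ex', 'a.mp4', 3.0),
  (4, 'othervideo',   'b.mp4', 1.0);
""")

# Keep only the duplicate with the highest id for this video_id.
cur.execute("""
    DELETE FROM slowmo_vid
    WHERE video_id = '2luh6g3ni5ex'
      AND id <> (SELECT MAX(id) FROM slowmo_vid WHERE video_id = '2luh6g3ni5ex')
""")
remaining = cur.execute("SELECT id FROM slowmo_vid ORDER BY id").fetchall()
```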
|
Delete MySQL row using where clause
|
[
"",
"mysql",
"sql",
""
] |
I have schema like this:
[](https://i.stack.imgur.com/jxy0I.png)
Now I would like to write query which returns me list of items with borrower name only if book is borrowed now, otherwise borrower name should be null.
I feel like it should be easy; I have been trying with multiple joins and NULL conditions, but I still can't get what I want. Or should I change my schema?
|
```
SELECT items.*, borrowers.first_name, borrowers.last_name
FROM items
LEFT JOIN borrows ON borrows.item_id = items.id AND return_date IS NULL
LEFT JOIN borrowers ON borrowers.id = borrows.borrower_id
```
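A quick check of this query's behavior in SQLite, with made-up rows (one item currently out, one already returned):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE items (id INTEGER, title TEXT);
CREATE TABLE borrowers (id INTEGER, first_name TEXT, last_name TEXT);
CREATE TABLE borrows (item_id INTEGER, borrower_id INTEGER, return_date TEXT);
INSERT INTO items VALUES (1, 'Dune'), (2, 'Emma');
INSERT INTO borrowers VALUES (10, 'Ann', 'Lee');
INSERT INTO borrows VALUES (1, 10, NULL);          -- Dune is out now
INSERT INTO borrows VALUES (2, 10, '2015-01-01');  -- Emma was returned
""")
rows = cur.execute("""
    SELECT items.title, borrowers.first_name
    FROM items
    LEFT JOIN borrows   ON borrows.item_id = items.id AND borrows.return_date IS NULL
    LEFT JOIN borrowers ON borrowers.id = borrows.borrower_id
    ORDER BY items.id
""").fetchall()
```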
|
Something like this should do the trick:
```
select Items.title,
       case
         when b1.return_date is null then b2.first_name || ' ' || b2.last_name
         else null
       end as 'Borrower name'
from Items i
join Borrows b1 on b1.item_id = i.id
join Borrowers b2 on b2.id = b1.borrower_id
```
This always selects the title of the item, and concats the first\_name and last\_name of the borrower, but only if return\_date is null. Otherwise, null is selected as 'Borrower name'
|
Get list of items with name or null based on another table values
|
[
"",
"mysql",
"sql",
"jpql",
""
] |
I have a small cinema project I've been working on and was hoping for some help with a rather advanced SQL query; I'm a complete SQL novice at the moment.
|
Is this what you mean?
```
select SeatNo, 1 as available from Chair where SeatNo in (select SeatNo from Booki1)
Union
select SeatNo, 0 as available from Chair where SeatNo not in (select SeatNo from Booki1)
```
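Here is a sketch of the `UNION` query in SQLite with hypothetical seats and bookings (following the query above, seats present in `Booki1` get flag 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Chair (SeatNo INTEGER);
CREATE TABLE Booki1 (SeatNo INTEGER);
INSERT INTO Chair VALUES (1), (2), (3);
INSERT INTO Booki1 VALUES (2);
""")
rows = cur.execute("""
    SELECT SeatNo, 1 AS available FROM Chair WHERE SeatNo IN (SELECT SeatNo FROM Booki1)
    UNION
    SELECT SeatNo, 0 AS available FROM Chair WHERE SeatNo NOT IN (SELECT SeatNo FROM Booki1)
    ORDER BY SeatNo
""").fetchall()
```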
|
Your query is a bit confused. The chairs are always there, so it makes no sense to outer-join this table. Have it in the FROM clause instead and outer-join the bookings (or access them in a subquery).
```
select
seatno,
case when seatno in (select seatno from booki1 where ddate = ... and ttime = ...)
then 0 else 1 end as available
from chair
order by seatno;
```
|
SQL Query - to show all ChairNo's which are available and unavailable
|
[
"",
"mysql",
"sql",
"select",
"join",
"where-clause",
""
] |
Pattern matching in SQL Server doesn't work like regex. Is it possible to use LIKE to match number ranges? I need to do something like this:
```
ABCD 1
ABCD 2
ABCD 3
ABCD 4
ABCD 8
ABCD 9
ABCD 10
...
ABCD 20
```
I want to select all data from "ABCD 1" to "ABCD 20". But I want to ignore anything between "ABCD 4" and "ABCD 8". Of course I can do OR conditions, but I just want to check if there is more elegant way.
TIA.
|
You can use the [LIKE](https://msdn.microsoft.com/en-us/library/ms179859%28v=sql.120%29.aspx) operator and specify your pattern executing a query like this one:
```
SELECT mydata FROM mytable
WHERE(mydata LIKE 'ABCD [1-9]' OR mydata LIKE 'ABCD 1[0-9]' OR mydata LIKE 'ABCD 20')
AND mydata NOT LIKE 'ABCD [4-8]';
```
or, something more concise and shorter:
```
SELECT mydata FROM mytable
where mydata like 'ABCD [^4-8]%';
```
Have a look at [this SQL Fiddle](http://sqlfiddle.com/#!6/94d5f/1).
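SQLite's `LIKE` has no bracket character classes, but its `GLOB` operator accepts similar ranges, so the concise pattern can be sanity-checked like this (sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (mydata TEXT)")
cur.executemany("INSERT INTO mytable VALUES (?)",
                [(f"ABCD {n}",) for n in (1, 2, 3, 4, 8, 9, 10, 20)])
# GLOB is the closest SQLite analogue of SQL Server's bracket wildcards.
rows = cur.execute(
    "SELECT mydata FROM mytable WHERE mydata GLOB 'ABCD [^4-8]*' ORDER BY rowid"
).fetchall()
selected = [r[0] for r in rows]
```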
|
You could use `LIKE` in SQL Server like this:
```
where col like 'ABCD [1-9]' or
col like 'ABCD 1[0-9]' or
col like 'ABCD 20'
```
|
Wildcard of Number Ranges in SQL Server
|
[
"",
"sql",
"sql-server",
"range",
""
] |
I am going to implement several lookup tables in a system. In general all lookup tables have the same structure like
id, name, value, rank, active
We are using AngularJS as front-end and Web API/Entity framework as backend in this project
I have a few options in mind.
Option 1 - Create a set of lookup tables with the same structure
e.g. LKRegion, LKStatus, LKPeriod, LKState, LKDepartment, etc.
This option is a traditional design. The data schema is structural and easy to understand. It is easy to implement/enforce foreign key integrity. But you have to create separated web methods to handle CRUD actions. You have to repeat the same thing if you have another lookup table to add in the future.
Option 2 - Create a big lookup table by adding an extra column called LookupType to identify the lookup group
This option reduces the number of tables. Make the lookup table easy to maintain and retrieve (e.g. One schema, one web method can handle all general lookup CRUD actions). But the foreign key integrity is a little bit loose due to the LookupType.
Please share your preference and the tell me why. I would like to get the best practise on this implementation. Thank you!
|
I suggest that :
A. Follow the organization standard if this is an enterprise system (some may laugh loud on this, I know). If such a thing exists, it would certainly promote individual tables.
B. Use enums or one aggregated lookup table for programming-level lookups only (such as error messages) if you must. Any business-related lookup data should, in my opinion, be in a separate table, for at least the following reasons:
1. When you have separate tables, you join on the correct table name rather than on a code column of a shared reference table. This makes queries less error prone to write. Code in the style of `Select ... Where (TableID=12 and State='NY') AND (TableID=133 and Country='USA')` is quite error prone during development. This is the major issue for me from a coding perspective.
2. RI errors on inserts and updates may be ambiguous when there is more than one reference to the lookup in the row being inserted or updated.
3. In some cases, a lookup table may have self-references (relationships). For example, a geographical location can be described as a hierarchy, which would add more confusion to the model.
4. The relationships (references) could lose meaning in your database. You would find that almost every table in your system is linked to this one table, which somehow does not make sense.
5. If you ever decided to allow users to perform ad-hoc reporting, it would be difficult for them to use codes for lookup tables instead of names.
6. I feel that the one-table approach breaks normalization concepts, though I can't prove it here.
A disadvantage is that you may need to build indexes on the PKs and FKs of some (or all) of the separate tables. However, with the powerful databases available today, this may not be a big deal.
There are plenty of discussion in the net that I should have read before answering your question, however I present some of the links if you care to take a look at some yourself:
[Link 1](https://stackoverflow.com/questions/876008/database-design-multiple-lookup-enum-tables-or-one-large-table), [Link 2](https://stackoverflow.com/questions/27573837/one-lookup-table-or-many-lookup-tables), [Link 3](http://www.simple-talk.com/sql/database-administration/five-simple--database-design-errors-you-should-avoid/), [Link 4](https://dba.stackexchange.com/questions/9011/proper-use-of-lookup-tables), [Link 5](https://social.msdn.microsoft.com/forums/sqlserver/en-US/037967bf-c26e-403f-bf5f-174e17f22caf/lookup-tables-one-or-many)...
|
I'll defend Option 2, although in general, you want Option 1. As others have mentioned, Option 1 is the simpler method and easily allows foreign key relationships.
There are some circumstances where having a single reference table is handy. For instance, if you are writing a system that will support multiple human languages, then having a single reference table with the names of things is much simpler than a zillion reference tables spread throughout the database. Or, I suppose, you could have very arcane security requirements that require complex encryption algorithms -- and dealing with a single table is easier.
Nevertheless, referential integrity on reference tables is important. Some databases have non-trigger-based mechanisms that will support referential integrity for one table (foreign keys and computed columns in Oracle and SQL Server). These mechanisms are a bit cumbersome but they do allow different foreign key references to a single table. And, you can always enforce referential integrity using triggers, although I don't recommend that approach.
As with most things that databases do, there isn't a right answer. There is a generally accepted answer that works/is correct in most cases (Option 1). The second option would only be desirable under limited circumstances depending on system requirements.
|
Lookup tables implementation - one table or separate tables
|
[
"",
"sql",
"database",
"asp.net-web-api",
""
] |
This is my SQL :
```
SELECT * FROM unity_database.unit_uptime_daily
inner join unity_database.units on unity_database.units.id = unity_database.unit_uptime_daily.unit_id
where unity_database.units.location_id = 1
```
Below is a screenshot of the first part of the results :
[](https://i.stack.imgur.com/ipuR0.jpg)
I am trying to show only one result for each unit\_id, the one with the latest date; in this example, the unit with unit\_id 30 should be shown only once, and it should be the bottom row.
Any ideas?
|
Here is a MySQL solution you can consider:
```
SELECT t1.id, t1.unit_id, t1.uptime, t1.total_update, t2.timestamp
FROM
(
SELECT *
FROM unity_database.unit_uptime_daily ud1 INNER JOIN unity_database.units ud2
ON ud2.id = ud1.unit_id
WHERE ud2.location_id = 1
) t1
INNER JOIN
(
SELECT ud1.unit_id, MAX(ud1.timestamp) AS timestamp
FROM unity_database.unit_uptime_daily ud1 INNER JOIN unity_database.units ud2
ON ud2.id = ud1.unit_id
WHERE ud2.location_id = 1
GROUP BY ud1.unit_id
) t2
ON t1.unit_id = t2.unit_id AND t1.timestamp = t2.timestamp
```
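The join-on-MAX technique can be checked in SQLite with made-up units and uptime rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE units (id INTEGER, location_id INTEGER);
CREATE TABLE unit_uptime_daily (id INTEGER, unit_id INTEGER, uptime INTEGER, timestamp TEXT);
INSERT INTO units VALUES (30, 1), (31, 2);
INSERT INTO unit_uptime_daily VALUES
  (1, 30, 95, '2015-06-01'),
  (2, 30, 99, '2015-06-02'),
  (3, 31, 90, '2015-06-02');
""")
rows = cur.execute("""
    SELECT d.unit_id, d.uptime, d.timestamp
    FROM unit_uptime_daily d
    JOIN units u ON u.id = d.unit_id
    JOIN (SELECT unit_id, MAX(timestamp) AS ts
          FROM unit_uptime_daily GROUP BY unit_id) m
      ON m.unit_id = d.unit_id AND m.ts = d.timestamp
    WHERE u.location_id = 1
""").fetchall()
```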
|
```
WITH X AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY Unit_Id ORDER BY [Timestamp] DESC) rn
FROM unity_database.unit_uptime_daily
inner join unity_database.units
on unity_database.units.id = unity_database.unit_uptime_daily.unit_id
where unity_database.units.location_id = 1
)
SELECT * FROM X Where rn = 1
```
OR
```
SELECT *
FROM (
SELECT *
,ROW_NUMBER() OVER (PARTITION BY Unit_Id ORDER BY [Timestamp] DESC) rn
FROM unity_database.unit_uptime_daily
inner join unity_database.units
on unity_database.units.id = unity_database.unit_uptime_daily.unit_id
where unity_database.units.location_id = 1
)A
WHERE rn = 1
```
|
SQL remove duplicates from query results
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I have a table in my database called Games. This table has a column called "players\_attending" that stores varchars. I need to be able to add a new string into this column while still saving the previous strings.
For example, if one row of Games had players\_attending = "bryan", I want to be able to add "matt" while still keeping "bryan".
The result should be players\_attending = "bryan matt".
Is this possible?
|
Your query would look like this:
**Oracle:**
```
Update table set field = field || ' matt'
```
**MySQL:**
```
Update table SET col=CONCAT(col,' matt');
```
**MSSQL:**
```
Update table set col = col + ' matt'
```
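A quick check of the ANSI `||` form (which SQLite shares with Oracle), using a hypothetical `Games` row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Games (id INTEGER, players_attending TEXT)")
cur.execute("INSERT INTO Games VALUES (1, 'bryan')")
# SQLite uses the ANSI || concatenation operator, like Oracle.
cur.execute("UPDATE Games SET players_attending = players_attending || ' matt' WHERE id = 1")
result = cur.execute("SELECT players_attending FROM Games").fetchone()[0]
```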
|
Yes, you would concatenate the string using `||`
```
update Games
set players_attending = players_attending || ' ' || new_player_name
where Id = x
```
|
Sql how to update a column while keeping the existing strings in the column
|
[
"",
"mysql",
"sql",
""
] |
I need to increment every value of a column in my table by 10.000.
For example, at the moment I have a column like that:
hits
* 12
* 1443
* 876
* 345
* 34
* and so on...
I want to increase every value by 10.000,
so it becomes
* 10.012
* 11.443
* and so on...
What's the correct SQL query I should use?
|
A simple Google search can answer this question. Anyway, you can use an [Update](http://www.tutorialspoint.com/mysql/mysql-update-query.htm) query to achieve the above scenario, like so:
```
Update yourtable set yourcolumn=yourcolumn + 10000 where somecondition
```
And if you are learning MySQL, you can use this [MySQL Tutorial](http://www.mysqltutorial.org/) for further reference.
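A minimal check of the increment in SQLite (hypothetical table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hits_table (hits INTEGER)")
cur.executemany("INSERT INTO hits_table VALUES (?)", [(12,), (1443,), (876,)])
cur.execute("UPDATE hits_table SET hits = hits + 10000")
values = [r[0] for r in cur.execute("SELECT hits FROM hits_table ORDER BY rowid")]
```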
|
```
UPDATE your_table
SET your_column = your_column + 10000
```
|
MySQL query to increment of 10.000 the values of a column
|
[
"",
"mysql",
"sql",
""
] |
```
var1 = 12345
output = 1+2+3+4+5 = 15
```
I tried the following
```
Declare @var1 int = 12345,
@Length int = len(12345)
;with SUMM as
(
SELECT SUBSTRING(CAST(@var1 AS VARCHAR) ,1,@Length)%10 N
UNION ALL
SELECT SUBSTRING(CAST(@var1 AS VARCHAR) ,1,@Length-1))%10 N
FROM SUMM
WHERE @Length <= len(@var1)
)
SELECT SUM(N) FROM SUMM;
```
Please help me achieve this with a CTE; any methods other than traditional looping are welcome.
|
It's a fun exercise if a bit pointless.
```
DECLARE @var1 int = 12345;
WITH i AS (
SELECT @var1 / 10 n, @var1 % 10 d
UNION ALL
SELECT n / 10, n % 10
FROM i
WHERE n > 0
)
SELECT SUM(d)
FROM i;
```
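The recursive CTE uses the same `/` and `%` integer arithmetic in SQLite (which requires the `RECURSIVE` keyword), so it can be verified directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Peel off one digit per recursion step: d holds the digit, n the remainder.
total = conn.execute("""
    WITH RECURSIVE i(n, d) AS (
        SELECT 12345 / 10, 12345 % 10
        UNION ALL
        SELECT n / 10, n % 10 FROM i WHERE n > 0
    )
    SELECT SUM(d) FROM i
""").fetchone()[0]
```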
|
```
SELECT
SUM(CAST (SUBSTRING(STRING.B, V.NUMBER+1, 1)AS INT) )
FROM
(SELECT '12345' B) STRING
JOIN MASTER..SPT_VALUES V ON V.NUMBER < LEN(STRING.B)
WHERE V.TYPE = 'P'
```
|
Sum of digits of a number in sql server without using traditional loops like while
|
[
"",
"sql",
"sql-server",
""
] |
Similar questions have been asked on the forums, but I seem to have a unique issue with mine. I'm not sure if this is because I don't have a unique ID or because my `KEY` is my actual data. I hope you guys can help.
I am trying to merge two tables (Old and New) that have identical column structures.
I want to retain all my values in the Old table and append ONLY new variables from New Table into a Combined Table. Any keys that exist in both tables should take on the value of the Old table.
```
OLD TABLE
Key | Points
AAA | 1
BBB | 2
CCC | 3
NEW TABLE
Key | Points
AAA | 2
BBB | 5
CCC | 8
DDD | 6
Combined TABLE
Key | Points
AAA | 1
BBB | 2
CCC | 3
DDD | 6
```
I feel like what I want to achieve is the venn diagram equivalent of this:
[Venn diagram](https://i.stack.imgur.com/BD1Mk.png)
... but for whatever reason I'm not getting the intended effect with this code:
```
CREATE TABLE Combined
SELECT * FROM Old as A
FULL OUTER JOIN New as B ON A.Key=B.Key
WHERE A.Key IS NULL OR B.Key IS NULL;
```
|
Order the datasets
```
proc sort data=old;
by key;
run;
proc sort data=new;
by key;
run;
```
Combine them with a DATA step using `by`, outputting only the first record per key when there is a match:
```
data combined;
set
old
new;
by key;
if first.key then output;
run;
```
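For the PROC SQL angle, the same old-wins merge can be expressed in portable SQL; here is a sketch in SQLite (table and column names adjusted slightly to avoid SQL keywords):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE old_t (k TEXT, points INTEGER);
CREATE TABLE new_t (k TEXT, points INTEGER);
INSERT INTO old_t VALUES ('AAA', 1), ('BBB', 2), ('CCC', 3);
INSERT INTO new_t VALUES ('AAA', 2), ('BBB', 5), ('CCC', 8), ('DDD', 6);
""")
# Keep all old rows, append only keys that exist solely in the new table.
rows = cur.execute("""
    SELECT k, points FROM old_t
    UNION ALL
    SELECT k, points FROM new_t
    WHERE k NOT IN (SELECT k FROM old_t)
    ORDER BY k
""").fetchall()
```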
|
This might help you.
```
SELECT B.[Key], MIN(CASE WHEN A.[Key] = B.[Key] THEN A.Points ELSE B.Points END) AS 'Points'
FROM OldTable A
CROSS APPLY NewTable B
GROUP BY B.[Key]
```
|
Join / Merge two tables removing new duplicates [PROC SQL in SAS]
|
[
"",
"sql",
"join",
"merge",
"sas",
"proc-sql",
""
] |
Below is the code I have written. What I want is: when the status equals 'N', display 'Export to WMS' in the 'Export to WMS' column.
If the status has any other value, I still want the column to appear, but for those rows its value should be blank.
```
select
m.display_order_number, m.record_create_date, l.lookup_description,m.wh_id,
m.client_code,m.order_type,m.order_date,m.UIN, m.ship_to_name,m.carrier,
(select DISTINCT 'Export to WMS'
from
t_3pl_order_master
where
status = 'N') AS "Export to WMS"
from
t_3pl_order_master m
INNER JOIN
t_3pl_lookup l on m.status = l.lookup_value AND l.lookup_type = 'Order Status';
```
Results I get are:
[](https://i.stack.imgur.com/O42pS.png)
You can clearly see that when the status is 'W' it still displays 'Export to WMS'; ideally that would be blank, and only rows with status 'N' would display 'Export to WMS'.
Hope you can help!
|
You need the ELSE in there to make it blank; otherwise it will be null.
```
CASE WHEN status = 'N' THEN 'Export to WMS' ELSE ' ' END AS "Export to WMS"
```
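A quick check of the `CASE` expression in SQLite (using an empty string for the blank; the sample statuses are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t_3pl_order_master (display_order_number INTEGER, status TEXT)")
cur.executemany("INSERT INTO t_3pl_order_master VALUES (?, ?)", [(1, 'N'), (2, 'W')])
rows = cur.execute("""
    SELECT display_order_number,
           CASE WHEN status = 'N' THEN 'Export to WMS' ELSE '' END AS export_flag
    FROM t_3pl_order_master ORDER BY display_order_number
""").fetchall()
```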
|
Use a case expression:
```
CASE WHEN status = 'N' THEN 'Export to WMS' END
```
in this way:
```
select m.display_order_number,
m.record_create_date,
l.lookup_description,
m.wh_id,
m.client_code,
m.order_type,
m.order_date,
m.UIN,
m.ship_to_name,
m.carrier,
CASE WHEN status = 'N' THEN 'Export to WMS' END AS "Export to WMS"
from t_3pl_order_master m
INNER JOIN t_3pl_lookup l on m.status = l.lookup_value AND l.lookup_type = 'Order Status';
```
see this link for details: <https://docs.oracle.com/cd/B19306_01/server.102/b14200/expressions004.htm>
|
SQL to display a column value as a set value or null depending on status
|
[
"",
"sql",
"database",
"oracle",
""
] |
I have a table with a varchar type column. I want to select values from this column which start with a capital letter only.
For example
MyTable
```
Col1 Col2
Argentina 2
Brasil 3
uruguay 4
```
I want my select query to return:
```
Argentina
Brasil
```
|
This is a bit of a pain. You can use `ASCII()` or `COLLATE`, but these depend on how the data is stored. For `varchar()` and `char()`, this will work:
```
where ASCII(left(col1, 1)) between ASCII('A') and ASCII('Z')
```
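The first-character range check can be verified in SQLite, whose default BINARY collation makes the comparison case-sensitive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (Col1 TEXT, Col2 INTEGER)")
cur.executemany("INSERT INTO MyTable VALUES (?, ?)",
                [("Argentina", 2), ("Brasil", 3), ("uruguay", 4)])
# 'u' sorts after 'Z' under binary collation, so lowercase names drop out.
rows = cur.execute(
    "SELECT Col1 FROM MyTable WHERE substr(Col1, 1, 1) BETWEEN 'A' AND 'Z' ORDER BY Col1"
).fetchall()
```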
|
```
select Col1 from MyTable where Col1 like '[A-Z]%'
```
|
SQL Select values starting with a capital letter
|
[
"",
"sql",
"sql-server",
""
] |
I have a table which represents users connected to a game room. It looks like this:
```
id | gameRoomId | userId
------------------------
0 abc bob
1 xyz joe
2 xyz frank
...
```
Is there a way I can remove the auto-increment primary-key `id` column? I'm not using it for any queries, and don't plan to.
The `gameRoomId` and `userId` have a generic index on them individually.
I'm using mysql now, might switch to postgres eventually, if it matters.
|
It is not obligatory for a table to have a primary key constraint. Where a table does have a primary key, it is not obligatory for that key to be automatically generated. In some cases, there is no meaningful sense in which a given primary key even *could* be automatically generated.
You should be able to remove your existing primary key column from the database like so:
```
alter table my_table drop column id;
```
or perhaps you can avoid creating it in the first place.
Whether this is a wise thing to do depends on your circumstances.
|
Your table looks like a relationship table. It represents the many-to-many relationship between gamerooms and users. Assuming that a given combination of the two can occur only once (which seems reasonable), you could declare a composite primary key of those two columns, and do without the id field.
Some design tools require a simple primary key, but that's not part of relational modeling as such.
At the physical level, declaring a primary key would have a variety of consequences. The index created for you would be a compound one. It would be slightly more efificient if you used integers for userids and gemeroomids, instead of the character strings you have shown.
As far as not declaring any primary key goes, I don't recommend that. Sooner or later, you'll have a bug in your application that lets duplicate rows in there, and you'll start to get unexpected results from your queries. It's far better to let the DBMS help you manage your data.
|
Is primary key auto increment always needed in a table?
|
[
"",
"sql",
"database-design",
"primary-key",
""
] |
I have a table tbl1 with rows of data:
```
ID TIN SSS
10001 none 1000-20
10002 69098 PRC
10003 69099 INC
```
I want to query the Legal\_Doc\_No of each ID. The value for each ID is either the TIN or the SSS column.
How can I detect when the TIN or SSS column starts with letters (like "none") so that only values starting with numbers are assigned to Legal\_Doc\_No?
```
Select
ID,
'Legal_Doc_No' = case when TIN = Letter then SSS
else TIN end
from tbl1
```
|
Most databases support `left()`, so you can do something like this:
```
select id,
       (case when left(TIN, 1) between 'a' and 'z' or left(TIN, 1) between 'A' and 'Z'
then SSS else TIN
end) as Legal_Doc_no
from tbl1;
```
Depending on the database, there might be other solutions.
In SQL Server, you can do:
```
select id,
       (case when TIN like '[a-z]%'
then SSS else TIN
end) as Legal_Doc_no
from tbl1;
```
If you have a case-sensitive collation, then you'll need to take that into account:
```
select id,
       (case when lower(TIN) like '[a-z]%'
then SSS else TIN
end) as Legal_Doc_no
from tbl1;
```
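A sketch of the same idea in SQLite, using `GLOB` in place of SQL Server's bracketed `LIKE` pattern and the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tbl1 (ID INTEGER, TIN TEXT, SSS TEXT)")
cur.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                [(10001, 'none', '1000-20'),
                 (10002, '69098', 'PRC'),
                 (10003, '69099', 'INC')])
# GLOB '[0-9]*' stands in for SQL Server's LIKE '[0-9]%'.
rows = cur.execute("""
    SELECT ID, CASE WHEN TIN GLOB '[0-9]*' THEN TIN ELSE SSS END AS Legal_Doc_No
    FROM tbl1 ORDER BY ID
""").fetchall()
```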
|
The following solution should work assuming that either the `TIN` column or the `SSS` column, but not both, contains numeric digits (i.e. is in numeric format):
```
SELECT ID,
CASE WHEN TIN LIKE '%[0-9]%' THEN TIN ELSE SSS END AS Legal_Doc_No
FROM tbl1
```
|
SQL select where column begins with Letters
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have the following two tables in Oracle database (read only access so I can only use select).
`Question` table:
```
N Question ID
1 1
2 2
3 3
```
`Response` table:
```
Question ID Response day
1 01-04-15
3 02-04-15
4 03-04-15
```
I want the output result to be:
```
Question ID Response day
1 01-04-15
2 null
3 02-04-15
4 03-04-15
```
I'm trying the following query, but I'm only getting the question ID when there is a response. Does anyone know where I'm going wrong?
```
select questionID, responseday
from questions
join response
where question.question.ID = response.question.ID;
```
With this query I'm getting these results:
```
Question ID Response day
1 01-04-15
3 02-04-15
4 03-04-15
```
|
Based on your schema, a simple `LEFT JOIN` will be sufficient for your model.
```
SELECT q.Question_ID,
r.Response_Day
FROM Question q
LEFT JOIN Response r
ON r.Question_ID = q.Question_ID;
```
`JOIN` (or `INNER JOIN`) will **only** return rows where the join condition matches in both tables. Because your `Question` record with ID `2` has no match in the `Response` table, SQL excludes it.
To account for `NULLs`, you have to `LEFT JOIN`, see the sqlfiddle here:
[SQL Fiddle](http://sqlfiddle.com/#!15/2e998/1/0 "SQL Fiddle")
Also, have a look at this graphical model explaining all the `JOINs`:
[Visual Representation of SQL Joins](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins)
|
One method is a full outer join:
```
select coalesce(q.questionid, r.questionid) as questionid,
r.responseday
from questions q full outer join
responses r
on q.questionid = r.questionid;
```
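For what it's worth, older SQLite (before 3.39) lacks `FULL OUTER JOIN`, but it can be emulated with two `LEFT JOIN`s and a `UNION`; the sketch below reproduces the expected output from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE questions (question_id INTEGER);
CREATE TABLE responses (question_id INTEGER, response_day TEXT);
INSERT INTO questions VALUES (1), (2), (3);
INSERT INTO responses VALUES (1, '01-04-15'), (3, '02-04-15'), (4, '03-04-15');
""")
# Leg 1: all questions (with response if any); leg 2: orphan responses only.
rows = cur.execute("""
    SELECT q.question_id AS question_id, r.response_day
    FROM questions q LEFT JOIN responses r ON q.question_id = r.question_id
    UNION
    SELECT r.question_id, r.response_day
    FROM responses r LEFT JOIN questions q ON q.question_id = r.question_id
    WHERE q.question_id IS NULL
    ORDER BY question_id
""").fetchall()
```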
|
How to select all values between 2 tables in SQL?
|
[
"",
"sql",
"oracle",
"select",
"join",
""
] |
So let's say I have this table with names and scores, let's call it grades. The score values can only be 1, 2 or 3:
```
names | scores
Bob | 3
Bob | 3
Bob | 3
John | 3
John | 1
Peter | 3
```
And I want the names of the people who got a perfect score (3 in all of their scores). My problem is that each person can have a different number of grades.
The expected output would be something like this:
```
names
Bob
Peter
```
How do I do this in Oracle SQL?
|
Try this:
```
select names
from grades
group by names
having count(*) * 3 = sum(scores)
```
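Both `HAVING` variants are easy to verify; here is the `COUNT(*) * 3 = SUM(scores)` form checked in SQLite against the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE grades (names TEXT, scores INTEGER)")
cur.executemany("INSERT INTO grades VALUES (?, ?)",
                [("Bob", 3), ("Bob", 3), ("Bob", 3),
                 ("John", 3), ("John", 1), ("Peter", 3)])
# A group sums to 3 * its row count only when every score is 3.
rows = cur.execute("""
    SELECT names FROM grades
    GROUP BY names
    HAVING COUNT(*) * 3 = SUM(scores)
    ORDER BY names
""").fetchall()
```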
|
[SQL Fiddle](http://sqlfiddle.com/#!4/fea6a/1)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE grades ( name, grade ) AS
SELECT 'Bob', 3 FROM DUAL
UNION ALL SELECT 'Bob', 3 FROM DUAL
UNION ALL SELECT 'Bob', 3 FROM DUAL
UNION ALL SELECT 'John', 3 FROM DUAL
UNION ALL SELECT 'John', 1 FROM DUAL
UNION ALL SELECT 'Peter', 3 FROM DUAL
```
**Query 1**:
```
SELECT name
FROM grades
GROUP BY name
HAVING MIN(grade) = 3
```
**[Results](http://sqlfiddle.com/#!4/fea6a/1/0)**:
```
| NAME |
|-------|
| Peter |
| Bob |
```
|
Oracle SQL - getting values out of a table where all values must be equal
|
[
"",
"sql",
"database",
"oracle",
""
] |
I am working on an Oracle SQL script that displays the average for one department location (`location_id = 1700`) in comparison to *all other* department locations (`location_id <> 1700`). Since it's comparing two values, I'm looking to have only two rows returned. I was able to work out one query like this:
```
select d.LOCATION_ID, round(avg(e.salary),2) AS "AVG SALARY", count(d.LOCATION_ID) from departments d
join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID
where d.location_id = 1700
group by grouping sets(d.LOCATION_ID);
```
This returns a single row for me as such:
[](https://i.stack.imgur.com/Wdz7Z.jpg)
My second query returns four rows instead of a single one (like I thought it would):
```
select round(avg(e.salary),2) AS "AVG SALARY", count(d.LOCATION_ID) from departments d
join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID
where d.location_id <> 1700
group by grouping sets(d.LOCATION_ID);
```
This returns 4 rows, but I am looking to have it return only 1:
[](https://i.stack.imgur.com/dsNX8.jpg)
My hope was to work out both queries, then union them together--obviously I need to get over the hurdle of the second query before putting them together.
Any ideas?
|
Try removing the GROUP BYs altogether. It sounds like you just want the average where location\_id = 1700 and where location\_id <> 1700, then UNION the two results.
```
select '1700' as "LOCATION", round(avg(e.salary),2) AS "AVG SALARY", count(d.LOCATION_ID) as "COUNT"
from departments d
join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID
where d.location_id = 1700
union
select '<>1700' as "LOCATION", round(avg(e.salary),2) AS "AVG SALARY", count(d.LOCATION_ID) as "COUNT"
from departments d
join employees e on e.DEPARTMENT_ID = d.DEPARTMENT_ID
where d.location_id <> 1700
```
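A quick way to verify the shape of the result is to run the same `UNION` against a small in-memory SQLite database (the schema and salary figures below are invented stand-ins for the HR tables):

```python
import sqlite3

# Hypothetical minimal departments/employees data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE departments (department_id INT, location_id INT)")
con.execute("CREATE TABLE employees (department_id INT, salary REAL)")
con.executemany("INSERT INTO departments VALUES (?, ?)",
                [(1, 1700), (2, 1700), (3, 1800), (4, 1900)])
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [(1, 1000), (2, 3000), (3, 2000), (4, 4000)])

# One aggregate row per branch of the UNION: exactly two rows come back.
rows = con.execute("""
    SELECT '1700' AS location, ROUND(AVG(e.salary), 2), COUNT(d.location_id)
    FROM departments d JOIN employees e ON e.department_id = d.department_id
    WHERE d.location_id = 1700
    UNION
    SELECT '<>1700', ROUND(AVG(e.salary), 2), COUNT(d.location_id)
    FROM departments d JOIN employees e ON e.department_id = d.department_id
    WHERE d.location_id <> 1700
""").fetchall()
print(rows)
```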
|
You might consider using a `case` expression instead of `location_id`:
```
select (case when d.location_id = 1700 then 1700 else -1 end) as LocationId,
round(avg(e.salary), 2) AS "AVG SALARY",
count(d.LOCATION_ID)
from departments d join
employees e
on e.DEPARTMENT_ID = d.DEPARTMENT_ID
group by grouping sets((case when d.location_id = 1700 then 1700 else -1 end));
```
I'm not sure if you need the `grouping sets`. If you only want two rows, then this probably does what you want:
```
select (case when d.location_id = 1700 then 1700 else -1 end) as LocationId,
round(avg(e.salary), 2) AS "AVG SALARY",
count(d.LOCATION_ID)
from departments d join
employees e
on e.DEPARTMENT_ID = d.DEPARTMENT_ID
group by (case when d.location_id = 1700 then 1700 else -1 end);
```
|
Oracle SQL grouping set returning more than one row
|
[
"",
"sql",
"oracle",
"oracle11g",
"group-by",
""
] |
I use MS SQL, and my table has 2 columns, DATE and VALUE. I want one result column to display values between two dates, and a second column to display values from a different range of dates, something like:
```
SELECT value1, value2
FROM TABLE
WHERE (value1 = date between '2015-05-10' and '2015-09-10')
and (value2 = date between '2015-04-10' and '2015-11-01').
```
All I want is to extract values from the same date column for two different ranges of dates. Thank you!
|
You can try this:
```
SELECT A.DATE, A.VALUE1, B.VALUE2
FROM (
    SELECT
        DATE,
        VALUE AS VALUE1
    FROM TABLE
    WHERE DATE BETWEEN '2015-05-10' AND '2015-09-10'
) AS A
FULL OUTER JOIN (
    SELECT
        DATE,
        VALUE AS VALUE2
    FROM TABLE
    WHERE DATE BETWEEN '2015-04-10' AND '2015-11-01'
) AS B
    ON A.DATE = B.DATE
```
|
It's a bit unclear to me... do you mean something like this?
```
SELECT value1, value2
FROM tbl
WHERE value1 BETWEEN '2015-05-10' AND '2015-09-10'
AND value2 BETWEEN '2015-04-10' AND '2015-11-01'
```
|
MS SQL - 2 different range of date
|
[
"",
"sql",
"sql-server",
"select",
""
] |
The goal is to have the Install Address and Dispatch address appear on the same line, but I can't figure out a way of doing this. I am doing 2 queries on the same data and doing union all on the results. The address details may be different but could also be the same (e.g. Install and Dispatch address the same).
[](https://i.stack.imgur.com/KKlWZ.png)
```
Select zSTRI_CertificateNumber, CPS, InstallAdr1, InstallCity, DispAdr1, DispCity, DateSubmitted
From (
SELECT zSTRI_CertificateNumber,
'STRI' + CAST(op.ID as Varchar(4)) as CPSref,
JobRef,
CAST(CASE
WHEN notif.Cps = 1 THEN 'CPS'
END AS varchar(3)) as CPS,
notif.DateSubmitted,
nAdr.AddressLine1 as InstallAdr1,
nAdr.AddressLine2 as InstallAdr2,
nAdr.City as InstallCity,
nAdr.PostCode as InstallPostCode,
'' as DispAdr1,
'' as DispAdr2,
'' as DispCity,
'' as DispPostCode,
DateWorkCompleted,
c.CompanyName,
msrs.UniqueID
FROM [Notification] notif
INNER JOIN NotificationAddress nAdr
ON notif.ID = nAdr.NotificationID
INNER JOIN Company c
ON c.CompanyID = notif.CompanyID
INNER JOIN NotificationMeasures msrs
ON notif.ID = msrs.NotificationID
INNER JOIN Operative op
ON op.ID = NotifyingOperativeID
WHERE notif.DispatchMethodEmail = 0
AND nAdr.InstallAddress = 1
AND notif.ID = 5411
UNION ALL
SELECT zSTRI_CertificateNumber,
'STRI' + CAST(op.ID as Varchar(4)) as CPSref,
JobRef,
CAST(CASE
WHEN notif.Cps = 1 THEN 'CPS'
END AS varchar(3)) as CPS,
notif.DateSubmitted,
'' as InstallAdr1,
'' as InstallAdr2,
'' as InstallCity,
'' as InstallPostCode,
nAdr.AddressLine1 as DispAdr1,
nAdr.AddressLine2 as DispAdr2,
nAdr.City as DispCity,
nAdr.PostCode as DispPostCode,
DateWorkCompleted,
c.CompanyName,
msrs.UniqueID
FROM [Notification] notif
INNER JOIN NotificationAddress nAdr
ON notif.ID = nAdr.NotificationID
INNER JOIN Company c
ON c.CompanyID = notif.CompanyID
INNER JOIN NotificationMeasures msrs
ON notif.ID = msrs.NotificationID
INNER JOIN Operative op
ON op.ID = NotifyingOperativeID
WHERE
notif.DispatchMethodEmail = 0
AND nAdr.DispatchAddress = 1
AND notif.ID = 5411
) as SubGroup
Group by zSTRI_CertificateNumber, CPS, InstallAdr1, InstallCity, DispAdr1, DispCity, DateSubmitted
```
|
Use `MAX()` on the columns that can be empty in one branch of the `UNION`, and remove them from the `GROUP BY`:
```
SELECT
zSTRI_CertificateNumber,
CPS,
MAX(InstallAdr1) InstallAdr1,
InstallCity,
MAX(DispAdr1)DispAdr1,
MAX(DispCity)DispCity,
DateSubmitted
FROM (
SELECT zSTRI_CertificateNumber,
'STRI' + CAST(op.ID as Varchar(4)) as CPSref,
JobRef,
CAST(CASE
WHEN notif.Cps = 1 THEN 'CPS'
END AS varchar(3)) as CPS,
notif.DateSubmitted,
nAdr.AddressLine1 as InstallAdr1,
nAdr.AddressLine2 as InstallAdr2,
nAdr.City as InstallCity,
nAdr.PostCode as InstallPostCode,
'' as DispAdr1,
'' as DispAdr2,
'' as DispCity,
'' as DispPostCode,
DateWorkCompleted,
c.CompanyName,
msrs.UniqueID
FROM [Notification] notif
INNER JOIN NotificationAddress nAdr
ON notif.ID = nAdr.NotificationID
INNER JOIN Company c
ON c.CompanyID = notif.CompanyID
INNER JOIN NotificationMeasures msrs
ON notif.ID = msrs.NotificationID
INNER JOIN Operative op
ON op.ID = NotifyingOperativeID
WHERE notif.DispatchMethodEmail = 0
AND nAdr.InstallAddress = 1
AND notif.ID = 5411
UNION ALL
SELECT zSTRI_CertificateNumber,
'STRI' + CAST(op.ID as Varchar(4)) as CPSref,
JobRef,
CAST(CASE
WHEN notif.Cps = 1 THEN 'CPS'
END AS varchar(3)) as CPS,
notif.DateSubmitted,
'' as InstallAdr1,
'' as InstallAdr2,
'' as InstallCity,
'' as InstallPostCode,
nAdr.AddressLine1 as DispAdr1,
nAdr.AddressLine2 as DispAdr2,
nAdr.City as DispCity,
nAdr.PostCode as DispPostCode,
DateWorkCompleted,
c.CompanyName,
msrs.UniqueID
FROM [Notification] notif
INNER JOIN NotificationAddress nAdr
ON notif.ID = nAdr.NotificationID
INNER JOIN Company c
ON c.CompanyID = notif.CompanyID
INNER JOIN NotificationMeasures msrs
ON notif.ID = msrs.NotificationID
INNER JOIN Operative op
ON op.ID = NotifyingOperativeID
WHERE
notif.DispatchMethodEmail = 0
AND nAdr.DispatchAddress = 1
AND notif.ID = 5411
)As Subgroup
GROUP BY zSTRI_CertificateNumber, CPS, InstallCity, DateSubmitted
```
|
It sounds like you just need to join on the same NotificationAddress table twice within a single query, using different join criteria.
e.g.
```
select A.id, X.value as 'xValue', Y.value as 'yValue'
from IdTable A
inner join ValueTable X
on A.id=X.id
inner join ValueTable Y -- same table as "X"
on A.id=Y.id
where X.type = 'X'
and Y.type = 'Y' -- but different join criteria
```
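The pattern can be exercised end-to-end with SQLite from Python (the table and column names below are the hypothetical ones from the snippet above, not the real Notification tables):

```python
import sqlite3

# One id row, two value rows of different types in the same table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IdTable (id INT)")
con.execute("CREATE TABLE ValueTable (id INT, type TEXT, value TEXT)")
con.execute("INSERT INTO IdTable VALUES (1)")
con.executemany("INSERT INTO ValueTable VALUES (?, ?, ?)",
                [(1, "X", "install"), (1, "Y", "dispatch")])

# Joining ValueTable twice with different criteria puts both
# values on a single result row.
rows = con.execute("""
    SELECT A.id, X.value, Y.value
    FROM IdTable A
    JOIN ValueTable X ON A.id = X.id
    JOIN ValueTable Y ON A.id = Y.id
    WHERE X.type = 'X' AND Y.type = 'Y'
""").fetchall()
print(rows)  # [(1, 'install', 'dispatch')]
```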
|
T-SQL Consolidate and merge two rows into one
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Suppose:
> `Customers {CustomerID, CustomerName, Address, City, Tel, Fax}`
>
> `Orders {OrderID, OrderDate, CustomerID, OrderValue}`
>
> "Give the list of customers fulfilling the following condition: value of ***each order*** from the given customer exceeds $10,000."
[](https://i.stack.imgur.com/toTAP.png)
[](https://i.stack.imgur.com/JMs8Y.png)
This query doesn't look correct to me:
```
select cc.CustomerID, cc.CustomerName, tt.OrderValue
from Customers cc,
(select CustomerID, OrderValue
from Orders
where OrderValue > 10000) tt
where cc.CustomerID = tt.CustomerID;
```
[](https://i.stack.imgur.com/IMA9C.png)
The answer should be: **Pixoboo**.
|
Well, if each order exceeds 10000, then the smallest one must do so. So, perhaps this is the easier way:
```
SELECT *
FROM Customers
WHERE CustomerID IN
(SELECT CustomerID
FROM Orders
GROUP BY CustomerID
HAVING min(OrderValue)>10000)
```
Bonus: it avoids hairy problems with `NOT IN` and `NULLs`.
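A minimal runnable sketch of this approach, using SQLite from Python with invented sample data (the screenshots from the question aren't reproduced here): only the customer whose *smallest* order exceeds 10,000 is returned.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers (CustomerID INT, CustomerName TEXT)")
con.execute("CREATE TABLE Orders (OrderID INT, CustomerID INT, OrderValue REAL)")
con.executemany("INSERT INTO Customers VALUES (?, ?)",
                [(1, "Pixoboo"), (2, "Acme")])
# Acme has one order below 10,000, so it must be excluded.
con.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                [(10, 1, 12000), (11, 1, 15000), (12, 2, 9000), (13, 2, 20000)])

rows = con.execute("""
    SELECT CustomerName FROM Customers
    WHERE CustomerID IN (SELECT CustomerID FROM Orders
                         GROUP BY CustomerID
                         HAVING MIN(OrderValue) > 10000)
""").fetchall()
print(rows)  # [('Pixoboo',)]
```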
|
The present SQL query will also fetch those customers who have placed orders both below 10,000 and above 10,000.
Given that you have to retrieve the list of customers who have always placed orders of more than 10,000, I would approach this with a negation condition, as this will filter out all the customers who have at any given point of time placed a lower-value order.
Query is:
```
Select DISTINCT CC.CUSTOMERNAME
from Customers CC
WHERE CC.CUSTOMERID NOT IN
(Select DISTINCT CUSTOMERID from ORDERS where ORDERVALUE<10000)
```
Let me know if this answers your question.
|
What should be the SQL query for this?
|
[
"",
"sql",
"oracle11g",
""
] |
I'm trying to insert some dummy data into a table (A), for which I need the IDs from two other tables (B and C). How can I get n rows with a random B.Id and a random C.Id.
I've got:
```
select
(Select top 1 ID from B order by newid()) as 'B.Id',
(select top 1 ID from C order by newid()) as 'C.Id'
```
which gives me random Ids from each table, but what's the best way to get n of these? I've tried joining on a large table and doing top n, but the IDs from B and C are the same random Ids repeated for each row.
So looking to end up with something like this, but able to specify N rows.
```
INSERT INTO A (B-Id,C-Id,Note)
select
(Select top 1 ID from B order by newid()) as 'B.Id',
(select top 1 ID from C order by newid()) as 'C.Id',
'Rar'
```
So if B had Ids 1,2,3,4 and C had Ids 11,12,13,14, i'm after the equivalent of:
```
INSERT INTO A (B-Id,C-Id,Note)
Values
(3,11,'rar'), (1,14,'rar'),(4,11,'rar')
```
Where the Ids from each table are combined at random
|
If you want to avoid duplicates, you can use `row_number()` to enumerate the values in each table (randomly) and then join them:
```
select b.id as b_id, c.id as c_id
from (select b.*, row_number() over (order by newid()) as seqnum
from b
) b join
(select c.*, row_number() over (order by newid()) as seqnum
from c
) c
on b.seqnum = c.seqnum;
```
You can just add `top N` or `where seqnum <= N` to limit the number.
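Here is a runnable sketch of the `row_number()` pairing using SQLite from Python; `RANDOM()` stands in for SQL Server's `NEWID()`, and window functions require SQLite 3.25+:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE B (id INT)")
con.execute("CREATE TABLE C (id INT)")
con.executemany("INSERT INTO B VALUES (?)", [(1,), (2,), (3,), (4,)])
con.executemany("INSERT INTO C VALUES (?)", [(11,), (12,), (13,), (14,)])

# Enumerate each table in random order, then join on the sequence number:
# every B id pairs with exactly one C id, with no duplicates.
n = 3
rows = con.execute("""
    SELECT b.id, c.id
    FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY RANDOM()) AS seqnum FROM B) b
    JOIN (SELECT id, ROW_NUMBER() OVER (ORDER BY RANDOM()) AS seqnum FROM C) c
      ON b.seqnum = c.seqnum
    WHERE b.seqnum <= ?
""", (n,)).fetchall()
print(rows)  # e.g. [(3, 11), (1, 14), (4, 12)] -- random each run
```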
|
If I'm reading your question correctly, I think you want `N` random rows from the union of the two tables - so on any given execution you will get `X` rows from table `B` and `N-X` rows from table `C`. To accomplish this, you first `UNION` tables `B` and `C` together, then `ORDER BY` the random value generated by `NEWID()` while pulling your overall `TOP N`.
```
SELECT TOP 50 --or however many you like
DerivedUnionOfTwoTables.[ID],
DerivedUnionOfTwoTables.[Source]
FROM
(
(SELECT NEWID() AS [Random ID], [ID], 'Table B' AS [Source] FROM B)
UNION ALL
(SELECT NEWID() AS [Random ID], [ID], 'Table C' AS [Source] FROM C)
) DerivedUnionOfTwoTables
ORDER BY
[Random ID] DESC
```
I included a column showing which source table any given record comes from so you could see the distribution of the two table sources changing each time it is executed. If you don't need it and/or don't care to verify, simply comment it out from the topmost select.
|
Select random rows from multiple tables in one query
|
[
"",
"sql",
"sql-server",
""
] |
I've been stuck on this issue for a while now. The problem is the following:
I have 3 tables. A students table, a payments table and a pivot course\_student table.
A student can take many courses and each student can make many payments for each course they take. This data is stored on my payments table with a reference to course\_student\_id.
I just need the last payment made by the student which holds the latest information about the student's payment.
Here's my query :
```
SELECT *
FROM payments p
JOIN course_student cs ON cs.id = p.course_student_id
JOIN students s ON s.id = cs.student_id
GROUP BY p.course_student_id
HAVING max(p.id) IN ( SELECT id FROM payments )
```
The problem is that I'm getting back only the first record of each course\_student from the payments table when actually I expect the last row.
Here's an example of what I want returned :
```
Students
student_name student_id course_student_id
XYZ 1 1
```
On the payments table, say I have 3 entries for course\_student\_id = 1
```
Payments
id course_student_id amount_paid
1 1 100
2 1 250
3 1 500
```
I need the query to return all of the details of the last payment.
|
Consider trying this solution. The subselect looks at all of the student's payments for each course\_student\_id, and selects out the largest id.
```
SELECT *
FROM payments p
JOIN course_student cs ON cs.id = p.course_student_id
JOIN students s ON s.id = cs.student_id
WHERE p.id in (
SELECT MAX(id)
FROM payments p2
WHERE p2.course_student_id = p.course_student_id)
```
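Using the payments data from the question, the correlated subquery can be verified with SQLite from Python (the joins to `course_student` and `students` are omitted for brevity):

```python
import sqlite3

# Payments rows from the question; only the last one (id 3) should be kept.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (id INT, course_student_id INT, amount_paid INT)")
con.executemany("INSERT INTO payments VALUES (?, ?, ?)",
                [(1, 1, 100), (2, 1, 250), (3, 1, 500)])

# The correlated subquery returns the MAX(id) per course_student_id.
rows = con.execute("""
    SELECT p.* FROM payments p
    WHERE p.id IN (SELECT MAX(id) FROM payments p2
                   WHERE p2.course_student_id = p.course_student_id)
""").fetchall()
print(rows)  # [(3, 1, 500)]
```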
|
You can use a derived table to get the max date for a student's payment. If the max date should be by course, include it in `group by`.
```
SELECT s.id, t.course_id, t.mxdate --other columns as needed
FROM course_student cs
JOIN students s ON s.id = cs.student_id
JOIN (select course_student_id, course_id, max(payment_date) as mxdate from payments
---^ -- ^ --change these columns to be appropriate
group by course_student_id, course_id) t
on t.course_student_id = s.id
```
|
Subquery on Join - Where statement
|
[
"",
"mysql",
"sql",
""
] |
How can I do binning in SQL Server 2008 if I need about 100 bins? I need to group records depending if a binning variable belongs to one of 100 equal intervals.
For example if there is continious variable age I could write:
```
CASE
WHEN AGE >= 0 AND AGE < 1 THEN '1'
WHEN AGE >= 1 AND AGE < 2 THEN '2'
...
WHEN AGE >= 99 AND AGE < 100 THEN '100'
END [age_group]
```
But this process would be time-consuming. Are there other ways to do that?
|
Try this based upon your comment about the segments being 1200:
```
;With Number
AS
(
SELECT *
FROM (Values(1),(2), (3), (4), (5), (6), (7), (8), (9), (10))N(x)
),
Segments
As
(
SELECT (ROW_NUMBER() OVER(ORDER BY Num1.x) -1) * 1200 As StartNum,
ROW_NUMBER() OVER(ORDER BY Num1.x) * 1200 As EndNum
FROM Number Num1
CROSS APPLY Number Num2
)
SELECT *
FROM Segments
INNER JOIN MyTable
ON MyTable.Price >= StartNum AND MyTable.Price < EndNum
```
|
Try this code once:
```
SELECT CASE
WHEN AGE = 0 THEN 1
ELSE Ceiling([age])
END [age_group]
FROM @T
```
Here the CEILING function returns the smallest integer greater than or equal to the specified numeric expression, i.e. `SELECT CEILING(0.1)` returns 1 as output.
But according to your output requirement, `FLOOR(age) + 1` is enough to get the required output:
```
SELECT Floor([age]) + 1 [age_group]
FROM @T
```
Here the FLOOR function returns the largest integer less than or equal to the specified numeric expression.
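Not every SQLite build ships `FLOOR()`, but for non-negative ages `CAST(age AS INTEGER)` truncates the same way, so the `FLOOR(age) + 1` binning can be sketched like this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (age REAL)")
con.executemany("INSERT INTO t VALUES (?)", [(0.0,), (0.5,), (1.2,), (99.9,)])

# CAST truncates toward zero, so for age >= 0 it behaves like FLOOR:
# [0,1) -> bin 1, [1,2) -> bin 2, ..., [99,100) -> bin 100.
rows = con.execute(
    "SELECT age, CAST(age AS INTEGER) + 1 AS age_group FROM t"
).fetchall()
print(rows)  # [(0.0, 1), (0.5, 1), (1.2, 2), (99.9, 100)]
```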
|
Lots of WHEN conditions in CASE statement (binning)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"case",
""
] |
I have upgraded my system and have installed MySql 5.7.9 with php for a web application I am working on. I have a query that is dynamically created, and when run in older versions of MySQL it works fine. Since upgrading to 5.7 I get this error:
> Expression #1 of SELECT list is not in GROUP BY clause and contains
> non-aggregated column 'support\_desk.mod\_users\_groups.group\_id' which is
> not functionally dependent on columns in GROUP BY clause; this is
> incompatible with sql\_mode=only\_full\_group\_by
Note the manual page for Mysql 5.7 on the topic of [Server SQL Modes](http://dev.mysql.com/doc/refman/5.7/en/sql-mode.html).
This is the query that is giving me trouble:
```
SELECT mod_users_groups.group_id AS 'value',
group_name AS 'text'
FROM mod_users_groups
LEFT JOIN mod_users_data ON mod_users_groups.group_id = mod_users_data.group_id
WHERE mod_users_groups.active = 1
AND mod_users_groups.department_id = 1
AND mod_users_groups.manage_work_orders = 1
AND group_name != 'root'
AND group_name != 'superuser'
GROUP BY group_name
HAVING COUNT(`user_id`) > 0
ORDER BY group_name
```
I don't understand `only_full_group_by` enough to figure out what I need to do to fix the query. Can I just turn off the `only_full_group_by` option, or is there something else I need to do?
|
I would just add `group_id` to the `GROUP BY`.
When `SELECT`ing a column that is not part of the `GROUP BY` there could be multiple values for that column within the groups, but there will only be space for a single value in the results. So, the database *usually* needs to be told exactly how to make those multiple values into one value. Commonly, this is done with an aggregate function like `COUNT()`, `SUM()`, `MAX()` etc... I say *usually* because most other popular database systems insist on this. However, in MySQL prior to version 5.7 the default behaviour has been more forgiving because it will not complain and then arbitrarily choose *any value*! It also has an `ANY_VALUE()` function that could be used as another solution to this question if you really needed the same behaviour as before. This flexibility comes at a cost because it is non-deterministic, so I would not recommend it unless you have a very good reason for needing it. MySQL are now turning on the `only_full_group_by` setting by default for good reasons, so it's best to get used to it and make your queries comply with it.
So why my simple answer above? I've made a couple of assumptions:
1) the `group_id` is unique. Seems reasonable, it is an 'ID' after all.
2) the `group_name` is also unique. This may not be such a reasonable assumption. If this is not the case and you have some duplicate `group_names` and you then follow my advice to add `group_id` to the `GROUP BY`, you may find that you now get more results than before because the groups with the same name will now have separate rows in the results. To me, this would be better than having these duplicate groups hidden because the database has quietly selected a value arbitrarily!
It's also good practice to qualify all the columns with their table name or alias when there's more than one table involved...
```
SELECT
g.group_id AS 'value',
g.group_name AS 'text'
FROM mod_users_groups g
LEFT JOIN mod_users_data d ON g.group_id = d.group_id
WHERE g.active = 1
AND g.department_id = 1
AND g.manage_work_orders = 1
AND g.group_name != 'root'
AND g.group_name != 'superuser'
GROUP BY
g.group_name,
g.group_id
HAVING COUNT(d.user_id) > 0
ORDER BY g.group_name
```
|
You can try to disable the `only_full_group_by` setting by executing the following:
```
mysql> set global sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
mysql> set session sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
```
MySQL 8 does not accept `NO_AUTO_CREATE_USER` so that needs to be removed.
|
Error related to only_full_group_by when executing a query in MySql
|
[
"",
"mysql",
"sql",
"group-by",
"mysql-error-1055",
""
] |
I have a table in PostgreSQL that contains:
```
id name arrayofparents
1 First
2 Second {1}
3 Second_Sec {1,2}
4 Third {1,2,3}
5 Second_A {1}
6 Other
7 Trash {6}
```
`arrayofparents` is of type `integer[]` it contains a list of the parents records for that row with the correct order.
`id=4` parents are: `First` then `Second` then `Second_sec`
How do I write a query that for any given id it will generate a string of it's parents name?
for example:
`id=3`: `First->Second`.
`id=4`: `First->Second->Second_sec`.
`id=7`: `Other`.
**Edit:**
if possible I prefer the requested id `name` will always appear.
`id=3`: `First->Second->Second_sec`.
`id=4`: `First->Second->Second_sec->Third`.
`id=7`: `Other->Trash`.
`id=6`: `Other`.
|
You can combine multiple operations like generate\_subscripts and array to get the result:
```
with mtab as (
SELECT id, name, array_append(arrayofparents,id) as arrayofparents,
generate_subscripts(array_append(arrayofparents, id), 1) AS p_id FROM tab where id=2
)
select distinct array_to_string(
array(
select tab.name from tab join mtab t on tab.id=t.arrayofparents[t.p_id]
), '->'
) ;
```
[**live example Sqlfiddle**](http://sqlfiddle.com/#!15/3a0fd8/78/0)
or use outer join combined with any:
```
SELECT coalesce(string_agg(p.name, '->') || '->' || t.name, t.name) AS parentnames
FROM tab AS t
LEFT JOIN tab AS p ON p.id = ANY(t.arrayofparents)
where t.id =7
GROUP BY t.id, t.name
```
[**live example Sqlfiddle**](http://sqlfiddle.com/#!15/3a0fd8/63/0)
|
Each of these queries work for a single id as well as for the whole table.
And you can return just the path / the full path or all other columns as well.
```
SELECT t.*, concat_ws('->', t1.path, t.name) AS full_path
FROM tbl t
LEFT JOIN LATERAL (
SELECT string_agg(t1.name, '->' ORDER BY i) AS path
FROM generate_subscripts(t.arrayofparents, 1) i
JOIN tbl t1 ON t1.id = t.arrayofparents[i]
) t1 ON true
WHERE t.id = 4; -- optional
```
Alternatively, you could move the `ORDER BY` to a subquery - may be a bit faster:
```
SELECT concat_ws('->', t1.path, t.name) AS full_path
FROM tbl t, LATERAL (
SELECT string_agg(t1.name, '->') AS path
FROM (
SELECT t1.name
FROM generate_subscripts(t.arrayofparents, 1) i
JOIN tbl t1 ON t1.id = t.arrayofparents[i]
ORDER BY i
) t1
) t1
WHERE t.id = 4; -- optional
```
Since the aggregation happens in the `LATERAL` subquery we don't need a `GROUP BY` step in the outer query.
We also don't need `LEFT JOIN LATERAL ... ON true` to retain all rows where `arrayofparents` is NULL or empty, because the `LATERAL` subquery *always* returns a row due to the aggregate function.
`LATERAL` requires Postgres **9.3**.
* [What is the difference between LATERAL and a subquery in PostgreSQL?](https://stackoverflow.com/questions/28550679/what-is-the-difference-between-lateral-and-a-subquery-in-postgresql/28557803#28557803)
Use [`concat_ws()`](http://www.postgresql.org/docs/current/interactive/functions-string.html#FUNCTIONS-STRING-OTHER) to ignore possible NULL values in the concatenation.
[**SQL Fiddle.**](http://sqlfiddle.com/#!15/387f2/1)
`WITH ORDINALITY` makes it a bit simpler and faster in Postgres **9.4**:
```
SELECT t.*, concat_ws('->', t1.path, t.name) AS full_path
FROM tbl t, LATERAL (
SELECT string_agg(t1.name, '->' ORDER BY ord) AS path
FROM unnest(t.arrayofparents) WITH ORDINALITY a(id,ord)
JOIN tbl t1 USING (id)
) t1
WHERE t.id = 4;
```
Detailed explanation:
* [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450)
---
Variant with `UNION ALL` for pg 9.3
```
SELECT t1.full_path
FROM tbl t, LATERAL (
SELECT string_agg(name, '->') AS full_path
FROM (
(
SELECT name
FROM generate_subscripts(t.arrayofparents, 1) i
JOIN tbl ON id = t.arrayofparents[i]
ORDER BY i
)
UNION ALL SELECT t.name
) t1
) t1
WHERE t.id = 4;
```
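Since the queries above are Postgres-specific, the intended transformation (including the Edit's requirement that the requested row's own name is appended) can be sketched in plain Python:

```python
# Rows from the question, as (name, arrayofparents) keyed by id.
rows = {
    1: ("First", []),
    2: ("Second", [1]),
    3: ("Second_Sec", [1, 2]),
    4: ("Third", [1, 2, 3]),
    5: ("Second_A", [1]),
    6: ("Other", []),
    7: ("Trash", [6]),
}

def full_path(row_id):
    """Parent names in array order, then the row's own name, joined by '->'."""
    name, parents = rows[row_id]
    return "->".join([rows[p][0] for p in parents] + [name])

print(full_path(4))  # First->Second->Second_Sec->Third
print(full_path(7))  # Other->Trash
print(full_path(6))  # Other
```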
|
Create string from array
|
[
"",
"sql",
"arrays",
"postgresql",
"unnest",
""
] |
In Go, when using a SQL database, does one need to close the DB (`db.Close`) before closing an application? Will the DB automatically detect that the connection has died?
|
The database will do its best to detect a dead connection, but it may fail to do so. It is better to release what you acquire as soon as possible.
The server's `send()` system call will keep waiting to send data over the TCP connection, but the client will never receive it:
1. A power failure, network issue or bare exit happened without properly releasing resources. The TCP keepalive mechanism will kick in and try to detect that the connection is dead.
2. The client is paused and doesn't receive any data; in this case `send()` will block.
As a result, an undetected dead connection may prevent:
1. Graceful shutdown of the cluster.
2. Advancing the event horizon if it was holding exclusive locks as part of a transaction, such as `auto vacuum` in PostgreSQL.
The server keepalive config can be shortened to detect this earlier (for example, the `~2h 12m` default in PostgreSQL can be very long depending on workload).
There may also be a hard limit on max open connections; until detection, some connections will be zombies (present and unusable, but counting against the limit).
|
The database will notice the connection had died and take appropriate actions: for instance, all uncommitted transactions active on that connection will be rolled back and the user session will be terminated.
But notice that this is a "recovery" scenario from the point of view of the database engine: it cannot just throw up its hands when a client disconnects; it rather has to take explicit actions to keep a consistent state.
On the other hand, shutting down properly when the program goes down "the normal way" (that is, not because of a panic or `log.Fatal()`) is really not that hard. And since the `sql.DB` instance is usually a program-wide global variable, it's even simpler: just try closing it in `main()` like Matt suggested.
|
Do we need to close DB connection before closing application in Go?
|
[
"",
"sql",
"database",
"go",
""
] |
I would like to aggregate two columns into one "array" when grouping.
Assume a table like so:
```
friends_map:
=================================
user_id friend_id confirmed
=================================
1 2 true
1 3 false
2 1 true
2 3 true
1 4 false
```
I would like to select from this table and group by user\_id and get friend\_id and confirmed as a concatenated value separated by a comma.
Currently I have this:
```
SELECT user_id, array_agg(friend_id) as friends, array_agg(confirmed) as confirmed
FROM friend_map
WHERE user_id = 1
GROUP BY user_id
```
which gets me:
```
=================================
user_id friends confirmed
=================================
1 [2,3,4] [t, f, f]
```
How can I get:
```
=================================
user_id friends
=================================
1 [ [2,t], [3,f], [4,f] ]
```
|
You can concatenate the values together prior to feeding them into the `array_agg()` function:
```
SELECT user_id, array_agg('[' || friend_id || ',' || confirmed || ']') as friends
FROM friends_map
WHERE user_id = 1
GROUP BY user_id
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!15/7d8f7/4/0)
|
You could avoid the ugliness of the multidimensional array and use some json which supports mixed datatypes:
```
SELECT user_id, json_agg(json_build_array(friend_id, confirmed)) AS friends
FROM friends_map
WHERE user_id = 1
GROUP BY user_id
```
Or use some `key : value` pairs since json allows that, so your output will be more *semantic* if you like:
```
SELECT user_id, json_agg(json_build_object(
'friend_id', friend_id,
'confirmed', confirmed
)) AS friends
FROM friends_map
WHERE user_id = 1
GROUP BY user_id;
```
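For comparison, SQLite has neither `array_agg` nor `json_agg`, but `GROUP_CONCAT` produces the same shape of result as the first answer's string-concatenation approach (element order inside the group is not guaranteed):

```python
import sqlite3

# friends_map data from the question; confirmed stored as 't'/'f'.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE friends_map (user_id INT, friend_id INT, confirmed TEXT)")
con.executemany("INSERT INTO friends_map VALUES (?, ?, ?)",
                [(1, 2, "t"), (1, 3, "f"), (2, 1, "t"), (2, 3, "t"), (1, 4, "f")])

# Build '[friend_id,confirmed]' pairs, then aggregate them per user.
rows = con.execute("""
    SELECT user_id, GROUP_CONCAT('[' || friend_id || ',' || confirmed || ']')
    FROM friends_map
    WHERE user_id = 1
    GROUP BY user_id
""").fetchall()
print(rows)  # e.g. [(1, '[2,t],[3,f],[4,f]')]
```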
|
Postgres - aggregate two columns into one item
|
[
"",
"sql",
"arrays",
"postgresql",
"aggregate-functions",
""
] |
I've written the following query, but I've realised that I need it to do the opposite of what it's currently doing.
I need to display everyone that has a payroll\_no in UG, where the number doesn't exist in esr.
```
select distinct ug.name, ug.payroll_no, esr.assignment
from user_group as ug
inner join esrtraining as esr
on ug.payroll_no = SUBSTRING(esr.assignment,2,8)
```
Any help appreciated. Thanks.
|
Please try this. It uses a `LEFT JOIN`, which brings back everything in the `ug` table, followed by a `WHERE` clause which keeps only those records that did not satisfy the match criteria:
```
select distinct ug.name, ug.payroll_no, esr.assignment
from user_group as ug
LEFT join esrtraining as esr
on ug.payroll_no = SUBSTRING(esr.assignment,2,8)
WHERE
esr.assignment is NULL
```
|
You need to do a `LEFT JOIN`, which will leave all `esr` fields as `NULL` for unmatched rows. From there, you may simply filter that result with `WHERE esr.assignment IS NULL`.
For this reason, `esr.assignment` will of course always be NULL in your select.
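A minimal runnable version of the anti-join, using SQLite from Python with invented stand-ins for `user_group` and `esrtraining`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_group (name TEXT, payroll_no TEXT)")
con.execute("CREATE TABLE esrtraining (assignment TEXT)")
con.executemany("INSERT INTO user_group VALUES (?, ?)",
                [("Alice", "111"), ("Bob", "222")])
# SUBSTR('X111', 2, 8) = '111', so Alice matches and Bob does not.
con.execute("INSERT INTO esrtraining VALUES ('X111')")

# LEFT JOIN + IS NULL keeps only the payroll numbers with no esr match.
rows = con.execute("""
    SELECT ug.name, ug.payroll_no
    FROM user_group ug
    LEFT JOIN esrtraining esr ON ug.payroll_no = SUBSTR(esr.assignment, 2, 8)
    WHERE esr.assignment IS NULL
""").fetchall()
print(rows)  # [('Bob', '222')]
```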
|
SQL - List those that DON'T match
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I am baffled as to why selecting my SQL View is so slow when using a table alias (25 seconds) but runs so much faster when the alias is removed (2 seconds)
**-this query takes 25 seconds.**
```
SELECT [Extent1].[Id] AS [Id],
[Extent1].[ProjectId] AS [ProjectId],
[Extent1].[ProjectWorkOrderId] AS [ProjectWorkOrderId],
[Extent1].[Project] AS [Project],
[Extent1].[SubcontractorId] AS [SubcontractorId],
[Extent1].[Subcontractor] AS [Subcontractor],
[Extent1].[ValuationNumber] AS [ValuationNumber],
[Extent1].[WorksOrderName] AS [WorksOrderName],
[Extent1].[NewGross],
[Extent1].[CumulativeGross],
[Extent1].[CreateByName] AS [CreateByName],
[Extent1].[CreateDate] AS [CreateDate],
[Extent1].[FinalDateForPayment] AS [FinalDateForPayment],
[Extent1].[CreateByEmail] AS [CreateByEmail],
[Extent1].[Deleted] AS [Deleted],
[Extent1].[ValuationStatusCategoryId] AS [ValuationStatusCategoryId]
FROM [dbo].[ValuationsTotal] AS [Extent1]
```
**-this query takes 2 seconds.**
```
SELECT [Id],
[ProjectId],
[Project],
[SubcontractorId],
[Subcontractor],
[NewGross],
[ProjectWorkOrderId],
[ValuationNumber],
[WorksOrderName],
[CreateByName],
[CreateDate],
[CreateByEmail],
[Deleted],
[ValuationStatusCategoryId],
[FinalDateForPayment],
[CumulativeGross]
FROM [dbo].[ValuationsTotal]
```
this is my SQL View code -
```
WITH ValuationTotalsTemp(Id, ProjectId, Project, SubcontractorId, Subcontractor, WorksOrderName, NewGross, ProjectWorkOrderId, ValuationNumber, CreateByName, CreateDate, CreateByEmail, Deleted, ValuationStatusCategoryId, FinalDateForPayment)
AS (SELECT vi.ValuationId AS Id,
v.ProjectId,
p.NAME,
b.Id AS Expr1,
b.NAME AS Expr2,
wo.OrderNumber,
SUM(vi.ValuationQuantity * pbc.BudgetRate) AS 'NewGross',
sa.ProjectWorkOrderId,
v.ValuationNumber,
up.FirstName + ' ' + up.LastName AS Expr3,
v.CreateDate,
up.Email,
v.Deleted,
v.ValuationStatusCategoryId,
sa.FinalDateForPayment
FROM dbo.ValuationItems AS vi
INNER JOIN dbo.ProjectBudgetCosts AS pbc
ON vi.ProjectBudgetCostId = pbc.Id
INNER JOIN dbo.Valuations AS v
ON vi.ValuationId = v.Id
INNER JOIN dbo.ProjectSubcontractorApplications AS sa
ON sa.Id = v.ProjectSubcontractorApplicationId
INNER JOIN dbo.Projects AS p
ON p.Id = v.ProjectId
INNER JOIN dbo.ProjectWorkOrders AS wo
ON wo.Id = sa.ProjectWorkOrderId
INNER JOIN dbo.ProjectSubcontractors AS sub
ON sub.Id = wo.ProjectSubcontractorId
INNER JOIN dbo.Businesses AS b
ON b.Id = sub.BusinessId
INNER JOIN dbo.UserProfile AS up
ON up.Id = v.CreateBy
WHERE ( vi.Deleted = 0 )
AND ( v.Deleted = 0 )
GROUP BY vi.ValuationId,
v.ProjectId,
p.NAME,
b.Id,
b.NAME,
wo.OrderNumber,
sa.ProjectWorkOrderId,
v.ValuationNumber,
up.FirstName + ' ' + up.LastName,
v.CreateDate,
up.Email,
v.Deleted,
v.ValuationStatusCategoryId,
sa.FinalDateForPayment)
SELECT Id,
ProjectId,
Project,
SubcontractorId,
Subcontractor,
NewGross,
ProjectWorkOrderId,
ValuationNumber,
WorksOrderName,
CreateByName,
CreateDate,
CreateByEmail,
Deleted,
ValuationStatusCategoryId,
FinalDateForPayment,
(SELECT SUM(NewGross) AS Expr1
FROM ValuationTotalsTemp AS tt
WHERE ( ProjectWorkOrderId = t.ProjectWorkOrderId )
AND ( t.ValuationNumber >= ValuationNumber )
GROUP BY ProjectWorkOrderId) AS CumulativeGross
FROM ValuationTotalsTemp AS t
```
Any ideas why this is?
The SQL query runs with table alias as this is generated from Entity Framework so I have no way of changing this. I will need to modify my SQL view to be able to handle the table alias without affecting performance.
|
The execution plans are very different.
The slow one has a part that leaps out as being problematic. It estimates a single row will be input to a nested loops join and result in a single scan of ValuationItems. In practice it ends up performing more than 1,000 such scans.
## Estimated
[](https://i.stack.imgur.com/Nh7nL.png)
## Actual

SQL Server 2014 introduced a new cardinality estimator. Your fast plan is using it. This is shown in the XML as `CardinalityEstimationModelVersion="120"` Your slow plan isn't (`CardinalityEstimationModelVersion="70"`).
So it looks as though in this case the assumptions used by the new estimator give you a better plan.
The reason for the difference is probably that the fast one is running cross-database (it references [ProbeProduction].[dbo].[ValuationsTotal]) and presumably the database you are executing it from has a compatibility level of 2014, so it automatically gets the new cardinality estimator.
The slow one is executing in the context of `ProbeProduction` itself, and I assume the compatibility level of that database must be < 2014 - so you are defaulting to the legacy cardinality estimator.
You can use `OPTION (QUERYTRACEON 2312)` to get the slow query to use the new cardinality estimator (changing the database compatibility mode to globally alter the behaviour shouldn't be done without careful testing of existing queries as it can cause regressions as well as improvements).
Alternatively you could just try and tune the query working within the limits of the legacy CE. Perhaps adding join hints to encourage it to use something more akin to the faster plan.
|
The two queries are different (column order!). It is reasonable to assume the first query uses an index and is therefore much faster. I doubt it has anything to do with the aliasing.
|
Select SQL View Slow with table alias
|
[
"",
"sql",
"entity-framework",
"t-sql",
"table-alias",
""
] |
The following doesn't work; how can I search data which is stored in Arabic?
```
SELECT * FROM `users` WHERE `name` LIKE = "%شريف%"
```
That gives me an SQL query error.
|
Use:
```
SELECT * FROM `users` WHERE `name` LIKE "%شريف%"
```
|
This has nothing to do with the language or character set being used. It's a simple syntax error.
To put it simply, don't use `LIKE` and `=` together. Use one or the other. I guess you mean `LIKE` in this case, so remove the `=` and your query will work.
|
How can I SELECT data which is in Arabic in MySQL?
|
[
"",
"mysql",
"sql",
"database",
"wampserver",
""
] |
I'm trying to select the current records on a daily basis, but the timestamp is causing problems in my query.
```
SELECT * FROM `collections` WHERE rdate_collect = curdate()
```
In the database the record is stored as `2015-12-08 15:30:12`. Is there a way I can compare only the date part (while keeping the time stored), so the query returns only the current day's records?
|
If you want all the records for the current date where the values are stored as `2015-12-08 15:30:12`, you can easily use the `date()` function over the column in the select, something like
`where date(rdate_collect) = curdate()`
The disadvantage is that it will never use an index even if the column is indexed, and when you have a large amount of data this becomes an issue since the query will be slow.
So you can format the query as
```
where
rdate_collect >= concat(curdate(),' ','00:00:00')
and rdate_collect <= concat(curdate(),' ','23:59:59')
```
Here is a live example: I have a `login_audit` table with a column `last_login` (datetime), and it's indexed.
So let's go step by step.
```
mysql> select count(*) from login_audit ;
+----------+
| count(*) |
+----------+
| 5188680 |
+----------+
1 row in set (0.00 sec)
```
I have a large number of records in the table.
Now let's use the `date()` function and see what the explain plan shows:
```
mysql> explain select count(*) from login_audit where date(last_login) = curdate() ;
+----+-------------+-------------+-------+---------------+----------------+---------+------+---------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+---------------+----------------+---------+------+---------+--------------------------+
| 1 | SIMPLE | login_audit | index | NULL | last_login_idx | 5 | NULL | 5188680 | Using where; Using index |
+----+-------------+-------------+-------+---------------+----------------+---------+------+---------+--------------------------+
```
From the explain plan it looks like MySQL may scan the entire table, so it will be a slow query.
Now change the query and see:
```
mysql> explain select count(*) from login_audit where last_login >= concat(curdate(),' ','00:00:00') and last_login <= concat(curdate(),' ','23:59:59') ;
+----+-------------+-------------+-------+----------------+----------------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------------+-------+----------------+----------------+---------+------+------+--------------------------+
| 1 | SIMPLE | login_audit | range | last_login_idx | last_login_idx | 5 | NULL | 1 | Using where; Using index |
+----+-------------+-------------+-------+----------------+----------------+---------+------+------+--------------------------+
```
So yes, this time it's better.
And here is the time difference between the queries:
```
mysql> select count(*) from login_audit where date(last_login) = curdate() ;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.92 sec)
mysql> select count(*) from login_audit where last_login >= concat(curdate(),' ','00:00:00') and last_login <= concat(curdate(),' ','23:59:59') ;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row in set (0.00 sec)
```
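The same sargability idea can be sanity-checked outside MySQL. Here is a small sqlite3 sketch (table and data invented) showing that a half-open range on the raw column — a common variant of the bounds above — returns the same rows as wrapping the column in `date()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collections (id INTEGER, rdate_collect TEXT)")
conn.executemany(
    "INSERT INTO collections VALUES (?, ?)",
    [(1, "2015-12-08 15:30:12"),
     (2, "2015-12-08 00:00:00"),
     (3, "2015-12-07 23:59:59")],
)

# Function on the column: correct, but defeats an index on rdate_collect.
wrapped = conn.execute(
    "SELECT id FROM collections WHERE date(rdate_collect) = '2015-12-08'"
).fetchall()

# Half-open range on the raw column: same rows, and index-friendly.
ranged = conn.execute(
    "SELECT id FROM collections "
    "WHERE rdate_collect >= '2015-12-08 00:00:00' "
    "AND rdate_collect < '2015-12-09 00:00:00'"
).fetchall()

print(wrapped, ranged)
```

The half-open upper bound (`< next day`) also avoids edge cases with sub-second timestamps that `<= '23:59:59'` can miss.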
|
```
SELECT * FROM `collections` WHERE date(rdate_collect) = curdate();
```
|
Getting today's record in MySQL ignoring time and hour
|
[
"",
"mysql",
"sql",
""
] |
The following works:
`SELECT * FROM myTable WHERE date_created BETWEEN '2014-10-05' AND '2015-12-31';`
However I tried the following:
`SELECT * FROM myTable WHERE date_created BETWEEN '2014-10%' AND '2015-12%';`
This doesn't work. Is there a way to select between two dates with a wildcard?
`SELECT * FROM myTable WHERE year(date_created) BETWEEN 2014 AND 2015 AND month(date_created) BETWEEN 10 AND 12`
I also tried the above; that doesn't select a range either, it just selects everything in `2014` and `2015` whose month is between `10` and `12`.
Is there a way I can make the following work?
`SELECT * FROM myTable WHERE date_created BETWEEN '2014-10%' AND '2015-12%';`
|
Use the [`LAST_DAY`](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_last-day) function to get the last day of the month of a date.
```
WHERE date_created BETWEEN DATE_FORMAT('2014-10-05', '%Y-%m-01') AND LAST_DAY('2015-12-02')
```
This is better than the solution that uses `DATE_FORMAT` on the `date_created` column because it can make use of an index on that column.
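`LAST_DAY` is MySQL-specific; as a hedged illustration of the same month-boundary computation, here is how sqlite3 derives both ends of the month with date modifiers (not MySQL syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# First and last day of the month containing 2015-12-02.
first, last = conn.execute(
    "SELECT date('2015-12-02', 'start of month'), "
    "       date('2015-12-02', 'start of month', '+1 month', '-1 day')"
).fetchone()

print(first, last)
```

Either way, the boundaries are computed from constants, so a comparison against the raw `date_created` column stays index-friendly.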
|
You can use `DATE_FORMAT` for this:
```
SELECT * FROM myTable WHERE DATE_FORMAT(date_created,"%Y-%m") BETWEEN
DATE_FORMAT('2014-10-05',"%Y-%m") AND DATE_FORMAT('2015-12-31',"%Y-%m") ;
```
|
MySQL Between Like for dates
|
[
"",
"mysql",
"sql",
""
] |
I didn't know a better way to word the title question.
Here's the sample table:
```
| email | subscription ref num | state |
|-------|----------------------|----------|
| 1@1.1 | 10 | inactive |
| 2@2.2 | 11 | inactive |
| 1@1.1 | 12 | inactive |
| 1@1.1 | 13 | active |
| 3@3.3 | 14 | active |
etc
```
I want to get all emails from the table that do not have an active subscription. I cannot just filter `WHERE state='inactive'`, because look at email address `1@1.1`: that user has old inactive subscriptions, but also a currently active one.
So for this sample db, I would only want to return email `2@2.2`. Hope that makes sense.
Can someone help me with the correct query to use?
|
...that do not have an active subscription:
```
select distinct email
from yourTable
where email not in
(select email from yourTable where state = 'active')
```
the query explains itself: select distinct emails that don't have active state in any row.
ADDED: You can create an index on the state column in MySQL. Also, this could be faster:
```
select distinct email
from yourTable
where not exists
(select * from yourTable as helper
where state = 'active' and helper.email = yourTable.email )
```
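Here is a quick sqlite3 sketch of the `NOT EXISTS` variant against the question's sample data (table name `subs` is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subs (email TEXT, refnum INTEGER, state TEXT)")
conn.executemany("INSERT INTO subs VALUES (?, ?, ?)", [
    ("1@1.1", 10, "inactive"),
    ("2@2.2", 11, "inactive"),
    ("1@1.1", 12, "inactive"),
    ("1@1.1", 13, "active"),
    ("3@3.3", 14, "active"),
])

# Keep only emails with no active subscription anywhere in the table.
rows = conn.execute("""
    SELECT DISTINCT email
    FROM subs AS t
    WHERE NOT EXISTS (SELECT 1 FROM subs AS h
                      WHERE h.state = 'active' AND h.email = t.email)
""").fetchall()
print(rows)
```

As expected, only `2@2.2` survives: `1@1.1` and `3@3.3` each have at least one active row.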
|
Here's one way...
```
select email
from sampletable
group by email
having max(state) = 'inactive' and min(state) = 'inactive'
```
|
How to filter based on column values in other rows?
|
[
"",
"mysql",
"sql",
""
] |
I just want to display descriptions containing the words **ROMEO JULIET**, even if they are not in chronological order.
```
ID DESC
1 This is a sample sentence created by romeo and juliet
2 This is a sample sentence with romeo juliet.
3 romeo and juliet.
4 How to query romeo juliet word.
5 This is not included because there no name here.
```
In the sample above, I want to display ids 1 to 4, because they contain the words romeo and juliet.
How can I do that using LIKE?
I tried the query below:
```
SELECT DESC FROM TableName WHERE DESC LIKE '%ROMEO JULIET%'
```
but the result is only id 2, since the names in ids 1, 3 and 4 are not in that order.
Thanks in advance.
|
You are looking for a statement where you search for ROMEO AND JULIET.
You can write it in SQL like this:
`SELECT... WHERE DESC LIKE '%romeo%' AND DESC LIKE '%juliet%'`
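As a quick check of this approach, here is a sqlite3 sketch over the sample rows (`DESC` is a reserved word, so the column is renamed `descr` here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, descr TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "This is a sample sentence created by romeo and juliet"),
    (2, "This is a sample sentence with romeo juliet."),
    (3, "romeo and juliet."),
    (4, "How to query romeo juliet word."),
    (5, "This is not included because there no name here."),
])

# Two independent LIKE tests: word order no longer matters.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM t "
    "WHERE descr LIKE '%romeo%' AND descr LIKE '%juliet%' ORDER BY id"
)]
print(ids)
```

Rows 1 through 4 match regardless of where each word appears; row 5 contains neither word.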
|
You can ask for both combination:
```
SELECT DESC
FROM TableName
WHERE DESC LIKE '%ROMEO%JULIET%'
OR DESC LIKE '%JULIET%ROMEO%'
```
|
Finding 2 words using LIKE even not in chronological order
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
Can we use the `GO` statement multiple times in a `SQL Transaction`? I have a long T-SQL script and I want to run it in a `SQL Transaction`. If all goes well I will commit, otherwise I will roll back.
But while running that query I got an error like `'create function must be the only statement in the batch'`, as I am creating and dropping many functions and procedures in it.
I have not used `GO` anywhere in the script. My question is: can I use the `GO` statement multiple times in that long script? `GO` creates a batch, and if the first batch executes successfully but a later one fails, will the `rollback transaction` statement actually be able to roll back what has already been executed?
Structure of my script looks like :
```
PRINT 'Transaction Started'
BEGIN TRY
BEGIN TRAN
Drop Function
....
....
Create Function
....
....
Drop Procedure
....
....
Lots of statements
....
....
COMMIT TRAN
PRINT 'Transaction Succeeded'
END TRY
BEGIN CATCH
PRINT 'Transaction Failed'
IF(@@TRANCOUNT > 0)
ROLLBACK TRAN
END CATCH
```
I am creating this script to migrate some changes from newDB to oldDB in a single script.
|
You are mixing concepts. `GO` is not a Transact-SQL concept, not part of the language, and not understood by SQL Server. `GO` is the tools *batch delimiter*. `sqlcmd.exe` and SSMS both are using, by default, `GO` as the batch delimiter. The batch delimiter is used to identify the individual *batches* inside the SQL source file. The client tool sends to the server one batch at a time (of course, omitting the delimiter).
Transactions can span batches. TRY/CATCH blocks cannot. CREATE/ALTER statements must be the only statement in a batch (comments are not statements, and statements contained in a function or procedure body are, well, contained).
Something similar to what you want to do can be achieved by starting a transaction and aborting the execution on the first error (`-b` at `sqlcmd.exe` start, or use [`:on error exit` in SSMS](https://msdn.microsoft.com/en-us/library/ms174187.aspx)).
But doing DDL inside long transactions is not going to work, especially if you plan to mix it with DML. *Most* corruptions I have had to investigate come from this combination (Xact, DDL + DML, rollback). I strongly recommend against it.
The sole way to deploy schema updates safely is to take a backup, deploy, restore from backup if something goes wrong.
Note that what Dan recommends (dynamic SQL) works because `sp_executesql` starts a new, inner, batch. This batch will satisfy the CREATE/ALTER restrictions.
|
Note that [GO is not a SQL keyword](https://stackoverflow.com/a/971199/27968). It is a client-side batch separator used by SQL Server Management Studio and other client tools.
GO has no effect on transaction scope. BEGIN TRAN will start a transaction on the current connection. COMMIT and ROLLBACK will end the transaction. You can execute as many statements as you want in-between. GO will execute the statements separately.
As specified by [MSDN](https://msdn.microsoft.com/en-us/library/ms175976.aspx?f=255&MSPPError=-2147217396):
> A TRYβ¦CATCH construct cannot span multiple batches.
So BEGIN TRY, END TRY, BEGIN CATCH, and END CATCH cannot be separated into separate batches by a GO separator. They must appear in the same query.
If you do try to include a batch separator in a TRY/CATCH statement like the invalid SQL below:
```
begin try
go
end try
begin catch
go
end catch
```
This will execute 3 different queries that return syntax errors:
1) `begin try`
```
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'begin'.
```
2) `end try begin catch`
```
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'try'.
```
3) `end catch`
```
Msg 102, Level 15, State 1, Line 6
Incorrect syntax near 'catch'.
```
|
Can we use 'GO' multiple times in SQL Transaction?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a table that contains prices for a particular item based upon the quantity being ordered and the type of client placing the order ...
```
ID Name Quantity ClientType Price/Unit ($)
========================================================
1 Cheese 10 Consumer 20
2 Cheese 20 Consumer 15
3 Cheese 30 Consumer 12
4 Cheese 10 Restaurant 18
5 Cheese 20 Restaurant 13
6 Cheese 30 Restaurant 10
```
I am having trouble with the WHERE clause in the SQL to select the row where the customer gets the best price based upon the quantity that is ordered. The rule is they must at least meet the quantity in order to get the price for that pricing tier. If their order is below the minimum quantity then they get the Price for the first quantity (10 in this case) and if they order more than the largest quantity (30 in this example) they get that price.
For example ... If a Restaurant orders 26 units of cheese the row with ID = 5 should be chosen. If a Consumer ordered 9 units of cheese then the row returned should be ID = 1. If the Consumer orders 50 units of cheese then they should get ID = 3.
```
declare @SelectedQuantity INT;
SELECT *
FROM PriceGuide
WHERE Name = 'Cheese'
AND ClientType = 'Consumer'
AND Quantity <= @SelectedQuantity
```
What am I missing in the WHERE clause?
|
**Edit**
The first solution didn't handle the special case correctly, as mentioned in the comments.
Next try:
```
SELECT TOP 1 ID, Name, Quantity, ClientType, [Price/Unit]
FROM PriceGuide
WHERE Name = 'Cheese'
AND ClientType = 'Consumer'
ORDER BY CASE WHEN Quantity <= @SelectedQuantity THEN Quantity ELSE -Quantity END DESC
```
Assuming that `Quantity` is positive, the `ORDER BY` will return rows that meet `Quantity <= @SelectedQuantity` condition first, in a descending order.
For rows that do not match this condition, it uses `-Quantity` for ordering. So if no rows match the condition, the one with smallest quantity will be returned.
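The `ORDER BY CASE` trick ports to other engines (`TOP 1` becomes `LIMIT 1` in sqlite3); here is a sketch verifying the three example orders from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE PriceGuide
    (ID INTEGER, Name TEXT, Quantity INTEGER, ClientType TEXT, Price INTEGER)""")
conn.executemany("INSERT INTO PriceGuide VALUES (?, ?, ?, ?, ?)", [
    (1, "Cheese", 10, "Consumer", 20),
    (2, "Cheese", 20, "Consumer", 15),
    (3, "Cheese", 30, "Consumer", 12),
    (4, "Cheese", 10, "Restaurant", 18),
    (5, "Cheese", 20, "Restaurant", 13),
    (6, "Cheese", 30, "Restaurant", 10),
])

def best_row(client, qty):
    # Rows whose tier is met sort first (largest met tier wins);
    # if no tier is met, -Quantity makes the smallest tier win.
    return conn.execute("""
        SELECT ID FROM PriceGuide
        WHERE Name = 'Cheese' AND ClientType = ?
        ORDER BY CASE WHEN Quantity <= ? THEN Quantity ELSE -Quantity END DESC
        LIMIT 1
    """, (client, qty)).fetchone()[0]

print(best_row("Restaurant", 26), best_row("Consumer", 9), best_row("Consumer", 50))
```

This reproduces the question's three cases: 26 units for a Restaurant picks ID 5, 9 units for a Consumer falls back to ID 1, and 50 units for a Consumer picks ID 3.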
|
This is a little tricky because you need to deal with the quantities less than 10.
I think the best approach is:
```
SELECT TOP 1 *
FROM PriceGuide
WHERE Name = 'Cheese' AND ClientType = 'Consumer'
ORDER BY (CASE WHEN @SelectedQuantity >= Quantity THEN 1 ELSE 0 END) DESC,
             (CASE WHEN @SelectedQuantity >= Quantity THEN [Price/Unit] END) ASC,
Quantity ASC;
```
This version handles the minimum quantity by keeping all the rows for a given `Name`/`ClientType`, using the `ORDER BY` for prioritization.
|
SQL - Select best price based on quantity
|
[
"",
"sql",
"sql-server",
""
] |
I have a table like so:
firstName - lastName - dob - region - authority
and this is the query I'm using:
```
SELECT firstName, lastName, region from whatever
```
Is it possible to merge firstName and lastName within the result? I need to keep firstName and lastName separate in the actual database, but merge them in the result like so:
firstname lastname - region
|
Your query would be like:
**Oracle:**
```
SELECT firstname || ' ' || lastname as name, region from whatever
```
**MySQL:**
```
SELECT CONCAT(firstname ,' ', lastname) as name, region from whatever
```
**MSSQL:**
```
SELECT firstname + ' ' + lastname as name , region from whatever
```
|
Try it..
```
SELECT CONCAT(firstName,' ',lastName) as Name, region from whatever
```
|
SQL merge result of two columns
|
[
"",
"sql",
""
] |
I'm currently working on a project where I have a list of dental patients, and I'm supposed to be displaying all the patients who have two specific procedure codes attached to their profiles. These patients must have BOTH procedure codes, not one or the other. At first I thought I could accomplish this by using a basic AND statement in my WHERE clause, like so.
```
SELECT [stuff]
FROM [this table]
WHERE ProcedureCode = 'ABC123' AND ProcedureCode = 'DEF456';
```
The query of course returns nothing because the codes are entered individually and you can't have both Procedure 1 and Procedure 2 simultaneously.
I tried switching the "AND" to "OR" just out of curiosity. Of course I'm now getting results for patients who only have one code or the other, but the patients who have both are showing up too, and the codes are displayed as separate results, like so:
```
Patient ID Last Name First Name Code Visit Date
1111111 Doe Jane ABC123 11-21-2015
5555555 Smith John ABC123 12-08-2015
5555555 Smith John DEF456 12-08-2015
```
My SQL is pretty rusty these days. I'm trying to think of a way to filter out patients like Jane Doe and only include patients like John Smith who have both procedure codes. Ideas?
**ADDING INFO BASED ON CHRISTIAN'S ANSWER:**
This is what the updated query looks like:
```
SELECT PatientID, LastName, FirstName, Code, VisitDate
FROM VisitInfo
WHERE PatientID IN
(
SELECT PatientID
FROM VisitInfo
WHERE Code = 'ABC123' OR Code = 'DEF456'
GROUP BY PatientID
HAVING COUNT(*) > 1
)
AND (Code = 'ABC123' OR Code = 'DEF456');
```
So I'm still getting results like the following where a patient is only showing one procedure code but possibly multiple instances of it:
```
Patient ID Last Name First Name Code Visit Date
1111111 Doe Jane ABC123 11-02-2015
1111111 Doe Jane ABC123 11-21-2015
5555555 Smith John ABC123 12-08-2015
5555555 Smith John DEF456 12-08-2015
5555555 Smith John ABC123 12-14-2015
9999999 Jones Mike DEF456 11-22-2015
9999999 Jones Mike DEF456 12-06-2015
```
Even though Jane Doe and Mike Jones have 2 results, they're both the same code, so we don't want to include them. And even though John Smith still has 2 of the same code, his results also include both codes, so we want to keep him and other patients like him.
**ANOTHER UPDATE:**
I just learned that I now need to include a few basic demographic details for the patients in question, so I've joined my VisitInfo table with a PatientInfo table. The updated query looks like this:
```
SELECT v.PatientID, v.LastName, v.FirstName, v.Code, v.VisitDate, p.DateOfBirth, p.Gender, p.PrimaryPhone
FROM VisitInfo v JOIN PatientInfo p ON v.PatientID = p.PatientID
WHERE v.PatientID IN
(
SELECT PatientID
FROM VisitInfo
WHERE Code = 'ABC123' OR Code = 'DEF456'
GROUP BY PatientID
HAVING COUNT(*) > 1
)
AND (Code = 'ABC123' OR Code = 'DEF456');
```
I wasn't sure if the new JOIN would affect anyone's answers...
|
There's a bunch of ways to skin this particular cat - here's another one:
```
WITH ABC123 AS (SELECT DISTINCT PATIENTID
FROM VISITINFO
WHERE PROCEDURECODE = 'ABC123'),
DEF456 AS (SELECT DISTINCT PATIENTID
FROM VISITINFO
WHERE PROCEDURECODE = 'DEF456')
SELECT v.PATIENTID, v.LASTNAME, v.FIRSTNAME, v.PROCEDURECODE, v.VISITDATE,
p.DateOfBirth, p.Gender, p.PrimaryPhone
FROM VISITINFO v
INNER JOIN ABC123 a
ON a.PATIENTID = v.PATIENTID
INNER JOIN DEF456 d
ON d.PATIENTID = v.PATIENTID
INNER JOIN PatientInfo p
ON v.PatientID = p.PatientID
WHERE v.PROCEDURECODE IN ('ABC123', 'DEF456')
ORDER BY v.PATIENTID, v.VISITDATE, v.PROCEDURECODE;
```
What we're doing here is using a couple of CTEs to get the PATIENTIDs of each patient who had the procedures in question performed. We then start with all records in VISITINFO and inner join those with the two CTEs. Because an INNER JOIN requires that matching information exist in the tables on both sides of the join, this has the effect of retaining only the visits which match the information in both of the CTEs, based on the join criterion, which in this case is the PATIENTID.
Best of luck.
|
```
SELECT *
FROM TABLE
WHERE Patient_ID IN
(
SELECT Patient_ID
FROM TABLE
WHERE Code = 'ABC123' OR Code = 'DEF456'
GROUP BY Patient_ID
HAVING COUNT(*) = 2
)
AND (Code = 'ABC123' OR Code = 'DEF456')
```
UPDATE 1:
As a patient can have multiple 'procedure codes', this way will work better:
```
SELECT *
FROM TABLE T1
WHERE EXISTS (SELECT 1
FROM TABLE T2
WHERE T1.Patient_ID = T2.Patient_ID
AND T2.Code = 'ABC123')
AND EXISTS (SELECT 1
FROM TABLE T2
WHERE T1.Patient_ID = T2.Patient_ID
AND T2.Code = 'DEF456')
AND T1.Code IN ('ABC123','DEF456')
```
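Here is a reduced sqlite3 sketch of the double-`EXISTS` approach against the sample patients (only the two relevant columns are kept):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE VisitInfo (PatientID INTEGER, Code TEXT)")
conn.executemany("INSERT INTO VisitInfo VALUES (?, ?)", [
    (1111111, "ABC123"), (1111111, "ABC123"),
    (5555555, "ABC123"), (5555555, "DEF456"), (5555555, "ABC123"),
    (9999999, "DEF456"), (9999999, "DEF456"),
])

# Each EXISTS checks for one required code; both must be satisfied.
rows = conn.execute("""
    SELECT PatientID, Code FROM VisitInfo AS T1
    WHERE EXISTS (SELECT 1 FROM VisitInfo AS T2
                  WHERE T1.PatientID = T2.PatientID AND T2.Code = 'ABC123')
      AND EXISTS (SELECT 1 FROM VisitInfo AS T2
                  WHERE T1.PatientID = T2.PatientID AND T2.Code = 'DEF456')
      AND T1.Code IN ('ABC123', 'DEF456')
""").fetchall()
print(rows)
```

Only John Smith's three visits come back; Jane Doe and Mike Jones are excluded even though each has two rows, because their rows share a single code.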
|
SQL Query: Using AND/OR in the WHERE clause
|
[
"",
"sql",
"sql-server",
"where-clause",
""
] |
I have this table with some sample data:
```
Supplier_ID Product_ID Stock
-----------------------------
1 272 1
1 123 5
1 567 3
1 564 3
2 272 4
2 12 3
2 532 1
3 123 4
3 272 5
```
I want to check the suppliers that have both products: `272` and `123`, so the result would be like:
```
Supplier_ID
-----------
1
3
```
|
Try this code:
```
SELECT A.Supplier_ID FROM
(SELECT Supplier_ID FROM Your_Table WHERE Product_ID = 272) AS A
INNER JOIN
(SELECT Supplier_ID FROM Your_Table WHERE Product_ID = 123) AS B
ON A.Supplier_ID = B.Supplier_ID
```
|
You can use `GROUP BY` and `HAVING`:
```
SELECT Supplier_ID
FROM your_tab
WHERE Product_ID IN (272, 123)
GROUP BY Supplier_ID
HAVING COUNT(DISTINCT Product_ID) = 2;
```
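A sqlite3 sketch confirming this against the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE your_tab (Supplier_ID INTEGER, Product_ID INTEGER, Stock INTEGER)"
)
conn.executemany("INSERT INTO your_tab VALUES (?, ?, ?)", [
    (1, 272, 1), (1, 123, 5), (1, 567, 3), (1, 564, 3),
    (2, 272, 4), (2, 12, 3), (2, 532, 1),
    (3, 123, 4), (3, 272, 5),
])

# COUNT(DISTINCT ...) = 2 means the supplier has both required products.
suppliers = [r[0] for r in conn.execute("""
    SELECT Supplier_ID FROM your_tab
    WHERE Product_ID IN (272, 123)
    GROUP BY Supplier_ID
    HAVING COUNT(DISTINCT Product_ID) = 2
    ORDER BY Supplier_ID
""")]
print(suppliers)
```

Supplier 2 only carries product 272, so it is filtered out by the `HAVING` clause.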
`LiveDemo`
|
SQL Server : how to select the rows in a table with the same value on a column but some exact values on another column for the grouped rows
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query that I have been working on for a few days, and none of the tutorials (w3schools, MSDN, etc.) help with this particular case. I can't figure out how to make CASE work in the following:
```
SELECT T2.SlpName, T0.CardName, T0.DocNum, T0.DocType, T0.DocTotal,
CASE T0.CANCELED WHEN 'N' THEN
(T0.DocTotal - T0.VatSum - T0.TotalExpns) As "Total du document sans TVA",
SUM(T1.LineTotal * (T1.Commission / 100)) As "Total des commissions"
ELSE
((T0.DocTotal - T0.VatSum - T0.TotalExpns) * -1) As "Total du document sans TVA",
(SUM(T1.LineTotal * (T1.Commission / 100)) * -1) As "Total des commissions"
END
FROM OINV T0
INNER JOIN INV1 T1 ON T0.DocEntry = T1.DocEntry
INNER JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode
--WHERE T2.SlpName = N'[%0]'
--AND T0.DocDate >= '[%1]'
--AND T0.DocDate <= '[%2]'
GROUP BY T2.SlpName, T0.CardName, T0.DocNum, T0.DocType, T0.DocTotal, T0.VatSum, T0.TotalExpns, T0.CANCELED
UNION ALL
SELECT T2.SlpName, T0.CardName, T0.DocNum, T0.DocType, T0.DocTotal * -1, T0.CANCELED,
(T0.DocTotal - T0.VatSum - T0.TotalExpns) * -1 As "Total du document sans TVA",
SUM(T1.LineTotal * (T1.Commission / 100)) * -1 As "Total des commissions"
FROM ORIN T0
INNER JOIN RIN1 T1 ON T0.DocEntry = T1.DocEntry
INNER JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode
--WHERE T2.SlpName = N'[%0]'
--AND T0.DocDate >= '[%1]'
--AND T0.DocDate <= '[%2]'
GROUP BY T2.SlpName, T0.CardName, T0.DocNum, T0.DocType, T0.DocTotal, T0.VatSum, T0.TotalExpns , T0.CANCELED
ORDER BY SlpName, DocNum
```
I have tried multiple variations of this query to no avail. Thank you for any and all help.
|
I think you're looking for something like this:
```
CASE T0.CANCELED WHEN 'N' THEN
(T0.DocTotal - T0.VatSum - T0.TotalExpns)
ELSE
((T0.DocTotal - T0.VatSum - T0.TotalExpns) * -1)
END As "Total du document sans TVA",
CASE T0.CANCELED WHEN 'N' THEN
SUM(T1.LineTotal * (T1.Commission / 100))
ELSE
(SUM(T1.LineTotal * (T1.Commission / 100)) * -1)
END As "Total des commissions"
```
`CASE` doesn't work like `if` in procedural languages; it evaluates a condition and returns a single value based on that comparison.
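The point that each `CASE` yields exactly one value per output column can be seen in a tiny sqlite3 sketch (made-up `docs` table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (Canceled TEXT, Total REAL)")
conn.executemany("INSERT INTO docs VALUES (?, ?)", [("N", 100.0), ("Y", 40.0)])

# One CASE expression produces one column; a canceled document
# simply gets its total sign-flipped.
rows = conn.execute("""
    SELECT CASE Canceled WHEN 'N' THEN Total ELSE Total * -1 END
    FROM docs ORDER BY Canceled
""").fetchall()
print(rows)
```

To flip two columns, as in the question, you therefore need two separate `CASE` expressions, one per column.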
|
`case` is an expression that returns a single value (well, that is redundant; that is what expressions do). So, use it twice:
```
SELECT T2.SlpName, T0.CardName, T0.DocNum, T0.DocType, T0.DocTotal,
(CASE WHEN T0.CANCELED = 'N'
THEN (T0.DocTotal - T0.VatSum - T0.TotalExpns)
ELSE (SUM(T1.LineTotal * (T1.Commission / 100)) * -1)
END) As "Total du document sans TVA",
(CASE WHEN T0.CANCELED = 'N'
THEN SUM(T1.LineTotal * (T1.Commission / 100))
ELSE (SUM(T1.LineTotal * (T1.Commission / 100)) * -1)
END) As "Total des commissions"
```
|
Using CASE on a view with UNION and JOINS
|
[
"",
"sql",
"sql-server",
"case",
"inner-join",
"union",
""
] |
Suppose I have two table `Gardners table` and `Plantings table`.
[](https://i.stack.imgur.com/K3jeZ.png)
Suppose my query is:
```
SELECT gid, first_name, last_name, pid, gardener_id, plant_name
FROM Gardners
INNER JOIN Plantings
ON gid = gardener_id
```
I want to know: how exactly does it work internally?
As per my understanding in every `join` condition:
* Each row from `Gardner Table` will be compared with each row of `Plantings Table`. If the condition is matched then it will print out. Is my understanding correct?
In terms of program if you think:
```
for i in [Gardners Table]:
for j in [Plantings Table]:
    if i.gid == j.gardener_id:
print <>
```
Now suppose if you query is something like:
```
User(uid,uname,ucity,utimedat)
Follows(uid,followerid) // uid and followerid are pointing to `uid` of `User`.
SELECT U.uid, U.uname FROM User U1 JOIN Follows F,User U2 WHERE
F.followerid = U2.uiddate AND U2.ucity = 'D'
```
How will the join condition work internally here? Is it equivalent to:
```
for i in [USER]:
for j in [Follows]:
for k in [USER]:
if condition:
print <>
```
|
Your example with `Gardners table` and `Plantings table` is correct, but the example with `users` is not so obvious.
I think that what you want to get is a user's followers from some city.
Assuming the correct query is:
```
SELECT U1.uid, U2.uname
FROM User U1
JOIN Follows F ON U1.uid = F.uid
JOIN User U2 ON F.followerid = U2.uid
WHERE U2.ucity = 'D'
```
Then in pseudo code it'll look like this:
```
for i in [User Table]:
for j in [Follows Table]:
if i.uid = j.uid:
for k in [User Table]:
if j.followerid = k.uid and k.city = 'D':
print <>
```
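The pseudo code above can be made runnable directly; here is a plain-Python sketch of the same naive nested-loop evaluation (sample data invented):

```python
# Naive nested-loop three-way join over plain Python lists,
# mirroring: User U1 JOIN Follows F ON U1.uid = F.uid
#            JOIN User U2 ON F.followerid = U2.uid WHERE U2.ucity = 'D'
users = [
    {"uid": 1, "uname": "ann", "ucity": "D"},
    {"uid": 2, "uname": "bob", "ucity": "E"},
    {"uid": 3, "uname": "cat", "ucity": "D"},
]
follows = [
    {"uid": 1, "followerid": 3},
    {"uid": 2, "followerid": 1},
]

result = []
for u1 in users:                      # FROM User U1
    for f in follows:                 # JOIN Follows F
        if u1["uid"] == f["uid"]:
            for u2 in users:          # JOIN User U2
                if f["followerid"] == u2["uid"] and u2["ucity"] == "D":
                    result.append((u1["uid"], u2["uname"]))
print(result)
```

Real engines replace these inner loops with hash or index lookups, but the logical result is the same as this triple loop.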
SQL Fiddle for this: <http://sqlfiddle.com/#!9/caeb1e/5>
A very good picture of how joins actually work can be found here: <http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins>
[](https://i.stack.imgur.com/cPl2H.jpg)
|
In your second query, it's not clear what you're trying to do exactly as the syntax is erroneous; but if I were to guess, it seems like your intention is to join User U1 with a sub query of (implicit) join between Followers F and User U2.
If my guess is correct, the query would properly look more like this:
```
SELECT U1.uid, U1.uname
FROM User U1 JOIN
(SELECT U2.uid
FROM Followers F,User U2
WHERE F.followerid = U2.uiddate AND U2.ucity = 'D') T
WHERE u1.uid = T.uid
```
Which is not a 'best practice' way of writing the query either (you should use explicit joins, there's no need for a sub-query but you can just join three times, and so on)
But I wrote it this way to keep it closest to your original query.
And if my guess is correct, then your pseudo code would be more like:
```
for u2 in [User 2 where condition]:
for f in [Follows]:
if f.uid == u2.uid
SELECT uid AS T
for u1 in [User 1]:
if u1.uid == T.uid:
print <>
```
However, it's not a fully explained interpretation, because one key to understanding SQL is to think in terms of sets of data being filtered, rather than sequential selection of individual rows; SQL operates on sets of data, which one might not be used to.
So a number of the above steps will be executed in one go, instead of sequentially. But other than that, you should look towards the answer given by Yuri Tkachenko above for how to view joins — the internals come second once you can write correct joins.
|
How the SQL join actually works?
|
[
"",
"mysql",
"sql",
"join",
"computer-science",
""
] |
Let's say I have 3 tables: `enterprise`, `users`, `houses`.
Everyone needs an address. What should I do?
* Create a table address for everyone?
* Create one unique table address?
Assuming that the first is "more" correct (cache, fragmentation, size, ...), how should I write it in plain SQL?
How do I create a "model" of the table address, and use it in custom tables (e.g. enterprise\_address)?
\* So when I change one field in the model, it gets replicated.
|
OK, I found the answer for PostgreSQL: use `INHERITS`
e.g. `CREATE TABLE name () INHERITS (table_model)`
Thank you all for the answers.
|
First question: how many addresses per enterprise/user/house? If there are multiple addresses per entity, that leads you to want a separate address table, possibly with an address type.
Examples of multiple address situations:
* Multiple locations
* Ship-to addresses
* Physical vs. PO Box
If you are going to assert that an entity can only have one address, then you probably prefer making the address part of the entity row, for simplicity. However, there can be reasons not to do that; for example, table width (number of columns in the table).
If you have multiple addresses per entity, typically you will use a foreign key. The entity should have an abstract key: some kind of ID. This will either be an integer or, in SQL Server, it could be a uniqueidentifier. The entity ID will be part of the key for the address.
|
How do I create a table model in SQL?
|
[
"",
"sql",
"performance",
""
] |
I have this very simple case in my select statement
```
CASE WHEN PHONENR IS NOT NULL THEN '+46'+PHONENR ELSE ' ' END AS Phone
```
My issue here: the country code is added correctly in front of all existing numbers, but the empty rows also get a value of +46 instead of staying empty.
|
Use [LEN](https://msdn.microsoft.com/en-gb/library/ms190329.aspx) instead
```
CASE WHEN LEN(PHONENR)>1 THEN ....
```
|
You can use
```
CASE WHEN PHONENR <> '' THEN '+46'+PHONENR ELSE ' ' END AS Phone
```
The first branch is only entered if PHONENR is neither NULL nor an empty string.
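The underlying subtlety is that the "empty" rows hold empty strings, not NULLs, so `IS NOT NULL` is true for them. A sqlite3 sketch (`||` instead of `+` for concatenation) shows that the `<> ''` test handles both cases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (PHONENR TEXT)")
# One real number, one empty string, one genuine NULL.
conn.executemany("INSERT INTO t VALUES (?)", [("701234567",), ("",), (None,)])

# PHONENR <> '' is false for '' and NULL (unknown) for NULL,
# so both fall through to the ELSE branch.
phones = [r[0] for r in conn.execute(
    "SELECT CASE WHEN PHONENR <> '' THEN '+46' || PHONENR ELSE ' ' END FROM t"
)]
print(phones)
```

Only the real number gets the country code; both the empty string and the NULL end up as the blank placeholder.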
|
Select case not working as expected
|
[
"",
"sql",
"sql-server",
"case",
""
] |
Suppose I have a table like this:
Table1
```
EmployeeID Month sym Quantity V_M BudjetCode
222222 1 40 133.35 1214 800000
222222 2 40 178.50 115 800000
222222 3 40 150.33 215 800000
222222 4 40 186.37 315 800000
222222 5 40 127.38 415 800000
222222 6 40 153.00 515 800000
222222 7 40 178.50 615 800000
222222 7 40 8.50 615 700052015
222222 8 40 187.00 715 800000
```
And I would like to change this table to:
Table 2
```
EmployeeID Month sym Quantity V_M BudjetCode
222222 1 40 133.35 1214 800000
222222 2 40 178.50 115 800000
222222 3 40 150.33 215 800000
222222 4 40 186.37 315 800000
222222 5 40 127.38 415 800000
222222 6 40 161.5 515 800000
222222 7 40 178.50 615 800000
222222 8 40 187.00 715 800000
```
How?
See the row in table 1 where the BudjetCode is unusual?
Well, for this row I would like to add the Quantity value (8.5) to the row whose V_M is one less than the V_M of the original row (the row where the BudjetCode is 700052015).
In the example, the original V_M was 615, so one less is 515 (615 means date 6.15 and 515 means 5.15), and to that row I need to add the quantity (8.5 added to 153 gives 161.5).
I was thinking of "over partition":
```
select [EmployeeID],[Month],[Sym],
sum([Quantity])
over (partition by [EmployeeID], [V_M]-1 where???) as b
from table1
where [Sym] = '40' and [EmployeeID] = 222222
order by [Month]
```
But I don't know how to sum this up using the criterion that the BudjetCode starts with "700".
Note: don't do this for the first row.
Update:
```
EmployeeID month Quantity V_M MS_BudjetCode
22222 1 40 133.35 1214 88888888
22222 2 40 178.50 115 88888888
22222 3 40 150.33 215 88888888
22222 4 40 186.37 315 88888888
22222 5 40 127.38 415 88888888
22222 6 40 153.00 515 88888888
22222 7 40 8.50 615 700000000
22222 7 40 178.50 615 88888888
22222 8 40 187.00 715 88888888
```
Output:
```
22222 2 40 178.50 115 88888888
22222 4 40 186.37 315 88888888
22222 8 40 187.00 715 88888888
22222 3 40 151.33 215 88888888
22222 3 40 151.33 215 88888888
22222 3 40 151.33 215 88888888
22222 3 40 149.33 215 88888888
22222 3 40 149.33 215 88888888
22222 5 40 127.38 415 88888888
22222 6 40 152.00 515 88888888
22222 6 40 154.00 515 88888888
22222 6 40 152.00 515 88888888
22222 6 40 154.00 515 88888888
22222 6 40 154.00 515 88888888
22222 6 40 152.00 515 88888888
22222 6 40 161.50 515 88888888
22222 7 40 178.50 615 88888888
22222 1 40 133.35 1214 88888888
```
Query:
```
SELECT t1.EmployeeID, t1.Month, t1.sym,
t1.Quantity + COALESCE(t2.Quantity, 0),
t1.V_M, t1.BudjetCode
FROM Table1 AS t1
LEFT JOIN Table1 AS t2
ON t1.EmployeeID = t2.EmployeeID AND t1.V_M = t2.V_M - 100 AND
SUBSTRING(t2.BudjetCode,1,3) = '700'
WHERE SUBSTRING(t1.BudjetCode,1,3) <> '700' and sym='40' and EmployeeID = 22222
```
|
This may be a solution. I think we can't simply subtract 1 from the month; the January/December wrap-around also needs to be considered, and that check is included below. Here the budget code is distinguished by its length; you can change that if needed.
```
;WITH CTETable1 AS
(
    SELECT
        EmployeeID,[Month],sym,
        CASE WHEN LEN(BudjetCode)>6 THEN Quantity ELSE 0 END Quantity,
        CASE WHEN LEN(BudjetCode)>6 THEN
            (CASE WHEN LEFT(V_M,LEN(V_M)-2)=1 THEN 12 ELSE LEFT(V_M,LEN(V_M)-2)-1 END)*100+
            (CASE WHEN LEFT(V_M,LEN(V_M)-2)=1 THEN RIGHT(V_M,2)-1 ELSE RIGHT(V_M,2) END) ELSE 0 END V_M,
        BudjetCode
    FROM Table1
)
SELECT
    T1.EmployeeID,T1.[Month],T1.sym,T1.Quantity+ISNULL(T2.Quantity,0) Quantity,
    T1.V_M,T1.BudjetCode
FROM Table1 T1
LEFT JOIN CTETable1 T2 ON T1.EmployeeID=T2.EmployeeID AND T1.V_M=T2.V_M
WHERE LEN(T1.BudjetCode)=6
```
|
I think you can do what you want using a simple `LEFT JOIN` operation:
```
SELECT t1.EmployeeID, t1.Month, t1.sym,
t1.Quantity + COALESCE(t2.Quantity, 0),
t1.V_M, t1.BudjetCode
FROM Table1 AS t1
LEFT JOIN Table1 AS t2
ON t1.EmployeeID = t2.EmployeeID AND t1.V_M = t2.V_M - 100 AND
SUBSTRING(t2.BudjetCode,1,3) = '700'
WHERE SUBSTRING(t1.BudjetCode,1,3) <> '700'
```
The query selects all table rows *but* the 'unusual' ones and joins them with the 'unusual' rows one month ahead (the `- 100` steps `V_M` by one month). The `Quantity` returned is the sum of the value of the 'normal' row plus the value of the related 'unusual' row (if one exists).
Expression:
```
t1.V_M = t2.V_M - 100
```
implements the *one less* relation between the `V_M` of `t1` and `t2`. You may change it to suit your actual needs.
[**Demo here**](http://sqlfiddle.com/#!6/7e7f4/1)
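The join logic can be sanity-checked without a SQL Server instance. Below is a small sketch using an in-memory SQLite database; the sample rows are invented to mirror the question's data, and SQLite's `SUBSTR` stands in for `SUBSTRING`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Table1 (
    EmployeeID INT, Month INT, sym TEXT, Quantity REAL, V_M INT, BudjetCode TEXT)""")
rows = [
    (222222, 5, '40', 153.0, 515, '800000'),     # normal May row
    (222222, 6, '40', 8.5,   615, '700052015'),  # unusual row: belongs to previous month
    (222222, 6, '40', 178.5, 615, '800000'),     # normal June row
]
con.executemany("INSERT INTO Table1 VALUES (?,?,?,?,?,?)", rows)

# Join each normal row to the '700...' row one V_M step (+100) ahead
result = con.execute("""
    SELECT t1.V_M, t1.Quantity + COALESCE(t2.Quantity, 0)
    FROM Table1 t1
    LEFT JOIN Table1 t2
      ON t1.EmployeeID = t2.EmployeeID AND t1.V_M = t2.V_M - 100
     AND SUBSTR(t2.BudjetCode, 1, 3) = '700'
    WHERE SUBSTR(t1.BudjetCode, 1, 3) <> '700'
    ORDER BY t1.V_M""").fetchall()
print(result)  # [(515, 161.5), (615, 178.5)]
```

The May row picks up the 8.5 from the unusual June row, while the normal June row is unchanged.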
|
Sum or join rows by criteria
|
[
"",
"sql",
"sql-server-2008",
"select",
"sql-update",
""
] |
I want to change the value in a result using a `case` statement, but it seems to be ignored. Very simple: if the value in the table is `1` then it must change to `True`, and if the value is `0` then it must change to `False`. Here is what I did:
```
CASE pbc.bShownUp
WHEN 0 THEN 'False'
ELSE pbc.bShownUp
END
AND
CASE ch.bShownUp
WHEN 1 THEN 'True'
ELSE ch.bShownUp
END
```
No matter what I do, the result is still 0 or 1.
Please help.
|
When you're using a `CASE` expression, the return type has to be the same for all `WHEN` parts; more simply put, `CASE` must resolve to a single type.
More reference [here](https://stackoverflow.com/questions/5214613/order-by-and-different-types-in-a-case) and [here](https://stackoverflow.com/questions/15305925/trouble-with-a-case-statement-in-a-sql-server-stored-procedure/15306015#15306015)
From a Type point of view, at the moment your code says
```
CASE BIT
WHEN 1 then NVARCHAR
ELSE BIT
```
And that won't work.
So you have to do something like this
```
CASE ch.bShownUp
WHEN 1 then 'TRUE'
ELSE 'FALSE'
```
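To see the string-only `CASE` in action, here is a quick check with SQLite (the table name and data are made up; SQLite is dynamically typed, so this only demonstrates the intended output, not SQL Server's type-resolution rules):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ch (bShownUp INT)")
con.executemany("INSERT INTO ch VALUES (?)", [(1,), (0,)])

# Both branches now resolve to the same (string) type
rows = con.execute("""
    SELECT CASE bShownUp WHEN 1 THEN 'TRUE' ELSE 'FALSE' END
    FROM ch ORDER BY bShownUp DESC""").fetchall()
print(rows)  # [('TRUE',), ('FALSE',)]
```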
|
If `bShownUp` is a numeric column, you are producing a mixed result: sometimes 0/1, sometimes the strings 'False'/'True'. If instead `bShownUp` is of type VARCHAR, you have to change `WHEN 0` to `WHEN '0'`.
|
Use case statement in SQL
|
[
"",
"sql",
"sql-server",
"case",
""
] |
I would like to extract two strings from an HTTPS address. I'm using Oracle 11g; here are my data and my attempted solution:
```
with mcte as (
select 'https://id.adsf.fi/dfad2/services/org/Organizations/9884752-6/2542595-3/Corlsdf-asdf/Corlsdf/MOB50202282' as addr from dual
union all
select 'https://id.asdf.fi/dfad2/services/org/Organizations/2435213-2/Corlsdf-asdf/Corlsdf/KVY239229' as addr from dual
union all
select 'https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0' as addr from dual
union all
select 'https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0/6356324-0' as addr from dual
union all
select 'https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0/6356324-0/' as addr from dual
)
select addr,
REGEXP_SUBSTR(addr, '(Organizations\/)(\d+)', 1, 1, null, 2) AS num1,
REGEXP_SUBSTR(addr, '(Organizations\/)(\d+)', 1, 1, null, 3) AS num2
from mcte
;
```
This does not return the correct result, which should look like this for NUM1 and NUM2:
```
+----------------------------------------------------------------------------------------------------------+-----------+-----------+
| id | num1 | num2 |
+----------------------------------------------------------------------------------------------------------+-----------+-----------+
| https://id.adsf.fi/dfad2/services/org/Organizations/9884752-6/2542595-3/Corlsdf-asdf/Corlsdf/MOB50202282 | 9884752-6 | 2542595-3 |
| https://id.asdf.fi/dfad2/services/org/Organizations/2435213-2/Corlsdf-asdf/Corlsdf/KVY239229 | 2435213-2 | NULL |
| https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0 | 6356334-0 | NULL |
| https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0/6356324-0 | 6356334-0 | 6356324-0 |
| https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0/6356324-0/ | 6356334-0 | 6356324-0 |
+----------------------------------------------------------------------------------------------------------+-----------+-----------+
```
How can I achieve this?
|
Your expressions are not correct / complete:
1. You are looking for digits only (`\d`) rather than also allowing the minus sign (`[0-9-]`).
2. They lack the second number you are looking for.
It should be this instead:
```
REGEXP_SUBSTR(addr, '(Organizations\/)([0-9-]+)', 1, 1, null, 2) AS num1,
REGEXP_SUBSTR(addr, '(Organizations\/)([0-9-]+)\/([0-9-]+)', 1, 1, null, 3) AS num2
```
|
Try the following:
```
REGEXP_SUBSTR(addr, '(Organizations\/)([^\/]+)', 1, 1, null, 2) AS num1,
REGEXP_SUBSTR(addr, '(Organizations\/)([^\/]+\/)([0-9\-]+)', 1, 1, null, 3) AS num2
```
The first skips `Organizations/` and takes everything up to the next `/`.
The second skips `Organizations/..../` and then takes a contiguous run of digits and `-`.
Here is fiddle <http://sqlfiddle.com/#!4/9eecb7d/12905>
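When debugging patterns like these, it can help to prototype the capture groups outside the database first. This hypothetical Python sketch mirrors the two `REGEXP_SUBSTR` calls above:

```python
import re

def extract(addr):
    # group 1 of each pattern plays the role of REGEXP_SUBSTR's subexpression argument
    m1 = re.search(r'Organizations/([^/]+)', addr)
    m2 = re.search(r'Organizations/[^/]+/([0-9-]+)', addr)
    return (m1.group(1) if m1 else None,
            m2.group(1) if m2 else None)

print(extract('https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0/6356324-0/'))
# ('6356334-0', '6356324-0')
print(extract('https://id.asdf.fi/dfad2/services/org/Organizations/6356334-0'))
# ('6356334-0', None)
```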
|
How to parse numbers from an HTTPS address in Oracle SQL?
|
[
"",
"sql",
"regex",
"oracle",
"oracle11g",
""
] |
How can I combine these two queries, so that I have 3 columns in my result: postcode, memberCount and placeCount?
```
SELECT LEFT(`delivery_postcode`, 2) as `postcode`, count(`delivery_postcode`) as `count`
FROM `customer_cards`
WHERE `delivery_postcode` IS NOT NULL
AND `delivery_postcode` <> ''
GROUP BY `postcode`
ORDER BY `count` DESC
```
and
```
SELECT LEFT(`placePostcode`, 2) as `postcode`, count(`placePostcode`) as `placeCount`
FROM `RestaurantsForGoogleMaps`
WHERE `placePostcode` IS NOT NULL
AND `placePostcode` <> ''
GROUP BY `postcode`
ORDER BY `placeCount` DESC
```
At the moment my results look like the following, for either query
```
postcode | count/placeCount
------------------------
SW | 817
W1 | 533
EC | 395
```
|
Something like this should work:
```
SELECT LEFT(`delivery_postcode`, 2) as `postcode`
, count(`delivery_postcode`) as `count`
, pc.placeCount
FROM `customer_cards` cc
LEFT JOIN (
SELECT LEFT(`placePostcode`, 2) as `postcode`,
count(`placePostcode`) as `placeCount`
FROM `RestaurantsForGoogleMaps`
WHERE `placePostcode` IS NOT NULL
AND `placePostcode` <> ''
GROUP BY `postcode`
) pc
on pc.postcode = LEFT(cc.delivery_postcode, 2)
WHERE `delivery_postcode` IS NOT NULL
AND `delivery_postcode` <> ''
GROUP BY `postcode`
ORDER BY `count` DESC
```
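Here is a runnable sketch of the same shape of query against an in-memory SQLite database (sample postcodes invented; SQLite's `SUBSTR` replaces MySQL's `LEFT`). Postcodes with no matching restaurant get `NULL` (`None`) for placeCount:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer_cards (delivery_postcode TEXT)")
con.execute("CREATE TABLE RestaurantsForGoogleMaps (placePostcode TEXT)")
con.executemany("INSERT INTO customer_cards VALUES (?)",
                [('SW1A',), ('SW3',), ('W1D',)])
con.executemany("INSERT INTO RestaurantsForGoogleMaps VALUES (?)",
                [('SW2',), ('EC1',)])

rows = con.execute("""
    SELECT SUBSTR(delivery_postcode, 1, 2) AS postcode,
           COUNT(*) AS memberCount,
           pc.placeCount
    FROM customer_cards cc
    LEFT JOIN (
        SELECT SUBSTR(placePostcode, 1, 2) AS postcode, COUNT(*) AS placeCount
        FROM RestaurantsForGoogleMaps
        WHERE placePostcode IS NOT NULL AND placePostcode <> ''
        GROUP BY postcode
    ) pc ON pc.postcode = SUBSTR(cc.delivery_postcode, 1, 2)
    WHERE delivery_postcode IS NOT NULL AND delivery_postcode <> ''
    GROUP BY SUBSTR(delivery_postcode, 1, 2)
    ORDER BY memberCount DESC""").fetchall()
print(rows)  # [('SW', 2, 1), ('W1', 1, None)]
```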
|
```
SELECT postcode,count,placecount FROM (SELECT LEFT(`delivery_postcode`, 2) as `postcode`, count(`delivery_postcode`) as `count`, 0 as `placecount`
FROM `customer_cards`
WHERE `delivery_postcode` IS NOT NULL
AND `delivery_postcode` <> ''
GROUP BY `postcode`
UNION
SELECT LEFT(`placePostcode`, 2) as `postcode`, count(`placePostcode`) as `placecount`,0 as `count`
FROM `RestaurantsForGoogleMaps`
WHERE `placePostcode` IS NOT NULL
AND `placePostcode` <> ''
GROUP BY `postcode` ) ORDER BY count desc
```
|
2 SQL queries at once
|
[
"",
"mysql",
"sql",
""
] |
I have two tables called
```
Supplier Table
Customer Table
```
There is no relation between them.
First Table `Customer` have this data
[Customer Table](https://i.stack.imgur.com/yPmjH.jpg)
Second Table `Supplier` have this data
[Supplier Table](https://i.stack.imgur.com/yeupl.jpg)
I need to see data as this
```
SupplierID CustomerName
1 Yahia
1 Ahmed
1 Ali
2 Yahia
2 Ahmed
2 Ali
3 Yahia
3 Ahmed
3 Ali
```
Note that there is no relation between the two tables.
How can I achieve this?
|
Your question is not very clear but it seems you want a cartesian product of those two tables. This is easily done with a cross join.
```
select *
from supplier s
cross join Customer c
```
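A cross join really does produce every pairing, as this small SQLite check illustrates (sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Supplier (SupplierID INT)")
con.execute("CREATE TABLE Customer (CustomerName TEXT)")
con.executemany("INSERT INTO Supplier VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO Customer VALUES (?)", [('Yahia',), ('Ahmed',)])

# Every supplier paired with every customer: 2 x 2 = 4 rows
rows = con.execute("""
    SELECT SupplierID, CustomerName
    FROM Supplier CROSS JOIN Customer
    ORDER BY SupplierID, CustomerName""").fetchall()
print(rows)  # [(1, 'Ahmed'), (1, 'Yahia'), (2, 'Ahmed'), (2, 'Yahia')]
```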
|
You might want this
```
select
SupplierID,CustomerName
from
Supplier
cross join
Customer
```
|
Get data from two tables when there is no relation between them
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm running a query and trying to get a percentage returned for failures vs successes in the database.
```
SELECT ((SELECT count(*) FROM validation WHERE request_date BETWEEN (SYSDATE - 30/(24*60)) AND SYSDATE AND response != 'VERIFIED') / count(*))
AS percentage
FROM (SELECT * FROM validation
WHERE request_date BETWEEN (SYSDATE - 30/(24*60)) AND SYSDATE)
;
```
It's returning the error 'ORA-00937: not a single-group group function'. I would have thought that the subquery would return a single value, so I would simply be able to divide it by the count(\*). Any ideas?
|
You're over complicating the query. You don't need any subqueries:
```
select count(case when response != 'VERIFIED' then 'X' end) / count(*) as percentage
from validation
where request_date BETWEEN (SYSDATE - 30/(24*60)) AND SYSDATE
```
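The conditional-aggregation idea ports directly to other engines. A quick SQLite check (sample data invented; the `request_date` filter is dropped for brevity, and `* 1.0` forces floating-point division):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE validation (response TEXT)")
con.executemany("INSERT INTO validation VALUES (?)",
                [('VERIFIED',), ('VERIFIED',), ('VERIFIED',), ('FAILED',)])

# Conditional aggregation: one pass over the table, no subqueries
pct = con.execute("""
    SELECT COUNT(CASE WHEN response <> 'VERIFIED' THEN 1 END) * 1.0 / COUNT(*)
    FROM validation""").fetchone()[0]
print(pct)  # 0.25
```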
|
Maybe you just need 2 subqueries from dual?
```
Select
(SELECT count(*) FROM validation WHERE request_date BETWEEN (SYSDATE - 30/(24*60)) AND SYSDATE AND response != 'VERIFIED')/
(SELECT count(*) FROM validation WHERE request_date BETWEEN (SYSDATE - 30/(24*60)) AND SYSDATE)
from dual;
```
|
Dividing a subquery by a count(\*)
|
[
"",
"sql",
"oracle",
""
] |
I have been trying to achieve this all day; I have followed numerous tutorials and can't seem to crack it. I have been trying things like:
```
select CONVERT(VARCHAR(10), DATE, 131) from Table
```
yet it does not seem to change anything.
Any advice or help would be appreciated. Thank you in advance.
|
`SELECT CONVERT(VARCHAR(10), DATE, 12)`
12 is the right code for the format you want. See: <https://msdn.microsoft.com/en-GB/library/ms187928.aspx>
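As a quick cross-check of the target layout, style 12's `yymmdd` output corresponds to this `strftime` pattern in Python (just an illustration of the format, not SQL Server itself):

```python
from datetime import date

# yymmdd, the layout produced by CONVERT(VARCHAR(10), DATE, 12)
d = date(2015, 11, 10)
formatted = d.strftime('%y%m%d')
print(formatted)  # 151110
```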
|
Try this
```
declare @TDATE Date = '2015-11-10';
select Convert(varchar,Datepart(Year, @TDATE))+Convert(varchar,Datepart(Month, @TDATE))+Convert(varchar,Datepart(Day, @TDATE))
```
Output:
```
20151110
```
|
SQL Convert dd/mm/yy to yymmdd
|
[
"",
"sql",
"date",
""
] |
I have a specific problem. I have a table which contains invalid values.
I need to replace the invalid values (here `0`) with the previous value which is bigger than `0`.
The difficulty is that it is not appropriate for me to use an UPDATE or an INSERT (a cursor plus an update would do it). So my only option is a SELECT statement.
When I use the `lag(col1, 1)` function with a CASE expression, I get only one column with the correct value.
```
select col1, col2 realcol2,
(case
when col2 = 0 then
lag(col2,1,1) over (partition by col1 order by col1 )
else
col2
end ) col2,
col3 realcol3,
(case
when col3 = 0 then
lag(col3,1,1) over (partition by col1 order by col1 )
else
col3
end ) col3
from test_table
```
Contents of `TEST_TABLE`:
```
---------------------------
Col1 | Col2 | Col3 | Col4
---------------------------
A | 0 | 1 | 5
B | 0 | 4 | 0
C | 2 | 0 | 0
D | 0 | 0 | 0
E | 3 | 5 | 0
F | 0 | 3 | 0
G | 0 | 3 | 1
A | 0 | 1 | 5
E | 3 | 5 | 0
```
Expected query result:
```
---------------------------
Col1 | Col2 | Col3 | Col4
---------------------------
A | 0 | 1 | 5
B | 0 | 4 | 5
C | 2 | 4 | 5
D | 2 | 4 | 5
E | 3 | 5 | 5
F | 3 | 3 | 5
G | 3 | 3 | 1
A | 3 | 1 | 5
E | 3 | 5 | 5
```
|
```
SELECT col1,
CASE col2 WHEN 0 THEN NVL( LAG( CASE col2 WHEN 0 THEN NULL ELSE col2 END ) IGNORE NULLS OVER ( ORDER BY NULL ), 0 ) ELSE col2 END AS col2,
CASE col3 WHEN 0 THEN NVL( LAG( CASE col3 WHEN 0 THEN NULL ELSE col3 END ) IGNORE NULLS OVER ( ORDER BY NULL ), 0 ) ELSE col3 END AS col3,
CASE col4 WHEN 0 THEN NVL( LAG( CASE col4 WHEN 0 THEN NULL ELSE col4 END ) IGNORE NULLS OVER ( ORDER BY NULL ), 0 ) ELSE col4 END AS col4
FROM table_name;
```
**Result**:
```
COL1 COL2 COL3 COL4
---- ---------- ---------- ----------
A 0 1 5
B 0 4 5
C 2 4 5
D 2 4 5
E 3 5 5
F 3 3 5
G 3 3 1
A 3 1 5
E 3 5 5
```
|
I'm assuming an additional column `col0` that contains an obvious ordering criteria for your data, as your `col1` example data isn't really ordered correctly (repeated, trailing values of `A` and `E`).
I love the `MODEL` clause for these kinds of purposes. The following query yields the expected result:
```
WITH t(col0, col1, col2, col3, col4) AS (
SELECT 1, 'A', 0, 1, 5 FROM DUAL UNION ALL
SELECT 2, 'B', 0, 4, 0 FROM DUAL UNION ALL
SELECT 3, 'C', 2, 0, 0 FROM DUAL UNION ALL
SELECT 4, 'D', 0, 0, 0 FROM DUAL UNION ALL
SELECT 5, 'E', 3, 5, 0 FROM DUAL UNION ALL
SELECT 6, 'F', 0, 3, 0 FROM DUAL UNION ALL
SELECT 7, 'G', 0, 3, 1 FROM DUAL UNION ALL
SELECT 8, 'A', 0, 1, 5 FROM DUAL UNION ALL
SELECT 9, 'E', 3, 5, 0 FROM DUAL
)
SELECT * FROM t
MODEL
DIMENSION BY (row_number() OVER (ORDER BY col0) rn)
MEASURES (col1, col2, col3, col4)
RULES (
col2[any] = DECODE(col2[cv(rn)], 0, NVL(col2[cv(rn) - 1], 0), col2[cv(rn)]),
col3[any] = DECODE(col3[cv(rn)], 0, NVL(col3[cv(rn) - 1], 0), col3[cv(rn)]),
col4[any] = DECODE(col4[cv(rn)], 0, NVL(col4[cv(rn) - 1], 0), col4[cv(rn)])
)
```
Result:
```
RN COL1 COL2 COL3 COL4
1 A 0 1 5
2 B 0 4 5
3 C 2 4 5
4 D 2 4 5
5 E 3 5 5
6 F 3 3 5
7 G 3 3 1
8 A 3 1 5
9 E 3 5 5
```
[SQLFiddle](http://sqlfiddle.com/#!4/9eecb7/6723)
### A note on the MODEL clause vs. window function-based approaches
While the above looks cool (or scary, depending on your point of view), you should certainly prefer using a window function based approach as exposed by the other elegant answers by [nop77svk (using `LAST_VALUE() IGNORE NULLS`)](https://stackoverflow.com/a/34160956/521799) or [MT0 (using `LAG() IGNORE NULLS`)](https://stackoverflow.com/a/34163217/521799). [I've explained these answers in more detail in this blog post](http://blog.jooq.org/2015/12/17/how-to-fill-sparse-data-with-the-previous-non-empty-value-in-sql/).
|
Oracle Lag function with dynamic parameter
|
[
"",
"sql",
"oracle",
"window-functions",
""
] |
I was hoping someone would be able to help me (or point me in the right direction) with the following problem. I'm looking at grouping a large number of codes to only 3 digits, ensuring that if a participant had the codes 122.2 and 122.3, they would count as one occurrence and not two.
Data Example:
```
Participant | group_code
1 | 1223
1 | 1224
1 | 1123
2 | 1012
2 | 0123
```
Current Code:
```
SELECT (left(group_code, 3)) as Group, count(left(group_code, 3)) as occurrence
from testDB
group by left(group_code, 3)
```
I suspect I need to use a unique element on the participant ID when grouping, however I'm not too sure.
Current Outcome:
Using the current data example, the result is as follows.
122 has 2 occurrences
112 has 1 occurrence
101 has 1 occurrence
012 has 1 occurrence
Expected Outcome:
122 has 1 occurrences
112 has 1 occurrence
101 has 1 occurrence
012 has 1 occurrence
Question: Is it possible to change the current code so that multiple occurrences of a 3-digit value for a single participant count only once? For example, with 111.1, 111.2, 111.3, and 111.4, the code above would report that 111 has occurred 4 times; I only want it to state that it has appeared once, as I'm only interested in the 3-digit level (and not the 4th digit).
Many thanks
|
Try this.
```
declare @t table ( group_code varchar(15))
insert into @t values ('122.2') ,('122.3' ) ,( '122.4' ) ,( '112.6'),( '112.0') , ( '119.1')
SELECT (left(group_code, 3)) as Grop,
count(left(group_code, 3)) as occurrence
from @t
group by left(group_code, 3)
select * from
(
SELECT (left(group_code, 3)) as Grop,
count(left(group_code, 3)) as occurrence
from @t
group by left(group_code, 3)
) a
join @t t on a.Grop = left(t.group_code, 3)
```
|
```
Create Table #T(Id int, Value decimal(16,2))
Insert into #T
Values(1,122.2),(1,122.3),(2,122.2)
Select Id,ROUND(Value,0)
from #T
Group By Id,ROUND(Value,0)
```
|
SQL Data Grouping
|
[
"",
"sql",
"sql-server",
""
] |
How I can have this command:
```
SELECT * FROM table USE INDEX (index1)
```
|
You should add `USE INDEX` after the table name in the FROM clause. With the Yii query builder you can do it like this:
```
Model::find()
->from(['tableAlias' => Model::tableName().' use index (index1)'])
->with(...)
->where(...)
//etc
```
|
For MySQL I had to use this:
```
Product::find()
->select('product.*')
->from([new Expression('{{%product}} FORCE INDEX (idx__product_search)')])
```
Without specifying `->select` it would not create a proper query, because it would search in a table named `` `product FORCE INDEX(idx__product_search)` ``
|
How to select an index in Yii2 ActiveQuery
|
[
"",
"sql",
"yii2",
""
] |
I have a table with 59 columns and over 17K rows. Lots of the rows have `NULL` in some of the columns.
I want to remove the `NULL` so that queries return a blank (`''`) rather than a `NULL`.
Can I run some update function that replaces all `NULL` with `''` ?
Using SQL Server 2008R2 Management Studio.
```
UPDATE my_table
SET column_1 = REPLACE (column_1,NULL,'')
```
But that would take forever to do it to all 59 columns!
What's the trick, team?
|
Use the SQL standard [`COALESCE`](https://msdn.microsoft.com/en-us/library/ms190349.aspx):
```
UPDATE my_table
SET column_1 = COALESCE(column_1,'')
, column_2 = COALESCE(column_2,'')
...
WHERE column_1 IS NULL OR
column_2 IS NULL OR
... -- avoid empty updates
;
```
Then use `ALTER TABLE ...` to add `NOT NULL` constraints to all columns that shall not have NULL to prohibit re-introducing NULL values.
Don't use `ISNULL`, which basically is a duplication of the standard `COALESCE` in some RDBMS - and not available in others. (Well, there are subtle differences, [read the manual for details](https://msdn.microsoft.com/en-us/library/ms190349.aspx) or [even more detail here](http://sqlmag.com/t-sql/coalesce-vs-isnull).)
Of course, the empty string (`''`) is only valid for string types. Not for number types, for instance.
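As a quick illustration of the single-statement update, here is the same pattern run against an in-memory SQLite table (shortened to two columns for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (column_1 TEXT, column_2 TEXT)")
con.executemany("INSERT INTO my_table VALUES (?, ?)",
                [('a', None), (None, None), ('b', 'c')])

# One statement fixes every NULL; the WHERE clause skips rows that need no change
con.execute("""
    UPDATE my_table
    SET column_1 = COALESCE(column_1, ''),
        column_2 = COALESCE(column_2, '')
    WHERE column_1 IS NULL OR column_2 IS NULL""")

rows = con.execute("SELECT column_1, column_2 FROM my_table ORDER BY rowid").fetchall()
print(rows)  # [('a', ''), ('', ''), ('b', 'c')]
```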
|
Use the `ISNULL` function. It returns the specified value if NULL is found in the column:
```
Select isnull(col1,'')
```
|
How to remove 'NULL' from results of queries SQL Server 2008
|
[
"",
"sql",
"sql-server",
"replace",
"sql-server-2008-r2",
"null",
""
] |
I have a table say **Cases**, which is using reference from **Workers** for three columns. Also there is one table **Company** to which workers belongs.
Below is the schema:
```
Cases [ CaseID, CaseNumber, Worker1, Worker2, Worker3 ]
Workers [ WorkerID, ComapnyID]
Company [CompanyID, CompanyName]
```
Now I need case count for each company.
So is it possible to make one join with workers that maps all the **Worker1**, **Worker2** and **Worker3** columns? Is there a better option, and what is the performance impact?
Note: Two workers from one company can work on single case, or all the workers can be from different companies.
|
Although join conditions are commonly equality checks, there's nothing special about them - any valid SQL condition could be used for performing a join. In your case, an `IN` condition seems appropriate:
```
SELECT CompanyName, COUNT(DISTINCT CaseID)
FROM Company co
JOIN Workers w ON co.CompanyId = w.CompanyId
JOIN Cases ca ON w.WorkerId IN (ca.Worker1, ca.Worker2, ca.Worker3)
GROUP BY CompanyName
```
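The `IN`-based join condition is standard enough to demonstrate in SQLite (sample companies, workers and cases invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Company (CompanyID INT, CompanyName TEXT)")
con.execute("CREATE TABLE Workers (WorkerID INT, CompanyID INT)")
con.execute("CREATE TABLE Cases (CaseID INT, Worker1 INT, Worker2 INT, Worker3 INT)")
con.execute("INSERT INTO Company VALUES (1, 'Acme'), (2, 'Globex')")
con.execute("INSERT INTO Workers VALUES (10, 1), (11, 1), (20, 2)")
# case 100 worked by two Acme workers, case 101 by one worker of each company
con.execute("INSERT INTO Cases VALUES (100, 10, 11, NULL), (101, 10, 20, NULL)")

# COUNT(DISTINCT ...) keeps case 100 from being counted twice for Acme
rows = con.execute("""
    SELECT CompanyName, COUNT(DISTINCT CaseID)
    FROM Company co
    JOIN Workers w ON co.CompanyID = w.CompanyID
    JOIN Cases ca ON w.WorkerID IN (ca.Worker1, ca.Worker2, ca.Worker3)
    GROUP BY CompanyName
    ORDER BY CompanyName""").fetchall()
print(rows)  # [('Acme', 2), ('Globex', 1)]
```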
|
```
select
C.CompanyID, C.CompanyName, count(distinct CaseID)
from Company C
inner join Workers W
on C.CompanyID = W.WorkerID
inner join (
select CaseId, WorkerId = Worker1 from Cases where Worker1 is not null
UNION ALL
select CaseId, WorkerId = Worker2 from Cases where Worker2 is not null
UNION ALL
select CaseId, WorkerId = Worker3 from Cases where Worker3 is not null
) CW
on W.WorkerID = CW.WorkerID
group by C.CompanyID, C.CompanyName
```
|
SQL JOIN WITH OR Condition
|
[
"",
"sql",
"sql-server",
"select",
"sql-server-2012",
""
] |
What is the best way to extract words from a string?
I already made something up because I found many ways and none of them looked 'simple'.
Let's assume there is a procedure called 'change\_opening\_hours'. This procedure has a time-range string input called 'v\_perioden'.
This string looks like:
'10:00-12:00' **OR**
'10:00-12:00 14:00-16:00' **OR**
'10:00-12:00 14:00-16:00 18:00-22:00' **etc**
Now I already made something up myself to extract every period of time from this input.
```
v_perioden VARCHAR2(50) := '10:00-12:00 14:00-18:00 22:00-24:00';
...
-- loop through the time periods depending
-- on the number of spaces
FOR i IN 0..REGEXP_COUNT(v_perioden, ' ') LOOP
-- first period
IF i = 0 THEN DBMS_OUTPUT.PUT_LINE(SUBSTR(v_perioden, 0, 11));
-- second period
ELSIF i = 1 THEN DBMS_OUTPUT.PUT_LINE(SUBSTR(v_perioden, 13, 11));
-- third period
ELSIF i = 2 THEN DBMS_OUTPUT.PUT_LINE(SUBSTR(v_perioden, 25, 11));
END IF;
END LOOP;
```
Output:
```
10:00-12:00
14:00-18:00
22:00-24:00
```
Now this way is working fine, but it isn't very flexible. I tried to find out how to split a string into words on spaces, but that wasn't working out.
|
You should use a combination of SUBSTR and the [INSTR function](https://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_1103.htm):
```
select substr('A B C',0,instr('A B C',' ')) from dual -- A
UNION ALL
select substr( 'A B C',instr('A B C',' ') , instr('A B C',' ',1,2)-1 ) from dual --B
UNION ALL
select substr( 'A B C', instr('A B C',' ',1,2) , instr('A B C',' ') ) from dual -- C
```
\*Replace A B C with your string
|
This method handles NULL elements (try the string `'10:00-12:00 18:00-22:00'` to test), handles a variable number of characters between occurrences of the delimiter without having to be edited, and, most importantly, handles a variable number of elements in the list:
```
SQL> with tbl(v_perioden) as (
select '10:00-12:00 14:00-16:00 18:00-22:00' from dual
)
select level nbr, regexp_substr(v_perioden, '(.*?)( |$)', 1, level, null, 1) element
from tbl
connect by level <= regexp_count(v_perioden, ' ')+1
order by level;
NBR ELEMENT
---------- -----------------------------------
1 10:00-12:00
2 14:00-16:00
3 18:00-22:00
SQL>
```
Here's why you want to make sure you handle NULL list elements: [Split comma separated values to columns in Oracle](https://stackoverflow.com/questions/31464275/split-comma-separated-values-to-columns-in-oracle/31464699#31464699)
Always expect the unexpected!
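The `'(.*?)( |$)'` idea (one element per delimiter slot, empty slots preserved) can be prototyped in Python before committing it to PL/SQL; a hypothetical sketch:

```python
import re

def elements(s):
    # one capture per delimiter slot, mirroring REGEXP_SUBSTR(s, '(.*?)( |$)', 1, n, null, 1)
    return re.findall(r'(.*?)(?: |$)', s)[:s.count(' ') + 1]

print(elements('10:00-12:00 14:00-16:00 18:00-22:00'))
# ['10:00-12:00', '14:00-16:00', '18:00-22:00']
print(elements('10:00-12:00  18:00-22:00'))  # double space: the empty middle element survives
# ['10:00-12:00', '', '18:00-22:00']
```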
|
PL/SQL best way to split a string
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
If I execute this query:
```
select * from product where product_name like '%';
```
it gets all the products and that's expected.
But I want to get the products where the name contains the wildcard character '%'.
How can I do that?
|
You can use [`ESCAPE`](https://msdn.microsoft.com/en-us/library/ms179859.aspx):
> match\_expression [ NOT ] LIKE pattern [ ESCAPE escape\_character ]
>
> **Is a character that is put in front of a wildcard character to indicate that the wildcard should be interpreted as a regular
> character and not as a wildcard.** escape\_character is a character
> expression that has no default and must evaluate to only one
> character.
```
select *
from product
where product_name like '%|%%' ESCAPE '|';
```
`LiveDemo`
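The `ESCAPE` clause is standard SQL, so the exact same predicate can be tried in SQLite (sample product names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (product_name TEXT)")
con.executemany("INSERT INTO product VALUES (?)",
                [('100% cotton',), ('plain shirt',)])

# '|' marks the next character as a literal, so '|%' matches a real percent sign
rows = con.execute("""
    SELECT product_name FROM product
    WHERE product_name LIKE '%|%%' ESCAPE '|'""").fetchall()
print(rows)  # [('100% cotton',)]
```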
|
Presuming that you're using SQL Server (though you've mentioned it somewhere):
You have to escape the wildcard character `%` with brackets:
```
select * from product where product_name like '%[%]%';
```
[**Demo**](https://data.stackexchange.com/stackoverflow/query/406802/percentage-in-where)
Side-note: you have to do the same if you want to search underscores:
```
select * from product where product_name like '%[_]%';
```
because this means *any single character*.
|
Sql query - search a varchar containing a wildcard character
|
[
"",
"sql",
""
] |
I am using the `AdventureWorks2012` sample database.
I am trying to calculate the quarterly total transaction amount for the year 2006 using the `datepart` function. I need the `SEASON` column to return 1, 2, 3, 4 (not just 3, 4) and the `TOTAL` column to return the values
```
0, 0, 83537.4000000, 134826.4400000
```
I am sorry if this is confusing to look at; it is my first time using Stack Overflow. Please help!
Here is the code:
```
WITH GROSSINCOME AS
(
SELECT
A.ORDERID,
SUM((B.QTY * B.unitprice * (1 - B.DISCOUNT))) + A.FREIGHT AS TOTAL,
A.orderdate
FROM
SALES.Orders AS A
JOIN
SALES.OrderDetails AS B ON A.orderid = B.orderid
GROUP BY
A.orderid, A.freight, A.orderdate
)
SELECT
DATEPART(QUARTER, orderdate) AS SEASON,
SUM(TOTAL) AS TOTAL
FROM
GROSSINCOME
WHERE
YEAR(orderdate) = '2006'
GROUP BY
DATEPART(QUARTER,orderdate);
```
|
Just to fix Dark's code, the below should work.
```
WITH GROSSINCOME AS
(
SELECT A.ORDERID,SUM(B.QTY*B.unitprice*(1-B.DISCOUNT)) + A.FREIGHT AS TOTAL,
A.orderdate
FROM SALES.Orders AS A JOIN SALES.OrderDetails AS B
ON A.orderid=B.orderid
WHERE YEAR(A.orderdate)='2006'
GROUP BY A.orderid,A.freight,A.orderdate
)
SELECT T.VAL AS SEASON,
SUM(ISNULL(TOTAL,0)) AS TOTAL
FROM GROSSINCOME
RIGHT JOIN (VALUES(1),(2),(3),(4))as T(VAL) ON T.VAL = DATEPART(QUARTER,orderdate)
GROUP BY T.VAL;
```
|
Try this
```
WITH GROSSINCOME AS
(
SELECT A.ORDERID, SUM((B.QTY*B.unitprice*(1-B.DISCOUNT))) + A.FREIGHT AS TOTAL,
A.orderdate
FROM SALES.Orders AS A JOIN SALES.OrderDetails AS B
ON A.orderid=B.orderid
GROUP BY A.orderid,A.freight,A.orderdate
)
SELECT T.VAL AS SEASON,
SUM(TOTAL) AS TOTAL
FROM GROSSINCOME
RIGHT JOIN (VALUES(1),(2),(3),(4))as T(VAL) ON T.VAL = DATEPART(QUARTER,orderdate)
WHERE YEAR(orderdate)='2006'
GROUP BY T.VAL;
```
|
SQL Server DATEPART
|
[
"",
"sql",
"sql-server",
""
] |
I made an SQL statement and I want to use the date, but without the time.
My Select is:
```
SELECT DATEPART(dw, [Order].PerformDate)
```
And my Group by is:
```
GROUP BY [Order].PerformDate
```
Now how can I ignore the time?
|
You can use the SQL `CONVERT` function:
```
select datepart(dw, CONVERT(DATE,[Order].PerformDate))
GROUP BY CONVERT(DATE,[Order].PerformDate)
```
|
Cast datetime value to date:
```
select cast(`Order`.PerformDate as date) as PerformDate
```
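In SQLite the same trick is spelled with the `DATE()` function; this runnable sketch shows the time part being ignored for grouping (table and values invented to mirror the question; `DATE()` plays the role of `CONVERT(DATE, ...)`/`CAST(... AS date)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "Order" (PerformDate TEXT)')
con.executemany('INSERT INTO "Order" VALUES (?)',
                [('2015-12-09 08:00:00',), ('2015-12-09 17:30:00',), ('2015-12-10 09:00:00',)])

# DATE() drops the time part, so both 2015-12-09 rows fall into one group
rows = con.execute("""
    SELECT DATE(PerformDate) AS d, COUNT(*)
    FROM "Order"
    GROUP BY DATE(PerformDate)
    ORDER BY d""").fetchall()
print(rows)  # [('2015-12-09', 2), ('2015-12-10', 1)]
```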
|
How to select a date and ignore the time
|
[
"",
"mysql",
"sql",
"select",
"group-by",
""
] |
I'm going to try to make it clear.
First of all here are the tables :
CARROSSERIE
```
MARQ MODEL SILHOUETTE ID
citroen c3 coupe 1
citroen c3 sport 2
citroen c4 coupe 3
citroen c4 sport 4
acura cdx cuv 5
... ... ... ...
```
table2 has the same fields as table1 plus some more fields (only one of them interests me)
alltable
```
SAME AS TABLE1 zone
... EUR
... EUR
... USA
... RUS
... CHI
... ...
```
So I simply join table1 and table2 on their shared fields to get "zone" from table2.
Here is the query:
```
SELECT C.model_marq AS model_marq, C.model_name AS model_name, C.silhouette AS silhouette, IIF(T.ZONE IS NULL, 'ROW', T.ZONE) AS zone
INTO zone_vehicule
FROM CARROSSERIE C
LEFT JOIN alltable T
ON C.model_marq= T.MARQUE
AND C.model_name = T.MODELE
AND C.silhouette = T.CARROS
GROUP BY model_marq, model_name, silhouette, zone
```
I'm getting the correct result, as I get all the "zone" values from table2 matching the fields of table1.
The problem is that there is something I need to handle in this query:
if a car from table1 (fields model\_marq, model\_name, silhouette) is in 7 different zones in table2 (7 zones being in fact all the possible zones), I get this in the result set:
```
MARQ MODEL SILHOUETTE zone
citroen c3 coupe EUR
citroen c3 coupe CHIN
citroen c3 coupe ASI
citroen c3 coupe RUS
citroen c3 coupe AML
... ... ... etc(all the 7 zones)
```
which is totally normal with the query I'm using.
What I would like is that if the query returns this case (a car in all 7 zones), instead of having 7 rows in the result set with all the different zones, I want only 1 row with '\*' in the zone field (in effect merging the 7 rows into 1 row, but only when the car is in all the zones).
An example of the result I would need:
```
MARQ MODEL SILHOUETTE zone
citroen c3 coupe * (because it was in all the 7 zones)
citroen c3 sport CHIN
citroen c3 sport RUS
citroen c3 sport EUR
citroen c4 cuv AML
acura cdx sport *
acura cdx coupe EUR
... ... ... ...
```
I don't know where to start from there.
I hope it was clear enough to be understood. Feel free to ask questions.
|
Here is what worked for me, but I think it's a bit complicated and long for my needs. I'll try to use @TPhe's answer to improve it.
```
SELECT model_marq, model_name, silhouette, zone
INTO ZONE_MODELE
FROM (
    SELECT
        C.marque AS model_marq,
        C.modele AS model_name,
        C.carros AS silhouette,
        IIF(T.ZONE IS NULL,
            IIF(model_name = 'DIVERS', '*', 'ROW'),
            IIF(C.countzones = 7, '*', T.ZONE)) AS zone,
        C.countzones AS countzones
    FROM (
        SELECT marque, modele, carros, COUNT(*) AS countzones
        FROM (
            SELECT marque, modele, carros, zone
            FROM (
                SELECT
                    CA.model_marq AS marque,
                    CA.model_name AS modele,
                    CA.silhouette AS carros,
                    IIF(TMP.ZONE IS NULL, 'ROW', TMP.ZONE) AS zone
                FROM CARROSSERIE CA
                LEFT JOIN alltable TMP
                    ON CA.model_marq = TMP.MARQUE
                   AND CA.model_name = TMP.MODELE
                   AND CA.silhouette = TMP.CARROS
                GROUP BY CA.model_marq, CA.model_name, CA.silhouette, TMP.ZONE
            )
            GROUP BY marque, modele, carros, zone
        ) a
        GROUP BY marque, modele, carros
    ) C
    LEFT JOIN alltable T
        ON C.marque = T.MARQUE
       AND C.modele = T.MODELE
       AND C.carros = T.CARROS
    GROUP BY C.marque, C.modele, C.carros, zone, countzones
) TMP2
GROUP BY model_marq, model_name, silhouette, zone
```
If anyone knows how to simplify this query, I would appreciate it.
|
My approach uses a subquery to find just the cars that have 7 zones listed, and then uses that subquery's results to inform the main query's IIF statement determining the "zone" field. It makes some assumptions that may not prove to be correct, depending on your data, such as that the `alltable` has something like a `carId` that is an id for each car you need to look at, and that if you count the number of rows for each `carId` it will return the number of distinct zones. You can tweak the query as needed, depending on the actual situation, but hopefully the concept is clear:
```
SELECT C.model_marq AS model_marq, C.model_name AS model_name, C.silhouette AS silhouette, IIF(T.ZONE IS NULL, 'ROW', IIF(allZones.carId IS NOT NULL, '*', T.ZONE)) AS zone
INTO zone_vehicule
FROM (CARROSSERIE C
LEFT JOIN alltable T
ON C.model_marq= T.MARQUE
AND C.model_name = T.MODELE
AND C.silhouette = T.CARROS) Left join
(select distinct carId, count(carId)
from alltable
group by carId
Having count(carId) = 7) as allZones on t.carId = allZones.carId
GROUP BY model_marq, model_name, silhouette, zone
```
This is the subquery:
```
(select distinct carId, count(carId)
from alltable
group by carId
Having count(carId) = 7) as allZones
```
Which you can play with and run separately to evaluate the accuracy of the results.
|
COUNT returned rows from JOIN depending on columns
|
[
"",
"sql",
"database",
"ms-access",
"join",
""
] |
I need to get date between two date range. That is nth day of nth month.
For example, I need to know 23rd day of every 2nd month between January 1, 2015 to December 30, 2015.
I need the query in T-SQL for SQL Server
|
You should use a recursive query in MSSQL.
Here the first CTE, `DT`, is a table where you set up the conditions:
```
WITH DT AS
(
SELECT CAST('January 1, 2015' as datetime) as dStart,
CAST('December 30, 2015' as datetime) as dFinish,
31 as nDay,
2 as nMonth
),
T AS
(
SELECT DATEADD(DAY,nDay-1,
DATEADD(MONTH, DATEDIFF(MONTH, 0, DStart), 0)
) as d,0 as MonthNumber
FROM DT
UNION ALL
SELECT DATEADD(DAY,nDay-1,
DATEADD(MONTH, DATEDIFF(MONTH, 0, DStart)
+T.MonthNumber+nMonth,0)
)as d, T.MonthNumber+nMonth as MonthNumber
FROM T,DT
WHERE DATEADD(DAY,nDay-1,
DATEADD(MONTH, DATEDIFF(MONTH, 0, DStart)
+T.MonthNumber+nMonth,0)
)<=DT.dFinish
)
SELECT d FROM T,DT WHERE DAY(d)=DT.nDay
```
`SQLFiddle demo`
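To sanity-check the stepping logic, here is a small sketch of the same recursive-CTE idea using Python's built-in `sqlite3` (SQLite's dialect differs from T-SQL: `WITH RECURSIVE` and `date()` modifiers replace `DATEADD`/`DATEDIFF`, and the 23rd-of-every-2nd-month parameters from the question are hard-coded for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH RECURSIVE t(d) AS (
        SELECT date('2015-01-23')        -- nDay = 23 in the first month
        UNION ALL
        SELECT date(d, '+2 months')      -- step by nMonth = 2
        FROM t
        WHERE date(d, '+2 months') <= date('2015-12-30')
    )
    SELECT d FROM t
""").fetchall()
dates = [r[0] for r in rows]
print(dates)
```

This yields the 23rd of January, March, May, July, September and November 2015, which matches what the T-SQL recursion produces for those parameters.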
|
Is this what you are trying to achieve?
```
DECLARE @startDate datetime
DECLARE @endDate datetime
DECLARE @monthToFind INT
DECLARE @dayToFind INT
SET @startDate = '01/01/2015'
SET @endDate = '12/31/2015'
SET @monthToFind = 2
SET @dayToFind = 20
IF MONTH(@startDate) + (@monthToFind - 1) BETWEEN MONTH(@startDate) AND MONTH(@endDate)
AND YEAR(@startDate) = YEAR(@endDate)
BEGIN
DECLARE @setTheDate datetime
SET @setTheDate = CAST(MONTH(@startDate) + (@monthToFind - 1) AS varchar) + '/' + CAST(@dayToFind AS varchar) + '/' + CAST(YEAR(@startDate) AS varchar)
SELECT DATENAME(DW,@setTheDate)
END
```
|
nth day to nth month in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
I have a history table that captures updates to a certain object and, in addition to other information, captures the time this update happened. What I would like to do is `SELECT` the `MIN(LogDate)` corresponding to a certain `ActionTaken` column.
More specifically, the history table may have other (more recent) rows where `ActionTaken = 1`, but I want to capture the date `ActionTaken` *became* 1.
Example:
```
SELECT MIN(LogDate) AS FirstActionDate
FROM HistoryTable
WHERE ID = 123
AND FirstActionTaken = 1
SELECT MIN(LogDate) AS SecondActionDate
FROM HistoryTable
WHERE ID = 123
AND SecondActionTaken = 1
SELECT MIN(LogDate) AS ThirdActionDate
FROM HistoryTable
WHERE ID = 123
AND ThirdActionTaken = 1
```
This works well, and I receive the proper dates without issue. Where I'm running into trouble is then going to `select` the `MAX(LogDate)` from this group:
```
SELECT MAX(LogDate) AS LastActionDate
FROM HistoryTable
WHERE ID = 123
AND LogDate IN
(
( SELECT MIN(LogDate) AS FirstActionDate
FROM HistoryTable
WHERE ID = 123
AND FirstActionTaken = 1 ),
( SELECT MIN(LogDate) AS SecondActionDate
FROM HistoryTable
WHERE ID = 123
AND SecondActionTaken = 1 ),
( SELECT MIN(LogDate) AS ThirdActionDate
FROM HistoryTable
WHERE ID = 123
AND ThirdActionTaken = 1 )
)
```
This also works, but I hate doing it this way. I could save out the previous statements into variables and just `SELECT MAX()` from those; it would certainly be more readable, but what would the `JOIN` syntax look like for this query?
*Is there a way to combine the first three `SELECT` statements into one that returns all three dates and isn't an unreadable mess?*
*How can I grab the most recent `LogDate` (as a separate column) from this result set and without the (seemingly unnecessary) repeating `SELECT` statements?*
EDIT:
Here are a few links I've found in relation to the answers that have been given so far:
* [Data Normalization](http://edn.embarcadero.com/article/25209)
* [Using `OUTER/CROSS APPLY`](https://stackoverflow.com/questions/9275132/real-life-example-when-to-use-outer-cross-apply-in-sql)
* [`UNPIVOT`](http://blog.devart.com/is-unpivot-the-best-way-for-converting-columns-into-rows.html) (and others)
Hopefully these will help with others looking for solutions to similar problems!
|
**EDIT 2**
Based on new information that can be gleaned from OP's own answer (about how to define the latest action date), the query can be further simplified to simply this:
```
select coalesce(
min(case when ThirdActionTaken = 1 then LogDate end),
min(case when SecondActionTaken = 1 then LogDate end),
min(case when FirstActionTaken = 1 then LogDate end)
) as LastActionDate
from HistoryTable
where id = 123
```
---
Unpivot can also be used:
```
select max(ActionDate)
from (select min(case when FirstActionTaken = 1 then LogDate end) as FirstActionDate,
min(case when SecondActionTaken = 1 then LogDate end) as SecondActionDate,
min(case when ThirdActionTaken = 1 then LogDate end) as ThirdActionDate
from HistoryTable
where id = 123) t
unpivot (ActionDate for ActionDates in (FirstActionDate, SecondActionDate, ThirdActionDate)) unpvt
```
**EDIT: Short explanation**
This answer is very similar to Gordon's in that it uses conditional aggregation to get the 3 minimum dates in one query.
So the following part of the query:
```
select min(case when FirstActionTaken = 1 then LogDate end) as FirstActionDate,
min(case when SecondActionTaken = 1 then LogDate end) as SecondActionDate,
min(case when ThirdActionTaken = 1 then LogDate end) as ThirdActionDate
from HistoryTable
where id = 123
```
...might return something like...
```
FirstActionDate SecondActionDate ThirdActionDate
--------------- ---------------- ---------------
2015-01-01 2015-12-01 (null)
```
Then, the `unpivot` clause is what "unpivots" the 3 columns into a result set with 3 rows but a single column instead:
```
ActionDate
----------
2015-01-01
2015-12-01
(null)
```
Once the results are in this format, then a simple `max` aggregate function (`select max(ActionDate)`) can be applied to get the max value of the 3 rows.
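The conditional-aggregation core translates to most engines; here is a hedged sketch using Python's built-in `sqlite3` with made-up sample rows (the `min(case …)` trick behaves the same way, and the all-NULL third column aggregates to NULL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE HistoryTable (ID INT, LogDate TEXT,
        FirstActionTaken INT, SecondActionTaken INT, ThirdActionTaken INT);
    INSERT INTO HistoryTable VALUES
        (123, '2015-01-01', 1, 0, 0),
        (123, '2015-06-15', 1, 1, 0),
        (123, '2015-12-01', 1, 1, 0);
""")
# min() skips NULLs, so each column is the date the action *became* 1
row = con.execute("""
    SELECT MIN(CASE WHEN FirstActionTaken  = 1 THEN LogDate END),
           MIN(CASE WHEN SecondActionTaken = 1 THEN LogDate END),
           MIN(CASE WHEN ThirdActionTaken  = 1 THEN LogDate END)
    FROM HistoryTable WHERE ID = 123
""").fetchone()
print(row)
```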
|
This would be easier with a normalized data structure. Here is one method that uses conditional aggregation to calculate the three minimum dates. Then it takes the maximum of those values:
```
SELECT v.dt
FROM (SELECT MIN(CASE WHEN FirstActionTaken = 1 THEN LogDate END) AS d1,
MIN(CASE WHEN SecondActionTaken = 1 THEN LogDate END) AS d2,
MIN(CASE WHEN ThirdActionTaken = 1 THEN LogDate END) AS d3
FROM HistoryTable
WHERE ID = 123
) ht OUTER APPLY
(SELECT MAX(dt) as dt
FROM (VALUES (d1), (d2), (d3) ) v(dt)
) v;
```
|
Selecting a group of dates in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
In order to get all the `Sunday dates` and `Saturday dates`
between given `date ranges` I have used an `iterative solution` as below,
```
create TABLE #Dayweeks (id int IDENTITY(1,1),StartWeek DATETIME, EndWeek DATETIME)
DECLARE @wkstartdate datetime = '2015-12-06',
@wkenddate datetime = '2016-04-05'
WHILE (@wkstartdate <= @wkenddate)
BEGIN
INSERT INTO #Dayweeks
(
StartWeek, EndWeek
)
SELECT
@wkstartdate, DATEADD(wk,DATEDIFF(wk,0,@wkstartdate),6)-1
SELECT @wkstartdate = DATEADD(dd,7,@wkstartdate)
END
```
I wonder how to achieve this using a `set based approach`. Is there any way to get the above result
using a `set based approach`?
The output I got using iterative solution is given below.
[](https://i.stack.imgur.com/X7LMm.png)
|
Check this with a CTE:
```
Declare @DateFrom DateTime ='2011-07-01',
@DateTo DateTime = '2011-07-31'
;WITH CTE(dt)
AS
(
Select @DateFrom
Union All
Select DATEADD(d,1,dt)FROM CTE
Where dt<@DateTo
)
Select
DATENAME(dw,dt) day, dt
from CTE
where DATENAME(dw,dt)In('Sunday' , 'Saturday')
--To understand more, comment above select and run this.
select * from
(
select 'Sunday' day,dt from CTE
where DATENAME(dw,dt)In('Sunday' )
union
select 'Saturday',dt from CTE
where DATENAME(dw,dt)In('Saturday' )
) a order by dt
```
Check this [link](https://ask.sqlservercentral.com/questions/2019/set-based-vs-procedural.html) to understand both approach.
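The same set-based idea can be sketched in SQLite via Python's `sqlite3` (a hypothetical one-week range; `strftime('%w', …)` returns `'0'` for Sunday and `'6'` for Saturday, standing in for `DATENAME(dw, …)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH RECURSIVE d(dt) AS (
        SELECT date('2015-12-06')
        UNION ALL
        SELECT date(dt, '+1 day') FROM d WHERE dt < date('2015-12-12')
    )
    SELECT dt FROM d WHERE strftime('%w', dt) IN ('0', '6')
""").fetchall()
weekend = [r[0] for r in rows]
print(weekend)
```

For this range, 2015-12-06 is a Sunday and 2015-12-12 a Saturday, so only those two dates survive the filter.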
|
There really is no "set-based" approach when you are starting with an empty set. You can replace your code with a recursive CTE. You can get the start dates by doing:
```
with weeks as (
select @wkstartdate as dte
union all
      select dateadd(week, 1, dte)
      from weeks
where dte < @wkenddate
)
insert into #Dayweeks(Startweek, EndWeek)
select dte, dateadd(day, 6, dte)
from weeks
option (maxrecursion 0);
```
Note that this does not verify the day of the week requirements. It just counts weeks from the first day.
|
Get specific dates between given date-ranges using set based approach
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to extract original files stored as `BLOB` in `DB2` database.
I used this `DB2` command:
```
EXPORT TO MyFile.DEL OF DEL LOBS TO . LOBFILE lob SELECT BLOB_COL
FROM MY_TABLE where REPORT_ID in
(select report_id from My_TABLE2 where CONDITION)
```
I get a `.blob` file that contains the content of all the files.
Now I am wondering if there is a way to export each file in a single file instead of having them gathered in the same file.
Is this possible in DB2 ?
|
It is possible in recent versions of DB2 for LUW (beginning at least with v9.5) by specifying the `lobsinsepfiles` modifier:
```
EXPORT TO MyFile.DEL OF DEL LOBS TO . LOBFILE lob
MODIFIED BY lobsinsepfiles
SELECT ...
```
|
Yes, you can store the LOBs in different files by adding the keyword `lobsinsepfiles` to your `EXPORT` statement. [See here for details](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dm.doc/doc/c0004562.html?lang=en). There are also options to specify how the individual file names should be constructed which I would recommend to use.
Your statement would look like:
```
EXPORT TO MyFile.DEL OF DEL LOBS TO . LOBFILE lob
MODIFIED BY lobsinsepfiles
SELECT BLOB_COL FROM MY_TABLE where REPORT_ID in
(select report_id from My_TABLE2 where CONDITION)
```
|
Extract BLOB File as multiple files in DB2
|
[
"",
"sql",
"db2",
"blob",
""
] |
I have a table like following
```
id id_a id_b uds
--------------------------
1 1 3 20
1 2 8 17
2 1 3 5
3 1 1 32
3 2 1 6
```
What I would need is to get the row with minimum "uds" for each "id". So the result would be:
```
id id_a id_b uds
--------------------------
1 2 8 17
2 1 3 5
3 2 1 6
```
Thank you in advance...
|
Most databases support the ANSI standard window functions. An easy way to do what you want:
```
select t.*
from (select t.*, row_number() over (partition by id order by uds) as seqnum
from t
) t
where seqnum = 1;
```
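To make the `row_number()` semantics concrete without a database at hand, here is a plain-Python sketch over the question's sample rows: keep, per `id`, the row that would get `seqnum = 1` (lowest `uds`):

```python
# Each dict is one row of the question's table
rows = [
    {"id": 1, "id_a": 1, "id_b": 3, "uds": 20},
    {"id": 1, "id_a": 2, "id_b": 8, "uds": 17},
    {"id": 2, "id_a": 1, "id_b": 3, "uds": 5},
    {"id": 3, "id_a": 1, "id_b": 1, "uds": 32},
    {"id": 3, "id_a": 2, "id_b": 1, "uds": 6},
]
best = {}
for r in rows:                      # one pass, like partitioning by id
    cur = best.get(r["id"])
    if cur is None or r["uds"] < cur["uds"]:
        best[r["id"]] = r           # this row currently has seqnum = 1
result = [best[k] for k in sorted(best)]
print(result)
```

As with `row_number()`, ties would keep only one row; a `rank()`-like variant would have to collect all rows with the minimum `uds`.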
|
Use `Min` with a `group by` clause:
```
select id, id_a, id_b, min(uds) as uds
from table1
group by id, id_a, id_b
order by id, id_a, id_b;
```
However, I should mention this is going to get you all of the items; you either need to specify an aggregate on the other columns, or not include them.
```
select id, min(uds) as uds
from table1
group by id
order by id;
```
Judging by your desired output though, the following may be what you want:
```
select id, max(id_a) as id_a, max(id_b) as id_b, min(uds) as uds
from table1
group by id
order by id;
```
|
Select row minimum col for each id
|
[
"",
"sql",
""
] |
There is probably an easy solution, but I'm just a rookie when it comes to databases and couldn't find a solution.
I've got two columns. Something like:
```
Meta_value | Meta_key
----------------------
_featured | 1
_featured | 1
_featured | 1
```
I want to change (all) the `meta_key` values to `0` only if the value of `meta_value` = `_featured`.
How can I do this?
|
```
update table_name set meta_key = 0 where meta_value = '_featured';
```
|
use this:
```
UPDATE table_name SET Meta_key = 0 WHERE Meta_value = '_featured';
```
|
Update value column if other column value is met
|
[
"",
"mysql",
"sql",
""
] |
We have a database with duplicate user records, and I need to pick the "best" user based on a few factors:
1. Users with memberships should be selected before those without
2. Memberships have levels, and all things being equal, the user with the "best" membership level should be selected.
3. Users with active memberships should be selected before users with expired memberships.
Based on those conditions, I came up with something like the following query (actual query is too sensitive).
```
SELECT TOP 1 u.[UserId]
FROM [dbo].[Users] u
LEFT OUTER JOIN [dbo].[UserMemberships] um
ON u.[UserId] = um.[UserId]
LEFT OUTER JOIN [dbo].[Memberships] m
ON um.[MembershipId] = m.[MembershipId]
WHERE u.[Email] = @Email
ORDER BY m.[Order] ASC, um.[Expires] DESC, u.[Created] DESC
```
The problem I'm having is with the ordering of the membership versus the expiration. For example, if there's two duplicate users with memberships of different levels I typically would want the user with the "best" membership level (based on the order), but if that membership is expired, I want the one that isn't expired, even if it's a lower level. However, I can't simply exclude expired memberships, because I need to pick the user with a membership (even if it's expired) over one without a membership.
Essentially, ordering by membership order and then expire date covers most scenarios, but in this one particular case, the order should be expire date, then membership order. What modification could I make to the query to cover this edge-case scenario?
|
What about something like this? By utilizing a few [`Case`](http://www.techonthenet.com/sql_server/functions/case.php) statements to create numeric ranking columns, you can still keep your expired records in the query but assign them a lower value so that the active memberships must be evaluated first:
```
SELECT TOP 1 u.[UserId],
       Case
            When um.[Expires] is null then 9999 --No Membership
            When um.[Expires] < GetDate() then 1 --Expired Membership
            Else 0 --Active Membership
       End as ActiveRank,
       Case
            When m.[Order] is null then 9999 --No Membership
            Else m.[Order] --Membership Ranking
       End as MembershipRank
FROM [dbo].[Users] u
LEFT OUTER JOIN [dbo].[UserMemberships] um
ON u.[UserId] = um.[UserId]
LEFT OUTER JOIN [dbo].[Memberships] m
ON um.[MembershipId] = m.[MembershipId]
WHERE u.[Email] = @Email
ORDER BY ActiveRank ASC, MembershipRank ASC, u.[Created] DESC
```
*Note: I wasn't quite sure what you were doing with the `Created` field so I left that in the Order.*
This means that the following holds true:
1. `ActiveRank` will split the ordering into two distinct groups... the top half of the list will be ordered by active memberships and the lower half will be ordered by expired or non-existent memberships.
2. `MembershipRank` will prioritize according to rank (pushing null values or nonexistent memberships to the very bottom of the expired-membership portion of the list).
|
Here is pseudocode for how I would solve this:
```
SELECT TOP 1 FROM (
SELECT UserId, 1 AS rnk
FROM Table
WHERE {most desireable conditions are true}
UNION ALL
SELECT UserId, 2 AS rnk
FROM Table
WHERE {2nd most desireable conditions are true}
UNION ALL
SELECT UserId, 3 AS rnk
FROM Table
WHERE {3rd most desireable conditions are true}
...
) u
ORDER BY rnk ASC, {Secondary OrderBys like ExpDate etc}
```
|
De-duping users based on membership, level thereof, and expiration date or membership
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I want to know what the SQL would be to count the occurrences of a number with respect to data in another table.
TABLE1
```
Day1 Day2
--------------------
1 2
2 3
1 3
3 1
```
TABLE2
```
ID NAME
------------------
1 John
2 Mary
3 Tom
```
The result I want from SQL will be this
```
ID NAME OCCUR
1 John 3
2 Mary 2
3 Tom 3
```
How will the SQL look like for getting this result?
|
It seems you want to count how many rows in `table1` have a matching value in either column. One easy way is:
```
select t2.*,
(select count(*)
from table1 t1
where t2.id in (t1.day1, t1.day2)
) as occur
from table2 t2;
```
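The correlated `IN (t1.day1, t1.day2)` count can be checked against the question's exact sample data with Python's built-in `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (Day1 INT, Day2 INT);
    CREATE TABLE table2 (ID INT, NAME TEXT);
    INSERT INTO table1 VALUES (1, 2), (2, 3), (1, 3), (3, 1);
    INSERT INTO table2 VALUES (1, 'John'), (2, 'Mary'), (3, 'Tom');
""")
# Count, for each person, the table1 rows matching in either column
rows = con.execute("""
    SELECT t2.ID, t2.NAME,
           (SELECT COUNT(*) FROM table1 t1
            WHERE t2.ID IN (t1.Day1, t1.Day2)) AS occur
    FROM table2 t2 ORDER BY t2.ID
""").fetchall()
print(rows)
```

This reproduces the expected output from the question: John 3, Mary 2, Tom 3.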
|
In table 1, you must make a foreign key to table 2 — probably the id of the user. With that:
```
Select t2.id, t2.name, Sum(t1.day)
from table1 t1
inner join table2 t2
on t1.id = t2.id
group by t2.id, t2.name
```
|
SQL COUNT among multiple columns
|
[
"",
"mysql",
"sql",
""
] |
I have 2 tables:
**Departament**
```
id_dep, name_dep
```
**Employee**
```
id_dep, salary
```
I want to get the names of the departments where ALL the salaries are less than 7000. I have to create a `SELECT` statement. I am a beginner at SQL and I'm sorry if I waste your time, but I need help.
I have a solution, but I don't know if it is good or not:
```
SELECT name_dep
FROM departments
WHERE employee.salary < 7000
MINUS
SELECT name_dep
FROM departments
WHERE employee.salary >= 7000
```
|
You can do it using `NOT EXISTS`:
```
SELECT id_dep , name_dep
FROM department AS d
WHERE NOT EXISTS (SELECT 1
FROM employee AS e
WHERE e.id_dep = d.id_dep AND e.salary >= 7000)
```
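A quick check of the `NOT EXISTS` pattern with Python's built-in `sqlite3` and invented sample data (department 'HR' has every salary below 7000, 'IT' does not):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (id_dep INT, name_dep TEXT);
    CREATE TABLE employee (id_dep INT, salary INT);
    INSERT INTO department VALUES (1, 'HR'), (2, 'IT');
    INSERT INTO employee VALUES (1, 5000), (1, 6500), (2, 4000), (2, 9000);
""")
# A department qualifies only if no employee in it earns >= 7000
rows = con.execute("""
    SELECT d.name_dep FROM department d
    WHERE NOT EXISTS (SELECT 1 FROM employee e
                      WHERE e.id_dep = d.id_dep AND e.salary >= 7000)
""").fetchall()
print(rows)
```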
|
You can group by department and check for max of salary in group:
```
select d.id_dep, d.name_dep
from department d
join employee e on d.id_dep = e.id_dep
group by d.id_dep, d.name_dep
having max(e.salary) < 7000
```
|
SQL SELECT (beginner)
|
[
"",
"sql",
"oracle",
"plsql",
""
] |
I have a table named VehicleAlerts.
You can see in the image below that there are different AlertSubCategory types such as Pickup/Drop Delay, Vehicle Stalled, Vehicle Speeding, and Geofence Violation. How will I get the count of each type of AlertSubCategory for a vehicle?
[](https://i.stack.imgur.com/Gk8wa.png)
|
Use `Conditional Count` to do this
```
SELECT vehicleid,
P_D_delay_count = Count(CASE
WHEN alertsubcategory = 'pickup/drop delay' THEN 1
                               END),
VehicleStalled = Count(CASE
WHEN alertsubcategory = 'VehicleStalled' THEN 1
END),
VehicleSpeeding = Count(CASE
WHEN alertsubcategory = 'VehicleSpeeding' THEN 1
END),
Geofence_Violation_count = Count(CASE
WHEN alertsubcategory = 'Geofence Violation' THEN 1
END)
FROM yourtable
GROUP BY vehicleid
```
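Here is a runnable sketch of the conditional-count pattern using Python's built-in `sqlite3` with made-up alerts (SQL Server's `alias = expr` syntax is dropped since SQLite doesn't support it; `COUNT(CASE … END)` counts only the non-NULL branches):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE VehicleAlerts (VehicleId INT, AlertSubCategory TEXT);
    INSERT INTO VehicleAlerts VALUES
        (7, 'Vehicle Speeding'), (7, 'Vehicle Speeding'),
        (7, 'Geofence Violation'), (8, 'Vehicle Stalled');
""")
# One output column per alert subcategory, one row per vehicle
rows = con.execute("""
    SELECT VehicleId,
           COUNT(CASE WHEN AlertSubCategory = 'Vehicle Speeding'   THEN 1 END),
           COUNT(CASE WHEN AlertSubCategory = 'Geofence Violation' THEN 1 END)
    FROM VehicleAlerts GROUP BY VehicleId ORDER BY VehicleId
""").fetchall()
print(rows)
```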
|
A simple `Count` with a `Group By` on both VehicleId and AlertSubCategory will get the data
```
SELECT VehicleId, AlertSubCategory, Count(1) Occurrence
FROM [table]
GROUP BY VehicleId, AlertSubCategory
```
|
How to get count of different value from a same columns in sql?
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
My problem is that I cannot create a proper DDL 'create table' statement with a column based on 2 columns from another table.
The final table should look like this:
```
CREATE TABLE PRACOWNICY_ZESPOLY AS
SELECT 12*PLACA_POD + NVL(PLACA_DOD,0) AS ROCZNA_PLACA FROM PRAC;
ALTER TABLE PRACOWNICY_ZESPOLY
ADD (
NAZWISKO VARCHAR(20),
POSADA VARCHAR(20),
ZESPOL NUMBER(4),
ADRES_PRACY VARCHAR(20) );
```
I was trying something like this:
```
CREATE TABLE PRACOWNICY_ZESPOLY (
NAZWISKO VARCHAR(20),
POSADA VARCHAR(20),
ZESPOL NUMBER(4),
ADRES_PRACY VARCHAR(20),
ROCZNA_PLACA NUMBER(6,2) AS (SELECT 12*PLACA_POD + NVL(PLACA_DOD,0) FROM PRAC));
```
Result:
> SQL Error: ORA-00936: missing expression
```
CREATE TABLE PRACOWNICY_ZESPOLY (
NAZWISKO VARCHAR(20),
POSADA VARCHAR(20),
ZESPOL NUMBER(4),
ADRES_PRACY VARCHAR(20))
AS SELECT 12*PLACA_POD + NVL(PLACA_DOD,0) FROM PRAC;
```
Result:
> SQL Error: ORA-01773: may not specify column datatypes in this CREATE TABLE
```
CREATE TABLE PRACOWNICY_ZESPOLY AS
SELECT 12*PLACA_POD + NVL(PLACA_DOD,0) AS ROCZNA_PLACA FROM PRAC,
(NAZWISKO VARCHAR(20),
POSADA VARCHAR(20),
ZESPOL NUMBER(4),
ADRES_PRACY VARCHAR(20));
```
Result:
> SQL Error: ORA-00907: missing right parenthesis
|
You need a `create` and an `insert` statement:
```
CREATE TABLE PRACOWNICY_ZESPOLY
(
ROCZNA_PLACA number,
NAZWISKO VARCHAR(20),
POSADA VARCHAR(20),
ZESPOL NUMBER(4),
ADRES_PRACY VARCHAR(20)
);
insert into PRACOWNICY_ZESPOLY (ROCZNA_PLACA)
SELECT 12 * PLACA_POD + NVL(PLACA_DOD,0)
FROM PRAC;
```
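A minimal sketch of the create-then-insert approach using Python's built-in `sqlite3` (invented `PRAC` data; `NVL` is Oracle-specific, so the portable `COALESCE` stands in for it here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PRAC (PLACA_POD INT, PLACA_DOD INT);
    INSERT INTO PRAC VALUES (1000, 200), (1500, NULL);
    CREATE TABLE PRACOWNICY_ZESPOLY (
        ROCZNA_PLACA NUMBER, NAZWISKO VARCHAR(20), POSADA VARCHAR(20),
        ZESPOL NUMBER, ADRES_PRACY VARCHAR(20));
    -- populate only the computed column, as in the answer
    INSERT INTO PRACOWNICY_ZESPOLY (ROCZNA_PLACA)
    SELECT 12 * PLACA_POD + COALESCE(PLACA_DOD, 0) FROM PRAC;
""")
rows = con.execute(
    "SELECT ROCZNA_PLACA FROM PRACOWNICY_ZESPOLY ORDER BY 1").fetchall()
print(rows)
```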
|
I have found a working solution in [this](https://stackoverflow.com/a/57151207/6071045) question.
You need to combine get\_ddl with CTAS syntax.
|
SQL create table column AS SELECT FROM OTHER TABLE
|
[
"",
"sql",
"oracle",
"ddl",
""
] |
I am trying to find data with a where clause on a specific day and month, but I am receiving an error. Can anyone help me with this?
```
select *
from my_data
where date BETWEEN '11-20' AND '12-15'
```
> MS SQL Server Management Studio
I am receving an error
> Conversion failed when converting date and/or time from character string
|
The conversion is failing because you are not specifying a year. If you were to specify '11-20-2015' your query would work; just insert whatever year you need.
```
SELECT *
FROM my_data
WHERE date BETWEEN '11-20-2015' AND '12-15-2015'
```
Alternatively, if you wanted data from that range of dates for multiple years, I would use a while loop to insert the rows into a temp (#) table and then read from that table. Depending on the amount of data this could be quick or slow; here is an example.
```
DECLARE @mindatestart date, @mindateend date, @maxdatestart date
SET @mindatestart = '11-20-2010'
SET @mindateend = '12-15-2010'
SET @maxdatestart = '11-20-2015'
SELECT top 0 *, year = 0
INTO #mydata
FROM my_data
WHILE @mindatestart < @maxdatestart
BEGIN
INSERT INTO #mydata
SELECT *, YEAR(@mindatestart)
FROM my_data
where date between @mindatestart and @mindateend
SET @mindatestart = DATEADD(Year, 1, @mindatestart)
SET @mindateend = DATEADD(Year, 1, @mindateend)
END
```
This will loop and insert the data from 2010-2015 for those date ranges and add a extra column on the end so you can call the data and order by year if you want like this
```
SELECT * FROM #mydata order by YEAR
```
Hopefully some part of this helps!
FROM THE COMMENT BELOW
```
SELECT *
FROM my_data
WHERE DAY(RIGHT(date, 5)) between DAY(11-20) and DAY(12-15)
```
The reason '11-20' doesn't work is that it's a character string, which is why you have to put it between ' '. What the `MONTH()` function does is take whatever you put between the parentheses and convert it to an integer, which is why you're not getting anything back using the method in the first answer: the '-Year' from the table's date field is being added into the numeric value, while your literal is just being evaluated as the arithmetic expression 11-20. You can see this by running these queries:
```
SELECT MONTH(11-20) --Returns 12
SELECT MONTH(11-20-2015) -- Returns 6
SELECT MONTH(11-20-2014) -- Returns 6
```
Using `RIGHT(date, 5)` you only get the month-day part; then you take the day value of that, so `DAY(RIGHT(date, 5))`, and you should get something that in theory falls within those date ranges regardless of the year. However, I'm not sure how accurate the data will be, and it's a lot of work just to avoid adding an additional 8 characters in your original query.
|
Most databases support functions to extract components of dates. So, one way of doing what you want is to convert the values to numbers and make a comparison like this:
```
where month(date) * 100 + day(date) between 1120 and 1215
```
The functions for extracting date parts differ by database, so your database might have somewhat different methods for doing this.
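For example, in SQLite (run here via Python's built-in `sqlite3`) the same month*100+day trick can be written with `strftime`; the sample dates are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE my_data (d TEXT);
    INSERT INTO my_data VALUES
        ('2014-11-25'), ('2015-12-10'), ('2015-06-01'), ('2016-12-16');
""")
# month*100 + day puts Nov 20 - Dec 15 into the numeric range 1120..1215
rows = con.execute("""
    SELECT d FROM my_data
    WHERE CAST(strftime('%m', d) AS INT) * 100
        + CAST(strftime('%d', d) AS INT) BETWEEN 1120 AND 1215
    ORDER BY d
""").fetchall()
print(rows)
```

Note that the filter matches the day/month window in any year, which is exactly the behavior the question asks for.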
|
Find data with specific date and month only
|
[
"",
"sql",
"date",
"sql-server-2012",
""
] |
The following query returns 1 row:
```
SELECT `coach_id` FROM `table_a` WHERE `coach_id` = 8
UNION ALL
SELECT `coach_id` FROM `table_b` WHERE `coach_id` = 8
```
But `SELECT coach_id FROM table_b WHERE coach_id = 8` returns 2 rows.
And `SELECT coach_id FROM table_a WHERE coach_id = 8` returns 1 row.
I'm using `UNION ALL` to avoid the `DISTINCT` filtering, because I'm actually just interested in the total number of rows. Still it seems to behave like regular `UNION` a.k.a `UNION DISTINCT`.
Whats going on here? Query is executed in phpMyAdmin `4.5.2` interface on MariaDB `10.1.9` Server.
### Update
I just discovered that the mysql command line client behaves like expected. So the failure has to be somewhere within my stack of `nginx 1.8.0`, `PHP 5.6.16` `mysqli` and `phpmyadmin`.
### Update 2
When I run the query from a php script (using mysqli) it also correctly returns 3 rows. I guess that leaves nothing but phpMyAdmin to cause this phenomenon. Thanks for the help so far and sorry that the question has been misleading. I didn't know better...
|
This is a phpMyAdmin bug, which is fixed in v4.5.3.0 (2015-12-23).
|
## schema
```
create schema testThenDrop123;
use testThenDrop123; -- use that database
create table table_a
( id int auto_increment primary key,
coach_id int not null
);
create table table_b
( id int auto_increment primary key,
coach_id int not null
);
insert table_a (coach_id) values (8);
insert table_b (coach_id) values (8),(8);
```
## query
```
SELECT `coach_id` FROM `table_a` WHERE `coach_id` = 8
UNION ALL
SELECT `coach_id` FROM `table_b` WHERE `coach_id` = 8;
+----------+
| coach_id |
+----------+
| 8 |
| 8 |
| 8 |
+----------+
```
.
```
drop schema testThenDrop123; -- cleanup by dropping db
```
So I don't see what you are seeing.
|
What is phpMyAdmin doing to my UNION ALL query?
|
[
"",
"sql",
"phpmyadmin",
"union",
""
] |
I have 2 tables
```
table : tab1
id serial primary key
val integer
```
and
```
table : tab2
id serial primary key
val2 integer
```
Now I tried this query
```
select id, (select id from tab2) as id2 from tab1
```
It is working fine. But when I tried to use `id2` in the `where` clause, it gives an error.
```
select id, (select id from tab2) as id2 from tab1 where id2 = id
```
|
Unless I don't understand, it should be as simple as this:
```
select T1.id
, T2.id as id2
from tab1 T1
join tab2 T2
ON T1.id = T2.id
```
**edit based on comment**
I'm not very familiar with `mysql` syntax, but couldn't you put your query in a subquery? Like:
```
select *
from (select id, (select id from tab2) as id2 from tab1) a
where a.id = a.id2
```
|
As per the OP's [comment](https://stackoverflow.com/questions/34174871/how-to-use-variable-from-inner-select-in-where-clause#comment56097054_34174983), try this:
```
select * from (
select arr, unnest('{1,2,3}'::int[]) as val
from tab1
)t
where val = any(arr)
```
|
How to use variable from inner select in where clause
|
[
"",
"sql",
"postgresql",
""
] |
I have a query to select a particular day-time for an office.
```
select
a.sat_date,
b.officeid
from
OfficeHours a
where
sat_date = '9:30 AM'
and officeik in (select OfficeIK from office where officeid = 50000) b
```
I need to select the sub-query column `officeid` in the main query. The above query throws a `syntax error`.
Thanks for the help.
|
You can't use the `officeid` column from the subquery, not only because the subquery's select list doesn't contain this column, but also because it is used in a where condition, not in some join/apply.
Instead you can join that subquery and use it columns like this:
```
select
a.sat_date,
    b.officeid
from OfficeHours as a
inner join (select * from office where officeid = 50000) as b on b.officeik = a.officeik
where a.sat_date = '9:30 AM'
```
or (even simpler and more natural):
```
select
a.sat_date,
    b.officeid
from OfficeHours as a
inner join office as b on b.officeik = a.officeik
where
a.sat_date = '9:30 AM'
and b.officeid = 50000
```
|
You can use inner join:
```
select a.sat_date, b.officeid
from OfficeHours a inner join office b on(a.officeik=b.OfficeIK)
where a.sat_date = '9:30 AM' and b.officeid=50000
```
|
SQL Server: how to select sub-query column in main query
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I've got a 'sales' table where I have `stor_id`, `ord_date`, and `quantity`.
The problem is to find the `stor_id` which has the maximum `quantity` for each `ord_date`.
I used below query
```
select sales.ord_date, sales.stor_id, sum(sales.qty)
from sales
group by sales.ord_date, sales.stor_id
order by sales.ord_date desc
```
In the result I got, for example, that on `2016-12-04` the store with `stor_id = 1` sold 35 books and the store with `stor_id = 2` sold 20 books on that same day.
I'd like to take the maximum value of each group with the same date (in this case `stor_id = 1`, as it sold 35).
|
You can use the query below if you are using SQL Server 2008 or above
```
select * from
(
select
ord_date,
case
when s.tot_sales = max(s.tot_sales) over (partition by s.ord_date)
then s.stor_id
else NULL
end as store_id,
max(s.tot_sales) over (partition by s.ord_date) as maximum_sales
from
(select distinct ord_date, stor_id,
sum(qty) over (partition by ord_date, stor_id) as tot_sales
from sales
)s
)s
where s.store_id is not NULL
```
**Working SQL fiddle:**<http://sqlfiddle.com/#!3/179e1/8>
For other versions of SQL server
```
select sales.* from
(
select ord_date, stor_id,
sum(qty) as tot_sales
from sales
group by ord_date, stor_id
)sales
inner join
(
select
ord_date,
max(s.tot_sales) as maximum_sales
from
(
select ord_date, stor_id,
sum(qty) as tot_sales
from sales
group by ord_date, stor_id
)s
group by ord_date
)s
on sales.ord_date=s.ord_date and sales.tot_sales=s.maximum_sales
```
**Working sql fiddle:** <http://sqlfiddle.com/#!3/179e1/9>
|
I think the simplest method is to use window functions:
```
select s.*
from (select s.ord_date, s.stor_id, sum(s.qty) as sumqty,
row_number() over (partition by s.ord_date order by sum(s.qty) desc) as seqnum
from sales s
group by s.ord_date, s.stor_id
) s
where seqnum = 1
order by s.ord_date desc;
```
Note that in the event of ties, this returns one store. If you want all of them, use `rank()` or `dense_rank()` instead of `row_number()`.
|
SQL Server grouping
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I have a Spark dataframe with the following data (I use spark-csv to load the data in):
```
key,value
1,10
2,12
3,0
1,20
```
is there anything similar to spark RDD `reduceByKey` which can return a Spark DataFrame as: (basically, summing up for the same key values)
```
key,value
1,30
2,12
3,0
```
(I can transform the data to RDD and do a `reduceByKey` operation, but is there a more Spark DataFrame API way to do this?)
|
If you don't care about column names you can use `groupBy` followed by `sum`:
```
df.groupBy($"key").sum("value")
```
otherwise it is better to replace `sum` with `agg`:
```
df.groupBy($"key").agg(sum($"value").alias("value"))
```
Finally you can use raw SQL:
```
df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")
```
See also [DataFrame / Dataset groupBy behaviour/optimization](https://stackoverflow.com/q/32902982/6910411)
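For intuition only, here is what `reduceByKey`-style summing does to the question's sample data, sketched in plain Python (no Spark required):

```python
# Sum values per key, the way reduceByKey merges with an add function
pairs = [(1, 10), (2, 12), (3, 0), (1, 20)]
totals = {}
for k, v in pairs:
    totals[k] = totals.get(k, 0) + v   # merge this value into the key's total
result = sorted(totals.items())
print(result)
```

The `groupBy($"key").agg(sum(...))` call above produces the same per-key totals, just expressed declaratively over a DataFrame.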
|
I think user goks missed out on some part of the code; it's not tested code.
`.map` should have been used to convert the RDD to a pair RDD, e.g. `.map(lambda x: (x, 1)).reduceByKey(...)`.
`reduceByKey` is not available on a single-value or regular RDD, only on a pair RDD.
Thx
|
Spark dataframe reducebykey like operation
|
[
"",
"sql",
"scala",
"apache-spark",
"apache-spark-sql",
""
] |
I have a mart table where I have gaps in the rows. I tried using a loop condition but I'm unable to proceed.
```
CREATE TABLE Mart
(martID int, mart int)
;
INSERT INTO Mart
(martID, mart)
VALUES
(1, 10),
(4, 12),
(6, 20)
;
```
**OutPut**
```
martID mart
1 10
2 0
3 0
4 12
5 0
6 20
```
My code so far
```
select max(martId) as nr
from Mart
union all
select nr - 1
from numbers
where nr > 1
```
|
Maybe this works:
```
declare @Mart TABLE
(martID int, mart int)
;
INSERT INTO @Mart
(martID, mart)
VALUES
(1, 10),
(4, 12),
(6, 20)
;
declare @MinNo int
declare @MaxNo int
declare @IncrementStep int
set @MinNo = 1
set @MaxNo = 10
set @IncrementStep = 1
;with C as
(
select @MinNo as Num
union all
select Num + @IncrementStep
from C
where Num < @MaxNo
)
select Num,
CASE WHEN mart IS NOT NULL THEN mart ELSE 0 END AS NUMBER
from C
LEFT JOIN @Mart t
ON t.martID = c.Num
```
|
Hopefully you have a `Numbers` table that contains a series of numbers without gaps. Try this:
```
SELECT nr,
COALESCE(mart, 0) AS mart
FROM numbers n
LEFT OUTER JOIN mart m
ON m.martid = n.nr
WHERE n.nr BETWEEN (SELECT Min(martid)
FROM mart) AND (SELECT Max(martid)
FROM mart)
```
In case you don't have a `numbers` table, then refer to this [***link***](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) to generate a sequence of values in `SQL Server`. I would prefer the `STACKED CTE` method:
```
;WITH e1(n) AS
(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), -- 10
e2(n) AS (SELECT 1 FROM e1 CROSS JOIN e1 AS b), -- 10*10
e3(n) AS (SELECT 1 FROM e1 CROSS JOIN e2) -- 10*100
SELECT n = ROW_NUMBER() OVER (ORDER BY n) FROM e3 ORDER BY n;
```
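The numbers-table-plus-left-join pattern can be verified end to end with Python's built-in `sqlite3` (the numbers sequence is generated with a recursive CTE instead of the stacked CTE, since SQLite supports `WITH RECURSIVE` directly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Mart (martID INT, mart INT);
    INSERT INTO Mart VALUES (1, 10), (4, 12), (6, 20);
""")
# Generate min(martID)..max(martID), then left join to fill gaps with 0
rows = con.execute("""
    WITH RECURSIVE nums(n) AS (
        SELECT (SELECT MIN(martID) FROM Mart)
        UNION ALL
        SELECT n + 1 FROM nums WHERE n < (SELECT MAX(martID) FROM Mart)
    )
    SELECT n, COALESCE(m.mart, 0)
    FROM nums LEFT JOIN Mart m ON m.martID = nums.n
    ORDER BY n
""").fetchall()
print(rows)
```

This reproduces the expected output from the question, with zeros for the missing `martID` values 2, 3 and 5.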
|
how to fill up the gaps in Rows in a table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm considering two SQL queries (Oracle) and I shall state the difference between them by showing examples. The queries are as follows:
```
/* Query 1 */
SELECT DISTINCT countryCode
FROM Member M
WHERE NOT EXISTS(
(SELECT organisation FROM Member
WHERE countryCode = 'US')
MINUS
(SELECT organisation FROM Member
WHERE countryCode = M.countryCode ) )
/* Query 2 */
SELECT DISTINCT M1.countryCode
FROM Member M1, Member M2
WHERE M2.countryCode = 'US' AND M1.organisation = M2.organisation
GROUP BY M1.countryCode
HAVING COUNT(M1.organisation) = (
SELECT COUNT(M3.organisation)
FROM Member M3 WHERE M3.countryCode = 'US' )
```
As far as I get it, these queries return the countries which are members of the same organisations as the United States. The schema of Member is (**countryCode**, **organisation**, type), with the bold ones as the primary key. Example: ('US', 'UN', 'member'). The Member table contains only a few tuples and is not complete, so when executing (1) and (2) both yield the same result (e.g. Germany, since here only 'UN' and 'G7' are in the table).
So how can I show that these queries can actually return different results?
That means how can I create an example table instance of Member such that the queries yield different results based on that table instance?
Thanks for your time and effort.
|
The queries return all the country codes of countries that are members of **at least** all the organisations the US is a member of (they could be members of other organisations as well).
|
I've finally found an example showing that they can actually output different values based on the same `Member` instance. This is the case when `Member` contains duplicates. For query 1 this is not a problem, but for query 2 it affects the result, since there the number of memberships is crucial. So, if you have e.g. ('FR', 'UN', member) twice in `Member`, the `HAVING COUNT(M1.organisation)` will return a different value than `SELECT COUNT(M3.organisation)`, and 'FR' would not be part of the output.
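This can be checked concretely. A Python/SQLite sketch (SQLite's `EXCEPT` stands in for Oracle's `MINUS`, and no primary key is declared so the duplicated ('FR', 'UN') row is representable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Member (countryCode TEXT, organisation TEXT, type TEXT)")
cur.executemany("INSERT INTO Member VALUES (?, ?, 'member')", [
    ("US", "UN"), ("US", "G7"),
    ("FR", "UN"), ("FR", "UN"),  # duplicate row for FR/UN
    ("FR", "G7"),
])

# Query 1: relational division via NOT EXISTS + EXCEPT.
q1 = cur.execute("""
    SELECT DISTINCT countryCode FROM Member M
    WHERE NOT EXISTS (
        SELECT organisation FROM Member WHERE countryCode = 'US'
        EXCEPT
        SELECT organisation FROM Member WHERE countryCode = M.countryCode)
""").fetchall()

# Query 2: the counting variant; the duplicate inflates the join count.
q2 = cur.execute("""
    SELECT DISTINCT M1.countryCode
    FROM Member M1, Member M2
    WHERE M2.countryCode = 'US' AND M1.organisation = M2.organisation
    GROUP BY M1.countryCode
    HAVING COUNT(M1.organisation) = (
        SELECT COUNT(M3.organisation) FROM Member M3 WHERE M3.countryCode = 'US')
""").fetchall()

print(sorted(q1), sorted(q2))  # query 1 keeps FR, query 2 drops it
```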
Thanks to all for your constructive suggestions, that helped me a lot.
|
Comparing SQL Queries
|
[
"",
"sql",
"oracle",
""
] |
I need help in building a sql query for the following criteria:
There are 3 columns in an Oracle database table, and I need to update exactly one of them: if the 1st is null, update it and don't touch the other two; if the 1st is not null (and the 2nd is null), update the 2nd and don't touch the 3rd; otherwise, if the 3rd is null, update the 3rd.
The query which i could build is:
```
update temp set
flg1 =
case flg1
when null then 'Y' else flg1 end,
flg2 =
case flg2
when null then 'Y' else flg2 end,
flg3 =
case flg3
when null then 'Y' else flg3 end
where id = 132
```
i know the above query is different from what i want , need help ..
|
Try this:
```
update temp set
flg1 =
case when flg1 is null then 'Y' else flg1 end,
flg2 =
case when flg1 is not null and flg2 is null then 'Y' else flg2 end,
flg3 =
case when flg1 is not null and flg2 is not null and flg3 is null then 'Y' else flg3 end
where id = 132
```
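A quick way to see that this updates exactly one flag per row is a SQLite sketch (same CASE logic; the `WHERE id = 132` filter is dropped here so all three situations are exercised in one statement):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE temp (id INTEGER, flg1 TEXT, flg2 TEXT, flg3 TEXT)")
cur.executemany("INSERT INTO temp VALUES (?, ?, ?, ?)", [
    (132, None, None, None),   # flg1 should be set
    (133, 'A', None, None),    # flg2 should be set
    (134, 'A', 'B', None),     # flg3 should be set
])

# All CASE expressions in a single UPDATE see the pre-update values
# of the row, so exactly one flag per row is filled in.
cur.execute("""
    UPDATE temp SET
      flg1 = CASE WHEN flg1 IS NULL THEN 'Y' ELSE flg1 END,
      flg2 = CASE WHEN flg1 IS NOT NULL AND flg2 IS NULL THEN 'Y' ELSE flg2 END,
      flg3 = CASE WHEN flg1 IS NOT NULL AND flg2 IS NOT NULL AND flg3 IS NULL
                  THEN 'Y' ELSE flg3 END
""")
rows = cur.execute("SELECT * FROM temp ORDER BY id").fetchall()
print(rows)
```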
|
You should use a nested CASE condition to achieve your requirement.
Just try the code below:
```
update temp set
flg1 =
    case when flg1 is null then 'Y' else flg1 end,
flg2 =
    case when flg1 is not null
         then case when flg2 is null then 'Y' else flg2 end
         else flg2 end,
flg3 =
    case when flg1 is not null and flg2 is not null
         then case when flg3 is null then 'Y' else flg3 end
         else flg3 end
where id = 132
```
|
Updating multiple columns in a single sql query with conditions
|
[
"",
"sql",
"database",
"oracle",
""
] |
[](https://i.stack.imgur.com/XwlN0.jpg)
Calculate the expiry date using a MySQL query:
compute the difference between make\_date and expiry\_date for each tblt\_name.
|
You can use the query below:
```
select tblt_name,
(CASE
WHEN datediff(expiry_date,CURDATE()) > 0 then datediff(expiry_date,CURDATE())
ELSE 'Expired'
END) as Remaining_days_expired from tablets;
```
[SQL fiddle](http://sqlfiddle.com/#!9/21d6a/3)
[](https://i.stack.imgur.com/3dOL2.jpg)
|
Is this along the lines of what you want:
```
SELECT tblt_id, tblt_name, make_date, expiry_date, expiry_date - make_date AS shelf_life
FROM medicine
```
|
how to calculate expiry date using mysql query?
|
[
"",
"mysql",
"sql",
""
] |
I have this table (simplified version)
```
create table completions (
id int(11) not null auto_increment,
completed_at datetime default null,
is_mongo_synced tinyint(1) default '0',
primary key (id),
  key index_completions_on_completed_at_and_is_mongo_synced_and_id (completed_at,is_mongo_synced,id)
) engine=innodb auto_increment=4785424 default charset=utf8 collate=utf8_unicode_ci;
```
Size:
```
select count(*) from completions; -- => 4817574
```
Now I try to execute this query:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
order by completions.id asc limit 10;
```
And it takes **9mins**.
I see there is not any index used; the `explain extended` returns this:
```
id: 1
select_type: SIMPLE
table: completions
type: index
possible_keys: index_completions_on_completed_at_and_is_mongo_synced_and_id
key: PRIMARY
key_len: 4
ref: NULL
rows: 20
filtered: 11616415.00
Extra: Using where
```
If I force the index:
```
select completions.*
from completions
force index(index_completions_on_completed_at_and_is_mongo_synced_and_id)
where
(completed_at is not null)
and completions.is_mongo_synced = 0
order by completions.id asc limit 10;
```
It takes **1,22s**, which is much better. The `explain extended` returns:
```
id: 1
select_type: SIMPLE
table: completions
type: range
possible_keys: index_completions_on_completed_at_and_is_mongo_synced_and_id
key: index_completions_on_completed_at_and_is_mongo_synced_and_id
key_len: 6
ref: null
rows: 2323334
filtered: 100
Extra: Using index condition; Using filesort
```
Now if I narrow the query by `completions.id` like:
```
select completions.*
from completions
force index(index_completions_on_completed_at_and_is_mongo_synced_and_id)
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
It takes **1,31s**, still good. The `explain extended` returns:
```
id: 1
select_type: SIMPLE
table: completions
type: range
possible_keys: index_completions_on_completed_at_and_is_mongo_synced_and_id
key: index_completions_on_completed_at_and_is_mongo_synced_and_id
key_len: 6
ref: null
rows: 2323407
filtered: 100
Extra: Using index condition; Using filesort
```
The point is that if for the last query I don't force the index:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
It takes **85ms**, check that it is **ms** and not **s**. The `explain extended` returns:
```
id: 1
select_type: SIMPLE
table: completions
type: range
possible_keys: PRIMARY,index_completions_on_completed_at_and_is_mongo_synced_and_id
key: PRIMARY
key_len: 4
ref: null
rows: 2323451
filtered: 100
Extra: Using where
```
Not only is this driving me nuts, but so is the fact that the performance of the last query is highly affected by small changes in the filter value:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 1600000
order by completions.id asc limit 10;
```
It takes **13s**
Things I don't understand:
## 1. Why is query A below faster than query B, when query B is supposed to use a more precise index?
Query A:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
**85ms**
Query B:
```
select completions.*
from completions
force index(index_completions_on_completed_at_and_is_mongo_synced_and_id)
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
**1,31s**
## 2. Why such a difference in performance among the queries below?
Query A:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
**85ms**
Query B:
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 1600000
order by completions.id asc limit 10;
```
**13s**
## 3. Why is MySQL not automatically using the index for the query below?
Index:
```
key index_completions_on_completed_at_and_is_mongo_synced_and_id (completed_at,is_mongo_synced,id),
```
Query:
```
select completions.*
from completions
force index(index_completions_on_completed_at_and_is_mongo_synced_and_id)
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
order by completions.id asc limit 10;
```
### Update
Some more data requested in the comments
Num of rows based on `is_mongo_synced` values
```
select
completions.is_mongo_synced,
count(*)
from completions
group by completions.is_mongo_synced;
```
Result:
```
[
{
"is_mongo_synced":0,
"count(*)":2731921
},
{
"is_mongo_synced":1,
"count(*)":2087869
}
]
```
Queries without `order by`
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
limit 10;
```
**544ms**
```
select completions.*
from completions
force index(index_completions_on_completed_at_and_is_mongo_synced_and_id)
where
(completed_at is not null)
and completions.is_mongo_synced = 0
and completions.id > 2000000
limit 10;
```
**314ms**
But, anyhow, I need the order because I'm scanning the table batch by batch.
|
Your questions are quite complicated. But, for your first query:
```
select completions.*
from completions
where completed_at is not null and
completions.is_mongo_synced = 0
order by completions.id asc
limit 10;
```
The best index is on `(is_mongo_synced, completed_at)`. There might be other ways to write the query, but in the index you are forcing, the columns are not in an optimal order.
The difference in performance in your second query is probably because the data is actually being sorted. A few extra hundreds of thousands of rows can affect the sort time. The dependence on the value of `id` is probably why the index is not used. If you changed the index to `(is_mongo_synced, id, completed_at)`, then index usage would be more likely.
MySQL has good documentation on composite indexes. You might want to review it [here](http://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html).
### After adding the suggested index
After adding the index:
```
KEY `index_completions_on_is_mongo_synced_and_id_and_completed_at` (`is_mongo_synced`,`id`,`completed_at`) USING BTREE,
```
And executing the long query again
```
select completions.*
from completions
where
(completed_at is not null)
and completions.is_mongo_synced = 0
order by completions.id asc limit 10;
```
It takes **156ms**, which is very good.
Checking the `explain extended` we see MySQL is using the correct index:
```
id: 1
select_type: SIMPLE
table: completions
type: ref
possible_keys: index_completions_on_completed_at_and_is_mongo_synced_and_id,index_completions_on_is_mongo_synced_and_id_and_completed_at
key: index_completions_on_is_mongo_synced_and_id_and_completed_at
key_len: 2
ref: const
rows: 1626322
filtered: 100
Extra: Using index condition; Using where
```
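The column-order point (equality column first, then the `ORDER BY` column) can be sketched with SQLite; the data is synthetic and SQLite's planner differs from MySQL's, so treat the printed plan as illustrative only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE completions (
    id INTEGER PRIMARY KEY,
    completed_at TEXT,
    is_mongo_synced INTEGER DEFAULT 0)""")
# Every 3rd row has NULL completed_at; even ids are unsynced (0).
cur.executemany("INSERT INTO completions VALUES (?, ?, ?)",
                [(i, '2015-01-01' if i % 3 else None, i % 2)
                 for i in range(1, 1001)])
# Equality column first, then the ORDER BY column.
cur.execute("CREATE INDEX idx_sync_id ON completions (is_mongo_synced, id)")

query = """
    SELECT id FROM completions
    WHERE completed_at IS NOT NULL AND is_mongo_synced = 0
    ORDER BY id ASC LIMIT 5
"""
rows = cur.execute(query).fetchall()
plan = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(rows)
print(plan)  # plan text is SQLite-version dependent
```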
|
You're trying to force the index
```
(completed_at, is_mongo_synced, id)
```
It's a B-tree, so it has to first explore all the distinct non-`NULL` values of `completed_at`, then the matching `is_mongo_synced` entries for each of them, then collect all the IDs, sort them, and finally visit the table to fetch the desired rows.
With the primary key, on the other hand (assuming it's the clustering key), it just fetches the page that has completions.id > 2000000 and reads consecutive rows until it gathers 10 of them; if they are not all on this page, the next one is fetched.
In the end both queries probably check a similar number of pages in the table, but the first one additionally has to fetch the entire index and sort it.
If you want to use an index try
```
(is_mongo_synced, id, completed_at)
```
Please consult the manual on [clustered indexes](https://dev.mysql.com/doc/refman/5.7/en/innodb-index-types.html).
|
MySQL looking for a nice index
|
[
"",
"mysql",
"sql",
"select",
"indexing",
""
] |
I have a database with menus and items, with a relationship between the two so that items can go on menus.
I want a select statement that can get varchar2 input from the user, and then check if the inputted value is part of the name of any items on the menus.
For example, if the user was to input 'strawberry' I would want the query to output all menus with corresponding items that have the word 'strawberry' in them. Like...
```
Menu Item
-------------------- -----------------------------
Valentine Theme Chocolate covered strawberry
Red Drink Theme Strawberry Margarita
Red Drink Theme Strawberry sensation
```
[How] is this possible? Here's what I've tried so far, and both have failed...
**Try 1** :
```
select menu_name Menu, item_name Item
from menu m, item_for_menu i_m, item i
where m.menu_id=i_m.menu_id
and i.item_id=i_m.item_id
(and item_name = '&name_of_item'
or item_name like '%name_of_item%');
```
**Try 2** (after some googling):
```
accept item_in prompt 'What item do you want to search for? '
select menu_name Menu, item_name Item
from menu m, item_for_menu i_m, item i
where m.menu_id=i_m.menu_id
and i.item_id=i_m.item_id
and item_name like item_in;
undefine item_in
```
|
There are indeed better ways than the SQL prompt to get input from the user, but your queries are also flawed. You either don't consider the case, or forget to add wildcards. Maybe there are other problems, but you didn't state how the queries failed exactly, so I have to assume.
Try this, your modified second query:
```
accept item_in prompt 'What item do you want to search for? '
select menu_name Menu, item_name Item
from menu m, item_for_menu i_m, item i
where m.menu_id=i_m.menu_id
and i.item_id=i_m.item_id
and upper(item_name) like upper('%'||item_in||'%');
undefine item_in
```
If it still fails, debug. Try to execute your query without `item_in` at all, put the string in there directly, until you get the data you want, then replace the string with substitution variable.
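The upper()-plus-wildcards pattern is easy to verify from application code. A SQLite sketch with invented sample data, where a bind parameter plays the role of the SQL*Plus substitution variable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE item (item_name TEXT)")
cur.executemany("INSERT INTO item VALUES (?)", [
    ("Chocolate covered strawberry",),
    ("Strawberry Margarita",),
    ("Strawberry sensation",),
    ("Mojito",),
])

# upper() on both sides makes the match case-insensitive;
# the concatenated wildcards make it a substring match.
search = "strawberry"
rows = cur.execute(
    "SELECT item_name FROM item "
    "WHERE upper(item_name) LIKE upper('%' || ? || '%')",
    (search,),
).fetchall()
print(rows)  # all three strawberry items, regardless of case
```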
|
Thanks for the advice guys!
Here's the working solution I ended up with:
```
set verify off
select 'The ' || menu_name || ' Menu contains ' || item_name Results
from menu m, item_for_menu i_m, item i
where m.menu_id=i_m.menu_id
and i.item_id=i_m.item_id
and item_name like '%'||'&search_for_item'||'%'
order by menu_name,item_name;
set verify on
```
The only thing I wish I could do now is figure out how to force the inputted value to lowercase, as well as forcing the inputted value to start with a capital letter, so that I could get more results. Still seems to work great in general though!
|
Oracle's SQL - How to get user input and search it in a query
|
[
"",
"sql",
"oracle",
"sqlplus",
"where-clause",
""
] |
What is a SQL statement to find out which Schema owns an Oracle table?
|
To see information about any object in the database, in your case USER\_TABLES use:
```
select * from all_objects where object_name = 'USER_TABLES';
OWNER OBJECT_NAME OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME
SYS USER_TABLES 3922 VIEW 24-MAY-13 24-MAY-13
```
USER\_TABLES is a dictionary view. All dictionary views are owned by SYS.
|
`SELECT OWNER FROM DBA_TABLES WHERE TABLE_NAME = '<your table>'`
If you don't have privilege to `DBA_TABLES` use `ALL_TABLES`.
|
SQL Statement to Find Out Which Schema Owns an Oracle Table?
|
[
"",
"sql",
"oracle",
"schema",
""
] |
I have looked around but I just can't seem to understand the logic. I think a good response is [here](https://stackoverflow.com/questions/15128653/need-to-calculate-percentage-of-total-count-in-sql-query), but like I said, it doesn't make sense, so a more specific explanation would be greatly appreciated.
So I want to show how often customers of each ethnicity use a credit card. There are different types of credit cards, but if CardID = 1 they used cash (hence the not-equal-to-1 condition).
I want to Group By ethnicity and show the count of transactions, but as a percentage.
```
SELECT Ethnicity, COUNT(distinctCard.TransactionID) AS CardUseCount
FROM (SELECT DISTINCT TransactionID, CustomerID FROM TransactionT WHERE CardID <> 1)
AS distinctCard INNER JOIN CustomerT ON distinctCard.CustomerID = CustomerT.CustomerID
GROUP BY Ethnicity
ORDER BY COUNT(distinctCard.TransactionID) ASC
```
So for example, this is what it comes up with:
```
Ethnicity | CardUseCount
0 | 100
1 | 200
2 | 300
3 | 400
```
But I would like this:
```
Ethnicity | CardUsePer
0 | 0.1
1 | 0.2
2 | 0.3
3 | 0.4
```
|
If you need the percentage of card transactions per ethnicity, you have to divide the card transactions per ethnicity by the total transactions of the same ethnicity. You don't need a subquery for that:
```
SELECT Ethnicity, sum(IIF(CardID=1,0,1))/count(1) AS CardUsePercentage
FROM TransactionT
INNER JOIN CustomerT
ON TransactionT.CustomerID = CustomerT.CustomerID
GROUP BY Ethnicity
```
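The per-group ratio idea can be sketched in Python/SQLite with made-up sample data (a CASE expression replaces Access's IIF, and AVG over a 0/1 flag computes the share in one step):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE TransactionT (CustomerID INTEGER, CardID INTEGER)")
cur.execute("CREATE TABLE CustomerT (CustomerID INTEGER, Ethnicity INTEGER)")
cur.executemany("INSERT INTO CustomerT VALUES (?, ?)", [(1, 0), (2, 0), (3, 1)])
cur.executemany("INSERT INTO TransactionT VALUES (?, ?)", [
    (1, 2), (1, 1),   # ethnicity 0: one card payment, one cash
    (2, 1),           # ethnicity 0: cash
    (3, 2), (3, 3),   # ethnicity 1: two card payments
])

# AVG of a 0.0/1.0 flag = fraction of non-cash transactions per group.
rows = cur.execute("""
    SELECT Ethnicity,
           AVG(CASE WHEN CardID = 1 THEN 0.0 ELSE 1.0 END) AS CardUsePct
    FROM TransactionT
    JOIN CustomerT ON TransactionT.CustomerID = CustomerT.CustomerID
    GROUP BY Ethnicity
    ORDER BY Ethnicity
""").fetchall()
print(rows)  # ethnicity 0 -> 1/3 card use, ethnicity 1 -> 100%
```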
|
From your posted sample result, it looks like you just want to divide the count by 1000, like:
```
SELECT Ethnicity,
COUNT(distinctCard.TransactionID) / 1000 AS CardUseCount
FROM <rest part of query>
```
|
Calculate Count as Percentage
|
[
"",
"sql",
"ms-access",
"count",
"percentage",
""
] |
I have a table in SQL Server which contains the durations a user worked at different jobs. I need to calculate the user's total experience.
```
Declare @temp table(Id int, FromDate DATETIME, ToDate DATETIME)
INSERT INTO @temp ( Id ,FromDate ,ToDate )
VALUES ( 1 , '2003-1-08 06:55:56' , '2005-5-08 06:55:56'),
( 2 , '2000-10-08 06:55:56' , '2008-7-08 06:55:56'),
( 3 , '2013-6-08 06:55:56' , '2015-1-08 06:55:56'),
( 4 , '2006-4-08 06:55:56' , '2011-3-08 06:55:56' )
SELECT * FROM @temp
```
I want to calculate the experience in months;
```
Id FromDate ToDate Difference IN Months
===================================================
1 2003-01-08 2005-05-08 28
2 2000-10-08 2008-07-08 93
3 2013-06-08 2015-01-08 19
4 2006-04-08 2011-03-08 59
```
after removing the overlapping periods (e.g. 2003-2005 is covered by 2000-2008),
I got something like this:
```
Id FromDate ToDate Difference IN Months
===================================================
1 2000-10-08 2011-03-08 125
2 2013-06-08 2015-01-08 19
```
So the answer would be `125+19 = 144` Months.
Please help me to find a solution.
|
The query below finds all FromDate values that don't fall inside any overlapping FromDate-ToDate interval, and likewise all ToDate values that don't fall inside any overlapping FromDate-ToDate interval. It gives each set a row number ordered by the date value and matches the two sets on that row number:
```
;WITH CTE as
(
SELECT min(Id) Id ,FromDate, row_number() over (ORDER BY FromDate) rn
FROM @temp x
WHERE
not exists
(SELECT * FROM @temp WHERE x.FromDate > FromDate and x.FromDate <= Todate)
GROUP BY FromDate
), CTE2 as
(
SELECT Max(Id) Id ,ToDate, row_number() over (ORDER BY ToDate) rn
FROM @temp x
WHERE
not exists
(SELECT * FROM @temp WHERE x.ToDate >= FromDate and x.ToDate < Todate)
GROUP BY ToDate
)
SELECT SUM(DateDiff(month, CTE.FromDate, CTE2.ToDate))
FROM CTE
JOIN CTE2
ON CTE.rn = CTE2.rn
```
Result:
```
144
```
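As a cross-check of the expected result, the same merge-then-sum logic can be sketched in plain Python (assuming, as the SQL does, that DATEDIFF(month, ...) counts calendar-month boundaries crossed):

```python
from datetime import date

ranges = [(date(2003, 1, 8), date(2005, 5, 8)),
          (date(2000, 10, 8), date(2008, 7, 8)),
          (date(2013, 6, 8), date(2015, 1, 8)),
          (date(2006, 4, 8), date(2011, 3, 8))]

# Merge overlapping ranges after sorting by start date.
merged = []
for start, end in sorted(ranges):
    if merged and start <= merged[-1][1]:
        merged[-1] = (merged[-1][0], max(merged[-1][1], end))
    else:
        merged.append((start, end))

# Month difference the way DATEDIFF(month, ...) computes it:
# calendar-month boundaries crossed, ignoring the day component.
def months(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

total = sum(months(s, e) for s, e in merged)
print(merged, total)  # two merged ranges, 125 + 19 = 144 months
```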
|
You can try this
```
SELECT Set1.FromDate,MIN(List1.ToDate) AS ToDate, DATEDIFF(MONTH,Set1.FromDate,MIN(List1.ToDate))
FROM @temp Set1
INNER JOIN @temp List1 ON Set1.FromDate <= List1.ToDate
AND NOT EXISTS(SELECT * FROM @temp List2
WHERE List1.ToDate >= List2.FromDate AND List1.ToDate < List2.ToDate)
WHERE NOT EXISTS(SELECT * FROM @temp Set2
WHERE Set1.FromDate > Set2.FromDate AND Set1.FromDate <= Set2.ToDate)
GROUP BY Set1.FromDate
ORDER BY Set1.FromDate
```
|
T-SQL Calculate duration in months between different years of ranges
|
[
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"sqldatetime",
""
] |
I have the following problem - I have a table with 4 columns: Id, Name, InsertTime, UpdateTime.
I want to count for each Name (NOT unique!) how many with the same name have **different** InsertTime or UpdateTime.
For example, if I have the following data:
```
ID NAME INSERTED_TIME UPDATE_TIME
1 maya 21-12-2015 21-12-2015
2 tal 22-12-2015 21-12-2015
3 maya 21-09-2015 21-12-2015
4 mark 21-12-2015 21-12-2015
5 mark 21-12-2015 21-12-2015
```
I want the outcome to be:
```
NAME COUNT
maya 1
tal 0
mark 0
```
Explanation: "maya" has 1 duplicate with a different InsertTime or UpdateTime. "tal" has no duplicates. And "mark" has a duplicate, but both the InsertTime and UpdateTime are the same, so it doesn't count.
I have tried:
```
SELECT COUNT(*)
FROM table t1
JOIN table t2 on t1.NAME=t2.NAME
where (t1.INSERTED_TIME<>t2.INSERTED_TIME or t1.UPDATE_TIME<>t2.UPDATE_TIME)
```
But this returns duplicates.
I also tried different approaches with GROUP BY - nothing has worked so far.
Any help would be appreciated.
|
Use GROUP BY and count the rows for each name:
```
;WITH CTE as
(
SELECT NAME
FROM yourtable
GROUP BY NAME, INSERTED_TIME, UPDATE_TIME
)
SELECT NAME, count(*) - 1 COUNT
FROM CTE
GROUP BY NAME
```
Result:
```
NAME COUNT
mark 0
maya 1
tal 0
```
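The CTE approach can be verified end-to-end with SQLite (column names shortened for the demo; same two-level grouping logic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER, name TEXT, ins TEXT, upd TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 'maya', '2015-12-21', '2015-12-21'),
    (2, 'tal',  '2015-12-22', '2015-12-21'),
    (3, 'maya', '2015-09-21', '2015-12-21'),
    (4, 'mark', '2015-12-21', '2015-12-21'),
    (5, 'mark', '2015-12-21', '2015-12-21'),
])

# First collapse identical (name, ins, upd) triples, then count the
# remaining distinct time variants per name, minus one.
rows = cur.execute("""
    WITH cte AS (
        SELECT name FROM t GROUP BY name, ins, upd
    )
    SELECT name, COUNT(*) - 1 FROM cte GROUP BY name ORDER BY name
""").fetchall()
print(rows)
```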
|
You can use windowed functions:
```
WITH cte AS
(
SELECT NAME,
RN = RANK() OVER (PARTITION BY NAME ORDER BY INSERTED_TIME,UPDATE_TIME) - 1
FROM #table t1
)
SELECT NAME, [COUNT] = MAX(RN)
FROM cte
GROUP BY NAME
ORDER BY [COUNT] DESC
```
`LiveDemo`
|
Sql - How to count different column values grouped by another column
|
[
"",
"sql",
"sql-server",
"group-by",
""
] |
I have the code below:
```
Dim ds As New DataSet
Dim sda As System.Data.SqlClient.SqlDataAdapter
Dim sSQL As String
Dim strCon As String
sSQL = " .... MY QUERY HERE .... "
strCon = appBase.dbConnString
sda = New System.Data.SqlClient.SqlDataAdapter(sSQL, strCon)
sda.Fill(ds, "MY TABLE FROM DB")
dgRecordsContent.DataSource = ds.Tables("MY TABLE FROM DB")
dgRecordsContent.DataBind()
dgRecordsContent.Visible = True
dbConn.Close()
```
How can I programmatically count the number of rows from the datagrid that I'm showing the values in?
|
Provided that the `DataTable` being filled doesn't already contain any rows, you can get the count using:
```
Dim count As Integer = sda.Fill(ds, "MY TABLE FROM DB")
```
Otherwise you can access the rows in the `DataTable` using:
```
Dim count As Integer = ds.Tables("MY TABLE FROM DB").Rows.Count
```
|
```
Dim ds As New DataSet
Dim sda As System.Data.SqlClient.SqlDataAdapter
Dim sSQL As String
Dim strCon As String
sSQL = " .... MY QUERY HERE .... "
strCon = appBase.dbConnString
sda = New System.Data.SqlClient.SqlDataAdapter(sSQL, strCon)
Dim num_rows=sda.Fill(ds, "MY TABLE FROM DB")
MessageBox.Show(num_rows.ToString())
dgRecordsContent.DataSource = ds.Tables("MY TABLE FROM DB")
dgRecordsContent.DataBind()
dgRecordsContent.Visible = True
dbConn.Close()
```
Please Check This Edit @Emi
|
How to count records from database using VB.NET ?
|
[
"",
"sql",
"asp.net",
"vb.net",
"webforms",
""
] |
Good evening,
I'm new to SQL.
I've been trying to "reverse" a simple 2-column relationship table.
Here is an example to explain it :
TABLE KnowEachOther
```
Β¦ id_human1 Β¦ id_human2 Β¦
Β¦ 1 Β¦ 10 Β¦
Β¦ 1 Β¦ 11 Β¦
Β¦ 2 Β¦ 12 Β¦
Β¦ 2 Β¦ 13 Β¦
```
So here I have a table of people that know each other. Each line means humanX and humanY know each other. Now I want to get a table of people that don't know each other ( assuming this table has all the people ). This would give this :
TABLE DontKnowEachOther
```
Β¦ id_human1 Β¦ id_human2 Β¦
Β¦ 1 Β¦ 2 Β¦
Β¦ 1 Β¦ 12 Β¦
Β¦ 1 Β¦ 13 Β¦
Β¦ 2 Β¦ 10 Β¦
Β¦ 2 Β¦ 11 Β¦
Β¦ 11 Β¦ 10 Β¦
Β¦ 12 Β¦ 10 Β¦
Β¦ 13 Β¦ 10 Β¦
Β¦ 11 Β¦ 12 Β¦
Β¦ 11 Β¦ 13 Β¦
Β¦ 12 Β¦ 13 Β¦
```
Any hints on how to do that would be appreciated. What would be even more appreciated is how you approach such non-trivial task. Trying to get better here ;).
Many thanks
|
First, let us begin with something you forgot. (If you really don't want to have this, it can be computed from the tables you gave, but it is surely better to have it.)
```
CREATE TABLE Humans
(
id int PRIMARY KEY
);
INSERT INTO Humans (id) VALUES (1),(2),(10),(11),(12),(13);
```
Then, here is your table and data:
```
CREATE TABLE KnowEachOther
(
id1 INT,
id2 INT,
PRIMARY KEY ( id1, id2 )
);
INSERT INTO KnowEachOther (id1, id2) VALUES
(1,10),
(1,11),
(2,12),
(2,13);
```
Then, we can declare the following very useful view of all possible relationships:
```
CREATE VIEW AllPossibleRelationships AS SELECT
h1.id AS id1,
h2.id AS id2
FROM Humans AS h1
CROSS JOIN Humans AS h2
WHERE h1.id <> h2.id;
```
And then, we can create a view which removes from "all possible relationships" those rows for which there exist relationships. (See `WHERE k.id1 IS NULL`)
```
CREATE VIEW DontKnowEachOther AS SELECT
a.id1 AS id1,
a.id2 AS id2
FROM AllPossibleRelationships AS a
LEFT JOIN KnowEachOther AS k
ON (a.id1 = k.id1 AND a.id2 = k.id2) OR
 (a.id1 = k.id2 AND a.id2 = k.id1)
WHERE k.id1 IS NULL
ORDER BY a.id1;
```
So, executing `SELECT * FROM DontKnowEachOther;` yields the following:
```
id1 id2
1 2
1 12
1 13
2 1
2 10
2 11
10 2
10 11
10 12
10 13
11 2
11 10
11 12
11 13
12 1
12 10
12 11
12 13
13 1
13 10
13 11
13 12
```
Note: there is a bit of ambiguity with respect to the contents of your `KnowEachOther` table and what it means to "know each other". "Knowing each other" is an undirected relationship, meaning that if A knows B, then B also knows A. In light of this, your "know each other" table can be thought of as implicitly containing more rows; for example, since you have a row for (1, 10), then the row (10, 1) is implied. My results take into account these implied rows, and include all implied and non-implied rows in the results.
Filtering out rows which could be implied is left as an exercise to the reader.
|
You need to first get all possible KnowEachOther-pairs and then select those that are not in KnowEachOther.
As you did not specify a table which would list all humans, you need to use union to combine id\_human1 and id\_human2 from KnowEachOther. By joining the result to itself you will get all possible pairs.
The query will be simpler if you have a separate `humans` table.
```
select *
from
(
select id_human1 as id_human1
from KnowEachOther
union
select id_human2
from KnowEachOther
) humans1
join
(
select id_human1 as id_human2
from KnowEachOther
union
select id_human2
from KnowEachOther
) humans2 on humans1.id_human1!=humans2.id_human2
where not exists (
select *
from KnowEachOther k
where k.id_human1=humans1.id_human1 and k.id_human2=humans2.id_human2
)
order by humans1.id_human1, humans2.id_human2
```
|
How to "reverse" a simple 2 column relationship table
|
[
"",
"mysql",
"sql",
""
] |
I am working in Greenplum - postgresql DB and have below structure of data:
[](https://i.stack.imgur.com/YWE7C.jpg)
In this I need below logic to implement (some of which I already implemented):
```
CASE WHEN PDATE IS NOT NULL THEN to_char(PDATE,'YYYY-MM-DD')
WHEN PDATE IS NULL THEN to_char(NDATE,'YYYY-MM-DD N')
WHEN NDATE IS NULL THEN 'NO PO' ELSE 'NO PO' END
```
According to which I need QTY and VName.
> QTY: Sum(Qty) according to min (PDATE and NDATE)
>
> VName: VName according to min (PDATE and NDATE)
DESIRED OUTPUT:
[](https://i.stack.imgur.com/UD9QT.jpg)
as far I have made below query:
```
SELECT
ITEM ,
MIN(CASE WHEN PDATE IS NOT NULL THEN to_char(PDATE,'YYYY-MM-DD')
WHEN PDATE IS NULL THEN to_char(NDATE,'YYYY-MM-DD N')
WHEN NDATE IS NULL THEN 'NO PO' ELSE 'NO PO' END) AS PRO
FROM
Table
GROUP BY
ITEM
```
Please help me out with the query
|
Anshul, your solution works but it will come with a performance hit as you are joining to your table twice which forces the database to scan your table twice. The better solution is to use an analytical function and only reference the table once.
Here is an example:
```
CREATE TABLE anshul
(
item character varying,
pdate date,
ndate date,
qty integer,
vname character varying
)
WITH (APPENDONLY=true)
DISTRIBUTED BY (item);
INSERT INTO ANSHUL VALUES
('ABC', NULL, '2015-12-31', 10, 'Y JACK SOLLEN'),
('HRD', '2016-01-29', '2016-1-8', 5, 'H HARRIS'),
('HRD', '2015-09-07', '2015-10-09', 31, 'G JOE'),
('HRD', '2015-09-30', '2015-09-07', 28, 'K KAMATH'),
('GGT', '2015-12-10', '2015-12-12', 10, 'P QUIK'),
('GGT', '2015-12-27', NULL, 20, NULL),
('GGT', '2015-12-10', '2016-01-04', 22, 'U RITZ'),
('GGT', '2016-01-07', '2016-01-07', 22, 'S SUE DAL'),
('OWE', NULL, '2015-12-22', 6, 'J JASON NIT'),
('OWE', NULL, '2015-11-05', 2, 'P QUEER'),
('OWE', NULL, '2015-11-05', 5, 'K KITTAN');
```
And here is the query which borrows some of the code you already had figured out.
```
SELECT item,
sum(qty) AS qty,
array_to_string(array_agg(vname), ',') AS vname
FROM (
SELECT item,
rank() OVER(PARTITION BY item ORDER BY desired_date) AS rank,
qty,
vname
FROM (SELECT item,
qty,
vname,
CASE WHEN PDATE IS NOT NULL THEN pdate
WHEN PDATE IS NULL THEN ndate END AS desired_date
FROM anshul
) AS sub1
) AS sub
WHERE sub.rank = 1
GROUP BY item
ORDER BY item;
```
And the results:
```
item | qty | vname
------+-----+------------------
ABC | 10 | Y JACK SOLLEN
GGT | 32 | P QUIK,U RITZ
HRD | 31 | G JOE
OWE | 7 | K KITTAN,P QUEER
```
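The window-function approach ports to SQLite 3.25+ almost unchanged, which makes it easy to test locally (GROUP_CONCAT stands in for array_to_string(array_agg(...)), and COALESCE replaces the CASE expression):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE anshul "
            "(item TEXT, pdate TEXT, ndate TEXT, qty INTEGER, vname TEXT)")
cur.executemany("INSERT INTO anshul VALUES (?, ?, ?, ?, ?)", [
    ('ABC', None, '2015-12-31', 10, 'Y JACK SOLLEN'),
    ('HRD', '2016-01-29', '2016-01-08', 5, 'H HARRIS'),
    ('HRD', '2015-09-07', '2015-10-09', 31, 'G JOE'),
    ('HRD', '2015-09-30', '2015-09-07', 28, 'K KAMATH'),
    ('GGT', '2015-12-10', '2015-12-12', 10, 'P QUIK'),
    ('GGT', '2015-12-27', None, 20, None),
    ('GGT', '2015-12-10', '2016-01-04', 22, 'U RITZ'),
    ('GGT', '2016-01-07', '2016-01-07', 22, 'S SUE DAL'),
    ('OWE', None, '2015-12-22', 6, 'J JASON NIT'),
    ('OWE', None, '2015-11-05', 2, 'P QUEER'),
    ('OWE', None, '2015-11-05', 5, 'K KITTAN'),
])

# RANK over COALESCE(pdate, ndate) keeps ties on the earliest date,
# so GGT's two 2015-12-10 rows are both rank 1.
rows = cur.execute("""
    SELECT item, SUM(qty), GROUP_CONCAT(vname)
    FROM (
        SELECT item, qty, vname,
               RANK() OVER (PARTITION BY item
                            ORDER BY COALESCE(pdate, ndate)) AS rnk
        FROM anshul
    )
    WHERE rnk = 1
    GROUP BY item
    ORDER BY item
""").fetchall()
print(rows)
```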
|
Thanks Tim for your help. It took me some time to create the query, but in the end it's complete. I posted the question on the forum to save time, but it ended up taking time anyway.
Well, here's the query:
```
SELECT
FO.ID ,
(CASE WHEN FO.DateQ IS NOT NULL THEN to_char(FO.DateQ ,'YYYY-MM-DD')
WHEN FO.DateQ IS NULL THEN to_char(FO.Datew ,'YYYY-MM-DD N')
WHEN FO.Datew IS NULL AND FO.DateQ IS NULL THEN 'NO PO' END) AS DATER ,
FO.QTY ,
FO.VNAME
FROM
(
SELECT
NT.ID ,
PT.DATEQ ,
PT.DATEW ,
SUM(NT.QTY) AS QTY ,
array_to_string(array_agg(NT.VNAME) ,', ') AS VNAME
FROM
TABLENAME NT INNER JOIN(
SELECT
AST.ID ,
AST.DateQ ,
(CASE WHEN AST.DateQ IS NULL THEN AST.DateW ELSE NULL END) AS DateW
FROM
(
SELECT
ID ,
MIN(PDATE) AS DATEQ ,
MIN(CASE WHEN pdate IS NULL THEN ndate END) DATEW
FROM
TABLENAME
GROUP BY
ID
) AST
) PT
ON NT.ID = PT.ID
AND NT.PDATE = PT.DATEQ
OR NT.NDATE = PT.DATEW
GROUP BY
NT.ID ,
PT.DATEQ ,
PT.DATEW
) FO
ORDER BY
FO.ID
```
Consider ID as Item.
|
Fetching data and sums according to date in postgresql
|
[
"",
"sql",
"postgresql",
"group-by",
"sum",
"greenplum",
""
] |
That's my db schema
[](https://i.stack.imgur.com/mjLJx.png)
I need to view values from tab3 with order to sortId and id from tab2. So I create query:
```
SELECT * FROM test.tab3 where idTab2 = 1 and sortId = 1;
```
and it's ok.
Now I have a question: given an id from tab1, show the values from tab3. Because tab1 and tab2 have a one-to-many relation, I did something like this:
```
SELECT * FROM test.tab3 where idTab2 = (select id from test.tab2 where idTab1 = 1);
```
but I expected the error:
[](https://i.stack.imgur.com/GQ36P.png)
Now is my question. How to create query or maybe do other thing to create something like this pseudocode:
> ids = select id from test.tab2 where idTab1 = 1
> select \* from test.tab3 where idTab2 = ids[0] and ids[1]
|
Use it as:
```
SELECT * FROM test.tab3 where idTab2 IN (select id from test.tab2 where idTab1 = 1);
```
To understand the MySQL `IN` clause, [see this](http://www.tutorialspoint.com/mysql/mysql-in-clause.htm).
|
If I understood correctly, you want to get all the rows in tab3 related to tab1 through tab2, right? Then you need a *join* between the three tables:
```
SELECT tab3.* FROM tab3
INNER JOIN tab2 ON tab3.idTab2=tab2.id
INNER JOIN tab1 ON tab2.idTab1=tab1.id
WHERE tab1.id=<your parameter>;
```
If you don't need to access any column in tab1 other than `id`, you can leave `tab1` out of the join:
```
SELECT tab3.* FROM tab3
INNER JOIN tab2 ON tab3.idTab2=tab2.id
WHERE tab2.idTab1=<your parameter>;
```
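Both fixes (the `IN` subquery and the join) can be compared side by side in a SQLite sketch with made-up data, keeping only the columns needed for the join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tab2 (id INTEGER, idTab1 INTEGER)")
cur.execute("CREATE TABLE tab3 (id INTEGER, idTab2 INTEGER, val TEXT)")
cur.executemany("INSERT INTO tab2 VALUES (?, ?)", [(1, 1), (2, 1), (3, 2)])
cur.executemany("INSERT INTO tab3 VALUES (?, ?, ?)", [
    (10, 1, 'a'), (11, 2, 'b'), (12, 3, 'c'),
])

# IN accepts the multi-row subquery that '=' rejects.
via_in = cur.execute("""
    SELECT val FROM tab3
    WHERE idTab2 IN (SELECT id FROM tab2 WHERE idTab1 = 1)
""").fetchall()

# The join form produces the same rows.
via_join = cur.execute("""
    SELECT tab3.val FROM tab3
    JOIN tab2 ON tab3.idTab2 = tab2.id
    WHERE tab2.idTab1 = 1
""").fetchall()

print(sorted(via_in), sorted(via_join))
```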
|
How to create query to avoid 'subquery returns more than 1 row'
|
[
"",
"mysql",
"sql",
""
] |
Having stared at the same statements over and over again, I still cannot find the missing right parenthesis. The error appears when running both of these statements, and in order to run the rest of my tables I need to be able to locate it.
I've checked the parentheses used in the statements and all seemed to match up. Can anyone suggest anything?
```
CREATE TABLE booking (
bookingNo NUMBER(8) PRIMARY KEY,
customerNo NUMBER(8) NOT NULL
REFERENCES customer(customerNo),
registrationNo VARCHAR2(10) NOT NULL
REFERENCES vehicle(registrationNo),
dateOfBooking DATE NOT NULL
DEFAULT SYSDATE,
pickupStreetAddressLine VARCHAR2(30) NOT NULL,
pickupTown VARCHAR2(30) NOT NULL,
pickupPostcode VARCHAR2(10) NOT NULL,
startTime NUMBER(4,2) NOT NULL,
startDate DATE NOT NULL
DEFAULT SYSDATE,
endTime NUMBER(4,2) NOT NULL,
endDate DATE NOT NULL
DEFAULTSYSDATE,
noOfPassengers NUMBER(3) NOT NULL
CONSTRAINT CHECK(noOfPassengers > 0 AND noOfPassengers <= 73)
price NUMBER(8,2) NOT NULL
);
CREATE TABLE employees (
nationalInsuranceNo VARCHAR2(10) PRIMARY KEY,
fullName VARCHAR2(50) NOT NULL,
streetAddress VARCHAR2(30) NOT NULL,
town VARCHAR2(30),
postcode VARCHAR2(10) NOT NULL,
homeNo NUMBER(11)
dateOfBirth DATE NOT NULL,
gender VARCHAR2(8) NOT NULL
CONSTRAINT CHECK(gender="Male" OR gender="Female"),
jobDescription VARCHAR2(30) NOT NULL,
currentSalary NUMBER(6) NOT NULL
CONSTRAINT CHECK(currentSalary>0)
);
```
|
Try using a standard indentation scheme to visually verify your formatting.
Here I moved each significant paren to a newline and indented each piece separately.
I put comments next to a few items that seem wrong:
```
CREATE TABLE
booking
(
bookingNo NUMBER(8) PRIMARY KEY,
customerNo NUMBER(8) NOT NULL
REFERENCES customer(customerNo),
registrationNo VARCHAR2(10) NOT NULL
REFERENCES vehicle(registrationNo),
    dateOfBooking           DATE DEFAULT SYSDATE NOT NULL,  -- DEFAULT MUST COME BEFORE NOT NULL IN ORACLE *********
pickupStreetAddressLine VARCHAR2(30) NOT NULL,
pickupTown VARCHAR2(30) NOT NULL,
pickupPostcode VARCHAR2(10) NOT NULL,
startTime NUMBER(4,2) NOT NULL,
    startDate               DATE DEFAULT SYSDATE NOT NULL,  -- DEFAULT MUST COME BEFORE NOT NULL IN ORACLE *********
endTime NUMBER(4,2) NOT NULL,
    endDate                 DATE DEFAULT SYSDATE NOT NULL,  -- THERE WAS A MISSING SPACE HERE; DEFAULT ALSO MOVED BEFORE NOT NULL *********
noOfPassengers NUMBER(3) NOT NULL, -- MISSING COMMA ********
CONSTRAINT Check_noOfPassengers CHECK -- ADD A CONSTRAINT NAME *********
(
noOfPassengers > 0
AND noOfPassengers <= 73
), -- THERE WAS A MISSING COMMA HERE *******************
price NUMBER(8,2) NOT NULL
);
CREATE TABLE
employees
(
nationalInsuranceNo VARCHAR2(10) PRIMARY KEY,
fullName VARCHAR2(50) NOT NULL,
streetAddress VARCHAR2(30) NOT NULL,
town VARCHAR2(30),
postcode VARCHAR2(10) NOT NULL,
homeNo NUMBER(11), -- THERE WAS A MISSING COMMA HERE ***************
dateOfBirth DATE NOT NULL,
gender VARCHAR2(8) NOT NULL, -- MISSING COMMA *****
CONSTRAINT Check_gender CHECK -- ADD A CONSTRAINT NAME *********
(
        gender='Male'   -- ORACLE STRING LITERALS NEED SINGLE QUOTES *********
        OR gender='Female'
),
jobDescription VARCHAR2(30) NOT NULL,
currentSalary NUMBER(6) NOT NULL, -- MISSING COMMA
CONSTRAINT Check_currentSalary CHECK -- ADD A CONSTRAINT NAME *********
(
currentSalary>0
)
);
```
|
I don't have Oracle installed to test, but is there a missing comma for the column noOfPassengers above? More generally, it may not be a missing right parens, but a parens that is out of place because of another syntax error. Hope that helps.
|
SQL Error: ORA-00907: missing right parenthesis - CANNOT FIND ERROR
|
[
"",
"sql",
"oracle",
""
] |
I have the below
```
;with cte as(
Select 1 as Col1 ,NULL as Col2
Union All
Select Null ,2)
select *
from cte
```
Result:
```
Col1 Col2
1 NULL
NULL 2
```
I am looking for
```
Col1 Col2
1 2
```
i.e. results should come in a single row.
How can we achieve this?
|
Without more details about your data, it seems like applying a simple aggregate function like `max` should do what you are after:
```
...
select max(col1) as col1, max(col2) as col2
from cte
```
|
```
SELECT MAX(col1) , MAX(col2) FROM cte
```
|
How to achieve the results into a single row?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have 2 tables, a table Months and a table Ad. I would like to return all distinct emails from Ad per month.
Table Ad : created_at is a DATETIME.
Table Months : id_month (INT and primary key)
[table screenshot](https://i.stack.imgur.com/XECe1.jpg)
I tried with a left join :
```
SELECT Months.id_month,count(distinct Ad.email)
FROM `Months` left outer join `Ad`
ON id_month = MONTH(Ad.created_at)
GROUP BY id_month
ORDER BY id_month ASC
```
But there is no result for december and it doesn't return 0. (only results for months from 1 to 11)
Any idea?
I found that if I add this condition :
```
WHERE YEAR(Ad.created_at)=2015
```
then it doesn't return 0 for december.. why?
|
Based on OP's comments on *@Dhaval Asodariya*'s answer:
You should move the filter on `Ad` into the `ON` condition of the `LEFT JOIN`.
When you put a filter on the right-side table in the `WHERE` clause, the `LEFT OUTER JOIN` is implicitly converted to an `INNER JOIN`.
```
SELECT months.id_month,
Count(DISTINCT ad.email)
FROM `months`
LEFT OUTER JOIN `ad`
ON id_month = Month(ad.created_at)
AND Year(ad.created_at) = 2015
GROUP BY id_month
ORDER BY id_month ASC
```
|
Try this
```
SELECT Months.id_month,count(Ad.email) Emails
FROM `Months`
JOIN `Ad` ON Months.id_month = MONTH(Ad.created_at)
GROUP BY Months.id_month
ORDER BY Months.id_month ASC
```
|
mysql count with 0
|
[
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
I am trying to delete a row in a database, but for some reason it is not working.
```
SELECT * FROM <tablename>
WHERE MAPPING_ID = '66'
AND FIELD_VALUE = 'Sanità'
AND PARENT_VALUE = 'ITA';
```
Result: `66 Sanità ITA`
```
DELETE FROM <tablename>
WHERE MAPPING_ID = '66'
AND FIELD_VALUE = 'Sanità'
AND PARENT_VALUE = 'ITA';
```
Result: `0 rows deleted.`
I am guessing it is due to the à. I already have `SET DEFINE OFF`, but still no go.
Any suggestions?
|
```
DELETE FROM <tablename>
WHERE MAPPING_ID = '66'
AND FIELD_VALUE = N'Sanità'
AND PARENT_VALUE = 'ITA';
```
Try this: the `N` prefix makes the literal a national-character (Unicode) value, so it can match the accented character correctly.
Hope this helps.
|
The syntax of your query may be wrong; the basic syntax is:
```
DELETE FROM table
WHERE conditions;
```
If you want to specify the database name, use this
```
DELETE FROM schema.table
WHERE conditions;
```
|
sql delete not working with à
|
[
"",
"sql",
"oracle",
""
] |
I have a database with 3 columns; the first column is a date (DATE data type). I want to filter results by year, by month, or both. I found this:
```
SELECT * FROM Orders WHERE OrderDate='2015-12-11';
```
But how can I select for example, everything with year 2015 or year 2015 and month 11?
|
You can filter results using MySQL's built-in [date functions](http://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html) `YEAR()` and `MONTH()`:
```
SELECT *
FROM Orders
WHERE
YEAR(OrderDate) = 2015 AND
MONTH(OrderDate) = 11;
```
You can also use these to return additional results:
```
SELECT YEAR(OrderDate) AS OrderYear FROM Orders
```
Or you can sort your results:
```
SELECT * FROM Orders ORDER BY YEAR(OrderDate)
```
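The same filter can be sanity-checked in a quick sqlite3 sketch; sqlite has no `YEAR()`/`MONTH()`, so `strftime` stands in, and the table and values are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderDate TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?)",
                 [("2015-11-03",), ("2015-12-11",), ("2014-11-20",)])

# Equivalent of WHERE YEAR(OrderDate) = 2015 AND MONTH(OrderDate) = 11
rows = conn.execute("""
    SELECT OrderDate FROM Orders
    WHERE strftime('%Y', OrderDate) = '2015'
      AND strftime('%m', OrderDate) = '11'
""").fetchall()
print(rows)  # [('2015-11-03',)]
```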
|
Use `Year` and `Month` function
```
SELECT year(OrderDate) as `Year`,month(OrderDate) as `Month`
FROM Orders
WHERE OrderDate='2015-12-11'
```
|
Take year and month from date
|
[
"",
"mysql",
"sql",
""
] |
Let's say I have a table:
```
LpOpenTradeId LPSource SymbolId Volume CreatedUser CreatedDate
1 2 1 10.00 2 2015-12-11 00:00:00.000
2 2 4 12.00 2 2015-12-11 00:00:00.000
3 2 1 10.00 2 2015-12-11 10:53:00.000
4 2 3 1.00 2 2015-12-11 18:03:14.676
5 2 5 1.00 2 2015-12-14 09:38:33.691
6 2 3 2.00 2 2015-12-14 09:39:30.305
7 2 4 13.00 2 2015-12-14 09:43:13.916
8 3 1 15.00 2 2015-12-11 10:53:00.000
```
I want to select the distinct LPSource and SymbolId columns with the Volumes having max CreatedDates. I mean the target result set is:
```
LPSource SymbolId Volume CreatedDate
2 1 10.00 2015-12-11 10:53:00.000
2 4 13.00 2015-12-14 09:43:13.916
2 3 2.00 2015-12-14 09:39:30.305
2 5 1.00 2015-12-14 09:38:33.691
3 1 15.00 2015-12-11 10:53:00.000
```
How can I express myself to have this resultset in T-SQL?
Thanks,
|
You can use `ROW_NUMBER`:
```
SELECT LPSource, SymbolId, Volume, CreatedDate
FROM (
SELECT LPSource, SymbolId, Volume, CreatedDate,
ROW_NUMBER() OVER (PARTITION BY LPSource, SymbolId
ORDER BY CreatedDate DESC) AS rn
FROM mytable) AS t
WHERE t.rn = 1
```
In case of `CreatedDate` ties, i.e. more than one records sharing the same maximum `CreatedDate` value within the same `LPSource, SymbolId` partition, the above query will randomly select one record. You can use `RANK` to select all records in such a case.
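The same pattern can be tried in sqlite3 (window functions need SQLite ≥ 3.25) with a few rows modeled on the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (LPSource INT, SymbolId INT, Volume REAL, CreatedDate TEXT);
INSERT INTO mytable VALUES
  (2, 1, 10.0, '2015-12-11 00:00:00'),
  (2, 1, 10.0, '2015-12-11 10:53:00'),
  (2, 4, 12.0, '2015-12-11 00:00:00'),
  (2, 4, 13.0, '2015-12-14 09:43:13');
""")

# ROW_NUMBER restarts at 1 per (LPSource, SymbolId); rn = 1 is the latest row
rows = conn.execute("""
    SELECT LPSource, SymbolId, Volume, CreatedDate
    FROM (
        SELECT LPSource, SymbolId, Volume, CreatedDate,
               ROW_NUMBER() OVER (PARTITION BY LPSource, SymbolId
                                  ORDER BY CreatedDate DESC) AS rn
        FROM mytable
    ) AS t
    WHERE t.rn = 1
    ORDER BY SymbolId
""").fetchall()
print(rows)
# [(2, 1, 10.0, '2015-12-11 10:53:00'), (2, 4, 13.0, '2015-12-14 09:43:13')]
```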
|
Use `NOT EXISTS` to return a row if no other row with same `LPSource`/`SymbolId` has (1) a later `CreatedDate`, or (2) same `CreatedDate` but a higher `Volume`.
```
select distinct LPSource, SymbolId, Volume, CreatedDate
from tablename t1
where not exists (select 1 from tablename t2
where t2.LPSource = t1.LPSource
and t2.SymbolId = t1.SymbolId
and (t2.CreatedDate > t1.CreatedDate
or (t2.CreatedDate = t1.CreatedDate and
t2.volume > t1.volume))
```
|
Distinct columns having the max date value
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query:
```
select vrec, valnum, valte from val_tb where
recd in (select recd from rectb where setd = 17)
AND (vid = 3 OR vid = 26 OR vid = 28);
```
For the results above, I get:
```
vrec valnum valte
98945823 NULL Total
98945823 NULL 06001
98945823 16.57 NULL
98945824 NULL Total
98945824 NULL 06005
98945824 0.36 NULL
```
I want to transform it to get:
```
98945823 06001 Total 16.57
98945824 06005 Total 0.36
```
i.e. combine results by vrec.
Is it possible to do this using Oracle SQL?
|
One way to differentiate between `valte` values is to check whether the string contains only digits (a crude solution, but it should work):
```
WITH cte( vrec,valnum, valte) AS
(
SELECT 98945823 AS vrec, NULL AS valnum,'Total' AS valte FROM dual
UNION ALL SELECT 98945823, NULL, '06001' FROM dual
UNION ALL SELECT 98945823, 16.57, NULL FROM dual
UNION ALL SELECT 98945824, NULL, 'Total' FROM dual
UNION ALL SELECT 98945824, NULL, '06005' FROM dual
UNION ALL SELECT 98945824, 0.36, NULL FROM dual
)
SELECT
vrec
,MAX(CASE WHEN REGEXP_LIKE(valte, '^[[:digit:]]*$') THEN valte ELSE NULL END)
,MAX(CASE WHEN NOT REGEXP_LIKE(valte, '^[[:digit:]]*$') THEN valte ELSE NULL END)
,MAX(valnum)
FROM cte
GROUP BY vrec;
```
`SqlFiddleDemo`
Output:
```
βββββββββββββ¦ββββββββββββββββ¦ββββββββββββββββ¦ββββββββββββββ
β VREC β MAX(CASE...) β MAX(CASE...) β MAX(VALNUM) β
β ββββββββββββ¬ββββββββββββββββ¬ββββββββββββββββ¬ββββββββββββββ£
β 98945823 β 06001 β Total β 16.57 β
β 98945824 β 06005 β Total β 0.36 β
βββββββββββββ©ββββββββββββββββ©ββββββββββββββββ©ββββββββββββββ
```
For your case exchange cte hardcoded values with:
```
select vrec, valnum, valte from val_tb where
recd in (select recd from rectb where setd = 17)
AND (vid = 3 OR vid = 26 OR vid = 28);
```
Your data structure is very poor, so this solution is just a workaround. You should really change the underlying structure.
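The conditional-aggregation idea can be tried outside Oracle too. In this sqlite3 sketch, `GLOB '[0-9]*'` stands in for `REGEXP_LIKE` (sqlite has no regex function by default), using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE val_tb (vrec INT, valnum REAL, valte TEXT);
INSERT INTO val_tb VALUES
  (98945823, NULL, 'Total'), (98945823, NULL, '06001'), (98945823, 16.57, NULL),
  (98945824, NULL, 'Total'), (98945824, NULL, '06005'), (98945824, 0.36, NULL);
""")

# Pivot the three rows per vrec into one: digit-only valte, non-digit valte, valnum
rows = conn.execute("""
    SELECT vrec,
           MAX(CASE WHEN valte GLOB '[0-9]*' THEN valte END)     AS code,
           MAX(CASE WHEN valte NOT GLOB '[0-9]*' THEN valte END) AS label,
           MAX(valnum)                                           AS valnum
    FROM val_tb
    GROUP BY vrec
    ORDER BY vrec
""").fetchall()
print(rows)
# [(98945823, '06001', 'Total', 16.57), (98945824, '06005', 'Total', 0.36)]
```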
|
You are right that this is the simplest solution... but you missed a group by:
```
select vrec, MAX(valnum),'Total' ,MAX(valte)
from val_tb
where recd in (select recd from rectb where setd = 17)
AND (vid = 3 OR vid = 26 OR vid = 28)
AND valte <>'Total' --<< Lines with constant 'Total' are of no use...
GROUP BY vrec;
```
|
Oracle transform table from row to column
|
[
"",
"sql",
"oracle",
""
] |