Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have a table
```
Name Description EID
name1 ssdad 1001
name2 gfd 1002
name3 gfdsffsdf 1003
name4 ssdad 1004
name5 gfd 1005
name6 gfdsffsdf 1006
```
I want to prepend the letter **C** to EIDs 1001, 1002 and 1003 only, like C1001, C1002, C1003. How do I specify that in a SQL query?
```
Name Description EID
name1 ssdad C1001
name2 gfd C1002
name3 gfdsffsdf C1003
name4 ssdad 1004
name5 gfd 1005
name6 gfdsffsdf 1006
``` | Since you're targeting only 3 values, you could include them in a list.
```
SELECT
EID,
IIf(EID In ('1001','1002','1003'), 'C', '') & EID AS adjusted_EID
FROM YourTable;
```
If you actually want to change the stored values, you can use that list in the `WHERE` clause of an `UPDATE`.
```
UPDATE YourTable
SET EID = 'C' & EID
WHERE EID In ('1001','1002','1003');
```
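For what it's worth, the same two queries can be sketched in standard SQL (swapping Access's `IIf` and `&` for `CASE` and `||`) and tried out with Python's built-in `sqlite3`; the table and column names below are just the placeholders from above:

```python
import sqlite3

# In-memory stand-in for the Access table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (Name TEXT, EID TEXT)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?)",
                 [("name1", "1001"), ("name2", "1002"),
                  ("name3", "1003"), ("name4", "1004")])

# SELECT version: prefix 'C' only for the listed EIDs, leave the rest alone.
rows = conn.execute("""
    SELECT Name,
           CASE WHEN EID IN ('1001', '1002', '1003')
                THEN 'C' || EID ELSE EID END AS adjusted_EID
    FROM YourTable
    ORDER BY Name
""").fetchall()
print(rows)  # [('name1', 'C1001'), ('name2', 'C1002'), ('name3', 'C1003'), ('name4', '1004')]

# UPDATE version: actually change the stored values for the same three EIDs.
conn.execute("UPDATE YourTable SET EID = 'C' || EID "
             "WHERE EID IN ('1001', '1002', '1003')")
```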
If you had many more target values, such a list would become unwieldy. In that case, you might prefer a different approach to check whether the current value is one of your target values. | You can use IIF to check a condition and then do something accordingly. In your case, you can check if `EID` is less than 1004, and prepend 'C' to the result if it is. So if you only want to select, the query will be:
```
select name, description, IIF(val(EID) < 1004, "C" & EID, EID)
from yourtable
```
If you are updating, just change it to
```
update yourtable
set EID = "C"& EID
where val(EID) < 1004
``` | How to append a letter in column values access sql | [
"",
"sql",
"ms-access",
""
] |
I have a table named studios with the columns "earnings" and "country". Each row of that table describes a studio in a country and its earnings.
How can I select the average earnings of the studios for each country?
```
SELECT AVG(earnings) --I want per country
FROM studios
``` | ```
SELECT country,AVG(earnings) FROM studios GROUP BY country;
``` | You can do that: `"SELECT AVG(earnings) FROM studios GROUP BY country"`
If you have a column country\_id, it may be preferable to group by that.
You can also use this query `"SELECT AVG(earnings),country FROM studios GROUP BY country"` | How to select avg(value) per column_name in sql | [
"",
"sql",
"group-by",
"average",
""
] |
I have two models, `Recipes` and `Skills`. In this scenario, a skill is a cooking technique, like baking, frying, etc. So each recipe has a certain set of associated skills.
I want to search all the recipes like this:
1. Find all recipes that use any given set of skills (e.g. `Baking` OR `Frying` OR both)
**EDIT:** This should NOT return recipes that require a skill that wasn't in the search query - e.g. If I search for skills [1, 2] I don't want a recipe that uses skills [1, 2, 4] or any other superset.
2. If you add a new skill to the search, return just the additional recipes (e.g. if you add `Boiling` to the previous query of `Baking` or `Frying`, how many NEW recipes can you now cook?)
I currently have this working in Rails using plain old Ruby methods:
```
class Recipe < ActiveRecord::Base
has_many :practices
has_many :skills, through: :practices
def self.find_recipes_that_require_any_of_these_skills(*known_skill_ids)
self.select do |recipe|
recipe.skill_ids.all? do |skill_id|
known_skill_ids.include?(skill_id)
end
end
end
# calls the above method twice, once with the new skill and once without
# and subtracts one result from the other
def self.find_newly_unlocked_recipes(*prior_skill_ids, new_skill_id)
self.find_recipes_that_require_any_of_these_skills(*(prior_skill_ids + [new_skill_id])) - self.find_recipes_that_require_any_of_these_skills(*prior_skill_ids)
end
end
```
In Rails console: `Recipe.find_recipes_that_require_any_of_these_skills(1,4)`
returns all the Recipe objects for skill 1, skill 4, or skills 1 & 4.
But this is inefficient because it generates a SQL query for every single recipe in my database.
**How can I write these queries the ActiveRecord/SQL way?** | Two queries to DB:
```
def self.find_recipes_that_require_any_of_these_skills(*known_skill_ids)
Recipe.joins(:skills)
.merge(Skill.where(id: known_skill_ids))
.where("recipes.id NOT IN (?)", Recipe.joins(:skills).merge(Skill.where("skills.id NOT IN (?)", known_skill_ids)).uniq.pluck(:id)).uniq
end
``` | ```
def self.find_recipes_that_require_any_of_these_skills(*known_skill_ids)
self.includes(:skills).where(skills: { id: known_skill_ids })
end
``` | Rails query with multiple conditions on the same field | [
"",
"sql",
"ruby-on-rails",
"ruby",
"postgresql",
"activerecord",
""
] |
This is MS SQL.
There are many duplicated rows of data; some of them have the latest update, some do not.
I want to update the old rows with the ones that have the latest info.
```
from:
orderNum itemID orderTime desc
7247168 101 2013-08-11 09:51:39.20 desc_101_cc
102594 101 2012-09-26 21:17:50.44 desc_101_aaa
631595 101 2014-03-11 19:51:29.40 desc_101_ddd
1157428 235 2014-03-01 10:16:42.43 desc_235_8
7212306 235 2014-03-14 11:26:51.29 desc_235_2
100611 235 2014-03-21 20:23:43.03 desc_235_2
```
To:
```
orderNum itemID orderTime desc
7247168 101 2013-08-11 09:51:39.20 desc_101_ddd
102594 101 2012-09-26 21:17:50.44 desc_101_ddd
631595 101 2014-03-11 19:51:29.40 desc_101_ddd
1157428 235 2014-03-01 10:16:42.43 desc_235_2
7212306 235 2014-03-14 11:26:51.29 desc_235_2
100611 235 2014-03-21 20:23:43.03 desc_235_2
```
I want to use the `max(orderTime)` to get the latest edition of `desc`
then use it to update other `desc`
That means I'd like to use the `orderTime` to tell which `desc` is the latest,
then update the other `desc` values.
**The only column needs to be updated is the `desc`**
Please help me with this SQL | If you are using SQL Server 2012, you can do this with `last_value`:
```
with toupdate as (
select t.*,
         last_value("desc") over (partition by itemID order by orderTime
                                  rows between unbounded preceding and unbounded following) as lastdesc
from table t
)
update toupdate
set "desc" = lastdesc;
```
If you are not using SQL Server 2012, you can emulate this with a correlated subquery:
```
with toupdate as (
select t.*,
(select top 1 "desc"
from table t2
where t2.itemId = t.itemId
order by orderTime desc
) as lastdesc
from table t
)
update toupdate
set "desc" = lastdesc;
``` | Something like this (won't work in SQL Server 2000 or earlier)? Don't try this on a production table; make a temporary copy table to try it.
```
;WITH MaxT AS (
SELECT
itemID
,maxOrderTime = MAX(orderTime)
 FROM
  myTable
 GROUP BY
  itemID
 ),
MaxTDesc AS (
SELECT
itemID
,desc
FROM
myTable MaxTDesc
,MaxT
WHERE
MaxTDesc.ItemID = MaxT.ItemID
AND MaxTDesc.orderTime = MaxT.maxOrderTime
)
UPDATE
mt
SET
mt.desc = MaxTDesc.desc
FROM
 myTable mt, MaxTDesc
WHERE
mt.itemID = MaxTDesc.itemID
``` | SQL update other rows by one latest date row | [
"",
"sql",
"sql-server",
""
] |
I am trying to create a simple trigger in an Oracle 10g database. This script to create the trigger runs cleanly.
```
CREATE OR REPLACE TRIGGER newAlert
AFTER INSERT OR UPDATE ON Alerts
BEGIN
INSERT INTO Users (userID, firstName, lastName, password) VALUES ('how', 'im', 'testing', 'this trigger')
END;
/
```
But when I run:
```
INSERT INTO Alerts(observationID, dateSent, message, dateViewed) VALUES (3, CURRENT_TIMESTAMP, 'Alert: You have exceeded the Max Threshold', NULL);
```
to activate the trigger, I get this error message:
> ORA-04098: trigger 'JMD.NEWALERT' is invalid and failed re-validation
> (0 rows affected)
I don't understand what causes this error. Do you know what causes it, or why this is happening?
Thank you in advance!
-David | Oracle will try to recompile invalid objects as they are referred to. Here the trigger is invalid, and every time you try to insert a row it will try to recompile the trigger, and fail, which leads to the ORA-04098 error.
You can `select * from user_errors where type = 'TRIGGER' and name = 'NEWALERT'` to see what error(s) the trigger actually gets and why it won't compile. In this case it appears you're missing a semicolon at the end of the `insert` line:
```
INSERT INTO Users (userID, firstName, lastName, password)
VALUES ('how', 'im', 'testing', 'this trigger')
```
So make it:
```
CREATE OR REPLACE TRIGGER newAlert
AFTER INSERT OR UPDATE ON Alerts
BEGIN
INSERT INTO Users (userID, firstName, lastName, password)
VALUES ('how', 'im', 'testing', 'this trigger');
END;
/
```
If you get a compilation warning when you do that you can do `show errors` if you're in SQL\*Plus or SQL Developer, or query `user_errors` again.
Of course, this assumes your `Users` table does have those column names, and they are all `varchar2`... but presumably you'll be doing something more interesting with the trigger really. | **Cause**: A trigger was attempted to be retrieved for execution and was found to be invalid. This also means that compilation/authorization failed for the trigger.
**Action**: Options are to resolve the compilation/authorization errors, disable the trigger, or drop the trigger.
**Syntax**
```
ALTER TRIGGER trigger_Name DISABLE;
ALTER TRIGGER trigger_Name ENABLE;
``` | Oracle Trigger ORA-04098: trigger is invalid and failed re-validation | [
"",
"sql",
"oracle",
"oracle10g",
"ora-04098",
""
] |
I came across a question that states
Consider the following relation schema pertaining to a students
* database: Student (**rollno**, name, address)
* Enroll (**rollno, courseno**, coursename)
where the primary keys are shown underlined. The number of tuples in the
Student and Enroll tables are 120 and 8 respectively. What are the maximum
and minimum number of tuples that can be present in (Student \* Enroll),
where '\*' denotes natural join ?
I have seen several solutions on the Internet like [this](http://learn.hackerearth.com/question/79/maximum-and-minimum-number-of-tuples-in-a-natural-join/) or [this](http://www.sitams.org/cource/gate/it/DBMS_IT_GATE.pdf)
As per my understanding, the maximum number of tuples should be 8 and the minimum should be 8 as well, since for each (rollno, courseno) there should be a rollno in Student. Can anyone help in this regard? | If there was a referential constraint in place ensuring that every rollno in Enroll must also appear in Student then your answer of 8 for both minimum and maximum would be correct. The question doesn't actually mention any such constraint however. There's no need to assume that the RI constraint exists just because the rollno attribute appears in both tables. So the best answer is 0 minimum and 8 maximum. If it's a multiple-choice question and 0,8 isn't one of the given answers then answer 8,8 instead - and tell your teacher that the question is unclear. | I hope you understand what a natural join exactly is. You can review [here](http://www.w3resource.com/sql/joins/natural-join.php).
If tables R and S contain a common attribute and the value of that attribute is the same in every tuple of both tables, then the natural join will result in n\*m tuples, as it returns all combinations of tuples.
Consider following two tables
Table R (With attributes A and C)
```
A | C
----+----
1 | 2
3 | 2
```
Table S (With attributes B and C)
```
B | C
----+----
4 | 2
5 | 2
6 | 2
```
Result of natural join R \* S (If domain of attribute C in the two tables are same )
```
A | B | C
---+---+----
1 | 4 | 2
1 | 5 | 2
1 | 6 | 2
3 | 4 | 2
3 | 5 | 2
3 | 6 | 2
```
You can see both R and S contain the attribute C whose value is 2 in each and every tuple. Table R contains 2 tuples, Table S contains 3 tuples, where Result table contains 2\*3=6 tuples.
Moreover, if there are no common attributes between the two relations, the **natural join will behave as a Cartesian product**. In that case, you'll obviously have m x n as the maximum number of tuples.
Consider following two tables
Table R (With attributes A and B)
```
A | B
----+----
1 | 2
3 | 2
```
Table S (With attributes C and D)
```
C | D
----+----
4 | 2
5 | 2
```
Result of natural join R \* S
```
A | B | C | D
---+---+----+----
1 | 2 | 4 | 2
1 | 2 | 5 | 2
3 | 2 | 4 | 2
3 | 2 | 5 | 2
```
Hope this helps. | maximum and minimum number of tuples in natural join | [
"",
"sql",
"join",
"relational-database",
"tuples",
"natural-join",
""
] |
I'm new to SQL. I have a table 'Customers' and it looks like this.

I would like to select 'Gender' as a Gender column and 'Age' as an Age column, which would look like this.

I've tried several ways but it still doesn't show what I need. Please help me. | One way to go about it is to use conditional aggregation
```
SELECT name,
MAX(CASE WHEN field = 'Gender' THEN value END) gender,
MAX(CASE WHEN field = 'Age' THEN value END) age
FROM customers
GROUP BY name
```
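If you want to try the conditional-aggregation trick out quickly, here is a sketch using Python's bundled `sqlite3` with made-up sample data matching the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, field TEXT, value TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    ("Angela", "Gender", "Female"), ("Angela", "Age", "28"),
    ("Davis",  "Gender", "Male"),   ("Davis",  "Age", "30"),
])

# One output row per name; the CASE yields NULL for non-matching rows,
# and MAX then picks the single non-NULL value in each group.
rows = conn.execute("""
    SELECT name,
           MAX(CASE WHEN field = 'Gender' THEN value END) AS gender,
           MAX(CASE WHEN field = 'Age'    THEN value END) AS age
    FROM customers
    GROUP BY name
    ORDER BY name
""").fetchall()
print(rows)  # [('Angela', 'Female', '28'), ('Davis', 'Male', '30')]
```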
The other way (if you're interested only in these two columns) would be
```
SELECT c1.name, c1.value gender, c2.value age
FROM customers c1 JOIN customers c2
ON c1.name = c2.name
AND c1.field = 'Gender'
AND c2.field = 'Age';
```
The assumption is that both Gender and Age exist for each Name. If that's not the case, then use an `OUTER JOIN` instead of an `INNER JOIN`, like so
```
SELECT n.name, c1.value gender, c2.value age
FROM
(
SELECT DISTINCT name
FROM customers
) n LEFT JOIN customers c1
ON n.name = c1.name AND c1.field = 'Gender'
LEFT JOIN customers c2
ON n.name = c2.name AND c2.field = 'Age';
```
Output:
```
| NAME | GENDER | AGE |
|--------|--------|-----|
| Angela | Female | 28 |
| Davis | Male | 30 |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/c73934/5)** demo | I have not tested this, but give something like this a try:
```
Select c.Name,
(Select c2.Value from customers c2 Where c2.Name = c.Name And c2.Field = 'Gender') as Gender,
(Select c2.Value from customers c2 Where c2.Name = c.Name And c2.Field = 'Age') as Age
From Customers c
Group By c.Name
```
PS I apologize for the awful formatting...had to type this from my phone. | How to select Column value as Column name with conditions in SQL table | [
"",
"mysql",
"sql",
""
] |
I am using SQL Server 2012 SP1. I have a table column with the following datetime values.
```
BLDCHKDT
-----------------------
2013-06-19 00:00:00.000
2013-07-22 00:00:00.000
2013-08-21 00:00:00.000
2013-09-20 00:00:00.000
2013-11-18 00:00:00.000
```
I would like to retrieve the date and Time in the following formats:
```
Date: 19062013
Time: 00000000
```
Is it possible? I have reviewed the SQL Server help documentation for the FORMAT, CAST and CONVERT functions and I can't seem to make any headway.
So far I have attempted the following conversions:
*N.B. Please note that I am converting the datetime to a string to facilitate a flat-file export.*
--Retrieving Date
```
SELECT [DATE-BLDCHKD] = CONVERT (VARCHAR (20), BLDCHKDT, 112)
FROM TABLEA
DATE-BLDCHKD
--------------------
20130619
20130722
20130821
20130920
20131118
--Retrieving Time
SELECT [TIME-BLDCHKD] = CONVERT (VARCHAR (20), BLDCHKDT, 24)
FROM TABLEA
TIME-BLDCHKD
--------------------
00:00:00
00:00:00
00:00:00
00:00:00
00:00:00
00:00:00
00:00:00
00:00:00
```
I appreciate your insights. Thank you in advance. | You can do it this way:
```
declare @value datetime = getdate()
select
replace(convert(varchar(20), @value, 104), '.', '') date_part,
left(replace(convert(varchar(20), @value, 114), ':', ''), 8) time_part
```
returns 23032014, 17174466 for 2014-03-23 17:17:44.660 | Since you're using SQL Server 2012 you can use the `FORMAT()` function:
```
SELECT FORMAT(BLDCHKDT,'ddMMyyyy')
     , FORMAT(BLDCHKDT,'HHmmssfff')
``` | Date and Time Format Conversion in SQL Server 2012 | [
"",
"sql",
"sql-server",
"date",
"datetime",
"sql-server-2012",
""
] |
Write SQL code to find the **top 6** Client Country for each company.

How should I approach this query? So far I am thinking:
```
select top 6 Client_Country, count(*) Total
from table group by Client_Country
order by total desc
To get the top 6 for each company, you'll need to use a windowing function. Otherwise, you'll just get the top 6 overall.
I'm not sure what RDBMS you're using, but in SQL Server you can do something like this:
```
;WITH top_cte AS
(
SELECT *, ROW_NUMBER() OVER(PARTITION BY Company ORDER BY Revenue DESC) AS [Rank]
FROM table
)
SELECT *
FROM top_cte
WHERE [Rank] <= 6
```
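The same "number the rows within each group, keep the first N" idea can be sketched outside SQL as well; here is a plain-Python illustration (the company/country/revenue sample data is invented):

```python
from itertools import groupby

rows = [
    {"company": "A", "client_country": "US", "revenue": 90},
    {"company": "A", "client_country": "UK", "revenue": 70},
    {"company": "A", "client_country": "DE", "revenue": 50},
    {"company": "B", "client_country": "FR", "revenue": 80},
    {"company": "B", "client_country": "JP", "revenue": 60},
]

def top_n_per_group(rows, n):
    # Sort by (partition key, ORDER BY key descending), then keep n rows per
    # partition -- the moral equivalent of
    # ROW_NUMBER() OVER (PARTITION BY Company ORDER BY Revenue DESC) <= n.
    ordered = sorted(rows, key=lambda r: (r["company"], -r["revenue"]))
    result = []
    for _, group in groupby(ordered, key=lambda r: r["company"]):
        result.extend(list(group)[:n])
    return result

top2 = top_n_per_group(rows, 2)
print([(r["company"], r["client_country"]) for r in top2])
# [('A', 'US'), ('A', 'UK'), ('B', 'FR'), ('B', 'JP')]
```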
EDIT: Edited to use `ROW_NUMBER` instead of `RANK`. The one issue with `RANK` is that if you had, say, 3 companies tied for 5th place, you'd get 8 results back instead of 6. | Try this
```
SELECT * FROM
(
SELECT *,Row_Number() Over(Partition By Company Order By (Select Null)) RN FROM TAble1
) AS T
Where RN < 7
``` | SQL code to find the top 6 | [
"",
"sql",
""
] |
I have a column whose type is integer array. How can I merge the arrays from all rows into a single integer array?
For example: If I execute query:
```
select column_name from table_name
```
I get result set as:
```
-[RECORD 1]----------
column_name | {1,2,3}
-[RECORD 2]----------
column_name | {4,5}
```
How can I get `{1,2,3,4,5}` as final result? | You could use [`unnest`](http://www.postgresql.org/docs/current/interactive/functions-array.html#ARRAY-FUNCTIONS-TABLE) to open up the arrays and then [`array_agg`](http://www.postgresql.org/docs/current/interactive/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) to put them back together:
```
select array_agg(c)
from (
select unnest(column_name)
from table_name
) as dt(c);
``` | Define a trivial custom aggregate:
```
CREATE AGGREGATE array_cat_agg(anyarray) (
SFUNC=array_cat,
STYPE=anyarray
);
```
and use it:
```
WITH v(a) AS ( VALUES (ARRAY[1,2,3]), (ARRAY[4,5,6,7]))
SELECT array_cat_agg(a) FROM v;
```
If you want a particular order, put it within the aggregate call, i.e. `array_cat_agg(a ORDER BY ...)`
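Whichever SQL route you take, the underlying operation is just flatten-then-collect; purely for intuition, a Python analogue:

```python
from itertools import chain

# Each element stands in for one row's integer-array column value.
column_values = [[1, 2, 3], [4, 5]]

# Flatten all the per-row arrays into one list, preserving order.
merged = list(chain.from_iterable(column_values))
print(merged)  # [1, 2, 3, 4, 5]
```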
This is roughly `O(n²)` for n rows, so it is unsuitable for long sets of rows. For better performance you'd need to write it in C, where you can use the more efficient (but horrible to use) C API for PostgreSQL arrays to avoid re-copying the array each iteration. | How to merge all integer arrays from all records into single array in postgres | [
"",
"sql",
"arrays",
"postgresql",
""
] |
The title above is unclear, so let me show you what I mean.
I have a table of the following form
```
name ---- measure ----- value
xxx-----------m1--------------x1
xxx-----------m2--------------x2
xxx-----------m3--------------x3
yyy-----------m1--------------y1
yyy-----------m2--------------y2
yyy-----------m3--------------y3
```
---
I want to convert the above table to the form below
```
name --- m1 --- m2 --- m3
xxx------x1-----x2-----x3
yyy------y1-----y2-----y3
``` | You can do this to pivot your table
```
select name,
MAX(case when measure = 'm1' then value end) m1,
MAX(case when measure = 'm2' then value end) m2,
MAX(case when measure = 'm3' then value end) m3
from tableName
group by name
```
[**SQL FIDDLE DEMO**](http://www.sqlfiddle.com/#!12/5b5ff/1)
To handle pivoting when the column values are not fixed, you should try doing this using the [crosstab](http://rowsandcolumns.blogspot.in/2011/10/postgresql-crosstab-query-rotate-table.html) function | If the columns `m1`, `m2` and `m3` are fixed then this is the way:
```
UPDATE tbl
SET m1 = t.value
FROM tbl t
WHERE tbl.name = t.name AND measure = 'm1';
```
And similarly for `m2` and `m3`.
Note that this will leave duplicate rows. To remove them, you should do this:
```
DELETE FROM tbl WHERE measure = 'm2' OR measure = 'm3';
``` | sql query to convert existing column entries into row headers | [
"",
"sql",
"database",
"postgresql",
"pivot",
""
] |
I have a query that was written for SQL Server 2012, and uses the `try_convert()` function. I now need to execute the query on SQL Server 2008 R2 Express, and it is my understanding that `try_convert()` isn't a supported function in 2008 R2.
My current query contains this block:
```
CASE WHEN (try_convert(decimal, tew_userdata_locTo.use_data0) IS NULL)
THEN .. ELSE .. END
```
This gives me a true or false result letting me know if I can convert the particular value to a decimal. If I can convert the value, I do so. If not, I use a default decimal value and carry on with that.
Any ideas on what I could write that would work in SQL Server 2008 R2? | `try_convert()` attempts to do the conversion. This is a nice way of handling conversion errors (or at least better than SQL Server previously did, which was nothing). You can use `like` to see if the string "looks" like a valid decimal. Here is one attempt:
```
CASE WHEN tew_userdata_locTo.use_data0 not like '%[^0-9.]%' and
tew_userdata_locTo.use_data0 not like '%.%.%'
THEN convert(decimal, tew_userdata_locTo.use_data0)
           ELSE .. END
``` | I wrote a useful scalar function to simulate the TRY\_CAST function of SQL SERVER 2012 in SQL Server 2008.
```
dbo.TRY_CAST(Expression, Data_Type, ReturnValueIfErrorCast)
```
> The two main differences with the TRY\_CAST function of SQL Server 2012 are that you must pass 3 parameters and you
> must additionally perform an explicit CONVERT or CAST on the field.
> However, it is still very useful because it allows you to return a
> default value if CAST is not performed correctly.
**FUNCTION CODE:**
```
DECLARE @strSQL NVARCHAR(1000)
IF NOT EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[TRY_CAST]'))
BEGIN
SET @strSQL = 'CREATE FUNCTION [dbo].[TRY_CAST] () RETURNS INT AS BEGIN RETURN 0 END'
EXEC sys.sp_executesql @strSQL
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/*
------------------------------------------------------------------------------------------------------------------------
Description:
Syntax
---------------
dbo.TRY_CAST(Expression, Data_Type, ReturnValueIfErrorCast)
+---------------------------+-----------------------+
| Expression | VARCHAR(8000) |
+---------------------------+-----------------------+
| Data_Type | VARCHAR(8000) |
+---------------------------+-----------------------+
| ReturnValueIfErrorCast | SQL_VARIANT = NULL |
+---------------------------+-----------------------+
Arguments
---------------
expression
The value to be cast. Any valid expression.
Data_Type
The data type into which to cast expression.
ReturnValueIfErrorCast
Value returned if cast fails or is not supported. Required. Set the DEFAULT value by default.
Return Type
----------------
Returns value cast to SQL_VARIANT type if the cast succeeds; otherwise, returns null if the parameter @pReturnValueIfErrorCast is set to DEFAULT,
or that the user indicates.
Remarks
----------------
dbo.TRY_CAST function simulates the TRY_CAST function reserved of SQL SERVER 2012 for using in SQL SERVER 2008.
dbo.TRY_CAST function takes the value passed to it and tries to convert it to the specified Data_Type.
If the cast succeeds, dbo.TRY_CAST returns the value as SQL_VARIANT type; if the cast doesn't succeed, null is returned if the parameter @pReturnValueIfErrorCast is set to DEFAULT.
If the Data_Type is unsupported will return @pReturnValueIfErrorCast.
dbo.TRY_CAST function requires user make an explicit CAST or CONVERT in ANY statements.
This version of dbo.TRY_CAST only supports CAST for INT, DATE, NUMERIC and BIT types.
Examples
====================================================================================================
--A. Test TRY_CAST function returns null
SELECT
CASE WHEN dbo.TRY_CAST('6666666166666212', 'INT', DEFAULT) IS NULL
THEN 'Cast failed'
ELSE 'Cast succeeded'
END AS Result;
GO
--B. Error Cast With User Value
SELECT
dbo.TRY_CAST('2147483648', 'INT', DEFAULT) AS [Error Cast With DEFAULT],
dbo.TRY_CAST('2147483648', 'INT', -1) AS [Error Cast With User Value],
dbo.TRY_CAST('2147483648', 'INT', NULL) AS [Error Cast With User NULL Value];
GO
--C. Additional CAST or CONVERT required in any assignment statement
DECLARE @IntegerVariable AS INT
SET @IntegerVariable = CAST(dbo.TRY_CAST(123, 'INT', DEFAULT) AS INT)
SELECT @IntegerVariable
GO
IF OBJECT_ID('tempdb..#temp') IS NOT NULL
DROP TABLE #temp
CREATE TABLE #temp (
Id INT IDENTITY
, FieldNumeric NUMERIC(3, 1)
)
INSERT INTO dbo.#temp (FieldNumeric)
SELECT CAST(dbo.TRY_CAST(12.3, 'NUMERIC(3,1)', 0) AS NUMERIC(3, 1));--Need explicit CAST on INSERT statements
SELECT *
FROM #temp
DROP TABLE #temp
GO
--D. Supports CAST for INT, DATE, NUMERIC and BIT types.
SELECT dbo.TRY_CAST(2147483648, 'INT', 0) AS [Cast failed]
, dbo.TRY_CAST(2147483647, 'INT', 0) AS [Cast succeeded]
, SQL_VARIANT_PROPERTY(dbo.TRY_CAST(212, 'INT', 0), 'BaseType') AS [BaseType];
SELECT dbo.TRY_CAST('AAAA0101', 'DATE', DEFAULT) AS [Cast failed]
, dbo.TRY_CAST('20160101', 'DATE', DEFAULT) AS [Cast succeeded]
, SQL_VARIANT_PROPERTY(dbo.TRY_CAST('2016-01-01', 'DATE', DEFAULT), 'BaseType') AS [BaseType];
SELECT dbo.TRY_CAST(1.23, 'NUMERIC(3,1)', DEFAULT) AS [Cast failed]
, dbo.TRY_CAST(12.3, 'NUMERIC(3,1)', DEFAULT) AS [Cast succeeded]
, SQL_VARIANT_PROPERTY(dbo.TRY_CAST(12.3, 'NUMERIC(3,1)', DEFAULT), 'BaseType') AS [BaseType];
SELECT dbo.TRY_CAST('A', 'BIT', DEFAULT) AS [Cast failed]
, dbo.TRY_CAST(1, 'BIT', DEFAULT) AS [Cast succeeded]
, SQL_VARIANT_PROPERTY(dbo.TRY_CAST('123', 'BIT', DEFAULT), 'BaseType') AS [BaseType];
GO
--E. B. TRY_CAST return NULL on unsupported data_types
SELECT dbo.TRY_CAST(4, 'xml', DEFAULT) AS [unsupported];
GO
====================================================================================================
Responsible: Javier Pardo
Date: December 29, 2016
WB tests: Javier Pardo
------------------------------------------------------------------------------------------------------------------------
*/
ALTER FUNCTION dbo.TRY_CAST
(
@pExpression AS VARCHAR(8000),
@pData_Type AS VARCHAR(8000),
@pReturnValueIfErrorCast AS SQL_VARIANT = NULL
)
RETURNS SQL_VARIANT
AS
BEGIN
--------------------------------------------------------------------------------
-- INT
--------------------------------------------------------------------------------
IF @pData_Type = 'INT'
BEGIN
IF ISNUMERIC(@pExpression) = 1
BEGIN
DECLARE @pExpressionINT AS FLOAT = CAST(@pExpression AS FLOAT)
IF @pExpressionINT BETWEEN - 2147483648.0 AND 2147483647.0
BEGIN
RETURN CAST(@pExpressionINT as INT)
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END --FIN IF @pExpressionINT BETWEEN - 2147483648.0 AND 2147483647.0
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END -- FIN IF ISNUMERIC(@pExpression) = 1
END -- FIN IF @pData_Type = 'INT'
--------------------------------------------------------------------------------
-- DATE
--------------------------------------------------------------------------------
IF @pData_Type = 'DATE'
BEGIN
IF ISDATE(@pExpression) = 1
BEGIN
DECLARE @pExpressionDATE AS DATE = cast(@pExpression AS DATE)
RETURN cast(@pExpressionDATE as DATE)
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END --FIN IF ISDATE(@pExpression) = 1
END --FIN IF @pData_Type = 'DATE'
--------------------------------------------------------------------------------
-- NUMERIC
--------------------------------------------------------------------------------
IF @pData_Type LIKE 'NUMERIC%'
BEGIN
IF ISNUMERIC(@pExpression) = 1
BEGIN
DECLARE @TotalDigitsOfType AS INT = SUBSTRING(@pData_Type,CHARINDEX('(',@pData_Type)+1, CHARINDEX(',',@pData_Type) - CHARINDEX('(',@pData_Type) - 1)
, @TotalDecimalsOfType AS INT = SUBSTRING(@pData_Type,CHARINDEX(',',@pData_Type)+1, CHARINDEX(')',@pData_Type) - CHARINDEX(',',@pData_Type) - 1)
, @TotalDigitsOfValue AS INT
, @TotalDecimalsOfValue AS INT
, @TotalWholeDigitsOfType AS INT
, @TotalWholeDigitsOfValue AS INT
SET @pExpression = REPLACE(@pExpression, ',','.')
SET @TotalDigitsOfValue = LEN(REPLACE(@pExpression, '.',''))
SET @TotalDecimalsOfValue = CASE Charindex('.', @pExpression)
WHEN 0
THEN 0
ELSE Len(Cast(Cast(Reverse(CONVERT(VARCHAR(50), @pExpression, 128)) AS FLOAT) AS BIGINT))
END
SET @TotalWholeDigitsOfType = @TotalDigitsOfType - @TotalDecimalsOfType
SET @TotalWholeDigitsOfValue = @TotalDigitsOfValue - @TotalDecimalsOfValue
-- The total digits can not be greater than the p part of NUMERIC (p, s)
-- The total of decimals can not be greater than the part s of NUMERIC (p, s)
-- The total digits of the whole part can not be greater than the subtraction between p and s
IF (@TotalDigitsOfValue <= @TotalDigitsOfType) AND (@TotalDecimalsOfValue <= @TotalDecimalsOfType) AND (@TotalWholeDigitsOfValue <= @TotalWholeDigitsOfType)
BEGIN
DECLARE @pExpressionNUMERIC AS FLOAT
SET @pExpressionNUMERIC = CAST (ROUND(@pExpression, @TotalDecimalsOfValue) AS FLOAT)
RETURN @pExpressionNUMERIC --Returns type FLOAT
END
else
BEGIN
RETURN @pReturnValueIfErrorCast
END-- FIN IF (@TotalDigitisOfValue <= @TotalDigits) AND (@TotalDecimalsOfValue <= @TotalDecimals)
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END --FIN IF ISNUMERIC(@pExpression) = 1
END --IF @pData_Type LIKE 'NUMERIC%'
--------------------------------------------------------------------------------
-- BIT
--------------------------------------------------------------------------------
IF @pData_Type LIKE 'BIT'
BEGIN
IF ISNUMERIC(@pExpression) = 1
BEGIN
RETURN CAST(@pExpression AS BIT)
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END --FIN IF ISNUMERIC(@pExpression) = 1
END --IF @pData_Type LIKE 'BIT'
--------------------------------------------------------------------------------
-- FLOAT
--------------------------------------------------------------------------------
IF @pData_Type LIKE 'FLOAT'
BEGIN
IF ISNUMERIC(REPLACE(REPLACE(@pExpression, CHAR(13), ''), CHAR(10), '')) = 1
BEGIN
RETURN CAST(@pExpression AS FLOAT)
END
ELSE
BEGIN
IF REPLACE(@pExpression, CHAR(13), '') = '' --Only white spaces are replaced, not new lines
BEGIN
RETURN 0
END
ELSE
BEGIN
RETURN @pReturnValueIfErrorCast
END --IF REPLACE(@pExpression, CHAR(13), '') = ''
END --FIN IF ISNUMERIC(@pExpression) = 1
END --IF @pData_Type LIKE 'FLOAT'
--------------------------------------------------------------------------------
-- Any other unsupported data type will return NULL or the value assigned by the user to @pReturnValueIfErrorCast
--------------------------------------------------------------------------------
RETURN @pReturnValueIfErrorCast
END
```
For now it only supports the data types **INT, DATE, NUMERIC, BIT and FLOAT**. You can find the latest version of this code at the link below, where we can help each other improve it. *TRY\_CAST Function for SQL Server 2008* <https://gist.github.com/jotapardo/800881eba8c5072eb8d99ce6eb74c8bb> | try_convert in SQL Server 2008 R2 Express | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2008-r2",
""
] |
I recently appeared for an interview and was asked to write a SQL query for this question:
There is a table visit(person, theatre) which denotes that a person p visits a theatre t. Now a person p1 is considered a greater movie lover if he visits every theatre that another person p2 visits. I was supposed to write an SQL query to find the most movie-loving person (a person who is a greater movie lover than every other person).
I really had no idea how to answer it. I mumbled something about using subqueries and aggregate operators but couldn't answer it.
Can anyone tell me what the query would be?
**NOTE:** I was told that I cannot use recursion here | The description is *quite vague*, so if we state the problem as "is there any person who has visited all the theaters all other persons have" we can put it like this:
```
select v.person
from visit v
where v.theatre = all (
select vt.theatre
from visit vt)
``` | Description is vague, but something like
```
select person, count(*) as cnt
from visit
group by person
order by cnt desc
``` | SQl query for (person,theatre) relation | [
"",
"sql",
"database",
""
] |
Is there a better (*more efficient*) way to make this query?
I don't want to search twice in the same query `(SELECT PDATA1 ...)`
```
SELECT package_ID
FROM t_package
WHERE parent_ID IN (SELECT PDATA1
FROM t_object
WHERE stereotype = 'Process' AND object_type = 'Package')
OR package_ID IN (SELECT PDATA1
FROM t_object
WHERE stereotype = 'Process' AND object_type = 'Package')
``` | I believe this can be accomplished entirely without subqueries.
```
SELECT
DISTINCT(tp.package_ID)
FROM
t_package tp
INNER JOIN
t_object tobj ON tobj.PDATA1 = tp.parent_ID OR tobj.PDATA1 = tp.package_ID
WHERE
tobj.stereotype = 'Process' AND tobj.object_type = 'Package'
``` | Use EXISTS, like this -
```
SELECT package_ID
FROM t_package t
WHERE exists(SELECT *
FROM t_object o
WHERE stereotype = 'Process'
AND object_type = 'Package'
and (o.PDATA1=t.parent_ID
OR o.PDATA1=t.package_ID)
)
``` | Improving query having two where ... in clauses | [
"",
"sql",
""
] |
Consider the tables (of a Bank):
Accounts

Clients

I want to return the IDs of all the clients that have more than one account in the bank.
My suggestion :
```
SELECT c.Client_id_number , c.firstName_client , c.lastName_Client , COUNT(a.Account_number) as number_of_accounts
FROM Clients AS c
LEFT JOIN Accounts AS a
ON a.Client_id_number = c.Client_id_number
GROUP BY Client_id_number
WHERE number_of_accounts >= 2;
```
But the WHERE clause causes a syntax error. Why? | ```
SELECT c.Client_id_number , c.firstName_client , c.lastName_Client , COUNT(a.Account_number) as number_of_accounts
FROM Clients AS c
LEFT JOIN Accounts AS a
ON a.Client_id_number = c.Client_id_number
GROUP BY Client_id_number
HAVING COUNT(a.Account_number) >= 2;
```
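For context, the original statement fails for two reasons: `WHERE` must come before `GROUP BY`, and a SELECT-list alias such as `number_of_accounts` cannot be referenced in `WHERE`; filtering on an aggregate belongs in `HAVING`. A skeleton of the required clause order (comments give the logical evaluation order):

```
SELECT ...      -- 5. projection (aliases defined here)
FROM ...        -- 1. source tables and joins
WHERE ...       -- 2. row filter (no aggregates, no SELECT aliases)
GROUP BY ...    -- 3. grouping
HAVING ...      -- 4. group filter (aggregates allowed)
ORDER BY ...    -- 6. sorting
```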
**[See this for the correct SQL clauses order](https://stackoverflow.com/questions/4654608/what-is-the-correct-order-of-these-clauses-while-writing-a-sql-query)** | ```
SELECT Account_number, Client_id_Number
FROM Accounts
GROUP BY Account_number, Client_id_Number
having count(*) > 1
``` | WHERE clause causes syntax error in Mysql query | [
"",
"mysql",
"sql",
""
] |
My friend asked me if there is a way to use one query to select the datetime that works for any database.
At the moment he is using this query for Oracle:
```
SELECT vt_sysdate() FROM dual;
```
I know the dual table is Oracle-only, but he wants to use this same query for other databases, for example PostgreSQL, as well.
So is there a way to make this query run on every database (maybe by creating a dual table in every database)? Or is there a query to get the system datetime that works on any database?
Thank you for your help. | Thank everyone for your support.
After looking at everyone's answers and discussing with my friend for a while, we came to the conclusion that it isn't possible to use one query for all databases. So we created a function that checks the database, uses the correct function to get the time, and returns it.
We will have to create a dummy table called dual in any database we use, as [this blog](http://blog.sqlauthority.com/2010/07/20/sql-server-select-from-dual-dual-equivalent/) suggests: a dummy table with 1 column and 1 record.
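For reference, the dummy table can be sketched like this in each non-Oracle database (the names are illustrative, not taken from the linked blog):

```
-- A one-column, one-record "dual" table for engines that require a FROM clause
CREATE TABLE dual (dummy VARCHAR(1));
INSERT INTO dual VALUES ('X');

-- After that, a query of this shape runs unchanged on most engines:
SELECT CURRENT_TIMESTAMP FROM dual;
```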
Thank you all. | No, I don't think so, because Oracle insists on being weird, and MS SQL doesn't seem to support the standard `current_date` at all.
[MySQL accepts `SELECT current_date;`, doesn't understand `VALUES`](http://sqlfiddle.com/#!2/636321/2).
[PostgreSQL accepts `SELECT current_date;` or `VALUES`](http://sqlfiddle.com/#!15/d41d8/1517).
[MS SQL doesn't seem to understand `current_date` at all](http://sqlfiddle.com/#!6/d41d8/15945).
[Oracle understands `current_date` but not scalar queries without `FROM`](http://sqlfiddle.com/#!4/d41d8/27061). | Query to get datetime for every database | [
"",
"sql",
"oracle",
"postgresql",
"datetime",
""
] |
I have a column in database:
```
Serial Number
-------------
S1
S10
...
S2
S11
..
S13
```
I want to sort and return the result as follows for serial numbers <= 10:
```
S1
S2
S10
```
One way I tried was:
```
select Serial_number from table where Serial_Number IN ('S1', 'S2',... 'S10');
```
This serves the purpose, but I'm looking for a better way. | For Postgres you can use something like this:
```
select serial_number
from the_table
order by regexp_replace(serial_number, '[^0-9]', '', 'g')::integer;
```
The `regexp_replace` will remove all non-numeric characters and the result is treated as a number which is suited for a "proper" sorting.
**Edit 1:**
You can use the new "number" to limit the result of the query:
```
select serial_number
from (
select serial_number,
regexp_replace(serial_number, '[^0-9]', '', 'g')::integer as snum
from the_table
) t
where snum <= 10
order by snum;
```
**Edit 2**
If you receive the error `ERROR: invalid input syntax for integer: ""` then apparently you have values in the serial\_number column which do not follow the format you posted in your question. It means that regexp\_replace() removed all characters from the string, so a string like `S` would cause that.
To prevent that, you need to either exclude those rows from the result using:
```
where length(regexp_replace(serial_number, '[^0-9]', '', 'g')) > 0
```
in the inner select. Or, if you need those rows for some reason, deal with that in the select list:
```
select serial_number
from (
select serial_number,
case
         when length(regexp_replace(serial_number, '[^0-9]', '', 'g')) > 0 then regexp_replace(serial_number, '[^0-9]', '', 'g')::integer
else null -- or 0 whatever you need
end as snum
from the_table
) t
where snum <= 10
order by snum;
```
---
This is a really nice example of why you should never mix two different things in a single column. If all your serial numbers have the prefix `S`, you shouldn't store it; put the real number in a real `integer` (or `bigint`) column.
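A sketch of that schema change in Postgres (the column name `serial_no` is illustrative):

```
ALTER TABLE the_table ADD COLUMN serial_no integer;
-- nullif() turns values with no digits into NULL instead of failing the cast
UPDATE the_table
SET serial_no = nullif(regexp_replace(serial_number, '[^0-9]', '', 'g'), '')::integer;
```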
Using something like `NOT_SET` to indicate a missing value is also a bad choice. The `NULL` value was precisely invented for that reason: to indicate the absence of data. | Here is an easy way for this format:
```
order by length(Serial_Number),
Serial_Number
```
This works because the prefix (`'S'`) is the same length on all the values. | Sort alphanumeric column | [
"",
"sql",
"postgresql",
"alphanumeric",
"natural-sort",
""
] |
mysql query
```
CREATE TABLE IF NOT EXISTS 'sadakin'.'gcm_users' (
'id' int( 11 ) NOT NULL AUTO_INCREMENT ,
'gcm_regid' text,
'name' varchar( 50 ) NOT NULL ,
'email' varchar( 255 ) NOT NULL ,
'imei' varchar( 20 ) NOT NULL ,
'created_at' timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ,
PRIMARY KEY ( 'id' ) ,
KEY 'imei' ( 'imei' )
) ENGINE = InnoDB DEFAULT CHARSET = latin1 AUTO_INCREMENT =1;
```
**Error message**:
```
1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''sadakin'.'gcm_users' (
'id' int( 11 ) NOT NULL AUTO_INCREMENT ,
'gcm_' at line 1
```
I can't find the error. Help me! | Use backticks to enclose identifiers; quotes are just for strings:
```
CREATE TABLE IF NOT EXISTS `sadakin`.`gcm_users` (
`id` int( 11 ) NOT NULL AUTO_INCREMENT ,
`gcm_regid` text,
`name` varchar( 50 ) NOT NULL ,
`email` varchar( 255 ) NOT NULL ,
`imei` varchar( 20 ) NOT NULL ,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ,
PRIMARY KEY ( `id` ) ,
KEY `imei` ( `imei` )
) ENGINE = InnoDB DEFAULT CHARSET = latin1 AUTO_INCREMENT =1;
``` | There is no need for single quotes around the column names:
```
CREATE TABLE IF NOT EXISTS sadakin.gcm_users (
id int( 11 ) NOT NULL AUTO_INCREMENT ,
gcm_regid text,
name varchar( 50 ) NOT NULL ,
email varchar( 255 ) NOT NULL ,
imei varchar( 20 ) NOT NULL ,
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ,
PRIMARY KEY ( id ) ,
KEY imei ( imei )
) ENGINE = InnoDB DEFAULT CHARSET = latin1 AUTO_INCREMENT =1;
``` | mysql syntax error why? | [
"",
"mysql",
"sql",
""
] |
I need to find records by one column, but if there are no records matching it I need to search by another column; the first column has precedence.
Consider scenario:
```
find records matching column_1
if no records matching column_1, find records matching column_2
if records matching column_1 found, find the subset which also match column_2
```
How to construct such query ? | Such queries (try to find records with certain criteria and if that fails try with other) are complicated.
Here is your criteria:
* If there exist records with matching column\_1 and matching column\_2, then show only these.
* If there exist records with matching column\_1 and none of these matching column\_2, then show none.
* If there exist no records with matching column\_1, then show only records matching column\_2
Looking at this more closely, it boils down to:
* If there exist records with matching column\_1, then show only records matching column\_ 1 and column\_2
* If there exist no records with matching column\_1, then show only records matching column\_2
or:
* Show only records with matching column\_2. If records with matching column\_1 exist in the table, then filter such that column\_1 must also match.
Here is the statement:
```
select * from mytable
where column_2 = 'match2'
and
(
column_1 = 'match1'
or
(select count(*) from mytable where column_1 = 'match1') = 0
);
``` | You don't care about the order of the parts of the `where` clause - they can be interchanged at will, and it's a good thing.
The usual way to do what you're asking would be using the `or` operator:
```
where column_1 = @param or column_2 = @param
```
The query will not let you return the same row twice or more, it only applies the filters to each row once (logically).
What difference should there be between the three scenarios? If you want to return some extra status, you can use the select clause, for example like this:
```
select
case when column_1 = @param then 'Matched first!'
else 'Matched second!' end as Status
```
If you want to do that, you have to copy your query three times, and use `union all` to join the results from the three queries, eg. something like this:
```
select ... from ...
where column_1 = @param
union all
select ... from ...
where column_1 <> @param and column_2 = @param
```
etc. | How would be possible to specify the order how the WHERE clause is evaluated? | [
"",
"sql",
"sql-server-2008",
""
] |
I have a table CUSTOMERS (ID,NAME,REGISTRATION\_DT). It has following values -
1,Alex,093013
2,Peter,012512
3,Mary,041911
.
.
.
I want to be able to select the REGISTRATION\_DT column and have the values returned as DATETIME. Is there any query to convert the existing varchar data in the REGISTRATION\_DT column to datetime so that all values are in datetime format? The existing varchar values are in mmddyy format (e.g. 093013 is 2013-09-30).
I am using SQL Server 2008 | The example below requires the year to always be in the 2000s. You will also need to change the string YourTable to the table you want to query from.
```
SELECT
Id
,Name
--Convert To Date Time
,CONVERT(datetime,
-- Add 20 to the final 2 numbers to represent the year i.e 2013
('20' + SUBSTRING(REGISTRATION_DT,5,2)
+ '-' +
-- the first 2 numbers to represent the Month i.e 09
SUBSTRING(REGISTRATION_DT,1,2)
+ '-' +
-- the middle 2 numbers to represent the day i.e 30
SUBSTRING(REGISTRATION_DT,3,2)))
as REGISTRATION_DT
from YourTable
``` | The only way is to convert the varchar to date. Try something like
```
SELECT CONVERT(DATE, LEFT(REGISTRATION_DT, 2) + '/'
+ LEFT(RIGHT(REGISTRATION_DT, 4), 2)
+ '/20' + RIGHT(REGISTRATION_DT, 2))
```
An example [**SQL FIDDLE**](http://sqlfiddle.com/#!3/d41d8/32150) here.
More importantly though, you must consider changing the way you save the `REGISTRATION_DT` field in your table. | Is it possible to convert string column to datetime in sql server? | [
"",
"sql",
"sql-server",
"datetime",
""
] |
I am **able to insert** the records into the temp table. From the same temp table **I need to delete the records** according to another condition.
**Here is my code :**
```
DECLARE @tempDataTable Table
(
Id bigint,
Title varchar(max),
Forename varchar(max),
Surname varchar(max),
Addr1 varchar(max),
Addr2 varchar(max),
Addr3 varchar(max),
Village varchar(max),
Town varchar(max),
Postcode varchar(max),
LatestER varchar(50),
RowNum bigint
)
DECLARE @sqlQuery varchar(max)
SET @sqlQuery = 'SELECT [Id]
,[Title]
,[ForName]
,[Surname]
,[Addr1]
,[Addr2]
,[Addr3]
,[Village]
,[Town]
,[Postcode]
,[LatestER]
,ROW_NUMBER() OVER(PARTITION BY (DomicileId) ORDER BY Title) as RowNum
FROM [dbo].[DPS_DataRegion_Scotland]
WHERE (' + @searchFilteredDataCondition + ')'
INSERT INTO @tempDataTable
EXEC (@sqlQuery)
```
The above part is working fine. Now I need to delete the records from **@tempDataTable**:
```
SET @sqlQuery = 'DELETE FROM '+ @tempDataTable +'
WHERE (' + @searchFilteredTownCondition + ')'
EXEC (@sqlQuery)
```
**While inserting it throws an error of - Must declare the scalar variable "@tempDataTable".** | Your problem is that you need to use the name of the variable in your dynamic SQL, like
```
SET @sqlQuery = 'DELETE FROM @tempDataTable'
```
But you won't be able to use your table variable inside dynamic SQL this way - it won't be visible inside the scope of that dynamic SQL execution. You would have to declare everything inside one dynamic SQL batch and use one `EXEC` or use temporary tables.
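A rough sketch of the former option, with everything inside one dynamic batch (only the `Id` column shown for brevity):

```
SET @sqlQuery = '
DECLARE @t TABLE (Id bigint /* ...remaining columns... */);
INSERT INTO @t (Id)
SELECT Id FROM dbo.DPS_DataRegion_Scotland WHERE ' + @searchFilteredDataCondition + ';
DELETE FROM @t WHERE ' + @searchFilteredTownCondition + ';
SELECT * FROM @t;'
EXEC (@sqlQuery)
```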
The latter would look like:
```
CREATE TABLE #tempDataTable
(
Id bigint,
Title varchar(max),
Forename varchar(max),
Surname varchar(max),
Addr1 varchar(max),
Addr2 varchar(max),
Addr3 varchar(max),
Village varchar(max),
Town varchar(max),
Postcode varchar(max),
LatestER varchar(50),
RowNum bigint
)
DECLARE @sqlQuery varchar(max)
SET @sqlQuery = 'SELECT [Id]
,[Title]
,[ForName]
,[Surname]
,[Addr1]
,[Addr2]
,[Addr3]
,[Village]
,[Town]
,[Postcode]
,[LatestER]
,ROW_NUMBER() OVER(PARTITION BY (DomicileId) ORDER BY Title) as RowNum
FROM [dbo].[DPS_DataRegion_Scotland]
WHERE (' + @searchFilteredDataCondition + ')'
INSERT INTO #tempDataTable
EXEC (@sqlQuery)
SET @sqlQuery = 'DELETE FROM #tempDataTable WHERE (' + @searchFilteredTownCondition + ')'
EXEC (@sqlQuery)
``` | > While inserting it throws an error of - Must declare the scalar variable "@tempDataTable".
Did you ever look at the SQL you are sending to the server? Not the source. Basically print out the SQL.
SET @sqlQuery = 'DELETE FROM '+ @tempDataTable +'
translated into
```
DELETE FROM (content of @tempDataTable)
```
If you want to use @tempDataTable as the name of a table, make it part of the string, not a parameter.
```
SET @sqlQuery = 'DELETE FROM @tempDataTable'
``` | How to delete the records in local table using sql server | [
"",
"sql",
"sql-server",
""
] |
When I execute the following query :-
```
SELECT users.id, users.username, users.last_login,
profile_pictures.pic_resized, profile_pictures.pic_blurred,
user_profile.date_of_birth, user_profile.u_country,
matches.total_score AS score,
favorites.id AS is_favorite,
relationships.id AS is_match,
profile_visitors.id AS is_profile_viewer,
block_chat.id AS is_blocked
FROM users
INNER JOIN user_profile ON users.id = user_profile.user_id
LEFT JOIN profile_pictures ON users.id = profile_pictures.user_id
LEFT JOIN match_collections ON users.id = match_collections.user_id AND match_collections.status='valid'
LEFT JOIN matches ON matches.collection_id = match_collections.id AND match_id = 44700
LEFT JOIN favorites ON favorites.subject_id = 11354 AND favorites.object_id = 44700
LEFT JOIN relationships ON relationships.user_id = 11354 AND relationships.friend_id = 44700
LEFT JOIN profile_visitors ON profile_visitors.viewed_id = 11354 AND profile_visitors.viewer_id = 44700
LEFT JOIN block_chat ON block_chat.subject_id = 11354 AND block_chat.object_id = 44700
WHERE users.id = 11354
```
The result appears:
where is\_favorite and is\_match are null, which indicates that there is no entry for the user 44700 in those tables.
The question is: how do I set a default value like `-1` instead of `NULL`? | You can use this:
```
SELECT users.id, users.username, users.last_login,
profile_pictures.pic_resized, profile_pictures.pic_blurred,
user_profile.date_of_birth, user_profile.u_country,
matches.total_score AS score,
if(isnull(favorites.id),-1,favorites.id) AS is_favorite,
if(isnull(relationships.id),-1,relationships.id) AS is_match,
profile_visitors.id AS is_profile_viewer,
block_chat.id AS is_blocked
FROM users
INNER JOIN user_profile ON users.id = user_profile.user_id
LEFT JOIN profile_pictures ON users.id = profile_pictures.user_id
LEFT JOIN match_collections ON users.id = match_collections.user_id AND match_collections.status='valid'
LEFT JOIN matches ON matches.collection_id = match_collections.id AND match_id = 44700
LEFT JOIN favorites ON favorites.subject_id = 11354 AND favorites.object_id = 44700
LEFT JOIN relationships ON relationships.user_id = 11354 AND relationships.friend_id = 44700
LEFT JOIN profile_visitors ON profile_visitors.viewed_id = 11354 AND profile_visitors.viewer_id = 44700
LEFT JOIN block_chat ON block_chat.subject_id = 11354 AND block_chat.object_id = 44700
WHERE users.id = 11354
```
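As an aside, the same null replacement can be written more compactly with MySQL's `IFNULL`, or the standard `COALESCE` — a fragment showing only the two affected select-list items:

```
IFNULL(favorites.id, -1) AS is_favorite,
COALESCE(relationships.id, -1) AS is_match,
```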
You can use the `if(isnull(exp1),exp2,exp3)` pattern wherever you need to handle `nulls`. | If you want to return something other than NULL when that's what is in the field, use a CASE statement to override what is returned:
```
case when favorites.id is NULL then '-1' else favorites.id end as is_favorite
``` | Mysql set default value if entry does not exist | [
"",
"mysql",
"sql",
""
] |
How do I determine if NULL is contained in an array in Postgres? Currently using Postgres 9.3.3.
If I test with the following select it returns `contains_null = false`.
```
select ARRAY[NULL,1,2,3,4,NULL]::int[] @> ARRAY[NULL]::int[] AS contains_null
select ARRAY[NULL,1,2,3,4,NULL]::int[] @> NULL AS contains_null
```
I've also tried with:
1. @> (contains)
2. <@ (is contained by)
3. && (overlap) | ```
select exists (
select 1
from unnest(array[1, null]) s(a)
where a is null
);
exists
--------
t
```
Or shorter:
```
select bool_or(a is null)
from unnest(array[1, null]) s(a)
;
bool_or
---------
t
``` | One more construction, like @Clodoaldo Neto proposed. Just more compact expression:
```
CREATE TEMPORARY TABLE null_arrays (
id serial primary key
, array_data int[]
);
INSERT INTO null_arrays (array_data)
VALUES
(ARRAY[1,2, NULL, 4, 5])
, (ARRAY[1,2, 3, 4, 5])
, (ARRAY[NULL,2, 3, NULL, 5])
;
SELECT
*
FROM
null_arrays
WHERE
TRUE = ANY (SELECT unnest(array_data) IS NULL)
;
``` | How to determine if NULL is contained in an array in Postgres? | [
"",
"sql",
"arrays",
"postgresql",
""
] |
I have a problem getting the desired output with my SQL query.
My SQL data is as follows:
```
TOTAL Charge PAYMNET A B C D E MonthYear
------- ----------- ----------- --------- -------- ---------- ------- ------- ----------
661 157832.24 82967.80 700.00 10.70 58329.33 0.00 0.00 Oct-2013
612 95030.52 17824.28 850.00 66.10 53971.41 0.00 0.00 Nov-2013
584 90256.35 16732.91 700.00 66.10 52219.87 0.00 0.00 Dec-2013
511 72217.32 12336.12 285.00 53.17 42951.12 0.00 0.00 Jan-2014
```
I need the output as follows,
```
Data Jan-2013 Feb-2013 Mar-2013
TOTALCOUNT 761 647 671
Charge 126888 119995 151737.5
Payment 25705.4 26235.47 28704.41
A 1089.08 1020 745
B 2100.4 1947.25 1868.22
C 94246.55 84202.15 115673.7
D 0 0 0
E 0 0 0
```
I have seen examples of `pivot` and `unpivot`; with `pivot` I don't get the column headers as row data, and for `unpivot` I didn't find an example where I can transpose multiple columns. I have another option: getting this result in application code. But I want to know, is it possible to get this kind of result in SQL?
**Edit**
The result will give only for 3 or 4 months, not more than that.
**Update**: The first sample data is the actual data which I get as a result of multiple joins and grouping on multiple tables, and which I store into a temp table. I tried to get the required result by modifying that query, which is not possible because of the table structure. I managed to get the result as in the first sample data, but this is not what the client wants to see! So I need to process the temp table data (which will be only 3 to 4 rows) into the required output. The query to get the first result is `select * from temp`. The processing needs to be done on the `temp` table result.
**Update-2**
I have tried the following query
```
declare @cols varchar(max)
select @cols = STUFF((select ', ' + MonthYear
from #tmp for xml path('')),1,1,'')
declare @query varchar(max)
set @query =
'select ''TOTAL'' as Data,' +@cols+' from
(select MonthYear,TOTALCLAIMS from #tmp)st
pivot
(
MAX(TOTAL) for MonthYear in (' + @cols + ')
)pt;'
```
Which gave me the first row correctly!!! But I tried to use `union` as
```
set @query =
'select ''TOTAL'' as Data,' +@cols+' from
(select MonthYear,TOTALCLAIMS from #tmp)st
pivot
(
MAX(TOTAL) for MonthYear in (' + @cols + ')
)pt;
union
select ''CHARGES'' as Data,' +@cols+' from
(select MonthYear,TOTALCLAIMS from #tmp)st
pivot
(
MAX(CHARGES) for MonthYear in (' + @cols + ')
)pt;'
```
This gives an error: `incorrect syntax near union`. Does anyone know how to union `pivot` results? Or is there a better way to do this?
Thank You. | I have tried this code. Please check and let me know if it works
I know that it doesnt look so good. Also not sure how it will be performance wise.
```
--Can have more columns like A,B,...
DECLARE @tbl TABLE
(
TOTAL INT,
CHARGE FLOAT,
PAYMENT FLOAT,
MONTHYEAR VARCHAR(50)
)
--Test data
INSERT INTO @tbl SELECT 661, 157832.24, 82967.80, 'Oct2013'
INSERT INTO @tbl SELECT 612, 95030.52, 17824.28, 'Nov2013'
INSERT INTO @tbl SELECT 584 ,90256.35, 16732.91, 'Dec2013'
--Can be a physical table
CREATE TABLE #FinalTbl
(
DATA VARCHAR(100)
)
--inserted hardcode records in data column. To add it dynamically you would need to loop through information_schema.columns
--SELECT *
--FROM information_schema.columns
--WHERE table_name = 'tbl_name'
INSERT INTO #FinalTbl
VALUES ('TOTAL')
INSERT INTO #FinalTbl
VALUES ('CHARGE')
INSERT INTO #FinalTbl
VALUES ('PAYMENT')
DECLARE @StartCount INT, @TotalCount INT, @Query VARCHAR(5000), @TOTAL INT,@CHARGE FLOAT,@PAYMENT FLOAT,@MONTHYEAR VARCHAR(50)
SELECT @TotalCount = COUNT(*) FROM @tbl;
SET @StartCount = 1;
WHILE(@StartCount <= @TotalCount)
BEGIN
SELECT @TOTAL = TOTAL,
@CHARGE = CHARGE,
@PAYMENT = PAYMENT,
@MONTHYEAR = MONTHYEAR
FROM
(SELECT ROW_NUMBER() over(ORDER BY MONTHYEAR) AS ROWNUM, * FROM @tbl) as tbl
WHERE ROWNUM = @StartCount
SELECT @Query = 'ALTER TABLE #FinalTbl ADD ' + @MONTHYEAR + ' VARCHAR(1000)'
EXEC (@Query)
SELECT @Query = 'UPDATE #FinalTbl SET ' + @MONTHYEAR + ' = ''' + CONVERT(VARCHAR(50), @TOTAL) + ''' WHERE DATA = ''TOTAL'''
EXEC (@Query)
SELECT @Query = 'UPDATE #FinalTbl SET ' + @MONTHYEAR + ' = ''' + CONVERT(VARCHAR(50), @CHARGE) + ''' WHERE DATA = ''CHARGE'''
EXEC (@Query)
SELECT @Query = 'UPDATE #FinalTbl SET ' + @MONTHYEAR + ' = ''' + CONVERT(VARCHAR(50), @PAYMENT) + ''' WHERE DATA = ''PAYMENT'''
EXEC (@Query)
SELECT @StartCount = @StartCount + 1
END
SELECT * FROM #FinalTbl
DROP TABLE #FinalTbl
```
Hope this helps | I would imagine the reason you are only getting 3 or 4 months is because you don't have data for the missing months? If you want to display columns for missing months you will need to either:
1. Create a Table datatype with all the months you want to display
and left join the remainder of the tables to it in your query. You
could then use the PIVOT function as normal.
2. If you know how many columns up front i.e. one for each month in a particular year and it won't change, you can simply use CASE
Statements (one for each month) to transpose the data without the
PIVOT operator.
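A sketch of option 2 using `CASE`, against the `#tmp` table from the question (month columns hardcoded, and assuming the `TOTAL` column from the sample data):

```
SELECT 'TOTAL' AS Data,
       MAX(CASE WHEN MonthYear = 'Oct-2013' THEN TOTAL END) AS [Oct-2013],
       MAX(CASE WHEN MonthYear = 'Nov-2013' THEN TOTAL END) AS [Nov-2013],
       MAX(CASE WHEN MonthYear = 'Dec-2013' THEN TOTAL END) AS [Dec-2013]
FROM #tmp;
```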
I can provide examples if needed. | transpose rows to columns in sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
"unpivot",
""
] |
I'm using SQL Server 2012. I need to implement a search functionality using a **single** text field.
Let's say I have the following table:
```
--------------------------------------------------------------------------------
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR
--------------------------------------------------------------------------------
John Doe Boston 2005 Mc Donald
Marc Forestier Bruxelle 2010 Private bank
Céline Durand Paris 1999 Food SA
Simon Forestier Toulouse 2001 Forestier SARL
John Smith New York 1992 Events Org.
Sonia Grappe Toulon 2010 Forestier SARL
--------------------------------------------------------------------------------
```
Behavior is the following:
* All words (space separated) have to be searched over all the column.
* A `LIKE` should be applied for each word
* If only one word is searched, return all records that contain that word
* If several words are searched, return only records that contain the largest number of **different** words (see "forestier" example below)
* **I need a single query**, no TSQL
I tried a lot of things, but it is not as simple as it seems.
Some examples:
"John":
```
-------------------------------------------------------------------------------
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR
-------------------------------------------------------------------------------
John Doe Boston 2005 Mc Donald
John Smith New York 1992 Events Org.
-------------------------------------------------------------------------------
```
"John Doe":
```
-------------------------------------------------------------------------------
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR
-------------------------------------------------------------------------------
John Doe Boston 2005 Mc Donald
-------------------------------------------------------------------------------
```
"forestier":
```
-------------------------------------------------------------------------------
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR
-------------------------------------------------------------------------------
Marc Forestier Bruxelle 2010 Private bank
Simon Forestier Toulouse 2001 Forestier SARL
Sonia Grappe Toulon 2010 Forestier SARL
-------------------------------------------------------------------------------
```
"for 2010 xelle":
```
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR
--------------------------------------------------------------------------------
Marc Forestier Bruxelle 2010 Private bank
--------------------------------------------------------------------------------
```
This example uses a single table; in reality my 5 columns are from 4 different tables, so it is a little more complex to implement full text search! | Here is a solution.
I have limited the search to 6 words.
For each word, I check if it exists in the concatenated columns.
I get a "score" for each record by adding +1 each time a word is found in it.
I return the records that have the best score.
The function :
```
CREATE FUNCTION [dbo].[SEARCH_SINGLE] (
@langId INT = 4,
@searchString VARCHAR(MAX) = NULL
)
RETURNS TABLE
AS
RETURN
WITH words AS (
SELECT Name as Val, ROW_NUMBER() OVER(ORDER BY Name) as Num FROM [dbo].splitstring(@searchString, ' ')
),
results AS (
SELECT DISTINCT
...
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 1 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END +
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 2 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END +
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 3 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END +
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 4 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END +
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 5 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END +
CASE WHEN EXISTS(SELECT 1 FROM words WHERE Num = 6 AND (ISNULL(a.[FIRSTNAME], '') + ' ' + ISNULL(a.[LASTNAME], '') + ' ' + ISNULL(c.[CITY], '') + ' ' + ISNULL(j.[PROMO_YEAR], '') + ' ' + ISNULL(e.[EMPLOYOR], '')) like '%'+Val+'%') THEN 1 ELSE 0 END as Nb
FROM
...
WHERE
...
)
SELECT
...
FROM
results
WHERE
Nb = (SELECT MAX(Nb) FROM results)
AND Nb <> 0
```
Comments? | How about adding another field e.g. a text field containing all the information from the other fields.
```
FIRSTNAME LASTNAME CITY PROMOYEAR EMPLOYOR SEARCHFIELD
John Doe Boston 2005 Mc Donald John Doe Boston 2005 Mc Donald
```
And make the search on this field. It's not elegant but it could work.
Adding below:
I don't think that SQL syntax supports all your needs, but you can make another workaround. Create a table which includes all the words you want to search:
```
create table searchtable
(
rowid int, --key to the id for the row in your table
mothertableName varchar(), -- name of the table if necessary
motherfieldName varchar(), -- name of field
word varchar() -- the actual word to be searchable
)
```
The search for the words and where they have the largest occurrence:
```
SELECT * FROM myTable WHERE id IN (
    SELECT TOP 1 WITH TIES rowid
    FROM Searchtable
    WHERE word IN ('john', 'doe')
    GROUP BY rowid
    ORDER BY COUNT(*) DESC
)
```
The subquery should give you the row id(s) with the largest number of matching search words. Note that the `IN` list of words still has to be built with some dynamic SQL.
As you write that you have tried nearly everything, I think that SQL can't do it alone. | SQL simple search functionality across multiple tables | [
"",
"sql",
"sql-server",
"search",
""
] |
I am trying to group the number of hours that employees worked for the last 4 weeks but I want to group them on a weekly basis. For example:
```
WEEK HOURS
Feb 24 to March 2 55
March 3 to March 9 40
March 10 to March 16 48
March 17 to March 23 37
```
This is what I have so far. Please help, thanks.
```
SET DATEFIRST 1
SELECT CAST(MIN( [DT]) AS VARCHAR(20))+' TO '+CAST (MAX([DT]) AS VARCHAR(20)) AS DATE,
SUM(HOURS) AS NUM_HRS
FROM MyTable
GROUP BY DATEPART(WEEK,[DT])
HAVING COUNT(DISTINCT[DT])=7
``` | Try something like
```
SELECT
DATEADD(DD,
CONVERT(INT, (DATEDIFF(DD, '1/1/1900', t.DT)/7)) * 7,
'1/1/1900') [WeekBeginDate],
DATEADD(DD,
(CONVERT(INT, (DATEDIFF(DD, '1/1/1900', t.DT)/7)) * 7) + 6,
'1/1/1900') [WeekEndDate],
SUM(HOURS) AS NUM_HRS
FROM MyTable t
GROUP BY CONVERT(INT, DATEDIFF(DD, '1/1/1900', t.DT)/7)
```
Though this is the brute force trick, I think in your case it will work.
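To see why the date arithmetic buckets by week: `'1/1/1900'` is a Monday, so integer-dividing the day difference by 7 and multiplying back by 7 snaps any date to the Monday starting its week. A worked example:

```
-- For 2014-03-12 (a Wednesday):
-- DATEDIFF(DD, '1/1/1900', '2014-03-12') = 41708
-- 41708 / 7 = 5958 (integer division), and 5958 * 7 = 41706
-- DATEADD(DD, 41706, '1/1/1900') = 2014-03-10, the Monday of that week
SELECT DATEADD(DD, (DATEDIFF(DD, '1/1/1900', '2014-03-12') / 7) * 7, '1/1/1900');
```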
**EDIT :** Modified the query a little bit, the error was caused because of the order in which `DATEDIFF` calculates the difference.
Also here is a [**SQL FIDDLE**](http://sqlfiddle.com/#!3/50cca3/8) with a working example.
**EDIT 2 :** Updated the Fiddle with the Date Format. To customize the date format, [**this article**](http://msdn.microsoft.com/en-us/library/ms187928.aspx) would help. | ```
SET DATEFIRST 1
SELECT DATEPART(WEEK,DT) AS WEEK,
SUM(HOURS) AS NUM_HRS
FROM MyTable
WHERE DT >= DATEADD(WEEK, -4, GetDate()),
GROUP BY DATEPART(WEEK,[DT])
``` | How to group daily data on weekly basis using sql | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a SQL Server table that contains columns of type `varchar(50)`, the result of a CSV import using the SQL Server Import wizard.
I want to know how I can change this data type to `nvarchar(9)` without getting a SQL Server truncation error.
I tried doing a bulk update to set the data types and column sizes that I need, but I still got the truncation error message when I tried to load the CSV into the empty database table I had created (with the data types I require).
Grateful for any help. | Since you are willing to lose data, and nvarchar(9) can only store 9 characters, select just the first 9 characters from your source table; that way you do the truncation rather than SQL Server doing it for you.
The following query trims leading white space from the strings (LTRIM), then takes only the first 9 characters and converts them to NVARCHAR(9):
```
CREATE TABLE New_TABLE (Col1 NVARCHAR(9), Col2 NVARCHAR(9))
GO
INSERT INTO New_TABLE (Col1, Col2)
SELECT CONVERT(NVARCHAR(9),LEFT(LTRIM(Col1), 9))
,CONVERT(NVARCHAR(9),LEFT(LTRIM(Col2), 9))
FROM Existing_Table
GO
``` | Bulk insert into a temp table with varchar(50) columns, then insert into the actual table:
```
insert into tableName
select cast(tempcolumn as nvarchar(9)) from temptable
``` | Converting varchar to nvarchar in SQL Server failed | [
"",
"sql",
"sql-server",
"types",
"type-conversion",
""
] |
I have three tables: doctor, person, and appointment.
doctor table:
```
+-----------+----------+---------+----------------+----------------+
| doctor_id | phone_no | room_no | date_qualified | date_appointed |
+-----------+----------+---------+----------------+----------------+
| 50 | 1234 | 1 | 1963-09-01 | 1991-05-10 |
| 51 | 1235 | 2 | 1973-09-12 | 1991-05-10 |
| 52 | 1236 | 3 | 1990-10-02 | 1993-04-01 |
| 53 | 1237 | 4 | 1965-06-30 | 1994-03-01 |
+-----------+----------+---------+----------------+----------------+
```
**person table**
```
+-----------+----------+-----------+---------------+------+
| person_id | initials | last_name | date_of_birth | sex |
+-----------+----------+-----------+---------------+------+
| 100 | T | Williams | 1972-01-12 | m |
| 101 | J | Garcia | 1981-03-18 | f |
| 102 | W | Fisher | 1950-10-22 | m |
| 103 | K | Waldon | 1942-06-01 | m |
| 104 | P | Timms | 1928-06-03 | m |
| 105 | A | Dryden | 1944-06-23 | m |
| 106 | F | Fogg | 1955-10-16 | f |
| 150 | T | Saj | 1994-06-17 | m |
| 50 | A | Cameron | 1937-04-04 | m |
| 51 | B | Finlay | 1948-12-01 | m |
| 52 | C | King | 1965-06-06 | f |
| 53 | D | Waldon | 1938-07-08 | f |
+-----------+----------+-----------+---------------+------+
```
**appointment table**
```
+-----------+------------+------------+-----------+---------------+
| doctor_id | patient_id | appt_date | appt_time | appt_duration |
+-----------+------------+------------+-----------+---------------+
| 50 | 100 | 1994-08-10 | 10:00:00 | 10 |
| 50 | 100 | 1994-08-16 | 10:50:00 | 10 |
| 50 | 102 | 1994-08-21 | 11:20:00 | 20 |
| 50 | 103 | 1994-08-10 | 10:10:00 | 10 |
| 50 | 104 | 1994-08-10 | 10:20:00 | 20 |
| 52 | 102 | 1994-08-10 | 10:00:00 | 10 |
| 52 | 105 | 1994-08-10 | 10:10:00 | 10 |
| 52 | 150 | 2014-03-10 | 12:00:00 | 15 |
| 53 | 106 | 1994-08-10 | 11:30:00 | 10 |
+-----------+------------+------------+-----------+---------------+
```
I need to create a query to produce a list of `doctor IDs` and their names with the number of appointments they have.
I have already created a statement to produce a list of doctor IDs with the number of appointments they have, but I'm not sure how to produce a list with doctor IDs and their names.
The statement that I have now is:
```
select doctor.doctor_id, count(appointment.appt_time) as no_appt
from doctor
left join appointment
on doctor.doctor_id = appointment.doctor_id
group by doctor.doctor_id;
```
Please Help. | You need an additional join to the `person` table. Apparently, the `doctor_id` is the link. Yuck. This should be an explicit column rather than a re-use of the id.
```
select d.doctor_id, p.initials, p.last_name, count(appointment.appt_time) as no_appt
from doctor d left join
appointment a
on d.doctor_id = a.doctor_id left join
person p
on d.doctor_id = p.person_id
group by d.doctor_id, p.initials, p.last_name;
```
In MySQL, you don't actually need to add the two columns to the `group by`, but it is good practice to do so. | ```
select doctor.doctor_id, person.initials, person.last_name, count(appointment.appt_time) as no_appt
from doctor
left join appointment on doctor.doctor_id = appointment.doctor_id
left join person on person.person_id = appointment.patient_id
group by doctor.doctor_id;
``` | SQL Database - query to produce a list of doctor IDs and their names with the number of appointments they have? | [
"",
"mysql",
"sql",
"database",
""
] |
I have a query which returns distinct records on 2 columns, however I need to sort the results on those 2 columnns and 1 additional column.
When I try the SQL below I get the error shown.
SQL:
```
SELECT DISTINCT vers, revs FROM tblMVer
WHERE mid = 194 ORDER BY date_deployed DESC, vers DESC, revs DESC
```
Error:
```
ORDER BY items must appear in the select list if SELECT DISTINCT is specified.
```
Any ideas on how to achieve this please.
Thanks
Kev | You can't order by date simply because the same (vers, revs) pair can occur with different dates.
But if you take the latest date per pair, you can do it like this:
```
SELECT vers, revs
FROM (
SELECT MAX(date_deployed) AS d, vers, revs
FROM tblMVer
WHERE mid = 194
GROUP BY vers, revs
) AS temp
ORDER BY d DESC, vers DESC, revs DESC -- ORDER BY belongs outside the derived table
``` | There is no **date\_deployed** in select
Try this
```
SELECT DISTINCT vers, revs
FROM tblMVer
WHERE mid = 194
ORDER BY vers,revs DESC
```
but Still you want to order by date\_deployed
Try this
```
SELECT vers, revs
FROM
(
SELECT DISTINCT vers, revs,date_deployed
FROM tblMVer
WHERE mid = 194
ORDER BY vers,revs,date_deployed DESC
) AS S
``` | MSSQL Select Distinct and Sorting | [
"",
"sql",
"sql-server",
""
] |
I've got a web-application that requires data from a db. It is a simple list of jobs where each job has certain tags and also a location. Now I want to get a list that gives me all possible filter options... like this:
```
FILTER-ATTRIBUTE FILTER-CATEGORY MATCHING-JOBS
programming tag 9
car-washing tag 3
cooking tag 13
Munich city 16
San Diego city 9
```
What I do have is a query with UNIONS, but this is first of all not really fast and it's ugly to filter...because I need several WHERE clauses there if the user applies a filter (e.g. if San Diego is selected, I will restrict the query to return only tags with relevance to San Diego. In this example, programming only).
```
SELECT
jt.tag_name as name,
COUNT(jt.job_id) as available_jobs,
'tag' as filter_cat
FROM job_tag_new jt WHERE jt.job_id IN (20002,30002) GROUP BY(jt.tag_name)
UNION
SELECT
j.company as name,
COUNT(j.id) as available_jobs,
'company' as filter_cat
FROM job j WHERE j.id IN (20002,30002) GROUP BY(j.company)
UNION
SELECT
l.city as name,
COUNT(j.id) as available_jobs,
'city' as filter_cat
FROM job j left join location l on l.id = j.location WHERE j.id IN (20002,30002) GROUP BY(l.city)
```
Last but not least... there is not only one table because several tables/objects can be used for filtering ;-)
Any suggestions would be nice, even if I should take a completely different approach!
Regards,
Martin | After some more research, I came across an interesting approach and I would like to share it with you:
```
SELECT
name,
COUNT(job_id)
FROM (
SELECT j.id AS job_id, loc.city, j.company, jt.tag_name
FROM job_tag_new jt
LEFT JOIN job j ON jt.job_id = j.id
LEFT JOIN location loc ON loc.id = j.location
) base
CROSS APPLY (
VALUES
([city]),
([company]),
([tag_name])
)
V(name)
GROUP BY name
```
The only thing which is missing here is to get a separate column that tells me the column name before the cross apply. Maybe someone has an idea for this ;-) | there are two other approaches that you may want to explore:
windowing:
```
select distinct
tag, location, company,
count(id) over(partition by tag) thistag,
count(id) over(partition by location) thisloc,
count(id) over(partition by company) thiscompany
from job
order by tag, location
```
aggregation with cube:
```
select location, tag, company, count(id) from job
group by location, tag, company with cube
``` | SQL query to get filters with expected values for a web view | [
"",
"sql",
"sql-server",
"t-sql",
"union",
""
] |
In Ruby:
```
-2 % 24
=> 22
```
In Postgres:
```
SELECT -2 % 24;
?column?
----------
-2
SELECT mod(-2,24);
mod
-----
-2
```
I can easily write one myself, but I'm curious whether Postgres has a real modulus operation, as opposed to remainder after division. | ```
SELECT MOD(24 + MOD(-2, 24), 24);
```
will return 22 instead of -2 | It seems that I won't find anything easier than:
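As an aside, the wrap-around trick above is easy to sanity-check outside the database. A Python sketch, using `math.fmod`, whose sign convention (remainder takes the sign of the dividend) matches Postgres' `%`:

```python
import math

def pg_style_mod(a, m):
    """Remainder with the sign of the dividend, like Postgres' a % m."""
    return math.fmod(a, m)

def math_mod(a, m):
    """Mathematical modulo: wrap the remainder back into [0, m),
    mirroring SELECT MOD(m + MOD(a, m), m)."""
    return math.fmod(m + math.fmod(a, m), m)

print(pg_style_mod(-2, 24))  # -2.0, what SELECT -2 % 24 returns
print(math_mod(-2, 24))      # 22.0, the mathematical result
```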
```
CREATE OR REPLACE FUNCTION modulus(dividend numeric, divisor numeric) RETURNS numeric AS $$
DECLARE
result numeric;
BEGIN
divisor := ABS(divisor);
result := MOD(dividend, divisor);
IF result < 0 THEN
result := result + divisor;
END IF;
RETURN result;
END;
$$ LANGUAGE plpgsql;
``` | Real, mathematical "modulo" operation in Postgres? | [
"",
"sql",
"postgresql",
""
] |
I just try **simple** joins with C# using oracle db. Should be no big deal. But it **ALWAYS** fails. It works in MS-Access. Where is the problem ? *(OleDb or Odbc makes no difference here, I tried both)*
**Edit:**
* Might Oracle version be the problem ? (seems we are using 8.1.7.0.0 and 8.1.5.0.0 modules)
**Code:**
```
using System;
using System.Data.Odbc;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
string n = Environment.NewLine + "--------------------------------" + Environment.NewLine + Environment.NewLine;
// connect
string connectionString = "dsn=TEST;uid=read;pwd=myPwd";
OdbcConnection connection = new OdbcConnection(connectionString);
connection.Open();
// select (key is actually text not numeral)
string query = "select * from INFOR.ZEITEN where (KEY = 0)";
query = "select a.KEY, b.GREG from INFOR.ZEITEN a inner join INFOR.ZEITEN b on (a.AUSWEIS = b.AUSWEIS) where (a.KEY like '1')";
try
{
query = query.Replace(Environment.NewLine, " ");
Console.WriteLine(n + query);
OdbcCommand command = new OdbcCommand(query, connection);
OdbcDataReader reader = command.ExecuteReader(); // throws exception
if (reader != null)
Console.WriteLine(n + "success, now read with reader!");
}
catch (Exception e)
{
Console.WriteLine(n + e.Message + n + e.StackTrace);
}
// wait
Console.ReadKey();
}
}
}
```
**Output:**

*And the successful, simple select:*
 | ANSI joins (ex. `inner join`) were first supported in 9i. You will need to use the old syntax:
```
select a.KEY, b.GREG
from INFOR.ZEITEN a,
INFOR.ZEITEN b
where (a.AUSWEIS = b.AUSWEIS)
and (a.KEY like '1')
```
Note that the `like` operator is equivalent to `=` in this case, but you probably know that | The word `key` is a reserved word. That means that it is a very poor choice for an identity. You need to escape it with a double quote. This might work:
```
query = "select a.\"KEY\", b.GREG
from INFOR.ZEITEN a inner join
INFOR.ZEITEN b
on (a.AUSWEIS = b.AUSWEIS)
where (a.\"KEY\" like '1')";
```
I am guessing the `\"` will work in this context, but there might be another method to insert this character. | Why do SQL joins fail in Oracle? | [
"",
"sql",
"database",
"oracle",
"join",
""
] |
I have this SQL Query:
```
SELECT COUNT(DISTINCT callid) as r
FROM voipwallboard_ast_queue_log
WHERE queuename = :queuename
AND time > :date
AND callid NOT IN (
SELECT callid FROM voipwallboard_ast_queue_log
WHERE event IN ('CONNECT', 'ABANDON',
'AGENTCALLBACKLOGOFF', 'AGENTCALLBACKLOGIN'))
```
but i need to know exactly what it is doing, can anyone tell me please? | **Sub query:**
```
SELECT callid FROM voipwallboard_ast_queue_log
WHERE event IN ('CONNECT', 'ABANDON', 'AGENTCALLBACKLOGOFF', 'AGENTCALLBACKLOGIN')
```
Selects the list of callids from voipwallboard\_ast\_queue\_log table where event is any of `'CONNECT', 'ABANDON', 'AGENTCALLBACKLOGOFF', 'AGENTCALLBACKLOGIN'`
**Main query:**
```
SELECT COUNT(DISTINCT callid) as r
FROM voipwallboard_ast_queue_log
WHERE queuename = :queuename
AND time > :date
AND callid NOT IN (Sub query)
```
Selects the distinct count of callids where callid not in the sub query results and rest of the conditions. | it returns all the **unique** `callid` filtered by a specified `queuename` and `date` for ALL the events exept `CONNECT`, `ABANDON`, `AGENTCALLBACKLOGOFF` AND `AGENTCALLBACKLOGIN` | explain SQL Query with NOT IN etc | [
"",
"sql",
""
] |
I have two separate queries that count number of exceptions in my database. I need to return both results in the same query, how do I bring it all together correctly?
```
SELECT (
IF EXISTS (SELECT *
FROM
exception AS ex
INNER JOIN
exceptionDefinition AS ed ON ex.exceptionDefId = ed.exceptionDefId
WHERE
ex.customerId='{5B65755C-3B66-434E-AC03-942004E9A27A}'
AND ex.loanId IS NULL
AND ex.exceptionState LIKE 'Y'
AND ex.statusType LIKE 'required'
AND ed.computationType LIKE 'computed'
GROUP BY
ex.customerId,
ed.computationType,
ex.exceptionState)
BEGIN
SELECT computedExceptionCount = 1
END
ELSE
BEGIN
SELECT computedExceptionCount = 0
END
) AS computedExceptionCount,
(
IF EXISTS (SELECT *
FROM
exception AS ex
INNER JOIN
exceptionDefinition AS ed ON ex.exceptionDefId = ed.exceptionDefId
WHERE
ex.customerId='{5B65755C-3B66-434E-AC03-942004E9A27A}'
AND ex.loanId IS NULL
AND ex.exceptionState LIKE 'Y'
AND ex.statusType LIKE 'required'
AND ed.computationType LIKE 'manual'
GROUP BY
ex.customerId,
ed.computationType,
ex.exceptionState)
BEGIN
SELECT manualExceptionCount = 1
END
ELSE
BEGIN
SELECT manualExceptionCount = 0
END
) AS manualExceptionCount
```
I am sure it is something simple, more of a formatting issue than anything.
Many thanks in advance. | Use [CASE](http://msdn.microsoft.com/en-us/library/ms181765.aspx).
```
SELECT (
CASE WHEN EXISTS (SELECT *
FROM
exception AS ex
INNER JOIN
exceptionDefinition AS ed ON ex.exceptionDefId = ed.exceptionDefId
WHERE
ex.customerId='{5B65755C-3B66-434E-AC03-942004E9A27A}'
AND ex.loanId IS NULL
AND ex.exceptionState LIKE 'Y'
AND ex.statusType LIKE 'required'
AND ed.computationType LIKE 'computed'
GROUP BY
ex.customerId,
ed.computationType,
ex.exceptionState)
THEN 1
ELSE 0
END
) AS computedExceptionCount,
(
CASE WHEN EXISTS (SELECT *
FROM
exception AS ex
INNER JOIN
exceptionDefinition AS ed ON ex.exceptionDefId = ed.exceptionDefId
WHERE
ex.customerId='{5B65755C-3B66-434E-AC03-942004E9A27A}'
AND ex.loanId IS NULL
AND ex.exceptionState LIKE 'Y'
AND ex.statusType LIKE 'required'
AND ed.computationType LIKE 'manual'
GROUP BY
ex.customerId,
ed.computationType,
ex.exceptionState)
THEN 1
ELSE 0
END
) AS manualExceptionCount
``` | Why don't you declare computedExceptionCount and manualExceptionCount above, then select both in a simple select statement:
```
Declare @computedExceptionCount INT, @manualExceptionCount INT
Select @computedExceptionCount as computedExceptionCount,@manualExceptionCount as manualExceptionCount
```
or you can try like this
```
SELECT
case
when ed.computationType LIKE 'manual' then 1
else 0
end as manualExceptionCount,
case
when ed.computationType LIKE 'computed' then 1
else 0
end as computedExceptionCount
FROM exception AS ex
INNER JOIN
exceptionDefinition AS ed ON ex.exceptionDefId = ed.exceptionDefId
WHERE ex.customerId='{5B65755C-3B66-434E-AC03-942004E9A27A}'
AND ex.loanId IS NULL
AND ex.exceptionState LIKE 'Y'
AND ex.statusType LIKE 'required'
``` | Returning 2 count results in statement | [
"",
"sql",
"t-sql",
""
] |
Currently I am getting the output for score as " 25 / 50 " as a string through SQL; now I need to calculate the percentage from this score and return that value using a SQL statement in PostgreSQL.
```
SELECT es.score FROM emp_scores es WHERE id = 123;
```
This returns me a string as " 25 / 50 ". Now I want to calculate the percentage from this string. | You need to split your data first; then you can calculate the percentage.
You can use
```
split_part(string text, delimiter text, field int)
//Split string on delimiter and return the given field (counting from one)
```
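For intuition, here is the same split-then-divide idea in plain Python; it assumes the stored format is always `'<numerator> / <denominator>'`:

```python
def score_to_percentage(score):
    """Turn a score string like ' 25 / 50 ' into a percentage (50.0)."""
    # Split on the slash, strip the padding spaces, convert to int.
    numerator, denominator = (int(part.strip()) for part in score.split("/"))
    return numerator * 100.0 / denominator

print(score_to_percentage(" 25 / 50 "))  # 50.0
```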
Here is an example:
```
SELECT (CAST(trim(split_part(es.score, '/', 1)) AS integer) * 100)
     / CAST(trim(split_part(es.score, '/', 2)) AS integer) AS percentage
FROM emp_scores es WHERE id = 123;
``` | ```
select
cast(substring(es.score,0,patindex('%/%',es.score)) as int)/
cast(substring(es.score
,patindex('%/%',es.score)+1
,len(es.score)-patindex('%/%',es.score)) as int)*100
FROM emp_scores es WHERE id = 123;
``` | I need to calculate percentage from score using a SQL statement | [
"",
"sql",
"postgresql",
"percentage",
""
] |
I need create SQL query, which retrieve articles, which have given tags. Relation is M:N, so one article can has associated N tags.
```
id_article id_tag article_has_tag
1 1, SQL 1, 1
2 2, HTML 1, 4
3 3, PHP 2, 1
4, JAVA 2, 2
3, 4
3, 1
```
For example, if are selected SQL **AND** Java, result should be id\_article: **1** and **3**. not (1,2,3)
```
SELECT * FROM article_has_tag a
where a.id_tag = 1 and a.id_tag = 4
-- returns zero rows
```
[SQL Fiddle](http://www.sqlfiddle.com/#!2/32e2a/4)
Thanks for help. | ```
SELECT *
FROM article_has_tag a
where a.id_tag = 1 OR a.id_tag = 4
group by id_article
having count(*)=2
``` | ```
SELECT * FROM article_has_tag a
where a.id_tag = 1 and a.id_tag = 4
```
Returns 0 rows because you are using AND:
```
where a.id_tag = 1 and a.id_tag = 4
```
So it is looking for a row where id\_tag is 1 and 4 at the same time, which never happens. In other words, you are asking a single column to hold two values at once.
So you may need to `OR` or `IN(1,4)` | Fetching from an associative table | [
"",
"mysql",
"sql",
""
] |
I have this query which I have been messing with and cannot seem to see what to change in order to receive the results I want.
I want to sum sales by `Emp_ID` by day, but only sum the ones over $10000 for that day. Below is what I currently have
```
SELECT
Emp_ID,
sum(SaleA+SaleB) as TotalSales,
sum(SaleA+SaleB-CommA-CommB) as TSalesAftComm,
count(Emp_ID) as NumOfSales,
SaleDate
FROM
Sales (nolock)
WHERE
SaleDate>='2014-03-15 00:00:00'
GROUP BY
SaleDate, Emp_ID
HAVING
sum(SaleA+SaleB) > 10000
ORDER BY
SaleDate
```
I know that in my select and group by (Emp\_ID) it will group by date and also Emp\_ID for that date. It seems if I remove the Emp\_ID in the SELECT and GROUP BY area it adds all sales for that day even the ones below $10000.
Below are the results I get
```
Emp_ID | TotalSales | TSalesAftComm | NumOfSales | SaleDate
1 10897.65 10000 6 2014-03-15 00:00:00.000
1 18897.65 17800 8 2014-03-15 00:00:00.000
2 10797.65 10000 5 2014-03-15 00:00:00.000
1 10897.65 10000 6 2014-03-16 00:00:00.000
```
I would like to see the results as
```
| TotalSales | TSalesAftComm | NumOfSales | SaleDate
40592.95 37800 19 2014-03-15 00:00:00.000
10897.65 10000 6 2014-03-16 00:00:00.000
```
Thank you for any help or direction you can provide. | Don't have SQL Server 2000 to test with, but you *should* be able to get it done using a plain subquery, something like;
```
SELECT SUM(TotalSales) TotalSales, SUM(TSalesAftComm) TSalesAftComm,
SUM(NumOfSales) NumOfSales, SaleDate
FROM (
SELECT
Emp_ID,
sum(SaleA+SaleB) as TotalSales,
sum(SaleA+SaleB-CommA-CommB) as TSalesAftComm,
count(Emp_ID) as NumOfSales,
SaleDate
FROM Sales (nolock)
WHERE SaleDate>='2014-03-15 00:00:00'
GROUP BY SaleDate, Emp_ID
HAVING
sum(SaleA+SaleB) > 10000
) z
GROUP BY SaleDate
ORDER BY SaleDate
``` | Without looking at actual data/schema, and looking at your expected results, what happens if you try this (look at the group by section, showing group by TotalSales; employee id removed too)?
```
SELECT
sum(SaleA+SaleB) as TotalSales,
sum(SaleA+SaleB-CommA-CommB) as TSalesAftComm,
count(Emp_ID) as NumOfSales,
SaleDate
FROM
Sales (nolock)
WHERE
SaleDate>='2014-03-15 00:00:00'
GROUP BY
TotalSales
HAVING
sum(SaleA+SaleB) > 10000
ORDER BY
SaleDate
``` | GROUP BY not returning correct totals | [
"",
"sql",
"t-sql",
"sql-server-2000",
""
] |
I am running PostgreSQL 9.3.1. I have `test` database and `backup` user which is used to backup the database. I have no problems with granting privileges to all current tables, but I have to grant privileges each time the new table is added to schema.
```
createdb test
psql test
test=# create table foo();
CREATE TABLE
test=# grant all on all tables in schema public to backup;
GRANT
test=# create table bar();
CREATE TABLE
psql -U backup test
test=> select * from foo;
test=> select * from bar;
ERROR: permission denied for relation bar
```
Is it possible to grant access to tables which will be created in future without making user owner of the table? | It looks like the solution is to alter default privileges for `backup` user:
```
alter default privileges in schema public grant all on tables to backup;
alter default privileges in schema public grant all on sequences to backup;
```
From the comment by Matt Schaffer:
> As caveat, the default only applies to the user that executed the
> `alter` statement. This confused me since I was driving most of my
> permissions statements from the postgres user but creating tables from
> an app user. In short, you might need something like this depending on
> your setup:
```
ALTER DEFAULT PRIVILEGES FOR USER webapp IN SCHEMA public GRANT SELECT ON SEQUENCES TO backup;
ALTER DEFAULT PRIVILEGES FOR USER webapp IN SCHEMA public GRANT SELECT ON TABLES TO backup;
```
Where `webapp` is the user that will be creating new tables in the future and `backup` is the user that will be able to read from new tables created by `webapp`. | If you want the **backup** user to have access to the **future** tables of user**N**,
you must run the code below as each user**N** who creates new tables,
because `ALTER DEFAULT PRIVILEGES` only applies to objects subsequently created
**by the user who ran it**:
```
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO backup;
``` | Grant privileges on future tables in PostgreSQL? | [
"",
"sql",
"database",
"postgresql",
""
] |
Imagine the following scenario:
```
ColA | ColB
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
2 | 3
3 | 1
3 | 2
3 | 3
```
Using SQL Server 2008, how would I count an occurrence such that the combination (1,2) would be the same as (2,1) and therefore my results would be as follows:
```
ColA | ColB | Count
1 | 1 | 1
1 | 2 | 2
1 | 3 | 2
2 | 2 | 1
2 | 3 | 2
3 | 3 | 1
```
Thanks! | Try this:
```
;with cte as
(select
ColA,
ColB,
case when ColA < ColB then ColA else ColB end as ColC,
case when ColA > ColB then ColA else ColB end as ColD
from yourtable)
select
ColC as ColA,
ColD as ColB,
count(1) as Count
from cte
group by ColC, ColD
order by ColC, ColD
``` | Before grouping, normalize the data by making `Col1 = MIN(Col1, Col2)` and `Col2 = MAX(Col1, Col2)`. This converts each possible combination to a canonical one. Then, do the usual grouping. | Count Unique Combination of Values | [
"",
"sql",
"sql-server",
""
] |
I have two tables:
**tata\_data1:**
```
Password | Region | Company | System
-------------------------------------
a02040 | Del | xx | abc
a01917 | Mum | xxx | pqr
a01916 | Mum | x | zzz
a01906 | Nag | x | pny
```
and **tata\_passwords:**
```
Password | Region | Company
----------------------------
a02049 | Nag | xxxx
a01917 | Mum | xxx
a01000 | Del | xx
a01906 | Nag | x
```
I want to fetch only those rows from tata\_passwords, which are not there in tata\_data1. Consider **Password** as the primary key. | Try this:
```
SELECT * FROM tata_passwords WHERE (Password, Region, Company) NOT IN ( SELECT Password, Region, Company FROM tata_data1 )
```
**EDIT:**
Now that Password is the primary key, the query can be reduced to:
```
SELECT * FROM tata_passwords WHERE Password NOT IN ( SELECT Password FROM tata_data1 )
``` | Using a LEFT OUTER JOIN:-
```
SELECT tata_passwords.*
FROM tata_passwords
LEFT OUTER JOIN tata_data1
ON tata_passwords.Password = tata_data1.Password
WHERE tata_data1.Password IS NULL
``` | How to retrieve rows NOT common in two tables | [
"",
"mysql",
"sql",
""
] |
Okay i have 2 tables,
**url\_table**
```
id
link_url
```
**tag table**
```
id
link_tags
```
i'd like to join them, to create a view or table like,
```
id ,link_tag1, link_tag2, link_tag3.....
```
sample data: url\_table
```
[1 | w3schools.com]
[2 | php.net]
```
sample\_data: tag\_table:
```
[1 | html]
[1 | javascript]
[2 | php]
```
required table
```
[1 | html,javascript]
[2 | php]
```
How best can i do this without using too much queries? | You can use MySQL's `GROUP_CONCAT()` function for this:
```
SELECT u.url
,GROUP_CONCAT(t.tag)
FROM url_table u
JOIN tag_table t
ON u.id = t.id
GROUP BY u.url
```
Note: In the example the `JOIN` doesn't really add anything as `ID` and `tag` could just be had from the `tag_table`, but I included it assuming there's more than just the sample. | ```
Select id, group_concat(`Link_Tag` separator ',') as mTags
FROM tag_table
Group by ID
```
As your results don't really render the URL, there's no need for a join. Everything can be done direct on the tag\_table. | optimum way to join the following two tables? | [
"",
"mysql",
"sql",
"database",
"database-design",
""
] |
I'm trying to do some set operations in PostgreSQL 9.3.
I have two tables, for simplicity let's call them `table_a` and `table_b`:
```
create table table_a(id varchar primary key);
create table table_b(id varchar primary key);
```
And I have a simple query (in its simplest formulation, though it's a source for an insert in practice):
```
(select id from table_a) except (select id from table_b);
```
Before I started using PostgreSQL, I'd do an operation like this:
```
set-diff table_a.csv table_b.csv > table_c.csv
```
Where set-diff looks approximately like this:
```
while (not eof(a)) and (not eof(b)):
line_a <- peek_line(a)
line_b <- peek_line(b)
if line_a < line_b:
output read_line(a)
else if line_a == line_b:
read_line(a)
else:
read_line(b)
while not eof(a):
output read_line(a)
```
This doesn't take very long at all, has insignificant memory requirements, and maximizes efficient use of sequential disk I/O. That's important since this machine doesn't have heaps of memory - it can't fit all the data in RAM.
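As a cross-check, the pseudocode above translates almost line for line into Python; a minimal sketch over two sorted lists (the file I/O from the real tool is omitted):

```python
def set_diff(a, b):
    """Lines present in sorted sequence a but not in sorted sequence b."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            out.append(a[i])  # only in a: emit it
            i += 1
        elif a[i] == b[j]:
            i += 1            # in both: skip the a line
        else:
            j += 1            # only in b: advance b
    out.extend(a[i:])         # remaining tail of a is only in a
    return out

print(set_diff([1, 2, 4, 6], [2, 3, 6]))  # [1, 4]
```

Each input is read exactly once, in order, which is what makes the sequential-I/O version so cheap.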
However, PostgreSQL comes up with this kind of plan (from some actual tables):
```
QUERY PLAN
----------------------------------------------------------------------------------
SetOp Except (cost=3184554.28..3238904.44 rows=9434298 width=51)
-> Sort (cost=3184554.28..3211729.36 rows=10870032 width=51)
Sort Key: "*SELECT* 1".id
-> Append (cost=0.00..428039.64 rows=10870032 width=51)
-> Subquery Scan on "*SELECT* 1" (cost=0.00..345707.96 rows=9434298 width=54)
-> Seq Scan on table_a (cost=0.00..251364.98 rows=9434298 width=54)
-> Subquery Scan on "*SELECT* 2" (cost=0.00..82331.68 rows=1435734 width=32)
-> Seq Scan on table_b (cost=0.00..67974.34 rows=1435734 width=32)
```
The query takes way too long - several minutes.
I'm convinced that PostgreSQL could use the same kind of merge strategy I outline above, using index scans alone, and no sorting. Instead it seems to be concatenating two table scans and sorting the whole lot of them, a little bit like this command line, though without reading table\_b twice:
```
sort table_a.csv table_b.csv table_b.csv | uniq -u
```
This involves rather a lot of extra work - some fraction of log(n) times more I/O, for one, when not everything will fit in memory.
The columns involved are btree indexed. The only column being selected from the query is the same one that is indexed and is being merged. Locale is C everywhere.
Before I was using a lot of text files and a few custom indexing tools. I'm trying to use a database instead to get extra flexibility in querying and to avoid having to maintain custom indexes. However the performance is appalling, so much so that I'm considering doing my merges and most other mass update operations outside the database, round-tripping the data through csv.
What am I missing? | First thoughts:
* Plain `EXCEPT` means `EXCEPT DISTINCT` which means it eliminates duplicate rows from its result. Use `EXCEPT ALL` if you can, it *should* be faster.
* Don't use [combining queries](http://www.postgresql.org/docs/9.3/static/queries-union.html) if you have other options; they are known to be slow.
* From your `EXPLAIN`, it seems you applied an ordering too, which also takes more time (especially on combining queries).
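As a quick sanity check that the rewrites return the same rows, here is a small SQLite sketch (table names follow the question; SQLite's plain `EXCEPT` also deduplicates, like Postgres'):

```python
import sqlite3

# In-memory stand-ins for table_a / table_b from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a(id TEXT PRIMARY KEY);
    CREATE TABLE table_b(id TEXT PRIMARY KEY);
    INSERT INTO table_a VALUES ('1'), ('2'), ('3'), ('4');
    INSERT INTO table_b VALUES ('2'), ('4');
""")

# Set difference via EXCEPT.
except_rows = con.execute(
    "SELECT id FROM table_a EXCEPT SELECT id FROM table_b ORDER BY id"
).fetchall()

# The same set difference rewritten with NOT IN.
not_in_rows = con.execute(
    "SELECT id FROM table_a WHERE id NOT IN (SELECT id FROM table_b) ORDER BY id"
).fetchall()

print(except_rows)                 # [('1',), ('3',)]
print(except_rows == not_in_rows)  # True
```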
Results on my `9.2`:
**`EXCEPT`**
```
explain select id from table_a except (select id from table_b);
```
results:
```
HashSetOp Except (cost=0.00..947.00 rows=20000 width=5)
-> Append (cost=0.00..872.00 rows=30000 width=5)
-> Subquery Scan on "*SELECT* 1" (cost=0.00..563.00 rows=20000 width=5)
-> Seq Scan on table_a (cost=0.00..363.00 rows=20000 width=5)
-> Subquery Scan on "*SELECT* 2" (cost=0.00..309.00 rows=10000 width=4)
-> Seq Scan on table_b (cost=0.00..209.00 rows=10000 width=4)
```
**`EXCEPT` with `ORDER BY`**
```
explain select id from table_a except (select id from table_b) order by id;
```
results:
```
Sort (cost=2375.77..2425.77 rows=20000 width=5)
Sort Key: "*SELECT* 1".id
-> HashSetOp Except (cost=0.00..947.00 rows=20000 width=5)
-> Append (cost=0.00..872.00 rows=30000 width=5)
-> Subquery Scan on "*SELECT* 1" (cost=0.00..563.00 rows=20000 width=5)
-> Seq Scan on table_a (cost=0.00..363.00 rows=20000 width=5)
-> Subquery Scan on "*SELECT* 2" (cost=0.00..309.00 rows=10000 width=4)
-> Seq Scan on table_b (cost=0.00..209.00 rows=10000 width=4)
```
**Anti `JOIN` with `ORDER BY`**
```
explain select table_a.id from table_a
left outer join table_b on table_a.id = table_b.id
where table_b.id is null order by table_a.id;
```
and
```
explain select id from table_a
where not exists (select * from table_b where table_b.id = table_a.id) order by id;
```
results (identical):
```
Merge Anti Join (cost=0.57..1213.57 rows=10000 width=5)
Merge Cond: ((table_a.id)::text = (table_b.id)::text)
-> Index Only Scan using table_a_pkey on table_a (cost=0.29..688.29 rows=20000 width=5)
-> Index Only Scan using table_b_pkey on table_b (cost=0.29..350.29 rows=10000 width=4)
```
**`NOT IN` with `ORDER BY`**
```
explain select id from table_a where id not in (select id from table_b) order by id;
```
results (my winner):
```
Seq Scan on table_a (cost=234.00..647.00 rows=10000 width=5)
Filter: (NOT (hashed SubPlan 1))
SubPlan 1
-> Seq Scan on table_b (cost=0.00..209.00 rows=10000 width=4)
```
Used
```
create table table_a(id varchar primary key, rnd float default random());
create table table_b(id varchar primary key, rnd float default random());
do language plpgsql $$
begin
for i in 1 .. 10000 loop
insert into table_a(id) values (i);
insert into table_b(id) values (i);
end loop;
for i in 10001 .. 20000 loop
insert into table_a(id) values (i);
end loop;
end;
$$;
``` | How do these variants do?
```
select id
from table_a a
where not exists (select 1 from table_b b where b.id = a.id);
```
or:
```
select id
from table_a left outer join
table_b b
on a.id = b.id
where b.id is null;
```
If these perform better, it is simply that not as much effort has gone into optimizing `except` as into other components of the language.
"",
"sql",
"database",
"performance",
"postgresql",
""
] |
I have a string variable that denotes a time:
```
@time = '5:00 PM'
```
I need to check if the current time `getdate()` is after or before `@time`. How can I do this in SQL? | This should do it:
SQL 2008+
```
if datediff(ss,cast(@time as time),cast(GetDate() as time)) < 0
print 'Future'
else
print 'Past'
```
Earlier:
```
if DatePart(hh,GETDATE())*60+DATEPART(mm,getDate()) < DatePart(hh,@time)*60+DATEPART(mm,@time)
print 'Future'
else
Print 'Past'
``` | One way, but not great on performance.
```
declare @time varchar(20)
set @time = '5:00pm'
select DatePart(hh,GETDATE())*60+DATEPART(mm,getDate()) as CurTime,
DatePart(hh,@time)*60+DATEPART(mm,@time) as TheTime
``` | Compare two time strings in SQL | [
"",
"sql",
"sql-server",
"datetime",
"sql-server-2005",
""
] |
I'm in the process of trying to get the count from an output of a T-SQL query.
Here's the sample table information...
```
------------------------------------------------------
| order_no | company | destination | date |
|-------------|-------------|--------------|----------|
| 100 | Burger King | Los Angeles | 20140305 |
|-------------|-------------|--------------|----------|
| 101 | Burger King | Phoenix | 20140312 |
|-------------|-------------|--------------|----------|
| 102 | Burger King | Los Angeles | 20140322 |
|-------------|-------------|--------------|----------|
| 103 | McDonalds | Las Vegas | 20140315 |
|-------------|-------------|--------------|----------|
| 104 | McDonalds | Las Vegas | 20140324 |
|-------------|-------------|--------------|----------|
| 105 | McDonalds | Las Vegas | 20140305 |
|-------------|-------------|--------------|----------|
| 106 | McDonalds | Las Vegas | 20140311 |
|-------------|-------------|--------------|----------|
| 107 | Burger King | San Diego | 20140317 |
|-------------|-------------|--------------|----------|
| 108 | Burger King | Los Angeles | 20140305 |
|-------------|-------------|--------------|----------|
| 109 | Burger King | Phoenix | 20140311 |
|-------------|-------------|--------------|----------|
| 110 | Burger King | San Diego | 20140313 |
|-------------|-------------|--------------|----------|
| 111 | Burger King | Los Angeles | 20140319 |
|-------------|-------------|--------------|----------|
| 112 | Burger King | San Diego | 20140304 |
|-------------|-------------|--------------|----------|
```
Based on this information, I then run the following query.
```
SELECT company, COUNT(destination) as company_destination, destination
from dbo.burger_orders
WHERE (date >= 20140301 AND date <= 20140331)
group by company, destination
```
So the result is below. This is exactly what I want for one piece of the information I need. However, I also need an additional stat from this result: a count of every destination it contains.
```
Company Company Orders Destination
Burger King 4 Los Angeles
Burger King 3 San Diego
Burger King 2 Phoenix
McDonald's 4 Las Vegas
```
***DESIRED OUTPUT***
***So what I need to show is the 'Destination' column grand total, which is 4, based on the cities in the column above.***
I'm stumped as to how to approach this; as you'll note below, I haven't been too successful, partially because I have never been in this scenario before.
What I've tried
```
SELECT company, COUNT(destination) as company_destination,
COUNT(DISTINCT destination)
from dbo.burger_orders
WHERE (date >= 20140301 AND date <= 20140331)
group by company, destination
```
AND
```
SELECT company, COUNT(destination) as company_destination,
SUM(COUNT(DISTINCT destination)) from dbo.burger_orders
WHERE (date >= 20140301 AND date <= 20140331)
group by company, destination
```
The first just turns the destination column into a 1 in every cell. I get that. However, I need the grand total from this. I tried slapping a SUM in front of the COUNT in the second example and got the following error.
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
Am I even on the right path to get the correct result? | Does this help you ?
```
SELECT company,
COUNT(destination) as company_destination,
destination,
count(destination) over() as TotalCities
from dbo.burger_orders
WHERE (date >= 20140301 AND date <= 20140331)
group by company, destination
``` | You are looking for [analytic functions](http://technet.microsoft.com/en-us/library/ms189461.aspx)
Something [like this](http://sqlfiddle.com/#!6/4eee9/25):
```
SELECT company, company_orders, COUNT(destination) OVER () AS dest_count
FROM (
SELECT company, COUNT(destination) as company_orders, destination
FROM burger_orders
WHERE (date >= 20140301 AND date <= 20140331)
GROUP BY company, destination
) x
GROUP BY company, company_orders, destination;
``` | How to get a grand total of distinct cells based on T-sql count output? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a query which gives me the last month's completed data
```
select * from Tabl
where datepart(month, completed_date) = DATEPART(month, getDate())-1
and datepart(year, completed_date) = datepart(year, getDate())
```
But it gives me wrong data when the current date is in January.
How can I write the condition to return correct data if the current month is January? | SQL Server's DATEADD function will help you here.
```
select * from Tabl
where datepart(month, completed_date) = DATEPART(month, DATEADD(MM,-1,getDate()))
and datepart(year, completed_date) = datepart(year, DATEADD(MM,-1,getDate()))
```
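The shift-first idea can be sanity-checked portably; this sketch uses Python's sqlite3, whose `-1 month` modifier plays the role of `DATEADD`, with a fixed January date to show the year wrap-around:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Shift a January date back one month, then read the year/month parts.
prev_year, prev_month = conn.execute("""
SELECT strftime('%Y', '2014-01-15', '-1 month'),
       strftime('%m', '2014-01-15', '-1 month')
""").fetchone()
print(prev_year, prev_month)   # 2013 12
```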
The 'MM' used is just a keyword for Month. | Well, try subtracting one month from the date instead:
```
select *
from Tabl
where month(completed_date) = month(dateadd(month, -1, getdate())) and
      year(completed_date) = year(dateadd(month, -1, getdate()));
```
But, if you have an index on `completed_date`, it is better to put all the operations on `getdate()` so the expression is "sargable" (that is, an index can be used):
```
where completed_date >= cast(dateadd(month, -1, getdate() - day(getdate()) + 1) as date) and
      completed_date < cast(getdate() - day(getdate()) + 1 as date)
```
When you subtract the "day of the month" from the date, you get the last day of the previous month. | How to get this condition in where clause of Sql | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
Could any one help me for the below request.
I have data with one row for the Login DateTime and another row for the Logout DateTime. The rest of the fields are the same. I need to combine both rows into one with Login (DateTime) and Logout (DateTime).
Sample Data
```
ID Code DateTime User Status
35 100 1/1/2014 14:50 a IN
35 100 1/1/2014 15:45 a OUT
35 100 1/1/2014 18:20 a IN
35 100 1/1/2014 19:10 a OUT
```
Result should look like below
```
ID Code Datetime1 Datetime2 User
35 100 2014-01-01 14:50 2014-01-01 15:45 a
35 100 2014-01-01 18:20 2014-01-01 19:10 a
```
Thank you. | This finds the next date/time with an 'OUT' after each 'IN':
(simplified to match small data sample, extra code required)
```
With YourData as (
SELECT 35 as ID, 100 as Code, '1/1/2014 14:50' as yDatetime,
'a' as yUser, 'IN' AS status UNION ALL
SELECT 35,100, '1/1/2014 15:45', 'a', 'OUT' UNION ALL
SELECT 35,100, '1/1/2014 18:20', 'a', 'IN' UNION ALL
SELECT 35,100, '1/1/2014 19:10', 'a', 'OUT'
)
SELECT
ID,
Code,
yDatetime AS When_IN,
(SELECT Min(yDatetime) FROM YourData yd2
WHERE (yd2.yDatetime>YourData.yDatetime)
AND Status='OUT'
-- extra matching needed here
-- for ID, CODE, User fields in use
) AS When_OUT,
yUser as _User
FROM YourData WHERE Status='IN'
```
Results:
35 100 1/1/2014 14:50 1/1/2014 15:45 a
35 100 1/1/2014 18:20 1/1/2014 19:10 a | Use the `ROW_NUMBER()` windowing function to determine the closest 'OUT' status for each 'IN' iteration:
```
SELECT * FROM (
SELECT t1.ID, t1.Code, t1.[Datetime] as Datetime1, tNext.[Datetime] as Datetime2, t1.[User],
ROW_NUMBER() OVER (PARTITION BY t1.ID, t1.Code, t1.[User], t1.[Datetime] ORDER BY tNext.[Datetime]) rowNum
FROM myTable t1
JOIN myTable tNext ON
t1.ID = tNext.ID AND
t1.Code = tNext.Code AND
t1.[User] = tNext.[User] AND
tNext.Status = 'OUT' AND
t1.[Datetime] < tNext.[Datetime]
WHERE t1.Status = 'IN' ) t
WHERE rowNum = 1
ORDER BY ID, Code, [User], Datetime1
```
SQLFiddle [here](http://www.sqlfiddle.com/#!3/83d20/2) | Combine Two Rows into One with Similar fields (DateTime) and NULL Vales in SQL | [
"",
"sql",
"sql-server-2008",
""
] |
Each row in my table belongs to some *category*, has some *value* and other data.
I would like to select each *category* with the most common *value* for it (doesn't matter which one if there are multiple), ordered by *category*.
```
some_table: expected result:
+--------+-----+--- +--------+-----+
|category|value|... |category|value|
+--------+-----+--- +--------+-----+
| 1 | a | | 1 | a |
| 1 | a | | 2 | b |
| 1 | b | | 3 | a # or b
| 2 | a | +--------+-----+
| 2 | b |
| 2 | c |
| 2 | b |
| 3 | a |
| 3 | a |
| 3 | b |
| 3 | b |
+--------+-----+---
```
I have a solution (posting it as an answer) but it seems suboptimal to me. So I'm looking for better solutions.
My table will have up to 10000 rows (possibly, but not likely, beyond that).
I'm planning to use SQLite but I'm not tied to it, so I may reconsider if SQLite can't do this with reasonable performance. | I would be inclined to do this using a correlated subquery:
```
select distinct category,
(select value
from some_table t2
where t2.category = t.category
group by value
order by count(*) desc
limit 1
) as mode_value
from some_table t;
```
The name for the most common value is "mode" in statistics.
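Since SQLite is the target, the correlated subquery can be verified directly with Python's sqlite3 and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE some_table (category INT, value TEXT);
INSERT INTO some_table VALUES
  (1,'a'),(1,'a'),(1,'b'),
  (2,'a'),(2,'b'),(2,'c'),(2,'b'),
  (3,'a'),(3,'a'),(3,'b'),(3,'b');
""")

rows = conn.execute("""
SELECT DISTINCT category,
       (SELECT value
          FROM some_table t2
         WHERE t2.category = t.category
         GROUP BY value
         ORDER BY COUNT(*) DESC
         LIMIT 1) AS mode_value
FROM some_table t
ORDER BY category
""").fetchall()
print(rows)
```

For category 3 either `a` or `b` may come back, matching the "doesn't matter which one" requirement in the question.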
And, if you had a `categories` table, this would be written as:
```
select category,
(select value
from some_table t2
where t2.category = c.category
group by value
order by count(*) desc
limit 1
) as mode_value
from categories c;
``` | Here is one option, but I think it's slow...
```
SELECT DISTINCT `category` AS `the_category`, `value`
FROM `some_table`
WHERE `value`=(
SELECT `value`
FROM `some_table`
WHERE `category`=`the_category`
GROUP BY `value`
ORDER BY COUNT(`value`) DESC LIMIT 1)
ORDER BY `category`;
```
You can replace a part of this with `` WHERE `id`=( SELECT `id` `` if the table has a unique/primary key column, then the `LIMIT 1` is not needed. | Select the most common item for each category | [
"",
"sql",
"sqlite",
""
] |
I am using the following query to get the latest 2 messages of the same conversation:
```
SELECT *
FROM messages
WHERE conversation_id
IN ( 122806, 122807 )
GROUP BY conversation_id
ORDER BY sent_on DESC
LIMIT 2
```

Which return `message7` and `message3` as result.
What I need is to get latest 2 messages grouped by conversation\_id, therefore the result should be:
```
message3
message1
message4
message5
``` | The canonical way to do this is with a counter in the `where` clause:
```
select m.*
from message m
where 2 >= (select count(*)
from message m2
where m2.conversation_id = m.conversation_id and
m2.sent_on >= m.sent_on
);
```
An index on `message(conversation_id, sent_on)` would definitely help this query. This also assumes that `sent_on` is unique. Otherwise, you can just use `id`.
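The counter-in-`where` version is plain SQL, so it can be sanity-checked with Python's sqlite3 (the sample rows below are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message (id INT, conversation_id INT, body TEXT, sent_on TEXT);
INSERT INTO message VALUES
  (1, 122806, 'message1', '2014-03-01 10:00'),
  (2, 122806, 'message2', '2014-03-01 11:00'),
  (3, 122806, 'message3', '2014-03-01 12:00'),
  (4, 122807, 'message4', '2014-03-02 09:00'),
  (5, 122807, 'message5', '2014-03-02 10:00');
""")

# Keep a row only if at most one later-or-equal message exists
# in the same conversation, i.e. the latest two per conversation.
rows = conn.execute("""
SELECT body FROM message m
WHERE 2 >= (SELECT COUNT(*) FROM message m2
             WHERE m2.conversation_id = m.conversation_id
               AND m2.sent_on >= m.sent_on)
ORDER BY conversation_id, sent_on DESC
""").fetchall()
print([r[0] for r in rows])   # ['message3', 'message2', 'message5', 'message4']
```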
A more efficient method is to use variables:
```
select m.*
from (select m.*,
@rn := if(@conversation_id = conversation_id, @rn + 1, 1) as rn,
@conversation_id := conversation_id
from message m cross join
(select @conversation_id := '', @rn := 0) const
order by conversation_id, sent_on desc
) m
where rn <= 2;
``` | One way to go about this is using `GROUP_CONCAT()` and `SUBSTRING_INDEX()`, but it will show the messages separated by the separator you specify in the query, one row per conversation_id rather than one row per message. You can use an `ORDER BY` clause inside the GROUP_CONCAT function; I have used `ORDER BY sent_on DESC` so the messages will be grouped and ordered by `sent_on`.
```
SELECT conversation_id,
SUBSTRING_INDEX(
GROUP_CONCAT(message ORDER BY sent_on DESC SEPARATOR '||'),
'||',2) two_messages
FROM messages
/*
optional where filter i have provided example for all conversations
WHERE conversation_id
IN ( 122806, 122807 ) */
GROUP BY conversation_id
ORDER BY sent_on DESC
```
Also note there is a default 1024 character limit set on `GROUP_CONCAT()` but you can also increase it by following the [**GROUP\_CONCAT()**](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) manual | Use limit before group by | [
"",
"mysql",
"sql",
""
] |
In my Oracle 11 database, I have a SQL script for inserting (migrating) data from one table into another; this is a problem I encounter a lot. **For instance**:
```
INSERT INTO TargetTable (eventname, eventplace, eventdate)
SELECT name, eventplace, "How to insert time here?" FROM SourceTable;
```
The SourceTable doesn't have the eventdate column, and it is **"Not Nullable"** in TargetTable, so I have to insert something there; in this case I just want to use the current time for all rows. How can I do this?
P.S. Not only for dates: this could also be a missing varchar column where I might need to set a default value in the target table. | For date fields just use SYSDATE; for varchar fields what you have done is right, just adding a literal default value after SELECT works the same way. | Try this:
```
INSERT INTO TargetTable (
eventname,
eventplace,
eventdate)
SELECT name,
eventplace,
SysDate -- <- Current Date and Time
FROM SourceTable;
``` | Oracle Sql: How to set a default column value on insert select | [
"",
"sql",
"oracle",
"data-migration",
""
] |
I'm relatively new to SQL.
I'm trying to print a simple pattern through this code
```
declare
n number(2):=5;
temp number(2):=n;
begin
for a in 1..5
a:=a+1;loop
for b in 1..temp loop
b:=b+1;
dbms_output.put_line(' ');
temp:=temp-1;
end loop;
for c in 1..2*a-1 loop
c:=c+1;
dbms_output.put_line('*');
end loop;
end loop;
end;
/
```
I keep getting this error:
```
PLS-00103: Encountered the symbol "A" when expecting one of the following:
* & - + / at loop mod remainder rem <an exponent (**)> ||
multiset
```
I understand Oracle doesn't allow referencing the loop counter as the target of an assignment, which is why I keep getting the error at line 6, but I'm unable to make it work even by declaring another variable and putting the increment statement in it.
Please help.
Thanks! | Modifying the previous answers to actually give you Pascal's triangle, which you mentioned you were attempting in a comment:
```
set serveroutput on format wrapped
declare
n number(2):=5;
begin
for a in 1..n loop
for b in 1..n-a loop
dbms_output.put(' ');
end loop;
for c in 1..2*a-1 loop
dbms_output.put('*');
end loop;
dbms_output.new_line;
end loop;
end;
/
*
***
*****
*******
*********
PL/SQL procedure successfully completed.
```
Both your `dbms_output.put_line` calls needed to be just `dbms_output.put`, as that was printing each `*` on a line on its own. But you do need a line break after each time around the `a` loop, so I've added a `dbms_output.newline` at the end of that. You were also decrementing `temp` inside the `b` loop, which meant it was zero instead of `(n-1)` for the second time around the `a` loop; but you don't really need a separate `temp` variable at all as that is always the same as `(n-a)+1` and the `+1` just puts an extra space on every line. (I also made the `a` loop `1..n` as I assume you want to change the value of `n` later in one place only). With `n := 8`:
```
*
***
*****
*******
*********
***********
*************
***************
```
Crucially though you also have to `set serveroutput on format wrapped`, otherwise the leading spaces you're generating in the `b` loop are discarded.
You can also do this in plain SQL, though you need to supply the `5` twice, or use a bind or substitution variable:
```
select lpad(' ', 5 - level, ' ') || rpad('*', (level * 2) - 1, '*') as pascal
from dual
connect by level <= 5
PASCAL
------------------------------
*
***
*****
*******
*********
```
Your `b` and `c` loops are just doing a manual `lpad` really. | When I took your code into my editor, I first noticed that you tried to increase `a` before starting the loop, and Oracle gives its first error at that point. It also does not allow you to increase the counter variable inside a for loop (I don't know why). I checked on the internet and found that you cannot set an increment step for Oracle for loops, nor can you assign a value to the counter variable of a for loop.
The code below works fine for me:
```
declare
n number(2):=5;
temp number(2):=n;
begin
for a in 1..5
loop --a:=a+1;
for b in 1..temp loop
--b:=b+1;
dbms_output.put_line(' ');
temp:=temp-1;
end loop;
for c in 1..2*a-1 loop
--c:=c+1;
dbms_output.put_line('*');
end loop;
end loop;
end;
/
``` | ERROR: Reference the counter as the target of an assignment - PL/SQL | [
"",
"sql",
"oracle",
""
] |
I'm wondering if there is a simpler way to accomplish my goal than what I've come up with.
I am returning a specific attribute that applies to an object. The objects go through multiple iterations and the attributes might change slightly from iteration to iteration. The iteration will only be added to the table if the attribute changes. So the most recent iteration might not be in the table.
Each attribute is uniquely identified by a combination of the Attribute ID (AttribId) and Generation ID (GenId).
```
Object_Table
ObjectId | AttribId | GenId
32 | 2 | 3
33 | 3 | 1
Attribute_Table
AttribId | GenId | AttribDesc
1 | 1 | Text
2 | 1 | Some Text
2 | 2 | Some Different Text
3 | 1 | Other Text
```
When I query on a specific object I would like it to return an exact match if possible. For example, Object ID 33 would return "Other Text".
But if there is no exact match, I would like for the most recent generation (largest Gen ID) to be returned. For example, Object ID 32 would return "Some Different Text". Since there is no Attribute ID 2 from Gen 3, it uses the description from the most recent iteration of the Attribute which is Gen ID 2.
This is what I've come up with to accomplish that goal:
```
SELECT attr.AttribDesc
FROM Attribute_Table AS attr
JOIN Object_Table AS obj
ON attr.AttribId = obj.AttribId
WHERE attr.GenId = (SELECT MIN(GenId)
FROM(SELECT CASE obj2.GenId
WHEN attr2.GenId THEN attr2.GenId
ELSE(SELECT MAX(attr3.GenId)
FROM Attribute_Table AS attr3
JOIN Object_Table AS obj3
ON obj3.AttribId = attr3.AttribId
WHERE obj3.AttribId = 2
)
END AS GenId
FROM Attribute_Table AS attr2
JOIN Object_Table AS obj2
ON attr2.AttribId = obj2.AttribId
WHERE obj2.AttribId = 2
) AS ListOfGens
)
```
Is there a simpler way to accomplish this? I feel that there should be, but I'm relatively new to SQL and can't think of anything else.
Thanks! | The following query will return the matching value, if found, otherwise use a correlated subquery to return the value with the highest GenId and matching AttribId:
```
SELECT obj.Object_Id,
CASE WHEN attr1.AttribDesc IS NOT NULL THEN attr1.AttribDesc ELSE attr2.AttribDesc END AS AttribDesc
FROM Object_Table AS obj
LEFT JOIN Attribute_Table AS attr1
ON attr1.AttribId = obj.AttribId AND attr1.GenId = obj.GenId
LEFT JOIN Attribute_Table AS attr2
ON attr2.AttribId = obj.AttribId AND attr2.GenId = (
SELECT max(GenId)
FROM Attribute_Table AS attr3
WHERE attr3.AttribId = obj.AttribId)
```
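A runnable sketch of this double LEFT JOIN with Python's sqlite3, using `COALESCE` in place of the `CASE` expression (equivalent here) and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE object_table (object_id INT, attrib_id INT, gen_id INT);
CREATE TABLE attribute_table (attrib_id INT, gen_id INT, attrib_desc TEXT);
INSERT INTO object_table VALUES (32, 2, 3), (33, 3, 1);
INSERT INTO attribute_table VALUES
  (1, 1, 'Text'), (2, 1, 'Some Text'),
  (2, 2, 'Some Different Text'), (3, 1, 'Other Text');
""")

# Exact (attrib_id, gen_id) match wins; otherwise fall back to the
# highest gen_id for that attrib_id.
rows = conn.execute("""
SELECT obj.object_id,
       COALESCE(exact.attrib_desc, latest.attrib_desc) AS attrib_desc
FROM object_table obj
LEFT JOIN attribute_table exact
       ON exact.attrib_id = obj.attrib_id AND exact.gen_id = obj.gen_id
LEFT JOIN attribute_table latest
       ON latest.attrib_id = obj.attrib_id
      AND latest.gen_id = (SELECT MAX(a3.gen_id) FROM attribute_table a3
                            WHERE a3.attrib_id = obj.attrib_id)
ORDER BY obj.object_id
""").fetchall()
print(rows)   # [(32, 'Some Different Text'), (33, 'Other Text')]
```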
In the case where there is no matching record at all with the given AttribId, it will return NULL. If you want to get no record at all in this case, make the second JOIN an INNER JOIN rather than a LEFT JOIN. | Try this...
In case the logic doesn't find a match for the **Object_Table GENID**, it maps it to the **highest GENID** for that attribute in the `ON` clause of the `JOIN`.
```
SELECT AttribDesc
FROM object_TABLE A
INNER JOIN Attribute_Table B
ON A.AttribId = B.AttribId
AND (
CASE
WHEN A.Genid <> B.Genid
THEN (
SELECT MAX(C.Genid)
FROM Attribute_Table C
WHERE A.AttribId = C.AttribId
)
ELSE A.Genid
END
) -- Selecting the right GENID in the join clause should do the job
= B.Genid
``` | Is there a simpler way to write this query? [MS SQL Server] | [
"",
"sql",
"sql-server",
"simplify",
""
] |
I am trying to use the below query, which shows the **country** and **population** of the *second most and second least populous* countries. I figured out a way to select the population for those countries, but I can't find any good way to also select the country names.
```
Select Max(population)
From country Where population < (Select max (population) From country)
Union
Select Min(population)
From country where population > (select Min(population) from country) ;
```
I found a way of selecting country and population for the second most/second least populous country, but the problem is I can't use `union` on two selects that each have their own ORDER BY.
Any idea what I can do to solve my problem?
*Note: I'm using Postgres* | ```
select *
from (
select country, population
from
(
select country, population
from country
order by population
offset 1 limit 1
) s
union
select country, population
from
(
select country, population
from country
order by population desc
offset 1 limit 1
) q
) s
``` | By using window function, you can do it simply like this:
```
with t as (
select population,
row_number() over (order by population desc) mx,
row_number() over (order by population asc) mn
from country)
select 'second most population', population from t where mx = 2
union all
select 'second least population', population from t where mn = 2;
``` | Selecting two columns in same table multiple times | [
"",
"sql",
"postgresql",
"union",
"union-all",
"sql-limit",
""
] |
At work we did a project that required a team to count students 8 times a day over 5 days at specific time periods. They are, as follows :-
```
09:00, 10:00, 11:00, 13:15, 14:15, 14:50, 15:50, 16:20.
```
Now, the data collected was put directly into a database via a web app. The problem is that the database recorded each entry using the standard YYYY-MM-DD HH:MM:SS.MIL format, but if I were to order the records by date and then by student count it would cause the following problem:
e.g.:-
if the students counted in a room was 5 at 09:00:12, but another room had a count of 0 at 09:02:20 and I did the following:
```
select student_count, audit_date
from table_name
order by audit_date, student_count;
```
The query will return:
```
5 09:00:12
0 09:02:20
```
but I want:
```
0 09:00:00
5 09:00:00
```
because we're looking for the number of students in each room for the period 09:00, but unfortunately to collect the data it required us to do so within that hour and obviously the database will pick up on that accuracy. Furthermore, this issue becomes more problematic when it gets to the periods 14:15 and 14:50, where we will need to be able to distinguish between the two periods.
Is there a way to ignore the seconds part of the DateTime and round the minutes down to the nearest ten minutes?
I'm using SQL Server Management Studio 2012. If none of this made sense, I'm sorry! | You may want some sort of Period table to store your segments. Then you can use that to join to your counts table.
```
CREATE TABLE [Periods]
( -- maybe [id] INT,
[start_time] TIME,
[end_time] TIME
);
INSERT INTO [Periods]
VALUES ('09:00','10:00'),
('10:00','11:00'),
('11:00','13:15'),
('13:15','14:15'),
('14:15','14:50'),
('14:50','15:50'),
('15:50','16:20'),
('16:20','17:00')
SELECT
student_count, [start_time]
FROM table_name A
INNER JOIN [Periods] B
ON CAST(A.[audit_date] AS TIME) >= B.[start_time]
AND CAST(A.[audit_date] AS TIME) < B.[end_time]
``` | You can use the `DATEADD` and `DATEPART` functions to accomplish this together with a `CASE` expression. If you want more precise cutoffs between the `:15` and `:50` periods you can easily adjust the CASE expression, and likewise if you want the minutes rounded to `:10` or `:15`.
```
-- some test data
declare @t table (the_time time)
insert @t values ('09:00:12')
insert @t values ('14:16:12')
insert @t values ('09:02:12')
insert @t values ('14:22:12')
insert @t values ('15:49:12')
insert @t values ('15:50:08')
select
the_time,
case
when datepart(minute,the_time) < 15 then
dateadd(second, -datepart(second,the_time),dateadd(minute, -datepart(minute,the_time),the_time))
when datepart(minute,the_time) >= 15 and datepart(minute,the_time) < 50 then
dateadd(second, -datepart(second,the_time),dateadd(minute, -datepart(minute,the_time)+10,the_time))
else
dateadd(second, -datepart(second,the_time),dateadd(minute, -datepart(minute,the_time)+50,the_time))
end as WithoutSeconds
from @t
```
Results:
```
the_time WithoutSeconds
---------------- ----------------
09:00:12.0000000 09:00:00.0000000
14:16:12.0000000 14:10:00.0000000
09:02:12.0000000 09:00:00.0000000
14:22:12.0000000 14:10:00.0000000
15:49:12.0000000 15:10:00.0000000
15:50:08.0000000 15:50:00.0000000
``` | SQL - How to ignore seconds and round down minutes in DateTime data type | [
"",
"sql",
"sql-server",
"datetime",
""
] |
How can I find which part of the INSTR is not found? For example, my query below returns 70002 and 70001 in my results. What I would now like to do is find which part is not in the table. Here is a hurdle I can't get over: I don't want to see all the other orderItems in the table.
In my scenario here I just want 70000 returned.
Basically, which part of my string is not in the table.
All advice is appreciated.
```
SELECT
orderItems
FROM Orders
WHERE
INSTR('70002-70001-70000',orderItems)!=0
``` | It would be better to parse the string into a `SecondTable` and get the desired results with:
```
select *
from SecondTable
where SecondTable.OrderItems not in(select OrderItems from Orders)
```
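If parsing into a real table isn't an option, the set difference can also be computed by splitting the string in the host language; a sketch with Python's sqlite3 (table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (orderItems TEXT);
INSERT INTO Orders VALUES ('70002'), ('70001');
""")

# Split the search string, then keep only the pieces with no matching row.
wanted = '70002-70001-70000'.split('-')
missing = [item for item in wanted
           if conn.execute("SELECT 1 FROM Orders WHERE orderItems = ?",
                           (item,)).fetchone() is None]
print(missing)   # ['70000']
```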
But if you need to select from the string, assign it to a variable and modify the variable for each record in your Orders table:
```
set @str = '70002-70001-70000';
select @str := replace(@str,OrderItems,'')
from Orders;
select replace(@str,'-','') STR;
```
<http://www.sqlfiddle.com/#!2/f8d67fa/9> | ```
SELECT orderItems
FROM Orders
WHERE orderItems IN ('7001','7002','7003')
SELECT orderItems
FROM Orders
WHERE orderItems NOT IN ('7001','7002','7003')
``` | Finding which part of INSTR is not found | [
"",
"sql",
""
] |
I have a problem with a shell script:
I am trying to inject data from a source file containing MySQL queries into a database. Here are the relevant lines of my shell script:
```
mysql -u root
source /usr/local/insert.sql;
quit;
```
For example, I am running the file as `./insertfile` and it runs smoothly until the data-insertion step: it logs into MySQL using the `mysql -u root` command, but the remaining operations (`source /usr/local/insert.sql;` and `quit;`) are not executed. When I quit MySQL manually, it then tries to execute the rest of the commands from my `insert.sql` file.
So please help me use the right shell script so that I can insert the queries from the source file. | Thanks for your help. I tried adding your line in my script; it was initially giving some errors, so I changed the command as below:
`mysql -u root --execute="source /usr/local/insert.sql; \q"`
The line above allowed me to execute my command.
Thanks to all for being so helpful.
Regards,
Shah9il | One way to do that would be
```
mysql -u root --execute="source /usr/local/insert.sql; quit;"
``` | Inserting data in mysql with shell script | [
"",
"mysql",
"sql",
"bash",
"shell",
""
] |
When we create parent-child tables, does the child table need its own primary key, or should the foreign key of the parent be used as the primary key?
Example:
I have a table that stores users, which is defined like:
```
CREATE TABLE users (
'id' bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT,
'username' varchar(32),
PRIMARY KEY ('id'))
```
I want to make a second table that stores some more information about a user, like their favorite animals. Should that table have its own 'id' column as well as a separate foreign key for the linked user?:
```
CREATE TABLE users_extra_info (
'id' bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT,
'fk_user_id' bigint(20) UNSIGNED NOT NULL,
'fav_mammal' varchar(32),
'fav_reptile' varchar(32),
...
PRIMARY KEY ('id'),
KEY 'fk_user_id' ('fk_user_id'))
```
or do you usually just drop the 'id' column since the foreign key has to be unique?:
```
CREATE TABLE users_extra_info (
'fk_user_id' bigint(20) UNSIGNED NOT NULL,
'fav_mammal' varchar(32),
'fav_reptile' varchar(32),
...
PRIMARY KEY ('fk_user_id'))
```
Thanks | As a matter of principle, independent of any other tables, each table should have a `PRIMARY KEY` in order for you to be able to distinguish one row from another. While it's not always necessary to absolutely identify individual rows, that is often a requirement.
In the event of a true one-to-one relationship (or a one-to-zero-or-one relationship, which also occurs), because the foreign key is necessarily unique it can also be used as the primary key of the subsidiary table. There is absolutely no reason at all to introduce a *second* unique column in that case.
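A minimal runnable sketch of that foreign-key-as-primary-key design, using Python's sqlite3 (table names follow the question; the user and favorite values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE users (
  id INTEGER PRIMARY KEY,
  username TEXT
);
CREATE TABLE users_extra_info (
  fk_user_id INTEGER PRIMARY KEY REFERENCES users(id),
  fav_mammal TEXT,
  fav_reptile TEXT
);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO users_extra_info VALUES (1, 'otter', 'gecko');
""")

# A second info row for the same user violates the primary key,
# which is exactly the one-row-per-user guarantee we want.
try:
    conn.execute("INSERT INTO users_extra_info VALUES (1, 'bat', 'iguana')")
    second_row_allowed = True
except sqlite3.IntegrityError:
    second_row_allowed = False
print(second_row_allowed)   # False
```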
However, one-to-one and one-to-zero-or-one relationships are less common than one-to-many relationships. In that more common case, you cannot use only the foreign key columns as the primary key since they are not unique in the child table. In this case you can choose either to introduce an additional integer key or created a *composite* primary key using the foreign key column(s) and one or more other columns that, together, are guaranteed to be unique. | You have to understand the normalization rules.
Here's the link <http://www.studytonight.com/dbms/database-normalization.php>. | Creating a parent and child table, does child need its own primary id column? | [
"",
"mysql",
"sql",
"database",
"relational-database",
""
] |
I have a table CUSTOMER(ID,ADDRESS) with following values
```
1,ROCKVILLE MDUS
2,JERSEY CITY NJUS
3,NEW YORK CITY NYUS
.
.
.
```
I want to load the above values into a separate table `STG_TXN(CITY,STATE,COUNTRY)`: the last two characters go in `COUNTRY`, the two characters before them in `STATE`, and everything before that in `CITY`.
```
ROCKVILLE,MD,US
JERSEY CITY,NJ,US
NEW YORK CITY,NY,US
.
.
.
INSERT INTO STG_TXN
SELECT (how can I parse the value to be selected as CITY, STATE and COUNTRY here ? )
FROM dbo.CUSTOMER
``` | Try this:
```
INSERT INTO STG_TXN
select SUBSTRING(address,0,LEN(address) - 4),SUBSTRING(address,LEN(address) - 3,2)
,SUBSTRING(address,LEN(address) - 1,2)
from CUSTOMER
```
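A quick way to try the same last-N-characters split is Python's sqlite3; note SQLite's `substr` accepts negative offsets counted from the end, and `rtrim` drops the space left at the end of the city part:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INT, address TEXT);
INSERT INTO customer VALUES
  (1, 'ROCKVILLE MDUS'),
  (2, 'JERSEY CITY NJUS'),
  (3, 'NEW YORK CITY NYUS');
""")

rows = conn.execute("""
SELECT rtrim(substr(address, 1, length(address) - 4)) AS city,
       substr(address, -4, 2)                         AS state,
       substr(address, -2)                            AS country
FROM customer
ORDER BY id
""").fetchall()
for row in rows:
    print(row)
```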
However, this only works if the length of the state and country codes is always 2. | This should work:
```
INSERT INTO STG_TXN
select
LEFT(@teststr,LEN(@teststr)-4) city,
LEFT(RIGHT(@teststr,4),2) state,
RIGHT(@teststr,2) country
from customer
``` | SQL Server : how to parse a column value and insert in multiple columns of separate table? | [
"",
"sql",
"sql-server",
"parsing",
"insert",
""
] |
Say I have `student` table with a column `name`.
This `name` column has values 'studentone', 'studenttwo', 'studentthree'
& I want to replace them with 'student1', 'student2', 'student3'.
For a single replacement it's quite straightforward:
```
update student set name = replace(name, 'one', '1')
```
But what about multiple replacements? Any idea? | I would just use multiple update statements, but if you absolutely must do it in one statement, just nest the replace calls:
```
update student set
name = replace(replace(replace(name, 'one', '1'), 'two', '2'), 'three', '3')
```
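A quick check of the nested replace with Python's sqlite3, where `replace()` behaves the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (name TEXT);
INSERT INTO student VALUES ('studentone'), ('studenttwo'), ('studentthree');
""")

conn.execute("""
UPDATE student SET name =
  replace(replace(replace(name, 'one', '1'), 'two', '2'), 'three', '3')
""")
names = [r[0] for r in conn.execute("SELECT name FROM student ORDER BY name")]
print(names)   # ['student1', 'student2', 'student3']
```

Note the replacements here don't interfere with one another; if one search term were a substring of an earlier replacement's result, the nesting order would start to matter.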
This works because (although inefficient) calls to `replace()` have no effect if the search term is not found. | While you *could* pack all into one statement, it would be inefficient without a matching **`WHERE` condition** to exclude unaffected rows.
```
UPDATE student SET name = replace(name, 'one', '1') WHERE name LIKE '%one';
UPDATE student SET name = replace(name, 'two', '2') WHERE name LIKE '%two';
UPDATE student SET name = replace(name, 'three', '3') WHERE name LIKE '%three';
```
This only finds and updates rows that actually change. Matching the end of the string according to your example.
The most efficient way would be to combine that with something [like @Bohemian suggested](https://stackoverflow.com/a/22727784/939860):
```
UPDATE student
SET name = replace(replace(replace(
name
, 'one' , '1')
, 'two' , '2')
, 'three', '3')
WHERE name ~ '(one|two|three)$';
```
But make sure that one replacement does not influence the next.
Actually, a couple of OR'ed `LIKE` expressions are typically faster than a single [regular expression](https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP), so:
```
WHERE (name LIKE '%one' OR
name LIKE '%two' OR
name LIKE '%three');
```
### `CASE`
Per comment:
```
SET name = CASE WHEN name LIKE '%one' THEN replace(name, 'one', '1')
WHEN name LIKE '%two' THEN replace(name, 'two', '2')
WHEN name LIKE '%three' THEN replace(name, 'three', '3')
END
```
Depending on *exact* circumstances, this might be optimized. For your example:
```
SET name = CASE right(name, -7)
WHEN 'one' THEN 'student' || '1'
WHEN 'two' THEN 'student' || '2'
WHEN 'three' THEN 'student' || '3'
END
```
Requires above `WHERE` clause to work properly. | Use replace() function in an update to change more than one sub string of a column | [
"",
"sql",
"postgresql",
"sql-update",
"pattern-matching",
""
] |
I am creating an event (like a cron job) in MySQL to execute a query every 3 seconds, but it does not work. I tried my best. Here is my code; please see if I am doing anything wrong:
```
CREATE EVENT `croneJob`
ON SCHEDULE EVERY 3 SECOND
STARTS '2014-03-24 13:45:57'
ON COMPLETION PRESERVE ENABLE
DO
insert into ranktable values (1,2,3);
```
It is created successfully but does not execute every 3 seconds. | You have to start your MySQL server's event scheduler:
```
SET GLOBAL event_scheduler = ON;
```
Look at:
<https://dev.mysql.com/doc/refman/5.1/en/events-configuration.html> | As far as I know, all events are executed by a special event scheduler thread. When we refer to the Event Scheduler, we actually refer to this thread. The global `event_scheduler` system variable determines whether the Event Scheduler is enabled and running on the server.
Try setting `event_scheduler` from your command line:
```
SET GLOBAL event_scheduler = ON;
```
OR
```
SET GLOBAL event_scheduler = 1;
```
Read more about [events](https://dev.mysql.com/doc/refman/5.1/en/events-configuration.html).
Hope it will help you. | Creating Event in MySQL does not work | [
"",
"mysql",
"sql",
""
] |
I noticed that I can have NULL values in columns that have the UNIQUE constraint: `UNIQUE(col)`
Would that generate any issues in certain situations? | While the following addresses multiple null values, it does *not* address any "issues" associated with such a design, other than possible database/SQL portability - as such, it should probably *not* be considered an answer, and is left here merely for reference.
---
This is actually covered in the SQLite FAQ. It is a design choice - SQLite (unlike SQL Server) chose that multiple NULL values *do not* count towards uniqueness in an index.
[The SQL standard requires that a UNIQUE constraint be enforced even if one or more of the columns in the constraint are NULL, but SQLite does not do this. Isn't that a bug?](https://sqlite.org/faq.html#q26)
> Perhaps you are referring to the following statement from SQL92:
>
> * A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.
>
> That statement is ambiguous, having at least two possible interpretations:
>
> 1. A unique constraint is satisfied if and only if no two rows in a table have the same values and have non-null values in the unique columns.
> 2. A unique constraint is satisfied if and only if no two rows in a table have the same values in the subset of unique columns that are not null.
>
> SQLite follows interpretation (1), as does PostgreSQL, MySQL, Oracle, and Firebird. It is true that Informix and Microsoft SQL Server use interpretation (2), however we the SQLite developers hold that interpretation (1) is the most natural reading of the requirement and we also want to maximize compatibility with other SQL database engines, and most other database engines also go with (1), so that is what SQLite does.
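The behavior is easy to verify from Python's built-in `sqlite3` (a quick sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col INTEGER UNIQUE)")
# Multiple NULLs are accepted: NULL values do not count toward uniqueness.
conn.execute("INSERT INTO t VALUES (NULL)")
conn.execute("INSERT INTO t VALUES (NULL)")
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 2
# A duplicate non-NULL value is rejected.
conn.execute("INSERT INTO t VALUES (1)")
try:
    conn.execute("INSERT INTO t VALUES (1)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```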
See a [comparison of NULL handling](http://www.sqlite.org/nulls.html). | If you want your unique index to throw an error when a two rows would be the same if you disregard NULL columns (and don't want to use triggers from Satyam's answer) you can do something like this
```
CREATE TABLE `test` (
`Field1` INTEGER,
`Field2` INTEGER
);
CREATE UNIQUE INDEX `ix` ON `test` (
`Field1`,
`Field2`
);
INSERT INTO `test`(`Field1`,`Field2`) VALUES (1,NULL);
INSERT INTO `test`(`Field1`,`Field2`) VALUES (1,NULL); -- This shouldn't be allowed
DROP INDEX IF EXISTS `ix`;
CREATE UNIQUE INDEX `ix2` ON `test` (
`Field1`,
ifnull(`Field2`, 0) --use this instead
); --will fail
``` | Sqlite NULL and unique? | [
"",
"sql",
"sqlite",
"null",
"unique",
"unique-constraint",
""
] |
I am trying to select distinct records to populate a select tag, and I need the ids from the query, but when I write a query like this:
```
Topic.select(:name).distinct
```
It comes back with results that look like this:
```
#<ActiveRecord::Relation [#<Topic id: nil, name: "Ruby">, #<Topic id: nil, name: "Python">, #<Topic id: nil, name: "tingz">, #<Topic id: nil, name: "Javascript">, #<Topic id: nil, name: "Java">]>
```
And in order to get my select drop down to work properly I need the corresponding ids to be returned as well
This is the select tag that I am using in my view:
```
<%= select_tag :topic_search, options_from_collection_for_select(Topic.select(:name).distinct, :id, :name), placeholder: "Select topic" %>
```
Does anyone have any idea of what the best way of doing something like this is? | I think you want:
```
Topic.select('distinct on(name) id, name').order(:name)
``` | For Rails 4.x, use
```
Topic.select(:id, :name).distinct
```
**NOTE:**
To answer your question in comment i.e.,
> Nope that still returns repeat values if the same name was entered
> twice
You cannot do that. Suppose you have Python in two records; then they'll have two different ids (as the primary key). Which id would you map in that case for the distinct Python? | How to I select distinct records with ids from a database in Rails? | [
"",
"sql",
"ruby-on-rails",
"ruby-on-rails-4",
"distinct",
""
] |
I am thinking that this statement will and should pick the distinct values from my table and also include a count of them; at least, that is what I would like, in a separate column. Is it possible to count the number of distinct values in SQL? From what I am gathering, probably not.
**Question: Can I count the number of distinct values of last names?**
SQL Statement:
```
select distinct c.LastName as 'Distinct Last Names', COUNT(*)
from Customer as c;
```
## Error:
```
Msg 8120, Level 16, State 1, Line 1
Column 'Customer.LastName' is invalid in the select list because it is not contained
in either an aggregate function or the GROUP BY clause.
```
## Output -- Without the count
## 'Distinct Last Names'
LTest1
LTest2
LTest3 | If I understand your question correctly, you want the result in two columns.
If so, you can use **[Group By](http://www.w3schools.com/sql/sql_groupby.asp)**.
```
SELECT c.LastName as 'Distinct Last Names', COUNT(*)
FROM Customer as c
GROUP BY c.LastName
``` | Since you do not have a `GROUP BY` clause in the query, all your columns in the `SELECT` should use some kind of aggregation, or else SQL will not be happy.
Try this:
```
select COUNT(distinct c.LastName) 'Distinct Last Names'
from Customer as c;
``` | Is it possible in SQL to count the number of distinct values as well? | [
"",
"sql",
""
] |
I have a table `author_data`:
```
author_id | author_name
----------+----------------
9 | ernest jordan
14 | k moribe
15 | ernest jordan
25 | william h nailon
79 | howard jason
36 | k moribe
```
Now I need the result as:
```
author_id | author_name
----------+----------------
9 | ernest jordan
15 | ernest jordan
14 | k moribe
36 | k moribe
```
That is, I need the `author_id` for the names having duplicate appearances. I have tried this statement:
```
select author_id,count(author_name)
from author_data
group by author_name
having count(author_name)>1
```
But it's not working. How can I get this? | I suggest a [window function](https://www.postgresql.org/docs/current/functions-window.html) in a subquery:
```
SELECT author_id, author_name -- omit the name here if you just need ids
FROM (
SELECT author_id, author_name
, count(*) OVER (PARTITION BY author_name) AS ct
FROM author_data
) sub
WHERE ct > 1;
```
You will recognize the basic aggregate function `count()`. It can be turned into a window function by appending an `OVER` clause - just like any other aggregate function.
This way it counts rows *per partition*. Voilà.
It has to be done in a subquery because the result cannot be referenced in the `WHERE` clause in the same `SELECT` (happens *after* `WHERE`). See:
* [Best way to get result count before LIMIT was applied](https://stackoverflow.com/questions/156114/best-way-to-get-result-count-before-limit-was-applied/8242764#8242764)
In older versions without window functions (v.8.3 or older) - or generally - this alternative performs pretty fast:
```
SELECT author_id, author_name -- omit name, if you just need ids
FROM author_data a
WHERE EXISTS (
SELECT FROM author_data a2
WHERE a2.author_name = a.author_name
AND a2.author_id <> a.author_id
);
```
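A runnable sketch of the `EXISTS` variant against the sample data, using Python's built-in `sqlite3` (SQLite needs an explicit select list in the subquery, hence `SELECT 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE author_data (author_id INTEGER, author_name TEXT)")
conn.executemany("INSERT INTO author_data VALUES (?, ?)",
                 [(9, "ernest jordan"), (14, "k moribe"), (15, "ernest jordan"),
                  (25, "william h nailon"), (79, "howard jason"), (36, "k moribe")])
# Keep a row only if another row shares its name but has a different id.
rows = conn.execute("""
    SELECT author_id, author_name
    FROM author_data a
    WHERE EXISTS (
        SELECT 1 FROM author_data a2
        WHERE a2.author_name = a.author_name
          AND a2.author_id <> a.author_id
    )
    ORDER BY author_name, author_id
""").fetchall()
print(rows)
# [(9, 'ernest jordan'), (15, 'ernest jordan'), (14, 'k moribe'), (36, 'k moribe')]
```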
If you are concerned with performance, add an index on `author_name`. | You are half way there already. You need to just use the identified `Author_IDs` and fetch the rest of the data.
Try this:
```
SELECT author_id, author_name
FROM author_data
WHERE author_id in (select author_id
from author_data
group by author_name
having count(author_name)>1)
``` | Find rows with duplicate values in a column | [
"",
"sql",
"postgresql",
"duplicates",
"aggregate-functions",
"window-functions",
""
] |
Eg, take this **ACCOUNT** table:
```
|num | type |amount|
|----------------------
| 1 | cheque | 300 |
| 2 | cheque | 300 |
| 2 | savings | 300 |
```
Would **[type, amount]** be a primary key even though **[cheque, 300]** appears twice?
I'm confused because even though cheque 300 is duplicated, the **num** column is different for the two duplicates, which means it still is part of a unique row. | No, [type, amount] cannot be a primary key, because it is not unique.
There are two ways to work with primary keys in a database:
1. Work with technical IDs. You simply add an ID column to your tables and only link by these.
2. Work with natural keys. That would be the ISBN for a table of books for instance or the employee number in an employees table. Here you ask the question: "What makes a record unique in this table?". In your table this might be [num, type]. | If `num` is the primary key, then it cannot contain duplicates or NULL values. "Primary key" implies no duplication.
It's conceivable that you could have a composite primary key consisting of `(id, type)`. That might make sense in an application where a particular `id` could have both a checking and a savings account associated with it.
Then an attempt to insert another row with values `2 | savings` would fail because of the primary key constraint. | Can primary keys themselves contain duplicates but have unique rows? | [
"",
"mysql",
"sql",
""
] |
```
update accounts set password=(select password from accounts where name='joongsu')
where id=(select accountid from characters where name='Nobless')
```
It doesn't work; it fails with the error message "You can't specify target table 'accounts' for update in FROM clause".
Why doesn't it work? The select queries above only return 1 row. | Perhaps you should try [this one](https://stackoverflow.com/a/14302701/3416826):
```
UPDATE accounts
SET accounts.password =
(
SELECT something.password
FROM (SELECT * FROM accounts) AS something
WHERE something.name='joongsu'
)
WHERE accounts.id=(SELECT accountid FROM characters WHERE name='Nobless');
```
It's a hack, but I tested it and it works on my test data. For some reason MySQL doesn't allow using the same table in inner queries as the one being updated. | ```
UPDATE
accounts AS account_to_be_updated
JOIN
characters
ON characters.accountid = account_to_be_updated.id
AND characters.name = 'Nobless'
CROSS JOIN
( SELECT password
FROM accounts
WHERE name = 'joongsu'
) AS existing_account
SET
account_to_be_updated.password = existing_account.password ;
``` | How to use data in same table on UPDATE query? | [
"",
"mysql",
"sql",
""
] |
For the following data in a table, I want to get the highest balance for each person and then sort the result in descending order of that highest balance:
```
|---------id--------|--------name--------|--------balance--------
1 | Tom | 10
2 | Bob | 20
3 | Ace | 60
4 | Tom | 45
5 | Ace | 35
6 | Tom | 75
7 | Dan | 15
8 | Dan | 95
9 | Tom | 40
10 | Dan | 30
11 | Bob | 65
12 | Dan | 55
13 | Bob | 50
```
the expected result is (For descending order):
```
|--------id---------|--------name--------|--------balance--------
8 | Dan | 95
6 | Tom | 75
11 | Bob | 65
3 | Ace | 60
```
What would be the mySQL query for this? | maybe like this:
```
select pid, name, max(balance) as highest_balance from my_table
group by pid, name
order by max(balance) desc;
``` | You can use `MAX()` and order by
```
SELECT id,name,MAX(balance) AS balance
FROM table
GROUP BY name
ORDER BY balance DESC
```
## [Fiddle](http://sqlfiddle.com/#!2/198f00/4) | How to use the highest values for a certain criteria to sort the list? | [
"",
"mysql",
"sql",
""
] |
I am not sure, but I think I have an error in this query:
```
UPDATE product_discount SET price = '10,15' WHERE key_id = '1,2'
```
I can't figure out why, but it only updates number 10 in the column and skips the 15. | You could try the method suggested in [this answer](https://stackoverflow.com/a/3466/2359271):
```
INSERT INTO product_discount (key_id, price)
VALUES (1, 10), (2, 15)
ON DUPLICATE KEY UPDATE price=VALUES(price);
```
Be sure to read the linked answer for some caveats, though; for example, this only works if:
* There is a unique key constraint on `key_id` (e.g., it is the primary key)
* You know that the rows with these ids already exist, or you don't mind inserting new rows if you provide an id that doesn't exist in the table
---
Despite these limitations on its use, the significant advantage of this method over `CASE` statements is much better readability and parameterization. In Python, for example, here's how you would execute the query given in @Kaf's answer using the `MySQLdb` module:
```
query = """UPDATE product_discount SET price = CASE
WHEN key_id = %s THEN %s ELSE %s
END
WHERE ID IN (%s, %s);"""
params = (1, 10, 15, 1, 2)
cursor.execute(query, params)
```
How long is it going to take someone to figure out from that `params` tuple which values you want updated, and what their values should be? And what if you want to update more than two rows? Not only do you need to rewrite `query` for every use case of N rows, the `params` tuple becomes indecipherable garbage if N is anything more than a handful. You could write some helper functions to format both `query` and `params` according to the number of updates you need to do, but how long will that take? How easy will it be to understand? How many opportunities for bugs will you introduce in all these extra lines of code?
On the other hand, here's how you would do it using the `INSERT ... ON DUPLICATE KEY UPDATE` method:
```
query = """INSERT INTO product_discount (key_id, price) VALUES (%s, %s)
ON DUPLICATE KEY UPDATE price=VALUES(price);"""
params = [(1, 10),
(2, 15),
]
cursor.executemany(query, params)
```
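For readers without MySQL at hand, the same params shape can be exercised with Python's built-in `sqlite3` (a sketch; SQLite spells the upsert `ON CONFLICT ... DO UPDATE` and uses `?` placeholders, and needs SQLite 3.24+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_discount (key_id INTEGER PRIMARY KEY, price INTEGER)")
conn.executemany("INSERT INTO product_discount VALUES (?, ?)", [(1, 99), (2, 99)])

query = """INSERT INTO product_discount (key_id, price) VALUES (?, ?)
           ON CONFLICT(key_id) DO UPDATE SET price = excluded.price"""
params = [(1, 10),
          (2, 15),
          ]
conn.executemany(query, params)
print(conn.execute("SELECT key_id, price FROM product_discount ORDER BY key_id").fetchall())
# [(1, 10), (2, 15)]
```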
The key-value relationship is clear and highly readable. If more rows need to be updated, more key-value tuples can be added to the params list. It's not a perfect solution for every scenario, but it's a much better solution than `CASE` for particular (and I would argue, *very* common) scenarios. | I think this is what you need (Assuming `Price` should be 10 when `key_id = 1`):
```
UPDATE product_discount SET price = CASE WHEN key_id = 1 THEN 10 ELSE 15 END
WHERE key_id IN (1,2)
``` | MySQL Table won't update multiple rows | [
"",
"mysql",
"sql",
"sql-update",
""
] |
I have a column `eventDate` which contains trailing spaces. I am trying to remove them with the PostgreSQL function `TRIM()`. More specifically, I am running:
```
SELECT TRIM(both ' ' from eventDate)
FROM EventDates;
```
However, the trailing spaces don't go away. Furthermore, when I try and trim another character from the date (such as a number), it doesn't trim either. If I'm reading [the manual](http://www.postgresql.org/docs/current/static/functions-string.html) correctly this should work. Any thoughts? | There are many different invisible characters. Many of them have the property `WSpace=Y` ("whitespace") in Unicode. But some special characters are not considered "whitespace" and still have no visible representation. The excellent Wikipedia articles about [space (punctuation)](https://en.wikipedia.org/wiki/Space_%28punctuation%29#Space_characters_and_digital_typography) and [whitespace characters](https://en.wikipedia.org/wiki/Whitespace_character) should give you an idea.
*<rant>Unicode sucks in this regard: introducing lots of exotic characters that mainly serve to confuse people.</rant>*
[The standard SQL `trim()` function](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-SQL) by default only trims the basic Latin space character (Unicode: U+0020 / ASCII 32). Same with the [`rtrim()` and `ltrim()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) variants. Your call also only targets that particular character.
Use regular expressions with [`regexp_replace()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER) instead.
### Trailing
To remove **all trailing *white space*** (but not white space *inside* the string):
```
SELECT regexp_replace(eventdate, '\s+$', '') FROM eventdates;
```
The regular expression explained:
`\s` ... regular expression [class shorthand for `[[:space:]]`](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-CLASS-SHORTHAND-ESCAPES-TABLE)
*- which is the set of white-space characters - see limitations below*
`+` ... 1 or more consecutive matches
`$` ... end of string
Demo:
```
SELECT regexp_replace('inner white ', '\s+$', '') || '|'
```
Returns:
```
inner white|
```
Yes, that's a *single* backslash (`\`). Details in this related answer:
* [SQL select where column begins with \](https://stackoverflow.com/questions/9141521/sql-select-where-column-begins-with/9141630#9141630)
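The same pattern behaves identically in other POSIX-style regex engines; a quick sketch with Python's `re` (in Python 3, `\s` also matches non-breaking space and other Unicode whitespace):

```python
import re

# Trailing run of space, tab, space, and a non-breaking space (U+00A0).
s = "inner white \t \u00a0"
print(re.sub(r"\s+$", "", s) + "|")  # inner white|
```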
### Leading
To remove ***all leading white space*** (but not white space inside the string):
```
regexp_replace(eventdate, '^\s+', '')
```
`^` .. start of string
### Both
To remove ***both***, you can chain above function calls:
```
regexp_replace(regexp_replace(eventdate, '^\s+', ''), '\s+$', '')
```
Or you can combine both in a single call with two [*branches*](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-SYNTAX-DETAILS).
Add `'g'` as 4th parameter to replace all matches, not just the first:
```
regexp_replace(eventdate, '^\s+|\s+$', '', 'g')
```
But that should typically be faster with [`substring()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-OTHER):
```
substring(eventdate, '\S(?:.*\S)*')
```
`\S` ... everything *but* white space
`(?:`*`re`*`)` ... [non-capturing set of parentheses](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-ATOMS-TABLE)
`.*` ... any string of 0-n characters
Or one of these:
```
substring(eventdate, '^\s*(.*\S)')
substring(eventdate, '(\S.*\S)') -- only works for 2+ printing characters
```
`(`*`re`*`)` ... [Capturing set of parentheses](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-ATOMS-TABLE)
Effectively takes the first non-whitespace character and everything up to the last non-whitespace character if available.
## Whitespace?
There are a few more [related characters which are not classified as "whitespace" in Unicode](https://en.wikipedia.org/wiki/Whitespace_character) - so not contained in the character class `[[:space:]]`.
These print as invisible glyphs in pgAdmin for me: "mongolian vowel", "zero width space", "zero width non-joiner", "zero width joiner":
```
SELECT E'\u180e', E'\u200B', E'\u200C', E'\u200D';
'' | '' | '' | ''
```
Two more, printing as *visible* glyphs in pgAdmin, but invisible in my browser: "word joiner", "zero width non-breaking space":
```
SELECT E'\u2060', E'\uFEFF';
'' | ''
```
Ultimately, whether characters are rendered invisible or not also depends on the font used for display.
To remove ***all of these*** as well, replace `'\s'` with `'[\s\u180e\u200B\u200C\u200D\u2060\uFEFF]'` or `'[\s]'` (note trailing invisible characters!).
Example, instead of:
```
regexp_replace(eventdate, '\s+$', '')
```
use:
```
regexp_replace(eventdate, '[\s\u180e\u200B\u200C\u200D\u2060\uFEFF]+$', '')
```
or:
```
regexp_replace(eventdate, '[\s]+$', '') -- note invisible characters
```
### Limitations
There is also the [Posix character class `[[:graph:]]`](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-BRACKET-EXPRESSIONS) supposed to represent "visible characters". Example:
```
substring(eventdate, '([[:graph:]].*[[:graph:]])')
```
It works reliably for ASCII characters in every setup (where it boils down to `[\x21-\x7E]`), but beyond that you currently (incl. pg 10) depend on information provided by the underlying OS (to define `ctype`) and possibly locale settings.
Strictly speaking, that's the case for *every* reference to a character class, but there seems to be more disagreement with the less commonly used ones like *graph*. But you may have to add more characters to the character class `[[:space:]]` (shorthand `\s`) to catch all whitespace characters. [Like: `\u2007`, `\u202f` and `\u00a0` seem to also be missing for @XiCoN JFS](https://stackoverflow.com/questions/22699535/trim-trailing-spaces-with-postgresql/22701212?noredirect=1#comment85226164_22701212).
[The manual:](https://www.postgresql.org/docs/current/functions-matching.html#POSIX-BRACKET-EXPRESSIONS)
> Within a bracket expression, the name of a character class enclosed in
> `[:` and `:]` stands for the list of all characters belonging to that
> class. Standard character class names are: `alnum`, `alpha`, `blank`, `cntrl`,
> `digit`, `graph`, `lower`, `print`, `punct`, `space`, `upper`, `xdigit`.
> These stand for the **character classes defined in ctype.
> A locale can provide others.**
Bold emphasis mine.
Also note this limitation that was [fixed with Postgres 10](https://www.postgresql.org/docs/10/release-10.html#id-1.11.6.21.5.5):
> Fix regular expressions' character class handling for large character
> codes, particularly Unicode characters above `U+7FF` (Tom Lane)
>
> Previously, such characters were never recognized as belonging to
> locale-dependent character classes such as `[[:alpha:]]`. | It should work the way you're handling it, but it's hard to say without knowing the specific string.
If you're only trimming trailing spaces, you might want to use the more concise form:
```
SELECT RTRIM(eventDate)
FROM EventDates;
```
This is a [little test](http://sqlfiddle.com/#!15/c2f73/387) to show you that it works.
Tell us if it works out! | Trim trailing spaces with PostgreSQL | [
"",
"sql",
"postgresql",
"whitespace",
"trim",
"removing-whitespace",
""
] |
I have a collection in MongoDB with such data:
```
sid; geo
--------
4; 100
4; 101
2; 100
2; 106
2; 107
1; 100
1; 100
1; 100
1; 110
1; 100
```
How can I do such an SQL query in MongoDB?
```
SELECT sid COUNT(DISTINCT geo) FROM sites GROUP BY sid
```
I want to have a collection with count of unique geo for every sid:
```
sid; geo_count
--------------
4; 2
2; 3
1; 2
``` | Distinct is a type of grouping indeed but the output equivalent of that type of query is this:
```
db.collection.aggregate([
{ "$group": {
"_id": { "sid": "$sid", "geo": "$geo" }
}},
{ "$group": {
"_id": "$_id.sid",
"geo_count": { "$sum": 1 }
}}
])
```
So you need to first group to get the "distinct" results of "geo" within "sid". Then you count the number of distinct results. | Since DISTINCT is actually a type of group you can do:
```
db.c.aggregate([
{$group: {_id: '$geo', sid: '$sid', 'geo_sum': {$sum: 1}}},
{$group: {_id: '$sid', 'geo_count': {$sum: 1}}}
])
``` | How to do distinct and group in MongoDB Aggregation Framework? | [
"",
"sql",
"mongodb",
""
] |
```
$query = "SELECT
table1.first_name,
table1.id,
table1.profile_image
FROM table1,table2
WHERE table1.id = (
CASE
WHEN '$id' = table2.id1
THEN
table2.id2
WHEN '$id' = table2.id2
THEN
table2.id1
)
AND table2.case='done'";
```
This query is failing and I can't figure out why... What I want is to obtain a join on ids between table1 and table2. The thing is **table2 has two id fields and I don't know if the Id I want is in field *id1* or *id2***. Then join the other id (***NOT $id***, but the partner from **$id** in table2) to the id in table1... | You know the right syntax for a case construction is:
```
CASE
WHEN ...
THEN ....
WHEN ...
THEN ....
ELSE ...
END
```
Notice the `END` at the .... end ;) | ```
Select first_name, id, profileImage From Table1
inner join (
Select id1 as t2id From Table2 where `case` = 'done'
union
Select id2 From Table2 Where `case` = 'done'
) t2Ids On Table1.id = t2Ids.t2id
Where t2Ids.t2Id = $id
```
Would be another way to do it. | Can't figure error in a mysql query | [
"",
"mysql",
"sql",
""
] |
I'm trying to find a way to combine two columns into one, but keep getting the value '0' in the column instead of the combination of the words.
These are what I've tried as well as others:
```
SELECT column1 + column2 AS column3
FROM table;
SELECT column1 || column2 AS column3
FROM table;
SELECT column1 + ' ' + column2 AS column3
FROM table;
```
Could someone please let me know what I'm doing wrong? | My guess is that you are using MySQL where the `+` operator does addition, along with silent conversion of the values to numbers. If a value does not start with a digit, then the converted value is `0`.
So try this:
```
select concat(column1, column2)
```
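The silent conversion is easy to reproduce (a sketch using Python's built-in `sqlite3`, which coerces non-numeric strings the same way for `+` and spells concatenation `||`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# '+' coerces non-numeric strings to 0, so the "sum" of two words is 0.
print(conn.execute("SELECT 'foo' + 'bar'").fetchone()[0])  # 0
# String concatenation needs a dedicated operator or function.
print(conn.execute("SELECT 'foo' || ' ' || 'bar'").fetchone()[0])  # foo bar
```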
Two ways to add a space:
```
select concat(column1, ' ', column2)
select concat_ws(' ', column1, column2)
``` | Try this, it works for me
```
select (column1 || ' '|| column2) from table;
``` | MySQL combine two columns into one column | [
"",
"mysql",
"sql",
""
] |
I need a table where CHECK will check if values are ordered in the same way (are monotonic). For example:
```
CREATE TABLE test
(
ID int identity primary key,
ID_foreignkey int references something,
order int,
value int
)
```
I need the effect that, for the same `ID_foreignkey`, order grows and value also grows; for example:
```
(1,1,4), (1,2,5), (1,3,9), (1,4,12), (1,5,13), (1,6,18)
```
Is it possible to do this easily using a constraint, or must a procedure be used? Maybe check whether the inserted value is bigger than `select max(value) from test`? | If we can force `order` to start at 1 and to have no gaps, then we can achieve *most* of the requirements declaratively:
```
CREATE TABLE test
(
ID int not null identity,
ID_foreignkey int not null,
[order] int not null,
value int not null,
prev_order as CASE WHEN [order] > 1 THEN [order]-1 END persisted,
prev_value int null,
constraint PK_test PRIMARY KEY (ID),
constraint UQ_test_backref UNIQUE (ID_foreignkey,[order]),
constraint CK_test_orders CHECK ([order] >= 1),
constraint CK_test_prevpopulated CHECK ([order]=1 or prev_value is not null),
constraint FK_test_backref FOREIGN KEY (ID_foreignkey,prev_order)
references test (ID_foreignkey,[order]),
--Finally, we can actually write the constraint you wanted
constraint CK_test_increase_only CHECK (value > prev_value)
)
```
Unfortunately, we do need to add a trigger so that `prev_value` is correctly set during `INSERT`s1:
```
create trigger T_test on test
instead of insert
as
set nocount on;
insert into test (ID_foreignkey,[order],value,prev_value)
select i.ID_foreignkey,i.[order],i.value,COALESCE(p.value,i2.value)
from inserted i
left join
test p
on
i.ID_foreignkey = p.ID_foreignkey and
i.[order] = p.[order]+1
left join
inserted i2
on
i.ID_foreignkey = i2.ID_foreignkey and
i.[order] = i2.[order]+1
```
And now we can do some sample inserts:
```
insert into test (ID_foreignkey,[order],value) values
(1,1,4), (1,2,5), (1,3,9)
go
insert into test (ID_foreignkey,[order],value) values
(1,4,12), (1,5,13)
```
And now one that fails:
```
insert into test (ID_foreignkey,[order],value) values
(1,6,12)
```
> Msg 547, Level 16, State 0, Procedure T\_test, Line 4
>
> The INSERT statement conflicted with the CHECK constraint "CK\_test\_increase\_only". The conflict occurred in database "X", table "dbo.test".
>
> The statement has been terminated.
One other thing worth mentioning - if you want to allow `value` to be adjusted later, you don't need to do a lot to allow that and for the constraint to still be enforced - you just have to add `ON UPDATE CASCADE` to `FK_test_backref`.
---
1But note that the declarative constraints do force *when* the data should be populated and that it must contain the correct data. So there's no danger of getting the trigger wrong, say, and ending up with the wrong data in the *table*. | You can create a function like this that returns the next expected value:
```
create FUNCTION [dbo].[test](@id integer, @typeKey integer)
RETURNS int
AS
BEGIN
DECLARE @retval int
if @typeKey=1
begin
SELECT @retval = max([order]) + 1 FROM table where ID_foreignkey = @id
end
else
begin
/*next value i don't understand the logic*/
SELECT @retval = max([value]) + 1 FROM table where ID_foreignkey = @id
end
RETURN @retval
END
```
then in check constraint
```
dbo.test(ID_foreignkey, 1) = order
```
and
```
dbo.test(ID_foreignkey, 2) = value
``` | SQL Server: values in two columns must be monotonic | [
"",
"sql",
"sql-server",
"constraints",
""
] |
I have a MySQL table like this:

And I want to convert it from long format to wide format like this

Sorry. I'm new and don't know how to post a table | Try this:
```
insert into goodTable
select
bt.id,
(select bt1.value from badTable bt1 where bt1.info = 'firstname' and bt1.id = bt.id),
(select bt1.value from badTable bt1 where bt1.info = 'lastname' and bt1.id = bt.id),
(select bt1.value from badTable bt1 where bt1.info = 'phone' and bt1.id = bt.id)
from
badTable bt
group by id ;
```
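A runnable sketch of the same correlated-subquery reshaping, using Python's built-in `sqlite3` with made-up column names (`id`, `info`, `value`) standing in for the screenshots:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE badTable (id INTEGER, info TEXT, value TEXT)")
conn.executemany("INSERT INTO badTable VALUES (?, ?, ?)",
                 [(1, "firstname", "Ann"), (1, "lastname", "Lee"), (1, "phone", "555")])
# One output row per id; each wide column is filled by a correlated subquery.
rows = conn.execute("""
    SELECT bt.id,
           (SELECT value FROM badTable b1 WHERE b1.info = 'firstname' AND b1.id = bt.id),
           (SELECT value FROM badTable b2 WHERE b2.info = 'lastname'  AND b2.id = bt.id),
           (SELECT value FROM badTable b3 WHERE b3.info = 'phone'     AND b3.id = bt.id)
    FROM badTable bt
    GROUP BY bt.id
""").fetchall()
print(rows)  # [(1, 'Ann', 'Lee', '555')]
```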
Working fiddle here: <http://sqlfiddle.com/#!2/45f29e/2> | You need something like this:
```
select id,
max(case when info='firstname' then value else null end),
max(case when info='lastname' then value else null end),
...
from table
group by id;
``` | Mysql convert table from long format to wide format | [
"",
"mysql",
"sql",
"pivot",
"database-table",
""
] |
I just saw the weirdest thing ever in some very old SQL code we have.
There is a multiply equals operator in a where clause. Does it have a special meaning? It selects the appropriate columns, but `o.name` is `NULL`.
I am sure it is a typo, but I just want to confirm.
```
select c.name,
c.status,
o.name
from syscolumns c,
sysobjects o
where c.id = object_id('dbo.MyTable')
and c.cdefault *= o.id
order by colid asc
``` | This question was already answered here: [SQL Server \*= Operator?](https://stackoverflow.com/questions/983862/sql-server-operator)
Beyond being the oldschool way, as fancyPants mentioned, it also may not always work consistently. You should revise your SQL statement to include the JOINS in the FROM clause.
To answer your question: `*=` is a LEFT OUTER JOIN, while `=*` is a RIGHT OUTER JOIN | As mentioned, this is a `LEFT OUTER JOIN`. Similarly, =\* is a `RIGHT OUTER JOIN`
That said, it shouldn't be used anymore, and you should use the full syntax. | SQL Multiply Equals (*=) in a WHERE clause | [
"",
"sql",
""
] |
In older versions of Toad, I was easily able to format a query in Toad's query editor via
```
right click > Format Code.
```
But in the newer version, Toad 7, I can't find this Format option. Can anyone tell me where I can format my query?
(**how to FORMAT QUERY in TOAD 7?**) | **TOAD FOR ORACLE**
`SHIFT`+`CTRL`+F should help you
To format a statement Select the statement you want to select and click `»` on the Edit toolbar.
To format an entire script Click `»` on the Edit toolbar.
**Tip**: You can also right-click the script and select Formatting Tools |
Format Code.
See the [documentation](https://support.software.dell.com/download/downloads?id=3382171)
**TOAD FOR MYSQL**
Format SQL
Use the formatter to modify the layout of SQL in the Editor, including inserting headers, adding or removing extra lines, and changing the case for keywords.
To format SQL
Click `Paint shaped box` on the Editor toolbar to format the contents of the Editor window. You can also format a partial statement in the Editor by selecting it before applying [formatting](http://dev.toadformysql.com/webhelp/ToadforMySQL.htm#Editor/formatting_SQL.htm).
See [documentation](http://dev.toadformysql.com/webhelp/ToadforMySQL.htm) | Just to add, in Toad 5 for SQLServer, the button to format code is "2 yellow triangles pointing right". In my fresh installation, this button is on the toolbar, beside the dropdown box to select the db environment.
If you want to assign a shortcut, find it at Tools | Options | Environment | Keyboard | Editor | Tools | FormatCode.
Hope it helps, it took very long to find these buttons. Wish they'd make the controls the same as Toad for Oracle.. | Format sql query in TOAD 7. | [
"",
"sql",
"format",
"toad",
""
] |
I am trying to find the most used query in an Oracle database. I'm a little confused! Are these types of statistics stored in the AWR table? If so, how would I use this table to find the most popular query executed on the database? Thanks! | try this:
```
SELECT ADDRESS, SQL_TEXT, PARSE_CALLS, EXECUTIONS
FROM V$SQLAREA
ORDER BY EXECUTIONS desc;
``` | That might be fast on many systems, but one system I was on last week took > 3 minutes to execute a query like this. To make it more efficient, it's cheaper to select from v$sqlstats than v$sqlarea. Also, since you are doing an order by, you might want to limit the rows sorted with something like "executions > X".
select * from (
SELECT SQL_ID, SQL_TEXT, PARSE_CALLS, EXECUTIONS
FROM v$sqlstats
where executions > 10
ORDER BY EXECUTIONS desc
) where rownum < 10; | Find most frequently used query | [
"",
"sql",
"database",
"oracle",
"statistics",
""
] |
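Oracle's V$ views can't be reproduced outside Oracle, but the top-N pattern both answers above rely on (filter cheap rows, order by execution count descending, cap the result) can be sketched against a stand-in table in SQLite via Python. The table and data here are purely hypothetical:

```python
import sqlite3

# Stand-in for v$sqlstats: each row is a statement with an execution count.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sqlstats (sql_id TEXT, sql_text TEXT, executions INTEGER)")
conn.executemany(
    "INSERT INTO sqlstats VALUES (?, ?, ?)",
    [("a1", "SELECT 1", 500), ("b2", "SELECT 2", 5), ("c3", "SELECT 3", 120)],
)

# Filter rarely-run rows first, then sort and cap -- the equivalent of the
# "executions > X ... rownum < 10" trick (SQLite uses LIMIT, not ROWNUM).
top = conn.execute(
    "SELECT sql_id, executions FROM sqlstats "
    "WHERE executions > 10 ORDER BY executions DESC LIMIT 10"
).fetchall()
print(top)  # most-executed statements first
```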
The following query is giving an error at
'CASE WHEN NOT LIKE' (Msg 156, Level 15, State 1, Line 14
Incorrect syntax near the keyword 'LIKE')
```
SELECT
TimeStamp,
TimeStampNS,
ISNULL(TagName, 'NoTag') AS TagName,
Message,
ConditionName,
ISNULL(TagName, 'NoTag') + '.' + ConditionName AS Source,
CENTUMMsgId,
StationName,
ISNULL(AlarmLevel,5) AS AlarmLevel,
ActiveTime,
ActorID,
ISNULL(AlarmOff,0) AS AlarmOff,
CASE WHEN NOT LIKE '% ACK%' AND CENTUMMsgId IN (4353, 4355, 4609) THEN 1 ELSE 0 END AS RaiseAll,
CASE WHEN LIKE '% ACK%' THEN 1 ELSE 0 END AS Acknowledge,
CASE WHEN CENTUMMsgId IN (4354, 4356, 4358, 4614) AND Message NOT LIKE '% ACK%' THEN 1 ELSE 0 END AS Normal,
PlantHierarchy,
'EXAOPC' AS SourceType
FROM QHistorianData.dbo.vEXAOPCCSProcessAlarm
```
Do you guys know what's going on... I think I have everything correct there.
Thank you in advance for any help. | I think you missed the attribute name here:
```
CASE WHEN <ATTRIBUTE-NAME> NOT LIKE '% ACK%' AND CENTUMMsgId IN (4353, 4355, 4609) THEN 1 ELSE 0 END AS RaiseAll,
CASE WHEN <ATTRIBUTE-NAME> LIKE '% ACK%' THEN 1 ELSE 0 END AS Acknowledge,
CASE WHEN CENTUMMsgId ...
``` | ```
CASE WHEN **[MissingExpression]** NOT LIKE '% ACK%' AND CENTUMMsgId IN (4353, 4355, 4609) THEN 1 ELSE 0 END AS RaiseAll,
CASE WHEN **[MissingExpression]** LIKE '% ACK%' THEN 1 ELSE 0 END AS Acknowledge,
CASE WHEN CENTUMMsgId IN (4354, 4356, 4358, 4614) AND Message NOT LIKE '% ACK%' THEN 1
ELSE 0 END AS Normal,
PlantHierarchy,
```
You forgot to specify the column you want to compare with LIKE. | SQL Query - Incorrect syntax near the keyword 'LIKE' | [
"",
"sql",
"sql-server-2008",
""
] |
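A minimal runnable sketch of the corrected expression from the answers above (the column name must come before `LIKE`/`NOT LIKE`), using a hypothetical table in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alarms (message TEXT)")
conn.executemany("INSERT INTO alarms VALUES (?)",
                 [("PUMP ACK",), ("PUMP HIGH",)])

# CASE WHEN <column> LIKE ... -- naming the column fixes the syntax error.
rows = conn.execute(
    "SELECT message, "
    "CASE WHEN message LIKE '% ACK%' THEN 1 ELSE 0 END AS acknowledged "
    "FROM alarms ORDER BY message"
).fetchall()
print(rows)
```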
Looking at the tables below, how do I write a query to get the `Table A` rows where `status = 'InActive'` that have no matching data in `Table B`?
Example : `4 Comm4 InActive`
```
Table A
AID Name Status
-- --- --
1 comm1 Active
2 comm2 Active
3 Comm3 InActive
4 Comm4 InActive
5 Comm5 InActive
Table B
BID Name AID
--- ---- ---
11 James 1
12 Kris 2
13 Dan 3
14 Steve 3
15 Brian 5
``` | It's quite simple
```
select * from tableA
where status = 'InActive'
and not exists (select * from tableB where tableA.AID = tableB.AID)
``` | ```
select tableA.*
from tableA
left join tableB
on tableA.AID = tableB.AID
and tableA.status = 'InActive'
where tableB.AID is null
```
The not exists from Szymon is correct and may be more efficient | Finding data from Table A and Table B relation | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
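Both forms from the answers above — `NOT EXISTS` and `LEFT JOIN ... IS NULL` — can be checked against the question's sample data. This SQLite sketch (via Python) shows they return the same row; note it places the status filter in the `WHERE` clause for both:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (AID INTEGER, Name TEXT, Status TEXT);
CREATE TABLE tableB (BID INTEGER, Name TEXT, AID INTEGER);
INSERT INTO tableA VALUES (3,'Comm3','InActive'),(4,'Comm4','InActive'),(5,'Comm5','InActive');
INSERT INTO tableB VALUES (13,'Dan',3),(15,'Brian',5);
""")

# Anti-join via NOT EXISTS.
not_exists = conn.execute(
    "SELECT AID FROM tableA WHERE Status='InActive' "
    "AND NOT EXISTS (SELECT 1 FROM tableB WHERE tableA.AID = tableB.AID)"
).fetchall()

# Anti-join via LEFT JOIN ... IS NULL.
left_join = conn.execute(
    "SELECT tableA.AID FROM tableA LEFT JOIN tableB ON tableA.AID = tableB.AID "
    "WHERE tableA.Status='InActive' AND tableB.AID IS NULL"
).fetchall()
print(not_exists, left_join)  # only Comm4 (AID 4) has no row in tableB
```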
Consider the DB :
Account table

With the records :

Clients table

with the records :

I want to get all the clients' names & id , that are in Overdraft organized by client ID , so :
```
SELECT c.Client_id_number , a.Account_number , c.firstName_client , c.lastName_Client , a.Account_balance
FROM Clients AS c
INNER JOIN Accounts AS a
ON a.Client_id_number = c.Client_id_number
WHERE a.Account_balance < 0
GROUP BY c.Client_id_number;
```
But this returned

Meaning the client `1234` with account `84`, and balance `-4000`, doesn't appear ... where did I go wrong? | **UPDATE:** Firstly, I should make clear that you don't need to use the "group by" because you are not summarizing, so your query should be:
```
SELECT c.Client_id_number , a.Account_number , c.firstName_client , c.lastName_Client , a.Account_balance
FROM Clients AS c
INNER JOIN Accounts AS a
ON a.Client_id_number = c.Client_id_number
WHERE a.Account_balance < 0
```
I noticed that you have problems with the "group by" concept, so it is important to know that you need to group by each field within your "SELECT" except those you are summarizing (here, group by c.Client_id_number and a.Account_number).
So, for example:
```
SELECT c.Client_id_number , a.Account_number, sum(a.Account_balance) as balance
FROM Clients AS c
INNER JOIN Accounts AS a
ON a.Client_id_number = c.Client_id_number
WHERE a.Account_balance < 0
GROUP BY c.Client_id_number, a.Account_number;
``` | You are only grouping by `client_id_number`, which means you will only get one row per client ID.
You need to group by *the set of rows that you want to be unique in the output*. In this case, that probably means:
```
GROUP BY c.Client_id_number, a.account_number
```
And then you should apply an aggregate function to any column that is not part of the grouping, such as `sum(account_balance)`.
If you are not using any aggregate functions, it is an indication that you don't actually need grouping.
In most databases it is an error to select any columns that were not in the group by statement, but MySQL "helpfully" allows this kind of query. Instead of returning an error, it hides the problem and randomly picks a possible value for the remaining columns. | GROUP BY doesn't return all rows per specific field | [
"",
"mysql",
"sql",
""
] |
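The core point of the accepted answer — one output row per grouped value — can be seen on a tiny hypothetical table in SQLite via Python: grouping by the client alone collapses that client's accounts into a single row, while grouping by client and account keeps them all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (client_id INTEGER, account INTEGER, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?,?,?)",
                 [(1234, 84, -4000), (1234, 99, -150), (5678, 11, -20)])

# Grouping by client alone: ONE row per client, other columns must be aggregated.
per_client = conn.execute(
    "SELECT client_id, COUNT(*) FROM accounts WHERE balance < 0 GROUP BY client_id"
).fetchall()

# Grouping by client AND account: one row per account survives.
per_account = conn.execute(
    "SELECT client_id, account FROM accounts WHERE balance < 0 "
    "GROUP BY client_id, account"
).fetchall()
print(per_client, per_account)
```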
What is the best way for querying this in ORMLite?
```
SELECT JOB.TASK_JOB,
JOB.NAME_JOB,
JOB_SETTINGS.DAY1,
JOB_SETTINGS.DAY2,
JOB_SETTINGS.DAY3,
JOB_SETTINGS.DAY4,
JOB_SETTINGS.DAY5,
JOB_SETTINGS.DAY6,
JOB_SETTINGS.DAY7,
JOB_SETTINGS.HOURSPERWEEK,
JOB_SETTINGS.PAYPERHOUR,
JOB_SETTINGS.VARIABLEHOURS,
WORKINGDAY.DATE_WD,
WORKINGDAY.H_BREAK,
WORKINGDAY.H_REAL,
WORKINGDAY.H_IN,
WORKINGDAY.H_OUT
FROM JOB, JOB_SETTINGS,WORKINGDAY
WHERE JOB.ID_JOB = JOB_SETTINGS.ID_JOB
AND JOB_SETTINGS.ID_JOB = WORKINGDAY.ID_JOB
AND JOB.ID_JOB = 2
```
I have a class for every table (standard ORMLite) but I don't understand how to build this query...
Thanks!!!
**EDIT:**
I wrote this code, but actually I can't continue because I'm confused about OrmLite syntax...
```
DbHelperJob queryDao = new DbHelperJob(getActivity());
QueryBuilder<Job, String> qJob = queryDao.getJobDao().queryBuilder();
qJob.selectColumns("NAME_JOB", "TASK_JOB");
QueryBuilder<JobSettings, String> qJobSett = queryDao.getJobSettDao().queryBuilder();
qJobSett.selectColumns("DAY1", "DAY2", "DAY3", "DAY4", "DAY5", "DAY6", "DAY7", "HOURSPERWEEK", "PAYPERHOUR", "VARIABLEHOURS");
QueryBuilder<WorkingDay, String> qWD = queryDao.getWDDao().queryBuilder();
qWD.selectColumns("DATE_WD", "H_BREAK", "H_REAL", "H_IN", "H_OUT");
``` | You can use the queryRaw() method of your Dao. For example:
```
GenericRawResults<String[]> rawResults = queryDao.queryRaw("SELECT JOB.TASK_JOB,
JOB.NAME_JOB,
JOB_SETTINGS.DAY1,
JOB_SETTINGS.DAY2,
JOB_SETTINGS.DAY3,
JOB_SETTINGS.DAY4,
JOB_SETTINGS.DAY5,
JOB_SETTINGS.DAY6,
JOB_SETTINGS.DAY7,
JOB_SETTINGS.HOURSPERWEEK,
JOB_SETTINGS.PAYPERHOUR,
JOB_SETTINGS.VARIABLEHOURS,
WORKINGDAY.DATE_WD,
WORKINGDAY.H_BREAK,
WORKINGDAY.H_REAL,
WORKINGDAY.H_IN,
WORKINGDAY.H_OUT
FROM JOB, JOB_SETTINGS,WORKINGDAY
WHERE JOB.ID_JOB = JOB_SETTINGS.ID_JOB
AND JOB_SETTINGS.ID_JOB = WORKINGDAY.ID_JOB
AND JOB.ID_JOB = 2")
```
read more in the [ORMLite Raw Queries documentation](http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_2.html#Raw-Queries) | This has nothing to do with ORMLite, but there's a really handy tool for building queries, right in SQLiteQueryBuilder. You can do this:
```
private static final String[] COLS = new String[] {
JOB.TASK_JOB,
JOB.NAME_JOB,
JOB_SETTINGS.DAY1,
JOB_SETTINGS.DAY2,
JOB_SETTINGS.DAY3,
JOB_SETTINGS.DAY4,
JOB_SETTINGS.DAY5,
JOB_SETTINGS.DAY6,
JOB_SETTINGS.DAY7,
JOB_SETTINGS.HOURSPERWEEK,
JOB_SETTINGS.PAYPERHOUR,
JOB_SETTINGS.VARIABLEHOURS,
WORKINGDAY.DATE_WD,
WORKINGDAY.H_BREAK,
WORKINGDAY.H_REAL,
WORKINGDAY.H_IN,
WORKINGDAY.H_OUT
};
private static final String SEL =
    "JOB.ID_JOB = JOB_SETTINGS.ID_JOB"
    + " AND JOB_SETTINGS.ID_JOB = WORKINGDAY.ID_JOB"
    + " AND JOB.ID_JOB = 2";
private static final String TABLES = "JOB, JOB_SETTINGS, WORKINGDAY";
// in the query method...
db.execSQL(
SQLiteQueryBuilder.buildQueryString(
false, TABLES, COLS, SEL, null, null, null, null);
``` | How to build a Select query (more tables without join) in ORMLite | [
"",
"android",
"sql",
"select",
"join",
"ormlite",
""
] |
How do I fetch records from a table, but only those records where `added_on >= '2014/03/01'` and `added_on <= '2014/03/31'`?
and currently my sql table field value is `added_on = 2014/03/29 05:23:43`
Please help me out as soon as possible | simple SQL Query
```
select * from table_name where (added_on between '2014-03-01' and '2014-03-31' );
```
You may also want to specify `hh:mm:ss`
```
select * from table_name where (added_on between '2014-03-01 00:00:00' and '2014-03-31 00:00:00' );
```
Both should work. :) | ```
SELECT *
FROM [table]
WHERE CONVERT(datetime, [added_on]) between '2014/03/01' AND '2014/03/31'
``` | How to Fetch Records from table in Sql between two dates | [
"",
"mysql",
"sql",
""
] |
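One boundary caveat worth testing with the accepted answer's two forms: when the end of the range has no time component, rows from the last day with a time of day can fall outside it. A SQLite sketch via Python (string comparison here stands in for date handling; exact behavior varies by database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (added_on TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("2014-03-01 10:00:00",), ("2014-03-29 05:23:43",),
                  ("2014-03-31 12:00:00",), ("2014-04-02 00:00:00",)])

# '2014-03-31 12:00:00' sorts AFTER '2014-03-31', so it is missed here.
narrow = conn.execute(
    "SELECT COUNT(*) FROM t WHERE added_on BETWEEN '2014-03-01' AND '2014-03-31'"
).fetchone()[0]

# Spelling out the end-of-day boundary picks up the whole last day.
full = conn.execute(
    "SELECT COUNT(*) FROM t WHERE added_on BETWEEN '2014-03-01 00:00:00' "
    "AND '2014-03-31 23:59:59'"
).fetchone()[0]
print(narrow, full)
```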
I have a master table in sql server 2012 that consists of two types.
These are the critical columns I have in my master table:
```
TYPE CATEGORYGROUP CREDITORNAME DEBITORNAME
```
One of those two types (Type B) doesn't have a CategoryGroup assigned to it (so it's always null).
Type A always has a CategoryGroup and Debitor.
Type B always has a Creditor but no CategoryGroup.
For creditor and debitor I have two extra tables that also hold the CategoryGroup but for my task I only need the table for creditors since I already have the right value for type A (debitors).
So my goal is to look-up the CategoryGroup in the creditor table based on the creditor name and ideally put those CategoryGroup values in my master table.
With "put" I'm not sure whether a view should be generated or the data actually written into the table, which contains about 1.5 million records and keeps on growing.
There's also a "cluster table" that uses the CategoryGroup as a key field. But this isn't part of my problem here.
Please have a look at my sample [fiddle](http://sqlfiddle.com/#!3/c653b/1)
Hope you can help me.
Thank you. | If I understand you correctly, you can simply do a join to find the correct value, and update MainData with that value;
You can either use a common table expression...
```
WITH cte AS (
SELECT a.*, b.categorygroup cg
FROM MainData a
JOIN CreditorList b
ON a.creditorname = b.creditorname
)
UPDATE cte SET categorygroup=cg;
```
[An SQLfiddle to test with](http://sqlfiddle.com/#!3/88024/1).
...or an UPDATE/JOIN;
```
UPDATE m
SET m.categorygroup = c.categorygroup
FROM maindata m
JOIN creditorlist c
ON m.creditorname = c.creditorname;
```
[Another SQLfiddle](http://sqlfiddle.com/#!3/41507/2).
...and always remember to test before running potentially destructive SQL from random people on the Internet on your production data.
EDIT: To just see the data in the same format without doing the update, you can use:
```
SELECT
a.type, COALESCE(a.categorygroup, b.categorygroup) categorygroup,
a.creditorname, a.debitorname
FROM MainData a
LEFT JOIN CreditorList b
ON a.creditorname = b.creditorname
```
[Yet another SQLfiddle](http://sqlfiddle.com/#!3/88024/9). | Couldn't you just do:
```
update maindata
set categorygroup = (
select top 1 categorygroup
from creditorlist
where creditorname = maindata.creditorname)
where creditorname is not null
and categorygroup is null
```
? | TSQL - Insert values in table based on lookup table | [
"",
"sql",
"sql-server",
"t-sql",
"lookup",
""
] |
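The portable variant of this lookup-update (a correlated subquery, close to the second suggestion above) can be exercised end-to-end in SQLite via Python. Table and column names are invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE maindata (creditorname TEXT, categorygroup TEXT);
CREATE TABLE creditorlist (creditorname TEXT, categorygroup TEXT);
INSERT INTO maindata VALUES ('ACME', NULL), ('Globex', NULL), (NULL, 'G1');
INSERT INTO creditorlist VALUES ('ACME', 'G7'), ('Globex', 'G9');
""")

# Fill in the missing group from the lookup table; only rows that actually
# have a creditor and no group yet are touched.
conn.execute("""
UPDATE maindata
SET categorygroup = (SELECT categorygroup FROM creditorlist
                     WHERE creditorlist.creditorname = maindata.creditorname)
WHERE creditorname IS NOT NULL AND categorygroup IS NULL
""")
rows = conn.execute("SELECT creditorname, categorygroup FROM maindata").fetchall()
print(rows)
```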
All my column headers in a MySQL database are prefixed with a number, 1\_X, 2\_X, etc... which makes bringing the data into IDL impossible using just a basic select statement to bring in the entire table. I'm not sure but I see two possible ways:
1) Bring in the table with column name aliases. Can I use TRIM or SUBSTRING\_INDEX to remove/replace the first two characters?
2) Create a routine that uses the information schema to to recursively go through and delete the first two characters of the column headers and create a new table with those headers and copy the data in.
If there weren't so many different tables (all with 1\_X, 2\_X, etc...) there'd be no problem manually selecting 1\_X AS X but that's not feasible. It would be great to be able to use TRIM/SUBSTRING on column headers in the select statement.
Thanks. | It's not possible to use functions in a SQL statement to alter the identifier assigned to a column being returned. The SQL way of specifying the identifier for the column in a resultset is to use the `expr AS alias` approach.
---
Rather than trim off the leading digit characters, you could prepend the identifiers with another valid character. (Trimming off leading characters seems like it would potentially lead to another problem, duplicate and/or zero length column names.)
You could just use a SQL statement to generate the `SELECT` list for you.
(NOTE: the `GROUP_CONCAT` function is limited by some system/session variables: `group_concat_max_len` and `max_allowed_packet`, it's easy enough to adjust these higher, though changing global `max_allowed_packet` may require MySQL to be restarted.)
To get it back the SELECT list on all one line (assuming you won't overflow the `GROUP_CONCAT` limits) something like:
```
SELECT c.table_schema
, c.table_name
, GROUP_CONCAT(
CONCAT('t.`',c.column_name,'` AS `x',c.column_name,'`')
ORDER BY c.ordinal_position
) AS select_list_expr
FROM information_schema.columns c
WHERE c.table_schema = 'mydatabase'
GROUP BY c.table_schema, c.table_name
```
Or, you could even get back a whole SELECT statement, if you wrapped that `GROUP_CONCAT` expression (which produces the select list) in another `CONCAT`
Something like this:
```
SELECT CONCAT('SELECT '
, GROUP_CONCAT(
<select_list_expr>
)
, ' FROM `',c.table_schema,'`.`',c.table_name,'` t;'
) AS stmt
FROM information_schema.columns c
WHERE c.table_schema = 'mydatabase'
GROUP BY c.table_schema, c.table_name
```
---
You could use a more clever expression for `<select_list_expr>`, to check for leading "digit" characters, and assign an alias to just those columns that need it, and leave the other columns unchanged, though that again introduces the potential for returning duplicate column names.
That is, if you already have columns named `'1_X'` and `'x1_X'` in the same table. But a carefully chosen leading character may avoid that problem...
The `<select_list_expr>` could be more clever by doing a conditional test for leading digit character, something like this:
```
SELECT CONCAT('SELECT '
, GROUP_CONCAT(
CASE
WHEN c.column_name REGEXP '^[[:digit:]]'
THEN CONCAT('t.`',c.column_name,'` AS `x',c.column_name,'`')
ELSE CONCAT('t.`',c.column_name,'`')
END
)
, ' FROM `',c.table_schema,'`.`',c.table_name,'` t;'
) AS stmt
FROM information_schema.columns c
WHERE c.table_schema = 'mydatabase'
GROUP BY c.table_schema, c.table_name
```
Again, there's a potential for generation "duplicate" column names with this approach. The conditional test "`c.column_name REGEXP`" could be extended to check for other "invalid" leading characters as well.
---
As a side note, at some point, someone thought it a "good idea" to name columns with leading digit characters. Just because something is allowed doesn't mean it's a good idea.
---
Then again, maybe all that rigamarole isn't necessary, and just wrapping the column names in backticks would be sufficient for your application. | I think you can follow option 2. However this will not be quick solution.
Another way around this could be,
> 1. Generate schema script for the tables you want to correct.
> 2. Open the script in notepad++ or any editor that supports find using regular expression.
> 3. Search and replace with [0-9]+\_ expression and empty string for replacement.
> 4. Create the new tables using this script and copy data into them.
This may sound like a manual approach but you will do this once for all of your tables. | MySQL Remove characters from column headers | [
"",
"mysql",
"sql",
"rename",
"idl",
""
] |
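The same generate-the-select-list trick can be done from application code: read the column names from the catalog and build aliases for any that start with a digit. A SQLite sketch via Python, where `PRAGMA table_info` stands in for `information_schema` and the table name is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE data ("1_X" INTEGER, "2_X" INTEGER)')
conn.execute("INSERT INTO data VALUES (10, 20)")

# Prefix any column whose name starts with a digit, mirroring the
# CASE ... REGEXP '^[[:digit:]]' branch in the answer above.
cols = [row[1] for row in conn.execute("PRAGMA table_info(data)")]
select_list = ", ".join(
    f'"{c}" AS "x{c}"' if c[0].isdigit() else f'"{c}"' for c in cols
)
stmt = f"SELECT {select_list} FROM data"
cur = conn.execute(stmt)
new_names = [d[0] for d in cur.description]
print(stmt, new_names)
```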
**My error is as follows:**
> Cannot insert explicit value for identity column in table 'Snacks' when IDENTITY\_INSERT is set to OFF.
> Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
**Exception Details:**
> System.Data.SqlClient.SqlException: Cannot insert explicit value for identity column in table 'Snacks' when IDENTITY\_INSERT is set to OFF.
I have created a table with an SQL Database that has a unique ID, description and price option with the primary key being the ID.
How do I fix this error in Visual Web Developer 2010?
```
<asp:SqlDataSource ID="sdsSnacks" runat="server"
ConnectionString="<%$ ConnectionStrings:cs3201 %>"
DeleteCommand="DELETE FROM [Snacks] WHERE [SID] = @SID"
InsertCommand="INSERT INTO [Snacks] ([Price], [Description]) VALUES (@Price, @Description)"
SelectCommand="SELECT [Price], [Description], [SID] FROM [Snacks]"
UpdateCommand="UPDATE [Snacks] SET [Price] = @Price, [Description] = @Description WHERE [SID] = @SID">
<DeleteParameters>
<asp:Parameter Name="SID" Type="Int32" />
</DeleteParameters>
<InsertParameters>
<asp:Parameter Name="Price" Type="Decimal" />
<asp:Parameter Name="Description" Type="String" />
</InsertParameters>
<UpdateParameters>
<asp:Parameter Name="Price" Type="Decimal" />
<asp:Parameter Name="Description" Type="String" />
<asp:Parameter Name="SID" Type="Int32" />
</UpdateParameters>
</asp:SqlDataSource>
``` | This is not specifically an ASP.NET issue, it's a SQL error.
When you have an `identity` column, every time a new row is added, that column should contain a new value (normally incremented by 1).
The error your showing is because either you or the technology you're using is attempting to tell SQL Server what value should be in that `identity` column.
You have the following options available to you (that I'm aware of)...
**If you wrote the SQL query and listed all the columns** then you should remove the `identity` column. For instance...
```
INSERT INTO [MYTABLE] ([IDENTCOLUMN],[MYVAL]) VALUES (1,1)
```
Should become...
```
INSERT INTO [MYTABLE] ([MYVAL]) VALUES (1)
```
**If you wrote the SQL query and did NOT list all the columns** then you should write all the columns without the `identity` column. For instance...
```
INSERT INTO [MYTABLE] VALUES (1,1)
```
Should become...
```
INSERT INTO [MYTABLE] ([MYVAL]) VALUES (1)
```
**Otherwise, if you actually want to specifically set the value of the `identity` column** you need to tell SQL Server that you're overriding the normal `identity` processing by using the following...
```
SET IDENTITY_INSERT [MYTABLE] ON
INSERT INTO [MYTABLE] ([IDENTCOLUMN],[MYVAL]) VALUES (1,1)
SET IDENTITY_INSERT [MYTABLE] OFF
``` | you need to SET IDENTITY\_INSERT ON:
Syntax is:
```
SET IDENTITY_INSERT [ database_name . [ schema_name ] . ] table { ON | OFF }
``` | ASP.NET: Identity_Insert set to OFF - How do I fix this? | [
"",
"sql",
"sql-server",
""
] |
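SQLite's `INTEGER PRIMARY KEY` behaves much like an identity column for the first two fixes above: leave the column out of the `INSERT` and the database assigns the value itself. A minimal sketch via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE snacks (
    sid INTEGER PRIMARY KEY AUTOINCREMENT,
    price REAL, description TEXT)""")

# Don't mention the identity column: the database fills it in.
conn.execute("INSERT INTO snacks (price, description) VALUES (?, ?)", (1.5, "chips"))
conn.execute("INSERT INTO snacks (price, description) VALUES (?, ?)", (2.0, "soda"))

rows = conn.execute("SELECT sid, description FROM snacks ORDER BY sid").fetchall()
print(rows)  # sid assigned automatically
```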
I'm trying to manually execute SQL commands so I can access procedures in NuoDB.
I'm using Ruby on Rails and I'm using the following command:
```
ActiveRecord::Base.connection.execute("SQL query")
```
The "SQL query" could be any SQL command.
For example, I have a table called "Feedback" and when I execute the command:
```
ActiveRecord::Base.connection.execute("SELECT `feedbacks`.* FROM `feedbacks`")
```
This would only return a "true" response instead of sending me all the data requested.
This is the output on the Rails Console is:
```
SQL (0.4ms) SELECT `feedbacks`.* FROM `feedbacks`
=> true
```
I would like to use this to call stored procedures in NuoDB but upon calling the procedures, this would also return a "true" response.
Is there any way I can execute SQL commands and get the data requested instead of getting a "true" response? | The working command I'm using to execute custom SQL statements is:
```
results = ActiveRecord::Base.connection.execute("foo")
```
with "foo" being the SQL statement (e.g. "SELECT * FROM table").
This command will return a set of values as a hash and put them into the results variable.
So on my rails application\_controller.rb I added this:
```
def execute_statement(sql)
results = ActiveRecord::Base.connection.execute(sql)
if results.present?
return results
else
return nil
end
end
```
Using execute\_statement will return the records found and if there is none, it will return nil.
This way I can just call it anywhere on the rails application like for example:
```
records = execute_statement("select * from table")
```
"execute\_statement" can also call NuoDB procedures, functions, and also Database Views. | For me, I couldn't get this to return a hash.
```
results = ActiveRecord::Base.connection.execute(sql)
```
But using the exec\_query method worked.
```
results = ActiveRecord::Base.connection.exec_query(sql)
``` | How do you manually execute SQL commands in Ruby On Rails using NuoDB | [
"",
"sql",
"ruby-on-rails",
"activerecord",
"nuodb",
""
] |
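The Rails detail above is framework-specific, but the underlying point — executing a statement hands back a handle, and you must fetch from it to see rows — holds in most database APIs. For comparison, the analogous shape in Python's DB-API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedbacks (id INTEGER, body TEXT)")
conn.execute("INSERT INTO feedbacks VALUES (1, 'great')")

cur = conn.execute("SELECT * FROM feedbacks")  # returns a cursor, not rows
rows = cur.fetchall()                          # fetching produces the data
print(rows)
```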
I'm fairly new to SQL (~9 months or so) and so I'm stumped trying to figure out this problem, using SQL Server 2012.
I'm trying to match up two tables that look like this:
Table `Shift`:
```
Date | Name | Shift
01-01-2013 | Dan Smith | Night
01-02-2013 |John Johnson| Night
01-03-2013 |John Johnson| Night
```
Table `Sales`:
```
Date | Name | Sales
01-01-2013 | Dan Smith | 3
01-02-2013 | Dan Smith | 5
01-02-2013 |John Johnson| 4
01-03-2013 |John Johnson| 7
01-04-2013 |John Johnson| 2
```
The issue is that the shift table only includes the start date of the shift, which is problematic for the night shift as it takes place over two days. Sales for the night shift show up as taking place over both days as they are timestamped when they occur. This means that if I simply try to link the tables based on the Name and Shift Date, and the person does not have a shift the next day, I will miss any sales data from 12:00AM onward. I'm searching for a way to check if the person has a shift the next day, and if not, add their sales data for the next day to the previous days sales.
Right now my code looks like this:
```
Select
st.Name,
st.Date,
st.Shift,
sl.Sales
From
Shift st
right join
Sales as sl on sl.Name = st.Name
and sl.Date = Case
When sl.Date = DATEADD(DD, 1, st.Date)
Then DATEADD(DD,1, st.Date)
Else st.Date
End
```
Which picks up the missing data from when I simply used sales.date=shift.date, but it also ends up with a lot of duplicate data. Apologies if this is simple, but I haven't been able to figure out a good way to simply check if the salesperson doesn't have a shift the next day and then adding the next days sales data for that person to their previous days.
edit: Desired results would look something like this:
```
ShiftDate | Name | Shift | Sales
01-01-2013 | Dan Smith | Night | 8
01-02-2013 |John Johnson| Night | 4
01-03-2013 |John Johnson| Night | 9
```
I don't really mind if the precise per-shift sales numbers aren't 100% accurate, but I do need to pick up the next day sales data if they don't have a shift the next day.
I would love a model change as it would absolutely fix this problem and have been working to get one for a while now. Unfortunately, the sales data comes from an outside source which I then import into our database, and they so far have been unwilling/unable to provide the precise timestamps. | Ok, this works as you intended:
```
;WITH Sales2 AS
(
SELECT A.*, CASE WHEN B.[Date] IS NULL THEN 1 ELSE 0 END LastShift
FROM Sales A
LEFT JOIN Sales B
ON A.Name = B.Name
AND A.[Date] = DATEADD(DAY,-1,B.[Date])
)
SELECT A.[Date],
A.Name,
A.Shift,
SUM(B.Sales) Sales
FROM Shift A
LEFT JOIN Sales2 B
ON A.Name = B.Name
AND CASE
WHEN A.Shift <> 'Night' AND A.[Date] = B.[Date]
THEN 1
WHEN A.Shift = 'Night' AND B.LastShift = 0 AND A.[Date] = B.[Date]
THEN 1
WHEN A.Shift = 'Night' AND B.LastShift = 1 AND A.[Date] = DATEADD(DAY,-1,B.[Date])
THEN 1
ELSE 0
END = 1
GROUP BY A.[Date],
A.Name,
A.Shift
```
[**Here is a sqlfiddle**](http://sqlfiddle.com/#!6/ad332/2) with a demo of this solution.
And the resuls are:
```
╔═════════════════════════╦══════════════╦═══════╦═══════╗
║ Date ║ Name ║ Shift ║ Sales ║
╠═════════════════════════╬══════════════╬═══════╬═══════╣
║ 2013-01-01 00:00:00.000 ║ Dan Smith ║ Night ║ 8 ║
║ 2013-01-02 00:00:00.000 ║ John Johnson ║ Night ║ 4 ║
║ 2013-01-03 00:00:00.000 ║ John Johnson ║ Night ║ 9 ║
╚═════════════════════════╩══════════════╩═══════╩═══════╝
``` | I think you might need a better model. See if this works?
```
SELECT
sft.Name,
sft.Date,
sft.Shift,
sum(sls.Sales)
FROM Sales sls
LEFT JOIN Shift sft
ON sls.Date >= sft.Date
WHERE (sls.Date = sft.Date OR sls.Date NOT IN
(SELECT Date FROM Shift WHERE Date = sls.Date AND Name = sls.Name ))
GROUP BY
sft.Name,
sft.Date,
sft.Shift
``` | Need to match two SQL tables based on dates but I need to check for date+1 | [
"",
"sql",
"sql-server",
"date",
""
] |
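The self-join on "same person, next day" at the heart of the accepted answer can be reproduced in SQLite, where `date(x, '-1 day')` plays the role of `DATEADD(DAY, -1, ...)`. A reduced sketch via Python with the question's sample sales data, computing just the "last shift" flag:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (d TEXT, name TEXT, sales INTEGER);
INSERT INTO sales VALUES
 ('2013-01-01','Dan Smith',3), ('2013-01-02','Dan Smith',5),
 ('2013-01-02','John Johnson',4), ('2013-01-03','John Johnson',7),
 ('2013-01-04','John Johnson',2);
""")

# Flag each sales day that has NO following-day row for the same person --
# those days' sales belong to the previous night shift.
rows = conn.execute("""
SELECT a.d, a.name,
       CASE WHEN b.d IS NULL THEN 1 ELSE 0 END AS last_shift
FROM sales a
LEFT JOIN sales b
  ON a.name = b.name AND a.d = date(b.d, '-1 day')
ORDER BY a.name, a.d
""").fetchall()
print(rows)
```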
my SQL is a bit rusty after 2 years of not having studied it, so I can't find a way to do this.
I need to get some data from a Moodle database, namely some averages of feedback tests where students rate their teachers.
This SQL query:
```
SELECT mdl_course.id, mdl_user.username FROM mdl_course
INNER JOIN mdl_context ON mdl_context.instanceid = mdl_course.id
INNER JOIN mdl_role_assignments ON mdl_context.id = mdl_role_assignments.contextid
INNER JOIN mdl_role ON mdl_role.id = mdl_role_assignments.roleid
INNER JOIN mdl_user ON mdl_user.id = mdl_role_assignments.userid
WHERE mdl_role.id = 3
```
returns something like this, a table with each course and its teacher as assigned.
```
COURSE_ID TEACHER
2 john
3 mary
4 john
```
Now, my second SQL is like this:
```
SELECT mdl_feedback.course, AVG(mdl_feedback_value.value) as average
FROM mdl_feedback_value
INNER JOIN mdl_feedback_item ON mdl_feedback_value.item = mdl_feedback_item.id
INNER JOIN mdl_feedback ON mdl_feedback.id = mdl_feedback_item.feedback
INNER JOIN mdl_feedback_completed ON mdl_feedback.id = mdl_feedback_completed.feedback
INNER JOIN mdl_user ON mdl_feedback_completed.userid = mdl_user.id
GROUP BY mdl_feedback.course
COURSE AVERAGE
2 3.5
3 3
4 3.25
```
What I want is to combine those 2 SQL queries into one that goes like this, using COURSE/COURSE\_ID as the key
```
TEACHER AVERAGE
john 3,375 <--- avg of 3,5 and 3,25 from each of john's courses
mary 3 <--- she has just one course so no math here
```
I'm not sure how to go about this, so I'd appreciate a bit of help :) As I said, I haven't used SQL for a while so I'm not keen on those JOIN thingies; maybe I have to use them as I have used them here.
I'm using MySQL 5.5.33, and although this is related to Moodle, the answer is not really Moodle-centered, as the only thing that matters here is the table output from both queries.
Thanks | If you take each of the subqueries and then try to JOIN them, it should work. I added a third field to help you see the detail of how the score was created.
```
SELECT username, AVG(average) AS average
, GROUP_CONCAT(CONCAT('Course: ', teachers.course_id,' with score ', COALESCE(average,'No Score Found'))) AS detail
FROM (
SELECT mdl_course.id AS course_id, mdl_user.username AS username
FROM mdl_course
INNER JOIN mdl_context ON mdl_context.instanceid = mdl_course.id
INNER JOIN mdl_role_assignments ON mdl_context.id = mdl_role_assignments.contextid
INNER JOIN mdl_role ON mdl_role.id = mdl_role_assignments.roleid
INNER JOIN mdl_user ON mdl_user.id = mdl_role_assignments.userid
WHERE mdl_role.id = 3 ) AS teachers
LEFT JOIN (
SELECT mdl_feedback.course AS course_id, AVG(mdl_feedback_value.value) as average
FROM mdl_feedback_value
INNER JOIN mdl_feedback_item ON mdl_feedback_value.item = mdl_feedback_item.id
INNER JOIN mdl_feedback ON mdl_feedback.id = mdl_feedback_item.feedback
INNER JOIN mdl_feedback_completed ON mdl_feedback.id = mdl_feedback_completed.feedback
INNER JOIN mdl_user ON mdl_feedback_completed.userid = mdl_user.id
GROUP BY mdl_feedback.course) AS scores
ON teachers.course_id = scores.course_id
GROUP BY username
``` | I am Trying to use a temp table to store the two different query result sets and then later i am querying the temp table to get the average of the teacher. Hope this works.
```
CREATE TABLE #TEMP
(
COURSE_ID int,
TEACHER varchar(100),
COURSE int,
AVERAGE int
)
--- Inserting the course_id and Teahcer data----
INSERT INTO #TEMP
(
COURSE_ID,
TEACHER
)
SELECT mdl_course.id, mdl_user.username FROM mdl_course
INNER JOIN mdl_context ON mdl_context.instanceid = mdl_course.id
INNER JOIN mdl_role_assignments ON mdl_context.id = mdl_role_assignments.contextid
INNER JOIN mdl_role ON mdl_role.id = mdl_role_assignments.roleid
INNER JOIN mdl_user ON mdl_user.id = mdl_role_assignments.userid
WHERE mdl_role.id = 3
-- Inserting the course and average data---
INSERT INTO #TEMP
(
COURSE,
AVERAGE
)
SELECT mdl_feedback.course, AVG(mdl_feedback_value.value) as average
FROM mdl_feedback_value
INNER JOIN mdl_feedback_item ON mdl_feedback_value.item = mdl_feedback_item.id
INNER JOIN mdl_feedback ON mdl_feedback.id = mdl_feedback_item.feedback
INNER JOIN mdl_feedback_completed ON mdl_feedback.id = mdl_feedback_completed.feedback
INNER JOIN mdl_user ON mdl_feedback_completed.userid = mdl_user.id
GROUP BY mdl_feedback.course
--- Querying the # temp table for the average and teacher---
SELECT TEACHER, AVG(AVERAGE)
FROM #TEMP
GROUP BY TEACHER
``` | Merging 2 SQL queries with a common field and AVGs | [
"",
"mysql",
"sql",
"join",
"merge",
"moodle",
""
] |
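The shape of the accepted answer — aggregate per course first, then join to the teachers and average again — can be tried on toy data in SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teachers (course_id INTEGER, username TEXT);
CREATE TABLE feedback (course_id INTEGER, value REAL);
INSERT INTO teachers VALUES (2,'john'), (3,'mary'), (4,'john');
INSERT INTO feedback VALUES (2,3.5), (3,3.0), (4,3.25);
""")

# Per-course averages in a subquery, then average those per teacher.
rows = conn.execute("""
SELECT t.username, AVG(s.average) AS average
FROM teachers t
LEFT JOIN (SELECT course_id, AVG(value) AS average
           FROM feedback GROUP BY course_id) s
  ON t.course_id = s.course_id
GROUP BY t.username
ORDER BY t.username
""").fetchall()
print(rows)  # john averages his two courses, mary keeps her single score
```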
Hello here's my question
Retrieve the total number of bookings for each type of the services that
have at least three bookings (excluding those cancelled).
i.e. where status = 'open' AND 'done'
I'm not too sure how to do the exclusion, or how to count values in a column.
```
SELECT Service.type, Service.description,
COUNT (DISTINCT status)
FROM Booking
LEFT JOIN Service
ON Booking.service = Service.type
WHERE status >= 3
EXCLUDE 'cancelled'
GROUP BY status DESC;
CREATE TABLE Booking(
car CHAR(8) ,
on_date DATE NOT NULL,
at_time TIME NOT NULL,
technician CHAR(6) NOT NULL,
service VARCHAR(15) NOT NULL,
status VARCHAR(9)CHECK(status IN ('open','done', 'cancelled')) DEFAULT 'open' NOT NULL,
note VARCHAR(200) ,
rating INTEGER CHECK(rating IN('0','1','2','3','4','5')) DEFAULT '0' NOT NULL,
feedback VARCHAR(2048) ,
PRIMARY KEY (car, on_date, at_time),
FOREIGN KEY (car) REFERENCES Car (cid)
ON DELETE CASCADE
ON UPDATE CASCADE,
FOREIGN KEY (technician) REFERENCES Technician (tech_id)
ON DELETE CASCADE
ON UPDATE CASCADE,
FOREIGN KEY (service) REFERENCES Service (type)
ON DELETE CASCADE
ON UPDATE CASCADE
);
CREATE TABLE Service(
type VARCHAR(15) PRIMARY KEY,
description VARCHAR(2048)
);
``` | It will be faster to aggregate first and join later. Fewer join operations. Hence the subquery:
```
SELECT s.type, s.description, b.ct
FROM (
SELECT service, count(*) AS ct
FROM booking
WHERE status <> 'cancelled'
GROUP BY 1
HAVING count(*) > 2
) b
JOIN service s ON s.type = b.service;
```
Since you enforce referential integrity with a foreign key constraint and `service` is defined `NOT NULL`, you can as well use `[INNER] JOIN` instead of a `LEFT [OUTER] JOIN` *in this query*.
It would be cleaner and more efficient to use an [`enum`](http://www.postgresql.org/docs/current/interactive/datatype-enum.html) data type instead of `VARCHAR(9)` for the `status` column. Then you wouldn't need the `CHECK` constraint either.
For best performance of this particular query, you could have a [partial covering index](http://www.postgresql.org/docs/current/interactive/indexes-partial.html) (which would also profit from the `enum` data type):
```
CREATE INDEX foo ON booking (service)
WHERE status <> 'cancelled';
```
Every index carries a maintenance cost, so only keep this tailored index if it actually makes your query faster (test with [`EXPLAIN ANALYZE`](https://stackoverflow.com/questions/18867836/poor-performance-on-a-postgresql-query/19460872#19460872)) and it is run often and / or important. | ```
select s.type, s.description, count(*)
from
booking b
inner join
service s on b.service = s.type
where status != 'cancelled'
group by 1, 2
having count(*) >= 3
order by 3 desc;
``` | SQL COUNT EXCLUDE | [
"",
"sql",
"postgresql",
"join",
"count",
"aggregate-functions",
""
] |
I used to do Oracle development many many years ago. I have spent most of the past 15 years doing mainly SQL Server, and where I have done Oracle work, been insulated from the workings by Web services etc.
My question seems to have been asked on the web a few times, but it seems difficult somehow to communicate - at least judging by the answers. I very much appreciate that tools and functionality differ, know I have to learn new things, but this is so simple, yet seems so hard to do.
I am looking to do some ad-hoc queries on the database. At the moment we are using SQL Navigator - I am open to using other tools...
In SQL Server Management Studio, if you open a query window, type a bit of SQL that retuns a value or a set, you get a nice display of the rows or values in a results window.
I've gathered that with Oracle PL/SQL things are a bit different, worked out that I need to return a cursor - but how do I get a tool to display the results?
I started simple:
```
declare
my_id number := 356655;
cursor c1 is select my_id from dual;
begin
open c1;
end;
```
This runs fine - but how do I see the results? Are there any tools that deal with this as 'nicely' as SSMS? I am used to being able to do a lot of this, including stuff like
(perhaps not exactly the right syntax? but you get the idea...)
```
declare
my_id number := 356655;
cursor c1 is select name from my_table where id = my_id;
begin
open c1;
end;
```
And having the results displayed to me as text/grid. Ideally there is a nice solution. Some spiffy new tool, maybe? | With [SQL Developer](http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index-097090.html) or SQL\*Plus you can use a bind variable declared before the PL/SQL block:
```
variable rc refcursor;
declare
my_id number := 356655;
begin
open :rc for select my_id from dual;
end;
/
print rc
RC
-------------------------------
356655
```
You can also use a bind variable within the query, which can be useful:
```
variable my_id number;
variable rc refcursor;
execute :my_id := 356655;
begin
open :rc for select :my_id from dual;
end;
/
print rc
```
The [`variable`](http://docs.oracle.com/cd/E18283_01/server.112/e16604/ch_twelve050.htm) and [`print`](http://docs.oracle.com/cd/E18283_01/server.112/e16604/ch_twelve031.htm) commands are covered in the SQL\*Plus documentation, which largely applies to SQL Developer as well - that has its [own documentation](http://docs.oracle.com/cd/E39885_01/appdev.40/e38414/intro.htm), including the [commands that are carried over from SQL\*Plus](http://docs.oracle.com/cd/E39885_01/appdev.40/e38414/intro.htm#RPTUG10710).
If you have a function that returns a ref cursor then you can call that in a query, as `select func(val) from dual`, and then the results can go in a grid; or you can call the function (or procedure) with the same `:rc` bind variable and print it. But I'm not sure either is helpful if you are only doing ad hoc queries.
On the other hand, using a PL/SQL block for an ad hoc query seems a little heavy-handed, even if your queries are complicated. You'd need a good reason to open a cursor for a select statement from within a block, rather than just running the `select` directly. (Not sure if that's a SQL Server thing or if you actually have a real need to do this!). If you're just running a query inside the block, you don't need the block, even if you want to keep a bind variable for the values you're using in the query:
```
variable my_id number;
execute :my_id := 356655;
select :my_id from dual;
:MY_ID
----------
356655
``` | I use [Oracle SQL Developer](http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index-097090.html).
Anyway, this should work in any Oracle SQL client:
If you just want to see your results, you can use
```
dbms_output.put_line('Foo' || somevar || ' bar');
```
Before this, run
```
SET SERVEROUTPUT ON
```
Check the examples at [docs.oracle.com](http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_output.htm#BABHEAFF) | Ad hoc querying Oracle PL/SQL - for SQL Server developer | [
"",
"sql",
"oracle",
"plsql",
"database-tools",
""
] |
I found a sproc in a db that I think infers a table join.
Is it accurate to deduce that from the sample below?
```
SELECT a.Column1, b.Column2
FROM [dbo].[Table1] As a,
[dbo].[Table2] AS b
``` | It does infer a join, but that join is a `CROSS JOIN`, which is rarely what you want. It won't simply map each row of `a` to its equivalent in `b`. Instead, it will map each row of `a` to every row of `b`. That is a join, but it's not what we often think of when we talk about joins.
I wanted to show a quick example, but unfortunately SQL Fiddle is down. Sorry!
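In the meantime, here is a quick check in SQLite (invented sample tables) showing the comma syntax really is a `CROSS JOIN`:

```python
import sqlite3

# With 3 rows in one table and 2 in the other, the comma-style join
# produces the Cartesian product: 3 * 2 = 6 rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (Column1 TEXT);
    CREATE TABLE Table2 (Column2 TEXT);
    INSERT INTO Table1 VALUES ('a1'), ('a2'), ('a3');
    INSERT INTO Table2 VALUES ('b1'), ('b2');
""")

rows = con.execute(
    "SELECT a.Column1, b.Column2 FROM Table1 AS a, Table2 AS b"
).fetchall()
print(len(rows))  # 6
```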
More on cross joins and cartesian products in SQL Server [here](http://technet.microsoft.com/en-us/library/ms190690%28v=sql.105%29.aspx). | You inferred correctly, this is a Cartesian join. | when selecting from multiple SQL Server 2008 tables is a join inferred? | [
"",
"sql",
"sql-server-2008",
""
] |
There are 3 tables: `Students`, `Courses` and `Grades`.
`sid`, `cid` and `sid-cid` (super key) are the primary keys of the respective tables.
`Students`:
```
sid sname address
-----------------
S1 Adam Abawama
S2 Mery Ignora
S3 Aisha Icterra
S4 Sello Icterra
S5 Mery Kaysers
```
`Courses`:
```
cid cname dept
------------------
C1 Db Ceng
C2 Prog Ceng
C3 Calculus Math
C4 Stat EE
C5 Alg Ceng
```
`Grades`:
```
sid cid grade
--------------
S1 C1 50
S1 C2 85
S1 C3 60
S1 C4 90
S1 C5 50
S2 C1 30
S2 C2 40
S3 C2 85
S4 C2 80
S4 C4 75
S4 C5 60
```
Questions
1. List the names of students who have taken both a `ceng` and an `EE` course
2. Find the `sid` of the student who gets the highest grade in the course `database`
**My SQl Answer for Q1**
```
SELECT s.sname
FROM Students s
JOIN Grades g ON s.sid = g.sid
JOIN Courses c ON g.sid = c.sid
AND c.dept IN ("Ceng","EE")
```
**My SQL Answer for Q2**
```
SELECT sid
FROM Grades
WHERE grade =
(SELECT max(grade)
FROM Grades
GROUP BY cid HAVING Grades.cid = "C1")
```
Answer 1 returns the wrong result; how can I fix it?
How can I write relational algebra for these commands? | I like to use TDQD — Test-Driven Query Design. You build up the query in stages, validating each stage. This is a fairly simple query to do — it only needs two steps (though arguably I've compressed two steps into step 2).
### Step 1
You need to do a self-join in some shape or form. This query lists students (by `sid`) with a grade in an EE course:
```
SELECT g.sid
FROM Grades AS g
JOIN Courses AS c ON g.cid = c.cid AND c.dept = "EE"
```
### Step 2
You can use the query in step 1 as one sub-query and the analogous sub-query for "Ceng" to generate two 'tables' with the list of students who do EE and Ceng; you join these tables to come up the students who do both EE and Ceng, and join that with the Students table to list their names:
```
SELECT s.sname
FROM Students AS s
JOIN (SELECT DISTINCT g.sid
FROM Grades AS g
JOIN Courses AS c ON g.cid = c.cid AND c.dept = "EE"
) AS ee
ON s.sid = ee.sid
JOIN (SELECT DISTINCT g.sid
FROM Grades AS g
JOIN Courses AS c ON g.cid = c.cid AND c.dept = "Ceng"
) AS ceng
ON s.sid = ceng.sid
```
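A runnable check of this two-derived-table approach, using the sample data from the question (note the Students name column is `sname`; SQLite used here just for convenience):

```python
import sqlite3

# Adam (S1) and Sello (S4) are the only students with grades in both a
# Ceng and an EE course, so only they should be returned.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Students (sid TEXT PRIMARY KEY, sname TEXT, address TEXT);
    CREATE TABLE Courses (cid TEXT PRIMARY KEY, cname TEXT, dept TEXT);
    CREATE TABLE Grades (sid TEXT, cid TEXT, grade INT, PRIMARY KEY (sid, cid));
    INSERT INTO Students VALUES ('S1','Adam','Abawama'), ('S2','Mery','Ignora'),
        ('S3','Aisha','Icterra'), ('S4','Sello','Icterra'), ('S5','Mery','Kaysers');
    INSERT INTO Courses VALUES ('C1','Db','Ceng'), ('C2','Prog','Ceng'),
        ('C3','Calculus','Math'), ('C4','Stat','EE'), ('C5','Alg','Ceng');
    INSERT INTO Grades VALUES ('S1','C1',50), ('S1','C2',85), ('S1','C3',60),
        ('S1','C4',90), ('S1','C5',50), ('S2','C1',30), ('S2','C2',40),
        ('S3','C2',85), ('S4','C2',80), ('S4','C4',75), ('S4','C5',60);
""")

rows = con.execute("""
    SELECT s.sname
    FROM Students AS s
    JOIN (SELECT DISTINCT g.sid FROM Grades g
          JOIN Courses c ON g.cid = c.cid AND c.dept = 'EE') ee
      ON s.sid = ee.sid
    JOIN (SELECT DISTINCT g.sid FROM Grades g
          JOIN Courses c ON g.cid = c.cid AND c.dept = 'Ceng') ceng
      ON s.sid = ceng.sid
    ORDER BY s.sid
""").fetchall()
print(rows)  # [('Adam',), ('Sello',)]
```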
The DISTINCT qualifiers are necessary in case some student takes 2 EE and 1 Ceng course; it prevents them showing up twice in the output. You could place the DISTINCT in the main query select-list too, of course. | Another approach is to use two correlated subqueries, like so:
```
select S.SNAME
from STUDENTS S
where exists (select G1.SID
from GRADES G1 inner join COURSES C1 on G1.CID=C1.CID
where G1.SID = S.SID
and C1.DEPT = 'Ceng')
and exists (select G2.SID
from GRADES G2 inner join COURSES C2 on G2.CID=C2.CID
where G2.SID = S.SID
and C2.DEPT = 'EE')
```
See: <http://sqlfiddle.com/#!3/2fd72/4> | How to translate SQL to Relational Algebra on Relational Database Tables | [
"",
"sql",
"sql-server-2008",
"relational-database",
"relational-algebra",
""
] |
```
CREATE TABLE info(id KEY AUTO_INCREMENT, email VARCHAR(20), name VARCHAR(20));
```
While using this code in the Ubuntu terminal, I am getting an error like the one below:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the
> manual that corresponds to your MySQL server version for the right
> syntax to use near 'KEY AUTO\_INCREMENT, email VARCHAR(20), name
> VARCHAR(20))' at line 1
Please help me with this problem. Thanks in advance for any replies. | try this:
```
CREATE TABLE info(
id int not null AUTO_INCREMENT,
email VARCHAR(20),
name VARCHAR(20),
primary key(id)
);
``` | Try that:
```
CREATE TABLE info(id INT NOT null AUTO_INCREMENT, email VARCHAR(20), name VARCHAR(20),primary key(id));
``` | Syntax error in MySQL check manual | [
"",
"mysql",
"sql",
"terminal",
""
] |
Let's say we have a database that records all the Movies a User has not rated yet. Each rating is recorded in a MovieRating table.
When we are looking for movies user #1234 hasn't seen yet:
```
SELECT *
FROM Movies
WHERE id NOT IN
(SELECT DISTINCT movie_id FROM MovieRating WHERE user_id = 1234);
```
Querying NOT IN can be very expensive as the size of MovieRating grows. Assume MovieRatings can have 100,000+ rows.
My question is what are some more efficient alternatives to the NOT IN query? I've heard of the LEFT OUTER JOIN and NOT EXISTS queries, but is there anything else? Is there any way I can design this database differently? | A correlated sub-query using WHERE NOT EXISTS() is potentially your most efficient option if you have to do this, but you should test performance against your data.
You may also want to consider limiting your results both in terms of the select list (don't use \*) and only getting TOP n rows. That is, you may not need 100k+ movies if the user hasn't seen them. You may want to page the results.
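As a quick runnable sanity check first (SQLite, with invented sample data; the shape matches the T-SQL that follows):

```python
import sqlite3

# User 1234 has rated movies 1 and 3, so the NOT EXISTS anti-join should
# return only movie 2.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Movies (id INT PRIMARY KEY, title TEXT);
    CREATE TABLE MovieRating (user_id INT, movie_id INT, rating INT);
    INSERT INTO Movies VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO MovieRating VALUES (1234, 1, 5), (1234, 3, 2), (9, 2, 4);
""")

rows = con.execute("""
    SELECT m.id, m.title
    FROM Movies m
    WHERE NOT EXISTS (SELECT 1 FROM MovieRating r
                      WHERE r.user_id = 1234 AND r.movie_id = m.id)
""").fetchall()
print(rows)  # [(2, 'B')]
```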
```
SELECT *
FROM Movies m
WHERE NOT EXISTS (SELECT 1
FROM MovieRating r
WHERE user_id = 1234
AND r.movie_id= m.movie_id)
``` | This is a mock query, because I don't have a db to test this, but something along the lines of the following *should* work.
```
select m.* from Movies m
left join MovieRating mr on mr.user_id = 1234
where mr.id is null
```
That should join the movies table to the movie rating table based on a user id. The where clause is then going to find null entries, which would be movies a user hasn't rated. | What are some alternatives to a NOT IN query? | [
"",
"sql",
"database",
"performance",
""
] |
I was wondering if there is any way to count the number of days in a month in SQL Server 2008... Something similar to this JavaScript code I wrote:
```
function daysInMonth(month, year) {
return new Date(year, month, 0).getDate();
}
var dateForApp = new Date();
var MonthForApp = (dateForApp.getMonth() + 1);
var yearForApp = dateForApp.getFullYear();
var daysofMonth = new Array();
for (var i = 1; i <= daysInMonth(MonthForApp, yearForApp); i++) {
daysofMonth.push(parseInt(i));
} // Resulting in something like this (1,2,3,4,5,6,7,8,9,10,11,12...etc)
```
I now need to figure out how to do this in SQL... I so far have the following:
```
declare @Date datetime
select @Date = Getdate()
select datepart(dd,dateadd(dd,-1,dateadd(mm,1,cast(cast(year(@Date) as varchar)+'-'+cast(month(@Date) as varchar)+'-01' as datetime))))
```
which will let me know how many days there are (31) in the month, but I am now not quite sure how to get the actual counting up done... I was trying while loops etc, but have had no success. Has anyone got any ideas or a thread they could point me to? (I found nothing while searching the net) | A recursive CTE can provide a days table;
```
declare @date datetime = '01 feb 1969'
declare @days int = datediff(day, dateadd(day, 1 - day(@date), @date),
dateadd(month, 1, dateadd(day, 1 - day(@date), @date)))
;with days(day) as
(
select 1 as day
union all
select day + 1
from days
where day < @days
)
select day from days
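-- Aside (assumption, not from the original answer): SQLite and PostgreSQL
-- spell this WITH RECURSIVE; for example, hard-coding February 1969's 28 days:
--   WITH RECURSIVE days(day) AS (
--       SELECT 1 UNION ALL SELECT day + 1 FROM days WHERE day < 28
--   )
--   SELECT day FROM days;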
``` | Have you tried using [DATEDIFF](https://stackoverflow.com/questions/691022/how-to-determine-the-number-of-days-in-a-month-in-sql-server)? | Count days of month | [
"",
"sql",
"sql-server",
""
] |
I have a bit of a problem returning the max value of a group using MySQL.
this is my columns
```
id | date | time | tran
--------+----------------+----------+----------
1 | 2014/03/31 | 17:23:00 | 1234
1 | 2014/03/31 | 17:24:00 | 3467
2 | 2014/03/31 | 17:26:00 | 2345
```
My query
```
SELECT id, max(date), MAX(time) , tran
FROM table
GROUP BY id
```
RESULT
```
id | date | time | tran
--------+----------------+----------+----------
1 | 2014/03/31 | 17:26:00 | 1234
2 | 2014/03/31 | 17:24:00 | 2345
```
Expected answer should be
```
id | date | time | tran
--------+----------------+----------+----------
1 | 2014/03/31 | 17:26:00 | 3467
2 | 2014/03/31 | 17:24:00 | 2345
``` | You can do this by using self join on the maxima from same table
```
SELECT t.* FROM
Table1 t
JOIN (
SELECT id, max(date) date, MAX(time) time , tran
FROM Table1
GROUP BY id) t2
ON(t.id=t2.id AND t.date=t2.date AND t.time=t2.time)
```
## [Fiddle](http://sqlfiddle.com/#!2/a75d9/1)
There may be differences between the maxima of date and time, so you should use a single field that stores date and time together. For your current schema this one is optimal:
```
SELECT t.* FROM
Table1 t
JOIN (
SELECT id,
max(concat(date,' ',time)) `date_time`
FROM Table1
GROUP BY id) t2
ON(t.id=t2.id AND (concat(t.date,' ',t.time))=t2.date_time )
```
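A runnable check of the concat-based version on the question's rows (SQLite here, which uses `||` instead of MySQL's `concat()`):

```python
import sqlite3

# For id 1 the latest datetime is 17:24:00 (tran 3467); for id 2 it is
# 17:26:00 (tran 2345). Data taken from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (id INT, date TEXT, time TEXT, tran INT);
    INSERT INTO Table1 VALUES
        (1, '2014/03/31', '17:23:00', 1234),
        (1, '2014/03/31', '17:24:00', 3467),
        (2, '2014/03/31', '17:26:00', 2345);
""")

rows = con.execute("""
    SELECT t.* FROM Table1 t
    JOIN (SELECT id, max(date || ' ' || time) AS date_time
          FROM Table1 GROUP BY id) t2
      ON t.id = t2.id AND (t.date || ' ' || t.time) = t2.date_time
    ORDER BY t.id
""").fetchall()
print(rows)
```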
## [Fiddle](http://sqlfiddle.com/#!2/a75d9/4) | There is a great article on this theme which I read every time I am facing this problem. You might want to [check it](http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/)
Applying this to your query, it will look like:
```
SELECT *
FROM `table`
WHERE (
SELECT count(*) FROM `table` AS t
WHERE `t`.`id` = `table`.`id` AND `t`.`tran` <= `table`.`tran`
) < 2;
```
The best thing about this approach is that you can easily get the top 2, 3 or any number of rows you need | MYSQL returning the max value per group | [
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I have a huge table where a new row could be an "adjustment" to a previous row.
TableA:
```
Id | RefId | TransId |Score
----------------------------------
101 | null | 3001 | 10
102 | null | 3002 | 15
103 | null | 3003 | 15
104 | 101 | | -5
105 | null | 3004 | 5
106 | 105 | | -10
107 | null | 3005 | 15
```
TableB:
```
TransId | Person
----------------
3001 | Harry
3002 | Draco
3003 | Sarah
3004 | Ron
3005 | Harry
```
In the table above, Harry was given 10 points in TableA.Id=101, had 5 of those points deducted in TableA.Id=104, and was then given another 15 points in TableA.Id=107.
What I want to do here, is return all the rows where Harry is the person connected to the score. The problem is that there is no name attached to a row where points are deducted, only to the rows where scores are given (through TableB). However, scores are always deducted from a previously given score, where the original transaction's Id is referred to in the tables as "RefId".
```
SELECT
SUM TableA.Score
FROM TableA
LEFT JOIN TableB ON TableA.Trans=TableB.TransId
WHERE 1
AND TableB.Person='Harry'
GROUP BY TableA.Score
```
That only gives me the points given to Harry, not the deducted ones. I would like to get the total score returned, which would be 20 for Harry. (10-5+15=20)
How do I get MySQL to include the negative scores as well? I feel like it should be possible using the TableA.RefId. Something like "if there is a RefId, get the score from this row, but look at the corresponding TableA.Id for the rest of the data". | try this:
```
select sum(sum1 + sums) as sum_all from (
SELECT t1.id,T1.Score sum1, coalesce(T2.score,0) sums
FROM Table1 t1
inner JOIN Table2 ON T1.TransId=Table2.TransId
left JOIN Table1 t2 ON t2.RefId = t1.id
WHERE Table2.Person='Harry'
)c
```
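The same query can be checked end-to-end with SQLite on the question's data (table names as in the answer):

```python
import sqlite3

# Harry owns trans 3001 (+10, adjusted by -5 via RefId 101) and 3005 (+15),
# so the total should come out at 10 - 5 + 15 = 20.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (id INT PRIMARY KEY, RefId INT, TransId INT, Score INT);
    CREATE TABLE Table2 (TransId INT PRIMARY KEY, Person TEXT);
    INSERT INTO Table1 VALUES
        (101, NULL, 3001, 10), (102, NULL, 3002, 15), (103, NULL, 3003, 15),
        (104, 101, NULL, -5), (105, NULL, 3004, 5), (106, 105, NULL, -10),
        (107, NULL, 3005, 15);
    INSERT INTO Table2 VALUES
        (3001, 'Harry'), (3002, 'Draco'), (3003, 'Sarah'),
        (3004, 'Ron'), (3005, 'Harry');
""")

total = con.execute("""
    SELECT sum(sum1 + sums) FROM (
        SELECT t1.id, t1.Score AS sum1, coalesce(t2.Score, 0) AS sums
        FROM Table1 t1
        JOIN Table2 ON t1.TransId = Table2.TransId
        LEFT JOIN Table1 t2 ON t2.RefId = t1.id
        WHERE Table2.Person = 'Harry'
    ) c
""").fetchone()[0]
print(total)  # 20
```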
[**DEMO HERE**](http://sqlfiddle.com/#!2/7244ea/2)
Output:
```
SUM_ALL
20
``` | ```
Select sum(total) AS total
From tableb
Join
(
Select t1.transid, sum(score) AS total
From tablea t1
Join tablea t2 on t1.id = t2.refid
group by t1.transid
) x on x.transid = tableb.transid
Where TableB.Person='Harry'
``` | Sum Log values from table using second table | [
"",
"mysql",
"sql",
""
] |
I understand that this has been asked already. However, I am relatively new at SQL and MySQL, so I was confused by the other answers.
Say I have a table of historical financial data, and I have multiple records in place.
```
Date | Close Price | % Change
2014-03-25 | 3.58 | ?
2014-03-24 | 3.57 | ?
2014-03-21 | 3.61 | ?
```
I have the date and close price in the table. I would like to find the % change from day to day. For example, from 2014-03-24 to 2014-03-25 is about +0.28%.
I'm completely stuck on how to do this for all records without running a large number of individual queries. I'm thinking it's some sort of join, but this is where I'm confused as I have not done these before. | Something like this would do it I think
```
SELECT x.Date, x.Close_Price, (((x.Close_Price / y.Close_Price) - 1) * 100) AS '% Change'
FROM
(
SELECT a.Date AS aDate, MAX(b.Date) AS aPrevDate
FROM SomeTable a
INNER JOIN SomeTable b
WHERE a.Date > b.Date
GROUP BY a.Date
) Sub1
INNER JOIN SomeTable x ON Sub1.aDate = x.Date
INNER JOIN SomeTable y ON Sub1.aPrevDate = y.Date
ORDER BY x.Close_Price DESC
```
This will cope when the days are not one after the other (e.g. missing records for non-processing days).
The sub query gets each date, and the max date that is less than it. This is then joined against the table to get the full records for that date and the previous one, and then the calculation can be done.
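A runnable sketch of this derived-table approach (SQLite, invented prices; note the gap between 2014-03-21 and 2014-03-24 is handled fine):

```python
import sqlite3

# Each date is paired with the greatest earlier date, then the percent
# change is computed from the two close prices.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE treasury (Date TEXT PRIMARY KEY, Close_Price REAL);
    INSERT INTO treasury VALUES
        ('2014-03-21', 3.61), ('2014-03-24', 3.57), ('2014-03-25', 3.58);
""")

rows = con.execute("""
    SELECT x.Date, x.Close_Price,
           ((x.Close_Price / y.Close_Price) - 1) * 100 AS pct_change
    FROM (SELECT a.Date AS aDate, MAX(b.Date) AS aPrevDate
          FROM treasury a JOIN treasury b ON a.Date > b.Date
          GROUP BY a.Date) Sub1
    JOIN treasury x ON Sub1.aDate = x.Date
    JOIN treasury y ON Sub1.aPrevDate = y.Date
    ORDER BY x.Date
""").fetchall()
for r in rows:
    print(r)  # roughly -1.11% for 03-24 and +0.28% for 03-25
```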
EDIT - and update based on this SELECT:-
```
UPDATE treasury x
INNER JOIN
(
SELECT a.Date AS aDate, MAX(b.Date) AS aPrevDate
FROM treasury a
INNER JOIN treasury b
WHERE a.Date > b.Date
GROUP BY a.Date
) Sub1 ON Sub1.aDate = x.Date
INNER JOIN treasury y ON Sub1.aPrevDate = y.Date
SET x.PercentChange = (((x.Close_Price / y.Close_Price) - 1) * 100)
``` | ```
Select S1.Date as Today, (S1.ClosePrice - S2.ClosePrice) / S2.ClosePrice as Change
FROM Stock as S1 INNER JOIN Stock as S2
ON S1.Date = (S2.Date+1)
``` | Calculating Percent Change between two rows | [
"",
"sql",
""
] |
I'm trying to implement a FIFO queue using a SQL table.
I have the following SQL (modified for posting), the join and param usage are important to the way this process works.
```
With cte as (
select top(1) q.* from queue q with (readpast)
inner join MyTable a on q.id = a.myTableID AND myTable.procID = @myParam
order by q.Data asc
)
delete from cte
output
deleted.ID,
deleted.col1
```
Running this statement returns the error 'View or function 'cte' is not updatable because the modification affects multiple base tables.'
I understand why the error is thrown, what I can't figure out is how to fix it. Any advice would be much appreciated! | You can use `exists()` instead of the inner join to `MyTable` in the CTE.
```
with cte as
(
select top(1) q.id,
q.col1
from queue q with (readpast)
where exists(
select *
from MyTable a
where q.id = a.myTableID AND
a.procID = @myParam
)
order by q.Data asc
)
delete from cte
output deleted.ID, deleted.col1;
``` | Something like this?
```
With cte as (
select top(1) q.* from queue q with (readpast)
inner join MyTable a on q.id = a.myTableID AND myTable.procID = @myParam
order by q.Data asc
)
delete from queue
Where ID in (Select Id from cte)
``` | Delete from CTE with join | [
"",
"sql",
"sql-server",
"t-sql",
"common-table-expression",
"dml",
""
] |
I have two tables
```
activity
id | user_id | time | activity_id
1 | 1 | | 3
2 | 1 | | 1
and preferences
user_id | running | cycling | driving
1 | TRUE | FALSE | FALSE
i need result set of
id | user_id | time |
2 | 1 | |
```
I only need rows from the first table whose values are set to TRUE in the preferences table.
e.g. `activity_id` for running is 1, which is set to TRUE in the preferences table, so that row is returned while the others aren't. | If you can edit the schema, it would be better like this:
```
activity
id | name
1 | running
2 | cycling
3 | driving
user_activity
id | user_id | time | activity_id
1 | 1 | | 3
2 | 1 | | 1
preferences
user_id | activity_id
1 | 1
```
A row in preferences indicates a `TRUE` value from your schema. No row indicates a `FALSE`.
Then your query would simply be:
```
SELECT ua.id, ua.user_id, ua.time
FROM user_activity ua
JOIN preferences p ON ua.user_id = p.user_id
AND ua.activity_id = p.activity_id
```
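The remodelled schema can be exercised quickly in SQLite (data from the question; user 1 prefers only running, activity 1):

```python
import sqlite3

# Only the user_activity row whose (user_id, activity_id) pair appears in
# preferences should be returned: the running row with id 2.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE activity (id INT PRIMARY KEY, name TEXT);
    CREATE TABLE user_activity (id INT, user_id INT, time TEXT, activity_id INT);
    CREATE TABLE preferences (user_id INT, activity_id INT);
    INSERT INTO activity VALUES (1, 'running'), (2, 'cycling'), (3, 'driving');
    INSERT INTO user_activity VALUES (1, 1, NULL, 3), (2, 1, NULL, 1);
    INSERT INTO preferences VALUES (1, 1);
""")

rows = con.execute("""
    SELECT ua.id, ua.user_id, ua.time
    FROM user_activity ua
    JOIN preferences p ON ua.user_id = p.user_id
                      AND ua.activity_id = p.activity_id
""").fetchall()
print(rows)  # [(2, 1, None)]
```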
If you want to see the activity name in the results:
```
SELECT ua.id, ua.user_id, ua.time, activity.name
FROM user_activity ua
JOIN preferences p ON ua.user_id = p.user_id
AND ua.activity_id = p.activity_id
JOIN activity ON ua.activity_id = activity.id
``` | What I would probably do is join the tables on a common column; it looks like `user_id` is a common column in this case, which gives access to the columns in both tables to query against in the where clause of the query.
Which type of join depends on what information you want from preferences
[Handy Visual Guide for joins](http://www.codeproject.com/KB/database/Visual_SQL_Joins/Visual_SQL_JOINS_orig.jpg)
So you could query
`SELECT * FROM activity LEFT JOIN preferences ON activity.user_id = preferences.user_id WHERE preferences.columnIWantToBeTrue = true`
I'm using `left join` since you mentioned you want the values from the first table based on the second table. | select statement with only rows which have set true in second table | [
"",
"mysql",
"sql",
""
] |
How do I select rows whose date range overlaps a given pair of dates? For example:
```
start_date - end_date -- title
2014-01-28 2014-02-03 test
2014-02-01 2014-02-15 ests
2014-02-28 2014-03-03 sets
2014-03-01 2014-03-10 sste
```
the problem is, I want to select data between 2014-02-02 and
2014-02-28, with the first three of them selected because the first
three rows fall within the 2nd month.
I tried this, but it does not work:
```
SELECT title FROM my_tables WHERE start_date BETWEEN 2014-02-02 AND 2014-02-28
```
How can I make it work? | Two periods overlap when one begins before the other ends, and the other ends after the first begins. Here is the correct logic:
```
SELECT title
FROM my_tables
WHERE start_date <= '2014-02-28' and
end_date >= '2014-02-02';
```
Note that date constants need to be in single quotes.
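The overlap predicate can also be verified on the question's rows with SQLite (same table and column names):

```python
import sqlite3

# Ranges that overlap 2014-02-02 .. 2014-02-28 are the first three rows;
# the fourth starts after the window ends, so it is excluded.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE my_tables (start_date TEXT, end_date TEXT, title TEXT);
    INSERT INTO my_tables VALUES
        ('2014-01-28', '2014-02-03', 'test'),
        ('2014-02-01', '2014-02-15', 'ests'),
        ('2014-02-28', '2014-03-03', 'sets'),
        ('2014-03-01', '2014-03-10', 'sste');
""")

titles = [r[0] for r in con.execute("""
    SELECT title FROM my_tables
    WHERE start_date <= '2014-02-28' AND end_date >= '2014-02-02'
""")]
print(titles)  # ['test', 'ests', 'sets']
```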
[Here](http://www.sqlfiddle.com/#!2/9a0108/1) is a SQL Fiddle showing it working. | A date range overlaps with another when either the start or the end is within that other range:
```
given range: |----|
range 1 |----| overlaps with given range, because end_date is in that range
range 2 |----| overlaps with given range, because start_date is in that range
range 3 |----| does not overlap with given range, because both start and end date are not in that range
```
```
SELECT title FROM my_tables
WHERE (start_date BETWEEN '2014-02-02' AND '2014-02-28')
OR (end_date BETWEEN '2014-02-02' AND '2014-02-28');
``` | Mysql Between in Between | [
"",
"mysql",
"sql",
"between",
""
] |