| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am trying to build a query that will look up a `product_subcategory` based on reference data in a user-entered table, joining two tables together. My database is SQL Server 2012 Express.
First, the `products` table has three columns: `Product_id` (unique identifier), `event_id` (INT data type), and `product_category` (needs to be alphanumeric; currently varchar(32)).
Example `Products` table data:
```
Product_id event_id product_category
1 20 100
2 20 105
3 20 200
4 21 100
5 21 200
6 21 203
7 22 105
8 22 207
```
Second, the `events` table has two columns: `event_id` (unique identifier, INT data type) and `zone` (float data type; not sure why this was set up as float, it probably should have been INT, but it's a pre-existing table and I don't want to change it).
```
event_id zone
20 1
21 2
22 3
```
Third, the `subcategory` table has four columns: `subcategory_id` (unique identifier, INT data type), `zone` (joins to the `zone` column in the `events` table, INT data type), `category_lookup` (varchar(max) data type), and `product_subcategory` (varchar(50) data type). I am creating this table for this project, so I can change its structure or data types to whatever is needed; I don't have that flexibility with the other tables.
Example `Subcategory` table data:
```
subcategory_id zone category_lookup product_subcategory
1 1 '1%' 25
2 1 '2%' 23
3 2 '1%' 26
4 2 '2%' 30
```
I want to build a query that will search the product table and match a `zone`, `product_category`, and `product_subcategory` together based on the value in the `subcategory.category_lookup` column.
The data that I want returned from the query is:
```
product_ID zone product_category product_subcategory
1 1 100 25
2 1 105 25
3 1 200 23
4 2 100 26
5 2 200 30
6 2 203 30
7 3 105 NULL or 'N/A'
8 3 207 NULL or 'N/A'
```
The logic for looking up the matching subcategory will be similar to the below (this is essentially what is stored in the subcategory table; the text in quotes is what I mean by reference data, and will be user entered):
```
IE... if zone = 1 and product_category “begins with 1” then product_subcategory = 25
IE... if zone = 1 and product_category “begins with 2” then product_subcategory = 23
IE... if zone = 2 and product_category “begins with 1” then product_subcategory = 26
IE... if zone = 2 and product_category “begins with 2” then product_subcategory = 30
```
I do understand that one issue with my logic is that if multiple subcategories match one product it will throw an error, but I think I can code around that once I get this part working.
I am fine going a different direction with this project, but this is the first way I decided to tackle it. The most important requirement is that the `product_subcategory` values are located in a separate user-entered table, and there needs to be user-entered logic, as discussed above, to determine the `product_subcategory` based on `zone` and `product_category`.
I am not a SQL guru at all so I don’t even know where to start to handle this problem. Any advice is appreciated.
Based on answers I have received so far I have come up with this:
```
SELECT p.product_id, p.event_id, e.zone, p.product_category, sc.product_subcategory
FROM Products p
LEFT JOIN events e on p.event_id = e.event_id
LEFT JOIN SubCategory sc ON e.zone = sc.zone AND CAST(p.product_category as varchar(max)) like sc.category_lookup
```
But unfortunately it's only returning NULL for all of the `product_subcategory` results.
Any additional help is appreciated.
Thanks. | This should do the trick; you will just need to change `nchar(10)` in `CAST(p.zone as nchar(10))` to the correct data type for your `category_lookup` column, as I assume `zone` in `Products` is an `int` whereas the lookup column is some string-based type:
```
SELECT p.product_id, p.zone, p.product_category, sc.product_subcategory
FROM Products p
LEFT JOIN SubCategory sc ON p.zone = sc.zone
AND CAST(p.zone as nchar(10)) like sc.category_lookup
```
Based on your updates, the following should work:
```
SELECT p.product_id, p.event_id, e.zone, p.product_category,
sc.product_subcategory
FROM Products p
INNER JOIN events e on p.event_id = e.event_id
LEFT OUTER JOIN SubCategory sc ON e.zone = sc.zone
AND CAST(e.zone as nchar(250)) LIKE CAST(sc.category_lookup as nchar(250))
```
**Update based on comments:**
```
SELECT p.product_id, p.event_id, e.zone, p.product_category,
sc.product_subcategory
FROM Products p
INNER JOIN events e on p.event_id = e.event_id
LEFT OUTER JOIN SubCategory sc ON e.zone = sc.zone
AND CAST(p.product_category as nchar(250)) LIKE sc.category_lookup
```
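If you want to sanity-check this join logic outside the database, here is a small self-contained sketch using Python's built-in sqlite3 (an assumption on my part: SQLite stands in for SQL Server here, and the table names and sample data mirror the question):

```python
import sqlite3

# In-memory schema mirroring the question's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (product_id INTEGER, event_id INTEGER, product_category TEXT);
CREATE TABLE events (event_id INTEGER, zone INTEGER);
CREATE TABLE subcategory (subcategory_id INTEGER, zone INTEGER,
                          category_lookup TEXT, product_subcategory TEXT);
INSERT INTO products VALUES (1,20,'100'),(2,20,'105'),(3,20,'200'),
                            (4,21,'100'),(5,21,'200'),(6,21,'203'),
                            (7,22,'105'),(8,22,'207');
INSERT INTO events VALUES (20,1),(21,2),(22,3);
INSERT INTO subcategory VALUES (1,1,'1%','25'),(2,1,'2%','23'),
                               (3,2,'1%','26'),(4,2,'2%','30');
""")

# LEFT JOIN on zone plus a LIKE against the user-entered pattern column.
rows = conn.execute("""
    SELECT p.product_id, e.zone, p.product_category, sc.product_subcategory
    FROM products p
    JOIN events e ON p.event_id = e.event_id
    LEFT JOIN subcategory sc
           ON e.zone = sc.zone
          AND p.product_category LIKE sc.category_lookup
    ORDER BY p.product_id
""").fetchall()
```

Rows 7 and 8 come back with `None` for the subcategory because no lookup row exists for zone 3, exactly as in the desired output.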
Working sample: **[SQLFiddle](http://sqlfiddle.com/#!3/437aa/5)** | Not tested, but is this what you're looking for?
```
select a.product_ID, a.zone, a.product_category, b.product_subcategory
from Products a
inner join Subcategory b on ((a.zone = b.zone) and (a.product_category like b.category_lookup))
``` | SQL Lookup data in joined table based on LIKE value stored in joined table column | [
"",
"sql",
"sql-server",
"database",
"sql-server-2012-express",
""
] |
Is there a query that would merge previous arrays into a cumulative array, that would result in:
```
id array_field c_array
---------------------------------------------
1 {one,two} {one,two}
2 {three} {one,two,three}
3 {four,five} {one,two,three,four,five}
4 {six} {one,two,three,four,five,six}
``` | It depends on what you are working with. It seems your base table holds text arrays (`text[]`).
* You can use any aggregate function as window function. [Per documentation:](http://www.postgresql.org/docs/current/interactive/functions-window.html)
> In addition to these functions, any built-in or user-defined aggregate
> function can be used as a window function
* There is `array_agg()` but it operates on scalar types, *not* on array types.
* But you can [create your own aggregate function](http://www.postgresql.org/docs/current/interactive/sql-createaggregate.html) easily.
To aggregate array types, create this aggregate function:
```
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
,STYPE = anyarray
,INITCOND = '{}'
);
```
Details in this related answer:
[Selecting data into a Postgres array](https://stackoverflow.com/questions/11762398/selecting-data-into-a-postgres-array-format/11763245#11763245)
Now, the job is strikingly simple:
```
SELECT array_agg_mult(array_field) OVER (ORDER BY id) AS result_array
FROM tbl
```
Since the aggregate is defined for [polymorphic types](http://www.postgresql.org/docs/current/interactive/extend-type-system.html#EXTEND-TYPES-POLYMORPHIC), this works for *any array type*, not just `text[]`.
[**SQL Fiddle**](http://sqlfiddle.com/#!15/cef14/2) including alternative solution for text representation in a list. | You can use [recursive CTE](http://www.postgresql.org/docs/current/static/queries-with.html).
[SQLFiddle](http://sqlfiddle.com/#!15/75fe3/2)
Data:
```
-- drop table if exists sample_data;
create table sample_data (id int, arr varchar[]);
insert into sample_data
values
(1,ARRAY['One','Two','Three']),
(2,ARRAY['Four','Six']),
(3,ARRAY['Seven']);
```
Query:
```
with recursive cte as (
select
id, arr, arr as merged_arr
from
sample_data
where
id = 1
union all
select
sd.id, sd.arr, cte.merged_arr || sd.arr
from
cte
join sample_data sd on (cte.id + 1 = sd.id)
)
select * from cte
```
Result:
```
1;{One,Two,Three};{One,Two,Three}
2;{Four,Six};{One,Two,Three,Four,Six}
3;{Seven};{One,Two,Three,Four,Six,Seven}
``` | Cumulative array from previous rows array? | [
"",
"sql",
"arrays",
"postgresql",
"window-functions",
""
] |
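To see what the cumulative aggregation above is doing, here is the same running concatenation sketched in plain Python (illustration only; the actual solution runs in PostgreSQL):

```python
from itertools import accumulate

# Rows as (id, array_field); the running array for each row is the
# concatenation of all array_field values up to and including that row.
rows = [(1, ['one', 'two']),
        (2, ['three']),
        (3, ['four', 'five']),
        (4, ['six'])]

c_arrays = list(accumulate((arr for _, arr in rows),
                           lambda acc, arr: acc + arr))
```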
I have two tables:
* **Actions**
* **Hosts**
Sample data for the tables is as below:
**Actions**
```
| ID (bigint) | ACTION_TIME (datetime) | ACTION_NAME (varchar) |
-----------------------------------------------------------------------
| 1 | 2013-08-03 04:42:55 | test |
| 2 | 2013-10-01 06:22:45 | test56 |
| 3 | 2013-12-12 08:44:24 | cloud |
| 4 | 2014-01-05 04:20:56 | demo |
| 5 | 2014-03-10 08:20:26 | console |
| 6 | 2014-03-12 05:48:27 | temp |
```
**Hosts**
```
| ID (bigint) | ACTION_ID (bigint) | DEVICE_ID (bigint) | CRITICAL_COUNT (bigint) | HIGH_COUNT (bigint) |
------------------------------------------------------------------------------------------------------------------------
| 1 | 1 | 1 | 200 | 400 |
| 2 | 1 | 2 | 100 | 200 |
| 3 | 2 | 1 | 350 | 100 |
| 4 | 4 | 3 | 2 | 50 |
| 5 | 5 | 2 | 4 | 2 |
| 6 | 6 | 2 | 50 | 100 |
```
I need to use this data to generate a line graph (trend). I was able to figure out how to generate the graph, but I am unable to write the proper SQL to retrieve the data.
As shown in the tables above, there are actions, and each action involves one or more hosts. Each host has a critical_count and a high_count.
Now I want to get monthly sums of critical_count and high_count.
There is one very important condition here: if more than one action has the same device_id (host) in a month, only the latest data (based on action_time) for that device should be used. The result should include data for the last 12 months (including the present month); as of this month that is Apr 2013 to Mar 2014.
For example: For the above data, result should be:
```
| MONTH | CRITICAL_COUNT | HIGH_COUNT |
---------------------------------------------------
| 4 | 0 | 0 |
| 5 | 0 | 0 |
| 6 | 0 | 0 |
| 7 | 0 | 0 |
| 8 | 300 | 600 |
| 9 | 0 | 0 |
| 10 | 350 | 100 |
| 11 | 0 | 0 |
| 12 | 0 | 0 |
| 1 | 2 | 50 |
| 2 | 0 | 0 |
| 3 | 50 | 100 |
```
I am working with the below SQL Fiddle:
<http://sqlfiddle.com/#!2/d179d>
How can I accomplish my goal? | ```
select MONTH, CRITICAL_COUNT, HIGH_COUNT from
(
select
YEAR(action_time) YEAR,
MONTH(action_time) MONTH,
SUM(critical_count) CRITICAL_COUNT,
SUM(high_count) HIGH_COUNT
FROM hosts H1 inner join actions A1 on A1.id = H1.action_id
WHERE DATE_SUB(CURRENT_DATE,INTERVAL 1 YEAR) <= action_time
AND A1.action_time =
(
select max(A2.action_time)
from hosts H2 inner join actions A2 on A2.id = H2.action_id
where
H2.device_id = H1.device_id
and MONTH(A2.action_time) = MONTH(A1.action_time)
and YEAR(A2.action_time) = YEAR(A1.action_time)
)
group by MONTH(action_time), year(action_time)
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 11 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 11 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 10 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 10 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 9 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 9 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 8 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 8 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 7 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 7 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 6 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 6 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 5 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 5 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 4 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 4 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 3 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 3 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 2 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 2 MONTH)),0,0
Union All Select YEAR(DATE_SUB(CURRENT_DATE,INTERVAL 1 MONTH)), MONTH(DATE_SUB(CURRENT_DATE,INTERVAL 1 MONTH)),0,0
) ALL_MONTHS
where (CRITICAL_COUNT > 0 or HIGH_COUNT> 0)
or (CRITICAL_COUNT = 0 and HIGH_COUNT = 0)
group by YEAR,MONTH
order by YEAR,MONTH
```
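The core rule (within each month keep only the latest action per device, then sum) can be checked with a quick sqlite3 sketch. Assumptions: SQLite stands in for MySQL, so `strftime` replaces `MONTH()`/`YEAR()`, and the zero-filled padding months are omitted:

```python
import sqlite3

# Sample data copied from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE actions (id INTEGER, action_time TEXT, action_name TEXT);
CREATE TABLE hosts (id INTEGER, action_id INTEGER, device_id INTEGER,
                    critical_count INTEGER, high_count INTEGER);
INSERT INTO actions VALUES
 (1,'2013-08-03 04:42:55','test'),(2,'2013-10-01 06:22:45','test56'),
 (3,'2013-12-12 08:44:24','cloud'),(4,'2014-01-05 04:20:56','demo'),
 (5,'2014-03-10 08:20:26','console'),(6,'2014-03-12 05:48:27','temp');
INSERT INTO hosts VALUES
 (1,1,1,200,400),(2,1,2,100,200),(3,2,1,350,100),
 (4,4,3,2,50),(5,5,2,4,2),(6,6,2,50,100);
""")

# Keep a host row only if its action is the latest one for that device
# within the same month, then sum per month.
rows = conn.execute("""
    SELECT strftime('%Y-%m', a.action_time) AS ym,
           SUM(h.critical_count), SUM(h.high_count)
    FROM hosts h
    JOIN actions a ON a.id = h.action_id
    WHERE a.action_time = (
        SELECT MAX(a2.action_time)
        FROM hosts h2 JOIN actions a2 ON a2.id = h2.action_id
        WHERE h2.device_id = h.device_id
          AND strftime('%Y-%m', a2.action_time) = strftime('%Y-%m', a.action_time)
    )
    GROUP BY ym ORDER BY ym
""").fetchall()
```

Note that March 2014 sums to (50, 100): device 2 appears in both action 5 and action 6 that month, and only the later action 6 row is counted.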
[Fiddle Demo](http://sqlfiddle.com/#!2/3f11d/8) | Here is the working Fiddle:
<http://sqlfiddle.com/#!2/d179d/57>
And the query goes something like this:
```
SELECT
m.month,
ifnull(SUM(hosts.critical_count),0) as host_critical,
ifnull(SUM(hosts.high_count),0) as host_high
FROM
(SELECT
MAX(actions.action_time) as maxtime
FROM actions
GROUP BY MONTH(actions.action_time)) a
LEFT OUTER JOIN actions ON a.maxtime = actions.action_time
LEFT OUTER JOIN hosts ON actions.id = hosts.action_id
RIGHT OUTER JOIN
(SELECT 4 AS MONTH
UNION SELECT 5 AS MONTH
UNION SELECT 6 AS MONTH
UNION SELECT 7 AS MONTH
UNION SELECT 8 AS MONTH
UNION SELECT 9 AS MONTH
UNION SELECT 10 AS MONTH
UNION SELECT 11 AS MONTH
UNION SELECT 12 AS MONTH
UNION SELECT 1 AS MONTH
UNION SELECT 2 AS MONTH
UNION SELECT 3 AS MONTH) m ON m.MONTH = MONTH(a.maxtime)
GROUP BY MONTH(actions.action_time),hosts.action_id,m.month
ORDER BY m.month < 4, m.month;
```
Please note that I have not handled multiple years in this code, since you wanted one fiscal year. If you need the year part too, you can use YEAR() and extend it accordingly.
Hope this helps. Good Luck! | Optimising a SQL query | [
"",
"mysql",
"sql",
""
] |
Suppose I have a column in a table, `tags` for example, where an entry can be tagged with various keywords. Is it acceptable for an entry to have a list in that column instead of a single value?
So for example, if I were putting a question into my table and wanted to tag it as "math" and "physics", would it be acceptable to put `math, physics` into the `tags` column? And would it then be possible to write a query along the lines of:
```
SELECT * FROM questions WHERE tags CONTAINS 'physics'
```
I know that `CONTAINS` is not a SQL operator but is there something that does that functionality? Or would i need to pull all entries from the table and then one by one search their tags for physics? | ## Never, never, never store multiple values in one column!
Storing multiple values in one column violates the very first rule of DB normalization (atomic values, i.e. first normal form).
Use another table to map the tags to the questions:
```
questions table
---------------
id
title
.....
tags table
----------
id
name
question_tags table
-------------------
question_id
tag_id
```
After that you can look for questions that have a specific tag like this
```
select q.*
from questions q
join question_tags qt on qt.question_id = q.id
join tags t on t.id = qt.tag_id
where t.name = 'physics'
``` | It's not really normalized (it breaks first normal form) to have a column full of tags. Rather, you might consider a tags table that relates to the questions table. The following tables set up a many-to-many relationship between questions and tags.
```
Questions:
QuestionID|*
Tags:
TagID|TagText
QuestionTags:
QuestionID|TagID
```
Then if you wanted all the physics questions a query something like this:
```
Select * from questions
where questionid in (select questionID from questionTags
                     where tagid in (select tagid from tags
                                     where tagtext = 'physics'))
``` | Is it acceptable to have a list in a MySQL column? Are lists searchable by SQL query? | [
"",
"mysql",
"sql",
"database",
""
] |
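A minimal runnable sketch of the normalized tag schema above, using Python's built-in sqlite3 (the sample titles are made up for illustration):

```python
import sqlite3

# Three tables: questions, tags, and the junction table between them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE questions (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE question_tags (question_id INTEGER, tag_id INTEGER);
INSERT INTO questions VALUES (1,'projectile motion'),(2,'integrals');
INSERT INTO tags VALUES (1,'math'),(2,'physics');
INSERT INTO question_tags VALUES (1,1),(1,2),(2,1);
""")

# Find all questions carrying a given tag via the junction table.
physics_qs = [r[0] for r in conn.execute("""
    SELECT q.title
    FROM questions q
    JOIN question_tags qt ON qt.question_id = q.id
    JOIN tags t ON t.id = qt.tag_id
    WHERE t.name = 'physics'
""")]
```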
How do I structure an SQL statement that runs across multiple flat, unrelated tables and displays the result of the select together with the name of the table the result came from?
The scenario is that I have several tables with the same column names in each. It is data I have received from outside parties, which I store as-is in different tables.
The tables look like:
```
Table 1: pid, parent_name, student_name, student_number, class_name, columnN
Table 2: pid, previous_school, previous_school, student_number, columnN
Table 3: pid, student_name, student_number, parent_name, column4, columnN
Table 14: pid, student_number, parent_name, column4, columnN
Table N: pid, previous_school, parent_name, column4, columnN
```
I need an SQL statement that searches for `student_name` across all tables
In pseudo code: `for each table, find a student named john doe and return to me the row where you got the result and the table where you found the result`
Give the result in the following presentation:
```
john doe, Table 1, pid
john doe, Table 9, pid
```
To make it a bit more complicated, the column `student_name` might not exist in all tables, so the query needs to proceed gracefully if the column isn't there. | You are looking for dynamic SQL. Assemble your query from the system catalog automatically:
```
SELECT string_agg('SELECT student_name, '''
|| c.oid::regclass || ''' AS tbl, pid FROM '
|| c.oid::regclass
|| $$ WHERE student_name = 'John Doe'$$
, E'\nUNION ALL\n')
FROM pg_namespace n
JOIN pg_class c ON c.relnamespace = n.oid
WHERE n.nspname = 'public' -- schema name where your tables lie
AND c.relname LIKE 't%' -- and / or filter table names
AND EXISTS (
SELECT 1 FROM pg_attribute
WHERE attrelid = c.oid
AND attname = 'student_name' -- make sure column exists
AND NOT attisdropped -- and is alive
);
```
Produces the query string:
```
SELECT student_name, 'tbl1' AS tbl, pid FROM tbl1 WHERE student_name = 'John Doe'
UNION ALL
SELECT student_name, 'tbl2' AS tbl, pid FROM tbl2 WHERE student_name = 'John Doe'
UNION ALL
SELECT student_name, 'tbl3' AS tbl, pid FROM tbl3 WHERE student_name = 'John Doe'
...
```
Then run it in a second call or completely automate it with a PL/pgSQL function using [`EXECUTE`](http://www.postgresql.org/docs/current/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN). Example:
[Select a dynamic set of columns from a table and get the sum for each](https://stackoverflow.com/questions/12028041/select-a-dynamic-set-of-columns-from-a-table-and-get-the-sum-for-each/12031145#12031145)
This query produces [*safe* code with sanitized identifiers preventing SQL injection.](https://stackoverflow.com/questions/10705616/table-name-as-a-postgresql-function-parameter/10711349#10711349) (Explanation for `oid::regclass` here.)
There are more related answers. [Use a search.](https://stackoverflow.com/search?q=execute%20%5Bplpgsql%5D%20regclass%20string_agg)
BTW, `LIKE` in `student_name LIKE 'John Doe'` is pointless. Without wildcards, just use `=`. | Just add a literal value to each select that describes the source table:
```
select student_name, 'Table 1', pid
from table1
where ...
union
select some_name, 'Table 2', pid
from table2
where ...
union
...
``` | Search across multiple tables and also display table name in resulting rows | [
"",
"sql",
"postgresql",
"database-design",
"dynamic-sql",
""
] |
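The catalog-driven approach above (only query tables that actually have the column, and tag each result row with its table name) can be sketched with Python and sqlite3. Assumptions: the table and column names are invented for illustration, and SQLite's `sqlite_master`/`PRAGMA table_info` play the role of `pg_class`/`pg_attribute`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (pid INTEGER, student_name TEXT, class_name TEXT);
CREATE TABLE t2 (pid INTEGER, previous_school TEXT);
CREATE TABLE t3 (pid INTEGER, student_name TEXT, parent_name TEXT);
INSERT INTO t1 VALUES (10,'john doe','5A');
INSERT INTO t2 VALUES (11,'east high');
INSERT INTO t3 VALUES (12,'john doe','jane doe');
""")

def tables_with_column(conn, column):
    # Catalog lookup: keep only tables that actually have the column.
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return [t for t in tables
            if any(c[1] == column for c in conn.execute(f"PRAGMA table_info({t})"))]

def search_all(conn, column, value):
    # Assemble one UNION ALL query; each branch carries its table name.
    parts = [f"SELECT {column}, '{t}' AS tbl, pid FROM {t} WHERE {column} = ?"
             for t in tables_with_column(conn, column)]
    return conn.execute(" UNION ALL ".join(parts), [value] * len(parts)).fetchall()

hits = search_all(conn, "student_name", "john doe")
```

In real dynamic SQL the identifiers must still be quoted and sanitized (as the `oid::regclass` trick in the answer does); here they come straight from the catalog of the demo database.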
I have a Google Spreadsheet and I want to run a `QUERY` function, but I want the `WHERE` statement to check a series of values. I'm basically looking for what I would use an `IN` statement for in SQL. What is the `IN` equivalent in Google Spreadsheets? Right now I have:
```
=QUERY(Sheet1!A3:AB50,"Select A,B, AB WHERE B='"& G4 &"'")
```
And that works. But what I really need is the equivalent of:
```
=QUERY(Sheet1!A3:AB50,"Select A,B, AB WHERE B='"& G4:G7 &"'")
```
And of course, that statement fails. How can I get the where against a range of values? These are text values, if that makes a difference. | Great trick Zolley! Here's a small improvement.
Instead of:
`=CONCATENATE(G3,"|",G4,"|",G5,"|",G6,"|",G7)`
we can use:
`=TEXTJOIN("|",1,G3:G7)`
That also allows us to work with bigger ranges, where adding every cell to the formula one by one just doesn't make sense.
UPD: Going further, I composed the two formulas together to eliminate the helper cell:
`=QUERY(Sheet1!A3:AB50,"Select A,B, AB WHERE B matches '^.*(" & TEXTJOIN("|",1,G3:G7) & ").*$'")`
I used this in my own project and it worked perfectly! | Although I don't know the perfect answer for this, I found a workaround which can be used on small amounts of data (hopefully that's the case here :) ).
First step: create a "helper cell" in which you concatenate the cells G3:G7 with a "|" character:
`=CONCATENATE(G3,"|",G4,"|",G5,"|",G6,"|",G7)` - let's say this is the content of cell H2.
Now, change your query as follows:
`=QUERY(Sheet1!A3:AB50,"Select A,B, AB WHERE B matches '^.*(" & H2 & ").*$'")`
This should do the trick. Basically, the `matches` operator allows the use of regular expressions, which support alternation via the `|` symbol.
As I said, it's a workaround, it has drawbacks, but I hope it'll help. | Query with range of values for WHERE clause? | [
"",
"sql",
"google-sheets",
""
] |
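What the generated `matches '^.*(…|…).*$'` clause does can be illustrated with an ordinary regular expression in Python (the sample values are made up; note the pattern also matches substrings, which is a limitation of this workaround):

```python
import re

# Stand-ins for the values in the G3:G7 range and the column being filtered.
wanted = ["banana", "cherry"]
values = ["apple", "banana", "cherry", "dragonfruit"]

# Same shape as the generated clause: ^.*(banana|cherry).*$
pattern = re.compile(r"^.*(" + "|".join(map(re.escape, wanted)) + r").*$")

matched = [v for v in values if pattern.match(v)]
```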
Let us say I have a table called `Employee` in Oracle with following structure.
```
EMPLOYEE:
NAME VARCHAR2(60)
DOB TIMESTAMP
SAL NUMBER
```
I want a query that selects all the employees whose DOB falls on last day of month.
i.e. (1/31, 2/28 or 2/29 depending on the year, 3/31, ...) | The way I'd approach this would be to add a day to the date and then see whether the day of the month is the 1st.
I don't know Oracle specifically, but as a general idea, in MSSQL I'd do `datepart(d, dateadd(day, 1, DOB)) = 1` | In Oracle, use the `LAST_DAY()` function:
```
SELECT * FROM table WHERE DOB=Last_day(DOB)
``` | select all the employees whose date of birth falls end of month | [
"",
"sql",
"oracle",
""
] |
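The accepted trick (add a day, then check whether the result is the 1st) is easy to verify outside the database; a quick Python sketch:

```python
from datetime import date, timedelta

def is_last_day_of_month(d):
    # If tomorrow is the 1st, then d is the last day of its month.
    return (d + timedelta(days=1)).day == 1

# 2012 is a leap year, so 2012-02-28 is NOT the last day of February.
checks = [is_last_day_of_month(date(2014, 2, 28)),
          is_last_day_of_month(date(2012, 2, 28)),
          is_last_day_of_month(date(2014, 3, 31))]
```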
Is there a way to convert a datetime to (dd/MM/yyyy AM/PM) format in SQL?
For example: 18/03/2014 AM. | For MySQL:
```
select DATE_FORMAT(NOW(), '%d/%m/%Y %p')
```
For SQL Server:
```
SELECT convert(varchar(20), GetDate(), 103) + ' ' + substring(convert(varchar(30), GetDate(), 9), 25, 2);
```
For Postgres:
```
SELECT TO_CHAR(NOW(), 'dd/mm/yyyy AM')
``` | For Microsoft SQL Server you can use
```
select CONVERT(VARCHAR(24), GETDATE(), 100)
``` | How to Show Date with AM/PM instead of specifying time in SQL | [
"",
"sql",
"sql-server",
""
] |
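The target shape (dd/MM/yyyy plus an AM/PM marker) is what all of the dialect-specific answers above are emulating; a Python `strftime` sketch (note `%p` is locale-dependent):

```python
from datetime import datetime

# 18 March 2014, 09:30 in the morning -> "18/03/2014 AM" in the C locale.
stamp = datetime(2014, 3, 18, 9, 30)
formatted = stamp.strftime("%d/%m/%Y %p")
```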
Database: MSSQL server 2012;
Isolation level: READ\_COMMITTED\_SNAPSHOT
Now I have a table, `COV_HOLES_PERIODDATE`. It has a composite primary key which is also the clustered index; there are no other indexes on this table.
There are many threads (via Java) working concurrently. Each thread will first do a "select for update" on a DIFFERENT primary key via the lock hints (updlock, rowlock), then do some work, then update that row of the table. It is guaranteed from the Java side that each thread operates on a different row in this table.
However, there is a rare and elusive deadlock that I cannot reproduce consistently; it happens only once in a great while. I would imagine there should never be lock contention, since each transaction should only lock a different row. The deadlock graph shows a collision on a key lock, and the "hobt id" in the deadlock graph, as I queried, corresponds to this table's index.
What did I miss? Thanks!
Code listing for each tran (note each tran operates on a different primary key):
```
begin tran;
select DIVISION_ID
from COV_HOLES_PERIODDATE with (updlock, rowlock)
where composite_primary_key = xxx;
//some work on other tables, not related to this table.
update COV_HOLES_PERIODDATE
set non_primaryKey_field= xxx
where composite_primary_key = xxx;
commit;
```
**EDIT 3:** From the execution plan captured by SQL Server Profiler while my Java load-testing program runs, if I read it correctly, an implicit conversion changed the equality predicate on "skill" into a range in the "Seek Predicates", as this image shows. But I don't know how to interpret why it is still a "Clustered Index Seek" even though the "Seek Predicates" show a range for the "Skill" column.
<https://i.stack.imgur.com/9vThc.png>
Code listing for the intended SQL (but the execution plan shows an implicit conversion changes the `SKILL` part):
```
select DIVISION_ID
from cpq_jfu_v4_speedTest.cpqjfuv4speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock)
where DIVISION_ID = 1
  and UNIT_ID = 2
  and SKILL = 'M'
  and PERIOD_START_DATE = '2014-02-09';
```
**EDIT 2:** The first link is a picture of the deadlock; the second link shows the table and its index.
<https://i.stack.imgur.com/8j6V9.png>
<https://i.stack.imgur.com/JaFkl.png>
**EDIT 1**: listing for deadlock graph
```
<deadlock-list>
<deadlock victim="process2fb02d498">
<process-list>
<process id="process2fb02d498" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (425c06b927a8)" waittime="4321" ownerId="181788" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.930" XDES="0x2f0d6a3a8" lockMode="U" schedulerid="4" kpid="8704" status="suspended" spid="60" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-18T14:29:09.010" lastbatchcompleted="2014-02-18T14:29:09.010" lastattention="1900-01-01T00:00:00.010" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181788" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="148" sqlhandle="0x02000000f9bd1c3532128f75b67ae477b2447ba2a64528db0000000000000000000000000000000000000000">
update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 nvarchar(4000),@P1 smallint,@P2 smallint,@P3 nvarchar(4000),@P4 date)update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </inputbuf>
</process>
<process id="process2f0110cf8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (3d51ccbaf870)" waittime="4303" ownerId="181789" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.933" XDES="0x2f651e6c8" lockMode="U" schedulerid="3" kpid="904" status="suspended" spid="56" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-18T14:29:09.030" lastbatchcompleted="2014-02-18T14:29:09.030" lastattention="1900-01-01T00:00:00.030" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181789" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="148" sqlhandle="0x02000000f9bd1c3532128f75b67ae477b2447ba2a64528db0000000000000000000000000000000000000000">
update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 nvarchar(4000),@P1 smallint,@P2 smallint,@P3 nvarchar(4000),@P4 date)update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </inputbuf>
</process>
<process id="process2f05d0188" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (f40d3be1c4a7)" waittime="4381" ownerId="181790" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.937" XDES="0x2f0290d08" lockMode="U" schedulerid="4" kpid="5248" status="suspended" spid="59" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-18T14:29:08.950" lastbatchcompleted="2014-02-18T14:29:08.950" lastattention="1900-01-01T00:00:00.950" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181790" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="110" sqlhandle="0x02000000e59aa71c628d78d0574a5f60f697c8ffd8228e4b0000000000000000000000000000000000000000">
select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 smallint,@P1 smallint,@P2 nvarchar(4000),@P3 date)select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </inputbuf>
</process>
<process id="process2f2a9f868" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (8b00f1e21b7f)" waittime="4380" ownerId="181792" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.950" XDES="0x2f651f1d8" lockMode="U" schedulerid="2" kpid="7652" status="suspended" spid="63" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-18T14:29:08.950" lastbatchcompleted="2014-02-18T14:29:08.950" lastattention="1900-01-01T00:00:00.950" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181792" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="110" sqlhandle="0x02000000e59aa71c628d78d0574a5f60f697c8ffd8228e4b0000000000000000000000000000000000000000">
select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 smallint,@P1 smallint,@P2 nvarchar(4000),@P3 date)select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2c70ba180" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f2a9f868" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2fb02d498" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cc3d9980" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f05d0188" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f0110cf8" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cf4fa480" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f0110cf8" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f05d0188" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cf4f9e80" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2fb02d498" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f2a9f868" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
</resource-list>
</deadlock>
<deadlock victim="process2f0110cf8">
<process-list>
<process id="process2fb02d498" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (425c06b927a8)" waittime="4321" ownerId="181788" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.930" XDES="0x2f0d6a3a8" lockMode="U" schedulerid="4" kpid="8704" status="suspended" spid="60" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-18T14:29:09.010" lastbatchcompleted="2014-02-18T14:29:09.010" lastattention="1900-01-01T00:00:00.010" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181788" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="148" sqlhandle="0x02000000f9bd1c3532128f75b67ae477b2447ba2a64528db0000000000000000000000000000000000000000">
update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 nvarchar(4000),@P1 smallint,@P2 smallint,@P3 nvarchar(4000),@P4 date)update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </inputbuf>
</process>
<process id="process2f0110cf8" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (3d51ccbaf870)" waittime="4303" ownerId="181789" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.933" XDES="0x2f651e6c8" lockMode="U" schedulerid="3" kpid="904" status="suspended" spid="56" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-18T14:29:09.030" lastbatchcompleted="2014-02-18T14:29:09.030" lastattention="1900-01-01T00:00:00.030" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181789" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="148" sqlhandle="0x02000000f9bd1c3532128f75b67ae477b2447ba2a64528db0000000000000000000000000000000000000000">
update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 nvarchar(4000),@P1 smallint,@P2 smallint,@P3 nvarchar(4000),@P4 date)update cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE set LAST_UPDATED_DTZ=@P0 where DIVISION_ID=@P1 and UNIT_ID=@P2 and SKILL=@P3 and PERIOD_START_DATE=@P4 </inputbuf>
</process>
<process id="process2f05d0188" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (f40d3be1c4a7)" waittime="4381" ownerId="181790" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.937" XDES="0x2f0290d08" lockMode="U" schedulerid="4" kpid="5248" status="suspended" spid="59" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-18T14:29:08.950" lastbatchcompleted="2014-02-18T14:29:08.950" lastattention="1900-01-01T00:00:00.950" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181790" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="110" sqlhandle="0x02000000e59aa71c628d78d0574a5f60f697c8ffd8228e4b0000000000000000000000000000000000000000">
select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 smallint,@P1 smallint,@P2 nvarchar(4000),@P3 date)select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </inputbuf>
</process>
<process id="process2f2a9f868" taskpriority="0" logused="0" waitresource="KEY: 6:72057594048413696 (8b00f1e21b7f)" waittime="4381" ownerId="181792" transactionname="implicit_transaction" lasttranstarted="2014-02-18T14:29:08.950" XDES="0x2f651f1d8" lockMode="U" schedulerid="2" kpid="7652" status="suspended" spid="63" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-18T14:29:08.950" lastbatchcompleted="2014-02-18T14:29:08.950" lastattention="1900-01-01T00:00:00.950" clientapp="Microsoft JDBC Driver for SQL Server" hostname="ACNU34794GD" hostpid="0" loginname="cpq_jfu_v380_speedTest_WEBAPP" isolationlevel="read committed (2)" xactid="181792" currentdb="6" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="110" sqlhandle="0x02000000e59aa71c628d78d0574a5f60f697c8ffd8228e4b0000000000000000000000000000000000000000">
select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@P0 smallint,@P1 smallint,@P2 nvarchar(4000),@P3 date)select DIVISION_ID from cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE with (updlock, rowlock) where DIVISION_ID =@P0 and UNIT_ID =@P1 and SKILL =@P2 and PERIOD_START_DATE =@P3 </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2c70ba180" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f2a9f868" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2fb02d498" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cc3d9980" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f05d0188" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f0110cf8" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cf4fa480" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2f0110cf8" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f05d0188" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
<keylock hobtid="72057594048413696" dbid="6" objectname="cpq_jfu_v380_speedTest.cpqjfuv380speedTest.COV_HOLES_PERIODDATE" indexname="1" id="lock2cf4f9e80" mode="U" associatedObjectId="72057594048413696">
<owner-list>
<owner id="process2fb02d498" mode="U"/>
</owner-list>
<waiter-list>
<waiter id="process2f2a9f868" mode="U" requestType="wait"/>
</waiter-list>
</keylock>
</resource-list>
</deadlock>
</deadlock-list>
``` | The investigation documented in the comments led to the following answer:
You're passing in parameters which do not match the corresponding columns in type. This leads to implicit conversions and a range seek on the clustered index. "Seek" means that a subset of the index will be scanned; it does not necessarily imply a singleton seek.
The range seek on the 3rd column of the index means that the query completely disregards the filter on the 4th column from a locking perspective. This can lead to multiple rows being locked (not necessarily range locks, because these are not taken under `READ COMMITTED`). Had you also set the locking hint `HOLDLOCK` (or `SERIALIZABLE`), range locks would have been taken.
---
Other things that were tried:
> Is this the whole transaction? Triggers? Indexed views (shouldn't
> matter with the given deadlock graph but who knows)? Can you post the
> graph as XML somewhere? And there are no NC indexes, right? Can you
> post the query plan for both statements?
.
> Why do some of the transactions have trancount="2". Maybe you
> misnested transactions so that they are extended? That would explain
> why multiple locks can be taken per tran. IOW an app bug. Add this IF
> @@TRANCOUNT <> 1 ROLLBACK to assert this at runtime. Rerun your load
> test with this assert.
.
> Are there implicit conversions? I know there are because you're
> passing in a nvarchar(4000) but PK columns cannot have such a big
> column. This can lead to range locks being taken. Change the update
> from @P2 to CONVERT(CORRECT\_COLUMN\_TYPE\_HERE, @P2) to make sure.
> Examine the execution plan to make sure that the CI seek is using only
> equality predicates. Not ranges.
I'm adding this to the answer so that future visitors of this question can learn how to investigate an issue like this one. | `ROWLOCK` never gives you an absolute guarantee of locking only this particular row(s) that you operate on. This is even stated on [MSDN](http://technet.microsoft.com/en-us/library/ms187373.aspx):
> Lock hints ROWLOCK, UPDLOCK, AND XLOCK that acquire row-level locks may place locks on index keys rather than the actual data rows.
As you cannot 100% rely on it, a rare deadlock may happen. In your case, you can see that a deadlock is on a key which would point to this situation. | Rare and elusive deadlocks (select for update; then update) in case of multiple concurrent transactions | [
"",
"sql",
"sql-server",
"deadlock",
"database-deadlocks",
""
] |
I'm trying to figure out an SQL query that would allow me to sort data in ascending order depending on what time is stored in the appointment\_time column; it has values like "00:00" - "23:00"
In my head it looks like this:
```
SELECT * FROM myTable ORDER BY appointment_time ASC
```
But I don't know how to make it understand that 00:00 is a lower value than 00:01, for example, and so on. | They will sort fine as your query is written; the easiest thing is to just give it a whirl and see what happens:
```
SELECT *
FROM myTable
ORDER BY appointment_time ASC
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!7/fdb0e/1/0)
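A quick way to convince yourself locally (a sketch using Python's stdlib `sqlite3` module rather than the fiddle; the table and column names mirror the question):

```python
import sqlite3

# In-memory database with the question's table shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (appointment_time TEXT)")
conn.executemany(
    "INSERT INTO myTable (appointment_time) VALUES (?)",
    [("23:00",), ("00:01",), ("09:30",), ("00:00",)],
)

# Plain text comparison is enough, because zero-padded "HH:MM" strings
# sort in the same order as the times they represent.
rows = conn.execute(
    "SELECT appointment_time FROM myTable ORDER BY appointment_time ASC"
).fetchall()
print([r[0] for r in rows])  # ['00:00', '00:01', '09:30', '23:00']
```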
Alphanumeric ordering has no problem with numbers stored in strings so long as they are zero padded, as is the case with your times. | > But I don't know how to make it understand that 00:00 is lower value than 00:01
If all you have is a time then an alphabetical sort should work just fine. If you *want* to convert to a DateTime you can use CONVERT:
```
select CONVERT(DATETIME, appointment_time , 108)
``` | Sort by / Order by time stored in database "00:00 - 23:00" | [
"",
"sql",
"sqlite",
""
] |
This is my method to delete a row from the database where appointment\_date is equal to a date that was passed in
```
public void deleteAllAppointments(String date) {
SQLiteDatabase db = this.getWritableDatabase();
String deleteAllQuery = "DELETE FROM " + TABLE_APPOINTMENTS + " WHERE appointment_date = '" + date + "'";
db.rawQuery(deleteAllQuery, null);
Log.d("Query: ", deleteAllQuery);
}
```
I then use it like this
```
//Database (DatabaseHandler is the one that contains all database methods)
final DatabaseHandler database = new DatabaseHandler(this);
//This happens when button is clicked; it is tested and executes with every click,
//@param selectedDate is a string like "18/03/2014"
database.deleteAllAppointments(selectedDate);
```
It executes and query looks like this
```
DELETE FROM appointments WHERE appointment_date = '18/03/2014'
```
However, the row with appointment\_date = '18/03/2014' is not deleted.
I'm sure the database is set up correctly, as I have working methods with it and all information is received from there in the correct format.
NOTE: Adding "\*" to "DELETE \* FROM..." returns a fatal syntax error. | `rawQuery()` just compiles the SQL but does not run it. To actually run it, use either `execSQL()` or call one of the `moveTo...()` methods on the cursor returned by `rawQuery()`.
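The same pitfall exists outside Android: the statement has to actually be executed (and the transaction committed) before anything changes. A sketch of the equivalent fix using Python's `sqlite3` (a parameterized query also avoids SQL injection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointments (appointment_date TEXT)")
conn.execute("INSERT INTO appointments VALUES ('18/03/2014')")

# Actually run the DELETE, then commit so the change is persisted.
cur = conn.execute(
    "DELETE FROM appointments WHERE appointment_date = ?", ("18/03/2014",)
)
conn.commit()
print(cur.rowcount)  # 1 row deleted

remaining = conn.execute("SELECT COUNT(*) FROM appointments").fetchone()[0]
print(remaining)  # 0
```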
For further info, see [What is the correct way to do inserts/updates/deletes in Android SQLiteDatabase using a query string?](https://stackoverflow.com/questions/20110274/what-is-the-correct-way-to-do-inserts-updates-deletes-in-android-sqlitedatabase/20118910#20118910) | For tasks such as insert or delete there are really great "convenience methods" like the [delete method](http://developer.android.com/reference/android/database/sqlite/SQLiteDatabase.html#delete%28java.lang.String,%20java.lang.String,%20java.lang.String%5B%5D%29) already built into the database.
```
public int delete (String table, String whereClause, String[] whereArgs)
```
As to why your current approach would fail, it could be something as simple as the format of the column you're trying to delete not matching (e.g. you have created the table as a date value and not a string).
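The format-mismatch point is easy to demonstrate (a sketch using Python's `sqlite3`; the cursor's `rowcount` tells you immediately whether anything matched):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE appointments (appointment_date TEXT)")
conn.execute("INSERT INTO appointments VALUES ('2014-03-18')")  # stored ISO-style

# The literal uses a different format, so nothing matches and nothing is deleted.
cur_miss = conn.execute(
    "DELETE FROM appointments WHERE appointment_date = '18/03/2014'"
)
print(cur_miss.rowcount)  # 0 rows affected

# Matching the stored format deletes the row.
cur_hit = conn.execute(
    "DELETE FROM appointments WHERE appointment_date = '2014-03-18'"
)
print(cur_hit.rowcount)  # 1 row affected
```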
In any case, using the built in delete method is easier because it will notify you when it fails by returning the number of rows affected by the delete. rawQuery just returns a cursor, which you would then have to get the result from to see if it worked. | Why is such DELETE query not working? | [
"",
"android",
"sql",
"sqlite",
""
] |
I am learning how to write TSQL queries. I am trying to understand them in depth. This query that I got from a tutorial requires that I check for a NOT NULL in the second WHERE clause.
```
SELECT *
FROM Person.Person AS p
WHERE NOT p.BusinessEntityID IN (
SELECT PersonID
FROM Sales.Customer
WHERE PersonID IS NOT NULL);
```
Now the table Sales.Customer has some NULL values for PersonID. If I remove this WHERE clause in the sub query, I get no results returned. In my obviously faulty thinking on the matter, I would think that if the sub query returned a NULL it would simply not meet the condition of the WHERE clause in the outer query. I would expect to get a result set for the rows that had a PersonID that is not NULL. Why does it not work according to this reasoning? | Understanding how NULL values are handled by SQL Server can be difficult for newcomers. A value of NULL indicates that the value is unknown. A value of NULL is different from an empty or zero value. No two null values are equal. Comparisons between two null values, or between a NULL and any other value, return unknown because the value of each NULL is unknown.
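This three-valued logic is easy to see in miniature (a sketch using Python's `sqlite3`, which follows the same NULL comparison rules; the table names are simplified stand-ins for the AdventureWorks ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (BusinessEntityID INTEGER);
    CREATE TABLE customer (PersonID INTEGER);
    INSERT INTO person VALUES (1), (2), (3);
    INSERT INTO customer VALUES (1), (NULL);
""")

# With a NULL in the subquery, "x NOT IN (1, NULL)" is never TRUE:
# "x <> NULL" evaluates to UNKNOWN, so every outer row is filtered out.
with_null = conn.execute(
    "SELECT * FROM person WHERE BusinessEntityID NOT IN "
    "(SELECT PersonID FROM customer)"
).fetchall()
print(with_null)  # []

# Excluding the NULLs restores the intuitive behaviour.
without_null = conn.execute(
    "SELECT * FROM person WHERE BusinessEntityID NOT IN "
    "(SELECT PersonID FROM customer WHERE PersonID IS NOT NULL)"
).fetchall()
print(without_null)  # [(2,), (3,)]
```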
[Null Values](http://technet.microsoft.com/en-us/library/ms191504%28v=sql.105%29.aspx) | A little change as shown below (`column not in`)
```
SELECT *
FROM Person AS p
WHERE p.BusinessEntityID NOT IN ( <-- Here
SELECT PersonID
FROM Sales.Customer
WHERE PersonID IS NOT NULL);
```
Your inner query returns all non-null PersonID values, and the outer query gets all fields
from the Person table with the restriction that BusinessEntityID does not belong to that PersonID set.
"",
"sql",
"sql-server",
"t-sql",
"in-operator",
""
] |
I was wondering how to put a constraint on a column when making the table.
How would the create statement look if I want to say something like, make a table called name with column size, and size has to be greater than 10.
`CREATE TABLE name(size int);`
Where do I put the constraint? | You can include a constraint in your declaration right after you have specified the column name, like so:
`CREATE TABLE name(size int CHECK (size > 10));` | Just because I've encountered more headaches than is necessary with system-named constraints, here's how to do it and also name the constraint in the process:
```
create table Persons (
size int,
constraint [CK_Persons_Size] check ((size > 10))
)
``` | SQL CREATE statement with a constraint | [
"",
"sql",
"sql-server",
""
] |
I am currently doing a SUM CASE to work out a percentage; however, the calculated column returns only zeros or ones (integers) and I don't know why. I have written the SQL in parts to break it out and ensure the underlying data is correct, which it is; however, when I add the last part on to do the percentage it fails. Am I missing something?
```
SELECT
SUPPLIERCODE,
(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END)) AS ACCEPTABLE,
COUNT(ID) AS TOTALSUBMITTED,
(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END)/COUNT(ID))
FROM SUPPLIERDATA
GROUP BY SUPPLIERCODE
```
For example here's some of the data returned:
```
SUPPLIERCODE ACCEPTABLE TOTALSUBMITTED Column1
HBFDE2 1018 1045 0
DTETY1 4 4 1
SWYTR2 579 736 0
VFTEQ3 2104 2438 0
```
I know I could leave the other columns and use an excel calculation but I'd rather not... Any help would be well received. Thanks | ```
SELECT
SUPPLIERCODE,
(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END)) AS ACCEPTABLE,
COUNT(ID) AS TOTALSUBMITTED,
(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END)*1.0/COUNT(ID))
FROM SUPPLIERDATA
GROUP BY SUPPLIERCODE
```
You need to convert your result to a float. This is easily done by multiplying by 1.0. | This is due to the fact that SQL Server is treating your values as `INT`s for the purpose of division.
Try the following and you will see the answer `0`:
```
PRINT 1018 / 1045
```
In order to allow your operation to work correctly, you need to convert your values to `FLOAT`s, like so:
```
PRINT CAST(1018 AS FLOAT) / 1045
```
This will produce the answer `0.974163` as expected.
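The same integer-division behaviour can be reproduced outside SQL Server (a sketch using Python's `sqlite3`; SQLite truncates integer division the same way, and `CAST(... AS FLOAT)` promotes the operand to a real number):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Two integers divide as integers: the fractional part is truncated away.
int_result = conn.execute("SELECT 1018 / 1045").fetchone()[0]
print(int_result)  # 0

# Promote one operand and the division happens in floating point.
float_result = conn.execute("SELECT 1018 * 1.0 / 1045").fetchone()[0]
print(round(float_result, 6))  # 0.974163

cast_result = conn.execute("SELECT CAST(1018 AS FLOAT) / 1045").fetchone()[0]
print(round(cast_result, 6))  # 0.974163
```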
A simple change to your statement to introduce a cast to `FLOAT` will sort your problem:
```
SELECT
SUPPLIERCODE,
(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END)) AS ACCEPTABLE,
COUNT(ID) AS TOTALSUBMITTED,
(CAST(SUM(CASE WHEN ISNULL(DATESUBMITTED,0) - ISNULL(FAILDATE,0) <15 THEN 1 ELSE 0 END) AS FLOAT) / COUNT(ID))
FROM SUPPLIERDATA
GROUP BY SUPPLIERCODE
``` | T-SQL Sum Case Confusion | [
"",
"sql",
"t-sql",
""
] |
I have summed the values; the query below sums them and returns zero if there are no values.
How can I round off the values for these columns?
Here is my Query :
```
isnull(sum([Old_Values]),0) [Old Values],
[New_Value] = isnull(sum([Yearly_Value]-[Previous_Year_Value]),0)
```
I also use this query for calculation, and I need to round off the values:
```
[Product_Price] = [Product_Profit]/4,
[Product_Profit] = [ActualValue] * 0.75,
[GrossValue]
```
How can I do this? | ```
[Product_Price] =Round([Product_Profit]/4,0),
[Product_Profit] = Round([ActualValue] * 0.75,0)
``` | You could use `FLOOR()` or `CEILING()`, like this:
```
[Product_Price] = FLOOR([Product_Profit]/4),
[Product_Profit] = FLOOR([ActualValue] * 0.75)
``` | Having roundoff values for columns | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I'm trying to get a list of posts and comments count for each one.
```
SELECT
theposts.id,
theposts.name,
(SELECT COUNT(*) FROM thecomments WHERE thecomments.post_id = theposts.id) AS comments
FROM theposts
```
The problem is: I have 20k posts and 30 million comments. The query is extremely slow.
If I use LIMIT 5, it finishes in about 40 seconds, but I need to get the full list of 20k posts.
Any tip for how to speed up or debug this query?
The server is running on my MacBook with 8 GB RAM. | The best way that I can think of is to create an index. You need an index on `thecomments(post_id)`:
```
create index thecomments_postid on thecomments(post_id);
```
This should change the query plan to just an index scan and go pretty quickly.
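To make the idea concrete, here is a small sketch using Python's `sqlite3` (SQLite rather than MySQL, but the indexing principle is the same; the table names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE theposts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE thecomments (id INTEGER PRIMARY KEY, post_id INTEGER);
    -- The index that turns each per-post count into an index lookup
    -- instead of a full scan of the comments table.
    CREATE INDEX thecomments_postid ON thecomments(post_id);
    INSERT INTO theposts VALUES (1, 'first'), (2, 'second');
    INSERT INTO thecomments (post_id) VALUES (1), (1), (1), (2);
""")

rows = conn.execute("""
    SELECT p.id, p.name,
           (SELECT COUNT(*) FROM thecomments c WHERE c.post_id = p.id) AS comments
    FROM theposts p
""").fetchall()
print(rows)  # [(1, 'first', 3), (2, 'second', 1)]
```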
I also think that this will be faster than using `group by`, which is the other possibility:
```
SELECT theposts.id, theposts.name, COUNT(*) as comment
FROM theposts join
thecomments
on thecomments.post_id = theposts.id
GROUP BY theposts.id;
``` | First off is to check that you have indexes where appropriate. That's usually the most common problem.
Another problem is that you may well be running 20,000 sub-queries, depending on how smart your query analysis engine is.
You can achieve the same result in *one* query just by grouping your rows, such as with (depending on your schema):
```
SELECT
theposts.id as id,
theposts.name as name,
count(thecomments.id) as comments
FROM
theposts, thecomments
WHERE
thecomments.post_id = theposts.id
GROUP BY theposts.id, theposts.name
```
(that's implicit style join syntax, you could also use explicit `join`). | mysql: simple sub-query very slow because has 30 million rows | [
"",
"mysql",
"sql",
""
] |
Maybe I'm totally wrong and it's not possible to do in MySQL. But what I wanted to do is to fill out a column automatically based on a select query:
For example:
Table 1:
```
IdNode NodeName NodeContact NodeStatus Nodegroup
1 Machine1 IT 1 IT
2 Machine2 IT 0 IT
3 Machine3 IT 1 IT
4 Machine4 IT 1 Others
4 Machine5 IT 1 Others
```
Table 2
```
IdGroup GroupName NodesManagedbyGroup
1 IT ??
2 others ??
```
Having those two tables, I would like to fill out (automatically) all rows in column `NodesManagedbyGroup` in the table2.
Manually it would be:
`SELECT COUNT(*) FROM table1 where MemberOfGroup='IT';`
result value Int = 3
Then
`update table2 set NodesManagedbyGroup = 3 where GroupName='IT';`
Is there any way that MySQL can do this for me automatically? | You can use [triggers](http://dev.mysql.com/doc/refman/5.0/en/triggers.html) to do this - you'd create triggers for insert, update and delete on table 1 to update table 2.
This is generally a bad idea though - it's duplicating information around the database, denormalizing the schema.
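As an illustration of the trigger approach, here is a sketch using Python's `sqlite3` (MySQL's trigger syntax differs slightly, and a complete solution would also need UPDATE and DELETE triggers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (NodeName TEXT, Nodegroup TEXT);
    CREATE TABLE table2 (GroupName TEXT, NodesManagedbyGroup INTEGER DEFAULT 0);
    INSERT INTO table2 (GroupName) VALUES ('IT'), ('Others');

    -- Keep the counter in sync whenever a node is inserted.
    CREATE TRIGGER node_added AFTER INSERT ON table1
    BEGIN
        UPDATE table2
        SET NodesManagedbyGroup = NodesManagedbyGroup + 1
        WHERE GroupName = NEW.Nodegroup;
    END;
""")

conn.execute("INSERT INTO table1 VALUES ('Machine1', 'IT')")
conn.execute("INSERT INTO table1 VALUES ('Machine2', 'IT')")
it_count = conn.execute(
    "SELECT NodesManagedbyGroup FROM table2 WHERE GroupName = 'IT'"
).fetchone()[0]
print(it_count)  # 2
```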
Alternatively, you can create a [view](http://dev.mysql.com/doc/refman/5.0/en/create-view.html) which calculates this data on the fly and behaves like a table when querying. | Use [multiple `UPDATE`](http://dev.mysql.com/doc/refman/5.5/en/update.html) syntax with selecting counts in subquery as a sample:
```
UPDATE
table2
INNER JOIN
(SELECT COUNT(1) AS gcount, Nodegroup FROM table1 GROUP BY Nodegroup) AS counts
ON table2.GroupName=counts.Nodegroup
SET
table2.NodesManagedbyGroup=counts.gcount
``` | Automatic column value update in MySQL | [
"",
"mysql",
"sql",
""
] |
I'm working on SQL Server 2008 R2. I'm trying to split a row into 24-hour period ranges between **FromDate** and **Todate**.
For example, if a time row is given as below (the range between **FromDate** and **Todate** is 4 days, so I want 4 rows):
```
ID FromDate Todate
---- ------------------------ -------------------------
1 2014-04-01 08:00:00.000 2014-04-04 12:00:00.000
```
The result I want to see like this:
```
ID FromDate Todate DateDiff(HH)
---- ------------------------ -----------------------------------
1 2014-04-01 08:00:00.000 2014-04-01 23:59:59.000 15
1 2014-04-02 00:00:00.000 2014-04-02 23:59:59.000 23
1 2014-04-03 00:00:00.000 2014-04-03 23:59:59.000 23
1 2014-04-04 00:00:00.000 2014-04-04 12:00:00.000 12
``` | Try this query:
```
WITH TAB1 (ID,FROMDATE,TODATE1,TODATE) AS
(SELECT ID,
FROMDATE,
DATEADD(SECOND, 24*60*60 - 1, CAST(CAST(FROMDATE AS DATE) AS DATETIME)) TODATE1,
TODATE
FROM TABLE1
UNION ALL
SELECT
ID,
DATEADD(HOUR, 24, CAST(CAST(TODATE1 AS DATE) AS DATETIME)) FROMDATE,
DATEADD(SECOND, 2*24*60*60-1, CAST(CAST(TODATE1 AS DATE) AS DATETIME)) TODATE1,
TODATE
FROM TAB1 WHERE CAST(TODATE1 AS DATE) < CAST(TODATE AS DATE)
),
TAB2 AS
(SELECT ID,FROMDATE,
CASE WHEN TODATE1 > TODATE THEN TODATE ELSE TODATE1 END AS TODATE
FROM TAB1)
SELECT TAB2.*,
DATEPART(hh, TODATE) - DATEPART(hh, FROMDATE) [DateDiff(HH)] FROM TAB2;
```
## [SQL Fiddle](http://sqlfiddle.com/#!3/e7422/25) | <http://sqlfiddle.com/#!6/36452/2>
This should be what you need.
The first part of the query is a recursive CTE. This can be used to create the dates-data.
```
WITH CTEDays AS
(
SELECT CONVERT(date, '20140301') datevalue
UNION ALL
SELECT DATEADD(day, 1, C.datevalue)
FROM CTEDays C
WHERE DATEADD(day, 1, C.datevalue) <= '20140501'
)
SELECT
R.FromDate [FromOriginal]
, R.ToDate [ToOriginal]
, CDays.datevalue [RawDate]
, WantedFrom =
CASE WHEN CONVERT(date, R.FromDate) = CDays.datevalue THEN R.FromDate
ELSE CDays.datevalue
END
, WantedTo =
CASE WHEN CONVERT(date, R.ToDate) = CDays.datevalue THEN R.ToDate
ELSE DATEADD(minute, 24*60-1, CONVERT(datetime, CDays.datevalue))
END
FROM
tblRange R
OUTER APPLY
(
SELECT C.datevalue
FROM CTEDays C
WHERE C.datevalue BETWEEN CONVERT(date, R.FromDate) AND CONVERT(date, R.ToDate)
) CDays
```
However, I would suggest setting up a calendar table for this, if the CTE is too slow or inconvenient.
(as I have suggested [here](https://stackoverflow.com/questions/22387655/generate-a-resultset-for-incrementing-dates-in-tsql-and-use-last-known-value-for/22388234#22388234)) | SQL Server , split a time duration row by 24 hour period | [
"",
"sql",
""
] |
My statement looks like this:
```
update DATA_SOURCE tgt
set
tgt.DATA_SOURCE_NM=src.DATA_SOURCE_NM,
tgt.ODS_UPDATE_TS=src.ODS_UPDATE_TS
select DATA_SOURCE_NM,ODS_UPDATE_TS
from DATA_SOURCE_BKP src
where tgt.DATA_SRC_ID=src.DATA_SRC_ID;
```
I got the following error message:
> error ^ found "SELECT" (at char 103) expecting a keyword
I am using Aginity Workbench for Netezza. What am I doing wrong? | It works when I remove the line
> select DATA\_SOURCE\_NM,ODS\_UPDATE\_TS | Try this
```
update tgt
set
tgt.DATA_SOURCE_NM=src.DATA_SOURCE_NM,
tgt.ODS_UPDATE_TS=src.ODS_UPDATE_TS
from DATA_SOURCE tgt
inner join DATA_SOURCE_BKP src on tgt.DATA_SRC_ID=src.DATA_SRC_ID;
```
Maybe you can add a WHERE clause at the end to filter on a few more conditions. | Error when updating table from another table | [
"",
"sql",
"netezza",
""
] |
My table is this:
```
CREATE TABLE studio(
s_id int NOT NULL IDENTITY(1,1),
name varchar(30),
country varchar(15),
income_money float(53),
movies int,
CONSTRAINT primaryKeyStudio PRIMARY KEY(s_id)
)
```
I want to insert values into the table studio. Is it better this way:
```
INSERT INTO studio VALUES ('Walt Disney Studios','USA',45041000.99,50)
INSERT INTO studio VALUES ('Warner Bros Studios','London',30205000.76,60)
INSERT INTO studio VALUES ('Universal Studios','Hollywood',50235499.87,80)
```
Or this way:
```
INSERT INTO studio(name,country,income_money,movies) VALUES ('Walt Disney Studios','USA',45041000.99,50)
INSERT INTO studio(name,country,income_money,movies) VALUES ('Warner Bros Studios','London',30205000.76,60)
INSERT INTO studio(name,country,income_money,movies) VALUES ('Universal Studios','Hollywood',50235499.87,80)
``` | The second way, i.e. this one
```
INSERT INTO studio(name,country,income_money,movies) VALUES ('Walt Disney Studios','USA',45041000.99,50)
```
where you list column names explicitly is better regardless of the identity column, for several reasons:
* **It is more readable** - Users who are not familiar with the table at all can match up the values to their meaning.
* **Users do not need to remember what columns are in the table** - Users who are familiar with your table will have an easier time expanding your `INSERT` to use additional columns.
* **Changing the schema does not break this `INSERT`** - Someone adding a column in the middle would not break your existing code. | The first method requires a value for each column to be provided. If a new column is added to the table, all insert statements must be rewritten to include the new column. Statements written using the second approach would remain functional since they specify columns explicitly.
I would recommend using the second approach. | insert rows into table with identity column | [
"",
"sql",
"sql-server",
""
] |
I'm trying to display the most popular product (format\_id) in a given month (JAN-14) and group them by a count of each format\_id.
Here is my Query :
```
select PRODUCT, AMOUNT
from (Select Order_108681091.Order_Date, order_line_108681091.Format_id as Product,
COUNT(*) AS AMOUNT FROM order_line_108681091
Inner Join order_108681091
On order_108681091.order_id = order_line_108681091.order_id
Where order_108681091.Order_Date like '%JAN-14%'
group by Format_id
order by AMOUNT desc);
```
How can I do this? | 1. You said you have the condition, so let's start with adding it.
2. There is no need to `ORDER BY`, so remove it.
3. Remove `Order_date` from the subquery `SELECT`
4. Use aliases
5. The subquery itself would be enough.
```
SELECT l.Format_id as PRODUCT,
COUNT(*) AS AMOUNT
FROM order_line_108681091 l
INNER JOIN order_108681091 o
ON o.order_id = l.order_id
WHERE o.Order_Date LIKE '%JAN-14%'
GROUP BY
l.Format_id;
``` | You have to specify the inner join:
```
inner join order_108681091 ON order_108681091.ID = order_line_108681091.ID
```
or something like that. Also, your where clause probably won't work unless you're storing that date as a string and not a datetime datatype. | Oracle SQL Missing Keyword | [
"",
"sql",
"oracle",
""
] |
I am writing:
```
Select datediff(hh,getutcdate(), End_Date) as "Duration"
from MasterTable
order by Duration
```
to get the difference of hour values in ascending order. I get this output:
```
Duration
--------
-259
-210
5
10
22
35
75
105
235
```
But I want, ordering has to be done based on positive numbers and negative numbers of hour values. So, I need output as:
```
Duration
--------
5
10
22
35
75
105
235
-259
-210
```
Is there any tweak to do such task? Thanks in advance. | If you do want the sequence, as you've indicated in the question, where it's all of the positive numbers in ascending order first, then the negative numbers in ascending order, it should be:
```
Select datediff(hh,getutcdate(),End_Date) as "Duration" from MasterTable
order by CASE WHEN Duration >= 0 THEN 0 ELSE 1 END,Duration
``` | Understanding that what you need is to ignore the *Duration* sign:
Try using ABS:
```
Select datediff(hh,getutcdate(),End_Date) as "Duration" from MasterTable order by ABS(Duration)
``` | Sorting Date Difference in SQL | [
"",
"sql",
"sql-server",
"datetime",
"datediff",
""
] |
I'm having an assignment for my SQL class which I can't seem to figure out. This is the description of the select wanted:
Show all employees of which the salary is higher than the average of the colleagues in their department, only for the departments with at least 4 employees.
I've been able to find parts of the query like
```
select ename
from emp
where sal > any (select avg(sal)
from emp
group by
deptno);
```
to get the names of the employees which earn more than the avg.
or
```
select count(deptno)
from emp
group by
deptno having count(deptno) > 4;
```
to get the number of employees in each department.
But somehow it doesn't work when linking them together. Maybe someone can help shed some light on this one. | Just put your second query in with an AND clause:
```
select ename
from emp
where sal > any (select avg(sal)
from emp
group by
deptno)
and deptno in (select deptno
from emp
group by
deptno having count(deptno) > 4);
``` | You can use a **HAVING** clause in conjunction with **GROUP BY**:
```
select ename
from emp
where sal > any (select avg(sal)
from emp
group by
deptno)
having count(*)>4;
``` | sql subqueries and group by | [
"",
"sql",
"oracle11g",
""
] |
A simplified example for illustration: Consider a table "fruit" with 3 columns: name, count and the date purchased. Need an alphabetical list of the fruits and their count the last time they were bought. I am a bit confused by the order of sorting and how distinct is applied. My attempt -
```
drop table if exists fruit;
create table fruit (
name varchar(8),
count integer,
dateP datetime
);
insert into fruit (name, count, dateP) values
('apple', 4, '2014-03-18 16:24:37'),
('orange', 2, '2013-12-11 11:20:16'),
('apple', 7, '2014-07-05 08:34:21'),
('banana', 6, '2014-06-20 19:10:15'),
('orange', 6, '2014-07-22 17:41:12'),
('banana', 4, '2014-08-15 21:26:37'), -- last
('orange', 5, '2014-12-11 11:20:16'), -- last
('apple', 3, '2014-09-25 18:54:32'), -- last
('apple', 5, '2014-02-05 18:47:18'),
('apple', 12, '2013-09-25 14:18:57'),
('banana', 5, '2013-04-18 15:59:04'),
('apple', 9, '2014-01-29 11:47:45');
-- Expecting:
-- apple 3
-- banana 4
-- orange 5
select distinct name, count
from fruit
group by name
order by name, dateP;
-- Produces:
-- apple 9
-- banana 5
-- orange 5
``` | Try this:-
```
select f1.name,f1.count
from
fruit f1
inner join
(select name,max(dateP) date_P from fruit group by name) f2
on f1.name = f2.name and f1.dateP = f2.date_P
order by f1.name
```
**EDITED** for the last line :) | Try the following:
```
SELECT fruit.name, fruit.count, fruit.dateP
FROM fruit
INNER JOIN (
SELECT name, Max(dateP) AS lastPurchased
FROM fruit
GROUP BY name
) AS dt ON (dt.name = fruit.name AND dt.lastPurchased = fruit.dateP )
```
Here is a demo of this example on [SQLFiddle](http://sqlfiddle.com/#!6/c5bd0/2/0). | Combining select distinct with group and ordering | [
"",
"sql",
"sqlite",
""
] |
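The accepted self-join answer above can be verified end-to-end with SQLite from Python, using the question's own data (dates stored as ISO text, so `MAX` orders them correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fruit (name TEXT, "count" INTEGER, dateP TEXT);
INSERT INTO fruit VALUES
  ('apple', 4, '2014-03-18 16:24:37'), ('orange', 2, '2013-12-11 11:20:16'),
  ('apple', 7, '2014-07-05 08:34:21'), ('banana', 6, '2014-06-20 19:10:15'),
  ('orange', 6, '2014-07-22 17:41:12'), ('banana', 4, '2014-08-15 21:26:37'),
  ('orange', 5, '2014-12-11 11:20:16'), ('apple', 3, '2014-09-25 18:54:32'),
  ('apple', 5, '2014-02-05 18:47:18'), ('apple', 12, '2013-09-25 14:18:57'),
  ('banana', 5, '2013-04-18 15:59:04'), ('apple', 9, '2014-01-29 11:47:45');
""")

# Join each row to its fruit's latest purchase date; only the last
# purchase of each fruit survives the join condition.
rows = con.execute("""
    SELECT f1.name, f1."count"
    FROM fruit f1
    JOIN (SELECT name, MAX(dateP) AS date_P FROM fruit GROUP BY name) f2
      ON f1.name = f2.name AND f1.dateP = f2.date_P
    ORDER BY f1.name
""").fetchall()
print(rows)
```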
I hope someone can help. I have an Excel file with hundreds of items that looks like the table below, which shows each Style's Color/Size and its QTY levels. I need to take the corresponding header Sizes and match them with the Style and Color as well as the QTY levels, then format that into rows and **copy the Style, color, length and price that corresponds with each column.** I have both Excel and SQL; if one is easier to work with than the other, that's fine.
**So basically take this:**
```
+--------+----------+--------+-------+---------+--------+--------+--------+--------+--------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
| STYLE# | COLOR | LENGTH | Price | Size 00 | Size 0 | Size 2 | Size 4 | Size 6 | Size 8 | Size 10 | Size 12 | Size 14 | Size 16 | Size 18 | Size 20 | Size 22 | Size 24 | Size 26 | Size 28 | Size 30 | Size 32 |
+--------+----------+--------+-------+---------+--------+--------+--------+--------+--------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
| 710 | PURPLE | RL | 199 | 0 | 0 | 0 | 2 | 5 | 5 | 5 | 4 | 4 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 710 | DP CORAL | RL | 199 | 0 | 0 | 2 | 0 | 1 | 2 | 1 | 3 | 1 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 720 | RED | RL | 225 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 720 | NAVY | RL | 225 | 0 | 0 | 0 | 1 | 0 | 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+--------+----------+--------+-------+---------+--------+--------+--------+--------+--------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
```
**And turn it into this:**
```
+--------+----------+---------+-----+--------+-------+
| STYLE# | COLOR | Size | QTY | LENGTH | Price |
+--------+----------+---------+-----+--------+-------+
| 710 | PURPLE | Size 00 | 0 | RL | 199 |
| 710 | PURPLE | Size 0 | 0 | RL | 199 |
| 710 | PURPLE | Size 2 | 0 | RL | 199 |
| 710 | PURPLE | Size 4 | 2 | RL | 199 |
| 710 | PURPLE | Size 6 | 5 | RL | 199 |
| 710 | PURPLE | Size 8 | 5 | RL | 199 |
| 710 | PURPLE | Size 10 | 5 | RL | 199 |
| 710 | PURPLE | Size 12 | 4 | RL | 199 |
| 710 | PURPLE | Size 14 | 4 | RL | 199 |
| 710 | PURPLE | Size 16 | 3 | RL | 199 |
| 710 | PURPLE | Size 18 | 0 | RL | 199 |
| 710 | PURPLE | Size 20 | 0 | RL | 199 |
| 710 | PURPLE | Size 22 | 0 | RL | 199 |
| 710 | PURPLE | Size 24 | 0 | RL | 199 |
| 710 | PURPLE | Size 26 | 0 | RL | 199 |
| 710 | PURPLE | Size 28 | 0 | RL | 199 |
| 710 | PURPLE | Size 30 | 0 | RL | 199 |
| 710 | PURPLE | Size 32 | 0 | RL | 199 |
| 710 | DP CORAL | Size 00 | 0 | RL | 199 |
| 710 | DP CORAL | Size 0 | 0 | RL | 199 |
| 710 | DP CORAL | Size 2 | 2 | RL | 199 |
| 710 | DP CORAL | Size 4 | 0 | RL | 199 |
| 710 | DP CORAL | Size 6 | 1 | RL | 199 |
| 710 | DP CORAL | Size 8 | 2 | RL | 199 |
| 710 | DP CORAL | Size 10 | 1 | RL | 199 |
| 710 | DP CORAL | Size 12 | 3 | RL | 199 |
| 710 | DP CORAL | Size 14 | 1 | RL | 199 |
| 710 | DP CORAL | Size 16 | 3 | RL | 199 |
| 710 | DP CORAL | Size 18 | 1 | RL | 199 |
| 710 | DP CORAL | Size 20 | 0 | RL | 199 |
| 710 | DP CORAL | Size 22 | 0 | RL | 199 |
| 710 | DP CORAL | Size 24 | 0 | RL | 199 |
| 710 | DP CORAL | Size 26 | 0 | RL | 199 |
| 710 | DP CORAL | Size 28 | 0 | RL | 199 |
| 710 | DP CORAL | Size 30 | 0 | RL | 199 |
| 710 | DP CORAL | Size 32 | 0 | RL | 199 |
| 710 | DP CORAL | Size 00 | 0 | RL | 199 |
| 720 | RED | Size 0 | 0 | RL | 225 |
| 720 | RED | Size 2 | 1 | RL | 225 |
| 720 | RED | Size 4 | 0 | RL | 225 |
| 720 | RED | Size 6 | 0 | RL | 225 |
| 720 | RED | Size 8 | 0 | RL | 225 |
| 720 | RED | Size 10 | 1 | RL | 225 |
| 720 | RED | Size 12 | 0 | RL | 225 |
| 720 | RED | Size 14 | 0 | RL | 225 |
| 720 | RED | Size 16 | 0 | RL | 225 |
| 720 | RED | Size 18 | 0 | RL | 225 |
| 720 | RED | Size 20 | 0 | RL | 225 |
| 720 | RED | Size 22 | 0 | RL | 225 |
| 720 | RED | Size 24 | 0 | RL | 225 |
| 720 | RED | Size 26 | 0 | RL | 225 |
| 720 | RED | Size 28 | 0 | RL | 225 |
| 720 | RED | Size 30 | 0 | RL | 225 |
| 720 | RED | Size 32 | 0 | RL | 225 |
| 720 | NAVY | Size 00 | 0 | RL | 225 |
| 720 | NAVY | Size 0 | 0 | RL | 225 |
| 720 | NAVY | Size 2 | 0 | RL | 225 |
| 720 | NAVY | Size 4 | 1 | RL | 225 |
| 720 | NAVY | Size 6 | 0 | RL | 225 |
| 720 | NAVY | Size 8 | 1 | RL | 225 |
| 720 | NAVY | Size 10 | 2 | RL | 225 |
| 720 | NAVY | Size 12 | 0 | RL | 225 |
| 720 | NAVY | Size 14 | 0 | RL | 225 |
| 720 | NAVY | Size 16 | 0 | RL | 225 |
| 720 | NAVY | Size 18 | 0 | RL | 225 |
| 720 | NAVY | Size 20 | 0 | RL | 225 |
| 720 | NAVY | Size 22 | 0 | RL | 225 |
| 720 | NAVY | Size 24 | 0 | RL | 225 |
| 720 | NAVY | Size 26 | 0 | RL | 225 |
| 720 | NAVY | Size 28 | 0 | RL | 225 |
| 720 | NAVY | Size 30 | 0 | RL | 225 |
| 720 | NAVY | Size 32 | 0 | RL | 225 |
+--------+----------+---------+-----+--------+-------+
```
Any help on this would be greatly appreciated. | I was able to figure out what I need by just using excel and joining the color, price, then adding the corresponding size to each qty level with a pipe-delimiter separating each value. Then transposing it vertically then, using the text to column function in excel to separate everything out. | In SQL you can do something like this I think(sorry I am a little rusty)..
```
SELECT style, color, size, qty, length price,
FROM table_name
ORDER BY style DESC, color, size DESC, qty DESC, length, price;
``` | Excel, SQL - Change select header rows to columns and insert values for each column | [
"",
"mysql",
"sql",
"sql-server",
"excel",
"vba",
""
] |
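Since the question mentions SQL as an option: the classic SQL way to unpivot fixed columns is one `SELECT` per size column glued together with `UNION ALL`. A sketch in SQLite via Python, reduced to three size columns for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wide (style INTEGER, color TEXT, "length" TEXT, price INTEGER,
                   size_00 INTEGER, size_0 INTEGER, size_2 INTEGER);
INSERT INTO wide VALUES (710, 'PURPLE', 'RL', 199, 0, 0, 0),
                        (720, 'RED',    'RL', 225, 0, 0, 1);
""")

# One branch per size column; UNION ALL stacks them into rows.
rows = con.execute("""
    SELECT style, color, 'Size 00' AS size, size_00 AS qty, "length", price FROM wide
    UNION ALL
    SELECT style, color, 'Size 0',  size_0,  "length", price FROM wide
    UNION ALL
    SELECT style, color, 'Size 2',  size_2,  "length", price FROM wide
    ORDER BY style, size
""").fetchall()
print(rows)
```

For the full table this means one `UNION ALL` branch per size column, which is verbose but portable to every SQL engine.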
I have a Database set up with a column for `Datetime`.
In my application a have a date selector.
The issue is that the Db uses American Date Format `(MM/DD/YYYY)` where as the app uses European Date Format `(DD/MM/YYYY)`.
Is there a way to set up the column in the database to use European date format?
I don't want to have to convert through the code in the application. | SQL Server doesn't store a `DateTime` in any string format - it's stored as an 8 byte numerical value, and a `DATETIME` is a `DATETIME` is a `DATETIME`.
The various settings (language, date format) only influence how the `DateTime` is shown to you in SQL Server Management Studio - or how it is parsed when you attempt to convert a string to a `DateTime`.
There are many formats supported by SQL Server - see the [MSDN Books Online on CAST and CONVERT](http://msdn.microsoft.com/en-us/library/ms187928.aspx).
So if you want to see your `DateTime` in US format, use
```
SELECT CONVERT(VARCHAR(30), YourDateTimeColumn, 101)
```
and if you need European (British/French/German) format, use
```
SELECT CONVERT(VARCHAR(30), YourDateTimeColumn, 103)
```
It's is a commonly accepted "Best Practice" to avoid using dates as string as much as possible - if ever possible, use the *native* SQL Server and .NET `DATETIME` datatype for sending back and forth dates (which is **independent** of any regional formatting). Try to convert the `DATETIME` to string **only** when you need to show it (preferably only in your UI - not in your database!)
**Update:** if you want to **insert** `DateTime` values, as I said, I would **strongly recommend** to use a **proper** datatype - and not fiddle around with specifically formatted strings.
If you **must** use strings, then by all means use the (slightly adapted) **ISO-8601 date format** that is supported by SQL Server - this format works **always** - regardless of your SQL Server language and dateformat settings.
The [ISO-8601 format](http://msdn.microsoft.com/en-us/library/ms180878.aspx) is supported by SQL Server comes in two flavors:
* `YYYYMMDD` for just dates (no time portion); note here: **no dashes!**, that's very important! `YYYY-MM-DD` is **NOT** independent of the dateformat settings in your SQL Server and will **NOT** work in all situations!
or:
* `YYYY-MM-DDTHH:MM:SS` for dates and times - note here: this format *has* dashes (but they *can* be omitted), and a fixed `T` as delimiter between the date and time portion of your `DATETIME`.
This is valid for SQL Server 2000 and newer.
If you use SQL Server 2008 or newer and the `DATE` datatype (only `DATE` - **not** `DATETIME`!), then you can indeed also use the `YYYY-MM-DD` format and that will work, too, with any settings in your SQL Server.
Don't ask me why this whole topic is so tricky and somewhat confusing - that's just the way it is. But with the `YYYYMMDD` format, you should be fine for any version of SQL Server and for any language and dateformat setting in your SQL Server.
The recommendation for SQL Server 2008 and newer is to use DATE if you only need the date portion, and DATETIME2(n) when you need both date and time. You should try to start phasing out the DATETIME datatype if ever possible. | About "*YYYY-MM-DDTHH:MM:SS for dates and times - note here: this format has dashes (but they can be omitted), and a fixed T as delimiter between the date and time portion of your DATETIME*":
Another option is the usage of the usual space as separator between date and time: YYYYMMDD HH:MM:SS or **YYYYMMDD HH:MM:SS.SSS**.
But then, again, dashes have to be omitted: as said for the long format, " *note here: **no dashes!**, that's very important! YYYY-MM-DD is **NOT** independent of the dateformat settings in your SQL Server and will **NOT** work in all situations!*"
Please note that the time part **needs** the colons!
Converting to DateTime2 is more tolerant, it works with and without dashes, even using European locales. | Getting European Date Format SQL | [
"",
"sql",
"sql-server-2008",
"date",
"datetime",
""
] |
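The core advice above — store native date/time values and format only at display time — can be sketched in SQLite via Python, where `strftime` plays the role of T-SQL's `CONVERT` styles 101 and 103 (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (placed_at TEXT)")      # ISO-8601 text
con.execute("INSERT INTO orders VALUES ('1965-10-31 01:00:00')")

# One stored value, two presentations -- US and European.
us, eu = con.execute("""
    SELECT strftime('%m/%d/%Y', placed_at),   -- like CONVERT(..., 101)
           strftime('%d/%m/%Y', placed_at)    -- like CONVERT(..., 103)
    FROM orders
""").fetchone()
print(us, eu)
```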
How do I calculate an aggregate function using a MySQL database? The following query returns "Invalid use of group function".
```
select avg(max(qty)) from tb_name.
``` | You do not have any groups in your statement and if you just read it like this you should understand why it is not yielding a sensible result:
You say `Give me the average of the maximum value for qty from tb_name`, but since you are not grouping by anything there is only one maximum value.
If you need both values in one query, you can do so, but you need to have a grouping on your query for example like this:
```
select avg(qty), max(qty) from tb_name group by SomethingOtherThanQty
```
From your comments I guess you are indeed looking for a maximum of all averages over some group, for that a query with subquery could look like this:
```
SELECT MAX(AVERAGE) FROM
(SELECT
AVG(qty) AS AVERAGE
FROM tb_name
GROUP BY YourGroupingCriteria) AS SubQuery
``` | You can do the following:
```
SELECT AVG(qty) FROM tb_name
```
This will give you the average value of quantity
OR
```
SELECT MAX(qty) FROM tb_name
```
This will give you the maximum value of quantity
Using both AVG and MAX on the same table is both logically and syntax-wise incorrect. | how to use this aggregate function in mysql database | [
"",
"mysql",
"sql",
""
] |
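The accepted answer's final query — an average per group, then the maximum over those averages — runs as-is on SQLite; a quick check from Python with invented data (`grp` stands in for `YourGroupingCriteria`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tb_name (qty INTEGER, grp TEXT);
INSERT INTO tb_name VALUES (2, 'g1'), (4, 'g1'),   -- avg 3
                           (10, 'g2'), (20, 'g2'); -- avg 15
""")

result = con.execute("""
    SELECT MAX(average)
    FROM (SELECT AVG(qty) AS average FROM tb_name GROUP BY grp) AS sub
""").fetchone()[0]
print(result)  # the larger of the two group averages
```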
I have problems on query. I need the result that shows top 1 for each records from the table that has same id. Here is the Table Description.
Table `people`:
```
id | name |
--------------
01 | john |
02 | alex |
```
Table `job`:
```
id | job | date start |
---------------------------------
01 | student | 1/1/2013 |
01 | employee | 1/1/2014 |
01 | manager | 3/18/2014 |
02 | student | 3/1/2013 |
02 | engineer | 3/3/2014 |
```
I need the result to show 1 row for each id.
Here is my SQL query :
```
select id,name,job,date_start
from people,job
group by date_start,id,name,job
```
I got the wrong result from that query. I don't know how to show only one record for each id.
Here is the result I want :
Query result
```
id | name | job | date_start |
----------------------------------
01 | john | manager | 3/18/2014 | -> max date from id='01'
02 | alex | engineer| 3/3/2014 | -> max date from id='02'
```
It's ordered by date and selected only 1 result from each id with the max date.
How can I do this ? | **Try this**
```
WITH j AS
(
SELECT
ROW_NUMBER() OVER(PARTITION BY id ORDER BY date_Start DESC) AS RowNumber,
id, job, date_start
FROM job
)
select p.id, p.name, j.job, j.date_start
from people p
inner join j
on p.id = j.id
and j.RowNumber = 1
```
As you requested..
* OVER is just like a grouping and sorting area in your select query, but it is only used by functions like ROW\_Number()/RANK()/DENSE\_RANK()/NTILE(n) and the like. [see here for more info](http://technet.microsoft.com/en-us/library/ms189461.aspx)
* PARTITION BY is like having a group, in my example i partitioned or grouped the rows by id
* ORDER By is like having order by clause in the select query
ROW\_Number() is a function in SQL Server that produces a sequence of integer values from 1 to N (up to the last record); the number generated by ROW\_Number() resets each time the PARTITION BY value changes. | Try this
```
SELECT S.id,s.name,S.job,s.date_start
FROM
(
SELECT T2.id,T1.Name,T2.job,T2.date_start
from people T1 Left Join job T2 ON T1.id =T2.id
group by T2.date_start,T2.id,T1.name,T2.job
) AS S Inner JOIN
(
SELECT Max(date_start) AS date_start,id From job
Group by id
) AS T ON S.date_start = T.date_start
Order By S.ID
```
# [Fiddle Demo](http://sqlfiddle.com/#!6/652ea/2)
**Output**: shown in the Fiddle demo linked above.
"",
"sql",
""
] |
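The accepted `ROW_NUMBER() OVER (PARTITION BY ...)` approach can be exercised with the question's data in SQLite via Python — note the dates are stored as ISO text so `ORDER BY` sorts them correctly, and window functions require SQLite 3.25+ (bundled with most modern Python builds):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE people (id TEXT, name TEXT);
INSERT INTO people VALUES ('01', 'john'), ('02', 'alex');
CREATE TABLE job (id TEXT, job TEXT, date_start TEXT);  -- ISO dates sort as text
INSERT INTO job VALUES
  ('01', 'student',  '2013-01-01'), ('01', 'employee', '2014-01-01'),
  ('01', 'manager',  '2014-03-18'), ('02', 'student',  '2013-03-01'),
  ('02', 'engineer', '2014-03-03');
""")

# Number each person's jobs newest-first, then keep only row 1 per person.
rows = con.execute("""
    WITH j AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY id
                                  ORDER BY date_start DESC) AS rn,
               id, job, date_start
        FROM job)
    SELECT p.id, p.name, j.job, j.date_start
    FROM people p
    JOIN j ON p.id = j.id AND j.rn = 1
    ORDER BY p.id
""").fetchall()
print(rows)
```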
I have a `user` table with a one-to-many relationship to a `log` table.
Essentially, every time a user record is updated, a record is inserted into the log table.
Right now what I’d like to do is update the user table with the most recent log entry, setting their current status and notes fields to the most recently logged.
> **log**
> id
> user\_id
> status
> notes
> datetime
>
> **user**
> id
> status
> notes
The following Query almost works except it gets the first entry in the log table.
```
UPDATE user u
INNER JOIN log l
ON u.user_id = l.user_id
SET u.`status` = l.`status`, u.notes = l.notes
```
I would like it get the most recent, either based on the `datetime` field or simply the max `id` | Select the MAX(id) in the WHERE clause of the UPDATE statement like so:
```
UPDATE
u
SET
u.status = l.status,
u.notes = l.notes
FROM
user u
INNER JOIN log l ON u.user_id = l.user_id
WHERE
l.id = (
SELECT
MAX(id)
FROM
log
WHERE
user_id = u.user_id);
``` | You need to manually find the last record. A good way to do this is with `not exists`:
```
UPDATE user u INNER JOIN
log l
ON u.user_id = l.user_id
SET u.`status` = l.`status`, u.notes = l.notes
WHERE not exists (select 1
from log l2
where l2.user_id = l.user_id and
l2.datetime > l.datetime
);
```
This will work best with an index on `log(user_id, datetime)`. | Update with JOIN from another table and get the most recent | [
"",
"mysql",
"sql",
""
] |
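The question is tagged MySQL, where the accepted answer's `UPDATE ... FROM` syntax isn't available — but its `MAX(id)` idea is portable. A sketch with correlated subqueries in SQLite via Python (the table is renamed to `users` and the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER, status TEXT, notes TEXT);
INSERT INTO users VALUES (1, 'stale', 'stale');
CREATE TABLE log (id INTEGER, user_id INTEGER, status TEXT, notes TEXT);
INSERT INTO log VALUES (1, 1, 'active', 'first entry'),
                       (2, 1, 'closed', 'latest entry');
""")

# Pull status/notes from each user's highest log id.
con.execute("""
    UPDATE users SET
      status = (SELECT l.status FROM log l
                WHERE l.user_id = users.id
                  AND l.id = (SELECT MAX(id) FROM log WHERE user_id = users.id)),
      notes  = (SELECT l.notes FROM log l
                WHERE l.user_id = users.id
                  AND l.id = (SELECT MAX(id) FROM log WHERE user_id = users.id))
    WHERE EXISTS (SELECT 1 FROM log WHERE log.user_id = users.id)
""")
row = con.execute("SELECT status, notes FROM users WHERE id = 1").fetchone()
print(row)  # the most recent log entry won
```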
I tried to find the answer myself by googling, but I can't find a solution.
I have two tables, actually.
Suppose this is the students table:
```
id | sname | cid
1 a 1
2 b 2
3 c 2
4 d 3
5 e 4
6 f 3
7 g 4
8 h 3
```
This is the class table
```
id | sname
1 c1
2 c2
3 c3
4 c4
```
I want the result like below:
```
Result:
id | cname | sname | count
1 c1 a 1
2 c2 b 2
3 c2 c 2
4 c3 d 3
1 c4 e 2
2 c3 f 3
3 c4 g 2
4 c3 h 3
```
As you can see, in the result each student's name is shown alongside the class name, and the count column holds the number of students in that class.
How can I do this ? | Try this:
```
SELECT * FROM
((SELECT C.id,C.sname as cname,S.sname as sname
FROM class C INNER JOIN students S on C.id=S.cid) T1
LEFT JOIN
(Select C.id,count(S.cid) as count
FROM class C INNER JOIN students S on C.id=S.cid
GROUP BY C.id) T2 on T2.id=T1.id)
ORDER BY sname
```
Result:
```
ID CNAME SNAME COUNT
1 c1 a 1
2 c2 b 2
2 c2 c 2
3 c3 d 3
4 c4 e 2
3 c3 f 3
4 c4 g 2
3 c3 h 3
```
See result in [**SQL Fiddle**](http://www.sqlfiddle.com/#!9/ba176/18).
**Explanation:**
First part of the inner query selects id,cname and sname. Second part selects the count. And in outer query, we just join them. | A join with a subquery for count should do:
```
SELECT
s.id `id`,
c.sname `cname`,
s.sname `sname`,
(SELECT COUNT(1) FROM students WHERE students.cid = c.id) `count`
FROM
students s
LEFT JOIN class c ON s.cid = s.id
``` | Show count value and multiple row in mysql | [
"",
"mysql",
"sql",
""
] |
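The accepted answer's shape — the name rows joined against a per-class count subquery — can be checked with the question's data in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students (id INTEGER, sname TEXT, cid INTEGER);
INSERT INTO students VALUES (1,'a',1),(2,'b',2),(3,'c',2),(4,'d',3),
                            (5,'e',4),(6,'f',3),(7,'g',4),(8,'h',3);
CREATE TABLE class (id INTEGER, sname TEXT);
INSERT INTO class VALUES (1,'c1'),(2,'c2'),(3,'c3'),(4,'c4');
""")

# Each student row carries its class name plus the class's head count.
rows = con.execute("""
    SELECT s.id, c.sname AS cname, s.sname, t.cnt AS "count"
    FROM students s
    JOIN class c ON c.id = s.cid
    JOIN (SELECT cid, COUNT(*) AS cnt
          FROM students GROUP BY cid) t ON t.cid = s.cid
    ORDER BY s.id
""").fetchall()
for r in rows:
    print(r)
```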
I have a Patient activity table that records every activity of the patient right from the time the patient got admitted to the hospital till the patient got discharged. Here is the table command
```
Create table activity
( activityid int PRIMARY KEY NOT NULL,
calendarid int
admissionID int,
activitydescription varchar(100),
admitTime datetime,
dischargetime datetime,
foreign key (admissionID) references admission(admissionID)
)
```
The data looks like this:
```
activityID calendarid admissionID activitydescription admitTime dischargeTime
1 100 10 Patient Admitted 1/1/2013 10:15 -1
2 100 10 Activity 1 -1 -1
3 100 10 Activity 2 -1 -1
4 100 10 Patient Discharged -1 1/4/2013 13:15
```
For every calendarID defined, the set of admissionid repeats. For a given calendarid, the admissionsid(s) are unique. For my analysis, I want to write a query to display admissionid, calendarid, admitTime and dischargetime.
```
select admissionId, calendarid, admitTime=
(select distinct admitTime
from activity a1
where a1.admissionID=a.admissionID and a1.calendarID=a.calendarid),
dischargeTime=
(select distinct dischargeTime
from activity a1
where a1.admissionID=a.admissionID and a1.calendarID=a.calendarid)
from activity a
where calendarid=100
```
When I individually assign numbers, it works, otherwise it comes up with this message:
> Subquery returned more than 1 value.
What am I doing wrong? | Try this !
```
select admissionId, calendarid, admitTime=
(select top(1) admitTime
from activity a1
where a1.admisionID=a.admissionID and a1.calendarID=a.calendarid),
dischargeTime=
(select top(1) dischargeTime
from activity a1
where a1.admisionID=a.admissionID and a1.calendarID=a.calendarid)
from activity a
where calendarid=100
``` | This should get you what you want, with a bit less of a performance hit than subqueries:
```
select a1.admissionId
,a1.calendarid
,a2.admitTime
,a3.dischargeTime
from activity a1
left join activity a2
on a1.calendarid = a2.calendarid
and a2.admitTime <> -1
left join activity a3
on a1.calendarid = a3.calendarid
and a3.dischargeTime <> -1
where a1.calendarid=100
``` | Troubleshooting SQL Query | [
"",
"sql",
"sql-server",
""
] |
I want the output of SQL statement based on some if else condition.
For Ex:
```
select salary from EMP
```
EMP table has a column DOJ.
Now I want the output to be '-' if the DOJ is less than 90 days ago.
How can i do this ? | ```
SELECT CASE WHEN (DATEDIFF(DAY,DOJ,getdate()) < 90) then '-' else cast(Salary as varchar(15)) end as Salary
FROM EMP
``` | In sql-server
```
select *,case when DATEDIFF(DAY,DOJ,getdate()) <=90 then '-' else null end as mytest
from emp
``` | SQL statement with if else | [
"",
"sql",
"if-statement",
""
] |
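The accepted `CASE`/`DATEDIFF` pattern translates to SQLite, where the day difference comes from `julianday`; a deterministic sketch from Python using a fixed "today" instead of `getdate()` (sample rows invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (doj TEXT, salary INTEGER);
INSERT INTO emp VALUES ('2014-05-01', 1000),   -- 31 days before 'today'
                       ('2014-01-01', 2000);   -- 151 days before 'today'
""")

today = '2014-06-01'  # stand-in for getdate(), to keep the output stable
rows = con.execute("""
    SELECT CASE WHEN julianday(?) - julianday(doj) < 90
                THEN '-'
                ELSE CAST(salary AS TEXT)
           END AS salary
    FROM emp
""", (today,)).fetchall()
print(rows)  # recent hire masked, older one shown
```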
I have a table called #CSVTest\_Data.
I wish to replace the content of that table with the result of the following query:
```
SELECT FirstTimeTaken,
LatestTimeTaken,
Market,
Outcome,
Odds,
NumberOfBets,
VolumeMatched,
InPlay
FROM #CSVTest_Data
EXCEPT
SELECT OddsFirstTimeTaken,
OddsLastTimeTaken,
MarketName,
Outcome,
Odds,
NumberOfBets,
VolumeMatched,
InPlay
FROM Data
```
I would appreciate if somebody can tell me how to achieve this. Thanks.
EDIT:
I worked out another way to do this from an idea that one of you guys gave me. It requires creating a new temp table and using that thereafter, but it works:
```
SELECT * INTO #CSVTest_Data_new FROM #CSVTest_Data WHERE 1 = 0;
INSERT INTO #CSVTest_Data_new
SELECT FirstTimeTaken, LatestTimeTaken, Market, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay
FROM #CSVTest_Data -- Coming data
EXCEPT --Minus
SELECT OddsFirstTimeTaken, OddsLastTimeTaken, MarketName, Outcome, Odds, NumberOfBets, VolumeMatched, InPlay
FROM Data --Existing Data
```
I think I will try Damien's idea though as that looks good and avoids an additional temp table. Why the down votes on my question? I know I'm a noob at this but that's why I need the helpful advice... | Instead of seeking to replace a table, using a query based on the same table, just do a `DELETE`:
```
DELETE #CSVTest_Data
FROM #CSVTest_Data d
WHERE EXISTS( SELECT * from Data d2 WHERE
d.FirstTimeTaken = d2.OddsFirstTimeTaken AND
d.LastTimeTaken = d2.OddsLastTimeTaken AND
d.Market = d2.MarketName AND
d.Outcome = d2.Outcome AND
d.Odds = d2.Odds AND
d.NumberOfBets = d2.NumberOfBets AND
d.VolumeMatched = d2.VolumeMatched AND
d.InPlay = d2.InPlay)
``` | First, the table #CSVTest\_Data must have the same number of columns as your SELECT query...
Then, you add at the top of your query this:
```
INSERT INTO #CSVTest_Data
``` | Replace content of a table with the results of a query in SQL Server | [
"",
"sql",
"sql-server",
""
] |
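The accepted `DELETE ... WHERE EXISTS` approach behaves the same way in SQLite; a quick check from Python, condensed to two matching columns with invented staging data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE staging (market TEXT, odds REAL);
INSERT INTO staging VALUES ('Match Odds', 1.5), ('Over 2.5', 2.0);
CREATE TABLE data (market TEXT, odds REAL);
INSERT INTO data VALUES ('Match Odds', 1.5);   -- already loaded
""")

# Remove staging rows that already exist in the main table.
con.execute("""
    DELETE FROM staging
    WHERE EXISTS (SELECT 1 FROM data d
                  WHERE d.market = staging.market
                    AND d.odds = staging.odds)
""")
rows = con.execute("SELECT * FROM staging").fetchall()
print(rows)  # only the genuinely new row survives
```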
Is it possible to insert a record including timestamps into the database using usual statements? I don't want to use PreparedStatements.
My current query:
```
INSERT INTO CUSTOMER VALUES('some name',1965-10-31 01:00:00.0);
```
At the moment I get the error
> com.mysql.jdbc.MysqlDataTruncation: Data truncation: Incorrect datetime value...
So is there a way to put timestamps into an insert query? | You forgot the quotes around the date
```
INSERT INTO CUSTOMER
VALUES('some name', '1965-10-31 01:00:00.0');
``` | Try like this
```
INSERT INTO CUSTOMER VALUES('some name','1965-10-31 01:00:00');
``` | Insert timestamp without prepared statement | [
"",
"mysql",
"sql",
"jdbc",
"timestamp",
""
] |
I have a table 'Goods' with different information about goods (name, price, etc). I need to create a view with the same schema as the 'Goods' table, but the view must always appear empty. When a user adds a new good through the view, it is saved in the 'Goods' table, yet the view still looks empty the next time the user opens it. The main idea is not to show the existing data to a user who only has access to the view.
```
create table T (ID int not null)
go
create view V
as
select * from T where 1=0
go
select * from V
go
insert into V(ID) values (10)
go
select * from V
go
select * from T
```
The two selects from `V` return 0 rows. The select from `T` returns one row:
```
ID
----
10
```
`CHECK OPTION`:
> Forces all data modification statements executed against the view to follow the criteria set within select\_statement. When a row is modified through a view, the WITH CHECK OPTION makes sure the data remains visible through the view after the modification is committed.
And you want the opposite - you want to allow data modifications performed through the view to create rows which are *invisible* through the view. | Create table Goods1 with "insert trigger" on it which make insert into Goods and delete from Goods1 | SQL.Working with views | [
"",
"sql",
"sql-server",
"view",
"triggers",
""
] |
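SQL Server happens to make the `WHERE 1=0` view insertable when `CHECK OPTION` is left off; engines without that behaviour can get the same effect with an `INSTEAD OF` trigger. A sketch in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE T (ID INTEGER NOT NULL);
CREATE VIEW V AS SELECT * FROM T WHERE 1=0;      -- always looks empty
CREATE TRIGGER V_insert INSTEAD OF INSERT ON V
BEGIN
    INSERT INTO T (ID) VALUES (NEW.ID);          -- but writes reach the table
END;
""")

con.execute("INSERT INTO V (ID) VALUES (10)")
via_view  = con.execute("SELECT * FROM V").fetchall()
via_table = con.execute("SELECT * FROM T").fetchall()
print(via_view, via_table)  # view stays empty, table holds the row
```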
Guys I am trying to make a simple ticket generation system for my company as a favor. For now, I have a table called `tblTicket` and another table called `tblEngineer` in my MSSQL database.
My application is in C# windows forms, so on the new ticket generation form, I have many textboxes and a comboBox for assigning engineer which is being populated by `tblEngineer`. Upon ticket generation, all the information entered in this form is stored in `tblTicket` along with `EngineerID` from `tblEngineer`.
It was working great but then my client asked me to add option so that 3 engineers could be assigned on a single ticket.
In the future, I will have to develop a "engineer module" in which engineer will be able to see tickets assigned to him only. It will include a login system for authentication purpose. So if a new ticket is generated and is assigned to 3 engineers, only that 3 engineers should be able to see the ticket and not the others.
How should I go about this approach ? It was really easy if there was only one engineer. Now do I need to make a many-to-many table like `tblAssignedEng` with ticket id and multiple engineer ids? referenced as a foreign key? I haven't much experience with SQL so I am kinda struggling here and any help would be appreciated. | Standard practice would be this, as an example...
You have a "tblEngineer" table...
```
tblEngineer
-----------
(PK) EngineerId
EngineerName
```
And a "tblTicket" table...
```
tblTicket
---------
(PK) TicketId
TicketDetails
```
You now add a link table called "tblEngineerTickets" (or similar) which references the Ids of both the Engineer and their tickets...
```
tblEngineerTickets
------------------
(PK) EngineerTicketId
(FK) EngineerId
(FK) TicketId
```
So that way, you keep all the Ticket Details and the Engineer details separately, and link them using ONLY the Ids... the link table would look something like this...
```
EngineerId | TicketId
------------+----------
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
2 | 3
```
This way, you can have multiple engineers assigned to one ticket, and/or multiple tickets assigned to an engineer.
This is best practice and it gives you the most opportunity for expansion. If you were to just add fields to your existing Engineer tables saying "Ticket1", "Ticket2", "Ticket3" etc... you would be effectively be placing a limit on the code, and potentially you'd have to keep going in to the code to add columns. | I would suggest making 1-to-Many relationships instead of making many-to-many relationships. You can accomplish this by having a table that maps between your `tblTicket` and `tblEngineer`. For Example:
```
tblEngineer
-----------
(PK) iEngineerID
tblTicket
---------
(PK) iTicketID
tblTicketEngineerMap
--------------------
(PK) iMapID
(FK) iEngineerID
(FK) iTicketID
```
By doing it this way, an Engineer and a Ticker can be in many maps, making two 1-to-Many relationships, and allowing the functionality you seek.
[Check out this thread](https://stackoverflow.com/questions/7339143/why-no-many-to-many-relationships) as to why you should try to avoid many-to-many table designs. | Many to many relationship? | [
"",
"sql",
"many-to-many",
""
] |
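The link-table design above is easy to exercise end-to-end. A sketch in SQLite via Python with invented engineer names, one shared ticket, and the "tickets visible to an engineer" query the future login module would run:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblEngineer (EngineerId INTEGER PRIMARY KEY, EngineerName TEXT);
CREATE TABLE tblTicket   (TicketId   INTEGER PRIMARY KEY, TicketDetails TEXT);
CREATE TABLE tblEngineerTickets (
    EngineerTicketId INTEGER PRIMARY KEY,
    EngineerId INTEGER REFERENCES tblEngineer(EngineerId),
    TicketId   INTEGER REFERENCES tblTicket(TicketId));
INSERT INTO tblEngineer VALUES (1,'Ann'), (2,'Bob'), (3,'Cal'), (4,'Dee');
INSERT INTO tblTicket   VALUES (1,'Broken pump');
INSERT INTO tblEngineerTickets (EngineerId, TicketId)
    VALUES (1,1), (2,1), (3,1);               -- ticket 1 assigned to 3 engineers
""")

def tickets_for(engineer_id):
    # Only tickets linked to this engineer through the link table.
    return con.execute("""
        SELECT t.TicketDetails
        FROM tblEngineerTickets et
        JOIN tblTicket t ON t.TicketId = et.TicketId
        WHERE et.EngineerId = ?""", (engineer_id,)).fetchall()

mine, theirs = tickets_for(1), tickets_for(4)
print(mine, theirs)  # engineer 1 sees the ticket, engineer 4 sees nothing
```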
I have a MySQL table named `test`, and I need to add three more columns: `VERSION` of datatype bigint, `CREATE_TIMESTAMP` of datatype `Date`, and `LAST_UPDATE_TIMESTAMP` of datatype `Date`.
The default value for `VERSION` should be `0` and for `CREATE_TIMESTAMP` and `LAST_UPDATE_TIMESTAMP` it should be the current datetime.
Now I am doing something like:
```
alter table masterdatabase.test add column VERSION bigint(50) default 0;
alter table masterdatabase.test add column CREATE_TIMESTAMP date default NOW();
alter table masterdatabase.test add column LAST_UPDATE_TIMESTAMP date default NOW();
```
It adds the `VERSION` column to the table with default value `0` in each row, but it could not create the other two columns of datatype `Date`.
It shows an error in the MySQL log:

```
alter table masterdatabase.test add column CREATE_TIMESTAMP date default NOW() | Error Code: 1067. Invalid default value for 'CREATE_TIMESTAMP' 0.000 sec
```
I don't understand what exactly prevents me from creating those columns.
Please suggest is there any other way to do this.
Thanks in Advance! | Use the **CURRENT\_TIMESTAMP** instead of **NOW()**:
```
ALTER TABLE masterdatabase.test add column CREATE_TIMESTAMP TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
``` | try this
```
alter table masterdatabase.test add column CREATE_TIMESTAMP datetime default NOW() ;
```
You need to store the date value in a DATETIME column, as NOW() will return the date and time in the format of *2008-11-11 12:45:34*, and DATETIME can store this value. For more detail see [this](https://dev.mysql.com/doc/refman/5.1/en/datetime.html)
"",
"mysql",
"sql",
"date",
""
] |
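The `DEFAULT CURRENT_TIMESTAMP` behaviour can be seen in SQLite too — with the caveat that SQLite only allows constant defaults in `ALTER TABLE ... ADD COLUMN`, so this sketch declares the columns at `CREATE TABLE` time:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (
    id INTEGER PRIMARY KEY,
    version INTEGER DEFAULT 0,
    create_timestamp TEXT DEFAULT CURRENT_TIMESTAMP);
INSERT INTO test (id) VALUES (1);   -- defaults fill the other columns
""")

version, created = con.execute(
    "SELECT version, create_timestamp FROM test").fetchone()
print(version, created)  # 0 and a 'YYYY-MM-DD HH:MM:SS' string
```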
How can I get column values comma-separated in SQL Server? Below is my SQL query:
```
select BandName from BandMaster where BandId<100
```
I need to get the BandName values comma seperatedly in a single query. | ```
Use XML path for this.
select (select BandName+',' from BandMaster where BandId<100 for xml path('')) as NewColumnName
Try this
``` | ```
Declare @retStr varchar(max) = ''
select @retStr = @retStr + BandName + ',' from BandMaster
where BandId<100
Select @retStr
``` | How to get column values comma seperatedly | [
"",
"sql",
"sql-server",
""
] |
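`FOR XML PATH` is SQL Server's trick; MySQL and SQLite spell the same aggregation `GROUP_CONCAT`. A sketch in SQLite via Python (band names invented; the inner `ORDER BY` keeps the output deterministic):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE BandMaster (BandId INTEGER, BandName TEXT);
INSERT INTO BandMaster VALUES (1,'Abba'), (2,'Blur'), (150,'Zebra');
""")

# Collapse the qualifying names into one comma-separated string.
csv = con.execute("""
    SELECT group_concat(BandName, ',')
    FROM (SELECT BandName FROM BandMaster
          WHERE BandId < 100 ORDER BY BandId)
""").fetchone()[0]
print(csv)
```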
Table `LESSON` has fields `LessonDate, MemberId` (Among others but only these two are relevant)
In English: Give me a list of the dates on which students that have only ever taken 1 class, took that class.
I have tried a number of things. Here's my latest attempt:
```
SELECT LessonDate
FROM LESSON
WHERE (SELECT COUNT(MemberId) GROUP BY (MemberId)) = 1
```
Just keeps returning SQL errors. Obviously, the syntax and logic are off. | The query may seem a little counter-intuitive at first. You need to group by `MemberId` to get the students who only took one class:
```
select max(l.LessonDate) as LessonDate
from Lesson l
group by l.MemberId
having count(*) = 1;
```
Because there is only one class for the student, `max(l.LessonDate)` is that date. Hence this does what you want: it gets all the dates for members who only took one class.
Your line of thinking is also pretty close. Here is the query that I think you were looking for:
```
SELECT LessonDate
FROM LESSON
WHERE MemberId in (SELECT l2.MemberId
FROM Lesson l2
GROUP BY l2.MemberId
HAVING COUNT(*) = 1
);
```
This approach is more generalizable, if you want to get the dates for members that have 2 or 3 rows. | You need to look into `SELECT ... FROM ... GROUP BY ... HAVING` There is a lot of documentation available online by searching `GROUP BY` e.g. [This article](http://www.techonthenet.com/sql/group_by.php)
The following SQL groups by `MemberId` which I thnk is wrong as you want to count the number of member id's,
```
SELECT
LessonDate
, MemberId
, COUNT(*) AS Lesson_Count
FROM
Lesson
GROUP BY
LessonDate
, MemberId
HAVING
COUNT(*) = 1
```
The above query will give you a list of "dates", "members", and the number of lessons taken on that date by that "member". I don't think you need to have the aggregate function (`COUNT(*) AS Lesson_Count`) as part of the select statement but it's often "nice to have" to give you the confidence that your results are as you would expect.
Your query is actually failing because the subquery doesn't have a `FROM` statement but the above is better practice:
```
SELECT LessonDate
FROM LESSON
WHERE (SELECT COUNT(MemberId) FROM Lesson GROUP BY (MemberId)) = 1
``` | SELECT WHERE COUNT = 1 | [
"",
"sql",
"sql-server",
"subquery",
""
] |
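The accepted `IN (... HAVING COUNT(*) = 1)` form runs unchanged on SQLite; a check from Python with invented lesson rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lesson (LessonDate TEXT, MemberId INTEGER);
INSERT INTO lesson VALUES
  ('2014-01-01', 1), ('2014-02-01', 1),   -- member 1: two classes
  ('2014-03-01', 2),                      -- member 2: exactly one
  ('2014-04-01', 3);                      -- member 3: exactly one
""")

rows = con.execute("""
    SELECT LessonDate
    FROM lesson
    WHERE MemberId IN (SELECT MemberId
                       FROM lesson
                       GROUP BY MemberId
                       HAVING COUNT(*) = 1)
    ORDER BY LessonDate
""").fetchall()
print(rows)  # only the one-class members' dates
```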
Consider the following table :
```
create table mixedvalues (value varchar(50));
insert into mixedvalues values
('100'),
('ABC100'),
('200'),
('ABC200'),
('300'),
('ABC300'),
('400'),
('ABC400'),
('500'),
('ABC500');
```
How can I write a select statement that would only return the **numeric** values like
```
100
200
300
400
500
```
### [SQLFiddle](http://www.sqlfiddle.com/#!2/4d894/1) | ```
SELECT *
FROM mixedvalues
WHERE value REGEXP '^[0-9]+$';
``` | ```
SELECT *
FROM mixedvalues
WHERE concat('',value * 1) = value;
```
Reference: [Detect if value is number in MySQL](https://stackoverflow.com/questions/5064977/detect-if-value-is-number-in-mysql) | MySQL - Select only numeric values from varchar column | [
"",
"mysql",
"sql",
"numeric",
"varchar",
""
] |
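As an illustration of the `REGEXP` filter in the question above: SQLite ships without a built-in `REGEXP` implementation, but the operator dispatches to a user-defined `regexp(pattern, value)` function, so the MySQL query can be mimicked from Python's `sqlite3` module:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite translates `X REGEXP Y` into a call to regexp(Y, X),
# i.e. the function receives (pattern, value).
conn.create_function(
    "regexp", 2, lambda pattern, value: re.search(pattern, value) is not None
)

conn.execute("CREATE TABLE mixedvalues (value TEXT)")
conn.executemany("INSERT INTO mixedvalues VALUES (?)",
                 [("100",), ("ABC100",), ("200",), ("ABC200",), ("300",)])

numeric = [r[0] for r in conn.execute(
    "SELECT value FROM mixedvalues WHERE value REGEXP '^[0-9]+$'")]
print(numeric)  # ['100', '200', '300']
```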
I have a table that contains a column showing whether an account is inactive.
The column values are either True or False... I
only want the False items.
So I've included a clause saying:
Where (account.inactive IN ("False"))
But I get an "Operator/Operand type mismatch" error.
I have been able to do other where clauses in this way but this one has me confused. Sorry if this is a newbie question - Thanks for the help.
Here is a copy of the query. It works fine except for the where clause:
```
SELECT
account.accountno,
account.title AS AccountName,
account.sortcode,
CAST(ICASE(account.sortcode = "A",10,
account.sortcode = "B",11,
account.sortcode = "C",13,
account.sortcode = "D",12,
account.sortcode = "E",14,
account.sortcode = "F",15,
account.sortcode = "K",20,
account.sortcode = "M",20,
account.sortcode = "N",20,
account.sortcode = "P",21,
account.sortcode = "Q",22,
account.sortcode = "R",23,
account.sortcode = "S",24,
account.sortcode = "L",30,
account.sortcode = "U",30,
account.sortcode = "V",30,
account.sortcode = "X",30,
account.sortcode = "Z",32,
account.sortcode = "^",33,
account.sortcode = "W",31,
account.sortcode = "Y",34,0) AS NUMERIC) AS GLCatCode,
account.inactive,
system.company AS CompanyName
FROM ((((account
LEFT JOIN trans
ON account.uniqueid = trans.accountid AND account.ledgerno=1 AND trans.trantype NOT IN (8) )
LEFT JOIN period
ON trans.periodno = period.periodno)
LEFT JOIN usercurr
ON account.cncycode = usercurr.code)
LEFT JOIN system
ON system.uniqueid = system.uniqueid)
Where (account.inactive IN ("False"))
GROUP BY
account.accountno,
AccountName,
account.sortcode,
account.inactive,
CompanyName,
GLCatCode
ORDER BY
account.sortcode , account.accountno
``` | I had a look at the data dictionary and it is logical type field with one character length. I tried a few different things and Where (account.inactive = 0) and it seems to work. | Are you sure the value can ever be 'FALSE'? Check the values available for filtering:
```
SELECT DISTINCT account.inactive
FROM ((((account
LEFT JOIN trans
ON account.uniqueid = trans.accountid AND account.ledgerno=1 AND trans.trantype NOT IN (8) )
LEFT JOIN period
ON trans.periodno = period.periodno)
LEFT JOIN usercurr
ON account.cncycode = usercurr.code)
LEFT JOIN system
ON system.uniqueid = system.uniqueid)
``` | SQL WHERE Clause True or False values | [
"",
"sql",
""
] |
Having a table `TAB_A`:
```
ID | Date | Value
--------------------------------
101 | 2014-03-01 | 101000001
101 | 2014-03-03 | 101000003
101 | 2014-03-06 | 101000006
102 | 2014-03-01 | 102000001
103 | 2014-03-01 | 103000001
```
And, for example, this single record in another table `TAB_B`:
```
ID | Date | TAB_A.Id
-----------------------------------
40002 | 2014-03-05 | 101
```
**I need to get the closest (most *recent*) `TAB_A.Value` to `TAB_B.Date` field** (which in this case would be '101000003' and NOT '101000006').
I've been searching for other responses with similar scenarios (like [this one](https://stackoverflow.com/questions/13153819/how-to-select-only-row-with-max-sequence-without-using-a-subquery)), but this is not exactly what I need.
Any suggestions? Thanks in advance for your help.
EDIT: I forgot to mention that `TAB_A` has over 200K records and `TAB_B` has over 55M records. | Seeing as the tag is sql-server, `Limit` won't work. Instead, use top
```
SELECT TOP 1 ID, Date, Value
FROM TAB_A
WHERE Date < (SELECT Date from TAB_B where ID=40002)
ORDER BY Date DESC
```
or
```
SELECT ID, Date, Value
FROM tab_a
WHERE date=
(SELECT MAX(date)
FROM TAB_A
WHERE Date <
(SELECT Date
FROM TAB_B
WHERE ID=40002)
)
```
If you want just 1 result in the last query, use `DISTINCT`. For example, if the date you were looking for is 2014-03-01, the 2nd query would show you 3 rows; with `DISTINCT`, just 1. In the first query, `TOP 1` already ensures that you get just 1 result.
EDIT: updated for the comment below:
```
SELECT b.id, b.date, a.value FROM
(SELECT TOP 1 ID, Date, Value
FROM TAB_A
WHERE Date < (SELECT Date from TAB_B B where ID=40002)
ORDER BY Date DESC) a
,
(SELECT id,date,[TAB_A.id] FROM tab_b )b
WHERE a.id=b.[TAB_A.id]
```
Excuse my capital letters/small letters inconsistency... | ```
SELECT ID, Date, Value
FROM TAB_A
WHERE Date < (SELECT Date from TAB_B where ID=40002)
ORDER BY Date DESC LIMIT 1
```
First you select the date you need from TAB\_B in a subquery. Then you select all the dates from TAB\_A that are earlier than it (you can change to <= if you need). Then you order descending by date and select the TOP 1 (the latest one). I think that you could also use MAX (but I am not sure). | SQL get the closest row for a certain date in other table | [
"",
"sql",
"sql-server",
""
] |
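The closest-earlier-row lookup above can be sketched with SQLite (where SQL Server's `TOP 1` becomes `ORDER BY ... LIMIT 1`); table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TAB_A (ID INTEGER, Date TEXT, Value INTEGER)")
conn.execute("CREATE TABLE TAB_B (ID INTEGER, Date TEXT, TAB_A_Id INTEGER)")
conn.executemany("INSERT INTO TAB_A VALUES (?, ?, ?)", [
    (101, "2014-03-01", 101000001),
    (101, "2014-03-03", 101000003),
    (101, "2014-03-06", 101000006),
])
conn.execute("INSERT INTO TAB_B VALUES (40002, '2014-03-05', 101)")

# Most recent TAB_A row strictly before the TAB_B date.
row = conn.execute(
    """
    SELECT Value FROM TAB_A
    WHERE ID = 101 AND Date < (SELECT Date FROM TAB_B WHERE ID = 40002)
    ORDER BY Date DESC
    LIMIT 1
    """
).fetchone()
print(row[0])  # 101000003
```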
I want do this:
```
SELECT *
FROM (t1 NATURAL JOIN t2) AS H
.
.
.
```
But it makes this error: `SQL command not properly ended` in this line.
How I can make it if I can? | You code doesn't make any sense. What do you expect to be named as `H` here? Both tables? That's not possible. What if there are fields with the same name in both tables? `H.someField` could be ambiguous.
You can create an alias (with `AS something`) only for individual tables and fields, one alias per table or field, but not for a group of tables.
For example:
```
SELECT *
FROM t1 AS Table1
NATURAL JOIN t2 AS H;
``` | You mean like this :
```
SELECT * FROM TABLE1 AS T1
INNER JOIN TABLE2 T2 ON T1.ID=T2.ID
``` | How to rename a table in SQL with join | [
"",
"sql",
""
] |
I have the following table structures for which I am trying to obtain a totals and subtotals and show a rollup of the values.
```
ChartOfAccounts(AccountNumber, AccountDescription, ParentAccountNumber, IsControlAccount)
Ledger(LedgerId, JournalId, AccountNumber, IsDebit, Amount)
```
I have managed to use CTE to obtain the required Parent-Child relationships but am unsure how to use this get control account balances which rollup into parent accounts.
So far, I have managed to put the following query together which is not entirely what I want --> [SQL Fiddle](http://www.sqlfiddle.com/#!3/d94bc/4/0). The current query does not seem to rollup and group the parent-child totals correctly. (I have excluded the year,month columns from the fiddle)
Another way to describe the problem, would be to say that all control accounts should have the total of it's child accounts.
My required output is the following
(year, month, AccountNumber, AccountDescription, DebitBalance, CreditBalance, Balance)
```
|Account#|Acc Desc | DR | CR | BAL |
|1000 |Accounts Receivable |10000 |5000 |5000 |
|1200 |Buyer Receivables |5000 |0 |5000 |
|12001 |Buyer Receivables - Best Buy |5000 |0 |5000 |
|1500 |Offers |5000 |5000 |0 |
|4000 |Accounts Payable | |4475.06 |4475.06 |
|4100 |Supplier Invoice Payables | |4475.06 |4475.06 |
|41002 |Supplier Invoice Payables - Knechtel | |4475.06 |4475.06 |
|6000 |Revenue | |524.93 |524.93 |
|6100 |Membership Fees Revenue | | |0 |
|6200 |Processing Fees Revenue | |100 |100 |
|62002 |Processing Fees Revenue - Knechtel | |100 |100 |
|6300 |Fees Revenue | |424.93 |424.93 |
|63002 |Fees Revenue - Knechtel | |424.93 |424.93 |
``` | Here is what I came up with and was able to get really close to matching your desired output
```
WITH CTEAcc
AS
(
SELECT
coa.accountDescription,coa.accountnumber,coa.accountnumber as parentaccount
FROM ChartOfAccounts coa
where iscontrolaccount=1
union all select c.accountdescription, coa.accountnumber, c.ParentAccount
from chartofaccounts coa
inner join cteacc c on coa.ParentAccountNumber=c.accountnumber
)
select parentaccount as [Account#], accountdescription as [Acc Desc],
sum(case when isdebit=1 then amount else 0 end) as DR,
sum(case when isdebit=0 then amount else 0 end) as CR,
sum(case when isdebit=1 then amount else 0 end)-sum(case when isdebit=0 then amount else 0 end) as BAL
from (select c.accountdescription, c.accountnumber,
c.parentaccount, l.isdebit, l.amount
from cteacc c
left join ledger l
on c.accountnumber=l.accountnumber
union all select c.accountdescription,
c.accountnumber, c.accountnumber as parentaccount,
l.isdebit, l.amount
from ChartOfAccounts c
inner join ledger l
on c.accountnumber=l.accountnumber where amount<>0) f
group by parentaccount, accountdescription
order by parentaccount
```
Here is the sql fiddle: <http://www.sqlfiddle.com/#!3/d94bc/106> | Yet another variation. Kept the hierarchy and iscontrol fields in just for reference. First it associates the account hierarchy with each account (the recursive cte). Then, for each account, computes sums of the ledger items for the account based on the hierarchy position (and whether it is a control account or not). Finally, wraps in another query to compute the balance of and strip off unused accounts from the output.
```
WITH AccountHierarchy AS (
SELECT AccountNumber
,AccountDescription
,CAST(AccountNumber AS VARCHAR(MAX))
+ '/' AS AccountHierarchy
,IsControlAccount
FROM ChartOfAccounts
WHERE ParentAccountNumber IS NULL
UNION ALL
SELECT c.AccountNumber
,c.AccountDescription
,CAST(h.AccountHierarchy AS VARCHAR(MAX))
+ CAST(c.AccountNumber AS VARCHAR(MAX))
+ '/' AS AccountHierarchy
,c.IsControlAccount
FROM ChartOfAccounts c
INNER JOIN AccountHierarchy h ON (c.ParentAccountNumber = h.AccountNumber)
WHERE ParentAccountNumber IS NOT NULL
)
SELECT AccountNumber
,AccountDescription
,AccountHierarchy
,IsControlAccount
,DR
,CR
,CASE WHEN (DR IS NULL AND CR IS NULL) THEN NULL
ELSE COALESCE(DR, 0) - COALESCE(CR, 0)
END AS BAL
FROM (SELECT h.AccountNumber
,h.AccountDescription
,h.AccountHierarchy
,h.IsControlAccount
,(SELECT SUM(l.Amount)
FROM Ledger l
INNER JOIN AccountHierarchy hd ON (l.AccountNumber = hd.AccountNumber)
WHERE l.IsDebit = 1
AND ( (h.IsControlAccount = 1 AND hd.AccountHierarchy LIKE h.AccountHierarchy + '%')
OR hd.AccountHierarchy = h.AccountHierarchy)
) AS DR
,(SELECT SUM(l.Amount)
FROM Ledger l
INNER JOIN AccountHierarchy hd ON (l.AccountNumber = hd.AccountNumber)
WHERE l.IsDebit = 0
AND ( (h.IsControlAccount = 1 AND hd.AccountHierarchy LIKE h.AccountHierarchy + '%')
OR hd.AccountHierarchy = h.AccountHierarchy)
) AS CR
FROM AccountHierarchy h
) x
WHERE NOT(CR IS NULL AND DR IS NULL)
ORDER BY AccountHierarchy
```
I used this [question](https://stackoverflow.com/questions/12511746/cte-in-sql-server-2008-how-to-calculate-subtotals-recursively/) for a hierarchy example.
**Output:**
```
| ACCOUNTNUMBER | ACCOUNTDESCRIPTION | ACCOUNTHIERARCHY | ISCONTROLACCOUNT | DR | CR | BAL |
|----------------------|------------------------------------|-----------------------------------------------------------------|------------------|--------|-----------|------------|
| 1000 | Accounts Receivable | 1000 / | 1 | 10000 | 5000 | 5000 |
| 1200 | Buyer Receivables | 1000 /1200 / | 1 | 5000 | (null) | 5000 |
| 12001 | Buyer Receivables - Best Buy | 1000 /1200 /12001 / | 0 | 5000 | (null) | 5000 |
| 1500 | Offers | 1000 /1500 / | 0 | 5000 | 5000 | 0 |
| 4000 | Accounts Payable | 4000 / | 1 | (null) | 4475.0685 | -4475.0685 |
| 4100 | Supplier Payables | 4000 /4100 / | 1 | (null) | 4475.0685 | -4475.0685 |
| 41002 | Supplier Payables - Knechtel | 4000 /4100 /41002 / | 0 | (null) | 4475.0685 | -4475.0685 |
| 6000 | Revenue | 6000 / | 1 | (null) | 524.9315 | -524.9315 |
| 6200 | Processing Fees Revenue | 6000 /6200 / | 1 | (null) | 100 | -100 |
| 62002 | Processing Fees Revenue - Knechtel | 6000 /6200 /62002 / | 0 | (null) | 100 | -100 |
| 6300 | Fees Revenue | 6000 /6300 / | 1 | (null) | 424.9315 | -424.9315 |
| 63002 | Fees Revenue - Knechtel | 6000 /6300 /63002 / | 0 | (null) | 424.9315 | -424.9315 |
``` | Getting totals and sub-totals in Parent-Child hierarchy | [
"",
"sql",
"sql-server",
"parent-child",
"rollup",
""
] |
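The roll-up idea behind both answers above can be sketched compactly in SQLite: a recursive CTE pairs every account with each of its ancestors (and itself), then ledger amounts are grouped by the ancestor so control accounts absorb their children's totals. This is an illustration, not the full 25-column schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ChartOfAccounts (AccountNumber TEXT, Parent TEXT);
CREATE TABLE Ledger (AccountNumber TEXT, Amount REAL);
INSERT INTO ChartOfAccounts VALUES ('1000', NULL), ('1200', '1000'), ('12001', '1200');
INSERT INTO Ledger VALUES ('12001', 5000);
""")

# Map every account to each ancestor (and itself), then roll amounts up.
rows = conn.execute("""
WITH RECURSIVE acc(ancestor, account) AS (
    SELECT AccountNumber, AccountNumber FROM ChartOfAccounts
    UNION ALL
    SELECT a.ancestor, c.AccountNumber
    FROM ChartOfAccounts c JOIN acc a ON c.Parent = a.account
)
SELECT acc.ancestor, SUM(l.Amount)
FROM acc JOIN Ledger l ON l.AccountNumber = acc.account
GROUP BY acc.ancestor ORDER BY acc.ancestor
""").fetchall()
print(rows)  # [('1000', 5000.0), ('1200', 5000.0), ('12001', 5000.0)]
```

The 5000 posted to leaf account 12001 surfaces in its own balance and in both ancestors' balances, which is the roll-up behaviour the question asks for.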
I have been given the following question in an assignment. I am not able to understand how to pass arguments to a stored procedure.
Question:
`If n = 1, FibNum(a, b, n) = a`
`If n = 2, FibNum(a, b, n) = b`
`If n > 2, FibNum(a, b, n) = FibNum(a, b, n-1) + FibNum(a, b, n-2) a`.
Create a **recursive stored procedure FibNum in T-SQL** to calculate the n-th Fibonacci number, given the following procedure declaration **CREATE PROC FibNum (@a int, @b int, @n int)** where its exit code is the n-th Fibonacci number.
b. Utilize the stored procedure to calculate the result of FibNum(1, 1, 10). Attach the screenshot. | Procedure
```
create proc FibNum ( @a int, @b int, @n int )
as begin
if @n = 1 return @a
if @n = 2 return @b
if @n > 2 begin
declare @n1 int = @n - 1, @n2 int = @n - 2, @R1 int, @R2 int
exec @R1 = FibNum @a,@b,@n1
exec @R2 = FibNum @a,@b,@n2
return @R1 + @R2
end
return 1
end
```
Utilization :
```
declare @R int
exec @R = FibNum 1,1,10
select @R
``` | This seems like a function problem, but not because of a recursive stored procedure. You can actually do this using a recursive CTE.
The natural way of doing this would be something like:
```
with fib as (
select 1 as i, 0 as n
union all
select 2 as i, 1 as n
union all
select f1.i + 1, f1.n + f2.n
from fib f1 join
     fib f2
     on f2.i = f1.i - 1
where f1.i < 10
)
select *
from fib;
```
But you can't do this because you can't reference the CTE twice in the `from` clause.
Instead, you want to put the two previous values on a single row:
```
with fib as (
select 1 as i, 1 as n, 0 as n1
union all
select f.i + 1, f.n + f.n1, f.n
from fib f
where i < 10
)
select *
from fib;
```
If you want the nth one, then just change the `10` to the number you want and use `select top 1` with an `order by`. | Create a recursive stored procedure in T-SQL to calculate the n-th Fibonacci number | [
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
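The recursion above can also be sketched as a recursive CTE (shown here in SQLite from Python, not T-SQL): carrying the current and previous terms forward means each step needs only one "recursive call" instead of two:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def fibnum(a, b, n):
    """Generalized Fibonacci: FibNum(a, b, 1) = a, FibNum(a, b, 2) = b,
    FibNum(a, b, n) = FibNum(a, b, n-1) + FibNum(a, b, n-2)."""
    return conn.execute(
        """
        WITH RECURSIVE fib(i, prev, cur) AS (
            SELECT 2, ?, ?
            UNION ALL
            SELECT i + 1, cur, prev + cur FROM fib WHERE i < ?
        )
        SELECT CASE WHEN ? = 1 THEN ? ELSE cur END
        FROM fib WHERE i = MAX(2, ?)
        """, (a, b, n, n, a, n)
    ).fetchone()[0]

print(fibnum(1, 1, 10))  # 55
```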
```
CREATE TABLE "TEST"."AB_EMPLOYEE22"
( "NAME" VARCHAR2(20 BYTE),
"AGE" NUMBER,
"SALARY" NUMBER,
"DOB" DATE
)
alter table "TEST"."AB_EMPLOYEE22" add constraint
Age_check check((ROUND((sysdate-DOB)/365)) = AGE) ENABLE
```
But this query is not working.
Please help me out. | You cannot use SYSDATE in a check constraint. According to the Oracle documentation - [**Check Constraint**](http://docs.oracle.com/cd/E11882_01/server.112/e41084/clauses002.htm#SQLRF52204)
> Conditions of check constraints cannot
> contain the following constructs:
>
> * Subqueries and scalar subquery expressions
> * Calls to the functions that are **not deterministic** (CURRENT\_DATE,
> CURRENT\_TIMESTAMP, DBTIMEZONE,
> LOCALTIMESTAMP, SESSIONTIMEZONE,
> **SYSDATE**, SYSTIMESTAMP, UID, USER, and
> USERENV)
> * Calls to user-defined functions
> * Dereferencing of REF columns (for example, using the DEREF function)
> * Nested table columns or attributes
> * The pseudocolumns CURRVAL, NEXTVAL, LEVEL, or ROWNUM
> * Date constants that are not fully specified
So, you can use `Trigger` to get your desired output in this case. Here, is the trigger which will work fine as per your requirement:
```
CREATE OR REPLACE TRIGGER trg_check_date
BEFORE INSERT OR UPDATE ON AB_EMPLOYEE22
FOR EACH ROW
BEGIN
IF(ROUND((sysdate-nvl(:NEW.DOB,:OLD.DOB))/365) <> nvl(:NEW.AGE,:OLD.AGE))
THEN
RAISE_APPLICATION_ERROR( -20001, 'Your Date of Birth and Age do not match');
END IF;
END;
```
If you find any difficulty in this trigger, please feel free to write in comments. | Disclaimer: *It's not a direct answer to the question.*
Now, don't store derived and most importantly constantly changing data such as `age` in the table. Instead calculate it on the fly when you query it (e.g. with a view).
```
CREATE TABLE ab_employee22
(
name VARCHAR2(20),
salary NUMBER,
dob DATE
);
CREATE VIEW ab_employee22_view AS
SELECT name, salary, dob,
FLOOR(MONTHS_BETWEEN(sysdate, dob) / 12) age
FROM ab_employee22;
```
Here is **[SQLFIddle](http://sqlfiddle.com/#!4/97bbc/1)** demo | I am trying To add Check constraint for checking age and date of birth | [
"",
"sql",
"oracle",
""
] |
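The compute-age-on-the-fly idea from the answer above can be demonstrated in SQLite; a fixed reference date is used instead of `sysdate` so the example is deterministic, and the year arithmetic is the approximate `julianday`/365.25 form rather than Oracle's `MONTHS_BETWEEN`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ab_employee22 (name TEXT, salary REAL, dob TEXT);
-- Age is derived in a view, never stored, so it cannot disagree with the DOB.
CREATE VIEW ab_employee22_view AS
    SELECT name, salary, dob,
           CAST((julianday('2014-01-01') - julianday(dob)) / 365.25 AS INTEGER) AS age
    FROM ab_employee22;
INSERT INTO ab_employee22 VALUES ('Alice', 1000, '1984-06-15');
""")

rows = conn.execute("SELECT name, age FROM ab_employee22_view").fetchall()
print(rows)  # [('Alice', 29)]
```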
I have two tables `A(column1,column2)` and `B(column1,column2)`.
How can I insert into B(column1) only those values from A(column1) that are not already contained in B(column1)?
My Query will be like this :
```
insert into B.column1 values()
where
...
```
I want to complete B.column1 with data from A.column1
What should I put in the `where` clause? | ```
Insert Into B(column1)
Select A.Column1
From A
Where A.Column1 not in (Select Column1 From B)
``` | I would use MINUS command and select all rows from A(column1) which are not in B(column1) and then SELECT INTO result into B table. | complete a table with data from an other one | [
"",
"sql",
"oracle9i",
""
] |
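The `NOT IN` insert above can be sketched in SQLite; `NOT EXISTS` is used in the sketch because `NOT IN` silently matches nothing when B.column1 contains NULLs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (column1 INTEGER, column2 TEXT);
CREATE TABLE B (column1 INTEGER, column2 TEXT);
INSERT INTO A (column1) VALUES (1), (2), (3);
INSERT INTO B (column1) VALUES (2);
""")

# Copy over only the values A has that B does not already contain.
conn.execute("""
INSERT INTO B (column1)
SELECT a.column1 FROM A a
WHERE NOT EXISTS (SELECT 1 FROM B b WHERE b.column1 = a.column1)
""")

rows = conn.execute("SELECT column1 FROM B ORDER BY column1").fetchall()
print(rows)  # [(1,), (2,), (3,)]
```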
In a table with two nullable date/datetime columns, how would you efficiently select the earliest of both dates that's not NULL?
Concretely, this is what I'm interested in:
```
| Date1 | Date2 | | Expected |
|------------|------------| |------------|
| 2014-02-23 | 2014-01-16 | | 2014-01-16 |
| 2014-02-01 | NULL | | 2014-02-01 |
| NULL | 2014-01-13 | | 2014-01-13 |
| NULL | NULL | | NULL |
```
I can manage to fetch those results using either of the queries below. Both my solutions use the `LEAST` operator, which returns NULL when either of the arguments is NULL (and not just when both are NULL), so it takes some more logic to get the job done.
```
SELECT IF(ISNULL(Date1), Date2, IFNULL(LEAST(Date1, Date2), Date1)) AS EarlyDate FROM MyTable
```
or the equivalent:
```
SELECT IFNULL(IFNULL(LEAST(Date1, Date2), Date1), Date2) AS EarlyDate FROM MyTable
```
My feeling is that there's something redundant in both of my queries above. Could anyone suggest a better solution? | You have to check for NULLs, try this:
```
SELECT IF(Date1 IS NULL OR Date2 IS NULL,
COALESCE(DATE1, DATE2),
LEAST(DATE1, DATE2)
) AS EarlyDate
FROM MyTable
```
Working Fiddle: <http://sqlfiddle.com/#!2/7940c/1> | You can use LEAST on COALESCE(date1,date2) and COALESCE(date2,date1):
```
SELECT LEAST( COALESCE(Date1,Date2) , COALESCE(Date2,Date1) ) AS EarlyDate
FROM MyTable;
``` | Select earliest of two nullable datetimes | [
"",
"mysql",
"sql",
"datetime",
""
] |
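The `LEAST`/`COALESCE` trick from the answers above ports directly to SQLite, whose scalar `MIN(x, y)` behaves like `LEAST` (returning NULL if any argument is NULL), so it can be exercised from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Date1 TEXT, Date2 TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?)", [
    ("2014-02-23", "2014-01-16"),
    ("2014-02-01", None),
    (None, "2014-01-13"),
    (None, None),
])

# COALESCE fills each side with the other, so MIN only sees NULL
# when both dates are NULL.
rows = conn.execute(
    "SELECT MIN(COALESCE(Date1, Date2), COALESCE(Date2, Date1)) FROM MyTable"
).fetchall()
print(rows)  # [('2014-01-16',), ('2014-02-01',), ('2014-01-13',), (None,)]
```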
I prepared [a **fiddle** which demonstrates the problem](http://sqlfiddle.com/#!15/e25c5/2).
```
CREATE TABLE parent (
parent_id integer primary key
);
CREATE TABLE child (
child_name TEXT primary key,
parent_id integer REFERENCES parent (parent_id) ON DELETE CASCADE
);
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES ('michael',1), ('vanessa', 1);
```
I want a way for the delete to CASCADE to the parent record when a child record is deleted.
For example:
```
DELETE FROM child WHERE child_name='michael';
```
This should cascade to the parent table and remove the record. | Foreign keys only work in the other direction: cascade deletes from parent to child, so when the parent (referenced) record is deleted, any child (referencing) records are also deleted.
If it's a 1:1 relationship you can create a bi-directional foreign key relationship, where one side is `DEFERRABLE INITIALLY DEFERRED`, and both sides are cascade.
Otherwise, you will want an `ON DELETE ... FOR EACH ROW` trigger on the child table that removes the parent row if there are no remaining children. It's potentially prone to race conditions with concurrent `INSERT`s; you'll need to `SELECT ... FOR UPDATE` the parent record, *then* check for other child records. Foreign key checks on insert take a `FOR SHARE` lock on the referenced (parent) record, so that should prevent any race condition. | You seem to want to kill the whole family, without regard to remaining children. Run this instead of `DELETE FROM child ...`:
```
DELETE FROM parent p
USING child c
WHERE p.parent_id = c.parent_id
AND c.child_name = 'michael';
```
Then everything works with your current design. If you insist on your original `DELETE` statement, you *need* a rule or a trigger. But that will be rather messy.
[`DELETE` statement in the manual.](https://www.postgresql.org/docs/current/sql-delete.html) | How to CASCADE a delete from a child table to the parent table? | [
"",
"sql",
"postgresql",
"database-design",
"foreign-keys",
"cascade",
""
] |
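The trigger idea from the accepted answer above can be sketched in SQLite (the syntax differs from PostgreSQL, and the race-condition locking discussed there is ignored here): after a child row is deleted, the parent is removed only if no children remain:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (parent_id INTEGER PRIMARY KEY);
CREATE TABLE child  (child_name TEXT PRIMARY KEY, parent_id INTEGER);

-- After each child delete, drop the parent once it has no children left.
CREATE TRIGGER child_delete AFTER DELETE ON child
WHEN NOT EXISTS (SELECT 1 FROM child WHERE parent_id = OLD.parent_id)
BEGIN
    DELETE FROM parent WHERE parent_id = OLD.parent_id;
END;

INSERT INTO parent VALUES (1);
INSERT INTO child VALUES ('michael', 1), ('vanessa', 1);
DELETE FROM child WHERE child_name = 'michael';
""")
remaining = conn.execute("SELECT COUNT(*) FROM parent").fetchone()[0]
print(remaining)  # 1 (vanessa still references the parent)

conn.execute("DELETE FROM child WHERE child_name = 'vanessa'")
remaining2 = conn.execute("SELECT COUNT(*) FROM parent").fetchone()[0]
print(remaining2)  # 0 (last child gone, parent cascades away)
```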
There is a `siteOrder` table where I am checking orders placed by sites in the current week. The two columns in question are `siteName` and `OrderDate`, and I am only considering four sites and ignoring the others.
But if there is no order placed for one of these four sites, I want `0` to be displayed against that site rather than no row at all. Here is my solution:
```
SELECT SiteName,COUNT(SiteName) AS Completed FROM SiteOrder WHERE SiteName IN ('Site1','Site2','Site3','Site4')
and DATEPART(mm,OrderDate) = DATEPART(mm,getdate())
and DATEPART(yyyy,OrderDate) = DATEPART(yyyy,getdate())
and DATEPART(dd,OrderDate) > DATEPART(dd,DATEADD(DAY, 1-DATEPART(WEEKDAY, getdate()), getdate()))
GROUP BY SiteName
order by SiteName
```
that gives me
```
SiteName Completed
-----------------------
Site1 1
Site2 1
Site3 1
```
Notice that Site4 is missing. I want it to show 0 against Site4 rather than not displaying anything at all. | ```
select SiteName
,SUM(Case When DATEPART(mm,OrderDate) = DATEPART(mm,getdate())
and DATEPART(yyyy,OrderDate) = DATEPART(yyyy,getdate())
and DATEPART(dd,OrderDate) > DATEPART(dd,DATEADD(DAY, 1-DATEPART(WEEKDAY, getdate()), getdate()))
Then 1
Else 0
End) as Orders
from SiteOrder where SiteName in ('Site1','Site2','Site3','Site4')
GROUP BY SiteName
order by SiteName
``` | If you don't already have these values in another table, you can create four rows in a subquery and then use a `LEFT JOIN`:
```
SELECT SiteName,COUNT(SiteName) AS Completed
FROM (
select 'Site1' union all
select 'Site2' union all
select 'Site3' union all
select 'Site4'
) Sites (name)
LEFT JOIN
SiteOrder ON SiteName = Sites.name
WHERE
OrderDate > DATEADD(week,DATEDIFF(week,0,getdate()),-1)
GROUP BY SiteName
order by SiteName
```
I've also tried to come up with an alternative expression for the `OrderDate` comparison. I believe that you just want that the `OrderDate` should be greater then the date of the previous Sunday (on a Sunday, the previous Sunday being taken as that same day) | sql server insert missing record | [
"",
"sql",
"sql-server",
""
] |
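The fixed-site-list `LEFT JOIN` idea from the answer above can be sketched in SQLite (the week/date filter is omitted for brevity), showing how Site4 appears with a 0 count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SiteOrder (SiteName TEXT, OrderDate TEXT)")
conn.executemany("INSERT INTO SiteOrder VALUES (?, ?)", [
    ("Site1", "2014-03-03"), ("Site2", "2014-03-04"), ("Site3", "2014-03-05"),
])

# COUNT(o.SiteName) counts only matched rows, so unmatched sites show 0.
rows = conn.execute("""
SELECT s.name, COUNT(o.SiteName) AS Completed
FROM (SELECT 'Site1' AS name UNION ALL SELECT 'Site2'
      UNION ALL SELECT 'Site3' UNION ALL SELECT 'Site4') s
LEFT JOIN SiteOrder o ON o.SiteName = s.name
GROUP BY s.name ORDER BY s.name
""").fetchall()
print(rows)  # [('Site1', 1), ('Site2', 1), ('Site3', 1), ('Site4', 0)]
```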
I have a query to count the records which are aged between 4 and 12 months in my table:
```
SELECT count(*)
from tm_process_state
where tstamp BETWEEN ADD_MONTHS(trunc(SYSDATE,'MONTH'), -12)
AND LAST_DAY(TRUNC(SYSDATE, 'MONTH') - 4);
```
After running it I am getting a count of 1240, which is wrong, as it is counting records from 01/03/13 to 27/02/2014.
What I am trying to count is records between sysdate-90 and sysdate-365.
Then I tried below condition
```
tstamp < sysdate-90 and tstamp>sysdate-365
```
Now the count is 598, which is correct.
But here I cannot hard-code a value like 365, as it varies between leap and non-leap years.
Can anyone please tell me how to alter the first query to display the correct count? | This condition:
```
where tstamp BETWEEN ADD_MONTHS(trunc(SYSDATE,'MONTH'), -12) AND LAST_DAY(TRUNC(SYSDATE, 'MONTH') - 4)
```
is calculating from the *beginning* of the month (due to the `TRUNC(SYSDATE, 'MONTH')`) to the end of a month. Why not just do:
```
where tstamp BETWEEN ADD_MONTHS(SYSDATE, -12) AND ADD_MONTHS(SYSDATE, -3)
```
If you are concerned about time values on the dates, just `trunc()` the values:
```
where tstamp BETWEEN trunc(ADD_MONTHS(SYSDATE, -12)) AND trunc(ADD_MONTHS(SYSDATE, -3))
``` | You can simply use the SUM function,
like this:
```
SELECT SUM(case when transits.direction = 1 then 1 else 0 end) ,
SUM(case when transits.direction = 0 then 1 else 0 end) from t1 t
where t.device in ('A','B') group by t.device
```
I do not know the actual condition in your query, so ...
```
SELECT
sum(case when tstamp >= ADD_MONTHS(trunc(SYSDATE,'MONTH'), -12) then 1 else 0 end) as yearCount ,
sum(case when tstamp <= LAST_DAY(TRUNC(SYSDATE, 'MONTH') - 4) then 1 else 0 end) as lastThreeMonth
FROM tm_process_state
``` | select count of records between 4 and 12 months | [
"",
"sql",
"oracle",
"select",
""
] |
I'm creating a MySQL schema with two tables (Student and Instructor).
```
1. Students are given a student number when they join the school.
This is stored along with their name, date of birth, and the date they joined
the school.
2. All instructors are also students, but clearly, not all students are
instructors. In addition to the normal student information, for all
instructors, the date that they start working as an instructor must be
recorded, along with their instructor status (compensated or volunteer).
```
My Schema:
*STUDENT*
```
+---------------+
| ID | PK
+---------------+
| LNAME |
+---------------+
| FNAME |
+---------------+
| MNAME |
+---------------+
| BDAY |
+---------------+
| JOIN_DATE |
+---------------+
| IS_INSTRUCTOR |
+---------------+
```
*INSTRUCTOR*
```
+---------------+
| ID | PK
+---------------+
| STUDENT_ID | FK
+---------------+
| INSTR_STATUS | (compensated or volunteer)
+---------------+
| DATE_WORKED |
+---------------+
```
How can I make it so that, when I insert into the STUDENT table with IS\_INSTRUCTOR = 1 (meaning true), the INSTRUCTOR table's STUDENT\_ID gets the STUDENT.ID?
Or do you have a better table design? Thanks in advance. | I am not sure why you need the `is_instructor` field in the student table. You will need to update it each time you add or remove a record from the `instructor` table. If you need to see whether a particular student is an instructor, a simple hash join on `student_id` would suffice. But the options I see: the first is using a stored procedure, and the second is just to wrap the separate inserts into one transaction.
I would remove calculated field `is_instructor` unless there are specific application requirements which I don't know. Though surrogate (or artificial) keys can be useful, for current design I would remove primary key `id` from instructor table and make `student_id` a primary key which is also a foreign key to `student` table. | How about something in the lines of...
```
INSERT INTO INSTRUCTOR (STUDENT_ID, INSTR_STATUS, DATE_WORKED)
SELECT ID, 'volunteer', NOW()
FROM STUDENT
WHERE IS_INSTRUCTOR = 1
```
You might want to rethink your design. | MySQL Insert Query With Conditional Statement | [
"",
"mysql",
"sql",
"mysql-workbench",
""
] |
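The insert-time synchronization asked about above is usually done with an `AFTER INSERT` trigger; here is a sketch in SQLite (MySQL trigger syntax differs slightly, e.g. `DELIMITER` and `FOR EACH ROW`), with a hypothetical `'volunteer'` default status:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, is_instructor INTEGER);
CREATE TABLE instructor (id INTEGER PRIMARY KEY, student_id INTEGER,
                         instr_status TEXT, date_worked TEXT);

-- When a student is flagged as an instructor, create the instructor row too.
CREATE TRIGGER student_insert AFTER INSERT ON student
WHEN NEW.is_instructor = 1
BEGIN
    INSERT INTO instructor (student_id, instr_status) VALUES (NEW.id, 'volunteer');
END;

INSERT INTO student (name, is_instructor) VALUES ('Ann', 1), ('Bob', 0);
""")

instructors = conn.execute("SELECT student_id FROM instructor").fetchall()
print(instructors)  # [(1,)]  -- only Ann was mirrored
```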
I am very much used to working with ORMs (e.g. EF, L2S), so I'm finding it difficult to write queries.
I've created an `Employee` table also few masters like `Department`, `Area`.
Schema is something like:
* `Employee` (`Id, Name, DeptId, AreaId, ManagerId`)
* `Area` (`Id, Name`)
* `Department` (`Id, Name`)
Now, I want to add constraint in ONE GO.
Something like,
```
ALTER TABLE EMPLOYEE
ADD CONSTRAINT [FK_EMPLOYEE_DEPARTMENT] FOREIGN KEY (DEPTID) REFERENCES DEPARTMENT(ID)
ADD CONSTRAINT [FK_EMPLOYEE_DEPARTMENT] FOREIGN KEY (AREAID) REFERENCES AREA(ID)
ADD CONSTRAINT [FK_EMPLOYEE_EMPLOYEE] FOREIGN KEY (MANAGERID) REFERENCES EMPLOYEE(ID) --Self Reference
```
but I know this is a syntax error.
So please correct my script. I don't want to add the constraints one by one.
Thanks in advance. | Try this:
```
ALTER TABLE EMPLOYEE ADD
CONSTRAINT [FK_EMPLOYEE_DEPARTMENT]
FOREIGN KEY (DEPTID) REFERENCES DEPARTMENT(ID),
CONSTRAINT [FK_EMPLOYEE_DEPARTMENT]
FOREIGN KEY (AREAID) REFERENCES AREA(ID),
CONSTRAINT [FK_EMPLOYEE_EMPLOYEE]
FOREIGN KEY (MANAGERID) REFERENCES EMPLOYEE(ID)
``` | You just need to separate each constraint with a comma, eg:
```
ALTER TABLE EMPLOYEE
ADD CONSTRAINT [FK_EMPLOYEE_DEPARTMENT] FOREIGN KEY (DEPTID) REFERENCES DEPARTMENT(ID),
CONSTRAINT [FK_EMPLOYEE_DEPARTMENT] FOREIGN KEY (AREAID) REFERENCES AREA(ID),
CONSTRAINT [FK_EMPLOYEE_EMPLOYEE] FOREIGN KEY (MANAGERID) REFERENCES EMPLOYEE(ID)
```
You can find the syntax of the ALTER TABLE statement in the [MSDN documentation](http://msdn.microsoft.com/en-us/library/ms190273.aspx) and in Books Online | SQL Server - Multiple foreign key constraint in one go | [
"",
"sql",
"sql-server",
""
] |
Let's say I have this table:
```
create table test3(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
an int(4),
mois int(2),
jour int(2),
data varchar(255)
);
mysql> desc test3;
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| id | int(11) | | PRI | NULL | auto_increment |
| an | int(4) | YES | | NULL | |
| mois | int(2) | YES | | NULL | |
| jour | int(2) | YES | | NULL | |
| data | varchar(255) | YES | | NULL | |
+-------+--------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
```
I need to fetch everything between 2013-12-01 and 2014-01-12.
I tried the following query with a COUNT but, because the WHERE needs a month greater than or equal to 12 and smaller than or equal to 01, it returns nothing.
```
select count(id)
from rapport_cc_agents
where (an>=2013 and mois>=12 and jour>=01)
and (an<=2014 and mois<=01 and jour<=12);
```
It's probably really simple but I'm blocked on this problem.
Thank you!
[EDIT]
The case is that I cannot change the storing format and use only one field because of the impact with other systems. | Ideally, don't store the data in that format. If you have the power to change it to a `DATE`, then do so! If not, then raise the concern to whomever stores the data that it is difficult, slow and error-prone to do data queries when the data is stored in an inappropriate manner.
Also point out to them that if they want the components of a date, then if the data is stored as a `DATE` the components can easily be [`EXTRACT`](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_extract)ed. But storing the date components separately is bad, because you can't then enforce it's a real date -- for example, `an=-1024, mois=99, jour=0` would be considered valid in the given schema. This is just wrong. Even if the fields were limited to the typical valid numbers (1-12, 1-31), it's still possible to construct an invalid date like `an=2001, mois=2, jour=30`. **The only way you can be sure you have a valid date is to use the `DATE` type for it.** What use is data in a database if it's possible to add incorrect data?
As an example of migrating the data to a sane format:
```
CREATE TABLE test3_real (
id INT NOT NULL auto_increment,
date DATE NOT NULL,
data VARCHAR(255),
PRIMARY KEY (id)
);
INSERT INTO test3_real (id, date, data)
SELECT id, STR_TO_DATE(CONCAT(an,'-',mois,'-',jour), '%Y-%m-%d'), data FROM test3;
DROP TABLE test3;
CREATE VIEW test3
AS
SELECT id,
EXTRACT(YEAR FROM date) an,
EXTRACT(MONTH FROM date) mois,
EXTRACT(DAY FROM date) jour,
data
FROM test3;
```
As for your query, the date is a hierarchy of numbers. If the big number is out of range, it doesn't matter what the smaller numbers are:
```
WHERE (an > 2013 OR (an = 2013 AND (mois > 12 OR (mois = 12 AND jour >= 1))))
AND (an < 2014 OR (an = 2014 AND (mois < 1 OR (mois = 1 AND jour <= 12))))
``` | If you must still have the three parts of the date seperate, you can use the following:
`CAST( CONCAT( an, '-', mois, '-', jour ) as DATE )` (using `CONCAT`, since MySQL does not use `+` for string concatenation)
I would suggest adding this as part of a view on the table, e.g.
```
CREATE VIEW test3withDate AS
SELECT id, an, mois, jour, data, CAST( CONCAT( an, '-', mois, '-', jour ) as DATE ) as mydate FROM test3
```
so that you can use it anywhere else you need a date | In MYSQL, how to filter a date interval by using 3 separated fields for the date? | [
"",
"mysql",
"sql",
"date",
""
] |
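Both answers above build a real date out of the three columns; a sketch in SQLite uses `printf` to zero-pad the parts into an ISO string so a plain `BETWEEN` works (an illustration of the idea, not MySQL syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test3 (an INTEGER, mois INTEGER, jour INTEGER)")
conn.executemany("INSERT INTO test3 VALUES (?, ?, ?)", [
    (2013, 11, 30),  # before the interval
    (2013, 12, 1),   # first day of the interval
    (2014, 1, 12),   # last day of the interval
    (2014, 1, 13),   # after the interval
])

# Zero-padded ISO strings compare correctly as text.
n = conn.execute("""
SELECT COUNT(*) FROM test3
WHERE printf('%04d-%02d-%02d', an, mois, jour)
      BETWEEN '2013-12-01' AND '2014-01-12'
""").fetchone()[0]
print(n)  # 2
```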
Good day!
I've got a problem with this query:
```
SELECT `id`
FROM `test`
WHERE `name`
LIKE "somename" AND `id` = (SELECT max(`id`) FROM test)
```
It returned an empty result.
But
```
SELECT `id`
FROM `test`
WHERE `name` LIKE "somename"
```
works.
Moreover,
```
SELECT `id`
FROM `test`
WHERE `id` = (SELECT max(`id`) FROM test)
```
works too.
Why don't they work together?
Thank you! | I think you are probably looking for something like this:
```
SELECT *
FROM
test
WHERE
id=(SELECT max(id) FROM test WHERE name LIKE "somename")
```
this will return the row from test that has the maximum ID where name is like "somename" | Because they would have to be both true at the same time.
Imagine the following
```
id name
1 'somename'
2 'someothername'
```
In the above case
```
SELECT `id`
FROM `test`
WHERE `name` LIKE "somename"
```
Will return the first row but:
```
SELECT id FROM test WHERE id = (SELECT max(id) FROM test)
```
Will return the second. Using `AND` will mean no results are returned. You may wish to use `OR` depending on what behaviour you want. | Mysql `AND` operator | [
"",
"mysql",
"sql",
"pymysql",
""
] |
I'm kind of stuck on a problem that looks simple. Can somebody please help me with this one?
The actual table has 25 columns.
But basically to form this question lets say it looks something like this:
```
SR | Type | Des | Code
-------------------------
1 | wo | abc | 100
-------------------------
2 | co | def | 100
-------------------------
3 | wo | ghi | 200
-------------------------
4 | co | jkl | 200
-------------------------
5 | co | mno | 200
-------------------------
```
My query is supposed to group by "`Code`". "`Type`" should be taken from the first row of each group, and "`Des`" should be taken from the last row.
I want my result set as following:
```
SR | Type | Des | Code
-------------------------
1 | wo | def | 100
-------------------------
3 | wo | mno | 200
```
Can someone write this query for me please? | I've had a go at your query. The Sub Query in the FROM clause gets the grouping by CODE and determines what the min and max values are for SR. Once we have this we join back into the main table on CODE and the minSR\maxSR values to get the corresponding Type and Des values. This makes the assumption that every CODE will always have a minSR and maxSR value. If this is not the case, change the INNER JOINs to Left JOINs.
Please Note TABLE = the name of your example table above. Also the value of SR in the result set is the minimum value of SR as in your example result set.
```
Select cg.CODE, tmin.Type, tmax.Des, tmin.SR
FROM (
Select CODE, Min(SR) as MinSR, MAX(SR) as MaxSR
FROM TABLE
GROUP BY CODE
) cg
INNER JOIN TABLE tmin ON cg.CODE = tmin.Code
AND cg.MinSR = tmin.SR
INNER JOIN TABLE tmax ON cg.CODE = tmax.Code
AND cg.MaxSR = tmax.SR
``` | Here is a solution using ANSI window functions:
```
select sr,
first_type as type,
last_des as des,
code
from (
select sr,
type,
des,
code,
first_value(type) over (partition by code order by sr rows between unbounded preceding and unbounded following) as first_type,
last_value(des) over (partition by code order by sr rows between unbounded preceding and unbounded following) as last_des,
row_number() over (partition by code order by sr ) as rn
from data
) t
where rn = 1
order by sr
```
SQLFiddle example
* for Postgres: <http://sqlfiddle.com/#!15/f6117/1>
* for Oracle: <http://sqlfiddle.com/#!4/3019f/1> | Sql query to pick up values on some rules | [
"",
"sql",
"rank",
""
] |
I have a hive table with tweets about movies and a table with keywords mapped to movie titles
keyword example:
```
title keyword
------ -------
3 Days to Kill 3daystokill
3 Days to Kill 3 days to kill
12 Years a Slave 12YearsASlave
```
tweets example:
```
id text
------ -------
125675146 3daystokill sucks!
125673498 3 days to kill is awesome!
239873985 I like 12 Years a Slave :)
```
I would like to be able to find the tweets matching the keywords for a certain movie title. For example, I want to find all the tweets that mention keywords from 3 Days to Kill (3daystokill and 3 days to kill).
I thought something like this, but the results are empty :(
```
SELECT k.keyword, t.text
FROM keywords k JOIN tweets t
ON t.text = CONCAT('%',k.keyword,'%')
WHERE k.title = "3 Days to Kill";
``` | You are looking for `like`, not `=`:
```
SELECT k.keyword, t.text
FROM keywords k JOIN
tweets t
ON t.text like CONCAT('%', k.keyword, '%')
WHERE k.title = '3 Days to Kill';
```
EDIT:
I was not aware that HiveQL limited `like` to only wildcards at the beginning or end. One option is `rlike`. Another is `instr()`:
```
SELECT k.keyword, t.text
FROM keywords k JOIN
tweets t
ON instr(t.text, k.keyword) > 0
WHERE k.title = '3 Days to Kill';
``` | I am not sure if the below helps.. :( But have just given a try.
```
select case when
replace(keyword,' ',null) = '3daystokill'
then keyword
end
from keywords
``` | Hive query: Match string with list of keywords | [
"",
"sql",
"hive",
"hiveql",
""
] |
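Neither answer can be run outside a Hive cluster as-is, but the join-on-substring idea from the chosen answer is easy to sanity-check elsewhere. Here is a sketch in Python with SQLite, whose `instr()` also returns 0 when there is no match (my illustration, using the sample data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keywords (title TEXT, keyword TEXT)")
con.execute("CREATE TABLE tweets (id INTEGER, text TEXT)")
con.executemany("INSERT INTO keywords VALUES (?, ?)", [
    ("3 Days to Kill", "3daystokill"),
    ("3 Days to Kill", "3 days to kill"),
    ("12 Years a Slave", "12YearsASlave"),
])
con.executemany("INSERT INTO tweets VALUES (?, ?)", [
    (125675146, "3daystokill sucks!"),
    (125673498, "3 days to kill is awesome!"),
    (239873985, "I like 12 Years a Slave :)"),
])

# Join tweets to keywords wherever the keyword occurs inside the text.
rows = con.execute(
    "SELECT k.keyword, t.text FROM keywords k JOIN tweets t "
    "ON instr(t.text, k.keyword) > 0 "
    "WHERE k.title = '3 Days to Kill'"
).fetchall()
print(rows)  # one row per matching tweet
```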
I captured the request and response times for a procedure. I need to calculate the time taken by subtracting one from the other (request - response).
```
P_REQUEST_TIME :='20/MAR/2014 03:03:50.785662 PM';
P_RESPONSE_TIME :='20/MAR/2014 03:03:50.785816 PM';
SELECT TO_TIMESTAMP(P_REQUEST_TIME)-TO_TIMESTAMP(P_RESPONSE_TIME)
into l_actual_time
FROM dual;
```
The result I get is `Result:='-000000000 00:00:00.000154000';`
I need this as seconds. | Use [EXTRACT](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions050.htm) function.
```
SELECT EXTRACT(SECOND FROM TO_TIMESTAMP(P_REQUEST_TIME)-TO_TIMESTAMP(P_RESPONSE_TIME)) diff_seconds
FROM <table_name>;
``` | ```
select extract( day from diff ) days,
extract( hour from diff ) hours,
extract( minute from diff ) minutes,
extract( second from diff ) seconds
from (SELECT TO_TIMESTAMP(REQUEST_DTTM)-TO_TIMESTAMP(RESPONS_DTTM) diff FROM hit_tracer);
``` | timestamp having milliseconds convert to seconds in oracle | [
"",
"sql",
"oracle",
"plsql",
""
] |
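For intuition, the same subtraction can be done client-side in Python (my illustration; the `strptime` format string is an assumption matching the literals in the question):

```python
from datetime import datetime

fmt = "%d/%b/%Y %I:%M:%S.%f %p"  # matches '20/MAR/2014 03:03:50.785662 PM'
request_time = datetime.strptime("20/MAR/2014 03:03:50.785662 PM", fmt)
response_time = datetime.strptime("20/MAR/2014 03:03:50.785816 PM", fmt)

# Subtracting datetimes yields a timedelta; total_seconds() includes
# the fractional (microsecond) part.
elapsed = (response_time - request_time).total_seconds()
print(elapsed)  # 0.000154
```

Note that Oracle's `EXTRACT(SECOND FROM ...)` returns only the seconds component of the interval; for intervals under a minute, as here, that equals the total elapsed seconds.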
I am trying to execute a query on SQL Server to fetch all students that belong to a specific hostel (by hostel id), but I am facing the error *`Subquery returned more than 1 value.`* The query is:
```
select * from students
where StudentID=(select studentID from Student_To_Hostel where hostelID=2)
```
How do I fix it? | **Method 1:**
You need to use **[JOIN](http://www.techonthenet.com/sql/joins.php)** here
Try like this
```
SELECT S.*
From Students S Inner JOIN Student_To_Hostel SH ON
SH.StudentID =S.StudentID
WHERE SH.hostelID=2
```
**Method2:**
You can use **[IN](http://technet.microsoft.com/en-us/library/ms177682.aspx)** Clause
```
SELECT *
FROM students
where StudentID IN (
SELECT studentID FROM Student_To_Hostel where hostelID=2
)
``` | Try replacing the '=' sign in the outer query with 'in'.
```
select * from students
where StudentID in (select studentID from Student_To_Hostel where hostelID=2)
```
Hope it Helps
Vishad | Error : SubQuery returns more than one records | [
"",
"sql",
"sql-server",
"database",
""
] |
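A quick sketch of Method 2 with SQLite from Python (my illustration, not part of the original answers; SQL Server raises the "more than 1 value" error for `=` with a multi-row subquery, while SQLite silently takes the first row, so only the `IN` form is shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (StudentID INTEGER, name TEXT)")
con.execute("CREATE TABLE Student_To_Hostel (studentID INTEGER, hostelID INTEGER)")
con.executemany("INSERT INTO students VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cid")])
con.executemany("INSERT INTO Student_To_Hostel VALUES (?, ?)",
                [(1, 2), (3, 2), (2, 1)])

# IN accepts however many ids the subquery returns.
rows = con.execute(
    "SELECT * FROM students WHERE StudentID IN "
    "(SELECT studentID FROM Student_To_Hostel WHERE hostelID = 2)"
).fetchall()
print(rows)  # both students assigned to hostel 2
```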
I have a varchar(5) field. I need to find the first unused number across all records, where the numbers are padded with leading zeros. For example, if there is a 00001 and a 00003, I'd like the query to return 00002. Also, many of the records contain letters and look like 'G0542'; these can be ignored.
I know I'm close. This seems to work in SQL Server 2005 but not in 2008 or 2012
<http://sqlfiddle.com/#!3/4016a0/1>
```
create table b_addr ( inst_no varchar(5) Unique );
insert into b_addr (inst_no) values ('00001');
insert into b_addr (inst_no) values ('00002');
insert into b_addr (inst_no) values ('00004');
--this is the problem line
insert into b_addr (inst_no) values ('A0045');
With usedNos as(
select CAST(b_addr.inst_no AS INT) as inst
from b_addr
where b_addr.inst_no LIKE '[0-9][0-9][0-9][0-9][0-9]')
SELECT
RIGHT('00000' + CONVERT(VARCHAR(5), COALESCE(min(inst)+1, 0)),5) AS next_inst_no
from usedNos where not exists (select null from usedNos usn where usn.inst = usedNos.inst +1)
```
How can I structure this so it will work in sql server 2008+ as well? | Here is a version of your query that works:
```
With usedNos as (
select CAST(case when isnumeric(b_addr.inst_no) = 1
then b_addr.inst_no
end AS INT) as inst
from b_addr
)
SELECT RIGHT('00000' + CONVERT(VARCHAR(5), COALESCE(min(inst)+1, 0)),5) AS next_inst_no
from usedNos
where inst is not null and
not exists (select 1
from usedNos usn
where usn.inst = usedNos.inst +1
);
```
The key is the use of `isnumeric()` inside of `case`. This guarantees that the `cast()` is not attempted unless the value looks like a number. If it doesn't look like a number, then the result is `NULL`, which is filtered out in the outer `where` clause.
Your `where` clause:
```
where b_addr.inst_no LIKE '[0-9][0-9][0-9][0-9][0-9]'
```
attempts to do the same thing. However, SQL Server doesn't guarantee that the `where` clause is processed before the `select` -- which is why you are getting an unexpected error. | You can check whether the number you're trying to cast is a real number:
Instead of
```
CAST(b_addr.inst_no AS INT) as inst
```
Do something like
```
CAST(CASE WHEN ISNUMERIC(b_addr.inst_no) = 1 THEN b_addr.inst_no ELSE 0 END AS INT) as inst
```
It's a bit convoluted, but it works.
**Demo: <http://sqlfiddle.com/#!3/1aee5/6>** | First unused number in a varchar(5) field padded with zeros | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
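The chosen answer's point, guarding the cast so non-numeric rows become NULL before any comparison, can be sketched with SQLite from Python; SQLite has no `ISNUMERIC()`, so `GLOB` stands in for the five-digit check (my illustration, not part of the original answers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE b_addr (inst_no TEXT UNIQUE)")
con.executemany("INSERT INTO b_addr VALUES (?)",
                [("00001",), ("00002",), ("00004",), ("A0045",)])

# Only rows passing the digit check are cast; 'A0045' never reaches CAST.
gap = con.execute("""
    WITH used AS (
        SELECT CAST(inst_no AS INTEGER) AS inst
        FROM b_addr
        WHERE inst_no GLOB '[0-9][0-9][0-9][0-9][0-9]'
    )
    SELECT printf('%05d', MIN(inst) + 1)
    FROM used u
    WHERE NOT EXISTS (SELECT 1 FROM used WHERE inst = u.inst + 1)
""").fetchone()[0]
print(gap)  # '00003'
```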
I have a query like
```
select SUM(*) as "tot1" from table1 t, table2 t2 where t1.id=t2.id and t1.column=1
select SUM(*) as "tot2" from table1 t, table2 t2 where t1.id=t2.id and t1.column=2
select SUM(*) as "tot3" from table1 t, table2 t2 where t1.id=t2.id and t1.column=3
```
I want the query result to look like this
```
tot1 tot2 tot3
500 600 3
```
Is this even possible? Or is there an alternative way to view these queries in the same table? | try this:
```
select * from
(select SUM(*) as "tot1" from table1 t, table2 t2 where t1.id=t2.id and t1.column=1) a,
(select SUM(*) as "tot2" from table1 t, table2 t2 where t1.id=t2.id and t1.column=2) b,
(select SUM(*) as "tot3" from table1 t, table2 t2 where t1.id=t2.id and t1.column=3) c
``` | Try this query
```
select
SUM(CASE t1.column WHEN 1 THEN t1.column ELSE 0 END) as tot1,
SUM(CASE t1.column WHEN 2 THEN t1.column ELSE 0 END) as tot2,
SUM(CASE t1.column WHEN 3 THEN t1.column ELSE 0 END) as tot3
from
table1 t, table2 t2
where
t1.id=t2.id
``` | adding a new column SQL query | [
"",
"sql",
"postgresql",
""
] |
I'm currently running several UPDATE queries against the database, like the following:
```
UPDATE table SET status = 1 WHERE id = 3
UPDATE table SET status = 1 WHERE id = 7
UPDATE table SET status = 1 WHERE id = 9
UPDATE table SET status = 1 WHERE id = 18
etc...
```
**Question:**
How is it possible to run these queries in one? | If you need to update some rows on a given list you can use [**`IN()`**](http://dev.mysql.com/doc/refman/5.0/en/any-in-some-subqueries.html)
```
UPDATE table SET status = 1 WHERE id IN (3, 7, 18);
```
If instead you need to update all rows just don't add any `WHERE` conditions
```
UPDATE table SET status = 1;
``` | ```
UPDATE table SET status = 1 WHERE id in (3,7,9,18,...)
``` | How to update multiple rows with one query with MySQL? | [
"",
"mysql",
"sql",
""
] |
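A runnable sketch of the chosen answer with SQLite from Python (my illustration; the table is renamed `t` here because `table` is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, status INTEGER DEFAULT 0)")
con.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 21)])

# One statement replaces the four separate UPDATEs.
cur = con.execute("UPDATE t SET status = 1 WHERE id IN (3, 7, 9, 18)")
updated = con.execute("SELECT COUNT(*) FROM t WHERE status = 1").fetchone()[0]
print(updated)  # 4
```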
My database has two tables, a car table and a wheel table.
I'm trying to find the number of wheels that meet a certain condition over a range of days, but some days are not included in the output.
Here is the query:
```
USE CarDB
SELECT MONTH(c.DateTime1) 'Month',
DAY(c.DateTime1) 'Day',
COUNT(w.ID) 'Wheels'
FROM tblCar c
INNER JOIN tblWheel w
ON c.ID = w.CarID
WHERE c.DateTime1 BETWEEN '05/01/2013' AND '06/04/2013'
AND w.Measurement < 18
GROUP BY MONTH(c.DateTime1), DAY(c.DateTime1)
ORDER BY [ Month ], [ Day ]
GO
```
The output results seem to be correct, but days with 0 wheels do not show up. For example:
Sample Current Output:
```
Month Day Wheels
2 1 7
2 2 4
2 3 2 -- 2/4 is missing
2 5 9
```
Sample Desired Ouput:
```
Month Day Wheels
2 1 7
2 2 4
2 3 2
2 4 0
2 5 9
```
I also tried a left join but it didn't seem to work. | You were on the right track with a `LEFT JOIN`
Try running your query with this kind of outer join, but remove your `WHERE` clause. Notice anything?
What's happening is that the join is applied and then the where clause removes the values that don't match the criteria. All this happens before the group by, meaning the cars are excluded.
Here's one method for you:
```
SELECT Year(cars.datetime1) As the_year
, Month(cars.datetime1) As the_month
, Day(cars.datetime1) As the_day
, Count(wheels.id) As wheels
FROM (
SELECT id
, datetime1
FROM tblcar
WHERE datetime1 BETWEEN '2013-01-05' AND '2013-04-06'
) As cars
LEFT
JOIN tblwheels As wheels
ON wheels.carid = cars.id
```
What's different this time round is that we're limiting the results of the car table *before* we join to the wheels table. | You probably want to use a `LEFT OUTER JOIN`:
```
USE CarDB
SELECT MONTH (c.DateTime1) 'Month', DAY (c.DateTime1) 'Day', COUNT (w.ID) 'Wheels'
FROM tblCar c LEFT OUTER JOIN tblWheel w ON c.ID = w.CarID
WHERE c.DateTime1 BETWEEN '05/01/2013' AND '06/04/2013'
AND (w.Measurement IS NULL OR w.Measurement < 18)
GROUP BY MONTH (c.DateTime1), DAY (c.DateTime1)
ORDER BY [Month], [Day]
GO
```
And then, you need to adapt the WHERE condition, as you want to keep the rows with `w.Measurement` being `NULL` due to the OUTER join.
"",
"sql",
"sql-server",
""
] |
When I don't use `WHERE` clause query works fine. How can I make it work with `WHERE` clause? Is it too ambitious?
Thanks
**ERROR**:
```
[Err] 1054 - Unknown column 'claims.client_name' in 'where clause'
```
**SQL:**
```
SELECT
(
CASE
WHEN client = 'Driver' THEN (SELECT fullname FROM driver WHERE id = claims.client_id)
WHEN client = 'Passenger' THEN (SELECT fullname FROM passengers WHERE id = claims.client_id)
WHEN client = 'Thirdparty' THEN (SELECT fullname FROM thirdparty WHERE id = claims.client_id)
WHEN client = '' THEN NULL
END
) AS client_name
FROM claims
WHERE (client_name LIKE '%john%')
``` | Replace
```
WHERE (client_name LIKE '%john%')
```
with
```
HAVING (client_name LIKE '%john%')
``` | Try this :-
```
SELECT CASE WHEN client = 'Driver' THEN (SELECT fullname FROM driver WHERE id = claims.client_id)
WHEN client = 'Passenger' THEN (SELECT fullname FROM passengers WHERE id = claims.client_id)
WHEN client = 'Thirdparty' THEN (SELECT fullname FROM thirdparty WHERE id = claims.client_id)
WHEN client = '' THEN NULL
END AS client_name FROM claims WHERE client_name LIKE '%john%'
```
Hope it will help you. | WHERE clause to select from AS part of the query | [
"",
"mysql",
"sql",
""
] |
I have all the primary keys and drop tables in the correct order.
I basically want stockID to be a foreign key in my refund table.
My schema looks like this...
```
CREATE TABLE refunds (
refundID SMALLINT AUTO_INCREMENT,
stockID SMALLINT,
refundDate DATE,
FOREIGN KEY (stockID) REFERENCES tblStock(stockID),
PRIMARY KEY (refundID)
);
CREATE TABLE tblStock (
stockID SMALLINT AUTO_INCREMENT,
stockName VARCHAR(60),
StockNumbers SMALLINT
);
``` | When referencing another table for a foreign key reference, that table needs to already exist. And, the column being referenced should be a primary key. Try this:
```
CREATE TABLE tblStock (
stockID SMALLINT AUTO_INCREMENT PRIMARY KEY,
stockName VARCHAR(60),
StockNumbers SMALLINT
);
CREATE TABLE refunds (
refundID SMALLINT AUTO_INCREMENT,
stockID SMALLINT,
refundDate DATE,
FOREIGN KEY (stockID) REFERENCES tblStock(stockID),
PRIMARY KEY (refundID)
);
``` | ```
FOREIGN KEY (stockID) REFERENCES tblStock(stockID)
```
Is referencing a table that doesn't yet exist. Create `tblStock` first. | #1005 - Cant create table... (errno: 150) | [
"",
"mysql",
"sql",
"errno",
""
] |
I have a table called `A`, which contains a column called `A1` with `A21, A31, A41` as row values
**A** :
```
A1
+---+
A21
A31
A41
```
I have another table called `B` and it has columns of `A21,A31,A41`
**B:**
```
| A21 | A31 | A41
```
Now, how can I make an inner join to find the column matches, i.e. to check whether B's column names are the same as A1's row values (matching A1's row values against B's column names)?
```
select A1
from A a
inner join information_schema.columns c on c.column_name = a.A
where c.TABLE_NAME = 'B'
```
OutPut :
```
A1
A11
```
But I need
```
A1
+---+
A21
A31
A41
```
Thanks in Advance
my query
```
select Distinct c.Column_Name,Sorting_Order from Fee_Individual_New a
inner join information_schema.columns c on c.column_name = a.Column_Name
where c.TABLE_NAME ='Feed' and Delivered_to_mobile='Yes'
```
The output is coming in ascending order. I need to get the columns in the same order as the row values in the source table. | Your question is a bit ambiguous, but this might be what you mean:
```
select *
from A1 a
inner join information_schema.columns c on c.column_name = a.A1
where c.table_name = 'B'
```
Although why you want this is a bit of a mystery! | Try this,
```
SELECT
A1.*
FROM
A1
INNER JOIN sys.columns S ON S.name = A1.name
INNER JOIN sys.tables ST ON ST.object_id = s.object_id
WHERE
ST.name = 'A2'
``` | How to Compare information schema column with Table Column in Sql Server2008 using inner Join? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"c#-4.0",
""
] |
There is a sample schema in here:
<http://sqlfiddle.com/#!2/c0723a/2>
query is `select id,group_concat(val) from test group by id`
result is
> ID GROUP\_CONCAT(VAL)
> 1 ,64,66,,203,214,204
I wanted to concat val field without a comma for empty records like this
> ID GROUP\_CONCAT(VAL)
> 1 64,66,203,214,204 | Simply use **Replace**
```
select id,replace(group_concat(val),',,',',') from test group by id
```
Or you can use **IF**
```
select id,group_concat(if(val ='',null, val)) from test group by id
```
Or You can use **NULLIF**
```
select id,group_concat(Nullif(val,'')) from test group by id
```
**[Fiddle Demo](http://sqlfiddle.com/#!2/c0723a/11)** | ```
SELECT id, GROUP_CONCAT(val) FROM test
WHERE val is not null AND val <> ''
GROUP BY id
``` | Mysql group_concat adds a separator for empty fields | [
"",
"mysql",
"sql",
"group-concat",
""
] |
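All three variants in the chosen answer rest on the same fact: `group_concat()` skips NULLs. Here is a check of the `NULLIF` form with SQLite, whose `group_concat()` behaves the same way (my illustration, using the sample values from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER, val TEXT)")
con.executemany("INSERT INTO test VALUES (?, ?)",
                [(1, ""), (1, "64"), (1, "66"), (1, ""),
                 (1, "203"), (1, "214"), (1, "204")])

# NULLIF turns '' into NULL, and group_concat() silently drops NULLs,
# so no stray separators appear.
concatenated = con.execute(
    "SELECT group_concat(NULLIF(val, '')) FROM test GROUP BY id"
).fetchone()[0]
print(concatenated)  # no double commas, no leading or trailing comma
```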
I know this must be SQL 101, but I need to return rows where a column contains both X and Y for the same ID.
Example Table.
```
ACCOUNT | Activity
-------------------------
1 | Email
1 | Appointment
2 | Email
2 | Email
3 | Email
3 | Appointment
```
I need the SQL that will return to me accounts 1 and 3 only, like so:
```
ACCOUNT | Activity
-------------------------
1 | Email
1 | Appointment
3 | Email
3 | Appointment
```
In pseudo code it's (WHERE ID occurs with both 'Appointment' AND 'Email')
Account 2 should not be returned, since there is no row in the table WHERE Account = 2 AND Activity = 'Appointment'.
I hope that makes sense, and would really appreciate your input.
Thanks in advance.
EDIT BELOW THE LINE
---
Thanks everybody for your suggestions. I very much like @rafa's suggestion of using COUNT(DISTINCT), but of course the query is more complex than first suggested. The first table is actually a result set from another query; the full query, including @rafa's suggestion, is below. It doesn't work, and I understand I need to invoke a subquery, but am unsure where or how. Thanks again:
```
SELECT T.ACCOUNT, T.COMPANY, H.RESPONSE, H.DAT_ AS Resp_Date, H.USERNAME, (date_format(P.ENDDATE,'%M')) AS Renewal, T.OWNER, COUNT(DISTINCT(H.ACTIVITY)) AS Dis_Act, H.CustomerStatus, H.Contact, A.ANAL14 AS APPDATE
FROM TELRCMxxx T LEFT JOIN TELCOMxxx H ON T.ACCOUNT = H.ACCOUNT LEFT JOIN ACCSTOxxx P ON T.ACCOUNT = P.ACCOUNT LEFT JOIN RCMANLxxx A ON T.ACCOUNT = A.ACCOUNT
WHERE (H.ACTIVITY in ('Appointment', 'email'))
Group by Account
having Dis_Act > 1
```
J.
---
Thanks everyone. | Here's one way to do it. Use an `EXISTS` predicate that uses a correlated subquery to check for the existence of the "other" row. This assumes that you only want to return rows that have values of 'Email' and 'Appointment' in the Activity column, and to exclude rows that have any other value for activity. (This isn't the most efficient way to do it.)
```
SELECT t.account
, t.activity
FROM example_table t
WHERE t.activity IN ('Email','Appointment')
AND EXISTS ( SELECT 1
FROM example_table d
WHERE d.account = t.account
AND d.activity =
CASE t.activity
WHEN 'Email' THEN 'Appointment'
WHEN 'Appointment' THEN 'Email'
END
)
```
---
**ADDITION**
Here's the approach above applied to the original query (which is later supplied as a comment on another answer...)
```
SELECT t.ACCOUNT
, t.COMPANY
, h.RESPONSE
, h.DAT_ AS Resp_Date
, h.USERNAME
, DATE_FORMAT(p.ENDDATE,'%M') AS Renewal
, t.OWNER
, h.CustomerStatus
, h.Contact
, a.ANAL14 AS APPDATE
FROM TELRCMxxx t
JOIN TELCOMxxx h
ON t.ACCOUNT = h.ACCOUNT
AND h.ACTIVITY in ('Appointment','email')
AND EXISTS
( SELECT 1
FROM TELCOMxxx b
WHERE b.ACCOUNT = h.ACCOUNT
AND b.ACTIVITY = CASE h.ACTIVITY
WHEN 'email' THEN 'Appointment'
WHEN 'Appointment' THEN 'email'
END
)
LEFT
JOIN ACCSTOxxx p
ON t.ACCOUNT = p.ACCOUNT
LEFT
JOIN RCMANLxxx a
ON t.ACCOUNT = a.ACCOUNT
```
**NOTES:** original query has a LEFT join to h, but the "outerness" of the join operation is negated by the WHERE clause, which effectively verifies `h.ACTIVITY IS NOT NULL`. The LEFT keyword is removed, and the `h.ACTIVITY IN ('Appointment','email')` predicate is moved from the WHERE clause to the ON clause of the join. But that doesn't really change anything about the query.
The change to the query is the addition of an "`EXISTS`" predicate that checks for the existence of another row in `h`, matching on ACCOUNT, and matching either 'Appointment' or 'email', as the opposite of the value in the row from `h` being checked.
Note that this:
```
AND b.ACTIVITY = CASE h.ACTIVITY
WHEN 'email' THEN 'Appointment'
WHEN 'Appointment' THEN 'email'
END
```
is equivalent to:
```
AND ( ( b.ACIVITY = 'email' AND h.ACTIVITY = 'Appointment' )
OR
( b.ACIVITY = 'Appointment' AND h.ACTIVITY = 'email' )
)
```
**END ADDITION**
---
If you need to return all rows for the account, including other values for activity, then you remove the `t.Activity IN` predicate from the WHERE clause on the outer query, and just check for the existence of both an 'Email' and 'Appointment' row for that account:
```
SELECT t.account
, t.activity
FROM example_table t
WHERE EXISTS ( SELECT 1
FROM example_table e
WHERE e.account = t.account
AND e.activity = 'Email'
)
AND EXISTS ( SELECT 1
FROM example_table a
WHERE a.account = t.account
AND a.activity = 'Appointment'
)
```
This is *not* the most efficient approach, but it will return the specified result.
For large sets, a (usually) more efficient approach is to use a JOIN operation.
To get a list of distinct account values that have both 'Email' and 'Appointment' rows:
```
SELECT e.account
FROM example_table e
JOIN example_table a
ON a.account = e.account
AND a.activity = 'Appointment'
WHERE e.activity = 'Email'
GROUP BY e.account
```
To get all rows from the table for those accounts:
```
SELECT t.account
, t.activity
FROM example_table t
JOIN ( SELECT e.account
FROM example_table e
JOIN example_table a
ON a.account = e.account
AND a.activity = 'Appointment'
WHERE e.activity = 'Email'
GROUP BY e.account
) s
ON s.account = t.account
```
If you only want to return rows with particular values for activity, you can add a WHERE clause, e.g.
```
WHERE t.activity IN ('Email','Appointment','Foo')
``` | A very naive approach would be:
```
SELECT
*
FROM MyTable t1
WHERE EXISTS
(
SELECT 1 FROM MyTable WHERE ACCOUNT = t1.ACCOUNT AND Activity = 'Email'
)
AND EXISTS
(
SELECT 1 FROM MyTable WHERE ACCOUNT = t1.ACCOUNT AND Activity = 'Appointment'
)
``` | SQL Return rows where column 1 contains X and Y | [
"",
"mysql",
"sql",
"return",
"duplicates",
""
] |
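The double-`EXISTS` form from the chosen answer can be run against the sample table with SQLite from Python (my illustration, not part of the original answers; only accounts holding both activities survive, and account 2 is filtered out):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activity (account INTEGER, activity TEXT)")
con.executemany("INSERT INTO activity VALUES (?, ?)", [
    (1, "Email"), (1, "Appointment"),
    (2, "Email"), (2, "Email"),
    (3, "Email"), (3, "Appointment"),
])

# Keep a row only if its account has BOTH an 'Email' and an 'Appointment' row.
rows = con.execute("""
    SELECT account, activity FROM activity t
    WHERE EXISTS (SELECT 1 FROM activity e
                  WHERE e.account = t.account AND e.activity = 'Email')
      AND EXISTS (SELECT 1 FROM activity a
                  WHERE a.account = t.account AND a.activity = 'Appointment')
    ORDER BY account
""").fetchall()
print(rows)  # accounts 1 and 3 only, all their rows kept
```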
Since `EXCEPT` is a Set Operation, it automatically makes its results distinct. How can I rewrite a query like the following without the result containing only distinct values?
```
SELECT T1.ContainerName, T1.PanelName FROM MyTable AS T1
EXCEPT
SELECT T2.ContainerName, T2.PanelName FROM MyOtherTable AS T2
```
Given the first `SELECT` results as follows:
```
Container1 MyPanel
Container1 MyPanel
Container2 MyPanel
Container3 MyPanel
```
and the second `SELECT` returning as follows:
```
Container3 MyPanel
```
then the above `EXCEPT` query will return:
```
Container1 MyPanel
Container2 MyPanel
```
Since the names matter and we're able to have duplicate names as I've done above, it's important that I have both listings of `Container1 MyPanel` in my final result list.
So the question is, in a general sense, how can this `EXCEPT` query be transformed into a query that will not specify `DISTINCT` results?
EDIT: To clarify, I'm looking to see the difference between the two results without excluding duplicates in the final result set.
The final result I WANT is as follows:
```
Container1 MyPanel
Container1 MyPanel
Container2 MyPanel
``` | You appear to want everything from `mytable` where there is no corresponding row in `myothertable`. Sounds like a job for `exists()`. Try:-
```
select t1.ContainerName, t1.PanelName
from MyTable t1
where not exists(
select * FROM MyOtherTable t2
where t2.ContainerName=t1.ContainerName
and t2.PanelName=t1.PanelName
)
```
If you can use the PK of `myothertable` (assuming it's an FK in `mytable`) you can simplify the `where` in the sub query. | how about `LEFT JOIN`
```
SELECT t1.*
FROM MyTable1 AS t1
LEFT JOIN MyTable2 AS t2
ON t1.ContainerName = t2.ContainerName
AND t1.PanelName = t2.PanelName
WHERE t2.PanelName IS NULL
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/58c1ca/1) | Rewrite Except Without Distinct | [
"",
"sql",
"t-sql",
""
] |
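Both answers exploit the fact that an anti-join (`NOT EXISTS`, or a `LEFT JOIN ... IS NULL`) filters rows without collapsing duplicates the way `EXCEPT` does. A sketch with SQLite from Python (my illustration, using the sample data from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (ContainerName TEXT, PanelName TEXT)")
con.execute("CREATE TABLE MyOtherTable (ContainerName TEXT, PanelName TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?, ?)", [
    ("Container1", "MyPanel"), ("Container1", "MyPanel"),
    ("Container2", "MyPanel"), ("Container3", "MyPanel"),
])
con.execute("INSERT INTO MyOtherTable VALUES ('Container3', 'MyPanel')")

# Anti-join: keep every MyTable row with no matching MyOtherTable row.
rows = con.execute("""
    SELECT t1.ContainerName, t1.PanelName
    FROM MyTable t1
    WHERE NOT EXISTS (SELECT 1 FROM MyOtherTable t2
                      WHERE t2.ContainerName = t1.ContainerName
                        AND t2.PanelName = t1.PanelName)
""").fetchall()
print(rows)  # Container1 appears twice, as the question requires
```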
I'm trying to make an optimal SQL query for an iSeries database table that can contain millions of rows (perhaps up to 3 million per month). The only key I have for each row is its RRN (relative record number, which is the physical record number for the row).
My goal is to join the table with another small table to give me a textual description of one of the numeric columns. However, the number of rows involved can exceed 2 million, which typically causes the query to fail due to an out-of-memory condition. So I want to rewrite the query to avoid joining a large subset with any other table. So the idea is to select a single page (up to 30 rows) within a given month, and *then* join that subset to the second table.
However, I ran into a weird problem. I use the following query to retrieve the RRNs of the rows I want for the page:
```
select t.RRN2 -- Gives correct RRNs
from (
select row_number() over() as SEQ,
rrn(e2) as RRN2, e2.*
from TABLE1 as e2
where e2.UPDATED between '2013-05-01' and '2013-05-31'
order by e2.UPDATED, e2.ACCOUNT
) as t
where t.SEQ > 270 and t.SEQ <= 300 -- Paging
order by t.UPDATED, t.ACCOUNT
```
This query works just fine, returning the correct RRNs for the rows I need. However, when I attempted to join the result of the subquery with another table, *the RRNs changed*. So I simplified the query to a subquery within a simple outer query, without any join:
```
select rrn(e) as RRN, e.*
from TABLE1 as e
where rrn(e) in (
select t.RRN2 -- Gives correct RRNs
from (
select row_number() over() as SEQ,
rrn(e2) as RRN2, e2.*
from TABLE1 as e2
where e2.UPDATED between '2013-05-01' and '2013-05-31'
order by e2.UPDATED, e2.ACCOUNT
) as t
where t.SEQ > 270 and t.SEQ <= 300 -- Paging
order by t.UPDATED, t.ACCOUNT
)
order by e.UPDATED, e.ACCOUNT
```
The outer query simply grabs all of the columns of each row selected by the subquery, using the RRN as the row key. But this query *does not work* - it returns rows with completely different RRNs.
I need the actual RRN, because it will be used to retrieve more detailed information from the table in a subsequent query.
Any ideas about why the RRNs end up different?
**Resolution**
I decided to break the query into two calls, one to issue the simple subquery and return just the RRNs (rows-IDs), and the second to do the rest of the JOINs and so forth to retrieve the complete info for each row. (Since the table gets updated only once a day, and rows never get deleted, there are no potential timing problems to worry about.)
This approach appears to work quite well.
**Addendum**
As to the question of why an out-of-memory error occurs, this appears to be a limitation on only *some* of our test servers. Some can only handle up to around 2m rows, while others can handle much more than that. So I'm guessing that this is some sort of limit imposed by the admins on a server-by-server basis. | I find it hard to believe that querying a table of mere 3 million rows, even when joined with something else, should cause an out-of-memory condition, so in my view you should address this issue first (or cause it to be addressed).
As for your question of *why the RRNs end up different* I'll take the liberty of quoting [the manual](http://pic.dhe.ibm.com/infocenter/iseries/v6r1m0/topic/db2/rbafzscarrn.htm):
> If the argument identifies a view, common table expression, or nested table expression derived from more than one base table, the function returns the relative record number of the first table in the outer subselect of the view, common table expression, or nested table expression.
A construct of the type `...where something in (select somethingelse...)` typically translates into a join, so there. | Trying to use RRN as a primary key is asking for trouble.
I find it hard to believe there isn't a key available.
Granted, there may be no explicit primary key defined in the table itself. But is there a unique key defined in the table?
It's possible there are no keys defined in the table itself (a practice that is 20 years out of date), but in that case there's usually a logical file with a unique key defined that is used by the application as the de-facto primary key to the table.
Try looking for related objects via green screen (DSPDBR) or GUI (via "Show related"). Keyed logical files show in the GUI as views. So you'd need to look at the properties to determine if they are uniquely keyed DDS logicals instead of non-keyed SQL views.
A few times I've run into tables with no existing de-facto primary key. Usually, it was possible to figure out what could be defined as one from the existing columns.
When there truly is no PK, I simply add one. Usually a generated identity column. There's a technique you can use to easily add columns without having to recompile or test any heritage RPG/COBOL programs. (and note LVLCHK(\*NO) is NOT it!)
The technique is laid out in Chapter 4 of the modernizing Redbook
<http://www.redbooks.ibm.com/abstracts/sg246393.html>
1) Move the data to a new PF (or SQL table)
2) create new LF using the name of the existing PF
3) repoint existing LF to new PF (or SQL table)
Done properly, the record format identifiers of the existing objects don't change and thus you don't have to recompile any RPG/COBOL programs. | iSeries query changes selected RRN of subquery result rows | [
"",
"sql",
"db2",
"ibm-midrange",
"db2-400",
""
] |
I am trying to make an application, and it requires me to store the data somewhere, so I have to create a database, and I have absolutely no idea about making one. I can make the Android application for the front end, but I need help with storing, retrieving and manipulating the data stored online.
1. How and where do I store the data online?
2. What languages do I need to learn to do that?
3. How do I access that data using the internet?
Please help me out. Just guide me to the sources and add links to the answer.
SQlite is the default embedded database solution in Android. You have two options here, but I'll go ahead and advise you to choose the former.
First option is to do it, as they say, "close to the metal" as possible. This means implementing your own contentprovider, your table structures, and all that. [Here](http://www.vogella.com/tutorials/AndroidSQLite/article.html)'s a very nice tutorial on how to do just that.
Second option is to use an abstraction layer or whatever you want to call a library that does the heavy lifting and makes you stay away from coding the boilerplate stuff. There's a lot of choices out there, and each one differs in many ways -- some of them doesn't even use SQlite underneath. I suggest you take a look at this [stackoverflow thread](https://stackoverflow.com/questions/1842357/higher-level-database-layer-for-android) that lists some of the better persistence abstraction solutions available in Android.
Now for your question about storing stuff online -- if I understand correctly, what you want is a cloud server/solution setup. There's also quite a lot to choose from, and I'll let the other answers tell you what they are, but I personally recommend [Parse](https://parse.com/). Storage is just one of the many useful features that it has, plus it also provides an Android API that simplifies all your network queries and result handling so that you don't have to deal with HttpConnection and JSON parsing. Plus it's free for small projects. | I guess you will have to follow the client - server protocol.
1. Use SQLite Database (relational database) on the client (Android) to store data locally.
2. On your server, get a Relational Database eg. On Amazon EC2 you can get one.
3. Create an API for your application on the server, which will accept HTTP requests from the client and store data accordingly.
4. Look into REST, Flask APIs to get started. | Database for android online | [
"",
"android",
"sql",
"database",
""
] |
Hi, I'm a beginner in Ruby on Rails.
Here is my question:
I have this method in my controller
```
def save_profile
p "======================================"
p params
p "======================================"
p params[:company][:job_locations_attributes]
p "company_params------------------------"
p company_params
p "--------------------------------------"
company_profile
@location = JobLocation.new(:city_id => params[:city_id])
@location.save
if @company.update_attributes(company_params)
redirect_to company_dashboard_path(@company.id)
else
render 'company_profile'
end
end
```
and in the terminal I get output like:
```
"======================================"
{"utf8"=>"✓", "authenticity_token"=>"kbId4JgeaM+mGlmZC1U4gFCUYN7LHmuqWq8Es3rxa+k=", "company"=>{"name"=>"Besole", "address"=>"<p>Besole, Besole, Besole, Besole</p>", "employee_count"=>"> 200", "company_type"=>"Business", "foundation_year"=>"2020", "mission"=>"Besolification", "website"=>"https://www.besole.com", "facebook_page_url"=>"https://www.facebook.com/besole", "twitter_page_url"=>"https://www.twitter.com/besole", "description"=>"Besole besole Besole Besole Besole Besole besole", "delete_logo"=>"", "logo_crop_x"=>"", "logo_crop_y"=>"", "logo_crop_h"=>"", "logo_crop_w"=>"", "jobs_attributes"=>{"33"=>{"id"=>"33", "company_id"=>"11", "title"=>"Besoler", "description"=>"<p>besoler besoler besoler</p>", "job_type_id"=>"1", "experience"=>"1 - 3 Years", "job_category_id"=>"2", "skill_ids"=>["", "6", "5", "7", "9", "11", "12", "16", "15"], "_destroy"=>"false"}}, "job_locations_attributes"=>{"33"=>{"job_locations"=>"2"}}}, "city_id"=>"40035", "portfolios_id"=>"321", "commit"=>"Submit", "controller"=>"companies", "action"=>"save_profile", "id"=>"11"}
"======================================"
{"33"=>{"job_locations"=>"2"}}
"company_params------------------------"
{"name"=>"Besole", "logo_crop_h"=>"", "logo_crop_w"=>"", "logo_crop_x"=>"", "logo_crop_y"=>"", "delete_logo"=>"", "address"=>"<p>Besole, Besole, Besole, Besole</p>", "employee_count"=>"> 200", "company_type"=>"Business", "mission"=>"Besolification", "website"=>"https://www.besole.com", "description"=>"Besole besole Besole Besole Besole Besole besole", "foundation_year"=>"2020", "facebook_page_url"=>"https://www.facebook.com/besole", "twitter_page_url"=>"https://www.twitter.com/besole", "jobs_attributes"=>{"33"=>{"id"=>"33", "company_id"=>"11", "title"=>"Besoler", "description"=>"<p>besoler besoler besoler</p>", "job_type_id"=>"1", "experience"=>"1 - 3 Years", "job_category_id"=>"2", "skill_ids"=>["", "6", "5", "7", "9", "11", "12", "16", "15"], "_destroy"=>"false"}}}
"--------------------------------------"
```
I want to get that `33` and the job locations value.
I want to save those two variables inside table Location under columns job\_id and state\_id where job\_id is 33 and state\_id is 2 | If you'd like to keep it short
```
job_id, locations = params[:company][:job_locations_attributes].first
Location.create(:job_id => job_id, :state_id => locations.values[0])
``` | you can use it.
```
job_id = params[:company][:job_locations_attributes].keys[0]
state_id = params[:company][:job_locations_attributes][job_id]['job_locations']
``` | Ruby on Rails Saving to Database | [
"",
"sql",
"ruby-on-rails",
"ruby",
""
] |
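The hash-extraction idea in the accepted answer above (Ruby's `Hash#first` returning the first key/value pair) can be sketched in plain Python for illustration; the dict below mirrors the params structure from the question, and the ActiveRecord `Location.create` step is left out since only the extraction is being shown:

```python
# Nested structure mirroring params[:company][:job_locations_attributes].
params = {"company": {"job_locations_attributes": {"33": {"job_locations": "2"}}}}

# Ruby's `.first` on a Hash yields the first [key, value] pair;
# next(iter(...items())) is the Python equivalent.
job_id, locations = next(iter(params["company"]["job_locations_attributes"].items()))
state_id = list(locations.values())[0]  # Ruby's `.values[0]`

print(job_id, state_id)  # 33 2
```

Either way, the key step is treating the hash as a sequence of key/value pairs rather than indexing by a hard-coded key.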
I have lots of useless approved comments on my site. One way I have been removing them is:
```
DELETE FROM wp_comments WHERE comment_content LIKE '%agree%with%you,%thanks%, '
```
However, this is removing good comments as well and leaving out a lot of bad comments.
How do I modify the query to delete comments with fewer than 5 words?
Due to the large number of comments in the actual database, I am worried about running a command with `LIKE`, since it will scan more than 20K rows. Is there any way to reduce the load? | Try using `LENGTH`. Below, you will delete entries which have fewer than 35 characters; I think that's better than counting words.
```
length(comment_content) < 35 --//change length number as you want //35 characters
```
like that:
```
DELETE FROM wp_comments WHERE comment_content LIKE '%agree%with%you,%thanks%, '
AND length(comment_content) < 35
```
[**DEMO HERE**](http://sqlfiddle.com/#!2/6fad1/1) | The most effective way is:
**MySQL Query Start:**
**`-- Select All Rows`**
```
mysql> SELECT * FROM comments;
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
| id | comments | log_time |
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
| 1 | This Comment has 4 space | 2014-03-20 16:05:33 |
| 2 | Lorem ipsum dolor sit amet. | 2014-03-20 16:08:12 |
| 3 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laborum molest | 2014-03-20 16:08:12 |
| 4 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsa, eum, fuga dolorum cupiditate blanditiis enim | 2014-03-20 16:08:29 |
| 5 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Magnam | 2014-03-20 16:08:29 |
| 6 | Lorem ipsum dolor sit amet. | 2014-03-20 16:09:09 |
| 7 | Lorem ipsum dolor sit amet. | 2014-03-20 16:09:16 |
| 8 | Lorem ipsum dolor sit amet. | 2014-03-20 16:09:18 |
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
8 rows in set (0.00 sec)
```
**`-- Check Space Count`**
```
mysql> SELECT comments, (
-> length( trim( comments ) ) - length( replace( trim( comments ) , ' ', '' ) )
-> ) AS total_space
-> FROM comments
-> LIMIT 0 , 30;
+---------------------------------------------------------------------------------------------------------------+-------------+
| comments | total_space |
+---------------------------------------------------------------------------------------------------------------+-------------+
| This Comment has 4 space | 4 |
| Lorem ipsum dolor sit amet. | 4 |
| Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laborum molest | 9 |
| Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsa, eum, fuga dolorum cupiditate blanditiis enim | 14 |
| Lorem ipsum dolor sit amet, consectetur adipisicing elit. Magnam | 8 |
| Lorem ipsum dolor sit amet. | 4 |
| Lorem ipsum dolor sit amet. | 4 |
| Lorem ipsum dolor sit amet. | 4 |
+---------------------------------------------------------------------------------------------------------------+-------------+
8 rows in set (0.00 sec)
```
**`-- Delete Those Records who Has less than 5 words`**
```
mysql> DELETE FROM comments WHERE (
-> length( trim( comments ) ) - length( replace( trim( comments ) , ' ', '' ) )
-> ) < 5;
Query OK, 5 rows affected (0.16 sec)
```
**`-- Select All Rows Again to Verify Rows`**
```
mysql> SELECT * FROM comments;
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
| id | comments | log_time |
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
| 3 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Laborum molest | 2014-03-20 16:08:12 |
| 4 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Ipsa, eum, fuga dolorum cupiditate blanditiis enim | 2014-03-20 16:08:29 |
| 5 | Lorem ipsum dolor sit amet, consectetur adipisicing elit. Magnam | 2014-03-20 16:08:29 |
+----+---------------------------------------------------------------------------------------------------------------+---------------------+
3 rows in set (0.00 sec)
```
In your case you can use like this:
```
-- Check Space Count
SELECT comment_content, (
length( trim( comment_content ) ) - length( replace( trim( comment_content ) , ' ', '' ) )
) AS total_space
FROM wp_comments
LIMIT 0 , 30;
-- Delete those comments who has less than 5 words
DELETE FROM wp_comments WHERE (
length( trim( comment_content ) ) - length( replace( trim( comment_content ) , ' ', '' ) )
) < 5;
```
**`-- Live DEMO`**
Click Here to see **[Live DEMO](http://sqlfiddle.com/#!2/e1864/1/0)** | Delete Comments / Content with Less than 5 words | [
"",
"mysql",
"sql",
"wordpress",
"phpmyadmin",
""
] |
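The space-counting predicate used in the answers above (`length(trim(x)) - length(replace(trim(x), ' ', ''))`) is portable across engines. The sketch below runs it against an in-memory SQLite table standing in for `wp_comments`, with made-up sample rows, which is a cheap way to test the predicate with a SELECT before committing to the DELETE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wp_comments (comment_content TEXT)")
conn.executemany("INSERT INTO wp_comments VALUES (?)",
                 [("I agree with you, thanks",),                    # 5 words -> kept
                  ("nice post",),                                    # 2 words -> deleted
                  ("This is a genuinely useful long comment",)])     # 7 words -> kept

# A comment with at least 5 words has at least 4 separating spaces.
conn.execute("""
    DELETE FROM wp_comments
    WHERE length(trim(comment_content))
        - length(replace(trim(comment_content), ' ', '')) < 4
""")
remaining = [r[0] for r in conn.execute("SELECT comment_content FROM wp_comments")]
print(remaining)
```

Note this counts single spaces only; runs of multiple spaces would inflate the "word" count.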
I have this application where release dates of music albums are saved in the MySQL database as `VARCHAR` (I have no influence on that), so it displays the date like **2014-09-11**. How can I make it so that the release date gets displayed as only **2014**? Do I have to convert it to `DATETIME`?
My MySQL query looks like this:
```
SELECT products.pro_id,
products.title AS album_title,
products.genre AS album_genre,
products.coverarttoenailurl AS cover_image,
products.ccws_pro_id AS product_upc,
track_title,
track_duration,
album_id,
physicalReleaseDate,
digitalReleaseDate
FROM products
INNER JOIN track_albumn_information
ON products.ccws_pro_upc = track_albumn_information.product_upc
AND track_albumn_information.artist_id ='".$artist_id."'
ORDER BY physicalReleaseDate, digitalReleaseDate DESC
```
The relevant fields are `physicalReleaseDate` and `digitalReleaseDate`.
Please help anyone... | ```
SUBSTR(physicalReleaseDate, 1, 4)
SUBSTR(digitalReleaseDate, 1, 4)
``` | Try This...Use Year function of mysql for ref. <http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_year>
```
SELECT products.pro_id,products.title AS album_title, products.genre AS album_genre, products.coverarttoenailurl AS cover_image, products.ccws_pro_id
AS product_upc, track_title, track_duration, album_id, Year(physicalReleaseDate) , year(digitalReleaseDate) FROM products INNER JOIN track_albumn_information
ON products.ccws_pro_upc = track_albumn_information.product_upc AND track_albumn_information.artist_id ='".$artist_id."' ORDER BY physicalReleaseDate, digitalReleaseDate DESC
``` | MYSQL Datetime as varchar and how to display year only | [
"",
"mysql",
"sql",
"datetime",
""
] |
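Since the column is a `VARCHAR` holding `YYYY-MM-DD` text, the accepted answer's `SUBSTR(col, 1, 4)` simply slices off the year with no date conversion needed. A quick check against an in-memory SQLite table (sample schema made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (physicalReleaseDate TEXT)")
conn.execute("INSERT INTO products VALUES ('2014-09-11')")

# SUBSTR(col, 1, 4) pulls the year out of the varchar date.
year = conn.execute(
    "SELECT SUBSTR(physicalReleaseDate, 1, 4) FROM products").fetchone()[0]
print(year)  # 2014
```

The string-slicing approach works precisely because the text is in a fixed `YYYY-...` layout; for any other layout a real date conversion would be safer.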
I have two `datetimeoffset` fields, and I want to calculate the time between them.
I have done this:
```
CONVERT(varchar(5), DATEADD(MINUTE,DATEDIFF(MINUTE, P.DT_INIC_PARD , P.DT_TMNO_PARD ),0),114) AS 'Duration'.
```
The problem is: when the time between them is greater than 24 hours, the query result shows me '00:00'.
Example:
```
P.DT_INIC_PARD = 2014-03-18 07:00:00.0000000 -03:00
P.DT_TMNO_PARD = 2014-03-19 07:20:00.0000000 -03:00
```
This is more than 24 hours, but it shows me '00:20'.
The question is: how do I solve this?
Thanks | What do you intend to do with that `DATEADD()` function? What it's doing is turning your `DATEDIFF()` output into a `DATETIME` field, which you then `CONVERT()` to a time format. You can't display more than 24 hours in a time format `00:00`, so you need to choose a different way to display the output.
This works fine:
```
SELECT DATEDIFF(MINUTE, '2014-03-18 07:00:00.0000000' , '2014-03-19 07:20:00.0000000' ) -- 1460 minutes
```
If you want the result in days/hours/minutes you can use modulus division:
```
WITH cte AS (SELECT DATEDIFF(MINUTE, '2014-03-18 07:00:00.0000000' , '2014-03-19 07:20:00.0000000') Duration)
SELECT CAST(Duration/1440 AS VARCHAR(15))+'d '
 + CAST((Duration%1440)/60 AS VARCHAR(15))+'h '
 + CAST(Duration%60 AS VARCHAR(15))+'m'
FROM cte
``` | You can do the following:
```
SELECT CAST(Duration/60 AS VARCHAR(5)) + ':' +
RIGHT('0'+ CAST(Duration % 60 AS VARCHAR(2)),2)
FROM ( SELECT DATEDIFF(MINUTE, P.DT_INIC_PARD, P.DT_TMNO_PARD) Duration
FROM YourTable) A
``` | Datediff more than 24 hours | [
"",
"sql",
""
] |
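The core point of the accepted answer is that an `hh:mm`-style format wraps at 24 hours, so the total minute count has to be decomposed arithmetically into days, hours and minutes. A minimal Python sketch of the same modulus breakdown (the function name is made up):

```python
def format_minutes(total_minutes):
    """Break a minute count into days / hours / minutes."""
    days, rem = divmod(total_minutes, 1440)   # 1440 minutes per day
    hours, minutes = divmod(rem, 60)
    return f"{days}d {hours}h {minutes}m"

# 2014-03-18 07:00 -> 2014-03-19 07:20 is 1460 minutes.
print(format_minutes(1460))  # 1d 0h 20m
```

The same `divmod` arithmetic is what the T-SQL `Duration/1440`, `(Duration%1440)/60` and `Duration%60` expressions compute.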
I have a string something like this: `1234-56-7-8-9012`. The string can be any size, and the values between the `-` separators can be any length. I need to extract the `7` in the middle of this string, but I cannot accommodate the changing size of the string. This is what I've been using, but it doesn't handle the change of size:
```
@String = '1234-56-7-8-9012'
SUBSTRING(
SUBSTRING(@String,CHARINDEX('-',@String)+1,LEN(@String))
,CHARINDEX('-'
,SUBSTRING(@String,CHARINDEX('-',@String)+1,LEN(@String))
)+1
,1
)
```
This will give me the `7` value, but sometimes that value will be 2 characters in length. Basically I need to get the number between the 2nd and 3rd `-`. | Try this:
```
declare @String varchar(50) = '1234-56-7-8-9012'
select substring(@String, charindex('-',@String,charindex('-',@String,1)+1)+1,
charindex('-',@String,charindex('-',@String,charindex('-',@String,1)+1)+1)-
charindex('-',@String,charindex('-',@String,1)+1)-1)
``` | Please try:
```
declare @String nvarchar(1000) = '1234-56-72-9012'
select
LEFT(y, CHARINDEX('-', y)-1)
from(
select
RIGHT(x, LEN(x)-CHARINDEX('-', x)) y
from(
select RIGHT(@String, LEN(@String)-CHARINDEX('-', @String)) x
)x1
)x2
``` | Retrieve a string between 2 same variables SQL | [
"",
"sql",
"sql-server",
"substring",
"charindex",
""
] |
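The nested `CHARINDEX` expressions in both answers are hand-walking the string past two dashes and reading up to the third; splitting on the delimiter does the same thing. A Python sketch of both views of the problem (illustrative only; the originals are T-SQL, and the function name is made up):

```python
def between_2nd_and_3rd_dash(s):
    # Same idea as the nested CHARINDEX calls: find the 2nd and 3rd
    # dash positions, then take the slice between them.
    second = s.find("-", s.find("-") + 1)
    third = s.find("-", second + 1)
    return s[second + 1:third]

s = "1234-56-7-8-9012"
print(between_2nd_and_3rd_dash(s))  # 7
print(s.split("-")[2])              # 7 -- the short way
```

The split version makes the intent obvious: "give me the third dash-delimited piece", whatever its length.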
I'm trying to create a procedure that will insert a new entry IF the ID is not already in use. I just can't figure out what I'm doing wrong here:
```
CREATE OR REPLACE PROCEDURE add_user
IS
BEGIN
IF NOT EXISTS SELECT * FROM tab WHERE tab.id = '111'
THEN
INSERT INTO tab(id, name, job, city, phone)
VALUES(111, 'Frank', 'Programmer', 'Chicago', '111111111');
END IF;
END;
``` | Do not try to enforce referential integrity in code as it is very, very likely that it will be very, very wrong. Not *obviously* wrong - no, this is the kind of "wrong" that only shows up when you have 200 users banging on the database and then something goes very very far south in a very big hurry and all of a sudden nobody can use the system or do their jobs and your phone is ringing and your boss is breathing steam and cinders down the back of your neck and you're sweating and there's perspiration running down your contacts and you can't see and you can't think and and and... You know, **that** kind of wrong. :-)
Instead, use the referential integrity features of the database that are designed to *prevent* this kind of wrong. A good place to start is with the rules about Normalization. You remember, you learned about them back in school, and everybody said how stooopid they were, and **WHY** do we have to learn this junk because everybody *knows* that **nobody** does this stuff because it, like, doesn't matter, does it, hah, hah, hah (aren't we smart)? Yeah, that stuff - the stuff that after a few days on a few projects like the paragraph above you suddenly Get Religion About, because it's what's going to save you (the *smarter* you, the *older-but-wiser* you) and your dumb ass from having days like the paragraph above.
So, first thing - ensure your table has a primary key. Given the field names above I'm going to suggest it should be the `ID` column. To create this constraint you issue the following command at the SQL\*Plus command line:
```
ALTER TABLE TAB ADD CONSTRAINT TAB_PK PRIMARY KEY(ID);
```
Now, rewrite your procedure as
```
CREATE OR REPLACE PROCEDURE add_user IS
BEGIN
INSERT INTO TAB
(id, name, job, city, phone)
VALUES
(111, 'Frank', 'Programmer', 'Chicago', '111111111');
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
EXIT; -- fine - the user already exists
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error in ADD_USER : ' || SQLCODE || ' : ' || SQLERRM);
RAISE;
END;
```
So, rather than try to pre-check to see if the coast is clear we just barge right in and insert the damn user! For we are programmers! We are strong!! We care nothing for the potential risks!!! We drink caffeinated soft drinks WITH HIGH-FRUCTOSE CORN SYRUP!!!! Errors cannot frighten us!!!
**ARE WE NOT MEN?!?!?**
Well, actually, if we're *responsible* (rather than "reprehensible" :-) programmers, we actually care a whole helluva lot about potential errors, if only because we know they're gonna land on OUR desks, most likely when we'd rather be surfing the web, or learning a new programming language, or chatting to/drooling over the girl in the next aisle who works in marketing and who is seriously out of our league - my point being, errors concern us. Avoiding them, handling them properly - these things are what set professional developers apart from dweeb wannabees and management strikers. Which gets us to the line of code right after the `INSERT` statement:
**EXCEPTION**
That's right - we know things can go wrong with that `INSERT` statement - but because we *ARE* men, and we *ARE* responsible programmers, we're gonna Do The Right Thing, which in this case means "Handle The Exception We Can Forsee". And what do we do?
```
WHEN DUP_VAL_ON_INDEX THEN
```
GREAT! We KNOW we can get a DUP\_VAL\_ON\_INDEX exception, because that's what's gonna get thrown when we try to insert a user that already exists. And then we'll Do The Right Thing:
```
EXIT; -- fine - the user already exists.
```
which in this case is to ignore the error completely. No, really! We're trying to insert a new user. The user we're trying to insert is already there. What's not to love? Now, it may well be that in that mystical, mythical place called The Real World it's just possible that it might be considered gauche to simply ignore this fact, and there might be a requirement to do something like log the fact that someone tried to add an already-extant user - but here in PretendLand we're just going to say, fine - the user already exists so we're happy.
BUT WAIT - there's **more**! Not ONLY are we going to handle (and, yeah, ok, ignore) the DUP\_VAL\_ON\_INDEX exception, **BUT** we will also handle EVERY OTHER POSSIBLE ERROR CONDITION KNOWN TO MAN-OR-DATABASE KIND!!!
```
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') ||
' Error in ADD_USER : ' || SQLCODE || ' : ' || SQLERRM);
RAISE;
```
Which means we'll spit out a somewhat useful error message telling use WHEN, WHERE, and WHAT went wrong, and then re-raise the exception (whatever it might be) so that whomsoever called us gets it dumped gracelessly into THEIR lap so THEY can handle it. (Serves 'em right for calling us, the lazy so-and-so's...).
So, now you know.
Go thou and do good works.
And...share and enjoy. | The direct answer to your question is to put the subquery in parentheses:
```
IF NOT EXISTS (SELECT * FROM tab WHERE tab.id = 111) THEN
INSERT INTO tab(id, name, job, city, phone)
VALUES(111, 'Frank', 'Programmer', 'Chicago', '111111111');
END IF;
```
However, that is not a good solution to your problem (I removed the single quotes because the constant is quoted in one place but not in another). One issue is race conditions. The `not exists` part might run, then *another* process might insert the row, and then the `insert` will fail.
The right solution is:
```
create unique index tab_id on tab(id);
```
Then, either use `merge` for the `insert`. Or wrap the `insert` in exception handling code. *Or*, use the logging facility:
```
INSERT INTO tab(id, name, job, city, phone)
VALUES(111, 'Frank', 'Programmer', 'Chicago', '111111111');
LOG ERRORS INTO errlog ('Oops, something went wrong') REJECT LIMIT UNLIMITED;
```
Then you will not be *allowed* to insert duplicate rows into the table. They won't go in, and you won't get an error (unless you want one). | Using "IF NOT EXIST" in a procedure | [
"",
"sql",
"plsql",
""
] |
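Both answers above make the same point: declare the uniqueness in the schema and handle the duplicate-key error, rather than pre-checking with a racy SELECT. The sketch below shows the shape of that pattern using an in-memory SQLite table, where Oracle's `DUP_VAL_ON_INDEX` surfaces as `sqlite3.IntegrityError` (schema and data made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, name TEXT)")

def add_user(user_id, name):
    try:
        conn.execute("INSERT INTO tab (id, name) VALUES (?, ?)", (user_id, name))
    except sqlite3.IntegrityError:
        pass  # fine -- the user already exists (Oracle's DUP_VAL_ON_INDEX)

add_user(111, "Frank")
add_user(111, "Frank")  # second call is silently a no-op

count = conn.execute("SELECT COUNT(*) FROM tab WHERE id = 111").fetchone()[0]
print(count)  # 1
```

Because the constraint is enforced by the database itself, two concurrent callers cannot both slip past an existence check: one insert succeeds, the other raises.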
I'm trying to write an `INSERT ... SELECT` query with additional parameters, but it doesn't work. Below is my code:
```
INSERT INTO [epmscdc].[dbo].[billing]
([pid] ,[did] ,[lid] ,[totalamt] ,
[cash] ,[bchange] ,[btrans_date])
SELECT patientlab.did, patientlab.pid,
patientlab.lid, laboratory.lprice
FROM patientlab
INNER JOIN laboratory
ON patientlab.lid = laboratory.lid
INNER JOIN doctor
ON patientlab.did = doctor.did
WHERE patientlab.pid = 3
AND pstatus = '-', 1000,1000,GETDATE())
```
I just want to select specific fields from the other tables, and I want to get the `cash`, `change`, and `trans_date` values from textboxes. How can I do that? | ```
INSERT INTO [epmscdc].[dbo].[billing] ([pid] ,[did] ,[lid] ,[totalamt] ,[cash] ,[bchange] ,[btrans_date])
SELECT patientlab.did, patientlab.pid, patientlab.lid, laboratory.lprice , 1000,1000,GETDATE()
FROM patientlab
INNER JOIN laboratory ON patientlab.lid = laboratory.lid
INNER JOIN doctor ON patientlab.did = doctor.did
WHERE patientlab.pid = 3 AND pstatus = '-'
```
The `INSERT INTO ... SELECT ...` syntax inserts the result set returned by your SELECT statement; you cannot append extra comma-separated values at the end.
Simply select the extra values in your SELECT statement, and the whole returned result set will be inserted into your target table. | ```
INSERT INTO [epmscdc].[dbo].[billing] ([pid] ,[did] ,[lid] ,[totalamt] ,[cash] ,[bchange] ,[btrans_date]) VALUES
(SELECT patientlab.did, patientlab.pid, patientlab.lid, laboratory.lprice,1000,1000,GETDATE()
FROM patientlab
INNER JOIN laboratory ON patientlab.lid = laboratory.lid
INNER JOIN doctor ON patientlab.did = doctor.did
WHERE patientlab.pid = 3 AND pstatus = '-') As b
``` | Insert Select with additional parameters in SQL Server | [
"",
"sql",
"sql-server",
"insert-select",
""
] |
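The accepted fix (put the literal values inside the SELECT list) can be verified quickly against an in-memory SQLite database; the schema below is a pared-down, made-up version of the tables in the question, and SQLite's `date('now')` stands in for `GETDATE()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patientlab (pid INTEGER, lid INTEGER)")
conn.execute("CREATE TABLE billing (pid INTEGER, lid INTEGER, cash INTEGER, "
             "bchange INTEGER, btrans_date TEXT)")
conn.execute("INSERT INTO patientlab VALUES (3, 7)")

# Constants ride along as extra columns of the SELECT.
conn.execute("""
    INSERT INTO billing (pid, lid, cash, bchange, btrans_date)
    SELECT pid, lid, 1000, 1000, date('now')
    FROM patientlab
    WHERE pid = 3
""")
row = conn.execute("SELECT pid, lid, cash, bchange FROM billing").fetchone()
print(row)  # (3, 7, 1000, 1000)
```

In application code the `1000, 1000` literals would become bound parameters fed from the textboxes.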
I have a large dataset with invoices that I need to perform some currency conversions on. There are a lot of *joins* required to get the data, but the basic idea is that one table has the individual invoice charges in USD, while another has the total charges in local currency; nowhere are the individual charges on the invoice provided in local currency, so I need to compute the effective exchange rate by dividing the local currency total by the USD total and then multiplying the charges by this rate. So far, I have successfully completed this for a single record as follows:
```
DECLARE @USD_Total float, @LOC_Total float, @FX_Rate float
SET @USD_Total = (Select SUM (InvoiceTable.USD_ChargeAmount)
FROM
{Some Joins}
WHERE InvoiceID in ('1234567')
Group By InvoiceTable.InvoiceID)
SET @LOC_Total = (Select LocalInvoiceTable.ChargeTotal
FROM
{Some Joins}
WHERE InvoiceID in ('1234567')
Group By LocalInvoiceTable.InvoiceID)
SET @FX_Rate = @LOC_Total / @USD_Total
SELECT
InvoiceTable.InvoiceID,
SUM(CASE InvoiceTable.ChargeCode when 'TYPE A' THEN InvoiceTable.USD_ChargeAmount ELSE 0 END)*@FX_Rate As Type_A,
SUM(CASE InvoiceTable.ChargeCode when 'TYPE B' THEN InvoiceTable.USD_ChargeAmount ELSE 0 END)*@FX_Rate As Type_B,
SUM(CASE InvoiceTable.ChargeCode when 'TYPE C' THEN InvoiceTable.USD_ChargeAmount ELSE 0 END)*@FX_Rate As Type_C
FROM
{Some Joins}
WHERE InvoiceID in ('1234567')
Group By LocalInvoiceTable.InvoiceID
```
So this works fine, but I need to replicate it for thousands of invoice IDs. How can I accomplish this without the hard-coded WHERE clause? I'm a novice at this, so any help is greatly appreciated. I am using Microsoft SQL Server 2008. | Don't think in terms of individual rows. SQL deals in sets of rows, so think in terms of a set-based answer to each part of the question. Using CTEs you can encapsulate each piece (the result set for each part of the question) and combine the answers in other CTEs or in the final query.
The following adaptation of your query shows 3 CTEs:
1. First we get USD values for each InvoiceID
2. Second we get localized values for each InvoiceID
3. Third we calculate the effective exchange rate based on the previous two aggregations, JOINing on their respective InvoiceIDs
Finally, in the main query we use the outcome of the effective Exchange Rate calculation
```
;WITH USD AS (
SELECT InvoiceID, SUM(InvoiceTable.USD_ChargeAmount) AS [Total]
FROM
{Some Joins}
GROUP BY InvoiceTable.InvoiceID
),
LOC AS (
SELECT InvoiceID, SUM(LocalInvoiceTable.ChargeTotal) AS [Total]
FROM
{Some Joins}
GROUP BY LocalInvoiceTable.InvoiceID
),
FX AS (
SELECT USD.InvoiceID,
CONVERT(MONEY, LOC.Total) / CONVERT(MONEY, USD.Total) AS [Rate]
FROM USD
INNER JOIN LOC
ON LOC.InvoiceID = USD.InvoiceID
)
SELECT InvoiceTable.InvoiceID,
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE A' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * FX.[Rate] AS [Type_A],
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE B' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * FX.[Rate] AS [Type_B],
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE C' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * FX.[Rate] AS [Type_C]
FROM InvoiceTable
{Some Joins}
INNER JOIN FX
ON FX.InvoiceID = InvoiceTable.InvoiceID
```
**EDIT:**
A more *ideal* approach would be to store the exchange rate used at the time that the InvoiceTable and LocalInvoiceTable rows are being inserted. You could use a table similar to:
```
InvoiceInfo
(
InvoiceID INT NOT NULL, -- PK, FK to InvoiceTable
ExchangeRate SMALLMONEY NOT NULL,
CurrencyCode CHAR(3) NOT NULL, -- USD, EUR, etc.
InvoiceTotalUSD MONEY, -- optional
InvoiceTotalLocal MONEY -- optional
)
```
I would recommend using the [ISO 4217](http://www.currency-iso.org/en/home/tables/table-a1.html) currency codes. The two InvoiceTotal\* fields are only for if you routinely need to aggregate by InvoiceID outside of needing to calculate the "effective exchange rate".
Storing the info at the time of Invoice creation reduces your query to just:
```
SELECT InvoiceTable.InvoiceID,
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE A' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * II.[ExchangeRate] AS [Type_A],
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE B' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * II.[ExchangeRate] AS [Type_B],
SUM(CASE InvoiceTable.ChargeCode
when 'TYPE C' THEN InvoiceTable.USD_ChargeAmount
ELSE 0 END) * II.[ExchangeRate] AS [Type_C]
FROM InvoiceTable
{Some Joins}
INNER JOIN InvoiceInfo II
ON II.InvoiceID = InvoiceTable.InvoiceID
``` | You must not approach it as a procedural program. SQL is a language for set manipulation. You need to think in terms like calculated columns and joins. First you need to calculate USD totals by invoice, then join it to LocalInvoiceTable and divide by ChargeTotal to get the FX rate. Finally join back to InvoiceTable and multiply each individual charge by the FX rate.
```
with UsdTotal as
(
select InvoiceID, sum(USD_ChargeAmount) as USD_ChargeTotal
from InvoiceTable
group by InvoiceID
)
, FxRate as
(
select lit.InvoiceID, lit.ChargeTotal / ut.USD_ChargeTotal as Rate
from LocalInvoiceTable lit
inner join UsdTotal ut
on ut.InvoiceID = lit.InvoiceID
)
select
it.InvoiceId,
it.USD_ChargeAmount as UsdValue,
it.USD_ChargeAmount * fx.Rate as LocalCcyValue
from InvoiceTable it
inner join FxRate fx on fx.InvoiceID = it.InvoiceID
```
You can try it in [SQLFiddle](http://www.sqlfiddle.com/#!3/e750e/4/0). Please note that FX rates change every day. If the individual charges have been realized on different days you don't get the effective FX rate just and average. Hence the calculated amounts in local currency will be different from the real ones. | SQL Server to calculate a variable for every row in a query | [
"",
"sql",
"sql-server-2008",
"variables",
"local-variables",
""
] |
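The CTE pipeline from the accepted answer (aggregate USD per invoice, join the local totals, derive a per-invoice rate, then apply it to every charge row) can be sketched end to end in SQLite with a simplified, made-up schema and numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (invoice_id INTEGER, usd_amount REAL)")
conn.execute("CREATE TABLE local_invoice (invoice_id INTEGER, local_total REAL)")
conn.executemany("INSERT INTO invoice VALUES (?, ?)",
                 [(1, 60.0), (1, 40.0), (2, 200.0)])
conn.executemany("INSERT INTO local_invoice VALUES (?, ?)",
                 [(1, 90.0), (2, 150.0)])

rows = conn.execute("""
    WITH usd AS (
        SELECT invoice_id, SUM(usd_amount) AS total
        FROM invoice GROUP BY invoice_id
    ),
    fx AS (
        SELECT usd.invoice_id, l.local_total / usd.total AS rate
        FROM usd JOIN local_invoice l ON l.invoice_id = usd.invoice_id
    )
    SELECT i.invoice_id, i.usd_amount * fx.rate
    FROM invoice i JOIN fx ON fx.invoice_id = i.invoice_id
    ORDER BY i.invoice_id, i.usd_amount
""").fetchall()
print(rows)  # invoice 1 converts at an effective rate of 0.9, invoice 2 at 0.75
```

Each invoice gets its own effective rate, so every charge row is scaled by the rate of the invoice it belongs to, which is exactly what the per-row variable approach could not do.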
**EDIT**
After removing my silly mistake of `INTO` (I was working with INSERTs and just kept going), the error below is showing. It is still not working:
> Affected rows: 0
>
> [Err] 1093 - You can't specify target table 'tbl'
> for update in FROM clause
---
I'm trying to create an update where I select all the previous data in the column, add a complementary string and save it as new data. The code is below (with the error)
Using only the select, the result:
```
set @id = 3;
SELECT tbl_alias.string_id
FROM tbl as tbl_alias
WHERE id = @id
-- the output `3,10,8,9,4,1,7,11,5,2,6,12`
```
I also tried with this query (the output is what I want)
```
SELECT CONCAT((
SELECT tbl_alias.string_id
FROM tbl as tbl_alias
WHERE id = @id
),',13,14,15,16') AS X
-- the output `3,10,8,9,4,1,7,11,5,2,6,12,13,14,15,16`
```
But after replacing the select below. It brings the same error.
**The query**
```
set @id = 3;
UPDATE INTO tbl
SET string_id =
CONCAT((
SELECT tbl_alias.string_id
FROM tbl as tbl_alias
WHERE id = @id
),',13,14,15,16') WHERE id = @id;
```
**The error**
> [Err] 1064 - You have an error in your SQL syntax; check the manual
> that corresponds to your MySQL server version for the right syntax to
> use near ' INTO tbl SET string\_id = CONCAT(( SELECT
> tbl\_alias.string\_id ' at line 1
---
It's probably the `CONCAT` together with the `SELECT`, but I haven't found the solution... | Do you need the sub query?
```
UPDATE tbl
SET string_id = CONCAT(string_id, ',13,14,15,16')
WHERE id = @id;
```
Note that in MySQL you cannot use an UPDATE to modify the same table that is used in the subquery (although there are ways to fiddle around it):
<https://dev.mysql.com/doc/refman/5.5/en/subqueries.html>
> *In MySQL, you cannot modify a table and select from the same table in a subquery. This applies to statements such as DELETE, INSERT,
> REPLACE, UPDATE, and (because subqueries can be used in the SET
> clause) LOAD DATA INFILE.* | Try to use `UPDATE` without `INTO`:
```
set @id = 3;
UPDATE tbl
SET string_id =
CONCAT((
SELECT tbl_alias.string_id
FROM tbl as tbl_alias
WHERE id = @id
),',13,14,15,16') WHERE id = @id;
```
**Update:**
Try this:
```
set @id = 3;
UPDATE tbl
SET string_id =
CONCAT(SELECT string_id FROM (
SELECT tbl_alias.string_id
FROM tbl as tbl_alias
WHERE id = @id
) t1 ,',13,14,15,16') WHERE id = @id;
``` | Update mysql field using CONCAT and SELECT | [
"",
"mysql",
"sql",
"concatenation",
""
] |
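As the accepted answer notes, no self-referencing subquery is needed (and MySQL would refuse one against the table being updated anyway). The self-appending UPDATE is easy to verify in an in-memory SQLite table, where `||` plays the role of `CONCAT`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER, string_id TEXT)")
conn.execute("INSERT INTO tbl VALUES (3, '3,10,8,9,4,1,7,11,5,2,6,12')")

# Append to the existing value in place; no self-referencing subquery needed.
conn.execute("UPDATE tbl SET string_id = string_id || ',13,14,15,16' WHERE id = 3")

value = conn.execute("SELECT string_id FROM tbl WHERE id = 3").fetchone()[0]
print(value)  # 3,10,8,9,4,1,7,11,5,2,6,12,13,14,15,16
```

Inside an UPDATE, a bare column reference already means "this row's current value", which is why the extra SELECT was never needed.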
I want to set a column in the `group by` table based on the first (actually, only) value in the group.
Specifically, given a table
```
id good
1 t
1 t
2 f
3 t
```
I want to produce the table
```
id multiplicity goodN
1 2 0
2 1 0
3 1 1
```
where `goodN` is 1 if and only if `multiplicity` is 1 and `good` is `t`:
```
select id, count(*) as multiplicity,
if (count(*) > 1, 0, if(good = 't', 1, 0)) as goodN
from ...
```
The question is: how do I extract the first (in my case, only) value of `good` from the group?
P.S. Is there a cheaper way to test that the group has size 1 than `count(*) = 1`? | If the count is 1, then both MAX(good) and MIN(good) will return that single "first" value in the group.
```
select id, count(*) as multiplicity,
if (count(*) > 1, 0, if(max(good) = 't', 1, 0)) as goodN
from ...
``` | Select good from...
Where id=(select min(id) from...
Where good >0)
And
Group by id having count(id)=1
You can test count(goods) instead of count(\*) and at the end add group by id. To take only record with moltiplicity 1 add having count(goods)=1 | first value in group by | [
"",
"sql",
"hiveql",
""
] |
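Since every group whose flag matters has exactly one row, `MIN(good)` and `MAX(good)` both return that single value, which is the accepted answer's trick. Below, the query from the question is checked against the sample data in an in-memory SQLite table (a standard `CASE` replaces the MySQL/Hive `IF`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, good TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 't'), (1, 't'), (2, 'f'), (3, 't')])

rows = conn.execute("""
    SELECT id,
           COUNT(*) AS multiplicity,
           CASE WHEN COUNT(*) > 1 THEN 0
                WHEN MAX(good) = 't' THEN 1
                ELSE 0 END AS goodN
    FROM t
    GROUP BY id
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2, 0), (2, 1, 0), (3, 1, 1)]
```

Wrapping the column in MAX (or MIN) is what makes it legal to reference alongside the aggregates in a GROUP BY query.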
I have a query that I'm working with in Access that is supposed to take records specified in another query and alter them. Unfortunately, if I had multiple records selected before, it creates duplicates in this new query.
For example, if I had 2 records selected, it creates two of the same entry for each. If I had 3 selected, it creates 3 records for each, for a total of 9 records when I only wanted 3. If I had only one record initially, it works perfectly.
I read that it might be a problem with the join, but I'm not sure how to fix it.
Below is my code, I hope I explained myself well enough :/
```
SELECT
GV_transfer3.[Dept ID],
GV_transfer3.[Existing Account],
GV_transfer3.Class,
GV_transfer3.Fund,
GV_transfer3.Program,
GV_transfer3.Project,
GV_transfer3.ID,
GV_transfer3.[project Number],
GV_transfer3.[Account Number],
GV_transfer3.Code,
GV_transfer3.Date,
GV_transfer3.Vendor,
'transferred from ' & Right([GV_transfer3].[Project Number],Len([GV_transfer3].[Project Number])-8) & ' to ' & Right([New Project Number],Len([New Project Number])-8) & '; ' & [GV_transfer3].[Description] AS Description1,
GV_transfer3.[Req By],
GV_transfer3.[Approved By],
GV_transfer3.[Proj# Number],
GV_transfer3.[Transferred out],
GV_transfer.Action,
-[Amount to transfer] AS Amount,
0 AS Reconciled,
'done ' & (Date()) & '; ' & [amount to transfer] & ' from ' & Right([GV_transfer3].[Project Number],Len([GV_transfer3].[Project Number])-8) & ' to ' & Right([New Project Number],Len([New Project Number])-8) & '; ' & [GV_transfer3].[Comment] AS Comment1,
GV_transfer3.Transfer,
GV_transfer3.Match,
IIf((Date())<=#6/30/2010#,'FY10',IIf((Date()) Between #7/1/2010# And #6/30/2011#,'FY11',IIf((Date()) Between #7/1/2011# And #6/30/2012#,'FY12','FY13'))) AS [Fiscal Year],
GV_transfer3.EquipGroupID,
GV_transfer3.EquipNumber,
GV_transfer3.Rep_Maint_Purchase,
Null AS Budget, GV_transfer.[Rel Project],
GV_transfer.MEIF,
GV_transfer.Released,
GV_transfer3.Proposed, GV_transfer3.Funded,
GV_transfer3.Declined,
GV_transfer3.Indirect,
GV_transfer3.DIC,
GV_transfer3.Forecast,
GV_transfer3.IntFunded,
GV_transfer3.Invoice,
GV_transfer3.VContract,
GV_transfer3.Category,
GV_transfer3.Activity
FROM GV_transfer3
INNER JOIN GV_transfer
ON GV_transfer3.ID = GV_transfer.ID;
``` | This is just a guess--but it looks like you have duplicate rows with the same ID in either `GV_transfer` or `GV_transfer3` or both. You will need to ensure that both tables use unique IDs. One way to check is to run the following query on the tables, one by one:
```
select ID, count(ID) as num_times
from GV_transfer
group by ID
```
... and the same query for `GV_transfer3`, with the table name modified. | You could try using the DISTINCT keyword, using SQL in Access.
It will eliminate duplication in a column/columns.
Make a new query and select the current query you made that gave you the duplication.
SELECT DISTINCT("name of column")
FROM "name of table/query";
That will make a new query eliminating all duplication.
Hope that helps. | Access query returning duplicate records | [
"",
"sql",
"ms-access",
"join",
""
] |
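The duplicate-ID check from the accepted answer can be narrowed with a `HAVING` clause so only the offending IDs come back. A minimal sketch with Python's sqlite3 standing in for Access (table and rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE GV_transfer(ID INTEGER, Action TEXT);
    INSERT INTO GV_transfer VALUES (1, 'a'), (2, 'b'), (2, 'c');
""")
# IDs appearing more than once explain the join fan-out
# (2 matching rows on each side -> 2 x 2 = 4 output rows, and so on).
dupes = conn.execute("""
    SELECT ID, COUNT(ID) AS num_times
    FROM GV_transfer
    GROUP BY ID
    HAVING COUNT(ID) > 1
""").fetchall()
print(dupes)  # [(2, 2)]
```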
I'm trying to understand how I can pull information from multiple tables at once in one query, if that is possible.
I have 3 tables and I'm wondering if there is a way I can query all the product names for customers that live in California?
```
Table:
products
Fields:
productOid
productName
companyOid
Table:
customerData
Fields:
customerOid
firstName
lastName
state
Table:
orders
Fields:
orderNumber
customerOid
productOid
```
Would this fall under something like an INNER JOIN?
Also, I'm learning MySQL. | You will need to use inner joins for this.
```
SELECT DISTINCT p.productName
FROM orders o
INNER JOIN customerData c ON o.customerOid = c.customerOid
INNER JOIN products p ON o.productOid = p.productOid
WHERE c.state = 'CA';
```
I am using DISTINCT here because it's possible a customer would order the same product more than once (or multiple customers would order the same products) and I'm assuming you don't want duplicates.
I'm also making the assumption that your state is represented as a two character column.
[Read more about joins](http://en.wikipedia.org/wiki/Join_%28SQL%29) | You could use one more join, but I would write it this way:
```
SELECT DISTINCT p.productName
FROM
orders o INNER JOIN products p
ON o.productOid = p.productOid
WHERE
o.customerOid IN (SELECT customerOid
FROM customerData
WHERE state = 'California')
```
It might be a little slower than a join, but it's more readable. | Query for multiple tables | [
"",
"mysql",
"sql",
""
] |
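The accepted three-table join can be exercised end to end with sqlite3 (sample rows are invented; the SQL is identical to the MySQL version apart from the optional `INNER` keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products(productOid INTEGER, productName TEXT, companyOid INTEGER);
    CREATE TABLE customerData(customerOid INTEGER, firstName TEXT, lastName TEXT, state TEXT);
    CREATE TABLE orders(orderNumber INTEGER, customerOid INTEGER, productOid INTEGER);
    INSERT INTO products VALUES (1, 'Widget', 10), (2, 'Gadget', 10);
    INSERT INTO customerData VALUES (100, 'Ann', 'Lee', 'CA'), (101, 'Bob', 'Roy', 'NY');
    INSERT INTO orders VALUES (1, 100, 1), (2, 100, 1), (3, 101, 2);
""")
# DISTINCT collapses the two orders Ann placed for the same product.
names = conn.execute("""
    SELECT DISTINCT p.productName
    FROM orders o
    JOIN customerData c ON o.customerOid = c.customerOid
    JOIN products p ON o.productOid = p.productOid
    WHERE c.state = 'CA'
""").fetchall()
print(names)  # [('Widget',)]
```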
I'm trying to insert new rows into my DB. I have 155 rows that I need to insert. What is happening is that I am adding new users based on an existing account table. I have a query that lists the new users that I need to add to the users table, but I don't want to have to type out the insert 155 times. Is there an easier way of doing this? I've seen where you can have multiple sets of 'Values' in the insert, but I'm not sure how I would implement this. For example:
```
insert into tblUsers (State,City,Code)
Values ('IN','Indy',(select UserCode from tblAccounts where UserCode in(Select UserCode from tblAccounts where State = 'IN')))
```
I know that my sub-query will return 155 values and so won't actually work here, is there a way to modify this so that it will work? | Try this:
```
INSERT INTO tblUsers (State,City,Code)
SELECT 'IN','Indy', UserCode
FROM tblAccounts
WHERE UserCode IN
(SELECT UserCode
FROM tblAccounts
WHERE State = 'IN')
```
or better simplified (a subquery is not needed):
```
INSERT INTO tblUsers (State,City,Code)
SELECT 'IN','Indy', UserCode
FROM tblAccounts
WHERE State = 'IN'
``` | ```
insert into tblUsers (State,City,Code)
Select 'IN','Indy', UserCode
from tblAccounts
where UserCode in (Select UserCode
from tblAccounts
where State = 'IN')
``` | How to insert multiple rows with one insert statement | [
"",
"sql",
"sql-server-2008",
""
] |
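The simplified `INSERT INTO ... SELECT` from the accepted answer, demonstrated with sqlite3 (schemas and rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tblAccounts(UserCode TEXT, State TEXT);
    CREATE TABLE tblUsers(State TEXT, City TEXT, Code TEXT);
    INSERT INTO tblAccounts VALUES ('U1', 'IN'), ('U2', 'IN'), ('U3', 'OH');
""")
# One statement inserts a row per matching account - no 155 hand-typed VALUES.
conn.execute("""
    INSERT INTO tblUsers (State, City, Code)
    SELECT 'IN', 'Indy', UserCode
    FROM tblAccounts
    WHERE State = 'IN'
""")
rows = conn.execute("SELECT * FROM tblUsers ORDER BY Code").fetchall()
print(rows)  # [('IN', 'Indy', 'U1'), ('IN', 'Indy', 'U2')]
```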
I am trying to get the avg difference between order requested delivery date and actual delivery date, by Destination. The below is giving me issues. Do you know where I am going wrong? The message I am getting is "Each GROUP BY expression must contain at least one column that is not an outer reference"
```
SELECT CONCAT(Locations.City, ', ', Locations.State) AS Destination,
AVG(DATEDIFF("Day", Orders.ReqDate, Orders.SchedDate)) AS "Average Diff"
FROM dbo.Orders,
dbo.Locations
WHERE dbo.Orders.OrderNum= dbo.Locations.OrderNum
GROUP BY 'Destination'
``` | There are several ways to improve your query. I would suggest:
```
SELECT CONCAT(l.City, ', ', l.State) AS Destination,
AVG(DATEDIFF(Day, o.ReqDate, o.SchedDate)*1.0) AS "Average Diff"
FROM dbo.Orders o join
dbo.Locations l
ON o.OrderNum = l.OrderNum
GROUP BY CONCAT(l.City, ', ', l.State) ;
```
Note the following:
* `"Day"` --> `Day`. `Day` is a keyword in SQL Server and does not need quotes.
* The use of table aliases, `o` and `l` to make the query more readable.
* The multiplication by `1.0`. `DATEDIFF()` returns an integer. The average of an integer (in SQL Server) is an integer. Usually, you would want a decimal number. You can also convert this to `float` or to another format, but I find `*1.0` to be the simplest way for a fast conversion.
* Fixed the `GROUP BY` (as noted in other answers). | Try
```
SELECT CONCAT(Locations.City, ', ', Locations.State) AS Destination,
AVG(DATEDIFF("Day", Orders.ReqDate, Orders.SchedDate)) AS "Average Diff"
FROM dbo.Orders,
dbo.Locations
WHERE dbo.Orders.OrderNum= dbo.Locations.OrderNum
GROUP BY CONCAT(Locations.City, ', ', Locations.State)
``` | GROUP BY not working in my code | [
"",
"sql",
""
] |
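The repaired query groups by the full expression rather than a quoted alias. SQLite has no `DATEDIFF`, so the sketch below substitutes `julianday()` for the day difference; everything else mirrors the accepted answer (sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Locations(OrderNum INTEGER, City TEXT, State TEXT);
    CREATE TABLE Orders(OrderNum INTEGER, ReqDate TEXT, SchedDate TEXT);
    INSERT INTO Locations VALUES (1, 'Indy', 'IN'), (2, 'Indy', 'IN');
    INSERT INTO Orders VALUES (1, '2020-01-01', '2020-01-03'),
                              (2, '2020-01-01', '2020-01-02');
""")
# GROUP BY repeats the concatenation expression, not the 'Destination' alias.
rows = conn.execute("""
    SELECT l.City || ', ' || l.State AS Destination,
           AVG(julianday(o.SchedDate) - julianday(o.ReqDate)) AS avg_diff
    FROM Orders o
    JOIN Locations l ON o.OrderNum = l.OrderNum
    GROUP BY l.City || ', ' || l.State
""").fetchall()
print(rows)  # one group: ('Indy, IN', 1.5) -- the average of 2 and 1 days
```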
Is it possible to `SELECT` the name and address attributes from the names table while at the same time combining the description attribute from the info table?
```
CREATE TABLE names(
ID CHAR(2) PRIMARY KEY
name VARCHAR2(20) NOT NULL
address VARCHAR2(40) NOT NULL
)
CREATE TABLE info (
ID CHAR(2) REFERENCES names
description VARCHAR2(80) NOT NULL
)
```
I Tried `union` method but failed
```
SELECT name, address
FROM names
UNION
SELECT description
FROM info;
```
is there any way to do it? | ```
SELECT name, address, description
FROM names JOIN info USING(id)
``` | select n.name, n.address, i.description from names as n
Left join info as i on n.id = i.id | SELECT in SQL QUERY - is it possible | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
I am trying to insert data into an existing table and keep receiving an error.
```
INSERT INTO Patient
(
PatientNo,
PatientFirstName,
PatientLastName,
PatientStreetAddress,
PatientTown,
PatientCounty,
PatientPostcode,
DOB,
Gender,
PatientHomeTelephoneNumber,
PatientMobileTelephoneNumber
)
VALUES
(
121,
'Miles',
'Malone',
'64 Zoo Lane',
'Clapham',
'United Kingdom',
'SW4 9LP',
'1989-12-09',
'M',
02086950291,
07498635200
);
```
Error:
```
Error starting at line : 1 in command -
INSERT INTO Patient (PatientNo,PatientFirstName,PatientLastName,PatientStreetAddress,PatientTown,PatientCounty,PatientPostcode,DOB,Gender,PatientHomeTelephoneNumber,PatientMobileTelephoneNumber)
VALUES (121, 'Miles', 'Malone', '64 Zoo Lane', 'Clapham', 'United Kingdom','SW4 9LP','1989-12-09','M',02086950291,07498635200)
Error report -
SQL Error: ORA-01861: literal does not match format string
01861. 00000 - "literal does not match format string"
*Cause: Literals in the input must be the same length as literals in
the format string (with the exception of leading whitespace). If the
"FX" modifier has been toggled on, the literal must match exactly,
with no extra whitespace.
*Action: Correct the format string to match the literal.
```
Just not sure why this keeps happening; I am learning SQL at the moment, any help will be greatly appreciated! | Try replacing the string literal for date `'1989-12-09'` with `TO_DATE('1989-12-09','YYYY-MM-DD')` | The format you use for the date doesn't match Oracle's default date format.
A default installation of Oracle Database sets the DEFAULT DATE FORMAT to `dd-MMM-yyyy`.
Either use the function `TO_DATE(dateStr, formatStr)` or simply use `dd-MMM-yyyy` date format model. | SQL Error: ORA-01861: literal does not match format string 01861 | [
"",
"sql",
"oracle",
"sql-insert",
""
] |
I am new to SQL and a little embarrassed to ask this.
I have A table that contains 2 columns A and B
```
A B
0 2
1 3
3 1
```
I want a query that will return
```
Category | Sum
A 4
B 6
```
What is the best way to write this query? | ```
select 'A', sum(A) from table
union
select 'B', sum(B) from table
``` | If UNPIVOT is supported by the SQL product you are using:
```
SELECT Category, SUM(Value) AS Sum
FROM atable
UNPIVOT (Value FOR Category IN (A, B)) u
GROUP BY Category
;
```
In particular, the above syntax works in [Oracle](http://sqlfiddle.com/#!4/f4b4f2/1 "Demo at SQL Fiddle") and [SQL Server](http://sqlfiddle.com/#!3/f4b4f/1 "Demo at SQL Fiddle"). | SQL counting values and selecting Column Name as result | [
"",
"sql",
""
] |
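The accepted per-column-sum `UNION`, runnable with sqlite3. `UNION ALL` is used here instead of `UNION` so that both rows survive even if the two sums happened to be equal (plain `UNION` de-duplicates whole rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(A INTEGER, B INTEGER);
    INSERT INTO t VALUES (0, 2), (1, 3), (3, 1);
""")
rows = conn.execute("""
    SELECT 'A' AS Category, SUM(A) AS total FROM t
    UNION ALL
    SELECT 'B', SUM(B) FROM t
    ORDER BY Category
""").fetchall()
print(rows)  # [('A', 4), ('B', 6)]
```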
I have an SQL query that brings back 17 numbers with this format
> 06037-11
I need to add a 0 before the dash, so it is:
> 060370-11
Is there an easy way to do this? I have seen STUFF() as an option, but I don't understand it.
**Edit**
I am using Teradata | Previous response includes example for Teradata 14.x using regular expression support. The following will work in Teradata 13.x or Teradata 12.x without regular expression support:
```
SELECT SUBSTRING('06037-11' FROM 1 FOR (POSITION('-' IN '06037-11') -1))
|| '0-'
|| SUBSTRING('06037-11' FROM (POSITION('-' IN '06037-11') + 1))
``` | One way in Oracle:
```
with qry as
(select '06037-11' code from dual)
select regexp_replace(code, '-', '0-') from qry;
``` | SQL Adding the same char into multiple fields | [
"",
"sql",
"teradata",
""
] |
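The Teradata answer's `POSITION`/`SUBSTRING` pattern maps to `instr()`/`substr()` in SQLite; a minimal sketch of the same split-insert-rejoin:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Everything before the dash, then '0', then the dash and the remainder.
row = conn.execute("""
    SELECT substr(v, 1, instr(v, '-') - 1) || '0' || substr(v, instr(v, '-'))
    FROM (SELECT '06037-11' AS v)
""").fetchone()
print(row[0])  # 060370-11
```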
This seems like it should be easy, but isn't. I'm migrating a query from MySQL to Redshift of the form:
```
INSERT INTO table
(...)
VALUES
(...)
ON DUPLICATE KEY UPDATE
value = MIN(value, VALUES(value))
```
Primary keys we're inserting that aren't already in the table are just inserted. For primary keys that are already in the table, we update the row's values based on a condition that depends on the existing and new values in the row.
<http://docs.aws.amazon.com/redshift/latest/dg/merge-replacing-existing-rows.html> does not work, because `filter_expression` in my case depends on the current entries in the table. I'm currently creating a staging table, inserting into it with a `COPY` statement and am trying to figure out the best way to merge the staging and real tables. | I'm having to do exactly this for a project right now. The method I'm using involves 3 steps:
**1.**
Run an update that addresses changed fields (I'm updating whether or not the fields have changed, but you can certainly qualify that):
```
update table1 set col1=s.col1, col2=s.col2,...
from table1 t
join stagetable s on s.primkey=t.primkey;
```
**2.**
Run an insert that addresses new records:
```
insert into table1
select s.*
from stagetable s
left outer join table1 t on s.primkey=t.primkey
where t.primkey is null;
```
**3.**
Mark rows no longer in the source as inactive (our reporting tool uses views that filter inactive records):
```
update table1
set is_active_flag='N', last_updated=sysdate
from table1 t
left outer join stagetable s on s.primkey=t.primkey
where s.primkey is null;
``` | It is possible to create a temp table. In Redshift it is better to delete and insert the record.
Check this doc
<http://docs.aws.amazon.com/redshift/latest/dg/merge-replacing-existing-rows.html> | Bulk updating existing rows in Redshift | [
"",
"sql",
"postgresql",
"amazon-redshift",
""
] |
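The staging-table merge from the accepted answer, condensed into sqlite3 with an invented `MIN(value)` rule for overlapping keys (mirroring the `ON DUPLICATE KEY UPDATE value = MIN(...)` from the question). Redshift's `UPDATE ... FROM` join syntax is replaced with a correlated subquery, which is more portable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE target(pk INTEGER PRIMARY KEY, value INTEGER);
    CREATE TABLE stage(pk INTEGER PRIMARY KEY, value INTEGER);
    INSERT INTO target VALUES (1, 50), (2, 10);
    INSERT INTO stage  VALUES (1, 30), (3, 99);
""")
# Step 1: update overlapping keys, keeping the smaller value.
conn.execute("""
    UPDATE target
    SET value = MIN(value, (SELECT s.value FROM stage s WHERE s.pk = target.pk))
    WHERE pk IN (SELECT pk FROM stage)
""")
# Step 2: insert staged keys not yet present in the target.
conn.execute("""
    INSERT INTO target
    SELECT s.pk, s.value
    FROM stage s
    LEFT JOIN target t ON s.pk = t.pk
    WHERE t.pk IS NULL
""")
rows = conn.execute("SELECT * FROM target ORDER BY pk").fetchall()
print(rows)  # [(1, 30), (2, 10), (3, 99)]
```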
```
TableOne
PersonId PersonScore
1 10
1 20
2 99
2 40
3 45
```
I need to fetch only those rows where PersonId appears More than Once e.g. Following is the resultset i want to achieve
```
PersonId PersonScore
1 10
1 20
2 99
2 40
```
i am using cte
```
;WITH cte AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY i.PersonId ORDER BY i.PersonId) AS [Num]
FROM TableOne i
)
SELECT *
FROM cte
WHERE cte.Num > 1
```
The problem is that it removes the extra rows. It removes the first instance of any PersonId. Can anyone suggest a solution, please? | You want to use `count(*)` as a window function, rather than `row_number()`:
```
select t.PersonId, t.PersonScore
from (select t.*, count(*) over (partition by PersonId) as cnt
from TableOne t
) t
where cnt > 1;
``` | ```
SELECT *
FROM TableOne t1
JOIN (SELECT PersonId
FROM TableOne
GROUP BY PersonId
HAVING COUNT(*) > 1) t2
on t1.PersonId = t2.PersonId
``` | Select Rows that appear more than once | [
"",
"sql",
"sql-server",
""
] |
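The `COUNT(*) OVER (PARTITION BY ...)` approach from the accepted answer, runnable with sqlite3 (this assumes SQLite ≥ 3.25 for window-function support; sample data copied from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TableOne(PersonId INTEGER, PersonScore INTEGER);
    INSERT INTO TableOne VALUES (1, 10), (1, 20), (2, 99), (2, 40), (3, 45);
""")
# Unlike ROW_NUMBER, the window COUNT is the same for every row of a
# partition, so filtering on it keeps ALL rows of the repeated PersonIds.
rows = conn.execute("""
    SELECT PersonId, PersonScore
    FROM (SELECT t.*, COUNT(*) OVER (PARTITION BY PersonId) AS cnt
          FROM TableOne t)
    WHERE cnt > 1
    ORDER BY PersonId, PersonScore
""").fetchall()
print(rows)  # [(1, 10), (1, 20), (2, 40), (2, 99)]
```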
I am using Microsoft Access and am having difficulty with this scenario for two reasons:
1. the ORDER BY function is not working
2. When I run the query, a pop up box appears asking for a value to be entered for SalesOrder.PID. When I enter a 1, the datasheet appears.
Any advice would be much appreciated regarding my code below:
```
SELECT
Product.PID,
Product.Code,
COUNT (SalesOrder.PID) AS Ordered,
SUM (ExtendedPrice) AS [Value],
SUM (NbrItemsRequested) AS Requested
FROM Product, SalesOrderProduct
WHERE
Product.PID = SalesOrderProduct.PID
GROUP BY
Product.PID, Product.Code
ORDER BY
Requested;
``` | The pop-up box appears because `SalesOrder.PID` references a table `SalesOrder` that is not in the `from` clause. Presumably you mean `SalesOrderProduct`.
You should also use explicit joins in your query, and table aliases for *all* columns:
```
SELECT p.PID, p.Code, COUNT(sop.PID) AS Ordered,
SUM(sop.ExtendedPrice) AS [Value],
SUM(sop.NbrItemsRequested) AS Requested
FROM Product as p inner join
SalesOrderProduct as sop
on p.PID = sop.PID
GROUP BY p.PID, p.Code
ORDER BY SUM(sop.NbrItemsRequested);
```
I'm not sure what you mean by the `order by` not working, but I'm guessing that putting the formula in instead of the alias will fix the problem.
Note: the above is guessing as to where `ExtendedPrice` and `NbrItemsRequested` come from. | If a pop up box appears asking for the SalesOrder.PID value this means Access doesn't recognise that column anywhere in the tables you've included in the query. In other words you are not counting SalesOrder.PID as Access doesn't recognise this. I would recommend checking your column names to make sure they are correct. | SQL Order By function not working | [
"",
"sql",
"ms-access",
"sql-order-by",
""
] |
This is sample table data. My database consists of many tables like this. In all the tables, wv1 is the same while rv1 differs.
```
wv1 rv1
341.6 2.48
343.6 2.58
344.7 2.37
346.3 2.32
347.9 2.29
349.5 2.36
351.1 2.23
352.6 2.24
354.2 2.25
355.8 2.29
357.4 2.28
358.9 2.23
```
For comparing one table with another table, I mentioned another table rv1 as rv2 here.
I am using the data rv1 from the user-selected table, and rv2 for all the tables in the database.
The formula is
* I = ACOS[rv1.rv2/|rv1|.|rv2|]
The PostgreSQL query for comparing two tables is shown below; I want to compare one table with all the tables in the database and produce an I value for each comparison.
```
select
acos(sum(t1.rv1 * t2.rv2) / (
sqrt(sum(power(t1.rv1, 2))) * sqrt(sum(power(t2.rv2, 2)))
)) as i
from
t1
inner join
t2 on t1.wv1 = t2.wv2
```
This query joins two tables and applies the formula described above, and it shows the correct output. Now I want to compare a single table (t1 as the user-defined table, t2 as all the tables in the database) with all the tables in the database and generate an I value for each comparison. I want to apply the formula to all the tables in the database.
The output should be like this
***Final Output***
```
Ivalue
0.3559772512926
0.52684312
.............
.............
```
I want to write the formula in a PostgreSQL query. How can I write it? | It is made through JSP + PostgreSQL
```
String ss1 = "SELECT UPPER(table_name) FROM information_schema.tables where table_schema='public' and table_type='BASE TABLE' ORDER BY table_name ASC;";
stmt = connection.createStatement();
rset=stmt.executeQuery(ss1);
```
The String ss1 is passed as dbTab
```
String usTab ="oak";
while (rset.next()) {
String sname1 = rset.getString(1);
String dbTab = sname1.toUpperCase();
try
{
out.println("<tr><td>" + dbTab + "</td><td>"+ usTab + " vs " + dbTab + "</td>");
if(dbTab.equals(usTab)) continue;
Connection connection = DriverManager.getConnection(ss, "postgres", "acheive9");
String ss2 = "select acos(sum("+usTab+".reflectance * "+dbTab+".reflectance) / ( sqrt(sum(power("+usTab+".reflectance, 2))) * sqrt(sum(power("+dbTab+".reflectance, 2))))) as i from "+usTab+" inner join "+dbTab+" on "+usTab+".wavelength = "+dbTab+".wavelength";
stmt2 = connection.createStatement();
rset2=stmt2.executeQuery(ss2);
while (rset2.next()) {
String sname17 = rset2.getString(1);
out.println("<td>"+sname17+"</td></tr>");
}
```
This code is working. | If I understand this correctly, you actually want to compare different sets that all 'look alike' (that is, 2 columns, same number of rows, identical wv1 values) and each set is stored in a new table, right?
My first remark would be: why didn't you store all the sets in the same table and add an extra column (eg. set\_id) that points to which set the value-pair belongs. If that would have been the case you could have created a query like this:
```
select t1.set_id as left_set_id,
t2.set_id as right_set_id,
acos(sum(t1.rv1 * t2.rv2) / ( sqrt(sum(power(t1.rv1, 2))) * sqrt(sum(power(t2.rv2, 2))))) as i
from bigtable as t1
inner join bigtable as t2
on t1.wv1 = t2.wv2
AND t1.set_id > t2.set_id
GROUP BY t1.set_id,
t2.set_id
ORDER BY t1.set_id,
t2.set_id
```
Note: the `t1.set_id > t2.set_id` avoids that you compare a set with itself + it seems (to me) that the relationship is symmetrical anyway so you only need to compare sets in one direction.
Update, if you want to compare just 1 set with all the other sets, you'll have to use it like this :
```
select t2.set_id,
acos(sum(t1.rv1 * t2.rv2) / ( sqrt(sum(power(t1.rv1, 2))) * sqrt(sum(power(t2.rv2, 2))))) as i
from bigtable as t1
inner join bigtable as t2
on t1.wv1 = t2.wv2
AND t1.set_id <> t2.set_id
WHERE t1.set_id = '<base set identifier>'
GROUP BY t2.set_id
ORDER BY t2.set_id
```
Now, since we don't have this, we'll have to fake it. To do so I would suggest to create a view that `UNION ALL`s all your tables and then you can run the query above on the view instead of on `bigtable`
```
CREATE VIEW bigview
AS
SELECT 'table1' as set_id, wv1, rv1 FROM table1
UNION ALL
SELECT 'table2' as set_id, wv1, rv1 FROM table2
UNION ALL
SELECT 'table3' as set_id, wv1, rv1 FROM table3
UNION ALL
etc...
```
This view can then be re-used regardless for which set (= table) you are comparing, you will have to make sure though that it includes all the tables (= sets) you want to be included in the comparison.
PS: you might have to work a bit on the syntax, don't have PostgreSQL available right here.
PS: [the documentation](http://www.postgresql.org/docs/8.4/static/functions-math.html) mentions you can use `@` to calculate the absolute value of a number; I'm guessing that would be easier than taking the root of the square. It surely will be less prone to overflows. | Compare one table with all tables in database | [
"",
"sql",
"postgresql",
""
] |
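The spectral-angle formula I = ACOS(r1·r2 / (|r1|·|r2|)) can be tested without a live PostgreSQL server by registering `acos` and `sqrt` as SQL functions in sqlite3 (SQLite may lack math functions unless compiled with them). Two tiny orthogonal "spectra" are invented here, so the expected angle is π/2:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
# Register math functions that stock SQLite builds may not provide.
conn.create_function("acos", 1, math.acos)
conn.create_function("sqrt", 1, math.sqrt)
conn.executescript("""
    CREATE TABLE t1(wv REAL, rv REAL);
    CREATE TABLE t2(wv REAL, rv REAL);
    INSERT INTO t1 VALUES (341.6, 1.0), (343.6, 0.0);
    INSERT INTO t2 VALUES (341.6, 0.0), (343.6, 1.0);
""")
# Join on wavelength, then acos of the normalized dot product.
i_val = conn.execute("""
    SELECT acos(SUM(a.rv * b.rv) /
                (sqrt(SUM(a.rv * a.rv)) * sqrt(SUM(b.rv * b.rv))))
    FROM t1 a JOIN t2 b ON a.wv = b.wv
""").fetchone()[0]
print(i_val)  # orthogonal vectors -> pi/2
```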
I have a table structure where there are `CARS` going between `BUILDINGS` that are built on `PLOTS`:
**CARS**
```
CAR_ID | ORIGIN | DESTINATION
----------------------------------
1 | A | C
2 | B | A
3 | A | B
```
**BUILDINGS**
```
BUILD_ID | PLOT
------------------
A | 2
B | 1
C | 3
```
**PLOTS**
```
PLOT_ID | COORD
------------------------
1 | "39.9,-75.5"
2 | "38.3,-74.7"
3 | "37.8,-76.1"
```
I'm trying to build a query that would show for each `CAR`, the `PLOT` coordinates of the origin and destination `BUILDING`s, like this:
```
CAR_ID | ORIGIN_COORD | DEST_COORD
-------------------------------------------
1 | "38.3,-74.7" | "37.8,-76.1"
2 | "39.9,-75.5" | "38.3,-74.7"
3 | "39.9,-75.5" | "39.9,-75.5"
```
This is what I tried but I don't think I'm approaching this right.
```
SELECT * FROM
(SELECT BUILD_ID, PLOT, COORD FROM BUILDINGS
INNER JOIN PLOTS ON PLOT = PLOT_ID) X
RIGHT JOIN CARS C
ON C.ORIGIN = X.BUILD_ID
```
Could someone please help me understand how to lookup/join for multiple columns (`ORIGIN` and `DEST`)? | below is my first thought:
```
select c.car_id,p1.coord as origin_coord, p2.coord as dest_coord
from cars as c
join buildings as b1 on b1.build_id=c.origin
join buildings as b2 on b2.build_id=c.destination
join plots as p1 on p1.plot_id=b1.plot
join plots as p2 on p2.plot_id=b2.plot
``` | Try
`SELECT C.car_id, po.coord as origin_coord, pd.cooord as dest_coord
FROM Cars as C
JOIN Buildings as o
on c.origin = o.build_id
JOIN Buildings as d
on c.destination = d.build_id
JOIN Plots as po
on po.plot_id = o.plot
JOIN Plots as pd
on pd.plot_id = d.plot
ORDER BY C.car_id` | SQL looking up values on three joined tables | [
"",
"sql",
"sql-server",
"join",
"lookup-tables",
""
] |
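The accepted answer's trick of joining the same lookup tables twice under different aliases, verified with sqlite3 using the question's data (the coordinates follow the data, so each car's origin and destination resolve through their own building/plot pair):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cars(car_id INTEGER, origin TEXT, destination TEXT);
    CREATE TABLE buildings(build_id TEXT, plot INTEGER);
    CREATE TABLE plots(plot_id INTEGER, coord TEXT);
    INSERT INTO cars VALUES (1, 'A', 'C'), (2, 'B', 'A'), (3, 'A', 'B');
    INSERT INTO buildings VALUES ('A', 2), ('B', 1), ('C', 3);
    INSERT INTO plots VALUES (1, '39.9,-75.5'), (2, '38.3,-74.7'), (3, '37.8,-76.1');
""")
# buildings/plots are each joined twice: once for origin, once for destination.
rows = conn.execute("""
    SELECT c.car_id, p1.coord AS origin_coord, p2.coord AS dest_coord
    FROM cars c
    JOIN buildings b1 ON b1.build_id = c.origin
    JOIN buildings b2 ON b2.build_id = c.destination
    JOIN plots p1 ON p1.plot_id = b1.plot
    JOIN plots p2 ON p2.plot_id = b2.plot
    ORDER BY c.car_id
""").fetchall()
print(rows)
```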
I have a query being executed every X milliseconds. As part of the result I would like to have an alternating `true/false` flag. This flag should change whenever the query is executed again.
Example
Sample query: `select 1, <<boolean alternator>>;`
1st execution returns: `1 | true`
2nd execution returns: `1 | false`
3rd execution returns: `1 | true`
and so on. It does not matter if it returns `true` or `false` for the first time.
For cases when X is odd number of seconds I have the following solution:
```
select
mod(right(extract(epoch from current_timestamp)::int::varchar,1)::int, 2) = 0
as alternator
```
This extracts last digit from epoch and then test if this is even number. Because X is defined as odd number, this test will alternate from one execution to another.
What would work in the same way when X is different - even or not in whole seconds? I would like to make it work for X like 500ms, 1200ms, 2000ms, ...
Note: I plan to use this with PostgreSQL. | I suggest a dedicated [**`SEQUENCE`**](http://www.postgresql.org/docs/current/interactive/sql-createsequence.html).
```
CREATE SEQUENCE tf_seq MINVALUE 0 MAXVALUE 1 START 0 CYCLE;
```
Each call with [`nextval()`](http://www.postgresql.org/docs/current/interactive/functions-sequence.html) returns 0 / 1 alternating. You can cast to boolean:
```
0::bool = FALSE
1::bool = TRUE
```
So:
```
SELECT nextval('tf_seq'::regclass)::int::bool;
```
To keep other roles from messing with the state of the sequence, only
`GRANT USAGE ON SEQUENCE tf_seq TO $dedicated_role;`. Run your query as that role or create a function with `SECURITY DEFINER` and `ALTER FUNCTION foo() OWNER TO $dedicated_role`;
Or, simpler yet, just make it the **column default** and completely ignore it in your inserts:
```
ALTER TABLE foo ALTER COLUMN bool_col
SET DEFAULT nextval('tf_seq'::regclass)::int::bool;
```
You need to grant `USAGE` on the sequence to roles that can insert.
Every next row gets the flipped value automatically.
[The usual notes for sequences apply](http://www.postgresql.org/docs/current/interactive/sql-createsequence.html). Like, if you roll back an `INSERT`, the sequence stays flipped. Sequence states are never rolled back. | A temporary table can save the boolean state:
```
create temporary table t (b boolean);
insert into t (b) values (true);
with u as (
update t
set b = not b
returning b
)
select 1, b
from t;
``` | Alternating true/false flag for current time given step | [
"",
"sql",
"postgresql",
"auto-increment",
"sql-insert",
""
] |
How can I get the output for the maximum count of repeated values from a table, where the repeated values correspond to a column, in such a way that there are multiple distinct values sharing the maximum count?
Consider the table data below:
```
+---------+------------+-------------+--------------+
| Bill_No | Bill_Date | Customer_ID | Total_Amount |
+---------+------------+-------------+--------------+
| 101 | 2012-04-10 | C001 | 64 |
| 102 | 2012-04-10 | C002 | 8 |
| 103 | 2012-04-11 | C002 | 140 |
| 104 | 2012-04-13 | C001 | 29 |
| 105 | 2012-04-12 | C003 | 125 |
| 106 | 2012-04-16 | C004 | 258 |
+---------+------------+-------------+--------------+
```
We see here that the maximum `count(customer_id)` is the same for `C001` and `C002`. I want to get both values.
The final output should look like follows:
```
customer_id | count(customer_id) //max value
----------------+-----------------------
C001 | 2
C002 | 2
----------------+-----------------------
``` | So, with some guessing, would
```
select
distinct g.customer_id,
g.cnt
from
(
select
distinct customer_id,
count(customer_id) cnt
from
table
group by
customer_id
) g
inner join
(
select
max(s.cnt) max_cnt
from
(
select
distinct customer_id,
count(customer_id) cnt
from
table
group by
customer_id
) s
) m
on
m.max_cnt = g.cnt
```
do the trick? | Just use multiple steps.
First save the count for every customer\_id. Then select the customer\_ids which have the MAX count.
```
CREATE TEMPORARY TABLE `tmp_count` SELECT count(*) as noOf, customer_id FROM `table` GROUP BY customer_id;
SELECT noOf, customer_id FROM `tmp_count` WHERE noOf = (SELECT MAX(noOf) FROM `tmp_count`);
``` | To get the multiple maximum-count of repeated values in mysql | [
"",
"mysql",
"sql",
"groupwise-maximum",
""
] |
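A more compact route to the tied maxima than the accepted double-derived-table join: compare each group's count against the maximum of all group counts in a `HAVING` clause. A sketch with sqlite3 and the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bills(bill_no INTEGER, customer_id TEXT);
    INSERT INTO bills VALUES (101, 'C001'), (102, 'C002'), (103, 'C002'),
                             (104, 'C001'), (105, 'C003'), (106, 'C004');
""")
# Keep every group whose count equals the largest group count -> ties survive.
rows = conn.execute("""
    SELECT customer_id, COUNT(*) AS cnt
    FROM bills
    GROUP BY customer_id
    HAVING COUNT(*) = (SELECT MAX(cnt)
                       FROM (SELECT COUNT(*) AS cnt
                             FROM bills
                             GROUP BY customer_id))
    ORDER BY customer_id
""").fetchall()
print(rows)  # [('C001', 2), ('C002', 2)]
```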
I need to do a correlated SQL query, and for that purpose I need to provide an alias to the outer query, in which I perform an inner join. I am not able to create the alias.
```
SELECT DISTINCT(name)
FROM PERSON
INNER JOIN M_DIRECTOR AS dira
ON (dira.PID = M_DIRECTOR.PID) as dira
WHERE 9 > (
SELECT COUNT(MID) FROM M_DIRECTOR WHERE name = dira.name
) ;
``` | I didn't really understand what you want to do, but I guess
```
select
distinct p.name,
count(d.MID) cnt
from
hindi2_PERSON p
inner join
hindi2_M_DIRECTOR d
on
p.PID = d.PID
group by
p.name
having count(d.MID) > 9
;
```
would do what you want | I don't know what you are asking or what you mean by an alias for an entire result,
but you are doing
```
select distinct(name) as othername
```
which is you are selecting `name` and you are giving here `othername` as an alias
then you retrieve it in result
```
$row['othername']
``` | How to give alias to results returned after inner join in mySQL | [
"",
"mysql",
"sql",
""
] |
I am stuck on a project design. One of the tables has a 1-M relation with the users table. So it has a foreign key. The same field is also the primary key.
Table as follows
Itemid:
* Primarykey
* Autoincrement
Useriditem:
* Primarykey
* Foreign key of id in users table
Itemname:
* Not null
Values:
```
-----------------------------------------
| **ITEMID** | **USERID** | ITEMNAME |
-----------------------------------------
| 1 | 1 | fooooooo |
-----------------------------------------
| 2 | 1 | tinytext |
-----------------------------------------
| 3 | 1 | MediumText |
-----------------------------------------
| 4 | 2 | LARGEtext |
-----------------------------------------
| 5 | 2 | HUGETEXT |
-----------------------------------------
| 6 | 1 | BLOOOOOB |
-----------------------------------------
| 7 | 3 | 001010101 |
-----------------------------------------
```
This is the result of the current design. What I am wondering is whether there is a way to make the auto increment work for each user separately.
Something like "Autoincrement item id GROUP BY user id"
```
-----------------------------------------
| **ITEMID** | **USERID** | ITEMNAME |
-----------------------------------------
| 1 | 1 | fooooooo |
-----------------------------------------
| 2 | 1 | tinytext |
-----------------------------------------
| 3 | 1 | MediumText |
-----------------------------------------
| 1 | 2 | LARGEtext |
-----------------------------------------
| 2 | 2 | HUGETEXT |
-----------------------------------------
| 4 | 1 | BLOOOOOB |
-----------------------------------------
| 1 | 3 | 001010101 |
-----------------------------------------
```
Is there a way to do this using mysql? | You want something like this:
[Demo](http://sqlfiddle.com/#!2/8bd40e/1)
```
CREATE TRIGGER item_id_auto_inc
BEFORE INSERT ON items
FOR EACH ROW
BEGIN
SET NEW.item_id := (SELECT CASE WHEN ISNULL(MAX(item_id)) THEN 0 ELSE MAX(item_id) END +1 FROM items WHERE user_id=NEW.user_id);
END
//
``` | The safer way to do this is taking into account that your application can get hit by more than 1 user at any given time
```
START TRANSACTION;
Insert into table A
Select Last inserted id from table A using Last_Insert_ID()
Update table B
COMMIT;
```
At least you are guaranteed to get this last inserted id from table A into table B. | Mysql->autoincrementing related to an other field | [
"",
"mysql",
"sql",
"auto-increment",
"ddl",
""
] |
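If triggers are unavailable (or, as the second answer warns, if concurrent writers mean you'd want a transaction), the per-user counter can also be computed at `INSERT` time with a `MAX()+1` subquery. A single-writer sketch in sqlite3 (table and column names invented; this approach is not safe under concurrent inserts without locking):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items(item_id INTEGER, user_id INTEGER, item_name TEXT,
                       PRIMARY KEY (user_id, item_id));
""")

def add_item(user_id, name):
    # Next item_id is the user's current max + 1 (or 1 for a new user).
    conn.execute("""
        INSERT INTO items (item_id, user_id, item_name)
        SELECT COALESCE(MAX(item_id), 0) + 1, ?, ?
        FROM items WHERE user_id = ?
    """, (user_id, name, user_id))

for uid, name in [(1, 'fooooooo'), (1, 'tinytext'), (2, 'LARGEtext'), (1, 'MediumText')]:
    add_item(uid, name)

rows = conn.execute(
    "SELECT item_id, user_id FROM items ORDER BY user_id, item_id"
).fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1), (1, 2)]
```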
I've got a boring issue to solve (hope it is hard only for me haha), as follows:
I have a PostgreSQL database with many tables.
These tables are updated daily by a Perl Script.
The table that interests for my problem follows the pattern below:
```
ID | Central | ts | Country | Name | Column3 | Column4 | Column5 |
------------------------------------------------------------------------------------------
```
There isn't a unique-column primary key that identifies the rows uniquely...
Instead, I can see a BTree in the Perl script built with "ID-Central-ts" acting as a PK.
"ts" is a timestamp generated by the script, and there are always 3 ts in the DB, so it stores every "central-ID" row for the past 3 days.
So, what I want:
Letting go "Country" and "Name" columns (these columns may differ even in the same ID-central-ts without problems, or even repeat themselves), one "ID-Central-ts" shouldn't have different column' values from those shown in a specific central.
I need a Query that shows me these values that mismatch from the right central, for the LAST timestamp added (The biggest number).
I mean:
If, for ID 01, the "default-central" says that values for "column3", "column4" and "column5" need to be a string with 'right' in the last "ts", any value different should be caught.
Example:
Assume that Central 'Alfa' is the "default Central".
It stores values that need to be equal to every single "ID" in this or any other Central, for that given ID.
```
ID | Central | ts | Country | Name | Column3 | Column4 | Column5 |
------------------------------------------------------------------------------------------
01 | Alfa | 10000001 | USA | Fairy | right | right | right |
01 | Alfa | 10000002 | USA | Minish | right | right | right |
01 | Alfa | 10000003 | USA | Elf | right | right | right |
01 | Delta | 10000001 | USA | Goron | right | right | right |
01 | Delta | 10000002 | USA | Elf | right | wrong | right |
01 | Delta | 10000003 | USA | Acqua | wrong | right | right |
.
.
.
02 | Alfa | 10000001 | BRA | Fairy | RIGHT | RIGHT | RIGHT |
02 | Alfa | 10000002 | BRA | Minish | RIGHT | RIGHT | RIGHT |
02 | Alfa | 10000003 | BRA | Elf | RIGHT | RIGHT | RIGHT |
02 | Delta | 10000001 | BRA | Goron | WRONG | RIGHT | RIGHT |
02 | Delta | 10000002 | BRA | Elf | RIGHT | WRONG | RIGHT |
02 | Delta | 10000003 | BRA | Acqua | WRONG | RIGHT | (null) |
```
I need to get:
```
ID | Central | ts | Country | Name | Column3 | Column4 | Column5 |
-------------------------------------------------------------------------------------------
01 | Delta | 10000003 | USA | Acqua | wrong | | |
02 | Delta | 10000003 | BRA | Acqua | WRONG | | "Wrong null" |
```
See that even when ts 10000001 or 10000002 have wrong values, they're not taken in.
Also notice that when there are nulls where some value should exist, I need to write something to show that this null shouldn't exist.
Can anyone please take a look?
I've managed to create a view to get the values from central Alfa, but I can't figure out a LEFT JOIN or a way to create these rules for writing the "wrong null" thing, or how to disregard the lower ts's.
Any help will be highly appreciated. | I've come to an answer with a LEFT JOIN, which covers every case I wanted.
Thanks a lot for every piece of advice, and sorry for not accepting any of the answers that came before mine...
Maybe I have not been fully clear, but my answer gives the exact response to my problem.
I won't change the query to fit the column names I used as an example before, as I'm afraid I could get it wrong.
Instead of creating lots of ANDs, I decided to get every difference column by column and join all of them in a FULL OUTER JOIN later.
My first query follows; it gets the values that differ from a given central.
```
SELECT Test_Configs.central, Test_Configs.imsi,
CASE Test_Configs.mapver WHEN '' THEN '-'
ELSE COALESCE(Test_Configs.mapver, '-')
END
FROM config_imsis_centrais AS Default_Configs -- Default values from the correct central
LEFT JOIN config_imsis_centrais AS Test_Configs -- Values from the centrals under test
ON Default_Configs.central = 'CENTRAL_USED_AS_EXAMPLE'
AND Default_Configs.ts = (SELECT MAX(ts) FROM config_imsis_centrais)
AND Default_Configs.imsi = Test_Configs.imsi
AND Default_Configs.ts = Test_Configs.ts
AND Test_Configs.central <> Default_Configs.central
WHERE ( -- Analysis:
COALESCE(Default_Configs.mapver, 'null') <> COALESCE(Test_Configs.mapver, 'null') AND
Test_Configs.central <> ''
)
```
My FULL OUTER JOIN is made by joining every potential table using central and "imsi" (which works like the ID in my example). It follows:
```
SELECT central, imsi, mapver, camel, nrrg
FROM
vw_erros_mgisp_mapver
FULL OUTER JOIN
vw_erros_mgisp_camel USING (central, imsi)
FULL OUTER JOIN
vw_erros_mgisp_nrrg USING (central, imsi)
ORDER BY central, imsi
```
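For readers studying this later, the core idea — keep only the latest ts, compare each central to the default one, and replace an unexpected NULL with a "Wrong null" marker — can be sketched against the simplified columns from the question. This is only an illustrative sketch: it uses in-memory SQLite (the real database is Postgres) and made-up sample rows, not the real config_imsis_centrais columns.

```python
import sqlite3

# Illustrative sketch only: in-memory SQLite with the question's simplified
# columns, not the real Postgres table (config_imsis_centrais, imsi, ...).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cfg (ID TEXT, Central TEXT, ts INTEGER, Country TEXT, Name TEXT,
                  Column3 TEXT, Column4 TEXT, Column5 TEXT);
INSERT INTO cfg VALUES
 ('01','Alfa' ,10000003,'USA','Elf'  ,'right','right','right'),
 ('01','Delta',10000002,'USA','Elf'  ,'right','wrong','right'),  -- old ts, ignored
 ('01','Delta',10000003,'USA','Acqua','wrong','right','right'),
 ('02','Alfa' ,10000003,'BRA','Elf'  ,'RIGHT','RIGHT','RIGHT'),
 ('02','Delta',10000003,'BRA','Acqua','WRONG','RIGHT',NULL);     -- NULL where a value should exist
""")

rows = conn.execute("""
SELECT t.ID, t.Central, t.ts, t.Country, t.Name,
       CASE WHEN t.Column3 IS NOT m.Column3
            THEN COALESCE(t.Column3, 'Wrong null') ELSE '' END,
       CASE WHEN t.Column4 IS NOT m.Column4
            THEN COALESCE(t.Column4, 'Wrong null') ELSE '' END,
       CASE WHEN t.Column5 IS NOT m.Column5
            THEN COALESCE(t.Column5, 'Wrong null') ELSE '' END
FROM cfg AS m                                  -- default central (Alfa)
JOIN cfg AS t                                  -- centrals under test
  ON m.ID = t.ID AND m.ts = t.ts
 AND m.Central = 'Alfa' AND t.Central <> 'Alfa'
WHERE t.ts = (SELECT MAX(ts) FROM cfg)         -- lower ts's are disregarded
  AND (t.Column3 IS NOT m.Column3
       OR t.Column4 IS NOT m.Column4
       OR t.Column5 IS NOT m.Column5)          -- IS NOT is SQLite's null-safe <>
ORDER BY t.ID
""").fetchall()

for row in rows:
    print(row)
# ('01', 'Delta', 10000003, 'USA', 'Acqua', 'wrong', '', '')
# ('02', 'Delta', 10000003, 'BRA', 'Acqua', 'WRONG', '', 'Wrong null')
```

In Postgres, `IS DISTINCT FROM` plays the role of SQLite's null-safe `IS NOT` used here.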
That's it. Thanks a lot to everybody, and sorry for not accepting the answers you worked hard on; I just think it would be nicer for someone with the same problem to study a better solution.
Cheers! | The way I would go about this would be with a self-join:
```
SELECT t.*
FROM theTable AS m -- values from the "master" central
INNER JOIN theTable AS t -- values from the central to test
ON m.Central = 'ALFA'
AND m.ts = (SELECT MAX(ts) FROM theTable)
AND m.ID = t.ID
AND m.ts = t.ts
AND t.Central <> m.Central
AND (
-- we assume that values in the "master" central cannot be null or blank
m.Column3 <> coalesce(t.Column3, '') OR
m.Column4 <> coalesce(t.Column4, '') OR
m.Column5 <> coalesce(t.Column5, '')
)
```
In a situation like this, you could also use CTEs, which some people find more readable:
```
WITH MaxTimestamp AS (
SELECT MAX(ts) AS value FROM theTable
),
MasterValues AS (
SELECT * FROM theTable WHERE Central = 'ALFA' AND ts = (SELECT value FROM MaxTimestamp)
),
TestValues AS (
SELECT * FROM theTable WHERE Central <> 'ALFA' AND ts = (SELECT value FROM MaxTimestamp)
)
SELECT t.*
FROM MasterValues m
INNER JOIN TestValues t
ON m.ID = t.ID
AND (
-- we assume that values in the "master" central cannot be null or blank
m.Column3 <> coalesce(t.Column3, '') OR
m.Column4 <> coalesce(t.Column4, '') OR
m.Column5 <> coalesce(t.Column5, '')
)
```
In either case, you could also write the whole thing as a function or anonymous block, which would allow you to specify the value of the master central as a parameter or variable, in case that is not a fixed value. | Postgres LEFT JOIN and data manipulation | [
"sql",
"postgresql"
] |
I wrote a program to generate tests composed of a combination of questions taken from a large pool of questions. There were a number of criteria for each test and the program saved them to database only if they satisfied these criteria.
My program was written to ensure as even a distribution of questions as possible, i.e., when generating combinations of questions, the algorithm prioritise questions from the pool that have been asked the least number of times in previous iterations.
I created one table, `tests`, to store the `test_id` for each test, and another, `test_questions`, to store `test_id`s and their corresponding `question_id`s using n rows per test (where n is the number of questions in each test).
Now that I have the tests stored in a database, I’d like to check that the overlap of questions between different pairs of tests is within certain bounds, and I thought I should be able to do this using SQL.
Using a self-join, I was able to use this query to count the questions common to Test 3 and Test 5:
```
-- Get the number of questions that are common to tests 3 and 5
SELECT count(tq1.question_id) AS Overlap
FROM test_questions AS tq1
JOIN test_questions AS tq2
ON tq1.question_id = tq2.question_id
WHERE tq1.test_id = 5
AND tq2.test_id = 3;
```
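As a sanity check, the two-test version can be exercised against a tiny, made-up dataset (SQLite is used here purely as a runnable stand-in for MySQL; the question ids are invented):

```python
import sqlite3

# Tiny made-up dataset: test 3 asks q1..q3, test 5 asks q2..q4,
# so the two tests share exactly two questions (q2 and q3).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_questions (test_id INTEGER, question_id TEXT);
INSERT INTO test_questions VALUES
 (3,'q1'),(3,'q2'),(3,'q3'),
 (5,'q2'),(5,'q3'),(5,'q4');
""")

(overlap,) = conn.execute("""
SELECT COUNT(tq1.question_id) AS Overlap
FROM test_questions AS tq1
JOIN test_questions AS tq2
  ON tq1.question_id = tq2.question_id
WHERE tq1.test_id = 5
  AND tq2.test_id = 3
""").fetchone()

print(overlap)  # 2
```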
I was able to generate each possible combination of test pairs from the first n (5) tests:
```
-- Get all combinations of pairs of tests from 1 to 5
SELECT t1.test_id AS Test1, t2.test_id AS Test2
FROM tests AS t1
JOIN tests AS t2
ON t2.test_id > t1.test_id
WHERE t1.test_id <= 5
AND t2.test_id <= 5;
```
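Run against a hypothetical `tests` table holding ids 1 through 7, the pair generator should return C(5,2) = 10 ordered pairs (again sketched in SQLite rather than MySQL):

```python
import sqlite3

# Hypothetical tests table with 7 tests; only the first 5 are paired up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tests (test_id INTEGER PRIMARY KEY);
INSERT INTO tests VALUES (1),(2),(3),(4),(5),(6),(7);
""")

pairs = conn.execute("""
SELECT t1.test_id AS Test1, t2.test_id AS Test2
FROM tests AS t1
JOIN tests AS t2 ON t2.test_id > t1.test_id
WHERE t1.test_id <= 5
  AND t2.test_id <= 5
ORDER BY t1.test_id, t2.test_id
""").fetchall()

print(len(pairs))  # 10 pairs: (1,2), (1,3), ..., (4,5)
print(pairs[0])    # (1, 2)
```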
What I’d like to do but so far have failed to do is to combine the above two queries to show each possible pair combination of the first 5 tests – along with the number of questions that are common to both tests.
```
-- This doesn't work
SELECT t1.test_id AS Test1, t2.test_id AS Test2, count(tq1.question_id) AS Overlap
FROM tests AS t1
JOIN tests AS t2
ON t2.test_id > t1.test_id
JOIN test_questions AS tq1
ON t1.test_id = tq1.test_id
JOIN test_questions AS tq2
ON t2.test_id = tq2.test_id
WHERE t1.test_id <= 5
AND t2.test_id <= 5
GROUP BY t1.test_id, t2.test_id;
```
I’ve created a simplified version (with randomised data) of the two tables at this [SQL Fiddle](http://sqlfiddle.com/#!2/d6f8e/5)
Note: I’m using MySQL as my DBMS but the SQL should be compatible with the ANSI standard.
**Edit:** The program I wrote to generate the tests actually generated more than the number of tests I needed and I only want to compare the first n tests. In the example, I added a `<= 5` WHERE condition to ignore the extra tests.
To clarify what I’m looking for as per Thorsten Kettner’s example data:
```
test 1: a, b and c
test 2: a, b and d
test 3: d, e and f
```
The results would be:
```
Test Test Overlap
Test1 Test2 2 (a and b in common)
Test1 Test3 0 (no questions in common)
Test2 Test3 1 (d is common to both)
``` | You basically just need to add a `GROUP BY` to your first query. I also added another condition so that the test ids are produced in order:
```
SELECT tq1.test_id as test_id1, tq2.test_id as test_id2, count(tq1.question_id) AS Overlap
FROM test_questions tq1 LEFT JOIN
test_questions tq2
ON tq1.question_id = tq2.question_id and
tq1.test_id < tq2.test_id
GROUP BY tq1.test_id, tq2.test_id;
```
This is standard SQL.
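One behaviour of this LEFT JOIN version worth noting: questions that match no later test produce groups whose second test id is NULL, and the zero-overlap pair never appears at all. A quick sketch with Thorsten Kettner's example data (SQLite standing in for MySQL) makes that visible:

```python
import sqlite3

# Thorsten Kettner's example data: test 1 = {a,b,c}, test 2 = {a,b,d}, test 3 = {d,e,f}.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_questions (test_id INTEGER, question_id TEXT);
INSERT INTO test_questions VALUES
 (1,'a'),(1,'b'),(1,'c'),
 (2,'a'),(2,'b'),(2,'d'),
 (3,'d'),(3,'e'),(3,'f');
""")

rows = conn.execute("""
SELECT tq1.test_id AS test_id1, tq2.test_id AS test_id2,
       COUNT(tq1.question_id) AS Overlap
FROM test_questions AS tq1
LEFT JOIN test_questions AS tq2
  ON tq1.question_id = tq2.question_id
 AND tq1.test_id < tq2.test_id
GROUP BY tq1.test_id, tq2.test_id
ORDER BY tq1.test_id, tq2.test_id
""").fetchall()

print(rows)
```

The real overlaps, (1, 2, 2) and (2, 3, 1), come out correctly, but the zero-overlap pair (1, 3) is missing and the NULL-partner groups only count unmatched questions — which is exactly what the CROSS JOIN approach for *all* pairs addresses.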
If you want to get *all* pairs of tests, even those that have no questions in common, here is another approach:
```
SELECT t1.test_id as test_id1, t2.test_id as test_id2, count(tq2.question_id) AS Overlap
FROM tests t1 CROSS JOIN
tests t2 LEFT JOIN
test_questions tq1
on t1.test_id = tq1.test_id LEFT JOIN
test_questions tq2
ON t2.test_id = tq2.test_id and tq1.question_id = tq2.question_id
GROUP BY t1.test_id, t2.test_id;
```
This assumes that you have a table with one row per test. If not, replace `tests` with `(select distinct test_id from test_questions)`. | I modified Gordon's answer, and this query provides a listing of test combinations along with their corresponding overlap (questions in common):
```
SELECT tq1.test_id as test_id1, tq2.test_id as test_id2, count(tq1.question_id) AS Overlap
FROM test_questions tq1
JOIN test_questions tq2
ON tq1.question_id = tq2.question_id
AND tq1.test_id < tq2.test_id
WHERE tq1.test_id <= 5
AND tq2.test_id <= 5
GROUP BY tq1.test_id, tq2.test_id;
``` | Query to check for commonality (Intersection) between each possible pair combination | [
"mysql",
"sql",
"combinations",
"set-intersection"
] |