Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
Which one is faster, an index or a view? Both are used for optimization, and both are implemented on a table's columns. Can anyone explain which one is faster, what the difference between them is, and in which scenarios we use a view or an index? | **VIEW**
* A view is a logical table. It is not a physical object and does not store data physically. A view just refers to data that is stored in base tables.
* A view is a logical entity. It is a SQL statement stored in the database in the system tablespace. Data for a view is built in a table created by the database engine in the TEMP tablespace.
**INDEX**
* Indexes are pointers that map to the physical address of data. So by using indexes, data retrieval becomes faster.
* An index is a performance-tuning method of allowing faster retrieval of records. An index creates an entry for each value that appears in the indexed columns.
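A minimal runnable sketch of the two objects, using Python's sqlite3 (the table and names are illustrative, not from the answer): the view stores only a query, while the index is a physical lookup structure on the column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(10.0,), (25.0,), (7.5,)])

# A view: a named SELECT statement; it holds no data of its own.
conn.execute("CREATE VIEW big_orders AS SELECT id, amount FROM orders WHERE amount > 9")

# An index: a physical structure that speeds up lookups on the indexed column.
conn.execute("CREATE INDEX idx_orders_amount ON orders (amount)")

# Querying the view reads through to the base table.
rows = conn.execute("SELECT amount FROM big_orders ORDER BY amount").fetchall()
print(rows)  # [(10.0,), (25.0,)]
```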
**ANALOGY**:
Suppose that in a shop you have multiple racks. Categorizing each rack based on the items it holds is like creating an index: you would know exactly where to look to find a particular item. This is indexing.
In the same shop, if you want to see several kinds of data, say the products, inventory, and sales data, as a consolidated report, that can be compared to a view.
Hope this analogy explains when you have to use a view and when you have to use an index! | They are different things from the perspective of SQL.
**VIEWS**
A view is nothing more than a SQL statement that is stored in the database with an associated name. A view is actually a composition of a table in the form of a predefined SQL query.
Views, which are kind of virtual tables, allow users to do the following:
* A view can contain all rows of a table or select rows from a table. A view can be created from one or many tables, depending on the SQL query used to create it.
* Structure data in a way that users or classes of users find natural or intuitive.
* Restrict access to the data such that a user can see and (sometimes) modify exactly what they need and no more.
* Summarize data from various tables which can be used to generate reports.
**INDEXES**
Indexes are special lookup tables that the database search engine can use to speed up data retrieval. Simply put, an index is a pointer to data in a table. An index in a database is very similar to an index in the back of a book.
For example, if you want to reference all pages in a book that discuss a certain topic, you first refer to the index, which lists all topics alphabetically, and then turn to the specific page numbers it lists.
An index helps speed up SELECT queries and WHERE clauses, but it slows down data modification with UPDATE and INSERT statements. Indexes can be created or dropped with no effect on the data. | What is difference between INDEX and VIEW in MySQL | [
"",
"mysql",
"sql",
"database",
"view",
"indexing",
""
] |
Is there a way to compare software versions (e.g. X.Y.Z > A.B.C) in Postgres? I'm searching for a function on string/varchar or a "version" type.
I found out that <http://pgxn.org/dist/semver/doc/semver.html>, but I'm looking for alternatives (not so easy to deploy..) | You can split the version into an array and then do [array comparison](http://www.postgresql.org/docs/9.1/static/functions-array.html).
```
select regexp_split_to_array(v1, '\.')::int[] v1,
regexp_split_to_array(v2, '\.')::int[] v2,
regexp_split_to_array(v1, '\.')::int[] > regexp_split_to_array(v2, '\.')::int[] cmp
from versions;
```
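The array-comparison semantics can be sanity-checked outside the database: Python compares tuples of ints element by element, left to right, much like Postgres compares `int[]` values (`version_key` is a hypothetical helper, not from the answer).

```python
def version_key(v: str) -> tuple:
    """Split 'X.Y.Z' into a tuple of ints for element-wise comparison."""
    return tuple(int(part) for part in v.split("."))

print(version_key("1.10.2") > version_key("1.9.9"))   # True
print(version_key("2.0.0") > version_key("10.0.0"))   # False (numeric, not lexicographic)
```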
[demo](http://sqlfiddle.com/#!15/697d0/3) | Use [**`string_to_array()`**](https://www.postgresql.org/docs/current/functions-array.html#ARRAY-FUNCTIONS-TABLE). No need for expensive regular expressions.
```
SELECT string_to_array(v1, '.')::int[] AS v1
, string_to_array(v2, '.')::int[] AS v2
,(string_to_array(v1, '.')::int[] > string_to_array(v2, '.')::int[]) AS cmp
FROM versions;
```
[fiddle](https://dbfiddle.uk/aTjI5GGi)
Old [sqlfiddle](http://sqlfiddle.com/#!17/44b7c)
Assuming all parts of the version number are valid integer numbers, of course. | Compare software versions in Postgres | [
"",
"sql",
"postgresql",
"version",
"version-sort",
""
] |
I have a travel log, I am trying to display a list of cities visited during a day
```
id day cityvisited user
1 1 2 4
1 1 6 3
1 1 4 10
1 1 4 6
SELECT cityvisited FROM cv WHERE day = 1
```
returns the data 2, 6, 4, 4
Can I get it so it just returns 2, 6, 4 | ```
SELECT DISTINCT cityvisited FROM cv WHERE day = 1
```
should do it. | Which SQL are you using?
If you are using SQL Server, then you can use
```
SELECT distinct cityvisited FROM cv WHERE day = 1
```
It works for many other DBs as well.
Hope this helps you. | Basic SQL retrieve list of items without counting it twice | [
"",
"sql",
""
] |
Basically I have 2 tables. 1 table has a list of two (important) columns. The general idea is that items in column 2 cannot be sold in combination with items in column 1. It is essentially a set of rules to determine correct billing combinations. The table looks similar to this:
```
col 1 ; col 2
----- -----
a ---- b
a ---- h
a ---- d
b ---- f
b ---- z
c ---- z
c ---- d
c ---- b
```
Items in column 1 *can't* be sold with items in column 2.
the second table is essentially an "orders" table. There are transaction numbers, and line numbers for each transaction. on each transaction line there is the item sold. There are commonly *many* items sold per transaction. The table is set up similarly to this:
```
trans # ; trans line ; item
------- ----------- -----
12345 ---- 1 ---- a
12345 ---- 2 ---- b
12345 ---- 3 ---- a
45678 ---- 1 ---- z
45678 ---- 2 ---- f
```
What I am trying to do is to take all of the transaction data, and reconcile it with the data on the list of inappropriate item combinations. As you can see, transaction 12345 breaks the first rule because 'a' is being sold with 'b'. This is the general idea. | ```
SELECT * FROM orders ord1, orders ord2, conditions con
WHERE ord1.trans = ord2.trans
AND ord1.item = con.Product1 AND ord2.item = con.Product2
``` | I think this is one kind of solution.
Name the first table INCOMPAT and the second table ORDERS.
The query below will give you the result; however, in your actual database you may need some changes:
```
select o1.item + '@' + o2.item
from orders o1 full join orders o2 on o1.orderid = o2.orderid
where exists (select * from incompat i where o1.item + '@' + o2.item = i.col1 + '@' + i.col2)
``` | SQL query based on multiple criteria | [
"",
"mysql",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I got 3 tables: **projects**, **employee** and **project\_employee**
**employee**
* ID (int, PK)
* Name
**projects**
* project\_id (int, PK)
* project\_name
**project\_employee**
* project\_id (int, PK)
* employee\_id (int, PK)
What I'm trying to do is write a query that gets the ID and Name of all employees that are not in a project, for example project number 9.
So I tried:
```
SELECT ID, Name
FROM [employee], [project_employee]
WHERE [employee].ID != [project_employee].emp_id AND [project_employee].project_id = 9;
```
but I always get an empty result; something must be wrong with my logic. | You can do it using `NOT EXISTS`:
```
SELECT u.ID, u.Name
FROM [User] u
WHERE NOT EXISTS ( SELECT *
FROM [project_employee] pe
WHERE pe.project_id = 9
AND pe.employee_id = u.ID);
``` | Instead, try something like this; it should get you all users that are not part of `project_id = 9`:
```
SELECT u.ID, u.Name
FROM [User] u
WHERE u.ID NOT IN (SELECT pe.employee_id
FROM [project_employee] pe
WHERE pe.project_id = 9);
``` | SQL Server: select query from multiple tables | [
"",
"sql",
"sql-server",
""
] |
I have a database with properties and I need to delete all the properties except for those "owned" by either one of 3 "Inscription Agents". Each property can have up to 2 "owners": `agent_inscripteur_1` and `agent_inscripteur_2` on the database.
This is my code:
```
DELETE FROM inscriptions
WHERE (agent_inscripteur_1 != 100520 OR agent_inscripteur_2 != 100520)
AND (agent_inscripteur_1 != 97927 OR agent_inscripteur_2 != 97927)
AND (agent_inscripteur_1 != 99237 OR agent_inscripteur_2 != 99237)
```
I think what is happening in my case is that the first part of the code before the `AND` is executed before the rest, so all properties except for those listed by the first agent are being deleted (by the time it gets to the second and third agent, the properties are all gone).
Can someone please point me in the right direction?
Thanks! | What if you simply change your condition like below, using `NOT IN` instead:
```
DELETE FROM inscriptions
WHERE agent_inscripteur_1 NOT IN (100520,97927,99237)
OR agent_inscripteur_2 NOT IN (100520,97927,99237)
``` | Try this out:
```
DELETE FROM inscriptions
WHERE (
(agent_inscripteur_1 != 100520 AND agent_inscripteur_2 != 100520)
OR (agent_inscripteur_1 != 97927 AND agent_inscripteur_2 != 97927)
OR (agent_inscripteur_1 != 99237 AND agent_inscripteur_2 != 99237)
)
``` | MySQL - Delete with multiple conditions | [
"",
"mysql",
"sql",
"sql-delete",
""
] |
I have a `keyword` table containing a few million entries. It is linked by a many-to-many relationship to an `element` table. I would like to get all the element ids matching a keyword. I tried this query and there is no problem; it returns rows in a few milliseconds.
```
SELECT element_id FROM element_keyword
JOIN keyword ON keyword.id = element_keyword.keyword_id
WHERE keyword.value like 'LOREM';
```
Execution plan
```
"Nested Loop (cost=278.50..53665.56 rows=65 width=4)"
" -> Index Scan using keyword_value_index on keyword (cost=0.43..8.45 rows=1 width=4)"
" Index Cond: ((value)::text = 'LOREM'::text)"
" Filter: ((value)::text ~~ 'LOREM'::text)"
" -> Bitmap Heap Scan on element_keyword (cost=278.07..53510.66 rows=14645 width=8)"
" Recheck Cond: (keyword_id = keyword.id)"
" -> Bitmap Index Scan on element_keyword_keyword_index (cost=0.00..274.41 rows=14645 width=0)"
" Index Cond: (keyword_id = keyword.id)"
```
However, when I put a wildcard at the end of my search string, the request becomes really slow (~60000 ms).
```
SELECT element_id FROM element_keyword
JOIN keyword ON keyword.id = element_keyword.keyword_id
WHERE keyword.value like 'LOREM%';
```
Execution plan:
```
"Hash Join (cost=12.20..3733738.08 rows=19502 width=4)"
" Hash Cond: (element_keyword.keyword_id = keyword.id)"
" -> Seq Scan on element_keyword (cost=0.00..3002628.08 rows=194907408 width=8)"
" -> Hash (cost=8.45..8.45 rows=300 width=4)"
" -> Index Scan using keyword_value_index on keyword (cost=0.43..8.45 rows=300 width=4)"
" Index Cond: (((value)::text ~>=~ 'LOREM'::text) AND ((value)::text ~<~ 'LOREN'::text))"
" Filter: ((value)::text ~~ 'LOREM%'::text)"
```
Even when the wildcard does not give more results, the query is slow.
I created an index on keyword(value) and element\_keyword(keyword\_id)
```
CREATE INDEX "keyword_value_index" ON "keyword" (value text_pattern_ops);
CREATE INDEX "element_keyword_keyword_index" ON "element_keyword" (keyword_id);
```
What's really happening behind the scenes? How could I solve it?
**UPDATE**
I'm not sure if this could help, but here are a few more tests:
```
select id from keyword where value like 'LOREM%';
-> 6 ids retrieved in 17ms
select * from element_keyword where keyword_id in (1961746,1961710,2724258,2121442,1633163,1026116);
-> 40 rows retrieved in 17ms
select * from element_keyword where keyword_id in (select id from keyword where value like 'LOREM%');
-> 40 rows in 63221 ms
Basically what happens is that the second query is doing a sequential scan (for reasons I don't understand). This sequential scan is what's taking so long.
Disabling the sequential scan forces the query to use the index. So if I execute this line before my query, it becomes really quick.
```
set enable_seqscan to off;
``` | The reason is this:
```
WHERE keyword.value like 'LOREM'
```
is a pointless use case for the `LIKE` operator. Without wildcards (or escaped characters) this is effectively the same as:
```
WHERE keyword.value = 'LOREM'
```
.. which can use an index for equality - thus a **plain B-Tree index**.
If you are satisfied with matching leading characters (left anchored search pattern), a B-Tree index with the operator class `text_pattern_ops` or with `COLLATE "C"` would serve well. See:
* [Full-text search in Postgres or CouchDB?](https://stackoverflow.com/questions/5285787/full-text-search-in-couchdb/8202007#8202007)
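The range rewrite visible in the second execution plan (`value >= 'LOREM' AND value < 'LOREN'`) can be sketched in Python: the upper bound comes from incrementing the last character of the prefix. This is a simplification that ignores collation and maximum-code-point edge cases; `prefix_range` is a hypothetical helper, not a Postgres function.

```python
def prefix_range(prefix: str) -> tuple:
    """Return (low, high) so that low <= s < high matches s LIKE prefix + '%'.
    Simplified: assumes the last character is not the maximum code point."""
    high = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, high

low, high = prefix_range("LOREM")          # ('LOREM', 'LOREN')
words = ["LORE", "LOREM", "LOREMIPSUM", "LOREN", "XYZ"]
matched = [w for w in words if low <= w < high]
print(matched)  # ['LOREM', 'LOREMIPSUM']
```

The range predicate selects exactly the strings starting with the prefix, which is why a plain B-Tree index (sorted order) can serve a left-anchored `LIKE`.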
For arbitrary pattern matching use the **pg\_trgm** module:
* [PostgreSQL LIKE query performance variations](https://stackoverflow.com/questions/1566717/postgresql-like-query-performance-variations/13452528#13452528)
[Full text search](https://www.postgresql.org/docs/current/textsearch.html) may or may not be what you want. It is based on dictionaries and stemming, not so much on text patterns like your example suggests.
* [Overview over pattern matching in Postgres.](https://dba.stackexchange.com/a/10696/3684)
Also, a **multicolumn index** on `keyword (value,id)` (order of columns is relevant) could enable an index-only scan:
* [SQL query for index/primary key ordinal](https://stackoverflow.com/questions/11623928/sql-query-for-index-primary-key-ordinal/11624409#11624409)
* [The Postgres Wiki on index-only scan](https://wiki.postgresql.org/wiki/Index-only_scans) | Slow join query with wildcard | [
"",
"sql",
"postgresql",
"join",
""
] |
So I've always been told that it's absolutely necessary to have a primary key specified with a table. I've been doing some work and ran into a situation where a primary key's unique constraint would stop data I need from being added.
If there's an example situation where a table was structured with fields:
```
Age, First Name, Last Name, Country, Race, Gender
```
Where, if a TON of data was being entered, all these fields don't necessarily uniquely identify a row, and I don't need an index across all columns anyway. Would the only solution here be to make an auto-incrementing ID field? Would it be okay to NOT have a primary key at all? | It's not always necessary to have a primary key; most DBMSs will allow you to construct a table without one (a).
But that doesn't necessarily mean it's a good idea. Have a think about the situation in which you want to use that data. Now think about if you have two twenty-year-old Australian men named Bob Smith, both from Perth.
Without a unique constraint, you can *put* both rows into the table, but here's the rub: how would you figure out which one you want to use in the future? (b)
Now, if you just want to store the fact that there are one or more people meeting those criteria, you only need to store *one* row. But then, you'd probably have a composite primary key consisting of *all* columns.
If you have other information you want to store about the person (e.g., highest score in the "2048" game on their iPhone), then you *don't* want a primary key across the entire row, just across the columns you mention.
Unfortunately, that means there will undoubtedly come a time when both of those Bob Smiths try to write their high score to the database, only to find one of them loses their information.
If you want them both in the table and still want to allow for the possibility outlined above (two people with identical attributes in the columns you mention) then the best bet is to introduce an artificial key such as an auto-incrementing column, for the primary key. That will allow you to uniquely identify a row regardless of how identical the other columns are.
The other advantage of an artificial key is that, being arbitrary, it never needs to change for the thing being identified. In your example, if you use age, names, nationality or location (c) in your primary key, these are *all* subject to change, meaning that you will need to adjust any foreign keys referencing those rows. If the tables referencing these rows uses the unchanging artificial key, that will never be a problem.
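The two-Bob-Smiths scenario can be sketched with Python's sqlite3 (the schema and data are illustrative): the auto-incrementing artificial key lets otherwise-identical rows coexist and remain individually addressable.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,  -- artificial key
        name    TEXT,
        age     INTEGER,
        country TEXT
    )
""")
# Two people with identical natural attributes: both rows are allowed...
conn.executemany(
    "INSERT INTO person (name, age, country) VALUES (?, ?, ?)",
    [("Bob Smith", 20, "Australia"), ("Bob Smith", 20, "Australia")],
)
# ...and each can still be addressed individually by its id.
ids = [row[0] for row in conn.execute("SELECT id FROM person ORDER BY id")]
print(ids)  # [1, 2]
```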
---
(a) There are situations where a primary key doesn't really give you any performance benefit such as when the table is particularly small (such as mapping integers 1 through 12 to month name).
In other words, things where a full table scan isn't really any slower than indexing. But these situations are incredibly rare and I'd probably still use a key because it's more consistent (especially since the use of a key tends not to make a difference to the performance *either* way).
---
(b) Keep in mind that we're talking in terms of practice here rather than theory. While in practice you may create a table with no primary key, relational theory states that each row must be uniquely identifiable, otherwise relations are impossible to maintain.
C.J. Date who, along with Codd, is one of the progenitors of relational database theory, states the rules of relational tables in "An introduction to Database Systems", one of which is:
> The records have a unique identifier field or field combination called the primary key.
So, in terms of relational *theory,* each table must have a primary key, even though it's not always required in practice.
---
(c) *Particularly* age, which is guaranteed to change annually until you're dead, so perhaps date of birth may be a better choice for that column. | > Would the only solution here be to make an auto-incrementing ID field?
That is a valid way, but it is not the only one: you could use other ways to generate unique keys, such as using [GUID](http://en.wikipedia.org/wiki/Globally_unique_identifier)s. Keys like that are called *surrogate primary keys*, because they are not related to the "payload" of the data row.
> Would it be okay to NOT have a primary at all?
Since you mentioned that the actual data in rows may not be unique, you wouldn't be able to use your table effectively without a primary key. For example, you would not be able to update or delete a specific row, which may be required, for example, when a user's name changes. | Table without a primary key | [
"",
"mysql",
"sql",
""
] |
I have two tables A & B
```
A :
id
data
B :
key
value
A_id
```
I have a problem with my sql query (its hard to explain it, so i create an [sqlfiddle](http://www.sqlfiddle.com/#!2/ae2a2/1/0))
```
SELECT A.id
FROM A
INNER JOIN B b1 ON b1.key = '20' AND b1.A_id = A.id
INNER JOIN B b2 ON b2.key = '18' AND b2.A_id = A.id
WHERE
b1.value = '1900' AND
b2.value >= '1900'
```
in this example, I'm supposed to get (A.id = 12 & A.id = 13), but I get nothing
```
CREATE TABLE A
(`id` int, `data` int)
;
INSERT INTO A
(`id`, `data`)
VALUES
(11, 11),
(12, 11),
(13, 12)
;
CREATE TABLE B
(`key` int, `value` int, `A_id` int)
;
INSERT INTO B
(`key`, `value`, `A_id`)
VALUES
(20, 1900, 12),
(2, 19, 11),
(11, 19, 11),
(9, 19, 11),
(18, 1950, 13),
(19, 1950, 12)
;
```
Any idea?
thanks | First, if you code for more than 10 minutes, you will learn to despise the phrase "don't work"... "don't work" is the phrase that doesn't work.
/rant
You are trying to join tables in an effort to filter. Instead, filter accordingly. Check this out:
```
SELECT A.id
FROM A
INNER JOIN B b1 ON b1.A_id = A.id
WHERE
(b1.key = '20' AND b1.value = '1900')
OR
(b1.key = '18' AND b1.value >= '1900')
```
That asks for what you want and joins only when necessary. | `INNER JOIN` means that it will only return a result if a result exists in both tables. Since you inner join the same table twice, each time with unique ids, you will never get a result from those three joins (table A inner join B inner join B).
I'd suggest something different:
```
SELECT A.id
FROM A
INNER JOIN B on A.id = B.A_id
WHERE
(B.id = '20' AND B.value = '1980') OR (B.id = '18' AND B.value >= '1990')
;
``` | Mysql multiple inner join don't work | [
"",
"mysql",
"sql",
"inner-join",
""
] |
I don't know if I am overcomplicating things or what, but my SQL code gives errors. I am trying to find all the `pid` values that match from the table `parts`.
Below is the necessary info.
Table: `Parts`:
```
pid, Color
```
Table: `Supplier`
```
sid
```
Table: `Catalog`
```
pid, sid
```
I am trying to retrieve the `pid`s which have color Red and a supplier, i.e. which exist in `sid` of table `Supplier`.
Here is my so far attempt:
```
SELECT pid
FROM catalog
WHERE sid = (SELECT sid
FROM 'suppliers'
WHERE sid=)(SELECT sid
FROM `catalog`
WHERE pid=)(SELECT pid
FROM `parts`
WHERE color = 'Red')
```
My 2nd attempt gives the error "subquery returns more than 1 row":
```
SELECT pid FROM catalog c where c.pid=(SELECT pid from parts p where p.color='Red') AND c.sid=(SELECT sid FROM suppliers s)
``` | Why don't you just use two simple `INNER JOIN`'s instead of those nested subqueries?
```
SELECT c.pid
FROM catalog c
INNER JOIN suppliers s ON c.sid = s.sid
INNER JOIN parts p ON c.pid = p.pid
WHERE p.color = 'Red'
``` | I think you might be misplacing the open parenthesis. But I'd recommend using a JOIN query instead.
```
SELECT pid
FROM Catalog C
INNER JOIN suppliers S ON C.sid = S.sid
INNER JOIN parts P ON C.pid = P.pid
WHERE P.color = 'Red'
AND S.sid = 'your_requirement'
``` | Nested SQL statements Getting error. Three tables select | [
"",
"sql",
""
] |
I have 2 tables which are linked via INNER JOIN. These are my tables:
```
T1
ID name Account
1 name1 123
1 name1 143
T2
AccountNum ID
222 1
111 1
```
I would like to add the total number of ByID for a specific name, my output should be
```
ID name Account NumOfByAcc
1 name1 123 2
```
In the above table, what's happening is that ID 1 has two account numbers (T2). How would I count the total number of account numbers for a specific ID using an INNER JOIN between two tables? This is my query, but I am not sure how I would complete my second select statement:
```
SELECT Table1.ID, Table1.NAme, Table2.Account AS Expr1,
SELECT count() AS NumOfByAcc//2nd select statement
FROM Table2 INNER JOIN
Table1 ON Table2.ID= Table1.ID
> I am not sure how I would complete my second select statement:
Like this:
```
SELECT
t1.ID
, t1.NAme
, t2.Account AS Expr1
, (SELECT count(*) FROM Table1 tt WHERE tt.ID=t2.ID) AS NumOfByAcc
FROM Table2 t2
INNER JOIN Table1 t1 ON t2.ID= t1.ID
```
Note the use of aliases `t1`, `t2`, and `tt`. When the same table needs to participate in a single query more than once, aliases provide a way to reference that table in your expressions to filter and/or join the records as needed. | ```
select *, (select count(*) from T2 where T2.Id = T1.Id) as NumOfByAcc
from T
``` | SQL count number of results for a specific record | [
"",
"sql",
"inner-join",
""
] |
I want to modify my `Where` clause in my `SQL Server Query` below so that it would select ALL records from the previous month.
Example: if I run the query on 20 Feb, it should extract data for 1 Jan to 31 Jan
I have tried using the following but as you may notice, it picks up the records a month back from the day of execution.
```
WHERE date_col >= cast(dateadd(Month, -1, getdate()) as date) and
date_col <= cast(getdate() as date)
``` | To get the first and last day of the previous month:
```
SELECT DATEADD(month, DATEDIFF(month, 0, DATEADD(MONTH,-1,GETDATE())), 0) AS First_Day_Of_Last_Month
,DATEADD(s,-1,DATEADD(MONTH, DATEDIFF(MONTH,0,GETDATE()),0)) AS Last_day_Of_Last_Month
```
## Result:
```
+-------------------------+-------------------------+
| First_Day_Of_Last_Month | Last_day_Of_Last_Month  |
+-------------------------+-------------------------+
| 2014-05-01 00:00:00.000 | 2014-05-31 23:59:59.000 |
+-------------------------+-------------------------+
```
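For reference, the same previous-month arithmetic sketched in Python's standard library (an illustrative sketch, not part of the T-SQL answer; `previous_month_bounds` is a hypothetical helper):

```python
from datetime import date, timedelta

def previous_month_bounds(today: date) -> tuple:
    """Return (first_day, last_day) of the month before `today`."""
    first_of_this_month = today.replace(day=1)
    last_of_prev_month = first_of_this_month - timedelta(days=1)
    first_of_prev_month = last_of_prev_month.replace(day=1)
    return first_of_prev_month, last_of_prev_month

# Running on 20 Feb 2014 yields 1 Jan to 31 Jan, as the question requires.
print(previous_month_bounds(date(2014, 2, 20)))
# (datetime.date(2014, 1, 1), datetime.date(2014, 1, 31))
```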
**Your Query**
```
WHERE date_col >= DATEADD(month, DATEDIFF(month, 0, DATEADD(MONTH,-1,GETDATE())), 0)
AND date_col <= DATEADD(s,-1,DATEADD(MONTH, DATEDIFF(MONTH,0,GETDATE()),0))
``` | I'm not claiming this is the best way, but it should work:
```
SELECT * from YourTable
WHERE DATEPART(Month, date_col) = (DATEPART(Month,GETDATE()) - 1)
AND DATEPART(Year, date_col) = DATEPART(Year,DATEADD(Month, -1, GETDATE()))
``` | How to use Date range in the WHERE clause | [
"",
"sql",
"sql-server",
""
] |
Let's assume that I have millions of rows in SQL, using PostgreSQL to query.
Each row contains the string `~~~&AA=~~~` or not.
And out of all rows, I need to get the rows that CONTAIN `AA=`.
Then what would be the best way to query this?
I tried the following but it is extremely slow.
```
SELECT my_column
FROM table
WHERE my_column
LIKE '%AA=%'
```
What should I use? | You need to figure out a way to make an index or use something else like a search engine.
First have a look at why [LIKE can be slow in postgresql and how to make it faster at use-the-index-luke](http://use-the-index-luke.com/sql/where-clause/searching-for-ranges/like-performance-tuning). Basically it boils down to either using [special index functions](http://www.postgresql.org/docs/current/static/indexes-opclass.html) or considering [Postgresql Fulltext Search Support](http://www.postgresql.org/docs/current/static/textsearch.html).
Also, it wasn't clear from your question whether the rows actually contain exactly `~~~&AA=~~~`; if so, why not just `WHERE my_column = '~~~&AA=~~~'`? In that case you can easily create a partial index for `~~~&AA=~~~`, as Postgresql supports partial indexes. | When dealing with `LIKE` expressions, Postgres can only use a `btree` index (the default index type) for characters before the first wildcard. So for something like `my_col LIKE 'ABC%XYZ'`, it can search the index for strings starting with `'ABC'`. When the wildcard is the first character, it can't use an index at all.
As [Adam Gent](https://stackoverflow.com/questions/24215823/postgresql-query-matching-string-tags/24215997#24215997) pointed out, if you want to look for arbitrary substrings, then you'll need additional data structures to support full-text search, which is far from trivial.
But if you're always looking for `'AA='`, and if you're doing it often enough, you can create an index specifically for this query, i.e.
```
CREATE INDEX ON my_table ((my_column LIKE '%AA=%'))
``` | Query matching string tags | [
"",
"sql",
"postgresql",
""
] |
I have two tables, an `OrderHeader` and an `OrderDetail` table. What I need to do is check the `StatusFK` in the `Detail` table for each record that relates to the `OrderHeader` table.
So if all related records in the `Detail` table have `StatusFK = 2`, then I want this to show in the query, as I will run an update on the `OrderHeader` record to change its `Status` to received.
But say there are 6 records in the detail table that relate to the `OrderHeader` table, but only 5 out of 6 have `StatusFK = 2`; then I do not want this to show, as it is not ready to be classed as fully received.
I hope somebody can make sense of what I'm saying and hopefully help me achieve this! | If I understand you correctly, you only want OrderHeaders where the count of order detail records equals the count of the OrderDetails that have the StatusFK of 2.
You can do that by using a SUM/IIF and comparing it to the detail record count in the HAVING clause:
```
SELECT OrderHeader.OrderID, OrderHeader.Name
FROM OrderHeader
INNER JOIN OrderDetail
ON OrderHeader.OrderId = OrderDetail.OrderId
GROUP BY OrderHeader.OrderID, OrderHeader.Name
HAVING
Count(OrderDetail.OrderId) = SUM(IIF(OrderDetail.StatusFK = 2, 1, 0))
``` | I have just realized that my above query is read-only as it contains totals; I need to be able to link it into an updateable query.
Is there an easy way to get around this?
thanks | Only show order when all the order detail are received | [
"",
"sql",
"ms-access",
""
] |
I have a table `materials` which contains (I'm using web2py):
```
materials_db.define_table('materials',
Field('name','string'),
Field('reference','string'),
Field('manufacturer','string'),
Field('date_added','date'),
Field('density','double'), # Density - kg/mΒ³
Field('emissivity','double'),
Field('specific_heat','double'), # Specific Heat - kJ/kg.K
Field('thermal_conductivity','double') # Thermal conductivity - W/m.K
)
```
I now would like to create a table `constructions`. Each construction is an assembly of any number of materials in an ordered way, e.g. something like this:
```
+------------+-------+
| Material | Layer |
+------------+-------+
| Concrete | 1 |
| Airgap | 2 |
| Insulation | 3 |
| Plaster | 4 |
+------------+-------+
```
`Layer` should enable changing the location of a material in the construction. Constructions will have new properties that will be calculated from the properties of the materials used and the location within the construction.
I don't really have a clue on how to go about it. The value for `layer` within a construction must be unique; however, it obviously must not be unique across constructions, i.e. every construction can have a layer with the value 1.
Do I need to create a new table for each construction and then reference all these table in my table `constructions`? That's the only vague idea I have at the moment, however that doesn't seem quite right... Is there a good way of doing this? | I'm not familiar with web2py, but purely from the database perspective, your model should probably look like this:

* Each construction has one row in CONSTRUCTION table.
* Each material one row in MATERIAL table.
* Each line in your BOM1 is one row in the CONSTRUCTION\_MATERIAL table and is identified through the [composite](https://stackoverflow.com/a/24091830/533120) key {CONSTRUCTION\_ID, LAYER}.
In other words, this is a many-to-many relationship between constructions and materials, and the CONSTRUCTION\_MATERIAL table acts as a "junction" or "link" table2. You can add fields such as "quantity" to the junction table, if necessary.
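A runnable sketch of that junction-table model, using Python's sqlite3: the composite primary key {CONSTRUCTION\_ID, LAYER} makes the layer number unique per construction but reusable across constructions (table names follow the diagram; the data is illustrative).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE construction (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE material     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE construction_material (
        construction_id INTEGER REFERENCES construction(id),
        material_id     INTEGER REFERENCES material(id),
        layer           INTEGER,
        PRIMARY KEY (construction_id, layer)  -- layer unique *per construction*
    );
    INSERT INTO construction VALUES (1, 'Wall A'), (2, 'Wall B');
    INSERT INTO material     VALUES (1, 'Concrete'), (2, 'Airgap');
    -- Layer 1 can repeat across different constructions...
    INSERT INTO construction_material VALUES (1, 1, 1), (2, 1, 1), (1, 2, 2);
""")
# ...but not within one construction: a duplicate layer violates the key.
duplicate_layer_rejected = False
try:
    conn.execute("INSERT INTO construction_material VALUES (1, 2, 1)")
except sqlite3.IntegrityError:
    duplicate_layer_rejected = True
print(duplicate_layer_rejected)  # True
```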
---
*1 Bill of materials.*
*2 Albeit slightly "unusual": the MATERIAL\_ID is not part of a key, to allow the same material in different layers of the same construction. In contrast, if the same material can appear only once per construction, just make another composite key: {CONSTRUCTION\_ID, MATERIAL\_ID}.* | You have two nouns: material and construction. Any material item may go into multiple constructions and any construction may consist of multiple materials. This is a classic many-to-many relationship. The nouns become entities which are kept in tables. A cross table defines the relationship:
```
create table MatConst(
MatID int not null, -- Foreign key to Material table
ConstID int not null, -- Foreign key to Construction table
Qty number not null, -- The number, volume or weight of Mat in Const.
primary key( MatID, ConstID )
);
```
The two foreign keys become the primary key for the relationship table so for each thing being constructed, there can be only one entry for concrete, one for plaster and so on. The Qty field allows for amounts like 57 (pounds of concrete) or 150 (square feet of plaster). You may, of course, add other columns that describe the relationship as needed. | sqlite database architecture - referencing multiple rows | [
"",
"sql",
"database-design",
"relational-database",
""
] |
In SQL, is it possible to place a WITH inside a WITH?
Take the below query for example,
```
WITH Temp ([Description], [Amount], [OverdueBy])
AS
(select Description, SUM(Amount) as Amount, (DATEDIFF(day,DueDate,GETDATE())) as OverdueBy from brvAPAllInvoices
Where PaidDate is null and APCo = 1 and Amount > 0
Group By Description, DueDate, APRef
)
select * from Temp
```
I want to create a "virtual" temporary table based off the above query. Is it possible to use another WITH to contain it in?
Something along the lines of this:
```
WITH Temp2 ([Description], [Amount], [OverdueBy])
AS
(
WITH Temp ([Description], [Amount], [OverdueBy])
AS
(select Description, SUM(Amount) as Amount, (DATEDIFF(day,DueDate,GETDATE())) as OverdueBy from brvAPAllInvoices
Where PaidDate is null and APCo = 1 and Amount > 0
Group By Description, DueDate, APRef
)
select * from Temp)
select * from Temp2
``` | No, you can't define a CTE within a CTE, however you can define multiple CTE's and reference other CTE's in a single statement.
```
; with a as (
select * from some_table
),
b as (
select *
from another_table t
inner join a ON (t.key = a.key)
)
select *
from b
``` | Depending on your dbms, you can have multiple WITH statements, nested or not. (Illustrated with PostgreSQL.) [SQL Server doesn't allow nesting common table expressions](http://msdn.microsoft.com/en-us/library/ms175972.aspx). (Search for *CTE\_query\_definition*.)
**Nested**
```
with today as (
with yesterday as (select current_date - interval '1' day as yesterday)
select yesterday + interval '1' day as today from yesterday
)
select cast(today as date) from today
```
```
today
--
2014-06-11
```
When you nest common table expressions, the nested CTE isn't visible outside its enclosing CTE.
```
with today as (
with yesterday as (select current_date - interval '1' day as yesterday)
select yesterday + interval '1' day as today from yesterday
)
select * from yesterday
```
```
ERROR: relation "yesterday" does not exist
```
**Unnested**
```
with yesterday as (
select current_date - interval '1' day as yesterday
),
today as (
select yesterday + interval '1' day as today from yesterday
)
select cast(yesterday as date) as dates from yesterday
union all
select cast(today as date) from today
```
```
dates
--
2014-06-10
2014-06-11
```
When you use successive, unnested CTEs, the earlier ones are visible to the later ones, but not vice versa.
```
with today as (
select yesterday + interval '1' day as today from yesterday
),
yesterday as (
select current_date - interval '1' day as yesterday
)
select yesterday from yesterday
union all
select today from today
```
```
ERROR: relation "yesterday" does not exist
``` | Is it possible in SQL to use WITH inside a WITH | [
"",
"sql",
"view",
""
] |
I have a contact table that stores the names of contact persons. Our research team copies the names from websites and pastes them into the application, and in the process some special characters ended up stored in the database. The examples below show up as "?" (ASCII code 63) when extracted to a text file. Examples of contact last names are listed as follows.
EX: 1) Shefο¬eld
2) Grifο¬n-Smith
3) LhoΡst
Is there a way to query the list of all rows in the "Last\_name" column of my contact table that contain special characters rendered as ASCII code 63, so that I can identify them and send them to the researchers to correct the names?
Thanks in advance! | If you want to find all entries which contain non ASCII Characters you can do the following:
```
select * from TheTable where Last_name != Cast(Last_name AS VARCHAR(1000))
``` | I would have preferred to leave a comment, but my reputation wasn't enough to comment on Narti's answer. Although MGM marked Narti's answer as correct, he/she commented that it does not exactly answer the question; well, it answered my exact same question.
I modified my script that didn't replace ascii 63 and I was able to replace it by casting and then replacing.
**From this: (This doesn't work)**
```
update TheTable set last_name = replace(last_name, char(63),'');
```
**To this (when you cast, the character becomes an actual '?', which you can then replace and remove):**
```
update TheTable set last_name = replace(Cast(last_name AS VARCHAR(1000)), '?','');
```
Just wanted to add this to this thread.
Thanks, | ASCII (?) Special characters issue | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-server-2005",
""
] |
I'm trying to eager load a model where I try to eager load a single record.
Let's say:
```
Customer.includes(:sales).where('sales.created_at' => 'customer.last_sale_at')
```
but instead of working SQL WHERE:
```
... WHERE "sales"."created_at" = "customer"."last_sale_at"
count
-------
1990
(1 row)
```
rails produces invalid:
```
... WHERE "sales"."created_at" = 'customer.last_sale_at'
ERROR: invalid input syntax for type timestamp: "last_sale_at"
LINE 1: ...d" = "customers"."id" WHERE "sales"."created_at" = 'last_sale...
```
I also tried:
```
Customer.includes(:sales).where('sales.created_at' => '"customer"."last_sale_at"')
Customer.includes(:sales).where('sales.created_at' => :last_sale_at)
Customer.includes(:sales).where('"sales"."created_at" = "customer"."last_sale_at"')
```
which produced variety of errors.
What's wrong?
**EDIT**: I updated the question to be more self-explanatory. | You can't call conditions when using "includes" because the records are being eager loaded in a separate query. You need to use a join instead:
```
Customer.joins(:sales).where('sales.created_at = customers.last_sale_at')
```
That being said, I suggest you use a slightly different architecture where you create an actual relation on customer that points to the last sale, i.e. a foreign\_key 'last\_sale\_id'.
```
class Customer < ActiveRecord::Base
has_many :sales
belongs_to :last_sale, class_name: 'Sale'
end
```
When a sale is created you can update the customers last\_sale in a callback:
```
class Sale < ActiveRecord::Base
after_create :update_customers_last_sale
private
def update_customers_last_sale
customer.last_sale = self
customer.save!
end
end
```
With that structure you can do this to load all customers with their last sale:
```
Customer.includes(:last_sale)
``` | You should pass a date object instead of 'last\_sale\_at'; it should be something like below:
```
Customer.includes(:sales).where('sales.created_at' => Time.now)
``` | Rails eager load where field matches the parent's field? | [
"",
"sql",
"ruby-on-rails",
"activerecord",
"models",
""
] |
I have two tables, and I have created a LEFT JOIN query for them.
Suppose they are Table A and Table B.
When I run the query, if Table B contains a key that is present in Table A, the records are displayed accordingly. If the key does not match, it displays NULL in all the columns.
If I have to replace NULL I use MySQL's `ifnull` function, but there are many more columns to apply `ifnull` to. Is there any way to apply this function over all columns at once, so that there is no need to apply it to each individual column? | No, you need to apply `IFNULL` to each column individually.
However, if you had a slightly different requirement, that is, to show the first non-null column from a number of columns, and show a default if they are all null, you could use the `COALESCE` function like so:
```
select coalesce(col1,col2,col3,0)
from tbl
```
The problem with this is that it will return a single column in the result, and not multiple columns corresponding to `col1`, `col2` and `col3`. Therefore, as long as you want to have multiple columns in your result set, you need to do the null handling on a per-column basis. | No, there's no way to use `IFNULL` on multiple columns at once, you have to do it on each column separately:
```
SELECT a.id, a.somecol, IFNULL(b.col1, 0) col1, IFNULL(b.col2, "") col2, ...
FROM Table1 AS a
LEFT JOIN Table2 AS b ON <some condition>
``` | ifnull () over multiple column | [
"",
"mysql",
"sql",
""
] |
How do I fetch the number of rows fetched by another query in SQL server:
The required value should be:
```
select count(*) of select * from table
``` | Simply try
```
SELECT count(*) FROM
(
select * from yourtable
) AS A
``` | Simply Try this Query for the number of rows fetched by another query in SQL server
```
select temp.TblCount From
(select Count(*) As TblCount from YOURTABLE) As Temp
``` | query to fetch the number of rows fetched by another query | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have an input like this `4541` or `45` or `124`
and I would like to split the input to its numbers like this:
if the input is `1235`, I want to take `1` , `2`, `3`, `5`
I tried searching on Google, but everything I found was about splitting numbers from floats or changing strings to numbers.
could you help please? | **Method 1:**
Try this
**Function:**
```
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SplitIntoIndividualLetters]') AND type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
DROP FUNCTION [dbo].[SplitIntoIndividualLetters]
GO
CREATE FUNCTION [dbo].[SplitIntoIndividualLetters](@Str varchar(8000))
Returns @Lettertable TABLE (letters varchar(8000))
AS
Begin
;With NumCte As
(
Select Number = 1 Union All
Select Number +1 From NumCte
Where Number < 1000
)
Insert Into @Lettertable(letters)
Select Substring(@str,Number,1)
From NumCte
where Number Between 1 And Len(@str)
Option (Maxrecursion 0)
Return
End
```
How to use:
```
Declare @str Varchar(50) = '12345'
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + letters
FROM dbo.SplitIntoIndividualLetters(@str)
SELECT @listStr
```
**Output:**
```
1,2,3,4,5
```
**Method 2 :**
I came up with this based on [Donal](https://stackoverflow.com/a/24240781/2118383)'s solution:
```
DECLARE @str VARCHAR(50),@Inc INT,@len INT,@char VARCHAR(50)
SET @str = '12345'
SET @Inc = 1
SET @len = LEN(@str)
WHILE @Inc<= @len
BEGIN
SET @char = COALESCE(@char+',' ,'') + SUBSTRING(@str, @Inc, 1)
SET @Inc=@Inc+1
END
SELECT @char
```
Output:
```
1,2,3,4,5
```
**Number to Word**
```
CREATE FUNCTION fnIntegerToWords(@Number as BIGINT)
RETURNS VARCHAR(1024)
AS
BEGIN
DECLARE @Below20 TABLE (ID int identity(0,1), Word varchar(32))
DECLARE @Below100 TABLE (ID int identity(2,1), Word varchar(32))
INSERT @Below20 (Word) VALUES
( 'Zero'), ('One'),( 'Two' ), ( 'Three'),
( 'Four' ), ( 'Five' ), ( 'Six' ), ( 'Seven' ),
( 'Eight'), ( 'Nine'), ( 'Ten'), ( 'Eleven' ),
( 'Twelve' ), ( 'Thirteen' ), ( 'Fourteen'),
( 'Fifteen' ), ('Sixteen' ), ( 'Seventeen'),
('Eighteen' ), ( 'Nineteen' )
INSERT @Below100 VALUES ('Twenty'), ('Thirty'),('Forty'), ('Fifty'),
('Sixty'), ('Seventy'), ('Eighty'), ('Ninety')
declare @belowHundred as varchar(126)
if @Number > 99 begin
select @belowHundred = dbo.fnIntegerToWords( @Number % 100)
end
DECLARE @English varchar(1024) =
(
SELECT Case
WHEN @Number = 0 THEN ''
WHEN @Number BETWEEN 1 AND 19
THEN (SELECT Word FROM @Below20 WHERE ID=@Number)
WHEN @Number BETWEEN 20 AND 99
THEN (SELECT Word FROM @Below100 WHERE ID=@Number/10)+ '-' +
dbo.fnIntegerToWords( @Number % 10)
WHEN @Number BETWEEN 100 AND 999
THEN (dbo.fnIntegerToWords( @Number / 100)) +' Hundred '+
Case WHEN @belowHundred <> '' THEN 'and ' + @belowHundred else @belowHundred end
WHEN @Number BETWEEN 1000 AND 999999
THEN (dbo.fnIntegerToWords( @Number / 1000))+' Thousand '+
dbo.fnIntegerToWords( @Number % 1000)
WHEN @Number BETWEEN 1000000 AND 999999999
THEN (dbo.fnIntegerToWords( @Number / 1000000))+' Million '+
dbo.fnIntegerToWords( @Number % 1000000)
WHEN @Number BETWEEN 1000000000 AND 999999999999
THEN (dbo.fnIntegerToWords( @Number / 1000000000))+' Billion '+
dbo.fnIntegerToWords( @Number % 1000000000)
ELSE ' INVALID INPUT' END
)
SELECT @English = RTRIM(@English)
SELECT @English = RTRIM(LEFT(@English,len(@English)-1))
WHERE RIGHT(@English,1)='-'
RETURN (@English)
END
```
**Usage:**
```
SELECT dbo.fnIntegerToWords(5) As Word
Word
Five
```
**To get the answer in comma separated list**
Try this
```
Declare @str Varchar(50) = '12345'
DECLARE @listStr VARCHAR(MAX)
SELECT @listStr = COALESCE(@listStr+',' ,'') + dbo.fnIntegerToWords(letters)
FROM dbo.SplitIntoIndividualLetters(@str)
SELECT @listStr
```
---
*Output:*
```
One,Two,Three,Four,Five
``` | You need to treat it as a string. Then you can use SUBSTRING to get each character.
```
DECLARE @str VARCHAR(50)
DECLARE @i INT
DECLARE @len INT
DECLARE @result VARCHAR(50)
SET @str = '1235'
SET @i = 1
SET @len = LEN(@str)
WHILE @i <= @len
BEGIN
    -- Append the next character, comma-separating the accumulated result
    SET @result = COALESCE(@result + ', ', '') + SUBSTRING(@str, @i, 1)
    SET @i = @i + 1
END
SELECT @result
``` | sql split an integer to numbers | [
"",
"sql",
"sql-server-2008",
""
] |
I have the following SQL:
```
select 2 as seq, description, figure from tableA
union
select 3 as seq, description, figure from tableB
union
select 1 as seq, 'TOTAL' as description, sum(figure) from (
select figure from tableA
union
select figure from tableB
) order by seq
```
With the above SQL the result will be:
```
DESCRIPTION FIGURE
TOTAL 200
APPLE 100
PEAR 100
```
The issue is: is there a way to simplify this query so that I don't need to repeat the first two queries to get the total figure? This is just an example; my real query is way too long, so if possible I don't want to repeat it, so as to speed up the process.
Thanks in advance for any possible help! | Using a `CTE` it's possible to define the details query only once
```
WITH Details AS (
SELECT 2 as seq, description, figure FROM tableA
UNION ALL
SELECT 3 as seq, description, figure FROM tableB
)
SELECT seq, description, figure
FROM Details
UNION ALL
SELECT 1 as seq, 'TOTAL' as description, sum(figure)
FROM Details
ORDER BY seq
```
If other values need to be added just stuff them in the `CTE` without touching the main query, for example
```
WITH Details AS (
SELECT 2 as seq, description, figure FROM tableA
UNION ALL
SELECT 3 as seq, description, figure FROM tableB
UNION ALL
SELECT 4 as seq, description, figure FROM tableC
UNION ALL
SELECT 5 as seq, description, figure FROM tableD
UNION ALL
SELECT 6 as seq, description, figure FROM tableE
)
SELECT seq, description, figure
FROM Details
UNION ALL
SELECT 1 as seq, 'TOTAL' as description, sum(figure)
FROM Details
ORDER BY seq
``` | use [ROLLUP](http://www.oracle-base.com/articles/misc/rollup-cube-grouping-functions-and-grouping-sets.php)
```
SELECT COALESCE("DESCRIPTION", 'TOTAL') As "DESCRIPTION",
SUM("FIGURE") as "FIGURE"
FROM
(
SELECT "DESCRIPTION", "FIGURE"
FROM TableA
UNION
SELECT "DESCRIPTION", "FIGURE"
FROM TableB
)
GROUP BY ROLLUP ("DESCRIPTION")
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!4/5cdc6/6) | how to get individual figure and sum of figures from 2 different table in oracle SQL without repeating sql | [
"",
"sql",
"oracle",
"sum",
""
] |
I'm working on an Android project and I'm using a ***SQLite*** database in it. I have written one insert query as:
```
sqliteDB_Obj.execSQL("INSERT INTO tbl_order_master (user_id, order, amount)" + "VALUES(" + strUserId +","+ strOrderString +","+ dblAmount +");");
```
But I'm getting the following error:
***android.database.sqlite.SQLiteException: near "order": syntax error (code 1): , while compiling: INSERT INTO tbl\_order\_master (user\_id, order, amount) VALUES(dny, my test order, 160.0);***
Some people have asked the same question before, so following the answers given there I tried the following way:
```
String strTableName="tbl_order_master";
String strQuery= String.format("INSERT INTO %s (user_id, order, amount) VALUES(%s, %s, %s);", strTableName, strUserId, strOrderString, ""+dblAmount);
sqliteDB_Obj.execSQL(strQuery);
```
**But I am still getting the same error.** Please help. Thanks! | If you use a reserved name such as `order` as an identifier, put it in `"double quotes"`. Or just rename the identifier so it isn't a reserved keyword.
Also in SQL, string literals need to be written in `'single quotes'`. Yours are not quoted.
It's better to use parameters instead though, e.g.
```
execSQL("INSERT INTO tbl_order_master (\"user_id\", \"order\", \"amount\") VALUES (?, ?, ?)",
new String[] { strUserId, strOrderString, Double.toString(dblAmount) } );
```
passing the parameter values in the second arg. | try this
```
sqliteDB_Obj.execSQL("INSERT INTO tbl_order_master (`user_id`, `order`, `amount`)" + "VALUES('" + strUserId +"','"+ strOrderString +"',"+ dblAmount +");");
```
change the column name "order" or using **`order`** | near "COLUMN_NAME": syntax error (code 1) in SQLite | [
"",
"android",
"mysql",
"sql",
"sqlite",
""
] |
The query below is meant to find the number of shippers from the Order table, with shipper info as well as employee info.
```
select Count(Shipper.ShipperID),Shipper.ShipperName from [Order]
left join Shipper
on
[Order].ShipperID = Shipper.ShipperID
group by ShipperID
```
The above query gives an error on GROUP BY ShipperID. When I change it to ShipperName, it works fine.
The error I get is: Ambiguous column name 'ShipperID'.
I need to understand why this happens; I could not find an explanation on w3schools or any other site. | If you are looking to count the orders for each shipper, then I recommend that you do something like this:
```
Select s.ShipperID, s.ShipperName,
(Select Count(1) From [Order] as o where o.ShipperId = s.ShipperId) as OrderCount
from Shipper as s
``` | ```
select Count(Shipper.ShipperID),Shipper.ShipperName from [Order]
left join Shipper
on [Order].ShipperID = Shipper.ShipperID
group by Shipper.ShipperName
```
When you add an aggregate function to a SQL Server SELECT statement, SQL Server knows it needs to perform that aggregate operation on all the rows returned by the query.
When you add another column name to the SELECT query, and it is not contained in any aggregate function (just the bare column name), SQL Server needs to know what to do with that column when rows are returned.
An aggregate function on its own is simple and straightforward: SQL Server performs that operation on all the values returned by the query and returns a single row with a scalar value.
But when another column in your SELECT statement is not contained in an aggregate function, that column would return multiple rows. If SQL Server showed the aggregate of the first column against all the rows from the second column, the results would be inaccurate. Therefore SQL Server throws an error, and you need to GROUP the results returned by your aggregate function BY the column that is not contained in any aggregate function.
*Your query*
If you only SELECT the count of ShipperID, it returns an integer showing the count of ShipperIDs, let's say 10. When you add ShipperName to your SELECT statement, the result would show something like 10 in the count column in front of each ShipperName, which would be a wrong count, and so SQL Server will not execute this statement.
But if you add GROUP BY ShipperName to your SELECT statement, the count will no longer show 10 for each ShipperName, but the count for each ShipperName individually, something like 1, 3, 4, 2 and so on.
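A minimal illustration of the difference, reusing the question's tables (the data is hypothetical):

```sql
-- Rejected: ShipperName is neither aggregated nor grouped.
-- SELECT COUNT(ShipperID), ShipperName FROM Shipper;

-- Accepted: one row per ShipperName, each with its own count.
SELECT ShipperName, COUNT(ShipperID) AS ShipperCount
FROM Shipper
GROUP BY ShipperName;
```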
I hope this will help you to understand. :) | Concept for sql groupby statement | [
"",
"sql",
"sql-server-2008",
""
] |
What is the correct way to return a string when no results are returned?
The following isn't working
```
SELECT TOP 1
    CASE
        WHEN CustomerName IS NULL THEN 'Unknown'
        WHEN CustomerName = '' THEN 'Unknown'
        ELSE CustomerName
    END AS CustomerName
FROM CUSTOMER
WHERE CustomerCode = 222
``` | It seems you want to return `Unknown` when there are no rows in your table whose `CustomerName` is neither `NULL` nor `''`.
```
SELECT COALESCE((SELECT TOP 1 CustomerName FROM
CUSTOMER WHERE CustomerCode = 222
AND CustomerName IS NOT NULL
AND CustomerName <> ''),'Unknown') CustomerName
``` | If I understand the question, you want to return a value even when the WHERE clause doesn't match, and you want 'Unknown' to replace the empty string or a NULL value:
```
SELECT TOP 1 COALESCE(NULLIF(CustomerName,''),'Unknown')
FROM (
SELECT CustomerName FROM CUSTOMER WHERE CustomerCode = 222
UNION ALL
SELECT NULL
) t
ORDER BY CustomerName DESC
``` | Return string when no results | [
"",
"sql",
"sql-server",
""
] |
I have a SQL table that contains the following columns:
```
ID, VehRegID, RenewalLetterNumber, DateLetterSent
1 200675 1 2014-02-01
4 200675 1 2014-03-21
7 200675 2 2014-04-11
9 201175 1 2014-02-21
12 201175 1 2014-03-31
65 201175 2 2014-04-11
88 201100 1 2014-05-18
97 201100 2 2014-05-21
```
What I am looking to accomplish is:
If a VehRegID appears 3 times in the table, then get the Min(DateLetterSent) and update the RenewalLetterNumber = 3
The query I have is:
```
SELECT VehRegID, MIN(DateLetterSent) As [EarliestDate]
FROM tblTempRenewal
WHERE VehRegID IN
(SELECT VehRegID, Count(*) Total
FROM tblTempRenewal
GROUP BY VehRegID
HAVING Count(*) = 3)
GROUP BY VehRegID
```
I get the error "Incorrect syntax near ')'".
I can't figure out where the error is. Any suggestions? | Remove the `Count(*) Total` from the subquery. A subquery used with the `IN` operator can only have one column in its result set.
```
SELECT VehRegID
, MIN(DateLetterSent) As [EarliestDate]
FROM tblTempRenewal
WHERE VehRegID IN
( SELECT VehRegID
FROM tblTempRenewal
GROUP
BY VehRegID HAVING count(*) = 3
)
GROUP BY VehRegID
```
You can still use the having statement without selecting the count(\*). | SQL GroupBy query returns error | [
"",
"sql",
"sql-server",
""
] |
Given a table like the following, which contains a list of names, tasks, priority of task, and status of task:
```
mysql> select * from test;
+----+------+--------+----------+--------+
| id | name | task | priority | status |
+----+------+--------+----------+--------+
| 1 | bob | start | 1 | done |
| 2 | bob | work | 2 | NULL |
| 3 | bob | finish | 3 | NULL |
| 4 | jim | start | 1 | done |
| 5 | jim | work | 2 | done |
| 6 | jim | finish | 3 | NULL |
| 7 | mike | start | 1 | done |
| 8 | mike | work | 2 | failed |
| 9 | mike | finish | 3 | NULL |
| 10 | joan | start | 1 | NULL |
| 11 | joan | work | 2 | NULL |
| 12 | joan | finish | 3 | NULL |
+----+------+--------+----------+--------+
12 rows in set (0.00 sec)
```
I want to build a query which returns only the next task to be run for each name. Specifically, I want to return the row containing the lowest number priority which has a NULL status per person.
But here's the catch: **I want to only return the row if all preceding tasks have a status of "done".**
Given the above table and query logic, the end result of this query should look like this:
```
+----+------+--------+----------+--------+
| id | name | task | priority | status |
+----+------+--------+----------+--------+
| 2 | bob | work | 2 | NULL |
| 6 | jim | finish | 3 | NULL |
+----+------+--------+----------+--------+
```
Initially, this was being done with a whole mess of sub-queries and derived tables, which was extremely inefficient and slow. I have managed to speed it up considerably by using several temporary tables to get the result I want.
In the real world, this will be run on a table with about 200k records, and multiple servers will each be executing this query several times per minute. My current solution takes about 2 seconds to run, which simply won't do.
Here is the DML/DDL to get my example data:
```
DROP TABLE IF EXISTS `test`;
CREATE TABLE `test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(20) DEFAULT NULL,
`task` varchar(20) DEFAULT NULL,
`priority` int(11) DEFAULT NULL,
`status` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`)
);
INSERT INTO `test` VALUES
(1,'bob','start',1,'done'),
(2,'bob','work',2,NULL),
(3,'bob','finish',3,NULL),
(4,'jim','start',1,'done'),
(5,'jim','work',2,'done'),
(6,'jim','finish',3,NULL),
(7,'mike','start',1,'done'),
(8,'mike','work',2,'failed'),
(9,'mike','finish',3,NULL),
(10,'joan','start',1,NULL),
(11,'joan','work',2,NULL),
(12,'joan','finish',3,NULL);
```
Here's what I am currently doing to get the desired result (which works, but is slow):
```
drop table if exists tmp1;
create temporary table tmp1 as
select
name,
min(priority) as priority
from test t
where status is null
group by name;
create index idx_pri on tmp1(priority);
create index idx_name on tmp1(name);
drop table if exists tmp2;
create temporary table tmp2 as
select tmp.*
from test t
join tmp1 tmp
on t.name = tmp.name
and t.priority < tmp.priority
group by name having sum(
case when status = 'done'
then 0
else 1
end
) = 0;
create index idx_pri on tmp2(priority);
create index idx_name on tmp2(name);
select
t.*
from test t
join tmp2 t2
on t.name = t2.name
and t.priority = t2.priority;
```
I have the DDL/DML in SQL Fiddle as well, but I can't put my solution in there because technically the creation of these temp tables is DDL, and it doesn't allow DDL in the query box. <http://sqlfiddle.com/#!2/2d9e2/1>
Please help me with coming up with a better way to do this. I am open to modifying schema or logic to accommodate outside of the box solutions as well, so long as said solution is efficient. | You can turn your logic pretty directly into a query like this:
```
select t.*
from test t
where t.status is null and
not exists (select 1
from test t2
where t2.name = t.name and
t2.id < t.id and
(t2.status <> 'done' or
t2.status is null
)
) and
exists (select 1
from test t2
where t2.name = t.name and
t2.id < t.id and
t2.status = 'done'
);
```
For performance, create an index on `test(name, id, status)`.
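A sketch of that index (the index name is arbitrary):

```sql
CREATE INDEX idx_test_name_id_status ON test (name, id, status);
```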
[Here](http://www.sqlfiddle.com/#!2/d92c98/4) is a SQL Fiddle. | This query determines whether all tasks before a given task are done by verifying that the number of not-done tasks before the given task is 0:
```
SELECT t1.name, t1.id, t1.priority, t1.task
FROM test t1
JOIN test t2
ON t2.name = t1.name
AND t2.priority < t1.priority
WHERE t1.status IS NULL
GROUP BY t1.name, t1.priority, t1.id, t1.task
HAVING COUNT(CASE WHEN t2.status = 'done' THEN NULL ELSE 1 END) = 0
CREATE INDEX test_index1 ON test (name,status,priority,id,task);
```
<http://sqlfiddle.com/#!2/c912f/7> | Mysql Query: returning rows where all preceding rows in group match a condition | [
"",
"mysql",
"sql",
"query-optimization",
""
] |
I'm using a SQL Server database and have two tables with the following columns:
`Driver: DriverID, NumCars`
`Cars: CarID, DriverID`
`NumCars` is empty, but I need it to be the count of rows in `Cars` that contain that particular `DriverID`. How would I do this? | You can fetch the count of `Cars` that belong to a driver, along with all `Driver` data with the following `SELECT` query:
```
SELECT *
,(
SELECT COUNT(*)
FROM Cars c
WHERE c.DriverID = d.DriverID
)
FROM Driver d
```
You can `UPDATE` the `NumCars` column with the following statement:
```
UPDATE Driver
SET NumCars = (
SELECT COUNT(*)
FROM Cars
WHERE Driver.DriverID = Cars.DriverID
)
``` | Try:
```
SELECT count(carID) as CountOfCars from Cars where DriverID = [DriverID]
```
Will give you the count based on input DriverID
or
```
SELECT DriverId, count(carID) as CountOfCars from Cars group by DriverID
```
Will give you all counts of carIDs and group them by DriverID
If you need to base the counts on the data from the Driver table:
```
SELECT count(carID) as CountOfCars from Cars inner join Driver on Driver.DriverID = Cars.DriverID group by Cars.DriverID
``` | SQL set column to row count | [
"",
"sql",
"sql-server",
""
] |
I have two tables. One has products and the other has bundles that go with it. I need to figure out the SQL that allows me to find all the combinations in which I can sell the product with extras.
```
Products
Name ID
Bench 1
Extra
Name ID Parent ID QTY
undershelf 1 1 1
overshelf 2 1 1
wheels 3 1 1
```
I need an output table that shows all the combinations in which I can sell the product:
```
Bench
Bench + undershelf
Bench + undershelf + overshelf
Bench + overshelf
Bench + wheels
bench + wheels + overshelf and so on.
``` | Every extra can either be in the bundle or not, making inclusion a binary property.
A way to visualize a combination is to create a word with a bit for every extra: `1` means that the extra is in the bundle, `0` means that it is not.
For example `Bench + undershelf + overshelf` is 110 (or 011 if the binary string is read in the opposite order).
Generating every combination of n bits gives every combination of n extras; it also generates every number from `0` to `2^n - 1`.
We can work back from here:
1. generate the list of number from `0` to `2^n - 1`;
2. convert the number to binary, to list the combination of extras
3. match every bit with an extra
4. concatenate the names of the extras in the bundle description.
```
SELECT CONCAT(b.Name
, COALESCE(CONCAT(' + '
, GROUP_CONCAT(x.Name SEPARATOR ' + '))
, '')) Combination
FROM (SELECT p.Name, p.id
, LPAD(BIN(u.N + t.N * 10), e.Dim, '0') bitmap
FROM Products p
CROSS JOIN (SELECT 0 N UNION ALL SELECT 1
UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7
UNION ALL SELECT 8 UNION ALL SELECT 9) u
CROSS JOIN (SELECT 0 N UNION ALL SELECT 1
UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7
UNION ALL SELECT 8 UNION ALL SELECT 9) t
INNER JOIN (SELECT COUNT(1) Dim
, `Parent ID` pID
FROM Extra) E ON e.pID = p.ID
WHERE u.N + t.N * 10 < Pow(2, e.Dim)
) B
LEFT JOIN (SELECT @rownum := @rownum + 1 ID
, `Parent ID` pID
, Name
FROM Extra
, (Select @rownum := 0) r) X
ON x.pID = b.ID
AND SUBSTRING(b.bitmap, x.ID, 1) = '1'
GROUP BY b.Name, b.bitmap
```
This query will work for up to six extras; beyond that it'll need another digit table (roughly one decimal digit for every three extras).
**How it Works**
The subquery **`E`** counts the extras; this count is used in the derived table **`B`** to limit the elements generated by the digit tables `u` and `t` (units and tens) to 2^dim.
The number is converted to binary by `BIN(u.N + t.N * 10)`, then left-padded with '0' to the number of extras, generating a combination bitmap.
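To see the bitmap construction in isolation (the values here are chosen arbitrarily): with three extras, combination number 5 is binary 101, so the first and third extras are in the bundle:

```sql
SELECT LPAD(BIN(5), 3, '0') AS bitmap;                  -- '101'
SELECT SUBSTRING(LPAD(BIN(5), 3, '0'), 2, 1) AS second; -- '0', so the second extra is excluded
```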
To use the generated bitmap, each extra needs a sequential id matching a position in it; that's what the subquery **`X`** is meant for.
The two subqueries are `JOIN`ed on the nth char of the bitmap: if the char is '1' the extra is in the bundle; a `LEFT` join is used so as not to lose the product without any extras. | I cannot think of any ingenious way of doing this in MySQL, but it is very easy in a scripting language. Here it is in PHP:
```
<?php
$extra = array('undershelf', 'overshelf', 'wheels');
$possible_combinations = pow(2, count($extra));
for ($i = 0; $i < $possible_combinations; $i++) {
$combo = array('Bench');
foreach ($extra as $j => $item) {
if ($i & pow(2, $j)) {
$combo[] = $item;
}
}
echo implode(' + ', $combo) . "\n";
}
```
prints
```
Bench
Bench + undershelf
Bench + overshelf
Bench + undershelf + overshelf
Bench + wheels
Bench + undershelf + wheels
Bench + overshelf + wheels
Bench + undershelf + overshelf + wheels
``` | MySQL permutation | [
"",
"mysql",
"sql",
""
] |
I am writing a script in PowerShell ISE and I am using Invoke-Sqlcmd.
After the command is executed the Powershell session switches into sqlps session (PS SQLSERVER:>) and I can't execute script for the second time. I have to quit PowerShell ISE and start it again.
So my question is: how to switch back from sqlps to regular ps or how to prevent Invoke-Sqlcmd from switching session.
```
Invoke-Sqlcmd -ServerInstance $server -Database master -Username $user -Password $password -InputFile $file `
-ErrorAction Stop -OutputSqlErrors $true -Variable $variable
```
This doesn't work:
```
Push-Location
Invoke-Sqlcmd -ServerInstance $server -Database master -Username $user -Password $password -InputFile $file `
-ErrorAction Stop -OutputSqlErrors $true -Variable $variable
Pop-Location
``` | The `sqlps` module's behavior is to leave you in the `psdrive` that it creates. I'm fairly certain that people have asked Microsoft to fix this, as it's very annoying and disruptive.
The automatic importing of modules introduced by PowerShell 3.0 makes this even more annoying, because you may not even realize that you're importing the module until after the fact.
When I use `sqlps`, I explicitly import it so that I can control my working directory as follows:
```
push-location
import-module sqlps -disablenamechecking
pop-location
```
This returns you to your previous directory after the module is loaded.
Very late edit: With the advent of SQL Server Management Studio 2016 we have a new PowerShell module, `sqlserver`, which supersedes `sqlps` and resolves this problem. | A simple *cd 'directory of your choice'* will put you back to the previous directory you were in.
```
PS SQLSERVER:\> cd D:
```
Result:
```
PS D:\>
``` | PowerShell Invoke-Sqlcmd switches into sqlps session | [
"",
"sql",
"powershell",
"invoke",
""
] |
I've just come across some SQL syntax that I thought was invalid, but actually works fine (in SQL Server at least).
Given this table:
```
create table SomeTable (FirstColumn int, SecondColumn int)
```
The following `insert` statement executes with no error:
```
insert SomeTable(Any.Text.Here.FirstColumn, It.Does.Not.Matter.What.SecondColumn)
values (1,2);
```
The insert statement completes without error, and checking `select * from SomeTable` shows that it did indeed execute properly. See fiddle: <http://sqlfiddle.com/#!6/18de0/2>
SQL Server seems to just ignore anything except the last part of the column name given in the insert list.
**Actual question:**
Can this be relied upon as documented behaviour?
Any explanation about why this is so would also be appreciated. | This was [reported as a bug](https://connect.microsoft.com/SQLServer/feedback/details/311881/sql-server-incorrectly-accepts-prefixes-in-column-list-of-insert-statements) on Connect and despite initially encouraging comments the current status of the item is closed as "won't fix".
The Order by clause used to behave in a similar fashion but that one was fixed in SQL Server 2005. | It's *unlikely* to be part of the SQL standard, given its dubious utility (though I haven't checked specifically (a)).
What's most likely happening is that it's throwing away the non-final part of the column specification because it's superfluous. You have *explicitly* stated what table you're inserting into, with the `insert into SomeTable` part of the command, and *that's* the table that will be used.
What you appear to have done here is to find a way to execute SQL commands that are less readable but have no real advantage. In that vein, it appears similar to the C code:
```
int nine = 9;
int eight = 8;
xyzzy = xyzzy + nine - eight;
```
which could perhaps be better written as `xyzzy++;` :-)
I wouldn't rely on it at all, *possibly* because it's not standard but *mostly* because it makes maintenance harder rather than easier, and because I know DBAs all over the world would track me down and beat me to death with IBM DB2 manuals, their choice of weapon due to the voluminous size and skull-crushing abilities :-)
---
(a) I *have* checked non-specifically, at least for ISO 9075-2:2003 which dictates the SQL03 language.
Section `14.8` of that standard covers the `insert` statement and it appears that the following clause may be relevant:
> Each column-name in the insert-column-list shall identify an updatable column of T.
Without spending a huge amount of time (that document is 1,332 pages long and would take several days to digest properly), I suspect you could argue that the column *could* be identified just using the final part of the column name (by removing all the owner/user/schema specifications from it).
Especially since it appears only one target table is possible (updatable views crossing table boundaries notwithstanding):
```
<insertion target> ::= <table name>
```
Fair warning: I haven't checked later iterations of the standard so things may have changed. But I'd consider that unlikely since there seems to be no real use case for having this feature. | Why are dot-separated prefixes ignored in the column list for INSERT statements? | [
"",
"sql",
"sql-server",
""
] |
I have a SQL Insert statement that needs to insert records into another table only if the record doesn't exist in table2 or the zip code has changed in table1. I have tried the following, which throws an error, but it shows the logic I am looking for:
```
INSERT INTO table2
SELECT id, zip
FROM table1 t1
JOIN table2 t2
ON t1.id = t2.id and t1.zip <> t2.zip
```
I also need it to insert the records if the id doesn't exist at all in table2. I have googled the crap out of this and can't seem to find the solution anywhere. | What about this?
```
INSERT INTO table2
SELECT t1.id, t1.zip
FROM table1 t1
LEFT OUTER JOIN table2 t2
ON t1.id = t2.id
WHERE (t2.id IS NULL OR t2.zip <> t1.zip)
```
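For a quick sanity check, here is the insert-missing-or-changed pattern run in SQLite (a stand-in for SQL Server; the table contents are made up). Note the anti-join test is on the right-hand table's key, `t2.id IS NULL`:

```python
import sqlite3

# Hypothetical tables mirroring the question, using SQLite for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER, zip TEXT);
CREATE TABLE table2 (id INTEGER, zip TEXT);
INSERT INTO table1 VALUES (1, '11111'), (2, '22222'), (3, '33333');
INSERT INTO table2 VALUES (1, '11111'), (2, '99999');  -- 2 changed, 3 missing
""")

# Left-join anti-pattern: copy rows that are missing or whose zip changed.
con.execute("""
INSERT INTO table2 (id, zip)
SELECT t1.id, t1.zip
FROM table1 t1
LEFT OUTER JOIN table2 t2 ON t1.id = t2.id
WHERE t2.id IS NULL OR t2.zip <> t1.zip
""")

rows = sorted(con.execute("SELECT id, zip FROM table2").fetchall())
print(rows)  # rows for id 2 (new zip) and id 3 were inserted
```

The same query shape works unchanged in SQL Server; only the sample setup here is SQLite-specific.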
Also, be sure to clarify which table's `id` and `zip` columns you are asking for. | You should always include column lists when doing inserts. Second, your query doesn't quite capture your logic. You need a `left outer join` to find the records that don't exist in the second table. Perhaps this might do what you want:
```
INSERT INTO table2(id, zip)
SELECT id, zip
FROM table1 t1 LEFT JOIN
table2 t2
ON t1.id = t2.id
WHERE (t1.zip <> t2.zip) or (t2.zip is null)
``` | SQL Insert Data If Column A Matches and Column B Doesn't | [
"",
"sql",
"sql-server",
"join",
"insert",
""
] |
I'm building a Table Valued Function in SSMS, and I'm expecting IntelliSense to help me select columns, but it doesn't. Consider this:
```
CREATE FUNCTION dbo.My_TVF
(
)
RETURNS TABLE
AS
RETURN
(
SELECT [PO].I -- Here I is my cursor, ctrl+space does nothing
FROM dbo.SomePurchaseOrderView PO
JOIN dbo.SomePurchaseOrderLineView POL ON PO.PO_NUM = POL.PO_NUM
WHERE PO.PO_NUM IN (
SELECT TOP 500 PO_NUM
FROM dbo.SomeTable
WHERE PROCESSED = 0
)
)
GO
```
I want it to suggest column names for the `select` clause.
Notes:
* The cache is fresh (`CTRL` + `SHIFT` + `R`)
* IntelliSense works fine in general, this is the only situation I've encountered where it doesn't.
* I'm querying a view instead of a table, if it matters.
I know it often fails when there is some kind of syntax error in what you are writing, but I can execute my query just fine when I specify a column. | On msdn it says: "IntelliSense is available for the SELECT statement when it is coded by itself, but not when the SELECT is contained in a CREATE FUNCTION statement."
Although, when testing it in my SSMS 2012, it does seem to work...
Source: <http://msdn.microsoft.com/en-us/library/bb934481.aspx>
(Sorry, I don't have enough rep for a comment...) | Here are a few assumptions:
1. You are connected to the right SQL Server instance, which has the tables listed in the query below.
2. You are not using index hints. I have had problems with intellisense and index hints.
I was able to reproduce the problem on my computer; it looks like you need to type a comma before SSMS will suggest the next column.
```
CREATE FUNCTION dbo.My_TVF
(
)
RETURNS TABLE
AS
RETURN
(
SELECT [PO].I X -- If you press ctrl+space at the location of X it does nothing
-- since it has already suggested that column, PO.I (that's my assumption)
-- for next columns type a comma after PO.I and press ctrl+space.
-- I had no problems getting suggestions for other columns.
FROM dbo.SomePurchaseOrderVIEW PO
JOIN dbo.SomePurchaseOrderLineView POL ON PO.PO_NUM = POL.PO_NUM
WHERE PO.PO_NUM IN (
SELECT TOP 500 PO_NUM
FROM dbo.SomeTable
WHERE PROCESSED = 0
)
)
GO
``` | No IntelliSense in SSMS 2008 for Table Valued Function | [
"",
"sql",
"sql-server-2008",
"intellisense",
"ssms",
""
] |
I'm trying to join multiple tables together and only get some of the fields
Here's my SQL code:
```
SELECT t1.match_id, t1.provider_id, t1.visible, t2.match_type, t3.class_num, t3.location_id, t4.prov_name, t6.city_name
FROM match AS t1
JOIN umtc_class AS t2
on t1.match_id = t2.match_id
JOIN abroad_class AS t3
on t1.match_id = t3.match_id
JOIN provider AS t4
on t1.provider_id = t3.provider_id
JOIN location AS t5
on t3.location_id = t3.location_id
JOIN location AS t6
on t5.city_id = t6.city_id
```
# 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'match AS t1
```
JOIN umtc_class AS t2
on t1.match_id = t2.match_id
```
' at line 2
I keep messing around with it but I can't get anywhere... any ideas as to why it isn't working? | MATCH is a function name and thus a reserved word in MySQL. Try using the fully qualified table name with the database name prepended (or quote the name in backticks as `` `match` ``):
```
SELECT ...
FROM mydb.match AS t1
``` | ```
FROM match AS t1
JOIN umtc_class AS t2
on t1.match_id = t2.match_id
JOIN abroad_class AS t3
on t1.match_id = t3.match_id
JOIN provider AS t4
on t1.provider_id = t3.provider_id --<-- This ON clause should have t4 in it
JOIN location AS t5
on t3.location_id = t3.location_id --<-- This ON clause should have t5 in it
JOIN location AS t6
on t5.city_id = t6.city_id
```
Any `ON` clause should define the relation between the table it follows and the preceding result set, whether that is a single table or the result of earlier joins.
"",
"mysql",
"sql",
""
] |
I have a table like this
```
item_id item_quantity
----------------------
324 2
432 23
543 12
879 3
```
The `item_id` field is not auto-increment but it is unique.
The inputs of the query will be some `item_id` values, and the result should be the sum of their `item_quantity` values.
For instance, if the inputs are `324, 543`, then the query should return `14` (2 + 12)
Is it possible to do this at mysql level? | If all you need is the sum of the `item_quantity`, all you need to do is:
```
Select Sum(item_quantity)
From Table
Where item_id In (324, 543)
``` | Try using in :
```
SELECT SUM(item_quantity)
FROM table
WHERE item_id in (324,543)
``` | Calculate total of some fields in mysql? | [
"",
"mysql",
"sql",
"sum",
""
] |
I have
TABLE 1: `r_profile_token`
Columns:
```
r_sequence int(45) AI PK
r_profileid varchar(45)
r_token varchar(300)
r_deviceType int(1)
r_date date
r_time time
```
and
TABLE 2: `r_token_arn`
Columns:
```
r_token varchar(300) PK
r_arn varchar(300)
```
I need a result of the form -
```
r_profileid
r_arn
r_deviceType
```
where I can specify the `r_profileid`.
So far my SQL statement is:
```
SELECT
b.r_arn,
a.r_deviceType
FROM
coffee2.r_profile_token a INNER JOIN
coffee2.r_token_arn b
ON a.r_token=b.r_token;
```
This returns `r_arn` and `r_deviceType`, but for ***all*** r\_profileid values.
How do I modify the statement so that it returns me `r_arn` and `r_deviceType` only corresponding to a specific `r_profileid`? | Use a `WHERE` clause.
```
SELECT B.R_ARN, A.R_DEVICETYPE
FROM COFFEE2.R_PROFILE_TOKEN A
INNER JOIN
COFFEE2.R_TOKEN_ARN B
ON A.R_TOKEN=B.R_TOKEN
WHERE R_PROFILEID = 'SOME_VALUE';
```
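For illustration, here is the join-plus-`WHERE` pattern with the profile id supplied as a bind parameter, sketched in SQLite (sample rows and values are made up; the schema is trimmed to the relevant columns):

```python
import sqlite3

# Minimal stand-ins for the two tables from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE r_profile_token (r_profileid TEXT, r_token TEXT, r_deviceType INT);
CREATE TABLE r_token_arn (r_token TEXT PRIMARY KEY, r_arn TEXT);
INSERT INTO r_profile_token VALUES ('p1', 'tok1', 1), ('p2', 'tok2', 2);
INSERT INTO r_token_arn VALUES ('tok1', 'arn1'), ('tok2', 'arn2');
""")

# The profile id is passed as a parameter instead of being hard-coded.
rows = con.execute("""
SELECT b.r_arn, a.r_deviceType
FROM r_profile_token a
INNER JOIN r_token_arn b ON a.r_token = b.r_token
WHERE a.r_profileid = ?
""", ('p1',)).fetchall()
print(rows)  # only profile p1's row: [('arn1', 1)]
```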
If you want for a single profileid, then use
```
WHERE R_PROFILEID = 'SOME_VALUE';
```
If you want for a range of profileIds , then use
```
WHERE R_PROFILE_ID IN ('VALUE1','VALUE2','VALUE3');
``` | You can try this Query against your requirements.
```
SELECT
b.r_arn,
a.r_deviceType ,
a.r_profileid
FROM
r_profile_token a
INNER JOIN
r_token_arn b
ON
a.r_token=b.r_token
where r_profileid='profile name';
``` | How do I achieve the following using JOIN statement in mysql? | [
"",
"mysql",
"sql",
""
] |
I have two tables in a Database

and 
I need to retrieve the number of staff per manager in the following format

I've been trying to adapt an answer to another question
```
SELECT bankNo AS "Bank Number",
COUNT (*) AS "Total Branches"
FROM BankBranch
GROUP BY bankNo
```
As
```
SELECT COUNT (*) AS StaffCount ,
Employee.Name AS Name
FROM Employee, Stafflink
GROUP BY Name
```
As I look at the Group BY I'm thinking I should be grouping by The ManID in the Stafflink Table.
My output with this query looks like this

So it is counting correctly but as you can see it's far off the output I need to get.
Any advice would be appreciated. | You need to join the Employee and Stafflink tables. It appears that your FROM clause should look like this:
```
FROM Employee INNER JOIN StaffLink ON Employee.ID = StaffLink.ManID
``` | If you don't specify a join between the tables, a so-called Cartesian product is built: each record from one table is paired with every record from the other table. If you have 7 records in one table and 10 in the other, you get 70 pairs (i.e. rows) before grouping. This explains why you are getting a count of 7 per manager name.
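The inflation is easy to reproduce; a small SQLite stand-in (made-up rows, but Access behaves the same way):

```python
import sqlite3

# Illustrating the Cartesian-product pitfall: without a join condition,
# every row pairs with every row, inflating counts.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INT, name TEXT)")
con.execute("CREATE TABLE link (manid INT)")
con.executemany("INSERT INTO emp VALUES (?,?)", [(1, 'A'), (2, 'B'), (3, 'C')])
con.executemany("INSERT INTO link VALUES (?)", [(1,), (1,), (2,)])

# No join condition: 3 * 3 = 9 pairs, so every "count" is wrong.
cartesian = con.execute("SELECT COUNT(*) FROM emp, link").fetchone()[0]

# Proper join: staff are counted per manager.
joined = con.execute("""
SELECT e.name, COUNT(*) FROM link l
JOIN emp e ON e.id = l.manid
GROUP BY e.name ORDER BY e.name
""").fetchall()
print(cartesian, joined)  # 9 [('A', 2), ('B', 1)]
```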
Besides joining the tables, I would suggest grouping on the manager id instead of the manager name. The manager id is known to be unique per manager, but the name is not. Because the name appears in the select list, this requires you either to group on the name as well or to apply an aggregate function to it. Each additional grouping slows down the query; therefore I prefer the aggregate function.
```
SELECT
COUNT(*) AS StaffCount,
FIRST(Manager.Name) AS ManagerName
FROM
Stafflink
INNER JOIN Employee AS Manager
ON StaffLink.ManId = Manager.Id
GROUP BY
StaffLink.ManId
```
I don't know if it makes a performance difference, but I prefer to group on `StaffLink.ManId` than on `Employee.Id`, since `StaffLink` is the main table here and `Employee` is just used as lookup table in this query. | Ms-Access: counting from 2 tables | [
"",
"sql",
"ms-access",
""
] |
I have a table with prices and dates on product:
```
id
product
price
date
```
I create a new record when price change. And I have a table like this:
```
id product price date
1 1 10 2014-01-01
2 1 20 2014-02-17
3 1 5 2014-03-28
4 2 25 2014-01-05
5 2 12 2014-02-08
6 2 30 2014-03-12
```
I want to get the last price for all products. But when I group by "product", I can't get the price from the row with the maximum date.
I can use the `MAX()`, `MIN()` or `COUNT()` functions in the query, but I need a result based on another column's value.
I want something like this in final:
```
product price date
1 5 2014-03-28
2 30 2014-03-12
```
But I don't know how. May be like this:
```
SELECT product, {price with max date}, {max date}
FROM table
GROUP BY product
``` | Alternatively, you can have a subquery get the latest date for every product and join the result back on the table itself to get the other columns.
```
SELECT a.*
FROM tableName a
INNER JOIN
(
SELECT product, MAX(date) mxdate
FROM tableName
GROUP BY product
) b ON a.product = b.product
AND a.date = b.mxdate
``` | I think the easiest way is a `substring_index()`/`group_concat()` trick:
```
SELECT product,
       substring_index(group_concat(price order by date desc), ',', 1) as PriceOnMaxDate,
max(date)
FROM table
GROUP BY product;
```
Another way, that might be more efficient than a `group by` is:
```
select t.*
from table t
where not exists (select 1
from table t2
where t2.product = t.product and
t2.date > t.date
);
```
This says: "Get me all rows from the table where the same product does not have a larger date." That is a fancy way of saying "get me the row with the maximum date for each product."
Note that there is a subtle difference: the second form will return all rows that fall on the maximum date, if there are duplicates.
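The `NOT EXISTS` form can be sanity-checked in SQLite against the question's sample rows:

```python
import sqlite3

# Greatest-row-per-group via NOT EXISTS, run on the question's data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (id INT, product INT, price INT, date TEXT)")
con.executemany("INSERT INTO prices VALUES (?,?,?,?)", [
    (1, 1, 10, '2014-01-01'), (2, 1, 20, '2014-02-17'), (3, 1, 5, '2014-03-28'),
    (4, 2, 25, '2014-01-05'), (5, 2, 12, '2014-02-08'), (6, 2, 30, '2014-03-12'),
])

# Keep only rows for which no later row exists for the same product.
rows = con.execute("""
SELECT t.product, t.price, t.date
FROM prices t
WHERE NOT EXISTS (SELECT 1 FROM prices t2
                  WHERE t2.product = t.product AND t2.date > t.date)
ORDER BY t.product
""").fetchall()
print(rows)  # [(1, 5, '2014-03-28'), (2, 30, '2014-03-12')]
```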
Also, for performance an index on `table(product, date)` is recommended. | How to get rows with max date when grouping in MySQL? | [
"",
"mysql",
"sql",
"max",
""
] |
I've a table like this
```
Acode Bcode
100 4
101 3
100 8
105 4
105 8
104 1
109 8
110 3
109 8
```
I would like to find out which Acode values belong to more than one Bcode, e.g.
100 belongs to 4 as well as 8, 105 belongs to 4 as well as 8, and so on. | Try this:
```
SELECT Acode, COUNT(DISTINCT Bcode)
FROM TableName
GROUP BY Acode
HAVING COUNT(DISTINCT Bcode) > 1
``` | Try this
```
SELECT Acode,Count(Acode)
FROM TABLE1
GROUP BY Acode
HAVING Count(Acode) > 1
```
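One caveat: the sample data contains the pair (109, 8) twice, so counting *distinct* Bcode values avoids a false positive for 109. A quick SQLite check:

```python
import sqlite3

# The question's rows, including the duplicated (109, 8) pair.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Acode INT, Bcode INT)")
con.executemany("INSERT INTO t VALUES (?,?)", [
    (100, 4), (101, 3), (100, 8), (105, 4), (105, 8),
    (104, 1), (109, 8), (110, 3), (109, 8),
])

# COUNT(DISTINCT ...) ignores the repeated (109, 8) row.
rows = con.execute("""
SELECT Acode, COUNT(DISTINCT Bcode) AS n
FROM t
GROUP BY Acode
HAVING COUNT(DISTINCT Bcode) > 1
ORDER BY Acode
""").fetchall()
print(rows)  # [(100, 2), (105, 2)] -- 109 is excluded (both its rows share Bcode 8)
```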
If you want to **find out Which are Acode belongs to more than one Bcode** then
Try this
```
SELECT T.ACode,T.BCode,S.Acode
FROM table1 T JOIN
(
SELECT Acode,Count(Acode) As CCode
FROM TABLE1
GROUP BY Acode
HAVING Count(Acode) > 1
) As S ON S.ACode = T.ACode
```
**[FIDDLE DEMO](http://sqlfiddle.com/#!3/190cc/3)**
*Output*
---
```
ACODE BCODE
100 4
100 8
105 4
105 8
109 8
109 8
``` | Find a record which has multiple occurrences? | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
There are a couple of similar questions already out there and the consensus seemed to be that a primary key should always be created.
But what if you have a single row table for storing settings (and let's not turn this into a discussion about why it might be good/bad to create a single row table please)?
Surely having a primary key on a single row table becomes completely useless? | It may seem completely useless, but it's also completely harmless, and I'd vote for harmless with good design principles vs. useless with no design principles every time.
Other people have commented, rightly, that you don't know how you're going to use the table in a year or five years... what if someone comes along and decides they want to duplicate the configuration -- move it to a distributed environment or add a test environment by using a duplicate configuration string or whatever. Having a field that acts like a primary key means that whenever you query the table, if you use the key, you'll be certain no matter what anyone else may do to your table, that you're getting the correct record.
You're right there are a million other aspects -- surrogate keys vs. intelligent keys, indexing, partitioning (silly on a single row table, I know), whatever... but without getting into that I'd vote to add the key rather than not add it. You could have done it in the time it took to read this thread. | Short answer: with no key, duplicate records are possible. You're planning a single row now, but what about six months in the future when your single row multiplies? Put a primary key on the table, even for a single row. | SQL - Must there always be a primary key? | [
"",
"sql",
""
] |
I have two tables : Table A and Table B
Table A and Table B has a common column : `CustomerID`. `Table A` is already populated while `Table B` is empty.
`Table B` has another column `PhoneNo` that can't be null. So, I want to first populate the `CustomerID` column of `Table B` by copying same data from `Table A`, and then add other values to the second column `PhoneNo` (`PhoneNo` CAN'T BE NULL )
The problem is when I do this (inside a stored procedure) :
```
INSERT INTO Table B ( CustomerID, PhoneNo)
VALUES ( ( SELECT TableA.CustomerID FROM TableA JOIN TableB
ON TableA.CustomerID = TableB.CustomerID), @PhoneNo)
```
When I try to execute, this gives me an error saying
```
Subquery returned more than 1 value. This is not permitted when the subquery
follows =, !=, <, <= , >, >= or when the subquery is used as an expression
```
How can I first populate ONLY `CustomerID` column in `TableB` then Add values to the `PhoneNO` column later? | I rewrote your query and changed quite a bit.
The `insert` lists two columns, and the `select` now provides two values. The `join` was removed since it was obsolete, and the incorrect use of `values` was replaced by a direct `select`.
```
INSERT
INTO Table B
( CustomerID
, PhoneNo
)
SELECT CustomerID
, @PhoneNo
FROM TableA
``` | Make the insert without VALUES like this:
```
INSERT INTO Table B ( CustomerID, PhoneNo)
SELECT TableA.CustomerID, @PhoneNo FROM TableA
``` | How to populate only one column of a table when others can't be null? | [
"",
"sql",
"sql-server",
"database",
""
] |
I'm trying to do something like this in PostgreSQL 9.1:
```
select name from persons where 'Usher is a great singer' ~ concat('\\m',name,'\\M');
```
Where the column "name" has the following values: Usher, Micheal, John Smith. But it always return an empty result.
Does anyone know why? | Supposing this is a query, not a string literal in a specific language adding constraints, you don't have to escape the `\` :
```
select name from persons where
'Usher is a great singer' ~ concat('\m',name,'\M');
``` | You are selecting `Usher is a great singer` from `persons`; shouldn't you select a table column value? | Column in regex matching | [
"",
"sql",
"postgresql",
""
] |
I have a table that contains patterns for phone numbers, where x can match any digit.
```
+----+--------------+----------------------+
| ID | phone_number | phone_number_type_id |
+----+--------------+----------------------+
| 1 | 1234x000x | 1 |
| 2 | 87654311100x | 4 |
| 3 | x111x222x | 6 |
+----+--------------+----------------------+
```
Now, I might have 511132228, which will match row 3, and it should return its type. So, it's kind of like SQL wildcards, but the other way around, and I'm confused about how to achieve this. | Give this a go:
```
select * from my_table
where '511132228' like replace(phone_number, 'x', '_')
``` | ```
select *
from yourtable
where '511132228' like (replace(phone_number, 'x','_'))
``` | Search an SQL table that already contains wildcards? | [
"",
"sql",
"sql-server",
""
] |
I have a somewhat difficult MySQL query:
```
SELECT
*,
(
SELECT
MIN(tbl.id)
FROM
stat tbl
WHERE
tbl.id > stat.id
and tbl.user_id=stat.user_id
) next_session_id,
(
Select
(case
when next_session_id IS NULL then '9999-12-31 23:59:59' else sub.date
end)
from
stat sub
where
sub.id=next_session_id
) next_date
FROM
stat
WHERE
user_id=15973
and date >'2014-06-01'
```
The problem is that for the last row `next_session_id` is `NULL` (checked in MySQL Workbench), so the query should return the datetime `'9999-12-31 23:59:59'`, but it returns `NULL` instead. I think the `IS NULL` check is not working, or is incorrect. Why does this happen?
**UPDATE**
Example of result set
 | You should add `OR next_session_id IS NULL` to the `WHERE` clause of your subselect
```
SELECT
(CASE
WHEN next_session_id IS NULL THEN '9999-12-31 23:59:59' ELSE sub.date
END)
FROM
stat sub
WHERE
sub.id=next_session_id OR next_session_id IS NULL
```
because without that the WHERE condition will not be true if next_session_id is null. | In a database, NULL and `''` (empty string) are different. So ensure you have a `NULL` value instead of `''` (empty string) in your table.
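The difference between `NULL` and `''` is easy to demonstrate; the same three-valued logic applies in MySQL (SQLite sketch):

```python
import sqlite3

# NULL never equals anything -- not even NULL -- so an explicit IS NULL
# branch (or COALESCE/IFNULL) is needed; '' is an ordinary value.
con = sqlite3.connect(":memory:")
cur = con.cursor()
assert cur.execute("SELECT NULL = NULL").fetchone()[0] is None   # unknown, not TRUE
assert cur.execute("SELECT '' = ''").fetchone()[0] == 1          # TRUE
assert cur.execute("SELECT NULL IS NULL").fetchone()[0] == 1     # TRUE

# A CASE expression handles the NULL branch explicitly:
val = cur.execute(
    "SELECT CASE WHEN NULL IS NULL THEN '9999-12-31 23:59:59' ELSE 'x' END"
).fetchone()[0]
print(val)  # 9999-12-31 23:59:59
```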
Check this query with your table
`SELECT * FROM TABLE WHERE next_session_id=''` | MySQL IS NULL not working | [
"",
"mysql",
"sql",
"null",
"isnull",
""
] |
I have a user table as follows
```
id  name      age
1   John      21
2   Mathan    23
3   Raj       21
4   Manoj     50
5   Krishnan  91
```
**I want to find minimum age and its corresponding name. How can I do it with rails?**
Can I do it in a single query?
Note: more than one name can have the same age. | Is there a specific reason why you want to do it in a single query?
If you can write 2 queries, I think you can just write :
```
User.where age: User.minimum(:age)
``` | ```
select age, group_concat(name) from table group by age order by age asc limit 1
```
You will need to process the results later on in Ruby, but this gives you all you need in a single query. Also, I am assuming MySQL, so this might differ on other RDBMSs. | Find lowest value and coresponding code on asingle query mysql | [
"",
"mysql",
"sql",
"ruby-on-rails",
"ruby-on-rails-4",
""
] |
I have a table with multiple instances of each title, some hardcover (h) and some paperback (p):
```
title | type
-----------------------------+------
Franklin in the Dark | p
Little Women | p
The Cat in the Hat | p
Dune | p
The Shining | p
Programming Python | p
Goodnight Moon | p
2001: A Space Odyssey | h
Dynamic Anatomy | p
Bartholomew and the Oobleck | p
The Cat in the Hat | h
Dune | h
The Velveteen Rabbit | p
The Shining | h
The Tell-Tale Heart | p
2001: A Space Odyssey | p
```
I'm trying to return titles that have both paperback and hardcover copies.
The table should ideally return only 4 titles.
\*Edit: these are part of two different tables.
```
7808 | The Shining | 4156 | 9
4513 | Dune | 1866 | 15
4267 | 2001: A Space Odyssey | 2001 | 15
1608 | The Cat in the Hat | 1809 | 2
1590 | Bartholomew and the Oobleck | 1809 | 2
25908 | Franklin in the Dark | 15990 | 2
0385121679 | 7808 | 2 | 75 | 1993-10-01 | h
1885418035 | 156 | 1 | 163 | 1995-03-28 | p
0929605942 | 156 | 2 | 171 | 1998-12-01 | p
0441172717 | 4513 | 2 | 99 | 1998-09-01 | p
044100590X | 4513 | 3 | 99 | 1999-10-01 | h
0451457994 | 4267 | 3 | 101 | 2000-09-12 | p
0451198492 | 4267 | 3 | 101 | 1999-10-01 | h
0823015505 | 2038 | 1 | 62 | 1958-01-01 | p
0596000855 | 41473 | 2 | 113 | 2001-03-01 | p
``` | There are a couple of ways of doing this. If you just want the titles of books that have both a hardcover and a paperback (I'm assuming those are the only two options), then you can do a query like this:
```
select title, count(*) from book group by title having count(*) > 1
```
You also could join to the table.
```
select t0.title from
(
select title from book where btype = 'h'
) t0
inner join
(
select title from book where btype = 'p'
) t1 on t0.title = t1.title
```
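As a quick sanity check, the join-of-two-filtered-sets idea can be run in SQLite with a few of the question's titles:

```python
import sqlite3

# Join the set of hardcover titles to the set of paperback titles;
# only titles present in both sets survive.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE book (title TEXT, btype TEXT)")
con.executemany("INSERT INTO book VALUES (?,?)", [
    ('Dune', 'p'), ('Dune', 'h'), ('The Shining', 'p'),
    ('The Shining', 'h'), ('Little Women', 'p'),
])

rows = con.execute("""
SELECT t0.title
FROM (SELECT title FROM book WHERE btype = 'h') t0
INNER JOIN (SELECT title FROM book WHERE btype = 'p') t1
        ON t0.title = t1.title
ORDER BY t0.title
""").fetchall()
print(rows)  # [('Dune',), ('The Shining',)]
```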
Edited for the two tables
```
select * from table_one where bookid in (
select t0.bookid
from
(
select bookid from table_two where type = 'h'
) t0
inner join
(
select bookid from table_two where type = 'p'
) t1
on t0.bookid = t1.bookid
)
``` | This could also work.
```
SELECT TITLE
FROM BOOKS
GROUP BY TITLE
HAVING COUNT(DISTINCT TYPE) > 1
``` | Select if data has two instances with different values sql | [
"",
"sql",
"postgresql",
""
] |
I have a table and want to update a column based on another column of the same table. Please look at the image below for the table design and data.

In this table I want to update JoinDate using the steps below.
1) Take ModifiedDatetime if it is not null, otherwise CreatedDate.
2) Now, if NextLevel is Hour, add one hour to the date from step 1.
3) Now, if NextLevel is Day, add one day to the date from step 1.
4) Now, if NextLevel is Min, add one minute to the date from step 1.
5) Finally, use the resulting date to update JoinDate.
I did this using the cursor below, but I want to do it with a single SQL update query.
```
DECLARE @EmpID INT
Declare @DtTm datetime
DECLARE @NextLevl VARCHAR(10) -- 'Min', 'Hour' or 'Day'
Declare @JoinDtTm datetime
DECLARE CurProg CURSOR FOR
select EmpID from tblEmp
OPEN CurProg
FETCH NEXT
FROM CurProg INTO @EmpID
WHILE @@FETCH_STATUS = 0
BEGIN
select @DtTm = case when ModifiedTime is null then CreatedDate else ModifiedTime end, @NextLevl = NextLevel from tblEmp where EmpID = @EmpID
if (@NextLevl = 'Min')
BEGIN
set @JoinDtTm = DATEADD(MI,1,@DtTm)
END
ELSE IF (@NextLevl= 'Hour')
BEGIN
set @JoinDtTm = DATEADD(HH,1,@DtTm)
END
ELSE
BEGIN
set @JoinDtTm = DATEADD(D,1,@DtTm)
END
--update tblEmp set JoinDtTm = @JoinDtTm where EmpID = @EmpID
FETCH NEXT
FROM CurProg INTO @EmpID
END
CLOSE CurProg
DEALLOCATE CurProg
```
Thanks,
Hitesh | Try this:
```
UPDATE TableName
SET JoinDate = CASE WHEN NextLevel = 'Hour' THEN DATEADD(HH,1,ISNULL(ModifiedDate,CreatedDate))
WHEN NextLevel = 'Day' THEN DATEADD(DD,1,ISNULL(ModifiedDate,CreatedDate))
WHEN NextLevel = 'Min' THEN DATEADD(MI,1,ISNULL(ModifiedDate,CreatedDate))
END
``` | To do this without a cursor in an update statement then this should work
```
UPDATE TBL_EMP
SET JOINDATE =
(CASE WHEN MODIFIEDTIME IS NOT NULL
THEN (CASE WHEN NEXTLEVEL ='Hour' THEN dateadd(hh,1,modifiedtime)
ELSE (CASE WHEN NEXTLEVEL = 'Day' THEN dateadd(dd,1,MODIFIEDTIME)
ELSE(CASE WHEN NEXTLEVEL = 'Min' THEN dateadd(n,1,MODIFIEDTIME) ELSE MODIFIEDTIME END) END) END)
ELSE(CASE WHEN NEXTLEVEL ='Hour' THEN dateadd(hh,1,CREATEDDATE)
ELSE(CASE WHEN NEXTLEVEL = 'Day' THEN dateadd(dd,1,CREATEDDATE)
ELSE(CASE WHEN NEXTLEVEL = 'Min' THEN dateadd(n,1,CREATEDDATE) ELSE CREATEDDATE END) END) END) END)
FROM TBL_EMP
``` | Update data based on other column | [
"",
"sql",
"sql-server",
"sql-server-2008",
"sql-update",
""
] |
Let's say I have these two relations:
`alert`:
```
id http_code_result domain_id ip_src
1 404 1 1.1.1.1
2 404 1 1.1.1.1
3 200 1 1.1.1.1
```
`domain`:
```
id name
1 google
```
I want to get only the domains with 404 responses. So, for the IP address 1.1.1.1 it would return 0 rows, because the domain with id 1 also has one HTTP 200 response. | My understanding of your question is: "**You want to retrieve the domains having HTTP response 404. If the same domain is having responses other than 404 (Say 200), then that domain name should not be displayed.**"
Based on this understanding, we can write the query as follows:
```
SELECT d.id, d.name
FROM domain d, alert a
WHERE a.domain_id=d.id AND a.domain_id NOT IN(SELECT domain_id
FROM alert
WHERE http_code_result!=404);
```
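For a runnable check of the `NOT IN` filter, here is an SQLite sketch; a second, hypothetical domain with only 404 alerts is added so the result is non-empty, and `DISTINCT` collapses the join duplicates:

```python
import sqlite3

# Keep domains whose alerts are exclusively 404s.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE domain (id INT, name TEXT);
CREATE TABLE alert (id INT, http_code_result INT, domain_id INT, ip_src TEXT);
INSERT INTO domain VALUES (1, 'google'), (2, 'example');
INSERT INTO alert VALUES
  (1, 404, 1, '1.1.1.1'), (2, 404, 1, '1.1.1.1'), (3, 200, 1, '1.1.1.1'),
  (4, 404, 2, '2.2.2.2');
""")

rows = con.execute("""
SELECT DISTINCT d.id, d.name
FROM domain d
JOIN alert a ON a.domain_id = d.id
WHERE a.domain_id NOT IN (SELECT domain_id FROM alert
                          WHERE http_code_result != 404)
""").fetchall()
print(rows)  # domain 1 has a 200 response, so only [(2, 'example')] survives
```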
Please refer to this **[SQLFiddle](http://sqlfiddle.com/#!12/11f59/2)** link for a better understanding. | I'm not 100% sure about the question, but if I'm right, you can fix this by using an aggregation:
```
SELECT
domain.name,
alert.ip_src
FROM alert
INNER JOIN domain ON alert.domain_id = domain.id
GROUP BY -- create a group by name and ip:
domain.name,
alert.ip_src
HAVING array_agg(http_code_result) @> ARRAY[404] -- this group must have a 404
AND NOT array_agg(http_code_result) @> ARRAY[200]; -- this group may not have a 200
```
Maybe you don't want the IP address; then just leave it out. | SQL query to fetch only rows if all values are equals to some value | [
"",
"sql",
"postgresql",
""
] |
I have a table like this:
```
ID action Date
xxx action1 20140101
xxx action2 20140102
yyy action1 20140116
yyy action2 20140117
yyy action3 20140122
yyy action4 20140123
yyy action5 20140131
zzz action1 20140103
zzz action2 20140106
zzz action3 20140107
```
Is there a way to select 2 records for each ID (the first one and the last one)?
```
xxx action1 20140101
xxx action2 20140102
yyy action1 20140116
yyy action5 20140131
zzz action1 20140103
zzz action3 20140107
```
Alternatively doing some other elaboration
```
ID firstAction Date LastAction Date
xxx action1 20140101 action2 20140102
yyy action1 20140116 action5 20140131
zzz action1 20140103 action3 20140107
```
These results can easily be obtained by coding in Java or C++. But with SQL?
First I ordered by ID and then by Date, but then I didn't know how to proceed. | Try this:
```
select x.ID, x.mindate as FirstActionDate, x.FirstAction as FirstAction, y.maxdate as LastActionDate, y.LastAction as LastAction
from
(
--FirstDate and Action
select a.ID, a.mindate as mindate, t.action as FirstAction
from
(
select ID, Min(date) as mindate
from data
group by ID
) a inner join data t on a.ID = t.ID and a.mindate = t.date
) x inner join
(
--Last Date And Action
select a.ID, a.maxdate, t.action as LastAction
from
(
select ID, max(date) as maxdate
from data
group by ID
) a inner join data t on a.ID = t.ID and a.maxdate = t.date
) y on x.ID = y.ID
```
[Demo](http://sqlfiddle.com/#!3/bc4b1/5)
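For a quick sanity check, here is a compact SQLite sketch of the same min/max-plus-join idea, joining back on both `ID` and the date (sample rows trimmed from the question):

```python
import sqlite3

# Find MIN/MAX date per ID, then join back twice to pick up the
# matching first and last action names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (ID TEXT, action TEXT, date TEXT)")
con.executemany("INSERT INTO data VALUES (?,?,?)", [
    ('xxx', 'action1', '20140101'), ('xxx', 'action2', '20140102'),
    ('yyy', 'action1', '20140116'), ('yyy', 'action2', '20140117'),
    ('yyy', 'action5', '20140131'),
])

rows = con.execute("""
SELECT m.ID, f.action AS FirstAction, m.mindate,
       l.action AS LastAction, m.maxdate
FROM (SELECT ID, MIN(date) AS mindate, MAX(date) AS maxdate
      FROM data GROUP BY ID) m
JOIN data f ON f.ID = m.ID AND f.date = m.mindate
JOIN data l ON l.ID = m.ID AND l.date = m.maxdate
ORDER BY m.ID
""").fetchall()
print(rows)
# [('xxx','action1','20140101','action2','20140102'),
#  ('yyy','action1','20140116','action5','20140131')]
```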
Edit: I think the correct approach here is to:
(updated demo to match your sample data)
1. Get the ID and the first date (`min(date)`) and then join on the original table (`a.mindate = t.date`) to get the action name - this will give you half of the story - table `x`
2. In a similar fashion you can build table `y` to get the last date and the action
3. Finally you join them on the `ID` column to get all the columns in single `select` statement. | You can do this with window functions:
```
select t.id, t.action, t.date
from (select t.*,
row_number() over (partition by id order by date asc) as seqnum_asc,
row_number() over (partition by id order by date desc) as seqnum_desc
from table t
) t
where seqnum_asc = 1 or seqnum_desc = 1;
``` | get only the first and the last occurence from a table | [
"",
"sql",
"sql-server",
""
] |
How can I get every article + name of all authors in the article (different from article to another)
```
Users table :
ID NAME FNAME
1 A AA
2 B BB
3 C CC
4 D DD
Articles
ID AUTHOR title
1 1,3 ty
2 4 tz
Results
A.id A.title USER1_name USER2_name
1 ty A C
2 tz D no user
```
Its working thanks, is there a way to make the one user column instead of two like this :
```
ID TITLE NAME
1 ty A,C
2 yy B,C,E
```
because sometimes we have mote than 7 authors :/ | Assuming maximum of 2 authors per article
```
select a.id, a.title, b.u1, b.u2
from articles a
join (
select if(a.id=b.id,a.id,concat(a.id,',',b.id)) author, a.name u1, if(a.id=b.id,'no user',b.name) u2
from users a
join users b) b on a.author = b.author;
```
[demo](http://sqlfiddle.com/#!2/87c99/4) | Try with this
```
SELECT A.id, A.title, A.name
FROM USERS U, ARTICLES A
WHERE U.id in (A.author)
``` | SQL SELECT from multiple tables + IN LIST | [
"",
"mysql",
"sql",
"select",
""
] |
I'm currently maintaining a website built with HTML/CSS/JavaScript/PHP that uses an MSSQL database. We have one database for development purposes on a dev server and another on a production server. Both servers have a table named data.DailyNormalData. I would like to take every row from the dev database and put them into the existing table in the prod database (after erasing everything in the prod table), as the data in the prod database is not up to date.
My question is, how could I achieve this using Microsoft SQL Server Management Studio 2012? I have searched on how to do this, but have not found exactly a solution for my specific needs. | You should use the Import/Export wizard in Management Studio. Your dev database will be your source, and your production database will be your destination. The data.DailyNormalData table should be shown in the list. (If you like, you can clear out the destination table with `DELETE FROM data.DailyNormalData` or `TRUNCATE TABLE data.DailyNormalData`.) | Try this:
right click on Database -> Tasks -> Export Data -> Follow instructions | Migrate data from one database table to another one on another server in MSSQL | [
"",
"sql",
"sql-server",
"database",
""
] |
Say I have data that describes different items sold and when they were sold. I want to break down this data and count the different items sold on a monthly basis. So here is what I have so far:
```
SELECT
ItemDescription
,OrderReceivedData
,COUNT(*) AS ItemCount
INTO 'C:\Users\whatever.csv'
FROM 'C:\user\inputFileHere.csv'
GROUP BY ItemDescription, OrderReceivedDate
HAVING OrderReceivedDate LIKE '%2011%'
```
Now the thing is that my dates are in a bad format. So what the query above does is show the count for an item on 01JAN2011, 02JAN2011, ..., 10FEB2011, and so on. But what I want is the count for JAN2011, FEB2011, MAR2011, and so on. So basically I don't want to GROUP BY the full OrderReceivedDate; I want to group by these specific 7 characters in OrderReceivedDate so I can ignore the day part. I hope it makes sense. So how do I do this? | ```
SELECT
ItemDescription
,SUBSTR(OrderReceivedDate,2,7) AS OrderReceivedDateUpdated
,COUNT(*) AS ItemCount
INTO 'C:\Users\whatever.csv'
FROM 'C:\user\inputFileHere.csv'
GROUP BY ItemDescription, OrderReceivedDateUpdated
HAVING OrderReceivedDate LIKE '%2011%'
``` | The simple approach, although a bit of a hack, is that you need to parse out the date characters, then group by that. For simplicity, you can reference the column by number. If you think this will change, repeat the parsing logic in your GROUP BY clause. This assumes the field contains two leading characters:
```
SELECT
ItemDescription
,RIGHT(OrderReceivedData, LEN(OrderReceivedData) - 2) AS MonthOrderReceivedData
,COUNT(*) AS ItemCount
INTO 'C:\Users\whatever.csv'
FROM 'C:\user\inputFileHere.csv'
GROUP BY ItemDescription, 2
HAVING OrderReceivedDate LIKE '%2011%'
```
I did not test this code, but should get you on the right track. | GROUP BY in sql to get monthly breakdown of data | [
"",
"mysql",
"sql",
"logparser",
""
] |
I have two tables, `Affaire(ID, Obj, ID_TA ...)` and `TypeAffaire(ID_TA, Label)`. I want to count the number of each `ID_TA` in Affaire and show the `label` instead of `ID_TA`. I tried this but it doesn't work:
```
SELECT A.ID_TA
,T.Label
,count(*) AS TotalAmount
FROM Affaire AS A
LEFT JOIN TypeAffaire AS T ON T.ID_TA = A.TA
GROUP BY A.ID_TA;
```
it says that label can't be found | You need to group by `T.Label` as well. All non-aggregated fields in the `select` clause must also be in the `group by` clause. | Add T.Label in `Group By`
Try this
```
SELECT A.ID_TA
,T.Label
,count(*) AS TotalAmount
FROM Affaire AS A
LEFT JOIN TypeAffaire AS T ON T.ID_TA = A.TA
GROUP BY A.ID_TA,T.Label;
``` | Select count and group by from two tables? | [
"",
"sql",
"ms-access",
""
] |
How would I go about showing **ALL** most common values in SQL?
So I have the query to show the most common value here below.
```
SELECT name, COUNT(*) AS popularity
FROM cattwo
GROUP BY name
ORDER BY popularity DESC
LIMIT 1;
+----------+------------+
| name | popularity |
+----------+------------+
| cat22610 | 7 |
+----------+------------+
```
However, when I display the top 10 most common values the result is ...
```
SELECT name, COUNT(*) AS popularity
FROM cattwo
GROUP BY name
ORDER BY popularity DESC
LIMIT 10;
+----------+------------+
| name | popularity |
+----------+------------+
| cat22610 | 7 |
| cat68704 | 7 |
| cat14153 | 7 |
| cat52476 | 7 |
| cat4556 | 7 |
| cat64173 | 7 |
| cat5586 | 7 |
| cat89302 | 6 |
| cat97131 | 6 |
| cat42010 | 6 |
+----------+------------+
```
The goal is to display all cats with the highest popularity. Something like this.
```
+----------+------------+
| name | popularity |
+----------+------------+
| cat22610 | 7 |
| cat68704 | 7 |
| cat14153 | 7 |
| cat52476 | 7 |
| cat4556 | 7 |
| cat64173 | 7 |
| cat5586 | 7 |
+----------+------------+
```
Help would be great. Thanks in advance. | I guess this query would help you!
```
SELECT NAME, COUNT(*) AS POPULARITY
FROM CATTWO
GROUP BY NAME
HAVING COUNT(*) =
(
SELECT COUNT(*) AS MAX_POPULARITY
FROM CATTWO
GROUP BY NAME
ORDER BY MAX_POPULARITY DESC
LIMIT 1
);
``` | ```
SELECT name, COUNT(*) AS popularity
FROM cattwo
GROUP BY name
HAVING COUNT(*) =
(
SELECT COUNT(*) AS popularity
FROM cattwo
GROUP BY name
ORDER BY popularity DESC
LIMIT 1
)
LIMIT 10;
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!2/4d91c5/2) | How to Find ALL Most Common Values in SQL? | [
"",
"mysql",
"sql",
"count",
"popularity",
""
] |
I want to randomize my data after grouping it. How can I achieve that in an elegant way?
Here I have created [SQLFiddle Demo](http://sqlfiddle.com/#!2/a8b81/1).
**Expected Results**,
Gold, Gold 2, Gold 3, Silver 3, Silver, Bronze
Gold 2, Gold 3, Gold, Silver, Silver 3, Bronze
Gold, Gold 3, Gold 2, Silver, Silver 3, Bronze
Bronze, Silver 3, Silver, Gold 2, Gold, Gold 3 etc....
In brief it should **group them first alphabetically (name) and then randomize the internally grouped data**.
I tried,
> SELECT \* FROM `type` ORDER BY name, RAND()
This does order by name but does not randomize them.
> SELECT \* FROM `type` GROUP BY name ORDER BY RAND()
This randomizes the data but does not group it. | Use [`substring_index(col, ' ', 1)`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index) to split the string value by the single-space delimiter and get the first token, and use this as your first order criterion. As your second order criterion use `rand()`, so that names that have the same first word are randomized:
```
select name
from type
order by substring_index(name, ' ', 1), rand()
```
The assumption here is that words in your name column are always delimited by a single space.
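Outside the database, the same idea — primary sort key = first word, secondary key = random — can be sketched in Python (the `names` list here is hypothetical sample data, not the question's table):

```python
import random

names = ["Gold", "Gold 2", "Gold 3", "Silver", "Silver 3", "Bronze"]

# Primary key: the first space-delimited token (what substring_index(name, ' ', 1) yields).
# Secondary key: a random number, so rows sharing a first word shuffle among themselves.
ordered = sorted(names, key=lambda n: (n.split(" ")[0], random.random()))

# Rows with the same first word always end up adjacent; only their internal order varies.
first_words = [n.split(" ")[0] for n in ordered]
```

Running this repeatedly keeps the groups in the same alphabetical order while the members of each group move around.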
**Edit**
Following are more examples on `substring_index`:
```
select substring_index('hello world', ' ', 1); -- 'hello'
select substring_index('hello', ' ', 1); -- 'hello'
select substring_index('hello_world foo', ' ', 1); -- 'hello_world'
select substring_index('yippie ki yay', ' ', 2); -- 'yippie ki'
``` | Try this; `substr` will benefit performance if you know the maximum number of characters in `name` you want to order by:
```
SELECT * FROM type ORDER BY substr(name,1,4), rand();
```
And following @gerrytan's answer above, use substring\_index for the real-life desired result.
[SQLFiddle Demo](http://sqlfiddle.com/#!2/a8b81/48) | Randomized data after grouping them | [
"",
"mysql",
"sql",
"random",
"group-by",
""
] |
Goal:
The goal is to give the variable the value '2014-06-14 09:00:00.000'.
Problem:
The code is created as a dynamic object; how do you turn '2014-06-14 16:20:10.000' into the value '2014-06-14 09:00:00.000'?
```
DECLARE @a datetime = '2014-06-14 16:20:10.000'
```
Information:
\*The variable @a's value will change all the time. The important thing is that "09:00:00.000" is not changeable.
\*The value " 16:20:10.000' " can be different from time to time. | ```
select dateadd(hour, 9, cast(cast(@a as date) as datetime))
``` | Write as:
```
-- to get desired result the base query should be like:
SELECT DATEADD(day, DATEDIFF(day,'19000101',@a), CAST('09:00:00.000' AS DATETIME2(7)))
-- and then you can convert it into any other desired format as:
SELECT CONVERT(VARCHAR(23),
DATEADD(day, DATEDIFF(day,'19000101',@a),
CAST('09:00:00.000' AS DATETIME2(7))),
121);
``` | Change Value based on Dynamical Object | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I want the resulting table to be like the example table I have provided. Even though the only month provided was 5, I want the table to display all 12 months, even if the Amount is 0 for the other months.
I'm using this query. Let me try to explain exactly what I want when it comes to the Amount: I want the amount column to contain the sum of every sale for the corresponding month.
```
Select
YEAR([Date]) as [Year],
MONTH([Date]) as [Month],
IsNull(SUM(Amount), 0) AS TotalSales
From Sales Left Outer Join Employee
On Employee.EmployeeID = Sales.EmployeeID
Group By YEAR([Date]), MONTH([Date])
Order By YEAR([Date]), MONTH([Date])
```
---
Employee Table,
```
+--+--+------+
|ID| Name |
+--+---------+
|1 |John Doe |
+--+---------+
|2 |Jane Doe |
+--+---------+
```
Sales Table,
```
+--+------+---------+-------+--------+
|ID|SaleID| Date |Amount |Quantity|
+--+------+---------+-------+--------+
|1 | 1 |5-14-2014|300 |12 |
+--+------+---------+-------+--------+
|1 | 2 |5-16-2014|600 |4 |
+--+------+---------+-------+--------+
|2 | 3 |5-14-2014|452 |10 |
+--+------+---------+-------+--------+
|2 | 4 |5-16-2014|356 |2 |
+--+------+---------+-------+--------+
```
What I want,
```
+--+---------+-----+------+--------+--------+
|ID| Name |Year |Month |Amount |Quantity|
+--+---------+-----+------+--------+--------+
|1 |John Doe |2014 |5 |900 |16 |
+--+---------+-----+------+--------+--------+
|2 |Jane Doe |2014 |5 |808 |12 |
+--+---------+-----+------+--------+--------+
``` | Try this
```
Select S.ID,E.Name,YEAR(MAX([Date])) as [Year],MONTH(MAX([Date])) as [Month],
IsNull(SUM(Amount), 0) AS TotalSales,IsNull(SUM(Quantity), 0) AS TotalQuantity
From Sales S Left Join Employee E On E.ID = S.ID
Group By S.ID,E.Name,YEAR([Date]), MONTH([Date])
Order By S.ID
```
**[FIDDLE DEMO](http://sqlfiddle.com/#!6/7e034/3)**
*Output:*
---
```
+--+---------+-----+------+--------+--------+
|ID| Name |Year |Month |Amount |Quantity|
+--+---------+-----+------+--------+--------+
|1 |John Doe |2014 |5 |900 |16 |
+--+---------+-----+------+--------+--------+
|2 |Jane Doe |2014 |5 |808 |12 |
+--+---------+-----+------+--------+--------+
```
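If every month (1-12) also has to appear even when no sale exists, the spine-of-months idea can be sketched outside SQL — a minimal Python sketch with hypothetical figures, not the question's tables:

```python
sales = {5: 900}  # month -> total amount; only May has data

# Build all 12 months and left-join the sales onto them, defaulting to 0 —
# the analogue of LEFT JOINing a fixed months list onto the aggregated query.
report = [(month, sales.get(month, 0)) for month in range(1, 13)]
```

Every month shows up exactly once, with 0 where there was no matching row.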
**Update**
If you want to display the months (1-12) even when a month would have an Amount of 0,
Try This
```
;With Months (Month,MonthID)
AS
(
select 'January' as Month , 1 as MonthID
UNION
select 'February' as Month , 2 as MonthID
UNION
select 'March' as Month , 3 as MonthID
UNION
select 'April' as Month , 4 as MonthID
UNION
select 'May' as Month , 5 as MonthID
UNION
select 'June' as Month , 6 as MonthID
UNION
select 'July' as Month , 7 as MonthID
UNION
select 'August' as Month , 8 as MonthID
UNION
select 'September' as Month , 9 as MonthID
UNION
select 'October' as Month , 10 as MonthID
UNION
select 'November' as Month , 11 as MonthID
UNION
select 'December' as Month , 12 as MonthID
)
SELECT R.ID,R.Name,R.[Year],M.[MonthID],M.[Month],R.TotalSales,R.TotalQUANTITY FROM
Months M LEFT JOIN
(
Select S.ID,E.Name,YEAR([Date]) as [Year],MONTH([Date]) as [Month],
IsNull(SUM(Amount), 0) AS TotalSales,IsNull(SUM(Quantity), 0) AS TotalQuantity
From Sales S Left Join Employee E On E.ID = S.ID
Left OUTER JOIN Months ON MonthID = MONTH([Date])
Group By S.ID,E.Name,YEAR([Date]), MONTH([Date])
) R ON R.Month = M.MonthID
Order By CASE WHEN R.ID IS NULL THEN 1 ELSE 0 End,R.ID,M.MonthID
```
**[FIDDLE DEMO](http://sqlfiddle.com/#!6/7e034/33)** | ```
SELECT e.ID,
e.Name,
YEAR(s.date)AS Year,
MONTH(s.date)AS Month,
sum(s.amount) AS Amount,
sum(s.quantity) AS Quantity
FROM Employee e
LEFT JOIN Sales s ON e.id=s.id
GROUP BY e.ID,e.Name,year(s.date),month(s.date)
``` | Dealing with Dates and Summing up Rows | [
"",
"sql",
"sql-server",
""
] |
I have a table in a SQL database into which I insert a service activation log for all users. For example: service a is deactivated for mike today, then activated for him tomorrow, and then deactivated again. I need to have the last status of each service for users in another table (unique constraint: userid, serviceid).
```
logid | userid | serviceid | status
-------+-------------+-------------+--------
1 | mike | a | 0
2 | mike | b | 1
3 | mike | b | 0
4 | mike | a | 1
5 | Dave | c | 1
6 | Dave | a | 0
7 | mike | d | 1
8 | mike | c | 1
9 | mike | a | 0
```
For example: from the above table, I need to have the following table:
```
userid | serviceid | laststatus
-------+-------------+-------------+--------
mike | a | 0
mike | b | 0
mike | c | 1
mike | d | 1
Dave | c | 1
Dave | a | 0
```
Is there any while loop to read all records from table1 and insert or update table2 to store the last status of each service for users? like this:
```
while (select * from table1)
{
IF EXISTS
(
SELECT 1
FROM table2
where table2.userid=table1.userid and table2.serviceid=table1.serviceid
)
BEGIN
UPDATE table2
SET table2.laststatus = table1.status
WHERE table2.userid=@userid and table2.serviceid=table1.serviceid
END
ELSE
INSERT INTO table2 VALUES ( table1.userid ,table1.serviceid ,table1.status )
}
```
Last status is the status with the largest logid for a given user and service | This can be solved with a `MERGE` statement, like so:
```
merge table2 as tgt
using
(
select * from table1
where logid in
(
--Get the row which has the last status for that user-service combination
select distinct max(logid) over (partition by userid, serviceid order by userid,serviceid) mid
from table1)
) as src
on tgt.userid = src.userid and tgt.serviceid = src.serviceid
when matched then
update
set tgt.[status] = src.[status]
when not matched then
insert (userid,serviceid, [status])
values(src.userid, src.serviceid, src.[status]);
```
Demo [here](http://rextester.com/MZPH40248). | Create an `INSERT` Trigger on Table1 which would update your Table2.
The trigger would update `laststatus` of Table2. You can get the last status of a particular user like this:
```
declare @status int
select top 1 @status = status from Table1 where userid = 'dave' and serviceid = 'c'
order by logid desc
select @status
```
Hope it helps. | insert the last status of a service of users to another table in SQL Server | [
"",
"sql",
"sql-server",
""
] |
I have a number of electricity meters connected to a PLC (Programmable Logic Controller).
The PLC counts (integrates) the kWh pulses from the meters over a 24 hour period.
The count is reset at midnight.
The current count value is logged to a table every second.
I need to retrieve the kWh total in each 15 minute period for a meter.
E.g.:
```
Meter count at 11:00:00 = 1000
Meter count at 11:14:59 = 1110
Meter count at 11:29:59 = 1200
Meter count at 11:59:59 = 1400
```
15 Minute kWh totals:
```
At 11:14:59 = 110
At 11:29:59 = 90
At 11:59:59 = 200
```
Basically, I want to subtract the count at a 15 minute period from the count at a previous 15 min period.
The database is MSSQLServer.
Is it possible to return the above using a select query?
I need to export this data to a csv file. | I think a CTE and a self join are enough to achieve the desired result:
```
--constructing ID
WITH vPLC as (select *, ROW_NUMBER() OVER (order by date) as recID
from @PLC
)
select vPLC.*, COALESCE( vPLC.value - n.Value, 0 ) as diffVal
from vPLC
left JOIN vPLC n on n.recID = vPLC.recID - 1
----------OUTPUT----------
recId date value diffVal
1 6/5/2014 11:00:00 AM 1000 0
2 6/5/2014 11:14:59 AM 1110 110
3 6/5/2014 11:29:59 AM 1200 90
4 6/5/2014 11:59:59 AM 1400 200
```
[Proof code is here](http://rextester.com/NSRZ84736) | The following queries will get the data on the quarters of the hour, not one second before them as in the OP's data.
The OP didn't provide the table schema; this answer uses this one:
```
CREATE TABLE UtilityMeter (
_Time Time
, KWh Int
)
```
The server is supposed to be SQLServer 2008 or better (to use the `TIME` type)
The first thing to do is to filter the data to get only the quarters
```
SELECT _Time, KWh
FROM UtilityMeter
WHERE _Time = Cast(DateAdd(mi, DateDiff(mi, 0, _Time) / 15 * 15, 0) as Time)
```
`DateDiff` returns an integer, so `minutes / 15` is an integer division and `minutes / 15 * 15` will not return the original minutes but the quarter boundary before them.
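The same integer-division trick can be illustrated in Python, working on minutes since midnight for simplicity:

```python
def floor_to_quarter(minutes_since_midnight: int) -> int:
    # Integer division truncates, so // 15 * 15 snaps the value
    # down to the nearest quarter-hour boundary.
    return minutes_since_midnight // 15 * 15

# 11:29 is 689 minutes after midnight; the quarter before it starts at 11:15 (675).
boundary = floor_to_quarter(11 * 60 + 29)
```

A value already on a boundary maps to itself; one minute earlier falls into the previous quarter.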
Now if the SQLServer is 2012 or better it's possible to use `LAG`
```
With Quarter AS (
SELECT _Time, KWh
FROM UtilityMeter
WHERE _Time = Cast(DateAdd(mi, DateDiff(mi, 0, _Time) / 15 * 15, 0) as Time)
)
SELECT _Time
, BlockConsume = KWh - LAG(KWh, 1, 0) OVER (ORDER BY _Time)
FROM Quarter;
```
otherwise a self-`JOIN` is needed
```
With Quarter AS (
SELECT _Time, KWh
, ID = Row_Number() OVER (ORDER BY _Time)
FROM UtilityMeter
WHERE _Time = Cast(DateAdd(mi, DateDiff(mi, 0, _Time) / 15 * 15, 0) as Time)
)
SELECT _1._Time
, BlockConsume = _1.KWh - _2.KWh
FROM Quarter _1
INNER JOIN Quarter _2 ON _1.ID = _2.ID + 1
```
the calculated `ID` added in the `CTE` is to simplify the `JOIN` condition.
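What both variants compute — each quarter reading minus the previous one — can be sketched with plain Python over hypothetical cumulative readings matching the question's example:

```python
readings = [1000, 1110, 1200, 1400]  # cumulative kWh at successive quarters

# LAG(KWh, 1, 0) pairs each reading with its predecessor (0 for the first row);
# the self-join on _1.ID = _2.ID + 1 produces the same pairing.
block_consume = [cur - prev for prev, cur in zip([0] + readings, readings)]
```

The first block is the full counter value (predecessor defaulted to 0), and the rest are the 15-minute totals 110, 90, 200 from the question.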
`SQLFiddle demo` with both queries and generated data | SQL Query to return 15 minute totals from a totaliser integrating over 24 hours | [
"",
"sql",
"sql-server",
""
] |
I've got two tables: `AP` and table `AP supervisory visits`, with a one-many relationship respectively. `AP supervisory visits` has no primary key or unique index (so it can be a bit hard to work with) and the function "IN()" is not having an easy time with it.
Table `AP`:
```
+------------------------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------------------+-------------+------+-----+---------+-------+
| HID | varchar(50) | NO | PRI | NULL | |
| yr | varchar(50) | NO | PRI | NULL | |
| mo | varchar(50) | NO | PRI | NULL | |
```
Table `AP supervisory visits`:
```
+------------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------------+-------------+------+-----+---------+-------+
| HID | varchar(50) | YES | MUL | NULL | |
| yr | varchar(50) | YES | | NULL | |
| mo | varchar(50) | YES | | NULL | |
| exported | datetime | YES | | NULL | |
| visitor name | varchar(50) | YES | | NULL | |
| title | varchar(50) | YES | | NULL | |
| reason for visit | varchar(50) | YES | | NULL | |
| number of visits | int(11) | YES | MUL | 0 | |
+------------------+-------------+------+-----+---------+-------+
```
I've got about 8000 records in AP and 1700 unique records (using HID, yr, mo) in `AP supervisory visits`. I would like all the records in the `AP` table that have no 'children' in the `AP supervisory visits` table. Using other articles on Stack Overflow I came to the conclusion I should use **"NOT IN()"**. This works on a few of my tables with similar relationships, but not this time (note the last query is the inverse, it lacks the NOT):
```
mysql> SELECT HID, yr, mo FROM AP
-> WHERE (HID, yr, mo) NOT IN(SELECT HID, yr, mo FROM `AP supervisory visits`);
Empty set (0.34 sec)
mysql> SELECT HID, yr, mo FROM AP
-> WHERE (HID, yr, mo) NOT IN(SELECT DISTINCT HID, yr, mo FROM `AP supervisory visits`);
Empty set (0.47 sec)
mysql> SELECT HID, yr, mo FROM AP
-> WHERE (HID, yr, mo) NOT IN(SELECT DISTINCT * FROM (SELECT DISTINCT HID, yr, mo FROM `AP supervisory visits`) AS temp);
Empty set (3.04 sec)
mysql> SELECT HID, yr, mo FROM AP
-> WHERE (HID, yr, mo) IN(SELECT DISTINCT HID, yr, mo FROM `AP supervisory visits`) LIMIT 5;
+-----+------+----+
| HID | yr | mo |
+-----+------+----+
| 109 | 2011 | 03 |
| 109 | 2012 | 05 |
| 109 | 2012 | 06 |
| 110 | 2010 | 11 |
| 110 | 2010 | 12 |
+-----+------+----+
5 rows in set (0.00 sec)
```
I've since created a temporary table with all the distinct combinations of HID, yr, mo, *but I would rather do without temporary tables altogether*. The third query above should be creating a temporary table in memory with distinct values (it is ugly, I know) but that doesn't seem to be the case.
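For intuition, the anti-join being asked for — rows of AP whose (HID, yr, mo) tuple never appears in the child table — can be sketched with Python sets (the rows below are hypothetical samples):

```python
ap = {("109", "2010", "01"), ("109", "2011", "03"), ("110", "2010", "11")}
visits = {("109", "2011", "03"), ("110", "2010", "11")}

# Rows of AP with no child row: a set difference on the (HID, yr, mo) tuple,
# which is what NOT EXISTS / the LEFT JOIN ... IS NULL pattern computes.
orphans = ap - visits
```

Duplicates and missing keys in the child table don't matter here; only membership of the tuple does.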
My 'temporary solution':
```
mysql> CREATE TABLE myTempAPSup SELECT DISTINCT HID, yr, mo FROM `AP supervisory visits`;
mysql> ALTER TABLE myTempAPSup ADD PRIMARY KEY(HID, yr, mo);
mysql> SELECT HID, yr, mo FROM AP
-> WHERE (HID, yr, mo) NOT IN (SELECT HID, yr, mo FROM myTempAPSup) LIMIT 5;
+-----+------+----+
| HID | yr | mo |
+-----+------+----+
| 109 | 2010 | 01 |
| 109 | 2012 | 01 |
| 109 | 2012 | 02 |
| 109 | 2012 | 03 |
| 109 | 2012 | 04 |
+-----+------+----+
5 rows in set (0.00 sec)
```
Is there a way to get the rows with no 'children' from such a badly made table? I've assumed my troubles stem from the lack of unique/primary keys, but am I missing some other syntactical 'gotcha'? | If you come from an Oracle background, the idea of a left join, and then checking for null may seem weird. That solution is wicked fast though, as you're not scanning through every record in the joined table. Another approach would be to use a correlated subquery with the not exists clause.
```
SELECT a.HID
, a.yr
, a.mo
FROM AP a
WHERE NOT EXISTS (
SELECT v.HID
FROM `AP supervisory visits` v
WHERE v.HID = a.HID
AND v.yr = a.yr
AND v.mo = a.mo
)
``` | You can use a `LEFT JOIN` and check if there is a child record, which is null when it does not exist.
```
select a.HID, a.yr, a.mo
from AP a
left join `AP supervisory visits` b on a.HID = b.HID and a.yr = b.yr and a.mo = b.mo
where b.yr is null;
``` | Using NOT IN() mysql with table lacking unique rows or primary keys | [
"",
"mysql",
"sql",
"primary-key",
"distinct",
""
] |
Ok, This has me stumped...
My first query below returns all possible rows I am interested in just fine.
**First Query**:
```
select * from FloatTable where Val > 0 and TagIndex = 325
```
My second query below is supposed to return all the rows with the last day (not necessarily the calendar last day, but whatever max value is present in the database) of each month from all available rows. BUT I get all months except the month of May (integer value of 5).
**Second Query**
```
select
DateAndTime, TagIndex,Val
from FloatTable
WHERE
(TagIndex = 325) AND
(Val > 0) AND
DateAndTime IN (Select Max(DateAndTime) from FloatTable
group by month(DateAndTime), Year(DateAndTime)
)
```
Using SQL-Server 2012. | ```
SELECT *
FROM (
select DateAndTime, TagIndex,Val
,ROW_NUMBER() OVER (PARTITION BY MONTH(DateAndTime) ORDER BY DateAndTime DESC) RN
FROM FloatTable
WHERE TagIndex = 325
AND Val > 0
)A
WHERE RN = 1
```
Since you are using SQL Server 2012, you can make use of the new window functions like `LAST_VALUE()` and
`FIRST_VALUE()`.
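As an aside, the keep-one-row-per-partition idea behind all these variants can be sketched in plain Python (hypothetical rows, assumed pre-sorted by month):

```python
from itertools import groupby

# (month, day, value) rows, sorted by month.
rows = [(1, 5, "a"), (1, 30, "b"), (2, 10, "c"), (2, 28, "d")]

# For each month partition, keep the row with the greatest date — the row that
# ROW_NUMBER() ... ORDER BY DateAndTime DESC would number 1.
latest = [max(group, key=lambda r: r[1]) for _, group in groupby(rows, key=lambda r: r[0])]
```

Each month contributes exactly one row, regardless of how many rows it started with.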
**Using FIRST\_VALUE()**
```
SELECT *
FROM (
SELECT DateAndTime, TagIndex,Val
,FIRST_VALUE(DateAndTime) OVER (PARTITION BY MONTH(DateAndTime)
ORDER BY DateAndTime DESC) Last_Date
FROM FloatTable
WHERE TagIndex = 325
AND Val > 0
) A
WHERE DateAndTime = Last_Date
```
**Using LAST\_VALUE()**
```
SELECT *
FROM (
SELECT DateAndTime, TagIndex,Val
,LAST_VALUE(DateAndTime) OVER (PARTITION BY MONTH(DateAndTime) ORDER BY DateAndTime
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) Last_Date
FROM FloatTable
WHERE TagIndex = 325
AND Val > 0
) A
WHERE DateAndTime = Last_Date
``` | Your second query does not work (as you expect) because you put the conditions only in the main query and not in the subquery. If you add them, it will work fine:
```
SELECT
DateAndTime, TagIndex, Val
FROM FloatTable
WHERE
TagIndex = 325 AND
Val > 0 AND
DateAndTime IN
( SELECT Max(DateAndTime) FROM FloatTable
WHERE TagIndex = 325 AND
Val > 0
GROUP BY Month(DateAndTime), Year(DateAndTime)
) ;
```
Using a Common Table Expression, it may be more readable:
```
; WITH cte AS
( SELECT
DateAndTime, TagIndex, Val
FROM FloatTable
WHERE
TagIndex = 325 AND
Val > 0
)
SELECT
DateAndTime, TagIndex, Val
FROM cte
WHERE DateAndTime IN
( SELECT Max(DateAndTime) FROM cte
GROUP BY Month(DateAndTime), Year(DateAndTime)
) ;
``` | Return just last day from all available rows for each month | [
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
In the SQL query below, I have 2 tables: documents and document user map. The documents table has columns documentId, documentname and userid; the document usermap table has documentid and userid. The documents that we create will be in the documents table (the created documentid plus the creator's userid). The documents we share with other users will be in the documentusermap table (documentid, other userid). Here I have to pass my userid to the stored procedure.
My aim is to get the documents that other users shared with me.
```
@i_userid int,
SELECT Doc.UserID,
Doc.DocumentID,
Doc.DocumentName
FROM Documents Doc
LEFT OUTER JOIN DocumentUserMapping DUM
ON DUM.DocumentID = Doc.DocumentID
AND DUM.UserID != Doc.UserID
``` | Try this query
```
SELECT Doc.UserID,
Doc.DocumentID,
Doc.DocumentName
FROM Documents Doc
JOIN DocumentUserMapping DUM
ON DUM.DocumentID = Doc.DocumentID
WHERE Doc.UserID != @i_userid
AND DUM.UserID = @i_userid
``` | Your question is hard to understand, but if I did understand it, this is what you want: a list of documents that are shared with you, excluding docs that you own. There's no reason this would be a LEFT join in this case.
```
-- Return documents shared to me that I do not own
SELECT Doc.UserID,
Doc.DocumentID,
Doc.DocumentName
FROM Documents Doc
JOIN DocumentUserMapping DUM
ON DUM.DocumentID = Doc.DocumentID
AND DUM.UserID = @i_userid -- Shared to me
WHERE Doc.UserID != @i_userid -- Not owned by me
```
Alternatively, you want both docs you own AND docs shared with you; the simplest way is shown below:
```
-- Return documents shared to me AS WELL AS docs I own
SELECT Doc.UserID,
Doc.DocumentID,
Doc.DocumentName
FROM Documents Doc
WHERE Doc.UserID = @i_userid -- Docs I own
-- Or.. Docs shared
OR EXISTS (SELECT 1 FROM DocumentUserMapping WHERE DocumentID = Doc.DocumentID AND UserID = @i_userid)
``` | Sql Query left outer join | [
"",
"sql",
"sql-server",
""
] |
I need to query a set of data (first WHERE) and, based on that result, sub-query with another WHERE.
Using the following code I get
```
**Incorrect syntax near the keyword 'where'.**
```
Could you tell me what I am doing wrong here?
---
```
select * from [Analytics]
WHERE
DateCreated >= '2014-05-01'
AND DateCreated < '2014-06-01'
AND Identification = 'ElementFlow'
where exists
(
SELECT *
FROM [Analytics]
WHERE Location = 'x.DetailsAdvertisement'
OR Location = 'x.DetailsShop'
OR Location = 'x.None'
OR Location = 'x'
)
``` | You should change `where exists` to `and exists` and that should be it. | Try this, removing the `WHERE` clause that appears twice:
```
select * from [Analytics]
WHERE
DateCreated >= '2014-05-01'
AND DateCreated < '2014-06-01'
AND Identification = 'ElementFlow'
AND exists
(
SELECT *
FROM [Analytics]
WHERE Location = 'x.DetailsAdvertisement'
OR Location = 'x.DetailsShop'
OR Location = 'x.None'
OR Location = 'x'
)
```
Or try this...
```
select * from [Analytics]
WHERE
DateCreated >= '2014-05-01'
AND DateCreated < '2014-06-01'
AND Identification = 'ElementFlow'
AND location in ('x.DetailsAdvertisement' ,'x.DetailsShop','x.None','x')
``` | How to use two where clauses in a subquery? | [
"",
"sql",
"sql-server",
""
] |
I have a csv file that contains phone numbers; some of them have 9 digits and some of them have 10. Is there a command that would allow transforming the column so that numbers that have only 9 digits get a 0 prepended to them?
For example,
if the column has values "443332332" and "0441223332", I would like to have the value of the one with 9 digits changed to "0443332332"?
Sorry, I should have elaborated.
I was wondering if there is a command to do it easily in SQLite? I prefer not to use Excel to transform the column, as it would be much easier and faster if I can get it working with SQLite. | A more generic solution would be:
```
select substr('0000000000'||'1234567', -10, 10) from table_name;
```
The above query would always return 10 digits, padding with leading zeroes for however many digits are missing.
For example, the above query would return: **0001234567**
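The same expression can be sanity-checked in SQLite, which also supports negative `substr` offsets — a quick check on the question's two sample numbers, not part of the answer itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Prepend ten zeroes, then keep the rightmost 10 characters:
# a 9-digit number gains one leading zero, a 10-digit number is unchanged.
padded = conn.execute(
    "SELECT substr('0000000000' || '443332332', -10, 10)"
).fetchone()[0]
unchanged = conn.execute(
    "SELECT substr('0000000000' || '0441223332', -10, 10)"
).fetchone()[0]
conn.close()
```

`substr(x, -10, 10)` starts ten characters from the right, so the result is always exactly the last 10 characters of the zero-padded string.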
For Update, use
```
UPDATE TABLE_NAME SET PHONE_NO = substr('0000000000'|| PHONE_NO, -10, 10);
``` | If you're sure that just prepending a zero on strings with length 9 will work for your application, something simple will work:
```
SELECT CASE WHEN LENGTH(phone_number) = 9 THEN '0'||phone_number
ELSE phone_number
END AS phone_number
FROM your_table
;
```
You could also update the table, depending on your needs:
```
UPDATE your_table
SET phone_number = '0'||phone_number
WHERE LENGTH(phone_number) = 9
;
``` | Transforming a column to have 10 Digits | [
"",
"mysql",
"sql",
"sqlite",
"csv",
""
] |
**Edit**:
[SQLZoo More Join Operations](http://sqlzoo.net/wiki/More_JOIN_operations) problem 15 has changed since I asked this question. It now states: "List the films released in the year 1978 ordered by the number of actors in the cast, then by title."
I give thanks to all who tried to help with the original phrasing. I've updated the accepted answer to match the current problem.
**Original Question**:
I'm trying to solve problem number 15 under [SQLZoo More Join Operations](http://sqlzoo.net/wiki/More_JOIN_operations) (I'm brushing up for an interview tomorrow)
The question is: "List the 1978 films by order of cast list size. "
My answer is:
```
SELECT movie.title, count(casting.actorid)
FROM movie INNER JOIN casting
ON movie.id=casting.movieid
WHERE movie.yr=1978
GROUP BY movie.id
ORDER BY count(casting.actorid) desc
```
This is essentially identical to the answer given by [Gideon Dsouza](http://www.gideondsouza.com/blog/solutions-to-sqlzoo.net-more-join-operation-questions-movie-database) except that my solution does not assume titles are unique:
```
SELECT m.title, Count(c.actorid)
FROM casting c JOIN movie m ON
m.id = c.movieid
WHERE m.yr = 1978
GROUP BY m.title
ORDER BY Count(c.actorid) DESC
```
Neither my solution nor his is marked correct.
The results from my solution and the "correct" solution are given at the end. My list has two movies ("Piranha" and "The End") that the "correct" solution lacks. And the "correct" solution has two movies ("Force 10 From Navarone" and "Midnight Express") that mine lacks.
Since these movies are all in the smallest cast size, I hypothesized that SQLZoo is cutting off the query at 50 rows and that an ordering irregularity causes the difference. However, I tried adding `,fieldname` to the end of my `order by` clause for all values of `fieldname`, but none yielded an identical answer.
Am I doing something wrong or is SQLZoo broken?
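The tie-ordering issue is easy to reproduce outside SQL: without a secondary sort key, rows with equal counts can come back in any order, while adding one pins them down. A small Python sketch over hypothetical films:

```python
films = [("Piranha", 12), ("Jaws 2", 12), ("The Swarm", 37)]

# Sorting only by count leaves the relative order of the 12-count ties up to
# the engine; adding the title as a secondary key makes ties deterministic.
by_count_then_title = sorted(films, key=lambda f: (-f[1], f[0]))
```

Two engines (or two runs) can disagree on tied rows unless every sort key chain ends in something unique.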
---
## Result Listings
My solution yields (after using libreoffice to make a fixed width column):
```
The Bad News Bears Go to Japan 50
The Swarm 37
Grease 28
American Hot Wax 27
The Boys from Brazil 26
Heaven Can Wait 25
Big Wednesday 21
Orchestra Rehearsal 19
A Night Full of Rain 19
A Wedding 19
The Cheap Detective 19
Go Tell the Spartans 18
Superman 17
Movie Movie 17
The Driver 17
The Cat from Outer Space 17
Death on the Nile 17
The Star Wars Holiday Special 17
Blue Collar 16
J.R.R. Tolkien's The Lord of the 16
Ice Castles 16
International Velvet 16
Coming Home 15
Revenge of the Pink Panther 15
The Brink's Job 15
David 15
The Chant of Jimmie Blacksmith 15
The Water Babies 15
Violette Nozière 15
Occupation in 26 Pictures 15
Without Anesthesia 15
Bye Bye Monkey 15
Alexandria... Why? 15
Who'll Stop The Rain 15
Gray Lady Down 15
Damien: Omen II 14
The Empire of Passion 14
Bread and Chocolate 14
I Wanna Hold Your Hand 14
Closed Circuit 14
Almost Summer 13
Goin' South 13
An Unmarried Woman 13
The Left-Handed Woman 13
Foul Play 13
The End 12
California Suite 12
In Praise of Older Women 12
Jaws 2 12
Piranha 12
```
The correct answer is given as:
```
The Bad News Bears Go to Japan 50
The Swarm 37
Grease 28
American Hot Wax 27
The Boys from Brazil 26
Heaven Can Wait 25
Big Wednesday 21
A Wedding 19
A Night Full of Rain 19
Orchestra Rehearsal 19
The Cheap Detective 19
Go Tell the Spartans 18
Superman 17
The Star Wars Holiday Special 17
Death on the Nile 17
The Cat from Outer Space 17
Movie Movie 17
The Driver 17
Blue Collar 16
Ice Castles 16
J.R.R. Tolkien's The Lord of the 16
International Velvet 16
Coming Home 15
The Brink's Job 15
Gray Lady Down 15
Bye Bye Monkey 15
Without Anesthesia 15
Violette Nozière 15
The Water Babies 15
Revenge of the Pink Panther 15
Who'll Stop The Rain 15
Alexandria... Why? 15
Occupation in 26 Pictures 15
David 15
The Chant of Jimmie Blacksmith 15
The Empire of Passion 14
Damien: Omen II 14
Closed Circuit 14
Bread and Chocolate 14
I Wanna Hold Your Hand 14
An Unmarried Woman 13
Almost Summer 13
Goin' South 13
Foul Play 13
The Left-Handed Woman 13
Jaws 2 12
California Suite 12
In Praise of Older Women 12
Force 10 From Navarone 12
Midnight Express 12
``` | There is no problem on SQLZoo; you just need to add `title` to the `ORDER BY` clause
because the requirement is to order by count, then title. Below is the modified version of your SQL:
```
SELECT m.title, Count(c.actorid)
FROM casting c JOIN movie m ON
m.id = c.movieid
WHERE m.yr = 1978
GROUP BY m.title
ORDER BY Count(c.actorid) DESC, title
``` | I do think SQLZoo is coding their answer a little bit differently than OP causing ties to be non-ordered.. as far as I can tell. I was stuck on this problem as well with an answer similar to OP.
I did try different combinations of `GROUP BY` and `JOIN` (per comments) before I resorted to custom ordering.. maybe I missed the correct combination..
So, in order to get SQLZoo's answer (the "smiley face"), I had to use `CASE title WHEN` to put in a custom order for ties:
```
SELECT title, COUNT(actorid)
FROM movie JOIN casting
ON (movieid=movie.id)
WHERE yr=1978
GROUP BY title
ORDER BY COUNT(actorid) DESC,
CASE title
WHEN 'A Wedding' THEN 1
WHEN 'A Night Full of Rain' THEN 2
WHEN 'Orchestra Rehearsal' THEN 3
WHEN 'The Cheap Detective' THEN 4
WHEN 'The Driver' THEN 1
WHEN 'Movie Movie' THEN 2
WHEN 'Superman' THEN 3
WHEN 'The Star Wars Holiday Special' THEN 4
WHEN 'Death on the Nile' THEN 5
WHEN 'The Cat from Outer Space' THEN 6
WHEN 'Blue Collar' THEN 1
WHEN 'Ice Castles' THEN 2
WHEN "J.R.R Tolkien's The Lord of the Rings" THEN 3
WHEN 'International Velvet' THEN 4
WHEN 'Alexandria... Why?' THEN 1
WHEN 'Occupation in 26 Pictures' THEN 2
WHEN 'The Chant of Jimmie Blacksmith' THEN 3
WHEN 'David' THEN 4
WHEN "The Brink's Job" THEN 5
WHEN 'Coming Home' THEN 6
WHEN 'Gray Lady Down' THEN 7
WHEN 'Bye Bye Monkey' THEN 8
WHEN 'Without Anesthesia' THEN 9
WHEN 'Violette Nozière' THEN 10
WHEN 'The Water Babies' THEN 11
WHEN 'Revenge of the Pink Panther' THEN 12
WHEN "Who'll Stop The Rain" THEN 13
WHEN 'The Empire of Passion' THEN 1
WHEN 'Damien: Omen II' THEN 2
WHEN 'Closed Circuit' THEN 3
WHEN 'Bread and Chocolate' THEN 4
WHEN 'I Wanna Hold Your Hand' THEN 5
WHEN 'Foul Play' THEN 1
WHEN 'The Left-Handed Woman' THEN 2
WHEN 'An Unmarried Woman' THEN 3
WHEN 'Almost Summer' THEN 4
WHEN "Goin' South" THEN 5
WHEN 'Piranha' THEN 1
WHEN 'Jaws 2' THEN 2
WHEN 'California Suite' THEN 3
WHEN 'In Praise of Older Woman' THEN 4
WHEN 'Force 10 From Navarone' THEN 5
WHEN 'Midnight Express' THEN 6
WHEN 'The End' THEN 7
END ASC
```
It is super cumbersome... **you should understand OP's answer** *before* you copy/paste the `CASE title WHEN` to get SQLZoo's "smiley face".
A helpful `SORT` resource: [tutorialspoint.com](http://www.tutorialspoint.com/sql/sql-sorting-results.htm) -- scroll down to:
> To fetch the rows with own preferred order, the SELECT query would as follows: | SQLZoo "More Join Operations" #15 | [
"",
"sql",
"join",
""
] |
I have a sql database which looks like this:

I want to get all `EAN`s (European Article Number, it's like a SKU) where `stock_level` is `Null`, and which have a "duplicate" (where duplicate means same `style`, `color` and `size`).
So the red cells represent this kind of "duplicate" pair, so `EAN` `400006` would be what I want to output and later delete.
Knowing how I can group and count, I still can't figure out a way to isolate the `EAN`s. | I would suggest:
```
select *
from eans t
where stock_level is null and
exists (select 1
from eans t2
where t2.style = t.style and
t2.color = t.color and
t2.size = t.size and
t2.stock_level is not null
);
```
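Not part of the original answer, but a quick way to sanity-check this `EXISTS` logic is SQLite via Python's stdlib; the table name and values below are hypothetical stand-ins for the screenshot data, not the OP's schema:

```python
import sqlite3

# Hypothetical stand-in for the screenshot data (SQLite, not the OP's engine).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE eans (ean INT, style TEXT, color TEXT, size TEXT, stock_level INT);
INSERT INTO eans VALUES
  (400005, 'A1', 'red', 'M', 3),      -- kept row with stock
  (400006, 'A1', 'red', 'M', NULL),   -- NULL-stock duplicate: should be returned
  (400007, 'B2', 'blue', 'L', NULL);  -- NULL stock but no duplicate: excluded
""")
hits = conn.execute("""
    SELECT ean
    FROM eans t
    WHERE t.stock_level IS NULL
      AND EXISTS (SELECT 1 FROM eans t2
                  WHERE t2.style = t.style
                    AND t2.color = t.color
                    AND t2.size = t.size
                    AND t2.stock_level IS NOT NULL)
""").fetchall()
print(hits)
```

Only the NULL-stock row that has a stocked twin comes back.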
I'm assuming that you want the duplicate to have a non-NULL `stock_level`, as in your example. | ```
SELECT DISTINCT t.EAN
FROM `table` t JOIN ( SELECT style
, color
, size
FROM `table`
GROUP BY style
, color
, size
HAVING COUNT(*) > 1
) x ON t.style = x.style
AND t.color = x.color
AND t.size = x.size
WHERE t.stock_level IS NULL ;
```
### [SQLFiddle](http://sqlfiddle.com/#!2/fccc7/2) | Find records that qualify as pairs | [
"",
"sql",
"join",
""
] |
```
SELECT DISTINCT OH.MRN,OH.VISIT_NUMBER,OH.EVENT_TIME
FROM ONCOLOGY_HISTROY_MV OH
WHERE OH.PROC='BC - Heamatology-Oncology Appt'
ORDER BY EVENT_TIME ASC
```
I'm trying to get the first visit for each MRN (Patient) in a particular PROC (which is Procedure)
Please help :) | This is a good opportunity to use analytic functions:
```
SELECT MRN, VISIT_NUMBER, EVENT_TIME
FROM (SELECT OH.MRN, OH.VISIT_NUMBER, OH.EVENT_TIME,
ROW_NUMBER() OVER (PARTITION BY OH.MRN ORDER BY OH.EVENT_TIME) AS SEQNUM
FROM ONCOLOGY_HISTROY_MV OH
WHERE OH.PROC = 'BC - Heamatology-Oncology Appt'
) OH
WHERE SEQNUM = 1
ORDER BY EVENT_TIME ASC;
``` | It could be as simple as this:
```
select mrn, min(visit_date) mindate
from oncology_history_mv
where proc = 'BC Heamatology-Oncology Appt'
group by mrn
```
If you need more details, make the above a derived table and join to it.
```
select fields you need
from oncology_history_mv onc join
(select mrn, min(visit_date) mindate
from oncology_history_mv
where proc = 'BC Heamatology-Oncology Appt'
group by mrn ) temp on onc.mrn = temp.mrn and vist_date = mindate
where proc = 'BC Heamatology-Oncology Appt'
```
This assumes that event\_time does not include the date of the procedure, just the time of day. | Select First Row of every MRN | [
"",
"sql",
"oracle",
"select",
""
] |
As per my understanding, `FULL OUTER JOIN` is a combination of `LEFT OUTER JOIN` and `RIGHT OUTER JOIN`. In this case, it simply joins two tables with all entries. Please let me know why we give an "`ON`" clause for `FULL OUTER JOIN`? Or explain how the ON clause is applied and what difference it makes in the query?
Eg Query:
```
SELECT Customers.CustomerName, Orders.OrderID
FROM Customers
FULL OUTER JOIN Orders
ON Customers.CustomerID=Orders.CustomerID
``` | A `cross join` produces a Cartesian product between the two tables, returning all possible combinations of all rows. It has **no on clause** because you're just joining everything to everything.
A `full outer join` is a combination of a left outer and right outer join. It returns all rows in both tables that match the query's where clause, and in cases where the on condition can't be satisfied for those rows it puts null values in for the un-populated fields.
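As a side note, on engines without `FULL OUTER JOIN` (e.g. MySQL) the same result can be emulated as a union of two `LEFT JOIN`s. This hypothetical sketch (Python stdlib + SQLite, invented table data) also shows exactly what the `ON` clause does: it decides which rows pair up, and unmatched rows get NULL padding:

```python
import sqlite3

# Invented example data; 'Bob' has no order, order 101 has no customer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customerid INT, customername TEXT);
CREATE TABLE orders (orderid INT, customerid INT);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO orders VALUES (100, 1), (101, 99);
""")
# FULL OUTER JOIN emulated as LEFT JOIN in both directions, UNION dedupes.
full = set(conn.execute("""
    SELECT c.customername, o.orderid
    FROM customers c LEFT JOIN orders o ON c.customerid = o.customerid
    UNION
    SELECT c.customername, o.orderid
    FROM orders o LEFT JOIN customers c ON c.customerid = o.customerid
""").fetchall())
print(full)
```

Matched rows appear once; rows the ON condition can't pair get a NULL on the other side.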
From [@Donnie answer](https://stackoverflow.com/questions/3228871/sql-server-what-is-the-difference-between-cross-join-and-full-outer-join) | This diagram will help you to understand joins.
 | why ON clause used on full outer join? | [
"",
"sql",
"sql-server",
"oracle",
"join",
""
] |
I have this data (for example):
```
id col1 col2
--------------------------------
5614 (null) Y
5614 Y (null)
5614 Y (null)
```
I want to obtain a single line that tells me whether the fields have some value `Y`.
```
id col1 col2
5614 Y Y
```
If no value `Y` is present, then it should be `N`.
My actual query does that, but I think there should be a shorter way:
```
SELECT op.id,
CASE WHEN (SUM(CASE WHEN col1 = 'S' THEN 1 ELSE 0 END)) > 0 THEN 'S' ELSE 'N' END AS col1,
CASE WHEN (SUM(CASE WHEN col2 = 'S' THEN 1 ELSE 0 END)) > 0 THEN 'S' ELSE 'N' END AS col2
FROM OPERATOR op LEFT JOIN TRAM_OP tr ON op.id = tr.id AND tr.id2 = 20
WHERE op.id = 5614 GROUP BY op.id
```
As you can see I have two tables and I need to join them to match the `id2` key. | One way is to use a shorthand that eliminates the need for some of the `case` statements:
```
SELECT op.id,
(CASE WHEN SUM(col1 = 'S') > 0 THEN 'S' ELSE 'N' END) AS col1,
(CASE WHEN SUM(col2 = 'S') > 0 THEN 'S' ELSE 'N' END) AS col2
FROM OPERATOR op LEFT JOIN
TRAM_OP tr
ON op.id = tr.id AND tr.id2 = 20
WHERE op.id = 5614
GROUP BY op.id;
```
You could also take advantage of the fact that `'S'` > `'N'`:
```
SELECT op.id, MAX(col1) AS col1, MAX(col2) AS col2
FROM OPERATOR op LEFT JOIN
TRAM_OP tr
ON op.id = tr.id AND tr.id2 = 20
WHERE op.id = 5614
GROUP BY op.id;
```
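To illustrate the `'S' > 'N'` trick, here is a small SQLite check through Python's stdlib (data invented, and with a `COALESCE` guard added in case a group is all-NULL, which the answer's bare `MAX` would leave as NULL):

```python
import sqlite3

# Invented rows: group 5614 mixes NULLs and 'S', group 7000 is all 'N'.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, col1 TEXT, col2 TEXT);
INSERT INTO t VALUES (5614, NULL, 'S'), (5614, 'S', NULL), (5614, 'S', NULL),
                     (7000, 'N', 'N');
""")
# MAX skips NULLs and 'S' sorts after 'N', so any 'S' in the group wins.
agg = conn.execute("""
    SELECT id, COALESCE(MAX(col1), 'N') AS col1, COALESCE(MAX(col2), 'N') AS col2
    FROM t
    GROUP BY id
    ORDER BY id
""").fetchall()
print(agg)
```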
However, this version might be confusing to someone else reading the code. | If `Y` is the only possible value, then with a simple `max`, you get either this value or NULL:
```
SELECT id, max(col1) AS col1, max(col2) AS col2
FROM ...
WHERE ...
GROUP BY id
```
You can change the NULL to `'N'` with the [ifnull function](http://www.sqlite.org/lang_corefunc.html#ifnull):
```
SELECT id,
ifnull(max(col1), 'N') AS col1,
ifnull(max(col2), 'N') AS col2
FROM ...
WHERE ...
GROUP BY id
``` | Aggregate functions to specific values in a Group By query | [
"",
"sql",
"sqlite",
"group-by",
""
] |
I'm trying to select a list of post\_ids from one column based on whether or not a user\_id exists in another.
The idea is simply to get a list of post\_ids that a user has not yet interacted with.
The table is like this, with its contents:
```
+----+---------+---------+--------+---------------------+------------+------+
| id | post_id | user_id | rating | last_update | num_images | url |
+----+---------+---------+--------+---------------------+------------+------+
| 1 | 2938 | 5 | 1 | 2014-06-12 22:54:31 | NULL | null |
| 2 | 2938 | 1 | 1 | 2014-06-12 22:54:54 | NULL | null |
| 3 | 2940 | 6 | 1 | 2014-06-12 23:36:25 | NULL | null |
| 4 | 2943 | 3 | 0 | 2014-06-12 23:39:29 | NULL | NULL |
+----+---------+---------+--------+---------------------+------------+------+
```
My attempt was this:
```
SELECT Distinct post_id
FROM `table`
WHERE user_id !=1
```
Yet I am still getting results where the user has already been connected with the post -- it just excludes the entry that includes the user.
**How do I get results on the condition that the user\_id has not been connected with any instance of post\_id in the compared column?** | Your query should be:
```
select distinct(post_id )
from tableName
where post_id not in(select post_id from tableName where user_id=1);
``` | My proposal
```
SELECT Distinct post_id
FROM `table` T
WHERE post_id NOT IN (
/*select posts NOT to be shown */
SELECT post_id
FROM `table` T1
/* naming the tables differently allows you to make operations
with data of the inner selection -T1- AND the outer one -T-
*/
WHERE T1.user_id = 1
)
``` | Select column if results/row not exists? | [
"",
"mysql",
"sql",
""
] |
```
ID UserId Name Amount RewardId
----------------------------
1 1 James 10.00 1
2 1 James 10.00 2
3 1 James 10.00 3
4 2 Dave 20.00 1
5 2 Dave 20.00 3
6 3 Lim 15.00 2
```
I'm trying to insert to another table, and this is the result that i'm struggling with:
```
Tbl1ID RewardId
------------------
1 1
1 2
1 3
4 1
4 3
6 2
```
I'm trying to get the MIN(ID) of each person and select all the RewardId that belong to that person. | You could do a simple self join to get the minimum id value per userid/rewardid combination;
```
SELECT MIN(a.id) Tbl1ID, b.RewardId
FROM mytable a
JOIN mytable b
ON a.name = b.name
GROUP BY b.userid, b.rewardid
ORDER BY tbl1id, rewardid;
```
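A minimal reproduction of this self-join with the question's sample data (SQLite through Python's stdlib, purely to verify the logic; not SQL Server):

```python
import sqlite3

# Sample data copied from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE rewards (id INT, userid INT, name TEXT, rewardid INT);
INSERT INTO rewards VALUES
  (1, 1, 'James', 1), (2, 1, 'James', 2), (3, 1, 'James', 3),
  (4, 2, 'Dave', 1), (5, 2, 'Dave', 3), (6, 3, 'Lim', 2);
""")
# Self-join on name, then take the minimum id per (userid, rewardid) group.
out = conn.execute("""
    SELECT MIN(a.id) AS tbl1id, b.rewardid
    FROM rewards a JOIN rewards b ON a.name = b.name
    GROUP BY b.userid, b.rewardid
    ORDER BY tbl1id, b.rewardid
""").fetchall()
print(out)
```

This matches the desired output table from the question.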
[An SQLfiddle to test with](http://sqlfiddle.com/#!6/dd805/1). | If you are running SQL Server 2008+, you can simplify it by using a window function.
```
INSERT INTO AnotherTable (Tbl1ID, RewardID)
SELECT MIN(ID) OVER (PARTITION BY Name),
RewardID
FROM SourceTable
```
* [SQLFiddle Demo](http://sqlfiddle.com/#!3/1408c0/4) | SQL. How to select multiple rows by using the MIN and GROUP BY | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am using the `UTL_FILE` utility in Oracle to get the data into a CSV file. Here I am using the script,
so I am getting a set of text files.
Case 1:
Sample output in the test1.csv file is:
```
"sno","name"
"1","hari is in singapore
ramesh is in USA"
"2","pong is in chaina
chang is in malaysia
vilet is in uk"
```
Now I am counting the number of records in test1.csv by using the Linux command:
```
egrep -c "^\"[0-9]" test1.csv
```
Here I am getting the record count as:
```
2 (ACCORDING TO LINUX)
```
But if I calculate the number of records in the database with `select count(*) from test;`, I get:
```
COUNT(*)
---------- (ACCORDING TO DATA BASE)
2
```
Case 2:
Sample output in the test2.csv file is:
```
"sno","name","p"
"","",""
"","","ramesh is in USA"
"","",""
```
Now I am counting the number of records in test2.csv by using the Linux command:
```
egrep -c "^\"[0-9]" test2.csv
```
Here I am getting the record count as:
```
0 (ACCORDING TO LINUX)
```
But if I calculate the number of records in the database with `select count(*) from test;`, I get:
```
COUNT(*)
---------- (ACCORDING TO DATA BASE)
2
```
Can anybody help me count the exact lines in case 1 and case 2 using a single command?
Thanks in advance.
```
#!/usr/bin/perl -w
open(FH, $ARGV[0]) or die "Failed to open file";
# Get coloms from HEADER and use it to contruct regex
my $head = <FH>;
my @col = split(",", $head); # Colums array
my $col_cnt = scalar(@col); # Colums count
# Read rest of the rows
my $rows;
while(<FH>) {
$rows .= $_;
}
# Create regex based on number of coloms
# E.g for 3 coloms, regex should be
# ".*?",".*?",".*?"
# this represents anything between " and "
my $i=0;
while($i < $col_cnt) {
$col[$i++] = "\".*?\"";
}
my $regex = join(",", @col);
# /s to treat the data as single line
# /g for global matching
my @row_cnt = $rows =~ m/($regex)/sg;
print "Row count:" . scalar(@row_cnt);
```
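As a cross-check on the Perl approach above, any CSV-aware parser gives the same count, because a quoted newline is part of the field rather than a record separator. A sketch with Python's stdlib `csv` module, using the case-1 sample:

```python
import csv
import io

# Sample data copied from case 1 of the question.
sample = '''"sno","name"
"1","hari is in singapore
ramesh is in USA"
"2","pong is in chaina
chang is in malaysia
vilet is in uk"
'''
# csv.reader keeps quoted newlines inside the field, so each logical
# record is one parsed row regardless of how many physical lines it spans.
rows = list(csv.reader(io.StringIO(sample)))
record_count = len(rows) - 1  # subtract the header row
print(record_count)
```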
Just store it as `row_count.pl` and run it as `./row_count.pl filename` | `egrep -c test1.csv` doesn't have a search term to match for, so it's going to try to use `test1.csv` as the regular expression it tries to search for. I have no idea how you managed to get it to return 2 for your first example.
A useable `egrep` command that will actually produce the number of records in the files is `egrep '"[[:digit:]]*"' test1.csv` assuming your examples are actually accurate.
```
timp@helez:~/tmp$ cat test.txt
"sno","name"
"1","hari is in singapore
ramesh is in USA"
"2","pong is in chaina
chang is in malaysia
vilet is in uk"
timp@helez:~/tmp$ egrep -c '"[[:digit:]]*"' test.txt
2
timp@helez:~/tmp$ cat test2.txt
"sno","name"
"1","hari is in singapore"
"2","ramesh is in USA"
timp@helez:~/tmp$ egrep -c '"[[:digit:]]*"' test2.txt
2
```
Alternatively you might do better to add an extra value to your `SELECT` statement. Something like `SELECT 'recmatch.,.,',sno,name FROM TABLE;` instead of `SELECT sno,name FROM TABLE;` and then `grep` for `recmatch.,.,` though that's something of a hack. | line count with in the text files having multiple lines and single lines | [
"",
"sql",
"linux",
""
] |
I'm trying to split a string in MSSQL by only the first whitespace.
Considering a full name can contain 2 spaces, I have no idea how to do this.
Example:
`Henk de Vries`
I would like to split it into:
```
Firstname: Henk
Lastname: de Vries
Try using [PATINDEX](http://msdn.microsoft.com/en-IN/library/ms188395.aspx):
```
create table #t(name varchar(20))
insert into #t values('Henk de Vries')
select substring(name, 1, PATINDEX('% %', name)) as First_name,
       SUBSTRING(name, patindex('% %', name), LEN(name)) as Last_Name
from #t
```
*This is fixed, as suggested in the comments by t-clausen.dk:*
```
select left(name, charindex(' ', name + ' ')) as First_Name,
       substring(name, charindex(' ', name + ' '), len(name)) as Last_Name
from #t
```
`A demo to test with`
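The same first-space split can be sketched on SQLite (via Python's stdlib) with `instr`/`substr` standing in for `CHARINDEX`; the appended-space trick (`name || ' '`) is what keeps the single-name case from failing:

```python
import sqlite3

# SQLite stand-in for the T-SQL split; instr/substr replace charindex/substring.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p (name TEXT)")
conn.executemany("INSERT INTO p VALUES (?)", [('Henk de Vries',), ('Thomas',)])
split = conn.execute("""
    SELECT trim(substr(name || ' ', 1, instr(name || ' ', ' '))) AS first_name,
           trim(substr(name || ' ', instr(name || ' ', ' '))) AS last_name
    FROM p
""").fetchall()
print(split)
```

A single name yields an empty last name instead of a truncated one.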
`Updated demo` | Here is an example that will compare this answer to the chosen answer; note that the chosen answer has bugs when there is just 1 name (I know most people have combined names, but this is not a perfect world):
```
SELECT
LEFT(name, charindex(char(32), name + char(32))) Firstname,
STUFF(name, 1, charindex(char(32), name), '') LastName,
-- included example from accepted answer
substring(name,1,PATINDEX('% %',name)) as First_name,
SUBSTRING(name,patindex('% %',name),LEN(name)) as Last_Name
FROM (values('Henk de Vries'), ('Thomas')) x(name)
```
Result
```
Firstname LastName First_name Last_Name
Henk de Vries Henk de Vries
Thomas Thoma
``` | split firstname and lastname but only on first space | [
"",
"sql",
"sql-server",
""
] |
I have SQL query that gives me the following sample data:
```
Name | Address | Seq_No |
Name1 Add1 6
Name1 Add1 6
Name1 Add1 6
Name1 Add1 5
Name1 Add1 5
Name1 Add1 6
Name2 Add2 2
Name2 Add2 1
Name2 Add2 2
Name2 Add2 2
Name2 Add2 1
```
I want to be able to pull out of the result set the data where the SEQ\_No is the Maximum value
I have tried using a HAVING at the end of the query:
```
HAVING SEQ_no=MAX(seq_no)
```
But that doesn't work, if I use
```
HAVING SEQ_no=6
```
then obviously this pulls out the data where seq\_no=6, what am I doing wrong with the MAX? | Use Rank() function to assign ranking based on seq and then filter by Rank=1
LIKE
Rank() OVER (PARTITION BY clnt\_name,cmpadd.add\_1 ORDER BY hraptvpd.seq\_no DESC) AS RNK
After that you can filter on where RNK =1
I tried editing your query, if there is any error please sort it out
```
select
hgmclent.clnt_name
,cmpadd.add_1
,cmpadd.add_2
,cmpadd.add_3
,hgmprty1.post_code
,hgmprty1.prty_id
,hraptvpd.seq_no
,vd_prd
,void_cd
,void_desr
,st_date
,seq_no
,end_date
,est_avldt
,lst_revn,
Rank() OVER (PARTITION BY clnt_name,cmpadd.add_1 ORDER BY hraptvpd.seq_no DESC) AS RNK
from hgmprty1
join hratency on prty_ref=prty_id
join hrartamt on hrartamt.rent_acc_no=hratency.rent_acc_no
join hracracr on hracracr.rent_acc_no=hrartamt.rent_acc_no
join hgmclent on hgmclent.client_no=hracracr.clnt_no
join cmpadd on cmpadd.add_id=hgmprty1.add_cd
JOIN hraptvpd WITH (nolock) ON hraptvpd.prty_ref=hgmprty1.prty_id
JOIN hraptvps on hraptvps.prty_ref=hraptvpd.prty_ref AND seq_no=vd_prd
where
tency_end_dt is NULL AND
prim_clnt_yn=1
``` | You don't need HAVING here just WHERE:
```
SELECT * FROM T WHERE SEQ_no=(SELECT MAX(SEQ_no) FROM T)
``` | Using Max in a Having SQL query | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I am trying to count distincts
```
Sub sql(sqlStr As String, PasteRnG As Range)
Dim con As Object
Dim rstData As Object
Dim sDatabaseRangeAddress As String
Set con = CreateObject("ADODB.Connection")
Set rstData = CreateObject("ADODB.Recordset")
con.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & ActiveWorkbook.FullName & ";Extended Properties = ""Excel 12.0 Macro;HDR=yes"";"
rstData.Open sqlStr, con
PasteRnG.CopyFromRecordset rstData
rstData.Close
Set rstData = Nothing
Set con = Nothing
End Sub
```
query
```
sql "SELECT DISTINCT `P3`,`P2`,`P1`, COUNT(DISTINCT `P3`) AS countOf FROM [QQ$] ", Worksheets("QQQQ").Range("a3")
```
error on
```
SELECT DISTINCT `P3`,`P2`,`P1`, COUNT(DISTINCT `P3`) AS countOf FROM table;
```
But I get an error on ``COUNT(DISTINCT `P3`)``.
I need to get, in the 4th column, the count of each DISTINCT `P3`:
```
Distinctlist P2 P1 CountOfDistinctInP3
111 , 11, 1, 56pcs.
222, 22, 2, 25pcs.
```
`P3` cannot be empty; it always has a value.
But this works fine, like I need:
```
sql "SELECT DISTINCT `P3`, COUNT(`P3`) FROM [QQ$] GROUP BY `P3` ", Worksheets("QQQQ").Range("a3")
```
But without `P2` and `P1`.
--- | Solved: every column in a SELECT that is not aggregated needs to be in the GROUP BY clause:
```
sql "SELECT `P3`, `P2`, `P1`, COUNT(`P3`) FROM [QQ$] GROUP BY `P3`, `P2`, `P1` ", Worksheets("QQQQ").Range("a3")
``` | Try below Query
```
select count (select distinct p3 from table) from table
``` | Excel vba ADODB.Connection Sql Count distincts | [
"",
"sql",
"count",
"distinct",
"ado",
""
] |
I've got the following query:
```
select distinct a.id, a.name
from Employee a
join Dependencies b on a.id = b.eid
where not exists
(
select *
from Dependencies d
where b.id = d.id
and d.name = 'Apple'
)
and exists
(
select *
from Dependencies c
where b.id = c.id
and c.name = 'Orange'
);
```
I have two tables, relatively simple.
The first, Employee, has an id column and a name column.
The second table, Dependencies, has 3 columns: an id, an eid (the employee id to link on), and a name (Apple, Orange, etc.).
the data looks like this
Employee table looks like this
```
id | name
-----------
1 | Pat
2 | Tom
3 | Rob
4 | Sam
```
Dependencies
```
id | eid | Name
--------------------
1 | 1 | Orange
2 | 1 | Apple
3 | 2 | Strawberry
4 | 2 | Apple
5 | 3 | Orange
6 | 3 | Banana
```
As you can see, Pat has both Orange and Apple, so he needs to be excluded; it has to be done via joins and I can't seem to get it to work. Ultimately the query should only return Rob. | Inner join with the name you want, left join on the name you don't, then use WHERE to ensure the left join fails to match, like so ([**SQL Fiddle**](http://sqlfiddle.com/#!2/52b828/1/0)):
```
select distinct a.id, a.name
from Employee a
inner join Dependencies b on a.id = b.eid
and b.name = 'Orange'
left join Dependencies c on ( a.id = c.eid
and c.name = 'Apple')
where c.id is null;
``` | Two joins to `Dependencies` will be needed, as there are 2 tests. Neglecting performance for a moment, you could try to improve the understandability of the joins by naming the aliases, e.g.:
```
SELECT DISTINCT e.ID, e.Name
FROM Employee e
LEFT OUTER JOIN Dependencies withApple
ON withApple.eid = e.id
AND withApple.Name = 'Apple'
LEFT OUTER JOIN Dependencies withOrange
ON withOrange.eid = e.id
AND withOrange.Name = 'Orange'
WHERE
withApple.id IS NULL -- Don't want this
AND
withOrange.id IS NOT NULL -- Do want this.
```
[SqlFiddle](http://sqlfiddle.com/#!2/8c0de1/1) | How do I replace NOT EXISTS with JOIN? | [
"",
"mysql",
"sql",
"join",
"output",
""
] |
I'm new to Go and I'm trying to write a little program to save enumerated values to a database.
The way I declare my values is as follows:
```
type FileType int64
const (
movie FileType = iota
music
book
etc
)
```
I use these values in my struct like this:
```
type File struct {
Name string
Type FileType
Size int64
}
```
I use gorp for my database stuff, but I guess the use of gorp isn't relevant to my problem. I put stuff in my DB like this:
```
dbmap.Insert(&File{"MyBook.pdf",movie,1000})
```
but when I try to retrieve stuff…
```
dbmap.Select(&dbFiles, "select * from Files")
```
I get the following error:
```
panic: reflect.Set: value of type int64 is not assignable to type main.FileType
```
When I use `int64` as the type for the `const(...)` and for the `File.Type` field, everything works fine, but I'm new to Go and want to understand the problem.
The way I see it, I have two problems:
1. Why can't Go convert this stuff successfully? I looked at the source code of the Go reflection and sql packages and there are methods for this kind of conversion, but they seem to fail. Is this a bug? What is the problem?
2. I figured out that one can implement the `sql.Scanner` interface by implementing the following method:
```
Scan(src interface{}) error
```
I tried to implement the method, and I was even able to get the right value from `src` and convert it to a `FileType`, but I was confused about whether I should implement the method on `(f *FileType)` or `(f FileType)`. Either way the method gets invoked; however, I'm not able to overwrite `f` (or at least the update gets lost later), and the `File` instances read from the DB always had 0 as the value for `File.Type`.
Do you have any ideas on those two points? | I recently had the same need, and the solution is to implement two interfaces:
1. [sql/driver.Valuer](http://golang.org/pkg/database/sql/driver/#Valuer)
2. [sql.Scanner](http://golang.org/pkg/database/sql/#Scanner)
Here's a working example:
```
type FileType int64
func (u *FileType) Scan(value interface{}) error { *u = FileType(value.(int64)); return nil }
func (u FileType) Value() (driver.Value, error) { return int64(u), nil }
``` | Slightly off-topic, but may be useful to others as I kept revisiting this question/answer when solving a similar problem when working with postgres enum fields in golang (which are returned as bytes).
```
// Status values
const (
incomplete Status = "incomplete"
complete Status = "complete"
reject Status = "reject"
)
type Status string
func (s *Status) Scan(value interface{}) error {
asBytes, ok := value.([]byte)
if !ok {
return errors.New("Scan source is not []byte")
}
*s = Status(string(asBytes))
return nil
}
func (s Status) Value() (driver.Value, error) {
// validation would go here
return string(s), nil
}
``` | Saving enumerated values to a database | [
"",
"sql",
"types",
"enums",
"go",
""
] |
What's the syntax in MS Access 2010 for limiting the results of a query to the first 1,000?
I've tried this
```
SELECT tblGL.[Cost Centre Code]
FROM tblGL
LIMIT 1000;
```
but I get the error 'Syntax error in FROM clause'.
I've also tried setting the Max Records property but it doesn't seem to do anything - I still get 7,000+ results regardless of what value I enter into the Max Records field.
I also want to have a 2nd query which selects the next 25,000, starting from the 1,001st. Something like:
```
SELECT tblGL.[Cost Centre Code]
FROM tblGL
LIMIT 1001, 25000;
``` | > What then is the Access equivalent of MySQL: LIMIT 1001, 25000 (ie return 25,000 results starting from the 1,001st)?
Unfortunately, in MS Access this isn't as straightforward as in MySQL.
In Access, you need to work with nested subqueries.
Here's an answer of mine where I show how to build the correct SQL string for paging in C#:
[How to do MS Access database paging + search?](https://stackoverflow.com/a/6916791/6884)
Taking the SQL string from that answer and inserting your table name and column names will result in this query:
```
select [Cost Centre Code] from tblGL
where [Cost Centre Code] in (
select top 25000 sub.[Cost Centre Code]
from (
select top 26000 tab.[Cost Centre Code]
from tblGL tab
where 1=1
order by tab.[Cost Centre Code]
) sub
order by sub.[Cost Centre Code] desc
)
order by [Cost Centre Code]
```
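For what it's worth, the three-step nesting logic can be verified on SQLite through Python's stdlib. SQLite has `LIMIT` rather than Access's `TOP`, so this is purely illustrative of the idea, scaled down to: skip 2 rows, take 3, from a table of 1..10:

```python
import sqlite3

# Toy data: codes 1..10; we want rows 3..5 (skip 2, take 3).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gl (code INT)")
conn.executemany("INSERT INTO gl VALUES (?)", [(i,) for i in range(1, 11)])
page = conn.execute("""
    SELECT code FROM gl
    WHERE code IN (
        SELECT code FROM (
            SELECT code FROM gl ORDER BY code LIMIT 5   -- skip(2) + take(3) rows, ascending
        ) ORDER BY code DESC LIMIT 3                    -- last take(3) of those, descending
    )
    ORDER BY code                                       -- reselect in ascending order
""").fetchall()
print(page)
```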
This eliminates at least the need for basic C# knowledge, but I'm afraid you'll still be confused in case you don't know how subqueries work :-)
The problem is:
Access has no built-in way to directly get 25000 rows, but skip the first 1000.
The only thing that's possible is to get the first X rows.
So I'm doing this *(from the inside to the outside)*:
1. Load the first **26000** rows
*(1000 + 25000, because we want to skip 1000 rows and then load 25000)*
2. From this dataset, load the first 25000 rows, but order descending.
This will effectively load row 26000 to 1001 *(in that order, because we ordered the rows descending!)*
3. To get the rows in ascending order, just load from the table again *(and order ascending!)*, but only the rows with the `Cost Centre Codes` from step 2.
Got it?
Yes, it looks intimidating at first glance, but as soon as you "get" subqueries, it's actually not that difficult. | MySQL & SQLite use `limit`. For SQL Server & MS Access, you need to use [`top X` in the `select` portion](https://stackoverflow.com/questions/9339079/how-to-select-top-10-in-access-query) of your query:
```
select top 1000 tblGL.[Cost Centre Code] FROM tblGL
``` | Access 2010 Limit Query Results | [
"",
"sql",
"ms-access",
"ms-access-2010",
""
] |
I have a simple question: if I have three tables A, B, and C as follows:
```
CREATE TABLE `A` (
`id` VARCHAR(35) NOT NULL,
`name` VARCHAR(20) NOT NULL,
PRIMARY KEY (`id`));
CREATE TABLE `B` (
`emp_id` VARCHAR(35) NOT NULL,
`phone` VARCHAR(7) NOT NULL,
FOREIGN KEY (`emp_id`) REFERENCES `A`(`id`));
CREATE TABLE `C` (
`emp_id` VARCHAR(35) NOT NULL,
`desk_no` INT(11) NOT NULL,
FOREIGN KEY (`emp_id`) REFERENCES `A`(`id`)
```
Is it good practice to take a join directly on C.emp\_id = B.emp\_id or should I join (A.id = B.emp\_id) as R and then do join R.id = C.emp\_id? I ask because it makes sense that you could join B and C directly but is that dangerous because this joins a foreign key to another foreign key rather than a foreign key to a primary key?
Thanks! | You are free to do the join, even encouraged if it is the right thing to do.
But, with caution. Such a join can result in a cartesian product for each employee. So, an employee that has 3 phones and 4 desks would result in 12 records -- every combination of the phones and desks for that employee. Furthermore, employees who are missing a record on one side or the other would lose records in an inner join.
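A tiny SQLite check (Python's stdlib, invented data) of that row multiplication: one employee with 3 phones and 4 desks yields 12 joined rows:

```python
import sqlite3

# Invented data: employee 1 has 3 phones and 4 desks.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE b (emp_id INT, phone TEXT);
CREATE TABLE c (emp_id INT, desk_no INT);
INSERT INTO b VALUES (1, 'p1'), (1, 'p2'), (1, 'p3');
INSERT INTO c VALUES (1, 10), (1, 11), (1, 12), (1, 13);
""")
# Joining the two child tables directly pairs every phone with every desk.
n = conn.execute("SELECT COUNT(*) FROM b JOIN c ON b.emp_id = c.emp_id").fetchone()[0]
print(n)
```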
Note that joining back to the original table does *not* fix this problem, which arises because you are joining along different dimensions of the data. You just need to realize that this might occur. | It depends on the end result you are expecting. If you need any columns from Table A then you might need to join Table A, else your Foreign Key relation and NOT NULL conditions would take care, you need not join Table A | Joining tables directly from foreign key to foreign key | [
"",
"mysql",
"sql",
"join",
"foreign-keys",
""
] |
I have 2 tables: `user` and `post`.
With show create table statements:
```
CREATE TABLE `user` (
`user_id` bigint(20) NOT NULL AUTO_INCREMENT,
`user_name` varchar(20) CHARACTER SET latin1 NOT NULL,
`create_date` datetime DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=59 DEFAULT CHARSET=utf8;
CREATE TABLE `post` (
`post_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`owner_id` bigint(20) NOT NULL,
`data` varchar(300) CHARACTER SET latin1 DEFAULT NULL,
PRIMARY KEY (`post_id`),
KEY `my_fk` (`owner_id`),
CONSTRAINT `my_fk` FOREIGN KEY (`owner_id`) REFERENCES `user` (`user_id`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1012919 DEFAULT CHARSET=utf8;
```
Everything is fine until I execute 2 queries with an ORDER BY statement, and the result is very strange: `ASC` is slow but `DESC` is very fast.
```
SELECT sql_no_cache * FROM mydb.post where post_id > 900000 and owner_id = 20 order by post_id desc limit 10;
10 rows in set (0.00 sec)
SELECT sql_no_cache * FROM mydb.post where post_id > 900000 and owner_id = 20 order by post_id asc limit 10;
10 rows in set (0.15 sec)
```
Then I use explain statements:
```
explain SELECT sql_no_cache * FROM mydb.post where post_id > 900000 and owner_id = 20 order by post_id desc limit 10;
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
| 1 | SIMPLE | post | ref | PRIMARY,my_fk | my_fk | 8 | const | 239434 | Using where |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
1 row in set (0.01 sec)
explain SELECT sql_no_cache * FROM mydb.post where post_id > 900000 and owner_id = 20 order by post_id asc limit 10;
+----+-------------+-------+------+---------------+-------+---------+-------+--------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+------------------------------------+
| 1 | SIMPLE | post | ref | PRIMARY,my_fk | my_fk | 8 | const | 239434 | Using index condition; Using where |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+------------------------------------+
1 row in set (0.00 sec)
```
I think the point is `Using index condition` but I don't know why. How can I improve my database for better performance?
**UPDATE:**
```
explain SELECT * FROM mydb.post where post_id < 600000 and owner_id = 20 order by post_id desc limit 10;
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
| 1 | SIMPLE | post | ref | PRIMARY,my_fk | my_fk | 8 | const | 505440 | Using where |
+----+-------------+-------+------+---------------+-------+---------+-------+--------+-------------+
explain SELECT * FROM mydb.post where post_id < 600000 and owner_id > 19 and owner_id < 21 order by post_id desc limit 10;
+----+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
| 1 | SIMPLE | post | range | PRIMARY,my_fk | PRIMARY | 4 | NULL | 505440 | Using where |
+----+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
``` | These are the relevant facts to understand this behavior:
* You are using InnoDB which uses a Clustered Index concept.
The single interesting side effect of Clustered Indexes for your particular case is that every non-primary-key index will implicitly contain the primary key as its very last column. No need for an index on `(owner_id, post_id)`: you already have it.
* MySQL can't properly resolve range conditions (<, >) on non-leading index columns. Instead, it ignores them during the index lookup and later applies that part of the where clause as a filter. This is just a MySQL limitation: it does not start the scan directly at the position of `post_id = 900000`, although other databases handle this just fine.
* When you are using `DESC` order, MySQL will start reading the index with the biggest `post_id` value it finds. It will then apply your filter `post_id > 900000`. If it matches, it returns the row. Then it proceeds to the next row and so on until it has found 10 matching rows. Crucially, all the matching rows sit right where the index scan starts, so they are found almost immediately.
* When you are using `ASC` order, MySQL starts reading the index at the other end, checks this value against `post_id > 900000` and will probably need to discard the row because `post_id` is below that threshold. Now guess how many rows it needs to process this way before it finds the first row that matches `post_id > 900000`? That's what's eating up your time.
* "Using Index Condition" refers to Index Condition Pushdown: <http://dev.mysql.com/doc/refman/5.6/en/index-condition-pushdown-optimization.html> I'd say it should apply in both cases. However, it is not so relevant in the DESC case because the filter doesn't remove any rows anyway. In the ASC case it is very relevant, and performance would be even worse without it.
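The ASC/DESC asymmetry described above can be simulated outside the database (a hypothetical Python sketch; the table size and threshold are invented): treat the per-owner index as a sorted list of `post_id` values and count how many entries each scan direction examines before collecting 10 matches for `post_id > 900000`.

```python
# Treat the per-owner index on (owner_id, post_id) as a sorted list of
# post_ids; only the tail of the list passes the filter.
post_ids = list(range(1, 1_000_001))   # 1,000,000 posts, sorted ascending
THRESHOLD = 900_000
LIMIT = 10

def rows_examined(index, want):
    """Walk the index in the given order, applying the filter as we go."""
    examined, matched = 0, 0
    for pid in index:
        examined += 1
        if pid > THRESHOLD:
            matched += 1
            if matched == want:
                break
    return examined

asc_examined = rows_examined(post_ids, LIMIT)             # scan starts at the small end
desc_examined = rows_examined(reversed(post_ids), LIMIT)  # scan starts at the big end
print(asc_examined, desc_examined)  # 900010 10
```

The DESC scan stops after 10 index entries, while the ASC scan first has to discard 900,000 non-matching entries; that ratio is the same work visible in the question's timings.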
If you want to verify my statements, you could:
* Increase/decrease the numeric value (900000) and see how the performance changes. Lower values should make `ASC` faster while keeping `DESC` fast too.
* Change the range condition `>` to `<` and see if it reverses the performance behavior of `ASC`/`DESC`. Remember that you might need to change the number to some lower value to actually see the performance difference.
How could one possibly know that?
<http://use-the-index-luke.com/> is my guide that explains how indexes work. | It is not really about "Using index condition" but about how MySQL uses indexes and how its query engine works. MySQL uses a simple query analyzer and optimizer.
In the case of `post_id > 900000 and owner_id = 20`, you may notice it tries to use the key `my_fk`, which is a "BIGGER INDEX" as it is sized at (64+32)\*rows. It finds all `owner_id = 20` entries from the index (yep, post\_id was not used; stupid MySQL).
After MySQL has used this big, heavy index to locate all the rows you need, it does another lookup to read the actual rows (because you do `SELECT *`) by their primary keys (a few more HDD seeks here), and then filters the result using `post_id > 900000` (SLOW).
In the case of `order by post_id desc`, there could be many reasons it runs faster. One possible reason is the InnoDB cache: the most recently inserted rows are warmer and easier to access than others.
In the case of `post_id < 600000 and owner_id > 19 and owner_id < 21`, MySQL gives up on `my_fk`, as a ranged scan on a secondary index is no better than a ranged scan on the primary index.
It just uses the PK to locate the right page for post\_id 600000, and does a **SEQUENTIAL READ** from there if your InnoDB pages are not fragmented (assuming you are using AUTO\_INCREMENT): it scans some pages and filters what matches your need.
To do an "optimization" (do it now): don't use `SELECT *`.
To do a "premature optimization" (don't do it; don't do it yet): hint MySQL with `USE INDEX`, or create an index that contains exactly the columns you need.
It is hard to say which is faster, `my_fk` or the PK, because performance varies with the pattern of the data. If owner\_id = 20 is dominant or common in your table, using the PK directly could be faster.
If owner\_id = 20 is not common in your table, `my_fk` will give a boost, as there would otherwise be too many rows to read until (post\_id > 900000 + XXX).
--
EDIT: BTW, try `ORDER BY owner_id ASC, post_id ASC` or DESC. MySQL is faster if it can simply use the index's order (rather than sorting the results).
"",
"mysql",
"sql",
"database",
"indexing",
"sql-order-by",
""
] |
My table structure is as follows:
```
Computer:
id serial,
sbeadminlogin text,
sbeadminfirstname text,
sbeadminlastname text
Server:
id serial,
sbeadminlogin text,
sapadminfirstname text,
sapadminlastname text
```
There is no relation between those tables; however, they both have a column named `sbeadminlogin`. It is possible that a person who is an administrator in the `Server` table is also the administrator of another device stored in the `Computer` table.
I would like to create a list of all administrators (unique rows only) from those tables in one query. In order to do so I used:
```
(select distinct sbeadminlogin from computer)
union
(select distinct sbeadminlogin from server)
```
which worked great until I wanted the first and last names to be visible in the result, like:
```
(select distinct sbeadminlogin, sbeadminfirstname || ' ' || sbeadminlastname from computer)
union
(select distinct sbeadminlogin, saeadminfirstname || ' ' || saeadminlastname from server)
```
I got the error saying that column `saeadminfirstname` does not exist.
Could anyone give me a hint how to prepare the statement so I get unique logins, first names and last names from both tables? | I believe that if you use the correct column names (with an alias or not) it will work:
```
(select sbeadminlogin, sbeadminfirstname || ' ' || sbeadminlastname as adminname from computer)
union
(select sbeadminlogin, sapadminfirstname || ' ' || sapadminlastname as adminname from server)
```
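This works because in a UNION the first SELECT fixes the output column names; each branch only has to reference columns that exist in *its own* table. A minimal sqlite sketch of the same shape (column names taken from the question, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE computer (sbeadminlogin TEXT, sbeadminfirstname TEXT, sbeadminlastname TEXT);
    CREATE TABLE server   (sbeadminlogin TEXT, sapadminfirstname TEXT, sapadminlastname TEXT);
    INSERT INTO computer VALUES ('jdoe', 'John', 'Doe');
    INSERT INTO server   VALUES ('jdoe', 'John', 'Doe'),  -- duplicate: removed by UNION
                                ('asmith', 'Anna', 'Smith');
""")

rows = conn.execute("""
    SELECT sbeadminlogin, sbeadminfirstname || ' ' || sbeadminlastname AS adminname FROM computer
    UNION
    SELECT sbeadminlogin, sapadminfirstname || ' ' || sapadminlastname AS adminname FROM server
    ORDER BY sbeadminlogin
""").fetchall()

print(rows)  # [('asmith', 'Anna Smith'), ('jdoe', 'John Doe')]
```

Note that the duplicate administrator appears only once: UNION (as opposed to UNION ALL) deduplicates across both branches.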
Example: [SQLFiddle](http://sqlfiddle.com/#!11/58feb/1) | I think you should use +' '+ to concatenate two columns and not ||.
ex.
```
(select distinct name+' '+city+' '+lastname from temp1)
union
(select distinct name+' '+city+' '+lastname from temp2)
``` | Selecting Similar columns from multiple tables | [
"",
"sql",
"postgresql",
""
] |
I have below string:
```
ThisSentence.ShouldBe.SplitAfterLastPeriod.Sentence
```
So I want to select `Sentence` since it is the string after the last period. How can I do this? | You can probably do this with complicated regular expressions. I like the following method:
```
select substr(str, - instr(reverse(str), '.') + 1)
```
Nothing like testing to discover that this doesn't work when the period is at the end of the string; something about `-1 + 1 = 0`. Here is an improvement:
```
select (case when str like '%.' then ''
else substr(str, - instr(reverse(str), '.') + 1)
end)
```
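The reverse-and-search idea is easy to sanity-check outside the database; a hypothetical Python sketch of the same logic (including the trailing-period edge case):

```python
def after_last_period(s: str) -> str:
    """Mimic substr(str, -instr(reverse(str), '.') + 1), with the '%.' edge case handled."""
    pos = s[::-1].find('.') + 1  # 1-based position of the last '.', counted from the end
    return s[len(s) - pos + 1:] if pos else s

print(after_last_period('ThisSentence.ShouldBe.SplitAfterLastPeriod.Sentence'))  # Sentence
```

Searching the reversed string finds the *last* period, and its 1-based position equals the length of the final segment plus one, which is exactly what the SUBSTR offset exploits.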
EDIT:
Your example works, both when I run it on my local Oracle and in [SQL Fiddle](http://www.sqlfiddle.com/#!4/d41d8/30965).
I am running this code:
```
select (case when str like '%.' then ''
else substr(str, - instr(reverse(str), '.') + 1)
end)
from (select 'ThisSentence.ShouldBe.SplitAfterLastPeriod.Sentence' as str from dual) t
``` | Just for completeness' sake, here's a solution using regular expressions (not very complicated IMHO :-) ):
```
select regexp_substr(
'ThisSentence.ShouldBe.SplitAfterLastPeriod.Sentence',
'[^.]+$')
from dual
```
The regex
* uses a negated character class to match anything except for a dot `[^.]`
* adds a quantifier `+` to match one or more of these
* uses an anchor `$` to restrict matches to the end of the string | oracle 12c - select string after last occurrence of a character | [
"",
"sql",
"string",
"oracle",
"oracle12c",
""
] |
For my dissertation **data collection**, one of the sources is an externally-managed system, which is based on a Web form for submitting *SQL queries*. Using `R` and `RCurl`, I have implemented an automated data collection framework in which I simulate that form. Everything worked well while I was limiting the size of the resulting dataset. But when I tried to go over 100000 records (`RQ_SIZE` in the code below), the tandem "my code - their system" started becoming unresponsive ("hanging").
So, I decided to use the *SQL pagination feature* (`LIMIT ... OFFSET ...`) to submit a series of requests, hoping then to combine the **paginated results** into a target data frame. However, after changing my code accordingly, the output that I see is only one pagination progress character (`*`) and then no more output. I'd appreciate it if you could help me identify the probable cause of the unexpected behavior. I cannot provide a reproducible example, as it's very difficult to extract the functionality, not to mention the data, but I hope that the following code snippet is enough to reveal the issue (or, at least, point in the direction of the problem).
```
# First, retrieve total number of rows for the request
srdaRequestData(queryURL, "COUNT(*)", rq$from, rq$where,
DATA_SEP, ADD_SQL)
assign(dataName, srdaGetData()) # retrieve result
data <- get(dataName)
numRequests <- as.numeric(data) %/% RQ_SIZE + 1
# Now, we can request & retrieve data via SQL pagination
for (i in 1:numRequests) {
# setup SQL pagination
if (rq$where == '') rq$where <- '1=1'
rq$where <- paste(rq$where, 'LIMIT', RQ_SIZE, 'OFFSET', RQ_SIZE*(i-1))
# Submit data request
srdaRequestData(queryURL, rq$select, rq$from, rq$where,
DATA_SEP, ADD_SQL)
assign(dataName, srdaGetData()) # retrieve result
data <- get(dataName)
# some code
# add current data frame to the list
dfList <- c(dfList, data)
if (DEBUG) message("*", appendLF = FALSE)
}
# merge all the result pages' data frames
data <- do.call("rbind", dfList)
# save current data frame to RDS file
saveRDS(data, rdataFile)
``` | I'm answering my own question, as, finally, I have figured out what has been the **real source** of the problem. My investigation revealed that the unexpected waiting state of the program was due to PostgreSQL becoming confused by **malformed SQL queries**, which contained multiple `LIMIT` and `OFFSET` keywords.
The **reason** for that is pretty simple: I used `rq$where` both outside and inside the `for` loop, which made `paste()` concatenate the previous iteration's `WHERE` clause with the current one. I fixed the code by processing the contents of the `WHERE` clause and saving it before the loop, then using the saved value safely in each iteration, as it became independent of the original `WHERE` clause's value.
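A stripped-down sketch of the bug (hypothetical; only the string handling is reproduced): reusing the mutated WHERE clause across iterations stacks up LIMIT/OFFSET fragments, while building each clause from a saved base keeps every query well-formed.

```python
RQ_SIZE = 100000

# Buggy: rq_where is mutated and reused, so clauses accumulate across iterations
rq_where = "1=1"
buggy = []
for i in range(1, 4):
    rq_where = f"{rq_where} LIMIT {RQ_SIZE} OFFSET {RQ_SIZE * (i - 1)}"
    buggy.append(rq_where)

# Fixed: build each iteration's clause from the saved base clause
base_where = "1=1"
fixed = [f"{base_where} LIMIT {RQ_SIZE} OFFSET {RQ_SIZE * (i - 1)}" for i in range(1, 4)]

print(buggy[2].count("LIMIT"), fixed[2].count("LIMIT"))  # 3 1
```

By the third iteration the buggy clause contains three LIMIT/OFFSET pairs, which is the kind of malformed SQL that confused the server.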
This investigation also helped me to fix some other deficiencies in my code and make improvements (such as using sub-selects to properly handle SQL queries returning the number of records for queries with aggregate functions). The **moral** of the story: you can never be too careful in software development. **Big thank you to those nice people who helped with this question.** | It probably falls into the category where a high LIMIT OFFSET presumably slows MySQL down:
[Why does MYSQL higher LIMIT offset slow the query down?](https://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down)
Overall, fetching large data sets over HTTP repeatedly is not very reliable. | Handling paginated SQL query results | [
"",
"sql",
"r",
"postgresql",
"pagination",
"rcurl",
""
] |
I have a column where we are storing the sum of selected day values. For example, I have the fixed day values below:
```
1 Sunday
2 Monday
4 Tuesday
8 Wednesday
16 Thursday
32 Friday
64 Saturday
```
In my column I am getting 12, so it's clear that the user has selected `(4)Tuesday+(8)Wednesday=12`.
Now I have 106; what logic can I write to find out what combination of days it is? | ```
SELECT *,
( (SelDates & 1) = 1) AS IsSun,
( (SelDates & 2) = 2) AS IsMon,
( (SelDates & 4) = 4) AS IsTue,
( (SelDates & 8) = 8) AS IsWed,
( (SelDates & 16) = 16) AS IsThu,
( (SelDates & 32) = 32) AS IsFri,
( (SelDates & 64) = 64) AS IsSat
FROM <table>
```
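The bit tests are easy to verify outside the database; a quick Python sketch using the day values from the question:

```python
DAYS = [(1, 'Sunday'), (2, 'Monday'), (4, 'Tuesday'), (8, 'Wednesday'),
        (16, 'Thursday'), (32, 'Friday'), (64, 'Saturday')]

def decode(sel_dates):
    # (SelDates & bit) = bit  <=>  that day was selected
    return [name for bit, name in DAYS if sel_dates & bit == bit]

print(decode(12))   # ['Tuesday', 'Wednesday']
print(decode(106))  # ['Monday', 'Wednesday', 'Friday', 'Saturday']
```

So the stored 106 decodes as 2 + 8 + 32 + 64: Monday, Wednesday, Friday, and Saturday.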
The query above should work in both MySQL and SQL Server, as they both support the `&` bitwise AND operator. | You can check it like this (MS SQL, bitwise AND):
```
SELECT
CASE WHEN (daySum & 1) = 1 THEN 'IsMonday'
     WHEN (daySum & 2) = 2 THEN 'IsTuesDay'
     WHEN (daySum & 4) = 4 THEN 'IsWednesDay'
... etc ...
END FROM MyTable
``` | How to find the values is sum of what all days | [
"",
"sql",
""
] |
I have the following table structure.
Logic: for a plant/part-number combination, SumQty should be listed if it has an entry in the @MHL table.
1. For the plant "11" and part number "DEF" combination, I need to list the sum as 50.
**CODE**
```
DECLARE @MHL TABLE (LineNumber VarCHAR(5), PartNumber VARCHAR(10), Qty INT)
INSERT INTO @MHL VALUES ('10001','ABC',10)
INSERT INTO @MHL VALUES ('10002','ABC',100)
INSERT INTO @MHL VALUES ('10003','DEF',50)
INSERT INTO @MHL VALUES ('10005','KXY',25)
INSERT INTO @MHL VALUES ('10006','KXY',30)
DECLARE @MHP TABLE (PlantCode VarCHAR(5), LineNumber VARCHAR(5))
INSERT INTO @MHP VALUES ('20','10001')
INSERT INTO @MHP VALUES ('21','10002')
INSERT INTO @MHP VALUES ('80','10005')
INSERT INTO @MHP VALUES ('80','10006')
DECLARE @MasterPLantParts TABLE (PlantCode VarCHAR(5), PartNumber VARCHAR(10))
INSERT INTO @MasterPLantParts VALUES ('20','ABC')
INSERT INTO @MasterPLantParts VALUES ('21','ABC')
INSERT INTO @MasterPLantParts VALUES ('96','ABC')
INSERT INTO @MasterPLantParts VALUES ('11','DEF')
INSERT INTO @MasterPLantParts VALUES ('80','KXY')
SELECT M.PlantCode,M.PartNumber,SumQty
FROM @MasterPLantParts M
LEFT OUTER JOIN
(SELECT PartNumber, PlantCode, SUM(Qty) SumQty
FROM @MHL H
LEFT OUTER JOIN @MHP p
on P.LineNumber = H.LineNumber
GROUP BY PartNumber, PlantCode
)T
ON T.PartNumber = M.PartNumber
AND T.PlantCode = M.PlantCode
```
**CURRENT RESULT**

**EXPECTED RESULT**

**QUESTION**
The SumQty "50" is not returned now. How can I do this in SQL Server 2005? It would be great if the approach also works in Oracle 8i.
Note: even if there is no record in @MHP, I need to get the sum from @MHL. But if there is a value in @MHP, get the associated value via the @MHP/@MHL relationship. | **EDIT**:
The following query (which is even simpler than the first one I posted) gets you the desired result. First, all rows from MasterPLantParts are selected (using a LEFT OUTER JOIN). Then, a join is made with MHL. If there are no entries in MHP for a given plant, all lines from MHL are selected for that plant.
```
select
mpp.PlantCode PlantCode,
mpp.PartNumber PartNumber,
sum(MHL.Qty) as SumQty
from MasterPLantParts mpp
left outer join MHP on mpp.PlantCode = MHP.PlantCode
inner join MHL on MHL.LineNumber = MHP.LineNumber or (mpp.PartNumber = MHL.PartNumber and MHP.LineNumber is null)
group by mpp.PlantCode, mpp.PartNumber
order by mpp.PlantCode, mpp.PartNumber;
```
`SQL Fiddle demo`
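The join logic is portable enough to sanity-check on sqlite; a sketch using the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MHL (LineNumber TEXT, PartNumber TEXT, Qty INT);
    INSERT INTO MHL VALUES ('10001','ABC',10), ('10002','ABC',100),
                           ('10003','DEF',50), ('10005','KXY',25), ('10006','KXY',30);
    CREATE TABLE MHP (PlantCode TEXT, LineNumber TEXT);
    INSERT INTO MHP VALUES ('20','10001'), ('21','10002'), ('80','10005'), ('80','10006');
    CREATE TABLE MasterPLantParts (PlantCode TEXT, PartNumber TEXT);
    INSERT INTO MasterPLantParts VALUES ('20','ABC'), ('21','ABC'), ('96','ABC'),
                                        ('11','DEF'), ('80','KXY');
""")

rows = conn.execute("""
    SELECT mpp.PlantCode, mpp.PartNumber, SUM(MHL.Qty) AS SumQty
    FROM MasterPLantParts mpp
    LEFT OUTER JOIN MHP ON mpp.PlantCode = MHP.PlantCode
    INNER JOIN MHL ON MHL.LineNumber = MHP.LineNumber
       OR (mpp.PartNumber = MHL.PartNumber AND MHP.LineNumber IS NULL)
    GROUP BY mpp.PlantCode, mpp.PartNumber
    ORDER BY mpp.PlantCode, mpp.PartNumber
""").fetchall()

print(rows)
```

Plants 11 and 96 have no MHP entries, so the `MHP.LineNumber IS NULL` branch of the join condition kicks in and sums straight from MHL, giving the expected 50 for 11/DEF.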
**Oracle 8i syntax**:
```
select
mpp.PlantCode PlantCode,
mpp.PartNumber PartNumber,
sum(MHL.Qty) as SumQty
from MasterPLantParts mpp, MHP, MHL
where mpp.PlantCode = MHP.PlantCode(+)
and (MHL.LineNumber = MHP.LineNumber or (mpp.PartNumber = MHL.PartNumber and MHP.LineNumber is null))
group by mpp.PlantCode, mpp.PartNumber
order by mpp.PlantCode, mpp.PartNumber;
```
**Reference**:
[Oracle SQL\*Plus Pocket Reference](http://oreilly.com/catalog/orsqlpluspr2/chapter/ch01.html) | I haven't tested it on any of your dbms, but it should work with minor fixes on both of them.
```
select PlantCode, PartNumber, sum(qty)
from (
select x.PlantCode, x.PartNumber, y.Qty
from MasterPLantParts x
join MHL y
on x.PartNumber = y.PartNumber
left join MHP z
on x.PlantCode = z.PlantCode
where z.PlantCode is null
union
select x.PlantCode, x.PartNumber, z.Qty
from MasterPLantParts x
join MHP y
on x.PlantCode = y.PlantCode
join mhl z
on y.LineNumber = z.LineNumber
) as T
group by PlantCode, PartNumber;
PLANTCODE PARTNUMBER 3
--------- ---------- -----------
11 DEF 50
20 ABC 10
21 ABC 100
80 KXY 55
96 ABC 110
``` | Complex Outer Join condition | [
"",
"sql",
"sql-server",
""
] |
Here is an extract from the fairly large table (SQL Server 2005) I'm querying against:
```
id (primary key) | account | phone | employee | address
------------------------------------------------------------------
1 | 123 | Y | Y | N
2 | 456 | N | N | N
3 | 789 | Y | Y | Y
```
I need to only return the rows that have at least one Y in phone, employee, or address (there are about 10 others not shown here). Then I need to order those results by the number of Y's they have in any of the three.
I've tried getting the "tagTotal" like this:
```
SELECT
SUM(
CASE WHEN [phone] = 'Y' THEN 1 ELSE 0 END
+ CASE WHEN [employee] = 'Y' THEN 1 ELSE 0 END
+ CASE WHEN [address] = 'Y' THEN 1 ELSE 0 END
)
FROM table
GROUP BY id
```
this returns:
```
tagTotal
---------------
2
0
3
```
I'm at a loss on how to combine this with my existing giant query and order by it without adding each column to the group by at the end. | Since the sum of values you're after is on the same row, you don't need to aggregate the results, thereby eliminating the need for the group by.
```
SELECT
CASE WHEN [phone] = 'Y' THEN 1 ELSE 0 END +
CASE WHEN [employee] = 'Y' THEN 1 ELSE 0 END +
CASE WHEN [address] = 'Y' THEN 1 ELSE 0 END as Total
FROM table
``` | You can just do the addition as a column and then order the results. The aggregation seems unnecessary, at least with the sample data in the question. There is only one row per `id`.
```
SELECT t.*
FROM (SELECT t.*,
((CASE WHEN [phone] = 'Y' THEN 1 ELSE 0 END) +
(CASE WHEN [employee] = 'Y' THEN 1 ELSE 0 END) +
(CASE WHEN [address] = 'Y' THEN 1 ELSE 0 END)
) as NumYs
FROM table t
) t
WHERE NumYs > 0
ORDER BY NumYs DESC;
``` | Order SQL query by the sum of specefic columns | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
```
SELECT j.JobNo, j.JobDescription
FROM Job j
JOIN Job_Nav n ON j.JobNo = n.JobNo
WHERE n.Blocked = 0
AND n.[Starting Date] < GETDATE() AND n.[Ending Date] > GETDATE()
```
I want to change **AND n.[Starting Date] < GETDATE() AND n.[Ending Date] > GETDATE()** into a BETWEEN clause. Can anyone tell me how to use the **BETWEEN clause** for the above expression?
THANKS | `getdate() between n.[Starting Date] AND n.[Ending Date]`
is the equivalent of
`getdate() >= n.[Starting Date] and getdate() <= n.[Ending Date]`
(note that it is using `>=` and `<=` instead of `>` and `<`)
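The inclusive/exclusive difference is easy to demonstrate; a small sqlite sketch (dates invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (d TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?)",
                 [("2014-01-01",), ("2014-01-15",), ("2014-01-31",)])

between = conn.execute(
    "SELECT COUNT(*) FROM jobs WHERE d BETWEEN '2014-01-01' AND '2014-01-31'"
).fetchone()[0]
strict = conn.execute(
    "SELECT COUNT(*) FROM jobs WHERE d > '2014-01-01' AND d < '2014-01-31'"
).fetchone()[0]

print(between, strict)  # 3 1 -- BETWEEN includes both endpoints
```

BETWEEN picks up both boundary dates, while the strict `<`/`>` comparison keeps only the middle row, which is exactly the semantic gap noted above.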
I assume you want to use `between` to make your code a little more compact and readable, but your code as written isn't a candidate for such a substitution. | ```
SELECT j.JobNo, j.JobDescription
FROM Job j
JOIN Job_Nav n ON j.JobNo = n.JobNo
WHERE n.Blocked = 0
AND GETDATE() BETWEEN n.[Starting Date] AND n.[Ending Date]
```
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
select *
from records
where id in ( select max(id) from records group by option_id )
```
This query works fine even on millions of rows. However, as you can see from the result of the explain statement:
```
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=30218.84..31781.62 rows=620158 width=44) (actual time=1439.251..1443.458 rows=1057 loops=1)
-> HashAggregate (cost=30218.41..30220.41 rows=200 width=4) (actual time=1439.203..1439.503 rows=1057 loops=1)
-> HashAggregate (cost=30196.72..30206.36 rows=964 width=8) (actual time=1438.523..1438.807 rows=1057 loops=1)
-> Seq Scan on records records_1 (cost=0.00..23995.15 rows=1240315 width=8) (actual time=0.103..527.914 rows=1240315 loops=1)
-> Index Scan using records_pkey on records (cost=0.43..7.80 rows=1 width=44) (actual time=0.002..0.003 rows=1 loops=1057)
Index Cond: (id = (max(records_1.id)))
Total runtime: 1443.752 ms
```
`(cost=0.00..23995.15 rows=1240315 width=8)` <- Here it says it is scanning all rows and that is obviously inefficient.
I also tried reordering the query:
```
select r.* from records r
inner join (select max(id) id from records group by option_id) r2 on r2.id= r.id;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=30197.15..37741.04 rows=964 width=44) (actual time=835.519..840.452 rows=1057 loops=1)
-> HashAggregate (cost=30196.72..30206.36 rows=964 width=8) (actual time=835.471..835.836 rows=1057 loops=1)
-> Seq Scan on records (cost=0.00..23995.15 rows=1240315 width=8) (actual time=0.336..348.495 rows=1240315 loops=1)
-> Index Scan using records_pkey on records r (cost=0.43..7.80 rows=1 width=44) (actual time=0.003..0.003 rows=1 loops=1057)
Index Cond: (id = (max(records.id)))
Total runtime: 840.809 ms
```
`(cost=0.00..23995.15 rows=1240315 width=8)` <- Still scanning all rows.
**I tried with and without index on `(option_id)`, `(option_id, id)`, `(option_id, id desc)`, none of them had any effect on the query plan.**
Is there a way of executing a groupwise maximum query in Postgres without scanning all rows?
What I am looking for, programmatically, is an index which stores the maximum id for each `option_id` as they are inserted into the records table. That way, when I query for maximums of option\_ids, I should only need to scan index records as many times as there are different option\_ids.
I've seen `select distinct on` answers all over SO from high ranking users (thanks to @Clodoaldo Neto for giving me keywords to search for). Here's why it doesn't work:
```
create index index_name on records(option_id, id desc)
select distinct on (option_id) *
from records
order by option_id, id desc
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=0.43..76053.10 rows=964 width=44) (actual time=0.049..1668.545 rows=1056 loops=1)
-> Index Scan using records_option_id_id_idx on records (cost=0.43..73337.25 rows=1086342 width=44) (actual time=0.046..1368.300 rows=1086342 loops=1)
Total runtime: 1668.817 ms
```
That's great, it's using an index. However using an index to scan all ids doesn't really make much sense. According to my executions, it is in fact slower than a simple sequential scan.
Interestingly enough, MySQL 5.5 is able to optimize the query simply using an index on `records(option_id, id)`
```
mysql> select count(1) from records;
+----------+
| count(1) |
+----------+
| 1086342 |
+----------+
1 row in set (0.00 sec)
mysql> explain extended select * from records
inner join ( select max(id) max_id from records group by option_id ) mr
on mr.max_id= records.id;
+------+----------+--------------------------+
| rows | filtered | Extra |
+------+----------+--------------------------+
| 1056 | 100.00 | |
| 1 | 100.00 | |
| 201 | 100.00 | Using index for group-by |
+------+----------+--------------------------+
3 rows in set, 1 warning (0.02 sec)
``` | Assuming relatively *few* rows in `options` for *many rows in `records`*.
Typically, you would have a look-up **table `options`** that is referenced from `records.option_id`, ideally with a [foreign key constraint](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-FK). If you don't, I suggest to create one to enforce referential integrity:
```
CREATE TABLE options (
option_id int PRIMARY KEY
, option text UNIQUE NOT NULL
);
INSERT INTO options
SELECT DISTINCT option_id, 'option' || option_id -- dummy option names
FROM records;
```
Then there is no need to emulate a [loose index scan](https://wiki.postgresql.org/wiki/Loose_indexscan) any more and this becomes **very simple and fast**. Correlated subqueries can use a plain index on `(option_id, id)`.
```
SELECT option_id, (SELECT max(id)
FROM records
WHERE option_id = o.option_id) AS max_id
FROM options o
ORDER BY 1;
```
This includes options with no match in table `records`. You get NULL for `max_id` and you can easily remove such rows in an outer `SELECT` if needed.
Or (same result):
```
SELECT option_id, (SELECT id
FROM records
WHERE option_id = o.option_id
ORDER BY id DESC NULLS LAST
LIMIT 1) AS max_id
FROM options o
ORDER BY 1;
```
May be slightly faster. The subquery uses the sort order `DESC NULLS LAST` - same as the aggregate function `max()` which ignores NULL values. Sorting just `DESC` would have NULL first:
* [Why do NULL values come first when ordering DESC in a PostgreSQL query?](https://stackoverflow.com/questions/20958679/why-do-null-values-come-first-when-ordering-desc-in-a-postgresql-query/20959470#20959470)
The perfect index for this:
```
CREATE INDEX on records (option_id, id DESC NULLS LAST);
```
Index sort order doesn't matter much while columns are defined `NOT NULL`.
There can still be a sequential scan on the small table `options`, that's just the fastest way to fetch all rows. The `ORDER BY` may bring in an index (only) scan to fetch pre-sorted rows.
The big table `records` is only accessed via (bitmap) index scan or, if possible, [*index-only scan*](https://wiki.postgresql.org/wiki/Index-only_scans).
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=856444a0c4d83362465d96fc74634b13)* - showing two index-only scans for the simple case
Old [sqlfiddle](http://sqlfiddle.com/#!17/066467/2)
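The correlated-subquery shape is easy to try on any engine; a minimal sqlite sketch (schema as in this answer, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (id INTEGER PRIMARY KEY, option_id INT);
    CREATE INDEX records_opt_id ON records (option_id, id DESC);
    INSERT INTO records (id, option_id) VALUES (1,1),(2,1),(3,2),(4,2),(5,2),(6,3);
    CREATE TABLE options (option_id INT PRIMARY KEY);
    INSERT INTO options VALUES (1),(2),(3),(4);  -- option 4 has no records
""")

rows = conn.execute("""
    SELECT option_id,
           (SELECT max(id) FROM records WHERE option_id = o.option_id) AS max_id
    FROM options o
    ORDER BY 1
""").fetchall()

print(rows)  # [(1, 2), (2, 5), (3, 6), (4, None)]
```

Option 4 has no matching rows and comes back with a NULL `max_id`, which is the case the outer SELECT can filter out if unmatched options aren't wanted.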
***Or*** use `LATERAL` joins for a similar effect in Postgres 9.3+:
* **[Optimize GROUP BY query to retrieve latest row per user](https://stackoverflow.com/questions/25536422/optimize-group-by-query-to-retrieve-latest-record-per-user/25536748#25536748)** | You mention wanting an index that only indexes the max(id) for each option\_id. This is not currently supported by PostgreSQL. If such a feature is added in the future, it would be probably done through the mechanism of making a materialized view on the aggregate query, and then indexing the materialized view. I wouldn't expect for at least a couple years, though.
What you can do now, though, is use a recursive query make it skip through the index to each unique value of option\_id. See [the PostgreSQL wiki page](http://wiki.postgresql.org/wiki/Loose_indexscan) for a general description of technique.
The way you can use this for your case it write the recursive query to return the distinct values of option\_id, and then for each one of those subselect the max(id):
```
with recursive dist as (
select min(option_id) as option_id from records
union all
select (select min(option_id) from records where option_id > dist.option_id)
from dist where dist.option_id is not null
)
select option_id,
(select max(id) from records where records.option_id=dist.option_id)
from dist where option_id is not null;
```
It is ugly, but you can hide it behind a view.
In my hands this runs in 43ms, rather than 513ms for the `distinct on` variety.
It could probably be made about twice as fast if you can find a way to incorporate the max(id) into the recursive query, but I couldn't find a way to do that. The problem is that these queries have a rather restrictive syntax, you cannot use "limit" or "order by" in conjunction with the UNION ALL.
This query touches pages widely scattered throughout the index, and if those pages don't fit in the cache, then you will be doing a lot of inefficient IO. However, if this type of query is popular, then the 1057 leaf index pages will have little problem staying in cache.
This is how a set up my test case:
```
create table records as select floor(random()*1057)::integer as option_id, floor(random()*50000000)::integer as id from generate_series(1,1240315);
create index on records (option_id ,id);
explain analyze;
``` | Optimize groupwise maximum query | [
"",
"sql",
"postgresql",
"query-optimization",
"greatest-n-per-group",
"groupwise-maximum",
""
] |
I have an issue with a WHERE clause. I want `is null` and `<>` functionality on a column together. I wrote something like the following, but it removes rows where `t1.column1 = 'test'` and also rows where it is null. I want to remove only 'test'.
```
where (t1.column1 is null
or t2.column1 is null
or t2.column2 is null)
and t1.column1 <> 'test'
```
Thanks in advance. | You could try:
```
WHERE ( t1.column1 != 'test'
AND (t2.column1 IS NULL OR t2.column2 IS NULL)
)
OR t1.column1 IS NULL
```
to get your desired results. | You may have to break each out:
```
where (t1.column1 is null and t1.column1 <> 'test')
or (t2.column1 is null and t2.column1 <> 'test')
or (t2.column2 is null and t2.column2 <> 'test')
```
Or whatever criteria you need for each condition... | where clause with Is null and not in | [
"",
"sql",
"t-sql",
""
] |
I have to extract a dosage and unit of measure from a string.
A string can contain very similar patterns, but I just need to extract the very first number in the string, together with the unit that follows it.
For example:
> Pain Medication. 20 mg/100 mL NS (0.2mg/mL) Therapy: IV PCA Adult /
> Qualifier: Standard Continuous Rate = 0 mg/hr, IV, Routine PCA Dose =
> 0.4 mg
I need to extract **20** as a dosage, and **"mg"** that follows it as a unit of measure in a Select statement.
Any ideas?
Thanks! | [SQL Fiddle](http://sqlfiddle.com/#!3/2b6f6/5)
**MS SQL Server 2008 Schema Setup**:
```
CREATE TABLE Table1
([dosage] varchar(144))
;
INSERT INTO Table1
([dosage])
VALUES
('Pain Medication. 20 mg/100 mL NS (0.2mg/mL)
Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr,
IV, Routine PCA Dose = 0.4 mg')
;
```
**Query 1**:
```
SELECT substring(dosage,
PATINDEX('%[0-9]%',dosage),
PATINDEX('%/%',dosage)-PATINDEX('%[0-9]%',dosage)
)
FROM Table1
```
**[Results](http://sqlfiddle.com/#!3/2b6f6/5/0)**:
```
| COLUMN_0 |
|----------|
| 20 mg |
``` | --Schema
```
CREATE TABLE script
(
id int identity primary key,
details varchar(200)
);
INSERT INTO script
(details)
VALUES
('Pain Medication. 20 mg/100 mL NS (0.2mg/mL) Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, IV, Routine PCA Dose = 0.4 mg'),
('Pain Medication. 300 mg/100 mL NS (0.2mg/mL) Therapy: IV PCA Adult / Qualifier: Standard Continuous Rate = 0 mg/hr, IV, Routine PCA Dose = 0.4 mg');
```
--Dosage
```
SELECT
SUBSTRING(
SUBSTRING(
details,PATINDEX('%[0-9]%',details),
(select (PATINDEX('%[/]%',details)-PATINDEX('%[0-9]%',details)))),
0, PATINDEX('%[ ]%',SUBSTRING(
details,PATINDEX('%[0-9]%',details),
(select (PATINDEX('%[/]%',details)-PATINDEX('%[0-9]%',details)))))) as [Dosage]
from script
```
-- Units
```
SELECT
SUBSTRING(
SUBSTRING(
details,PATINDEX('%[0-9]%',details),
(select (PATINDEX('%[/]%',details)-PATINDEX('%[0-9]%',details)))),
PATINDEX('%[ ]%',SUBSTRING(
details,PATINDEX('%[0-9]%',details),
(select (PATINDEX('%[/]%',details)-PATINDEX('%[0-9]%',details))))),
3) as [Units]
from script
```
SQL Fiddle: <http://sqlfiddle.com/#!6/4e0b6/38> | Extracting first available number and its following text from a string | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a column that has numbers and `characters` in `SQL Server 2005`
Note: Here `character` means anything other than a number.
I need to list all records that have a character in them (other than numbers). The following query will list a record only if there are no numbers in it. But I need to list all records that have characters in them, irrespective of whether they also have numbers.
How can I modify this query to return all records that have at least one character?
**Reference**:
1. [How would I determine if a varchar field in SQL contains any numeric characters?](https://stackoverflow.com/questions/4324912/how-would-i-determine-if-a-varchar-field-in-sql-contains-any-numeric-characters)
2. [SQL query for finding rows with special characters only](https://stackoverflow.com/questions/2968042/sql-query-for-finding-rows-with-special-characters-only?rq=1)
**CODE**
```
DECLARE @WorkOrderHistory TABLE (WorkOrder VARCHAR(10))
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('123456') --EXCLUDE from result
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('abvc@@@') --Need to list in result
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('a+bvc1234') --Need to list in result
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('++') --Need to list in result
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('$1') --Need to list in result
INSERT INTO @WorkOrderHistory (WorkOrder) VALUES ('1.2') --Need to list in result
--Any record without a number
SELECT TOP 10 *
FROM @WorkOrderHistory WO
WHERE WO.WorkOrder NOT like '%[0-9]%'
``` | The caret (`^`) inside a `[ ]` bracket means match any character except those in the list. So if you want to match any character except 0-9, you use `[^0-9]`, and the LIKE (without the NOT) will match all other characters.
This should work:
```
SELECT TOP 10 *
FROM @WorkOrderHistory WO
WHERE WO.WorkOrder LIKE '%[^0-9]%'
```
Of course, you'll match all punctuation, and unprintable characters as well.
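The predicate can be cross-checked against the sample values from the question with a regular expression, where `[^0-9]` means the same thing (a Python sketch, not T-SQL):

```python
import re

# Keep every value that contains at least one non-digit character,
# mirroring the T-SQL predicate LIKE '%[^0-9]%'.
rows = ['123456', 'abvc@@@', 'a+bvc1234', '++', '$1', '1.2']
matches = [r for r in rows if re.search(r'[^0-9]', r)]
print(matches)  # everything except '123456'
```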
Via [http://technet.microsoft.com/en-us/library/ms174214(v=sql.110).aspx](http://technet.microsoft.com/en-us/library/ms174214%28v=sql.110%29.aspx) | select * from sales
where name not like '%[0-9]%' | how to query a table to find rows where there are no numbers in a column | [
"",
"sql",
"sql-server",
""
] |
I have two select case statements that I would like to merge together somehow. I've tried different variations of nesting, and so far I've been getting nothing but errors. I'm using SQL Server 2008 for this, and would like it to be backwards compatible to SQL 2005 if possible, as we have a second instance set up that's 2005.
What I'm doing is looking at the number of payments on an account, and trying to decide if it's weekly, biweekly, monthly, or bi-monthly.
The number of payments scheduled determines which date I use to calculate the days between payments. Because there isn't a true weekly, bi-weekly, etc. selection, we're using a range of days based on the number of days between their first scheduled payment and their last scheduled payment, divided by the number of payments.
Here are the two case statements I have: The first one gives me the number of days between payments.
```
case NoPymnts
when 1 then 0
when 2 then (datediff(d, DueDt1, DueDt2)/NoPymnts)
when 3 then (datediff(d, DueDt1, DueDt3)/NoPymnts)
when 4 then (datediff(d, DueDt1, DueDt4)/NoPymnts)
when 5 then (datediff(d, DueDt1, DueDt5)/NoPymnts)
when 6 then (datediff(d, DueDt1, DueDt6)/NoPymnts)
when 7 then (datediff(d, DueDt1, DueDt7)/NoPymnts)
when 8 then (datediff(d, DueDt1, DueDt8)/NoPymnts)
when 9 then (datediff(d, DueDt1, DueDt9)/NoPymnts)
when 10 then (datediff(d, DueDt1, DueDt10)/NoPymnts)
when 11 then (datediff(d, DueDt1, DueDt11)/NoPymnts)
when 12 then (datediff(d, DueDt1, DueDt12)/NoPymnts)
end as DayCount
```
This one contains the logic to decide the payment frequency:
```
case
when DayCount = 0 then 'SinglePayment'
when DayCount <=7 then 'Weekly'
when DayCount >7 and DayCount <= 21 then 'Bi-Weekly'
when DayCount >21 and Daycount <= 42 then 'Monthly'
when DayCount > 42 then 'Bi-Monthly'
End As Frequency
```
One option I know that works, but is large and messy is to put the second case into a sub-case statement for each of the options in the first case statement. I would really prefer to avoid this. The second option I would like to avoid is using a temp table. Though, if it is not possible to do what I am trying here, I may be forced to use one anyways.
Thank you for your help! | Create a user defined function for the second case as below
```
CREATE FUNCTION GetFrequencyFromDayCount
(
@DayCount INT
)
RETURNS VARCHAR(50)
AS
BEGIN
DECLARE @Freq NVARCHAR(50)
SELECT @Freq = case
when @DayCount = 0 then 'SinglePayment'
when @DayCount <=7 then 'Weekly'
when @DayCount >7 and @DayCount <= 21 then 'Bi-Weekly'
when @DayCount >21 and @DayCount <= 42 then 'Monthly'
when @DayCount > 42 then 'Bi-Monthly'
End
RETURN @Freq
END
GO
```
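Restated outside T-SQL, the bucketing the function implements is just a chain of threshold checks (a Python sketch of the same logic):

```python
def frequency_from_day_count(day_count):
    """Same thresholds as the T-SQL CASE above."""
    if day_count == 0:
        return 'SinglePayment'
    if day_count <= 7:
        return 'Weekly'
    if day_count <= 21:
        return 'Bi-Weekly'
    if day_count <= 42:
        return 'Monthly'
    return 'Bi-Monthly'

print(frequency_from_day_count(14))  # Bi-Weekly
```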
Call the function in the first case statement, something like below
```
select
case NoPymnts
when 1 then 0
when 2 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt2)/NoPymnts))
when 3 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt3)/NoPymnts))
when 4 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt4)/NoPymnts))
when 5 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt5)/NoPymnts))
when 6 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt6)/NoPymnts))
when 7 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt7)/NoPymnts))
when 8 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt8)/NoPymnts))
when 9 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt9)/NoPymnts))
when 10 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt10)/NoPymnts))
when 11 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt11)/NoPymnts))
when 12 then GetFrequencyFromDayCount((datediff(d, DueDt1, DueDt12)/NoPymnts))
end as DayCount
``` | Are you simply looking for a way to find the day count first and then build on it? Then use a derived table query:
```
select
some_column,
other_column,
case
when DayCount = 0 then 'SinglePayment'
when DayCount <=7 then 'Weekly'
when DayCount >7 and DayCount <= 21 then 'Bi-Weekly'
when DayCount >21 and Daycount <= 42 then 'Monthly'
when DayCount > 42 then 'Bi-Monthly'
End As Frequency
from
(
select
some_column,
other_column,
case NoPymnts
when 1 then 0
when 2 then (datediff(d, DueDt1, DueDt2)/NoPymnts)
when 3 then (datediff(d, DueDt1, DueDt3)/NoPymnts)
when 4 then (datediff(d, DueDt1, DueDt4)/NoPymnts)
when 5 then (datediff(d, DueDt1, DueDt5)/NoPymnts)
when 6 then (datediff(d, DueDt1, DueDt6)/NoPymnts)
when 7 then (datediff(d, DueDt1, DueDt7)/NoPymnts)
when 8 then (datediff(d, DueDt1, DueDt8)/NoPymnts)
when 9 then (datediff(d, DueDt1, DueDt9)/NoPymnts)
when 10 then (datediff(d, DueDt1, DueDt10)/NoPymnts)
when 11 then (datediff(d, DueDt1, DueDt11)/NoPymnts)
when 12 then (datediff(d, DueDt1, DueDt12)/NoPymnts)
end as DayCount
from mytable
) myderivedtable;
``` | SQL Select Case Combining | [
"",
"sql",
"sql-server-2008",
"case",
""
] |
I want to trim a string to a specified length. If the string is shorter, I don't want to do anything. I found a function substr() which does the job. However, there is nothing in the Oracle documentation about what happens if the string is shorter than the maximal length.
For example this:
```
select substr('abc',1,5) from dual;
```
returns 'abc', which is what I need.
I'd like to ask if this is safe, because the function seems not to be defined for this usage. Is there a better way how to truncate? | This is an interesting question. Surprisingly, the [documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions162.htm) doesn't seem to cover this point explicitly.
I think what you are doing is quite safe. `substr()` is not going to "add" characters to the end of the string when the string is too short. I have depended on this behavior in many databases, including Oracle, over time. This is how similar functions work in other databases and most languages.
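Although the question is about Oracle, SQLite's `substr` behaves the same way for an over-long length argument, so the behaviour is easy to sanity-check (a quick sketch, not an Oracle test):

```python
import sqlite3

# A too-large length argument simply returns the rest of the string;
# nothing is padded or added.
conn = sqlite3.connect(":memory:")
result = conn.execute("SELECT substr('abc', 1, 5)").fetchone()[0]
print(result)  # abc
```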
The one sort-of-exception would be when the original data type is a `char()` rather than `varchar2()` type. In this case, the function would return a string of the same type, so it might be padded with spaces. That, though, is a property of the type not really of the function. | It is totally ok, but if you want, you can use this query:
```
select substr('abc',1,least(5,length('abc'))) from dual;
``` | Using substr to trim string on Oracle | [
"",
"sql",
"oracle",
"plsql",
"substr",
""
] |
By user sorting I mean that as a user on the site you see a bunch of items, and you are supposed to be able to reorder them (I'm using jQuery UI).
The user only sees 20 items on each page, but the total number of items can be thousands.
I assume I need to add another column in the table for custom ordering.
If the user sees items from 41-60 and he sorts them like:
41 = 2nd
42 = 1st
43 = fifth
etc.
I can't just set the ordering column to 2,1,5.
I would need to go through the entire table and change each record.
Is there any way to avoid this and somehow sort only the current selection? | Add another column to store the custom order, just as you suggested yourself. You can avoid the problem of having to reassign all rows' values by using a `REAL`-typed column: For new rows, you still use an increasing integer sequence for the column's value. But if a user reorders a row, the decimal data type will allow you to use the formula `Β½ (previous row's value + next row's value)` to update the column of the single row that was moved. You
have got two special cases to take care of, namely if a user moves a row to the very beginning or end of the list. In that case, just use `min - 1` rsp. `max + 1`.
This approach is the simplest I can think of, but it also has some downsides. First, it has a theoretical limitation due to the datatype having only double-precision. After a finite number of reorderings, the values are too close together for their average to be a different number. But that's really only a theoretical limit you should never reach in practical applications. Also, the column will use 8 bytes of memory per row, which probably is much more than you actually need.
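The midpoint rule is easy to state outside SQL; the helper and column name below are made up for illustration (a sketch):

```python
def new_sort_key(prev_key, next_key):
    """Sort key for a row dropped between two neighbours (REAL column)."""
    if prev_key is None:         # moved to the very beginning of the list
        return next_key - 1
    if next_key is None:         # moved to the very end of the list
        return prev_key + 1
    return (prev_key + next_key) / 2

print(new_sort_key(1.0, 2.0))    # 1.5
print(new_sort_key(None, 1.0))   # 0.0
print(new_sort_key(3.0, None))   # 4.0
```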
If your application might scale to the point where those 8 bytes matter or where you might have users that overeagerly reorder rows, you should instead stick to the `INTEGER` column and use multiples of a constant number as the default values (e.g. `100, 200, 300, ..`). You still use the update formula from above, but whenever two values become too close together, you reassign all values. By tweaking the constant multiplier to the average table size / user behaviour, you can control how often this expensive operation has to be done. | There are a couple ways I can think of to do this. One would be to use a SELECT FROM SELECT style statement. As in something like this.
```
SELECT *
FROM (
SELECT col1, col2, col3...
FROM ...
WHERE ...
LIMIT n,m
) as Table_A
ORDER BY ...
```
The second option would be to use temp tables such as:
```
INSERT INTO temp_table_A SELECT ... FROM ... WHERE ... LIMIT n,m;
SELECT * FROM temp_table_A ORDER BY ...
```
Another option to look at would be jQuery plugin like [DataTables](https://datatables.net/) | Solution for allowing user sorting in SQlite | [
"",
"sql",
"sqlite",
"sorting",
""
] |
I'm wondering why the following two queries produce different results (the first query has more rows than second).
```
SELECT * FROM A
JOIN ...
JOIN ...
JOIN C ON ...
LEFT JOIN B ON B.id = A.id AND B.otherId = C.otherId
```
As opposed to:
```
SELECT * FROM A
JOIN ...
JOIN ...
JOIN C ON ...
LEFT JOIN B ON B.id = A.id
WHERE B.otherId = C.otherId
```
Please help me understand. In the second query, the left join has only 1 condition so shouldn't it include all the results from the first query and more (where the extra rows have unmatched `otherId`). Then the `WHERE` clause should ensure that the `otherId` matches, like in the first query. Why are they different? | The `WHERE` is performed first by the Query engine before performing the `JOIN`.
The reasoning being why do the expensive `JOIN`, if we are going to filter some rows later.
The query engines are pretty good at optimizing the query you write.
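A minimal two-table simplification shows the effect; the sample data below is made up, and SQLite stands in for the original engine (a sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (id INT, otherId INT);
CREATE TABLE B (id INT, otherId INT);
INSERT INTO A VALUES (1, 10), (2, 20);
INSERT INTO B VALUES (1, 99), (2, 20);
""")
# Condition inside ON: unmatched A rows are kept, NULL-extended.
in_on = conn.execute("""
    SELECT COUNT(*) FROM A
    LEFT JOIN B ON B.id = A.id AND B.otherId = A.otherId
""").fetchone()[0]
# Condition in WHERE: NULL-extended rows fail the filter, so the
# outer join effectively becomes an inner join.
in_where = conn.execute("""
    SELECT COUNT(*) FROM A
    LEFT JOIN B ON B.id = A.id
    WHERE B.otherId = A.otherId
""").fetchone()[0]
print(in_on, in_where)  # 2 1
```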
Also you will see this effect only in `OUTER JOIN`s. In inner joins both `WHERE` and `JOIN` conditions behave the same. | The second query returns less rows because your `where` clause was filtering the records out, and this is essentially changing the query from a left outer join to an inner join. So, you need to be careful where you place your filters in, but this will not matter if you were to do an inner join. | Why are the two queries different (left join on ... and ... as opposed to using where clause) | [
"",
"sql",
"oracle",
""
] |
I have been tasked with migrating my organization's Oracle SQL Views over to SQL Server. I am a novice when it comes to SQL, and have been unable to figure out the error I am getting from the CASE statement.
In the CASE statement, I want to read in the field's value, which will be a street address such as 123 Main St. The plan is to remove the 'St' substring from the field, and return '123 Main'
It already works in Oracle, I am just having difficulty converting the syntax into SQL Server, and the documentation on [SUBSTRING](http://msdn.microsoft.com/en-us/library/ms187748.aspx) and [CASE](http://msdn.microsoft.com/en-us/library/ms181765.aspx) have not helped me solve the problem. I'm probably overlooking something very simple.
The code is:
```
SELECT c.objectid, p.SHAPE, c.APN_PARCEL_NO AS prc_parcel_no, b.addr_status, b.addr_st_nmbr AS st_number, b.addr_st_frac AS st_fraction, b.addr_st_pfx AS st_prefix
CASE
WHEN SUBSTRING (b.addr_st_name, CHARINDEX( ' ', b.addr_st_name, -1) + 1) = 'ALY'
THEN SUBSTRING (b.addr_st_name, 1, CHARINDEX( ' ', b.addr_st_name, -1) -1)
ELSE b.add_st_name
END
FROM dbo.PARCEL AS p INNER JOIN dbo.TBL_APN_PARCEL_LINK AS c ON p.PRC_PARCEL_NO = c.LAND_APN INNER JOIN dbo.TBL_SITE_ADDRESS_ALL AS b ON c.APN_PARCEL_NO = b.prc_parcel_no
```
and the error I get reads:
> Error Source:.Net SqlClient Data Provider
>
> Error Message: Incorrect syntax near the keyword 'CASE'
I apologize for any formatting problems, still learning how to properly write SQL. For your convenience, the CASE statement code in Oracle is:
```
CASE
WHEN SUBSTR (b.addr_st_name,
INSTR (b.addr_st_name, ' ', -1) + 1
) = 'ALY'
THEN SUBSTR (b.addr_st_name,
1,
INSTR (b.addr_st_name, ' ', -1) - 1
)
ELSE b.addr_st_name
END st_name,
```
tl;dr - Looking to convert Oracle view to SQL Server, error is:
> Incorrect syntax near the keyword 'CASE' | The error is:
```
Incorrect syntax near the keyword 'CASE'
```
In your statement, you have:
```
SELECT c.objectid, p.SHAPE, c.APN_PARCEL_NO AS prc_parcel_no, b.addr_status, b.addr_st_nmbr AS st_number, b.addr_st_frac AS st_fraction, b.addr_st_pfx AS st_prefix
CASE
WHEN SUBSTRING (b.addr_st_name, CHARINDEX( ' ', b.addr_st_name, -1) + 1) = 'ALY'
THEN SUBSTRING (b.addr_st_name, 1, CHARINDEX( ' ', b.addr_st_name, -1) -1)
ELSE b.add_st_name
END
FROM dbo.PARCEL AS p INNER JOIN dbo.TBL_APN_PARCEL_LINK AS c ON p.PRC_PARCEL_NO = c.LAND_APN INNER JOIN dbo.TBL_SITE_ADDRESS_ALL AS b ON c.APN_PARCEL_NO = b.prc_parcel_no
```
The `CASE` statement is a new column being returned, but right before it you do not have a comma separating the new column from the previous column:
```
...b.addr_st_pfx AS st_prefix /*Insert comma here*/ CASE ...
```
Granted, the error message isn't helpful, but at least it's specific: the error is "near `CASE`" in the sense that it's right before it. There may be other issues, but that's the one referenced in the error message.
Another issue I can see is that the column name in the `ELSE` clause is misspelled: you have `add_st_name`, when it should be `addr_st_name`.
The `CASE` statement itself also looks like it has a problem with your conversion of `INSTR` to `CHARINDEX`. `CHARINDEX`'s last parameter is the start position, but negative values are equivalent to 0: it starts at the beginning of the string. Oracle's `INSTR` uses negative positions to search backwards. See <https://stackoverflow.com/a/9479899> for a technique to find the last occurrence of a character (in this case a space) in TSQL.
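The `REVERSE`/`CHARINDEX` arithmetic used in that technique can be sanity-checked outside SQL (a Python sketch with a made-up sample value):

```python
s = '123 Main ALY'
# CHARINDEX(' ', REVERSE(s)): 1-based position of the last space,
# counted from the right end of the string.
pos_from_right = s[::-1].index(' ') + 1
# RIGHT(s, pos_from_right - 1): the last word.
last_word = s[len(s) - pos_from_right + 1:]
# SUBSTRING(s, 1, LEN(s) - pos_from_right): everything before it.
stem = s[:len(s) - pos_from_right]
print(last_word, '|', stem)  # ALY | 123 Main
```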
I think what you're trying to do is something like:
```
SELECT c.objectid, p.SHAPE, c.APN_PARCEL_NO AS prc_parcel_no, b.addr_status,
b.addr_st_nmbr AS st_number, b.addr_st_frac AS st_fraction,
b.addr_st_pfx AS st_prefix,
CASE
WHEN RIGHT(b.addr_st_name, NULLIF(CHARINDEX(' ', REVERSE(b.addr_st_name)), 0) - 1) = 'ALY'
THEN SUBSTRING(b.addr_st_name, 1, LEN(b.addr_st_name) - CHARINDEX(' ', REVERSE(b.addr_st_name)))
ELSE b.addr_st_name
END
FROM dbo.PARCEL AS p
INNER JOIN dbo.TBL_APN_PARCEL_LINK AS c ON p.PRC_PARCEL_NO = c.LAND_APN
INNER JOIN dbo.TBL_SITE_ADDRESS_ALL AS b ON c.APN_PARCEL_NO = b.prc_parcel_no
```
This strips off an ending of "(space)ALY" from `b.addr_st_name` if it exists. Not too familiar with Oracle, but I think that's what your original statement was doing. | It looks like you're missing a comma before your `CASE` statement.
Also, on a side note, you might be able clarify the `CASE` statement like this:
```
SELECT c.objectid, p.SHAPE, c.APN_PARCEL_NO AS prc_parcel_no, b.addr_status, b.addr_st_nmbr AS st_number, b.addr_st_frac AS st_fraction, b.addr_st_pfx AS st_prefix,
CASE SUBSTR (b.addr_st_name, INSTR(b.addr_st_name, ' ', -1) + 1)
WHEN 'ALY'
THEN SUBSTR (b.addr_st_name, 1, INSTR (b.addr_st_name, ' ', -1) - 1)
ELSE b.addr_st_name
END st_name,
FROM dbo.PARCEL AS p INNER JOIN dbo.TBL_APN_PARCEL_LINK AS c ON p.PRC_PARCEL_NO = c.LAND_APN INNER JOIN dbo.TBL_SITE_ADDRESS_ALL AS b ON c.APN_PARCEL_NO = b.prc_parcel_no
```
*MSDN Reference on `CASE`: <http://msdn.microsoft.com/en-us/library/ms181765.aspx>* | CASE Statement Error | [
"",
"sql",
"sql-server",
""
] |
I have a database table with 3 columns being ID, Key and Value.
The ID field is an int, Key is varchar(50) and Value is varchar(100).
My sample data for this table is as follows:
```
ID Key Value
1 Template one
2 RequestedOn 15/04/2014 12:12:27
3 PrintedOn 15/04/2014 12:12:37
4 Template two
5 RequestedOn 16/04/2014 12:22:27
6 PrintedOn 16/04/2014 12:22:37
7 Template three
8 RequestedOn 17/05/2014 12:32:27
9 PrintedOn 17/05/2014 12:32:37
:
:
45 RequestedOn 17/06/2014 12:22:27
46 PrintedOn 17/06/2014 12:22:37
47 Template three
48 RequestedOn 17/06/2014 12:32:27
49 PrintedOn 17/06/2014 12:32:37
```
I want to be able to query the table to return values between certain date ranges.
For example:
I want to return all rows where PrintedOn is between 17/06/2014 12:22:27 and 17/06/2014 12:32:37
I have tried the following query but get **'Conversion failed when converting character string to smalldatetime data type.'** message.
```
SET DATEFORMAT DMY
;with cte as
(
select CONVERT(datetime, CAST(Value as smalldatetime)) as PrintedOn
from ExtendedProperties
where isdate(Value) = 1
)
select * from cte where PrintedOn > '17/05/2014' and PrintedOn < '17/06/2014'
``` | **Disclaimer:** The OP didn't specify the version of SQL Server he is using; in this answer it is assumed to be SQL Server 2012 or better.
The first step should be to get the data in shape with a `PIVOT` or a fake `PIVOT`.
Using the hypothesis that the key 'Template' will always precede the other two, this can be done as follows:
```
SELECT [Template]
, Try_Parse(RequestedOn as DATETIME2 USING 'it-IT') RequestedOn
, Try_Parse(PrintedOn as DATETIME2 USING 'it-IT') PrintedOn
FROM (SELECT [Key], Value
, ID = SUM(CASE WHEN [Key] = 'Template' THEN 1 ELSE 0 END)
OVER(ORDER BY ID)
FROM Table1) d
PIVOT
(MAX(Value) FOR [Key] IN ([Template], [RequestedOn], [PrintedOn])) u
```
The `Try_Parse` is using the Italian culture because Italy is one of the countries where the date is in the format dd/MM/yyyy; using a culture that has a different format will result in `NULL` values.
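The running-SUM grouping trick can be checked in SQLite, using conditional aggregation in place of `PIVOT` (a Python sketch; the table name `kv` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE kv (id INTEGER, key TEXT, value TEXT);
INSERT INTO kv VALUES
  (1, 'Template', 'one'), (2, 'RequestedOn', '15/04/2014'), (3, 'PrintedOn', '15/04/2014'),
  (4, 'Template', 'two'), (5, 'RequestedOn', '16/04/2014'), (6, 'PrintedOn', '16/04/2014');
""")
# Each 'Template' row bumps the running sum, assigning a group id to the
# rows that follow it; MAX(CASE ...) then pivots each group onto one row.
rows = conn.execute("""
    SELECT MAX(CASE WHEN key = 'Template'    THEN value END) AS Template,
           MAX(CASE WHEN key = 'RequestedOn' THEN value END) AS RequestedOn,
           MAX(CASE WHEN key = 'PrintedOn'   THEN value END) AS PrintedOn
    FROM (SELECT key, value,
                 SUM(CASE WHEN key = 'Template' THEN 1 ELSE 0 END)
                   OVER (ORDER BY id) AS grp
          FROM kv)
    GROUP BY grp
    ORDER BY grp
""").fetchall()
print(rows)  # [('one', '15/04/2014', '15/04/2014'), ('two', '16/04/2014', '16/04/2014')]
```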
Having that, everything is a matter of querying the `VIEW`/`CTE`; here I'll use a `CTE`:
```
With T AS (
SELECT [Template]
, Try_Parse(RequestedOn as DATETIME2 USING 'it-IT') RequestedOn
, Try_Parse(PrintedOn as DATETIME2 USING 'it-IT') PrintedOn
FROM (SELECT [Key], Value
, ID = SUM(CASE WHEN [Key] = 'Template' THEN 1 ELSE 0 END)
OVER(ORDER BY ID)
FROM Table1) d
PIVOT
(MAX(Value) FOR [Key] IN ([Template], [RequestedOn], [PrintedOn])) u
)
SELECT *
FROM T
WHERE PrintedOn BETWEEN '20140617 12:22:27' and '20140617 12:32:37'
```
`SQLFiddle demo`
The result will be in the format
```
Template | RequestedOn | PrintedOn
---------+-------------+----------
value | date | date
``` | If using SQL Server 2012 you can replace your CAST with TRY CAST. here is an example of syntax
```
Declare @string as varchar(7)
Set @string ='raresql'
SELECT Try_Cast(@string as smalldatetime) as [Cast Text to smalldatetime]
```
In this case, values like "three" will return NULL instead of raising an exception. | Convert string to datetime in sql
"",
"sql",
"sql-server",
""
] |
My mind is exploding right now.. I can't get any of this to work the way I want to! SQL is seriously such a pain in the butt. (/End Rant)
I have three tables that have some common columns to link with. I am trying to retrieve the ID of one table based on the name from the middle table, based on the code from the farthest table. (Excuse my vocabulary, I am not skilled with SQL or its lingo.) If the farthest table has a code not found in the middle table, it is to default to a certain value. Then, the first table will return the default for null values, etc.
Example,
* `tblCounty` table has an `ID` and `name` column. I am to return the `ID` from `tblCounty` based on the `name` column matching the `name` column of `tblCode`.
* `tblCode` has two columns `name` and `code`. `tblCode` returns the respective `name` based on the matching `code` column with `tblAddress`'s `code` column.
* `tblAdress` has many columns, but shares in common a code field.
My attempt,
```
INSERT INTO vendor (CountyID, Contact)
SELECT
(SELECT a.id
FROM county a
WHERE a.name = (CASE WHEN (SELECT TOP(1) c.countyID
FROM tblAdress c
INNER JOIN tblCode d ON c.CountyID = d.CodeID
WHERE d.CodeID = b.CountyID) IS NULL THEN '**NONE**'
ELSE (SELECT a.CodeName
FROM tblCode a
WHERE a.CodeID = b.CountyID) END)),
b.Contact
FROM
tblAdress b
```
The error I am receiving is:
> Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
Now of course I googled this and looked at results on StackOverflow, but I was unable to apply what I found to my problem.
**Vendor**:
```
CountyID | ....
-------------------
1 | ...
2 | ...
3 | ...
2 | ...
```
**tblCounty**:
```
ID | Name | ...
----------------------
1 | **None**
2 | NYC
3 | Buffalo
```
**tblCode**:
```
Name | Code
--------------
**None** | **None**
NYC | 56A
Buffalo | 75B
```
**tblAdress**:
```
Code | ....
----------------
**None** | ....
56A | ......
75B | .....
56A | .....
```
Using the above tables, I want to transfer all data out of `tblAdress` into another table (`vendor`). In the process I will convert column `Code` to `tblCode`'s column `name` via `code` comparison, then to `tblCounty.ID` via `name` comparison.
**Essentially a catch all is needed.** If a code in `tblAddress` does not exist in `tblCode` or the code is null in `tblAddress`, it will return a default value (`None`). Then `tblCounty` will convert that default value into `ID = 1`, then store it into the `Vendor` table.
**Edit**
```
(SELECT TOP(1)
c.ID
FROM
dbo.Address a
LEFT OUTER JOIN
dbo.tblCode cd ON ISNULL(CASE a.CountyID WHEN ' ' THEN '**None**' ELSE a.CountyID END, '**None**') = cd.CodeID
LEFT OUTER JOIN
dbo.tblCounty c ON c.NAME = cd.CodeName
WHERE a.CountyID = b.countyID)
``` | Firstly, your database doesn't seem to be following the best practices of creating a database.
Ideally, the design of the relationships and tables should prevent you having to do null checks in joins, and the majority of the time a simple left join would do most of what you want. Could you use constraints and ISNULLs when the data is being added to ensure its integrity? Also, I would advise against joining tables on text like county if you can; it would be much more elegant to use an integer primary key.
I suggest that you make sure that your design is solid before progressing, as these problems may just multiply in the future.
That being said, if you are insistent on continuing the way you're going, the following query should do what you want:
```
SELECT tblCounty.ID,
ISNULL(tblAddress.Code, 'none')
--Whatever you want to select
FROM tblCounty
LEFT JOIN tblCode ON tblCounty.Name = tblCode.Name
LEFT JOIN tblAddress ON ISNULL(tblCode.Code, 'none') = ISNULL(tblAddress.Code, 'none')
``` | Would this not get you the desired results?
```
select isnull(a.ID, '**NONE**') as CountyID, c.Contact
from tblCounty a
left join tblCode b on a.Name = b.Name
left join tblAddress c on b.Code = c.Code
``` | Handling null values with join across multiple tables | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm looking to do something like this:
```
SELECT a, b, c, d FROM someTable
WHERE a in (SELECT testA FROM otherTable);
```
Only I want to be able to test if 2 columns exist in a sub select of 2 columns.
```
SELECT a, b, c, d FROM someTable
WHERE a OR b in (SELECT testA, testB FROM otherTable);
```
We are using MS SQL Server 2012 | Try this
```
SELECT a, b, c, d
FROM someTable
WHERE a IN (SELECT testA FROM otherTable)
OR b IN (SELECT testB FROM otherTable)
```
or
```
SELECT a, b, c, d
FROM someTable
WHERE EXISTS
(SELECT NULL
FROM otherTable
WHERE testA = a OR testB = a
OR testA = b OR testB = b)
```
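Both forms can be tried quickly in SQLite (a Python sketch; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE someTable (a, b, c, d);
CREATE TABLE otherTable (testA, testB);
INSERT INTO someTable VALUES (1, 9, 0, 0), (8, 2, 0, 0), (7, 7, 0, 0);
INSERT INTO otherTable VALUES (1, 2), (3, 4);
""")
# A row qualifies if its a matches some testA OR its b matches some testB.
rows = conn.execute("""
    SELECT a, b, c, d FROM someTable
    WHERE a IN (SELECT testA FROM otherTable)
       OR b IN (SELECT testB FROM otherTable)
    ORDER BY a
""").fetchall()
print(rows)  # [(1, 9, 0, 0), (8, 2, 0, 0)]
```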
**UPDATE:**
Maybe you need to add an index on the testB column if you have bad performance.
Also, another option is to use `CROSS APPLY` in `MS SQL`:
```
SELECT a, b, c, d
FROM someTable ST
CROSS APPLY (
SELECT 1
FROM otherTable OT
WHERE OT.testA = ST.a OR OT.testB = ST.b
)
```
If none of these work, try using **UNION**. **UNION** often gives better performance than **OR**:
```
SELECT a, b, c, d
FROM someTable
WHERE a IN (SELECT testA FROM otherTable)
UNION
SELECT a, b, c, d
FROM someTable
WHERE b IN (SELECT testB FROM otherTable)
```
**UPDATE 2:**
For further reading on OR and UNION differences
[Why is UNION faster than an OR statement](https://stackoverflow.com/questions/15361972/why-is-union-faster-than-an-or-statement) | If I'm understanding your question correctly, `LEFT JOIN` is probably the way to go here:
```
SELECT a, b, c, d
FROM TableA ta
LEFT JOIN TableB tb
ON ta.a = tb.a
AND ta.b = tb.b
WHERE tb.a IS NOT NULL
AND tb.c IS NOT NULL
```
You could also use UNION and INNER JOIN:
```
SELECT a, b, c, d
FROM someTable
INNER JOIN OtherTable OT on someTable.B = OT.testB
UNION
SELECT a, b, c, d
FROM someTable
INNER JOIN OtherTable OT ON someTable.A= OT.testA
```
Note that the JOIN approach should be orders of magnitude faster if you have an index on the column | Where one or another column exists in a sub select | [
"",
"sql",
"sql-server",
"t-sql",
"sql-server-2012",
"where-clause",
""
] |
I have a complex scenario which I will try to explain.
I have a column named "Phase" in the ProjectPhases table which holds the description of the project phases. Then there is a table called DailyReport containing fields such as Description, QTY1, QTY2, PHASEID (a foreign key from the ProjectPhases table), etc.
Now the user in the front end will create Daily Report records for each day in a week. I wrote a query to summarise the daily report, containing the data from ProjectPhases as well as the data from the daily report.
```
select P.projectname, PP.Description as Phase, DD.description, DD.QTYSunday, DD.QTYSMonday, DD.QTYTues
from
Document_DailyReport DD LEFT OUTER JOIN
ProjectPhases PP on PP.Id=DD.PhaseId LEFT OUTER JOIN
Projects P on P.Id=DD.ProjectId
```
The output of the query is in the below format
```
ProjectName   Phase     Description   QTYSunday   QTYMonday   QtyTues
Project1      Phase 1   Qty-Sunday    10
Project1      Phase 1   Qty-Monday                10
Project1      Phase 1   Qty-Tues                              10
```
Now I want the output to be in the below format
```
ProjectName Phase QTYSunday QTYMonday QtyTues
Project1 Phase 1 10 10 10
```
I want all the records of the daily report for a particular phase in a single line like above.
Thanks !
EDITED :
My full query below.
```
SELECT P.projectname ,
PP.Description AS Phase,
DD.Description,
DD.DocNumber,
(SELECT CompanyName
FROM Companies
WHERE Id=dbo.GetCompanyIdByUser(DD.InsertedBy)) AS CreatedBy,
DD.ReportDate,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, -(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantitySun,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 1-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantityMon,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 2-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantityTues,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 3-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantityWed,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 4-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantityThur,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 5-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantityFri,
(SELECT sum(quantity)+sum(cast(field9 AS INT))
FROM Document_DailyReportOnSite
WHERE DailyReportId=DD.Id
AND ClassificationId=1
AND DATEDIFF(DAY,reportdate,DATEADD(dd, 6-(DATEPART(dw, (@Date)-1)), @Date)) =0) AS StaffQuantitySat
FROM Document_DailyReport DD
LEFT OUTER JOIN ProjectPhases PP ON PP.Id=DD.PhaseId
LEFT OUTER JOIN Projects P ON P.Id=DD.ProjectId
WHERE DD.ReportDate BETWEEN DATEADD(dd, -(DATEPART(dw,@Date)-1), @Date) AND DATEADD(dd, 6-(DATEPART(dw, @Date)-1), @Date)
```
OUTPUT
```
projectname Phase description CreatedBy StaffQuantityMon StaffQuantityTues StaffQuantityWed StaffQuantityThur
Bollywood Park (MCC-ARCO) Bollywood Theatre Main Contractor Package Daily Report as on 16-Jun-2014 ARCO Contracting 22 NULL NULL NULL
Bollywood Park (MCC-ARCO) Bollywood Theatre Main Contractor Package Daily Report as on 17-Jun-2014 ARCO Contracting NULL 23 NULL NULL
Bollywood Park (MCC-ARCO) Bollywood Theatre Main Contractor Package Daily Report as on 18.06.2014 ARCO Contracting NULL NULL NULL 23
``` | You can try the following:
```
SELECT ProjectName, Phase, SUM(StaffQuantitySun) AS StaffQuantitySun, SUM(StaffQuantityMon) AS StaffQuantityMon, SUM(StaffQuantityTues) AS StaffQuantityTues, SUM(StaffQuantityWed) AS StaffQuantityWed,
SUM(StaffQuantityThur) AS StaffQuantityThur, SUM(StaffQuantityFri) AS StaffQuantityFri, SUM(StaffQuantitySat) AS StaffQuantitySat
FROM
(
select P.projectname ,PP.Description as Phase,DD.Description,DD.DocNumber,
(Select CompanyName from Companies where Id=dbo.GetCompanyIdByUser(DD.InsertedBy)) As CreatedBy,DD.ReportDate,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, -(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantitySun,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 1-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantityMon,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 2-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantityTues,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 3-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantityWed,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 4-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantityThur,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 5-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantityFri,
(select sum(quantity)+sum(cast(field9 as INT)) from Document_DailyReportOnSite where DailyReportId=DD.Id and ClassificationId=1 and DATEDIFF(day,reportdate,DATEADD(dd, 6-(DATEPART(dw, (@Date)-1)), @Date)) =0) as StaffQuantitySat
from
Document_DailyReport DD LEFT OUTER JOIN
ProjectPhases PP on PP.Id=DD.PhaseId LEFT OUTER JOIN
Projects P on P.Id=DD.ProjectId
Where DD.ReportDate between DATEADD(dd, -(DATEPART(dw,@Date)-1), @Date) and DATEADD(dd, 6-(DATEPART(dw, @Date)-1), @Date)
) DerivedTable
GROUP BY ProjectName, Phase
```
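The collapsing relies on `SUM()` ignoring `NULL`s, which a tiny SQLite example can confirm (a Python sketch; the table name and sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE report (project TEXT, phase TEXT, mon INT, tue INT, wed INT);
INSERT INTO report VALUES
  ('P1', 'Phase 1', 22, NULL, NULL),
  ('P1', 'Phase 1', NULL, 23, NULL),
  ('P1', 'Phase 1', NULL, NULL, 23);
""")
# SUM() skips NULLs, so the three partially filled rows collapse into one.
row = conn.execute("""
    SELECT project, phase, SUM(mon), SUM(tue), SUM(wed)
    FROM report
    GROUP BY project, phase
""").fetchone()
print(row)  # ('P1', 'Phase 1', 22, 23, 23)
```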
Basically, I've embedded your query as a derived table, then GROUP BY ProjectName and Phase, and then take a SUM of the columns that you want all on one line. | Use `GROUP BY` Function
Try this:
```
SELECT P.projectname, PP.Description AS Phase,
SUM(DD.QTYSunday) AS QTYSunday,
SUM(DD.QTYSMonday) AS QTYSMonday,
SUM(DD.QTYTues) AS QTYTues
FROM Projects P
INNER JOIN Document_DailyReport DD ON P.Id = DD.ProjectId
LEFT OUTER JOIN ProjectPhases PP ON PP.Id = DD.PhaseId
GROUP BY P.projectname, PP.Description;
``` | SQL Select Complex logic | [
"",
"sql",
"sql-server",
"sql-server-2008",
"group-by",
"sum",
""
] |