Prompt
stringlengths 10
31k
| Chosen
stringlengths 3
29.4k
| Rejected
stringlengths 3
51.1k
| Title
stringlengths 9
150
| Tags
listlengths 3
7
|
|---|---|---|---|---|
Let's say we have this silly query:
```
Select * From Emp Where Id In (Select Id From Emp)
```
Let's do a small modification inside `IN` condition by adding an `Order By` clause.
```
Select * From Mail Where Id In (Select Id From Mail Order By Id)
```
Then we are getting the following error:
> ORA-00907: missing right parenthesis
Oracle assumes that IN condition will end after declaring the From table. As a result waits for right parenthesis, but we give an Order By instead.
Why can't we add an Order By inside IN condition?
**FYI**: I don't ask for a equivalent query. This is an example after all. I just can't understand why this error occurs.
|
Consider the condition `x in (select something from somewhere)`. It returns true if `x` is equal to any of the `something`s returned from the query - regardless of whether it's the first, second, last, or anything in the middle. The order that the `something`s are returned is inconsequential. Adding an `order by` clause to a query often comes with a hefty performance hit, so I'm guessing this this limitation was introduced to prevent adding a clause that has no impact on the query's correctness on the one hand and may be quite expensive on the other.
|
It would not make sense to sort the values inside the **IN** clause. Think in this way:
```
IN (a, b, c)
```
is same as
```
IN (c, b, a)
```
IS SAME AS
```
IN (b, c, a)
```
Internally Oracle applies **OR** condition, so it is equivalent to:
```
WHERE id = a OR id = b OR id = c
```
Would it make any sense to order the conditions?
Ordering comes at it's own expense. So, when there is no need to sort, just don't do it. And in this case, Oracle applied the same rule.
When it comes to performance of the query, optimizer needs choose the best possible execution plan i.e. with the least cost to achieve the desired output. ORDER BY is useless in this case, and *Oracle did a good job to prevent using it at all*.
For the [documentation](http://docs.oracle.com/cd/B19306_01/server.102/b14200/conditions013.htm),
> ```
> Type of Condition Operation
> ----------------- ----------
> IN Equal-to-any-member-of test. Equivalent to =ANY.
> ```
So, when you need to test **ANY** value for membership in a list of values, there is no need of ordered list, just a random matching does the job.
|
Using Order By in IN condition
|
[
"",
"sql",
"oracle",
"syntax-error",
""
] |
NOTE, this is a homework question.
Please show the department number and the lowest salary in the department whose average salary is the highest average salary.
This is what I have so far,
```
SELECT DEPARTMENT_ID, MAX_AVG_SALARY
FROM
(SELECT DEPARTMENT_ID, AVG(SALARY) AS MAX_AVG_SALARY
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID)
WHERE MAX_AVG_SALARY =
(SELECT MAX(MAX_AVG_SALARY)
FROM
(SELECT DEPARTMENT_ID,
AVG(SALARY) AS MAX_AVG_SALARY
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID
));
```
I can get the department\_id with the highest salary, but then how do I find the lowest salary in the same department?
Please help!
Thanks!
|
```
SELECT MINIMUM_SALARY,DEPARTMENT_ID
FROM
(
SELECT AVG(SALARY) AS AVERAGE_SALARY,
MIN(SALARY) AS MINIMUM_SALARY,
DEPARTMENT_ID
FROM EMPLOYEES
GROUP BY DEPARTMENT_ID
)EMPLOYEE_AGGREGATED
WHERE
AVERAGE_SALARY = (SELECT MAX(AVG(SALARY)) FROM EMPLOYEES GROUP BY DEPARTMENT_ID)
```
|
Solution with [analytic functions](http://oracle-base.com/articles/misc/analytic-functions.php):
```
select department_id, ms min_salary
from (
select department_id, max(avg(salary)) over () mav,
min(min(salary)) over (partition by department_id) ms,
min(avg(salary)) over (partition by department_id) av
from employees group by department_id )
where av = mav order by department_id
```
[SQLFiddle demo](http://sqlfiddle.com/#!4/409d71/1)
|
ORACLE SQL: Show the lowest salary in the department with the highest average salary
|
[
"",
"sql",
"oracle",
""
] |
I have been looking through for answers for this question, and am still struggling.
```
Location Cashier TXN Date Product Date Time Ref
Toronto Z 15-01-04 15-01-04 090501 Transaction was a very
Toronto Z 15-01-04 15-01-04 090501 intresting one
NewYork X 15-01-04 15-01-04 123035 Transaction completed
London Z 15-02-04 15-01-04 100612 This transaction had complications
London Z 15-02-04 15-01-04 100612 in it. We need to follow up
Rochest Y 15-01-04 15-01-04 153045 This transaction was a fun one to
Rochest Y 15-01-04 15-01-04 153045 process
Vanc L 15-01-04 15-01-04 174535 Something intresting
```
I would like the records with the same Location, Cashier, Dates, and Time to only appear once, with the two applicable reference lines being in one box... e.g.
```
Toronto Z 15-01-04 15-01-04 090501 Transaction was a very intresting one
```
I know this is fairly basic, but any help would be appreciated, on my SQL learning journey
Thanks
|
[Sample SQL Fiddle](http://sqlfiddle.com/#!6/6d498/1)
You can combine using STUFF and Group BY. Assuming you are using SQL Server.
```
Select location,Cashier,[Time],
stuff((SELECT distinct ', ' + cast(ref as varchar(500))
FROM t t2
where t2.location = t1.location
FOR XML PATH('')),1,1,'')
Grom t t1
Group By location, Cashier,[Time]
```
|
well if you are using oracle you can use listagg
```
SELECT location, cashier, txn_date, product_date, time LISTAGG(ref, '; ') WITHIN GROUP (ORDER BY ref) as "ref_list"
FROM mytable;
```
|
SQL two records, should be one
|
[
"",
"sql",
"join",
""
] |
(The following is a highly simplified description of my problem. The company policy does not allow me to describe the actual scenario in any detail.)
The DB tables involved are:
```
PRODUCTS:
ID Name
---------
1 Ferrari
2 Lamborghini
3 Volvo
CATEGORIES:
ID Name
----------
10 Sports cars
20 Safe cars
30 Red cars
PRODUCTS_CATEGORIES
ProductID CategoryID
-----------------------
1 10
1 30
2 10
3 20
LOCATIONS:
ID Name
------------
100 Sports car store
200 Safe car store
300 Red car store
400 All cars r us
LOCATIONS_CATEGORIES:
LocationID CategoryID
------------------------
100 10
200 20
300 30
400 10
400 20
400 30
```
Note that the locations are not directly connected to the products, only the categories. The customer should be able to see a list of locations that can provide all the product categories that the products they want to buy belong to. So, for example:
A customer wants to buy a Ferrari. This would be available from stores in categories 10 *or* 30. This gives us stores 100, 300 and 400 but not 200.
However, if a customer wants to buy a Volvo and a Lamborghini this would be available from stores in categories 10 *and* 20. Which only gives us store 400.
Another customer wants to buy a Ferrari and a Volvo. This they could get from a store in either categories 10 + 20 (sporty and safe) or categories 30 + 20 (red and safe).
What I need is a postgres query that takes a number of products and returns the locations where all of them can be found. I got started with arrays and the <@ operator but got lost quickly. Here follows some example SQL that attempts to get stores where a Ferrari and a Lamborghini can be bought. It does not work correctly since it requires the locations to satisfy *all* the categories that *all* the selected cars belong to. It returns location 400 only but should return locations 400 and 100.
```
SELECT l.* FROM locations l
WHERE
(SELECT array_agg(DISTINCT(categoryid)) FROM products_categories WHERE productid IN (1,2))
<@
(SELECT array_agg(categoryid) FROM locations_categories WHERE locationid = l.id);
```
I hope my description makes sense.
|
Here is the query. You should insert a list of selected cars Ids `pc.ProductId in (1,3)` and in the end you should correct condition to selected cars count so if you select 1 and 3 you should write `HAVING COUNT(DISTINCT pc.ProductId) = 2` if you select 3 cars then there have to be 3. This condition in `HAVING` give you condition that ALL cars are in these Locations:
```
SELECT Id FROM Locations l
JOIN Locations_Categories lc on l.Id=lc.LocationId
JOIN Products_Categories pc on lc.CategoryId=pc.CategoryID
where pc.ProductId in (1,3)
GROUP BY l.id
HAVING COUNT(DISTINCT pc.ProductId) = 2
```
`Sqlfiddle demo`
For example for one car it will be:
```
SELECT Id FROM Locations l
JOIN Locations_Categories lc on l.Id=lc.LocationId
JOIN Products_Categories pc on lc.CategoryId=pc.CategoryID
where pc.ProductId in (1)
GROUP BY l.id
HAVING COUNT(DISTINCT pc.ProductId) = 1
```
`Only Ferrary demo`
`Volvo and a Lamborghini demo`
|
(This basically elaborates on @valex's answer, though I didn't realise that until I posted; please accept @valex's not this one).
---
This can be done using only joins and aggregation.
Build a join tree mapping locations to products, as normal. Then join it with the list of desired products (one-column values rows) and filter the join to only matching product names. You now have one row with the location of a product wherever that product can be found.
Now group by location and return locations where the number of products present is equal to the number we're looking for (for ALL). For ANY we omit the `HAVING` filter because any location row returned by the join is what we want.
So:
```
WITH wantedproducts(productname) AS (VALUES('Volvo'), ('Lamborghini'))
SELECT l."ID"
FROM locations l
INNER JOIN locations_categories lc ON (l."ID" = lc."LocationID")
INNER JOIN categories c ON (c."ID" = lc."CategoryID")
INNER JOIN products_categories pc ON (pc."CategoryID" = c."ID")
INNER JOIN products p ON (p."ID" = pc."ProductID")
INNER JOIN wantedproducts wp ON (wp.productname = p."Name")
GROUP BY l."ID"
HAVING count(DISTINCT p."ID") = (SELECT count(*) FROM wantedproducts);
```
is what you want, basically.
For "stores with any of the wanted products" queries, drop the `HAVING` clause.
You an also `ORDER BY` the aggregate if you want to show stores with any match but sort based on number of matches.
You can also add a `string_agg(p."Name")` to the `SELECT` values-list if you want to list products that can be found at that store.
If you want your input to be an array rather than a values-list, just replace the `VALUES (...)` with `SELECT unnest($1)` and pass your array as the parameter `$1`, or write it literally in place of `$1`.
|
Postgres array query
|
[
"",
"sql",
"postgresql",
""
] |
I'm sorry to ask such a basic question, but I cant for the life of me spot the error here as far as I can see everything is correct. Yet I get the error, perhaps I need a pair of fresh eyes to have a look
> You have an error in your SQL syntax; check the manual that
> corresponds to your MySQL server version for the right syntax to use
> near 'WHERE event\_id = '1243'' at line 1 in INSERT INTO expiredEvents
> (event\_id,sport\_type,tournament,round,team1,team2,venue,event\_date)
> values ('1243','Rugby','Super15','3','Waratahs','Sharks','Allianz
> Stadium','') WHERE event\_id = '1243'
```
$sql="INSERT INTO expiredEvents
(event_id,sport_type,tournament,round,team1,team2,venue,event_date)
values ('$id','$sport','$trnmnt','$rnd','$t1','$t2','$ven','$eDate')
WHERE event_id = '$id'"
```
|
There is no `WHERE` clause in the correct syntax of the [`INSERT` statement](http://dev.mysql.com/doc/refman/5.7/en/insert.html).
Depending on what you want to achieve, choose one of the following.
### Insert a new row, don't bother if another one having the same `event_id` already exists
```
INSERT INTO expiredEvents
(event_id, sport_type, tournament, round, team1, team2, venue, event_date)
VALUES
('$id', '$sport', '$trnmnt', '$rnd', '$t1', '$t2', '$ven', '$eDate')
```
If `event_id` is an `UNIQUE INDEX` of table `expiredEvents`, this query fails if another record having `event_id = '$id'` already exists.
Assuming `event_id` is the `PK` of the table, keep reading.
### Insert a new row but only if it does not already exists
```
INSERT IGNORE INTO expiredEvents
(event_id, sport_type, tournament, round, team1, team2, venue, event_date)
VALUES
('$id', '$sport', '$trnmnt', '$rnd', '$t1', '$t2', '$ven', '$eDate')
```
The `IGNORE` keyword turns the errors into warnings and the query completes successfully but it does not insert the row if another one having `event_id = '$id'` already exists.
### Inserts a row if it does not exist or update the existing one, if it exists
```
INSERT INTO expiredEvents
(event_id, sport_type, tournament, round, team1, team2, venue, event_date)
VALUES
('$id', '$sport', '$trnmnt', '$rnd', '$t1', '$t2', '$ven', '$eDate')
ON DUPLICATE KEY UPDATE
sport_type=VALUES(sport_type), round=round+1, event_date=NOW()
```
If the row does not exist, this query insert it using the values from the `VALUES` clause. If the row already exists then it uses the [`ON DUPLICATE KEY UPDATE`](http://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html) to know how to update it. There are three fields modified in this query:
* `sport_type=VALUES(sport_type)` - the value of column `sport_type` is updated using the value provided in the query for column `sport_type` (`VALUES(sport_type)`, which is `'$sport'`);
* `round=VALUES(round)+1` - the value of column `round` is updated using its current value plus 1 (`round+1`); the value provided in the `VALUES` clause is not used;
* `event_date=NOW()` - the value of column `event_date` is modified using the value returned by the function `NOW()`; both the old value and the one provided in the `VALUES` clause of the query are ignored.
This is just an example, you put there whatever expressions you need to update the existing row.
### Completely replace the existing row with a new one
```
REPLACE INTO expiredEvents
(event_id, sport_type, tournament, round, team1, team2, venue, event_date)
VALUES
('$id', '$sport', '$trnmnt', '$rnd', '$t1', '$t2', '$ven', '$eDate')
```
The [`REPLACE` statement](http://dev.mysql.com/doc/refman/5.7/en/replace.html) is a MySQL extension to the SQL standard. It first `DELETE`s the row having `event_id = '$id'` (if any) then `INSERT`s a new row. It is functionally equivalent with `DELETE FROM expiredEvents WHERE event_id = '$id'` followed by the first query exposed above in this answer.
|
Don't use where in insert statment, instead of just get the data from the respective tables which contains the exact records and insert necessary rows alone by adding your conditions in the select statement of that query.
Here is the simple one to resolve your error,
```
$sql="INSERT INTO expiredEvents
(event_id,sport_type,tournament,round,team1,team2,venue,event_date)
SELECT event_id,sport_type,tournament,round,team1,team2,venue,event_date from events WHERE event_id = '$id'"
```
|
mysql error in WHERE clause
|
[
"",
"mysql",
"sql",
""
] |
I need to remove duplicates from a table that looks like this:
```
id post_author post_title
-----------------------------------------------------------------------------
21319 1 Youngstown State University
20535 1 Yo San University of Traditional Chinese Medicine
30268 29 Yo San University of Traditional Chinese Medicine
29747 29 Yeshiva University
21964 1 Yale University
29247 29 Yale University
29497 29 Xavier University of Louisiana
21916 1 Xavier University
29862 29 Xavier University
29860 29 Wright State University-Main Campus
20915 1 Wright State University-Lake Campus
21562 1 World Mission University
30267 29 World Mission University
```
Basically, if there are two entries with the same `post_title`, I need to remove the one with `post_author = 1`, but if the `post_title` is unique then the entry should be left as is.
How can this be done with an SQL query?
EDIT:
I've tried a query suggested by Mureinik. The query looks like this:
```
DELETE t FROM wp_posts AS t
WHERE t.post_author = 1 AND
EXISTS (SELECT * FROM wp_posts s
WHERE t.post_title = s.post_title
AND s.post_authot != 1)
```
But I got error:
```
[Err] 1093 - You can't specify target table 't' for update in FROM clause
```
What am I doing wrong?
|
You could set a condition on `post_author = 1` where there are more than one `post_title` with a temporary table:
```
CREATE TEMPORARY TABLE t_del AS SELECT post_title
FROM t
GROUP BY post_title
HAVING count(post_title)>1;
DELETE FROM t
WHERE post_author = 1
AND post_title IN (select post_title FROM t_del) ;
```
[SQL Fiddle here](http://sqlfiddle.com/#!9/49fa0/1)
|
You could use the `exists` operator:
```
DELETE FROM my_table t
WHERE post_author = 1 AND
EXISTS (SELECT *
FROM my_table s
WHERE t.post_title = s.post_title AND
s.post_author != 1)
```
|
Removing duplicates in an SQL database on a condition
|
[
"",
"mysql",
"sql",
"sql-delete",
""
] |
I have a table, [Customer], which has [postcode] and [totalValue] fields.
I need to design a query that returns the first two characters of all [postcode], and then a SUM of the [totalValue] for that particular postcode group (so one value each for NW, L1, etc.).
I have this code, which is running fine:
```
SELECT LEFT(postcode, 2), SUM(totalValue)
FROM Customer
GROUP BY LEFT(postcode, 2)
```
...however I have entries in [Customer] that don't contain a valid [postcode].
A valid UK postcode is always either one or two letters followed by a number (e.g. LE1, L12, etc.).
I would like to filter out all the incorrect/empty/Null [postcode] records into a separate record entry, but this is beyond my skillset.
|
You can use `LIKE`:
```
where postcode like '[A-Z][A-Z0-9][0-9]%'
```
You might also want to check the length and other characteristics, but this answers your specific question.
EDIT:
For a separate entry, use `case`:
```
SELECT (CASE WHEN postcode like '[A-Z][A-Z0-9][0-9]%'
THEN LEFT(postcode, 2)
ELSE 'Separate Entry'
END) as PostCode2, SUM(totalValue)
FROM Customer
GROUP BY (CASE WHEN postcode like '[A-Z][A-Z0-9][0-9]%'
THEN LEFT(postcode, 2)
ELSE 'Separate Entry'
END);
```
|
Your assumptions about valid postcodes is slightly out
> A valid UK postcode is always either one or two letters followed by a number (e.g. LE1, L12, etc.).
In its simplest terms the [valid formats for a UK Postcode](http://en.wikipedia.org/wiki/Postcodes_in_the_United_Kingdom) are:
```
+----------+---------------------------------------------+----------+
| Format | Coverage | Example |
+----------+---------------------------------------------+----------+
| AA9A 9AA | WC postcode area; EC1–EC4, NW1W, SE1P, SW1 | EC1A 1BB |
+----------+---------------------------------------------+----------+
| A9A 9AA | E1W, N1C, N1P | W1A 0AX |
+----------+---------------------------------------------+----------+
| A9 9AA | B, E, G, L, M, N, S, W | M1 1AE |
| A99 9AA | | B33 8TH |
+----------+---------------------------------------------+----------+
| AA9 9AA | All other postcodes | CR2 6XH |
| AA99 9AA | | DN55 1PT |
+----------+---------------------------------------------+----------+
```
Each of which you can define a pattern match for:
```
AA9A 9AA - [A-Z][A-Z][0-9][A-Z] [0-9][A-Z][A-Z]
A9A 9AA - [A-Z][0-9][A-Z] [0-9][A-Z][A-Z]
A9 9AA - [A-Z][0-9] [0-9][A-Z][A-Z]
A99 9AA - [A-Z][0-9][0-9] [0-9][A-Z][A-Z]
AA9 9AA - [A-Z][A-Z][0-9] [0-9][A-Z][A-Z]
AA99 9AA - [A-Z][A-Z][0-9][0-9] [0-9][A-Z][A-Z]
```
For something as re-usable as this, I think it is worth storing, so I would create a table for it:
```
CREATE TABLE dbo.SimplePostCodeValidation
(
PostCode VARCHAR(8) NOT NULL,
Pattern VARCHAR(50) NOT NULL
);
INSERT dbo.SimplePostCodeValidation (PostCode, Pattern)
VALUES
('AA9A 9AA', '[A-Z][A-Z][0-9][A-Z] [0-9][A-Z][A-Z]'),
('A9A 9AA', '[A-Z][0-9][A-Z] [0-9][A-Z][A-Z]'),
('A9 9AA', '[A-Z][0-9] [0-9][A-Z][A-Z]'),
('A99 9AA', '[A-Z][0-9][0-9] [0-9][A-Z][A-Z]'),
('AA9 9AA', '[A-Z][A-Z][0-9] [0-9][A-Z][A-Z]'),
('AA99 9AA', '[A-Z][A-Z][0-9][0-9][0-9][A-Z][A-Z]'),
-- REPEAT THE POSTCODES WITHOUT SPACES
('AA9A9AA', '[A-Z][A-Z][0-9][A-Z][0-9][A-Z][A-Z]'),
('A9A9AA', '[A-Z][0-9][A-Z][0-9][A-Z][A-Z]'),
('A99AA', '[A-Z][0-9][0-9][A-Z][A-Z]'),
('A999AA', '[A-Z][0-9][0-9][0-9][A-Z][A-Z]'),
('AA99AA', '[A-Z][A-Z][0-9][0-9][A-Z][A-Z]'),
('AA999AA', '[A-Z][A-Z][0-9][0-9][0-9][A-Z][A-Z]');
```
Now you can easily validate your postcodes:
```
DECLARE @T TABLE (Postcode VARCHAR(8));
INSERT @T (PostCode)
SELECT PostCode
FROM dbo.SimplePostCodeValidation
UNION ALL
SELECT PostCode
FROM (VALUES ('123456'), (''), ('TEST')) t (PostCode);
SELECT t.PostCode, IsValid = CASE WHEN pc.PostCode IS NULL THEN 0 ELSE 1 END
FROM @T AS t
LEFT JOIN SimplePostCodeValidation AS pc
ON t.PostCode LIKE pc.Pattern;
```
Which returns:
```
PostCode IsValid
----------------------
AA9A 9AA 1
A9A 9AA 1
A9 9AA 1
A99 9AA 1
AA9 9AA 1
AA99 9AA 1
123456 0
0
TEST 0
```
To apply this to your situation you would use:
```
SELECT CASE WHEN pc.PostCode IS NULL THEN 'Invalid' ELSE LEFT(c.postcode, 2) END,
TotalValue = SUM(totalValue)
FROM Customer AS c
LEFT JOIN SimplePostCodeValidation AS pc
ON t.PostCode LIKE pc.Pattern;
GROUP BY CASE WHEN pc.PostCode IS NULL THEN 'Invalid' ELSE LEFT(c.postcode, 2) END;
```
If you want to get more complicated, there are actually further limitations to what is a valid postcode, e.g. if it is the pattern `A9 9AA` then the first letter can only be one of (B, E, G, L, M, N, S, W). The guidelines set out on wikipedia state:
* Areas with only single-digit districts: BR, FY, HA, HD, HG, HR, HS, HX, JE, LD, SM, SR, WC, WN, ZE (although WC is always subdivided by a further letter, e.g. WC1A).
* Areas with only double-digit districts: AB, LL, SO.
* Areas with a district '0' (zero): BL, BS, CM, CR, FY, HA, PR, SL, SS (BS is the only area to have both a district 0 and a district 10).
* The following central London single-digit districts have been further divided by inserting a letter after the digit and before the space: EC1–EC4 (but not EC50), SW1, W1, WC1, WC2, and part of E1 (E1W), N1 (N1C and N1P), NW1 (NW1W) and SE1 (SE1P).
* The letters QVX are not used in the first position.
* The letters IJZ are not used in the second position.
* The only letters to appear in the third position are ABCDEFGHJKPSTUW when the structure starts with A9A.
* The only letters to appear in the fourth position are ABEHMNPRVWXY when the structure starts with AA9A.
* The final two letters do not use the letters CIKMOV, so as not to resemble digits or each other when hand-written.
* Post code sectors are one of ten digits: 0 to 9 with 0 only used once 9 has been used in a post town, save for Croydon and Newport (see above).
Since SQL Server does not support full regex it gets a bit more complicated to account for all these caveats. If you really wanted fool proof validation I would be inclined to use the regex from [answers this question](https://stackoverflow.com/q/164979/1048425), and use a CLR function to validate the postcode.
|
Struggling with SQL Where filter
|
[
"",
"sql",
"sql-server",
""
] |
**I converted an RDD[myClass] to dataframe and then register it as an
SQL table**
```
my_rdd.toDF().registerTempTable("my_rdd")
```
**This table is callable and can be demonstrated with following command**
```
%sql
SELECT * from my_rdd limit 5
```
**But the next step gives error, saying Table Not Found: my\_rdd**
```
val my_df = sqlContext.sql("SELECT * from my_rdd limit 5")
```
**Quite newbie for Spark. Do not understand why this is happening. Can anyone help me out of this?**
```
java.lang.RuntimeException: Table Not Found: my_rdd
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:111)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:111)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:58)
at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:111)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:175)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:187)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$6.applyOrElse(Analyzer.scala:182)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:186)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:207)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:236)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:192)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:207)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:236)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:192)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:177)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:182)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:172)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:1071)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:1071)
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:1069)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:915)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:68)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:73)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:75)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:77)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:79)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:81)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:83)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:85)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:87)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:89)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:91)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:93)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:95)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:97)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:99)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:101)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:103)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:105)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:107)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:109)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:111)
at $iwC$$iwC$$iwC.<init>(<console>:113)
at $iwC$$iwC.<init>(<console>:115)
at $iwC.<init>(<console>:117)
at <init>(<console>:119)
at .<init>(<console>:123)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:556)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:532)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:525)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:264)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```
|
Make sure to import the implicits.\_ from the same SQLContext. Temporary tables are kept in-memory in one specific SQLContext.
```
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
my_rdd.toDF().registerTempTable("my_rdd")
val my_df = sqlContext.sql("SELECT * from my_rdd LIMIT 5")
my_df.collect().foreach(println)
```
|
I found it easy to cause problems with temptables if there is more than one open zeppelin session, either in your own browser, or from someone else using the same server. The variable sqlContext is shared across those sessions and its easy to overwrite the value of it.
|
SparkSQL error Table Not Found
|
[
"",
"sql",
"scala",
"apache-spark",
"cassandra",
""
] |
We can try to get max age simply by using
```
SELECT TOP 1 age FROM Head1 ORDER BY Age DESC
```
But I've tried using while loop in SQL Server
**code**
```
declare @a int, @m int, @maxo int;
set @maxo = 0;
while(@a<10)
begin
select name, @m = age from head1 where ID = @a;
if @m>@maxo
@maxo = @m;
set @a=@a+1;
end
print @maxo
```
**error**
> Msg 141, Level 15, State 1, Line 5
> A SELECT statement that assigns a value to a variable must not be combined with data-retrieval operations.
>
> Msg 102, Level 15, State 1, Line 7
> Incorrect syntax near '@maxo'.
I am sort of stuck here. Please help guys.....
|
```
declare @a int, @m int, @maxo int
declare @name NVARCHAR(100)
set @maxo = 0;
while(@a<10)
begin
select @name = name, @m = age from head1 where ID = @a;
if @m>@maxo
SET @maxo = @m;
set @a=@a+1;
END
PRINT @name + ',' + CAST(@maxo AS NVARCHAR(50))
```
**ALTERNATE WAY IS**
```
SELECT Name, Age FROM Header1 WHERE Age = (SELECT MAX(Age) FROM Header1)
```
|
There are two issues:
**Issue 1:**
The error which you are getting is self explanatory ie, you cant select the column while you are assigning the values to the variable.
You can resolve it like this:
```
select @name = name, @m = age from head1 where ID = @a;
```
**Issue 2:**
In this I dont think you need that query at all to find the max age from your table. You can simply use max() function to find the max age from your table like this
```
SELECT Name, Age FROM Head1 WHERE Age = (SELECT MAX(Age) FROM Head1)
```
Using a loop is inefficient as well as it will create a performance bottleneck if your table is huge.
|
Gettin error in SQL Server while finding out max age in a table
|
[
"",
"sql",
"sql-server",
""
] |
I have a table containing records all with a create date. I want to Select all the records created in January 2014 and have a count of how many were created each day. I've gotten as far as Selecting all the records created that month but I'm unsure how to proceed to get an output of all the days that month and a count of how many records were created.
```
SELECT TYPE
,PART_ID
,DESIRED_QTY
,RECEIVED_QTY
,CREATE_DATE
FROM WORK_ORDER
WHERE DATEPART(year,CREATE_DATE) = 2014 and DATEPART(month,CREATE_DATE) = 01
ORDER BY CREATE_DATE ASC
```
The information I'm selecting in that statement isn't important, it's just there in that query so I'm selecting something.
|
Is this what you are looking for? It is an aggregation query that does the count:
```
SELECT CAST(CREATE_DATE as Date) as date,
COUNT(*)
FROM WORK_ORDER
WHERE DATEPART(year, CREATE_DATE) = 2014 and DATEPART(month, CREATE_DATE) = 01
GROUP BY CAST(CREATE_DATE as Date)
ORDER BY CAST(CREATE_DATE as Date) ;
```
The `CAST()` is only needed if `CREATE_DATE` could have a time component. The reason this query uses `CAST()` is so the same structure will work on multiple months.
I want to add, that it is better to write the query like this:
```
SELECT CAST(CREATE_DATE as Date) as date, COUNT(*)
FROM WORK_ORDER
WHERE CREATE_DATE >= '2014-01-01' and CREATE_DATE < '2014-02-01'
GROUP BY CAST(CREATE_DATE as Date)
ORDER BY CAST(CREATE_DATE as Date) ;
```
The difference here is the `WHERE` clause. This formulation can make use of an index on `CREATE_DATE`. Your formulation has functions around the arguments, so it would not use the index.
|
```
SELECT DATEPART(day,CREATE_DATE), count(*)
FROM WORK_ORDER
WHERE DATEPART(year,CREATE_DATE) = 2014 and DATEPART(month,CREATE_DATE) = 01
group by DATEPART(day,CREATE_DATE)
ORDER BY CREATE_DATE ASC
```
|
Get a count of a number of records with same create date
|
[
"",
"sql",
"sql-server-2008-r2",
""
] |
I am trying to use case and like statement. When I run this below query it is duplicating results on some rows where it is y on particular columns. I am using SQL Server 2000.
```
SELECT DISTINCT [rjvn_pound_reference]
,t_reference
,t_street_name
,t_zone_name
,(
CASE
WHEN rjvn_note LIKE '%CORRESPONDENCE%'
THEN 'Y'
ELSE 'N'
END
) AS CorrespondenceReceived
,(
CASE
WHEN rjvn_note LIKE '%review form complete%'
THEN 'Y'
ELSE 'N'
END
) AS REVIEWFORMCOMPLETE
,(
CASE
WHEN rjvn_note LIKE '%Manually issued NTO - Drive off%'
THEN 'Y'
ELSE 'N'
END
) AS ManuallyissuedNTO
,(
CASE
WHEN rjvn_note LIKE '%Manually issued NTK - Drive Off%'
THEN 'Y'
ELSE 'N'
END
) AS ManuallyissuedNTK
,(
CASE
WHEN rjvn_note LIKE '%Drive off- Final reminder sent%'
THEN 'Y'
ELSE 'N'
END
) AS finalReminderSent
FROM [ICPS].[dbo].[removal_job_vehicle_notes] AS a
INNER JOIN tickets t ON (a.rjvn_pound_reference = t.t_number)
INNER JOIN dbo.ticket_hold_record thr ON (t.t_number = thr.thr_system_ref)
WHERE thr_hold_type = '2207555'
AND t.t_contract IN ('18','1')
```
Results:
```
805464 XL05512211 Selby (60-100) Abbey Walk N N Y N N
805464 XL05512211 Selby (60-100) Abbey Walk N Y N N N
805464 XL05512211 Selby (60-100) Abbey Walk Y N N N N
```
But I want this to displayed on single row instead like:
```
805464 XL05512211 Selby (60-100) Abbey Walk Y Y Y N N.
UPDATED :
944786 XL07075730 Selby (60-100) Abbey Walk N N N N Y
but I should get
944786 XL07075730 Selby (60-100) Abbey Walk N N Y N Y
```
MY ACTUAL TABLE DATA for that record :
```
rjvn_pound_reference rjvn_time rjvn_userid rjvn_note
944786 2014-01-08 10:38:17.000 JAMIED Vehicle Drove Away During PCN Issue
944786 2014-01-08 14:22:21.000 ADRIANJ Correspondent Scanned
944786 2014-02-05 15:44:53.000 ELLIED Manually issued NTO drive off
944786 2015-03-12 10:16:53.000 KATIEB Drive off- Final reminder sent
--manually issued NTO drive off is displayed as 'N' instead of 'y'
```
|
Because the column references in the query are not qualified (to indicate *which* table they are returned from), and because we don't have a schema definition to go through to sort it out... we can't tell what table the `rjvn_note` column comes from.
As far as getting "duplicate" rows, the most likely explanation is a `JOIN`, there's a one-to-many association, and multiple rows are being returned. This is expected behavior.
To "collapse" rows, you can add a **`GROUP BY`** to the end of the query. You can also use aggregate functions (like **`MIN()`** or **`MAX()`**) to "pick out" values returned by expressions from the collapsed rows.
The **`DISTINCT`** keyword (as in your query) operates on the *entire* set of expressions in the SELECT list. That will eliminate rows that are exact duplicates.
There's a couple of other alternatives that are useful in some situations. For example, using as an **`EXISTS`** predicate (it the requirement is test for the existence of a row in a related table, without producing "duplicates" like a `JOIN` operation would. Or, using a subquery in the `SELECT` list.
Because the column references are not qualified, and because we don't have a schema definition, we're really just guessing. Some of the "maybe try this" guesses might turn out to be right, but they are just guesses.
---
Here's my "guess" at the changes you need to make to the query to get the specified resultset:
* Ditch the `DISTINCT` keyword
* Add `MAX()` aggregate around the CASE expressions
* Add a `GROUP BY` clause listing all non-aggregate expressions from the SELECT list
* (optional) Qualify ALL column references with a table alias (several good reasons to do this: as an aid to future readers, and insulate the statement from future failure
```
SELECT a.rjvn_pound_reference
, t.t_reference
, t.t_street_name
, t.t_zone_name
, MAX(CASE WHEN a.rjvn_note LIKE '%CORRESPONDENCE%'
THEN 'Y' ELSE 'N' END
) AS CorrespondenceReceived
, MAX(CASE WHEN a.rjvn_note LIKE '%review form complete%'
THEN 'Y' ELSE 'N' END
) AS REVIEWFORMCOMPLETE
, MAX(CASE WHEN a.rjvn_note LIKE '%Manually issued NTO - Drive off%'
THEN 'Y' ELSE 'N' END
) AS ManuallyissuedNTO
, MAX(CASE WHEN a.rjvn_note LIKE '%Manually issued NTK - Drive Off%'
THEN 'Y' ELSE 'N'END
) AS ManuallyissuedNTK
, MAX(CASE WHEN a.rjvn_note LIKE '%Drive off- Final reminder sent%'
THEN 'Y' ELSE 'N' END
) AS finalReminderSent
FROM [ICPS].[dbo].[removal_job_vehicle_notes] a
JOIN [tickets] t
ON t.t_number = a.rjvn_pound_reference
JOIN [dbo].[ticket_hold_record] thr
ON thr_system_ref = t.t_number
WHERE thr.thr_hold_type = '2207555'
AND t.t_contract IN ('18','1')
GROUP
BY a.rjvn_pound_reference
, t.t_reference
, t.t_street_name
, t.t_zone_name
```
|
Try this:
```
SELECT [RJVN_POUND_REFERENCE],
T_REFERENCE,
T_STREET_NAME,
T_ZONE_NAME,
DECODE(MAX(CASE WHEN RJVN_NOTE LIKE '%CORRESPONDENCE%' THEN 1 ELSE 0 END), 1, 'Y', 'N') AS CORRESPONDENCERECEIVED,
DECODE(MAX(CASE WHEN RJVN_NOTE LIKE '%review form complete%' THEN 1 ELSE 0 END), 1, 'Y', 'N') AS REVIEWFORMCOMPLETE,
DECODE(MAX(CASE WHEN RJVN_NOTE LIKE '%Manually issued NTO - Drive off%' THEN 1 ELSE 0 END), 1, 'Y', 'N') AS MANUALLYISSUEDNTO,
DECODE(MAX(CASE WHEN RJVN_NOTE LIKE '%Manually issued NTK - Drive Off%' THEN 1 ELSE 0 END), 1, 'Y', 'N') AS MANUALLYISSUEDNTK,
DECODE(MAX(CASE WHEN RJVN_NOTE LIKE '%Drive off- Final reminder sent%' THEN 1 ELSE 0 END), 1, 'Y', 'N') AS FINALREMINDERSENT
FROM [ICPS].[DBO].[REMOVAL_JOB_VEHICLE_NOTES] AS A
INNER JOIN TICKETS T
ON(A.RJVN_POUND_REFERENCE=T.T_NUMBER)
INNER JOIN DBO.TICKET_HOLD_RECORD THR
ON(T.T_NUMBER=THR.THR_SYSTEM_REF)
WHERE THR_HOLD_TYPE='2207555'
AND T.T_CONTRACT IN('18', '1')
GROUP BY [RJVN_POUND_REFERENCE],
T_REFERENCE,
T_STREET_NAME,
T_ZONE_NAME
```
|
Using case and like statment without duplicate results
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I would like to use Now() to get current date and time, so trying to create a function, just cannot get syntax right
```
USE [db1]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Create FUNCTION [dbo].[Now]()
RETURNS [date] AS
BEGIN
Declare @Now1
Select GetDate()
-- Return the result of the function
RETURN @Now1
END
```
Any help would be appreciated.
|
You don't need to create your own function to get current date, you can use
```
Select GetDate()
```
just kidding :) I imagine that you want to do something more than returning only current date
Type the body this way:
```
Declare @Now1 datetime
Select @Now1 = GetDate()
```
so your full code should be
```
USE [db1]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Create FUNCTION [dbo].[Now]()
RETURNS [datetime] AS
BEGIN
Declare @Now1 datetime
Select @Now1 = GetDate()
RETURN @Now1
END
```
|
This seems completely ridiculous and pointless but you are making this way more complicated than it needs to be.
```
create function Now()
returns datetime as
begin
return getdate()
end
select dbo.Now()
```
|
Function for Using Now instead of GetDate in SQL Server 2012
|
[
"",
"sql",
"sql-server",
""
] |
I need to count a value (`M_Id`) at each change of a date (`RS_Date`) and create a column grouped by the `RS_Date` that has an active total from that date.
So the table is:
```
Ep_Id Oa_Id M_Id M_StartDate RS_Date
--------------------------------------------
1 2001 5 1/1/2014 1/1/2014
1 2001 9 1/1/2014 1/1/2014
1 2001 3 1/1/2014 1/1/2014
1 2001 11 1/1/2014 1/1/2014
1 2001 2 1/1/2014 1/1/2014
1 2067 7 1/1/2014 1/5/2014
1 2067 1 1/1/2014 1/5/2014
1 3099 12 1/1/2014 3/2/2014
1 3099 14 2/14/2014 3/2/2014
1 3099 4 2/14/2014 3/2/2014
```
So my goal is like
```
RS_Date Active
-----------------
1/1/2014 5
1/5/2014 7
3/2/2014 10
```
If the `M_startDate = RS_Date` I need to count the `M_id` and then for
each `RS_Date` that is not equal to the start date I need to count the `M_Id` and then add that to the `M_StartDate` count and then count the next `RS_Date` and add that to the last active count.
I can get the basic counts with something like
```
(Case when M_StartDate <= RS_Date
then [m_Id] end) as Test.
```
But I am stuck as how to get to the result I want.
Any help would be greatly appreciated.
Brian
-added in response to comments
I am using Server Ver 10
|
If using SQL SERVER 2012+ you can use `ROWS` with your the analytic/window functions:
```
;with cte AS (SELECT RS_Date
,COUNT(DISTINCT M_ID) AS CT
FROM Table1
GROUP BY RS_Date
)
SELECT *,SUM(CT) OVER(ORDER BY RS_Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Run_CT
FROM cte
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/b3b65/20/1)
If stuck using something prior to 2012 you can use:
```
;with cte AS (SELECT RS_Date
,COUNT(DISTINCT M_ID) AS CT
FROM Table1
GROUP BY RS_Date
)
SELECT a.RS_Date
,SUM(b.CT)
FROM cte a
LEFT JOIN cte b
ON a.RS_DAte >= b.RS_Date
GROUP BY a.RS_Date
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/b3b65/20/1)
|
You need a cumulative sum, easy in SQL Server 2012 using Windowed Aggregate Functions. Based on your description this will return the expected result
```
SELECT p_id, RS_Date,
SUM(COUNT(*))
OVER (PARTITION BY p_id
ORDER BY RS_Date
ROWS UNBOUNDED PRECEDING)
FROM tab
GROUP BY p_id, RS_Date
```
|
SQL Server : count types with totals by date change
|
[
"",
"sql",
"sql-server-2008",
"count",
"cumulative-sum",
""
] |
I have encountered a weird problem in a MySQL query. This following query is returning me 0 result whereas there are many rows which should be returned. the query is
```
SELECT DISTINCT SKU, EAN, URL, CID
FROM tab_comp_data
WHERE status =''
```
And in many rows there are NULL values in STATUS column but it returns me no rows.
I have tried some other way round.
```
SELECT DISTINCT SKU, EAN, URL, CID
FROM tab_comp_data
WHERE status <>'INACT'
```
this also returns me no rows.
P.S. the STATUS column can only be `NULL` or `'INACT'`
|
I you want to query for `NULL` values, you should use the `IS NULL` operator.
So, your query should look like:
```
SELECT DISTINCT SKU, EAN, URL, CID
FROM tab_comp_data
WHERE status IS NULL
```
|
if the value of status is null you have to use `is null`. That is different from `=''`
```
select DISTINCT SKU, EAN, URL, CID from tab_comp_data WHERE status is null
```
|
NULL values in where condition is not working
|
[
"",
"mysql",
"sql",
"select",
"null",
""
] |
I have multiple tables, related by multiple foreign keys as in the following example:
* `Recipes(id_recipe,name,calories,category)` - `id_recipe` as PK.
* `Ingredients(id_ingredient,name,type)` - `id_ingredient` as PK.
* `Contains(id_ingredient,id_recipe,quantity,unit)` - `(id_ingredient,id_recipe)` as PK, and as Foreign Keys for `Recipes(id_recipe)` and `Ingredients(id_ingredient)`.
You can see this relations represented in this image.

So basically `Contains` is a bridge between `Recipes` and `Ingredients`.
The query I try to write it's supposed to give as result the names of the recipes whose ingredients type are "bovine" but not "lactic".
My attempt:
`SELECT DISTINCT Recipes.name
FROM Ingredients JOIN Contains USING(id_ingredient) JOIN Recipes USING (id_recipe)
WHERE Ingredients.type = "bovin"
AND Ingredients.type <> "lactic";`
The problem is it still shows me recipes that have at least one lactic ingredient.
I would appreciate any help!
|
This is the general form of the kind of query you need:
```
SELECT *
FROM tableA
WHERE tableA.ID NOT IN (
SELECT table_ID
FROM ...
)
;
```
-- EXAMPLE BELOW --
The subquery gives the id values of all recipes that the "lactic" ingredient is used in, the outer query says "give me all the recipes not in that list".
```
SELECT DISTINCT Recipes.name
FROM Recipes
WHERE id_recipe IN (
SELECT DISTINCT id_recipe
FROM `Ingredients` AS `i`
INNER JOIN `Contains` AS `c` USING (id_ingredient)
WHERE `i`.`type` = "lactic"
)
;
```
---
Alternatively, using your original query:
You could've changed the second join to a `LEFT JOIN`, changed it's `USING` to an `ON` & included `AND type = "lactic"` there instead, and ended the query with `HAVING Ingredients.type IS NULL` (or WHERE, I just prefer HAVING for "final result" filtering). This would tell you which items could not be joined to the "lactic" ingredient.
|
A common solution of this type of question (checking conditions over a set of rows) utilizes aggregate + CASE.
```
SELECT R.Name
FROM Recipes R
INNER JOIN Contains C
on R.ID_Recipe = C.ID_Recipe
INNER JOIN Ingredients I
on C.ID_Ingredient = I.ID_Ingredient
GROUP BY R.name
having -- at least one 'lactic' ingredient
sum(case when type = 'lactic' then 1 else 0 end) = 0
and -- no 'bovin' ingredient
sum(case when type = 'bovin' then 1 else 0 end) > 0
```
It's easy to extend to any number of ingredients and any kind of question.
Hijacked the [fiddle](http://sqlfiddle.com/#!9/d4bde1/3) of xQbert
|
Incomprehensible query behaviour
|
[
"",
"mysql",
"sql",
"database",
""
] |
How can I count the number of occurrences in a given column?
Follow this example. With a table like this:
```
+----------+---------+------+
| PersonID | Name | City |
+----------+---------+------+
| 1 | John | NY |
| 2 | Mohit | CA |
| 3 | Jay | AZ |
| 4 | Roger | NY |
| 5 | David | NY |
| 6 | Peter | AZ |
| 7 | Ana | NY |
| 8 | Irina | NY |
| 9 | Michael | NY |
| 10 | Ken | AZ |
+----------+---------+------+
```
How can I achieve this?
```
+----------+---------+------+-------------+
| PersonID | Name | City | CityCounter |
+----------+---------+------+-------------+
| 1 | John | NY | 1 |
| 2 | Mohit | CA | 1 |
| 3 | Jay | AZ | 1 |
| 4 | Roger | NY | 2 |
| 5 | David | NY | 3 |
| 6 | Peter | AZ | 2 |
| 7 | Ana | NY | 4 |
| 8 | Irina | NY | 5 |
| 9 | Michael | NY | 6 |
| 10 | Ken | AZ | 3 |
+----------+---------+------+-------------+
```
Will I have to use ROW\_NUMBER function over SUM?
|
It's as simple as this.
```
SELECT *,
cityCounter = ROW_NUMBER() OVER (PARTITION BY city ORDER BY personID)
FROM yourTable
ORDER BY personID
```
|
If I understand correctly, you just want to enumerate the values for each city and then sort by `id`:
```
select t.*
from (select t.*,
row_number() over (partition by city
order by personid) as CityCounter
from table t
) t
order by personid;
```
|
ROW_NUMBER over column result
|
[
"",
"sql",
"sql-server",
"t-sql",
"window-functions",
""
] |
I have a query that I want to add some log to, to drop results that successfully match when I add one more table to the JOIN.
I'm accomplishing this now with an additional WHERE IN statement instead:
```
SELECT blah blah FROM donation
WHERE donation.id NOT IN (SELECT donation_id FROM donation_relation)
```
I just worry that selecting all ID fields from this *donation\_relation* table in the subquery will begin dragging when the table starts growing. What's the more efficient way (if it exists) to use JOIN to accomplish this exclusion? The two tables are joinable on *donation.id* and *donation\_relation.donation\_id*
Thanks!
|
`LEFT OUTER JOIN` alternative:
```
SELECT blah blah
FROM donation
LEFT JOIN donation_relation ON donation.id = donation_relation.donation_id
WHERE donation_relation.donation_id IS NULL;
```
Probably faster, especially when MySQL.
`EXISTS` version:
```
SELECT blah blah FROM donation
WHERE NOT EXISTS (SELECT donation_id FROM donation_relation
WHERE donation.id = donation_relation.donation_id )
```
|
The common way in Standard SQL uses correlated NOT EXISTS (additionally NOT IN might have some non-intuitive side-effects when NULLs are involved):
```
SELECT blah blah FROM donation as d
WHERE NOT EXISTS
(SELECT * FROM donation_relation as dr
where dr.donation_id = d.donation_id)
```
|
"Exclude" results with an SQL WHERE or JOIN?
|
[
"",
"mysql",
"sql",
""
] |
Hi there I am trying to execute a query but cannot seem to get it right.
```
SELECT *
FROM table
WHERE id IN (SELECT *
FROM table
WHERE description = 'A')
AND description = 'B'
```
Above is the query that I have got, the `select * from table where description = A` works as expected when ran alone I just need to make the where clause to work so I can see any id that has a description of A and B.
|
You will be getting multiple columns from the sub query when I assume you only want the id column:
```
SELECT *
FROM table
WHERE id IN (SELECT id
FROM table
WHERE description = 'A')
AND description = 'B'
```
|
No need for the select in the where clause
```
SELECT *
FROM table
WHERE id IN ('A', 'B')
```
|
SQL select statement in where clause
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"select",
"where-clause",
""
] |
Is it possible for Oracle to reuse the result of a function when it is called in the same query (transaction?) without the use of the function result cache?
The application I am working with is heavily reliant on Oracle functions. Many queries end up executing the exact same functions multiple times.
A typical example would be:
```
SELECT my_package.my_function(my_id),
my_package.my_function(my_id) / 24,
my_package.function_also_calling_my_function(my_id)
FROM my_table
WHERE my_table.id = my_id;
```
I have noticed that Oracle always executes each of these functions, not realizing that the same function was called just a second ago in the same query. It is possible that some elements in the function get cached, resulting in a slightly faster return. This is not relevant to my question as I want to avoid the entire second or third execution.
Assume that the functions are fairly resource-consuming and that these functions may call more functions, basing their result on tables that are reasonably large and with frequent updates (a million records, updates with say 1000 updates per hour). For this reason it is not possible to use Oracle's Function Result Cache.
Even though the data is changing frequently, I expect the result of these functions to be the same when they are called from the same query.
Is it possible for Oracle to reuse the result of these functions and how? I am using Oracle11g and Oracle12c.
Below is an example (just a random non-sense function to illustrate the problem):
```
-- Takes 200 ms
SELECT test_package.testSpeed('STANDARD', 'REGEXP_COUNT')
FROM dual;
-- Takes 400ms
SELECT test_package.testSpeed('STANDARD', 'REGEXP_COUNT')
, test_package.testSpeed('STANDARD', 'REGEXP_COUNT')
FROM dual;
```
Used functions:
```
CREATE OR REPLACE PACKAGE test_package IS
FUNCTION testSpeed (p_package_name VARCHAR2, p_object_name VARCHAR2)
RETURN NUMBER;
END;
/
CREATE OR REPLACE PACKAGE BODY test_package IS
FUNCTION testSpeed (p_package_name VARCHAR2, p_object_name VARCHAR2)
RETURN NUMBER
IS
ln_total NUMBER;
BEGIN
SELECT SUM(position) INTO ln_total
FROM all_arguments
WHERE package_name = 'STANDARD'
AND object_name = 'REGEXP_COUNT';
RETURN ln_total;
END testSpeed;
END;
/
```
|
Add an inline view and a `ROWNUM` to prevent the Oracle from re-writing the query into a single query block and executing the functions multiple times.
---
**Sample function and demonstration of the problem**
```
create or replace function wait_1_second return number is
begin
execute immediate 'begin dbms_lock.sleep(1); end;';
-- ...
-- Do something here to make caching impossible.
-- ...
return 1;
end;
/
--1 second
select wait_1_second() from dual;
--2 seconds
select wait_1_second(), wait_1_second() from dual;
--3 seconds
select wait_1_second(), wait_1_second() , wait_1_second() from dual;
```
**Simple query changes that do NOT work**
Both of these methods still take 2 seconds, not 1.
```
select x, x
from
(
select wait_1_second() x from dual
);
with execute_function as (select wait_1_second() x from dual)
select x, x from execute_function;
```
**Forcing Oracle to execute in a specific order**
It's difficult to tell Oracle "execute this code by itself, don't do any predicate pushing, merging, or other transformations on it". There are hints for each of those optimizations, but they are difficult to use. There are a few ways to disable those transformations, adding an extra `ROWNUM` is usually the easiest.
```
--Only takes 1 second
select x, x
from
(
select wait_1_second() x, rownum
from dual
);
```
It's hard to see exactly where the functions get evaluated. But these explain plans show how the `ROWNUM` causes the inline view to run separately.
```
explain plan for select x, x from (select wait_1_second() x from dual);
select * from table(dbms_xplan.display(format=>'basic'));
Plan hash value: 1388734953
---------------------------------
| Id | Operation | Name |
---------------------------------
| 0 | SELECT STATEMENT | |
| 1 | FAST DUAL | |
---------------------------------
explain plan for select x, x from (select wait_1_second() x, rownum from dual);
select * from table(dbms_xplan.display(format=>'basic'));
Plan hash value: 1143117158
---------------------------------
| Id | Operation | Name |
---------------------------------
| 0 | SELECT STATEMENT | |
| 1 | VIEW | |
| 2 | COUNT | |
| 3 | FAST DUAL | |
---------------------------------
```
|
You can try the `deterministic` keyword to mark functions as pure. Whether or not this actually improves performance is another question though.
**Update:**
I don't know how realistic your example above is, but in theory you can always try to re-structure your SQL so it knows about repeated functions calls (actually repeated values). Kind of like
```
select x,x from (
SELECT test_package.testSpeed('STANDARD', 'REGEXP_COUNT') x
FROM dual
)
```
|
Oracle performance: query executing multiple identical function calls
|
[
"",
"sql",
"oracle",
"performance",
"function",
""
] |
I want to group data based on the time interval, let us say group of 3 hours. How can I group a data within a time frame of data.
My data is like
```
DocId, UserCode, ProcessCode, ProcessDone
1 1 10 21/11/2015 11:04:00
2 1 10 21/11/2015 12:14:00
3 1 20 21/11/2015 11:04:00
4 1 20 21/11/2015 11:54:00
5 1 30 21/11/2015 13:04:00
```
For example in above data I want to group the data based on `UserCode` process using within frame of a time let us say 10-12.
like
```
UserCode, Process, Total
1 10 1
1 20 2
```
As this code total count is done based on Time between 10-12 and group by `UserCode` and `ProcessCode`.
|
Try this way:
```
select UserCode, ProcessCode, count(1) Total
from tab
where convert(time,ProcessDone) between '10:00' and '12:00'
group by UserCode, ProcessCode
```
**[Sql Fiddle](http://sqlfiddle.com/#!6/ac293/6/0) Demo**
or
```
select UserCode, ProcessCode, count(1) Total
from tab
where DATEPART(hh,ProcessDone) > 10 and DATEPART(hh,ProcessDone) < 12
group by UserCode, ProcessCode
```
**[Sql Fiddle](http://sqlfiddle.com/#!6/ac293/7/0) Demo**
or including `date` in `group by`
```
select UserCode, ProcessCode, count(1) Total
from tab
where convert(time,ProcessDone) between '10:00' and '12:00'
group by UserCode, ProcessCode, convert(date,ProcessDone)
```
**[Sql Fiddle](http://sqlfiddle.com/#!6/10539/2/0) Demo**
|
You should group by parsed date parts
```
select
DATEPART(YEAR,ProcessDone),
DATEPART(MONTH,ProcessDone),
DATEPART(DAY,ProcessDone),
DATEPART(HH,ProcessDone),
UserCode,
ProcessCode,
count(1) Total
from tab
group by UserCode,
ProcessCode,
DATEPART(YEAR,ProcessDone),
DATEPART(MONTH,ProcessDone),
DATEPART(DAY,ProcessDone),
DATEPART(HH,ProcessDone)
```
And then combine date parts for visualization
|
SQL Server: How to group by a datetime column based on a time interval (Such as within 2 hours)
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"group-by",
""
] |
I know the title does not understand anything. But I would not know how to define it better.
I will explain more my problem.
I have 2 tables. A which in its interior has the following values:
Name table: battle
```
res1 res2 winner
225 552 552
```
In the second table I have:
Name table: fighters
```
ID name
225 example 1
552 example 2
```
ID and res 1 res2 winner it is the same things.
How should I structure the table in such a way as to show, instead of the ID, the names?
Normally I always used the inner join, but I did not ever find myself more values to the same string.
Thanks for the reply.
|
You need to have three aliases of table fighters for the desired output. Please see below query:
```
Select f1.name as Fighter-1,
f2.name as Fighter-2,
f3.name as Winner
From
battel b,
fighters f1,
fighters f2,
fighters f3
Where
b.res1 = f1.id AND
b.res2 = f2.id AND
b.winner = f3.id AND
(f1.id = 225 OR f2.id=225)
```
Hope this helps!
|
There's no need for three joins as the winner will always be one of the two opponents, so you simply need to do a CASE:
```
select f1.name as name1, f2.name as name2,
case when b.winner = b.res1 then f1.name else f1.name end as winner
from battle b
join fighters f1
on b.res1 = f1.id
join fighters f2
on b.res2 = f2.id
```
To avoid any potential problems inserting or updating a winner who didn't play this game you might change the winner number to a simple flag (1 or 2) indicating who won:
```
res1 res2 winner
225 552 2
```
|
Query for multiple column with multiple values but in the different table with same column
|
[
"",
"mysql",
"sql",
"inner-join",
""
] |
well i'm still new in oracle, i'm trying to query a table with a subquery..it looks like this
```
select id_user, count(*) as jumlah from (select * from users where username = 'usr' and pass = 'pwd' and company_id = 'PAN' and status = 1)
group by id_user;
```
the above code works. but when i try to put it inside a stored procedure i got some errors like this
here's the stored procedure
```
create type login_obj is object(jumlah integer);
create type login_table is table of login_obj;
create or replace function startLogin(u varchar, p varchar, cid varchar)
return login_table
is
tabel login_table := login_table();
the_count integer;
the_sql varchar(200);
begin
the_sql := 'select id_user, count(*) as jumlah from (select * from users where username = ''' || u || ''' and pass = ''' || p || ''' and company_id = ''' || cid || ''' and status = 1) GROUP BY id_user';
execute immediate the_sql into the_count;
if the_count IS NOT NULL
then
begin
tabel.extend;
tabel(1) := login_obj(the_count);
end;
end if;
return tabel;
end;
```
then executing it by
```
select * from table (startLogin('usr','pwd','PAN'));
```
and here are the errors
```
SQL Error: ORA-00932: inconsistent datatypes: expected - got -
ORA-06512: at "LUKI.STARTLOGIN", line 14
00932. 00000 - "inconsistent datatypes: expected %s got %s"
```
any idea?
|
Add one more variable below the line
```
the_sql varchar(200);
```
as
```
yid_user users.id_user%TYPE;
```
and change your execute immediate as
```
execute immediate the_sql into yid_user, the_count;
```
Also some tips for using variable types in Oracle:
```
1. VARCHAR is obsolete, use VARCHAR2 instead.
2. Instead of using INTEGER type, use NUMBER.
```
|
Your query returns 2 columns, but there's only one column defined in INTO.
|
Oracle 00932. 00000 - "inconsistent datatypes: expected %s got %s"
|
[
"",
"sql",
"oracle",
"stored-procedures",
"ora-06512",
""
] |
I have the following table structure:
```
id int -- more like a group id, not unique in the table
AddedOn datetime -- when the record was added
```
For a specific `id` there is **at most** one record each day. I have to write a query that returns contiguous (at day level) date intervals for each `id`.
The expected result structure is:
```
id int
StartDate datetime
EndDate datetime
```
Note that the time part of `AddedOn` is available but it is not important here.
To make it clearer, here is some input data:
```
with data as
(
select * from
(
values
(0, getdate()), --dummy record used to infer column types
(1, '20150101'),
(1, '20150102'),
(1, '20150104'),
(1, '20150105'),
(1, '20150106'),
(2, '20150101'),
(2, '20150102'),
(2, '20150103'),
(2, '20150104'),
(2, '20150106'),
(2, '20150107'),
(3, '20150101'),
(3, '20150103'),
(3, '20150105'),
(3, '20150106'),
(3, '20150108'),
(3, '20150109'),
(3, '20150110')
) as d(id, AddedOn)
where id > 0 -- exclude dummy record
)
select * from data
```
And the expected result:
```
id StartDate EndDate
1 2015-01-01 2015-01-02
1 2015-01-04 2015-01-06
2 2015-01-01 2015-01-04
2 2015-01-06 2015-01-07
3 2015-01-01 2015-01-01
3 2015-01-03 2015-01-03
3 2015-01-05 2015-01-06
3 2015-01-08 2015-01-10
```
Although it looks like a common problem I couldn't find a similar enough question. Also I'm getting closer to a solution and I will post it when (and if) it works but I feel that there should be a more elegant one.
|
Here's answer without any fancy joining, but simply using group by and row\_number, which is not only simple but also more efficient.
```
WITH CTE_dayOfYear
AS
(
SELECT id,
AddedOn,
DATEDIFF(DAY,'20000101',AddedOn) dyID,
ROW_NUMBER() OVER (ORDER BY ID,AddedOn) row_num
FROM data
)
SELECT ID,
MIN(AddedOn) StartDate,
MAX(AddedOn) EndDate,
dyID-row_num AS groupID
FROM CTE_dayOfYear
GROUP BY ID,dyID - row_num
ORDER BY ID,2,3
```
The logic is that the dyID is based on the date so there are gaps while row\_num has no gaps. So every time there is a gap in dyID, then it changes the difference between row\_num and dyID. Then I simply use that difference as my groupID.
|
In `Sql Server 2008` it is a little bit pain without `LEAD` and `LAG` functions:
```
WITH data
AS ( SELECT * ,
ROW_NUMBER() OVER ( ORDER BY id, AddedOn ) AS rn
FROM ( VALUES ( 0, GETDATE()), --dummy record used to infer column types
( 1, '20150101'), ( 1, '20150102'), ( 1, '20150104'),
( 1, '20150105'), ( 1, '20150106'), ( 2, '20150101'),
( 2, '20150102'), ( 2, '20150103'), ( 2, '20150104'),
( 2, '20150106'), ( 2, '20150107'), ( 3, '20150101'),
( 3, '20150103'), ( 3, '20150105'), ( 3, '20150106'),
( 3, '20150108'), ( 3, '20150109'), ( 3, '20150110') )
AS d ( id, AddedOn )
WHERE id > 0 -- exclude dummy record
),
diff
AS ( SELECT d1.* ,
CASE WHEN ISNULL(DATEDIFF(dd, d2.AddedOn, d1.AddedOn),
1) = 1 THEN 0
ELSE 1
END AS diff
FROM data d1
LEFT JOIN data d2 ON d1.id = d2.id
AND d1.rn = d2.rn + 1
),
parts
AS ( SELECT * ,
( SELECT SUM(diff)
FROM diff d2
WHERE d2.rn <= d1.rn
) AS p
FROM diff d1
)
SELECT id ,
MIN(AddedOn) AS StartDate ,
MAX(AddedOn) AS EndDate
FROM parts
GROUP BY id ,
p
```
Output:
```
id StartDate EndDate
1 2015-01-01 00:00:00.000 2015-01-02 00:00:00.000
1 2015-01-04 00:00:00.000 2015-01-06 00:00:00.000
2 2015-01-01 00:00:00.000 2015-01-04 00:00:00.000
2 2015-01-06 00:00:00.000 2015-01-07 00:00:00.000
3 2015-01-01 00:00:00.000 2015-01-01 00:00:00.000
3 2015-01-03 00:00:00.000 2015-01-03 00:00:00.000
3 2015-01-05 00:00:00.000 2015-01-06 00:00:00.000
3 2015-01-08 00:00:00.000 2015-01-10 00:00:00.000
```
Walkthrough:
**diff**
This `CTE` returns data:
```
1 2015-01-01 00:00:00.000 1 0
1 2015-01-02 00:00:00.000 2 0
1 2015-01-04 00:00:00.000 3 1
1 2015-01-05 00:00:00.000 4 0
1 2015-01-06 00:00:00.000 5 0
```
You are joining same table on itself to get the previous row. Then you calculate difference in days between current row and previous row and if the result is 1 day then pick 0 else pick 1.
**parts**
This `CTE` selects result from previous step and sums up the new column(it is a cumulative sum. sum of all values of new column from starting till current row), so you are getting partitions to group by:
```
1 2015-01-01 00:00:00.000 1 0 0
1 2015-01-02 00:00:00.000 2 0 0
1 2015-01-04 00:00:00.000 3 1 1
1 2015-01-05 00:00:00.000 4 0 1
1 2015-01-06 00:00:00.000 5 0 1
2 2015-01-01 00:00:00.000 6 0 1
2 2015-01-02 00:00:00.000 7 0 1
2 2015-01-03 00:00:00.000 8 0 1
2 2015-01-04 00:00:00.000 9 0 1
2 2015-01-06 00:00:00.000 10 1 2
2 2015-01-07 00:00:00.000 11 0 2
3 2015-01-01 00:00:00.000 12 0 2
3 2015-01-03 00:00:00.000 13 1 3
```
The last step is just a grouping by `ID` and `new column` and picking `min` and `max` values for dates.
|
Determine contiguous date intervals
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
In MySQL `SELECT 2+NULL FROM tbl_name` will return `NULL`, since MySQL cannot interpret `NULL` as a number.
But why will `SELECT SUM(quantity) FROM tbl_name` not return `NULL` if just one of the values is `NULL`? The same goes for `MIN`, `MAX`, `AVG`, etc. Since MySQL doesn't know what NULL could be, shouldn't it return NULL for all the specified functions?
|
This is a good question, and one that does not have a good answer. The treatment of `NULL` in your two examples is different.
The fundamental problem is what `NULL` means. Commonly, it is used to denote *missing* values. However, in the ANSI standard, it stands for an *unknown* value. I'm sure philosophers could devote tomes to the difference between "missing" and "unknown".
In a simple expression (boolean or arithmetic or scalar of another sort), ANSI defines the result of "unknown" in almost all cases where any of the operands are "unknown". There are some exceptions: `NULL AND FALSE` is false and `NULL IS NULL` is true, but these are rare.
For the aggregation operations, think of `SUM()` as "sum all the known values", and so on. `SUM()` treats `NULL` values differently from `+`. But, this behavior is also standard so that is how all databases work.
If you want a `NULL` value for an aggregation when *any* of its operands is `NULL`, then you need to use `CASE`. I think the easiest way for a single column is:
```
(CASE WHEN COUNT(col) = COUNT(*) THEN SUM(COL) END)
```
|
According to the [MySQL Reference](https://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html), NULL values are ignored in the aggregate functions. Here is the direct quote from the page:
> Unless otherwise stated, group functions ignore NULL values.
|
Sum, Avg, Max, Min, Count of NULL values
|
[
"",
"mysql",
"sql",
"null",
"sum",
""
] |
I am trying to save an old stored procedure into a string. When I use the following I do not get any line breaks.
```
SELECT @OldProcedure = Object_definition(object_id)
FROM sys.procedures
WHERE name = @ProcedureName
```
Any suggestions on how to get the stored procedure text with the line breaks?
I was thinking of using `sp_helptext`
Thanks
**Update**
I am copying and pasting the results from the results which will show me a single line.
As for the script as I am storing the results in a db field. I am create a tool that will generate procedures on the fly, but I wanted to create a history of them.
**Update**
Turned out the Object\_definition does what I want, But for some reason when I copy it from the results I get a single line.
```
Declare @sql varchar(max) ;
SELECT @sql=Object_definition(object_id)
FROM sys.procedures
WHERE name = 'Test_QueryBuilder';
drop procedure Test_QueryBuilder
exec( @sql)
print @sql
```
|
Two ways:
```
select name, object_definition(object_id)
from sys.procedures
```
or
```
select object_name(p.object_id), definition
from sys.sql_modules as m
join sys.procedures as p
on m.object_id = p.object_id
```
|
I've recently come across the same question and made a quick and dirty script to get the definition of views, but the very same thing could also work for stored procedures and functions.
```
DECLARE @Lines TABLE (Line NVARCHAR(MAX)) ;
DECLARE @FullText NVARCHAR(MAX) = '' ;
INSERT @Lines EXEC sp_helptext 'sp_ProcedureName' ;
SELECT @FullText = @FullText + Line FROM @Lines ;
PRINT @FullText ;
```
It simply uses `sp_helptext` as you suggested, grabs its output in a table variable and concatenates all the resulting lines into a text variable. It also uses the fact that each line in the `sp_helptext` result set includes the trailing new line character, so no need to add it here.
From there on, you just use the variable as you would do normally, print it, save to some table or do some manipulation on it. My particular use case was to build a helper stored procedure to drop a view and recreate it when modifying its underlying tables.
|
Get the text of a stored procedure in SQL Server
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have table booking in which I have data
```
GUEST_NO HOTEL_NO DATE_FROM DATE_TO ROOM_NO
1 1 2015-05-07 2015-05-08 103
1 1 2015-05-11 2015-05-12 104
1 1 2015-05-14 2015-05-15 103
1 1 2015-05-17 2015-05-20 101
2 2 2015-05-01 2015-05-02 204
2 2 2015-05-04 2015-05-05 203
2 2 2015-05-17 2015-05-22 202
```
What I want is to get the result as.
1 ) It should show output as Guest\_no, Hotel\_no, Room\_no, and column with count as number of time previous three column combination repeated.
So OutPut should like
```
GUEST_NO HOTEL_NO ROOM_NO Count
1 1 103 2
1 1 104 1
1 1 101 1
2 2 204 1
```
etc. But I want result to in ordered way e.g.: The output should be `order by bk.date_to desc`
My query is as below its showing me count but if I use order by its not working
```
select bk.guest_no, bk.hotel_no, bk.room_no,
count(bk.guest_no+bk.hotel_no+bk.room_no) as noOfTimesRoomBooked
from booking bk
group by bk.guest_no, bk.hotel_no, bk.room_no, bk.date_to
order by bk.date_to desc
```
So with adding order by result is showing different , because as I added order by date\_to column so i have to add this column is group by clause too which will end up in different result as below
```
GUEST_NO HOTEL_NO ROOM_NO Count
1 1 103 1
1 1 104 1
1 1 103 1
1 1 101 1
2 2 204 1
```
Which is not the output I want.
I want these four column but with order by desc of date\_to column and count as no of repetition of first 3 columns
|
I think a good way to do this would be grouping by guest\_no, hotel\_no and room\_no, and sorting by the maximum (i.e. most recent) booking date in each group.
```
SELECT
guest_no,
hotel_no,
room_no,
COUNT(1) AS BookingCount
FROM
booking
GROUP BY
guest_no,
hotel_no,
room_no
ORDER BY
MAX(date_to) DESC;
```
|
Maybe this is what you're looking for?
```
select
guest_no,
hotel_no,
room_no,
count(*) as Count
from
booking
group by
guest_no,
hotel_no,
room_no
order by
min(date_to) desc
```
Or maybe max() instead of min(). SQL Fiddle: <http://sqlfiddle.com/#!6/e684c/3>
|
SQL query to group by data but with order by clause
|
[
"",
"sql",
"sql-server",
""
] |
In my SQL Server 2008, a stored procedure `P_ABC` is scheduled to run everyday at 2 a.m.
But how can I verify that this procedure `P_ABC` is scheduled to run everyday at 2 a.m. or not and it is always running at 2 a.m.? Is there any query to get the lists of procedures that are scheduled to run at specific time?
|
It's too old question but still I am posting this. You can try -
```
CREATE PROCEDURE MyTask
AS
BEGIN
SET NOCOUNT ON;
-- For executing the stored procedure at 11:00 P.M
declare @delayTime nvarchar(50)
set @delayTime = '23:00'
while 1 = 1
begin
waitfor time @delayTime
begin
--Name for the stored proceduce you want to call on regular bases
execute [DatabaseName].[dbo].[StoredProcedureName];
end
end
END
```
Then,
```
-- Sets stored procedure for automatic execution.
sp_procoption @ProcName = 'MyTask',
@OptionName = 'startup',
@OptionValue = 'on'
```
|
Not sure this is exactly what you need but you can check this piece of code below.
You can also find it there:
[SQL Server Find What Jobs Are Running a Procedure](https://stackoverflow.com/questions/11828733/sql-server-find-what-jobs-are-running-a-procedure)
```
SELECT j.name
FROM msdb.dbo.sysjobs AS j
WHERE EXISTS
(
SELECT 1 FROM msdb.dbo.sysjobsteps AS s
WHERE s.job_id = j.job_id
AND s.command LIKE '%procedurename%'
);
```
Using this approach you could list all your store procedures (by name) and then find all jobs (or steps to be more specific) that contain these store procedures.
Please refer to this question/answer to list store procedures: [Query to list all stored procedures](https://stackoverflow.com/questions/219434/query-to-list-all-stored-procedures)
Thanks
|
How to check if the stored procedure is scheduled to run at specific time
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
"scheduled-tasks",
""
] |
I currently have a table named `DATA` it has entries like the following:
```
abc000
ab000cde
000abc
```
I just want to remove all 0 from beginning and the end. If 0 comes in between the character then it will remain same.
|
This also works for leading and trailing zeros at the same time:
```
declare @s varchar(15) = '00abc00efg000'
select substring(@s,
patindex('%[^0]%', @s),
len(@s)-patindex('%[^0]%', reverse(@s))-patindex('%[^0]%', @s)+2);
```
Description: this is substring from first nonzero symbol till first nonzero symbol in reversed string.
|
Say your data exists in column called `Col1`, then this expression should do it
```
select CASE
WHEN RIGHT(col1 , 1) = '0'
THEN SUBSTRING(col1,0,PATINDEX('%[A-Z1-9]%',REVERSE(col1)))
WHEN LEFT(col1 , 1) = '0'
THEN SUBSTRING(col1,PATINDEX('%[A-Z1-9]%',col1),LEN(col1))
ELSE
Col1
END AS 'ParsedCol1'
FROM Data
```
|
How to remove the begining and ending characters if those are '0' in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
```
SN DATE
=========== =========
111 1/1/2014
222 2/1/2014
333 3/1/2014
111 4/1/2014
222 5/1/2014
333 6/1/2015
222 7/1/2015
111 8/1/2015
333 9/1/2015
111 10/1/2015
111 11/1/2015
```
I have a table with 2 columns (`SN` and `DATE`). I would like to create a query that will find duplicate `SN` between `1/1/2014` and `31/12/2014`. I want to count duplicates and show each row that is a duplicate with `SN` and `DATE`.
|
One method is to use `exists`:
```
select t.*
from table as t
where date between #2014-01-01# and #2014-12-31# and
exists (select 1
from table as t2
where date between #2014-01-01# and #2014-12-31# and
t2.sn = t.sn and t2.date <> t.date
);
```
However, this will not find an sn that has two records on the same date. For that, you can do:
```
select t.*
from table as t
where t.sn in (select t2.sn
from table as t2
where date between #2014-01-01# and #2014-12-31#
group by t2.sn
having count(*) >= 2
);
```
|
Try this
```
SELECT SN , Count(SN) AS Dup
FROM [TableName]
WHERE DATE BETWEEN #2014-01-01# AND #2014-12-31#
GROUP By SN
HAVING Count(SN) >1
```
|
Access SQL: How to find duplicate records between dates
|
[
"",
"sql",
"ms-access",
"duplicates",
""
] |
Trying to use FOR XML in SQL 2012. Need to have a result like this:
```
<Loader xmlns:xsi="url1" xmlns="url2">
<Buy_New>
<Ticker>IBM</Ticker>
<Acct>12345</Acct>
<Qty>10</Qty>
</Buy_New>
<Sell_New>
<Ticker>MSFT</Ticker>
<Acct>12345</Acct>
<Qty>15</Qty>
</Sell_New>
<Buy_New>
<Ticker>IBM</Ticker>
<Acct>12345</Acct>
<Qty>10</Qty>
</Buy_New>
</Loader>
```
After looking on here, MSDN, etc., I am not seeing a way to dynamically change the `<Buy_New>` and `<Sell_New>` (I have about 20 different types to work with) in my query, I have explored using FOR XML PATH and FOR XML EXPLICIT but both seem to require static element tags.
Is there a way to drive the tags off the rows in the query?
@Jon C Sure: here is what I have:
```
SELECT
(
SELECT
Investment1 as Investment,
EventDate1 as Date,
Quantity1 as Quantity
FROM #temptable e1 where e1.temp_id = e.temp_id
FOR XML PATH(''),TYPE
) AS 'DYNAMICTAG'
FROM #temptable e
FOR XML PATH(''), ROOT('Loader')`
```
Here is a sample of the result. The DYNAMICTAG is the part that needs to change to be `<Buy_New>`, `<Sell_New>` etc.
```
<Loader>
<DYNAMICTAG>
<Investment>XYZ</Investment>
<Date>2015-05-08T00:00:00</Date>
<Quantity>50</Quantity>
</DYNAMICTAG>
<DYNAMICTAG>
<Investment>ABC</Investment>
<Date>2015-05-08T00:00:00</Date>
<Quantity>10</Quantity>
</DYNAMICTAG>
<DYNAMICTAG>
<Investment>CSCO</Investment>
<Date>2015-05-08T00:00:00</Date>
<Quantity>50</Quantity>
</DYNAMICTAG>
<DYNAMICTAG>
<Investment>IBM</Investment>
<Date>2015-05-08T00:00:00</Date>
<Quantity>30</Quantity>
</DYNAMICTAG>
</Loader>
```
|
**Refined Answer** (based on @JonC's suggestion to use CAST, but without needing a separate table to hold all the different possibilities for tags)
```
SELECT
(
SELECT
CAST('<' + t1.RecordType + '>' +
CAST(
(
SELECT
t2.Portfolio1 AS 'Portfolio',
t2.Qty1 AS 'Qty',
t2.Price AS 'Price',
FROM #temptable t2
WHERE t2.temptable_id = t1.temptable_id
FOR XML PATH(''),TYPE
) as varchar(max))
+ '</' + t1.RecordType + '>' as xml)
from #temptable t1
FOR XML PATH(''), TYPE
) AS 'TranRecords'
FOR XML PATH(''), ROOT('Loader')
```
|
I'm not sure about the condition to determine if the tag should be 'buy\_new' or 'sell\_new', but this may work for you:
```
SELECT
(
SELECT
Investment1 as Investment,
EventDate1 as Date,
Quantity1 as Quantity
FROM #temptable e1 where e1.temp_id = e.temp_id
AND (SOME CONDITION FOR 'Buy_New')
FOR XML PATH('Buy_New'),TYPE
) ,
(
SELECT
Investment1 as Investment,
EventDate1 as Date,
Quantity1 as Quantity
FROM #temptable e1 where e1.temp_id = e.temp_id
AND (SOME CONDITION FOR 'Sell_New')
FOR XML PATH('Sell_New'),TYPE
)
FROM #temptable e
FOR XML PATH(''), ROOT('Loader')
GO
```
EDIT:
By your answer I understand that you need something more dynamic. This may not be the elegant solution you're looking for, but It may get the job done without having to hard code every single element:
First, create a table to store your desired dynamic element names:
```
create table elements (id int, name nvarchar(20))
insert into elements (id, name) values (1, 'sell_new')
insert into elements (id, name) values (2, 'buy_new')
insert into elements (id, name) values (3, 'other_new')
```
Then comes the workaround:
```
SELECT cast('<' + e.name +'>' +
cast(
(
SELECT
Investment AS 'Investment',
EventDate as 'Date',
Quantity AS 'Quantity'
FROM temptable t1
WHERE t1.investment = t.investment
FOR XML PATH(''),TYPE
) as varchar(max))
+ '</' + e.name +'>' as xml)
from temptable t
JOIN elements e ON e.id = t.RecordType
order by investment
for xml path(''), root('Loader')
```
The result:
```
<Loader>
<sell_new>
<Investment>abc</Investment>
<Quantity>456</Quantity>
</sell_new>
<buy_new>
<Investment>cde</Investment>
<Quantity>789</Quantity>
</buy_new>
<sell_new>
<Investment>efg</Investment>
<Quantity>0</Quantity>
</sell_new>
</Loader>
```
|
SQL Server FOR XML PATH vary tags dynamically
|
[
"",
"sql",
"sql-server",
"xml",
""
] |
Let's say I have 2 tables (`tb1`, `tb2`) with both the following schema:
```
CREATE TABLE tb1 (
col1 INT NOT NULL,
col2 TEXT NOT NULL,
col3 TEXT NOT NULL,
col4 REAL
);
```
How do I find records of `tb1` which are not present in `tb2` on columns `col1`, `col2`, `col3`?
I researched on [this](https://stackoverflow.com/questions/1415438/how-to-find-rows-in-one-table-that-have-no-corresponding-row-in-another-table), [this](https://stackoverflow.com/questions/25049609/sqlite-query-select-all-records-that-does-not-exist-in-another-table) and [this](https://stackoverflow.com/questions/2246772/whats-the-difference-between-not-exists-vs-not-in-vs-left-join-where-is-null) but so far they're all finding records only on one column. I've also used the codes/logic in these links but ended up returning the wrong result with really bad performance (45K records on tb1, 1.7M records on tb2). I'm trying to implement this on SQLite.
If you wanna see, here's my sample code (using left join w/ where is null), but don't rely on it:
```
SELECT *
FROM tb1
LEFT JOIN tb2
ON
tb1.col1 = tb2.col1 AND
tb1.col2 = tb2.col2 AND
tb1.col3 = tb2.col3
WHERE
tb2.col1 IS NULL AND
tb2.col2 IS NULL AND
tb2.col3 IS NULL
```
|
Try `NOT EXISTS` instead, of course performance might depend on existing indexes...
```
SELECT *
FROM tb1
WHERE NOT EXISTS
(
SELECT *
FROM tb2
WHERE
tb1.col1 = tb2.col1 AND
tb1.col2 = tb2.col2 AND
tb1.col3 = tb2.col3
)
```
|
How about something like:
```
SELECT *
FROM tb1
WHERE NOT EXISTS (SELECT * FROM tb2
WHERE tb1.col1 = tb2.col1
AND tb1.col2 = tb2.col2
AND tb1.col3 = tb2.col3)
```
|
Find records on multiple fields not in another table
|
[
"",
"sql",
"sqlite",
""
] |
I'm taking a intro to database class right now. For a homework assignment I was given a piece of SQL code and told to just run some queries and give the results back. The problem is I can't even build the schema given with the code my teacher gave me. Here is this code:
```
CREATE TABLE emp (
name char(15),
dno int,
FOREIGN KEY (dno) REFERENCES dept(dno)
ON DELETE SET NULL
ON UPDATE CASCADE
);
CREATE TABLE dept (
dno int,
location char(30)
);
INSERT INTO emp(name,dno) VALUES
("Tom",111),
("Mary",111),
("Jack",222),
("Henry",222);
INSERT INTO dept(dno, location) VALUES
(111,"Irvine"),
(222,"LA"),
(333,"SF");
```
When this runs I get a error saying `Cannot add foreign key constraint`. I tried doing this via MySQL workbench and SQL Fiddle, both of which produce the same error. I don't really know whats wrong with the code given to me though and after looking online, couldn't seem to see anything obvious.
|
It may be part of the exercise for you to figure this out. However, assuming that it is not, here are important observations:
* The foreign key reference should be to a primary key, and neither table has primary keys.
* The table being referenced has to be defined before it can be referred to.
You can work on fixing these problems. [Here](http://www.sqlfiddle.com/#!9/b3e71) is a SQL Fiddle with the right definitions.
|
The definition of the first table (emp) has a reference to a table that has not yeat been created (dept).You have to create first 'dept' to create a foreign key of it into another table.
Execute the queries in this order:
```
CREATE TABLE dept (
dno int,
location char(30)
);
CREATE TABLE emp (
name char(15),
dno int,
FOREIGN KEY (dno) REFERENCES dept(dno)
ON DELETE SET NULL
ON UPDATE CASCADE
);
INSERT INTO emp(name,dno) VALUES
("Tom",111),
("Mary",111),
("Jack",222),
("Henry",222);
INSERT INTO dept(dno, location) VALUES
(111,"Irvine"),
(222,"LA"),
(333,"SF");
```
|
Cannot add foreign key constraint SQL when building schema
|
[
"",
"mysql",
"sql",
"foreign-keys",
""
] |
I've got a table called **tblRoutes** that holds a unique list of from and to routes (f = from; t = to):
```
| fCity | fState | tCity | tState |
|========|========|========|========|
|New York| NY | Miami | CA |
|Houston | TX |New York| NY |
...
```
And then I have a table called **tblCarrierRates** that lists a bunch of tiers and rates offered by carriers for travelling certain routes:
```
| fCity | fState | tCity | tState | Tier | Rate | CarrID | CarrName |
|========|========|========|========|======|======|========|=============|
|New York| NY | Miami | CA | 2 | $2.99| ABCD | Abracadabra |
|New York| NY | Miami | CA | 1 | $3.00| BUMP | Bumpy Rides |
|Houston | TX |New York| NY | 2 | $4.00| SLOW |Slow Carriers|
|Houston | TX |New York| NY | 2 | $4.01| ABCD | Abracadabra |
...
```
For each unique route listed in **tblRoutes** I'm looking for the 1 "best" offered from **tblCarrierRates**.
The criteria for "the best" is lowest *Tier*, followed by lowest *Rate*.
The result needs to return all the fields shown in **tblCarrierRates**, so based on the 2 routes shown above in **tblRoutes**, the desired result would be:
```
| fCity | fState | tCity | tState | Tier | Rate | CarrID | CarrName |
|========|========|========|========|======|======|========|=============|
|New York| NY | Miami | CA | 1 | $3.00| BUMP | Bumpy Rides |
|Houston | TX |New York| NY | 2 | $4.00| SLOW |Slow Carriers|
```
The approach I was looking at was to sort in ascending order *Tier* and then *Rate*, then some how match the TOP 1 record for each unique combination of fCity, fState, tCity, and tState:
```
SELECT A.fCity, A.fState, A.tCity, A.tState, Q.Tier, Q.Rate, Q.CarrID, Q.CarrName
FROM tblRoutes As A LEFT JOIN
(SELECT TOP 1 B.CarrID, B.CarrName, B.fCity, B.fState, B.tCity, B.tState, B.Rate, B.Tier
FROM tblCarrierRates As B
ORDER BY tblCarrierRates.Tier ASC, tblCarrierRates.Rate ASC) As Q
ON (A.tState = Q.tState) AND (A.tCity = Q.tCity) AND (A.fState = Q.fState) AND (A.fCity = Q.fCity);
```
The query doesn't fail, but as you can probably guess, the subquery I've written (Q) only returns a single record rather than 1 for each route in **tblRoutes**, so the end result is:
```
| fCity | fState | tCity | tState | Tier | Rate | CarrID | CarrName |
|========|========|========|========|======|======|========|=============|
|New York| NY | Miami | CA | 1 | $3.00| BUMP | Bumpy Rides |
|Houston | TX |New York| NY | | | | |
```
...as you can see, nothing's been matched for Houston to New York because my sub query only returned 1 result rather than 1 for each route.
How can I achieve the result I'm after?
|
I believe you are looking for an equivalent of the Sql Server and Oracle Analytic / Windowing functions like `ROW_NUMBER() OVER (PARTITION .. ORDER BY)`, e.g. [like so](http://sqlfiddle.com/#!6/a1c1a/2).
Although this isn't directly offered in MS Access, I believe it is possible to simulate a row numbering function in MS Access by applying a correlated subquery which counts the number of rows which have the same "Partition" (as defined by the join filter), and in which each row is ranked by counting the number of preceding rows in the same partition, which are 'below' the ordering criteria:
```
SELECT A.fCity, A.fState, A.tCity, A.tState, Q.Tier, Q.Rate, Q.CarrID, Q.CarrName, TheRank
FROM tblRoutes As A LEFT JOIN
(
SELECT B.CarrID, B.CarrName, B.fCity, B.fState, B.tCity, B.tState, B.Rate, B.Tier,
(
SELECT COUNT(*) + 1
FROM tblCarrierRates rnk
-- Partition Simulation (JOIN)
WHERE B.fCity = rnk.fCity AND B.fState = rnk.fState
AND B.tCity = rnk.tCity AND B.tState = rnk.tState
-- ORDER BY Simulation
AND (rnk.Tier < B.Tier OR
(rnk.Tier = B.Tier AND rnk.Rate < B.Rate))) AS TheRank
FROM tblCarrierRates As B) As Q
ON (A.tState = Q.tState) AND (A.tCity = Q.tCity)
AND (A.fState = Q.fState) AND (A.fCity = Q.fCity)
-- Now, you just want the top rank in each partition.
WHERE TheRank = 1;
```
Just be forewarned of performance - the subquery will be executed for each row.
Also, if there are ties, then both rows will be returned.
The +1 is to start each partition off with a Row number of 1 (since there will be zero preceding rows in its partition)
**Edit, taking out the comments**
```
SELECT A.fCity, A.fState, A.tCity, A.tState, Q.Tier, Q.Rate, Q.CarrID, Q.CarrName, TheRank
FROM tblRoutes As A LEFT JOIN
(
SELECT B.CarrID, B.CarrName, B.fCity, B.fState, B.tCity, B.tState, B.Rate, B.Tier,
(
SELECT COUNT(*) + 1
FROM tblCarrierRates rnk
WHERE B.fCity = rnk.fCity AND B.fState = rnk.fState
AND B.tCity = rnk.tCity AND B.tState = rnk.tState
AND (rnk.Tier < B.Tier OR
(rnk.Tier = B.Tier AND rnk.Rate < B.Rate))) AS TheRank
FROM tblCarrierRates As B) As Q
ON (A.tState = Q.tState) AND (A.tCity = Q.tCity)
AND (A.fState = Q.fState) AND (A.fCity = Q.fCity)
WHERE TheRank = 1
```
|
Your Inner Query needs to group by city and state. This will produce the 1 per city state that allows the outer join to join on those fields.
Debug your inner query independently until you see the results you would expect for your outer query to work. Take out the Top1 at first so you can see that the sorting and grouping are working correctly. I would explicitly put ASC DESC in your inner query so that others know which direction you intended the top to work against.
|
How to select the TOP 1 record per group (Partition)
|
[
"",
"sql",
"ms-access",
"ms-access-2010",
"ms-access-2013",
""
] |
In my query I have following part of the code:
```
CASE WHEN Field1 LIKE '%choice1%' OR Field1 LIKE '%Choice1%' .... THEN 'category 1'
WHEN Field1 LIKE '%choicea%' OR Field1 LIKE '%Choiceb%' .... THEN 'category 2'
END AS 'Cats'
```
I have numerous of those choices (1line about 20), (2line about 15 and son on).
Can I do something to make my code cleaner, something that I usually use `IN` for?
Example:
```
CASE WHEN Field1 IN LIKE ('%choice1'.'%choice2%',...) THEN 'category 1'
WHEN Field1 IN LIKE ('%choicea'.'%choiceb%',...) THEN 'category 2'
END AS 'Cats'
```
P.S. I heard that I can use somehow those `||` but I can't find in google how to use them.
What do you think about this problem?
|
If those choices are already known, put them in a table and use a join:
```
SELECT Field1, c.Category
FROM MainTable
LEFT JOIN Choices c ON Field1 LIKE ('%' + c.Choice + '%')
```
**Update:**
If there are sub-strings inside one category, add a `DISTINCT` to this query.
|
I think your using SQL for something it isn't designed for. Having that many like statements is going to cause your query to absolutely devastate the resources on the SQL server, and when you get more rows than a demo db you are going to have a runtime that makes your users cry.
If you need this kind of information you need to try one of these things:
1.Make new columns to store the data you are trying to extract, and write update statements to parse out that says. For example, if you have case text like 'warning%' make an isWarning bit column, or a type varchar and store 'Warning' in it
2. Select the data back from the db without the case/like and perform that lookup in your business code afterwards
|
SQL CASE LIKE with multiple choices
|
[
"",
"sql",
"sql-server",
"t-sql",
"case",
"sql-like",
""
] |
I'm trying to combine these two stored procedures so that it returns the values of both under their own headings/columns so the results are as followed "Support Hours Worked" "Support Hours Charged" "Development Hours Worked" and "Development Hours Charged"
Query One
```
USE [Database]
GO
/****** Object: StoredProcedure [dbo].[usp_JobTimeSystem_FetchSupport] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_JobTimeSystem_FetchSupport]
@FromDate datetime,
@ToDate datetime,
@SystemUserID uniqueidentifier
AS
;WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= @FromDate
AND tblJobWorkLog.EndTime <= @ToDate
AND (WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93')
AND SystemUserID = @SystemUserID
)
SELECT
FromDate,
(SUM(JobTime) / 60.0) HoursWorked,
SUM(ChargeableTime) AS HoursCharged
FROM
cte
GROUP BY
FromDate
ORDER BY
FromDate
```
Query Two
```
USE [Database]
GO
/****** Object: StoredProcedure [dbo].[usp_JobTimeSystem_FetchDevelopment] Script Date: 18/05/2015 09:23:53 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_JobTimeSystem_FetchDevelopment]
@FromDate datetime,
@ToDate datetime,
@SystemUserID uniqueidentifier
AS
;WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= @FromDate
AND tblJobWorkLog.EndTime <= @ToDate
AND (WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F'
OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315' OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR
WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD')
AND SystemUserID = @SystemUserID
)
SELECT
FromDate,
(SUM(JobTime) / 60.0) HoursWorked,
SUM(ChargeableTime) AS HoursCharged
FROM
cte
GROUP BY
FromDate
ORDER BY
FromDate
```
I know there's definitely a way to put these together but I only started SQL a few weeks ago and don't really know how to go about doing it, a few pointers even would mean a lot, thank you.
|
Welcome to the wonderful world of SQL! Hopefully the below code will help you out.
Basically create a query with the common select between the two queries and use a case statement to filter out further the unique results you want from both the queries:
```
WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime,
WorkLogJobTypeID
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= @FromDate
AND tblJobWorkLog.EndTime <= @ToDate
AND SystemUserID = @SystemUserID
)
SELECT
FromDate,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then (SUM(JobTime) / 60.0) end as SupportHoursWorked,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then SUM(ChargeableTime) end AS SupportHoursCharged,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then (SUM(JobTime) / 60.0) end as DevelopmentHoursWorked,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then SUM(ChargeableTime) end as DevelopmentHoursCharged
FROM
cte
GROUP BY
FromDate, WorkLogJobTypeID
ORDER BY
FromDate
```
|
You can define 2 CTEs in one query, and then join them up on the UserID like this:
```
USE [Database]
GO
/****** Object: StoredProcedure [dbo].[usp_JobTimeSystem_FetchSupport] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_JobTimeSystem]
@FromDate datetime,
@ToDate datetime,
@SystemUserID uniqueidentifier
AS
;WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= @FromDate
AND tblJobWorkLog.EndTime <= @ToDate
AND (WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93')
AND SystemUserID = @SystemUserID
)
, cte2 AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= @FromDate
AND tblJobWorkLog.EndTime <= @ToDate
AND (WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F'
OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315' OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR
WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD')
AND SystemUserID = @SystemUserID
)
SELECT
a.FromDate,
(SUM(a.JobTime) / 60.0) SupportHoursWorked,
SUM(a.ChargeableTime) AS SupportHoursCharged,
(SUM(b.JobTime) / 60.0) DevelopmentHoursWorked,
SUM(b.ChargeableTime) AS DevelopmentHoursCharged
FROM
cte as a
JOIN
cte2 as b
ON
a.SystemUserID=b.SystemUserID
GROUP BY
a.FromDate
ORDER BY
a.FromDate
```
But obviously without test data, I can only guess that this will work based on what you have shown us here.
|
How can I combine these two stored procedures
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I have got few sales offices, together with their sales. I am trying to set-up report that will basically tell how is each office performing. Getting some SUMs, COUNTs are quite easy, however I am struggling with getting rank of single office.
I would like to have this query return the rank of single office, during the entire period and/or specified time (eg. BETWEEN '2015-01-01' AND '2015-01-15')
I need to also exclude some offices from the rank list (eg. OfficeName NOT IN ('GGG','QQQ')), so using the sample data, the rank of office 'XYZ' would be 5.
In case that the OfficeName = 'XYZ' is included in WHERE clause, the RANK would be obviously = 1 as SQL filters out other rows, not contained in WHERE clause before executing the rest of the code.
Is there any way of doing the same, without using the TemporaryTable ?
```
SELECT OfficeName, SUM(Value) as SUM,
RANK() OVER (ORDER BY SUM(VALUE) DESC) AS Rank
FROM Transactions t
JOIN Office o ON t.TransID=o.ID
WHERE OfficeName NOT IN ('GGG','QQQ')
--AND OfficeName = 'XYZ'
GROUP BY OfficeName
ORDER BY 2 DESC;
```
I am using MS SQL server 2008.
SQL Fiddle with some random data is here: <http://sqlfiddle.com/#!3/fac7a/35>
Many thanks for help!
|
You just need to wrap your code as derived table or use a `CTE` like this and then do the filter for `OfficeName = 'XYZ'`.
```
;WITH CTE AS
(
SELECT OfficeName, SUM(Value) as SUM,
RANK() OVER (ORDER BY SUM(VALUE) DESC) AS Rank
FROM Transactions t
JOIN Office o ON t.TransID=o.ID
WHERE OfficeName NOT IN ('GGG','QQQ')
GROUP BY OfficeName
)
SELECT *
FROM CTE
WHERE OfficeName = 'XYZ';
```
|
if i understand you correctly you want to do:
```
SELECT *
FROM (
SELECT OfficeName, SUM(Value) as SUM,
RANK() OVER (ORDER BY SUM(VALUE) DESC) AS Rank
FROM Transactions t
JOIN Office o ON t.TransID=o.ID
WHERE OfficeName NOT IN ('GGG','QQQ')
GROUP BY OfficeName
) dat
WHERE OfficeName = 'XYZ';
```
|
SQL RANK with multiple WHERE clause
|
[
"",
"sql",
"t-sql",
"select",
""
] |
Could you please explain me why it is necessary to specify those foreign keys when creating tables?
I mean, I've created two tables which have a one-to-many relationship (on the ER-diagram) but I didn't specify the foreign keys and references. I can connect the tables using the where-clause and even perform joins and so on.
Probably, I don't get some basic concepts, although I've read some stuff about it. I guess it has something to do with data consistency or referential integrity or something.
So, could you explain me these concepts?
Are those references and foreign keys are absolutely required if I have, let's say, 8-10 tables with one-to-many relationships and if I can assure that the data is inserted correctly into the database?
|
It is not *necessary* to specify foreign key relationships. It is just a good idea.
When you specify the relationship, the database ensures relational integrity. That is, it ensures that the values in the foreign key column are legitimate values.
In addition, the `cascade` options on foreign keys are a big help when values are updated or deleted.
|
The reason it's necessary is to ensure [data integrity](http://en.wikipedia.org/wiki/Data_integrity).
Suppose you have a table called `orders`, and a table called `order details`, both have a column called `order id`.
If you don't use foreign keys you might be inserting order details for an order that doesn't exists in the orders table.
Having a foreign key will make the database raise an error if you try to add order details to a non existing order.
It will also raise an error if you delete an order that already have details, unless you delete the order details first or specify [cascade delete](http://www.mysqltutorial.org/mysql-on-delete-cascade/) on the foreign key.
|
Why is it necessary to explicitly specify foreign keys and references in databases?
|
[
"",
"mysql",
"sql",
"foreign-keys",
"foreign-key-relationship",
""
] |
I have two tables:
Table 1 which the fields
```
o_id (Fk),
dt
```
I have Table 2
```
o_id (Fk),
attr1 (Number)
```
My objective is count how many occurrences of attr1 occur where the value is from 0-10, 11-20, 21+. I then need to take these occurrences and Group them by dt from Table 1
To make it more clear I provide a sample output:
```
| Date |count(0-10) | count(11-20) | count(21+) |
-----------------------------------------------------
| 01/13 | 3 | 5 | 13 |
| 02/13 | 2 | 7 | 10 |
```
This is the closest I can get but it doesn't quite work:
```
SELECT
t1.dt,
(select count(tt2.o_id) from table1 tt1, table2 tt2 WHERE attr1 BETWEEN 0 AND 10),
(select count(tt2.o_id) from table1 tt1, table2 tt2 WHERE attr1 BETWEEN 11 AND 20),
(select count(tt2.o_id) from table1 tt1, table2 tt2 WHERE attr1 > 21),
FROM
table1 t1, table2 t2
WHERE
t1.o_id = t2.o_id
GROUP BY
t1.dt;
```
The results I get now:
```
| Date |count(0-10) | count(11-20) | count(21+) |
-----------------------------------------------------
| 01/13 | 3 | 5 | 13 |
| 02/13 | 3 | 5 | 13 |
| 02/14 | 3 | 5 | 13 |
| 02/15 | 3 | 5 | 13 |
| 02/16 | 3 | 5 | 13 |
| 02/17 | 3 | 5 | 13 |
| 02/18 | 3 | 5 | 13 |
```
Just keeps repeating inside the counts column.
|
You don't need subqueries. This will do what you seem to want:
```
SELECT
t1.dt,
count(CASE WHEN attr1 BETWEEN 0 AND 10 THEN 1 END),
count(CASE WHEN attr1 BETWEEN 11 AND 20 THEN 1 END),
count(CASE WHEN attr1 > 20 THEN 1 END),
FROM
table1 t1
JOIN table2 t2
ON t1.o_id = t2.o_id
GROUP BY
t1.dt
;
```
The `COUNT()` aggregate function counts the number of rows in each group for which the argument expression is not `NULL`. The `CASE` expressions that are being counted are non-null only when their (one each) `WHEN` condition is satisfied.
|
The issue is that you don't join the tables in the subqueries, but in any case this query can be done as a conditional aggregation which at least I would think look a bit better:
```
SELECT
t1.dt,
SUM(CASE WHEN attr1 BETWEEN 0 AND 10 THEN 1 ELSE 0 END) AS "count(0-10)",
SUM(CASE WHEN attr1 BETWEEN 11 AND 20 THEN 1 ELSE 0 END) AS "count(11-20)",
SUM(CASE WHEN attr1 > 21 THEN 1 ELSE 0 END) AS "count(21+)"
FROM
table1 t1
JOIN
table2 t2
ON
t1.o_id = t2.o_id
GROUP BY
t1.dt;
```
|
Get counts based on a group of dates
|
[
"",
"sql",
"oracle11g",
""
] |
Consider the following MySQL table schema:
```
id int,
amount decimal,
transaction_no,
location_id int,
created_at datetime
```
The above schema is used to store POS receipts for restaurants. For getting daily report of count of receipts & their sum. tried following query:
```
SELECT location_id,count(distinct(transaction_no)) as count,sum(amount) as receipt_amount FROM `receipts` WHERE date(`receipts`.`created_at`) = '2015-05-17' GROUP BY `receipts`.`location_id`
```
But the issue is that a receipt with same transaction number gets repeated multiple times where each time the amount may/may not be different. business rule for handling this is that the last receipt we have received is the latest one. so the above query doesn't work.
What I am looking to do is the following:
1. For each location, get all the receipts of that day.
2. If the transaction no are duplicated, Get the last received receipts based on created\_at
3. So sum of amounts of amounts col.
**[edit]**
here is the query plan:
```
*************************** 1. row ***************************
id: 1
select_type: PRIMARY
table: <derived2>
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 25814155
filtered: 100.00
Extra: Using where; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: PRIMARY
table: r
type: ref
possible_keys: punchh_key_location_id_created_at
key: punchh_key_location_id_created_at
key_len: 50
ref: t.punchh_key
rows: 1
filtered: 100.00
Extra: Using index condition; Using where
*************************** 3. row ***************************
id: 2
select_type: DERIVED
table: r
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 25814155
filtered: 100.00
Extra: Using temporary; Using filesort
3 rows in set, 1 warning (0.00 sec)
```
|
How do you count a transaction that is repeated on *multiple* days?
I presume that you actually do *not* want to count a transaction, simply because it is the last one on the day, if there is another receipt on the next day. You can get the final record for each transaction in several ways. A typical way is to use `group by` (this is similar to Brian's query, but subtly different):
```
select r.*
from receipts r join
(select transaction_no, max(created_at) as maxca
from receipts r
group by transaction_no
) t
on r.transaction_no = t.transaction_no and r.created_at = t.maxca;
```
The full query is then:
```
select location_id, count(*) as numtransactions, sum(amount) as receipt_amount
from receipts r join
(select transaction_no, max(created_at) as maxca
from receipts r
group by transaction_no
) t
on r.transaction_no = t.transaction_no and r.created_at = t.maxca;
where r.created_at >= date('2015-05-17') and r.created_at < date('2015-05-18')
group by location_id;
```
Note about date comparison.
Your original form of `date(r.created_at) = '2015-05-17'` is logically correct. However, the use of `date()` means that an index cannot be used. The form with two comparisons to constants would allow the query to take advantage of an index on `receipts(created_at)`.
The use of `like` for dates is to discouraged. This requires converting the date *implicitly* to a string and then doing the comparison as a string. This has needless conversions and in some databases makes the semantics dependent on globalization settings.
|
You can get sum up the amounts for just the last `created_at` value within the same day by joining to an inline view that determines the last `created_at` for each `transaction_no` in that day.
This avoids simply using `sum(distinct ...` because otherwise two different transactions with the same amount, if such exist, would only be counted once.
This approach should avoid that problem.
```
select r.location_id,
count(*) as num_transactions,
sum(r.amount) as receipt_amount
from receipts r
join (
select transaction_no,
max(created_at) as last_created_at_for_trans
from receipts
where created_at like '2015-05-17%'
group by transaction_no
) v
on r.transaction_no = v.transaction_no
and r.created_at = v.last_created_at_for_trans
where r.created_at like '2015-05-17%'
group by r.location_id
```
Another approach is to use `not exists`, you might want to test to see which provides better performance:
```
select r.location_id,
count(*) as num_transactions,
sum(r.amount) as receipt_amount
from receipts r
where r.created_at like '2015-05-17%'
and not exists ( select 1
from receipts x
where x.transaction_no = r.transaction_no
and x.created_at > r.created_at
)
group by r.location_id
```
|
SQL Query for sum of unique amounts, remove duplicates
|
[
"",
"mysql",
"sql",
"select",
"distinct",
""
] |
I have a problem understanding how the combination of `NOT EXISTS` and subqueries works.
I understand that by adding `NOT EXISTS`, the query will return all the rows the subquery will not. What I don't understand is how it works when we use it in the following case.
```
+-------------+--------+
| id_employee | salary |
+-------------+--------+
| E001 | 1000 |
| E002 | 1001 |
| E003 | 999 |
| E004 | 1001 |
+-------------+--------+
```
And try the following query:
```
SELECT E.id_employee, E.salary
FROM Employees E
WHERE NOT EXISTS (
SELECT F.id_employee
FROM Employees F
WHERE F.salary > E.salary
);
```
Which returns:
```
+-------------+--------+
| id_employee | salary |
+-------------+--------+
| E002 | 1001 |
| E004 | 1001 |
+-------------+--------+
```
Well, I don't really get to understand how the `F.salary > E.salary` comparison works, because if it compares row by row the two tables (which are the same), doesn't make sense to me to return any row at all.
Could you give me any hint on how this works?
|
`NOT EXISTS` is usually used with a Correlated Subquery, i.e. there's a relation between the Outer and the Inner query (and you can't run the subquery code on its own without a "table not found" error). Your example is correlated but in a strange way.
Logically it's processed like this:
for each row in the employee table run following query:
```
SELECT F.id_employee
FROM Employees F
WHERE F.salary > <salary of the current row>
```
If this query doesn't return any row (i.e. there's no row with a salary greater than the current one) the `NOT EXISTS` evaluates to true and returns this row from the employee table.
In fact this query returns the rows with the lowest salary.
A Correlated `EXISTS` is quite similar to a `JOIN` and the `NOT EXISTS` is a kind of anti-JOIN, only those rows which can't be joined will be returned. And this is the common usage, find rows with values which don't exists in the 2nd table, e.g find employees with an invalid/non-existing department number
```
SELECT *
FROM Employees E
WHERE NOT EXISTS (
SELECT *
FROM Departments D
WHERE E.department_number = D.department_number
);
```
|
Since you are specifying no other condition in your subquery than:
```
F.salary > E.salary
```
You are getting any row from `E` where there is no other entry in `F` that has a salary greater then it. Since both `E002` and `E004` have a salary of `1001`, and no one else has a salary greater then `1001`, both `E002` and `E004` are returned.
If you change your `WHERE` to be:
```
F.salary >= E.salary
```
You will get no records returns as everyone has a salary that is less then or equal to another employee's salary.
|
Confusion with NOT EXISTS and subqueries
|
[
"",
"mysql",
"sql",
"database",
""
] |
I am looking for the best way to build a query which would hide the record (row) in the event that three field values (in three different columns) would be null. The code below is giving me a syntax run time error message of 3075. Also, I am not sure if it is causing a problem but the code below is executed from a main form and impacting the subform frmStaticDataSkills02.
```
sql_get = "SELECT [tblCompetency02].[HighLevelObjective], [tblCompetency04].[Self], [tblCompetency04].[SelfSpecialLanguage], [tblCompetency04].[SelfChecklist], [tblCompetency04].[Team], [tblCompetency04].[TeamSpecialLanguage], [tblCompetency04].[TeamChecklist], [tblCompetency04].[Organisation], [tblCompetency04].[OrganisationSpecialLanguage], [tblCompetency04].[OrganisationChecklist], [tblCompetency02].[Competency] FROM [tblCompetency04] INNER JOIN [tblCompetency02] ON [tblCompetency04].[HighLevelObjective] = [tblCompetency02].[ID] WHERE ([tblcompetency04].[self]<>"" or [tblcompetency04].[team]<>"" or [tblcompetency04].[organisation]<>"")"
Form_frmStaticDataSkills02.Form.RecordSource = sql_get
```
|
This will return the data if not all three columns are NULL:
```
where not (col1 is null and col2 is null and col3 is null)
```
This is the same after applying algebra of logic: return the row if any of the three rows is NOT NULL
```
where col1 is not null or col2 is not null or col3 is not null)
```
|
In a general sense, in a table of N columns, you can explicitly count the number of NULL columns in a table and then add them up and compare the count of nulls to `3` in a where predicate:
```
SELECT *
FROM MyTable x
WHERE
((IIF(x.COL1 IS NULL, 1 , 0) +
IIF(x.COL2 IS NULL, 1 , 0) +
IIF(x.COL3 IS NULL, 1 , 0) +
IIF(x.COL4 IS NULL, 1 , 0))) <> 3;
```
(Obviously, keep adding `IIF` statements for all N columns of the table
|
Hiding a row in a table with a query when 3 columns have null values
|
[
"",
"sql",
"ms-access",
"vba",
"ms-access-2010",
""
] |
I have a company table and a licence table. I need to insert a new row in the license table for every company in the company table that isn't already in the license table.
`License (ID,CompanyID,LicenseStart,LicenseEnd,CreatedDate,AddUser,UpdateUser,UpdateDate,TemplateId)`
The ID in this table is incremented by 1 when a new row is added.
`Company (ID,Name,CreateDate,CState,LocationID,LegalName)`
The default value that would be entered for each CompanyID that isn't already in the license table should look something like this.
`Insert (ID, @theCompanyID, GetDate(), DATEADD(day,14,GETDATE()), null,null,null,null null)`
`@theCompanyID` would be the CompanyID that isn't in the license table
I am very new to this so any help would be appreciated.
|
`license.id` is an identity column, so you do not need to insert it.
```
insert into license (CompanyID, LicenseStart, LicenseEnd)
select c.id, GetDate(), DATEADD(day, 14, GETDATE())
from company c
where not exists (select 1
from license l
where c.ID = l.CompanyID
);
```
You also don't need to insert explicit `NULL` values for columns where you are not supplying values. The default is to set these values to `NULL`.
If your start and end dates do not have a *time* component -- just the date -- then use this instead:
```
select c.id, cast(GetDate() as date), cast(DATEADD(day, 14, GETDATE()) as date)
from company c
where not exists (select 1
from license l
where c.ID = l.CompanyID
);
```
|
Do an `INSERT` with `SELECT`, use `NOT EXISTS` to make sure no existing company is inserted again. Something like:
```
insert into license (ID,CompanyID,LicenseStart,LicenseEnd,CreatedDate,AddUser,UpdateUser,UpdateDate,TemplateId)
select ID, theCompanyID, GetDate(), DATEADD(day,14,GETDATE()), null,null,null,null null
from company
where not exists (select 1 from company
where company.CompanyID = license.CompanyID)
```
|
SQL - Insert new row into an existing table based on another table entry
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
What is wrong with this query:
```
SELECT AVG(price) as AVG_price FROM
(
(SELECT price FROM PC, Product WHERE maker = 'A' AND Product.model = PC.model)
UNION ALL
(SELECT price FROM Laptop, Product WHERE maker = 'A' AND Product.model = Laptop.model)
)
```
It gives the following error:
> Incorrect syntax near ')'
|
Try This..
```
SELECT AVG(d.price) as AVG_price FROM
(
SELECT price FROM PC, Product WHERE maker = 'A' AND Product.model = PC.model
UNION ALL
SELECT price FROM Laptop, Product WHERE maker = 'A' AND Product.model = Laptop.model
) as d
```
|
Alias the derived table and converted to SQL-92 ANSI standards.
```
SELECT AVG(price) as AVG_price FROM
(
SELECT price FROM PC INNER JOIN Product on Product.model = PC.model
WHERE maker = 'A'
UNION ALL
SELECT price FROM Laptop INNER JOIN Product on Product.model = Laptop.model
WHERE maker = 'A'
) B
```
|
Select from (select union all select) error
|
[
"",
"sql",
"sql-server",
""
] |
I have simple scala application with spark dependencies. I am just trying to create spark context using the follwing code.
```
def main(args: Array[String]) {
var sparkConfig : SparkConf = new SparkConf() ;
sparkConfig.setAppName("ProxySQL").setMaster("local");
var sc = new SparkContext(sparkConfig)
}
```
When i try to run this code inside main - it throws security execption at new SparkContext(sparkConfig) with the following message .
```
Exception in thread "main" java.lang.SecurityException: class "javax.servlet.ServletRegistration"'s signer information does not match signer information of other classes in the same package .
```
At problem tab of Eclipse, it shows one warning
Description Path Resource Location Type
```
More than one scala library found in the build path (D:/workspaces/scala/scalaEclipse/eclipse/plugins/org.scala-ide.scala210.jars_4.0.0.201503031935/target/jars/scala-library.jar, C:/Users/prems.bist/.m2/repository/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.jar).This is not an optimal configuration, try to limit to one Scala library in the build path. SQLWrapper Unknown Scala Classpath Problem
```
I have scala installation of 2.10.4 at windows machine.
Scala compiler version set at eclipse is 2.10.5 . What is causing this security exception? Is this the incompatiblity version issues or what exaclty else? How would i solve it?
|
The problem was more or less related with conflicting dependencies.
The following task resolve my issue.
> Go to Project
>
> Build Path -> Order and Export tab -> Change the order of
> javax.servlet jar
> either to bottom or top.
This Resolved the problem.
|
Well,as I follow the suggestion:Go to Project Build Path -> Order and Export tab -> Change the order of javax.servlet jar either to bottom or top.
I find my buidpath libiaries was changed and it seems mussy(too many small libs),maybe this was caused by maven.
So I try to remove all of them and reimport the libs and chose Project -> Maven ->Update Project !
Now ,it goes well.
[](https://i.stack.imgur.com/RPM19.png)
|
Scala with spark - "javax.servlet.ServletRegistration"'s signer information does not match signer information of other classes in the same package
|
[
"",
"sql",
"scala",
"apache-spark",
""
] |
I have a table and would like to find minimum and maximum values of price.
I would like to get minimal price from action\_table when current date between "from" and "to".
```
from to action_price price
2015-04-02 2015-08-02 20 25
2015-04-02 2015-04-20 0 30
2015-04-03 2015-04-21 0 40
```
So from the table above I need: **min->20** (because current date between "from"/"to") and **max->40**
I have tried something like that, but don't work as expected:
```
SELECT
CASE WHEN curdate() BETWEEN from AND to THEN MAX(action_price) ELSE MAX(price) END AS max,
CASE WHEN curdate() BETWEEN from AND to THEN MIN(action_price) ELSE MIN(price) END AS min
FROM `table`;
```
|
If I understand correctly, you want minumum and maximum of these values:
> ~~25~~ 20, 30, 40
You simply need to wrap the case statement inside the aggregate functions instead of other way round:
```
SELECT
MIN(CASE WHEN CURDATE() BETWEEN `from` AND `to` THEN action_price ELSE price END) AS `min`,
MAX(CASE WHEN CURDATE() BETWEEN `from` AND `to` THEN action_price ELSE price END) AS `max`
FROM action_table;
```
```
+------+------+
| min | max |
+------+------+
| 20 | 40 |
+------+------+
```
|
I think a "HAVING" statement will work here:
<http://sqlfiddle.com/#!9/4653c/4>
|
Conditional select between dates
|
[
"",
"mysql",
"sql",
"date",
"datetime",
"between",
""
] |
I would like to use `INNER JOIN` on one table. But I get this error message:
> Msg 208, Level 16, State 1, Line 1
> Invalid object name 'a'.
My query is :
```
select *
from
(select
*,
ROW_NUMBER() OVER (ORDER BY GoodMainCode) as row
from [SarMem].[dbo].[Book_Data1]
where GoodName like '%A%' and GroupCode = 115) a
inner join a b on b.GoodMainCode = a.GoodMainCode
where a.row > 0 and a.row <= 100
```
**updated**

|
Do it with cte:
```
;with a as(select *, ROW_NUMBER() OVER (ORDER BY GoodMainCode) as row
from [SarMem].[dbo].[Book_Data1]
where GoodName like '%A%' and GroupCode = 115)
select * from a
join a b on b.GoodMainCode = a.GoodMainCode
where a.row > 0 and a.row <= 100
```
|
I think here `a` is an alias name not the table name. So SQL SERVER won't allow `a` to create one more alias `b`.
So if you wanna use same table as `a`, then you have to rewrite subquery for `b` also.
Like,
```
select * from (
select *,ROW_NUMBER() OVER (ORDER BY GoodMainCode) as row from [SarMem].[dbo].[Book_Data1] where GoodName like '%A%' and GroupCode = 115 ) a
INNER JOIN (
select *,ROW_NUMBER() OVER (ORDER BY GoodMainCode) as row from [SarMem].[dbo].[Book_Data1] where GoodName like '%A%' and GroupCode = 115 ) b on b.GoodMainCode = a.GoodMainCode where a.row > 0 and a.row <= 100
```
|
Self-join a table in SQL Server 2008
|
[
"",
"sql",
"sql-server",
""
] |
I have a university graduate database and would like to extract a random sample of data of around 1000 records.
I want to ensure the sample is representative of the population so would like to include the same proportions of courses eg

I could do this using the following:
```
select top 500 id from degree where coursecode = 1 order by newid()
union
select top 300 id from degree where coursecode = 2 order by newid()
union
select top 200 id from degree where coursecode = 3 order by newid()
```
but we have hundreds of courses codes so this would be time consuming and I would like to be able to reuse this code for different sample sizes and don't particularly want to go through the query and hard code the sample sizes.
Any help would be greatly appreciated
|
You want a stratified sample. I would recommend doing this by sorting the data by course code and doing an nth sample. Here is one method that works best if you have a large population size:
```
select d.*
from (select d.*,
row_number() over (order by coursecode, newid) as seqnum,
count(*) over () as cnt
from degree d
) d
where seqnum % (cnt / 500) = 1;
```
EDIT:
You can also calculate the population size for each group "on the fly":
```
select d.*
from (select d.*,
row_number() over (partition by coursecode order by newid) as seqnum,
count(*) over () as cnt,
count(*) over (partition by coursecode) as cc_cnt
from degree d
) d
where seqnum < 500 * (cc_cnt * 1.0 / cnt)
```
|
It is not necessary to partition the population at all.
If you are taking a sample of 1000 from a population among hundreds of course codes, it stands to reason that many of those course codes will not be selected at all in any one sampling.
If the population is uniform (say, a continuous sequence of student IDs), a uniformly-distributed sample will automatically be representative of population weighting by course code. Since newid() is a uniform random sampler, you're good to go out of the box.
The only wrinkle that you might encounter is if a student ID is a associated with multiple course codes. In this case make a unique list (temporary table or subquery) containing a sequential id, student id and course code, sample the sequential id from it, grouping by student id to remove duplicates.
|
SQL random sample with groups
|
[
"",
"sql",
"sql-server",
"sample",
"random",
""
] |
I have following table (master\_group) structure :
```
code name under
1 National Sales Manager 1
2 regional sales manager 1
3 area sales manager 2
4 sales manager 3
```
How do I get the ultimate parent of a particular row like :
```
code name under ultimateparent
1 National Sales Manager 1 1
2 regional sales manager 1 1
3 area sales manager 2 1
4 sales manager 3 1
```
|
With recursive cte going from top to childs:
```
with cte as(
select *, code as ultimate from t where code = under
union all
select t.*, c.ultimate from t
join cte c on c.code = t.under
where t.code <> t.under
)
select * from cte
```
For data:
```
create table t (code int, name varchar(100), under int)
insert into t values
(1, 'National Sales Manager', 1),
(2, 'regional sales manager', 1),
(3, 'area sales manager', 2),
(4, 'sales manager', 3),
(5, 'a', 5),
(6, 'b', 5),
(7, 'c', 5),
(8, 'd', 7),
(9, 'e', 7),
(10, 'f', 9),
(11, 'g', 9)
```
it generates the output:
```
code name under ultimate
1 National Sales Manager 1 1
5 a 5 5
6 b 5 5
7 c 5 5
8 d 7 5
9 e 7 5
10 f 9 5
11 g 9 5
2 regional sales manager 1 1
3 area sales manager 2 1
4 sales manager 3 1
```
Fiddle <http://sqlfiddle.com/#!6/17c12e/1>
|
You can use a recursive CTE to walk the tree and then choose the highest level for each code:
```
with cte as (
select mg.code, mg.name as name, mg.under as under, mg.under as parent, 1 as lev
from master_group mg
union all
select mg.code, mg.name, mg.under, cte.under as parent, cte.lev + 1
from master_group mg join
cte
on mg.under = cte.code
where cte.under is not null and cte.under <> mg.code
)
select code, name, under, parent as ultimateparent
from (select cte.*, max(lev) over (partition by cte.code) as maxlev
from cte
) t
where lev = maxlev;
```
[Here](http://sqlfiddle.com/#!6/0760f/1) is a SQL Fiddle.
|
SQL Server function to get top level parent in hierarchy
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I have a stored procedure, which I've temporarily filled with values that returns the time spent on logged jobs at my company, I've grouped the JobTypeID's that should fit into 4 different Columns, Dev hours worked, Dev hours charged, Support hours worked, and Support hours charged. Upon running the code procedure, due to how I'm using the GroupBy clause it's splitting the data into multiple rows, how can i fix this so it outputs the data into one row?
```
;WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime,
WorkLogJobTypeID,
tblJobWorkLog.SystemUserID
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= '20150511'
AND tblJobWorkLog.EndTime <= '20150515'
AND SystemUserID = '65405273-6BFD-4482-8A0D-BC6430AC996D'
)
SELECT
FromDate,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then (SUM(JobTime) / 60.0) end as SupportHoursWorked,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then SUM(ChargeableTime) end AS SupportHoursCharged,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then (SUM(JobTime) / 60.0) end as DevelopmentHoursWorked,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then SUM(ChargeableTime) end as DevelopmentHoursCharged
FROM
cte
GROUP BY
FromDate, WorkLogJobTypeID
ORDER BY
FromDate
```
RESULTS:
```
2015-05-11 NULL NULL 1.250000 1.250000
2015-05-11 1.166666 0.5 NULL NULL
2015-05-12 0.000000 0.0 NULL NULL
2015-05-12 NULL NULL 3.250000 1.250000
2015-05-12 0.250000 0.000000 NULL NULL
2015-05-12 NULL NULL 0.250000 0.000000
2015-05-13 NULL NULL 0.750000 0.750000
2015-05-13 0.000000 0.000000 NULL NULL
2015-05-13 NULL NULL 0.0 0.000000
2015-05-14 NULL NULL 1.0 1.000000
2015-05-14 4.000000 1.166667 NULL NULL
2015-05-14 NULL NULL 1.0 0.750000
```
^ I need to combine the date results into one row
|
Apologies this didn't work first time round, You can use the following which should now give you the result you're looking for:
```
WITH cte AS (
SELECT
DATEPART(Year, StartTime) AS YearNumber,
DATEPART(Month, StartTime) AS MonthNumber,
DateName(Month, StartTime) + ' ' + CAST(DatePart(Year, StartTime) AS nvarchar(50)) AS TimePeriod,
DATEADD(day, DATEDIFF(Day, 0, StartTime), 0) AS FromDate,
DateDiff(minute, StartTime, EndTime) AS JobTime,
tblJobWorkLog.ChargeableTime,
WorkLogJobTypeID,
tblJobWorkLog.SystemUserID
FROM
tblJobWorkLog
INNER JOIN tblJob ON tblJobWorkLog.JobID = tblJob.JobID
INNER JOIN tblContact ON tblJob.ContactID = tblContact.ContactID
WHERE
tblJobWorkLog.StartTime >= '20150511'
AND tblJobWorkLog.EndTime <= '20150515'
AND SystemUserID = '65405273-6BFD-4482-8A0D-BC6430AC996D'
)
Select FromDate, SUM(SupportHoursWorked) SupportHoursWorked, SUM(SupportHoursCharged) SupportHoursCharged, SUM(DevelopmentHoursWorked) DevelopmentHoursWorked, SUM(DevelopmentHoursCharged) DevelopmentHoursCharged
From
(SELECT
FromDate,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then (SUM(JobTime) / 60.0) end as SupportHoursWorked,
Case when WorkLogJobTypeID = 'FA5E6979-D228-44B7-A91B-8DDC8DDC709B' OR WorkLogJobTypeID = '3171B295-60E9-4724-95A3-04FA182D7D43' OR WorkLogJobTypeID = '52c2691f-ff0a-4263-a440-8a309f868f93' then SUM(ChargeableTime) end AS SupportHoursCharged,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then (SUM(JobTime) / 60.0) end as DevelopmentHoursWorked,
Case when WorkLogJobTypeID = 'D0E910B1-B4BD-430C-AD04-EB4E67946806' OR WorkLogJobTypeID = 'B0BBF362-294D-4262-BED8-EDA7EE74745B' OR WorkLogJobTypeID = '1E333ADC-E4F2-4042-8B65-E25F2770D59F' OR WorkLogJobTypeID = 'A445B7CE-E9E4-48E6-B5AA-83C83F045315'
OR WorkLogJobTypeID = '1D83F510-87FA-446E-9337-3D0376210D57' OR WorkLogJobTypeID = 'B59C1596-E1D0-4118-A805-65208E27AFB5' OR WorkLogJobTypeID = 'F44A4B3C-B149-45A8-A9F0-5A57883482FD' then SUM(ChargeableTime) end as DevelopmentHoursCharged
FROM
cte
GROUP BY
FromDate, WorkLogJobTypeID) a
Group by FromDate
ORDER BY
FromDate
```
|
Use sum on the columns with nulls, or you will lose results, then group by date
|
Combing two returned rows in SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"stored-procedures",
""
] |
I have a PostgreSQL database. What I'm asking is when I have a table where `A`, `B` and `C` are the column names and the other numbers beyond are the values:
```
A | B | C
---------
0 | 1 | 2
1 | 2 | 3
2 | 1 | 3
0 | 3 | 2
```
And my search requires that the row must contain `1` and `3` the valid rows would be row 2 and row 3 since they have both the values. Is there an easier way than to create a huge nested query with lots of `IN` operators?
|
Just use two `in` statements:
```
select *
from table t
where 1 in (A, B, C) and 3 in (A, B, C);
```
|
@GordonLindoff's answer works for testing equality. If you have to use LIKE, however, you'll have to use regular logic:
```
WHERE ((A LIKE '%1%' AND (B LIKE '%3%' OR C LIKE '%3%'))
OR (B LIKE '%1%' AND (A LIKE '%3%' OR C LIKE '%3%'))
OR (C LIKE '%1%' AND (A LIKE '%3%' OR B LIKE '%3%')))
```
Bear in mind that this will probably be pretty inefficient, given how LIKE functions.
|
SQL Create Statement for Multiple Columns Containing all Given Values
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I have a business problem that I formulated to the following example for easier communication.
Say I have three tables `Employee`, `Project`, `EmpWorkProj`. `EmpWorkProj` is used to link employees and the projects they worked on (or link table between `Employee` and `Project`). This is an example of the table data:
Table: `Employee`
```
EmployeeID EmpName
1 Alex
2 Pete
3 Mike
```
Table: `Project`
```
ProjectID ProjectCity
11 NY
22 LA
33 NY
44 LA
```
Table: `EmpWorkProj`
```
EmployeeID ProjectID
1 11
1 33
1 22
2 11
3 33
3 44
```
What I want to return is the employee who works on all projects with city `'NY'`. In this example I want to return `Alex` because he is the only employee who worked on project id `11` and `33`.
I would really appreciate a solution that uses standard sql (can't deploy recursive CTEs, dynamic sql or cursors). Thanks.
|
You can solve this using `group by` and `having`. For each employee, count the number of projects they are in with NY as the city. Then, in the `having` clause, see if that is all the projects:
```
select ewp.employeeid
from EmpWorkProj ewp join
Project p
on ewp.projectid = p.projectid
where p.projectcity = 'NY'
group by ewp.employeeid
having count(*) = (select count(*) from project where projectcity = 'NY')
```
|
This also works:
```
declare @e table(EmployeeID int, EmpName varchar(10))
declare @p table(ProjectID int, ProjectCity varchar(10))
declare @ep table(EmployeeID int, ProjectID int)
insert into @e values
(1 , 'Alex'),
(2 , 'Pete'),
(3 , 'Mike')
insert into @p values
(11, 'NY'),
(22, 'LA'),
(33, 'NY'),
(44, 'LA')
insert into @ep values
(1, 11),
(1, 33),
(1, 22),
(2, 11),
(3, 33),
(3, 44)
;with cte as(select ProjectID from @p where ProjectCity = 'NY')
select ep.EmployeeID
from @ep ep
outer apply(select * from cte except select ProjectID from @ep e where e.EmployeeID = ep.EmployeeID) o
group by ep.EmployeeID
having MAX(o.ProjectID) is null
```
|
Get records that match all corresponding records on conditional join
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
""
] |
## Question:
For each month in 2014, display the total profit (i.e. using the attributes purchasedPrice and agreedPrice) generated from car sales only.
Tables:
## Car
```
(VIN, dateAcquired, yearBuilt, purchasedPrice, askingPrice, currentMileage, modelNo)
```
## Sales Transaction
```
(VIN, custID, agentID, dateOfSale, agreedPrice)
```
## My incorrect code:
```
select c.dateacquired, st.agreedprice - c.purchasedprice as "Total Profit"
from salestransaction st, car c
where c.VIN = st.VIN
and extract(year from c.dateacquired) = '2014'
group by extract(month from c.dateacquired), st.agreedprice - c.purchasedprice;
```
Could someone please tell me what I'm doing wrong?
I am getting the following error:
> not a GROUP BY expression at line 01
|
Some issues:
* The profit is made when you sell the car (not when it is aquired).
* You need to `SUM` all the profits to get the total profit.
* When using `GROUP BY`, you need to `SELECT` either the exact statements in the `GROUP BY` or use aggregate functions.
Like this:
```
SELECT TRUNC( s.dateOfSale, 'MM' ) AS month_aquired,
SUM( st.agreedprice - c.purchasedprice ) as "Total Profit"
FROM salestransaction st,
car c
WHERE c.VIN = st.VIN
AND EXTRACT( year FROM s.dateOfSale ) = 2014
GROUP BY
TRUNC( s.dateOfSale, 'MM' );
```
or you can replace `TRUNC( s.dateOfSale, 'MM' )` with `EXTRACT( month FROM s.dateOfSale )` at the top and bottom.
|
Columns in `SELECT` list which are not aggregate functions should match your group by expression accurately.
This means `c.dateacquired` does not equal `extract(month from c.dateacquired)`, thus giving you the error.
You would like to group by months, yet you try to select the exact, not truncated `date`.
Correct code assuming that you wish to select the month:
```
select extract(month from c.dateacquired), st.agreedprice - c.purchasedprice as "Total Profit"
from salestransaction st, car c
where c.VIN = st.VIN
and extract(year from c.dateacquired) = '2014'
group by extract(month from c.dateacquired), st.agreedprice - c.purchasedprice;
```
This is just a fix to your query, but I'm thinking the result would still not be what you're looking for. You probably wanted to use aggregate function `sum()` on that total.
|
Oracle find total profit for each month in a certain year
|
[
"",
"sql",
"oracle",
""
] |
Essentially, I have an `orders` table and each day I'm running a cronjob to check the logs and see which customers ordered 11 months ago, because I need to warn them that their warranty expires in 1 month. What's the best way this query is accomplished?
I could do it a few ways, but I'm interested in seeing opinions of the best way.
|
```
SELECT *
FROM orders
WHERE order_date = CURDATE() - INTERVAL 11 MONTH
;
```
This will find all rows with order\_date value exactly 11 months old.
|
I'd use [`date_add`](https://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-add) with a month `interval` and compare it to `now()`:
```
SELECT *
FROM orders
WHERE DATE_ADD (order_date, INTERVAL 11 MONTH) <= NOW()
```
|
Query MySQL Datetime Field For Any Records Made 11 Months Ago
|
[
"",
"mysql",
"sql",
"datetime",
"select",
""
] |
I have an Excel sheet that looks as given below:
```
+---------+----------------+
| ItemNum | Substitutes |
+---------+----------------+
| ABCD | XXXX/YYYY/ZZZZ |
| PQRS | AAAA/BBBB/CCCC |
+---------+----------------+
```
I need to load this into a table in MS Access or SQL Server in the following way:
```
+---------+------------+
| ItemNum | Substitute |
+---------+------------+
| ABCD | XXXX |
| ABCD | YYYY |
| ABCD | ZZZZ |
| PQRS | AAAA |
| PQRS | BBBB |
| PQRS | CCCC |
+---------+------------+
```
Kindly advise on a way to do this. I know there is a way to do this using Excel VBA. But I am looking for options that are macro - free wherein the file as is can be loaded into a staging table and then using SQL it can be obtained in the desired form.
|
Import the data as is, then you can use this query to split them down:
```
SELECT T1.ItemNum, T2.mySplits as Substitute
FROM
(
SELECT *,
CAST('<X>'+replace(T.Substitutes,'/','</X><X>')+'</X>' as XML) as my_Xml
FROM Table1 T
) T1
CROSS APPLY
(
SELECT my_Data.D.value('.','varchar(50)') as mySplits
FROM T1.my_Xml.nodes('X') as my_Data(D)
) T2
```
Here's an Sql Fiddle: <http://sqlfiddle.com/#!6/042da/2>
|
You should take a staging table approach. Because you have this tagged for both SQL Server and Access, here are two options.
If you *know* the codes are always the same length, then put the data in a staging table and extract with something like:
```
select ItemNum, substring(substitutes, 1, 4) as substitute
from staging
where substitutes is not null
union all
select ItemNum, substring(substitutes, 6, 4) as substitute
from staging
where substitutes like '%/%'
union all
select ItemNum, substring(substitutes, 10, 4) as substitute
from staging
where substitutes like '%/%/%'
```
and so on. This is intentionally written in a way that can be easily modified to run in MS Access.
The second choice is similar, but does the pivoting in Excel. Use *Data --> Text To Columns* to split the data into separate columns. Then, load them into a table with columns like `substitute1`, and so on. You then want a query like:
```
select ItemNum, substitute1 as substitute
from staging
where substitute1 is not null
union all
select ItemNum, substitute2 as substitute
from staging
where substitute2 is not null
union all
select ItemNum, substitute3 as substitute
from staging
where substitute3 is not null
```
Once again, this query is intentionally written compatible with SQL Server and MS Access.
|
How to load a column with multiple values into separate lines into a table
|
[
"",
"sql",
"sql-server",
"excel",
"ms-access",
""
] |
The question may be very simple but i don't know how to fix it,
I have this table structure
```
sno left Right
1 2 1
2 2 2
3 1 2
4 3 1
5 2 4
6 7 1
7 2 8
```
How do I get a result set like the one below
```
sno left Right Result
1 2 1 1
2 2 2 2
3 1 2 1
4 3 1 1
5 2 4 2
6 7 1 1
7 2 8 2
```
I wanna select the Data what mimimum value is matched between two columns,
Eg:`3 and 1`
1 is minimum value between these two and 1 is matched with 3, so the matched value is 1.
eg: `2 and 4`
2 is minimum value between these two and 2 is is mathed with 4, so the matched value is 2.
Edited:
If choose 8 and 2 for example
8 contains(**1,2**,3,4,5,6,7,8)
2 contains(**1,2**)
So the Result is **2**
Because 2 values are matched here.
I hope i explained it well, thanks
|
One of the possible solutions:
```
DECLARE @t TABLE ( sno INT, l INT, r INT )
INSERT INTO @t
VALUES ( 1, 2, 1 ),
( 2, 2, 2 ),
( 3, 1, 2 ),
( 4, 3, 1 ),
( 5, 2, 4 ),
( 6, 7, 1 ),
( 7, 2, 8 )
SELECT *,
(SELECT MIN(v) FROM (VALUES(l),(r)) m(v)) AS m
FROM @t
```
Output:
```
sno l r m
1 2 1 1
2 2 2 2
3 1 2 1
4 3 1 1
5 2 4 2
6 7 1 1
7 2 8 2
```
|
The following SQL will return the positive value of a subtraction operation between the left and right values - in a column with Result as the header. It will calculate the difference between left and right values - ABS will make the result positive.
```
SELECT
sno,
left,
Right,
ABS(left - right) AS Result
FROM tablename
```
|
Select Query on Sql Server
|
[
"",
"sql",
"sql-server-2008",
""
] |
I am running a SQL query with a while constraint containing some "id's".
For example:
```
SELECT table.id
FROM TableOne table
WHERE table.id IN (1, 2, 3, 4)
AND some conditions
```
The query might return [1, 2] granted that some conditions are met. How would I go about returning [3, 4], that is, all the id's that have not met the conditions but are still part of the list in which I want to include?
Thanks
|
If this is how you get result of ID you want to include and meet condition
```
SELECT table.id
FROM TableOne table
WHERE table.id IN (1, 2, 3, 4)
AND some conditions
```
Then, this is how to get result of ID you want to include but not meet condition
```
SELECT table.id
FROM TableOne table
WHERE table.id IN (1, 2, 3, 4)
AND NOT (some conditions)
```
|
You can create a table with list of your ids and then do a `EXCEPT` from your query.
```
SELECT ID FROM
(
SELECT 1 ID
UNION ALL SELECT 2
UNION ALL SELECT 3
UNION ALL SELECT 4
)List
EXCEPT
SELECT table.id
FROM TableOne table
WHERE table.id IN (1, 2, 3, 4)
AND some conditions
```
***Assumption:*** If your table does not contain the ID listed in the query, the query below will still return that id.
|
SQL - remainder / not belonging
|
[
"",
"mysql",
"sql",
"sql-server",
"database",
"where-clause",
""
] |
My problem is auto populating Table.
I have table with 1000 record in it, but for testing purpose, i need to insert more data.
**ID | PersonID | Date | Time | Sum | TypeID | PlaceID | StatusID**
So i need to populate the database with 10000 records where the date is between 1/3/2015 and 1/5/2015, Time is Random, SUM Between 100 and 1000, TypeID between 1 and 2, PlaceID between 1-10, StatusID between 1-3
I would a appreciate any kind of help or suggestion.
Thanks in advance.
|
You need a small t-sql to do it:
```
--CREATE TABLE TEST (CID INT, PERSONID INT, TEST_DATE DATE, TEST_TIME TIME, TEST_SUM INT, TYPEID INT, PLACEID INT, STATUSID INT);
--TRUNCATE TABLE TEST;
SET NOCOUNT ON;
DECLARE @X INT, @PERSONID INT, @DATE DATE, @TIME TIME, @SUM INT, @TYPEID INT, @PLACEID INT, @STATUSID INT,@R INT;
SELECT @X=0;
WHILE @X < 10000 BEGIN
SELECT @X=@X +1;
SELECT @DATE = DATEADD(DAY, @X / 4000, '2015-1-3');
SELECT @R=CONVERT(INT, RAND() * 3600 * 24);
SELECT @TIME = DATEADD(SECOND, @R , '00:00:01');
SELECT @SUM = 100 + @R % 900;
SELECT @TYPEID = @R % 2 + 1 ;
SELECT @PLACEID = @R % 10 +1 ;
SELECT @STATUSID = @R % 3 +1 ;
SELECT @PERSONID = @R % 500 +1 ;
INSERT INTO TEST (CID, PERSONID, TEST_DATE, TEST_TIME, TEST_SUM, TYPEID, PLACEID, STATUSID)
VALUES(@X, @PERSONID, @DATE, @TIME, @SUM, @TYPEID, @PLACEID, @STATUSID);
END;
SET NOCOUNT OFF;
```
Also, please try not to use column names like "ID","Date","Time" and etc which have special meanings in SQL Server.
|
Here is some brutal solution but completely randomized:
```
with rows as(select row_number() over(order by(select null)) as dummy from
(values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t1(n)
cross join (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t2(n)
cross join (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t3(n)
cross join (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t4(n))
select *,
cast(dateadd(ms, cast(cast(newid() as varbinary(30)) as int), getdate()) as time) as time
from rows r
cross apply(select top 1 p as place
from (values(1),(2),(3),(4),(5),(6),(7),(8),(9),(10))p(p)
where r.dummy = r.dummy order by newid()) cap
cross apply(select top 1 s as status
from (values(1),(2),(3))s(s)
where r.dummy = r.dummy order by newid()) cas
cross apply(select top 1 t as time
from (values(1),(2))t(t)
where r.dummy = r.dummy order by newid()) cat
cross apply(select top 1 sum from(select 100 + row_number() over(order by(select null)) as sum
from (values(1),(1),(1),(1),(1),(1),(1),(1),(1))t1(n)
cross join (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t2(n)
cross join (values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1))t3(n)) t
where r.dummy = r.dummy order by newid()) casu
cross apply(select top 1 dateadd(dd, s -1, '20150103') as date
from (values(1),(2),(3))s(s)
where r.dummy = r.dummy order by newid()) cad
```
Fiddle <http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/892>
|
Auto Populate Table With Date Range SQL
|
[
"",
"sql",
"sql-server",
""
] |
So, the problem is that report on SSRS is executing immediately after opening. I use query based default parameters.. And i saw the solution with adding additional parameter without default value. It doesn't work for me because of the textbox which cannot be hidden (i tried to hide it and report stop working).
So is there a way to hide this additional parameter? or maybe another way to solve this issue?
|
The problem is happening because you are setting a required parameter as nullable or you are giving it a default value that is invalid. To fix the problem, remove the default values. When you go to the report it will not be able to run until you give it the required value(s).
|
There can be two solutions to it.
1. Set the default of the parameter in question to such a value that would absolutely have no matches in dataset. Say for example, the parameter is Location. Give the defalut value as "Mars". (Unless of course you build software of extra terrestrial beings). This way the report will execute pretty fast, without any errors.
2. Set the default value of the parameter to be `NULL`. Add a dataset filter like below:
```
=IIF(ISNOTHING(Parameters!Location.Value), TRUE, FALSE)
```

Using `IsNothing` function you can ask dataset to return rows only when the parameter has values.
Let me know if either approach works out.
|
SSRS how to show report without executing it immediately
|
[
"",
"sql",
"reporting-services",
"rdl",
""
] |
I have two tables category and adverts, i need to select all categories and the number of adverts it has in the adverts table that has at least count greater than zero
the category table
```
cat_id | Name
----------------
1 | Toys
2 | fashion
3 | electronics
```
The adverts table
```
cat_id | title
----------------
1 | a
2 | b
2 | c
1 | d
2 | e
```
what i expect
```
cat_id | count | Name
-----------------------
1 |2 | a
2 |3 | b
```
The query i tried
```
Select
c.name, c.cat_id,c.parent_id, @count:= (Select Count(av.cat_id) From adsview av Where av.cat_id = c.cat_id)
from
category c WHERE @count > 0
```
i am getting and empty result, what am i doing wrong?
|
If you want to make sure that the `cat_id` from `category` table are in `adverts` table you need to join as
```
select
c.cat_id,
c.Name,
count(a.cat_id) as `count`
from category c
join adverts a on a.cat_id = c.cat_id
group by c.cat_id ;
```
|
```
select cat_id, count(*)
from adverts
group by cat_id;
```
So the mySQL query engine will grab every single row from the adverts table, it'll put them into neat piles where all rows in the pile have the same category, it'll count the number of rows in each pile, and then it'll return to you a result row for each pile with the pile's id and the number of rows.
Now lets add something: we want to also get the category's name. So we indicate that in the select clause, and add a join to the from clause. The join says "for every row in table a, consider it alongside every row in table b. if some condition holds, put this combined row into the from set". You can see that joins are actually quite slow in SQL (relatively).
```
select c.cat_id, count(*) as count, c.name
from adverts as a join categories as c on a.cat_id = c.cat_id
group by c.cat_id;
```
Note also that I've aliased the tables as a and c respectively, so as to remove the ambiguity over the column name cat\_id (otherwise the mySQL query engine may get confused).
|
select count items each group
|
[
"",
"mysql",
"sql",
""
] |
I am trying to rename a column in `MySQL` community server `5.5.27` using this SQL expression:
```
ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;
```
I also tried
```
ALTER TABLE table_name RENAME old_col_name TO new_col_name;
```
But it says:
> Error: check the Manual that corresponds to your MySQL server version
|
Use this query:
```
ALTER TABLE tableName CHANGE oldcolname newcolname datatype(length);
```
The `RENAME` function is used in `Oracle` databases.
```
ALTER TABLE tableName RENAME COLUMN oldcolname TO newcolname datatype(length);
```
---
[@lad2025](https://stackoverflow.com/a/49925232/4762528) mentions it below, but I thought it'd be nice to add what he said. Thank you @lad2025!
You can use the `RENAME COLUMN` in MySQL 8.0 to rename any column you need renamed.
```
ALTER TABLE table_name RENAME COLUMN old_col_name TO new_col_name;
```
[ALTER TABLE Syntax](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html):
> **RENAME COLUMN:**
>
> * Can change a column name but not its definition.
> * More convenient than CHANGE to rename a column without changing its definition.
|
In Server version: 5.6.34 MySQL Community Server
```
ALTER TABLE table_name
CHANGE COLUMN old_column_name new_column_name data_type;
```
|
Rename a column in MySQL
|
[
"",
"sql",
"mysql",
"rename",
"alter",
""
] |
I'm currently trying to answer the following question:
Display the name of the customer who has purchased the most cars from Archie’s Luxury Motors.
Tables I'm working with:
**Customer**
```
(custID, name, DOB, streetAddress, suburb, postcode,
gender, phoneNo, email, type)
```
**SalesTransaction**
```
(VIN, custID, agentID, dateOfSale, agreedPrice)
```
I have tried the following query:
```
select customer.name
from customer, salestransaction
where customer.custid = salestransaction.custid
group by (salestransaction.custid)
having count(salestransaction.custid) = max(salestransaction.custid);
```
I receive the following error:
```
ORA-00979: not a GROUP BY expression
```
Please tell me what I'm doing wrong.
|
Probably this should work:
```
select * from (
select customer.name
from customer, salestransaction
where customer.custID = salestransaction.custID
group by (salestransaction.custID), customer.name
order by count(*) desc
) where rownum=1
```
|
Easiest way utilizes a RANK:
```
select customer.name, st.cnt
from customer
join
(
select
custid,
count(*) as cnt,
rank() over (order by count(*) desc) as rnk
from salestransaction
group by custid
) st
on customer.custid = st.custid
where st.rnk = 1;
```
|
Oracle display customer who purchased most cars
|
[
"",
"sql",
"oracle",
"group-by",
""
] |
I have locations in longitude and latitude coordinates. My goal is eventually to be able to select all rows from myTable Where distance is less than 2km.
1. How can one use the longitude and latitude to store location within a geography column?(because it's supposed to be only one geographic point not two right? not one for longitude and one for latitude?)
2. Now that I've got the geography points, how can i select all the rows within a specific distance(in my case 2km)?
|
> How can i use the longitute and latitute to store location within a geography column?(because it's supposed to be only one geographic point not two right? not one for longitute and one for latitute?)
You can use [`geography::STPointFromText`](https://msdn.microsoft.com/en-us/library/bb933979.aspx) / [`geography::Point`](https://msdn.microsoft.com/en-us/library/bb933811.aspx) to store longitude and latitude in a geography datatype.
```
SELECT geography::STPointFromText('POINT(' + CAST([Longitude] AS VARCHAR(20)) + ' ' + CAST([Latitude] AS VARCHAR(20)) + ')', 4326)
```
or
```
SELECT geography::Point(Latitude, Longitude , 4326)
```
Reference Link:
[Update Geography column in table](https://stackoverflow.com/questions/29938202/update-geography-column-in-table/29938384#29938384)
> Now that I've got the geography points, how can i select all the rows within a specific distance(in my case 2km)?
You can use [`STDistance`](https://msdn.microsoft.com/en-in/library/bb933808.aspx) like this.
```
DECLARE @g geography;
DECLARE @h geography;
SET @g = geography::STGeomFromText('POINT(-122.35900 47.65129)', 4326);
SET @h = geography::STGeomFromText('POINT(-122.34720 47.65100)', 4326);
SELECT @g.STDistance(@h);
```
Reference Link:
[Distance between two points using Geography datatype in sqlserver 2008?](https://stackoverflow.com/questions/8907873/distance-between-two-points-using-geography-datatype-in-sqlserver-2008)
**Insert Query**
```
DECLARE @GeoTable TABLE
(
id int identity(1,1),
location geography
)
--Using geography::STGeomFromText
INSERT INTO @GeoTable
SELECT geography::STGeomFromText('POINT(-122.35900 47.65129)', 4326)
--Using geography::Point
INSERT INTO @GeoTable
SELECT geography::Point(47.65100,-122.34720, 4326);
```
**Get Distance Query**
```
DECLARE @DistanceFromPoint geography
SET @DistanceFromPoint = geography::STGeomFromText('POINT(-122.34150 47.65234)', 4326);
SELECT id,location.Lat Lat,location.Long Long,location.STDistance(@DistanceFromPoint) Distance
FROM @GeoTable;
```
|
You can convert lat and long to a point and save it in table.
```
Declare @geo Geography,
@Lat varchar(10),
@long varchar(10)
SET @Lat = '34.738925'
SET @Long = '-92.39764'
SET @geo= geography::Point(@LAT, @LONG, 4326)
```
|
How to store longitude & latitude as a geography in sql server 2014?
|
[
"",
"sql",
"sql-server-2014",
""
] |
I need a solution for the scenario below:
```
TABLE A:
col1
1
-1
6
-5
2
4
-2
5
```
I want the OUTPUT as:
```
POSITIVE NEGATIVE
1 -1
2 -2
4 NULL
5 -5
6 NULL
```
|
You can try something like:
```
SELECT t1.col1 AS POSITIVE, t2.col1 AS NEGATIVE
FROM (
SELECT col1
from tableA
WHERE col1 > 0 ) t1
FULL OUTER JOIN (
SELECT col1
FROM tableA
WHERE col1 < 0 ) t2 ON t1.col1 = ABS(t2.col1)
ORDER BY ABS(COALESCE(t1.col1, t2.col1))
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/a8343/1)
|
This correlated subquery does what you want using [`ABS`](https://technet.microsoft.com/en-us/library/ms189800%28v=sql.110%29.aspx):
```
SELECT POSITIVE = a1.col1,
NEGATIVE = (SELECT TOP 1 a2.col1
FROM TableA a2
WHERE a2.col1 < 0
AND ABS(a2.col1) = a1.col1)
FROM TableA a1
WHERE a1.col1 >= 0
ORDER BY a1.Col1
```
`Sql-Fiddle`
I find it easier to read as a `FULL OUTER` self `JOIN`.
|
Need to get the output as positive and negative numbers from one column into different columns from a single table
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a large database with a lot of tables and I want to know if I can select the table that have less columns of them. How can I do that? Is there some info in the `INFORMATION_SCHEMA`?
|
I feel like the other answers gave you plenty of info to help you answer your own questions with a little effort, but you don't seem satisfied, so maybe this helps.
If this isn't what you're looking for, you really need to edit your question so we can understand exactly what type of result you want.
This returns a single record for the table with the least number of columns. You can filter this down further by schema or whatever to fit your needs.
```
SELECT TOP 1
tbl.TABLE_CATALOG,
tbl.TABLE_SCHEMA,
tbl.TABLE_NAME,
COUNT(col.ordinal_position)
FROM INFORMATION_SCHEMA.TABLES tbl
JOIN INFORMATION_SCHEMA.COLUMNS col
ON tbl.TABLE_CATALOG = col.TABLE_CATALOG
AND tbl.TABLE_NAME = col.TABLE_NAME
AND tbl.TABLE_SCHEMA = col.TABLE_SCHEMA
WHERE tbl.table_type = 'BASE TABLE'
GROUP BY tbl.TABLE_CATALOG, tbl.TABLE_SCHEMA, tbl.TABLE_NAME
ORDER BY COUNT(col.ORDINAL_POSITION)
```
But what if there's a bunch of tables with only a single column? What to return? You can use TSQL and find the count of columns for the table with the least number of columns, and then select all tables with that count of columns.
```
BEGIN
DECLARE @minColumnCount INT
--This will find the count of columns for the table with the least number of columns
SELECT TOP 1 @minColumnCount = COUNT(col.ordinal_position)
FROM INFORMATION_SCHEMA.TABLES tbl
JOIN INFORMATION_SCHEMA.COLUMNS col
ON tbl.TABLE_CATALOG = col.TABLE_CATALOG
AND tbl.TABLE_NAME = col.TABLE_NAME
AND tbl.TABLE_SCHEMA = col.TABLE_SCHEMA
WHERE tbl.table_type = 'BASE TABLE'
GROUP BY tbl.TABLE_CATALOG, tbl.TABLE_SCHEMA, tbl.TABLE_NAME
ORDER BY COUNT(col.ORDINAL_POSITION)
--This will give you a list of tables that have exactly @minColumnCount columns
SELECT tbl.TABLE_CATALOG, tbl.TABLE_SCHEMA, tbl.TABLE_NAME, COUNT(col.ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.TABLES tbl
JOIN INFORMATION_SCHEMA.COLUMNS col
ON tbl.TABLE_CATALOG = col.TABLE_CATALOG
AND tbl.TABLE_NAME = col.TABLE_NAME
AND tbl.TABLE_SCHEMA = col.TABLE_SCHEMA
WHERE tbl.table_type = 'BASE TABLE'
GROUP BY tbl.TABLE_CATALOG, tbl.TABLE_SCHEMA, tbl.TABLE_NAME
HAVING COUNT(col.ORDINAL_POSITION) = @minColumnCount
END
```
|
This will give you the list of tables and the count of columns for each table
```
SELECT TABLE_NAME,
COUNT(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS col INNER JOIN
sys.objects obj
ON obj.name = col.TABLE_NAME
WHERE obj.type = N'U'
GROUP BY col.TABLE_NAME
```
|
How to select the table with less columns in a database with Sql Server
|
[
"",
"sql",
"sql-server",
""
] |
I need to generate a table with the date values that look like this:
```
01/31/2015
02/28/2015
03/31/2015
04/30/2015
....
12/31/2015
```
Each value represents the last day of each month. What is the best approach to accomplish this?
|
@sDate is the starting date, @eDate is the ending date.
```
declare @sDate datetime,
@eDate datetime
select @sDate = '2013-02-25',
@eDate = '2013-04-25'
;with cte as (
select convert(date,left(convert(varchar,@sdate,112),6) + '01') startDate,
month(@sdate) n
union all
select dateadd(month,n,convert(date,convert(varchar,year(@sdate)) + '0101')) startDate,
(n+1) n
from cte
where n < month(@sdate) + datediff(month,@sdate,@edate)
)
select dateadd(day,-1,dateadd(month,1,startdate)) as enddate
from cte
```
Results:
```
2013-02-28
2013-03-31
2013-04-30
```
|
```
SELECT MonthEnd = DATEADD(ms, -3, DATEADD(m, DATEDIFF(m, 0, GETDATE()) + 1 + A.N, 0))
FROM (VALUES -- Feel free to replace this with some sort of numbers table
(0), (1), (2), (3), (4), (5)
) A (N)
```
|
How to generate a table, where each row represents the last day of each month?
|
[
"",
"sql",
"sql-server-2008",
""
] |
I needed guidance and help with a question I have never done in SQL SERVER.
I have this data:
```
Cust_Nr Date FilmName
157649306 20150430 Inside Llewyn Davis
158470722 20150504 Nick Cave: 20,000 Days On Earth
158467945 20150504 Out Of The Furnace
158470531 20150504 FilmA
157649306 20150510 Gladiator
158470722 20150515 Gladiator
```
The customer number :`158470722` has bought two movies,1`):Inside Llewyn Davis` and 2) `Gladiator` on different dates. What I am trying to do in SQL SERVER is to this:
```
Cust_Nr Date FilmName Before Gladiator Purchase
157649306 20150430 Inside Llewyn Davis 1
158470722 20150504 Nick Cave: 20,000 Days On Earth 1
158467945 20150504 Out Of The Furnace 0
158470531 20150504 FilmA 0
157649306 20150510 Gladiator 0
158470722 20150515 Gladiator 0
```
If we look at the dates for the customer nr:`157649306`, he purchased a movie on a date ,20150430, and purchased Gladiator on the 20150510. How can I create a column like the one above to show if a customer has purchased a movie on a date earlier than the gladiator date?
|
You can do this with conditional aggregation and window/analytic functionality:
```
SELECT *,CASE WHEN [Date] < MIN(CASE WHEN FilmName = 'Gladiator'
THEN [Date]
END) OVER(PARTITION BY Cust_Nr)
THEN 1
ELSE 0
END AS Before_Gladiator
FROM Table1
```
Demo: [SQL Fiddle](http://sqlfiddle.com/#!6/57b50f/3/0)
|
You need agregation and details in the same query, so use a Windowed Aggregate Function:
```
case
when Date < min(case when FilmName = 'Gladiator' then Date end)
over (partition by Cust_Nr)
then 1
else 0
end
```
|
purchased a movie earlier than Gladiator date SQL SERVER
|
[
"",
"sql",
"select",
"sql-server-2012",
"row",
"rank",
""
] |
i have a table with these value, and here is a small sample
```
order state item id
10064315 ON MCM1L162L116
10064315 ON MCM1R162R116
10064315 ON SHIPPING
10064316 MS 00801-1778
10064316 MS SHIPPING
10064317 AZ CHM110439-1
10064317 AZ SHIPPING
10064318 TX 2607
10064318 TX SHIPPING
10064319 MO CHG8080
10064319 MO SHIPPING
10064322 CA W10001130
```
I want to write a sql query that only list on the order number that without SHIPPING, in this sample, the only one without SHIPPING would be 10064322.
I try to find in here but didn't find what I am looking for.
Thank for the help
|
```
SELECT DISTINCT [order]
FROM MYTABLE
WHERE [order] not in (SELECT [order]
FROM MYTABLE
WHERE [item id] = 'SHIPPING')
```
EDIT:
According to Aaron Bertrand's article (hat tip @TTeeple in the comments) the best performance for this problem (needing the distinct) is done as follows:
```
SELECT [order]
FROM MYTABLE
EXCEPT
SELECT [order]
FROM MYTABLE
WHERE [item id] = 'SHIPPING'
```
For the full article -> <http://sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join>
|
One option is to use `not exists`:
```
select ordernumber
from yourtable y
where not exists (
select 1
from yourtable y2
where y.ordernumber = y2.ordernumber and y2.itemid = 'SHIPPING'
)
```
|
SQL exclude all rows that with have certain value?
|
[
"",
"sql",
"sql-server",
""
] |
Consider the relation account (customer, balance) where customer is a primary key and there are no null values.
I would like to rank customers according to decreasing balance. (The customer with the largest balance gets rank 1. Ties are not broke but ranks are skipped: if exactly two customers have the largest balance they each get rank 1 and rank 2 is not assigned.)
Why does the following query never prints the customer having rank 1?
```
select A.customer, 1+count(B.customer)
from account A, account B
where A.balance < B.balance
group by A.customer
```
[SQLfiddle Link](http://sqlfiddle.com/#!9/e7e62/2)
|
Let's try this [Example](http://sqlfiddle.com/#!9/e7e62/23)
**Ranking without skipping a rank**
```
set @number:=0;
set @balance:=0;
select customer, balance, rank
from (
select
*,
@number:=if(@balance=balance, @number, @number+1) as rank,
@balance:=balance
from account
order by balance
) as rankeddata;
```
**Result**
```
customer balance rank
S 300 1
Q 400 2
R 400 2
P 500 3
```
To show ranking from 500 -> 300, just change the `ORDER BY balance` to `ORDER BY balance DESC`
**Skip rank if multiple rows have same rank**
If you prefer to skip an assigned rank, you can tweak the query a little bit [SQL Fiddle](http://sqlfiddle.com/#!9/0e076/1).
```
set @number:=0;
set @balance:=0;
set @rank_reused:=0;
select customer, balance, rank
from (
select
*,
@number:=if(@balance=balance, @number, @number+1+@rank_reused) as rank,
@rank_reused:=if(@balance=balance, @rank_reused+1, 0) as reused,
@balance:=balance
from account
order by balance desc
) as rankeddata;
```
**Result**
```
customer balance rank
S 300 1
Q 400 2
R 400 2
P 500 4
```
|
This has nothing to do with the count.
You are doing a join between A and B and show only the records where A.balance < B.balance since there is no such record for your top rank customer (by definition there is no account with higher balance) you don't get any record.
This should do the trick
```
select A.customer, (
select count(*) + 1
from account B
where A.balance < B.balance
) from account A
```
|
Why is count(column_name) behaving like this?
|
[
"",
"mysql",
"sql",
"database",
"group-by",
""
] |
I currently want to do some sort of conditional union. Given the following example:
```
SELECT age, name
FROM users
UNION
SELECT 25 AS age, 'Betty' AS name
```
Say I wanted to only union the second statement if the count of 'users' was >=2 , otherwise do not union the two.
In summary I want to append a table with a row if the table only has 2 or more values.
|
You could use an ugly hack something like this, but I think [Tim's answer](https://stackoverflow.com/a/30327097/3094533) is better:
```
SELECT age, name
FROM users
UNION ALL
SELECT 25, 'Betty'
WHERE (SELECT COUNT(*) FROM users) > 1;
```
|
If it's in a stored-procedure you could use `If...Else`:
```
IF (SELECT COUNT(*) FROM users) < 2
BEGIN
SELECT age, name
FROM users
END
ELSE
SELECT age, name
FROM users
UNION ALL
SELECT 25 AS age, 'Betty' AS name
```
Otherwise you could try something like this:
```
SELECT age, name
FROM users
UNION ALL
SELECT TOP 1 25 AS age, 'Betty' AS name
FROM users
WHERE (SELECT COUNT(*) FROM users) >= 2
```
Note that i've used `UNION ALL` since it doesn't seem that you want to eliminate duplicates.
Played around here: <http://sqlfiddle.com/#!6/a7540/2323/0>
**Edit**: Instead of my second approach i prefer [Zohar's](https://stackoverflow.com/a/30327171/284240). So if you can use `If....Else` prefer that otherwise `WHERE (SELECT COUNT(*) FROM users) > 1` without a table.
|
Case statement to determine if I should union
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
"case",
"union",
""
] |
I have two tables
```
Table 1 Table 2
|leadid|Location| |leadid|leadstatus|
|---------------| |-----------------|
|1 |Japan | |1 | Hired |
|2 |China | |2 | Failed |
|3 |Korea | |3 | Hired |
|4 |Japan | |4 | Hired |
|5 |Japan | |5 | Hired |
```
My objective is to count the number of interviews per country and also count the number of hires and failed in each country. The resulting table should be like this
```
|Location|Interview|Hired|Failed|
|-------------------------------|
|Japan | 3 |3 |0 |
|Korea | 1 |1 |0 |
|China | 1 |0 |1 |
```
I have already done the counting of interviews per country. MY problem is I cannot count the number of hires and failed in each country.
Here is my MySQL code as of the moment:
```
SELECT Location, count(*) as Interview
FROM table1
GROUP BY Location
ORDER BY Interview DESC
```
|
This should work for you:
```
SELECT Location, COUNT(*) as Interview,
SUM(CASE WHEN leadstatus = 'Hired' THEN 1 ELSE 0 END) as Hired,
SUM(CASE WHEN leadstatus = 'Failed' THEN 1 ELSE 0 END) as Failed
FROM table1
LEFT JOIN table2 ON table1.leadid = table2.leadid
GROUP BY Location
ORDER BY Interview DESC
```
[Here](http://sqlfiddle.com/#!9/49b07/8) is a working sqlfiddle.
EDIT 2019: This can be simplified without using case statements, as the conditional statement itself returns a 1 or a 0, so you can simply `SUM()` on that:
```
SELECT Location, COUNT(*) as Interview,
SUM(leadstatus = 'Hired') as Hired,
SUM(leadstatus = 'Failed') as Failed
FROM table1
LEFT JOIN table2 ON table1.leadid = table2.leadid
GROUP BY Location
ORDER BY Interview DESC
```
[Here](http://sqlfiddle.com/#!9/49b07/9) is the updated sqlfiddle.
|
You can use conditional sum along with ranking system using the user defined variable as
```
select
@rn:=@rn+1 as rank,
location,
interview,
hired,
failed
from(
select
t1.location,
count(*) as interview,
sum(t2.leadstatus='Hired') as hired,
sum(t2.leadstatus='Failed') as failed
from table1 t1
join table2 t2 on t1.leadid = t2.leadid
group by t1.location
order by interview desc
)x,(select @rn:=0)y
order by rank ;
```
|
SQL select total and split into success and failed
|
[
"",
"mysql",
"sql",
""
] |
So I want to select a range of rows in an Oracle DB. I need to do this because I have millions of rows in the table and I want to paginate results to the user (if you know another way to do this on the client side, I'm using JavaFX if it matters but I don't think it's a good idea to send all the datas over the network to paginate them on the client side).
So after reading this post: [SQL ROWNUM how to return rows between a specific range](https://stackoverflow.com/questions/4552769/sql-rownum-how-to-return-rows-between-a-specific-range), I have the following query:
```
Select * From (Select t.*, rownum r from PERSON t) Where r > 100 and r < 110;
```
The `100` and `110` are just example. In the application, I just ask for the lower bound and add a size of 10\_000 to fetch the next 10\_000 rows.
Now the rownum column appears in the result and I don't want to see it. As I'm not very experienced with SQL, here's my questions:
1. Why (this was my first attempt until I search on SO) `Select * From Person Where rownum > 100 and rownum < 110;` returns 0 rows ?
2. Why there is no simple way to do something like `Select ... FROM ... WHERE rownum BETWEEN lowerBound AND upperBound` ?
3. How to get rid of the `r` column in the resulting values? From there [SQL exclude a column using SELECT \* [except columnA] FROM tableA?](https://stackoverflow.com/questions/729197/sql-exclude-a-column-using-select-except-columna-from-tablea) I need apparently to create a view or a temporary table, but is there another way considering my query?
4. Does it ensure correct pagination? I read [this article](http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html) section "Pagination with ROWNUM", that said I should order the values by something unique to get a consistent pagination (so I guess ordering by `rownum` is fine, if you can confirm). Doesn't it defeat the purpose of using `FIRST_ROWS(N)`?
I hope it's not too much, I could split into separate questions, but I think it's relevant to have them collapsed as they are closely related.
|
You have 4 questions, and all revolve around the usage and functionality of **ROWNUM**. I will answer each question one-by-one.
> Why (this was my first attempt until I search on SO) Select \* From Person Where rownum > 100 and rownum < 110; returns 0 rows ?
Nice explanation by Thomas Kyte regarding ROWNUM and pagination [here](https://asktom.oracle.com/Misc/oramag/on-rownum-and-limiting-results.html).
A **ROWNUM** value is assigned to a row after it passes the **predicate phase** of the query but before the query does any sorting or aggregation. Also, a ROWNUM value is incremented only after it is assigned, which is why the following query will never return a row:
```
select *
from t
where ROWNUM > 1;
```
Because ROWNUM > 1 is not true for the first row, ROWNUM does not advance to 2. Hence, no ROWNUM value ever gets to be greater than 1.
> Why there is no simple way to do something like Select ... FROM ... WHERE rownum BETWEEN lowerBound AND upperBound ?
Yes, there is. From **Oracle 12c** onwards, you could use the new **Top-n Row limiting** feature. [See my answer here](https://stackoverflow.com/a/29411029/3989608).
For example, the below query would return the employees between **4th highest** till **7th highest salaries** in ascending order:
```
SQL> SELECT empno, sal
2 FROM emp
3 ORDER BY sal
4 OFFSET 4 ROWS FETCH NEXT 4 ROWS ONLY;
EMPNO SAL
---------- ----------
7654 1250
7934 1300
7844 1500
7499 1600
SQL>
```
> How to get rid of the r column in the resulting values?
Instead of `select *`, list the required column names in the outer query. For frequently using the query, creating a view is a simple one time activity.
Alternatively, in **`SQL*Plus`** you could use the **NOPRINT** command. It will not display the column name you don't want to display. However, it would only work in SQL\*Plus.
For example,
```
COLUMN column_name NOPRINT
```
For example,
```
SQL> desc dept
Name Null? Type
----------------------------------------- -------- ------------
DEPTNO NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
SQL> COLUMN dname NOPRINT
SQL> COLUMN LOC NOPRINT
SQL> SELECT * FROM dept;
DEPTNO
----------
10
20
30
40
SQL>
```
> Does it ensure correct pagination?
Yes, if you write the pagination query correctly.
For example,
```
SELECT val
FROM (SELECT val, rownum AS rnum
FROM (SELECT val
FROM t
ORDER BY val)
WHERE rownum <= 8)
WHERE rnum >= 5;
VAL
----------
3
3
4
4
4 rows selected.
SQL>
```
Or, use the new row limiting feature on 12c as I have shown above.
Few good examples [here](http://oracle-base.com/articles/12c/row-limiting-clause-for-top-n-queries-12cr1.php).
|
Answer to question 2: In Oracle 12 you can use pagination
```
select owner, object_name, object_id
from t
order by owner, object_name
OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
```
|
How ROWNUM works in pagination query?
|
[
"",
"sql",
"oracle",
"pagination",
"rownum",
""
] |
There are two types of results I want to return:
* Unread notifications
* Read notifications
If there are > 10 unread notifications available I want to select as many as there are
If there are <= 10, I want to select all (say there were 7) the unread notifications and 3 'filler' read notifications. How can I accomplish this?
If I wanted to just select all unread notifications my query would be:
```
SELECT * FROM notifications WHERE read = 0
```
If I wanted to just select all read notifications my query would be:
```
SELECT * FROM notifications WHERE read = 1
```
|
This should help you: <http://sqlfiddle.com/#!9/e7e2a/2>
```
SELECT * FROM
(
SELECT @rownum := @rownum + 1 AS rownum, name, read
FROM notifications,
(SELECT @rownum := 0) r --initialise @rownum to 0
) t
WHERE read = 0 OR (read = 1 AND rownum <= 10)
ORDER BY rownum
```
The records are numbered with @rownum. The where clause make sure the read=0 are selected first. If they are up to 10 or more, all are selected. But if not, the second criteria `(read = 1 AND rownum <= 10)` is checked.
`(SELECT @rownum := 0) r` initialises `@rownum` to 0 otherwise it would be `NULL` and `NULL+1=NULL`

|
What you can do is union the two results, order by most important, and then limit the Union:
```
SELECT Col1, Col2, ...
FROM
(
SELECT Col1, Col2, `read`, ... FROM notifications WHERE read = 0
UNION
SELECT Col1, Col2, `read`, ... FROM notifications WHERE read = 1
) x
ORDER BY x.`read` -- And probably a column like Time?
LIMIT 10;
```
[SqlFiddle here](http://sqlfiddle.com/#!9/494df/2)
**Edit, Re : Must return ALL Unread, not just first 10**
Apologies, I missed that part of the question. I can't think of an elegant way to achieve this, so ~~here's a ~partial solution which resorts to an imperative procedure and a temporary table to fill up the rows, if needed:~~ use codingbiz's solution until MySql supports Windowing functions (e.g. `ROW_NUMBER() OVER (PARTITION BY read ORDER BY Col1 DESC)`
|
SQL query to select minimum of 10 results
|
[
"",
"mysql",
"sql",
""
] |
I have a table ("table1") with 3 columns called col1, col2 and col3 (each is VARCHAR) with 4 values as shown below:
```
col1 col2 col3
datA1 datB1 datC1
datA2
```
I need an ability to add a data at any time into any column NOT affecting on the other ones. The very popular code in the Internet is that (say, we need to add data only to the columns col2 and col3):
```
INSERT INTO table1 (col2, col3)
VALUES ('datB2', 'datC2');
```
But it adds new rows, like below:
```
col1 col2 col3
datA1 datB1 datC1
datA2
NULL datB2 datC2
```
What I really need is to fill the row starting with value "datA2" at the column "col1" with new values and get the table like below:
```
col1 col2 col3
datA1 datB1 datC1
datA2 datB2 datC2
```
The table has 3 columns and each column responses for a particular type of values (say: name, colour, size). What I need is just a possibility to add new values at any time in a particular column and have them without Null and new rows if it has a free cell before.
|
I found the solution (a chain of logical operations):
1) `CHECK` if there is a cell (in the target column) with values either `""` or `NULL`.
2) `IF` it has one of those then rewrite the FIRST one keeping the values of the other cells in this row at their places (assumably we use `UPDATE`))) ).
3) `ELSE` just add a new row with all `NULL`s in the other cell in the row.
If we want to add a few values into various columns simultaneously we may prepare our queries for all of those and then execute them simultaneously (sorry for tautology).
If we need to add a few values into the same column within one query we can prepare it, using loops (repeating paragraphs 1 and 2 (or, optionally, 3).
|
```
UPDATE table1
SET col2 = dataB2, col3 = dataC2
WHERE col1 = dataA2;
```
This may serve your purpose.
|
How to INSERT data into a specific column without NULLs in the other columns?
|
[
"",
"sql",
"postgresql",
"insert",
""
] |
Below is the SQL Server query I want to convert to MySQL:
```
SELECT SaleDetail.cTaxiNo, Category.cCarName, qryDriverExpenses.nAmount,
Sale.nInvoiceNo, Sale.cCustomerName, IIf([cPackageType] ='GENERAL',
[nProviderRate],[nProviderRate]/[nProviderKM]) AS nARate, SaleDetail.nKM,
SaleDetail.nAmount, Sale.dInvoiceDate, qryFuel.nAmount,
SaleDetail.nProviderKM, SaleDetail.nProviderRate
FROM qryFuel
RIGHT JOIN (((Sale INNER JOIN SaleDetail ON Sale.nSalePk = SaleDetail.nSalePk)
INNER JOIN (CarMaster
INNER JOIN Category
ON CarMaster.nCategoryPk = Category.nCategoryPK) ON SaleDetail.cTaxiNo =
CarMaster.cCarNo)
LEFT JOIN qryDriverExpenses
ON SaleDetail.nTourBookingpk = qryDriverExpenses.nTourBookingPK)
ON qryFuel.nTourBookingPK =
SaleDetail.nTourBookingpk
ORDER BY Sale.nInvoiceNo, Sale.dInvoiceDate;
```
I am getting error sql
> syntax error . near [cPackageType]
Please help on this!
|
`IIf` in `SQL Server` is `If` in `MySQL`
```
SELECT SaleDetail.cTaxiNo, Category.cCarName, qryDriverExpenses.nAmount,
Sale.nInvoiceNo, Sale.cCustomerName,
IF(cPackageType ='GENERAL', nProviderRate, nProviderRate/nProviderKM) AS nARate,
SaleDetail.nKM, SaleDetail.nAmount, Sale.dInvoiceDate, qryFuel.nAmount,
SaleDetail.nProviderKM, SaleDetail.nProviderRate
FROM qryFuel RIGHT JOIN
(((Sale INNER JOIN SaleDetail ON Sale.nSalePk =
SaleDetail.nSalePk) INNER JOIN (CarMaster INNER JOIN Category ON
CarMaster.nCategoryPk = Category.nCategoryPK) ON SaleDetail.cTaxiNo =
CarMaster.cCarNo) LEFT JOIN qryDriverExpenses ON SaleDetail.nTourBookingpk
= qryDriverExpenses.nTourBookingPK)
ON qryFuel.nTourBookingPK = SaleDetail.nTourBookingpk
ORDER BY Sale.nInvoiceNo, Sale.dInvoiceDate;
```
|
Using those sort of joins are just asking for trouble. Please study up on proper joining syntax, <https://technet.microsoft.com/en-us/library/ms191517%28v=sql.105%29.aspx>. Here is formatted query:
```
SELECT SaleDetail.cTaxiNo
,Category.cCarName
,qryDriverExpenses.nAmount
,Sale.nInvoiceNo
,Sale.cCustomerName
,CASE WHEN `cPackageType` ='GENERAL' THEN
`nProviderRate`
ELSE
`nProviderRate`/`nProviderKM`
END AS nARate
,SaleDetail.nKM
,SaleDetail.nAmount
,Sale.dInvoiceDate
,qryFuel.nAmount
,SaleDetail.nProviderKM
,SaleDetail.nProviderRate
FROM qryFuel
INNER JOIN SaleDetail
ON qryFuel.nTourBookingPK = SaleDetail.nTourBookingpk
RIGHT OUTER JOIN Sale
ON Sale.nSalePk = SaleDetail.nSalePk
INNER JOIN CarMaster
ON SaleDetail.cTaxiNo = CarMaster.cCarNo
INNER JOIN Category
ON CarMaster.nCategoryPk = Category.nCategoryPK
LEFT OUTER JOIN qryDriverExpenses
ON SaleDetail.nTourBookingpk = qryDriverExpenses.nTourBookingPK
ORDER BY Sale.nInvoiceNo, Sale.dInvoiceDate;
```
|
Convert SQL Server SQL query to MySQL
|
[
"",
"mysql",
"sql",
"sql-server",
""
] |
I'm doing SQL query with Excel VBA.
Below is the SQL string that I'm using:
```
SELECT DESC, SEQNUM, INSTRUCTION FROM TABLE
WHERE DESC = 'ITEM1' AND SEQNUM <=3
```
Here is the query result:
```
DESC SEQNUM INSTRUCTION
ITEM1 1 002.000
ITEM1 2 137.000
ITEM1 3 005.000
```
How can I make the query result looks like this?
```
DESC SEQNUM1 SEQNUM2 SEQNUM3
ITEM1 002.000 137.000 005.000
```
|
If supported - do a self full outer join, ANSI SQL syntax here:
```
select coalesce(t1.desc, t2.desc, t3.desc),
t1.INSTRUCTION SEQNUM1,
t2.INSTRUCTION SEQNUM2,
t3.INSTRUCTION SEQNUM3
from (select * from TABLE where SEQNUM = 1) t1
full outer join (select * from TABLE where SEQNUM = 2) t2 on t1.desc = t2.desc
full outer join (select * from TABLE where SEQNUM = 3) t3 on t2.desc = t3.desc
```
|
Like this, **provided there is one DESC-SEQNUM each**:
```
SELECT DESC, S1.INSTRUCTION, S2.INSTRUCTION, S3.INSTRUCTION
FROM TABLE AS S1 INNER JOIN
TABLE AS S2 ON S1.DESC = S2.DESC
INNER JOIN TABLE AS S3 ON S3.DESC=S1.DESC
```
For Access SQL you may need to enclose the INNER JOINs like mentioned here:
[SQL multiple join statement](https://stackoverflow.com/questions/7854969/sql-multiple-join-statement)
The result for Access SQL would be:
```
SELECT DESC, S1.INSTRUCTION, S2.INSTRUCTION, S3.INSTRUCTION
FROM (TABLE AS S1 INNER JOIN
TABLE AS S2 ON S1.DESC = S2.DESC)
INNER JOIN TABLE AS S3 ON S3.DESC=S1.DESC
```
As mentioned in the comments this assumes that there are not missing allocations to DESC. **If there would be you would need to change the INNER JOINs to LEFT JOINs like this**:
```
SELECT DESC, S1.INSTRUCTION, S2.INSTRUCTION, S3.INSTRUCTION
FROM (TABLE AS S1 LEFT JOIN
TABLE AS S2 ON S1.DESC = S2.DESC)
LEFT JOIN TABLE AS S3 ON S3.DESC=S1.DESC
```
|
Excel VBA - How to pivot sql result
|
[
"",
"sql",
"vba",
"excel",
""
] |
I'm creating a foreign table (`foo_table`) in database\_a. `foo_table` lives in database\_b. `foo_table` has an enum (`bar_type`) as one of its columns. Because this enum is in database\_b, the creation of the foreign table fails in database\_a. database\_a doesn't understand the column type. Running the following in database\_a
`CREATE FOREIGN TABLE foo_table (id integer NOT NULL, bar bar_type) SERVER database_b`
One gets the error:
`ERROR: type "bar_type" does not exist`
I could just create a copy of `bar_type` in database\_a, but this feels duplicative and possibly a future cause of inconsistency. Would anyone have thoughts on best practices for handling?
|
I summarize the answer I received from the pgsql-general mailing list:
1. A foreign table is basically an ad-hoc remote data source for the local database, so the onus is on the local database to maintain its definition of the remote table, whether it's in another (or even the same) PostgreSQL server or a completely different data source, especially as the local definition can be different from the remote one.
2. This does mean that there's no simple way of ensuring any remote dependencies are present on the local server. PostgreSQL 9.5 will provide the IMPORT FOREIGN SCHEMA command, however this is limited to table/view definitions.
3. If the definition of enum becomes inconsistant, we'd expect errors to occur when retrieving rows that had values not known on the local side.
4. One potential way around this is to declare the foreign table's columns as "text" rather than enums; you would lose some error checking on the local side, but the remote server would enforce validity whenever you
stored something.
5. However, not clear if this hack would to behave desirably for WHERE conditions on the enum column. I will be checking the performance for WHERE conditions and will update this answer when I have more details.
All credit goes to the fine persons on the pgsql-general mailing list.
|
You can create a script to transfer the enums by making this query and then create them on your server:
```
SELECT format(
'CREATE TYPE %s AS ENUM (%s);',
enumtypid::regtype,
string_agg(quote_literal(enumlabel), ', ')
)
FROM pg_enum
GROUP BY enumtypid;
```
Thanks @laurenz-albe
[Postgres how to transfer all enums from foreign server](https://stackoverflow.com/questions/69584081/postgres-how-to-transfer-all-enums-from-foreign-server/69584382#69584382)
|
SQL: error when creating a foreign table that has an enum column
|
[
"",
"sql",
"postgresql",
"enums",
"foreign-data-wrapper",
""
] |
I have two tables appointments and line\_items.
```
appointments
id integer primary key
price decimal
line_items
id integer primary key
price decimal
appointment_id integer foreign key
```
I need to get sum of appointments.price with sum of associated sum of line\_items price.
For example, I have 2 appointments
```
id price
1 10
2 15
```
and line items
```
id price appointment_id
1 10 1
2 15 1
3 10 2
3 20 2
```
The total result should be (10 + (10 + 15)) + (15 + (10 + 20))
Currently I am trying this query. But it doesn't work.
```
SELECT SUM(appointments.price + sum(line_items.price)) AS total_sum
FROM `appointments`
INNER JOIN 'line_items' ON 'line_items'.'appointment_id' = 'appointments'.'id'
```
What is best approach for this? Subquery for sum of line\_items.price or joining sum line\_items.price. Or there is another better solution?
Thanks in advance!
|
This should give you what you're looking for, it seems you were close to the answer:
```
SELECT SUM(app.price + IFNULL(li.price,0)) AS total_sum
FROM appointments app
LEFT JOIN
(Select appointment_id, Sum(price) price from line_items group by appointment_id) li on li.appointment_id = app.id
```
|
I think what you are looking for is a `SUM() GROUP BY` in a derived table and then a `join` with `appointments`. Something like this.
```
SELECT SUM(appointments.price + line_items.price) AS total_sum
FROM appointments
INNER JOIN
(
SELECT line_items.appointment_id,SUM(price) as price
line_items
GROUP BY appointment_id
) line_items ON line_items.appointment_id = appointments.id
```
|
sum column with sum of joined table
|
[
"",
"mysql",
"sql",
""
] |
I have two tables whose key is a zip code. I'm trying to join them, but it's inaccurate because one table has leading zeros and the other does not, so it's not matching up.
Is there a simple way to format this? I have read access only, and no write access - so physically changing the tables is not an option unfortunately :(
Help! Thanks.
Table 1 looks like this
```
zip_code msa
00210 Boston
00211 Boston
00212 Boston
01431 Boston
```
Table 2 looks like this
```
zip_code city
210 Boston
211 Boston
212 .
1431 .
```
|
You can "normalize" zipcodes by adding leading zeros using lpad function. It adds "padding" - in this case zeroes, so that every zipcode will have the same length. I assume that you have zipcodes of 4 or 5 characters, then this might work:
```
SELECT lpad(zipcode, 5, '0')
```
|
You can use `RIGHT` function [documentations here](http://www.postgresql.org/docs/9.1/static/functions-string.html)
Which allows you to get x number of characters.
```
SELECT RIGHT('00000' || zip_code ,5)
```
|
Trying to join two tables on a zip - one table has no leading zeros, the other has leading zeros
|
[
"",
"sql",
"postgresql",
""
] |
I have two tables `account_agent` and `bulk_index`. The data on each table is as follows
`account_agent`
```
name code account remark scheme
avi 10 151362 first as
babi 11 123478 new as
avi 10 151721 new as
avi 10 151765 new as
sam sas/01 925458 tin sas
```
`bulk_index`
```
agentcode account customer_name
10 151362 Alok
22 265721 Rohan
10 151721 Akash
```
using these tables I want result on `account_agent` table as follows
`account_agent`
```
name code account remark
avi 10 151362 first
avi 10 151721 new
```
I have tried the query
```
select * from account_agent where code = '10' and account = (select account from bulk_index where code = '10')
```
|
This query worked for me
```
SELECT A.name, A.code, A.Account, A.remark, A.scheme
FROM account_agent A
INNER JOIN bulk_index B ON A.account = B.account
WHERE A.scheme = 'as' AND A.code = '10'
```
|
Assuming your `account` column is unique, you could just do a simple join and a where clause to get this:
```
SELECT A.name, A.code, A.Account, A.remark
FROM account_agent A
INNER JOIN bulk_index B ON A.account = B.account
WHERE a.scheme like '%as%' and a.code = '10'
--remove the preceeding or succeeding % in like as needed
```
If you are only interested in values where `scheme` column is equal to 'as' and `code` is equal to 10, then you could just do this:
```
SELECT A.name, A.code, A.Account, A.remark
FROM account_agent A
INNER JOIN bulk_index B ON A.account = B.account
WHERE a.scheme = 'as' and a.code = '10'
```
Here is a link to W3Schools page which explains the use of like operator in sql.
[SQL LIKE Operator](http://www.w3schools.com/sql/sql_like.asp)
|
How to filter data on one table based on reference data on another table?
|
[
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I have following SQL
```
SELECT tsd.ID
FROM test_series_details tsd
WHERE tsd.DIV_ID =1
AND tsd.END_DATE > NOW()
AND tsd.ID NOT IN
((SELECT tsr.TEST_ID
FROM test_series_results tsr
WHERE tsr.STUDENT_ID=3)
UNION
(SELECT tsrlv.TEST_ID
FROM test_series_restore_log_viewer tsrlv
WHERE tsrlv.STUDENT_ID=3
GROUP BY tsrlv.TEST_ID))
```
I have tested below part of query and it is working,
```
(SELECT tsr.TEST_ID
FROM test_series_results tsr
WHERE tsr.STUDENT_ID=3)
UNION
(SELECT tsrlv.TEST_ID
FROM test_series_restore_log_viewer tsrlv
WHERE tsrlv.STUDENT_ID=3
GROUP BY tsrlv.TEST_ID)
```
But how to give output of this UNION to IN condition?
ERROR IS AS follows
> Error Code: 1064. You have an error in your SQL syntax; check the
> manual that corresponds to your MySQL server version for the right
> syntax to use near 'UNION (SELECT tsrlv.TEST\_ID FROM
> test\_series\_restore\_log\_viewer tsrlv ' at line 9
|
The best way to write this is as two separate clauses:
```
SELECT tsd.ID
FROM test_series_details tsd
WHERE tsd.DIV_ID = 1 AND
tsd.END_DATE > NOW() AND
tsd.ID NOT IN (SELECT tsr.TEST_ID
FROM test_series_results tsr
WHERE tsr.STUDENT_ID = 3
) AND
tsd.ID NOT IN (SELECT tsrlv.TEST_ID
FROM test_series_restore_log_viewer tsrlv
WHERE tsrlv.STUDENT_ID = 3
);
```
This allows the engine to take advantage of indexes on the two tables. In addition, the `GROUP BY` is not necessary (either when written this way or in your original formulation).
For performance -- and just in case the ids in the subqueries could ever be `NULL` -- I would be inclined to write this using `NOT EXISTS`:
```
SELECT tsd.ID
FROM test_series_details tsd
WHERE tsd.DIV_ID = 1 AND
tsd.END_DATE > NOW() AND
NOT EXISTS (SELECT 1
FROM test_series_results tsr
WHERE tsr.STUDENT_ID = 3 and tsr.TEST_ID = tsd.ID
) AND
NOT EXISTS (SELECT 1
FROM test_series_restore_log_viewer tsrlv
WHERE tsrlv.STUDENT_ID = 3 and tsrlv.TEST_ID = tsd.ID
);
```
The best indexes for this query are: `test_series_results(test_id, student_id)` and `test_series_restore_log_viewer(test_id, student_id)` and `test_series_details(DIV_ID, end_date, id)`.
|
Try to use `LEFT JOIN` as below
```
SELECT tsd.ID
FROM test_series_details tsd
LEFT JOIN ((SELECT tsr.TEST_ID
FROM test_series_results tsr
WHERE tsr.STUDENT_ID=3)
UNION
(SELECT tsrlv.TEST_ID
FROM test_series_restore_log_viewer tsrlv
WHERE tsrlv.STUDENT_ID=3
GROUP BY tsrlv.TEST_ID)) AS T on T.TEST_ID = tsd.ID
WHERE tsd.DIV_ID =1
AND tsd.END_DATE > NOW()
AND T.TEST_ID is null
```
|
MySQL , How to give output of UNION to WHERE clause
|
[
"",
"mysql",
"sql",
""
] |
I am trying to create some tables within a database, however the tables are not appearing in my object explorer view.
my code is as follows:
```
use testDB
GO
create table dbo.teacher (id varchar(5), name varchar(24));
insert into teacher values ('dm112', 'Magro, Deirdre');
insert into teacher values ('je232', 'Elkner, Jeff');
insert into teacher values ('cm147', 'Meyers, Chris');
insert into teacher values ('kr387', 'Reed, Kevin');
create table dbo.course (
number varchar(6),
name varchar(24),
credits int,
teacherid varchar(6)
);
insert into course values ('SDV100', 'College Success Skills', 1, 'dm112');
insert into course values ('ITD110', 'Web Page Design I', 3, 'je232');
insert into course values ('ITP100', 'Software Design', 3, 'je232');
insert into course values ('ITD132', 'Structured Query Language', 3, 'cm147');
insert into course values ('ITP140', 'Client Side Scripting', 4, 'kr378');
insert into course values ('ITP225', 'Web Scripting Languages', 4, 'kr387');
create table dbo.student (id varchar(3), name varchar(24));
insert into student values ('411', 'Perez, Gustavo');
insert into student values ('412', 'Rucker, Imani');
insert into student values ('413', 'Gonzalez, Alexis');
insert into student values ('414', 'Melgar, Lidia');
create table dbo.enrolled (studentId varchar(3), courseNumber varchar(6));
insert into enrolled values ('411', 'SDV100');
insert into enrolled values ('411', 'ITD132');
insert into enrolled values ('411', 'ITP140');
insert into enrolled values ('412', 'ITP100');
insert into enrolled values ('412', 'ITP14p');
insert into enrolled values ('412', 'ITP225');
insert into enrolled values ('413', 'ITD132');
insert into enrolled values ('413', 'ITP225');
insert into enrolled values ('414', 'SDV100');
insert into enrolled values ('414', 'ITD110');
```
I looked this up before posting and found this exact question:
[Creating table with T-SQL - can't see created tables in Object explorer](https://stackoverflow.com/questions/21267506/creating-table-with-t-sql-cant-see-created-tables-in-object-explorer)
However, he was using "tempdb", which I am not.
I ran the query
```
select name, type_desc from testDB.sys.objects
```
which returned:
```
name type_desc
---------------------------
...
teacher USER_TABLE
course USER_TABLE
student USER_TABLE
enrolled USER_TABLE
...
```
I can modify, select, drop, etc. on these tables, but I cannot see them. Am I missing something? Another question brought up the prospect of "test" and "production"? They didn't go into much detail and google did not help me
:(
Thank you for any help you can offer.
Edit: Karl below found the solution! Although clicking refresh (F5) on the object explorer does not update the database view, right clicking on the database and clicking refresh updates the tables.
|
This would happen if you have the tables node open in object explorer and don't refresh after running your DDL. It is annoying that SSMS doesn't autorefresh explorer after DDL. Refresh is available via the right-click context menu in object explorer.
|
I had a scenario in which refreshing the folders in Object Explorer did not lead to my missing table appearing in either of the Tables or Views folders. Like the original post, I am able to query the table - but not find it in Object Explorer.
The [Microsoft docs on the FROM clause](https://learn.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-2017) specify that FROM is followed by table\_source which is a qualified table\_or\_view\_name which is defined as "Is the name of a table or view". Yet while my FROM clause works, no such table or view name appears in Object Explorer (even after refresh). What's going on here?
The original question had the following query:
```
select name, type_desc from testDB.sys.objects
```
which simplifies to:
```
select name, type_desc from sys.objects;
```
and this gave me the answer to my scenario, which differed from the original question.
In the original question the missing table's type\_desc value was USER\_TABLE, but in my case it showed SYNONYM.
There is, in fact, another folder in Object Explorer for Synonyms - and this is where I found the "table" I was querying. (It would be helpful if Microsoft updated their doc to mention the fact that a Synonym name is also a valid value for use with the FROM clause - at least in my case, synonyms are not frequently used).
|
Can't see created tables in Object Explorer - Microsoft SQL Management Studio
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
table1:
I need all the item\_id records where image\_id=1 and the item\_id appears in the table more than one time:
So, for the following table (lets call it my\_table):
```
item_id image_id
------- --------
1 1
1 5
1 6
2 1
3 1
4 1
6 1
6 33
6 34
```
The output for item\_id records should be: 1,6
I have tried:
```
SELECT m_t1.item_id
FROM my_table m_t1
INNER JOIN
(SELECT COUNT(item_id)
FROM my_table
GROUP BY item_id
HAVING COUNT(item_id)>1) m_t2
ON m_t2.image_id=1
WHERE m_t1.item_id=m_t2.item_id
```
|
```
SELECT item_id FROM my_table
WHERE image_id = 1
AND item_id IN (SELECT item_id FROM my_table GROUP BY item_id HAVING COUNT(*) > 1);
```
|
One way is using `exists`
```
select
t1.item_id
from my_table t1
where t1.image_id = 1
and exists(
select 1 from my_table t2
where t2.item_id = t1.item_id
and t2.image_id <> 1
);
```
|
MySql get duplicate id where other column condition
|
[
"",
"mysql",
"sql",
""
] |
I have the following table:
```
+------------------------------------+
| Number Name Date |
+------------------------------------+
| 1 1050 Name1 2015-01-01 |
| 2 1051 Name2 2015-04-27 |
| 3 1057 Name3 2015-04-27 |
+------------------------------------+
```
How should I get the most recent records? I've tried something like:
```
SELECT number, name, MAX(DATE) AS recent_date
FROM Recent_Table
HAVING recent_date < '2015-05-19'
GROUP BY number, name
```
I'm expecting to get the most recent records but instead I'm getting all three because of my having clause. Is there a way to work around this while still keeping my `having`? Thanks.
Expected output would be:
```
1051 Name2 2015-04-27
1057 Name3 2015-04-27
```
|
Try this
```
select number, name, date
from Recent_Table
where Date = (SELECT MAX(DATE) AS recent_date
FROM Recent_Table
WHERE DATE < '2015-05-19')
```
there is problem with `name` column not with `having` clause. I think you don't need `having`.
**[Sql fiddle](http://sqlfiddle.com/#!3/92267/2/0) demo**
|
```
SELECT
number,
name,
DATE AS recent_date
FROM Recent_Table
WHERE DATE = (SELECT MAX(t2.DATE)
FROM Recent_Table t2
WHERE t2.number = t1.number)
```
|
How do I get the most recent records based on as of date?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a sql file that I need to modify so that the ID section of a row is increased by 3. How can I increased the ID field and keep the rest of the line as it is in BASH?
Ex:
```
insert into info.names values (35, 'John', 'C', 2);
insert into info.names values (36, 'Mark', 'C', 1);
insert into info.names values (37, 'Bob', 'C', 5);
```
I need to add 3 to 35, 36 and 37, so that they become 38,39,40. SO the output would be
```
insert into info.names values (38, 'John', 'C', 2);
insert into info.names values (39, 'Mark', 'C', 1);
insert into info.names values (40, 'Bob', 'C', 5);
```
I'd like to do this in BASH.
Thank you
|
awk is the more appropriate tool for this task
```
awk '{tmp=$5 ;sub(/^\(/, "", tmp); tmp+=3; $5="("tmp","; print $0}' file > file.new && mv file.new file
```
To explain, I'll add comments and use a more verbose format
```
awk '{
# Capture the 5th field '(35,' into a tmp variable
tmp=$5
# remove the leading "(" char
sub(/^\(/, "", tmp)
# add 3 to the tmp value (see note below)
tmp+=3
# reconstruct the 5th field adding "(" and "," as needed.
$5="(" tmp ","
# print the whole line
print $0
}' file > file.new && mv file.new file
# | |-> if no error (`&&`) overwrite the original file
# | with the fixed file (not an awk feature)
# |-> write output to tmp file
```
Note that after the operation `sub(/^\(/, "", tmp)` the value of tmp is actually `35,` (note the `,` char!). When given a variable in a number context (like `+=3` ) awk will only process only the number part of that value and then perform the math operation. That is why you get `38` and not `35,3`. The line following then "put's back" the missing '(' and ',' chars.
IHTH
|
Using gnu-awk you can do this:
```
awk 'BEGIN{ FPAT="[^(]+|\\([0-9]+" } {$2="(" substr($2,2)+3} 1' file.sql
insert into info.names values (38 , 'John', 'C', 2);
insert into info.names values (39 , 'Mark', 'C', 1);
insert into info.names values (40 , 'Bob', 'C', 5);
```
|
Add a number to a number located in a string in BASH
|
[
"",
"sql",
"bash",
"parsing",
""
] |
I have a table with 2 columns TimeStamp and ID in `MS SQLSERVER`
```
TimeStamp ID
2015-05-20 1
2015-05-20 2
2015-05-20 1
2015-05-21 1
2015-05-21 2
2015-05-21 2
2015-05-21 1
```
My requirement is to calculate number of records for every Id according to date.
Requirement:
```
Date No of records for Id=1 No. records for ID=2 Total
2015-05-20 2 1 3
2015-05-21 2 2 4
```
Please let me know how can i do this for other columns as well.
Thanks
|
Using Pivot - can look like this:
```
SELECT [TimeStamp], [1] AS [No of records for Id=1], [2] AS [No of records for Id=2], [1]+[2] AS Total
FROM dbo.YourTable
PIVOT
(
COUNT(ID) FOR ID IN ([1],[2])
) pvt
ORDER BY [TimeStamp]
```
|
I've got your answer using subqueries:
SQLFIDDLE: <http://sqlfiddle.com/#!3/8c28c/5>
```
SELECT DISTINCT t1.myts,
(SELECT COUNT(*)
FROM mytable t2
WHERE t2.myts = t1.myts
AND t2.id = 1) AS id_1_count,
(SELECT COUNT(*)
FROM mytable t2
WHERE t2.myts = t1.myts
AND t2.id = 2) AS id_2_count,
(SELECT COUNT(*)
FROM mytable t2
WHERE t2.myts = t1.myts) AS total
FROM mytable t1
```
If you add an id here though you would need another subquery
|
Write a query to count different kind of records in single line result
|
[
"",
"sql",
"sql-server",
"sql-server-2008-r2",
"group-by",
"sql-server-2012",
""
] |
I have this query in SQL Server:
```
SELECT
cr.formno, cr.controlid as formid,
min(cr.series) as formstartseries,
max(cr.series) as formendseries,
(SELECT currentseries
FROM af_inventory
WHERE objid=cr.controlid) AS afi_currentseries,
(SELECT currentseries
FROM af_control
WHERE objid=cr.controlid) AS afc_currentseries
FROM cashreceipt cr
WHERE cr.collector_objid='USR409c49cc:1488078ce62:-7958'
AND cr.objid NOT IN (SELECT objid
FROM remittance_cashreceipt
WHERE objid=cr.objid)
GROUP BY cr.formno, cr.controlid
HAVING formstartseries <> afi_currentseries
```
After executing this query, it throws an error:
> **Invalid column name `formstartseries`, `afi_currentseries`**
What's wrong?
|
You cannot use your ALIASed column names in the HAVING clause. Try the following:
```
SELECT
cr.formno,
cr.controlid AS formid,
MIN(cr.series) AS formstartseries,
MAX(cr.series) AS formendseries,
(
SELECT
currentseries
FROM
af_inventory
WHERE
objid = cr.controlid
) AS afi_currentseries,
(
SELECT
currentseries
FROM
af_control
WHERE
objid = cr.controlid
) AS afc_currentseries
FROM
cashreceipt cr
WHERE
cr.collector_objid = 'USR409c49cc:1488078ce62:-7958'
AND cr.objid NOT IN ( SELECT
objid
FROM
remittance_cashreceipt
WHERE
objid = cr.objid )
GROUP BY
cr.formno,
cr.controlid
HAVING
MIN(cr.series)
<>
(SELECT currentseries
FROM af_inventory
WHERE objid=cr.controlid)
```
|
An alias assigned in the `SELECT` cannot be used in the `HAVING` clause because the `SELECT` phase is executed later, have a look at [Logical Query Processing](http://www.nickyvv.com/2013/02/logical-query-processing.html).
You could use a `cte` and then include a `WHERE` clause like this:
```
WITH cte
AS (
SELECT cr.formno
,cr.controlid AS formid
,min(cr.series) AS formstartseries
,max(cr.series) AS formendseries
,(
SELECT currentseries
FROM af_inventory
WHERE objid = cr.controlid
) AS afi_currentseries
,(
SELECT currentseries
FROM af_control
WHERE objid = cr.controlid
) AS afc_currentseries
FROM cashreceipt cr
WHERE cr.collector_objid = 'USR409c49cc:1488078ce62:-7958'
AND cr.objid NOT IN (
SELECT objid
FROM remittance_cashreceipt
WHERE objid = cr.objid
)
GROUP BY cr.formno
,cr.controlid
)
SELECT *
FROM cte
WHERE formstartseries <> afi_currentseries
```
|
Invalid column error in SQL Server
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
This has probably been asked before, but I have no idea on how to find it in the first place.
In the following query (with respective returned values):
```
select * from tbChapter where idLesson in(12, 13)
```
-- Result --
```
id idLesson name sequence
52 12 Intro 1
53 12 Chapter One 2
54 12 Chapter Two 3
55 13 Intro 1
56 13 Chapter One 2
57 13 Chapter Two 3
58 13 Chapter Three 4
```
I want to get only the last entry for each idLesson, for example:
-- Expected result --
```
id idLesson name sequence
54 12 Chapter Two 3
58 13 Chapter Three 4
```
How can I proceed?
Ps: I'l actually replace `where idLesson in(12, 13)` with subquery that will return dozens of `idLesson` values.
|
Try this:
```
select * from tbChapter where id in
(select MAX(id) from tbChapter group by idLesson)
```
|
Try this:
```
select *
from tbChapter as a
where sequence = (select max(sequence)
from tbChapter as b
where a.id_lesson = b.id_lesson)
```
|
How to select only one entry based on each parameter
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I found a solution for this here,
[sql query that groups different items into buckets](https://stackoverflow.com/questions/6669730/sql-query-that-groups-different-items-into-buckets)
It works well as long as there is data for all the ranges. I want the query to return 0 if there is no data.
So if my table is:
```
item_name | price
i1 | 2
i2 | 22
i3 | 4
i4 | 26
i5 | 44
i6 | 6
```
I want output as:
```
range | number of item
0 - 10 | 3
11 - 20 | 0
21 - 30 | 2
31 - 40 | 0
41 - 50 | 1
```
The below query will not display result for the 0 count cases.
```
select
case when price >= 0 and price <= 10 then " 0 - 10"
when price > 10 and price <= 20 then " 10 - 20"
when price > 20 and price <= 30 then " 20 - 30"
when price > 30 and price <= 40 then " 30 - 40"
when price > 40 and price <= 50 then " 40 - 50"
else "over 50"
end PriceRange,
count(*) as TotalWithinRange
from
YourTable
group by 1
```
Does anyone has a solution for this?
|
You need to build an inline table containing all price ranges. Then perform a `LEFT JOIN` with a derived table based on your query to get the expected result:
```
SELECT x.PriceRange, COALESCE(TotalWithinRange, 0) AS TotalWithinRange
FROM (
SELECT "0 - 10" AS PriceRange
UNION SELECT "10 - 20"
UNION SELECT "20 - 30"
UNION SELECT "30 - 40"
UNION SELECT "40 - 50"
UNION SELECT "over 50" ) x
LEFT JOIN (
SELECT
CASE when price >= 0 and price <= 10 then "0 - 10"
when price > 10 and price <= 20 then "10 - 20"
when price > 20 and price <= 30 then "20 - 30"
when price > 30 and price <= 40 then "30 - 40"
when price > 40 and price <= 50 then "40 - 50"
else "over 50"
END AS PriceRange,
COUNT(*) as TotalWithinRange
FROM YourTable
GROUP BY 1 ) y ON x.PriceRange = y.PriceRange
```
[**SQL Fiddle Demo**](http://sqlfiddle.com/#!9/9f351/6)
|
Group by `TotalWithinRange`, so it will count each category
|
MySQL query to group data into different ranges
|
[
"",
"sql",
"postgresql",
""
] |
I have a table with duplicate item's ...
I need show the list of all columns without duplicate item's
for example i have this table:
```
ID CODE RANK TIME
1 12345 2 10:00
2 12345 2 11:00
3 98765 3 20:00
4 98765 3 22:00
5 66666 2 10:00
6 55555 5 11:00
```
result , i need :
```
ID CODE RANK TIME
1 12345 2 10:00
3 98765 3 20:00
5 66666 2 10:00
6 55555 5 11:00
```
The time column in not Important , only one of them most be show ...
|
try this:
```
SELECT * FROM myTable WHERE ID IN(SELECT MIN(ID) FROM myTable GROUP BY Code)
```
|
If there is no specific way the ID should show (just like the time column), and the ID and TIME column are always sorted that way,this should work.
`SELECT MIN(id), code, rank, MIN(time)
FROM table
GROUP BY code, rank`
|
SQL Show All column by one column distinct
|
[
"",
"sql",
"oracle",
"select",
"duplicates",
"distinct",
""
] |
How can I write query which would output this below?
```
ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"
DECLARE @x XML;
SET @x = N'<?xml version = "1.0"?>
<EVENT_INSTANCE>
<EventType>ALTER_TABLE</EventType>
<PostTime>2015-05-19T14:01:46.930</PostTime>
<SPID>52</SPID>
<ServerName>computer1</ServerName>
<LoginName>domain\user</LoginName>
<UserName>dbo</UserName>
<DatabaseName>DBA</DatabaseName>
<SchemaName>dbo</SchemaName>
<ObjectName>Table_1</ObjectName>
<ObjectType>TABLE</ObjectType>
<AlterTableActionList>
<Create>
<Columns>
<Name>c8</Name>
</Columns>
</Create>
</AlterTableActionList>
<TSQLCommand>
<SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
<CommandText>ALTER TABLE dbo.Table_1 ADD c8 INT NULL</CommandText>
</TSQLCommand>
</EVENT_INSTANCE>';
```
|
Bit of a hack, but this should work:
```
DECLARE @x XML;
SET @x = N'<?xml version = "1.0"?>
<EVENT_INSTANCE>
<EventType>ALTER_TABLE</EventType>
<PostTime>2015-05-19T14:01:46.930</PostTime>
<SPID>52</SPID>
<ServerName>computer1</ServerName>
<LoginName>domain\user</LoginName>
<UserName>dbo</UserName>
<DatabaseName>DBA</DatabaseName>
<SchemaName>dbo</SchemaName>
<ObjectName>Table_1</ObjectName>
<ObjectType>TABLE</ObjectType>
<AlterTableActionList>
<Create>
<Columns>
<Name>c8</Name>
</Columns>
</Create>
</AlterTableActionList>
<TSQLCommand>
<SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
<CommandText>ALTER TABLE dbo.Table_1 ADD c8 INT NULL</CommandText>
</TSQLCommand>
</EVENT_INSTANCE>';
SELECT REPLACE(REPLACE(CONVERT(VARCHAR(1000), @X.query('(//TSQLCommand/SetOptions)')),'<SetOptions', ''), '/>', '')
```
|
Assuming you want to get string containing all attributes of `<SetOption>` as output, one possible way is using XQuery for loop like so :
```
SELECT @x.query('
for $attr in /EVENT_INSTANCE/TSQLCommand/SetOptions/@*
(: return value in format : attribute_name="attribute_value" :)
return concat(local-name($attr),''="'',$attr,''"'')
')
```
**[SQL Fiddle](http://sqlfiddle.com/#!6/9eecb7db59d16c80417c72d1/1045)**
**output :**
```
ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE"
```
**brief explanation :**
* `/EVENT_INSTANCE/TSQLCommand/SetOptions` : specify path to `<SetOption>` element
* `/@*` : get all attributes from current context element (it is `<SetOption>` element in this case)
* `(: some comment here :)` : [XQuery comment](https://msdn.microsoft.com/en-us/library/ms186281.aspx)
* `concat()` : concatenates all parameters into single string result
* `local-name()` : return element local name (a.k.a XML tag name)
* note that `''` within `concat()` means simply `'`. Single quotes need to be escaped in this situation, and it is done in SQL Server by doubling each of them.
**for reference :**
* [MSDN : FLWOR (for, let, where, order by, return) Statement and Iteration (XQuery)](https://msdn.microsoft.com/en-us/library/ms190945.aspx)
|
TSQL to get XML data into a column
|
[
"",
"sql",
"sql-server",
"xml",
"t-sql",
""
] |
I have a simple table named HotelRate
```
HID | START_DATE | END_DATE | PRICE_PER_DAY
--------------------------------------
1 01/1/2015 10/1/2015 100
1 11/1/2015 20/1/2015 75
1 21/1/2015 30/1/2015 110
```
what is the most simple way to calculate price for Hotel Room if user queries for Total Price between `5/1/2015` to `25/1/2015`.
I have checked :
* [How can I query for overlapping date ranges?](https://stackoverflow.com/questions/16944258/how-can-i-query-for-overlapping-date-ranges)
* [Overlapping date range MySQL](https://stackoverflow.com/questions/3667668/overlapping-date-range-mysql)
but none of it is making much sense to me.
I have tried a couple queries but those seems like hitting arrow in the blind. Can someone suggest me a simple and elegant way to do it ?
---
@JamesZ
On running the first query i get
```
start_date end_date duration price_per_day
---------- ---------- ----------- -------------
2015-01-01 2015-01-10 5 100
2015-01-11 2015-01-20 9 75
2015-01-21 2015-01-30 4 110
```
For first range `5` is OK, second range it should be `10`, third be `5`
How days are calculated : Total no of nights between `start` & `end` date, which is same as days difference
```
05-Jan-15 06-Jan-15 1 Night
06-Jan-15 07-Jan-15 1 Night
07-Jan-15 08-Jan-15 1 Night
08-Jan-15 09-Jan-15 1 Night
09-Jan-15 10-Jan-15 1 Night
10-Jan-15 11-Jan-15 1 Night
11-Jan-15 12-Jan-15 1 Night
12-Jan-15 13-Jan-15 1 Night
13-Jan-15 14-Jan-15 1 Night
14-Jan-15 15-Jan-15 1 Night
15-Jan-15 16-Jan-15 1 Night
16-Jan-15 17-Jan-15 1 Night
17-Jan-15 18-Jan-15 1 Night
18-Jan-15 19-Jan-15 1 Night
19-Jan-15 20-Jan-15 1 Night
20-Jan-15 21-Jan-15 1 Night
21-Jan-15 22-Jan-15 1 Night
22-Jan-15 23-Jan-15 1 Night
23-Jan-15 24-Jan-15 1 Night
24-Jan-15 25-Jan-15 1 Night
Count : 20 Night
```
|
Something like this should do the trick:
```
declare @startdate date, @enddate date
set @startdate = '20150105'
set @enddate = '20150125'
select
start_date,
end_date,
datediff(
day,
case when @startdate > start_date then @startdate else start_date end,
case when @enddate < end_date then @enddate else end_date end) as duration,
price_per_day
from
reservation
where
end_date >= @startdate and
start_date <= @enddate
```
This just handles the overlapping ranges with case so that if the reservation start is the correct one to use, it takes it, otherwise the search criteria, and same thing for end date. The days and price are here separate, but you can just multiply them to get the result.
SQL Fiddle: <http://sqlfiddle.com/#!3/4027b3/1>
Edit, this way to get total sum:
```
declare @startdate date, @enddate date
set @startdate = '20150105'
set @enddate = '20150125'
select
sum(datediff(
day,
case when @startdate > start_date then @startdate else start_date end,
case when @enddate < end_date then @enddate else end_date end)
* price_per_day)
from
reservation
where
end_date >= @startdate and
start_date <= @enddate
```
|
You will need a calendar table, but every database should have one.
The actual implementation is always user and DBMS specific (e.g. [MS SQL Server](http://web.archive.org/web/20070611150639/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-calendar-table.html)), so searching for "calendar table" + yourDBMS will probably reveal some source code for your system.
```
select HID, sum(PRICE_PER_DAY)
from calendar_table as c
join HotelRate
on calendar_date between START_DATE and END_DATE
group by HID
```
|
Calculate Price For Overlapping Date Range
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with Three columns:
GEOID, ParcelID, and PurchaseDate.
The PKs are GEOID and ParcelID which is formatted as such:
```
GEOID PARCELID PURCHASEDATE
12345 AB123 1/2/1932
12345 sfw123 2/5/2012
12345 fdf323 4/2/2015
12346 dfefej 2/31/2022 <-New GEOID
```
What I need is an aggregation based on GEOID.
I need to count the number of ParcelIDs from last month PER GEOID
and I need to provide a percentage of that GEOID of all total sold last month.
I need to produce three columns:
GEOID Nbr\_Parcels\_Sold Percent\_of\_total
For each GEOID, I need to know how many Parcels Sold Last month, and with that Number, find out how much percentage that entails for all Solds.
For example: if there was 20 Parcels Sold last month, and 4 of them were sold from GEOID 12345, then the output would be:
```
GEOID Nbr_Parcels_Sold Perc_Total
12345 4 .2 (or 20%)
```
I am having issues with the dual aggregation. The concern is that the table in question has over 8 million records.
if there is a SQL Warrior out here who have seen this issue before, Any wisdom would be greatly appreciated.
Thanks.
|
How about using sub-query to count the sum
```
WITH data AS
(
SELECT *
FROM [Table]
WHERE
YEAR(PURCHASEDATE) * 100 + MONTH(PURCHASEDATE) = 201505
)
SELECT
GEOID,
COUNT(*) AS Nbr_Parcels_Sold,
CONVERT(decimal(18,8), COUNT(*)) /
(SELECT COUNT(*) FROM data) AS Perc_Total
FROM
data t
GROUP BY
GEOID
```
**EDIT**
To update another table by the result, use `UPDATE` under `WITH()`
```
WITH data AS
(
SELECT *
FROM [Table]
WHERE
YEAR(PURCHASEDATE) * 100 + MONTH(PURCHASEDATE) = 201505
)
UPDATE target SET
Nbr_Parcels_Sold = source.Nbr_Parcels_Sold,
Perc_Total = source.Perc_Total
FROM
[AnotherTable] target
INNER JOIN
(
SELECT
GEOID,
COUNT(*) AS Nbr_Parcels_Sold,
CONVERT(decimal(18,8), COUNT(*)) /
(SELECT COUNT(*) FROM data) AS Perc_Total
FROM
data t
GROUP BY
GEOID
) source ON target.GEOID = source.GEOID
```
|
Hopefully you are using SQL Server 2005 or later version, in which case you can get advantage of [windowed](https://msdn.microsoft.com/en-us/library/ms189461.aspx "OVER Clause (Transact-SQL)") aggregation. In this case, windowed aggregation will allow you to get the total sale count alongside counts per `GEOID` and use the total in calculations. Basically, the following query returns just the counts:
```
SELECT
GEOID,
Nbr_Parcels_Sold = COUNT(*),
Total_Parcels_Sold = SUM(COUNT(*)) OVER ()
FROM
dbo.atable
GROUP BY
GEOID
;
```
The `COUNT(*)` call gives you counts per `GEOID`, according to the GROUP BY clause. Now, the `SUM(...) OVER` expression gives you the grand total count *in the same row as the detail count*. It is the empty OVER clause that tells the SUM function to add up the results of `COUNT(*)` across the entire result set. You can use that result in calculations just like the result of any other function (or any expression in general).
The above query simply returns the total value. As you actually want not the value itself but a percentage from it for each `GEOID`, you can just put the `SUM(...) OVER` call into an expression:
```
SELECT
GEOID,
Nbr_Parcels_Sold = COUNT(*),
Percent_of_total = COUNT(*) * 100 / SUM(COUNT(*)) OVER ()
FROM
dbo.atable
GROUP BY
GEOID
;
```
The above will give you integer percentages (truncated). If you want more precision or a different representation, remember to cast either the divisor or the dividend (optionally both) to a non-integer numeric type, since SQL Server always performs integral division when both operands are integers.
|
how to perform multiple aggregations on a single SQL query
|
[
"",
"sql",
"sql-server",
"stored-procedures",
""
] |
I'm a novice with my coding so forgive me if my question seems basic but I'm having some trouble extracting my variables from this While Loop in order to then use the results of my SQL query for validation.
This script below is the event handling for a login button on an .aspx form processing an email and login field that will be listed in a correlating MSSQL database:
```
Public Class _Default
Inherits System.Web.UI.Page
Protected Sub submit_Click(sender As Object, e As EventArgs) Handles submit.Click
Dim Column1 As String
Dim Column2 As String
Dim SQL = "SELECT * FROM Logins WHERE Email='" & email.Text & "' AND Password='" & password.Text & "'"
Dim oSqlDataReader As System.Data.SqlClient.SqlDataReader = Nothing
Using oSqlConnection As New System.Data.SqlClient.SqlConnection("SERVER=[Server Name];UID=[User];PWD=[Pass];DATABASE=[Database Name]")
oSqlConnection.Open()
Using oSqlCommand As New System.Data.SqlClient.SqlCommand(SQL, oSqlConnection)
oSqlDataReader = oSqlCommand.ExecuteReader
While oSqlDataReader.Read
Column1 = oSqlDataReader(name:="Email")
Column2 = oSqlDataReader(name:="Password")
End While
End Using
oSqlConnection.Close()
End Using
If "Column 1 etc."
End if
End Sub
End Class
```
As far as I can tell my code is working with no errors but every time I try and create an If statement my Variable `Column 1 and Column 2` are undeclared making them useless.
If anyone could help with the correct layout for my code or missing areas and an explanation as to where I've gone wrong that'd be great.
|
If you move the `If` block inside the loop, do you get closer to the behaviour that you're expecting?
```
Protected Sub submit_Click(sender As Object, e As EventArgs) Handles submit.Click
Dim Column1 As String
Dim Column2 As String
Dim SQL = "SELECT * FROM Logins WHERE Email='" & email.Text & "' AND Password='" & password.Text & "'"
Dim oSqlDataReader As System.Data.SqlClient.SqlDataReader = Nothing
Using oSqlConnection As New System.Data.SqlClient.SqlConnection("SERVER=[Server Name];UID=[User];PWD=[Pass];DATABASE=[Database Name]")
oSqlConnection.Open()
Using oSqlCommand As New System.Data.SqlClient.SqlCommand(SQL, oSqlConnection)
oSqlDataReader = oSqlCommand.ExecuteReader
While oSqlDataReader.Read
Column1 = oSqlDataReader(name:="Email")
Column2 = oSqlDataReader(name:="Password")
If "Column 1 etc....."
End if
End While
End Using
oSqlConnection.Close()
End Using
End Sub
```
|
It could be that your query is returning no rows or that the values returned are dbNull or nothing. I would check the data getting returned and error if appropriate.
Try running the query against the database directly. Are you getting a row back?
To avoid the error, you can declare the string as String.empty
```
Dim Column1 As String = String.empty
```
Or, when using it in the if statement check for nothing:
```
If Column1 Is Not Nothing AndAlso ...
```
|
Need help extracting variables from a While Loop in VB.Net
|
[
"",
"sql",
"vb.net",
"variables",
"while-loop",
""
] |
I have list of items that have start and end date. Items belong to user. For one item the period can range from 1-5 years. I want to find the count of days that are between the given date range which I would pass from query. Period start is always `sysdate` and end `sysdate - 5 years`
The count of days returned must also be in the period range.
Example:
I initiate a query as of 15.05.2015) as me being user, so I need to find all days **between 15.05.2010 and 15.05.2015**
During that period 2 items have belong to me:
Item 1) 01.01.2010 - 31.12.2010. Valid range: 15.05.2010 - 31.12.2010 = ~195 days
Item 2) 01.01.2015 - 31.12.2015. Valid range: 01.01.2015 - 15.05.2015 = ~170 days
I need a sum of these days that are exactly in that period.
For query right now I just have the count which takes the full range of an item (making it simple):
```
SELECT SUM(i.end_date - i.start_date) AS total_days
FROM items i
WHERE i.start_date >= to_date('2010-15-05', 'yyyy-mm-dd')
AND i.end_date <= to_date('2015-15-05', 'yyyy-mm-dd')
AND i.user = 'me'
```
So right now this would give me about count of 2 year period dates which is wrong, how should I update my select sum to include the dates that are in the period? Correct result would be 195 + 170. Currently I would get like 365 + 365 or something.
|
> Period start is always `sysdate` and end `sysdate - 5 years`
You can get this using: `SYSDATE` and `SYSDATE - INTERVAL '5' YEAR`
> Item 1) 01.01.2010 - 31.12.2010. Valid range: 15.05.2010 - 31.12.2010
> = ~195 days
>
> Item 2) 01.01.2015 - 31.12.2015. Valid range: 01.01.2015 - 15.05.2015
> = ~170 days
Assuming these examples show `start_date` - `end_date` and the valid range is your expected answer for that particular `SYSDATE` then you can use:
[SQL Fiddle](http://sqlfiddle.com/#!4/4fed2/5)
**Oracle 11g R2 Schema Setup**:
```
CREATE TABLE items ( "user", start_date, end_date ) AS
SELECT 'me', DATE '2010-01-01', DATE '2010-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2015-01-01', DATE '2015-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2009-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2009-01-01', DATE '2016-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2012-01-01', DATE '2012-12-31' FROM DUAL
UNION ALL SELECT 'me', DATE '2013-01-01', DATE '2013-01-01' FROM DUAL;
```
**Query 1**:
```
SELECT "user",
TO_CHAR( start_date, 'YYYY-MM-DD' ) AS start_date,
TO_CHAR( end_date, 'YYYY-MM-DD' ) AS end_date,
TO_CHAR( GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR), 'YYYY-MM-DD' ) AS valid_start,
TO_CHAR( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE)), 'YYYY-MM-DD' ) AS valid_end,
LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
- GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
+ 1
AS total_days
FROM items i
WHERE i."user" = 'me'
AND TRUNC(i.start_date) <= TRUNC(SYSDATE)
AND TRUNC(i.end_date) >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
```
**[Results](http://sqlfiddle.com/#!4/4fed2/5/0)**:
```
| user | START_DATE | END_DATE | VALID_START | VALID_END | TOTAL_DAYS |
|------|------------|------------|-------------|------------|------------|
| me | 2010-01-01 | 2010-12-31 | 2010-05-21 | 2010-12-31 | 225 |
| me | 2015-01-01 | 2015-12-31 | 2015-01-01 | 2015-05-21 | 141 |
| me | 2009-01-01 | 2016-12-31 | 2010-05-21 | 2015-05-21 | 1827 |
| me | 2012-01-01 | 2012-12-31 | 2012-01-01 | 2012-12-31 | 366 |
| me | 2013-01-01 | 2013-01-01 | 2013-01-01 | 2013-01-01 | 1 |
```
This assumes that the start date is at the beginning of the day (00:00) and the end date is at the end of the day (24:00) - so, if the start and end dates are the same then you are expecting the result to be 1 total day (i.e. the period 00:00 - 24:00). If you are, instead, expecting the result to be 0 then remove the `+1` from the calculation of the total days value.
**Query 2**:
If you want the sum of all these valid ranges and are happy to count dates in overlapping ranges multiple times then just wrap it in the `SUM` aggregate function:
```
SELECT SUM( LEAST(TRUNC(i.end_date),TRUNC(SYSDATE))
- GREATEST(TRUNC(i.start_date), TRUNC(SYSDATE)-INTERVAL '5' YEAR)
+ 1 )
AS total_days
FROM items i
WHERE i."user" = 'me'
AND TRUNC(i.start_date) <= TRUNC(SYSDATE)
AND TRUNC(i.end_date) >= TRUNC(SYSDATE) - INTERVAL '5' YEAR
```
**[Results](http://sqlfiddle.com/#!4/4fed2/5/1)**:
```
| TOTAL_DAYS |
|------------|
| 2560 |
```
**Query 3**:
Now if you want to get a count of all the valid days in the range and not count overlap in ranges multiple times then you can do:
```
WITH ALL_DATES_IN_RANGE AS (
SELECT TRUNC(SYSDATE) - LEVEL + 1 AS valid_date
FROM DUAL
CONNECT BY LEVEL <= SYSDATE - (SYSDATE - INTERVAL '5' YEAR) + 1
)
SELECT COUNT(1) AS TOTAL_DAYS
FROM ALL_DATES_IN_RANGE a
WHERE EXISTS ( SELECT 'X'
FROM items i
WHERE a.valid_date BETWEEN i.start_date AND i.end_date
AND i."user" = 'me' )
```
**[Results](http://sqlfiddle.com/#!4/4fed2/5/2)**:
```
| TOTAL_DAYS |
|------------|
| 1827 |
```
|
Assuming the time periods have no overlaps:
```
SELECT SUM(LEAST(i.end_date, DATE '2015-05-15') -
GREATEST(i.start_date, DATE '2010-05-15')
) AS total_days
FROM items i
WHERE i.start_date >= DATE '2010-05-15' AND
i.end_date <= DATE '2015-05-15' AND
i.user = 'me';
```
|
Count of days in a period
|
[
"",
"sql",
"oracle",
""
] |
I am trying to make a view that has 6 columns, and each row is the first 3 rows from 2 columns of another table.
For Example:
```
Asset Table
Asset_ID Asset_Name
1 Kitchen
2 Bathroom
3 Bedroom
4 Bed
5 Knife
6 Basement
View Combined
Asset_ID_A Asset_Name_A Asset_ID_B Asset_Name_B Asset_ID_C Asset_Name_C
1 Kitchen 2 Bathroom 3 Bedroom
4 Bed 5 Knife 6 Basement
```
Is something like this possible?
Sorry, this is purely SQL. I shouldn't have said report.
As for which columns, it will be column X and Y from sql table Z.
|
Let me be clear here: this is a *really bad idea* to do this in SQL. That said, it's entirely possible:
```
WITH Ordered As (
select Asset_ID, Asset_Name, row_number() over (order by Asset_ID) as Sequence
from Asset
)
SELECT o1.Asset_ID Asset_ID_A, o1.Asset_Name Asset_Name_A
,o2.Asset_ID Asset_ID_B, o2.Asset_Name Asset_Name_B
,o3.Asset_ID Asset_ID_C, o3.Asset_Name Asset_Name_C
FROM Ordered o1
LEFT JOIN Ordered o2 ON o2.Sequence = o1.Sequence + 1
LEFT JOIN Ordered o3 ON o3.Sequence = o1.Sequence + 2
WHERE o1.Sequence % 3 = 1
```
|
If you only have 6 rows
```
SELECT
t1.Asset_ID as Asset_ID_A,
t1.Asset_Name as Asset_Name_A,
t2.Asset_ID as Asset_ID_B,
t2.Asset_Name as Asset_Name_B,
t3.Asset_ID as Asset_ID_C,
t3.Asset_Name as Asset_Name_C
FROM yourtable t1
INNER JOIN yourtable t2
ON t2.Asset_ID - 1 = t1.Asset_ID
INNER JOIN yourtable t3
ON t3.Asset_ID - 1 = t2.Asset_ID
WHERE t1.Asset_ID in (1,4)
```
|
SQL View Statement
|
[
"",
"sql",
"sql-server-express",
""
] |
I have a simple table like this
```
firstname, lastname , nb_payments
a, b , 10
a, b , 20
b, a , 30
b, a , 40
b, b , 50
```
I want to select the 3 max payments grouped by the couple firstname, lastname
The output desired is like this
```
firstname, lastname , top3
b, a , 70
b, b , 50
a, b , 30
```
I did this query : `SELECT firstname, lastname , nb_payments as top3 FROM account ORDER BY nb_payments DESC LIMIT 3`
But it doesn't add up nb\_payments grouped by the couple firstname,lastname
Any help would be appreciated
Thank you very much
|
You need to sum the payments and group by `firstname` and `lastname`
```
SELECT firstname, lastname , SUM(nb_payments)
FROM account
GROUP BY firstname, lastname
ORDER BY 3 DESC LIMIT 3
```
|
I think this produces the output you want:
```
select firstname, lastname, sum(nb_payments)
from (select t.*,
row_number() over (partition by firstname, lastname
order by nb_payments desc) as seqnum
from table t
) t
where seqnum <= 3
group by firstname, lastname;
```
|
selecting max values grouped by two column
|
[
"",
"sql",
"postgresql",
"greatest-n-per-group",
""
] |
Why I get the following syntax error in this section of stored procedure :
> Incorrect syntax near the keyword 'FROM'.
---
```
WITH CTE_Residence_Overtime
AS
(
SELECT *
FROM #Residence
)
UPDATE t1
SET t1.over_time = t1.over_time + CONVERT(TIME, CAST(CTE_Residence_Overtime.overtimeHours AS VARCHAR(2))
FROM r_overtime AS t1
INNER JOIN CTE_Residence_Overtime
ON t1.[trans_date] = CTE_Residence_Overtime.[dayDate];
```
|
I think you are missing one bracket
```
WITH CTE_Residence_Overtime
AS
(
SELECT *
FROM #Residence
)
UPDATE t1
SET t1.over_time = t1.over_time + CONVERT(TIME, CAST(CTE_Residence_Overtime.overtimeHours AS VARCHAR(2)))
FROM r_overtime AS t1
INNER JOIN CTE_Residence_Overtime
ON t1.[trans_date] = CTE_Residence_Overtime.[dayDate];
```
|
You are missing a closing paren on the `set`:
```
SET t1.over_time = t1.over_time +
CONVERT(TIME,
CAST(CTE_Residence_Overtime.overtimeHours AS VARCHAR(2)
)
)
------------------------------^
```
|
Update query using inner join
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"inner-join",
"common-table-expression",
""
] |
I know how to change the length of a column, but my SQL statement fails because the column I'm trying to change is a PK, so I get the following error:
> Msg 5074, Level 16, State 1, Line 1
> The object 'PK\_TableName' is dependent on column 'PersonID'.
PersonID = PK.
I've read [What is the sql to change the field length of a table column in sql server](https://stackoverflow.com/questions/4372176/what-is-the-sql-to-change-the-field-length-of-a-table-column-in-sql-server) which only applies to non-PK columns.
I tried this:
```
ALTER TABLE table_name
ALTER COLUMN column_name <new datatype>
```
|
See below sample example how to increase size of the primary column
1. Create a sample table
`create table abc (id varchar(10) primary key)`
2. Find primary constraint in key constraints tables
`select object_name(object_id),* from sys.key_constraints where object_name(parent_object_id) = 'abc`
3. Drop constraint
`ALTER TABLE abc
DROP CONSTRAINT PK__abc__3213E83F74EAC69B`
*(Replace PK\_\_abc\_\_3213E83F74EAC69B with constraint name you receive.)*
4. Add not null
`ALTER TABLE abc alter column id varchar(20) NOT NULL;`
5. Add primary key again
`ALTER TABLE abc
ADD CONSTRAINT MyPrimaryKey PRIMARY KEY (id)`
|
```
ALTER TABLE <Table_Name>
DROP CONSTRAINT <constraint_name>
ALTER TABLE table_name
ALTER COLUMN column_name datatype
ALTER TABLE <Table_Name>
ADD CONSTRAINT <constraint_name> PRIMARY KEY (<Column1>,<Column2>)
```
|
How to change the column length of a primary key in SQL Server?
|
[
"",
"sql",
"sql-server",
"primary-key",
""
] |
I have problem here.
I have 2 tables want to join with `group by.`
```
tbl_1
NO userID users status branchID
1 1 abc ok 2
2 2 da ok 1
3 3 fe ok 3
4 1 abc ok 1
5 3 fe ok 2
tbl_2
branchID branch
2 medan
1 jakarta
3 batam
```
I need to show the user and the branch based on `branchID`, I think this will use group by. How to do that?
I want the output is not duplicate users show, when do select.
```
output
user branch
abc medan,jakarta
fe medan,batam
```
Thanks
|
This isnt very elegant but will do the job based on your data and output
```
SELECT DISTINCT Users,
STUFF((
SELECT ', ' + branch
FROM(
SELECT DISTINCT Users,
branch
FROM #tbl_1 t1
INNER JOIN #tbl_2 t2 ON t1.branchID = t2.branchID ) A
WHERE A.users = T.users
FOR XML PATH( '' )),1,2,'' ) AS Branches
FROM #tbl_1 T;
```
|
```
select distinct tbl_1.userID, tbl_1.users, tbl_1.status, tbl_1.branchID, tbl_2.branch
from tbl_1
left join tbl_2
on tbl.branchID = tbl_2.branchID
```
this will only show you all the field that have different value.
|
JOIN 2 Table SQL Server with group by
|
[
"",
"sql",
"sql-server",
""
] |
this is not duplicate of [How to get time part from SQL Server 2005 datetime in 'HH:mm tt' format](https://stackoverflow.com/questions/3201432/how-to-get-time-part-from-sql-server-2005-datetime-in-hhmm-tt-format)
be cause all answers of that question returns 12:06PM (without space and need space)
I am trying to get only time part from SQL `GETDATE()`
and I am trying
```
SELECT LTRIM(RIGHT(CONVERT(VARCHAR(20), GETDATE(), 100), 7))
```
but it is returning
```
12:06PM
```
I need 12:06 PM (Space before AM or PM)..
search a lot but failed...
|
You can use:
```
select CONVERT(VARCHAR(5), GETDATE(), 108) + ' ' +
RIGHT(CONVERT(VARCHAR(30), GETDATE(), 9),2)
```
SQL fiddle: <http://sqlfiddle.com/#!6/a7540/2377>
|
For SQL 2012 above:
```
SELECT FORMAT(GETDATE(), 'h:mm tt', 'en-US')
```
|
How to get time part from GetDate() As hh:mm tt in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.