| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm using
Microsoft SQL Server 2012 (SP1) - 11.0.3153.0 (X64) Express Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
and I need to copy the database architecture (tables and their constraints) to a new database so I can extract the database diagram. The current database won't allow me to extract a diagram, but any new database I create will.
I tried creating one with the old one's .mdf, but it threw an error.
I tried to do an export, which copied all of the tables, but none of the constraints!
|
In SQL Server Management Studio, right click on your database, select Tasks, Generate Scripts. Select the options for generating tables, indexes and constraints. Use the generated script(s) to recreate a new database where you can create the diagrams.
|
1. In Management Studio, right click on the database
2. Tasks -> Generate Scripts -> Script entire database and all database objects
3. Generate the script to a file/clipboard/new query window
4. Create the new database schema from this script
|
How to copy architecture of an existing database to a new database in SQL Server
|
[
"",
"sql",
"sql-server",
"database",
"database-diagram",
""
] |
Is there an elegant way in SAS to calculate a "count distinct" at multiple levels without using a series of sql statements?
Suppose we are counting unique customers. Joe has bought 2 large widgets and 1 small widget, and it is simple enough to count him as 1 customer for large widgets and 1 customer for small widgets. But we also need to count him as only 1 "widget customer", so it requires a separate SQL calculation. And it turns out Joe has also bought a mouse pad, so when we count unique customers for the Miscellaneous Products group we need a third SQL calculation. Joe bought all this stuff from store #1, but when we count the unique customers of miscellaneous products for the Small Store Number Region we also have to account for Joe's purchase of a coffee cup from store #2. Another SQL calculation.
There are many ways in SAS to count things, but it seems there is not a simple way to count "distinct" things. Any suggestions?
|
You've got a few different problems here, and it depends on the structure of the data. Happily you didn't provide that, so I get to decide!
What we're going to assume is that you have a dataset like this. You could also have the two LargeWidget purchases in two rows (and similarly for other products); that wouldn't have any effect on this.
```
data customers;
length product $20;
input name $ product $ store $ count;
datalines;
Joe SmallWidget Store1 1
Joe LargeWidget Store1 2
Joe Mousepad Store1 1
Joe TeaPitcher Store2 1
Jack SmallWidget Store2 1
Jack Mousepad Store1 1
Jane LargeWidget Store2 1
Jane Mousepad Store1 1
Jill LargeWidget Store3 1
;;;;
run;
```
Then we need to do one more setup step: determine how you intend to group things. Multilabel formats let one value fall into multiple buckets (i.e., both Large Widgets and all Widgets). This is one example; you can do many different things here. You can also drive this from a dataset (look up the `CNTLIN` option on PROC FORMAT, or ask another question).
```
proc format;
value $prodgroup (multilabel default=20)
SmallWidget = "Small Widgets"
SmallWidget = "Widgets"
LargeWidget = "Large Widgets"
LargeWidget = "Widgets"
Mousepad = "Mousepads"
Mousepad = "Misc Products"
TeaPitcher = "Tea Pitchers"
TeaPitcher = "Misc Products"
;
value $storeRegions (multilabel)
Store1="Store1"
Store1="West"
Store2="Store2"
Store2="East"
Store3="Store3"
Store3="East"
;
quit;
```
Then we face the big problem: SAS sucks at calculating 'distinct' things. It just wasn't really made to do that. I don't know why; tabulate really should, but it doesn't. However, it does a really good job of putting things in buckets, and that's really the hard part here; the actual distinct counting bit can be done in a second pass (which, regardless of your initial data size, will be really fast, assuming you have many fewer kinds of products than you have data points).
```
proc tabulate data=customers out=custdata;
class product store /mlf preloadfmt order=data;
class name;
format product $prodgroup20.;
format store $storeRegions6.;
tables (all store product store*product),name*n;
run;
proc tabulate data=custdata;
class product store/missing;
tables (product*store), n;
run;
```
The `MLF PRELOADFMT ORDER=DATA` makes the multilabel bit work properly and makes it come out in a useful order (in my opinion). You can add `NOTSORTED` to the value statement in `PROC FORMAT` to get even more control, though the second `PROC TABULATE` will just mess things up in terms of order so I wouldn't bother.
In this case what we do is first produce a table that is by customer, then that table gets output to a dataset; that dataset now is guaranteed to contain one line per customer per [whatever grouping], as long as the groupings are mirrored in both procs (minus the NAME class of course!).
The one deficiency of this process is that it doesn't produce lines for those products with 0; to do that you would need to drop the second tabulate and instead do it in SQL or a data step. For example:
```
proc tabulate data=customers out=custdata2;
class product store /mlf preloadfmt order=data;
class name;
format product $prodgroup20.;
format store $storeRegions6.;
tables (all store product store*product),name*n/printmiss;
run;
proc sql;
select product, store, sum(case when n>0 then 1 else 0 end) as count
from custdata2
group by product,store;
quit;
```
Note the one change to the TABLE line: `printmiss`, which instructs Tabulate to make one line for every possible combination no matter what. Then `SQL` does the rest of the work. Of course this SQL could also replicate the second Tabulate statement earlier if you prefer SQL.
|
If I'm interpreting it right, it sounds like proc means with a class statement may give you what you want.
Add store (1 vs 2), product (large widget, small widget, coffee, & mouse pad), product rollup (widget vs non-widget) as classes and look at the frequency of occurrence.
If Joe's coffee shows up multiple times for the same purchase, you may need to pre-process the data with a single SQL select distinct statement.
I am reaching right now though as I'm guessing at the structure of the data. If you add a small example of the data, I'm sure we can get to a correct answer.
|
SAS: How to count distinct at multiple levels
|
[
"",
"sql",
"sas",
""
] |
I have a SQL table called transaction where different types of transactions are stored, e.g. payment arrangements, sent letters and so on.
I have run the query:
```
SELECT TOP 6 Case_Ref AS [Case Ref], TrancRefNO AS [Tranc RefNO], Date_CCYYMMDD, LetterSent, Arr_Freq,
       SMS_Sent_CCYYMMDD
FROM [Transaction]
WHERE (LEN(LetterSent) > 0 OR Arr_Freq > 0)
```
The table looks something like this
```
Case Ref Tranc RefNO Date_CCYYMMDD LetterSent Arr_Freq SMS_Sent_CCYYMMDD
-------- ----------- ---------- ---------- ---------- -----------------
15001 100 20140425 Stage1
15001 101 20140430 Stage2
15001 102 20140510 30
15001 104 20140610 30
15002 105 20140425 Stage1
15002 106 20140610 30
```
From the table, I can clearly see that a letter was sent on '20140430' for case 15001 and the person started arrangements on '20140510'. And a letter was sent on '20140425' for case 15002 and the person made arrangements on '20140610'.
I'm trying to create an Excel report using C# which will show the total number of cases that got arrangements after getting a letter, and the total number of cases with arrangements after receiving an SMS.
I have tried
```
select MAX(ROW_NUMBER() OVER(ORDER BY o3.Date_CCYYMMDD ASC)), o3.
from
(
select o.TrancRefNO, o.Date_CCYYMMDD , sq.LetterSent
from Transaction o
join Transaction sq on sq.TrancRefNO= o.TrancRefNO
and sq.Date_CCYYMMDD <= o.Date_CCYYMMDD
where o.Arr_Freq >0
and len(sq.LetterSent ) > 0
) o2
join Transaction o3 on o3.TrancRefNO= o2.TrancRefNO
```
But gives me an error :
```
Msg 4109, Level 15, State 1, Line 2
Windowed functions cannot be used in the context of another windowed function or aggregate.
```
P.S. The title will need to be changed as I don't know what to call it.
|
```
SELECT * FROM table as t1
WHERE (LetterSent != '' OR SMS_SENT_CCYYMMDD != '')
AND (SELECT COUNT(*) FROM table AS t2
WHERE t1.case_ref = t2.case_ref
AND t1.DATE_CCYYMMDD < t2.DATE_CCYYMMDD
AND Arr_freq > 0) > 1
```
My assumptions based on what I could glean from your post:
1. ARR\_FREQ != '' indicates that some type of arrangement was made on the specified date
2. Since NULL is not shown, I'm assuming all values are ''. With null values you will have to use a coalesce command
Hope this helps. I'm not sure about your second question (max date) in the comments. You would need to explain it a bit more.
|
```
SELECT * FROM table where Date =
(SELECT MIN(Date) from table)
```
|
SQL query to select min date
|
[
"",
"sql",
"sql-server",
""
] |
I am using SQL Server 2012, and have the following query. Let's call this query A.
```
SELECT a.col, a.fk
FROM Table1 a
INNER JOIN (
select b.col
from Table1 b
group by b.col
having count(*) > 1)
b on b.col = a.col
```
I want to delete only the rows returned from query A, specifically rows that match the returned col AND fk
I am thinking of doing the following, but it will only delete rows that match on the col.
```
delete from Table1
where col in (
SELECT a.col
FROM Table1 a
INNER JOIN (
select b.col
from Table1 b
group by b.col
having count(*) > 1)
b on b.col = a.col)
```
|
Use `delete from Join` syntax
```
delete t1
from table1 t1
INNER JOIN (SELECT a.col, a.fk
FROM Table1 a
INNER JOIN (
select b.col
from Table1 b
group by b.col
having count(*) > 1)
b on b.col = a.col) t2
ON t1.col=t2.col and t1.fk=t2.fk
```
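Outside of SQL Server, the delete-matching-on-both-columns idea is easy to sanity-check. Below is a minimal sketch using Python's `sqlite3` with invented sample data; since SQLite lacks the `DELETE ... JOIN` syntax, it uses a row-value `IN` subquery instead (requires SQLite 3.15+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (col INTEGER, fk INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(1, 10), (1, 11), (2, 20), (3, 30), (3, 31)])

# Find rows whose col value occurs more than once, match on BOTH col and fk,
# and delete them via a row-value IN subquery
conn.execute("""
    DELETE FROM Table1
    WHERE (col, fk) IN (
        SELECT a.col, a.fk
        FROM Table1 a
        JOIN (SELECT col FROM Table1 GROUP BY col HAVING COUNT(*) > 1) b
          ON b.col = a.col
    )
""")
remaining = conn.execute("SELECT col, fk FROM Table1").fetchall()
print(remaining)  # [(2, 20)]
```

Only the row whose `col` value was not duplicated survives the delete.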
|
You can combine the **col** and **fk** fields into another unique field to retrieve the wanted rows
```
delete from Table1
where cast(col as varchar(50))+'//'+cast(fk as varchar(50)) in (
SELECT cast(a.col as varchar(50))+'//'+cast(a.fk as varchar(50))
FROM Table1 a
INNER JOIN (
select b.col
from Table1 b
group by b.col
having count(*) > 1)
b on b.col = a.col)
```
|
SQL Server. Delete from Select
|
[
"",
"sql",
"sql-server",
""
] |
I'm migrating an old web app based on SQL Server and ASP to Symfony2 and MySQL. I wrote some queries and exported the old data to individual SQL files.
How can I execute those files in my fixtures when I run the command
```
$php app/console doctrine:fixtures:load
```
Now I have some fixtures that work directly with Doctrine ORM and entities, but I have a lot of data to import.
|
I found a good solution. I didn't find an `exec` method in the `ObjectManager` class, so... this works very well for me.
```
public function load(ObjectManager $manager)
{
// Bundle to manage file and directories
$finder = new Finder();
$finder->in('web/sql');
$finder->name('categories.sql');
foreach( $finder as $file ){
$content = $file->getContents();
$stmt = $this->container->get('doctrine.orm.entity_manager')->getConnection()->prepare($content);
$stmt->execute();
}
}
```
In this solution your fixture class has to implement the `ContainerAwareInterface` with the method
```
public function setContainer( ContainerInterface $container = null )
{
$this->container = $container;
}
```
|
You can load the file contents as a string, and execute native SQL using the EntityManager:
```
class SQLFixtures extends AbstractFixture implements OrderedFixtureInterface
{
private $filename = '/path/to/sql/file.sql';
public function load(ObjectManager $manager) {
$sql = file_get_contents($this->filename); // Read file contents
$manager->getConnection()->exec($sql); // Execute native SQL
$manager->flush();
}
public function getOrder() {
return 99; // Order in which this fixture will be executed
}
}
```
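The same pattern outside PHP is just "read the file, hand the whole string to the driver". A rough sketch in Python with `sqlite3` (the file name and its contents are invented stand-ins for a dump like `categories.sql`):

```python
import os
import sqlite3
import tempfile

# Stand-in for a dumped SQL file (contents invented for illustration)
fd, path = tempfile.mkstemp(suffix=".sql")
with os.fdopen(fd, "w") as f:
    f.write("CREATE TABLE categories (id INTEGER, name TEXT);\n"
            "INSERT INTO categories VALUES (1, 'news');\n")

conn = sqlite3.connect(":memory:")
with open(path) as f:
    conn.executescript(f.read())  # execute the whole file as native SQL
(n,) = conn.execute("SELECT COUNT(*) FROM categories").fetchone()
print(n)  # 1
os.remove(path)
```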
|
Symfony2 execute SQL file in Doctrine Fixtures Load
|
[
"",
"mysql",
"sql",
"symfony",
"doctrine-orm",
"fixtures",
""
] |
I have tables '**table1**', '**table2**', '**table3**' and '**table4**'. 'table1' has a column 'account', on the basis of which I need to get records from 'table2', 'table3' or 'table4' with respect to another column 'user\_id'.
I am using query like
```
SELECT *
FROM table1
LEFT OUTER JOIN table2 ON table1.user_id = table2.user_id
LEFT OUTER JOIN table3 ON table1.user_id = table3.user_id
LEFT OUTER JOIN table4 ON table1.user_id = table4.user_id
```
Using this I am getting lots of NULL values, because each record of table1 will have a matching record in only one of 'table2', 'table3' or 'table4'.
Present output is like -
```
t1.user_id | t1.post_id | t2.token | t3.token | t4.token
1 | 1 | xyz | NULL | NULL
1 | 1 | NULL | abc | NULL
1 | 1 | NULL | NULL | etc
```
needed is like
```
t1.user_id | t1.post_id | token
1 | 1 | xyz
1 | 1 | abc
1 | 1 | etc
```
t2,t3 and t4 can have more than 1 column to select.
|
It would be much simpler to use the `COALESCE()` function:
```
SELECT
t1.user_id,
t1.post_id,
COALESCE(t2.Token, t3.Token, t4.Token) AS Token
FROM table1 t1
LEFT OUTER JOIN table2 t2 ON t1.user_id = t2.user_id
LEFT OUTER JOIN table3 t3 ON t1.user_id = t3.user_id
LEFT OUTER JOIN table4 t4 ON t1.user_id = t4.user_id
```
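The COALESCE-over-left-joins idea can be checked quickly with an in-memory database. A sketch using Python's `sqlite3`, with invented sample data mirroring the question (each user's token lives in exactly one of the three side tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (user_id INTEGER, post_id INTEGER);
CREATE TABLE table2 (user_id INTEGER, token TEXT);
CREATE TABLE table3 (user_id INTEGER, token TEXT);
CREATE TABLE table4 (user_id INTEGER, token TEXT);
INSERT INTO table1 VALUES (1, 1), (2, 1);
INSERT INTO table2 VALUES (1, 'xyz');  -- user 1's record lives in table2
INSERT INTO table4 VALUES (2, 'etc');  -- user 2's record lives in table4
""")
rows = conn.execute("""
    SELECT t1.user_id, t1.post_id,
           COALESCE(t2.token, t3.token, t4.token) AS token
    FROM table1 t1
    LEFT JOIN table2 t2 ON t2.user_id = t1.user_id
    LEFT JOIN table3 t3 ON t3.user_id = t1.user_id
    LEFT JOIN table4 t4 ON t4.user_id = t1.user_id
""").fetchall()
print(rows)  # [(1, 1, 'xyz'), (2, 1, 'etc')]
```

Each user comes back as a single row with the one non-NULL token, instead of three rows full of NULLs.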
|
For this you can use the **COALESCE()** function, which returns the first non-null value in a list.
For example:
```
select COALESCE(NULL, NULL, NULL, 'abc', NULL, 'xyz');
```
the result of the above query is **abc**.
Updated query :
```
SELECT
t1.user_id,
t1.post_id,
COALESCE(t2.Token, t3.Token, t4.Token) AS token
FROM table1 t1
LEFT OUTER JOIN table2 t2 ON t1.user_id = t2.user_id
LEFT OUTER JOIN table3 t3 ON t1.user_id = t3.user_id
LEFT OUTER JOIN table4 t4 ON t1.user_id = t4.user_id
```
|
How to get only specific record from multiple tables?
|
[
"",
"mysql",
"sql",
"database",
"query-optimization",
""
] |
I have a question about writing SQL constraints.
I have two tables:
Person
```
personid (primarykey, int)
name (varchar)
```
Car
```
personid (foreignkey)
brand (varchar)
```
How can I write a constraint that says 'person can only have 1 car'?
|
There are a few ways you could do it:
Create a unique constraint on Car.PersonId
```
ALTER TABLE Car
ADD UNIQUE (personid)
```
or
Make Car.PersonId the primary key on the table (innately unique)
```
ALTER TABLE Car
ADD PRIMARY KEY (personid)
```
Note that syntax may differ slightly for your RDBMS (you did not specify), but the above is MS SQL Server.
See: <http://www.w3schools.com/sql/sql_unique.asp> for different syntaxes on setting up unique constraints across multiple RDBMs
See: <http://www.w3schools.com/sql/sql_primarykey.asp> for the same but on primary keys.
Such a relationship is referred to as a [1 to 1 relationship](http://www.databaseprimer.com/pages/relationship_1to1/)
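A quick sketch of the unique-constraint approach, using Python's `sqlite3` to show the second car being rejected (table names from the question, data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (personid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Car (
    personid INTEGER UNIQUE REFERENCES Person(personid),
    brand TEXT
);
INSERT INTO Person VALUES (1, 'Alice');
INSERT INTO Car VALUES (1, 'Toyota');
""")
try:
    conn.execute("INSERT INTO Car VALUES (1, 'Honda')")  # a second car
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # UNIQUE constraint failed: Car.personid
print(rejected)  # True
```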
|
Check for [this](http://www.w3schools.com/sql/sql_unique.asp) and use it on `car.personid`.
|
Sql constraints for "person can only have 1"
|
[
"",
"sql",
""
] |
I am using HSQLDB 2.3.2 and am getting a bizarre error when trying to create a stored procedure.
My `addresses` table:
```
CREATE TABLE IF NOT EXISTS addresses (
address_id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1) NOT NULL PRIMARY KEY,
address_line_1 NVARCHAR(500) NOT NULL,
address_line_2 NVARCHAR(500),
address_city NVARCHAR(100) NOT NULL,
address_postal_code NVARCHAR(25) NOT NULL,
CONSTRAINT uc_addresses UNIQUE (address_line_1, address_city)
)
```
An insert to populate it:
```
INSERT INTO addresses (
address_line_1,
address_city,
address_postal_code
) VALUES (
'123 Test Blvd.', 'Testville', '11111'
)
```
And my proc:
```
CREATE PROCEDURE sp_get_address_by_id(
IN address_id INTEGER,
OUT address_id INTEGER,
OUT address_line_1 NVARCHAR(500),
OUT address_line_2 NVARCHAR(500),
OUT address_city NVARCHAR(100),
OUT address_postal_code NVARCHAR(25))
READS SQL DATA
BEGIN ATOMIC
SELECT
a.address_id,
a.address_line_1,
a.address_line_2,
a.address_city,
a.address_postal_code
INTO
address_id,
address_line_1,
address_line_2,
address_city,
address_postal_code
FROM
addresses a
WHERE
a.address_id = address_id;
END
```
When I run this I get:
```
Error: attempt to assign to non-updatable column
SQLState: 0U000
ErrorCode: -2500
```
Questions:
1. What is wrong with my proc (what is producing this error)?
2. I'm actually looking for a `CREATE IF NOT EXISTS`-type declaration, so I can run this script over and over again and the procedure will only get created one time if it doesn't already exist. Will this happen or do I need to change the syntax to accomplish `IF NOT EXISTS`?
|
Try the syntax below, according to the hsqldb documentation
<http://hsqldb.org/doc/2.0/guide/sqlroutines-chapt.html#src_psm_assignment>
> The SET statement is used for assignment. It can be used flexibly with
> rows or single values.
Also change the `address_id` parameter to type `INOUT` and remove the duplicate `address_id` parameter lines.
```
CREATE PROCEDURE sp_get_address_by_id(
INOUT address_id INTEGER,
OUT address_line_1 NVARCHAR(500),
OUT address_line_2 NVARCHAR(500),
OUT address_city NVARCHAR(100),
OUT address_postal_code NVARCHAR(25))
READS SQL DATA
BEGIN ATOMIC
SET (address_id,
address_line_1,
address_line_2,
address_city,
address_postal_code)
=
(
SELECT
a.address_id,
a.address_line_1,
a.address_line_2,
a.address_city,
a.address_postal_code
FROM
addresses a
WHERE
a.address_id = address_id
);
END
```
You can try adding this as the first statement in your script if you want to drop the procedure if it already exists, so that you can re-run the script many times. You can search the [documentation](http://hsqldb.org/doc/guide/databaseobjects-chapt.html) for `<specific routine designator>` for more info.
```
DROP SPECIFIC PROCEDURE sp_get_address_by_id IF EXISTS;
```
|
In your procedure you have two parameters with the same name:
```
IN address_id INTEGER,
OUT address_id INTEGER,
```
It may cause a problem when you refer to `address_id` in the body of procedure.
You should rather use:
```
INOUT address_id INTEGER,
```
instead of these two lines.
Answering your second question:
Why do you want to run this script over and over again without rebuilding the procedure? Running the script again only makes sense when something in it has changed.
* If you don't plan to change this procedure but you want to change other things in your script, then maybe you should simply move this procedure to another script and run it only once
* If you plan to change the body of the procedure, then it should be rebuilt every time, so you should use `CREATE OR REPLACE PROCEDURE`
|
HSQLDB Stored Procedure Error: attempt to assign to non-updatable column
|
[
"",
"sql",
"stored-procedures",
"syntax-error",
"hsqldb",
""
] |
I'm trying to count the IDs in the table whose last status is not 'Called'.
```
Table1
+----------+------+----------+
| q_log_id | q_id | q_status |
+----------+------+----------+
|        1 |    1 | Waiting  |
|        2 |    1 | Waiting  |
|        3 |    1 | Called   |
|        4 |    2 | Waiting  |
|        4 |    2 | Waiting  |
|        5 |    3 | Waiting  |
|        5 |    3 | Waiting  |
+----------+------+----------+
```
So the count should return 2. I'm not good with queries, so I need some help. I tried COUNT with DISTINCT but it still doesn't work for me.
|
```
SELECT COUNT(DISTINCT q_id) AS Count
FROM table1
WHERE q_id IN (
SELECT q_id
FROM table1
GROUP BY q_id
HAVING MIN(q_status)<> 'Called'
)
```
|
You can do it with the window analytic function `row_number`: order by `q_log_id` descending and partition the rows on the `q_id` column,
then take the last row per id and check whether its status is 'Called'.
**[SQL Fiddle](http://www.sqlfiddle.com/#!3/47f22/6)**
```
with cte
as
(
select * , row_number() over ( partition by q_id order by q_log_id desc) as rn
from table1
)
select count(q_id)
from cte
where rn =1
and q_status !='Called'
```
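The `row_number` approach can be verified with SQLite's window functions (SQLite 3.25+, bundled with recent Python builds). A sketch using the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (q_log_id INTEGER, q_id INTEGER, q_status TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?)", [
    (1, 1, 'Waiting'), (2, 1, 'Waiting'), (3, 1, 'Called'),
    (4, 2, 'Waiting'), (4, 2, 'Waiting'),
    (5, 3, 'Waiting'), (5, 3, 'Waiting'),
])
# Number each q_id's rows newest-first, keep the newest, filter out 'Called'
(count,) = conn.execute("""
    WITH cte AS (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY q_id
                                     ORDER BY q_log_id DESC) AS rn
        FROM table1
    )
    SELECT COUNT(q_id) FROM cte WHERE rn = 1 AND q_status != 'Called'
""").fetchone()
print(count)  # 2
```

Only q_id 1 ends on 'Called', so ids 2 and 3 are counted.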
|
Counting Distinct ID
|
[
"",
"sql",
"sql-server",
"count",
""
] |
**Edit:** The original example I used had an int for the primary key when in fact my primary key is a varchar containing a UUID as a string. I've updated the question below to reflect this.
**Caveat**: Solution must work on postgres.
**Issue**: I can easily paginate data when starting from a known page number or index into the list of results to paginate, but how can this be done if all I know is the primary key of the row to start from? For example, say my table has this data:
```
TABLE: article
======================================
id categories content
--------------------------------------
B7F79F47 local a
6cb80450 local b
563313df local c
9205AE5A local d
E88F7520 national e
5ab669a5 local f
fb047cf6 local g
591c6b50 national h
======================================
```
Given an article primary key of '9205AE5A' (article.id == '9205AE5A'), where the categories column must contain 'local', what SQL can I use to return a result set that includes the articles either side of this one as if it were paginated? I.e. the returned result should contain 3 items (previous, current, next articles):
```
('563313df','local','c'),('9205AE5A','local','d'),('5ab669a5','local','f')
```
Here is my example setup:
```
-- setup test table and some dummy data
create table article (
id varchar(36),
categories varchar(256),
content varchar(256)
)
insert into article values
('B7F79F47', 'local', 'a'),
('6cb80450', 'local', 'b'),
('563313df', 'local', 'c'),
('9205AE5A', 'local', 'd'),
('E88F7520', 'national', 'e'),
('5ab669a5', 'local', 'f'),
('fb047cf6', 'local', 'g'),
('591c6b50', 'national', 'h');
```
I want to paginate the rows in the article table, but the starting point I have is the 'id' of an article. In order to provide "Previous Article" and "Next Article" links on the rendered page, I also need the articles that would come either side of this article whose id I know.
On the server side I could run my pagination SQL and iterate through each result set to find the index of the given item. See the following inefficient pseudocode/SQL to do this:
```
page = 0;
resultsPerPage = 10;
articleIndex = 0;
do {
resultSet = select * from article where categories like '%local%' limit resultsPerPage offset (page * resultsPerPage) order by content;
for (result in resultSet) {
if (result.id == '9205AE5A') {
// we have found the articles index ('articleIndex') in the paginated list.
// Now we can do a normal pagination to return the list of 3 items starting at the article prior to the one found
return select * from article where categories like '%local%' limit 3 offset (articleIndex - 1);
}
articleIndex++;
}
page++;
} while (resultSet.length > 0);
```
This is horrendously slow if the given article is way down the paginated list. How can this be done without the ugly while+for loops?
**Edit 2**: I can get the result using two sql calls
```
SELECT 'CurrentArticle' AS type,* FROM
(
SELECT (ROW_NUMBER() OVER (ORDER BY content ASC)) AS RowNum,*
FROM article
WHERE categories LIKE '%local%'
ORDER BY content ASC
) AS tagCloudArticles
WHERE id='9205AE5A'
ORDER BY content ASC
LIMIT 1 OFFSET 0
```
From that result returned e.g.
```
('CurrentArticle', 4, '9205AE5A', 'local', 'd')
```
I can get the RowNum value (4) and then run the sql again to get RowNum+1 (5) and RowNum-1 (3)
```
SELECT 'PrevNextArticle' AS type,* FROM
(
SELECT (ROW_NUMBER() OVER (ORDER BY content ASC)) AS RowNum,*
FROM article
WHERE categories LIKE '%local%'
ORDER BY content ASC
) AS tagCloudArticles
WHERE RowNum in (3, 5)
ORDER BY content ASC
LIMIT 2 OFFSET 0
```
with result
```
('PrevNextArticle', 3, '563313df', 'local', 'c'),
('PrevNextArticle', 5, '5ab669a5', 'local', 'f')
```
It would be nice to do this in one efficient sql call though.
|
If the only information about the surrounding articles shown in the page is "Next" and "Previous" there is no need to get their rows in advance. When the user chooses "Previous" or "Next" use these queries [SQL Fiddle](http://sqlfiddle.com/#!15/a204b/1)
```
-- Previous
select *
from article
where categories = 'local' and id < 3
order by id desc
limit 1
;
-- Next
select *
from article
where categories = 'local' and id > 3
order by id
limit 1
;
```
---
If it is necessary to get information about the previous and next articles: [SQL Fiddle](http://sqlfiddle.com/#!15/903d1/1)
```
with ordered as (
select
id, content,
row_number() over(order by content) as rn
from article
where categories = 'local'
), rn as (
select rn
from ordered
where id = '9205AE5A'
)
select
o.id,
o.content,
o.rn - rn.rn as rn
from ordered o cross join rn
where o.rn between rn.rn -1 and rn.rn + 1
order by o.rn
```
The articles will have `rn` -1, 0, 1, if they exist.
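The CTE approach can be tried end-to-end with SQLite (window functions need SQLite 3.25+), using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (id TEXT, categories TEXT, content TEXT)")
conn.executemany("INSERT INTO article VALUES (?, ?, ?)", [
    ('B7F79F47', 'local', 'a'), ('6cb80450', 'local', 'b'),
    ('563313df', 'local', 'c'), ('9205AE5A', 'local', 'd'),
    ('E88F7520', 'national', 'e'), ('5ab669a5', 'local', 'f'),
    ('fb047cf6', 'local', 'g'), ('591c6b50', 'national', 'h'),
])
# Number the filtered rows by content order, find the target's row number,
# then keep row numbers target-1 .. target+1
rows = conn.execute("""
    WITH ordered AS (
        SELECT id, content,
               ROW_NUMBER() OVER (ORDER BY content) AS rn
        FROM article
        WHERE categories = 'local'
    ), target AS (
        SELECT rn FROM ordered WHERE id = '9205AE5A'
    )
    SELECT o.id, o.content
    FROM ordered o CROSS JOIN target t
    WHERE o.rn BETWEEN t.rn - 1 AND t.rn + 1
    ORDER BY o.rn
""").fetchall()
print(rows)  # [('563313df', 'c'), ('9205AE5A', 'd'), ('5ab669a5', 'f')]
```

One query returns the previous, current, and next articles, exactly as the question asked.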
|
I think this query will yield you the result
```
(SELECT *, 2 AS ordering from article where categories like '%local%' AND id = 3 LIMIT 1)
UNION
(SELECT *, 1 AS ordering from article where categories like '%local%' AND id < 3 ORDER BY id DESC LIMIT 1 )
UNION
(SELECT *, 3 AS ordering from article where categories like '%local%' AND id > 3 ORDER BY id ASC LIMIT 1 )
```
|
SQL to Paginate Data Where Pagination Starts at a Given Primary Key
|
[
"",
"sql",
"postgresql",
"pagination",
""
] |
This is my table structure.
Sometimes table1 data may repeat (e.g., id 1 should have only 4 rows but sometimes it has 8 due to duplication), so to avoid duplication I use the **GROUP BY** command in the select query.
**Table 1**
```
|id| website|time|
-----------------
|01|facebook|20.0|
|01|google |40.0|
|01|youtube |10.0|
|01|ebay |30.0|
|02|facebook|50.0|
|02|ebay |50.0|
```
**Table 2**
```
|id|marks|
-----------
|01| 80|
|02| 90|
|03| 70|
|04| 100|
```
I want to select the **(marks)**, **(time on facebook)** and **(count of time on google & youtube)** of a specific user.
The following select query gives the **(marks)** and **(time on facebook)** of user id '01'.
**How can I get the count of time for both google and youtube for id '01' in the same query?**
```
SELECT table2.id,table2.marks, table1.time
FROM table1
RIGHT JOIN table2 ON table1.id= table2.id
WHERE table1.website LIKE ('%facebook%')
AND table1.id= '01'
GROUP BY table1.id, table1.website
```
|
You want to find the time on `facebook` and then the sum for `youtube` and `google` for a particular user; you can use a MySQL `conditional sum` to achieve it:
```
select
sum(case when t1.website = 'facebook' then t1.time else 0 end) as `fb_time`,
(
sum(case when t1.website='google' then t1.time else 0 end)+
sum(case when t1.website='youtube' then t1.time else 0 end)
)
as `google_youtube`,
t2.marks
from table1 t1
join table2 t2 on t1.id = t2.id
where t1.id = '01'
```
If you need to calculate the same for all the users then you can do it as
```
select
t1.id,
sum(case when t1.website = 'facebook' then t1.time else 0 end) as `fb_time`,
(
sum(case when t1.website='google' then t1.time else 0 end)+
sum(case when t1.website='youtube' then t1.time else 0 end)
)
as `google_youtube`,
t2.marks
from table1 t1
join table2 t2 on t1.id = t2.id
group by t1.id
```
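The conditional-sum pattern is plain SQL, so it can be checked against the question's data with Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id TEXT, website TEXT, time REAL);
CREATE TABLE table2 (id TEXT, marks INTEGER);
INSERT INTO table1 VALUES
  ('01', 'facebook', 20.0), ('01', 'google', 40.0),
  ('01', 'youtube', 10.0), ('01', 'ebay', 30.0),
  ('02', 'facebook', 50.0), ('02', 'ebay', 50.0);
INSERT INTO table2 VALUES ('01', 80), ('02', 90);
""")
# CASE inside SUM only accumulates the rows that match each condition
row = conn.execute("""
    SELECT
      SUM(CASE WHEN t1.website = 'facebook' THEN t1.time ELSE 0 END) AS fb_time,
      SUM(CASE WHEN t1.website IN ('google', 'youtube')
               THEN t1.time ELSE 0 END) AS google_youtube,
      t2.marks
    FROM table1 t1
    JOIN table2 t2 ON t1.id = t2.id
    WHERE t1.id = '01'
""").fetchone()
print(row)  # (20.0, 50.0, 80)
```

For user '01' this yields 20.0 on facebook, 50.0 combined on google + youtube, and marks of 80.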
|
If I understand your query correctly, I think you will need to use a subquery.
The following subquery returns two counts; time\_on\_facebook & time\_on\_google\_and\_youtube
for all users
```
SELECT t1.id, t2.marks,
COUNT(t1.time) as time_on_facebook,
(SELECT COUNT(t1_sq.time)
FROM `table1` as t1_sq
WHERE (t1_sq.website = "youtube" OR t1_sq.website = "google")
AND t1_sq.id = t1.id
GROUP BY t1.id) as time_on_google_and_youtube
FROM `table1` as t1
LEFT JOIN table2 t2 ON t2.id = t1.id
WHERE t1.website = "facebook"
GROUP BY t1.id
```
To restrict it to user id = 01, add in a WHERE clause
```
SELECT t1.id, t2.marks,
COUNT(t1.time) as time_on_facebook,
(SELECT COUNT(t1_sq.time)
FROM `table1` as t1_sq
WHERE (t1_sq.website = "youtube" OR t1_sq.website = "google")
AND t1_sq.id = t1.id
GROUP BY t1.id) as time_on_google_and_youtube
FROM `table1` as t1
LEFT JOIN table2 t2 ON t2.id = t1.id
WHERE t1.website = "facebook" AND t1.id = 1
GROUP BY t1.id
```
Are you sure you want COUNT(time) or do you want SUM(time)?
Lastly, consider adding a primary key to both tables and maybe rename the "id" column to "user\_id" for clarity.
|
How to count and get result of two rows in Mysql Select Query?
|
[
"",
"mysql",
"sql",
"join",
"count",
"select-query",
""
] |
Say I have these 2 tables:
### posts
```
id | user_id | title | slug | content | views
----+---------+-------+------+---------+-------
```
### users
```
id | username | email | hash | role | auth
----+----------+-------+------+------+------
```
With these two tables I want to do a `SELECT * FROM posts ...` query and ultimately end up responding with something like this:
```
[
  {
    "id": "1",
    "user_id": "2",
    "title": "Foo",
    "slug": "foo",
    "content": "bar",
    "views": "0",
    "user": {
      "id": "2",
      "username": "john",
      "email": "john@smith.com",
      "hash": "aeigh*£HAEGhoiaehg",
      "role": "admin",
      "auth": "aeg89hiae9g8hA*£GA(*EHG"
    }
  }
]
```
I'm pretty new to SQL, and patterns like this seem pretty common when using an ORM, so I'm wondering what steps I have to do to get a result like this using raw SQL.
Any idea?
|
In postgresql 9.2 or greater you can use the `row_to_json` function:
```
select row_to_json(posts)
from
posts....;
```
[Read more](http://hashrocket.com/blog/posts/faster-json-generation-with-postgresql)
Your query might look something like:
```
select row_to_json(row)
from (
select p.*, urd AS user
from posts p
inner join (
select ur.*
from users ur
) urd(id,username,email,hash,role,auth) on urd.id = p.user_id
) row;
```
The straight SQL for this wouldn't need to be so complicated:
```
select
p.*, u.*
from
posts p
inner join users u on p.user_id = u.id
```
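`row_to_json` is PostgreSQL-specific; with a database that lacks it, the same nested shape can be assembled in application code after a plain query. A rough sketch in Python with `sqlite3` (column set trimmed for brevity; names follow the question's tables):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows behave like dicts
conn.executescript("""
CREATE TABLE users (id INTEGER, username TEXT, email TEXT);
CREATE TABLE posts (id INTEGER, user_id INTEGER, title TEXT);
INSERT INTO users VALUES (2, 'john', 'john@smith.com');
INSERT INTO posts VALUES (1, 2, 'Foo');
""")
result = []
for post in conn.execute("SELECT * FROM posts"):
    user = conn.execute("SELECT * FROM users WHERE id = ?",
                        (post["user_id"],)).fetchone()
    obj = dict(post)
    obj["user"] = dict(user)  # nest the whole users row as a sub-object
    result.append(obj)
print(json.dumps(result, indent=2))
```

This is essentially what most ORMs do under the hood when they hydrate a "belongs to" relation.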
|
> what steps I have to do to get a result like this using raw SQL.
[**`row_to_json()`**](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-TABLE) has a second parameter to pretty-print the outer level. [The manual:](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-TABLE)
> Line feeds will be added between level 1 elements if pretty\_bool is true.
Use a subquery to add the user to the row:
```
SELECT row_to_json(combi, true) AS pretty_json
FROM (
SELECT p.*, u AS user -- whole users row
FROM posts p
JOIN users u ON u.id = p.user_id
WHERE p.id = 1
) combi;
```
You only need one subquery level.
Or use [**`jsonb_pretty()`**](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-PROCESSING-TABLE) in Postgres 9.4 or later to pretty-print all levels:
```
SELECT jsonb_pretty(to_jsonb(combi)) AS pretty_jsonb
FROM (
SELECT p.*, u AS user
FROM posts p
JOIN users u ON u.id = p.user_id
WHERE p.id = 1
) combi;
```
Keys (former column names) are ordered alphabetically now, that's how they are stored in `jsonb`.
*db<>fiddle [here](https://dbfiddle.uk/?rdbms=postgres_13&fiddle=554a202ab5260429c012ff924a7843fa)*
OLD [sqlfiddle](http://sqlfiddle.com/#!15/ce74a/5) - cast to text (`::text`) for display; line feeds not displayed
|
Raw SQL queries in a REST Api
|
[
"",
"sql",
"postgresql",
"relational-database",
""
] |
I have a string value, e.g. 28Dec2013, which I need to convert to a date field in SQL.
I need to convert it to a date field so I can carry out calculations on it.
For example, if I query `select max(date)` it returns the value '31Oct2015' - however, I know this value should be '01Jan2016'.
Does anyone have any ideas how to go about this?
I'm unsure, because of the format of the string, how to go about this.
I am running this on Sybase.
Thanks a lot.
|
My solution:
```
SELECT CONVERT(DATE, SUBSTRING(thedate, 1, 2) || ' ' || SUBSTRING(thedate, 3, 3) || ' ' || SUBSTRING(thedate, 6, 4), 106) FROM calendartable;
```
|
Since I don't know Sybase, I'd go with a more or less DBMS-independent solution:
```
SUBSTRING(col FROM 6 for 4) || CASE SUBSTRING(col from 3 for 3)
when 'Jan' then '01'
when 'Feb' then '02'
when 'Mar' then '03'
when 'Apr' then '04'
when 'May' then '05'
when 'Jun' then '06'
when 'Jul' then '07'
when 'Aug' then '08'
when 'Sep' then '09'
when 'Oct' then '10'
when 'Nov' then '11'
when 'Dec' then '12' end
|| SUBSTRING(col from 1 for 2)
```
ANSI SQL compliant!
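As a sanity check of why `max()` on the raw strings misbehaves, and of the DDMonYYYY layout itself: outside the database, Python's `strptime` parses these strings directly with `%d%b%Y` (assuming English month abbreviations):

```python
from datetime import datetime

dates = ['28Dec2013', '31Oct2015', '01Jan2016']
print(max(dates))  # string comparison picks '31Oct2015' -- the wrong answer

# Parse into real dates, then take the max
parsed = [datetime.strptime(d, '%d%b%Y') for d in dates]
print(max(parsed).strftime('%d%b%Y'))  # '01Jan2016' -- the right answer
```

The string max is wrong because '3' sorts after '0', exactly the symptom described in the question.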
|
Converting DD MMM YYYY to a date field with SQL
|
[
"",
"sql",
"database",
"date",
"sybase",
""
] |
From Rhino, I am running the following script:
```
var SQLstatement = connection.createStatement();
query = "INSERT INTO TableName(ID, Name, Date) VALUES ('1234', 'John Smith', '2012-05-24')";
for (var i = 0; i <= 1500; i++)
{
SQLstatement.executeUpdate(query);
}
```
All I am trying to do is insert the values listed in the query above into an SQL table 1,500 times.
However, due to limitations in SQL the statement inserts only 1000 records. Is there a way to insert all 1500 records? I would prefer not to use any text files with the option of bulk insert.
|
There's nothing that should limit you to only 1000 inserts, therefore it sounds like related to a commit. Try the following options:
1. Add a commit at the end
2. Check if there's some kind of setting in Rhino that commits in batches of 1000
3. Try running your code with say 500 and 2500 iterations separately and check how many records get inserted in each case
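As a sanity check outside Rhino, here's a minimal Python `sqlite3` sketch of option 1 (a stand-in for the Notes/JDBC setup; the table definition is invented to match the INSERT): all 1,500 inserts run in a loop and are committed once at the end, with no 1,000-row limit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableName (ID TEXT, Name TEXT, Date TEXT)")

query = "INSERT INTO TableName (ID, Name, Date) VALUES ('1234', 'John Smith', '2012-05-24')"
for _ in range(1500):
    conn.execute(query)

conn.commit()  # one commit at the end, as suggested in option 1
count = conn.execute("SELECT COUNT(*) FROM TableName").fetchone()[0]
```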
|
How about a different approach -- just run the `sql` statement once. This creates a `cartesian product` which creates the 1500 records in a single statement.
```
INSERT INTO TableName(ID, Name, Date)
SELECT '1234', 'John Smith', '2012-05-24'
FROM (SELECT 1 rn UNION SELECT 2) t, (SELECT 1 rn union SELECT 2) t2,
(SELECT 1 rn union SELECT 2) t3, (SELECT 1 rn union SELECT 2) t4,
(SELECT 1 rn union SELECT 2) t5, (SELECT 1 rn union SELECT 2) t6,
(SELECT 1 rn union SELECT 2) t7, (SELECT 1 rn union SELECT 2) t8,
(SELECT 1 rn union SELECT 2) t9, (SELECT 1 rn union SELECT 2) t10,
(SELECT 1 rn union SELECT 2) t11, (SELECT 1 rn union SELECT 2) t12
LIMIT 1500
```
* [SQL Fiddle Demo](http://www.sqlfiddle.com/#!2/7a387/1)
|
Rhino INSERT SQL statement
|
[
"",
"sql",
"rhino",
""
] |
I am trying to re-write the following query (not created by myself) as this throws an execution error:
**Old Query:**
```
SELECT LastName + ', ' + FirstName AS 'Teamleader'
FROM dbo.EmpTable
WHERE EmpID IN (
SELECT SupEmpID
FROM dbo.EmpTable
WHERE
SupEmpID = (
SELECT EmpID
FROM EmpTable
WHERE SupEmpID = (
SELECT EmpID
FROM EmpTable
WHERE NTID = @NTID
)
)
)
ORDER BY TeamLeader
```
I thought of using JOIN instead of the nested query but I am not sure how to apply this when there is an IN statement involved.
So far I have the following but this is returning the same error (probably as I still have the IN statement in there).
**New Query (attempt):**
```
SELECT A.LastName + ', ' + A.FirstName AS TeamLeader
FROM dbo.EmpTable AS A
WHERE A.EmpID IN
(
SELECT D.SupEmpID
FROM dbo.EmpTable AS B
INNER JOIN dbo.EmpTable AS C
ON C.SupEmpID = B.EmpID
INNER JOIN dbo.EmpTable AS C
ON D.SupEmpID = C.EmpID
WHERE B.NTID = @NTID
)
ORDER BY TeamLeader
```
Would someone explain the IN statement here, and let me know what I could do to fix this?
|
Your last query might work if you didn't have two tables aliased to `C`:
```
SELECT A.LastName + ', ' + A.FirstName AS TeamLeader
FROM dbo.EmpTable AS A
WHERE A.EmpID IN
(
SELECT D.SupEmpID
FROM dbo.EmpTable AS B
INNER JOIN dbo.EmpTable AS C
ON C.SupEmpID = B.EmpID
INNER JOIN dbo.EmpTable AS D
--------------------------------------------------------^
ON D.SupEmpID = C.EmpID
WHERE B.NTID = @NTID
)
ORDER BY TeamLeader
```
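The corrected aliasing can be exercised against a tiny invented hierarchy; this Python `sqlite3` sketch mirrors the query's structure (with SQLite's `||` in place of T-SQL's `+` for concatenation, and a hard-coded NTID in place of `@NTID`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmpTable (EmpID INT, SupEmpID INT, NTID TEXT, FirstName TEXT, LastName TEXT)")
conn.executemany("INSERT INTO EmpTable VALUES (?, ?, ?, ?, ?)", [
    (1, None, "jdoe",  "John",  "Smith"),   # B: the employee looked up by NTID
    (2, 1,    "alead", "Alice", "Lead"),    # C: supervised by John
    (3, 2,    "bwork", "Bob",   "Worker"),  # D: supervised by Alice
])
rows = conn.execute("""
    SELECT A.LastName || ', ' || A.FirstName AS TeamLeader
    FROM EmpTable A
    WHERE A.EmpID IN (
        SELECT D.SupEmpID
        FROM EmpTable B
        JOIN EmpTable C ON C.SupEmpID = B.EmpID
        JOIN EmpTable D ON D.SupEmpID = C.EmpID
        WHERE B.NTID = 'jdoe'
    )
    ORDER BY TeamLeader
""").fetchall()
```

With distinct aliases `B`, `C`, `D`, the subquery walks the chain without ambiguity and resolves to Alice.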
|
`IN` may be the best solution here. `IN` will return all of the records from the "outer" table where there is a matching record in the inner table (or subquery).
If you did a `JOIN`, the records of the outer table would be *paired* with the records in the inner table, so you may get *multiple* results back if there are multiple matches in the inner table. You *could* do a `DISTINCT` on the result to eliminate duplicates, but an `IN` would probably be faster since it doesn't have to match up all of the records and then eliminate duplicates.
|
How to replace an IN statement in SQL Server?
|
[
"",
"sql",
"sql-server",
"select",
"stored-procedures",
"sql-order-by",
""
] |
I'm having trouble setting up my database tables with these values. Is everything in the right tables? If not, could anyone please help me and show me the right way? I just need clarification. Thanks!

***Check out all the answers, they're all very helpful but I could only choose one!***
|
First thought is that it's redundant to label every table with the suffix `Tbl`. We know it's a table; there won't ever be any confusion about that. Calling it `users` and `purchases`, for example, is sufficient. The same can also be said of the primary keys of those tables. For example, you don't need to call the primary key id column of the Album table `AlbumID`, because it can never be referenced outside of the context of the table to which it belongs. You can simply call it `id` instead.
You've got a number of extra columns in some tables that don't belong. The pattern I'm seeing might indicate you're not completely grounded in how relationships work. I will point out a couple of places where there is a mixup, and that should give you a solid enough grounding to review the rest of the tables and find the other errors.
Looking at the `UserTbl` table, you have the id of the purchase (`Purchase_ID`). In real world terms, that means each user can only ever have one purchase, which is unlikely to ever be the case. More likely, a User **may have many** purchases. As a professor once told me, when deciding where to put the foreign key, the key always goes on the "many" side of the relationship. So, in this case, you will want to remove the `Purchase_ID` from `UserTbl`, and add a `User_ID` in the `PurchaseTbl`. That way, a user may indeed have many purchases, with each purchase containing the id of the user who made the purchase.
You have similar problems in the ArtistTbl and GenreTbl. See if you can determine how the relationships should look. Hopefully that makes a little more sense.
*Quick side note:* Unless your system is made to track slave labour, I believe the `ArtistID` in the `PurchaseTbl` should actually be the `AlbumID` instead, since the user is most likely purchasing an album, not a person.
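The "foreign key goes on the many side" rule can be sketched with two tiny tables (names and data invented for illustration): the `purchases` table carries a `user_id`, so one user can own many purchase rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    -- FK lives on the "many" side: each purchase points back at one user
    CREATE TABLE purchases (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id),
        item TEXT
    );
    INSERT INTO users VALUES (1, 'Joe');
    INSERT INTO purchases (user_id, item) VALUES (1, 'Album A'), (1, 'Album B');
""")
# One user, many purchases -- impossible if the FK sat on the users table
joe_purchases = conn.execute(
    "SELECT COUNT(*) FROM purchases WHERE user_id = 1"
).fetchone()[0]
```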
|
Your foreign keys look like they're mostly in the wrong tables.
* Does a user purchase an artist or an album? Probably an album, so the Purchase table should have an Album id instead of an Artist id. The Purchase table should also have a User id to keep track of what user made the purchase.
* You could say that an artist belongs to a specific genre, but some artists switch genres every few years. It would probably be better to say an album has a genre, and put the Genre id foreign key in the Album table instead.
* The Genre table should not have an Album id. Each record in your genre table will not have one album that it's associated with. The relationship is the other way around. Each album belongs to a genre, so the Genre id foreign key should be in the Album table, as in the previous point.
* The Album table should have an Artist id foreign key, not the other way around.
In general, the way to get this right is to think about how many records in one table need to refer to a record in another table. If an artist releases ten albums, you can't put all ten album ids in one record in the artist table. You'd need ten columns for those ids, and that doesn't make sense. Instead, you want to track which artist recorded the album in the album table. That only takes one extra column for the artist id in the album table.
|
ERD Relationships
|
[
"",
"sql",
"erd",
""
] |
I have two columns which contain email info. Column A, Column B. Now, in these fields, emails SHOULD be as such:
```
Column A Column B
x@x.com; b@b.com abc@x.com; xyz@x.com
x@x.com;b@b.com abc@x.com;xyz@x.com
```
However, in an effort to do some data quality checks, it turns out that MANY entries are not following this format. I am trying to find all the outliers, which I have identified as taking forms such as:
```
x@x.com and b@b.com
x@x.com, b@b.com (uses comma so it is incorrect)
x@x.com or b@b.com
x@x.com / b@b.com
```
There could be other wrong characters or words that make the format incorrect. But I hope these examples pinpoint the issue.
What I am trying to do:
Create a query that will pinpoint all instances that are NOT in the correct format, so that the problem points can be found and edited later, but that's a different topic :)
Here is a Query I have so far:
```
SELECT A_EMAIL, B_EMAIL, NAME, ID
FROM NAMES
WHERE A_EMAIL LIKE ('and %') OR A_EMAIL LIKE ('or %')
OR B_EMAIL LIKE ('and %') OR B LIKE ('or %')
```
This is using LIKE and the % is with a space in between. However, this returns no results, and I know such results definitely do exist. But I would like to build a logic that would bring me back everything that isn't in the proper format instead of trying to use LIKE 'XYZ' because even though I know most of the problem issues, I could still miss some.
However, if such a thing isn't possible via SQL, then I would still like to get my current logic of using LIKE ('XYZ %') working instead; even though it's not an optimal route, it should still help me toward my goal.
|
Your query is fine, you just missed one `%` in it.
Instead of this
```
WHERE A_EMAIL LIKE ('and %') OR A_EMAIL LIKE ('or %')
OR B_EMAIL LIKE ('and %') OR B LIKE ('or %')
```
you should use this
```
WHERE A_EMAIL LIKE ('%and %') OR A_EMAIL LIKE ('%or %')
OR B_EMAIL LIKE ('%and %') OR B LIKE ('%or %')
```
Your original query looks for values that start with 'and ', while you are interested in cases where 'and ' appears anywhere inside the column value.
Of course, this is a one-off solution to your immediate problem.
The permanent solution is not to store several e-mails in the same column in the first place.
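The difference between the two patterns is easy to demonstrate in any SQL engine; here's a small `sqlite3` sketch (a stand-in for SQL Server, with invented sample rows): `'and %'` only catches values that *start* with `and `, while `'%and %'` catches it anywhere.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE NAMES (A_EMAIL TEXT)")
conn.executemany("INSERT INTO NAMES VALUES (?)", [
    ("x@x.com; b@b.com",),      # correct format
    ("x@x.com and b@b.com",),   # outlier: 'and' in the middle
    ("and x@x.com",),           # outlier: starts with 'and '
])
# 'and %' anchors the match at the start of the value
starts = [r[0] for r in conn.execute(
    "SELECT A_EMAIL FROM NAMES WHERE A_EMAIL LIKE 'and %'")]
# '%and %' matches the token anywhere in the value
anywhere = [r[0] for r in conn.execute(
    "SELECT A_EMAIL FROM NAMES WHERE A_EMAIL LIKE '%and %'")]
```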
|
I would suggest that you remove spaces and then look for non-email like characters:
```
where replace(cola, '; ', ';') like '%[^a-zA-Z0-9@ ;.]%'
```
That is, the column contains an invalid character.
Then, I would encourage you to create a junction table so you are not storing lists of emails in semi-colon delimited columns.
|
Finding a Specific Character and/or String SQL Server 2008
|
[
"",
"sql",
"sql-server",
"string",
"sql-like",
""
] |
I have a SQL select statement as follows:
```
select * from editortemp order by case
when CategoryID = 10 then 0
else CategoryID end,
Ranking
```
I would like to change the ordering of the result, so that in addition to putting all rows with CategoryID = 10 at the top, the REST of the rows will be ordered according to the count of the rows with each CategoryID, in descending order.
When I try doing
```
select * from editortemp order by case
when CategoryID = 10 then 0
else count(CategoryID) end,
Ranking
```
I get the error: Column 'editortemp.EditorTempID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
After researching this error online, it looks like this means there's a problem with the logic of the select statement.
How do I create a logical select statement which will order the rows as I described above?
EDIT: When I try to run the following statement:
```
select * from editortemp order by case
when CategoryID = 10 then 0
else (select count(CategoryID) from editortemp) end,
Ranking
```
I don't get the error mentioned above, but the ordering does not come out right. It puts all category=10 rows at the top, but then starts ordering according to the ranking, not according to the count of the categoryid... Any ideas what needs to be changed to get the results I want?
Thanks so much for your help!
|
You should really show a sample structure of your tables, some sample data and sample of what you are expecting in your original post.
From the way it looks, your editortemp table has multiple records and may have more data elements obviously not presented here. I would first get a pre-aggregate of counts, then join back to include as part of your ordering. Something like
```
select
et.*
from
    editortemp et
    JOIN ( select categoryID, count(*) as CatCount
           from editortemp
           group by categoryID ) preAgg
      on et.categoryID = preAgg.categoryID
order by
case when CategoryID = 10 then 1 else 2 end,
preAgg.CatCount desc,
ranking
```
The case/when will force a preliminary sort of categoryID = 10 first, then anything else second. The secondary part of the order by is the count from the pre-aggregate table joined to. So, even if Category 10 has a count of 3 entries and Category 2 has 100, Category 10 still stays at the first spot... only then do the rest get sorted in descending order by count.
Per feedback...
I don't know what the ranking is, but the ranking should only have an impact if there are multiple entries for a given category count.
```
What if categories 1, 2, 3, 4, 5 all have a count of 73 entries... and
cat 1 has a ranks of 6, 12, 15...
cat 2 has ranks of 3, 20, 40...
cat 3 has ranks of 9, 10, 18...
they will be intermingled.
```
If you want all of same category grouped, then that would be added before ranking something like
```
order by
case when CategoryID = 10 then 1 else 2 end,
preAgg.CatCount desc,
CategoryID,
ranking
```
This way, in the scenario of multiple categories having a count of 73, the above order by will list all of category 1 by its ranking, then category 2 by its ranking, etc.
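The pre-aggregate-then-join idea can be verified with a handful of invented rows in `sqlite3`: category 10 is pinned first by the CASE, and the remaining categories sort by their descending counts, then by ranking.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE editortemp (CategoryID INT, Ranking INT)")
# Category 10 has 1 row, category 2 has 3 rows, category 5 has 2 rows
conn.executemany("INSERT INTO editortemp VALUES (?, ?)", [
    (10, 1), (2, 1), (2, 2), (2, 3), (5, 1), (5, 2),
])
order = [r[0] for r in conn.execute("""
    SELECT et.CategoryID
    FROM editortemp et
    JOIN (SELECT CategoryID, COUNT(*) AS CatCount
          FROM editortemp GROUP BY CategoryID) preAgg
      ON et.CategoryID = preAgg.CategoryID
    ORDER BY CASE WHEN et.CategoryID = 10 THEN 1 ELSE 2 END,
             preAgg.CatCount DESC,
             et.Ranking
""")]
```

Category 10 stays on top even though category 2 has more rows.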
|
```
select
TEMP1.*
from
(
select CategoryID, 999999999 AS Ranking FROM editortemp WHERE CategoryID = 10
UNION ALL
Select CategoryID, (SELECT COUNT(*) FROM editortemp AS t1 WHERE t1.CategoryID = t2.CategoryID) AS Ranking FROM editortemp AS t2 WHERE CategoryID <> 10
) TEMP1
ORDER BY
TEMP1.Ranking DESC
```
|
How to ORDER BY count of rows with specific column value
|
[
"",
"sql",
"sql-server-2012",
"sql-order-by",
""
] |
I have a response column that stores 2 different values for a same product based on question 1 and question 2. That creates 2 rows for each product but I want only one row for each product.
Example:
```
select Product, XNumber from MyTable where QuestionID IN ('Q1','Q2')
```
result shows:
```
Product XNumber
Bat abc
Bat abc12
```
I want it to display like below:
```
Product Xnumber1 Xnumber2
Bat abc abc12
```
Please help.
Thanks.
|
This will work best if you don't have answers for both Q1 and Q2 for all ids
```
SELECT a.Product, b.XNumber as XNumber1, c.XNumber as XNumber2
FROM (SELECT DISTINCT Product FROM MyTable) a
LEFT JOIN MyTable b ON a.Product = b.Product AND b.QuestionID = 'Q1'
LEFT JOIN MyTable c ON a.Product = c.Product AND c.QuestionID = 'Q2'
```
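A quick `sqlite3` sketch of the double LEFT JOIN pivot (sample rows based on the question, plus an invented product with no Q2 answer to show why LEFT JOIN matters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Product TEXT, QuestionID TEXT, XNumber TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?, ?, ?)", [
    ("Bat",  "Q1", "abc"),
    ("Bat",  "Q2", "abc12"),
    ("Ball", "Q1", "xyz"),   # no Q2 answer: LEFT JOIN keeps the row with NULL
])
rows = conn.execute("""
    SELECT a.Product, b.XNumber AS XNumber1, c.XNumber AS XNumber2
    FROM (SELECT DISTINCT Product FROM MyTable) a
    LEFT JOIN MyTable b ON a.Product = b.Product AND b.QuestionID = 'Q1'
    LEFT JOIN MyTable c ON a.Product = c.Product AND c.QuestionID = 'Q2'
    ORDER BY a.Product
""").fetchall()
```

Each product collapses to one row; a missing answer shows up as NULL instead of dropping the product entirely.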
|
If you always have two different values you can try this:
```
SELECT a.Product, a.XNumber as XNumber1, b.XNumber as XNumber2
FROM MyTable a
INNER JOIN MyTable b
ON a.Product = b.Product
WHERE a.QuestionId = 'Q1'
AND b.QuestionId = 'Q2'
```
I assume that XNumber1 is the result for Q1 and Xnumber2 is the result for Q2.
|
2 Rows to 1 Row - Nested Query
|
[
"",
"sql",
"sql-server-2008",
"sql-server-2012",
""
] |
I'm having table Student as below
```
Student(id,jdate)
```
where column `id` is primary key. Now I'm writing a query as below
```
select * from Student where id=2 and (jdate='date1' or jdate='date2')
```
Will index work here? or Can i modify as below?
```
select * from Student where (id=2) and (jdate='date1' or jdate='date2')
```
|
Both your examples will use the PK Index for column 'id'.
In case it is not clear: the "=" operator has precedence over "and", and thus the parentheses are not necessary.
|
Since you are declaring a PK on the id column then you are defining a unique clustered index on the table as well. And since you are using the id column in the where clause then the index should be used.
The two queries, both of them, will use the index and the parenthesis around id = 2 don't change anything in the logic / condition evaluation.
|
SQL Server - Indexing and Operator precedence
|
[
"",
"sql",
"sql-server",
""
] |
In the code below I have two queries: stSQL = "SELECT * FROM QA\Dependencies" and stSQL1 = "SELECT * FROM QA\QA Priority". The first query runs smoothly, but when I run the second one, where the Lotus Notes view name contains spaces, it gives me an error. Can anyone help me write a query for a Lotus Notes view whose name has spaces in it?
```
Option Explicit
Private Sub CommandButton21_Click()
Dim wbBook As Workbook
Dim wsSheet As Worksheet
Dim rnData As Range
Dim adoCN As ADODB.Connection
Dim adoRst As ADODB.Recordset
Dim adoRst1 As ADODB.Recordset
Dim stSQL As String
Dim stSQL1 As String
Dim vaData As Variant
Dim a As String
Set adoCN = New ADODB.Connection
Set wbBook = ThisWorkbook
Set wsSheet = wbBook.Worksheets("Sheet1")
adoCN.Open ("Driver={Lotus NotesSQL Driver (*.nsf)};Database=DATABASE.nsf;Server=Server;")
stSQL = "SELECT * FROM QA\Dependencies"
Set adoRst = adoCN.Execute(stSQL)
stSQL1 = "SELECT * FROM QA\QA Priority"
Set adoRst1 = adoCN.Execute(stSQL1)
adoCN.Close
Set adoRst = Nothing
Set adoRst1 = Nothing
Set adoCN = Nothing
End Sub
```
|
I believe you need to surround your table name with `[]` like so:
```
stSQL1 = "SELECT * FROM [QA\QA Priority]"
```
|
Should be referenced as follows:
```
stSQL1 = "SELECT * FROM QA\[QA Priority]"
```
|
Query SQL table whose table name has space in between in VBA Excel
|
[
"",
"sql",
"excel",
"vba",
""
] |
I wanted to make a view with all records from the 3 tables I have.. But I'm getting an error and I don't know how to solve it.
The error is that Customer.customerID is duplicated.
```
Create view overview as
Select
customer.customerID,
customer.username,
customer.name,
customer.surname,
customer.city,
customer.street,
customer.zipcode,
customer.birthdate,
customer.`payment method`,
customer.screenname,
order.OrderID,
order.customerID,
order.date,
order.GameID,
games.GameID,
games.game,
games.price,
games.released,
games.genre,
games.grid
FROM customer
INNER JOIN order
ON customer.customerID = order.customerID
INNER JOIN games
ON games.GameID = order.GameID
```
The tables and their columns:
order
OrderID, customerID, date, GameID
games
GameID, game, price, released, genre, grid
customer
customerID, username, name, surname, city, street, zipcode, birthdate, payment method, screenname
please help.
|
I removed the duplicate-named columns as @Mahmoud\_Gamal suggested, but then SQL created a record for every possible combination of the data in the other records, like:
Customer 1 | Order1 | Game1
Customer 2 | Order1 | Game1
Customer 1 | Order2 | Game2
I solved it by using "select distinct"; I never thought forgetting to use DISTINCT would cause so much pain...
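The DISTINCT fix is easy to see in isolation; a small `sqlite3` sketch (invented two-column data) shows a result set with duplicate rows collapsing once DISTINCT is applied:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (customer TEXT, game TEXT)")
# Duplicate rows, as a bad join can produce
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("Customer 1", "Game1"), ("Customer 1", "Game1"), ("Customer 2", "Game2"),
])
plain = conn.execute("SELECT customer, game FROM t").fetchall()
distinct = conn.execute("SELECT DISTINCT customer, game FROM t ORDER BY customer").fetchall()
```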
|
The `CustomerID` column is selected twice with the same name, you have to give one of them a different name using alias `AS ...` like this:
```
Create view overview as
Select
customer.customerID,
customer.username,
customer.name,
customer.surname,
customer.city,
customer.street,
customer.zipcode,
customer.birthdate,
customer.`payment method`,
customer.screenname,
order.OrderID,
order.customerID AS OrderCustomerId, -- this
order.date,
order.GameID,
games.GameID gamesGameID, -- This too
games.game,
games.price,
games.released,
games.genre,
games.grid
FROM customer
INNER JOIN order
ON customer.customerID = order.customerID
INNER JOIN games ON games.GameID = order.GameID
```
Or remove it from the select as it is the same value as the `Customers.CustomerId`.
|
Creating a view in SQL with three tables
|
[
"",
"mysql",
"sql",
"view",
""
] |
I'm using Oracle SQL Developer. There is one column in the source table, `fiscal_week_string`, which is in the following format:
```
2011 – 01 (09/27 – 10/31)
```
I want to convert this to the following format:
```
2011-01 (09/27 to 10/31)
```
I.e., I want to remove some of the white spaces. I also want to replace the "`-`" inside the parentheses with "`to`".
Here's a few rows:
    datestr     pc     WEEK_START  WEEK_END    month_start  mnth_end    year  mth  qtr  wk  fiscal_week_string         Fiscal_month_string
    2010-09-27  14879  2010-09-27  2010-10-03  2010-09-27   2010-10-31  2011  1    1    1   2011 – 01 (09/27 – 10/03)  2011 – 01 (09/27 – 10/31)
    2010-09-28  14880  2010-09-27  2010-10-03  2010-09-27   2010-10-31  2011  1    1    1   2011 – 01 (09/27 – 10/03)  2011 – 01 (09/27 – 10/31)
Could anyone please guide me with this?
|
Assuming the DDL below, since you have not shared it:
```
create table table_one
(
fiscal_week_string varchar(30)
);
insert into table_one
values ('2011 – 01 (09/27 – 10/31)');
insert into table_one
values ('2011 – 01 (09/26 – 11/31)');
```
your Oracle query would be
```
select col_one || col_two || col_three || ' ' || col_four||' '|| 'to'||' '||col_six as fmt_date
from
(
select REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 1) col_one,
REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 2) col_two,
REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 3) col_three,
REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 4) col_four,
REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 5) col_five,
REGEXP_SUBSTR(t.fiscal_week_string, '[^ ]+', 1, 6) col_six
from table_one t
)
```
and results in:
```
2011–01 (09/27 to 10/31)
2011–01 (09/26 to 11/31)
```
SQL Fiddle <http://sqlfiddle.com/#!4/c30b15/8>
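The same token-split-and-reassemble approach can be mirrored with Python's `re` module (using a plain hyphen in the sample string; the six whitespace-separated tokens correspond to the six `REGEXP_SUBSTR` calls):

```python
import re

raw = "2011 - 01 (09/27 - 10/31)"
# Split on whitespace, as REGEXP_SUBSTR(..., '[^ ]+', 1, n) does token by token
t = re.findall(r"\S+", raw)   # ['2011', '-', '01', '(09/27', '-', '10/31)']

# Reassemble: tighten 'year-week', keep the parens, swap the inner dash for 'to'
fmt = f"{t[0]}-{t[2]} {t[3]} to {t[5]}"
```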
|
You can use regexp\_replace to replace target patterns.
Example:
```
select regexp_replace(
regexp_replace('2011 - 01 (09/27 - 10/31)',' - ','-',1,1)
, ' - ',' to ') from dual;
```
Sample update:
```
create table a1(id number, name varchar2(100));
insert into a1 values(1, '2011 - 01 (09/27 - 10/31)');
update a1 set name=
regexp_replace(regexp_replace(name,' - ','-',1,1), ' - ',' to ') where id=1;
select * from a1;
```
|
Replace specific characters at specific positions in a string
|
[
"",
"sql",
"database",
""
] |
I have the following string:
```
'{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com'
```
and I would like to extract ONLY the MessageID part:
```
<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>
```
I've tried unsuccessfully to use substring and charindex functions:
```
SELECT SUBSTRING('{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com',
(SELECT CHARINDEX(' Message-ID=<','{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com')),
(SELECT CHARINDEX('>, Authentication-Results', '{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com')))
```
and the result to my query is:
```
Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com
```
What am I doing wrong?
|
The 3rd argument to SUBSTRING is a LENGTH. Your statement is passing in the character position...
```
SUBSTRING ( expression ,start , length )
```
try this:
```
SELECT SUBSTRING('{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com',
(SELECT CHARINDEX(' Message-ID=<','{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com')),
CHARINDEX('>, Authentication-Results', '{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com')
-
CHARINDEX(' Message-ID=<','{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com')
+1)
```
(the value should be 75 in your example)
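The start/length arithmetic is easy to mirror in Python (0-based slicing instead of 1-based SUBSTRING); the key point is the same: the slice length is the end position minus the start position, not the end position itself.

```python
s = ("{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, "
     "Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, "
     "Authentication-Results=vadmzmail2342.test.com")

start = s.index("Message-ID=<") + len("Message-ID=")  # position of '<'
end = s.index(">", start) + 1                         # position just past '>'
message_id = s[start:end]                             # slice length = end - start
```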
|
It is the Charindex function. Please refer to <http://msdn.microsoft.com/en-us/library/ms186323.aspx>. Charindex returns the index of the first occurrence of the character. For example:
```
select CHARINDEX('A','TESTING A LETTER')
```
The above will return 9, and if you do a substring starting there, the substring will start from position 9, not from the end of the string.
For your query, the solution is below;
```
DECLARE @String varchar(500) = '{Mapi-Reply-Recipient-Smtp-Proxies=, Mapi-Non-Receipt-Notification-Requested=true, Message-ID=<6a2k4081-f6134-4345c-b232429-a32410de78@abd.local.test.global>, Authentication-Results=vadmzmail2342.test.com'
SELECT SUBSTRING(@String,
CHARINDEX('Message-ID=<',@String) + 12,
CHARINDEX('>',@String,CHARINDEX('Message-ID=<',@String)))
```
|
SQL extract substring using SUBSTRING and CHARINDEX
|
[
"",
"sql",
""
] |
I am a little confused about #7 of the "sum and count" section on [SQLZoo](http://sqlzoo.net/wiki/SUM_and_COUNT)
The question is :
> For each continent show the continent and number of countries with
> populations of at least 10 million.
This answer worked, but I am not sure exactly how :
```
SELECT DISTINCT(continent), COUNT(name)
FROM world
WHERE name IN
( SELECT name
FROM world
WHERE population > 10000000 )
GROUP BY continent
```
However, it is not making sense intuitively. I am not seeing how it makes sense to have this subquery :
```
SELECT name
FROM world
WHERE population > 10000000
```
Mainly because we are returning *all* the countries with 10,000,000+ population; however, we want to link each continent to its own group of 10,000,000+ countries.
Does SQL automatically know to connect the `continent` with its own `countries`?
|
No. The query is assuming that the `name` column is unique.
By the way, this is a bad query. For instance, it suggests that `distinct` is a function. A much better way to write this query is to use conditional aggregation:
```
SELECT continent, sum(case when population >= 10000000 then 1 else 0 end)
FROM world
GROUP BY continent;
```
Note:
I realize that the first query will filter out Antartica, so this is an even better query:
```
SELECT continent, COUNT(*)
FROM world
WHERE population >= 10000000
GROUP BY continent;
```
I have no idea why SQLZoo would have such a complex query for something that is pretty simple.
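Conditional aggregation is straightforward to demonstrate with a few invented rows in `sqlite3`: the CASE expression turns each qualifying country into a 1, and SUM counts them per continent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (name TEXT, continent TEXT, population INT)")
conn.executemany("INSERT INTO world VALUES (?, ?, ?)", [
    ("France", "Europe", 66000000),
    ("Monaco", "Europe", 38000),          # under the 10M threshold
    ("Brazil", "South America", 200000000),
])
rows = conn.execute("""
    SELECT continent,
           SUM(CASE WHEN population >= 10000000 THEN 1 ELSE 0 END) AS big
    FROM world
    GROUP BY continent
    ORDER BY continent
""").fetchall()
```

Monaco contributes a 0, so Europe's count stays at 1.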
|
DISTINCT is not a function, it works on the whole row/result set! (Adding redundant parenthesis around continent makes no difference.)
DISTINCT is also not needed here because you are already doing a GROUP BY.
|
Seeking a better understanding of how this query works?
|
[
"",
"mysql",
"sql",
"mariadb",
""
] |
I have a following query.

My result after executing is this.

But the issue is that for invoice no 'I261001001S000371' there are 2 records. I want to show them as 1 record, meaning that for that invoice the cash amount and cheque amount should appear in 1 row.
Any ideas please?
|
Try this query: use an aggregate function to sum up the cash values that share the same invoice no. I am not sure about Settle; you can aggregate it too, if needed.
```
SELECT InVoiceNo, sum(cash), sum(cheque),
sum(credit), NetAmount, Settle
from
(--Your select query to get result as specified in question) as a
group by InVoiceNo, NetAmount, Settle
```
|
Change your select statement to `SUM` the `Cash`, `Cheque`, and `Credit` columns, and `GROUP BY InvoiceNo, NetAmount, Settle`.
That way each invoice will show the total of each of the columns that you specify in your Case statements, all in one line.
|
Show multiple row as one row SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
"stored-procedures",
""
] |
I am having issues with combining two columns into one using MSSQL.
table 1 format:
```
|| WJCPrefix || WJCNo ||
|| UK-R/SWJC/14/ || 1234 ||
|| UK-R/CUWJC/14/ || 2345 ||
|| UK-R/CUWJC/14/ || 3456 ||
|| UK-R/SWJC/14/ || 4567 ||
|| UK-R/CUWJC/14/ || 5678 ||
```
The desired output would be:
UK-R/CUWJC/14/3456
UK-R/CUWJC/14/5678
the sql statement i am using is:
```
SELECT tblWJCItem.AddedDescription, concat(tblWJC.WJCPrefix, tblWJC.WJCNo) AS OurRef
FROM tblWJC
INNER JOIN tblWJCItem ON tblWJC.WJCID = tblWJCItem.WJCID;
```
I've also used:
```
SELECT tblWJCItem.AddedDescription, tblWJC.WJCPrefix + '' + tblWJC.WJCNo AS OurRef
FROM tblWJC
INNER JOIN tblWJCItem ON tblWJC.WJCID = tblWJCItem.WJCID;
```
I can't seem to concatenate these two columns; could anyone point out what I am doing wrong here?
Thanks
|
Your first query (`Concat()` function) should work if you are using SQL Server 2012 or later.
For other versions, you may need to `Convert()`/`Cast()` `WJCNo` to a string type
```
SELECT t2.AddedDescription,
t1.WJCPrefix + CONVERT(Varchar(10),t1.WJCNo) AS OurRef
FROM tblWJC t1
INNER JOIN tblWJCItem t2 ON t1.WJCID = t2.WJCID;
```
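The cast-then-concatenate idea can be sketched in `sqlite3` (a stand-in for SQL Server, using SQLite's `||` operator and `CAST` instead of `Convert()`; sample row taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblWJC (WJCPrefix TEXT, WJCNo INT)")
conn.execute("INSERT INTO tblWJC VALUES ('UK-R/CUWJC/14/', 3456)")

# Cast the numeric column to text before concatenating it onto the prefix
our_ref = conn.execute(
    "SELECT WJCPrefix || CAST(WJCNo AS TEXT) FROM tblWJC"
).fetchone()[0]
```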
|
Your first query should be fine. But you might try:
```
SELECT tblWJCItem.AddedDescription, tblWJC.Prefix + cast(tblWJC.WJCNo as varchar(255)) AS OurRef
FROM tblWJC INNER JOIN
tblWJCItem
ON tblWJC.WJCID = tblWJCItem.WJCID;
```
You might get an error in the second version if `WJCno` is numeric rather than a string.
|
issues combining two columns using concat
|
[
"",
"sql",
"sql-server",
""
] |
I have a query, when I run it as a select I get 1014 records. When I run it as an update it updates a varying number of the records usually less than 5.
Examples
```
Select * from Table
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
```
Returns 1014 rows
```
update Table
set meaninglesscolumn = 1
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
```
After this update running this query returns 1012 (or a lower number, it keeps dropping after every run but running this +500 times is going to be a hassle)
```
Select * from Table
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
and meaninglesscolumn <> 1
```
Thanks.
UPDATE
Sorry for the confusion, the above 'select' returned 1658 records, not 1014. I ran it with the 'and meaninglesscolumn <> 1' and got 1015 (the number of records I want to update). I then ran the above 'update' and the 'select' and got 1014, I then ran the 'update' again and got 1012.
So full process...
```
Select * from Table
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
```
Returned 1658
```
Select * from Table
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
and meaninglesscolumn <> 1
```
Returned 1015
```
update Table
set meaninglesscolumn = 1
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
```
Returned 1 row updated.
```
Select * from Table
where firstname ='john'
and lastname = 'smith'
and email='john.smith@gmail.com'
and meaninglesscolumn <> 1
```
Returned 1014
|
Why don't you just use a while loop to update them all?
SELECT
while(result = fetch){
UPDATE
}
|
The select query returns all records.
The update query only updates rows where a different value would be written to meaninglesscolumn.
In the first case, 2 records already had the value 1 in meaninglesscolumn, so it updated only 1012 records.
|
update select same where clause update subset of records
|
[
"",
"mysql",
"sql",
"select",
"sql-update",
"where-in",
""
] |
I want to replace a particular string in a MySQL database. I am using this query:
UPDATE `users` SET `name`=replace(name,'raj','rajesh')
However, wherever this query finds 'raj' it replaces it with 'rajesh'. For example, if the database contains the string 'raju', after running this query 'raju' becomes 'rajeshu', which I don't want. I want a query that matches the string exactly: after running it, only 'raj' should be replaced with 'rajesh', and 'raju' should remain as is. Can someone please help?
|
This query works for me:
```
UPDATE users
SET name = replace(name,'raj','rajesh')
WHERE name = 'raj'
```
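The effect of the WHERE clause is easy to verify; a small `sqlite3` sketch (sample names from the question) shows that 'raju' is left untouched because only exact matches are updated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("raj",), ("raju",)])

# The WHERE clause limits the update to exact matches only
conn.execute("UPDATE users SET name = replace(name, 'raj', 'rajesh') WHERE name = 'raj'")
names = sorted(r[0] for r in conn.execute("SELECT name FROM users"))
```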
|
Try below query to replace `raj` with `rajesh`
```
update users set name=replace(name,' raj ',' rajesh ');
```
OR
```
update users set name=replace(name,'raj ','rajesh ') where name like '% raj %';
```
|
How to find and replace string in MYSQL Database for a particular string only
|
[
"",
"mysql",
"sql",
"replace",
"sql-update",
"where-clause",
""
] |
I have three tables with the following structures:
**User Table**
```
UserID Name Age
------ ---- ---
001 AA 23
002 BB 25
003 CC 30
004 DD 24
005 EE 23
```
**temp\_User Table**
```
UserID Name Age
------ ---- ---
001 AA 23
002 BB 25
004 DD 24
005 EE 23
007 GG 23
009 HH 28
```
**ExceptionUsers Table**
```
UserID Status
------ --------
021 Active
002 Inactive
004 Active
010 Active
012 Inactive
```
I used the following query to retrieve my result:
**Query**
```
select
A.Name
,B.Name
,A.Age
,B.Age
from User A
inner join temp_User B
on A.UserID = B.UserID
left join ExceptionUsers C
on A.UserID = C.UserID
and C.Status = 'Inactive'
order by
A.Name
```
**Desired Result**
```
001 AA 23
002 BB 25
005 EE 23
```
But the result I actually get includes users who are 'Active':
```
001 AA 23
002 BB 25
004 DD 24
005 EE 23
```
How can I fix the query to get my desired result?
|
I'd move the LEFT JOIN logic to the WHERE clause, to a NOT EXISTS.
```
select A.Name, B.Name, A.Age, B.Age
from User A
inner join temp_User B
on A.UserID = B.UserID
where not exists (select * from ExceptionUsers C
where A.UserID = C.UserID
and C.Status = 'active')
order by A.Name
```
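A minimal runnable sketch of the `NOT EXISTS` approach, using SQLite and the sample rows from the question (the `User` table is renamed `usr` here purely for convenience):

```python
# Build the three sample tables and run the NOT EXISTS filter:
# users with an 'Active' exception row (004) are excluded.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usr (UserID TEXT, Name TEXT, Age INT);
CREATE TABLE temp_User (UserID TEXT, Name TEXT, Age INT);
CREATE TABLE ExceptionUsers (UserID TEXT, Status TEXT);
INSERT INTO usr VALUES ('001','AA',23),('002','BB',25),('003','CC',30),
                       ('004','DD',24),('005','EE',23);
INSERT INTO temp_User VALUES ('001','AA',23),('002','BB',25),('004','DD',24),
                             ('005','EE',23),('007','GG',23),('009','HH',28);
INSERT INTO ExceptionUsers VALUES ('021','Active'),('002','Inactive'),
                                  ('004','Active'),('010','Active'),('012','Inactive');
""")
rows = conn.execute("""
    SELECT a.Name, a.Age FROM usr a
    JOIN temp_User b ON a.UserID = b.UserID
    WHERE NOT EXISTS (SELECT 1 FROM ExceptionUsers c
                      WHERE c.UserID = a.UserID AND c.Status = 'Active')
    ORDER BY a.Name
""").fetchall()
print(rows)  # [('AA', 23), ('BB', 25), ('EE', 23)]
```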
|
Move C.Status = 'Inactive' to the where clause.
|
Can't Retrieve Exact Result with LEFT JOIN
|
[
"",
"sql",
"join",
"left-join",
""
] |
I am trying to write a query that gets the count of users for every hour of the day in the table. If no data is present for an hour, I want to record that hour with a count of zero. Also, users should be counted only for their first entry; subsequent entries should be ignored.
Table:
```
userId creationDate
1 2014-10-08 14:33:20.763
2 2014-10-09 04:24:14.283
3 2014-10-10 18:34:26.260
```
Desired output:
```
Date UserCount
2014-10-08 00:00:00.000 1
2014-10-08 01:00:00.000 1
2014-10-08 02:00:00.000 1
2014-10-08 03:00:00.000 0
2014-10-08 04:00:00.000 1
....
.....
2014-10-10 23:00:00.000 1
2014-10-10 00:00:00.000 0
```
My attempt:
```
SELECT
CAST(creationDate as date) AS ForDate,
    DATEPART(hour, creationDate) AS OnHour,
COUNT(distinct userId) AS Totals
FROM
Table
WHERE
primaryKey = 123
GROUP BY
    CAST(creationDate as date), DATEPART(hour, creationDate)
```
This only gives me per hour for the record that is present. Not the data for the missing hours. I think there is a way by using a cross join to get 0 data even for the missing hours.
Something like this, I came across, but not able to construct a proper query with it.
```
cross join (select
ROW_NUMBER() over (order by (select NULL)) as seqnum
from
INFORMATION_SCHEMA.COLUMNS) hours
where hours.seqnum >= 24
```
Once again, I am not a SQL expert, but trying hard to construct this result set.
One more attempt:
```
with dh as (
select DATEADD(hour, seqnum - 1, thedatehour ) as DateHour
from (select distinct cast(cast(createDate as DATE) as datetime) as thedatehour
from Table a
) a
cross join
(select ROW_NUMBER() over (order by (select NULL)) as seqnum
from INFORMATION_SCHEMA.COLUMNS
) hours
where hours.seqnum (less than)= 24
)
select dh.DateHour, COUNT(distinct c.userId)
from dh cross join Table c
--on dh.DateHour = c.createDate
group by dh.DateHour
order by 1
```
|
You need to build up a table of possible hours, and then join this to your actual records.
The best way to build up a table of possible hours is to use a recursive common table expression.
Here's how:
```
-- Example data
DECLARE @users TABLE(UserID INT, creationDate DATETIME)
INSERT @users
( UserID, creationDate )
VALUES ( 1, '2014-10-08 14:33:20.763'),
( 2, '2014-10-09 04:24:14.283'),
( 3, '2014-10-10 18:34:26.260')
;WITH u1st AS ( -- determine the FIRST time the user appears
SELECT UserID, MIN(creationDate) AS creationDate
FROM @users
GROUP BY UserID
), hrs AS ( -- recursive CTE of start hours
SELECT DISTINCT CAST(CAST(creationDate AS DATE) AS DATETIME) AS [StartHour]
FROM @users AS u
UNION ALL
SELECT DATEADD(HOUR, 1, [StartHour]) AS [StartHour] FROM hrs
WHERE DATEPART(HOUR,[StartHour]) < 23
), uGrp AS ( -- your data grouped by start hour
SELECT -- note that DATETIMEFROMPARTS is only in SQL Server 2012 and later
DATETIMEFROMPARTS(YEAR(CreationDate),MONTH(CreationDate),
DAY(creationDate),DATEPART(HOUR, creationDate),0,0,0)
AS StartHour,
COUNT(1) AS UserCount FROM u1st AS u
GROUP BY YEAR(creationDate), MONTH(creationDate), DAY(creationDate),
DATEPART(HOUR, creationDate)
)
SELECT hrs.StartHour, ISNULL(uGrp.UserCount, 0) AS UserCount
FROM hrs LEFT JOIN uGrp ON hrs.StartHour = uGrp.StartHour
ORDER BY hrs.StartHour
```
NB - DATETIMEFROMPARTS is only in SQL SERVER 2012 and greater. If you are using an earlier version of SQL SERVER you could have
```
WITH u1st AS ( -- determine the FIRST time the user appears
SELECT UserID, MIN(creationDate) AS creationDate
FROM @users
GROUP BY UserID
), hrs AS ( -- recursive CTE of start hours
SELECT DISTINCT CAST(CAST(creationDate AS DATE) AS DATETIME) AS [StartHour]
FROM @users AS u
UNION ALL
SELECT DATEADD(HOUR, 1, [StartHour]) AS [StartHour] FROM hrs
WHERE DATEPART(HOUR,[StartHour]) < 23
), uGrp AS ( -- your data grouped by start hour
SELECT -- note that DATETIMEFROMPARTS is only in SQL Server 2012 and later
CAST(CAST(YEAR(creationDate) AS CHAR(4)) + '-'
+ RIGHT('0' + CAST(MONTH(creationDate) AS CHAR(2)), 2) + '-'
+ RIGHT('0' + CAST(DAY(creationDate) AS CHAR(2)), 2) + ' '
+ RIGHT('0' + CAST(DATEPART(HOUR, creationDate) AS CHAR(2)), 2)
+ ':00:00.000'
AS DATETIME) AS StartHour,
COUNT(1) AS UserCount FROM u1st AS u
GROUP BY YEAR(creationDate), MONTH(creationDate), DAY(creationDate),
DATEPART(HOUR,creationDate)
)
SELECT hrs.StartHour, ISNULL(uGrp.UserCount, 0) AS UserCount
FROM hrs LEFT JOIN uGrp ON hrs.StartHour = uGrp.StartHour
ORDER BY hrs.StartHour
```
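The same fill-the-gaps idea works in SQLite with `WITH RECURSIVE`; this sketch generates the 24 hours of a single day and left-joins per-hour counts onto them (data invented, and only hour-of-day is modeled for brevity):

```python
# Recursive CTE builds hours 0..23; missing hours get COALESCE'd to 0.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE u (userId INT, creationDate TEXT)")
conn.executemany("INSERT INTO u VALUES (?,?)",
                 [(1, "2014-10-08 14:33:20"), (2, "2014-10-08 04:24:14")])
rows = conn.execute("""
    WITH RECURSIVE hrs(h) AS (
        SELECT 0 UNION ALL SELECT h + 1 FROM hrs WHERE h < 23
    ),
    cnt AS (
        SELECT CAST(strftime('%H', creationDate) AS INT) AS h,
               COUNT(DISTINCT userId) AS n
        FROM u GROUP BY 1
    )
    SELECT hrs.h, COALESCE(cnt.n, 0) FROM hrs
    LEFT JOIN cnt ON hrs.h = cnt.h ORDER BY hrs.h
""").fetchall()
print(len(rows))  # 24 rows, one per hour, zeros where no user signed up
```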
|
I asked a similar question on dba just this morning...<https://dba.stackexchange.com/questions/86435/filling-in-date-holes-in-grouped-by-date-sql-data>.
You can use my GetSequence function, or create a Numbers table. I haven't done my own testing yet to validate what was suggested in my scenario.
|
Query to get results for every hour of the day even data if not present
|
[
"",
"sql",
"sql-server",
""
] |
I believe GROUP BY in SQL would make DISTINCT unnecessary because if you group by a column then there will only be one of each value in the result, but I want to make sure I am understanding the keywords correctly. Is it the case that you would not need to do this:
```
SELECT DISTINCT a_uuid
FROM table
GROUP BY a_uuid
HAVING NOT bool_or(type = 'Purchase')
```
because you could just drop the DISTINCT completely?
|
You do not need the `distinct` in this query. In general, you don't need `distinct` with `group by`. There are actually some queries where `distinct` and `group by` go together, but they are very rare.
You need `group by` in this query, because you are using an aggregation function in the `having` clause. So, use:
```
SELECT a_uuid
FROM table
GROUP BY a_uuid
HAVING NOT bool_or(type = 'Purchase')
```
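A quick SQLite sketch of the point (`bool_or` is Postgres-specific, so `MAX` over a boolean expression stands in for it here; the data is invented). `GROUP BY` already yields one row per `a_uuid`, so `DISTINCT` adds nothing:

```python
# Each a_uuid appears at most once in the output, with no DISTINCT needed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a_uuid TEXT, type TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [("u1", "Purchase"), ("u1", "Refund"),
                  ("u2", "Refund"), ("u2", "Refund")])
rows = conn.execute("""
    SELECT a_uuid FROM t
    GROUP BY a_uuid
    HAVING NOT MAX(type = 'Purchase')
""").fetchall()
print(rows)  # [('u2',)] -- u1 is dropped, u2 appears exactly once
```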
|
As long as aggregate functions aren't involved you can use DISTINCT instead of GROUP BY.
Use either DISTINCT or GROUP BY - not both!
|
Does GROUP BY inherently imply DISTINCT?
|
[
"",
"sql",
"postgresql",
""
] |
I have a `users(id)` table and a `friends(user_id1, user_id2)` where `user_id1` and `user_id2` are both foreign keys to the `users` table.
What relationship is this ?
I think it is One to Many, but since I have the user id twice it's a bit confusing.
(I have to make a diagram and don't know what relationship to use).
|
This UML class diagram describes the situation ([yuml.me script (click to edit)](http://yuml.me/diagram/plain/class/edit/[%3C%3CTable%3E%3E;users%7Bbg:lightskyblue%7D]%202%20%3C-%20*%20[%3C%3CTable%3E%3E;friends%7Bbg:gold%7D]))

In UML terms the line's name is [navigable association](http://www.uml-diagrams.org/association.html#navigability) with known [multiplicity](http://www.uml-diagrams.org/multiplicity.html).
In order to draw a diagram you don't need to give the relationship another "name"
|
There are two "one-to-many" relationships...
Users.id --> 1 to many --> friends.userid1
Users.id --> 1 to many --> friends.userid2
|
Relationship between tables with two foreign keys
|
[
"",
"mysql",
"sql",
"database",
"orm",
"uml",
""
] |
How do I select the top 10 rows from the following query?
```
select dim_product.PRODUCT_NAME, sum(units_sold * (revenue - cost))
from FT_SALES
INNER JOIN dim_product
ON ft_sales.prod_dwh_id = dim_product.product_dwh_id
group by dim_product.PRODUCT_NAME
order by 2 desc;
```
--- I tried the suggestion to use ROWNUM.
Without ROWNUM I get the following results:
```
Super Soft Bulk - 2 Litres 48223,5
Super Soft - 1 Litre 38842,16
Super Soft - 500ml 32879,29
Detafast Stain Remover - 100ml 29382
Detafast Stain Remover - 800ml 27336
Detafast Stain Remover - 200ml 24516
Pure Soft Detergent - 500ml 21147
Pure Soft Detergent - 250ml 20675,6
Pure Soft Detergent - 100ml - extra 19693,5
Pure Soft Detergent - 200ml 17563,74
Super Soft - 250ml 17430
Super Soft - Product Sample -2647,8
```
When I added ROWNUM like this:
```
select dim_product.PRODUCT_NAME, sum(units_sold * (revenue - cost))
from FT_SALES
INNER JOIN dim_product
ON ft_sales.prod_dwh_id = dim_product.product_dwh_id
where ROWNUM<=10
group by dim_product.PRODUCT_NAME
order by 2 desc ;
```
I get the results:
```
Super Soft - 500ml 670,08
Pure Soft Detergent - 100ml - extra 597
Pure Soft Detergent - 200ml 338,3
Detafast Stain Remover - 100ml 246
Detafast Stain Remover - 200ml 213
Super Soft - 1 Litre 179,64
```
so it is not a solution in my case.
|
When selecting using `rownum` in Oracle, you need to use a subquery:
```
select p.*
from (select dim_product.PRODUCT_NAME, sum(units_sold * (revenue - cost))
from FT_SALES INNER JOIN
dim_product
ON ft_sales.prod_dwh_id = dim_product.product_dwh_id
group by dim_product.PRODUCT_NAME
order by 2 desc
) p
where rownum <= 10
```
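The reason the subquery matters is that the ordering must happen before the row filter. In SQLite the analogue is `LIMIT` after `ORDER BY`; this toy sketch (invented data, not the poster's tables) shows the sort-then-take-N shape:

```python
# Aggregate, sort by the aggregate, then take the top N rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, profit REAL)")
conn.executemany("INSERT INTO sales VALUES (?,?)",
                 [("A", 5), ("A", 5), ("B", 30), ("C", 1), ("D", 100)])
top2 = conn.execute("""
    SELECT product, SUM(profit) AS total
    FROM sales GROUP BY product
    ORDER BY total DESC
    LIMIT 2
""").fetchall()
print(top2)  # [('D', 100.0), ('B', 30.0)]
```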
|
You can use ROWNUM <= 10 along with your query.
|
TOP 10 rows in ORACLE when query contains GROUP BY clause
|
[
"",
"sql",
"oracle",
""
] |
If I have a very basic stored procedure like this in SQL Server 2012:
```
Create Procedure [MyStoredProcedure] (@input as int)
As
Select 1 as col1, 2 as Col2
Into #mytemp1
Drop Table #mytemp1
Select 3 as col1, 4 as Col2, 5 as Col3
Into #mytemp1
Drop Table #mytemp1
```
and I try and run it it fails with the error 'There is already an object named '#mytemp1' in the database.'
If this wasn't a stored procedure I could use GO after I initially drop the temp table.
Is there a way around this?
Thanks
|
Since there is no explicit need to re-use the same temp table name, just use unique names for each temp table.
Well, *technically* you could do something like the following:
```
EXEC('
Select 1 as col1, 2 as Col2
Into #mytemp1
something else related to #mytemp1
');
EXEC('
Select 3 as col1, 4 as Col2, 5 as Col3
Into #mytemp1
something else related to #mytemp1
');
```
That would not fail as each temp table is isolated in a subprocess that is not parsed until the `EXEC` actually runs. And, the temp table disappears when the `EXEC` is finished (hence no need for the explicit `DROP` statements). But in most cases this is not a practical solution since the typical purpose of creating a temp table is to carry that data along to other operations, but here those temp tables are only viable within their particular `EXEC` context, so a bit limited.
|
If more than one temporary table is created inside a single stored procedure or batch, they must have different names.
```
Create Procedure [MyStoredProcedure] (@input as int)
As
begin
Select 1 as col1, 2 as Col2
Into #mytemp1
Drop Table #mytemp1
Select 3 as col1, 4 as Col2, 5 as Col3
    Into #mytemp1 -- you can't use the same temp table name twice in a stored procedure
Drop Table #mytemp1
end
```
And this is not because the table has been dropped and can't be re-created; this code never gets executed, the parser actually sees you trying to create the same table twice
|
Using temp tables with the same name in a stored procedure
|
[
"",
"sql",
"sql-server",
"stored-procedures",
"temp-tables",
"temp",
""
] |
The following script gives me delivery address information. But due to bad system set up over several years the NAME1 (address name) has several variations.
Is there a way I can change the NAME1 to show the most commonly used NAME1 but keep all of the other row information?
```
select ad.name1,
ad.town,
replace(ad.zip, ' ', '') zip,
ad.country,
(select sum(dp1.palette)
from oes_dpos dp1
where dp1.dheadnr = dh.dheadnr) count_pallet,
(select sum(dp2.delqty)
from oes_dpos dp2
where dp2.dheadnr = dh.dheadnr) del_units,
((select sum(dp3.delqty)
from oes_dpos dp3
where dp3.dheadnr = dh.dheadnr) * sp.nrunits) del_discs
from oes_dhead dh,
oes_dpos dp,
oes_address ad,
oes_opos op,
SCM_PACKTYP sp
where dh.actshpdate >= '01-DEC-2013'
and dh.actshpdate - 1 < '30-NOV-2014'
and dh.shpfromloc = 'W'
and ad.key = dh.delnr
and ad.adr = dh.deladr
and dp.dheadnr = dh.dheadnr
and op.ordnr = dp.ordnr
and op.posnr = dp.posnr
and op.packtyp = sp.packtyp
and upper(ad.name1) not like '%SONY%'
and replace(ad.zip, ' ', '') = 'CO77DW'
```
|
Yes, you need to count the occurrences of NAME1.
Change it as follows:
```
select name1, count(name1) occurrences, town, zip, country, count_pallet, del_units, del_discs from (
select ad.name1,
ad.town,
replace(ad.zip, ' ', '') zip,
ad.country,
(select sum(dp1.palette)
from oes_dpos dp1
where dp1.dheadnr = dh.dheadnr) count_pallet,
(select sum(dp2.delqty)
from oes_dpos dp2
where dp2.dheadnr = dh.dheadnr) del_units,
((select sum(dp3.delqty)
from oes_dpos dp3
where dp3.dheadnr = dh.dheadnr) * sp.nrunits) del_discs
from oes_dhead dh,
oes_dpos dp,
oes_address ad,
oes_opos op,
SCM_PACKTYP sp
where dh.actshpdate >= '01-DEC-2013'
and dh.actshpdate - 1 < '30-NOV-2014'
and dh.shpfromloc = 'W'
and ad.key = dh.delnr
and ad.adr = dh.deladr
and dp.dheadnr = dh.dheadnr
and op.ordnr = dp.ordnr
and op.posnr = dp.posnr
and op.packtyp = sp.packtyp
and upper(ad.name1) not like '%SONY%'
and replace(ad.zip, ' ', '') = 'CO77DW'
)
group by name1, town, zip, country, count_pallet, del_units, del_discs
```
Remember, I am using your query as a subquery since I do not know what your DDL looks like.
You can even sort the result by the count.
|
Easiest would be to just replace `ad.name1` with
```
( SELECT name1 FROM
( SELECT name1, COUNT(*) FROM oes_address GROUP BY name1 ORDER BY 2 DESC )
WHERE ROWNUM = 1
)
```
This counts all occurrences of name1 in the whole table and sorts by that count, so the most common name1 is in the first row. This row is selected.
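The same "most common value" idea is easy to try in SQLite, where `LIMIT 1` stands in for `WHERE ROWNUM = 1` (sample names invented):

```python
# Group by the name, order by descending frequency, take the first row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oes_address (name1 TEXT)")
conn.executemany("INSERT INTO oes_address VALUES (?)",
                 [("ACME Ltd",), ("ACME Ltd",), ("Acme Limited",), ("ACME",)])
most_common = conn.execute("""
    SELECT name1 FROM oes_address
    GROUP BY name1 ORDER BY COUNT(*) DESC LIMIT 1
""").fetchone()[0]
print(most_common)  # ACME Ltd
```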
|
oracle sql, change value to most common value
|
[
"",
"sql",
"oracle",
""
] |
**Situation**
I have two queries that select information from various related tables. One selects all the records for `year = 2012` and the other for `year = 2013`
```
SELECT c.Company_ID, c.Company_Name, e.Employee_ID, e.Employee_Name, p.Position
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE (((r.Report_Year)=2012))
```
and
```
SELECT c.Company_ID, c.Company_Name, e.Employee_ID, e.Employee_Name, p.Position
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE (((r.Report_Year)=2013))
```
**Question**
How do I construct a query to return the `Company_ID`, `Company_Name`, `Employee_ID`, `Employee_Name`, and `Position` that match in both queries (i.e. that exists in 2012 AND 2013)?
---
**Update**
Using a sub-query I can select the `company_id` from 2012 only if it is in 2013. How can I now make sure it only selects also the `employee_id` and `position` if these are also the same in 2013 as they are in 2012?
```
SELECT c.Company_ID, c.Company_Name, e.Employee_ID, e.Employee_Name, p.Position
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE r.Report_Year=2013
AND
c.Company_ID IN
(
SELECT c.Company_ID
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE r.Report_Year=2012
)
```
|
This is clunky, there might be a more elegant way, but I think this accomplishes what you want:
```
SELECT DISTINCT * FROM
(SELECT c.Company_ID, c.Company_Name, e.Employee_ID, e.Employee_Name, p.Position_ID, p.Position
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE (((r.Report_Year)=2012))
)
t1 INNER JOIN
(SELECT c.Company_ID, c.Company_Name, e.Employee_ID, e.Employee_Name, p.Position_ID, p.Position
FROM ((tbl_Company AS c
INNER JOIN tbl_Employee AS e ON c.Company_ID = e.Company_ID)
INNER JOIN tbl_Position AS p ON e.Employee_ID = p.Employee_ID)
INNER JOIN tbl_Report AS r ON (r.Report_ID = p.Report_ID) AND (c.Company_ID = r.Company_ID)
WHERE (((r.Report_Year)=2013))
)
t2 ON t1.Company_ID = t2.Company_ID
AND
t1.Employee_ID = t2.Employee_ID
AND t1.Position_ID = t2.Position_ID
```
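For engines that support it, `INTERSECT` expresses the same matching directly (Access itself lacks `INTERSECT`, so this SQLite sketch with a single flattened table is illustrative only; the data is invented):

```python
# Rows present in both the 2012 and the 2013 projection survive the INTERSECT.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE r (company TEXT, employee TEXT, pos TEXT, yr INT)")
conn.executemany("INSERT INTO r VALUES (?,?,?,?)", [
    ("Acme", "Ann", "CEO", 2012), ("Acme", "Ann", "CEO", 2013),
    ("Acme", "Bob", "CTO", 2012),   # 2012 only
    ("Beta", "Cid", "CFO", 2013),   # 2013 only
])
both = conn.execute("""
    SELECT company, employee, pos FROM r WHERE yr = 2012
    INTERSECT
    SELECT company, employee, pos FROM r WHERE yr = 2013
""").fetchall()
print(both)  # [('Acme', 'Ann', 'CEO')]
```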
|
There's intersect, which is similar to union: <http://www.sqlguides.com/sql_intersect.php>
> SQL INTERSECT is query that allows you to select related information
> from 2 tables, this is combine 2 SELECT statement into 1 and display
> it out.
>
> INTERSECT produces rows that appear in both queries.that means
> INTERSECT command acts as an AND operator (value is selected only if
> it appears in both statements).
You can make it easier for people to try out your SQL if you use SQL Fiddle. I setup a very simple table to show an example here: <http://sqlfiddle.com/#!6/baf67/3>
|
Access SQL - Matching records from two queries
|
[
"",
"sql",
"ms-access-2013",
""
] |
I have to display the duration between StartDate and EndDate.
Using **DATEDIFF** with week gives me an int value like 3 weeks or 2 weeks,
but if **DATEDIFF** in days is 23, I want the result formatted like **3 Week, 2 days**.
|
```
SELECT DATEDIFF(WEEK, @StartDate, @EndDate) as WeekCount,
DATEDIFF(DAY, @StartDate, @EndDate) % 7 as DayCount
```
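The arithmetic behind the chosen answer, sketched in plain Python: the day difference splits into whole weeks plus a remainder via integer division and modulo.

```python
# 23 days = 3 whole weeks plus 2 leftover days.
total_days = 23
weeks, days = divmod(total_days, 7)
print(f"{weeks} Week, {days} days")  # 3 Week, 2 days
```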
|
try using
```
SELECT DATEDIFF('2010-10-08 18:23:13', '2010-09-21 21:40:36') AS days;
```
to get the days and then just divide by 7 to get the weeks, and % to get the days
|
date difference in week and days in sql
|
[
"",
"sql",
"sql-server-2008",
""
] |
I want to make a SELECT where I have multiple GUIDs that I use with an IN operator.
Why this query does not return all rows where Table1UID equals each item from @table1IDs?
```
DECLARE @table1IDs NVARCHAR(100)
SET @table1IDs = 'b15f554f-90ad-4fab-b3ac-a27bc68f3b21, 72213eb7-1041-408c-9ea1-35a667285c9b'
SELECT * FROM table1
WHERE Table1UID in (@table1IDs)
```
It only returns the row where ID is 'b15f554f-90ad-4fab-b3ac-a27bc68f3b21'.
Mention: type of Table1UID is UNIQUEIDENTIFIER
|
The way you stored the values is wrong, I guess. Use a table variable instead:
```
declare @temp table(ID nvarchar(1000))
insert into @temp values('b15f554f-90ad-4fab-b3ac-a27bc68f3b21')
insert into @temp values('72213eb7-1041-408c-9ea1-35a667285c9b')
SELECT * FROM table1
WHERE Table1UID in (select ID from @temp)
```
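A sketch of the underlying problem in SQLite (SQL Server's `uniqueidentifier` conversion behaves slightly differently, but the core issue is the same): the variable holds one string containing commas, so `IN` compares against that single value instead of a list.

```python
# One bound parameter is one value; two parameters make a real IN-list.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (uid TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("g1",), ("g2",)])

one_string = "g1, g2"
hit_as_one = conn.execute("SELECT count(*) FROM t WHERE uid IN (?)",
                          (one_string,)).fetchone()[0]
hit_as_two = conn.execute("SELECT count(*) FROM t WHERE uid IN (?,?)",
                          ("g1", "g2")).fetchone()[0]
print(hit_as_one, hit_as_two)  # 0 2
```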
|
```
DECLARE @QRY NVARCHAR(MAX)
SET @QRY = 'SELECT * FROM table1 WHERE Table1UID IN (' + @table1IDs + ')'
EXEC SP_EXECUTESQL @QRY
```
**UPDATE**
If you are selecting dynamic values, use the below code and that will work
```
SELECT @table1IDs = COALESCE (@table1IDs + ',[' + YOURCOLUMN + ']',
'[' + YOURCOLUMN + ']')
FROM (SELECT DISTINCT YOURCOLUMN FROM YOURTABLE) PV
ORDER BY YOURCOLUMN
```
|
How to select rows from a table when having a list of GUIDs?
|
[
"",
"sql",
"sql-server",
""
] |
I'm trying to figure out an efficient way to identify the set of "record high" values from a set of data. This is more complicated than just finding the maximum value since I want to find all values that were the maximum/record high value at the time they were recorded. So to simplify the problem as much as possible, imagine a model/database table like:
```
Temperatures(date, value)
```
with values like:
```
*1/1/2001: 79
*1/2/2001: 81
1/3/2001: 81
1/4/2001: 80
*1/5/2001: 82
1/6/2001: 81
*1/7/2001: 90
```
I'm trying to find a query that would help identified the starred ("\*") records above as "records". Either a rails finder or raw sql statement would work fine. The best I've gotten so far is a query that will tell me if a given value is a record, but that doesn't scale very well.
|
You can use `not exists` to select rows where another row with both a higher value and earlier date does not exist
```
select date, value
from mytable t1
where not exists (
select 1 from mytable t2
where t2.value >= t1.value
and t2.date < t1.date
)
```
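The `NOT EXISTS` query run over the question's sample temperatures in SQLite; it returns exactly the starred "record high" rows:

```python
# A row is a record iff no earlier row has a value >= its own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temperatures (d TEXT, value INT)")
conn.executemany("INSERT INTO temperatures VALUES (?,?)", [
    ("2001-01-01", 79), ("2001-01-02", 81), ("2001-01-03", 81),
    ("2001-01-04", 80), ("2001-01-05", 82), ("2001-01-06", 81),
    ("2001-01-07", 90),
])
records = conn.execute("""
    SELECT d, value FROM temperatures t1
    WHERE NOT EXISTS (SELECT 1 FROM temperatures t2
                      WHERE t2.value >= t1.value AND t2.d < t1.d)
    ORDER BY d
""").fetchall()
print(records)
# [('2001-01-01', 79), ('2001-01-02', 81), ('2001-01-05', 82), ('2001-01-07', 90)]
```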
|
In Standard-SQL:
```
with temp (d, t, r) as
(select d, t, row_number() over (order by d)
from t),
data (d, t, r) as
(select d, t, r
from temp
where r = 1
union all
select case when t.t > d.t then t.d else d.d end,
case when t.t > d.t then t.t else d.t end,
t.r
from temp t
inner join data d on d.r+1 = t.r)
select min(d), min(t)
from data
group by d, t
```
|
Efficiently find "record high" values
|
[
"",
"sql",
"ruby-on-rails",
"algorithm",
"ruby-on-rails-4",
"query-optimization",
""
] |
I have a column that contains status changes, but I don't want to return the whole string. Is there any way to return just a part of a string after a certain keyword? Every value of the column is in the format of From X to Y where X and Y could be a single word or multiple words. I've looked at the `substring` and `trim` functions, but those seem to require knowledge of how many spaces you want to keep.
Edit: I want to keep part Y from the status and get rid of 'From X to'.
|
You can use a combination of Charindex and Substring and Len to do it.
Try this:
```
select SUBSTRING(field,charindex('keyword',field), LEN('keyword'))
```
So this will find Flop and extract it wherever it is in the field
```
select SUBSTRING('bullflop',charindex('flop','bullflop'), LEN('flop'))
```
EDIT:
To get the remainder then just set LEN to the field `LEN(field)`
```
declare @field varchar(200)
set @field = 'this is bullflop and other such junk'
select SUBSTRING(@field,charindex('flop',@field), LEN(@field) )
```
EDIT 2:
Now I understand, here is a quick and dirty version...
```
declare @field varchar(200)
set @field = 'From X to Y'
select Replace(SUBSTRING(@field,charindex('to ',@field), LEN(@field) ), 'to ','')
```
Returns:
`Y`
EDIT 3:
Cory is right, this is cleaner.
```
declare @field varchar(200) = 'From X to Y'
declare @keyword varchar(200) = 'to '
select SUBSTRING(@field,charindex(@keyword,@field) + LEN(@keyword), LEN(@field) )
```
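The same arithmetic in plain Python, to make the CHARINDEX/LEN offsets concrete (Python indexes are 0-based where CHARINDEX is 1-based, but the offsets cancel out the same way):

```python
# Find the keyword, then keep everything after keyword-start + keyword-length.
field = "From Authorized to Auth Not Needed"
keyword = " to "
start = field.index(keyword)           # like CHARINDEX, but 0-based
result = field[start + len(keyword):]  # drop everything through the keyword
print(result)  # Auth Not Needed
```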
|
Other answers are fine, but I like the `STUFF()` function and it doesn't seem to be well-known, so here's another option:
```
DECLARE @field VARCHAR(50) = 'From Authorized to Auth Not Needed'
,@keyword VARCHAR(50) = ' to '
SELECT STUFF(@field,1,CHARINDEX(@keyword,@field)+LEN(@keyword),'')
```
`STUFF()` is like `SUBSTRING()` and `REPLACE()` combined, you feed it a string, a start position and a length, and can replace that with anything or in your case, nothing `''`.
From MSDN:
```
STUFF ( character_expression , start , length , replaceWith_expression )
```
|
Trim String After Keyword
|
[
"",
"sql",
"sql-server",
"sql-server-2012",
""
] |
I'm executing lots of sql selects as follows. Imagine we have a database with flights, where each flight might have an outbound and inbound airport of course, a departure date, a number of stops between origin and destination (on long flights), and of course price.
I now want to select a specific route, and select the one with the lowest count of stops, and of course the bestprice among them.
```
CREATE TABLE flights(
    id integer,
    outbound character varying,
    inbound character varying,
    date timestamp,
    stops integer,
    price numeric
);
CREATE INDEX my_idx ON flights (outbound, inbound, date, stops, price);

select * from flights where outbound = 'SFO' and inbound = 'SYD' and date = '2015-10-10' and stops < 2 order by stops asc, price asc;
```
Problem: the costs using `explain-analyze` are quite high:
```
Sort (cost=9.78..9.79 rows=1 width=129) (actual time=0.055..0.055 rows=4 loops=1)
Sort Key: stops, price
Sort Method: quicksort Memory: 26kB
-> Index Scan using my_idx (cost=0.42..9.77 rows=1 width=129) (actual time=0.039..0.041 rows=4 loops=1)
Index Cond: ((date = '2015-10-10'::date) AND ((outbound)::text = 'SFO'::text) AND (stops < 2) AND ((inbound)::text = 'SYD'::text))
Total runtime: 0.079 ms
```
If I just sort by price without stops, the costs are OK (0.42). But sorting by stops somehow increases the costs significantly.
How can I reduce the costs?
`postgresql 9.3.2`
|
Judging from the given numbers, your alternate query (*"If I just sort by price without stops"*) is actually **slower**, and you misread the numbers. `0.079 ms` vs. `0.42` (?).
That also makes sense, because your first query matches the sort order of the index perfectly.
**You already have the perfect index**. The suggestion to remove `price` is unfounded. The additional column removes the cost for the sort step: `time=0.055..0.055` as you can see in the plan.
Either way, it should hardly matter at all. As soon as you have reduced the number of rows retrieved to a *small* number (with predicates on the leading columns of the index), the rest is cheap either way.
To get more interesting results, don't test with `stops < 2` (which only leaves 0 and 1 stops), try with a bigger number to see any (probably small) difference.
Actually, since almost all columns are in the index already, I would try and add the one missing column `id`, too - if you can get [**index-only scans**](https://wiki.postgresql.org/wiki/Index-only_scans) out of this (Postgres 9.2+, read the Postgres Wiki at the linked page):
```
CREATE INDEX my_idx ON flights (outbound, inbound, date, stops, price, id);
```
```
SELECT id, outbound, inbound, date, stops, price
FROM ...
```
|
This is your query:
```
select *
from flights
where outbound = 'SFO' and inbound = 'SYD' and date = '2015-10-10' and stops < 2
order by stops asc, price asc;
```
The optimal index is: `flights(outbound, inbound, date, stops)`. This applies to the `where` clause. I don't know if there is a way to eliminate the `order by`, given the `where` but the sort should not be a big deal unless there are thousands of flights on that day.
|
How to reduce the costs of an sql select with order by?
|
[
"",
"sql",
"database",
"postgresql",
""
] |
I have an sql query:
```
select el as COL_1, sum(volume) as COL_2
from table1
where el like 'False%' and Month = 'SEP' and year(RollUpDate) = '2014' group by el
union
select el as COL_1, sum(volume) as COL_2
from table1
where el = 'True' and Month = 'SEP' and year(RollUpDate) = '2014' group by el
```
which will bring back:
```
true | 12
false a| 12
false b| 13
false 2| 3
```
What I am trying to do is combine the falses so that I can bring back a clearer result with one false and a sum of the false volume
i.e.
```
true | 12
false | 55
```
Any help appreciated, sql server 2008 btw
EDIT: Well, these solutions are great and most work as described, but I cannot use them with my rigid JDBC setup, so I cannot use case statements. Jarlh's answer is close, but when a null result is returned it still has those columns which have been created. If there were a simple way for this to return null without using case statements, it would be perfect.
|
```
select 'False' as COL_1, sum(volume) as COL_2
from table1 where el like 'False%' and Month = 'SEP' and year(RollUpDate) = '2014'
union all
select 'True' as COL_1, sum(volume) as COL_2
from table1
where el = 'True' and Month = 'SEP' and year(RollUpDate) = '2014'
```
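A runnable SQLite sketch of the accepted idea: selecting the literal `'False'` instead of the raw column collapses all `'False%'` rows into one (sample values taken from the question's result set):

```python
# Sum everything LIKE 'False%' under one label, keep 'True' as its own row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (el TEXT, volume INT)")
conn.executemany("INSERT INTO table1 VALUES (?,?)",
                 [("True", 12), ("False a", 12), ("False b", 13), ("False 2", 3)])
rows = conn.execute("""
    SELECT 'False' AS COL_1, SUM(volume) FROM table1 WHERE el LIKE 'False%'
    UNION ALL
    SELECT 'True', SUM(volume) FROM table1 WHERE el = 'True'
""").fetchall()
print(rows)  # [('False', 28), ('True', 12)]
```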
|
You can use your query as if it were a table, and select from it much the same as you would from a regular or temp table.
```
SELECT t.COL_1, sum(t.COL_2) FROM
((select el as COL_1, sum(volume) as COL_2
from table1
where el like 'False%' and Month = 'SEP' and year(RollUpDate) = '2014' group by el)
union all
(select el as COL_1, sum(volume) as COL_2
from table1
where el = 'True' and Month = 'SEP' and year(RollUpDate) = '2014' group by el)) t
WHERE t.COL_1 = 'false'
GROUP BY t.COL_1
```
|
Combining rows in a query
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have a `WHERE` clause that logically I want it to bring back results where the first digit of the short names don't match.
The short names can be:
```
1.1
1.2
2.1
2.2
```
Sample data WITHOUT `where` clause:
```
+-----------+--------+------+
| Shortname | number | ID |
+-----------+--------+------+
| 2.1 | 1 | 3333 |
| 1.1 | 60 | 3333 |
| 1.2 | 90 | 3333 |
| 2.1 | 50 | 4444 |
| 2.2 | 30 | 4444 |
| 1.1 | 80 | 5555 |
| 1.2 | 10 | 5555 |
+-----------+--------+------+
```
Expected data WITH `where` clause:
```
+-----------+--------+------+
| Shortname | number | ID |
+-----------+--------+------+
| 2.1 | 1 | 3333 |
| 1.1 | 60 | 3333 |
| 1.2 | 90 | 3333 |
+-----------+--------+------+
```
I tried the code:
```
SELECT shortname, number, id
FROM table
WHERE ((left(shortname,1) like '%1%') != ((left(shortname,1) like '%2%')
```
But it generates the error:
> Msg 102, Level 15, State 1, Line 21
> Incorrect syntax near '!'.
**Clarification UPDATE**
I need the results per ID. In the sample above there are IDs 3333, 4444 and 5555. I want to bring back only ID `3333`, because its `shortname` values do not all share a single first character: it contains both `1` and `2`.
I don't want to see the other IDs, as their short names all match on the first digit (1 = 1, 2 = 2, etc.).
|
If you want to return the `ID`s that have multiple distinct first characters in the `shortname`, then first look at getting a distinct count of the rows:
```
select id
from yourtable
group by id
having count(distinct left(shortname, 1)) > 1;
```
This should return to you the rows that have both a 2 and a 1 as the first character when associated with the IDs. Then you can use this to return the rest of the data:
```
;with cte as
(
select id
from yourtable
group by id
having count(distinct left(shortname, 1)) > 1
)
select
t.shortname,
t.number,
t.id
from yourtable t
inner join cte c
on t.id = c.id;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/d75723/1). This returns:
```
| SHORTNAME | NUMBER | ID |
|-----------|--------|------|
| 2.1 | 1 | 3333 |
| 1.1 | 60 | 3333 |
| 1.2 | 90 | 3333 |
```
A more flexible option would be to get the characters before the decimal and verify that you have a distinct count of all the digits. To do this, you'll use a function like [`CHARINDEX`](http://msdn.microsoft.com/en-us/library/ms186323.aspx) along with the `LEFT`.
```
;with cte as
(
select id
from yourtable
group by id
having count(distinct left(shortname, charindex('.', shortname)-1)) > 1
)
select
t.shortname,
t.number,
t.id
from yourtable t
inner join cte c
on t.id = c.id;
```
See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/907356/2). This will return:
```
| SHORTNAME | NUMBER | ID |
|-----------|--------|------|
| 2.1 | 1 | 3333 |
| 1.1 | 60 | 3333 |
| 1.2 | 90 | 3333 |
| 14.1 | 5 | 6666 |
| 14.2 | 78 | 6666 |
| 24.1 | 89 | 6666 |
```
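The `HAVING COUNT(DISTINCT ...)` filter is easy to verify in SQLite against the question's sample rows (`substr` stands in for `LEFT`):

```python
# Only IDs whose shortnames start with more than one distinct character survive.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (shortname TEXT, number INT, id INT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    ("2.1", 1, 3333), ("1.1", 60, 3333), ("1.2", 90, 3333),
    ("2.1", 50, 4444), ("2.2", 30, 4444),
    ("1.1", 80, 5555), ("1.2", 10, 5555),
])
ids = [r[0] for r in conn.execute("""
    SELECT id FROM t
    GROUP BY id
    HAVING COUNT(DISTINCT substr(shortname, 1, 1)) > 1
""")]
print(ids)  # [3333]
```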
|
You are comparing two conditions! You might want to do something like this:
```
SELECT shortname, number, id
FROM table
WHERE ((left(shortname,1) like '%1%') OR ((left(shortname,1) like '%2%')))
```
Anyway, for your problem:
```
SELECT shortname, number, id
FROM (
    SELECT shortname, number, id, (SELECT COUNT(*)
                                   FROM table AS TI
                                   WHERE LEFT(ShortName, 1) = RIGHT(ShortName, 1)
                                     AND TI.ID = T.ID) AS C
    FROM table AS T) AS TT
WHERE C = 0
```
But this leaves out the ID 3333 too, because it has one shortname = 1.1.
|
SQL WHERE statement syntax error when using distinct count left
|
[
"",
"sql",
"sql-server-2008",
""
] |
```
SELECT
E.`employee_id`,
E.`full_name`,
LE.`no_of_leaves` AS AllocatedLeaves,
MLLT.`leave_type` AS LeaveTypeName,
COUNT(SELECT * FROM leave_approval WHERE employee_id = 1) AS TotalLeavesTaken
FROM employee E
INNER JOIN leave_entitlement LE
ON E.`employee_id` = LE.`employee_id`
INNER JOIN `ml_leave_type` MLLT
ON MLLT.`ml_leave_type_id` = LE.`ml_leave_type_id`
LEFT JOIN leave_approval LA
ON E.`employee_id` = LA.`employee_id`
LEFT JOIN leave_application LAPP
ON LAPP.`application_id` = LA.`leave_application_id`
LEFT JOIN ml_leave_type MLLTLA
ON MLLTLA.`ml_leave_type_id` = LAPP.`ml_leave_type_id`
```
I am getting a syntax error near `COUNT`, but I tried to find the syntax error and could not find any.
```
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'select * from leave_approval where employee_id = 1) AS TotalLeavesTaken
from emp' at line 6
```
Is it really a syntax error, or is there something I am missing here?
|
This line:
```
COUNT(SELECT * FROM leave_approval WHERE employee_id = 1) AS TotalLeavesTaken
```
is incorrect. You cannot do a count on a `select *` subquery without placing it in parentheses, and even then you'd need a `group by` and additional logic. The better approach would be:
```
(Select count(*) from leave_approval where employee_id = 1) AS TotalLeavesTaken
```
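As a runnable sketch of why the scalar-subquery form works (Python's sqlite3 here — the rule is the same in MySQL; the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leave_approval (employee_id INT)")
conn.executemany("INSERT INTO leave_approval VALUES (?)", [(1,), (1,), (2,)])

# A scalar subquery returns exactly one value, so it is legal in a SELECT
# list; COUNT(SELECT ...) is not valid syntax.
row = conn.execute("""
    SELECT (SELECT COUNT(*) FROM leave_approval WHERE employee_id = 1)
           AS TotalLeavesTaken
""").fetchone()
print(row[0])  # 2
```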
|
change
```
COUNT(SELECT * FROM leave_approval WHERE employee_id = 1) AS TotalLeavesTaken
```
to
```
(SELECT count(*) FROM leave_approval WHERE employee_id = 1) AS TotalLeavesTaken
```
|
MYSQL | Getting syntax error for using subquery in COUNT()
|
[
"",
"mysql",
"sql",
"select",
"count",
"group-by",
""
] |
I have a table **product**:
```
id | owner_id | last_activity | box_id
------------------------------------
1 | 2 | 12/19/2014 | null
2 | 2 | 12/13/2014 | null
3 | 2 | 08/11/2014 | null
4 | 2 | 12/11/2014 | 99
5 | 2 | null | 99
6 | 2 | 12/15/2014 | 99
7 | 2 | null | 105
8 | 2 | null | 105
9 | 2 | null | 105
```
The only variable that I have is **owner\_id**.
I need to select all products of a user, but if the product is in a box then only latest one should be selected.
Sample output for owner = 2 is following:
```
id | owner_id | last_activity | box_id
------------------------------------
1 | 2 | 12/19/2014 | null
2 | 2 | 12/13/2014 | null
3 | 2 | 08/11/2014 | null
6 | 2 | 12/15/2014 | 99
7 | 2 | null | 105
```
I'm not able to find a way to select the latest product from a box.
My current query, which does not return correct value, but can be executed:
```
SELECT p.* FROM product p
WHERE p.owner_id = 2
AND (
p.box IS NULL
OR (
p.box IS NOT NULL
AND
p.id = ( SELECT MAX(pp.id) FROM product pp
WHERE pp.box_id = p.box_id )
)
```
I tried with dates:
```
SELECT p.* FROM product p
WHERE p.owner_id = 2
AND (
p.box IS NULL
OR (
p.box IS NOT NULL
AND
p.id = ( SELECT * FROM (
SELECT pp.id FROM product pp
WHERE pp.box_id = p.box_id
ORDER BY last_activity desc
) WHERE rownum = 1
)
)
```
Which gives error: **p.box\_id** is undefined as it's inside 2nd subquery.
Do you have any ideas how can I solve it?
|
The [`ROW_NUMBER`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm) analytical function might help with such queries:
```
SELECT "owner_id", "id", "box_id", "last_activity" FROM
(
SELECT "owner_id", "id", "box_id", "last_activity",
ROW_NUMBER()
OVER (PARTITION BY "box_id" ORDER BY "last_activity" DESC NULLS LAST) rn
-- ^^^^^^^^^^^^^^^
-- descending order, reject nulls after not null values
-- (this is the default, but making it
-- explicit here for self-documentation
-- purpose)
FROM T
WHERE "owner_id" = 2
) V
WHERE rn = 1 or "box_id" IS NULL
ORDER BY "id" -- <-- probably not necessary, but matches your example
```
See <http://sqlfiddle.com/#!4/db775/8>
---
> there can be nulls as a value. If there are nulls in all products inside a box, then MIN(id) should be returned
Even if it is probably **not** a good idea to rely on `id` to order things, if you think you need that, you will have to change the `ORDER BY` clause to:
```
... ORDER BY "last_activity" DESC NULLS LAST, "id" DESC
-- ^^^^^^^^^^^
```
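Assuming a SQLite build with window-function support (3.25+), the same partition-and-pick-latest pattern can be sketched with Python's sqlite3; the table and column names follow the question, and `NULLS LAST` is emulated with an `IS NULL` sort key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product (id INT, owner_id INT, last_activity TEXT, box_id INT)")
conn.executemany("INSERT INTO product VALUES (?, ?, ?, ?)", [
    (4, 2, "2014-12-11", 99), (5, 2, None, 99), (6, 2, "2014-12-15", 99),
    (1, 2, "2014-12-19", None), (7, 2, None, 105),
])

rows = conn.execute("""
    SELECT id, box_id FROM (
        SELECT id, box_id,
               ROW_NUMBER() OVER (
                   PARTITION BY box_id
                   ORDER BY last_activity IS NULL, last_activity DESC, id DESC
               ) AS rn
        FROM product
        WHERE owner_id = 2
    )
    WHERE rn = 1 OR box_id IS NULL
    ORDER BY id
""").fetchall()
print(rows)  # the un-boxed product plus the latest product per box
```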
|
Use exists
```
SELECT
p.*
FROM
product p
WHERE
p.owner_id = 2 AND
( p.box IS NULL OR
(
p.box IS NOT NULL AND
NOT EXISTS
(
SELECT
pp.id
FROM
product pp
WHERE
pp.box_id = p.box_id AND
pp.last_activity > p.last_activity
)
)
)
```
|
Select last changed row in sub-query
|
[
"",
"sql",
"oracle",
"select",
""
] |
I have a table with the following information:
```
--------------------------------------
| client_name | supplier | completed |
--------------------------------------
| Acme_1 | Sup_1 | 0 |
| Acme_1 | Sup_1 | 0 |
| Acme_1 | Sup_1 | 1 |
| Acme_1 | Sup_2 | 0 |
| Acme_1 | Sup_1 | 1 |
| Acme_1 | Sup_1 | 0 |
| Acme_2 | Sup_1 | 0 |
| Acme_2 | Sup_1 | 0 |
| Acme_2 | Sup_3 | 0 |
| Acme_1 | Sup_1 | 1 |
| Acme_3 | Sup_2 | 0 |
| Acme_2 | Sup_3 | 0 |
--------------------------------------
```
What I need is a result that shows how many times `Sup_x` supplied to `Acme_x`. So for the above table I need an overview that looks like this:
```
--------------------------------------
| supplier | client_name | total |
--------------------------------------
| Sup_1 | Acme_1 | 6 |
| Sup_1 | Acme_2 | 2 |
| Sup_2 | Acme_1 | 1 |
| Sup_2 | Acme_3 | 1 |
| Sup_3 | Acme_2 | 2 |
--------------------------------------
```
But I'm completely lost. I've been trying `DISTINCT` and `GROUP BY`, but I'm not even getting close to what I need. For example:
```
SELECT
[supplier]
,[client_name]
,COUNT([client_name]) AS total
FROM data
GROUP BY
[client_name]
--,[supplier]
```
But this query doesn't really give what I need.
What query do I need to get the desired result?
|
Try this:
```
SELECT [supplier], [client_name], COUNT(1) AS total
FROM data
GROUP BY [supplier], [client_name];
```
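A runnable sketch of the two-column grouping, using Python's sqlite3 and the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (client_name TEXT, supplier TEXT, completed INT)")
conn.executemany("INSERT INTO data VALUES (?, ?, ?)", [
    ("Acme_1", "Sup_1", 0), ("Acme_1", "Sup_1", 0), ("Acme_1", "Sup_1", 1),
    ("Acme_1", "Sup_2", 0), ("Acme_1", "Sup_1", 1), ("Acme_1", "Sup_1", 0),
    ("Acme_2", "Sup_1", 0), ("Acme_2", "Sup_1", 0), ("Acme_2", "Sup_3", 0),
    ("Acme_1", "Sup_1", 1), ("Acme_3", "Sup_2", 0), ("Acme_2", "Sup_3", 0),
])

# Grouping by BOTH columns yields one row per (supplier, client) pair
rows = conn.execute("""
    SELECT supplier, client_name, COUNT(*) AS total
    FROM data
    GROUP BY supplier, client_name
    ORDER BY supplier, client_name
""").fetchall()
print(rows)
```

This reproduces the desired overview from the question, one count per pair.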
|
It should be the following:
```
SELECT
    [supplier]
    ,[client_name]
    ,COUNT([completed]) AS total
FROM data
GROUP BY [client_name],[supplier]
```
Explanation:
* The "Group By" clause will return only distinct row sets for the selected columns in the select statement. Read more [here](http://msdn.microsoft.com/en-us/library/ms177673.aspx)
* The count will just count the number of rows for the specified [supplier],[client\_name] set.
Please do not hesitate to contact me should you need more help or clarification
|
Count total when grouping by two columns
|
[
"",
"sql",
"sql-server",
"count",
"sql-server-2008-r2",
"group-by",
""
] |
I'm converting a stored procedure from MySql to SQL Server. The procedure has one input parameter `nvarchar`/`varchar` which is a comma-separated string, e.g.
```
'1,2,5,456,454,343,3464'
```
I need to write a query that will retrieve the relevant rows, in MySql I'm using `FIND_IN_SET` and I wonder what the equivalent is in SQL Server.
I also need to order the ids as in the string.
The original query is:
```
SELECT *
FROM table_name t
WHERE FIND_IN_SET(id,p_ids)
ORDER BY FIND_IN_SET(id,p_ids);
```
|
The equivalent is `like` for the `where` and then `charindex()` for the `order by`:
```
select *
from table_name t
where ','+p_ids+',' like '%,'+cast(id as varchar(255))+',%'
order by charindex(',' + cast(id as varchar(255)) + ',', ',' + p_ids + ',');
```
Well, you could use `charindex()` for both, but the `like` will work in most databases.
Note that I've added delimiters to the beginning and end of the string, so 464 will not accidentally match 3464.
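As a hedged sketch of the same idea, SQLite's `instr()` plays the role of `charindex()`; this runnable example (Python's sqlite3, invented sample ids) shows both the delimited membership test and the ordering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(5,), (454,), (3464,), (999,), (2,)])

p_ids = "1,2,5,456,454,343,3464"
# Wrap both sides in commas so 464 never matches inside 3464, then order
# by each id's position in the delimited list.
rows = conn.execute("""
    SELECT id FROM t
    WHERE instr(',' || ? || ',', ',' || id || ',') > 0
    ORDER BY instr(',' || ? || ',', ',' || id || ',')
""", (p_ids, p_ids)).fetchall()
print(rows)  # ids in the order they appear in p_ids; 999 is filtered out
```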
|
You would need to write a FIND\_IN\_SET function as it does not exist in SQL Server. The closest mechanism I can think of to convert a delimited string into a joinable object would be to create a [table-valued function](http://technet.microsoft.com/en-us/library/ms191165(v=SQL.105).aspx) and use the result in a standard `IN` statement. It would need to be similar to:
```
DECLARE @MyParam NVARCHAR(3000)
SET @MyParam='1,2,5,456,454,343,3464'
SELECT
*
FROM
MyTable
WHERE
MyTableID IN (SELECT ID FROM dbo.MySplitDelimitedString(@MyParam,','))
```
And you would need to create a `MySplitDelimitedString` type [table-valued function](http://technet.microsoft.com/en-us/library/ms191165(v=SQL.105).aspx) that would split a string and return a `TABLE (ID INT)` object.
|
Select rows using in with comma-separated string parameter
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm a bit stumped on a query I need to write for work. I have the following two tables:
```
|===============Patterns==============|
|type | bucket_id | description |
|-----------------------|-------------|
|pattern a | 1 | Email |
|pattern b | 2 | Phone |
|==========Results============|
|id | buc_1 | buc_2 |
|-----------------------------|
|123 | pass | |
|124 | pass |fail |
```
In the results table, I can see that entity 124 failed a validation check in buc\_2. Looking at the patterns table, I can see bucket 2 belongs to pattern b (bucket\_id corresponds to the column name in the results table), so entity 124 failed phone validation. But how do I write a query that joins these two tables on the value of one of the columns? Limitations to how this query is going to be called will most likely prevent me from using any cursors.
|
Another approach, if you are using Oracle ≥ 11g, would be to use the [`UNPIVOT` operation](http://docs.oracle.com/cd/B28359_01/server.111/b28313/analysis.htm#sthref1185). This will translate columns to rows at query execution:
```
select * from Results
unpivot ("result" for "bucket_id" in ("buc_1" as 1, "buc_2" as 2))
join Patterns
using("bucket_id")
where "result" = 'fail';
```
Unfortunately, you still have to hard-code the various column names.
See <http://sqlfiddle.com/#!4/a3eae/17>
|
Some crude solutions:
```
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_1" = 'fail' AND "bucket_id" = 1
union all
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_2" = 'fail' AND "bucket_id" = 2
```
Or, with a very probably better execution plan:
```
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_1" = 'fail' AND "bucket_id" = 1
OR "buc_2" = 'fail' AND "bucket_id" = 2;
```
This will report *all* failure descriptions for each `id` having a fail case in bucket 1 or 2.
See <http://sqlfiddle.com/#!4/a3eae/8> for a live example
---
That being said, the *right* solution would be probably to change your schema to something more manageable. Say by using an [association table](http://en.wikipedia.org/wiki/Junction_table) to store each failed test -- as you have in fact here a [many to many relationship](http://en.wikipedia.org/wiki/Many-to-many_%28data_model%29).
|
SQL join two tables using value from one as column name for other
|
[
"",
"sql",
"oracle",
""
] |
The Query using LIKE :(This query when fired gives the desired result)
`select * from catissue_audit_event where event_timestamp like '16-DEC-14'`
But when using query with '=' results in an empty resultset
`select * from catissue_audit_event where event_timestamp='16-DEC-14'`
Here event\_timestamp is of type Date
Strange thing is that the query runs for other dates such as:
`select * from catissue_audit_event where event_timestamp='15-DEC-14'`
What can be the issue? I already checked for leading and trailing spaces in the data
Output after running the first query:

|
In Oracle a `DATE` (and of course a `TIMESTAMP`) column contains a time part as well.
Just because your SQL client is hiding the time, doesn't mean it isn't there.
If you want all rows from a specific day (ignoring the time) you need to use `trunc()`
```
select *
from catissue_audit_event
where trunc(event_timestamp) = DATE '2014-12-16';
```
Be aware that this query will not use an index on the `event_timestamp` column.
You should also not rely on implicit data type conversion as you do with the expression `event_timestamp = '16-DEC-14'`. That statement is going to fail if I run it from my computer because of different NLS settings. Always use a proper DATE literal (as I have done in my statement). If you don't like the unambiguous ISO date, then use `to_date()`:
```
where trunc(event_timestamp) = to_date('16-12-2014', 'dd-mm-yyyy');
```
You should avoid using month names unless you know that all environments (which includes computers **and** SQL clients) where your SQL statement is executed are using the same NLS settings. If you are sure, you *can* use e.g. `to_date('16-DEC-14', 'dd-mon-yy')`
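A minimal runnable sketch of the underlying issue, using Python's sqlite3 with SQLite's `date()` standing in for Oracle's `trunc()` (an assumption — the dialects differ, but the day-truncation idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catissue_audit_event (event_timestamp TEXT)")
conn.executemany("INSERT INTO catissue_audit_event VALUES (?)",
                 [("2014-12-16 09:41:07",), ("2014-12-15 00:00:00",)])

# Plain equality misses the row whose hidden time part is not midnight...
eq = conn.execute(
    "SELECT COUNT(*) FROM catissue_audit_event WHERE event_timestamp = '2014-12-16'"
).fetchone()[0]
# ...while truncating to the day (date() here, trunc() in Oracle) finds it.
trunc = conn.execute(
    "SELECT COUNT(*) FROM catissue_audit_event WHERE date(event_timestamp) = '2014-12-16'"
).fetchone()[0]
print(eq, trunc)  # 0 1
```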
|
The reason why this happens is a separate matter from the solution to your issue.
The solution to your issue is to stop performing date comparisons by implicit conversion to a string. [Convert your string to a date to perform a date comparison](https://stackoverflow.com/questions/10178292/comparing-dates-in-oracle-sql/10178346#10178346):
```
select * from catissue_audit_event where event_timestamp = date '2014-12-16'
```
I cannot stress this enough; **when performing a date comparison *only* compare dates.**
Your column EVENT\_TIMESTAMP is being implicitly (this is bad) converted to a date in accordance with your [NLS\_DATE\_FORMAT](http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch3globenv.htm#i1006817), which you can find as follows:
```
select * from nls_session_parameters
```
This governs how [date-data is displayed and implicitly converted](https://stackoverflow.com/a/15970516/458741). The reason why `LIKE` works and `=` doesn't is that your NLS\_DATE\_FORMAT is masking additional data. In other words, your date has a time component.
If you run the following and then re-select the data from your table you'll see the additional time component
```
alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss'
```
Thus, if you want all the data for a specific date without constraint on time you'll need to remove the time component:
```
select * from catissue_audit_event where trunc(event_timestamp) = date '2014-12-16'
```
|
Sql Query using 'Like' is giving results but using '=' does not returns any result in Oracle
|
[
"",
"sql",
"oracle",
"sql-like",
""
] |
Is it possible to create a hash value from a field that is NVARCHAR(Max) in SQL, and how?
For example:
```
SELECT Name, HASH(Name) --How to achieve this?
FROM People
```
|
Use `checksum()` (documented [here](http://msdn.microsoft.com/en-us/library/ms189788.aspx)) or `binary_checksum()`:
```
select name, checksum(name)
from people;
```
|
```
DECLARE @HashThis nvarchar(max);
SET @HashThis = CONVERT(nvarchar(max),name);
SELECT HASHBYTES('SHA1', @HashThis);
```
|
Can I create a hash value from a field that is nvarchar(max)?
|
[
"",
"sql",
"sql-server",
"t-sql",
"hash",
""
] |
My procedure
```
ALTER PROCEDURE [dbo].[spMyProc]
@uniqueid uniqueidentifier,
@y int
DECLARE @ID uniqueidentifier
SELECT TOP 1 @ID = uniqueid FROM tbl_x WHERE y= @y
INSERT INTO tbl_x
(
otherfield
,uniqueid
)
VALUES
(
@otherfields
,if @ID == null then @uniqueid else @ID -- this is what i want to happen how i do not know
)
```
Now i can do this using if else block like this but i do not want to do it. I want some nice way of coding this
this is what i do not want to do
```
if(@id IS NULL)
BEGIN
INSERT INTO tbl_x
(
otherfield
,uniqueid
)
VALUES
(
@otherfields
,@uniqueid
)
END
ELSE
BEGIN
INSERT INTO tbl_x
(
otherfield
,uniqueid
)
VALUES
(
@otherfields
,@id
)
END
```
is there any suggestion
|
you can use `insert into select` like this
```
INSERT INTO tbl_x
(
otherfield
,uniqueid
)
SELECT
@otherfields,
ISNULL(@ID , @uniqueid)
```
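The `ISNULL`/`COALESCE` idea can be sketched as a small runnable example (Python's sqlite3, with `COALESCE` as the portable spelling; the helper function and GUID strings are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_x (otherfield TEXT, uniqueid TEXT)")

def insert_row(other, fallback_uid, existing_uid):
    # COALESCE picks the first non-NULL argument, replacing the IF/ELSE blocks
    conn.execute(
        "INSERT INTO tbl_x (otherfield, uniqueid) SELECT ?, COALESCE(?, ?)",
        (other, existing_uid, fallback_uid),
    )

insert_row("a", "new-guid", None)        # no prior id -> fallback is used
insert_row("b", "new-guid", "old-guid")  # prior id wins
rows = conn.execute(
    "SELECT otherfield, uniqueid FROM tbl_x ORDER BY otherfield").fetchall()
print(rows)  # [('a', 'new-guid'), ('b', 'old-guid')]
```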
|
You can put your `IF` into the `SELECT` where you assign the value:
```
ALTER PROCEDURE [dbo].[spMyProc]
@uniqueid uniqueidentifier,
@y int
DECLARE @ID uniqueidentifier
SELECT TOP 1 @ID = ISNULL(uniqueid,@uniqueid) FROM tbl_x WHERE y= @y
INSERT INTO tbl_x
(
otherfield
,uniqueid
)
VALUES
(
@otherfields
,@ID
)
```
|
conditional insert sql
|
[
"",
"sql",
"sql-server",
""
] |
I am working on a script to analyze some data contained in thousands of tables on a SQL Server 2008 database.
For simplicity's sake, the tables can be broken down into groups of 4-8 semi-related tables. By semi-related I mean that they are data collections for the same item but they do not have any actual SQL relationship. Each table consists of a date-time stamp (`datetime2` data type), a value (can be a `bit`, `int`, or `float` depending on the particular item), and some other columns that are currently not of interest. The date-time stamp is set for every 15 minutes (on the quarter hour) within a few seconds; however, not all of the data is recorded at precisely the same time...
For example:
TABLE1:
```
TIMESTAMP VALUE
2014-11-27 07:15:00.390 1
2014-11-27 07:30:00.390 0
2014-11-27 07:45:00.373 0
2014-11-27 08:00:00.327 0
```
TABLE2:
```
TIMESTAMP VALUE
2014-11-19 08:00:07.880 0
2014-11-19 08:15:06.867 0.0979999974370003
2014-11-19 08:30:08.593 0.0979999974370003
2014-11-19 08:45:07.397 0.0979999974370003
```
TABLE3
```
TIMESTAMP VALUE
2014-11-27 07:15:00.390 0
2014-11-27 07:30:00.390 0
2014-11-27 07:45:00.373 1
2014-11-27 08:00:00.327 1
```
As you can see, not all of the tables will start with the same quarterly `TIMESTAMP`. Basically, what I am after is a query that will return the VALUE for each of the 3 tables for every 15 minute interval starting with the earliest `TIMESTAMP` out of the 3 tables. For the example given, I'd want to start at 2014-11-27 07:15 (don't care about seconds... thus, would need to allow for the timestamp to be +- 1 minute or so). Returning NULL for the value when there is no record for the particular TIMESTAMP is ok. So, the query for my listed example would return something like:
```
TIMESTAMP VALUE1 VALUE2 VALUE3
2014-11-27 07:15 1 NULL 0
2014-11-27 07:30 0 NULL 0
2014-11-27 07:45 0 NULL 1
2014-11-27 08:00 0 NULL 1
...
2014-11-19 08:00 0 0 1
2014-11-19 08:15 0 0.0979999974370003 0
2014-11-19 08:30 0 0.0979999974370003 0
2014-11-19 08:45 0 0.0979999974370003 0
```
I hope this makes sense. Any help/pointers/guidance will be appreciated.
|
The first thing I would do is normalize the timestamps to the minute. You can do this with an update to the existing column
```
UPDATE TABLENAME
SET TIMESTAMP = dateadd(minute,datediff(minute,0,TIMESTAMP),0)
```
or in a new column
```
ALTER TABLE TABLENAME ADD COLUMN NORMTIME DATETIME;
UPDATE TABLENAME
SET NORMTIME = dateadd(minute,datediff(minute,0,TIMESTAMP),0)
```
For details on flooring dates this see this post: [Floor a date in SQL server](https://stackoverflow.com/questions/85373/floor-a-date-in-sql-server)
---
The next step is to make a table that has all of the (normalized) timestamps that you expect to see -- that is, one row per 15-minute interval. Let's call this table TIME\_PERIOD and the column EVENT\_TIME for my examples (call it whatever you want).
There are many ways to make such a table: a recursive CTE, ROW\_NUMBER(), even brute force. I leave that part up to you.
---
Now the problem is simple select with left joins and a filter for valid values like this:
```
SELECT TP.EVENT_TIME, a.VALUE as VALUE1, b.VALUE as VALUE2, c.VALUE as VALUE3
FROM TIME_PERIOD TP
LEFT JOIN TABLE1 a ON a.[TIMESTAMP] = TP.EVENT_TIME
LEFT JOIN TABLE2 b ON b.[TIMESTAMP] = TP.EVENT_TIME
LEFT JOIN TABLE3 c ON c.[TIMESTAMP] = TP.EVENT_TIME
WHERE COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP]) is not null
ORDER BY TP.EVENT_TIME DESC
```
The `WHERE` might get a little more complex if the columns are different types, so you can always use this (which is not as good as `COALESCE` but will always work):
```
WHERE a.[TIMESTAMP] IS NOT NULL OR
b.[TIMESTAMP] IS NOT NULL OR
c.[TIMESTAMP] IS NOT NULL
```
|
Use `Full Outer Join`
```
SELECT COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP]) [TIMESTAMP],
Isnull(Max(a.VALUE), 0) VALUE1,
Max(b.VALUE) VALUE2,
Isnull(Max(c.VALUE), 0) VALUE3
FROM TABLE1 a
FULL OUTER JOIN TABLE2 b
ON CONVERT(SMALLDATETIME, a.[TIMESTAMP]) = CONVERT(SMALLDATETIME, b.[TIMESTAMP])
FULL OUTER JOIN TABLE3 c
ON CONVERT(SMALLDATETIME, a.[TIMESTAMP]) = CONVERT(SMALLDATETIME, c.[TIMESTAMP])
GROUP BY COALESCE(a.[TIMESTAMP], b.[TIMESTAMP], c.[TIMESTAMP])
ORDER BY [TIMESTAMP] DESC
```
|
SQL select from multiple tables based on datetime
|
[
"",
"sql",
"sql-server",
"join",
"datetime2",
""
] |
I know how to increase/decrease value in column.
value type is `INT(11)`
Command:
```
UPDATE table SET value = value - 1 WHERE id = 1
```
But when value reaches "0" it keeps decreasing.
**Question:** How to decrease until zero, so I don't get value (-1) in column?
Thank you for your code / ideas / suggestions
|
Just use a conditional:
```
UPDATE table
SET value = value - 1
WHERE id = 1 AND value > 0;
```
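A minimal runnable sketch of the guarded decrement (Python's sqlite3; repeated updates show the value stops at zero):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (id INT, value INT)")
conn.execute("INSERT INTO counters VALUES (1, 2)")

# Run the guarded decrement several times: once the value hits zero,
# the WHERE clause excludes the row and nothing changes.
for _ in range(5):
    conn.execute("UPDATE counters SET value = value - 1 WHERE id = 1 AND value > 0")

value = conn.execute("SELECT value FROM counters WHERE id = 1").fetchone()[0]
print(value)  # 0, never negative
```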
|
Add it to the where clause:
```
UPDATE table SET value = value - 1 WHERE (id = 1) AND (value > 0)
```
Once value reaches 0, it won't get updated anymore, because the `where` clause will exclude that record from the update.
|
SQL - decrease value to zero
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to create a report for fields which have been updated, so basically like this:
```
| X | Y | Z | - Table 1
| X | Y | P | - Table 2
| NUL | NUL | P | - Is what i want the outcome to be
```
Does anyone know if this is possible / how to approach it? I'm not very competent with SQL!
Cheers,
|
Here is a simple, but typing-intensive, solution:
```
SELECT
CASE WHEN (
T1.Field1 = T2.Field1 OR (T1.Field1 IS NULL AND T2.Field1 IS NULL)
) THEN NULL ELSE T2.Field1 END AS Field1,
CASE WHEN (
T1.Field2 = T2.Field2 OR (T1.Field2 IS NULL AND T2.Field2 IS NULL)
) THEN NULL ELSE T2.Field1 END AS Field2
/** and so on **/
FROM
Table1 T1
FULL OUTER JOIN Table2 T2
ON T1.JoinField = T2.JoinField
```
You have to compare each field individually between the two tables. You can use a `FULL OUTER JOIN` to get all records from both tables.
Also you have to check if the value in each table is NULL (`T1.Field1 IS NULL AND T2.Field1 IS NULL`). Remember, NULL never equals any value, not even NULL.
Note: `FULL OUTER JOIN` is maybe too broad, so you can use either `LEFT/RIGHT OUTER JOIN` or `INNER JOIN`, but you have to choose the join wisely to match the business requirements.
|
use [**`Left Outer Join`**](http://technet.microsoft.com/en-us/library/ms187518(v=sql.105).aspx) to get the result
```
SELECT Table1.col1,
Table1.col2,
Table2.col3
FROM Table2
LEFT OUTER JOIN Table1
ON Table2.col3 = Table1.col3
```
[**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/b5992/1)
|
SQL joining two tables, null same result
|
[
"",
"sql",
"sql-server",
"inner-join",
""
] |
I have a MySQL `COUNT` statement that works great, counting records greater than `CURDATE()`. How do I restrict this query to count ONLY tomorrow's records, based on `CURDATE()`, for the column specified?
```
SELECT COUNT(ID) as total_count
FROM HTG_ScheduleRequest
WHERE (ScheduleDateCurrent > CURDATE())
AND JobStatus = '3'
GROUP BY SSR
```
|
The "easy" way is:
```
WHERE date(ScheduleDateCurrent) = date_add(CURDATE(), interval 1 day) AND JobStatus = '3'
```
The better way is:
```
WHERE JobStatus = '3' AND
ScheduleDateCurrent >= date_add(CURDATE(), interval 1 day) AND
ScheduleDateCurrent < date_add(CURDATE(), interval 2 day)
```
The reason this is better is because it can take advantage of an index on `JobStatus, ScheduleDateCurrent`, if one is available.
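A runnable sketch of the half-open range (Python's sqlite3; `date(?, '+1 day')` stands in for MySQL's `DATE_ADD(CURDATE(), INTERVAL 1 DAY)`, and a fixed "today" keeps the example deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE req (ScheduleDateCurrent TEXT, JobStatus TEXT)")
conn.executemany("INSERT INTO req VALUES (?, ?)", [
    ("2015-01-02 09:00:00", "3"),  # tomorrow -> counted
    ("2015-01-02 23:59:59", "3"),  # tomorrow -> counted
    ("2015-01-03 00:00:00", "3"),  # day after -> excluded by the upper bound
    ("2015-01-01 12:00:00", "3"),  # today -> excluded by the lower bound
])

today = "2015-01-01"  # stand-in for CURDATE()
total = conn.execute("""
    SELECT COUNT(*) FROM req
    WHERE JobStatus = '3'
      AND ScheduleDateCurrent >= date(?, '+1 day')
      AND ScheduleDateCurrent <  date(?, '+2 day')
""", (today, today)).fetchone()[0]
print(total)  # 2
```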
|
The following query would do the trick
```
SELECT COUNT(ID) as total_count
FROM HTG_ScheduleRequest
WHERE (ScheduleDateCurrent BETWEEN DATE_ADD(DATE(CURDATE()), INTERVAL 1 DAY)
AND DATE_ADD(DATE(CURDATE()), INTERVAL 2 DAY))
AND JobStatus = '3'
GROUP BY SSR
```
|
MySQL COUNT on Tomorrow's Date based on CURDATE
|
[
"",
"mysql",
"sql",
""
] |
The Hibernate ORM `hbm2ddl.auto` options are create, create-drop, update and validate.
Is there one that says: create once, but don't drop or update?
An option that says: just create if it doesn't exist, don't update, don't drop?
|
I had to write my own solution for this. This is not possible using hibernate or jpa.
|
Here is an existing discussion that summarizes the options of hibernate.hbm2ddl.auto. [hbm2ddl discussion](https://stackoverflow.com/questions/438146/hibernate-hbm2ddl-auto-possible-values-and-what-they-do)
It is common for shops that use Hibernate to manage the physical database layer separately, using SQL DDL scripts run directly by DBAs for example, rather than allowing Hibernate tools to modify the database schema. If you still wish to have Hibernate run these for you, the hibernate.hbm2ddl.import\_files option can be used to tweak your DDL operations manually by providing custom script files.
In your case if you wish to let your ORM dictate the phsyical model as opposed to the reverse, you could just leave your hbm2ddl.auto as 'update'. The other option would be to programmatically generate your script files referenced in hibernate.hbm2ddl.import\_files using entity mapping or meta-data(annotations) as the guiding input.
|
Hibernate create, don't update, don't drop.
|
[
"",
"sql",
"spring",
"hibernate",
"jpa",
"jpa-2.0",
""
] |
I have two tables in my db and it is one to many connection with other as shown below

In the `kvves_units` table I will get the '`name`' from the `GET` method.
Now I want to get all values from `kvves_units` and `kvves_members` according to the name in `kvves_units`.
I'm using code something like this:
```
$kvvesDetails = $conn->prepare( "SELECT u.id, u.name, u.phone, u.email, u.address, m.name, m.designantion, m.phone, m.email, m.image FROM kvves_units AS u JOIN kvves_members AS m ON m.unit_id = u.id WHERE `name` = $committee");
```
|
This is a standard join:
```
$kvvesDetails = $conn->prepare( "SELECT u.id, u.name, u.phone, u.email, u.address, m.name, m.designantion, m.phone, m.email, m.image FROM kvves_units AS u JOIN kvves_members AS m ON m.unit_id = u.id WHERE name = '$committee'"
```
|
> Use this SQL
```
select kvves_units.*,kvves_members.* from kvves_units a join kvves_members b where a.name = b.name and a.name = '".$_GET['name']."'
```
|
mySQL join with where condition
|
[
"",
"mysql",
"sql",
""
] |
My SQL Query:
```
SELECT
[content_id] AS [LinkID]
, dbo.usp_ClearHTMLTags(CONVERT(nvarchar(600), CAST([content_html] AS XML).query('root/Physicians/name'))) AS [Physician Name]
FROM
[DB].[dbo].[table1]
WHERE
[id] = '188'
AND
(content LIKE '%Urology%')
AND
(contentS = 'A')
ORDER BY
--[content_title]
dbo.usp_ClearHTMLTags(CONVERT(nvarchar(600), CAST([content_html] AS XML).query('root/Physicians/name')))
```
The issue I am having is, if the `content` is `Neurology` or `Urology` it appears in the result.
Is there any way to make it so that if it's **Urology**, it will only give **Urology** result and if it's **Neurology**, it will only give **Neurology** result.
It can be Urology, Neurology, Internal Medicine, etc. etc... So the two above used are what is causing the issue.
The `content` is a ntext column with XML tag inside, for example:
```
<root><Location><location>Office</location>
<office>Office</office>
<Address><image><img src="Rd.jpg?n=7513" /></image>
<Address1>1 Road</Address1>
<Address2></Address2>
<City>Qns</City>
<State>NY</State>
<zip>14404</zip>
<phone>324-324-2342</phone>
<fax></fax>
<general></general>
<from_north></from_north>
<from_south></from_south>
<from_west></from_west>
<from_east></from_east>
<from_connecticut></from_connecticut>
<public_trans></public_trans>
</Address>
</Location>
</root>
```
With the update this `content` column has the following XML:
```
<?xml version="1.0" encoding="UTF-8"?>
<root>
<Physicians>
<name>Doctor #1</name>
<picture>
<img src="phys_lab coat_gradation2.jpg?n=7529" />
</picture>
<gender>M</gender>
<langF1>
<a href="/ult.aspx" title="English">English</a>
</langF1>
<specialty>
<a title="Neurology" href="neu.aspx">Neurology</a>
</specialty>
</Physicians>
</root>
```
If I search for `Lab` the result appears because there is the text `lab` in the column.
|
This is what I would do if you're not into making a CLR proc to use Regexes (SQL Server doesn't have regex capabilities natively)
```
SELECT
[...]
WHERE
(content LIKE @strService OR
content LIKE '%[^a-z]' + @strService + '[^a-z]%' OR
content LIKE @strService + '[^a-z]%' OR
content LIKE '%[^a-z]' + @strService)
```
This way you check to see if content is equal to @strService **OR** if the word exists somewhere within content with non-letters around it **OR** if it's at the very beginning or very end of content with a non-letter either following or preceding respectively.
`[^...]` means *"a character that is none of these"*. If there are other characters you don't want to accept before or after the search query, put them inside each of the 4 square-bracket classes (**after the `^`!**). For instance `[^a-zA-Z_]`.
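The same word-boundary idea can be sketched outside SQL — here with Python's `re`, since the `[^a-z]` bracket classes inside `LIKE` are SQL Server-specific; the sample strings are invented:

```python
import re

# Match 'Urology' only when not embedded in a longer letter run
# (so 'Neurology' does not match) -- same intent as the four LIKE branches.
pattern = re.compile(r"(?<![A-Za-z])Urology(?![A-Za-z])", re.IGNORECASE)

texts = ["Urology", "Dept of Urology, floor 2", "Neurology", "NeuroUrology"]
matches = [bool(pattern.search(t)) for t in texts]
print(matches)  # [True, True, False, False]
```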
|
Databases are notoriously bad at semantics (i.e. they don't understand the concept of neurology or urology - everything is just a string of characters).
The best solution would be to create a table which defines the terms (two columns, PK and the name of the term).
The query is then a join:
```
join table1.term_id = terms.term_id and terms.term = 'Urology'
```
That way, you can avoid the `LIKE` and search for specific results.
If you can't do this, then SQL is probably the wrong tool. Use `LIKE` to get a set of results which match and then, in an imperative programming language, clean those results from unwanted ones.
|
How to make LIKE in SQL look for specific string instead of just a wildcard
|
[
"",
"sql",
"t-sql",
"sql-server-2012",
""
] |
New to Stack Overflow (and coding in general).
I did some research but was unable to find an answer to the following problem:
How can I join two tables ON the results of functions applied to dimensions, rather than on the dimensions themselves?
i.e. I want to join the following two tables on the lowercase results of the function lower() rather than joining on the case ambiguous dimensions as they are.
```
SELECT
lower(first_name) as firstname
,lower(last_name) as lastname
,lower(email) as email1
,total_donated
From BensData.Donations As a
JOIN EACH
(Select
lower(first_name) as first
,lower(last_name) as last
,lower(email) as email2
,sum(amount) as total_donated
From BensData.Donations
GROUP BY email2, first, last) As b
ON a.email1=b.email2 AND a.firstname=b.first AND a.lastname=b.last
```
It does not let me join on the aliases I create in the first table (a), however, if I join ON the original dimensions in table a (first\_name and last\_name) then the results are based on the case ambiguous dimensions, and give an undesired result.
I hope that was clear.
Thanks for any help!
|
Thanks for everyone's help!
Particularly sprocket, who pointed me in the right direction! The main difference between his code and mine is that mine does not have the table aliases prepended to each dimension of the first SELECT clause (e.g. **a.**firstname, **a.**lastname -----> firstname, lastname).
For some reason BigQuery kept giving me an error because of the table aliases.
Here's the code that worked.
```
SELECT
firstname
,lastname
,email1
,total_donated
FROM
(SELECT
lower(first_name) as firstname
,lower(last_name) as lastname
,lower(email) as email1
From BensData.Donations) As a
JOIN EACH
(Select
lower(first_name) as first
,lower(last_name) as last
,lower(email) as email2
,sum(float(amount)) as total_donated
From BensData.Donations
GROUP BY email2, first, last) As b
ON a.email1=b.email2 AND a.firstname=b.first AND a.lastname=b.last
```
Thanks all for your help!
|
Try using two subqueries like this:
```
SELECT
a.firstname
,a.lastname
,a.email1
,a.total_donated
FROM
(SELECT
lower(first_name) as firstname
,lower(last_name) as lastname
,lower(email) as email1
,total_donated
From BensData.Donations) As a
JOIN EACH
(Select
lower(first_name) as first
,lower(last_name) as last
,lower(email) as email2
,sum(amount) as total_donated
From BensData.Donations
GROUP BY email2, first, last) As b
ON a.email1=b.email2 AND a.firstname=b.first AND a.lastname=b.last
```
In your original query, a is just an alias for BensData.Donations, so you can only join on fields present in that table.
|
Using Aliases in the ON Clause of a JOIN
|
[
"",
"sql",
"join",
"google-bigquery",
"case-sensitive",
"on-clause",
""
] |
I am not sure if this is doable. I have a table of data with timestamps,
e.g.
```
apple 2013-01-02 06:24:22
banana 2013-01-12 11:08:22
apple 2013-01-02 15:24:22
apple 2013-01-02 12:30:16
banana 2013-01-14 18:34:22
```
i want to:
1. rearrange the data based on distinct date, not timestamp, meaning all rows with the same date will be grouped together
2. if possible i hope to achieve the below in final result
```
apple 2013-01-02 06:24:22 2013-01-02 15:24:22 // get first and last data
banana 2013-01-12 11:08:22 2013-01-12 11:08:22 // only one data in the group
banana 2013-01-14 18:34:22 2013-01-14 18:34:22 // only one data in the group
```
How can I do this with a minimum of SQL queries, instead of using a loop to determine it?
|
Try this:
```
SELECT fruitName, MIN(a.timestamp), MAX(a.timestamp)
FROM tableA a
GROUP BY DATE(a.timestamp), fruitName;
```
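If you want to sanity-check this pattern locally, here is a small sketch using Python's `sqlite3` (the table and column names are invented for the demo; SQLite's `DATE()` works the same way on `YYYY-MM-DD HH:MM:SS` strings):

```python
import sqlite3

# Demo of grouping by the date part of a timestamp and taking the
# first/last entry per group. Table/column names are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (name TEXT, ts TEXT);
INSERT INTO fruits VALUES
  ('apple',  '2013-01-02 06:24:22'),
  ('banana', '2013-01-12 11:08:22'),
  ('apple',  '2013-01-02 15:24:22'),
  ('apple',  '2013-01-02 12:30:16'),
  ('banana', '2013-01-14 18:34:22');
""")
rows = conn.execute("""
    SELECT name, MIN(ts), MAX(ts)
    FROM fruits
    GROUP BY DATE(ts), name
    ORDER BY name, DATE(ts)
""").fetchall()
for r in rows:
    print(r)
```

When a date has only one entry, that group's MIN and MAX collapse to the same value, which matches the desired output above.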
|
Use MySQL [DATE()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date) function.
```
SELECT FRUIT_NAME, DATE(mydate) AS dateStamp FROM TABLE
GROUP BY DATE(TABLE.mydate);
```
Where:
FRUIT\_NAME: your field for name
mydate your date field.
|
select data group by distinct date and get first and last data in the group
|
[
"",
"mysql",
"sql",
"date",
"select",
"aggregate-functions",
""
] |
Having a query like the following (shortened and renamed for simplicity)
```
SELECT
SOME_COLUMN AS value1,
SOME_COMMON_ID as commonID,
SOME_TAG as tagID,
SOME_TIMESTAMP as endTime,
( SELECT
SOME_TIMESTAMP AS beginTime
FROM
EVENTLIST
WHERE
EVENTLIST.SOME_TAG = 'BEGIN'
AND EVENTLIST.SOME_COMMON_ID = commonID /* <-- Invalid column name commonID */
),
endTime - beginTime AS duration
FROM
EVENTLIST
JOIN
(...some irrelevant lookups on other tables)
WHERE
(...some criteria...)
```
What I would like to achieve:
This table logs some events, the time on which the event occurs is stored in `SOME_TIMESTAMP`. Several events are grouped by a common identifier called `SOME_COMMON_ID`. The type of the event is stored in `SOME_TAG`
For each of the events I would like to select the duration of the current event since the event having the tag `BEGIN`
How to achieve this in SQL Server?
|
It was even easier than I thought.
```
SELECT
SOME_COLUMN AS value1,
SOME_COMMON_ID as commonID,
SOME_TAG as tagID,
SOME_TIMESTAMP as endTime,
( SELECT
DATEDIFF(SECOND, e2.SOME_TIMESTAMP, e1.SOME_TIMESTAMP) AS duration
/* ^-- calc the diff here, not in the outer query */
FROM
EVENTLIST e2
WHERE
e2.SOME_TAG = 'BEGIN'
AND e2.SOME_COMMON_ID = e1.SOME_COMMON_ID /* <-- qualify table names */
)
FROM
EVENTLIST e1 /* <-- name required */
JOIN
(...some irrelevant lookups on other tables)
WHERE
(...some criteria...)
```
I had to qualify the table names, and it turns out the difference can be calculated right in the inner query as well.
|
Let's create some test data
```
DECLARE @EventList TABLE
(
SOME_COLUMN_ID int,
SOME_COLUMN varchar(20),
SOME_TAG varchar(20),
SOME_TIMESTAMP datetime
)
INSERT INTO @EventList
( SOME_COLUMN_ID, SOME_COLUMN, SOME_TAG, SOME_TIMESTAMP )
VALUES
( 1, 'Exporting', 'BEGIN', DATEADD(HOUR, -5, GETDATE()) ),
( 1, 'Exporting', 'GOING', DATEADD(HOUR, -4, GETDATE()) ),
( 1, 'Exporting', 'STILL_GOING', DATEADD(HOUR, -3, GETDATE()) ),
( 1, 'Exporting', 'GONE', DATEADD(HOUR, -2, GETDATE()) ),
( 1, 'Exporting', 'END', DATEADD(HOUR, -1, GETDATE()) ),
( 2, 'Parsing1', 'BEGIN', DATEADD(HOUR, -5, GETDATE()) ),
( 2, 'Parsing2', 'GOING', DATEADD(HOUR, -4, GETDATE()) ),
( 2, 'Parsing3', 'STILL_GOING', DATEADD(HOUR, -3, GETDATE()) ),
( 2, 'Parsing4', 'GONE', DATEADD(HOUR, -2, GETDATE()) );
```
Now I'm going to make a CTE to order the events by time and partitioned by the ID
```
WITH T AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY SOME_COLUMN_ID ORDER BY SOME_TIMESTAMP) RN
FROM @EventList
)
```
Now we pull in all the events, find the one that comes after each, and get the duration of each step. I also check whether the process hit END; otherwise I use the current time to find the duration.
```
SELECT
T1.SOME_COLUMN_ID,
T1.SOME_COLUMN,
T1.SOME_TAG,
T1.SOME_TIMESTAMP AS BeginTime,
(CASE WHEN t1.SOME_TAG != 'END' THEN ISNULL(t2.SOME_TIMESTAMP, GETDATE()) ELSE NULL END) EndTime,
(CASE WHEN t1.SOME_TAG != 'END' THEN DATEDIFF(MINUTE, t1.SOME_TIMESTAMP, ISNULL(t2.SOME_TIMESTAMP, GETDATE())) ELSE NULL END) Duration
FROM T t1
LEFT JOIN T t2
ON t1.SOME_COLUMN_ID = t2.SOME_COLUMN_ID
AND t1.RN = t2.RN - 1
```
Here is the output:
```
SOME_COLUMN_ID SOME_COLUMN SOME_TAG BeginTime EndTime Duration
1 Exporting BEGIN 2014-12-18 05:31:06.090 2014-12-18 06:31:06.090 60
1 Exporting GOING 2014-12-18 06:31:06.090 2014-12-18 07:31:06.090 60
1 Exporting STILL_GOING 2014-12-18 07:31:06.090 2014-12-18 08:31:06.090 60
1 Exporting GONE 2014-12-18 08:31:06.090 2014-12-18 09:31:06.090 60
1 Exporting END 2014-12-18 09:31:06.090 NULL NULL
2 Parsing1 BEGIN 2014-12-18 05:31:06.090 2014-12-18 06:31:06.090 60
2 Parsing2 GOING 2014-12-18 06:31:06.090 2014-12-18 07:31:06.090 60
2 Parsing3 STILL_GOING 2014-12-18 07:31:06.090 2014-12-18 08:31:06.090 60
2 Parsing4 GONE 2014-12-18 08:31:06.090 2014-12-18 10:31:06.090 120
```
|
Select in Select using parameters
|
[
"",
"sql",
"sql-server",
""
] |
I have two poorly designed tables Form and FormDetails that I am trying to clean up.
The Form table holds information about the status of a form in my application:
```
+-----------------------------------------+
| form_id | status_id | form_created_by |
+-----------------------------------------+
| 1 | 1 | abc |
+-----------------------------------------+
| 2 | 3 | def |
+-----------------------------------------+
```
form\_id, in the Form table, is the **primary key**.
The FormDetails table holds additional information about the form:
```
+-----------------------------------------+
| form_id | status_id | process_id |
+-----------------------------------------+
| 1 | 1 | 1 |
+-----------------------------------------+
| 2 | 2 | 1 |
+-----------------------------------------+
| 2 | 3 | 1 |
+-----------------------------------------+
| 2 | 3 | 1 |
+-----------------------------------------+
```
form\_id, in the FormDetails table, is **NOT** a primary or foreign key. There are no constraints. This table is poorly designed and over time duplicate data has been added to this table.
I would like to clean this table up by copying the unique data into a new FormDetails table and making form\_id a foreign key to the Form table.
In order to achieve this I have tried the following query:
```
select *
from FormDetails fd
right join Form f on f.form_id = fd.form_id and f.status_id = fd.status_id
```
Unfortunately, I am still getting rows with duplicate form\_id since form\_id=2 has two rows with a status\_id=3.
```
+-----------------------------------------+
| form_id | status_id | process_id |
+-----------------------------------------+
| 1 | 1 | 1 |
+-----------------------------------------+
| 2 | 3 | 1 |
+-----------------------------------------+
| 2 | 3 | 1 |
+-----------------------------------------+
```
What I am trying to write is a query that says: select all the rows in FormDetails that correspond to the current Form. If there are duplicates, just pick one of them.
Any idea how I could write such a query?
What I would like to see is:
```
+-----------------------------------------+
| form_id | status_id | process_id |
+-----------------------------------------+
| 1 | 1 | 1 |
+-----------------------------------------+
| 2 | 3 | 1 |
+-----------------------------------------+
```
|
With the help of some friends, I managed to find an answer to this problem. The closest answer to my question was @Gordon Linoff but it still wasn't exactly what I was looking for.
In order to determine how many rows in the Form table did not have any corresponding record in the FormDetails table we wrote the following query:
```
SELECT *
FROM Form
LEFT OUTER JOIN FormDetails ON Form.form_id = FormDetails.form_id
AND Form.status_id = FormDetails.status_id
WHERE
FormDetails.form_id IS NULL
```
When I ran this against my tables it returned 38 rows. This number helped us determine if the query we wrote was returning the correct number of rows. @Gordon Linoff's answer was different by 26 rows so I think it was returning more rows than expected but it was closer than all the other answers and a step in the right direction.
We then put together this query which uses `CROSS APPLY` (a bit of sql which I had never seen before).
The query looks like this:
```
SELECT ad.*
FROM Form f
CROSS APPLY (SELECT TOP 1 * FROM FormDetails fd WHERE
f.form_id = fd.form_id AND f.status_id = fd.status_id) AS ad
```
When I compare the number of rows that are in the Form table with the number of rows returned by this query I get a difference of 38. This seems to indicate that this is a good solution to my problem.
I wouldn't have ever thought to use CROSS APPLY in order to solve this...but I thought I would document the solution here just in case it helps someone in the future.
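For readers on engines without `CROSS APPLY`: the same "pick one matching detail row per form" result can be approximated with a correlated subquery choosing an arbitrary tie-breaker. This is only an illustrative sketch in SQLite (via Python's `sqlite3`), using `MIN(rowid)` where SQL Server would use `TOP 1`:

```python
import sqlite3

# Pick exactly one FormDetails row per matching (form_id, status_id)
# pair, using the lowest rowid as the arbitrary tie-breaker.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Form (form_id INT, status_id INT, form_created_by TEXT);
INSERT INTO Form VALUES (1, 1, 'abc'), (2, 3, 'def');
CREATE TABLE FormDetails (form_id INT, status_id INT, process_id INT);
INSERT INTO FormDetails VALUES (1, 1, 1), (2, 2, 1), (2, 3, 1), (2, 3, 1);
""")
rows = conn.execute("""
    SELECT fd.form_id, fd.status_id, fd.process_id
    FROM Form f
    JOIN FormDetails fd
      ON fd.rowid = (SELECT MIN(fd2.rowid)
                     FROM FormDetails fd2
                     WHERE fd2.form_id = f.form_id
                       AND fd2.status_id = f.status_id)
    ORDER BY fd.form_id
""").fetchall()
print(rows)  # [(1, 1, 1), (2, 3, 1)]
```

Like `CROSS APPLY`, forms with no matching detail row are dropped from the result.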
|
Find the `Distinct rows` in `Form` table then Use `Right Outer join` with `Formdetails` table
```
SELECT f.form_id,
       f.status_id,
       f.process_id
FROM   FormDetails fd
       RIGHT JOIN (SELECT DISTINCT form_id,
                                   status_id,
                                   process_id
                   FROM   Form) f
               ON f.form_id = fd.form_id
                  AND f.status_id = fd.status_id
```
|
How do I filter out duplicates?
|
[
"",
"sql",
"sql-server",
"select",
"duplicates",
""
] |
To get max value from a simple table of values, I can write the following query in Django:
```
MyTable.objects.aggregate(Max('value'))
```
The SQL generated is : `'SELECT MAX("mytable"."value") AS "value__max" FROM "mytable"'`
Now if I write the same SQL using the raw query manager:
```
1. MyTable.objects.raw('SELECT max(value) FROM mytable')
```
Django throws an error `InvalidQuery: Raw query must include the primary key`. This is also mentioned in Django docs: "There is only one field that you can't leave out - the primary key field". So after adding the `id` field, I need `GROUP BY` as well. The new query becomes:
```
2. MyTable.objects.raw('SELECT id, max(value) FROM mytable GROUP BY id')
```
This doesn't give me a single max value anymore because I'm forced to use `GROUP BY id`. Now I need to add an `ORDER BY` and `LIMIT` statement to get the expected answer for an otherwise simple SQL statement that work.
```
3. MyTable.objects.raw('SELECT id, max(value) AS mv FROM mytable GROUP BY id ORDER BY mv DESC LIMIT 1')
```
Is there a way to simplify the above query, i.e. not use ORDER/LIMIT/GROUP BY (FWIW, I'm using PostgreSQL)?
**Update:**
Here's a hack that'll work. I alias the max value as `id` to make Django happy. Is there any issue here?
```
MyTable.objects.raw('SELECT max(value) AS id FROM mytable')
```
**Update 2:**
Here's the query plan for the simple SQL (1) vs the complicated final one (3):
```
"Aggregate (cost=5.25..5.26 rows=1 width=2) (actual time=0.155..0.155 rows=1 loops=1)"
" -> Seq Scan on mytable (cost=0.00..4.60 rows=260 width=2) (actual time=0.018..0.067 rows=260 loops=1)"
"Total runtime: 0.222 ms"
```
```
"Limit (cost=9.80..9.80 rows=1 width=6) (actual time=0.548..0.548 rows=1 loops=1)"
" -> Sort (cost=9.80..10.45 rows=260 width=6) (actual time=0.545..0.545 rows=1 loops=1)"
" Sort Key: (max(value))"
" Sort Method: top-N heapsort Memory: 25kB"
" -> HashAggregate (cost=5.90..8.50 rows=260 width=6) (actual time=0.328..0.432 rows=260 loops=1)"
" -> Seq Scan on mytable (cost=0.00..4.60 rows=260 width=6) (actual time=0.018..0.069 rows=260 loops=1)"
"Total runtime: 0.638 ms"
```
**P.S.** The actual query is more complicated (somewhat related to this answer : <https://dba.stackexchange.com/a/86404/52114>)
|
You should use [custom SQL](https://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly) instead of `Manager.raw()` method:
```
from django.db import connection
cursor = connection.cursor()
cursor.execute('SELECT max(value) FROM mytable')
max_value = cursor.fetchone()[0]
```
|
You can use
```
ModelName.objects.raw('SELECT 1 as id , max(value) FROM mytable')
```
|
Primary key requirement in raw SQL complicates the query in Django
|
[
"",
"sql",
"django",
"postgresql",
"django-orm",
""
] |
I have an employee table that looks like this:
```
| id | name | q1 | q2 | q3 | q4 |
+----+------+----+----+----+----+
| 1 | John | 20 | 30 | 10 | 4 |
| 2 | Ram | 07 | 10 | 03 | 4 |
| 3 | John | 05 | 03 | 15 | 40 |
| 4 | Sree | 12 | 05 | 20 | 25 |
```
I needed to get the minimum value and maximum value of questions where the id is equal to 4. In this case, I needed 5 and 25 returned. I achieved that using the following query:
```
SELECT id, name,
LEAST(q1, q2, q3, q4) AS minValue,
GREATEST(q1, q2, q3, q4) AS maxValue
FROM employee
WHERE id = 4;
```
But what this doesn't return is the question id. How can I adjust my query to show that q2 is the minimum and q4 is the maximum? I know I could write a big case statement, but I also feel like it could be accomplished using a join; I just can't figure it out.
Note: This is for a postgresql database, but I tagged MySQL as well because I know it also supports the `LEAST` and `GREATEST` functions. If the solution is *very* different for both, then I will remove this note and make a separate question.
**EDIT**
I have an [SQL Fiddle](http://sqlfiddle.com/#!15/8f17b/2) already.
|
You can use a `case` statement:
```
CASE
WHEN LEAST(q1, q2, q3, q4) = q1 THEN 'q1'
WHEN LEAST(q1, q2, q3, q4) = q2 THEN 'q2'
WHEN LEAST(q1, q2, q3, q4) = q3 THEN 'q3'
ELSE 'q4'
END as minQuestion
```
(Note: it will lose information over ties.)
If you're interested in ties, approaching it with a subquery and arrays will do the trick:
```
with employee as (
select id, q1, q2, q3, q4
from (values
(1, 1, 1, 3, 4),
(2, 4, 3, 1, 1)
) as rows (id, q1, q2, q3, q4)
)
SELECT least(q1, q2, q3, q4),
array(
select q
from (values (q1, 'q1'),
(q2, 'q2'),
(q3, 'q3'),
(q4, 'q4')
) as rows (v, q)
where v = least(q1, q2, q3, q4)
) as minQuestions
FROM employee e
WHERE e.id = 1;
```
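As a quick local sanity check of the `CASE` pattern: SQLite's multi-argument `MIN()`/`MAX()` behave like `LEAST()`/`GREATEST()`, so the idea can be exercised with Python's `sqlite3` (sketch only, built from the question's sample row):

```python
import sqlite3

# CASE over the least/greatest value to recover which column held it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INT, name TEXT, q1 INT, q2 INT, q3 INT, q4 INT);
INSERT INTO employee VALUES (4, 'Sree', 12, 5, 20, 25);
""")
row = conn.execute("""
    SELECT MIN(q1, q2, q3, q4),
           CASE MIN(q1, q2, q3, q4)
                WHEN q1 THEN 'q1' WHEN q2 THEN 'q2'
                WHEN q3 THEN 'q3' ELSE 'q4' END,
           MAX(q1, q2, q3, q4),
           CASE MAX(q1, q2, q3, q4)
                WHEN q1 THEN 'q1' WHEN q2 THEN 'q2'
                WHEN q3 THEN 'q3' ELSE 'q4' END
    FROM employee
    WHERE id = 4
""").fetchone()
print(row)  # (5, 'q2', 25, 'q4')
```

Ties still resolve to the first matching `WHEN` branch, as noted above.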
|
I would use a CASE statement to compare the greatest value against each column until a match is found.
Not perfect, as it won't be fast and will also just report the first column when more than one shares the same max value:
SELECT id,
name,
LEAST(q1, q2, q3, q4) AS minValue,
CASE LEAST(q1, q2, q3, q4)
WHEN q1 THEN 'q1'
WHEN q2 THEN 'q2'
WHEN q3 THEN 'q3'
ELSE 'q4'
END,
GREATEST(q1, q2, q3, q4) AS maxValue,
CASE GREATEST(q1, q2, q3, q4)
WHEN q1 THEN 'q1'
WHEN q2 THEN 'q2'
WHEN q3 THEN 'q3'
ELSE 'q4'
END
FROM employee
WHERE id = 4;
```
EDIT - using a join, and I suspect this will be far worse:
```
SELECT a0.id,
a0.name,
LEAST(a0.q1, a0.q2, a0.q3, a0.q4) AS minValue,
       CASE
         WHEN a1.id IS NOT NULL THEN 'q1'
         WHEN a2.id IS NOT NULL THEN 'q2'
         WHEN a3.id IS NOT NULL THEN 'q3'
         ELSE 'q4'
       END,
GREATEST(a0.q1, a0.q2, a0.q3, a0.q4) AS maxValue,
       CASE
         WHEN a11.id IS NOT NULL THEN 'q1'
         WHEN a12.id IS NOT NULL THEN 'q2'
         WHEN a13.id IS NOT NULL THEN 'q3'
         ELSE 'q4'
       END
FROM employee a0
LEFT OUTER JOIN employee a1 ON a0.id = a1.id AND LEAST(a0.q1, a0.q2, a0.q3, a0.q4) = a1.q1
LEFT OUTER JOIN employee a2 ON a0.id = a2.id AND LEAST(a0.q1, a0.q2, a0.q3, a0.q4) = a2.q2
LEFT OUTER JOIN employee a3 ON a0.id = a3.id AND LEAST(a0.q1, a0.q2, a0.q3, a0.q4) = a3.q3
LEFT OUTER JOIN employee a4 ON a0.id = a4.id AND LEAST(a0.q1, a0.q2, a0.q3, a0.q4) = a4.q4
LEFT OUTER JOIN employee a11 ON a0.id = a11.id AND GREATEST(a0.q1, a0.q2, a0.q3, a0.q4) = a11.q1
LEFT OUTER JOIN employee a12 ON a0.id = a12.id AND GREATEST(a0.q1, a0.q2, a0.q3, a0.q4) = a12.q2
LEFT OUTER JOIN employee a13 ON a0.id = a13.id AND GREATEST(a0.q1, a0.q2, a0.q3, a0.q4) = a13.q3
LEFT OUTER JOIN employee a14 ON a0.id = a14.id AND GREATEST(a0.q1, a0.q2, a0.q3, a0.q4) = a14.q4
WHERE a0.id = 4;
```
|
How to get the column name of the result of a least function?
|
[
"",
"mysql",
"sql",
"postgresql",
""
] |
I have a problem with the following query:
```
SELECT
ee.id
ee.column2
ee.column3,
ee.column4,
SUM(ee.column5)
FROM
table1 ee
LEFT JOIN table2 epc ON ee.id = epc.id
WHERE
ee.id (6050)
GROUP BY ee.id
```
WHERE column id is the primary key.
On version 8.4, the query returns an error saying that column2, column3 and column4 don't exist in the group by clause.
This same query executes successfully on version 9.3.
Does anybody know why?
|
This was introduced in 9.1
Quote from [the release notes](http://www.postgresql.org/docs/current/static/release-9-1.html):
> Allow non-GROUP BY columns in the query target list when the primary key is specified in the GROUP BY clause (Peter Eisentraut)
> The SQL standard allows this behavior, and because of the primary key, the result is unambiguous.
It is also explained with examples [in the chapter about `group by`:](http://www.postgresql.org/docs/current/static/queries-table-expressions.html#QUERIES-GROUP)
> In this example, the columns product\_id, p.name, and p.price must be in the GROUP BY clause since they are referenced in the query select list (but see below). The column s.units does not have to be in the GROUP BY list since it is only used in an aggregate expression (sum(...)), which represents the sales of a product. For each product, the query returns a summary row about all sales of the product.
In a nutshell: if the `group by` clause contains a column that uniquely identifies the rows, it is sufficient to include that column only.
|
The SQL-99 standard introduced the concept of functionally dependent columns. A column is functionally dependent on another column when that other column (or set of columns) already uniquely defines it. So if you have a table with a primary key, then all other columns in that table are functionally dependent on that primary key.
So when using a `GROUP BY`, and you include the primary key of a table, then you do not need to include the other columns of that same table in the `GROUP BY`-clause as they have already been uniquely identified by the primary key.
This is also documented in [`GROUP BY` Clause](http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY):
> When `GROUP BY` is present, or any aggregate functions are present, it is not valid for the `SELECT` list expressions to refer to ungrouped columns except within aggregate functions or *when the ungrouped column is functionally dependent on the grouped columns*, since there would otherwise be more than one possible value to return for an ungrouped column. *A functional dependency exists if the grouped columns (or a subset thereof) are the primary key of the table containing the ungrouped column.*
(emphasis mine)
|
postgresql 9.3. Group by without all columns
|
[
"",
"sql",
"postgresql",
"group-by",
""
] |
I need a query whose result set always contains a multiple of a determined number of rows (10 in my case), independent of the real quantity of rows (actually to solve a Jasper problem).
For example, I built an example schema at this link: <http://sqlfiddle.com/#!3/c3dba/1/0>
I'd like the result to be like this:
```
1 Item 1 1 10
2 Item 2 2 30
3 Item 3 5 15
4 Item 4 2 10
null null null null null
null null null null null
null null null null null
null null null null null
null null null null null
null null null null null
```
I have found this explanation, but it doesn't work in SQL Server and I can't convert it: <http://community.jaspersoft.com/questions/514706/need-table-fixed-size-detail-block>
|
Another option is to use a `recursive CTE` to get the pre-determined number of rows, then use a `nested CTE` construct to union rows from the recursive CTE with the original table and finally use a `TOP` clause to get the desired number of rows.
```
DECLARE @n INT = 10
;WITH Nulls AS (
SELECT 1 AS i
UNION ALL
SELECT i + 1 AS i
FROM Nulls
WHERE i < @n
),
itemsWithNulls AS
(
SELECT * FROM itens
UNION ALL
SELECT NULL, NULL, NULL, NULL FROM Nulls
)
SELECT TOP (@n) *
FROM itemsWithNulls
```
**EDIT:**
By reading the requirements more carefully, the OP actually wants the total number of rows returned to be **a multiple of 10**. E.g. if table `itens` has 4 rows then 10 rows should be returned, if `itens` has 12 rows then 20 rows should be returned, etc.
In this case `@n` should be set to:
```
DECLARE @n INT = ((SELECT COUNT(*) FROM itens) / 10 + 1) * 10
```
We can actually fit everything inside *a single sql statement* with the use of nested CTEs:
```
;WITH NumberOfRows AS (
SELECT n = ((SELECT COUNT(*) FROM itens) / 10 + 1) * 10
), Nulls AS (
SELECT 1 AS i
UNION ALL
SELECT i + 1 AS i
FROM Nulls
WHERE i < (SELECT n FROM NumberOfRows)
),
itemsWithNulls AS
(
SELECT * FROM itens
UNION ALL
SELECT NULL, NULL, NULL, NULL FROM Nulls
)
SELECT TOP (SELECT n FROM NumberOfRows) *
FROM itemsWithNulls
```
[SQL Fiddle here](http://sqlfiddle.com/#!3/21db4/2)
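The same shape also works in SQLite's recursive CTEs, so here is a runnable sketch via Python's `sqlite3` (the four columns of `itens` are invented for the demo; `@n` becomes a value computed in Python):

```python
import sqlite3

# Pad a result set with NULL rows up to the next multiple of 10.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE itens (a INT, b INT, c INT, d INT);
INSERT INTO itens VALUES (1, 1, 1, 10), (2, 2, 2, 30), (3, 3, 5, 15), (4, 4, 2, 10);
""")
count = conn.execute("SELECT COUNT(*) FROM itens").fetchone()[0]
n = (count // 10 + 1) * 10  # same formula as above: 4 rows -> 10
rows = conn.execute("""
    WITH RECURSIVE nulls(i) AS (
        SELECT 1 UNION ALL SELECT i + 1 FROM nulls WHERE i < ?
    )
    SELECT * FROM itens
    UNION ALL
    SELECT NULL, NULL, NULL, NULL FROM nulls
    LIMIT ?
""", (n, n)).fetchall()
print(len(rows))  # 10
```

Note that the formula bumps an exact multiple up to the next one (10 real rows would give 20 total); adjust it if you want an exact multiple to stay as-is.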
|
This might work for you - use an arbitrary cross join to create a large number of null rows, and then union them back in with your real data. You'll need to pay extra attention to the ORDERING to ensure that it is the nulls at the bottom.
```
DECLARE @NumRows INT = 50;
SELECT TOP (@NumRows) *
FROM
(
SELECT * FROM itens
UNION
SELECT TOP (@NumRows) NULL, NULL, NULL, NULL
FROM sys.objects o1 CROSS JOIN sys.objects o2
) x
ORDER BY CASE WHEN x.ID IS NULL THEN 9999 ELSE ID END
```
[Fiddle Demo](http://sqlfiddle.com/#!3/8f406c/2)
|
How can I get a sql query that always would be multiple of n?
|
[
"",
"sql",
"sql-server",
"jasper-reports",
""
] |
I have a huge database that contains writer names.
There are multiple duplicate records in my database, but I don't know which rows are duplicates.
How can I delete duplicate rows without knowing their values?
|
Try:
```
delete from tbl
where writer_id in
(select writer_id
from (select * from tbl) t
where exists (select 1
from (select * from tbl) x
where x.writer_name = t.writer_name
and t.writer_id < x.writer_id));
```
See demo:
<http://sqlfiddle.com/#!2/845ca3/1/0>
This keeps the first row for each `writer_name`, in order of `writer_id` ascending.
The EXISTS subquery will run for every row, however. You could also try:
```
delete t
from
tbl t
left join ( select writer_name, min(writer_id) as writer_id
from tbl
group by writer_name ) x
on t .writer_name = x.writer_name
and x.writer_id = t .writer_id
where
x.writer_name is null;
```
demo: <http://sqlfiddle.com/#!2/075f9/1/0>
If there are no foreign key constraints on the table you could also use `create table as select` to create a new table without the duplicate entries, drop the old table, and rename the new table to that of the old table's name, getting what you want in the end. (This would not be the way to go if this table has foreign keys though)
That would look like this:
```
create table tbl2 as (select distinct writer_name from tbl);
drop table tbl;
alter table tbl2 add column writer_id int not null auto_increment first,
add primary key (writer_id);
rename table tbl2 to tbl;
```
demo: <http://sqlfiddle.com/#!2/8886d/1/0>
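To see the keep-the-lowest-id behaviour end to end, here is a sketch in SQLite via Python's `sqlite3` (schema invented for the demo; SQLite, unlike MySQL, allows selecting from the table being deleted, so the nested `(select * from tbl)` workaround isn't needed):

```python
import sqlite3

# Delete duplicate names, keeping the row with the lowest id per name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE writers (writer_id INTEGER PRIMARY KEY, writer_name TEXT);
INSERT INTO writers (writer_name) VALUES
  ('Alice'), ('Bob'), ('Alice'), ('Bob'), ('Carol');
""")
conn.execute("""
    DELETE FROM writers
    WHERE writer_id NOT IN (SELECT MIN(writer_id)
                            FROM writers
                            GROUP BY writer_name)
""")
rows = conn.execute(
    "SELECT writer_id, writer_name FROM writers ORDER BY writer_id"
).fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob'), (5, 'Carol')]
```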
|
```
SELECT a.*
FROM the_table a
INNER JOIN the_table b ON a.field1 = b.field1 AND (etc)
WHERE a.pk != b.pk
```
hope that query can solve your problem.
|
Delete multiple rows without knowing the names of the rows
|
[
"",
"mysql",
"sql",
"duplicates",
""
] |
Here's the scenario:
```
create table a (
id serial primary key,
val text
);
create table b (
id serial primary key,
a_id integer references a(id)
);
create rule a_inserted as on insert to a do also insert into b (a_id) values (new.id);
```
I'm trying to create a record in `b` referencing to `a` on insertion to `a` table. But what I get is that `new.id` is null, as it's automatically generated from a sequence. I also tried a trigger `AFTER` insert `FOR EACH ROW`, but result was the same. Any way to work this out?
|
Avoid rules, as they'll come back to bite you.
Use an after trigger on table a that runs for each row. It should look something like this (untested):
```
create function a_ins() returns trigger as $$
begin
insert into b (a_id) values (new.id);
return null;
end;
$$ language plpgsql;
create trigger a_ins after insert on a
for each row execute procedure a_ins();
```
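For comparison, the same after-insert wiring can be tried end to end in SQLite, where the trigger body holds the statement directly (no separate function needed) and `NEW.id` carries the freshly generated key, just as in the PostgreSQL version:

```python
import sqlite3

# AFTER INSERT trigger on a that records the new id in b.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER REFERENCES a(id));
CREATE TRIGGER a_ins AFTER INSERT ON a
BEGIN
    INSERT INTO b (a_id) VALUES (NEW.id);
END;
INSERT INTO a (val) VALUES ('foo');
""")
rows = conn.execute("SELECT id, a_id FROM b").fetchall()
print(rows)  # [(1, 1)]
```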
|
To keep it simple, you could also just use a [data-modifying CTE](http://www.postgresql.org/docs/current/interactive/queries-with.html#QUERIES-WITH-MODIFYING) (and no trigger or rule):
```
WITH ins_a AS (
INSERT INTO a(val)
VALUES ('foo')
RETURNING a_id
)
INSERT INTO b(a_id)
SELECT a_id
FROM ins_a
RETURNING b.*; -- last line optional if you need the values in return
```
Related answer with more details:
* [PostgreSQL multi INSERT...RETURNING with multiple columns](https://stackoverflow.com/questions/22775109/postgresql-multi-insert-returning-with-multiple-columns/22775268#22775268)
Or you can work with [`currval()` and `lastval()`](http://www.postgresql.org/docs/current/interactive/functions-sequence.html#FUNCTIONS-SEQUENCE-TABLE):
* [How to get the serial id just after inserting a row?](https://stackoverflow.com/questions/16897007/how-to-get-the-serial-id-just-after-inserting-a-row/16897071#16897071)
* [Reference value of serial column in another column during same INSERT](https://stackoverflow.com/questions/12433075/reference-value-of-serial-column-in-another-column-during-same-insert/12433446#12433446)
|
Insert inserted id to another table
|
[
"",
"sql",
"postgresql",
"triggers",
"primary-key",
"entity-relationship",
""
] |
I need to group `Customer` and `AlertName` to get how many alerts each customer has and after that I need to order results by `No_Alerts`. I'm using this SQL query:
```
SELECT Customer, AlertName, COUNT(AlertName) as No_Alerts
FROM Alerts
GROUP BY Customer, AlertName
ORDER BY Customer, No_Alerts DESC
```
Result is:
```
Customer AlertName No_Alerts
----------------------------------
1 Cust1 Alert1 12
2 Cust1 Alert7 5
3 Cust1 Alert5 3
4 Cust2 Alert8 32
5 Cust2 Alert4 17
6 Cust2 Alert2 2
7 Cust3 Alert3 234
8 Cust3 Alert4 22
9 Cust3 Alert6 7
```
But how do I get the following result, where the data above is ordered by `No_Alerts`?
```
Customer AlertName No_Alerts
----------------------------------
1 Cust3 Alert3 234
2 Cust3 Alert4 22
3 Cust3 Alert6 7
4 Cust2 Alert8 32
5 Cust2 Alert4 17
6 Cust2 Alert2 2
7 Cust1 Alert1 12
8 Cust1 Alert7 5
9 Cust1 Alert5 3
```
Thanks in advance!
|
I think you want to order by the total number (maximum number) of alerts for each customers. You can do this by adding an additional column, using window functions, and sorting by that:
```
SELECT Customer, AlertName, COUNT(AlertName) as No_Alerts,
SUM(COUNT(AltertName)) OVER (PARTITION BY Customer) as TotalCustomerAlerts
FROM Alerts
GROUP BY Customer, AlertName
ORDER BY TotalCustomerAlerts DESC, Customer, No_Alerts DESC;
```
Note that the `order by` includes both the total and `Customer`. This handles the situation where two customers have the same total.
If you really want the ordering by the max for the customer, use `MAX()` instead of `SUM()`.
If you don't want to see the extra column, use a subquery or CTE.
|
Vendor-specific solutions notwithstanding, it should be possible to nest your SELECTs.
The outer one has the ORDER BY, the inner one has all the rest:
```
SELECT *
FROM (SELECT ... GROUP BY ...) x  -- SQL Server requires an alias on the derived table
ORDER BY ...
```
|
SQL Server 2008 - Order grouped data by
|
[
"",
"sql",
"sql-server-2008",
""
] |
The table below was created from another table with columns ID, Name, Organ, and Age.
The values found in the Organ column were codes which designated both organ and condition.
Using CASE I made a table like this:
```
--------------------------------------------------------
ID NAME Heart Brain Lungs Kidneys AGE
1318 Joe Smith NULL NULL NULL NULL 50
1318 Joe Smith NULL NULL NULL NULL 50
1318 Joe Smith NULL NULL NULL Below 50
1318 Joe Smith NULL NULL NULL Below 50
1318 Joe Smith NULL NULL Above NULL 50
1318 Joe Smith NULL NULL Above NULL 50
1318 Joe Smith Average NULL NULL NULL 50
1318 Joe Smith Average NULL NULL NULL 50
--------------------------------------------------------
```
I would like to query this table and get the following result:
```
--------------------------------------------------------
1318 Joe Smith Average NULL Above Below 50
--------------------------------------------------------
```
In other words I would like to create one record based on the unique values from each
column.
|
Assuming each organ can either have just one value or a `null`, as shown in the sample data, the `max` aggregate function should do the trick:
```
SELECT id, name,
MAX(heart), MAX(brain), MAX(lungs), MAX(kidneys),
age
FROM my_table
GROUP BY id, name, age
```
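Here is the same `MAX`-per-column collapse run against the question's sample rows, sketched in SQLite via Python's `sqlite3`:

```python
import sqlite3

# Collapse duplicate rows: MAX() skips NULLs, so each organ column
# keeps its single non-null value (or stays NULL if none exists).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organs (id INT, name TEXT, heart TEXT, brain TEXT,
                     lungs TEXT, kidneys TEXT, age INT);
INSERT INTO organs VALUES
  (1318, 'Joe Smith', NULL, NULL, NULL, NULL, 50),
  (1318, 'Joe Smith', NULL, NULL, NULL, 'Below', 50),
  (1318, 'Joe Smith', NULL, NULL, 'Above', NULL, 50),
  (1318, 'Joe Smith', 'Average', NULL, NULL, NULL, 50);
""")
row = conn.execute("""
    SELECT id, name, MAX(heart), MAX(brain), MAX(lungs), MAX(kidneys), age
    FROM organs
    GROUP BY id, name, age
""").fetchone()
print(row)  # (1318, 'Joe Smith', 'Average', None, 'Above', 'Below', 50)
```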
|
```
select id,name,heart=(select distinct(heart) from organ where id=1318 and heart is not null)
,brain= (select distinct(brain) from organ where id=1318 and brain is not null)
,lungs=(select distinct(lungs) from organ where id=1318 and lungs is not null)
,kidneys = (select distinct(kidneys) from organ where id=1318 and kidneys is not null)
,age from organ where id=1318
```
|
SQL - How to combine rows based on unique values
|
[
"",
"sql",
"sql-server",
"string",
"select",
"unique",
""
] |
Table
```
ColA ColB
1 Testa:testb:Testc:testd
```
How can I find matching rows based on a single value found within the colon-delimited values? I've tried this but it doesn't match rows where ColB has more than one embedded value:
```
Select * from Table
Where ColB = 'testc' and ColA =1
```
|
FWIW you can also use INSTR:
```
Select *
from Table
Where INSTR(':' || ColB || ':', ':testc:') > 0 and ColA = 1;
```
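Both variants can be checked quickly outside Oracle; here is a sketch in SQLite via Python's `sqlite3`. Note the search term matches the case of the sample data ('Testc'), since `INSTR` is case-sensitive in both engines:

```python
import sqlite3

# Wrap ColB in delimiters so 'Testc' can't match inside a longer token.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ColA INT, ColB TEXT);
INSERT INTO t VALUES (1, 'Testa:testb:Testc:testd');
""")
by_instr = conn.execute("""
    SELECT COUNT(*) FROM t
    WHERE INSTR(':' || ColB || ':', ':Testc:') > 0 AND ColA = 1
""").fetchone()[0]
by_like = conn.execute("""
    SELECT COUNT(*) FROM t
    WHERE ':' || ColB || ':' LIKE '%:Testc:%' AND ColA = 1
""").fetchone()[0]
print(by_instr, by_like)  # 1 1
```

(SQLite's `LIKE` happens to be case-insensitive for ASCII by default, unlike Oracle's, so prefer exact-case terms when porting checks like this.)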
|
Use `like`:
```
Select *
from Table
Where (':' || ColB || ':' like '%:testc:%') and ColA = 1;
```
|
how to use colon separated values in oracle sql where clause
|
[
"",
"sql",
"oracle",
"oracle-apex",
""
] |
I want to select
```
data12 [last entry for 12-21-2014],
data11 [last entry for 12-20-2014],
data8 [last entry for 12-19-2014]
```
from the below table.
```
snapshot_datestamp data
-------------------------------
12-21-2014 08:24:21 data12
12-20-2014 19:58:49 data11
12-20-2014 19:55:36 data10
12-20-2014 19:53:59 data9
12-19-2014 21:56:23 data8
12-19-2014 21:13:16 data7
12-19-2014 11:05:45 data6
12-19-2014 11:05:07 data5
12-19-2014 10:56:13 data4
12-19-2014 10:52:21 data3
12-19-2014 10:50:43 data2
12-19-2014 10:49:30 data1
```
Not quite sure how to achieve this. Any pointer would be a great help.
|
One way is to get the latest time per day, then select the corresponding records:
```
select
trunc(snapshot_datestamp),
data
from mytable
where snapshot_datestamp in
(
select max(snapshot_datestamp)
from mytable
group by trunc(snapshot_datestamp)
)
order by trunc(snapshot_datestamp);
```
Another is to use an analytic function:
```
select
trunc(snapshot_datestamp),
max(data) keep (dense_rank last order by snapshot_datestamp)
from mytable
group by trunc(snapshot_datestamp)
order by trunc(snapshot_datestamp);
```
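The first (subquery) form is portable enough to demo outside Oracle. Here is a sketch in SQLite via Python's `sqlite3`, with `DATE()` standing in for `TRUNC()` and timestamps stored in ISO format so that string `MAX()` orders chronologically:

```python
import sqlite3

# Latest entry per day: keep rows whose timestamp is the per-day maximum.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE snaps (snapshot_datestamp TEXT, data TEXT);
INSERT INTO snaps VALUES
  ('2014-12-21 08:24:21', 'data12'),
  ('2014-12-20 19:58:49', 'data11'),
  ('2014-12-20 19:55:36', 'data10'),
  ('2014-12-19 21:56:23', 'data8'),
  ('2014-12-19 21:13:16', 'data7');
""")
rows = conn.execute("""
    SELECT DATE(snapshot_datestamp), data
    FROM snaps
    WHERE snapshot_datestamp IN (SELECT MAX(snapshot_datestamp)
                                 FROM snaps
                                 GROUP BY DATE(snapshot_datestamp))
    ORDER BY 1
""").fetchall()
print(rows)
```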
|
A simple way is by using `ORDER BY` and `ROWNUM`:
```
SELECT *
FROM (SELECT data
FROM tablename
WHERE trunc(snapshot_datestamp) = TO_DATE('12-21-2014','MM-DD-YYYY')
ORDER BY snapshot_datestamp DESC)
WHERE ROWNUM = 1;
```
|
Get the latest entry for a given day in Oracle
|
[
"",
"sql",
"oracle",
"greatest-n-per-group",
"sysdate",
""
] |
I'm a little new at SQL so please bear with me. I am attempting to write a query that will allow me to loop through an entire table and find the number of times null values appear in each column. This is easy to do the hard way by typing the following:
```
Select
SUM(CASE WHEN COL_1 IS NULL THEN 1 ELSE 0 END) AS COL_1_NULLS
,SUM(CASE WHEN COL_2 IS NULL THEN 1 ELSE 0 END) AS COL_2_NULLS
FROM TABLE1
```
This is easy but it can become arduous if you want to do this for multiple tables or if a single table has a lot of columns.
I'm looking for a way to write a query that passes a table name into it and then loops through each column in the defined table (possibly pulling the column name by ordinal position via a join to a metadata view?) and then sums the number of nulls in the column. Before anyone jumps on the nitpick bandwagon, please keep in mind that this basic idea could be used for more than just finding nulls. Any assistance with this issue is greatly appreciated.
|
You need to use dynamic sql:
```
declare @custom_sql varchar(max)
set @custom_sql = 'SELECT null as first_row'
select
@custom_sql = @custom_sql + ', ' + 'SUM(CASE WHEN ' + COLUMN_NAME + ' IS NULL THEN 1 ELSE 0 END) as ' + COLUMN_NAME + '_NULLS'
from
INFORMATION_SCHEMA.COLUMNS where table_name = 'MYTABLE'
set @custom_sql = @custom_sql + ' FROM MYTABLE'
exec(@custom_sql)
```
You can also use the `COALESCE` term (just for a slightly different approach):
```
declare @custom_sql varchar(max)
select
@custom_sql = COALESCE(@custom_sql + ', ', '') + 'SUM(CASE WHEN ' + COLUMN_NAME + ' IS NULL THEN 1 ELSE 0 END) as ' + COLUMN_NAME + '_NULLS'
from
INFORMATION_SCHEMA.COLUMNS where table_name = 'users'
set @custom_sql = 'SELECT ' + @custom_sql
set @custom_sql = @custom_sql + ' FROM Users'
print @custom_sql
exec(@custom_sql)
```
|
I don't know how to make a generic query, but you can always generate the script like this
```
declare @sql nvarchar(max) = 'select 1 as dummy'
select @sql = @sql + '
, sum(case when [' + c.name + '] is null then 1 else 0 end) as [' + c.name + '_NULLS]'
from sys.columns c
join sys.tables t on t.object_id = c.object_id
where t.name = 'TABLE1'
set @sql = @sql + ' from TABLE1'
select @sql
```
Then you can execute the result eg. with `exec sp_executesql @sql`
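The same generate-then-execute idea works from any host language, not just T-SQL. As a rough sketch (using SQLite from Python, with `PRAGMA table_info` standing in for `INFORMATION_SCHEMA.COLUMNS`; table and column names here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (col_1 INTEGER, col_2 TEXT);
INSERT INTO t1 VALUES (1, NULL), (NULL, 'x'), (NULL, NULL);
""")

# Read the column list from the catalog, then build the SUM(CASE...) query
# dynamically, exactly like the T-SQL answers above do with string concatenation.
cols = [r[1] for r in conn.execute("PRAGMA table_info(t1)")]
parts = [f"SUM(CASE WHEN {c} IS NULL THEN 1 ELSE 0 END) AS {c}_nulls"
         for c in cols]
sql = "SELECT " + ", ".join(parts) + " FROM t1"
nulls = conn.execute(sql).fetchone()
print(dict(zip(cols, nulls)))   # {'col_1': 2, 'col_2': 2}
```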
|
Dynamically Count Null Values in SQL Server
|
[
"",
"sql",
"sql-server",
""
] |
Consider I have a table like the following:
my\_table
```
+---------------+
| id | name |
+---------------+
| 1 | ABC |
+---------------+
| 2 | XYZ |
+---------------+
| 3 | PQR |
+---------------+
| 4 | LMN |
+---------------+
```
And say I have a query like this
```
select * from my_table where id in (1,2,3,4,5)
```
Is it possible to get output like the following,by changing the query.
```
+---------------+
| id | name |
+---------------+
| 1 | ABC |
+---------------+
| 2 | XYZ |
+---------------+
| 3 | PQR |
+---------------+
| 4 | LMN |
+---------------+
| 5 | NULL |
+---------------+
```
I tried using self JOIN and other conditions and also google'd a lot,but didn't find a solution.
Can anyone suggest a solution?
|
Unfortunately, mysql doesn't have a built-in function that generates a series (as many other databases do). There are (at least) two ways of doing it:
Hard code the desired values as a subquery, then left join to your table:
```
select x.id, t.name
from (select 1 id
union select 2
union select 3
union select 4
union select 5) x
left join my_table t on t.id = x.id
```
But this is tedious and hard to code and maintain.
Or (as I have done before) create a table (once) and populate with natural numbers (once) to use as a proxy series generator:
```
create table numbers (num int);
insert into numbers values (1), (2), (3), ... etc
```
then:
```
select n.num id, t.name
from numbers n
left join my_table t on t.id = n.num
where n.num in (1,2,3,4,5)
```
Once set up and populated with lots of numbers, this approach is very handy.
You can create a similar table populated with dates, used in a similar way, which is very handy for producing figures for every date in a range when not all dates have data.
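The hard-coded union and the numbers-table approach produce the same left join; as an illustration only, here is the numbers-table version checked with SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INT, name TEXT);
INSERT INTO my_table VALUES (1,'ABC'),(2,'XYZ'),(3,'PQR'),(4,'LMN');
CREATE TABLE numbers (num INT);
INSERT INTO numbers VALUES (1),(2),(3),(4),(5);
""")

# Left join the series onto the data table; ids with no match come back NULL.
rows = conn.execute("""
SELECT n.num AS id, t.name
FROM numbers n
LEFT JOIN my_table t ON t.id = n.num
WHERE n.num IN (1,2,3,4,5)
ORDER BY n.num
""").fetchall()
print(rows)   # id 5 comes back with a NULL name, as the question wants
```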
|
You can create a number series without creating an extra table or writing conditions for each value that needs to be searched. [You can use a variable `rownum`, initialized to 0 and increased by 1 per row, to easily create a series by using `limit`](http://jimmod.com/blog/2008/09/09/displaying-row-number-rownum-in-mysql/). I used the `INFORMATION_SCHEMA.COLUMNS` table so you can create a big series (you can use any bigger table that you have in your DB, or any table large enough for your needs).
[SQL Fiddle](http://sqlfiddle.com/#!9/78515/15)
**MySQL 5.6.6 m9 Schema Setup**:
```
CREATE TABLE my_table
(`id` int, `name` varchar(3))
;
INSERT INTO my_table
(`id`, `name`)
VALUES
(1, 'ABC'),
(2, 'XYZ'),
(3, 'PQR'),
(4, 'LMN')
;
```
**Query 1**:
```
select rownum id, name
from (
select @rownum:=@rownum+1 as rownum
from INFORMATION_SCHEMA.COLUMNS,(SELECT @rownum:=0) r limit 5) as num
left outer join my_table on id = rownum
```
**[Results](http://sqlfiddle.com/#!9/78515/15/0)**:
```
| ID | NAME |
|----|--------|
| 1 | ABC |
| 2 | XYZ |
| 3 | PQR |
| 4 | LMN |
| 5 | (null) |
```
|
Mysql get all records described in in condition even if it does not exists in table
|
[
"",
"mysql",
"sql",
"database",
"select",
""
] |
Is it possible to include [strtotime](http://php.net/manual/en/function.strtotime.php) date comparison parameters when using medoo to SELECT rows from a database?
I'm using [medoo](http://medoo.in/) to make database work nice & simple - it's great. However, I'm not sure if I can (but suspect I can) include a date range like parameter when getting rows from my database.
Instead of going through all rows in a foreach routine of what is a large database, I was hoping I could optimise my SELECT/WHERE calls. My loose example/idea is here, but I'm not sure what will work.
```
$thelogs = $database->select("system_logs",
array("date", "category", "description","class"),
array("date" => .........strtotime('-7 day'))
);
```
...More information pertaining to the way dates are being saved:
The date column is set to datetime in MySQL.
An example entry would be: 2014-12-21 05:31:22
In PHP I'm just using date('Y-m-d H:i:s')
|
I managed to get this to work with the following:
```
$log_date_range = date("Y-m-d H:i:s", strtotime("-7 days"));
$thelogs = $database->select("logs", array(
"date",
"category",
"description",
"class",
"ID"), array(
"date[>]" => $log_date_range
));
```
|
Medoo supports datetime conditions. You can use one like this:
```
$database->select("system_logs", array(
"date",
"category",
"description",
"class"
), array(
    "date[>]" => date('Y-m-d H:i:s', strtotime("-7 days"))
));
```
|
Medoo SQL database select rows matching a date range
|
[
"",
"mysql",
"sql",
"date",
"strtotime",
"medoo",
""
] |
I'm a beginner in SQL.
I want to delete several rows from my table at once (not one by one). I think the best way is to use an array.
I want to pass the Ids of the rows (of type int) to a stored procedure as an array. I have searched and found several ways, but haven't found my solution yet.
Thanks to anyone who can guide me. :)
|
Here's a link that I found useful; it solved my problem.
I hope it's helpful for you, too. :)
<http://www.codeproject.com/Questions/631284/How-to-pass-an-array-to-SQL-SERVER-stored-pro>
|
This will help you. :) Follow the next steps,
1. Open the Query Designer
2. Copy and paste the following code as-is; it creates the function that converts the delimited string into a table of ints

```
CREATE FUNCTION dbo.SplitInts
(
   @List VARCHAR(MAX),
   @Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
RETURN ( SELECT Item = CONVERT(INT, Item) FROM
    ( SELECT Item = x.i.value('(./text())[1]', 'varchar(max)')
      FROM ( SELECT [XML] = CONVERT(XML, '<i>'
        + REPLACE(@List, @Delimiter, '</i><i>') + '</i>').query('.')
      ) AS a CROSS APPLY [XML].nodes('i') AS x(i) ) AS y
    WHERE Item IS NOT NULL
);
GO
```
3. Create the Following stored procedure
```
CREATE PROCEDURE dbo.sp_DeleteMultipleId
@List VARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
DELETE FROM TableName WHERE Id IN( SELECT Id = Item FROM dbo.SplitInts(@List, ','));
END
GO
```
4. Execute this SP using `exec sp_DeleteMultipleId '1,2,3,12'`; this is a string of the Ids you want to delete.
5. Convert your array to a comma-separated string in C# and pass it as the stored procedure parameter

```
int[] intarray = { 1, 2, 3, 4, 5 };
string result = string.Join(",", intarray);

SqlCommand command = new SqlCommand();
command.Connection = connection;
command.CommandText = "sp_DeleteMultipleId";
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add("@List", SqlDbType.VarChar).Value = result;
```
This will delete multiple rows, All the best
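T-SQL needs the splitter function above because it cannot bind an array directly. From a host language another common approach is to expand one placeholder per value, which keeps everything parameterized. A sketch with Python and SQLite (the table and ids are invented; the same expand-placeholders idea works from ADO.NET too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY);
INSERT INTO items VALUES (1),(2),(3),(4),(5),(12);
""")

ids = [1, 2, 3, 12]
# One '?' placeholder per id keeps the values parameterized (no SQL injection).
placeholders = ",".join("?" * len(ids))
conn.execute(f"DELETE FROM items WHERE id IN ({placeholders})", ids)
remaining = [r[0] for r in conn.execute("SELECT id FROM items ORDER BY id")]
print(remaining)   # [4, 5]
```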
|
How to pass an array to sql?
|
[
"",
"sql",
"stored-procedures",
""
] |
I have a table `event` with a field `end_date`. I want to fetch the number of records grouped by month, where each record is counted in the month its `end_date` falls in. For example:
If a record has an `end_date` of `2013-01-01 00:00:00` then it should be counted in `January 2013`, but I am not able to do that. I can't work out the where condition; how do I tell the database that `end_date` should fall within the month it is currently grouping by?
```
SELECT COUNT(*) AS 'count', MONTH(created) AS 'month', YEAR(created) AS 'year' FROM event WHERE is_approved =1 GROUP BY YEAR(created), MONTH(created)
```
Please help me out.
**EDIT :**
Data say i have is like:
```
Record name end_date
record_1 2013-11-01 00:00:00
record_2 2013-11-30 00:00:00
record_3 2013-12-01 00:00:00
record_4 2013-12-04 00:00:00
record_5 2013-12-06 00:00:00
record_6 2013-12-10 00:00:00
...many more
```
Result Expected is:
```
Count month year
2 11 2013
4 12 2013
....so on
```
|
Try this:
```
SELECT COUNT(1) AS 'count', MONTH(end_date) AS 'month', YEAR(end_date) AS 'year'
FROM event
WHERE is_approved = 1
GROUP BY EXTRACT(YEAR_MONTH FROM end_date);
```
**OR**
```
SELECT COUNT(1) AS 'count', MONTH(end_date) AS 'month', YEAR(end_date) AS 'year'
FROM event
WHERE is_approved = 1
GROUP BY YEAR(end_date), MONTH(end_date);
```
**::EDIT::**
**1. end date is greater than that particular month** - Simply add a where condition to your query and pass the particular month in the format **YYYYMM**, e.g. **201411**
**2. event is started** - Add one more where condition to check whether the **created** date is less than the current date
```
SELECT COUNT(1) AS 'count', MONTH(end_date) AS 'month', YEAR(end_date) AS 'year'
FROM event
WHERE is_approved = 1 AND
EXTRACT(YEAR_MONTH FROM end_date) > 201411 AND
DATE(created) <= CURRENT_DATE()
GROUP BY EXTRACT(YEAR_MONTH FROM end_date);
```
**OR**
```
SELECT COUNT(1) AS 'count', MONTH(end_date) AS 'month', YEAR(end_date) AS 'year'
FROM event
WHERE is_approved = 1 AND
EXTRACT(YEAR_MONTH FROM end_date) > 201411 AND
DATE(created) <= CURRENT_DATE()
GROUP BY YEAR(end_date), MONTH(end_date);
```
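The `YEAR()`/`MONTH()` grouping maps directly onto other dialects. Purely as a sanity check of the pattern (not part of the original answer), here it is against SQLite from Python, where `strftime` plays the role of `YEAR()` and `MONTH()`, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event (name TEXT, end_date TEXT, is_approved INT);
INSERT INTO event VALUES
  ('record_1','2013-11-01 00:00:00',1),
  ('record_2','2013-11-30 00:00:00',1),
  ('record_3','2013-12-01 00:00:00',1),
  ('record_4','2013-12-04 00:00:00',1),
  ('record_5','2013-12-06 00:00:00',1),
  ('record_6','2013-12-10 00:00:00',1);
""")

# Count records per (year, month) of end_date.
rows = conn.execute("""
SELECT COUNT(*),
       CAST(strftime('%m', end_date) AS INT) AS month,
       CAST(strftime('%Y', end_date) AS INT) AS year
FROM event
WHERE is_approved = 1
GROUP BY year, month
ORDER BY year, month
""").fetchall()
print(rows)   # [(2, 11, 2013), (4, 12, 2013)]
```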
|
The count is aggregated based on the month and year, so if you are spanning years you won't have Jan 2013 mixed with Jan 2014; hence pulling those values too, which is also the basis of the group by.
As for your criteria, that all goes in the WHERE clause. In this case, I included anything starting Jan 1, 2013 and ending Dec 31, 2014, via the standard recognized 'yyyy-mm-dd' date format. Given the structure of the table you provided, I am using the `end_date` column.
```
SELECT
YEAR(end_date) AS EventYear,
MONTH(end_Date) AS EventMonth,
COUNT(*) AS EventCount
FROM
event
WHERE is_approved = 1
and end_date between '2013-01-01' and '2014-12-31'
GROUP BY
YEAR(end_date),
MONTH(end_Date)
```
Now, if you want the most recent events on top, order the year and month descending, so 2014 is listed first, then 2013, etc., and months within them as December (month 12) before the others.
```
ORDER BY
   YEAR(end_date) DESC,
   MONTH(end_Date) DESC
```
Your criteria could be almost anything from as simple as just a date change, approved status, or even get counts per account status is so needed, such as (and these are just EXAMPLES if you had such code status values)
```
SUM( is_approved = 0 ) as PendingEvent,
SUM( is_approved = 1 ) as ApprovedEvent,
SUM( is_approved = 2 ) as CancelledEvent
```
Per comment feedback.
For different date ranges, ignore the between clause and change the WHERE to something like
```
WHERE end_date > '2014-08-01' or all after a date...
where end_date < '2014-01-01' or all before a date...
```
They will still group by month / year. If you wanted based on a start date of the event, just change that column in instead, or do IN ADDITION to the others.
|
Created date between that particular month mysql
|
[
"",
"mysql",
"sql",
"date",
"select",
"group-by",
""
] |
I'm trying to add an XMLType column into a table, but it returns an error. Why?
This is the query:
```
alter table TEST_ID add column xml_column xmltype;
```
It returns the error:
```
[SQL] alter table TEST_ID add column xml_column xmltype
[Err] ORA-00904: : invalid identifier
```
|
You don't need the "column" word in there, so it's:
```
ALTER TABLE test_id
ADD xml_column xmltype;
```
|
In addition,
you can add multiple columns at the same time with:
```
ALTER TABLE table_name ADD (column1 VARCHAR(40), column2 Date, column3 Number);
```
|
Add column in Oracle table
|
[
"",
"sql",
"oracle",
"xmltype",
""
] |
I'm currently learning SQL and I've installed Oracle 11g Express on my system. I'm trying to create a table; however, when I try to run the below command I get the following error message:
ERROR at line 3:
ORA-00904 : invalid identifier
```
CREATE TABLE PROJECTS (
proID NUMBER(4) NOT NULL,
Desc CHAR(20),
sDate DATE,
eDate DATE,
Budget NUMBER(7,2),
maxStaff NUMBER(2)
);
```
Can anybody please tell me what I'm doing wrong?
Thanks for all the replies, I ran this command successfully:
```
CREATE TABLE PROJECTS (
proID NUMBER(4) NOT NULL,
description CHAR(20),
sDate DATE,
eDate DATE,
Budget NUMBER(7,2),
maxStaff NUMBER(2)
);
```
Really Appreciate the fast replies!
Chris
|
You have `DESC` as a column name. While you can use it, you will have to enclose it in quotes:
```
CREATE TABLE PROJECTS (
proID NUMBER(4) NOT NULL,
"Desc" CHAR(20),
sDate DATE,
eDate DATE,
Budget NUMBER(7,2),
maxStaff NUMBER(2)
);
```
You will also have to use quotes every time you call it in a query. I recommend just changing that column to something else (maybe `DESCRIPTION`?)
|
Since DESC is a reserved word, you would have to enclose it in double quotes.
However, I would not recommend using reserved words for fields names, perhaps change to description or something similar
|
Creating table via SQL Command Line, invalid identifier
|
[
"",
"sql",
"database",
"oracle",
"oracle11g",
""
] |
```
SELECT DATE_FORMAT(date, "%b") AS month, SUM(total_price) as total
FROM cart
WHERE date <= NOW()
and date >= Date_add(Now(),interval - 12 month)
GROUP BY DATE_FORMAT(date, "%m-%Y")
```
This query displays results only for months that have sales. I need all 12 months' sales.
**Output**:
```
"month" "total"
--------------
"Jun" "22"
"Aug" "30"
"Oct" "19"
"Nov" "123"
"Dec" "410"
```
**Required Output**:
```
"month" "total"
--------------
"Jan" "0"
"Feb" "0"
"Mar" "0"
"Apr" "0"
"May" "0"
"Jun" "22"
"Jul" "0"
"Aug" "30"
"Sep" "0"
"Oct" "19"
"Nov" "123"
"Dec" "410"
```
|
Thanks to @pankaj's hint; here's how I resolved it with this query...
```
SELECT
SUM(IF(month = 'Jan', total, 0)) AS 'Jan',
SUM(IF(month = 'Feb', total, 0)) AS 'Feb',
SUM(IF(month = 'Mar', total, 0)) AS 'Mar',
SUM(IF(month = 'Apr', total, 0)) AS 'Apr',
SUM(IF(month = 'May', total, 0)) AS 'May',
SUM(IF(month = 'Jun', total, 0)) AS 'Jun',
SUM(IF(month = 'Jul', total, 0)) AS 'Jul',
SUM(IF(month = 'Aug', total, 0)) AS 'Aug',
SUM(IF(month = 'Sep', total, 0)) AS 'Sep',
SUM(IF(month = 'Oct', total, 0)) AS 'Oct',
SUM(IF(month = 'Nov', total, 0)) AS 'Nov',
SUM(IF(month = 'Dec', total, 0)) AS 'Dec',
SUM(total) AS total_yearly
FROM (
SELECT DATE_FORMAT(date, "%b") AS month, SUM(total_price) as total
FROM cart
WHERE date <= NOW() and date >= Date_add(Now(),interval - 12 month)
GROUP BY DATE_FORMAT(date, "%m-%Y")) as sub
```
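`SUM(IF(...))` is MySQL-flavored conditional aggregation; the portable spelling is `SUM(CASE WHEN ...)`. A cut-down check of the pivot step using SQLite from Python (the inner monthly totals are hard-coded here instead of computed, just to exercise the pivot):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE monthly (month TEXT, total INT);
INSERT INTO monthly VALUES ('Jun', 22), ('Aug', 30), ('Dec', 410);
""")

# Pivot month rows into columns with conditional aggregation.
row = conn.execute("""
SELECT
  SUM(CASE WHEN month = 'Jun' THEN total ELSE 0 END) AS Jun,
  SUM(CASE WHEN month = 'Aug' THEN total ELSE 0 END) AS Aug,
  SUM(CASE WHEN month = 'Dec' THEN total ELSE 0 END) AS Dec_total,
  SUM(total) AS total_yearly
FROM monthly
""").fetchone()
print(row)   # (22, 30, 410, 462)
```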
|
Consider the following table
```
mysql> select * from cart ;
+------+------------+-------------+
| id | date | total_price |
+------+------------+-------------+
| 1 | 2014-01-01 | 10 |
| 2 | 2014-01-20 | 20 |
| 3 | 2014-02-03 | 30 |
| 4 | 2014-02-28 | 40 |
| 5 | 2014-06-01 | 50 |
| 6 | 2014-06-13 | 24 |
| 7 | 2014-12-12 | 45 |
| 8 | 2014-12-18 | 10 |
+------+------------+-------------+
```
Now, as per the logic, you are looking back one year, so `December` will appear twice in the result, i.e. `Dec 2013 and Dec 2014`. If we need a separate count for each of them, we can use the following technique of generating a dynamic date range: [MySql Single Table, Select last 7 days and include empty rows](https://stackoverflow.com/questions/23300303/mysql-single-table-select-last-7-days-and-include-empty-rows/23301236#23301236)
```
select
t1.month,
t1.md,
coalesce(SUM(t1.amount+t2.amount), 0) AS total
from
(
select DATE_FORMAT(a.Date,"%b") as month,
DATE_FORMAT(a.Date, "%m-%Y") as md,
'0' as amount
from (
select curdate() - INTERVAL (a.a + (10 * b.a) + (100 * c.a)) DAY as Date
from (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as a
cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as b
cross join (select 0 as a union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) as c
) a
where a.Date <= NOW() and a.Date >= Date_add(Now(),interval - 12 month)
group by md
)t1
left join
(
SELECT DATE_FORMAT(date, "%b") AS month, SUM(total_price) as amount ,DATE_FORMAT(date, "%m-%Y") as md
FROM cart
where Date <= NOW() and Date >= Date_add(Now(),interval - 12 month)
GROUP BY md
)t2
on t2.md = t1.md
group by t1.md
order by t1.md
;
```
Output will be
```
+-------+---------+-------+
| month | md | total |
+-------+---------+-------+
| Jan | 01-2014 | 30 |
| Feb | 02-2014 | 70 |
| Mar | 03-2014 | 0 |
| Apr | 04-2014 | 0 |
| May | 05-2014 | 0 |
| Jun | 06-2014 | 74 |
| Jul | 07-2014 | 0 |
| Aug | 08-2014 | 0 |
| Sep | 09-2014 | 0 |
| Oct | 10-2014 | 0 |
| Nov | 11-2014 | 0 |
| Dec | 12-2013 | 0 |
| Dec | 12-2014 | 55 |
+-------+---------+-------+
13 rows in set (0.00 sec)
```
And if you do not care about the above case i.e. `dec 2014 and dec 2013`
Then just change the `group by` in dynamic date part as
```
where a.Date <= NOW() and a.Date >= Date_add(Now(),interval - 12 month)
group by month
```
and final group by as `group by t1.month`
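The cross-join trick above exists because MySQL has no series generator; engines with recursive CTEs can build the month series directly. As an illustration only (SQLite from Python, with a fixed year instead of `NOW()` so the output is deterministic, and invented cart rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cart (date TEXT, total_price INT);
INSERT INTO cart VALUES ('2014-06-05', 22), ('2014-08-10', 30);
""")

# Generate the 12 months of 2014, then left join the sales onto them
# so empty months show up with a total of 0.
rows = conn.execute("""
WITH RECURSIVE months(m) AS (
    SELECT '2014-01-01'
    UNION ALL
    SELECT date(m, '+1 month') FROM months WHERE m < '2014-12-01'
)
SELECT strftime('%m', m) AS month, COALESCE(SUM(c.total_price), 0) AS total
FROM months
LEFT JOIN cart c ON strftime('%Y-%m', c.date) = strftime('%Y-%m', m)
GROUP BY m
ORDER BY m
""").fetchall()
print(rows[0], rows[5], rows[7])  # ('01', 0) ('06', 22) ('08', 30)
```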
|
MySQL monthly Sale of last 12 months including months with no Sale
|
[
"",
"mysql",
"sql",
"zero",
"monthcalendar",
"not-exists",
""
] |
I have the following `items` table:
```
items:
id pr1 pr2 pr3
-------------------
1 11 22 tt
...
```
and two tables associated with the items:
```
comments:
item_id text
-------------
1 "cool"
1 "very good"
...
tags:
item_id tag
-------------
1 "life"
1 "drug"
...
```
Now I want to get a table with columns `item_id, pr1, pr2, count(comments), count(tags)` with a condition `WHERE pr3 = zz`. What is the best way to get it? I can do this by creating additional tables, but I was wondering if there is a way achieve this by using only a single SQL statement. I'm using Postgres 9.3.
|
The easiest way is certainly to get the counts in the select clause:
```
select
id,
pr1,
pr2,
(select count(*) from comments where item_id = items.id) as comment_count,
(select count(*) from tags where item_id = items.id) as tag_count
from items;
```
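The correlated scalar subqueries translate to other engines essentially unchanged. A quick check of the same query shape using SQLite from Python, with toy rows matching the question's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INT, pr1 TEXT, pr2 TEXT, pr3 TEXT);
CREATE TABLE comments (item_id INT, text TEXT);
CREATE TABLE tags (item_id INT, tag TEXT);
INSERT INTO items VALUES (1, '11', '22', 'zz');
INSERT INTO comments VALUES (1, 'cool'), (1, 'very good');
INSERT INTO tags VALUES (1, 'life'), (1, 'drug');
""")

# One correlated count per child table; no double counting.
rows = conn.execute("""
SELECT id, pr1, pr2,
       (SELECT count(*) FROM comments WHERE item_id = items.id) AS comment_count,
       (SELECT count(*) FROM tags WHERE item_id = items.id) AS tag_count
FROM items
WHERE pr3 = 'zz'
""").fetchall()
print(rows)   # [(1, '11', '22', 2, 2)]
```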
|
You can just join, but you need to be careful that you don't double count. E.g. you can use subqueries to get what you want.
```
SELECT i.id,i.pr1,i.pr2, commentcount,tagcount FROM
items i
INNER JOIN
(SELECT item_id,count(*) as commentcount from comments GROUP BY item_id) c
ON i.id = c.item_id
INNER JOIN
(SELECT item_id,count(*) as tagcount from tags GROUP BY item_id) t
ON i.id = t.item_id
```
[EDIT] based on the comment, here's the left join version...
```
SELECT i.id,i.pr1,i.pr2, coalesce(commentcount,0) as commentcount,
coalesce(tagcount,0) as tagcount FROM
items i
LEFT JOIN
(SELECT item_id,count(*) as commentcount from comments GROUP BY item_id) c
ON i.id = c.item_id
LEFT JOIN
(SELECT item_id,count(*) as tagcount from tags GROUP BY item_id) t
ON i.id = t.item_id
```
|
Joining three tables with aggregation
|
[
"",
"sql",
"postgresql",
"left-join",
"correlated-subquery",
""
] |
In the spec, columns default to nullable, meaning they are optional. I'm not clear what scenario this is good for, so I'll break it into two questions:
1. Why would I even want to use nullable columns in a relational DB? If integrity isn't important I'd use another semi-structured model, rather than fearing, for each row, whether it will yield a usable object. It seems a pretty rare scenario to use them in a relational DB.
2. How is null handled in a programming language? Does it even differ between languages? Say PHP: what values should it be compared with? With fetch_array(), are the keys even going to be set?
|
A null value indicates an absent value--it's different from a zero-value (for numbers) or an empty value (for strings). Think of a boolean column where a user opts true or false to a question--if the user hasn't yet completed the form then their response would be null.
In practice null is the default value for optional columns; also, a column added to a table that already has rows must allow nulls (unless you supply a default), at least until all records have been assigned a non-null value.
Most languages have specific support for null. PHP can be a bit confusing as it's easy to ignore the differences between null, empty, false, and zero. PHP will evaluate all of these as `false` if checked in a boolean statement--but they're all there and distinct.
|
`NULL` should be used when you legitimately don't have a value or when the value doesn't make sense. `NULL` is good for "I don't know".
I'll use some examples from the system I use at work, a student information system.
In the registration table, there's a field for the first name, middle name, and last name of a student. Leaving out some details, what do you do about students that don't put anything in the middle name field on the enrollment form? They're implying they don't have a middle name, but that's not the same as explicitly writing "no middle name" in the field. Using an empty string would be explicitly stating that they don't have a middle name. The truth is, you don't know. Use `NULL`.
In the entry/withdrawal table, each entry has one entry date, and one withdrawal date. Now, when a student enters, the school has no idea when the student leaves. It might be the end of the year. It might be tomorrow. It hasn't happened yet. Use `NULL`.
Now, you can argue that any field which could be nullable should be in its own table, but that level of normalization is basically unworkable for a real-world application. I mean, to get a student's name, you'd need to join four tables because that's three 1 to 0-1 relationships. You're not even saving data space by doing this, since you have to store all those IDs somewhere to map everything back, *and* you have to index everything to make it work faster than a snail's pace. You're sacrificing the usability of the DB on the altar of normalization and the main thing you gain is a loss of performance.
First Normal Form is absolutely vital. You'll see lots of questions here that boil down to, "Help! I violated 1NF and now queries are a huge pain in the ass!" Second Normal Form is often a great idea. Third Normal Form is when things often start to get unworkable or unreasonable for most non-trivial types of data.
|
When to use nullable columns - sql
|
[
"",
"sql",
""
] |
I am having a result from a query which looks as below
```
+-------------------+
| id | c1 | c2 | c3 |
+-------------------+
| 1 | x | y | z |
+----+----+----+----+
| 1 | x | y | z1 |
+----+----+----+----+
| 2 | a | b | c |
+----+----+----+----+
| 2 | a1 | b | c1 |
+-------------------+
```
I need to fetch only records which have values in C1 and c2 different for the same id.
For the above example the result should be
```
+-------------------+
| id | c1 | c2 | c3 |
+-------------------+
| 2 | a | b | c |
+----+----+----+----+
| 2 | a1 | b | c1 |
+-------------------+
```
Can you please help with the query.
|
Joining the table to itself should work. I'm assuming that you meant C1 **or** C2 being different, given the example result you posted.
```
SELECT
t1.id,
t1.c1,
t1.c2,
t1.c3
FROM
your_table t1
INNER JOIN your_table t2 ON t1.id = t2.id
WHERE
t1.c1 <> t2.c1 OR
t1.c2 <> t2.c2
```
[SQLFiddle](http://sqlfiddle.com/#!15/77784/1).
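A quick way to convince yourself the self-join behaves as described, using SQLite from Python with the question's sample rows (illustration only; the SQLFiddle above is the dialect-accurate version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, c1 TEXT, c2 TEXT, c3 TEXT);
INSERT INTO t VALUES
  (1,'x','y','z'), (1,'x','y','z1'),
  (2,'a','b','c'), (2,'a1','b','c1');
""")

# Self-join on id; keep rows where some sibling row differs in c1 or c2.
rows = conn.execute("""
SELECT t1.id, t1.c1, t1.c2, t1.c3
FROM t t1
JOIN t t2 ON t1.id = t2.id
WHERE t1.c1 <> t2.c1 OR t1.c2 <> t2.c2
ORDER BY t1.id, t1.c1
""").fetchall()
print(rows)   # only the id=2 rows, whose c1 values differ
```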
|
This will give you any row which is different from any other. However, in the case where you have 3 rows, it might not work as you want.
```
SELECT t1.*
FROM someTable t1, someTable t2
WHERE t1.id = t2.id
AND (t1.c1 != t2.c1 OR t1.c2 != t2.c2)
```
**Edit:**
If you want only rows that are different from every other row with the same id, the first query won't work in this case:
```
+-------------------+
| id | c1 | c2 | c3 |
+-------------------+
| 1 | x | y | z |
+----+----+----+----+
| 1 | x | y | z1 |
+----+----+----+----+
| 2 | a | b | c |
+----+----+----+----+
| 2 | a1 | b | c1 |
+----+----+----+----+
| 2 | a1 | b | c3 |
+-------------------+
```
You would get:
```
+-------------------+
| id | c1 | c2 | c3 |
+-------------------+
| 2 | a | b | c |
+----+----+----+----+
| 2 | a1 | b | c1 |
+----+----+----+----+
| 2 | a1 | b | c3 |
+-------------------+
```
Which I think would be wrong. In that case you will need something like:
```
SELECT t2.*
FROM
(
SELECT id, c1, c2
FROM someTable
GROUP BY id, c1, c2
HAVING COUNT(*) = 1
) t1
JOIN someTable t2 ON t2.id = t1.id
AND t2.c1 = t1.c1
AND t2.c2 = t1.c2
```
|
comparing rows in same table
|
[
"",
"sql",
""
] |
When I execute the below query I get the error "Cannot perform an aggregate function on an expression containing an aggregate or a subquery." How can I write the same query without getting the error?
```
SELECT CCMID[Client],CCMName[Company],
(SELECT SUM( (Select SUM( PAm) from PaySalReg(NOLOCK)
Join PayEleMas (NOLOCK) ON PSPCo =PEPCo
where PSRAMId=AMId and PEMDedTy ='SSC'
and PProYe=(SELECT DOYe FROM DefOMonth)
and PProMo=(SELECT DOMo-1 FROM DefOMonth))
-
(Select SUM(PAm) from PaySalReg(NOLOCK)
Join PayEleMas (NOLOCK) ON PSPCo =PEPCo
where PSRAMId=AMId and PEMDedTy ='PRTY'
and PProYe=(SELECT DOYe FROM DefOMonth)
and PProMo=(SELECT DOMo-1 FROM DefOMonth)))
from AssMas(NOLOCK) WHERE ACID = CCMID ) AS [Net Pay],
COUNT(PAMId)[No. of Associates]
FROM PaySalReg
JOIN AssMas ON AMId =PSRAMId
JOIN CorpClMas ON ACID = CCMID
WHERE PProMon='10'
AND PProYe='2014'
AND PAPPSTA IS NULL
GROUP BY CCM_Name,CCM_ID
```
|
You can remove the outer aggregate, and wrap the query in a sub query...
Then perform the aggregate on the sub query...
Something like this...
```
SELECT CCMID, CCMName, [No. of Associates], SUM([Net Pay])
FROM (
SELECT Client as CCMID, Company as CCMName,
(Select SUM( PAm) from PaySalReg(NOLOCK)
Join PayEleMas (NOLOCK) ON PSPCo =PEPCo
where PSRAMId=AMId and PEMDedTy ='SSC'
and PProYe=(SELECT DOYe FROM DefOMonth)
and PProMo=(SELECT DOMo-1 FROM DefOMonth))
-
(Select SUM(PAm)
from PaySalReg(NOLOCK)
Join PayEleMas (NOLOCK) ON PSPCo =PEPCo
where PSRAMId=AMId and PEMDedTy ='PRTY'
and PProYe=(SELECT DOYe FROM DefOMonth)
and PProMo=(SELECT DOMo-1 FROM DefOMonth)) AS [Net Pay],
COUNT(PAMId) AS [No. of Associates]
FROM PaySalReg
JOIN AssMas ON AMId =PSRAMId
JOIN CorpClMas ON ACID = CCMID
WHERE PProMon='10'
AND PProYe='2014'
AND PAPPSTA IS NULL) data
GROUP BY CCMID, CCMName, [No. of Associates]
```
(untested, as you didn't supply a data structure.)
|
Try moving the aggregate queries into a new scalar function and calling that function from your procedure (replacing the aggregate subqueries with the function call).
|
Error with aggregate function
|
[
"",
"sql",
"sql-server",
""
] |
I don't want to type all the tables' names to drop them. Is it possible with one query?
|
Use the [**INFORMATION\_SCHEMA.TABLES**](http://msdn.microsoft.com/en-IN/library/ms186778.aspx) view to get the list of tables. Generate Drop scripts in the select statement and drop it using Dynamic SQL:
```
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += ' Drop table ' + QUOTENAME(TABLE_SCHEMA) + '.'+ QUOTENAME(TABLE_NAME) + '; '
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
Exec Sp_executesql @sql
```
[**Sys.Tables**](http://msdn.microsoft.com/en-us/library/ms187406.aspx) Version
```
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += ' Drop table ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(t.NAME) + '; '
FROM sys.tables t
JOIN sys.schemas s
ON t.[schema_id] = s.[schema_id]
WHERE t.type = 'U'
Exec sp_executesql @sql
```
**Note:** If you have any `foreign Keys` defined between tables then first run the below query to **disable** all `foreign keys` present in your database.
```
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
```
For more information, [***check here***](http://blog.sqlauthority.com/2013/04/29/sql-server-disable-all-the-foreign-key-constraint-in-database-enable-all-the-foreign-key-constraint-in-database/).
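The loop-over-the-catalog idea is the same anywhere. As a rough stand-in (SQLite from Python, where `sqlite_master` plays the role of `INFORMATION_SCHEMA.TABLES` and there are no foreign keys to disable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x INT);
CREATE TABLE b (y INT);
""")

# Read the table names from the catalog, then drop each one.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for name in names:
    conn.execute(f'DROP TABLE "{name}"')

left = conn.execute(
    "SELECT count(*) FROM sqlite_master WHERE type = 'table'").fetchone()[0]
print(left)   # 0 tables remain
```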
|
If you want to use only one SQL query to delete all tables you can use this:
```
EXEC sp_MSforeachtable @command1 = "DROP TABLE ?"
```
This is a hidden Stored Procedure in sql server, and will be executed for each table in the database you're connected.
**Note:** You may need to execute the query a few times to delete all tables due to dependencies.
**Note2:** To avoid the first note, before running the query, first check if there foreign keys relations to any table. If there are then just [disable](https://stackoverflow.com/questions/159038/how-can-foreign-key-constraints-be-temporarily-disabled-using-t-sql) foreign key constraint by running the query bellow:
```
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
```
|
How to drop all tables from a database with one SQL query?
|
[
"",
"sql",
"sql-server",
""
] |
```
SELECT formid, avals(data) as datavalues, akeys(data) as datakeys, taskid
FROM task_form_data
WHERE taskid IN (449750,449699,449620)
ORDER BY formid, timestamp DESC
```
would be a test query. The problem is that the table (which I can't change) gets new rows added with new data, but formid remains the same.
So when I select like that I get old data as well.
I cannot use DISTINCT ON (formid). I need the newest results (often 3-4 rows with different formid) for each of the taskids in the IN list (which comes from PHP and can be a large number, so I can't do a separate query for each).
Is there any way to get it working like that?
Data example(just a quick example - larger date value would be oldest timestamp):
```
formid timestamp taskid
6 1 449750
2 2 449750
2 3 449750
4 4 449750
4 5 449750
```
What should come out(number of various formid-s can be larger or smaller):
```
6 1 449750
2 2 449750
4 4 449750
```
UPDATE:
```
SELECT DISTINCT ON(formid) formid, avals(data) as datavalues, akeys(data) as datakeys, taskid, timestamp
FROM task_form_data
WHERE taskid IN (450567,449699,449620)
GROUP BY formid,taskid,data,timestamp
ORDER BY formid,timestamp DESC
```
I tried that - it seems to work - but only with the first parameter in where taskid IN.
Could it be modified to work with each value in the array?
|
You can do this with `row_number()`. For instance, if you want the three most recent rows:
```
SELECT formid, avals(data) as datavalues, akeys(data) as datakeys, taskid
FROM (SELECT d.*, row_number() over (partition by formid, taskid order by timestamp desc) as seqnum
FROM task_form_data d
WHERE taskid IN (449750, 449699, 449620)
) d
WHERE seqnum <= 3
ORDER BY taskid, formid, timestamp DESC;
```
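`row_number()` works the same way in any engine with window functions. A small check with the question's sample data, using SQLite from Python (note the question's example treats the *smaller* timestamp as the newest row, so the sketch orders ascending within each partition; with real timestamps you'd use `DESC` as above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task_form_data (formid INT, timestamp INT, taskid INT);
INSERT INTO task_form_data VALUES
  (6,1,449750),(2,2,449750),(2,3,449750),(4,4,449750),(4,5,449750);
""")

# Keep only the first-ranked row per (formid, taskid).
rows = conn.execute("""
SELECT formid, timestamp, taskid
FROM (SELECT d.*, row_number() OVER (
          PARTITION BY formid, taskid ORDER BY timestamp) AS seqnum
      FROM task_form_data d
      WHERE taskid IN (449750, 449699, 449620)) d
WHERE seqnum = 1
ORDER BY timestamp
""").fetchall()
print(rows)   # [(6, 1, 449750), (2, 2, 449750), (4, 4, 449750)]
```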
|
I don't really like this, but if your table is not huge it can work:
```
SELECT tfm.formid, avals(tfm.data) as datavalues, akeys(tfm.data) as datakeys, tfm.taskid
FROM task_form_data AS tfm
INNER JOIN
(
    SELECT formid, MAX(timestamp) AS max_ts
    FROM task_form_data
    WHERE taskid IN (449750,449699,449620)
    GROUP BY formid
) AS a
ON tfm.formid = a.formid
AND tfm.timestamp = a.max_ts
WHERE tfm.taskid IN (449750,449699,449620)
ORDER BY tfm.formid, tfm.timestamp DESC;
```
Did you really call a field "timestamp"?
|
SELECT rows without duplicates and WHERE IN ()
|
[
"",
"sql",
"postgresql",
""
] |
I'm trying to update a column that could have a single space or multiple spaces into just one single space, using a plain SQL statement, not PL/SQL.
I could do it through update table set column_name='' where column_name like '% %'
However, there could be data such as abc def in that column. I do not want to disturb the pattern of that data, meaning I want to do the update only when the column is filled with white space, and not touch columns that have any real data.
|
I would recommend using a regular expression to do this, both to do the replacement and to do the matching:
```
UPDATE mytable
SET mycolumn = REGEXP_REPLACE(mycolumn, '\s{2,}', ' ')
WHERE REGEXP_LIKE(mycolumn, '\s{2,}')
```
This will replace two or more consecutive whitespace characters (spaces, tabs, etc.) with a single space. If you just want to replace spaces and not tabs, carriage returns, or newlines, use the following:
```
UPDATE mytable
SET mycolumn = REGEXP_REPLACE(mycolumn, ' {2,}', ' ')
WHERE REGEXP_LIKE(mycolumn, ' {2,}')
```
The reason for using `{2,}` is so that we don't bother replacing spaces where it need not be done.
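For intuition, the same replacement can be sketched outside the database in Python's `re` module, which accepts a compatible pattern (this is only an analogue for illustration, not Oracle code):

```python
import re

def collapse_whitespace(value):
    # Replace runs of two or more whitespace characters with a single space,
    # mirroring REGEXP_REPLACE(mycolumn, '\s{2,}', ' ') above.
    return re.sub(r"\s{2,}", " ", value)

collapsed = collapse_whitespace("abc   def\t\tghi")  # runs collapsed to one space
untouched = collapse_whitespace("abc def")           # single space: left alone
```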
|
With regular expression:
```
update table set column=regexp_replace(column, ' +', ' ')
```
|
Oracle update multiple spaces in a column with a single space
|
[
"",
"sql",
"oracle",
""
] |
Got a simple question. This is apparently not valid T-SQL syntax:
```
SELECT Col1
FROM SomeTable
WHERE Col2 IN
CASE WHEN someCondition THEN
('a', 'b')
ELSE
('c', 'd')
END
```
Doing some real simple things, like surrounding the `CASE WHEN...END` block in parentheses, adding `SELECT` keywords, cutting and pasting `Col2 IN` into each of the case handlers, etc. hasn't quite panned out too well. Honestly this looks like it should just work, but `CASE` statements can run into some awkward restrictions in SQL.
What's wrong with this syntax, and how can it be fixed? Is there any real reason the above syntax is broken (possibly leading to a small tweak that will fix the above), or is this just something that the language designers arbitrarily "forgot" to handle (probably requiring a fairly different approach than the above)?
Note that some applications of this would involve `someCondition` taking the following form:
```
(SELECT SomeCol FROM OtherTable WHERE id = n) = 'someVal'
```
I'm not sure whether duplicating that sort of subselect and comparison within the same query is liable to cause it to actually get called separately in each spot in SQL, but even if it's only called as many times as without duplication, it still feels bad writing it twice. It could be run once and stored in a variable, I suppose, but that's also extra code.
|
The result of a `CASE` must be a *single* value.
The RHS of `IN` can be the result of a `SELECT`, hence you could do something like:
```
declare @Which as Int = 0;
select *
from ( values ( 0 ), ( 1 ), ( 2 ), ( 3 ), ( 4 ), ( 5 ) ) as Things( Thing )
where Thing in (
select * from ( values ( 1 ), ( 2 ) ) as Bar( Foo ) where @Which = 0
union
select * from ( values ( 3 ), ( 4 ) ) as Bar( Foo ) where @Which = 1 )
```
|
```
SELECT Col1
FROM SomeTable
WHERE
( someCondition and Col2 in ('a','b') ) or ( not someCondition and Col2 in ('c','d') )
```
|
WHERE Col1 IN CASE WHEN someCondition THEN ('a', 'b') ELSE ('c', 'd') END
|
[
"",
"sql",
"t-sql",
""
] |
I need to write a select statement which returns a list of users where a list of SenderSubIDs exist in a field called constraint\_values:
I need a LIKE IN statement ideally:
```
SELECT SenderSubID FROM fix_user_subids WHERE SenderSubID IN ('**00390529MGERAN1**','**00912220PBALDIS**','**03994113LDAMBRO**','**04004308SLOMBAR**','**04935278CARELLI**','**4004308SLOMBARD**')
SELECT * FROM fix_dyno_rule_defs WHERE constraint_values LIKE '%**00390529MGERAN1**%'
```
Returns:
`rule_def_id tag msg_type required constraint_values constraint_type data_type default_value validation_type trans_type attribute_tag trans_tag memo
99800 10000 D,F,G 0 ((50,4,1,00390529MGERAN1)or(50,4,1,00912220PBALDIS)or(50,4,1,03994113LDAMBRO)or(50,4,1,04004308SLOMBAR)or(50,4,1,04935278CARELLI)or(50,4,1,4004308SLOMBARD))and(21,4,1,3)#STROP1#addattr(EQD,EQST) 0 1 #TAG=6506# 12 1800 0 0 Set EQD=1 for Equity Desk`
I wrote this:
```
SELECT * FROM fix_dyno_rule_defs WHERE constraint_values LIKE '%' + (SELECT MAX(SenderSubID) FROM fix_user_subids WHERE SenderSubID IN ('00390529MGERAN1','00912220PBALDIS','03994113LDAMBRO','04004308SLOMBAR','04935278CARELLI','4004308SLOMBARD')) +'%'
```
But need it without the MAX as there is a list...
|
You don't get your ideal. Use `or`:
```
SELECT SenderSubID
FROM fix_user_subid
WHERE SenderSubID like '%00390529MGERAN1%' OR
SenderSubID like '%00912220PBALDIS%' OR
SenderSubID like '%04004308SLOMBAR%' OR
SenderSubID like '%04935278CARELLI%' OR
SenderSubID like '%4004308SLOMBARD%'
```
|
This should help you. Try this:
Declare a temp table and store all the IDs in it:
```
declare @tempSenderIDs Table(
T_TableID varchar(10)
)
insert into @tempSenderIDs
(
T_TableID
)
select
SenderSubID
from
fix_user_subids
```
The final result will come from here:
```
select
f.*
from
fix_dyno_rule_defs f
where
exists (select 1 from @tempSenderIDs t where f.constraint_values like '%' + t.T_TableID + '%')
```
|
SQL Select Statement LIKE IN
|
[
"",
"sql",
""
] |
so I'm working on a Rails side project with a few friends of mine that requires a lot of seed data to be loaded. I have the seed data in a JSON document, which now contains just under 3200 JSON objects, each with the same 3 field names.
I've parsed and seeded this data in Rails, creating a new record for every JSON object. I should note that the object has a few `has_many` associations.
The issue is, after I've seeded the database with the JSON file, when I'm attempting to load all 3200 records into the page (using just a plain `ModelName.all.each` block), the load time is ridiculous...about `17826ms` (17.8 seconds).
According to the console, it says `(Views: 16252.4ms | ActiveRecord 1522.9ms)`. I've looked into implementing eager loading and pre-loading (and I still am working on it).
My question is, how can I optimize this further? I've tried `ModelName.preload(:associationName)` for pre-loading, but this doesn't seem to change the response time much at all (perhaps I'm doing it incorrectly). What boggles my mind is that 3200 records is not very much...why is this taking so long? Would incorporating something quicker like Redis be a viable option this early in the project?
|
You probably have an `n+1` issue: sure, you fetch all objects in the controller, but in the view you're accessing some relations that require extra lazy queries. To fix that you need to eager load the whole data, using the `includes` keyword:
```
@all_objects_with_associations = SomeObject.includes(:some_association).all
```
Another extra level would be using fragment caching, assuming you don't want to paginate (if you are using Rails 4 it's enabled by default). All you need to do is add a cache block at the top of the template, covering all variables that you think would require cache invalidation if changed:
```
cache @all_objects_with_associations do
```
To get more information you can check the following links
[Eager loading in rails](http://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations)
[Fragment caching in rails](http://guides.rubyonrails.org/caching_with_rails.html#fragment-caching)
|
It's the view that is truly killing the speed. Rendering many objects will of course take time. You should paginate your data and display a limited number of them at a time. That will take the strain off the database as well as the view.
Use "will\_paginate" for pagination.
Alternatively, you could implement "vertical pagination" i.e. make an AJAX call once the page is scrolled down to the bottom to fetch and append more results.
Furthermore, I would take a look at the view and ensure that no database queries were being made through it. If they were, I'd **try** to eliminate them with eager-loading or moving them to the controller somehow i.e. sending the data already to the view and not having the view fetch data.
|
Optimize Database Query Rails
|
[
"",
"sql",
"ruby-on-rails",
"json",
"database",
"activerecord",
""
] |
I have on table **PersonDetail** which contains NAME and AGE as column.
I am using one application which is taking all data as String by comma and pipe separated value like **'Acton,58|Nairi,20|Sara,14|Denny,52'** (Format is same as given)
I want to insert that data into the table (PersonDetail) but I don't know how I can separate it into NAME and AGE.
Someone suggested that I create a function which can separate the data, but I have no idea how to do it.
Can anyone give me a suggestion?
Thanks in advance :)
|
You can create a multi-statement table-valued function to separate the data.
You just need to insert each NAME and AGE into a table-type variable and, after inserting, return that table as given below.
```
CREATE FUNCTION UDF_InsertDataFromString
(
@dataString VARCHAR(5000)
)
RETURNS @insertedData TABLE
(
NAME VARCHAR(30),
AGE INT
)
AS
BEGIN
DECLARE @pipeIndex INT,
@commaIndex INT,
@LENGTH INT,
@NAME VARCHAR(100),
@AGE INT
SELECT @LENGTH = LEN(RTRIM(LTRIM(@dataString))),
@dataString = RTRIM(LTRIM(@dataString))
WHILE (@LENGTH <> 0)
BEGIN
SELECT @LENGTH = LEN(@dataString),
@commaIndex = CHARINDEX(',', @dataString),
@pipeIndex = CHARINDEX('|', @dataString)
IF(@pipeIndex = 0) SET @pipeIndex = @LENGTH +1
SELECT @NAME = RTRIM(LTRIM(SUBSTRING(@dataString, 1, @commaIndex-1))),
@AGE = RTRIM(LTRIM(SUBSTRING(@dataString, @commaIndex+1, @pipeIndex-@commaIndex-1))),
@dataString = RTRIM(LTRIM(SUBSTRING(@dataString, @pipeIndex+1, @LENGTH-@commaIndex-1)))
INSERT INTO @insertedData(NAME, AGE)
VALUES(@NAME, @AGE)
SELECT @LENGTH = LEN(@dataString)
END
RETURN
END
```
Now you can use this function while inserting data from string, you just need to pass string as parameter in function as given below.
```
DECLARE @personDetail TABLE(NAME VARCHAR(30), AGE INT)
INSERT INTO @personDetail(NAME, AGE)
SELECT NAME, AGE
FROM dbo.UDF_InsertDataFromString('Acton,58|Nairi,20|Sara,14|Denny,52')
SELECT NAME, AGE
FROM @personDetail
```
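Outside of T-SQL, the parsing logic the function implements is just two nested splits. A quick Python sketch of the same idea, for illustration only:

```python
def parse_person_details(data):
    # Split on '|' to get one "NAME,AGE" chunk per person,
    # then split each chunk on ',' into a (name, age) pair.
    rows = []
    for chunk in data.strip().split("|"):
        name, age = chunk.split(",")
        rows.append((name.strip(), int(age)))
    return rows

people = parse_person_details("Acton,58|Nairi,20|Sara,14|Denny,52")
# people holds [("Acton", 58), ("Nairi", 20), ("Sara", 14), ("Denny", 52)]
```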
|
Use this split function:
```
CREATE FUNCTION [dbo].[fnSplitString]
(
@string NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS @output TABLE(splitdata NVARCHAR(MAX)
)
BEGIN
DECLARE @start INT, @end INT
SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
WHILE @start < LEN(@string) + 1 BEGIN
IF @end = 0
SET @end = LEN(@string) + 1
INSERT INTO @output (splitdata)
VALUES(SUBSTRING(@string, @start, @end - @start))
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @string, @start)
END
RETURN
END
```
and then use this query
```
DECLARE @x table ( id int identity(1,1)
, str varchar(50) )
DECLARE @str varchar(50)='Acton,58|Nairi,20|Sara,14|Denny,52'
INSERT INTO @x
SELECT *
FROM [dbo].[fnSplitString] (@str ,'|')
SELECT *
from @x
DECLARE @y int=(SELECT count(*)
FROM @x)
DECLARE @e varchar(50)
DECLARE @b varchar(50)
DECLARE @c varchar(50)
WHILE @y!=0
BEGIN
set @e =(SELECT str
FROM @x
where id=@y)
set @b =(substring(@e,1,(charindex(',',@e)-1)))
set @c = (substring(@e,(charindex(',',@e)+1),len(@e)-charindex(',',@e)))
INSERT INTO PersonDetail
SELECT distinct @b
, @c
FROM @x
SET @y=@y-1
END
```
|
How to insert data from String in table?
|
[
"",
"sql",
"sql-server",
"function",
""
] |
I Have a staging table which gets updated as part of scheduled batch script.
Let's call this table Staging\_Table.
Now on a daily basis I update a tabled called Product\_Table with entries from Staging\_Table.
I need to delete the rows in Products\_Table which do not have entries from the Staging table.
Now, to uncomplicate things: the staging table holds around 97,000 records while the products table has only 7,000. However, on a daily basis the entries in the staging table go up by 97,000. I have a key for these products called TDC\_IDP\_ID....
So i have this query which seems to be taking forever to execute...
```
DELETE FROM Product_Table
WHERE PRODUCT_TD_PARTCODE NOT IN ( SELECT TDC_TD_PARTCODE FROM Staging_Table WHERE TDC_IDP_ID = @TDC_IDP_ID )
```
Now the inner query has 97,000 records. How can I optimise this query (to at least get it to run), or is there another way to go about this? Instead of the select I tried the following query and it is still running as I type this question. It has been running for 11 minutes now....
```
SELECT COUNT(*)
FROM Product_Table
WHERE PRODUCT_TD_PARTCODE NOT IN ( SELECT TDC_TD_PARTCODE FROM Staging_Table WHERE TDC_IDP_ID = @TDC_IDP_ID )
```
|
First, rephrase the query using `not exists`:
```
DELETE FROM Product_Table
WHERE NOT EXISTS (SELECT 1
FROM Staging_Table st
WHERE st.TDC_IDP_ID = @TDC_IDP_ID AND
st.TDC_TD_PARTCODE = product_table.PRODUCT_TD_PARTCODE
);
```
Then you want an index on the staging table:
```
create index idx_Staging_Table_2 on Staging_Table(TDC_TD_PARTCODE, TDC_IDP_ID);
```
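A runnable miniature of the `NOT EXISTS` anti-join delete, using Python's built-in `sqlite3` with toy tables and shortened names (not the poster's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (partcode TEXT)")
conn.execute("CREATE TABLE staging (partcode TEXT, order_id INTEGER)")
conn.executemany("INSERT INTO product VALUES (?)", [("A",), ("B",), ("C",)])
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [("A", 1733), ("B", 1733), ("B", 999)])

# Anti-join delete: remove products with no staging row for this order.
# The subquery is correlated on partcode, just like the answer's NOT EXISTS.
conn.execute(
    "DELETE FROM product WHERE NOT EXISTS ("
    "  SELECT 1 FROM staging s"
    "  WHERE s.order_id = 1733 AND s.partcode = product.partcode)"
)
remaining = [r[0] for r in
             conn.execute("SELECT partcode FROM product ORDER BY partcode")]
# remaining is ["A", "B"]: "C" had no staging row for order 1733
```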
|
Use **LEFT JOIN** instead of **NOT IN**
Try this:
```
SELECT COUNT(*)
FROM Product_Table PT
LEFT OUTER JOIN Staging_Table ST ON PT.PRODUCT_TD_PARTCODE = ST.TDC_TD_PARTCODE AND ST.TDC_IDP_ID = @TDC_IDP_ID
WHERE ST.TDC_TD_PARTCODE IS NULL
DELETE PT
FROM Product_Table PT
LEFT OUTER JOIN Staging_Table ST ON PT.PRODUCT_TD_PARTCODE = ST.TDC_TD_PARTCODE AND ST.TDC_IDP_ID = @TDC_IDP_ID
WHERE ST.TDC_TD_PARTCODE IS NULL
```
|
How do I get a records for a table that do not exist in another table?
|
[
"",
"sql",
"select",
"subquery",
"sql-delete",
"notin",
""
] |
I have `'param1, param2, param3'` coming from SSRS to a stored procedure as a `varchar` parameter: I need to use it in a query's `IN` clause but then need to change its format like this first:
```
select *
from table1
where col1 in('param1', 'param2', 'param3')
```
What is the best way to reformat the parameter without creating functions and parameter tables?
|
Try this one, Just need to add commas at the beginning and at the end of @params string.
```
Declare @params varchar(100) Set @params = ',param1,param2,param3,'
Select * from t
where CHARINDEX(','+cast(col1 as varchar(8000))+',', @params) > 0
```
[SQL FIDDLE](http://sqlfiddle.com/#!3/71c5d/1)
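The trick here is wrapping both the parameter list and each value in commas, so that `param1` cannot accidentally match inside a longer token like `param12`. The same idea sketched in Python, purely for clarity:

```python
def in_param_list(value, params):
    # Wrap both sides in commas so only whole tokens can match,
    # mirroring CHARINDEX(',' + cast(col1 as varchar(8000)) + ',', @params) > 0.
    return ("," + value + ",") in ("," + params.strip(",") + ",")

params = "param1,param2,param3"
match = in_param_list("param2", params)   # whole token: found
partial = in_param_list("param", params)  # partial token: not found
```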
|
You can use a split function in the following way.
Here my split function fnSplitString returns splitdata:
```
select * from tb1 where id in(select splitdata from dbo.fnSplitString((select col1 from tb12 where id=3),','))
create FUNCTION [dbo].[fnSplitString]
(
@string NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS @output TABLE(splitdata NVARCHAR(MAX)
)
BEGIN
DECLARE @start INT, @end INT
SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
WHILE @start < LEN(@string) + 1 BEGIN
IF @end = 0
SET @end = LEN(@string) + 1
INSERT INTO @output(splitdata)
VALUES(SUBSTRING(@string, @start, @end - @start))
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @string, @start)
END
RETURN
END
```
|
Using a comma-separated parameter in an IN clause
|
[
"",
"sql",
"t-sql",
"stored-procedures",
"reporting-services",
"parameters",
""
] |
I was wondering if it is possible to select all unique rows (by doing a group by columnname) where another column doesn't contain specific values. EG:
```
users table
+-------+-----------+
| NAME | COUNTRY |
+-------+-----------+
| Bob | Australia |
+-------+-----------+
| Bob | America |
+-------+-----------+
| Steve | Australia |
+-------+-----------+
| Sam | Australia |
+-------+-----------+
| Sam | America |
+-------+-----------+
```
What I'm looking to do is get all rows grouped by NAME where COUNTRY doesn't contain America.
**PSEUDO-SQL**: SELECT \* FROM users GROUP BY Name WHERE COUNTRY NOT LIKE 'America' (obviously this doesn't work but it's for clarification)
Output from above:
```
+-------+-----------+
| NAME | COUNTRY |
+-------+-----------+
| Steve | Australia |
+-------+-----------+
```
Can this be done or am I approaching this the wrong way?
**EDIT**: I am unsure of how to word the title, so if someone has a better suggestion please update it/let me know.
|
You should not use `select *` with `group by`. Perhaps you want:
```
SELECT name, country
FROM users
WHERE COUNTRY NOT LIKE 'America'
```
Or if you want `name`s that are not in `'America'`:
```
SELECT name
FROM users
GROUP BY name
HAVING SUM(COUNTRY LIKE 'America') = 0;
```
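To see the `HAVING` version in action, here is a small runnable sketch using Python's built-in `sqlite3` (SQLite, like MySQL, treats the `LIKE` comparison as 0/1, so the same aggregate trick works):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Bob", "Australia"), ("Bob", "America"),
     ("Steve", "Australia"),
     ("Sam", "Australia"), ("Sam", "America")],
)

# LIKE yields 0/1 per row, so a group qualifies only when no row matched.
rows = conn.execute(
    "SELECT name FROM users"
    " GROUP BY name"
    " HAVING SUM(country LIKE 'America') = 0"
).fetchall()
# only Steve has no 'America' row
```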
|
The following query should work.
```
SELECT DISTINCT name, country FROM table WHERE country != 'America'
```
|
SQL query to effectively have a "WHERE CLAUSE" after GROUP BY
|
[
"",
"mysql",
"sql",
""
] |
I am trying to understand how to use `if` in SQL (in MySQL).
I wrote this source code:
```
$query = "
IF NOT EXISTS(SELECT * FROM sec WHERE CHROM='1' AND POS='2')
INSERT INTO sec (CHROM, POS) VALUES ('1','2')
END IF
";
mysqli_query($config, $query) or die('Eror: ' . mysqli_error($config));
```
and I got this error:
```
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'IF NOT EXISTS(SELECT * FROM sec WHERE CHROM='1' AND POS='2') INSERT INTO se' at line 1
```
How should I correct the error?
|
To answer your question if you really want to use an IF-statement:
```
IF NOT EXISTS(SELECT * FROM sec WHERE CHROM='1' AND POS='2')
BEGIN
INSERT INTO sec (CHROM, POS) VALUES ('1','2')
END
```
|
There is no need for IF here. You can simply do:
```
INSERT IGNORE INTO sec (CHROM, POS) VALUES ('1','2')
```
The `IGNORE` keyword will tell MySQL to treat duplicate key errors as mere warnings.
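A quick way to see the effect is with SQLite's analogous `INSERT OR IGNORE`, via Python's built-in `sqlite3` (used here only because it ships with Python; MySQL's `INSERT IGNORE` handles the duplicate-key case the same way, and in both engines the duplicate is only skipped when a unique constraint exists):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The unique constraint is what turns the second insert into a duplicate-key case.
conn.execute("CREATE TABLE sec (CHROM TEXT, POS TEXT, UNIQUE (CHROM, POS))")

conn.execute("INSERT OR IGNORE INTO sec (CHROM, POS) VALUES ('1', '2')")
conn.execute("INSERT OR IGNORE INTO sec (CHROM, POS) VALUES ('1', '2')")  # skipped, no error

count = conn.execute("SELECT COUNT(*) FROM sec").fetchone()[0]
# the duplicate was ignored instead of raising an error
```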
|
How to use IF in SQL
|
[
"",
"mysql",
"sql",
""
] |
I'm trying to compute a 28 day moving sum in BigQuery using the LAG function.
The top answer to this question
[Bigquery SQL for sliding window aggregate](https://stackoverflow.com/questions/22415647/bigquery-sql-for-sliding-window-aggregate)
from Felipe Hoffa indicates that that you can use the LAG function. An example of this would be:
```
SELECT
spend + spend_lagged_1day + spend_lagged_2day + spend_lagged_3day + ... + spend_lagged_27day as spend_28_day_sum,
user,
date
FROM (
SELECT spend,
LAG(spend, 1) OVER (PARTITION BY user ORDER BY date) spend_lagged_1day,
LAG(spend, 2) OVER (PARTITION BY user ORDER BY date) spend_lagged_2day,
LAG(spend, 3) OVER (PARTITION BY user ORDER BY date) spend_lagged_3day,
...
LAG(spend, 28) OVER (PARTITION BY user ORDER BY date) spend_lagged_day,
user,
date
FROM user_spend
)
```
Is there a way to do this without having to write out 28 lines of SQL!
|
The BigQuery documentation doesn't do a good job of explaining the complexity of window functions that the tool supports because it doesn't specify what expressions can appear after ROWS or RANGE. It actually supports the SQL 2003 standard for window functions, which you can find documented other places on the web, such as [here](http://msdn.microsoft.com/en-us/library/ms189461.aspx).
That means you can get the effect you want with a single window function. The range is 27 because it's how many rows before the current one to include in the sum.
```
SELECT spend,
SUM(spend) OVER (PARTITION BY user ORDER BY date ROWS BETWEEN 27 PRECEDING AND CURRENT ROW),
user,
date
FROM user_spend;
```
A RANGE bound can also be extremely useful. If your table was missing dates for some user, then 27 PRECEDING rows would go back more than 27 days, but RANGE will produce a window based on the date values themselves. In the following query, the date field is a BigQuery TIMESTAMP and the range is specified in microseconds. I'd advise that whenever you do date math like this in BigQuery, you test it thoroughly to make sure it's giving you the expected answer.
```
SELECT spend,
SUM(spend) OVER (PARTITION BY user ORDER BY date RANGE BETWEEN 27 * 24 * 60 * 60 * 1000000 PRECEDING AND CURRENT ROW),
user,
date
FROM user_spend;
```
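The difference between ROWS and RANGE matters only when days are missing. Here is a small pure-Python sketch of the date-based (`RANGE`-style) rolling sum, for intuition: the window is defined by the date values themselves, not by counting rows (toy data, not from the question):

```python
from datetime import date, timedelta

def rolling_28_day_sum(rows):
    # rows: (date, spend) pairs for one user, sorted by date; gaps allowed.
    # For each row, sum spend over the 27 days before it plus the day itself,
    # i.e. RANGE BETWEEN 27 PRECEDING AND CURRENT ROW over the date value.
    out = []
    for d, _ in rows:
        window_start = d - timedelta(days=27)
        out.append((d, sum(s for d2, s in rows if window_start <= d2 <= d)))
    return out

rows = [(date(2024, 1, 1), 10), (date(2024, 1, 15), 5), (date(2024, 2, 20), 7)]
sums = rolling_28_day_sum(rows)
# Jan 15 still "sees" Jan 1 (14 days earlier), but Feb 20 sees neither.
```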
|
BigQuery: how to get a rolling time range in a window clause.
This is an old post, but I spent a long time searching for a solution, and this post came up, so maybe this will help someone.
IF the partition of your window clause does not have a record for every day, you need to use the RANGE clause to accurately get a rolling time range (ROWS would count the number of records, which would go back too far since you don't have a record for every day in your PARTITION BY). The problem is that in BigQuery the RANGE clause does not support dates.
From BigQuery's documentation:
> numeric\_expression must have numeric type. DATE and TIMESTAMP are not currently supported. In addition, the numeric\_expression must be a constant, non-negative integer or a parameter.
The workaround I found was to use the UNIX\_DATE(date\_expression) in the ORDER BY clause along with a RANGE clause:
`SUM(value) OVER (PARTITION BY Column1 ORDER BY UNIX_DATE(Date) RANGE BETWEEN 5 PRECEDING AND CURRENT ROW`
|
BigQuery SQL for 28-day sliding window aggregate (without writing 28 lines of SQL)
|
[
"",
"sql",
"google-bigquery",
"sliding-window",
""
] |
I have a table like below
```
CompanyNumber Name Status other column1 other Column2
10 A Active
10 A NULL
11 B Active
11 B NULL
12 C Active
12 C NULL
13 D NULL
14 E Active
...
```
So over 300 000 rows like this.
I would like to delete the one that has status NULL and the resulting table should be like below :
```
CompanyNumber Name Status
10 A Active
11 B Active
12 C Active
13 D NULL
14 E Active
```
|
```
;WITH CTE AS
(
SELECT CompanyNumber
,Name
,[Status]
,ROW_NUMBER() OVER (PARTITION BY CompanyNumber, Name ORDER BY [Status] DESC) rn
FROM @TABLE
)
DELETE FROM CTE WHERE rn > 1
```
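SQLite cannot delete through a CTE, but the same `ROW_NUMBER()` idea can be exercised end to end with Python's built-in `sqlite3` by deleting via `rowid` instead (a sketch of the equivalent logic, not the T-SQL above; requires SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (num INTEGER, name TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO company VALUES (?, ?, ?)",
    [(10, "A", "Active"), (10, "A", None),
     (11, "B", "Active"), (11, "B", None),
     (13, "D", None), (14, "E", "Active")],
)

# Keep one row per (num, name); ordering by status DESC puts non-NULL first
# (NULLs sort last under DESC in SQLite), so the NULL duplicate gets rn = 2.
conn.execute("""
    DELETE FROM company WHERE rowid IN (
        SELECT rowid FROM (
            SELECT rowid,
                   ROW_NUMBER() OVER (PARTITION BY num, name
                                      ORDER BY status DESC) AS rn
            FROM company)
        WHERE rn > 1)
""")
rows = conn.execute("SELECT num, name, status FROM company ORDER BY num").fetchall()
# the NULL duplicates for 10/A and 11/B are gone; the lone NULL for 13/D survives
```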
|
I get that you want to delete rows where status is null. Try SQL with a where clause like:
```
DELETE
FROM mytable
WHERE status is null
```
If you wish to remove only duplicate rows then you could do something like:
```
DELETE
FROM mytable
WHERE status is null
AND CompanyNumber IN (SELECT CompanyNumber
FROM mytable
GROUP BY CompanyNumber
HAVING COUNT(CompanyNumber) > 1)
```
|
How to delete duplicates with condition in SQL
|
[
"",
"sql",
"sql-server",
""
] |
I want to work out the male/female split of my customer based on the person's title (Mr, Mrs, etc)
To do this I need to combine the result for the Miss/Mrs/Ms into a 'female' field.
The query below gets the totals per title but I think I need a sub query to return the combined female figure.
Any help would be greatly appreciated.
```
Query:
SELECT c.Title, COUNT(*) as Count
FROM
Customers c
GROUP BY Title
ORDER By [Count] DESC
Answer:
Mr 1903
Miss 864
Mrs 488
Ms 108
```
|
You could do it like this
```
SELECT
[Gender] = CASE [Title] WHEN 'Mr' THEN 'M' ELSE 'F' END,
COUNT(*) as Count
FROM
Customers c
GROUP BY
CASE [Title] WHEN 'Mr' THEN 'M' ELSE 'F' END
ORDER By
[Count] DESC
```
Demo at <http://sqlfiddle.com/#!3/05c74/4>
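The same `CASE`-based grouping can be tried end to end with Python's built-in `sqlite3` (a small sketch with made-up counts, not the poster's data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Title TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?)",
                 [("Mr",)] * 3 + [("Miss",)] * 2 + [("Mrs",), ("Ms",)])

# Map every non-'Mr' title to 'F', then group on the mapped value.
rows = conn.execute(
    "SELECT CASE Title WHEN 'Mr' THEN 'M' ELSE 'F' END AS Gender,"
    "       COUNT(*) AS Count"
    " FROM Customers"
    " GROUP BY CASE Title WHEN 'Mr' THEN 'M' ELSE 'F' END"
    " ORDER BY Count DESC"
).fetchall()
# the four female titles collapse into one 'F' group
```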
|
You can use CASE to project the new groups for the Titles:
```
SELECT SUM(CASE WHEN Title IN ('Mr') THEN 1 ELSE 0 END) AS Male,
SUM(CASE WHEN Title IN ('Miss', 'Ms', 'Mrs') THEN 1 ELSE 0 END) AS Female
FROM
Customers c;
```
|
Combining results before grouping in SQL
|
[
"",
"sql",
"sql-server",
"t-sql",
"select",
"group-by",
""
] |
I am breaking my head over a simple subquery. I have the following query which runs for 30+ seconds:
```
SELECT DISTINCT SUBSTRING( 6pp, 1, 4 ) AS postcode
FROM 6pp
WHERE gemeenteID
IN (
SELECT gebiedID
FROM tmp_orderimport
WHERE typeGebied = 'Gemeente'
AND idorder =1733
AND STATUS = TRUE );
```
Explain: *(screenshot of the EXPLAIN output not preserved)*
If I break the query into 2 queries and first run the IN part, the whole thing takes no more than a second. But of course I prefer to use one query. The tables are in MyISAM. Any suggestions?
# Update:
Following Gordon Linoff's example. I changed the SQL to:
```
SELECT DISTINCT SUBSTRING( 6pp, 1, 4 ) AS postcode
FROM `6pp`
WHERE EXISTS (SELECT 1
FROM tmp_orderimport oi
WHERE oi.gebiedID = `6pp`.gemeenteID AND
typeGebied = 'Gemeente' AND idorder = 1733 AND STATUS = TRUE
);
```
And added an index on tmp\_orderimport(gebiedID, typeGebied, idorder, status). Now the query runs under 4 seconds.
New explain: *(screenshot of the new EXPLAIN output not preserved)*
|
Try using `exists` instead:
```
SELECT DISTINCT SUBSTRING( 6pp, 1, 4 ) AS postcode
FROM `6pp`
WHERE EXISTS (SELECT 1
FROM tmp_orderimport oi
WHERE oi.gebiedID = `6pp`.gemeenteID AND
typeGebied = 'Gemeente' AND idorder = 1733 AND STATUS = TRUE
);
```
You can also speed this up with an index on `tmp_orderimport(gebiedID, typeGebied, idorder, status)`.
MySQL can be inefficient (sometimes and depending on the version) when using `IN` with a subquery. `EXISTS` usually fixes the problem. The specific problem is that the subquery is run for each comparison. When you create a temporary table, you circumvent that.
|
Use **INNER JOIN** instead of **IN operator**
Try this:
```
SELECT DISTINCT SUBSTRING(a.6pp, 1, 4) AS postcode
FROM 6pp a
INNER JOIN tmp_orderimport b ON a.gemeenteID = b.gebiedID
WHERE b.typeGebied = 'Gemeente' AND b.idorder =1733 AND b.STATUS = TRUE
```
|
mySQL subquery slower than 2 separate queries
|
[
"",
"mysql",
"sql",
"select",
"subquery",
"query-optimization",
""
] |