| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I am just trying to add 1 hour to a date value. Where and why I am doing this is somewhat complicated, but basically I just need a query something like this:
```
select DATE_ADD(hh,1,'2014-10-15 03:30:00 pm') from dual
```
I keep reading old articles that say to use dateAdd or date\_add but I keep getting invalid identifier errors.
|
```
select sysdate + 1/24 from dual;
```
**sysdate** is a zero-argument function that returns a value of `DATE` type
**+ 1/24** adds 1 hour to a date, because Oracle date arithmetic works in days
```
select to_char(to_date('2014-10-15 03:30:00 pm', 'YYYY-MM-DD HH:MI:SS pm') + 1/24, 'YYYY-MM-DD HH:MI:SS pm') from dual;
```
|
Use an interval:
```
select some_date_column + interval '1' hour
from your_table;
```
|
Oracle Add 1 hour in SQL
|
[
"sql",
"oracle"
] |
I have a table called amounts that looks like this:
```
// structure 1
+-----+-----+-----+-----+-----+-----+-----+
| MON | TUE | WED | THU | FRI | SAT | SUN |
+-----+-----+-----+-----+-----+-----+-----+
| 1 | 6 | 3 | 1 | 1 | 3 | 0 |
+-----+-----+-----+-----+-----+-----+-----+
```
I need to grab the `MAX()` value from the row above (the table only has one row).
How could I `PIVOT` the table so that it reads like this:
```
// structure 2
+-----+-----+
| DAY | AMT |
+-----+-----+
| MON | 1 |
+-----+-----+
| TUE | 6 |
+-----+-----+
| WED | 3 |
+-----+-----+
| THU | 1 |
+-----+-----+
| FRI | 1 |
+-----+-----+
| SAT | 3 |
+-----+-----+
| SUN | 0 |
+-----+-----+
```
... and then select the `MAX()` value of the `AMT` column:
```
SELECT MAX(AMT)
FROM (
SELECT * FROM amounts
) AS amts
PIVOT (
AMT FOR * IN ... // got stuck here, pivots are confusing
) AS highest_amt
```
Is there a more efficient way of doing this?
|
You don't need to unpivot the results. In your case, where the results come from a reporting view, unpivoting could produce an inefficient execution plan.
You can use the `VALUES` constructor to create a table of values from the view's columns then get the maximum value like this:
```
select MAX(dayvalues.v)
from
amounts
CROSS APPLY (VALUES (amounts.MON),(amounts.TUE),(amounts.WED),(amounts.THU),
(amounts.FRI),(amounts.SAT),(amounts.SUN)
) dayvalues(v)
```
`VALUES` will create a table from the values found in `amounts` then `MAX` will select the maximum value. `CROSS APPLY` applies the table-valued function on its right (VALUES) on every row found on the left.
This means that the underlying table (or view) is scanned only once.
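If the day label is wanted along with the maximum (structure 2, top row only), the same `VALUES` list can carry it; a sketch, with `dayname` as an illustrative alias:
```
SELECT TOP (1) dayvalues.dayname, dayvalues.v AS amt
FROM amounts
CROSS APPLY (VALUES
    ('MON', amounts.MON), ('TUE', amounts.TUE), ('WED', amounts.WED),
    ('THU', amounts.THU), ('FRI', amounts.FRI), ('SAT', amounts.SAT),
    ('SUN', amounts.SUN)
) dayvalues(dayname, v)
ORDER BY dayvalues.v DESC;
```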
|
Actually, it is `unpivot`, not `pivot`!
**Pivot - rows to column**
**UnPivot - column to row**
try this..
```
CREATE TABLE #temp
(
MON INT,
TUE INT,
WED INT,
THU INT,
FRI INT,
SAT INT,
SUN INT
)
INSERT INTO #temp
VALUES (1,2,3,4,5,6,7)
SELECT *
FROM #temp
;WITH cte
AS (SELECT value,
col,
Row_number()
OVER(
ORDER BY value DESC) rn
FROM #temp
UNPIVOT ( value
FOR col IN (MON,
TUE,
WED,
THU,
FRI,
SAT,
SUN ) ) unpiv)
SELECT col AS day,
       value AS amt
FROM cte
WHERE rn = 1
```
|
SELECT MAX() value from row data converted into column data using UNPIVOT and SELECT MAX()
|
[
"sql",
"sql-server",
"pivot",
"max"
] |
I want to find out how many users have permission to create a table in my Oracle 11g database by using meta data.
What kind of query should I write? (I don't want it to be very complicated).
|
All users and roles who have such a privilege:
```
select *
from dba_sys_privs
where privilege = 'CREATE TABLE' or privilege = 'CREATE ANY TABLE';
```
You also need the DBA\_ROLE\_PRIVS view, because a user can receive the privilege indirectly, through a role.
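A sketch of that role-based check (covering directly granted roles only; nested role grants need the recursive query in the other answer):
```
select rp.grantee, sp.privilege
from dba_role_privs rp
join dba_sys_privs sp on sp.grantee = rp.granted_role
where sp.privilege in ('CREATE TABLE', 'CREATE ANY TABLE');
```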
|
```
select count(distinct usr)
from (
select auth, tp, connect_by_root auth usr, connect_by_root tp tp_usr
from (
select null grantee, username auth, 'user' tp from dba_users
union
select grantee, granted_role, 'role' from dba_role_privs
union
select grantee, privilege, 'priv' from dba_sys_privs
)
start with grantee is null
connect by grantee = prior auth
) where tp_usr = 'user' and auth in ('CREATE TABLE', 'CREATE ANY TABLE');
```
This query also scans granted roles recursively. However, I'm still not sure that it gives all the users who can create a table.
|
Finding how many users have table creation privileges
|
[
"sql",
"oracle",
"oracle11g"
] |
```
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME ASC
```
I am using this code to print the table names of a db. What I want to do is print the table name and the column names in each table. Can I do this by nesting a statement?
This code is being run on a SQL Server in a query window.
I tried this
```
SELECT COL_NAME
FROM
(SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME ASC)
```
Any ideas?
|
This should do it:
```
SELECT C.TABLE_NAME, C.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS C
WHERE EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES T
              WHERE T.TABLE_TYPE='BASE TABLE'
                AND C.TABLE_SCHEMA=T.TABLE_SCHEMA -- match the schema too, so same-named tables in different schemas don't collide
                AND C.TABLE_NAME=T.TABLE_NAME)
ORDER BY C.TABLE_NAME, C.COLUMN_NAME
```
|
The **INFORMATION\_SCHEMA** views were first introduced in SQL Server 2005.
> These views are mainly created to get the metadata like table name,
> column name, datatype of columns etc. about tables, columns, views,
> domains etc.
Each and every database contains these views. If you want to check what's going on behind the scenes, you can inspect the logic of these views with **sp\_helptext**, like:
> sp\_helptext INFORMATION\_SCHEMA.COLUMNS
Using the above views you can get your desired result; see the query below.
```
SELECT T.TABLE_NAME,C.COLUMN_NAME,C.DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS C
INNER JOIN INFORMATION_SCHEMA.TABLES T ON C.TABLE_NAME = T.TABLE_NAME
AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
WHERE T.TABLE_TYPE = 'BASE TABLE'
```
|
SQL statement to print table names and their column names
|
[
"sql",
"sql-server"
] |
I think I am having a serious issue managing the database connection pool in Go. I built a RESTful API using the Gorilla web toolkit, which works great when only a few requests are sent to the server. But then I started load testing using the loader.io site. I apologize for the long post, but I wanted to give you the full picture.
Before going further, here are some info on the server running the API and MySQL:
* Dedicated Linux hosting
* 8 GB RAM
* Go version 1.1.1
* Database connectivity using go-sql-driver
* MySQL 5.1
Using loader.io I can send 1000 GET requests/15 seconds without problems. But when I send 1000 POST requests/15 seconds I get lots of errors, all of which are ERROR 1040: too many database connections. Many people have reported similar issues online. Note that I am only testing one specific POST request for now. For this POST request I ensured the following (which was also suggested by many others online):
1. I made sure not to Open and Close the \*sql.DB in short-lived functions. So I created a single global variable for the connection pool, as you see in the code below, although I am open to suggestions here because I do not like using global variables.
2. I made sure to use db.Exec when possible and only use db.Query and db.QueryRow when results are expected.
Since the above did not solve my problem, I tried to set db.SetMaxIdleConns(1000), which solved the problem for 1000 POST requests/15 seconds. Meaning no more 1040 errors. Then I increased the load to 2000 POST requests/15 seconds and I started getting ERROR 1040 again. I tried to increase the value in db.SetMaxIdleConns() but that did not make a difference.
Here are some connection statistics I get from the MySQL database on the number of connections, by running SHOW STATUS WHERE `variable_name` = 'Threads\_connected';
For 1000 POST requests/15 seconds: observed #threads\_connected ~= 100
For 2000 POST requests/15 seconds: observed #threads\_connected ~= 600
I also increased the maximum connections for MySQL in my.cnf, but that did not make a difference. What do you suggest? Does the code look fine? If so, then the problem is probably just the connection limit.
You will find a simplified version of the code below.
```
var db *sql.DB
func main() {
db = DbConnect()
db.SetMaxIdleConns(1000)
http.Handle("/", r)
err := http.ListenAndServe(fmt.Sprintf("%s:%s", API_HOST, API_PORT), nil)
if err != nil {
fmt.Println(err)
}
}
func DbConnect() *sql.DB {
db, err := sql.Open("mysql", connectionString)
if err != nil {
fmt.Printf("Connection error: %s\n", err.Error())
return nil
}
return db
}
func PostBounce(w http.ResponseWriter, r *http.Request) {
userId, err := AuthRequest(r)
//error checking
//read request body and use json.Unmarshal
bounceId, err := CreateBounce(userId, b)
//return HTTP status code here
}
func AuthRequest(r *http.Request) (id int, err error) {
//parse header and get username and password
query := "SELECT Id FROM Users WHERE Username=? AND Password=PASSWORD(?)"
err = db.QueryRow(query, username, password).Scan(&id)
//error checking and return
}
func CreateBounce(userId int, bounce NewBounce) (bounceId int64, err error) {
//initialize some variables
query := "INSERT INTO Bounces (.....) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
result, err := db.Exec(query, ......)
//error checking
bounceId,_ = result.LastInsertId()
//return
}
```
|
Go `database/sql` doesn't prevent you from creating an infinite number of connections to the database. If there is an idle connection in the pool, it will be used, otherwise a new connection is created.
So, under load, your request handlers' sql.DB is probably finding no idle connections, and new connections are created when needed. This churns for a bit (reusing idle connections when possible and creating new ones when needed), ultimately reaching the database's maximum number of connections. And, unfortunately, in Go 1.1 there isn't a convenient way (e.g. [SetMaxOpenConns](http://golang.org/pkg/database/sql/#DB.SetMaxOpenConns)) to limit open connections.
Upgrade to a newer version of Go. In [Go 1.2+](https://golang.org/doc/go1.2) you get [SetMaxOpenConns](http://golang.org/pkg/database/sql/#DB.SetMaxOpenConns). And [check the MySQL docs for a starting setting](http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html), then tune.
```
db.SetMaxOpenConns(100) //tune this
```
If you must use Go 1.1 you'll need to ensure in your code that `*sql.DB` is only being used by N clients at a time.
|
@MattSelf's proposed solution is correct, but I ran into other issues. Here I highlight exactly what I did to solve the problem (by the way, the server runs CentOS).
1. Since I have a dedicated server I increased the max\_connections for MySQL
In /etc/my.cnf I added the line max\_connections=10000. Although, that is more connections than what I need.
2. Restart MySQL: service mysql restart
3. Changed the ulimit -n. That is to increase the number of open file descriptors.
To do that I made changes to two files:
In /etc/sysctl.conf I added the line
```
fs.file-max = 65536
```
In /etc/security/limits.conf I added the following lines:
```
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
```
4. Reboot your server
5. Upgraded Go to 1.3.3 as suggested by @MattSelf
6. Set
```
db.SetMaxOpenConns(10000)
```
Again the number is too large for what I need, but this proved to me that things worked.
7. I ran a test using loader.io which consists of 5000 clients each sending POST request all within 15 seconds. All went through without errors.
|
Golang RESTful API load testing causing too many database connections
|
[
"mysql",
"sql",
"go",
"gorilla"
] |
Consider the following tables
```
=# \d users
Column | Type
--------+-----------------------
id | integer
name | character varying(32)
=# \d profiles
Column | Type
---------+---------
id | integer
user_id | integer
=# \d views
Column | Type
------------+-----------------------------
id | integer
profile_id | integer
time | timestamp without time zone
```
I need to find all users with an associated view in each month of a given date range. Currently I am doing the following:
```
with months as (
select to_char(month, 'MM/YYYY') from generate_series('2014-07-01', '2014-09-01', INTERVAL '1 month') as month
)
select * from users
join profiles on user_id = users.id
join views on profile_id = profiles.id
and to_char(views.time, 'MM/YYYY') in (select * from months)
```
I have setup a fiddle [here](http://sqlfiddle.com/#!12/8154c/3).
Currently the results include the user *Kyle*, who had no views in August and September. The correct result should only include the user *Stan*, who had views in all 3 months of the given range. How do we modify this query to return the desired result?
|
You seem to have an extended relational division, i.e. you're looking for users who had views in the given range only, although they might have views outside the range of interest also.
Along with `GROUP BY`, you can check this via an `EXCEPT` construct. Basically, if you subtract the user's view months (within the given range) from all months in your range, you should receive no rows:
```
WITH months(month) AS (
SELECT DATE '2014-07-01' + m*INTERVAL'1mon'
FROM generate_series(0,2) m
)
SELECT *
FROM users u
JOIN profiles p ON p.user_id=u.id
JOIN views v ON v.profile_id=p.id
WHERE 0 = (SELECT count(*) FROM (
SELECT month FROM months
EXCEPT ALL
SELECT date_trunc('mon',time) FROM views
WHERE date_trunc('mon',time) IN (SELECT * FROM months)
AND profile_id=p.id) minus);
```
You can slightly simplify this via the `= ALL` construct, as it returns `true` in the case where the subquery returns no rows:
```
WITH months(month) AS (
SELECT DATE '2014-07-01' + m*INTERVAL'1mon'
FROM generate_series(0,2) m
)
SELECT *
FROM users u
JOIN profiles p ON p.user_id=u.id
JOIN views v ON v.profile_id=p.id
WHERE date_trunc('mon',time) = ALL (
SELECT month FROM months
EXCEPT ALL
SELECT date_trunc('mon',time) FROM views
WHERE date_trunc('mon',time) IN (SELECT * FROM months)
AND profile_id=p.id);
```
A quote from the manual on [`ALL`](http://www.postgresql.org/docs/current/interactive/functions-subquery.html#FUNCTIONS-SUBQUERY-ALL):
> The result of ALL is "true" if all rows yield true
> **(including the case where the subquery returns no rows)**.
Both my queries are effectively the same. The first one counts the number of rows on the inner side and compares that to zero (and I agree, this is more obvious). The second one compares the current `views.time` to all the results of the subquery. This construct yields true only if all entries returned by the subquery equal `views.time` (truncated to the month boundary, of course). And, as quoted, this construct also yields true if the subquery returns no rows.
And by intent, subquery should yield no rows, which indicates that all views happened within the desired time range.
Check on SQL Fiddle.
|
Maybe this will be enough (I don't know Postgresql)
```
select u.id, u.name from users u
join profiles on user_id = users.id
join views on profile_id = profiles.id
and views.time between ? and ?
group by u.id, u.name
having count(distinct to_char(views.time, 'MM/YYYY')) = 3;
```
|
SQL: find records having data for each month in a given date range
|
[
"sql",
"postgresql",
"relational-division"
] |
Let's say I have a table like this, called Inventory:
```
**Fruit** **Price**
orange 4
grapefruit 10
banana 15
```
I would like to write a select statement that would show the output like this:
```
**Fruit** **Price**
citrus 14
banana 15
```
Still new to SQL, so I can't figure out the SELECT statement for this. If this is the SELECT statement that gives me the first set of results above (itemized fruit), how can I change it so that it combines oranges and grapefruits and displays the result as citrus?
```
SELECT
INV.fruit,
INV.price
FROM
INVENTORY INV
```
|
You can use a `case` expression to translate oranges and grapefruits to citrus:
```
SELECT CASE WHEN fruit IN ('orange', 'grapefruit')
THEN 'citrus'
ELSE fruit
END,
SUM(price)
FROM inventory
GROUP BY CASE WHEN fruit IN ('orange', 'grapefruit')
THEN 'citrus'
ELSE fruit
END
```
|
You can use a `case` statement:
```
select (case when fruit in ('orange', 'grapefruit') then 'citrus' else fruit end) as fruit,
       sum(price) as price
from inventory i
group by (case when fruit in ('orange', 'grapefruit') then 'citrus' else fruit end);
```
|
SQL Combine 2 Different Rows into 1
|
[
"sql",
"oracle"
] |
I'm trying to create a sequence in MySQL (I'm very new to SQL as a whole). I'm using the following code, but it causes an error:
```
CREATE SEQUENCE ORDID INCREMENT BY 1 START WITH 622;
```
ORDID refers to a field in a table I'm using. How do I create the sequence properly?
Edit:
Allegedly, MySQL doesn't use sequences. I'm now using the following code, but this is causing errors too. How do I fix them?
```
CREATE TABLE ORD (
ORDID NUMERIC(4) NOT NULL AUTO_INCREMENT START WITH 622,
//Rest of table code
```
Edit:
I think I found a fix. For phpMyAdmin (which I was using) you can use the following code.
```
ALTER TABLE ORD AUTO_INCREMENT = 622;
```
I have no idea why it prefers this, but if anyone else needs help with this then here you go. :)
|
Check out [this article](http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html). I believe it should help you get what you are wanting. If your table already exists, and it has data in it already, the error you are getting may be due to the auto\_increment trying to assign a value that already exists for other records.
In short, as others have already mentioned in comments, sequences, as they are thought of and handled in Oracle, do not exist in MySQL. However, you can likely use auto\_increment to accomplish what you want.
Without additional details on the specific error, it is difficult to provide more specific help.
**UPDATE**
```
CREATE TABLE ORD (
ORDID INT NOT NULL AUTO_INCREMENT,
//Rest of table code
PRIMARY KEY (ordid)
)
AUTO_INCREMENT = 622;
```
[This link](http://www.tutorialspoint.com/mysql/mysql-using-sequences.htm) is also helpful for describing usage of auto\_increment.
Setting the AUTO\_INCREMENT value appears to be a [table option](http://dev.mysql.com/doc/refman/5.0/en/create-table.html), and not something that is specified as a column attribute specifically.
Also, per one of the links from above, you can alternatively set the auto increment start value via an alter to your table.
```
ALTER TABLE ORD AUTO_INCREMENT = 622;
```
**UPDATE 2**
Here is a link to a [working sqlfiddle example](http://sqlfiddle.com/#!2/f90716/1), using auto increment.
I hope this info helps.
|
This is a solution [suggested by the MySQL manual](http://dev.mysql.com/doc/refman/5.6/en/information-functions.html#function_last-insert-id):
> If expr is given as an argument to LAST\_INSERT\_ID(), the value of the
> argument is returned by the function and is remembered as the next
> value to be returned by LAST\_INSERT\_ID(). This can be used to simulate
> sequences:
>
> Create a table to hold the sequence counter and initialize it:
>
> ```
> mysql> CREATE TABLE sequence (id INT NOT NULL);
> mysql> INSERT INTO sequence VALUES (0);
> ```
>
> Use the table to generate sequence numbers like this:
>
> ```
> mysql> UPDATE sequence SET id=LAST_INSERT_ID(id+1);
> mysql> SELECT LAST_INSERT_ID();
> ```
>
> The UPDATE statement increments the sequence counter and causes the next call to LAST\_INSERT\_ID() to return the updated value. The
> SELECT statement retrieves that value. The mysql\_insert\_id() C API
> function can also be used to get the value. See Section 23.8.7.37,
> “mysql\_insert\_id()”.
>
> You can generate sequences without calling LAST\_INSERT\_ID(), but the
> utility of using the function this way is that the ID value is
> maintained in the server as the last automatically generated value. It
> is multi-user safe because multiple clients can issue the UPDATE
> statement and get their own sequence value with the SELECT statement
> (or mysql\_insert\_id()), without affecting or being affected by other
> clients that generate their own sequence values.
|
How do I create a sequence in MySQL?
|
[
"mysql",
"sql",
"sequence"
] |
I have a table like this.
TABLE-1
```
id Code
-----------------
1 N188
1 N1Z2
1 N222
2 N189
2 N1Z2
2 N1Z3
3 N188
3 A123
3 B321
4 N188
4 A333
4 B444
```
I want to select the id and code of every id that has the code `N188`. The result should look like this:
TABLE-2
```
id Code
---------------
1 N188
1 N1Z2
1 N222
3 N188
3 A123
3 B321
4 N188
4 A333
4 B444
```
How can I write the SQL for this in SQL Server?
Thanks
|
You can use `EXISTS` for this:
```
SELECT id, code
FROM table1 t
WHERE EXISTS (
SELECT 1
FROM table1 t2
WHERE t.id = t2.id
AND t2.Code = 'N188'
)
```
* [Condensed SQL Fiddle Demo](http://sqlfiddle.com/#!2/d1c21/1)
|
Using `INNER JOIN`
```
SELECT *
FROM tablename A
JOIN (SELECT id
FROM tablename
WHERE code = 'N188') B
ON a.id = b.id
```
|
Select Grouped Column Values Where Have Same Id In SQL Server
|
[
"sql",
"sql-server"
] |
] |
I have 2 tables, `customer` and `payment` in my Oracle database, where a `customer` can have many `payment`.
If I run the following query:
```
SELECT customer.customer_id, customer.name, payment.pay_date
FROM customer, payment
WHERE customer.customer_id = payment.customer_id;
```
I will get the result as shown in an example below.
```
customer_id | name | pay_date
----------------------------------
1001 | Mr.A | 01/10/2014
1001 | Mr.A | 02/10/2014
1001 | Mr.A | 03/10/2014
1001 | Mr.A | 04/10/2014
```
Now, I want to select the last `pay_date` for each `customer_id`, I would do this.
```
SELECT customer.customer_id, customer.name, max(payment.pay_date) as last_pay_date
FROM customer, payment
WHERE customer.customer_id = payment.customer_id
GROUP BY customer.customer_id, customer.name;
```
So that the result becomes
```
customer_id | name | last_pay_date
----------------------------------
1001 | Mr.A | 04/10/2014
```
The problem is that I know `customer.name` is not distinct for every customer with the same `customer_id`.
**I would like to know if there is an aggregation function, says `ONE()`, that let me do as follows:**
```
SELECT customer.customer_id, ONE(customer.name), max(payment.pay_date) as last_pay_date
FROM customer, payment
WHERE customer.customer_id = payment.customer_id
GROUP BY customer.customer_id;
```
So that I don't have to put `customer.name` in the `GROUP BY` clause, as I think it makes my SQL hard to read: it misleads anyone reading the SQL into thinking that the result might have more than one row with the same `customer_id` but different `name`.
|
You can use the `MIN()` or `MAX()` aggregate function, or (as of Oracle 9i) the `FIRST`/`LAST` functions, which in Oracle take the form `MIN(...) KEEP (DENSE_RANK FIRST ORDER BY ...)`.
```
SELECT
   customer.customer_id,
   MIN(customer.name),
   max(payment.pay_date) as last_pay_date
FROM
   customer JOIN payment
ON
   customer.customer_id = payment.customer_id
GROUP BY
   customer.customer_id;
```
With `MIN()` you get the alphabetically first name; either way, you cannot predict which customer name pops up unless you make the ordering explicit...
See [Manual](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions083.htm#SQLRF00653)
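For a deterministic pick, the `KEEP (DENSE_RANK ...)` form might look like this (taking the alphabetically first name per customer):
```
SELECT c.customer_id,
       MIN(c.name) KEEP (DENSE_RANK FIRST ORDER BY c.name) AS name,
       MAX(p.pay_date) AS last_pay_date
FROM customer c
JOIN payment p ON c.customer_id = p.customer_id
GROUP BY c.customer_id;
```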
|
If you don't really care which name will be selected, you can use MIN, MAX, FIRST, LAST...
```
SELECT customer.customer_id, MIN(customer.name),
max(payment.pay_date) as last_pay_date
FROM customer, payment
WHERE customer.customer_id = payment.customer_id
GROUP BY customer.customer_id;
```
|
How to select one value from a column in a group query
|
[
"sql",
"oracle"
] |
I have a table that contains:
```
EXTDEP EXTCODE PRICE
200 10001 200
500 10001 25
600 10001 36
```
The result I want:
```
EXTCODE PRICE200 PRICE500 PRICE600
10001 200 25 36
```
The result I get:
```
EXTCODE PRICE200 PRICE500 PRICE600
10001 NULL NULL 36
10001 NULL 25 NULL
10001 200 NULL NULL
```
My code:
```
SELECT
[ExtCode]
case when extdep = '200' then price1 end as '200',
case when extdep = '500' then price1 end AS '500',
case when extdep = '600' then price1 end AS '600'
```
Any ideas? :)
|
Try this, which applies an aggregate to each CASE expression:
```
SELECT
[ExtCode],
MAX(case when extdep = '200' then price1 end) as '200',
MAX(case when extdep = '500' then price1 end) AS '500',
MAX(case when extdep = '600' then price1 end) AS '600'
FROM your_table
GROUP BY ExtCode
```
|
You can use the PIVOT function also like this:
```
SELECT [ExtCode],
       [200] AS '200',
       [500] AS '500',
       [600] AS '600'
FROM
(
    Select [ExtCode], extdep, price1
    From your_table
) src
PIVOT
(
    Max(price1)
    For extdep in ([200], [500], [600])
) piv
```
|
SQL Results of multiple case statements in one row
|
[
"sql",
"case"
] |
I'm writing a query to select employee names from a database where the employees are married but have no children.
I have two different tables: Employee and Dependents
Employee has the following fields
```
fname, lname, ssn
```
And Dependents have the following fields
```
essn, dependents_name, relationship
```
Dependents.essn is a FK that references Employee.ssn
Some Employee.ssn values have multiple tuples in Dependents, each with a different `relationship` status (spouse, son, daughter), describing the type of dependent that employee has.
I'm looking to write a query that selects those employees, based on the ssn -> essn relationship, that have the relationship `spouse` but not the relationships `son` or `daughter`.
So far, this is what I've tried:
```
select e.fname, d.relationship
from (employee e left outer join dependents d
on e.ssn = d.essn)
where d.relationship = 'spouse'
```
It returns the tuples of employees with the `spouse` value, but also with the `son` and `daughter` value.
How can I filter my tables to include only those employees with the `spouse` value?
|
You can use a `NOT EXISTS` clause:
```
select e.fname, d.essn, d.relationship
from employee e
join dependents d
on e.ssn = d.essn
and d.relationship = 'spouse'
and not exists ( select 1 from dependents d1
where d1.essn= e.ssn
and d1.relationship <> 'spouse'
)
```
|
Here's another way with a common table expression and `count` with `partition`:
```
with cte as (
select e.fname,
d.relationship,
count(d.relationship) over (partition by e.ssn) cnt
from employee e
join dependents d on e.ssn = d.essn
)
select fname, relationship
from cte
where cnt = 1 and relationship = 'spouse'
```
* [SQL Fiddle Demo](http://sqlfiddle.com/#!3/78819/4)
---
BTW, no need for an `OUTER JOIN` -- your `WHERE` criteria negates it since you require a spouse to exist.
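Assuming `spouse`, `son`, and `daughter` are the only relationship values (as in the question), one more equivalent formulation aggregates per employee and requires every dependent to be a spouse:
```
SELECT e.fname
FROM employee e
JOIN dependents d ON d.essn = e.ssn
GROUP BY e.ssn, e.fname
HAVING COUNT(*) = SUM(CASE WHEN d.relationship = 'spouse' THEN 1 ELSE 0 END);
```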
|
SQL -- Excluding Tuples
|
[
"sql",
"sql-server",
"filter",
"conditional-statements"
] |
I have this in my DB:
```
| id | time | time-left |
| 1 | 242363246 | 1 |
| 2 | 215625252 | 1 |
| 3 | 147852369 | 1 |
| 4 | 951753456 | 1 |
```
How can I set `time-left` to the same value as `time` but with 111 added? Here is how I want it to become:
```
| id | time | time-left |
| 1 | 242363246 | 242363357 |
| 2 | 215625252 | 215625363 |
| 3 | 147852369 | 147852480 |
| 4 | 951753456 | 951753567 |
```
|
Assuming `time` is an integer or a char type that can be incremented (with implicit conversion):
```
UPDATE your_table
SET "time-left" = time + 111;
```
If `time`can't be incremented this way you have to cast/convert it to an integer type first.
Depending on what DBMS you are using the way to use quoted identifiers for invalid names (like `time-left`in this case) might be something else like using brackets: `[time-left]` (SQL Server) or using backticks: `` `time-left` `` (MySQL).
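For example, the same update under the two quoting styles just mentioned:
```
-- SQL Server
UPDATE your_table SET [time-left] = [time] + 111;

-- MySQL
UPDATE your_table SET `time-left` = `time` + 111;
```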
|
```
update myTable set "time-left" = time + 111;
```
|
SELECT and then update in the same column
|
[
"sql"
] |
I have three tables in my Oracle db:
**Peoples:**
```
IdPerson PK
Name
Surname
```
**Earnings:**
```
IdEarning
IdPerson
EarningValue
```
**Awards:**
```
IdAward PK
IdPerson FK
AwardDescription
```
* A person can have many earnings, or none.
* A person can have many awards, one award, or no award at all.
I want to make a query that will return 3 columns:
* Surname
* SUM of all EarningValue of person with this surname
* COUNT of all Awards for this person
Importantly, I also need to display a 0 value if a person doesn't have any award or earning. It is possible that a person has an award but no earnings.
Is it possible to make such query?
|
```
SELECT p.IdPerson,
p.Surname,
NVL(SUM(e.EarningValue), 0) as SumEarnings,
COUNT(a.IdAward) as CntAwards
FROM Peoples p
LEFT JOIN Earnings e ON p.IdPerson = e.IdPerson
LEFT JOIN Awards a ON p.IdPerson = a.IdPerson
GROUP BY p.IdPerson,
p.Surname;
```
Note that if a person has both multiple earnings and multiple awards, the joins above multiply rows and inflate the results; scalar subqueries avoid that:
```
SELECT p.Surname,
(SELECT NVL(SUM(e.EarningValue), 0)
FROM Earnings e WHERE e.IdPerson = p.IdPerson) as SumEarnings,
       (SELECT COUNT(a.IdAward)
FROM Awards a WHERE a.IdPerson = p.IdPerson) as CntAwards
FROM Peoples p
```
|
Yes it is possible, you just have to join the tables
```
SELECT Peoples.Surname, SUM(Earnings.EarningValue) as Earnings, COUNT(Awards.IdPerson) as Awards
FROM Peoples
INNER JOIN Earnings
ON Peoples.IdPerson = Earnings.IdPerson
INNER JOIN Awards
ON Peoples.IdPerson = Awards.IdPerson
GROUP BY Peoples.IdPerson;
```
|
Query with aggregating data from 2 tables
|
[
"sql",
"oracle",
"aggregate"
] |
I want to add zeroes after the decimal point.
For example, if a = 6895,
then I want a = 6895.00.
The datatype of a is number(12).
I am using the code below:
```
select to_char(6895,'0000.00') from dual .
```
I am getting the desired result from the above code, but
'6895' can be any number, so I have to adjust the number of '0's in the format manually.
For example:
```
select to_char(68955698,'00000000.00') from dual .
```
Can anyone suggest a better method?
|
The [number format models](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#SQLRF00211) are the place to start when converting numbers to characters. `0` prepends `0`s, which means you'd have to get rid of them somehow. However, `9` means:
> Returns value with the specified number of digits with a leading space if positive or with a leading minus if negative. Leading zeros are blank, except for a zero value, which returns a zero for the integer part of the fixed-point number.
So, the following gets you almost there, save for the leading space:
```
SQL> select to_char(987, '9999.00') from dual;
TO_CHAR(
--------
987.00
```
You then need to use a [format model modifier](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements004.htm#SQLRF00216), `FM`, which is described thusly:
> FM Fill mode. Oracle uses trailing blank characters and leading zeroes
> to fill format elements to a constant width. The width is equal to the
> display width of the largest element for the relevant format model
>
> ...
>
> The FM modifier suppresses the above padding in the return value of
> the TO\_CHAR function.
This gives you a format model of `fm9999.00`, which'll work:
```
SQL> select to_char(987, 'fm9999.00') from dual;
TO_CHAR(
--------
987.00
```
If you want a lot of digits before the decimal then simply add a lot of `9`s.
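Applied to the asker's `NUMBER(12)` column, that could look like:
```
select to_char(68955698, 'fm999999999999.00') from dual;
-- 68955698.00
```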
|
> datatype of a is number(12);
Then use twelve 9s in the format model, and keep the decimals to just 2. Since the column datatype is `NUMBER(12)`, you cannot have any number larger than that size.
```
SQL> WITH DATA AS(
2 SELECT 12 num FROM dual union ALL
3 SELECT 234 num FROM dual UNION ALL
4 SELECT 9999 num FROM dual UNION ALL
5 SELECT 123456789 num FROM dual)
6 SELECT to_char(num,'999999999999D99') FROM DATA
7 /
TO_CHAR(NUM,'999
----------------
12.00
234.00
9999.00
123456789.00
SQL>
```
**Update** Regarding leading spaces
```
SQL> select ltrim(to_char(549,'999999999999.00')) from dual;
LTRIM(TO_CHAR(54
----------------
549.00
SQL>
```
|
how to add zeros after decimal in Oracle
|
[
"sql",
"oracle"
] |
I need to calculate the count of occurrences of specified element in array, something like:
> `elem_occurrences_count(ARRAY[a,b,c,a,a], a) = 3`
>
> `elem_occurrences_count(ARRAY[a,b,c], d) = 0`
Is there any function in PostgreSQL that can be used to solve the problem? Any help is appreciated.
|
You will need to unnest the array and then count the occurrences.
```
with elements (element) as (
select unnest(ARRAY['a','b','c','a','a'])
)
select count(*)
from elements
where element = 'a';
```
This can easily be embedded into a function:
```
create or replace function count_elements(elements text[], to_find text)
returns bigint
as
$body$
select count(*)
from unnest(elements) element
where element = to_find;
$body$
language sql;
```
**Update**
Since Postgres 9.5 this can also be done using `array_positions()` which returns an array of positions where an element was found. The length of that array is the number of occurrences:
```
select cardinality(array_positions(ARRAY['a','b','c','a','a'], 'a'));
```
|
# 9.5+
There is an easier method now
```
SELECT
sArray,
c,
coalesce(array_length( array_positions(sArray, c), 1 ),0) AS count
FROM ( VALUES
(ARRAY['a','b','c','a','a'], 'a'),
(ARRAY['a','b','c'], 'd')
) AS t(sArray,c);
sarray | c | count
-------------+---+-------
{a,b,c,a,a} | a | 3
{a,b,c} | d | 0
(2 rows)
```
|
PostgreSQL: get count of occurrences of specified element in array
|
[
"",
"sql",
"arrays",
"postgresql",
"plpgsql",
""
] |
Is SQL a context free language or some other type of language?
|
According to <https://stackoverflow.com/a/31265136> SQL is not a regular language. The short explanation is that each select query looks like
```
SELECT x FROM y WHERE z
```
and `y` can be another select query itself, so it cannot be simulated with a finite-state machine. As mentioned before, there are CFGs for the SQL standards in [Backus–Naur Form](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form); therefore SQL is a **nonregular context-free** language.
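The nesting argument can be sketched with a toy recognizer in Python: a recursive function (the hand-rolled equivalent of a context-free production) accepts arbitrarily deep `SELECT x FROM (...)` nesting, which no single finite pattern can. The grammar fragment below is invented for illustration and is not any SQL standard's actual grammar:

```python
import re

# Toy grammar: query := "SELECT" name "FROM" (name | "(" query ")")
# The recursive call plays the role of a context-free production.
def is_select(q: str) -> bool:
    m = re.fullmatch(r"SELECT \w+ FROM (?:\w+|\((?P<inner>.*)\))", q)
    if not m:
        return False
    inner = m.group("inner")
    return inner is None or is_select(inner)

print(is_select("SELECT x FROM y"))                                   # True
print(is_select("SELECT x FROM (SELECT x FROM (SELECT x FROM y))"))   # True
print(is_select("SELECT x FROM (SELECT x FROM"))                      # False
```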
|
Any CFG for SQL will do most of the work, but will always be slightly too permissive.
A good example of an SQL CFG comes from antlr: <https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlParser.g4#L1982>
But at that line (1982), you see that in a `values_clause`, column values are recursively added regardless of how many columns might be specified, or how many values are in another row, which is invalid sql:
```
insert into xyz values
(1, 'hello'),
(2, 'bye'),
(3, 'hi', 'uhm...'); -- invalid!
```
Try running it here: <https://www.db-fiddle.com/f/6gwgcg6qBLKpFbCEPtB6dy/0>
This syntax can never be fully encapsulated by a CFG, as it is equivalent to the `{ (a^n)(b^n)(c^n) : n ≥ 1 }` language, which is famously not allowed in CFGs ([but allowed in CSGs](https://en.wikipedia.org/wiki/Context-sensitive_grammar#Examples))
You could argue this is a runtime error instead of a parsing error. But you can use the same argument for every language that is interpreted, so it's a bit of a grey area.
|
What kind of language is SQL?
|
[
"",
"sql",
"programming-languages",
"chomsky-hierarchy",
""
] |
I am writing a SQL script for the first time and I want to separate a value that contains two user IDs, which looks like this:
```
612831408762037_617540318291146
```
I want to separate these two so I have
user\_id = `612831408762037` and post\_id = `617540318291146`. Any ideas on how I should approach this?
thanks in advance
|
Please try:
```
declare @var nvarchar(100)='612831408762037_617540318291146';
select
LEFT(@var, charindex('_',@var)-1),
RIGHT(@var, charindex('_',REVERSE(@var))-1)
```
|
Please try this.
```
DECLARE @document varchar(400);
DECLARE @FIRST int;
SELECT @document = '612831408762037_617540318291146234234234234';
SET @FIRST = (SELECT CHARINDEX('_', @document, 1));
SELECT leftSide = SUBSTRING(@document, 0,@FIRST), rightSide = SUBSTRING(@document, @FIRST+1, LEN(@document));
```
|
how to split values in SQL
|
[
"",
"sql",
"sql-server",
""
] |
My code:
```
$results = $GLOBALS['wpdb']->get_results( 'SELECT * FROM myTable WHERE date = 2014 ORDER BY id DESC', object );
```
The problem is date is stored in this format: 2014-01-01
So how do I select just the year ( I don't care about month and day for the time being ).
Thanks
|
Use the `year()` function:
```
WHERE year(date) = 2014
```
or use explicit comparisons:
```
WHERE (date >= '2014-01-01' and date < '2015-01-01')
```
The latter is better because it can make use of an index on the `date` column.
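A quick check that the two predicates agree, sketched here in SQLite via Python (SQLite has no `year()`, so `strftime('%Y', ...)` stands in for it; table and rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE myTable (id INTEGER, date TEXT)")
con.executemany("INSERT INTO myTable VALUES (?, ?)",
                [(1, "2014-01-01"), (2, "2014-06-15"), (3, "2013-12-31")])

# Function applied to the column: correct, but defeats an index on `date`.
by_func = con.execute(
    "SELECT id FROM myTable WHERE strftime('%Y', date) = '2014'").fetchall()
# Half-open range comparison: same rows, and an index on `date` can be used.
by_range = con.execute(
    "SELECT id FROM myTable "
    "WHERE date >= '2014-01-01' AND date < '2015-01-01'").fetchall()
print(sorted(by_func) == sorted(by_range))  # True
```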
|
Try this Query :
```
SELECT * FROM myTable WHERE year(`date`)='2014' ORDER BY id DESC
```
|
How to select year in sql statement
|
[
"",
"mysql",
"sql",
"wordpress",
""
] |
I'm new to SQL and I've been struggling to write this query. I want to find the SUM of all salaries for employees in a given department, let's say 'M', and a given hire date, let's say '2002'. Any ideas? I'm thinking I have to JOIN the tables somehow but I'm having trouble. I've set up a schema like this.
jobs table and columns
```
JOBS
------------
job_id
salary
hire_date
```
employees table and columns
```
EMPLOYEES
------------
employee_id
name
job_id
department_id
```
department table and columns
```
DEPARTMENTS
------------
department_id
department_name
```
This is very similar to the way the HR schema does it in Oracle so I think the schema should be OK just need help with the query now.
|
You need a statement like this:
```
SELECT e.name,
d.department_name,
SUM(j.salary)
FROM employees e,
departments d,
jobs j
WHERE d.department_name = 'M'
AND TO_CHAR(j.hire_date, 'YYYY') = '2002'
AND d.department_id = e.department_id
AND e.job_id = j.job_id
GROUP BY e.name,
d.department_name;
```
|
FWIW, you shouldn't use the old ANSI-89 [implicit join notation](http://en.wikipedia.org/wiki/Join_%28SQL%29#Inner_join) (using `,`). It has been considered deprecated since the ANSI-92 standard (more than 20 years ago!) and some vendors are starting to drop support for it (MS SQL Server 2008; I don't know if there is a deprecation warning for this "feature" in Oracle?).
So, as a newcomer, you shouldn't learn [bad habits](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-old-style-joins) from the start.
With the "modern" syntax, your query should be written:
```
SELECT e.name,
d.department_name,
SUM(j.salary)
FROM employees e
JOIN departments d USING(department_id)
JOIN jobs j USING(job_id)
WHERE d.department_name = 'M'
AND TO_CHAR(j.hire_date, 'YYYY') = '2002'
GROUP BY e.name, d.department_name;
```
With that syntax, there is a clear distinction between the `JOIN` relation (`USING` or `ON`) and the filter clause `WHERE`. It will later ease things when you will encounter "advanced" joins such as `OUTER JOIN`.
|
SQL query to find SUM
|
[
"",
"sql",
"oracle",
"sum",
"schema",
""
] |
This is similar to this one. [How to concatenate all columns in a select with SQL Server](https://stackoverflow.com/questions/13761337/how-to-concatenate-all-columns-in-a-select-with-sql-server)
But not quite. This is MS SQL 2008. I am pulling patient demographics, and one of those is race, which is a multi-choice field (you could be asian and chinese for example). The race table 'PatientRace' is linked to the patient table by patientid. So the table structure is thus:
```
Patient table
PatientID
PatientName
PatientAddress
PatientRace table
PatientRaceID
PatientID (FK)
Description
```
I only want one row, and I want race to be concatenated. Is it possible to do this within a single SELECT statement or do I need to do a cursor? I am envisioning the cursor to be like this: Initial select for all the other demographics, insert into a temp table. Go through the temp table by patientID, then for each, grab out the race, concat, and add to the temp table.
The desired output is like this: 1 row per patient.
Name: "Joe Blow"
Race: "Asian, Chinese"
|
Concatenating string values in SQL Server is not obvious. It requires using "xml" data processing and a subquery:
```
select p.*,
stuff((select ', ' + Description
from patientrace pr
where pr.patientid = p.patientid
for xml path ('')
), 1, 2, ''
) as races
from patients p;
```
|
You need to use STUFF and FOR XML like this
```
SELECT p.PatientName,
       STUFF((SELECT ',' + r.Description
        FROM PatientRace r
        WHERE r.PatientID = p.PatientID
        FOR XML PATH('')), 1, 1, '')
FROM Patients p
```
|
sql concat within a SELECT statement
|
[
"",
"sql",
"sql-server",
""
] |
I have a listbox with more than 100 items. I want these 100 items to be comma-separated with single quotes, e.g. `'abc','def'`, so I can use the list in the search of a SELECT clause.
```
List<string> SelectedValue = new List<string>();
foreach (ListItem lst in ListBox2.Items)
{
if (lst.Selected)
{
SelectedValue.Add(lst.Value);
}
}
string.Join(",",SelectedValue.Select(x=>string.Format("'{0}'",x)));
```
It's giving me the error "the best overloaded method match for 'string.Join' has some invalid arguments". What am I doing wrong here?
|
Might I suggest a slightly different approach using a `StringBuilder` instead of putting the items into a `List` object? You are flattening these out to CSV anyway, so why not just build it that way, like so..
```
var SB = new StringBuilder();
foreach (ListItem lst in ListBox2.Items)
{
if (lst.Selected)
{
SB.Append("'" + lst.Value + "',");
}
}
var FinalString = SB.ToString().Substring(0, (SB.Length - 1));
```
|
You need to convert the result of `SelectedValue.Select` to an array:
```
String result = string.Join(",", SelectedValue.Select(x => string.Format("'{0}'", x)).ToArray());
```
|
convert listbox value to comma separated string with single quotes
|
[
"",
"sql",
"asp.net",
""
] |
By that I mean leaving out any duplicate records
For example
```
ID NAME
1 a
2 a
3 b
4 b
5 c
```
Desired output.
5 c only
I am tired of trying this, so I don't have any reasonable code to paste here.
|
Here is another possible way:
```
select min(id), name from table group by name having count(*) = 1
```
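Checked quickly against the sample data from the question (sketched in SQLite via Python; a hypothetical table `t` stands in for the unnamed table):

```python
import sqlite3

# Reproduce the question's sample rows and run the GROUP BY / HAVING query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "a"), (2, "a"), (3, "b"), (4, "b"), (5, "c")])
rows = con.execute(
    "SELECT MIN(id), name FROM t GROUP BY name HAVING COUNT(*) = 1").fetchall()
print(rows)  # [(5, 'c')]
```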
|
Here is one way:
```
select t.*
from table t
where not exists (select 1
from table t2
where t2.name = t.name and t2.id <> t.id
);
```
Here is another way:
```
select t.*
from table t join
(select name, count(*) as cnt
from table t
group by name
having cnt = 1
) tt
on tt.name = t.name;
```
|
How to select only the unique records
|
[
"",
"mysql",
"sql",
""
] |
I have a table with Amounts, and Created Dates. I want to create a query that loops - AND changes the date within the query. Otherwise I have to do it manually.
Current query:
```
<cfquery name=qWeekly datasource="#DSN#">
SELECT SUM(Amount)
FROM Transactions
WHERE ( CreatedDate BETWEEN '1-19-2014' AND '1-25-2014' )
AND ( Amount > 0 )
</cfquery>
```
I would manually change the filter to `... (CreatedDate BETWEEN '1-19-2014' AND '1-25-2014')...`, then manually change it again to: `(CreatedDate ...BETWEEN '1-26-2014' AND '2-8-2014')`.
What I want to do is something like `Between 'X' AND 'X+7'`, so that I get one week's worth of data, incremented by 7 days, so I can generate output by weekly date ranges. In pseudocode, something like this:
```
<cfif CreatedDate < Now()>
<cfloop index="x" step="7">
<cfquery name=qWeekly datasource="#DSN#">
SELECT SUM(Amount)
FROM Transactions
WHERE ( CreatedDate BETWEEN 'x' AND 'x+7' )
AND ( Amount > 0 )
</cfquery>
</cfloop>
</cfif>
```
Is this possible?
|
You did not mention your dbms (always good to include it with sql questions) but it is possible you could achieve the goal without looping. For example, in SQL Server, you could use a [CTE](http://msdn.microsoft.com/en-us/library/ms175972.aspx) to generate a table of week start dates. Then JOIN it back to your Transactions table and aggregate the amounts by week.
**[SQLFiddle](http://sqlfiddle.com/#!3/a0ccd/1/0)**
```
<!--- Example: generate a range of 7 weeks --->
<cfset firstSunday = createDate(2014,1,19)>
<cfset lastSunday = dateAdd("ww", 7, firstSunday)>
...
<!--- Calculate totals for all weeks in range --->
<cfquery name="getTotalsByWeek" ...>
;WITH ranges ( WeekStartDate ) AS (
SELECT <cfqueryparam value="#firstSunday#" cfsqltype="cf_sql_date"> AS WeekStartDate
UNION ALL
SELECT DATEADD(d, 7, WeekStartDate)
FROM ranges
WHERE <cfqueryparam value="#lastSunday#" cfsqltype="cf_sql_date"> > WeekStartDate
)
SELECT r.WeekStartDate, SUM(t.Amount) AS TotalAmount
FROM ranges r LEFT JOIN Transactions t
<!--- CreatedDate falls within 7 days of the start date --->
ON t.CreatedDate >= r.WeekStartDate
AND t.CreatedDate < DATEADD(d, 8, r.WeekStartDate)
GROUP BY r.WeekStartDate
ORDER BY r.WeekStartDate
</cfquery>
```
**Results:**
```
2014-01-19 00:00:00.000 | 1915.74
2014-01-26 00:00:00.000 | 567.00
2014-02-02 00:00:00.000 | 1250.00
2014-02-09 00:00:00.000 | NULL
2014-02-16 00:00:00.000 | 300.00
2014-02-23 00:00:00.000 | NULL
2014-03-02 00:00:00.000 | NULL
2014-03-09 00:00:00.000 | NULL
```
**NB:** The query above uses a special construct for date comparisons that will work regardless of whether your CreatedDate column contains a date (only) or a date AND time.
```
col >= startDateAtMidnight AND
col < dayAfterEndDateAtMidnight
```
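A small check of this pattern (SQLite via Python, with made-up rows): the half-open comparison keeps a row stamped late on the last day of the range and drops the row stamped at midnight of the day after.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tx (created TEXT)")
con.executemany("INSERT INTO tx VALUES (?)", [
    ("2014-01-19",),             # date only, start of the week
    ("2014-01-25 23:59:59",),    # date AND time, still inside the week
    ("2014-01-26 00:00:00",),    # first instant after the week ends
])
# col >= startDateAtMidnight AND col < dayAfterEndDateAtMidnight
rows = con.execute(
    "SELECT created FROM tx "
    "WHERE created >= '2014-01-19' AND created < '2014-01-26'").fetchall()
print(len(rows))  # 2
```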
|
Something like this:
```
<Cfset datePlaceholder= now()/>
<!--- this would loop for 12 weeks --->
<Cfloop from="7*12" to="1" step="-7" index="stepper">
<cfquery...>
SELECT SUM(Amount)
FROM Transactions
WHERE (CreatedDate
BETWEEN #dateformat(datePlaceholder,'MM-DD-YYYY')#
AND #dateformat(dateAdd('d',stepper,datePlaceholder),'MM-DD-YYYY')#)
AND (Amount > 0)
</cfquery>
...do whatever you need to with this week of data
</cfloop>
```
Caveats - some types of date fields include time (smalldatetime for example) so you might need to append 00:00:00 or 23:59:59 to them.
You will have to debug this code - I did not run it.
Cfquery is one of those spots where I prefer the tag for readability (and cutting and pasting from query analyzer or navicat or whatever).
|
Using ColdFusion Loops to change database query filter
|
[
"",
"sql",
"coldfusion",
""
] |
I am creating a simple database for a forum. I have a category table `forum_categories` which looks like this:
```
`cat_id` INT(8) NOT NULL AUTO_INCREMENT,
`cat_title` VARCHAR(255) NOT NULL,
`cat_desc` TEXT NOT NULL,
PRIMARY KEY (`cat_id`),
UNIQUE KEY (`cat_title`)
```
And a topics table `forum_topics` which looks like this:
```
`topic_id` INT(8) NOT NULL AUTO_INCREMENT,
`cat_id` INT(8) NOT NULL COMMENT 'foreign key with forum_categories table',
`user_id` INT(11) NOT NULL COMMENT 'foreign key with users table',
`topic_title` VARCHAR(255) NOT NULL,
`topic_desc` TEXT NOT NULL,
`topic_date` DATETIME DEFAULT NULL,
PRIMARY KEY (`topic_id`),
FOREIGN KEY (`cat_id`) REFERENCES forum_categories (`cat_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci COMMENT='forum topics';
```
Now when I select a category I just want the topics to show where the cat\_id fields are equal to each other.
Example:
Category 1 has cat\_id = 1
Category 2 has cat\_id = 2
Topic 1 has cat\_id = 1
Topic 2 has cat\_id = 2
Now when category 1 is selected I just want topic 1 to show. If category2 is selected I just want topic 2 to show.
This query achieves that:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
WHERE fc.cat_id = 1
ORDER BY ft.topic_date DESC
```
Now, I would like to not have one query for every category, if that is possible. Trying to do this:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
WHERE fc.cat_id = ft.cat_id
ORDER BY ft.topic_date DESC
```
or this:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
ORDER BY ft.topic_date DESC
```
Shows topic 1 and 2 in both categories. Is there a way to have one query for all categories?
|
If you want to create a query that pulls this data for all categories, you can join the two tables together, on the condition that the cat\_id is equal. This would look something like this:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
ORDER BY ft.topic_date DESC;
```
If you want to query for a specific category, you can put that in your WHERE statement, like this:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
WHERE fc.cat_id = 1 -- Or whatever you want.
ORDER BY ft.topic_date DESC;
```
I think one issue you might be having is that you want this to be reusable, where you can write one query and change the cat\_id as needed. If that is the case, I recommend you look into prepared statements.
What a prepared statement does is create a query with a parameter value that you insert as needed. Here is an example:
```
PREPARE stmnt FROM
'SELECT *
FROM forum_categories fc
JOIN forum_topics ft ON fc.cat_id = ft.cat_id
WHERE fc.cat_id = ?
ORDER BY ft.topic_date DESC';
SET @a = 1;
EXECUTE stmnt USING @a;
```
When you do this, you can assign a new variable (@b, @c) with whatever you want. All you'd have to do is call `EXECUTE stmnt` each time, without having to always rewrite that query.
Here is an [SQL Fiddle](http://sqlfiddle.com/#!2/f9161/5) that covers all three of my examples for you.
Here is a reference to MySQL [prepared statements](http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-prepared-statements.html).
|
Please try the following:
```
SELECT *
FROM forum_categories fc
JOIN forum_topics ft
ON fc.cat_id = ft.cat_id
WHERE fc.cat_id = 1 -- or whatever category you want
ORDER
BY ft.topic_date DESC
```
|
Write one SQL query for multiple values
|
[
"",
"mysql",
"sql",
"select",
"prepared-statement",
""
] |
Imagine having a table as the one below:
```
create table test (
id int auto_increment,
some int,
columns int
)
```
And then this table get used alot. Rows are inserted and rows are deleted and over time there might be gaps in the number that once was auto incremented. As an example, if I at some point make the following query:
```
select top 10 id from test
```
I might get something like
```
3
4
6
7
9
10
13
14
18
19
```
How do I design a query that returns the missing values 1,2,5,8 etc?
|
The easiest way is to get *ranges* of missing values:
```
select (id + 1) as firstmissing, (nextid - 1) as lastmissing
from (select t.id, lead(id) over (order by id) as nextid
from test t
) t
where nextid is not null and nextid <> id + 1;
```
Note this uses the `lead()` function, which is available in SQL Server 2012+. You can do something similar with `apply` or a subquery in earlier versions. Here is an example:
```
select (id + 1) as firstmissing, (nextid - 1) as lastmissing
from (select t.id, tt.id as nextid
from test t cross apply
(select top 1 id
from test t2
where t2.id > t.id
order by id
) tt
) t
where nextid is not null and nextid <> id + 1;
```
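For engines without `lead()` (and as a quick check against the question's sample ids), a portable `NOT EXISTS` variant reports the first missing value of each gap; note that it does not list values below the table's minimum id (1 and 2 here). Sketched in SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER)")
con.executemany("INSERT INTO test VALUES (?)",
                [(i,) for i in (3, 4, 6, 7, 9, 10, 13, 14, 18, 19)])
# An id starts a gap when id+1 is absent and id is not the maximum.
gaps = con.execute("""
    SELECT id + 1 AS firstmissing
    FROM test t
    WHERE NOT EXISTS (SELECT 1 FROM test WHERE id = t.id + 1)
      AND id < (SELECT MAX(id) FROM test)
    ORDER BY id
""").fetchall()
print([g[0] for g in gaps])  # [5, 8, 11, 15]
```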
|
Simple way is by using cte..
```
;WITH cte
AS (SELECT 1 id
UNION ALL
SELECT id + 1 id from cte
WHERE id < (SELECT Max(id)
FROM tablename))
SELECT *
FROM cte
WHERE id NOT IN(SELECT id
FROM tablename)
```
**Note:** this will start from 1. If you want start from the min value of your table just replace
```
"SELECT 1 id" to "SELECT Min(id) id FROM tablename"
```
|
Find gaps in auto incremented values
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need to copy data from one table to another. I need to merge the first name and surname into one field called fullname.
How can I do that?
```
INSERT INTO table_one (id, full_name, dob)
SELECT id, first_name, second_name, dob
FROM table_two
WHERE id>0;
```
I need to merge the first\_name and surname with a comma in the middle
EDIT:
How can I also check if the first name and second name are null before trying to concatenate them? I don't want names like John,Null
|
Check out MySQL [CONCAT](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat).
After your edit, you will need to use [CONCAT\_WS](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat-ws). This will skip NULL values (note that it does not skip empty strings, only NULLs).
```
SELECT id, CONCAT_WS(' ', first_name, second_name) as full_name, dob
```
|
You can use the function `CONCAT_WS`. This will ignore any null values:
```
INSERT INTO table_one (id, full_name, dob)
SELECT id, CONCAT_WS(' ', first_name, second_name) AS name, dob
FROM table_two
WHERE id>0;
```
<http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat-ws>
|
Merge two or more fields before inserting
|
[
"",
"mysql",
"sql",
""
] |
I have a column in SQL that is varchar. I need it to return anything with a value.
Example...
```
select * from students where StudentID <> ''
```
Is this the correct way of doing it? I've tried is not null but then it returns anything that is empty as well.
Thanks
|
I would suggest using coalesce:
```
select * from students where coalesce(StudentID, '') <> ''
```
This will turn nulls into empty strings and disallow them. This has the added bonus of restricting empty strings as well.
A null is not equal to anything, not even another null, so a simple `<>` doesnt work.
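The three-valued logic can be seen directly (sketched in SQLite via Python): comparing NULL yields NULL, which a WHERE clause treats as not-true, while the COALESCE form produces a definite false.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NULL compared with anything yields NULL (unknown), so `StudentID <> ''`
# alone silently drops NULL rows rather than matching them.
(result,) = con.execute("SELECT NULL <> ''").fetchone()
print(result)  # None: the comparison is unknown, not TRUE or FALSE

# COALESCE turns NULL into '', so the comparison becomes a definite FALSE.
(coalesced,) = con.execute("SELECT COALESCE(NULL, '') <> ''").fetchone()
print(coalesced)  # 0
```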
|
```
select * from students where StudentID <> '' AND StudentID IS NOT NULL
```
You can target both white space and null.
|
Returning varchars that are not null or empty SQL
|
[
"",
"sql",
"varchar",
"isnullorempty",
""
] |
I was using a simple sql query and getting an ordered list, but when I changed some of the values in the column I'm sorting by, those rows were no longer being sorted correctly.
```
select distinct u.Email,
case
when l.region_id is null then 'EU'
else l.region_id
end
as Location
from TB_User u
left join cat..location l on l.location=u.Location
where u.Username in (....)
order by l.region_id
```
I have about 5 rows that returned null for their region\_id so they would be at the top of the result set. When I added the case and replaced their value, they still remain at the top. Is there anyway to make these rows sort according to their given value?
|
You can use `CASE` also in the `ORDER BY`. But in this case it seems that you instead want to order by the column which uses the `CASE`.
```
ORDER BY Location
```
If you instead want the null-regions at the bottom:
```
ORDER BY CASE WHEN l.region_id is null THEN 0 ELSE 1 END DESC,
Location ASC
```
If your RDBMS doesn't support ordering by the column alias (SQL Server does), you have to repeat the expression:
```
ORDER BY CASE WHEN l.region_id IS NULL THEN 'EU' ELSE l.region_id END ASC
```
|
You just order by the column value, which is null.
If you want to order by the case statement, just copy it in the order by clause:
```
order by
case
when l.region_id is null then 'EU'
else l.region_id end
```
|
Select Case is not working with Order by
|
[
"",
"sql",
"sql-order-by",
"case",
""
] |
I have a dataset in the form of a CSV file that is sent to me on a regular basis. I want to import this data into my MySQL database and turn it into a proper set of tables. The problem I am having is that one of the fields is used to store multiple values. For example, the field is storing email addresses. It may have one email address or it may have two, three, four, etc. The field contents would look something like this: "user1@domain.com,user2@domain.com,user3@domain.com".
I need to be able to take the undetermined number of values from each field and then add them into a separate table so that they look like this.
```
user1@domain.com
user2@domain.com
user3@domain.com
```
I am not sure how I can do this. Thank you for the help.
|
Instead of importing the csv file directly and then trying to fix the problems in it, I found the best way to attack this was to first pass the csv to AWK.
AWK outputs three separate csv file that follow the normal forms. I then import those tables and all is well.
```
2 info="`ncftpget -V -c -u myuser -p mypassword ftp://fake.com/data_map.csv`"
3
4 echo "$info" | \
5 awk -F, -v OFS="," 'NR > 1 {
6 split($6, keyvalue, ";")
7 for (var in keyvalue) {
8 gsub(/.*:/, "", keyvalue[var])
9 print $1, keyvalue[var]
10 }}' > ~/sqlrw/table1.csv
11
12 echo "$info" | \
13 awk -F, -v OFS="," 'NR > 1 {
14 split($6, keyvalue, ";")
15 for (var in keyvalue) {
16 gsub(/:/, ",", keyvalue[var])
17 print keyvalue[var]
18 }}' > ~/sqlrw/table2.csv
19
20 sort -u ~/sqlrw/table2.csv -o ~/sqlrw/table2.csv
21
22 echo "$info" | \
23 awk -F, -v OFS="," 'NR > 1 {
24 print $1, $2, $3, $4, $5, $7, $8
25 }' > ~/sqlrw/table3.csv
```
|
Probably the simplest way is a brute force approach of inserting the first email, then the second, and so on:
```
insert into newtable(email)
select substring_index(substring_index(emails, ',', 1), ',', -1)
from emails
where (length(replace(emails, ',', ',,')) - length(emails)) >= 1;
insert into newtable(email)
select substring_index(substring_index(emails, ',', 2), ',', -1)
from emails
where (length(replace(emails, ',', ',,')) - length(emails)) >= 2;
insert into newtable(email)
select substring_index(substring_index(emails, ',', 3), ',', -1)
from emails
where (length(replace(emails, ',', ',,')) - length(emails)) >= 3;
```
And so on.
That is, extract the nth element from the list and insert that into the table. The `where` clause counts the number of commas in the list, which is a proxy for the length of the list.
You need to repeat this up to the maximum number of emails in the list.
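The comma-counting proxy in the WHERE clause can be checked in plain Python: doubling every comma and diffing lengths counts the separators, and the nth element is just the nth split piece.

```python
def comma_count(s: str) -> int:
    # Same trick as length(replace(s, ',', ',,')) - length(s) in the WHERE clause
    return len(s.replace(",", ",,")) - len(s)

emails = "user1@domain.com,user2@domain.com,user3@domain.com"
print(comma_count(emails))    # 2 commas, so the list holds 3 addresses
print(emails.split(",")[1])   # user2@domain.com -- the "nth element"
```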
|
Convert MySql column with multiple values to proper table
|
[
"",
"mysql",
"sql",
""
] |
```
SELECT FacSSN, FacLastName, FacDept
FROM Faculty
WHERE FacSSN IN
(SELECT DISTINCT FacSSN FROM Offering
WHERE OffTerm = 'WINTER'
AND OffYear = 2006 );
```
I have a question which is meant for revision for an exam, This is the question:
List the name (first and last) and department of faculty who are only teaching in winter
term 2006
However there is a person which has a lecture in winter 2006 and summer 2006, the question does not want to list that person
|
Try to use EXISTS clause instead of IN:
```
SELECT
FacSSN, FacLastName, FacDept
FROM
Faculty
WHERE EXISTS
(SELECT * FROM Offering
WHERE OffTerm='WINTER' AND OffYear=2006 AND FacSSN=Faculty.FacSSN)
AND NOT EXISTS
(SELECT * FROM Offering
WHERE (OffTerm<>'WINTER' OR OffYear<>2006) AND FacSSN=Faculty.FacSSN)
```
There is also no need to use DISTINCT in subqueries.
|
Assuming that `FacSSN` holds the person who is lecturing this should work:
```
SELECT
FacSSN, FacLastName, FacDept
FROM
Faculty
WHERE
FacSSN IN
(SELECT DISTINCT FacSSN FROM Offering WHERE OffTerm = 'WINTER' AND OffYear = 2006)
AND FacSSN NOT IN
(SELECT DISTINCT FacSSN FROM Offering WHERE OffTerm <> 'WINTER' AND OffYear = 2006);
```
|
SQL Nested Select with faculty members
|
[
"",
"sql",
"ms-access",
""
] |
I am having trouble with my logic around this select query.
```
SELECT
ISNULL((SELECT cs_facilities.name from cs_facilities where ci_periodicBillings.facility = cs_facilities.guid),'Unknown') as [Care Centre],
ISNULL((SELECT cs_clients.title + ' ' + cs_clients.forename + ' ' + cs_clients.surname from cs_clients where ci_periodicBillings.client = cs_clients.guid),'Unknown') as [Resident Name],
***CASE WHEN ci_periodicbillings.contributionFunder = '00000000-0000-0000-0000-000000000000' then 'No Funder' ELSE (select ci_contributionFunders.name from ci_contributionFunders where ci_contributionFunders.guid = ci_periodicBillings.contributionFunder) END as [Contribution Funder],***
ISNULL((SELECT cs_clients.ADMISSION from cs_clients where ci_periodicBillings.client = cs_clients.guid),'') as [Admission Date],
ISNULL(ci_periodicbillings.RATE,'') as [Weekly Rate],
CASE WHEN BILLRES = 1 THEN 'Self Payer' ELSE CASE WHEN BILLRES = 2 THEN 'Top up' ELSE 'Other Funder' END END as [Type of Funding],
ISNULL(ci_periodicbillings.LASTBILL,'') as [Billing Period Start Date],
ISNULL(ci_periodicbillings.NEXTBILL,'') as [Billing Period Next Repeat Date],
CASE WHEN invoiceDaysOffset = 22 then 'In Arrears' ELSE CASE WHEN invoicedaysoffset = -6 then 'In Advance' ELSE '' END END as [Advance or Arrears Billing]
from ci_periodicBillings
where facility = '00000000-0000-0000-0000-000000000000'
and ENDBILL = '1900-01-01 00:00:00.000'
order by [Care Centre], [Resident Name]
```
The problematic line, which I'm sure you experts will spot, is the one highlighted with triple stars.
I am hoping that you can see my logic here. within the ci\_periodicbillings table is a column labelled "contributionFunder". This refers to a guid in the table ci\_contributionFunders. I want my case statement to say if this is a null guid (i.e. '00000000-0000-0000-0000-000000000000') then there is no funder, otherwise if the guid matches the column guid in the ci\_periodicbillings table, return me the ci\_contributionfunders.name value which is the name of the funder.
Can anyone help me tidy this up?
|
I don't know what your error message is, nor can I see anything actually wrong with your query, but in reference to tidying it up, there are a few things I would do.
The first is to remove all the correlated subqueries, and use `LEFT JOIN` instead, and the second is to use table aliases rather than repeating the full table names throughout the query. Finally, I would simplify the case statements rather than nesting them:
```
SELECT ISNULL(f.Name, 'Unknown') AS [Care Centre],
ISNULL(c.title + ' ' + c.forename + ' ' + c.surname, 'Unknown') AS [Resident Name],
ISNULL(cf.Name, 'No Funder') AS [Contribution Funder],
ISNULL(c.ADMISSION, '') AS [Admission Date],
ISNULL(pb.RATE,'') AS [Weekly Rate],
CASE pb.BILLRES
WHEN 1 THEN 'Self Payer'
WHEN 2 THEN 'Top up'
ELSE 'Other Funder'
END AS [Type of Funding],
ISNULL(pb.LASTBILL,'') AS [Billing Period Start Date],
ISNULL(pb.NEXTBILL,'') AS [Billing Period Next Repeat Date],
CASE pb.invoiceDaysOffset
WHEN 22 THEN 'In Arrears'
WHEN -6 THEN 'In Advance'
ELSE ''
END AS [Advance or Arrears Billing]
FROM ci_periodicBillings AS pb
LEFT JOIN cs_facilities AS f
ON f.guid = pb.facility
LEFT JOIN cs_clients AS c
ON c.guid = pb.client
LEFT JOIN ci_contributionFunders AS cf
ON cf.guid = pb.contributionFunder
AND pb.contributionFunder != '00000000-0000-0000-0000-000000000000'
WHERE pb.facility = '00000000-0000-0000-0000-000000000000'
AND pb.ENDBILL = '1900-01-01 00:00:00.000';
```
The last thing I would do, but I have deliberately separated this because it is very much subjective, is switch to the `Alias = <Expression>` syntax that SQL Server allows, rather than `<Expression> AS Alias`. I find this much more legible since you immediately see with a scan down the select list which column you are looking at. Aaron Bertrand has written a [good article](https://sqlblog.org/2012/01/23/bad-habits-to-kick-using-as-instead-of-for-column-aliases) that more or less exactly mirrors my feeling on the subject.
```
SELECT [Care Centre] = ISNULL(f.Name, 'Unknown'),
[Resident Name] = ISNULL(c.title + ' ' + c.forename + ' ' + c.surname, 'Unknown'),
[Contribution Funder] = ISNULL(cf.Name, 'No Funder'),
[Admission Date] = ISNULL(c.ADMISSION, ''),
[Weekly Rate] = ISNULL(pb.RATE,''),
[Type of Funding] = CASE pb.BILLRES
WHEN 1 THEN 'Self Payer'
WHEN 2 THEN 'Top up'
ELSE 'Other Funder'
END,
[Billing Period Start Date] = ISNULL(pb.LASTBILL,''),
[Billing Period Next Repeat Date] = ISNULL(pb.NEXTBILL,''),
[Advance or Arrears Billing] = CASE pb.invoiceDaysOffset
WHEN 22 THEN 'In Arrears'
WHEN -6 THEN 'In Advance'
ELSE ''
END
FROM ci_periodicBillings AS pb
LEFT JOIN cs_facilities AS f
ON f.guid = pb.facility
LEFT JOIN cs_clients AS c
ON c.guid = pb.client
LEFT JOIN ci_contributionFunders AS cf
ON cf.guid = pb.contributionFunder
AND pb.contributionFunder != '00000000-0000-0000-0000-000000000000'
WHERE pb.facility = '00000000-0000-0000-0000-000000000000'
AND pb.ENDBILL = '1900-01-01 00:00:00.000';
```
|
Using your syntax, the case statement worked with:
```
CASE WHEN ci_periodicbillings.contributionFunder = '00000000-0000-0000-0000-000000000000' then 'No Funder' when ci_periodicbillings.contributionFunder <> '00000000-0000-0000-0000-000000000000' THEN (select ci_contributionFunders.name from ci_contributionFunders where ci_contributionFunders.guid = ci_periodicBillings.contributionFunder) END as [Contribution Funder],
```
|
Case statement returning a select where query
|
[
"",
"sql",
"select",
"case",
""
] |
Let's imagine I have two tables of orders which I want to compare to each other.
```
orders {
id int,
price money
}
ordereditems {
orderid int,
itemid int,
price money
}
```
where **orders.id = ordereditems.orderid**
Naturally, this is a bad design since both tables don't need a price. However, how can I design a query to find the rows in orders whose price mismatches the sum of the price column in ordereditems?
|
If I understand right:
```
SELECT * FROM orders o
LEFT JOIN OrderedItems ei
ON o.Price = ei.Price WHERE ei.Price is NULL
```
|
You can also use the below query to get order id's with mismatched price
```
SELECT o.id,MAX(o.price) AS price
FROM orders o
INNER JOIN ordereditems oi
ON o.id=oi.orderid
GROUP BY o.id
HAVING SUM(oi.price)<>MAX(o.price)
```
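The `GROUP BY` / `HAVING` approach can be checked with a tiny SQLite sketch (sample values invented for illustration): an order whose stored price disagrees with the sum of its item prices is flagged.

```python
import sqlite3

# Flag orders whose price differs from the sum of their item prices.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, price NUMERIC);
    CREATE TABLE ordereditems (orderid INTEGER, itemid INTEGER, price NUMERIC);
    INSERT INTO orders VALUES (1, 30), (2, 50);
    INSERT INTO ordereditems VALUES (1, 10, 10), (1, 11, 20),  -- sums to 30: matches
                                    (2, 12, 45);               -- sums to 45: mismatch
""")

mismatched = conn.execute("""
    SELECT o.id, MAX(o.price) AS price
    FROM orders o
    JOIN ordereditems oi ON o.id = oi.orderid
    GROUP BY o.id
    HAVING SUM(oi.price) <> MAX(o.price)
""").fetchall()

print(mismatched)  # [(2, 50)]
```

`MAX(o.price)` is just a way to reference the (single) order price inside an aggregate query; any aggregate that returns the same value would do.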
|
Compare value of a row with multiple rows in another table
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
Say I have three tables with the following data in them:
```
CREATE TABLE movies (
movie_id INT,
movie_name VARCHAR(255),
PRIMARY KEY (movie_id)
);
CREATE TABLE movie_ratings (
movie_rating_id INT,
movie_id INT,
rating_value TINYINT,
PRIMARY KEY (movie_rating_id),
KEY movie_id (movie_id)
);
CREATE TABLE movie_actors (
movie_actor_id INT,
movie_id INT,
actor_id INT,
PRIMARY KEY (movie_actor_id),
KEY movie_id (movie_id)
);
INSERT INTO movies VALUES (1, 'Titanic'),(2,'Star Trek');
INSERT INTO movie_ratings VALUES (1,1,5),(2,1,4),(3,1,5);
INSERT INTO movie_actors VALUES (1,1,2),(2,2,2);
```
If I wanted to get the average rating and number of actors for each movie, I could do this using `JOINs`:
```
SELECT m.movie_name, AVG(rating_value) AS avgRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN movie_ratings r ON m.movie_id = r.movie_id
LEFT JOIN movie_actors a ON m.movie_id = a.movie_id
GROUP BY m.movie_id;
```
Let's call that query A. Query A can be rewritten with `USING` thusly:
```
SELECT m.movie_name, AVG(rating_value) AS avgRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN movie_ratings r USING (movie_id)
LEFT JOIN movie_actors a USING (movie_id)
GROUP BY m.movie_id;
```
Let's call that query B.
Both of those queries return 1 as numActors for the movie 'Star Trek'. So let's modify that query a bit:
```
SELECT m.movie_name, AVG(rating_value) AS avgRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN movie_ratings r ON m.movie_id = r.movie_id
LEFT JOIN movie_actors a ON r.movie_id = a.movie_id
GROUP BY m.movie_id;
```
Let's call this query C. Instead of doing `m.movie_id = a.movie_id` I'm now doing `r.movie_id = a.movie_id`. For query C numActors is 0.
My questions are:
1. How can I write query C using `USING`? Can I?
2. Is `USING` essentially doing an `ON` with the current table and the table mentioned in `FROM`?
3. If the answer to #2 is yes then what does `USING` do when an implicit `JOIN` is used and multiple tables are in the `FROM`?
|
## 1. Can C be rewritten using USING?
Yes, you can, using a nested join:
```
SELECT m.movie_name, AVG(rating_value) AS avgRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN (
movie_ratings r
LEFT JOIN movie_actors a USING (movie_id)
) USING (movie_id)
GROUP BY m.movie_id
```
## 2. Is USING essentially doing an ON with the current table and the table mentioned in FROM?
No. [MySQL Documentation](http://dev.mysql.com/doc/refman/5.7/en/join.html) says:
> The evaluation of multi-way natural joins differs in a very important way that affects the result of NATURAL or USING joins and that can require query rewriting. Suppose that you have three tables t1(a,b), t2(c,b), and t3(a,c) that each have one row: t1(1,2), t2(10,2), and t3(7,10). Suppose also that you have this NATURAL JOIN on the three tables:
>
> SELECT ... FROM t1 NATURAL JOIN t2 NATURAL JOIN t3;
>
> Previously, the left operand of the second join was considered to be t2, whereas it should be the nested join (t1 NATURAL JOIN t2). As a result, the columns of t3 are checked for common columns only in t2, and, if t3 has common columns with t1, these columns are not used as equi-join columns. Thus, previously, the preceding query was transformed to the following equi-join:
>
> SELECT ... FROM t1, t2, t3
> WHERE t1.b = t2.b AND t2.c = t3.c;
So basically, in older versions of MySQL your query B was not the same as query A, but as query C!
## 3. What does USING do when an implicit JOIN is used and multiple tables are in the FROM?
Again, citing the [MySQL Documentation](http://dev.mysql.com/doc/refman/5.7/en/join.html):
> Previously, the comma operator (,) and JOIN both had the same precedence, so the join expression t1, t2 JOIN t3 was interpreted as ((t1, t2) JOIN t3). Now JOIN has higher precedence, so the expression is interpreted as (t1, (t2 JOIN t3)). This change affects statements that use an ON clause, because that clause can refer only to columns in the operands of the join, and the change in precedence changes interpretation of what those operands are.
It's all about join-order and precedence. So basically `t1, t2 JOIN t3 USING (x)` would do `t2 JOIN t3 USING(x)` first and join that with `t1`.
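The difference between queries A and C can be reproduced with SQLite (a sketch, not MySQL itself, but SQLite joins behave the same way here): for 'Star Trek', which has actors but no ratings, joining the actors through `movie_ratings` loses the match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE movies (movie_id INTEGER PRIMARY KEY, movie_name TEXT);
    CREATE TABLE movie_ratings (movie_rating_id INTEGER, movie_id INTEGER, rating_value INTEGER);
    CREATE TABLE movie_actors (movie_actor_id INTEGER, movie_id INTEGER, actor_id INTEGER);
    INSERT INTO movies VALUES (1, 'Titanic'), (2, 'Star Trek');
    INSERT INTO movie_ratings VALUES (1, 1, 5), (2, 1, 4), (3, 1, 5);
    INSERT INTO movie_actors VALUES (1, 1, 2), (2, 2, 2);
""")

def num_actors(join_on):
    # Count actors for 'Star Trek' with a configurable actor-join condition.
    row = conn.execute(f"""
        SELECT COUNT(a.actor_id)
        FROM movies m
        LEFT JOIN movie_ratings r ON m.movie_id = r.movie_id
        LEFT JOIN movie_actors a ON {join_on}
        WHERE m.movie_name = 'Star Trek'
        GROUP BY m.movie_id
    """).fetchone()
    return row[0]

print(num_actors("m.movie_id = a.movie_id"))  # query A: 1
print(num_actors("r.movie_id = a.movie_id"))  # query C: 0 (r.movie_id is NULL)
```

Because 'Star Trek' has no ratings, `r.movie_id` is NULL after the first LEFT JOIN, so the `r.movie_id = a.movie_id` condition in query C never matches.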
|
If the column name is the same in both tables then yes, you can use `USING()`.
In other words, this:
```
SELECT movie_name, AVG(rating_value) AS averageRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN movie_ratings r ON m.movie_id = r.movie_id
LEFT JOIN movie_actors a ON m.movie_id = a.movie_id
GROUP BY m.movie_id;
```
Is the same as:
```
SELECT movie_name, AVG(rating_value) AS averageRating, COUNT(actor_id) AS numActors
FROM movies m
LEFT JOIN movie_ratings USING (movie_id)
LEFT JOIN movie_actors USING (movie_id)
GROUP BY movie_id;
```
As far as ambiguity goes, there won't be any here. The tables are joined where the movie\_id values are equal. In your select statement, you are pulling movie\_name, which only exists in one table.
However, if you said this:
```
SELECT movie_id, AVG(rating_value) AS averageRating, COUNT(actor_id) AS numActors
```
MySQL will say there is an error because movie\_id cannot be resolved, as it is ambiguous. To fix this ambiguity, you'd just have to make sure you used a table alias or name when selecting movie\_id.
This is a valid select statement:
```
SELECT m.movie_id, AVG(rating_value) AS averageRating, COUNT(actor_id) AS numActors
```
No error would be thrown for this.
I would like to comment that I foresee some danger here. If you left join movies with all of these tables, you could potentially receive null values. If movie\_id 1 does not have any ratings, your AVG(rating\_value) will return null. You won't have this problem for COUNT(actor\_id) as this will just return 0. I don't know if this bothers you, but be aware that that column could return null.
I built the sample tables in MySQL workbench, and I'm unable to get SQL Fiddle to work to show you, but if you would like to see the data I've created let me know and I will edit the question.
|
Difference between USING and ON when joining more than two tables
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a very small test application in which I'm trying to install a Windows Service and create a LocalDB database during the install process, then connect to that LocalDB database when the Windows Service runs.
I am running into huge problems connecting to a LocalDB instance from my Windows Service.
My installation process is exactly like this:
1. Execute an installer .msi file which runs the msiexec process as the NT AUTHORITY\SYSTEM account.
2. Run a custom action to execute SqlLocalDB.exe with the following commands:
* sqllocaldb.exe create MYINSTANCE
* sqllocaldb.exe share MYINSTANCE MYINSTANCESHARE
* sqllocaldb.exe start MYINSTANCE
3. Run a custom C# action using ADO.NET (System.Data.SqlConnection) to perform the following actions:
* Connect to the following connection string, `Data Source=(localdb)\MYINSTANCE; Integrated Security=true`
* CREATE DATABASE TestDB
* USE TestDB
* CREATE TABLE ...
4. Start the Windows Service before the installer finishes.
5. The Windows Service is installed to the LocalSystem account and so also runs as the NT AUTHORITY\SYSTEM user account.
6. The service attempts to connect using the same connection string used above.
I am consistently getting the following error when trying to open the connection to the above connection string from within the Windows Service:
> System.Data.SqlClient.SqlException (0x80131904): A network-related or
> instance-specific error occurred while establishing a connection to
> SQL Server. The server was not found or was not accessible. Verify
> that the instance name is correct and that SQL Server is configured to
> allow remote connections. (provider: SQL Network Interfaces, error: 50
> - Local Database Runtime error occurred. The specified LocalDB instance does not exist.
This is frustrating because both the msi installer custom action and the Windows Service are running under the same Windows user account (I checked, they're both NT AUTHORITY\System). So why the first works and the second does not is beyond me.
I have tried changing the connection strings used in the custom action and the Windows Service to use the share name `(localdb)\.\MYINSTANCESHARE` and I get the exact same error from the Windows Service.
I have tried changing the user account that the Windows Service logs on as to my Windows user account, which does work as long as I first run a command to add it to the SQL server logins for that instance.
I've also tried running a console application and connecting to the share name connection string and that works as well.
I've also tried connecting to the share name from SQL Server Management Studio and that works as well.
However none of these methods really solve my problem. I need a Windows Service because it starts up as soon as the computer starts up (even if no user logs on) and starts up no matter which user account is logged in.
**How does a Windows Service connect to a LocalDB private instance?**
I am using SQL Server 2014 Express LocalDB.
|
I was able to solve similar issue in our WiX installer recently. We have a Windows service, running under SYSTEM account, and an installer, where LocalDB-based storage is one of the options for database configuration. For some time (a couple of years actually) product upgrades and service worked quite fine, with no issues related to LocalDB. We are using default v11.0 instance, which is created in SYSTEM profile in C:\Windows\System32\config tree, and a database specified via AttachDbFileName, created in ALLUSERSPROFILE tree. DB provider is configured to use Windows authentication. We also have a custom action in installer, scheduled as deferred/non-impersonate, which runs DB schema updates.
All this worked fine until recently. After another bunch of DB updates, our new release started to fail after having upgraded over the former - service was unable to start, reporting infamous "A network-related or instance-specific error occurred while establishing a connection to SQL Server" (error 50) fault.
When investigating this issue, it became apparent that the problem is in the way WiX runs custom actions. Although non-impersonated CAs run under the SYSTEM account, the registry profile and environment remain those of the current user (I suspect WiX loads these voluntarily when attaching to the user's session). This leads to an incorrect path being expanded from the LOCALAPPDATA variable - the service receives the SYSTEM profile one, but the schema update CA works with the user's one.
So here are two possible solutions. The first one is simple, but too intrusive to the user's system - with cmd.exe started via psexec, recreate the broken instance under the SYSTEM account. This was not an option for us, as the user may have other databases created in the v11.0 instance, which is public. The second option required lots of refactoring, but wouldn't hurt anything. Here is what to do to run DB schema updates properly with LocalDB in a WiX CA:
1. Configure your CA as deferred/non-impersonate (should run under SYSTEM account);
2. Fix environment to point to SYSTEM profile paths:
```
var systemRoot = Environment.GetEnvironmentVariable("SystemRoot");
Environment.SetEnvironmentVariable("USERPROFILE", String.Format(@"{0}\System32\config\systemprofile", systemRoot));
Environment.SetEnvironmentVariable("APPDATA", String.Format(@"{0}\System32\config\systemprofile\AppData\Roaming", systemRoot));
Environment.SetEnvironmentVariable("LOCALAPPDATA", String.Format(@"{0}\System32\config\systemprofile\AppData\Local", systemRoot));
Environment.SetEnvironmentVariable("HOMEPATH", String.Empty);
Environment.SetEnvironmentVariable("USERNAME", Environment.UserName);
```
3. Load SYSTEM account profile. I used `LogonUser`/`LoadUserProfile` native API methods, as following:
```
[DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool LogonUser(
string lpszUserName,
string lpszDomain,
string lpszPassword,
int dwLogonType,
int dwLogonProvider,
ref IntPtr phToken);
[StructLayout(LayoutKind.Sequential)]
struct PROFILEINFO
{
public int dwSize;
public int dwFlags;
[MarshalAs(UnmanagedType.LPWStr)]
public String lpUserName;
[MarshalAs(UnmanagedType.LPWStr)]
public String lpProfilePath;
[MarshalAs(UnmanagedType.LPWStr)]
public String lpDefaultPath;
[MarshalAs(UnmanagedType.LPWStr)]
public String lpServerName;
[MarshalAs(UnmanagedType.LPWStr)]
public String lpPolicyPath;
public IntPtr hProfile;
}
[DllImport("userenv.dll", SetLastError = true, CharSet = CharSet.Unicode)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool LoadUserProfile(IntPtr hToken, ref PROFILEINFO lpProfileInfo);
var hToken = IntPtr.Zero;
var hProfile = IntPtr.Zero;
bool result = LogonUser("SYSTEM", "NT AUTHORITY", String.Empty, 3 /* LOGON32_LOGON_SERVICE */, 0 /* LOGON32_PROVIDER_DEFAULT */, ref hToken);
if (result)
{
var profileInfo = new PROFILEINFO();
profileInfo.dwSize = Marshal.SizeOf(profileInfo);
profileInfo.lpUserName = @"NT AUTHORITY\SYSTEM";
    if (LoadUserProfile(hToken, ref profileInfo))
hProfile = profileInfo.hProfile;
}
```
Wrap this in an `IDisposable` class, and use with a `using` statement to build a context.
4. The most important - refactor your code to perform necessary DB updates in a child process. This could be a simple exe-wrapper over your installer DLL, or stand-alone utility, if your already have one.
P.S. All these difficulties could be avoided if only Microsoft let us choose where to create LocalDB instances, via a command-line option - like Postgres' initdb/pg\_ctl utilities have, for example.
|
Picking up from the comments on the question, here are some areas to look at. Some of these have already been answered in those comments, but I am documenting here for others in case the info might be helpful.
* Check here for a great source of info on SQL Server Express LocalDB:
+ [SQL Server 2014 Express LocalDB](http://msdn.microsoft.com/en-us/library/hh510202.aspx)
+ [SqlClient Support for LocalDB](http://msdn.microsoft.com/en-us/library/hh309441.aspx)
+ [SqlLocalDB Utility](http://msdn.microsoft.com/en-us/library/hh212961.aspx)
+ [Introducing LocalDB, an improved SQL Express](http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx) (also look at the Q&A section at the end of the main post, just before the comments, as someone asked if LocalDB can be launched from a service, and the answer is:
> LocalDB can be launched from a service, as long as the profile is loaded for the service account.
* What version of .Net is being used? Here it is 4.5.1 (good) but earlier versions could not handle the preferred connection string (i.e. `@"(localdb)\InstanceName"`). The following quote is taken from the link noted above:
> If your application uses a version of .NET before 4.0.2 you must connect directly to the named pipe of the LocalDB.
And according to the MSDN page for [SqlConnection.ConnectionString](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring.aspx):
> Beginning in .NET Framework 4.5, you can also connect to a LocalDB database as follows:
>
> `server=(localdb)\\myInstance`
* Paths:
+ Instances: **C:\Users\{Windows Login}\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances**
+ Databases:
	- Created via SSMS or direct connection: **C:\Users\{Windows Login}\Documents** or **C:\Users\{Windows Login}**
	- Created via Visual Studio: **C:\Users\{Windows Login}\AppData\Local\Microsoft\VisualStudio\SSDT**
* **Initial Problem**
+ Symptoms:
- Database files (`.mdf` and `.ldf`) created in the expected location:
**C:\Windows\System32\config\systemprofile**
- Instance files created in an unexpected location:
**C:\Users\{current user}\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances**
+ Cause (note taken from "SqlLocalDB Utility" MSDN page that is linked above; emphasis mine):
> Operations other than start can only be performed on an instance belonging to **currently logged in user**.
* Things to try:
+ Connection string that specifies the database (though maybe a long-shot if the error is regarding not being able to connect to the instance):
+ `"Server=(LocalDB)\MYINSTANCE; Integrated Security=true ;AttachDbFileName=C:\Windows\System32\config\systemprofile\TestDB.mdf"`
+ `"Server=(LocalDB)\.\MYINSTANCESHARE; Integrated Security=true ;AttachDbFileName=C:\Windows\System32\config\systemprofile\TestDB.mdf"`
+ Is the service running? Run the following from a Command Prompt:
`TASKLIST /FI "IMAGENAME eq sqlservr.exe"`
It should probably be listed under "Console" for the "Session Name" column
+ Run the following from a Command Prompt:
`sqllocaldb.exe info MYINSTANCE`
And verify that the value for "Owner" is correct. Is the value for "Shared name" what it should be? If not, the documentation states:
> Only an administrator on the computer can create a shared instance of LocalDB
+ As part of the setup, add the `NT AUTHORITY\System` account as a Login to the system, which is required if this account is not showing as the "Owner" of the instance:
`CREATE LOGIN [NT AUTHORITY\System] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\System];`
+ Check the following file for clues / details:
**C:\Users\{Windows Login}\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MYINSTANCE\error.log**
+ In the end you might need to create an actual account to create and own the Instance and Database, as well as run your service. LocalDB really is meant to be user-mode, and is there any downside to having your service have its own login? And you probably wouldn't need to share the instance at that point.
And in fact, as noted by Microsoft on the [SQL Server YYYY Express LocalDB](https://msdn.microsoft.com/en-us/library/hh510202.aspx) MSDN page:
> An instance of **LocalDB** owned by the built-in accounts such as NT AUTHORITY\SYSTEM can have manageability issues due to windows file system redirection; Instead use a normal windows account as the owner.
## UPDATE (2015-08-21)
Based on feedback from the O.P. that using a regular User account can be problematic in certain environments, AND keeping in mind the original issue of the LocalDB instance being created in the `%LOCALAPPDATA%` folder for the user running the installer (and *not* the `%LOCALAPPDATA%` folder for **NT AUTHORITY\System**), I found a solution that seems to keep with the intent of easy installation (no user to create) and should not require extra code to load the SYSTEM profile.
Try using one of the two built-in accounts that is *not* the [LocalSystem](https://msdn.microsoft.com/en-us/library/windows/desktop/ms684190.aspx) account (which does not maintain its own registry info). Use either:
* [NT AUTHORITY\LocalService](https://msdn.microsoft.com/en-us/library/windows/desktop/ms684188.aspx)
* [NT AUTHORITY\NetworkService](https://msdn.microsoft.com/en-us/library/windows/desktop/ms684272.aspx)
Both have their profile folders in: **C:\Windows\ServiceProfiles**
While I have not been able to test via an installer, I did test a service logging on as **NT AUTHORITY\NetworkService** by setting my SQL Server Express 2014 instance to log on as this account, and restarted the SQL Server service. I then ran the following:
```
EXEC xp_cmdshell 'sqllocaldb c MyTestInstance -s';
```
and it created the instance in: **C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances**
I then ran the following:
```
EXEC xp_cmdshell N'SQLCMD -S (localdb)\MyTestInstance -E -Q "CREATE DATABASE [MyTestDB];"';
```
and it had created the database in: **C:\Windows\ServiceProfiles\NetworkService**
|
Using SQL LocalDB in a Windows Service
|
[
"",
"sql",
"sql-server",
"localdb",
""
] |
The following query gives me an error in phpmyadmin. It looks syntactically correct to me, and the table/column names match up accordingly. I have tried a number of variations (quoting table names, using as, etc) with no luck.
```
SELECT *
FROM GROUP
INNER JOIN GROUP_MEMBER ON GROUP.group_id = GROUP_MEMBER.group_id
WHERE group_owner='test';
```
Error I'm getting:
# 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'GROUP INNER JOIN GROUP\_MEMBER ON GROUP.group\_id = GROUP\_MEMBER.group\_id WHERE ' at line 2
|
"group" is a sql-keyword, you need to surround it with backticks if you want to use it as tablename.
|
`GROUP` is a reserved word in SQL so it's a bad choice for a table name. If you surround it with backticks it might work but I'd really recommend changing that table name.
```
SELECT *
FROM `GROUP`
INNER JOIN GROUP_MEMBER ON `GROUP`.group_id = GROUP_MEMBER.group_id
WHERE group_owner='test';
```
|
MySQL join error
|
[
"",
"mysql",
"sql",
"join",
""
] |
```
CREATE TABLE EMPLOYEE
( Fname VARCHAR(15) NOT NULL,
Minit CHAR,
Lname VARCHAR(15) NOT NULL,
Ssn CHAR(9) NOT NULL,
Bdate DATE,
Address VARCHAR(30),
Sex CHAR,
Salary DECIMAL(10,2),
Super_ssn CHAR(9),
Dno INT NOT NULL,
PRIMARY KEY (Ssn),
FOREIGN KEY (Super_ssn) REFERENCES EMPLOYEE(Ssn),
FOREIGN KEY (Dno) REFERENCES DEPARTMENT(Dnumber) );
CREATE TABLE DEPARTMENT
( Dname VARCHAR(15) NOT NULL,
Dnumber INT NOT NULL,
Mgr_ssn CHAR(9) NOT NULL,
Mgr_start_date DATE,
PRIMARY KEY (Dnumber),
UNIQUE (Dname),
FOREIGN KEY (Mgr_ssn) REFERENCES EMPLOYEE(Ssn) );
CREATE TABLE DEPT_LOCATIONS
( Dnumber INT NOT NULL,
Dlocation VARCHAR(15) NOT NULL,
PRIMARY KEY (Dnumber, Dlocation),
FOREIGN KEY (Dnumber) REFERENCES DEPARTMENT(Dnumber) );
CREATE TABLE PROJECT
( Pname VARCHAR(15) NOT NULL,
Pnumber INT NOT NULL,
Plocation VARCHAR(15),
Dnum INT NOT NULL,
PRIMARY KEY (Pnumber),
UNIQUE (Pname),
FOREIGN KEY (Dnum) REFERENCES DEPARTMENT(Dnumber) );
CREATE TABLE WORKS_ON
( Essn CHAR(9) NOT NULL,
Pno INT NOT NULL,
Hours DECIMAL(3,1) NOT NULL,
PRIMARY KEY (Essn, Pno),
FOREIGN KEY (Essn) REFERENCES EMPLOYEE(Ssn),
FOREIGN KEY (Pno) REFERENCES PROJECT(Pnumber) );
CREATE TABLE DEPENDENT
( Essn CHAR(9) NOT NULL,
Dependent_name VARCHAR(15) NOT NULL,
Sex CHAR,
Bdate DATE,
Relationship VARCHAR(8),
PRIMARY KEY (Essn, Dependent_name),
FOREIGN KEY (Essn) REFERENCES EMPLOYEE(Ssn) );
```
I am not able to create any of these tables because each table references another table with a foreign key constraint. How can I create the whole database? For example, creating the EMPLOYEE table fails because it references the DEPARTMENT table, which does not exist yet; creating DEPARTMENT first fails for the same reason, because it references EMPLOYEE.
|
You can create and populate the tables without the foreign key constraints, and add the constraints afterwards.
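The same idea can be sketched with Python's `sqlite3` (SQLite, not MySQL; schema trimmed to just the circular EMPLOYEE/DEPARTMENT pair): load the mutually-referencing rows while foreign-key enforcement is off, then switch enforcement on and verify the data is consistent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys = OFF;
    CREATE TABLE DEPARTMENT (
        Dnumber INTEGER PRIMARY KEY,
        Mgr_ssn TEXT NOT NULL REFERENCES EMPLOYEE(Ssn)
    );
    CREATE TABLE EMPLOYEE (
        Ssn TEXT PRIMARY KEY,
        Dno INTEGER NOT NULL REFERENCES DEPARTMENT(Dnumber)
    );
    -- Chicken-and-egg rows: each row references the other table.
    INSERT INTO DEPARTMENT VALUES (5, '123456789');
    INSERT INTO EMPLOYEE VALUES ('123456789', 5);
""")

conn.execute("PRAGMA foreign_keys = ON")
violations = conn.execute("PRAGMA foreign_key_check").fetchall()
print(violations)  # [] -- the circular data is now consistent
```

In MySQL the equivalent trick is `SET FOREIGN_KEY_CHECKS = 0;` before the bulk load and `= 1` after, or the `ALTER TABLE ... ADD CONSTRAINT` approach from the other answer.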
|
Just add the foreign keys separately when the tables have been created, e.g.
```
ALTER TABLE EMPLOYEE ADD CONSTRAINT FK_EMPLOYEE_DEPARTMENT
FOREIGN KEY (Dno) REFERENCES DEPARTMENT(Dnumber);
```
|
Creation of a database
|
[
"",
"mysql",
"sql",
"database",
"foreign-keys",
""
] |
I know how to perform an IF/ELSE in the join conditions, but is there a way I can use a different evaluation of the field altogether based on the ELSE?
```
SELECT o.id, s.id
FROM orders o
RIGHT JOIN subjects s ON s.name = 'Blah'
AND s.group_id = IF(o.service = 'Law Services', 2, ????)
```
In the above, I don't know what to put at `????`
What I want to say is (pseudo code):
```
IF o.service = 'Law Services'
JOIN subjects s ON s.name = 'Blah' AND s.group_id = 2
ELSE
JOIN subjects s ON s.name = 'Blah'
```
Basically I only want to add the JOIN condition if `s.group_id = 2`.
|
```
SELECT o.id, s.id
FROM orders o
RIGHT JOIN subjects s ON s.name = 'Blah'
AND s.group_id = IF(o.service = 'Law Services', 2, s.group_id)
```
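The trick here is comparing `s.group_id` to itself in the ELSE branch, which always matches. A small sketch with Python's `sqlite3` (SQLite has no `IF()`, so the equivalent `CASE` expression is used, and the RIGHT JOIN is simplified to an inner join; data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, service TEXT);
    CREATE TABLE subjects (id INTEGER, name TEXT, group_id INTEGER);
    INSERT INTO orders VALUES (1, 'Law Services'), (2, 'Other');
    INSERT INTO subjects VALUES (10, 'Blah', 2), (11, 'Blah', 3);
""")

rows = conn.execute("""
    SELECT o.id, s.id
    FROM orders o
    JOIN subjects s ON s.name = 'Blah'
    AND s.group_id = CASE WHEN o.service = 'Law Services'
                          THEN 2 ELSE s.group_id END
    ORDER BY o.id, s.id
""").fetchall()

print(rows)  # order 1 matches only group 2; order 2 matches both subjects
```

Note the self-comparison only "always matches" when `group_id` is not NULL; NULL rows would be dropped, since `NULL = NULL` is not true in SQL.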
|
You can write the condition without an `if`:
```
SELECT o.id, s.id
FROM orders o RIGHT JOIN
subjects s
ON s.name = 'Blah' AND
(o.service <> 'Law Services' or s.group_id = 2);
```
|
SQL - How to do an IF/ELSE in JOIN conditions
|
[
"",
"mysql",
"sql",
"join",
""
] |
I have a table of episodes that looks like this:

I want to take the highest `season` and then the highest `episode` from that season, for a given `series` - say `series` `1` (WHERE series = 1). The output will be:
```
id | episode | season | series
8 3 1 1
```
How can I do that?
|
Try something like this:
```
SELECT * FROM tablename WHERE series = 1 ORDER BY season DESC, episode DESC LIMIT 1;
```
Another possibility would be to "SELECT MAX(...)..." and use it in the WHERE clause.
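A quick check with Python's `sqlite3` (same idea in SQLite; sample rows approximate the question's table) shows that `ORDER BY season DESC, episode DESC LIMIT 1` picks the latest episode of the latest season within one series:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE episodes (id INTEGER, episode INTEGER, season INTEGER, series INTEGER)")
conn.executemany(
    "INSERT INTO episodes VALUES (?, ?, ?, ?)",
    [(6, 1, 1, 1), (7, 2, 1, 1), (8, 3, 1, 1), (9, 1, 1, 2)],
)

latest = conn.execute("""
    SELECT id, episode, season, series
    FROM episodes
    WHERE series = 1
    ORDER BY season DESC, episode DESC
    LIMIT 1
""").fetchone()
print(latest)  # (8, 3, 1, 1)
```

Sorting by season first and episode second matters: an episode 9 of season 1 must not outrank episode 1 of season 2.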
|
```
SELECT TOP 1 *
FROM testTable
WHERE series= 1 and (season in (select MAX(season) from testTable))
ORDER BY episode DESC
```
|
Select highest season and episode (mysql)
|
[
"",
"mysql",
"sql",
"greatest-n-per-group",
""
] |
I am trying to attach the `AdventureWorks2012_Data.mdf` database file to my SQL Server 2012 database.
I am getting this error:
> TITLE: Microsoft SQL Server Management Studio
>
> Attach database failed for Server 'USER-PC'. (Microsoft.SqlServer.Smo)
>
> For help, click: <http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.3000.0+((SQL11_PCU_Main).121019-1322+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Attach+database+Server&LinkId=20476>
>
> ADDITIONAL INFORMATION:
>
> An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
>
> Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012\_log.ldf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". (Microsoft SQL Server, Error: 5120)
The message suggests that the `.log` file is missing, and indeed I don't have it. But is it necessary to have it? Or is something else wrong? How can I solve this?
|
This is a common error encountered when there is no `*.LDF` log file for a database, as with the `AdventureWorks` database. SQL Server complains about the missing `*.LDF` log file (normally associated with the database's `*.MDF` file).
Execute the following T-SQL query to attach the database considering only the `*.MDF` file:
```
USE [master]
GO
CREATE DATABASE [AdventureWorksDW2012] ON
( FILENAME = N'C:\Users\User\Desktop\ARPAD\AdventureWorks2012_Data.mdf' )
FOR ATTACH
GO
```
or remove the connection to the `*.LDF` log file (a new `*.LDF` file will be created for the database) through SQL Server Management Studio as shown in the screenshot below:

**UPDATE**
You get the error:
```
The database 'AdventureWorksDW2012' cannot be opened because it is version 706. This server supports version 661 and earlier. A downgrade path is not supported. Could not open new database 'AdventureWorksDW2012'. CREATE DATABASE is aborted. (Microsoft SQL Server, Error: 948)
```
because you're trying to attach a SQL Server 2012 (version 706) database file to a SQL Server 2008 instance (version 661). For this reason, you [cannot perform this downgrade](https://stackoverflow.com/questions/26103477/downgrade-database-from-sql-server-2012-to-2008/26104462#26104462). [Download the AdventureWorks2008](http://msftdbprodsamples.codeplex.com/downloads/get/478218) database for your SQL Server 2008 instance instead, or upgrade your SQL Server instance to the latest version.
The version of your SQL Server instance is indeed SQL Server 2008: `Microsoft SQL Server 2008 R2 (RTM) - 10.50.1617.0 (X64) Apr 22 2011 19:23:43 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)`.
[Download and install SQL Server 2012 Express](http://www.microsoft.com/en-us/download/details.aspx?id=29062) to solve the problem.
|
The key is this line (edited for clarity):
```
Unable to open the physical file
"C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_log.ldf".
Operating system error 2:
"2(failed to retrieve text for this error. Reason: 15105)".
(Microsoft SQL Server, Error: 5120)
```
According to [this page](http://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx), operating system error 2 is "File Not Found". Based on this, I would guess that either (a) the file is not there, or (b) the name or path is misspelled.
|
Can't attach database to SQL Server 2012 reason not found .log
|
[
"",
"sql",
".net",
"sql-server",
"sql-server-2012",
""
] |
I am working with a PostgreSQL database. I have a column that contains rows that look like this:

I want to replace the 'PC' text with '00000' and then remove the period and all text after it.
So for example, row 5 should look like 0000055000 after the transformation.
I was able to the 'PC' with the overlay function. So my current query looks like this:
```
select set_name, overlay(set_name placing '00000' from 1 for 2)
FROM src.sap_setnode
WHERE set_name LIKE 'PC%'
```
From there how can I remove the period and everything after it?
Thanks
|
```
SELECT SPLIT_PART(REPLACE(set_name ,SUBSTR(set_name ,1,2),'00000'),'.',1) FROM t WHERE set_name LIKE 'PC%' OR set_name NOT LIKE 'PC-%' OR set_name NOT LIKE 'PCA%'
```
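For clarity, here is the intended transformation expressed as a plain Python sketch (function name and sample values are illustrative, not from the question's data): replace a leading 'PC' with '00000', then keep only the text before the first period.

```python
def normalize_set_name(set_name: str) -> str:
    """Replace a leading 'PC' with '00000' and drop the first '.' onward."""
    if set_name.startswith("PC"):
        set_name = "00000" + set_name[2:]
    return set_name.split(".", 1)[0]

print(normalize_set_name("PC55000.EXTRA"))  # 0000055000
```

This mirrors what the SQL does: the overlay/replace handles the prefix, and split\_part (or SPLIT\_PART) with '.' as the delimiter and index 1 keeps everything before the first period.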
|
I actually found the answer - if anyone has a similar question. The split\_part function can be used to split the field using '.' as a delimiter and then grabbing the first part:
```
SELECT split_part(overlay(set_name placing '00000' from 1 for 2), '.', 1)
FROM src.sap_setnode
```
|
Query to remove character and text from field
|
[
"",
"sql",
"postgresql",
"replace",
""
] |
I am trying to aggregate data from one table into another. I've inherited this project; I did not design this database nor will I be able to change its format.
The [RawData] table will have 1 record per account, per ChannelCodeID. This table (where I currently have data) has the following fields:
```
[Account] int
[ChannelCodeID] int
[ChannelCode] varchar(10)
```
---
The [AggregatedData] table will have 1 record per account. This table (into which I need to insert data) has the following fields:
```
[Account] int
[Count] int
[Channel1] int
[Channel2] int
[Channel3] int
[Names] varchar(250)
```
---
For example, I might have the following records in my [RawData] table:
```
Account ChannelCodeID ChannelCode
12345 2 ABC
12345 4 DEF
12345 6 GHI
54321 2 ABC
54321 6 GHI
99999 2 ABC
```
And, after aggregating them, I would need to produce the following records in my [AggregatedData] table:
```
Account    Count    Channel1    Channel2    Channel3    Names
12345      3        2           4           6           ABC.DEF.GHI
54321      2        2           6           0           ABC.GHI
99999      1        2           0           0           ABC
```
As you can see, Count is how many records exist in my [RawData] table for that account, Channel1 is the first ChannelCodeID, Channel2 is the second, and Channel3 is the third. If there are not enough ChannelCodeIDs in my [RawData] table, the extra Channel columns get a '0' value. Furthermore, I need to concatenate the 'ChannelCode' column and store it in the 'Names' column of the [AggregatedData] table, but (obviously) if there is only one record, I don't want to add the '.'
I can't figure out how to do this without using a cursor and a bunch of variables - but I'm guessing there HAS to be a better way. This doesn't have to be super-fast since it will only run once a month, but it will have to process at least 10-15,000 records each time.
Thanks in advance...
**EDIT:**
ChannelCodes and ChannelCodeIDs map directly to each other and are always the same. For example, ChannelCodeID 2 is ALWAYS 'ABC'
Also, in the [AggregatedData] table, Channel1 is ALWAYS the lowest value, although this is incidental.
|
## Test Data
```
DECLARE @TABLE TABLE (Account INT, ChannelCodeID INT, ChannelCode VARCHAR(10))
INSERT INTO @TABLE VALUES
(12345 ,2 ,'ABC'),
(12345 ,4 ,'DEF'),
(12345 ,6 ,'GHI'),
(54321 ,2 ,'ABC'),
(54321 ,6 ,'GHI'),
(99999 ,2 ,'ABC')
```
## Query
```
SELECT Account
,[Count]
,ISNULL([Channel1], 0) AS [Channel1]
,ISNULL([Channel2], 0) AS [Channel2]
,ISNULL([Channel3], 0) AS [Channel3]
,Names
FROM
(
SELECT t.Account, T.ChannelCodeID, C.[Count]
,'Channel' + CAST(ROW_NUMBER() OVER
(PARTITION BY t.Account ORDER BY t.ChannelCodeID ASC) AS VARCHAR(10))Channels
,STUFF((SELECT '.' + ChannelCode
FROM @TABLE
WHERE Account = t.Account
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'),1,1,'') AS Names
FROM @TABLE t INNER JOIN (SELECT Account , COUNT(*) AS [Count]
FROM @TABLE
GROUP BY Account) c
ON T.Account = C.Account
)A
PIVOT (MAX(ChannelCodeID)
FOR Channels
IN ([Channel1],[Channel2],[Channel3])
) p
```
## Result
```
╔═════════╦═══════╦══════════╦══════════╦══════════╦═════════════╗
║ Account ║ Count ║ Channel1 ║ Channel2 ║ Channel3 ║ Names ║
╠═════════╬═══════╬══════════╬══════════╬══════════╬═════════════╣
║ 12345 ║ 3 ║ 2 ║ 4 ║ 6 ║ ABC.DEF.GHI ║
║ 54321 ║ 2 ║ 2 ║ 6 ║ 0 ║ ABC.GHI ║
║ 99999 ║ 1 ║ 2 ║ 0 ║ 0 ║ ABC ║
╚═════════╩═══════╩══════════╩══════════╩══════════╩═════════════╝
```
|
-- Back up raw data into temp table
```
select * into #rawData FROM RawData
```
-- First, populate the lowest channel and base records
```
INSERT INTO AggregatedData (Account, [Count], Channel1, Channel2, Channel3)
SELECT Account, 1, MIN(ChannelCodeID), 0, 0
FROM #RawData
GROUP BY Account
```
-- Gives you something like this
```
Account Count Channel1 Channel2 Channel3 Names
12345 1 2 0 0 NULL
54321 1 2 0 0 NULL
99999 1 2 0 0 NULL
```
--
```
DELETE FROM #rawData
WHERE EXISTS (SELECT 1 FROM AggregatedData a
              WHERE a.Account = #rawData.Account
                AND #rawData.ChannelCodeID IN (a.Channel1, a.Channel2, a.Channel3))
```
-- Now do an update
```
UPDATE AggregatedData SET Channel2 = xx.NextLowest, [Count] = [Count] + 1
FROM
( SELECT Account, MIN(ChannelCodeID) AS NextLowest
  FROM #RawData
  GROUP BY Account ) xx
WHERE AggregatedData.Account = xx.Account
```
-- Repeat above for Channel3
You then need an update statement against the final aggregated table based on the channel IDs. If it's not run often, I would suggest a UDF which takes 3 parameters and returns a string, something like
```
UPDATE AggregatedData SET [names] = dbo.BuildNameList(channel1,channel2,channel3)
```
Will run a bit slow, but still not bad overall
Hope this gives you some ideas
|
Aggregating SQL Data and concatenating strings / combining records
|
[
"",
"sql",
"sql-server",
"database",
"t-sql",
""
] |
I am writing a query to select the greatest number of total nominations among the movies of genre Comedy. I have the following query so far:
```
SELECT m.movie_title, m.release_year, COUNT(*)
FROM MOVIE m JOIN NOMINATION n
ON (n.movie_title=m.movie_title AND n.release_year=m.release_year)
WHERE m.genre='Comedy' GROUP BY m.movie_title, m.release_year;
```
If I execute the following query
```
SELECT m.movie_title, m.release_year
FROM MOVIE m
WHERE (m.genre='Comedy') GROUP BY m.movie_title, m.release_year;
```
It returns 3 results. My goal is to get the one result with the highest count of nominations for movies of genre Comedy.
Is my logic incorrect? I am still a beginner to sql (oracle11g) and teaching myself.
I have looked at multiple tutorials online but nothing has been able to help so far.
Thank you for all the help.
|
Let's understand it with the employee table example.
I have three rows which show the count(\*) of each department.
```
SQL> WITH data AS
2 ( SELECT deptno, COUNT(*) rn FROM emp GROUP BY deptno
3 )
4 SELECT * FROM DATA
5 /
DEPTNO RN
---------- ----------
30 6
20 5
10 3
```
Now, I want one row which has the maximum count of departments.
```
SQL> WITH data AS
2 ( SELECT deptno, COUNT(*) rn FROM emp GROUP BY deptno
3 )
4 SELECT * FROM DATA WHERE rn =
5 (SELECT MAX(rn) FROM data
6 )
7 /
DEPTNO RN
---------- ----------
30 6
SQL>
```
Or, also could be written as :
```
SQL> WITH data1 AS
2 ( SELECT deptno, COUNT(*) rn FROM emp GROUP BY deptno
3 ),
4 data2 AS
5 ( SELECT MAX(rn) max_rn FROM DATA1
6 )
7 SELECT d1.* FROM data1 d1, data2 d2 WHERE d1.rn = d2.max_rn
8 /
DEPTNO RN
---------- ----------
30 6
SQL>
```
So, in your case, use `MAX` function.
Another approach would be to use the `ANALYTIC` function `ROW_NUMBER` to assign a rank to each count(\*) and then select the highest rank.
For example,
```
SQL> WITH data AS
2 (SELECT deptno,
3 row_number() OVER( ORDER BY COUNT(*) DESC) rn
4 FROM emp
5 GROUP BY deptno
6 )
7 SELECT deptno FROM DATA WHERE rn = 1
8 /
DEPTNO
----------
30
SQL>
```
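The same MAX-of-counts shape works in any engine with CTEs. A quick sqlite3 sketch via Python, with department data invented to mirror the `emp` example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INT, deptno INT);
INSERT INTO emp VALUES
 (1,30),(2,30),(3,30),(4,30),(5,30),(6,30),
 (7,20),(8,20),(9,20),(10,20),(11,20),
 (12,10),(13,10),(14,10);
""")

top = conn.execute("""
WITH data AS (
  SELECT deptno, COUNT(*) AS rn
  FROM emp
  GROUP BY deptno
)
SELECT deptno, rn FROM data
WHERE rn = (SELECT MAX(rn) FROM data)
""").fetchone()

# top now holds the department with the most rows and its count
```

The single CTE is consulted twice (once for the rows, once for the MAX), exactly as in the first variant shown above.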
|
Try something like this:
```
SELECT n.movie_title, n.release_year, COUNT(*) nominations
FROM NOMINATION n
LEFT JOIN MOVIE m ON n.movie_title=m.movie_title AND n.release_year=m.release_year
WHERE m.genre='Comedy'
GROUP BY n.movie_title,n.release_year
ORDER BY nominations DESC
```
Your approach would be:
1. Start with the table NOMINATION, because you want to count those.
2. Left Join the movie genre to every row in nomination and then filter by genre
3. Group your results to be able to count nominations for the same movie
4. Order the result set in descending order
To better understand the process please try the above query line for line and take a look at the results in between. Start by looking at the result, when just executing the first two lines. Then add the third line and execute again. And so on.
**Update (after removing LIMIT):**
Wrap the above SELECT Statement in
```
SELECT * FROM (
... above statement here ...
) WHERE ROWNUM=1;
```
See [How do I limit the number of rows returned by an Oracle query after ordering?](https://stackoverflow.com/questions/470542/how-do-i-limit-the-number-of-rows-returned-by-an-oracle-query-after-ordering) for further details.
|
SQL COUNT to count all the numbers from another table
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I'm trying to select the most recent non zero entry from my data set in SQL. Most examples of this are satisfied with returning only the date and the group by variables, but I would also like to return the relevant Value. For example:
```
ID Date Value
----------------------------
001 2014-10-01 32
001 2014-10-05 10
001 2014-10-17 0
002 2014-10-03 17
002 2014-10-20 60
003 2014-09-30 90
003 2014-10-10 7
004 2014-10-06 150
005 2014-10-17 0
005 2014-10-18 9
```
Using
```
SELECT ID, MAX(Date) AS MDate FROM Table WHERE Value > 0 GROUP BY ID
```
Returns:
```
ID Date
-------------------
001 2014-10-05
002 2014-10-20
003 2014-10-10
004 2014-10-06
005 2014-10-18
```
But whenever I try to include Value as one of the selected variables, SQLServer results in an error:
> "Column 'Value' is invalid in the select list because it is not
> contained in either an aggregate function or the GROUP BY clause."
My desired result would be:
```
ID Date Value
----------------------------
001 2014-10-05 10
002 2014-10-20 60
003 2014-10-10 7
004 2014-10-06 150
005 2014-10-18 9
```
One solution I have thought of would be to look up the results back in the original Table and return the Value that corresponds to the relevant ID & Date (I have already trimmed down and so I know these are unique), but this seems to me like a messy solution. Any help on this would be appreciated.
NOTE: I do not want to group by Value as this is the result I am trying to pull out in the end (i.e. for each ID, I want the most recent Value). Further Example:
```
ID Date Value
----------------------------
001 2014-10-05 10
001 2014-10-06 10
001 2014-10-10 10
001 2014-10-12 8
001 2014-10-18 0
```
Here, I only want the last non zero entry. (001, 2014-10-12, 8)
```
SELECT ID, MAX(Date) AS MDate, Value FROM Table WHERE Value > 0 GROUP BY ID, Value
```
Would return:
```
ID Date Value
----------------------------
001 2014-10-10 10
001 2014-10-12 8
```
|
Assuming you don't have repeated dates for the same `ID` in the table, this should work:
```
SELECT A.ID, A.Date, A.Value
FROM
T1 AS A
INNER JOIN (SELECT ID,MAX(Date) AS Date FROM T1 WHERE Value > 0 GROUP BY ID) AS B
ON A.ID = B.ID AND A.Date = B.Date
```
|
This can also be done using a window function, which is often faster than a join on a grouped query:
```
select id, date, value
from (
    select id,
           date,
           value,
           row_number() over (partition by id order by date desc) as rn
    from the_table
    where value > 0
) t
where rn = 1
order by id;
```
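Both approaches can be checked side by side with the question's data; a sqlite3 sketch (note that the `Value > 0` filter must appear in both versions, or the zero rows win for IDs 001 and 005):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (ID TEXT, Date TEXT, Value INT);
INSERT INTO t1 VALUES
 ('001','2014-10-01',32),('001','2014-10-05',10),('001','2014-10-17',0),
 ('002','2014-10-03',17),('002','2014-10-20',60),
 ('003','2014-09-30',90),('003','2014-10-10',7),
 ('004','2014-10-06',150),
 ('005','2014-10-17',0),('005','2014-10-18',9);
""")

# join against the per-ID max date
join_rows = conn.execute("""
SELECT A.ID, A.Date, A.Value
FROM t1 A
JOIN (SELECT ID, MAX(Date) AS Date FROM t1
      WHERE Value > 0 GROUP BY ID) B
  ON A.ID = B.ID AND A.Date = B.Date
ORDER BY A.ID
""").fetchall()

# window-function version of the same query
win_rows = conn.execute("""
SELECT ID, Date, Value
FROM (SELECT ID, Date, Value,
             ROW_NUMBER() OVER (PARTITION BY ID
                                ORDER BY Date DESC) AS rn
      FROM t1 WHERE Value > 0)
WHERE rn = 1
ORDER BY ID
""").fetchall()
```

Both queries return the same five rows, matching the desired result in the question.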
|
Select Most Recent Entry in SQL
|
[
"",
"sql",
"select",
"sql-server-2008-r2",
"greatest-n-per-group",
""
] |
I have two tables, namely `profile` and `table_1`; `profile` is the parent table and `table_1` is the child table. I have an `email` column in both tables, and what I want to do is, for example, for `'me@mail.com'` in the child table's `email` column, update the `record` column of every parent-table row whose `email` column is `me@mail.com`.
In the WHERE clause I used the primary keys of both tables to do it, but since they are not the same it causes mistakes.
```
update profile
set record= (
select table_1.record
from Table_1
where profile.profile_id = Table_1.profile_id
)
```
But when I used the one below, which I expected to work, it gives me the error `"Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >="`
```
update profile
set record = (
select table_1.record
from Table_1
where profile.email = Table_1.email
)
```
How do I make the update work with the email columns rather than the primary key columns?
|
If record is functionally dependent on email, then derive a distinct set of (email, record) pairs and use that for your update:
```
with s as (select distinct email, record from table1)
update t
set record = s.record
from profile t
join s on s.email = t.email
```
If, in s, email does not functionally determine record, then which record you get from the join is not guaranteed.
|
I think this is what you may want:
```
update P
set P.record= T.record
from profile P
Inner Join Table_1 T
ON P.profile_id = T.profile_id
```
Any row in P that matches the ProfileId in T gets its "Record" column updated with the value in T's "Record" column.
You can modify this to use other columns in the JOIN. You can also add multiple columns to update in the SET portion of the code.
|
Updating parent table
|
[
"",
"sql",
"sql-server",
""
] |
I need to write a sql script which is used to calculate our customer left points after redemption at the beginning of each month.But I am not very sure about the difference between SQL query and SQL script.
I wrote the two queries to calculate the points earned and spent. For total points earned:
```
select Tbl_member.member_id,sum(points_earned)
from Tbl_member, Tbl_member_participation
where tbl_member_participation.member_id = Tbl_member.member_id
group by Tbl_member.member_id
```
and points spent:
```
select Tbl_member.member_id,sum(points_redeemed)
from Tbl_member, Tbl_member_redemption
where tbl_member_redemption.member_id = Tbl_member.member_id
group by Tbl_member.member_id
```
I want to subtract spent points from earned points, but I don't know how to combine them into a single output SQL script, or how to make this run for every month.
|
Try this
```
SELECT Tbl_member.member_id,
Sum(points_earned) - Sum(points_redeemed) Remaining_points
FROM Tbl_member
JOIN Tbl_member_participation
ON tbl_member_participation.member_id = Tbl_member.member_id
JOIN Tbl_member_redemption
ON tbl_member_redemption.member_id = Tbl_member.member_id
GROUP BY Tbl_member.member_id,
Datepart(mm, datecol),
Datepart(yyyy, datecol)
```
|
You can use the UNION ALL operator to combine the two query results.
This query is not tested, but it shows the way you can combine the two queries using UNION ALL.
```
SELECT
member_id,
SUM(points_earned) - SUM(points_redeemed)
FROM
(
SELECT
Tbl_member.member_id,
sum(points_earned) points_earned,
0 AS points_redeemed
FROM
Tbl_member,
Tbl_member_participation
WHERE
tbl_member_participation.member_id = Tbl_member.member_id
GROUP BY
Tbl_member.member_id
UNION ALL
select
Tbl_member.member_id,
0 AS points_earned,
sum(points_redeemed) points_redeemed
from
Tbl_member, Tbl_member_redemption
where
tbl_member_redemption.member_id = Tbl_member.member_id
group by
Tbl_member.member_id
) as
a
GROUP BY
member_id
```
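The UNION ALL shape has a real advantage worth noting: joining both child tables to the member at once multiplies rows whenever a member has several rows in each table, whereas stacking the rows first counts each exactly once. A small sqlite3 check with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE participation (member_id INT, points_earned INT);
CREATE TABLE redemption (member_id INT, points_redeemed INT);
INSERT INTO participation VALUES (1,100),(1,50),(2,30);
INSERT INTO redemption VALUES (1,40),(2,10),(2,5);
""")

rows = conn.execute("""
SELECT member_id, SUM(earned) - SUM(spent) AS remaining
FROM (SELECT member_id, points_earned AS earned, 0 AS spent
      FROM participation
      UNION ALL
      SELECT member_id, 0, points_redeemed
      FROM redemption)
GROUP BY member_id
ORDER BY member_id
""").fetchall()
```

Member 1 earned 150 and spent 40; member 2 earned 30 and spent 15 — each source row contributes once.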
|
How to do the subtract function in sql script
|
[
"",
"sql",
"sql-server",
"sql-scripts",
""
] |
I'm taking a database course at Johns Hopkins in Maryland and have a question. I already emailed my professor and he knows I'm asking this question here and he's cool with that. So I'm developing a COOKBOOK DB in Postgres and I have an interesting problem I’ve been facing in Postgres where I just can’t seem to be able to build the PRICE table. I have a nice Cookbook ERD but apparently can't post until my reputation is at least 10. I'll do my best to describe the ERD. There are three factoring tables relating to PRICE. These are INGREDIENT, SUBSTITUTION, and PRICE.
Here's a link to the ERD (notice the 1:M for INGREDIENT to SUBSTITUTION):
[[ERD](https://i.stack.imgur.com/YEExS.jpg)]
I can have one INGREDIENT with potentially many SUBSTITUTIONS and a possible one-to-one between SUBSTITUTION and PRICE (one PRICE per SUBSTITUTION if a PRICE is known). If the PRICE is known then the PRICE is able to define a tuple with a composite primary key: (price\_id, ingredient\_id (fk), substitution\_id (fk))
The challenge I’m facing is that Postgres SQL is not allowing me to establish this relationship and I’m not exactly sure why. I’ve set the keys in SUBSTITUTION to have a UNIQUE constraint so that shouldn’t be the problem. The only thing I can think of is that the ingredient\_id in SUBSTITUTION is a foreign key to INGREDIENT and therefore may not be physically established in SUBSTITUTION but the error I'm getting doesn't suggest that. This is what I’m getting in the terminal (**first describing SUBSTITUTION**):
```
cookbook=# \d+ SUBSTITUTION
Table "public.substitution"
Column | Type | Modifiers | Storage | Description
--------------------+-----------------------+--------------------------------------------------------------------------+----------+-------------
substitution_id | integer | not null default nextval('subsitution_substitution_id_seq'::regclass) | plain |
ingredient_id | integer | not null default nextval('subsitution_ingredient_id_seq'::regclass) | plain |
name | character varying(50) | not null | extended |
measurement_ref_id | integer | not null default nextval('subsitution_measurement_ref_id_seq'::regclass) | plain |
metric_unit | character varying(25) | not null | extended |
Indexes:
"subsitution_pkey" PRIMARY KEY, btree (substitution_id, ingredient_id)
"uniqueattributes" UNIQUE, btree (substitution_id, ingredient_id)
Foreign-key constraints:
"subsitution_ingredient_id_fkey" FOREIGN KEY (ingredient_id) REFERENCES ingredient(ingredient_id)
"subsitution_measurement_ref_id_fkey" FOREIGN KEY (measurement_ref_id) REFERENCES measurement_ref(measurement_ref_id)
Has OIDs: no
cookbook=# create table price(
cookbook(# price_id serial not null,
cookbook(# ingredient_id serial references substitution(ingredient_id),
cookbook(# substitution_id serial references substitution(substitution_id),
cookbook(# usdollars smallint not null,
cookbook(# availability season,
cookbook(# seasonal boolean,
cookbook(# primary key (price_id, ingredient_id, substitution_id)
cookbook(# );
NOTICE: CREATE TABLE will create implicit sequence "price_price_id_seq" for serial column "price.price_id"
NOTICE: CREATE TABLE will create implicit sequence "price_ingredient_id_seq" for serial column "price.ingredient_id"
NOTICE: CREATE TABLE will create implicit sequence "price_substitution_id_seq" for serial column "price.substitution_id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "price_pkey" for table "price"
ERROR: there is no unique constraint matching given keys for referenced table "substitution"
```
|
I've omitted some columns, so you can focus on the keys. I barely glanced at your ERD. (I hate ERDs with the burning passion of a thousand suns.)
```
create table ingredients (
ingredient_id serial primary key,
-- Don't allow duplicate names.
ingredient_name varchar(35) not null unique
);
create table substitutions (
-- These are properly declared integer, not serial.
-- Also note two separate foreign key references.
ingredient_id integer not null references ingredients (ingredient_id),
substitute_id integer not null references ingredients (ingredient_id),
primary key (ingredient_id, substitute_id)
);
create table prices (
-- Price id number is unnecessary.
ingredient_id integer not null,
substitute_id integer not null,
-- Money is usually declared numeric(n, m) or decimal(n, m).
us_dollars numeric(10, 2) not null
-- Negative amounts don't make sense.
check (us_dollars >= 0),
-- Only one row per distinct substitution.
primary key (ingredient_id, substitute_id),
-- One single foreign key reference, but it references *two* columns.
foreign key (ingredient_id, substitute_id) references substitutions (ingredient_id, substitute_id)
);
```
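The same key layout can be exercised in sqlite3 as well (foreign keys must be switched on explicitly there; the ingredient data is invented). Inserting a price for a substitution that does not exist is rejected by the composite foreign key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE ingredients (
  ingredient_id INTEGER PRIMARY KEY,
  ingredient_name TEXT NOT NULL UNIQUE);
CREATE TABLE substitutions (
  ingredient_id INTEGER NOT NULL REFERENCES ingredients,
  substitute_id INTEGER NOT NULL REFERENCES ingredients,
  PRIMARY KEY (ingredient_id, substitute_id));
CREATE TABLE prices (
  ingredient_id INTEGER NOT NULL,
  substitute_id INTEGER NOT NULL,
  us_dollars NUMERIC NOT NULL CHECK (us_dollars >= 0),
  PRIMARY KEY (ingredient_id, substitute_id),
  FOREIGN KEY (ingredient_id, substitute_id)
    REFERENCES substitutions (ingredient_id, substitute_id));
INSERT INTO ingredients VALUES (1,'butter'),(2,'margarine');
INSERT INTO substitutions VALUES (1,2);
INSERT INTO prices VALUES (1,2,3.50);
""")

# (2, 1) is not a row of substitutions, so the composite FK fires
try:
    conn.execute("INSERT INTO prices VALUES (2, 1, 1.00)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The key point carries over from the answer: the foreign key references the *pair* of columns that the parent's primary key makes unique, which is exactly what the original single-column references were missing.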
|
The issue is because the `FOREIGN KEY` defined in the `price` table, `substitution(ingredient_id)`, is not unique.
In the `substitution` table, you have the following indexes defined:
```
"subsitution_pkey" PRIMARY KEY, btree (substitution_id, ingredient_id)
"uniqueattributes" UNIQUE, btree (substitution_id, ingredient_id)
```
This means that for that table uniqueness currently requires a *tuple* of `(substitution_id, ingredient_id)`. As an aside, those two indexes are really duplicates of each other, since a `PRIMARY KEY` constraint guarantees uniqueness by definition.
So, you have a variety of options, but I find the easiest thing to do is often use a single unique ID which is defined for every table -- I like to use `id serial`, which will create an implicit sequence, and then define that to be the `PRIMARY KEY`.
Then you can use just that key to define `FOREIGN KEY` relationships. Doing that with multi-column tuples certainly complicates things, and I find using a single ID easier. You can always create additional unique indexes on multi-column tuples if you need those for `SELECT` performance, etc.
The same thing applies to the FK constraint on `ingredient_id` -- it's not unique. The same type of remedy I mentioned above applies to that column also.
|
Postgres Composite Primary Key Dependency Issue
|
[
"",
"sql",
"postgresql",
"foreign-keys",
"primary-key",
"composite-primary-key",
""
] |
```
SELECT *, null AS score,
'0' AS SortOrder
FROM products
WHERE datelive = -1
AND hidden = 0
UNION
SELECT e.*, (SUM(r.a)/(COUNT(*)*1.0)+
SUM(r.b)/(COUNT(*)*1.0)+
SUM(r.c)/(COUNT(*)*1.0)+
SUM(r.d)/(COUNT(*)*1.0))/4 AS score,
'1' AS SortOrder
FROM products e
LEFT JOIN reviews r
ON r.productID = e.productID
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID
HAVING COUNT(*) >= 5
UNION
SELECT e.*, (SUM(r.a)/(COUNT(*)*1.0)+
SUM(r.b)/(COUNT(*)*1.0)+
SUM(r.c)/(COUNT(*)*1.0)+
SUM(r.d)/(COUNT(*)*1.0))/4 AS score,
'2' AS SortOrder
FROM products e
LEFT JOIN reviews r
ON r.productID = e.productID
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID
HAVING COUNT(*) < 5
ORDER BY SortOrder ASC, score DESC
```
This creates an SQL object for displaying products on a page. The first request grabs items of type `datelive = -1`, the second of type `datelive != -1` but `r.count(*) >= 5`, and the third of type `datelive != -1` and `r.count(*) < 5`. The reviews table is structured similar to the below:
```
reviewID | productID | a | b | c | d | approved
-------------------------------------------------
1 1 5 4 5 5 1
2 5 3 2 5 5 0
3 2 5 5 4 3 1
... ... ... ... ... ... ...
```
I'm trying to work it such that `r.count(*)` only cares for rows of type `approved = 1`, since tallying data based on unapproved reviews isn't ideal. How can I join these tables such that the summations of scores and the number of rows is dependent only on `approved = 1`?
I've tried adding in `AND r.approved = 1` in the `WHERE` conditional for the joins and it doesn't do what I'd like. It does sort it properly, but then it no longer includes items with zero reviews.
|
You seem to be nearly there.
In your question you talked about adding the AND r.approved = 1 to the join criteria but by the sounds of it you are actually adding it to the `WHERE` clause.
If you instead properly add it to the join criteria like below then it should work fine:
```
SELECT *, null AS score,
'0' AS SortOrder
FROM products
WHERE datelive = -1
AND hidden = 0
UNION
SELECT e.*, (SUM(r.a)/(COUNT(*)*1.0)+
SUM(r.b)/(COUNT(*)*1.0)+
SUM(r.c)/(COUNT(*)*1.0)+
SUM(r.d)/(COUNT(*)*1.0))/4 AS score,
'1' AS SortOrder
FROM products e
LEFT JOIN reviews r ON r.productID = e.productID AND r.approved = 1
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID
HAVING COUNT(*) >= 5
UNION
SELECT e.*, (SUM(r.a)/(COUNT(*)*1.0)+
SUM(r.b)/(COUNT(*)*1.0)+
SUM(r.c)/(COUNT(*)*1.0)+
SUM(r.d)/(COUNT(*)*1.0))/4 AS score,
'2' AS SortOrder
FROM products e
LEFT JOIN reviews r ON r.productID = e.productID AND r.approved = 1
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID
HAVING COUNT(*) < 5
ORDER BY SortOrder ASC, score DESC
```
[SQL Fiddle here](http://sqlfiddle.com/#!2/67fcd/1).
Notice again how I have simply put the `AND r.approved = 1` directly after `LEFT JOIN reviews r ON r.productID = e.productID`, which adds an extra criterion to the join.
As I mentioned in my comment, the WHERE clause will filter rows out of the combined record set after the join has been made. In some cases the RDBMS may optimise it out and put it into the join criteria but only where that would make no difference to the result set.
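The difference is easy to demonstrate. With a sqlite3 toy schema (invented data: one unapproved review for product 1, no reviews for product 2), filtering in the ON clause keeps every product, while filtering in WHERE silently drops the unmatched ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (productID INT, name TEXT);
CREATE TABLE reviews (productID INT, approved INT);
INSERT INTO products VALUES (1,'widget'),(2,'gadget');
INSERT INTO reviews VALUES (1,0);
""")

# condition in the join criteria: unmatched products survive with count 0
on_rows = conn.execute("""
SELECT p.productID, COUNT(r.productID)
FROM products p
LEFT JOIN reviews r
  ON r.productID = p.productID AND r.approved = 1
GROUP BY p.productID
ORDER BY p.productID
""").fetchall()

# condition in WHERE: the NULL review side fails the test, rows vanish
where_rows = conn.execute("""
SELECT p.productID, COUNT(r.productID)
FROM products p
LEFT JOIN reviews r ON r.productID = p.productID
WHERE r.approved = 1
GROUP BY p.productID
ORDER BY p.productID
""").fetchall()
```

This is exactly the behaviour the asker ran into: the WHERE version no longer includes items with zero (approved) reviews.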
|
Calculating the non-zero sums and joining them to your result may solve it;
[fiddle](http://sqlfiddle.com/#!4/b7702/17)
```
SELECT a.productID,
NULL AS score,
'0' AS SortOrder
FROM products a
WHERE datelive = -1
AND hidden = 0
UNION
SELECT e.productID,
(min(x.a)/(min(x.cnt)*1.0)+ min(x.b)/(min(x.cnt)*1.0)+ min(x.c)/(min(x.cnt)*1.0)+ min(x.d)/(min(x.cnt)*1.0))/4 AS score,
'1' AS SortOrder
FROM products e
JOIN reviews r ON r.productID = e.productID
LEFT JOIN
(SELECT ee.productID,
sum(rr.a) AS a,
sum(rr.b) AS b,
sum(rr.c) AS c,
sum(rr.d) AS d,
count(*) AS cnt
FROM products ee
LEFT JOIN reviews rr ON ee.productID = rr.productID
GROUP BY ee.productID) x ON e.productID = x.productID
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID HAVING COUNT(*) >= 5
UNION
SELECT e.productID,
(min(x.a)/(min(x.cnt)*1.0)+ min(x.b)/(min(x.cnt)*1.0)+ min(x.c)/(min(x.cnt)*1.0)+ min(x.d)/(min(x.cnt)*1.0))/4 AS score,
'2' AS SortOrder
FROM products e
LEFT JOIN reviews r ON r.productID = e.productID
LEFT JOIN
(SELECT ee.productID,
sum(rr.a) AS a,
sum(rr.b) AS b,
sum(rr.c) AS c,
sum(rr.d) AS d,
count(*) AS cnt
FROM products ee
LEFT JOIN reviews rr ON ee.productID = rr.productID
GROUP BY ee.productID) x ON e.productID = x.productID
WHERE e.hidden = 0
AND e.datelive != -1
GROUP BY e.productID HAVING COUNT(*) < 5
ORDER BY SortOrder ASC,
score DESC
```
|
Join table on conditions, count on conditions
|
[
"",
"sql",
""
] |
I'm trying to `INSERT` some rows using the auto increment value obtained from another `INSERT`.
```
WITH id AS (
INSERT INTO myTable
(mt_name, mt_description)
VALUES ('Name', 'Description')
RETURNING id
)
INSERT INTO anotherTable
(at_id, at_foo, at_bar)
VALUES (id, 'PLT', 50682),
(id, 'PLT', 54482),
(id, 'PLT', 52570),
(id, 'PLT', 9192);
```
What I'm expecting is to `INSERT` a bunch of rows with the `at_id` column having the value that was returned by the first `INSERT`. But when I run this, I get:
```
psql:file.txt:100: ERROR: column "id" does not exist
LINE 9: VALUES (id, 'PLT', 50682),
```
Is it possible to insert a returned value this way? If so, what changes must I make to get this to work?
I'm using Postgres 9.3.5. The client and server have the same version.
|
Try this one:
```
WITH id AS (
INSERT INTO myTable
(mt_name, mt_description)
VALUES ('Name', 'Description')
RETURNING id
)
INSERT INTO anotherTable(at_id, at_foo, at_bar)
SELECT id.id, 'PLT', val.v
FROM (VALUES (50682),(54482),(52570),(9192)) val(v)
CROSS JOIN id;
```
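When the insert happens from application code rather than a single statement, the usual pattern is to read the generated key back first and reuse it. A sqlite3 sketch of the same flow, with `lastrowid` playing the role of `RETURNING id`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (id INTEGER PRIMARY KEY,
                      mt_name TEXT, mt_description TEXT);
CREATE TABLE anotherTable (at_id INT, at_foo TEXT, at_bar INT);
""")

cur = conn.execute(
    "INSERT INTO myTable (mt_name, mt_description) VALUES (?, ?)",
    ("Name", "Description"))
new_id = cur.lastrowid  # the auto-increment value just generated

conn.executemany(
    "INSERT INTO anotherTable (at_id, at_foo, at_bar) VALUES (?, 'PLT', ?)",
    [(new_id, v) for v in (50682, 54482, 52570, 9192)])

rows = conn.execute(
    "SELECT at_id, at_bar FROM anotherTable ORDER BY at_bar").fetchall()
```

The single-statement CTE form above is still preferable inside Postgres, since it avoids a round trip and runs atomically without an explicit transaction.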
|
change your query like this
```
WITH id AS (
INSERT INTO myTable
(mt_name, mt_description)
VALUES ('Name', 'Description')
RETURNING id
)
INSERT INTO anotherTable
select id,'PLT', 50682 from id;
```
|
Inserting data using value bound with WITH
|
[
"",
"sql",
"postgresql",
""
] |
I have a requirement to find sets of rows where one or more fields are matching.
E.g:
Vendor Master
```
VendorId | VendorName | Phone | Address | Fax
------------------------------------------------------------------------
1 AAAA 10101 Street1 111
2 BBBB 20202 Street2 222
3 CCCC 30303 Street3 333
4 DDDD 40404 Street2 444
5 FFFF 50505 Street5 555
6 GGGG 60606 Street6 444
7 HHHH 10101 Street6 777
```
```
SELECT VendorId FROM VendorMaster vm
WHERE EXISTS
( SELECT 1 FROM VendorMaster vm1
  WHERE vm1.VendorId <> vm.VendorId
  AND (vm1.Phone = vm.Phone OR vm1.Address = vm.Address OR vm1.Fax = vm.Fax) )
```
With the above query I am getting records, but my requirement is to assign a set-id for each set of matching records.
Like below:
```
SetId | VendorId
---------------------
1000 1
1000 7 //1 and 7- Phone numbers are matching
1001 2
1001 4 //2 and 4 - Address matching
1001 6 // 4 and 6 - Fax matching
```
Please advise me on how to write a query to **assign set ids for matching sets**. The performance of the query is also key here as the number of records will be around 100,000.
Thanks
|
I believe this will give you your desired result. A little explanation is in the comments; let me know if more is needed.
```
with relations
--Get all single relationships between vendors.
as (
select t1.vendorId firstId,
t2.vendorId secondId
from VendorMaster t1
inner join VendorMaster t2 on t1.vendorId < t2.vendorId and(
t1.Phone = t2.Phone
or t1.address = t2.address
or t1.Fax = t2.Fax
)
),
recurseLinks
--Recurse the relationships
as (
select r.*, CAST(',' + CAST(r.firstId AS VARCHAR) + ',' AS VARCHAR) tree
from relations r
union all
select r.firstId,
l.secondId,
cast(r.Tree + CAST(l.secondId AS varchar) + ',' as varchar)
from relations l
inner join recurseLinks r on r.secondId = l.firstId and r.tree not like '%' + cast(l.secondId as varchar) + ',%'
union all
select r.firstId,
l.firstId,
cast(r.Tree + CAST(l.firstId AS varchar) + ',' as varchar)
from relations l
inner join recurseLinks r on r.secondId = l.secondId and r.tree not like '%' + cast(l.firstId as varchar) + ',%'
),
removeInvalid
--Removed invalid relationships.
as (
select l1.firstId, l1.secondId
from recurseLinks l1
where l1.firstId < l1.secondId
),
removeIntermediate
--Removed intermediate relationships.
as (
select distinct l1.*
from removeInvalid l1
left join removeInvalid l2 on l2.secondId = l1.firstId
where l2.firstId is null
)
select result.secondId,
dense_rank() over(order by result.firstId) SetId
from (
select firstId,
secondId
from removeIntermediate
union all
select distinct firstId,
firstId
from removeIntermediate
) result;
```
The 'relations' named result set returns all VendorMaster relationships where they share a common Phone, Address or Fax. It also only returns [A,B]; it won't return the reverse relationship [B,A].
The 'recurseLinks' named result set is a little more complex. It recursively joins all rows that are related to each other. The tree column keeps track of lineage so it won't get stuck in an endless loop. The first query of this union selects all the relations from the 'relations' named result set. The second query of the union selects all the forward recursive relationships, so given [A,B], [B,C] and [C,D], then [A,C], [A,D] and [B,D] are added to the result set. The third query of the union selects all the non-forward recursive relationships, so given [A,D], [C,D], [B,C], then [A,C], [A,B] and [B,D] are added to the result set.
The 'removeInvalid' named result set removes any invalid intermediate relationships added by the recursive query. For Example, [B,A] because we will already have [A,B]. Note this could have been prevented in the 'recurseLinks' result set with some effort.
The 'removeIntermediate' named result set removes any intermediate relationships. So given [A,B],[B,C], [C,D], [A,C], [A,D] it will remove [B,C] and [C,D].
The final result set selects the current results and adds in a self relationship. So given [A,B], [A, C], [A,D] add in [A,A]. Which produces are finial result set.
|
You can use the built in [Ranking functions](http://msdn.microsoft.com/en-us/library/ms189798.aspx) to accomplish this. For example, for unique Address values:
```
DECLARE @VendorMaster TABLE ( VendorID INT, Vendorname VARCHAR(20), Phone VARCHAR(20), Address VARCHAR(20), Fax VARCHAR(20) )
INSERT INTO @VendorMaster
(VendorID, Vendorname, Phone, Address, Fax )
VALUES
(1, 'AAAA', '10101', 'Street1', '111'),
(2, 'BBBB', '20202', 'Street2', '222'),
(3, 'CCCC', '30303', 'Street3', '333'),
(4, 'DDDD', '40404', 'Street2', '444'),
(5, 'FFFF', '50505', 'Street5', '555'),
(6, 'GGGG', '60606', 'Street6', '444'),
(7, 'HHHH', '10101', 'Street6', '777')
SELECT
DenseRank = DENSE_RANK() OVER ( ORDER BY Address )
,* FROM @VendorMaster
```
Results
```
DenseRank VendorID Vendorname Phone Address Fax
1 1 AAAA 10101 Street1 111
2 2 BBBB 20202 Street2 222
3 3 CCCC 30303 Street3 333
2 4 DDDD 40404 Street2 444
4 5 FFFF 50505 Street5 555
5 6 GGGG 60606 Street6 444
5 7 HHHH 10101 Street6 777
```
If these SetId values need to persist, you could create a separate table with an identity column to track the values associated with each SetId. It sounds like you may simply want to normalize your database and break the duplicated data elements out into their own tables linked by an identity-column relationship.
|
How to find sets of rows with one or more fields matching and assign a set id for each matching set?
|
[
"",
"sql",
"sql-server",
"t-sql",
"query-performance",
""
] |
This query should print the joined tables with the `creditcard` column showing "cash" for every row
where there is no card name.
I wrote the following, but it results in an error:
```
SELECT st.Name Territoryname,pp.LastName SalesPerson,ps.Name ShipMethod,
sc.CardType CreditCardType,
soh.OrderDate,soh.TotalDue
FROM Sales.SalesOrderHeader soh
JOIN Person.Person pp
ON soh.SalesPersonID=pp.BusinessEntityID
JOIN Purchasing.ShipMethod ps
ON ps.ShipMethodID=soh.ShipMethodID
JOIN Sales.CreditCard sc (CASE WHEN sc.CardType='Distinguish' THEN 'Cash' ELSE sc.CardType END),
ON sc.CreditCardID=soh.CreditCardID
JOIN Sales.SalesTerritory st
ON st.TerritoryID=soh.TerritoryID
```
|
Is this what you want perhaps?
```
SELECT
st.Name Territoryname,
pp.LastName SalesPerson,
ps.Name ShipMethod,
CASE WHEN sc.CardType='Distinguish' THEN 'Cash' ELSE sc.CardType END AS CreditCardType,
soh.OrderDate,
soh.TotalDue
FROM Sales.SalesOrderHeader soh
JOIN Person.Person pp
ON soh.SalesPersonID=pp.BusinessEntityID
JOIN Purchasing.ShipMethod ps
ON ps.ShipMethodID=soh.ShipMethodID
JOIN Sales.CreditCard sc
ON sc.CreditCardID=soh.CreditCardID
JOIN Sales.SalesTerritory st
ON st.TerritoryID=soh.TerritoryID
```
This will display `cash` where the CardType is `Distinguish`. Maybe it should be:
```
CASE WHEN sc.CardType IS NULL THEN 'Cash' ELSE sc.CardType END AS CreditCardType
```
if you want to display `cash` where the card type is missing (although I don't think this can happen, as you are using inner joins and not left joins).
|
You do not need the CASE in the join; instead, put the CASE in the SELECT only:
```
SELECT ...,
CASE WHEN CardType = '' THEN 'Cash' ELSE CardType END AS CardType,
...
```
Also, if some of the joined tables will not have corresponding rows in the case of a cash payment, you'll need to use `LEFT JOIN`s.
|
CASE statement within a JOIN query in SQL
|
[
"",
"sql",
"join",
"case-statement",
""
] |
Here is the thing: I am trying to find a query which can include all three results, but I only know how to write the query for each of them separately.
Questions:
For each survey that has had at least 200 members participate, provide the following information:
1)Survey ID and Survey Description
2)Number of members that started the survey
3) Number of members that already finished.
Query for survey ID and survey description for surveys with at least 200 participations:
1)
```
Select survey_id, Survey_desc, count(Tbl_survey.member_id) as totalnumber
from Tbl_survey,Tbl_member_participation
where Tbl-survey.member_id = Tbl_member_participation.member_id
group by survey_id, Survey_desc
having totalnumber >= 200
```
2) query for number of members that started but not finished.
```
select count(survey_id)
from Tbl_survey,
where survey_id exists ( Select survey_id, Survey_desc, count(Tbl_survey.member_id) as totalnumber
from Tbl_survey,Tbl_member_participation
where Tbl-survey.member_id = Tbl_member_participation.member_id
group by survey_id, Survey_desc
having totalnumber >= 200) and
finishi_date is **null**
group by survey_id
```
3) query for number of members that already finished.
```
select count(survey_id)
from Tbl_survey,
where survey_id exists ( Select survey_id, Survey_desc, count(Tbl_survey.member_id) as totalnumber
from Tbl_survey,Tbl_member_participation
where Tbl-survey.member_id = Tbl_member_participation.member_id
group by survey_id, Survey_desc
having totalnumber >= 200) and
finishi_date is ***not null***
group by survey_id
```
SO I just want a way can combine these three results like
**Survey Id, Survey desc, start people, finish people.** in a only one query table.
|
First of all, you should get your queries right.
1) Your first query is okay, but you should not use the out-dated comma-separated join syntax. Use explicit joins instead (they were introduced in the SQL standard some twenty years ago, btw).
```
select s.survey_id, s.survey_desc, count(*) as totalnumber
from tbl_survey s
join tbl_member_participation mp on s.member_id = mp.member_id
group by s.survey_id, s.survey_desc
having count(*) >= 200;
```
2,3) Your second and third queries are syntactically broken. You are kind of mixing EXISTS and IN. Moreover finishi\_date is likely to be a member of tbl\_member\_participation, so you must join this table again. Here is query 2 corrected:
```
select s.survey_id, count(*)
from tbl_survey s
join tbl_member_participation mp on s.member_id = mp.member_id
where survey_id in
(
select s.survey_id
from tbl_survey s
join tbl_member_participation mp on s.member_id = mp.member_id
group by s.survey_id
having count(*) >= 200
)
and mp.finishi_date is null
group by s.survey_id;
```
In order to combine all three, you don't have to use EXISTS or IN. All the data needed is available in query 1 already. Look, how I modify query 1 to get to a much simpler query 2:
```
select
s.survey_id,
sum(case when mp.finishi_date is null then 1 else 0 end) as count_unfinished
from tbl_survey s
join tbl_member_participation mp on s.member_id = mp.member_id
group by s.survey_id
having count(*) >= 200;
```
Having said this, your final query is this:
```
select
s.survey_id,
s.survey_desc,
sum(case when mp.finishi_date is null then 1 else 0 end) as count_unfinished,
sum(case when mp.finishi_date is not null then 1 else 0 end) as count_finished,
count(*) as totalnumber
from tbl_survey s
join tbl_member_participation mp on s.member_id = mp.member_id
group by s.survey_id, s.survey_desc
having count(*) >= 200;
```
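A runnable sketch of the conditional-aggregation idea, using Python's sqlite3 with a hypothetical, simplified participation table (the real column names may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE participation (survey_id INT, member_id INT, finish_date TEXT);
INSERT INTO participation VALUES
  (1, 101, '2014-10-01'),
  (1, 102, NULL),
  (1, 103, '2014-10-02');
""")

# One pass over the rows computes started, finished and unfinished counts.
row = conn.execute("""
SELECT survey_id,
       COUNT(*)                                                 AS started,
       SUM(CASE WHEN finish_date IS NOT NULL THEN 1 ELSE 0 END) AS finished,
       SUM(CASE WHEN finish_date IS NULL     THEN 1 ELSE 0 END) AS unfinished
FROM participation
GROUP BY survey_id
""").fetchone()
print(row)  # (1, 3, 2, 1)
```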
|
try something like:
```
select a.*, b.*, c.*
from
(select query 1...) as a,
(select query 2...) as b,
(select query 3...) as c
```
where the `select ...` are your three queries. In essence it runs the three queries and takes their results as intermediate tables. Then it runs another query - the outermost select - which provides the results with an automatic cartesian product. Since your other 2 queries produce a single row, you will still get the same number of rows as in your first query.
You should name the output column in query 2 and 3. Like `select count(survey_id) as "COUNT_1" from ...`
|
How to combine different queries results into a single query result table
|
[
"",
"sql",
"operation",
""
] |
I have records like these:
```
id, hstore_col
1, {a: 1, b: 2}
2, {c: 3, d: 4}
3, {e: 1, f: 5}
```
How to order them by a maximum/minimum value inside hstore for **any** attribute?
The result should be like this(order by lowest):
```
id, hstore_col
1, {a: 1, b: 2}
3, {e: 1, f: 5}
2, {c: 3, d: 4}
```
I know, I can only order them by specific attribute like this: `my_table.hstore_fields -> 'a'`, but it doesn't work for my issue.
|
Convert to an array using `avals` and cast the resulting array from text to ints. Then sort the array and order the results by the 1st element of the sorted array.
```
select * from mytable
order by (sort(avals(attributes)::int[]))[1]
```
<http://sqlfiddle.com/#!15/84f31/5>
|
If you know all of the elements, you can just piece them all together like this:
```
ORDER BY greatest(my_table.hstore_fields -> 'a', my_table.hstore_fields -> 'b',my_table.hstore_fields -> 'c', my_table.hstore_fields -> 'd', my_table.hstore_fields -> 'e', my_table.hstore_fields -> 'f')
```
or
```
ORDER BY least(my_table.hstore_fields -> 'a', my_table.hstore_fields -> 'b',my_table.hstore_fields -> 'c', my_table.hstore_fields -> 'd', my_table.hstore_fields -> 'e', my_table.hstore_fields -> 'f')
```
|
Order by a value of an arbitrary attribute in hstore
|
[
"",
"sql",
"postgresql",
"hstore",
""
] |
I wasn't sure how to really search for this..
Lets say I have a simple table like this
```
ID Type
1 0
1 1
2 1
3 0
4 0
4 1
```
How could I select all ID's which have a type of both 0 and 1?
|
```
SELECT id,type
FROM t
GROUP BY id
HAVING SUM(type=0)>0
AND SUM(type=1)>0
```
You just group by id, then with HAVING you use post-aggregation filtering to check for 0 and 1.
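Since MySQL evaluates a comparison like `type = 0` to 0 or 1, `SUM(type = 0)` counts the matching rows per group. sqlite behaves the same way, so the trick can be sketched with Python's sqlite3 and the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INT, type INT);
INSERT INTO t VALUES (1,0), (1,1), (2,1), (3,0), (4,0), (4,1);
""")

# A comparison evaluates to 0 or 1, so SUM(type = 0) counts the
# rows with type 0 inside each id group.
ids = [r[0] for r in conn.execute("""
SELECT id
FROM t
GROUP BY id
HAVING SUM(type = 0) > 0
   AND SUM(type = 1) > 0
ORDER BY id
""")]
print(ids)  # [1, 4]
```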
|
Having is pretty expensive and that query can't hit keys.
```
SELECT ID FROM foo AS foo0 JOIN foo AS foo1 USING (ID) WHERE foo0.Type=0 AND foo1.Type=1 GROUP BY foo0.id
```
|
SQL where particular column values appears
|
[
"",
"mysql",
"sql",
""
] |
This SQL statement returns no results :
```
SELECT [AutoNo], [PersonID]
FROM [PersonTBL]
WHERE PersonID NOT IN (SELECT PersonID
FROM FollowUpTBL
WHERE FDate > '09/24/2014')
```
While this one returns the expected result:
```
SELECT [AutoNo], [PersonID]
FROM [PersonTBL]
WHERE (SELECT TOP 1 FDate
FROM FollowUpTBL
WHERE FollowUpTBL.PersonID = PersonTBL.PersonID
ORDER BY FDate DESC) < '09/24/2014'
```
I need to return all persons who has any follow-up after '09/24/2014'. Why does the first SQL query not work, while the second one returns the correct data?
Is there any way to make the first query return the same result (I mean using "IN (select)")?
|
I found the glitch. The problem is that some FollowUpTBL records were inserted with PersonID = null. So to fix the first SQL query and make it work, I had to modify it as below:
```
SELECT [AutoNo], [PersonID]
FROM [PersonTBL]
WHERE PersonID NOT IN (
SELECT PersonID
FROM FollowUpTBL
WHERE FDate > '09/24/2014'
and PersonID Is Not Null
)
```
So, add "and PersonID Is Not Null" fix the problem, Thanks,
|
> I need to return all persons **who has** any follow-up after '09/24/2014', why first one didn't work!
Your first statement is using a `NOT IN` clause in your `WHERE` statement. This would get you exactly the opposite of what you want.
|
Why doesn't a "where not in (select statement)" criteria return the same resault as "where (select statement > parameter)"?
|
[
"",
"sql",
"sql-server",
""
] |
I wrote this SQL query for getting rows limited by the first query:
```
SELECT * FROM commenttoarticle a
WHERE a.idCommentToArticle = (SELECT CommentToArticlePID FROM commenttoarticle b)
ORDER BY a.idCommentToArticle DESC LIMIT 3
```
When I try to execute this query, I get:
```
#1242 - Subquery returns more than 1 row
```
How do I resolve this issue? I need to get all rows from the sub-query.
If I wanted to return one row I could use `GROUP BY`, but that is not a solution here.
**Modified query:**
```
SELECT a.idCommentToArticle FROM
commenttoarticle a WHERE a.CommentToArticlePID IN
(SELECT idCommentToArticle FROM commenttoarticle b) ORDER BY a.idCommentToArticle DESC LIMIT 3
```
Dump table **commenttoarticle**:
```
CREATE TABLE IF NOT EXISTS `commenttoarticle` (
`idCommentToArticle` int(11) NOT NULL AUTO_INCREMENT,
`CommentToArticleTime` int(11) NOT NULL,
`CommentToArticleIdArticle` int(11) NOT NULL,
`CommentToArticleComment` text NOT NULL,
`CommentToArticleIdUser` int(11) NOT NULL,
`CommentToArticlePID` int(11) NOT NULL,
PRIMARY KEY (`idCommentToArticle`),
UNIQUE KEY `idCommentToArticle_UNIQUE` (`idCommentToArticle`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=59 ;
--
-- Dump of data for table `commenttoarticle`
--
INSERT INTO `commenttoarticle` (`idCommentToArticle`, `CommentToArticleTime`, `CommentToArticleIdArticle`, `CommentToArticleComment`, `CommentToArticleIdUser`, `CommentToArticlePID`) VALUES
(29, 0, 11, 'продажам?\nИнтересует не мега-звезда, а именно предметный, руками умеющий продавать сам и помогающий выстраивать это бизнесам.', 459, 0),
(30, 0, 11, '2', 459, 0),
(31, 0, 11, '3', 459, 0),
(36, 0, 11, '3.1', 459, 31),
(37, 1413822798, 11, 'also facing that prob. on the plteform of win 7', 459, 29),
(38, 0, 11, ' here i dont have internet connection.. @Samint Sinha thanks ill check it out maybe tomorrow.', 459, 29),
(39, 0, 11, ' Select max id and you will have dhe last row returned', 459, 29),
(32, 0, 11, '4', 459, 0),
(44, 1414354324, 11, 'How to do', 456, 29),
(45, 1414354469, 11, 'sfsfsf', 456, 29),
(46, 1414354708, 11, 'dddd', 456, 29),
(47, 1414357761, 11, 'sfsfs', 456, 0),
(57, 1414370833, 39, 'kkkppppppp', 456, 0),
(49, 1414358233, 11, 'VSF\nSFSF', 456, 0),
(50, 1414359589, 11, 'How to do', 456, 0),
(51, 1414359660, 11, 'sfsfsdf', 456, 0),
(52, 1414361057, 11, 'SDFSF', 456, 0),
(53, 1414364023, 11, 'dsfdsjfsifmsi', 456, 0),
(54, 1414364031, 11, 'sdfdskjfnskf', 456, 52),
(55, 1414364034, 11, 'sdfdskjfnskf', 456, 52),
(56, 1414364044, 11, 'fndsdfnsofosfi', 456, 52),
(58, 1414370841, 39, 'dfgdfgdgdgdgdgdfgdgdfg', 456, 0);
```
**Result what I need:**
Here is an example at sqlfiddle: [sqlfiddle.com/#!2/dbd82a/1](http://sqlfiddle.com/#!2/dbd82a/1). I need to get the last 3 rows with an unlimited `COMMENTTOARTICLEPID` for each first query, if they exist. In the example, I need to get the rows with `IDCOMMENTTOARTICLE: 58, 57, 56, 52`
|
I am thinking that your subquery returns more than 1 row, hence replace your "=" with "IN". Like this:
```
SELECT * FROM commenttoarticle a
WHERE a.idCommentToArticle IN (SELECT CommentToArticlePID FROM commenttoarticle b)
ORDER BY a.idCommentToArticle DESC LIMIT 3
```
|
```
SELECT a.*
FROM commenttoarticle a
JOIN commenttoarticle b
ON b.CommentToArticlePID = a.idCommentToArticle
ORDER
BY a.idCommentToArticle DESC
LIMIT 3
```
|
Return all rows from sub-query SQL?
|
[
"",
"mysql",
"sql",
""
] |
I'm using MySQL and have the following table definition:
```
create table student_t (student_id int primary key, grade int);
```
I've then populated the table with the following:
```
insert into student_t values (1, 70), (2, 90), (3, 80);
```
I want to get the id and grade for the student with the highest grade.
I try the following but no records are output.
```
select * from student_t where grade = max(grade);
```
Help would be greatly appreciated. Thank you for your time!
|
You can't use `MAX()` function directly like that in `WHERE` condition (except in `HAVING` clause). Your query should look like below using sub query
```
select * from student_t where grade in (select max(grade) from student_t);
```
Or using `ORDER BY` clause along with `LIMIT` operator like
```
select * from student_t
order by grade desc limit 1;
```
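Both variants can be verified quickly with Python's sqlite3 and the question's data (here the column is spelled `student_id`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student_t (student_id INTEGER PRIMARY KEY, grade INT);
INSERT INTO student_t VALUES (1, 70), (2, 90), (3, 80);
""")

# A scalar subquery computes MAX(grade) first; the outer WHERE then uses it.
top = conn.execute(
    "SELECT * FROM student_t "
    "WHERE grade = (SELECT MAX(grade) FROM student_t)").fetchall()
print(top)  # [(2, 90)]
```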
|
`where` clauses are applied on a row-by-row basis, as the rows are being considered for inclusion in the result set. At the time your individual rows are being considered, the results of `max()` aren't available yet. You have to use a "having" clause, which is less efficient, or use a sub-query to get the max() value separately:
```
SELECT *
FROM student_t
HAVING grade = (SELECT MAX(grade) FROM student_t)
```
|
mysql query using max does not output any records
|
[
"",
"mysql",
"sql",
"select",
"max",
"aggregate-functions",
""
] |
I was asked this question during an interview for a Junior Oracle Developer position, the interviewer admitted it was a tough one:
Write a query/queries to check if the table 'employees\_hist' is an exact copy of the table 'employees'. Any ideas how to go about this?
EDIT: Consider that tables can have duplicate records so a simple MINUS will not work in this case.
EXAMPLE
```
EMPLOYEES
NAME
--------
Jack Crack
Jack Crack
Jill Hill
```
These two would not be identical.
```
EMPLOYEES_HIST
NAME
--------
Jack Crack
Jill Hill
Jill Hill
```
|
One possible solution, which caters for duplicates, is to create a subquery which does a UNION on the two tables, and includes the number of duplicates contained within each table by grouping on all the columns. The outer query can then group on all the columns, including the row count column. If the table match, there should be no rows returned:
```
create table employees (name varchar2(100));
create table employees_hist (name varchar2(100));
insert into employees values ('Jack Crack');
insert into employees values ('Jack Crack');
insert into employees values ('Jill Hill');
insert into employees_hist values ('Jack Crack');
insert into employees_hist values ('Jill Hill');
insert into employees_hist values ('Jill Hill');
with both_tables as
(select name, count(*) as row_count
from employees
group by name
union all
select name, count(*) as row_count
from employees_hist
group by name)
select name, row_count from both_tables
group by name, row_count having count(*) <> 2;
```
gives you:
```
Name Row_count
Jack Crack 1
Jack Crack 2
Jill Hill 1
Jill Hill 2
```
This tells you that both names appear once in one table and twice in the other, and therefore the tables don't match.
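The same approach runs unchanged on sqlite, so it can be checked from Python (an empty result list would mean the tables match):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees      (name TEXT);
CREATE TABLE employees_hist (name TEXT);
INSERT INTO employees      VALUES ('Jack Crack'), ('Jack Crack'), ('Jill Hill');
INSERT INTO employees_hist VALUES ('Jack Crack'), ('Jill Hill'), ('Jill Hill');
""")

# Per-table counts are unioned; a (name, count) pair present in both tables
# occurs twice, so HAVING COUNT(*) <> 2 keeps only the mismatches.
mismatches = conn.execute("""
WITH both_tables AS (
  SELECT name, COUNT(*) AS row_count FROM employees      GROUP BY name
  UNION ALL
  SELECT name, COUNT(*) AS row_count FROM employees_hist GROUP BY name
)
SELECT name, row_count
FROM both_tables
GROUP BY name, row_count
HAVING COUNT(*) <> 2
ORDER BY name, row_count
""").fetchall()
print(mismatches)
# [('Jack Crack', 1), ('Jack Crack', 2), ('Jill Hill', 1), ('Jill Hill', 2)]
```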
|
If the tables have the same columns, you can use this; this will return no rows if the rows in both tables are identical:
```
(
select * from test_data_01
minus
select * from test_data_02
)
union
(
select * from test_data_02
minus
select * from test_data_01
);
```
|
Check if tables are identical using SQL in Oracle
|
[
"",
"sql",
"oracle",
"compare",
""
] |
I have a stored procedure that does this...
```
CREATE PROCEDURE spGetRegion @postcode nvarchar(10)
AS
SELECT capt FROM REGION_DATA
where POSTCODE = @postcode
go
```
This means I can run
```
exec spGetCounty @postcode = 'rh12'
```
and it returns an area, such as "south east'
Is it possible to include the results of a stored procedure like this in a select statement. for example
```
SELECT
FIRST_NAME
, LAST_NAME
, POSTCODE
, << NEED TO INCLUDE RESULTS OF STORED PROCEDURE HERE PASSING IN POSTCODE
FROM PEOPLE
```
So something like
```
SELECT
FIRST_NAME
, LAST_NAME
, POSTCODE
, exec spGetCounty @postcode = 'rh12' << THIS BIT DOESN'T WORK !!
FROM PEOPLE
```
any ideas?
|
Instead of a stored procedure, consider using a scalar function:
```
CREATE FUNCTION spGetRegion (
@postcode nvarchar(10)
)
RETURNS varchar(255)
BEGIN
DECLARE @capt varchar(255);
SELECT @capt = capt
FROM REGION_DATA
where POSTCODE = @postcode;
RETURN(@capt);
END;
```
Then you can call it as:
```
SELECT FIRST_NAME, LAST_NAME, POSTCODE,
       dbo.spGetRegion('rh12')
FROM PEOPLE;
```
|
You need to create a function
```
CREATE function spGetRegion
(
@postcode nvarchar(10)
)
RETURNS varchar(100)
as
begin
SELECT capt FROM REGION_DATA
where POSTCODE = @postcode
end
```
and use it in your query like
```
SELECT
FIRST_NAME
, LAST_NAME
, POSTCODE
, dbo.spGetCounty('rh12')
FROM PEOPLE
```
|
incorrect syntax : including results of stored procedure in select statement
|
[
"",
"sql",
"sql-server",
""
] |
I have the following query:
```
SELECT s.username FROM `instagram_shop` s
INNER JOIN `instagram_shop_picture` p ON
s.id = p.shop_id
WHERE COUNT(p.id) = 0
GROUP BY p.id
;
```
I essentially wanted to find all shops that don't have any pictures yet, and to get the username of the shop. However, the above query gives me `#1111 - Invalid use of group function`. Why is this?
Here's what the instagram\_shop looks like:

and here's the instagram\_shop\_picture:

|
You have two things to correct here.
**`HAVING` for conditions related to Aggregate functions**
`Where` is for the filter to be applied before the data gets aggregated using Count, Min, Max, Sum, etc.
`Having` is to be used after the aggregation is done, and only on aggregation functions.
**Use a `LEFT OUTER JOIN` Or `EXISTS`**
When you use an `Inner Join`, you will only get rows from instagram\_shop, and instagram\_shop\_picture. To get rows that do not exist in instagram\_shop\_picture, you can use an `OUTER JOIN`, or use an `EXISTS`
```
SELECT s.username
FROM instagram_shop s
LEFT OUTER JOIN instagram_shop_picture p
ON s.id = p.shop_id
GROUP BY p.id
HAVING COUNT(p.id) = 0
```
OR
```
SELECT s.username
FROM instagram_shop s
Where Not Exists
(
Select 1
From instagram_shop_picture p
Where s.id = p.shop_id
)
```
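A minimal sqlite3 sketch of the `NOT EXISTS` variant, with shortened, made-up table names standing in for `instagram_shop` and `instagram_shop_picture`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shop    (id INT, username TEXT);
CREATE TABLE picture (id INT, shop_id INT);
INSERT INTO shop    VALUES (1, 'alpha'), (2, 'beta'), (3, 'gamma');
INSERT INTO picture VALUES (10, 1), (11, 1), (12, 3);
""")

# NOT EXISTS keeps only the shops with no matching picture row at all.
empty_shops = [r[0] for r in conn.execute("""
SELECT s.username
FROM shop s
WHERE NOT EXISTS (SELECT 1 FROM picture p WHERE p.shop_id = s.id)
ORDER BY s.username
""")]
print(empty_shops)  # ['beta']
```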
|
`WHERE` filters are executed against the data before aggregation occurs (aggregation being a GROUP BY or SUM() or AVG(), for instance). If you need to filter your results after aggregation you will need to use `HAVING`
```
SELECT s.username, count(p.id)
FROM `instagram_shop` s
LEFT JOIN `instagram_shop_picture` p ON
s.id = p.shop_id
GROUP BY s.username
HAVING COUNT(p.id) = 0;
```
It's important to note that `HAVING` can be an expensive procedure since the database has to return and aggregate all of your results before it is used. So if your returned results are HUGE, `HAVING` is going to churn for a while.
ADDED: As folks have pointed out, because of your schema, you need to use a LEFT JOIN here. Using an INNER JOIN causes you to drop records from your result where there are no pictures. Once that's done there will be no records before OR after aggregation where the count(p.id) = 0. So you get nothing back.
Seeing your schema you could also use:
```
SELECT s.username
FROM `instagram_shop` s
LEFT JOIN (SELECT shop_id, count(id) as countOfPictures FROM `instagram_shop_picture`) p ON
s.id = p.shop_id
WHERE p.countOfPictures IS NULL
```
This may actually reduce runtime since it involves aggregating on a smaller result from a single table (where the index can be taken into consideration by the DB). The derived table adds back some overhead, but overall you'll probably come out ahead.
|
Issue with count and group by to get inner join with count of 0
|
[
"",
"mysql",
"sql",
""
] |
I have a table with the schema below:
```
+---------+--------+
|studentId | course |
+---------+--------+
|1 | 2 |
|1 | 3 |
|1 | 4 |
|1 | 5 |
|2 | 4 |
|2 | 5 |
+---------+--------+
```
and I want to perform a query to get student Ids that don't have course 2 and 3
```
select * from students where course not in (2,3);
```
but it returns student IDs 1 and 2, and I want it to return student ID 2 only.
How do I do that?
|
You could do it like this:
```
select * from students where studentId not in -- exclude all students from course 2 & 3
(
--find all the students in course 2 & 3
select distinct studentId --could be duplicates might as well grab a distinct list.
from students
where course in (2,3)
)
```
|
*This answer assumes that OP wants to filter out students that have either course 2 or course 3 or both of them set.*
At first, find all students, who have course 2 or 3
```
SELECT DISTINCT studentId
FROM students
WHERE course IN (2,3)
```
Then, find all students, who are not in that list
```
SELECT *
FROM students
WHERE studentId NOT IN (...)
```
If you only want to return a list of studentIds, without their courses, replace `*` with `DISTINCT studentId`.
Put those together:
```
SELECT DISTINCT studentId
FROM students
WHERE studentId NOT IN (
SELECT DISTINCT studentId
FROM students
WHERE course IN (2,3)
)
```
|
select ids that doesn't have a specific values
|
[
"",
"mysql",
"sql",
""
] |
I have a requirement to generate an absentee report. Employees are allocated to a specific date range, and the daily IN/OUT punches are recorded in the CheckINOUT table.
**CheckINOUT Table**
EmployeeID-Int
CheckINOUT-smalldatetime
**EmpAllocation Table**
EmployeeID-Int
StartDate-smalldatetime
EndDate-smalldatetime
**Report Format**
EmployeeID | Date Absent
TIA.
|
I am not sure that this meets your actual requirement, but it may be a start for you. Try this (untested):
```
SELECT *
FROM empallocation A
WHERE NOT EXISTS (SELECT 1
FROM checkinout B
WHERE A.employeeid = B.employeeid
               AND b.checkinout BETWEEN a.startdate AND a.enddate)
```
I am adding this code based on the link you provided..
[SQL FIDDLE DEMO](http://sqlfiddle.com/#!3/4c649/5)
```
CREATE TABLE empalloc
(
eid INT,
startdate DATETIME,
endate DATETIME
)
INSERT INTO empalloc
VALUES (001,'2014-10-01','2014-10-15'),
(002,'2014-10-10','2014-10-15')
CREATE TABLE checkinout
( eid INT,
checkin DATETIME )
INSERT INTO checkinout
VALUES (001,'2014-10-03'),(001,'2014-10-04'),(001,'2014-10-08'),
(001,'2014-10-09'),(001,'2014-10-10'),(001,'2014-10-11'),
(001,'2014-10-13'),(001,'2014-10-15'),(002,'2014-10-12'),
(002,'2014-10-13')
;WITH cte
AS (SELECT eid,
startdate,
endate
FROM empalloc
UNION ALL
SELECT eid,
Dateadd(dd, 1, startdate) startdate,
endate
FROM cte
WHERE startdate < endate) SELECT eid,
startdate
FROM cte
WHERE startdate >= '2014-10-01'
AND startdate < '2014-10-10'
EXCEPT
SELECT eid,
checkin
FROM checkinout
```
In the where condition, just add the date range for which you need to generate the report.
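The same recursive-CTE idea (expand each allocation into one row per day, then subtract the check-in days) also works in sqlite, so it can be sketched and verified from Python with simplified, made-up table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alloc   (eid INT, startdate TEXT, enddate TEXT);
CREATE TABLE checkin (eid INT, day TEXT);
INSERT INTO alloc   VALUES (1, '2014-10-01', '2014-10-05');
INSERT INTO checkin VALUES (1, '2014-10-01'), (1, '2014-10-03'), (1, '2014-10-04');
""")

# Expand each allocation into one row per day, then remove the days
# that have a matching punch; what is left are the absences.
absences = conn.execute("""
WITH RECURSIVE days(eid, day, enddate) AS (
  SELECT eid, startdate, enddate FROM alloc
  UNION ALL
  SELECT eid, date(day, '+1 day'), enddate FROM days WHERE day < enddate
)
SELECT eid, day FROM days
EXCEPT
SELECT eid, day FROM checkin
ORDER BY eid, day
""").fetchall()
print(absences)  # [(1, '2014-10-02'), (1, '2014-10-05')]
```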
|
This will list all absences for each employee.
```
;WITH DateRange AS
(
SELECT EmployeeID, StartDate, EndDate FROM @EmpAllocation
UNION ALL
SELECT EmployeeID, StartDate + 1, EndDate FROM DateRange WHERE StartDate < EndDate
)
select
d.EmployeeID, d.StartDate
from DateRange d
left join @CheckINOUT c on c.EmployeeID = d.EmployeeID and c.CheckInOut = d.StartDate
where c.CheckInOut is null
order by d.EmployeeID, d.StartDate;
```
|
How to generate absentee report with date of absent
|
[
"",
"sql",
"sql-server",
""
] |
I have a table with 14 columns which can be referred to as column\_1 - column\_14. I need a table that is unique on a combination of two fields (i.e. column\_1 and column\_2). I cannot have any instances in this table where there are multiple rows containing the same information in column\_1 and 2. To give a clear understanding of what I mean, I referenced this post to identify the duplicates I speak of.
I have referenced [this post](https://stackoverflow.com/questions/3504012/sql-how-to-find-duplicates-based-on-two-fields)
Now, I need to learn how to delete these rows from my table, so I am left with completely unique rows based on columns 1 & 2.
Thank you
|
In order to find you duplicates you can use the following query
```
SELECT * FROM your_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM your_table
GROUP BY column1, column2); //those are the columns that define which row is unique
```
In order to delete the duplicates
```
DELETE FROM your_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM your_table
GROUP BY column1, column2); //those are the columns that define row is unique
```
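sqlite also exposes a `rowid` pseudo-column, so the delete can be exercised end-to-end from Python (column names here are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 INT, col2 INT, col3 TEXT);
INSERT INTO t VALUES (1,1,'a'), (1,1,'b'), (1,2,'c'), (1,2,'d'), (2,1,'e');
""")

# Keep the row with the smallest rowid in each (col1, col2) group,
# and delete every other duplicate.
conn.execute("""
DELETE FROM t
WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY col1, col2)
""")
rows = conn.execute("SELECT col1, col2, col3 FROM t ORDER BY rowid").fetchall()
print(rows)  # [(1, 1, 'a'), (1, 2, 'c'), (2, 1, 'e')]
```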
|
Oracle has a `rowid` pseudo-column that can be used exactly for this purpose. This means that you can do what you want even if your data has no unique ids (which is quite common in Oracle databases because setting up an auto-incrementing primary key is painful prior to Oracle 12).
The code looks like:
```
DELETE FROM table t
WHERE rowid > (SELECT min(rowid) FROM table t2 WHERE t2.col1 = t.col1 and t2.col2 = t.col2);
```
Note that `min(rowid)` is an *arbitrary* row. It is not necessarily the first row inserted with the two values.
|
Remove duplicates based on two fields
|
[
"",
"sql",
"oracle",
""
] |
I am new to SQL queries. I am writing a query in which I am supposed to return the title.
I have the following query so far; it runs, but it does not return the correct result.
Could someone guide me on where I am going wrong?
```
SELECT movie_title, release_year
FROM (
SELECT m.movie_title, m.release_year,COUNT(n.won),
COUNT(c.actor_name)
FROM MOVIE m JOIN ON (n.movie_title=m.movie_title AND n.release_year=m.release_year)
JOIN on (c.movie_title=m.movie_title AND c.release_year=m.release_year)
GROUP BY m.movie_title, m.release_year
HAVING COUNT(n.won)>2 OR COUNT(c.actor_name)>2
);
```
I have looked at examples online but have not been able to get the exact result.
Thank you for all the help.
|
Change your condition to;
```
HAVING ( COUNT(n.won) > 1 AND COUNT(c.actor_name) < 2 )
OR ( COUNT(n.won) < 2 AND COUNT(c.actor_name) > 1 )
```
Or using the more terse XOR as mentioned by a previous poster;
```
HAVING COUNT(n.won) > 1 XOR COUNT(c.actor_name) > 1
```
|
Most database systems have an `XOR` operator, which is exactly the behaviour you want.
From the [MySQL documentation](https://dev.mysql.com/doc/refman/5.0/en/logical-operators.html#operator_xor):
> [XOR] returns `NULL` if either operand is `NULL`. For non-`NULL` operands, evaluates to `1` if an odd number of operands is nonzero, otherwise `0` is returned.
And from the [Oracle 11g documentation](http://docs.oracle.com/cd/E15523_01/doc.1111/e12048/conditions.htm#i1052219):
> `XOR`: Returns `TRUE` if either component condition is `TRUE`. Returns `FALSE` if both are `FALSE`. Otherwise returns `UNKNOWN`.
From a mathematical perspective, XOR stands for eXclusive OR; it literally means "One or the other, but not both". Interestingly, it's logically equivalent to `NOT AND`, so you could use that just as effectively.
|
SQL Condition to get "a" or "b" but not both
|
[
"",
"mysql",
"sql",
"oracle11g",
""
] |
I have a table with customer numbers, order numbers & rank fields. Each customer number can have multiple orders with different rank values. Ex.
```
cust# order# rank
1 12 1
1 13 3
1 14 2
2 15 2
2 16 1
3 17 3
3 18 4
3 19 1
3 20 2
```
I am using this table to populate another table which looks like this.
```
cust order1 order2 order3 order4
1 12 14 13
2 16 15
3 19 20 17 18
```
So, how do I select the second, third minimum rank to populate the `order2, order3, order4` fields?
EDIT: I do not want to go by ranks 1, 2, 3, etc., because sometimes there might be no rank 2, in which case the order2 field would be empty but the order3 field populated. I want to go by minimum rank: the 1st minimum rank in order1, the second minimum rank in order2, and so on.
|
Use a combination of `ROW_NUMBER()` and `PIVOT`, like this:
```
DECLARE @tmp TABLE (Customer INT, OrderNumber INT, Ranking INT)
INSERT INTO @tmp (Customer, OrderNumber, Ranking)
SELECT 1, 12, 1 UNION
SELECT 1, 13, 3 UNION
--SELECT 1, 14, 2 UNION
SELECT 2, 15, 2 UNION
SELECT 2, 16, 1 UNION
SELECT 3, 17, 3 UNION
SELECT 3, 18, 4 UNION
SELECT 3, 19, 1 UNION
SELECT 3, 20, 2
SELECT
Customer,
MAX(Order1) AS Order1,
MAX(Order2) AS Order2,
MAX(Order3) AS Order3,
MAX(Order4) AS Order4
FROM
(
SELECT
*,
'Order' + CAST(ROW_NUMBER() OVER (PARTITION BY Customer ORDER BY Ranking) AS VARCHAR(4)) AS rn
FROM @tmp
) d
PIVOT
(
MAX(OrderNumber) FOR rn IN ([Order1], [Order2], [Order3], [Order4])
) p
GROUP BY Customer
```
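sqlite has no `PIVOT`, but the same result falls out of `ROW_NUMBER()` plus conditional aggregation (`MAX(CASE ...)`). A sketch with made-up column names, requiring sqlite 3.25+ for window functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (cust INT, ord INT, rnk INT);
INSERT INTO orders VALUES
 (1,12,1), (1,13,3), (1,14,2),
 (2,15,2), (2,16,1),
 (3,17,3), (3,18,4), (3,19,1), (3,20,2);
""")

# ROW_NUMBER() renumbers each customer's orders by ascending rank, so gaps
# in the rank values leave no holes; MAX(CASE ...) plays the role of PIVOT.
rows = conn.execute("""
SELECT cust,
       MAX(CASE WHEN rn = 1 THEN ord END) AS order1,
       MAX(CASE WHEN rn = 2 THEN ord END) AS order2,
       MAX(CASE WHEN rn = 3 THEN ord END) AS order3,
       MAX(CASE WHEN rn = 4 THEN ord END) AS order4
FROM (
  SELECT cust, ord,
         ROW_NUMBER() OVER (PARTITION BY cust ORDER BY rnk) AS rn
  FROM orders
)
GROUP BY cust
ORDER BY cust
""").fetchall()
print(rows)
# [(1, 12, 14, 13, None), (2, 16, 15, None, None), (3, 19, 20, 17, 18)]
```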
|
```
SELECT R1.cust#, R1.order#, R2.order#, r3.order#, r4.order#
FROM MyTable AS R1
LEFT JOIN MyTable AS R2 ON R1.cust# = R2.cust# AND R2.rank = 2
LEFT JOIN MyTable AS R3 ON R1.cust# = R3.cust# AND R3.rank = 3
LEFT JOIN MyTable AS R4 ON R1.cust# = R4.cust# AND R4.rank = 4
WHERE R1.rank = 1
```
|
Select rows with second minimum rank in SQL Server
|
[
"",
"sql",
"sql-server",
"minimum",
""
] |
How can we use an IN statement inside a SQL Server function?
Here is my code:
```
ALTER FUNCTION [dbo].[getMentalTable]
(
@Code VARCHAR(50),
@Statu VARCHAR(10),
@donem1 VARCHAR(10),
@donem2 VARCHAR(10),
@principal VARCHAR(10)
)
RETURNS TABLE
AS
RETURN
(
SELECT
m.Code,
m.CodeType,
k.TANIM,
kart.DonemAdi,
COUNT(DISTINCT m.DRGPatientId) AS VakaSayısı
FROM
TIGTest.dbo.MentalBozuklukVerileriOzet m,
TIGTest.dbo.KART_DONEM kart,
TIGTest.dbo.Hospitals h,
KODLAR k
WHERE m.Code IN(@Code)
AND kart.DonemKodu = m.DonemKodu
AND h.HospitalCode = m.HospitalCode
AND h.Statu='D'
AND m.DonemKodu BETWEEN @donem1 AND @donem2
AND k.KOD2=m.Code
AND m.IsPrincipal=@principal
GROUP BY
kart.DonemAdi,
m.DonemKodu,
m.Code,
m.CodeType,
k.TANIM,
kart.DonemAdi
)
```
I want to pass multiple values in the `@Code` variable, so this usage can't work. How can we use an IN statement in SQL Server functions?
Thanks.
|
Try this (considering that @Code will contain a CSV). You need to convert your CSV into rows to compare with the IN operator:
```
....
WHERE m.Code IN (SELECT Split.a.value('.', 'VARCHAR(100)')
FROM (SELECT Cast ('<M>' + Replace(@Code, ',', '</M><M>') + '</M>' AS XML) AS code) AS A
CROSS APPLY code.nodes ('/M') AS Split(a))
AND...
```
|
One method is to use dynamic SQL. That is, embed the list of codes in a string.
Another is to use a CTE to split the list into a table and use a `join` or `in`.
A third method that is simpler to write, but cannot take advantage of an index on `m.Code`, is:
```
WHERE ',' + @Code + ',' like '%,' + m.Code + ',%'
```
This uses `like` for this purpose.
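The delimiter-wrapping trick from the third method is easy to try out with sqlite3 (`||` is the standard concatenation operator there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE m (code TEXT);
INSERT INTO m VALUES ('A1'), ('B2'), ('C3');
""")

codes = "A1,C3"  # the CSV parameter, analogous to @Code
# Wrapping both sides in commas prevents partial matches like 'A1' vs 'A11'.
rows = [r[0] for r in conn.execute(
    "SELECT code FROM m WHERE ',' || ? || ',' LIKE '%,' || code || ',%' "
    "ORDER BY code", (codes,))]
print(rows)  # ['A1', 'C3']
```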
|
How can we use IN SQL Statement into SQL SERVER Function
|
[
"",
"sql",
"sql-server",
""
] |
I want to get the hours and minutes between two dates, but it's not working and I'm getting this error:
```
ORA-01843: not a valid month
01843. 00000 - "not a valid month"
```
Here's the script
```
select q.name as queue_name
,to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss')
,to_date(t.close_time, 'dd-mon-yyyy hh24:mi:ss')
,NUMTOYMINTERVAL(t.close_time - t.create_time, 'DAY')*24
,NUMTOYMINTERVAL(t.close_time - t.create_time, 'DAY')*24*60
from app_account.otrs_ticket t
left join app_account.otrs_user u
on t.create_user_id=u.id
left join app_account.otrs_queue q
on q.id=t.queue_id
group by to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss')
,to_date(t.close_time, 'dd-mon-yyyy hh24:mi:ss')
,NUMTOYMINTERVAL(t.close_time - t.create_time, 'DAY')*24
,NUMTOYMINTERVAL(t.close_time - t.create_time, 'DAY')*24*60
,q.name
order by to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss') desc;
```
|
Try this; you will at least get an idea of how to calculate the time difference between two dates.
```
SELECT floor(((date1-date2)*24*60*60)/3600)
|| ' HOURS ' ||
floor((((date1-date2)*24*60*60) -
floor(((date1-date2)*24*60*60)/3600)*3600)/60)
|| ' MINUTES ' ||
round((((date1-date2)*24*60*60) -
floor(((date1-date2)*24*60*60)/3600)*3600 -
(floor((((date1-date2)*24*60*60) -
floor(((date1-date2)*24*60*60)/3600)*3600)/60)*60) ))
|| ' SECS ' time_difference
FROM dates;
TIME_DIFFERENCE
--------------------------------------------------------------------------------
24 HOURS 0 MINUTES 0 SECS
1 HOURS 0 MINUTES 0 SECS
0 HOURS 1 MINUTES 0 SECS
```
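The arithmetic behind this is simply that subtracting two Oracle DATE values yields a fractional number of days; everything else is unit conversion. A small Python sketch of the same calculation (the timestamps are made up):

```python
from datetime import datetime

# Hypothetical create/close timestamps. In Oracle, close_time - create_time
# on DATE columns yields a fractional number of days.
create_time = datetime(2014, 10, 15, 9, 30, 0)
close_time = datetime(2014, 10, 16, 11, 45, 0)

days = (close_time - create_time).total_seconds() / 86400
total_minutes = days * 24 * 60  # * 24 gives hours, * 24 * 60 gives minutes
hours, minutes = divmod(round(total_minutes), 60)
print(hours, "HOURS", minutes, "MINUTES")  # 26 HOURS 15 MINUTES
```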
For further reference, please refer to this link:
[Click Here](http://www.orafaq.com/faq/how_does_one_get_the_time_difference_between_two_date_columns)
|
use `NUMTODSINTERVAL(close_date - create_date, 'DAY') or NUMTOYMINTERVAL(close_date - create_date, 'DAY')` for calculating the difference in days.
For hours multiply the value with 24 and for minutes with 24\*60.
```
select q.name as queue_name
,to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss') as create_date
,to_date(t.close_time, 'dd-mon-yyyy hh24:mi:ss') as close_date
,NUMTODSINTERVAL(close_date - create_date, 'DAY') or NUMTOYMINTERVAL(close_date - create_date, 'DAY')*24
,NUMTODSINTERVAL(close_date - create_date, 'DAY') or NUMTOYMINTERVAL(close_date - create_date, 'DAY') *24*60
from app_account.otrs_ticket t
left join app_account.otrs_user u
on t.create_user_id=u.id
left join app_account.otrs_queue q
on q.id=t.queue_id
where q.name not like 'Facilities Management::%'
and q.name not like 'HR::%'
and q.name not like 'Raw%'
and q.name not like 'Procurement::%'
and q.name not like 'Facilities Management%'
and q.name not like 'Junk%'
and q.name not like 'Facility Request Test%'
and q.name not like 'Misc%'
and q.name not like 'POS::POS issue - need paper%'
group by to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss') as create_date
,to_date(t.close_time, 'dd-mon-yyyy hh24:mi:ss') as close_date
order by to_date(t.create_time, 'dd-mon-yyyy hh24:mi:ss') desc;
```
|
How to get hours and minutes by subtracting two dates?
|
[
"",
"sql",
"oracle",
""
] |
If I have a table(Oracle or MySQL), which stores the date user logins.
So how can I write a SQL(or something else) to find the users who have continuously login for n days.
For example:
```
userID | logindate
1000 2014-01-10
1000 2014-01-11
1000 2014-02-01
1000 2014-02-02
1001 2014-02-01
1001 2014-02-02
1001 2014-02-03
1001 2014-02-04
1001 2014-02-05
1002 2014-02-01
1002 2014-02-03
1002 2014-02-05
.....
```
We can see that user 1000 logged in continuously for two days in 2014, user 1001 logged in continuously for 5 days, and user 1002 never logged in continuously.
The SQL should be extensible, which means I can pick any number n, modify a little or pass a new parameter, and get the expected results.
Thank you!
|
As we don't know which DBMS you are using (you named both MySQL and Oracle), here are two solutions, both doing the same thing: order the rows and subtract rownumber days from the login date (so if the 6th record is 2014-02-12 and the 7th is 2014-02-13, they both result in 2014-02-06). We then group by user and that groupday and count the days. Finally we group by user to find the longest series.
Here is a solution for a dbms with analytic window functions (e.g. Oracle):
```
select userid, max(days)
from
(
select userid, groupday, count(*) as days
from
(
select
userid, logindate - row_number() over (partition by userid order by logindate) as groupday
from mytable
)
group by userid, groupday
)
group by userid
--having max(days) >= 3
```
And here is a MySQL query (untested, because I don't have MySQL available):
```
select
userid, max(days)
from
(
select
userid, date_add(logindate, interval -row_number day) as groupday, count(*) as days
from
(
select
userid, logindate,
@row_num := @row_num + 1 as row_number
from mytable
cross join (select @row_num := 0) r
order by userid, logindate
)
group by userid, groupday
)
group by userid
-- having max(days) >= 3
```
|
I think the following query will give you a very extensible parametrization:
```
select z.userid, count(*) continuous_login_days
from
(
with max_dates as
( -- Get max date for every user ID
select t.userid, max(t.logindate) max_date
from test t
group by t.userid
),
ranks as
( -- Get ranks for login dates per user
select t.*,
row_number() over
(partition by t.userid order by t.logindate desc) rnk
from test t
)
-- So here, we select continuous days by checking if rank inside group
-- (per user ID) matches login date compared to max date
select r.userid, r.logindate, r.rnk, m.max_date
from ranks r, max_dates m
where m.userid = r.userid
and r.logindate + r.rnk - 1 = m.max_date -- here is the key
) z
-- Then we only group by user ID to get the number of continuous days
group by z.userid
;
```
Here is the result:
```
USERID CONTINUOUS_LOGIN_DAYS
1 1000 2
2 1001 5
3 1002 1
```
So you can just choose by querying field `CONTINUOUS_LOGIN_DAYS`.
**EDIT** : If you want to choose from all ranges (not only the last one), my query structure no longer works because it relied on the last range. But here is a workaround:
```
with w as
( -- Parameter
select 2 nb_cont_days from dual
)
select *
from
(
select t.*,
-- Get number of days around
(select count(*) from test t2
where t2.userid = t.userid
and t2.logindate between t.logindate - nb_cont_days + 1
and t.logindate) m1,
-- Get also number of days more in the past, and in the future
(select count(*) from test t2
where t2.userid = t.userid
and t2.logindate between t.logindate - nb_cont_days
and t.logindate + 1) m2,
w.nb_cont_days
from w, test t
) x
-- If these 2 fields match, then we have what we want
where x.m1 = x.nb_cont_days
and x.m2 = x.nb_cont_days
order by 1, 2
```
You just have to change the parameter in the `WITH` clause, so you can even create a function from this query to call it with this parameter.
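Since the answer suggests turning the query into a function, here is a hedged PL/SQL sketch of that idea; the function name `f_continuous_logins` is illustrative (not from the original answer), and `test` is the table used in the answer:

```sql
-- Sketch only: wrap the workaround query in a function taking the
-- number of continuous days as a parameter (replacing the WITH clause).
CREATE OR REPLACE FUNCTION f_continuous_logins(p_nb_days IN NUMBER)
  RETURN SYS_REFCURSOR
IS
  v_cur SYS_REFCURSOR;
BEGIN
  OPEN v_cur FOR
    SELECT x.userid, x.logindate
    FROM (
      SELECT t.userid, t.logindate,
             (SELECT COUNT(*) FROM test t2
               WHERE t2.userid = t.userid
                 AND t2.logindate BETWEEN t.logindate - p_nb_days + 1
                                      AND t.logindate) m1,
             (SELECT COUNT(*) FROM test t2
               WHERE t2.userid = t.userid
                 AND t2.logindate BETWEEN t.logindate - p_nb_days
                                      AND t.logindate + 1) m2
      FROM test t
    ) x
    WHERE x.m1 = p_nb_days
      AND x.m2 = p_nb_days
    ORDER BY 1, 2;
  RETURN v_cur;
END;
/
```

Callers would then fetch from the returned cursor instead of editing the `WITH` clause each time.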
|
How to write extensible SQL to find the users who continuously login for n days
|
[
"",
"mysql",
"sql",
"database",
"oracle",
""
] |
I need to get the data month-wise and sorted; for this I tried the following query:
```
select to_char(regn_date,'Mon-yyyy') "Month", count(id) "No of Persons"
from Person
group by to_char(regn_date,'Mon-yyyy')
order by to_char(regn_date,'Mon-yyyy')
```
and the output I get is
**Month Number of Persons**
```
Dec-2011 1383
Feb-2012 1230
Jan-2012 1409
Mar-2012 1495
Nov-2011 985
Oct-2011 825
Sep-2011 742
```
Ascending sort on a string value, i.e. `order by to_char(regn_date,'Mon-yyyy') asc`:
```
select to_char(regn_date,'Mon-yyyy') "Month", count(id) "No of Persons"
from Person
group by to_char(regn_date,'Mon-yyyy')
order by to_char(regn_date,'Mon-yyyy') asc
```
**Month Number of Persons**
```
Dec-2011 1383
Feb-2012 1230
Jan-2012 1409
Mar-2012 1495
Nov-2011 985
Oct-2011 825
Sep-2011 742
```
Getting the data sorted results in a different date format being displayed:
```
select to_date(to_char(regn_date,'MM-YYYY'),'MM-YYYY') "Month", count(id) "Number of Persons" from Person
where trunc(regn_date) < '31-MAR-2012'
group by to_date(to_char(regn_date,'MM-YYYY'),'MM-YYYY')
order by to_date(to_char(regn_date,'MM-YYYY'),'MM-YYYY')asc
```
Result:
**Month Number of Persons**
```
9/1/2011 742
10/1/2011 825
11/1/2011 985
12/1/2011 1383
1/1/2012 1409
2/1/2012 1230
3/1/2012 1495
```
The desired output is date wise sorted data showing month and year in ascending order.
**Month Number of Persons**
```
Sep-2011 742
Oct-2011 825
Nov-2011 985
Dec-2011 1383
Jan-2012 1409
Feb-2012 1230
Mar-2012 1495
```
|
Use `TO_CHAR` to display, use `TO_DATE` for operations.
```
SQL> WITH DATA AS(
2 SELECT 'Dec-2011' mnth, 1383 persons FROM dual UNION ALL
3 SELECT 'Feb-2012' , 1230 persons FROM dual UNION ALL
4 SELECT 'Jan-2012', 1409 persons FROM dual UNION ALL
5 SELECT 'Mar-2012', 1495 persons FROM dual UNION ALL
6 SELECT 'Nov-2011', 985 persons FROM dual UNION ALL
7 SELECT 'Oct-2011', 825 persons FROM dual UNION ALL
8 SELECT 'Sep-2011', 742 persons from dual
9 )
10 SELECT to_char(to_date(mnth,'Mon-YYYY'), 'Mon-YYYY') mnth, persons
11 FROM DATA
12 ORDER BY to_date(mnth,'Mon-YYYY')
13 /
MNTH PERSONS
-------- --------------------
Sep-2011 742
Oct-2011 825
Nov-2011 985
Dec-2011 1383
Jan-2012 1409
Feb-2012 1230
Mar-2012 1495
7 rows selected.
SQL>
```
**EDIT** The `WITH` clause only builds a sample table for demonstration. In your database, just replace DATA with your table name and execute the select query.
So, your final query would look like,
```
SELECT to_char(to_date(regn_date,'Mon-YYYY'), 'Mon-YYYY') regn_date,
count(id) "No of Persons"
FROM person
GROUP BY to_date(regn_date,'Mon-YYYY')
ORDER BY to_date(regn_date,'Mon-YYYY')
/
```
This assumes the `regn_date` input is a string `literal`.
If the `regn_date` is a `date`, then,
```
SELECT to_char(regn_date,'Mon-YYYY') regn_date,
count(id) "No of Persons"
FROM person
GROUP BY to_char(regn_date,'Mon-YYYY')
ORDER BY regn_date
/
```
|
Just sort by the actual date, without converting to string:
```
SELECT to_char(regn_date,'Mon-yyyy') "Month", count(id) "No of Persons"
FROM Person
GROUP BY to_char(regn_date,'Mon-yyyy')
ORDER BY regn_date
```
|
Grouping and sorting data month wise sorted data showing month and year in ascending order
|
[
"",
"sql",
"database",
"sorting",
"oracle-sqldeveloper",
""
] |
I created a function "ufngetFirstDateOfMonth" and "ufngetLastDateOfMonth" stored in Microsoft SQL Server 2008. My purpose is to send some date into the function and it will return the first date of month with '00:00:00' or the last date of month with '23:59:59'.
I call the function like this:
```
exec ufngetLastDateOfMonth('2014-10-15')
```
and normally it returns '2014-10-31 23:59:59'
but when I send the last date of a month that has 31 days (August, January, ...):
```
exec ufngetLastDateOfMonth('2014-10-31')
```
it returns '2014-10-30 23:59:59', which is not correct. Actually, it should be '2014-10-31 23:59:59'.
Something goes wrong here...
This is my function:
```
CREATE FUNCTION [dbo].[ufnLastDateOfMonth](@Date date)
RETURNS varchar(50)
AS
BEGIN
DECLARE @New_Date varchar(50)
select @New_date = cast(dateadd(dd,-(DAY(@Date )),DATEADD(mm,1,@Date ))as varchar(50)) + ' 23:59:59'
RETURN @New_Date
END
```
|
To get the last day you can do this:
```
SELECT DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,'2014-08-12')+1,0))
```
Adding to your function:
```
select @New_date = DATEADD(s,-1,DATEADD(mm, DATEDIFF(m,0,@date)+1,0))
```
Source:
[SQL SERVER – Find Last Day of Any Month – Current Previous Next](http://blog.sqlauthority.com/2007/08/18/sql-server-find-last-day-of-any-month-current-previous-next/)
|
For those who are using SQL Server 2012,
`EOMONTH` function could be an alternative.
```
DECLARE @date DATETIME = '12/1/2011';
SELECT EOMONTH ( @date ) AS Result;
GO
```
Source: <https://msdn.microsoft.com/en-us/library/hh213020.aspx>
|
How to get last date of month SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
```
SELECT months_between(TO_DATE(SYSDATE,'DD-MM-YYYY'),TO_DATE(DATE_COLUMN,'DD-MM-YYYY')) FROM TABLE_A;
```
1. I got an error which says, 'a non-numeric character was found where a numeric was expected'.
Also, how do I handle nulls? I want to put a default date if the date\_column is null in table\_A.
2.
```
SELECT MONTHS_BETWEEN(TO_DATE(SYSDATE,'DD-MM-YYYY'),TO_DATE(DATE_COLUMN,'DD-MM-YYYY')) FROM TABLE_A;
```
After calculating the months between dates, I want to categorize the records into date ranges with a case statement.
For example:
if months\_between for a set of records is 22, I want to put a flag, '0-24', for all those records;
similarly, if months\_between is 34, I want to put a flag, '24-48', for all the records which fall under that range.
|
To add to the previous answer, you can avoid nulls like this:
`SELECT months_between(trunc(SYSDATE), nvl(TO_DATE(DATE_COLUMN,'DD-MM-YYYY'), sysdate))`
If your `DATE_COLUMN` is of date type, then you don't need to use `TO_DATE()`.
So you'll get something like this:
`SELECT months_between(trunc(SYSDATE), nvl(trunc(DATE_COLUMN), sysdate))`
For the second part of your question, you may try something like this:
```
SELECT
trunc(months_between(trunc(SYSDATE), trunc(DATE_COLUMN))),
(trunc(months_between(trunc(SYSDATE), trunc(DATE_COLUMN))/24)*24)||'-'||((trunc(months_between(trunc(SYSDATE), trunc(DATE_COLUMN))/24)*24)+24)
FROM your_table
```
|
You don't need to convert `sysdate` to a date. Try this:
```
SELECT months_between(trunc(SYSDATE), TO_DATE(DATE_COLUMN,'DD-MM-YYYY'))
```
The problem is that the default format for a date in Oracle uses the month name. When you say `to_date(sysdate, . . .)`, the first argument is converted to a string and then back to a date.
By the way, the `trunc()` is irrelevant, but you seemed to want to extract the day portion of `sysdate` so I left it in.
EDIT:
If `date_column` is already a date, just use:
```
SELECT months_between(trunc(SYSDATE), DATE_COLUMN)
```
|
MONTHS_BETWEEN error and to create flags
|
[
"",
"sql",
"oracle",
"sql-date-functions",
""
] |
I have table tableA as below:
```
+--------+----------------------+
| Id | Ref |
+--------+----------------------+
| 1 | RN1-102,RN2-103 |
| 2 | RN1-106 |
| 3 | RN2-203 |
| 4 | NULL |
| 5 | RN1-104|,RN2-107 |
| 6 | RN1-101,RN2-105 |
| 7 | RN1-100,RN2-109 |
+--------+----------------------+
```
I need a output (distinct Ref from tableA) as below:
```
+--------------------+
| Distinct Ref data |
+--------------------+
| RN1-100 |
| RN1-101 |
| RN1-102 |
| RN1-104 |
| RN1-106 |
| RN2-103 |
| RN2-105 |
| RN7-107 |
| RN2-109 |
| RN2-203 |
+--------------------+
```
I tried with below query:
```
select distinct Ref from tableA
```
Please help me out..
|
Try using this
```
SELECT distinct Split.a.value('.', 'VARCHAR(100)') REF
FROM (select ID,Cast ('<M>'
+ replace(Replace(REF, ',', '</M><M>'),'&','&')
+ '</M>' AS XML) AS Data from #Table) AS A
CROSS APPLY Data.nodes ('/M') AS Split(a)
```
|
This question is similar to [this question](https://stackoverflow.com/a/25624014/3630826) I answered before. I offered two approaches in that answer, one using recursion and one not.
But if you already have a table that stores the `ref` keys, you have a third choice:
```
create table ref_table (ref varchar(10), ref_name varchar(100), primary key (ref));
select
r.ref
from
ref_table r
where
EXISTS (
select 1
from tableA a
where a.ref like '%'+r.ref+'%' -- or ','+a.ref+',' like '%,'+r.ref+',%'
)
```
`SQL FIDDLE DEMO`
|
how to get a string from a list of string using SQL query
|
[
"",
"sql",
"sql-server-2008",
""
] |
I have 3 columns, lets call them `column A`, `column B` and `column C` each column is a date.
I am trying to write the where logic like this:
If column A is NULL or if column C date is greater than today's date minus 3 days
If column B is NULL or if column C date is greater than today's date minus 3 days
If column C is NULL or if column C date is greater than today's date minus 3 days
I have come up with this, but when I run it, I am missing items:
```
( columnA IS NULL OR columnA >= Dateadd(day, -3, Getdate()) )
AND
( columnB IS NULL OR columnB >= Dateadd(day, -3, Getdate()) )
AND
( columnC IS NULL OR columnC >= Dateadd(day, -3, Getdate()) )
```
This is causing a conflict, for an example, I have one item that is missing and here is why:
columnA date is less than today's date minus 3 days
columnB date is less than today's date minus 3 days
columnC is NULL
therefore, since columnC is NULL, the item should appear. What am I doing wrong?
Thanks,
I am using SQL Server Management Studio 2008
|
Your problem description and your code differ in how the 'greater than' clause is associated, so the answer is ambiguous as well. Please clarify your question if so; also see the optimization hints in the earlier comment.
a) This should be the logically correct answer to your description:
```
WHERE ( columnA IS NULL OR columnB IS NULL OR columnC IS NULL OR columnC >= Getdate()-3 )
```
b) Derived from your code, this should be the logically correct answer (but not optimized yet) :
```
WHERE ( columnA IS NULL OR columnA >= Dateadd(day, -3, Getdate())
OR columnB IS NULL OR columnB >= Dateadd(day, -3, Getdate())
OR columnC IS NULL OR columnC >= Dateadd(day, -3, Getdate())
)
```
|
It sounds like you really want to use `OR` rather than `AND` between your clauses.
```
( columnA IS NULL OR columnA >= Dateadd(day, -3, Getdate()) )
OR
( columnB IS NULL OR columnB >= Dateadd(day, -3, Getdate()) )
OR
( columnC IS NULL OR columnC >= Dateadd(day, -3, Getdate()) )
```
|
SQL - Issue with WHERE logic
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I found a way to combine multiple rows into one comma-separated row, but now I would like to remove the last comma.
```
CREATE TABLE supportContacts
(
id int identity primary key,
type varchar(20),
details varchar(30)
);
INSERT INTO supportContacts (type, details)
VALUES ('Email', 'admin@sqlfiddle.com'),
('Twitter', '@sqlfiddle');
```
This query combines types, but I want to now remove the last comma:
```
SELECT top (2)
type + ', ' AS 'data()'
FROM
supportContacts
ORDER BY
type DESC
FOR XML PATH('')
```
This is the current result:
```
Twitter, Email,
```
|
```
declare @BigStrRes8K nvarchar(4000)
SELECT @BigStrRes8K = ( SELECT top (2) [type] + ', ' AS 'data()'
FROM supportContacts
ORDER BY type DESC
FOR XML PATH('') )
SELECT LEFT(RTRIM(@BigStrRes8K), ( LEN(RTRIM(@BigStrRes8K))) - 1) as FinalNoComma
```
I would never do this where I controlled the rendering code; I would teach the caller to handle the trailing comma. Also, you have to allow for nulls and the 4K or 8K string limits of SQL rows.
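A hedged sketch of the null handling mentioned above, assuming the `@BigStrRes8K` variable from the answer; the `CASE` guard is an illustration, not part of the original:

```sql
-- Sketch: skip the trim entirely when the concatenated string is NULL or
-- empty, so LEN(...) - 1 never produces an invalid length argument.
SELECT CASE
         WHEN @BigStrRes8K IS NULL OR LEN(RTRIM(@BigStrRes8K)) = 0 THEN ''
         ELSE LEFT(RTRIM(@BigStrRes8K), LEN(RTRIM(@BigStrRes8K)) - 1)
       END AS FinalNoComma;
```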
|
While you already have an answer, another common idiom that you'll see is:
```
select stuff((
SELECT top (2)
', ' type AS 'data()'
FROM
supportContacts
ORDER BY
type DESC
FOR XML PATH('')
), 1, 2, '')
```
This says "take the result of the select and replace the two characters starting at position 1 with a zero-length string".
|
SQL Server: how to remove last comma after combining rows using XML Path
|
[
"",
"sql",
"sql-server",
"for-xml-path",
""
] |
I tried to do a lot of research but I'm more of a db guy - so even the explanation in the MSDN doesn't make any sense to me. Can anyone please explain, and provide some examples on what `Include()` statement does in the term of `SQL` query?
|
Let's say for instance you want to get a list of all your customers:
```
var customers = context.Customers.ToList();
```
And let's assume that each `Customer` object has a reference to its set of `Orders`, and that each `Order` has references to `LineItems` which may also reference a `Product`.
As you can see, selecting a top-level object with many related entities could result in a query that needs to pull in data from many sources. As a performance measure, `Include()` allows you to indicate which related entities should be read from the database as part of the same query.
Using the same example, this might bring in all of the related order headers, but none of the other records:
```
var customersWithOrderDetail = context.Customers.Include("Orders").ToList();
```
As a final point since you asked for SQL, the first statement without `Include()` could generate a simple statement:
```
SELECT * FROM Customers;
```
The final statement which calls `Include("Orders")` may look like this:
```
SELECT *
FROM Customers JOIN Orders ON Customers.Id = Orders.CustomerId;
```
|
I just wanted to add that "Include" is part of eager loading, which is described in the Entity Framework 6 tutorial by Microsoft. Here is the link:
<https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/reading-related-data-with-the-entity-framework-in-an-asp-net-mvc-application>
---
Excerpt from the linked page:
> Here are several ways that the Entity Framework can load related data into the navigation properties of an entity:
>
> **Lazy loading.** When the entity is first read, related data isn't retrieved. However, the first time you attempt to access a navigation property, the data required for that navigation property is automatically retrieved. This results in multiple queries sent to the database — one for the entity itself and one each time that related data for the entity must be retrieved. The DbContext class enables lazy loading by default.
>
> **Eager loading.** When the entity is read, related data is retrieved along with it. This typically results in a single join query that retrieves all of the data that's needed. You specify eager loading by using the **`Include`** method.
>
> **Explicit loading.** This is similar to lazy loading, except that you explicitly retrieve the related data in code; it doesn't happen automatically when you access a navigation property. You load related data manually by getting the object state manager entry for an entity and calling the Collection.Load method for collections or the Reference.Load method for properties that hold a single entity. (In the following example, if you wanted to load the Administrator navigation property, you'd replace `Collection(x => x.Courses)` with `Reference(x => x.Administrator)`.) Typically you'd use explicit loading only when you've turned lazy loading off.
>
> Because they don't immediately retrieve the property values, lazy loading and explicit loading are also both known as deferred loading.
|
What does Include() do in LINQ?
|
[
"",
"sql",
"linq",
""
] |
I have 2 DB tables that both share an Order Number column.
One table is "orders" and the Order Number is the unique key.
The second table is my "transactions" table, which has one row per transaction made for each order number. Since we take monthly payments, the "transactions" table obviously has multiple rows, each with a unique date but many repeats of each Order Number.
How can I run a query which has a list of unique OrderNumbers in one column, and the latest "TransDate" (Transaction Date) in the second column.
I tried the below, but it's pulling back the first TransDate that exists for each ordernumber, not the latest one. I think I need a subquery of some sort:
```
select orders.ordernumber, transdate from orders
join transactions on transactions.ordernumber = orders.ordernumber
where status = 'booking'
group by ordernumber
order by orders.ordernumber, TransDate DESC
```
|
You should just use `MAX()` function along with grouping on order number. There also doesn't seem to be any reason to do a join here.
```
SELECT
ordernumber,
MAX(transdate) AS maxtransdate
FROM transactions
WHERE status = 'booking'
GROUP BY ordernumber
ORDER BY ordernumber ASC
```
|
Use aggregate functions, specifically `max()`:
```
select o.ordernumber, max(transdate) as last_transdate
from orders as o
inner join transactions as t on o.ordernumber = t.ordernumber
-- where conditions go here
group by ordernumber
```
If you need to pull the details of the last transaction for each order, you can use the above query as a data source of another query and join it with the transactions table:
```
select a.*, t.*
from (
select o.ordernumber, max(transdate) as last_transdate
from orders as o
inner join transactions as t on o.ordernumber = t.ordernumber
-- where conditions go here
group by ordernumber
) as a
inner join transactions as t on a.ordernumber = t.ordernumber and a.last_transdate = t.transdate
```
|
How can I write an SQL query to show the latest transaction in one table?
|
[
"",
"mysql",
"sql",
""
] |
I am working with SQL Server 2008 and facing a problem with character replacement.
If I use
```
SELECT REPLACE(MYWORD,0,1) FROM MYTABLE
```
It is replacing every 0 with 1. I just want to replace the last character, e.g. `MYWORD = "ERMN0"` should become `MYWORD = "ERMN1"`.
|
using `STUFF`, which, IMO, ends up being most readable:
```
DECLARE @MyWORD VARCHAR(20) = 'ABCDEF123'
SELECT STUFF(@MyWORD, LEN(@MyWORD), 1, '2')
```
output:
```
ABCDEF122
```
|
You may use combination of `LEFT`, `RIGHT`, and `CASE`.
You need to use `CASE` to check the most `RIGHT` character whether it's a `0` or not and replace it with `1`. And at last, combine it with the `LEFT` part (after being separated from the last character) of the `MYWORD` string.
However, depending on your requirement, it may have a drawback.
When there is a word ending with `10`, it would also be replaced.
```
SELECT LEFT(MYWORD,LEN(MYWORD)-1) + CASE RIGHT(MYWORD,1) WHEN '0' THEN '1' ELSE RIGHT(MYWORD,1) END
```
|
Replace Last character in SQL Server 2008
|
[
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
How do I return a particular string if a column value is null in MySQL? Basically, in the instance below, the value "tableA.Name" is sometimes null because the user forgot to fill it in.
In this case, rather than returning null, I'd like some sort of if statement to check whether the value is null and then say "Name not assigned".
My MySQL query:
```
SELECT `shoes_item`.Product_Name, `run_task`.Name, `shoes_item`.Notes FROM `shoes_item` As test
LEFT OUTER JOIN `run_item`
ON `run_item`.shoes_ID = `shoes_item`.shoes_ID
LEFT OUTER JOIN `run_details`
ON `run_details`.run_Item_ID = `run_item`.run_Item_ID
```
|
Is this what you are looking for?
```
SELECT IFNULL(`shoes_item`.Product_Name,"name not assigned") as product_name, `run_task`.Name, `shoes_item`.Notes FROM `shoes_item` As test
LEFT OUTER JOIN `run_item`
ON `run_item`.shoes_ID = `shoes_item`.shoes_ID
LEFT OUTER JOIN `run_details`
ON `run_details`.run_Item_ID = `run_item`.run_Item_ID
```
|
```
select coalesce(tableA.name, 'Name not assigned') as name
```
|
How do I return a particular string if column value is null MySQL
|
[
"",
"mysql",
"sql",
"database",
""
] |
I have a Redshift table which is used for tracking, and as a result it's pretty huge. I need to update one column after applying some text operations and extracting a value from another column.
The query that I have managed to write works only for one row.
```
UPDATE schema.table_name SET data_id = (SELECT split_part(regexp_substr(data_column,'pattern=[^&]*'),'=',2)::BIGINT FROM schema.table_name where id = 1620) WHERE id = 1620;
```
How do I get it to work for every row in the table.
|
```
UPDATE
schema.table_name
SET
data_id = SPLIT_PART(REGEXP_SUBSTR(data_column, 'pattern=[^&]*'),'=',2)::BIGINT;
```
|
Just don't put `WHERE id = 1620;` at end of update query.
|
Update redshift column value with modified data from other column
|
[
"",
"sql",
"postgresql",
"amazon-redshift",
""
] |
I am spooling a package from a database and this is what I get:
```
CREATE OR REPLACE PACKAGE BODY "CPI"."GIPI_WBOND_BASIC_PKG"
AS
FUNCTION get_gipi_wbond_basic (p_par_id gipi_wbond_basic.par_id%TYPE)
RETURN gipi_wbond_basic_tab PIPELINED
IS
v_wbond gipi_wbond_basic_type;
BEGIN
FOR i IN (SELECT a.par_id, a.obligee_no, a.bond_dtl, a.inde
mnity_text,
a.clause_type, a.waiver_limit, a.contract_date, a.cont
ract_dtl,
a.prin_id, a.co_prin_sw, a.np_no, a.coll
_flag,
a.plaintiff_dtl, a.defendant_dtl, a.civil_case_no
FROM gipi_wbond_basic a
WHERE a.par_id = p_par_id)
```
And I am expecting it to be something like this:
```
CREATE OR REPLACE PACKAGE BODY cpi.gipi_wbond_basic_pkg
AS
FUNCTION get_gipi_wbond_basic (p_par_id gipi_wbond_basic.par_id%TYPE)
RETURN gipi_wbond_basic_tab PIPELINED
IS
v_wbond gipi_wbond_basic_type;
BEGIN
FOR i IN (SELECT a.par_id, a.obligee_no, a.bond_dtl, a.indemnity_text,
a.clause_type, a.waiver_limit, a.contract_date,
a.contract_dtl, a.prin_id, a.co_prin_sw, a.np_no,
a.coll_flag, a.plaintiff_dtl, a.defendant_dtl,
a.civil_case_no
FROM gipi_wbond_basic a
WHERE a.par_id = p_par_id)
```
Please help me figure out how to get rid of those newlines and the ugly formatting. Thanks!
|
Ok this one solved my problem.
From this,
```
SET HEADING OFF;
SET ECHO OFF;
SET PAGES 999;
SET LONG 999999;
```
I added this:
```
SET LONGCHUNKSIZE 999999;
SET PAGESIZE 0;
SET LINESIZE 500;
```
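Putting both groups of settings together, a complete spool script might look like the sketch below; the `SPOOL` target, the `DBMS_METADATA.GET_DDL` call, and `SET TRIMSPOOL ON` are assumptions added for illustration and are not from the original answer:

```sql
SET HEADING OFF;
SET ECHO OFF;
SET PAGES 999;
SET LONG 999999;
SET LONGCHUNKSIZE 999999;
SET PAGESIZE 0;
SET LINESIZE 500;
SET TRIMSPOOL ON;      -- assumption: also trim trailing blanks from each line
SPOOL package_body.sql
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'GIPI_WBOND_BASIC_PKG', 'CPI') FROM dual;
SPOOL OFF
```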
|
To remove extra line breaks, try:
**SET FEED OFF**
|
How to remove unnecessary line breaks in SQL Plus Spooling?
|
[
"",
"sql",
"database",
"command-line",
"sqlplus",
"spool",
""
] |
I am using this query to insert a row in the table tblWorkOrders.
But @@ROWCOUNT is giving 0 even though a row is inserted into the table.
What may be the reason? How do I change the query to give the proper @@ROWCOUNT?
```
INSERT INTO tblWorkOrders
( WO_Type , WO_Yard , WO_Operation , WO_Source , WO_SourceID , WO_SourceXLocation , WO_SourceYLocation ,
WO_Destination , WO_DestinationID , WO_DestinationXLoc, WO_DestinationYLoc , WO_Crane , WO_Priority ,
WO_ScheduledAt , WO_CoilNo , WO_ExecutionStart , WO_ExecutionEnd , WO_ExecutedBy , WO_Remarks,WO_RequestRefID )
( SELECT OM_WOType,OM_Yard,OM_OperationsID,'Saddle' ,STARTINGPOINT.NAME,STARTINGPOINT.XAxis,STARTINGPOINT.YAxis,
'Saddle',ENDINGPOINT.NAME,ENDINGPOINT.XAxis,ENDINGPOINT.YAxis,OM_AssignedCrane,NULL,
NULL,@COILNO,NULL,NULL,NULL,NULL,NULL
FROM
tblOperationsMaster OPM
INNER JOIN
(SELECT SP_OperatingPointID AS [NAME],SP_XLocation AS [XAxis],SP_YLocation AS [YAxis] FROM tblStandardPoints
UNION ALL
SELECT SM_SaddleID AS [NAME],SM_XCoordinate AS [XAxis],SM_YCoordinate AS [YAxis] FROM tblSaddleMaster) STARTINGPOINT
ON STARTINGPOINT.NAME = @SOURCEIDFROMCALLINGFN
INNER JOIN
(SELECT SP_OperatingPointID AS [NAME],SP_XLocation AS [XAxis],SP_YLocation AS [YAxis] FROM tblStandardPoints
UNION ALL
SELECT SM_SaddleID AS [NAME],SM_XCoordinate AS [XAxis],SM_YCoordinate AS [YAxis] FROM tblSaddleMaster) ENDINGPOINT
ON ENDINGPOINT.NAME = @DESTINATIONIDFROMCALLINGFN
WHERE OPM.OM_OperationsID = @OPERATIONID)
SET @ROWSINSERTED = @@ROWCOUNT;
```
|
Hi, you will get the `rowcount` like this:
```
SELECT @ROWSINSERTED =@@ROWCOUNT
```
Hope it helps
|
Try this; I have provided an example, check it.
Use this after the insert.
```
SELECT @ROWSINSERTED=@@ROWCOUNT
SELECT @ROWSINSERTED AS Rows
```
|
@@rowcount not giving correct value after a insert
|
[
"",
"sql",
"sql-server-2012",
"rowcount",
""
] |
I want to return all the rows from a table which are unique. I.e., if a certain field in two rows contains the same name, that name shouldn't be shown.
|
Since you want only the unique names (and not a unique row for every name, like you would get with `DISTINCT`), you have to use `GROUP BY` with `HAVING` (instead of `WHERE`, because the condition is on the result of an aggregate function, not a column):
```
SELECT name FROM myTable GROUP BY name HAVING COUNT(name) = 1
```
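To illustrate the difference described above, here is a small sketch with invented sample data (the table contents are not from the question):

```sql
-- Invented sample: 'bob' appears twice, 'alice' once.
CREATE TABLE myTable (name VARCHAR(20));
INSERT INTO myTable VALUES ('alice'), ('bob'), ('bob');

-- SELECT DISTINCT name FROM myTable
--   returns both 'alice' and 'bob' (one row each), whereas:
SELECT name FROM myTable GROUP BY name HAVING COUNT(name) = 1;
--   returns only 'alice', the name that occurs exactly once.
```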
|
`SELECT DISTINCT column_name FROM table`
|
find unique rows using SQL?
|
[
"",
"sql",
"postgresql",
""
] |
I create a table with 43 million rows and populate them with values 1..200, so there are ~220k rows for each number, spread throughout the table.
```
create table foo (id integer primary key, val bigint);
insert into foo
select i, random() * 200 from generate_series(1, 43000000) as i;
create index val_index on foo(val);
vacuum analyze foo;
explain analyze select id from foo where val = 55;
```
Result:
<http://explain.depesz.com/s/fdsm>
I expect total runtime < 1s, is it possible?
I have SSD, core i5 (1,8), 4gb RAM. 9,3 Postgres.
If I use Index Only scan it works very fast:
```
explain analyze select val from foo where val = 55;
```
<http://explain.depesz.com/s/7hm>
But I need to select *id*, not *val*, so Index Only Scan is not suitable in my case.
Thanks in advance!
Additional info:
```
SELECT relname, relpages, reltuples::numeric, pg_size_pretty(pg_table_size(oid))
FROM pg_class WHERE oid='foo'::regclass;
```
Result:
```
"foo";236758;43800000;"1850 MB"
```
Config:
```
"cpu_index_tuple_cost";"0.005";""
"cpu_operator_cost";"0.0025";""
"cpu_tuple_cost";"0.01";""
"effective_cache_size";"16384";"8kB"
"max_connections";"100";""
"max_stack_depth";"2048";"kB"
"random_page_cost";"4";""
"seq_page_cost";"1";""
"shared_buffers";"16384";"8kB"
"temp_buffers";"1024";"8kB"
"work_mem";"204800";"kB"
```
|
I got an answer here:
<http://ask.use-the-index-luke.com/questions/235/postgresql-bitmap-heap-scan-on-index-is-very-slow-but-index-only-scan-is-fast>
The trick is to use composite index for id and value:
```
create index val_id_index on foo(val, id);
```
So Index Only scan will be used, but I can select *id* now.
```
select id from foo where val = 55;
```
Result:
<http://explain.depesz.com/s/nDt3>
But this works ONLY in Postgres version 9.2+.
If you are forced to use an earlier version, try other options.
|
Although you're querying only 0.5% of the table, or ~10MB worth of data (out of a nearly 2GB table), the values of interest are spread evenly across the whole table.
You can see it in the first plan you've provided:
* `BitmapIndexScan` completes in 123.172ms
* `BitmapHeapScan` takes 17055.046ms.
You can try clustering your tables based on index order, which will put rows together on the same pages. On my SATA disks I have the following:
```
SET work_mem TO '300MB';
EXPLAIN (analyze,buffers) SELECT id FROM foo WHERE val = 55;
Bitmap Heap Scan on foo (...) (actual time=90.315..35091.665 rows=215022 loops=1)
Heap Blocks: exact=140489
Buffers: shared hit=20775 read=120306 written=24124
SET maintenance_work_mem TO '1GB';
CLUSTER foo USING val_index;
EXPLAIN (analyze,buffers) SELECT id FROM foo WHERE val = 55;
Bitmap Heap Scan on foo (...) (actual time=49.215..407.505 rows=215022 loops=1)
Heap Blocks: exact=1163
Buffers: shared read=1755
```
Of course, this is a one-time operation, and the clustering will degrade bit by bit over time.
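As a hedged follow-up to that caveat: after the initial `CLUSTER ... USING`, PostgreSQL remembers the index used, so periodic re-clustering can be as simple as:

```sql
-- Re-cluster foo on the index recorded by the earlier CLUSTER ... USING,
-- then refresh planner statistics.
CLUSTER foo;
ANALYZE foo;
```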
|
PostgreSQL Bitmap Heap Scan on index is very slow but Index Only Scan is fast
|
[
"",
"sql",
"performance",
"postgresql",
""
] |
So I have a variable declared in a stored procedure. It must have a default value populated from the result of a SQL statement. The thing is that the variable is used in the declaration of a cursor's query; that's why it needs to be populated at declaration time. How am I supposed to do this?
```
create or replace
PROCEDURE PROCESAR AS
ultimaejecucion date := select fecha from table where rownum<=1;
  --I know I have to use the word INTO, but it throws an error when I compile.
cursor cursor_licencias is
select lic.campo1
from lic lic
where lic.licfhing >= ultimaejecucion
BEGIN
open cursor
.
.
.
END PROCESAR;
```
|
Your problem is quite unclear. If your cursor always depends on values from another table, you should include this table in the query, as Multisync advised. If your cursor also has to depend on different rows of that table, you can use cursor parameters to choose the row:
```
create or replace
PROCEDURE PROCESAR AS
cursor cursor_licencias (cursor_parameter number default 1234) is
select lic.campo1
from lic lic
where lic.licfhing >= (select fecha from table where column = cursor_parameter);
BEGIN
  open cursor_licencias;        -- using the default value
  open cursor_licencias (5678); -- using the value 5678
END PROCESAR;
```
[Example in oracle documentation.](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/static.htm#BABHCIAC)
|
You may move it into the cursor declaration:
```
create or replace
PROCEDURE PROCESAR AS
cursor cursor_licencias is
select lic.campo1
from lic lic
where lic.licfhing >= (select fecha from table where rownum<=1);
BEGIN
open cursor
.
.
.
END PROCESAR;
```
if you need this field somewhere else you can assign the value to the **ultimaejecucion** right after BEGIN
|
How to populate variable in pl sql on declaration in store procedure with sql statement return
|
[
"",
"sql",
"database",
"oracle",
"stored-procedures",
""
] |
So I have a table schema set up:
<http://sqlfiddle.com/#!6/62d3e/1/0>
Where I have a couple of rows that are duplicates by the field, fieldtypeid, and fieldalias columns. I need to delete all rows that are duplicates by these 3 columns but have null values in the "somevalue" column.
What is the easiest way of doing this? I have tried to do a MERGE Statement, but that doesn't work because merge catches the duplicates.
```
DECLARE @A TABLE(Id INT, field NVARCHAR(50), fieldType INT, fieldalias NVARCHAR(50), ForDelete BIT)
DECLARE @B TABLE(Id INT, field NVARCHAR(50), fieldTypeId INT, fieldalias NVARCHAR(50), ForDelete BIT)
INSERT INTO @A
SELECT DISTINCT c.Id, c.field, c.fieldTypeId, c.fieldalias,0
FROM DuplicateValues c
WHERE c.somevalue IS NOT NULL
INSERT INTO @B
SELECT DISTINCT c.Id, c.field, c.fieldTypeId, c.fieldalias,0
FROM DuplicateValues c
WHERE c.somevalue IS NULL
declare @T table(Id int, ForDelete BIT, Act varchar(10))
MERGE @B AS B
USING @A AS A
ON A.fieldTypeId = B.fieldTypeId AND A.field = B.field AND A.fieldalias = B.fieldalias
WHEN MATCHED THEN UPDATE SET B.ForDelete = 1
WHEN NOT MATCHED THEN INSERT (Id, field, fieldTypeId, fieldalias, ForDelete) VALUES(A.Id, A.field,A.fieldTypeId, A.fieldalias, 1)
OUTPUT INSERTED.Id, INSERTED.ForDelete, $ACTION INTO @T;
SELECT * FROM @T
```
|
Per your provided Fiddle schema, you can use the below query to achieve the same. See your modified fiddle <http://sqlfiddle.com/#!6/0f201/1>
```
delete from DuplicateValues
where FIELD in(
select FIELD from DuplicateValues
group by field, fieldtypeId, fieldalias
having count(*) > 1
)
and somevalue is null
```
Here `group by .. having count(*) > 1` will give all the `FIELDS` which have been duplicated.
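As a quick sanity check, the same GROUP BY / HAVING pattern can be reproduced in SQLite via Python (a hypothetical mini-schema; table and column names follow the fiddle):

```python
import sqlite3

# Rows 1 and 2 are duplicates on (field, fieldtypeId, fieldalias);
# only the NULL-valued copy of the duplicated group gets deleted.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE DuplicateValues
               (Id INTEGER, field TEXT, fieldtypeId INTEGER,
                fieldalias TEXT, somevalue TEXT)""")
con.executemany("INSERT INTO DuplicateValues VALUES (?,?,?,?,?)", [
    (1, "a", 1, "x", "keep"),   # duplicated, has a value -> kept
    (2, "a", 1, "x", None),     # duplicated, NULL        -> deleted
    (3, "b", 1, "y", None),     # not duplicated          -> kept
])
con.execute("""DELETE FROM DuplicateValues
               WHERE field IN (SELECT field FROM DuplicateValues
                               GROUP BY field, fieldtypeId, fieldalias
                               HAVING COUNT(*) > 1)
                 AND somevalue IS NULL""")
remaining = [r[0] for r in con.execute(
    "SELECT Id FROM DuplicateValues ORDER BY Id")]
print(remaining)  # [1, 3]
```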
|
Using a window function within a CTE you can find the duplicates. Try this:
```
;WITH cte
AS (SELECT Row_number() OVER(partition BY field, fieldtypeid, fieldalias ORDER BY id) rn,
*
FROM duplicatevalues)
DELETE A
FROM duplicatevalues a
JOIN cte b
ON a.id = b.id
WHERE rn > 1
```
|
How do you delete duplicate row based none primary key columns
|
[
"",
"mysql",
"sql",
"sql-server",
"t-sql",
""
] |
I have a scenario where I need to calculate the number of total values and the number of null values for each ID.
Each ID has a number of rows.
Summarizing the data:
```
- ID Col1 Col2 Col3 Col4 Col5 Col6
- 132 12 0.5 0 Null 0.3 1.5
- 132 Null 0.5 0 Null 0.3 1.5
- 132 1 0.5 Null Null 0.3 1.5
- 132 2 0.5 0 0.3 1.5 Null
- 132 21 0.5 0 Null 0.3 1.5
- 133 Null Null 0 Null Null 1.5
- 133 12 0.5 0 Null 0.3 1.5
- 133 Null 0.5 0 Null 0.3 1.5
- 133 1 0.5 Null Null 0.3 1.5
- 133 2 0.5 0 0.3 1.5 Null
- 133 1 Null 0 Null 0.3 1.5
- 133 Null Null 0 Null 0.3 1.5
```
Summarizing the answer: I need to write a query which gives me data like the following:
```
- ID NullCount ValuesCount
- 132 7 21
- 133 15 27
```
I have a deployment coming up; quick help will be much appreciated.
Thanks
|
Adapted from [Oracle: How to count null and non-null rows](https://stackoverflow.com/questions/5512691/oracle-how-to-count-null-and-non-null-rows):
```
SELECT
COUNT(Col1)+COUNT(Col2)+COUNT(Col3)+
COUNT(Col4)+COUNT(Col5)+COUNT(Col6) AS ValuesCount,
6*COUNT(*)-COUNT(Col1)-COUNT(Col2)-COUNT(Col3)-
COUNT(Col4)-COUNT(Col5)-COUNT(Col6) AS NullCount
FROM data
GROUP BY id
```
`COUNT(ColX)` only counts `NOT NULL` values. Adding those for all six columns equals ValuesCount, of course.
`COUNT(*)` counts all rows, even if all columns within one row were `NULL`. Multiply by 6 for the total number of cells and then subtract all `NOT NULL` values to get the `NULL` count.
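The COUNT behavior this relies on is standard SQL and can be sketched in SQLite (two columns instead of six, hypothetical rows):

```python
import sqlite3

# COUNT(col) skips NULLs while COUNT(*) counts every row -- the identity
# the answer builds on: nulls = n_cols * COUNT(*) - sum of COUNT(col).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, col1 REAL, col2 REAL)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    (132, 12, 0.5), (132, None, 0.5), (132, 1, None),
])
row = con.execute("""SELECT COUNT(col1) + COUNT(col2)              AS valuescount,
                            2 * COUNT(*) - COUNT(col1) - COUNT(col2) AS nullcount
                     FROM t GROUP BY id""").fetchone()
print(row)  # (4, 2)
```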
|
`COUNT` counts non-null values. So the `ValueCount` is easy - add the counts of each column.
For the `NullCount` you can use `CASE` or other similar logic. Or you can use `NVL2` function to turn anything NOT NULL into NULL and any NULL into something NOT NULL (like a constant.)
```
select id
, count(nvl2(col1,null,1)) + count(nvl2(col2,null,1)) +
count(nvl2(col3,null,1)) + count(nvl2(col4,null,1)) +
count(nvl2(col5,null,1)) + count(nvl2(col6,null,1)) nullcount
, count(col1) + count(col2) + count(col3) +
count(col4) + count(col5) + count(col6) valuecount
from tab
group by id
order by id
/
```
EDIT:
An alternative method is to *UNPIVOT* the data (which can be done using `UNPIVOT` or alternative unpivoting methods.)
```
select id
, count(nvl2(col_value,null,1)) nullcount
, count(col_value) valuecount
from tab
unpivot include nulls(
col_value for col_name in (
col1 as 'col1'
, col2 as 'col2'
, col3 as 'col3'
, col4 as 'col4'
, col5 as 'col5'
, col6 as 'col6'
)
)
group by id
order by id
/
```
But when you have a fixed number of columns, it might be overkill to do unpivoting, when you can get the desired result in a single `GROUP BY` operation just by doing some copy-paste to specify all columns like my first example.
|
I have to write a very interesting query which calculates null values and rows with values
|
[
"",
"sql",
"oracle",
"oracle11g",
""
] |
I can't get the correct result, as shown below. I want to SUM the cost column but it fails.
The total fees should be the sum of the costs, but when I group by it becomes 6000+. What mistake did I make?
```
Select p.title, SUM(c.cost) as Total_Fees, c.cost
from programmes p Inner Join courses c
ON p.programme_id = c.programme_id
Inner join class_schedules cs
ON c.course_id = cs.course_id AND c.semester = 4
Inner join class_enrollments ce
ON cs.schedule_id = ce.schedule_id
Inner Join students s
ON ce.student_id = s.student_id and s.student_id = '13WAR1001'
GROUP BY p.title, c.cost;
TITLE TOTAL_FEES COST
--------------------------------------------------------------------------------- ---------- ----------
Bachelor of Information Technology (Honours) in Internet Technology 1728 576
Bachelor of Information Technology (Honours) in Internet Technology 1353 451
Bachelor of Information Technology (Honours) in Internet Technology 1184 592
Bachelor of Information Technology (Honours) in Internet Technology 1800 600
TITLE COURSE_ID TOTAL_FEES
---------------------------------------------------------------------------------------------------- ---------- ----------
Bachelor of Information Technology (Honours) in Internet Technology 14 1728
Bachelor of Information Technology (Honours) in Internet Technology 15 1353
Bachelor of Information Technology (Honours) in Internet Technology 13 1800
Bachelor of Information Technology (Honours) in Internet Technology 12 1184
SQL> SELECT SUM(cost) from courses where course_id = 14 or course_id = 15 or course_id = 13 or course_id = 12;
SUM(COST)
----------
2219
```
//// after remove the c.cost and group by c.cost
```
TITLE TOTAL_FEES
---------------------------------------------------------------------------------------------------- ----------
Bachelor of Information Technology (Honours) in Internet Technology 6065
```
|
You shouldn't include `cost` in the select or the group by, as you do not want to compute the sum over groups based on title and cost. Do this instead:
```
SELECT
p.title,
SUM(c.cost) AS Total_Fees
FROM programmes p
INNER JOIN courses c
ON p.programme_id = c.programme_id
INNER JOIN class_schedules cs
ON c.course_id = cs.course_id AND c.semester = 4
INNER JOIN class_enrollments ce
ON cs.schedule_id = ce.schedule_id
INNER JOIN students s
ON ce.student_id = s.student_id AND s.student_id = '13WAR1001'
GROUP BY p.title;
```
|
You have c.cost in the `GROUP BY` and you are doing the SUM aggregation on the c.cost column.
You need to remove the `cost` column from the group by, using only title, and also remove the cost column from the select:
```
GROUP BY p.title
```
|
Unable to get the correct result sql plus
|
[
"",
"sql",
"oracle",
""
] |
I am new to SQL queries and am executing the query below. I have looked up some examples online but did not have much success.
Here is the query:
```
SELECT m.genre, m.movie_title, m.release_year, m.movie_length
FROM MOVIE m GROUP BY m.genre, m.movie_title, m.release_year,m.movie_length
ORDER BY m.movie_length asc;
```
If someone can explain how I can get the values, I would highly appreciate the help.
Thank you
|
To get the greatest length of all movies for a given genre, you can use the MAX() aggregate function. All you will have to do is group by the genre. This is a little tricky, because you can't just jump in and select the movie title, or it won't group correctly. Start by getting the genre and max movie length for it:
```
SELECT m.genre, MAX(m.movie_length) AS longestMovie
FROM movie m
GROUP BY m.genre;
```
Once you've done that, you can join it with your table and get the max movies:
```
SELECT m.genre, m.movie_title, m.release_year, m.movie_length
FROM movie m
JOIN (SELECT m.genre, MAX(m.movie_length) AS longestMovie
FROM movie m
GROUP BY m.genre) t
ON t.genre = m.genre AND t.longestMovie = m.movie_length;
```
Here is a working [SQL Fiddle](http://sqlfiddle.com/#!2/7d6b70/1).
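A minimal SQLite sketch of the same greatest-per-group join (hypothetical data; the MySQL syntax in the answer is essentially the same here):

```python
import sqlite3

# Join the table back to a grouped MAX subquery to pick, per genre,
# the row(s) carrying the greatest movie_length.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE movie (genre TEXT, movie_title TEXT, movie_length INTEGER)")
con.executemany("INSERT INTO movie VALUES (?,?,?)", [
    ("drama", "A", 90), ("drama", "B", 120), ("comedy", "C", 95),
])
rows = con.execute("""
    SELECT m.genre, m.movie_title, m.movie_length
    FROM movie m
    JOIN (SELECT genre, MAX(movie_length) AS longest
          FROM movie GROUP BY genre) t
      ON t.genre = m.genre AND t.longest = m.movie_length
    ORDER BY m.genre""").fetchall()
print(rows)  # [('comedy', 'C', 95), ('drama', 'B', 120)]
```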
|
```
SELECT m.genre, m.movie_title, m.release_year, m.movie_length
FROM MOVIE as m order by m.movie_length desc limit 0,1;
```
After ordering by DESC, put a LIMIT to get only 1 record: 0 means start at index 0, and 1 defines the number of records to return.
|
SQL Query to find greatest value among all in the same table
|
[
"",
"mysql",
"sql",
"oracle11g",
""
] |
I have a SQL-Server database in which I want to get the data from the column VL\_DR between two given dates and where the date is a Friday.
```
"SELECT VL_DR FROM table_VL WHERE date>='" + date_start
+ "' AND date<='" + date_end
+ "' AND DATENAME(dw, date)='Friday'"
```
But after the request I get an empty list, while I should have some values.
What should I fix to get the correct request?
|
If it's SQL Server, try:
`SELECT VL_DR FROM table_VL WHERE ([date] BETWEEN date_start and date_end) AND LOWER(DATENAME(dw, [date]))='friday'`
If it's MySQL, try:
`SELECT VL_DR FROM table_VL WHERE ([date] BETWEEN date_start and date_end) AND DAYNAME([date])='Friday'`
|
This code will work, based on data in a Share Code database that I have. This is for MS-SQL.
```
SELECT
CloseDate, DatePart(WEEKDAY,CloseDate) AS DAY_OF_WEEK_No , SharePrice
FROM
SharePrice
WHERE
CloseDate BETWEEN '01 October 2014' AND '31 October 2014'
AND
DATEPART(WEEKDAY,CloseDate) = 6 -- 1=Sunday; 7=Saturday
AND
ShareCode = 'AGL'
```
|
Get data for every Friday between two dates
|
[
"",
"sql",
"sql-server",
"database",
""
] |
I'm having trouble with a MySQL query. If the label is "5" then I only want records where there are more than 10 of them for the current day, but any number of rows for other labels. What is the best way to accomplish this in the query?
```
SELECT id,name,label,date
FROM items
GROUP BY DATE('date'), label
HAVING COUNT(id) > 10 IF label = 5
```
|
I would do this in a single query:
```
SELECT id, name, label, date
FROM items
WHERE date >= date(now()) and date < date(now()) + interval 1 day
GROUP BY label
HAVING (label <> 5) or (COUNT(id) > 10);
```
This seems like the simplest method.
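A runnable sketch of that HAVING logic in SQLite (hypothetical data; the threshold is shrunk from 10 to 2 to keep the sample small, and the date filter is omitted):

```python
import sqlite3

# Label 5 must exceed the count threshold; any other label passes
# regardless of how many rows it has.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER, label INTEGER)")
con.executemany("INSERT INTO items VALUES (?,?)",
                [(1, 5), (2, 5), (3, 7)])
rows = con.execute("""SELECT label, COUNT(id) FROM items
                      GROUP BY label
                      HAVING (label <> 5) OR (COUNT(id) > 2)""").fetchall()
print(rows)  # label 5 has only 2 rows, so it is filtered out
```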
|
```
SELECT id,name,label,date
FROM items
WHERE label = 5 AND date = NOW()
GROUP BY label
HAVING COUNT(id) > 10
UNION ALL
SELECT id,name,label,date
FROM items
WHERE label != 5 AND date = NOW()
GROUP BY label
HAVING COUNT(id) > 0
```
This will return indeterminate values for columns not in the GROUP BY.
|
MySQL having if
|
[
"",
"mysql",
"sql",
"having",
"having-clause",
""
] |
I want to create a table from another table (schema copy) but not the data, just the schema. Is there a SQL or Postgres way of doing this? Hopefully something that can copy indexes & constraints as well.
Also, once the table is created, is it possible to keep the schema in sync in case there are future changes to the original table? That will save me manually syncing the schema. Maybe Postgres has something?
|
Yes, use `create table like`
```
create table new_table (like old_table);
```
More details in the manual:
<http://www.postgresql.org/docs/current/static/sql-createtable.html>
By default the `like` clause copies only the column definitions and not-null constraints; use `(like old_table including all)` — or `including indexes including constraints` — to also copy indexes and check constraints. Foreign key constraints are never copied.
It will not keep the new\_table "in sync" with the old one though - and it's unclear to me why you would want that.
|
Copy only structure with
```
Select * into newtable from oldtable where 1=2
```
|
Create a table from another table
|
[
"",
"sql",
"postgresql",
""
] |
I'm wanting to combine the following SQL statements so that I have all the results in one column rather than in 4 separate ones:
```
select count(incnum) as counttenth3 from kincident where (to_char(reportdate,'hh24') between 0 and 2.59 or to_char(reportdate,'hh24') between 21 and 23.59)
select count(incnum) as counttenth2 from kincident where to_char(reportdate,'hh24') between 15 and 20.59
select count(incnum) as counttenth1 from kincident where to_char(reportdate,'hh24') between 9 and 14.59
select count(incnum) as counttenth0 from kincident where to_char(reportdate,'hh24') between 3 and 8.59
```
The only difference is the time range given for each statement.
So I'm trying to combine them all into one to be in one column and I want a second column to include rows with given strings (not from the database)
E.g.
```
Timing | count of incidents
-----------------------------
morning | 26
afternoon | 35
night | 40
```
|
Something like this - untested since I generally write MS SQL :)
```
select
t.time_of_day,
count(t.time_of_day)
from
(
---
select
    case
      when to_char(reportdate,'hh24') between 15 and 20.59 then 'afternoon'
      when to_char(reportdate,'hh24') between 9 and 14.59 then 'morning'
      when to_char(reportdate,'hh24') between 3 and 8.59 then 'early-morning'
      else 'night' -- between 0 and 2.59, or between 21 and 23.59
    end as time_of_day
from
kincident
) t
group by
t.time_of_day
```
Use a CASE statement to categorise your time ranges, which I've put into a sub-query (but you could wrap it in a view); in the outer query we group by the time-of-day categories and count them.
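A runnable sketch of the bucket-then-count pattern in SQLite, where `strftime('%H', ...)` stands in for Oracle's `to_char(..., 'hh24')` (hypothetical data):

```python
import sqlite3

# Bucket timestamps with a searched CASE, then count per bucket.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kincident (reportdate TEXT)")
con.executemany("INSERT INTO kincident VALUES (?)",
                [("2014-10-15 04:00:00",), ("2014-10-15 10:00:00",),
                 ("2014-10-15 22:30:00",), ("2014-10-15 23:10:00",)])
rows = con.execute("""
    SELECT CASE
             WHEN CAST(strftime('%H', reportdate) AS INTEGER) BETWEEN 3 AND 8   THEN 'early-morning'
             WHEN CAST(strftime('%H', reportdate) AS INTEGER) BETWEEN 9 AND 14  THEN 'morning'
             WHEN CAST(strftime('%H', reportdate) AS INTEGER) BETWEEN 15 AND 20 THEN 'afternoon'
             ELSE 'night'
           END AS time_of_day,
           COUNT(*)
    FROM kincident
    GROUP BY time_of_day
    ORDER BY time_of_day""").fetchall()
print(rows)  # two late-evening rows fall into the 'night' bucket
```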
|
It seems you want a GROUP BY with a CASE statement:
```
select
case when to_char(reportdate,'hh24') between 3 and 8.59 then 'morning'
when to_char(reportdate,'hh24') between 9 and 14.59 then 'noon'
when to_char(reportdate,'hh24') between 15 and 20.59 then 'afternoon'
else 'night' end as range
,count(incnum)
from
kincident
group by
case when to_char(reportdate,'hh24') between 3 and 8.59 then 'morning'
when to_char(reportdate,'hh24') between 9 and 14.59 then 'noon'
when to_char(reportdate,'hh24') between 15 and 20.59 then 'afternoon'
else 'night' end
```
|
Combining SQL statements and including strings for time ranges
|
[
"",
"sql",
"oracle",
""
] |
I'm having this strange problem using PostgreSQL 9.3 with tables that are created using quotes. For instance, if I create a table using quotes:
```
create table "TEST" ("Col1" bigint);
```
the table is properly created and I can see that the quotes are preserved when viewing it in the SQL pane of pgAdminIII. But when I query the DB to find the list of all available tables (using the below query), I see that the result does not contain quotes around the table name.
```
select table_schema, table_name from information_schema.tables where not table_schema='pg_catalog' and not table_schema='information_schema';
```
Since the table was created with quotes, I can't use the table name returned from the above query directly, since it is unquoted and throws the error posted in the title.
I could try surrounding the table names with quotes in all queries but I'm not sure if it'll work all the time. I'm looking for a way to get the list of table names that are quoted with quotes in the result.
I'm having the same issue with column names as well but I'm hoping that if I can find a solution to the table names issue, a similar solution will work for column names as well.
|
You have two choices:
- no quotes: then everything will automatically be lowercase and non-case-sensitive
- with quotes: from now on everything is case sensitive.
I would highly recommend NOT using quotes, letting PostgreSQL behave case-insensitively. It makes life so much easier. Once you start quoting you have to use quotes EVERYWHERE, as PostgreSQL becomes very precise.
Some examples:
```
TEST = test <-- non case sensitive
"Test" <> Test <-- first is precise, second one is turned to lower case
"Test" = "Test" <-- will work
"test" = TEST <-- should work; but you are just lucky.
```
Really try to avoid this kind of trickery at any cost; stay with 7-bit ASCII for object names.
|
When using the Npgsql package as your data-store ORM and expecting the ORM framework (Entity Framework in our case) to generate the SQL statement, you might face a PostgreSQL exception: the relation 'Table Name' does not exist.
Either the table was not created, or the generated SQL statement is missing something. Debug in Visual Studio and you will see that the schema name does not precede the table name:
```
SELECT "ID", "Name", "CreatedBy", "CreatedDate"
FROM "TestTable";
```
while PostgreSQL is expecting the schema name. The resolution: in the DbContext class, override the OnModelCreating method, add `modelBuilder.HasDefaultSchema("SchemaName");`, and call the base method, so it looks like the following:
```
protected override void OnModelCreating(ModelBuilder modelBuilder) {
    modelBuilder.HasDefaultSchema("PartyDataManager");
    base.OnModelCreating(modelBuilder);
}
```
|
PostgreSQL ERROR: 42P01: relation "[Table]" does not exist
|
[
"",
"sql",
"postgresql",
"postgis",
"quoted-identifier",
""
] |
I have a table named Group in my database. When I try to work on it, it gives an error:
```
SELECT GroupID,GroupName FROM Group
```
it said :
```
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'Group'.
```
I know that Group is an illegal name for a table, but it's an old database with too much data and too many relations, and I can't rename this table. Is there any way to work with it?
|
You can use square brackets `[` and `]` around the column name or table name:
```
SELECT GroupID,GroupName FROM [Group]
```
Other examples:
```
select * from [table]
select [primary] from [table]
```
P.S.: it worked by adding [dbo] before [Group], so it is now:
```
SELECT GroupID, GroupName FROM [dbo].[Group]
```
|
```
SELECT GroupID, GroupName FROM `Group`
```
Now it knows `Group` is the name of a table rather than incorrectly placed language syntax
|
how to work with Sql table named Group
|
[
"",
"sql",
"sql-server",
"select",
""
] |
I tried to use column names with symbols like minus, dot and a few others. The Oracle SQL parser doesn't accept them. Like this:
```
select
a.ID as Article-Number, a.Name as Name
from
Articles a
where
(a.ID = '3264')
```
And it doesn't accept `a.ID as 'Article-Number'` (it says `"FROM keyword not found where expected"`). Can I have symbols in column names?
|
You can use double quotes (`"`) to escape aliases and column names:
```
select
a.ID as "Article-Number", a.Name as "Name"
from
Articles a
where
(a.ID = '3264')
```
|
> Can I have symbols in column names?
>
> I'm automating stuff here so all my aliases will get double quotes. Important is that I know how to use any strings in my aliases.
In Oracle you use double quotes (`"`) to escape schema object names (incl. column names).
However, if you're trying to automate things, there are several rules you should be aware of:
* names must be from 1 to 30 characters. Identifier exceeding this limit will raise
`ORA-00972: identifier is too long`
* names cannot contain quotes (`"`) nor the `\0` (aka nul) character
See [*Oracle's Schema Object Naming Rules*](http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements008.htm) for the details.
|
How to use symbols in SQL column names?
|
[
"",
"sql",
"oracle",
"select",
"oracle8i",
""
] |
To convert a `datetime` to `MM/DD/YYYY`, this works:
```
declare @datetime datetime = '2015-01-01'
select convert(varchar(10),convert(date,@datetime),101)
```
This evaluates to `01/01/2015`. How can I have the date convert to `1/1/2015` instead?
Nothing on <http://www.sql-server-helper.com/tips/date-formats.aspx> matches the `M/D/YYYY` format.
|
I think the only possibility you have is to do something like this:
```
DECLARE @datetime DATETIME = '2015-01-01'
SELECT LTRIM(STR(MONTH(@datetime))) + '/' +
LTRIM(STR(DAY(@datetime))) + '/' +
STR(YEAR(@datetime), 4)
```
With SQL Server 2012 and above, you can do this:
```
SELECT FORMAT(@datetime, 'M/d/yyyy')
```
|
```
DECLARE @datetime DATETIME = '2015-01-01';
SELECT STUFF(REPLACE('/' + CONVERT(CHAR(10), @datetime, 101),'/0','/'),1,1,'')
```
This is how it works:
1. First CONVERT the DATETIME to CHAR
2. Then Add a '/' character at the begining
3. REPLACE all '/0' with '/'
4. With STUFF, get rid of the first '/'
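The same unpadded `M/D/YYYY` idea, sketched in plain Python for comparison (no SQL Server needed):

```python
from datetime import datetime

# Build M/D/YYYY without the zero-padding that fixed-width
# CONVERT styles produce.
d = datetime(2015, 1, 1)
no_pad = f"{d.month}/{d.day}/{d.year}"
print(no_pad)                   # 1/1/2015
print(d.strftime("%m/%d/%Y"))   # 01/01/2015 -- the padded form to avoid
```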
|
How to format datetime as M/D/YYYY in SQL Server?
|
[
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SELECT e.qty * e.unit_cost * e.unit_no * (e.factor/100) AS Ext_price, c.cost_type, c.cost_dt, c.internal_cost_amt, c.client_cost_amt
FROM equipment e
LEFT JOIN CostAllocation c ON (c.object_row_id = e.row_id)
```
I now want to divide the `Ext_price` by the number of `CostAllocation` rows that match the equipment row. How would I do that?
|
```
SELECT e.qty*e.unit_cost*e.unit_no*e.factor/(100*x.cnt) AS Ext_price,
c.cost_type,
c.cost_dt,
c.internal_cost_amt,
c.client_cost_amt
FROM equipment e
LEFT JOIN CostAllocation c ON c.object_row_id = e.row_id
LEFT JOIN
  (SELECT object_row_id,
          count(*) cnt
   FROM CostAllocation
   GROUP BY object_row_id) x ON c.object_row_id = x.object_row_id
```
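A minimal SQLite sketch of the grouped-count join (hypothetical columns; the price expression is collapsed to a single `price` column):

```python
import sqlite3

# Divide a computed price by the number of matching allocation rows,
# via a join against a grouped-count subquery.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE equipment (row_id INTEGER, price REAL)")
con.execute("CREATE TABLE CostAllocation (object_row_id INTEGER)")
con.execute("INSERT INTO equipment VALUES (1, 100.0)")
con.executemany("INSERT INTO CostAllocation VALUES (?)", [(1,), (1,)])
rows = con.execute("""
    SELECT e.row_id, e.price / x.cnt
    FROM equipment e
    JOIN CostAllocation c ON c.object_row_id = e.row_id
    JOIN (SELECT object_row_id, COUNT(*) AS cnt
          FROM CostAllocation GROUP BY object_row_id) x
      ON x.object_row_id = e.row_id""").fetchall()
print(rows)  # one result row per allocation, each carrying the halved price
```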
|
Try the following:
```
select g.*,g.Ext_price/(select count(*) from CostAllocation k where k.object_row_id=g.row_id) as youranswer
from
(
SELECT e.row_id,e.qty*e.unit_cost*e.unit_no*e.factor/100 AS Ext_price, c.cost_type, c.cost_dt, c.internal_cost_amt, c.client_cost_amt
FROM equipment e
LEFT JOIN CostAllocation c ON c.object_row_id = e.row_id)g
```
|
use join count in calculation in select
|
[
"",
"mysql",
"sql",
"join",
"count",
""
] |
I have a database(table), in which 2 fields are:
```
fromdate varchar(20)
todate varchar(20)
```
Dates are stored in this fashion:
```
YYYY-MM-DD HH:mm:ss
```
For ex: '2014-10-30 10:10:10' in database.
Now I want to compare two dates and fetch records from database by using query, 2014-09-10 10:10:10(`fromdate`) to 2014-10-10 10:10:10(`todate`)
How do I fetch all the matching records? Is there any kind of solution?
Thanks.
|
Just compare the strings without extra overhead.
In the format "YYYY-MM-DD HH:mm:ss", alphabetical order matches chronological order:
```
SELECT * FROM someTable
WHERE fromdate >= '2014-09-10 10:10:10' AND todate <= '2014-10-10 10:10:10'
```
Also, I would create an index on those columns.
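A runnable sketch of the string comparison in SQLite (hypothetical rows):

```python
import sqlite3

# ISO-formatted 'YYYY-MM-DD HH:MM:SS' strings sort the same
# alphabetically as chronologically, so a string BETWEEN works.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE someTable (fromdate TEXT)")
con.executemany("INSERT INTO someTable VALUES (?)",
                [("2014-08-01 00:00:00",), ("2014-09-15 12:00:00",),
                 ("2014-11-01 00:00:00",)])
rows = [r[0] for r in con.execute(
    """SELECT fromdate FROM someTable
       WHERE fromdate BETWEEN '2014-09-10 10:10:10'
                          AND '2014-10-10 10:10:10'""")]
print(rows)  # only the middle row falls inside the range
```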
|
> i have a database(table), in which 2 fields are: fromdate varchar(20)
> todate varchar(20)
It is a design flaw. A date should always be a `DATE` data type and never a string.
> dates are stored in this fashion YYYY-MM-DD HH:mm:ss
A `DATE` is never stored in any particular format; formats are for us, human beings, to understand.
Oracle stores `DATE` in total of `7 bytes`. Each byte in it stores values for an element of the DATE as follows:
```
Byte Description
---- -------------------------------------------------
1 Century value but before storing it add 100 to it
2 Year and 100 is added to it before storing
3 Month
4 Day of the month
5 Hours but add 1 before storing it
6 Minutes but add 1 before storing it
7 Seconds but add 1 before storing it
```
> for eg :"2014-10-30 10:10:10" in database.
>
> Now i want to compare two dates and fetch records from database by
> using query, 2014-09-10 10:10:10(fromdate) to 2014-10-10
> 10:10:10(todate)
Just use `to_date('2014-10-30 10:10:10', 'YYYY-MM-DD HH24:MI:SS')`
**NOTE** This is for an Oracle database. I see you have tagged SQL Server too; I don't understand why you did that.
|
How to compare dates which is stored as String(varchar) in database?
|
[
"",
"mysql",
"sql",
"sql-server",
"oracle",
""
] |
Getting error
`ORA-00932: inconsistent datatypes: expected CHAR got NUMBER 00932. 00000 - "inconsistent datatypes: expected %s got %s"`
When I run the following query:
```
SELECT distinct
CASE when t.cancelled = 'TRUE' then '0'
else t.amount END AMOUNT,
FROM table t
```
If I run it with either a number or text for the else output, like this, it works.
```
SELECT distinct
CASE when t.cancelled = 'TRUE' then '0'
        else 'xxx' END AMOUNT
FROM table t
```
|
Use `0` instead of `'0'`. Amount is a number, and numbers aren't quoted.
```
SELECT distinct
CASE when t.cancelled = 'TRUE' then 0
else t.amount END AMOUNT,
FROM table t
```
|
In addition to the answer by @RADAR,
The reason for the error is that the `t.amount` field is a `NUMBER` data type, not a string.
Your `CASE` expression is internally trying to fit a `STRING` in a `NUMBER` data type.
As already suggested in RADAR's answer, use zero as a number and NOT as a string.
|
Oracle SQL CASE WHEN ORA-00932: inconsistent datatypes: expected CHAR got NUMBER 00932. 00000 - "inconsistent datatypes: expected %s got %s"
|
[
"",
"sql",
"oracle",
"oracle-sqldeveloper",
"case-when",
""
] |
I just started working with this database and I have a small problem.
So the main idea behind this is to use VBA to get needed information from database that I can use later on.
I am using an ADO Recordset and a connection string to connect to the server. All is fine apart from one problem: when I create a Recordset using a SQL request, it only returns one field when I know there should be more. At the moment I think the Recordset is just grabbing the first result and storing it, but losing anything else that should be there. Can you please help me?
Here is my code:
```
'Declare variables'
Dim objMyConn As ADODB.Connection
Dim objMyCmd As ADODB.Command
Dim objMyRecordset As ADODB.Recordset
Dim fldEach As ADODB.Field
Dim OrderNumber As Long
OrderNumber = 172783
Set objMyConn = New ADODB.Connection
Set objMyCmd = New ADODB.Command
Set objMyRecordset = New ADODB.Recordset
'Open Connection'
objMyConn.ConnectionString = "Provider=SQLOLEDB;Data Source=Local;" & _
"Initial Catalog=SQL_LIVE;"
objMyConn.Open
'Set and Excecute SQL Command'
Set objMyCmd.ActiveConnection = objMyConn
objMyCmd.CommandText = "SELECT fldImage FROM tblCustomisations WHERE fldOrderID=" & OrderNumber
objMyCmd.CommandType = adCmdText
'Open Recordset'
Set objMyRecordset.Source = objMyCmd
objMyRecordset.Open
objMyRecordset.MoveFirst
For Each fldEach In objMyRecordset.Fields
Debug.Print fldEach.Value
Next
```
At the moment Debug returns only one result when it should return two because there are two rows with the same OrderID.
|
In addition to @enderland's answer, you can also use a disconnected Recordset that has all the values and fields ready for consumption. It's handy when you need to pass the data around or close the connection quickly.
Here's a function that returns a disconnected RecordSet:
```
Function RunSQLReturnRS(sqlstmt, params())
On Error Resume next
' Create the ADO objects
Dim rs , cmd
Set rs = server.createobject("ADODB.Recordset")
Set cmd = server.createobject("ADODB.Command")
' Init the ADO objects & the stored proc parameters
cmd.ActiveConnection = GetConnectionString()
cmd.CommandText = sqlstmt
cmd.CommandType = adCmdText
cmd.CommandTimeout = 900 ' 15 minutes
collectParams cmd, params
' Execute the query for readonly
rs.CursorLocation = adUseClient
rs.Open cmd, , adOpenForwardOnly, adLockReadOnly
If err.number > 0 then
BuildErrorMessage()
exit function
end if
' Disconnect the recordset
Set cmd.ActiveConnection = Nothing
Set cmd = Nothing
Set rs.ActiveConnection = Nothing
' Return the resultant recordset
Set RunSQLReturnRS = rs
End Function
```
|
The recordset only exposes a single record at a time. You are iterating through all the fields of a single record, not through each record in the recordset.
If your query returns two records, you need to tell the Recordset to advance to the next one.
A query returns **one recordset** which has **some number of records** which have **some number of fields.**
You are iterating through the fields only for one record in the returned recordset.
You can do this with a few ways, but I generally do something like:
```
objMyRecordset.MoveFirst
Do
If Not objMyRecordset.EOF Then
debug.print "Record Opened - only returning 1 field due to SQL query"
For Each fldEach In objMyRecordset.Fields
Debug.Print fldEach.Value
Next
'this moves to the NEXT record in the recordset
objMyRecordset.MoveNext
Else
Exit Do
End If
Loop
```
Note that if you want to include more fields you will need to modify this line:
```
objMyCmd.CommandText = "SELECT fldImage FROM tblCustomisations WHERE fldOrderID=" & OrderNumber
```
To include whatever additional *fields* you want returned.
|
VBA Recordset doesn't return all fields
|
[
"",
"sql",
"vba",
"ado",
"recordset",
""
] |