Prompt stringlengths 10 31k | Chosen stringlengths 3 29.4k | Rejected stringlengths 3 51.1k | Title stringlengths 9 150 | Tags listlengths 3 7 |
|---|---|---|---|---|
I wanted to get the id of something whose barcode is either empty or whose barcode state is active. The barcode state is stored in another table. Here is what I tried:
```
SELECT a.id from a
where a.bar=''
OR a.bar=(SELECT b.barcode from b where b.barcode=active)
```
But it gives me nothing when there are results that should come back. Where did I make a mistake?
Thanks in advance | ```
SELECT a.id FROM a
WHERE a.bar=''
OR a.bar IN (SELECT b.barcode FROM b WHERE b.barcode='active')
``` | I guess your subquery returns more than one record, so in this case you have to use `IN` instead of `=`. Also, I guess `b.barcode` is a varchar field, so you should use a varchar constant: `'active'`.
```
SELECT a.id from a
where a.bar=''
OR a.bar IN (SELECT b.barcode from b where b.barcode='active')
``` | How to write following Select in sqlite | [
"",
"android",
"sql",
"sqlite",
""
] |
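As a quick end-to-end check of the accepted fix above, here is a small sketch using Python's built-in `sqlite3` module. The sample rows and the `(barcode, state)` layout of table `b` are assumptions for illustration, not part of the original question.

```python
import sqlite3

# In-memory tables mirroring the question's schema (names/data are assumed).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, bar TEXT);
    CREATE TABLE b (barcode TEXT, state TEXT);
    INSERT INTO a VALUES (1, ''), (2, 'X1'), (3, 'X2');
    INSERT INTO b VALUES ('X1', 'active');
""")

# IN handles a multi-row subquery, and 'active' is a quoted string literal.
rows = conn.execute("""
    SELECT a.id FROM a
    WHERE a.bar = ''
       OR a.bar IN (SELECT b.barcode FROM b WHERE b.state = 'active')
""").fetchall()
matched_ids = sorted(r[0] for r in rows)
```

Row 1 matches the empty-barcode branch and row 2 matches the active-barcode branch, while row 3 matches neither.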
Is there any way to check every element of a list comprehension in a clean and elegant way?
For example, if I have some db result which may or may not have a 'loc' attribute, is there any way to have the following code run without crashing?
```
db_objs = SQL("query")
top_scores = [{"name":obj.name, "score":obj.score, "latitude":obj.loc.lat, "longitude":obj.loc.lon} for obj in db_objs]
```
If there is any way to fill these fields in either as None or the empty string or anything, that would be much very nice. Python tends to be a magical thing, so if any of you have sage advice it would be much appreciated. | Try this:
```
top_scores = [{"name":obj.name,
"score":obj.score,
               "latitude": obj.loc.lat if hasattr(obj, 'loc') else 0,
               "longitude": obj.loc.lon if hasattr(obj, 'loc') else 0}
for obj in db_objs]
```
Or, in your query set a default value. | Clean and unified solution:
```
from operator import attrgetter as _attrgetter
def attrgetter(attrname, default=None):
getter = _attrgetter(attrname)
def wrapped(obj):
try:
return getter(obj)
except AttributeError:
return default
return wrapped
GETTER_MAP = {
"name":attrgetter('name'),
"score":attrgetter('score'),
"latitude":attrgetter('loc.lat'),
"longitude":attrgetter('loc.lon'),
}
def getdict(obj):
return dict(((k,v(obj)) for (k,v) in GETTER_MAP.items()))
if __name__ == "__main__":
db_objs = SQL("query")
top_scores = [getdict(obj) for obj in db_objs]
print top_scores
``` | Clean solution for missing values in python list comprehensions | [
"",
"python",
"list-comprehension",
"dictionary-comprehension",
""
] |
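The wrapped-`attrgetter` idea above ports directly to Python 3; the sketch below uses made-up `Obj`/`Loc` classes purely to exercise the missing-attribute path.

```python
from operator import attrgetter as _attrgetter

def attrgetter_with_default(attrname, default=None):
    """Like operator.attrgetter (dotted paths allowed), but returns
    `default` instead of raising AttributeError."""
    getter = _attrgetter(attrname)
    def wrapped(obj):
        try:
            return getter(obj)
        except AttributeError:
            return default
    return wrapped

class Loc:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

class Obj:
    def __init__(self, name, score, loc=None):
        self.name, self.score = name, score
        if loc is not None:
            self.loc = loc  # some objects simply lack a 'loc' attribute

getters = {
    "name": attrgetter_with_default("name"),
    "latitude": attrgetter_with_default("loc.lat"),
}

def getdict(obj):
    return {k: g(obj) for k, g in getters.items()}

rows = [getdict(o) for o in (Obj("a", 1, Loc(10.0, 20.0)), Obj("b", 2))]
```

The second object has no `loc`, so its latitude falls back to `None` instead of crashing the comprehension.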
I am trying to implement a simple stack with Python using arrays. I was wondering if someone could let me know what's wrong with my code.
```
class myStack:
def __init__(self):
self = []
def isEmpty(self):
return self == []
def push(self, item):
self.append(item)
def pop(self):
return self.pop(0)
def size(self):
return len(self)
s = myStack()
s.push('1')
s.push('2')
print(s.pop())
print s
``` | I corrected a few problems below. Also, a 'stack', in abstract programming terms, is usually a collection where you add and remove from the top, but the way you implemented it, you're adding to the top and removing from the bottom, which makes it a queue.
```
class myStack:
def __init__(self):
self.container = [] # You don't want to assign [] to self - when you do that, you're just assigning to a new local variable called `self`. You want your stack to *have* a list, not *be* a list.
def isEmpty(self):
return self.size() == 0 # While there's nothing wrong with self.container == [], there is a builtin function for that purpose, so we may as well use it. And while we're at it, it's often nice to use your own internal functions, so behavior is more consistent.
def push(self, item):
self.container.append(item) # appending to the *container*, not the instance itself.
def pop(self):
return self.container.pop() # pop from the container, this was fixed from the old version which was wrong
def peek(self):
if self.isEmpty():
raise Exception("Stack empty!")
return self.container[-1] # View element at top of the stack
def size(self):
return len(self.container) # length of the container
def show(self):
return self.container # display the entire stack as list
s = myStack()
s.push('1')
s.push('2')
print(s.pop())
print(s.show())
``` | Assigning to `self` won't turn your object into a list (and if it did, the object wouldn't have all your stack methods any more). Assigning to `self` just changes a local variable. Instead, set an attribute:
```
def __init__(self):
self.stack = []
```
and use the attribute instead of just a bare `self`:
```
def push(self, item):
self.stack.append(item)
```
Also, if you want a stack, you want `pop()` rather than `pop(0)`. `pop(0)` would turn your data structure into a(n inefficient) queue. | Implementing Stack with Python | [
"",
"python",
"algorithm",
"data-structures",
"stack",
""
] |
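The corrected stack above can be condensed into a short, self-contained Python 3 version. The `snake_case` method names and the `IndexError` guard on `peek` are choices made for this sketch, not part of the original answer.

```python
class Stack:
    """A stack that *has* a list rather than trying to *be* one."""
    def __init__(self):
        self._items = []

    def is_empty(self):
        return len(self._items) == 0

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # removes from the top (the end of the list)

    def peek(self):
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self._items[-1]

    def size(self):
        return len(self._items)

s = Stack()
s.push('1')
s.push('2')
top = s.pop()
```

Because `pop()` removes from the same end `push()` appends to, this behaves as a stack (LIFO) rather than a queue.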
I have a small problem with a `while` loop in Python 2.7.
I have defined a procedure, `print_multiplication_table`, that takes as input a positive whole number and prints out a multiplication table showing all the whole-number multiplications up to and including the input number.
Here is my `print_multiplication_table` function:
```
def print_multiplication_table(n):
count = 1
count2 = 1
result = count * count2
print 'New number: ' + str(n)
while count != n and count2 != n:
result = count * count2
print str(count) + " * " + str(count2) + " = " + str(result)
if count2 == n:
count += 1
count2 = 1
else:
count2 += 1
```
Here is an expecting output:
```
>>>print_multiplication_table(2)
new number: 2
1 * 1 = 1
1 * 2 = 2
2 * 1 = 2
2 * 2 = 4
>>>print_multiplication_table(3)
new number: 3
1 * 1 = 1
1 * 2 = 2
1 * 3 = 3
2 * 1 = 2
2 * 2 = 4
2 * 3 = 6
3 * 1 = 3
3 * 2 = 6
3 * 3 = 9
```
Everything works just fine until I add my `while` loop:
```
while count != n and count2 != n:
```
And now my output looks like this:
```
>>>print_multiplication_table(2)
New number: 2
1 * 1 = 1
>>>print_multiplication_table(3)
New number: 3
1 * 1 = 1
1 * 2 = 2
```
What have I done wrong and how can I fix it?
Thanks. | Change your *while* loop to:
```
while count <= n and count2 <= n:
``` | ```
import itertools
def print_multiplication_table(n):
nums = range(1,n+1)
operands = itertools.product(nums,nums)
for a,b in operands:
print '%s * %s = %s' % (a,b,a*b)
print_multiplication_table(3)
```
Gives:
```
1 * 1 = 1
1 * 2 = 2
1 * 3 = 3
2 * 1 = 2
2 * 2 = 4
2 * 3 = 6
3 * 1 = 3
3 * 2 = 6
3 * 3 = 9
```
`range` generates the individual operands; `product` generates the cartesian product, and the `%` is the operator which substitutes values into the string.
The `n+1` is an artifact of how range works. Do `help(range)` to see an explanation.
In general in python, it is preferable to use the rich set of features for constructing sequences to create the right sequence, and then use a single, relatively simple loop to work with the data so generated. Even if the loop body needs complex processing, it will be simpler if you take care to generate the right sequence first.
I'd also add that `while` is the wrong thing where there is a definite sequence to iterate over.
---
I'd like to show that this is a better approach, by generalising the above code. You will struggle to do that with your code:
```
import itertools
import operator
def print_multiplication_table(n,dimensions=2):
operands = itertools.product(*((range(1,n+1),)*dimensions))
template = ' * '.join(('%s',)*dimensions)+' = %s'
for nums in operands:
print template % (nums + (reduce(operator.mul,nums),))
```
(ideone here: <http://ideone.com/cYUSrL>)
Your code would need to introduce one variable per dimension, which would mean a list or dict to keep track of those values (because you can't dynamically create variables), and an inner loop to act per list item. | Python 2.7: Wrong while loop, need an advice | [
"",
"python",
"python-2.7",
"while-loop",
""
] |
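The `itertools.product` approach above ports cleanly to Python 3; here it returns the lines as a list so they can be inspected (the helper name is invented for the sketch).

```python
import itertools

def multiplication_table_lines(n):
    # product(nums, nums) yields (1,1), (1,2), ..., (n,n) in row-major order.
    nums = range(1, n + 1)
    return ["%s * %s = %s" % (a, b, a * b)
            for a, b in itertools.product(nums, nums)]

lines = multiplication_table_lines(2)
```

Generating the right sequence first leaves a single simple loop (here, a comprehension) to do the formatting.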
thanks in advance :)
I have this async Celery task call:
```
update_solr.delay(id, context)
```
where id is an integer and context is a Python dict.
My task definition looks like:
```
@task
def update_solr(id, context):
clip = Clip.objects.get(pk=id)
clip_serializer = SOLRClipSerializer(clip, context=context)
response = requests.post(url, data=clip_serializer.data)
```
where `clip_serializer.data` is a dict and `url` is a string representing a url.
When I try to call `update_solr.delay()`, I get this error:
```
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
```
Neither of the args to the task is an instance method, so I'm confused.
When the task code is run synchronously, no error.
**Update: Fixed per comments about passing pk instead of object.** | The `context` dict had an object in it, unbeknownst to me...
To fix, I executed code dependent on the `context` before the async call and just passed a dict with only native types:
```
def post_save(self, obj, created=False):
context = self.get_serializer_context()
clip_serializer = SolrClipSerializer(obj, context=context)
update_solr.delay(clip_serializer.data)
```
The task ended up like this:
```
@task
def update_solr(data):
response = requests.post(url, data=data)
```
This works out perfectly fine because the only purpose of making this an async task is to make the POST non-blocking.
Thanks for the help! | Try passing the model instance primary key (`pk`). This is much simpler to pickle, reduces the payload and avoids race conditions. | Django and Celery: unable to pickle task | [
"",
"python",
"django",
"celery",
"pickle",
"django-celery",
""
] |
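The underlying behavior is easy to reproduce with the standard `pickle` module: dicts of native types serialize fine, while a dict that hides an unpicklable object does not. The lambda below stands in for whatever object was lurking in `context`; the payload dict is illustrative.

```python
import pickle

# Native types (ints, strings, dicts of them) round-trip through pickle.
payload = {"id": 42, "title": "clip"}
ok = pickle.loads(pickle.dumps(payload))

# An inline lambda cannot be pickled by reference, so serialization fails,
# which is the same class of failure the Celery task hit.
try:
    pickle.dumps({"callback": lambda: None})
    failed = False
except (pickle.PicklingError, AttributeError, TypeError):
    failed = True
```

This is why passing only native data (or just a `pk`) into `delay()` sidesteps the whole problem.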
Right, I have the following data which I need to insert into a table called Locals, but I only want to insert it if the street field is not already present in the Locals table. The data and fields are as follows:
```
Street PC Locality
------------------------------
Street1 ABC xyz A
Street2 DEF xyz B
```
And so on, but I want to insert into the Locals table only if the Street field is not already present there.
I was thinking of using the following:
```
INSERT
INTO Locals (Street,PC,Locality)
(
SELECT DISTINCT s.Street
FROM Locals_bk s
WHERE NOT EXISTS (
SELECT 1
FROM Locals l
WHERE s.Street = l.Street
)
)
;
```
But I realize that will only insert the street field not the rest of the data on the same row. | ```
insert into Locals (Street, PC, Locality)
select b.Street, b.PC, b.Locality
from Locals_bk as b
where not exists (select * from Locals as t where t.street = b.street)
```
or
```
insert into Locals (Street, PC, Locality)
select b.Street, b.PC, b.Locality
from Locals_bk as b
where b.street not in (select t.street from Locals as t)
``` | How about
```
INSERT [Locals]
SELECT
[Street],
[PC],
[Locality]
FROM
[Locals_bk] bk
WHERE
    NOT EXISTS (
SELECT * FROM [Locals] l WHERE l.[Street] = bk.[Street]
);
``` | How to insert multiple values into a Row if 1 field is distinct | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
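The accepted `NOT EXISTS` insert can be verified with Python's built-in `sqlite3` module; the sample rows mirror the question's data, with `Street1` already present in `Locals`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Locals (Street TEXT, PC TEXT, Locality TEXT);
    CREATE TABLE Locals_bk (Street TEXT, PC TEXT, Locality TEXT);
    INSERT INTO Locals VALUES ('Street1', 'ABC', 'xyz A');
    INSERT INTO Locals_bk VALUES
        ('Street1', 'ABC', 'xyz A'),
        ('Street2', 'DEF', 'xyz B');
""")

# Insert whole rows from the backup, skipping streets already present.
conn.execute("""
    INSERT INTO Locals (Street, PC, Locality)
    SELECT b.Street, b.PC, b.Locality
    FROM Locals_bk AS b
    WHERE NOT EXISTS (SELECT 1 FROM Locals AS t WHERE t.Street = b.Street)
""")
streets = [r[0] for r in conn.execute("SELECT Street FROM Locals ORDER BY Street")]
```

Only `Street2` is inserted; the duplicate `Street1` row is filtered out, and all three columns travel together.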
I have a parser that reads in a long octet string, and I want it to print out smaller strings based on the parsing details. It reads in a hexstring which is as follows
The string will be in a format like so:
```
01046574683001000004677265300000000266010000
```
The format of the interface contained in the hex is like so:
```
version:length_of_name:name:op_status:priority:reserved_byte
```
==
```
01:04:65746830:01:00:00
```
== (when converted from hex)
```
01:04:eth0:01:00:00
```
^ this is one segment of the string; it represents eth0 (I inserted the : to make it easier to read). At the moment, however, my code returns a blank list, and I don't know why. Can somebody help me, please?
```
def octetChop(long_hexstring, from_ssh_):
startpoint_of_interface_def=0
# As of 14/8/13 , the network operator has not been implemented
network_operator_implemented=False
version_has_been_read = False
position_of_interface=0
chopped_octet_list = []
#This while loop moves through the string of the interface, based on the full length of the container
try:
while startpoint_of_interface_def < len(long_hexstring):
if version_has_been_read == True:
pass
else:
if startpoint_of_interface_def == 0:
startpoint_of_interface_def = startpoint_of_interface_def + 2
version_has_been_read = True
endpoint_of_interface_def = startpoint_of_interface_def+2
length_of_interface_name = long_hexstring[startpoint_of_interface_def:endpoint_of_interface_def]
length_of_interface_name_in_bytes = int(length_of_interface_name) * 2 #multiply by 2 because its calculating bytes
end_of_interface_name_point = endpoint_of_interface_def + length_of_interface_name_in_bytes
hex_name = long_hexstring[endpoint_of_interface_def:end_of_interface_name_point]
text_name = hex_name.decode("hex")
print "the text_name is " + text_name
operational_status_hex = long_hexstring[end_of_interface_name_point:end_of_interface_name_point+2]
startpoint_of_priority = end_of_interface_name_point+2
priority_hex = long_hexstring[startpoint_of_priority:startpoint_of_priority+2]
#Skip the reserved byte
network_operator_length_startpoint = startpoint_of_priority+4
single_interface_string = long_hexstring[startpoint_of_interface_def:startpoint_of_priority+4]
print single_interface_string + " is chopped from the octet string"# - keep for possible debugging
startpoint_of_interface_def = startpoint_of_priority+4
if network_operator_implemented == True:
network_operator_length = long_hexstring[network_operator_length_startpoint:network_operator_length_startpoint+2]
network_operator_length = int(network_operator_length) * 2
network_operator_start_point = network_operator_length_startpoint+2
network_operator_end_point = network_operator_start_point + network_operator_length
network_operator = long_hexstring[network_operator_start_point:network_operator_end_point]
#
single_interface_string = long_hexstring[startpoint_of_interface_def:network_operator_end_point]
#set the next startpoint if there is one
startpoint_of_interface_def = network_operator_end_point+1
else:
self.network_operator = None
print single_interface_string + " is chopped from the octet string"# - keep for possible debugging
#This is where each individual interface is stored, in a list for comparison.
chopped_octet_list.append(single_interface_string)
finally:
return chopped_octet_list
``` | The reason your code is returning a blank list is the following: In this line:
```
else:
self.network_operator = None
```
`self` is not defined so you get a **NameError** exception. This means that the `try` jumps directly to the `finally` clause without ever executing the part where you:
```
chopped_octet_list.append(single_interface_string)
```
As a consequence the list remains empty. In any case the code is overly complicated for such a task, I would follow one of the other answers. | I hope I got you right. You got a hex-string which contains various interface definition. Inside each interface definition the second octet describes the length of the name of the interface.
Lets say the string contains the interfaces eth0 and eth01 and looks like this (length 4 for eth0 and length 5 for eth01):
```
01046574683001000001056574683031010000
```
Then you can split it like this:
```
def splitIt (s):
tokens = []
while s:
length = int (s [2:4], 16) * 2 + 10 #name length * 2 + 10 digits for rest
tokens.append (s [:length] )
s = s [length:]
return tokens
```
This yields:
```
['010465746830010000', '01056574683031010000']
``` | I want my parser to return a list of strings, but it returns a blank list | [
"",
"python",
"list",
"parsing",
"hex",
""
] |
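A Python 3 port of the splitting idea above, with an extra helper (invented for this sketch) that decodes the interface name from each token. The record layout assumed is the one described in the question: 2 hex digits of version, 2 of name length, the name, then 6 fixed digits.

```python
def split_interfaces(s):
    """Split a concatenated hex string into per-interface tokens.

    Octet 2 of each record is the name length, so each record spans
    name_length * 2 hex digits plus 10 fixed digits.
    """
    tokens = []
    while s:
        length = int(s[2:4], 16) * 2 + 10
        tokens.append(s[:length])
        s = s[length:]
    return tokens

def interface_name(token):
    # The name starts at digit 4 and runs for name_length bytes.
    name_len = int(token[2:4], 16) * 2
    return bytes.fromhex(token[4:4 + name_len]).decode("ascii")

tokens = split_interfaces("01046574683001000001056574683031010000")
names = [interface_name(t) for t in tokens]
```

For the eth0/eth01 example string this recovers both tokens and both names.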
```
a=np.arange(3)
a.shape #(3,)
a.reshape(3,1)
```
Some operations, like multiplication and addition, failed for a.
So what's shape (3,) used for? | Shape `(n,)` indicates a one dimensional array. If you do `reshape(3, 1)` you get a two dimensional array with one column and 3 rows.
Not sure what your question is exactly, can you elaborate? | reshape(n,m) is used to change the dimension of existing multi-dimensional array.
Your multiplication might have failed because of mismatch in the dimensions of the two arrays. Check if they have same dimensions or not. If not you won't be able to multiply them, it should be of same dimensions. And to get more on reshape(n,m) go to the official documentation of numpy module. | when to reshape numpy array like (3,) | [
"",
"python",
"numpy",
""
] |
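A short NumPy sketch of the distinction discussed above. Note that `reshape` returns a new array rather than modifying `a` in place, which is a common reason `a.reshape(3,1)` appears to have no effect.

```python
import numpy as np

a = np.arange(3)        # shape (3,): a one-dimensional array
col = a.reshape(3, 1)   # shape (3, 1): a 2-D column vector; `a` is unchanged

# Broadcasting a (3,1) array against a (3,) array yields a (3,3) result,
# which is why mixing the two shapes in arithmetic can be surprising.
outer = col * a
```

So `(3,)` is simply a 1-D vector; only when you need explicit row/column semantics does reshaping to `(3, 1)` matter.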
I am starting to learn SQL and I have a book that provides a database to work on. These files below are in the directory but the problem is that when I run the query, it gives me this error:
> Msg 5120, Level 16, State 101, Line 1 Unable to open the physical file
> "C:\Murach\SQL Server 2008\Databases\AP.mdf". Operating system error
> 5: "5(Access is denied.)".
```
CREATE DATABASE AP
ON PRIMARY (FILENAME = 'C:\Murach\SQL Server 2008\Databases\AP.mdf')
LOG ON (FILENAME = 'C:\Murach\SQL Server 2008\Databases\AP_log.ldf')
FOR ATTACH
GO
```
In the book the author says it should work, but it is not working in my case. I searched but I do not know exactly what the problem is, so I posted this question. | SQL Server database engine service account must have permissions to read/write in the new folder.
Check out [this](http://dbamohsin.wordpress.com/2009/06/03/attaching-database-unable-to-open-physical-file-access-is-denied/)
> To fix, I did the following:
>
> Added the Administrators Group to the file security permissions with
> full control for the Data file (S:) and the Log File (T:).
>
> Attached the database and it works fine.
[](https://i.stack.imgur.com/i8VlC.png)
[](https://i.stack.imgur.com/PNUhP.png) | An old post, but here is a step by step that worked for SQL Server 2014 running under windows 7:
* Control Panel ->
* System and Security ->
* Administrative Tools ->
* Services ->
* Double Click SQL Server (SQLEXPRESS) -> right click, Properties
* Select Log On Tab
* Select "Local System Account" (the default was some obtuse Windows System account)
* -> OK
* right click, Stop
* right click, Start
Voilá !
I think setting the logon account may have been an option in the installation, but if so it was not the default, and was easy to miss if you were not already aware of this issue. | SQL Server Operating system error 5: "5(Access is denied.)" | [
"",
"sql",
"sql-server",
""
] |
I want a list of party names from the database with 'All' as the first option, but I won't insert 'All' into the database; I only need it at retrieval time. So I wrote this query:
```
Select 0 PartyId, 'All' Name
Union
select PartyId, Name
from PartyMst
```
This is my Result
```
0 All
1 SHIV ELECTRONICS
2 AAKASH & CO.
3 SHAH & CO.
```
when I use `order by Name` it displays below result.
```
2 AAKASH & CO.
0 All
3 SHAH & CO.
1 SHIV ELECTRONICS
```
But, I want 1st Option as 'All' and then list of Parties in Sorted order.
How can I do this? | You need to use a sub-query with `CASE` in `ORDER BY` clause like this:
```
SELECT * FROM
(
Select 0 PartyId, 'All' Name
Union
select PartyId, Name
from PartyMst
) tbl
ORDER BY CASE WHEN PartyId = 0 THEN 0 ELSE 1 END
,Name
```
Output:
| PARTYID | NAME |
| --- | --- |
| 0 | All |
| 2 | AAKASH & CO. |
| 3 | SHAH & CO. |
| 1 | SHIV ELECTRONICS |
See [this SQLFiddle](http://sqlfiddle.com/#!18/6d3c2/1) | Since you are anyway hardcoding 0, All just add a space before the All
```
Select 0 PartyId, ' All' Name
Union
select PartyId, Name
from PartyMst
ORDER BY Name
```
[SQL FIDDLE](http://sqlfiddle.com/#!3/fdfca/8)
Raj | Order by clause with Union in Sql Server | [
"",
"sql",
"sql-server",
"sql-order-by",
"union",
""
] |
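The `CASE`-based ordering above also works in SQLite, so it can be checked with Python's `sqlite3` module (sample data taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PartyMst (PartyId INTEGER, Name TEXT);
    INSERT INTO PartyMst VALUES
        (1, 'SHIV ELECTRONICS'), (2, 'AAKASH & CO.'), (3, 'SHAH & CO.');
""")

# Order the synthetic 'All' row first, then the real names alphabetically.
rows = conn.execute("""
    SELECT * FROM (
        SELECT 0 AS PartyId, 'All' AS Name
        UNION
        SELECT PartyId, Name FROM PartyMst
    )
    ORDER BY CASE WHEN PartyId = 0 THEN 0 ELSE 1 END, Name
""").fetchall()
names = [r[1] for r in rows]
```

The `CASE` expression pins the sentinel row to the top without resorting to the leading-space hack.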
I have two tables, `managers` and `users`.
`managers`:
```
manager_user_id user_user_id
--------------- ------------
1000011 1000031
1000011 1000032
1000011 1000033
```
etc.
`users`:
```
user_id name
------- ----
1000011 John
1000031 Jack
1000032 Mike
1000033 Paul
```
What I want to do is pull out a list of users' names and their user id's for a specific manager. So something like…
Users for John are:
```
1000031 Jack
1000032 Mike
1000033 Paul
```
I tried the following SQL, but it's wrong:
```
SELECT users.name,
users.user_id
FROM users
INNER JOIN managers
on users.user_id = managers.user_user_id
WHERE managers.manager_user_id='1000011'
I don't think there is an error in your query, but you can try your query without quotes, as:
```
SELECT users.name,
users.user_id
FROM users
INNER JOIN managers
on users.user_id = managers.user_user_id
WHERE managers.manager_user_id=1000011
``` | Your query seems to be OK.
You can check the following **[SQL Fiddle](http://sqlfiddle.com/#!3/b0dbb/4)**.
```
select u.name, u.user_id, m.manager_user_id
from users u
left join managers m on m.user_user_id = u.user_id
;
``` | SQL inner join on two tables | [
"",
"sql",
"inner-join",
""
] |
How do I get the name of the Attached databases in SQLite?
I've tried looking into:
```
SELECT name FROM sqlite_master
```
But there doesn't seem to be any information there about the attached databases.
I attach the databases with the command:
```
ATTACH DATABASE <fileName> AS <DBName>
```
It would be nice to be able to retrieve a list of the FileNames or DBNames attached.
I'm trying to verify if a database was correctly attached without knowing its schema beforehand. | Are you looking for this?
```
PRAGMA database_list;
```
> **[PRAGMA database\_list;](http://www.sqlite.org/pragma.html#pragma_database_list)**
> This pragma works like a query **to return one row for each database
> attached to the current database connection.** The second column is the
> "main" for the main database file, "temp" for the database file used
> to store TEMP objects, or the name of the ATTACHed database for other
> database files. The third column is the name of the database file
> itself, or an empty string if the database is not associated with a
> file. | You can use [`.database`](https://sqlite.org/cli.html#querying_the_database_schema) command. | sqlite get name of attached databases | [
"",
"sql",
"database",
"sqlite",
"command",
"sqlite-shell",
""
] |
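A minimal `sqlite3` sketch of `PRAGMA database_list` after an `ATTACH`; the database name `otherdb` and the temp-file path are illustrative.

```python
import os
import sqlite3
import tempfile

conn = sqlite3.connect(":memory:")
path = os.path.join(tempfile.mkdtemp(), "other.db")
conn.execute("ATTACH DATABASE ? AS otherdb", (path,))

# PRAGMA database_list returns one row per attached database:
# (seq, name, file) -- 'main' plus every ATTACHed name.
attached = [row[1] for row in conn.execute("PRAGMA database_list")]
```

Checking the returned names is a schema-independent way to verify that an attach succeeded.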
I've tried searching this topic, and my searches led me to this format, which is still throwing an error. When I execute my script, I basically get a load of ORA-01735 errors for all my later statements. I had it done out differently, but googling led me to this format, which still doesn't work. Any tips?
```
CREATE TABLE table7
(
column1 int NOT NULL,
column2 int NOT NULL,
column3 int NOT NULL
)
/
ALTER TABLE table7
ADD( pk1 PRIMARY KEY(column1),
fk1 FOREIGN KEY(column2) REFERENCES Table1(column2),
fk2 FOREIGN KEY(column3) REFERENCES Service(column3)
)
/
``` | `ADD` should surround each column definition. You don't wrap a single `ADD` around 3 new columns.
See: <http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_3001.htm#i2183462>
For Primary Key and Foreign Key constraints you need the `CONSTRAINT` keyword. See: <http://docs.oracle.com/javadb/10.3.3.0/ref/rrefsqlj81859.html> Section on "adding constraints".
**EDIT:** This was the only thing that worked on the fiddle I tried:
```
ALTER TABLE table7
ADD (
CONSTRAINT pk1 PRIMARY KEY (column1),
CONSTRAINT fk1 Foreign Key (column2) REFERENCES Table1 (column2),
CONSTRAINT fk2 Foreign Key (column3) REFERENCES Service (column3)
)
```
Here's the fiddle: <http://sqlfiddle.com/#!4/9d2a3> | Check this out:
```
ALTER TABLE table7
ADD pk1 PRIMARY KEY(column1),
ADD fk1 FOREIGN KEY(column2) REFERENCES Table1(column2),
ADD fk2 FOREIGN KEY(column3) REFERENCES Service(column3)
```
See syntax and examples:
<http://docs.oracle.com/cd/E17952_01/refman-5.1-en/alter-table.html>
<http://docs.oracle.com/cd/E17952_01/refman-5.1-en/alter-table-examples.html> | Alter Scripts in SQL - ORA 01735 | [
"",
"sql",
"oracle",
"alter",
""
] |
This is easy if you have an `Id` column and a `Value` column with duplicate rows. But in the interview I was asked how to remove them if you have only the `Value` column. For example:
table\_a input:
```
Value
A
A
B
A
C
D
D
E
F
F
E
```
table\_a output:
```
Value
A
B
C
D
E
F
```
Question: You have table with only one column `Value` and you have to `delete` all rows, which have duplicates (as in result upper). | if you are allowed to use CTE:
```
with cte as (
select
row_number() over(partition by Value order by Value) as row_num,
Value
from Table1
)
delete from cte where row_num > 1
```
[**sql fiddle demo**](http://sqlfiddle.com/#!3/d6ce6/4)
as t-clausen.dk suggested in comments, you don't even need value inside the CTE:
```
with cte as (
select
row_number() over(partition by Value order by Value) as row_num
from Table1
)
delete from cte where row_num > 1;
Well, how about using a [CTE](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx)
> A common table expression (CTE) can be thought of as a temporary
> result set that is defined within the execution scope of a single
> SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. A CTE is
> similar to a derived table in that it is not stored as an object and
> lasts only for the duration of the query. Unlike a derived table, a
> CTE can be self-referencing and can be referenced multiple times in
> the same query.
and [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx).
> Returns the sequential number of a row within a partition of a result
> set, starting at 1 for the first row in each partition.
Something like
```
;WITH Vals AS (
SELECT [Value],
ROW_NUMBER() OVER(PARTITION BY [Value] ORDER BY [Value]) RowID
FROM MyTable
)
DELETE
FROM Vals
WHERE RowID > 1
``` | Remove duplicates if you have only one column with value | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
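SQLite has no deletable CTE, but the same dedup can be expressed with the hidden `rowid`, which makes the idea easy to test from Python. The `rowid` variant below is an adaptation for SQLite, not the answers' T-SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Value TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(v,) for v in "AABACDDEFFE"])  # the question's data

# Keep the first physical row per value and delete the rest; every SQLite
# table row carries an implicit rowid we can use as a tiebreaker.
conn.execute("""
    DELETE FROM t
    WHERE rowid NOT IN (SELECT MIN(rowid) FROM t GROUP BY Value)
""")
remaining = sorted(r[0] for r in conn.execute("SELECT Value FROM t"))
```

The `ROW_NUMBER()` CTE in the answers plays the same tiebreaker role that `rowid` plays here.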
Some external data vendor wants to give me a data field: a pipe-delimited string value, which I find quite difficult to deal with.
Without help from an application programming language, is there a way to transform the string value into rows?
There is a difficulty however, the field has unknown number of delimited elements.
DB engine in question is MySQL.
For example:
```
Input: Tuple(1, "a|b|c")
Output:
Tuple(1, "a")
Tuple(1, "b")
Tuple(1, "c")
``` | It may not be as difficult as I initially thought.
This is a general approach:
1. Count the number of occurrences of the delimiter: `length(val) - length(replace(val, '|', ''))`
2. Loop that many times; each time, grab the next delimited value and insert it into a second table.
```
CREATE FUNCTION SPLIT_STR(
x VARCHAR(255),
delim VARCHAR(12),
pos INT
)
RETURNS VARCHAR(255)
RETURN REPLACE(SUBSTRING(SUBSTRING_INDEX(x, delim, pos),LENGTH(SUBSTRING_INDEX(x, delim, pos -1)) + 1),
delim, '');
```
Usage
```
SELECT SPLIT_STR(string, delimiter, position)
```
you will need a loop to solve your problem. | Split delimited string value into rows | [
"",
"mysql",
"sql",
"database",
"delimiter",
""
] |
I have a table called "users" that has a column called "username." Recently, I added the prefix "El\_" to every username in the database. Now I wish to delete these first three letters. How can I do that? | assuming `MySql` you can do something like this.
`update users set username=substring(username,4);`
which will strip the `El_` prefix from every row, but this assumes that every row starts with `El_`.
sqlfiddle - <http://sqlfiddle.com/#!2/3bcf6/1/0> | ```
SELECT RIGHT(MyColumn, LEN(MyColumn) - 3) AS MyTrimmedColumn
```
This removes the first three characters from your result. Using RIGHT needs two arguments: the first one is the column you'd wish to display, the second one is the number of characters counting from the right of your result.
This should do!
EDIT: if you really like to remove this prefix from every username for good use an UPDATE statement as follows:
```
UPDATE MyTable
SET MyColumn = RIGHT(MyColumn, LEN(MyColumn) - 3)
``` | I need to remove the first three characters of every field in an SQL column | [
"",
"sql",
""
] |
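In SQLite the equivalent of the MySQL `substring(username, 4)` trick is `substr`, which makes the prefix removal easy to check from Python (the sample usernames are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("El_alice",), ("El_bob",)])

# substr(x, 4) returns everything from the 4th character on,
# i.e. it drops the 3-character 'El_' prefix.
conn.execute("UPDATE users SET username = substr(username, 4)")
names = sorted(r[0] for r in conn.execute("SELECT username FROM users"))
```

As with the MySQL version, this blindly removes three characters, so it should only run against rows that really carry the prefix.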
I am trying to rename a column name in [w3schools website](http://www.w3schools.com/sql/trysql.asp?filename=trysql_select_between)
```
ALTER TABLE customers
RENAME COLUMN contactname to new_name;
```
However, the above code throws syntax error. What am I doing wrong? | You can try this to rename the column in SQL Server:-
```
sp_RENAME 'TableName.[OldColumnName]' , '[NewColumnName]', 'COLUMN'
```
> sp\_rename automatically renames the associated index whenever a
> PRIMARY KEY or UNIQUE constraint is renamed. If a renamed index is
> tied to a PRIMARY KEY constraint, the PRIMARY KEY constraint is also
> automatically renamed by sp\_rename. sp\_rename can be used to rename
> primary and secondary XML indexes.
For MYSQL try this:-
```
ALTER TABLE table_name CHANGE [COLUMN] old_col_name new_col_name
``` | From "Learning PHP, MySQL & JavaScript" by Robin Nixon pg 185. I tried it and it worked.
`ALTER TABLE tableName CHANGE oldColumnName newColumnName TYPE(#);`
note that `TYPE(#)` is, for example, VARCHAR(20) or some other data type and must be included even if the data type is not being changed. | How do I rename column in w3schools sql? | [
"",
"sql",
"alter",
""
] |
Just a real quick question. When modifying a table's columns, and saving requires table recreation, does recreating it erase all of its contents?
Thanks in advance. | @gman4455 and @Erik .. yes, it will erase the data, but when you are doing it from SSMS it will take care of everything .. you don't need to worry about the data .. SSMS will hold the data temporarily, and when it recreates the table it will recreate the data for you .... so you don't need to worry about anything when you modify a table's columns and save through SSMS | recreate = drop table + create (a new) table.
drop means all the data is lost.
If you want to keep the data use the `Alter Table`
<http://technet.microsoft.com/en-us/library/ms190273.aspx> | Does recreating a table in SQL erase its contents? | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a simple m-to-n table in a database and need to perform an AND search. The table looks as follows:
```
column a | column b
1 x
1 y
1 z
2 x
2 c
3 a
3 b
3 c
3 y
3 z
4 d
4 e
4 f
5 f
5 x
5 y
```
I want to be able to say 'give me column A where it has x AND y in column b (returning 1 and 5 here), but i can't figure out how to form that query.
I tried `SELECT column_a FROM table WHERE column_b = x AND columb_b = y` but it seems that would only return if the column was somehow both. Is it fundamentally possible, or should i have a different table layout? | Here's one way:
```
SELECT a
FROM Table1
WHERE b IN ('x', 'y')
GROUP BY a
HAVING COUNT(DISTINCT(b)) = 2
```
[SQL Fiddle](http://www.sqlfiddle.com/#!3/2ca9a/1)
If you are guaranteed (a,b) is unique, you can get rid of the DISTINCT as well. | This is an example of a "set-within-sets" subquery. I like to use `group by` and put the logic in the `having` clause:
```
select column_a
from table
group by column_a
having sum(case when column_b = x then 1 else 0 end) > 0 and
sum(case when column_b = y then 1 else 0 end) > 0;
```
The each `sum()` in the `having` clause is counting the number of rows that match one of the conditions.
This turns out to be quite general. So, you can check for `z` just by adding a clause:
```
select column_a
from table
group by column_a
having sum(case when column_b = x then 1 else 0 end) > 0 and
sum(case when column_b = y then 1 else 0 end) > 0 and
sum(case when column_b = z then 1 else 0 end) > 0;
```
Or, make it "x" or "y" by using `or` instead of `and`:
```
select column_a
from table
group by column_a
having sum(case when column_b = x then 1 else 0 end) > 0 or
sum(case when column_b = y then 1 else 0 end) > 0;
``` | AND query a m-to-n table | [
"",
"sql",
"database",
"layout",
"sqlite",
""
] |
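The accepted `HAVING COUNT(DISTINCT b) = 2` query runs unchanged in SQLite, so it can be checked against the question's exact data from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
pairs = [(1, 'x'), (1, 'y'), (1, 'z'), (2, 'x'), (2, 'c'),
         (3, 'a'), (3, 'b'), (3, 'c'), (3, 'y'), (3, 'z'),
         (4, 'd'), (4, 'e'), (4, 'f'), (5, 'f'), (5, 'x'), (5, 'y')]
conn.executemany("INSERT INTO t VALUES (?, ?)", pairs)

# Keep only the groups that matched BOTH 'x' and 'y' (2 distinct hits).
rows = conn.execute("""
    SELECT a FROM t
    WHERE b IN ('x', 'y')
    GROUP BY a
    HAVING COUNT(DISTINCT b) = 2
    ORDER BY a
""").fetchall()
matches = [r[0] for r in rows]
```

Groups 2 and 3 match only one of the two values, so the `HAVING` filter drops them, leaving 1 and 5 as expected.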
I'm trying to convert `sysdate` using `toChar` to the following format:
`2006-11-20T17:10:02+01:00`
From this format:
`16/08/2012 13:40:59`
Is there a standard way of doing this?
I've tried using the `toChar` to specific the `T` part as a string but it doesn't appear to be working.
Thanks in advance
Jezzipin
EDIT:
I've tried Nicholas' solution; however, as I mentioned above, I need to use sysdate. I've used the following select query:
```
select to_char(to_timestamp_tz(sysdate-365, 'dd/mm/yyyy hh24:mi:ss'),'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM') from dual;
```
However, this returns:
```
0012-08-16T00:00:00 +01:00
```
which is incorrect as it should be 2012-08-16T00:00:00 +01:00 | **Try:**
```
select to_char(sysdate, 'yyyy-mm-dd') || 'T' || to_char(sysdate,'hh24:mi:ss') || sessiontimezone
from dual;
```
**Returns:**
2013-08-16T13:00:51+00:00 | To display `sysdate` in the format that contains timezone information you need to do a series of conversions:
1. Convert `sysdate` to string literal using `to_char()` function.
2. Convert string literal to timestamp with time zone using `to_timestamp_tz()` function.
3. And finally, convert the final result back to string literal using `to_char()`.
as follows:
```
select to_char(
to_timestamp_tz(
to_char(sysdate - 365, 'dd/mm/yyyy hh24:mi:ss')
, 'dd/mm/yyyy hh24:mi:ss')
, 'yyyy-mm-dd"T"hh24:mi:ss TZH:TZM'
) as res
from dual
```
Result:
```
RES
--------------------------
2012-08-16T17:29:28 +04:00
```
You can include string literal in the format mask enclosing it with double quotes. | Using toChar to output a date in W3C XML Schema xs:dateTime type format | [
"",
"sql",
"oracle",
"plsql",
"oracle11g",
""
] |
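As a cross-check outside Oracle, the target xs:dateTime shape (`2006-11-20T17:10:02+01:00`) is exactly what an offset-aware timestamp's ISO 8601 serialization looks like; a small Python sketch, assuming a fixed +01:00 offset purely for illustration:

```python
from datetime import datetime, timezone, timedelta

# Offset-aware timestamp with an assumed +01:00 zone.
tz = timezone(timedelta(hours=1))
ts = datetime(2006, 11, 20, 17, 10, 2, tzinfo=tz)

# isoformat() emits the literal 'T' separator and the colon
# inside the offset, matching the xs:dateTime target format.
result = ts.isoformat()
print(result)  # -> 2006-11-20T17:10:02+01:00
```

This is handy for verifying that whatever string your Oracle `to_char` mask produces round-trips as a valid xs:dateTime value.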
**EDITED:**
I'm working in Sql Server 2005 and I'm trying to get a year over year (YOY) count of distinct users for the current fiscal year (say Jun 1-May 30) and the past 3 years. I'm able to do what I need by running a select statement four times, but I can't seem to find a better way at this point. I'm able to get a distinct count for each year in one query, but I need it to a cumulative distinct count. Below is a mockup of what I have so far:
```
SELECT [Year], COUNT(DISTINCT UserID)
FROM
(
SELECT u.uID AS UserID,
CASE
WHEN dd.ddEnd BETWEEN @yearOneStart AND @yearOneEnd THEN 'Year1'
WHEN dd.ddEnd BETWEEN @yearTwoStart AND @yearTwoEnd THEN 'Year2'
WHEN dd.ddEnd BETWEEN @yearThreeStart AND @yearThreeEnd THEN 'Year3'
WHEN dd.ddEnd BETWEEN @yearFourStart AND @yearFourEnd THEN 'Year4'
ELSE 'Other'
END AS [Year]
FROM Users AS u
INNER JOIN UserDataIDMatch AS udim
ON u.uID = udim.udim_FK_uID
INNER JOIN DataDump AS dd
ON udim.udimUserSystemID = dd.ddSystemID
) AS Data
WHERE LOWER([Year]) <> 'other'
GROUP BY
[Year]
```
I get something like:
```
Year1 1
Year2 1
Year3 1
Year4 1
```
But I really need:
```
Year1 1
Year2 2
Year3 3
Year4 4
```
Below is a rough schema and set of values (updated for simplicity). I tried to create a SQL Fiddle, but I'm getting a disk space error when I attempt to build the schema.
```
CREATE TABLE Users
(
uID int identity primary key,
uFirstName varchar(75),
uLastName varchar(75)
);
INSERT INTO Users (uFirstName, uLastName)
VALUES
('User1', 'User1'),
('User2', 'User2'),
('User3', 'User3'),
('User4', 'User4');
CREATE TABLE UserDataIDMatch
(
udimID int identity primary key,
udim_FK_uID int foreign key references Users(uID),
udimUserSystemID varchar(75)
);
INSERT INTO UserDataIDMatch (udim_FK_uID, udimUserSystemID)
VALUES
(1, 'SystemID1'),
(2, 'SystemID2'),
(3, 'SystemID3'),
(4, 'SystemID4');
CREATE TABLE DataDump
(
ddID int identity primary key,
ddSystemID varchar(75),
ddEnd datetime
);
INSERT INTO DataDump (ddSystemID, ddEnd)
VALUES
('SystemID1', '10-01-2013'),
('SystemID2', '10-01-2014'),
('SystemID3', '10-01-2015'),
('SystemID4', '10-01-2016');
``` | Unless I'm missing something, you just want to know how many records there are where the date is less than or equal to the current fiscal year.
```
DECLARE @YearOneStart DATETIME, @YearOneEnd DATETIME,
@YearTwoStart DATETIME, @YearTwoEnd DATETIME,
@YearThreeStart DATETIME, @YearThreeEnd DATETIME,
@YearFourStart DATETIME, @YearFourEnd DATETIME
SELECT @YearOneStart = '06/01/2013', @YearOneEnd = '05/31/2014',
@YearTwoStart = '06/01/2014', @YearTwoEnd = '05/31/2015',
@YearThreeStart = '06/01/2015', @YearThreeEnd = '05/31/2016',
@YearFourStart = '06/01/2016', @YearFourEnd = '05/31/2017'
;WITH cte AS
(
SELECT u.uID AS UserID,
CASE
WHEN dd.ddEnd BETWEEN @yearOneStart AND @yearOneEnd THEN 'Year1'
WHEN dd.ddEnd BETWEEN @yearTwoStart AND @yearTwoEnd THEN 'Year2'
WHEN dd.ddEnd BETWEEN @yearThreeStart AND @yearThreeEnd THEN 'Year3'
WHEN dd.ddEnd BETWEEN @yearFourStart AND @yearFourEnd THEN 'Year4'
ELSE 'Other'
END AS [Year]
FROM Users AS u
INNER JOIN UserDataIDMatch AS udim
ON u.uID = udim.udim_FK_uID
INNER JOIN DataDump AS dd
ON udim.udimUserSystemID = dd.ddSystemID
)
SELECT
DISTINCT [Year],
(SELECT COUNT(*) FROM cte cteInner WHERE cteInner.[Year] <= cteMain.[Year] )
FROM cte cteMain
``` | # Concept using an existing query
I have done something similar for finding the number of distinct customers who bought something between years. I modified it to use your concept of a year; the variables you add would be the **start day** and **start month** of the year and the **start year** and **end year**.
Technically there is a way to avoid using a loop, but this version is very clear, and since you can't go past year 9999 I don't feel that clever loop-avoiding code makes sense here.
## Tips for speeding up the query
Also, when matching dates make sure you are comparing dates, and not comparing a function evaluation of the column, as that would mean running the function on every record in the set and would make indices useless if they existed on dates (which they should). Use DATEADD on
zero to initiate your target dates, subtracting 1900 from the year, one from the month and one from the target day.
Then self join on the table where the dates create a valid range (i.e. yearlessthan to yearmorethan) and use a subquery to create a sum based on that range. Since you want accumulative from the first year to the last limit the results to starting at the first year.
At the end you will be missing the first year as by our definition it does not qualify as a range, to fix this just do a union all on the temp table you created to add the missing year and the number of distinct values in it.
```
DECLARE @yearStartMonth INT = 6, @yearStartDay INT = 1
DECLARE @yearStart INT = 2008, @yearEnd INT = 2012
DECLARE @firstYearStart DATE =
DATEADD(day,@yearStartDay-1,
DATEADD(month, @yearStartMonth-1,
DATEADD(year, @yearStart- 1900,0)))
DECLARE @lastYearEnd DATE =
DATEADD(day, @yearStartDay-2,
DATEADD(month, @yearStartMonth-1,
DATEADD(year, @yearEnd -1900,0)))
DECLARE @firstdayofcurrentyear DATE = @firstYearStart
DECLARE @lastdayofcurrentyear DATE = DATEADD(day,-1,DATEADD(year,1,@firstdayofcurrentyear))
DECLARE @yearnumber INT = YEAR(@firstdayofcurrentyear)
DECLARE @tempTableYearBounds TABLE
(
startDate DATE NOT NULL,
endDate DATE NOT NULL,
YearNumber INT NOT NULL
)
WHILE @firstdayofcurrentyear < @lastYearEnd
BEGIN
INSERT INTO @tempTableYearBounds
VALUES(@firstdayofcurrentyear,@lastdayofcurrentyear,@yearNumber)
SET @firstdayofcurrentyear = DATEADD(year,1,@firstdayofcurrentyear)
SET @lastdayofcurrentyear = DATEADD(year,1,@lastdayofcurrentyear)
SET @yearNumber = @yearNumber + 1
END
DECLARE @tempTableCustomerCount TABLE
(
[Year] INT NOT NULL,
[CustomerCount] INT NOT NULL
)
INSERT INTO @tempTableCustomerCount
SELECT
YearNumber as [Year],
COUNT(DISTINCT CustomerNumber) as CustomerCount
FROM Ticket
JOIN @tempTableYearBounds ON
TicketDate >= startDate AND TicketDate <=endDate
GROUP BY YearNumber
SELECT * FROM(
SELECT t2.Year as [Year],
(SELECT
SUM(CustomerCount)
FROM @tempTableCustomerCount
WHERE Year>=t1.Year
AND Year <=t2.Year) AS CustomerCount
FROM @tempTableCustomerCount t1 JOIN @tempTableCustomerCount t2
ON t1.Year < t2.Year
WHERE t1.Year = @yearStart
UNION
SELECT [Year], [CustomerCount]
FROM @tempTableCustomerCount
WHERE [YEAR] = @yearStart
) tt
ORDER BY tt.Year
```
It isn't efficient but at the end the temp table you are dealing with is so small I don't think it really matters, and adds a lot more versatility versus the method you are using.
**Update:** I updated the query to reflect the result you wanted with my data set. I was basically testing to see if this was faster; it was faster by 10 seconds, but the dataset I am dealing with is relatively small (from 12 seconds to 2 seconds).
## Using your data
I changed the tables you gave to temp tables so it didn't affect my environment, and I removed the foreign key because they are not supported for temp tables; the logic is the same as the example included, just changed for your dataset.
```
DECLARE @startYear INT = 2013, @endYear INT = 2016
DECLARE @yearStartMonth INT = 10 , @yearStartDay INT = 1
DECLARE @startDate DATETIME = DATEADD(day,@yearStartDay-1,
DATEADD(month, @yearStartMonth-1,
DATEADD(year,@startYear-1900,0)))
DECLARE @endDate DATETIME = DATEADD(day,@yearStartDay-1,
DATEADD(month,@yearStartMonth-1,
DATEADD(year,@endYear-1899,0)))
DECLARE @tempDateRangeTable TABLE
(
[Year] INT NOT NULL,
StartDate DATETIME NOT NULL,
EndDate DATETIME NOT NULL
)
DECLARE @currentDate DATETIME = @startDate
WHILE @currentDate < @endDate
BEGIN
DECLARE @nextDate DATETIME = DATEADD(YEAR, 1, @currentDate)
INSERT INTO @tempDateRangeTable(Year,StartDate,EndDate)
VALUES(YEAR(@currentDate),@currentDate,@nextDate)
SET @currentDate = @nextDate
END
CREATE TABLE Users
(
uID int identity primary key,
uFirstName varchar(75),
uLastName varchar(75)
);
INSERT INTO Users (uFirstName, uLastName)
VALUES
('User1', 'User1'),
('User2', 'User2'),
('User3', 'User3'),
('User4', 'User4');
CREATE TABLE UserDataIDMatch
(
udimID int identity primary key,
udim_FK_uID int foreign key references Users(uID),
udimUserSystemID varchar(75)
);
INSERT INTO UserDataIDMatch (udim_FK_uID, udimUserSystemID)
VALUES
(1, 'SystemID1'),
(2, 'SystemID2'),
(3, 'SystemID3'),
(4, 'SystemID4');
CREATE TABLE DataDump
(
ddID int identity primary key,
ddSystemID varchar(75),
ddEnd datetime
);
INSERT INTO DataDump (ddSystemID, ddEnd)
VALUES
('SystemID1', '10-01-2013'),
('SystemID2', '10-01-2014'),
('SystemID3', '10-01-2015'),
('SystemID4', '10-01-2016');
DECLARE @tempIndividCount TABLE
(
[Year] INT NOT NULL,
UserCount INT NOT NULL
)
-- no longer need to filter out other because you are using an
--inclusion statement rather than an exclusion one, this will
--also make your query faster (when using real tables not temp ones)
INSERT INTO @tempIndividCount(Year,UserCount)
SELECT tdr.Year, COUNT(DISTINCT UId) FROM
Users u JOIN UserDataIDMatch um
ON um.udim_FK_uID = u.uID
JOIN DataDump dd ON
um.udimUserSystemID = dd.ddSystemID
JOIN @tempDateRangeTable tdr ON
dd.ddEnd >= tdr.StartDate AND dd.ddEnd < tdr.EndDate
GROUP BY tdr.Year
-- will show you your result
SELECT * FROM @tempIndividCount
--add any ranges that did not have an entry but were in your range
--can easily remove this by taking this part out.
INSERT INTO @tempIndividCount
SELECT t1.Year,0 FROM
@tempDateRangeTable t1 LEFT OUTER JOIN @tempIndividCount t2
ON t1.Year = t2.Year
WHERE t2.Year IS NULL
SELECT YearNumber,UserCount FROM (
SELECT 'Year'+CAST(((t2.Year-t1.Year)+1) AS CHAR) [YearNumber] ,t2.Year,(
SELECT SUM(UserCount)
FROM @tempIndividCount
WHERE Year >= t1.Year AND Year <=t2.Year
) AS UserCount
FROM @tempIndividCount t1
JOIN @tempIndividCount t2
ON t1.Year < t2.Year
WHERE t1.Year = @startYear
UNION ALL
--add the missing first year, union it to include the value
SELECT 'Year1',Year, UserCount FROM @tempIndividCount
WHERE Year = @startYear) tt
ORDER BY tt.Year
```
## Benefits over using a WHEN CASE based approach
### More Robust
You do not need to explicitly determine the start and end dates of each year; just as with a logical year, you only need to know the start and end date. You can easily change what you are looking for with some simple modifications (e.g. say you want all 2-year or 3-year ranges).
### Will be faster if the database is indexed properly
Since you are searching based on the same data type you can utilize the indices that should be created on the date columns in the database.
## Cons
### More Complicated
The query is a lot more complicated to follow, even though it is more robust there is a lot of extra logic in the actual query.
### In some circumstances it will not provide a good boost to execution time
If the dataset is very small, or the number of dates being compared isn't significant, then this may not save enough time to be worth it.
"",
"sql",
"t-sql",
"sql-server-2005",
""
] |
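A minimal sketch of the accepted answer's running-total idea, using SQLite via Python: aggregate a distinct count per year, then a correlated subquery sums every year up to the current one. Note this sums per-year distinct counts, so (like the accepted answer) a user active in two years is counted twice; table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (yr TEXT, user_id INTEGER);
INSERT INTO activity (yr, user_id) VALUES
  ('Year1', 1),
  ('Year2', 2),
  ('Year3', 3),
  ('Year4', 4);
""")

# Per-year distinct users, then a correlated subquery turns the
# per-year counts into a cumulative total.
rows = conn.execute("""
WITH per_year AS (
  SELECT yr, COUNT(DISTINCT user_id) AS n
  FROM activity
  GROUP BY yr
)
SELECT p1.yr,
       (SELECT SUM(p2.n) FROM per_year p2 WHERE p2.yr <= p1.yr) AS running
FROM per_year p1
ORDER BY p1.yr
""").fetchall()
print(rows)  # -> [('Year1', 1), ('Year2', 2), ('Year3', 3), ('Year4', 4)]
```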
i have a table as follows:
> PRODUCT(P\_CODE, DESCRIPTION, PRODUCTION\_DATE)
The expired products are those that have been produced more than 1 year ago. How do I list all the products that have already expired, together with their expiration date? | ```
create table PRODUCT(P_CODE number, DESCRIPTION varchar2(200), PRODUCTION_DATE date);
insert into product values(1,'XXX',to_date('12-03-2013','dd-mm-yyyy'));
insert into product values(2,'YYY',to_date('13-03-2012','dd-mm-yyyy'));
insert into product values(3,'ZZZ',to_date('12-08-2012','dd-mm-yyyy'));
insert into product values(4,'AAA',to_date('16-08-2013','dd-mm-yyyy'));
select p_code
,description
,production_date
,add_months(production_date,12) expire_date
from product
where production_date<add_months(sysdate,-12)
```
 | ```
SELECT P_CODE,PRODUCTION_DATE
FROM PRODUCT
WHERE PRODUCTION_DATE >= NOW() - INTERVAL 12 MONTH
``` | SELECT Product rows with expired date | [
"",
"sql",
"oracle",
"select",
""
] |
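The same "expired = produced more than a year before the reference date, expiration = production date plus a year" logic can be sketched portably with SQLite via Python; the reference date stands in for `sysdate` so the example is deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (p_code INTEGER, description TEXT, production_date TEXT);
INSERT INTO product VALUES
  (1, 'XXX', '2013-03-12'),
  (2, 'YYY', '2012-03-13'),
  (3, 'ZZZ', '2012-08-12'),
  (4, 'AAA', '2013-08-16');
""")

ref = "2013-08-16"  # stand-in for sysdate, fixed for reproducibility
rows = conn.execute("""
SELECT p_code, date(production_date, '+1 year') AS expiration_date
FROM product
WHERE production_date < date(?, '-1 year')
ORDER BY p_code
""", (ref,)).fetchall()
print(rows)  # -> [(2, '2013-03-13'), (3, '2013-08-12')]
```

`date(..., '+1 year')` plays the role of Oracle's `add_months(production_date, 12)` in the accepted answer.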
I want to select some rows from a table if a certain condition is true, then if another condition is true to select some others and else (in end) to select some other rows. The main problem is that I want to insert a parameter from command line like this:
```
if exists(select a.* from a
left join b on a.id=b.id
where b.id=:MY_PARAMETER)
else if exists
(select c.* from c where c.id=:Another_Parameter)
else
(select * from b)
```
I understand that I am doing something wrong but I can not figure out what. I tried using CASE-Then but I couldn't find a way to adapt to the solution. Any idea? Thanks
PS: I read some other posts about something like this but as I explained I am having difficulties through this.
<===Edited=====>
Hoping I am clarifying something:
```
select
case when b.id=6
then (select * from a)
else (select a.* from a join b
on b.aid=a.aid)
end
from a join b
on b.aid=a.aid
join c
on b.id=c.bid
where b.id=:num
```
In this case the problem is that it does not allow to return more than one value in the CASE statement. | The union should do just fine, for example for your first example (this will work only if tables a, b and c have similar column order and types):
```
select a.* from a
left join b on a.id=b.id
where b.id=:MY_PARAMETER
UNION
select c.* from c where c.id=:Another_Parameter
and not exists(select a.* from a
left join b on a.id=b.id
where b.id=:MY_PARAMETER)
UNION
select b.* from b
where not exists
(select c.* from c where c.id=:Another_Parameter
and not exists(select a.* from a
left join b on a.id=b.id
where b.id=:MY_PARAMETER))
and not exists (select a.* from a
left join b on a.id=b.id
where b.id=:MY_PARAMETER)
```
In order to build a more effective query, I need a more specific example.
---
```
SELECT a.* FROM a
INNER JOIN b ON a.id = b.id
WHERE b.id = :MY_PARAMETER
UNION
SELECT a.* FROM a
INNER JOIN b ON a.id = b.id
INNER JOIN c ON b.id = c.bid
WHERE NOT EXISTS (SELECT * FROM b WHERE
b.id = :MY_PARAMETER)
AND c.id = :Another_Parameter
``` | Thanks from Mikhail, the right query that solved my problem is this:
```
select a.*
from a
join b
on a.id1=b.id2
where b.id1= :value and exists
(select b.id1 from b where b.id1 = :value)
union all
select *
from a
where exists (select * from c where c.id1=5 and c.id2=:value)
``` | How to select some rows if one condition is true and others if another condition is true? | [
"",
"sql",
"oracle",
"oracle11g",
""
] |
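The core "take branch A when it produces rows, otherwise fall back to branch B" trick from the answers can be sketched with SQLite via Python; table names and the parameter are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (id INTEGER, name TEXT);
CREATE TABLE b (id INTEGER);
INSERT INTO a VALUES (1, 'one'), (2, 'two');
INSERT INTO b VALUES (2);
""")

def pick(param):
    # Branch 1 fires when b contains the parameter; the NOT EXISTS
    # guard makes branch 2 (all of a) fire only otherwise.
    return conn.execute("""
    SELECT a.id, a.name FROM a JOIN b ON a.id = b.id AND b.id = :p
    UNION ALL
    SELECT a.id, a.name FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = :p)
    ORDER BY 1
    """, {"p": param}).fetchall()

print(pick(2))   # -> [(2, 'two')]
print(pick(99))  # -> [(1, 'one'), (2, 'two')]
```

Each extra fallback level just adds another UNION branch guarded by the negation of every earlier branch's condition, as in the accepted answer.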
I have a stored procedure that I want to perform a different `select` based on the result stored in a local variable. My use case is simply that, on certain results from a previous query in the stored procedure, I know the last query will return nothing. But the last query is expensive, and takes a while, so I'd like to short circuit that and return nothing.
Here is a mock-up of the flow I want to achieve, but I get a syntax error from SQL Management Studio
```
DECLARE @myVar int;
SET @myVar = 1;
CASE WHEN @myVar = 0
THEN
SELECT 0 0
ELSE
SELECT getDate()
END
```
The error is: `Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'CASE'.
Msg 102, Level 15, State 1, Line 8
Incorrect syntax near 'END'.` | Use [`IF...ELSE`](http://technet.microsoft.com/en-us/library/ms182717.aspx) syntax for control flow:
```
DECLARE @myVar int;
SET @myVar = 1;
IF @myVar = 0
SELECT 0;
ELSE
SELECT GETDATE();
``` | brother, for CASE function, it can only return single value such as string, in order to execute different query based on certain condition, if else will be the options.
```
DECLARE @myVar INT
SET @myVar = 1
IF @myVar = 0
SELECT '0 0'
ELSE
SELECT GETDATE()
``` | Is it possible to change which SELECT statement is run with a case statement | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
```
SELECT
Income.point, Income.date, SUM(out), SUM(inc)
FROM
Income
LEFT JOIN
Outcome ON Income.point = Outcome.point
AND Income.date = Outcome.date
GROUP BY
Income.point, Income.date
UNION
SELECT
Outcome.point, Outcome.date, SUM(out), SUM(inc)
FROM
Outcome
LEFT JOIN
Income ON Income.point = Outcome.point
AND Income.date = Outcome.date
GROUP BY
Outcome.point, Outcome.date;
```
I have this code; what I want to do is group by before joining.
"Assume that we have an SQL query containing joins and a group-by. The standard way of evaluating this type of query is to first perform all the joins and then the group-by operation. However, it may be possible to perform the group-by early, that is, to push the group-by operation past one or more joins. Early grouping may reduce the query processing cost by reducing the amount of data participating in joins."
So I need an explanation of how to do that.
The exercise is as follows in this case:
> Under the assumption that the income (inc) and expenses (out) of the money at each outlet (point) are registered any number of times a day, get a result set with fields: outlet, date, expense, income.
>
> Note that a single record must correspond to each outlet at each date.
>
> Use Income and Outcome tables. | Try this code
```
SELECT ip,id,ii,oo FROM
(SELECT I.point ip, I.date id, SUM(I.inc) ii FROM Income I GROUP BY I.point, I.date ) in1
LEFT JOIN
(SELECT O.point op, O.date od, SUM(O.out) oo FROM Outcome O GROUP BY O.point, O.date ) ou1
ON op=ip AND od=id
UNION
SELECT ip,id,ii,oo FROM
(SELECT I.point ip, I.date id, SUM(I.inc) ii FROM Income I GROUP BY I.point, I.date ) in1
RIGHT JOIN
(SELECT O.point op, O.date od, SUM(O.out) oo FROM Outcome O GROUP BY O.point, O.date ) ou1
ON op=ip AND od=id
```
Maybe someone can give it a name too. I don't even know what you call these SELECTs in parentheses ... :-/
**Edit**
Well, taking Luis LL's idea and combining it with "early grouping" one would get the following:
```
SELECT COALESCE(ip,op) point,COALESCE(id,od) date,ii inc,oo out FROM
(SELECT point ip, date id, SUM(inc) ii FROM Income GROUP BY point, date ) in1
FULL OUTER JOIN
(SELECT point op, date od, SUM(out) oo FROM Outcome GROUP BY point, date ) ou1
ON op=ip AND od=id
```
Maybe that will do the trick? | Assuming that you need to register this data in 2 separate tables (not the most elegant way to do it) you could get away with using subqueries.
A UNION is probably not the way to do it, since that does not put the datasets of each 'point' together in one record. | SQL GROUP BY before JOIN - sequence when querying | [
"",
"sql",
""
] |
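A runnable sketch of the "group each side first, then join" idea, using SQLite via Python. A FULL OUTER JOIN is emulated with two LEFT JOINs plus UNION so the example also runs on older SQLite versions; names and sample rows are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE income  (point INTEGER, dt TEXT, inc REAL);
CREATE TABLE outcome (point INTEGER, dt TEXT, out REAL);
INSERT INTO income  VALUES (1, '2013-01-01', 10), (1, '2013-01-01', 5), (2, '2013-01-02', 7);
INSERT INTO outcome VALUES (1, '2013-01-01', 4), (3, '2013-01-03', 2);
""")

# Aggregate each table before joining, so sums are never inflated
# by the other table's row multiplicity.
sql = """
WITH i AS (SELECT point, dt, SUM(inc) AS inc FROM income  GROUP BY point, dt),
     o AS (SELECT point, dt, SUM(out) AS out FROM outcome GROUP BY point, dt)
SELECT i.point, i.dt, o.out, i.inc FROM i LEFT JOIN o USING (point, dt)
UNION
SELECT o.point, o.dt, o.out, i.inc FROM o LEFT JOIN i USING (point, dt)
ORDER BY 1, 2
"""
rows = conn.execute(sql).fetchall()
print(rows)  # -> [(1, '2013-01-01', 4.0, 15.0), (2, '2013-01-02', None, 7.0), (3, '2013-01-03', 2.0, None)]
```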
I was recently asked this question in an interview.
I tried this in MySQL, and got the same results (final results).
All gave the number of rows in that particular table.
Can anyone explain the major difference between them?
Let me give you a detailed answer. Count will give you non-null record number of given field. Say you have a table named A
```
select 1 from A
select 0 from A
select * from A
```
will all return same number of records, that is the number of rows in table A. Still the output is different. If there are 3 records in table. With X and Y as field names
```
select 1 from A will give you
1
1
1
select 0 from A will give you
0
0
0
select * from A will give you ( assume two columns X and Y is in the table )
X Y
-- --
value1 value1
value2 (null)
value3 (null)
```
So, all three queries return the same number. Unless you use
```
select count(Y) from A
```
since there is only one non-null value you will get 1 as output | `COUNT(*)` will count the number of rows, while `COUNT(expression)` will count non-null values in expression and `COUNT(column)` will count all non-null values in column.
Since both 0 and 1 are non-null values, `COUNT(0)=COUNT(1)` and they both will be equivalent to the number of rows `COUNT(*)`. It's a different concept, but the result will be the same. | What is the difference between count(0), count(1).. and count(*) in mySQL/SQL? | [
"",
"mysql",
"sql",
""
] |
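The distinction both answers describe, constants count every row while a column skips NULLs, can be demonstrated directly with SQLite via Python (made-up table, mirroring the answer's X/Y example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (x TEXT, y TEXT);
INSERT INTO a VALUES ('v1', 'v1'), ('v2', NULL), ('v3', NULL);
""")

# COUNT(*) and COUNT(constant) count rows; COUNT(column) counts
# only the non-NULL values in that column.
row = conn.execute(
    "SELECT COUNT(*), COUNT(0), COUNT(1), COUNT(y) FROM a"
).fetchone()
print(row)  # -> (3, 3, 3, 1)
```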
My table looks like below
```
tbPatientEpisode
EpisodeIDP int
EpisodeNumber varchar
EpisodeDate datetime
```
When I try the below SQL query
```
SELECT * FROM tbPatientEpisode
```
it shows me results like the below image:

I want data for a particular date like 2013-01-22; for this I tried the below SQL query
```
SELECT * FROM tbPatientEpisode where EpisodeDate like '%2013-01-22%'
```
but it doesn't return any rows. What is wrong with my query, or is there a proper way to get this kind of result? Any help will be appreciated. | That's because your column `EpisodeDate` is of datetime type.
You could cast `EpisodeDate` to datetime, but it will prevent using indexes on your column, so best way I know is to compare `EpisodeDate` with two datetimes:
```
SELECT *
FROM tbPatientEpisode
where
EpisodeDate >= convert(datetime, '20130122', 112) and
EpisodeDate < dateadd(day, 1, convert(datetime, '20130122', 112))
```
I'll explain a bit:
First, you could compare EpisodeDate to string without conversion and rely on implicit SQL Server conversion, but you should be aware of two things:
* [Priority of types](http://technet.microsoft.com/en-us/library/ms190309.aspx). When you compare values of different types, data with the lower priority is converted to the type with the higher priority. That's good in our case, because `varchar` has lower priority than `datetime`, but it could prevent the use of indexes when, for example, your column is `varchar` and you want to compare it with `datetime`.
* You have to know how SQL server will convert your string to date. For example SQL server could recognize format YYYYMMDD easily, but in general I think it's good practice to convert data explicitly.
So it your case you could use
```
select *
FROM tbPatientEpisode
where
EpisodeDate >= '20130122' and
EpisodeDate < '20130123'
```
but you have to be sure that you know what you're doing
I've specified `dateadd(day, 1, '20130122')` rather than `'20130123'` because I'm thinking of 20130122 as an input parameter, so you could replace this string in my query
Instead (SQL Server 2008 onwards):
```
SELECT *
FROM tbPatientEpisode
where CAST(EpisodeDate as Date) = '2013-01-22'
```
Note: this form won't use any applicable index starting with column `EpisodeDate`
If you want to ensure any applicable index is use (and works on SQL Server 2005):
```
SELECT *
FROM tbPatientEpisode
where EpisodeDate between '2013-01-22 00:00:00' AND '2013-01-22 23:59:59.997'
``` | SQL select query for particular date | [
"",
"sql",
"sql-server-2005",
"sql-server-2012",
""
] |
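The half-open date-range pattern from the accepted answer is easy to check with SQLite via Python; datetimes are stored as ISO strings here, and the table is a stand-in for `tbPatientEpisode`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ep (id INTEGER, ep_date TEXT);
INSERT INTO ep VALUES
  (1, '2013-01-22 09:15:00'),
  (2, '2013-01-22 23:59:59'),
  (3, '2013-01-23 00:00:00');
""")

# Half-open interval [day, next day): catches every time of day
# without LIKE or a per-row cast, and stays index-friendly.
rows = conn.execute("""
SELECT id FROM ep
WHERE ep_date >= '2013-01-22' AND ep_date < '2013-01-23'
ORDER BY id
""").fetchall()
print(rows)  # -> [(1,), (2,)]
```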
Just updated my question. But I have an array
```
Dim divName(3) 'Fixed size array
divName(0) = "DIV1"
divName(1) = "DIV2"
divName(2) = "DIV3"
```
I would like to apply one particular value ("DIV1") from my array within my SQL query
```
sql = "SELECT * FROM DivisionNew814 WHERE JMS_UpdateDateTime >= DATEADD(day,-7, GETDATE()) AND Division ='" & divName (divrec(0)) &"' order by JMS_UpdateDateTime desc"
```
It's not working.
"Divrec" is a variable that outputs to "Division 1" I would like to change that value to "DIV1" using my array within the SQL query. | To output a array value you need to use the index:
```
Dim divName(3) 'Fixed size array
divName(0) = "DIV1"
divName(1) = "DIV2"
divName(2) = "DIV3"
... AND Division ='" & divName(0) &"' order by ...
```
If you need the conversion from "Division 1" to "DIV1" and if its always like "Division X" to "DIVX" you could to a replace:
```
... AND Division ='" & Replace(divrec(0), "Division ", "DIV") &"' order by ...
``` | ```
<%
if divrec = "Division 1" then
divrec = "Div1"
end if
%>
``` | Would like to create a variable in ASP to change a value | [
"",
"sql",
"asp-classic",
""
] |
I have the following table named Fruits.
```
ID English Spanish German
1 Apple Applice Apple-
2 Orange -- --
```
If the program passes 1 and English, I have to return 'Apple'. How could I write the sql query for that? Thank you. | ```
select
ID,
case @Lang
when 'English' then English
when 'Spanish' then Spanish
end as Name
from Fruits
where ID = @ID;
```
or, if you have more than one column to choose, you can use apply so you don't have to write multiple case statements
```
select
F.ID,
N.Name,
N.Name_Full
from Fruits as F
outer apply (values
('English', F.English, F.English_Full),
('Spanish', F.Spanish, F.Spanish_Full)
) as N(lang, Name, Name_Full)
where F.ID = @ID and N.lang = @lang
First you should normalize the database to support multiple languages; this means splitting the table into separate
tables:
1. `Fruit` (FruitID, *GenericName*, etc)
2. `Languages` (LanguageID, LanguageName, etc)
3. `FruitTranslations` (FruitID, LanguageID, LocalizedName)
then the query will be just a simple query to table FruitTranslations...
---
If you still want a query for this then you can use Dynamic SQL,
```
DECLARE @cmd VARCHAR(MAX)
SET @Cmd = 'SELECT ' + @Language +
' FROM Fruits WHERE ID = ''' + CONVERT(VARCHAR, @Id) + ''''
EXEC(@Cmd)
``` | Query from multiple column | [
"",
"sql",
"sql-server",
""
] |
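The accepted answer's "CASE on the parameter picks the column" idea can be sketched with SQLite via Python; the fruit data mirrors the question, and the German column is omitted for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruits (id INTEGER, english TEXT, spanish TEXT);
INSERT INTO fruits VALUES (1, 'Apple', 'Manzana'), (2, 'Orange', 'Naranja');
""")

def name_for(fruit_id, lang):
    # CASE compares the bound language parameter against each
    # branch and returns the matching column's value.
    return conn.execute("""
    SELECT CASE :lang
             WHEN 'English' THEN english
             WHEN 'Spanish' THEN spanish
           END
    FROM fruits WHERE id = :id
    """, {"lang": lang, "id": fruit_id}).fetchone()[0]

print(name_for(1, 'English'))  # -> Apple
print(name_for(2, 'Spanish'))  # -> Naranja
```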
I have the following issue in SQL Server, I have some code that looks like this:
```
DROP TABLE #TMPGUARDIAN
CREATE TABLE #TMPGUARDIAN(
LAST_NAME NVARCHAR(30),
FRST_NAME NVARCHAR(30))
SELECT LAST_NAME,FRST_NAME INTO #TMPGUARDIAN FROM TBL_PEOPLE
```
When I do this I get an error 'There is already an object named '#TMPGUARDIAN' in the database'. Can anyone tell me why I am getting this error? | You are dropping it, then creating it, then trying to create it again by using `SELECT INTO`. Change to:
```
DROP TABLE #TMPGUARDIAN
CREATE TABLE #TMPGUARDIAN(
LAST_NAME NVARCHAR(30),
FRST_NAME NVARCHAR(30))
INSERT INTO #TMPGUARDIAN
SELECT LAST_NAME,FRST_NAME
FROM TBL_PEOPLE
```
In MS SQL Server you can create a table without a `CREATE TABLE` statement by using `SELECT INTO` | I usually put these lines at the beginning of my stored procedure, and then at the end.
It is an "exists" check for #temp tables.
```
IF OBJECT_ID('tempdb..#MyCoolTempTable') IS NOT NULL
begin
drop table #MyCoolTempTable
end
```
Full Example:
(Note the LACK of any "SELECT INTO" statements)
```
CREATE PROCEDURE [dbo].[uspTempTableSuperSafeExample]
AS
BEGIN
SET NOCOUNT ON;
IF OBJECT_ID('tempdb..#MyCoolTempTable') IS NOT NULL
BEGIN
DROP TABLE #MyCoolTempTable
END
CREATE TABLE #MyCoolTempTable (
MyCoolTempTableKey INT IDENTITY(1,1),
MyValue VARCHAR(128)
)
INSERT INTO #MyCoolTempTable (MyValue)
SELECT LEFT(@@VERSION, 128)
UNION ALL SELECT TOP 3 LEFT(name, 128) FROM sysobjects
INSERT INTO #MyCoolTempTable (MyValue)
SELECT TOP 3 LEFT(name, 128) FROM sysobjects ORDER BY NEWID()
ALTER TABLE #MyCoolTempTable
ADD YetAnotherColumn VARCHAR(128) NOT NULL DEFAULT 'DefaultValueNeededForTheAlterStatement'
INSERT INTO #MyCoolTempTable (MyValue, YetAnotherColumn)
SELECT TOP 3 LEFT(name, 128) , 'AfterTheAlter' FROM sysobjects ORDER BY NEWID()
SELECT MyCoolTempTableKey, MyValue, YetAnotherColumn FROM #MyCoolTempTable
IF OBJECT_ID('tempdb..#MyCoolTempTable') IS NOT NULL
BEGIN
DROP TABLE #MyCoolTempTable
END
SET NOCOUNT OFF;
END
GO
```
Output ~Sample:
```
1 Microsoft-SQL-Server-BlahBlahBlah DefaultValueNeededForTheAlterStatement
2 sp_MSalreadyhavegeneration DefaultValueNeededForTheAlterStatement
3 sp_MSwritemergeperfcounter DefaultValueNeededForTheAlterStatement
4 sp_drop_trusted_assembly DefaultValueNeededForTheAlterStatement
5 sp_helplogreader_agent DefaultValueNeededForTheAlterStatement
6 fn_MSorbitmaps DefaultValueNeededForTheAlterStatement
7 sp_check_constraints_rowset DefaultValueNeededForTheAlterStatement
8 fn_varbintohexstr AfterTheAlter
9 sp_MSrepl_check_publisher AfterTheAlter
10 sp_query_store_consistency_check AfterTheAlter
```
Also, see my answer here (on "what is the SCOPE of a #temp table") : <https://stackoverflow.com/a/20105766/214977> | Temporary table in SQL server causing ' There is already an object named' error | [
"",
"sql",
"sql-server",
"t-sql",
"temp-tables",
""
] |
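The accepted fix, create the table once and then populate it with `INSERT ... SELECT` instead of `SELECT INTO`, can be sketched with SQLite via Python (names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_people (last_name TEXT, frst_name TEXT);
INSERT INTO tbl_people VALUES ('Doe', 'John'), ('Roe', 'Jane');
""")

# Create the target once, then populate with INSERT ... SELECT;
# re-running only the INSERT never collides with an existing object.
conn.execute("CREATE TABLE tmp_guardian (last_name TEXT, frst_name TEXT)")
conn.execute("INSERT INTO tmp_guardian SELECT last_name, frst_name FROM tbl_people")
count = conn.execute("SELECT COUNT(*) FROM tmp_guardian").fetchone()[0]
print(count)  # -> 2
```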
How can I do in **one select** with multiple columns and put each column in a variable?
Something like this:
```
--code here
V_DATE1 T1.DATE1%TYPE;
V_DATE2 T1.DATE2%TYPE;
V_DATE3 T1.DATE3%TYPE;
SELECT T1.DATE1 INTO V_DATE1, T1.DATE2 INTO V_DATE2, T1.DATE3 INTO V_DATE3
FROM T1
WHERE ID='X';
--code here
``` | Your query should be:
```
SELECT T1.DATE1, T1.DATE2, T1.DATE3
INTO V_DATE1, V_DATE2, V_DATE3
FROM T1
WHERE ID='X';
``` | ```
SELECT
V_DATE1 = T1.DATE1,
V_DATE2 = T1.DATE2,
V_DATE3 = T1.DATE3
FROM T1
WHERE ID='X';
```
I had problems with Bob's answer but this worked fine | Select multiple columns into multiple variables | [
"",
"sql",
"oracle",
"plsql",
"select-into",
""
] |
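From a client language the equivalent of `SELECT ... INTO v1, v2, v3` is one fetch unpacked into several variables; a small sketch with SQLite via Python, mirroring the question's table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (id TEXT, date1 TEXT, date2 TEXT, date3 TEXT);
INSERT INTO t1 VALUES ('X', '2013-01-01', '2013-02-01', '2013-03-01');
""")

# One round trip fills all three variables at once.
v_date1, v_date2, v_date3 = conn.execute(
    "SELECT date1, date2, date3 FROM t1 WHERE id = ?", ("X",)
).fetchone()
print(v_date1, v_date2, v_date3)  # -> 2013-01-01 2013-02-01 2013-03-01
```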
What would be the best way to remove duplicates while merging their records into one?
I have a situation where the table keeps track of player names and their records like this:
```
stats
-------------------------------
nick totalgames wins ...
John 100 40
john 200 97
Whistle 50 47
wHiStLe 75 72
...
```
I would need to merge the rows where nick is duplicated (when ignoring case) and merge the records into one, like this:
```
stats
-------------------------------
nick totalgames wins ...
john 300 137
whistle 125 119
...
```
I'm doing this in Postgres. What would be the best way to do this?
I know that I can get the names where duplicates exist by doing this:
```
select lower(nick) as nick, totalgames, count(*)
from stats
group by lower(nick), totalgames
having count(*) > 1;
```
I thought of something like this:
```
update stats
set totalgames = totalgames + s.totalgames
from (that query up there) s
where lower(nick) = s.nick
```
Except this doesn't work properly. And I still can't seem to be able to delete the other duplicate rows containing the duplicate names. What can I do? Any suggestions? | [SQL Fiddle](http://sqlfiddle.com/#!12/8311d)
Here is your update:
```
UPDATE stats
SET totalgames = x.games, wins = x.wins
FROM (SELECT LOWER(nick) AS nick, SUM(totalgames) AS games, SUM(wins) AS wins
FROM stats
GROUP BY LOWER(nick) ) AS x
WHERE LOWER(stats.nick) = x.nick;
```
Here is the delete to blow away the duplicate rows:
```
DELETE FROM stats USING stats s2
WHERE lower(stats.nick) = lower(s2.nick) AND stats.nick < s2.nick;
```
(Note that the 'update...from' and 'delete...using' syntax are Postgres-specific, and were stolen shamelessly from [this answer](https://stackoverflow.com/a/6258586/1020168) and [this answer](https://stackoverflow.com/a/4442825/1020168).)
You'll probably also want to run this to downcase all the names:
```
UPDATE STATS SET nick = lower(nick);
```
Aaaand throw in a unique index on the lowercase version of 'nick' (or add a constraint to that column to disallow non-lowercase values):
```
CREATE UNIQUE INDEX ON stats (LOWER(nick));
``` | I think easiest way to do it in one query would be using [common table expressions](http://www.postgresql.org/docs/9.2/static/queries-with.html):
```
with cte as (
delete from stats
where lower(nick) in (
select lower(nick) from stats group by lower(nick) having count(*) > 1
)
returning *
)
insert into stats(nick, totalgames, wins)
select lower(nick), sum(totalgames), sum(wins)
from cte
group by lower(nick);
```
As you can see, inside the CTE I'm deleting the duplicates and returning the deleted rows; after that, I insert the grouped deleted data back into the table.
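The same merge can be sketched outside Postgres, for example in Python with SQLite, doing the delete-and-reinsert in two explicit steps (illustrative table and numbers only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (nick TEXT, totalgames INTEGER, wins INTEGER)")
conn.executemany("INSERT INTO stats VALUES (?, ?, ?)",
                 [("john", 100, 37), ("John", 150, 50),
                  ("JOHN", 50, 50), ("whistle", 125, 119)])

# aggregate case-insensitive duplicates into one lower-cased row each
merged = conn.execute("""SELECT lower(nick), SUM(totalgames), SUM(wins)
                         FROM stats GROUP BY lower(nick)""").fetchall()
conn.execute("DELETE FROM stats")
conn.executemany("INSERT INTO stats VALUES (?, ?, ?)", merged)

rows = conn.execute("SELECT * FROM stats ORDER BY nick").fetchall()
print(rows)  # [('john', 300, 137), ('whistle', 125, 119)]
```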
see [**`sql fiddle demo`**](http://sqlfiddle.com/#!12/980be/1) | SQL: How to merge case-insensitive duplicates | [
"",
"sql",
"postgresql",
"duplicates",
""
] |
So I have a list of values that is returned from a subquery and would like to select all values from another table that match the values of that subquery. Is there a particular way that's best to go about this?
So far I've tried:
```
select * from table where tableid = select * from table1 where tableid like '%this%'
``` | ```
select * from table where tableid in(select tableid
from table1
where tableid like '%this%')
``` | ```
select * from table where tableid IN
(select tableid from table1 where tableid like '%this%')
```
A sub-query needs to return what you are asking for. Additionally, if there's more than 1 result, you need `IN` rather than `=` | Select All Values From Table That Match All Values of Subquery | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
Please explain to me how to use a cursor FOR loop in Oracle.
If I use the following code, all is fine.
```
for rec in (select id, name from students) loop
-- do anything
end loop;
```
But if I define a variable for this SQL statement, it doesn't work.
```
v_sql := 'select id, name from students';
for rec in v_sql loop
-- do anything
end loop;
```
Error: PLS-00103 | To address the issues with the second approach in your question, you need to use a
cursor variable and open the cursor and fetch the data explicitly. It is not
allowed to use cursor variables in a `FOR` loop:
```
declare
l_sql varchar2(123); -- variable that contains a query
l_c sys_refcursor; -- cursor variable(weak cursor).
l_res your_table%rowtype; -- variable containing fetching data
begin
l_sql := 'select * from your_table';
-- Open the cursor and fetching data explicitly
-- in the LOOP.
open l_c for l_sql;
loop
fetch l_c into l_res;
exit when l_c%notfound; -- Exit the loop if there is nothing to fetch.
-- process fetched data
end loop;
close l_c; -- close the cursor
end;
```
[Find out more](http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/static.htm#BABGEDAE) | Try this:
```
cursor v_sql is
select id, name from students;
for rec in v_sql
loop
-- do anything
end loop;
```
Then there is no need to `open`, `fetch` or `close` the cursor. | Cursor for loop in Oracle | [
"",
"sql",
"oracle",
"for-loop",
"plsql",
"database-cursor",
""
] |
I have an interesting situation where I'm trying to select everything in a SQL Server table, but I can only access the table through an old company API instead of SQL. This API asks for a table name, a field name, and a value. It then plugs them in rather straightforwardly, in this way:
```
select * from [TABLE_NAME_VAR] where [FIELD_NAME_VAR] = 'VALUE_VAR';
```
I'm not able to change the = sign to != or anything else, only those vars. I know this sounds awful, but I cannot change the API without going through a lot of hoops, and it's all I have to work with.
There are multiple columns in this table that are all numbers, all strings, and set to not null. Is there a value I can pass this API function that would return everything in the table? Perhaps a constant or special value that means it's a number, it's not a number, it's a string, *, it's not null, etc.? Any ideas? | You might try to pass this as VALUE_VAR:
```
1'' or ''''=''
```
If it's used as-is and executed as Dynamic SQL it should result in
```
SELECT * FROM tab WHERE fieldname = '1' or ''=''
``` | No, this isn't possible if the API is constructed correctly.
If this is some home grown thing it may not be, however. You could try entering `YourTable]--` as the value for `TABLE_NAME_VAR` such that when plugged into the query it ends up as
```
select * from [YourTable]--] where [FIELD_NAME_VAR] = 'VALUE_VAR';
```
If the `]` is either rejected or properly escaped (by doubling it up) this won't work however. | Selecting everything in a table... with a where statement | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
A while ago I came across an SQL statement that can be used on the iSeries/DB2 to extract data directly from database journals. It worked without having to use DSPJRN etc. It invoked a stored procedure and the results came directly back to the SQL session. Unfortunately I have since lost the info.
Does anyone know the stored procedure name and how to write the SQL statement? | Found it. It works via a UDTF - user defined table function - provided by IBM.
The format is as follows. Only the first 2 parameters are required. There is a decent blog about it here: <http://ibmsystemsmag.blogs.com/i_can/2010/11/>
select * from table (Display_Journal(
'journLib', 'Journal', -- Journal library and name --
'receiverLib', 'Receiver', -- Receiver library and name --
CAST(null as TIMESTAMP), -- Starting timestamp --
CAST(null as DECIMAL(21,0)), -- Starting sequence number --
'', -- Journal codes --
'', -- Journal entries --
'','', -- Object library, Object name --
'','', -- Object type, Object member --
'', -- User --
'', -- Job --
'' -- Program --
) ) as x | It's not exactly what you describe but the Tools/400 [EXPJRNE](http://www.tools400.de/English/Freeware/Utilities/utilities.html) freeware utility could be called as a stored procedure or could be wrapped in a custom stored procedure that would return the result set as well. | SQL DB2 journal entries directly on iSeries | [
"",
"sql",
"db2",
"ibm-midrange",
"journal",
""
] |
I have a table from which I need to delete duplicates. However, the table is designed so that each customer can have identical items. Here is a screenshot for example:

When I try to delete duplicates, I also delete Book Numbers that are the same for both customers.
The green area shows actual duplicates; however, the blue shows books that are not duplicates, because customers can borrow the same books.
How do I delete only where the row is the same for each customer? So the green area.
**Here is my code:** It does not work if both customers have the same Book Number.
```
WITH CTE AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY BookNumber ORDER BY BookNumber DESC) AS DUPS
FROM Store.Books
)
SELECT * FROM CTE WHERE DUPS > 1
``` | The query below should give you a list of all the duplicate BookNumbers with the CustomerIDs. All you need to do then is a simple DELETE statement with the results to remove the duplicate records:
```
SELECT count(bookNumber), booknumber, customerID FROM TableName GROUP BY booknumber, customerID having count(booknumber)> 1
``` | ```
WITH CTE AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY BookNumber, Customerid ORDER BY BookNumber) AS DUPS
FROM Store.Books
)
SELECT * FROM CTE WHERE DUPS > 1
-- if you want to delete, replace last line with this:
--DELETE FROM CTE WHERE DUPS > 1
```
I should mention that the ORDER BY BookNumber DESC wasn't necessary, so I removed the 'desc' part. | Finding and Deleting duplicate rows in table where multiple occurrences of records is allowed | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I need to get a large amount of data from a remote database. The idea is to do a sort of pagination, like this:
1 Select a first block of data
```
SELECT * FROM TABLE LIMIT 1,10000
```
2 Process that block
```
while(mysql_fetch_array()...){
//do something
}
```
3 Get the next block
and so on.
Assuming 10000 is an allowable block size for my system, let us suppose I have 30000 records to get: I perform 3 calls to the remote system.
But my question is: when executing a SELECT, is the result set transmitted and then stored somewhere locally, so that each fetch is local? Or is the result set stored on the remote system, with the records coming over one by one on each fetch? Because if the second scenario is the real one, I don't perform 3 calls but 30000 calls, and that is not what I want.
I hope I explained it well; thanks for your help.
Bye | First, it's highly recommended to utilize MySQLi or PDO instead of the deprecated `mysql_*` functions:
<http://php.net/manual/en/mysqlinfo.api.choosing.php>
By default with the mysql and mysqli extensions, the entire result set is loaded into PHP's memory when executing the query, but this can be changed to load results on demand as rows are retrieved if needed or desired.
* mysql
+ [`mysql_query()`](http://www.php.net/manual/en/function.mysql-query.php) buffers the entire result set in PHP's memory
+ [`mysql_unbuffered_query()`](http://www.php.net/manual/en/function.mysql-unbuffered-query.php) only retrieves data from the database as rows are requested
* mysqli
+ [`mysqli::query()`](http://www.php.net/manual/en/mysqli.query.php)
The `$resultmode` parameter determines behaviour.
The default value of `MYSQLI_STORE_RESULT` causes the entire result set to be transferred to PHP's memory, but using `MYSQLI_USE_RESULT` will cause the rows to be retrieved as requested.
PDO by default will load data as needed when using [`PDO::query()`](http://www.php.net/manual/en/pdo.query.php) or [`PDO::prepare()`](http://www.php.net/manual/en/pdo.prepare.php) to execute the query and retrieving results with [`PDO::fetch()`](http://www.php.net/manual/en/pdostatement.fetch.php).
To retrieve all data from the result set into a PHP array, you can use [`PDO::fetchAll()`](http://www.php.net/manual/en/pdostatement.fetchall.php)
Prepared statements can also use the [`PDO::MYSQL_ATTR_USE_BUFFERED_QUERY`](http://www.php.net/manual/en/ref.pdo-mysql.php#pdo.constants.mysql-attr-use-buffered-query) constant, though `PDO::fetchAll()` is recommended.
---
It's probably best to stick with the default behaviour and benchmark any changes to determine if they actually have any positive results; the overhead of transferring results individually may be minor, and other factors may be more important in determining the optimal method. | You would be performing 3 calls, not 30,000. That's for sure.
Each 10,000-result batch is produced on the server (by performing each of the 3 queries). Your while loop iterates through a set of data that has already been returned by MySQL (that's why you don't make 30,000 queries).
That is assuming you would have something like this:
```
$res = mysql_query(...);
while ($row = mysql_fetch_array($res)) {
//do something with $row
}
```
Anything you do inside the `while` loop by making use of `$row` has to do with already-fetched data from your initial query.
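The same batching behaviour can be sketched with Python's DB-API (illustrative only, with SQLite standing in for the remote MySQL server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(30000)])

cur = conn.execute("SELECT n FROM t")
batches = 0
while True:
    block = cur.fetchmany(10000)  # pull one 10000-row block at a time
    if not block:
        break
    batches += 1
print(batches)  # 3
```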
Hope this answers your question. | SQL - When data are transferred | [
"",
"sql",
"fetch",
""
] |
I am trying to get a running balance using SQL Server 2012
Here is what I got so far...
```
DECLARE @Transactions TABLE
(
Amount decimal (18,2),
TransactionId uniqueidentifier,
AccountId uniqueidentifier,
TransactionDate date
)
DECLARE @AccountId uniqueidentifier = NEWID()
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT 3224.99, NEWID(), @AccountId, '2013-06-02'
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT 18.99, NEWID(), NEWID(), '2013-06-14'
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT -8.99, NEWID(), @AccountId, '2013-06-14'
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT -6.99, NEWID(), @AccountId, '2013-06-14'
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT -22.14, NEWID(), @AccountId, '2014-11-09'
INSERT INTO @Transactions (Amount, TransactionId, AccountId, TransactionDate)
SELECT -84.99, NEWID(), @AccountId, '2013-06-09'
SELECT SUM(Amount) OVER (ORDER BY TransactionDate, TransactionId) as [RunningBalance],
Amount
FROM @Transactions
WHERE AccountId = @AccountId
ORDER BY TransactionDate DESC
```
Results are
```
RunningBalance Amount
--------------------------------------- ---------------------------------------
3101.88 -22.14
3133.01 -6.99
3124.02 -8.99
3140.00 -84.99
3224.99 3224.99
```
My goal is to have RunningBalance show each balance; even if it's the same day, each row should have its own balance.
As you can see, the 2nd row is not coming up correctly, and I believe it is because I also have a 2nd AccountId that is conflicting with it, but I assumed the WHERE statement would remove it.
I can remove the ORDER BY; however, I want my list newest-transaction-first, as the final query will have paging. I have tried something like this, but the balance is off:
```
SELECT * FROM (
SELECT SUM(Amount) OVER (PARTITION BY AccountId ORDER BY TransactionDate, TransactionId) as [RunningBalance],
Amount, TransactionDate
FROM @Transactions
WHERE AccountId = @AccountId
) AS Results
ORDER BY TransactionDate DESC
RunningBalance Amount TransactionDate
--------------------------------------- --------------------------------------- ---------------
3101.88 -22.14 2014-11-09
3131.01 -8.99 2013-06-14
3124.02 -6.99 2013-06-14
3140.00 -84.99 2013-06-09
3224.99 3224.99 2013-06-02
```
I'm not too sure what the problem is... | Instead of ordering by TransactionId (a meaningless GUID value that has no bearing on when the row was inserted), you need to determine proper order in some other way. Since you have a CreatedOn column that stores the date/time the row was inserted, you should add that to your order by to generate the correct sequence. | I think what you need to use is ROWS UNBOUNDED PRECEDING.
```
SELECT TransactionDate
,Amount
,SUM(Amount) OVER (
ORDER BY TransactionDate DESC ROWS UNBOUNDED PRECEDING
) AS [RunningBalance]
FROM @Transactions
WHERE AccountId = @AccountId
``` | Running Balance in SQL Server 2012 | [
"",
"sql",
"sql-server-2012",
"cumulative-sum",
""
] |
I need to query a table that has a "gender" column, like so:
```
| id | gender | name |
-------------------------
| 1 | M | Michael |
-------------------------
| 2 | F | Hanna |
-------------------------
| 3 | M | Louie |
-------------------------
```
And I need to extract the first N results which have, for example 80% males and 20% females. So, if I needed 1000 results I would want to retrieve 800 males and 200 females.
1. Is it possible to do it in a single query? How?
2. If I don't have enough records (imagine I have only 700 males on the example above) is it possible to select 700 / 300 automatically? | Basically, you want to get as many 'M' as you can, but not more than your percentage and then get enough 'F' so you have total 1000 rows:
```
with cte_m as (
select * from Table1 where gender = 'M' limit (1000 * 0.8)
), cte as (
select *, 0 as ord from cte_m
union all
select *, 1 as ord from Table1 where gender = 'F'
order by ord
limit 1000
)
select id, gender, name
from cte
```
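To sanity-check the cap-then-fill idea outside Postgres, here is the same logic step by step in Python with SQLite, on hypothetical data (7 males, 8 females, limit 10 at 80/20); it also shows the automatic fallback asked about in question 2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, gender TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [(i, "M" if i < 7 else "F") for i in range(15)])

limit, male_share = 10, 0.8
# take up to 80% males; if fewer exist, the female fill grows automatically
males = conn.execute("SELECT id FROM people WHERE gender = 'M' LIMIT ?",
                     (int(limit * male_share),)).fetchall()
females = conn.execute("SELECT id FROM people WHERE gender = 'F' LIMIT ?",
                       (limit - len(males),)).fetchall()
print(len(males), len(females))  # 7 3
```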
[**`sql fiddle demo`**](http://sqlfiddle.com/#!12/90eff/1) | How about the following, which assumes you are supplying a row count ("lmt"), and floats for the M/F distribution:
```
create table gen (
id integer,
gender text,
name text
);
-- inserts 75% males and 25% females into the source table ("gen")
insert into gen select n, case when mod(n,5) = 0 then 'F' else 'M' end, (case when mod(n,5) = 0 then 'F' else 'M' end)||'_'||n::text
from generate_series(1,20000) n
-- extract 80/20 M vs F
with conf as (select 1000 as lmt, .80::FLOAT as mpct, .20::FLOAT as fpct),
g as (select id,gender,name,row_number() over (partition by gender order by gender) rn from gen)
select *
from g
where (gender = 'M' and rn <= (select lmt*mpct from conf))
or (gender = 'F' and rn <= (select lmt*fpct from conf));
-- Same query, to show the percent M vs F:
with conf as (select 1000 as lmt, .80::FLOAT as mpct, .20::FLOAT as fpct),
g as (select id,gender,name,row_number() over (partition by gender order by gender) rn from gen)
select gender,count(*)
from (
select *
from g
where (gender = 'M' and rn <= (select lmt*mpct from conf))
or (gender = 'F' and rn <= (select lmt*fpct from conf))
) y
group by gender
``` | How to select different percentages of data based in a column value? | [
"",
"sql",
"postgresql",
""
] |
I have a data table (not very well structured) in which I have the following
```
ClientID | Parameter | Value
111..........Street..........Evergreen
111..........Zip................75244
111..........Country.........USA
222..........Street..........Evergreen
222..........Zip................75244
222..........Country.........USA
333..........Street..........Evergreen
333..........Zip................75240
333..........Country.........USA
444..........Street..........Evergreen
444..........Zip................75240
444..........Country.........USA
555..........Street..........Evergreen
555..........Zip................75240
555..........Country.........USA
666..........Street..........Some Street
666..........Zip................75244
666..........Country.........USA
```
For this I want to select all those ClientIDs that have Street = Evergreen BUT also Zip = 75244. I have over 700K rows, so exporting everything would be a big issue.
My idea was:
```
SELECT ClientID
from (select ClientID from table1 where Value = 'evergreen')
Where Zip = '75244'
```
But it won't give me accurate results. In this case I would like to get the values for ClientIDs 111 and 222, because they match the criteria I'm looking for: Street = Evergreen and Zip = 75244.
Is there a way to do this? | Try this:
```
select clientid
from table1
where (parameter='Street' and value='Evergreen')
and clientid in (select clientid from table1 where parameter='Zip' and value='75244')
``` | ```
Select e.ClientId from MyTable e
inner join MyTable z On e.clientId = z.ClientID
Where e.value = 'Evergreen' and e.Parameter = 'Street'
and z.parameter = 'Zip' and z.Value = '75244'
```
Just use an alias with a join so you can "use your table twice" | SQL Server Query for values within same Table | [
"",
"sql",
"sql-server",
"join",
""
] |
I have a big SQL query like this:
```
Select Distinct [Student].[Class].roll_nbr as [PERIOD-NBR],[Student].[Class].ent_nbr as [CLASS-NBR],
IsNull(Stuff((SELECT CAST(', ' AS Varchar(MAX)) + CAST([Student].[Subject].ent_nbr AS Varchar(MAX))
FROM [Student].[Subject]
WHERE [Student].[Subject].roll_nbr = [Student].[Class].roll_nbr
and ([Student].[Subject].class_nbr = [Student].[Class].roll_assignment_nbr
or ([Student].[Class].roll_assignment_nbr = '0'
and [Student].[Subject].class_nbr = [Student].[School].bus_stop) )
AND [Student].[Subject].ent_nbr <> ''
FOR XML PATH ('')), 1, 2, ''), '')
AS [OLD-STUDENT-NBR.OLD],IsNull(Stuff((SELECT CAST(', ' AS Varchar(MAX)) + ....
```
It goes on and on; it is a page-long query which builds a report. The problem I am having is that some variable is erring out with the message:
```
Error converting data type varchar to numeric.
```
This is a very generic error and does not tell me which variable. Is there any way to pinpoint which variable is erring out in SQL 2008? | Comment out half the columns; if the error continues, comment out another half. If the error stops, it's in the section you just commented out. Rinse, repeat. | When faced with this type of error in the past, I've narrowed it down by commenting out portions of the query, seeing if it executes, then uncommenting portions of the query until it points right to the error. | Pinpoint a particular variable error in sql 2008 script | [
"",
"sql",
"sql-server",
"sql-server-2008",
""
] |
I have a drop down wherein the values are concatenated from two columns of a table. Now I want to use the value of that drop down as a parameter to another stored procedure. Is this possible? How do I do that?
This is my code
```
CREATE PROCEDURE [dbo].[Reassign]
@recordumber int
@employeeName
AS
BEGIN
UPDATE QR
SET QR.QAMemberID = @QAMemberID
FROM Roster AS QR
INNER JOIN TeamMaster S TM
ON QR.QAMemberID = TM.QAMemberID
WHERE QR.recordNumber = @recordumber
and qr.firstname, qr.lastname = @employeeName
END
```
and `qr.firstname`, `qr.lastname = @employeeName` I know this last piece of code is wrong... how can I do this the right way? Thank you... | Do not concatenate the values together in your code, but rather use a parameter for `firstName` and a parameter for `lastName`, like this:
```
CREATE PROCEDURE [dbo].[Reassign]
@recordumber int,
@firstName nvarchar(100), -- parameter types assumed; match your column types
@lastName nvarchar(100)
AS
BEGIN
UPDATE QR
SET QR.QAMemberID = @QAMemberID
FROM Roster AS QR
INNER JOIN TeamMaster S TM
ON QR.QAMemberID = TM.QAMemberID
WHERE QR.recordNumber = @recordumber
and qr.firstname = @firstName and qr.lastname = @lastName
END
``` | To use the value from your drop down you can use this in the code behind
```
if (myDropDown.SelectedIndex != 0)
{
var myData = myDropDown.SelectedValue;
// access Sproc and you can then pass myData
}
``` | How to pass concatenated value of drop down to stored proc as parameter | [
"",
"asp.net",
"sql",
"drop-down-menu",
""
] |
Is it possible to use a subquery in an INSERT statement?
```
INSERT INTO table (age, p_id)
VALUES('22', '(SELECT id FROM people WHERE name='Bob')')
```
Is there a working form of what I'm attempting?
Using MySQL database | ```
INSERT INTO MyTable (age, p_id)
SELECT '22', ( select id from people where name = 'bob')
```
or you could do this
```
INSERT INTO MyTable (age, p_id)
SELECT '22', id
FROM people
WHERE name = 'bob'
``` | should be something like
```
INSERT INTO table (age, p_id)
SELECT '22', people.id
FROM people
WHERE people.name = 'bob'
``` | SQL subquery in INSERT? | [
"",
"mysql",
"sql",
"insert",
"subquery",
""
] |
I am using dblink to move certain data between databases. Everything is safe and sound, but I am wondering if there is a more convenient way to define the column definition list of a dblink query result. I can do something like this:
```
SELECT *
FROM dblink('dbname=remote', 'select * from test')
AS t1(id integer, data text);
```
The tables I'm interacting with have the same schema definition in both databases (remote & local). I was thinking of something like:
```
SELECT *
FROM dblink('dbname=remote', 'select * from test')
AS t1 LIKE public.test;
```
Or:
```
SELECT *
FROM dblink('dbname=remote', 'select * from test')
AS t1::public.test;
```
The column definition list tends to become quite long. Is there something I may have overlooked?
EDIT:
As this has been a problem for me before, I created a small function as a work-around.
```
CREATE OR REPLACE FUNCTION dblink_star_func(_conn text, _schema_name text, _table_name text)
RETURNS text
LANGUAGE PLPGSQL
VOLATILE STRICT
AS $function$
DECLARE
_dblink_schema text;
_cols text;
_q text;
_func_name text := format('star_%s', $3);
_func text;
BEGIN
SELECT nspname INTO _dblink_schema
FROM pg_namespace n, pg_extension e
WHERE e.extname = 'dblink' AND e.extnamespace = n.oid;
SELECT array_to_string(array_agg(column_name || ' ' || udt_name), ', ') INTO _cols
FROM information_schema.columns
WHERE table_schema = $2 AND table_name = $3;
_q := format('SELECT * FROM %I.dblink(%L, %L) AS remote (%s)',
_dblink_schema,
_conn,
format('SELECT * FROM %I.%I', $2, $3),
_cols
);
_func := $_func$
CREATE OR REPLACE FUNCTION %s()
RETURNS SETOF %I.%I
LANGUAGE SQL
VOLATILE STRICT
AS $$ %s; $$
$_func$;
EXECUTE format(_func, _func_name, $2, $3, _q);
RETURN _func_name;
END;
$function$;
```
This function creates and returns a function that wraps the dblink call. It's certainly not meant for heavy lifting, but for convenience. It would be nice if it turns out it's not necessary at all.
```
> select dblink_star_func('dbname=ben', 'public', 'test');
┌──────────────────┐
│ dblink_star_func │
├──────────────────┤
│ star_test │
└──────────────────┘
(1 row)
> select * from star_test() where data = 'success';
┌────┬─────────┐
│ id │ data │
├────┼─────────┤
│ 1 │ success │
└────┴─────────┘
(1 row)
``` | You might need to make sure that your types are always in sync but this should work:
```
SELECT (t1::test).*
FROM dblink('dbname=remote', 'select * from test') AS t1;
```
The key is that often you need parentheses to ensure that the parser knows you are dealing with tuples.
For example this works for me:
```
CREATE TABLE test (id int, test bool);
select (t1::test).* from (select 1, true) t1;
```
But this throws a syntax error:
```
select t1::test.* from (select 1, true) t1;
``` | Try something like this:
```
select (rec).* from dblink('dbname=...','select myalias from foreign_table
myalias') t1 (rec local_type)
```
Example (get table stats from another database):
```
select (rec).* from dblink('dbname=foreignDb','select t1 from
pg_stat_all_tables t1') t2 (rec pg_stat_all_tables)
``` | Specify dblink column definition list from a local existing type | [
"",
"sql",
"postgresql",
"dblink",
""
] |
I have a table that looks like this:
```
Column A | Column B | Counter
---------------------------------------------
A | B | 53
B | C | 23
A | D | 11
C | B | 22
```
I need to remove the last row because it's cyclic with the second row. I can't seem to figure out how to do it.
**EDIT**
There is an indexed date field. This is for a Sankey diagram. The data in the sample table is actually the result of a query. The underlying table has:
```
date | source node | target node | path count
```
The query to build the table is:
```
SELECT source_node, target_node, COUNT(1)
FROM sankey_table
WHERE TO_CHAR(data_date, 'yyyy-mm-dd')='2013-08-19'
GROUP BY source_node, target_node
```
In the sample, the last row (C to B) is going backwards, and I need to ignore it or the Sankey won't display. I need to show only forward paths. | If you can adjust how your table is populated, you can change the query you're using to only retrieve the values for the first direction (for that date) in the first place, with a little bit of analytic manipulation:
```
SELECT source_node, target_node, counter FROM (
SELECT source_node,
target_node,
COUNT(*) OVER (PARTITION BY source_node, target_node) AS counter,
RANK () OVER (PARTITION BY GREATEST(source_node, target_node),
LEAST(source_node, target_node), TRUNC(data_date)
ORDER BY data_date) AS rnk
FROM sankey_table
WHERE TO_CHAR(data_date, 'yyyy-mm-dd')='2013-08-19'
)
WHERE rnk = 1;
```
The inner query gets the same data you collect now but adds a ranking column, which will be 1 for the first row for any source/target pair in any order for a given day. The outer query then just ignores everything else.
This might be a candidate for a materialised view if you're truncating and repopulating it daily.
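The GREATEST/LEAST pairing trick itself is easy to check in miniature, for example in Python with SQLite (where the multi-argument scalar equivalents are `max`/`min`), on the question's sample edges:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (s TEXT, t TEXT)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [("A", "B"), ("B", "C"), ("A", "D"), ("C", "B")])

# symmetric rows B->C and C->B collapse into the same (hi, lo) group
rows = conn.execute("""SELECT max(s, t) AS hi, min(s, t) AS lo, count(*) AS n
                       FROM edges GROUP BY hi, lo ORDER BY hi, lo""").fetchall()
print(rows)  # [('B', 'A', 1), ('C', 'B', 2), ('D', 'A', 1)]
```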
If you can't change your intermediate table but can still see the underlying table you could join back to it using the same kind of idea; assuming the table you're querying from is called `sankey_agg_table`:
```
SELECT sat.source_node, sat.target_node, sat.counter
FROM sankey_agg_table sat
JOIN (SELECT source_node, target_node,
RANK () OVER (PARTITION BY GREATEST(source_node, target_node),
LEAST(source_node, target_node), TRUNC(data_date)
ORDER BY data_date) AS rnk
FROM sankey_table) st
ON st.source_node = sat.source_node
AND st.target_node = sat.target_node
AND st.rnk = 1;
```
[SQL Fiddle demos](http://sqlfiddle.com/#!4/73525/4). | Removing all edges from your graph where the tuple (source\_node, target\_node) is not ordered alphabetically and the symmetric row exists should give you what you want:
```
DELETE
FROM sankey_table t1
WHERE source_node > target_node
AND EXISTS (
SELECT NULL from sankey_table t2
WHERE t2.source_node = t1.target_node
AND t2.target_node = t1.source_node)
```
If you don't want to DELETE them, just use this WHERE clause in your query for generating the input for the diagram. | SQL Query to remove cyclic redundancy | [
"",
"sql",
"oracle",
"sankey-diagram",
""
] |
I am new to Oracle. I have an Oracle table with three columns: `serialno`, `item_category` and `item_status`. In the third column the rows have values of `serviceable`, `under_repair` or `condemned`.
I want to run the query using count to show how many are serviceable, how many are under repair, how many are condemned against each item category.
I would like to run something like:
```
select item_category
, count(......) "total"
, count (.....) "serviceable"
, count(.....)"under_repair"
, count(....) "condemned"
from my_table
group by item_category ......
```
I am unable to run the inner query inside the count.
Here's what I'd like the result set to look like:
```
item_category total serviceable under repair condemned
============= ===== ============ ============ ===========
chair 18 10 5 3
table 12 6 3 3
``` | You can use either a CASE or a DECODE statement inside the COUNT function.
```
SELECT item_category,
COUNT (*) total,
COUNT (DECODE (item_status, 'serviceable', 1)) AS serviceable,
COUNT (DECODE (item_status, 'under_repair', 1)) AS under_repair,
COUNT (DECODE (item_status, 'condemned', 1)) AS condemned
FROM mytable
GROUP BY item_category;
```
Output:
```
ITEM_CATEGORY TOTAL SERVICEABLE UNDER_REPAIR CONDEMNED
----------------------------------------------------------------
chair 5 1 2 2
table 5 3 1 1
``` | This is a very basic "group by" query. If you search for that you will find [plenty](http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions003.htm) [of](http://psoug.org/reference/group_by.html) [documentation](http://www.techonthenet.com/sql/group_by.php) on how it is used.
For your specific case, you want:
```
select item_category, item_status, count(*)
from <your table>
group by item_category, item_status;
```
You'll get something like this:
```
item_category item_status count(*)
======================================
Chair under_repair 7
Chair condemned 16
Table under_repair 3
```
Change the column ordering as needed for your purpose | Different value counts on same column | [
"",
"sql",
"oracle",
"pivot-table",
""
] |
I am using SSRS 2005 to create a report to show sums of hours for different categories (Reg Hours, Overtime, etc.)
```
SELECT OGL.PACostCenter, vpt.LL6, sum((vpt.timeinseconds*1.0)/3600) [Hours]
FROM totals as vpt
INNER JOIN OracleLookup OGL on vpt.LL6 = OGL.OracleCostCenter COLLATE SQL_Latin1_General_CP1_CI_AI
WHERE vpt.DATE BETWEEN @StartDate AND @EndDate
AND vpt.PAYCODENAME in ('519-H-Overtime 1.0',
'519-H-Holiday OT 1.5',
'519-H-Overtime 1.5',
'519-H-Overtime 2.0',
'519-H-Regular')
GROUP BY OGL.PayrollAccount, vpt.LL6
ORDER BY OGL.PayrollAccount, vpt.LL6
```
I have the total hours fine, but also want to be able to break it down and have columns for only overtime, and only regular hours as well as the total for everything. Is there an easy/clean way to do this that anyone knows of? I tried using different data sets but it wasn't doing what I expected, and something like a subquery seemed like it would be really messy and redundant. | Yes, you can use conditional summation:
```
SELECT OGL.PACostCenter, vpt.LL6, sum((vpt.timeinseconds*1.0)/3600) [Hours],
sum(case when vpt.PAYCODENAME in ('519-H-Overtime 1.0', '519-H-Holiday OT 1.5',
'519-H-Overtime 1.5', '519-H-Overtime 2.0'
)
then (vpt.timeinseconds*1.0)/3600
else 0
end) as OvertimeHours,
sum(case when vpt.PAYCODENAME in ('519-H-Regular')
then (vpt.timeinseconds*1.0)/3600
else 0
end) as RegularHours
FROM totals as vpt
INNER JOIN OracleLookup OGL on vpt.LL6 = OGL.OracleCostCenter COLLATE SQL_Latin1_General_CP1_CI_AI
WHERE vpt.DATE BETWEEN @StartDate AND @EndDate
AND vpt.PAYCODENAME in ('519-H-Overtime 1.0',
'519-H-Holiday OT 1.5',
'519-H-Overtime 1.5',
'519-H-Overtime 2.0',
'519-H-Regular')
GROUP BY OGL.PayrollAccount, vpt.LL6
ORDER BY OGL.PayrollAccount, vpt.LL6;
``` | Assuming you have control over that Dataset query, you should be able to do something like:
```
SELECT OGL.PACostCenter, vpt.LL6
, sum((vpt.timeinseconds*1.0)/3600) [Hours] -- your initial total
, sum(case when vpt.PAYCODENAME = '519-H-Overtime 1.0'
then vpt.timeinseconds*1.0 else null end) / 3600 [OT1]
, sum(case when vpt.PAYCODENAME = '519-H-Overtime 1.5'
then vpt.timeinseconds*1.0 else null end) / 3600 [OT1pt5]
-- further SUM/CASE as required
FROM totals as vpt
INNER JOIN OracleLookup OGL on vpt.LL6 = OGL.OracleCostCenter COLLATE SQL_Latin1_General_CP1_CI_AI
WHERE vpt.DATE BETWEEN @StartDate AND @EndDate
AND vpt.PAYCODENAME in ('519-H-Overtime 1.0',
'519-H-Holiday OT 1.5',
'519-H-Overtime 1.5',
'519-H-Overtime 2.0',
'519-H-Regular')
GROUP BY OGL.PayrollAccount, vpt.LL6
ORDER BY OGL.PayrollAccount, vpt.LL6
```
i.e. Apply `SUM` to `CASE` statements that extract the groups you require. | Multiple row totals in SSRS 2005 | [
"sql",
"t-sql",
"reporting-services",
"reportingservices-2005"
] |
I have a table `WCA`:
```
ID TYPE ..
1 *1*3*5*
2 *1*5*
..
```
Now i want move data to new table `WCA_TYPE`:
```
ID WCA_ID TYPE
1 1 1
2 1 3
3 1 5
4 2 1
5 2 5
..
```
ID here is an auto-increment column.
How do I write SQL in MS SQL Server to split the type values from the old table and insert them into the new table? | ```
WITH CTE AS
(
select id,Type,0 as startPos,
CHARINDEX('*',TYPE)-1 as endPos from WCA
UNION ALL
select id,Type,endPos+2 as startPos,
CHARINDEX('*',TYPE,endPos+2)-1 as endPos from CTE
where CHARINDEX('*',TYPE,endPos+2)>0
)
INSERT INTO WCA_TYPE (WCA_ID, TYPE)
select ID,
CASE WHEN EndPos>0
THEN
Substring(Type,StartPos,EndPos-StartPos+1)
else
Type
end as Type
from CTE
where EndPos<>0
```
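If you want to sanity-check the splitting logic outside SQL Server, here is a rough SQLite equivalent via Python's `sqlite3` (SQLite has no `CHARINDEX`, so this sketch uses `instr`/`substr` in a recursive CTE; schema and data mirror the question, and it assumes every value ends with the `*` delimiter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wca (id INTEGER PRIMARY KEY, type TEXT);
INSERT INTO wca VALUES (1, '*1*3*5*'), (2, '*1*5*');
CREATE TABLE wca_type (id INTEGER PRIMARY KEY AUTOINCREMENT, wca_id INT, type INT);
""")
# Recursively peel off everything up to the next '*' until the string is consumed.
con.execute("""
WITH RECURSIVE split(wca_id, item, rest) AS (
  SELECT id, '', type FROM wca
  UNION ALL
  SELECT wca_id,
         substr(rest, 1, instr(rest, '*') - 1),
         substr(rest, instr(rest, '*') + 1)
  FROM split
  WHERE rest <> ''
)
INSERT INTO wca_type (wca_id, type)
SELECT wca_id, CAST(item AS INTEGER) FROM split WHERE item <> ''
""")
rows = con.execute("SELECT wca_id, type FROM wca_type").fetchall()
print(sorted(rows))  # [(1, 1), (1, 3), (1, 5), (2, 1), (2, 5)]
```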
[SQLFiddle Select demo](http://sqlfiddle.com/#!3/82266/1) | You could use a recursive cte - e.g.
```
CREATE TABLE #WCA_TYPE
(ID INT IDENTITY(1, 1) PRIMARY KEY
,WCA_ID INT
,TYPE INT);
WITH sampleData(WCA_ID, TYPE) AS
(
SELECT
*
FROM ( VALUES ('1', '1*3*5')
,('2', '1*5')
) nTab(nCol1, nCol2)
)
,rep(WCA_ID, item, delim) AS
(
SELECT
WCA_ID
,TYPE item
,'*' delim
FROM sampleData
UNION ALL
SELECT
WCA_ID
,LEFT(item, CHARINDEX(delim, item, 1) - 1) item
,delim
FROM rep
WHERE (CHARINDEX(delim, item, 1) > 0)
UNION ALL
SELECT
WCA_ID
,RIGHT(item, LEN(item) - CHARINDEX(delim, item, 1)) item
,delim
FROM rep
WHERE (CHARINDEX(delim, item, 1) > 0)
)
INSERT #WCA_TYPE
(TYPE
,WCA_ID)
SELECT
item
,WCA_ID
FROM rep
WHERE (CHARINDEX(delim, item, 1) = 0)
ORDER BY WCA_ID
OPTION (MAXRECURSION 0);
SELECT * FROM #WCA_TYPE;
``` | SQL insert data to other table after split string | [
"sql",
"sql-server"
] |
Can I declare a column name containing spaces in a SQL Server table?
```
create table test(
record name, float, not null
.....
```
The above query, when executed, gives me an error. Is there any way to declare the column as
`variable name` with a space? | Yes, escape these names using `[]`:
```
[record name] ....
```
These names are called [`Delimited identifiers`](http://technet.microsoft.com/en-us/library/ms175874.aspx):
> Are enclosed in double quotation marks (") or brackets ([ ]).
But it is not recommended, use legal names instead or regular identifiers. | Try using square brackets:
```
create table test(
[record name] float not null)
``` | Variable names in SQL Server | [
"sql",
"sql-server-2008"
] |
I have a SQL query string that is like this:
```
DECLARE @sql varchar(max)
SET @sql = ' INSERT INTO ' + @tempTable1 +
' SELECT 0 as Type1, 0 as Type2, ' +
'''' + @name + ''' as CompanyName ' +
' FROM #tempTable2 tt2'
```
The query runs fine except for two names that happen to contain a single quote (ex: Pete's Corner). When either one of these names becomes part of the query, it breaks the query string. I thought the easiest thing to do would be to replace the single quote like this: `replace(@name,'''','')`, but it doesn't work because I'm already in a string, so it affects the rest of the statement. Altering the table itself is not an option, unfortunately.
How can I replace or remove these single quotes?
Addition: I apologize, I did not include the part where @name is actually being populated from another database table by a join so setting the value of @name before the string is created I think would be difficult for me. | I think this should do it:
```
DECLARE @sql varchar(max)
SET @sql = ' INSERT INTO ' + @tempTable1 +
' SELECT 0 as Type1, 0 as Type2, ' + ''''+
replace( @name ,'''','''''')+''''+' as CompanyName
FROM #tempTable2 tt2'
``` | Why do you need to do this at all? You should be passing strong parameters to `sp_executesql` instead of munging all of your parameters into a single string and using `EXEC()`. [More info on that here](https://sqlblog.org/2011/09/17/bad-habits-to-kick-using-exec-instead-of-sp_executesql).
```
DECLARE @sql NVARCHAR(MAX), @name NVARCHAR(32);
SET @name = 'Pete''s Corner';
SET @sql = 'INSERT INTO ' + @tempTable1 +
' SELECT 0 as Type1, 0 as Type2, @name as CompanyName ' +
' FROM #tempTable2 tt2';
EXEC sp_executesql @sql, N'@name NVARCHAR(32)', @name;
```
I presume the `@name` parameter actually gets populated from elsewhere, and if using proper parameterization you shouldn't have to deal with escaping the `'`.
Now I'm not quite sure what `@tempTable1` is supposed to represent, or if you can access `#tempTable2` from this scope, but whenever you find yourself running a replace that requires `''''` or `''''''` (or both), you should ask yourself if maybe there's a better way. | SQL Dynamic Query String Breaks When Variable Contains Single Quote | [
"sql",
"sql-server-2005"
] |
Is it a good idea to order by ID instead of a date field, when I want to get the latest records? It should be faster because the ID is the primary key, but would I always get the latest records? | If you order by ID and your ID is your identity column it should be fine because you will be ordering from latest to earliest. Just remember to put the results in DESC order. | Depends on whether the latest one have a bigger Id than earlier ones doesn't it.
It's down to pragmatism this one, if you have complete control over what the id is and you are prepared to take the hit if some future change forced you to change the Id (say from int to string) then I wouldn't criticise you for it.
If you are nervous about being able to rely on it, abstract it out e.g a GetItemsInOrder, method, Stored proc or View type thingy
That way if the silicon deity refused to smile up on you, you can add a Created\_Order column, populate it from ID, amend the thingy to use it and then go ahead with what would be a breaking change. | ORDER by ID to get latest records? | [
"sql",
"sqlite",
"sorting",
"sql-order-by"
] |
I have a stored procedure named **CreateUpdateNewOrder**, and I call another SP in it named **CreateClinicalDocument**. Now I want to see exactly what values my second SP is getting for execution. I can run the SQL Profiler tool to see what input values **CreateUpdateNewOrder** is getting, but I can't think of any way of getting input values for the inner SP call other than printing them in the query. Does anyone have a better way to do it? | You can run SQL Profiler and select the SPs template instead of the default one. This will show you every statement executed, even if it's inside a stored procedure. To use the SPs template you need to do the following:
1. File -> New trace
2. In the dialog that opens go to combo "Use this template" and select TSQL\_SPs.
3. Now continue setting up your profiling session as you would normally.
Once you start the trace you will notice it's much more verbose. It will break down each procedure and will show what's executed line by line. Please let me know if you would need any other details. | It all depends on how you need to access and use the information, but it could be useful to log the values to a table. You could also try Debug in SSMS and set appropriate break points. | Can we get a stored procedure call using Sql server profiler within another SP | [
"sql",
"sql-server"
] |
I've got a database table with logs which has 3 columns:
```
date | status | projectId
```
status can be either 0 or 1, primary key is on date and projectID
I'm trying to find out how many times a projectID had status 0 since the last time it was 1.
so if there would be only one projectId
```
date | status | projectId
1 0 3
2 0 3
3 1 3
4 1 3
5 0 3
6 0 3
```
this should return 2 (row 5 and 6 are 0 and row 4 is 1)
The thing that makes it hard for me is that I have to maintain the order of date. What would be a good way to tackle such problems, and this one in particular? | Here is how you would do it for one project:
```
select count(*)
from logs l
where status = 0 and
projectid = 3 and
date > (select max(date) from logs where projectid = 3 and status = 1)
```
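As a quick sanity check of that single-project query, here is a sketch in Python's `sqlite3` using a simplified `logs` table with the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE logs (date INT, status INT, projectId INT, PRIMARY KEY (date, projectId));
INSERT INTO logs VALUES (1,0,3),(2,0,3),(3,1,3),(4,1,3),(5,0,3),(6,0,3);
""")
# Count status-0 rows that come after the latest status-1 row for project 3
(n,) = con.execute("""
SELECT COUNT(*)
FROM logs
WHERE status = 0
  AND projectId = 3
  AND date > (SELECT MAX(date) FROM logs WHERE projectId = 3 AND status = 1)
""").fetchone()
print(n)  # 2
```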
Here is how you would do it for all projects:
```
select l.projectId, count(l1.projectId)
from logs l left outer join
(select projectId, max(date) as maxdate
from logs
where status = 1
group by projectId
) l1
on l.projectId = l1.projectId and
l.date > l1.date and
l.status = 0
group by l.projectId;
``` | Here's one way to get the result for all project\_id:
```
SELECT m.project_id
, COUNT(1) AS mycount
FROM ( SELECT l.project_id
, MAX(l.date) AS latest_date
FROM mytable l
WHERE l.status = 1
) m
JOIN mytable t
ON t.project_id = m.project_id
AND t.date > m.latest_date
AND t.status = 0
```
If you need only a subset of project\_id, the predicate should be added to the WHERE clause in the inline view query:
```
WHERE l.status = 1
AND l.project_id IN (3,5,7)
```
**EDIT**
That query does not return a row if there is no status=0 row after the latest status=1 row. To return a zero count, this could be done with an outer join.
```
SELECT m.project_id
, COUNT(t.status) AS mycount
FROM ( SELECT l.project_id
, MAX(l.date) AS latest_date
FROM mytable l
WHERE l.status = 1
AND l.project_id IN (3)
) m
LEFT
JOIN mytable t
ON t.project_id = m.project_id
AND t.date > m.latest_date
AND t.status = 0
```
For optimum performance, the statement could make use of an index with leading columns of `project_id` and `date` (in that order) and including the `status` column, e.g.
```
ON mytable (`project_id`,`date`,`status`)
``` | Select last N rows following a condition | [
"mysql",
"sql"
] |
I have a table which has time-stamped action records for an employee. Each record has a department and a work title. Now I want to extract what changes have occurred when the employee changed departments and/or work title.
I am using SQL Server 2008.
Assuming our table holds records for a simple employee, the data could look like this:
```
Time | Department | WorkTitle
t1 Dep1 Wt1 <---
t2 Dep1 Wt1
t3 Dep2 Wt2 <---
t4 Dep2 Wt2
t5 Dep1 Wt1 <---
t6 Dep3 Wt1 <---
t7 Dep3 Wt1
t8 Dep3 Wt1
```
I want to extract the first instance when the employee appears in a new department and/or has a new work title.
In the above data, the records with arrows should be the ones extracted and it should result in the following results:
```
Time | Department | WorkTitle
t1 Dep1 Wt1
t3 Dep2 Wt2
t5 Dep1 Wt1
t6 Dep3 Wt1
```
Note that at time t1 and t5 the same department and work title occur, so a simple GROUP BY clause does not work.
I have tried some attempts using OVER/PARTITION, but the complexity of this query seems to be beyond my knowledge.
Can this be done using an SQL statement? | As Gordon Linoff said, this problem is really easy when you have [lag()](http://technet.microsoft.com/en-us/library/hh231256.aspx) function. SQL Server 2008 doesn't have it, so I prefer to solve it with [outer apply](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx):
```
select t1.*
from t as t1
outer apply (
select top 1 t2.*
from t as t2
where t2.worktime < t1.worktime
order by t2.worktime desc
) as t2
where
t2.worktime is null or
t2.department <> t1.department or
t2.worktitle <> t1.worktitle
``` | The function you really need is `lag()`, but that is not available until SQL Server 2012. In its absence, I prefer a correlated subquery.
This approach retrieves the previous time, then joins the table back in, and does the comparison for filtering:
```
select tprev.*
from (select t.*,
(select top 1 time
from t t2
where t.time < t2.time
order by time desc
) as prevtime
from t
) tprev join
t
on tprev.prevtime = t.time
where tprev.department <> t.department or
tprev.worktitle <> t.worktitle or
tprev.prevtime is null
``` | Summarize log data into change history with SQL statement | [
"sql",
"sql-server",
"sql-server-2008"
] |
I know this should be easy, but I can't see the forest for the trees.
```
TABLE 1
GNUM
Team1
Team2
TABLE 2
GNUM
ID
TeamID
```
For every record in table 1 there can be multiple related records in Table 2
Here's what I cant seem to figure out
I need to have a query that will tell me when there no records in TABLE2 for a specific team for a specific game
```
For Example
TABLE1
GNUM Team1 Team2
1 A B
2 A C
3 B A
4 C B
5 B A
TABLE 2
GNUM TeamID
1 A
1 B
3 B
4 B
4 B
4 B
```
Given the Above I would need a query that would show me all the games with missing records in Table 2. For Example:
```
GNUM Missing Team
2 A
2 C
3 A
4 C
5 B
5 A
``` | You can use `UNION` and `LEFT JOIN` for this:
```
SELECT a.*
FROM (SELECT GNUM, Team1
FROM TABLE1
UNION
SELECT GNUM, Team2
FROM TABLE1
)a
LEFT JOIN TABLE2 b
ON a.GNUM = b.GNUM
AND a.Team1 = b.TeamID
WHERE b.TeamID IS NULL
``` | ```
SELECT GNUM, TEAM
FROM (
SELECT t1.GNUM, Team1 AS TEAM
FROM Table1 t1
LEFT JOIN Table2 t2
ON t1.GNUM = t2.GNUM
AND t1.Team1 = t2.TeamID
WHERE t2.TeamID IS NULL
UNION ALL
SELECT t1.GNUM, Team2 AS TEAM
FROM Table1 t1
LEFT JOIN Table2 t2
ON t1.GNUM = t2.GNUM
AND t1.Team2 = t2.TeamID
WHERE t2.TeamID IS NULL
) t
ORDER BY GNUM, TEAM
```
Grab all the missing team 1 and team 2 rows separately, then combine the results.
Union all will be faster than union, as it doesn't perform a distinct operation. This is also desirable as in the unlikely event that a table1 contains (9, 'Z', 'Z') and table2 has no entries for game number 9, you would expect to see two rows of (9, 'Z') in the results.
If you don't care about record ordering, you can remove the wrapper query around the union to slightly increase performance.
[SQL Fiddle.](http://sqlfiddle.com/#!3/4c5da/3/0) | MSSQL Query Help? Should be easy, but I must need more coffee? | [
"sql",
"sql-server",
"performance",
"sql-server-2008"
] |
I have a lot of rows in a MySQL table, stored in datetime format. Is there a way I could change the year of the dates without changing their time?
Ex.
* 2000-05-01 11:00:00
* 2000-01-01 06:30:00
* 2000-01-01 07:00:00
changing them to
* 2050-05-01 11:00:00
* 2050-01-01 06:30:00
* 2050-01-01 07:00:00 | If you want to shift them forward by 50 years:
```
UPDATE tableName
SET columnName = columnName + INTERVAL 50 YEAR
```
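The `INTERVAL` syntax is MySQL-specific, but the idea is easy to check elsewhere; for example, SQLite's `datetime()` modifiers do the same thing (demo via Python's `sqlite3`, with the sample dates from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT)")
con.executemany("INSERT INTO t VALUES (?)",
                [("2000-05-01 11:00:00",), ("2000-01-01 06:30:00",)])
# Shift every date forward by 50 years; the time-of-day part is untouched
con.execute("UPDATE t SET d = datetime(d, '+50 years')")
rows = [r[0] for r in con.execute("SELECT d FROM t ORDER BY d")]
print(rows)  # ['2050-01-01 06:30:00', '2050-05-01 11:00:00']
```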
* [MySQL DATE\_ADD()](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-add) | You can try this also:-
```
UPDATE table_name SET columnName = concat('2011-01-12 ', time(columnName))
``` | change all dates but not the time in mysql using sql | [
"mysql",
"sql"
] |
I want to delete some records from a table based on criteria in another table. How do you delete from one of those tables without removing the records in both table?
I am looking to delete a table which are joined with other tables and the query looks something like this.
```
DELETE DeletingFromTable
FROM DeletingFromTable
INNER JOIN CriteriaTable ON DeletingFromTable.field_id = CriteriaTable.id
WHERE CriteriaTable.criteria = "value" ;
``` | This should work:
```
DELETE DeleteFromTable FROM DeleteFromTable AS DT
JOIN CriteriaFromTable AS CT ON DT.SomeId = CT.SomeId
WHERE CT.SomeId=[value]
``` | Your question is not 100% clear on what your issue is, but this query will drop tables 1,2 and 3 at the same time:
```
DROP TABLE table1,table2,table3
``` | Delete rows from multiple tables in a database | [
"sql",
"sql-server"
] |
Help me please
```
CREATE TABLE RET (anim SET('dog','pig','voon') DEFAULT 'pig');
```
Insert :
```
INSERT INTO RET VALUES('root') -- empty string! Why? DEFAULT doesn't work!
```
Thanks. | `DEFAULT` doesn't replace an invalid value, it just defines the default value to use if one is not specified in the insert. | It works as it should.
Change your schema like this for example
```
CREATE TABLE RET (id int, anim SET('dog','pig','voon') DEFAULT 'pig');
```
And then omit `anim` column in your insert
```
INSERT INTO ret (id) VALUES(1);
```
Or use `DEFAULT` keyword
```
INSERT INTO ret (id, anim) VALUES(2, DEFAULT);
```
Output:
```
| ID | ANIM |
-------------
| 1 | pig |
| 2 | pig |
```
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/6b229/1)** demo | DEFAULT doesn't work in SET data | [
"mysql",
"sql"
] |
I am using SQLFire. While starting an assignment, I found that I couldn't return a particular value. See this SQL Fiddle; it contains the EXACT data in the table that I am using in my assignment, along with the SQL statement that I have tried.
FIDDLE: <http://sqlfiddle.com/#!2/e761ac/1>
What i want to be outputted is:
```
Rate | RentalCode
-----------------
350 | WL
```
I am getting this error when I type my code into SQL Fire.

I have been told NOT to use the ORDER BY clause, and I have not learnt 'LIMIT'.
Thank you | You need to have `GROUP BY` clause since you have non-aggregated column in the `SELECT` clause
```
SELECT MIN(Rate), Rentalcode
FROM RentalRates
GROUP BY Rentalcode
```
**UPDATE**
Since you want to get the lowest rate, I think this is the better way than using `ORDER BY - LIMIT` since it supports multiple records having the lowest rate.
```
SELECT *
FROM RentalRates
WHERE rate = (SELECT MIN(rate) FROM rentalrates)
``` | It's not clear what you want to get with this query. I guess following will work:
```
SELECT Rate, Rentalcode
FROM RentalRates
order by Rate
LIMIT 1
```
[SQLFiddle demo](http://sqlfiddle.com/#!2/e761ac/4) | Why won't this statement work? SQL | [
"sql",
"sqlfire"
] |
When I try to run this query I get the above error. Could some one help me on this
```
UPDATE CCO.ORDER_CREATION
SET "Doc_Date" = 8/9/2013
WHERE "Document_Number" IN (3032310739,3032310740,3032310738)
``` | Try this
```
UPDATE CCO.ORDER_CREATION
SET "Doc_Date" = TO_DATE('8/9/2013', 'MM/DD/YYYY')
WHERE "Document_Number" IN (3032310739,3032310740,3032310738)
``` | `8/9/2013` is a numeric value: 8 divided by 9 divided by 2013.
You should use the `to_date()` function in order to convert a string to a date:
```
UPDATE CCO.ORDER_CREATION
SET "Doc_Date" = to_date('08/09/2013', 'dd/mm/yyyy')
WHERE "Document_Number" IN (3032310739,3032310740,3032310738);
```
You might need to adjust the format mask, as it's unclear whether you mean August 9th or September 8th.
Alternatively you can use the an ANSI date literal (the format is always yyyy-mm-dd for an ANSI SQL date literal):
```
UPDATE CCO.ORDER_CREATION
SET "Doc_Date" = DATE '2013-09-08'
WHERE "Document_Number" IN (3032310739,3032310740,3032310738);
``` | Lookup Error ORA-00932: inconsistent datatypes: expected DATE got NUMBER | [
"sql",
"oracle"
] |
I have a query result like this
<https://i.stack.imgur.com/sDuIj.png>
**EDIT:
Here is the actual table
<http://pastebin.com/TZCGHKdt>**
**SECOND EDIT:
<http://sqlfiddle.com/#!2/49bae/1>**
If you look at the result in the SQLFiddle link, it shows duplicate entries in the ID column. For example, the value 26 in the ID column has a total of 4 values; the query shows them broken up into 3 and 1. I want them combined.
---
Here is the insert query for the table that i'm using
```
INSERT INTO `capture_captive` (`capture_id_1`, `capture_id_2`, `capture_id_3`, `capture_id_4`, `capture_id_5`)
VALUES
(23, 32, 0, 0, 0),
(26, 25, 24, 0, 15),
(26, 32, 0, 0, 0),
(0, 0, 0, 0, 0),
(26, 26, 0, 0, 0),
(32, 32, 0, 0, 0);
```
The query that i'm using is
```
select id, num from
(select `capture_id_1` id, (COUNT(capture_id_1)) num from capture_captive where capture_id_1<>0 group by capture_id_1
UNION
select `capture_id_2`, (COUNT(capture_id_2)) num from capture_captive where capture_id_2<>0 group by capture_id_2
UNION
select `capture_id_3`, (COUNT(capture_id_3)) num from capture_captive where capture_id_3<>0 group by capture_id_3
UNION
select `capture_id_4`, (COUNT(capture_id_4)) num from capture_captive where capture_id_4<>0 group by capture_id_4
UNION
select `capture_id_5`, (COUNT(capture_id_5)) num from capture_captive where capture_id_5<>0 group by capture_id_5 ) as E
where id<>0
order by id;
```
I want to show the total count for each ID, displayed against that ID.
Thanks in advance. | Just found the answer to my own question. I was missing a very basic function SUM(). Probably needed a break.
Here is the link to it.
<http://sqlfiddle.com/#!2/49bae/2>
Hope it helps | Some might say this should be a comment, but...
Your main problem here is not with the query itself. You have a [normalization](https://en.wikipedia.org/wiki/Database_normalization) problem. And that problem leads to sub-optimal queries as you discovered yourself.
You should definitively think about re-factoring your data table. Here is a possible "equivalent" (see <http://sqlfiddle.com/#!2/759b9/2>):
```
CREATE TABLE `capture_captive_norm` (`capture_group` int not null,
`capture_id` int not null,
`value` int,
PRIMARY KEY (`capture_group`, `capture_id`));
INSERT INTO `capture_captive_norm` (`capture_group`, `capture_id`, `value`)
VALUES
(1,1,23), (1,2,32), (1,3,0), (1,4,0), (1,5,0),
(2,1,26), (2,2,25), (2,3,24), (2,4,0), (2,5,15),
(3,1,26), (3,2,32), (3,3,0), (3,4,0), (3,5,0),
(4,1,0), (4,2,0), (4,3,0), (4,4,0), (4,5,0),
(5,1,26), (5,2,26), (5,3,0), (5,4,0), (5,5,0),
(6,1,32), (6,2,32), (6,3,0), (6,4,0), (6,5,0);
```
---
I agree this *looks* more complicated. But:
> I want to show the total number of id, against their ids.
This in now simply:
```
SELECT `capture_id`, COUNT(IF(`value` <> 0, 1, NULL))
FROM `capture_captive_norm`
GROUP BY `capture_id`;
```
Producing:
```
+-------------+----------------------------------+
| CAPTURE_ID | COUNT(IF(`VALUE` <> 0, 1, NULL)) |
+-------------+----------------------------------+
| 1 | 5 |
| 2 | 5 |
| 3 | 1 |
| 4 | 0 |
| 5 | 1 |
+-------------+----------------------------------+
```
As you understand, based on that example, you might now *easily* query that table to count *captures by value* or *value by id*.
Please note that, by using the special NULL value to represent non-existent data, this might have been shortened to:
```
SELECT `capture_id`, COUNT(`value`)
FROM `capture_captive_norm`
GROUP BY `capture_id`;
```
Well ... in fact, with this sheme, you don't have to *insert* NULL for missing values. If you just "don't insert them", of course `COUNT()` *will not* count them... | Adding column data based on column values? | [
"mysql",
"sql",
"select"
] |
I'm trying to write a `T-SQL` query to find all zip codes that have gaps in their `zip4` ranges. Here are sample tables of my data:
Zip ranges:
```
Zip minRangeZip4 maxRangeZip4
85005 0 6505
85005 6506 9999
85006 0 5555
85006 5559 9999
```
How would I write a query to pick up that there's a gap between the `zip4` range for `85006`? | If you are on SQL Server 2012, you could use `Lead` (or `Lag`) as below.
```
;with cte as (
select Zip, minRangeZip4, maxRangeZip4,
lead(minRangeZip4) over (partition by zip order by minRangeZip4) leadVal
from t
)
select Zip
from cte
where maxRangeZip4 +1 <> leadVal
order by Zip
```
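The same `LEAD` idea can be verified outside SQL Server; here is a quick check in Python's `sqlite3` (SQLite supports window functions since 3.25, and the data is the sample from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE zips (zip TEXT, minRangeZip4 INT, maxRangeZip4 INT);
INSERT INTO zips VALUES
  ('85005', 0, 6505), ('85005', 6506, 9999),
  ('85006', 0, 5555), ('85006', 5559, 9999);
""")
# A gap exists when a range's max + 1 is not the next range's min
rows = con.execute("""
WITH cte AS (
  SELECT zip, maxRangeZip4,
         LEAD(minRangeZip4) OVER (PARTITION BY zip ORDER BY minRangeZip4) AS nxt
  FROM zips
)
SELECT DISTINCT zip FROM cte
WHERE nxt IS NOT NULL AND maxRangeZip4 + 1 <> nxt
""").fetchall()
print(rows)  # [('85006',)]
```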
[**Fiddle Demo**](http://sqlfiddle.com/#!6/1c013/17)
Results
```
| ZIP |
---------
| 85006 |
``` | If you take the difference between the max and min range for each row and then sum those differences for each zip, you will be able to identify which zip codes don't have the full range. The sum of the differences will be 10,000 for zips where there are no gaps, and it will be less than 10,000 for zips where there are gaps.
```
SELECT zip, SUM(maxRangeZip4-minRangeZip4+1)
FROM zips
GROUP BY zip
HAVING SUM(maxRangeZip4-minRangeZip4+1) <> 10000
```
Here is the example with data setup in [SQL Fiddle](http://sqlfiddle.com/#!3/a9ee1/1/0). I tested this on SQL Server 2008, and it should work on other database versions also.
Note: When taking the difference between maxRangeZip4 and minRangeZip4, we need to add 1 because the count of codes needs to be inclusive of the minimum value. | Query to find gaps in zip4 codes | [
"sql",
"t-sql",
"sql-server-2012"
] |
Please explain why the following query:
```
select in.status as "no_installments"
, count(in.id) as "installment"
FROM instalsched.instalment in
GROUP BY in.status;
```
returns
> ORA-00936: missing expression
> 00936. 00000 - "missing expression"
> \*Cause:
> \*Action:
> Error at Line: 1 Column: 12 | `in` is a keyword in SQL. It is used as part of a WHERE clause, such as `where person_id in (1,2,3,4)`. To remedy, simply change the alias.
```
select
in1.status as "no_installments",
count(in1.id) as "installment"
FROM instalsched.instalment in1
GROUP BY in1.status;
``` | `in` is a keyword. Use a different alias or wrap it in double quotes. | GROUP BY syntax is causing "missing expression" error | [
"sql",
"oracle",
"syntax",
"group-by"
] |
I want to left-pad an int with zeros to 12 characters, starting with the character 'P'.
E.g if number is 345
Then output should be '`P00000000345`'
My Code :
```
Declare @a int
Set @a =8756
Select Right('P00000000000' +Cast(@a As Varchar(11)),12)
```
**DB : SQL SERVER 2008** | Try this
```
Declare @a int
Set @a =8756
Select 'P' + REPLACE(STR(@a, 11), SPACE(1), '0')
```
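The `REPLACE(STR(...))` trick is T-SQL-specific; for what it's worth, the same padding is easy to check in SQLite with `printf` (demo via Python's `sqlite3`, using the 345 example from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# '%011d' zero-pads the number to 11 digits; the 'P' prefix makes 12 characters total
(padded,) = con.execute("SELECT 'P' || printf('%011d', ?)", (345,)).fetchone()
print(padded)  # P00000000345
```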
Demo: <http://sqlfiddle.com/#!3/d41d8/18547> | Try
```
Declare @a int
Set @a =8756
Select 'P'+Right('00000000000' +Cast(@a As Varchar(11)),11)
``` | Left pad varchar to certain length in sql server | [
"sql",
"sql-server-2008",
"t-sql"
] |
I have to show data from two different queries together.
Query 1:
```
select count(DISTINCT(nickname)) users_got_a_card from capture_captive where `number_successed`>0;
```
Query 2:
```
select count(DISTINCT(`nickname`)) users_shown_captive from `capture_captive`;
```
I want to show the data from both these queries together. | If you want them as two columns of a single row you can try
```
SELECT
(
SELECT COUNT(DISTINCT(nickname))
FROM capture_captive
WHERE `number_successed`> 0
) users_got_a_card,
(
SELECT COUNT(DISTINCT(`nickname`))
FROM `capture_captive`
) users_shown_captive
```
or
```
SELECT users_got_a_card, users_shown_capt
FROM
(
SELECT COUNT(DISTINCT(nickname)) users_got_a_card
FROM capture_captive
WHERE `number_successed`> 0
) a CROSS JOIN
(
SELECT COUNT(DISTINCT(`nickname`)) users_shown_capt
FROM `capture_captive`
) b
```
or
```
SELECT COUNT(DISTINCT(IF(`number_successed` > 0, `nickname`, NULL))) users_got_a_card,
COUNT(DISTINCT(`nickname`)) users_shown_capt
FROM `capture_captive`
```
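To convince yourself the scalar-subquery form returns both counts in one row, here is a small check with Python's `sqlite3` (the sample rows are invented; column names mirror the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE capture_captive (nickname TEXT, number_successed INT);
INSERT INTO capture_captive VALUES
  ('ann', 1), ('ann', 0), ('bob', 0), ('cat', 2);
""")
# Two independent aggregates, returned side by side as columns of one row
row = con.execute("""
SELECT
  (SELECT COUNT(DISTINCT nickname) FROM capture_captive
   WHERE number_successed > 0)                            AS users_got_a_card,
  (SELECT COUNT(DISTINCT nickname) FROM capture_captive)  AS users_shown_captive
""").fetchone()
print(row)  # (2, 3)
```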
Here is **[SQLFiddle](http://sqlfiddle.com/#!2/3ec5d/5)** demo | depending on what you need you can do someting like this
per column:
```
select ( select count(DISTINCT(nickname)) users_got_a_card from capture_captive where `number_successed`>0) as column1,
(select count(DISTINCT(`nickname`)) users_shown_captive from `capture_captive`) as column2
```
or per row:
```
select count(DISTINCT(nickname)) users_got_a_card from capture_captive where number_successed>0
UNION ALL
select count(DISTINCT(nickname)) users_shown_captive from capture_captive
``` | Showing results from two separate queries together | [
"mysql",
"sql",
"join"
] |
I am fighting with the distinct keyword in `sql`.
I just want to display all row numbers of unique (`distinct`) values in a column & so I tried:
```
SELECT DISTINCT id, ROW_NUMBER() OVER (ORDER BY id) AS RowNum
FROM table
WHERE fid = 64
```
However, the code below does give me the `distinct` values:
```
SELECT distinct id FROM table WHERE fid = 64
```
but when I tried it with `Row_Number`, it did not work. | Use this:
```
SELECT *, ROW_NUMBER() OVER (ORDER BY id) AS RowNum FROM
(SELECT DISTINCT id FROM table WHERE fid = 64) Base
```
and put the "output" of a query as the "input" of another.
Using CTE:
```
; WITH Base AS (
SELECT DISTINCT id FROM table WHERE fid = 64
)
SELECT *, ROW_NUMBER() OVER (ORDER BY id) AS RowNum FROM Base
```
The two queries should be equivalent.
Technically you could
```
SELECT DISTINCT id, ROW_NUMBER() OVER (PARTITION BY id ORDER BY id) AS RowNum
FROM table
WHERE fid = 64
```
but if you increase the number of DISTINCT fields, you have to put all these fields in the `PARTITION BY`, so for example
```
SELECT DISTINCT id, description,
ROW_NUMBER() OVER (PARTITION BY id, description ORDER BY id) AS RowNum
FROM table
WHERE fid = 64
```
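For what it's worth, the subquery form is easy to verify with Python's `sqlite3` (SQLite also supports `ROW_NUMBER` in recent versions; the table and data here are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INT, fid INT);
INSERT INTO t VALUES (7, 64), (7, 64), (3, 64), (9, 64), (3, 99);
""")
# De-duplicate first, then number the surviving rows
rows = con.execute("""
SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS RowNum
FROM (SELECT DISTINCT id FROM t WHERE fid = 64)
""").fetchall()
print(rows)  # [(3, 1), (7, 2), (9, 3)]
```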
I even hope you comprehend that you are going against standard naming conventions here, `id` should probably be a primary key, so unique by definition, so a `DISTINCT` would be useless on it, unless you coupled the query with some `JOIN`s/`UNION ALL`... | This can be done very simple, you were pretty close already
```
SELECT distinct id, DENSE_RANK() OVER (ORDER BY id) AS RowNum
FROM table
WHERE fid = 64
``` | sql query distinct with Row_Number | [
"sql",
"distinct"
] |
I need to extract SQL files from multiple tables of a PostgreSQL database. This is what I've come up with so far:
```
pg_dump -t 'thr_*' -s dbName -U userName > /home/anik/psqlTest/db_dump.sql
```
However, as you see, all the tables that start with the prefix `thr` are being exported to a single unified file (`db_dump.sql`). I have almost 90 tables in total to extract SQL from, so it is a must that the data be stored into separate files.
How can I do it? Thanks in advance. | If you are happy to hard-code the list of tables, but just want each to be in a different file, you could use a shell script loop to run the `pg_dump` command multiple times, substituting in the table name each time round the loop:
```
for table in table1 table2 table3 etc;
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
```
**EDIT**: This approach can be extended to get the list of tables dynamically by running a query through psql and feeding the results into the loop instead of a hard-coded list:
```
for table in $(psql -U userName -d dbName -t -c "Select table_name From information_schema.tables Where table_type='BASE TABLE' and table_name like 'thr_%'");
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
```
Here `psql -t -c "SQL"` runs `SQL` and outputs the results with no header or footer; since there is only one column selected, there will be a table name on each line of the output captured by `$(command)`, and your shell will loop through them one at a time. | Since version 9.1 of PostgreSQL (Sept. 2011), one can use the *directory format* output when doing backups
and 2 versions/2 years after (PostgreSQL 9.3), the --jobs/-j makes it even more efficient to backup every single objects in parallel
but what I don't understand in your original question, is that you use the -s option which dumps only the object definitions (schema), not data.
if you want the data, you shall not use -s but rather -a (data-only) or no option to have schema+data
so, to backup all objects (tables...) that begins with 'th' for the database dbName on the directory dbName\_objects/ with 10 concurrent jobs/processes (increase load on the server) :
> pg\_dump -Fd -f dbName\_objects -j 10 -t 'thr\_\*' -U userName dbName
(you can also use the -a/-s if you want the data or the schema of the objects)
as a result the directory will be populated with a toc.dat (table of content of all the objects) and one file per object (.dat.gz) in a compressed form
each file is named after it's object number, and you can retrieve the list with the following pg\_restore command:
> pg\_restore --list -Fd dbName\_objects/ | grep 'TABLE DATA'
in order to have each file not compressed (in raw SQL)
> pg\_dump --data-only --compress=0 --format=directory --file=dbName\_objects --jobs=10 --table='thr\_\*' --username=userName --dbname=dbName | PostgreSQL - dump each table into a different file | [
"sql",
"database",
"postgresql",
"postgresql-9.1",
"pg-dump"
] |
How can I query only the records that show up twice in my table?
Currently my table looks something like this:
```
Number Date RecordT ReadLoc
123 08/13/13 1:00pm N Gone
123 08/13/13 2:00pm P Home
123 08/13/13 3:00pm N Away
123 08/13/13 4:00pm N Away
```
I need a query that will select the records that have the same 'Value' in the RecordT field and the same 'Value' in the ReadLoc field.
So my result for the above would show with the query:
```
Number Date RecordT ReadLoc
123 08/13/13 3:00pm N Away
123 08/13/13 4:00pm N Away
```
I was trying to do a subselect like this:
```
SELECT t.Number, t.Date, n.RecordT, n.ReadLoc
FROM Table1 t join Table2 n ON t.Number = n.Number
WHERE t.Number IN (SELECT t.Number FROM Table1 GROUP BY t.Number HAVING COUNT(t.Number) > 1 )
AND n.ReadLoc IN (SELECT n.ReadLoc FROM Table2 GROUP n.ReadLoc HAVING COUNT(n.ReadLoc) > 1 )
``` | ```
SELECT a.*
FROM Table1 a
JOIN (SELECT RecordT, ReadLoc
FROM Table1
GROUP BY RecordT, ReadLoc
HAVING COUNT(*) > 1
)b
ON a.RecordT = b.RecordT
AND a.ReadLoc = b.ReadLoc
```
[SQL Fiddle](http://sqlfiddle.com/#!3/ecff8/1/0) | Shouldn't this work:
```
select *
from table1
where (RecordT, ReadLoc) in
(select RecordT, ReadLoc
from table1
group by RecordT, ReadLoc
having count(*) > 1)
``` | Select rows with the same field values | [
"",
"sql",
"sql-server",
"sql-server-2005",
""
] |
So I'm just starting to learn SQL, and hit upon the following problem. Suppose I have a table with 3 columns like so:
```
ID | Property_Name | Property_Value
1 | Color | "Blue"
1 | Size | "Large"
2 | Color | "Red"
3 | Color | "Orange"
3 | Size | "Small"
4 | Color | "Blue"
4 | Size | "Large"
...
```
Now, suppose I want to find the IDs that have Color=Blue and Size=Large (aka. ID 1 and 4),
how would I best do this. The best way I came up with is the following, but it seems clunky...
```
SELECT ID FROM PropertyTable
WHERE
ID IN (
SELECT ID FROM PropertyTable WHERE
Property_Name='Color' AND Property_Value='blue' )
AND
(Property_Name='Size' AND Property_Value='Large')
```
Thank you :)
EDIT: Forgot to add preformat tags to the example table text. Just did so. | How about a self join?
```
SELECT T1.ID
FROM PropertyTable T1
JOIN PropertyTable T2 ON T1.ID = T2.ID
WHERE
T1.PropertyName = 'Color' AND T1.PropertyValue = 'Blue'
AND
T2.PropertyName = 'Size' AND T2.PropertyValue = 'Large'
```
Here is an [SQLFiddle](http://sqlfiddle.com/#!2/7d33f/1) | This is an example of a "set-within-sets" subquery. I think the most general approach is to use aggregation with a `having` clause:
```
select ID
from PropertyTable pt
group by ID
having sum(case when Property_Name='Color' AND Property_Value='blue' then 1 else 0 end) > 0 and
sum(case when Property_Name='Size' AND Property_Value='Large' then 1 else 0 end) > 0;
```
Each clause of the `having` statement counts the number of rows that match its condition.
The reason I like this is that it is quite general. If you wanted to add another property, you would just add a similar clause:
```
select ID
from PropertyTable pt
group by ID
having sum(case when Property_Name='Color' AND Property_Value='blue' then 1 else 0 end) > 0 and
sum(case when Property_Name='Size' AND Property_Value='Large' then 1 else 0 end) > 0 and
sum(case when Property_Name = 'Heating' and Property_Value = 'Gas' then 1 else 0 end) > 0;
```
If you wanted any of the three conditions, then you would use `or`:
```
select ID
from PropertyTable pt
group by ID
having sum(case when Property_Name='Color' AND Property_Value='blue' then 1 else 0 end) > 0 or
sum(case when Property_Name='Size' AND Property_Value='Large' then 1 else 0 end) > 0 or
sum(case when Property_Name = 'Heating' and Property_Value = 'Gas' then 1 else 0 end) > 0;
``` | How to best use SQL to find common IDs that match multiple WHERE clauses | [
"",
"sql",
""
] |
I've got a table structure like this:
* month (datetime)
* account (int)
* product (int)
* amountPaid (int)
And some example data:
```
month account product amountPaid
------------------------------------
1-1-2012 1 1 50
2-1-2012 1 1 50
2-1-2012 2 1 150
2-1-2012 2 2 100
```
What I'd like is a query that can tell me is,
1. For each month, the number of accounts that paid for only product 1.
2. The number of accounts that paid for only product 2.
3. and the number of accounts that paid for both products 1 and 2.
Also, the products that each account pays for can change month to month. For example, one month, an account might pay for only product 1, the next month, both products 1 and 2, and the following month, only product 2.
Can this be done in a SQL query?
The result set might like something like:
```
month product count
------------------------
1-1-2012 1 10
1-1-2012 2 5
1-1-2012 1+2 3
2-1-2012 1 8
2-1-2012 2 4
2-1-2012 1+2 2
``` | You can do it with a single SQL query, but my solution isn't dynamic. If your products change, you'll have to update the query to add/remove products that you want to display. See [SQL Fiddle](http://sqlfiddle.com/#!3/abdaf/6) for working example in SQL Server 2008.
```
with AccPay as (
select
pay_month,
account,
sum(case when product=1 then 1 else 0 end) as prod1_pay,
sum(case when product=2 then 1 else 0 end) as prod2_pay
from payments p
group by
pay_month,
account
)
select
pay_month,
sum(case when prod1_pay>=1 and prod2_pay=0 then 1 else 0 end) as prod1_only,
sum(case when prod2_pay>=1 and prod1_pay=0 then 1 else 0 end) as prod2_only,
sum(case when prod1_pay>=1 and prod2_pay>=1 then 1 else 0 end) as prod1_and_prod2
from AccPay
group by
pay_month
;
```
**OR**
```
with AccPay as (
select
pay_month,
account,
sum(case when product=1 then 1 else 0 end) as prod1_pay,
sum(case when product=2 then 1 else 0 end) as prod2_pay
from payments p
group by
pay_month,
account
)
select
pay_month,
'1' as product,
count(*) as count
from AccPay
where prod1_pay>=1 and prod2_pay=0
group by
pay_month
union all
select
pay_month,
'2' as product,
count(*) as count
from AccPay
where prod2_pay>=1 and prod1_pay=0
group by
pay_month
union all
select
pay_month,
'1+2' as product,
count(*) as count
from AccPay
where prod1_pay>=1 and prod2_pay>=1
group by
pay_month
;
``` | Since there are only two products, there is a simple solution. In this query I assume your table's name is data, and the number three stands for product 1+2.
```
select month, sum ( product ) as products, count ( number ) as number
from (
select month, product, count ( account ) as number
from data
group by month, product
) as t
group by month
```
In MySQL there is a function group\_concat that does just this (it returns '1+2'). In MS SQL it didn't exist back in 2000, but maybe it has been introduced since. | Can this be done in a SQL query? (sort of like a Venn diagram) | [
"",
"sql",
"sql-server",
""
] |
I'm looking for a way to determine if a timestamp falls between two times, regardless of the *date* in that timestamp. So for example, if the time in the timestamp falls between '00:00:00.000' (midnight) and '01:00:00.000' (1 A.M.), I'd want to select that row regardless of the particular date.
I've tried lots of different variations on the `to_char` and `to_date` functions, but I keep getting errors. Coming from Informix, Oracle seems much more complicated.
The thing closest to "correct" (I think) that I've tried is:
```
SELECT *
FROM my_table
WHERE SUBSTR(TO_CHAR(my_timestamp), 10) > '00:00:00.000'
AND SUBSTR(TO_CHAR(my_timestamp), 10) < '01:00:00.000'
```
... But nothing works. Any tips or tricks?
I found a way to do it, but I'd still prefer something a little less hacky, if it exists.
```
SUBSTR(SUBSTR(TO_CHAR(my_timestamp), 11), 0, 12) > '01.00.00.000'
``` | Your solution looks correct to me, except I haven't tried the substr function. This is what I used in one of my previous projects:
```
select * from orders
where to_char(my_timestamp,'hh24:mi:ss.FF3')
between '00:00:00.000' and '01:00:00.123';
``` | Use `TRUNC(my_timestamp, 'J')` to remove the hours and get only the '2013-08-15 00:00:00.00'.
So:
```
WHERE my_timestamp - TRUNC(my_timestamp, 'J') > 0
AND my_timestamp - TRUNC(my_timestamp, 'J') < 1/24 ;
``` | Oracle SQL - Determine if a timestamp falls within a range, excluding date | [
"",
"sql",
"oracle",
"timestamp",
""
] |
I have an SSIS Package, which contains multiple flows.
Each flow is responsible for creating a "staging" table, which gets filled up after creation.
These tables are **global** temporary tables.
I added 1 extra flow (I did not make the package) which does exactly as stated above, for another table. However, for some reason, the package fails intermittently on this flow, while it is exactly the same as others, besides some table names.
The error that keeps popping up:
> Update - Insert Data Flow:Error: SSIS Error Code DTS\_E\_OLEDBERROR. An
> OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is
> available. Source: "Microsoft SQL Server Native Client 11.0"
> Hresult: 0x80004005 Description: "Unspecified error". An OLE DB
> record is available. Source: "Microsoft SQL Server Native Client
> 11.0" Hresult: 0x80004005 Description: "The metadata could not be determined because statement 'select \* from
> '##TmpMcsConfigurationDeviceHistory86B34BFD041A430E84CCACE78DA336A1'' uses a temp table.".
Creation expression:
```
"CREATE TABLE " + @[User::TmpMcsConfigurationDeviceHistory] + " ([RecId] [bigint] NULL,[DataAreaID] [nvarchar](4) COLLATE database_default NULL,[Asset] [bigint] NULL,[Code] [nvarchar](255) COLLATE database_default NULL,[Configuration] [bigint],[StartdateTime] [datetime] NULL,[EndDateTime] [datetime] NULL)
```
"
Parsed expression (=evaluated):
```
CREATE TABLE ##TmpMcsConfigurationDeviceHistory764E56F088DC475C9CC747CC82B9E388 ([RecId] [bigint] NULL,[DataAreaID] [nvarchar](4) COLLATE database_default NULL,[Asset] [bigint] NULL,[Code] [nvarchar](255) COLLATE database_default NULL,[Configuration] [bigint],[StartdateTime] [datetime] NULL,[EndDateTime] [datetime] NULL)
``` | [Using `WITH RESULT SETS` to explicitly define the metadata](http://www.sqlservercentral.com/Forums/FindPost1577869.aspx) will allow SSIS to skip the `sp_describe_first_result_set` step and use the metadata that you define. The upside is that you can use this to get SSIS to execute SQL that contains a temporary table (for me, that performance helped a lot); the downside is, you have to manually maintain and update this if anything changes.
Query sample (stored procedure):
```
EXEC ('dbo.MyStoredProcedure')
WITH RESULT SETS
(
(
MyIntegerColumn INT NOT NULL,
MyTextColumn VARCHAR(50) NULL,
MyOtherColumn BIT NULL
)
)
```
Query sample (simple SQL):
```
EXEC ('
CREATE TABLE #a
(
MyIntegerColumn INT NOT NULL,
MyTextColumn VARCHAR(50) NULL,
MyOtherColumn BIT NULL
)
INSERT INTO #a
(
MyIntegerColumn,
MyTextColumn,
MyOtherColumn
)
SELECT
1 AS MyIntegerColumn,
''x'' AS MyTextColumn,
0 AS MyOtherColumn
SELECT MyIntegerColumn, MyTextColumn, MyOtherColumn
FROM #a')
WITH RESULT SETS
(
(
MyIntegerColumn INT NOT NULL
,MyTextColumn VARCHAR(50) NULL
,MyOtherColumn BIT NULL
)
)
``` | Another option (kind of a hack, but it works and doesn't require you to change your use of global temp tables) is to use a SET FMTONLY ON command in front of your actual query to send a fake "First result set" to SSIS with your correct column structure. So you can do something like
```
SET FMTONLY ON
select 0 as a, 1 as b, 'test' as C, GETDATE() as D
SET FMTONLY OFF
select a, b, c, d from ##TempTable
```
When SSIS runs sp\_describe\_first\_result\_set, it will return the metadata and column names of your FMTONLY command, and won't complain about not being able to determine the metadata of your temp table because it won't even try. | SSIS Package not wanting to fetch metadata of temporary table | [
"",
"sql",
"sql-server",
"ssis",
"ssis-2012",
""
] |
I have two tables.
Table 1:
| ID | Color | Description |
| --- | --- | --- |
| 1 | red | It's red |
| 2 | blue | yeah |
| 3 | blue | blue |
Table 2:
| ID | Family |
| --- | --- |
| 1 | family1 |
| 2 | family1 |
| 3 | family2 |
I want to dissolve table 2 and just add the `Family` column to the end of my table 1. Easy, right? So I add a family column to table 1 and
```
UPDATE table1
SET Table1.family = table2.family
FROM
table1 INNER JOIN table2
ON table1.ID = table2.id;
```
I get
> Syntax Error : Missing operator.
Isn't this the syntax for these types of queries? | The MS-Access syntax for a joined update is as follows:
```
UPDATE table1 INNER JOIN table2
ON table1.ID = table2.id
SET table1.family = table2.family
``` | You have the wrong syntax, for Access use:
```
UPDATE table1 INNER JOIN table2
ON table1.ID = table2.id
SET Table1.family = table2.family;
``` | Syntax error "Missing operator" on simple update query | [
"",
"sql",
"ms-access",
""
] |
I am currently joining 2 other tables when exporting 1, but this is causing rows to be duplicated. Instead of duplicating the row to match the value, is it possible to separate values of a specific row with commas?
Here is a sample of my table as it is now:
```
id,optioncatid,optionsdesc_sidenote,isproductcode,applytoproductcodes,stockstatus
"325","30","","BRB8PACK","00LDCLU131401C","17"
"325","30","","BRB8PACK","00LDDEV131401C","17"
"325","30","","BRB8PACK","00LDHEI131401C","17"
//etc
```
And this is what I would like it to be:
```
id,optioncatid,optionsdesc_sidenote,isproductcode,applytoproductcodes,stockstatus
"325","30","","BRB8PACK","00LDCLU131401C,00LDCLU131401C,00LDHEI131401C, etc...","17"
//etc
```
There can be thousands of values for `applytoproductcodes`, ballooning the file up to 200+MB when exporting as XML. This is obviously extremely bloated.
My SQL query:
```
SELECT
Options.ID,
Options.OptionCatID,
Options.optionsdesc_sidenote,
Options.IsProductCode,
Options_ApplyTo.ProductCode AS ApplyToProductCodes,
Products.StockStatus AS StockStatus
FROM
Options
JOIN Options_ApplyTo ON Options.ID = Options_ApplyTo.OptionID
JOIN Products ON Options.IsProductCode = Products.ProductCode
WHERE
Options.IsProductCode <> ''
ORDER BY
Options.ID
```
---
Edit: Now I have done more research and modified my code to this:
```
SELECT
Options.ID,
Options.OptionCatID,
Options.optionsdesc_sidenote,
Options.IsProductCode,
t.ProductCode AS ApplyToProductCodes,
Products.StockStatus AS StockStatus
FROM
Options
LEFT JOIN
(
select OptA.ProductCode as ProductCode, OptA.OptionID as OptionID
from Options_ApplyTo AS OptA, Options
WHERE Options.ID = OptA.OptionID
order by OptA.OptionID
for xml path('')
) t
ON Options.ID = t.OptionID
LEFT JOIN Products
ON Options.IsProductCode = Products.ProductCode
WHERE
Options.IsProductCode <> ''
ORDER BY
Options.ID
```
But now I am getting the error **No column was specified for column 1 of 't'.** | Just put that sucker right into the select. You cannot join on that as the `for xml` turns the result into a scalar value...not a table....
```
SELECT
Options.ID,
Options.OptionCatID,
Options.optionsdesc_sidenote,
Options.IsProductCode,
(
select OptA.ProductCode as ProductCode, OptA.OptionID as OptionID
from Options_ApplyTo AS OptA
WHERE Options.ID = OptA.OptionID
order by OptA.OptionID
for xml path('')
) as ApplyToProductCodes,
Products.StockStatus AS StockStatus
FROM
Options
LEFT JOIN Products
ON Options.IsProductCode = Products.ProductCode
WHERE
Options.IsProductCode <> ''
ORDER BY
Options.ID
``` | MySQL's [GROUP\_CONCAT](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) function concatenates the values in each group, separated by commas. (Note that GROUP\_CONCAT exists in MySQL, not in SQL Server.) Hence your query can be written as:
```
SELECT
Options.ID,
Options.OptionCatID,
Options.optionsdesc_sidenote,
Options.IsProductCode,
t.pcodes AS ApplyToProductCodes,
Products.StockStatus AS StockStatus
FROM
OPTIONS
JOIN
(SELECT Options_ApplyTo.OptionID, GROUP_CONCAT(Options_ApplyTo.ProductCode) pcodes
FROM Options_ApplyTo GROUP BY Options_ApplyTo.OptionID) t
ON Options.ID = t.OptionID
JOIN Products
ON Options.IsProductCode = Products.ProductCode
WHERE
Options.IsProductCode <> ''
ORDER BY
Options.ID
``` | Joining tables without duplicating the row | [
"",
"sql",
"sql-server-2005",
""
] |
I have following SQL:
```
;WITH CTE
AS (SELECT AL.*,
RV.FORENAME,
RV.SURNAME,
RV.USERNAME AS RegistrantUsername,
E.FORENAME AS EmployeeForename,
E.SURNAME AS EmployeeSurname,
U.USERNAME,
CASE
WHEN @Language = 2 THEN C.NAMELANG2
ELSE C.NAMELANG1
END AS CompanyName,
CASE
WHEN VC.COMPANYID IS NULL THEN
CASE
WHEN @Language = 2 THEN V.COMPANYNAMELANG2
ELSE V.COMPANYNAMELANG1
END
ELSE
CASE
WHEN @Language = 2 THEN VC.NAMELANG2
ELSE VC.NAMELANG1
END
END AS VacancyCompanyName,
CASE
WHEN @Language = 2 THEN V.JOBTITLELANG2
ELSE V.JOBTITLELANG1
END AS JobTitle,
ROW_NUMBER()
OVER(
PARTITION BY AL.REGISTRANTID, AL.COMPANYID
ORDER BY ACTIONDATE ASC) AS RN
FROM DBO.HR_ACTIONLOG AL
LEFT OUTER JOIN DBO.REGISTRANTSLISTVIEW RV
ON AL.REGISTRANTID = RV.REGISTRANTID
LEFT OUTER JOIN DBO.HR_EMPLOYEES E
ON AL.EMPLOYEEID = E.EMPLOYEEID
LEFT OUTER JOIN DBO.HR_USERS U
ON AL.USERID = U.USERID
LEFT OUTER JOIN DBO.HR_COMPANIES C
ON AL.COMPANYID = C.COMPANYID
LEFT OUTER JOIN DBO.HR_VACANCIES V
ON AL.VACANCYID = V.VACANCYID
LEFT OUTER JOIN DBO.HR_COMPANIES VC
ON V.COMPANYID = VC.COMPANYID
WHERE ( @Action IS NULL
OR AL.ACTION = @Action )
AND ( @DateFrom IS NULL
OR DBO.DATEONLY(AL.ACTIONDATE) >=
DBO.DATEONLY(@DateFrom) )
AND ( @DateTo IS NULL
OR DBO.DATEONLY(AL.ACTIONDATE) <= DBO.DATEONLY(@DateTo) )
AND ( @CompanyID IS NULL
OR AL.COMPANYID = @CompanyID )
AND ( @RegistrantID IS NULL
OR AL.REGISTRANTID = @RegistrantID )
AND ( @VacancyID IS NULL
OR AL.VACANCYID = @VacancyID )
--ORDER BY AL.ActionDate DESC
)
SELECT *
FROM CTE
WHERE RN = 1;
```
It returns the first record from each group based on ActionDate, which is fine, but the overall result set is not ordered by date; each group's first record is returned, yet the collection of those records is not ordered by ActionDate. I tried ORDER BY AL.ActionDate DESC inside the CTE but it gives this error:
```
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified.
``` | Try this one -
```
;WITH CTE AS
(
SELECT ...
FROM dbo.hr_ActionLog AL
LEFT JOIN ...
WHERE AL.[Action] = ISNULL(@Action, AL.[Action])
AND dbo.DateOnly(AL.ActionDate) BETWEEN
dbo.DateOnly(ISNULL(@DateFrom, AL.ActionDate))
AND
dbo.DateOnly(ISNULL(@DateTo, '30000101'))
AND AL.CompanyID = ISNULL(@CompanyID, AL.CompanyID)
AND AL.RegistrantID = ISNULL(@RegistrantID, AL.RegistrantID)
AND AL.VacancyID = ISNULL(@VacancyID, AL.VacancyID)
)
SELECT *
FROM CTE
WHERE RN = 1
ORDER BY ActionDate DESC;
``` | Move `order by` clause outside of cte:
```
WITH CTE AS
(
...
)
SELECT *
FROM CTE
WHERE RN = 1
ORDER BY ActionDate DESC
``` | Ordering the results of CTE in SQL | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I have a Customer table and a Payment table that are joined on a customers ID (CID). In the Payment table there is an Amount column; some rows have a positive amount (they owed money), but others have a negative amount (they paid money).
I want to know which customers, when I sum their positive and negative amounts, still owe me money.
If I just wanted to see their balances, I could do that like so:
```
SELECT Customer.FirstName, Customer.LastName, Customer.AccountNumber, SUM(Amount) As Balance
FROM Customer
JOIN Payment
ON Customer.CID = Payment.CID
GROUP BY Customer.AccountNumber
```
How do I change this so that if the balance is 0 or less, I don't return a row for that customer?
Edit: `HAVING` was the keyword I did not know. Thanks for the many right answers! | The HAVING keyword allows you to select based on group by functions:
```
SELECT Customer.FirstName, Customer.LastName, Customer.AccountNumber, SUM(Amount) As Balance
FROM Customer
JOIN Payment
ON Customer.CID = Payment.CID
GROUP BY Customer.AccountNumber
HAVING SUM(Amount) > 0
``` | ```
SELECT Customer.FirstName, Customer.LastName, Customer.AccountNumber, SUM(Amount) As Balance
FROM Customer
JOIN Payment
ON Customer.CID = Payment.CID
GROUP BY Customer.AccountNumber
HAVING Balance>0
``` | How do I return only those customers who owe me money (in mySQL)? | [
"",
"mysql",
"sql",
""
] |
I currently have a URL redirect table in my database that contains ~8000 rows and ~6000 of them are duplicates.
I was wondering if there is a way I could delete these duplicates based on a certain column's value. I am using my "old\_url" column to find duplicates, and I have used:
```
SELECT old_url
,DuplicateCount = COUNT(1)
FROM tbl_ecom_url_redirect
GROUP BY old_url
HAVING COUNT(1) > 1 -- more than one value
ORDER BY COUNT(1) DESC -- sort by most duplicates
```
However, I'm not sure what I can do to remove them now, as I don't want to lose every single one, just the duplicates. They are almost a complete match, apart from the fact that sometimes the new\_url is different and the url\_id (GUID) is different each time. | In my opinion [ranking functions](http://technet.microsoft.com/en-us/library/ms189798.aspx) and a [`CTE`](http://technet.microsoft.com/en-us/library/ms175972.aspx) are the easiest approach:
```
WITH CTE AS
(
SELECT old_url
,Num = ROW_NUMBER()OVER(PARTITION BY old_url ORDER BY DateColumn ASC)
FROM tbl_ecom_url_redirect
)
DELETE FROM CTE WHERE Num > 1
```
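Before running the delete, a common safety step is to run the identical CTE as a plain SELECT to preview the rows that would be removed (this assumes the same hypothetical `DateColumn` used above):

```sql
WITH CTE AS
(
   SELECT old_url
         ,Num = ROW_NUMBER() OVER (PARTITION BY old_url ORDER BY DateColumn ASC)
   FROM tbl_ecom_url_redirect
)
SELECT * FROM CTE WHERE Num > 1;
```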
Change `ORDER BY DateColumn ASC` accordingly to determine which records should be deleted and which should be kept. In this case I delete all newer duplicates. | If your table has a primary key then this is easy:
```
BEGIN TRAN
CREATE TABLE #T(Id INT, OldUrl VARCHAR(MAX))
INSERT INTO #T VALUES
(1, 'foo'),
(2, 'moo'),
(3, 'foo'),
(4, 'moo'),
(5, 'foo'),
(6, 'zoo'),
(7, 'foo')
DELETE FROM #T WHERE Id NOT IN (
SELECT MIN(Id)
FROM #T
GROUP BY OldUrl
HAVING COUNT(OldUrl) = 1
UNION
SELECT MIN(Id)
FROM #T
GROUP BY OldUrl
HAVING COUNT(OldUrl) > 1)
SELECT * FROM #T
DROP TABLE #T
ROLLBACK
``` | finding duplicates and removing but keeping one value | [
"",
"sql",
"sql-server",
""
] |
How do I update all employees' `DepartmentID` who belong to `department` code `500`, setting them to department code `503`?
\*\*tHREmployee
```
> EmployeeId #.......Employeee.........DepartmentID
...101...............Ajith.............101
...102...............Arathy ...........Null
...103...............Arurna............102
...104...............Ambily............101
...105...............Anjaly............Null
...106...............Babitha...........103
```
\*\*tHRDepartment
```
DepartmentID #.............Code
101........................500
102........................501
103........................502
105........................503
..
``` | ```
DECLARE @CodeFrom AS INT
DECLARE @CodeTo AS INT
SET @CodeFrom = 500
SET @CodeTo= 503
UPDATE E
SET DepartmentID = (
SELECT DepartmentID
FROM tHRDepartment
WHERE Code = @CodeTo
)
FROM tHREmployee E
JOIN tHRDepartment D
ON E.DepartmentID = D.DepartmentID
WHERE D.Code = @CodeFrom
``` | What about this:
```
Declare @NewDepID int
Select @NewDepID = DepartmentID from Departments Where DepartmentCode = 503
update tHREmployee
Set DepartmentID = @NewDepID
Where DepartmentID in (Select DepartmentID from Departments Where DepartmentCode = 500)
``` | Update the id with some field id in another table | [
"",
"sql",
"sql-server-2008-r2",
""
] |
I have data in my database table as
```
last_updated_product
01/Jan/1899 6:25:01 AM
01/Jan/1899 6:25:02 AM
```
How can I update only the date part with sysdate without modifying the time part?
Expected output
```
21/Aug/2013 6:25:01 AM
21/Aug/2013 6:25:02 AM
```
`last_updated_product` column data type is defined as `date`
Thanks | You need to use midnight on the current day, and add on the time part from the original value:
```
trunc(sysdate) + (last_updated_product - trunc(last_updated_product))
```
`trunc()` gives you the date with the time component set to 00:00:00, so `date - trunc(date)` gives you just the original time component, as a number (fraction of a day) as per the [datetime arithmetic rules](http://docs.oracle.com/cd/E11882_01/server.112/e26088/sql_elements001.htm#sthref168). That number can then be added on to midnight today.
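As a quick sanity check, the same arithmetic can be run standalone against DUAL (the timestamp literal here is purely illustrative):

```sql
SELECT TRUNC(SYSDATE)
       + (TO_DATE('01/01/1899 06:25:01', 'DD/MM/YYYY HH24:MI:SS')
          - TRUNC(TO_DATE('01/01/1899 06:25:01', 'DD/MM/YYYY HH24:MI:SS'))) AS shifted
FROM dual;
```

This returns today's date with the time portion 06:25:01.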
Not sure if you're actually updating the table or just doing this in a query, but it's the same calculation either way. | You can either work out the time portion and add in the date you want, for instance:
```
update my_table
set last_updated_product =
to_date('21/Aug/2013', 'dd/Mon/yyyy')
-- difference between the start of the day and the time
+ (last_updated_product - trunc(last_updated_product))
```
The extra brackets ensure the query is evaluated with the intended operator precedence, as you can't add a date to a date but you can add an interval. The brackets ensure that `last_updated_product - trunc(last_updated_product)` is evaluated before the addition takes place.
or convert it to a character, concatenate it to the date and then convert it back to a date.
```
update my_table
set last_updated_product =
to_date('21/Aug/2013' || to_char(last_updated_product, 'hh24:mi:ss')
, 'dd/Mon/yyyyhh24:mi:ss')
```
e.g.
```
create table my_table ( last_updated_product date );
Table created.
insert into my_table values (sysdate - 100);
1 row created.
update my_table
set last_updated_product =
to_date('21/Aug/2013', 'dd/Mon/yyyy')
+ (last_updated_product - trunc(last_updated_product))
;
1 row updated.
select * from my_table;
LAST_UPDATED_PRODUC
-------------------
2013/08/21 08:13:57
``` | Update only date without modifying time | [
"",
"sql",
"oracle",
"oracle10g",
""
] |
I have a PostgreSQL database table which contains over 5 million entries.
I also have a CSV file which contains 100,000 entries.
I need to run a query to get data from DB which related to the CSV file's data.
However, as per everyone's understanding and my own experience, this kind of query takes ages to complete (more than 6 hours, at a guess).
So, given the newest findings and tools, is there a better, faster solution to perform this task? | Just a small addition to Erwin's answer: if you just want to check whether an email is in the CSV file, the code could be something like this:
```
create temp table tmp_emails (email text primary key);
copy tmp_emails from 'path/emails.csv';
analyze tmp_emails;
update <your table> set
...
from <your table> as d
where exists (select * from tmp_emails as e where e.email = d.email);
```
I think it may be possible to create a table-returning function which reads your CSV and call it like:
```
update <your table> set
...
from <your table> as d
where exists (select * from csv_func('path/emails.csv') as e where e.email = d.email);
```
But I have no PostgreSQL installed here to try; I'll do it later. | The *fast* lane: create a [temporary table](http://www.postgresql.org/docs/current/interactive/sql-createtable.html) matching the structure of the CSV file (possibly using an existing table as a template for convenience) and use [**`COPY`**](http://www.postgresql.org/docs/current/interactive/sql-copy.html):
### Bulk load
```
CREATE TEMP TABLE tmp(email text);
COPY tmp FROM 'path/to/file.csv';
ANALYZE tmp; -- do that for bigger tables!
```
I am *assuming* emails in the CSV are unique, you did not specify. If they are not, *make* them unique:
```
CREATE TEMP TABLE tmp0 AS
SELECT DISTINCT email
FROM tmp
ORDER BY email; -- ORDER BY cheap in combination with DISTINCT ..
-- .. may or may not improve performance additionally.
DROP TABLE tmp;
ALTER TABLE tmp0 RENAME TO tmp;
```
### Index
For your particular case a **unique** index on email is in order.
It is much more efficient to create the index *after* loading and sanitizing the data. This way you also prevent `COPY` from bailing out with a unique violation if there should be dupes:
```
CREATE UNIQUE INDEX tmp_email_idx ON tmp (email);
```
On second thought, if all you do is update the big table, you *don't* need an index on the temporary table at all. It will be read sequentially.
> Yes DB table is indexed using primary key.
The only relevant index in this case:
```
CREATE INDEX tbl_email_idx ON tbl (email);
```
Make that `CREATE UNIQUE INDEX ...` if possible.
### Update
To update your table as detailed in your later comment:
```
UPDATE tbl t
SET ...
FROM tmp
WHERE t.email = tmp.email;
```
All of this can easily be wrapped into a plpgsql or sql function.
Note that `COPY` requires dynamic SQL with [`EXECUTE`](http://www.postgresql.org/docs/current/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN) in a plpgsql function if you want to parameterize the file name.
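For illustration, a minimal sketch of such a function (the function name is hypothetical; `format()` with `%L` quotes the file path as a literal, since COPY cannot take the path as a bind parameter):

```sql
CREATE OR REPLACE FUNCTION load_emails(_path text)
  RETURNS void
  LANGUAGE plpgsql AS
$func$
BEGIN
   CREATE TEMP TABLE IF NOT EXISTS tmp(email text);
   -- build the COPY command dynamically to inject the file path
   EXECUTE format('COPY tmp FROM %L', _path);
   ANALYZE tmp;
END
$func$;
```

Called as: `SELECT load_emails('path/to/file.csv');`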
Temporary tables are dropped at the end of the session automatically by default.
Related answer:
[How to bulk insert only new rows in PostreSQL](https://stackoverflow.com/questions/15834569/how-to-bulk-insert-only-new-rows-in-postresql/15834758#15834758) | How to get fast result from a SQL query | [
"",
"sql",
"database",
"performance",
"postgresql",
"csv",
""
] |
I currently have two queries that were handed to me. The first searches for contacts in our system within a 50 mile radius of some address. The second query does the same thing except it searches a 100 mile radius.
What I need to do is modify the second query so that it excludes the results from the first query. If you can visualize it we're basically creating a doughnut shaped area that is 50 miles thick.
Here is the first query (50 mile radius of LA):
```
SELECT * FROM
(
SELECT ROW_NUMBER() OVER (order by t0.name ASC, t0.accountid) AS RowNumber, t0.accountid as pkt0
, t0.name as cn
, t0.name as c1
, t1.name as c2
, t1.homephone as c3
, t1.mobilephone as c4
, t1.officephone as c5
, t1.contactid as pkt1
FROM [account] as t0
Left Join [contact] as t1
ON t0.primarycontactid = t1.contactid
WHERE (((
((t0.shippingaddress not like '') AND (t0.shippingaddresslatitude >= 33.6907920399124 ) AND (t0.shippingaddresslatitude <= 34.4136759600876 ) AND (t0.shippingaddresslongitude >= -118.679928573928 ) AND (t0.shippingaddresslongitude <= -117.807441426072 ))
)) AND (t0.deleted = 0))
) _tmpInlineView
WHERE RowNumber > 0
```
Here is the second query (100 mile radius of LA):
```
SELECT * FROM
(
SELECT ROW_NUMBER() OVER (order by t0.name ASC, t0.accountid) AS RowNumber, t0.accountid as pkt0
, t0.name as cn
, t0.name as c1
, t1.name as c2
, t1.homephone as c3
, t1.mobilephone as c4
, t1.officephone as c5
, t1.contactid as pkt1
FROM [account] as t0
Left Join [contact] as t1
ON t0.primarycontactid = t1.contactid
WHERE (((
((t0.shippingaddress not like '') AND (t0.shippingaddresslatitude >= 33.3293500798248 ) AND (t0.shippingaddresslatitude <= 34.7751179201752 ) AND (t0.shippingaddresslongitude >= -119.11615629035 ) AND (t0.shippingaddresslongitude <= -117.37121370965 ))
)) AND (t0.deleted = 0))
) _tmpInlineView
WHERE RowNumber > 0
```
The second query is correct except it must exclude the results of the first. I'm sure it's probably simple, but I haven't manually written any SQL in years. | It is more about subtracting surfaces given latitude and longitude.
**EDIT:** It's not just a per-axis subtraction: a point in the ring can be inside the inner latitude band yet outside the inner longitude band (and vice versa), so the 100-mile box must be combined with a single `AND NOT (...)` wrapped around the whole 50-mile box.
Try with something like:
```
SELECT * FROM
(
SELECT ROW_NUMBER() OVER (order by t0.name ASC, t0.accountid) AS RowNumber, t0.accountid as pkt0
, t0.name as cn
, t0.name as c1
, t1.name as c2
, t1.homephone as c3
, t1.mobilephone as c4
, t1.officephone as c5
, t1.contactid as pkt1
FROM [account] as t0
Left Join [contact] as t1
ON t0.primarycontactid = t1.contactid
       WHERE
       /* inside the 100-mile box */
       (t0.shippingaddresslatitude >= 33.3293500798248 AND t0.shippingaddresslatitude <= 34.7751179201752)
       AND (t0.shippingaddresslongitude >= -119.11615629035 AND t0.shippingaddresslongitude <= -117.37121370965)
       /* but not inside the 50-mile box */
       AND NOT (t0.shippingaddresslatitude >= 33.6907920399124 AND t0.shippingaddresslatitude <= 34.4136759600876
            AND t0.shippingaddresslongitude >= -118.679928573928 AND t0.shippingaddresslongitude <= -117.807441426072)
       AND (t0.deleted = 0)
) _tmpInlineView
WHERE RowNumber > 0
``` | As long as the two queries return the same number of columns (with compatible types), use EXCEPT.
```
Select column from table1
Except
Select column from table2
```
<http://technet.microsoft.com/en-us/library/ms188055.aspx> | How do you query for something in SQL and exclude the results from a sub query? | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"latitude-longitude",
""
] |
What is the difference between the IS NULL checks in MySQL and MS SQL?
(@VendorID IS NULL OR @VendorID = Contract.VendorID) is the MS SQL statement. If I want this in MySQL, do I need to change the syntax? And one more thing: what is IFNULL? | You can use [`IS NULL`](http://dev.mysql.com/doc/refman/5.0/en/working-with-null.html) in `MySQL`.
[`IFNULL`](http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull) is a function that will return another value if the field is `NULL`. It is MySQL's equivalent of SQL Server's `ISNULL`.
For example:
```
IFNULL(VendorId, 0)
```
Will return 0 if the VendorId is null
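A minimal runnable illustration (SQLite via Python's `sqlite3`, which also ships an `IFNULL` function; the `contract` table is hypothetical):

```python
import sqlite3

# IFNULL substitutes a fallback value whenever the column is NULL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contract(VendorId INTEGER)")
con.executemany("INSERT INTO contract VALUES (?)", [(7,), (None,)])
rows = con.execute(
    "SELECT IFNULL(VendorId, 0) FROM contract ORDER BY rowid"
).fetchall()
print(rows)  # [(7,), (0,)]
```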
You can also use [`COALESCE`](http://www.w3resource.com/mysql/comparision-functions-and-operators/coalesce-function.php) which will return the first non `NULL` value:
```
SELECT COALESCE(Field1, Field2, Field3)
FROM YourTable
``` | No there is no difference at all, this will work fine in mysql query window as well. | Is there any difference b/w IS NULL in MySQL and MS SQL | [
"",
"mysql",
"sql",
"sql-server",
""
] |
I'm having a bit of confusion about using a clause like "where 0=0" in Oracle procedures. Can someone please clarify it? I understand we use it for dynamic selection, but why do we add this clause when we append the actual conditions to the query anyway? Will this where condition make any difference to the result set?
I went through
[How can I Select all rows where column contain any words of a string](https://stackoverflow.com/questions/10522345/how-can-i-select-all-rows-where-column-contain-any-words-of-a-string)?
but I didn't exactly understand the reason for using `where 0=0`.
Can someone please give me the proper reason for using such a condition? | When using dynamic SQL, extra clauses may need to be added, depending upon certain conditions being met. The 1=1 clause has no meaning in the query (other than always being true); its only use is to reduce the complexity of the code used to generate the query in the first place.
E.g. This pseudo code
```
DECLARE
v_text VARCHAR2(2000) := 'SELECT * FROM table WHERE 1=1 ';
BEGIN
IF condition_a = met THEN
v_text := v_text ||' AND column_1 = ''A'' ';
END IF;
IF condition_b = also_met THEN
v_text := v_text ||' AND column_2 = ''B'' ';
END IF;
EXECUTE IMMEDIATE v_text;
END;
```
is simpler than the pseudo code below, and as more clauses were added, it would only get messier.
```
DECLARE
v_text VARCHAR2(2000) := 'SELECT * FROM table ';
BEGIN
IF condition_a = met THEN
v_text := v_text ||' WHERE column_1 = ''A'' ';
END IF;
IF condition_b = also_met AND
condition_a != met THEN
v_text := v_text ||' WHERE column_2 = ''B'' ';
ELSIF condition_b = also_met AND
condition_a = met THEN
v_text := v_text ||' AND column_2 = ''B'' ';
END IF;
EXECUTE IMMEDIATE v_text;
END;
``` | We use `0 = 0` or, usually, `1 = 1` as a *stub*:
```
select *
from My_Table
where 1 = 1
```
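The same stub idea is common in application code that assembles a filter list dynamically; a hedged Python sketch (table and column names are illustrative):

```python
# Building a query around a "WHERE 1=1" stub: every optional filter is
# appended uniformly as "AND ...", with no special case for the first one.
def build_query(min_value=None, category=None):
    sql = "SELECT * FROM My_Table WHERE 1=1"
    params = []
    if min_value is not None:
        sql += " AND Field1 > ?"
        params.append(min_value)
    if category is not None:
        sql += " AND Field2 = ?"
        params.append(category)
    return sql, params

print(build_query(min_value=123))
# ('SELECT * FROM My_Table WHERE 1=1 AND Field1 > ?', [123])
```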
So when you write filters you can do it by adding/commenting out *single lines*:
```
-- 3 filters added
select *
from My_Table
where 1 = 1
and (Field1 > 123) -- 1st
and (Field2 = 456) -- 2nd
and (Field3 like '%test%') -- 3d
```
Next version, say, will be with two filters removed:
```
-- 3 filters added, 2 (1st and 3d) removed
select *
from My_Table
where 1 = 1
-- and (Field1 > 123) -- <- all you need is to comment out the corresponding lines
and (Field2 = 456)
-- and (Field3 like '%test%')
```
Now let's restore the 3d filter in very easy way:
```
-- 3 filters added, 2 (1st and 3d) removed, then 3d is restored
select *
from My_Table
where 1 = 1
-- and (Field1 > 123)
and (Field2 = 456)
and (Field3 like '%test%') -- <- just uncomment
``` | What's the exact meaning of having a condition like where 0=0? | [
"",
"sql",
"oracle",
"procedure",
""
] |
I need to delete the records in multiple tables having the same foreign key ID with a single SQL query. I have given an example below,
```
Table Name : basetable
id | column1
---|---------
1 | BCol1
2 | BCol2
Table Name : Reftable1
id | BaseTableID | Column1
---|-------------|--------
1 | 1 | RT1Col1
2 | 1 | RT1Col2
Table Name : Reftable2
id | BaseTableID | Column1
---|-------------|--------
1 | 2 | RT2Col1
2 | 1 | RT2Col2
Table Name : Reftable3
id | BaseTableID | Column1
---|-------------|--------
1 | 1 | RT3Col1
2 | 2 | RT3Col2
```
In the above three reference tables I want to delete the records having BaseTableID=1 with a single MySQL query. If there is any possibility, please share your idea. | I think this will do it:
```
DELETE r1, r2, r3
FROM Reftable1 r1
JOIN Reftable2 r2
JOIN Reftable3 r3
WHERE r1.BaseTableID = 1
AND r2.BaseTableID = 1
AND r3.BaseTableID = 1
```
[SQLFIDDLE](http://www.sqlfiddle.com/#!2/200d1/2)
If some tables may not have matching rows, this LEFT JOIN should do it:
```
DELETE r1, r2, r3
FROM basetable b
LEFT JOIN Reftable1 r1 ON b.id = r1.BaseTableID
LEFT JOIN Reftable2 r2 ON b.id = r2.BaseTableID
LEFT JOIN Reftable3 r3 ON b.id = r3.BaseTableID
WHERE b.id = 1
```
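Worth noting: the multi-table `DELETE r1, r2, r3 ...` form is MySQL-specific. A portable fallback (sketched here with Python's `sqlite3` and the same made-up tables) is one DELETE per table inside a single transaction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Reftable1(id INTEGER, BaseTableID INTEGER);
    CREATE TABLE Reftable2(id INTEGER, BaseTableID INTEGER);
    INSERT INTO Reftable1 VALUES (1, 1), (2, 1);
    INSERT INTO Reftable2 VALUES (1, 2), (2, 1);
""")
with con:  # commits the deletes together, or rolls all of them back on error
    for table in ("Reftable1", "Reftable2"):
        con.execute(f"DELETE FROM {table} WHERE BaseTableID = ?", (1,))
counts = [con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("Reftable1", "Reftable2")]
print(counts)  # [0, 1]
```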
[SQLFIDDLE](http://www.sqlfiddle.com/#!2/329b9/1) | You will have to specify `on cascade delete` when you create the table.
```
baseTable id references BaseTable(id) on delete cascade
```
For more info please refer <http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html> | Is it possible to delete the records having same foreign key in different table with single query? | [
"",
"mysql",
"sql",
""
] |
The problem here is that the first half of the output is NOT returning the data for WWDTA, but there are definitely matches. All we are doing in the second part of this query is taking the sales rep ID and getting the sales rep name for display in the report.
```
CREATE VIEW astccdta.acwocmpk AS (
SELECT
ALL T01.OHORD#, T01.OHSLR#,T01.OHORDT, T01.OHORDD,
T01.OHTTN$, ' ' as WWDTA
FROM ASTDTA.OEORHDOH T01,
ASTDTA.OETRANOT T02
WHERE T01.OHORD# = T02.OTORD#
AND( T02.OTTRNC = 'WOC')
and T01.OHORDD > 20120101
UNION ALL
SELECT
ALL T01.OHORD#, T01.OHSLR#, T01.OHORDT, T01.OHORDD,
T01.OHTTN$,
SUBSTR(RFDTA,1,20) AS WWDTA
FROM ASTCCDTA.WOCREPS T01,
ASTCCDTA.REPREF1 T02
WHERE T01.OHSLR# = T02.RFSLC)
``` | What UNION does is to take two separate SELECT statements and combine them in one result set, one after the other. Say your first SELECT brings back:
```
1 A 2013-08-01 100.00 ''
2 B 2013-08-02 200.00 ''
3 A 2013-08-03 300.00 ''
```
and your second SELECT brings back:
```
1 A 2013-08-01 100.00 'John Smith'
2 B 2013-08-02 200.00 'Jane Jones'
3 A 2013-08-03 300.00 'John Smith'
```
When you UNION them you get:
```
1 A 2013-08-01 100.00 ''
2 B 2013-08-02 200.00 ''
3 A 2013-08-03 300.00 ''
1 A 2013-08-01 100.00 'John Smith'
2 B 2013-08-02 200.00 'Jane Jones'
3 A 2013-08-03 300.00 'John Smith'
```
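This stacking behavior is easy to reproduce on a toy dataset (SQLite via Python's `sqlite3`; table names invented):

```python
import sqlite3

# UNION ALL appends the second result set after the first; it does not
# merge columns side by side.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x TEXT);
    CREATE TABLE b(x TEXT);
    INSERT INTO a VALUES ('from-a');
    INSERT INTO b VALUES ('from-b');
""")
rows = con.execute("SELECT x FROM a UNION ALL SELECT x FROM b").fetchall()
print(rows)  # [('from-a',), ('from-b',)]
```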
I think that you want to alter the first SELECT to JOIN to the sales rep name table and drop the UNION and second SELECT altogether:
```
SELECT ALL
T01.OHORD#, T01.OHSLR#,T01.OHORDT, T01.OHORDD, T01.OHTTN$, SUBSTR(RFDTA,1,20) AS WWDTA
FROM ASTDTA.OEORHDOH T01,
ASTDTA.OETRANOT T02,
ASTCCDTA.REPREF1 T03
WHERE T01.OHORD# = T02.OTORD#
AND (T02.OTTRNC = 'WOC')
and T01.OHORDD > 20120101
and T01.OHSLR# = T03.RFSLC
``` | The problem is the 2nd SELECT has an extra column at the end.
The clue the system gave you was that the number of columns was inconsistent, between the two sides of the UNION.
Add an extra, empty char(20) column at the end of the first SELECT list to match it, then you should be OK there. But check that each column in the first SELECT matches the corresponding column in the 2nd, **and** that they appear in the same order.
As others pointed out, you probably want UNION ALL. | Have to combine 2 queries | [
"",
"sql",
"crystal-reports",
"ibm-midrange",
"db2-400",
""
] |
How can I write this using a JOIN?
Because this query is very slow.
```
SELECT *
FROM table1
WHERE ID IN (SELECT ID
FROM table1
GROUP BY ID
HAVING COUNT(*) = 2
AND MAX(awaiting) = 1)
AND awaiting = 1
```
So, how can I write it? | ```
SELECT t1.*
FROM table1 AS t1
INNER JOIN
(
SELECT ID
FROM table1
GROUP BY ID
HAVING COUNT(*) = 2
AND MAX(awaiting) = 1
) AS t2 ON t1.ID = t2.ID
WHERE t1.awaiting = 1;
``` | Here is the `join` version:
```
SELECT t1.*
FROM table1 t1 join
(SELECT ID
FROM table1
GROUP BY ID
HAVING COUNT(*) = 2 AND MAX(awaiting) = 1
) tsum
on t1.id = tsum.id
WHERE t1.awaiting = 1
``` | Where IN to JOIN how? | [
"",
"mysql",
"sql",
""
] |
I am building a Twitter-like app. There is a Feed in which I want to only show posts of Users who I follow.
I tried everything with joins, but nothing seems to work.
I have 3 tables: `Users`, `Followers`, `Shares`
The Tables look like this:
**Users**: `id`
**Followers**: `user_id`, `follower_id`
**Shares**: `user_id`
What I need to get is "ALL Shares WHERE share.user\_id = followers.follower\_id"
"ANDWHERE followers.user\_id = users.id"
Assume, the users.id is 3, I tried this:
```
$shares = DB::table('shares')
->leftjoin('followers', 'shares.user_id', '=', 'followers.follower_id')
->leftjoin('users', 'followers.user_id', '=', 'users.id')
->where('users.id', 3)
->where('shares.user_id', 'followers.follower_id')
->get();
```
But it doesn't work.
Any help is appreciated :) | I believe your join is wrong:
```
$shares = DB::table('shares')
->join('users', 'users.id', '=', 'shares.user_id')
->join('followers', 'followers.user_id', '=', 'users.id')
->where('followers.follower_id', '=', 3)
->get();
```
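Under the hood that builder chain issues an ordinary two-join SELECT; a sketch of the equivalent SQL run against an in-memory SQLite database (table contents invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users(id INTEGER);
    CREATE TABLE follows(user_id INTEGER, follower_id INTEGER);
    CREATE TABLE shares(user_id INTEGER, body TEXT);
    INSERT INTO users VALUES (1), (2), (3);
    INSERT INTO follows VALUES (1, 3), (2, 3);  -- user 3 follows users 1 and 2
    INSERT INTO shares VALUES (1, 's1'), (2, 's2'), (3, 's3');
""")
rows = con.execute("""
    SELECT s.body
    FROM shares s
    JOIN users u ON u.id = s.user_id
    JOIN follows f ON f.user_id = u.id
    WHERE f.follower_id = ?
    ORDER BY s.body
""", (3,)).fetchall()
print(rows)  # [('s1',), ('s2',)] -- only posts by users that user 3 follows
```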
I also suggest naming your table `follows` instead; it feels a bit more natural to say `user has many followers through follows` and `user has many followees through follows`.
**Example**
```
$shares = DB::table('shares')
->join('users', 'users.id', '=', 'shares.user_id')
->join('follows', 'follows.user_id', '=', 'users.id')
->where('follows.follower_id', '=', 3)
->get();
```
## Model approach
I didn't realize you were using `DB::` queries and not models, so I'm fixing the answer and providing a lot more clarity. I suggest you use models; they are a lot easier for those beginning with the framework and especially with SQL.
**Example of models:**
```
class User extends Model {
public function shares() {
return $this->hasMany('Share');
}
public function followers() {
return $this->belongsToMany('User', 'follows', 'user_id', 'follower_id');
}
public function followees() {
return $this->belongsToMany('User', 'follows', 'follower_id', 'user_id');
}
}
class Share extends Model {
public function user() {
return $this->belongsTo('User');
}
}
```
**Example of Model usage:**
```
$my = User::find('my_id');
// Retrieves all shares by users that I follow
// eager loading the "owner" of the share
$shares = Share::with('user')
->join('follows', 'follows.user_id', '=', 'shares.user_id')
->where('follows.follower_id', '=', $my->id)
->get('shares.*'); // Notice the shares.* here
// prints the username of the person who shared something
foreach ($shares as $share) {
echo $share->user->username;
}
// Retrieves all users I'm following
$my->followees;
// Retrieves all users that follows me
$my->followers;
``` | In terms of general MySQL syntax, this is best written:
```
SELECT * FROM USER a JOIN FOLLOWERS b ON (a.id = b.user_id) JOIN SHARES c on (b.follower_id = c.user_id) WHERE a.id = 3
```
will return a data set of all followers and their respective shares.
I believe you would want the following in Laravel
```
DB::table('USER')
->join('FOLLOWERS', 'USER.id', '=', 'FOLLOWERS.user_id')
->join('SHARES', 'FOLLOWERS.follower_id', '=', 'SHARES.user_id')
->where('USER.id', 3)
->get();
``` | Laravel join with 3 Tables | [
"",
"mysql",
"sql",
"database",
"join",
"laravel",
""
] |
my tables look like this:
```
tags: id, name, description
tag_relations: tag, item
```
`item` references the id of another table and `tag` references the id of the `tags` table.
So I'm trying to select the most used tags:
```
SELECT t.*, COUNT(r.item) AS item_count
FROM tag_relations as r
INNER JOIN tags as t ON r.tag = t.id
GROUP BY t.id
ORDER BY item_count
```
which works, but if I add
```
WHERE t.id = ?
```
the item\_count is always 1...
Is there any way I could still have the global tag count with a select statement that selects only 1 tag or a specific set of tags? | Sql fiddle at
<http://www.sqlfiddle.com/#!2/ba97d/1>
```
SELECT name,count(item) as counter_item
FROM tag_relations
INNER JOIN tags ON
tag_relations.tag = tags.id
GROUP BY name
order by counter_item DESC
```
the line
```
where tags.id=1
```
can be added if needed | I don't have access to MySQL, but I do have access to Microsoft SQL Server. I realize your tags specify mysql. Even so, the query you presented fails in SQL Server with the error
```
Msg 8120, Level 16, State 1, Line 1
Column 'tags.name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
```
... because the select t.\* is not included in the group by clause.
Anyways, to address your specific question you can derive a global number while still selecting a specific record using cross join...
```
select
t.*
, vTagRelations.GlobalCountOfTagRelations
, vTags.GlobalCountOfTags
from
tags t
cross join (select
count(tag_relations.tag) as GlobalCountOfTagRelations
from
tag_relations) vTagRelations
cross join (select
count(tags.id) as GlobalCountOfTags
from
tags) vTags
where
t.id = 2
``` | Select records that match a field, but order them by a different field | [
"",
"mysql",
"sql",
"sqlite",
"select",
"count",
""
] |
I am asked to extract information about employees who were hired on a Saturday. I tried the code given below, but even though there is matching data, it shows no results.
```
select first_name,last_name
from employees
where upper(to_char(hire_date,'day')) like 'SATURDAY'
```
But when I use the following code it works like a champ. This code was given to me by my boss.
```
select first_name,last_name
from employees
WHERE TO_CHAR(HIRE_DATE,'fmDay') = 'Saturday'
```
I want to know why my code doesn't work. Please explain. I am a newbie in Oracle and working hard to find the difference between one function and another. | `TO_CHAR(hire_date,'day')` returns the day name, blank-padded on the right to the length of the longest day name of the week. Since the longest happens to be `Wednesday` (9 characters), each returned day name shorter than 9 characters will be blank-padded.
`like 'SATURDAY'` condition without wildcard characters is equivalent to `= 'SATURDAY'`.
So, `to_char(hire_date,'day')` returns `SATURDAY_` (where `_` is an extra space character), which is not equal to `SATURDAY`, thus your query returns no rows.
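The padding pitfall can be mimicked outside the database with plain Python (string lengths only; this is an analogy, not Oracle itself):

```python
# TO_CHAR(..., 'DAY') pads each name to the longest weekday name
# ('WEDNESDAY', 9 characters), which is what breaks the equality test.
padded = "SATURDAY".ljust(9)          # -> 'SATURDAY '
print(padded == "SATURDAY")           # False: the trailing space spoils it
print(padded.rstrip() == "SATURDAY")  # True: roughly what 'fm' achieves
```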
Use the `FM` format model modifier (`TO_CHAR(HIRE_DATE,'fmDay')`) to get rid of those extra spaces. | The difference between 'day' and 'fmDay' is that the 'fm' prefix removes the padding spaces.
Try this to get the result,
```
select first_name,last_name
from employees
where REPLACE(upper(to_char(hire_date,'day')),' ','') like 'SATURDAY'
``` | Extracting day in Date | [
"",
"sql",
"oracle",
""
] |
I want to create a database which does not exist through JDBC. Unlike MySQL, PostgreSQL does not support `create if not exists` syntax. What is the best way to accomplish this?
The application does not know if the database exists or not. It should check and if the database exists it should be used. So it makes sense to connect to the desired database and if connection fails due to non-existence of database it should create new database (by connecting to the default `postgres` database). I checked the error code returned by Postgres but I could not find any relevant code that species the same.
Another method to achieve this would be to connect to the `postgres` database and check if the desired database exists and take action accordingly. The second one is a bit tedious to work out.
Is there any way to achieve this functionality in Postgres? | ### Restrictions
You can ask the system catalog [`pg_database`](https://www.postgresql.org/docs/current/catalog-pg-database.html) - accessible from any database in the same database cluster. The tricky part is that [`CREATE DATABASE`](https://www.postgresql.org/docs/current/sql-createdatabase.html) can only be executed as a single statement. [The manual:](https://www.postgresql.org/docs/current/sql-createdatabase.html#id-1.9.3.61.7)
> `CREATE DATABASE` cannot be executed inside a transaction block.
So it cannot be run directly inside a function or [`DO`](https://www.postgresql.org/docs/current/sql-do.html) statement, where it would be inside a transaction block implicitly. SQL procedures, introduced with Postgres 11, [cannot help with this either](https://dba.stackexchange.com/a/194811/3684).
### Workaround from within psql
You can work around it from within psql by executing the DDL statement conditionally:
```
SELECT 'CREATE DATABASE mydb'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydb')\gexec
```
[The manual:](https://www.postgresql.org/docs/current/app-psql.html)
> `\gexec`
>
> Sends the current query buffer to the server, then treats each column of each row of the query's output (if any) as a SQL statement to be executed.
### Workaround from the shell
With `\gexec` you only need to call psql *once*:
```
echo "SELECT 'CREATE DATABASE mydb' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydb')\gexec" | psql
```
You may need more psql options for your connection; role, port, password, ... See:
* [Run batch file with psql command without password](https://stackoverflow.com/questions/15359348/run-batch-file-with-psql-command-without-password/15593100#15593100)
The same cannot be called with `psql -c "SELECT ...\gexec"` since `\gexec` is a psql meta-command and the `-c` option expects a single *command* for which [the manual states:](https://www.postgresql.org/docs/current/app-psql.html)
> ***`command`*** must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands within a `-c` option.
### Workaround from within Postgres transaction
You could use a `dblink` connection back to the current database, which runs outside of the transaction block. Effects can therefore also not be rolled back.
Install the additional module dblink for this (once per database):
* [How to use (install) dblink in PostgreSQL?](https://stackoverflow.com/questions/3862648/how-to-use-install-dblink-in-postgresql/13264961#13264961)
Then:
```
DO
$do$
BEGIN
IF EXISTS (SELECT FROM pg_database WHERE datname = 'mydb') THEN
RAISE NOTICE 'Database already exists'; -- optional
ELSE
PERFORM dblink_exec('dbname=' || current_database() -- current db
, 'CREATE DATABASE mydb');
END IF;
END
$do$;
```
Again, you may need more psql options for the connection. See Ortwin's added answer:
* [Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL?](https://stackoverflow.com/questions/18389124/simulate-create-database-if-not-exists-for-postgresql/36218838#36218838)
Detailed explanation for dblink:
* [How do I do large non-blocking updates in PostgreSQL?](https://stackoverflow.com/questions/1113277/how-do-i-do-large-non-blocking-updates-in-postgresql)
You can make this a function for repeated use. | Another alternative, in case you want a shell script which creates the database if it does not exist and otherwise just keeps it as it is:
```
psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'my_db'" | grep -q 1 || psql -U postgres -c "CREATE DATABASE my_db"
```
I found this to be helpful in devops provisioning scripts, which you might want to run multiple times over the same instance.
For those of you who would like an explanation:
```
-c = run command in database session, command is given in string
-t = skip header and footer
-q = silent mode for grep
|| = logical OR, if grep fails to find match run the subsequent command
``` | Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL? | [
"",
"sql",
"database",
"postgresql",
"jdbc",
"ddl",
""
] |
I am trying to write a procedure to print the last day of the month.
There is no compilation error, but it's not printing anything. | Try this,
```
CREATE OR REPLACE PROCEDURE P2 AS
to_d date;
BEGIN
SELECT REP_DATE
INTO to_d
FROM cmpss_report_calender
WHERE REPID='F2';
IF LAST_DAY(SYSDATE) =LAST_DAY(to_d)
THEN
dbms_output.put_line('');
ELSE
dbms_output.put_line(to_d);
end if;
END P2;
``` | ```
CREATE OR REPLACE PROCEDURE P2 AS
to_d date;
BEGIN
SELECT REP_DATE
INTO to_d
FROM cmpss_report_calender
WHERE REPID='F2';
IF LAST_DAY(SYSDATE) =LAST_DAY(to_d)
THEN
dbms_output.put_line('');
ELSE
dbms_output.put_line(to_d);
end if;
END P2;
```
If you are using sql plus, do as
```
SET SERVEROUTPUT ON;
EXEC P2;
``` | sql procedure to print last day of month | [
"",
"sql",
"oracle",
""
] |
I have an application which will be accessing SQL Server to return data which has been filtered by selections from the application, like any common faceted search. I did see some out-of-the-box solutions, but these are expensive and I prefer building something custom; I just don't know where to start.
The database structure is like this:

The data from the PRODUCT table would be searched by tags from the TAG table. Values which would be found in the TAG table would be something like this:
```
ID NAME
----------------------
1 Blue
2 Green
3 Small
4 Large
5 Red
```
They would be related to products through the ProductTag table.
I would need to return two groups of data from this setup:
1. The Products that are only related to the Tags selected, whether single or multiple
2. The remaining tags that are also available to select for the products which have already been refined by one or more selected tags.
I would like this to be all within SQL Server if possible, as 2 separate stored procedures.
Most websites have this feature built in these days, e.g.: <http://www.gnc.com/family/index.jsp?categoryId=2108294&cp=3593186.3593187> (they've called it 'Narrow By')
I have been searching for a while how to do this, and I'm taking a wild guess that if a stored procedure has to be created in this nature, that there would need to be 1 param that accepts CSV values, like this:
```
[dbo].[GetFacetedProducts] @Tags_Selected = '1,3,5'
[dbo].[GetFacetedTags] @Tags_Selected = '1,3,5'
```
So with this architecture, does anyone know what types of queries need to be written for these stored procedures, or is the architecture flawed in any way? Has anyone created a faceted search before that was like this? If so, what types of queries would be needed to make something like this? I guess I'm just having trouble wrapping my head around it, and there isn't much out there that shows someone how to make something like this. | There are other places where you can get examples of turning a CSV parameter into a table variable. Assuming you have done that part, your query boils down to the following:
GetFacetedProducts:
Find Product records where all tags passed in are assigned to each product.
If you wrote it by hand you could end up with:
```
SELECT P.*
FROM Product P
INNER JOIN ProductTag PT1 ON PT1.ProductID = P.ID AND PT1.TagID = 1
INNER JOIN ProductTag PT2 ON PT2.ProductID = P.ID AND PT2.TagID = 3
INNER JOIN ProductTag PT3 ON PT3.ProductID = P.ID AND PT3.TagID = 5
```
While this does select only the products that have those tags, it is not going to work with a dynamic list. In the past some people have built up the SQL and executed it dynamically; don't do that.
Instead, let's assume that the same tag can't be applied to a product twice, so we could change our question to:
Find me products where the number of tags matching (dynamic list) is equal to the number of tags in (dynamic list)
```
DECLARE @selectedTags TABLE (ID int)
DECLARE @tagCount int
INSERT INTO @selectedTags VALUES (1)
INSERT INTO @selectedTags VALUES (3)
INSERT INTO @selectedTags VALUES (5)
SELECT @tagCount = COUNT(*) FROM @selectedTags
SELECT
P.ID
FROM Product P
JOIN ProductTag PT
ON PT.ProductID = P.ID
JOIN @selectedTags T
ON T.ID = PT.TagID
GROUP BY
P.ID,
P.Name
HAVING COUNT(PT.TagID) = @tagCount
```
This returns just the IDs of products that match all your tags; you could then join this back to the products table if you want more than just an ID. Otherwise you're done.
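The counting trick can be reduced to a runnable sketch (SQLite via Python; parameter placeholders stand in for the table variable, and the rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ProductTag(ProductID INTEGER, TagID INTEGER);
    INSERT INTO ProductTag VALUES
        (10, 1), (10, 3), (10, 5),   -- product 10 carries all three tags
        (20, 1), (20, 3),            -- product 20 is missing tag 5
        (30, 5);
""")
selected = (1, 3, 5)
placeholders = ",".join("?" * len(selected))
rows = con.execute(f"""
    SELECT ProductID
    FROM ProductTag
    WHERE TagID IN ({placeholders})
    GROUP BY ProductID
    HAVING COUNT(TagID) = ?
""", selected + (len(selected),)).fetchall()
print(rows)  # [(10,)] -- only the product matching every selected tag
```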
As for your second query, once you have the product IDs that match, you want a list of all tags for those product IDs that aren't in your list:
```
SELECT DISTINCT
PT2.TagID
FROM ProductTag PT2
WHERE PT2.ProductID IN (
SELECT
P.ID
FROM Product P
JOIN ProductTag PT
ON PT.ProductID = P.ID
JOIN @selectedTags T
ON T.ID = PT.TagID
GROUP BY
P.ID,
P.Name
HAVING COUNT(PT.TagID) = @tagCount
)
AND PT2.TagID NOT IN (SELECT ID FROM @selectedTags)
``` | An RDBMS is the wrong tool for the job of faceted searching. Faceted searching is a multidimensional search, which is difficult to express in the set-based SQL language. Using a data-cube or the like might give you some of the desired functionality, but it would be quite a bit of work to build.
When we were faced with similar requirements we ultimately decided to utilize the Apache Solr search engine, which supports faceting as well as many other search-oriented functions and features. | How to create a faceted search with SQL Server | [
"",
"sql",
"sql-server-2008",
"faceted-search",
""
] |
My query looks like this:
```
SELECT o.CustomerID, null emptyColumn, o.ShipFirstName, o.ShipCountry, c.EmailAddress
FROM Orders o, Customers c
WHERE o.CustomerID = c.CustomerID
```
And the results are like,
```
1,,John,United States,john@example.com
2,,Peter,Canada,peter@example.com
```
But I need to change "United States" to "US" and "Canada" to "Ca". How can I do this? | You can use `case`,
```
SELECT o.CustomerID, null emptyColumn, o.ShipFirstName,
case when o.ShipCountry = 'United States' then 'US'
when o.ShipCountry = 'Canada' then 'CA'
else o.ShipCountry end as ShipCountry,
c.EmailAddress
FROM Orders o, Customers c
WHERE o.CustomerID = c.CustomerID
``` | If it's for a large range of countries, then do a search for ISO country codes ([for example](http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm)) - import that data into a new table and then join to that table.
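To sanity-check the CASE mapping (SQLite via Python; the `orders_demo` table and its rows are made up, and an `ELSE` branch is included here so unmapped countries pass through unchanged):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders_demo(ShipCountry TEXT)")
con.executemany("INSERT INTO orders_demo VALUES (?)",
                [("United States",), ("Canada",), ("France",)])
rows = con.execute("""
    SELECT CASE ShipCountry
             WHEN 'United States' THEN 'US'
             WHEN 'Canada' THEN 'CA'
             ELSE ShipCountry         -- keep anything unmapped as-is
           END
    FROM orders_demo ORDER BY rowid
""").fetchall()
print(rows)  # [('US',), ('CA',), ('France',)]
```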
Or put your selection of countries into a CTE and join to that, e.g.,
```
WITH Countries(ISO_Code, ISO_Name) AS
(
SELECT
*
FROM (VALUES ('AF', 'AFGHANISTAN')
,('AX', 'ÅLAND ISLANDS')
,('AL', 'ALBANIA')
,('DZ', 'ALGERIA')
,('AS', 'AMERICAN SAMOA')
,('AD', 'ANDORRA')
,('AO', 'ANGOLA')
,('AI', 'ANGUILLA')
,('AQ', 'ANTARCTICA')
,('AG', 'ANTIGUA AND BARBUDA')) nTab(nCol1, nCol2)
)
SELECT
*
FROM Countries c
JOIN ...
``` | SQL SELECT Replace country names with abbreviations | [
"",
"sql",
""
] |
I am trying to get the sum(DocBal) for each company after adjusting the DocBal. But when I run the query below, I am forced to include DocType and DocBal in the GROUP BY, which I don't want. How can I modify this query?
```
select
DocBal = CASE d.DocType WHEN 'AD' THEN -d.DocBal
ELSE d.DocBal END,
Sum(DocBal)
from Vendor v
inner join APDoc d
on v.VendId = d.VendId
where
d.PerPost = 201307
and d.DocType in ('VO','AP','AD')
and d.OpenDoc = 1
and Acct = 210110
group by CpnyID
``` | ```
select CpnyID, sum(DocBal) FROM
(select
DocBal = CASE d.DocType WHEN 'AD' THEN -d.DocBal
ELSE d.DocBal END,
CpnyID
from Vendor v
inner join APDoc d
on v.VendId = d.VendId
where
d.PerPost = 201307
and d.DocType in ('VO','AP','AD')
and d.OpenDoc = 1
and Acct = 210110) a
GROUP BY CpnyID
``` | You are not saying what SQL product (and which version) you are using. It is true that the derived table solution that others have suggested would work in any product, but in case you are working with SQL Server 2005 or later version, you could also use CROSS APPLY, like this:
```
SELECT
CpnyID, -- assuming you would like to know which SUM() belongs to which CpnyID
SUM(x.DocBal)
FROM Vendor AS v
INNER JOIN APDoc AS d ON v.VendId = d.VendId
CROSS APPLY (
SELECT DocBal = CASE d.DocType WHEN 'AD' THEN -d.DocBal ELSE d.DocBal END
) AS x
WHERE d.PerPost = 201307
AND d.DocType in ('VO','AP','AD')
AND d.OpenDoc = 1
AND Acct = 210110
GROUP BY
CpnyID
;
```
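For a quick sanity check of the derived-table approach shown earlier (SQLite via Python; the company IDs and balances are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE APDoc(CpnyID TEXT, DocType TEXT, DocBal REAL)")
con.executemany("INSERT INTO APDoc VALUES (?, ?, ?)",
                [("A", "VO", 100.0), ("A", "AD", 30.0), ("B", "AP", 50.0)])
rows = con.execute("""
    SELECT CpnyID, SUM(DocBal)
    FROM (SELECT CpnyID,
                 CASE DocType WHEN 'AD' THEN -DocBal ELSE DocBal END AS DocBal
          FROM APDoc) a
    GROUP BY CpnyID
    ORDER BY CpnyID
""").fetchall()
print(rows)  # [('A', 70.0), ('B', 50.0)] -- 'AD' balances are negated
```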
There's also a much simpler, in my opinion, solution: you could make the entire CASE expression the argument of SUM, like this:
```
SELECT
CpnyID,
SUM(CASE d.DocType WHEN 'AD' THEN -d.DocBal ELSE d.DocBal END)
FROM Vendor AS v
INNER JOIN APDoc AS d ON v.VendId = d.VendId
WHERE d.PerPost = 201307
AND d.DocType in ('VO','AP','AD')
AND d.OpenDoc = 1
AND Acct = 210110
GROUP BY
CpnyID
;
``` | How to aggregate the result of an expression? | [
"",
"sql",
"t-sql",
""
] |
I am able to `join` information from **2 tables** in order to output which users said which status.
It manages to get the users status from `tbl_status` and the username of who said it from `tbl_users`.
But now I would also like to add the **users photo** alongside that status update and output that information in the same query. Not surprisingly the user photos are contained in `tbl_photos`.
How can I add in this additional query into the existing out to output the results with the last query?
```
$result = mysql_query("SELECT tbl_status.id as statID, tbl_status.from_user as statFROM, tbl_status.status as statSTATUS, tbl_status.deleted as statDEL, tbl_status.date as statDATE, tbl_users.id as usrID, tbl_users.name as usrNAME FROM tbl_status INNER JOIN tbl_users ON tbl_status.from_user = tbl_users.id WHERE tbl_status.deleted = '0'");
``` | All you need to do is another `JOIN` on the `tbl_photos` table.
I'll have to make some assumptions about `tbl_photos`.
```
SELECT <all your existing columns>, COALESCE(tbl_photos.profile_filename, 'Default Filename') as ProfilePic
FROM tbl_status
INNER JOIN tbl_users ON tbl_status.from_user = tbl_users.id
LEFT JOIN tbl_photos ON tbl_photos.user_id = tbl_users.id
WHERE tbl_status.deleted = '0'
``` | I have added a left join in case the user has no photo (i.e., the photo is optional).
```
SELECT tbl_status.id AS statID ,
tbl_status.from_user AS statFROM ,
tbl_status.status AS statSTATUS ,
tbl_status.deleted AS statDEL ,
tbl_status.date AS statDATE ,
tbl_users.id AS usrID ,
tbl_users.name AS usrNAME,
p.photo
FROM tbl_status
INNER JOIN tbl_users ON tbl_status.from_user = tbl_users.id
Left JOIN tbl_photos p ON tbl_users.id = p.userId
WHERE tbl_status.deleted = '0'
```
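On toy data the LEFT JOIN behavior is easy to see (SQLite via Python; schema invented): users without a photo still appear, with a NULL that `COALESCE` can replace with a default:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tbl_users(id INTEGER, name TEXT);
    CREATE TABLE tbl_photos(userId INTEGER, photo TEXT);
    INSERT INTO tbl_users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO tbl_photos VALUES (1, 'ann.jpg');  -- bob has no photo
""")
rows = con.execute("""
    SELECT u.name, COALESCE(p.photo, 'default.jpg')
    FROM tbl_users u
    LEFT JOIN tbl_photos p ON u.id = p.userId
    ORDER BY u.id
""").fetchall()
print(rows)  # [('ann', 'ann.jpg'), ('bob', 'default.jpg')]
```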
However, if every user has a photo, then use...
```
SELECT tbl_status.id AS statID ,
tbl_status.from_user AS statFROM ,
tbl_status.status AS statSTATUS ,
tbl_status.deleted AS statDEL ,
tbl_status.date AS statDATE ,
tbl_users.id AS usrID ,
tbl_users.name AS usrNAME,
p.photo
FROM tbl_status
INNER JOIN tbl_users ON tbl_status.from_user = tbl_users.id
INNER JOIN tbl_photos p ON tbl_users.id = p.userId
WHERE tbl_status.deleted = '0'
```
You also didn't specify what the FK was, so I just made one up (`tbl_users.id = p.userId`); you will need to replace this, along with the `p.photo` field you are pulling from the `tbl_photos` table.
"",
"sql",
"join",
"left-join",
""
] |
In MySql you can see the table definition (columns with their data types etc) with `show create table table_name`.
Is there similar functionality for Oracle SQL?
```
SQL> desc emp
Name Null? Type
----------------------------------------- -------- ----------------------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)
```
If you really want a SQL statement, you can use the `dbms_metadata` package
```
1 select dbms_metadata.get_ddl( 'TABLE', 'EMP', 'SCOTT' )
2* from dual
SQL> /
DBMS_METADATA.GET_DDL('TABLE','EMP','SCOTT')
--------------------------------------------------------------------------------
CREATE TABLE "SCOTT"."EMP"
( "EMPNO" NUMBER(4,0),
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DE
FAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
ALTER INDEX "SCOTT"."PK_EMP" UNUSABLE ENABLE,
CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DE
FAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
CACHE
```
Depending on the tool you are using, you may need to run `set long 10000` first; that tells SQL\*Plus to display the first 10,000 bytes of any LOB that is selected. If your DDL is longer, set a larger value. | Use [`DESC`](http://ss64.com/ora/desc.html):
```
DESC mytable
```
This will show you the columns, but unfortunately the create statement is not available using standard Oracle tools. | `show create table` equivalent in oracle sql | [
"",
"sql",
"oracle",
""
] |
I'm a newbie at SQL Server. I'm stuck on a problem that I can't solve. I want to write a stored procedure.
The `TimeStamp` column's datatype is `datetime`.
This is my stored procedure:
```
@fetchtype int,
@startdate nvarchar(22),
@finishdate nvarchar(22)
AS
if (@fetchtype = 0)
BEGIN
PRINT('Select TimeStamp ' From WindData Where TimeStamp between '+@startdate+' and '+@finishdate)
EXEC('Select TimeStamp ' From WindData Where TimeStamp between '+@startdate+' and '+@finishdate)
END
```
And this is my execution query:
```
DECLARE @return_value int
EXEC @return_value = [dbo].[Get_Values]
@columnnames = N'V81_Avg',
@fetchtype = 0,
@startdate = N'2013-04-23 12:58:40.000',
@finishdate = N'2013-04-23 12:59:00.000'
SELECT 'Return Value' = @return_value
```
But when I execute my query I get this error
> Msg 102, Level 15, State 1, Line 1
> Incorrect syntax near '12'.
I think I didn't write the datetime format properly. | The problem here is in the SQL query string:
It should be:
```
PRINT('Select TimeStamp From WindData Where TimeStamp between '''+@startdate+''' and '''+@finishdate+'''');
EXEC('Select TimeStamp From WindData Where TimeStamp between '''+@startdate+''' and '''+@finishdate+'''')
```
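For comparison, here is how the same idea looks with driver-level parameters instead of string concatenation — a minimal sketch using Python's `sqlite3` module (SQLite stands in for SQL Server here; the table and sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE WindData (TimeStamp TEXT)")
conn.executemany(
    "INSERT INTO WindData VALUES (?)",
    [("2013-04-23 12:58:45.000",), ("2013-04-23 12:59:30.000",)],
)

# The driver binds the values itself -- no quoting, no concatenation.
rows = conn.execute(
    "SELECT TimeStamp FROM WindData WHERE TimeStamp BETWEEN ? AND ?",
    ("2013-04-23 12:58:40.000", "2013-04-23 12:59:00.000"),
).fetchall()
print(rows)
```

Because the bounds arrive as bound parameters, malformed quoting simply cannot occur.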
But the best way is to use these parameters as DATETIME, avoid the dynamic query altogether, and replace `EXEC()` with a plain SELECT:
```
@startdate datetime,
@finishdate datetime
...
Select TimeStamp From WindData Where TimeStamp between @startdate and @finishdate;
...
``` | You need to put quotes around the dates:
```
declare @sql nvarchar(max) = 'Select TimeStamp From WindData Where TimeStamp between '''+@startdate+''' and '''+@finishdate+'''';
EXEC(@sql);
```
Or, better yet, use `sp_executesql`:
```
declare @sql nvarchar(max) = 'Select TimeStamp From WindData Where TimeStamp between @startdate and @finishdate';
exec sp_executesql @sql, N'@startdate date, @finishdate date', @startdate = @startdate, @finishdate = @finishdate;
``` | Datetime syntax error | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
"stored-procedures",
""
] |
In the following query, where would I place `WITH(NOLOCK)`?
```
SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM dbo.VBsplit(@mnemonicList, ',')) a) b
JOIN dct
ON dct.concept = b.concept
WHERE b.geo = dct.geo) c
JOIN dct_rel z
ON c.db_int = z.db_int) d
JOIN rel_d y
ON y.rel_id = d.rel_id
WHERE y.update_status = 0
GROUP BY y.rel_id,
d.concept,
d.geo_rfa
``` | You should not put `NOLOCK` anywhere in that query. If you are trying to prevent readers from blocking writers, a much better alternative is `READ COMMITTED SNAPSHOT`. Of course, you should read about this, just like you should read about `NOLOCK` before blindly throwing it into your queries:
* [Is the `NOLOCK` SQL Server hint bad practice?](https://stackoverflow.com/questions/1452996/is-the-nolock-sql-server-hint-bad-practice)
* [Is `NOLOCK` always bad?](https://dba.stackexchange.com/questions/10655/is-nolock-always-bad)
* [What risks are there if we enable read committed snapshot in SQL Server?](https://dba.stackexchange.com/questions/5014/what-risks-are-there-if-we-enable-read-committed-snapshot-in-sql-server)
Also, since you're using SQL Server 2008, you should probably replace your `VBSplit()` function with a table-valued parameter - this will be much more efficient than splitting up a string, even if the function is baked in CLR as implied.
First, create a table type that can hold appropriate strings. I'm going to assume the list is guaranteed to be unique and no individual mnemonic word can be > 900 characters.
```
CREATE TYPE dbo.Strings AS TABLE(Word NVARCHAR(900) PRIMARY KEY);
```
Now, you can create a procedure that takes a parameter of this type, and which sets the isolation level of your choosing in one location:
```
CREATE PROCEDURE dbo.Whatever
@Strings dbo.Strings READONLY
AS
BEGIN
SET NOCOUNT ON;
SET TRANSACTION ISOLATION LEVEL --<choose wisely>;
SELECT -- please list your columns here instead of *
FROM @Strings AS s
INNER JOIN dbo.dct -- please always use proper schema prefix
ON dct.concept = s.Word
...
END
GO
```
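SQLite has no table-valued parameters, but the underlying idea — load the list into a table and join against it, instead of splitting a comma-separated string in SQL — can be sketched with Python's `sqlite3` (the table names and sample data below are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dct (concept TEXT, geo TEXT)")
conn.executemany(
    "INSERT INTO dct VALUES (?, ?)",
    [("GDP", "US"), ("CPI", "US"), ("PPI", "DE")],
)

# Stand-in for the table-valued parameter: a temp table of words.
mnemonics = ["GDP", "CPI"]
conn.execute("CREATE TEMP TABLE Strings (Word TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO Strings VALUES (?)", [(m,) for m in mnemonics])

# Join against the temp table -- no VBsplit(), no string to deconstruct.
rows = conn.execute(
    "SELECT d.concept, d.geo FROM Strings AS s "
    "JOIN dct AS d ON d.concept = s.Word"
).fetchall()
print(sorted(rows))
```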
Now you can simply pass a collection (such as a DataTable) in from your app, be it C# or whatever, and not have to assemble or deconstruct a messy comma-separated list at all. | Since the question really is "where should I put NOLOCK", I am not going to debate the use of `NOLOCK` or reformat the query with better joins. I will just answer the question.
**In no way am I intending to say this is the better way or that the other answers are bad. The other answers solve the actual problem. I'm just intending to show exactly where to place the lock hints, as the question asks.**
```
SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM (SELECT *
FROM dbo.VBsplit(@mnemonicList, ',')) a) b
JOIN dct WITH (NOLOCK) -- <---
ON dct.concept = b.concept
WHERE b.geo = dct.geo) c
JOIN dct_rel z WITH (NOLOCK) -- <---
ON c.db_int = z.db_int) d
JOIN rel_d y WITH (NOLOCK) -- <---
ON y.rel_id = d.rel_id
WHERE y.update_status = 0
GROUP BY y.rel_id,
d.concept,
d.geo_rfa
``` | Placement of WITH(NOLOCK) in nested queries | [
"",
"sql",
"sql-server",
"sql-server-2008",
"t-sql",
""
] |
I have been searching Google and SO for the last 25 minutes for how to do the following in MySQL.
I currently have the following query (Sent by PHP):
```
SELECT
COUNT(*),
`$db`.`crop`.`id` AS `crop`,
`$db`.`crop`.`harvest_date` AS `harvest_date`
FROM
`$db`.`crop`
WHERE
`$db`.`crop`.`harvest_date` BETWEEN $startDate AND $endDate
GROUP BY `$db`.`crop`.`harvest_date`
$startDate = 2012-01-01
$endDate = 2013-07-01
```
I am trying to find all the rows that have a harvest\_date between the start and end dates, and then count the number of rows that fall on the same date. However, I seem to be getting no results. The query doesn't fail; it just doesn't return anything. Can anyone point me in the right direction/tell me where I got it wrong?
EDIT: Found the problem. As Michael pointed out below, the dates were not getting passed as dates, but as numbers. I solved this by adding ' before and after startDate and endDate in the query. | MySQL expects date literals to be single-quoted strings like `'2012-01-01'` and `'2013-07-01'`.
Since you have not quoted your date literals, which are PHP variables, PHP is actually interpreting them as arithmetic on integer values before it passes them into the query.
```
// You see this:
$startDate = 2012-01-01
$endDate = 2013-07-01
// PHP does this:
// 2012 - 1 - 1 = 2010
$startDate = 2010
// 2013 - 7 - 1 = 2005
$endDate = 2005
```
Your query ultimately uses this:
```
WHERE
`$db`.`crop`.`harvest_date` BETWEEN 2010 AND 2005
```
And MySQL will cast both of those integers to a `DATE`, which will return `NULL`.
```
mysql> SELECT CAST(2010 AS DATE);
+--------------------+
| CAST(2010 AS DATE) |
+--------------------+
| NULL |
+--------------------+
```
So the simple fix is:
```
$startDate = '2012-01-01';
$endDate = '2013-07-01';
```
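A parameterized version of the whole query, sketched with Python's `sqlite3` (SQLite standing in for MySQL; the rows are invented for illustration) — placeholders deliver the dates as strings, so no arithmetic can ever happen to them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crop (id INTEGER, harvest_date TEXT)")
conn.executemany(
    "INSERT INTO crop VALUES (?, ?)",
    [(1, "2012-05-01"), (2, "2012-05-01"), (3, "2014-01-01")],
)

# Bound parameters: the date strings arrive intact, never as 2012-01-01 math.
rows = conn.execute(
    "SELECT harvest_date, COUNT(*) FROM crop "
    "WHERE harvest_date BETWEEN ? AND ? "
    "GROUP BY harvest_date",
    ("2012-01-01", "2013-07-01"),
).fetchall()
print(rows)
```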
And if you eventually convert this to a parameterized query, the correct quoting of placeholders would be handled for you. | To get the count for the given date range, modify the query as:
```
SELECT
COUNT(`$db`.`crop`.`id`)
FROM
`$db`.`crop`
WHERE
`$db`.`crop`.`harvest_date` BETWEEN $startDate AND $endDate
```
Grouping the result by "harvest\_date" will give the count for each particular date.
Example: if the table is like
8-21-2013
8-21-2013
8-20-2013
then grouping will give
2
1
and without grouping it will give
3 | Count rows per date in a date range | [
"",
"mysql",
"sql",
"database",
""
] |