I have a customer table and a separate table that stores addresses for customers. They often have at least two joined records in the address table, if not more. When I want a mailing list, I want the billing address for the customer, which is marked by a 'type' column in the address table. Unfortunately, not every customer has an address marked 'billing'. How can I write a statement that says to pick the address marked billing, unless there isn't one, in which case choose another? I don't want to just use 'or' because I only want one record to be chosen. For simplicity's sake, the tables could look like this: > Customer: > id > > Address: > id > custid > type (shipping, billing, '', etc) And here's some SQL that isn't what I want: Select \* from Customer inner join Address on address.custid = Customer.id where Address.type = 'billing' **!!Late Addition!!** Kevin provided the coalesce solution, which I like very much, but it only works for a single column from the address table. I need all the fields/columns from the table to form a usable address. I've been trying to do it like so (only using two fields here for simplicity): `select coalesce(concat_ws(',',a1.address, a1.city), concat_ws(',',a2.address, a2.city), concat_ws(',',a3.address, a3.city)) from customer c ...` however, concat\_ws breaks the coalesce by always returning non-null. I also tried doing coalesce independently on each field: `coalesce(a1.address, a2.address), coalesce(a1.address2, a2.address2), etc` but this has the effect of 'mixing up' addresses from 3 different records: if, for example, address2 (for apartment number or whatever) is null for one row, it will insert a different record's address2 in there. Does anyone know how to prevent concat\_ws from returning non-null even though the fields are null inside?
This will get you the billing address if it exists, otherwise the shipping address: ``` select ifnull(a1.address, a2.address) from customer c left join address a1 on c.id = a1.custid and a1.type = 'billing' left join address a2 on c.id = a2.custid and a2.type = 'shipping' ``` if you wanted to check for other types, you could add joins and use coalesce instead of ifnull, like so: ``` select coalesce(a1.address, a2.address, a3.address) from customer c left join address a1 on c.id = a1.custid and a1.type = 'billing' left join address a2 on c.id = a2.custid and a2.type = 'shipping' left join address a3 on c.id = a3.custid and a3.type = 'X' ```
``` select c.id, a.id from customer c join address a on a.custid = c.id where a.type = 'Billing' union select c.id, a.id from customer c join address a on a.custid = c.id where a.type <> 'Billing' and c.id not in (select custid from address a where a.type = 'Billing') ``` This is one way to do it.
MySQL Join chose only 1 condition from a series of conditions
[ "mysql", "sql", "where-clause" ]
When I try to count months between 2 dates I get the wrong result. ``` SELECT ROUND(MONTHS_BETWEEN(TO_DATE('23/01/2015','dd/mm/yyyy'), TO_DATE('26/10/2014','dd/mm/yyyy'))) FROM dual; SELECT ROUND(MONTHS_BETWEEN(TO_DATE('25/05/2015','dd/mm/yyyy'), TO_DATE('05/05/2015','dd/mm/yyyy'))) FROM dual; ``` The first query should return 4 and the second 1, but I don't get those results.
Finally I got it: ``` SELECT TRUNC(ROUND(MONTHS_BETWEEN(TRUNC(TO_DATE('23/01/2015','dd/mm/yyyy'),'mm'),TRUNC(TO_DATE('15/01/2015','dd/mm/yyyy'),'mm')))) + 1 FROM dual ; ```
Try this ``` select trunc( months_between(TO_DATE('2015/01/23','yyyy/mm/dd'), TO_DATE('2014/10/26','yyyy/mm/dd')) ) mths FROM dual; ```
How to count months between two date?
[ "sql", "oracle", "date" ]
For example, I have such sql table of messages: ``` Id from_user_id to_user_id message 1 1 2 Hello 2 1 2 How are you? 3 1 3 Where are you? ``` And query SELECT DISTINCT to\_user\_id FROM messages; it returns ``` to_user_id 2 3 ``` But, that is not enough. I need to show all last messages of from\_user\_id(id=1) to others users, and avoid N+1 queries problem. The result must be like this ``` Id from_user_id to_user_id message 2 1 2 How are you? 3 1 3 Where are you? ```
You're going to want to do a self join; to restrict the result to one sender, also match `from_user_id`: ``` SELECT m.* FROM messages m LEFT JOIN messages _m ON m.to_user_id = _m.to_user_id AND m.from_user_id = _m.from_user_id AND _m.id > m.id WHERE _m.id IS NULL AND m.from_user_id = 1 ```
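The self-join idea can be checked against the question's sample data. A sqlite3 sketch; note the join here also matches `from_user_id` and filters to sender 1, per the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, from_user_id INT, to_user_id INT, message TEXT);
INSERT INTO messages VALUES
  (1, 1, 2, 'Hello'),
  (2, 1, 2, 'How are you?'),
  (3, 1, 3, 'Where are you?');
""")

# Keep a row only when no later message exists to the same recipient
# from the same sender (anti-join via LEFT JOIN ... IS NULL).
rows = conn.execute("""
SELECT m.*
FROM messages m
LEFT JOIN messages _m
  ON m.to_user_id = _m.to_user_id
 AND m.from_user_id = _m.from_user_id
 AND _m.id > m.id
WHERE _m.id IS NULL AND m.from_user_id = 1
ORDER BY m.id
""").fetchall()
print(rows)  # [(2, 1, 2, 'How are you?'), (3, 1, 3, 'Where are you?')]
```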
Postgres offers `distinct on`, which is generally the best way to approach this type of problem: ``` select distinct on (to_user_id) m.* from messages m where m.from_user_id = 1 order by to_user_id, id desc; ```
Sql query distinct + other columns
[ "sql", "postgresql", "distinct" ]
I want to write a query that counts every object in the database. I'm aware that you can query for statistics, but I want to know how to write this general type of query in case I need to do it again. I have the results of this query, which gives every table name in the DB: ``` SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE' ``` The results look something like this: ``` table_name ---------- queue_classic_jobs activities attachments comments csv_files ``` I want to do a `count(*)` on each table for each resulting row: ``` table_name | row_count ---------------------------------- queue_classic_jobs 6 activities 2 attachments 4 comments 8 csv_files 10 ``` and then sum the row\_count column. What should my query look like?
Use plpgsql `EXECUTE` command. You have two options. The first is anonymous code block with temporary table to save results: ``` CREATE TEMP TABLE IF NOT EXISTS results(table_name text, row_count bigint); TRUNCATE results; DO $$ DECLARE tname text; BEGIN FOR tname IN SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type='BASE TABLE' LOOP EXECUTE format($fmt$ INSERT INTO results SELECT '%s', count(*) from %s $fmt$, tname, tname); END LOOP; END $$; SELECT * FROM results UNION ALL SELECT 'TOTAL', sum(row_count) FROM results; ``` The second option is a plpgsql function: ``` CREATE OR REPLACE FUNCTION show_row_count() RETURNS TABLE (table_name text, row_count bigint) LANGUAGE plpgsql AS $$ DECLARE tname text; BEGIN FOR tname IN SELECT i.table_name FROM information_schema.tables i WHERE table_schema='public' AND table_type='BASE TABLE' LOOP RETURN QUERY EXECUTE format($fmt$ SELECT '%s'::text, count(*) from %s $fmt$, tname, tname); END LOOP; END $$; WITH row_counts AS (SELECT * FROM show_row_count()) SELECT * FROM row_counts UNION ALL SELECT 'TOTAL'::text, sum(row_count) FROM row_counts; ``` Read more: [Executing Dynamic Commands](http://www.postgresql.org/docs/9.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN)
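Outside plpgsql, the same dynamic-SQL idea is simply "enumerate tables from the catalog, then run one count per table and sum". A sketch of the analogue against SQLite's `sqlite_master` catalog (illustrative only, not Postgres code; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activities (x);  CREATE TABLE comments (x);
INSERT INTO activities VALUES (1), (2);
INSERT INTO comments VALUES (1), (2), (3);
""")

# Enumerate tables from the catalog, then execute one count(*) per table.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
counts = {t: conn.execute(f'SELECT count(*) FROM "{t}"').fetchone()[0]
          for t in tables}
total = sum(counts.values())
print(counts, total)  # {'activities': 2, 'comments': 3} 5
```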
You can use analytic functions to get the total count in each row. ``` SELECT nspname AS schemaname, relname AS TABLE_NAME, reltuples AS ROW_COUNT, SUM (reltuples) OVER () AS total_rows_count FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C .relnamespace) WHERE nspname = 'ptab' AND relkind = 'r' ORDER BY reltuples DESC ```
Perform the sum of many count queries, from the result of an initial query
[ "sql", "postgresql" ]
Example: Let's say I have table `customerphonedetails` ( `customer_no` , `phone_no` ). ``` data is like this : customer_no =2 Phone_no=11 customer_no =2 Phone_no=12 customer_no =2 Phone_no=13 customer_no =1 Phone_no=11 customer_no =1 Phone_no=12 customer_no =1 Phone_no=13 customer_no =3 Phone_no=22 ``` Now I want to write a query which will find the `customer_no` having **all** `phone_no=(11,12,13)`. Expected result is: Customer\_no = (1,2).
You can check that all three `Phone_no` provided in the `where` are present for each `customer_no` using `having` clause: ``` select customer_no from customerphonedetails where Phone_no in (11, 12, 13) group by customer_no having count(distinct Phone_no) = 3 ```
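This "relational division" pattern is quick to verify with the question's data; a sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customerphonedetails (customer_no INT, phone_no INT);
INSERT INTO customerphonedetails VALUES
  (2, 11), (2, 12), (2, 13),
  (1, 11), (1, 12), (1, 13),
  (3, 22);
""")

# Keep only customers whose distinct matching phone numbers cover all three.
rows = conn.execute("""
SELECT customer_no
FROM customerphonedetails
WHERE phone_no IN (11, 12, 13)
GROUP BY customer_no
HAVING COUNT(DISTINCT phone_no) = 3
ORDER BY customer_no
""").fetchall()
print(rows)  # [(1,), (2,)]
```

`COUNT(DISTINCT ...)` matters when the same customer/phone pair can appear more than once; with plain `COUNT(*)` duplicates would inflate the tally.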
``` select DISTINCT customer_no from table where phone_no in(11,12,13) ``` The `DISTINCT` should remove duplicate records.
Find all the customer having phone multiple number in a table
[ "sql", "oracle" ]
This is my query ``` select U_YBID, T1.onhand from oitw T1 inner join oitm T2 on T1.ItemCode=T2.ItemCode where T1.WhsCode='01' and U_YBID is not null and u_YBId like '%/%' ``` It gives output like this: ``` 302/02 41.000000 302/01 23.000000 X82/02 20.000000 355/01 2.000000 355/02 4.000000 355/03 5.000000 ``` I need query which gives the `sum` and id for eg: ``` 302 64 x82 20 355 11 ```
You can use `left`: ``` select left(U_YBID, 3), sum(T1.onhand) from oitw T1 inner join oitm T2 on T1.ItemCode = T2.ItemCode where T1.WhsCode = '01' and u_YBId like '%/%' group by left(U_YBID, 3) ```
If the length of the string in column `U_YBID` before `/` is not always `3`, then you can write the query as: ``` SELECT U_YBID, SUM(T1.onhand) AS onhand FROM ( select PARSENAME(REPLACE(U_YBID, '/', '.'), 2) AS U_YBID , T1.onhand from oitw T1 inner join oitm T2 on T1.ItemCode=T2.ItemCode where T1.WhsCode='01' and U_YBID is not null and u_YBId like '%/%' ) AS T GROUP BY U_YBID ``` You can read more about the `PARSENAME` function [here..](https://msdn.microsoft.com/en-us/library/ms188006.aspx)
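For engines without `PARSENAME`, the portable idea is "substring up to the delimiter, then group and sum". A sketch with SQLite's `substr`/`instr` on the question's data (column names assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (u_ybid TEXT, onhand REAL);
INSERT INTO stock VALUES
  ('302/02', 41), ('302/01', 23), ('X82/02', 20),
  ('355/01', 2), ('355/02', 4), ('355/03', 5);
""")

# Group by everything before the '/' (works for any prefix length).
rows = conn.execute("""
SELECT substr(u_ybid, 1, instr(u_ybid, '/') - 1) AS id, SUM(onhand)
FROM stock
WHERE u_ybid LIKE '%/%'
GROUP BY id
ORDER BY id
""").fetchall()
print(rows)  # [('302', 64.0), ('355', 11.0), ('X82', 20.0)]
```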
Group by substring and sum
[ "sql", "sql-server" ]
I would like to find amounts that end in 999. What I have so far is: ``` select * from bkpf_bseg_0010 where right(cast(dmbtr as int),3) = 999 ``` But when I run the query I get the following error message: > Arithmetic overflow error for type int, value = 4302564925.000000 The column `dmbtr` is a float. Any other ideas on how to bypass this issue?
You get a number overflow when casting big numbers to `int`, which has only a limited range; `bigint` should be enough. You can also use modulo to get the last 3 digits: ``` where cast(dmbtr as bigint) % 1000 = 999 ```
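A quick check of the modulo idea via sqlite3 (SQLite integers are 64-bit, comparable to T-SQL `bigint`; the sample amounts are invented, including the overflowing value from the error message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bkpf_bseg_0010 (dmbtr REAL);
INSERT INTO bkpf_bseg_0010 VALUES (4302564925.0), (1999.0), (123.0), (70999.0);
""")

# Cast the float to a 64-bit integer, then keep rows whose last three digits are 999.
rows = conn.execute("""
SELECT dmbtr FROM bkpf_bseg_0010
WHERE CAST(dmbtr AS INTEGER) % 1000 = 999
ORDER BY dmbtr
""").fetchall()
print(rows)  # [(1999.0,), (70999.0,)]
```

Note that 4302564925 passes through the 64-bit cast without error, which is exactly what the `int` cast in the original query could not do.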
``` select * from bkpf_bseg_0010 where right(cast(dmbtr as bigint),3) = 999 ```
sql - find amounts that end in 999
[ "sql", "sql-server", "t-sql" ]
``` LedgerId AccountId EntryType Debit Credit 2 2 D 50000.00 NULL 3 2 D 10000.00 NULL 4 2 C NULL 25000.00 6 2 C NULL 10000.00 7 2 D 89000.00 NULL 8 2 D 89000.00 NULL 10 3 D 715871.00 NULL ``` Following query calculates the `Balance`: ``` Select Accounts.ID [AccountID],Name,AccountType [AccountType], SUM(Debit) - SUM(Credit) [Balance] FROM Accounts join Dealers on Accounts.DealerId = Dealers.ID join Ledger on Accounts.ID = Ledger.AccountId GROUP BY Accounts.ID, Name, AccountType ``` and it returns: ``` AccountID Name AccountType Balance 2 Mateen P 203000.00 3 Shery P NULL ``` expected output: ``` AccountID Name AccountType Balance 2 Mateen P 203000.00 3 Shery P 715871.00 ``` The `Balance` for Account 3 is Null, it returns `null` when it comes to subtract anything from `null`. **For Example:** ``` select 5 - NULL ``` returns `NULL`. **Question:** Now, how can I get the `Balance` instead of `NULL`?
You can use [`coalesce`](https://msdn.microsoft.com/ru-ru/library/ms190349(v=sql.120).aspx): ``` coalesce(sum(Debit), 0) - coalesce(sum(Credit), 0) ```
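The effect of coalescing the sums is easy to demonstrate; a sqlite3 sketch with a simplified single-table version of the question's ledger (the join to Accounts/Dealers is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ledger (accountid INT, debit REAL, credit REAL);
INSERT INTO ledger VALUES
  (2, 50000, NULL), (2, 10000, NULL), (2, NULL, 25000),
  (2, NULL, 10000), (2, 89000, NULL), (2, 89000, NULL),
  (3, 715871, NULL);
""")

# SUM() over all-NULL values yields NULL; COALESCE turns that into 0
# so the subtraction for account 3 no longer produces NULL.
rows = conn.execute("""
SELECT accountid,
       COALESCE(SUM(debit), 0) - COALESCE(SUM(credit), 0) AS balance
FROM ledger
GROUP BY accountid
ORDER BY accountid
""").fetchall()
print(rows)  # [(2, 203000.0), (3, 715871.0)]
```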
Try this, using `isnull(value, 0)` so that null values are treated as 0: ``` Select Accounts.ID [AccountID], Name, AccountType [AccountType], SUM(isnull(Debit, 0)) - SUM(isnull(Credit, 0)) AS Balance FROM Accounts join Dealers on Accounts.DealerId = Dealers.ID join Ledger on Accounts.ID = Ledger.AccountId GROUP BY Accounts.ID, Name, AccountType ```
Arithmetic operation on null
[ "sql", "sql-server", "conditional-statements" ]
I want to pass a list of int's (comma separated) which is a field in my table > ie. 1234, 2345, 3456, 4567 to my `IN` clause in `WHERE`. But the list is a string (`VARCHAR`), and I'm comparing to an int field. Is there a way for me to convert the list to list of ints? `Enterprise_ID` is `INT` Path is a field in the table which is a comma separated string ie. 1234, 2345, 3456, 4567 ``` SELECT * FROM tbl_Enterprise WHERE Enterprise_ID IN ( Path ) ``` My database is Vertica.
You can use the SPLIT\_PART function in Vertica to split the comma separated list into rows and insert them into a temp table. Use a query something like this to achieve your goal: ``` SELECT * FROM tbl_Enterprise WHERE Enterprise_ID IN ( Select Enterprise_ID from temp_table ) ``` Split part function: <https://my.vertica.com/docs/7.1.x/HTML/Content/Authoring/SQLReferenceManual/Functions/String/SPLIT_PART.htm> Here is an example of splitting a string into rows using split\_part: ``` dbadmin=> SELECT SPLIT_PART('JIM|TOM|PATRICK|PENG|MARK|BRIAN', '|', row_num) "User Names" dbadmin-> FROM (SELECT ROW_NUMBER() OVER () AS row_num dbadmin(> FROM tables) row_nums dbadmin-> WHERE SPLIT_PART('JIM|TOM|PATRICK|PENG|MARK|BRIAN', '|', row_num) <> ''; User Names ------------ JIM TOM PATRICK PENG MARK BRIAN (6 rows) ```
I would consider these two solutions to be anti-patterns and would recommend testing them for performance. The first method uses functions that come in the flex table package. ``` SELECT values::INT as var1 FROM ( SELECT MapItems(v1) OVER () AS (keys, values) FROM ( SELECT MapDelimitedExtractor( '1234, 2345, 3456, 4567' USING PARAMETERS DELIMITER=',') AS v1 ) AS T ) AS T2 WHERE REGEXP_SUBSTR(values,'\d+',1) IS NOT NULL; var1 ------ 1234 2345 3456 4567 (4 rows) ``` The second method uses functions that comes in the text index package. ``` SELECT words::INT AS var1 FROM ( SELECT TxtIndex.StringTokenizerDelim('1234, 2345, 3456, 4567',',') OVER() AS (words, input_string) ) AS T WHERE REGEXP_SUBSTR(words, '\d+',1) IS NOT NULL; var1 ------ 1234 2345 3456 4567 (4 rows) ```
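Both answers are Vertica-specific, but the split-to-rows idea can be sketched portably with a recursive CTE. An illustrative SQLite version (not Vertica syntax) that peels one item off the front of the string per recursion step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
path = '1234, 2345, 3456, 4567'

# Append a trailing comma so every item is delimiter-terminated, then
# repeatedly split off the text before the first comma.
rows = conn.execute("""
WITH RECURSIVE split(item, rest) AS (
  SELECT '', ? || ','
  UNION ALL
  SELECT trim(substr(rest, 1, instr(rest, ',') - 1)),
         substr(rest, instr(rest, ',') + 1)
  FROM split
  WHERE rest <> ''
)
SELECT CAST(item AS INTEGER) FROM split WHERE item <> ''
""", (path,)).fetchall()
print(rows)  # [(1234,), (2345,), (3456,), (4567,)]
```

Once the list is rows of integers, the `Enterprise_ID IN (...)` comparison from the question works against an integer column directly.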
Convert comma separated string to a list
[ "sql", "vertica" ]
I am getting data from a table for a certain date range, but the query is not returning data for today's date. Why does this happen? ``` select * from mytable where action_date >= to_date('01/07/2015', 'DD/MM/YYYY') and action_date <= to_date('22/07/2015', 'DD/MM/YYYY'); ``` The result does not show `22/07/2015` data. **Edit:** ``` ACTION_DATE TIMESTAMP(6) ``` Sample date in that column: > 22/07/15 12:47:18.000000000 PM
I changed the query this way, and it's working as expected. ``` SELECT * FROM mytable WHERE action_date >= to_date('01/07/2015', 'DD/MM/YYYY') AND TRUNC(to_date(TO_CHAR(action_date, 'DD/MM/YYYY'), 'DD/MM/YYYY')) <= TRUNC(to_date('22/07/2015', 'DD/MM/YYYY')); ```
Add the time part. When you give only a date, the time portion defaults to midnight (00:00:00), so rows with a time later in the day on 22/07 fall outside the range. Note also that `TO_DATE` does not support fractional seconds; since the column is a `TIMESTAMP`, use `TO_TIMESTAMP`: ``` select * from mytable where action_date >= to_timestamp('01/07/2015 00:00:00.000000000', 'DD/MM/YYYY HH24:MI:SS.FF9') and action_date <= to_timestamp('22/07/2015 23:59:59.999999999', 'DD/MM/YYYY HH24:MI:SS.FF9'); ``` An equivalent and simpler option is the half-open range `action_date >= to_date('01/07/2015', 'DD/MM/YYYY') and action_date < to_date('23/07/2015', 'DD/MM/YYYY')`.
Query not getting current date result
[ "sql", "oracle" ]
I'm trying to create a table with a clever sequence generator so I can use this insert structure: ``` insert into SOMEUSERS (SOMEUSERS_NAME, SOMEUSERS_PASSWORD) values ('Artem', 'PracTimPatie'); ``` instead of this: ``` insert into SOMEUSERS (SOMEUSERS_ID, SOMEUSERS_NAME, SOMEUSERS_PASSWORD) values (2, 'Artem', 'PracTimPatie'); ``` or this structure: ``` insert into SOMEUSERS (SOMEUSERS_ID, SOMEUSERS_NAME, SOMEUSERS_PASSWORD) values (GEN_ID_SOMEUSERS.nextval, 'Artem', 'PracTimPatie'); ``` When I execute the following SQL script: ``` create sequence gen_id_someUsers START WITH 1 INCREMENT BY 1 NOCACHE NOCYCLE; CREATE TABLE loc_db.someUsers ( someUsers_id number(10) DEFAULT gen_id_someUsers.NEXTVAL NOT NULL, --because of this row someUsers_name varchar2(50) NOT NULL, someUsers_password varchar2(50), CONSTRAINT someUsers_pk PRIMARY KEY (someUsers_id) ); ``` I get the following error: > Error report - SQL Error: ORA-00984: column not allowed here > 00984. 00000 - "column not allowed here" For clarity, with this change it works: ``` ... CREATE TABLE loc_db.someUsers ( someUsers_id number(10) NOT NULL, --correct this row ... ``` > Sequence GEN\_ID\_SOMEUSERS created. > > Table LOC\_DB.SOMEUSERS created. ## ***How can I configure a comfortable sequence generator?*** (for PostgreSQL too, and if possible without a trigger, i.e. as easily as possible)
Oracle 12c introduces [Identity columns](http://docs.oracle.com/database/121/SQLRF/statements_7002.htm#CJAECCFH): ``` CREATE TABLE SOMEUSERS ( SOMEUSERS_ID NUMBER(10) GENERATED ALWAYS AS IDENTITY CONSTRAINT SOMEUSERS__SOMEUSERS_ID__PK PRIMARY KEY, SOMEUSERS_NAME VARCHAR2(50) CONSTRAINT SOMEUSERS__SOMEUSERS_NAME__NN NOT NULL, SOMEUSERS_PASSWORD VARCHAR2(50) ); ``` If you want to do it in earlier versions then you will need a trigger and a sequence: ``` CREATE TABLE SOMEUSERS ( SOMEUSERS_ID NUMBER(10) CONSTRAINT SOMEUSERS__SOMEUSERS_ID__PK PRIMARY KEY, SOMEUSERS_NAME VARCHAR2(50) CONSTRAINT SOMEUSERS__SOMEUSERS_NAME__NN NOT NULL, SOMEUSERS_PASSWORD VARCHAR2(50) ); / CREATE SEQUENCE gen_id_someUsers START WITH 1 INCREMENT BY 1 NOCACHE NOCYCLE; / CREATE OR REPLACE TRIGGER SOMEUSERS__ID__TRG BEFORE INSERT ON SOMEUSERS FOR EACH ROW BEGIN :new.SOMEUSERS_ID := gen_id_someUsers.NEXTVAL; END; / ``` You can then just do (either with the identity column or the trigger combined with your sequence): ``` INSERT INTO SOMEUSERS ( SOMEUSERS_NAME, SOMEUSERS_PASSWORD ) VALUES ( 'Name', 'Password' ); ```
In postgres just use a serial like this: ``` CREATE TABLE SOMEUSERS ( SOMEUSERS_ID serial NOT NULL, SOMEUSERS_NAME text, SOMEUSERS_PASSWORD text ); ``` Your insert statement is then easy as: ``` INSERT INTO SOMEUSERS (SOMEUSERS_NAME, SOMEUSERS_PASSWORD) values ('Artem', 'PracTimPatie'); ``` If you wanna query the sequence you can just query it like any other relation.
Sequence in Oracle/PostgreSQL with no ID in insert statement
[ "sql", "oracle", "postgresql", "sequence", "ddl" ]
I have a workflow application where the workflow is written to the DB as shown below when the status changes. There is no end time as it is a sequence of events. I want to create a query that will group by the WorkFlowID and total the amount of time spent in each. I am not sure how to even begin. My table and data look like this: ``` +------------+---------------------+ | WorkFlowID | EventTime | +------------+---------------------+ | 1 | 07/15/2015 12:00 AM | | 2 | 07/15/2015 12:10 AM | | 3 | 07/15/2015 12:20 AM | | 2 | 07/15/2015 12:30 AM | | 3 | 07/15/2015 12:40 AM | | 4 | 07/15/2015 12:50 AM | +------------+---------------------+ ``` My end result should be like: ``` +------------+-----------------+ | WorkFlowID | TotalTimeInMins | +------------+-----------------+ | 1 | 10 | | 2 | 20 | | 3 | 20 | | 4 | 10 | +------------+-----------------+ ```
In SQL Server 2012+, you would just use `lead()`. There are several ways to approach this in SQL Server 2008. Here is one using `outer apply`: ``` select tt.WorkFlowId, sum(datediff(second, tt.EventTime, tt.nextTime)) / 60.0 as NumMinutes from (select t.*, t2.EventTime as nextTime from table t outer apply (select top 1 t2.* from table t2 where t2.EventTime > t.EventTime order by t2.EventTime ) t2 ) tt group by tt.WorkFlowId; ``` The only question is how you get "10" for event 4. There is no following event, so that value doesn't make sense. You can use `datediff(second, EventTime, coalesce(nextTime, getdate()))` to handle the `NULL` value.
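The next-event lookup can be emulated portably with a correlated subquery; a sqlite3 sketch over the question's data (timestamps stored as ISO strings here, which is an implementation choice of this sketch). The duration of each event is the time until the next event of any workflow:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE workflow_events (workflowid INT, eventtime TEXT);
INSERT INTO workflow_events VALUES
  (1, '2015-07-15 00:00'), (2, '2015-07-15 00:10'),
  (3, '2015-07-15 00:20'), (2, '2015-07-15 00:30'),
  (3, '2015-07-15 00:40'), (4, '2015-07-15 00:50');
""")

# julianday differences are in days; * 24 * 60 converts to minutes.
# The final event has no successor, so its contribution is NULL here
# (the answers above patch that with getdate()/coalesce).
rows = conn.execute("""
SELECT t.workflowid,
       SUM((julianday((SELECT MIN(n.eventtime) FROM workflow_events n
                       WHERE n.eventtime > t.eventtime))
            - julianday(t.eventtime)) * 24 * 60) AS minutes
FROM workflow_events t
GROUP BY t.workflowid
ORDER BY t.workflowid
""").fetchall()
print(rows)  # approximately [(1, 10.0), (2, 20.0), (3, 20.0), (4, None)]
```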
As an alternative: ``` ;WITH t AS ( SELECT *, ROW_NUMBER() OVER (ORDER BY EventTime) As rn FROM yourTable) SELECT t1.WorkFlowID, SUM(DATEDIFF(SECOND, t1.EventTime, ISNULL(t2.EventTime, GETDATE()))) / 60 As TotalTimeInMins FROM t t1 LEFT JOIN t t2 ON t1.rn = t2.rn - 1 GROUP BY t1.WorkFlowID ```
Calculate total time spent by group and one datetime column
[ "sql", "sql-server-2008" ]
For example, if I have string 'sunday', then I want to insert same value in 1000 rows using SQL only; without using loops.
If you don't want to use another table you can use: ``` INSERT INTO some_table (some_column) SELECT 'Sunday' FROM ( SELECT 1 FROM (SELECT 1 UNION SELECT 2) as d1 JOIN (SELECT 1 UNION SELECT 2) as d2 JOIN (SELECT 1 UNION SELECT 2) as d3 JOIN (SELECT 1 UNION SELECT 2) as d4 JOIN (SELECT 1 UNION SELECT 2) as d5 JOIN (SELECT 1 UNION SELECT 2) as d6 JOIN (SELECT 1 UNION SELECT 2) as d7 JOIN (SELECT 1 UNION SELECT 2) as d8 JOIN (SELECT 1 UNION SELECT 2) as d9 JOIN (SELECT 1 UNION SELECT 2) as d10 ) AS t LIMIT 1000 ``` You can adjust the amount of JOIN's depending on the limit you want.
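The cross-join row multiplier is easy to sanity-check: ten two-row derived tables give 2^10 = 1024 rows, and `LIMIT` trims that to 1000. A sqlite3 sketch of the same query shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (some_column TEXT)")

# Cross join (comma syntax) of ten 2-row subqueries, capped at 1000 rows.
conn.execute("""
INSERT INTO some_table (some_column)
SELECT 'Sunday' FROM
  (SELECT 1 UNION SELECT 2) d1,  (SELECT 1 UNION SELECT 2) d2,
  (SELECT 1 UNION SELECT 2) d3,  (SELECT 1 UNION SELECT 2) d4,
  (SELECT 1 UNION SELECT 2) d5,  (SELECT 1 UNION SELECT 2) d6,
  (SELECT 1 UNION SELECT 2) d7,  (SELECT 1 UNION SELECT 2) d8,
  (SELECT 1 UNION SELECT 2) d9,  (SELECT 1 UNION SELECT 2) d10
LIMIT 1000
""")
n = conn.execute("SELECT count(*) FROM some_table").fetchone()[0]
print(n)  # 1000
```

Each extra joined pair doubles the row count, so the number of joins only grows logarithmically with the target row count.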
If you already have a table that has more than 1000 rows, you could do ``` insert into mytable (mycolumn) select 'Sunday' from mytablewithmorethan1000 limit 1000 ```
SQL query to insert same value 1000 times without loop
[ "mysql", "sql", "sql-server" ]
While trying to restore SQL Server database or do any other actions which require exclusive database access it displays following error: > Exclusive access could not be obtained because the database is in use.
Try this: ``` USE master GO ALTER DATABASE AdventureWorksDW SET SINGLE_USER --This rolls back all uncommitted transactions in the db. WITH ROLLBACK IMMEDIATE GO RESTORE DATABASE AdventureWorksDW FROM ... ... GO ``` Afterwards, remember to put the database back into multi-user mode with `ALTER DATABASE AdventureWorksDW SET MULTI_USER`.
This will do what you are asking for, but will terminate all open connections and will rollback uncommitted changes ``` ALTER DATABASE <yourDB> SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO ```
How to close all active database connections in one shot?
[ "sql", "sql-server", "sql-server-2014" ]
I have a query as below: ``` Select sum(total) from sales ``` It gives the result as `123456789` because in the back end the data is stored without a decimal point. I need to format it as `1234567.89`. I tried ``` select CONVERT(DECIMAL(10,2),SUM(total)) from sales ``` but it gives the output as `123456789.00` > How do I make it 1234567.89?
Divide by 100.0: ``` select CONVERT(DECIMAL(10,2), SUM(total) / 100.0) from sales ```
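The `100.0` matters because of integer division; a quick sqlite3 check (SQL Server behaves the same way for `int / int`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# int / int truncates the quotient; dividing by 100.0 forces
# decimal (floating-point) arithmetic and keeps the cents.
trunc, dec = conn.execute("SELECT 123456789 / 100, 123456789 / 100.0").fetchone()
print(trunc, dec)  # 1234567 1234567.89
```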
You should simply divide by 100, but use `100.0` (or cast first) so the engine doesn't do integer division and truncate the decimals. Something like this: ``` select sum(total) / 100.0 from sales ```
how to place a decimal point just before last two digit of a number
[ "sql", "sql-server-2008" ]
When attempting to run a ServiceStack service, I'm getting the following error: When debugging, the code only runs once and does not cycle through twice, I've also put breakpoints on all other functions with datareaders and none of them are being hit first and therefore managed to narrow the issue down to this one function. ``` Public Function GetVisitList(SiteKey As String) As List(Of VisitDetail) Implements IVisitorData.GetVisitList Dim vlcmd As SqlClient.SqlCommand = New SqlCommand vlcmd.CommandTimeout = 60 Try vlcmd.Connection = Conn vlcmd.CommandType = CommandType.StoredProcedure vlcmd.CommandText = "GetVisitList" vlcmd.Parameters.AddWithValue("@sitekey", SiteKey) Dim dr As SqlDataReader = vlcmd.ExecuteReader() Dim visitList As New List(Of VisitDetail) While dr.Read() Dim visit As New VisitDetail If Not IsDBNull(dr("VKey")) Then visit.VisitorKey = dr("VKey") End If If Not IsDBNull(dr("VisitIP")) Then visit.IP = dr("VisitIP") End If If Not IsDBNull(dr("SiteKey")) Then visit.SiteKey = dr("SiteKey") End If If Not IsDBNull(dr("Alert")) Then visit.AlertDescription = dr("Alert") End If If Not IsDBNull(dr("AlertNo")) Then visit.AlertNumber = dr("AlertNo") End If If Not IsDBNull(dr("VisitNo")) Then visit.VisitNumber = dr("VisitNo") Else visit.VisitNumber = 0 End If If Not IsDBNull(dr("Invited")) Then visit.Invited = dr("Invited") End If If Not IsDBNull(dr("Chatted")) Then visit.Chatted = dr("Chatted") End If If Not IsDBNull(dr("Prospect")) Then visit.Prospect = dr("Prospect") End If If Not IsDBNull(dr("Customer")) Then visit.Customer = dr("Customer") End If If Not IsDBNull(dr("HackRaised")) Then visit.Hacker = dr("HackRaised") End If If Not IsDBNull(dr("Spider")) Then visit.Spider = dr("Spider") End If If Not IsDBNull(dr("Cost")) Then visit.ThisVisitCost = dr("Cost") End If If Not IsDBNull(dr("Revenue")) Then visit.ThisVisitRevenue = dr("Revenue") End If If Not IsDBNull(dr("Visits")) Then visit.Visits = dr("Visits") Else visit.Visits = 0 End If If Not 
IsDBNull(dr("FirstDate")) Then visit.FirstVisitDate = dr("FirstDate") End If If Not IsDBNull(dr("TotalCost")) Then visit.TotalCost = dr("TotalCost") End If If Not IsDBNull(dr("TotalRevenue")) Then visit.TotalRevenue = dr("TotalRevenue") End If If Not IsDBNull(dr("OperatingSystem")) Then visit.OperatingSystem = dr("OperatingSystem") End If If Not IsDBNull(dr("Browser")) Then visit.Browser = dr("Browser") End If If Not IsDBNull(dr("SearchEngine")) Then visit.SearchEngine = dr("SearchEngine") End If If Not IsDBNull(dr("Referrer")) Then visit.Referrer = dr("Referrer") End If If Not IsDBNull(dr("Keywords")) Then visit.Keywords = dr("Keywords") End If If Not IsDBNull(dr("ReferrerQuery")) Then visit.ReferrerQuery = dr("ReferrerQuery") End If If Not IsDBNull(dr("Name")) Then visit.ContactName = dr("Name") End If If Not IsDBNull(dr("Email")) Then visit.ContactEmail = dr("Email") End If If Not IsDBNull(dr("Company")) Then visit.ContactCompany = dr("Company") End If If Not IsDBNull(dr("Telephone")) Then visit.ContactTelephone = dr("Telephone") End If If Not IsDBNull(dr("Fax")) Then visit.ContactFax = dr("Fax") End If If Not IsDBNull(dr("Street")) Then visit.ContactStreet = dr("Street") End If If Not IsDBNull(dr("City")) Then visit.ContactCity = dr("City") visit.City = dr("City") End If If Not IsDBNull(dr("Zip")) Then visit.ContactZip = dr("Zip") End If If Not IsDBNull(dr("Country")) Then visit.ContactCountry = dr("Country") visit.Country = dr("Country") End If If Not IsDBNull(dr("Web")) Then visit.ContactWebSite = dr("Web") End If If Not IsDBNull(dr("Organization")) Then visit.Organization = dr("Organization") End If If Not IsDBNull(dr("CRMID")) Then visit.CrmID = dr("CRMID") End If If Not IsDBNull(dr("Notes")) Then visit.ContactNotes = dr("Notes") End If If Not IsDBNull(dr("DNS")) Then visit.DNS = dr("DNS") End If If Not IsDBNull(dr("Region")) Then visit.Region = dr("Region") End If If Not IsDBNull(dr("FirstAlert")) Then visit.FirstAlertDescription = dr("FirstAlert") End If 
If Not IsDBNull(dr("FirstVisitReferrer")) Then visit.FirstReferrer = dr("FirstVisitReferrer") End If If Not IsDBNull(dr("ProspectTypes")) Then visit.ProspectTypes = dr("ProspectTypes") End If If Not IsDBNull(dr("VisitDate")) Then visit.SessionStarted = dr("VisitDate") End If If Not IsDBNull(dr("SecondsOnSite")) Then visit.TimeOnSite = dr("SecondsOnSite") End If If Not IsDBNull(dr("Page")) Then visit.Page = dr("Page") End If visitList.Add(visit) dr.Close() Conn.Close() End While Return visitList Catch ex As Exception Throw ex End Try End Function ``` The associated connection initialization is: ``` Public Sub Init(connectionString As String) Implements IVisitorData.Init connString = connectionString Conn = New SqlConnection(connectionString) Conn.Close() If Conn.State = ConnectionState.Closed Then Conn.Open() End If End Sub ``` So far I've tried: * Closing the connection * Disposing the command * Closing the datareader * Deploying a check to only open the db connection if it's current state is closed And none of these have worked. Does anybody have any idea what might be happening here? As far as I can see the datareader is only opened once then closed, but there must be something that I'm missing.
I would put the connection, command, and datareader into using statements [MSDN](https://msdn.microsoft.com/en-us/library/htd05whh.aspx). It ensures the disposal of resources when the code is finished in the using block. So here is the code that I would use: ``` using con as new sqlconnection("connection string") con.open() using command as new sqlcommand("sql string", con) 'Your command code... using rdr as sqldatareader = command.executereader() 'Your reader code... end using end using end using ``` I think this is the best way to ensure your objects are disposed. HTH Wade
``` dr.Close() Conn.Close() End While ``` These should be outside the while loop. You're trying to read after you have closed the connection. ``` End While dr.Close() Conn.Close() ```
VB Error: "There is already a datareader associated with this command"
[ "sql", "vb.net", "servicestack" ]
On dropping a unique `constraint`, following error occurs, > ORA-04098: trigger 'SYS.XDB\_PI\_TRIG' is invalid and failed re-validation Having no permission to recompile this `trigger`. What could be the problem here and is there any way we can solve this?
Found a solution to this: the XDB schema was invalid for the database, so we were unable to drop any objects in it. Making the XDB schema valid solved the problem. Thanks for your answers!
I would think that you may be dropping a primary key; check which constraint you're dropping. If you're dropping a PK and it's being used as a foreign key, then this would invalidate the trigger.
Unable to drop constraint
[ "sql", "oracle", "constraints" ]
I imported and am in the process of cleaning some data exported from an older Mainframe and have quite a few lines that start with an abnormal character (i.e. ASCII characters 194, 195, 226, etc). I can trim off most of the characters with a simple `remainder = trim(leading '%' from remainder)` (where '%' represents the character in question. The only character that won't be removed is the 'Í'. If I run a `remainder = trim(leading 'Í' from remainder)` query it won't find and trim the character, if I run a `ascii(remainder)` query over my data it shows up as a character 195 for the strings that start with that character. Next I ran a `remainder = trim(leading CHAR(195) from remainder)` query and that skipped the character as well. Why am I able to remove everything else but this one character when clearly MySQL can convert it to it's ASCII character code and doesn't have any issues displaying the character when a normal select query is run and the applicable records displayed? **Update** I have also run the following queries: ``` remainder = trim(leading convert('Í' using ASCII) from remainder) remainder = trim(leading convert('Í' using UTF8) from remainder) remainder = trim(leading convert(Char(195) using ASCII) from remainder) ```
Your query works correctly on my MySQL ( 5.5.44-0ubuntu0.14.04.1 ). Possibly the reason it doesn't work for you is that, due to character set mismatch, what you *see* as a character 195 **is not a character 195 at all**; it might be for example a 0xCD hex, or an UTF8 sequence corresponding to 0xC38D hex, in which case trimming the 'Í' would apparently transform it in something even weirder. Try using HEX() to check the character in question. What is it? ``` CD Latin1 Í C38D UTF8 Í <--- C3 CHAR(195) ``` Notice that the Í is **not** a CHAR(195) at all, but 195 *is* the *beginning* of a Í in UTF8. In a pinch, you can perform the operation... in hex. ``` select unhex(trim(leading 'C38D' from HEX('Íturalde'))); +---------------------------------------------------+ | unhex(trim(leading 'C38D' from HEX('Íturalde'))) | +---------------------------------------------------+ | turalde | +---------------------------------------------------+ ``` This will *only* ever trim a leading C38D, or UTF8 Í, while ignoring everything else. **UPDATE**: you may want to dump your table to a text file and try running `recode`, `iconv` or `fixcode` on it.
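The byte-level claim is easy to confirm outside MySQL; a quick Python check of the UTF-8 encoding of 'Í':

```python
# 'Í' is a single character but two bytes in UTF-8 (0xC3 0x8D), which is why
# TRIM(... CHAR(195) ...) can't remove it: 195 (0xC3) is only half the sequence.
s = "Íturalde"
b = s.encode("utf-8")
print(b[:2].hex().upper())    # C38D
print(b[2:].decode("utf-8"))  # turalde
```

This is the same `C38D` the answer trims in hex, and it shows why a byte-oriented trim of `CHAR(195)` alone leaves a dangling continuation byte.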
So I finally found *a method*. Using ``` remainder = trim(leading Char(195) from convert(remainder using ASCII)) ``` I was able to finally get rid of that pesky 'Í'. My only concern is that it didn't really trim anything; instead it turned **ALL** of the ASCII characters over 127 that would normally fall into the 'Extended ASCII' code list into '?', which could then be removed with `remainder = trim(leading '?' from remainder)`. It works for my current task, but I am interested in more exact queries that can remove a specific character should I need to in the future.
Can't trim() Char(195) in MySQL
[ "mysql", "sql", "ascii" ]
I've got two tables: 1. `T1` is a table of data * column `one` cannot have `null` * column `two` and `three` can have `null`s 2. `T2` is a table of categorization rules * it has the same columns as `T1` along with a `cat` column to represent the category * the idea is that the first three columns have criteria used to determine how and if rows in `T1` should be categorized * it is possible that a row in `T2` could have values in 2+ columns meaning there are multiple criteria that need to match in `T1` (e.g. `T1.two like "2*" and T1.three like "hi"`) I want a query that finds the rows in `T1` that match based on the criteria in `T2`. Here is an example: ``` +------+------+-------+ | T1 | +------+------+-------+ | one | two | three | +------+------+-------+ | aaaa | 1111 | | | bbbb | 2222 | | | cccc | | test | | dddd | | | +------+------+-------+ +------+-----+-------+------+ | T2 | +------+-----+-------+------+ | one | two | three | cat | +------+-----+-------+------+ | aaaa | * | * | 1 | -> all rows in T1 where column one equals aaaa | * | 2* | * | 2 | -> all rows in T1 where column two starts with 2 | * | * | test | 3 | -> all rows in T1 where column three equals test | * | 3* | hi | 3 | -> all rows in T1 where column two starts with 3 AND column 3 equals hi +------+-----+-------+------+ ``` I've got `*` in `T2` because I am trying to say the value in those columns should not matter. 
So using the second row as an example I'm saying match all rows in T1 where: * `one` is anything * `two` starts with 2 * `three` is anything My thought was to do an ambiguous join and then filter on matching rows: ``` SELECT T1.one, T2.one, T1.two, T2.two, T1.three, T2.three, T2.id FROM T1, T2 WHERE (T1.one Like [T2].[one]) ' match column one AND (T1.two Is Null Or T1.two Like [T2].[two]) ' match column two; the "is null" is needed in case the value is not there in T1 AND (T1.three Is Null Or T1.three Like [T2].[three]) ' match column three; the "is null" is needed in case the value is not there in T1 ``` This results in the table below. It partially works but returns rows it should not (marked below). ``` +--------+--------+--------+--------+----------+----------+----+ | Result | +--------+--------+--------+--------+----------+----------+----+ | T1.one | T2.one | T1.two | T2.two | T1.three | T2.three | cat| +--------+--------+--------+--------+----------+----------+----+ | aaaa | aaaa | 1111 | * | | * | 1 | | aaaa | * | 1111 | * | | test | 3 | -> THIS SHOULD NOT BE RETURNED | bbbb | * | 2222 | 2* | | * | 2 | | bbbb | * | 2222 | * | | test | 3 | -> THIS SHOULD NOT BE RETURNED | cccc | * | | 2* | test | * | 2 | -> THIS SHOULD NOT BE RETURNED | cccc | * | | * | test | test | 3 | | dddd | * | | 2* | | * | 2 | -> THIS SHOULD NOT BE RETURNED | dddd | * | | * | | test | 3 | -> THIS SHOULD NOT BE RETURNED +--------+--------+--------+--------+----------+----------+----+ ``` I've stared at this for a few hours but I cannot figure out how to do what I need. I figure this is not a database-specific question but if it matters I'm trying to do this with MS Access 2013.
For future reference, I think I figured out a working answer: <http://sqlfiddle.com/#!9/595eb/1>. Instead of using `*` (wildcard) in `T2` I use `null` and then in the query check if the value is `null`. This seems to have the desired/expected result. ``` WHERE (T2.one is null or T1.one = T2.one) AND (T2.two is null or T1.two like T2.two) AND (T2.three is null or T1.three like T2.three); ``` There will be duplicates if multiple rows in `T2` match a row in `T1`, so I'm still working on figuring that out.
From what I see, you have 2 problems: you cannot join tables on computed fields. -> all rows in T1 where column two starts with 2 -> all rows in T1 where column two starts with 3 AND column 3 equals hi cannot be done this way. However, I suggest you use sql fiddle to show your example. I have done it for you. <http://sqlfiddle.com/#!9/bb1fc/2>
find rows in Table1 that match criteria rows in Table2
[ "", "sql", "ms-access", "ms-access-2013", "" ]
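The NULL-as-wildcard predicate from the self-answer above can be reproduced end-to-end in SQLite. This is a sketch, not Access syntax: the `*` wildcard becomes SQL's `%`, and the data is the sample from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (one TEXT NOT NULL, two TEXT, three TEXT);
CREATE TABLE t2 (one TEXT, two TEXT, three TEXT, cat INTEGER);
INSERT INTO t1 VALUES ('aaaa','1111',NULL),('bbbb','2222',NULL),
                      ('cccc',NULL,'test'),('dddd',NULL,NULL);
-- NULL stands in for the '*' wildcard; '%' is SQL's LIKE wildcard
INSERT INTO t2 VALUES ('aaaa',NULL,NULL,1),(NULL,'2%',NULL,2),
                      (NULL,NULL,'test',3);
""")
rows = conn.execute("""
SELECT t1.one, t2.cat
FROM t1 JOIN t2
  ON (t2.one   IS NULL OR t1.one   = t2.one)
 AND (t2.two   IS NULL OR t1.two   LIKE t2.two)
 AND (t2.three IS NULL OR t1.three LIKE t2.three)
ORDER BY t1.one
""").fetchall()
print(rows)  # [('aaaa', 1), ('bbbb', 2), ('cccc', 3)] -- no spurious matches
```

Note that `dddd` correctly matches nothing: a NULL value in `t1` only passes a criterion when that criterion is itself the NULL wildcard.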
I want to insert a not null value. This is an example: ``` create table time( a varchar2(9), b varchar2(9), c varchar2(9)); table create insert into time (a,c) values ('qq','ee'); table altered ``` When I key this: ``` alter table time modify b varchar2(9) not null; ``` This error comes out: ``` alter table time * ERROR at line 1: ORA-02296: cannot enable (DIP0114713.) - null values found ``` So how can I insert values into the `a` and `c` columns while column `b` is also NOT NULL?
If you don't mention a column in an `insert`, then it gets the default value. By default, the default value is `NULL`. You can specify something else (in Oracle, the `DEFAULT` clause has to come before the `NOT NULL` constraint): ``` create table time ( a varchar2(9), b varchar2(9) default 'NO VALUE' not null, c varchar2(9) ); ``` EDIT: To get your `alter table` to work, update the value first: ``` update time set b = 'NO VALUE' where b is null; ```
It's not possible. First you need to update the data in column b and then apply not null constraint. ``` Update time set b= 'NA' where b is null go ALTER TABLE time ALTER COLUMN b varchar(9) NOT NULL go ```
Null values found
[ "", "sql", "oracle", "" ]
I'm using Microsoft SQL 2012. I'm using OFFSET and FETCH NEXT for paging in a query. I do need the overall count though, so I'm using OVER() to get it. The problem I'm having is that when I have a UNION in one of my queries, OVER() will return the total for only that part of the UNION. Example: ``` SELECT something ,overall_count = COUNT(*) OVER() FROM tbl_something WHERE something = @someparameter UNION SELECT something ,overall_count = COUNT(*) OVER() FROM tbl_something WHERE something = @someOtherParameter OFFSET (@intPage - 1) * 50 ROWS FETCH NEXT 50 ROWS ONLY ``` In the example above, if there are 10 results for each SELECT statement in the query, overall_count will return as 10 for each record. But I need the count of all the records.
Just need to combine the 2 queries then get your count and offset ``` SELECT *, overall_count = COUNT(*) OVER() FROM ( SELECT something FROM tbl_something WHERE something = @someparameter UNION SELECT something FROM tbl_something WHERE something = @someOtherParameter ) u ORDER BY something OFFSET (@intPage - 1) * 50 ROWS FETCH NEXT 50 ROWS ONLY ``` or you can use a cte if it makes more sense ``` ;WITH cte AS ( SELECT something FROM tbl_something WHERE something = @someparameter UNION SELECT something FROM tbl_something WHERE something = @someOtherParameter ) SELECT *, overall_count = COUNT(*) OVER() FROM cte ORDER BY something OFFSET (@intPage - 1) * 50 ROWS FETCH NEXT 50 ROWS ONLY ```
Have you tried wrapping an additional Select Statement around your union? like below... ``` SELECT * FROM ( SELECT something ,overall_count = COUNT(*) OVER() FROM tbl_something WHERE something = @someparameter UNION SELECT something ,overall_count = COUNT(*) OVER() FROM tbl_something WHERE something = @someOtherParameter ) a ORDER BY something OFFSET (@intPage - 1) * 50 ROWS FETCH NEXT 50 ROWS ONLY ```
Using OVER() in a query that has a UNION
[ "", "sql", "sql-server", "" ]
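The accepted answer's shape — combine the UNION first, then count — can be sanity-checked outside SQL Server. This SQLite sketch (window functions need SQLite ≥ 3.25; SQLite pages with LIMIT/OFFSET instead of OFFSET … FETCH, and the table and data are made up) shows the count covering the whole combined set while paging still works:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_something (something TEXT, grp INTEGER);
INSERT INTO tbl_something VALUES ('a',1),('b',1),('c',2),('d',2);
""")
# Wrap the UNION first, then apply COUNT(*) OVER () to the combined set
rows = conn.execute("""
SELECT something, COUNT(*) OVER () AS overall_count
FROM (SELECT something FROM tbl_something WHERE grp = 1
      UNION
      SELECT something FROM tbl_something WHERE grp = 2)
ORDER BY something
LIMIT 2 OFFSET 0
""").fetchall()
print(rows)  # the count is 4 (all combined rows) even though the page has 2
```

The window function is evaluated over the full result set before LIMIT/OFFSET is applied, which is exactly why wrapping the UNION fixes the per-branch counts.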
Does Oracle have a builtin function to create a date from its individual components (year, month and day) that just returns null on missing data? I'm aware of `TO_DATE()` but I need to compose a string first and neither the `||` operator nor the `CONCAT()` function make it easy to handle missing data: ``` -- my_year NUMBER(4,0) NULL SELECT TO_DATE(my_year || '-01-01', 'YYYY-MM-DD') AS my_date FROM my_table; ``` Whenever `my_year` is `NULL` we end up with `TO_DATE('-01-01', 'YYYY-MM-DD')` and: ``` ORA-01841: (full) year must be between -4713 and +9999, and not be 0 ```
For your example, you can use `case`: ``` select (case when my_year is not null and my_year <> 0 and my_year between -4713 and 9999 then TO_DATE(my_year || '-01-01', 'YYYY-MM-DD') end) ``` Unfortunately, Oracle does not have a built-in way to attempt the conversion and return NULL when it fails. SQL Server recently introduced `try_convert()` for this purpose. One option is to write your own function with an exception handler for the failed conversion. The exception handler would simply return `NULL` for a bad format.
I've eventually composed a user-defined function to encapsulate the logic. It returns a date from its individual components or `NULL` if the date is not valid: ``` CREATE OR REPLACE FUNCTION TRY_TO_DATE ( V_YEAR IN NUMBER, V_MONTH IN NUMBER, V_DAY IN NUMBER ) RETURN DATE DETERMINISTIC IS BEGIN RETURN TO_DATE(LPAD(V_YEAR, 4, '0') || LPAD(V_MONTH, 2, '0') || LPAD(V_DAY, 2, '0'), 'YYYYMMDD'); EXCEPTION WHEN OTHERS THEN RETURN NULL; END TRY_TO_DATE; / ``` [Fiddle](https://dbfiddle.uk/sqP-t3-p)
Create date from year, month and day
[ "", "sql", "oracle", "oracle10g", "oracle-xe", "" ]
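The self-answered TRY_TO_DATE pattern — attempt the conversion, return NULL on failure — translates directly to other languages. A Python sketch of the same contract:

```python
from datetime import date

def try_to_date(year, month, day):
    """Return a date, or None when any part is missing or invalid
    (a Python analogue of the TRY_TO_DATE function above)."""
    try:
        return date(int(year), int(month), int(day))
    except (TypeError, ValueError):
        return None

ok = try_to_date(2015, 7, 22)
bad = try_to_date(None, 1, 1)  # missing year -> None, no exception raised
print(ok, bad)  # 2015-07-22 None
```

Catching only the expected exception types (rather than everything, as the PL/SQL `WHEN OTHERS` does) keeps genuinely unexpected errors visible.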
I have an existing T-SQL query that returns a list of customers who need to be sent updates. As a part of the WHERE clause, the query checks to make sure the version conforms to a length of 6, and then appends additional characters. Our latest version has a longer version, so I need to add a conditional statement to the SQL, but I keep getting an error when I do so. I won't post the entire statement for brevity. The SQL statement returns values when I run it like this. ``` (SELECT MAX(substring(cversion, 1, 4) + case when len(cversion) = 6 then '0' else '' end + substring(cversion,5,3)) FROM version GROUP BY iproductid, LEFT(cversion, 3))) ``` Below are my modifications. I know that the else code is identical, but I am just trying to get it to run currently. All I did was add the `CASE` and `ELSE`, and duplicate what was done, but now I get errors stating > Incorrect syntax near the keyword SELECT along with two similar errors near the added keywords. ``` vr.cversion IN ( CASE WHEN product.iproductid < 8 THEN SELECT MAX ( substring(cversion,1,4) + case when len(cversion) = 6 then '0' else '' end + substring(cversion,5,3) ) FROM version GROUP BY iproductid,LEFT(cversion, 3) ELSE SELECT MAX ( substring(cversion,1,4) + case when len(cversion) = 6 then '0' else '' end + substring(cversion,5,3) ) FROM version GROUP BY iproductid,LEFT(cversion, 3) END ) ``` Could anyone tell me what I am doing wrong? Thanks. **UPDATE** Several people have said I need to use CASE instead. I had tried that with the same results, but I updated the code to show how I organized it.
You need parentheses around the select statements so that they get parsed as sub-queries. I noticed that the two code blocks are identical, but I guess they are meant to not be? ``` vr.cversion IN ( CASE WHEN product.iproductid < 8 THEN ( SELECT MAX ( substring(cversion,1,4) + case when len(cversion) = 6 then '0' else '' end + substring(cversion,5,3) ) FROM version GROUP BY iproductid,LEFT(cversion, 3) ) ELSE ( SELECT MAX ( substring(cversion,1,4) + case when len(cversion) = 6 then '0' else '' end + substring(cversion,5,3) ) FROM version GROUP BY iproductid,LEFT(cversion, 3) ) END ) ```
You can't use `IF` inside a TSQL Statement. You can only use it on Stored Procedures. Use `CASE` or [`IIF`](https://msdn.microsoft.com/en-us/library/hh213574%28v=sql.110%29.aspx) instead
Using a Conditional in a T-SQL WHERE Clause
[ "", "sql", "t-sql", "" ]
I have the following which brings me back 3 sets of results correctly ``` SELECT TOP (1) rid,score, weight FROM dbo.tblThree WHERE (caseNo = '111111111') ORDER BY rID DESC SELECT TOP (1) rid,score, weight FROM dbo.tblTwo WHERE (caseNo = '111111111') ORDER BY rID DESC SELECT TOP (1) rid,score, weight FROM dbo.tblOne WHERE (caseNo = '111111111') ORDER BY rID DESC ``` Adding a `UNION ALL` between them fails because of the `ORDER BY` statements. However, if I get rid of them, then it fails because it doesn't get the latest record. Is there a simpler solution to this? What I want is a single SQL statement to output the 3 rows.
You can use the following to get your desired output: ``` WITH data AS ( SELECT rid,score, weight, ROW_NUMBER() OVER (ORDER BY rID DESC) AS rn FROM dbo.tblThree WHERE (caseNo = '111111111') UNION ALL SELECT rid,score, weight, ROW_NUMBER() OVER (ORDER BY rID DESC) AS rn FROM dbo.tblTwo WHERE (caseNo = '111111111') UNION ALL SELECT rid,score, weight, ROW_NUMBER() OVER (ORDER BY rID DESC) AS rn FROM dbo.tblOne WHERE (caseNo = '111111111') ) SELECT * FROM data WHERE rn = 1 ```
Try the following: ``` select * from ( select top (1) rid, score, weight from dbo.tblThree where caseNo = '111111111' order by rid desc) t1 union all select * from ( select top (1) rid, score, weight from dbo.tblTwo where caseNo = '111111111' order by rid desc) t2 union all select * from ( select top (1) rid, score, weight from dbo.tblOne where caseNo = '111111111' order by rid desc) t3 ```
Select Max Records on UNION ALL
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "union-all", "" ]
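For what it's worth, the "latest row per table, then combine" shape is easy to verify on a toy dataset. In this SQLite sketch (table names and data are made up; `ORDER BY … LIMIT 1` inside a derived table plays the role of `TOP (1)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_one (rid INTEGER, score INTEGER);
CREATE TABLE tbl_two (rid INTEGER, score INTEGER);
INSERT INTO tbl_one VALUES (1, 10), (2, 20);
INSERT INTO tbl_two VALUES (7, 70), (9, 90);
""")
# Each derived table picks its own latest row before the UNION ALL
rows = conn.execute("""
SELECT * FROM (SELECT rid, score FROM tbl_one ORDER BY rid DESC LIMIT 1)
UNION ALL
SELECT * FROM (SELECT rid, score FROM tbl_two ORDER BY rid DESC LIMIT 1)
""").fetchall()
print(sorted(rows))  # one latest row per table
```

The key point in both answers is the same: the per-table ordering has to happen *inside* a subquery (or via ROW_NUMBER), because a bare ORDER BY is not allowed between UNION ALL branches.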
We have a column that stores a value in 24 hr time format. It is a string/text that users enter onto the interface. I need to convert it into SQL time format so that I can do time difference calculations. How can I convert the string to time? Example: ``` StringColumn 1400 1600 ``` needs to be ``` TimeColumn 1400 1600 ``` so that I can calculate the time difference to get 2 hrs. Thanks.
If your string value are always 4 characters (meaning 01-09 and not 1-9 for early hours) then this works: ``` convert(time, stuff(StringColumn,3,0,':')) ```
You can do a conversion as in @jpw's answer, especially if you can use `DATEDIFF` on the results to get what you need. Alternately you could perhaps do it as integer maths like: ``` SELECT (60*left('1900',2) + right('1900',2)) - (60*left('1400',2) + right('1400',2)) ``` (I have used constants here, but you can replace `'1900'` and `'1400`' with column names).
24hr time format string to sql time format
[ "", "sql", "sql-server-2012", "" ]
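Both answers reduce to the same arithmetic: hours × 60 + minutes. A quick Python sketch with the same 4-character assumption, useful for checking expected results outside the database:

```python
def hhmm_to_minutes(s):
    # assumes a 4-character 24h string such as '1400' (i.e. '0900', not '900')
    return int(s[:2]) * 60 + int(s[2:])

diff_hours = (hhmm_to_minutes("1600") - hhmm_to_minutes("1400")) / 60
print(diff_hours)  # 2.0
```

If 3-character values like '900' can occur, left-padding with zero first (the equivalent of `RIGHT('0' + col, 4)` in T-SQL) keeps the slicing valid.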
I'm trying to update a timestamp column in MySQL so that only its time portion changes, but I don't know how. How can I do it? Example: Time original: 2015-07-20 22:10:52 Updated: 2015-07-20 23:59:59
You can use [`timestamp`](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestamp) to join the current `timestamp`'s date with the time you want to set: ``` UPDATE mytable SET mytimestamp = TIMESTAMP(DATE(mytimestamp), '23:59:59') ```
To update with no reference to previous column value, it's no different from other columns. To update with it, and based on certain part (second, month, year, whatever), you can use [DATE\_ADD](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date-add) or [DATE\_SUB](https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html#function_date-sub) function. Any other function in the same page might be useful, too, depending on your needs.
Update time on timestamp column
[ "", "mysql", "sql", "timestamp", "sql-update", "" ]
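The accepted answer's trick — keep the date part, swap in a fixed time — is the same operation as this Python sketch (dates taken from the question):

```python
from datetime import datetime, time

original = datetime(2015, 7, 20, 22, 10, 52)
# Keep the date part, replace the time part -- the same idea as
# TIMESTAMP(DATE(mytimestamp), '23:59:59') in the accepted answer
updated = datetime.combine(original.date(), time(23, 59, 59))
print(updated)  # 2015-07-20 23:59:59
```

Splitting a timestamp into its date and time components and recombining them is usually clearer than string manipulation on the formatted value.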
The following query excludes all accounts where `AMT` balance is 0. Although I am trying to make an exception for accounts `L1_ORG_SEG_ID = 101, 102 and 105` even if it's value is equal to 0. Can someone point me into the right direction? Thanks! ``` SELECT 'Table' colA ,O.L1_ORG_SEG_ID colb ,A.L1_ACCT_SEG_ID colc ,A.L2_ACCT_SEG_ID cold ,A.L3_ACCT_SEG_ID cole ,' ' colf ,A.ACCT_NAME colg ,'USD' colh ,SUM(G.AMT) coli ,' ' colj ,' ' colk ,' ' coll ,' ' colm ,'4' coln ,'2015' colo ,'4/24/15' colp ,' ' colq ,' ' colr ,' ' cols ,' ' colt ,' ' colu ,' ' colv FROM GL_POST_SUM G INNER JOIN ACCT A ON A.ACCT_ID = G.ACCT_ID INNER JOIN ORG O ON O.ORG_ID = G.ORG_ID WHERE G.FY_CD = '2015' AND G.PD_NO < 5 AND A.S_ACCT_TYPE_CD IN ( 'L' ,'A' ) AND G.ORG_ID NOT LIKE 'J%' AND A.ACTIVE_FL = 'Y' AND O.L1_ORG_SEG_ID NOT IN ( '125' ,'126' ,'127' ,'129' ) GROUP BY O.L1_ORG_SEG_ID ,A.L1_ACCT_SEG_ID ,A.L2_ACCT_SEG_ID ,A.L3_ACCT_SEG_ID ,A.ACCT_NAME HAVING SUM(G.AMT) <> 0 ORDER BY 2 ```
Something like this should work ``` SELECT 'Table' colA,O.L1_ORG_SEG_ID colb,A.L1_ACCT_SEG_ID colc, A.L2_ACCT_SEG_ID cold,A.L3_ACCT_SEG_ID cole, ' ' colf,A.ACCT_NAME colg, 'USD' colh,SUM(G.AMT) coli,' ' colj,' ' colk,' ' coll,' ' colm,'4' coln,'2015' colo, '4/24/15' colp,' ' colq,' ' colr,' ' cols,' ' colt,' ' colu,' ' colv FROM GL_POST_SUM G JOIN ACCT A ON A.ACCT_ID = G.ACCT_ID JOIN ORG O ON O.ORG_ID = G.ORG_ID WHERE G.FY_CD='2015' AND G.PD_NO < 5 AND A.S_ACCT_TYPE_CD IN ('L','A') AND G.ORG_ID NOT LIKE 'J%' AND A.ACTIVE_FL = 'Y' AND O.L1_ORG_SEG_ID NOT IN ('125', '126', '127', '129') GROUP BY O.L1_ORG_SEG_ID,A.L1_ACCT_SEG_ID,A.L2_ACCT_SEG_ID,A.L3_ACCT_SEG_ID,A.ACCT_NAME HAVING SUM(G.AMT) <> 0 OR O.L1_ORG_SEG_ID in (101, 102, 105) order by 2 ```
You can OR the exclusion condition in your HAVING so it always includes the specified accounts like so: `HAVING (SUM(G.AMT) <> 0 OR O.L1_ORG_SEG_ID IN ('101', '102', '105'))`
SQL query exception
[ "", "sql", "" ]
I'm sure I've done this type of operation a 1000 times before but for some reason this is not working for me. I'm doing a report to determine if a patient receives medication on a given day. So regardless of whether they get 1 dose or 5 doses in a day, the value should be 1. Staff also do corrections on the system, which come in as negative values. So I need to sum all of the dose values for each day; if it is a + value then it's 1, otherwise it's 0. All I want to accomplish at this point is to have 1 row for each date as either 1 or 0. Here is my SQL Query to sum the values: ``` SELECT DIM_DRUG_NAME_SHORT.Drug_Name_Short AS 'Med_Name_Short' , SUM(Baseline.Doses) as 'DOT' , Day(Baseline.Dispense_Date) as 'd_Date' FROM FACT_AMS_Baseline_Report Baseline INNER JOIN DIM_DRUG_NAME_SHORT ON Baseline.Med_Name_ID = DIM_DRUG_NAME_SHORT.Drug_Name_Long INNER JOIN DIM_Date tDate ON Baseline.Dispense_Date = tDate.Date WHERE Baseline.Encounter = '00000001/01' GROUP BY DIM_DRUG_NAME_SHORT.Drug_Name_Short , Baseline.Dispense_Date , Doses Order By Drug_Name_Short ``` For the time being I'm just pulling one encounter out of the data set to test with. This is the output I'm getting. I also included the Day in the select just to show that the same day is coming through twice and the rows are not getting summed. Here is a sample of the output I get: ``` Med_Name_Short DOT day of month CEFTRIAXONE 1 15 CEFTRIAXONE 1 16 CEFTRIAXONE 4 16 CEFTRIAXONE 1 17 CEFTRIAXONE 1 18 CEFTRIAXONE 1 20 CEFTRIAXONE -3 21 CEFTRIAXONE 1 21 CEFTRIAXONE -1 23 PROPRANOLOL -1 24 PROPRANOLOL 3 24 PROPRANOLOL 1 25 PROPRANOLOL 2 26 PROPRANOLOL 2 27 ``` What I was hoping to see in this was that Day 16 would be a 5, day 21 would be -2 and day 24 would be 2. Any assistance would be greatly appreciated. Thanks
I don't think you should be grouping by doses. Without seeing your data, I can only guess that, for example, there are two doses of quantity 2 on the 16th. So try: ``` SELECT DIM_DRUG_NAME_SHORT.Drug_Name_Short AS 'Med_Name_Short' , SUM(Baseline.Doses) as 'DOT' , Day(Baseline.Dispense_Date) as 'd_Date' FROM FACT_AMS_Baseline_Report Baseline INNER JOIN DIM_DRUG_NAME_SHORT ON Baseline.Med_Name_ID = DIM_DRUG_NAME_SHORT.Drug_Name_Long INNER JOIN DIM_Date tDate ON Baseline.Dispense_Date = tDate.Date WHERE Baseline.Encounter = '00000001/01' GROUP BY DIM_DRUG_NAME_SHORT.Drug_Name_Short , Baseline.Dispense_Date Order By Drug_Name_Short ```
Remove Doses from your Group By list. You are using an aggregate function on it (SUM) which is correct, so it should not be in the GROUP BY.
SQL - Get Sum of Values with same Date
[ "", "sql", "sql-server", "" ]
How can I populate the `NULL` value with the previous row per group? Say something like, ``` +--------+---------+--------+ | Date | Product | Amount | + + + + | 7/1/15 | Prod1 | 5 | | 7/1/15 | Prod2 | 7 | | 7/1/15 | Prod3 | 9 | | 8/1/15 | Prod1 | NULL | | 8/1/15 | Prod2 | 8 | | 8/1/15 | Prod3 | NULL | | 9/1/15 | Prod1 | 1 | | 9/1/15 | Prod2 | NULL | | 9/1/15 | Prod3 | NULL | | 10/1/15| Prod1 | NULL | +--------+---------+--------+ ``` To achieve something like this: ``` +--------+---------+--------+ | Date | Product | Amount | + + + + | 7/1/15 | Prod1 | 5 | | 7/1/15 | Prod2 | 7 | | 7/1/15 | Prod3 | 9 | | 8/1/15 | Prod1 | 5 | | 8/1/15 | Prod2 | 8 | | 8/1/15 | Prod3 | 9 | | 9/1/15 | Prod1 | 1 | | 9/1/15 | Prod2 | 8 | | 9/1/15 | Prod3 | 9 | | 10/1/15| Prod1 | 1 | +--------+---------+--------+ ``` Does this make sense? I have no idea where to start. Any help would be much appreciated. Thanks! **EDIT** Rule: * If the `Amount` column is `NULL` then it should be populated with the most recent previous `Amount` in the same `Product` category which is not `NULL`. Say for example, above is sample data. This row ``` Date | Product | Amount 8/1/15 | Prod1 | NULL ``` Its amount should be populated with `5` since it should get the value prior to it in the same `Product` category.
You can use `ISNULL` (or `COALESCE`) and a correlated subquery: ``` SELECT t.Date, t.Product, Amount = ISNULL(t.Amount, (SELECT TOP 1 Amount FROM dbo.TableName t2 WHERE t2.Product = t.Product AND t2.Amount IS NOT NULL AND t2.Date <= t.Date ORDER BY t2.Date DESC)) FROM dbo.TableName t ``` `Demo` with your sample data. I prefer `ISNULL` over `COALESCE` since the latter [will be translated to a `CASE` that is executed twice](https://www.mssqltips.com/sqlservertip/2689/deciding-between-coalesce-and-isnull-in-sql-server/). You can read more about the issue [at MS-Connect](https://connect.microsoft.com/SQLServer/feedback/details/336002/unnecessarily-bad-performance-for-coalesce-subquery).
You need a query like this: ``` ;WITH t AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Product ORDER BY [Date]) rn FROM yourTable) , tt AS ( SELECT t1.[Date], t1.Product, t1.Amount, MAX(CASE WHEN t2.Amount IS NOT NULL THEN t2.rn END) AS LastSeq FROM t t1 LEFT JOIN t t2 ON t1.Product = t2.Product AND t2.rn <= t1.rn GROUP BY t1.[Date], t1.Product, t1.Amount) SELECT tt.[Date], tt.Product, ISNULL(tt.Amount, t.Amount) As Amount FROM tt JOIN t ON tt.Product = t.Product AND tt.LastSeq = t.rn ORDER BY tt.[Date], tt.Product ```
How to populate NULL column with the previous column per group?
[ "", "sql", "sql-server", "null", "" ]
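Both answers implement a per-group "last observation carried forward". Stripped of SQL, the rule looks like this (a sketch; rows are assumed already ordered by date, which the queries' ORDER BY guarantees):

```python
rows = [  # (date, product, amount), ordered by date as in the question
    ("7/1/15", "Prod1", 5), ("7/1/15", "Prod2", 7),
    ("8/1/15", "Prod1", None), ("8/1/15", "Prod2", 8),
    ("9/1/15", "Prod1", 1), ("9/1/15", "Prod2", None),
]
last_seen = {}
filled = []
for d, product, amount in rows:
    if amount is None:
        amount = last_seen.get(product)  # previous non-NULL for this product
    else:
        last_seen[product] = amount
    filled.append((d, product, amount))
print(filled)
```

Note that a group whose very first amount is NULL stays None here, matching the SQL versions, where no earlier non-NULL row exists to copy from.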
I'm trying to write a `Select` statement that buckets a column value in increments of 50, but the range can end up being 200,000 so I can't do it all in a case statement manually. Something similar to this, but instead of manually writing the `increments`: ``` Select count(order_id) as order_count, case when revenue between 0 and 50 then '$50' when revenue between 51 and 100 then '$100' else 'over $101' end as revenue_bucket from Orders group by 2 ```
Turn your `revenue` into the bucket value, then make string out of it: ``` SELECT count(order_id) AS order_count, '$' || ((((revenue - 0.01)/50)::int + 1) * 50)::text AS revenue_bucket FROM Orders GROUP BY 2; ``` This obviously runs well past $200,000.
You can work with modulo to get this. Limit would be 101 in your example. All you have to do, is cast the result in a string and add the $ before it ``` Select count(order_id) as order_count, case when revenue < limit then revenue - (revenue % 50) + 50 else ‘over $101’ end as revenue_bucket from Orders group by 2 ```
SQL Increment column value in select statement
[ "", "sql", "postgresql", "amazon-redshift", "" ]
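Both answers replace the hand-written CASE with arithmetic: round the value up to the next multiple of 50 and label it. The same bucketing in Python (a sketch assuming revenue > 0; zero or negative values would need their own rule):

```python
import math

def revenue_bucket(revenue, width=50):
    # rounds up to the next multiple of `width`, so 1-50 -> $50, 51-100 -> $100
    return f"${math.ceil(revenue / width) * width}"

buckets = [revenue_bucket(r) for r in (1, 50, 51, 100, 149)]
print(buckets)  # ['$50', '$50', '$100', '$100', '$150']
```

The boundary behavior (a value of exactly 50 landing in the $50 bucket) matches the question's `between 0 and 50` intent; adjust the rounding if exclusive bounds are wanted instead.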
I need to show the data from the DB in a table in a report file. `my_table` looks like: ``` +----+-------+------+------+-------------------+-----------+-------+----+-------------------+ | id |entryID|userID|active| dateCreated |affiliateId|premium|free| endDate | | 1 | 69856 | 1 | N |2014-03-22 13:54:49| 1 | N | N |2014-03-22 13:54:49| | 2 | 63254 | 2 | Y |2014-03-21 13:35:15| 2 | Y | N | | | 3 | 56324 | 3 | N |2014-03-21 11:11:22| 2 | Y | N |2014-02-22 16:44:46| | 4 | 41256 | 4 | Y |2014-03-21 08:10:46| 1 | N | Y | | | .. | ... | ... | ... | ... | ... | ... | .. | ... | +----+-------+------+------+-------------------+-----------+-------+----+-------------------+ ``` I need to create the table with data from `my_table` ``` | Date | № of Entries (in that date) | Total № of Entries | Premium | Free | Affiliate | ``` The final table in the file should look like: Report 17-07-2013: ``` +----------+--------------+-------+---------+------+-----------+ | Date | № of Entries | Total | Premium | Free | Affiliate | |2013-07-17| 2 | 99845 | 2 | 0 | 0 | |2013-07-18| 1 | 99843 | 0 | 1 | 0 | |2013-07-22| 1 | 99842 | 1 | 0 | 1 | |2013-07-23| 3 | 99841 | 2 | 1 | 2 | |2013-07-24| 298 | 99838 | 32 | 273 | 25 | |2013-07-25| 5526 | 99540 | 474 | 5058 | 126 | |2013-07-26| 1686 | 94014 | 157 | 1532 | 56 | |2013-07-27| 1673 | 92328 | 156 | 1517 | 97 | |2013-07-28| 1461 | 90655 | 155 | 1310 | 83 | | ... | ... | ... | ... | ... | ... | +----------+--------------+-------+---------+------+-----------+ ``` Should I do a `SELECT` for each column, or should I do only 1 select? If it is possible to do it with 1 select, how do I do it? It should be by analogy with this report: [report](http://storage4.static.itmages.com/i/15/0722/h_1437597193_5941024_287b307eaa.png) Some fields differ (like 'Number of Entries in that date'). Total number of Entries means: all entries from the beginning to that specific date. Number of Entries in that date means: all entries in that date.
In a final table the date from column Date will not repeat, that's why Column 'Number of Entries (in that date)' will calculate all entries for that date.
SQLFiddle Demo: <http://sqlfiddle.com/#!9/20cc0/5> The added column `entryID` does not matter for us. I don't really understand what you want for `Total`, or the criteria for `affiliateID`. This query should get you started. ``` SELECT DATE(dateCreated) as "Date", count(dateCreated) as "No of Entries", 99845 as Total, sum( case when premium='Y' then 1 else 0 end ) as Premium, sum( case when premium='N' then 1 else 0 end ) as Free, sum( case when affiliateID IS NOT NULL then 1 else 0 end) as Affiliate FROM MyTable GROUP BY DATE(dateCreated) ORDER BY Date ASC ``` > The final table in file should looks like: > ... This new table can be in a file or in the web page. But it is not a new table in DB. – It sounds like you may be new to this area so I just wanted to inform you that spitting out a report into a **file** for a **website** is highly unusual and typically only done when your data is completely separate from the website. Putting data from a database onto a website (like the query we made here) is very common and it's very likely you don't need to mess with any files.
Your result is not so clear for the total is a count or sum and affiliate is sum or count also but assuming total will be count and affiliate will be sum here a query you might use to give you a result ( using ms-sql ) ``` select DateCreated,count(EntryId) as Total, sum(case when Premium='Y' then 1 else 0 end) as Premium, sum(case when Premium='N' then 1 else 0 end) as Free, sum(AffiliateId) as Affiliate from sample group by DateCreated ``` here a working [demo](http://www.sqlfiddle.com/#!3/5c8bc/1) if I didn't understood you correctly, kindly advise hope it will help you
How to do one big select from table in MySQL?
[ "", "mysql", "sql", "database", "" ]
I am trying to count same values in a column and want it to return a count thereof. ``` | ITEM | COUNT | +-------+-------+ | GREEN | 1 | | GREEN | 2 | | GREEN | 3 | | RED | 1 | | RED | 2 | ``` I tried ``` ROW_NUMBER() OVER (ORDER BY ITEM) AS Row ``` But that only counts each line 1 - 1000. How would I accomplish this?
You'll want to include a `partition by` clause with the row\_number function. This makes the row\_number restart from 1 for each new type of item. ``` ROW_NUMBER() OVER (PARTITION BY ITEM ORDER BY ITEM) AS Row ``` this would give you a result like: ``` item Row GREEN 1 GREEN 2 GREEN 3 RED 1 RED 2 ```
Try: ``` ROW_NUMBER() OVER (PARTITION BY ITEM ORDER BY ITEM) AS Row ``` or ``` ROW_NUMBER() OVER (PARTITION BY ITEM ORDER BY COUNT) AS Row ``` You can have count of each `ITEM` like this: ``` COUNT(*) OVER (PARTITION BY ITEM) As CountByItem ```
Count same values in column SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "count", "" ]
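PARTITION BY is essentially a counter that resets per key. This Python sketch shows what `ROW_NUMBER() OVER (PARTITION BY ITEM ORDER BY ITEM)` computes for the sample column (items assumed pre-sorted, as the ORDER BY would ensure):

```python
from collections import Counter

items = ["GREEN", "GREEN", "GREEN", "RED", "RED"]
seen = Counter()
numbered = []
for item in items:
    seen[item] += 1  # restart-at-1 counter per item, like PARTITION BY
    numbered.append((item, seen[item]))
print(numbered)
```

If only the final total per value is wanted (rather than a running number), that is `COUNT(*) OVER (PARTITION BY ITEM)` in SQL, or `Counter(items)` in this sketch.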
Can someone please tell me what's wrong with this query? In SQL Server we just check the presence of the Object_ID of a table to drop it and re-create it. I am new to Oracle and wrote this query: ``` declare Table_exists INTEGER; BEGIN Select count(*) into Table_exists from sys.all_tables where table_name='TABLENAME1'; EXCEPTION WHEN NO_DATA_FOUND THEN Table_Exists :=0; if(table_exists)=1 Then Execute Immediate 'Drop Table TABLENAME1;' 'Create Table TABLENAME1;'; DBMS_OUTPUT.PUT_LINE('Table Dropped and Re-Created!'); Else Execute Immediate 'Create Table TABLENAME1;'; DBMS_OUTPUT.PUT_LINE('New Table Created!'); END IF; END; ``` I get the output - ANONYMOUS BLOCK COMPLETED, but the table is not created. The table existed previously, so I dropped it to check if the PL/SQL is actually creating the table, but NO. What is wrong here? What am I missing? Please guide.
The EXCEPTION clause lasts till the next END and not just the next statement. If you want to continue after catching the exception you need to add an additional BEGIN/END. Note also that the statements you pass to EXECUTE IMMEDIATE must not have a trailing semicolon, and CREATE TABLE needs at least one column: ``` declare Table_exists INTEGER; BEGIN BEGIN Select count(*) into Table_exists from sys.all_tables where table_name='TABLENAME1'; EXCEPTION WHEN NO_DATA_FOUND THEN Table_Exists :=0; END; if table_exists = 1 Then Execute Immediate 'Drop Table TABLENAME1'; Execute Immediate 'Create Table TABLENAME1 (num number)'; DBMS_OUTPUT.PUT_LINE('Table Dropped and Re-Created!'); Else Execute Immediate 'Create Table TABLENAME1 (num number)'; DBMS_OUTPUT.PUT_LINE('New Table Created!'); END IF; END; ``` As pointed out by Gordon, the EXCEPTION clause is not really needed in this case since `count(*)` will always return one row. So the following is sufficient: ``` declare Table_exists INTEGER; BEGIN Select count(*) into Table_exists from sys.all_tables where table_name='TABLENAME1'; if table_exists = 1 Then Execute Immediate 'Drop Table TABLENAME1'; Execute Immediate 'Create Table TABLENAME1 (num number)'; DBMS_OUTPUT.PUT_LINE('Table Dropped and Re-Created!'); Else Execute Immediate 'Create Table TABLENAME1 (num number)'; DBMS_OUTPUT.PUT_LINE('New Table Created!'); END IF; END; ```
When you are using `all_tables` filter the results for your schema by adding `where owner = 'your_schema'` or use `sys.user_tables` > ALL\_TABLES describes the relational tables accessible to the current user > > USER\_TABLES describes the relational tables owned by the current user. When use `execute_emmidiate` remove the `;` from the query; Modified query; ``` DECLARE Table_exists INTEGER; BEGIN Select count(*) into Table_exists from sys.user_tables where table_name='TABLENAME1'; --or --Select count(*) into Table_exists from sys.all_tables --where table_name='TABLENAME1' and owner = 'your_DB'; if table_exists = 1 Then Execute Immediate 'Drop Table TABLENAME1'; Execute Immediate 'Create Table TABLENAME1(num number)'; DBMS_OUTPUT.PUT_LINE('Table Dropped and Re-Created!'); Else Execute Immediate 'Create Table TABLENAME1(num number)'; DBMS_OUTPUT.PUT_LINE('New Table Created!'); END IF; END; ```
Oracle SQL - If Exists, Drop Table & Create
[ "", "sql", "oracle", "plsql", "" ]
In an SQL Server 2012 table I want to take all the rows from two columns and turn them into one row, still two columns, but with each column being comma-separated. For example ``` Customerid | FacilityId ----------------------------- 1 5678 2 9101 5 6543 ``` Then afterwards I'd like the results to be like this ``` Customerid | FacilityId ----------------------------- 1,2,5 5678,9101,6543 ```
You can use `FOR XML` like this [SQL Fiddle](http://sqlfiddle.com/#!3/c9478a/1) **Query** ``` SELECT STUFF(( SELECT ',' + CONVERT(VARCHAR(10),Customerid) FROM Customer FOR XML PATH('')),1,1,'') as Customerid, STUFF(( SELECT ',' + CONVERT(VARCHAR(10),FacilityId) FROM Customer FOR XML PATH('')),1,1,'') as FacilityId ``` **Output** ``` Customerid FacilityId 1,2,5 5678,9101,6543 ``` **EDIT** You can even use variables to concatenate the CSV together, which doesn't require 2 table scans like the `FOR XML` approach does; however, you may encounter issues when using it with `ORDER BY` or other functions in the same query. Since you have only 3-4 rows, I would suggest going with the `FOR XML` approach ``` DECLARE @Customerid VARCHAR(MAX) = '',@FacilityId VARCHAR(MAX) = '' SELECT @Customerid += ',' + CONVERT(VARCHAR(10),Customerid), @FacilityId += ',' + CONVERT(VARCHAR(10),FacilityId) FROM Customer SELECT STUFF(@Customerid,1,1,'') as Customerid, STUFF(@FacilityId,1,1,'') as FacilityId ```
Here is a simple and fast way using [CONCAT](https://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k(CONCAT_TSQL);k(SQL11.SWB.TSQLRESULTS.F1);k(SQL11.SWB.TSQLQUERY.F1);k(MISCELLANEOUSFILESPROJECT);k(DevLang-TSQL)&rd=true), it will work from sqlserver 2012: ``` DECLARE @t table(Customerid int, FacilityId int) INSERT @t values(1,5678),(2,9101),(5,6543) DECLARE @x1 varchar(max), @x2 varchar(max) SELECT @x1 = concat(@x1 + ',', Customerid), @x2 = concat(@x2 + ',', FacilityId) FROM @t SELECT @x1, @x2 ```
Comma separate two separate columns
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "" ]
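Outside the database, the requested aggregation is just a comma join per column (and from SQL Server 2017 on, `STRING_AGG` does this natively). In Python terms, using the sample rows:

```python
rows = [(1, 5678), (2, 9101), (5, 6543)]

customer_ids = ",".join(str(c) for c, _ in rows)
facility_ids = ",".join(str(f) for _, f in rows)
print(customer_ids, facility_ids)  # 1,2,5 5678,9101,6543
```

The `STUFF(..., 1, 1, '')` in both answers exists only to remove the leading comma that the `',' + value` concatenation produces; a join-style aggregation has no such artifact.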
I am entering the query below and getting duplicate values. I thought that if I did a Left Outer Join it wouldn't do that. I want `T0.` data for 2 of the 3 columns. The one column where I want `T1.` data is the customer name related to the customer code. But it seems to want to populate the record twice. Here is the code that I am attempting to use: ``` SELECT T0.CardCode ,T1.CardName ,T0.State FROM CRD1 T0 LEFT OUTER JOIN OCRD T1 ON T0.CardCode=T1.CardCode ```
Try using distinct keyword. ``` SELECT distinct T0.CardCode ,T1.CardName ,T0.State FROM CRD1 T0 LEFT OUTER JOIN OCRD T1 ON T0.CardCode=T1.CardCode ```
Usually this means you have multiple matches on the join predicate in the related table. The left outer join ensures you keep all rows from the left table regardless of match or not, but doesn't prevent multiple matches if they happen to exist. Example: ``` with _left (id) as ( select 3 union all select 4 union all select 5 ) ,_right(id) as ( select 3 union all select 3 ) select * from _left l left join _right r on l.id = r.id ``` Result: ``` id id 3 3 3 3 4 NULL 5 NULL ```
Why am I getting duplicate records for this query? I am performing a Left Outer Join
[ "", "sql", "t-sql", "join", "sapb1", "" ]
I am using Oracle SQL queries in an external program (Pentaho Data Integration (PDI)). I need to convert all columns to string values before I can proceed with using them. What I am looking for is something that automatically applies

```
select to_char(col1), to_char(col2),..., to_char(colN) from example_table;
```

to all columns, so that you might at best wrap this statement:

```
select * from example_table;
```

and all columns are automatically converted.

For explanation: I need this because PDI doesn't seem to work well when getting uncast DATE columns. Since I have dynamic queries, I do not know whether a DATE column exists and simply want to convert all columns to strings.

**EDIT** Since the queries vary and since I have a long list of them as input, I am looking for a more generic method than manually writing to\_char() in front of every column.
If you are looking for a solution in PDI, you need to create a job (.kjb) where in you take 2 transformations. **First .ktr** will rebuild the query and the **Second .ktr** will execute the new query. **1. First Transformation:** Rebuild the query [![enter image description here](https://i.stack.imgur.com/gig7Z.png)](https://i.stack.imgur.com/gig7Z.png) * Read the columns in the Source Table Step *(use Table Input step in your case)*. Write the query `select * from example_table;` and limit the rows to either 0 or 1. The idea here is not to fetch all the rows but to recreate the query. * Use **Meta Structure** Step to get the meta-structure of the table. It will fetch you the list of columns coming in from the prev. step. * In the Modified JavaScript step, use a small snip of code to check if the data type of column is **Date** and then concat **to\_Char(column)** to the rows. * Finally Group and Set the variables into a variable. This is the point where the fields are recreated for you automatically. Now the next step is to execute this field with the new query. **2. Second Transformation:** Using this set variable in the next step to get the result. `${NWFIELDNAME}` is the variable you have set with the modified column in the above transformation. [![enter image description here](https://i.stack.imgur.com/Lqi6P.png)](https://i.stack.imgur.com/Lqi6P.png) Hope this helps :) I have placed the code for the first ktr in [gist here](https://gist.github.com/rishuatgithub/dab811352e3e007c7a54).
> select TO\_CHAR(\*) from example\_table; You should not use `*` in your production code, it is a bad coding practice. You should explicitly mention the column names which you want to fetch. Also, **TO\_CHAR(\*)** makes no sense. How would you convert **date** to **string**? You must use proper **format model**. In conclusion, it would take a minute or two at max to list down the **column names** using a good **text editor**.
Convert all selected columns to_char
[ "", "sql", "oracle", "pentaho", "kettle", "to-date", "" ]
My string looks something like this: ``` \\\abcde\fghijl\akjfljadf\\ \\xyz\123 ``` I want to select everything between the 1st set and next set of slashes Desired result: ``` abcde xyz ``` EDITED: To clarify, the special character is always slashes - but the leading characters are not constant, sometimes there are 3 slashes and other times there are only 2 slashes, followed by texts, and then followed by 1 or more slashes, some more texts, 1 or more slash, so on and so forth. I'm not using any adapter at all, just looking for a way to select this substring in my SQL query Please advise. Thanks in advance.
Use substring, like this (only works for the specified pattern of two slashes, characters, then another slash): ``` declare @str varchar(100) = '\\abcde\cc\xxx' select substring(@str, 3, charindex('\', @str, 3) - 3) ``` Replace `@str` with the column you actually want to search, of course. The `charindex` returns the location of the first slash, starting from the 3rd character (i.e. skipping the first two slashes). Then the `substring` returns the part of your string starting from the 3rd character (again, skipping the first two slashes), and continuing until just before the next slash, as determined by `charindex`. Edit: To make this work with different numbers of slashes at the beginning, use `patindex` with regex to find the first alphanumeric character, instead of hardcoding that it should be the third character. Example: ``` declare @str varchar(100) = '\\\1Abcde\cc\xxx' select substring(@str, patindex('%[a-zA-Z0-9]%', @str), charindex('\', @str, patindex('%[a-zA-Z0-9]%', @str)) - patindex('%[a-zA-Z0-9]%', @str)) ```
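The `patindex`/`charindex` arithmetic above is easy to get off by one, so here is a hedged sketch of the same logic in plain Python. The helper name `first_segment` is made up for the demo; it mirrors the T-SQL steps (find first alphanumeric, then take everything up to the next backslash) and reproduces the desired results from the question.

```python
import re

def first_segment(s: str) -> str:
    """Mirror of the T-SQL: skip leading backslashes, then take
    everything up to the next backslash."""
    # patindex('%[a-zA-Z0-9]%', @str) equivalent: first alphanumeric position
    start = re.search(r"[A-Za-z0-9]", s).start()
    # charindex('\', @str, start) equivalent: next backslash at or after start
    end = s.find("\\", start)
    return s[start:end] if end != -1 else s[start:]

print(first_segment(r"\\\abcde\fghijl\akjfljadf\\"))  # abcde
print(first_segment(r"\\xyz\123"))  # xyz
```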
You could do a `cross join` to find the second position of the backslash. And then, use `substring` function to get the string between 2nd and 3rd backslash of the text like this: ``` SELECT substring(string, 3, (P2.Pos - 2)) AS new_string FROM strings CROSS APPLY ( SELECT (charindex('\', replace(string, '\\', '\'))) ) AS P1(Pos) CROSS APPLY ( SELECT (charindex('\', replace(string, '\\', '\'), P1.Pos + 1)) ) AS P2(Pos) ``` [**SQL Fiddle Demo**](http://www.sqlfiddle.com/#!3/2c39f/29/0) **UPDATE** In case, when you have unknown number of backslashes in your string, you could just do something like this: ``` DECLARE @string VARCHAR(255) = '\\\abcde\fghijl\akjfljadf\\' SELECT left(ltrim(replace(@string, '\', ' ')), charindex(' ',ltrim(replace(@string, '\', ' ')))-1) AS new_string ``` [**SQL Fiddle Demo2**](http://www.sqlfiddle.com/#!3/dfcec/2/0)
SQL: how to select a substring between special characters
[ "", "sql", "string", "sql-server-2014", "" ]
I have an old database to refactor and have come across this table:

Member

```
id | Name  | Role
1  | Tyler | 6
2  | Jane  | 16
3  | Jack  | 20
```

Role

```
id | role           | Level
1  | Visitor        | 1
2  | Member         | 2
3  | Monthly Member | 4
```

First, I was confused because the role in the member table didn't match any data in the role table. After analyzing the source code, I learned that this programmer had come up with his own way of doing it: the stored role is the sum of 2^(role's level) over the selected roles.

i.e. Jack is a Member & a Monthly Member, so Jack's role = 2^(member level) + 2^(monthly member level) = 2^2 + 2^4 = 20.

---

Well, here is what I want to do. I will create three tables representing a many-to-many relationship.

Member

```
id | Name
1  | Tyler
2  | Jane
3  | Jack
```

Role

```
id | role
1  | Visitor
2  | Member
3  | Monthly Member
```

Member\_Role

```
member_id | role_id
1         | 1
1         | 2
3         | 2
```

Now, I want to select all members and insert the appropriate member\_id & role\_id into the Member\_Role table, but I'm not sure how to code this query. My general idea is this...

```
//Select all members from member table
//if role is 16
//  insert his member id and monthly member id to member_role table
//if role is 20
//  insert his member id and member(role) id to member_role table
//  insert his member id and monthly member id to member_role table
// and so on....
```

The important thing is **I want to manually insert values depending on the condition**. This is because I will rearrange the roles' order, so the "selected roles = 2^(role's level)" formula will not work anymore. Thus, I prefer it to be hardcoded instead of generic. It's one-time-use code anyway... Thanks!!
There, you asked for an example of the ugly approach (non-einstein approach) For that one permutation ``` create table oldstuff ( id int not null, yuck int not null ); create table newstuff_junction ( userId int not null, roleId int not null ); insert oldstuff (id,yuck) values (1,20),(2,20); insert newstuff_junction (userId,roleId) select id,2 from oldstuff where yuck=20; insert newstuff_junction (userId,roleId) select id,4 from oldstuff where yuck=20; select * from newstuff_junction order by userId,roleId; +--------+--------+ | userId | roleId | +--------+--------+ | 1 | 2 | | 1 | 4 | | 2 | 2 | | 2 | 4 | +--------+--------+ 4 rows in set (0.00 sec) ```
You can do this using the bitwise "and" operator. I would expect following to get the matches: ``` select m.*, r.* from member m join role r on (r.level & m.role) > 0; ``` However, your example is suggesting something more like: ``` from member m join role r on ((1 << r.level) & m.role) > 0; ``` Do some investigation to see what the situation really is. The logic for the insertion into the new table is a bit more complicated. Assuming you have a `new_roles` table with the same names: ``` insert into member_roles(member_id, role_id) select m.id, nr.id from member m join role r on (r.level & m.role) > 0 join new_roles nr on nr.role = r.role; ``` The first `on` clause might be: ``` on ((1 << r.level) & m.role) > 0; ```
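The `(1 << level) & role` decode suggested above can be verified end to end. Here is a minimal sketch with Python's `sqlite3`, whose `<<` and `&` operators behave like MySQL's; the table contents are taken straight from the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (id INTEGER, name TEXT, role INTEGER);
CREATE TABLE role   (id INTEGER, role TEXT, level INTEGER);
CREATE TABLE member_role (member_id INTEGER, role_id INTEGER);

INSERT INTO member VALUES (1,'Tyler',6),(2,'Jane',16),(3,'Jack',20);
INSERT INTO role   VALUES (1,'Visitor',1),(2,'Member',2),(3,'Monthly Member',4);
""")

# (1 << level) & role > 0 decodes the 2^level encoding in one pass,
# so the whole migration is a single INSERT ... SELECT.
conn.execute("""
INSERT INTO member_role (member_id, role_id)
SELECT m.id, r.id
FROM member m JOIN role r ON ((1 << r.level) & m.role) > 0
""")
rows = sorted(conn.execute("SELECT member_id, role_id FROM member_role"))
print(rows)  # [(1, 1), (1, 2), (2, 3), (3, 2), (3, 3)]
```

Tyler (6 = 2^1 + 2^2) gets Visitor and Member, Jane (16 = 2^4) gets Monthly Member, and Jack (20 = 2^2 + 2^4) gets Member and Monthly Member, matching the encoding described in the question.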
Mysql - Conditional inserting after select query
[ "", "mysql", "sql", "" ]
I have a column where I have values like: ``` Email_Password+oi8hu907b;New_eMail+Y;Email_Username+iugbhijhb8 ``` Now I want to update New\_eMail attribute for all rows which has `Y` to `N` without affecting anything else. Please advise.
I hate it, but...

```
update table set column = replace(column,'New_eMail+Y','New_eMail+N')
where column like '%New_eMail+Y%'
```

You don't need the WHERE clause, but if you put a function-based index on the table it may be quicker with it.
My answer is a slight improvement over the answer from user davegreen100 Since they don't allow me to post it as a comment, I add it here. ``` update <<tablename>> set <<columnname>> = replace(<<columnname>>,';New_eMail+Y;',';New_eMail+N;') where <<columnname>> like '%;New_eMail+Y;%' ```
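A quick way to confirm the `replace` behaviour, sketched with Python's `sqlite3`; its `replace()` function works like Oracle's for this case, and the table name `t` is made up for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (attrs TEXT)")
conn.execute("INSERT INTO t VALUES "
             "('Email_Password+oi8hu907b;New_eMail+Y;Email_Username+iugbhijhb8')")

# Same idea as the Oracle UPDATE above: rewrite only the New_eMail attribute,
# leaving the other semicolon-separated attributes untouched.
conn.execute("""
UPDATE t SET attrs = replace(attrs, 'New_eMail+Y', 'New_eMail+N')
WHERE attrs LIKE '%New_eMail+Y%'
""")
result = conn.execute("SELECT attrs FROM t").fetchone()[0]
print(result)  # Email_Password+oi8hu907b;New_eMail+N;Email_Username+iugbhijhb8
```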
Oracle Update table to set specific attribute value in semicolon separated values
[ "", "sql", "oracle", "" ]
We have two tables `Country` and `CountryStates` with the following fields ``` Country - CountryId - CountryName CountryStates - StateId - StateName - CountryId ``` There are some Countries without any states added in the database. Now we have to fetch only those countries where states are added.
You can use this query to retrieve only countries that have a state: ``` SELECT * FROM Country AS C WHERE EXISTS (SELECT TOP 1 1 FROM CountryStates CS WHERE CS.CountryId = C.CountryId ) ```
**The same result can be obtained using a simple INNER JOIN:**

```
SELECT C.CountryId, C.CountryName
FROM Country C
INNER JOIN CountryStates S ON C.CountryId = S.CountryId
```
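Both approaches can be compared on toy data. Here is a sketch with Python's `sqlite3` (the country names are invented for the demo) showing that `EXISTS` returns exactly one row per country that has at least one state, with no `DISTINCT` needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Country (CountryId INTEGER, CountryName TEXT);
CREATE TABLE CountryStates (StateId INTEGER, StateName TEXT, CountryId INTEGER);

INSERT INTO Country VALUES (1,'USA'),(2,'Monaco'),(3,'India');
INSERT INTO CountryStates VALUES (1,'Texas',1),(2,'Goa',3);
""")

# EXISTS keeps one row per country even when a country has many states;
# a plain INNER JOIN would need DISTINCT to do the same.
names = [r[0] for r in conn.execute("""
SELECT CountryName FROM Country c
WHERE EXISTS (SELECT 1 FROM CountryStates s WHERE s.CountryId = c.CountryId)
""")]
print(names)  # ['USA', 'India'] -- Monaco has no states
```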
Fetch corresponding States of Countries table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I'm trying to rename a column name in order to aid mapping data from this database into another for a data push. What I need to do is change the name of `StdType` to `IncomeType`, although no matter what I try it simply does not change the column name in the result. This is my SQL query: ``` SELECT 'AA' + CAST(ClientID AS VARCHAR) AS AAID, Description, Convert(Money,Amount)/100 AS Amount, Note, StdType FROM [Finstat_Income] AS IncomeType INNER JOIN #Assessments A ON 'AA' + CAST(ClientID AS VARCHAR) = A.AAID ``` `#Assessments` is a temporary table which does exist and works for my other queries. `Finstat_Income` is a table and not a view. I've also tried the query with and without square brackets but there is no difference. The resulting output is headed as: ``` AAID | Description | Amount | Note | StdType ``` Where my desired output is: ``` AAID | Description | Amount | Note | IncomeType ```
You are Aliasing the table, not the column. Try this - ``` SELECT 'AA' + CAST(ClientID AS VARCHAR) AS AAID, Description, Convert(Money,Amount)/100 as Amount, Note, StdType AS IncomeType FROM [Finstat_Income] INNER JOIN #Assessments A ON 'AA' + CAST(ClientID AS VARCHAR) = A.AAID ```
Put your alias next to the column name. ``` SELECT 'AA' + CAST(ClientID AS VARCHAR) AS AAID, Description, Convert(Money,Amount)/100 AS Amount, Note, StdType AS IncomeType FROM [Finstat_Income] INNER JOIN #Assessments A ON 'AA' + CAST(ClientID AS VARCHAR) = A.AAID ```
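To see that the alias really renames the result column, here is a minimal sketch with Python's `sqlite3`; a stripped-down `Finstat_Income` with a single column stands in for the real table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Finstat_Income (StdType TEXT)")
conn.execute("INSERT INTO Finstat_Income VALUES ('Salary')")

cur = conn.execute("SELECT StdType AS IncomeType FROM Finstat_Income")
# cursor.description exposes the result-set column names,
# which reflect the column alias, not the underlying column.
col_name = cur.description[0][0]
print(col_name)  # IncomeType
```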
SQL Server column alias has no effect
[ "", "sql", "sql-server", "sql-server-2005", "" ]
I have a table storing a date value, and I need to fetch that value and pass it as a parameter to a function. But whenever I try to execute my code it throws an error like

```
ORA-00936: missing expression.
```

Following is my code sample:

```
SELECT refno, Tatcalculate(to_timestamp(SELECT h_modified_date FROM TPADETAILS WHERE refno = 'WOC0021946','DD/MM/YYYY HH24:MI:SS')) FROM Table1;
```

`Table1` is my table and `h_modified_date` is the column. `Tatcalculate()` is the function. When I run the inner query on its own it returns the date value from the table, but when I execute the complete query above it throws the error. Can anybody please help me resolve this?
I would put the `to_timestamp()` around the column name, rather than around the select statement. Also, you need to enclose the select statement in its own brackets. Thus: ``` SELECT refno, Tatcalculate((SELECT to_timestamp(h_modified_date,'DD/MM/YYYY HH24:MI:SS') FROM TPADETAILS WHERE refno='WOC0021946')) FROM Table1; ``` ETA: Given the extra information that the tatcalculate parameter is DATE datatype and also the h\_modified\_date column is a DATE, there is absolutely no need to do any datatype conversions at all. Therefore, the query becomes: ``` SELECT refno, Tatcalculate((SELECT h_modified_date FROM TPADETAILS WHERE refno='WOC0021946')) FROM Table1; ```
Give the column a name:

```
SELECT refno, Tatcalculate(to_timestamp(SELECT h_modified_date FROM TPADETAILS WHERE refno='WOC0021946','DD/MM/YYYY HH24:MI:SS')) as TAT
FROM Table1;
```
How to pass Timestamp to Oracle Function?
[ "", "sql", "oracle", "subquery", "ora-00936", "" ]
If you have a block of text (mostly an SQL query), how can you prefix each line with something like `sQ &= "` and append `"` at the end of each line? Something like this:

```
SELECT
Name,Age
FROM
Users
Where
id<>0
```

I want

```
sQ &=" SELECT "
sQ &=" Name,Age "
sQ &=" FROM"
sQ &=" Users "
sQ &=" Where "
sQ &=" id<>0"
```

If possible, keep it formatted; if not, who cares. I need this because I use VB.NET and mostly work with queries.
I recommend Nimbletext. Add it your external tools for VS and SSMS. A pattern of `sql &=" $0 "` gives the results you want. Tools, Options... turns trim off
In bash you can use awk to add a prefix and a suffix to each line of a text file:

```
awk '{ printf("sQ &=\" %s \"\n", $0); }' sample-text-file.txt
```
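The same transformation is a one-liner in most languages. A hedged Python sketch (the helper name `vbify` is made up for the demo) that wraps each non-empty line in the VB.NET string-append syntax from the question:

```python
def vbify(sql: str, var: str = "sQ") -> str:
    """Wrap each line of an SQL statement in VB.NET string-append syntax."""
    return "\n".join(f'{var} &= " {line.strip()} "'
                     for line in sql.splitlines() if line.strip())

query = """SELECT
Name,Age
FROM
Users
Where
id<>0"""
print(vbify(query))
```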
SQL string format with quotes
[ "", "sql", "vb.net", "text", "" ]
I'm developing a website, where people will be able to play small games. At the end of each game the user will be given a score, which I wish to store in a MySQL database with a timestamp. This will allow me to present the data in a nice way further down the line. I use a Facebook login system, so I currently have a table which consists of User ID, Name, and a Facebook ID. Currently I have a table that looks like this: ``` ID------Name------UID 1 Bob 123123 2 Marley 23134 ``` Then, for each user I will need something like this: ``` Game1Time Game1Score Game2Time Game2Score etc.... 3292 400 10 7824 129 32 101 231 ``` I've looked at some of the other related posts, and it seems that people generally think it's a bad idea to have a table for each user, but to me, that seems like the easiest way. I could also do something like this, where I store all data in one table: ``` ID------Name------UID-------Game1Time-----Game1Score----Game2Time-----Game2Score etc... 1 Bob 123123 3291, 129 400, 32 10, 101 7824, 231 2 Marley 23134 ``` But this seems like it might cause problems, when people play a lot of the same game. I feel like I'm missing some smart way of doing this, so if you have any recommendation, please tell me. I'm fairly new to SQL so I'm probably missing something stupid
I will recommend some normalization here. Your users table is fine. Create a games table to store information about games. Create a table that combines information about users and games and results. Something on the lines of: ``` create table users ( id int primary key, name varchar(100), uid int ); create table games ( id int primary key, name varchar(100) ); create table users_games ( id int primary key, userid int, gameid int, startdate datetime, enddate datetime, score bigint, constraint fk_users_games_userid foreign key (userid) references users(id), constraint fk_users_games_gameid foreign key (gameid) references games(id) ); ``` Using startdate and enddate you can calculate the time user was on the game. You can run statistics about distinct users on a given hour of the day and a whole bunch more reporting based on users\_games table. You can create indexes on users\_games as you desire. That table can grow big, no problem. You can use partitioning by userid if you like or archive data systematically as the data gets too stale to be used on a regular basis. Example schema is here: <http://sqlfiddle.com/#!9/315a5>.
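A small end-to-end check of this schema, sketched with Python's `sqlite3` (dates and scores are invented sample data): one row per play means per-user, per-game statistics fall out of a plain `GROUP BY`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, uid INTEGER);
CREATE TABLE games (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE users_games (
    id INTEGER PRIMARY KEY,
    userid INTEGER REFERENCES users(id),
    gameid INTEGER REFERENCES games(id),
    startdate TEXT, enddate TEXT,
    score INTEGER
);
INSERT INTO users VALUES (1,'Bob',123123),(2,'Marley',23134);
INSERT INTO games VALUES (1,'Game1'),(2,'Game2');
INSERT INTO users_games VALUES
 (1,1,1,'2015-01-01 10:00:00','2015-01-01 10:06:40',400),
 (2,1,1,'2015-01-02 10:00:00','2015-01-02 10:02:09',32),
 (3,1,2,'2015-01-03 10:00:00','2015-01-03 10:00:10',7824);
""")

# Best score per user per game: one aggregate over the junction table.
best = conn.execute("""
SELECT u.name, g.name, MAX(ug.score)
FROM users_games ug
JOIN users u ON u.id = ug.userid
JOIN games g ON g.id = ug.gameid
GROUP BY u.name, g.name
ORDER BY u.name, g.name
""").fetchall()
print(best)  # [('Bob', 'Game1', 400), ('Bob', 'Game2', 7824)]
```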
My suggestion would be the following: So basically you have two tables one with the uid that you can use to search for the other info you need. Plus you save stuff where needed ``` ID------Name------UID------- 1 Bob 123123 2 Marley 23134 ID-----UID------GameTime-----GameScore 1 123123 3291 400 2 123123 129 32 3 234134 10 101 4 432123 7824 231 ```
SQL storing large amount of data for each user
[ "", "mysql", "sql", "user-data", "" ]
Maybe not the most descriptive subject line, but the question is, whilst this query below does exactly what I'm after, I would also like to return records when Note.Name = 'Travel Advisory' is NOT true. HOWEVER, when there is no Note.Name that equals 'Travel Advisory' I would still like the select values to be returned EXCEPT for Note.Notes. The Note.Name column has various values such as 'Travel Advisory', 'General Information' and 'Guide'. ``` SELECT Supplier.Name, Supplier.Changeoverday, Supplier.Code, Note.Notes FROM Supplier INNER JOIN Note ON Supplier.ID=Note.LINKID WHERE Supplier.TID = 315 AND Supplier.SID = 2350 AND Note.Name = 'Travel Advisory' ```
It sounds like you want a `LEFT JOIN`: ``` SELECT Supplier.Name, Supplier.Changeoverday, Supplier.Code, Note.Notes FROM Supplier LEFT JOIN Note ON Supplier.ID=Note.LINKID AND Note.Name = 'Travel Advisory' WHERE Supplier.TID = 315 AND Supplier.SID = 2350 ``` If there is no matching record in the `Note` table then `Notes` will be `null`.
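The key point, putting `Note.Name = 'Travel Advisory'` in the `ON` clause rather than the `WHERE` clause, is easy to demonstrate. A sketch with Python's `sqlite3` (supplier names and note text are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Supplier (ID INTEGER, Name TEXT, TID INTEGER, SID INTEGER);
CREATE TABLE Note (LINKID INTEGER, Name TEXT, Notes TEXT);

INSERT INTO Supplier VALUES (1,'Acme',315,2350),(2,'Globex',315,2350);
INSERT INTO Note VALUES (1,'Travel Advisory','check visas');
-- Supplier 2 has no 'Travel Advisory' note at all
""")

# Filtering on Note.Name in ON (not WHERE) keeps suppliers without a
# matching note, returning NULL for Notes instead of dropping the row.
rows = conn.execute("""
SELECT s.Name, n.Notes
FROM Supplier s
LEFT JOIN Note n ON s.ID = n.LINKID AND n.Name = 'Travel Advisory'
WHERE s.TID = 315 AND s.SID = 2350
ORDER BY s.Name
""").fetchall()
print(rows)  # [('Acme', 'check visas'), ('Globex', None)]
```

Had the `n.Name = 'Travel Advisory'` predicate been in `WHERE` instead, Globex would have disappeared from the result entirely.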
Edit: I didn't notice you were selecting from 2 tables; I'd agree with the `OUTER JOIN` method that the other poster answered with. My way is handy against 1 table (if you don't use table aliasing and an `OUTER JOIN`, that is...) You can use a case to select the column and do what you want, and just remove that part from the WHERE. I think this syntax is good...

```
SELECT Supplier.Name
     , Supplier.Changeoverday
     , Supplier.Code
     , CASE WHEN Note.Name = 'Travel Advisory' THEN Note.Notes
            ELSE 'Not Travel Advisory'
       END
FROM Supplier
INNER JOIN Note ON Supplier.ID=Note.LINKID
WHERE Supplier.TID = 315
  AND Supplier.SID = 2350
```
SQL: Reducing select criteria when WHERE statement not true
[ "", "sql", "sql-server", "" ]
I'd like to join IP routing table information to IP whois information. I'm using Amazon's RDS which means I can't use the Postgres [ip4r](https://github.com/RhodiumToad/ip4r) extension, and so I am instead using [int8range](http://www.postgresql.org/docs/9.2/static/rangetypes.html) types to represent the IP address ranges, with [gist](http://www.postgresql.org/docs/9.1/static/textsearch-indexes.html) indexes. My tables look like this: ``` => \d routing_details Table "public.routing_details" Column | Type | Modifiers ----------+-----------+----------- asn | text | netblock | text | range | int8range | Indexes: "idx_routing_details_netblock" btree (netblock) "idx_routing_details_range" gist (range) => \d netblock_details Table "public.netblock_details" Column | Type | Modifiers ------------+-----------+----------- range | int8range | name | text | country | text | source | text | Indexes: "idx_netblock_details_range" gist (range) ``` The full routing\_details table contains just under 600K rows, and netblock\_details contains around 8.25M rows. There are overlapping ranges in both tables, but for each range in the routing\_details table I want to get the single best (smallest) match from the netblock\_details table. 
I came up with 2 different queries that I think will return the accurate data, one using window functions and one using DISTINCT ON: ``` EXPLAIN SELECT DISTINCT ON (r.netblock) * FROM routing_details r JOIN netblock_details n ON r.range <@ n.range ORDER BY r.netblock, upper(n.range) - lower(n.range); QUERY PLAN QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------- Unique (cost=118452809778.47..118477166326.22 rows=581300 width=91) Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, ((upper(n.range) - lower(n.range))) -> Sort (cost=118452809778.47..118464988052.34 rows=4871309551 width=91) Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, ((upper(n.range) - lower(n.range))) Sort Key: r.netblock, ((upper(n.range) - lower(n.range))) -> Nested Loop (cost=0.00..115920727265.53 rows=4871309551 width=91) Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, r.netblock, (upper(n.range) - lower(n.range)) Join Filter: (r.range <@ n.range) -> Seq Scan on public.routing_details r (cost=0.00..11458.96 rows=592496 width=43) Output: r.asn, r.netblock, r.range -> Materialize (cost=0.00..277082.12 rows=8221675 width=48) Output: n.range, n.name, n.country -> Seq Scan on public.netblock_details n (cost=0.00..163712.75 rows=8221675 width=48) Output: n.range, n.name, n.country (14 rows) -> Seq Scan on netblock_details n (cost=0.00..163712.75 rows=8221675 width=48) EXPLAIN VERBOSE SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (PARTITION BY r.range ORDER BY UPPER(n.range) - LOWER(n.range)) AS rank FROM routing_details r JOIN netblock_details n ON r.range <@ n.range ) a WHERE rank = 1 ORDER BY netblock; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------------------- Sort (cost=118620775630.16..118620836521.53 rows=24356548 width=99) Output: a.asn, 
a.netblock, a.range, a.range_1, a.name, a.country, a.rank Sort Key: a.netblock -> Subquery Scan on a (cost=118416274956.83..118611127338.87 rows=24356548 width=99) Output: a.asn, a.netblock, a.range, a.range_1, a.name, a.country, a.rank Filter: (a.rank = 1) -> WindowAgg (cost=118416274956.83..118550235969.49 rows=4871309551 width=91) Output: r.asn, r.netblock, r.range, n.range, n.name, n.country, row_number() OVER (?), ((upper(n.range) - lower(n.range))), r.range -> Sort (cost=118416274956.83..118428453230.71 rows=4871309551 width=91) Output: ((upper(n.range) - lower(n.range))), r.range, r.asn, r.netblock, n.range, n.name, n.country Sort Key: r.range, ((upper(n.range) - lower(n.range))) -> Nested Loop (cost=0.00..115884192443.90 rows=4871309551 width=91) Output: (upper(n.range) - lower(n.range)), r.range, r.asn, r.netblock, n.range, n.name, n.country Join Filter: (r.range <@ n.range) -> Seq Scan on public.routing_details r (cost=0.00..11458.96 rows=592496 width=43) Output: r.asn, r.netblock, r.range -> Materialize (cost=0.00..277082.12 rows=8221675 width=48) Output: n.range, n.name, n.country -> Seq Scan on public.netblock_details n (cost=0.00..163712.75 rows=8221675 width=48) Output: n.range, n.name, n.country (20 rows) ``` The DISTINCT ON seems slightly more efficient, so I've proceeded with that one. When I run the query against the full dataset I get an out of disk space error after around a 24h wait. I've created a routing\_details\_small table with a subset of N rows of the full routing\_details table to try and understand what's going on. 
With N=1000 ``` => EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) * -> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range -> ORDER BY r.netblock, upper(n.range) - lower(n.range); QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=4411888.68..4453012.20 rows=999 width=90) (actual time=124.094..133.720 rows=999 loops=1) -> Sort (cost=4411888.68..4432450.44 rows=8224705 width=90) (actual time=124.091..128.560 rows=4172 loops=1) Sort Key: r.netblock, ((upper(n.range) - lower(n.range))) Sort Method: external sort Disk: 608kB -> Nested Loop (cost=0.41..1780498.29 rows=8224705 width=90) (actual time=0.080..101.518 rows=4172 loops=1) -> Seq Scan on routing_details_small r (cost=0.00..20.00 rows=1000 width=42) (actual time=0.007..1.037 rows=1000 loops=1) -> Index Scan using idx_netblock_details_range on netblock_details n (cost=0.41..1307.55 rows=41124 width=48) (actual time=0.063..0.089 rows=4 loops=1000) Index Cond: (r.range <@ range) Total runtime: 134.999 ms (9 rows) ``` With N=100000 ``` => EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) * -> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range -> ORDER BY r.netblock, upper(n.range) - lower(n.range); QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=654922588.98..659034941.48 rows=200 width=144) (actual time=28252.677..29487.380 rows=98992 loops=1) -> Sort (cost=654922588.98..656978765.23 rows=822470500 width=144) (actual time=28252.673..28926.703 rows=454856 loops=1) Sort Key: r.netblock, ((upper(n.range) - lower(n.range))) Sort Method: external merge Disk: 64488kB -> Nested Loop (cost=0.41..119890431.75 rows=822470500 width=144) (actual time=0.079..24951.038 rows=454856 loops=1) -> 
Seq Scan on routing_details_small r (cost=0.00..1935.00 rows=100000 width=96) (actual time=0.007..110.457 rows=100000 loops=1) -> Index Scan using idx_netblock_details_range on netblock_details n (cost=0.41..725.96 rows=41124 width=48) (actual time=0.067..0.235 rows=5 loops=100000) Index Cond: (r.range <@ range) Total runtime: 29596.667 ms (9 rows) ``` With N=250000 ``` => EXPLAIN ANALYZE SELECT DISTINCT ON (r.netblock) * -> FROM routing_details_small r JOIN netblock_details n ON r.range <@ n.range -> ORDER BY r.netblock, upper(n.range) - lower(n.range); QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=1651822953.55..1662103834.80 rows=200 width=144) (actual time=185835.443..190143.266 rows=247655 loops=1) -> Sort (cost=1651822953.55..1656963394.18 rows=2056176250 width=144) (actual time=185835.439..188779.279 rows=1103850 loops=1) Sort Key: r.netblock, ((upper(n.range) - lower(n.range))) Sort Method: external merge Disk: 155288kB -> Nested Loop (cost=0.28..300651962.46 rows=2056176250 width=144) (actual time=19.325..177403.913 rows=1103850 loops=1) -> Seq Scan on netblock_details n (cost=0.00..163743.05 rows=8224705 width=48) (actual time=0.007..8160.346 rows=8224705 loops=1) -> Index Scan using idx_routing_details_small_range on routing_details_small r (cost=0.28..22.16 rows=1250 width=96) (actual time=0.018..0.018 rows=0 loops=8224705) Index Cond: (range <@ n.range) Total runtime: 190413.912 ms (9 rows) ``` Against the full table with 600k rows the query fails after around 24h with an error about running out of disk space, which is presumably caused by the external merge step. So this query is working well and very quickly with a small routing\_details table, but is scaling very poorly. 
Suggestions for how to improve my query, or perhaps even schema changes I could make so that this query will work efficiently on the full dataset?
I was thinking originally of a lateral join as in other suggested approaches (for example, the last query by Erwin Brandstetter, where he uses simple `int8` datatype and simple indexes), but didn't want to write it in the answer, because I thought that it is not really efficient. When you try to find all `netblock` ranges that cover the given range, an index doesn't help much. I'll repeat the Erwin Brandstetter's query here: ``` SELECT * -- only select columns you need to make it faster FROM routing_details r , LATERAL ( SELECT * FROM netblock_details n WHERE n.ip_min <= r.ip_min AND n.ip_max >= r.ip_max ORDER BY n.ip_max - n.ip_min LIMIT 1 ) n; ``` When you have an index on netblock\_details, like this: ``` CREATE INDEX netblock_details_ip_min_max_idx ON netblock_details (ip_min, ip_max DESC NULLS LAST); ``` you can quickly (in `O(logN)`) find the starting point of the scan in the `netblock_details` table - either the maximum `n.ip_min` that is less than `r.ip_min`, or the minimum `n.ip_max` that is more than `r.ip_max`. But then you have to scan/read the rest of the index/table and for each row do the second part of the check and filter out most rows. In other words, this index helps to quickly find the starting row that satisfies first search criteria: `n.ip_min <= r.ip_min`, but then you'll continue reading all rows that satisfy this criteria and for each such row perform the second check `n.ip_max >= r.ip_max`. On average (if data has even distribution) you'll have to read half of the rows of the `netblock_details` table. Optimizer may choose to use index to search `n.ip_max >= r.ip_max` first and then apply second filter `n.ip_min <= r.ip_min`, but you can't use this index to apply both filters together. End result: for each row from `routing_details` we'll read through half of rows from `netblock_details`. 600K \* 4M = 2,400,000,000,000 rows. It is 2 times better than Cartesian product. 
You can see this number (Cartesian product) in the output of `EXPLAIN ANALYZE` in the question. ``` 592,496 * 8,221,675 = 4,871,309,550,800 ``` No wonder your queries run out of disk space when trying to materialize and sort this. --- The overall high level process to get to the final result: * join two tables (without finding the best match yet). In the worst case it is Cartesian product, in the best case it is all covering ranges from `netblock_details` for each range from `routing_details`. You said that there are multiple entries in `netblock_details` for each entry in `routing_details`, anything from 3 to around 10. So, result of this join could have ~6M rows (not too much) * order/partition the result of the join by the `routing_details` ranges and for each such range find the best (smallest) covering range from `netblock_details`. --- My idea is to reverse the query. Instead of finding all covering ranges from larger `netblock_details` for each row from smaller `routing_details` table I suggest to find all smaller ranges from smaller `routing_details` for each row from larger `netblock_details`. **Two step process** For each row from larger `netblock_details` find all ranges from `routing_details` that are inside the `netblock` range. I would use the following schema in the queries. I've added primary key `ID` to both tables. 
``` CREATE TABLE routing_details ( ID int ,ip_min int8 ,ip_max int8 ,asn text ,netblock text ); CREATE TABLE netblock_details ( ID int ,ip_min int8 ,ip_max int8 ,name text ,country text ,source text ); SELECT netblock_details.ID AS n_ID ,netblock_details.ip_max - netblock_details.ip_min AS n_length ,r.ID AS r_ID FROM netblock_details INNER JOIN LATERAL ( SELECT routing_details.ID FROM routing_details WHERE routing_details.ip_min >= netblock_details.ip_min AND routing_details.ip_min <= netblock_details.ip_max -- note how routing_details.ip_min is limited from both sides -- this would make it possible to scan only (hopefully) small -- portion of the table instead of full or half table AND routing_details.ip_max <= netblock_details.ip_max -- this clause ensures that the whole routing range -- is inside the netblock range ) AS r ON true ``` We need index on `routing_details` on `(ip_min, ip_max)`. The main thing here is index on `ip_min`. Having second column in the index helps by eliminating the need to do the lookup for the value of `ip_max`; it doesn't help in the tree search. Note that the lateral subquery doesn't have `LIMIT`. It is not the final result yet. This query should return ~6M rows. Save this result in a temporary table. Add an index to the temporary table on `(r_ID, n_length, n_ID)`. `n_ID` is again just to remove extra lookups. We need an index do avoid sorting the data for each `r_ID`. **Final step** For each row from `routing_details` find the `n_ID` with the smallest `n_length`. Now we can use the lateral join in "proper" order. Here `temp` table is result of the previous step **with the index**. ``` SELECT routing_details.* ,t.n_ID ,netblock_details.* FROM routing_details INNER JOIN LATERAL ( SELECT temp.n_ID FROM temp WHERE temp.r_ID = routing_details.ID ORDER BY temp.n_length LIMIT 1 ) AS t ON true INNER JOIN netblock_details ON netblock_details.ID = t.n_ID ``` Here subquery should be a seek of the index, not scan. 
The optimizer should be smart enough to do just the seek and return the first found result because of `LIMIT 1`, so you'll have 600K seeks of the index in the 6M row temp table.

---

Original answer (I'll keep it just for the diagram of ranges):

Can you "cheat"?

If I understood your query correctly, for each `routing_details.range` you want to find a smallest `netblock_details.range` that covers/is larger than `routing_details.range`.

Given the following example, where `r` is the routing range and `n1, ..., n8` are netblock ranges, the correct answer is `n5`.

```
  |---| n1
|------------------| n2
                         |---------------| n3
            |-----| n4
                  |------------------| n5
                   |--------------------------------------| n6
      |---------------------------| n7
                                    |-----| n8

                    |------------| r
                    start        end

n.start <= r.start
AND n.end >= r.end
order by n.length
limit 1
```

Your [query that took 14 hours](https://gist.github.com/coderholic/9e90311f9323b543aef2) shows that the index scan took 6ms, but sorting by range length took 80ms. With this kind of search there is no simple 1D ordering of the data. You are using `n.start` together with `n.end` and together with `n.length`. But, maybe your data is not that generic, or it is OK to return a somewhat different result.

For example, if it was OK to return `n6` as a result, it could work much faster. `n6` is the covering range that has the largest `start`:

```
n.start <= r.start
AND n.end >= r.end
order by n.start desc
limit 1
```

Or, you could go for `n7`, which has the smallest `end`:

```
n.start <= r.start
AND n.end >= r.end
order by n.end
limit 1
```

This kind of search would use a simple index on `n.start` (or `n.end`) without extra sorting.

---

A second, completely different approach. How big/long are the ranges? If they are relatively short (few numbers), then you could try to store them as an explicit list of integers, instead of a range. For example, range `[5-8]` would be stored as 4 rows: `(5, 6, 7, 8)`. With this storage model it may be easier to find intersections of ranges.
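The "smallest covering range" step above can be sanity-checked on a toy example. This is an illustrative sketch only — SQLite (via Python) stands in for Postgres, and since SQLite has no `LATERAL`, the best match is expressed as a correlated scalar subquery with `ORDER BY ... LIMIT 1`; the table and column contents here are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE routing  (id INTEGER, ip_min INTEGER, ip_max INTEGER);
CREATE TABLE netblock (id INTEGER, ip_min INTEGER, ip_max INTEGER);
CREATE INDEX nb_idx ON netblock (ip_min, ip_max);
""")
# one routing range [50, 60]; several netblocks, three of which cover it
con.executemany("INSERT INTO routing VALUES (?,?,?)", [(1, 50, 60)])
con.executemany("INSERT INTO netblock VALUES (?,?,?)",
                [(10, 0, 100),   # covers, length 100
                 (11, 40, 70),   # covers, length 30
                 (12, 45, 65),   # covers, length 20 -> smallest
                 (13, 80, 90)])  # does not cover

# for each routing row, pick the covering netblock with the smallest length
rows = con.execute("""
SELECT r.id,
       (SELECT n.id
        FROM netblock n
        WHERE n.ip_min <= r.ip_min AND n.ip_max >= r.ip_max
        ORDER BY n.ip_max - n.ip_min
        LIMIT 1) AS best_netblock
FROM routing r
""").fetchall()
```

Running this picks netblock 12 — the tightest of the three covering ranges — mirroring the `ORDER BY n_length LIMIT 1` step of the answer.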
I don't have a really good answer for you, because I'm not familiar with gist indexes, but I'm kind of interested so I took a little look at your explain plan. A couple of things stood out:

1) Your plan is using a nested loop join, even in the 250K example. It is seq scanning the larger table, and doing lookups on the smaller one. This means it's doing 8 million index lookups on the smaller table, taking up over 148 seconds. It's strange to me that this slows significantly with an increase in the size of the `routing_details_small` table. Like I said, I'm unfamiliar with gist indexes, but I would experiment with `set enable_nestloop to false;` to see if you can get it to do some kind of sorted merge/hash join.

2) The distinct is being executed at the end. It takes a fairly small portion of the time (~11 seconds), but that also means you may be doing slightly extra work. It looks like the distinct brings the resulting number of records down from over 1 million to 250K, so I would experiment with trying it earlier. I'm not sure if you are getting duplicates because there are multiple entries in the `routing_details_small` table for a `netblock`, or because the `netblock_details` table has multiple matches for a given netblock. If the former, you could join on a subquery with only unique routing details. If the latter, try the thing I'm about to mention:

3) Somewhat combining the previous two observations, you might try doing a partial join (joining on a subquery) from a seq scan on routing\_details\_small. This should only result in 600K index scans. Something like (assuming postgres 9.4):

```
SELECT *
FROM routing_details_small r,
LATERAL (SELECT * FROM netblock_details n WHERE r.range <@ n.range LIMIT 1) nb;
```
Joining 2 large postgres tables using int8range not scaling well
[ "", "sql", "postgresql", "amazon-web-services", "amazon-rds", "" ]
Suppose I have the following table. How do I display 'amount' only once based on the column `ID`?

```
ID  Amount
1   10.00
1   10.00
1   10.00
2   10.00
2   10.00
2   10.00
```

Given the example above, the expected output should be like this:

```
ID  Amount
1   10.00
1   0.00
1   0.00
2   10.00
2   0.00
2   0.00
```

I tried using `ROW_NUMBER` but it's not sufficient; it's giving me this result:

```
ID  Amount
1   10.00
1   0.00
1   0.00
2   0.00
2   0.00
2   0.00
```

EDIT: This is what I tried so far:

```
SELECT ID
    ,CASE WHEN ROW_NUMBER() OVER(PARTITION BY Amount ORDER BY ID) = 1
          THEN Amount ELSE 0.00 END [AMOUNT]
FROM TABLE
```
Just change your partition to use ID:

```
SELECT ID
    ,CASE WHEN ROW_NUMBER() OVER(PARTITION BY ID ORDER BY ID) = 1
          THEN Amount ELSE 0.00 END [AMOUNT]
FROM TABLE
Order by ID, Amount Desc
```
This should do it:

```
select id, iif(id = lag(id, 1, 0), 0, amount)
from mytable
order by id
```

Basically, if the current id is the same as the id one row back, use 0; otherwise, use the amount column. I am assuming, from the data you have provided, that "amount" stays the same for any single id.
Display column values only once
[ "", "sql", "sql-server", "" ]
I have a query ``` Select age,qualification,sum(income) as total_income from employee group by age,qualification; ``` I want to find rank on based total\_income for the group of age and qualification. For example ``` 19|Grad|5000|rank:1 19|Grad|4000|rank:2 19|Grad|3000|rank:3 26|Grad|6000|rank:1 26|Grad|5000|rank:2 26|PosG|8000|rank:1 26|PosG|6000|rank:2 ``` Can I do it in Oracle? I tried with partition by but not able to figure it out.
[SQL Fiddle](http://sqlfiddle.com/#!4/41f539/4) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE Employees ( Age, Qualification, Income ) AS SELECT 19, 'Grad', 5000 FROM DUAL UNION ALL SELECT 19, 'Grad', 4000 FROM DUAL UNION ALL SELECT 19, 'Grad', 3000 FROM DUAL UNION ALL SELECT 26, 'Grad', 6000 FROM DUAL UNION ALL SELECT 26, 'Grad', 5000 FROM DUAL UNION ALL SELECT 26, 'PosG', 8000 FROM DUAL UNION ALL SELECT 26, 'PosG', 6000 FROM DUAL; ``` **Query 1**: ``` SELECT Age, Qualification, Income, RANK() OVER ( PARTITION BY Age, Qualification ORDER BY Income DESC ) AS "Rank" FROM Employees ``` **[Results](http://sqlfiddle.com/#!4/41f539/4/0)**: ``` | AGE | QUALIFICATION | INCOME | Rank | |-----|---------------|--------|------| | 19 | Grad | 5000 | 1 | | 19 | Grad | 4000 | 2 | | 19 | Grad | 3000 | 3 | | 26 | Grad | 6000 | 1 | | 26 | Grad | 5000 | 2 | | 26 | PosG | 8000 | 1 | | 26 | PosG | 6000 | 2 | ``` **Query 2**: ``` WITH total_incomes AS ( SELECT Age, Qualification, SUM( Income ) AS total_income FROM Employees GROUP BY Age, Qualification ) SELECT Age, Qualification, total_income, RANK() OVER ( ORDER BY total_income DESC ) AS "Rank" FROM total_incomes ``` **[Results](http://sqlfiddle.com/#!4/41f539/4/1)**: ``` | AGE | QUALIFICATION | TOTAL_INCOME | Rank | |-----|---------------|--------------|------| | 26 | PosG | 14000 | 1 | | 19 | Grad | 12000 | 2 | | 26 | Grad | 11000 | 3 | ```
```
select age, qualification, total_income,
       row_number() over (partition by age, qualification
                          order by total_income desc) as rank
from
(
  select age, qualification, sum(income) as total_income
  from employee
  group by age, qualification
) T1
```
RANKing in group by in oracle
[ "", "sql", "oracle", "" ]
I have a database with statistics over a number of websites and I'm currently having an issue with a rather complex query that I have no idea how to do (or if it's even possible).

I have 2 tables: `websites` and `visits`. The former is a list of all websites and their properties, while the latter is a list of each user's visits on a specific website. The program I'm making is supposed to fetch websites that need to be "scanned". The interval between each scan for each site depends on the website's total number of visits for the last 30 days. Here is a table with the intended scan interval:

![enter image description here](https://i.stack.imgur.com/orRWS.png)

The tables have the following structure:

**Websites**

[![enter image description here](https://i.stack.imgur.com/hKzCb.png)](https://i.stack.imgur.com/hKzCb.png)

**Visits**

[![enter image description here](https://i.stack.imgur.com/mC7Pd.png)](https://i.stack.imgur.com/mC7Pd.png)

What I want is a query that returns the websites that are either *at* or *past* their individual update deadline (which can be seen from the `last_scanned` column). Is this easily doable in a single query?
Here's something you can try:

```
SELECT main.*
FROM (
    SELECT w.web_id, w.url, w.last_scanned,
        (SELECT COUNT(*) FROM visits v
         WHERE v.web_id = w.web_id
           AND TIMESTAMPDIFF(DAY, v.added_on, NOW()) <= 30) AS visit_count,
        TIMESTAMPDIFF(HOUR, w.last_scanned, NOW()) AS hrs_since_update
    FROM websites w
) main
WHERE (CASE
    WHEN visit_count >= 0     AND visit_count <= 10    AND hrs_since_update >= 4320 THEN 1
    WHEN visit_count >= 11    AND visit_count <= 100   AND hrs_since_update >= 2160 THEN 1
    WHEN visit_count >= 101   AND visit_count <= 500   AND hrs_since_update >= 1080 THEN 1
    WHEN visit_count >= 501   AND visit_count <= 1000  AND hrs_since_update >= 720  THEN 1
    WHEN visit_count >= 1001  AND visit_count <= 2000  AND hrs_since_update >= 360  THEN 1
    WHEN visit_count >= 2001  AND visit_count <= 5000  AND hrs_since_update >= 168  THEN 1
    WHEN visit_count >= 5001  AND visit_count <= 10000 AND hrs_since_update >= 72   THEN 1
    WHEN visit_count >= 10001 AND hrs_since_update >= 24 THEN 1
    ELSE 0
END) = 1;
```

Here's the fiddle demo: <http://sqlfiddle.com/#!9/1f671/1>
Just an improvement on @morgb's query, using a table for the visit count ranges.

[SQL FIDDLE DEMO](http://sqlfiddle.com/#!9/eafd1/4)

```
create table visitCount (
  `min` bigint(20),
  `max` bigint(20),
  `frequency` bigint(20)
);

SELECT main.*
FROM (
    SELECT w.web_id, w.url, w.last_scanned,
        (SELECT COUNT(*) FROM visits v
         WHERE v.web_id = w.web_id
           AND TIMESTAMPDIFF(DAY, v.added_on, NOW()) <= 30) AS visit_count,
        TIMESTAMPDIFF(HOUR, w.last_scanned, NOW()) AS hrs_since_update
    FROM websites w
) main
inner join visitCount v on visit_count between v.min and v.max
WHERE main.hrs_since_update > v.frequency
```
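The range-lookup-table idea can be checked with a small script. A hedged sketch: SQLite stands in for MySQL, the `scan_freq` table and its columns are invented for the demo, and `visit_count`/`hrs_since_update` are precomputed instead of being derived with `TIMESTAMPDIFF`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sites (web_id INTEGER, visit_count INTEGER, hrs_since_update INTEGER);
CREATE TABLE scan_freq (vmin INTEGER, vmax INTEGER, min_hours INTEGER);
""")
# first three brackets of the interval table from the question
con.executemany("INSERT INTO scan_freq VALUES (?,?,?)",
                [(0, 10, 4320), (11, 100, 2160), (101, 500, 1080)])
con.executemany("INSERT INTO sites VALUES (?,?,?)",
                [(1, 5, 5000),     # low traffic, overdue
                 (2, 5, 100),      # low traffic, recently scanned
                 (3, 200, 2000)])  # mid traffic, overdue

# join each site to its visit-count bracket, keep the ones past their deadline
due = con.execute("""
SELECT s.web_id
FROM sites s
JOIN scan_freq f ON s.visit_count BETWEEN f.vmin AND f.vmax
WHERE s.hrs_since_update >= f.min_hours
ORDER BY s.web_id
""").fetchall()
```

Sites 1 and 3 come back as due; site 2 is filtered out because its bracket allows 4320 hours between scans.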
Mysql - get results from complex criteria
[ "", "mysql", "sql", "" ]
I have a sqlite table with timestamps in milliseconds as the primary key; each row should be 1 second (1000 ms) apart from the next. Sometimes my data recorder goes out and there is no data in the table for that time. How can I find the gaps using a SQL statement? A cursor based solution is possible, I know.

```
table = PVT

TS
1119636081000
1119636082000
1119636083000
1119636084000
1119636085000
------gap------
1119636090000
1119636091000
```
This may work. Assuming the table name is "tstamps",

```
select a.ts
from tstamps a
where not exists (select b.ts from tstamps b where b.ts = a.ts + 1000)
  and exists (select c.ts from tstamps c where c.ts = a.ts + 2000)
```

(Note: this first variant only finds gaps that are exactly one value wide.)

Another way:

```
select a.ts
from tstamps a
where not exists (select b.ts from tstamps b where b.ts = a.ts + 1000)
  and a.ts < (select max(c.ts) from tstamps c)
```

Using the `EXCEPT` operator (SQLite's equivalent of Oracle's `MINUS`). I am not sure which of these queries does better performance wise.

```
select ts + 1000 from pvt where ts != (select max(ts) from pvt)
except
select ts from pvt where ts != (select min(ts) from pvt)
```
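The `NOT EXISTS` variant translates directly into a runnable SQLite check, using the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pvt (ts INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO pvt VALUES (?)",
                [(1119636081000,), (1119636082000,), (1119636083000,),
                 (1119636084000,), (1119636085000,),
                 (1119636090000,), (1119636091000,)])

# rows whose successor (ts + 1000) is missing, excluding the final row
gap_starts = con.execute("""
SELECT a.ts FROM pvt a
WHERE NOT EXISTS (SELECT 1 FROM pvt b WHERE b.ts = a.ts + 1000)
  AND a.ts < (SELECT MAX(ts) FROM pvt)
""").fetchall()
```

The only row reported is `1119636085000`, the last timestamp before the gap in the sample data.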
Something like this (assuming PVT.TS is your column name):

```
SELECT * FROM 'table' WHERE PVT.TS ISNULL;
```

or

```
SELECT * FROM 'table' WHERE PVT.TS IS NULL;
```

If your collector is actually entering a blank entry you might need

```
WHERE PVT.TS = ''
```

or

```
where ifnull(some_column, '') = ''
```
How do I find gap in sqlite table?
[ "", "sql", "sqlite", "gaps-and-islands", "" ]
How do I concatenate a column value with single quotes and a comma in SQL?

```
select tpaa_id from dbo.Sheet where tpaa_id is not null
```

At present the query returns values as:

```
ABC123
ABC456
```

We have around 1000 records. I expect it to return:

```
'ABC123',
'ABC456',
```
To get each value wrapped in single quotes and followed by a comma, concatenate the quote and comma literals around the column (a quote character inside a string literal is escaped by doubling it):

```
select '''' + tpaa_id + ''','
from dbo.Sheet
where tpaa_id is not null
```
Use this construction:

```
SELECT CONCAT(CHAR(39), MyString, CHAR(39)) FROM Table
-- returns '<MyString>'
```
How to concatenate a column value with single quotes in sql?
[ "", "sql", "sql-server", "" ]
I have the following table schema representing some players and the teams that they play for:

```
CREATE TABLE PLAYERS (
    NAME VARCHAR(64) NOT NULL,
    BIRTHDAY TIMESTAMP NOT NULL,
    TEAM VARCHAR(64) NOT NULL,
    CAPTAIN BOOLEAN
);
```

My data looks like this:

```
"PLAYER1","1998-02-13 00:00:00","TEAM_A",NULL
"PLAYER2","1984-01-13 00:00:00","TEAM_A","1"
"PLAYER3","1985-07-13 00:00:00","TEAM_A",NULL
"PLAYER4","1979-08-13 00:00:00","TEAM_B",NULL
"PLAYER5","1986-09-13 00:00:00","TEAM_B",NULL
"PLAYER6","1990-11-13 00:00:00","TEAM_B",NULL
"PLAYER7","1993-12-13 00:00:00","TEAM_C",NULL
"PLAYER8","1987-05-13 00:00:00","TEAM_C",NULL
"PLAYER9","1995-04-13 00:00:00","TEAM_C",NULL
```

Now I have the requirement that each team needs exactly one captain. TEAM\_A already has one (PLAYER2) but TEAM\_B and TEAM\_C don't. So I need a SQL script which identifies the oldest player within each team and sets the captain flag for them. Can anybody please help me with this?
I would start by getting the oldest player for each team without a captain, like this:

```
SELECT team, MIN(birthday) AS minBirthday
FROM myTable
WHERE team NOT IN (SELECT DISTINCT team FROM myTable WHERE captain = 1)
GROUP BY team;
```

Once you have that, you can use it to update the captains using a JOIN:

```
UPDATE myTable m
JOIN(
   SELECT team, MIN(birthday) AS minBirthday
   FROM myTable
   WHERE team NOT IN (SELECT DISTINCT team FROM myTable WHERE captain = 1)
   GROUP BY team) t
     ON t.team = m.team AND t.minBirthday = m.birthday
SET m.captain = 1;
```

As it is written, this will set two captains if two players share the same minimum birthday. If you have another tiebreaker, you can adjust the inner query to pick the correct player, and adjust the join if necessary.

Here is an [SQL Fiddle](http://sqlfiddle.com/#!9/22f93/1) example.
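The same two-step idea can be demonstrated end to end in SQLite, which does not support MySQL's `UPDATE ... JOIN`; a correlated subquery is used instead, so treat this as a sketch of the logic rather than the exact MySQL statement above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE players (name TEXT, birthday TEXT, team TEXT, captain INTEGER)")
con.executemany("INSERT INTO players VALUES (?,?,?,?)", [
    ("P2", "1984-01-13", "TEAM_A", 1),     # TEAM_A already has a captain
    ("P1", "1998-02-13", "TEAM_A", None),
    ("P4", "1979-08-13", "TEAM_B", None),  # oldest on TEAM_B
    ("P5", "1986-09-13", "TEAM_B", None),
])

# promote the oldest player of every team that has no captain yet
con.execute("""
UPDATE players
SET captain = 1
WHERE captain IS NULL
  AND team NOT IN (SELECT team FROM players WHERE captain = 1)
  AND birthday = (SELECT MIN(p2.birthday) FROM players p2
                  WHERE p2.team = players.team)
""")
captains = con.execute(
    "SELECT name FROM players WHERE captain = 1 ORDER BY name").fetchall()
```

TEAM_A is untouched (P2 stays captain) and P4, the oldest TEAM_B player, gets the flag. ISO-formatted date strings compare correctly as text, which is what the `MIN(birthday)` relies on here.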
Try this:

```
UPDATE players p1
SET captain = 1
WHERE birthday = (SELECT Min(birthday)
                  FROM players p2
                  WHERE p1.team = p2.team)
  AND CAPTAIN <> 1
```
Update Table based on minimal date within group
[ "", "mysql", "sql", "" ]
I have a table that looks like this:

```
ID  x1  x2  x3  x4
1   20  30  0   0
2   60  0   0   0
3   10  30  0   0
4   30  30  30  30
```

I want to be able to query this and return the ID with the number of columns that have more than 0 as their value in that row. So the result would look like this:

```
ID  Count
1   2
2   1
3   2
4   4
```
Try this:

```
SELECT ID, z.cnt
FROM mytable
CROSS APPLY (SELECT COUNT(*) AS cnt
             FROM (VALUES (x1), (x2), (x3), (x4)) x(y)
             WHERE x.y > 0) z
```

This query makes use of a [Table Value Constructor](https://msdn.microsoft.com/en-us/library/dd776382.aspx) to create an in-line table whose *rows* are the *columns* of the initial table. Performing a `COUNT` on this in-line table, you can get the number of columns greater than zero. I think this scales well if you have more than 4 columns.

[**Demo here**](http://sqlfiddle.com/#!6/1b646/1)
Try this:

```
Select ID,
  Case When x1 <> 0 Then 1 Else 0 End +
  Case When x2 <> 0 Then 1 Else 0 End +
  Case When x3 <> 0 Then 1 Else 0 End +
  Case When x4 <> 0 Then 1 Else 0 End as Count
From MyTable
```

While this is easy to code, the more columns you have, the larger your select is going to be, since each column needs its own CASE expression.
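In SQLite a comparison such as `x1 > 0` already evaluates to 0 or 1, so the CASE chain collapses to a plain sum. A small sketch on the question's data (SQLite stands in for SQL Server here; in T-SQL you would keep the explicit CASE expressions as shown above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, x1 INTEGER, x2 INTEGER, x3 INTEGER, x4 INTEGER)")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?)", [
    (1, 20, 30, 0, 0),
    (2, 60, 0, 0, 0),
    (3, 10, 30, 0, 0),
    (4, 30, 30, 30, 30),
])

# each comparison yields 0/1, so the sum is the count of non-zero columns
counts = con.execute("""
SELECT id, (x1 > 0) + (x2 > 0) + (x3 > 0) + (x4 > 0) AS cnt
FROM t ORDER BY id
""").fetchall()
```

This reproduces the expected output table: 2, 1, 2, 4 non-zero columns for IDs 1 through 4.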
Count how many columns have a specific value
[ "", "sql", "sql-server", "" ]
How can I select no rows if any row in the result set meets a certain condition? For instance:

```
Id|SomeColumn|Indicator
1 | test     | Y
1 | test1    | Y
1 | test2    | X
2 | test1    | Y
2 | test2    | Y
3 | test1    | Y
```

Say I wanted to select all rows where Id = 1 unless there is a row with an indicator = X.

Currently I am doing something like this:

```
SELECT *
FROM SOMETABLE
WHERE ID = 1
AND INDICATOR = 'Y'
AND ID NOT IN (SELECT ID FROM SOMETABLE WHERE INDICATOR = 'X')
```

But that feels really clunky and I feel like there could be a better way to be doing this. Is there, or am I just being overly sensitive?
Something like this?

```
SELECT *
FROM SOMETABLE
WHERE ID = 1
AND NOT EXISTS (SELECT 1 FROM SOMETABLE WHERE INDICATOR = 'X')
```

or, if you want the X to discriminate only on the same id:

```
SELECT *
FROM SOMETABLE t1
WHERE t1.ID = 1
AND NOT EXISTS (SELECT 1 FROM SOMETABLE t2
                WHERE t2.ID = t1.ID AND INDICATOR = 'X')
```
There are not too many options to do this. Another option is to use `EXISTS`.

```
SELECT *
FROM SOMETABLE s1
WHERE ID = 1
AND INDICATOR = 'Y'
AND NOT EXISTS (SELECT TOP 1 ID
                FROM SOMETABLE s2
                WHERE s1.ID = s2.ID AND INDICATOR = 'X')
```
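The `NOT EXISTS` form is easy to verify on the sample data from the question. A quick SQLite sketch (`TOP 1` is SQL Server syntax, so it is dropped here; it isn't needed for `EXISTS` anyway):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, col TEXT, ind TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, "test",  "Y"), (1, "test1", "Y"), (1, "test2", "X"),
    (2, "test1", "Y"), (2, "test2", "Y"),
])

def rows_for(con, wanted_id):
    # all 'Y' rows for wanted_id, unless that id also has an 'X' row
    return con.execute("""
        SELECT col FROM t t1
        WHERE t1.id = ? AND t1.ind = 'Y'
          AND NOT EXISTS (SELECT 1 FROM t t2
                          WHERE t2.id = t1.id AND t2.ind = 'X')
        ORDER BY col
    """, (wanted_id,)).fetchall()

blocked = rows_for(con, 1)   # id 1 has an 'X' row -> no rows at all
allowed = rows_for(con, 2)   # id 2 has no 'X' row -> its 'Y' rows
```

Id 1 returns nothing because of its `X` row, while id 2 returns both of its rows — exactly the "select no rows if any row meets a condition" behavior asked for.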
Select No Rows If Any Row Meets A Condition?
[ "", "sql", "t-sql", "" ]
```
Row | TimeStamp
____|________________________
1   | 2015-01-01 12:00:01.000
2   | 2015-01-01 12:00:02.000
3   | 2015-01-01 12:00:03.000
4   | 2015-01-01 12:00:04.000
5   | 2015-01-01 12:00:05.000
6   | 2015-01-01 12:00:06.000
7   | 2015-01-01 12:00:07.000
8   | 2015-01-01 12:00:08.000
9   | 2015-01-01 12:00:09.000
```

Selecting the previous row's TimeStamp has been a rather simple task, e.g.

```
SELECT MAX([TimeStamp]) FROM [MyTable] WHERE [TimeStamp] < '2015-01-01 12:00:02.000'
```

gets `2015-01-01 12:00:01.000` as expected. However, I'm having some trouble selecting a list of TimeStamps from multiple preceding rows. For example, say I wanted the timestamps for rows 3 through 6:

```
SELECT [TimeStamp] FROM [MyTable] WHERE [Row] >= 3 AND [Row] <= 6

=> TimeStamps
2015-01-01 12:00:03.000
2015-01-01 12:00:04.000
2015-01-01 12:00:05.000
2015-01-01 12:00:06.000
```

How would I go about getting the preceding TimeStamp for *each* of these result rows?

```
TimeStamps
2015-01-01 12:00:02.000
2015-01-01 12:00:03.000
2015-01-01 12:00:04.000
2015-01-01 12:00:05.000
```

I've seen quite a few solutions related to lag/lead, but my usage of SQL Server 2008 is difficult to change.
You could try this:

```
SELECT t.TimeStamp,
       (
         SELECT MAX(t1.TimeStamp)
         FROM MyTable t1
         WHERE t1.Row < t.Row
       ) AS PrevTimeStamp
FROM MyTable t
WHERE t.Row >= 3 AND t.Row <= 6
```

This would give you side-by-side columns, one current and one previous.
```
select b.id, max(a.timestamp)
from mytable a
join mytable b on a.id < b.id
where b.id between -- desired values
group by b.id
```

Here is a fiddle with sample data: [fiddle](http://sqlfiddle.com/#!3/ebce1/27)

Will this work for you? This will `join` on all the previous rows and then pick up the `max` timestamp, which would always be the value of the previous row, assuming the timestamp column is ordered ascending.
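The correlated-`MAX` approach from the accepted answer can be checked end to end in SQLite (column names follow the question; the bracket-quoted T-SQL identifiers become plain names here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ts (row INTEGER PRIMARY KEY, stamp TEXT)")
con.executemany("INSERT INTO ts VALUES (?,?)",
                [(i, f"2015-01-01 12:00:0{i}.000") for i in range(1, 10)])

# for rows 3..6, pair each timestamp with the largest earlier timestamp
pairs = con.execute("""
SELECT t.row,
       (SELECT MAX(p.stamp) FROM ts p WHERE p.stamp < t.stamp) AS prev_stamp
FROM ts t
WHERE t.row BETWEEN 3 AND 6
ORDER BY t.row
""").fetchall()
```

Each of rows 3–6 comes back with the timestamp of the row just before it, matching the expected list in the question. Text comparison works because the timestamps share a fixed format.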
Select previous row for each entry in Table/List
[ "", "sql", "sql-server-2008-r2", "" ]
I am using SQL Server 2012. I have table like this: ``` Val1 Val2 Val3 Val4 ------------------------------------------- 1 25000.00 1 1900-01-01 00:00:00.000 2 25000.00 1 2012-04-02 00:00:00.000 1 25125.00 1 2013-01-01 00:00:00.000 1 25502.00 1 2014-01-01 00:00:00.000 2 25502.00 1 2014-04-01 00:00:00.000 3 25502.00 1 2015-01-01 00:00:00.000 4 25502.00 1 2015-04-01 00:00:00.000 1 62500.00 2 1900-01-01 00:00:00.000 2 62500.00 2 2012-06-29 00:00:00.000 1 63750.00 2 2013-01-01 00:00:00.000 1 65025.00 2 2014-01-01 00:00:00.000 1 69250.00 2 2015-01-01 00:00:00.000 1 4300.00 3 1900-01-01 00:00:00.000 2 4300.00 3 2012-05-01 00:00:00.000 1 4343.00 3 2013-01-01 00:00:00.000 2 4343.00 3 2013-06-01 00:00:00.000 3 4343.00 3 2013-09-01 00:00:00.000 4 4343.00 3 2014-04-01 00:00:00.000 5 4343.00 3 2014-09-01 00:00:00.000 1 3257.25 3 2014-09-15 00:00:00.000 2 3257.25 3 2015-03-01 00:00:00.000 1 4543.00 3 2015-04-01 00:00:00.000 ``` I would like to get something like this: ``` Val1 Val2 Val3 Val4 ---------------------------------------------- 2 25000.00 1 2012-04-02 00:00:00.000 1 25125.00 1 2013-01-01 00:00:00.000 4 25502.00 1 2015-04-01 00:00:00.000 2 62500.00 2 2012-06-29 00:00:00.000 1 63750.00 2 2013-01-01 00:00:00.000 1 65025.00 2 2014-01-01 00:00:00.000 1 69250.00 2 2015-01-01 00:00:00.000 2 4300.00 3 2012-05-01 00:00:00.000 5 4343.00 3 2014-09-01 00:00:00.000 2 3257.25 3 2015-03-01 00:00:00.000 1 4543.00 3 2015-04-01 00:00:00.000 ``` I tried this SQL query: ``` select t1.* from table1 t1 inner join (select Val3, max(Val1) as MaxRank from table1 group by Val3) t13 on t1.Val3 = t13.Val3 and t1.Val1 = t13.MaxRank order by t1.Val3 ``` But I have only one record for `Val3`.
Maybe this will help you:

```
select t1.*
from table1 t1
join (
   select max(Val1) as Val1, Val2, Val3
   from table1
   group by Val2, Val3
) t2 on t1.Val1 = t2.Val1 and t1.Val2 = t2.Val2 and t1.Val3 = t2.Val3
```
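The join-on-aggregate pattern can be verified on a subset of the sample rows. A SQLite sketch with the same query shape and abbreviated data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val1 INTEGER, val2 REAL, val3 INTEGER, val4 TEXT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?)", [
    (1, 25000.0, 1, "1900-01-01"), (2, 25000.0, 1, "2012-04-02"),
    (1, 25125.0, 1, "2013-01-01"),
    (1, 62500.0, 2, "1900-01-01"), (2, 62500.0, 2, "2012-06-29"),
])

# keep, per (val2, val3) group, the row carrying the maximum val1
rows = con.execute("""
SELECT t1.val1, t1.val2, t1.val3, t1.val4
FROM t t1
JOIN (SELECT MAX(val1) AS val1, val2, val3
      FROM t GROUP BY val2, val3) t2
  ON t1.val1 = t2.val1 AND t1.val2 = t2.val2 AND t1.val3 = t2.val3
ORDER BY t1.val3, t1.val2
""").fetchall()
```

Each (val2, val3) group contributes exactly one row — the one with the highest val1 — which is the behavior the asker's original `MAX`-per-`Val3`-only attempt missed.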
this should work

```
SELECT val1, val2, val3, val4
FROM (SELECT *,
             ROW_NUMBER () OVER (PARTITION BY val2 ORDER BY val1 DESC) rn
      FROM table1) a
WHERE rn = 1
```
Group record by one column
[ "", "sql", "t-sql", "sql-server-2012", "" ]
I have a query:

```
SELECT VendorID, VendorName, VendorType, FirstName, LastName,
       VendorCompany, Contact, Phone, AltContact, Email, OpeningBalance,
       OpeningDate, VendorAccountNo, Photo, VendorNotes
FROM Vendor
INNER JOIN VendorTypeTable ON Vendor.VendorTypeID = VendorTypeTable.VendorTypeID
```

Now, on the other side, I also want to retrieve those vendors that have VendorTypeID as null, like this query:

```
SELECT VendorID, VendorName, FirstName, LastName,
       VendorCompany, Contact, Phone, AltContact, Email, OpeningBalance,
       OpeningDate, VendorAccountNo, Photo, VendorNotes
FROM Vendor
WHERE VendorTypeID IS NULL;
```

How can I combine these two queries?
While @Fireblade is absolutely correct, sometimes a UNION is the right answer if you have two queries returning the same fields. This allows you to tweak the performance of each independently of the other. Note that both branches must return the same number of columns, so the second branch selects `NULL AS VendorType` in place of the joined value:

```
SELECT VendorID, VendorName, VendorType, FirstName, LastName,
       VendorCompany, Contact, Phone, AltContact, Email,
       OpeningBalance, OpeningDate, VendorAccountNo, Photo, VendorNotes
FROM Vendor
INNER JOIN VendorTypeTable ON Vendor.VendorTypeID = VendorTypeTable.VendorTypeID

UNION

SELECT VendorID, VendorName, NULL AS VendorType, FirstName, LastName,
       VendorCompany, Contact, Phone, AltContact, Email,
       OpeningBalance, OpeningDate, VendorAccountNo, Photo, VendorNotes
FROM Vendor
WHERE VendorTypeID IS NULL;
```
Something like this:

```
SELECT vendorid, vendorname, vendortype, firstname, lastname,
       vendorcompany, contact, phone, altcontact, email,
       openingbalance, openingdate, vendoraccountno, photo, vendornotes
FROM vendor
INNER JOIN vendortypetable
        ON vendor.vendortypeid = vendortypetable.vendortypeid
        OR vendor.vendortypeid IS NULL
```
OR Operator in inner join
[ "", "sql", "subquery", "inner-join", "" ]
I have a variable in an Oracle procedure declared like this:

```
myMsg VARCHAR2(256 BYTE)
```

How can I port it to MySQL? When I try to declare it in a MySQL procedure:

```
DECLARE myMsg VARCHAR(256 BYTE);
```

MySQL throws a syntax error.

---

How can I port that Oracle variable definition to MySQL?

---

It is the `256 BYTE` part that doesn't let me declare the variable in MySQL, and I need to find something similar to this `256 BYTE` expression in MySQL.
In MySQL, varchar columns get declared like this:

```
colname VARCHAR(256),
another VARCHAR(20) NOT NULL,
```

etcetera. The Oracle syntax is different.

In stored code in MySQL, inside function or procedure scope, variables get declared like this:

```
DECLARE myMsg VARCHAR(256);
```

If you're not aware of this, you soon will be: Oracle PL/SQL and MySQL's little language for stored code are very different from each other.
The maximum size of `varchar` in MySQL is `255` before MySQL 5.0.3.

From the MySQL docs:

> A variable-length string. M represents the maximum column length in characters. In MySQL 5.0, the range of M is 0 to 255 before MySQL 5.0.3, and 0 to 65,535 in MySQL 5.0.3 and later

In MySQL 5.0.3 and later, a `CHAR` length greater than `255` is illegal and fails with an error:

```
mysql> CREATE TABLE c1 (col1 INT, col2 CHAR(500));
ERROR 1074 (42000): Column length too big for column 'col' (max = 255);
use BLOB or TEXT instead
```

The error clearly mentions that you should **use BLOB or TEXT instead**.

For more info about string data types, see this [link](https://dev.mysql.com/doc/refman/5.0/en/string-type-overview.html)
Migrating from Oracle to MySQL. VARCHAR2 length defined using bytes. How to port?
[ "", "mysql", "sql", "oracle", "migration", "" ]
I have two tables. TABLE A has columns *token* (primary key) and *downtime* (`INT`); TABLE B has columns *token*, *state* (`ENUM` with 3 states: `active`, `unstable`, `inactive`), and *duration* (`INT`).

I want to sum *duration* from TABLE B, only for the states `unstable` and `inactive`, and then assign the result to the *downtime* column of TABLE A. So, for example:

```
TABLE A
=======
token             downtime
--------------------------
bv87pxicnrtk8pw   null
v3525kq2kzihb9u   null

TABLE B
=======
token             state      duration
-------------------------------------
v3525kq2kzihb9u   active     9
v3525kq2kzihb9u   unstable   20
v3525kq2kzihb9u   inactive   60
bv87pxicnrtk8pw   unstable   11
bv87pxicnrtk8pw   active     140
bv87pxicnrtk8pw   inactive   40

RESULT
======
token             downtime
--------------------------
bv87pxicnrtk8pw   51
v3525kq2kzihb9u   80
```

I tried

```
UPDATE A
SET downtime = (SELECT SUM(duration)
                FROM B
                WHERE state != 'active'
                GROUP BY token)
WHERE A.token = B.token;
```

but without success. Newbie in SQL. Can anyone help me?
In MySQL you can use a join in the update, something like:

```
update tablea a
join (
    select token, sum(duration) as duration
    from tableb
    where state != 'active'
    group by token
) b on b.token = a.token
set a.downtime = b.duration
```
try this:

```
UPDATE A a
INNER JOIN (
    SELECT token, SUM(duration) AS s
    FROM B
    WHERE state <> 'active'
    GROUP BY token
) b ON a.token = b.token
SET a.downtime = b.s
```
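Since SQLite has no `UPDATE ... JOIN`, the equivalent logic uses a correlated subquery; this sketch (with abbreviated tokens) verifies the 51/80 totals expected in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (token TEXT PRIMARY KEY, downtime INTEGER);
CREATE TABLE b (token TEXT, state TEXT, duration INTEGER);
""")
con.executemany("INSERT INTO a VALUES (?,?)",
                [("bv87", None), ("v352", None)])
con.executemany("INSERT INTO b VALUES (?,?,?)", [
    ("v352", "active", 9), ("v352", "unstable", 20), ("v352", "inactive", 60),
    ("bv87", "unstable", 11), ("bv87", "active", 140), ("bv87", "inactive", 40),
])

# per-token sum of non-active durations, written back into table a
con.execute("""
UPDATE a
SET downtime = (SELECT SUM(duration) FROM b
                WHERE b.token = a.token AND b.state <> 'active')
""")
result = con.execute("SELECT token, downtime FROM a ORDER BY token").fetchall()
```

Only the `unstable` and `inactive` rows contribute: 11 + 40 = 51 and 20 + 60 = 80, matching the RESULT table in the question.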
MySql UPDATE statement with SELECT summarizing data from table 2
[ "", "mysql", "sql", "" ]
I have some data as below:

```
DECLARE @MyTable AS TABLE (productName varchar(13), test1 int, test2 int)

INSERT INTO @MyTable (productName, test1, test2)
VALUES
    ('a', 1, 1),
    ('a', 2, 2),
    ('a', 3, 3),
    ('b', 1, 4),
    ('b', 2, 5),
    ('b', 3, 6),
    ('a', 1, 7),
    ('a', 4, 8),
    ('a', 5, 9);

SELECT productname, MAX(test1) from @MyTable group BY productname
```

A MAX query on the test1 column gives

```
a,5
b,3
```

but I need to have the result as

```
a,3
b,3
a,5
```

when rows are ordered by test2.
You can solve this by using a trick with row\_numbers, so that you assign 2 different row numbers, one for the whole data and one that is partitioned by productname. If you compare the difference between these numbers, you can figure out when the product name has changed, and use that to determine the max values for each group.

```
select productname, max(test1) from
(
   SELECT *,
      row_number() over (order by test2 asc) -
      row_number() over (partition by productname order by test2 asc) as GRP
   from @MyTable
) X
group by productname, GRP
```

You can test this in [SQL Fiddle](http://sqlfiddle.com/#!6/1e943/1)

If the test2 column is always a row number without gaps, you can use that too instead of the first row number column. If you need ordering in the data, you'll have to, for example, use the max of test1 to do that.
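The grouping that the double `ROW_NUMBER()` computes — one group per consecutive run of the same productName — can be reproduced directly in Python to confirm the expected output:

```python
rows = [  # (productName, test1, test2), already ordered by test2
    ("a", 1, 1), ("a", 2, 2), ("a", 3, 3),
    ("b", 1, 4), ("b", 2, 5), ("b", 3, 6),
    ("a", 1, 7), ("a", 4, 8), ("a", 5, 9),
]

# walk in test2 order; start a new group whenever the name changes,
# keeping the running max of test1 within each run
groups = []  # list of (name, max_test1) per consecutive run
for name, test1, _ in rows:
    if groups and groups[-1][0] == name:
        groups[-1] = (name, max(groups[-1][1], test1))
    else:
        groups.append((name, test1))
```

The runs come out as (a, 3), (b, 3), (a, 5) — the "gaps and islands" result that a plain `GROUP BY productname` cannot produce, because it merges the two separate runs of `a`.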
Please check the following SQL Select statement ``` DECLARE @MyTable AS TABLE (productName varchar(13), test1 int,test2 int) INSERT INTO @MyTable (productName, test1,test2) VALUES ('a', 1,1), ('a', 2,2), ('a', 3,3), ('b', 1,4), ('b', 2,5), ('b', 3,6), ('a', 1,7), ('a', 4,8), ('a', 5,9) DECLARE @MyTableNew AS TABLE (id int identity(1,1), productName varchar(13), test1 int,test2 int) insert into @MyTableNew select * from @MyTable --select * from @MyTableNew ;with cte as ( SELECT id, productName, test1, test2, case when (lag(productName,1,'') over (order by id)) = productName then 0 else 1 end ischange from @MyTableNew ), cte2 as ( select t.*,(select sum(ischange) from cte where id <= t.id) grp from cte t ) select distinct grp, productName, max(test1) over (partition by grp) from cte2 ``` This is implemented according to the following [SQL Server Lag() function tutorial](http://www.kodyaz.com/t-sql/sqlserver-lag-function-to-group-rows-on-column-value.aspx) The Lag() function is used to identify and order the groups in table data
Multiple SQL MAX when items are not in order
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table called `Orders`. There are three columns: `CustomerID`, `ProductGroup`, `Size`. How can I get the TOP selling size by `ProductGroup` from this table? I can do it 1 by 1 with

```
SELECT TOP 1 Count(customerid) as customers, ProductGroup, Size
FROM Orders
WHERE ProductGroup = xxx
GROUP BY ProductGroup, Size
ORDER BY Count(customerid) DESC
```

However, I would like to get the full list at once.
Not sure, but it may help you. ``` Declare @temp table(CustomerID int, ProductGroup varchar(10), Size int) insert into @temp Select 1,'ABC',15 union all Select 2,'ABC',10 union all Select 3,'XYZ',12 union all Select 4,'ABC',15 union all Select 3,'XYZ',12 union all Select 3,'XYZ',12 union all Select 3,'XYZ',15 union all Select 3,'XYZ',11 union all Select 3,'XYZ',12 Select * from ( Select RANK() Over(PARTITION BY ProductGroup order by customers desc) RankVal,customers,ProductGroup, Size from ( SELECT Count(CustomerID) as customers, ProductGroup, Size FROM @temp GROUP BY ProductGroup, Size ) T )T1 WHERE RankVal = 1 ```
Please check following SELECT query with [SQL Count using Partition By clause](http://www.kodyaz.com/t-sql/sql-count-function-with-partition-by-clause.aspx) ``` ;with cte as ( SELECT Distinct ProductGroup, Size, COUNT(*) OVER (Partition By ProductGroup, Size) cnt FROM Customers ) select ProductGroup, Size, cnt from ( select *, rn = ROW_NUMBER() OVER (Partition By ProductGroup Order By cnt desc) from cte ) t where rn = 1 ```
select top selling size by product group from table
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I am using the following SQL CASE:

```
SELECT BomMast.BomStockCode
     , BomMast.BomDescription
     , CASE
           WHEN StkItem.AveUCst <= 0 THEN 'ERROR'
           WHEN StkItem.AveUCst > 0 THEN (StkItem.AveUCst * BomComp.ProductionQty)
       END AS TotalCost
FROM BomComp
INNER JOIN BomMast ON BomMast.BomID = BomComp.BomMasterKey
INNER JOIN StkItem ON StkItem.StockLink = BomComp.ComponentStockLink
```

But I get the following message:

> Msg 8114, Level 16, State 5, Line 2
> Error converting data type varchar to float.

Am I not allowed to add text within the CASE statement? Thank you!
Yes, text can be used as the result a case statement, as can any datatype, but each case must return the same type, as the results column must have **one type** only. Your `[TotalCost]` column has conflicting data types. `[StkItem.AveUCst]` is a **float** and the literal value of `'ERROR'` is a **varchar**. If you are intending to retain the benefits of number-based value in your results column, consider replacing `'ERROR'` with the SQL keyword `NULL`.
Change your query to:

```
SELECT BomMast.BomStockCode
    ,BomMast.BomDescription
    ,CASE
        WHEN StkItem.AveUCst <= 0 THEN 'ERROR'
        WHEN StkItem.AveUCst > 0
            THEN CAST((StkItem.AveUCst * BomComp.ProductionQty) AS NVARCHAR(MAX))
     END AS TotalCost
FROM BomComp
INNER JOIN BomMast ON BomMast.BomID = BomComp.BomMasterKey
INNER JOIN StkItem ON StkItem.StockLink = BomComp.ComponentStockLink
```

The datatypes of the values you want to show in either branch of your `CASE` statement need to be the same in order to work.

**Edit:** After **@underscore\_d**'s suggestion, I also consider that it would be a far better option to display `NULL` instead of the message `ERROR` and then handle this `NULL` value at the application level. Hence, your case statement will change to:

```
CASE
    WHEN StkItem.AveUCst <= 0 THEN NULL
    WHEN StkItem.AveUCst > 0 THEN (StkItem.AveUCst * BomComp.ProductionQty)
END AS TotalCost
```
Adding text string to CASE Statement
[ "", "sql", "sql-server", "t-sql", "" ]
I'm trying to sort by a certain character in a string. For example, before:

```
+----+---------+
| id | name    |
+----+---------+
| 1  | red     |
| 2  | red-a   |
| 3  | red-xy  |
| 4  | blue    |
| 5  | blue-a  |
| 6  | blue-xy |
+----+---------+
```

after:

```
+----+---------+
| id | name    |
+----+---------+
| 4  | blue    |
| 1  | red     |
| 5  | blue-a  |
| 2  | red-a   |
| 6  | blue-xy |
| 3  | red-xy  |
+----+---------+
```

Are there any ways to categorize based on **-a** or **-xy** using `ORDER BY`? Thank you in advance.
```
SELECT CASE
         WHEN RIGHT(`name`, LENGTH(`name`) - INSTR(`name`, '-')) = `name` THEN ''
         ELSE RIGHT(`name`, LENGTH(`name`) - INSTR(`name`, '-'))
       END AS `suffix`,
       `name`
FROM `table1`
ORDER BY `suffix`, `name`
```

If no suffix is found, it will put the record in the first row set.

Caveat: the first dash is used to separate the word from the suffix.
This will do what you're looking for. Wouldn't like to promise great performance if you had a lot of rows though: ``` select id, name from ( select id, name, if (substring_index(name,'-', -1) = name, '', substring_index(name,'-', -1)) as grouping from Table1 order by grouping, name ) as subTable ``` [SQLFiddle here](http://sqlfiddle.com/#!9/92b63/16/0) [EDIT] Actually, that can be simplified to a single select with : ``` select id, name from Table1 order by if (substring_index(name,'-', -1) = name, '', substring_index(name,'-', -1)), name ```
MySQL - sort by certain last string character
[ "", "mysql", "sql", "string", "sorting", "" ]
I want to create an SQL select statement that will JOIN a few tables and yield results. Essentially I have two tables that are independent of one another. One table contains users (which is assumed to have the most up-to-date and correct information) and I have a placeholder table called players. The placeholder table has a column labelled userID which is set as NULL. When a user is recognized as a player, the NULL value for userID is replaced by the user's unique id. This causes me to want to use SQL to test if userID is NULL. I have written the following statement, which does not work well ``` SELECT teams.*, players.number, players.position, players.userId, CASE WHEN players.userId IS NULL THEN players.id ELSE userPlayer.id END AS playerId, CASE WHEN players.userId IS NULL THEN players.firstName ELSE userPlayer.firstName END AS firstName, CASE WHEN players.userId IS NULL THEN players.lastName ELSE userPlayer.lastName END AS lastName, CASE WHEN players.userId IS NULL THEN players.email ELSE userPlayer.email END AS email FROM teams LEFT JOIN players ON players.teamId = teams.id LEFT JOIN users AS userPlayer ON userPlayer.id= players.userId WHERE teams.id = (:t) ``` This seems very inefficient. Is there a better way? Thanks
To begin with, `userPlayer.id` is never different from `players.userId` (either both are something, or both are null), so you can dispense with that expression. Also, use `coalesce()` in preference to `case when x is null` for clarity and brevity: ``` SELECT teams.*, players.number, players.position, players.userId AS playerId, COALESCE(userPlayer.firstName, players.firstName) AS firstName, COALESCE(userPlayer.lastName, players.lastName) AS lastName, COALESCE(userPlayer.email, players.email) AS email FROM teams LEFT JOIN players ON players.teamId = teams.id LEFT JOIN users AS userPlayer ON userPlayer.id = players.userId WHERE teams.id = (:t) ```
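The `COALESCE` fallback recommended above can be demonstrated end-to-end in a small SQLite sketch (via Python's sqlite3). The table contents here are invented to show both cases — a placeholder row with a NULL userId and a row whose user record overrides the stale placeholder data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE teams   (id INTEGER, name TEXT);
CREATE TABLE players (teamId INTEGER, userId INTEGER, firstName TEXT);
CREATE TABLE users   (id INTEGER, firstName TEXT);
INSERT INTO teams   VALUES (1, 'A');
INSERT INTO players VALUES (1, NULL, 'placeholder'),  -- not yet a user
                           (1, 10,   'stale');        -- superseded by users
INSERT INTO users   VALUES (10, 'fresh');
""")
# COALESCE picks the user's value when the LEFT JOIN matched,
# otherwise falls back to the placeholder value in players.
rows = conn.execute("""
    SELECT COALESCE(u.firstName, p.firstName)
    FROM teams t
    LEFT JOIN players p ON p.teamId = t.id
    LEFT JOIN users u   ON u.id = p.userId
    WHERE t.id = 1
    ORDER BY p.userId IS NOT NULL
""").fetchall()
names = [r[0] for r in rows]
```

The placeholder row keeps its own name, while the matched row shows the user table's value.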
You could create a view where you can comfortably select teams by their id. In this case it is possible to optimize, because if players.userId is null then all values of user are null (because of left join). So you could create a union, if you have a lot of null entries it could be beneficial: ``` SELECT teams.*, players.number, players.position, players.userId AS playerId, players.firstName, players.lastName, players.email FROM teams LEFT JOIN players ON players.teamId = teams.id WHERE players.userId IS NULL OR NOT EXISTS(SELECT * FROM users WHERE users.id = players.userId) UNION SELECT teams.*, players.number, players.position, players.userId AS playerId, COALESCE(userPlayer.firstName, players.firstName) AS firstName, COALESCE(userPlayer.lastName, players.lastName) AS lastName, COALESCE(userPlayer.email, players.email) AS email FROM teams INNER JOIN players ON players.teamId = teams.id INNER JOIN users AS userPlayer ON userPlayer.id = players.userId ```
SQL Case column is null redefine selectors
[ "", "sql", "join", "left-join", "" ]
Can't believe I am stuck with this, but how can I check that the value I am returning is null in a select statement? ``` IF EXISTS(SELECT TU.Tagged FROM TopicUser TU WHERE TU.TopicId = @TopicId and TU.UserId = @UserId) BEGIN --do stuff END ``` The value of `TU.Tagged` is `NULL`, yet it still goes into the condition. In my mind it does not exist.
It looks like you need something like: ``` IF EXISTS(SELECT TU.Tagged FROM TopicUser TU WHERE TU.TopicId = @TopicId AND TU.UserId = @UserId AND TU.Tagged IS NOT NULL) BEGIN --do stuff END ``` Otherwise, you're checking only if records meeting your criteria exist, but those records could have a `NULL` value in the `TU.Tagged` column.
Solution 1: Use the IsNull() function. When the query below returns a null value, the IsNull function replaces the null value with 0 and the IF condition is treated as False. ``` IF EXISTS (SELECT IsNULL(TU.Tagged,0) FROM TopicUser TU WHERE TU.TopicId = @TopicId and TU.UserId = @UserId) BEGIN END ``` Solution 2: Use an (IS NULL / IS NOT NULL) check. ``` IF EXISTS (SELECT TU.Tagged FROM TopicUser TU WHERE TU.TopicId = @TopicId and TU.UserId = @UserId AND TU.Tagged IS NOT NULL) BEGIN END ```
Check if a column's value is null in SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I have the two following table structures. The first is like this ``` szCustId szArClassId decCreditLimit szPaymentTermId 22101100071 AMORE 0.00 30 HARI_AMORE 22101100071 BELLISIMO 500.00 17 HARI_BELLISIMO 22101100071 CAPE 0.00 17 HARI_CAPEDISC 22101100071 FOOD 0.00 17 HARI_FOOD 22101100071 GFES 1000.00 0 HARI_GFES 22101100071 GILBEYS 0.00 17 HARI_GILBEYS 22101100071 GZERO 0.00 13 HARI_GZERO 22101100071 AMORE 0.00 30 HARI_AMORE 22101100069 BELLISIMO 500.00 17 HARI_BELLISIMO 22101100069 CAPE 0.00 17 HARI_CAPEDISC 22101100069 FOOD 0.00 17 HARI_FOOD 22101100888 FOOD 0.00 17 HARI_FOOD 22101100888 GFES 1000.00 0 HARI_GFES 22101100888 GILBEYS 0.00 17 HARI_GILBEYS 22101100888 GZERO 0.00 13 HARI_GZERO ``` and the second table (the one I want to update) is this ``` szCustId bAllowToCredit decCreditLimit 22101100071 1 0.00 22101100069 1 0.00 22101100888 1 0.00 ``` I need to take a sum of `decCreditLimit` grouped by each matching `szCustId`, so the result (in table 2) would be like this ``` szCustId bAllowToCredit decCreditLimit 22101100071 1 1500.00 22101100069 1 500.00 22101100888 1 1000.00 ``` How can I write this SQL query?
You can `join` two tables and use `sum` aggregate function with the `group by` clause to calculate total `decCreditLimit` for each `szCustId` and `bAllowToCredit`: ``` select t2.szCustId , t2.bAllowToCredit , sum(t1.decCreditLimit) as decCreditLimit from table1 t1 join table2 t2 on t1.szCustId = t2.szCustId group by t2.szCustId , t2.bAllowToCredit ```
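The join-plus-`SUM`-plus-`GROUP BY` pattern above can be checked in a quick SQLite sketch (Python's sqlite3); the customer ids are shortened stand-ins for the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (szCustId TEXT, decCreditLimit REAL);
CREATE TABLE t2 (szCustId TEXT, bAllowToCredit INTEGER);
INSERT INTO t1 VALUES ('A', 500), ('A', 1000), ('B', 500);
INSERT INTO t2 VALUES ('A', 1), ('B', 1);
""")
# Join the detail rows to the summary table, then aggregate per customer.
rows = conn.execute("""
    SELECT t2.szCustId, t2.bAllowToCredit, SUM(t1.decCreditLimit)
    FROM t1
    JOIN t2 ON t1.szCustId = t2.szCustId
    GROUP BY t2.szCustId, t2.bAllowToCredit
    ORDER BY t2.szCustId
""").fetchall()
totals = {r[0]: r[2] for r in rows}
```

Customer A's two rows (500 + 1000) collapse to a single 1500 total, as in the question's expected output.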
Try this query ``` select a.szCustId, a.bAllowToCredit, b.decCreditLimit from tablesecond a join( select szCustId, sum(decCreditLimit) as decCreditLimit from tablefirst group by szCustId ) b on a.szCustId = b.szCustId ```
How to take sum of column with same id and different table in SQL?
[ "", "sql", "sql-server", "" ]
We currently have the following columns: ``` CustomerID int ArrivalDateTime datetime CheckPoint varchar(20) DepartureDateTime datetime ``` The CheckPoint column records a time in 24-hour format based on specific queries. ``` CustomerID ArrivalDateTime CheckPoint DepartureDateTime 1 2015-05-03 08:15 0800 2015-05-03 08:30 2 2015-05-04 13:15 1300 2015-05-04 15:30 ``` What I need is to replace the time from ArrivalDateTime with the CheckPoint time, so that it reads as below: ``` CustomerID ArrivalDateTime DepartureDateTime 1 2015-05-03 08:00 2015-05-03 08:30 2 2015-05-04 13:00 2015-05-04 15:30 ``` This way I can get the time difference from ArrivalDateTime to DepartureDateTime. Expected results: ``` CustomerID ArrivalDateTime DepartureDateTime TimeInMinutes 1 2015-05-03 08:00 2015-05-03 08:30 30 2 2015-05-04 13:00 2015-05-04 15:30 150 ```
Try the following: ``` SELECT CustomerID ,CAST(CAST(ArrivalDateTime AS DATE) AS DATETIME) + CAST(CONCAT(LEFT([CheckPoint], 2), ':', RIGHT([CheckPoint], 2)) AS DATETIME) AS ArrivalDateTime ,DepartureDateTime ,DATEDIFF(MINUTE, CAST(CAST(ArrivalDateTime AS DATE) AS DATETIME) + CAST(CONCAT(LEFT([CheckPoint], 2), ':', RIGHT([CheckPoint], 2)) AS DATETIME), DepartureDateTime) AS TimeInMinutes FROM @tbl ``` --- Sample data with output ``` DECLARE @tbl TABLE (CustomerID INT, ArrivalDateTime DATETIME, [CheckPoint] Varchar(20), DepartureDateTime DATETIME) INSERT @tbl SELECT 1 ,'2015-05-03 08:15', '0800' , '2015-05-03 08:30' UNION ALL SELECT 2 ,'2015-05-04 13:15', '1300' , '2015-05-04 15:30' SELECT CustomerID ,CAST(CAST(ArrivalDateTime AS DATE) AS DATETIME) + CAST(CONCAT(LEFT([CheckPoint], 2), ':', RIGHT([CheckPoint], 2)) AS DATETIME) AS ArrivalDateTime ,DepartureDateTime ,DATEDIFF(MINUTE, CAST(CAST(ArrivalDateTime AS DATE) AS DATETIME) + CAST(CONCAT(LEFT([CheckPoint], 2), ':', RIGHT([CheckPoint], 2)) AS DATETIME), DepartureDateTime) AS TimeInMinutes FROM @tbl CustomerID ArrivalDateTime DepartureDateTime TimeInMinutes 1 2015-05-03 08:00:00.000 2015-05-03 08:30:00.000 30 2 2015-05-04 13:00:00.000 2015-05-04 15:30:00.000 150 ``` [SQLFiddle](http://sqlfiddle.com/#!6/7146f/1)
Try this: ``` select CustomerID , DATEADD(day, DATEDIFF(day, 0, ArrivalDateTime), STUFF([CheckPoint],3,0,':')) ArrivalDateTime , DepartureDateTime , DATEDIFF(mi, DATEADD(day, DATEDIFF(day, 0, ArrivalDateTime), STUFF([CheckPoint],3,0,':')), DepartureDateTime) as TimeInMinutes from t; ``` If you don't want to repeat the functions you can move them to a CTE or derived table like so: ``` select CustomerID , ArrivalDateTime , DepartureDateTime , DATEDIFF(mi, ArrivalDateTime, DepartureDateTime) as TimeInMinutes from ( select CustomerID , DATEADD(day, DATEDIFF(day, 0, ArrivalDateTime), STUFF([CheckPoint],3,0,':')) ArrivalDateTime , DepartureDateTime from t) a ; ```
Concatenate + time difference
[ "", "sql", "sql-server", "t-sql", "datetime", "sql-server-2012", "" ]
I have a table that shows the status of each case, with multiple jobs being performed simultaneously. I would like the results displayed so that only the first and last instance are shown (mainly I want to know when the job was first started and what its last known status is). I've managed to get the results with two similar min/max GROUP BY queries joined by a UNION, but is there a simpler way? Also, would it be possible to display the two instances on one line instead of two separate lines? The date from the first instance will be the start date and the last instance will be the end date, and I don't really care about the first status because it's always pending; I just want to know the last known status. The 1st table shows unfiltered results and the 2nd table is the desired result (but if we can combine the first and last instance on one line, that'd be even better) ``` ID Status Date Job Note 1 pending 1-Jul A abc 1 pending 2-Jul A xyz 1 pending 2-Jul A abc 1 done 3-Jul B xyz 1 done 4-Jul A abc 2 pending 1-Jul A abc 2 done 2-Jul A xyz 2 done 2-Jul A abc 2 pending 3-Jul C xyz 2 pending 4-Jul C xyz 2 pending 5-Jul C xyz 2 pending 6-Jul C xyz 3 pending 2-Jul D xyz 3 done 3-Jul D abc 3 pending 4-Jul D abc 3 pending 1-Jul E xyz 3 done 3-Jul E xyz ID Status Date Job Note 1 pending 1-Jul A abc 1 done 3-Jul B xyz 1 done 4-Jul A abc 2 pending 1-Jul A abc 2 done 2-Jul A abc 2 pending 3-Jul C xyz 2 pending 6-Jul C xyz 3 pending 2-Jul D xyz 3 pending 4-Jul D abc 3 pending 1-Jul E xyz 3 done 3-Jul E xyz ``` Thank you very much in advance
One way to do it is to use `ROW_NUMBER` function twice in ascending and descending order to get first and last rows of each group. See [SQL Fiddle](http://sqlfiddle.com/#!6/e9ff5/2/0) ``` WITH CTE AS ( SELECT ID ,Status ,dt ,Job ,Note ,ROW_NUMBER() OVER (PARTITION BY ID, Job ORDER BY dt ASC) AS rnASC ,ROW_NUMBER() OVER (PARTITION BY ID, Job ORDER BY dt DESC) AS rnDESC FROM T ) SELECT ID ,Status ,dt ,Job ,Note FROM CTE WHERE rnAsc=1 OR rnDesc=1 ORDER BY ID, Job, dt ``` This variant would scan through the whole table, calculate row numbers and discard those rows that don't satisfy the filter. The second variant is to use `CROSS APPLY`, which may be more efficient, if (a) your main table has millions of rows, (b) you have a small table with the list of all `ID`s and `Job`s, (c) the main table has appropriate index. In this case instead of reading all rows of the main table you can do index seek for each `(ID, Job)` (two seeks, one for first row plus one for the last row).
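The double-`ROW_NUMBER` trick in the answer above runs unchanged in SQLite 3.25+ (which supports window functions), so it can be demonstrated from Python. This sketch uses a reduced sample of one (ID, Job) group; the middle row gets rnAsc=2 and rnDesc=2 and is filtered out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE T (ID INTEGER, Status TEXT, dt TEXT, Job TEXT);
INSERT INTO T VALUES
 (1, 'pending', '2015-07-01', 'A'),
 (1, 'pending', '2015-07-02', 'A'),
 (1, 'done',    '2015-07-04', 'A');
""")
# rnAsc = 1 marks the earliest row per (ID, Job); rnDesc = 1 the latest.
rows = conn.execute("""
    WITH CTE AS (
      SELECT ID, Status, dt, Job,
        ROW_NUMBER() OVER (PARTITION BY ID, Job ORDER BY dt ASC)  AS rnAsc,
        ROW_NUMBER() OVER (PARTITION BY ID, Job ORDER BY dt DESC) AS rnDesc
      FROM T)
    SELECT dt, Status FROM CTE
    WHERE rnAsc = 1 OR rnDesc = 1
    ORDER BY dt
""").fetchall()
```

Only the first (pending) and last (done) instances survive the filter.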
I don't think there's much wrong with your UNION idea. Is this what you have? ``` select id, job, status, max(date), job, note, 'max' as type from test1 group by job UNION select id, job, status, min(date), job, note, 'min' as type from test1 group by job; ```
SQL Query: how to return only the first and last instance?
[ "", "sql", "sql-server", "sql-server-2014", "" ]
I have two columns - email id and customer id, where an email id can be associated with multiple customer ids. Now, I need to list only those email ids (along with their corresponding customer ids) which have more than one customer id. I tried using grouping sets, rollup and cube operators; however, I am not getting the desired result. Any help or pointers would be appreciated.
I *think* this will get you what you want, if I am understanding your question correctly ``` select emailid, customerid from tablename where emailid in ( select emailid from tablename group by emailid having count(emailid) > 1 ) ```
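The `GROUP BY … HAVING COUNT(*) > 1` subquery above is standard SQL, so it can be verified in a tiny SQLite sketch (Python's sqlite3) with made-up sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (emailid TEXT, customerid INTEGER);
INSERT INTO t VALUES ('a@x', 1), ('a@x', 2), ('b@x', 3);
""")
# Keep only rows whose email appears more than once.
rows = conn.execute("""
    SELECT emailid, customerid FROM t
    WHERE emailid IN (
        SELECT emailid FROM t GROUP BY emailid HAVING COUNT(*) > 1)
    ORDER BY customerid
""").fetchall()
```

The single-customer email `b@x` is excluded; both rows for `a@x` come back.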
``` SELECT emailid FROM ( SELECT emailid, count(custid) AS cnt FROM tablename GROUP BY emailid HAVING count(custid) > 1 ) AS t ```
using group by operators in sql
[ "", "sql", "sql-server", "group-by", "cube", "rollup", "" ]
How can I get the first day (Monday) and last day (Sunday) for **EVERY** week in a month? **Example: July 2015** *Week 1* First: 29-Jun Last: 5-Jul *Week 2* First: 6-Jul Last: 12-Jul *Week 3* First: 13-Jul Last: 19-Jul *Week 4* First: 20-Jul Last: 26-Jul *Week 5* First: 27-Jul Last: 2-Aug
Try this: ``` SET DATEFIRST 1; GO declare @Month date = '150701'; declare @Index int = 1; declare @StartWeek date; declare @MonthWeek table ([Week] char(6), StartDate date, EndDate date) set @StartWeek = DATEADD(dd, -(DATEPART(dw, @Month)-1), @Month) insert into @MonthWeek([Week], StartDate, EndDate) select 'Week ' + cast(@Index as char), @StartWeek, DATEADD(D, 6, @StartWeek); select @StartWeek = DATEADD(D, 7, @StartWeek), @Index = @Index + 1; while DATEPART(M,@StartWeek) = DATEPART(M,@Month) begin insert into @MonthWeek([Week],StartDate,EndDate) select 'Week ' + cast(@Index as char), @StartWeek, DATEADD(D, 6, @StartWeek); select @StartWeek = DATEADD(D, 7, @StartWeek), @Index = @Index + 1 end; select * from @MonthWeek; ```
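The calendar arithmetic that the T-SQL loop above performs — back up to the Monday of the week containing the 1st, then step in 7-day increments while the Monday is still on or before the month — can be sketched in plain Python to confirm it reproduces the question's July 2015 example. The function name is made up for the demo:

```python
from datetime import date, timedelta

def month_weeks(year, month):
    """Return (Monday, Sunday) pairs for every week whose Monday
    falls in, or just before, the given month."""
    first = date(year, month, 1)
    # date.weekday(): Monday == 0, so this backs up to the week's Monday.
    monday = first - timedelta(days=first.weekday())
    weeks = []
    while (monday.year, monday.month) <= (year, month):
        weeks.append((monday, monday + timedelta(days=6)))
        monday += timedelta(days=7)
    return weeks
```

For July 2015 this yields five weeks, starting 29-Jun and ending 2-Aug, matching the question.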
Here you go: ``` SELECT DATEADD(DAY, 2-DATEPART(WEEKDAY, CURRENT_TIMESTAMP), CURRENT_TIMESTAMP); -- first day of week SELECT DATEADD(DAY, 2-DATEPART(WEEKDAY, CURRENT_TIMESTAMP) + 6, CURRENT_TIMESTAMP); -- last day of week ``` [`SQLFiddle Demo`](http://www.sqlfiddle.com/#!3/9eecb7db59d16c80417c72d1/1648)
How to get first day and last day for every week in a month?
[ "", "sql", "sql-server-2008", "" ]
I need to generate a column for InvoiceID, and I want to keep the format of this column like this ``` INV0000001, INV0000002, . . . . INV0000010, INV0000011, . . and so on. ``` As you can see, this column increases from the last index. How can I do this? I'm using SQL Server 2012. I have searched, but couldn't find how to increase a number like this.
Try using `computed column` [MSDN](https://msdn.microsoft.com/en-IN/library/ms188300.aspx) ``` CREATE TABLE Yourtablename ( ID int IDENTITY (1,1) NOT NULL, InvoiceID AS 'INV'+ right('000000'+cast(ID as varchar(20)),7) PERSISTED ); ``` [**SQLFIDDLE DEMO**](http://sqlfiddle.com/#!6/83552) For more info on why you need to make your computed column as `persisted` check [*here*](http://blog.sqlauthority.com/2010/08/03/sql-server-computed-column-persisted-and%C2%A0performance/)
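The padding expression in the computed column above can be mirrored in a few lines of Python to see exactly what it produces; `invoice_id` here is a hypothetical helper, not part of the SQL Server schema:

```python
def invoice_id(n):
    # Mirrors 'INV' + RIGHT('000000' + CAST(n AS varchar(20)), 7).
    # Note RIGHT(..., 7) silently truncates ids longer than 7 digits,
    # exactly as the computed column would.
    return 'INV' + ('000000' + str(n))[-7:]
```

Small ids get zero-padded to seven digits, and a full seven-digit id passes through unchanged.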
Try this one (completely dynamic): ``` Declare @NextInvoice Int = 0 Select @NextInvoice = Isnull(Max(Cast(Replace(InvoiceID, 'INV', '') As Bigint)), 0) + 1 From Yourtablename Select 'INV' + Left('0000000', (Len('0000000') - Len(@NextInvoice))) + Cast(@NextInvoice As Varchar(20)) ```
Create column which increases from last index
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I installed SQL Server 2014 Express. It doesn't have SSIS, so I installed SQL Server 2014 evaluation. I still don't see SSIS. Am I installing it in a wrong way or does SSIS come only with purchased SQL Server 2014 Standard edition and above. I need to load multiple flat files into a database so I can work on them together.
SSIS IS NOT FREE !! We have to buy at least the Standard edition of SQL Server, which is too costly if you are working for a small company. The best way to load multiple files into a database without SSIS is to merge all files with the same columns together in Excel [Merging files](http://www.oaultimate.com/office/merge-multiple-excel-files-into-a-single-spreadsheet-ms-excel-2007.html) and then load that big file into the database using SQL Server Import & Export. Formatting columns may be tricky.
The Express edition comes without SSIS; actually, there is no free version of SSIS. You need either the SQL Server Standard, Developer, or Enterprise edition to get access to BIDS. If you have Visual Studio then you can download SQL Server Data Tools and you will be able to create SSIS projects. If you have Visual Studio 2013, download and install this: <http://www.microsoft.com/en-us/download/details.aspx?id=42313>
Does SQL Server 2014 evaluation copy provide SSIS
[ "", "sql", "sql-server", "ssis", "" ]
I've got the following dataset ordered by a specific column: ``` ratio ----- 1 1 1 0.8333 1 1.6667 3.3333 1 ``` And I want to count the rows where ratio equals 1, but **only** until I reach a row where ratio **is not** 1. For the above dataset my expected result would be *3* (the first three rows). Of course I could do this in the code, but I just wondered whether there's an SQL solution to this.
You say that the data is "ordered by a specific column". If so, you can simply do: ``` select count(*) from table t where specificcolumn < (select min(t2.specificcolumn) from table t2 where t2.ratio <> 1) ``` Depending on the ordering, the `<` may need to be `>`. Note: this assumes that the specific column has unique values. If the values are not unique, then you need multiple columns for a unique key.
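The "count up to the first non-matching row" idea above is easy to verify in SQLite (via Python's sqlite3). The ordering column `pos` is invented for the demo, since the question only says the data is ordered by "a specific column":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (pos INTEGER, ratio REAL);
INSERT INTO t VALUES
 (1,1),(2,1),(3,1),(4,0.8333),(5,1),(6,1.6667),(7,3.3333),(8,1);
""")
# Count rows that come strictly before the first row whose ratio <> 1.
(count,) = conn.execute("""
    SELECT COUNT(*) FROM t
    WHERE pos < (SELECT MIN(pos) FROM t WHERE ratio <> 1)
""").fetchone()
```

With the question's data, the first non-1 ratio is at position 4, so three rows are counted.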
If you have another primary key column in the table: ``` SELECT COUNT(`id`) FROM `table` WHERE `ratio` = 1 AND `id` < (SELECT `id` FROM `table` WHERE `ratio` != 1 ORDER BY `id` ASC LIMIT 0, 1) ```
SQL Query to return count of rows until certain criteria is met
[ "", "mysql", "sql", "" ]
I am stuck with escape characters which give different results. Please help me solve this problem and understand why the same query gives different results. **Query:** ``` SELECT * FROM APP_REALM_ENTRIES WHERE ID IN (SELECT ID FROM APP_ENTRIES where APP_EXT_CODE ='TTL1' AND VERSION_NUMBER='1.0.1'); ``` ***Result:*** a single row is returned **SQL Block:** ``` declare appcode varchar2(20); version_number varchar2(20); type rc is ref cursor; table_cursor rc; rec_table REALM_ENTRIES%ROWTYPE; begin appcode := 'TTL1'; version_number := '1.0.1'; open table_cursor for 'SELECT * FROM REALM_ENTRIES WHERE ID IN (SELECT ID FROM APP_ENTRIES where APP_EXT_CODE ='''||appcode||''||'AND VERSION_NUMBER='||version_number||''')'; LOOP FETCH table_cursor INTO rec_table; DBMS_OUTPUT.PUT_LINE('ROWCOUNT ' || table_cursor%ROWCOUNT ); EXIT WHEN table_cursor%NOTFOUND; END LOOP; CLOSE table_cursor; end; ``` ***Result:*** ROWCOUNT 0
As Justin suggested use bind variables. You'll achieve two things by doing that, you won't need to worry about getting the number of quotes correct, and more importantly, you'll close the door to SQL Injection vulnerabilities. You can do this by changing your open statement to the following: ``` open table_cursor for 'SELECT * FROM REALM_ENTRIES WHERE ID IN (SELECT ID FROM APP_ENTRIES where APP_EXT_CODE =:appcode AND VERSION_NUMBER=:version_number)' using appcode, version_number; ```
You can avoid all the headaches associated to dynamic SQL and explicit cursor manipulation by using this very simplified implicit cursor for loop (Documentation: [Query Result Set Processing With Cursor FOR LOOP Statements](http://docs.oracle.com/database/121/LNPLS/static.htm#CIHCGJAD)): ``` for rec in ( SELECT * FROM APP_REALM_ENTRIES WHERE ID IN (SELECT ID FROM APP_ENTRIES where APP_EXT_CODE = appcode AND VERSION_NUMBER= version_number) ) loop -- read values from 'rec' object. end loop; ``` But for what it's worth, you weren't doubling your single quotes correctly. Examples: ``` ||''|| -- this is appending NULL, not a single quote. ``` You probably meant to do this instead: ``` ||''''|| -- this appends 1 single quote. ``` Also... ``` 'AND VERSION_NUMBER='||version_number -- this is not adding a single quote before appending the version_number value. ``` You probably meant to do this instead: ``` 'AND VERSION_NUMBER='''||version_number ```
Oracle Procedure Escape characters
[ "", "sql", "oracle", "plsql", "" ]
I have a table of orders, ``` Invoice Location Customer Code SalesPersonEmail ------------------------------------------------------ 300001 001 CUS001 ? 300002 006 CUS002 ? ``` And a table of email groups, ``` Role Email ----------------------------------------------------- Filtered_Group Management@gmail.com;john@gmail.com ``` When Location = 001, SalesPersonEmail must be the Email field from Filtered\_Group. SalesPersonEmail for all other locations must be "Orders@gmail.com;" + the Email for Role No\_Filter\_Group. I'm currently using the following to achieve this, ``` SELECT i.Invoice, i.Location, i.[Customer Code], CASE WHEN i.Location = 001 THEN f.Email ELSE N'Orders@gmail.com;' + nf.Email END as SalesPersonEmail FROM Invoice i, RoleCodes f, RoleCodes nf WHERE f.Role = N'Filtered_Group' AND nf.Role = N'No_Filter_Group' ``` My problem is the Role No\_Filter\_Group may not exist in the Role table at times, which causes the above query to return nothing. How do I join these tables properly so if *No\_Filter\_Group* does not exist in the table, rows that have a *SalesPersonEmail* of *Filtered\_Group* are still returned from the query? Thanks
A relatively simple way is to use `LEFT JOIN` and put the special number `001` for your location and special role names `Filtered_Group` and `No_Filter_Group` in the join condition. In this [SQL Fiddle](http://sqlfiddle.com/#!3/3c6a6/2/0) you can comment/uncomment one line in the schema definition to see how it works when `RoleCodes` has a row with `No_Filter_Group` and when it doesn't. In any case, the query would return all rows from `Invoice` table. ``` SELECT Invoice.Invoice ,Invoice.Location ,Invoice.[Customer Code] ,CASE WHEN Invoice.Location = '001' THEN RoleCodes.Email ELSE 'Orders@gmail.com;' + ISNULL(RoleCodes.Email, '') END AS SalesPersonEmail FROM Invoice LEFT JOIN RoleCodes ON (Invoice.Location = '001' AND RoleCodes.Role = 'Filtered_Group') OR (Invoice.Location <> '001' AND RoleCodes.Role = 'No_Filter_Group') ```
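The key point of the answer — putting the role/location conditions in the `ON` clause of a `LEFT JOIN` so the invoice rows survive even when `No_Filter_Group` is missing — can be demonstrated in SQLite from Python. The email addresses are shortened, `||` replaces T-SQL's `+`, and `IFNULL` replaces `ISNULL`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Invoice (Invoice TEXT, Location TEXT);
CREATE TABLE RoleCodes (Role TEXT, Email TEXT);
INSERT INTO Invoice VALUES ('300001', '001'), ('300002', '006');
INSERT INTO RoleCodes VALUES ('Filtered_Group', 'mgmt@x');
-- The No_Filter_Group row is deliberately missing.
""")
rows = conn.execute("""
    SELECT i.Invoice,
      CASE WHEN i.Location = '001' THEN r.Email
           ELSE 'Orders@x;' || IFNULL(r.Email, '') END
    FROM Invoice i
    LEFT JOIN RoleCodes r
      ON (i.Location = '001'  AND r.Role = 'Filtered_Group')
      OR (i.Location <> '001' AND r.Role = 'No_Filter_Group')
    ORDER BY i.Invoice
""").fetchall()
```

Both invoices come back: location 001 gets the filtered-group email, and the other location falls back to the bare orders address despite the missing role row.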
Try something like this. **Note:** This is just an example; I am not sure about the tables and columns of your schema. Replace them with the respective tables and columns. ``` SELECT CASE WHEN location = '001' THEN (SELECT TOP 1 email FROM email_table WHERE [role] = 'Filtered_Group') ELSE 'Orders@gmail.com;' END FROM orders ``` If the email\_table will have only one row for `[role] = 'Filtered_Group'` then you can remove the `TOP 1` from the sub-query
SQL Joining tables with 'constants'
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a table which is a mapping table t1. It looks like the following: ``` +-------+---------+-------------+ | Reqid | FIELDID | LISTITEMID | +-------+---------+-------------+ | 219 | 76 | 3548 | | 219 | 86 | 2382 | | 220 | 76 | 3548 | | 220 | 86 | 3491 | | 221 | 77 | 3550 | | 221 | 87 | 2387 | +-----------------+-------------+ ``` Now what I want to do is select the distinct reqIds that satisfy both of the following ``` select * from t1 where (FIELDID='76' and LISTITEMID='3548') or (FIELDID='77' and LISTITEMID='3550') or ((FIELDID='86' and (LISTITEMID='3491' or LISTITEMID='2380')) or (FIELDID='87' and (LISTITEMID='3494' or LISTITEMID='2386'))) order by REQUIREMENTID ``` This gives me all rows that have any one of the above requirements. But what I want to do is get ``` select * from t1 where ((FIELDID='76' and LISTITEMID='3548') or (FIELDID='77' and LISTITEMID='3550')) and (((FIELDID='86' and (LISTITEMID='3491' or LISTITEMID='2380')) or (FIELDID='87' and (LISTITEMID='3494' or LISTITEMID='2386')))) order by REQUIREMENTID ``` The above obviously doesn't return any rows, but it points to what I want to do. What the query should return is **220**, since that's the only row set that contains both FieldID=76 with ListItemId=3548 and FieldID=86 with ListItemId=3491. Is this easily done in a query, or do I have to write two separate queries and then create 2 arrays and compare which ids occur in both arrays?
You almost had it. What you need to do is select all `Reqid` matching your conditions, and get all rows with that `Reqid`. This can be accomplished with a sub-query. Done using two sub-queries: ``` SELECT * FROM t1 WHERE Reqid in ( SELECT t11.Reqid FROM t1 as t11 WHERE (t11.FIELDID='76' AND t11.LISTITEMID='3548') OR (t11.FIELDID='77' AND t11.LISTITEMID='3550') ) AND Reqid in ( SELECT t11.Reqid FROM t1 as t11 WHERE (t11.FIELDID='86' AND (t11.LISTITEMID='3491' OR t11.LISTITEMID='2380')) OR (t11.FIELDID='87' AND (t11.LISTITEMID='3494' OR t11.LISTITEMID='2386')) ) ORDER BY REQUIREMENTID ``` This can further be translated into a single sub-query using a `JOIN`. ``` SELECT * FROM t1 WHERE Reqid in ( SELECT t11.Reqid FROM t1 as t11 JOIN t1 as t12 on t11.Reqid = t12.Reqid WHERE ((t11.FIELDID='76' AND t11.LISTITEMID='3548') OR (t11.FIELDID='77' AND t11.LISTITEMID='3550')) AND ( (t12.FIELDID='86' AND (t12.LISTITEMID='3491' OR t12.LISTITEMID='2380')) OR (t12.FIELDID='87' AND (t12.LISTITEMID='3494' OR t12.LISTITEMID='2386')) ) ) ORDER BY REQUIREMENTID ```
you could just do an `EXISTS` statement ``` SELECT DISTINCT t.ReqID FROM t1 t WHERE ((t.FIELDID = '76' AND t.LISTITEMID = '3548') OR (t.FIELDID = '77' AND t.LISTITEMID = '3550')) AND EXISTS ( SELECT 1 FROM t1 WHERE t1.Reqid = t.Reqid AND ((t1.FIELDID = '86' AND t1.LISTITEMID IN ('3491','2380')) OR (t1.FIELDID = '87' AND t1.LISTITEMID IN ('3494','2386')))) ```
Get only rows where 2 conditions are fulfilled in Microsoft SQL
[ "", "sql", "sql-server-2008", "" ]
I have a table with customers, with an arrival date and departure date in the format '2014-25-11 00:00:00.000'. I am creating a query to select those customers, but I am getting an error > Conversion failed when converting date and/or time from character string. But I want to match only the date `2014-25-11` SQL query used ``` select FirstName, LastName from customer where ArrivalDate > '%2014-25-11 00:00:00.000%' and DepartureDate < '%2014-29-11 00:00:00.000%' ``` Can anyone help me write that query?
> Conversion failed when converting date and/or time from character string. Delete the '%' characters from your values. Try this one: ``` select FirstName, LastName from customer where ArrivalDate > CAST('2014-25-11' AS DATE) and DepartureDate < CAST('2014-29-11' AS DATE) ```
if you only want to find the ones on the day 2014-25-11 (the 25th of November) you could use this ``` select FirstName, LastName from customer where DATEPART(YEAR,ArrivalDate) = 2014 and DATEPART(month,ArrivalDate) = 11 and DATEPART(day,ArrivalDate) = 25 ```
How to find tables in database with date range SQL?
[ "", "sql", "sql-server", "" ]
Say I have this table: (column: `Row` is a count based on the column `ID`) ``` ID | Row | State | 1 | 1 | CA | 1 | 2 | AK | 2 | 1 | KY | 2 | 2 | GA | 2 | 3 | FL | 3 | 1 | WY | 3 | 2 | HI | 3 | 3 | NY | 3 | 4 | DC | 4 | 1 | RI | ``` I'd like to generate a new column that would have the highest number in the `Row` column grouped by the `ID` column for each row. How would I accomplish this? I've been messing around with MAX(), GROUP BY, and some partitioning but I'm getting different errors each time. It's difficult to finesse this correctly. Here's my target output: ``` ID | Row | State | MaxRow 1 | 1 | CA | 2 1 | 2 | AK | 2 2 | 1 | KY | 3 2 | 2 | GA | 3 2 | 3 | FL | 3 3 | 1 | WY | 4 3 | 2 | HI | 4 3 | 3 | NY | 4 3 | 4 | DC | 4 4 | 1 | RI | 1 ```
Use window version of `MAX`: ``` SELECT ID, Row, State, MAX(Row) OVER (PARTITION BY ID) AS MaxRow FROM mytable ``` [**Demo here**](http://sqlfiddle.com/#!3/4b5c8/1)
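The windowed `MAX(Row) OVER (PARTITION BY ID)` runs as-is in SQLite 3.25+, so the answer above can be checked from Python with a subset of the question's rows (`"Row"` is double-quoted here since it reads like a keyword):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (ID INTEGER, "Row" INTEGER, State TEXT);
INSERT INTO t VALUES
 (1,1,'CA'),(1,2,'AK'),
 (2,1,'KY'),(2,2,'GA'),(2,3,'FL'),
 (4,1,'RI');
""")
# Every row of a partition receives that partition's maximum Row value.
rows = conn.execute("""
    SELECT ID, "Row", State, MAX("Row") OVER (PARTITION BY ID) AS MaxRow
    FROM t ORDER BY ID, "Row"
""").fetchall()
maxes = {r[0]: r[3] for r in rows}
```

Each ID's rows all carry the same MaxRow, exactly as in the question's target output.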
You could join between a query on the table and an aggregate table: ``` SELECT t.*, max_row FROM t JOIN (SELECT id, MAX([row]) AS max_row FROM t GROUP BY id) agg ON t.id = agg.id ```
How to get the max row count grouped by the ID in sql
[ "", "sql", "sql-server", "select", "filter", "group-by", "" ]
I have a MySQL database named shop and a table named orders with these columns: ``` id (INT AUTO_INCREMENT PRIMARY KEY) value (INT(30)) order_date (DATE) ``` I want to create an SQL query to select only the orders placed Friday and Monday with a value larger than 100.
Use [DAYOFWEEK](https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_dayofweek) ``` select * from orders where value > 100 and DAYOFWEEK(order_date) in (2,6) ```
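The same weekday filter can be exercised in SQLite from Python. Note the numbering differs from MySQL: SQLite's `strftime('%w', …)` returns '0' (Sunday) through '6' (Saturday), so Monday is '1' and Friday is '5', while MySQL's `DAYOFWEEK` uses 2 and 6. The sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, value INTEGER, order_date TEXT);
INSERT INTO orders VALUES
 (1, 150, '2015-07-06'),  -- a Monday
 (2, 150, '2015-07-10'),  -- a Friday
 (3, 150, '2015-07-07'),  -- a Tuesday: wrong weekday
 (4,  50, '2015-07-06');  -- a Monday, but value too low
""")
rows = conn.execute("""
    SELECT id FROM orders
    WHERE value > 100 AND strftime('%w', order_date) IN ('1', '5')
    ORDER BY id
""").fetchall()
ids = [r[0] for r in rows]
```

Only the high-value Monday and Friday orders survive both conditions.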
Try this ``` select * from table where dayname(order_date) in ('friday','monday') and value>100 ```
Sql query day check
[ "", "mysql", "sql", "" ]
I have the following function which returns a bit: ``` Declare @Ret bit SET @Ret = 0 IF EXISTS ( Select * from tblExclusion where StatusID = 1 and AccountID = @AccountID ) Begin SET @Ret = 1 End Return @Ret ``` Now there can be multiple entries for the same AccountID in the table or none at all but only one entry will ever have a "1" status if it exists. I have to be honest I'm not very knowledgeable when it comes to SQL but when called the function seems to take a long time to return. I'm wondering if there is a more efficient way of writing the above. Thanks in advance.
An index may be necessary; reviewing a [sample execution plan](https://technet.microsoft.com/en-us/library/ms178071(v=sql.105).aspx) will reveal which index would help. If you were to modify your query to: ``` Declare @Ret bit SET @Ret = 0 IF EXISTS ( Select 1 from tblExclusion where StatusID = 1 and AccountID = @AccountID ) Begin SET @Ret = 1 End Return @Ret ``` A `NONCLUSTERED INDEX` would be of the format: ``` USE [DatabaseName] GO CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>] ON [dbo].[tblExclusion] ([StatusID],[AccountID]) <optional, INCLUDE ([columns within the select,]) > GO ``` Types of indexes and how to create them: [Create Index](https://msdn.microsoft.com/en-ca/library/ms188783.aspx)
If it takes a long time to run, then I would suspect that there is no index on the column "AccountID". Adding an index on that column will probably significantly improve performance. However, without knowing how tblExclusion is defined, there is no way to be certain of this answer. Also, adding an index to StatusID will help as well, assuming there are a large number of entries for different StatusIDs. Also, since you only need to test the existence of the record, you don't need to select every column in tblExclusion. You could change "\*" to "1" or something, though this will not improve performance significantly.
SQL Server Function Efficiency If Exists
[ "", "sql", "performance", "sql-server-2008", "function", "ssms", "" ]
I have two tables: one for saving user session values and a second to save the user session visit log. I save the session value when a user visits for the first time; after that, I only save that session's visits in the visit log table with date & time. Now I need to get those session records that have not been visited in the last two months. **usersessionlog** ``` sessionid sessionval1 sessionval2 ``` **usersessionvisitlog** ``` visitid sessionid visitdatetime ``` How can I get those records using a MySQL query?
Use NOT EXISTS ``` select t1.sessionid from usersessionlog as t1 where not exists (select * from usersessionvisitlog as t2 where t1.sessionid =t2.sessionid and visitdatetime>=date_add(current_date,interval -2 month)) ```
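The `NOT EXISTS` anti-join above can be demonstrated in SQLite from Python; SQLite has no `date_add`, so the sketch swaps in `date('now','-2 months')`, and the two visit rows are generated relative to the current date so one is recent and one is stale:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usersessionlog (sessionid INTEGER);
CREATE TABLE usersessionvisitlog (sessionid INTEGER, visitdatetime TEXT);
INSERT INTO usersessionlog VALUES (1), (2);
""")
recent = (datetime.now() - timedelta(days=10)).strftime('%Y-%m-%d')
old = (datetime.now() - timedelta(days=120)).strftime('%Y-%m-%d')
conn.execute("INSERT INTO usersessionvisitlog VALUES (1, ?)", (recent,))
conn.execute("INSERT INTO usersessionvisitlog VALUES (2, ?)", (old,))
# Sessions with no visit in the last two months.
rows = conn.execute("""
    SELECT s.sessionid FROM usersessionlog s
    WHERE NOT EXISTS (
        SELECT 1 FROM usersessionvisitlog v
        WHERE v.sessionid = s.sessionid
          AND v.visitdatetime >= date('now', '-2 months'))
""").fetchall()
stale = [r[0] for r in rows]
```

Only the session whose last visit is older than two months is returned.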
Using left join: ``` select sessionid from usersessionlog sl left join usersessionvisitlog vl on sl.sessionid = vl.sessionid and visitdatetime > now() - interval 2 month where vl.sessionid is null ``` Usually in MySQL `join` works faster than `in` clause.
How to get user records that have not visited in the last two months using MySQL
[ "mysql", "sql" ]
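The two answers above can be sanity-checked end to end. Here is a minimal runnable sketch using SQLite through Python's `sqlite3` (the sample rows are invented, and SQLite's `datetime` modifiers stand in for MySQL's `DATE_ADD`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE usersessionlog (sessionid INTEGER PRIMARY KEY, sessionval1 TEXT);
CREATE TABLE usersessionvisitlog (visitid INTEGER PRIMARY KEY,
                                  sessionid INTEGER, visitdatetime TEXT);
INSERT INTO usersessionlog VALUES (1, 'a'), (2, 'b'), (3, 'c');
-- session 1 visited yesterday, session 2 three months ago, session 3 never
INSERT INTO usersessionvisitlog VALUES
    (10, 1, datetime('now', '-1 day')),
    (11, 2, datetime('now', '-90 days'));
""")

# NOT EXISTS form from the accepted answer
stale = cur.execute("""
    SELECT s.sessionid FROM usersessionlog s
    WHERE NOT EXISTS (
        SELECT 1 FROM usersessionvisitlog v
        WHERE v.sessionid = s.sessionid
          AND v.visitdatetime >= datetime('now', '-2 months')
    )
    ORDER BY s.sessionid
""").fetchall()
print(stale)  # [(2,), (3,)]
```

The `LEFT JOIN ... IS NULL` form from the second answer returns the same two sessions on this data.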
I need to implement Categorization and Sub-Categorization on something which is a bit similar to golden pages. Assume I have the following table: # Category Table ``` CategoryId, Title 10, Home 20, Business 30, Hobbies ``` I have two options to code the sub-categorization. # OPTION 1 - Subcategory Id is unique within Category ONLY: # Sub Category Table ``` CategoryId, SubCategoryId, Title 10, 100, Gardening 10, 110, Kitchen 10, 120, ... 20, 100, Development 20, 110, Marketing 20, 120, ... 30, 100, Soccer 30, 110, Reading 30, 120, ... ``` # OPTION 2 - Subcategory Id is unique OVERALL: # Sub Category Table ``` CategoryId, SubCategoryId, Title 10, 100, Gardening 10, 110, Kitchen 10, 120, ... 20, 130, Development 20, 140, Marketing 20, 150, ... 30, 160, Soccer 30, 170, Reading 30, 180, ... ``` Option 2 sounds like it makes it easier to fetch rows from the table. For example: `SELECT BizTitle FROM tblBiz WHERE SubCatId = 170` whereas using Option 1 I'd have to write something like this: ``` SELECT BizTitle FROM tblBiz WHERE CatId = 30 AND SubCatId = 110 ``` i.e., containing an extra `AND`. However, Option 1 is easier to maintain manually (when I need to update and insert new subcategories etc.) and it is more pleasant to the eye, in my opinion. Any thoughts about it? Is Option 2 worth the trouble in terms of efficiency? Are there any design patterns related to this common issue?
I would use this structure: ``` ParentId, CategoryId, Title null, 1, Home null, 2, Business null, 3, Hobbies 1, 4, Gardening 1, 5, Kitchen 1, 6, ... 2, 7, Development 2, 8, Marketing 2, 9, ... 3, 10, Soccer 3, 11, Reading 3, 12, ... ``` In detail: * only use one table, which **references itself**, so that you can have unlimited depth of categories * use **technical ids** (using `IDENTITY`, or similar), so that you can have more than 10 subcategories * if required, add a **human readable** column for category numbers as a **separate field** As long as you are only using two levels of categories you can still select like this: ``` SELECT BizTitle FROM tblBiz WHERE ParentId = 3 AND CategoryId = 11 ``` --- The new `hierarchyid` feature of SQL Server also looks quite promising: <https://msdn.microsoft.com/en-us/library/bb677173.aspx> --- What I don't like about the **Nested Set Model**: * **Inserting and deleting** items in the *Nested Set Model* is a quite complicated thing and requires expensive locks. * One can easily create **inconsistencies**, which is prohibited if you use the `parent` field in combination with a foreign key constraint. + Inconsistencies can appear, if `rght` is **lower** than `lft` + Inconsistencies can appear, if a value **appears in several** `rght` or `lft` fields + Inconsistencies can appear, if you create **gaps** + Inconsistencies can appear, if you create **overlaps** * The *Nested Set Model* is in my opinion more *complex* and therefore not as easy to understand. This is absolutely subjective, of course. * The *Nested Set Model* requires two fields, instead of one - and so uses more disk space.
Managing hierarchical data has some ways. One of the most important ones is `Nested Set Model`. [See here](http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/) for implementation. Even some content management systems like Joomla, use this structure. **Update 2020:** As there are some considerations on this post, I should say that now I prefer the Adjacency List Model instead of the Nested Set Model since there is less complexity in this way. Also [See here](https://www.mysqltutorial.org/mysql-adjacency-list-tree/) for implementation.
database design - categories and sub-categories
[ "sql", "database" ]
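To make the accepted answer's single self-referencing table concrete, here is a small sketch via Python's `sqlite3` (the sample categories and ids are invented); a recursive CTE shows how the design handles arbitrary depth:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE category (
    CategoryId INTEGER PRIMARY KEY,
    ParentId   INTEGER REFERENCES category(CategoryId),  -- self-reference
    Title      TEXT
);
INSERT INTO category VALUES
    (1, NULL, 'Home'), (2, NULL, 'Business'), (3, NULL, 'Hobbies'),
    (4, 1, 'Gardening'), (5, 1, 'Kitchen'),
    (10, 3, 'Soccer'), (11, 3, 'Reading');
""")

# Walk the whole 'Hobbies' subtree, whatever its depth
tree = cur.execute("""
    WITH RECURSIVE tree(id, title) AS (
        SELECT CategoryId, Title FROM category WHERE Title = 'Hobbies'
        UNION ALL
        SELECT c.CategoryId, c.Title
        FROM category c JOIN tree t ON c.ParentId = t.id
    )
    SELECT title FROM tree ORDER BY id
""").fetchall()
print(tree)  # [('Hobbies',), ('Soccer',), ('Reading',)]
```

With only two levels, the plain `ParentId = 3 AND CategoryId = 11` filter from the answer works just as well; the CTE is what buys unlimited depth.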
How do I add a calculated column to an Access table in SQL? I know I can add a column with SQL like this: ``` ALTER TABLE Clients ADD COLUMN AccountDate TEXT(60) ``` Thanks, Vítor
You cannot add a calculated column with SQL because a calculated field requires an expression, and that cannot be supplied through SQL. Technically, a calculated field is a base type - int, double, text etc. On top of the base type is an expression that helps Access do the math/logic. **You could use VBA to create a calculated column** ``` ' Create a module and let's assume that ' your table has Field1 integer and Field2 integer; ' we will add field3 Public Sub CreateField() Dim DB As DAO.Database Dim TableDef As DAO.TableDef Dim Fld As DAO.Field2 Set DB = CurrentDb() Set TableDef = DB.TableDefs("Table1") Set Fld = TableDef.CreateField("field3", dbDouble) Fld.Expression = "[field1] * [field2]" TableDef.Fields.Append Fld MsgBox "Added" End Sub ``` As Gordon and BJones mentioned, you could create a view or saved query with the relevant calculation.
I don't think MS Access supports computed columns in SQL DDL. Instead, you can create a view: ``` create view v_clients as select c.*, (col1 + col2) as col3 from clients c; ```
How to add a calculated column to Access via SQL
[ "sql", "ms-access", "vba", "ddl" ]
Below is my sample data. Rows 3 and 4 have the same st\_case (the primary key), but their dist\_min values are different. I want to keep the row with the minimum dist\_min value. Please note that there could be more than 2 duplicate rows associated with the same st\_case. Thank you so much for the help! [![data](https://i.stack.imgur.com/1S8Gm.png)](https://i.stack.imgur.com/1S8Gm.png)
In MySQL, you can do this with a multi-table `delete` and an inner `join` (a left join would mark every row for deletion, since all rows of `s` survive a left join): ``` delete s from sample s join (select st_case, min(dist_min) as mindm from sample group by st_case ) ss on ss.st_case = s.st_case and s.dist_min > ss.mindm; ```
You can try this one: ``` DELETE t1 FROM sample AS t1 LEFT JOIN sample t2 ON t1.st_case = t2.st_case WHERE t1.dist_min > t2.dist_min ```
sql: keep only the minimum value if two rows have duplicate id
[ "mysql", "sql", "database", "duplicate-data" ]
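The `DELETE ... JOIN` form above is MySQL-specific; the same keep-the-minimum rule can be expressed as a correlated subquery that works in most engines. A runnable sketch via Python's `sqlite3`, with invented sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE sample (st_case INTEGER, dist_min REAL);
INSERT INTO sample VALUES (100, 5.0), (100, 2.5), (100, 9.1), (200, 1.0);
""")

# Keep only the minimum dist_min per st_case (SQLite has no DELETE..JOIN,
# so a correlated subquery replaces the MySQL join form)
cur.execute("""
    DELETE FROM sample
    WHERE dist_min > (SELECT MIN(dist_min) FROM sample s2
                      WHERE s2.st_case = sample.st_case)
""")
rows = cur.execute("SELECT * FROM sample ORDER BY st_case").fetchall()
print(rows)  # [(100, 2.5), (200, 1.0)]
```

If two rows tie on the minimum `dist_min`, both survive this delete; add a tiebreaker on another column if exactly one row must remain.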
I am trying to exclude a list of names. I have the table `Names` ``` id| Name | Surname --------------------- 1 | Michael | Kane 2 | Torben | Dane 3 | Dinge | Chain 4 | Django | Fain 5 | Juliett | Bravo ``` And I have the table `Excludes` ``` id| Name ----------- 1 | Michael 2 | Torben ``` Now I have two queries: ``` SELECT * From Names, Excludes WHERE Names.Name = Excludes.Name GROUP BY Names.Name ``` which results in ``` id | Name | Surname | id | Name -------------------------------- 1 | Michael | Kane | 1 |Michael 2 | Torben | Dane | 2 |Torben ``` Now I want to do the exact opposite with `!=` to serve the actual purpose and erase the rows which have the names `Michael` and `Torben` in them. The second query is: ``` SELECT * From Names, Excludes WHERE Names.Name != Excludes.Name GROUP BY Names.Name ``` The result is ``` id | Name | Surname | id | Name -------------------------------- 3 | Dinge | Chain | 2 |Torben 4 | Django | Fain | 2 |Torben 5 | Juliett | Bravo | 2 |Torben 1 | Michael | Kane | 2 |Torben 2 | Torben | Dane | 1 |Michael ``` The result I would want to have is ``` id| Name | Surname --------------------- 3 | Dinge | Chain 4 | Django | Fain 5 | Juliett | Bravo ``` What am I doing wrong?
If you are going to use a `join` (as opposed to `not in` or `not exists`), you want a `left join` and `where` clause: ``` SELECT n.* From Names n LEFT JOIN Excludes e ON n.Name = e.Name WHERE e.Name IS NULL; ``` A simple rule: *Never* use commas in the `FROM` clause.
Use a `left join` instead and filter out `null` values: ``` SELECT Names.* FROM Names LEFT JOIN Excludes ON Names.Name = Excludes.Name WHERE Excludes.Name IS NULL ``` The `group by` clause seemed meaningless so I removed it. Another option is to use the `not exists` predicate with a correlated subquery: ``` SELECT * FROM Names n WHERE NOT EXISTS (SELECT Name FROM Excludes e WHERE e.Name = n.Name) ```
Why is a WHERE clause to exclude rows in SQLite not working as expected?
[ "sql", "sqlite" ]
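The anti-join from the accepted answer is easy to verify; here is a runnable sketch via Python's `sqlite3`, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Names (id INTEGER, Name TEXT, Surname TEXT);
CREATE TABLE Excludes (id INTEGER, Name TEXT);
INSERT INTO Names VALUES (1,'Michael','Kane'), (2,'Torben','Dane'),
    (3,'Dinge','Chain'), (4,'Django','Fain'), (5,'Juliett','Bravo');
INSERT INTO Excludes VALUES (1,'Michael'), (2,'Torben');
""")

# LEFT JOIN + IS NULL: keep only names with no match in Excludes
kept = cur.execute("""
    SELECT n.id, n.Name, n.Surname
    FROM Names n LEFT JOIN Excludes e ON n.Name = e.Name
    WHERE e.Name IS NULL
    ORDER BY n.id
""").fetchall()
print(kept)  # [(3, 'Dinge', 'Chain'), (4, 'Django', 'Fain'), (5, 'Juliett', 'Bravo')]
```

The `NOT EXISTS` variant from the second answer produces the same three rows.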
I have a `KPI_LOG` table which contains 600,000 records, and the table is growing all the time. There is a `TIME_STAMP` column in this table which specifies when the record was created, and I have to delete records older than 2 days from this table. I am using the query below to perform the deletion: `delete from KPI_LOG where SYSDATE - TIMESTAMP > 2;` But as the number of records is too large, it takes a lot of time if I simply use a Delete query in this case. I am looking for a faster deletion method but haven't found any solution.
> delete from KPI\_LOG where SYSDATE - TIMESTAMP > 2; If you are deleting more rows than you are keeping in the table, then you could do a **CTAS**, i.e. `create table as select`, and then drop the old table and rename the new table. Make sure you have an **index** on the `timestamp` column. For example, ``` CREATE INDEX tmstmp_indx ON KPI_LOG(TIMESTAMP ) / CREATE TABLE KPI_LOG_NEW AS SELECT * FROM KPI_LOG WHERE TIMESTAMP > SYSDATE -2 / DROP TABLE KPI_LOG / ALTER TABLE KPI_LOG_NEW RENAME TO KPI_LOG / ``` Make sure you create all the necessary indexes and constraints on the new table. Deleting rows doesn't reset the **HIGH WATERMARK**; by doing **CTAS** you get a fresh new table. Therefore, you don't have to scan all those rows below the high watermark, which you would have to do in case of deletion.
Deleting a record requires a transaction log record. This is integral to consistency. This also means that deleting records is far from cheap - in a case like this, when a single statement deletes hundreds of thousands of rows, each of those rows needs to be written to the transaction log first. There is no way around this when you don't want to delete *all* the records. If possible, you could simply call the delete command more often - the total time taken while deleting will not change much, but each command will take much shorter time (as long as you have proper indices).
Deleting large records in oracle sql
[ "sql", "oracle", "sql-delete", "delete-row", "large-data" ]
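The copy-drop-rename idea from the accepted answer is not Oracle-only; the same swap can be sketched on SQLite via Python's `sqlite3` (timestamps stored as ISO text, sample rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE kpi_log (id INTEGER, ts TEXT);
INSERT INTO kpi_log VALUES
    (1, datetime('now', '-10 days')),
    (2, datetime('now', '-1 day')),
    (3, datetime('now'));
-- copy only the rows worth keeping, then swap the tables
CREATE TABLE kpi_log_new AS
    SELECT * FROM kpi_log WHERE ts > datetime('now', '-2 days');
DROP TABLE kpi_log;
ALTER TABLE kpi_log_new RENAME TO kpi_log;
""")
recent = cur.execute("SELECT id FROM kpi_log ORDER BY id").fetchall()
print(recent)  # [(2,), (3,)]
```

As the answer notes, `CREATE TABLE ... AS SELECT` copies no indexes or constraints, so those must be recreated on the new table afterwards.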
How can I combine the following two triggers, one 'after insert' and the other 'after delete', into one trigger? ``` create trigger TechStaff_update_Studio on TechStaff after insert as begin update Studio set employees = employees + 1 where studioID = Studio.studioID end create trigger TechStaff_update_Studio2 on TechStaff after delete as begin update Studio set employees = employees - 1 where studioID = Studio.studioID end ```
Like this, I guess: ``` create trigger TechStaff_update_Studio on TechStaff after insert, delete as begin update s set employees = employees + (select count(*) from inserted) - (select count(*) from deleted) from Studio s end ``` Notice that you must actually count rows rather than assuming one row inserted/deleted per trigger execution (e.g. `delete from TechStaff` would fire the trigger **once**). See MSDN on [the `inserted` and `deleted` pseudo-tables](https://msdn.microsoft.com/en-nz/library/ms191300.aspx). And your where clause was always true (`studioID = studioID`), and therefore not needed. I assume you did mean to update every row of the Studio table (is there only one row?). *Edit:* Since you say there are multiple `Studio` rows, you will need to be more clever. You have to get the count of deleted and inserted rows by StudioID. The following could probably be simplified, but I lack the time to make it shorter. ``` create trigger TechStaff_update_Studio on TechStaff after insert, delete as begin update s set employees = employees + c.Delta from Studio s join ( select StudioID, sum(i) as Delta from ( select StudioID, 1 as i from inserted union all select StudioID, -1 as i from deleted ) counts group by StudioID ) c on c.StudioID = s.StudioID end ```
Simply, you could handle it in one trigger like this: ``` CREATE TRIGGER [dbo].[<TriggerName>] ON [<SchemaName>].[<TableName>] FOR INSERT, DELETE AS ........-- Your Code ``` And remember that inside the trigger you can use two tables, `Deleted` and `Inserted`, which have the same structure as your table and hold the rows that were inserted or deleted. For more information see this link: [Use the inserted and deleted Tables](https://msdn.microsoft.com/en-us/library/ms191300.aspx)
Insert and delete in one trigger
[ "sql", "sql-server", "t-sql", "triggers" ]
Full disclosure, I'm a noob at SQL. Given two sparse matrices A and B, defined as: A(row\_number, column\_number, value) and B(row\_number, column\_number, value) I don't understand how this query represents the multiplication of the two matrices: ``` SELECT A.row_number, B.column_number, SUM(A.value * B.value) FROM A, B WHERE A.column_number = B.row_number GROUP BY A.row_number, B.column_number ``` My confusion lies in the SUM syntax and the GROUP BY / SELECT syntax. As for my GROUP BY / SELECT confusion, I don't understand why the expressions A.row\_number and B.column\_number are necessary after the SELECT statement. Why do we have to specify that when we're already using SELECT and WHERE? To me that seems like we're saying we want to SELECT using those expressions (A.row\_number and B.column\_number) even though we're given back a table from WHERE already. Would it not make more sense to just say SELECT \*? I'm assuming that GROUP BY just requires you to type out the expressions it uses in the SELECT statement, but I don't know for sure. For the SUM, I just want to clarify: the SUM is only using the A.value and the B.value from whatever is returned by the WHERE, correct? Otherwise, you would be multiplying all A.value with all B.value. Clarifying either of these would be immensely helpful. Thank you!
``` create table A ( column_number int, row_number int, value int ); create table B ( column_number int, row_number int, value int ); insert A (column_number,row_number,value) values (1,1,1),(1,2,2),(2,1,3),(2,2,4); insert B (column_number,row_number,value) values (1,1,10),(1,2,20),(2,1,30),(2,2,40); ``` Data with your old-style (non-explicit) join **without** aggregate or group by: ``` SELECT A.row_number as Ar, B.column_number as Bc, A.value as Av,B.value as Bv,A.value*B.value as product FROM A, B WHERE A.column_number = B.row_number +------+------+------+------+---------+ | Ar | Bc | Av | Bv | product | +------+------+------+------+---------+ | 1 | 1 | 1 | 10 | 10 | | 2 | 1 | 2 | 10 | 20 | | 1 | 1 | 3 | 20 | 60 | | 2 | 1 | 4 | 20 | 80 | | 1 | 2 | 1 | 30 | 30 | | 2 | 2 | 2 | 30 | 60 | | 1 | 2 | 3 | 40 | 120 | | 2 | 2 | 4 | 40 | 160 | +------+------+------+------+---------+ ``` Seeing the above, the query below gains a little more clarity: ``` SELECT A.row_number, B.column_number,sum(A.value * B.value) as theSum FROM A, B WHERE A.column_number = B.row_number GROUP BY A.row_number, B.column_number +------------+---------------+--------+ | row_number | column_number | theSum | +------------+---------------+--------+ | 1 | 1 | 70 | | 1 | 2 | 150 | | 2 | 1 | 100 | | 2 | 2 | 220 | +------------+---------------+--------+ ```
1. Giving table name after `SELECT` will identify which table to refer to. Mainly useful in the case where both tables have same column names. 2. `GROUP BY` will aggregate the data and display one record per grouped-by value. That is, in your case, you'll end up with only one record per row-column combination.
How does this matrix multiply work in SQL?
[ "mysql", "sql", "matrix", "group-by", "sum" ]
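The query from the question can be checked against a hand-computed product. A sketch via Python's `sqlite3`, using the dense 2x2 matrices [[1,2],[3,4]] and [[5,6],[7,8]], whose product is [[19,22],[43,50]]:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE A (row_number INTEGER, column_number INTEGER, value INTEGER);
CREATE TABLE B (row_number INTEGER, column_number INTEGER, value INTEGER);
-- A = [[1, 2], [3, 4]],  B = [[5, 6], [7, 8]]
INSERT INTO A VALUES (1,1,1), (1,2,2), (2,1,3), (2,2,4);
INSERT INTO B VALUES (1,1,5), (1,2,6), (2,1,7), (2,2,8);
""")

# The WHERE pairs each A(i,k) with each B(k,j); GROUP BY collects the
# pairs belonging to one output cell (i,j); SUM adds their products.
product = cur.execute("""
    SELECT A.row_number, B.column_number, SUM(A.value * B.value)
    FROM A JOIN B ON A.column_number = B.row_number
    GROUP BY A.row_number, B.column_number
    ORDER BY A.row_number, B.column_number
""").fetchall()
print(product)  # [(1, 1, 19), (1, 2, 22), (2, 1, 43), (2, 2, 50)]
```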
Can I use any special character as an alias name for my table column? For example: `select id as #,first_name,last_name from student;`
You would have to use [a quoted identifier](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements008.htm#SQLRF00223): ``` select id as "#",first_name,last_name from student ``` You are allowed a # in an unquoted object name (which includes aliases), from object naming rule 7: > Nonquoted identifiers can contain only alphanumeric characters from your database character set and the underscore (\_), dollar sign ($), and pound sign (#). Database links can also contain periods (.) and "at" signs (@). Oracle strongly discourages you from using $ and # in nonquoted identifiers. > > Quoted identifiers can contain any characters and punctuations marks as well as spaces. However, neither quoted nor nonquoted identifiers can contain double quotation marks or the null character (\0). But not as a single character name, because of rule 6: > Nonquoted identifiers must begin with an alphabetic character from your database character set. Quoted identifiers can begin with any character.
You could use a **quoted identifier**, i.e. **double quotation marks**, around the alias. From the [docs](http://docs.oracle.com/cd/E11882_01/server.112/e41084/sql_elements008.htm#SQLRF51129), > Database Object Naming Rules > > Every database object has a name. In a SQL statement, you represent > the name of an object with a quoted identifier or a nonquoted > identifier. > > * A quoted identifier begins and ends with double quotation marks ("). If you name a schema object using a quoted identifier, then you > must use the double quotation marks whenever you refer to that object. > * A nonquoted identifier is not surrounded by any punctuation. For example, ``` SQL> SELECT empno as "#" FROM emp WHERE ROWNUM <=5; # ---------- 7369 7499 7521 7566 7654 SQL> ``` Alternatively, in **SQL\*Plus** you could use the **HEADING** command. For example, ``` SQL> column empno heading # SQL> SELECT empno FROM emp WHERE ROWNUM <=5; # ---------- 7369 7499 7521 7566 7654 SQL> ```
Sql: Using special characters as alias for table columns?
[ "sql", "oracle11g" ]
I am having issues with a SQL query that ideally should return all the comments to a thread in a forum. Right now i'm having the following query: ``` SELECT p.*, 'BBCode' AS Format, FROM_UNIXTIME(TIME) AS DateInserted, FROM_UNIXTIME(editTime) AS DateUpdated FROM et_post p LEFT JOIN et_conversation c ON c.conversationId = p.conversationId WHERE c.private = 0 AND p.postId NOT IN ( SELECT p.postId FROM et_conversation c LEFT JOIN et_post p ON p.conversationId = c.conversationId WHERE c.private = 0 GROUP BY p.conversationId ORDER BY p.TIME ) ``` This, however, returns 0 rows. I expect it to return around 8800 rows. **If I run the first part alone:** ``` SELECT p.*, 'BBCode' AS Format, FROM_UNIXTIME(TIME) AS DateInserted, FROM_UNIXTIME(editTime) AS DateUpdated FROM et_post p LEFT JOIN et_conversation c ON c.conversationId = p.conversationId WHERE c.private = 0 ``` Output: ``` # postId, conversationId, memberId, time, editMemberId, editTime, deleteMemberId, deleteTime, title, content, attributes, Format, DateInserted, DateUpdated '12', '5', '1', '1436600657', NULL, NULL, NULL, NULL, '', 'Content1', ?, 'BBCode', '2015-07-11 09:44:17', NULL '13', '5', '1', '1436600681', NULL, NULL, NULL, NULL, 'Testing area', 'Content2', ?, 'BBCode', '2015-07-11 09:44:41', NULL '14', '5', '1', '1436600698', NULL, NULL, NULL, NULL, 'Testing area', 'Content 3', ?, 'BBCode', '2015-07-11 09:44:58', NULL '15', '5', '19', '1436602065', NULL, NULL, NULL, NULL, 'Testing area', 'More content', ?, 'BBCode', '2015-07-11 10:07:45', NULL '16', '5', '19', '1436602093', NULL, NULL, NULL, NULL, 'Testing area', 'Even more content', ?, 'BBCode', '2015-07-11 10:08:13', NULL '17', '5', '1', '1436602137', NULL, NULL, NULL, NULL, 'Testing area', 'Will it ever stop?', ?, 'BBCode', '2015-07-11 10:08:57', NULL '54', '5', '1', '1436617274', NULL, NULL, NULL, NULL, 'Testing area', 'Ah, final one..', ?, 'BBCode', '2015-07-11 14:21:14', NULL ``` It returns 9304 rows like the above which sounds right. 
**Running the subquery alone:** ``` SELECT p.postId FROM et_conversation c LEFT JOIN et_post p ON p.conversationId = c.conversationId WHERE c.private = 0 GROUP BY p.conversationId ORDER BY p.TIME ``` Output: ``` # postId '12' '18' '19' '44' '70' '73' '75' ``` And it gives me 412 rows like the above which also sounds right. Ideally, my output of the final query should look like this: ``` # postId, conversationId, memberId, time, editMemberId, editTime, deleteMemberId, deleteTime, title, content, attributes, Format, DateInserted, DateUpdated '13', '5', '1', '1436600681', NULL, NULL, NULL, NULL, 'Testing area', 'Content2', ?, 'BBCode', '2015-07-11 09:44:41', NULL '14', '5', '1', '1436600698', NULL, NULL, NULL, NULL, 'Testing area', 'Content 3', ?, 'BBCode', '2015-07-11 09:44:58', NULL '15', '5', '19', '1436602065', NULL, NULL, NULL, NULL, 'Testing area', 'More content', ?, 'BBCode', '2015-07-11 10:07:45', NULL '16', '5', '19', '1436602093', NULL, NULL, NULL, NULL, 'Testing area', 'Even more content', ?, 'BBCode', '2015-07-11 10:08:13', NULL '17', '5', '1', '1436602137', NULL, NULL, NULL, NULL, 'Testing area', 'Will it ever stop?', ?, 'BBCode', '2015-07-11 10:08:57', NULL '54', '5', '1', '1436617274', NULL, NULL, NULL, NULL, 'Testing area', 'Ah, final one..', ?, 'BBCode', '2015-07-11 14:21:14', NULL ``` (Notice postId 12 is gone) [EDIT] From some quick head calculations I came up to the fact that the following query sounds right according to the number of rows returned: ``` SELECT p.*, 'BBCode' AS Format, FROM_UNIXTIME(TIME) AS DateInserted, FROM_UNIXTIME(editTime) AS DateUpdated FROM et_post p INNER JOIN et_conversation c ON c.conversationId = p.conversationId WHERE c.private = 1 AND p.postId NOT IN ( SELECT DISTINCT po.conversationId FROM et_post po ); ``` [EDIT2] Now with an [sqlfiddle](http://sqlfiddle.com/#!9/e6f16/1) Basically, I want the rows with id 12, 15 and 18 to be gone since they are the original posts created by the one who started the conversation. 
[EDIT3] Now with an [updated sqlfiddle](http://sqlfiddle.com/#!9/ab144) * I dug some more into the database and figured out the first sqlfiddle wasn't 100% correct regarding how the data is in the database - therefore this updated version.
Based on the provided SQLFiddle in the edited question, this works. ``` SELECT p.*, 'BBCode' AS Format, FROM_UNIXTIME(TIME) AS DateInserted, FROM_UNIXTIME(editTime) AS DateUpdated FROM et_post p INNER JOIN et_conversation c ON c.conversationId = p.conversationId and c.private = 0 join ( select conversationId,min(postId) as m from et_post group by conversationId ) r on r.conversationId = c.conversationId where p.postId<>r.m ``` **12,15,18 disappear** as requested in your edit ... so too does `NOT IN` madness
The issue here appears very simple. Let's break down your statement: ``` SELECT p.*, 'BBCode' AS Format, FROM_UNIXTIME(TIME) AS DateInserted, FROM_UNIXTIME(editTime) AS DateUpdated FROM et_post p LEFT JOIN et_conversation c ON c.conversationId = p.conversationId WHERE c.private = 0 AND p.postId NOT IN ( SELECT p.postId FROM et_conversation c LEFT JOIN et_post p ON p.conversationId = c.conversationId WHERE c.private = 0 GROUP BY p.conversationId ORDER BY p.TIME ) ``` In the first section you are pulling ALL rows from the et_post table and et_conversation table where `c.private = 0`. There are no other clauses. Then in your `NOT IN` section, you are saying: "Return ALL results where `c.private=0`" And then of course it removes them from the outer result. So what is happening here is: a) You are returning all records in the outer statement b) the `NOT IN` is returning ALL results based on the SAME `WHERE` conditions c) With every row matching, of course you get zero results Sounds like you need to modify your subquery to select exactly what you don't want to see.
Query with NOT IN subquery returning 0 rows
[ "mysql", "sql", "subquery" ]
Hi, I have data like this: ``` ORDER_NUMBER REVISION_NUMBER 2-345 1 2-345 2 2-345 3 5-436 1 6-436 1 ``` Now I need to pick only those order\_numbers which have only revision number 1; the order\_number should not have any other revision number like 2 or 3. In this case, it should display order\_numbers 5-436 and 6-436, since 2-345 also has revision numbers 2 and 3. How do I do this in SQL?
Another solution, using not exists: ``` Select order_number from orders a WHERE NOT EXISTS ( SELECT 1 FROM orders b WHERE a.order_number = b.order_number and revision_number != 1) ```
Since you say specifically you want to use `NOT EXISTS`... ``` SELECT * FROM Orders T WHERE T.REVISION_NUMBER = 1 AND NOT EXISTS ( SELECT * FROM Orders T2 WHERE T2.ORDER_NUMBER = T.ORDER_NUMBER AND T2.REVISION_NUMBER <> 1) ``` [Here's an SQL Fiddle that demonstrates it in action](http://sqlfiddle.com/#!4/ecb7c/1).
How to eliminate unwanted data using not exists in SQL
[ "sql", "oracle" ]
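Both answers use the same `NOT EXISTS` shape; here is a runnable sketch of the accepted form via Python's `sqlite3`, on the question's data (lower-case identifiers are an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (order_number TEXT, revision_number INTEGER);
INSERT INTO orders VALUES
    ('2-345', 1), ('2-345', 2), ('2-345', 3),
    ('5-436', 1), ('6-436', 1);
""")

# Orders whose only revision is 1: '2-345' is excluded because it also
# has revisions 2 and 3
only_rev1 = cur.execute("""
    SELECT t.order_number FROM orders t
    WHERE t.revision_number = 1
      AND NOT EXISTS (SELECT 1 FROM orders t2
                      WHERE t2.order_number = t.order_number
                        AND t2.revision_number <> 1)
    ORDER BY t.order_number
""").fetchall()
print(only_rev1)  # [('5-436',), ('6-436',)]
```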
I have a query that returns a dataset with results for two different years. There will be exactly two rows per location id (not necessarily in sequence): ``` +------+---------------------------------------+ | year | location_id | unique_1 | data +------+---------------------------------------+ | 1990 | 100 | 343 | 100 | 2000 | 100 | 343 | 200 | 1990 | 55 | 111 | 50 | 2000 | 55 | 111 | 60 ``` For each location, I want to subtract the earlier year's data value from the later year's data value. Something like this (which would return 100 if this were actually valid MySQL syntax), but it would need to work for all rows: ``` (SELECT data FROM TABLE WHERE year = 2000 AND location_id = 100 AND unique_1 = 343 ) MINUS (SELECT data FROM TABLE WHERE year = 1990 AND location_id = 100 AND unique_1 = 343 ) ```
If you are guaranteed that there are exactly two rows for the same `location_id`, you can do it like this: ``` select a.location_id , b.data - a.data from test a join test b on a.location_id=b.location_id and a.data>b.data ``` This query ensures that two rows with the same location ids get joined together in such a way that the one with smaller `data` is on the `a` side, and `b` is on the `b` side. [Demo.](http://www.sqlfiddle.com/#!9/c8c11/3)
You can do this using conditional aggregation: ``` select t.location_id, sum(case when t.year = 2000 then data when t.year = 1990 then - data end) as diff from table t group by t.location_id; ```
Query to subtract values from two different year columns?
[ "mysql", "sql" ]
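A runnable check of the self-join idea via Python's `sqlite3`. One deviation from the accepted answer: this sketch joins on `a.year < b.year` instead of `a.data > b.data`, so it still works if the later year's value happens to be smaller:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE t (year INTEGER, location_id INTEGER, unique_1 INTEGER, data INTEGER);
INSERT INTO t VALUES
    (1990, 100, 343, 100), (2000, 100, 343, 200),
    (1990, 55, 111, 50),  (2000, 55, 111, 60);
""")

# Pair each location's earlier row (a) with its later row (b)
diffs = cur.execute("""
    SELECT a.location_id, b.data - a.data AS delta
    FROM t a JOIN t b
      ON a.location_id = b.location_id AND a.year < b.year
    ORDER BY a.location_id
""").fetchall()
print(diffs)  # [(55, 10), (100, 100)]
```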
I have a table like below. [![enter image description here](https://i.stack.imgur.com/2Il3L.png)](https://i.stack.imgur.com/2Il3L.png) I need to get the data like below. [![enter image description here](https://i.stack.imgur.com/P1PEb.png)](https://i.stack.imgur.com/P1PEb.png) I have achieved this result by creating two temp tables. Please help me to do the same with PIVOT.
At least I wouldn't use pivot for that, to my mind this is simpler to do with group by and row\_number: ``` select UserId, max(starttime) as starttime, max(endtime) as endtime from ( select UserId, case when StartOrEnd = 'S' then time end as starttime, case when StartOrEnd = 'E' then time end as endtime, row_number() over (partition by UserID order by time asc) + case when StartOrEnd = 'S' then 1 else 0 end as GRP from table1 ) X group by UserId, GRP order by starttime ``` The derived table splits the time into start / end time columns (to handle cases where only one exists) and uses a trick with row number to group the S / E items together. The outer select just groups the rows into the same row. Example in [SQL Fiddle](http://sqlfiddle.com/#!3/31e70/1)
Here is another method ``` declare @t table(userid int, StartOrEnd char(1), time datetime) insert into @t select 1,'S','2015-07-27 16:45' union all select 1,'E','2015-07-27 16:46' union all select 2,'S','2015-07-27 16:47' union all select 2,'E','2015-07-27 16:48' union all select 1,'S','2015-07-27 16:49' union all select 1,'E','2015-07-27 16:50' select userid,min(time) as minimum_time, max(time) as maximum_time from ( select *, row_number() over (partition by cast(UserID as varchar(10)) +StartOrEnd order by time asc) as sno from @t ) as t group by userid,sno ``` Result ``` userid minimum_time maximum_time ----------- ----------------------- ----------------------- 1 2015-07-27 16:45:00.000 2015-07-27 16:46:00.000 2 2015-07-27 16:47:00.000 2015-07-27 16:48:00.000 1 2015-07-27 16:49:00.000 2015-07-27 16:50:00.000 ```
convert row to column using Pivot without any clause
[ "sql", "sql-server", "sql-server-2008-r2", "pivot" ]
I have a table `news` with these fields: `idNews` `idArea` `title` `text` `date` What I want to do is delete all duplicate rows with the same `title` and `text`, except one (the one with the earliest date). I already tried these queries without success: ``` DELETE FROM news WHERE idNews NOT IN (SELECT MIN(date) FROM news GROUP BY title,text, date); DELETE idNews FROM news WHERE date< date AND title= title and text=text; ```
Assuming `idNews` is a key then this should work: ``` delete from news where idnews not in ( select idnews from ( select idnews from news join ( select title, text, min(date) as min_date from news group by title, text ) x on news.title = x.title and news.text = x.text and news.date = x.min_date ) a ); ``` The reason for the nested joins is that MySQL won't let you delete data from a table that you're directly referencing in a join. The second level subquery creates a temporary result set that will allow the delete. [Sample SQL Fiddle](http://www.sqlfiddle.com/#!9/7f8a3c/1)
One of the methods is: ``` delete t1 from news as t1 inner join ( select title,text,min(date) as date from news group by title,text ) as t2 on t1.title=t2.title and t1.text=t2.text where t1.date>t2.date; ```
Delete duplicate rows with same text
[ "mysql", "sql", "text", "duplicates" ]
I have the following table test: Table: `Test` ``` create table test ( number int, name varchar(10) ); ``` Insertion: ``` insert into test values(111,'PersonA'); insert into test values(211,'PersonB'); insert into test values(311,'PersonC'); insert into test values(111,'PersonA'); insert into test values(212,'PersonB'); insert into test values(313,'PersonC'); insert into test values(111,'PersonA'); insert into test values(211,'PersonB'); ``` Now I want to display those records: if a person used two different numbers, their details should be shown. If I want to see 1 to 2, it has to show me the details of people who have used 1 to 2 distinct numbers: Result : for 1 to 2 ``` number name ------------ 111 PersonA 211 PersonB 212 PersonB 311 PersonC 313 PersonC ``` If I want to see 2 to 2, it has to show me the details of people who have used exactly 2 distinct numbers: Result : for 2 to 2 ``` number name --------------- 211 PersonB 212 PersonB 311 PersonC 313 PersonC ```
Well, I have done it using a CTE. For 1 to 2: ``` with cte AS ( SELECT COUNT(DISTINCT Number) as a,Name from test group by name ) select DISTINCT x.Number,z.Name from cte z inner join test x ON z.name = x.name WHERE z.a between 1 and 2; ``` Result: ``` number name ------------- 111 PersonA 211 PersonB 212 PersonB 311 PersonC 313 PersonC ``` For 2 to 2: ``` with cte AS ( SELECT COUNT(DISTINCT Number) as a,Name from test group by name ) select DISTINCT x.Number,z.Name from cte z inner join test x ON z.name = x.name WHERE z.a between 2 and 2; ``` Result: ``` number name ------------- 211 PersonB 212 PersonB 311 PersonC 313 PersonC ```
``` select * from test GROUP BY name HAVING COUNT(name) > 1 ```
SQL Server 2008 R2: Select with condition
[ "sql", "sql-server", "sql-server-2008-r2" ]
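The accepted CTE boils down to a grouped distinct count joined back to the table. A runnable sketch via Python's `sqlite3` on the question's rows, parameterized on the requested range:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE test (number INTEGER, name TEXT);
INSERT INTO test VALUES
    (111,'PersonA'), (211,'PersonB'), (311,'PersonC'),
    (111,'PersonA'), (212,'PersonB'), (313,'PersonC'),
    (111,'PersonA'), (211,'PersonB');
""")

lo, hi = 2, 2   # "2 to 2": people who used exactly two distinct numbers
rows = cur.execute("""
    SELECT DISTINCT t.number, t.name
    FROM test t
    JOIN (SELECT name, COUNT(DISTINCT number) AS n
          FROM test GROUP BY name) g ON g.name = t.name
    WHERE g.n BETWEEN ? AND ?
    ORDER BY t.name, t.number
""", (lo, hi)).fetchall()
print(rows)  # [(211, 'PersonB'), (212, 'PersonB'), (311, 'PersonC'), (313, 'PersonC')]
```

Setting `lo, hi = 1, 2` additionally returns PersonA's single number, matching the question's first expected result.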
``` declare @date Datetime ='',@query nvarchar(max); set @date =getdate(); set @query='select * from [3].[Sync_Database_BTS].dbo.[Test] where [Test] >=cast(date,'+ @date +' ,103) ; ' exec sp_executesql @query ``` Can you please tell me what my mistake is?
You need to convert the `date` to `varchar` if you are using the `datetime` type in a dynamic query. You cannot use a `style` in the `cast` function; you need to use `Convert` instead. You also need extra single quotes around the `datetime` parameter. ``` DECLARE @date DATETIME ='', @query NVARCHAR(max); SET @date = Getdate(); SET @query='select * from [3].[Sync_Database_BTS].dbo.[Test] where [Test] >=convert(date,''' + Cast(@date AS VARCHAR(50)) + ''' ,103) ; ' --print @query EXEC Sp_executesql @query ``` Always use a **Print statement** to debug a dynamic query.
Just because you're using a dynamic query, that doesn't prevent you from using [parameters](https://msdn.microsoft.com/en-us/library/ms188001.aspx) and keeping everything nicely typed: ``` declare @date Datetime ='',@query nvarchar(max),@parms nvarchar(max); set @date =getdate(); set @query=N'select * from [3].[Sync_Database_BTS].dbo.[Test] where [Test] >=cast(date,@date) ; '; set @parms = N'@date datetime'; exec sp_executesql @query,@parms,@date ```
"Conversion failed when converting date and/or time from character string " on dynamic query
[ "sql", "sql-server" ]
I have some queries like this:

```
Alter Table Table1 ALTER COLUMN T1 varchar(MAX);
Alter Table Table1 ALTER COLUMN T2 varchar(MAX);
Alter Table Table1 ALTER COLUMN T3 varchar(MAX);

--Table2 does not have a column "R1" and is likely to give an error
Alter Table Table2 ALTER COLUMN R1 varchar(MAX);
Alter Table Table2 ALTER COLUMN T1 varchar(MAX);
Alter Table Table2 ALTER COLUMN T2 varchar(MAX);
Alter Table Table2 ALTER COLUMN T3 varchar(MAX);
```

## Possible Error

The **4th statement** is likely to raise an error because there is no column in Table2 named **R1**.

## Need

I need a way for all of the statements to be executed even after one of them fails.

## My Approach

I tried to **execute these statements individually** to receive an error message for every line, but it takes too much time because it opens 7 separate connections to a **server that is reached from the PC over the internet**. So I sent all the queries together to get the results in one connection, but execution stops at the 4th line because that statement is invalid.

Any suggestion or piece of code is appreciated.
Use a try-catch block: ``` Alter Table Table1 ALTER COLUMN T1 varchar(MAX); Alter Table Table1 ALTER COLUMN T2 varchar(MAX); Alter Table Table1 ALTER COLUMN T3 varchar(MAX); BEGIN TRY Alter Table Table2 ALTER COLUMN R1 varchar(MAX); END TRY BEGIN CATCH print 'error altering column R1 of Table2'; END CATCH; Alter Table Table2 ALTER COLUMN T1 varchar(MAX); Alter Table Table2 ALTER COLUMN T2 varchar(MAX); Alter Table Table2 ALTER COLUMN T3 varchar(MAX); ```
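The same "trap each statement so the rest of the batch still runs" idea can be sketched in Python with `sqlite3` (the failing statement here is a stand-in, since SQLite has no `ALTER COLUMN`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (T1 TEXT)")
conn.execute("CREATE TABLE Table2 (T1 TEXT)")

# One statement references a column that does not exist; wrapping each in
# try/except lets the remaining statements still run (the TRY...CATCH idea).
statements = [
    "ALTER TABLE Table1 ADD COLUMN T2 TEXT",
    "UPDATE Table2 SET R1 = 'x'",          # fails: no column R1
    "ALTER TABLE Table2 ADD COLUMN T2 TEXT",
]
errors = []
for stmt in statements:
    try:
        conn.execute(stmt)
    except sqlite3.OperationalError as e:
        errors.append(str(e))

# Only the bad statement failed; the ones after it still executed.
print(len(errors))  # → 1
```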
You should use `GO` between statements in order to continue the execution regardless of errors:

```
Alter Table Table1 ALTER COLUMN T1 varchar(MAX);
GO
Alter Table Table1 ALTER COLUMN T2 varchar(MAX);
GO
Alter Table Table1 ALTER COLUMN T3 varchar(MAX);
GO
Alter Table Table2 ALTER COLUMN R1 varchar(MAX);
GO
Alter Table Table2 ALTER COLUMN T1 varchar(MAX);
GO
Alter Table Table2 ALTER COLUMN T2 varchar(MAX);
GO
Alter Table Table2 ALTER COLUMN T3 varchar(MAX);
GO
```

This will give you all the messages and will execute all the statements one after another. These are my logs from a similar situation. As you can see, several errors are reported, not just one: [![enter image description here](https://i.stack.imgur.com/ULb10.png)](https://i.stack.imgur.com/ULb10.png)

> NOTE: The catch behavior depends on the severity of the error; this link from the MS docs explains how try\_catch works <https://learn.microsoft.com/en-us/sql/t-sql/language-elements/try-catch-transact-sql?view=sql-server-2017>

Hope it helps :)
SQL - Continue running all SQL statements even after error
[ "sql", "sql-server", "sql-server-2008" ]
I have these tables:

* steps(stepId\_, stepName)
* stepOfProject(idProject\_, stepId\_, dateofStart)
* Project(IdProject\_, idResponsable)

I want to get the latest step in each project. It works with this query:

```
select stepName
from step
where step.step = (select max(idstep) from stepOfProject where idProject = 1);
```

But it only works for project number 1; I want to get it for every projectId.
Try this:

```
select s1.stepName, sp1.idProject
from steps s1
join stepOfProject sp1 on s1.stepId = sp1.stepId
left join stepOfProject sp2
    on sp1.idProject = sp2.idProject and sp2.stepId > sp1.stepId
where sp2.stepId is null
```

What the `left join` is doing here is checking for the existence of another step belonging to the same project with a higher `stepId`. If such a record exists, the row is *excluded* from the results (that's what `sp2.stepId is null` does). This leaves you with only the highest-numbered `stepId` for each project.
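If you want to sanity-check this left-join / `IS NULL` pattern, here is a small reproduction in Python's `sqlite3` (table names follow the question; the step data is made up for illustration):

```python
import sqlite3

# Minimal data: project 1 has steps 1 and 2, project 2 has steps 1 and 3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE steps (stepId INTEGER, stepName TEXT);
CREATE TABLE stepOfProject (idProject INTEGER, stepId INTEGER);
INSERT INTO steps VALUES (1, 'design'), (2, 'build'), (3, 'test');
INSERT INTO stepOfProject VALUES (1, 1), (1, 2), (2, 1), (2, 3);
""")

# Keep only rows for which no higher stepId exists in the same project.
rows = conn.execute("""
SELECT s1.stepName, sp1.idProject
FROM steps s1
JOIN stepOfProject sp1 ON s1.stepId = sp1.stepId
LEFT JOIN stepOfProject sp2
       ON sp1.idProject = sp2.idProject AND sp2.stepId > sp1.stepId
WHERE sp2.stepId IS NULL
ORDER BY sp1.idProject
""").fetchall()
print(rows)  # → [('build', 1), ('test', 2)]
```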
Maybe I totally don't get what you're after... and I think Nick is right -- why NOT use a group by, unless this is an academic exercise. It would help to have some sample data and expected results, but short of that:

```
select idProjectSummary.idProject, step.step, step.stepName
from step
join (
    select idProject, max(idstep) as max_step
    from stepOfProject
    group by idProject
) as idProjectSummary
  on step.step = idProjectSummary.max_step
```

I haven't used MySQL in a while, but the last time I used it, it didn't support the `with` clause.
How to resolve this OPERATION without using GROUP BY in "SQL"
[ "mysql", "sql" ]
I am trying a date check by month with this script:

```
DECLARE @Date DATETIME = '2015-07-31';
DECLARE @MonthCount INT = 3;
DECLARE @controlDate DATETIME = '2015-04-28';

SELECT MONTH(@controlDate),
       MONTH(DATEADD(MONTH, -@MonthCount, @Date)),
       IIF(MONTH(@controlDate) > MONTH(DATEADD(MONTH, -@MonthCount, @Date)), 'OK', 'No') as isOK
```

But I am getting this syntax error:

> Msg 102, Level 15, State 1, Line 8 Incorrect syntax near '>'

[![enter image description here](https://i.stack.imgur.com/vY79R.png)](https://i.stack.imgur.com/vY79R.png)

EDIT: When I try `if`, it works:

```
DECLARE @Date DATETIME = '2015-07-31';
DECLARE @MonthCount INT = 3;
DECLARE @controlDate DATETIME = '2015-04-28';

if(MONTH(@controlDate) > MONTH(DATEADD(MONTH, -@MonthCount, @Date)))
    print 'OK'
else
    print 'No'
```

What am I doing wrong, or is this a bug?
This happens when you're not running on a version of SQL Server that supports `IIF` (2012 or later), or if the compatibility level of the database isn't set high enough. To check the compatibility level:

```
select compatibility_level from sys.databases where name = '<database name>'
```

To alter the compatibility level:

```
alter database <database-name> set compatibility_level = 110 -- SQL Server 2012
```

List of compatibility levels: <https://msdn.microsoft.com/en-us/library/bb510680.aspx>
It should work correctly. You can also use CASE expression ``` DECLARE @Date DATETIME = '2015-07-31'; DECLARE @MonthCount INT = 3; DECLARE @controlDate DATETIME = '2015-04-28'; SELECT MONTH(@controlDate), MONTH(DATEADD(MONTH, -@MonthCount, @Date)), case when MONTH(@controlDate) > MONTH(DATEADD(MONTH, -@MonthCount, @Date)) then 'OK' else 'No' end as isOK ```
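As a sanity check that the `CASE` form evaluates the way you'd expect, here is a small sketch using Python's `sqlite3`, with `strftime('%m', ...)` standing in for `MONTH()` and simplified literal dates (SQLite, like pre-2012 SQL Server, has no `IIF`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Month 5 vs month 4, so the CASE branch should yield 'OK'.
row = conn.execute("""
SELECT CASE WHEN CAST(strftime('%m', '2015-05-28') AS INTEGER)
          > CAST(strftime('%m', '2015-04-30') AS INTEGER)
       THEN 'OK' ELSE 'No' END
""").fetchone()
print(row[0])  # → OK
```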
IIF 'Incorrect syntax near' error at SQL Server 2014
[ "sql", "sql-server-2014", "iif" ]
I have this query:

```
SELECT nsd.REGISTRATION_DATE, nsd.Subject_Name, nsd.Subject_Code, nsda.District_Name, nsda.Region_Name
FROM tbl1 AS nsd
LEFT JOIN tbl2 AS nsda ON nsd.Subject_Code = nsda.Subject_Code
```

I need to group this query by `Subject_Name, District_Name and Region_Name`. I also want to add a counter which tells me, for example:

```
Subject|Nr.Total| Region| District
Maths      2      Austria   Tyrol
Maths      5      Austria   Vienna
```

When I try to add the group by, I get an error about an aggregate function:

> Msg 8120, Level 16, State 1, Line 1 Column 'tbl1.REGISTRATION\_DATE' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

Thanks in advance.
Since you're not using `REGISTRATION_DATE` in your `SELECT`, you can omit it.

```
SELECT
    nsd.Subject_Name,
    [Nr. Total] = ISNULL(COUNT(nsda.Subject_Code), 0),
    nsda.Region_Name,
    nsda.District_Name
FROM tbl1 nsd
LEFT JOIN tbl2 nsda ON nsd.Subject_Code = nsda.Subject_Code
GROUP BY nsd.Subject_Name, nsda.Region_Name, nsda.District_Name
```

In SQL Server, every non-aggregated column in the `SELECT` must appear in the [**`GROUP BY`**](https://msdn.microsoft.com/en-us/library/ms177673.aspx) clause.
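To see the grouped count in action, here is a small reproduction in Python's `sqlite3` (table and column names follow the question; the rows are made up to match the shape of the expected output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (Subject_Name TEXT, Subject_Code TEXT);
CREATE TABLE tbl2 (Subject_Code TEXT, Region_Name TEXT, District_Name TEXT);
INSERT INTO tbl1 VALUES ('Maths', 'M1');
INSERT INTO tbl2 VALUES ('M1', 'Austria', 'Tyrol'),
                        ('M1', 'Austria', 'Tyrol'),
                        ('M1', 'Austria', 'Vienna');
""")

# Group by the non-aggregated columns; COUNT gives the per-group total.
rows = conn.execute("""
SELECT nsd.Subject_Name, COUNT(nsda.Subject_Code) AS total,
       nsda.Region_Name, nsda.District_Name
FROM tbl1 nsd
LEFT JOIN tbl2 nsda ON nsd.Subject_Code = nsda.Subject_Code
GROUP BY nsd.Subject_Name, nsda.Region_Name, nsda.District_Name
ORDER BY nsda.District_Name
""").fetchall()
print(rows)  # → [('Maths', 2, 'Austria', 'Tyrol'), ('Maths', 1, 'Austria', 'Vienna')]
```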
```
SELECT
    nsd.Subject_Name AS [Subject],
    count(*) AS [Nr.Total],
    nsda.Region_Name AS [Region],
    nsda.District_Name AS [District]
FROM tbl1 AS nsd
LEFT JOIN tbl2 AS nsda ON nsd.Subject_Code = nsda.Subject_Code
Group by nsd.Subject_Name, nsda.Region_Name, nsda.District_Name
```

If you need `tbl1.REGISTRATION_DATE` in the select, add it to the Group By clause as well.
Add Group By in query and a counting variable
[ "sql", "sql-server" ]