Columns: Prompt, Chosen, Rejected, Title, Tags
I have an issue with my SQL Server query: ``` SELECT * FROM table WHERE M.StatutMandatId = CASE WHEN @Mandat = '1' THEN '5' WHEN @Mandat = '0' THEN 'IN(1,2,3,4,5)' END ``` It fails with this error: > Conversion failed when converting the varchar value 'IN(1,2,3)' to > data type int. Is it not possible to use IN like that?
You can't use `IN` like that. If you have to use a `CASE`, do it like this: ``` SELECT * FROM table WHERE CASE WHEN @Mandat = '1' AND M.StatutMandatId = 5 THEN 1 WHEN @Mandat = '0' AND M.StatutMandatId IN(1,2,3,4,5) THEN 1 ELSE 0 END = 1 ``` It probably won't be very performant, though, so I would instead restructure it like this: ``` SELECT * FROM table WHERE (@Mandat = '1' AND M.StatutMandatId = 5) OR (@Mandat = '0' AND M.StatutMandatId IN(1,2,3,4,5)) ```
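The `OR`-based rewrite at the end of the answer can be sketched as runnable code using Python's `sqlite3` as a stand-in for SQL Server (the table name `mandats` and the sample data are made up for illustration; the `@Mandat` variable becomes a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mandats (StatutMandatId INTEGER)")
conn.executemany("INSERT INTO mandats VALUES (?)", [(1,), (3,), (5,), (7,)])

def fetch(mandat):
    # The CASE/IN pair rewritten as two OR'd predicates, as in the answer
    return [r[0] for r in conn.execute(
        """SELECT StatutMandatId FROM mandats
           WHERE (? = '1' AND StatutMandatId = 5)
              OR (? = '0' AND StatutMandatId IN (1, 2, 3, 4, 5))
           ORDER BY StatutMandatId""", (mandat, mandat))]

print(fetch('1'))  # [5]
print(fetch('0'))  # [1, 3, 5]
```

Each branch of the original `CASE` becomes one parenthesised predicate, so the optimizer can treat them as plain filters.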
No, you can't create dynamic SQL like that. You could achieve the same result using the following statement, though: ``` SELECT * FROM table WHERE (M.StatutMandatId IN(1,2,3,4,5) AND @Mandat=0) OR (M.StatutMandatId =5 AND @Mandat=1) ```
CASE and IN in WHERE
[ "sql", "sql-server" ]
I have a SQL table with duplicated records, but one field is different. How can I delete the duplicated records where this field is null? And if both of a pair are null, delete one. ``` my table is : id Number No 1 A25 10 2 A24 20 3 A25 4 C12 5 C12 ``` --- ``` new table : id Number No 1 A25 10 2 A24 20 4 C12 5 C12 ```
``` delete from mytable where id in( select t1.id from mytable t join mytable t1 on t.number=t1.number and t1.id>t.id where (t1.no is null and t.no is not null) or (t1.no is null and t.no is null) ) ```
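As a sanity check, here is the delete from the answer above run against the question's sample data, using Python's `sqlite3` as a stand-in. Note that, following the stated rule ("if both of a pair are null, delete one"), one of the two all-null `C12` rows (id 5, the higher id) is removed as well:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, number TEXT, no INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?,?,?)",
                 [(1, 'A25', 10), (2, 'A24', 20), (3, 'A25', None),
                  (4, 'C12', None), (5, 'C12', None)])

# Self-join the table on Number with t1.id > t.id, then delete the t1 side
# whenever its No is null (duplicate of a filled row, or second of a null pair)
conn.execute("""
    DELETE FROM mytable WHERE id IN (
        SELECT t1.id
        FROM mytable t
        JOIN mytable t1 ON t.number = t1.number AND t1.id > t.id
        WHERE (t1.no IS NULL AND t.no IS NOT NULL)
           OR (t1.no IS NULL AND t.no IS NULL))
""")

remaining = [r[0] for r in conn.execute("SELECT id FROM mytable ORDER BY id")]
print(remaining)  # [1, 2, 4]
```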
Try this. ``` select * into #temp1 from My_table group by Number having Count(*)>1 delete My_table from My_table join #temp1 on My_table.Number = #temp1.Number where My_table.Number is null ```
How to delete duplicated record?
[ "sql" ]
I have a table: * Products(ID, weight, expired, ...) I would like to get the first N products where expired is false and the sum of the weights is, for example, `>= 5kg`. How could I write this query? Thanks so much. EDIT: it seems that I was not very clear in my question. For example, I can have these products: * p1: ID = 1, weight = 1; * p2: ID = 2, weight = 1; * p3: ID = 3, weight = 1; * p4: ID = 4, weight = 2; For example, I would like to get the first N products whose sum of weights is 3. The order stays the same. So I would like to get, for example, p1, p2 and p3, because the sum of their weights is 3 (in this case N is 3 because I need 3 products), so it is correct. I could also get p1 and p4 (in this case N is 2 because I need only two products), or p2 and p4... and so on.
You can write it as: ``` select top 10 * from -- top N goes here ( SELECT *,SUM(weight) OVER (partition BY expired ORDER BY Id asc) as cumulativesum FROM Products WHERE expired = 0 )T where cumulativesum>=5 ```
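The running-total idea can be sketched in Python's `sqlite3` (window functions need SQLite 3.25+), using the edited example from the question. This variant selects the minimal prefix of rows whose weights reach the target: the cutoff condition `cumulativesum - weight < target` keeps every row whose preceding running total is still below the target (adjust the condition if you want a different cutoff than the answer's `>= 5`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ID INTEGER, weight INTEGER, expired INTEGER)")
conn.executemany("INSERT INTO Products VALUES (?,?,?)",
                 [(1, 1, 0), (2, 1, 0), (3, 1, 0), (4, 2, 0)])

target = 3
rows = [r[0] for r in conn.execute("""
    SELECT ID FROM (
        SELECT ID, weight,
               SUM(weight) OVER (ORDER BY ID) AS cumulativesum
        FROM Products WHERE expired = 0)
    WHERE cumulativesum - weight < ?  -- running total before this row is below target
    ORDER BY ID""", (target,))]
print(rows)  # [1, 2, 3]
```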
``` SELECT TOP 10 p.* FROM Products p WHERE p.expired = 0 AND p.weight >= 5 ``` However, currently it's not clear how you want to `sum` a single record.
How to get the first N rows where the sum is ≥ X?
[ "sql", "sql-server" ]
How can I group an unknown number of rows into a single row, where the set columns determine the grouping? I want to shift ``` Group1 Group2 Group3 Val1 Val2 Val3 =============================================== John Smith 25 1 0 0 John Smith 25 0 6 0 John Smith 25 0 0 8 Chris Green 30 0 3 0 Chris Green 30 5 0 0 ``` into ``` Group1 Group2 Group3 Val1 Val2 Val3 =============================================== John Smith 25 1 6 8 Chris Green 30 5 3 0 ```
``` declare @t table (group1 varchar(10),group2 varchar(20),group3 int,val1 int,val2 int,val3 int) insert into @t (group1,group2,group3,val1,val2,val3)values ('John','Smith',25,1,0,0), ('John','Smith',25,1,6,0), ('John','Smith',25,1,0,8), ('Chris','Green',30,1,0,0), ('Chris','Green',30,1,3,0), ('Chris','Green',30,5,0,0) select distinct group1,group2,group3,MAX(val1),MAX(val2),MAX(val3) from @t group by group1,group2,group3 ORDER BY group1 desc,group2 desc ```
``` select Group1, Group2, Group3, SUM(val1), SUM(val2), SUM(val3) from tablename GROUP BY Group1, Group2, Group3 ```
How can I group rows into columns with multiple values?
[ "sql", "sql-server", "t-sql", "grouping", "pivot-table" ]
I have a table named `route_location` like the following table: ![enter image description here](https://i.stack.imgur.com/m20Rm.png) This table is for a map. There are routes, and each route has a unique `routeID`. The routes contain locations, and the order in which the locations appear in a route is defined by `orderID`. I want to get all locations, in any route, whose `orderID` is greater than the `orderID` of a specific `locationID` (in my current case, `11`) in that route. The expected answer would be ``` locationID=12,13,16 [12,13 from routeID=1, 16 from routeID=2, no data from routeID=3] ``` How do I do it? I think it is related to this post: [Advanced filter in SQL](https://stackoverflow.com/questions/27253759/advanced-filter-in-sql) Thank you very much in advance! **UPDATE:** > The orderID of the locationID must be greater than the orderID of locationID=11 in the route
It's still unclear, but you seem to ask for logic applied to an ordered set: return all rows that come **after** a `locationID = 11` on the same route, i.e. whose `orderID` is greater than that row's `orderID` ``` SELECT * FROM route_location AS rl WHERE EXISTS ( SELECT * FROM route_location AS rl2 WHERE rl.routeID = rl2.routeID -- on the same route AND rl.orderID > rl2.orderID -- at a later order AND rl2.locationID = 11 -- than the locationID-11 row ) ```
Try this ``` SELECT * FROM route_location AS rl inner join route_location AS r2 on rl.routeID = r2.routeID and rl.orderID < r2.orderID and r2.locationID = 11 ```
SQL advanced filtering
[ "mysql", "sql", "set" ]
Say I have a query that is something like this: ``` select table1.column1, table1.column2, ..., case when foo.id is null then 'N/A' else sum(foo.points) end as score -- same result using ifnull() from table1 inner join table2 on ... left join foo on ... group by ... ``` Since I do a `LEFT JOIN` on `foo`, there is a chance that there is no match. In that case, I want the `score` to show as `'N/A'` instead of `NULL` or `0`. But when I do the above query, I get `blob` for the `score` column for all rows.
When the branches of the case return different data types, the result will have the data type `blob` if MySQL can't find a common type. Try casting the sum to a char: ``` case when foo.id is null then 'N/A' else cast(sum(foo.points) as char) end as score ```
If you are grouping, you should really put your sum around the case, like: `sum(case when foo.id is null then 0 else foo.points)` ..as otherwise you are summing just the row value (meaning only one value). Also, a column should only have one data type, so either number or text, which is why you might be seeing this issue since you are trying to display either a number or text in the same column. If you really want N/A, you can try converting the number to text, and then using the coalesce function to handle nulls, however I would need to see your data to say the best way to write your above query. If you can create an SQL fiddle, I would be more than happy to take a look :)
Mysql SUM inside a CASE
[ "mysql", "sql" ]
I've searched around and couldn't find an answer anywhere. I'm querying a database that has stored numbers as a `VARCHAR2` data type. I'm trying to find numbers that are greater than `1450000` (`where BI_SO_NBR > '1450000'`), but this doesn't bring back the results I'm expecting. I'm assuming it's because the value is stored as text and I don't know any way to get around it. Is there some way to convert the field to a number in my query or some other trick that would work? Hopefully this makes sense. I'm fairly new to SQL. Thanks in advance.
You can try using it like this: ``` where to_number(BI_SO_NBR) > 1450000 ``` This assumes you are using an Oracle database. Also check the [To\_Number](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions191.htm) function. **EDIT:-** You can try this (*after the OP commented that it worked*): ``` where COALESCE(TO_NUMBER(REGEXP_SUBSTR(BI_SO_NBR, '^\d+(\.\d+)?')), 0) > 1450000 ```
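The underlying problem (text comparison vs numeric comparison) can be reproduced in any database. A sketch in Python's `sqlite3`, where `CAST` plays the role of Oracle's `TO_NUMBER` (the sample order numbers are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (BI_SO_NBR TEXT)")
conn.executemany("INSERT INTO orders VALUES (?)",
                 [('999',), ('1450001',), ('1449999',), ('2000000',)])

# Text comparison: '999' sorts after '1450000' lexicographically,
# so it is wrongly included in the result
as_text = [r[0] for r in conn.execute(
    "SELECT BI_SO_NBR FROM orders WHERE BI_SO_NBR > '1450000' ORDER BY BI_SO_NBR")]

# Numeric comparison via CAST gives the intended filter
as_num = [r[0] for r in conn.execute(
    "SELECT BI_SO_NBR FROM orders WHERE CAST(BI_SO_NBR AS INTEGER) > 1450000 "
    "ORDER BY CAST(BI_SO_NBR AS INTEGER)")]

print(as_text)  # ['1450001', '2000000', '999']
print(as_num)   # ['1450001', '2000000']
```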
If the number is too long to be converted correctly to a number, and it is always an integer with no left padding of zeroes, then you can also do: ``` where length(BI_SO_NBR) > length('1450000') or (length(BI_SO_NBR) = length('1450000') and BI_SO_NBR > '1450000' ) ```
SQL Query - Greater Than with Text Data Type
[ "sql", "oracle", "varchar2" ]
Not sure if this is possible on the DB side, but so far I can only get this result: Query: ``` SELECT City.city_name "City", PC.subcategory_id "Subcategory", PC.count "count" FROM products_counter PC , Cities City WHERE PC.city_id = City.city_id +-----------+----------------+-------+ | city_name | subcategory_id | count | +-----------+----------------+-------+ | City1 | fruits | 4 | | City2 | vegetables | 4 | | City2 | meat | 1 | +-----------+----------------+-------+ ``` Here are my two tables : Table **products\_counter:** ``` +-------+---------+----------------+-------+ | ID | city_id | subcategory_id | count | +-------+---------+----------------+-------+ | 1 | 1 | fruits | 4 | | 2 | 2 | vegetables | 4 | | 3 | 2 | meat | 1 | +-------+---------+----------------+-------+ ``` Table **cities:** ``` +---------+------------+ | city_id | city_name | +---------+------------+ | 1 | City1 | | 2 | City2 | | 3 | City3 | +---------+------------+ ``` and this is the expected result: ``` +-----------+----------------+-------+ | city_name | subcategory_id | count | +-----------+----------------+-------+ | City1 | fruits | 4 | | City1 | vegetables | 0 | | City1 | meat | 0 | | City2 | fruits | 0 | | City2 | vegetables | 4 | | City2 | meat | 1 | | City3 | fruits | 0 | | City3 | vegetables | 0 | | City3 | meat | 0 | +-----------+----------------+-------+ ``` But I'm not really sure how to list all the cities from the `Cities` table and then just assign the `count` column where the city\_id and subcategory\_id match.
You can use cross join for this. ``` SELECT c.city_name, pc.subcategory_id, IFNULL((select `count` from products_counter where city_id = c.city_id and subcategory_id = pc.subcategory_id),0) as 'Count' FROM cities c CROSS JOIN products_counter pc ``` working example here - <http://sqlfiddle.com/#!9/34c38/16>
Try this. ``` select c.city_name , ps.subcategory_id , ( select `count` from products_counter where city_id = c.city_id and subcategory_id = ps.subcategory_id) as 'Count' from cities c cross join products_counter pc ```
mysql combining data from two tables
[ "mysql", "sql" ]
I currently have this over long query that I'm trying to optimize. I looked at combining subquery's but I fail to get a count for every group in one query while having my constraints. Is it possible to slim down the following query? ``` SELECT val_year, spg, SUM(val_q1) val_q1, SUM(val_q2) val_q2, SUM(val_q3) val_q3, SUM(val_q4) val_q4, SUM(val_q1) + SUM(val_q2) + SUM(val_q3) + SUM(val_q4) val_total FROM ( SELECT val_year, spg, count_prods val_q1, 0 val_q2, 0 val_q3, 0 val_q4 FROM table_report WHERE val_quarter = 'Q1' UNION ALL SELECT val_year, spg, 0 val_q1, count_prods val_q2, 0 val_q3, 0 val_q4 FROM table_report WHERE val_quarter = 'Q2' UNION ALL SELECT val_year, spg, 0 val_q1, 0 val_q2, count_prods val_q3, 0 val_q4 FROM table_report WHERE val_quarter = 'Q3' UNION ALL SELECT val_year, spg, 0 val_q1, 0 val_q2, 0 val_q3, count_prods val_q4 FROM table_report WHERE val_quarter = 'Q4' ) ``` table\_report looks like this: ``` table_report: val_year VARCHAR2(4) spg VARCHAR2(256) val_quarter VARCHAR2(2) count_prods NUMBER(10) ```
You can replace the UNIONs with a single query using CASEs: ``` SELECT val_year, spg, CASE WHEN val_quarter = 'Q1' THEN count_prods ELSE 0 END val_q1, CASE WHEN val_quarter = 'Q2' THEN count_prods ELSE 0 END val_q2, CASE WHEN val_quarter = 'Q3' THEN count_prods ELSE 0 END val_q3, CASE WHEN val_quarter = 'Q4' THEN count_prods ELSE 0 END val_q4 FROM table_report ```
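The conditional-aggregation pattern, combined with the outer `SUM`/`GROUP BY` that the question already has, can be sketched in Python's `sqlite3` (the sample years and `spg` values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_report "
             "(val_year TEXT, spg TEXT, val_quarter TEXT, count_prods INTEGER)")
conn.executemany("INSERT INTO table_report VALUES (?,?,?,?)",
                 [('2014', 'A', 'Q1', 3), ('2014', 'A', 'Q2', 5),
                  ('2014', 'A', 'Q2', 2), ('2014', 'A', 'Q4', 1),
                  ('2015', 'B', 'Q3', 7)])

# One pass over the table: each CASE routes count_prods into its quarter column
rows = list(conn.execute("""
    SELECT val_year, spg,
           SUM(CASE WHEN val_quarter = 'Q1' THEN count_prods ELSE 0 END) AS val_q1,
           SUM(CASE WHEN val_quarter = 'Q2' THEN count_prods ELSE 0 END) AS val_q2,
           SUM(CASE WHEN val_quarter = 'Q3' THEN count_prods ELSE 0 END) AS val_q3,
           SUM(CASE WHEN val_quarter = 'Q4' THEN count_prods ELSE 0 END) AS val_q4
    FROM table_report
    GROUP BY val_year, spg
    ORDER BY val_year"""))
print(rows)  # [('2014', 'A', 3, 7, 0, 1), ('2015', 'B', 0, 0, 7, 0)]
```

This scans `table_report` once instead of four times, which is the point of replacing the UNIONs.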
It looks like a PIVOT query, so you could do all in one ``` SELECT val_year, spg, coalesce(q1_total, 0) as q1, coalesce(q2_total, 0) as q2, coalesce(q3_total, 0) as q3, coalesce(q4_total, 0) as q4, coalesce(q1_total, 0) + coalesce(q2_total, 0) + coalesce(q3_total, 0) + coalesce(q4_total, 0) as total FROM (SELECT val_Year, spg, count_prod, val_quarter FROM test) PIVOT (SUM(count_prod) as total FOR (val_Quarter) IN ('Q1' AS q1, 'Q2' AS q2, 'Q3' AS q3, 'Q4' as q4))s ; ``` see [SqlFiddle](http://sqlfiddle.com/#!4/603cd/3)
Is a combined subquery possible for this scenario?
[ "sql", "oracle", "optimization", "subquery" ]
I have been given the task of trying to detect duplicate records in a table with a large volume of rows. The table comprises 2 joined tables. So to begin with I have: ``` select b.event_number_id, b.tenure_number_id, a.work_start_date, a.work_stop_date, a.amount from MTA.mta_sow_event a, mta_tenure_event_xref b where a.event_number_id = b.event_number_id ``` Now we have a table to work from. The duplicate records have unique event\_number\_id values, while the remaining fields contain identical data, something like this: ``` | event_number_id | tenure_number_id | work_start_date | work_stop_date |amount| |-----------------|-------------------|-----------------|----------------|------| | 5532733 | 688203 | 01-SEP-14 | 25-SEP-14 | 5000 | | 5532734 | 688203 | 01-SEP-14 | 25-SEP-14 | 5000 | ``` So, this is an example of a duplicate record. There are consecutive event\_number\_id's and all the remaining columns have identical information. We believe that our system has been creating duplicate events for some time now (this isn't supposed to happen), so I want to query the whole joined table and find anything that has rows with exactly the same data, but different and consecutive event numbers.
So far, I managed to make a simple query that shows me any rows that have identical information, excluding the event\_number\_id column: ``` select b.tenure_number_id, a.work_start_date, a.work_stop_date, a.amount, count(*) from MTA.mta_sow_event a, mta_tenure_event_xref b where a.event_number_id = b.event_number_id group by b.tenure_number_id, a.work_start_date, a.work_stop_date, a.amount having count(*) > 1 ``` which returns: ``` | tenure_number_id | work_start_date | work_stop_date |amount|Count(*)| |-------------------|-----------------|----------------|------|--------| | 688203 | 01-SEP-14 | 25-SEP-14 | 5000 | 2 | ``` The problem is, sometimes there are rows that have identical data, but could be valid, so the best we can do at this point is find any of these matching rows that have consecutive event\_number\_id's. This is where I am hung up. Is there a way to pull out only the rows that contain these consecutive numbers?
Here's an approach based on a join of the data sets: ``` with cte_base_data as ( select ... your query here ...) select from cte_base_data t1 join cte_base_data t2 on (t1.tenure_number_id = t2.tenure_number_id and t1.work_start_date = t2.work_start_date and t1.work_stop_date = t2.work_stop_date and t1.amount = t2.amount) where t1.event_number_id = t2.event_number_id - 1; ``` The efficiency will depend on a few factors, such as the efficiency of scanning the base tables and the size of the data sets. It would be interesting to see a comparison of the execution plans of this method and the analytics-function methods. This common table expression-based join ought to be very efficient as it depends on hash joins, which have almost no cost as long as they stay in memory (a big question mark over that). I'd be inclined to go for the analytic functions if the event\_number\_id's were not consecutive -- if there might be gaps, for instance, which would be harder to implement as a join. Given that one of them is the other incremented, I think it's worth taking a punt on a join.
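The join-on-consecutive-ids idea from the answer above can be run against the question's sample rows in Python's `sqlite3` (the third row is an extra non-duplicate added for contrast; dates are rewritten as ISO strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (event_number_id INTEGER, tenure_number_id INTEGER,
                work_start_date TEXT, work_stop_date TEXT, amount INTEGER)""")
conn.executemany("INSERT INTO events VALUES (?,?,?,?,?)",
                 [(5532733, 688203, '2014-09-01', '2014-09-25', 5000),
                  (5532734, 688203, '2014-09-01', '2014-09-25', 5000),
                  (5532900, 700000, '2014-10-01', '2014-10-05', 100)])

# Join the data set to itself on all the "payload" columns, then keep only
# pairs whose event numbers are exactly one apart
pairs = list(conn.execute("""
    WITH base AS (SELECT * FROM events)
    SELECT t1.event_number_id, t2.event_number_id
    FROM base t1
    JOIN base t2
      ON t1.tenure_number_id = t2.tenure_number_id
     AND t1.work_start_date = t2.work_start_date
     AND t1.work_stop_date = t2.work_stop_date
     AND t1.amount = t2.amount
    WHERE t1.event_number_id = t2.event_number_id - 1"""))
print(pairs)  # [(5532733, 5532734)]
```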
General idea: group rows with the same values (`partition by tenure_number_id, work_start_date, work_stop_date, amount`), find the minimal `event_number_id` for each group and the row number of `event_number_id` inside the group starting from zero (using the analytic functions `min` and `row_number`), then compare the sum of the minimal ID and the row number with `event_number_id`. For consecutive numbers they have to be equal: ``` with t as (select b.event_number_id, b.tenure_number_id, a.work_start_date, a.work_stop_date, a.amount from MTA.mta_sow_event a, mta_tenure_event_xref b where a.event_number_id = b.event_number_id) select * from (select t.*, min(event_number_id) over (partition by tenure_number_id, work_start_date, work_stop_date, amount) + row_number() over (partition by tenure_number_id, work_start_date, work_stop_date, amount order by event_number_id) - 1 group_id from t) where event_number_id = group_id ```
How to find consecutive values in column
[ "sql", "oracle" ]
I have a table in a SQL Server database with some columns along with ``` CreatedBy (varchar) CreatedOn (datetime) UpdatedBy(varchar) UpdatedOn(datetime) ``` as standard columns. The `CreatedOn` and `CreatedBy` values are automatically populated as and when a new row is added to this table (using the default-value approach). Question: is there any out-of-the-box solution in SQL Server which can populate the `UpdatedOn` and `UpdatedBy` values as and when a row is updated?
Apart from adding a trigger as mentioned by Alex K et all., you could also add a RowVersion column. This data type was previously called TimeStamp, but unfortunately the name is misleading. This will give you the **UpdatedOn** smalldatetime timestamp, but not the UpdatedBy information. If you want to convert the RowVersion colunm into a Date & Time format, you need to do a bit of work; ## 1. Create a table "UpdateTimeStamp" with three columns (CreatedDate Smalldatetime, NewRowVersion, OldRowVersion) like this: `Create Table dbo.UpdateTimeStamp( OldRowVersion binary(8), CreatedDate DateTime constraint DF_UpdateTimeStamp_TimeStamp Default getdate(), NewRowVersion ROWVERSION CONSTRAINT PKUpdateTimeStamp PRIMARY KEY CLUSTERED )` Insert the first row manually `Insert into dbo.UpdateTimeStamp(OldRowVersion, CreatedDate) VALUES (0x0000000000000000, '2000-01-01')` ## 2. Create a SQL Agent job Which inserts one row in the table every one minute. Make step 1 run this code: `Insert into dbo.UpdateTimeStamp(OldRowVersion) SELECT TOP (1) NewRowVersion FROM dbo.UpdateTimeStamp ORDER BY NewRowVersion DESC` Set the schedule to run every 1 minute. ## 3. Join Join the UpdateTimeStamp table to your table with a between join claus like this: ``` SELECT top 10000 mt.*, uts.CreatedDate AS ModifiedDate FROM dim.MyTable MT LEFT JOIN dbo.UpdateTimeStamp uts ON MT.DT1RowVersion > OldRowVersion AND MT.DT1RowVersion <= NewRowVersion ORDER BY uts.CreatedDate ``` HIH.
You can add a trigger like the one below. This trigger works if multiple rows are updated at the same time. Please note it assumes you have added an identity column to your source table. ``` CREATE TRIGGER [dbo].UpdateTrans ON [dbo].YourTable FOR UPDATE AS SET NOCOUNT ON UPDATE yt SET UpdatedBy = SUSER_SNAME() , UpdatedDate = GETDATE() FROM DELETED d INNER JOIN YourTable yt ON yt.YourTableId = d.YourTableId ```
Automatically update column values in SQL Server
[ "sql", "sql-server", "sql-server-2008" ]
In a query I SELECT specific product data from the Book table, like title, author, year, etc. I want to get all of its authors, but it's **only showing the first one**. Product Table: ``` produto idproduto sku titulo others not important ``` Author Table: ``` autor id_autor nome_autor ``` Relational Table: ``` autorxprod idautoria idprod (FK) idautor (FK) ``` How do I show all authors that are related to Book1? Should I make another query inside the main query? --- EDITED WITH CODE. I'VE COMMENTED A BIT BECAUSE IT'S IN PORTUGUESE ``` $produto = $_GET['sku']; $idproduto = $_GET['idprod']; $query = "SELECT p.titulo AS titulo, p.imgprod AS imgprod, p.marca AS idmarca, p.sku AS sku, p.prodcateg AS prodcateg, p.ano AS ano, p.titeng AS titeng, p.codbarra AS codbarra, p.desc AS descricao, // product infos c.idcategoria AS idcategoria, c.nomecateg AS nomecateg, // category infos m.nome_marca AS nome_marca, // brand info a.nome_autor AS nome_autor, // author info a.id_autor AS idautor // author id FROM (produto p, autor a) LEFT JOIN categoria c // category join ON p.prodcateg = c.idcategoria LEFT JOIN marcas m // brand join ON p.marca = m.id_marca LEFT JOIN autorxprod axp // relational table ON p.idproduto = axp.idprod AND a.id_autor = axp.idautor WHERE p.sku LIKE '$produto'"; $result = $conn ->query ($query); if (!$result) die ($conn->error); $rows = $result->num_rows; for ($j = 0; $j <$rows; ++$j) { $result -> data_seek($j); $row = $result->fetch_array (MYSQLI_ASSOC);?> ----------- <?php echo $row['nome_autor']?> ```
Thanks for all the patience with me; as I'm learning, I was really lost. Here is the completed result for what I need, and it's working as expected. Much simpler than before. This code gets all authors of the related product and shows them as Author 1, Author 2, Author 3... ``` "SELECT ....get stuff... a.id_autor AS idautor, GROUP_CONCAT(DISTINCT a.nome_autor ORDER BY a.nome_autor SEPARATOR ', ') AS autor_list FROM produto p LEFT JOIN categoria c ON p.prodcateg = c.idcategoria LEFT JOIN marcas m ON p.marca = m.id_marca LEFT OUTER JOIN (autor a, autorxprod axp) ON axp.idautor = a.id_autor AND axp.idprod = '$produto' WHERE p.sku = '$produto' ```
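The `GROUP_CONCAT` approach can be sketched in Python's `sqlite3`, which has the same function (minus MySQL's `DISTINCT ... ORDER BY ... SEPARATOR` extras). The sample sku and author names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE produto (idproduto INTEGER, sku TEXT, titulo TEXT);
    CREATE TABLE autor (id_autor INTEGER, nome_autor TEXT);
    CREATE TABLE autorxprod (idautoria INTEGER, idprod INTEGER, idautor INTEGER);
    INSERT INTO produto VALUES (1, 'BK1', 'Book1');
    INSERT INTO autor VALUES (1, 'Author 1'), (2, 'Author 2'), (3, 'Author 3');
    INSERT INTO autorxprod VALUES (1, 1, 1), (2, 1, 2), (3, 1, 3);
""")

# One row per product; the junction-table authors are folded into autor_list
row = conn.execute("""
    SELECT p.titulo, GROUP_CONCAT(a.nome_autor, ', ') AS autor_list
    FROM produto p
    LEFT JOIN autorxprod axp ON axp.idprod = p.idproduto
    LEFT JOIN autor a ON a.id_autor = axp.idautor
    WHERE p.sku = ?
    GROUP BY p.idproduto""", ('BK1',)).fetchone()
print(row)  # the three authors concatenated into one row
```

Binding the sku with `?` instead of interpolating `$_GET['sku']` into the string also avoids the SQL-injection problem in the original PHP.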
I don't believe it is wrong but your sql is a little strange in mixing the join styles. Can I suggest this: ``` SELECT p.titulo AS titulo, p.imgprod AS imgprod, p.marca AS idmarca, p.sku AS sku, p.prodcateg AS prodcateg, p.ano AS ano, p.titeng AS titeng, p.codbarra AS codbarra, p.desc AS descricao, // product infos c.idcategoria AS idcategoria, c.nomecateg AS nomecateg, // category infos m.nome_marca AS nome_marca, // brand info a.nome_autor AS nome_autor, // author info a.id_autor AS idautor // author id FROM produto p LEFT JOIN categoria c ON p.prodcateg = c.idcategoria LEFT JOIN marcas m ON p.marca = m.id_marca LEFT JOIN autorxprod axp ON p.idproduto = axp.idprod JOIN autor a ON a.id_autor = axp.idautor -- above is what your code did, you could also use a left join like this and you might have null authors -- LEFT JOIN autor a ON a.id_autor = axp.idautor WHERE p.sku LIKE '$produto' ```
How query all results in a Relational Table?
[ "mysql", "sql", "select", "foreign-key-relationship" ]
Using SQL Server 2014 I have a table with four fields. Each record contains a separate donation. It's possible for two separate donations from the same donor to have the same amount and even the same date (say they gave the same amount twice on a given day - rare, but it happens). I'd like to return a set that has the full record for each donor's maximum contribution. In cases where there are two donations of that max amount, pick the most recent one. If there are two max contributions on the same day, return the one with the highest IDNumber (indicating the order in which they were entered into the system). My head exploded even thinking about this one, so I put it to the group. ``` CREATE TABLE [dbo].[Donations1]( [IDNumber] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [DonorIDNumber] [int] NOT NULL, [DTGCreated] [datetime2](7) NOT NULL, [TransactionDate] [datetime2](7) NOT NULL, [Amount] [numeric](18, 2) NOT NULL ) ```
Use ROW\_NUMBER to rank your records, so that the records you want get row number 1: ``` select * from ( select donations1.*, row_number() over (partition by donoridnumber order by amount desc, transactiondate desc, idnumber desc) as rn from donations1 ) donations where rn = 1; ```
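The `ROW_NUMBER` tie-breaking can be verified in Python's `sqlite3` (3.25+ for window functions); the sample donations are made up so that donor 7 has two equal maximum gifts on the same day:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Donations1 (IDNumber INTEGER, DonorIDNumber INTEGER,
                TransactionDate TEXT, Amount REAL)""")
conn.executemany("INSERT INTO Donations1 VALUES (?,?,?,?)",
                 [(1, 7, '2015-01-10', 100.0),
                  (2, 7, '2015-03-01', 250.0),  # donor 7's max, lower IDNumber
                  (3, 7, '2015-03-01', 250.0),  # same max, same day: higher IDNumber wins
                  (4, 8, '2015-02-02', 50.0)])

# Rank per donor: amount, then date, then IDNumber, all descending
winners = list(conn.execute("""
    SELECT IDNumber, DonorIDNumber, Amount FROM (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY DonorIDNumber
                   ORDER BY Amount DESC, TransactionDate DESC, IDNumber DESC) AS rn
        FROM Donations1)
    WHERE rn = 1 ORDER BY DonorIDNumber"""))
print(winners)  # [(3, 7, 250.0), (4, 8, 50.0)]
```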
You can do this with the `ROW_NUMBER()` function: ``` ;with cte AS (SELECT *, ROW_NUMBER() OVER(PARTITION BY DonorIDNumber ORDER BY Amount DESC, TransactionDate DESC, IDNumber DESC) AS RN FROM Donations1 ) SELECT * FROM cte WHERE RN = 1 ``` The `ROW_NUMBER()` function assigns a number to each row. `PARTITION BY` is optional, but used to start the numbering over for each value in a given group, ie: if you `PARTITION BY` `DonorIDNumber` then for each donor the numbering would start over at 1. `ORDER BY` of course is used to define how the counting should go, and is required in the ROW\_NUMBER() function.
Select the MAX Record with tie breaking logic
[ "sql", "t-sql" ]
I get an "incorrect syntax near '.'" error and can't seem to identify why in the following code: ``` select o.object_id, (select top 1 Zone from dbo.getzone(o.object_id)) as Zone from object as o ``` getzone is a table-valued function that works perfectly when I reference it directly, or if I put a specific object\_id in, but every time I try to make it dynamic, I get the syntax error. What am I missing?
You can't do that. You need to have a scalar version that returns only one result. It can be just a wrapper script if you want. Something like this: ``` CREATE FUNCTION [dbo].[getSingleZone](@object_id varchar(20)) RETURNS varchar(20) AS BEGIN DECLARE @Zone varchar(20) select @Zone = max(Zone) from dbo.getzone(@object_id) return @Zone END select o.object_id, dbo.getSingleZone(o.object_id) as Zone from object o ``` I don't know your data types, so I guessed.
Fix your alias ``` select o.object_id, (select top 1 Zone from dbo.getzone(o.object_id)) as Zone from object AS o ```
Transact SQL Subquery calling a function incorrect syntax
[ "sql", "sql-server", "t-sql", "syntax-error" ]
I'm currently working on a sample script which allows me to calculate the sum of the previous two rows and the current row. However, I would like to make the number '2' as a variable. I've tried declaring a variable, or directly casting in the query, yet a syntax error always pops up. Is there a possible solution? ``` DECLARE @myTable TABLE (myValue INT) INSERT INTO @myTable ( myValue ) VALUES ( 5) INSERT INTO @myTable ( myValue ) VALUES ( 6) INSERT INTO @myTable ( myValue ) VALUES ( 7) INSERT INTO @myTable ( myValue ) VALUES ( 8) INSERT INTO @myTable ( myValue ) VALUES ( 9) INSERT INTO @myTable ( myValue ) VALUES ( 10) SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) FROM @myTable ```
The dynamic SQL runs in its own scope, so the data has to be in a temp table (which the `EXEC` can see) rather than a table variable, and `@test` needs an explicit length (a bare `VARCHAR` is one character): ``` DECLARE @test VARCHAR(10) = '2' DECLARE @sqlCommand VARCHAR(1000) CREATE TABLE #temp (myValue INT) INSERT INTO #temp ( myValue ) VALUES ( 5) INSERT INTO #temp ( myValue ) VALUES ( 6) INSERT INTO #temp ( myValue ) VALUES ( 7) INSERT INTO #temp ( myValue ) VALUES ( 8) INSERT INTO #temp ( myValue ) VALUES ( 9) INSERT INTO #temp ( myValue ) VALUES ( 10) SET @sqlCommand = 'SELECT SUM(myValue) OVER (ORDER BY myValue ROWS BETWEEN ' + @test + ' PRECEDING AND CURRENT ROW) FROM #temp' EXEC (@sqlCommand) ```
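Frame bounds can't be supplied as bound parameters in most databases, so the usual workaround is exactly this kind of string splicing. A sketch in Python's `sqlite3` (coerce the value with `int()` before splicing so no untrusted text reaches the SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myValue INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?)", [(v,) for v in (5, 6, 7, 8, 9, 10)])

n = 2  # number of preceding rows: the "variable" in the frame clause
sql = (f"SELECT SUM(myValue) OVER (ORDER BY myValue "
       f"ROWS BETWEEN {int(n)} PRECEDING AND CURRENT ROW) "
       f"FROM myTable ORDER BY myValue")
sums = [r[0] for r in conn.execute(sql)]
print(sums)  # [5, 11, 18, 21, 24, 27]
```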
You can try something like this which does not use dynamic SQL. ``` DECLARE @myTable TABLE (myValue INT) INSERT INTO @myTable ( myValue ) VALUES ( 5) INSERT INTO @myTable ( myValue ) VALUES ( 6) INSERT INTO @myTable ( myValue ) VALUES ( 7) INSERT INTO @myTable ( myValue ) VALUES ( 8) INSERT INTO @myTable ( myValue ) VALUES ( 9) INSERT INTO @myTable ( myValue ) VALUES ( 10) DECLARE @prev_records INT = 2 ;WITH CTE as ( SELECT ROW_NUMBER() OVER(ORDER BY myValue) rn,myValue FROM @myTable ) SELECT (SELECT SUM(myValue) FROM CTE t2 WHERE t2.rn BETWEEN (t1.rn - @prev_records) AND t1.rn ) FROM CTE t1 ``` `SUM(myValue) OVER()` is best option however it does not allow you to pass previous N rows using a variable.
Dynamic row range when calculating moving sum/average using window functions (SQL Server)
[ "sql", "sql-server", "t-sql", "window-functions" ]
I have a table with multiple entries. One entry consists of a start datetime and an end datetime. I want to find clusters of entries such that: if an entry starts before the prior entry ends, then both are part of the cluster. It is a kind of overlapping problem. Example: ``` id start end 1 2007-04-11 15:34:02 2007-05-11 13:09:01 2 2007-06-13 15:42:39 2009-07-21 11:30:00 3 2007-11-26 14:30:02 2007-12-11 14:09:07 4 2008-02-14 08:52:11 2010-02-23 16:00:00 ``` I want output of ``` id start end 1 2007-04-11 15:34:02 2007-05-11 13:09:01 2-4 2007-06-13 15:42:39 2010-02-23 16:00:00 ``` I had a solution that sorts by start and then does some calculations with rownumber and lag/lead and so on. The problem is the special case where line 4 comes directly after line 2, so I don't recognize it... Is there a good solution in SQL here? Maybe I am missing something?
Ok here is some solution with recursive cte: ``` CREATE TABLE t ( id INT, s DATE, e DATE ); INSERT INTO t VALUES (1, '20070411', '20070511'), (2, '20070613', '20090721'), (3, '20071126', '20071211'), (4, '20080214', '20100223'); WITH cte AS ( SELECT id, s, e, id AS rid, s AS rs, e AS re FROM t WHERE NOT EXISTS( SELECT * FROM t ti WHERE t.s > ti.s AND t.s < ti.e ) UNION ALL SELECT t.*, c.rid, c.rs, CASE WHEN t.e > c.re THEN t.e ELSE c.re END FROM t JOIN cte c ON t.s > c.s AND t.s < c.e ) SELECT min(id) minid, max(id) maxid, min(rs) startdate, max(re) enddate FROM cte GROUP BY rid ``` Output: ``` minid maxid startdate enddate 1 1 2007-04-11 2007-05-11 2 4 2007-06-13 2010-02-23 ``` Fiddle <http://sqlfiddle.com/#!6/2d6d3/10>
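The recursive CTE ports to Python's `sqlite3` almost verbatim (SQLite spells it `WITH RECURSIVE`, and the dates are stored as ISO strings, which compare correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, s TEXT, e TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?,?)",
                 [(1, '2007-04-11', '2007-05-11'),
                  (2, '2007-06-13', '2009-07-21'),
                  (3, '2007-11-26', '2007-12-11'),
                  (4, '2008-02-14', '2010-02-23')])

# Anchors are intervals whose start lies inside no other interval; recursion
# then pulls in every interval starting inside the running cluster
clusters = list(conn.execute("""
    WITH RECURSIVE cte AS (
        SELECT id, s, e, id AS rid, s AS rs, e AS re
        FROM t
        WHERE NOT EXISTS (SELECT * FROM t ti WHERE t.s > ti.s AND t.s < ti.e)
        UNION ALL
        SELECT t.id, t.s, t.e, c.rid, c.rs,
               CASE WHEN t.e > c.re THEN t.e ELSE c.re END
        FROM t JOIN cte c ON t.s > c.s AND t.s < c.e)
    SELECT MIN(id), MAX(id), MIN(rs), MAX(re)
    FROM cte GROUP BY rid ORDER BY rid"""))
print(clusters)
# [(1, 1, '2007-04-11', '2007-05-11'), (2, 4, '2007-06-13', '2010-02-23')]
```

Row 4 is picked up via row 2's long interval even though it starts after row 3 ends, which is exactly the case the lag/lead approach missed.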
Try this... ``` select a.id ,a.start,a.end,b.id,b.start,b.end from tab a cross join tab b where a.start between b.start and b.end order by a.start, a.end ``` We will have to check each row against all other rows, just like using a loop and an inner loop. for this purpose we do a cross join. Then we will check the overlapping using the BETWEEN AND Operator
Find clusters of time intervals
[ "sql", "sql-server", "group-by", "lead" ]
I have a table in db2 that I want to sort. The table has four columns. The data looks like: ``` KEY Type GROUP_ID SpecialType 2 T02 700 S 2 T01 500 R 1 T02 300 B 2 T02 600 S 1 T02 400 S 2 T02 700 B 1 T01 400 R 1 T02 300 S 2 T02 600 B ``` The sort order has the following rules: * First sort key is `KEY` * Special Type R should be always come first in each Key group. * S may come after R and B element, but it should have the same group\_id * If S has the same group\_id with the R specialtype, it should come after this R Specialtype The result should look like in this form ``` KEY Type GROUP_ID SpecialType 1 T01 400 R 1 T02 400 S 1 T02 300 B 1 T02 300 S 2 T01 500 R 2 T02 600 B 2 T02 600 S 2 T02 700 B 2 T02 700 S ``` I tried this `ORDER BY`, but it doesn't work: ``` select * from myTable ORDER BY KEY, GROUP_ID, CASE WHEN SPECIALTYPE = 'R' THEN 1 WHEN SPECIALTYPE='B' THEN 2 ELSE 3 END ``` Any Idea??
This should return the list in the order you provided. It includes an analytic function in the sort which counts the number of times 'R' is included in each GroupId in order to have those groupIds listed first. ``` SELECT t.KeyId, t.TypeName, t.GroupId, t.SpecialType FROM @Table t ORDER BY KeyId, CASE WHEN SpecialType = 'R' THEN 1 ELSE 2 END, COUNT(CASE WHEN t.SpecialType = 'R' THEN SpecialType END) OVER (PARTITION BY t.GroupId) DESC, GroupId, SpecialType ```
You're almost there. Try `select * from myTable ORDER BY KEY, CASE WHEN SPECIALTYPE = 'R' THEN 1 ELSE 2 END, GROUP_ID`
How can I sort the data in a table over four columns?
[ "sql", "db2" ]
I tried to order by an integer column and expected it to sort from the larger to the smaller number, but it sorts 2-digit numbers before 3-digit ones. `sc` is the score I need to order by. The code I used: ``` SELECT * FROM `users` ORDER BY `sc` DESC ``` The results are: 47 3 102
You may use a silent conversion to integer: ``` select * from users order by sc+0 desc ```
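Both answers can be checked in Python's `sqlite3`, which applies the same silent text-to-number coercion for `sc + 0`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (sc TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [('47',), ('3',), ('102',)])

# Text ordering compares character by character: '4' > '3' > '1'
as_text = [r[0] for r in conn.execute("SELECT sc FROM users ORDER BY sc DESC")]

# Adding 0 coerces the text to a number before sorting
as_number = [r[0] for r in conn.execute("SELECT sc FROM users ORDER BY sc + 0 DESC")]

print(as_text)    # ['47', '3', '102'] -- the surprising lexicographic order
print(as_number)  # ['102', '47', '3']
```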
Looks like datetype of `sc` is char/varchar. You have to [cast](https://dev.mysql.com/doc/refman/5.0/en/cast-functions.html) it to an integer: ``` SELECT * FROM `users` ORDER BY cast(`sc` as UNSIGNED) DESC ```
Wrong OrderBy in Sql query ordered in opposite way
[ "", "mysql", "sql", "" ]
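The symptom points to `sc` being stored as text, which is easy to reproduce with SQLite via Python; the schema and data here are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (sc TEXT)")  # score stored as text: the assumed root cause
conn.executemany("INSERT INTO users VALUES (?)", [("47",), ("3",), ("102",)])

text_order = [r[0] for r in conn.execute("SELECT sc FROM users ORDER BY sc DESC")]
# CAST (or the `sc+0` trick from the answers) forces a numeric sort:
num_order = [r[0] for r in conn.execute(
    "SELECT sc FROM users ORDER BY CAST(sc AS INTEGER) DESC")]

print(text_order)  # ['47', '3', '102'] -- lexicographic, as in the question
print(num_order)   # ['102', '47', '3'] -- numeric, as intended
```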
I have a table: ``` +-------+-------+------+----------+ | Name | Price | Url | Adress | +-------+-------+------+----------+ | John | Smith | blah | London 1 | +-------+-------+------+----------+ | John | Smith | blah | London 1 | +-------+-------+------+----------+ | Jenny | Cole | blah | Prague 1 | +-------+-------+------+----------+ ``` and I want to get this: ``` +-------+------+------+----------+ | Jenny | Cole | blah | Prague 1 | +-------+------+------+----------+ ``` I was trying to write an SQLite command with *HAVING*, but got no result.
``` ;WITH C AS( SELECT 'John' AS Name , 'Smith' AS Price , 'blah' AS URL , 'London 1' AS Address UNION ALL SELECT 'John' AS Name , 'Smith' AS Price , 'blah' AS URL , 'London 1' AS Address UNION ALL SELECT 'Jenny','Cole' ,'blah' ,'Prague 1' UNION ALL SELECT 'Jenny','Cole' ,'blah' ,'Prague 2' UNION ALL SELECT 'Jenny','Cole' ,'blah' ,'Prague 3' ) SELECT Name, Price, URL, Address FROM C GROUP BY Name, Price, URL, Address HAVING COUNT(*) = 1; ```
``` Select * From your_table_name Group By Name, Price, Url, Adress Having Count(*) = 1; ```
I want to get only NON-duplicate row in sqlite
[ "", "sql", "sqlite", "duplicates", "unique", "" ]
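The `GROUP BY ... HAVING COUNT(*) = 1` idea can be checked directly on SQLite through Python, using the question's own rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (Name TEXT, Price TEXT, Url TEXT, Adress TEXT)")
rows = [("John", "Smith", "blah", "London 1"),
        ("John", "Smith", "blah", "London 1"),
        ("Jenny", "Cole", "blah", "Prague 1")]
conn.executemany("INSERT INTO foo VALUES (?,?,?,?)", rows)

# Keep only the groups that occur exactly once:
unique = conn.execute(
    """SELECT Name, Price, Url, Adress FROM foo
       GROUP BY Name, Price, Url, Adress
       HAVING COUNT(*) = 1""").fetchall()
print(unique)  # [('Jenny', 'Cole', 'blah', 'Prague 1')]
```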
I have a condition in a WHERE clause like: ``` LocationId=IFNULL(LocId,LocationId) ``` Here LocId is NULL, so IFNULL returns LocationId, which is also NULL in this case, and the query does not return anything. Please tell me the solution for it.
You could use the [null safe comparison](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to) operator: ``` LocationId <=> IFNULL(LocId, LocationId) ``` The difference between the regular `=` operator is that the `null` value is not treated as special, e.g. if both left and right hand argument are `null` it will yield `1`. See also [this answer](https://stackoverflow.com/questions/21927117/what-is-this-operator-in-mysql/21928508#21928508) for more examples.
mysql NULL is not equal to NULL, check here <https://dev.mysql.com/doc/refman/5.0/en/working-with-null.html> > Because the result of any arithmetic comparison with NULL is also > NULL, you cannot obtain any meaningful results from such comparisons. > > In MySQL, 0 or NULL means false and anything else means true. The > default truth value from a boolean operation is 1. so, your logic should be like this: ``` select ... from ... where ( (LocId is null and LocationId is null) OR (LocId is not null and LocationId = LocId) ) ``` you can write it shorter, usually Ids are positive values, so you could compare: ``` select ... from ... where ifnull(LocId, -1) = ifnull(LocationId, -1) ``` UPDATE: in mysql also exists non ansi standard [null-safe comparison](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_equal-to), which is shorter, check @Ja͢ck answer
Comparing NULL value in MYSql
[ "", "mysql", "sql", "" ]
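A hedged demo of the null-safe idea: SQLite has no `<=>`, but its `IS` operator performs the same null-safe comparison, so the accepted answer's point can be checked via Python. The rows below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (LocationId INTEGER, LocId INTEGER)")
conn.executemany("INSERT INTO t VALUES (?,?)",
                 [(None, None), (7, None), (7, 7), (7, 8)])

# Plain '=' treats NULL = NULL as unknown, so the all-NULL row is lost:
eq = conn.execute(
    "SELECT COUNT(*) FROM t WHERE LocationId = IFNULL(LocId, LocationId)"
).fetchone()[0]
# SQLite's null-safe operator is IS (MySQL spells it <=>):
safe = conn.execute(
    "SELECT COUNT(*) FROM t WHERE LocationId IS IFNULL(LocId, LocationId)"
).fetchone()[0]
print(eq, safe)  # 2 3
```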
I want to filter rows from a single table between two datetimes and those filtered rows should come under a single date, for example i want to get all the rows between (16 mar 2015 6AM) and (17 mar 2015 6AM) datetimes as (17 mar 2015) date and (17 mar 2015 6AM) and (18 mar 2015 6AM) datetimes as (18 mar 2015) date and so on. this is my demo table ``` Id Name LogTime 1 mj 2015-03-16 01:28:03.257 2 mj 2015-03-16 05:28:03.257 3 mj 2015-03-16 06:28:03.257 4 mj 2015-03-16 18:28:03.257 5 mj 2015-03-17 01:28:06.677 6 mj 2015-03-17 06:28:06.677 7 mj 2015-03-17 16:28:07.460 8 mj 2015-03-17 07:28:03.257 9 mj 2015-03-18 01:28:08.193 10 mj 2015-03-18 05:28:03.257 11 mj 2015-03-18 06:28:03.257 12 mj 2015-03-18 18:28:03.257 13 mj 2015-03-19 01:28:06.677 14 mj 2015-03-19 06:28:06.677 15 mj 2015-03-19 16:28:07.460 16 mj 2015-03-19 07:28:03.257 17 mj 2015-03-20 01:28:08.193 18 mj 2015-03-20 05:28:03.257 19 mj 2015-03-20 06:28:03.257 20 mj 2015-03-20 18:28:03.257 ``` below is the query that I am using. ``` DECLARE @i INT = 1 DECLARE @from DATETIME , @to DATETIME WHILE (@i <= 5) BEGIN SET @from = CONVERT(DATETIME, CONVERT(VARCHAR(10), DATEADD(D, -@i, '2015-03-20'), 102) + ' 6:00:00') SET @to = CONVERT(DATETIME, CONVERT(VARCHAR(10), DATEADD(D, -@i + 1, '2015-03-20'), 102) + ' 6:00:00') SELECT *, @to AS 'FetchedOn' FROM Biometric WHERE LogTime BETWEEN @from AND @to ORDER BY LogTime SET @i = @i + 1 END ``` that generates the following result. 
``` Id Name LogTime FetchedOn 14 mj 2015-03-19 06:28:06.677 2015-03-20 06:00:00.000 16 mj 2015-03-19 07:28:03.257 2015-03-20 06:00:00.000 15 mj 2015-03-19 16:28:07.460 2015-03-20 06:00:00.000 17 mj 2015-03-20 01:28:08.193 2015-03-20 06:00:00.000 18 mj 2015-03-20 05:28:03.257 2015-03-20 06:00:00.000 Id Name LogTime FetchedOn 11 mj 2015-03-18 06:28:03.257 2015-03-19 06:00:00.000 12 mj 2015-03-18 18:28:03.257 2015-03-19 06:00:00.000 13 mj 2015-03-19 01:28:06.677 2015-03-19 06:00:00.000 Id Name LogTime FetchedOn 6 mj 2015-03-17 06:28:06.677 2015-03-18 06:00:00.000 8 mj 2015-03-17 07:28:03.257 2015-03-18 06:00:00.000 7 mj 2015-03-17 16:28:07.460 2015-03-18 06:00:00.000 9 mj 2015-03-18 01:28:08.193 2015-03-18 06:00:00.000 10 mj 2015-03-18 05:28:03.257 2015-03-18 06:00:00.000 Id Name LogTime FetchedOn 3 mj 2015-03-16 06:28:03.257 2015-03-17 06:00:00.000 4 mj 2015-03-16 18:28:03.257 2015-03-17 06:00:00.000 5 mj 2015-03-17 01:28:06.677 2015-03-17 06:00:00.000 Id Name LogTime FetchedOn 1 mj 2015-03-16 01:28:03.257 2015-03-16 06:00:00.000 2 mj 2015-03-16 05:28:03.257 2015-03-16 06:00:00.000 ``` now I want to get the same result without using loop. I am using sql 2014, is there any alternate solution?
You can calculate FetchedOn based on the time part of the LogTime field: ``` SELECT B.*, dateadd(day, (iif(cast(LogTime as time) < '06:00:00', 0, 1)), cast(LogTime as date)) + cast('06:00:00' as datetime) as FetchedOn from Biometric B ORDER BY FetchedOn DESC, LogTime ``` **Update:** Simpler formula to calculate FetchedOn, and also a cast to `datetime` added for SQL2012+ compatibility. ``` SELECT B.*, cast(cast(dateadd(hour, +18, LogTime) as date) as datetime) + cast('06:00:00' as datetime) as FetchedOn from Biometric B ORDER BY FetchedOn DESC, LogTime ```
Here is some code. The Idea is to get all possible distinct ranges from test data. This is what CTE returns: ``` st ed 2015-03-15 06:00:00.000 2015-03-16 06:00:00.000 2015-03-16 06:00:00.000 2015-03-17 06:00:00.000 2015-03-17 06:00:00.000 2015-03-18 06:00:00.000 2015-03-18 06:00:00.000 2015-03-19 06:00:00.000 2015-03-19 06:00:00.000 2015-03-20 06:00:00.000 ``` After this it simple join on condition where data falls between those ranges: ``` DECLARE @t TABLE(ID INT, D DATETIME) INSERT INTO @t VALUES (1 ,'2015-03-16 01:28:03.257'), (2 ,'2015-03-16 05:28:03.257'), (3 ,'2015-03-16 06:28:03.257'), (4 ,'2015-03-16 18:28:03.257'), (5 ,'2015-03-17 01:28:06.677'), (6 ,'2015-03-17 06:28:06.677'), (7 ,'2015-03-17 16:28:07.460'), (8 ,'2015-03-17 07:28:03.257'), (9 ,'2015-03-18 01:28:08.193'), (10 ,'2015-03-18 05:28:03.257'), (11 ,'2015-03-18 06:28:03.257'), (12 ,'2015-03-18 18:28:03.257'), (13 ,'2015-03-19 01:28:06.677'), (14 ,'2015-03-19 06:28:06.677'), (15 ,'2015-03-19 16:28:07.460'), (16 ,'2015-03-19 07:28:03.257'), (17 ,'2015-03-20 01:28:08.193'), (18 ,'2015-03-20 05:28:03.257'), (19 ,'2015-03-20 06:28:03.257'), (20 ,'2015-03-20 18:28:03.257') ; WITH cte AS ( SELECT DISTINCT DATEADD(HOUR, -18, CAST(CAST(D AS DATE) AS DATETIME)) AS st , DATEADD(HOUR, 6, CAST(CAST(D AS DATE) AS DATETIME)) AS ed FROM @t ) SELECT t.ID, t.D, c.ed FROM cte c JOIN @t t ON t.D BETWEEN c.st AND c.ed ``` Output: ``` ID D ed 1 2015-03-16 01:28:03.257 2015-03-16 06:00:00.000 2 2015-03-16 05:28:03.257 2015-03-16 06:00:00.000 3 2015-03-16 06:28:03.257 2015-03-17 06:00:00.000 4 2015-03-16 18:28:03.257 2015-03-17 06:00:00.000 5 2015-03-17 01:28:06.677 2015-03-17 06:00:00.000 6 2015-03-17 06:28:06.677 2015-03-18 06:00:00.000 7 2015-03-17 16:28:07.460 2015-03-18 06:00:00.000 8 2015-03-17 07:28:03.257 2015-03-18 06:00:00.000 9 2015-03-18 01:28:08.193 2015-03-18 06:00:00.000 10 2015-03-18 05:28:03.257 2015-03-18 06:00:00.000 11 2015-03-18 06:28:03.257 2015-03-19 06:00:00.000 12 2015-03-18 18:28:03.257 
2015-03-19 06:00:00.000 13 2015-03-19 01:28:06.677 2015-03-19 06:00:00.000 14 2015-03-19 06:28:06.677 2015-03-20 06:00:00.000 15 2015-03-19 16:28:07.460 2015-03-20 06:00:00.000 16 2015-03-19 07:28:03.257 2015-03-20 06:00:00.000 17 2015-03-20 01:28:08.193 2015-03-20 06:00:00.000 18 2015-03-20 05:28:03.257 2015-03-20 06:00:00.000 ```
how to filter rows between dates without using loop in tsql
[ "", "sql", "sql-server", "t-sql", "sql-server-2014", "" ]
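The shift-by-18-hours idea from the accepted answer can be sketched in plain Python to show how a 6 AM-to-6 AM window collapses onto a single date; the function name is an assumption:

```python
from datetime import datetime, timedelta

def fetched_on(log_time):
    # Shifting forward by 18 hours moves everything logged after 6 AM
    # into the next calendar day; then pin the time part to 06:00.
    day = (log_time + timedelta(hours=18)).date()
    return datetime(day.year, day.month, day.day, 6, 0, 0)

print(fetched_on(datetime(2015, 3, 16, 5, 28)))  # before 6 AM -> 2015-03-16 06:00:00
print(fetched_on(datetime(2015, 3, 16, 6, 28)))  # after 6 AM  -> 2015-03-17 06:00:00
```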
I have the following tables: * **ROUTE (id, train\_number, station\_id, arrival\_time, departure\_time)** * **SERVICE (train\_number, train\_name)** * **STATION (id, name, city)** A row in the ROUTE table will contain the train number, its arrival and departure time (or one of these two if the station is the first/last one) and a station\_id. I would like to find out, for each train\_number, which path it takes. My difficulty is in figuring out the arrival and departure station name for each route. For example train 1337 might have the following path: * stationA -> stationB: + route(1, 1337, stationA, null, 1800) + route(2, 1337, stationB, 1900, 2000) * stationB -> stationC: + route(3, 1337, stationC, 2020, 2100) * stationC -> stationD: + route(4, 1337, stationD, 2120, 2200) I want to figure out these paths, i.e. stationA -> stationB -> stationC etc. How is it achievable (I'm using Oracle SQL if that matters)? For stationA (getting the start of the route) I could do the following: ``` SELECT name FROM STATION WHERE station.id = ROUTE.station_id AND ROUTE.arrival IS NULL ```
Setup: ``` CREATE TABLE STATION ( id NUMBER PRIMARY KEY, name VARCHAR2(20), city VARCHAR2(20) ); CREATE TABLE SERVICE ( train_number NUMBER PRIMARY KEY, train_name VARCHAR2(20) ); CREATE TABLE ROUTE ( id NUMBER PRIMARY KEY, train_number NUMBER REFERENCES SERVICE( train_number ), station_id NUMBER REFERENCES STATION( id ), arrival_time NUMBER, departure_time NUMBER ); INSERT INTO STATION VALUES ( 1, 'stationA', 'city1' ); INSERT INTO STATION VALUES ( 2, 'stationB', 'city1' ); INSERT INTO STATION VALUES ( 3, 'stationC', 'city2' ); INSERT INTO STATION VALUES ( 4, 'stationD', 'city3' ); INSERT INTO SERVICE VALUES ( 1337, 'train1' ); INSERT INTO SERVICE VALUES ( 1338, 'train2' ); INSERT INTO SERVICE VALUES ( 1339, 'train3' ); INSERT INTO ROUTE VALUES (1, 1337, 1, null, 1800); INSERT INTO ROUTE VALUES (2, 1337, 2, 1900, 2000); INSERT INTO ROUTE VALUES (3, 1337, 3, 2020, 2100); INSERT INTO ROUTE VALUES (4, 1337, 4, 2120, 2200); INSERT INTO ROUTE VALUES (5, 1338, 1, null, 1800); INSERT INTO ROUTE VALUES (6, 1338, 4, 1900, 2000); INSERT INTO ROUTE VALUES (7, 1338, 3, 2020, 2100); INSERT INTO ROUTE VALUES (8, 1338, 2, 2120, 2200); ``` Query: ``` WITH INDEXED_ROUTES AS ( SELECT train_number, s.name, ROW_NUMBER() OVER( PARTITION BY train_number ORDER BY COALESCE( arrival_time, 0 ), COALESCE( departure_time, 2400 ) ) AS IDX FROM ROUTE r INNER JOIN STATION s ON ( r.station_id = s.id ) ) SELECT train_number, SUBSTR( SYS_CONNECT_BY_PATH( NAME, ' -> ' ), 5 ) AS route FROM indexed_routes WHERE CONNECT_BY_ISLEAF = 1 START WITH IDX = 1 CONNECT BY PRIOR IDX + 1 = IDX AND PRIOR train_number = train_number; ``` or alternatively: ``` SELECT train_number, LISTAGG( s.name, ' -> ' ) WITHIN GROUP ( ORDER BY arrival_time ASC NULLS FIRST ) AS route FROM ROUTE r INNER JOIN STATION s ON ( r.station_id = s.id ) GROUP BY train_number; ``` Either will output: ``` TRAIN_NUMBER ROUTE ------------ -------------------------------------------- 1337 stationA -> stationB -> stationC -> stationD 1338 
stationA -> stationD -> stationC -> stationB ``` If you just want pairs of stations then use the `LAG` or `LEAD` analytic functions: ``` SELECT train_number, LAG( s.name ) OVER( PARTITION BY train_number ORDER BY COALESCE( arrival_time, 0 ), COALESCE( departure_time, 2400 ) ) AS prev_station_name, s.name AS station_name FROM ROUTE r INNER JOIN STATION s ON ( r.station_id = s.id ); ``` Output: ``` TRAIN_NUMBER PREV_STATION_NAME STATION_NAME ------------ -------------------- -------------------- 1337 (null) stationA 1337 stationA stationB 1337 stationB stationC 1337 stationC stationD 1338 (null) stationA 1338 stationA stationD 1338 stationD stationC 1338 stationC stationB ```
If there are no branches in the paths then the problem is simply to order the stations by arrival or departure time: ``` SELECT s.id, name FROM ROUTE r INNER JOIN STATION s ON r.station_id = s.id WHERE train_number = 1 ORDER BY departure_time ```
SQL query - getting arrival and departure stations based on time table
[ "", "sql", "oracle", "" ]
I have a table which can have a maximum of 5 rows and a minimum of 1 row. Now I need to store these rows in different variables like `@v1,@v2,@v3,@v4,@v5`. How can I do it? The table has only 1 column, `custid`. ``` CustId 100 200 300 400 ``` If the table contains only 1 row then `@v1` should have that value and the rest can be `null`.
You can use the following query: ``` SELECT @v1 = MAX(CASE WHEN rn = 1 THEN CustId END), @v2 = MAX(CASE WHEN rn = 2 THEN CustId END), @v3 = MAX(CASE WHEN rn = 3 THEN CustId END), @v4 = MAX(CASE WHEN rn = 4 THEN CustId END), @v5 = MAX(CASE WHEN rn = 5 THEN CustId END) FROM ( SELECT CustId, ROW_NUMBER() OVER (ORDER BY CustId) AS rn FROM mytable ) t ``` Using `ROW_NUMBER` you assign a distinct number to each record of your table. Then, using *conditional aggregates* in an outer query you can consume this number in order to set each separate variable. If there are less than 5 rows, the corresponding variables will be set to `NULL`. [**SQL Fiddle Demo**](http://sqlfiddle.com/#!6/7bce0/1)
If you have SQL Server 2012 or newer, you can try [LAG()](https://msdn.microsoft.com/en-us/library/hh231256.aspx) function. ``` SELECT @v1 = custID , @v2 = LAG(custID, 1) OVER (ORDER BY custID DESC) , @v3 = LAG(custID, 2) OVER (ORDER BY custID DESC) , @v4 = LAG(custID, 3) OVER (ORDER BY custID DESC) , @v5 = LAG(custID, 4) OVER (ORDER BY custID DESC) FROM yourTable ORDER BY CustID DESC ``` [SQLFiddle Demo](http://sqlfiddle.com/#!6/a96e5/7)
How to store fixed row values in a variable - SQL server
[ "", "sql", "sql-server", "" ]
I'm having an issue returning a value when the other values are `NULL`. My SQL is as follows: ``` select id , CASE WHEN value1 IS NOT NULL THEN value1 WHEN value1 IS NULL THEN value2 WHEN value1 IS NULL AND value2 IS NULL THEN value 3 END from ('table containing these values') ``` It successfully displays `value 2` when `value 1` is `NULL` but in the third `case` when both `value1` and `value2` are `NULL`, it has just been returning a blank. Is there some restructuring of the `CASE WHEN` statement needed?
I think you should just use `coalesce`: ``` select coalesce(value1, value2, value3) ``` The correct form for your `case` is: ``` SELECT (CASE WHEN value1 IS NOT NULL THEN value1 WHEN value2 IS NOT NULL THEN value2 ELSE value3 END) ```
The third case is never used, because the first and second cases are each others complements, so they cover all possibilities. Change the order, so that you check the third case before the second: ``` select id , CASE WHEN value1 IS NOT NULL THEN value1 WHEN value1 IS NULL AND value2 IS NULL THEN value3 WHEN value1 IS NULL THEN value2 END from ('table containing these values') ``` However, two of the conditions are superflous, as you know that `value1` is always `null` if it gets past the first condition: ``` select id , CASE WHEN value1 IS NOT NULL THEN value1 WHEN value2 IS NULL THEN value3 ELSE value2 END from ('table containing these values') ```
Nulls within a CASE Statement
[ "", "sql", "case", "case-when", "" ]
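The `COALESCE` suggestion from the accepted answer is runnable on SQLite via Python; the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v (id INTEGER, value1, value2, value3)")
conn.executemany("INSERT INTO v VALUES (?,?,?,?)",
                 [(1, "a", None, None),   # only value1 set
                  (2, None, "b", None),   # only value2 set
                  (3, None, None, "c")])  # only value3 set

# COALESCE returns the first non-NULL argument, covering all three cases:
out = conn.execute(
    "SELECT id, COALESCE(value1, value2, value3) FROM v ORDER BY id"
).fetchall()
print(out)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```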
I have 3 tables: ![enter image description here](https://i.stack.imgur.com/ZOGlm.jpg) This is a part of my query to join two tables (tbl\_ads and tbl\_inf\_adstate): ``` @state_id int=NULL AS BEGIN SET NOCOUNT ON; SELECT a.Id ,a.ad_title ,a.ad_brief ,a.ad_pic INTO #Results FROM [tbl_ads] a JOIN tbl_inf_adstate b ON a.Id=b.ad_id WHERE (b.state_id=@state_id OR @state_id IS NULL) AND a.ad_is_accept=1 AND a.ad_is_show=1 AND a.ad_is_slide=0 order by a.ad_type ASC,NEWID() ``` The `@state_id` parameter is optional. My problem is that the result has multiple repeated records of an ad, but I want only 1 record of each ad. This is the result: ![enter image description here](https://i.stack.imgur.com/2CaRT.jpg)
``` Try this one : SELECT Id ,ad_title ,ad_brief ,ad_pic INTO #Results FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY a.ID ORDER BY a.ID) AS rn, a.Id ,a.ad_title ,a.ad_brief ,a.ad_pic ,a.ad_type FROM [tbl_ads] a JOIN tbl_inf_adstate b ON a.Id=b.ad_id WHERE (b.state_id=@state_id OR @state_id IS NULL) AND a.ad_is_accept=1 AND a.ad_is_show=1 AND a.ad_is_slide=0 )x WHERE rn = 1 ORDER BY ad_type ASC,NEWID() ```
Just distinguish them with `DISTINCT` keyword: ``` SELECT DISTINCT a.Id ,a.ad_title ,a.ad_brief ,a.ad_pic INTO #Results FROM [tbl_ads] a JOIN tbl_inf_adstate b ON a.Id=b.ad_id WHERE (b.state_id=@state_id OR @state_id IS NULL) AND a.ad_is_accept=1 AND a.ad_is_show=1 AND a.ad_is_slide=0 ``` I have removed ordering because it makes no sense here. You should order your data when selecting from temp table not when inserting into that table.
Joining two tables with GROUP BY
[ "", "sql", "sql-server", "t-sql", "" ]
I have a table `Foo` like: ``` id, bar_1, bar_2, bar_3 ``` `bar_1, bar_2, bar_3` can contain a foreign key (integer) or `null`. I want to select all rows from `Foo` where the ids `2, 4` (both) are present among `bar_1, bar_2, bar_3`. A simple way would be to write a lot of `OR`s, but I believe there's a simpler way. I thought about something like `SELECT * FROM foo WHERE (2,4) IN ARRAY(bar_1, bar_2, bar_3)`. Is it possible?
Well, almost :-) ``` SELECT * FROM foo WHERE 2 IN (bar_1, bar_2, bar_3) AND 4 IN (bar_1, bar_2, bar_3); ```
``` SELECT * FROM foo WHERE bar_1 IN (2,4) OR bar_2 IN (2,4) OR bar_3 IN (2,4) ```
MySQL - cast columns to array and use IN statement on it
[ "", "mysql", "sql", "" ]
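The accepted answer's `value IN (col1, col2, col3)` trick can be checked on SQLite via Python; the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER, bar_1, bar_2, bar_3)")
conn.executemany("INSERT INTO foo VALUES (?,?,?,?)",
                 [(1, 2, 4, None),   # has both 2 and 4 -> matches
                  (2, 2, None, 7),   # only 2 -> no match
                  (3, 4, 2, 9),      # both -> matches
                  (4, None, None, None)])

# The IN list is a list of columns, not values; both 2 and 4 must appear:
ids = [r[0] for r in conn.execute(
    """SELECT id FROM foo
       WHERE 2 IN (bar_1, bar_2, bar_3)
         AND 4 IN (bar_1, bar_2, bar_3)
       ORDER BY id""")]
print(ids)  # [1, 3]
```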
``` SELECT rollID FROM [dbo].[dbo_Roll] a INNER JOIN [evt_Building Permits] b ON LEFT(a.rollnumber, LEN(a.rollnumber)-4) = b.rollnumber COLLATE DATABASE_DEFAULT ``` And I want to update the result to this table `[evt_Building Permits].roll_id = [dbo].[dbo_Roll].rollID`? How can I attain this?
Try this ``` UPDATE b SET b.roll_id = a.rollID FROM [dbo].[dbo_Roll] a INNER JOIN [evt_Building Permits] b ON LEFT(a.rollnumber, LEN(a.rollnumber)-4) = b.rollnumber COLLATE DATABASE_DEFAULT ```
Something like this. ``` update b set rollID = a.rollID from [dbo].[dbo_Roll] a inner join [evt_Building Permits] b on LEFT(a.rollnumber, LEN(a.rollnumber)-4) = b.rollnumber COLLATE DATABASE_DEFAULT ```
Update statement with select to another table
[ "", "sql", "sql-server", "t-sql", "sql-update", "" ]
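As a sketch under assumptions: SQL Server's `UPDATE ... FROM` join can also be written as a correlated subquery, which works on engines without UPDATE-FROM support. Demonstrated here with SQLite through Python; the sample data is invented and the SQL Server-specific `COLLATE DATABASE_DEFAULT` clause is omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dbo_Roll (rollID INTEGER, rollnumber TEXT)")
conn.execute("CREATE TABLE permits (roll_id INTEGER, rollnumber TEXT)")
conn.execute("INSERT INTO dbo_Roll VALUES (10, '123450000')")  # sample data (assumed)
conn.execute("INSERT INTO permits VALUES (NULL, '12345')")

# Correlated-subquery form of the UPDATE ... FROM join: match on the
# roll number with its last four characters stripped.
conn.execute(
    """UPDATE permits
       SET roll_id = (SELECT a.rollID FROM dbo_Roll a
                      WHERE substr(a.rollnumber, 1, length(a.rollnumber) - 4)
                            = permits.rollnumber)""")

updated = conn.execute("SELECT roll_id FROM permits").fetchone()
print(updated)  # (10,)
```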
I have a table called location which has one to many relations with rates table. The following is a description of the rate table: ``` +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | bigint(20) | NO | PRI | NULL | auto_increment | | location_id | bigint(20) | YES | MUL | NULL | | | price | double | NO | | NULL | | | start_date | datetime | NO | | NULL | | +-------------+--------------+------+-----+---------+----------------+ ``` How could I get the latest price which has most recent start\_date? I tried many queries the following query is one of them, but I still can't get the correct result: ``` SELECT * FROM rate r INNER JOIN (SELECT r2.start_date,r2.id AS id, r2.location_id, MIN(ABS(DATEDIFF(r2.start_date, NOW()))) AS version FROM rate r2 GROUP BY r2.location_id) o2 ON (o2.id = r.id AND r.start_date = o2.start_date); ``` What am I doing wrong? how can I fix it?
This question appears about once a week, just with different data. My latest answer is [here](https://stackoverflow.com/questions/29692750/return-only-most-recent-entry-per-id/29738276#29738276). Your query will look like this: ``` select l.*, r.price from location l join rate r on r.Location_ID = l.ID and r.Start_Date =( select Max( Start_Date ) from rate where Location_ID = r.Location_ID ); ``` You could allow future dates for Start\_Date, for time when a rate change is scheduled ahead of time. In that case, the subquery would need a slight change to pull out the *current* rate which is the latest rate still in the past: ``` ... where Location_ID = r.Location_ID and Start_Date <= CurDate() ); ``` Assuming the PK of the Rate table is (Location\_ID, Start\_Date), this index will provide very fast lookup of the exact row you're looking for. Yes, I can see the PK for the Rate table is a surrogate key. Bad idea. Have you ever or do you think you *will* ever access the Rate table using the surrogate key value? Look at your own query. You have the Location\_ID to access the Rate table -- and now the Start\_Date value. Those two fields form a natural unique key and they are also the values you will have on hand when you access the table. There is your best primary key.
I'll take a guess at your data structure. I will guess that you have pricing for many locations. I will guess that you have both past and future pricing for each location. So, if you want the so-called "current" price for each location, this query gets it. This will give back one row for each distinct location with the current price. ``` SELECT a.id, a.price, a.location, a.start_date FROM rate a JOIN ( SELECT location, MAX(start_date) start_date FROM rate WHERE start_date <= NOW() GROUP BY location ) b USING(location, start_date) ``` An index on `(start_date, location, price, id)` will accelerate this query nicely. It may seem strange to have to join the subquery back to the original table. But, if you want the `price` and `id` columns, that's necessary. Why? Because the subquery is an aggregate (`GROUP BY`) query that can only return a single `start_date` for each location. All it can do is identify the appropriate date. The `JOIN` recovers the other columns from the table.
Get Latest Price from Location table in SQL
[ "", "mysql", "sql", "" ]
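The correlated-subquery pattern from the accepted answer is easy to check with SQLite via Python; the sample rates are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE rate (
    id INTEGER PRIMARY KEY, location_id INTEGER,
    price REAL, start_date TEXT)""")
conn.executemany(
    "INSERT INTO rate (location_id, price, start_date) VALUES (?,?,?)",
    [(1, 100.0, "2015-01-01"),
     (1, 120.0, "2015-04-01"),   # latest for location 1
     (2, 80.0, "2015-02-01"),
     (2, 90.0, "2015-03-15")])   # latest for location 2

# One row per location: the price whose start_date is the location's maximum.
current = conn.execute(
    """SELECT r.location_id, r.price
       FROM rate r
       WHERE r.start_date = (SELECT MAX(start_date) FROM rate
                             WHERE location_id = r.location_id)
       ORDER BY r.location_id""").fetchall()
print(current)  # [(1, 120.0), (2, 90.0)]
```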
Hi. Below I have written queries to retrieve total hours, last month's total hours, and the current month's total hours. All of these are calculated from the hours column of the time_entries table and the spent_on column of the same table. Sorry if the table formatting is not good. The following three queries give correct results. Query#1 ``` select p.name, FORMAT(sum(te.hours), 2) AS totalhours from projects p left join time_entries te on p.id = te.project_id group by p.id ``` Result#1 name totalhours ---------------- --------------- Query#2 ``` select p.name, FORMAT(sum(te_last_mo.hours), 2) AS totalhours_last_mo from projects p left join time_entries te_last_mo on p.id=te_last_mo.project_id where te_last_mo.spent_on>=DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-01') and te_last_mo.spent_on<DATE_FORMAT(NOW() ,'%Y-%m-1') group by p.id ``` Result#2 name total\_hours\_last\_mo ---------------- ------------------------------ Query#3 ``` select p.name, FORMAT(sum(te_this_mo.hours), 2) AS totalhours_this_mo from projects p left join time_entries te_this_mo on p.id=te_this_mo.project_id where te_this_mo.spent_on>=DATE_FORMAT(NOW() ,'%Y-%m-01') and te_this_mo.spent_on<DATE_FORMAT(NOW() ,'%Y-%m-31') group by p.id ``` Result#3 name total\_hours\_this\_mo ---------------- ------------------------------ The above results and queries are correct. Now I want a result like this but am unable to figure it out. name total\_hours total\_hours\_last\_mo total\_hours\_this\_mo ------------ ----------------------------- ------------------------------- ------------------------------ To combine these three hour columns I wrote the query below, but it returns wrong results, maybe because of joining the same table three times.
``` select p.name, FORMAT(sum(te.hours), 2) AS totalhours, FORMAT(sum(te_last_mo.hours), 2) AS totalhours_last_mo, FORMAT(sum(te_this_mo.hours), 2) AS totalhours_this_mo from projects p left join time_entries te on p.id = te.project_id left join time_entries te_last_mo on p.id = te_last_mo.project_id and te_last_mo.spent_on>=DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-01') and te_last_mo.spent_on<DATE_FORMAT(NOW() ,'%Y-%m-1') left join time_entries te_this_mo on p.id = te_this_mo.project_id where te_this_mo.spent_on>=DATE_FORMAT(NOW() ,'%Y-%m-01') and te_this_mo.spent_on<DATE_FORMAT(NOW() ,'%Y-%m-31') group by p.id ``` Any solution would be appreciated. Thanks in advance.
You may run the query with joins and no aggregations to see how those joins are working when used together and why that will lead to wrong results. You can achieve the desired result by using one join and moving the criteria to aggregate calculations: ``` select p.name, FORMAT(sum(te.hours), 2) AS totalhours, FORMAT(sum( IF(spent_on>=DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-01') and spent_on<DATE_FORMAT(NOW() ,'%Y-%m-1'), hours, NULL) ), 2) AS totalhours_last_mo, FORMAT(sum( IF(spent_on>=DATE_FORMAT(NOW() ,'%Y-%m-01') and spent_on<DATE_FORMAT(NOW() ,'%Y-%m-31'), hours, NULL) ), 2) AS totalhours_this_mo from projects p left join time_entries te on p.id = te.project_id group by p.id ```
You could dump the data from the queries into temporary tables making sure the project is in each and then query those based on the project to pull it all together
How to retrieve sql results with different calculated values for same column with join and group by?
[ "", "mysql", "sql", "" ]
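The conditional-aggregate idea from the accepted answer can be run on SQLite via Python, with fixed date literals and `CASE WHEN` instead of MySQL's `IF()`; the dates and hours here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE time_entries (project_id INTEGER, hours REAL, spent_on TEXT)")
conn.executemany("INSERT INTO time_entries VALUES (?,?,?)",
                 [(1, 2.0, "2015-04-10"),   # "last month"
                  (1, 3.0, "2015-05-02"),   # "this month"
                  (1, 1.5, "2015-03-01")])  # earlier

# One join (here: one table scan), criteria moved into the aggregates:
row = conn.execute(
    """SELECT SUM(hours) AS total,
              SUM(CASE WHEN spent_on >= '2015-04-01'
                        AND spent_on <  '2015-05-01' THEN hours END) AS last_mo,
              SUM(CASE WHEN spent_on >= '2015-05-01'
                        AND spent_on <  '2015-06-01' THEN hours END) AS this_mo
       FROM time_entries WHERE project_id = 1""").fetchone()
print(row)  # (6.5, 2.0, 3.0)
```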
When i try to divide the first nested select statement by the count of a column, the query returns a zero. When I replace the "/" with a ",", I receive two different numbers so the return value shouldn't be a zero. Could this have something to do with there being zeros in the data set? Any help would be appreciated ``` declare @hospitalfk int; set @hospitalfk='1335' declare @startdate date; set @startdate='03/01/2014' declare @enddate date; set @enddate='02/28/2015' declare @reportname varchar(50); set @reportname='%Medicaid Billable Report%' declare @metasectionname varchar(100); set @metasectionname='Medicaid Primary' select ( select count(iscoded) from ope.ope.vwerali where iscoded=1 and hospitalfk=@hospitalfk and reportdate between @startdate and @enddate and reportname like @reportname and metasectionname like @metasectionname ) /count(iscoded) from ope.ope.vwerali where hospitalfk=@hospitalfk and reportdate between @startdate and @enddate and reportname like @reportname and metasectionname like @metasectionname ```
There is no need for two selects, simply use a Case statement for count. Also I think you are trying to avoid a Zero `0` in divisor. Also for date parameters pass date values as ANSI Date i.e `YYYYMMDD` ``` select count(CASE WHEN iscoded=1 THEN iscoded END) * 1.00 / NULLIF(count(iscoded), 0) from ope.ope.vwerali where hospitalfk=@hospitalfk and reportdate between @startdate and @enddate and reportname like @reportname and metasectionname like @metasectionname ```
``` declare @hospitalfk int; set @hospitalfk='1335' declare @startdate date; set @startdate='03/01/2014' declare @enddate date; set @enddate='02/28/2015' declare @reportname varchar(50); set @reportname='%Medicaid Billable Report%' declare @metasectionname varchar(100); set @metasectionname='Medicaid Primary' select ( select CONVERT(NUMERIC(10,2),count(iscoded)) from ope.ope.vwerali where iscoded=1 and hospitalfk=@hospitalfk and reportdate between @startdate and @enddate and reportname like @reportname and metasectionname like @metasectionname ) /CONVERT(NUMERIC(10,2),count(iscoded)) from ope.ope.vwerali where hospitalfk=@hospitalfk and reportdate between @startdate and @enddate and reportname like @reportname and metasectionname like @metasectionname ```
Query returning a zero when dividing in the select statement
[ "", "sql", "sql-server", "" ]
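The underlying integer-division behaviour is easy to reproduce; SQLite truncates integer division the same way SQL Server does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Two integer counts divide to zero:
int_div = conn.execute("SELECT 3 / 4").fetchone()[0]
# Promoting one operand to a decimal (the `* 1.00` in the answer) fixes it:
dec_div = conn.execute("SELECT 3 * 1.0 / 4").fetchone()[0]
print(int_div, dec_div)  # 0 0.75
```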
What I am asking is that this query: ``` select * from emp order by sal + 2000; ``` gives exactly the same result as this query: ``` select * from emp order by sal; ``` Then what is the difference in terms of calculation? What does `order by sal + 2000` mean? How does this influence the result?
**Those queries do not give the same results if SAL is stored as a string.** ``` create table emp as select '90000' sal from dual union all select '100000' sal from dual union all select '110000' sal from dual; select * from emp order by sal; SAL --- 100000 110000 90000 select * from emp order by sal + 2000; SAL --- 90000 100000 110000 ``` I wish this was unusual but stringly-typed data models are far too common. --- **Using an expression may change the execution plan and avoid an index.** ``` create table emp2(id number, sal number not null); select 1 id, 90000 sal from dual union all select 2 id, 100000 sal from dual union all select 3 id, 110000 sal from dual; create index emp2_idx on emp2(sal); explain plan for select * from emp2 order by sal; select * from table(dbms_xplan.display(format => 'basic')); Plan hash value: 1831800775 ------------------------------------------------ | Id | Operation | Name | ------------------------------------------------ | 0 | SELECT STATEMENT | | | 1 | TABLE ACCESS BY INDEX ROWID| EMP2 | | 2 | INDEX FULL SCAN | EMP2_IDX | ------------------------------------------------ explain plan for select * from emp2 order by sal + 2000; select * from table(dbms_xplan.display(format => 'basic')); Plan hash value: 2441141433 ----------------------------------- | Id | Operation | Name | ----------------------------------- | 0 | SELECT STATEMENT | | | 1 | SORT ORDER BY | | | 2 | TABLE ACCESS FULL| EMP2 | ----------------------------------- ``` There are usually other, better ways to avoid an index, but some people use this method.
Consider the following result set from your query, sorted (by default) in ascending order by the salary: ``` select name, salary from emp order by sal; +------+--------+ | name | salary | +------+--------+ | John | 10000 | | Mike | 15000 | | Joe | 30000 | +------+--------+ ``` Here is the result set using `ORDER BY sal + 2000`: ``` select name, salary, (salary+2000) as new_salary from emp order by new_salary; +------+--------+------------+ | name | salary | new_salary | +------+--------+------------+ | John | 10000 | 12000 | | Mike | 15000 | 17000 | | Joe | 30000 | 32000 | +------+--------+------------+ ``` Adding 2000 to the salary doesn't change the order you get.
What does it mean if I add something with some column in Order by clause?
[ "", "sql", "oracle", "" ]
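The stringly-typed case described in the accepted answer reproduces on SQLite via Python: `sal + 2000` coerces the text to a number and changes the sort. The schema below mirrors the answer's example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (sal TEXT)")  # salary stored as text
conn.executemany("INSERT INTO emp VALUES (?)",
                 [("90000",), ("100000",), ("110000",)])

as_text = [r[0] for r in conn.execute("SELECT sal FROM emp ORDER BY sal")]
# Adding 2000 forces numeric comparison, changing the order:
as_num = [r[0] for r in conn.execute("SELECT sal FROM emp ORDER BY sal + 2000")]
print(as_text)  # ['100000', '110000', '90000'] -- lexicographic
print(as_num)   # ['90000', '100000', '110000'] -- numeric
```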
Here's the query which works well: ``` select MIN(dbo.GetDiscountedPrice(P.AutoID)) FROM Product P where P.AutoID in (2910,2912,2820) ``` It returns the cheapest price among 2910,2912 and 2820 (AutoID is the primary key) BUT this query returns an error: ``` select MIN(dbo.GetDiscountedPrice(P.AutoID)) FROM Product P where P.AutoID in (SELECT AutoID FROM Product WHERE Category=2) ``` Error: > Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. This query actually returns 2910,2912,2820 ``` SELECT AutoID FROM Product WHERE Category=2 ``` Why is that so and how can I achieve the purpose with by getting products based on category? Function `GetDiscountedPrice`: It's just a very simple function which: 1. Gets the price 2. Gets the percentage 3. Returns the discounted price after minus off the percentage Note that percentage was taken from another table called `ProductDiscount`. ``` CREATE FUNCTION [dbo].[GetDiscountedPrice] ( @ProductAutoID bigint ) RETURNS decimal(10,4) AS BEGIN DECLARE @Result decimal(10,4) Declare @Price float = (select Price from Product where AutoID = @ProductAutoID) Declare @DP float = ISNULL ( (select DiscountPercentage from ProductDiscount where Product_AutoID = @ProductAutoID), 0) set @Result = @Price - (@Price * 0.01 * @DP) RETURN @Result END ```
If the statement `where P.AutoID in (SELECT AutoID FROM Product WHERE Category=2)` fails but `where P.AutoID in (2910,2912,2820)` works, then the problem is definitely in the function itself, because the first statement cannot produce the mentioned error. I.e. the first statement returns a different set than the second statement, and the data probably produces that error here: ``` Declare @Price float = (select Price from Product where AutoID = @ProductAutoID) ``` and here: ``` (select DiscountPercentage from ProductDiscount where Product_AutoID = @ProductAutoID) ``` Those subqueries are returning more than 1 row, so the error occurs.
try this code (i did not check it)- ``` DECLARE @tbl TABLE (id int IDENTITY(1,1),AutoID int) INSERT INTO @tbl SELECT AutoID FROM Product WHERE Category=2 DECLARE @x int=1 DECLARE @y int=1 WHILE @x <= (SELECT count (*) from @tbl) BEGIN set @y=(select AutoID FROM @tbl WHERE id=@x) select MIN(dbo.GetDiscountedPrice(@y)) FROM Product P SET @x=@x+1 END ```
SQL Server Scalar Function with "IN" Getting Subquery Returned more than 1 Error
[ "", "sql", "sql-server", "" ]
I have two tables, employee records (id, firstName, lastName) and violation (violatorId, supervisorId). My query pulls a list of violators and supervisors. I would like to rather than displaying the id of the person, display their first and last name. So this is my query so far... ``` SELECT emp.firstName as vFirstName, emp.lastName as vLastname, v.violatorId, v.supervisorId, emp.firstName as sFirstName, emp.lastName as sLastName FROM employee emp LEFT JOIN violator v ON violatorId = emp.id ``` So this will return the first and last name information for the violator, but not for the supervisor. Sample Output: ``` Violator vFirstName vLastName Supervisor sFirstName sLastName 1 Jon Smith 20 Jon Smith ``` Any help would be great
``` SELECT e1.firstName AS violatorFirstName, e1.lastName AS violatorLastName, e2.firstName AS supervisorFirstName, e2.lastName AS supervisorLastName FROM violator v LEFT JOIN employee e1 ON v.violatorId = e1.id LEFT JOIN employee e2 ON v.supervisorId = e2.id ```
You need to join once more to `employee`: ``` SELECT emp.firstName AS vFirstName, emp.lastName AS vLastName, v.violatorId AS vId, v.supervisorId AS sId, s.firstName AS sFirstName, s.lastName AS sLastName FROM employee emp LEFT JOIN violator v ON v.violatorId = emp.id LEFT JOIN employee s ON v.supervisorId = s.id ```
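Both answers hinge on joining `employee` twice under different aliases. A minimal, self-contained sketch of that pattern (SQLite via Python, with made-up rows):

```python
import sqlite3

# Toy schema following the question: employees plus a violator/supervisor link.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, firstName TEXT, lastName TEXT);
CREATE TABLE violator (violatorId INTEGER, supervisorId INTEGER);
INSERT INTO employee VALUES (1, 'Jon', 'Smith'), (20, 'Ann', 'Lee');
INSERT INTO violator VALUES (1, 20);
""")

# Join employee twice: once for the violator, once for the supervisor.
row = conn.execute("""
    SELECT v.firstName, v.lastName, s.firstName, s.lastName
    FROM violator vi
    LEFT JOIN employee v ON vi.violatorId  = v.id
    LEFT JOIN employee s ON vi.supervisorId = s.id
""").fetchone()
print(row)  # ('Jon', 'Smith', 'Ann', 'Lee')
```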
SQL Query pulling same field with different values
[ "", "mysql", "sql", "" ]
I have a simple MYSQL table with 2 columns: ``` id | status 1 | b ``` `status` is `ENUM('a','b','c')` I want to update this row so that if: 1) The current status is `c` then new value `b` 2) The current status is `b` then new value `a` 3) The current status is `a` then do nothing I tried the following but of course it is not working: ``` UPDATE table SET status = status - 1 WHERE id = '1' ```
``` UPDATE tablename SET status = case status when 'b' then 'a' when 'c' then 'b' end WHERE id = '1' and status <> 'a' ``` (The `<> 'a'` in the `WHERE` clause keeps 'a' rows out of the update.)
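A small runnable sketch of this CASE-based update (SQLite via Python; SQLite has no ENUM, so `status` is plain TEXT here — an assumption for the demo):

```python
import sqlite3

# Rows in each of the three states; only 'b' and 'c' should change.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'b'), (2, 'c'), (3, 'a')])

# CASE maps c -> b and b -> a; the WHERE clause leaves 'a' rows untouched.
conn.execute("""
    UPDATE t
    SET status = CASE status WHEN 'b' THEN 'a' WHEN 'c' THEN 'b' END
    WHERE status <> 'a'
""")
result = conn.execute("SELECT id, status FROM t ORDER BY id").fetchall()
print(result)  # [(1, 'a'), (2, 'b'), (3, 'a')]
```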
Try like this: ``` update table_name set status = case when status = 'c' then 'b' when status = 'b' then 'a' when status = 'a' then 'a' end where id = 1; ```
mysql update row to previous enum value
[ "", "mysql", "sql", "enums", "" ]
How can I update 2 of my rows randomly from 4 other rows that have this condition `F01=1 And F02=1` in a table with 5 rows? and i want the number of that rows that changed after the execute of the code. like this ``` ID F01 F02 ID F01 F02 1 0 1 1 0 1 2 1 1 2 A 1 3 1 1 --> 3 1 1 4 1 1 4 1 1 5 1 1 5 A 1 ``` And finally i have the number 2 and 5 in another table. The numbers are just for example. I can't use newid() because SQL says you can not use order by in subqueries. This is my code that doesn't work ``` Update Chess.dbo.Sheet1$ set F01 = A Where ID in (select top 2 ID from Chess.dbo.Sheet1$ where (select ID from Chess.dbo.Sheet1$ order by NEWID()) ```
``` ;WITH CTE AS ( SELECT TOP 30 * FROM TableName WHERE F01 = 1 AND F02 = 1 ORDER BY NEWID() ) UPDATE CTE SET ColumnName = 'NewValue' ``` OR ``` UPDATE t SET t.ColumnName = 'NewValue' FROM ( SELECT TOP 30 * FROM TableName WHERE F01 = 1 AND F02 = 1 ORDER BY NEWID() )t ``` ## Edit If you want the updated row IDs once the randomly selected rows have been updated, you can do the following: ``` DECLARE @Table TABLE (ID INT); ;WITH CTE AS ( SELECT TOP 30 * FROM TableName WHERE F01 = 1 AND F02 = 1 ORDER BY NEWID() ) UPDATE CTE SET ColumnName = 'NewValue' OUTPUT inserted.ID INTO @Table(ID) -- now select from the table variable SELECT * FROM @Table ```
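The same idea — pick N random eligible rows, update exactly those, and keep their IDs — can be sketched portably like this (SQLite via Python; `RANDOM()`/`LIMIT` stand in for `NEWID()`/`TOP`, and the IDs are captured before the update instead of via `OUTPUT`):

```python
import sqlite3

# Five rows as in the question; rows 2-5 satisfy the condition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sheet (id INTEGER PRIMARY KEY, f01 TEXT, f02 INTEGER)")
conn.executemany("INSERT INTO sheet VALUES (?, ?, ?)",
                 [(1, '0', 1), (2, '1', 1), (3, '1', 1), (4, '1', 1), (5, '1', 1)])

# Capture the randomly chosen IDs first, then update exactly those rows.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM sheet WHERE f01 = '1' AND f02 = 1 ORDER BY RANDOM() LIMIT 2")]
conn.executemany("UPDATE sheet SET f01 = 'A' WHERE id = ?", [(i,) for i in ids])

changed = conn.execute("SELECT COUNT(*) FROM sheet WHERE f01 = 'A'").fetchone()[0]
print(len(ids), changed)  # 2 2  (which two IDs varies per run)
```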
``` DECLARE @ids AS TABLE(id INT) INSERT INTO @ids(id) SELECT TOP(2) ID FROM Chess.dbo.Sheet1$ WHERE F01 = 1 AND F02 = 1 ORDER BY NEWID() UPDATE Chess.dbo.Sheet1$ SET F01 = 'A' WHERE ID IN ( SELECT id FROM @ids) ```
How I can update 2 of my rows randomly, that have this codition F01=1 And F03=1 in SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I posted a question at the MSDN (Brazilian) site. My problem is that I want to make a select on a table that returns the first and last number (from a column) WITHOUT INTERRUPTION and WITHOUT CHANGES on the other columns related to the search. **QUESTION UPDATE (2015-04-23)** I've updated my own post on MSDN as on this [link](https://social.msdn.microsoft.com/Forums/pt-BR/b23b33b0-7b06-44c5-9145-63b4523cf5c3/select-de-linhas-consecutivas-incrementais-reaberto?forum=520#a98d4a2e-0e0c-4c7d-b1c8-7f29a7e71ff0). I made a SELECT that gives me almost what I want - but 1 line for the first and 1 line for the final number of each range. It breaks the ranges perfectly: when the `Diag` column is `O` (the letter O) it means that there is no other number in the range (O stands for "Only"), so the number would be both initial and final. When the `Diag` column is `F` it is the first number of the range, and due to the `ORDER BY` arguments, the last number of that range (as you may guess, with an `L` for LAST in the `Diag` column) is right below it, on the next line. My task now is to create 1 more column, repeat the `enNumber` field when `Diag` is `O`, place the next line's `enNumber` value when the `Diag` column is `F`, and eliminate all lines with `L` in the `Diag` column.
``` SELECT [EnT].enPrefix, [EnT].enNumber, [EnT].enOrder, [EnT].enDate, [EnT].enNfe, [EnT].enClient, (CASE WHEN [EnN].enNumber IS NULL THEN CASE WHEN [EnP].enNumber IS NULL THEN 'O' ELSE 'L' END ELSE 'F' END) AS [Diag] FROM SerialsDB.dbo.Entries AS [EnT] LEFT OUTER JOIN SerialsDB.dbo.Entries AS [EnN] ON ([EnT].enProduct = [EnN].enProduct) AND ([EnN].enNumber = [EnT].enNumber + 1) AND ([EnN].enOrder = [EnT].enOrder) AND ([EnN].enClient = [EnT].enClient) AND ([EnN].enNfe = [EnT].enNfe) AND ([EnN].enDate = [EnT].enDate) LEFT OUTER JOIN SerialsDB.dbo.Entries AS [EnP] ON ([EnT].enProduct = [EnP].enProduct) AND ([EnP].enNumber = [EnT].enNumber - 1) AND ([EnP].enOrder = [EnT].enOrder) AND ([EnP].enClient = [EnT].enClient) AND ([EnP].enNfe = [EnT].enNfe) AND ([EnP].enDate = [EnT].enDate) WHERE ([EnT].enOrder IS NOT NULL) AND ([EnT].enClient IS NOT NULL) AND (([EnP].enNumber IS NULL) OR ([EnN].enNumber IS NULL)) ORDER BY [EnT].enOrder ASC, [EnT].enDate ASC, [EnT].enNfe ASC, [EnT].enPrefix ASC, [EnT].enNumber ASC ``` Help!
I've wrote a query and got the results i was trying to find. I still don't like it very much as i think it will have a big cost on the database to run it, and i'd like to find one simpler. But anyway, the "problem" it self is resolved. The resulting query is writen as follows: ``` SELECT [Pre].[pfPrefix], [Pro].[prId], [Pro].[prName], (CASE WHEN [EnP].[enNumber] IS NULL THEN [EnT].[enNumber] ELSE (SELECT TOP 1 [iET].[enNumber] FROM [SerialsDB].[dbo].[Entries] AS [iET] LEFT OUTER JOIN [SerialsDB].[dbo].[Entries] AS [iEP] ON (([iEP].[enProduct] IS NULL AND [iET].[enProduct] IS NULL) OR ([iEP].[enProduct] = [iET].[enProduct])) AND (([iEP].[enOrder] IS NULL AND [iET].[enOrder] IS NULL) OR ([iEP].[enOrder] = [iET].[enOrder])) AND (([iEP].[enClient] IS NULL AND [iET].[enClient] IS NULL) OR ([iEP].[enClient] = [iET].[enClient])) AND (([iEP].[enNfe] IS NULL AND [iET].[enNfe] IS NULL) OR ([iEP].[enNfe] = [iET].[enNfe])) AND (([iEP].[enDate] IS NULL AND [iET].[enDate] IS NULL) OR ([iEP].[enDate] = [iET].[enDate])) AND (([iEP].[enAuth] IS NULL AND [iET].[enAuth] IS NULL) OR ([iEP].[enAuth] = [iET].[enAuth])) AND (([iEP].[enStatus] IS NULL AND [iET].[enStatus] IS NULL) OR ([iEP].[enStatus] = [iET].[enStatus])) AND ([iEP].[enNumber] = ([iET].[enNumber] - 1)) WHERE (([iET].[enProduct] IS NULL AND [EnT].[enProduct] IS NULL) OR ([iET].[enProduct] = [EnT].[enProduct])) AND (([iET].[enOrder] IS NULL AND [EnT].[enOrder] IS NULL) OR ([iET].[enOrder] = [EnT].[enOrder])) AND (([iET].[enDate] IS NULL AND [EnT].[enDate] IS NULL) OR ([iET].[enDate] = [EnT].[enDate])) AND (([iET].[enClient] IS NULL AND [EnT].[enClient] IS NULL) OR ([iET].[enClient] = [EnT].[enClient])) AND (([iET].[enNfe] IS NULL AND [EnT].[enNfe] IS NULL) OR ([iET].[enNfe] = [EnT].[enNfe])) AND (([iET].[enAuth] IS NULL AND [EnT].[enAuth] IS NULL) OR ([iET].[enAuth] = [EnT].[enAuth])) AND (([iET].[enStatus] IS NULL AND [EnT].[enStatus] IS NULL) OR ([iET].[enStatus] = [EnT].[enStatus])) AND ([iET].[enNumber] <= 
[EnP].[enNumber]) AND ([iEP].[enNumber] IS NULL) ORDER BY [iET].[enNumber] DESC) END) AS [Initial], [EnT].[enNumber] AS [Final], [EnT].[enOrder], [EnT].[enDate], [EnT].[enNfe], [EnT].[enClient], [Cli].[clGroup], [EnT].[enAuth] FROM [SerialsDB].[dbo].Entries AS [EnT] LEFT OUTER JOIN [SerialsDB].[dbo].[Entries] AS [EnN] ON ([EnN].[enProduct] = [EnT].[enProduct]) AND ([EnN].[enOrder] = [EnT].[enOrder]) AND ([EnN].[enClient] = [EnT].[enClient]) AND (([EnN].[enNfe] IS NULL AND [EnT].[enNfe] IS NULL) OR ([EnN].[enNfe] = [EnT].[enNfe])) AND ([EnN].[enDate] = [EnT].[enDate]) AND ([EnN].[enStatus] = [EnT].[enStatus]) AND ([EnN].[enAuth] = [EnT].[enAuth]) AND ([EnN].[enNumber] = ([EnT].[enNumber] + 1)) LEFT OUTER JOIN [SerialsDB].[dbo].[Entries] AS [EnP] ON ([EnP].[enProduct] = [EnT].[enProduct]) AND ([EnP].[enOrder] = [EnT].[enOrder]) AND ([EnP].[enClient] = [EnT].[enClient]) AND (([EnP].[enNfe] IS NULL AND [EnT].[enNfe] IS NULL) OR ([EnP].[enNfe] = [EnT].[enNfe])) AND ([EnP].[enDate] = [EnT].[enDate]) AND ([EnP].[enStatus] = [EnT].[enStatus]) AND ([EnP].[enAuth] = [EnT].[enAuth]) AND ([EnP].[enNumber] = ([EnT].[enNumber] - 1)) LEFT OUTER JOIN [SerialsDB].[dbo].[Prefixes] AS [Pre] ON ([EnT].[enPrefix] = [Pre].[pfId]) LEFT OUTER JOIN [SerialsDB].[dbo].[Products] AS [Pro] ON ([EnT].[enProduct] = [Pro].[prId]) LEFT OUTER JOIN [SerialsDB].[dbo].[Clients] AS [Cli] ON ([EnT].[enClient] = [Cli].[clId]) WHERE ([EnT].[enOrder] IS NOT NULL) AND ([EnT].[enClient] IS NOT NULL) AND ([EnN].[enNumber] IS NULL) AND ([EnT].[enStatus] = 4) AND ([EnT].[enAuth] IS NOT NULL) ``` Thank you Amir for the great inputs, and for Stamen for the suggestion.
This is not going to be a simple query using a `GROUP BY`. For simplicity and to show the basic idea I assume `Entries` table just got two columns: `enNumber` and `enProduct`. You can add all other columns and criteria later. ``` CREATE TABLE [Entries]( [enNumber] [int] NOT NULL, [enProduct] [int] NOT NULL, CONSTRAINT [PK_Entries] PRIMARY KEY CLUSTERED ([enNumber] ASC) ON [PRIMARY] ) ON [PRIMARY] GO INSERT INTO [Entries] VALUES (1000309,2768) INSERT INTO [Entries] VALUES (1000310,2768) INSERT INTO [Entries] VALUES (1000311,2768) INSERT INTO [Entries] VALUES (1000312,2768) INSERT INTO [Entries] VALUES (1000313,2768) INSERT INTO [Entries] VALUES (1000314,2768) INSERT INTO [Entries] VALUES (1000315,2768) INSERT INTO [Entries] VALUES (1000316,2768) INSERT INTO [Entries] VALUES (1000317,2768) /* interrupt */ INSERT INTO [Entries] VALUES (1001388,3328) INSERT INTO [Entries] VALUES (1001389,3328) INSERT INTO [Entries] VALUES (1001390,3328) INSERT INTO [Entries] VALUES (1001391,3328) INSERT INTO [Entries] VALUES (1001392,3328) /* change */ INSERT INTO [Entries] VALUES (1001393,3743) INSERT INTO [Entries] VALUES (1001394,3743) INSERT INTO [Entries] VALUES (1001395,3743) INSERT INTO [Entries] VALUES (1001396,3743) /* change */ INSERT INTO [Entries] VALUES (1001397,3328) INSERT INTO [Entries] VALUES (1001398,3328) INSERT INTO [Entries] VALUES (1001399,3328) INSERT INTO [Entries] VALUES (1001400,3328) /* interrupt */ INSERT INTO [Entries] VALUES (1003000,2768) /* change */ INSERT INTO [Entries] VALUES (1003001,3328) INSERT INTO [Entries] VALUES (1003002,3328) INSERT INTO [Entries] VALUES (1003003,3328) GO ``` Create two views, one of them giving all the records that act as the initial of a group and another view for the finals. A record is an initial/final record if the previous/next `enNumber` does not exist (an interrupt) or has different values (a change). 
``` CREATE VIEW [Initials] AS SELECT E.enNumber FROM Entries E LEFT JOIN Entries E2 on E2.enNumber = E.enNumber - 1 AND E2.enProduct= E.enProduct WHERE E2.enNumber IS NULL GO CREATE VIEW [Finals] AS SELECT E.enNumber FROM Entries E LEFT JOIN Entries E2 on E2.enNumber = E.enNumber + 1 AND E2.enProduct= E.enProduct WHERE E2.enNumber IS NULL GO ``` Each record in `Initials` has a counterpart in `Finals` with an `enNumber` equal or greater than its `enNumber`. ``` SELECT i.enNumber AS Initial, E.enProduct FROM Initials i LEFT JOIN Entries E ON E.enNumber = i.enNumber SELECT f.enNumber AS Final, E.enProduct FROM Finals f LEFT JOIN Entries E ON E.enNumber = f.enNumber Initial enProduct 1000309 2768 1001388 3328 1001393 3743 1001397 3328 1003000 2768 1003001 3328 Final enProduct 1000317 2768 1001392 3328 1001396 3743 1001400 3328 1003000 2768 1003003 3328 ``` You can combine these two views in different ways to achieve the desired results: ``` SELECT i.enNumber AS initial, MIN(f.enNumber) AS final, E.enProduct FROM Initials i LEFT JOIN finals f ON f.enNumber >= i.enNumber LEFT JOIN Entries E ON E.enNumber = i.enNumber GROUP BY i.enNumber, E.enProduct ORDER BY i.enNumber initial final enProduct 1000309 1000317 2768 1001388 1001392 3328 1001393 1001396 3743 1001397 1001400 3328 1003000 1003000 2768 1003001 1003003 3328 ```
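On engines with window functions there is also the classic "gaps and islands" shortcut — a different technique from the neighbor self-joins above: `n - ROW_NUMBER() OVER (ORDER BY n)` is constant within each consecutive run. A toy sketch (SQLite ≥ 3.25 via Python, grouping by the number only; the real query would add the other columns to the partitioning):

```python
import sqlite3

# Numbers with two gaps: runs are 1-3, 5-6, and the lone 9.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (n INTEGER)")
conn.executemany("INSERT INTO entries VALUES (?)",
                 [(1,), (2,), (3,), (5,), (6,), (9,)])

# n - ROW_NUMBER() is the same value for every row of a consecutive run,
# so grouping by it yields one row per island with its min and max.
islands = conn.execute("""
    SELECT MIN(n), MAX(n)
    FROM (SELECT n, n - ROW_NUMBER() OVER (ORDER BY n) AS grp FROM entries)
    GROUP BY grp
    ORDER BY 1
""").fetchall()
print(islands)  # [(1, 3), (5, 6), (9, 9)]
```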
SQL Server SELECT Sequential Data, Breaking Lines at any Column Diference
[ "", "sql", "sql-server-2008", "" ]
Tables in my db: ``` CREATE TABLE items ( id serial NOT NULL, user_id integer NOT NULL, status smallint ); CREATE TABLE phones ( phone character varying, users integer[] ); ``` My query to find phone numbers where status = 1: ``` SELECT phones.phone, COUNT(*) AS "count" FROM phones,items WHERE phones.phone = ANY (Array['7924445544', '8985545444']) AND items.user_id = ALL (phones.users) AND items.status = 1 GROUP BY phones.phone; ``` Query output: ``` phone | count ------------+------ 7924445588 | 3 ``` Desired output, with ZERO counts: ``` phone | count ------------+------- 8985545444 | 0 7924445544 | 3 ``` How to get that?
It's a bit tricky to create non-existing rows. (There are billions of them, at least...) Do a `UNION ALL` with a select providing the result you want if that phone number doesn't exist. ``` <current query> UNION ALL select '8985545444',0 from one_row_table where not exists (select 1 from phones where phones.phone = '8985545444') ``` **EDIT:** If the phone numbers do exist, but don't fulfill the `WHERE` clause conditions, use a correlated sub-select to do the count: ``` SELECT phones.phone, (select count(*) from items where items.status = 1 and items.user_id = ALL (phones.users)) as "Count" FROM phones WHERE phones.phone = ANY (Array['7924445544', '8985545444']) ```
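The correlated-count idea generalizes: drive the query from the `phones` side, so a phone with no qualifying items still yields 0. A runnable sketch (SQLite via Python; SQLite has no arrays, so `users` is modeled as a single join column here — an assumption for the demo):

```python
import sqlite3

# One phone with three matching items, one phone with none that qualify.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE phones (phone TEXT, user_id INTEGER);
CREATE TABLE items  (user_id INTEGER, status INTEGER);
INSERT INTO phones VALUES ('7924445544', 1), ('8985545444', 2);
INSERT INTO items  VALUES (1, 1), (1, 1), (1, 1), (2, 0);
""")

# The correlated subquery counts per phone; no match simply gives 0,
# instead of the phone dropping out of the result entirely.
rows = conn.execute("""
    SELECT p.phone,
           (SELECT COUNT(*) FROM items i
            WHERE i.status = 1 AND i.user_id = p.user_id) AS cnt
    FROM phones p
    ORDER BY p.phone
""").fetchall()
print(rows)  # [('7924445544', 3), ('8985545444', 0)]
```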
You shouldn't do that in the query. However it is rather easy to do it if you want to: ``` WITH phone_seek AS( SELECT '8985545444' AS phone UNION ALL SELECT '7924445588' ) SELECT phone_seek.phone, COUNT(items.id) AS "count" FROM phone_seek JOIN phones ON phone_seek.phone = phones.phone CROSS JOIN items WHERE items.user_id = ALL (phones.users) AND items.status = 1 GROUP BY phone_seek.phone; ```
SELECT COUNT(*) GROUP BY out zero rows
[ "", "sql", "database", "postgresql", "" ]
Okay so I have a table and in one column I have some data and the second column the average of the data. Example ``` id|Data|avg 1 |20 |20 2 |4 |12 3 |18 |14 ``` How do I populate the avg column on insert with the running average of the Data column using T-SQL? EDIT: Sorry guys, this was actually a stupid mistake I made. I assumed I had SQL 2014 but after trying Stephan's code and getting some errors, I went back to confirm and realize I use SQL 2008. Sorry for the misinformation. I have also updated the tags
SQL Server <=2008 doesn't have the `OVER(ORDER BY ...)` clause for aggregate functions. ``` CREATE TRIGGER trg_running_avg ON myTable AFTER INSERT, UPDATE, DELETE AS BEGIN UPDATE old SET avg = new_avg FROM myTable old CROSS APPLY ( SELECT AVG(Data) AS new_avg FROM myTable WHERE ID <= old.ID ) new --Skip the full table update. Start from the lowest ID that was changed. WHERE id >= (SELECT MIN(id) FROM (SELECT ID FROM inserted UNION ALL SELECT ID FROM deleted) t) END GO ``` Use a view for this if you can. It's a bad design for a change in one row to invalidate data stored in other rows. Rows should represent independent facts.
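On engines that do have `OVER (ORDER BY …)` for aggregates, the running average needs no stored column at all. An illustrative sketch (SQLite ≥ 3.25 via Python, using the sample data from the question):

```python
import sqlite3

# The three rows from the question's example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 20), (2, 4), (3, 18)])

# With ORDER BY, the default window frame runs from the first row up to
# the current row, which is exactly a running average.
rows = conn.execute("""
    SELECT id, data, AVG(data) OVER (ORDER BY id) AS running_avg
    FROM t
""").fetchall()
print(rows)  # [(1, 20, 20.0), (2, 4, 12.0), (3, 18, 14.0)]
```

Computing the value on read (e.g. in a view) sidesteps the design problem the trigger answer points out: no stored row is invalidated by changes to other rows.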
On insert, assuming `id` is an identity and you are just putting in `data`: ``` insert into table t(id, data, avg) select @data, @data * (1.0 / n) + avg * (n - 1.0)/n from (select count(*) as cnt, avg(data) as avg from t ) t; ``` In SQL Server 2012+, it is easy enough just to get it on output: ``` select t.*, avg(data) over (order by id) as cume_avg from table t ``` Prior to SQL Server 2012, you would do this with a correlated subquery or `apply`: ``` select t.*, (select avg(data) from table t2 where t2.id <= t.id ) as cume_avg from table t; ``` Here performance might suffer if the table is large. However, an index on `id, data` would help.
Get the running average of a column in T-SQL
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have the following table in Postgres that has overlapping data in the two columns `a_sno` and `b_sno`. ``` create table data ( a_sno integer not null, b_sno integer not null, PRIMARY KEY (a_sno,b_sno) ); insert into data (a_sno,b_sno) values ( 4, 5 ) , ( 5, 4 ) , ( 5, 6 ) , ( 6, 5 ) , ( 6, 7 ) , ( 7, 6 ) , ( 9, 10) , ( 9, 13) , (10, 9 ) , (13, 9 ) , (10, 13) , (13, 10) , (10, 14) , (14, 10) , (13, 14) , (14, 13) , (11, 15) , (15, 11); ``` As you can see from the first 6 rows, the data values 4, 5, 6 and 7 in the two columns intersect/overlap and need to be partitioned into a group. Same goes for rows 7-16 and rows 17-18, which will be labeled as groups 2 and 3 respectively. The resulting output should look like this: ``` group | value ------+------ 1 | 4 1 | 5 1 | 6 1 | 7 2 | 9 2 | 10 2 | 13 2 | 14 3 | 11 3 | 15 ```
Assuming that all pairs exist in their mirrored combination as well, e.g. `(4,5)` and `(5,4)`. But the following solutions work without mirrored dupes just as well. ### Simple case If all connections can be lined up in a **single ascending sequence** and complications like the ones I added in the [fiddle](http://sqlfiddle.com/#!15/55cc22/4) are not possible, we can use this solution without duplicates in the rCTE: I start by getting the minimum `a_sno` per group, with the minimum associated `b_sno`: ``` SELECT row_number() OVER (ORDER BY a_sno) AS grp , a_sno, min(b_sno) AS b_sno FROM data d WHERE a_sno < b_sno AND NOT EXISTS ( SELECT 1 FROM data WHERE b_sno = d.a_sno AND a_sno < b_sno ) GROUP BY a_sno; ``` This only needs a single query level since a window function can be built on an aggregate: * [Get the distinct sum of a joined table column](https://stackoverflow.com/questions/13169367/get-the-distinct-sum-of-a-joined-table-column/13169627#13169627) Result: ``` grp a_sno b_sno 1 4 5 2 9 10 3 11 15 ``` I avoid branches and duplicated (multiplicated) rows - potentially *much* more expensive with long chains. I use `ORDER BY b_sno LIMIT 1` in a correlated subquery to make this fly in a recursive CTE.
* [Create a unique index on a non-unique column](https://stackoverflow.com/questions/29171623/create-a-unique-index-on-a-non-unique-column/29172960#29172960) Key to performance is a matching index, which is already present provided by the PK constraint `PRIMARY KEY (a_sno,b_sno)`: not the other way round `(b_sno, a_sno)`: * [Is a composite index also good for queries on the first field?](https://dba.stackexchange.com/a/27493/3684) ``` WITH RECURSIVE t AS ( SELECT row_number() OVER (ORDER BY d.a_sno) AS grp , a_sno, min(b_sno) AS b_sno -- the smallest one FROM data d WHERE a_sno < b_sno AND NOT EXISTS ( SELECT 1 FROM data WHERE b_sno = d.a_sno AND a_sno < b_sno ) GROUP BY a_sno ) , cte AS ( SELECT grp, b_sno AS sno FROM t UNION ALL SELECT c.grp , (SELECT b_sno -- correlated subquery FROM data WHERE a_sno = c.sno AND a_sno < b_sno ORDER BY b_sno LIMIT 1) FROM cte c WHERE c.sno IS NOT NULL ) SELECT * FROM cte WHERE sno IS NOT NULL -- eliminate row with NULL UNION ALL -- no duplicates SELECT grp, a_sno FROM t ORDER BY grp, sno; ``` ### Less simple case All nodes can be reached in ascending order with one or more branches from the root (smallest `sno`). This time, get *all* greater `sno` and de-duplicate nodes that may be visited multiple times with `UNION` at the end: ``` WITH RECURSIVE t AS ( SELECT rank() OVER (ORDER BY d.a_sno) AS grp , a_sno, b_sno -- get all rows for smallest a_sno FROM data d WHERE a_sno < b_sno AND NOT EXISTS ( SELECT 1 FROM data WHERE b_sno = d.a_sno AND a_sno < b_sno ) ) , cte AS ( SELECT grp, b_sno AS sno FROM t UNION ALL SELECT c.grp, d.b_sno FROM cte c JOIN data d ON d.a_sno = c.sno AND d.a_sno < d.b_sno -- join to all connected rows ) SELECT grp, sno FROM cte UNION -- eliminate duplicates SELECT grp, a_sno FROM t -- add first rows ORDER BY grp, sno; ``` Unlike the first solution, we don't get a last row with NULL here (caused by the correlated subquery). Both should perform very well - especially with long chains / many branches. 
Result as desired: [**SQL Fiddle**](http://sqlfiddle.com/#!15/55cc22/4) (with added rows to demonstrate difficulty). ### Undirected graph If there are local minima that cannot be reached from the root with ascending traversal, the above solutions won't work. Consider [Farhęg's solution](https://stackoverflow.com/a/29737591/939860) in this case.
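Outside SQL, this grouping is exactly a connected-components problem; a tiny union-find in Python over the sample pairs reproduces the expected groups (mirrored duplicates can be omitted, as noted above):

```python
# Union-find over the question's pairs: each component is one group.
def find(parent, x):
    # Follow parent pointers to the root, inserting unseen nodes on the way.
    while parent.setdefault(x, x) != x:
        x = parent[x]
    return x

pairs = [(4, 5), (5, 6), (6, 7), (9, 10), (9, 13), (10, 14), (11, 15)]
parent = {}
for a, b in pairs:
    parent[find(parent, a)] = find(parent, b)  # union the two components

groups = {}
for x in list(parent):
    groups.setdefault(find(parent, x), set()).add(x)

result = sorted(map(sorted, groups.values()))
print(result)  # [[4, 5, 6, 7], [9, 10, 13, 14], [11, 15]]
```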
Let me suggest another way that may be useful; you can do it in 2 steps: **1.** take the `max(sno)` of each group: ``` select q.sno, row_number() over(order by q.sno) gn from( select distinct d.a_sno sno from data d where not exists ( select b_sno from data where b_sno=d.a_sno and a_sno>d.a_sno ) )q ``` *result:* ``` sno gn 7 1 14 2 15 3 ``` **2.** use a `recursive cte` to find all related members in groups: ``` with recursive cte(sno,gn,path,cycle)as( select q.sno, row_number() over(order by q.sno) gn, array[q.sno],false from( select distinct d.a_sno sno from data d where not exists ( select b_sno from data where b_sno=d.a_sno and a_sno>d.a_sno ) )q union all select d.a_sno,c.gn, d.a_sno || c.path, d.a_sno=any(c.path) from data d join cte c on d.b_sno=c.sno where not cycle ) select distinct gn,sno from cte order by gn,sno ``` *Result:* ``` gn sno 1 4 1 5 1 6 1 7 2 9 2 10 2 13 2 14 3 11 3 15 ``` [here is **the demo**](http://sqlfiddle.com/#!15/13cb0/3) of what I did.
SQL grouping interescting/overlapping rows
[ "", "sql", "postgresql", "common-table-expression", "recursive-query", "" ]
I have the following code which gives me production dates and production volumes for a thirty day period. ``` select (case when trunc(so.revised_due_date) <= trunc(sysdate) then trunc(sysdate) else trunc(so.revised_due_date) end) due_date, (case when (case when sp.pr_typ in ('VV','VD') then 'DVD' when sp.pr_typ in ('RD','CD') then 'CD' end) = 'CD' and (case when so.tec_criteria in ('PI','MC') then 'XX' else so.tec_criteria end) = 'OF' then sum(so.revised_qty_due) end) CD_OF_VOLUME from shop_order so left join scm_prodtyp sp on so.prodtyp = sp.prodtyp where so.order_type = 'MD' and so.plant = 'W' and so.status_code between '4' and '8' and trunc(so.revised_due_date) <= trunc(sysdate)+30 group by trunc(so.revised_due_date), so.tec_criteria, sp.pr_typ order by trunc(so.revised_due_date) ``` The problem I have is where there is a date with no production planned, the date wont appear on the report. Is there a way of filling in the missing dates. i.e. the current report shows the following ... ``` DUE_DATE CD_OF_VOLUME 14/04/2015 35,267.00 15/04/2015 71,744.00 16/04/2015 20,268.00 17/04/2015 35,156.00 18/04/2015 74,395.00 19/04/2015 3,636.00 21/04/2015 5,522.00 22/04/2015 15,502.00 04/05/2015 10,082.00 ``` Note: missing dates (20/04/2015, 23/04/2015 to 03/05/2015) Range is always for a thirty day period from sysdate. How do you fill in the missing dates? Do you need some kind of calendar table? Thanks
You can get the 30-day period from `SYSDATE` as follows (I assume you want to include `SYSDATE`?): ``` WITH mydates AS ( SELECT TRUNC(SYSDATE) - 1 + LEVEL AS due_date FROM dual CONNECT BY LEVEL <= 31 ) ``` Then use the above to do a `LEFT JOIN` with your query (perhaps not a bad idea to put your query in a CTE as well): ``` WITH mydates AS ( SELECT TRUNC(SYSDATE) - 1 + LEVEL AS due_date FROM dual CONNECT BY LEVEL <= 31 ), myorders AS ( select (case when trunc(so.revised_due_date) <= trunc(sysdate) then trunc(sysdate) else trunc(so.revised_due_date) end) due_date, (case when (case when sp.pr_typ in ('VV','VD') then 'DVD' when sp.pr_typ in ('RD','CD') then 'CD' end) = 'CD' and (case when so.tec_criteria in ('PI','MC') then 'XX' else so.tec_criteria end) = 'OF' then sum(so.revised_qty_due) end) CD_OF_VOLUME from shop_order so left join scm_prodtyp sp on so.prodtyp = sp.prodtyp where so.order_type = 'MD' and so.plant = 'W' and so.status_code between '4' and '8' and trunc(so.revised_due_date) <= trunc(sysdate)+30 group by trunc(so.revised_due_date), so.tec_criteria, sp.pr_typ order by trunc(so.revised_due_date) ) SELECT mydates.due_date, myorders.cd_of_volume FROM mydates LEFT JOIN myorders ON mydates.due_date = myorders.due_date; ``` If you want to show a zero on "missing" dates instead of a `NULL`, use `COALESCE(myorders.cd_of_volume, 0) AS cd_of_volume` above.
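The same calendar-row idea can be sketched portably with a recursive CTE instead of Oracle's `CONNECT BY` (SQLite via Python, with made-up volumes and a 4-day window for brevity):

```python
import sqlite3

# Production exists on only two of the four days.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prod (due_date TEXT, volume INTEGER)")
conn.executemany("INSERT INTO prod VALUES (?, ?)",
                 [("2015-04-14", 35267), ("2015-04-16", 20268)])

# Generate every date in the window, then LEFT JOIN the real data onto it,
# so days with no production still appear (with 0).
rows = conn.execute("""
    WITH RECURSIVE days(d) AS (
        SELECT '2015-04-14'
        UNION ALL
        SELECT date(d, '+1 day') FROM days WHERE d < '2015-04-17'
    )
    SELECT days.d, COALESCE(prod.volume, 0)
    FROM days LEFT JOIN prod ON prod.due_date = days.d
    ORDER BY days.d
""").fetchall()
print(rows)
# [('2015-04-14', 35267), ('2015-04-15', 0), ('2015-04-16', 20268), ('2015-04-17', 0)]
```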
What you can do is this: create a new table with all the days you need. ``` WITH DAYS AS (SELECT TRUNC(SYSDATE) - ROWNUM DDD FROM ALL_OBJECTS WHERE ROWNUM < 365) SELECT DAYS.DDD FROM DAYS; ``` then do a full outer join between those tables: ``` select DUE_DATE , CD_OF_VOLUME , DDD from ( select (case when trunc(so.revised_due_date) <= trunc(sysdate) then trunc(sysdate) else trunc(so.revised_due_date) end) due_date, (case when (case when sp.pr_typ in ('VV','VD') then 'DVD' when sp.pr_typ in ('RD','CD') then 'CD' end) = 'CD' and (case when so.tec_criteria in ('PI','MC') then 'XX' else so.tec_criteria end) = 'OF' then sum(so.revised_qty_due) end) CD_OF_VOLUME from shop_order so left join scm_prodtyp sp on so.prodtyp = sp.prodtyp where so.order_type = 'MD' and so.plant = 'W' and so.status_code between '4' and '8' and trunc(so.revised_due_date) <= trunc(sysdate)+30 group by trunc(so.revised_due_date), so.tec_criteria, sp.pr_typ order by trunc(so.revised_due_date) ) full outer join NEW_TABLE new on ( new .DDD = DUE_DATE ) where new .DDD between /* */ AND /* */ /* pick your own limit) */ ```
ORACLE SQL: Fill in missing dates
[ "", "sql", "oracle", "" ]
This question is a bit vague and I do apologize, hopefully the example below will clear it up. It's a fairly elementary question, I just can't seem to quite find the right solution with my very limited knowledge and SQL relevant vocabulary There is a table with people, ``` create table People ( id integer, name LongName, primary key (id) ); ``` And one for workers that references people ``` create table Workers ( id integer references People(id), worktype varchar(20), primary key (id) ); ``` and lastly a works\_for relationship ``` create table Works_for ( worker integer references Workers(id), employer integer references Job(id), primary key (worker,job) ); ``` Now what I want to do is get all people that work at least 20 jobs, so I get the correct list of id's with the following query: ``` SELECT worker FROM Works_for GROUP BY worker HAVING COUNT(worker) > 20; ``` However I also want to get the names of these workers. How would I go about this? I've tried a number of things but I keep running into errors. Any help would be much appreciated!
You can join the tables and select both fields like this: ``` SELECT p.name, p.id FROM People p JOIN Works_for wf ON (p.id = wf.worker) GROUP BY id HAVING COUNT(wf.worker) > 20; ``` [sqlfiddle](http://sqlfiddle.com/#!15/bd2da/6/0)
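A self-contained sketch of the accepted join (SQLite via Python; the threshold is lowered to 2 so the toy data fits):

```python
import sqlite3

# Ada works three jobs, Bob only one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE works_for (worker INTEGER, employer INTEGER);
INSERT INTO people VALUES (1, 'Ada'), (2, 'Bob');
INSERT INTO works_for VALUES (1, 10), (1, 11), (1, 12), (2, 10);
""")

# Join to get the names, group per person, filter with HAVING.
rows = conn.execute("""
    SELECT p.name, p.id
    FROM people p JOIN works_for wf ON p.id = wf.worker
    GROUP BY p.id
    HAVING COUNT(wf.worker) > 2
""").fetchall()
print(rows)  # [('Ada', 1)]
```

Selecting `p.name` while grouping by `p.id` is fine here because `id` is the primary key, so the name is functionally determined by the group (PostgreSQL and SQLite both accept this).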
``` SELECT worker,name FROM Works_for join People on worker=id GROUP BY worker,name HAVING COUNT(employer) > 20; ``` <http://sqlfiddle.com/#!15/e03e3/1> Nobody reaches 20 jobs in the fiddle — there are just 3 records — but I think it's enough as a demo
Use id values from one query, to corresponding column with same id in another table
[ "", "sql", "postgresql", "" ]
If a table has many rows and each row has many columns, which query is faster? The value of the column is within {1,2,3,4}: ``` SELECT * FROM table t WHERE t.column1<>3 ``` Or ``` SELECT * FROM table t WHERE t.column1 in {1,2,4} ``` Is it 3 compares vs 1 compare in these two cases?
In Oracle, it would make a huge difference if you have an **INDEX**. You could make a small test case, and check the explain plan for each statement.

* **CASE# 1 : Without an Index**

**Setup**

For demo purpose, I am creating a small table from scott.emp table with 4 rows.

```
SQL> CREATE TABLE t AS SELECT * FROM emp WHERE empno IN (7369, 7499, 7521, 7566);

Table created.

SQL>
```

Since I am on 12c, CTAS doesn't need to gather statistics. Let's see the explain plan:

**Using not equal to Condition**

```
SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE empno <> 7369;

Explained.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     3 |   117 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     3 |   117 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMPNO"<>7369)

13 rows selected.

SQL>
```

**Using IN condition**

```
SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE empno IN (7499, 7521, 7566);

Explained.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     3 |   117 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     3 |   117 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMPNO"=7499 OR "EMPNO"=7521 OR "EMPNO"=7566)

13 rows selected.

SQL>
```

* **CASE# 2 : With an Index**

**Setup**

```
SQL> CREATE INDEX t_indx ON t(empno);

Index created.

SQL>
```

**Using not equal to Condition**

```
SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE empno <> 7369;

Explained.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     3 |   117 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     3 |   117 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMPNO"<>7369)

13 rows selected.

SQL>
```

**Using IN condition**

```
SQL> EXPLAIN PLAN FOR SELECT * FROM t WHERE empno IN (7499, 7521, 7566);

Explained.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
Plan hash value: 4217410154

-----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |        |     3 |   117 |     2   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |        |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T      |     3 |   117 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | T_INDX |     3 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMPNO"=7499 OR "EMPNO"=7521 OR "EMPNO"=7566)

15 rows selected.

SQL>
```
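The same experiment can be reproduced outside Oracle. This is a small sketch using Python's `sqlite3` and SQLite's `EXPLAIN QUERY PLAN` (the sample rows mirror the answer's scott.emp subset; SQLite's plan output is much terser than `DBMS_XPLAN`, but the index effect is the same idea: the `IN` list can be driven by the index, while `<>` still scans):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (empno INTEGER, ename TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(7369, "SMITH"), (7499, "ALLEN"), (7521, "WARD"), (7566, "JONES")])

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the human-readable detail in the last column.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index, both predicates scan the whole table.
before = plan("SELECT * FROM t WHERE empno IN (7499, 7521, 7566)")

conn.execute("CREATE INDEX t_indx ON t(empno)")

# With the index, the IN-list becomes an index search; <> still scans.
in_plan = plan("SELECT * FROM t WHERE empno IN (7499, 7521, 7566)")
neq_plan = plan("SELECT * FROM t WHERE empno <> 7369")
```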
They are different. The first selects all records which don't have an ID of 3, whereas the second selects all which are 1, 2 or 4. So, for example, the record with ID = 5 is not included.

So performance should not be the reason to choose one query over the other. Either you have millions of records in this table, and then the `IN` approach doesn't work, or you really have only those 4 records, and then what are you asking? Choose the most appropriate way in terms of readability and fail-safety and don't care about nanoseconds.

Query optimization mostly relies on the RDBMS, so which are you using? In SQL Server there should be no difference. But check it out yourself. [If you have two horses and you want to know which of the two is the faster then race your horses.](http://ericlippert.com/2012/12/17/performance-rant/)
which query is optimal id<>3 or id in {1,2,4}
[ "", "mysql", "sql", "sql-server", "oracle", "" ]
I have two drop-down lists, Country and State; both are filled from the database. In the State drop-down list I want to add "Others" at the end. I tried adding "Others" from the database, but when I use `ORDER BY Name` to show the state names in alphabetical order in my drop-down list, "Others" gets rearranged as well, and I want it to stay at the end of the list.

This is how I am populating my list:

```
private void FillStateDDl()
{
    DataSet ds = new DataSet();
    try
    {
        SqlParameter[] sql = new SqlParameter[2];
        sql[0] = new SqlParameter("@Type", 2);
        sql[1] = new SqlParameter("@cid", drpCountry.SelectedValue);
        ds = gs.DisplayData("sp_FillDropDownList", sql);
        if (ds.Tables[0].Rows.Count > 0)
        {
            drpState.DataTextField = "statename";
            drpState.DataValueField = "sid";
            drpState.DataSource = ds;
            drpState.DataBind();
            drpState.Items.Insert(0, new ListItem("--Select State--", "-1"));
        }
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message.ToString());
    }
}
```
You need to ensure that `AppendDataBoundItems` is set to true. Then items can be added after the data source, like:

```
ddl.AppendDataBoundItems = true;
ddl.DataSource = CreateDataSource();
ddl.DataTextField = "TextField";
ddl.DataValueField = "ValueField";
ddl.DataBind();
ddl.Items.Add("Static value here....");
```
Yes, you can set a static value together with the data source in the dropdown list. You need to do the following steps:

1. Select the properties of the dropdown list, find AppendDataBoundItems and set AppendDataBoundItems = true.
2. After binding the dropdown list, add the following line of code: `ddl.Items.Add("Others")` (VB.NET).
3. Also, use ORDER BY in descending order if needed.
Can we set static value at the end of Dropdown list?
[ "", "sql", "asp.net", "sql-server", "t-sql", "" ]
I'm trying to join three tables. But I get this error:

```
Syntax error (missing operator) in query expression 'STUDENTAI.mėgstamiausia_laida = TV_LAIDOS.id LEFT JOIN MIESTAI ON STUDENTAI.kilme = MIESTAI.koda'.
```

Here is my code:

```
SELECT MIESTAI.pavadinimas, TV_LAIDOS.pavadinimas
FROM STUDENTAI
LEFT JOIN TV_LAIDOS ON STUDENTAI.mėgstamiausia_laida = TV_LAIDOS.id
LEFT JOIN MIESTAI ON STUDENTAI.kilme = MIESTAI.kodas
WHERE STUDENTAI.ugis > 190;
```

So what's wrong? Why do I get this error?
This query:

```
SELECT MIESTAI.pavadinimas, TV_LAIDOS.pavadinimas
FROM STUDENTAI
LEFT JOIN TV_LAIDOS ON STUDENTAI.mėgstamiausia_laida = TV_LAIDOS.id
LEFT JOIN MIESTAI ON STUDENTAI.kilme = MIESTAI.kodas
WHERE STUDENTAI.ugis > 190;
```

looks structurally correct for any database . . . except MS Access. In that system, you need parentheses around the joins:

```
SELECT MIESTAI.pavadinimas, TV_LAIDOS.pavadinimas
FROM (STUDENTAI
      LEFT JOIN TV_LAIDOS ON STUDENTAI.mėgstamiausia_laida = TV_LAIDOS.id
     )
LEFT JOIN MIESTAI ON STUDENTAI.kilme = MIESTAI.kodas
WHERE STUDENTAI.ugis > 190;
```

Note that this assumes that the tables and columns all exist and accented characters are allowed in column names and so on.
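As a sanity check that the double `LEFT JOIN` itself is fine outside Access, here is a small sketch run against SQLite via Python's `sqlite3`. The sample rows are invented for illustration, and the accented column name is ASCII-fied as `megstamiausia_laida` to keep the demo portable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE STUDENTAI (megstamiausia_laida INTEGER, kilme INTEGER, ugis INTEGER);
CREATE TABLE TV_LAIDOS (id INTEGER, pavadinimas TEXT);
CREATE TABLE MIESTAI   (kodas INTEGER, pavadinimas TEXT);
-- one student with a favourite show, one too short, one with no show at all
INSERT INTO STUDENTAI VALUES (1, 10, 195), (2, 20, 180), (NULL, 10, 200);
INSERT INTO TV_LAIDOS VALUES (1, 'Laida A'), (2, 'Laida B');
INSERT INTO MIESTAI   VALUES (10, 'Vilnius'), (20, 'Kaunas');
""")

rows = conn.execute("""
    SELECT MIESTAI.pavadinimas, TV_LAIDOS.pavadinimas
    FROM STUDENTAI
    LEFT JOIN TV_LAIDOS ON STUDENTAI.megstamiausia_laida = TV_LAIDOS.id
    LEFT JOIN MIESTAI   ON STUDENTAI.kilme = MIESTAI.kodas
    WHERE STUDENTAI.ugis > 190
""").fetchall()
```

The `LEFT JOIN` keeps the tall student with no favourite show, producing a NULL show name for that row.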
This column name looks special:

```
STUDENTAI.mėgstamiausia_laida
```

It ran on my MySQL if I enclosed it in `` ` ``, i.e.

```
SELECT MIESTAI.pavadinimas, TV_LAIDOS.pavadinimas
FROM STUDENTAI
LEFT JOIN TV_LAIDOS ON STUDENTAI.`mėgstamiausia_laida` = TV_LAIDOS.id
LEFT JOIN MIESTAI ON STUDENTAI.kilme = MIESTAI.kodas
WHERE STUDENTAI.ugis > 190;
```
SQL syntax error JOIN three tables
[ "", "sql", "ms-access", "left-join", "" ]
I'm trying to use an `INNER JOIN` in SQL to show a column from one table in another. Here is the query that I'm using:

```
SELECT *
FROM Person.BusinessEntity
INNER JOIN HumanResources.Employee
    ON Person.BusinessEntity.BusinessEntityID = HumanResources.Employee.JobTitle
```

When I execute this, I get an error:

> Conversion failed when converting the nvarchar value 'Chief Executive Officer' to data type int

I'm not trying to convert the data, so I'm not sure why I'm getting this error. Any help would be much appreciated!
As you are trying to join two columns with different data types, you are forcing the database to make an implicit conversion, which it can't do in this case. What you want to do is to join the tables on the keys they share (which would be BusinessEntityID in this case).

The table and column names look familiar and I'm guessing that you are following the examples in the book *Beginning Microsoft SQL Server 2012 Programming* (or some similar book that uses the AdventureWorks sample database). If so, the source of the issue is that you are using the wrong column in the HumanResources.Employee table. It should be:

```
SELECT Person.BusinessEntity.*, HumanResources.Employee.JobTitle
FROM Person.BusinessEntity
INNER JOIN HumanResources.Employee
    ON Person.BusinessEntity.BusinessEntityID = HumanResources.Employee.BusinessEntityID
```
I think the type of column BusinessEntityID is INT, while the type of column JobTitle is NVARCHAR. So this:

```
ON Person.BusinessEntity.BusinessEntityID = HumanResources.Employee.JobTitle
```

is equal to:

```
ON 1 = 'Chief...Officer'
```

And you can't compare an INT with an NVARCHAR; both must be the same type.
INNER JOIN SQL query conversion nvarchar value to int
[ "", "sql", "" ]
I've created a script that gets the total of sales in a month declared by the user, grouped weekly. But the problem is that the total exceeds my declared date range, because the declared month is not divisible by 7 (the number of days in a week). Now I want to get the result based only on my date range.

This is my example query:

```
Set @starttime = '2014-10-01'
Set @endtime = '2014-10-31'

while @endtime > @starttime
begin
    print @starttime
    set @xendtime = DATEADD(DAY, 7, @starttime)
    print @xendtime
    set @starttime = DATEADD(DAY, 7, @starttime)
end
```

And this is the result:

```
Oct 1 2014 12:00AM  ---@StartDate
Oct 7 2014 12:00AM  ---@xEndDate
Oct 8 2014 12:00AM  ---@StartDate
Oct 14 2014 12:00AM ---@xEndDate
Oct 15 2014 12:00AM ---@StartDate
Oct 21 2014 12:00AM ---@xEndDate
Oct 22 2014 12:00AM ---@StartDate
Oct 28 2014 12:00AM ---@xEndDate
Oct 29 2014 12:00AM ---@StartDate
Nov 4 2014 12:00AM  ---@xEndDate <----- I want this to end in Oct 31.
```

Can you help me with this? Please.
Here is your desired result with small amendments to your query:

```
DECLARE @starttime datetime
DECLARE @endtime datetime
DECLARE @time datetime

Set @starttime = '2014-10-01'
Set @endtime = '2014-10-31'

while @endtime > @starttime
begin
    print @starttime
    set @time = DATEADD(DAY, 6, @starttime)
    If @time < @endtime
    Begin
        print @time
        set @starttime = DATEADD(DAY, 1, @time)
    End
    Else
    begin
        print @endtime
        set @endtime = @starttime
    End
end
```

Hope this will help you.
```
DECLARE @starttime date
       ,@endtime date
       ,@xendtime date

Set @starttime = '2014-10-01'
Set @endtime = '2014-10-31'

while @endtime > @starttime
begin
    set @xendtime = CASE WHEN DATEADD(DAY, 6, @starttime) >= @endtime
                         THEN @endtime
                         ELSE DATEADD(DAY, 6, @starttime)
                    END

    SELECT @starttime, @xendtime

    set @starttime = DATEADD(DAY, 1, @xendtime)
end
GO
```

Using CTE:

```
DECLARE @starttime date
       ,@endtime date
       ,@xendtime date

Set @starttime = '2014-10-01'
Set @endtime = '2014-10-31'

;WITH dt as
(
    SELECT @starttime as StartDate
          ,CASE WHEN DATEADD(DAY, 6, @starttime) >= @endtime
                THEN @endtime
                ELSE DATEADD(DAY, 6, @starttime)
           END EndDate
    UNION ALL
    SELECT DATEADD(DAY, 1, dt.EndDate)
          ,CASE WHEN DATEADD(DAY, 7, dt.EndDate) >= @endtime
                THEN @endtime
                ELSE DATEADD(DAY, 7, dt.EndDate)
           END
    FROM dt
    WHERE dt.EndDate < @endtime
)
SELECT *
FROM dt
OPTION (MAXRECURSION 0);
```
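The clamping idea used by both answers (cap each bucket's end at the declared end date, then start the next bucket the day after) can be sketched in plain Python with `datetime`, just to verify the bucket boundaries:

```python
from datetime import date, timedelta

def week_buckets(start, end):
    """Step a week at a time, but never let a bucket's end pass the range end."""
    buckets = []
    while start <= end:
        bucket_end = min(start + timedelta(days=6), end)
        buckets.append((start, bucket_end))
        start = bucket_end + timedelta(days=1)
    return buckets

buckets = week_buckets(date(2014, 10, 1), date(2014, 10, 31))
```

For October 2014 this yields five buckets, the last one ending exactly on Oct 31 rather than spilling into November.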
How to get the result of data from sql by month and divided into weeks
[ "", "sql", "sql-server", "sql-server-2012", "" ]
I am trying to subtract two dates from each other. The question says that I have to create a query to display the orders that were not shipped within 30 days of ordering. Here is my attempt:

```
select orderno
from orders
where 30 > (select datediff(dd, s.ship_date, o.odate)
            from o.orders, s.shipment);
```

The error I get is:

> ERROR at line 1:
> ORA-00942: table or view does not exist

These are the two tables:

```
SQL> desc orders
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ORDERNO                                   NOT NULL NUMBER(3)
 ODATE                                     NOT NULL DATE
 CUSTNO                                             NUMBER(3)
 ORD_AMT                                            NUMBER(5)

SQL> desc shipment
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ORDERNO                                   NOT NULL NUMBER(3)
 WAREHOUSENO                               NOT NULL VARCHAR2(3)
 SHIP_DATE                                          DATE
```
Here is one method, using Oracle syntax. Note that the column is `odate` (not `date`, which is a reserved word), and the difference that matters is `ship_date - odate`: the number of days between ordering and shipping.

```
select o.orderno
from orders o
where (select s.ship_date - o.odate
       from shipment s
       where s.orderno = o.orderno
      ) > 30;
```

Note the correlation clause in the subquery, but each table is only mentioned once.

The problem that you have is that an order could ship on more than one occasion, and this would generate an error in the query, because the subquery would return more than one row. One solution is aggregation. You need to decide if the question is "the entire order does not ship within 30 days" or "no part of the order ships within 30 days". The latter would use `MIN()`:

```
select o.orderno
from orders o
where (select MIN(s.ship_date - o.odate)
       from shipment s
       where s.orderno = o.orderno
      ) > 30;
```
You'd be wanting something along the lines of:

```
select ...
from orders o
where not exists (
      select null
      from shipment s
      where s.orderno = o.orderno
      and   s.ship_date <= (o.odate + 30))
```

Date arithmetic is pretty easy if you just want a difference in days, as you can add or subtract days as integers. If it were months, quarters or years you'd want to use Add_Months().

Also, it's better in the query above to say "ship_date <= (order_date + 30)" rather than "(ship_date - order_date) <= 30", as it lets indexes be used on the join key and shipment date combined. In practice you'd probably want an index on (s.orderno, s.ship_date) so that the shipment table does not have to be accessed for this query.
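A small runnable cross-check of this `NOT EXISTS` shape, using Python's `sqlite3` with `julianday()` standing in for Oracle's native date subtraction. The sample orders are invented: one shipped in 10 days, one in 40, one never shipped at all (which `NOT EXISTS` also catches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders   (orderno INTEGER, odate TEXT);
CREATE TABLE shipment (orderno INTEGER, ship_date TEXT);
INSERT INTO orders   VALUES (1, '2015-01-01'), (2, '2015-01-01'), (3, '2015-01-01');
-- order 1 ships in 10 days, order 2 in 40 days, order 3 never ships
INSERT INTO shipment VALUES (1, '2015-01-11'), (2, '2015-02-10');
""")

# Orders with no shipment within 30 days of ordering.
late = [r[0] for r in conn.execute("""
    SELECT o.orderno
    FROM orders o
    WHERE NOT EXISTS (SELECT 1 FROM shipment s
                      WHERE s.orderno = o.orderno
                        AND julianday(s.ship_date) - julianday(o.odate) <= 30)
    ORDER BY o.orderno
""")]
```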
Oracle find whether a corresponding record exists within a number of days
[ "", "sql", "oracle", "date-arithmetic", "" ]
I've been trying to convert a MySQL query in my Rails application to use Postgres syntax, and have been poring over the Postgres documentation, specifically looking at the differences between conditionals.

The query is:

```
Story.select('*', 'IF (views.id is null, 0, 1) as viewed')
     .join('left join views on( views.story_id = stories.id and views.user_id = ? )', user.id)
     .group(:id)
     .order(viewed: :asc)
     .order(id: :asc)
```

or as a pure query:

```
select stories.*, IF (views.id is null, 0, 1) as viewed
from stories
left join views on(views.story_id = stories.id and views.user_id = 1)
where 1
group by stories.id
order by viewed, stories.id
```

It seems Postgres uses 'case' instead of IF, so I tried something like this:

```
Story.select('*', '(case when viewid is null then 0, 1 end) as viewed')
     .join('left join views on( views.story_id = stories.id and view.user_id = ? )', user.id)
     .group(:id)
     .order(viewed: :asc)
     .order(id: :asc)
```

but I get a **wrong number of arguments (2 for 0..1)** error... I've tried several variations but with no luck. I'm planning to study SQL a bit more in the near future, but in the meantime I feel somewhat lost here. Can anyone point out the finer details of the different syntax? Thanks!
`EXISTS(...)` yields a boolean, which solves the NULL case quite elegantly:

```
select s.*
     , EXISTS (select 42
               from views v
               where v.story_id = s.id
               and   v.user_id = 1) as viewed
from stories s
where 1=1
;
```

If you want to use the `viewed` column in an `ORDER BY` clause, you'll have to push the above query into a subquery.
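Here is a quick illustration of the `EXISTS`-as-flag idea using Python's `sqlite3`. Note two SQLite-specific details: it returns `EXISTS` as 0/1 integers rather than a true boolean, and unlike the Postgres caveat above it happens to accept the alias directly in `ORDER BY`, so unviewed stories sort first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stories (id INTEGER);
CREATE TABLE views (story_id INTEGER, user_id INTEGER);
INSERT INTO stories VALUES (1), (2), (3);
-- user 1 has viewed story 2; story 3 was viewed only by user 2
INSERT INTO views VALUES (2, 1), (3, 2);
""")

rows = conn.execute("""
    SELECT s.id,
           EXISTS (SELECT 1 FROM views v
                   WHERE v.story_id = s.id AND v.user_id = 1) AS viewed
    FROM stories s
    ORDER BY viewed, s.id
""").fetchall()
```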
Based on the error **wrong number of arguments (2 for 0..1)**, Rails (not Postgres) is telling you that some method is receiving two arguments, but the method only supports zero or one argument.

Looking into the code, the only two methods where you are sending two arguments are `select` and `join`, so what if you try:

```
select('*, (case when viewid is null then 0, 1 end) as viewed')
```

instead of

```
select('*', '(case when viewid is null then 0, 1 end) as viewed')
```

and (just for a test, because it is a **bad practice**):

```
.join("left join views on( views.story_id = stories.id and view.user_id = #{user.id} )")
```

instead of

```
.join('left join views on( views.story_id = stories.id and view.user_id = ? )', user.id)
```

?
converting a conditional MySQL query syntax to Postgres in my Rails application
[ "", "mysql", "sql", "ruby-on-rails", "postgresql", "activerecord", "" ]
This may be too much of an opinion-based question, but here goes:

I've found an interesting quirk with Entity Framework and database migrations. It seems that whenever we create a foreign key, it also creates an index on that column. I read this SO question: [Entity Framework Code First Foreign Key adding Index as well](https://stackoverflow.com/questions/22225796) and everyone seems to say it's a great, efficient idea, but I don't see how; indexing a column is very circumstance-specific.

For instance, EF is indexing FKs on my table that are almost never (~1%) used for searches and are also on a source table, meaning that even when I join other tables, I'm searching the FK's linked table using its PK... there's no benefit from having the FK indexed in that scenario (that I'm aware of).

**My question:** Am I missing something? Is there some reason why I would want to index a FK column that is never searched and is always on the source table in any joins? My plan is to remove some of these questionable indexes, but I wanted to confirm that there's not some optimization concept that I'm missing.
In EF Code First, the general reason why you would model a foreign key relationship is for navigability between entities. Consider a simple scenario of `Country` and `City`, with eager loading defined for the following LINQ statement:

```
var someQuery = db.Countries
    .Include(co => co.City)
    .Where(co => co.Name == "Japan")
    .Select(...);
```

This would result in a query along the lines of:

```
SELECT *
FROM Country co
INNER JOIN City ci ON ci.CountryId = co.ID
WHERE co.Name = 'Japan';
```

Without an index on the foreign key `City.CountryId`, SQL will need to scan the Cities table in order to filter the cities for the country during a JOIN.

The FK index will also have [performance benefits if rows are deleted](https://www.dataversity.net/foreign-keys-and-the-delete-performance-issue/) from the parent Country table, as referential integrity will need to detect the presence of any linked City rows (whether the FK has `ON CASCADE DELETE` defined or not).

**TL;DR** Indexes on foreign keys [are](https://learn.microsoft.com/en-us/sql/relational-databases/tables/primary-and-foreign-key-constraints?view=sql-server-ver15#indexes-on-foreign-key-constraints) [recommended](https://stackoverflow.com/q/3650690). Even if you don't filter directly on the foreign key, it will still be needed in joins. The exceptions to this seem to be quite contrived:

* If the selectivity of the foreign key is very low, e.g. in the above scenario, if 50% of ALL cities in the Cities table were in Japan, then the index would not be useful.
* If you don't actually ever navigate across the relationship.
* If you never delete rows from the parent table (or attempt an update on the PK).

One additional optimization consideration is whether to use the foreign key in the `Clustered Index` of the child table (i.e. cluster Cities by Country). This is often beneficial in parent : child table relationships where it is commonplace to retrieve all child rows for the parent simultaneously.
Short answer: No.

To expand slightly: at database create time, Entity Framework does not know how many records each table or entity will have, nor does it know how the entities will be queried. *In my opinion* the creation of a foreign key index is more likely to be right than wrong; I had massive performance issues using a different ORM which took longer to diagnose because I thought I had read in the documentation that it behaved the same way.

You can check the SQL statement that EF produces and run it manually if you want to double-check. You know your data better than EF does, and it should work just fine if you drop the index manually.

IIRC you can create one-way navigation properties if you use the right naming convention, although this was some time ago, and I never checked whether the index was created.
Entity Framework Indexing ALL foreign key columns
[ "", "sql", "sql-server", "entity-framework", "" ]
I'm trying to combine 2 queries in Oracle; the rows have the same values except for one field. Example:

```
SELECT NAME, AGE, EMAIL, DATE
FROM table_a
WHERE NAME = 'JOAO' AND FLAG = '0'
UNION
SELECT NAME, AGE, EMAIL, DATE
FROM table_a
WHERE NAME = 'JOAO' AND FLAG = '1'
```

Result:

```
NAME   AGE   EMAIL     DATE
JOAO   23    a@a.com   20150414
JOAO   23    a@a.com   null
```

How can I group these rows? I'm looking for something that would give me this result:

```
NAME   AGE   EMAIL     DATE
JOAO   23    a@a.com   20150414
```

Thank you (sorry for my English).
You can use COALESCE(). <http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions023.htm#SQLRF00617>

This query should work for every name, and should coalesce the other rows:

```
SELECT NAME1 AS NAME,
       COALESCE(AGE1, AGE2) AS AGE,
       COALESCE(EMAIL1, EMAIL2) AS EMAIL,
       COALESCE(DATE1, DATE2) AS DATE
FROM (
    SELECT t1.NAME AS NAME1, t1.AGE AS AGE1, t1.EMAIL AS EMAIL1, t1.DATE AS DATE1,
           t2.NAME AS NAME2, t2.AGE AS AGE2, t2.EMAIL AS EMAIL2, t2.DATE AS DATE2
    FROM table_a AS t1
    INNER JOIN table_a AS t2
        ON t2.FLAG = 1 AND t1.FLAG = 0 AND t1.NAME = t2.NAME
) AS t3;
```
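A runnable sketch of the same collapsing idea, using Python's `sqlite3`. Instead of the self-join it uses a grouped `MAX()`, which ignores NULLs, so the non-null date survives for each name; the DATE column is renamed `dt` here because DATE is a keyword in many databases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_a (name TEXT, age INTEGER, email TEXT, dt TEXT, flag TEXT);
INSERT INTO table_a VALUES
  ('JOAO', 23, 'a@a.com', '20150414', '0'),
  ('JOAO', 23, 'a@a.com', NULL,       '1');
""")

# MAX() skips NULLs, so the row with the real date wins per group.
rows = conn.execute("""
    SELECT name, age, email, MAX(dt) AS dt
    FROM table_a
    WHERE name = 'JOAO' AND flag IN ('0', '1')
    GROUP BY name, age, email
""").fetchall()
```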
If you're just trying to ignore the NULL values:

```
SELECT NAME, AGE, EMAIL, DATE
FROM table_a
WHERE NAME = 'JOAO'
AND FLAG in ( '0', '1' )
and date is not null
/
```

or if you want to keep the nulls, but defer to available non-null values:

```
with w_data as (
    SELECT NAME, AGE, EMAIL, DATE,
           row_number() over ( partition by name
                               order by date desc nulls last ) rnum
    FROM table_a
    WHERE NAME = 'JOAO'
    AND FLAG in ( '0', '1' )
)
select name, age, email, date
from w_data
where rnum = 1
/
```

[edit] in response to comment:

If you want to keep union, that's fine; union and OR are mostly interchangeable (in this case):

```
SELECT NAME, AGE, EMAIL, DATE
FROM table_a
WHERE NAME = 'JOAO' AND FLAG = '0'
and date is not null
union
SELECT NAME, AGE, EMAIL, DATE
FROM table_a
WHERE NAME = 'JOAO' AND FLAG = '1'
and date is not null
/
```

or even:

```
select *
from (
    SELECT NAME, AGE, EMAIL, DATE
    FROM table_a
    WHERE NAME = 'JOAO' AND FLAG = '0'
    union
    SELECT NAME, AGE, EMAIL, DATE
    FROM table_a
    WHERE NAME = 'JOAO' AND FLAG = '1'
)
where date is not null
/
```

if you just want the NULL criteria in one place ;)

The same logic can apply to the second query I wrote above using row_number() analytics:

```
with w_data as (
    SELECT NAME, AGE, EMAIL, DATE
    FROM table_a
    WHERE NAME = 'JOAO' AND FLAG = '0'
    union
    SELECT NAME, AGE, EMAIL, DATE
    FROM table_a
    WHERE NAME = 'JOAO' AND FLAG = '1'
),
w_sub as (
    select name, age, email, date,
           row_number() over ( partition by name
                               order by date desc nulls last ) rnum
    from w_data
)
select name, age, email, date
from w_sub
where rnum = 1
/
```
Union null in Oracle
[ "", "sql", "oracle", "union", "" ]
Why am I getting `*` in the result where I have two digits?

```
Declare @Product TABLE(id int identity(1,1), Cust VARCHAR(25), Product VARCHAR(20), QTY INT)

INSERT INTO @Product(Cust, Product, QTY)
VALUES ('KATE','VEG',2),
       ('KATE','SODA',6),
       ('KATE','MILK',1),
       ('KATE','BEER',12),
       ('FRED','MILK',3),
       ('FRED','BEER',24),
       ('KATE','VEG',3)

select *, TRY_CONVERT(varchar(1), QTY) qty1
from @Product
```
It is because the resulting data type is too short to display the value. As per [MSDN](https://msdn.microsoft.com/en-us/library/ms187928.aspx):

> **Truncating and Rounding Results**
>
> When you convert character or binary expressions (char, nchar, nvarchar, varchar, binary, or varbinary) to an expression of a different data type, data can be truncated, only partially displayed, or an error is returned because the result is too short to display. Conversions to char, varchar, nchar, nvarchar, binary, and varbinary are truncated, except for the conversions shown in the following table.

This happens when you convert from `int, smallint, or tinyint` to `char/varchar`.
You have only specified a single character in your `TRY_CONVERT(VARCHAR(1), QTY) qty1`. If you change it to `VARCHAR(2)`, it will work:

```
DECLARE @Product TABLE
    (
      id INT IDENTITY(1, 1),
      Cust VARCHAR(25),
      Product VARCHAR(20),
      QTY INT
    )

INSERT INTO @Product (Cust, Product, QTY)
VALUES ('KATE', 'VEG', 2),
       ('KATE', 'SODA', 6),
       ('KATE', 'MILK', 1),
       ('KATE', 'BEER', 12),
       ('FRED', 'MILK', 3),
       ('FRED', 'BEER', 24),
       ('KATE', 'VEG', 3)

SELECT *, TRY_CONVERT(VARCHAR(2), QTY) qty1
FROM @Product
```

I'm not sure why you would want to convert the `QTY` to a `VARCHAR` though, as it's an `INT`. The size would have to increase if you had a `QTY = 200`.

The reason you see `*` is due to [truncation of values](https://msdn.microsoft.com/en-us/library/ms187928.aspx) that are too large for the specified data type, as mentioned by @Damien_The_Unbeliever in his [comment](https://stackoverflow.com/questions/29744249/try-convert-issue-sql-server-2012#comment47621415_29744249).
Try_convert issue * sql server 2012
[ "", "sql", "sql-server", "" ]
I have a SQL database that has a column that is a string. The data inside is numbers such as:

```
$12,000,394.09
$56,874.94
$110,339,384.11
```

It is a string column, but I want to sort it with the lowest value on top. Is it possible to do this? Right now I have only tried the simple:

```
SELECT * FROM sales ORDER BY saleamount DESC
```

Any help would be great. Thanks.

@Brian - Error:

```
java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:210)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:30)
    at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:762)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:925)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1111)
    at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1309)
    at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:422)
    at TestServ.processRequest(TestServ.java:37)
    at TestServ.doGet(TestServ.java:116)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:318)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
    at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:357)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:260)
    at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:188)
    at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:191)
    at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:168)
    at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:189)
    at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:288)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:206)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:136)
    at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:114)
    at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
    at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:838)
    at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:113)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:115)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:55)
    at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:135)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:564)
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:544)
    at java.lang.Thread.run(Thread.java:745)
```
You can do this with `regexp_replace` to keep only numeric characters and the decimal point, and `to_number` to convert the result to a number:

```
select *
from sales
order by to_number(regexp_replace(string_num_field, '[^0-9.]+', ''))
```

**Fiddle:** <http://sqlfiddle.com/#!4/7d46e/2/0>
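The regexp-strip-then-compare-numerically idea translates directly to application code as well; here is a small Python sketch of the same approach, using the sample values from the question:

```python
import re

values = ["$12,000,394.09", "$56,874.94", "$110,339,384.11"]

def as_number(s):
    # Keep only digits and the decimal point, then compare as a number,
    # mirroring regexp_replace(..., '[^0-9.]+', '') + to_number().
    return float(re.sub(r"[^0-9.]+", "", s))

sorted_values = sorted(values, key=as_number)
```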
If all of your strings have the same format -- starting with a currency symbol, comma separators (if appropriate), and two decimal places -- then you can use the following:

```
order by length(saleamount) desc, saleamount desc
```

If you want the smallest value first, then:

```
order by length(saleamount) asc, saleamount asc
```
Sort SQL string column with currency
[ "", "sql", "oracle", "" ]
I want to convert my string data to an array in SQL Server. I tried it like below:

```
SELECT '223456789' AS SerialOriginalCode -- SerialOriginalCode 223456789

DECLARE @tbl_SerialOriginalVerion TABLE(ID INT, SerialOriginalCode INT);

INSERT INTO @tbl_SerialOriginalVerion
VALUES (1, 2), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8), (9, 9);

SELECT * FROM @tbl_SerialOriginalVerion
```

But this is a manual way, not a programmatic conversion, as it needs me to key in every insert value for each line. Could someone please suggest a more elegant way?
```
DECLARE @InputText AS VARCHAR(MAX) = '223456789'
DECLARE @Pos Int = 1
DECLARE @End Int
DECLARE @TextLength Int = DATALENGTH(@InputText)

DECLARE @Array TABLE
(
    TokenID Int PRIMARY KEY IDENTITY(1,1),
    Match Varchar(MAX)
)

-- Exit function if no text is passed in.
IF @TextLength <> 0
BEGIN
    WHILE @Pos <= @TextLength
    BEGIN
        INSERT @Array (Match) VALUES (SUBSTRING(@InputText, @Pos, 1))
        SET @Pos = @Pos + 1
    END
END

SELECT * FROM @Array
```
Try this `INSERT` using `number` from `master..spt_values`:

```
SELECT '223456789' AS SerialOriginalCode -- SerialOriginalCode 223456789

DECLARE @tbl_SerialOriginalVerion TABLE(ID INT, SerialOriginalCode INT);

INSERT INTO @tbl_SerialOriginalVerion
SELECT number + 1, SUBSTRING(t.SerialOriginalCode, sv.number + 1, 1)
FROM (SELECT '223456789' AS SerialOriginalCode) t
INNER JOIN master..spt_values sv
    ON sv.number < LEN(t.SerialOriginalCode)
WHERE sv.[type] = 'P'

SELECT * FROM @tbl_SerialOriginalVerion
```

**Output:**

```
ID  SerialOriginalCode
1   2
2   2
3   3
4   4
5   5
6   6
7   7
8   8
9   9
```
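For what it's worth, databases with recursive CTE support can do the same character split set-based without `spt_values` or a `WHILE` loop. A runnable illustration using Python's `sqlite3` (SQLite syntax rather than T-SQL, so this is the idea, not a drop-in replacement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Walk the string one character at a time: each recursion step peels off
# the first character and carries the rest forward.
rows = conn.execute("""
    WITH RECURSIVE split(id, ch, rest) AS (
        SELECT 1, SUBSTR('223456789', 1, 1), SUBSTR('223456789', 2)
        UNION ALL
        SELECT id + 1, SUBSTR(rest, 1, 1), SUBSTR(rest, 2)
        FROM split WHERE rest <> ''
    )
    SELECT id, CAST(ch AS INTEGER) FROM split ORDER BY id
""").fetchall()
```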
SQL server varchar.toArray() or string.toArray() method
[ "", "sql", "sql-server", "t-sql", "" ]
In my table I have a field `pwd` with values like 001, 002, 003, 004. To get the highest value I do this:

```
("SELECT pwd FROM users WHERE pwd= (SELECT max(pwd) FROM users)")
```

That way I get the number 004. To increment it and add it to the table again, I use:

```
biggest = CInt((SQLDataset.Tables(0).Rows(0).Item(0)))
Dim test As String = "000" & biggest
txtpwd.Text = test.Substring(test.Length - 3)
test2 = test.Substring(test.Length - 3)
```

But what if I want to find the first available (missing) number? For example, if I have:

```
001, 002, 003, 005, 006, 007, 009, 013
```

I want to get the number `004`. How can I do that?
This way I will get all of the missing numbers, and then I use row index 0 to get only the first one:

```
select u1.pwd + 1 as firstmissing
from users as u1
left outer join users as u2
    on u2.pwd = u1.pwd + 1
where u2.pwd is null
```
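A runnable check of this self-join gap search, using Python's `sqlite3`. Two liberties are taken for the demo: the `pwd` values are stored as plain integers (the question stores zero-padded strings, so a real query would need a conversion first), and `MIN()` is added so a single value comes back directly instead of taking row index 0 client-side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (pwd INTEGER);
INSERT INTO users VALUES (1), (2), (3), (5), (6), (7), (9), (13);
""")

# A row whose successor (pwd + 1) is absent marks a gap; MIN picks the first gap.
first_missing = conn.execute("""
    SELECT MIN(u1.pwd) + 1
    FROM users u1
    LEFT JOIN users u2 ON u2.pwd = u1.pwd + 1
    WHERE u2.pwd IS NULL
""").fetchone()[0]
```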
You could use a subquery (presuming SQL Server):

```
SELECT MIN(u.pwd) + 1 AS FirstMissing
FROM users u
WHERE (u.pwd + 1) <> (SELECT TOP 1 u2.pwd
                      FROM users u2
                      WHERE u2.pwd > u.pwd)
```

`Demo`

If you want to do that in memory, since you already have the `DataSet`, you could use the following LINQ-to-DataSet query. But I strongly advise against filtering on the client side.

```
Dim rows = SQLDataset.Tables(0).AsEnumerable()
Dim missing = From row In rows
              Let id = row.Field(Of Int32)(0)
              Let nextId = rows.Select(Function(r) r.Field(Of Int32)(0)).
                                Where(Function(idNext) idNext > id).
                                OrderBy(Function(idNext) idNext).
                                DefaultIfEmpty(-1).
                                First()
              Where nextId <> -1 AndAlso id + 1 <> nextId
              Select id + 1
Dim firstMissing As Int32 = missing.FirstOrDefault()
```

If it's actually a `string` in the `DataTable`, you can use `Int32.Parse`:

```
Dim missing = From row In rows
              Let id = Int32.Parse(row.Field(Of String)(0))
              Let nextId = rows.Select(Function(r) Int32.Parse(r.Field(Of String)(0))).
                                Where(Function(idNext) idNext > id).
                                OrderBy(Function(idNext) idNext).
                                DefaultIfEmpty(-1).
                                First()
              Where nextId <> -1 AndAlso id + 1 <> nextId
              Select id + 1
```
Get missing number sql
[ "", "sql", "vb.net", "" ]
I would like to take the following query:

```
select '2009042 Restraint 151214.pdf',
       '2009042 Restraint 170215.pdf',
       '2009042 Restraint 240215.pdf',
       '2009856 Restraint 190215.pdf',
       '208272 Notice 120215.pdf',
       '208272 Restraint 120215.pdf',
       '212598 Restraint 160215.pdf',
       '213195 Notice 130215.pdf'
from dual
```

and convert it into a query that returns the columns from the above query as rows. The PIVOT statement doesn't seem to be able to do this. So I would like to return the rows:

```
COL1
2009042 Restraint 151214.pdf
2009042 Restraint 170215.pdf
2009042 Restraint 240215.pdf
2009856 Restraint 190215.pdf
208272 Notice 120215.pdf
208272 Restraint 120215.pdf
212598 Restraint 160215.pdf
213195 Notice 130215.pdf
```

Note that this is slightly different from [inverse row to column](https://stackoverflow.com/questions/5301613/inverse-row-to-column).
Just phrase the query as a union all in the first place: ``` select '2009042 Restraint 151214.pdf' from dual union all select '2009042 Restraint 170215.pdf' from dual union all select '2009042 Restraint 240215.pdf' from dual union all select '2009856 Restraint 190215.pdf' from dual union all select '208272 Notice 120215.pdf' from dual union all select '208272 Restraint 120215.pdf' from dual union all select '212598 Restraint 160215.pdf' from dual union all select '213195 Notice 130215.pdf' from dual ```
> The pivot statement doesn't seem to be able to do this. Yes, because you're not trying to pivot your rows into columns, you're trying to *unpivot* them from columns into rows. If your columns had a predefined, enumerated set of names, you could use `UNPIVOT` like this: ``` select * from ( select '2009042 Restraint 151214.pdf' v1, '2009042 Restraint 170215.pdf' v2, '2009042 Restraint 240215.pdf' v3, '2009856 Restraint 190215.pdf' v4, '208272 Notice 120215.pdf' v5, '208272 Restraint 120215.pdf' v6, '212598 Restraint 160215.pdf' v7, '213195 Notice 130215.pdf' v8 from dual ) t unpivot ( col1 for v in (v1, v2, v3, v4, v5, v6, v7, v8) ) ``` Yielding: ``` V COL1 -------------------------------- V1 2009042 Restraint 151214.pdf V2 2009042 Restraint 170215.pdf V3 2009042 Restraint 240215.pdf V4 2009856 Restraint 190215.pdf V5 208272 Notice 120215.pdf V6 208272 Restraint 120215.pdf V7 212598 Restraint 160215.pdf V8 213195 Notice 130215.pdf ``` If no such names are available, things get a bit more hairy, but still doable (probably depending on not well-defined column naming conventions in Oracle): ``` select * from ( select '2009042 Restraint 151214.pdf', '2009042 Restraint 170215.pdf', '2009042 Restraint 240215.pdf', '2009856 Restraint 190215.pdf', '208272 Notice 120215.pdf', '208272 Restraint 120215.pdf', '212598 Restraint 160215.pdf', '213195 Notice 130215.pdf' from dual ) t unpivot ( col1 for v in ( "'2009042RESTRAINT151214.PDF'", "'2009042RESTRAINT170215.PDF'", "'2009042RESTRAINT240215.PDF'", "'2009856RESTRAINT190215.PDF'", "'208272RESTRAINT120215.PDF'", "'208272NOTICE120215.PDF'", "'212598RESTRAINT160215.PDF'", "'213195NOTICE130215.PDF'" ) ) ``` Yielding: ``` V COL1 ------------------------------------------------------------ '2009042RESTRAINT151214.PDF' 2009042 Restraint 151214.pdf '2009042RESTRAINT170215.PDF' 2009042 Restraint 170215.pdf '2009042RESTRAINT240215.PDF' 2009042 Restraint 240215.pdf '2009856RESTRAINT190215.PDF' 2009856 Restraint 190215.pdf 
'208272RESTRAINT120215.PDF' 208272 Restraint 120215.pdf '208272NOTICE120215.PDF' 208272 Notice 120215.pdf '212598RESTRAINT160215.PDF' 212598 Restraint 160215.pdf '213195NOTICE130215.PDF' 213195 Notice 130215.pdf ``` Of course, if you're ready to write that much SQL anyway, why not just use `UNION ALL`, [as Gordon suggested](https://stackoverflow.com/a/29747423/521799)
Oracle SQL convert a single row with multiple columns into multiple rows
[ "", "sql", "oracle", "" ]
I have a table that looks like this: ``` Name | Product | Total ---- --------- ------ A Toy $5 A Car $30,000 A Equipment $500 B Car $100,000 ``` etc... I would like to remove the duplicate name entries but keep the relationship to the product example: ``` Name | Product | Total ---- --------- ------ A Toy $5 Car $30,000 Equipment $500 B Car $100,000 ``` Any ideas?
I assumed you want to make `Name = ''` because you cannot delete the duplicate records. So you can use a [`ROW_NUMBER`](http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions137.htm) function inside a [`WITH`](http://oracle-base.com/articles/misc/with-clause.php) clause and update `Name` field ``` ;WITH C AS( SELECT ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Name) AS Rn ,Name ,Product ,Total FROM yourTable ) UPDATE C SET Name = '' WHERE Rn <> 1 ```
If you want to keep the `Name` only when `Product ='Toy'` then use `Case` statement ``` Select case when Product ='Toy' then Name else '' END as Name, Product, Total From yourtable ```
Remove duplicate entry but keep relationship
[ "", "sql", "oracle11g", "" ]
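The `ROW_NUMBER` idea from the accepted answer above can be tried end-to-end with Python's bundled `sqlite3` (this needs an SQLite build with window functions, 3.25+; the table name and data are the sample from the question). Here it is done as a `SELECT` rather than an `UPDATE`, which keeps the underlying rows intact:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (Name TEXT, Product TEXT, Total TEXT)")
conn.executemany("INSERT INTO yourTable VALUES (?, ?, ?)", [
    ("A", "Toy", "$5"), ("A", "Car", "$30,000"),
    ("A", "Equipment", "$500"), ("B", "Car", "$100,000"),
])

# Number the rows per Name; blank the Name everywhere except the first row
# of each group -- the display-only variant of the accepted UPDATE.
rows = conn.execute("""
    SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Name) = 1
                THEN Name ELSE '' END AS Name,
           Product, Total
    FROM yourTable
""").fetchall()
for row in rows:
    print(row)
```

Exactly one row per group keeps its `Name`; the rest are blanked, matching the desired output in the question.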
I've written the following function: ``` declare v_count DATE; strsql VARCHAR2(200); begin for r in (select OBJECT_NAME from all_objects where OBJECT_TYPE='TABLE' and OWNER='SALES' and OBJECT_NAME LIKE 'T%' ORDER BY OBJECT_NAME DESC) loop DBMS_OUTPUT.PUT_LINE(r.OBJECT_NAME); strsql := 'select SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) FROM : 1 INTO v_count'; execute immediate strsql USING r.OBJECT_NAME; DBMS_OUTPUT.PUT_LINE(v_count); end loop; end; ``` Unfortunately, two (bad) things are happening: 1. It outputs only the first OBJECT\_NAME (the first PUT\_LINE). 2. It does NOT execute the second PUT\_LINE. Obviously, I am NOT a Oracle developer, I just need a quick and dirty function to show me the last updated times for all tables in a given schema. Can anyone tell me what I did wrong?
After a bunch of work, breakage, consulting the almighty Google, I came up with this function: ``` declare v_count TIMESTAMP; strsql VARCHAR2(200); tblname VARCHAR2(32); begin for r in (select OBJECT_NAME from all_objects where OBJECT_TYPE='TABLE' and OWNER='SALES' and OBJECT_NAME LIKE 'T%' ORDER BY OBJECT_NAME) loop begin tblname := R.OBJECT_NAME; strsql := 'select SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) FROM SALES.' || tblname; execute immediate strsql into v_count; DBMS_OUTPUT.PUT_LINE(tblname || ' ' || v_count); EXCEPTION when others then null; end; end loop; end; ``` The ORA_ROWSCN is held for a limited amount of time, so if it isn't there an exception will be thrown. If I wrote this "correctly", it would only return a null for ORA-006550, and I would figure out a way to sort the results by the SCN (but descending).
Your strsql contains incorrect SQL which cannot be executed, so the first `execute immediate` raises an exception (surely you saw it). A table name cannot be supplied as a bind parameter. You should use something like ``` strsql := 'select .... from ' || r.owner || '.' || r.object_name; ```
Oracle Function to Get Last Updated Timestamps For All Tables In A Schema
[ "", "sql", "oracle", "oracle-sqldeveloper", "" ]
I'm trying to get the number of days it has been since April 15th from the current date, ignoring the year, so that the result is never greater than 365. When this is performed: ``` SELECT DATEDIFF(dd, '04/15/2015', GETDATE()) ``` it only works for this year; I need it to work for every year.
You need conditional logic to handle dates before April 15th in the current year: ``` SELECT (case when month(getdate()) > 4 or month(getdate()) = 4 and day(getdate()) > 15 then DATEDIFF(day, cast(year(getdate()) as varchar(255)) + '-04-15', GETDATE()) else DATEDIFF(day, cast(year(getdate()) - 1 as varchar(255)) + '-04-15', GETDATE()) end) ```
Just pop the current year into your query ``` SELECT DATEDIFF(dd, '04/15/'+ CAST(YEAR(GETDATE()) AS CHAR(4)), GETDATE()) ```
SQL for amount of days since specified date each year (SQL Server)
[ "", "sql", "sql-server", "datediff", "" ]
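The conditional logic in the accepted answer above (use last year's April 15th when today falls before this year's) is easy to verify outside SQL. A small Python sketch of the same rule:

```python
from datetime import date

def days_since_april_15(today):
    """Days since the most recent April 15th; never exceeds one year."""
    anniversary = date(today.year, 4, 15)
    if today < anniversary:                      # before Apr 15 this year
        anniversary = date(today.year - 1, 4, 15)
    return (today - anniversary).days

print(days_since_april_15(date(2015, 4, 20)))   # -> 5
print(days_since_april_15(date(2016, 1, 15)))   # -> 275
```

The result wraps back to 0 every April 15th, so it never exceeds 366 even across a leap year.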
Hi so here is the table: ``` Select * from LogTable order by insert_time desc ``` LogTable: ``` |user_id|action|object_id|insert_time| |2 |start |123 |20.04.2015 | |2 |stop |123 |19.04.2015 | |2 |start |123 |17.04.2015 | |1 |stop |321 |16.04.2015 | |1 |start |321 |12.04.2015 | |3 |start |1234 |11.04.2015 | |4 |start |12345 |5.04.2015 | |4 |stop |12345 |3.04.2015 | ... ``` Now I want to **select all user ids that have their own same object ever stopped but never started again**. As a result, **only user with id=1** should be selected. How should the sql query be written?
You would seem to want all users where the last value for `action` is `stop`. Here is one method using `row_number()`: ``` select lt.* from (select lt.*, row_number() over (partition by user_id, object_id order by insert_time desc) as seqnum from logtable lt ) lt where seqnum = 1 and action = 'stop'; ```
Try this: ``` SELECT T1.* FROM LogTable T1 WHERE T1.Action = 'stop' AND NOT EXISTS ( SELECT 1 FROM LogTable T2 WHERE T1.UserId = T2.UserId AND T2.Action = 'start' AND T1.insert_time < T2.insert_time ) ``` [See sql fiddle](http://sqlfiddle.com/#!6/8d799/1)
complex SQL depending on each object insert time?
[ "", "sql", "sql-server", "" ]
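The accepted `ROW_NUMBER()` query above can be exercised directly against the question's sample data with Python's `sqlite3` (SQLite 3.25+ for window functions; dates are stored as ISO strings so that text ordering matches date ordering):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE logtable
                (user_id INT, action TEXT, object_id INT, insert_time TEXT)""")
conn.executemany("INSERT INTO logtable VALUES (?, ?, ?, ?)", [
    (2, "start", 123,   "2015-04-20"), (2, "stop",  123,   "2015-04-19"),
    (2, "start", 123,   "2015-04-17"), (1, "stop",  321,   "2015-04-16"),
    (1, "start", 321,   "2015-04-12"), (3, "start", 1234,  "2015-04-11"),
    (4, "start", 12345, "2015-04-05"), (4, "stop",  12345, "2015-04-03"),
])

# Keep only the latest row per (user_id, object_id); a user qualifies
# when that latest action is 'stop'.
rows = conn.execute("""
    SELECT user_id FROM (
        SELECT user_id, action,
               ROW_NUMBER() OVER (PARTITION BY user_id, object_id
                                  ORDER BY insert_time DESC) AS seqnum
        FROM logtable
    ) WHERE seqnum = 1 AND action = 'stop'
""").fetchall()
print(rows)  # -> [(1,)]
```

As expected, only user 1 qualifies: the latest row for every other (user, object) pair is a `start`.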
I want to perform sorting and filtering in my stored procedure. My create table for Holiday table: ``` CREATE TABLE [dbo].[Holiday]( [HolidaysId] [int] IDENTITY(1,1) NOT NULL, [HolidayDate] [date] NULL, [HolidayDiscription] [nvarchar](500) NULL, [Name] [nvarchar](max) NULL, CONSTRAINT [PK_Holiday] PRIMARY KEY CLUSTERED ( [HolidaysId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` My filtering criteria would be as: 1. Starts With 2. Is Equal to 3. Not Equal to. **Note:Please ignore HolidayId in filter comparision.** My Table:Holiday ``` HolidaysId int,Name nvarchar(500),HolidayDate date. ``` Sample Input: ``` HolidayId Name Date 1 abc 1/1/2015 2 pqr 1/2/2015 3 xyz 1/3/2015 ``` Output: ``` Case 1:Starts with(This is just for name column only.likewise i want to do for HolidayDate column too) Input:ab(filtering parameter) Query:where Name like '%ab%' order by Name(when sort column name is passed as parameter in stored procedure for column to sort(for eg:Name)) output:1,abc,1/1/2015 Case 2:Is Equal to(Same as above) Input:prr(filtering parameter) output:2,pqr,1/2/2015 Case 3:Not Equal to(Same as above) Input:bbb(filtering parameter) output:All Records ``` This is my stored procedure so far: ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_PagedItems] ( @Page int, @RecsPerPage int ) AS -- We don't want to return the # of rows inserted -- into our temporary table, so turn NOCOUNT ON SET NOCOUNT ON --Create a temporary table CREATE TABLE #TempItems ( ID int, Name varchar(50), HolidayDate date ) -- Insert the rows from tblItems into the temp. 
table INSERT INTO #TempItems (ID, Name,HolidayDate) SELECT HolidaysId,HolidayDiscription,HolidayDate FROM holiday -- Find out the first and last record we want DECLARE @FirstRec int, @LastRec int SELECT @FirstRec = (@Page - 1) * @RecsPerPage SELECT @LastRec = (@Page * @RecsPerPage + 1) -- Now, return the set of paged records, plus, an indiciation of we -- have more records or not! SELECT *, MoreRecords = ( SELECT COUNT(*) FROM #TempItems TI WHERE TI.ID >= @LastRec ) FROM #TempItems WHERE ID > @FirstRec AND ID < @LastRec -- Turn NOCOUNT back OFF SET NOCOUNT OFF ``` Now there are 4 things i would send to my stored procedure are: 1. Page no 2. PageSize(number of records to retrive) 3. Sorting Column Name(Name Or HolidayDate) 4. My filter Column name(Name of Holidaydate) and Operator like StartWith or Equal to or not equal to.(ColumnName and Operator) Can anybody help me to perform sorting and filtering and if any performance optimization related changes is there then please please do suggest me.
This is how i have done and i am getting expected output but still i want to take improvement suggestion from all of you if there is any. ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[HolidayFetchList] @pageno int, @pagesize int, @sortorder varchar(10), @sortcolumn varchar(100), @filter varchar(max), @count int OUTPUT AS BEGIN declare @Start int=(@pageno)*@pagesize; declare @End int=@Start+@pagesize; SET NOCOUNT ON; DECLARE @tblHoliday AS TABLE (HolidaysId int,HolidayDate date,HolidayDiscription nvarchar(500),HolidayName nvarchar(max),RN int) declare @sql varchar(max)= ' select HolidaysId,HolidayDate,HolidayDiscription,HolidayDiscription as HolidayName,ROW_NUMBER() OVER (ORDER BY '+@sortcolumn + ' '+@sortorder+' ) AS RN from Holiday WHERE 1=1 '+@filter print @sql INSERT INTO @tblHoliday exec (@sql) select @count=COUNT(*) from @tblHoliday print @count select * from @tblHoliday where RN>@Start and RN<=@End order by RN END ``` Please do give me any suggestion if you have any.
I've not tested this, but something like this you can use as starter and do modifications to make it stable: ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_PagedItems] ( @ID int = NULL, @Name varchar(50) = NULL, @HolidayDate date = NULL, @SortCol varchar(20) = '', @Page int=1, @RecsPerPage int=10 -- default size, you can change it or apply while executing the SP ) AS BEGIN -- We don't want to return the # of rows inserted -- into our temporary table, so turn NOCOUNT ON SET NOCOUNT ON --Create a temporary table CREATE TABLE #TempItems ( ID int, Name varchar(50), HolidayDate date ) -- Insert the rows from tblItems into the temp. table INSERT INTO #TempItems (ID, Name,HolidayDate) SELECT HolidaysId, HolidayDiscription, HolidayDate FROM holiday -- Find out the first and last record we want DECLARE @FirstRec int, @LastRec int SELECT @FirstRec = (@Page - 1) * @RecsPerPage SELECT @LastRec = (@Page * @RecsPerPage + 1) -- Now, return the set of paged records, plus, an indiciation of we -- have more records or not! ; WITH CTE_Results AS ( SELECT ROW_NUMBER() OVER (ORDER BY CASE WHEN @SortCol = 'ID_Asc' THEN ID END ASC, CASE WHEN @SortCol = 'ID_Desc' THEN ID END DESC, CASE WHEN @SortCol = 'Name_Asc' THEN Name END ASC, CASE WHEN @SortCol = 'Name_Desc' THEN Name END DESC, CASE WHEN @SortCol = 'HolidayDate_Asc' THEN HolidayDate END ASC, CASE WHEN @SortCol = 'HolidayDate_Desc' THEN HolidayDate END DESC ) AS ROWNUM, ID, Name, HolidayDate FROM #TempItems WHERE (@ID IS NULL OR ID = @ID) AND (@Name IS NULL OR Name LIKE '%' + @Name + '%') AND (@HolidayDate IS NULL OR HolidayDate = @HolidayDate) ) SELECT ID, Name, HolidayDate FROM CTE_Results WHERE ROWNUM > @FirstRec AND ROWNUM < @LastRec ORDER BY ROWNUM ASC -- Turn NOCOUNT back OFF SET NOCOUNT OFF END GO ``` You can check the blog posts I've written on: 1. 
[Creating Stored Procedures with Dynamic Search (filter)](http://sqlwithmanoj.com/2011/12/30/creating-stored-procedures-with-dynamic-search-filter/) 2. [Creating Stored Procedures with Dynamic Search & Paging (Pagination)](http://sqlwithmanoj.com/2011/12/30/creating-stored-procedures-with-dynamic-search-paging-pagination/) 3. [Creating Stored Procedure with Dynamic Search, Paging and Sorting](http://sqlwithmanoj.com/2011/12/30/creating-stored-procedure-with-dynamic-search-paging-and-sorting/) 4. You can also use the FETCH-OFFSET clause for Pagination if you are on SQL 2012 or more, [link](http://sqlwithmanoj.com/2011/07/28/tsql-denali-new-feature-offset-fetch-clause-for-pagingpagination/).
how to perform sorting and filtering in stored procedure with performance optimization?
[ "", "sql", "sql-server", "select", "stored-procedures", "sql-server-2008-r2", "" ]
I have the following value format : **1234567890** I want to convert it to the following format : **123-45A67890** Table Name: Test Column Name : MyCode Note that I am using Microsoft SQL 2012.
You can use `STUFF` . Something like this ``` DECLARE @v VARCHAR(15) = '1234567890' SELECT STUFF(STUFF(@v,4,0,'-'),7,0,'A') ``` Your `SELECT` would be ``` SELECT STUFF(STUFF(MyCode,4,0,'-'),7,0,'A') FROM Test ``` Your `UPDATE` would be ``` UPDATE Test SET MyCode = STUFF(STUFF(MyCode,4,0,'-'),7,0,'A') ```
Does SQL Server support the `PASTE` function? ``` paste(paste('1234567890',7,0,'A'),4,0,'-') ``` For example: ``` select paste(paste(column_name,7,0,'A'),4,0,'-') from table_name ``` or ``` update table_name set column_name = paste(paste(column_name,7,0,'A'),4,0,'-') ```
updating a text with additional characters in SQL
[ "", "sql", "sql-server", "character", "" ]
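T-SQL's `STUFF(s, start, length, text)` deletes `length` characters at 1-based position `start` and inserts `text` there. A small Python re-implementation of that semantics makes it easy to check the nested call from the accepted answer:

```python
def stuff(s, start, length, insert):
    """Python equivalent of T-SQL STUFF (1-based start position)."""
    return s[:start - 1] + insert + s[start - 1 + length:]

value = "1234567890"
# Insert '-' at position 4, then 'A' at position 7 of the result.
result = stuff(stuff(value, 4, 0, "-"), 7, 0, "A")
print(result)  # -> 123-45A67890
```

Note that the second `STUFF` positions are counted against the already-modified string, which is why inserting at 7 lands the `A` after the `-45`.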
My table 'Customer' contains customerid, firstname, lastname, company, city, state, country, email, invoicetotal. Question: For countries that have at least two customers using yahoo as email provider, display the name alongside the revenue. My solution: ``` select county,sum(invoiceTotal)from customer where email like '%yahoo%' group by Country,Email having Count(Country)>2 ``` I am unable to get the proper result: the number of rows displayed in my output differs from the number of rows in the expected output. Can anyone tell me where I have gone wrong?
You are grouping by `email` as well - just group by `Country` and you should be fine ``` select county, sum(invoiceTotal) from customer where email like '%yahoo%' group by Country having Count(Country)>2 ```
You can't group by `email`, since that's unique. Fortunately, you don't have to.
sql query for the following statement?
[ "", "sql", "" ]
I have a table with 3 columns - a timestamp, a real, and a string. The String columns holds various time frequency values (like Minutes, Hours, Days). I want to take the value in my timestamp column and add to it the values in the other columns. For example I'd like to do something like ``` select timestamp '2015-04-20 01:00' + interval '2 hours' ``` if I had a table with the following columns and values ``` action_start_time rate frequency 2015-04-20 01:00 2 hours ```
You can use following query to convert `rate` and `frequency` columns to ***interval*** ``` select format('%s %s',rate,frequency)::interval from actions ``` and then get the exact timestamp just by adding the interval with `action_start_time` *column* ``` select action_start_time + format('%s %s',rate,frequency)::interval my_timestamp from actions ``` [**demo-sqlfiddle**](http://sqlfiddle.com/#!15/4bcd9/1)
Simply: ``` SELECT action_start_time + rate * ('1'|| frequency)::interval ``` The whole trick is in multiplying an interval value: ``` postgres=# select 3 * interval '1 hour'; ┌──────────┐ │ ?column? │ ╞══════════╡ │ 03:00:00 │ └──────────┘ (1 row) ``` You can create any interval value this way without having to manipulate strings. It would of course be better if your `frequency` column were the native PostgreSQL `interval` type.
Postgresql using date interval and SELECT statement
[ "", "sql", "postgresql", "intervals", "" ]
How can I make following query work? This is inside stored procedure `@GetClosedMeetings` `bit` parameter passed true to get the meetings which are finished, otherwise get planned and current meetings. (Status: 0 - Planned, 1 - Current, 2 - closed) ``` SELECT MeetingId, Status FROM dbo.Meeting WHERE Status IN (CASE WHEN @GetClosedMeetings = 1 THEN 1 ELSE @OtherStatuses) ```
You missed the `END` in the `CASE` expression: ``` SELECT MeetingId, Status FROM dbo.Meeting WHERE Status IN (CASE WHEN @GetClosedMeetings = 1 THEN 1 ELSE @OtherStatuses end) ```
If I understand your requirement correctly, if `@GetClosedMeetings` is `TRUE`, you want to return `Meetings` that are finished, which from your question seems to have a `STATUS` of 2. On the other hand, if `@GetClosedMeetings` is `FALSE`, then you want to get planned and current meetings. Here is one way to achieve this: ``` SELECT MeetingId, Status FROM dbo.Meeting WHERE (@GetClosedMeetings = 1 AND Status = 2) OR (@GetClosedMeetings = 0 AND Status IN(0, 1)) ```
CASE syntax inside a WHERE clause with IN on SQL Server
[ "", "sql", "sql-server", "case", "" ]
This question is about a `DELETE` query involving two tables. I have two tables called `TableOne` and `TableTwo`. I have the same columns in `TableTwo` as in `TableOne`. ``` CREATE TABLE TABLEONE ( ColumnOne INT, ColumnTwo INT ); CREATE TABLE TABLETWO ( ColumnOne INT, ColumnTwo INT ); ``` **[HERE IS SQLFIDDLE](http://sqlfiddle.com/#!6/70f22):** But they differ in the number of records each table contains. In both tables, `ColumnOne` and `ColumnTwo` can contain duplicate values. I want to delete a record in `TableTwo` based on its `ColumnTwo` value. For example, suppose there are the following records in `TableTwo` ``` ColumnOne ColumnTwo 1 200 1 300 1 400 2 200 2 100 ``` And there are the following records in `TableOne` (it contains the records that need to be deleted) ``` ColumnOne ColumnTwo 1 200 2 200 2 100 ``` According to my requirement, only records containing 200 in `ColumnTwo` of the table `TableTwo` should be deleted. I have written the following query to delete the records in `TableTwo` ``` DELETE FROM TableTwo WHERE ColumnOne IN (SELECT l.ColumnOne FROM TableOne l WHERE l.ColumnTwo = TableTwo.ColumnTwo ) ``` This query deletes records containing 200 and 100 as well. But I need to delete only records containing 200, as it is associated with all the unique `ColumnOne` values in `TableTwo`. Requirement for deletion: the specified `ColumnTwo` value in `TableTwo` must be associated with all the unique values of `ColumnOne`. Can someone help me achieve this?
The `Column2` values in `TableOne` that are associated with all groups are found by: ``` select column2 from TableOne group by column2 having count(*) = (select count(distinct column1) from tableone); ``` To delete these values from `TableTwo`: ``` delete t2 from TableTwo t2 where t2.column2 in (select column2 from TableOne group by column2 having count(*) = (select count(distinct column1) from tableone) ); ``` Note: if entire rows can be duplicated, then the `count(*)` should be `count(distinct column1)`.
Try this one: ``` DELETE FROM @TABLETWO WHERE ColumnTwo IN (SELECT ColumnTwo FROM @TABLETWO GROUP BY ColumnTwo HAVING COUNT(DISTINCT ColumnOne) > 1) AND ColumnOne IN (SELECT ColumnOne FROM @TABLEONE) ```
SQL Server : delete a record only if it is associated with all the groups of the table
[ "", "sql", "sql-server", "delete-row", "" ]
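A self-contained check of the accepted approach above, run through Python's `sqlite3` on the question's sample data (the `count(distinct ...)` variant from the answer's note is used, since the sample rows repeat `ColumnOne` values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableOne (ColumnOne INT, ColumnTwo INT)")
conn.execute("CREATE TABLE TableTwo (ColumnOne INT, ColumnTwo INT)")
conn.executemany("INSERT INTO TableOne VALUES (?, ?)",
                 [(1, 200), (2, 200), (2, 100)])
conn.executemany("INSERT INTO TableTwo VALUES (?, ?)",
                 [(1, 200), (1, 300), (1, 400), (2, 200), (2, 100)])

# Delete ColumnTwo values that appear (in TableOne) under every distinct
# ColumnOne value -- here only 200 qualifies.
conn.execute("""
    DELETE FROM TableTwo
    WHERE ColumnTwo IN (
        SELECT ColumnTwo FROM TableOne
        GROUP BY ColumnTwo
        HAVING COUNT(DISTINCT ColumnOne) =
               (SELECT COUNT(DISTINCT ColumnOne) FROM TableOne))
""")
remaining = sorted(conn.execute("SELECT * FROM TableTwo").fetchall())
print(remaining)  # -> [(1, 300), (1, 400), (2, 100)]
```

Only the rows with `ColumnTwo = 200` are removed, because 200 is the only value associated with both distinct `ColumnOne` values in `TableOne`.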
I am using the below query ``` SELECT * FROM [dbo].[QUANTITY] WHERE getdate() BETWEEN fromdate AND todate ``` If the `todate` is `null` (`1900-01-01 00:00:00.000`), those rows are filtered out. I want the rows whose `todate` is `null` (`1900-01-01 00:00:00.000`) to be included in the result as well. How can this be done?
You could explicitly check it with a combination of `and` and `or` operators: ``` SELECT * FROM [dbo].[QUANTITY] WHERE (getdate() >= fromdate AND (todate IS NULL OR todate = '1900-01-01 00:00:00.000') ) OR getdate() BETWEEN fromdate AND todate ```
``` SELECT * FROM [dbo].[QUANTITY] WHERE getdate() >= fromdate AND getdate() <= COALESCE(todate ,NULLIF(todate,'1900-01-01 00:00:00.000') ,'9999-12-31 23:59:59.997' ) ```
Using getdate()
[ "", "sql", "sql-server", "select", "getdate", "" ]
I have to update XML of table based on some conditions of that XML. Sample XML: ``` <CountryValues> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>2</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>3</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>4</Month> <PlaceValue>10</PlaceValue> </CountryRow> <CountryRow> <CountryName>Australia</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Australia</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Australia</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> <PlaceValue>4</PlaceValue> </CountryRow> </CountryValues> ``` Each Country can have multiple Places. I have to group on the basis of Country and Places, then I have to update PlaceValues to null for PlaceValue = 0 except 0 which is immediately preceding PlaceValue > 1. Example in this sample, for Country = Brazil and PlaceName = 1, PlaceValue for Month1 to Month2 will be Null but Month3 will remain 0 as its preceding Month4 which is greate than 0.
Basically, I see 2 ways of dealing with this. First - split xml to sql table/derived table, do your work and then combine into xml again. ``` declare @data xml = '<CountryValues> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>2</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>3</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>4</Month> <PlaceValue>10</PlaceValue> </CountryRow> </CountryValues>' ;with cte as ( select t.c.value('CountryName[1]', 'nvarchar(max)') as CountryName, t.c.value('PlaceName[1]', 'nvarchar(max)') as PlaceName, t.c.value('Month[1]', 'int') as [Month], t.c.value('PlaceValue[1]', 'int') as PlaceValue from @data.nodes('CountryValues/CountryRow') as t(c) ) select c1.CountryName, c1.PlaceName, c1.[Month], case when c1.PlaceValue = 0 and isnull(c2.PlaceValue, 0) <= 1 then null else c1.PlaceValue end as PlaceValue from cte as c1 left outer join cte as c2 on c2.CountryName = c1.CountryName and c2.PlaceName = c1.PlaceName and c2.[Month] = c1.[Month] + 1 for xml path('CountryRow'), root('CountryValues') ---------------------------------- <CountryValues> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>1</Month> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>2</Month> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>3</Month> <PlaceValue>0</PlaceValue> </CountryRow> <CountryRow> <CountryName>Brazil</CountryName> <PlaceName>Place 1</PlaceName> <Month>4</Month> <PlaceValue>10</PlaceValue> </CountryRow> </CountryValues> ``` Second way would be to use xquery inside the xml itself. 
The answer really depends on what you mean by *"immediately preceding PlaceValue > 1"*. I've assumed here that this means the month right before a month with value > 1.
You need to use the .modify() method of the xml data type (see here: <https://msdn.microsoft.com/en-us/library/ms187093.aspx>) and XQuery to find which nodes you should change. You can find some nice examples in this link: <http://www.mssqltips.com/sqlservertip/2738/examples-of-using-xquery-to-update-xml-data-in-sql-server/>
How to Update XML in SQL based on values in that XML
[ "", "sql", "sql-server", "xml", "t-sql", "sqlxml", "" ]
Is it possible to return all rows between matched rows? What I'm trying to do is query an audit table that jobs write to. There is a clear start audit message and end audit message, as well as info between. Is it possible to get a select statement to return all the rows between the "start" audit entry and the "end" entry? Example of data. ``` DATE TIME USER ENTRY ----------------------- -------- -------------------- -------------------------------------------------------------------- 2015-04-13 07:30:15.150 07:30:15 CmdLne SOME JOB STARTED FOR PROCESSING DATE 13/04/2015 2015-04-13 07:31:15.150 07:31:15 CmdLne PROCESSED 10 WHATEVERS 2015-04-13 07:32:25.150 07:32:25 CmdLne PROCESSED 10 SOMETHINGS 2015-04-13 07:33:33.150 07:33:33 CmdLne PROCESSED 40 XYZ 2015-04-13 07:33:34.150 07:33:34 CmdLne SOME JOB FINISHED FOR PROCESSING DATE 13/04/2015 ``` Because it's unknown how many audit entries there will be during an audit run, it has to be able to select everything between the "start" entry and the "end" entry. Is this possible?
Here's a simple and clean solution. If you have any questions or need anything else let me know. ``` SELECT A.[DATE], A.[TIME], A.[User], A.[Entry] FROM @Table A CROSS APPLY(SELECT MIN([Date]) FROM @Table WHERE [Entry] LIKE 'Some Job%') CA_min(start_dt) CROSS APPLY(SELECT MAX([Date]) FROM @Table WHERE [Entry] LIKE 'Some Job%') CA_max(end_dt) WHERE [DATE] BETWEEN start_dt AND end_dt ```
# Data sample ``` DECLARE @Table TABLE ( [DATE] DATETIME , [TIME] TIME , [USER] VARCHAR(100) , [ENTRY] VARCHAR(1000) ) INSERT INTO @Table VALUES ( '2015-04-13 07:30:15.150', '07:30:15', 'CmdLne', 'SOME JOB STARTED FOR PROCESSING DATE 13/04/2015' ), ( '2015-04-13 07:31:15.150', '07:31:15', 'CmdLne', 'PROCESSED 10 WHATEVERS' ), ( '2015-04-13 07:32:25.150', '07:32:25', 'CmdLne', 'PROCESSED 10 SOMETHINGS' ), ( '2015-04-13 07:33:33.150', '07:33:33', 'CmdLne', 'PROCESSED 40 XYZ' ), ( '2015-04-13 07:33:34.150', '07:33:34', 'CmdLne', 'SOME JOB FINISHED FOR PROCESSING DATE 13/04/2015' ) , ( '2015-04-13 07:30:15.150', '07:30:15', 'Powershell', 'SOME JOB STARTED FOR PROCESSING DATE 13/04/2015' ), ( '2015-04-13 07:31:15.150', '07:31:15', 'Powershell', 'PROCESSED 10 WHATEVERS' ), ( '2015-04-13 07:32:25.150', '07:32:25', 'Powershell', 'PROCESSED 10 SOMETHINGS' ), ( '2015-04-13 07:33:33.150', '07:33:33', 'Powershell', 'PROCESSED 40 XYZ' ), ( '2015-04-13 07:33:34.150', '07:33:34', 'Powershell', 'SOME JOB FINISHED FOR PROCESSING DATE 13/04/2015' ) ``` # Final query ``` SELECT * FROM ( SELECT ROW_NUMBER() OVER ( PARTITION BY CONVERT(DATE, T.DATE), T.[USER] ORDER BY T.DATE ) AS RN , * FROM @Table AS T ) T WHERE T.RN NOT IN ( 1, 2 ) ```
SQL - Select all rows between two string rows
[ "", "sql", "sql-server", "t-sql", "" ]
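The accepted `CROSS APPLY` answer above boils down to "bound the scan by the earliest and latest marker rows". That reduction is portable; here it is replayed on the sample data with Python's `sqlite3`, using plain scalar subqueries instead of `CROSS APPLY` (which SQLite lacks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (dt TEXT, entry TEXT)")
conn.executemany("INSERT INTO audit_log VALUES (?, ?)", [
    ("2015-04-13 07:30:15", "SOME JOB STARTED FOR PROCESSING DATE 13/04/2015"),
    ("2015-04-13 07:31:15", "PROCESSED 10 WHATEVERS"),
    ("2015-04-13 07:32:25", "PROCESSED 10 SOMETHINGS"),
    ("2015-04-13 07:33:33", "PROCESSED 40 XYZ"),
    ("2015-04-13 07:33:34", "SOME JOB FINISHED FOR PROCESSING DATE 13/04/2015"),
])

# Everything between the first and last 'SOME JOB%' marker, inclusive.
rows = conn.execute("""
    SELECT entry FROM audit_log
    WHERE dt BETWEEN (SELECT MIN(dt) FROM audit_log WHERE entry LIKE 'SOME JOB%')
                 AND (SELECT MAX(dt) FROM audit_log WHERE entry LIKE 'SOME JOB%')
    ORDER BY dt
""").fetchall()
print(len(rows))  # -> 5
```

All five rows come back, however many "info" entries land between the start and end markers.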
I am having some real trouble with this CASE statement... It says I get an error near > which seems to point to the 3rd line where its checking > 15. However I don't see anything wrong with this query. I tried searching around on google and here and didn't see anyone with this exact problem, hopefully you can point it out. ``` SELECT CASE WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 15 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 30 THEN '15+' ELSE DATEDIFF(day, C1.RecvdDate, GETDATE()) > 30 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 60 THEN '30+' ELSE DATEDIFF(day, C1.RecvdDate, GETDATE()) > 60 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 90 THEN '60+' ELSE DATEDIFF(day, C1.RecvdDate, GETDATE()) > 90 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 120 THEN '90+' ELSE '120+' END AS 'Days', C1.CallID AS 'Ticket#', A1.WhoAcknow AS 'Acknowledged By', C1.RecvdDate AS 'Received On', C1.Category, COUNT(*) As Assignments FROM [HEAT].[heat].[CallLog] C1 LEFT JOIN [HEAT].[heat].[Asgnmnt] A1 ON C1.CallID = A1.CallID WHERE DATEDIFF(day, C1.RecvdDate, GETDATE()) > 15 AND C1.CallStatus = 'Open' AND C1.Category <> 'welfare' AND C1.CustType <> 'IFS' AND A1.Resolution = '' AND (A1.GroupName = 'Help Desk' AND A1.Assignee = 'EITS PIV Badge') GROUP BY C1.CallID, A1.WhoAcknow, C1.RecvdDate, C1.Category ORDER BY C1.RecvdDate ASC ``` Thanks in advance!
Your problem lies in the `CASE` expression. When using multiple conditions, you use `WHEN` instead of `ELSE`. `ELSE` should only be used once. ``` SELECT CASE WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 15 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 30 THEN '15+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 30 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 60 THEN '30+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 60 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 90 THEN '60+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 90 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 120 THEN '90+' ELSE '120+' END AS 'Days'... ```
You've got the syntax wrong; you can't use multiple ELSE clauses - it should be multiple WHENs: ``` SELECT CASE WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 15 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 30 THEN '15+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 30 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 60 THEN '30+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 60 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 90 THEN '60+' WHEN DATEDIFF(day, C1.RecvdDate, GETDATE()) > 90 AND DATEDIFF(day, C1.RecvdDate, GETDATE()) <= 120 THEN '90+' ELSE '120+' END AS 'Days' ``` See [the documentation](https://msdn.microsoft.com/en-us/library/ms181765.aspx) for more information and examples.
SQL Error (102): Incorrect syntax near '>'
[ "", "sql", "sql-server", "" ]
Why is this query returning an error? I am trying to load the code for the table as a constant string, the flag for the data again as a constant string, the time of insertion, and the counts for a table. I thought, let me try and run the select before writing the inserts. But for some reason, it fails, listing column names from the tables from which I am trying to get a count. All I need is two constant values, one date and one count. I tried removing the group by as well; it throws another error. **hive -e "select "WEB" as src\_cd, "1Hr" as Load\_Flag, from\_unixtime((unix\_timestamp(substr(sysDate, 0, 11), 'dd/MMM/yyyy')), 'MM/dd/yyyy') as time, count(\*) from weblog where year=2015 and month=04 and day=17 group by src\_cd, load\_flag, time ;** " OK Time taken: 1.446 seconds FAILED: SemanticException [Error 10004]: Line 4:9 Invalid table alias or column reference 'src\_cd': (possible column names are: clientip, authuser, sysdate, clfrequest.........(and so on) year, month, day)
The double quotes on the literals are a problem. Here is a simpler version that I tested successfully: ``` hive -e "select 'WEB' , '1Hr' , from_unixtime((unix_timestamp(substr(sysDate, 0, 11), 'dd/MMM/yyyy')), 'MM/dd/yyyy') as time, count(*) from weblog where year=2015 and month=04 and day=17 group by 1,2 , from_unixtime((unix_timestamp(substr(sysDate, 0, 11), 'dd/MMM/yyyy')), 'MM/dd/yyyy') ; " ```
Just leave out the constants in the `group by`. It isn't doing anything: ``` select "WEB" as src_cd, "1Hr" as Load_Flag, from_unixtime((unix_timestamp(substr(sysDate, 0, 11), 'dd/MMM/yyyy')), 'MM/dd/yyyy') as time, count(*) from weblog where year = 2015 and month = 04 and day = 17 group by from_unixtime((unix_timestamp(substr(sysDate, 0, 11), 'dd/MMM/yyyy')), 'MM/dd/yyyy') ``` I don't think Hive allows column aliases in the `group by`, so you need to put in the entire expression or use a subquery/CTE.
How to Select count and literal value in hive
[ "", "sql", "hadoop", "mapreduce", "hive", "hortonworks-data-platform", "" ]
I have two tables: **test:** ``` id name last_name price ``` **test1** ``` id name price ``` I want to combine the two price fields from the two tables and calculate their overall average, but my query returns a separate average for each table. ``` SELECT AVG(price) FROM test as p UNION SELECT AVG(price) FROM test1 as p ```
How about this: ``` SELECT AVG(price) AS P FROM ( SELECT price FROM test UNION ALL SELECT price FROM test1 ) AS TMP ``` I think you need `UNION ALL` to really include ALL prices; otherwise duplicate prices will be omitted, which may influence the average.
You can do it using a subquery or a WITH clause: ``` WITH all_data as( SELECT id,name,price FROM test as t UNION ALL SELECT id,name,price FROM test1 as t2 ) SELECT AVG(price) FROM all_data ```
Combining two fields from two different tables in SQL
[ "", "sql", "database", "" ]
I've got this situation where I think the best solution is using a pivot function, but I don't really know how. Or maybe there is a much better way to do this... I got this table as a source: ``` ID | version | code | property | value -----|---------|------|----------|------ 110| 1| AA| prop1| 10 110| 1| AA| prop2| 20 110| 1| AA| prop3| 30 110| 1| BB| prop1| 10 110| 1| BB| prop2| 40 110| 1| BB| prop3| 20 110| 2| AA| prop1| 50 120| 1| BB| prop2| 60 120| 2| AA| prop3| 80 ``` What I want to end up with is the following: ``` ID | version | code | prop1 | prop2 | prop3 -----|---------|------|-------|-------|------ 110| 1| AA| 10| 20| 30 110| 1| BB| 10| 40| 20 110| 2| AA| 50| | 120| 1| BB| | 60| 120| 2| AA| | | 80 ``` So you see I don't do aggregation, just a pivot over the first 3 columns. Is this possible in TSQL? I'm using SQL Server 2012.
You want grouping on id, version and code, so it is a simple pivot: ``` Select * from TableName pivot(max(value) for property in([prop1],[prop2],[prop3]))p ``` If the table has more columns, you will need to pick the right columns first: ``` ;with cte as(Select id, version, code, property, value from TableName) Select * from cte pivot(max(value) for property in([prop1],[prop2],[prop3]))p ```
If the `property` is always `prop1`, `prop2` and `prop3`, you could do this using conditional aggregation: ``` SELECT ID , Version , Code , MAX(CASE WHEN Property = 'prop1' THEN VALUE END) AS [prop1] , MAX(CASE WHEN Property = 'prop2' THEN VALUE END) AS [prop2] , MAX(CASE WHEN Property = 'prop3' THEN VALUE END) AS [prop3] FROM SampleData GROUP BY ID, Version, Code ``` --- Here is a dynamic approach. Read this [**article**](http://www.sqlservercentral.com/articles/Crosstab/65048/) for reference. [**SQL Fiddle**](http://sqlfiddle.com/#!6/74490/1/0) ``` DECLARE @sql1 VARCHAR(4000) = '' DECLARE @sql2 VARCHAR(4000) = '' DECLARE @sql3 VARCHAR(4000) = '' SELECT @sql1 = 'SELECT ID , Version , Code ' SELECT @sql2 = @sql2 + ' , MAX(CASE WHEN Property = ''' + Property + ''' THEN VALUE END) AS [' + Property + ']' + CHAR(10) FROM( SELECT DISTINCT Property FROM SampleData )t ORDER BY Property SELECT @sql3 = 'FROM SampleData GROUP BY ID, Version, Code ORDER BY ID, Version, Code' PRINT(@sql1 + @sql2 + @sql3) EXEC (@sql1 + @sql2 + @sql3) ```
Pivot table without aggregation and multiple pivot columns
[ "", "sql", "sql-server", "t-sql", "sql-server-2012", "pivot", "" ]
At school we saw that you can select this way: ``` SELECT * FROM BLALBLA WHERE id=1 AND id = 2 AND id=3 AND id=4 ``` But how can we do: ``` SELECT FROM BLABLA WHERE id = 1 to 4 ``` Thank you goldiman
``` SELECT * FROM BLABLA WHERE id >= 1 AND id <= 4 ```
There are several ways. 1) If the `id` field is an integer: ``` SELECT * FROM BLALBLA WHERE id between 1 and 4 SELECT * FROM BLALBLA WHERE id >= 1 AND id <= 4 SELECT * FROM BLALBLA WHERE id IN (1,2,3,4) ``` 2) If `id` is a varchar: ``` SELECT * FROM BLALBLA WHERE id IN ('1','2','3','4') ```
Select in a primary key interval SQL
[ "", "mysql", "sql", "primary-key", "" ]
I have a form with several inputs that are then passed on to a query. This is pretty straightforward, except I want one of my parameters to be used in an IN statement in my SQL, like so: ``` Select sum(id) as numobs from table where year=[form]![form1]![year] and (group in([form]![form1]![group])); ``` When [form]![form1]![group]="3,4" it queries "group in(34)", and if [form]![form1]![group]="3, 4" then I get an error saying "This expression is typed incorrectly, or it is too complex to be evaluated..." I would like to be able to enter multiple numbers separated by a comma in a field in a form, and then have a query use the result in an IN statement. Does this seem doable? I know with VBA I could do if-then statements to look at every possible combination of group numbers (there are over 40 groups, so combinatorially there are over 4 trillion ways to combine the 40+ groups, since the sum of 42 choose k from 0 to 42 is over 4 trillion), so using the IN statement seems like a better option. Any ideas on how to get the IN statement to work with a parameter from a form? Thanks
This can be very simply done with a sub in a VBA module: ``` Sub MakeTheQuery() Dim db As DAO.Database Dim strSQL As String Dim strElements As String Set db = CurrentDb() strSQL = "SELECT Sum(id) AS numobs " & _ "FROM ErrorKey WHERE ErrorKey.ID In ({0});" ' Example: "3,5" strElements = Forms!YourForm!YourControl.Caption ' Assign SQL to query db.QueryDefs!YourQuery.SQL = Replace(strSQL, "{0}", strElements) End Sub ```
I can't figure out a way to do it with `IN`. My solution: in a VBA module in your database, write a sub to build a query based on the values in the form control. Let's use the `Split` method to make an array, which we can iterate through to build a query. ``` Sub MakeTheQuery() Dim strSQL As String, db As DAO.Database Set db = CurrentDb() strSQL = "SELECT sum(id) AS numobs _ FROM table WHERE (" Value = Forms!YourForm!YourControl.Caption 'The property may vary with controls ^ 'Create an array from the control values anArray = Split(Value, ",") 'Add to the query for each value For i = LBound(anArray) To (UBound(anArray) - 1) strSQL = strSQL & " ErrorKey.ID = " & anArray(i) & " OR" Next i 'Wrap it all up strSQL = strSQL & " ErrorKey.ID = " & anArray(UBound(anArray)) & ")" 'Assign SQL to query db.QueryDefs!YourQuery.SQL = strSQL End Sub ``` Please keep in mind this assumes that the string from the form control will be formatted with **no spaces**, but you can easily change that by using `anArray = Split(Value, ", ")` instead (note the space added after the comma). Assign the VBA sub to a button or other control on your form by going to `Layout View`, clicking on a control, going to the `Event` tab on the Property Sheet, and hitting the `...` button beside On Click. Hit `Code Builder`. Put `Call MakeTheQuery` in the sub that it builds for you. If you want to run the query, too, put `DoCmd.OpenQuery "YourQuery"`. You can also use this method to [build a more complex, dynamic parameter query](https://stackoverflow.com/questions/28199777/access-dynamic-query-better-to-build-one-conditional-sql-query-or-multiple-que/28199953#28199953).
Using parameter from form in IN statement
[ "", "sql", "ms-access", "vba", "ms-access-2010", "" ]
I have a wide table with 100 columns. I need an SP which takes 100 parameters and then does the insert. I know how to do this manually. But given the table definition, and knowing that the SP parameters will have exactly the same names as the table columns, can you think of a better/faster way to generate this stored procedure?
I use SQL to write it for you. Check it out and let me know if it needs any tweaks or if you have any questions. ``` IF OBJECT_ID('yourTable') IS NOT NULL DROP TABLE yourTable; CREATE TABLE yourTable ( col1 INT, col2 VARCHAR(100), col3 NUMERIC(18,2) ) DECLARE @InputParams VARCHAR(MAX), @InsertColumns VARCHAR(MAX), @InsertParams VARCHAR(MAX); WITH CTE_columns AS ( SELECT COLUMN_NAME, UPPER(DATA_TYPE) data_type, '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(10)) + ')' max_length, CASE WHEN DATA_TYPE IN ('Numeric','Decimal') THEN CONCAT('(',NUMERIC_PRECISION,',',NUMERIC_SCALE,')') END prec_scale --@InsertColumns = COALESCE(@InsertColumns + ',','') + COLUMN_NAME, FROM INFORMATION_SCHEMA.COLUMNS A WHERE TABLE_NAME = 'yourTable' ) SELECT @InputParams = COALESCE(@InputParams + ',','') + CONCAT('@',column_name,' ',data_type,max_length,prec_scale), @InsertColumns = COALESCE(@InsertColumns + ',','') + COLUMN_NAME, @InsertParams = COALESCE(@InsertParams + ',','') + '@'+ COLUMN_NAME FROM CTE_columns SELECT 'CREATE PROCEDURE dbo.yourProc ' + @InputParams + ' AS INSERT INTO yourTable(' + @InsertColumns + ') VALUES (' + @InsertParams + '); GO' ``` Results(Formatting isn't great, but it works): ``` CREATE PROCEDURE dbo.yourProc @col1 INT,@col2 VARCHAR(100),@col3 NUMERIC(18,2) AS INSERT INTO yourTable(col1,col2,col3) VALUES (@col1,@col2,@col3); GO ```
For this type of scenario, I like to leverage table types. First, create the table type: `CREATE TYPE [YourType] AS TABLE (columns...)` In C#, populate a `DataTable` "template" by running this query (using a `DataAdapter`): ``` DECLARE @tt AS [YourType] SELECT * FROM @tt ``` Then add a row to the table and cycle through the columns to add the necessary values. Then pass the datatable as a parameter into your procedure: ``` CREATE PROCEDURE [YourProc] @tt [YourType] READONLY AS BEGIN --do stuff END ``` An added benefit of this approach is that you can pass multiple records into the procedure with a single call.
Generate insert statement which has same columns of a table in SQL Server
[ "", "sql", "sql-server", "t-sql", "" ]
I'm looking to implement a sort of Type 2 Slowly Changing Dimension-like behavior in my dimension table using PL/SQL's MERGE statement. It works just fine for updating existing values and inserting new ones. I'm looking to extend this functionality by not only updating existing values but creating a different row with the updated values, while preserving the row with the "outdated" values. In short, is it possible to do this? ``` MERGE INTO A USING B ON (A.ID = B.ID) WHEN MATCHED THEN UPDATE END_DATE ON THE EXISTING ROW; INSERT UPDATED VALUES IN A NEW ROW; WHEN NOT MATCHED THEN INSERT A NEW ROW WITH NEW VALUES; ``` Thank you guys in advance.
If you are truly doing a Type 2 dimension and want to use a merge, yes, it can be done, but it's not terribly straightforward. Effectively, you will need to compare your data using an inline view in the USING clause that features a column indicating whether the row is an insert or an update. This field gets joined to the slowly changing dimension table and drives whether an insert or an update occurs. This blog post describes the technique in great detail and has worked for us, albeit we used a hash to determine uniqueness as opposed to a column-by-column compare. [Load Slowly Changing Dimension Type 2 using Oracle Merge Statement](http://databobjr.blogspot.com/2011/04/load-slowly-changing-dimension-type-2.html)
You need to run two statements instead of one here (in your pseudo-code): ``` UPDATE END_DATE IN A WHERE A.ID = something_from_B INSERT new VALUES IN A ``` The insert needs to happen no matter what, so there is no need to check whether the row exists (and only update if it does). Just end-date all records that match B; this would end-date 0 to n rows. Then insert everything together without worrying whether a record has already been end-dated or not. Put this inside a `BEGIN...END;` if you want transactional atomicity.
PL/SQL Merge statement
[ "", "sql", "oracle", "oracle11g", "" ]
My query below is used to retrieve the list of items which have been ordered and returned by the customer. I'm confused about the last join, `ReturnCustomer`, where I want the query to return data with `status >= 20` and otherwise return `NULL`. Below is my query: ``` SELECT Product.id AS product_id, Product.supplier_product_id AS vip_id, Product.name AS product_name, detailSO.qty, detailRC.return_qty FROM Product RIGHT JOIN detailSO ON detailSO.product_id = Product.id RIGHT JOIN SalesOrder ON SalesOrder.id = detailSO.so_id AND SalesOrder.status >= 20 LEFT JOIN detailRC ON detailRC.sur_key = detailSO.sur_key LEFT JOIN ReturnCustomer ON ReturnCustomer.id = detailRC.rc_id AND ReturnCustomer.status >= 20 ``` If I use `LEFT JOIN`, it doesn't take `ReturnCustomer.status >= 20` into account, since it returns all data. On the other hand, if I use `RIGHT JOIN` it will only return data with `ReturnCustomer.status >= 20`. `LEFT JOIN Result:` ``` P_id Pp_id P_name i_qty r_qty P000001 P000001 Item 1 15 1 P000001 P000001 Item 1 5 1 P000002 P000002 Item 2 5 NULL ``` `RIGHT JOIN Result:` ``` P_id Pp_id P_name i_qty r_qty P000001 P000001 Item 1 15 1 ``` `Expected Result:` ``` P_id Pp_id P_name i_qty r_qty P000001 P000001 Item 1 15 1 P000001 P000001 Item 1 5 NULL <-- null since it comes from ReturnCustomer with status = 0 P000002 P000002 Item 2 5 NULL ``` I know I could probably solve this using a nested query; hopefully you guys can provide a better solution. Thanks in advance. **UPDATE:** Here is my simplified problem: [sqlfiddle](http://sqlfiddle.com/#!9/9af82/1)
`a LEFT JOIN b` works exactly like that: the `a` side will always be present, while `b` may be null. The `ON` clause only removes the `b` side; `a` won't be affected. The `WHERE` clause removes whole rows. So when ``` A has 1,2,3 B has 1,2 C has 1,3 ``` and those three are left joined, for example: ``` SELECT * FROM A LEFT JOIN B ON A.x = B.x LEFT JOIN C ON B.x = C.x ``` it would give: ``` A B C 1 1 1 2 2 3 ``` If the second `LEFT JOIN` joined to `A` instead, for example: ``` SELECT * FROM A LEFT JOIN B ON A.x = B.x LEFT JOIN C ON A.x = C.x ``` it would give: ``` A B C 1 1 1 2 2 3 3 ``` Any extra criteria on the last `ON` won't remove the `B` part, since it has already been joined before; any criteria in the `WHERE` part will remove the whole row. In your case, if you want to hide the `B` part, you should use neither `ON` nor `WHERE`; the correct way is to use `CASE WHEN` in the `SELECT` part, for example: ``` SELECT detailSO.product_id , detailSO.qty , CASE WHEN RC.id IS NULL THEN NULL ELSE detailRC.qty END AS x FROM SO LEFT JOIN detailSO ON detailSO.so_id = SO.id LEFT JOIN detailRC ON detailRC.sur_key = detailSO.sur_key LEFT JOIN RC ON RC.id = detailRC.rc_id AND RC.status >= 20 WHERE SO.status >= 20 ``` Result in SQL Fiddle: ``` product_id qty x P00001 15 1 P00001 5 (null) P00002 5 (null) ```
This may work: ``` SELECT Product.id AS product_id, Product.supplier_product_id AS vip_id, Product.name AS product_name, detailSO.qty, detailRC.return_qty FROM Product RIGHT JOIN detailSO ON detailSO.product_id = Product.id RIGHT JOIN SalesOrder ON SalesOrder.id = detailSO.so_id AND SalesOrder.status >= 20 LEFT JOIN detailRC ON detailRC.sur_key = detailSO.sur_key LEFT JOIN ReturnCustomer ON ReturnCustomer.id = detailRC.rc_id WHERE ReturnCustomer.status >= 20 ```
LEFT JOIN doesn't consider ON clause
[ "", "mysql", "sql", "" ]
I have an array `priority = ['HIGH', 'MEDIUM', 'LOW']` which is used to set an 'urgency' database column. I'd like to retrieve the data sorted by priority, but applying `Task.order(:urgency)` returns the results alphabetically (i.e. HIGH, LOW, MEDIUM). I'm using PostgreSQL for the database. I'd (obviously) like these to be returned from high to low priority. Is there a simple way to implement this, perhaps using the values' positions in the array?
A simple `CASE WHEN` can do the trick (PostgreSQL syntax used here): ``` scope :urgency_ordered, -> { order(<<-SQL) CASE tasks.urgency WHEN 'HIGH' THEN 'a' WHEN 'MEDIUM' THEN 'b' WHEN 'LOW' THEN 'c' ELSE 'z' END ASC, id ASC SQL } ``` Call it this way: ``` Task.urgency_ordered ```
I am late to the party, but I am surprised no one has come up with this answer yet: If you are using MySQL (untested, but it should work for Postgres, too), there is a shorter, safer and more generic way than any of the previous answers. It works with any field and is protected against SQL injection, so you can pass any list of values from the user. Add the following scope to your model or ApplicationRecord: ``` class Task < ActiveRecord::Base scope :order_by_field, ->(field, values) { order(sanitize_sql_for_order(["field(#{field}, ?)", values])) } end ``` You can now call the scope directly on your relation: ``` tasks.order_by_field(:priority, ["high", "medium", "low"]) ```
Rails - order on column's values (priority column)
[ "", "sql", "ruby-on-rails", "postgresql", "" ]
I am writing a Java program, and based on user input I have to perform an operation. If the user gives me an id (suppose id=3), I have to execute ``` select emp_name from employee where empid = 3 ``` If the user doesn't provide an id, I have to show everything from the table (here is my question: how to achieve this): ``` select emp_name from employee where empid = 'this should match everything' ``` I can't exclude the where clause since I am using a prepared statement and am just assigning values based on user input. id is of type 'number'. I also want to know how to match it if id is of type 'varchar'.
You should use parameters. I don't know enough Java to help with that part, but you could pass NULL to the query, like so: ``` select emp_name from employee where empid = (case when @parameter is null then empid else @parameter end) ```
There are several answers to your question; you can choose whichever you feel most comfortable with. ``` Method 1: Check the user input and construct 2 different queries: If the user enters "nothing" execute "select emp_name from employee", else execute "select emp_name from employee where empid=X", where X is the user input. Method 2: Use only one query as follows (I am writing pseudocode): if User.Input exists then X=User.Input else X="%" then execute: select emp_name from employee where empid like "X" ``` In any case, please don't forget to sanitize user input in order to avoid SQL injection attacks... ;)
How to write query in SQL Server to select everything
[ "", "sql", "sql-server", "" ]
The following SQL statement works fine. It shows both patient_id and serv_name, but I am trying to show patient_name instead of patient_id: ``` SELECT C1.patient_id, S.serv_name FROM Checkup_Details C INNER JOIN Services S ON (C.serv_id=S.serv_id), Checkup C1 WHERE C1.today = DATE() AND C1.check_id=C.check_id ORDER BY C.check_id ``` How am I supposed to do that? I think I need to add this SQL statement: ``` INNER JOIN Patient P ON (C1.patient_id=P.patient_id) ``` but I don't know exactly how.
Assuming the column patient_name is in Checkup (aliased C1), you have to put ``` SELECT C1.patient_name ... ``` instead of ``` SELECT C1.patient_id ... ```
Simply add an INNER JOIN clause to get the columns of the Patient table, and change C1.patient_id to P.patient_name: ``` SELECT P.patient_name, S.serv_name FROM Checkup_Details C INNER JOIN Services S ON (C.serv_id=S.serv_id) INNER JOIN Checkup C1 ON (C1.check_id=C.check_id) INNER JOIN Patient P ON (C1.patient_id=P.patient_id) WHERE C1.today = DATE() ORDER BY C.check_id ```
INNER JOIN SQL QUERY
[ "", "sql", "" ]
I have the following data in a column named `datos`. I want to update the text after the 8th slash (DR004), to be DR013. ``` /1/14/0/0/ / / /DR004/1/rttrgftgr/ZM003/0/0/0/1/0/ ``` I have tried using this ``` update sumcon set substr(datos, instr(datos, '/', 1, 8) + 1, 1) = decode(substr(datos, instr(datos, '/', 1, 8) + 1, 1), ' ', 'DR013') where nis_rad = 200000732; ``` but it's not working. Here is a sample of the data from `datos` column ``` /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / /DR008/1/ /ZM004/0/0/0/0/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/1/0/0/0/ /1/14/0/0/ / / /DR008/1/ /ZM004/0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/0/1/1/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ /1/14/0/0/ / / /DR001/1/ /ZM004/0/0/0/0/0/ /1/14/0/0/ / / / /1/ / /0/0/0/1/0/ ```
Per [@LalitKumarB's](https://stackoverflow.com/users/3989608/lalit-kumar-b) suggestion, change your UPDATE statement to ``` UPDATE SUMCON SET DATOS = SUBSTR(DATOS, 1, INSTR(DATOS, '/', 1, 8)) || 'DR013' || SUBSTR(DATOS, INSTR(DATOS, '/', 1, 9)) WHERE NIS_RAD = 200000732; ``` [SQLFiddle here](http://sqlfiddle.com/#!4/4cf40/5)
You can use a regular expression: ``` UPDATE sumcon SET datos = REGEXP_REPLACE( datos, '(/(.*?/){7}).*?/', '\1' || :new_value || '/' ) WHERE nis_rad = 200000732; ``` * `(/(.*?/){7})` - will find the first `/` then will match zero-or-more of any characters `.*?` until another `/` is found and then repeat this match 7 times (to get the first slash and then 7 successive one) - this will all be stored in a capturing group (denoted by the surrounding `()`s). * `.*?/` will match zero-or-more of any characters `.*?` until another `/` is found - this is what you want to replace.
Using substr and instr to update a delimited string
[ "", "sql", "oracle", "decode", "substr", "" ]
I am translating a SQL query into C# code, and I am having a problem understanding the following query. So far my understanding of the query is: if `PREFIX` is between `0` and `99`, then trim `PREFIX`. But what does `|| '-' ||` mean here? My understanding of line 3 is that after finishing the trim function in line 2, it does another trim, but I do not recognize the syntax in line 3 either. This is DB2. ``` RETURN CASE WHEN PREFIX BETWEEN '00' AND '99' //line 1 THEN TRIM(PREFIX) || '-' || //line 2 TRIM(TRIM(L '0' FROM DIGITS(CLNUMBER))) //line 3 ```
* `DIGITS(CLNUMBER)` returns the number stored in CLNUMBER as a string with leading zeros. * `TRIM(L '0' FROM something)` removes leading zeros from something. * `TRIM(something)` removes leading and trailing blanks from something. * || concatenates strings. PREFIX is a string. In case it contains a two-digit number, some processing gets done: 1. First you get that number trimmed, but because of the condition there can be no blank, so you get the original number string, e.g. '12'. (The condition would work on '01' but ignore ' 1' or '1'.) 2. Then '-' gets added, so you have '12-'. 3. Then you get CLNUMBER as a string with leading zeros and leading and trailing blanks removed. Let's say CLNUMBER contains '0345 ', then you'd get '345'. 4. Then this gets concatenated too and you finally get '12-345'.
Your code does the following. Line 1: If your prefix is between `'00'` AND `'99'`. Line 2: Then trim the spaces from the prefix and append `-`. Line 3: Then append `CLNUMBER`, first removing the leading `0` from `CLNUMBER`. You can look up the syntax of the `TRIM` function [here](http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0023198.html?cp=SSEPGG_9.7.0%2F2-10-3-2-165)
DB2 SQL Query Trim inside trim
[ "", "sql", "db2", "trim", "" ]
I have a table which has around 2 million records, and out of these there are around 70000 records which have duplicate Deal IDs. 1. For the duplicated Deal ID records only, I want to consider the record with the last updated Month (**FP**). 2. Only if the Last Updated Month (**FP**) is equal should the record with **Source** = **'MDM'** be selected. 3. A Deal ID can be repeated many times. 4. Importantly, we need to consider only Deal IDs which have repetitions. ![Sample Data](https://i.stack.imgur.com/Q9tSI.jpg) I have tried grouping the Deal IDs and keeping those with a record count greater than **1**, and doing an inner join with the count > 1 records stored in another table, and using Rank, but I am not able to retrieve the required records. The output should be something like this: ![enter image description here](https://i.stack.imgur.com/RDXEV.jpg) [SQLFiddle](http://sqlfiddle.com/#!6/46b96/2) for sample data
Looks like a basic ROW_NUMBER case, doesn't it? ``` select * from ( select *, [rank] = ROW_NUMBER() OVER( partition by deal order by fp desc, case when source = 'MDM' then 0 else 1 end asc ) from src ) ranked where [rank] = 1 ``` <http://sqlfiddle.com/#!6/46b96/12>
Give this a try: ``` Select * from [table] where DealId in( Select DealId from [table] Group by DealId Having count(DealId) > 1) and source = 'MDM' ``` This should give you all the records with duplicate DealIds and source = 'MDM'.
Retrieve last updated record in SQL with conditions
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a query that returns a row of multiple `Itemcodes`. The result is ``` Date group list of item code 2015-04-15 118 FYCT-00063,FYCM-00016,FYCM-00064,FYCF-00018 ``` It's working fine, but I need the result like this, with quotes around every code: `'FYCT-00063','FYCM-00016','FYCM-00064','FYCF-00018'` The query is this: ``` SELECT DISTINCT SS.PostDate,SS.U_Unit,STUFF(( SELECT ', ',+ CAST(OWOR.ItemCode AS VARCHAR(10)) [text()] FROM OWOR WHERE OWOR.PostDate=SS.PostDate AND OWOR.U_Unit=SS.U_Unit AND OWOR.Status=SS.Status FOR XML PATH('') , TYPE) .value('.','NVARCHAR(MAX)'),1,2,' ') LISTGroup from owor SS Where SS.PostDate='15.APRIL.15' AND SS.U_Unit='Unit No 2' and SS.Status!='C' ```
Try this: ``` SELECT DISTINCT SS.PostDate ,SS.U_Unit ,STUFF(( SELECT ', ' + QUOTENAME(CAST(OWOR.ItemCode AS VARCHAR(10)), '''') FROM OWOR WHERE OWOR.PostDate = SS.PostDate AND OWOR.U_Unit = SS.U_Unit AND OWOR.STATUS = SS.STATUS FOR XML PATH('') ).value('.', 'NVARCHAR(MAX)'), 1, 2, ' ') LISTGroup FROM owor SS WHERE SS.PostDate = '15.APRIL.15' AND SS.U_Unit = 'Unit No 2' AND SS.STATUS != 'C' ```
``` SELECT DISTINCT SS.PostDate,SS.U_Unit, STUFF(( SELECT ', ',+ '''' + CAST(OWOR.ItemCode AS VARCHAR(10)) + '''' [text()] FROM OWOR WHERE OWOR.PostDate=SS.PostDate AND OWOR.U_Unit=SS.U_Unit AND OWOR.Status=SS.Status FOR XML PATH('') , TYPE) .value('.','NVARCHAR(MAX)'),1,2,' ') LISTGroup FROM owor SS WHERE SS.PostDate='15.APRIL.15' AND SS.U_Unit='Unit No 2' AND SS.Status!='C' ```
sql server comma delimited string values into one row
[ "", "sql", "sql-server", "sapb1", "" ]
Here is my current query: ``` SELECT `performers`.`hash`, `performers`.`alias`, `performers`.`date_updated`, `performers`.`status`, IF(`performers`.`status` = 'active', 'deleted','active') AS `statususe`, `images`.`image_hash_file` FROM `performers` LEFT JOIN `images` ON `images`.`asset_id` = `performers`.`id` WHERE (`images`.`asset_type` = 'performer') ORDER BY `alias` ASC LIMIT 12 ``` In it, there is a where clause ``` WHERE (`images`.`asset_type` = 'performer') ``` I'd like it to be optional, such that if the where clause doesn't match, the query still shows the records from the performers table that do not have a matching images record.
You may add the WHERE condition to the LEFT JOIN so that non-matching rows are still in the result set. ``` SELECT `performers`.`hash`, `performers`.`alias`, `performers`.`date_updated`, `performers`.`status`, IF(`performers`.`status` = 'active', 'deleted','active') AS `statususe`, `images`.`image_hash_file` FROM `performers` LEFT JOIN `images` ON `images`.`asset_id` = `performers`.`id` AND `images`.`asset_type` = 'performer' ORDER BY `alias` ASC LIMIT 12 ``` The main difference between the WHERE and ON clauses is that the ON clause is applied during the JOIN, while the WHERE clause is applied after everything has been joined, which removes non-matching rows from the result set.
Try something like this: ``` WHERE (`images`.`asset_type` = 'performer' OR `images`.`asset_type` IS NULL) ```
making a where clause optional in a sql statement
[ "", "mysql", "sql", "" ]
So I have this sample table. It contains data on which OS is installed at all internet shop branches. ``` ID ShopName PCName OS 1 Mineski M101 WinXP 2 Mineski M102 WinXP 3 GameCity G201 Win7 4 GameCity G202 Win7 5 CyberBob C301 WinXP 6 CyberBob C302 Win7 ``` I need to query the `OS` installed by shop. I can do this using this query: ``` select ShopName, OS from ShopInv group by ShopName, OS ``` The expected results would be: ``` ShopName OS CyberBob Win7 CyberBob WinXP GameCity Win7 Mineski WinXP ``` However, I only want 1 row per shop to be listed. So in cases where there is more than 1 row (due to different OS versions installed), like the sample above, I just want to display **Mixed**. The result would then be something like this: ``` ShopName OS CyberBob Mixed GameCity Win7 Mineski WinXP ``` Is this feasible in SQL Server 2008? [**SQLFiddle**](http://sqlfiddle.com/#!6/a92ad4/1) Note: I'm a bit confused about how I should state my question, so please do edit it if you like. :)
You can use `case` with a count of distinct `OS` values for each `ShopName`: ``` select ShopName , case when count(distinct OS) > 1 then 'Mixed' else min(OS) end from ShopInv group by ShopName ``` [**SQLFiddle**](http://sqlfiddle.com/#!6/a92ad4/8)
This is easy: as you want to have one row per shop, group by shop only. Then get the OS with an aggregate function; this can be MIN or MAX. If, however, you detect that MIN <> MAX, you must show 'Mixed' instead. ``` select ShopName, case when MIN(OS) = MAX(OS) then MIN(OS) else 'Mixed' end as OS from ShopInv group by ShopName; ```
How to combine two values (row) into a single row with custom value?
[ "", "sql", "sql-server", "database", "sql-server-2008", "" ]
I have the following two relations: ``` Game(id, name, year) Devs(pid, gid, role) ``` Game.id is a primary key, and Devs.gid is a foreign key to Game.id. I want to write a SQL query that finds the game with the largest number of people who worked on that game. I wrote the following query: ``` SELECT Game.name, count(DISTINCT Devs.pid) FROM Game, Devs WHERE Devs.gid=Game.id GROUP BY Devs.gid, Game.name ORDER BY count(Devs.pid) DESC; ``` This query sort of accomplishes my goal, in the sense that it returns all of the games in the relation sorted by the number of people who worked on each game, but I'm trying to modify this query so that it does two things. One, it should only return the game with the most people who worked on it, and two, if there are two games that had an equal number of people working on them, it should return both of those games. I know that if I replace the top line like so: ``` SELECT TOP 1 Game.name, count(DISTINCT Devs.pid) ``` then it accomplishes my first goal, but if there are two games that both have the highest number of people who worked on them, then it only returns one of those games. How can I go about changing this query so that it will return all games with the highest number of people that worked on them?
The task can be shortened to: > Give me all the rows from the original query with the maximum number of developers This can be achieved through the `WITH` [statement](https://msdn.microsoft.com/en-us/library/ms175972.aspx). The answer is the following: ``` WITH GamesDevs (GameName, DevsCount) AS ( SELECT Game.name AS GameName, count(DISTINCT Devs.pid) AS DevsCount FROM Game, Devs WHERE Devs.gid=Game.id GROUP BY Devs.gid, Game.name ORDER BY count(Devs.pid) DESC ) SELECT * FROM GamesDevs WHERE GamesDevs.DevsCount = (SELECT MAX(DevsCount) FROM GamesDevs) ``` Hope this helps.
Try this query : ``` select g.name, count(d.gid) from game g join devs d on g.id=d.gid group by g.name having count(d.gid)= (select max (temp.cnt) from (select gid p, count(*) cnt from devs group by gid) temp) ; ```
Select only largest values in a SQL relation?
[ "sql", "max" ]
Suppose we have a table with the schema ``` student(id (primary key), name, math_score, english_score) ``` I am trying to get student information (id and name) with the highest rank (ordered by the highest sum of math score and english score). There may be several students tied, and we want all of them. The way I thought about doing this is to use a subquery to get a table with the sum of scores, then find the ids and names that have the highest sum. ``` SELECT s.id, s.name FROM (SELECT s.id, s.name, s.math_score+s.english_score as sum FROM student s) s WHERE s.sum = (SELECT max(s.sum) FROM (SELECT s.id, s.name, s.math_score+s.english_score as sum FROM student s) s) ``` This works, but seems very redundant and not efficient. I just started learning SQL, and I would appreciate some insight on this problem!
Use WITH TIES ``` create table #student( id int primary key identity(1,1), name varchar(50), math_score decimal, english_score decimal ) insert into #student values ('Tom', 90, 90), ('Dick', 70, 70), ('Harry', 80, 100) select TOP(1) WITH TIES id, name, math_score, english_score, math_score + english_score as ScoreRank from #student order by math_score + english_score desc ``` Gives the answer: ``` id|name|math_score|english_score|ScoreRank 1|Tom|90|90|180 3|Harry|80|100|180 ```
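`TOP ... WITH TIES` is T-SQL; for engines without it, the same tie-keeping result falls out of comparing against the maximum total, much like the question's own approach. A small sketch with SQLite via Python's `sqlite3`, reusing the sample rows from the answer above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT,
                      math_score INTEGER, english_score INTEGER);
INSERT INTO student (name, math_score, english_score) VALUES
  ('Tom', 90, 90), ('Dick', 70, 70), ('Harry', 80, 100);
""")

# Portable equivalent of TOP (1) WITH TIES ... ORDER BY total DESC:
# keep every row whose total equals the best total.
top_students = con.execute("""
SELECT id, name, math_score + english_score AS total
FROM student
WHERE math_score + english_score =
      (SELECT MAX(math_score + english_score) FROM student)
ORDER BY id
""").fetchall()
print(top_students)  # [(1, 'Tom', 180), (3, 'Harry', 180)]
```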
This should accomplish it, you're adding in an unnecessary step. ``` select id, name, math_score+english_score as total_score from student where math_score+english_score=(select max(math_score+english_score) from student) ```
Removing redundancies in sql query that contains subquery
[ "sql", "azure", "subquery" ]
In our database we do have a table that keeps track of the power consumption of a device. The rate at which new values get inserted is not fixed; they only get written when there really is a change, so the temporal distance between the values is varying and may reach from 1 second to several minutes. The entries consist of a timestamp and a value. The value always increases with every new row as it counts the kWh. What I want to achieve is the following: I want to specify a start and an end datetime, let's say a month. I also want to specify an interval like 15 minutes, 1 hour, 1 day or similar. The outcome I need to get is in the form of [Beginning of interval as datetime], [power consumption in that interval], e.g. like this (where interval would be set to 1 hour): ``` 2015-01-01 08:00:00 - 65 2015-01-01 09:00:00 - 43 2015-01-01 10:00:00 - 56 ``` This is what the table looks like: ``` TimeStamp Value ------------------------- 2015-01-08 08:29:47, 5246 2015-01-08 08:36:15, 5247 2015-01-08 08:37:10, 5248 2015-01-08 08:38:01, 5249 2015-01-08 08:38:38, 5250 2015-01-08 08:38:51, 5251 2015-01-08 08:39:33, 5252 2015-01-08 08:40:20, 5253 2015-01-08 08:41:10, 5254 2015-01-09 08:56:25, 5255 2015-01-09 08:56:43, 5256 2015-01-09 08:57:31, 5257 2015-01-09 08:57:36, 5258 2015-01-09 08:58:02, 5259 2015-01-09 08:58:57, 5260 2015-01-09 08:59:27, 5261 2015-01-09 09:00:06, 5262 2015-01-09 09:00:59, 5263 2015-01-09 09:01:54, 5265 2015-01-09 09:02:44, 5266 2015-01-09 09:03:39, 5267 2015-01-09 09:04:22, 5268 2015-01-09 09:05:11, 5269 2015-01-09 09:06:08, 5270 ``` I have the feeling that I would have to combine the `SUM()` function with a `GROUP BY`, but I have no clue how to do that, because as far as I can see I would also have to consider only the *growth* per interval and not the sum of the absolute values within that interval. It would be great if someone could put me on the right track.
Your sample data does not match the result intervals, so you may miss increases within an interval at the end or the beginning. Therefore, I assumed a linear increase between sample data rows and matched them to the result interval. ``` declare @start datetime2 = '2015-01-09 09:00:00' declare @end datetime2 = '2015-01-09 09:30:00' declare @intervalMinutes int = 5 ;with intervals as ( select @start iStart, dateadd(minute, @intervalMinutes, @start) iEnd union all select iEnd, dateadd(minute, @intervalMinutes, iEnd) from intervals where iEnd < @end ), increases as ( select T.Timestamp sStart, lead(T.Timestamp, 1, null ) over (order by T.Timestamp) sEnd, -- the start of the next period if there is one, null else lead(T.value, 1, null ) over (order by T.Timestamp) - T.value increase -- the increase within this period from @T T ), rates as ( select sStart rStart, sEnd rEnd, (cast(increase as float))/datediff(second, sStart, sEnd) rate -- increase/second from increases where increase is not null ), samples as ( select *, case when iStart > rStart then iStart else rStart end sStart, -- debug case when rEnd>iEnd then iEnd else rEnd end sEnd, -- debug datediff(second, case when iStart > rStart then iStart else rStart end, case when rEnd>iEnd then iEnd else rEnd end)*rate x -- increase within the period within the interval from intervals i left join rates r on rStart between iStart and iEnd or rEnd between iStart and iEnd or iStart between rStart and rEnd -- overlaps ) select iStart, iEnd, isnull(sum(x), 0) from samples group by iStart, iEnd ``` The CTEs: * `intervals` holds the intervals you want data for * `increases` calculates the increases within the sample data periods * `rates` calculates the increase per second in the sample data periods * `samples` matches the result intervals to the sample intervals by respecting the overlaps between the bounds Finally the select sums up the sample periods matched to a single interval. 
NOTES: * For an interval amount > [your max recursion depth] you have to use another solution to create the `intervals` CTE (see @GarethD's solution) * Debug hint: By simply using `select * from samples` you can see the sample periods matched to your result intervals
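The core idea here (a per-second rate for each sample period, multiplied by its overlap with each result interval) can be checked outside the database. Below is a plain-Python sketch of the same proration, with invented sample data:

```python
from datetime import datetime, timedelta

def prorate(samples, start, end, step):
    """Split cumulative meter readings into per-interval increases.

    samples: sorted list of (timestamp, cumulative_value) pairs.
    Assumes a linear increase between consecutive samples, like the SQL above.
    """
    n = int((end - start) / step)
    out = [0.0] * n
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rate = (v1 - v0) / (t1 - t0).total_seconds()  # increase per second
        for k in range(n):
            i0, i1 = start + k * step, start + (k + 1) * step
            lo, hi = max(t0, i0), min(t1, i1)  # overlap of sample period and interval
            if hi > lo:
                out[k] += rate * (hi - lo).total_seconds()
    return out

t = datetime(2015, 1, 9, 9, 0)
readings = [(t, 100), (t + timedelta(minutes=10), 110)]  # +10 kWh over 10 minutes
per_bucket = prorate(readings, t, t + timedelta(minutes=10), timedelta(minutes=5))
print(per_bucket)  # roughly 5.0 in each of the two 5-minute buckets
```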
I think the best way to deal with this is to first generate your intervals, and then left join your data, since this firstly makes the grouping much less complicated for variable intervals, and also means you still get results for intervals with no data. To do this you will need a numbers table; since many people don't have one, below is a quick way of generating one on the fly: ``` WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2) SELECT * FROM Numbers; ``` This simply generates a sequence from 1 to 10,000. For more reading on this see the following series: * [Generate a set or sequence without loops – part 1](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1) * [Generate a set or sequence without loops – part 2](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-2) * [Generate a set or sequence without loops – part 3](http://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-3) You can then define a start time, an interval and the number of records to show, and along with your numbers table you can generate your data: ``` DECLARE @Start DATETIME2 = '2015-01-09 08:00', @Interval INT = 60, -- INTERVAL IN MINUTES @IntervalCount INT = 3; -- NUMBER OF INTERVALS TO SHOW WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2) SELECT TOP (@IntervalCount) Interval = DATEADD(MINUTE, (N - 1) * @Interval, @Start) FROM Numbers; ``` Finally you can LEFT JOIN this to your data to get the minimum and the maximum values for each interval: ``` DECLARE @Start DATETIME2 = '2015-01-09 08:00', @Interval INT = 60, -- INTERVAL IN MINUTES @IntervalCount INT = 3; -- NUMBER OF INTERVALS TO SHOW WITH N1 AS (SELECT 
N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2), Intervals AS ( SELECT TOP (@IntervalCount) IntervalStart = DATEADD(MINUTE, (N - 1) * @Interval, @Start), IntervalEnd = DATEADD(MINUTE, N * @Interval, @Start) FROM Numbers AS n ) SELECT i.IntervalStart, MinVal = MIN(t.Value), MaxVal = MAX(t.Value), Difference = ISNULL(MAX(t.Value) - MIN(t.Value), 0) FROM Intervals AS i LEFT JOIN T AS t ON t.timestamp >= i.IntervalStart AND t.timestamp < i.IntervalEnd GROUP BY i.IntervalStart; ``` If your values can go up and down within the interval, then you will need to use a ranking function to get the first and last record for each hour, rather than min and max: ``` DECLARE @Start DATETIME2 = '2015-01-09 08:00', @Interval INT = 60, -- INTERVAL IN MINUTES @IntervalCount INT = 3; -- NUMBER OF INTERVALS TO SHOW WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2), Intervals AS ( SELECT TOP (@IntervalCount) IntervalStart = DATEADD(MINUTE, (N - 1) * @Interval, @Start), IntervalEnd = DATEADD(MINUTE, N * @Interval, @Start) FROM Numbers AS n ), RankedData AS ( SELECT i.IntervalStart, t.Value, t.timestamp, RowNum = ROW_NUMBER() OVER(PARTITION BY i.IntervalStart ORDER BY t.timestamp), TotalRows = COUNT(*) OVER(PARTITION BY i.IntervalStart) FROM Intervals AS i LEFT JOIN T AS t ON t.timestamp >= i.IntervalStart AND t.timestamp < i.IntervalEnd ) SELECT r.IntervalStart, Difference = ISNULL(MAX(CASE WHEN RowNum = TotalRows THEN r.Value END) - MAX(CASE WHEN RowNum = 1 THEN r.Value END), 0) FROM RankedData AS r WHERE RowNum = 1 OR TotalRows = RowNum GROUP BY r.IntervalStart; ``` **[Example on SQL Fiddle with 1 Hour intervals](http://sqlfiddle.com/#!6/64b1c5/5)** 
**[Example on SQL Fiddle with 15 minute intervals](http://sqlfiddle.com/#!6/003d5/2)** **[Example on SQL Fiddle with 1 Day intervals](http://sqlfiddle.com/#!6/1e446/2)** --- **EDIT** As pointed out in comments neither of the above solutions account for the advance over period boundaries, the below will account for this: ``` DECLARE @Start DATETIME2 = '2015-01-09 08:25', @Interval INT = 5, -- INTERVAL IN MINUTES @IntervalCount INT = 18; -- NUMBER OF INTERVALS TO SHOW WITH N1 AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) t (N)), N2 (N) AS (SELECT 1 FROM N1 AS N1 CROSS JOIN N1 AS N2), Numbers (N) AS (SELECT ROW_NUMBER() OVER(ORDER BY N1.N) FROM N2 AS N1 CROSS JOIN N2 AS N2), Intervals AS ( SELECT TOP (@IntervalCount) IntervalStart = DATEADD(MINUTE, (N - 1) * @Interval, @Start), IntervalEnd = DATEADD(MINUTE, (N - 0) * @Interval, @Start) FROM Numbers AS n ), LeadData AS ( SELECT T.timestamp, T.Value, NextValue = nxt.value, AdvanceRate = ISNULL(1.0 * (nxt.Value - T.Value) / DATEDIFF(SECOND, T.timestamp, nxt.timestamp), 0), NextTimestamp = nxt.timestamp FROM T AS T OUTER APPLY ( SELECT TOP 1 T2.timestamp, T2.value FROM T AS T2 WHERE T2.timestamp > T.timestamp ORDER BY T2.timestamp ) AS nxt ) SELECT i.IntervalStart, Advance = CAST(ISNULL(SUM(DATEDIFF(SECOND, d.StartTime, d.EndTime) * t.AdvanceRate), 0) AS DECIMAL(10, 4)) FROM Intervals AS i LEFT JOIN LeadData AS t ON t.NextTimestamp >= i.IntervalStart AND t.timestamp < i.IntervalEnd OUTER APPLY ( SELECT CASE WHEN t.timestamp > i.IntervalStart THEN t.timestamp ELSE i.IntervalStart END, CASE WHEN t.NextTimestamp < i.IntervalEnd THEN t.NextTimestamp ELSE i.IntervalEnd END ) AS d (StartTime, EndTime) GROUP BY i.IntervalStart; ```
SQL statement that calculates per-interval growth
[ "sql", "sql-server", "aggregate-functions", "intervals" ]
This is my first post in this forum, so please be understanding. I have the following issue. I want to join two tables: Table1: ``` Product | Start Date | End Date ------------------------------------- Product1 | 01/01/2014 | 01/05/2015 Product2 | 01/03/2014 | 01/01/2015 ``` Table2: ``` Product | Start Date | End Date | Value -------------------------------------------- Product1 | 01/01/2014 | 01/02/2015 | 10 Product1 | 02/02/2014 | 01/04/2015 | 15 Product1 | 02/04/2014 | 01/05/2015 | 15 Product2 | 01/03/2014 | 04/05/2014 | 5 Product2 | 05/05/2014 | 01/01/2015 | 5 ``` To get a table with the latest value, like: ``` Product | Start Date | End Date | Value ------------------------------------------------ Product1 | 02/04/2014 | 01/05/2015 | 15 Product2 | 05/05/2014 | 01/01/2015 | 5 ``` I need to join them and cannot use just the second table, because both of them have more unique columns that I need to use. I was thinking about first using some kind of IF function on the second table to make one row per product (the one with the latest start date) and then simply joining it with the first table. But I have no idea how to do the first part. I am really looking forward to your help. Regards, Matt
Just use `WHERE NOT EXISTS` to filter out everything but the latest date from `TABLE2` (I am assuming that you are asking for the latest `StartDate` from `TABLE2`; also I add `SomeOtherField` to `Table1`, because otherwise you could just query `Table2`): ``` SELECT t1.Product, t1.SomeOtherField, t2.StartDate, t2.EndDate, t2.Value FROM Table1 t1 JOIN (SELECT a.Product, a.StartDate, a.EndDate, a.Value FROM Table2 a WHERE NOT EXISTS (SELECT * FROM Table2 b WHERE b.Product = a.Product AND b.StartDate > a.StartDate)) t2 ON (t2.Product = t1.Product) ```
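The `NOT EXISTS` pattern is portable, so it can be checked against the question's data with SQLite via Python's `sqlite3`. In this sketch the dates are rewritten as ISO strings so that string comparison matches chronological order, and the extra `SomeOtherField` is left out:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (Product TEXT, StartDate TEXT, EndDate TEXT);
CREATE TABLE Table2 (Product TEXT, StartDate TEXT, EndDate TEXT, Value INTEGER);
INSERT INTO Table1 VALUES
  ('Product1', '2014-01-01', '2015-05-01'),
  ('Product2', '2014-03-01', '2015-01-01');
INSERT INTO Table2 VALUES
  ('Product1', '2014-01-01', '2015-02-01', 10),
  ('Product1', '2014-02-02', '2015-04-01', 15),
  ('Product1', '2014-04-02', '2015-05-01', 15),
  ('Product2', '2014-03-01', '2014-05-04', 5),
  ('Product2', '2014-05-05', '2015-01-01', 5);
""")

rows = con.execute("""
SELECT t1.Product, t2.StartDate, t2.EndDate, t2.Value
FROM Table1 t1
JOIN (SELECT a.* FROM Table2 a
      WHERE NOT EXISTS (SELECT 1 FROM Table2 b
                        WHERE b.Product = a.Product
                          AND b.StartDate > a.StartDate)) t2
  ON t2.Product = t1.Product
ORDER BY t1.Product
""").fetchall()
print(rows)  # one row per product: the Table2 row with the latest StartDate
```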
This is possible, the query will involve three steps: 1. Find all the max `start date` for each `product` in table 2. Hint: use group by. 2. Join table 2 with the result from #1 to get the `Value`. 3. Join table 1 with the result from #2 to filter out products that are not in table 1.
SQL - Join two tables without unique fields
[ "sql" ]
I have a date. I need to identify the date of the Monday of the week in which my date lies.
``` SET DATEFIRST 1 SELECT DATEADD(DAY,1-DATEPART(WEEKDAY,'2012-01-01'),'2012-01-01') ```
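If the same calculation is needed in application code, Python's `date.weekday()` (which is 0 on Mondays) makes it a one-liner; this matches the `SET DATEFIRST 1` result above, with a Sunday mapping back to the previous Monday:

```python
from datetime import date, timedelta

def monday_of_week(d: date) -> date:
    """Return the Monday of the week containing d (weekday() is 0 on Mondays)."""
    return d - timedelta(days=d.weekday())

print(monday_of_week(date(2012, 1, 1)))   # 2011-12-26 (2012-01-01 was a Sunday)
print(monday_of_week(date(2015, 4, 22)))  # 2015-04-20
```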
**MSSQL** ``` SELECT CONVERT(varchar,DATEADD(day, DATEDIFF(day, 0, GETDATE()) /7*7, 0),100) AS weekstart ``` **MySql** ``` SELECT DATE_ADD(date, INTERVAL(1-DAYOFWEEK(date)) +1 DAY) AS weekStart ```
How to find start date of a week
[ "sql", "sql-server", "sql-server-2008" ]
I have a table something like the following: ``` name | language | text ----------------------------- greet | 1 | hello greet | 2 | bonjour blue | 1 | blue blue | 2 | bleu red | 1 | red green | 1 | green yellow | 1 | yellow ``` I need to retrieve all the records which only exist for language number 1, so in the above example I should only have a list containing red, green and yellow. I don't really know much about sql joins so not sure what would be the best way to go about this? Any help would be appreciated
Use `NOT EXISTS`: ``` SELECT t.* FROM dbo.TableName t WHERE t.language = 1 AND NOT EXISTS ( SELECT 1 FROM dbo.TableName t2 WHERE t.name = t2.name AND ( t2.language IS NULL OR t2.language <> 1 ) ) ``` `DEMO` I've included `t2.language IS NULL` to show you how to handle `NULL`-values, if it's a non-null column you only need `t2.language <> 1`. This is the most readable and efficient approach that has no issues with NULL values. There are [others](http://sqlperformance.com/2012/12/t-sql-queries/left-anti-semi-join).
Filter records using `NOT IN` ``` SELECT * FROM TABLENAME WHERE LANGUAGE = 1 AND NAME NOT IN ( SELECT NAME FROM TABLENAME WHERE LANGUAGE <> 1 ) ```
Sql self join where not exists?
[ "sql", "t-sql", "join" ]
My table `table1` has the column `date_txt`, which contains values like `2/16/2011 12:00:00 AM` - the column `date_txt` is `VARCHAR2(250 CHAR)`. My table `table1` also has the column `date`, which is a `DATE`. I would like to "update" my field; the final output should be: table1: ``` date 2/16/2011 ``` So it should take the "date" from `table1.date_txt` and write it into the column `date` as a date. Any ideas? I am a bloody beginner.
You can use Oracle's [to\_date function](http://www.techonthenet.com/oracle/functions/to_date.php) to convert a string to a date: ``` update table1 set "date" = to_date(date_txt, 'MM/DD/YYYY HH:MI:ss AM') ``` [See it working at SQL Fiddle.](http://sqlfiddle.com/#!4/7ccd0/1)
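For reference, the format mask translates almost one-to-one if the same cleanup is ever done in application code instead. A Python sketch (note `%I`/`%p` for Oracle's 12-hour `HH ... AM`, and `%M` for minutes where Oracle writes `MI`):

```python
from datetime import datetime

def parse_date_txt(s: str) -> datetime:
    # Counterpart of Oracle's 'MM/DD/YYYY HH:MI:SS AM' mask.
    return datetime.strptime(s, "%m/%d/%Y %I:%M:%S %p")

d = parse_date_txt("2/16/2011 12:00:00 AM")
print(d)  # 2011-02-16 00:00:00 (12 AM is midnight)
```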
> column date\_txt which includes 2/16/2011 12:00:00 AM * Firstly, **DATE** doesn't have any **format**. What you see is for **display** purpose to interpret date value easily. * Secondly, you should **never ever store DATE as VARCHAR2**. This is a big problem and a poor design. Now that you have a bad design, it is a good idea to fix it right now. **Follow these steps:** 1. Add a new column with **DATE data type**. 2. Update the new column with date values from the old column using **TO\_DATE**. 3. **Drop** the old column. 4. **Rename** the new column to the old column. I have already answered how to do it here <https://stackoverflow.com/a/29625772/3989608>
sql oracle update date
[ "sql", "oracle", "date" ]
I have three tables: 1. users: contains the registered users. 2. courses: contains available courses. 3. userscourses: has two columns: user\_id and course\_id. Whenever a user joins a course, a new record is created that relate that course with the user. I want to create a mysql query that; given a user\_id, is going to fetch all courses joined by the user. Also, with each course row, I want to add a column that represents how many users are in that course. I made a query but it does not give me the desired result: ``` SELECT courses.id AS CourseID, courses.name AS Name, COUNT(usersCourses.userId) AS JoinedUsersNumber FROM courses, users, usersCourses WHERE users.id = courses.userId AND usersCourses.courseId = courses.id AND usersCourses.courseId=courses.id AND usersCourses.userId='31' ``` Can anybody tell me how to achieve this or tell me what's wrong with my query? **EDIT** This is an example: users table ``` id | username -------------------------- u1 | user01 u2 | user02 u3 | user03 u4 | user04 ``` courses table: ``` id | name -------------------------- c1 | course01 c2 | course02 c3 | course03 c4 | course04 ``` userscourses table: ``` userId | courseId -------------------------- u1 | c1 u1 | c2 u1 | c3 u2 | c1 u3 | c1 u4 | c1 u2 | c2 u3 | c2 ``` Expected Result (user\_id=u1): ``` name | Joined Users -------------------------- course01 | 4 course02 | 3 course03 | 1 ```
Joining back on the usersCourses table to get all the users on that course (note that the `GROUP BY` must not repeat the `AS` aliases):- ``` SELECT courses.id AS CourseID, courses.name AS Name, COUNT(usersCourses2.userId) AS JoinedUsersNumber FROM users INNER JOIN usersCourses ON users.id = usersCourses.userId INNER JOIN courses ON usersCourses.courseId = courses.id INNER JOIN usersCourses usersCourses2 ON usersCourses.courseId = usersCourses2.courseId WHERE usersCourses.userId='31' GROUP BY courses.id, courses.name ``` or if you want the count of users on that course to exclude the selected user (ie, you want a count of other users):- ``` SELECT courses.id AS CourseID, courses.name AS Name, COUNT(usersCourses2.userId) AS JoinedUsersNumber FROM users INNER JOIN usersCourses ON users.id = usersCourses.userId INNER JOIN courses ON usersCourses.courseId = courses.id LEFT OUTER JOIN usersCourses usersCourses2 ON usersCourses.courseId = usersCourses2.courseId AND users.id != usersCourses2.userId WHERE usersCourses.userId='31' GROUP BY courses.id, courses.name ```
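The self-join counting idea can be verified end-to-end against the sample data in the question. A sketch with SQLite via Python's `sqlite3` (the `users` table is omitted here, since the user id is filtered directly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE courses (id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE usersCourses (userId TEXT, courseId TEXT);
INSERT INTO courses VALUES ('c1','course01'), ('c2','course02'),
                           ('c3','course03'), ('c4','course04');
INSERT INTO usersCourses VALUES
  ('u1','c1'), ('u1','c2'), ('u1','c3'),
  ('u2','c1'), ('u3','c1'), ('u4','c1'),
  ('u2','c2'), ('u3','c2');
""")

# For each course u1 joined, count everyone on that course via a self-join.
rows = con.execute("""
SELECT c.name, COUNT(uc2.userId) AS joined_users
FROM usersCourses uc
JOIN courses c ON c.id = uc.courseId
JOIN usersCourses uc2 ON uc2.courseId = uc.courseId
WHERE uc.userId = 'u1'
GROUP BY c.id, c.name
ORDER BY c.id
""").fetchall()
print(rows)  # [('course01', 4), ('course02', 3), ('course03', 1)]
```

This reproduces exactly the expected result table from the question.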
Your join is in a deprecated style and probably wrong in content as well. Also, you need to add a Group by on courses id and name. Try this: ``` SELECT courses.id AS CourseID, courses.name AS Name, COUNT(usersCourses.userId) AS JoinedUsersNumber FROM courses INNER JOIN usersCourses ON usersCourses.courseId=courses.id WHERE usersCourses.userId='31' GROUP BY courses.id, courses.name ```
Execute mysql query over multiple tables
[ "mysql", "sql" ]
I need your help. I want to `union` data from the same table based on a second table with oracle SQL. **Table 1** ``` id | au_id | data ------------------ 1 | 33 | foo 2 | 44 | foo 3 | 34 | foo 4 | 55 | foobar 5 | 55 | fooo ``` **Table 2** ``` au_id | follow_au_id ----------------------- 33 | 55 ``` **Result** ``` au_id | data ---------------- 33 | foo 33 | foobar 33 | fooo 44 | foo 34 | foo 55 | foobar 55 | fooo ```
``` select au_id, data from tbl union all select t2.au_id, t1.data from tbl t1 join tbl t2 on t1.au_id = t2.follow_au_id ``` [**SQLFiddle**](http://sqlfiddle.com/#!4/b7ec43/14)
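The `UNION ALL` plus self-join approach can be checked quickly with SQLite through Python's `sqlite3`; `follows` below is an invented name for the question's second table, and an `ORDER BY` is added only to make the output deterministic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tbl (id INTEGER, au_id INTEGER, data TEXT);
CREATE TABLE follows (au_id INTEGER, follow_au_id INTEGER);
INSERT INTO tbl VALUES (1,33,'foo'), (2,44,'foo'), (3,34,'foo'),
                       (4,55,'foobar'), (5,55,'fooo');
INSERT INTO follows VALUES (33, 55);
""")

# Every row as-is, plus the followed author's rows re-attributed to the follower.
rows = con.execute("""
SELECT au_id, data FROM tbl
UNION ALL
SELECT f.au_id, t.data
FROM tbl t JOIN follows f ON t.au_id = f.follow_au_id
ORDER BY au_id, data
""").fetchall()
print(rows)
```

The seven rows match the desired result in the question, with 33 picking up 55's `foobar` and `fooo`.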
you have to use a `join` ``` select t1.au_id, t3.data from table1 t1 join table2 t2 on t1.au_id=t2.au_id join table1 t3 on t2.follow_au_id = t3.au_id union select t1.au_id, t1.data from table1 t1 ```
Select union table from same table based on key with oracle
[ "sql", "oracle", "union" ]
In `MyTable` there is a column named `Values` of `NVARCHAR` datatype. It holds many values, like: ``` Alisa 20150111 Something 12 Etc 2222 4444 01/21/2015 11:01:12 AM etc ``` I need to select all values from this column, but where the format is like `mm/dd/yyyy hh:mm:ss AM/PM` I need to convert it to a DATE (without time), `mm/dd/yyyy`. I've tried to use `ISDATE`, but it is not working as I want: it converts other values to dates too, for example `4444` from my sample list. This is my query for now: ``` SELECT CASE WHEN ISDATE(Values) = 1 THEN CONVERT(VARCHAR(20), Values, 101) ELSE Values END AS Val1 FROM MyTable ``` So the list should be returned as follows: ``` Alisa 20150111 Something 12 Etc 2222 4444 01/21/2015 -- this line should be changed etc ```
Maybe try to check if the string contains `/`, as below: ``` SELECT CASE WHEN ISDATE(Values) = 1 and CHARINDEX('/',Values) > 0 THEN CONVERT(VARCHAR(20), Values, 101) ELSE Values END AS Val1 FROM MyTable ```
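If this cleanup ever moves out of T-SQL, the "exact format" test is easiest as a strict parse attempt. A Python sketch of the same filter, where only `mm/dd/yyyy hh:mm:ss AM/PM` values are rewritten and everything else passes through unchanged:

```python
from datetime import datetime

def extract_date(value: str) -> str:
    """Return mm/dd/yyyy if value matches 'mm/dd/yyyy hh:mm:ss AM/PM', else value unchanged."""
    try:
        parsed = datetime.strptime(value, "%m/%d/%Y %I:%M:%S %p")
    except ValueError:
        return value  # '4444', '20150111', plain text etc. all land here
    return parsed.strftime("%m/%d/%Y")

vals = ["Alisa", "20150111", "4444", "01/21/2015 11:01:12 AM"]
print([extract_date(v) for v in vals])  # only the last value is rewritten
```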
try ``` SELECT CASE WHEN Values LIKE '__/__/____%' THEN CONVERT(VARCHAR(20), Values, 101) ELSE Values END AS Val1 FROM MyTable ```
How to catch exact date format from value?
[ "sql", "sql-server", "t-sql", "format", "case" ]
Here's some sample data. ``` ID | Date --------- 1 | 4/21/2015 11:00:00 AM 1 | 4/21/2015 01:00:00 PM ``` Let's say it's currently 2 PM; I need to query ID number 1 only if the time difference between the `Date` column and now is >= 2 hours. ``` Select ID from Table where datediff(hour, Date, getdate()) >= 2 and ID = '1' ``` Now this query would return the 1st record with the 11 AM time, but I want to ignore other records and just check whether the latest record has existed for 2 or more hours. How should I change my query so that I will not get any results if the current time is 2 PM and my last record is 1 PM?
``` select id, max(date) as date from Table where id = 1 group by id having datediff(hh, max(date), getdate()) >= 2 ``` Remove `where` clause and you will get all ids that satisfy condidion.
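A portable way to test the "latest row must be at least N hours old" rule, using SQLite via Python's `sqlite3`. One assumption to flag: `julianday` differences measure elapsed time, whereas T-SQL's `DATEDIFF(hour, ...)` counts crossed hour boundaries, so results right at a boundary can differ:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (ID INTEGER, Date TEXT);
INSERT INTO t VALUES (1, '2015-04-21 11:00:00'), (1, '2015-04-21 13:00:00');
""")

def ids_stale_for(con, hours, now):
    # Keep an id only if its *latest* row is at least `hours` old at `now`.
    return con.execute("""
        SELECT ID FROM t
        GROUP BY ID
        HAVING (julianday(?) - julianday(MAX(Date))) * 24 >= ?
    """, (now, hours)).fetchall()

print(ids_stale_for(con, 2, '2015-04-21 14:00:00'))  # latest row is 1h old  -> []
print(ids_stale_for(con, 2, '2015-04-21 15:30:00'))  # latest row is 2.5h old -> [(1,)]
```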
Use `MAX` with `GROUP BY` to get the latest date for an id and then check if it is greater than 2 hours. Something like this. ``` SELECT ID,Date FROM (Select ID,MAX(Date) Date from Table GROUP BY ID )T where datediff(hour, Date, getdate()) >= 2 and ID = 1 ```
Query to check if record has existed for more than X hours
[ "sql", "sql-server", "sql-server-2005" ]
I currently have a `user_addresses` table: ``` id | name | address_type | address ---+------+--------------+---------- 1 | John | HOME | home addr 1 | John | MAIL | mail addr 2 | Bill | HOME | home addr 3 | Rick | HOME | home addr 3 | Rick | MAIL | mail addr ``` I want to build a new view that uses the data from the `user_addresses` table. When `address_type=MAIL`, it should use their mail address in the `address` field. Otherwise it uses their `home` address: ``` id | name | address_type | address | data from other tables ---+------+--------------+-----------+----------------------- 1 | John | MAIL | mail addr | 2 | Bill | HOME | home addr | 3 | Rick | MAIL | mail addr | ``` I'm currently flattening the `user_addresses` table so each user is one row, with the home/mail addresses in their own columns. Then I'm selecting from this new flattened view and doing a case statement: `case when mail_address is not null then mail_address else home_address end` Should I be using `max(case when)`, a union/minus, or something else? What is the best way to go about accomplishing this?
Any particular reason not to use PIVOT? [SQL Fiddle](http://sqlfiddle.com/#!4/2e6fd/6) **Oracle 11g R2 Schema Setup**: ``` CREATE TABLE t ("id" int, "name" varchar2(4), "address_type" varchar2(4), "address" varchar2(9)) ; INSERT ALL INTO t ("id", "name", "address_type", "address") VALUES (1, 'John', 'HOME', 'home addr') INTO t ("id", "name", "address_type", "address") VALUES (1, 'John', 'MAIL', 'mail addr') INTO t ("id", "name", "address_type", "address") VALUES (2, 'Bill', 'HOME', 'home addr') INTO t ("id", "name", "address_type", "address") VALUES (3, 'Rick', 'HOME', 'home addr') INTO t ("id", "name", "address_type", "address") VALUES (3, 'Rick', 'MAIL', 'mail addr') SELECT * FROM dual ; ``` **Query 1**: ``` with flat as ( select * from t pivot( max("address") for "address_type" in ('HOME' as home,'MAIL' as mail) ) ) select "id","name",coalesce(mail, home) as address from flat ``` **[Results](http://sqlfiddle.com/#!4/2e6fd/6/0)**: ``` | id | name | ADDRESS | |----|------|-----------| | 2 | Bill | home addr | | 3 | Rick | mail addr | | 1 | John | mail addr | ``` p.s. Disregard double-quoted identifiers - too lazy to fix sqlfiddle's text-to-ddl parser output :)
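Without `PIVOT`, plain conditional aggregation plus `COALESCE` gives the same "prefer MAIL, fall back to HOME" collapse, and it is portable across engines. A sketch with SQLite via Python's `sqlite3`, using the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE user_addresses (id INTEGER, name TEXT, address_type TEXT, address TEXT);
INSERT INTO user_addresses VALUES
  (1,'John','HOME','home addr'), (1,'John','MAIL','mail addr'),
  (2,'Bill','HOME','home addr'),
  (3,'Rick','HOME','home addr'), (3,'Rick','MAIL','mail addr');
""")

# MAX(CASE ...) picks out each address type per user; COALESCE prefers MAIL.
rows = con.execute("""
SELECT id, name,
       COALESCE(MAX(CASE WHEN address_type = 'MAIL' THEN address END),
                MAX(CASE WHEN address_type = 'HOME' THEN address END)) AS address
FROM user_addresses
GROUP BY id, name
ORDER BY id
""").fetchall()
print(rows)  # [(1, 'John', 'mail addr'), (2, 'Bill', 'home addr'), (3, 'Rick', 'mail addr')]
```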
One way to approach this is to take all ids with the "mail" record and then all the "home" record for all ids with no "mail" record: ``` select ua.* from user_addresses us where address_type = 'MAIL' union all select ua.* from user_addresses ua where address_type = 'HOME' and not exists (select 1 from user_addresses ua2 where ua2.id = ua.id and ua2.address_type = 'MAIL' ); ``` Another method is to prioritize the rows using `row_number()`: ``` select ua.* from (select ua.*, row_number() over (partition by id order by (case when address_type = 'MAIL' then 1 else 2 end)) as seqnum from user_addresses ua ) ua where seqnum = 1; ```
Merge rows based on one column SQL
[ "sql", "oracle", "plsql", "oracle11g" ]
I am receiving the above error when I try to execute a query on MS SQL Server 2005. It is a dynamic query built up in multiple parts. Here is the simplified structure and data: ``` CREATE TABLE [record_fields]( [field_id] [int] NOT NULL, [campaign_id] [int] NOT NULL, [record_import_type_id] [int] NOT NULL, [fieldname] [varchar](150) NOT NULL, [import_file_column_index] [smallint] NOT NULL, [records_fieldname] [varchar](50) NOT NULL, [show_field] [bit] NOT NULL, [field_order] [int] NOT NULL, [dialler_field_required] [bit] NULL, [dialler_field_name] [varchar](50) NULL, [dialler_field_order] [int] NULL, [field_group_id] [int] NOT NULL ); INSERT INTO [record_fields] VALUES(1,2,1,'Record Id',47,'record_id',0,1,1,'Record Id',NULL,1); INSERT INTO [record_fields] VALUES(2,2,1,'Field Name 1',46,'field01',0,1,1,'Field 1',NULL,1); INSERT INTO [record_fields] VALUES(3,2,1,'Field Name 2',46,'field02',0,1,1,'Field 2',NULL,1); CREATE TABLE [records]( [record_id] [int] NOT NULL, [campaign_id] [int] NOT NULL, [dialler_entry_created] BIT NOT NULL, [field01] [VARCHAR](50) NULL, [field02] [VARCHAR](50) NULL ); INSERT INTO [records] VALUES(1,2,0,'Field01 Value','Field02 Value'); INSERT INTO [records] VALUES(1,2,0,'Field01 Value','Field02 Value'); ``` And the query I am attempting to run is as follows: ``` DECLARE @campaignId INT SET @campaignId = 2 DECLARE @FieldName VARCHAR(250) DECLARE @ColumnName VARCHAR(250) DECLARE @SelectQuery VARCHAR(2000) DECLARE @InsertQuery VARCHAR(2000) SET @SelectQuery = '' SET @InsertQuery = '' declare #FieldNames cursor for SELECT records_fieldname, dialler_field_name FROM record_fields where campaign_id = @campaignid AND dialler_field_required = 1 ORDER BY dialler_field_order open #FieldNames fetch next from #FieldNames into @FieldName, @ColumnName while @@fetch_status = 0 begin -- Build up a dynamic string of columns to read in the select query SET @SelectQuery = @SelectQuery + '''"''+' + @FieldName + '+''"'', ' -- Build up a dynamic string of column 
names to add to our temp table SET @InsertQuery = @InsertQuery + '[' + @ColumnName + '] varchar(255), ' fetch next from #FieldNames into @FieldName, @ColumnName end close #FieldNames deallocate #FieldNames IF Len(@SelectQuery) > 1 AND Len(@InsertQuery) > 1 BEGIN -- Trim the trailing ',' SET @InsertQuery = Left(@InsertQuery,Len(@InsertQuery)-1) SET @SelectQuery = Left(@SelectQuery,Len(@SelectQuery)-1) EXEC ('DECLARE @RecordData TABLE (' + @InsertQuery + ');' + 'INSERT INTO @RecordData SELECT ' + @SelectQuery + ' from records WHERE campaign_id =' + @campaignId + ' AND ISNULL(dialler_entry_created, 0) = 0; ' + 'SELECT * FROM @RecordData;') END ``` The problem seems to stem from trying to select on record_id, which gives the *'Conversion failed when converting the varchar value '"' to datatype int'* error. If I do not include the record_id column (which is the only INT column in the select list) it seems to work ok. I have tried to apply CONVERT(VARCHAR(250), record_id) but cannot seem to get the syntax correct. Any help would be greatly appreciated.
There were two errors: one in constructing the select query, and one in concatenating `@campaignId` (an `INT`) into the `EXEC` string without casting it: ``` DECLARE @campaignId INT SET @campaignId = 2 DECLARE @FieldName VARCHAR(250) DECLARE @ColumnName VARCHAR(250) DECLARE @SelectQuery VARCHAR(2000) DECLARE @InsertQuery VARCHAR(2000) SET @SelectQuery = '' SET @InsertQuery = '' declare #FieldNames cursor for SELECT records_fieldname, dialler_field_name FROM record_fields where campaign_id = @campaignid AND dialler_field_required = 1 ORDER BY dialler_field_order open #FieldNames fetch next from #FieldNames into @FieldName, @ColumnName while @@fetch_status = 0 begin -- Build up a dynamic string of columns to read in the select query SET @SelectQuery = @SelectQuery + @FieldName + ', ' -- Build up a dynamic string of column names to add to our temp table SET @InsertQuery = @InsertQuery + '[' + @ColumnName + '] varchar(255), ' fetch next from #FieldNames into @FieldName, @ColumnName end close #FieldNames deallocate #FieldNames IF Len(@SelectQuery) > 1 AND Len(@InsertQuery) > 1 BEGIN -- Trim the trailing ',' SET @InsertQuery = Left(@InsertQuery,Len(@InsertQuery)-1) SET @SelectQuery = Left(@SelectQuery,Len(@SelectQuery)-1) Declare @result nvarchar(max) ='DECLARE @RecordData TABLE (' + @InsertQuery + ');' + 'INSERT INTO @RecordData SELECT ' + @SelectQuery + ' from records WHERE campaign_id =' + cast(@campaignId as nvarchar(50))+ ' AND ISNULL(dialler_entry_created, 0) = 0; ' + 'SELECT * FROM @RecordData;' Exec(@result) END ``` Here is a working fiddle: <http://sqlfiddle.com/#!6/e450c/23>
``` SET @SelectQuery = @SelectQuery + '''"''+' + @FieldName + '+''"'', ' ``` Change to ``` SET @SelectQuery = @SelectQuery + ' ' + @FieldName + ' ,' ``` If you want an often discouraged (but nonetheless very handy) shortcut to converting numbers to string, change the bit at the bottom from ``` @campaignId ``` to ``` RTRIM@campaignId) ``` Does the character conversion and trims off trailing etc all in one word.
Conversion failed when converting the varchar value '"' to datatype int in dynamic query
[ "sql", "sql-server", "dynamicquery" ]
I have the following tables, in which I store the properties of my application (FYI I'm not directly using SQL in my app but Python Django which allows me to have a structure like that): ``` string_properties id | name | value (type=string) 1 : module.ip : "127.0.0.1" 2 : module.name : "a_name" : : integer_properties id | name | value (type=integer) 1 : module.size : 10 2 : module.port : 80 : : boolean_properties id | name | value (type=bool) 1 : module.activated : 1 : : ``` And I would like to generate a view which shows for each property, the value associated. I tried several strategies to do so: 1- using a basic UNION: it didn't work as the "value" column isn't the same type in all my tables 2- changing the names of the "value" column, adding a suffix to it (like value\_int, value\_string, etc). Then I would have expected to be able to use an UNION statement but it failed too. I would be happy with a result table like below: ``` result id | name | value_int | value_string | value_bool 1 : module.ip : null : "127.0.0.1" : null 2 : module.name : nulll : "a_name" : null 3 : module.activated : null : null : 1 4 : module.size : 10 : null : null 5 : module.port : 80 : null : null : : ``` 3- I guess I could convert all my values to string and then UNION the table, but it's ugly to lose the types. What do you think?
If you "don't want to lose the types", you can select the underlying type as a constant in the `UNION`: ``` CREATE VIEW all_properties AS SELECT id, name, typ, value FROM ( SELECT sp.id AS id , sp.name AS name , 'String' AS typ , sp.value AS value FROM string_properties sp UNION ALL SELECT ip.id AS id , ip.name AS name , 'Integer' AS typ , ip.value::varchar AS value FROM integer_properties ip UNION ALL SELECT bp.id AS id , bp.name AS name , 'Boolean' AS typ , bp.value::varchar AS value FROM boolean_properties bp ) x ; SELECT * FROM all_properties ap ORDER BY id,name ; ```
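The view with a type tag is easy to demo on any engine; a SQLite sketch via Python's `sqlite3`, with the casts spelled as portable `CAST(... AS TEXT)` instead of PostgreSQL's `::varchar`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE string_properties  (id INTEGER, name TEXT, value TEXT);
CREATE TABLE integer_properties (id INTEGER, name TEXT, value INTEGER);
CREATE TABLE boolean_properties (id INTEGER, name TEXT, value INTEGER);
INSERT INTO string_properties  VALUES (1,'module.ip','127.0.0.1'), (2,'module.name','a_name');
INSERT INTO integer_properties VALUES (1,'module.size',10), (2,'module.port',80);
INSERT INTO boolean_properties VALUES (1,'module.activated',1);

-- All values cast to text, with the original type kept as a tag column.
CREATE VIEW all_properties AS
  SELECT id, name, 'String'  AS typ, value               AS value FROM string_properties
  UNION ALL
  SELECT id, name, 'Integer' AS typ, CAST(value AS TEXT) AS value FROM integer_properties
  UNION ALL
  SELECT id, name, 'Boolean' AS typ, CAST(value AS TEXT) AS value FROM boolean_properties;
""")

rows = con.execute("SELECT typ, name, value FROM all_properties ORDER BY typ, name").fetchall()
print(rows)
```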
``` SELECT id, name, value AS value_int, null AS value_string, null AS value_bool FROM integer_properties UNION ALL SELECT id, name, null AS value_int, value AS value_string, null AS value_bool FROM string_properties UNION ALL SELECT id, name, null AS value_int, null AS value_string, value AS value_bool FROM boolean_properties ```
SQL Union with tables with different types
[ "mysql", "sql", "postgresql" ]