I have a scenario where I have to find a missing record.

```
-- Code for creating the target table
CREATE TABLE [dbo].[NaTarget](
    [BillKey] [int] NULL,
    [StartDate] [date] NULL,
    [EndDate] [date] NULL
)
GO

-- Code for creating the source table
CREATE TABLE [dbo].[NaSource](
    [BillKey] [int] NULL,
    [StartDate] [date] NULL,
    [EndDate] [date] NULL
)
GO

-- Inserting records in source
INSERT INTO [dbo].[NaSource] ([BillKey],[StartDate],[EndDate]) VALUES ('1','2014-01-13','2014-03-27')
GO
INSERT INTO [dbo].[NaSource] ([BillKey],[StartDate],[EndDate]) VALUES ('2','2014-02-14','2014-04-20')
GO
INSERT INTO [dbo].[NaSource] ([BillKey],[StartDate],[EndDate]) VALUES ('3','2013-11-13','2014-01-18')
GO

-- Inserting records in target
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('1','2014-01-13','2014-01-31')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('1','2014-02-01','2014-02-28')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('1','2014-03-01','2014-03-27')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('2','2014-02-14','2014-02-28')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('2','2014-03-01','2014-03-31')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('2','2014-04-01','2014-04-20')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('3','2013-11-13','2013-11-30')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('3','2013-12-01','2013-12-31')
INSERT INTO [dbo].[NaTarget] ([BillKey],[StartDate],[EndDate]) VALUES ('3','2014-01-01','2014-01-18')
```

For any `BillKey`, the first target row's `StartDate` is the `StartDate` from the source and its `EndDate` is the last day of that month; each following row for the same `BillKey` starts on the first day of the next month and ends on the last day of that month, until the source's `EndDate` for that `BillKey` is reached. I have to find any of these rows that has been deleted.
For example, if for `BillKey = 3` the row

```
StartDate = 2013-12-01
EndDate   = 2013-12-31
```

is not present in the target, we need to find it. The example data should explain it better.
Here is an attempt at this. If I understand your question correctly, you're looking to check whether any expected rows in the target table, based on the start and end dates in the source table, are missing. You'll need to essentially recreate the expected result set from the `NaSource` table's `StartDate` and `EndDate`, and check that against the `NaTarget` table. I'm positive there's a more efficient way of doing this (preferably without cursors and while loops), but this should give you the results you're looking for:

```
Declare @Results Table
(
    BillKey Int,
    StartDate Date,
    EndDate Date
)

Declare @BillKey Int
Declare @EndDate Date
Declare @Cur Date

Declare cur Cursor Fast_Forward For
    Select BillKey, StartDate, EndDate
    From NaSource

Open cur

While 1 = 1
Begin
    Fetch Next From cur Into @BillKey, @Cur, @EndDate
    If @@FETCH_STATUS <> 0 Break

    While (@Cur < @EndDate)
    Begin
        Insert @Results
        Select @BillKey,
               @Cur,
               Case When DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, @Cur) + 1, 0)) > @EndDate
                    Then Convert(Date, @EndDate)
                    Else Convert(Date, DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, @Cur) + 1, 0)))
               End As EndDate

        Set @Cur = DATEADD(m, DATEDIFF(m, -1, @Cur), 0)
    End
End

Close cur
Deallocate cur

Select R.*
From @Results R
Where Not Exists
(
    Select 1
    From NaTarget T
    Where R.BillKey = T.BillKey
      And R.StartDate = T.StartDate
      And R.EndDate = T.EndDate
)
```
Here's my solution using a recursive CTE: build what the `natarget` table should look like and compare it to the actual `natarget`. I started getting confused on the dates piece, so it may be possible to simplify, but this does work.

```
;with targetCte as (
    select billkey,
           startdate,
           CAST(DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, startdate) + 1, 0)) as DATE) as enddate
    from nasource
    union all
    select t.billkey,
           cast(DATEADD(month, DATEDIFF(mm, 0, dateadd(mm, 1, t.startdate)), 0) as DATE),
           case when cast(DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, DATEADD(month, DATEDIFF(mm, 0, dateadd(mm, 1, t.startdate)), 0)) + 1, 0)) as DATE) < n.enddate
                then cast(DATEADD(d, -1, DATEADD(m, DATEDIFF(m, 0, DATEADD(month, DATEDIFF(mm, 0, dateadd(mm, 1, t.startdate)), 0)) + 1, 0)) as DATE)
                else n.enddate
           end as enddate
    from targetCte t
    join nasource n on n.billkey = t.billkey
    where t.enddate < n.enddate
)
select *
from targetcte t
where not exists (select *
                  from natarget nt
                  where t.billkey = nt.billkey
                    and t.startdate = nt.startdate
                    and t.enddate = nt.enddate)
```
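Since the fiddle links age badly, here is a self-contained sketch of the same month-splitting idea, translated to SQLite and run from Python's `sqlite3` module. SQLite's `date()` modifiers stand in for the `DATEADD`/`DATEDIFF` tricks above; table and column names follow the question, but only `BillKey` 3's data is loaded, with its December row deliberately left out so the query has something to find. This is a translation, not the T-SQL itself.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE NaSource (BillKey INT, StartDate TEXT, EndDate TEXT);
CREATE TABLE NaTarget (BillKey INT, StartDate TEXT, EndDate TEXT);
INSERT INTO NaSource VALUES (3, '2013-11-13', '2014-01-18');
-- the 2013-12 row is deliberately missing from the target
INSERT INTO NaTarget VALUES (3, '2013-11-13', '2013-11-30'),
                            (3, '2014-01-01', '2014-01-18');
""")

rows = cur.execute("""
WITH RECURSIVE expected(BillKey, StartDate, EndDate, FinalDate) AS (
    -- first chunk: source start date up to month end (or the final date)
    SELECT BillKey, StartDate,
           MIN(EndDate, date(StartDate, 'start of month', '+1 month', '-1 day')),
           EndDate
    FROM NaSource
    UNION ALL
    -- each next chunk starts on the 1st of the following month
    SELECT BillKey, date(StartDate, 'start of month', '+1 month'),
           MIN(FinalDate, date(StartDate, 'start of month', '+2 month', '-1 day')),
           FinalDate
    FROM expected
    WHERE EndDate < FinalDate
)
SELECT e.BillKey, e.StartDate, e.EndDate
FROM expected e
WHERE NOT EXISTS (SELECT 1 FROM NaTarget t
                  WHERE t.BillKey = e.BillKey
                    AND t.StartDate = e.StartDate
                    AND t.EndDate = e.EndDate)
""").fetchall()
print(rows)  # the missing December chunk
```

ISO-formatted date strings compare correctly as text in SQLite, which is what makes the `WHERE EndDate < FinalDate` recursion guard work without any real date type.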
SQL finding Missing Record
[ "sql", "sql-server", "t-sql" ]
I have the following table `log`:

```
event_time       | name
------------------------
2014-07-16 11:40   Bob
2014-07-16 10:00   John
2014-07-16 09:20   Bob
2014-07-16 08:20   Bob
2014-07-15 11:20   Bob
2014-07-15 10:20   John
2014-07-15 09:00   Bob
```

I would like to generate a report where I can group data by the number of entries per day and by entry day. So the resulting report for the table above would be something like this:

```
event_date | 0-2 | 3 | 4-99 |
-------------------------------
2014-07-16    1    1    0
2014-07-15    2    0    0
```

I used the following approaches to try to solve it:

* [Select with grouping in range](https://stackoverflow.com/questions/7597723/select-with-grouping-in-range)
* [How to select the count of values grouped by ranges](https://stackoverflow.com/questions/5136246/how-to-select-the-count-of-values-grouped-by-ranges)

If I find the answer before anybody posts it here, I will share it.

## Added

I would like to count the number of daily entries for each `name`. Then I check which column this value belongs to, and then I add 1 to that column.
I took it in two steps. The inner query gets the base counts; the outer query uses case statements to sum the counts. [SQL Fiddle Example](http://sqlfiddle.com/#!15/328ef/6)

```
select event_date,
       sum(case when cnt between 0 and 2 then 1 else 0 end) as "0-2",
       sum(case when cnt = 3 then 1 else 0 end) as "3",
       sum(case when cnt between 4 and 99 then 1 else 0 end) as "4-99"
from (select cast(event_time as date) as event_date,
             name,
             count(1) as cnt
      from log
      group by cast(event_time as date), name) baseCnt
group by event_date
order by event_date
```
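The same two-step aggregation can be checked end to end in SQLite through Python's `sqlite3` module. This is a runnable sketch with the question's sample rows; SQLite's `date()` function stands in for `cast(event_time as date)`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE log (event_time TEXT, name TEXT);
INSERT INTO log VALUES
 ('2014-07-16 11:40','Bob'), ('2014-07-16 10:00','John'),
 ('2014-07-16 09:20','Bob'), ('2014-07-16 08:20','Bob'),
 ('2014-07-15 11:20','Bob'), ('2014-07-15 10:20','John'),
 ('2014-07-15 09:00','Bob');
""")

rows = cur.execute("""
SELECT event_date,
       SUM(CASE WHEN cnt BETWEEN 0 AND 2 THEN 1 ELSE 0 END) AS "0-2",
       SUM(CASE WHEN cnt = 3 THEN 1 ELSE 0 END) AS "3",
       SUM(CASE WHEN cnt BETWEEN 4 AND 99 THEN 1 ELSE 0 END) AS "4-99"
FROM (SELECT date(event_time) AS event_date, name, COUNT(*) AS cnt
      FROM log
      GROUP BY date(event_time), name) baseCnt
GROUP BY event_date
ORDER BY event_date DESC
""").fetchall()
print(rows)
```

The inner query produces one row per (day, name) with its count; the outer query then counts how many of those per-name counts fall in each bucket, matching the report in the question.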
This is a variation on a `PIVOT` query (although PostgreSQL supports this via the [`crosstab(...)` table functions](http://www.postgresql.org/docs/current/static/tablefunc.html)). The existing answers cover the basic technique, I just prefer to construct queries without the use of `CASE`, where possible. To get started, we need a couple of things. The first is essentially a Calendar Table, or entries from one (if you don't already have one, they're among the most useful dimension tables). If you don't have one, the entries for the specified dates can easily be generated: ``` WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay FROM GENERATE_SERIES(CAST('2014-07-01' AS DATE), CAST('2014-08-01' AS DATE), INTERVAL '1 DAY') AS dr(startOfDay)) ``` `SQL Fiddle Demo` This is primarily used to create the first step in the double aggregate, like so: ``` SELECT Calendar_Range.startOfDay, COUNT(Log.name) FROM Calendar_Range LEFT JOIN Log ON Log.event_time >= Calendar_Range.startOfDay AND Log.event_time < Calendar_Range.nextDay GROUP BY Calendar_Range.startOfDay, Log.name ``` `SQL Fiddle Demo` Remember that most aggregate columns with a nullable expression (here, `COUNT(Log.name)`) will *ignore* `null` values (not count them). This is also one of the few times it's acceptable to **not** include a grouped-by column in the `SELECT` list (normally it makes the results ambiguous). For the actual queries I'll put this into a subquery, but it would also work as a CTE. We also need a way to construct our `COUNT` ranges. That's pretty easy too: ``` Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) as next FROM (VALUES('0 - 2', 0), ('3', 3), ('4+', 4)) e(text, start)) ``` `SQL Fiddle Demo` We'll be querying these as "exclusive upper-bound" as well. We now have all the pieces we need to do the query. We can actually use these virtual tables to make queries in both veins of the current answers. --- First, the `SUM(CASE...)` style. 
For this query, we'll take advantage of the null-ignoring qualities of aggregate functions again:

```
WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay
                        FROM GENERATE_SERIES(CAST('2014-07-14' AS DATE),
                                             CAST('2014-07-17' AS DATE),
                                             INTERVAL '1 DAY') AS dr(startOfDay)),
     Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) AS next
                     FROM (VALUES('0 - 2', 0), ('3', 3), ('4+', 4)) e(text, start))
SELECT startOfDay,
       COUNT(Zero_To_Two.text) AS Zero_To_Two,
       COUNT(Three.text) AS Three,
       COUNT(Four_And_Up.text) AS Four_And_Up
FROM (SELECT Calendar_Range.startOfDay, COUNT(Log.name) AS count
      FROM Calendar_Range
      LEFT JOIN Log
        ON Log.event_time >= Calendar_Range.startOfDay
           AND Log.event_time < Calendar_Range.nextDay
      GROUP BY Calendar_Range.startOfDay, Log.name) Entry_Count
LEFT JOIN Count_Range Zero_To_Two
  ON Zero_To_Two.text = '0 - 2'
     AND Entry_Count.count >= Zero_To_Two.start
     AND Entry_Count.count < Zero_To_Two.next
LEFT JOIN Count_Range Three
  ON Three.text = '3'
     AND Entry_Count.count >= Three.start
     AND Entry_Count.count < Three.next
LEFT JOIN Count_Range Four_And_Up
  ON Four_And_Up.text = '4+'
     AND Entry_Count.count >= Four_And_Up.start
GROUP BY startOfDay
ORDER BY startOfDay
```

`SQL Fiddle Example`

---

The other option is of course the `crosstab` query, where the `CASE` was being used to segment the results.
We'll use the `Count_Range` table to decode the values for us:

```
SELECT startOfDay, "0 - 2", "3", "4+"
FROM CROSSTAB($$WITH Calendar_Range AS (SELECT startOfDay, startOfDay + INTERVAL '1 DAY' AS nextDay
                                        FROM GENERATE_SERIES(CAST('2014-07-14' AS DATE),
                                                             CAST('2014-07-17' AS DATE),
                                                             INTERVAL '1 DAY') AS dr(startOfDay)),
                    Count_Range AS (SELECT text, start, LEAD(start) OVER(ORDER BY start) AS next
                                    FROM (VALUES('0 - 2', 0), ('3', 3), ('4+', 4)) e(text, start))
               SELECT Entry_Count.startOfDay, Count_Range.text, COUNT(*) AS count
               FROM (SELECT Calendar_Range.startOfDay, COUNT(Log.name) AS count
                     FROM Calendar_Range
                     LEFT JOIN Log
                       ON Log.event_time >= Calendar_Range.startOfDay
                          AND Log.event_time < Calendar_Range.nextDay
                     GROUP BY Calendar_Range.startOfDay, Log.name) Entry_Count
               JOIN Count_Range
                 ON Entry_Count.count >= Count_Range.start
                    AND (Entry_Count.count < Count_Range.next OR Count_Range.next IS NULL)
               GROUP BY Entry_Count.startOfDay, Count_Range.text
               ORDER BY Entry_Count.startOfDay, Count_Range.text$$,
              $$VALUES('0 - 2', '3', '4+')$$) Data(startOfDay DATE, "0 - 2" INT, "3" INT, "4+" INT)
```

(I *believe* this is correct, but don't have a way to test it - Fiddle doesn't seem to have the crosstab functionality loaded. In particular, the CTEs probably must go inside the function itself, but I'm not sure....)
SQL: grouping by number of entries and entry date
[ "sql", "postgresql" ]
I want to provide a correctly ordered list of names. My question: is the SQL `ORDER BY` clause the best way to provide correctly ordered strings in multiple languages, or are there some problems which should be considered? I just tested it with Russian letters and it seems to work.
The sorting of a column depends on the collation in use for that column, or the optional collation applied to that sort, i.e. `ORDER BY {column name} COLLATE {collation name}`. You would be advised to pick a collation that meets your requirements, perhaps `Cyrillic_General_CI_AS`. See <http://msdn.microsoft.com/en-us/library/ms143508(v=sql.105).aspx>
```
SELECT * FROM your_table ORDER BY nlssort(your_column, 'NLS_SORT=russian');
```
Order By on cyrillic letters
[ "sql", "sql-server", "unicode", "sql-server-2012", "sql-order-by" ]
We use the following query **to count the number of primary key columns** in a database:

```
SELECT t.name, is_primary_key
FROM sys.indexes i
INNER JOIN sys.tables t
    ON i.object_id = t.object_id
    AND t.type = 'U'
LEFT JOIN sys.extended_properties AS EP
    ON EP.major_id = T.[object_id]
WHERE (EP.class_desc IS NULL
       OR (EP.class_desc <> 'OBJECT_OR_COLUMN'
           AND EP.[name] <> 'microsoft_database_tools_support'))
```

**It ignores the columns in system tables.** Now we want to **query for the number of foreign keys** in the database. This **should ignore the system tables and display the count against each table name**. Is this possible? The query below returns all foreign keys in the db, but I want it to ignore the system tables, just like the query above.

```
SELECT COUNT(*) AS 'FOREIGN_KEY_CONSTRAINT'
FROM sys.objects
WHERE type_desc IN ('FOREIGN_KEY_CONSTRAINT')
```
Run the query below. This will work for the required purpose:

```
SELECT KC.Column_Name, t.Table_Name, tc.Constraint_Name
FROM information_schema.table_constraints tc
LEFT JOIN information_schema.tables t
    ON tc.Table_Name = t.Table_Name
LEFT JOIN information_schema.KEY_COLUMN_USAGE kc
    ON kc.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
WHERE constraint_type = 'FOREIGN KEY'
  AND TABLE_TYPE = 'BASE TABLE'
```
This?

```
SELECT DISTINCT A.COLUMN_NAME, B.CONSTRAINT_TYPE
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE A
JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS B
    ON A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
ORDER BY CONSTRAINT_TYPE ASC
```

Use a `WHERE` clause if needed.
Count number of foreign key columns, ignore system tables, display against each tablename
[ "sql", "sql-server" ]
The Oracle documentation claims that it stores XMLType more compactly as BINARY XML than as CLOB. But how do I find out how much space is taken by the binary XML?

```
CREATE TABLE t (x XMLTYPE) XMLTYPE x STORE AS BINARY XML;

SELECT vsize(x), dbms_lob.getlength(XMLTYPE.getclobval(x)) FROM t;

94  135254
94   63848
94   60188
```

So `vsize` seems to be the size of some sort of pointer or LOB locator, and `getclobval` unpacks the binary XML into text. But what about the storage size of the binary XML itself? Please help; the table size is 340 GB, so it's worth looking into storage options...
Oracle's Binary XML format corresponds to the "Compact Schema Aware XML Format", abbreviated CSX. The encoded data is stored as a BLOB field. Details about the binary XML format are available in the Oracle documentation ([here](http://www.vldb.org/pvldb/2/vldb09-148.pdf) and [here](http://www.oracle.com/technetwork/database-features/xmldb/oracle-binaryxml-rfc-128974.pdf)).

The real size of the data field depends on the LOB storage parameters of the XMLType column. E.g. if the `storage in row` option is enabled, then small documents are stored directly with the other data and `vsize()` returns appropriate values. In reality Oracle creates an underlying BLOB column with a system name, which can be found by querying the `user_tab_cols` view:

```
select table_name, column_name, data_type
from user_tab_cols
where table_name = 'T'
  and hidden_column = 'YES'
  and column_id = (
      select column_id
      from user_tab_cols
      where table_name = 'T' and column_name = 'X'
  )
```

This query returns the hidden system column name, which looks like `SYS_NC00002$`. After that it's possible to get the size of the fields with a regular `dbms_lob.getlength()` call against the hidden column:

```
select dbms_lob.getlength(SYS_NC00002$) from t
```
Actual storage consumption is recorded in a view called `user_segments`. To find the LOB correlating to the column you will have to join `user_segments` with `user_lobs`:

```
CREATE TABLE clob_table (x XMLTYPE) XMLTYPE x STORE AS CLOB;
CREATE TABLE binaryxml_table (x XMLTYPE) XMLTYPE x STORE AS BINARY XML;

INSERT INTO clob_table (x)
SELECT XMLELEMENT("DatabaseObjects",
         XMLAGG(XMLELEMENT("Object",
                  XMLATTRIBUTES(owner, object_type as type, created, status),
                  object_name))) as x
FROM all_objects;

INSERT INTO binaryxml_table (x)
SELECT XMLELEMENT("DatabaseObjects",
         XMLAGG(XMLELEMENT("Object",
                  XMLATTRIBUTES(owner, object_type as type, created, status),
                  object_name))) as x
FROM all_objects;

SELECT lobs.table_name,
       (SELECT column_name
        FROM user_tab_cols
        WHERE table_name = lobs.table_name
          AND data_type = 'XMLTYPE'
          AND column_id = (SELECT column_id
                           FROM user_tab_cols
                           WHERE table_name = lobs.table_name
                             AND column_name = lobs.column_name)
       ) column_name,
       seg.segment_name,
       seg.bytes
FROM user_lobs lobs, user_segments seg
WHERE lobs.segment_name = seg.segment_name;

TABLE_NAME      COLUMN_NAME SEGMENT_NAME              BYTES
--------------- ----------- ------------------------- --------
BINARYXML_TABLE X           SYS_LOB0000094730C00002$$  7536640
CLOB_TABLE      X           SYS_LOB0000094727C00002$$ 19922944
```
How big is an Oracle XMLType when stored as BINARY XML
[ "sql", "xml", "oracle", "oracle11gr2" ]
Here is how my table looks:

```
CREATE TABLE my_table(id INT, user_id VARCHAR(5));

INSERT INTO my_table VALUES
(1, 100), (2, 200), (3, 100), (4, 150),
(5, 200), (6, 300), (7, 400), (8, 500);
```

What I want: first of all, I need to group the rows by `user_id`, which gives **100, 150, 200, 300, 400, 500**. Then, for each `user_id`, I want the set of its own `id` values. Finally, I want this answer:

```
user_id | sets
100     | 1,3
150     | 4
200     | 2,5
300     | 6
400     | 7
500     | 8
```

Note: I need a single query. Please help me.
Try using the [**GROUP\_CONCAT**](http://www.w3resource.com/mysql/aggregate-functions-and-grouping/aggregate-functions-and-grouping-group_concat.php) function, like:

```
SELECT user_id, GROUP_CONCAT(id SEPARATOR ',') as sets
FROM my_table
GROUP BY user_id;
```
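As a quick, runnable illustration: SQLite's `group_concat` behaves like MySQL's here (modulo the separator syntax), so the query can be exercised from Python's `sqlite3` module with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE my_table (id INT, user_id TEXT)")
cur.executemany("INSERT INTO my_table VALUES (?, ?)",
                [(1, '100'), (2, '200'), (3, '100'), (4, '150'),
                 (5, '200'), (6, '300'), (7, '400'), (8, '500')])

# SQLite spells the separator as a second argument instead of SEPARATOR
rows = cur.execute("""
SELECT user_id, GROUP_CONCAT(id, ',') AS sets
FROM my_table
GROUP BY user_id
ORDER BY user_id
""").fetchall()
print(rows)
```

Note that neither MySQL nor SQLite guarantees the order of ids inside each group unless you ask for it (MySQL supports `GROUP_CONCAT(id ORDER BY id)` for that).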
See using [GROUP\_CONCAT()](http://www.w3resource.com/mysql/aggregate-functions-and-grouping/aggregate-functions-and-grouping-group_concat.php) function.
How to get selected value group set in single query for mysql?
[ "mysql", "sql", "dataset" ]
I have a query which returns several rows of data (in `datetime` format) of a single column, obtained by performing `JOIN`s on multiple SQL tables. The data obtained is a `datetime` type, and now I just want the individual count for each of the latest three dates, i.e. the count of the last three distinct dates when sorted from earliest to latest.

SQL query:

```
SELECT ST.EffectiveDate
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
    ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
    ON ST.TerritoryID = SP.TerritoryID
```

The above query returns around 200 rows of data, but I want the count for each of the three latest dates, possibly the bottom three.
I would do this with `top` and `group by`:

```
SELECT TOP 3 ST.EffectiveDate, COUNT(*) as cnt
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
    ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
    ON ST.TerritoryID = SP.TerritoryID
GROUP BY ST.EffectiveDate
ORDER BY ST.EffectiveDate DESC
```
Added another query to get the latest 3 distinct dates:

```
SELECT count(1)
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
    ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
    ON ST.TerritoryID = SP.TerritoryID
WHERE ST.effectivedate in (select distinct top 3 effectivedate
                           from salesterritory
                           order by effectivedate desc)
```

Or, if you need to see the counts for the 3 dates broken out:

```
SELECT st.effectivedate, count(1)
FROM Person.Contact C
INNER JOIN Sales.SalesPerson SP
    ON C.ContactID = SP.SalesPersonID
FULL OUTER JOIN Sales.SalesTerritory ST
    ON ST.TerritoryID = SP.TerritoryID
WHERE ST.effectivedate in (select distinct top 3 effectivedate
                           from salesterritory
                           order by effectivedate desc)
GROUP BY st.effectivedate
```
Getting individual counts of last three distinct rows in column of data retrieved from multiple tables
[ "sql", "sql-server" ]
I have the following tables with data:

**Projects table:**

```
ProjID  Name
1       A
2       B
```

**Project Tags table:**

```
ProjID  TagID
1       1
2       2
2       3
```

**Tags Metadata table:**

```
TagID  TagName
1      Series:Go
2      Series:Exploring
3      Chapter:01
```

1. The `ProjID` field in the "Project Tags" table is a foreign key to the `ProjID` field in the "Projects" table.
2. The `TagID` field in the "Project Tags" table is a foreign key to the `TagID` field in the "Tags Metadata" table.

Here, projects have tags which are of two types: **Series** and **Chapter**. I need an SQL query to return a custom table associating the Series and Chapter tag names mentioned in the Tags Metadata table with the respective projects in the Projects table. The final table should look like this:

```
ProjID  Name  Series            Chapter
1       A     Series:Go         null
2       B     Series:Exploring  Chapter:01
```
Try this:

```
select *
from (select p.projid,
             p.projname,
             m.tagname,
             case when substring(m.tagname,1,1) = 'S' then 1 else 2 end tagtype --Type
      from projects p
      left join projecttags t on p.projid = t.projid
      left join tagsmetadata m on t.tagid = m.tagid
     ) as src
pivot (max(tagname)
       for tagtype in ([1],[2])
      ) as pvt;
```

We first create a derived column to check the tag type, and then use it along with `PIVOT` to get the desired results.

[Demo](http://rextester.com/TMC1196)
This will work for your sample data:

```
select p.ProjID,
       max(p.Name) as ProjName,
       max(case when charindex('Series:', t.TagName) > 0 then t.TagName else null end) Series,
       max(case when charindex('Chapter:', t.TagName) > 0 then t.TagName else null end) Chapter
from Projects p
join ProjectTags pt on (pt.ProjID = p.ProjID)
join Tags t on (t.TagID = pt.TagID)
group by p.ProjID
```

See the example: <http://sqlfiddle.com/#!3/71f2d/9>
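The conditional-aggregation approach is portable enough to demonstrate in SQLite from Python. This sketch uses the question's data but swaps `charindex` for an equivalent `LIKE 'Series:%'` prefix test, since SQLite lacks `charindex`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Projects (ProjID INT, Name TEXT);
CREATE TABLE ProjectTags (ProjID INT, TagID INT);
CREATE TABLE Tags (TagID INT, TagName TEXT);
INSERT INTO Projects VALUES (1, 'A'), (2, 'B');
INSERT INTO ProjectTags VALUES (1, 1), (2, 2), (2, 3);
INSERT INTO Tags VALUES (1, 'Series:Go'), (2, 'Series:Exploring'), (3, 'Chapter:01');
""")

rows = cur.execute("""
SELECT p.ProjID, p.Name,
       MAX(CASE WHEN t.TagName LIKE 'Series:%' THEN t.TagName END) AS Series,
       MAX(CASE WHEN t.TagName LIKE 'Chapter:%' THEN t.TagName END) AS Chapter
FROM Projects p
JOIN ProjectTags pt ON pt.ProjID = p.ProjID
JOIN Tags t ON t.TagID = pt.TagID
GROUP BY p.ProjID, p.Name
ORDER BY p.ProjID
""").fetchall()
print(rows)
```

`MAX` over a group where every `CASE` produced NULL yields NULL, which is exactly how project A ends up with a null Chapter column.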
SQL - query to create custom table
[ "sql", "sql-server" ]
The results of 1, 2 and 3 below are the same. I just need to know whether this logic is correct, i.e. that each of these queries returns the number of primary keys.

1. ```
   SELECT *
   FROM sys.indexes i
   INNER JOIN sys.tables t
       ON i.object_id = t.object_id
       AND t.type = 'U'
   LEFT JOIN sys.extended_properties AS EP
       ON EP.major_id = T.[object_id]
   WHERE is_primary_key = 1
   ```
2. ```
   SELECT COUNT(*) AS 'PRIMARY_KEY_CONSTRAINT'
   FROM sys.objects
   WHERE type_desc IN ('PRIMARY_KEY_CONSTRAINT')
   ```
3. ```
   SELECT Count(*)
   FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
   WHERE CONSTRAINT_TYPE = 'PRIMARY KEY'
   ```
```
SELECT Count(*)
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'PRIMARY KEY'
```

I think this is the better way to do that.
The third query is correct as it is, but the second query needs a check added to exclude system tables: `sys.objects` includes system tables (so we need to add a check), whereas INFORMATION\_SCHEMA does not include system tables.

2)

```
SELECT COUNT(*) AS 'PRIMARY_KEY_CONSTRAINT'
FROM sys.objects
WHERE type_desc IN ('PRIMARY_KEY_CONSTRAINT')
  AND is_ms_shipped <> 1
```

3)

```
SELECT Count(*)
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'PRIMARY KEY'
```
Query logic check: to calculate no. of primary keys in db
[ "sql", "sql-server" ]
I have the following table:

```
create table mytab
(
    mID int primary key,
    pname varchar(100) not null,
    pvalue varchar(100) not null
)
```

Example data looks like:

```
mID |pname |pvalue
-----------------------
1   |AAR   | 2.3
1   |AAM   | 1.2
1   |GXX   | 5
2   |AAR   | 5.4
2   |AAM   | 3.0
3   |AAR   | 0.2
```

I want to flip the table so that I get:

```
mID | AAR | AAM | GXX
---------------------
1   | 2.3 | 1.2 | 5
2   | 5.4 | 3.0 | 0
3   | 0.2 | 0   | 0
```

Is this somehow possible, and if so, is there a way to create a dynamic query? Because there are lots of these pname/pvalue pairs.
Write a dynamic `PIVOT` query as:

```
DECLARE @cols AS NVARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX)
DECLARE @colsFinal AS NVARCHAR(MAX)

select @cols = STUFF((SELECT distinct ',' + QUOTENAME(pname)
                      FROM mytab
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)'), 1, 1, '')

select @colsFinal = STUFF((SELECT distinct ',' + 'ISNULL(' + QUOTENAME(pname) + ',0) AS ' + QUOTENAME(pname)
                           FROM mytab
                           FOR XML PATH(''), TYPE
                          ).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
--Edited query to replace null with 0 in the final result set.

SELECT @query = 'SELECT mID, ' + @colsFinal + '
                 FROM mytab
                 PIVOT
                 (
                     MAX(pvalue)
                     FOR pname IN(' + @cols + ')
                 ) AS p;'

exec sp_executesql @query
```

[Check demo here..](http://rextester.com/PQWE85096)
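The essence of a dynamic pivot, namely discover the distinct `pname` values first and then generate one output column per value, can be sketched outside T-SQL too. Below, Python builds the column list and runs the generated statement against an in-memory SQLite database; conditional aggregation replaces the `PIVOT` operator, which SQLite lacks. (In real code the interpolated names would need quoting/validation, just as `QUOTENAME` provides above.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE mytab (mID INT, pname TEXT, pvalue REAL)")
cur.executemany("INSERT INTO mytab VALUES (?, ?, ?)",
                [(1, 'AAR', 2.3), (1, 'AAM', 1.2), (1, 'GXX', 5),
                 (2, 'AAR', 5.4), (2, 'AAM', 3.0), (3, 'AAR', 0.2)])

# step 1: discover the distinct pname values
names = [r[0] for r in cur.execute("SELECT DISTINCT pname FROM mytab ORDER BY pname")]

# step 2: build one conditional-aggregation column per name, defaulting to 0
cols = ", ".join(
    f"COALESCE(MAX(CASE WHEN pname = '{n}' THEN pvalue END), 0) AS \"{n}\""
    for n in names)

rows = cur.execute(
    f"SELECT mID, {cols} FROM mytab GROUP BY mID ORDER BY mID").fetchall()
print(rows)
```

Columns come out in alphabetical order (AAM, AAR, GXX) because of the `ORDER BY pname` in step 1; missing pairs show up as 0, matching the desired output.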
```
Declare @t table (mID INT, pname VARCHAR(10), pvalue FLOAT)

INSERT INTO @t (mID, pname, pvalue) values (1, 'AAR', 2.3)
INSERT INTO @t (mID, pname, pvalue) values (1, 'AAM', 1.2)
INSERT INTO @t (mID, pname, pvalue) values (1, 'GXX', 5)
INSERT INTO @t (mID, pname, pvalue) values (2, 'AAR', 5.4)
INSERT INTO @t (mID, pname, pvalue) values (2, 'AAM', 0.3)
INSERT INTO @t (mID, pname, pvalue) values (3, 'AAR', 0.2)

select mid,
       ISNULL([AAR], 0) [AAR],
       ISNULL([AAM], 0) [AAM],
       ISNULL([GXX], 0) [GXX]
from (select mID, pvalue, pname
      from @t) d
pivot (max(pvalue)
       for pname in ([AAR], [AAM], [GXX])
      ) piv;
```
SQL Transpose table - sqlserver
[ "sql", "sql-server", "pivot" ]
I am trying to alter the code below to also include suppliers who did not supply anything from the `l_foods` table. I got it to display suppliers if they do supply food, but I cannot figure out how to display the rest with a 0 in the "number of foods" column. I thought the left join would help with that. I'm not sure where to go from here; any help would be appreciated.

```
SELECT a.supplier_id
     , b.supplier_name
     , count(a.supplier_id) AS "number of foods"
FROM l_foods a
LEFT JOIN l_suppliers b
    ON a.supplier_id = b.supplier_id
GROUP BY a.supplier_id, b.supplier_name
ORDER BY a.supplier_id
```

It gives me the table with the suppliers who have food located in the `l_foods` table:

```
Asp  A Soup Place             3
Cbc  Certified Beef Company   2
Frv  Frank Reed's Vegetables  2
Jbr  Just Beverages           2
Rgf  Really Good Foods        2
Vsb  Virginia Street Bakery   1
```
In order to see all suppliers you need to select from the suppliers table and left join to the foods table:

```
SELECT b.supplier_id
     , b.supplier_name
     , count(a.supplier_id) AS "number of foods"
FROM l_suppliers b
LEFT JOIN l_foods a
    ON a.supplier_id = b.supplier_id
GROUP BY b.supplier_id, b.supplier_name
ORDER BY b.supplier_id
```
Try using the `NVL` function (which replaces null with 0 the way I've used it here), selecting from the suppliers table, and using table `b` in the select list and group by:

```
SELECT b.supplier_id,
       b.supplier_name,
       nvl(count(a.supplier_id), 0) AS "number of foods"
FROM l_suppliers b
LEFT JOIN l_foods a
    ON a.supplier_id = b.supplier_id
GROUP BY b.supplier_id, b.supplier_name
ORDER BY b.supplier_id
```
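The key point in both answers, driving the query from the suppliers table so food-less suppliers survive the join and counting a column from the foods side so they count as 0, can be verified with a tiny SQLite session from Python (the second supplier row, `Jpr`, is invented for the demo; only `Asp` comes from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE l_suppliers (supplier_id TEXT, supplier_name TEXT);
CREATE TABLE l_foods (supplier_id TEXT, food TEXT);
INSERT INTO l_suppliers VALUES ('Asp', 'A Soup Place'),
                               ('Jpr', 'Jim Possible Ribs');  -- has no foods
INSERT INTO l_foods VALUES ('Asp', 'soup'), ('Asp', 'bread');
""")

rows = cur.execute("""
SELECT s.supplier_id, s.supplier_name,
       COUNT(f.supplier_id) AS "number of foods"
FROM l_suppliers s
LEFT JOIN l_foods f ON f.supplier_id = s.supplier_id
GROUP BY s.supplier_id, s.supplier_name
ORDER BY s.supplier_id
""").fetchall()
print(rows)
```

`COUNT(f.supplier_id)` skips NULLs, so the unmatched supplier gets 0 rather than 1; `COUNT(*)` would wrongly report 1 there, which is why the counted column must come from the foods side.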
SQL count the number of foods supplied by each supplier even if they did not supply anything
[ "sql", "oracle", "join" ]
I have a query where I get the **weekday** of a date, but by default:

* Sunday = 1
* Monday = 2
* etc.

The function is:

```
DATEPART(dw, ads.date) as weekday
```

I need the result to be:

* Sunday = 7
* Monday = 1
* etc.

Is there any shortcut to do this? Or will I have to use a `CASE` statement?
You can use a formula like:

```
(weekday + 5) % 7 + 1
```

If you decide to use this, it would be worth running through some examples to convince yourself that it actually does what you want.

**Addition**: to avoid being affected by the `DATEFIRST` setting (it could be set to any value between 1 and 7), the general formula is:

```
(weekday + @@DATEFIRST + 5) % 7 + 1
```
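A quick way to run through the examples, as suggested: enumerate the default `DATEFIRST 7` numbering (Sunday = 1 ... Saturday = 7) and apply the formula to every day.

```python
# Check the (weekday + 5) % 7 + 1 remapping against SQL Server's default
# DATEFIRST 7 numbering, where Sunday = 1 ... Saturday = 7.
datefirst_7 = {"Sunday": 1, "Monday": 2, "Tuesday": 3, "Wednesday": 4,
               "Thursday": 5, "Friday": 6, "Saturday": 7}
remapped = {day: (dw + 5) % 7 + 1 for day, dw in datefirst_7.items()}
print(remapped)  # Monday -> 1, ..., Saturday -> 6, Sunday -> 7
```

The result is the ISO-8601 numbering the question asks for (Monday = 1 through Sunday = 7).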
This will do it:

```
SET DATEFIRST 1;
-- YOUR QUERY
```

Examples:

```
-- Sunday is first day of week
set datefirst 7;
select DATEPART(dw, getdate()) as weekday

-- Monday is first day of week
set datefirst 1;
select DATEPART(dw, getdate()) as weekday
```
SQL DATEPART(dw,date) need monday = 1 and sunday = 7
[ "sql", "sql-server" ]
Suppose I have a table with two relevant columns: a primary key and an amount. The 'amount' represents money and is one of two things: either a numeric value (e.g., 47.50) or the word 'unliquidated'. I want to use aggregate functions on this data set. The following works just fine when the criteria don't return any records with the 'unliquidated' amount:

```
select count(primary_key), sum(cast(amount as numeric))
from (table)
where (criteria)
```

However, if the (criteria) return any records that are 'unliquidated', then it throws the following error:

> Msg 8114, Level 16, State 5, Line 1
> Error converting data type varchar to numeric.

I would like my query to (a) count 'unliquidated' records in the 'count' function, and (b) treat 'unliquidated' as a zero for purposes of the sum function. As such, simply altering the criteria to exclude the unliquidated records doesn't work.
Although you state that the only non-numeric string in the column is 'unliquidated', I don't necessarily believe that. In any case, this should be safe:

```
select count(primary_key),
       sum(case when isnumeric(amount) = 1 then cast(amount as numeric) end)
from table
where (criteria);
```

`ISNUMERIC` also returns 1 when the string is a currency amount or written in scientific notation. In those cases the cast would still fail, but it sounds like that won't happen here. Note that you have to put the condition in a `case` statement or you might still get an error.

In SQL Server 2012+, you can also use `try_convert()`. And I would be inclined to use `money` instead of `numeric` as the destination data type.
Use `NULLIF`:

```
select count(primary_key),
       sum(cast(nullif(amount, 'unliquidated') as numeric))
from (table)
where (criteria)
```
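The `NULLIF` trick is easy to sanity-check in SQLite from Python. Note that SQLite coerces rather than errors on bad casts, so this only demonstrates the NULL-skipping arithmetic (the count includes all rows, the sum skips the NULLed-out one), not the SQL Server conversion error itself; the table and data here are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE claims (pk INT, amount TEXT)")
cur.executemany("INSERT INTO claims VALUES (?, ?)",
                [(1, '47.50'), (2, 'unliquidated'), (3, '10.00')])

row = cur.execute("""
SELECT COUNT(pk),
       SUM(CAST(NULLIF(amount, 'unliquidated') AS NUMERIC))
FROM claims
""").fetchone()
print(row)
```

`NULLIF(amount, 'unliquidated')` turns the sentinel string into NULL before the cast, and `SUM` ignores NULLs, so the 'unliquidated' row contributes 0 to the sum while `COUNT(pk)` still counts it.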
TSQL: How to cast varchar as numeric when some entries are text
[ "sql", "sql-server", "t-sql" ]
I'm trying to get the min time for each call. Each call can have several parts, all logged under the same call id. What I want to do is find the min time for each call id, but I'm getting duplicate values when I query it one way, and if I query it another way I just get a single min value. I want to end up being able to count all the calls by hour.

This is the query I have been playing with to try to get the unique call id and time, but it currently returns duplicates, e.g.:

```
callid 1 = 2014-07-04 16:37:22.043
callid 2 = 2014-07-04 16:37:23.370
```

What I want is just the min value per callid.

```
select t.callid,
       (select min(timein)
        from loggeddata t2
        where t2.callid = t.callid
          and t2.timein > t.timein
       ) as 'mintime'
from loggeddata t
```
You don't need a subquery for that; just use `GROUP BY` to define the grouping expression and add the aggregate as a column:

```
select t.callid,
       min(t.timein) as 'mintime'
from loggeddata t
GROUP BY t.callid
```
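A minimal runnable check of the `GROUP BY` version, using SQLite via Python with invented sample times (two parts for call 1, one for call 2):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE loggeddata (callid INT, timein TEXT)")
cur.executemany("INSERT INTO loggeddata VALUES (?, ?)",
                [(1, '2014-07-04 16:37:22'),
                 (1, '2014-07-04 17:02:10'),   # later part of the same call
                 (2, '2014-07-04 16:37:23')])

rows = cur.execute("""
SELECT callid, MIN(timein) AS mintime
FROM loggeddata
GROUP BY callid
ORDER BY callid
""").fetchall()
print(rows)
```

Each callid appears exactly once, with its earliest time: no duplicates, no correlated subquery.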
```
;WITH CTE AS
(
    select t.callid,
           (select min(timein)
            from loggeddata t2
            where t2.callid = t.callid
              and t2.timein > t.timein
           ) as 'mintime'
    from loggeddata t
)
Select c.callid, c.mintime
from (select callid,
             mintime,
             row_number() over (partition by mintime order by callid desc) as seqnum
      from CTE
     ) c
where seqnum = 1
```
Get MIN value for each id
[ "sql", "sql-server", "time", "min" ]
Let's say I have the following tables.

**Table A**

```
id  pk_id  name  value (varchar)
1   1      name  test name
2   1      city  los angeles
```

**Table B**

```
id  pk_id  name    value (int)
1   1      age     33
2   1      amount  30
```

Is it possible to get the following results?

```
name  test name
age   33
```

When I do

```
select tablea.*, tableb.*
from tablea, tableb
where tablea.pk_id = 1 and tableb.pk_id = 1
```

I get one single row with all the columns.
What you want is a `UNION`, but with one particular catch: since your fields have different types, and `UNION` requires that corresponding fields have the same type, you need a function to convert the numeric column to a varchar column. As you did not specify what your RDBMS is, I will do it for Oracle, but the basic idea is the same elsewhere:

```
select name, value
from tablea
where pk_id = 1
UNION ALL
select name, TO_CHAR(value)
from tableb
where pk_id = 1
```

It should give you what you need.
Yes.

```
SELECT name, value
FROM tablea
WHERE id = <somevalue>
UNION ALL
SELECT name, CONVERT(varchar(10), value)
FROM tableb
WHERE pk_id = <somevalue>
```

You need to use the `UNION` keyword, but your columns must be the same data type.
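Both answers rely on the same two ingredients: `UNION ALL` plus an explicit conversion of the `int` column to text. Here is a runnable SQLite sketch from Python, using `CAST(... AS TEXT)` in place of `CONVERT`/`TO_CHAR`, with the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE tablea (id INT, pk_id INT, name TEXT, value TEXT);
CREATE TABLE tableb (id INT, pk_id INT, name TEXT, value INT);
INSERT INTO tablea VALUES (1, 1, 'name', 'test name'), (2, 1, 'city', 'los angeles');
INSERT INTO tableb VALUES (1, 1, 'age', 33), (2, 1, 'amount', 30);
""")

rows = cur.execute("""
SELECT name, value FROM tablea WHERE pk_id = 1
UNION ALL
SELECT name, CAST(value AS TEXT) FROM tableb WHERE pk_id = 1
""").fetchall()
print(rows)
```

The stacked result has one (name, value) row per attribute from either table, all as text, instead of one wide row from the cross join.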
Getting rows from tables with same columns, but a different type
[ "sql" ]
How do I convert a string to a date type in SQL Server 2008 R2? My string is formatted `dd/mm/yyyy`. I tried this:

```
SELECT CAST('01/08/2014' AS DATE)
```

But that interprets the string in `mm/dd/yyyy` format.
You need the `convert` function, where you can specify a format code: ``` select convert(datetime, '01/08/2014', 103) ``` The `103` means dd/mm/yyyy. See the [docs](http://msdn.microsoft.com/en-nz/library/ms187928.aspx).
Dateformat. ``` SET DATEFORMAT DMY ; SELECT cast('01/08/2014' as date) ; ``` Convert. ``` SELECT convert(date, '01/08/2014', 103 ) ; ``` And for completeness, SQL Server 2012 and later has the following. ``` SELECT parse('01/08/2014' as date using 'en-NZ' ) ; ```
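For intuition, the same ambiguity exists outside SQL Server: a `dd/mm/yyyy` string has to be paired with an explicit format, which is what style `103` does for `CONVERT` in T-SQL. A small illustrative sketch in Python:

```python
from datetime import datetime

# Illustrative only: an explicit format string resolves the dd/mm vs mm/dd
# ambiguity, just as style 103 does for CONVERT in T-SQL.
d = datetime.strptime("01/08/2014", "%d/%m/%Y").date()
print(d.isoformat())  # 2014-08-01
```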
Convert string to date in specific format
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "date-parsing", "" ]
Following is my table ``` CREATE TABLE tblnull ( tnull integer ) ``` For example, `tblnull` contains `3652` rows and all rows are `NULL`. So how can I delete only `3452` rows from `tblnull` on `PostgreSQL` and `SQL-Server`?
In Postgres, you can do this via the hidden [`ctid`](http://www.postgresql.org/docs/current/interactive/ddl-system-columns.html) column: ``` DELETE FROM tblnull WHERE ctid IN (SELECT ctid FROM tblnull LIMIT 3452) ```
Try the following query on SQL Server: ``` DELETE TOP (3452) FROM tblnull; ```
how to delete fixed number of rows from a table with null values in PostgreSQL
[ "", "sql", "sql-server", "postgresql", "" ]
I was trying to take the last row from my customer table. While executing the following query: ``` mystring = "SELECT TOP 1 customer_id FROM customer ORDER BY customer_id DESC" openconnection() cmd = New Odbc.OdbcCommand(mystring, myconnection) MsgBox(cmd.ExecuteScalar()) closeconnection() ``` an error message is shown as follows: **"check the manual that corresponds to your MySQL server version for the right syntax to use near '1 customer\_id FROM customer ORDER BY customer\_id DESC' at line 1"** Can anyone tell me why this error occurs and what changes I need to make to my query to get the expected result?
This is one of the "few" (I expect a bit of backlash on that one) differences between T-SQL and MySQL. While in TSQL, you'd be spot on by calling `SELECT TOP n ...`, in MySQL you should use the [`LIMIT`](http://dev.mysql.com/doc/refman/5.0/en/limit-optimization.html) keyword to specify that you want only `n` number of rows. In your specific case, you'd want to call a statement similar to this: ``` SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 1 ``` --- **This isn't really related to your question**, but just since I gather you haven't used `LIMIT` before, I'll throw in a bit of extra credit. `LIMIT` is also how you write paging expressions in MySQL, so if you ever find yourself needing to, say, display only pages of ten, you could write a query such as this: ``` SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 0, 10 ``` then for the second page, ``` SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 10, 10 ``` Again, not related to your question, everything above the line should do that, but I thought it might be useful to have as knowledge in the back of your mind.
Try to use LIMIT in mysql instead of `TOP` like ``` SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 1 ```
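Both the single-row case and the paging variant from the accepted answer can be demonstrated quickly. This sketch runs on SQLite via Python, whose `LIMIT`/`OFFSET` behaves like MySQL's; the data is made up (ids 1 through 25).

```python
import sqlite3

# Sketch of TOP-vs-LIMIT and LIMIT-based paging on SQLite (LIMIT/OFFSET
# behaves like MySQL's). Illustrative data: customer_id 1..25.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER)")
conn.executemany("INSERT INTO customer VALUES (?)", [(i,) for i in range(1, 26)])

# Equivalent of T-SQL's SELECT TOP 1 ...:
last = conn.execute(
    "SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 1"
).fetchone()[0]
print(last)  # 25

# Second page of ten (MySQL's LIMIT 10, 10 is the same as LIMIT 10 OFFSET 10):
page2 = [r[0] for r in conn.execute(
    "SELECT customer_id FROM customer ORDER BY customer_id DESC LIMIT 10 OFFSET 10"
)]
print(page2)
```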
Error while retriving the last row of a table
[ "", "mysql", "sql", "" ]
I want to display all data in the left table only once and display matching data on the right table. table 1 ``` id name 1 abc 2 cdr 3 sdf 4 asew 5 sda 6 few 7 asdq 8 uio 9 kjh ``` table 2 ``` id table1_id table2_name 1 1 test1 2 2 test1 3 3 test1 4 4 test1 5 1 test2 6 2 test2 7 3 test2 8 4 test2 ``` I want to display all the data in table1 only once and display all matching data from table2 where table2\_name is equal to 'test1' table1\_2 ``` table1_name table2_name abc test1 cdr test1 sdf test1 asew test1 sda null few null asdq null uio null kjh null ```
``` select t1.name, t2.min_table2_name from table1 t1 left join (select t2.table1_id, min(t2.table2_name) min_table2_name from table2 t2 group by t2.table1_id ) t2 on t1.id = t2.table1_id ``` or if you want to only show a certain table2\_name value ``` select t1.name, t2.min_table2_name from table1 t1 left join (select t2.table1_id, min(t2.table2_name) min_table2_name from table2 t2 where t2.table2_name = 'test1' group by t2.table1_id ) t2 on t1.id = t2.table1_id ```
You can do this in SQL, but it is not a simple `left outer join`. The key is reducing `table2` to get only the first occurrence for each `id`. The following uses the `substring_index()`/`group_concat()` trick: ``` select t1.name as table1_name, t2.table2_name from table1 t1 left outer join (select table1_id, substring_index(group_concat(table2_name order by id asc), ',', 1) as table2_name from table2 t2 group by table1_id ) t2 on t2.table1_id = t1.id; ```
how to display all data in the left table only once and display matching data on the right table
[ "", "mysql", "sql", "left-join", "" ]
**Yes, I know the table names have been changed, as have the field names, so it may be a bit confusing.** I am trying to do a batch update. Effectively I want to pass in a list of XXXX and YYYY values that together make a composite key, and update all records that match the list of composite keys. The statement below will update a single record where all conditions are met, but I would like to modify the where clause to something along the lines of an IN statement that will allow me to update multiple records... I was thinking of adjusting it to be ``` IN ('1','2','3') IN ('5','6','7') ``` but this poses another issue: with the concept of a composite key, the pairs 1-5, 2-6, 3-7 are valid, however any other combination would be invalid... Is there any way I can accomplish this by inserting an IN statement or equivalent in place of XXXX and YYYY? ``` UPDATE Table SET id = 99, status_id = 45, change_date = GetDate(), reason = (SELECT Meaning FROM T2 WHERE code = @StatusCode), d_id = T2.d_id FROM Job JOIN T2 ON gid = T2.gid AND j_id = T2.Ref_id WHERE T2.Status = 0 /**** this and is supposed to match a composite key ****/ AND d_id = XXXX AND [uid] = YYYY ```
``` UPDATE t SET id = 99 FROM ( VALUES ('xxx', 'yyy'), ('zzz', 'ttt') ) v (x, y) JOIN mytable t ON t.d_id = x AND t.[uid] = y ```
Another way is to concatenate both columns and use that to compare the pair as if it was a single column: ``` where convert(varchar, d_id) + ' ' + convert(varchar, [uid]) in ('1 5', '2 6', '3 7') ``` <http://sqlfiddle.com/#!3/aecea/2>
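A third option, on engines that support row values (SQLite 3.15+, MySQL, PostgreSQL), is to match the composite key as a tuple `IN` list, which pairs the values exactly without string concatenation. A hedged sketch on SQLite via Python; all names and data here are illustrative, not from the original schema.

```python
import sqlite3

# Sketch of a composite-key update via row values: only the listed
# (d_id, uid) pairs are touched. Names and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (d_id INTEGER, uid INTEGER, id INTEGER)")
conn.executemany("INSERT INTO job VALUES (?, ?, ?)",
                 [(1, 5, 0), (2, 6, 0), (3, 7, 0), (1, 6, 0)])
conn.execute("""
    UPDATE job SET id = 99
    WHERE (d_id, uid) IN (VALUES (1, 5), (2, 6), (3, 7))
""")
rows = conn.execute("SELECT d_id, uid, id FROM job ORDER BY d_id, uid").fetchall()
print(rows)  # the unlisted pair (1, 6) is left alone
```

Note that the invalid cross-combinations (such as 1-6) are never matched, which is exactly the problem with two independent `IN` lists.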
Multi row update statement where record = composite key
[ "", "sql", "sql-server", "" ]
I have a Stored Procedure that retrieves employee daily summary intime - outtime: ``` SELECT ads.attendancesumid, ads.employeeid, ads.date, ads.day, -- month day number ads.intime, ads.outtime --employee shift intime and outtime ss.intime, ss.outtime FROM employee_attendance_daily_summary ads JOIN employee emp ON emp.employeeid = ads.employeeid JOIN setup_shift ss ON ss.shiftcode = emp.shiftcode AND DATEPART(dw, ads.date) = ss.day WHERE ads.employeeid = 4 -- just to filter one employee ``` The result of the query is something like this: ![enter image description here](https://i.stack.imgur.com/35R0j.png) Each `day` is repeated **3 times** because table `setup_shift` (employee shifts) has: > Monday to Sunday for 3 different shift types: **DAY**, **AFTERNOON** and **NIGHT**. Here is the same info but with the shift type column: ![enter image description here](https://i.stack.imgur.com/ItNQG.png) > What I need is to **ONLY** get 1 row per day but with the closest employee shift depending on the `intime` and `outtime`. So the desire result should looks like this: ![enter image description here](https://i.stack.imgur.com/Ha3Nr.png) Any clue on how to do this? Appreciate it in advance. I have also these case where `intime` is `00:00:00` but `outtime` has a value: ![enter image description here](https://i.stack.imgur.com/MLGo9.png) **UPDATE:** HERE IS THE SQL FIDDLE <http://sqlfiddle.com/#!6/791cb/7>
``` select ads.attendancesumid, ads.employeeid, ads.date, ads.day, ads.intime, ads.outtime, ss.intime, ss.outtime from employee_attendance_daily_summary ads join employee emp on emp.employeeid = ads.employeeid join setup_shift ss on ss.shiftcode = emp.shiftcode and datepart(dw, ads.date) = ss.day where ads.employeeid = 4 and ((abs(datediff(hh, cast(ads.intime as datetime), cast(ss.intime as datetime))) between 0 and 2) or (ads.intime = '00:00:00' and ss.intime = (select min(x.intime) from setup_shift x where x.shiftcode = ss.shiftcode and x.intime > (select min(y.intime) from setup_shift y where y.shiftcode = x.shiftcode)))) ```
This would be much easier if the times were in seconds after midnight, rather than in a `time`, `datetime`, or string format. You can convert them using the formula: ``` select datepart(hour, intime) * 3600 + datepart(minute, intime) * 60 + datepart(second, intime) ``` (Part of this is just my own discomfort with all the nested functions needed to handle other data types.) So, let me assume that you have a series of similar columns measured in seconds. You can then approach this problem by taking the overlap with each shift and choosing the shift with the largest overlap. ``` with t as ( <your query here> ), ts as ( select t.*, (datepart(hour, ads.intime) * 3600 + datepart(minute, ads.intime) * 60 + datepart(second, ads.intime) ) as e_intimes, . . . from t ), tss as ( select ts.*, (case when e_intimes >= s_outtimes then 0 when e_outtimes <= s_intimes then 0 else (case when e_outtimes < s_outtimes then e_outtimes else s_outtimes end) - (case when e_intimes > s_intimes then e_intimes else s_intimes end) end) as overlap from ts ) select tss.* from (select tss.*, row_number() over (partition by employeeid, date order by overlap desc ) as seqnum from tss ) tss where seqnum = 1; ```
SQL Match employee intime (punch time) with employee shift
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
for an assignment I have to add three columns of a table (Basic, Additional Labour and Additional Parts) to make an overall charge. I then also have to determine the minimum and maximum overall charge found as well as the average. I have been successful in writing an Overall Charge query ``` SELECT Service.ServiceId, Sum([BasicCharges]+[AdditionalLabourCharges]+ [AdditionalPartCharges]) AS [OverallCharge] FROM Service ``` However, I cannot get my head around adding these min, max and avg statements into this. My draft looks like... but does not work ``` SELECT MIN(OverallCharge) AS [MinOverallCharge], MAX(OverallCharge) AS [MaxOverallCharge], AVG(OverallCharge) AS [AverageOverallCharge] FROM Service WHERE Service (SELECT Sum([S.BasicCharges]+[S.AdditionalLabourCharges]+[S.AdditionalPartCharges]) AS [OverallCharge] FROM Service AS S); ``` Any help would be greatly appreciated. Thanks Nic
Don't forget the `GROUP BY`: ``` SELECT MIN(OverallCharge) AS [MinOverallCharge], MAX(OverallCharge) AS [MaxOverallCharge], AVG(OverallCharge) AS [AverageOverallCharge] FROM (SELECT ServiceId, Sum([S.BasicCharges]+[S.AdditionalLabourCharges]+ [S.AdditionalPartCharges]) AS [OverallCharge] FROM Service AS S GROUP BY ServiceId) dt; ``` The subquery generates a table which contains `OverallCharge` for each `ServiceId`, then the main query gets your `min`, `max`, `avg`.
``` SELECT --get the min/avg/max of all charges MIN(OverallCharge) AS [MinOverallCharge], MAX(OverallCharge) AS [MaxOverallCharge], AVG(OverallCharge) AS [AverageOverallCharge] FROM ( -- calculate OverallCharge for each ServiceId SELECT S.ServiceId, Sum([S.BasicCharges]+[S.AdditionalLabourCharges]+[S.AdditionalPartCharges]) AS [OverallCharge] FROM Service AS S GROUP BY S.ServiceId ) dt; ```
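The derived-table pattern in both answers can be verified quickly. This sketch runs on SQLite via Python; the charge values are made up (service 1 totals 40, service 2 totals 50).

```python
import sqlite3

# Quick check of the derived-table pattern: compute OverallCharge per
# ServiceId in a subquery, then aggregate over it. Data is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Service (ServiceId INTEGER, BasicCharges REAL,
                          AdditionalLabourCharges REAL, AdditionalPartCharges REAL);
    INSERT INTO Service VALUES (1, 10, 5, 5), (1, 20, 0, 0), (2, 30, 10, 10);
""")
row = conn.execute("""
    SELECT MIN(OverallCharge), MAX(OverallCharge), AVG(OverallCharge)
    FROM (SELECT ServiceId,
                 SUM(BasicCharges + AdditionalLabourCharges + AdditionalPartCharges)
                     AS OverallCharge
          FROM Service
          GROUP BY ServiceId) dt
""").fetchone()
print(row)  # (40.0, 50.0, 45.0)
```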
Issue with Subquerying
[ "", "sql", "ms-access", "subquery", "" ]
I have access to a database and I need to know the Partition Scheme definitions in the database. i.e. I need to know the partition scheme name, which Partition function is it using, what file groups are the partitions assigned, etc... For example someone creates a partition scheme as so (taken from msdn): ``` CREATE PARTITION SCHEME myRangePS1 AS PARTITION myRangePF1 TO (test1fg, test2fg, test3fg, test4fg); ``` Then I want the name: myRangePS1, the function: myRangePF1, and the partitions: (test1fg, test2fg, test3fg, test4fg), Whether it is partition ALL or not How would I go about this using SQL statements only? I can query the names and some data about partitions by using the system view sys.partition\_scheme, but it is not enough. The below shows a similar solution on finding the definition of Partition functions: <http://social.msdn.microsoft.com/forums/sqlserver/en-US/d0ce92e3-bf48-455d-bd89-c334654d7e97/how-to-find-partition-function-text-applied-to-a-table>
I have modified knkarthick24's first query to show Partition function values associated to each file group: ``` select distinct ps.Name AS PartitionScheme, pf.name AS PartitionFunction,fg.name AS FileGroupName, rv.value AS PartitionFunctionValue from sys.indexes i join sys.partitions p ON i.object_id=p.object_id AND i.index_id=p.index_id join sys.partition_schemes ps on ps.data_space_id = i.data_space_id join sys.partition_functions pf on pf.function_id = ps.function_id left join sys.partition_range_values rv on rv.function_id = pf.function_id AND rv.boundary_id = p.partition_number join sys.allocation_units au ON au.container_id = p.hobt_id join sys.filegroups fg ON fg.data_space_id = au.data_space_id where i.object_id = object_id('TableName') ``` This is the query I was looking for and I hope other people can make use of this!
Please try this query: 1) ``` select ps.Name AS PartitionScheme, pf.name AS PartitionFunction,fg.name AS FileGroupName from sys.indexes i JOIN sys.partitions p ON i.object_id=p.object_id AND i.index_id=p.index_id join sys.partition_schemes ps on ps.data_space_id = i.data_space_id join sys.partition_functions pf on pf.function_id = ps.function_id join sys.allocation_units au ON au.container_id = p.hobt_id join sys.filegroups fg ON fg.data_space_id = au.data_space_id where i.object_id = object_id('TableName') ``` or for more detailed information use the below query (from the SQL Server 2008 Internals book): 2) ``` SELECT ISNULL(quotename(ix.name),'Heap') as IndexName ,ix.type_desc as type ,prt.partition_number ,prt.data_compression_desc ,ps.name as PartitionScheme ,pf.name as PartitionFunction ,fg.name as FilegroupName ,case when ix.index_id < 2 then prt.rows else 0 END as Rows ,au.TotalMB ,au.UsedMB ,case when pf.boundary_value_on_right = 1 then 'less than' when pf.boundary_value_on_right is null then '' else 'less than or equal to' End as Comparison ,fg.name as FileGroup ,rv.value FROM sys.partitions prt inner join sys.indexes ix on ix.object_id = prt.object_id and ix.index_id = prt.index_id inner join sys.data_spaces ds on ds.data_space_id = ix.data_space_id left join sys.partition_schemes ps on ps.data_space_id = ix.data_space_id left join sys.partition_functions pf on pf.function_id = ps.function_id left join sys.partition_range_values rv on rv.function_id = pf.function_id AND rv.boundary_id = prt.partition_number left join sys.destination_data_spaces dds on dds.partition_scheme_id = ps.data_space_id AND dds.destination_id = prt.partition_number left join sys.filegroups fg on fg.data_space_id = ISNULL(dds.data_space_id,ix.data_space_id) inner join (select str(sum(total_pages)*8./1024,10,2) as [TotalMB] ,str(sum(used_pages)*8./1024,10,2) as [UsedMB] ,container_id from sys.allocation_units group by container_id) au on au.container_id = prt.partition_id WHERE prt.OBJECT_ID = object_id(N'dbo.test') order by ix.type_desc; ```
Find Partition Schema Definitions in SQL Server Database
[ "", "sql", "sql-server", "partitioning", "database-partitioning", "partition", "" ]
I'm adding data to a db table one record at a time. After adding each record I need to check if the same item is already in the database. Whilst there may be multiple entries for the item, all column data other than 'reference' may be different in each record. So far I have the following: ``` CREATE PROCEDURE [dbo].[insertAcListingsAircraftData] .... AS INSERT INTO tblRes (....) VALUES (.....) DELETE FROM tblRes WHERE Id NOT IN ( SELECT MAX(Id) FROM tblRes GROUP BY reference ) ``` So, the aim is to insert the record then group all records together via the 'name' column. Then return the id of the most recently entered row for that named item. Assuming that I am on the right track with the code above, what is the best way to grab the Id of the inserted row, and then use it in the delete statement? Am I heading in the right direction? ## Edit I've changed the delete statement to this: ``` DELETE FROM tblRes WHERE SCOPE_IDENTITY NOT IN ( SELECT MAX(Id) FROM tblRes GROUP BY reference ) ``` Will this now do the job?
``` DELETE FROM tblRes WHERE Id NOT IN ( SELECT MAX(Id) FROM tblRes GROUP BY reference ) ```
If the Id column is an identity then you can use @@IDENTITY to return the Id of the row that was inserted last. For example: ``` CREATE TABLE #MyTable (PK INT IDENTITY(1,1), SomeValue VARCHAR(50)); INSERT INTO #MyTable (SomeValue) SELECT 'xxx'; SELECT @@IDENTITY; INSERT INTO #MyTable (SomeValue) SELECT 'yyy'; SELECT @@IDENTITY; ``` Returns 1, then 2. So, to use this in your original stub: ``` CREATE PROCEDURE [dbo].[insertAcListingsAircraftData] .... AS INSERT INTO tblRes (....) VALUES (.....) DECLARE @Id INT; SELECT @Id = @@IDENTITY; DECLARE @Reference NVARCHAR(MAX); --or whatever data type it is SELECT @Reference = Reference FROM tblRes WHERE Id = @Id; DELETE FROM tblRes WHERE Reference = @Reference AND Id != @Id; ``` ...or something along those lines?
Insert record then delete older duplicates
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "" ]
I have been scratching my head on this for hours now. I am using PHP and MySQLi. I have two tables: table1 and table2. The table1 table has: ``` uid name phoneNumber ``` And the table2 table has ``` id phoneNumber ``` The table2 table contains more than 100,000 entries, compared to table1 with about 1,000 entries. I am looking for an efficient query to scan through table2 and count the number of records that have the phoneNumber value present in table1 (phoneNumber). I will really appreciate any help or direction in solving this query problem Regards :)
``` SELECT count(t2.id) FROM table2 t2 WHERE exists (select 1 FROM table1 t1 WHERE t1.phoneNumber = t2.phoneNumber) ```
Could you give us a closer explanation of how your tables look? You need a foreign key for this to be possible (that is, an id or key that is the same in both tables so they can be LEFT JOINed on that column), then a little `WHERE t2.col IS NOT NULL` and you are done. But I cannot give you a sample query until I am sure what your db structure really looks like, because the field "phoneNumber" looks somewhat redundant.
Fastest way to count from a table where a certain field exists in another table?
[ "", "mysql", "sql", "" ]
I am trying to write a SQL Select statement that will return all results that begin with an 'a' or an 'A'. (In other words, not case sensitive.) The way I think I should write this is as follows: ``` SELECT * FROM mytable WHERE name LIKE 'a%' OR 'A%'; ``` No results are returned. Can someone tell me what I'm doing wrong?
Although I like MikeB's answer, you would get better performance by using an OR: ``` SELECT * FROM mytable WHERE name LIKE 'a%' OR name like 'A%' ``` This is because you wouldn't apply an `upper` on every `name`, which can't use the index.
Try the following SQL statement ``` SELECT * FROM mytable WHERE upper(name) LIKE 'A%'; ```
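Both forms from the answers can be checked side by side. This sketch runs on SQLite via Python, with made-up data; one caveat is that SQLite's own `LIKE` is already case-insensitive for ASCII, so `UPPER()` is shown here only to mirror the portable pattern.

```python
import sqlite3

# Both patterns from the answers: UPPER(name) LIKE 'A%' and the two-branch
# OR form. SQLite stands in; data is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (name TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [("apple",), ("Avocado",), ("banana",)])
upper_form = [r[0] for r in conn.execute(
    "SELECT name FROM mytable WHERE UPPER(name) LIKE 'A%' ORDER BY name")]
or_form = [r[0] for r in conn.execute(
    "SELECT name FROM mytable WHERE name LIKE 'a%' OR name LIKE 'A%' ORDER BY name")]
print(upper_form)  # binary collation sorts 'Avocado' before 'apple'
print(or_form)
```

Both queries return the same two rows; the performance argument in the answer above is that the `OR` form leaves `name` untouched and therefore can still use an index on it.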
How do I return results that begin with a or A?
[ "", "sql", "" ]
I used the following code to create a table. The table is created but a warning is shown: ``` No index defined! ``` I used the following SQL command to create the table ``` CREATE TABLE IF NOT EXISTS `test` ( `path` varchar(50) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ``` I used the following PHP code to insert multiple image paths into the database, but each path is stored in a new row. How do I store them in a single row in the SQL table? ``` if ($_FILES) { $upload = new Upload_Rename(); $destination = 'upload'; $paths=$upload->multiple_files($_FILES['myfile'], $destination); //Fill this with correct information $mysql_hostname = ""; $mysql_user = ""; $mysql_password = ""; $mysql_database = ""; $tbl_name=""; $pathfield_name='path'; // $mysql= new mysqli($mysql_hostname,$mysql_user,$mysql_password,$mysql_database); foreach($paths as $path){ $query='INSERT INTO `'.$tbl_name.'` (`'.$pathfield_name.'`) VALUES ("'.$mysql->escape_string($path).'");'; $mysql->query($query);} $mysql->close(); } ?> <form method="post" enctype="multipart/form-data"> <?php for ($i = 0; $i < 10; $i++): ?> file: <input type="file" name="myfile[]"><br> <?php endfor; ?> <input type="submit"> ```
You should have a PK as an AUTO_INCREMENT field in your table; that gives better performance for querying and indexing. ``` CREATE TABLE IF NOT EXISTS `test` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `path` varchar(50) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; ``` Also, the path is only 50 chars; why do you expect multiple paths in one row? What is the purpose?
1. If you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index. 2. If you do not define a PRIMARY KEY for your table, MySQL picks the first UNIQUE index that has only NOT NULL columns as the primary key and InnoDB uses it as the clustered index. 3. If the table has no PRIMARY KEY or suitable UNIQUE index, InnoDB internally generates a hidden clustered index on a synthetic column containing row ID values. The rows are ordered by the ID that InnoDB assigns to the rows in such a table. The row ID is a 6-byte field that increases monotonically as new rows are inserted. Thus, the rows ordered by the row ID are physically in insertion order. So in case of 3 the synthetic index will be created. And this warning just provides the understanding, that no special index has been defined for the table, that can cause in the future issues with sorting, searching e.t.c. queries. <http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html>
create index in SQL table shows no index defined
[ "", "mysql", "sql", "" ]
I have this query: ``` SELECT distinct a.Project_Manager, a.Project_ID, a.Topic, a.Stage, a.Presentation_Date, a.Presentation_Notes, a.Presentation_Status, a.Category, a.Postponed_Reason, a.Postponed_Manager FROM GSC_Presentation_Info a inner join ( SELECT Project_ID as Project_ID, MAX(Presentation_Date) as Max_Presentation_Date from GSC_Presentation_Info group by Project_ID ) b ON a.Project_ID = b.Project_ID and a.Presentation_Date = b.Max_Presentation_Date ``` I really want to hide the records which are not linked with a recent presentation date. But Access keeps showing me the circular reference caused by alias `Project_ID` in query definition's `SELECT` list. I don't really know how to fix that. Tried this code: ``` SELECT Max(GSC_Presentation_Info.Presentation_Date) AS MaxOfPresentation_Date1, GSC_Presentation_Info.Project_ID AS Project_ID FROM GSC_Presentation_Info GROUP BY GSC_Presentation_Info.Project_ID; ``` Works for me, but I need other columns as well. But once I add them in within the group function, they will show the duplicate right away. Also tried with this code but it shows duplicates too: ``` SELECT * FROM GSC_Presentation_Info GPI, (SELECT Max(GSC_Presentation_Info.Presentation_Date) AS MaxOfPresentation_Date, GSC_Presentation_Info.Project_ID FROM GSC_Presentation_Info GROUP BY GSC_Presentation_Info.Project_ID) MVV WHERE GPI.Presentation_Date = MVV.Presentation_Date AND GPI.Project_ID = MVV.Project_ID; ```
You just need to remove the alias for your Project\_ID column - Access won't let you use an alias name that already exists as a column name or another alias name. ``` ...inner join ( SELECT Project_ID, MAX(Presentation_Date) as Max_Presentation_Date from GSC_Presentation_Info group by Project_ID ) b ... ```
I think this would work with the RANK() function as well. I am using Teradata SQL; the syntax might be slightly different for you. The table I created: ``` CREATE TABLE stack_test ( Project_ID VARCHAR(3) CHARACTER SET LATIN CASESPECIFIC, Project_Manager VARCHAR(20) CHARACTER SET LATIN CASESPECIFIC, Presentation_Date DATE FORMAT 'yyyy-mm-dd' ) UNIQUE PRIMARY INDEX stack_test_pk ( Project_ID, Presentation_Date ); ``` Records I inserted: ``` INSERT INTO stack_test VALUES ('123','Adam','2014-05-01'); INSERT INTO stack_test VALUES ('123','Adam','2014-05-02'); INSERT INTO stack_test VALUES ('123','Adam','2014-05-03'); INSERT INTO stack_test VALUES ('234','Leveen','2014-05-03'); INSERT INTO stack_test VALUES ('345','Sang','2014-03-01'); INSERT INTO stack_test VALUES ('345','Sang','2014-03-02'); INSERT INTO stack_test VALUES ('678','Liam','2014-05-19'); ``` SELECT statement to use: ``` SELECT Project_Manager, Project_ID, Presentation_Date, RANK() OVER (PARTITION BY Project_ID ORDER BY Presentation_Date DESC) presen_rank FROM stack_test QUALIFY presen_rank = 1; ``` Result that I got: ``` Project_Manager Project_ID Presentation_Date presen_rank -------------------- ---------- ----------------- ----------- Adam 123 2014-05-03 1 Leveen 234 2014-05-03 1 Sang 345 2014-03-02 1 Liam 678 2014-05-19 1 ``` Hope this works for you too. Since RANK() isn't working for you, try the following: ``` SELECT y.Project_Manager, y.Project_ID, y.Presentation_Date FROM ( SELECT Project_ID, MAX(Presentation_Date) AS Presentation_Date FROM stack_test GROUP BY Project_ID ) x INNER JOIN stack_test y ON (y.Project_ID = x.Project_ID AND y.Presentation_Date = x.Presentation_Date) GROUP BY y.Project_Manager, y.Project_ID, y.Presentation_Date; ``` This gives the result as follows. ``` Project_Manager Project_ID Presentation_Date -------------------- ---------- ----------------- Liam 678 2014-05-19 Sang 345 2014-03-02 Adam 123 2014-05-03 Leveen 234 2014-05-03 ```
Access SQL sorting the data with latest date
[ "", "sql", "ms-access", "inner-join", "" ]
Pardon the question title but I have table CUSTOMER(ACCT\_NUM,LRN,NAME,ADDRESS,CITY,STATE,COUNTRY) I have to write a query to pull records which have more than 1 (distinct) combination of name,address,city,state and country for a **same combination of acct\_num and LRN**. This is what I have tried but I am not sure if it is correct. I just want to group by acct\_num and LRN but I know group by wont allow me to exclude other columns. ``` select distinct name, address, state, country, city, COUNT(1) from CUSTOMER group by acct_num, LRN , name, address, state, country, city having COUNT(1) > 1 ``` Please help.
Creating some test data ``` DECLARE @Customer TABLE ( ACCT_NUM INT, LRN INT, name varchar(20), address varchar(100), state varchar(2), country varchar(100), city varchar(100) ) INSERT INTO @Customer VALUES ( 1, 1, 'Test1', 'Addr1', 'FL', 'USA', 'Tampa' ), ( 1, 1, 'Test1', 'Addr2', 'FL', 'USA', 'Tampa' ), ( 1, 1, 'Test1', 'Addr3', 'FL', 'USA', 'Tampa' ), ( 1, 1, 'Test1', 'Addr4', 'FL', 'USA', 'Tampa' ), ( 2, 1, 'Test2', 'Addr1', 'FL', 'USA', 'Tampa' ), ( 2, 1, 'Test2', 'Addr1', 'FL', 'USA', 'Tampa' ), ( 3, 1, 'Test3', 'Addr1', 'FL', 'USA', 'Tampa' ) ``` I use rank to figure out all of the distinct combinations (if they are equal, rank would be equal as well) ``` SELECT * FROM ( SELECT *, Rank() OVER (PARTITION BY c.ACCT_NUM, c.LRN ORDER BY c.Name, c.Address, c.State, c.Country, c.City) RK FROM @Customer c ) d WHERE d.RK > 1 ``` Output: ``` ACCT_NUM LRN name address state country city RK 1 1 Test1 Addr2 FL USA Tampa 2 1 1 Test1 Addr3 FL USA Tampa 3 1 1 Test1 Addr4 FL USA Tampa 4 ```
The `RANK() OVER` answer is correct but does fail to show the first address in the results. I would prefer to use an additional level of nesting to accomplish this. ``` SELECT * FROM ( SELECT *, COUNT(*) OVER (PARTITION BY acct_num, LRN) AS distinct_matches FROM ( SELECT acct_num, LRN, name, address, state, country, city FROM CUSTOMER GROUP BY acct_num, LRN, name, address, state, country, city ) AS unique_rows ) AS counted_unique_rows WHERE distinct_matches > 1 ; ```
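Both answers rely on window functions; here is a compact check of the `COUNT(*) OVER` variant, run on SQLite (3.25+) via Python, with the customer table trimmed to two attribute columns for brevity. Account 1 has two distinct combinations, account 2 has only one.

```python
import sqlite3

# COUNT(*) OVER variant: reduce to distinct combinations first, then keep
# acct_num/LRN pairs that have more than one. Table is a trimmed stand-in.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (acct_num INTEGER, lrn INTEGER, name TEXT, address TEXT);
    INSERT INTO customer VALUES
        (1, 1, 'Test1', 'Addr1'), (1, 1, 'Test1', 'Addr2'),
        (2, 1, 'Test2', 'Addr1'), (2, 1, 'Test2', 'Addr1');
""")
rows = conn.execute("""
    SELECT acct_num, lrn, name, address FROM (
        SELECT *, COUNT(*) OVER (PARTITION BY acct_num, lrn) AS combos
        FROM (SELECT DISTINCT acct_num, lrn, name, address FROM customer)
    ) WHERE combos > 1
""").fetchall()
print(sorted(rows))
```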
How to pull distinct rows based on combination of few columns of a table
[ "", "sql", "sql-server", "count", "group-by", "distinct", "" ]
I have a table like this and want to add a column so that for the same bpin it should be increase by 1 and for different it should be reset and again start by 1. ``` BPIN Name 101 A 101 B 101 C 102 D 102 E 103 F 103 G 103 H 103 I 104 K ``` Need OutPut like:- ``` BPIN Name Value 101 A 1 101 B 2 101 C 3 102 D 1 102 E 2 103 F 1 103 G 2 103 H 3 103 I 4 104 K 1 ```
Using variables to do the count. The first sub query forces the order prior to the processing that adds the counter. The 2nd sub query initialises the variables. ``` SELECT BPIN, Name, @value:=IF(@bpin=BPIN, @value + 1, 1) AS value, @bpin:=BPIN FROM ( SELECT BPIN, Name FROM some_table ORDER BY BPIN, Name ) sub0 CROSS JOIN (SELECT @value:=0, @bpin:=0) sub1 ``` SQL fiddle for it:- <http://www.sqlfiddle.com/#!2/79bfb/2>
Something like the below will work on MySQL: ``` set @no:=0; set @BPIN:=''; select BPIN, Name, @no:=case when @BPIN=BPIN then @no+1 else 1 end as value, @BPIN:=BPIN from your_table order by BPIN, Name; ```
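On MySQL 8+ (and most other modern engines) the session-variable trick can be replaced by `ROW_NUMBER()`, which restarts at 1 for each `BPIN`. A sketch on SQLite (3.25+, whose window-function syntax matches) via Python, with the question's data shortened:

```python
import sqlite3

# Window-function alternative to the variable trick: ROW_NUMBER() restarts
# at 1 per BPIN partition. Data follows the question (shortened).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (bpin INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(101, 'A'), (101, 'B'), (101, 'C'), (102, 'D'), (102, 'E')])
rows = conn.execute("""
    SELECT bpin, name,
           ROW_NUMBER() OVER (PARTITION BY bpin ORDER BY name) AS value
    FROM t ORDER BY bpin, name
""").fetchall()
print(rows)
```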
For the same value it should be incremented by 1 and different it should be reset and gives result
[ "", "mysql", "sql", "" ]
I'm struggling with the performance of one MS-SQL query, which I have to run to create a report in our ERP system. Hopefully you can help me with that query. Here is the query: "Original version": ``` SELECT artikel.artikelnummer, artikel.bezeichnung, SUM(positionen.anzahl), artikel.Einheit FROM artikel, auftrag, positionen INNER JOIN auftrag AS auftrag1 ON (auftrag1.auftrag = positionen.auftrag) INNER JOIN artikel AS artikel1 ON (positionen.artikel = artikel1.artikel) WHERE artikel.warengruppe = 2 OR artikel.warengruppe = 1234 OR artikel.warengruppe = 1235 OR artikel.warengruppe = 1236 OR artikel.warengruppe = 1237 OR artikel.warengruppe = 1239 OR artikel.warengruppe = 1240 OR artikel.warengruppe = 2139 AND auftrag.lieferscheinnr IS NOT NULL GROUP BY artikel.artikelnummer, artikel.bezeichnung,artikel.Einheit ``` "Translated version": ``` SELECT article.articlenr, article.description, SUM(positions.amount), article.unit FROM article, order, positions INNER JOIN order AS order1 ON (order1.order = positions.order) INNER JOIN article AS article1 ON (positions.article = article1.article) WHERE article.materialgroup = 2 OR article.materialgroup = 1234 OR article.materialgroup = 1235 OR article.materialgroup = 1236 OR article.materialgroup = 1237 OR article.materialgroup = 1239 OR article.materialgroup = 1240 OR article.materialgroup = 2139 AND order.dordernr IS NOT NULL GROUP BY article.articlenr, article.description,article.unit ``` We want to count the amount of ink across all of our delivery orders (DO). In the table "auftrag" I have all of the DO numbers and the order numbers. In the table "positionen" I have all positions of the several orders, including the right amount of ink bottles. In the table "artikel" I have all of the article details like description, bottle size and so on. The column "artikel.warengruppe" contains the right material groups which contain the ink. Now the problem is that the table "auftrag" contains 160,000, "artikel" 155,000 and "positionen" 570,000 entries. I aborted the query after one hour of runtime. So my question is: how can I optimize the query? My problem is that I can't change the ER model. Thank you very much in advance for your help. I hope you can understand my crappy English. ;)
If you phrase the `where` clause correctly, then performance will probably improve. I would suggest: ``` WHERE artikel.warengruppe in (2, 1234, 1235, 1236, 1237, 1239, 1240, 2139) and auftrag.lieferscheinnr IS NOT NULL; ``` Your logic is finding everything for the first 7 `warengruppe` values. Then it is also looking for `2139` where `lieferscheinnr IS NOT NULL`. Second, you need to fix your `from` clause. It should be something like: ``` FROM positionen INNER JOIN auftrag ON auftrag.auftrag = positionen.auftrag INNER JOIN artikel ON positionen.artikel = artikel.artikel ``` Just avoid commas in the `from` clause. Your query was doing a massive cartesian product of all the tables and *then* joining the tables back in. An incredible amount of work.
Apart from indexing, would something like this get you the results you need:

```
SELECT article1.articlenr, article1.description, SUM(positions.amount), article1.unit
FROM positions
INNER JOIN order AS order1 ON (order1.order = positions.order)
INNER JOIN article AS article1 ON (positions.article = article1.article)
WHERE EXISTS (SELECT *
              FROM article a
              WHERE a.materialgroup IN (2, 1234, 1235, 1236, 1237, 1239, 1240, 2139)
                AND a.article = article1.article)
  AND order1.dordernr IS NOT NULL
GROUP BY article1.articlenr, article1.description, article1.unit
```
Optimizing MS-SQL Query
[ "", "sql", "sql-server", "" ]
So I have a table of scores with a foreign key TeamID:

```
ID  TeamID  Score  Month
1   1       100    1
2   2       90     2
3   2       80     3
```

And in the Team table there are duplicates like this:

```
ID  TeamName
1   Team A
2   Team A
```

I want to update the TeamID field in the score table to the first occurrence in the team table, so I can safely delete the duplicates in the team table, for example to something like this:

```
ID  TeamID  Score  Month
1   1       100    1
2   1       90     2
3   1       80     3
```

Any ideas how to do that without manually updating each ID?
```
select t1.id, t2.id as newId
from team t1
join (select min(id) as id, TeamName
      from team
      group by TeamName) t2
  on t1.TeamName = t2.TeamName
```

The query returns, for each team id, the new id it should be replaced with. Use that mapping against the score table to update the ids.
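To actually apply the mapping, a correlated subquery works on most engines (MySQL also offers `UPDATE ... JOIN`). A runnable sketch on SQLite via Python, with table and column names taken from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE team  (id INTEGER PRIMARY KEY, TeamName TEXT);
CREATE TABLE score (ID INTEGER PRIMARY KEY, TeamID INTEGER, Score INTEGER, Month INTEGER);
INSERT INTO team  VALUES (1, 'Team A'), (2, 'Team A');
INSERT INTO score VALUES (1, 1, 100, 1), (2, 2, 90, 2), (3, 2, 80, 3);
""")
# point every score at the smallest team id sharing the same TeamName
con.execute("""
UPDATE score
SET TeamID = (SELECT MIN(t2.id) FROM team t2
              WHERE t2.TeamName = (SELECT TeamName FROM team WHERE id = score.TeamID))
""")
rows = con.execute("SELECT ID, TeamID FROM score ORDER BY ID").fetchall()
print(rows)  # [(1, 1), (2, 1), (3, 1)]
```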
Try this ``` UPDATE score SET TeamID = (SELECT ID from Team ORDER BY ID limit 1); ```
Update field that have duplicates on another table
[ "", "mysql", "sql", "" ]
In **Oracle**, if I have a table defined as …

```
CREATE TABLE taxonomy (
  key NUMBER(11) NOT NULL CONSTRAINT taxPkey PRIMARY KEY,
  value VARCHAR2(255),
  taxHier NUMBER(11)
);
ALTER TABLE taxonomy
  ADD CONSTRAINT taxTaxFkey FOREIGN KEY (taxHier)
  REFERENCES taxonomy(key);
```

With these values …

```
key  value  taxHier
0    zero   null
1    one    0
2    two    0
3    three  0
4    four   1
5    five   2
6    six    2
```

This query syntax …

```
SELECT value
FROM taxonomy
CONNECT BY PRIOR key = taxHier
START WITH key = 0;
```

Will yield …

```
zero
one
four
two
five
six
three
```

How is this done in **PostgreSQL**?
Use a `RECURSIVE` CTE in Postgres: ``` WITH RECURSIVE cte AS ( SELECT key, value, 1 AS level FROM taxonomy WHERE key = 0 UNION ALL SELECT t.key, t.value, c.level + 1 FROM cte c JOIN taxonomy t ON t.taxHier = c.key ) SELECT value FROM cte ORDER BY level; ``` Details and links to documentation in my previous answer: * [Does PostgreSQL have a pseudo-column like "LEVEL" in Oracle?](https://stackoverflow.com/q/22626394/330315) Or you can install the additional module `tablefunc` which provides the function [`connectby()`](https://www.postgresql.org/docs/current/tablefunc.html#TABLEFUNC-CONNECTBY-PARAMETERS) doing almost the same. See [Stradas' answer](https://stackoverflow.com/a/37846191/939860) for details.
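The CTE runs unchanged on SQLite, which shares Postgres's `WITH RECURSIVE` syntax, so it's easy to sanity-check. Note that `ORDER BY level` yields a by-level (breadth-first) ordering rather than Oracle's depth-first output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE taxonomy (key INTEGER PRIMARY KEY, value TEXT, taxHier INTEGER);
INSERT INTO taxonomy VALUES
  (0, 'zero', NULL), (1, 'one', 0), (2, 'two', 0), (3, 'three', 0),
  (4, 'four', 1), (5, 'five', 2), (6, 'six', 2);
""")
values = [r[0] for r in con.execute("""
WITH RECURSIVE cte AS (
    SELECT key, value, 1 AS level FROM taxonomy WHERE key = 0
    UNION ALL
    SELECT t.key, t.value, c.level + 1
    FROM cte c JOIN taxonomy t ON t.taxHier = c.key
)
SELECT value FROM cte ORDER BY level
""")]
print(values)  # 'zero' first, then the level-2 nodes, then level 3
```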
Postgres does have an equivalent to CONNECT BY. You will need to enable the module; it's turned off by default. It is called **tablefunc**. It supports some cool crosstab functionality as well as the familiar "**connect by**" and "**start with**". I have found it works much more eloquently and logically than the recursive CTE. If you can't get this turned on by your DBA, you should go with the way Erwin is doing it. It is robust enough to do the "bill of materials" type query as well.

Tablefunc can be turned on by running this command:

```
CREATE EXTENSION tablefunc;
```

Here is the list of connection fields, lifted from the official documentation:

```
Parameter: Description
relname: Name of the source relation (table)
keyid_fld: Name of the key field
parent_keyid_fld: Name of the parent-key field
orderby_fld: Name of the field to order siblings by (optional)
start_with: Key value of the row to start at
max_depth: Maximum depth to descend to, or zero for unlimited depth
branch_delim: String to separate keys with in branch output (optional)
```

You really should take a look at the docs page. It is well written and it will give you the options you are used to. (On the doc page, scroll down; it's near the bottom.)

[PostgreSQL "Connect by" extension](https://www.postgresql.org/docs/9.5/static/tablefunc.html)

Below is the signature of the function. There is a ton of potential, so I won't do it justice, but here is a snippet to give you an idea:

```
connectby(text relname, text keyid_fld, text parent_keyid_fld
          [, text orderby_fld ], text start_with, int max_depth
          [, text branch_delim ])
```

A real query will look like this. connectby_tree is the name of the table. The line starting with "AS" is how you name the columns. It does look a little upside down.
``` SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos', 'row2', 0, '~') AS t(keyid text, parent_keyid text, level int, branch text, pos int); ```
What is the equivalent PostgreSQL syntax to Oracle's CONNECT BY ... START WITH?
[ "", "sql", "oracle", "postgresql", "recursive-query", "connect-by", "" ]
I am still getting a weird error:

> The select list for the INSERT statement contains more items than the insert list. The number of SELECT values must match the number of INSERT columns.

Code:

```
INSERT INTO @tab (Phone)
select t2.Phone
from
(
    SELECT DISTINCT top 999 t3.Phone, MIN(t3.Ord)
    FROM
    (
        select Phone1 as Phone, Ord from @tabTemp
        union all
        select Phone2 as Phone, Ord from @tabTemp
    ) t3
    GROUP BY t3.Phone
    ORDER BY MIN(t3.Ord) asc, t3.Phone
) t2
```

The idea is to select all phone numbers from @tabTemp with their row order. Then I want to de-duplicate them and insert the distinct numbers into the table @tab. *Top 999* is here only for *order by* purposes, because I use this in a *function* (UDF). The structures are the following:

```
declare @tabTemp TABLE
(
    Phone1 varchar(128) NULL,
    Phone2 varchar(128) NULL,
    Ord int
);

declare @tab TABLE
(
    Phone varchar(max) NULL
);
```

**EDITED:**

**FULL CODE**

```
CREATE FUNCTION dbo.myFnc(@PID int, @VID int, @JID int, @ColumnNo int)
RETURNS @tab TABLE
(
    Phone varchar(max) NULL
)
AS
BEGIN
    if @PID is null and @VID is null and @JID is null return;
    if @ColumnNo is null or (@ColumnNo<>2 and @ColumnNo<>3 and @ColumnNo<>6) return;

    declare @catH int;
    set @catH = dbo.fncGetCategoryID('H','tt'); -- just returning int value
    declare @kvalP int;
    set @kvalP = dbo.fncGetCategoryID('P','te');
    declare @kvalR int;
    set @kvalR = dbo.fncGetCategoryID('R','te');

    declare @tabTemp TABLE
    (
        Phone1 varchar(128) NULL,
        Phone2 varchar(128) NULL,
        Ord int
    );

    -- finding parent subject + current one
    WITH subj AS(
        SELECT *
        FROM Subjekt
        WHERE (ID = @PID and @PID is not null)
           or (ID = @VID and @VID is not null)
           or (ID = @JID and @JID is not null)
        UNION ALL
        SELECT t.*
        FROM Subjekt t
        INNER JOIN subj r ON r.ID = t.ID
    )
    INSERT INTO @tabTemp (Phone1,Phone2)
    (select (case when o.TYP1=@catH then o.TEL1 else null end) Phone1
           ,(case when o.TYP2=@catH then o.TEL2 else null end) Phone2
           ,so.POR_C
     from subj s
         ,SubjektPerson so
         ,Persons o
         ,recSetup idS
         ,recSetup idSO
         ,recSetup idO
     where 1=1
       and idO.isValid=1 and idSO.isValid=1 and idS.isValid=1
       and idSO.ID0=so.ID and idS.ID0=s.ID and idO.ID0=o.ID
       and so.ID_PERSON=o.ID and so.ID_SUBJECT=s.ID
       and (o.TYP=@kvalP or o.TYP=@kvalR)
    )

    INSERT INTO @tab (Phone)
    select t2.Phone
    from
    (
        SELECT DISTINCT top 999 t3.Phone, MIN(t3.Ord)
        FROM
        (
            select Phone1 as Phone, Ord from @tabTemp
            union all
            select Phone2 as Phone, Ord from @tabTemp
        ) t3
        GROUP BY t3.Phone
        ORDER BY MIN(t3.Ord) asc, t3.Phone
    ) t2

    RETURN
END
```
You've focussed on the wrong insert. This is the one with the mismatch: ``` INSERT INTO @tabTemp (Phone1,Phone2) (select (case when o.TYP1=@catH then o.TEL1 else null end) Phone1 ,(case when o.TYP2=@catH then o.TEL2 else null end) Phone2 ,so.POR_C from ... ``` Two columns in the insert list, 3 columns in the subselect. I can't tell just from the naming whether `POR_C` was meant to end up in the `Ord` column or not.
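The rule is mechanical: the insert list and the select list must have the same number of items, and every engine rejects a mismatch. A tiny reproduction on SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tabTemp (Phone1 TEXT, Phone2 TEXT, Ord INTEGER)")
try:
    # two columns in the insert list, three items in the SELECT list -> rejected
    con.execute("INSERT INTO tabTemp (Phone1, Phone2) SELECT 'a', 'b', 1")
    failed = False
except sqlite3.OperationalError as exc:
    failed = True
    print(exc)
# matching counts succeed
con.execute("INSERT INTO tabTemp (Phone1, Phone2, Ord) SELECT 'a', 'b', 1")
count = con.execute("SELECT COUNT(*) FROM tabTemp").fetchone()[0]
print(count)  # 1
```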
Not sure why you have distinct AND a group by on the same query. You could greatly simplify this. ``` INSERT INTO @tab (Phone) SELECT top 999 t3.Phone FROM ( select Phone1 as Phone, Ord from @tabTemp union all select Phone2 as Phone, Ord from @tabTemp ) t3 GROUP BY t3.Phone ORDER BY MIN(t3.Ord) asc, t3.Phone ``` Now for the error message you were receiving, it doesn't seem like it came from this block of code because the syntax is fine and the number of columns matches correctly. I suspect the error is somewhere earlier in your code. Also, you might want to consider using temp tables instead of table variables since it seems like you have a lot of rows in these tables.
INSERT INTO from SELECT: The select list for the INSERT statement contains more items than the insert list
[ "", "sql", "sql-server", "sql-insert", "insert-into", "" ]
I need an Eloquent/Fluent query to get a sum with a groupBy. So far I have tried:

```
$this->data['no_of_pages'] = Document::sum('no_of_pages')
    ->groupBy('users_editor_id');
```

Which of course gives me `call to member function groupBy() on non-object`, because `sum()` already executes the query and has the result ready by the time `groupBy()` is applied. Can anyone guide me?
``` Document::groupBy('users_editor_id') ->selectRaw('sum(no_of_pages) as sum, users_editor_id') ->pluck('sum','users_editor_id'); // originally lists(), which was deprecated in favour of pluck in 5.2 // and dropped completely in 5.3 // ->lists('sum','users_editor_id'); // returns array like this: array( users_editor_id => sum, ... ) ``` Or this way (which I wouldn't use, since it won't be actual ORM result): ``` Document::groupBy('users_editor_id') ->selectRaw('*, sum(no_of_pages) as sum') ->get(); // returns collection of Document pseudo models with additional sum field ```
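If it helps to see this without the ORM, the query Eloquent builds here is roughly the grouped aggregate below; it is run on SQLite via Python with a couple of made-up rows (the `documents` table name is an assumption based on the model name):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE documents (users_editor_id INTEGER, no_of_pages INTEGER)")
con.execute("INSERT INTO documents VALUES (1, 10), (1, 5), (2, 7)")
# the editor-id -> sum mapping that pluck('sum', 'users_editor_id') returns
pages = dict(con.execute("""
SELECT users_editor_id, SUM(no_of_pages) AS total
FROM documents
GROUP BY users_editor_id
""").fetchall())
print(pages)  # {1: 15, 2: 7}
```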
``` Document::Where('some_condition',true) ->select([DB::raw("SUM(debit) as total_debit"), DB::raw("SUM(credit) as total_credit")]) ->groupBy('id') ->get() ```
Laravel Eloquent: sum with groupBy
[ "", "sql", "eloquent", "" ]
Hi, I have a column called mix which is as follows:

```
120 102
201 300
234 212
11 21
```

The issue is that I want to extract the digits to the left of the space. I am trying that with SUBSTRING as shown below, and I wonder why it is not working.

```
select mix, SUBSTRING(mix, 1, CHARINDEX(' ', mix) - 1)
FROM tbl_xx
where CHARINDEX(mix, ' ') > 0
```
The string to find comes first in `CHARINDEX`. Change the argument order in the WHERE condition:

```
SELECT mix, SUBSTRING(mix, 1, CHARINDEX(' ', mix) - 1)
FROM tbl_xx
WHERE CHARINDEX(' ', mix) > 0
```
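The same trap exists elsewhere, just mirrored: SQLite's `instr(haystack, needle)` takes the string first and the substring second, the opposite of `CHARINDEX`. The corrected query translated to SQLite and run via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl_xx (mix TEXT)")
con.execute("INSERT INTO tbl_xx VALUES ('120 102'), ('201 300'), ('234 212'), ('11 21')")
left_parts = [r[0] for r in con.execute("""
    SELECT substr(mix, 1, instr(mix, ' ') - 1)
    FROM tbl_xx
    WHERE instr(mix, ' ') > 0
    ORDER BY rowid
""")]
print(left_parts)  # ['120', '201', '234', '11']
```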
Try this:

```
select mix, SUBSTRING(mix, 1, CHARINDEX(' ', mix) - 1)
FROM tbl_xx
where CHARINDEX(' ', mix) > 0
```

The arguments to CHARINDEX in your WHERE clause are reversed; check the above.
SQL Substring Issue
[ "", "sql", "sql-server", "" ]
I have a table similar to the one below:

```
+-----+-----------+--------+--------+
| key | timestamp | event1 | event2 |
+-----+-----------+--------+--------+
| 123 | 07:06     | 1      | 0      |
| 123 | 07:21     | 1      | 0      |
| 123 | 07:59     | 0      | 1      |
| 123 | 08:02     | 0      | 1      |
| 456 | 14:21     | 1      | 0      |
| 456 | 15:02     | 0      | 1      |
| ... | ...       | ...    | ...    |
+-----+-----------+--------+--------+
```

And I'm looking to get one row for each key, where the next two columns are the **minimum** timestamp where event1 = 1 and the **maximum** timestamp where event2 = 1, and then (fingers crossed) a delta between the two times:

```
+-----+--------+--------+-------+
| key | event1 | event2 | delta |
+-----+--------+--------+-------+
| 123 | 07:06  | 08:02  | 00:56 |
| 456 | 14:21  | 15:02  | 00:41 |
| ... | ...    | ...    | ...   |
+-----+--------+--------+-------+
```

So far I've tried a `max` function where `event1` = 1; however, I get the overall maximum value of `event1` alongside every key value, regardless of whether or not that key had that value at any point.
Or you could use multiple CTEs, depending on your RDBMS: [SQL Fiddle](http://sqlfiddle.com/#!3/41f9e/5) ``` with mins as ( select [key], min([timestamp]) as event1 from table1 where [event1] = 1 group by [key]) ,maxes as (select [key], max([timestamp]) as event2 from table1 where [event2] = 1 group by [key]) select mins.[key], mins.event1, maxes.event2 from mins inner join maxes on mins.[key] = maxes.[key] ``` Calculating the delta will depend on how you're actually storing the data.
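Both answers can be collapsed into a single conditional aggregation: `MIN` over the rows where event1 = 1 and `MAX` over the rows where event2 = 1. A runnable sketch on SQLite via Python; storing the times as text and computing the delta with `strftime` is an assumption, since the question doesn't say how the timestamps are stored:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (key INTEGER, ts TEXT, event1 INTEGER, event2 INTEGER);
INSERT INTO events VALUES
  (123, '07:06', 1, 0), (123, '07:21', 1, 0), (123, '07:59', 0, 1),
  (123, '08:02', 0, 1), (456, '14:21', 1, 0), (456, '15:02', 0, 1);
""")
rows = con.execute("""
SELECT key,
       MIN(CASE WHEN event1 = 1 THEN ts END) AS e1,
       MAX(CASE WHEN event2 = 1 THEN ts END) AS e2,
       (strftime('%s', MAX(CASE WHEN event2 = 1 THEN ts END))
      - strftime('%s', MIN(CASE WHEN event1 = 1 THEN ts END))) / 60 AS delta_min
FROM events
GROUP BY key
ORDER BY key
""").fetchall()
print(rows)  # [(123, '07:06', '08:02', 56), (456, '14:21', '15:02', 41)]
```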
Try this (using SQL Server syntax only for the *TimeFromParts(...)* and *Right(...)* functions for formatting, and standard SQL elsewhere):

```
select "key"
      ,right(cast(event1 as char(8)),5) as event1
      ,right(cast(event2 as char(8)),5) as event2
      ,right(timefromparts(0,0,datediff(ss,event1, event2),0,0),5) as delta
from (
    select "key"
          ,min(case when event1=1 then timestamp else cast('23:59:59' as time) end) as event1
          ,max(case when event2=1 then timestamp else cast('00:00:00' as time) end) as event2
    from table1
    group by "key"
) T
;
```

yields from the sample data:

```
key         event1 event2 delta
----------- ------ ------ -----
123         07:06  08:02  00:56
456         14:21  15:02  00:41
```
SQL Column with Max Values where seperate column = 1
[ "", "sql", "max", "" ]
I have the following code:

```
SELECT PhysicalAddressID
FROM tblPhysicalAddresses as a
WHERE EXISTS
(
    SELECT PhysicalAddressID
    FROM tblPhysicalAddresses as b
    WHERE a.PhysicalAddressID <> b.PhysicalAddressID
      AND a.Address1 = b.Address1
      AND a.Address2 = b.Address2
      AND a.Address3 = b.Address3
      AND a.City = b.City
      AND a.CountryID = b.CountryID
      AND a.PersonID = b.PersonID
      AND a.PhysicalAddressTypeID = b.PhysicalAddressTypeID
      AND a.PostalCode = b.PostalCode
      AND a.Province = b.Province
      AND a.StateID = b.StateID
      AND a.ZipCode = b.ZipCode
)
```

I can visually look at the data and see that there are matches. However, when I run the query it returns nothing. Could this be due to the comparison of nullable columns? If so, what is the alternative? The goal is to find duplicate rows, for example an instance where all of the columns are null; no match is found although they are all null. Basically I have 80 databases with the same schema. I decided to merge them into one. However, there are duplicate addresses, so my goal is to merge the duplicate addresses.
Not clear what you want, but this might be it:

```
SELECT a.PhysicalAddressID
FROM tblPhysicalAddresses as a
JOIN tblPhysicalAddresses as b
  ON a.PhysicalAddressID <> b.PhysicalAddressID
 AND (a.Address1 = b.Address1 or (a.Address1 is null and b.Address1 is null))
 AND (a.Address2 = b.Address2 or (a.Address2 is null and b.Address2 is null))
 AND (a.Address3 = b.Address3 or (a.Address3 is null and b.Address3 is null))
 AND ...
```
You can use [this technique](http://web.archive.org/web/20180422151947/http://sqlblog.com:80/blogs/paul_white/archive/2011/06/22/undocumented-query-plans-equality-comparisons.aspx) to evaluate two corresponding nulls as equal ``` WHERE a.PhysicalAddressID <> b.PhysicalAddressID AND EXISTS (SELECT a.Address1, a.Address2 INTERSECT SELECT b.Address1, b.Address2) ``` (Remaining columns left for you to fill out)
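The reason the trick works is that set operators compare NULLs as not-distinct, unlike `=`. That behavior isn't SQL Server specific; a quick check on SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# plain equality: NULL = NULL evaluates to NULL (unknown), never true
plain = con.execute("SELECT NULL = NULL").fetchone()[0]
# INTERSECT treats two NULLs as the same value, so a row survives
via_intersect = con.execute(
    "SELECT EXISTS (SELECT NULL INTERSECT SELECT NULL)"
).fetchone()[0]
print(plain, via_intersect)
```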
Comparing Columns with Null
[ "", "sql", "sql-server", "t-sql", "" ]
I do a `SUM` over a column, but the column can also contain string values. I want SQL Server to ignore the string values and sum only the numeric values. E.g. the column can have values like 16000Euro, or 2588, or 3671. The input comes from users and I can't change the validation in the app to integer. I have tried this, but it still throws an error:

```
SUM(CASE WHEN Type_New = 202 AND ISNUMERIC(Summe) = 1 THEN Summe ELSE 0 END) AS total_Euro
```

So how can I ignore the string values when doing the sum? The error I get is:

> Error converting nvarchar value '2588. ' to the int data type.

EDIT: I want SQL to ignore such string values and sum what it can. The main aim is that the query should not throw any error.
Try the below Query, it will work perfectly :) ``` SELECT SUM(CASE WHEN Type_New = 202 AND ISNUMERIC(Summe + '.0e0') = 1 THEN Summe ELSE 0 END) AS total_Euro FROM TableName ``` IsNumeric returns 1 if the varchar value can be converted to ANY number type (includes int, bigint, decimal, numeric, real & float) Values like 1e4,1., 2.0 will create the issue if the above check to bypass these values is not added.
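Stated precisely, the intent is: sum whatever parses as an integer, skip the rest. A plain-Python sketch of that rule over made-up sample rows, with `isdigit` standing in for the tightened `ISNUMERIC` test:

```python
rows = [
    ("202", "16000Euro"),  # skipped: not purely numeric
    ("202", "2588. "),     # skipped: trailing dot/space
    ("202", "2588"),
    ("202", "3671"),
    ("100", "999"),        # skipped: wrong Type_New
]
total = sum(int(v) for type_new, v in rows
            if type_new == "202" and v.strip().isdigit())
print(total)  # 2588 + 3671 = 6259
```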
Try this:

```
SUM(CASE WHEN Type_New = 202 AND Summe NOT LIKE '%[^0-9]%' THEN Summe ELSE 0 END) AS total_Euro
```

or

```
SUM(CASE WHEN Type_New = 202 AND ISNUMERIC(Summe + '.e0') = 1 THEN Summe ELSE 0 END) AS total_Euro
```
SQL Server : ignore strings in SUM function
[ "", "sql", "sql-server", "sql-server-2012", "sum", "" ]
I'm currently designing an MS SQL table for use in a project. The problem I'm currently facing is that the [documentation of the Transact-SQL data types](http://msdn.microsoft.com/de-de/library/ms187752.aspx) points to the deprecation of the types "text" and "ntext", which I always used for big texts. Char, nchar, varchar and nvarchar can only be 8000 bytes big, which just isn't enough for user-entered text, e.g. if someone is writing a long article. Is there any alternative to the obsolete data types text/ntext?
Using `nvarchar(MAX)` will allow you to store up to 2 GB of data. The same goes for `varchar(MAX)` too.
varchar(MAX) and nvarchar(MAX). Try 'em; you'll like 'em.
SQL: Alternative to text and ntext data type?
[ "", "sql", "text", "char", "varchar", "ntext", "" ]
Is there any way to pivot a table in SQL Server in the following way? I have data like:

```
| OldItem | NewItem |
---------------------
| HD1     | 365     |
```

I need output like below:

```
| Name    | Value1 |
---------------------
| OldItem | HD1    |
| NewItem | 365    |
```

Thanks in advance.
Please try using `UNPIVOT`. The sample below is written for the two fixed columns:

```
SELECT Name, Value1
FROM (SELECT * FROM tbl) p
UNPIVOT
    (Value1 FOR Name IN (OldItem, NewItem)
) AS unpvt;
```
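`UNPIVOT` is T-SQL-specific; the same two-row reshape can be written portably as one SELECT per output row glued together with `UNION ALL`. A runnable version on SQLite via Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (OldItem TEXT, NewItem TEXT)")
con.execute("INSERT INTO tbl VALUES ('HD1', '365')")
# one SELECT per source column, each tagged with the column's name
rows = con.execute("""
SELECT 'OldItem' AS Name, OldItem AS Value1 FROM tbl
UNION ALL
SELECT 'NewItem', NewItem FROM tbl
""").fetchall()
print(rows)  # [('OldItem', 'HD1'), ('NewItem', '365')]
```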
And here is my little code :D ``` DECLARE @dataTable TABLE (OldItem VARCHAR(10), NewItem INT) INSERT INTO @dataTable SELECT 'HD1', 365 INSERT INTO @dataTable SELECT 'HD2', 300 INSERT INTO @dataTable SELECT 'HD3', 200 INSERT INTO @dataTable SELECT 'HD4', 200 --first select data what you need and add upcoming new column name SELECT 'Value' + CAST(ROW_NUMBER() OVER (ORDER BY OldITem) AS VARCHAR) AS NewColumn, 'OldItem' as RowName, OldItem AS Item INTO #SelectedData FROM @dataTable WHERE OldItem IN ('HD1', 'HD2', 'HD3') UNION ALL SELECT 'Value' + CAST(ROW_NUMBER() OVER (ORDER BY OldITem) AS VARCHAR) AS NewColumn, 'NewItem' as RowName, CAST(NewItem AS VARCHAR) AS Item FROM @dataTable WHERE OldItem IN ('HD1', 'HD2', 'HD3') --Collect what column names will be DECLARE @columns NVARCHAR(MAX) = ( SELECT STUFF( (SELECT DISTINCT ', [' + NewColumn + ']' FROM #SelectedData FOR XML PATH ('')), 1, 2, '' ) ) -- create dynamic code for pivot DECLARE @dynamicSQL AS NVARCHAR(MAX); SET @dynamicSQL = N' SELECT RowName, ' + @columns + ' FROM #SelectedData PIVOT (MIN(Item) FOR NewColumn IN (' + @columns + ')) AS T '; EXEC sp_executesql @dynamicSQL ```
Pivot Header Data in row using sql server
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have three tables.

Category:

```
CategorySerno | CategoryName
1               One
2               Two
3               Three
```

Status:

```
StatusSerno | Status
1             Active
2             Pending
```

Data:

```
CatId | Status | Date
1       1        2014-07-26 11:30:09.693
2       2        2014-07-25 17:30:09.693
1       1        2014-07-25 17:30:09.693
1       2        2014-07-25 17:30:09.693
```

When I join them, I need the join to use only the latest date per category, like:

```
One    Active    2014-07-26 11:30:09.693
Two    Inactive  2014-07-25 17:30:09.693
Three  Null      Null
```

When I do a join and group them, it gives me:

```
One    Active    2014-07-26 11:30:09.693
One    Active    2014-07-26 11:30:09.693
One    Active    2014-07-26 11:30:09.693
Two    Inactive  2014-07-25 17:30:09.693
Three  Null      Null
```
You could use `ROW_NUMBER` in a CTE: ``` WITH CTE AS ( SELECT c.CategoryName, s.Status, d.Date, dateNum = ROW_NUMBER() OVER (PARTITION BY CatId, d.Status ORDER BY Date DESC) FROM Category c LEFT OUTER JOIN Data d ON c.CategorySerno = d.CatId LEFT OUTER JOIN Status s ON d.Status = s.StatusSerno ) SELECT CategoryName, Status, Date FROM CTE WHERE dateNum = 1 ``` `Demo-Fiddle`
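A close variant of that CTE, runnable on SQLite 3.25+ (where window functions are available) via Python. The partition here is by category only, so each category keeps exactly its latest row; note the sample data defines status 2 as 'Pending', which is why that appears instead of the 'Inactive' shown in the question's expected output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Category (CategorySerno INTEGER, CategoryName TEXT);
CREATE TABLE Status   (StatusSerno INTEGER, Status TEXT);
CREATE TABLE Data     (CatId INTEGER, Status INTEGER, Date TEXT);
INSERT INTO Category VALUES (1, 'One'), (2, 'Two'), (3, 'Three');
INSERT INTO Status   VALUES (1, 'Active'), (2, 'Pending');
INSERT INTO Data     VALUES (1, 1, '2014-07-26 11:30'), (2, 2, '2014-07-25 17:30'),
                            (1, 1, '2014-07-25 17:30'), (1, 2, '2014-07-25 17:30');
""")
rows = con.execute("""
WITH cte AS (
    SELECT c.CategoryName, s.Status, d.Date,
           ROW_NUMBER() OVER (PARTITION BY d.CatId ORDER BY d.Date DESC) AS rn
    FROM Category c
    LEFT JOIN Data d   ON d.CatId = c.CategorySerno
    LEFT JOIN Status s ON s.StatusSerno = d.Status
)
SELECT CategoryName, Status, Date FROM cte WHERE rn = 1
""").fetchall()
print(rows)
```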
You probably have a mismatch between the SELECT and GROUP BY columns, which causes the duplication. Try this:

```
SELECT Category.CategoryName, Status.Status, MAX(Data.Date) AS Date
FROM Data
LEFT JOIN Category ON Category.CategorySerno = Data.CatId
LEFT JOIN Status ON Status.StatusSerno = Data.Status
GROUP BY Category.CategoryName, Status.Status
```
Joining and Grouping data from 3 tables
[ "", "sql", "join", "group-by", "" ]
I have a table: ``` CREATE TABLE tblproducts ( productid integer, product character varying(20) ) ``` With the rows: ``` INSERT INTO tblproducts(productid, product) VALUES (1, 'CANDID POWDER 50 GM'); INSERT INTO tblproducts(productid, product) VALUES (2, 'SINAREST P SYP 100 ML'); INSERT INTO tblproducts(productid, product) VALUES (3, 'ESOZ D 20 MG CAP'); INSERT INTO tblproducts(productid, product) VALUES (4, 'HHDERM CREAM 10 GM'); INSERT INTO tblproducts(productid, product) VALUES (5, 'CREAM 15 GM'); INSERT INTO tblproducts(productid, product) VALUES (6, 'KZ LOTION 50 ML'); INSERT INTO tblproducts(productid, product) VALUES (7, 'BUDECORT 200 Rotocap'); ``` If I execute `string_agg()` on `tblproducts`: ``` SELECT string_agg(product, ' | ') FROM "tblproducts" ``` It will return the following result: ``` CANDID POWDER 50 GM | ESOZ D 20 MG CAP | HHDERM CREAM 10 GM | CREAM 15 GM | KZ LOTION 50 ML | BUDECORT 200 Rotocap ``` How can I sort the aggregated string, in the order I would get using `ORDER BY product`? I'm using PostgreSQL 9.2.4.
With postgres 9.0+ you can write: ``` select string_agg(product,' | ' order by product) from "tblproducts" ``` [Details here](http://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES).
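As a quick cross-check of what `order by product` inside the aggregate should produce, here is the same operation in plain Python (byte-wise sorting; actual Postgres output can differ under a non-C collation):

```python
products = [
    "CANDID POWDER 50 GM", "SINAREST P SYP 100 ML", "ESOZ D 20 MG CAP",
    "HHDERM CREAM 10 GM", "CREAM 15 GM", "KZ LOTION 50 ML", "BUDECORT 200 Rotocap",
]
result = " | ".join(sorted(products))
print(result)
```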
For Microsoft SQL Server, use `WITHIN GROUP`: <https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql?view=sql-server-2017>

```
SELECT STRING_AGG(product, '|') WITHIN GROUP (ORDER BY product)
FROM ...
```
How to sort the result from string_agg()
[ "", "sql", "postgresql", "string-aggregation", "" ]
Hello, I have an SQL script like this. My intention is for the script to produce the highest value of today; however, the script produces an unintended result. Can anyone look at my code and see what is wrong with it?

Script:

```
SELECT MAX(Value), TIMESTAMP, fooditem, cashcurren
FROM farm1
WHERE r.timestamp > 1405987200
  AND r.timestamp <= (1405987200 + 86400)
  AND fooditem = '2'
  AND cashcurren = '10'
GROUP BY timestamp, fooditem, cashcurren;
```

The unintended result:

```
Value Timestamp  fooditem cashcurren
200   1406029354 2        10
84    1406034965 2        10
536   1406034973 2        10
70    1406035006 2        10
63    1406035025 2        10
```

The result I want:

```
Value Timestamp  fooditem cashcurren
536   1406034973 2        10
```

Basically I want the SQL to return the highest value for food item #2 and cash currency #10 in the timestamp range 1405987200 to 1405987200 + 86400 (the timestamp range is the whole day of 7/22 in this case).
```
SELECT Value, TIMESTAMP, fooditem, cashcurren
FROM farm1 f
WHERE timestamp between 1405987200 and (1405987200 + 86400)
  AND fooditem = '2'
  AND cashcurren = '10'
  AND value = (select max(x.value)
               from farm1 x
               where x.timestamp between 1405987200 and (1405987200 + 86400)
                 and x.fooditem = f.fooditem
                 and x.cashcurren = f.cashcurren)
```

Using max(value) while grouping by timestamp does not lead to any aggregation and does not make sense (there is likely only one row per timestamp). The query above uses a subquery to select the max value for the given timestamp range, fooditem and cashcurren, and then feeds that value to the outer query in the where clause.
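This is the classic greatest-value-per-group pattern: a correlated subquery picks the maximum per (fooditem, cashcurren) group. A runnable sketch on SQLite via Python, using the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE farm1 (value INTEGER, timestamp INTEGER, fooditem TEXT, cashcurren TEXT);
INSERT INTO farm1 VALUES
  (200, 1406029354, '2', '10'), (84, 1406034965, '2', '10'),
  (536, 1406034973, '2', '10'), (70, 1406035006, '2', '10'),
  (63, 1406035025, '2', '10');
""")
row = con.execute("""
SELECT value, timestamp, fooditem, cashcurren
FROM farm1 f
WHERE timestamp > 1405987200 AND timestamp <= 1405987200 + 86400
  AND fooditem = '2' AND cashcurren = '10'
  AND value = (SELECT MAX(x.value) FROM farm1 x
               WHERE x.timestamp > 1405987200 AND x.timestamp <= 1405987200 + 86400
                 AND x.fooditem = f.fooditem AND x.cashcurren = f.cashcurren)
""").fetchone()
print(row)  # (536, 1406034973, '2', '10')
```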
``` SELECT MAX(Value), TIMESTAMP, fooditem, cashcurren FROM farm1 WHERE r.timestamp > 1405987200 AND r.timestamp <= (1405987200 + 86400) AND fooditem = '2' AND cashcurren = '10' GROUP BY timestamp, fooditem, cashcurren order by 1 desc limit 1; ```
SQL . Timestamp issue with max value
[ "", "sql", "sqlite", "oracle-sqldeveloper", "" ]
I have a single column `Varchar(2000)`. The data looks like this, all in one column:

```
12:10:08: Dialing12:10:08: Connecting12:10:08: ABC: abc:9433769781$100.88.77.0:878712:10:08: ABCD: 000012:10:09: Agent Initializing12:10:25: On Call12:10:25: Assigned to operator12:10:25: Waiting for Supervisor12:10:30: Waiting for Manager12:11:30: Call Ended12:11:30: Call Not connected..
```

I want to parse it like this:

```
12:10:08: Dialing
12:10:08: Connecting
12:10:08: ABC: abc:9433769782$100.88.77.0:8787
12:10:08: ABCD: 0000
12:10:25: Agent Initializing
12:10:18: On Call
12:10:25: Assigned to operator
12:10:30: Waiting for Supervisor
12:10:30: Waiting for Manager
12:11:30: Call Ended
12:11:30: Call Not connected
```

Any help? I've searched the whole forum, but I am really unsure about this, particularly given the absence of a specific delimiter. I appreciate your help.

P.S. This is just an example for a single record; the times are not constant.
Yuck. But, you can do this with a recursive CTE. Here is how: ``` with t as ( select '12:10:08: Dialing12:10:08: Connecting12:10:08: ABC: abc:9433769781$100.88.77.0:878712:10:08: ABCD: 000012:10:09: Agent Initializing12:10:25: On Call12:10:25: Assigned to operator12:10:25: Waiting for Supervisor12:10:30: Waiting for Manager12:11:30: Call Ended12:11:30: Call Not connected.. ' as col ), cte as ( select left(t.col, 9 + patindex('%[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %', substring(t.col, 11, 1000))) as val, substring(t.col, 10 + patindex('%[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %', substring(t.col, 11, 1000)), 1000) as rest from t where t.col like '[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %' union all select (case when rest like '[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %' then left(rest, 9 + patindex('%[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %', substring(rest, 11, 1000))) else rest end) as val, substring(rest, 10 + patindex('%[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %', substring(rest, 11, 1000)), 1000) as rest from cte where rest like '[0-9][0-9]:[0-9][0-9]:[0-9][0-9]: %' ) select val from cte; ``` The SQL Fiddle is [here](http://www.sqlfiddle.com/#!6/d41d8/20439).
An alternative:

```
DECLARE @string VARCHAR(1024) = '12:10:08: Dialing12:10:08: Connecting12:10:08: ABC: abc:9433769781$100.88.77.0:878712:10:08: ABCD: 000012:10:09: Agent Initializing12:10:25: On Call12:10:25: Assigned to operator12:10:25: Waiting for Supervisor12:10:30: Waiting for Manager12:11:30: Call Ended12:11:30: Call Not connected'

WITH T(last, pos) AS(
    SELECT 0, 1
    UNION ALL
    SELECT pos, pos + PATINDEX('%[0-9][0-9]:[0-9][0-9]:[0-9][0-9]%', SUBSTRING(@string, pos + 1, LEN(@string)))
    FROM T
    WHERE pos != last
)
SELECT SUBSTRING(@string, last, CASE WHEN pos = last THEN len(@string) ELSE pos - last END)
FROM T
WHERE LAST > 0
```

For:

```
(No column name)
12:10:08: Dialing
12:10:08: Connecting
12:10:08: ABC: abc:9433769781$100.88.77.0:8787
12:10:08: ABCD: 0000
12:10:09: Agent Initializing
12:10:25: On Call
12:10:25: Assigned to operator
12:10:25: Waiting for Supervisor
12:10:30: Waiting for Manager
12:11:30: Call Ended
12:11:30: Call Not connected
```
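If the value can be post-processed outside the database, the same split is a one-liner: break before every `hh:mm:ss: ` prefix with a zero-width lookahead. A Python sketch, shown for comparison:

```python
import re

log = ("12:10:08: Dialing12:10:08: Connecting12:10:08: ABC: abc:9433769781"
       "$100.88.77.0:878712:10:08: ABCD: 000012:10:09: Agent Initializing"
       "12:10:25: On Call12:10:25: Assigned to operator12:10:25: Waiting for "
       "Supervisor12:10:30: Waiting for Manager12:11:30: Call Ended"
       "12:11:30: Call Not connected")
# split before every hh:mm:ss: prefix; the lookahead keeps the prefix in the result
lines = [p for p in re.split(r"(?=\d\d:\d\d:\d\d: )", log) if p]
print(len(lines))  # 11
print(lines[2])    # 12:10:08: ABC: abc:9433769781$100.88.77.0:8787
```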
Parse data, single to multiple
[ "", "sql", "sql-server", "" ]
I'm trying to copy all the data from oa_tags into member_info. The problem is that I have a unique auto_increment key in both oa_tags and member_info (it's the same in both, called ID). I need it to copy all the data from oa_tags into member_info, but obviously it has to ignore the entries with the same "ID" column. This is what I have so far:

```
INSERT INTO member_info
SELECT * FROM oa_tags, member_info
WHERE oa_tags.ID > member_info.ID;
```

It's throwing this error at me: "#1136 - Column count doesn't match value count at row 1". Any suggestions are welcome. Thanks.
This is how MySQL supports what you're wanting to do: <http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-select-into-table.html>

This explains why I wanted to know the field list/structure of both tables. (I don't like the NOT IN. I think there has to be a way to do it with EXISTS, but I'm struggling; and I'm not sure you really care about performance, as this seems to be a one-time thing.)

```
INSERT INTO member_info (FIELD LIST)
SELECT (FIELD LIST) from oa_tags
where ID not in (Select ID from member_info)
```

This might work, but I doubt it, and it's far from best practice; still, if it's one-time throwaway code, it might get the job done:

```
INSERT INTO member_info
SELECT * from oa_tags
where ID not in (Select ID from member_info)
```
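A runnable sketch of the `NOT IN` variant on SQLite via Python; the `name` column is made up, since the real field lists aren't shown in the question (which is exactly why naming the columns matters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE member_info (ID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE oa_tags     (ID INTEGER PRIMARY KEY, name TEXT);
INSERT INTO member_info VALUES (1, 'kept'), (2, 'kept');
INSERT INTO oa_tags     VALUES (2, 'duplicate id, skipped'), (3, 'copied');
""")
# copy only the rows whose ID is not already present
con.execute("""
INSERT INTO member_info (ID, name)
SELECT ID, name FROM oa_tags
WHERE ID NOT IN (SELECT ID FROM member_info)
""")
rows = con.execute("SELECT ID, name FROM member_info ORDER BY ID").fetchall()
print(rows)  # [(1, 'kept'), (2, 'kept'), (3, 'copied')]
```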
If you implement the best practice of naming all the columns, it will fix many of the problems you are currently having. In general it is poor practice to use * for columns outside of testing. Naming/qualifying your columns (even when there are a lot of them) will prevent many future issues when tables change. You can name the columns in both the insert and the select so that they match, and only insert the ones you are interested in, such as ignoring the 'ID' column.
Trying to use INTO...SELECT in MySQL. Having trouble with the WHERE clause.
[ "", "mysql", "sql", "syntax", "" ]
I have a many-to-many relationship between User & Role, with a role\_user table. My migrations are setup as so (simplified): `users` table: ``` public function up() { Schema::create('users', function(Blueprint $table) { $table->increments('id'); $table->string('email')->unique(); }); } ``` `roles` table: ``` public function up() { Schema::create('roles', function(Blueprint $table) { $table->increments('id'); $table->string('name'); }); } ``` `role_user` table: ``` public function up() { Schema::create('role_user', function(Blueprint $table) { $table->increments('id'); $table->integer('user_id')->unsigned(); $table->foreign('user_id')->references('id')->on('users')->onDelete('cascade'); $table->integer('role_id')->unsigned(); $table->foreign('role_id')->references('id')->on('roles')->onDelete('cascade'); }); } ``` So as per the docs, I set my foreign keys to unsigned. Now, I add a couple of users, and attach some roles - everything works fine. However, when I delete a user (`User::destroy(2)`) the rows for that user in the `role_user` table do not get deleted, which is causing redundant rows. What am I doing wrong? * MySQL + InnoDB EDIT: Grabbing the model and applying `->delete();` also has the same effect.
Try setting the storage engine when creating the table; this fix has worked for me:

```
$table->engine = 'InnoDB';
```

I have filed a bug under: <https://github.com/laravel/framework/issues/8730>
As established in Jake's answer, you need to set the engine to InnoDB:

```
$table->engine = 'InnoDB';
```

Instead of doing it in each migration file, you can do so once in `config/database.php`:

```
'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'strict' => false,
    'engine' => 'InnoDB',
],
```
Laravel foreign key onDelete('cascade') not working
[ "", "sql", "laravel", "eloquent", "foreign-key-relationship", "" ]
I need to optimize a query. I have two tables:

* `tblcard` (`CardID int, SerialNumber varchar(15), clientID`)
* `tblTransaction` (`TransactionID int, SerialNumber Varchar(15), Transactiondate datetime, ...`)

For a date interval, I need to list all cards that were involved in a transaction, the client name, and the date of the first transaction for each card.

Here is what I've done:

```
select tra.serialNumber, cli.clientName,
       (select top 1 tra.Transactiondate
        from tblTransaction tra
        where tra.SerialNumber = car.SerialNumber
        order by tra.TransactionDate)
from tblTransaction tra
left join tblCard car on car.SerialNumber = tra.SerialNumber
left join tblClient cli on car.ClientID = cli.ClientID
where --date conditions
```

But given that there are very many transactions, this query runs very slowly (more than 3 minutes). Do you have any idea how to optimize this?
An execution plan would help. Out of the box, replacing the `left` joins with `inner` joins might help, if possible. Using a correlated subquery in the select list is also bad from a performance viewpoint. Instead, you might want to use a view or a CTE; since you want the *first* transaction date per card, aggregate with `min`:

```
with FirstTransactions (SerialNumber, TransactionDate) as
(
    select SerialNumber, min(TransactionDate) as TransactionDate
    from tblTransaction
    group by SerialNumber
)
select tra.serialNumber, cli.clientName, ft.TransactionDate
from tblTransaction tra
left join FirstTransactions ft on ft.SerialNumber = tra.SerialNumber
left join tblCard car on car.SerialNumber = tra.SerialNumber
left join tblClient cli on car.ClientID = cli.ClientID
where --date conditions
```

Of course, if you're not using the proper indices, it might not help much. That's why looking at the query execution plan is important. Where is the query spending time, and why? Can you limit the dataset in a reasonable way? Would introducing a new index on some column help? And why are you joining on the serial number, a 15-character string, instead of some identity column?
Luaan, your answer is good; here is a more readable (in my opinion) alternative (ignore the missing tblClient table): ``` select tblTransaction.TransactionDate, Mindate.SerialNumber, Mindate.TransactionDate from tblTransaction outer apply (select MIN(tra.Transactiondate) TransactionDate, car.SerialNumber from tblTransaction tra INNER JOIN tblCard car on car.SerialNumber = tra.SerialNumber where car.SerialNumber = tblTransaction.SerialNumber group by car.SerialNumber) Mindate where tblTransaction.TransactionDate between '2013-05-05' and '2014-05-05' ```
sql query performance improvements
[ "", "sql", "performance", "query-optimization", "" ]
I really don't know what I did wrong... I was following advice from a blog post stating that this code would allow me to keep Access from breaking up my criteria (I have a ton of criteria, and Access was splitting this statement into four separate lines and adding columns). Here's my code right now: ``` Choose(1,(([dbo_customerQuery].[store])>=[forms]![TransactionsForm]![txtStoreFrom] Or [forms]![TransactionsForm]![txtStoreFrom] Is Null) And (([dbo_customerQuery].[store]) <=[forms]![TransactionsForm]![txtStoreTo] Or [forms]![TransactionsForm]![txtStoreTo] Is Null)) ``` The statement inside of the Choose is definitely correct, so am I using "Choose" wrong? I don't get it; the blog post used it exactly this way. When I execute queries, no matter what those fields contain, I end up getting no results. The query is supposed to filter based on a range, taking null values into account.
I have found the problem now. My statement ``` Choose(1,(([dbo_customerQuery].[store])>=[forms]![TransactionsForm]![txtStoreFrom] Or [forms]![TransactionsForm]![txtStoreFrom] Is Null) And (([dbo_customerQuery].[store]) <=[forms]![TransactionsForm]![txtStoreTo] Or [forms]![TransactionsForm]![txtStoreTo] Is Null)) ``` was correct; the problem was that I assumed it would work as a criterion, but it actually had to be done exactly as in the blog post. It had to be entered directly as the FIELD, with "<> False" as the criterion. Once done, it stayed on one line and worked just as expected.
My concern is that you are trying to work around a bad design. You may get this immediate issue solved to some degree, and continue to build the bad design. Access is flexible, and forgiving, but there's a big price eventually -- maybe you're already there. I realize this is not an answer. It may seem rude -- I apologize. But I think the general advice may help you. I'll tag this "community wiki" since I'm not contributing to a programming solution.
Why does this choose statement not work in an Access criteria?
[ "", "sql", "ms-access", "" ]
I have this trigger to merge a column value into its parent table. In this case the InvoiceTotal gets added to / subtracted from the Account's InvoiceTotal: ``` WITH Deltas as ( SELECT AccountID, Sum(InvoiceTotal) as InvoiceTotal From inserted Group By AccountID UNION ALL SELECT AccountID, Sum(InvoiceTotal * -1) as InvoiceTotal From deleted Group By AccountID ), Merged as (Select AccountID, Sum(InvoiceTotal) InvoiceTotal From Deltas Group by AccountID) Update Account set InvoiceTotal = Account.InvoiceTotal + Merged.InvoiceTotal From Merged Where Account.AccountID = Merged.AccountID; ``` Now I have a new column called IsCancelled in the Invoice table. How can I modify the above trigger to handle that? If the invoice is cancelled, the Account total should decrease, and if IsCancelled is set back to 0, it should increase. Is it possible to do this in one single SQL statement? Thanks
By adding the IsCancelled flag to your invoice table, you are saying that when an invoice is cancelled, you want to remove that invoice's total amount from the account balance. Another way to say this is that when IsCancelled = 1, the InvoiceTotal should be treated as 0, and the account balance updated accordingly. For example: if the old invoice total was 100, and that invoice is then cancelled, you should subtract 100 from the account table (inserted/new value of 0 minus deleted/old value of 100). In that case, you can update your trigger query as follows. ``` WITH Deltas as ( SELECT AccountID, Sum(case when IsCancelled = 0 then InvoiceTotal else 0 end) as InvoiceTotal From inserted Group By AccountID UNION ALL SELECT AccountID, Sum(case when IsCancelled = 0 then InvoiceTotal * -1 else 0 end) as InvoiceTotal From deleted Group By AccountID ), Merged as (Select AccountID, Sum(InvoiceTotal) InvoiceTotal From Deltas Group by AccountID) Update Account set InvoiceTotal = Account.InvoiceTotal + Merged.InvoiceTotal From Merged Where Account.AccountID = Merged.AccountID; ``` Here is a SQL Fiddle with examples of this in action. <http://sqlfiddle.com/#!6/e0246/16>
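The trigger itself is SQL Server-specific, but its heart — the CASE-inside-SUM delta computation over the `inserted` and `deleted` pseudo-tables — can be sanity-checked with ordinary tables in SQLite via Python. The rows below simulate an UPDATE that flips IsCancelled from 0 to 1 on a 100.00 invoice (all names and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-ins for the trigger's pseudo-tables during one UPDATE.
conn.executescript("""
CREATE TABLE inserted (AccountID INTEGER, InvoiceTotal REAL, IsCancelled INTEGER);
CREATE TABLE deleted  (AccountID INTEGER, InvoiceTotal REAL, IsCancelled INTEGER);
INSERT INTO inserted VALUES (10, 100.0, 1);  -- new row: cancelled
INSERT INTO deleted  VALUES (10, 100.0, 0);  -- old row: active
""")

# Same Deltas shape as the answer: cancelled rows contribute 0.
deltas = conn.execute("""
    WITH Deltas AS (
        SELECT AccountID,
               SUM(CASE WHEN IsCancelled = 0 THEN InvoiceTotal ELSE 0 END) AS InvoiceTotal
        FROM inserted GROUP BY AccountID
        UNION ALL
        SELECT AccountID,
               SUM(CASE WHEN IsCancelled = 0 THEN -InvoiceTotal ELSE 0 END)
        FROM deleted GROUP BY AccountID
    )
    SELECT AccountID, SUM(InvoiceTotal) FROM Deltas GROUP BY AccountID
""").fetchall()
```

Cancelling the 100.00 invoice yields a net delta of -100.0 for the account, as the answer describes.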
``` WITH Deltas as ( SELECT AccountID, Sum(CASE WHEN IsCancelled = 0 THEN InvoiceTotal ELSE InvoiceTotal * -1 END) as InvoiceTotal From inserted Group By AccountID UNION ALL SELECT AccountID, Sum(CASE WHEN IsCancelled = 0 THEN InvoiceTotal * -1 ELSE InvoiceTotal END) as InvoiceTotal From deleted Group By AccountID ), Merged as (Select AccountID, Sum(InvoiceTotal) InvoiceTotal From Deltas Group by AccountID) Update Account set InvoiceTotal = Account.InvoiceTotal + Merged.InvoiceTotal From Merged Where Account.AccountID = Merged.AccountID; ``` A CASE expression might help you with this; note the inverted sign in the deleted portion. The CASE has to go inside the SUM — otherwise IsCancelled would have to appear in the GROUP BY.
SQL trigger to merge totals to parent table (with cancelled column)
[ "", "sql", "sql-server", "triggers", "" ]
I stumbled over a database design in which a table has in one column the name of the table that needs to be joined and in another column the foreign key to that table (integer, not the foreign key name). I wonder how would you solve this? This is the current situation: *Table chatting:* ``` id | someColumn | chatter_id | chatter_ref -------------------------------------------------------- 1 2 1 customers 2 3 1 colleagues 3 4 2 customers ``` *Table customers:* ``` id | colA ------------- 1 whatever 2 hello_world ``` *Table colleagues:* ``` id | colB ------------- 1 yesyes ``` EDIT: What I'd do, would be to have multiple `chatting` tables, one for each referenced table. Like this: *Table chatting\_customers (with a foreign key to customers table):* ``` id | someColumn | chatter_id ------------------------------------ 1 2 1 3 4 2 ``` *Table chatting\_colleagues (with a foreign key to colleagues table):* ``` id | someColumn | chatter_id ------------------------------------ 2 3 1 ``` Instead of joining all tables and deciding from which table to select based on the column chatter\_ref (with a `select case when chatter_ref = 'customers' then customers.colA else colleagues.colB end from ...`) I would `union` multiple queries. Is this the way to go? Is there a better way to **normalize** this?
This is the classic Polymorphic Association anti-pattern. There are a number of possible solutions: 1. Exclusive Arcs (as suggested by Argeman), e.g. for the chatting table: ``` id | someColumn | customerId | colleagueId ------------------------------------------ 1 2 1 2 3 1 3 4 2 ``` Where both customerId and colleagueId are nullable and exactly one must be non-null. Foreign keys can be declared on customerId and colleagueId. 2. Reverse the Relationship, e.g. remove the chatterId and chatterRef from the chatting table and create two new tables: customer chattings ``` chattingId | customerId ----------------------- 1 1 3 2 ``` colleague chattings ``` chattingId | colleagueId ------------------------ 2 1 ``` Foreign keys can be declared on customerId and colleagueId. 3. Create a super-type table for customers/colleagues, such as persons, and have the chatting table reference the primary key of this new table.
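Option 1 (exclusive arcs) can be sketched in any engine that supports CHECK constraints; here is a small SQLite demo via Python. Table and column names follow the question, and the CHECK expression is one possible way (an assumption, not the only one) to enforce "exactly one reference set":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE customers  (id INTEGER PRIMARY KEY);
CREATE TABLE colleagues (id INTEGER PRIMARY KEY);
CREATE TABLE chatting (
    id INTEGER PRIMARY KEY,
    someColumn INTEGER,
    customerId  INTEGER REFERENCES customers(id),
    colleagueId INTEGER REFERENCES colleagues(id),
    -- exclusive arc: exactly one of the two references must be set
    CHECK ((customerId IS NULL) + (colleagueId IS NULL) = 1)
);
INSERT INTO customers VALUES (1);
INSERT INTO chatting VALUES (1, 2, 1, NULL);   -- valid: customer reference only
""")

try:
    # Invalid: neither reference is set, so the CHECK rejects it.
    conn.execute("INSERT INTO chatting VALUES (2, 3, NULL, NULL)")
    violated = False
except sqlite3.IntegrityError:
    violated = True

n_rows = conn.execute("SELECT COUNT(*) FROM chatting").fetchone()[0]
```

Real foreign keys now protect both arcs, which the single chatter_id/chatter_ref pair could not.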
I would add two or more **nullable** columns to the chatting table, each referencing one of the other tables: for example one column chatter\_colleague and one chatter\_customer. That is also not very nice, but I just don't know any *really* good solution! It would need some effort to keep the table clean, but otherwise it offers optimal joining and indexing capabilities as far as I know. And it is quite simple, which is desirable from my point of view.
How to resolve "tablename to be joined in a table"?
[ "", "mysql", "sql", "database-design", "" ]
Below is a query I use to get the latest record per `serverID`; unfortunately this query takes forever to process. According to the Stack Overflow question below, it should be a very fast solution. Is there any way to speed up this query, or do I have to split it up? (First get all serverIDs, then get the last record for each server.) [Retrieving the last record in each group](https://stackoverflow.com/questions/1313120/retrieving-the-last-record-in-each-group) ``` SELECT s1.performance, s1.playersOnline, s1.serverID, s.name, m.modpack, m.color FROM stats_server s1 LEFT JOIN stats_server s2 ON (s1.serverID = s2.serverID AND s1.id < s2.id) INNER JOIN server s ON s1.serverID=s.id INNER JOIN modpack m ON s.modpack=m.id WHERE s2.id IS NULL ORDER BY m.id 15 rows in set (34.73 sec) ``` **Explain:** ``` +------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------+ | 1 | SIMPLE | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Impossible WHERE | +------+-------------+-------+------+---------------+------+---------+------+------+----------+------------------+ 1 row in set, 1 warning (0.00 sec) ``` **Sample Output:** ``` +-------------+---------------+----------+---------------+-------------------------+--------+ | performance | playersOnline | serverID | name | modpack | color | +-------------+---------------+----------+---------------+-------------------------+--------+ | 99 | 18 | 15 | hub | Lobby | AAAAAA | | 98 | 12 | 10 | horizons | Horizons | AA00AA | | 97 | 6 | 11 | m_lobby | Monster | AA0000 | | 99 | 1 | 12 | m_north | Monster | AA0000 | | 86 | 10 | 13 | m_south | Monster | AA0000 | | 87 | 17 | 14 | m_east | Monster | AA0000 | | 98 | 10 | 16 | m_west | Monster | AA0000 | | 84 | 7 | 5 | tppi | Test
Pack Please Ignore | 55FFFF | | 95 | 15 | 6 | agrarian_plus | Agrarian Skies | 00AA00 | | 98 | 23 | 7 | agrarian2 | Agrarian Skies | 00AA00 | | 74 | 18 | 9 | agrarian | Agrarian Skies | 00AA00 | | 97 | 37 | 17 | agrarian3 | Agrarian Skies | 00AA00 | | 99 | 17 | 3 | bteam_pvp | Attack of the B-Team | FFAA00 | | 73 | 44 | 8 | bteam_pve | Attack of the B-Team | FFAA00 | | 93 | 11 | 4 | crackpack | Crackpack | EFEFEF | +-------------+---------------+----------+---------------+-------------------------+--------+ 15 rows in set (38.49 sec) ``` **Sample Data:** <http://www.mediafire.com/download/n0blj1io0c503ig/mym_bridge.sql.bz2>
**Edit** OK, I solved it. Here are the expanded rows showing your original slow query: ![enter image description here](https://i.stack.imgur.com/OuTfj.jpg) And here is a fast query using `MAX()` with `GROUP BY` that gives identical results. Please try it for yourself. ``` SELECT s1.id ,s1.performance ,s1.playersOnline ,s1.serverID ,s.name ,m.modpack ,m.color FROM stats_server s1 JOIN ( SELECT MAX(id) as 'id' FROM stats_server GROUP BY serverID ) AS s2 ON s1.id = s2.id JOIN server s ON s1.serverID = s.id JOIN modpack m ON s.modpack = m.id ORDER BY m.id ``` ![enter image description here](https://i.stack.imgur.com/PECsE.jpg)
I would phrase this query using `not exists`: ``` SELECT ss.performance, ss.playersOnline, ss.serverID, s.name, m.modpack, m.color FROM stats_server ss INNER JOIN server s ON ss.serverID = s.id INNER JOIN modpack m ON s.modpack = m.id WHERE NOT EXISTS (select 1 from stats_server ss2 where ss2.serverID = ss.serverID AND ss2.id > ss.id ) ``` Apart from the primary key indexes on `server` and `modpack` (which I assume are there), you also want an index on `stats_server(ServerId, id)`. This index should also help your version of the query.
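The `NOT EXISTS` greatest-per-group shape can be tried out with Python's `sqlite3` module; the sample rows and the reduced column list are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stats_server (id INTEGER PRIMARY KEY, serverID INTEGER, playersOnline INTEGER);
INSERT INTO stats_server VALUES (1, 15, 5), (2, 15, 18), (3, 10, 12);
""")

# Keep a row only if no other row for the same server has a higher id.
rows = conn.execute("""
    SELECT ss.serverID, ss.playersOnline
    FROM stats_server ss
    WHERE NOT EXISTS (SELECT 1 FROM stats_server ss2
                      WHERE ss2.serverID = ss.serverID AND ss2.id > ss.id)
    ORDER BY ss.serverID
""").fetchall()
```

Only the latest snapshot per server survives, which is exactly what the index on `(ServerId, id)` accelerates.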
MySQL Query performance improvement for order by before group by
[ "", "mysql", "sql", "" ]
``` SELECT `text`, `oc_product_to_category.product_id` FROM `oc_product_attribute` AS pa LEFT JOIN `oc_attribute_description` AS od ON pa.`attribute_id`=od.`attribute_id` LEFT JOIN `oc_product_to_category` AS opc ON pa.`product_id`=od.`product_id` ``` The problem is caused by `oc_product_to_category.product_id`. I removed that and it works. As far as I know I can write opc.`product_id`, but why doesn't `oc_product_to_category.product_id` work?
Backticks aren't causing your problem in this case. You aliased the table, but then used the original name in the column list. You need to refer to it by its alias. ``` oc_product_to_category.product_id ``` should be ``` opc.product_id ``` I recommend you remove all of the backticks and just write standard SQL. I see nothing in the query that justifies cluttering it with non-standard escape sequences; that includes TEXT as MySQL allows you to use it unescaped, along with a short list of other reserved words like ACTION, BIT, DATE, ENUM, NO, TIME and TIMESTAMP. <http://dev.mysql.com/doc/refman/5.5/en/reserved-words.html> In general, I avoid escaping identifiers, and I specifically avoid backticks because they are non-standard and non-portable. MySQL can handle the standard SQL quoting syntax of double quotes, if properly configured. On new databases I enable ANSI\_QUOTES. ``` SET global sql_mode='ANSI_QUOTES'; ``` Though you should never do this on a production database, because it will change the behavior of existing queries. Once enabled, you'll be able to use ANSI (standard) quoting, but the side effect is you can no longer use double quotes for string literal values, however, that is also a non-standard practice which only works on MySQL and should be avoided.
`oc_product_to_category.product_id` If you want to use backticks, then use it for your table and column separately, like this: `oc_product_to_category`.`product_id` Also, you have aliased your table, so use your alias, like this: `opc`.`product_id` But you do not need backticks in this case.
backtick quote causing error in sql query
[ "", "mysql", "sql", "" ]
I am trying to get the max + 1 value from one column, and all of the values from another column. However, my query does not give any results. For example, ``` SectionItemID SectionItem 1 blue 2 red ``` The query should return ``` SectionItemID SectionItem 3 blue red ``` Here's what I have: ``` SELECT SectionItem,MAX(SectionItemID) + 1 AS SectionItemID FROM Core.SectionItem_Lkup ```
``` SELECT SectionItem, (select MAX(SectionItemID)+1 FROM Core.SectionItem_Lkup) AS SectionItemID FROM Core.SectionItem_Lkup ```
Whenever you `GROUP BY`, you *should* aggregate the other columns involved. * MySQL does allow you to omit aggregation on other columns. * SQL Server does not, because the result is undefined for columns without aggregation. The best way *is* to aggregate the other columns. For your scenario, you could use `group_concat`: ``` SELECT MAX(SectionItemID)+1, Group_concat(SectionItem) FROM tbl ``` Note: The query does not contain any `Group By`, because you don't want to group on `SectionItemId` or `SectionItem`. Omitting the `Group By` and using aggregate functions applies them to the whole table. Output: ``` MAX(SECTIONITEMID)+1 GROUP_CONCAT(SECTIONITEM) 3 blue,red ``` <http://sqlfiddle.com/#!2/353bf3/6>
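SQLite happens to support `GROUP_CONCAT` with the same default comma separator, so the whole-table aggregation can be verified with Python's `sqlite3` module (schema trimmed to the two columns from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SectionItem_Lkup (SectionItemID INTEGER, SectionItem TEXT);
INSERT INTO SectionItem_Lkup VALUES (1, 'blue'), (2, 'red');
""")

# No GROUP BY: both aggregates run over the whole table, giving one row.
row = conn.execute("""
    SELECT MAX(SectionItemID) + 1, GROUP_CONCAT(SectionItem)
    FROM SectionItem_Lkup
""").fetchone()
```

The single result row carries the next ID alongside the concatenated item names.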
Select Max and Select other column
[ "", "mysql", "sql", "select", "max", "" ]
I have a table with a login column that has the text FTO and then a sequence of numbers following it (i.e. `FTO3210` or `FTO1002`). I have a query that says `SELECT * FROM tablename`. I am trying to filter it so that it does not `SELECT` any row that has a login value ranging from `FTO1000` to `FTO1010`.
``` SELECT * FROM [Table] WHERE [Login] NOT BETWEEN 'FTO1000' AND 'FTO1010' ``` or ``` SELECT * FROM [Table] WHERE CAST(REPLACE([Login],'FTO','') AS INT) NOT BETWEEN 1000 AND 1010 ```
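The second variant (strip the prefix, compare numerically) is easy to check with Python's `sqlite3` module; note it also handles logins whose numeric part has a different width, which a plain string `BETWEEN` would not. The sample logins are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (login TEXT);
INSERT INTO logins VALUES ('FTO0999'), ('FTO1000'), ('FTO1005'), ('FTO1010'), ('FTO3210');
""")

# Strip the FTO prefix and compare as integers; both range endpoints are excluded.
rows = conn.execute("""
    SELECT login FROM logins
    WHERE CAST(REPLACE(login, 'FTO', '') AS INTEGER) NOT BETWEEN 1000 AND 1010
    ORDER BY login
""").fetchall()
```

Everything inside FTO1000..FTO1010 is filtered out; logins on either side survive.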
[SQLFiddle](http://sqlfiddle.com/#!2/1feced/6/0): ``` CREATE TABLE X ( V VARCHAR(100) NOT NULL ); INSERT X (V) VALUES ('FTO3210'); INSERT X (V) VALUES ('FTO1002'); SELECT V FROM X WHERE NOT SUBSTR(V,4,4) BETWEEN '1000' and '1010'; ```
How do I filter out SQL rows where numbers range?
[ "", "mysql", "sql", "" ]
I think I'm encountering a fairly simple problem in PL/SQL on an Oracle database (10g), and I'm hoping one of you can help me out. I'm trying to explain this as clearly as possible, but it's hard for me. When I try to compare varchar2 values of two different tables to check whether I need to create a new record or can re-use the ID of the existing one, the DB (or I) compares these values in a wrong way. All is fine when both fields contain a value; that results in 'a' = 'a', which it understands. But when both fields are NULL (or '', which Oracle will turn into NULL) it cannot compare the fields. I found a 'solution' to this problem, but I'm certain there is a better way. ``` rowTable1 ROWTABLE1%ROWTYPE; iReUsableID INT; SELECT * INTO rowTable1 FROM TABLE1 WHERE TABLE1ID = 'someID'; SELECT TABLE2ID INTO iReUsableID FROM TABLE2 WHERE NVL(SOMEFIELDNAME,' ') = NVL(rowTable1.SOMEFIELDNAME,' '); ``` So NVL changes the `null` value to `' '`, after which the comparison works as expected. Thanks in advance, Dennis
Your method is fine, unless one of the values could be a space. The "standard" way of doing the comparison is to explicitly compare to `NULL`: ``` WHERE col1 = col2 or col1 is null and col2 is null ``` In Oracle, comparisons on strings are encumbered by the fact that Oracle treats the empty string as NULL. This is a peculiarity of Oracle and not a problem in other databases.
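The explicit-NULL comparison can be demonstrated with Python's `sqlite3` module (which, unlike Oracle, keeps empty strings distinct from NULL, so only the NULL case is shown). SQLite's `IS` operator is included as a built-in null-safe shortcut, similar in spirit to MySQL's `<=>`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pairs (col1 TEXT, col2 TEXT);
INSERT INTO pairs VALUES ('a', 'a'), ('a', 'b'), (NULL, NULL), ('a', NULL);
""")

# Explicit NULL handling: equal values, or both NULL (AND binds tighter than OR).
matches = conn.execute("""
    SELECT COUNT(*) FROM pairs
    WHERE col1 = col2 OR col1 IS NULL AND col2 IS NULL
""").fetchone()[0]

# SQLite's IS operator performs the same null-safe equality in one step.
matches_is = conn.execute("SELECT COUNT(*) FROM pairs WHERE col1 IS col2").fetchone()[0]
```

Both forms match the ('a','a') pair and the (NULL, NULL) pair, and nothing else.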
You can use `LNNVL` function (<http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions078.htm>) and reverse the condition: ``` SELECT TABLE2ID INTO iReUsableID FROM TABLE2 WHERE LNNVL(SOMEFIELDNAME != rowTable1.SOMEFIELDNAME); ```
PL/SQL Oracle condition equals
[ "", "sql", "oracle", "plsql", "" ]
I have two tables: 1st table: ``` NAMES ----------------------------- CD_SPECIES SPECIES 1 Sp1 2 Sp2 3 Sp3 ``` Created with this command: ``` CREATE TABLE NAMES ( CD_SPECIES serial PRIMARY KEY, SPECIES varchar(64)); ``` and the 2nd one: ``` COEFFICIENTS ------------------------------- CD_COEFFICIENT COEFFICIENT 1 Coeff1 2 Coeff2 ``` created with ``` CREATE TABLE COEFFICIENTS ( CD_COEFFICIENT serial PRIMARY KEY, COEFFICIENT varchar(64) --HOLDS A COEFFICIENT NAME ); ``` I want to create a third table with the following ``` COMBINED TABLE ---------------------------------------- SPECIES COEFFICIENT CVALUE Sp1 Coeff1 Sp1 Coeff2 Sp2 Coeff1 Sp2 Coeff2 Sp3 Coeff1 Sp3 Coeff2 ``` where `CVALUE` is the column that will hold `float` type data defined by me. How should I create the 3rd table? NOTE: If there is another way of combining these tables feel free to share it (e.g. combining the keys etc.). I am very new to databases! Thanks
You would do this with `create table as` and a `cross join` (note your tables are named `names` and `coefficients`): ``` create table thirdtable as select s.species, c.coefficient, cast(12345 as real) as cvalue from names s cross join coefficients c; ```
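The same `CREATE TABLE ... AS SELECT` over a `CROSS JOIN` works in SQLite, so it can be sanity-checked from Python; here `cvalue` starts out NULL on the assumption that you fill it in later:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE names (cd_species INTEGER PRIMARY KEY, species TEXT);
CREATE TABLE coefficients (cd_coefficient INTEGER PRIMARY KEY, coefficient TEXT);
INSERT INTO names VALUES (1, 'Sp1'), (2, 'Sp2'), (3, 'Sp3');
INSERT INTO coefficients VALUES (1, 'Coeff1'), (2, 'Coeff2');
""")

# Every species paired with every coefficient; cvalue filled in later by the user.
conn.execute("""
    CREATE TABLE combined AS
    SELECT n.species, c.coefficient, CAST(NULL AS REAL) AS cvalue
    FROM names n CROSS JOIN coefficients c
""")
n_rows = conn.execute("SELECT COUNT(*) FROM combined").fetchone()[0]
```

Three species times two coefficients gives the six-row combined table from the question.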
Create the 3rd table just as you created the other two, and then fill it by inserting the result of a cross join of tables 1 and 2: ``` INSERT INTO table3 (species, coefficient) SELECT n.SPECIES, c.COEFFICIENT FROM names n CROSS JOIN coefficients c; ```
Combining two tables and defining a third field
[ "", "sql", "postgresql", "" ]
I have looked through questions here and have not been able to find exactly what I am looking for. How would I create a query in MySQL that would return missing data in field Name in database Demo as a string (such as 'Blank') instead of NULL? I really appreciate your help!
``` SELECT IFNULL(fieldname, 'Blank') FROM tablename ``` or ``` SELECT COALESCE(fieldname, 'Blank') FROM tablename ```
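Both functions exist in SQLite as well, so the behavior is easy to verify from Python (table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE demo (name TEXT);
INSERT INTO demo VALUES ('Alice'), (NULL);
""")

# IFNULL takes exactly two arguments; COALESCE accepts two or more.
rows = conn.execute("SELECT IFNULL(name, 'Blank') FROM demo").fetchall()
rows2 = conn.execute("SELECT COALESCE(name, 'Blank') FROM demo").fetchall()
```

The NULL row comes back as the literal string 'Blank' in both variants.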
Yes, in MySQL you can use `IFNULL`, as in: ``` SELECT IFNULL(fieldname, 'Blank') as fieldname, ... ```
SQL Renaming NULL with string
[ "", "mysql", "sql", "string", "null", "" ]
I have a really big stored procedure which I cannot share, but I am only having trouble with implementing a dynamic WHERE clause, which already has 7 ANDs. Within the last 'And' of the WHERE clause, I need to check a parameter passed to the stored procedure and construct the 'And' accordingly. Basically, a user can pass in either 'Buyer', 'Seller', or an empty string (indicating to use both) to the stored procedure. Algorithm needed: ``` SELECT blah blah blah FROM multiple joins, left outer joins blah WHERE (1st clause) AND (2nd clause) AND (3rd clause) AND (4th clause) AND (5th clause) AND (6th clause) AND (7th clause) AND (need to check paramater passed in and fill in this and accordingly.... Basically in JS terms: if (parameter ='buyer'){ B.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%' } else if (parameter = 'seller'){ S.S_FNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%' } else{ B.S_FNAME LIKE '%' + @Name + '%' OR S.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%' } ) ``` My attempt was as follows, where @CliType is the parameter passed in: ``` AND (CASE WHEN @CliType = 'Buyer' THEN B.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%' WHEN @CliType = 'Seller' THEN S.S_FNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%' WHEN @CliType = '' THEN B.S_FNAME LIKE '%' + @Name + '%' OR S.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%' END) ``` But this gives me error lines under the first LIKE and the second WHEN. I'm using SQL Server 2008 and don't have much experience with it. Any help appreciated.
Please try this; I hope it's useful. ``` DECLARE @CliType VARCHAR(100); DECLARE @Name VARCHAR(100); DECLARE @TableBuyer TABLE (ID INT, S_FNAME VARCHAR(100), S_SNAME VARCHAR(100)); DECLARE @TableSeller TABLE (ID INT, S_FNAME VARCHAR(100), S_SNAME VARCHAR(100)); --Assign your input parameters here SET @CliType = 'Buyer' --NULL SET @Name = 'Daniella' INSERT INTO @TableBuyer SELECT '1','Bryan', 'Greenberg' UNION ALL SELECT '2','Channing', 'Tatum' UNION ALL SELECT '3','Paul', 'William' UNION ALL SELECT '4','Eric', 'Bana' UNION ALL SELECT '5','James', 'Lafferty' UNION ALL SELECT '6','Wentworth', 'Miller' INSERT INTO @TableSeller SELECT '1','Dianna', 'Agron' UNION ALL SELECT '2','Malin', 'Akerman' UNION ALL SELECT '3','Christina', 'Aguilera' UNION ALL SELECT '4','Jessica', 'Alba' UNION ALL SELECT '5','Krista', 'Allen' UNION ALL SELECT '6','Daniella', 'Alonso' SELECT b.ID,b.S_FNAME,b.S_SNAME,s.ID,s.S_FNAME,s.S_SNAME FROM @TableBuyer b JOIN @TableSeller s ON b.ID=s.ID WHERE (@CliType = 'Buyer' AND (B.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%')) OR (@CliType = 'Seller' AND (S.S_FNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%')) OR (ISNULL(@CliType, '') = '' AND (B.S_FNAME LIKE '%' + @Name + '%' OR S.S_FNAME LIKE '%' + @Name + '%' OR B.S_SNAME LIKE '%' + @Name + '%' OR S.S_SNAME LIKE '%' + @Name + '%')); ```
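The parameter-gated OR pattern — one static query whose branches are switched on by the parameter value — can be sketched outside SQL Server with Python's `sqlite3` module. The schema below is a made-up simplification (one table with a role column instead of two joined tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (role TEXT, fname TEXT, sname TEXT);
INSERT INTO people VALUES
  ('buyer',  'Bryan',    'Greenberg'),
  ('seller', 'Daniella', 'Alonso');
""")

def search(cli_type, name):
    # One static query; each OR branch is gated by the parameter value,
    # mirroring the (@CliType = '...' AND (...)) shape of the answer.
    sql = """
        SELECT fname FROM people
        WHERE (:t = 'Buyer'  AND role = 'buyer'  AND (fname LIKE :n OR sname LIKE :n))
           OR (:t = 'Seller' AND role = 'seller' AND (fname LIKE :n OR sname LIKE :n))
           OR (:t = ''       AND (fname LIKE :n OR sname LIKE :n))
    """
    return [r[0] for r in conn.execute(sql, {"t": cli_type, "n": f"%{name}%"})]
```

Passing 'Buyer' restricts matching to the buyer branch, 'Seller' to the seller branch, and an empty string searches both.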
You can parse the parameter and build a string variable that you plug into your SELECT statement: ``` IF @CliType = 'Buyer' SET @WhereVariable = 'B.S_FNAME LIKE ''%'' + @Name + ''%'' OR ...' ELSE IF @CliType = 'Seller' SET @WhereVariable = ... ``` Then just plug that into a string that ends up being your full query, and execute it with [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms188001.aspx)
dynamic where clause in one 'AND' of stored procedure
[ "", "sql", "sql-server", "stored-procedures", "if-statement", "case", "" ]
I have a table `Dept` ``` DEPTNO DNAME LOC 10 ACCOUNTING NEW YORK 20 RESEARCH DALLAS 30 SALES CHICAGO 40 OPERATIONS BOSTON ``` and another table `Emp` ``` EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO 7369 SMITH CLERK 7902 12/17/1980 800 NULL 20 7499 ALLEN SALESMAN 7698 2/20/1981 1600 300 30 7521 WARD SALESMAN 7698 2/22/1981 1250 500 30 7566 JONES MANAGER 7839 4/2/1981 2975 NULL 20 7654 MARTIN SALESMAN 7698 9/28/1981 1250 1400 30 7698 BLAKE MANAGER 7839 5/1/1981 2850 NULL 30 7782 CLARK MANAGER 7839 6/9/1981 2450 NULL 10 7788 SCOTT ANALYST 7566 12/9/1982 3000 NULL 20 7839 KING PRESIDENT NULL 11/17/1981 5000 NULL 10 7844 TURNER SALESMAN 7698 9/8/1981 1500 0 30 7876 ADAMS CLERK 7788 1/12/1983 1100 NULL 20 7900 JAMES CLERK 7698 12/3/1981 950 NULL 30 7902 FORD ANALYST 7566 12/3/1981 3000 NULL 20 7934 MILLER CLERK 7782 1/23/1982 1300 NULL 10 ``` My question: list ALL the department names and their employee counts. The count should include only those employees hired after 1981. The result should be like this: ``` DNAME EMPCOUNT ACCOUNTING 1 OPERATIONS 0 RESEARCH 2 SALES 0 ```
``` SELECT dname, SUM(CASE WHEN emp.hiredate >= '1982-01-01' THEN 1 ELSE 0 END) AS empcount FROM dept LEFT JOIN emp ON dept.deptno = emp.deptno GROUP BY dname ``` Note the join has to be on `deptno` (not `empno`), and "hired after 1981" means on or after 1982-01-01; the `LEFT JOIN` keeps departments with no matching employees at a count of 0.
It involves a bit of trickery to get the departments with 0 matching employees: ``` SELECT Dept.DNAME, COALESCE(t.cnt, 0) AS count FROM Dept LEFT JOIN ( SELECT deptno, count(*) AS cnt FROM Emp WHERE HIREDATE >= '1982-01-01' GROUP BY deptno) as t ON t.deptno = Dept.deptno ``` (Note: the inner count needs the `cnt` alias for the outer `COALESCE(t.cnt, 0)` to work, and "after 1981" means a hire date on or after 1982-01-01.)
Count Query with joins needed
[ "", "sql", "" ]
I'm moving from MySQL to PostgreSQL and have hit a wall with user privileges. I am used to assigning a user all privileges to all tables of a database with the following command: ``` # MySQL grant all privileges on mydatabase.* to 'myuser'@'localhost' identified by 'mypassword'; ``` It appears to me that the PostgreSQL 9.x solution involves assigning privileges to a "schema", but the effort required of me to figure out exactly what SQL to issue is proving excessive. I know that a few more hours of research will yield an answer, but I think everyone moving from MySQL to PostgreSQL could benefit from having at least one page on the web that provides a simple and complete recipe. This is the only command I have ever needed to issue for users. I'd rather not have to issue a command for every new table. I don't know what scenarios have to be handled differently in PostgreSQL, so I'll list some of the scenarios that I have typically had to handle in the past. Assume that we only mean to modify privileges to a single database that has already been created. > (1a) Not all of the tables have been created yet, or (1b) the tables have already been created. > > (2a) The user has not yet been created, or (2b) the user has already been created. > > (3a) Privileges have not yet been assigned to the user, or (3b) privileges were previously assigned to the user. > > (4a) The user only needs to insert, update, select, and delete rows, or (4b) the user also needs to be able to create and delete tables. I have seen answers that grant all privileges to all databases, but that's not what I want here. Please, I am looking for a simple recipe, although I wouldn't mind an explanation as well. I don't want to grant rights to all users and all databases, as seems to be the conventional shortcut, because that approach compromises all databases when any one user is compromised. I host multiple database clients and assign each client a different login. 
It looks like I also need the `USAGE` privilege to get the increasing values of a `serial` column, but I have to grant it on some sort of sequence. My problem got more complex.
### Basic concept in Postgres Roles are global objects that can access all databases in a db cluster - given the required privileges. A *cluster* holds many *databases*, which hold many *schemas*. Schemas (even with the same name) in different DBs are unrelated. Granting privileges for a schema only applies to this particular schema in the current DB (the current DB at the time of granting). Every database starts with a schema `public` by default. That's a convention, and many settings start with it. Other than that, the schema `public` is just a schema like any other. Coming from MySQL, you may want to start with a single schema `public`, effectively ignoring the schema layer completely. I am using dozens of schema per database regularly. Schemas are a bit (but not completely) like directories in the file system. Once you make use of multiple schemas, be sure to understand `search_path` setting: * [How does the search\_path influence identifier resolution and the "current schema"](https://stackoverflow.com/questions/9067335/how-to-create-table-inside-specific-schema-by-default-in-postgres/9067777#9067777) ### Default privileges [Per documentation on `GRANT`:](https://www.postgresql.org/docs/current/sql-grant.html) > PostgreSQL grants default privileges on some types of objects to > `PUBLIC`. No privileges are granted to `PUBLIC` by default on tables, > columns, schemas or tablespaces. For other types, the default > privileges granted to `PUBLIC` are as follows: `CONNECT` and `CREATE TEMP TABLE` > for databases; `EXECUTE` privilege for functions; and `USAGE` privilege for languages. 
All of these defaults can be changed with [`ALTER DEFAULT PRIVILEGES`](https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html): * [Grant all on a specific schema in the db to a group role in PostgreSQL](https://stackoverflow.com/questions/10352695/grant-all-on-a-specific-schema-in-the-db-to-a-group-role-in-postgresql/10353730#10353730) ### Group role [Like @Craig commented](https://stackoverflow.com/questions/24918367/how-do-i-grant-privileges-to-a-particular-database-in-postgresql/24923877?noredirect=1#comment38727672_24923877), it's best to `GRANT` privileges to a group role and then make a specific user member of that role (`GRANT` the group role to the user role). This way it is simpler to deal out and revoke bundles of privileges needed for certain tasks. A group role is just another role without login. Add a login to transform it into a user role. More: * [Why did PostgreSQL merge users and groups into roles?](https://stackoverflow.com/questions/8485387/why-did-postgresql-merge-users-and-groups-into-roles/8487886#8487886) ### Predefined roles **Update:** Postgres 14 or later adds the new [predefined roles](https://www.postgresql.org/docs/14/predefined-roles.html) (formally "default roles") `pg_read_all_data` and `pg_write_all_data` to simplify some of the below. See: * [Grant access to all tables of a database](https://dba.stackexchange.com/a/91974/3684) ### Recipe Say, we have a new database `mydb`, a group `mygrp`, and a user `myusr` ... 
While connected to the database in question as superuser (`postgres` for instance): ``` REVOKE ALL ON DATABASE mydb FROM public; -- shut out the general public GRANT CONNECT ON DATABASE mydb TO mygrp; -- since we revoked from public GRANT USAGE ON SCHEMA public TO mygrp; ``` To assign *"a user all privileges to all tables"* like you wrote (I might be more restrictive): ``` GRANT ALL ON ALL TABLES IN SCHEMA public TO mygrp; GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO mygrp; -- don't forget those ``` To set default privileges for future objects, run for *every role* that creates objects in this schema: ``` ALTER DEFAULT PRIVILEGES FOR ROLE myusr IN SCHEMA public GRANT ALL ON TABLES TO mygrp; ALTER DEFAULT PRIVILEGES FOR ROLE myusr IN SCHEMA public GRANT ALL ON SEQUENCES TO mygrp; -- more roles? ``` Now, grant the group to the user: ``` GRANT mygrp TO myusr; ``` Related answer: * [PostgreSQL - DB user should only be allowed to call functions](https://stackoverflow.com/questions/15867175/postgresql-db-user-should-only-be-allowed-to-call-functions/15868268#15868268) ### Alternative (non-standard) setting Coming from MySQL, and since you want to keep privileges on databases separated, you might like this non-standard setting `db_user_namespace`. [Per documentation:](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-DB-USER-NAMESPACE) > This parameter enables per-database user names. It is off by default. Read the manual carefully. I don't use this setting. It does not void the above.
> Maybe you could give me an example that grants a specific user > select/insert/update/delete on all tables -- those existing and not > yet created -- of a specific database? What you call a database in MySQL more closely resembles a PostgreSQL schema than a PostgreSQL database. Connect to database "test" as a superuser. Here that's ``` $ psql -U postgres test ``` [Change the default privileges](http://www.postgresql.org/docs/current/static/sql-alterdefaultprivileges.html) for the existing user "tester". ``` ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT INSERT, SELECT, UPDATE, DELETE ON TABLES TO tester; ``` Changing default privileges has no effect on existing tables. That's by design. For existing tables, use standard GRANT and REVOKE syntax. You can't assign privileges for a user that doesn't exist.
Grant privileges for a particular database in PostgreSQL
[ "", "sql", "database", "postgresql", "privileges", "" ]
I am trying this query to get unique rows based on ID\_B, but I also want, for each ID\_B, the row where DueDate is nearest. This is what I am trying:

```
SELECT distinct ID_B, ID_A, Own_ID, DueDate
FROM Table1
WHERE (ID_A = 6155)
```

This is the result I am getting:

![enter image description here](https://i.stack.imgur.com/ntV8z.png)

***I want unique ID\_B with the earliest Due Date.*** For example, in the result pane I only want ONE record for ID\_B = 1, with DueDate = 2014-07-21 10:54:02.027.
Try this: ``` ;with cte as (select id_b, id_a, own_id, duedate, row_number() over (partition by isnull(id_b,0) order by duedate) rn from yourtable where id_a = 6155) select id_b,id_a,own_id,duedate from cte where rn = 1 ``` [Demo](http://rextester.com/DHPCEK71622)
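As a cross-check of this row-numbering pattern, here is a small sketch against SQLite (which supports window functions from version 3.25 on). The table layout mirrors the question, but the rows are invented for illustration:

```python
import sqlite3

# Dedup per ID_B, keeping the row with the earliest DueDate,
# using the same ROW_NUMBER() OVER (PARTITION BY ...) idea.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table1 (ID_B INTEGER, ID_A INTEGER, Own_ID INTEGER, DueDate TEXT)")
con.executemany(
    "INSERT INTO Table1 VALUES (?, ?, ?, ?)",
    [
        (1, 6155, 10, "2014-07-22 09:00:00"),
        (1, 6155, 11, "2014-07-21 10:54:02"),  # earliest for ID_B = 1
        (2, 6155, 12, "2014-07-23 08:00:00"),
    ],
)
rows = con.execute("""
    WITH cte AS (
        SELECT ID_B, ID_A, Own_ID, DueDate,
               ROW_NUMBER() OVER (PARTITION BY IFNULL(ID_B, 0)
                                  ORDER BY DueDate) AS rn
        FROM Table1
        WHERE ID_A = 6155
    )
    SELECT ID_B, DueDate FROM cte WHERE rn = 1
    ORDER BY ID_B
""").fetchall()
print(rows)  # one row per ID_B, each with its earliest DueDate
```

The `PARTITION BY` restarts the numbering per `ID_B`, so filtering on `rn = 1` keeps exactly one earliest-due row per group.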
You can write as: ``` ;with CTE as ( Select ID_B, ID_A, DueDate, Row_number() over ( partition by ID_B order by [DueDate] asc) as rownum from Table1 ) select ID_B, ID_A,DueDate from CTE C where C.rownum = 1 and ID_A = 6155 and ISNULL(ID_B,0)<>0 ``` [Check Demo here..](http://rextester.com/MHSCTR53582)
How to get unique record based on due date
[ "", "sql", "sql-server-2012", "" ]
I need help building a query with the structure below. I have tried using a for loop, but it just prints 1 2 3 4 5 6 7 8 9. For example, if the value of N is 9, the output should look like this.

**EXAMPLE Output**:

```
0
0 1
0 1 2
0 1 2 3
0 1 2 3 4
0 1 2 3 4 5
0 1 2 3 4 5 6
0 1 2 3 4 5 6 7
0 1 2 3 4 5 6 7 8
0 1 2 3 4 5 6 7 8 9
0 1 2 3 4 5 6 7 8
0 1 2 3 4 5 6 7
0 1 2 3 4 5 6
0 1 2 3 4 5
0 1 2 3 4
0 1 2 3
0 1 2
0 1
0
```
This select query gives the exact result:

```
WITH CTE1 AS (SELECT 9 AS COL FROM DUAL)
    ,CTE2 AS (
        SELECT LEVEL - 1 AS A,
               SYS_CONNECT_BY_PATH(LEVEL - 1, ' ') AS B,
               COL
        FROM DUAL, CTE1
        CONNECT BY LEVEL <= COL + 1)
SELECT B
FROM (SELECT A, B FROM CTE2
      UNION ALL
      SELECT 2 * COL - A, B FROM CTE2 WHERE A != COL)
ORDER BY A;
```

---

**OUTPUT:**

```
0
0 1
0 1 2
0 1 2 3
0 1 2 3 4
0 1 2 3 4 5
0 1 2 3 4 5 6
0 1 2 3 4 5 6 7
0 1 2 3 4 5 6 7 8
0 1 2 3 4 5 6 7 8 9
0 1 2 3 4 5 6 7 8
0 1 2 3 4 5 6 7
0 1 2 3 4 5 6
0 1 2 3 4 5
0 1 2 3 4
0 1 2 3
0 1 2
0 1
0
```
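The expected shape is easy to sanity-check outside the database; a short Python sketch (not part of the original answer) that builds the same mirrored pyramid for any N:

```python
def pyramid(n):
    # Line widths rise 0..n and then fall back to 0, mirroring the SQL output.
    ks = list(range(n + 1)) + list(range(n - 1, -1, -1))
    return [" ".join(str(i) for i in range(k + 1)) for k in ks]

for line in pyramid(9):
    print(line)
```

For N = 9 this prints 19 lines, with `0 1 2 3 4 5 6 7 8 9` as the middle one.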
To do this just in SQL - albeit with a bind variable if you want to be able to specify `n` - you need to combine start with the `connect by` and build from there. This is one way, though I'm pretty sure it can be done without the `union`: ``` with t as ( select level as rn, level - 1 as val from dual connect by level <= :n + 1 ) select t1.rn as rn, listagg(t2.val, ' ') within group (order by t2.val) as answer from t t1 join t t2 on t2.val <= t1.val group by t1.rn, t1.val union all select (2 * (:n + 1)) - t1.rn, listagg(t2.val, ' ') within group (order by t2.val) as answer from t t1 join t t2 on t2.val <= t1.val where t1.rn <= :n group by t1.rn, t1.val order by rn; ``` The CTE generates the numbers 0 to n. The two halves of the union create the mirror halves of the output; the second has the `rn <= :n` filter to prevent the 'middle' line being duplicated. With: ``` var n number; exec :n := 9; ``` This gives: ``` RN ANSWER ------ ---------------------------------------- 1 0 2 0 1 3 0 1 2 4 0 1 2 3 5 0 1 2 3 4 6 0 1 2 3 4 5 7 0 1 2 3 4 5 6 8 0 1 2 3 4 5 6 7 9 0 1 2 3 4 5 6 7 8 10 0 1 2 3 4 5 6 7 8 9 11 0 1 2 3 4 5 6 7 8 12 0 1 2 3 4 5 6 7 13 0 1 2 3 4 5 6 14 0 1 2 3 4 5 15 0 1 2 3 4 16 0 1 2 3 17 0 1 2 18 0 1 19 0 ``` Or for 6: exec :n := 6; ``` RN ANSWER ------ ---------------------------------------- 1 0 2 0 1 3 0 1 2 4 0 1 2 3 5 0 1 2 3 4 6 0 1 2 3 4 5 7 0 1 2 3 4 5 6 8 0 1 2 3 4 5 9 0 1 2 3 4 10 0 1 2 3 11 0 1 2 12 0 1 13 0 ``` You don't really want to see the `rn` but you can remove that by putting this in a subquery.
SQL or PL/SQL queries to Print sequence of given N numbers
[ "", "sql", "oracle", "plsql", "" ]
I have a table with some basic address data:

![table structure](https://i.stack.imgur.com/RsK2l.png)

If Street and Postcode/Town are filled, I want Street and Postcode/Town separated by a comma:

`select Concat(coalesce(Street,''),", ", coalesce(Postcode,'')," ",coalesce(Town,'')) from adresses`

If Street is not available, I would like to have just the postcode and town separated by a blank. If even the postcode is missing, I want just the name (if just the postcode is missing, I'd like to have "Street, Town"). How can I design the query so that it considers what is available and where the comma and blank need to be put?
The easiest way to do this is with the built-in `concat_ws()`:

```
select concat_ws(', ', Street, Postcode, Town)
from adresses;
```

If any of the arguments are `NULL` (except the separator), then `concat_ws()` simply skips that string and the associated separator.
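For engines without `concat_ws()`, its NULL-skipping rule is easy to model; a Python sketch of the semantics, with `None` standing in for SQL `NULL` (sample values invented):

```python
def concat_ws(sep, *parts):
    # Skip NULL (None) parts together with their separator,
    # which is what concat_ws() does; empty strings are kept.
    return sep.join(p for p in parts if p is not None)

print(concat_ws(", ", "Main St 1", "12345", "Springfield"))  # full address
print(concat_ws(", ", None, "12345", "Springfield"))         # street missing
print(concat_ws(", ", None, None, "Springfield"))            # only town
```

Note that the answer puts a comma between all parts; the question's blank between postcode and town would need a nested call or a different separator.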
``` select (case when street is not null and postcode is not null and town is not null then concat(street,', ',postcode,', ',town) when street is null and postcode is not null and town is not null then concat(postcode,' ',town) when street is null and postcode is null and town is not null then town else 'N/A' end) myAddress from address ``` You may have to check for the empty string depending on your data i.e. `when street is not null and street <> ''`
Intelligent coalesce
[ "", "mysql", "sql", "coalesce", "" ]
There are situations where ActiveRecord sets the alias table name if there are multiple joins with the same table. I'm stuck in a situation where these joins contain scopes (using 'merge'). I have a many-to-many relationship: > Models table\_name: `users` > > Second models table\_name: `posts` > > Join table name: `access_levels` A Post has many users through access\_levels and vice versa. Both, the User model and the Post model share the same relation: `has_many :access_levels, -> { merge(AccessLevel.valid) }` The scope inside of the AccessLevel model looks like this: ``` # v1 scope :valid, -> { where("(valid_from IS NULL OR valid_from < :now) AND (valid_until IS NULL OR valid_until > :now)", :now => Time.zone.now) } # v2 # scope :valid, -> { # where("(#{table_name}.valid_from IS NULL OR #{table_name}.valid_from < :now) AND (#{table_name}.valid_until IS NULL OR #{table_name}.valid_until > :now)", :now => Time.zone.now) # } ``` I would like to call sth like this: ``` Post.joins(:access_levels).joins(:users).where (...) ``` ActiveRecord creates an alias for the second join ('access\_levels\_users'). I want to reference this table name inside of the 'valid' scope of the AccessLevel model. V1 obviously generates a `PG::AmbiguousColumn`-Error. V2 results in prefixing both conditions with `access_levels.`, which is semantically wrong. This is how I generate the query: (simplified) ``` # inside of a policy scope = Post. joins(:access_levels). 
where("access_levels.level" => 1, "access_levels.user_id" => current_user.id) # inside of my controller scope.joins(:users).select([ Post.arel_table[Arel.star], "hstore(array_agg(users.id::text), array_agg(users.email::text)) user_names" ]).distinct.group("posts.id") ``` The generated query looks like this (using the `valid` scope v2 from above): ``` SELECT "posts".*, hstore(array_agg(users.id::text), array_agg(users.email::text)) user_names FROM "posts" INNER JOIN "access_levels" ON "access_levels"."post_id" = "posts"."id" AND (("access_levels"."valid_from" IS NULL OR "access_levels"."valid_from" < '2014-07-24 05:38:09.274104') AND ("access_levels"."valid_until" IS NULL OR "access_levels"."valid_until" > '2014-07-24 05:38:09.274132')) INNER JOIN "users" ON "users"."id" = "access_levels"."user_id" INNER JOIN "access_levels" "access_levels_posts" ON "access_levels_posts"."post_id" = "posts"."id" AND (("access_levels"."valid_from" IS NULL OR "access_levels"."valid_from" < '2014-07-24 05:38:09.274675') AND ("access_levels"."valid_until" IS NULL OR "access_levels"."valid_until" > '2014-07-24 05:38:09.274688')) WHERE "posts"."deleted_at" IS NULL AND "access_levels"."level" = 4 AND "access_levels"."user_id" = 1 GROUP BY posts.id ``` ActiveRecord sets a propriate alias 'access\_levels\_posts' for the second join of the access\_levels table. The problem is that the merged `valid`-scope prefixes the column with 'access\_levels' instead of 'access\_levels\_posts'. I also tried to use arel to generate the scope: ``` # v3 scope :valid, -> { where arel_table[:valid_from].eq(nil).or(arel_table[:valid_from].lt(Time.zone.now)).and( arel_table[:valid_until].eq(nil).or(arel_table[:valid_until].gt(Time.zone.now)) ) } ``` The resulting query remains the same.
I've been able to solve my own problem in the meantime. I'll post my solution to help others that are having similar issues. Preamble: It's a long way to the promise land ;) I'll keep the setup as short as possible: ``` # # Setup # class Post < ActiveRecord::Base has_many :access_levels, -> { merge(AccessLevel.valid) } has_many :users, :through => :access_levels end class AccessLevel < ActiveRecord::Base belongs_to :post belongs_to :user scope :valid, -> { where arel_table[:valid_from].eq(nil).or(arel_table[:valid_from].lt(Time.zone.now)).and( arel_table[:valid_until].eq(nil).or(arel_table[:valid_until].gt(Time.zone.now)) ) } enum :level => [:publisher, :subscriber] end class User < ActiveRecord::Base has_many :access_levels, -> { merge(AccessLevel.valid) } has_many :users, :through => :access_levels end ``` The original goal was to call something like this (in order to add further conditions etc.): ``` Post.joins(:users).joins(:access_levels) ``` That results in an semantically wrong query: ``` SELECT "posts".* FROM "posts" INNER JOIN "access_levels" ON "access_levels"."post_id" = "posts"."id" AND (("access_levels"."valid_from" IS NULL OR "access_levels"."valid_from" < '2014-09-15 20:42:46.835548') AND ("access_levels"."valid_until" IS NULL OR "access_levels"."valid_until" > '2014-09-15 20:42:46.835688')) INNER JOIN "users" ON "users"."id" = "access_levels"."user_id" INNER JOIN "access_levels" "access_levels_posts" ON "access_levels_posts"."post_id" = "posts"."id" AND (("access_levels"."valid_from" IS NULL OR "access_levels"."valid_from" < '2014-09-15 20:42:46.836090') AND ("access_levels"."valid_until" IS NULL OR "access_levels"."valid_until" > '2014-09-15 20:42:46.836163')) ``` The second join uses an alias - but the condition is not using this alias. Arel to the rescue! I've build all of the following joins with bare arel instead of trusting ActiveRecord. Unfortunately it seems that combining both is not always working as expected. 
But at least it's working at all that way. I'm using outer joins in this example so I'd have to build them by myself anyway. In addition all those queries are stored inside of policies (using Pundit). So they are easily testable and there's no fat controller or any redundancy. So I'm fine with some extra code. ``` # # Our starting point ;) # scope = Post # # Rebuild `scope.joins(:users)` or `scope.joins(:access_levels => :user)` # No magic here. # join = Post.arel_table.join(AccessLevel.arel_table, Arel::Nodes::OuterJoin).on( Post.arel_table[:id].eq(AccessLevel.arel_table[:post_id]). and(AccessLevel.valid.where_values) ).join_sources scope = scope.joins(join) join = AccessLevel.arel_table.join(User.arel_table, Arel::Nodes::OuterJoin).on( AccessLevel.arel_table[:user_id].eq(User.arel_table[:id]) ).join_sources scope = scope.joins(join) # # Now let's join the access_levels table for a second time while reusing the AccessLevel.valid scope. # To accomplish that, we temporarily swap AccessLevel.table_name # table_alias = 'al' # This will be the alias temporary_table_name = AccessLevel.table_name # We want to restore the original table_name later AccessLevel.table_name = table_alias # Set the alias as the table_name valid_clause = AccessLevel.valid.where_values # Store the condition with our temporarily table_name AccessLevel.table_name = temporary_table_name # Restore the original table_name # # We're now able to use the table_alias combined with our valid_clause # join = Post.arel_table.join(AccessLevel.arel_table.alias(table_alias), Arel::Nodes::OuterJoin).on( Post.arel_table[:id].eq(AccessLevel.arel_table.alias(table_alias)[:post_id]). 
and(valid_clause) ).join_sources scope = scope.joins(join) ``` After all the blood, sweat and tears, here's our resulting query: ``` SELECT "posts".* FROM "posts" LEFT OUTER JOIN "access_levels" ON "posts"."id" = "access_levels"."post_id" AND ("access_levels"."valid_from" IS NULL OR "access_levels"."valid_from" < '2014-09-15 20:35:34.420077') AND ("access_levels"."valid_until" IS NULL OR "access_levels"."valid_until" > '2014-09-15 20:35:34.420189') LEFT OUTER JOIN "users" ON "access_levels"."user_id" = "users"."id" LEFT OUTER JOIN "access_levels" "al" ON "posts"."id" = "al"."post_id" AND ("al"."valid_from" IS NULL OR "al"."valid_from" < '2014-09-15 20:35:41.678492') AND ("al"."valid_until" IS NULL OR "al"."valid_until" > '2014-09-15 20:35:41.678603') ``` All conditions are now using a proper alias!
After looking closer at this problem on a [similar question here](https://stackoverflow.com/a/28821450/488195), I came up with a simpler and cleaner (to my eyes) solution to this question. I'm pasting here the relevant bits of my answer of the other question for completeness, along with your scope. The point was to find a way to access the current `arel_table` object, with its `table_aliases` if they are being used, inside the scope at the moment of its execution. With that table, you will be able to know if the scope is being used within a `JOIN` that has the table name aliased (multiple joins on the same table), or if on the other hand the scope has no alias for the table name. ``` # based on your v2 scope :valid, -> { where("(#{current_table_from_scope}.valid_from IS NULL OR #{current_table_from_scope}.valid_from < :now) AND (#{current_table_from_scope}.valid_until IS NULL OR #{current_table_from_scope}.valid_until > :now)", :now => Time.zone.now) } def self.current_table_from_scope current_table = current_scope.arel.source.left case current_table when Arel::Table current_table.name when Arel::Nodes::TableAlias current_table.right else fail end end ``` I'm using [`current_scope`](http://apidock.com/rails/ActiveRecord/Scoping/ClassMethods/current_scope) as the base object to look for the arel table, instead of the prior attempts of using `self.class.arel_table` or even `relation.arel_table`. I'm calling `source` on that object to obtain an [`Arel::SelectManager`](http://www.rubydoc.info/github/rails/arel/Arel/SelectManager) that in turn will give you the current table on the `#left`. At this moment there are two options: that you have there an `Arel::Table` (no alias, table name is on `#name`) or that you have an `Arel::Nodes::TableAlias` with the alias on its `#right`. 
If you are interested, here are some references I used down the road: * A [similar question on SO](https://stackoverflow.com/questions/24921729/join-the-same-table-twice-with-conditions), answered with a ton of code, that you could use instead of your beautiful and concise Ability. * This [Rails issue](https://github.com/rails/rails/issues/12770) and this [other one](https://github.com/rails/rails/issues/12982).
Join the same table twice with conditions
[ "", "sql", "ruby-on-rails-4", "rails-activerecord", "arel", "" ]
I am looking for a solution for how to create a SQL UDF with optional params. Pseudocode for a function where Param1 is necessary and Param2 may be filled (but is not needed):

```
dbo.myFnc(Param1 int [, Param2 int])
```

Is there a way to create a function like this? For an existing built-in example, see the [STR function](http://msdn.microsoft.com/en-us/library/ms189527.aspx):

```
STR ( float_expression [ , length [ , decimal ] ] )
```
You can define default parameters in the create statement (`= default`):

```
--Transact-SQL Inline Table-Valued Function Syntax
CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
    [ = default ] [ READONLY ] } [ ,...n ] ] )
RETURNS TABLE
    [ WITH <function_option> [ ,...n ] ]
    [ AS ]
    RETURN [ ( ] select_stmt [ ) ]
[ ; ]
```

[Source MSDN](http://msdn.microsoft.com/de-ch/library/ms186755.aspx)

So you can do something like:

```
CREATE FUNCTION dbo.myFnc(
    @param1 int,    -- necessary
    @param2 int = 5 -- 5 as default
)
```

But as shree.pat18 said, you need to call the optional function parameter with `default`. Like:

```
dbo.myFnc(5, default)
```
You need to pass all arguments to functions, unlike stored procedures. You can use the `default` keyword to indicate that default value is to be used for the parameter, rather than a user-specified value. So you can create your function like this: ``` CREATE FUNCTION dbo.myFnc ( @param1 int, @param2 int) ... ``` and then call the function like this: ``` dbo.myFnc(Param1, default) ```
Optional parameters in SQL UDF without DEFAULT keyword
[ "", "sql", "sql-server", "function", "" ]
I have a stored procedure which uses dynamic sorting. 2 parameters determine the sorting - the column: `@SortIndex` and the sort direction: `@SortDirection`. Relevant code:

```
...
ROW_NUMBER() OVER (
    ORDER BY -- string order by
        CASE @SortDirection WHEN 'ASC' THEN
            CASE @SortIndex
                WHEN 1 THEN SKU
                WHEN 2 THEN BrandName
                WHEN 3 THEN ItemName
            END
        END ASC,
        CASE @SortDirection WHEN 'DESC' THEN
            CASE @SortIndex
                WHEN 1 THEN SKU
                WHEN 2 THEN BrandName
                WHEN 3 THEN ItemName
            END
        END DESC,
```

This sorts on single columns, but I want to sort on `BrandName ASC, ItemName ASC` when `@SortIndex` is 2.
```
ROW_NUMBER() OVER (
    ORDER BY -- string order by
        CASE @SortDirection WHEN 'ASC' THEN
            CASE @SortIndex
                WHEN 1 THEN SKU
                WHEN 2 THEN BrandName + ',' + ItemName
                WHEN 3 THEN ItemName
            END
        END ASC,
        CASE @SortDirection WHEN 'DESC' THEN
            CASE @SortIndex
                WHEN 1 THEN SKU
                WHEN 2 THEN BrandName + ',' + ItemName
                WHEN 3 THEN ItemName
            END
        END DESC,
```

Use `BrandName + ',' + ItemName` in the WHEN 2 clause to have both fields used in the sort.
If you cannot use dynamic SQL, the only way is to list all the possible combinations for `ASC` and `DESC`. For example:

```
ORDER By
    CASE WHEN @SortIndex = '1' AND @SortDirection = 'ASC' THEN SKU END,
    CASE WHEN @SortIndex = '1' AND @SortDirection = 'DESC' THEN SKU END DESC,
    CASE WHEN @SortIndex = '2' AND @SortDirection = 'ASC' THEN BrandName END,
    CASE WHEN @SortIndex = '2' AND @SortDirection = 'DESC' THEN BrandName END DESC,
    --and so on...
```
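As a rough cross-check of the per-combination `CASE` idea, here it is run on SQLite, with an invented table and host parameters standing in for `@SortIndex` / `@SortDirection`. For index 2 it sorts on BrandName and then ItemName, which is the multi-column behaviour the question asks for:

```python
import sqlite3

# One CASE per column per direction; the cases that do not apply
# evaluate to NULL for every row and therefore have no effect.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (SKU TEXT, BrandName TEXT, ItemName TEXT)")
con.executemany("INSERT INTO items VALUES (?, ?, ?)",
                [("s3", "B", "x"), ("s1", "A", "b"), ("s2", "A", "a")])
sort_index, sort_dir = 2, "ASC"
rows = con.execute("""
    SELECT SKU FROM items
    ORDER BY
      CASE WHEN :idx = 2 AND :dir = 'ASC' THEN BrandName END ASC,
      CASE WHEN :idx = 2 AND :dir = 'ASC' THEN ItemName END ASC,
      CASE WHEN :idx = 2 AND :dir = 'DESC' THEN BrandName END DESC,
      CASE WHEN :idx = 2 AND :dir = 'DESC' THEN ItemName END DESC
""", {"idx": sort_index, "dir": sort_dir}).fetchall()
print(rows)  # BrandName ASC, ItemName ASC for @SortIndex = 2
```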
SQL Server dynamic sorting on multiple columns
[ "", "sql", "sql-server", "sorting", "" ]
Could someone kindly point me in the right direction, or give me any hints regarding the following task? I'd like to list all the distinct values of column A from a MySQL table that don't have a specific value in column B - I mean that none of the rows sharing the same A value have this specific value in B. Taking the following table (let this specific value be 1):

```
column A | column B
----------------------
apple    |
apple    |
apple    | 1
banana   | anything
banana   |
lemon    |
lemon    | 1
orange   |
```

I'd like to get the following result:

```
banana
orange
```

Thanks.
This might help you: ``` SELECT DISTINCT A FROM MY_TABLE WHERE A NOT IN (SELECT DISTINCT A FROM MY_TABLE WHERE B = 1) ```
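The same pattern reproduced on SQLite with the question's sample data (a quick sketch, not part of the original answer):

```python
import sqlite3

# NOT IN against the set of A values that have B = 1 somewhere.
# B is nullable; the inner query only collects rows where B = '1'.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (a TEXT, b TEXT)")
con.executemany("INSERT INTO my_table VALUES (?, ?)", [
    ("apple", None), ("apple", None), ("apple", "1"),
    ("banana", "anything"), ("banana", None),
    ("lemon", None), ("lemon", "1"),
    ("orange", None),
])
rows = con.execute("""
    SELECT DISTINCT a FROM my_table
    WHERE a NOT IN (SELECT DISTINCT a FROM my_table WHERE b = '1')
    ORDER BY a
""").fetchall()
print(rows)  # banana and orange: none of their rows has b = 1
```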
Since there are null values, I have also added an NVL condition on column B.

**ORACLE:**

```
SELECT DISTINCT COLUMN_A FROM MY_TABLE
WHERE COLUMN_A NOT IN
    (SELECT COLUMN_A FROM MY_TABLE WHERE nvl(COLUMN_B,'dummy') = '1');
```

**MYSQL:**

```
SELECT DISTINCT COLUMN_A FROM MY_TABLE
WHERE COLUMN_A NOT IN
    (SELECT COLUMN_A FROM MY_TABLE WHERE IFNULL(COLUMN_B,'dummy') = '1');
```
MySQL query all distinct from column A where none of them has a specific value in column B
[ "", "mysql", "sql", "" ]
I have a case where there are two date fields: if one is NULL, then take the other one. At the moment this is working fine with the below code:

```
select e.EMPLOY_REF, ISNULL(e.PROB_DOCS_SENT, ec.USR_FINALPROB)
from EMPLOYEE_TABLE e
join EMPLOYEE_USERCUST ec on ec.EMPLOY_REF = e.EMPLOY_REF
```

However, it could be the case that both are filled, and if they are, I need to take the latest date as being the valid one. How can I ensure it takes the latest date in the case that both dates are filled?
Try the following query. The advantage of this solution over a CASE WHEN solution is that you can easily extend it with more columns:

```
SELECT EMPLOY_REF, MAX(Date)
FROM (
    select e.EMPLOY_REF, e.PROB_DOCS_SENT AS Date
    from EMPLOYEE_TABLE e
    join EMPLOYEE_USERCUST ec on ec.EMPLOY_REF = e.EMPLOY_REF
    union
    select e.EMPLOY_REF, ec.USR_FINALPROB AS Date
    from EMPLOYEE_TABLE e
    join EMPLOYEE_USERCUST ec on ec.EMPLOY_REF = e.EMPLOY_REF
) tbl
GROUP BY EMPLOY_REF
```
You can use CASE-WHEN in order to handle all the possible cases. In the case where both the dates are NULL, the following query will take today's date as the value:

```
select e.EMPLOY_REF,
    ISNULL(e.PROB_DOCS_SENT, ec.USR_FINALPROB),
    case
        when e.PROB_DOCS_SENT is null and ec.USR_FINALPROB is null then getdate()
        when e.PROB_DOCS_SENT is null and ec.USR_FINALPROB is not null then ec.USR_FINALPROB
        when e.PROB_DOCS_SENT is not null and ec.USR_FINALPROB is null then e.PROB_DOCS_SENT
        when e.PROB_DOCS_SENT is not null and ec.USR_FINALPROB is not null then
            case
                when e.PROB_DOCS_SENT > ec.USR_FINALPROB then e.PROB_DOCS_SENT
                when e.PROB_DOCS_SENT < ec.USR_FINALPROB then ec.USR_FINALPROB
            end
    end
from EMPLOYEE_TABLE e
join EMPLOYEE_USERCUST ec on ec.EMPLOY_REF = e.EMPLOY_REF
```
SQL - ISNULL or if not latest date
[ "", "sql", "date", "isnull", "" ]
I am looking to get an SQL statement that will get me all data with the highest price and link\_id = 1: <http://sqlfiddle.com/#!2/26a13/2>

```
ID .. LINK_ID .. PRICE
10 .. 1       .. 100,000
20 .. 1       .. 150,000
30 .. 2       .. 150,000
```

The following returns the correct price/offer but doesn't bring across the correct id and link\_id. Is there a possible statement? Or do I need to extract the data separately?

```
SELECT id, p_id, MAX(offer) FROM offers WHERE p_id = 1
```

I also tried

```
SELECT * FROM offers WHERE p_id = 1 AND offer = MAX(offer)
```
If you just want the highest-price record, try this:

```
SELECT id, p_id, offer
FROM offers
WHERE p_id = 1
order by offer desc
limit 1
```
Try this. It is much faster than a subquery:

```
SELECT o1.*
FROM offers o1
LEFT JOIN offers o2 on o2.prop_id=o1.prop_id and o1.offer<o2.offer
where o2.id is null and o1.prop_id=2;
```

Example: <http://sqlfiddle.com/#!2/26a13/29>
Use SQL to get highest item with same id
[ "", "mysql", "sql", "" ]
Please advise me on the below issue. I am using the below query to fetch the value of a column:

```
Select OptionList = case when isnull(AS_CIS_Code,'') <> '' then AS_CIS_Code else '' end
from added_services
```

The AS\_CIS\_Code column is varchar(10) in the added\_services table. It contains values like 'AB', 'ABC', 'GHKIK', 'UYTIOPJ' and so on, which represent different codes. Now I have to select these codes after modifying the above query so that '\_' is appended after each character, like 'A\_B\_', 'A\_B\_C\_', 'G\_H\_K\_I\_K\_', 'U\_Y\_T\_I\_O\_P\_J\_'. How should I implement it? Using a temp table will slow down performance for one column only, so should I use a while loop, or can you suggest better alternatives?
You can split the string using a numbers table and then rebuild it using `for xml path()`:

```
select isnull(C.Value, '') as AS_CIS_Code
from added_services as A
cross apply (
    select substring(A.AS_CIS_Code, T.N, 1)+'_'
    from (values(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) as T(N)
    where T.N <= len(A.AS_CIS_Code)
    order by T.N
    for xml path('')
) as C(Value)
```

[SQL Fiddle](http://sqlfiddle.com/#!3/f89ea/1)
Try this: ``` DECLARE @Input VARCHAR(100) = 'TESTING' DECLARE @Pos INT = LEN(@Input) WHILE @Pos > 1 BEGIN SET @Input = STUFF(@Input,@Pos,0,'_') SET @Pos = @Pos - 1 END SELECT @Input ``` **Output** ``` T_E_S_T_I_N_G ``` **UDF** ``` CREATE FUNCTION PadStr(@Data VARCHAR(100)) RETURNS VARCHAR(200) AS BEGIN DECLARE @Input VARCHAR(200) = @Data DECLARE @Pos INT = LEN(@Input) WHILE @Pos > 1 BEGIN SET @Input = STUFF(@Input,@Pos,0,'_') SET @Pos = @Pos - 1 END RETURN @Input + '_' END ``` **Output** ``` SELECT dbo.PadStr('TESTING') -- T_E_S_T_I_N_G_ ```
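Both answers implement the same transformation; as a reference point, here it is in a couple of lines of Python (matching the UDF variant, which also appends a trailing underscore):

```python
def pad_str(s):
    # Append '_' after every character, trailing one included,
    # e.g. 'AB' -> 'A_B_' as requested in the question.
    return "".join(ch + "_" for ch in s)

print(pad_str("TESTING"))  # T_E_S_T_I_N_G_
print(pad_str("AB"))       # A_B_
```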
Append a specific character after each character of a string in sql server
[ "", "sql", "sql-server", "" ]
EDIT original question: Our UDW is broken out into attribute and attribute list tables. I would like to write a data dictionary query that dynamically pulls in all column values from all tables that are like `%attr_list%` without having to write a series of unions and update or add every time a new attribute list is created in our UDW. All of our existing attribute list tables follow the same format (number of columns, most column names, etc). Below is the first two unions in our existing view which I want to avoid updating each time a new attribute list table is added to our UDW. ``` CREATE VIEW [dbo].[V_BI_DATA_DICTIONARY] ( ATTR_TABLE ,ATTR_LIST_ID ,ATTR_NAME ,ATTR_FORMAT ,SHORT_DESCR ,LONG_DESCR ,SOURCE_DATABASE ,SOURCE_TABLE ,SOURCE_COLUMN ,INSERT_DATETIME ,INSERT_OPRID ) AS SELECT 'PREAUTH_ATTR_LIST' ATTR_TABLE ,[PREAUTH_ATTR_LIST_ID] ATTR_LIST_ID ,[ATTR_NAME] ATTR_NAME ,[ATTR_FORMAT] ATTR_FORMAT ,[SHORT_DESCR] SHORT_DESCR ,[LONG_DESCR] LONG_DESCR ,[SOURCE_DATABASE] SOURCE_DATABASE ,[SOURCE_TABLE] SOURCE_TABLE ,[SOURCE_COLUMN] SOURCE_COLUMN ,[INSERT_DATETIME] INSERT_DATETIME ,[INSERT_OPRID] INSERT_OPRID FROM [My_Server].[MY_DB].[dbo].[PREAUTH_ATTR_LIST] UNION SELECT 'SAVINGS_ACCOUNT_ATTR_LIST' ,[SAVINGS_ACCOUNT_ATTR_LIST_ID] ,[ATTR_NAME] ,[ATTR_FORMAT] ,[SHORT_DESCR] ,[LONG_DESCR] ,[SOURCE_DATABASE] ,[SOURCE_TABLE] ,[SOURCE_COLUMN] ,[INSERT_DATETIME] ,[INSERT_OPRID] FROM [My_Server].[MY_DB].[dbo].[SAVINGS_ACCOUNT_ATTR_LIST]' ```
Something like this might work for you if all tables contain the same columns. Just change the temp table and the selected columns to match your own columns. ``` CREATE TABLE #results ( ATTR_TABLE SYSNAME, ATTR_LIST_ID INT, ATTR_NAME NVARCHAR(50), ATTR_FORMAT NVARCHAR(50), SHORT_DESCR NVARCHAR(50), LONG_DESCR NVARCHAR(255), SOURCE_DATABASE NVARCHAR(50), SOURCE_TABLE NVARCHAR(50), SOURCE_COLUMN NVARCHAR(50), INSERT_DATETIME DATETIME, INSERT_OPRID INT ); INSERT INTO #results EXEC sp_MSforeachtable @command1 = ' SELECT ''?'' , * FROM ? WHERE ''?'' LIKE ''%ATTR_LIST%'' ' SELECT * FROM #results DROP TABLE #results ``` EDIT: Updated my example with your columns. Because you use different column name for `ATTR_LIST_ID` in each table I changed the select to `SELECT *`. Obviously I don't know the data types of your columns so you have to change them. This won't work in a view but you could create a stored procedure.
For SQL Server you should be able to use something like this: ``` SELECT c.name AS ColName, t.name AS TableName FROM sys.columns c JOIN sys.tables t ON c.object_id = t.object_id WHERE t.name LIKE '%attr_list%' ``` And this will include views as well as tables ``` SELECT COLUMN_NAME, TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '%attr_list%' ```
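The same catalog-query idea carries over to other engines; a sketch on SQLite (whose catalog, `sqlite_master`, plays the role of `sys.tables` / `INFORMATION_SCHEMA` here), with invented table names:

```python
import sqlite3

# Filter the engine's catalog by a table-name pattern,
# the same idea as the sys.tables query in the answer.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE preauth_attr_list (id INTEGER)")
con.execute("CREATE TABLE savings_attr_list (id INTEGER)")
con.execute("CREATE TABLE customers (id INTEGER)")
rows = con.execute("""
    SELECT name FROM sqlite_master
    WHERE type = 'table' AND name LIKE '%attr_list%'
    ORDER BY name
""").fetchall()
print(rows)  # only the two *_attr_list tables
```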
Select all values from all tables with specific table name
[ "", "sql", "sql-server", "select", "dynamic-sql", "" ]
Say I have a customer profile with many contacts. That means I will have a contact table separate from the customer table.

**tbl\_customer**

```
CustomerId (PK, NOT NULL, UNIQUE, AUTO_INCREMENT)
CustomerName
Address
(etc)
```

**tbl\_contact**

```
ContactId (PK, NOT NULL, UNIQUE, AUTO_INCREMENT)
CustomerId (FK REFERENCES tbl_customer(CustomerId), CONSTRAINT)
Contact Type
Contact Number
```

So now let's say a customer called `John Leweinsky` has 4 contacts:

```
ContactType1: Fax
ContactType2: Office Phone
ContactType3: Personal Phone
ContactType4: Personal Phone
```

Can this be done in one query transaction without knowing `CustomerId`? Thanks in advance if you have answered this.
Try it like this:

```
START TRANSACTION;
insert into tbl_customer (field names) values (values);
insert into tbl_contact (CustomerId, ContactType, ContactNumber)
values ((select max(CustomerId) from tbl_customer), 'type1', 'Fax');
COMMIT;
```

Pass all the other contact types the same way.
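A variant of the same one-transaction idea, sketched in Python on SQLite: instead of `max(CustomerId)` it uses the driver's last-insert id (`LAST_INSERT_ID()` in MySQL, `cursor.lastrowid` here), which stays correct under concurrent inserts. Tables and sample values follow the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tbl_customer (
    CustomerId INTEGER PRIMARY KEY AUTOINCREMENT, CustomerName TEXT)""")
con.execute("""CREATE TABLE tbl_contact (
    ContactId INTEGER PRIMARY KEY AUTOINCREMENT,
    CustomerId INTEGER REFERENCES tbl_customer(CustomerId),
    ContactType TEXT, ContactNumber TEXT)""")

with con:  # one transaction: customer plus all contacts, or nothing
    cur = con.execute("INSERT INTO tbl_customer (CustomerName) VALUES (?)",
                      ("John Leweinsky",))
    cid = cur.lastrowid  # new CustomerId, no need to query max()
    con.executemany(
        "INSERT INTO tbl_contact (CustomerId, ContactType, ContactNumber) "
        "VALUES (?, ?, ?)",
        [(cid, t, num) for t, num in [("Fax", "111"), ("Office Phone", "222"),
                                      ("Personal Phone", "333"),
                                      ("Personal Phone", "444")]])

n = con.execute("SELECT COUNT(*) FROM tbl_contact WHERE CustomerId = ?",
                (cid,)).fetchone()[0]
print(cid, n)  # all four contacts attached to the new customer
```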
Assuming the name is unique, you can do this as: ``` insert into tbl_contact(CustomerId, ContactType, ContactNumber) select cust.CustomerId, c.ContactType, c.ContactNumber from (select 'Fax' as ContactType, Faxnumber as ContactNumber union all select 'Office', Officenumber union all select 'PersonalPhone', PersonalPhone union all select 'PersonalPhone', PersonalPhone ) c cross join Customers cust where cust.CustomerName = 'John Leweinsky'; ``` If name is not unique, you need some way to disambiguate the customers.
SQL insert one record with multiple relational data in one transaction
[ "", "mysql", "sql", "" ]
I have a query to get some data:

```
select max(v.SEQUENCENO) as vmaxseq, v.CODE, v.NAMECODE, v.CODENO
from smalltbl v
join (select max(SEQUENCENO) as maxseq, CODE, CODENO
      from smalltbl
      group by CODE, CODENO) sm
  on sm.CODE = v.CODE and sm.CODE = 'D50451489'
group by v.CODE, v.NAMECODE, v.CODENO;
```

But when I run it, it returns more data than I expected:

![SQL1](https://i.stack.imgur.com/45Y2X.png)

What I want is to return only the max of VMAXSEQ for each CODENO, something like this:

![SQL2](https://i.stack.imgur.com/9ggWw.png)

How do I write the query to get only those 2 rows? Thank you!
You can do what you want with analytic functions. I think the query you want is: ``` select sm.* from (select sm.*, max(SEQUENCENO) over (partition by codeno) as maxseq from smalltbl sm where sm.CODE = 'D50451489' ) sm where sequenceno = maxseq; ```
namecode is your problem. Because it varies over your rows and doesn't relate directly to codeno, you're essentially getting a cartesian product.
SQL query oracle return more than expected result
[ "", "sql", "oracle", "" ]
How do I combine two SQL queries into one?

```
SELECT * FROM table1 WHERE chapter=88 AND sentence>=23

SELECT * FROM table1 WHERE chapter=89 AND sentence>=1 AND sentence<=23
```
``` SELECT * FROM TABLE1 WHERE ( CHAPTER = 88 AND SENTENCE >= 23 ) OR ( CHAPTER = 89 AND SENTENCE >= 1 AND SENTENCE <= 23 ) ```
This is one way:

```
SELECT * FROM table1 WHERE chapter=88 AND sentence>=23
UNION ALL
SELECT * FROM table1 WHERE chapter=89 AND sentence>=1 AND sentence<=23
```

but you should get into the habit of explicitly listing columns. The columns must align or it won't work. Here's another way:

```
SELECT * FROM table1
WHERE (chapter=88 AND sentence>=23)
   OR (chapter=89 AND sentence>=1 AND sentence<=23)
```
Merge two SQL queries
[ "", "sql", "" ]
I have a table with the left 2 columns below. I am trying to derive the 3rd column based on some logic.

Logic: if we take date 1/1 and go forward through the dates, the highest score that will be reached before the score goes down is on 3/1, with a score of 12. So as HighestAchievedScore we will retrieve 12 for 1/1, and so forth. If we are on a date where the next score goes down, my HighestAchievedScore will be that next score, as you can see at 3/01/2014.

```
date       score  HighestAchieveScore
1/01/2014  10     12
2/01/2014  11     12
3/01/2014  12     10
4/01/2014  10     11
5/01/2014  11     9
6/01/2014  9      8
7/01/2014  8      9
8/01/2014  9      9
```

I hope I explained it clearly enough. Thanks in advance for any input on resolving the problem.
Lets make some test data: ``` DECLARE @Score TABLE ( ScoreDate DATETIME, Score INT ) INSERT INTO @Score VALUES ('01-01-2014', 10), ('01-02-2014', 11), ('01-03-2014', 12), ('01-04-2014', 10), ('01-05-2014', 11), ('01-06-2014', 9), ('01-07-2014', 8), ('01-08-2014', 9); ``` Now we are going to number our rows and then link to the next row to see if we are still going up ``` WITH ScoreRows AS ( SELECT s.ScoreDate, s.Score, ROW_NUMBER() OVER (ORDER BY ScoreDate) RN FROM @Score s ), ScoreUpDown AS ( SELECT p.ScoreDate, p.Score, p.RN, CASE WHEN p.Score < n.Score THEN 1 ELSE 0 END GoingUp, ISNULL(n.Score, p.Score) NextScore FROM ScoreRows p LEFT JOIN ScoreRows n ON n.RN = p.RN + 1 ) ``` We take our data recursively look for the next row that is right before a fall, and take that value as our max for any row that is still going up. otherwise, we use the score for the next falling row. ``` SELECT s.ScoreDate, s.Score, CASE WHEN s.GoingUp = 1 THEN d.Score ELSE s.NextScore END Test FROM ScoreUpDown s OUTER APPLY ( SELECT TOP 1 * FROM ScoreUpDown d WHERE d.ScoreDate > s.ScoreDate AND GoingUp = 0 ) d; ``` Output: ``` ScoreDate Score Test 2014-01-01 00:00:00.000 10 12 2014-01-02 00:00:00.000 11 12 2014-01-03 00:00:00.000 12 10 2014-01-04 00:00:00.000 10 11 2014-01-05 00:00:00.000 11 9 2014-01-06 00:00:00.000 9 8 2014-01-07 00:00:00.000 8 9 2014-01-08 00:00:00.000 9 9 ```
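The rule the CTEs encode can also be stated procedurally; a small Python sketch (an independent re-implementation, not a translation of the SQL) that reproduces the question's expected column:

```python
def highest_achieved(scores):
    out = []
    for i, s in enumerate(scores):
        if i + 1 == len(scores):        # last row keeps its own score
            out.append(s)
        elif scores[i + 1] > s:         # rising: take the peak of this run
            j = i
            while j + 1 < len(scores) and scores[j + 1] > scores[j]:
                j += 1
            out.append(scores[j])
        else:                           # next score falls: take that score
            out.append(scores[i + 1])
    return out

print(highest_achieved([10, 11, 12, 10, 11, 9, 8, 9]))
# [12, 12, 10, 11, 9, 8, 9, 9] -- matches the question's table
```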
I'm not sure this will work... but this is the general concept. Self join on A.Date < B.Date to get the max score, but use COALESCE and a third self join on a RowID assigned in a CTE to determine if the score dropped on the next record; if it did, coalesce that score in, otherwise use the max score. NEEDS TESTING, but I'd have to set up a fiddle to do so.

```
WITH CTE as
    (SELECT Date, Score, ROW_NUMBER() OVER(ORDER BY Date ASC) AS Row
     FROM tableName)

SELECT A.Date, A.Score, coalesce(C.Score, Max(A.Score)) as HighestAchievedScore
FROM CTE A
LEFT JOIN CTE B on A.Date < B.Date
LEFT JOIN CTE C on A.Row+1=C.Row and A.Score > C.Score
GROUP BY A.Date, A.Score
```
SQL Query Find x rows forward the highest value without having a lower value in between
[ "", "sql", "sql-server", "" ]
In our database each user has a `created_at` and `cancelled_at` date. How could I calculate the distribution of days active for users (that have a `cancelled_at` date present).
By distribution, I would expect that you want to see the histogram. For that, you want aggregation: ``` select date_part('day', cancelled_at - created_at) as activedays, count(*) from databasetable group by date_part('day', cancelled_at - created_at) order by activedays; ```
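The `date_part` interval arithmetic above is Postgres-specific, so here is a small self-contained check of the same grouping idea using SQLite from Python, where `julianday()` differences stand in for the interval; the table and sample rows are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (created_at TEXT, cancelled_at TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)", [
    ("2014-01-01", "2014-01-03"),  # 2 days active
    ("2014-01-01", "2014-01-03"),  # 2 days active
    ("2014-01-05", "2014-01-10"),  # 5 days active
    ("2014-01-05", None),          # not cancelled: excluded
])
rows = con.execute("""
    SELECT CAST(julianday(cancelled_at) - julianday(created_at) AS INTEGER)
               AS activedays,
           COUNT(*)
    FROM users
    WHERE cancelled_at IS NOT NULL
    GROUP BY activedays
    ORDER BY activedays
""").fetchall()
print(rows)  # [(2, 2), (5, 1)]
```

Each result row is one bucket of the histogram: (days active, number of users).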
``` SELECT *,DATE_PART('day', cancelled_at - created_at) as daysActive FROM tableName WHERE cancelled_at IS NOT NULL ```
Calculate distribution of date differences in SQL
[ "", "sql", "postgresql", "" ]
The following table structure, name: `table`

```
quantity    price
100         30.00
50          15.00
25          25.00
25          24.00
25          23.00
```

I want to select the rows `WHERE SUM(quantity)<=90 AND price<26.00`; the output should be

```
quantity    price
50          15.00
25          23.00
25          24.00
```

I want to select all rows which are necessary to fulfill the quantity of 90. `WHERE` combined with `SUM` doesn't seem to work.
`SUM()` is an aggregation function and you provide no aggregate, so this is bound to fail. What you seem to want is a running total, this can easily be achieved: ``` SELECT quantity, price, @rt:=@rt+quantity AS runningtotal FROM `table` INNER JOIN (SELECT @rt:=0) AS init ON 1=1 WHERE price<26.00 AND @rt<=90 ``` See the [SQLfiddle](http://sqlfiddle.com/#!2/38248fd/3/0)
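User variables evaluated inside a `SELECT` are MySQL-specific (and their evaluation order is not formally guaranteed), so a portable way to sanity-check the running-total idea is a correlated subquery. A sketch in SQLite from Python, using the question's rows, where the total of all *earlier* qualifying rows must still be within the 90-unit target:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE stock (id INTEGER PRIMARY KEY, quantity INT, price REAL)")
con.executemany("INSERT INTO stock (quantity, price) VALUES (?, ?)",
                [(100, 30.00), (50, 15.00), (25, 25.00),
                 (25, 24.00), (25, 23.00)])
rows = con.execute("""
    SELECT quantity, price
    FROM stock s
    WHERE price < 26.00
      AND (SELECT COALESCE(SUM(quantity), 0) FROM stock s2
           WHERE s2.price < 26.00 AND s2.id < s.id) <= 90
""").fetchall()
print(rows)  # [(50, 15.0), (25, 25.0), (25, 24.0)]
```

Rows are taken in `id` order here; like the user-variable version, it keeps rows until the accumulated quantity passes the target.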
Did you mean - `WHERE quantity <=90 AND price<26.00`
SUM() in WHERE in MYSQL statement
[ "", "mysql", "sql", "join", "sum", "" ]
Is there a way that I can eliminate all 'special characters' from a SQL Server query? Sometimes our users will put odd things like '@' or '&' in their name fields, and I am trying to eliminate anything that is not a number or letter. Is this possible? Thanks in advance.
I would use the answer here: [How to strip all non-alphabetic characters from string in SQL Server?](https://stackoverflow.com/questions/1007697/how-to-strip-all-non-alphabetic-characters-from-string-in-sql-server) If you cannot create a function, you might be able to use the function's code in your query.
Best way, if possible, is to do this before you even get to SQL Server. SQL Server is not the best tool for string manipulation. RegEx in the middle tier would handle it well, or you could potentially use CLR in SQL Server with RegEx. Unsure on the performance of the latter, so it would need to be tested.
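A middle-tier regex pass of the kind suggested above might look like this sketch (the character class keeps letters, digits, and spaces; adjust it to whatever counts as 'special' for you):

```python
import re

def strip_special(name):
    # Drop everything that is not a letter, digit, or space.
    return re.sub(r"[^A-Za-z0-9 ]", "", name)

print(strip_special("O'Brien & Sons @HQ"))  # OBrien  Sons HQ
```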
Eliminating Special Characters From a SQL Server Query
[ "", "sql", "sql-server", "ascii", "" ]
I have a table that has data holding a total amount for each month of sales, and each column is split by month. E.g.

```
InvJan | InvFeb | InvMar ... TotalInv | CrdJan | CrdFeb | CrdMar ... TotalCrd
------------------------------------------------------------------------------
100    | 120    | 200    ... 420      | 0      | 140    | 30     ... 170
```

As you can see, all the columns read across. I want to be able to PIVOT them to look like this:

```
      |  INV        CRD
------------------------------
JAN   |  100        0
FEB   |  120        140
MAR   |  200        30
 .    |   .          .
 .    |   .          .
 .    |   .          .
______________________________
TOTAL |  420        170
```

Can I do this using pivots? I am not too experienced with them... or is there a better way to do this?
This type of data transformation columns into rows is known as UNPIVOT. Since you are using SQL Server 2008, you should be able to use `CROSS APPLY` to unpivot your columns in pairs: ``` select Mnth, Inv, Crd from yourtable cross apply ( values ('Jan', InvJan, CrdJan), ('Feb', InvFeb, CrdFeb), ('Mar', InvMar, CrdMar), ('Apr', InvApr, CrdApr), ('May', InvMay, CrdMay) -- insert more columns here ) c (Mnth, Inv, Crd); ```
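`CROSS APPLY (VALUES ...)` is SQL Server syntax; on engines without it, the same pairwise unpivot can be expressed with `UNION ALL`. A quick check of the idea in SQLite from Python, using the sample row and just three months:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sales (InvJan INT, InvFeb INT, InvMar INT,
                                   CrdJan INT, CrdFeb INT, CrdMar INT)""")
con.execute("INSERT INTO sales VALUES (100, 120, 200, 0, 140, 30)")
rows = con.execute("""
    SELECT 'Jan' AS Mnth, InvJan AS Inv, CrdJan AS Crd FROM sales
    UNION ALL
    SELECT 'Feb', InvFeb, CrdFeb FROM sales
    UNION ALL
    SELECT 'Mar', InvMar, CrdMar FROM sales
""").fetchall()
print(rows)  # [('Jan', 100, 0), ('Feb', 120, 140), ('Mar', 200, 30)]
```

Each `SELECT` contributes one month row, pairing the Inv and Crd columns just like one row of the `VALUES` list in the answer above.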
First define what Inv and Crd are:

```
SELECT 'Inv' as Types, InvJan AS Jan, InvFeb as Feb, InvMar as Mar
INTO #testTable
FROM @table
UNION
SELECT 'Crd' as Types, CrdJan AS Jan, CrdFeb as Feb, CrdMar as Mar
FROM @table
```

Then unpivot this:

```
select months, Types, data
INTO #test2
from #testTable
unpivot
(
    data
    for months in (jan, Feb, Mar)
) u;
```

and after that pivot it as you like:

```
select *
from #test2
pivot (SUM (data) for Types in (Inv, Crd)) as T
```
PIVOT a horizontal table with Totals
[ "", "sql", "sql-server", "sql-server-2008", "stored-procedures", "" ]
I've got 2 tables, called `Entry` and `Booking`. The idea is: I create an `entry` which has a `start` and an `end` time, and then I can create bookings for this `entry`. A `booking` is associated with an `entry` and it also has a `start` and an `end`. A `booking` lies within the time of its `entry`, meaning `booking.start >= entry.start` and `booking.end <= entry.end`. Also, there are no overlapping `bookings` in one `entry`.

**"Entry" Table:**

```
id | start               | end
----------------------------------------------
1  | 2014-07-24 09:00:00 | 2014-07-24 11:20:00
```

**"Booking" Table:**

```
id | start               | end                 | entry_id
-----------------------------------------------------------
1  | 2014-07-24 09:10:00 | 2014-07-24 09:40:00 | 1
2  | 2014-07-24 09:50:00 | 2014-07-24 10:20:00 | 1
3  | 2014-07-24 10:50:00 | 2014-07-24 11:20:00 | 1
```

This example then roughly looks like the following:

```
<-------------- entry 1 interval ---------------------------------->
   <-booking 1->    <-booking 2->             <-booking 3->
```

Now I'd like to get all the intervals in which entry 1 has no booking yet:

**Desired result:**

```
entry_id | start               | end
----------------------------------------------------
1        | 2014-07-24 09:00:00 | 2014-07-24 09:10:00
1        | 2014-07-24 09:40:00 | 2014-07-24 09:50:00
1        | 2014-07-24 10:20:00 | 2014-07-24 10:50:00
```

How do I do this with SQL? (I'm looking for a general solution, though I'm going to use it in MySQL.)
It got a bit ugly:

```
SELECT e.id AS entry_id,
       e.start AS start,
       COALESCE(MIN(b1.start), e.end) AS end
FROM entry e
LEFT JOIN booking b1 ON b1.entry_id = e.id
                    AND b1.start >= e.start
                    AND b1.end <= e.end
GROUP BY e.id
HAVING start < end
UNION
SELECT e.id AS entry_id,
       b1.end AS start,
       COALESCE(MIN(b2.start), e.end) AS end
FROM entry e
JOIN booking b1 ON b1.entry_id = e.id
               AND b1.start >= e.start
               AND b1.end <= e.end
LEFT JOIN booking b2 ON b2.entry_id = e.id
                    AND b2.start >= b1.end
                    AND b2.end <= e.end
GROUP BY b1.id
HAVING start < end;
```

Tested against MySQL; for other SQL servers you probably need to include further columns in the `GROUP BY` terms. No MySQL-specific functions used.

What does it do? The statement up to `UNION` determines how much time is left before the first booking and returns up to one row per entry. The statement after `UNION` selects all bookings for the entry and pairs them with all bookings to the same entry after the booking. `HAVING` is needed to remove 0-length intervals and should only hurt runtime if many bookings are directly adjacent.

## Bonus:

This is a slightly modified query suitable for easy selection of entries, for bookings which are not fully included inside the entry, and for ordering:

```
SELECT entry_id, start, end
FROM (
    SELECT e.id AS entry_id,
           e.start AS start,
           COALESCE(MIN(b1.start), e.end) AS end
    FROM entry e
    LEFT JOIN booking b1 ON b1.entry_id = e.id
                        AND b1.start <= e.end
                        AND b1.end >= e.start
    GROUP BY e.id
    UNION
    SELECT e.id AS entry_id,
           b1.end AS start,
           COALESCE(MIN(b2.start), e.end) AS end
    FROM entry e
    JOIN booking b1 ON b1.entry_id = e.id
                   AND b1.start <= e.end
                   AND b1.end >= e.start
    LEFT JOIN booking b2 ON b2.entry_id = e.id
                        AND b2.start >= b1.end
                        AND b2.start <= e.end
                        AND b2.end >= e.start
    GROUP BY b1.id) iq
WHERE start < end
  AND entry_id = 1
ORDER BY start;
```
**MS SQL** ``` SELECT * FROM ( SELECT ENTRY_ID, b1.BOOK_END_DATE AS INTERVALL_START_DATE, (SELECT TOP 1 b2.BOOK_START_DATE FROM Booking b2 WHERE b1.BOOK_ENTRY_ID = b2.BOOK_ENTRY_ID AND b2.BOOK_START_DATE > b1.BOOK_END_DATE ORDER BY b2.BOOK_START_DATE) AS INTERVALL_END_DATE FROM Entry JOIN Booking b1 ON ENTRY_ID = b1.BOOK_ENTRY_ID ) AS sub01 WHERE sub01.INTERVALL_START_DATE is not null AND sub01.INTERVALL_END_DATE is not null AND sub01.INTERVALL_START_DATE <> sub01.INTERVALL_END_DATE UNION (SELECT ENTRY_ID, ENTRY_START_DATE, (SELECT TOP 1 BOOK_START_DATE FROM Booking WHERE BOOK_ENTRY_ID = ENTRY_ID AND BOOK_START_DATE > ENTRY_START_DATE ORDER BY BOOK_START_DATE) FROM Entry) UNION (SELECT ENTRY_ID, (SELECT TOP 1 BOOK_END_DATE FROM Booking WHERE BOOK_ENTRY_ID = ENTRY_ID AND BOOK_END_DATE < ENTRY_END_DATE ORDER BY BOOK_END_DATE DESC), ENTRY_END_DATE FROM Entry) ``` **MySQL** ``` SELECT * FROM ( SELECT * FROM ( SELECT ENTRY_ID, b1.BOOK_END_DATE AS INTERVALL_START_DATE, (SELECT b2.BOOK_START_DATE FROM Booking b2 WHERE b1.BOOK_ENTRY_ID = b2.BOOK_ENTRY_ID AND b2.BOOK_START_DATE > b1.BOOK_END_DATE ORDER BY b2.BOOK_START_DATE LIMIT 1) AS INTERVALL_END_DATE FROM Entry JOIN Booking b1 ON ENTRY_ID = b1.BOOK_ENTRY_ID ) AS sub01 WHERE sub01.INTERVALL_START_DATE is not null AND sub01.INTERVALL_END_DATE is not null UNION (SELECT e2.ENTRY_ID, e2.ENTRY_START_DATE, (SELECT b2.BOOK_START_DATE FROM Booking b2 WHERE b2.BOOK_ENTRY_ID = e2.ENTRY_ID AND b2.BOOK_START_DATE >= e2.ENTRY_START_DATE ORDER BY b2.BOOK_START_DATE LIMIT 1) FROM Entry e2) UNION (SELECT e3.ENTRY_ID, (SELECT b3.BOOK_END_DATE FROM Booking b3 WHERE b3.BOOK_ENTRY_ID = e3.ENTRY_ID AND BOOK_END_DATE <= e3.ENTRY_END_DATE ORDER BY b3.BOOK_END_DATE DESC LIMIT 1), e3.ENTRY_END_DATE FROM Entry e3) ) sub02 WHERE sub02.INTERVALL_START_DATE <> sub02.INTERVALL_END_DATE ``` [Fiddle](http://sqlfiddle.com/#!2/0986b/8)
Find available intervals
[ "", "sql", "" ]
I am trying to create a function which has a cursor in it. I want to get the Quantity value from that cursor and put it in the temp table, but I haven't succeeded in getting the value into the temp table. I put a comment where I couldn't get it done... here is my code:

```
alter FUNCTION test(@input VARCHAR(250))
RETURNS Decimal(8, 2)
AS
BEGIN
    DECLARE @rst Decimal(8, 2)
    SET @rst=0
    DECLARE @Temp TABLE (Quantity Decimal(8,2), Price Decimal(8,2))
    DECLARE @amount Decimal(8,2)
    DECLARE @price Decimal(8,2)

    DECLARE CrsOrfLine CURSOR FOR
        SELECT AMOUNT
        FROM LG_001_01_ORFLINE
        WHERE LINETYPE = 0
          AND ORDFICHEREF = (SELECT TOP 1 LOGICALREF
                             FROM LG_001_01_ORFICHE
                             WHERE GUID='EEB44E72-3717-4F5B-8F7E-6A36EB38EA22')
        ORDER BY LINENO_ ASC;

    FETCH NEXT FROM CrsOrfLine INTO @amount
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        INSERT INTO @Temp (Quantity)
        /* HOW AM I SUPPOSED TO ADD IT INTO THE TEMP?????? */
        /* I COULDNT FIGURE THIS PART OUT */
        FETCH NEXT FROM CrsOrfLine INTO @amount
    END /*WHILE*/
    CLOSE CrsOrfLine
    DEALLOCATE CrsOrfLine
```
You can do the following. Note that it only inserts the quantity, so it needs to be modified if you intend to include the price.

```
DECLARE @Temp TABLE
(
    Quantity Decimal(8,2),
    Price Decimal(8,2)
)

INSERT INTO @Temp (Quantity)
SELECT AMOUNT
FROM LG_001_01_ORFLINE
WHERE LINETYPE = 0
  AND ORDFICHEREF = (SELECT TOP 1 LOGICALREF
                     FROM LG_001_01_ORFICHE
                     WHERE GUID='EEB44E72-3717-4F5B-8F7E-6A36EB38EA22')
ORDER BY LINENO_ ASC
```
```
CREATE PROCEDURE [dbo].[usp_demo_cursor_with_temp_table]
AS
BEGIN
    DECLARE @temp TABLE (value1 varchar(5), value2 varchar(5), value3 INT, value4 varchar(1))

    DECLARE @value1 varchar(5)
    DECLARE @value2 varchar(5)
    DECLARE @value3 INT
    DECLARE @value4 varchar(1)

    DECLARE check_data_cursor CURSOR FOR
        select distinct value1, value2, value3, value4
        from table_name
        where status = 'A'

    OPEN check_data_cursor
    FETCH NEXT FROM check_data_cursor INTO @value1, @value2, @value3, @value4
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        -- any business logic + temp insertion
        insert into @temp values (@value1, @value2, @value3, @value4)
        FETCH NEXT FROM check_data_cursor INTO @value1, @value2, @value3, @value4
    END
    CLOSE check_data_cursor
    DEALLOCATE check_data_cursor

    -- to view temp data
    select * from @temp
END
```
In Sql Server, how do you put value from cursor into temp table?
[ "", "sql", "sql-server", "temp-tables", "database-cursor", "" ]
When performing a SQL query using an alias on a column name, I am trying to reference that alias in the same query to perform a calculation on it. Such as:

```
SELECT QUANTITY, AMOUNT, Count(AMOUNT) AS CntAmount, Sum(CNTAMOUNT)
FROM MY_TEST
GROUP BY QUANTITY, AMOUNT
```

This query is just an example. I thought you could normally reference the query through an alias, but that didn't work. Such as:

```
SELECT *
FROM (SELECT QUANTITY, AMOUNT, Count(AMOUNT) AS CntAmount, Sum(A.CNTAMOUNT)
      FROM MY_TEST
      GROUP BY QUANTITY, AMOUNT) a
```
You can use an alias in the same query, but... If you need to use an alias of an aggregate function (such as `SUM(amount) AS aggregated_alias`, `AVG()`, `COUNT()`, etc) in the same query, you must use a subquery. Here is a sample subquery: ``` SELECT a.QUANTITY, a.AMOUNT, SUM(a.CntAmount) AS SUM_CNTAMOUNT FROM (SELECT QUANTITY, AMOUNT, Count(AMOUNT) AS CntAmount FROM MY_TEST GROUP BY QUANTITY, AMOUNT) AS a GROUP BY a.QUANTITY, a.AMOUNT ```
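The derived-table pattern is portable, so it is easy to verify outside SQL Server too. A small sketch in SQLite from Python with invented rows; the outer query reuses the `CntAmount` alias in a further (arbitrary, illustrative) calculation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MY_TEST (QUANTITY INT, AMOUNT INT)")
con.executemany("INSERT INTO MY_TEST VALUES (?, ?)",
                [(1, 10), (1, 10), (2, 20)])
rows = con.execute("""
    SELECT a.QUANTITY, a.AMOUNT, a.CntAmount * 2 AS DoubledCnt
    FROM (SELECT QUANTITY, AMOUNT, COUNT(AMOUNT) AS CntAmount
          FROM MY_TEST
          GROUP BY QUANTITY, AMOUNT) AS a
    ORDER BY a.QUANTITY
""").fetchall()
print(rows)  # [(1, 10, 4), (2, 20, 2)]
```

The inner query computes the aggregate under the alias; the outer query can then treat `CntAmount` like any ordinary column.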
As per my knowledge, you **can't directly use an alias** to perform a calculation on it. If you use a name such as **'xyz'** in a calculation, it is looked up in the table, and an alias does not permanently become a column of your table. That's why you get an error such as "column name does not exist" or "invalid column name".

In your case you can do it like this:

```
;with CTE as
(
    SELECT QUANTITY, AMOUNT, Count(AMOUNT) AS CntAmount
    FROM MY_TEST
    GROUP BY QUANTITY, AMOUNT
)
select sum(CntAmount) as a, QUANTITY, AMOUNT
from CTE
group by QUANTITY, AMOUNT
```

So, if your requirement is to use only the alias, you can wrap your code in a CTE and then use the alias as you want. Please refer to this: [related question](https://stackoverflow.com/questions/24935028/laravel-eloquent-query-to-get-the-result-groupbymonth)
Use alias name in same query for SQL Server
[ "", "sql", "sql-server", "" ]
I am new to MySQL having previously done everything in MS Access. I am trying to join together 2 tables so that I can show all of the records from Table1 and add in certain columns from Table2. I can join the tables together using ``` SELECT Table1.Name, Table1.Address, Table1.TelephoneNumber FROM Table1 LEFT JOIN Table2 ON Table1.TelephoneNumber=Table2.PhoneNumber ``` Table1 has 3900 records and Table2 almost 7million I then want to add in (for example) PostTown and PostCode from Table2. So that my query will return Table1.Name, Table1.Address, Table1.TelephoneNumber, Table2.PostTown, Table2.PostCode How do I make the query only return everything in Table1 but show matches from Table2 where it has some and blanks where it hasn't. There are some blank values in the Table2.PhoneNumber which I think are duplicating in my results as it returns almost a million rows...
You have blanks in your data (not nulls): ``` SELECT Table1.Name, Table1.Address, Table1.TelephoneNumber FROM Table1 LEFT JOIN Table2 ON Table1.TelephoneNumber = Table2.PhoneNumber AND Table1.TelephoneNumber != '' ``` Checking for `NOT NULL` won't help, because null is not equal to null (whereas blank is equal to blank)
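The fan-out is easy to reproduce: empty strings match each other (NULLs never do), so every blank phone number in Table1 joins to every blank row in Table2. A self-contained check in SQLite from Python, with hypothetical rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (name TEXT, tel TEXT)")
con.execute("CREATE TABLE t2 (phone TEXT, town TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [("Alice", "555-0100"), ("Bob", ""), ("Carol", None)])
con.executemany("INSERT INTO t2 VALUES (?, ?)",
                [("555-0100", "Leeds"), ("", "Junk1"), ("", "Junk2")])
# Bob's blank tel matches BOTH blank rows in t2, inflating the result:
fanout = con.execute(
    "SELECT COUNT(*) FROM t1 LEFT JOIN t2 ON t1.tel = t2.phone"
).fetchone()[0]
# Excluding blanks from the join keeps exactly one row per t1 record:
fixed = con.execute("""
    SELECT COUNT(*) FROM t1
    LEFT JOIN t2 ON t1.tel = t2.phone AND t1.tel != ''
""").fetchone()[0]
print(fanout, fixed)  # 4 3
```

With millions of rows in Table2, even a few blank keys on each side multiply into the huge row counts described in the question.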
``` SELECT Table1.Name, Table1.Address, Table1.TelephoneNumber FROM Table1 LEFT JOIN Table2 ON Table1.TelephoneNumber=Table2.PhoneNumber WHERE Table1.TelephoneNumber IS NOT NULL ```
Left Join returning more records than in Table 1 and adding in additional data
[ "", "mysql", "sql", "join", "" ]
I need to merge 2 select results. One is like this:

```
SELECT count(emaildata2.EM_SENT_FLAG), emaildata2.EMAIL_FINANCIAL_WEEK
FROM `email-redeye`.emaildata2
WHERE emaildata2.EM_SENT_FLAG='Yes'
GROUP BY emaildata2.EMAIL_FINANCIAL_WEEK
```

and the other like this:

```
SELECT count(emaildata2.EM_OPEN_FLAG), emaildata2.EMAIL_FINANCIAL_WEEK
FROM `email-redeye`.emaildata2
WHERE emaildata2.EM_OPEN_FLAG='Yes'
GROUP BY emaildata2.EMAIL_FINANCIAL_WEEK
```

so that the output looks like this:

```
count(opens) | count(sends) | Week
2              8              52
5              15             53
```

I have tried various selects and unions, but the results of the counts always roll up to a total and are not broken down by week. Any ideas?
You can use **SUM with CASE** as below:

**Syntax**: `CASE WHEN CONDITION THEN TRUE_VALUE ELSE FALSE_VALUE END`

```
SELECT SUM(CASE WHEN emaildata2.EM_SENT_FLAG = 'Yes' THEN 1 ELSE 0 END) SEND_COUNT,
       SUM(CASE WHEN emaildata2.EM_OPEN_FLAG = 'Yes' THEN 1 ELSE 0 END) OPEN_COUNT,
       emaildata2.EMAIL_FINANCIAL_WEEK
FROM `email-redeye`.emaildata2
GROUP BY emaildata2.EMAIL_FINANCIAL_WEEK
```

You can also use **SUM with IF** as below:

**Syntax**: `IF(CONDITION, TRUE_VALUE, FALSE_VALUE)`

```
SELECT SUM(IF(emaildata2.EM_SENT_FLAG='Yes', 1, 0)) SEND_COUNT,
       SUM(IF(emaildata2.EM_OPEN_FLAG='Yes', 1, 0)) OPEN_COUNT,
       emaildata2.EMAIL_FINANCIAL_WEEK
FROM `email-redeye`.emaildata2
GROUP BY emaildata2.EMAIL_FINANCIAL_WEEK
```
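The `CASE` form is standard SQL and runs unchanged on most engines, so the shape of the result is easy to check. A sketch in SQLite from Python with invented flag rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE emaildata2 (EM_SENT_FLAG TEXT, EM_OPEN_FLAG TEXT,
                                        EMAIL_FINANCIAL_WEEK INT)""")
con.executemany("INSERT INTO emaildata2 VALUES (?, ?, ?)",
                [("Yes", "Yes", 52), ("Yes", "No", 52),
                 ("Yes", "No", 53), ("No", "No", 53)])
rows = con.execute("""
    SELECT SUM(CASE WHEN EM_SENT_FLAG = 'Yes' THEN 1 ELSE 0 END) AS send_count,
           SUM(CASE WHEN EM_OPEN_FLAG = 'Yes' THEN 1 ELSE 0 END) AS open_count,
           EMAIL_FINANCIAL_WEEK
    FROM emaildata2
    GROUP BY EMAIL_FINANCIAL_WEEK
    ORDER BY EMAIL_FINANCIAL_WEEK
""").fetchall()
print(rows)  # [(2, 1, 52), (1, 0, 53)]
```

Because both counts are computed in a single pass over one grouping, they stay broken down by week instead of rolling up to a total.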
Just add the extra count, but use IF and SUM:

```
SELECT SUM(IF(emaildata2.EM_SENT_FLAG='Yes', 1, 0)),
       SUM(IF(emaildata2.EM_OPEN_FLAG='Yes', 1, 0)),
       emaildata2.EMAIL_FINANCIAL_WEEK
FROM `email-redeye`.emaildata2
GROUP BY emaildata2.EMAIL_FINANCIAL_WEEK
```
Merging two select statements
[ "", "mysql", "sql", "" ]
```
id  name  number  counter
1   test  010101  2
2   test  010101  1
```

I need to select the duplicate rows that have the same number-name combination. I need to update the counter in one of the rows with the sum of the counters in the two rows, and then delete the second row. I'm using the below query to select the duplicates, but I'm not able to do the update:

```
SELECT *
FROM [prof]
WHERE ID NOT IN
(
    SELECT MAX(ID)
    FROM [prof]
    GROUP BY number, name
)
```
The update would be something like this:

```
update prof
set counter = counterSum
from prof
join (select name, number, sum(counter) counterSum
      from prof
      where whatever
      group by name, number) temp
  on prof.name = temp.name
 and prof.number = temp.number
where whatever
```

The two "where whatever"s should be the same.
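`UPDATE ... FROM` is SQL Server syntax. A portable sketch of the same two steps (fold the group's counter sum into the surviving row, then delete the other duplicates) can be checked in SQLite from Python using the question's rows; the correlated subquery and the `MIN(id)` "keep the lowest id" rule stand in for the join and are assumptions about which row should survive:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prof (id INTEGER PRIMARY KEY, name TEXT, "
            "number TEXT, counter INT)")
con.executemany("INSERT INTO prof VALUES (?, ?, ?, ?)",
                [(1, "test", "010101", 2), (2, "test", "010101", 1)])
# Fold each group's total into the row that will be kept (lowest id)...
con.execute("""
    UPDATE prof
    SET counter = (SELECT SUM(counter) FROM prof p2
                   WHERE p2.name = prof.name AND p2.number = prof.number)
    WHERE id IN (SELECT MIN(id) FROM prof GROUP BY name, number)
""")
# ...then drop every other row of the group.
con.execute("""
    DELETE FROM prof
    WHERE id NOT IN (SELECT MIN(id) FROM prof GROUP BY name, number)
""")
result = con.execute("SELECT id, name, number, counter FROM prof").fetchall()
print(result)  # [(1, 'test', '010101', 3)]
```

Restricting the `UPDATE` to the surviving row also avoids re-summing against rows the same statement has already changed.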
This will update the counter and keep the first(lowest) id. It will only keep one unique id so if there are 3 or more rows with the same name, number this will still work. ``` Update [prof] set counter=a.cnt from [prof] p inner join ( select name,number,sum(counter)cnt from [prof] group by name,number)a on p.name=a.name and p.number=a.number; delete [prof] from [prof] join ( select id,row_number () over (partition by name, number order by id asc)row from [prof])d on [prof].id=d.id where d.row>1; ```
Join Duplicate Rows Into One Row in SQL Server
[ "", "sql", "sql-server", "" ]
**dbcc useroptions** shows the date format. Later I changed the format using **set dateformat dmy**. Even though I changed the format, the date is still being displayed in the format yyyymmdd. How do I change the display format without using *CONVERT*?
Since you didn't mention that we can't use ***CAST***, would the following work for you?

> select CAST(getdate() as nvarchar(30))

I know I am not being entirely fair, but that's all I could do.
you can do something like ``` SELECT ProductName, Price, FORMAT(Now(),'dd/MM/yyyy') AS PerDate FROM Products; ```
Change the Display date format without using CONVERT
[ "", "sql", "sql-server-2008", "" ]
I am trying to spool results directly to a CSV using a few substitution variables. I have this query:

```
SET echo off
set feedback off
set pagesize 0
set termout off

define startdate='12-JAN-14'
define enddate='01-JUN-14'
define vpath='C:\14-0724_Spool testing\'
define dt='60_testupdate_subvar'
define ext='.csv'

SPOOL &&vpath&&dt&&ext

SELECT /*csv*/
    table1.SOURCE_DMS_ID,
    COUNT(table1.AMOUNT)
FROM table1
WHERE table1.DATE BETWEEN to_date('&&startdate') and to_date('&&enddate')
--AND table1_DATE BETWEEN '01-JAN-14' and '31-JAN-14'
GROUP BY table1_DIM.SOURCE_DMS_ID;
```

...being called with this script:

```
@"\Results_Feed_Spooled.sql"
```

I specifically call it with a script (and SET various other items) so that the resulting CSV file will NOT have the query in the file. However, the select query DOES appear in the CSV file. Interestingly, when I switch the where clause to an actual date (see the commented section of the query) instead of a reference to the substitution variable, the query does NOT appear in the resulting CSV.

Why does using the variable in the WHERE clause instead of actual values cause the query to be included in the result file? How can I correct that? It is important to the usefulness of the script that I am able to use the variables. (SQL Developer version 4.0.0.13)

Thanks!
Assuming SQL\_Developer is sqlplus compliant, first, try adding "SET VERIFY OFF" . If that doesn't work you can also try putting the spool after the query as in the following example: ``` set verify off define x = 'X' select * from dual where '&x' = 'X' spool x.out / spool off ``` Note the blank line after the SELECT, and the absence of the semi-colon after the SELECT statement.
Use `-s`. The -s flag means silent, a complete bash script will look like something as follow: ``` #!/bin/bash sqlplus -s /nolog << EOF CONNECT USER/PASSWORD set echo off set verify off set colsep "," set linesize 9999 set trimout on set trimspool on set heading on set pagesize 0 embedded on set wrap off set feedback off set newpage 0 set arraysize 5000 set serveroutput off spool "/tmp/exported_date/table.csv" SELECT * FROM TABLE; spool off; EXIT; EOF ```
Remove query from spool output in SQL Developer with variables
[ "", "sql", "oracle", "csv", "oracle-sqldeveloper", "sqlplus", "" ]
I have to update a column in one table with values from a column in another table. I have to update around 100000 records (not a single-row update). My query is:

```
update asset_dmr_revision adr
set adr.revision_date = (select adrt.revision_date_test
                         from asset_dmr_revision_test adrt, asset_dmr_revision ad
                         where ad.id = adrt.id
                           and adrt.revision_date_test is not null
                           and ad.asset_id = adrt.asset_id)
```

But when I run it in Toad it says "single-row subquery returns more than one row". How do I update all the columns? I am using Oracle.
Add `and rownum = 1` or `and rownum <= 1` to the inner select query. This will ensure that only 1 record is ever returned by the inner query.

```
select adrt.revision_date_test
from asset_dmr_revision_test adrt, asset_dmr_revision ad
where ad.id = adrt.id
  and adrt.revision_date_test is not null
  and ad.asset_id = adrt.asset_id
  and rownum <= 1
```

Note: In my opinion, `rownum <= 1` is known to provide better performance than `rownum = 1`.

EDIT: Try the below update query:

```
update asset_dmr_revision adr
set adr.revision_date = nvl((select adrt.revision_date_test
                             from asset_dmr_revision_test adrt, asset_dmr_revision ad
                             where ad.id = adrt.id
                               and adrt.revision_date_test is not null
                               and ad.asset_id = adrt.asset_id
                               and adr.id = adrt.id
                               and rownum <= 1), adr.revision_date);
```

Basically, there was no correlation between the outer adr table and the inner adrt table, so I have added the additional condition `and adr.id = adrt.id`. Check and let me know if it helps.
Like I said in my comment, you need to return only ONE row. You can achieve this by using functions : ``` update asset_dmr_revision adr set adr.revision_date = (select max(adrt.revision_date_test) from asset_dmr_revision_test adrt, asset_dmr_revision ad where ad.id = adr.id and adrt.revision_date_test is not null and ad.asset_id = adrt.asset_id group by ad.id) ``` Replace MAX with the functionally correct function in your case. Also, if the value is NULL you might need to assign a default value : ``` update asset_dmr_revision adr set adr.revision_date = nvl((select max(adrt.revision_date_test) from asset_dmr_revision_test adrt, asset_dmr_revision ad where ad.id = adr.id and adrt.revision_date_test is not null and ad.asset_id = adrt.asset_id group by ad.id),adr.revision_date_test) ```
Not able to update column in DB
[ "", "sql", "oracle", "" ]
I am having a hard time constructing an SQL query that gets all the associated data from another (*associated*) table and, within that set of data, keeps only the latest (*most recent*) record. The image below describes my two tables (*Inventory and Sales*): the Inventory table contains all the items and the Sales table contains all the transaction records. Inventory.Id is related to Sales.Inventory\_Id, and the **Wanted result** is the output that I am trying to work toward. My objective is to associate all the sales records with respect to inventory, but only get the most recent transaction for each item.

![enter image description here](https://i.stack.imgur.com/xz3gD.png)

Using a plain join (*left, right or inner*) doesn't produce the result that I am looking for, since I don't know how to add another condition to filter down to the most recent data to join to. Is this doable, or should I change my table schema? Thanks.
You can use [APPLY](http://technet.microsoft.com/en-us/library/ms175156%28v=sql.105%29.aspx) ``` Select Item,Sales.Price From Inventory I Cross Apply(Select top 1 Price From Sales S Where I.id = S.Inventory_Id Order By Date Desc) as Sales ```
I would just use a correlated subquery: ``` select Item, Price from Inventory i inner join Sales s on i.id = s.Inventory_Id and s.Date = (select max(Date) from Sales where Inventory_Id = i.id) ```
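Both answers are variants of the classic greatest-row-per-group pattern; the correlated-subquery form runs on most engines, so it is the easier one to check outside SQL Server. A sketch in SQLite from Python — the table and column contents here are invented from the question's description:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Inventory (id INTEGER PRIMARY KEY, Item TEXT)")
con.execute("""CREATE TABLE Sales (id INTEGER PRIMARY KEY, Inventory_Id INT,
                                   Price REAL, Date TEXT)""")
con.executemany("INSERT INTO Inventory VALUES (?, ?)",
                [(1, "Widget"), (2, "Gadget")])
con.executemany("INSERT INTO Sales VALUES (?, ?, ?, ?)",
                [(1, 1, 9.99, "2014-01-01"), (2, 1, 12.50, "2014-03-01"),
                 (3, 2, 5.00, "2014-02-01")])
rows = con.execute("""
    SELECT i.Item, s.Price
    FROM Inventory i
    INNER JOIN Sales s
        ON i.id = s.Inventory_Id
       AND s.Date = (SELECT MAX(Date) FROM Sales WHERE Inventory_Id = i.id)
    ORDER BY i.id
""").fetchall()
print(rows)  # [('Widget', 12.5), ('Gadget', 5.0)]
```

Only the latest sale per item survives the join, matching the "Wanted result" in the question.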
Join two tables but only get most recent associated record
[ "", "sql", "sql-server", "" ]
I am not that good at subqueries. I am trying to essentially create columns by selecting certain values from another table, and then insert those columns into my table [Presummary 7/24 . . . AM]. The thing is, in my select subqueries that generate the values I want to insert, I reference the workstation column of my table [Presummary 7/24 . . . AM]. I think Access SQL is saying I am not allowed to do that, but I'm not sure how to get around doing it. Here is my query (generated by a VBA function I wrote):

```
INSERT INTO [Presummary 7/24/2014 11:07:33 AM]
    (660201, 660202, 660203, 660206, 660207, 660208, 660209)
VALUES (
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660201'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660202'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660203'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660206'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660207'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660208'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION,
    SELECT h.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h
        WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660209'
        AND [Presummary 7/24/2014 11:01:44 AM].workstation = h.WORKSTATION
)
```

When I try running this code it says "Syntax error in query expression 'SELECT h.sumOFHRS_CLA . . . workstation = h.WORKSTATION'". (It is only referencing the first subquery, but I'm sure that the same error applies to all the subqueries.)

The below does not work either:

```
INSERT INTO [Presummary 7/24/2014 11:07:33 AM]
    (660201, 660202, 660203, 660206, 660207, 660208, 660209)
VALUES (
    SELECT h1.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h1,
    SELECT h2.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h2,
    SELECT h3.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h3,
    SELECT h4.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h4,
    SELECT h5.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h5,
    SELECT h6.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h6,
    SELECT h7.SumOFHRS_Claimed FROM [qry EngineHoursSummaryA] as h7
)
WHERE Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660201'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h1.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660202'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h2.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660203'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h3.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660206'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h4.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660207'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h5.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660208'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h6.WORKSTATION
AND Right(h.MODULE_ENGINE_SWO_SERIAL,6) = '660209'
    AND [Presummary 7/24/2014 11:01:44 AM].workstation = h7.WORKSTATION
```

How do I get the result I want from my query?
Never figured out the true cause of the syntax errors. But regardless, for what I was trying to do, I realized I needed to use an UPDATE rather than an INSERT statement.
Try this code, ``` INSERT INTO [Presummary 7/24/2014 11:07:33 AM] (660201, 660202, 660203, 660206, 660207, 660208, 660209) SELECT Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660201', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E1, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660202', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E2, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660203', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E3, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660206', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E4, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660207', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E5, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660208', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E6, Sum(IIF(Right([qry EngineHoursSummaryA].MODULE_ENGINE_SWO_SERIAL,6) = '660209', [qry EngineHoursSummaryA].SumOFHRS_Claimed, 0)) As E7 FROM [Presummary 7/24/2014 11:07:33 AM] INNER JOIN [qry EngineHoursSummaryA] ON [Presummary 7/24/2014 11:01:44 AM].workstation = [qry EngineHoursSummaryA].WORKSTATION; ```
INSERT INTO - Reference outer table in Values subquery
[ "", "sql", "ms-access", "vba", "" ]
Let's assume we have the following data set:

Table: DataTable1

```
ID    ExperienceId    LanguageId    ...
-------------------------------------------
1     1               1
2     1               2
3     1               3
4     2               1
5     2               2
6     2               3
7     3               1
8     3               2
9     3               3
...
```

Table: DataTable2

```
ID    SomeId    OtherId    LanguageId    ...
-------------------------------------------
1     459       1          1
2     459       1          2
3     459       1          3
4     245       2          1
5     245       2          2
6     245       2          3
7     295       3          1
8     295       3          2
9     295       3          3
...
```

I want to join those tables and get only the SomeId column, ignoring the LanguageId column. To make it clearer:

```
SELECT
    t2.SomeId AS RequiredId
    -- ...other data mainly from t2
FROM DataTable1 AS t1
LEFT JOIN DataTable2 AS t2
    ON t2.OtherId = t1.ExperienceId
    AND t2.LanguageId = (SELECT TOP 1 t1.LanguageId ORDER BY t1.LanguageId)
```

This query should return (if it weren't wrong, clearly) the rows:

```
SomeId    ...
----------------
459       ...
245       ...
295       ...
...
```

Now it returns three copies of otherwise identical data (differing only in LanguageId). I would try to filter it with `WHERE t1.LanguageId = 1` if I were sure it always exists, but I'm not. Rows can have LanguageId values from 1 to 3; they can also exist only with ID 2, etc. Rows will surely have at least one `LanguageId`.

Now my question is: *how can I join tables on unique values with one column completely ignored*?
Does wrapping it in another query do the trick? (Note the derived table needs an alias in SQL Server.)

```
SELECT RequiredId, <all_the_other_fields>
FROM (
    SELECT t2.SomeId AS RequiredId
           -- ...other data mainly from t2
    FROM DataTable1 AS t1
    LEFT JOIN DataTable2 AS t2
        ON t2.OtherId = t1.ExperienceId
        AND t2.LanguageId = (SELECT TOP 1 t1.LanguageId ORDER BY t1.LanguageId)
) x
GROUP BY RequiredId, <all_the_other_fields>
```

Or simply don't select the column in the first place:

```
SELECT DISTINCT t2.SomeId AS RequiredId
       -- ...other data mainly from t2 BUT not the LanguageId
FROM DataTable1 AS t1
LEFT JOIN DataTable2 AS t2
    ON t2.OtherId = t1.ExperienceId
    AND t2.LanguageId = (SELECT TOP 1 t1.LanguageId ORDER BY t1.LanguageId)
```
Try this: ``` ;with cte as (select *, row_number() over (partition by someid order by languageid) rn from datatable2) select * from datatable1 dt left join cte c on dt.experienceid = c.otherid and c.rn = 1 ```
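This answer can't be run against SQL Server here, but the same first-row-per-group idea can be sketched in SQLite (3.25+ for window functions) through Python. The table and column names follow the question; I partition by `otherid` rather than `someid`, which is equivalent in this data since the two pair 1:1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE datatable1 (id INTEGER, experienceid INTEGER, languageid INTEGER);
CREATE TABLE datatable2 (id INTEGER, someid INTEGER, otherid INTEGER, languageid INTEGER);
INSERT INTO datatable1 VALUES (1,1,1),(2,1,2),(3,1,3),(4,2,1),(5,2,2),(6,2,3),(7,3,1),(8,3,2),(9,3,3);
INSERT INTO datatable2 VALUES (1,459,1,1),(2,459,1,2),(3,459,1,3),
                              (4,245,2,1),(5,245,2,2),(6,245,2,3),
                              (7,295,3,1),(8,295,3,2),(9,295,3,3);
""")

# Rank the rows inside each otherid group, then join only the rank-1 row,
# so each experienceid picks up exactly one someid.
rows = conn.execute("""
WITH cte AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY otherid ORDER BY languageid) AS rn
    FROM datatable2
)
SELECT DISTINCT c.someid
FROM datatable1 dt
LEFT JOIN cte c ON dt.experienceid = c.otherid AND c.rn = 1
ORDER BY c.someid
""").fetchall()

some_ids = [r[0] for r in rows]
print(some_ids)  # -> [245, 295, 459]
```

Each SomeId appears exactly once, regardless of how many LanguageId rows back it.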
SQL LEFT JOIN first row only
[ "", "sql", "sql-server", "join", "" ]
I have a query which looks something like this: ``` select distinct col1, ( select ltrim(str(col2))+',' from @temp t2 where t2.col1 = t1.col1 for xml path('') ) col2 from @temp t1 ``` Some of the values in col2 are null, so instead of displaying a string with null, that part is missing totally, i.e. col2 should have ``` 3,4,3,NULL,2,3,4,3,2, ``` but instead I am getting: ``` 3,4,3,2,3,4,3,2, ``` How do I correct this?
To get a NULL output, you'll need to change the NULL value into the string 'NULL'. That also means that you'll need to cast the numeric value to a string to have a consistent type for the field; ``` SELECT DISTINCT col1, ( SELECT COALESCE(CAST(col2 AS VARCHAR),'NULL')+',' FROM @temp t2 WHERE t2.col1 = t1.col1 FOR XML PATH('') ) col2 FROM @temp t1 ``` [An SQLfiddle to test with](http://sqlfiddle.com/#!6/d41d8/20489).
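For illustration, here is a rough SQLite analogue of the NULL-vs-`'NULL'` behaviour, run through Python. `group_concat` stands in for the `FOR XML PATH('')` trick, and the sample values are made up to include one NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (col1 INTEGER, col2 INTEGER);
INSERT INTO t VALUES (1,3),(1,4),(1,NULL),(1,2);
""")

# Without COALESCE the NULL simply disappears from the aggregated string,
# just as the NULL element vanishes from the FOR XML PATH output.
naive = conn.execute("SELECT group_concat(col2, ',') FROM t").fetchone()[0]

# COALESCE substitutes the literal string 'NULL', so the position is kept.
fixed = conn.execute(
    "SELECT group_concat(COALESCE(CAST(col2 AS TEXT), 'NULL'), ',') FROM t"
).fetchone()[0]

print(naive)  # e.g. '3,4,2' -- the NULL is gone
print(fixed)  # e.g. '3,4,NULL,2' -- the NULL shows up as a token
```

The casting step matters in both engines: the numeric column has to become text before it can be mixed with the `'NULL'` placeholder.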
A NULL value is not the string NULL. It is simply absence of information. What you could do is to force a NULL value to be treated as the string NULL for your purposes like so: ``` SELECT CASE WHEN col2 IS NULL THEN 'NULL' ELSE col2 END + ',' ``` To make it more concise, you may use `isnull(col2,'NULL')` or `coalesce(col2,'NULL')` If `col2` is not a text type, cast it first before using any of the above solutions like so: ``` SELECT CASE WHEN col2 IS NULL THEN 'NULL' ELSE CAST(col2 AS VARCHAR(5)) END + ',' ```
Concat with NULL value
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I have a stored procedure to update data, which is being called through C#. I also have several triggers set to my table which insert audit information to audit table when the data is changed. The problem is I have two columns in audit table called changedBy(UserName) and changeSetID, which I can only retrieve them from C#, I can pass them as parameters to the stored procedure, then I don't know how to let trigger get the value of these two variables. Originally I use context\_info as a global variable, but I need to pass two variables to trigger.
I actually figured out a workaround for this. First, I pass my username and changeset ID from C# to the stored procedure as parameters. Inside the stored procedure, I set CONTEXT\_INFO to the combination of the two:

```
Declare @userAndSetID VARBINARY(128)

SET @UserName = @UserName + ',' + @ChangeSetID + ','
SET @userAndSetID = CAST(CAST(@UserName AS NVARCHAR(MAX)) AS VARBINARY(128));

SET CONTEXT_INFO @userAndSetID
```

Inside my trigger, I retrieve the two variables from CONTEXT\_INFO() using a split-string function:

```
select @ChangedBy = value from dbo.fn_Split((CAST (CONTEXT_INFO() AS NVARCHAR(MAX))),',') where position = 1;
select @changesetID = value from dbo.fn_Split((CAST (CONTEXT_INFO() AS NVARCHAR(MAX))),',') where position = 2;
```

Here is the split-string function; it takes a string separated by any delimiter and inserts the pieces into a table variable row by row:

```
CREATE FUNCTION [dbo].[fn_Split](@text nvarchar(MAX), @delimiter nvarchar(20) = ' ')
RETURNS @Strings TABLE
(
    position int IDENTITY PRIMARY KEY,
    value nvarchar(1500)
)
AS
BEGIN
    DECLARE @index int
    SET @index = -1

    WHILE (LEN(@text) > 0)
    BEGIN
        SET @index = CHARINDEX(@delimiter, @text)

        IF (@index = 0) AND (LEN(@text) > 0)
        BEGIN
            INSERT INTO @Strings VALUES (@text)
            BREAK
        END

        IF (@index > 1)
        BEGIN
            INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
            SET @text = RIGHT(@text, (LEN(@text) - @index))
        END
        ELSE
            SET @text = RIGHT(@text, (LEN(@text) - @index))
    END
    RETURN
END
```

This is just a workaround. As @Tab Alleman suggested, the best way to solve this problem is to save the variables somewhere inside the database, but since I don't want to modify the database at the moment, this solution does the trick.
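The pack-then-split idea itself is easy to sanity-check outside SQL Server. Below is a sketch in Python/SQLite where a recursive CTE plays the role of `fn_Split`; the user name and changeset values are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Pack the two session values the way the procedure does: 'user,changeset,'
packed = "jsmith" + "," + "42" + ","

# A recursive CTE standing in for fn_Split: one row per delimited token,
# numbered by position, peeling one token off the front on each step.
rows = conn.execute("""
WITH RECURSIVE split(position, value, rest) AS (
    SELECT 0, NULL, :packed
    UNION ALL
    SELECT position + 1,
           substr(rest, 1, instr(rest, ',') - 1),
           substr(rest, instr(rest, ',') + 1)
    FROM split
    WHERE instr(rest, ',') > 0
)
SELECT position, value FROM split WHERE position > 0
""", {"packed": packed}).fetchall()

# Recover the two values by position, as the trigger does.
changed_by = [v for p, v in rows if p == 1][0]
changeset_id = [v for p, v in rows if p == 2][0]
print(changed_by, changeset_id)  # jsmith 42
```

The trailing delimiter matters: it guarantees the last token is also terminated, which is why the procedure appends a final comma before packing.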
Several approaches: 1) As granadaCoder, suggested, add those columns to your updated table, so that the trigger will be able to access them 2) Disable the trigger at the beginning of your proc, include the inserts to the audit tables in your proc (instead of letting the trigger do them), then re-enable the trigger at the end. 3) Keep this information in a permanent table. Have a table with the UniqueID of your updated table, and the "changedBy" and "changeSetID" columns. At the beginning of the proc, update this table with the parameters passed from C#. In the trigger, join inserted to this table to get the values of the two columns you need.
How to set two dynamic temporary global variables for triggers in SQL server?
[ "", "sql", "sql-server", "stored-procedures", "triggers", "global-variables", "" ]
I'm having trouble with a SQL script like this:

```
SELECT A, B, C,
       CASE WHEN D < 21  THEN '0-20'
            WHEN D < 51  THEN '21-50'
            WHEN D < 101 THEN '51-100'
            ELSE '>101'
       END AS E,
       COUNT(*)
FROM TABLE_X
GROUP BY A, B, C, D;
```

The result set looks like this:

```
A     B  C  D      count(*)
CAR   1  2  21-50  1
CAR   1  2  21-50  1
BIKE  1  3  0-20   1
```

In the first row, CAR has D=25, so it falls in 21-50. In the second row, CAR has D=32, so it is in 21-50 too. In short, I want a result set like this:

```
A     B  C  D      count(*)
CAR   1  2  21-50  2
BIKE  1  3  0-20   1
```

So CAR must get a count of 2 by grouping on the bucketed value of the D column. How can I achieve this?
The problem here is that you're grouping by `D` first and only then applying the `case` logic. If you add `D` to the select list, you'd see results that probably look like this:

```
A     B  C  D   E      count(*)
CAR   1  2  25  21-50  1
CAR   1  2  32  21-50  1
BIKE  1  3  7   0-20   1
```

In order to avoid this, you could apply the `case` first and only then the `group by` clause, by using a subquery:

```
SELECT A, B, C, E, COUNT(*)
FROM (SELECT A, B, C,
             CASE WHEN D < 21  THEN '0-20'
                  WHEN D < 51  THEN '21-50'
                  WHEN D < 101 THEN '51-100'
                  ELSE '>101'
             END AS E
      FROM TABLE_X) t
GROUP BY A, B, C, E;
```
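To see the fix in action, here is the subquery approach run against an in-memory SQLite database through Python. The D values 25 and 32 come from the question; 7 is an invented value for the BIKE row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_x (a TEXT, b INTEGER, c INTEGER, d INTEGER);
INSERT INTO table_x VALUES ('CAR',1,2,25),('CAR',1,2,32),('BIKE',1,3,7);
""")

# Bucket D inside the subquery, then group on the bucket (E), not on raw D,
# so the two CAR rows collapse into one group.
rows = conn.execute("""
SELECT a, b, c, e, COUNT(*)
FROM (SELECT a, b, c,
             CASE WHEN d < 21  THEN '0-20'
                  WHEN d < 51  THEN '21-50'
                  WHEN d < 101 THEN '51-100'
                  ELSE '>101' END AS e
      FROM table_x) t
GROUP BY a, b, c, e
ORDER BY a
""").fetchall()

print(rows)  # -> [('BIKE', 1, 3, '0-20', 1), ('CAR', 1, 2, '21-50', 2)]
```

CAR now counts 2, because both of its rows land in the same `21-50` bucket before grouping.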
The query below should work. Basically, I just pull the `COUNT(1)`, and hence the `GROUP BY` clause, up into an outer query, while leaving all the rest of the functionality in the inner query (note that the derived table needs an alias in most databases):

```
SELECT A, B, C, E, COUNT(1)
FROM
(
    SELECT A, B, C,
           CASE WHEN D < 21  THEN '0-20'
                WHEN D < 51  THEN '21-50'
                WHEN D < 101 THEN '51-100'
                ELSE '>101'
           END AS E
    FROM TABLE_X
) t
GROUP BY A, B, C, E;
```
Sql grouping trouble
[ "", "sql", "" ]
I have a select result like this:

```
ID | DATE
----------------
10 | 2014-07-23
7  | 2014-07-24
8  | 2014-07-24
9  | 2014-07-24
1  | 2014-07-25
2  | 2014-07-25
6  | 2014-07-25
3  | 2014-07-26
4  | 2014-07-27
5  | 2014-07-28
```

The result above is ordered by date. Now, I want to select the row immediately before:

```
2 | 2014-07-25
```

which is:

```
1 | 2014-07-25
```

I don't know the exact ID in advance, and the condition must also work if I want to select the row before:

```
3 | 2014-07-26
```

which is:

```
6 | 2014-07-25
```

What condition should I use?

**UPDATE**

Tried this:

```
SET @rank=0;
SELECT @rank:=@rank+1 AS rank, t1.*
FROM table t1
```

Then I got this:

```
RANK | ID | DATE
-----------------
1    | 10 | 2014-07-23
2    | 7  | 2014-07-24
3    | 8  | 2014-07-24
4    | 9  | 2014-07-24
5    | 1  | 2014-07-25
6    | 2  | 2014-07-25
7    | 6  | 2014-07-25
8    | 3  | 2014-07-26
9    | 4  | 2014-07-27
10   | 5  | 2014-07-28
```

Then I tried this:

```
SET @rank=0;
SELECT @rank:=@rank+1 AS rank, t1.*
FROM table t1
WHERE rank < 3;
```

I got this error: Unknown column 'rank' in 'where clause'.
Here's one way... ``` DROP TABLE IF EXISTS my_table; CREATE TABLE my_table (ID INT NOT NULL PRIMARY KEY ,DATE DATE NOT NULL ); INSERT INTO my_table VALUES (10 ,'2014-07-23'), (7 ,'2014-07-24'), (8 ,'2014-07-24'), (9 ,'2014-07-24'), (1 ,'2014-07-25'), (2 ,'2014-07-25'), (6 ,'2014-07-25'), (3 ,'2014-07-26'), (4 ,'2014-07-27'), (5 ,'2014-07-28'); SELECT a.id , a.date , b.id b_id , b.date b_date FROM ( SELECT x.* , COUNT(*) rank FROM my_table x JOIN my_table y ON (y.date < x.date) OR (y.date = x.date AND y.id <= x.id) GROUP BY x.date , x.id ) a LEFT JOIN ( SELECT x.* , COUNT(*) rank FROM my_table x JOIN my_table y ON (y.date < x.date) OR (y.date = x.date AND y.id <= x.id) GROUP BY x.date , x.id ) b ON b.rank = a.rank - 1; +----+------------+------+------------+ | id | date | b_id | b_date | +----+------------+------+------------+ | 10 | 2014-07-23 | NULL | NULL | | 7 | 2014-07-24 | 10 | 2014-07-23 | | 8 | 2014-07-24 | 7 | 2014-07-24 | | 9 | 2014-07-24 | 8 | 2014-07-24 | | 1 | 2014-07-25 | 9 | 2014-07-24 | | 2 | 2014-07-25 | 1 | 2014-07-25 | | 6 | 2014-07-25 | 2 | 2014-07-25 | | 3 | 2014-07-26 | 6 | 2014-07-25 | | 4 | 2014-07-27 | 3 | 2014-07-26 | | 5 | 2014-07-28 | 4 | 2014-07-27 | +----+------------+------+------------+ ``` ... but you can also do this (quicker) with variables.
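On MySQL 8+ (or any engine with window functions) the whole self-join ranking can be replaced by `LAG()`. Here is a sketch of that alternative using SQLite through Python, with the question's data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (id INTEGER PRIMARY KEY, date TEXT NOT NULL);
INSERT INTO my_table VALUES
 (10,'2014-07-23'),(7,'2014-07-24'),(8,'2014-07-24'),(9,'2014-07-24'),
 (1,'2014-07-25'),(2,'2014-07-25'),(6,'2014-07-25'),
 (3,'2014-07-26'),(4,'2014-07-27'),(5,'2014-07-28');
""")

# LAG() hands each row its predecessor in (date, id) order in a single pass,
# replacing both triangular-join subqueries of the answer above.
rows = conn.execute("""
SELECT id, date,
       LAG(id)   OVER (ORDER BY date, id) AS prev_id,
       LAG(date) OVER (ORDER BY date, id) AS prev_date
FROM my_table
""").fetchall()

prev_of = {r[0]: r[2] for r in rows}
print(prev_of[2], prev_of[3])  # -> 1 6, matching the question's two examples
```

The first row in the ordering (id 10) has no predecessor, so its `prev_id` is NULL, exactly like the NULL row in the self-join output.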
You can add a row id to the select like this:

```
SELECT @rowid:=@rowid+1 as rowid, t1.*
FROM yourdatabase.tablename t1, (SELECT @rowid:=0) as rowids;
```

Then you can run a simple query against that result to fetch the row with the next-lower rowid.
How to select a row before a specific row on MySQL, if the table is ordered by date?
[ "", "mysql", "sql", "database", "" ]
I am trying to loop through table names returned from a query and, for each table, check whether it contains records where certain values are null. If it does, I want to insert that table's name into a temporary table. I get an error:

```
Conversion failed when converting the varchar value 'count(*) FROM step_inusd_20130618 WHERE jobDateClosed IS NULL' to data type int.
```

This is the query:

```
DECLARE @table_name VARCHAR(150)
DECLARE @sql VARCHAR(1000)
DECLARE @test int

SELECT @table_name = tableName FROM #temp WHERE id = @count

SET @sql = 'SELECT * FROM ' + @table_name + ' WHERE jobDateClosed IS NULL'

--ERROR is below:
select @test = 'count(*) FROM ' + @table_name + ' WHERE jobDateClosed IS NULL'

--PRINT @sql
-- EXEC(@sql)

IF @test > 0
BEGIN
    INSERT INTO #temp2 (tablename) VALUES ( @table_name);
END

SET @count = @count + 1
```

Any ideas how to get the result of the count back as an integer?
Check for [sp\_executesql](http://msdn.microsoft.com/en-us/library/ms188001.aspx) where you may define output parameters. ``` DECLARE @table_name VARCHAR(150) DECLARE @sql VARCHAR(1000) DECLARE @test int SELECT @table_name = tableName FROM #temp WHERE id = @count DECLARE @SQLString nvarchar(500); DECLARE @ParmDefinition nvarchar(500); SET @SQLString = N'SELECT @test = count(*) FROM ' + @table_name + ' WHERE jobDateClosed IS NULL' SET @ParmDefinition = N'@test int OUTPUT'; EXECUTE sp_executesql @SQLString, @ParmDefinition, @test=@test OUTPUT; IF @test > 0 BEGIN INSERT INTO #temp2 (tablename) VALUES ( @table_name); END SET @count = @count + 1 ```
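The same pattern — dynamic table name in, scalar count out — can be illustrated outside T-SQL. Here is a rough Python/SQLite sketch; the table name and data are invented to match the error message, and the whitelist check is an assumption standing in for whatever validation the real code would use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE step_inusd_20130618 (jobDateClosed TEXT);
INSERT INTO step_inusd_20130618 VALUES (NULL), ('2013-06-18'), (NULL);
""")

def count_open(conn, table_name):
    # Table names cannot be bound parameters, so validate against a whitelist
    # before interpolating -- the same reason sp_executesql above still
    # concatenates @table_name into the SQL string.
    if table_name not in {"step_inusd_20130618"}:
        raise ValueError("unknown table")
    sql = f"SELECT COUNT(*) FROM {table_name} WHERE jobDateClosed IS NULL"
    return conn.execute(sql).fetchone()[0]

print(count_open(conn, "step_inusd_20130618"))  # -> 2
```

The count comes back as a real integer because it is read from the executed query's result, never assigned from the SQL text itself — which is the root cause of the conversion error in the question.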
Shouldn't it be `SET` instead of `SELECT`? E.g., changing:

```
select @test = 'count(*) FROM ' + @table_name + ' WHERE jobDateClosed IS NULL'
```

to:

```
SET @test = 'select count(*) FROM ' + @table_name + ' WHERE jobDateClosed IS NULL'
```
Converting dynamic sql query's COUNT result to INT
[ "", "sql", "sql-server", "" ]
I am having issues with a query I have written in that the data warehouse we use does not want to accept the query as it contains a portion which is Parallel from Serial or S->P. When I run the explain plan it gives me a Trivial error, yet when I run the query it errors out. Any thoughts would be helpful. **Query** ``` SELECT /*+ use_hash (doi,seg) */ doi.ORDER_ID AS "PO" , doi.DISTRIBUTOR_ID AS "Vendor ID" , doi.ISBN AS "ASIN" , doi.QUANTITY_ORDERED AS "Quantity Confirmed" FROM D_DISTRIBUTOR_ORDER_ITEMS doi JOIN (SELECT /*+ use_hash */ psm.ASIN FROM PRODUCT_SEGMENT_MEMBERSHIP psm WHERE psm.SEGMENT_ID = 784885) seg ON doi.ISBN = seg.ASIN WHERE doi.ORDER_DAY = '2009/04/06' ORDER BY seg.ASIN ```
As for the Serial to Parallel issue, I was unable to find a solution, but it seems to be trivial and will only make the query take a few seconds longer.

The ORA-01861 error that was being seen was resolved by changing the line

`doi.ORDER_DAY = '2009/04/06'`

to

`doi.ORDER_DAY = TO_DATE('20090406', 'YYYYMMDD')`

This cleared up the ORA-01861.
This should be the same as your query. ``` SELECT doi.ORDER_ID AS "PO" , doi.DISTRIBUTOR_ID AS "Vendor ID" , doi.ISBN AS "ASIN" , doi.QUANTITY_ORDERED AS "Quantity Confirmed" FROM D_DISTRIBUTOR_ORDER_ITEMS doi JOIN PRODUCT_SEGMENT_MEMBERSHIP psm ON doi.ISBN = psm.ASIN AND psm.SEGMENT_ID = 784885 WHERE doi.ORDER_DAY = '2009/04/06' ORDER BY psm.ASIN ``` Does this also give you an error?
Is there a way to resolve Parallel from Serial in SQL?
[ "", "sql", "oracle", "parallel-processing", "" ]
I have a stored procedure that returns two columns like this:

![enter image description here](https://i.stack.imgur.com/ujEFU.png)

I want to know if it's possible to create a comma-delimited string from the iTypeID column of the result set. I'm just not sure how to loop through that result set to create the string; I'd want the string to end up as

```
@sqlString = '1,2,4,11,14'
```

as my parameter value. The stored procedure call that produces the above table is:

```
exec cfn_PlanningGoalGetType 0,'xmlString'
```

I don't know if that helps or is clear enough, but any direction would be appreciated.
You can use [COALESCE](http://msdn.microsoft.com/en-gb/library/ms190349.aspx) ``` DECLARE @Str varchar(100) SELECT @Str = COALESCE(@Str + ', ', '') + CAST(iTypeID AS varchar(5)) FROM TableName Select @Str ```
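As a quick illustration of building the comma list, here is a SQLite sketch run through Python. The table name and description values are made up; the IDs match the `'1,2,4,11,14'` target from the question, and `group_concat` plays the role of the `COALESCE` accumulation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE goal_types (iTypeID INTEGER, Description TEXT);
INSERT INTO goal_types VALUES
 (1,'Revenue'),(2,'Margin'),(4,'Votes'),(11,'Buyers'),(14,'Sellers');
""")

# Aggregate the id column into one comma-delimited string; the inner
# subquery fixes the order the ids are concatenated in.
csv = conn.execute("""
SELECT group_concat(iTypeID, ',')
FROM (SELECT iTypeID FROM goal_types ORDER BY iTypeID)
""").fetchone()[0]

print(csv)  # e.g. '1,2,4,11,14'
```

No cursor or loop is needed: a single aggregate pass produces the parameter string.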
This creates a table variable, inserts the stored procedure's output into it, and then selects that output into a one-line string:

```
DECLARE @tmpBus TABLE
(
    iTypeId INT,
    [Desc] VARCHAR(255)
)

INSERT INTO @tmpBus
EXEC cfn_PlanningGoalGetType 0,'xmlString'

DECLARE @CodeNameString varchar(100)

SELECT @CodeNameString = STUFF(
    (SELECT ',' + CAST(iTypeId AS VARCHAR(10))
     FROM @tmpBus
     ORDER BY iTypeId
     FOR XML PATH('')), 1, 1, '')
```
Looping through stored procedure results to create string
[ "", "sql", "sql-server", "stored-procedures", "" ]
I have a very simple query, and yet I can't get it to work like I want it to.

I have 2 tables, `A` and `B`, which are very similar and look like this:

`A`:

```
+------+----------+---------+
| a_id | a_cnt_id | a_value |
+------+----------+---------+
| 1    | 848      | 0.5     |
| 2    | 848      | 3       |
| 3    | 848      | 4       |
| 4    | 848      | 65      |
+------+----------+---------+
```

`B`:

```
+------+----------+---------+
| b_id | b_cnt_id | b_value |
+------+----------+---------+
| 1    | 849      | 36      |
| 2    | 849      | 42      |
| 3    | 849      | 8       |
+------+----------+---------+
```

`A` has more records than `B` for a given set of `{a_cnt_id, b_cnt_id}`. I would like my query to return this:

```
+------+------+---------+---------+
| a_id | b_id | a_value | b_value |
+------+------+---------+---------+
| 1    | 1    | 0.5     | 36      |
| 2    | 2    | 3       | 42      |
| 3    | 3    | 4       | 8       |
| 4    | NULL | 65      | NULL    |
+------+------+---------+---------+
```

My query (not working, because it returns only the first 3 rows):

```
select distinct a.a_id, b.b_id, a.a_value, b.b_value
from b
full join a on b.b_id = a.a_id
where a.a_cnt_id = 848
  and b.b_cnt_id = 849;
```
Adding a `WHERE` clause filters the joined result, so a predicate like `b.b_cnt_id = 849` rejects exactly the rows where `b`'s columns came back as `null`. Move your filters into the join condition instead:

```
select distinct a.a_id, b.b_id, a.a_value, b.b_value
from b
full join a on b.b_id = a.a_id
            and a.a_cnt_id = 848
            and b.b_cnt_id = 849;
```
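The ON-vs-WHERE difference is easy to demonstrate with the question's data in SQLite via Python. A `LEFT JOIN` stands in for the `FULL JOIN`, since only `b`'s side is null-extended in this data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (a_id INTEGER, a_cnt_id INTEGER, a_value REAL);
CREATE TABLE b (b_id INTEGER, b_cnt_id INTEGER, b_value REAL);
INSERT INTO a VALUES (1,848,0.5),(2,848,3),(3,848,4),(4,848,65);
INSERT INTO b VALUES (1,849,36),(2,849,42),(3,849,8);
""")

# Filter in WHERE: the null-extended row for a_id = 4 is discarded,
# because NULL = 849 is not true.
in_where = conn.execute("""
SELECT a.a_id, b.b_id FROM a
LEFT JOIN b ON b.b_id = a.a_id
WHERE b.b_cnt_id = 849
""").fetchall()

# Filter in ON: the outer side survives with NULLs, so a_id = 4 is kept.
in_on = conn.execute("""
SELECT a.a_id, b.b_id FROM a
LEFT JOIN b ON b.b_id = a.a_id AND b.b_cnt_id = 849
""").fetchall()

print(len(in_where), len(in_on))  # -> 3 4
```

The WHERE version returns only the three matched rows; the ON version keeps `(4, NULL)` as well, which is exactly the row the question was missing.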
I don't remember where I found this but here you go: ![enter image description here](https://i.stack.imgur.com/IbX73.jpg) EDIT: The link of the image belongs to [Visual-Representation-of-SQL-Joins](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins). Thanks @jyparask
Simple outer join between two tables not working
[ "", "sql", "oracle", "" ]