When a row is deleted from the `titles_in_stock` table, I want to insert an equivalent row into a table named `titles_in_stock_out`. I tried the following ``` create trigger titles_in_stock_out on titles_in_stock after delete as begin insert into titles_in_stock_out (cd_title, invenotry, cd_type) values (deleted.cd_title, deleted.invenotry, deleted.cd_type) end ``` but this gives the following error when I try to execute the statement. > Msg 128, Level 15, State 1, Procedure titles\_in\_stock\_out, Line 8 > The name "deleted.cd\_title" is not permitted in this context. Valid > expressions are constants, constant expressions, and (in some > contexts) variables. Column names are not permitted. Any help? Thanks
Your syntax is incorrect. `deleted` is a virtual table that is available inside the trigger, so you must refer to it as a table. ``` create trigger titles_in_stock_out on titles_in_stock after delete as begin insert into titles_in_stock_out (cd_title, invenotry, cd_type) select cd_title, invenotry, cd_type from deleted end ``` This also handles the case where more than one row is deleted.
You need to refer to `deleted` as a table, and the trigger also needs a different name; in your example it had the same name as the target table. ``` create trigger titles_in_stock_out_trig on titles_in_stock after delete as begin insert into titles_in_stock_out (cd_title, invenotry, cd_type) select cd_title, invenotry, cd_type from deleted end ```
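The fix in both answers can be sanity-checked without a SQL Server instance. Here is a rough SQLite sketch of the same idea, via Python's `sqlite3`; SQLite exposes the deleted row as `OLD` rather than a `deleted` pseudo-table, and the sample title is invented, but the principle — reference the old row's values rather than literals — is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE titles_in_stock (cd_title TEXT, invenotry INTEGER, cd_type TEXT);
CREATE TABLE titles_in_stock_out (cd_title TEXT, invenotry INTEGER, cd_type TEXT);

-- SQLite exposes the deleted row as OLD rather than a 'deleted' table,
-- but the idea is the same: reference the old values, not literals.
CREATE TRIGGER titles_in_stock_out_trig
AFTER DELETE ON titles_in_stock
BEGIN
    INSERT INTO titles_in_stock_out (cd_title, invenotry, cd_type)
    VALUES (OLD.cd_title, OLD.invenotry, OLD.cd_type);
END;
""")
cur.execute("INSERT INTO titles_in_stock VALUES ('Abbey Road', 3, 'CD')")
cur.execute("DELETE FROM titles_in_stock")
archived = cur.execute("SELECT cd_title FROM titles_in_stock_out").fetchall()
```

SQLite triggers fire once per deleted row, so `OLD` covers the multi-row case that the `select ... from deleted` form handles in SQL Server.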
create trigger after delete
[ "sql", "sql-server" ]
*UPDATE* I was able to find my solution by leveraging NOT EXISTS. Thus, my `and` clause looks like this: ``` and NOT EXISTS (select RejectCode from #rejects where od.UnitDescription LIKE ('%' + RejectCode + '%')) ``` This leverages a temp table #rejects in which I'm storing the 184 reject codes. THANK YOU everyone who responded. I went down each route, learned a few things, and the responses led me to the solution. Post -> CTE -> Temp Table (couldn't assign a variable right after the CTE) -> Join on LIKE -> NOT EXISTS *UPDATE* I'm attempting to filter a query with a lot of conditions - the nested select statement below returns 184 reject codes. I have a unit description coming back that is a VARCHAR(100), and inside it contains a code (not in a static spot). I'm attempting to do the following (summarized select for simplicity): ``` select od.*, o.* from OrderDetails od join Orders o ON o.oKey = od.oKey where CustomerID = '104' and od.UnitDescription NOT LIKE (select RejectCode from [CustDiscountTables].[dbo].RejectDiscount where GroupID = '15' AND (DiscountPercent = '1' or DiscountPercent = '.4' or DiscountPercent = '.2')) ``` Previously I only had to filter on a couple of codes, so I simply did "and (od.UnitDescription NOT LIKE '%a%' and od.UnitDescription NOT LIKE '%b%' and ...etc)". My question would be: how do I use NOT LIKE, or something similar (since the UnitDescription is not static and contains more than just the reject code), to filter all of the codes returned from the nested select statement? Am I really doomed to write out "NOT LIKE" 184 times?
Here is how I would do it -- I can't see your exact example so you might need different columns in the criteria table. ``` with criteria as ( select 15 as groupid, '1' as discountpercent union all select 15 as groupid, '.4' as discountpercent union all select 15 as groupid, '.2' as discountpercent ), rejects as ( select distinct RejectCode from [CustDiscountTables].[dbo].RejectDiscount rd join criteria c on c.groupid = rd.GroupID and c.discountpercent = rd.DiscountPercent ) select od.*, o.* from OrderDetails od join Orders o ON o.oKey = od.oKey where CustomerID = '104' and od.UnitDescription NOT LIKE (select RejectCode from rejects) ``` As you can see, I'm putting the criteria information in a CTE, but this could also be in a table (if you think it is going to expand often.) You can also add more columns or more criteria sub-queries and then just union them in the rejects CTE as needed. For example, let's say you also have criteria on foo and fab: ``` with criteria as ( select 15 as groupid, '1' as discountpercent union all select 15 as groupid, '.4' as discountpercent union all select 15 as groupid, '.2' as discountpercent ), criteria2 as ( select 1 as foo, '1' as fab union all select 2 as foo, '2' as fab union all select 3 as foo, '3' as fab ), rejects as ( select distinct RejectCode from [CustDiscountTables].[dbo].RejectDiscount rd join criteria c on c.groupid = rd.GroupID and c.discountpercent = rd.DiscountPercent union select distinct RejectCode from foofabrejectlist foofab join criteria2 c2 on c2.foo = foofab.foo and c2.fab = foofab.fab ) select od.*, o.* from OrderDetails od join Orders o ON o.oKey = od.oKey where CustomerID = '104' and od.UnitDescription NOT LIKE (select RejectCode from rejects) ``` As you can see, this can get quite "complex" quickly but still remain easy to maintain or make dynamic (by adding the criteria to a table.)
**Additional info as requested in the comments:** To use join and not "NOT LIKE" do this: ``` select od.*, o.* from OrderDetails od join Orders o ON o.oKey = od.oKey left join rejects on od.UnitDescription = rejects.RejectCode where CustomerID = '104' and rejects.RejectCode is null ``` NOTE: A good SQL optimizer should produce the same execution plan if you write it as a join or using the IN syntax.
Another way might be filtering the ids of those "like" it first and then selecting those "not in" the ids returned (this is assuming you have a column id in OrderDetails): ``` select od.*, o.* from OrderDetails od join Orders o ON o.oKey = od.oKey where CustomerID = '104' and od.id not in (select od2.id from OrderDetails od2 inner join [CustDiscountTables].[dbo].RejectDiscount rd on od2.UnitDescription LIKE '%' + rejectCode + '%' where od2.CustomerID = '104' and rd.GroupID = '15' and (rd.DiscountPercent = '1' or rd.DiscountPercent = '.4' or rd.DiscountPercent = '.2')) ```
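The `NOT EXISTS` pattern the asker ultimately settled on can be demonstrated portably. A minimal SQLite sketch via Python's `sqlite3` follows; the descriptions and reject codes are invented, and the only assumption is that a reject code may appear anywhere inside `UnitDescription`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE OrderDetails (id INTEGER, UnitDescription TEXT);
CREATE TABLE Rejects (RejectCode TEXT);
INSERT INTO OrderDetails VALUES (1, 'widget code AB1 blue'), (2, 'gadget plain');
INSERT INTO Rejects VALUES ('AB1'), ('XY9');
""")
# NOT EXISTS with a correlated LIKE: keep only rows whose description
# contains none of the reject codes.
kept = cur.execute("""
    SELECT od.id FROM OrderDetails od
    WHERE NOT EXISTS (
        SELECT 1 FROM Rejects r
        WHERE od.UnitDescription LIKE '%' || r.RejectCode || '%'
    )
""").fetchall()
```

Row 1 matches the code `AB1` and is filtered out; only row 2 survives. This scales to 184 codes with no change to the query text.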
SQL NOT LIKE with multiple values
[ "sql", "sql-server" ]
I was trying to execute this statement to delete records from the F30026 table following the rules listed. I'm able to run a select \* from and a select count(\*) from with that statement, but when running it as a delete it doesn't like it: it gets lost on the 'a' that defines F30026 as table a. ``` delete from CRPDTA.F30026 a where exists ( select b.IMLITM from CRPDTA.F4101 b where a.IELITM=b.IMLITM and substring(b.IMGLPT,1,2) not in ('FG','IN','RM')); ``` Thanks!
This looks like an inner join to me, see [MySQL - DELETE Syntax](https://dev.mysql.com/doc/refman/5.5/en/delete.html) ``` delete a from CRPDTA.F30026 as a inner join CRPDTA.F4101 as b on a.IELITM = b.IMLITM where substring(b.IMGLPT, 1, 2) not in ('FG', 'IN', 'RM') ``` Please note the alias syntax `as a` and `as b`.
Instead of `EXISTS`, you can match the id (like you do in the where clause): ``` delete from CRPDTA.F30026 a where a.IELITM IN ( select b.IMLITM from CRPDTA.F4101 b where a.IELITM=b.IMLITM and substring(b.IMGLPT,1,2) not in ('FG','IN','RM')); ```
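Both answers amount to replacing the aliased DELETE with a form the engine accepts. A small, portable sketch of the `IN`-subquery variant in SQLite (via Python's `sqlite3`), with invented item codes — note that SQLite likewise rejects an alias on the DELETE target, so none is used, and its substring function is spelled `substr`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE F30026 (IELITM TEXT);
CREATE TABLE F4101 (IMLITM TEXT, IMGLPT TEXT);
INSERT INTO F30026 VALUES ('A'), ('B');
INSERT INTO F4101 VALUES ('A', 'FG10'), ('B', 'ZZ10');
""")
# Delete rows whose matching item has a GL class outside the allowed set.
cur.execute("""
    DELETE FROM F30026
    WHERE IELITM IN (
        SELECT IMLITM FROM F4101
        WHERE substr(IMGLPT, 1, 2) NOT IN ('FG', 'IN', 'RM')
    )
""")
remaining = [r[0] for r in cur.execute("SELECT IELITM FROM F30026")]
```

Item 'B' has GL class 'ZZ', which is outside the allowed set, so it is removed; 'A' remains.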
Delete values in one table based on values in another table
[ "mysql", "sql" ]
So I'm solving the interactive tutorial [http://www.sqlzoo.net/wiki/SELECT\_from\_Nobel\_Tutorial](http://www.sqlzoo.net/wiki/SELECT%5ffrom%5fNobel%5fTutorial) However the bonus part doesn't have a solution. **Here's the question:** In which years was the Physics prize awarded but no Chemistry prize. ``` SELECT yr FROM nobel WHERE subject = 'Physics' AND yr NOT IN (SELECT yr FROM nobel WHERE subject = 'Literature') ``` I got the output ``` 1943 1935 1918 1914 ``` When the tutorial said the answer is ``` 1933 1924 1919 1917 ``` I don't understand why my solution is wrong EDIT: I saw the careless mistake that 'Literature' should be 'Chemistry' but it still seems to be invalid
There are two errors in your query: * You mistyped `Chemistry` (your query says `Literature` instead) * You did not ask for `distinct` results Here is your modified query: ``` SELECT DISTINCT yr FROM nobel WHERE subject = 'Physics' AND yr NOT IN (SELECT yr FROM nobel WHERE subject = 'Chemistry') ``` `DISTINCT` is important, because in 1933 the physics prize was awarded to multiple winners, namely Dirac and Schrödinger. These two rows in the table result in two entries for 1933 in the output, which you do not want.
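The effect of the missing `DISTINCT` is easy to reproduce with a toy table. A SQLite sketch via Python's `sqlite3`, using a few illustrative rows rather than the real Nobel data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE nobel (yr INTEGER, subject TEXT, winner TEXT);
-- 1933: two physics winners, no chemistry prize (illustrative rows only).
-- 1935: physics and chemistry both awarded.
INSERT INTO nobel VALUES
  (1933, 'Physics', 'Dirac'),
  (1933, 'Physics', 'Schrodinger'),
  (1935, 'Physics', 'Chadwick'),
  (1935, 'Chemistry', 'Joliot');
""")
years = [r[0] for r in cur.execute("""
    SELECT DISTINCT yr FROM nobel
    WHERE subject = 'Physics'
      AND yr NOT IN (SELECT yr FROM nobel WHERE subject = 'Chemistry')
""")]
```

Without `DISTINCT`, 1933 would appear twice (once per winner); with it, each qualifying year appears exactly once.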
> In which years was the Physics prize awarded but no Chemistry prize. :) Read Your task again... > WHERE subject = 'Physics' AND yr NOT IN > (SELECT yr FROM nobel WHERE subject = 'Literature')
What's wrong with my nested subquery?
[ "mysql", "sql", "subquery" ]
I'm trying to save an R data frame back to a SQL database with the following code: ``` channel <- odbcConnect("db") sqlSave(db, new_data, '[mydb].[dbo].mytable', fast=T, rownames=F, append=TRUE) ``` However, this returns the error "table not found on channel", while simultaneously creating an empty table with column names. Rerunning the code returns the error "There is already an object named 'mytable' in the database". This continues in a loop - can someone spot the error?
Is this about what your data set looks like? ``` MemberNum x t.x T.cal m.x T.star h.x h.m.x e.trans e.spend 1 2.910165e+12 0 0 205 8.77 52 0 0 0.0449161 ``` I've had this exact problem a few times. It has nothing to do with a table not being found on the channel. From my experience, sqlSave has trouble with dates and scientific notation. Try converting x to a factor: ``` new_data$x = as.factor(new_data$x) ``` and then sqlSave. If that doesn't work, try `as.numeric` and even `as.character` (even though this isn't the format that you want).
As a first shot, try running `sqlTables(db)` to check the tables in the db and their correct names. You could then potentially use this function's return values as the input to `sqlSave(...)`.
sqlSave error: table not found
[ "sql", "r", "rodbc" ]
I'm new to Oracle and I need help with this query. I have a table with data samples/records like: ``` name | datetime ----------- A | 20140414 10:00 A | 20140414 10:30 A | 20140414 11:00 B | 20140414 11:30 B | 20140414 12:00 A | 20140414 12:30 A | 20140414 13:00 A | 20140414 13:30 ``` And I need to "group"/get information into this form: ``` name | datetime_from | datetime_to ---------------------------------- A | 20140414 10:00 | 20140414 11:00 B | 20140414 11:30 | 20140414 12:00 A | 20140414 12:30 | 20140414 13:30 ``` I couldn't find any solution for a query similar to this. Could anyone please help me? Note: I don't want to use temporary tables. Thanks, Pavel
You need to find periods where the values are the same. The easiest way in Oracle is to use the `lag()` function, some logic, and aggregation: ``` select name, min(datetime), max(datetime) from (select t.*, sum(case when name <> prevname then 1 else 0 end) over (order by datetime) as cnt from (select t.*, lag(name) over (order by datetime) as prevname from table t ) t ) t group by name, cnt; ``` What this does is count, for a given value of `datetime`, the number of times that the name has switched on or before that datetime. This identifies the periods of "constancy", which are then used for aggregation.
``` SQL> with t (name, datetime) as 2 ( 3 select 'A', to_date('20140414 10:00','YYYYMMDD HH24:MI') from dual union all 4 select 'A', to_date('20140414 10:30','YYYYMMDD HH24:MI') from dual union all 5 select 'A', to_date('20140414 11:00','YYYYMMDD HH24:MI') from dual union all 6 select 'B', to_date('20140414 11:30','YYYYMMDD HH24:MI') from dual union all 7 select 'B', to_date('20140414 12:00','YYYYMMDD HH24:MI') from dual union all 8 select 'A', to_date('20140414 12:30','YYYYMMDD HH24:MI') from dual union all 9 select 'A', to_date('20140414 13:00','YYYYMMDD HH24:MI') from dual union all 10 select 'A', to_date('20140414 13:30','YYYYMMDD HH24:MI') from dual 11 ) 12 select name, min(datetime) datetime_from, max(datetime) datetime_to 13 from ( 14 select name, datetime, 15 datetime-(1/48)*(row_number() over(partition by name order by datetime)) dt 16 from t 17 ) 18 group by name,dt 19 order by 2,1 20 / N DATETIME_FROM DATETIME_TO - -------------- -------------- A 20140414 10:00 20140414 11:00 B 20140414 11:30 20140414 12:00 A 20140414 12:30 20140414 13:30 ```
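The gaps-and-islands idea behind both answers — group only *consecutive* rows that share the same name — can be checked outside the database. A plain-Python sketch over the question's sample data, where `itertools.groupby` plays the role of the `lag()`/`row_number()` trick:

```python
from itertools import groupby

rows = [
    ("A", "20140414 10:00"), ("A", "20140414 10:30"), ("A", "20140414 11:00"),
    ("B", "20140414 11:30"), ("B", "20140414 12:00"),
    ("A", "20140414 12:30"), ("A", "20140414 13:00"), ("A", "20140414 13:30"),
]
# groupby collapses consecutive runs only, which is exactly the "island"
# each SQL answer reconstructs; a plain GROUP BY name would wrongly merge
# the two separate A runs.
islands = [
    (name, run[0][1], run[-1][1])
    for name, grp in groupby(rows, key=lambda r: r[0])
    for run in [list(grp)]
]
```

The result matches the desired output: three islands, with the two A runs kept separate.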
Oracle query - without temporary table
[ "sql", "oracle" ]
I have created a cable database in Access, and I generate a report that lists the connectors on each end of a cable. Each cable has its own ID and 2 connector IDs associated with it. All the connectors are from the same table, which is linked to many other tables. I need 2 fields in one table (cable) associated with 2 records in the second table. My solution was to create 2 more tables: a primary connector table and a secondary connector table, each of which has all entries from the first connector table. Then I could create columns in the cable ID table for the primary and secondary IDs. The problem with this is that I have to maintain 2 extra tables with the same data. I'm new to database theory, but I was wondering: is there some advanced method that addresses this problem? Any help would be appreciated!
You need two tables--one you have already: ``` Cables ID autoincrement primary key ... ``` The `Cables` table should just describe the properties of the cables, and should know nothing of how a cable connects to other cables. The second table should be a list of possible connections between pairs of cables and optionally descriptive information about the connections: ``` Connections Cable1ID long not null constraint Connections_Cable1ID references Cables (ID) on delete cascade Cable2ID long not null constraint Connections_Cable2ID references Cables (ID) on delete cascade ConnectionDesc varchar(100) ``` This kind of table is known as a junction table, or a mapping table. It is used to implement a many-to-many relationship. Normally the relationship is between two *different* tables (e.g. students and courses), but it works just as well for relating two records within the *same* table. This design will let you join the `Cables`, `Connections`, and `Cables` (again) tables in a single query to get the report you need.
The alternative is to have one connectors table, and then two foreign keys to it in the cables table. ``` CREATE TABLE connector ( id INT PRIMARY KEY, ... ); CREATE TABLE cable ( primary_connector_id INT NOT NULL REFERENCES connector(id), secondary_connector_id INT NOT NULL REFERENCES connector(id), ... ); ```
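The "join the same table twice, once per foreign key" idea from the second answer can be demonstrated in a few lines of SQLite via Python's `sqlite3`; the connector labels here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE connector (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE cable (
    id INTEGER PRIMARY KEY,
    primary_connector_id   INTEGER NOT NULL REFERENCES connector(id),
    secondary_connector_id INTEGER NOT NULL REFERENCES connector(id)
);
INSERT INTO connector VALUES (1, 'DB9'), (2, 'RJ45');
INSERT INTO cable VALUES (10, 1, 2);
""")
# Join the connector table twice, once per foreign key -- no duplicate
# connector tables to maintain.
report = cur.execute("""
    SELECT c.id, p.label, s.label
    FROM cable c
    JOIN connector p ON p.id = c.primary_connector_id
    JOIN connector s ON s.id = c.secondary_connector_id
""").fetchall()
```

Each alias (`p`, `s`) acts as an independent view of the single connector table, which is what makes the two extra copies unnecessary.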
How can I associate one record with another in the same table?
[ "sql", "database", "ms-access", "entity-relationship" ]
I have 2 tables, ``` modules_templates ``` and ``` templates ``` The templates table has 75 records. I want to insert into modules\_templates one row per template, with template\_id in modules\_templates equal to template\_id from templates. I created this query: ``` INSERT INTO `modules_templates` (`module_template_id`,`module_template_modified`,`module_id`,`template_id`) VALUES ('','2014-04-14 10:07:03','300',(SELECT template_id FROM templates WHERE 1)) ``` and I'm getting the error `#1242 - Subquery returns more than 1 row`. How do I add all 75 rows in 1 query?
Try this ``` INSERT INTO `modules_templates` (`module_template_id`,`module_template_modified`,`module_id`,`template_id`) (SELECT '','2014-04-14 10:07:03','300',template_id FROM templates WHERE 1) ``` Your query didn't work because you were inserting values for a single row while the last field (the result of the subquery) returned multiple rows. Putting the single-row values inside the subquery's select list makes them repeat for each row the subquery returns.
Try this: ``` INSERT INTO `modules_templates` (`module_template_id`,`module_template_modified`,`module_id`,`template_id`) SELECT '','2014-04-14 10:07:03','300',template_id FROM templates WHERE 1 ```
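The `INSERT ... SELECT` pattern from both answers is easy to verify in miniature. A SQLite sketch via Python's `sqlite3`, with a trimmed-down schema (only the columns needed for the demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE templates (template_id INTEGER);
CREATE TABLE modules_templates (module_id INTEGER, template_id INTEGER);
INSERT INTO templates VALUES (1), (2), (3);
""")
# INSERT ... SELECT repeats the constant columns once per selected row,
# which is exactly what the VALUES(..., (subquery)) form cannot do.
cur.execute("""
    INSERT INTO modules_templates (module_id, template_id)
    SELECT 300, template_id FROM templates
""")
n = cur.execute("SELECT COUNT(*) FROM modules_templates").fetchone()[0]
```

Three template rows produce three inserted rows, each carrying the constant `module_id`; with 75 templates it would insert 75 rows in one statement.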
Inserting values from 1 table to another error
[ "mysql", "sql" ]
I have not found any documentation that explains the following behavior, both db and server level collation are CI (Case Insensitive), why is it still case sensitive in this aspect? ``` --Works SELECT CASE name WHEN 'a' THEN 'adam' ELSE 'bertrand' END AS name, COUNT(value) FROM ( SELECT 'a' AS name,1 AS value UNION SELECT 'b',1 UNION SELECT 'b',2 )a GROUP BY CASE name WHEN 'a' THEN 'adam' ELSE 'bertrand' END --Returns an Error Message, please note the "B" in Bertrand in the GROUP BY SELECT CASE name WHEN 'a' THEN 'adam' ELSE 'bertrand' END name, COUNT(value) FROM ( SELECT 'a' AS name,1 AS value UNION SELECT 'b',1 UNION SELECT 'b',2 )a GROUP BY CASE name WHEN 'a' THEN 'adam' ELSE 'Bertrand' END ``` The second query returns this error message. > Msg 8120, Level 16, State 1, Line 2 > > Column 'a.name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
This is more of an extended comment than a real answer. I believe this issue comes from how SQL Server attempts to evaluate the `case` expression. To prove that the server is case insensitive you can run the following two statements ``` SELECT CASE WHEN 'Bertrand' = 'bertrand' THEN 'true' ELSE 'false' end ``` - ``` DECLARE @base TABLE (NAME VARCHAR(1) ,value INT ) INSERT INTO @base Values('a',0),('b',0),('B',0) SELECT * FROM @base SELECT name, COUNT(value) AS Cnt FROM @base GROUP BY NAME ``` --- results: ![enter image description here](https://i.stack.imgur.com/8JDra.jpg) As you can see, even though the letter in the second row is lower case and in the third row is upper case, the group by clause ignores the case. Looking at the execution plan, there are two expressions: ``` Expr 1007 COUNT([value]) Expr 1004 CONVERT_IMPLICIT(int,[Expr1007],0) ``` --- Now when we change it to a `case` expression ``` SELECT CASE WHEN name = 'a' THEN 'adam' ELSE 'bertrand' END AS name, COUNT(value) AS Cnt FROM @base GROUP BY CASE WHEN name = 'a' THEN 'adam' ELSE 'bertrand' END ``` the execution plan shows 3 expressions: the 2 from above and a new one ``` Expr 1004 CASE WHEN [NAME]='a' THEN 'adam' ELSE 'bertrand' END ``` So at this point the aggregate function is no longer evaluating the value of the column `name`; it is evaluating the value of the expression. What I think is happening (I could be incorrect): when SQL Server converts the `CASE` expressions in the `SELECT` and `GROUP BY` clauses to expressions, it comes up with different expression values. In this case you might as well put `'bertrand'` in the `select` and `'charlie'` in the `group by` clause, because if the `CASE` expression is not a 100% match between the select and group by clauses, SQL Server will consider them different `Expr` (i.e. columns) that no longer match. --- Update: To take this one step further, the following statement will also fail.
``` SELECT CASE WHEN name = 'a' THEN 'adam' ELSE UPPER('bertrand') END AS name ,COUNT(value) AS Cnt FROM @base GROUP BY CASE WHEN name = 'a' THEN 'adam' ELSE UPPER('Bertrand') END ``` Even wrapping the different case strings in `UPPER()` function, SQL Server is still unable to process it.
You have found something that is genuinely weird, but I think the problem is that you are using the case statement at all in the group by statement. It should be: ``` SELECT CASE name WHEN 'a' THEN 'adam' ELSE 'bertrand' END AS name FROM ( SELECT 'a' AS name,1 AS value UNION SELECT 'b',1 UNION SELECT 'b',2 ) a GROUP BY name ``` The group by should apply to the entire table and not individual rows. I could be missing some reason to do this, but I don't think it makes sense to conditionally group by a value. I am more surprised that the first one works at all than that the second one doesn't work. Comparing 'a' = 'A' is subtly different from comparing a column to another column. SQL server doesn't seem to use the collation settings in this check to see if the column is in the group by. The error message you receive from your second query is saying 'this column in the select is not the same as the column in the group by' and not 'these values are not equal'.
Group by is case sensitive in T-SQL even though db and server collations are CI
[ "sql", "sql-server", "t-sql" ]
I am creating multiple views in my code and each time the code is run, I would like to drop all the materialized views generated thus far. Is there any command that will list all the materialized views for Postgres or drop all of them?
### Pure SQL Show all: ``` SELECT oid::regclass::text FROM pg_class WHERE relkind = 'm'; ``` Names are automatically double-quoted and schema-qualified where needed according to your current [`search_path`](https://stackoverflow.com/questions/9067335/how-to-create-table-inside-specific-schema-by-default-in-postgres/9067777#9067777) in the cast from `regclass` to `text`. In the system catalog `pg_class` materialized views are tagged with `relkind = 'm'`. [The manual:](https://www.postgresql.org/docs/current/catalog-pg-class.html) > ``` > m = materialized view > ``` To **drop** all, you can generate the needed SQL script with this query: ``` SELECT 'DROP MATERIALIZED VIEW ' || string_agg(oid::regclass::text, ', ') FROM pg_class WHERE relkind = 'm'; ``` Returns: ``` DROP MATERIALIZED VIEW mv1, some_schema_not_in_search_path.mv2, ... ``` One [`DROP MATERIALIZED VIEW`](https://www.postgresql.org/docs/current/sql-dropmaterializedview.html) statement can take care of multiple materialized views. You may need to add `CASCADE` at the end if you have nested views. Inspect the resulting DDL script to be sure before executing it. Are you sure you want to drop **all** MVs from all schemas in the db? And do you have the required privileges to do so? (Currently there are no materialized views in a fresh standard installation.) 
### Meta command in psql In the default interactive terminal `psql`, you can use the meta-command: ``` \dm ``` Executes this query on the server: ``` SELECT n.nspname as "Schema", c.relname as "Name", CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' WHEN 'p' THEN 'partitioned table' WHEN 'I' THEN 'partitioned index' END as "Type", pg_catalog.pg_get_userbyid(c.relowner) as "Owner" FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind IN ('m','') AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' AND n.nspname !~ '^pg_toast' AND pg_catalog.pg_table_is_visible(c.oid) ORDER BY 1,2; ``` Which can be reduced to: ``` SELECT n.nspname as "Schema" , c.relname as "Name" , pg_catalog.pg_get_userbyid(c.relowner) as "Owner" FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind = 'm' AND n.nspname <> 'pg_catalog' AND n.nspname <> 'information_schema' AND n.nspname !~ '^pg_toast' AND pg_catalog.pg_table_is_visible(c.oid) ORDER BY 1,2; ```
You may also use `pg_matviews` system view to list all the materialized views. This view will give you information including the materialized view definition, if the materialized view is populated or is empty (`ispopulated` column) ``` select * from pg_matviews where matviewname = 'my_materialized_view'; ```
Is there a postgres command to list/drop all materialized views?
[ "sql", "postgresql", "ddl", "materialized-views", "drop" ]
I have a `checkboxlist`. The selected (checked) items are stored in `List<string> selected`. For example, the selected values are `monday,tuesday,thursday` out of 7 days. I am converting the `List<>` to a comma-separated `string`, i.e. ``` string a= "monday,tuesday,thursday" ``` Now, I am passing this value to a stored procedure as a string. I want to fire a query like: ``` Select * from tblx where days = 'Monday' or days = 'Tuesday' or days = 'Thursday' ``` My question is: how do I split the string in the stored procedure?
If you pass a comma-separated (or any-separator) string to a stored procedure and want to use it in a query, you must first split that string. Here is an example: ``` DECLARE @str VARCHAR(500) = 'monday,tuesday,thursday' CREATE TABLE #Temp (tDay VARCHAR(100)) WHILE LEN(@str) > 0 BEGIN DECLARE @TDay VARCHAR(100) IF CHARINDEX(',',@str) > 0 SET @TDay = SUBSTRING(@str,0,CHARINDEX(',',@str)) ELSE BEGIN SET @TDay = @str SET @str = '' END INSERT INTO #Temp VALUES (@TDay) SET @str = REPLACE(@str,@TDay + ',' , '') END SELECT * FROM tblx WHERE days IN (SELECT tDay FROM #Temp) ```
Try this: ``` CREATE FUNCTION [dbo].[ufnSplit] (@string NVARCHAR(MAX)) RETURNS @parsedString TABLE (id NVARCHAR(MAX)) AS BEGIN DECLARE @separator NCHAR(1) SET @separator=',' DECLARE @position int SET @position = 1 SET @string = @string + @separator WHILE charindex(@separator,@string,@position) <> 0 BEGIN INSERT into @parsedString SELECT substring(@string, @position, charindex(@separator,@string,@position) - @position) SET @position = charindex(@separator,@string,@position) + 1 END RETURN END ``` Then use this function, ``` Select * from tblx where days IN (SELECT id FROM [dbo].[ufnSplit]('monday,tuesday,thursday')) ```
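When the application layer controls the call, an alternative to splitting inside the procedure is to split client-side and bind one parameter per value. This sidesteps string parsing in T-SQL entirely and keeps the values safely parameterized. A SQLite sketch via Python's `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tblx (days TEXT);
INSERT INTO tblx VALUES ('monday'), ('wednesday'), ('thursday');
""")
a = "monday,tuesday,thursday"
# Split client-side, then build one '?' placeholder per value so the
# values themselves are bound, not interpolated into the SQL text.
values = a.split(",")
placeholders = ",".join("?" for _ in values)
rows = cur.execute(
    f"SELECT days FROM tblx WHERE days IN ({placeholders})", values
).fetchall()
```

Only the placeholder list is built dynamically; the day names travel as bound parameters, matching `monday` and `thursday` here.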
How to separate (split) string with comma in SQL Server stored procedure
[ "asp.net", "sql", "sql-server", "stored-procedures" ]
I am trying to make a stored procedure for the query I have: ``` SELECT count(DISTINCT account_number) from account NATURAL JOIN branch WHERE branch.branch_city='Albany'; ``` or ``` SELECT count(*) from ( select distinct account_number from account NATURAL JOIN branch WHERE branch.branch_city='Albany' ) as x; ``` I have written this stored procedure, but it returns the count of all the records in the column, not the result of the query; also, I need to write the stored procedure in plpgsql, not in SQL. ``` CREATE FUNCTION account_count_in(branch_city varchar) RETURNS int AS $$ PERFORM DISTINCT count(account_number) from (account NATURAL JOIN branch) WHERE (branch.branch_city=branch_city); $$ LANGUAGE SQL; ``` Help me write this type of stored procedure in plpgsql that returns the number of accounts managed by branches located in the specified city.
The PL/pgSQL version can look like this: ``` CREATE FUNCTION account_count_in(_branch_city text) RETURNS int LANGUAGE plpgsql AS $func$ BEGIN RETURN ( SELECT count(DISTINCT a.account_number)::int FROM account a NATURAL JOIN branch b WHERE b.branch_city = _branch_city ); END $func$; ``` Call: ``` SELECT account_count_in('Albany'); ``` Avoid naming collisions. Make the parameter name unique or table-qualify columns in the query. I did both. Just `RETURN` the result for a simple query like this. The function is declared to return `integer`. Make sure the return type matches by casting the `bigint` to `int`. `NATURAL JOIN` is short syntax, but it may not be the safest form. Later changes to underlying tables can easily break this. Better to join on column names explicitly. `PERFORM` is only valid in PL/pgSQL functions, not in SQL functions and not useful here.
you can use this template ``` CREATE OR REPLACE FUNCTION a1() RETURNS integer AS $BODY$ BEGIN return (select 1); END $BODY$ LANGUAGE plpgsql VOLATILE COST 100; select a1() ```
Stored procedure to return count
[ "sql", "postgresql", "stored-procedures", "plpgsql" ]
Given the following table: ``` id message owner_id counter_party_id datetime_col 1 "message 1" 4 8 2014-04-01 03:58:33 2 "message 2" 4 12 2014-04-02 10:27:34 3 "message 3" 4 8 2014-04-03 09:34:38 4 "message 4" 4 12 2014-04-06 04:04:04 ``` How to get the most recent counter\_party number and then get all the messages from that counter\_party id? output: ``` 2 "message 2" 4 12 2014-04-02 10:27:34 4 "message 4" 4 12 2014-04-06 04:04:04 ``` I think a double select must work for that but I don't know exactly how to perform this. Thanks
It should be the last message, so either max(id) or the latest datetime. In this case counter\_party\_id is just a user id; the most recent counter\_party\_id does not mean the max counter\_party\_id. (I found the solution in the answers and I gave props.) ``` SELECT * FROM yourTable WHERE counter_party_id = ( SELECT counter_party_id FROM yourTable WHERE id = ( SELECT MAX(id) FROM yourTable ) ) ``` or ``` SELECT * FROM yourTable WHERE counter_party_id = ( SELECT counter_party_id FROM yourTable ORDER BY datetime_col DESC LIMIT 1) ``` The reason is that I simplified the example, but I had to implement this in a much more complicated schema.
A number of ways exist to do this. This is probably the most straightforward: ``` SELECT * FROM YourTable WHERE counter_party_id = (SELECT MAX(counter_party_id) FROM YourTable); ``` You could also select the MAX into a variable beforehand: ``` DECLARE @m int SET @m = (SELECT MAX(counter_Party_id) FROM YourTable); ``` And use @m in your WHERE clause. Depending on which database system you're using, other tools exist that can help you as well.
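Since "most recent" here means the latest datetime rather than the largest id, the two-step approach — find the counter party of the newest message, then fetch all of that party's messages — can be verified against the question's sample data with SQLite via Python's `sqlite3` (table and column names as in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE msgs (id INTEGER, counter_party_id INTEGER, datetime_col TEXT);
INSERT INTO msgs VALUES
  (1, 8,  '2014-04-01 03:58:33'),
  (2, 12, '2014-04-02 10:27:34'),
  (3, 8,  '2014-04-03 09:34:38'),
  (4, 12, '2014-04-06 04:04:04');
""")
# Inner subquery picks the counter_party of the newest message;
# the outer query then returns every message from that party.
ids = [r[0] for r in cur.execute("""
    SELECT id FROM msgs
    WHERE counter_party_id = (
        SELECT counter_party_id FROM msgs
        ORDER BY datetime_col DESC LIMIT 1
    )
    ORDER BY id
""")]
```

The newest message (id 4) belongs to counter party 12, so messages 2 and 4 are returned, matching the expected output in the question.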
SQL select id from a table to query again all at once
[ "sql", "select", "multi-query" ]
I'm currently stuck on a SQL query I'm trying to put together. Here is the table layout: --- ## Table 1: **tblUsers** *this table contains more columns, but not necessary in example* * UserID (int) **Sample Data:** ``` ------ | ID | ------ | 1 | ------ | 2 | ------ ``` --- ## Table 2: **tblColumns** * ColumnID (int) * ColumnName (nvarchar) **Sample Data:** ``` -------------------- | ID | Column Name | -------------------- | 1 | Name | -------------------- | 2 | Email | -------------------- | 3 | Age | -------------------- ``` --- ## Table 3: **tblColumnData** * ColumnDataID (int) * UserID (int) (FK) * ColumnID (int) (FK) * ColumnDataContent (nvarchar) **Sample Data:** ``` ---------------------------------------------- | ID | UserID | ColumnID | ColumnDataContent | ---------------------------------------------- | 1 | 1 | 1 | John Smith | ---------------------------------------------- | 2 | 1 | 2 | john@email.com | ---------------------------------------------- | 3 | 1 | 3 | 45 | ---------------------------------------------- | 4 | 2 | 2 | james@email.com | ---------------------------------------------- | 5 | 2 | 3 | 30 | ---------------------------------------------- ``` --- So you will see above, UserID:2 doesn't have a record in the tblColumnData table for ColumnID 1 which is the NAME column. I still need this to appear in the results even if it's NULL. 
So I'm trying to get the data to return like this: ``` ------------------------------------------------------ | UserID | ColumnID | ColumnName | ColumnDataContent | ------------------------------------------------------ | 1 | 1 | Name | John Smith | ------------------------------------------------------ | 1 | 2 | Email | john@email.com | ------------------------------------------------------ | 1 | 3 | Age | 45 | ------------------------------------------------------ | 2 | 1 | Name | NULL or '' | ------------------------------------------------------ | 2 | 2 | Email | james@email.com | ------------------------------------------------------ | 2 | 3 | Age | 30 | ------------------------------------------------------ ``` The select I have looks like this: ``` SELECT cd.UserID,c.ColumnID,c.ColumnName,cd.ColumnDataContent FROM tblColumns c INNER JOIN tblColumnData cd ON c.ColumnID=cd.ColumnID ``` I have tried INNER, OUTER, LEFT.... etc all the different joins but with no success. Hope someone can help :) Thanks
I think this would help you: ``` with userCTE as ( select u.userId , c.columnId from tblUsers as u cross join tblColumns as c ) select u.* , Coalesce(cd.ColumnDatacontent, 'N/A') AS columnDataContent from userCTE as u left join tblColumnData as cd on u.columnId = cd.columnId and u.userID = cd.userId ``` What remains is to select whichever columns are interesting to you; this is only a general sample of how to get all the needed rows. You can also use the `COALESCE` or `ISNULL` function to convert NULL values into more specific strings, if you need to.
With Fiddle down we're all flying blind, but this is what I'd try first there if it were up. ``` SELECT tblUsers.UserID, tblColumns.ColumnID, tblColumns.ColumnName, tblColumnData.ColumnDataContent FROM tblUsers, tblColumns LEFT JOIN tblColumnData ON tblColumnData.ColumnID = tblColumns.ColumnID AND tblColumnData.UserID = tblUsers.UserID ; ``` You want the [Cartesian Product](http://en.wikipedia.org/wiki/Join_%28SQL%29#Cross_join) of Users and Columns, left joined to the Data table on ColumnID.
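Both answers use the same shape: build every (user, column) pair first, then left-join the data so missing attributes surface as NULL. A runnable SQLite sketch via Python's `sqlite3`, using the question's sample schema trimmed to two columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tblUsers (UserID INTEGER);
CREATE TABLE tblColumns (ColumnID INTEGER, ColumnName TEXT);
CREATE TABLE tblColumnData (UserID INTEGER, ColumnID INTEGER, ColumnDataContent TEXT);
INSERT INTO tblUsers VALUES (1), (2);
INSERT INTO tblColumns VALUES (1, 'Name'), (2, 'Email');
INSERT INTO tblColumnData VALUES (1, 1, 'John Smith'), (1, 2, 'john@email.com'),
                                 (2, 2, 'james@email.com');
""")
# CROSS JOIN builds every (user, column) pair; LEFT JOIN then fills in
# the data where it exists and NULL where it does not.
rows = cur.execute("""
    SELECT u.UserID, c.ColumnID, c.ColumnName, cd.ColumnDataContent
    FROM tblUsers u
    CROSS JOIN tblColumns c
    LEFT JOIN tblColumnData cd
           ON cd.UserID = u.UserID AND cd.ColumnID = c.ColumnID
    ORDER BY u.UserID, c.ColumnID
""").fetchall()
```

User 2 has no Name record, yet still gets a row with NULL content, which is exactly the gap an INNER JOIN would drop.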
SQL Query Join Issue
[ "", "sql", "sql-server", "join", "" ]
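A runnable sketch of the cross-join-plus-left-join idea from this entry. It uses Python's built-in sqlite3 instead of the asker's database; the table and column names come from the question, but treat it as an illustration of the pattern rather than a drop-in answer.

```python
import sqlite3

# Build the three tables from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tblUsers(UserID INTEGER);
CREATE TABLE tblColumns(ColumnID INTEGER, ColumnName TEXT);
CREATE TABLE tblColumnData(UserID INTEGER, ColumnID INTEGER, ColumnDataContent TEXT);
INSERT INTO tblUsers VALUES (1), (2);
INSERT INTO tblColumns VALUES (1, 'Name'), (2, 'Email'), (3, 'Age');
INSERT INTO tblColumnData VALUES
  (1, 1, 'John Smith'), (1, 2, 'john@email.com'), (1, 3, '45'),
  (2, 2, 'james@email.com'), (2, 3, '30');
""")

# Every user paired with every column, then the data attached where it exists.
rows = cur.execute("""
SELECT u.UserID, c.ColumnID, c.ColumnName,
       COALESCE(cd.ColumnDataContent, '') AS ColumnDataContent
FROM tblUsers u
CROSS JOIN tblColumns c
LEFT JOIN tblColumnData cd
       ON cd.UserID = u.UserID AND cd.ColumnID = c.ColumnID
ORDER BY u.UserID, c.ColumnID
""").fetchall()

for row in rows:
    print(row)
```

User 2 has no Name row in tblColumnData, yet still shows up with an empty ColumnDataContent; that is exactly the row a plain INNER JOIN drops.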
I have the following scenario: I have 2 columns in the agents table, the first column is called AgentID and the second column is called AgentName. A few AgentID values start with an "A" and a few start with an "M". What I want to do is the following: ``` AgentID AgentName A123 Name1 M123 Name2 A234 Name3 Aagents AAgentName Magents MAgentName A123 Name1 M123 Name2 A234 Name3 NULL NULL ``` Is this possible? I know it is weird but my boss wants it to be this way!
Try this ``` WITH AAgents AS ( SELECT ROW_NUMBER() over (order by AgentID) AS RN, AgentID AS Aagents, AgentName As AAgentName FROM Agents WHERE LEFT(AgentID,1)='A' ), MAgents As ( SELECT ROW_NUMBER() over (order by AgentID) AS RN, AgentID AS Magents, AgentName As MAgentName FROM Agents WHERE LEFT(AgentID,1)='M' ) SELECT Aagents, AAgentName, Magents, MAgentName FROM AAgents FULL OUTER JOIN MAgents ON AAgents.RN=MAgents.RN ``` [**SQL FIDDLE DEMO**](http://sqlfiddle.com/#!3/cd984/3)
You can do a full outer join between 2 subqueries (one for A agents, one for M agents) and join on `ROW_NUMBER()` to have nulls on one side (the one with less records): ``` select A.AgentID Aagents, A.AgentName AAgentName, M.AgentID Magents, M.AgentName MAgentName from (select *, ROW_NUMBER() over (order by AgentID) rn from Agents where AgentID like 'A%') A full outer join (select *, ROW_NUMBER() over (order by AgentID) rn from Agents where AgentID like 'M%') M on A.rn = M.rn ```
Query divides a column into multiple columns
[ "", "sql", "sql-server", "t-sql", "" ]
I can't understand Firebird's GROUP BY logic. This query: ``` SELECT t.id FROM T1 t INNER JOIN T2 j ON j.id = t.jid WHERE t.id = 1 GROUP BY t.id ``` works perfectly. But when I try to get other fields: ``` SELECT * FROM T1 t INNER JOIN T2 j ON j.id = t.jid WHERE t.id = 1 GROUP BY t.id ``` I get the error: `Invalid expression in the select list (not contained in either an aggregate function or the GROUP BY clause)`
When you use `GROUP BY` in your query, the field or fields specified are used as 'keys', and data rows are grouped based on unique combinations of those fields. In the result set, every such unique combination has one and only one row. In your case, the only identifier in the group is `t.id`. Now consider that you have 2 records in the table, both with `t.id = 1`, but having different values for another column, say, `t.name`. If you try to select both `id` and `name` columns, it directly contradicts the constraint that one group can have only one row. That is why you cannot select any field apart from the group key. For aggregate functions it is different. That is because, when you sum or count values or get the maximum, the operation is performed once per group of the `id` field, collapsing the other columns down to a single value. So there is no issue, because there can only be one answer to, say, the count of all names with a particular id. In conclusion, if you want to show a column in the results, you need to group by it. This will, however, make the grouping more granular, which may not be desirable. In that case, you can do something like this: ``` select * from T1 t where t.id in (SELECT t.id FROM T1 t INNER JOIN T2 j ON j.id = t.jid WHERE t.id = 1 GROUP BY t.id) ```
When you use a `GROUP BY` clause in a `SELECT`, you should use only aggregate functions or columns that are listed in the `GROUP BY` clause. More about the `GROUP BY` clause: <http://www.firebirdsql.org/manual/nullguide-aggrfunc.html> As an example: ``` SELECT Max(t.jid), t.id FROM T1 t INNER JOIN T2 j ON j.id = t.jid WHERE t.id = 1 GROUP BY t.id ```
Firebird group clause
[ "", "sql", "firebird", "" ]
I'm trying to solve #13 on <http://www.sqlzoo.net/wiki/The_JOIN_operation> "List every match with the goals scored by each team as shown. This will use "CASE WHEN" which has not been explained in any previous exercises." **Here's my query:** ``` SELECT game.mdate, game.team1, SUM(CASE WHEN goal.teamid = game.team1 THEN 1 ELSE 0 END) AS score1, game.team2, SUM(CASE WHEN goal.teamid = game.team2 THEN 1 ELSE 0 END) AS score2 FROM game INNER JOIN goal ON matchid = id GROUP BY game.id ORDER BY mdate,matchid,team1,team2 ``` I get the result "Too few rows". I don't understand what part I got wrong.
An `INNER JOIN` only returns games where there have been goals, i.e. matches between the `goal` and `game` table. What you need is a `LEFT JOIN`. You need all the rows from your first table, `game` but they don't need to match all the rows in `goal`, as per the 0-0 comment I made on your question: ``` SELECT game.mdate, game.team1, SUM(CASE WHEN goal.teamid = game.team1 THEN 1 ELSE 0 END) AS score1, game.team2, SUM(CASE WHEN goal.teamid = game.team2 THEN 1 ELSE 0 END) AS score2 FROM game LEFT JOIN goal ON matchid = id GROUP BY game.id,game.mdate, game.team1, game.team2 ORDER BY mdate,matchid,team1,team2 ``` This returns the 0-0 result between Portugal and Spain on 27th June, which your initial answer missed out.
As there are no columns with the same name in both tables, there is no need to specify which table each column comes from. ``` SELECT mdate, team1, SUM(CASE WHEN teamid = team1 THEN 1 ELSE 0 END) AS score1, team2, SUM(CASE WHEN teamid = team2 THEN 1 ELSE 0 END) AS score2 FROM game LEFT JOIN goal ON matchid = id GROUP BY mdate, team1, team2 ORDER BY mdate,matchid,team1,team2 ```
What's wrong with my query with a CASE statement
[ "", "mysql", "sql", "select", "group-by", "case", "" ]
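The INNER JOIN vs LEFT JOIN difference in this entry can be demonstrated with a tiny dataset. The sketch below uses Python's sqlite3 with invented fixture rows (a 0-0 game with no rows in `goal` is the match an INNER JOIN loses); it is an illustration, not the actual sqlzoo data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE game(id INTEGER, mdate TEXT, team1 TEXT, team2 TEXT);
CREATE TABLE goal(matchid INTEGER, teamid TEXT);
-- Game 1 finished 0-0, so it has no rows at all in goal.
INSERT INTO game VALUES (1, '2012-06-27', 'POR', 'ESP'), (2, '2012-06-28', 'GER', 'ITA');
INSERT INTO goal VALUES (2, 'ITA'), (2, 'ITA'), (2, 'GER');
""")

rows = cur.execute("""
SELECT g.mdate, g.team1,
       SUM(CASE WHEN gl.teamid = g.team1 THEN 1 ELSE 0 END) AS score1,
       g.team2,
       SUM(CASE WHEN gl.teamid = g.team2 THEN 1 ELSE 0 END) AS score2
FROM game g
LEFT JOIN goal gl ON gl.matchid = g.id
GROUP BY g.id, g.mdate, g.team1, g.team2
ORDER BY g.mdate
""").fetchall()

for row in rows:
    print(row)
```

With an INNER JOIN the first row would vanish; with the LEFT JOIN it comes back as a 0-0 result, because the unmatched goal columns are NULL and the CASE falls through to the ELSE 0 branch.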
I have 3 tables as follow. ``` DATA(entity_id, crv_name, data_cnt_id) PROCESSED_DATA(entity_id, crv_name, run_id) RUNS(run_id, data_cnt_id) ``` Both `DATA` and `PROCESSED_DATA` have similar columns, except for the last one. `PROCESSED_DATA` also has significantly more rows than `DATA`. `DATA` is about old crv\_name values, and `PROCESSED_DATA` contains new crv\_name values. I am trying to write a query which would return something like this, for a specific run (identified with `RUN_ID` in `PROCESSED_DATA` and `RUNS` and with `DATA_CNT_ID` in `DATA`) : ``` +----------------------------------------------------------------+ | PROCESSED_DATA.ENTITY_ID PROCESSED_DATA.CRV_NAME DATA.CRV_NAME | +----------------------------------------------------------------+ | entity123 123_new_name 123_old_name | | entity456 456_new_name 456_old_name | | entity789 789_new_name null | +----------------------------------------------------------------+ ``` However, I can't get that last row (ie an entity which doesn't have an old crv\_name, and therefore is not present in the table `DATA`) with my query below : ``` select pd.entity_id, pd.crv_name, d.crv_name from processed_data pd join data d on d.entity_id = pd.entity_id join runs r on r.run_id = pd.run_id and r.data_cnt_id = d.data_cnt_id where r.run_id = 7 ``` Could you help me to improve my query ?
You need to outer join the records: ``` select pd.entity_id, pd.crv_name, d.crv_name from processed_data pd left join data d on d.entity_id = pd.entity_id left join runs r on r.run_id = pd.run_id and r.data_cnt_id = d.data_cnt_id and r.run_id = 7; ``` In case there must be a run 7: ``` select pd.entity_id, pd.crv_name, d.crv_name from processed_data pd inner join runs r on r.run_id = pd.run_id and r.run_id = 7 left join data d on d.entity_id = pd.entity_id and d.data_cnt_id = r.data_cnt_id; ```
Change `join` to `left join`: ``` select pd.entity_id, pd.crv_name, d.crv_name from processed_data pd left join data d on d.entity_id = pd.entity_id join runs r on r.run_id = pd.run_id and r.data_cnt_id = d.data_cnt_id where r.run_id = 7 ```
SQL query not returning some rows
[ "", "sql", "" ]
I have a database where the months and years are saved in different columns as integers. The query is working fine, but if the user of the application selects a timespan that crosses a year boundary, the query doesn't work as it should. ``` WORKS FINE: 01-2014 <--> 04-2014 DOESN'T WORK: 12-2013 <--> 02-2014 ``` Here is the original working query of the app: ``` SELECT tbl_report.YEAR, tbl_report.MONTH, tbl_question.ID, tbl_question.QUESTION , tbl_answer.ANSWER, tbl_question.TYPEID FROM .... WHERE tbl_report.CITYID = 'london' AND tbl_report.YEAR >= 2013 AND tbl_report.YEAR <= 2013 AND tbl_report.MONTH >= 8 AND tbl_report.MONTH <= 9 ``` ***How can I build a query that gives the right results back to the user***?
You need separate conditions for months that are in the same year as the start and stop year, and the other months. For example, if you want records from 2013-10 and forward, you want the months 10, 11 and 12 from 2013, but all the months in the following years. Example to get the records from 2012-05 to 2014-02: ``` where tbl_report.CITYID = 'london' and ((tbl_report.YEAR = 2012 and tbl_report.MONTH >= 5) or tbl_report.YEAR > 2012) and (tbl_report.YEAR < 2014 or (tbl_report.YEAR = 2014 and tbl_report.MONTH <= 2)) ```
How much sense does this make to you? Give me a number less than or equal to 2 and greater than or equal to 12. ``` tbl_report.MONTH >= 12 AND tbl_report.MONTH <= 2 ``` That will never be true! As for a solution: Try this in your `WHERE` clause: ``` WHERE tbl_report.CITYID = 'london' AND ( ((tbl_report.YEAR = 2013 AND tbl_report.MONTH >= 12) OR tbl_report.YEAR > 2013) AND ((tbl_report.YEAR = 2014 AND tbl_report.MONTH <= 2) OR tbl_report.YEAR < 2014) ) ``` **\* Updated to support multiple year spans. Not specified in the original question, but it is the more robust query.**
MySQL integer as Date in where clause
[ "", "mysql", "sql", "function", "" ]
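The compound year/month condition from the accepted answer can be verified with a few fixture rows. This sketch uses Python's sqlite3 and hard-codes the 12-2013 to 02-2014 span from the question; the table layout is reduced to the columns that matter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tbl_report(CITYID TEXT, YEAR INTEGER, MONTH INTEGER);
INSERT INTO tbl_report VALUES
  ('london', 2013, 11), ('london', 2013, 12),
  ('london', 2014, 1),  ('london', 2014, 2), ('london', 2014, 3);
""")

# Span 12-2013 .. 02-2014: the lower and upper bound each get their own
# combined (year, month) condition instead of comparing months in isolation.
rows = cur.execute("""
SELECT YEAR, MONTH FROM tbl_report
WHERE CITYID = 'london'
  AND (YEAR > 2013 OR (YEAR = 2013 AND MONTH >= 12))
  AND (YEAR < 2014 OR (YEAR = 2014 AND MONTH <= 2))
ORDER BY YEAR, MONTH
""").fetchall()
print(rows)
```

Only the three in-span months survive; the naive `MONTH >= 12 AND MONTH <= 2` form would return nothing at all.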
Using Sql Server 2012 I want to query a table to only fetch rows where certain columns are not null or don't contain an empty string. The columns I need to check for null and ' ' all start with either **col\_as** or **col\_m** followed by two digits. At the moment I write `where col_as01 is not null or` .... which becomes difficult to maintain due to the quantity of columns I have to check. Is there a more elegant way to do this? Some kind of looping? I also use ISNULL(NULLIF([col\_as01], ''), Null) AS [col\_as01] in the select stmt to get rid of the empty string values. thank you for your help.
This builds the `WHERE` clause dynamically from the column metadata, so you don't have to type each column by hand. You should fill in the blanks: replace `Primebet` with your database name and `YourTable` with your table name, then complete the rest of the query before executing it. ``` select @myWhereString =stuff((select 'or isnull('+COLUMN_NAME+','''') = '''' ' as [text()] from Primebet.INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'YourTable' and (column_name like 'col_as%' or column_name like 'col_m%') for xml path('')),1,3,'') set @myWhereString ='rest of your query'+ @myWhereString exec executesql with your query ```
You can use something like this `WHERE DATALENGTH(col_as01) > 0` That will implicitly exclude null values, and the length greater 0 will guarantee you to retrieve non empty strings. PS: You could also use `LEN` instead of `DATALENGTH` but that will trim spaces in your string at the beginning and end so you would not get values that only contain spaces then.
TSQL - dynamic WHERE clause to check for NULL for certain columns
[ "", "sql", "t-sql", "null", "sql-server-2012", "" ]
So I have the following where conditions ``` sessions = sessions.Where(y => y.session.SESSION_DIVISION.Any(x => x.DIVISION.ToUpper().Contains(SearchContent)) || y.session.ROOM.ToUpper().Contains(SearchContent) || y.session.COURSE.ToUpper().Contains(SearchContent)); ``` I want to split this into multiple lines based on whether a string is empty, for example: ``` if (!String.IsNullOrEmpty(Division)) { sessions = sessions.Where(y => y.session.SESSION_DIVISION.Any(x => x.DIVISION.ToUpper().Contains(SearchContent))); } if (!String.IsNullOrEmpty(Room)) { // this should be OR sessions = sessions.Where(y => y.session.ROOM.ToUpper().Contains(SearchContent)); } if (!String.IsNullOrEmpty(course)) { // this should be OR sessions = sessions.Where(y => y.session.COURSE.ToUpper().Contains(SearchContent)); } ``` If you notice, I want to add multiple OR conditions split based on whether the Room, course, and Division strings are empty or not.
There are a few ways to go about this: 1. Apply the "where" to the original query each time, and then `Union()` the resulting queries. ``` var queries = new List<IQueryable<Session>>(); if (!String.IsNullOrEmpty(Division)) { queries.Add(sessions.Where(y => y.session.SESSION_DIVISION.Any(x => x.DIVISION.ToUpper().Contains(SearchContent)))); } if (!String.IsNullOrEmpty(Room)) { // this should be OR queries.Add(sessions.Where(y => y.session.ROOM.ToUpper().Contains(SearchContent))); } if (!String.IsNullOrEmpty(course)) { // this should be OR queries.Add(sessions.Where(y => y.session.COURSE.ToUpper().Contains(SearchContent))); } sessions = queries.Aggregate(sessions.Where(y => false), (q1, q2) => q1.Union(q2)); ``` 2. Do Expression manipulation to merge the bodies of your lambda expressions together, joined by `OrElse` expressions. This is complicated unless you've already got libraries to help you: after joining the bodies, you also have to traverse the expression tree to replace the parameter expressions. It can get sticky. See [this post](https://stackoverflow.com/a/50414456/120955) for details. 3. Use a tool like [PredicateBuilder](http://www.albahari.com/nutshell/predicatebuilder.aspx) to do #2 for you.
`.Where()` assumes logical `AND`, and as far as I know there's no out-of-the-box solution to do it. If you want to separate `OR` statements, you may want to look into using [Predicate Builder](http://www.albahari.com/nutshell/predicatebuilder.aspx) or [Dynamic Linq](http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx).
LINQ: Split Where OR conditions
[ "", "sql", ".net", "linq", "where-clause", "multiple-conditions", "" ]
``` client_ref matter_suffix date_opened 1 1 1983-11-15 00:00:00.000 1 1 1983-11-15 00:00:00.000 1 6 2002-11-18 00:00:00.000 1 7 2005-08-01 00:00:00.000 1 9 2008-07-01 00:00:00.000 1 10 2008-08-22 00:00:00.000 2 1 1983-11-15 00:00:00.000 2 2 1992-04-21 00:00:00.000 3 1 1983-11-15 00:00:00.000 3 2 1987-02-26 00:00:00.000 4 1 1989-01-07 00:00:00.000 4 2 1987-03-15 00:00:00.000 ``` I have the above table, and I simply want to return the most recent matter opened for each client, in the below format: ``` client_ref matter_suffix Most Recent 1 10 2008-08-22 00:00:00.000 2 2 1992-04-21 00:00:00.000 3 2 1987-02-26 00:00:00.000 4 1 1989-01-07 00:00:00.000 ``` I can perform a very simple query to return the most recent (shown below), but whenever I try to include the matter\_suffix data (necessary), I have problems. Thanks in advance. ``` select client_ref,max (Date_opened)[Most Recent] from archive a group by client_ref order by client_ref ```
In SQL 2012 there are handy functions to make this easier, but in SQL 2008 you need to do it the old way. Find the most recent: ``` SELECT client_ref,MAX(date_opened) last_opened FROM YourTable GROUP BY client_ref ``` Now join back to that: ``` SELECT YT.client_ref, YT.matter_suffix, YT.date_opened FROM YourTable YT INNER JOIN ( SELECT client_ref,MAX(date_opened) last_opened FROM YourTable GROUP BY client_ref ) MR ON YT.client_ref = MR.client_ref AND YT.date_opened = MR.last_opened ```
Doesn't this work? ``` select client_ref,matter_suffix,max (Date_opened)[Most Recent] from archive a group by client_ref,matter_suffix order by client_ref ```
Including Data from other columns in Aggregate Function Result
[ "", "sql", "sql-server", "sql-server-2008", "" ]
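The join-back-to-the-aggregate pattern from the accepted answer, checked against a subset of the question's data. The sketch uses Python's sqlite3; ISO date strings compare correctly as text, which stands in for the datetime column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE archive(client_ref INTEGER, matter_suffix INTEGER, date_opened TEXT);
INSERT INTO archive VALUES
  (1, 1, '1983-11-15'), (1, 6, '2002-11-18'), (1, 10, '2008-08-22'),
  (2, 1, '1983-11-15'), (2, 2, '1992-04-21');
""")

# Aggregate once per client, then join back to pick up matter_suffix.
rows = cur.execute("""
SELECT a.client_ref, a.matter_suffix, a.date_opened
FROM archive a
INNER JOIN (
    SELECT client_ref, MAX(date_opened) AS last_opened
    FROM archive
    GROUP BY client_ref
) mr ON a.client_ref = mr.client_ref AND a.date_opened = mr.last_opened
ORDER BY a.client_ref
""").fetchall()
print(rows)
```

Each client comes back exactly once, carrying the matter_suffix of its most recent row, which is the part a plain `GROUP BY client_ref, matter_suffix` cannot do.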
I have a bunch of discount schemes for my item table, and each item has a different discount scheme. Now I want to give a row id to the items, but it should start from zero (0) for each item group and change whenever a different DiscountId appears. My table is in the image below. ![enter image description here](https://i.stack.imgur.com/wrMe3.png) For example, for ItemCode 429 there are 7 rows with the same DiscountId 427, so for all of those I want row id 0 (zero); but when the DiscountId changes (same ItemCode 429, DiscountId 428), I want the row id incremented. And when the ItemCode changes, the row id should start from zero (0) again. Can anyone help me please? My current query is simply "select * from ItemDiscount_md".
Maybe something like this: **Test data:** ``` DECLARE @tbl TABLE(ITEMCode INT,DiscountId INT) INSERT INTO @tbl VALUES (73,419),(73,419),(73,420),(73,420),(73,420), (429,427),(429,427),(429,427),(429,427),(429,427), (429,427),(429,427),(429,427),(429,428),(429,428) ``` **Query:** ``` ;WITH CTE AS ( SELECT DENSE_RANK() OVER(PARTITION BY tbl.ITEMCode ORDER BY DiscountId) AS Rownbr, tbl.* FROM @tbl AS tbl ) SELECT CTE.Rownbr-1 AS RowNbr, CTE.DiscountId, CTE.ITEMCode FROM CTE ``` Of course you can simplify the query by writing this: ``` SELECT (DENSE_RANK() OVER(PARTITION BY tbl.ITEMCode ORDER BY DiscountId))-1 AS Rownbr, tbl.* FROM @tbl AS tbl ``` I just thought it was nicer and more readable with a CTE function **References:** * [DENSE\_RANK](http://technet.microsoft.com/en-us/library/ms173825.aspx) * [OVER Clause](http://technet.microsoft.com/en-us/library/ms189461.aspx) * [Using Common Table Expressions](http://technet.microsoft.com/en-us/library/ms190766%28v=sql.105%29.aspx) * [ROW\_NUMBER](http://technet.microsoft.com/en-us/library/ms186734.aspx) **EDIT** To answer the comment. No ROW\_NUMBER will not return the same counter. This is the output with `DENSE_RANK`: ``` 0 419 73 0 419 73 1 420 73 1 420 73 1 420 73 0 427 429 0 427 429 0 427 429 0 427 429 0 427 429 0 427 429 0 427 429 0 427 429 1 428 429 1 428 429 ``` And this is with `ROW_NUMBER`: ``` 0 419 73 1 419 73 2 420 73 3 420 73 4 420 73 0 427 429 1 427 429 2 427 429 3 427 429 4 427 429 5 427 429 6 427 429 7 427 429 8 428 429 9 428 429 ``` As you see `ROW_NUMBER`() recounts the group when the `DENSE_RANK` ranks the group
Just a more simplified version of Arion's answer ``` DECLARE @tbl TABLE(ITEMCode INT,DiscountId INT) INSERT INTO @tbl VALUES (73,419), (73,419), (73,420), (73,420), (73,420), (429,427), (429,427), (429,427), (429,427), (429,427), (429,427), (429,427), (429,427), (429,428), (429,428) ; SELECT (DENSE_RANK() OVER(PARTITION BY ITEMCode ORDER BY DiscountId) -1) AS Rownbr, DiscountId, ITEMCode FROM @tbl ```
how to give different row id to sub group in in a group in sql query?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2012", "" ]
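The zero-based `DENSE_RANK` approach from the answers above can be tried outside SQL Server. Below is a small sketch using Python's bundled sqlite3 (window functions require SQLite 3.25 or newer under the hood); the table name and sample values are abbreviated from the answer, not taken verbatim.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE tbl(ItemCode INTEGER, DiscountId INTEGER);
INSERT INTO tbl VALUES
  (73, 419), (73, 419), (73, 420),
  (429, 427), (429, 427), (429, 428);
""")

# The rank restarts for each ItemCode and only advances when DiscountId
# changes; subtracting 1 makes it zero-based, as the question asks.
rows = cur.execute("""
SELECT DENSE_RANK() OVER (PARTITION BY ItemCode ORDER BY DiscountId) - 1 AS RowNbr,
       DiscountId, ItemCode
FROM tbl
ORDER BY ItemCode, DiscountId
""").fetchall()
print(rows)
```

All rows sharing a DiscountId inside one ItemCode get the same counter, which is the behavior ROW_NUMBER would not give.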
I have 2 MySQL tables. "Orders" table: ``` customer_id | money 3 120 5 80 3 45 3 70 6 20 ``` "collecting" table: ``` customer_id | money 3 50 3 70 4 20 4 90 ``` I want a result like this "Total" table: ``` customer_id | Amount 3 115 4 110 5 80 6 20 ``` 1. In the "Total" table, "customer\_id" should be unique 2. Amount = (SUM(All customer orders.money) - SUM(All customer collecting.money)) 3. "Money" can be NULL. The "Orders" table can have a customer\_id that the "collecting" table does not have, or the "collecting" table can have a customer\_id that the "Orders" table does not have. How can I write a single query to output the "Total" table?
The fastest way is to union your data with the money value being negative on the `collecting` table: ``` -- load test data create table orders(customer_id int, money int); insert into orders values (3,120), (5,80), (3,45), (3,70), (6,20); create table collecting(customer_id int,money int); insert into collecting values (3,50), (3,70), (4,20), (4,90); -- populate Total table create table Total(customer_id int,Amount int); insert into Total select oc.customer_id,sum(oc.money) Amount from ( select customer_id,coalesce(money,0) money from orders union all select customer_id,coalesce(-money,0) money from collecting ) oc group by oc.customer_id; -- return results select * from Total; ``` SQL Fiddle: <http://www.sqlfiddle.com/#!2/deebc>
The following returns the result you expect. ``` SELECT customer_id, SUM(amount) as amount FROM ( SELECT customer_id, SUM(money) as amount FROM orders GROUP BY customer_id UNION ALL SELECT customer_id, SUM(money) * -1 as amount FROM collecting GROUP BY customer_id ) as tb GROUP BY customer_id; ``` customer\_id = 4 returns -110, not 110, since it's only in the collecting table. Example: <http://www.sqlfiddle.com/#!2/3b922/5/0>
MySQL calculate amount from two tables?
[ "", "mysql", "sql", "join", "" ]
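The signed UNION ALL trick from this entry is easy to sanity-check. The sketch below uses Python's sqlite3 with the question's sample rows; note that customer 4, who appears only in collecting, comes out as -110 rather than the +110 shown in the question's desired output, the point the second answer raises.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders(customer_id INTEGER, money INTEGER);
CREATE TABLE collecting(customer_id INTEGER, money INTEGER);
INSERT INTO orders VALUES (3, 120), (5, 80), (3, 45), (3, 70), (6, 20);
INSERT INTO collecting VALUES (3, 50), (3, 70), (4, 20), (4, 90);
""")

# Ordered amounts count positive, collected amounts count negative,
# then everything is summed per customer in one pass.
rows = cur.execute("""
SELECT customer_id, SUM(money) AS amount
FROM (
    SELECT customer_id, COALESCE(money, 0) AS money FROM orders
    UNION ALL
    SELECT customer_id, -COALESCE(money, 0) FROM collecting
) t
GROUP BY customer_id
ORDER BY customer_id
""").fetchall()
print(rows)
```

Customers present in only one of the two tables still appear, which a plain join of the two aggregates would lose.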
I've more experience using Access, where I would build up my analysis in small parts and query each new view. I'm now trying to do something that must be simple in SQL. If I have a query of the format: ``` SELECT events.people, COUNT(events.eventType) AS usersAndEventCount FROM theDB.events WHERE event_id = 884 GROUP BY people ``` And I then want to query usersAndEventCount like so: ``` Select usersAndEventCount.people, usersAndEventCount.events FROM [from where actually?] ``` Tried from: ``` usersAndEventCount; events theDB.events ``` This must seem very basic to SQL users on SO. But in my mind it's much easier to break down the larger query into these subqueries. How would I query usersAndEventCount in the same query?
Your statement "*then want to query usersAndEventCount*" does not make sense, because `usersAndEventCount` is a *column* - at least in your first example. You cannot "query" a column. But from the example you have given, it seems you want something like this: ``` Select usersAndEventCount.people, usersAndEventCount.events FROM ( SELECT events.people, COUNT(events.eventType) AS events FROM theDB.events WHERE event_id = 884 GROUP BY people ) as usersAndEventCount ``` This is called a "derived table" in SQL.
In pure SQL, you can use nested queries (AKA sub-queries). Just enclose your first query in `()` brackets, so your query will look like this: ``` Select usersAndEventCount.people, usersAndEventCount.events FROM (SELECT events.people, COUNT(events.eventType) AS events FROM theDB.events WHERE event_id = 884 GROUP BY people) usersAndEventCount ``` Alternatively, to save the first query and reuse it in several places like you were doing in Access, you can save it as a View or Stored Procedure depending on the database system you're using. If you want an example, let me know the database system you're using.
Querying a query
[ "", "sql", "" ]
I'm trying to add a column named *order* to my table. I realize that *order* is a reserved word in SQL. So, how do I do it? My command: ``` alter table mytable add column order integer; ``` I've also tried: ``` alter table mytable add column 'order' integer; ``` PostgreSQL 9.1.
Use this: ``` alter table mytable add column "order" integer; ``` But, you might want to consider using a non-reserved name instead, like `sort_order` or something similar that reflects what the column is used for (and isn't a reserved word).
You are using order, which is a reserved keyword. You should consider renaming it to something like orders, and the problem should go away.
"syntax error at or near 'order'" in PostgreSQL
[ "", "sql", "postgresql", "quoted-identifier", "" ]
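The double-quote fix from the accepted answer can be demonstrated outside PostgreSQL too, since double-quoted identifiers are standard SQL. This sketch uses Python's sqlite3, which follows the same quoting rule; keep in mind that in PostgreSQL a quoted identifier is also case-sensitive, one more reason to prefer a non-reserved name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytable(id INTEGER)")

# Quoting turns the reserved word into an ordinary identifier.
cur.execute('ALTER TABLE mytable ADD COLUMN "order" INTEGER')
cur.execute('INSERT INTO mytable(id, "order") VALUES (1, 42)')
row = cur.execute('SELECT "order" FROM mytable').fetchone()
print(row)
```

Without the quotes, the same ALTER statement raises a syntax error at `order`, mirroring the PostgreSQL message in the question.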
I need to concatenate the `City`, `State` and `Country` columns into something like City, State, Country. This is my code: ``` Select City + ', ' + State + ', ' + Country as outputText from Places ``` However, because `City` and `State` allow `null` (or empty) values, what happens is (for example) if the `City` is null/empty, my output will look like `, Iowa, USA`; or if the `State` is empty, then the output will look like `Seattle, , USA` Is there any way I can format the output and remove the "unnecessary" commas? Edited: Because of the requirements, I should not use any other means (such as PL/SQL or stored procedures), so it has to be a plain SQL statement
``` select isnull(City, '') + case when isnull(City, '') != '' then ', ' else '' end + isnull(State, '') + case when isnull(State, '') != '' then ', ' else '' end + isnull(Country, '') as outputText from Places ```
Since concatenating a string with NULL results in NULL, if the values are NULL (not empty strings) this will give you the desired result: ``` Select isnull(City + ', ','') + isnull(State + ', ' ,'') + isnull(Country,'') as outputText from Places ```
Concatenate and format text in SQL
[ "", "sql", "sql-server", "" ]
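The NULL-propagation trick behind the answers above (concatenating NULL yields NULL, which the outer ISNULL/COALESCE then turns into an empty string, trailing comma included) ports to other engines. A sketch with Python's sqlite3, using `||` and COALESCE in place of SQL Server's `+` and ISNULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Places(City TEXT, State TEXT, Country TEXT);
INSERT INTO Places VALUES
  ('Seattle', NULL, 'USA'),
  (NULL, 'Iowa', 'USA'),
  ('Austin', 'Texas', 'USA');
""")

# City || ', ' is NULL whenever City is NULL, so COALESCE drops the
# value and its trailing comma in a single step.
rows = cur.execute("""
SELECT COALESCE(City || ', ', '') ||
       COALESCE(State || ', ', '') ||
       COALESCE(Country, '') AS outputText
FROM Places
ORDER BY rowid
""").fetchall()
print(rows)
```

Note this handles NULLs but not empty strings; an empty-string City would still leave a leading comma, which is why the SQL Server answers wrap the values in NULLIF first.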
In the below code I have a documents table ``` document name | document id | client id ``` and an attachments table ``` attach id | document id | attachment name ``` I want to get the attachment name from the attachments table. In the code below I get the document id, document name, and client id from the documents table; I also want to get the attachment name from the attachments table. How do I do that in this same query? ``` @i_DocumentID varchar(MAX) SELECT DocumentID, DocumentName, ClientID FROM Documents WHERE DocumentID IN (SELECT * FROM dbo.CSVToTable(@i_DocumentID)) ```
``` @i_DocumentID varchar(MAX) SELECT a.DocumentID, a.DocumentName, a.ClientID, b.AttachmentName from Documents a join Attachments b on a.DocumentId = b.DocumentId where a.DocumentID IN (SELECT * FROM dbo.CSVToTable(@i_DocumentID)) ``` You can read more about joins here: <http://www.w3schools.com/sql/sql_join.asp> or here <http://www.mssqltips.com/sqlservertip/1667/sql-server-join-example/>
Use a join: ``` @i_DocumentID varchar(MAX) SELECT D.DocumentID, D.DocumentName,D.ClientID, A.Attachement_name from Documents D JOIN Attachment A on D.document_id=A.document_id where D.DocumentID IN (SELECT * FROM dbo.CSVToTable(@i_DocumentID)) ```
Sql Query to get column name from another table
[ "", "sql", "sql-server", "" ]
My task is to validate existing data in an MSSQL database. I've got some SQL experience, but not enough, apparently. We have a zip code field that must be either 5 or 9 digits (US zip). What we are finding in the zip field are embedded spaces and other oddities that will be prevented in the future. I've searched enough to find the references for LIKE that leave me with this "novice approach": ``` ZIP NOT LIKE '[0-9][0-9][0-9][0-9][0-9]' AND ZIP NOT LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' ``` Is this really what I must code? Is there nothing similar to...? ``` ZIP NOT LIKE '[\d]{5}' AND ZIP NOT LIKE '[\d]{9}' ``` I will loathe validating longer fields! I suppose, ultimately, both code sequences will be equally efficient (or should be). Thanks for your help
Unfortunately, LIKE is not regex-compatible, so there is nothing of the sort `\d`. Although combining a length function with a numeric function may provide an acceptable result: ``` WHERE ISNUMERIC(ZIP) <> 1 OR LEN(ZIP) NOT IN(5,9) ``` I would however not recommend it, because ISNUMERIC will return 1 for a +, - or a valid currency symbol. Especially the minus sign may be prevalent in the data set, so I'd still favor your "novice" approach. Another approach is to use: ``` ZIP LIKE '%[^0-9]%' OR LEN(ZIP) NOT IN(5,9) ``` which will find any row where zip contains a character that is not 0-9 (i.e. anything other than digits), or where the length is not 5 or 9.
There are a few ways you could achieve that. 1. You can replace each `[0-9]` with the single-character wildcard `_`, e.g. `ZIP NOT LIKE '_____'` (five underscores for the 5-digit case) 2. Use `LEN()`, e.g. `LEN(ZIP) NOT IN (5,9)`
Using SQL - how do I match an exact number of characters?
[ "", "sql", "sql-server", "regex", "sql-like", "" ]
I created a Hive Table, which loads data from a text file. But its returning empty result set on all queries. I tried the following command: ``` CREATE TABLE table2( id1 INT, id2 INT, id3 INT, id4 STRING, id5 INT, id6 STRING, id7 STRING, id8 STRING, id9 STRING, id10 STRING, id11 STRING, id12 STRING, id13 STRING, id14 STRING, id15 STRING ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION '/user/biadmin/lineitem'; ``` The command gets executed, and the table gets created. But, always returns 0 rows for all queries, including `SELECT * FROM table2;` Sample data: Single line of the input data: 1|155190|7706|1|17|21168.23|0.04|0.02|N|O|1996-03-13|1996-02-12|1996-03-22|DELIVER IN PERSON|TRUCK|egular courts above the| I have attached the screen shot of the data file. Output for command: DESCRIBE FORMATTED table2; ``` | Wed Apr 16 20:18:58 IST 2014 : Connection obtained for host: big-instght-15.persistent.co.in, port number 1528. | | # col_name data_type comment | | | | id1 int None | | id2 int None | | id3 int None | | id4 string None | | id5 int None | | id6 string None | | id7 string None | | id8 string None | | id9 string None | | id10 string None | | id11 string None | | id12 string None | | id13 string None | | id14 string None | | id15 string None | | | | # Detailed Table Information | | Database: default | | Owner: biadmin | | CreateTime: Mon Apr 14 20:17:31 IST 2014 | | LastAccessTime: UNKNOWN | | Protect Mode: None | | Retention: 0 | | Location: hdfs://big-instght-11.persistent.co.in:9000/user/biadmin/lineitem | | Table Type: MANAGED_TABLE | | Table Parameters: | | serialization.null.format | | transient_lastDdlTime 1397486851 | | | | # Storage Information | | SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | | InputFormat: org.apache.hadoop.mapred.TextInputFormat | | OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | | Compressed: No | | Num Buckets: -1 | | Bucket Columns: [] | | Sort Columns: [] 
| | Storage Desc Params: | | field.delim | | ``` Thanks!
Please make sure that the location **/user/biadmin/lineitem.txt** actually exists and that you have data present there. Since you are using the **LOCATION** clause, your data must be present there instead of the default warehouse location, **/user/hive/warehouse**. Do a quick **ls** to verify that: ``` bin/hadoop fs -ls /user/biadmin/lineitem.txt ``` Also, make sure that you are using the proper **delimiter**.
Did you try `LOAD DATA LOCAL INFILE`? ``` LOAD DATA LOCAL INFILE'/user/biadmin/lineitem.txt' INTO TABLE table2 FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' (id1,id2,id3........); ``` Documentation: <http://dev.mysql.com/doc/refman/5.1/en/load-data.html>
Hive Table returning empty result set on all queries
[ "", "mysql", "sql", "hadoop", "hive", "bigdata", "" ]
Hey guys, I have a dilemma with one of my SELECTs that I use in a MySQL DB. Firstly, this is how it looks: ![This](https://i.stack.imgur.com/4Kojm.png) My SELECT is supposed to extract all the users and count each one's prezente, but when I use this SELECT, instead of getting all my users I get this: ``` SELECT users1.id,users1.Nume, COUNT(pontaj.prezente) FROM users1, pontaj WHERE users1.id = pontaj.id ``` ![enter image description here](https://i.stack.imgur.com/Z3WER.png)
You need to add a GROUP BY clause to your query. Also, replace the old join syntax (join condition in the WHERE clause) with the recommended JOIN/ON syntax. ``` SELECT users1.id,users1.Nume, COUNT(pontaj.prezente) FROM users1 INNER JOIN pontaj ON users1.id = pontaj.id GROUP BY users1.id,users1.Nume ```
I think you should add a GROUP BY clause, meaning at the end of the SQL add: ``` group by users1.Nume ```
display certain values mySQL
[ "", "mysql", "sql", "" ]
We have the following solution: ``` select substring(convert(varchar(20),convert(datetime,getdate())),5,2) + ' ' + left(convert(varchar(20),convert(datetime,getdate())),3) ``` What is a more elegant way of achieving this format?
You can use the `dateName` function: ``` select right(N'0' + dateName(DD, getDate()), 2) + N'-' + dateName(M, getDate()) ``` If you really want the `mmm` part to only have the three-letter abbreviation of the month, you're stuck with parsing the appropriate conversion type, for example ``` select left(convert(nvarchar, getDate(), 7), 3) ``` The problem is that `dateName` doesn't have an option to get you the abbreviated month, and the abbreviation isn't always just the first three letters (for example, in Czech, two months start with `Čer`). On the other hand, convert `7` always starts with the abbreviation. Now, even with this, I assume that the abbreviation is always three letters long, so it isn't necessarily 100% reliable (you could search for a space instead), but I'm not aware of any better option in MS SQL.
You can do it this way: ``` declare @date as date = getdate() select replace(convert(varchar(6), @date, 6), ' ', '-') -- returns '11-Apr' ``` Format 6 is `dd mon yy` and you take the first 6 characters by converting to varchar(6). You just need to replace space with dash at the end.
Elegantly convert DateTime type to a string formatted "dd-mmm"
[ "", "sql", "sql-server", "t-sql", "" ]
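Both answers above are T-SQL; when the formatting can be pushed to the client instead, dd-mmm is a one-liner. A Python sketch for illustration (the month abbreviation is locale dependent, the same caveat the accepted answer raises for `dateName`):

```python
from datetime import date

d = date(2014, 4, 11)
formatted = d.strftime("%d-%b")  # %b is the abbreviated month name
print(formatted)  # '11-Apr' in an English/C locale
```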
I am trying to return an Employee Name and Award Date **if** they have no prior award dates before the dates the user specifies (these are fields on a form, `StartDateTxt` and `EndDateTxt` as seen below), aka their "first occurrence." Example (AwardTbl only for simplicity): ``` AwardDate EmployeeID PlanID AwardedUnits 3/1/2005 100200 1 3 3/1/2008 100200 1 7 3/1/2005 100300 1 5 3/1/2013 100300 1 8 ``` If I ran the query between the dates `1/1/2005 - 12/31/2005`, it would return `3/1/2005` and `100200` and `100300`. If I ran the query between `1/1/2008-12/31/2008` it would return nothing, and likewise with `1/1/2013 - 12/31/2013`, because those employees have already had an earlier award date. I tried a couple of different things, which gave me some weird results. ``` SELECT x.AstFirstName , x.AstLastName , y.AwardDate , y.AwardUnits , z.PlanDesc FROM (AssociateTbl AS x INNER JOIN AwardTbl AS y ON x.EmployeeID = y.EmployeeID) INNER JOIN PlanTbl AS z ON y.PlanID = z.PlanID WHERE y.AwardDate BETWEEN [Forms]![PlanFrm]![ReportSelectSbfrm].[Form]![StartDateTxt] And [Forms]![PlanFrm]![ReportSelectSbfrm].[Form]![EndDateTxt] ; ``` This query did NOT care if there was a previous record or not, and I think that's where I am unsure of how to narrow down the query. I also tried: 1. `Min(AwardDate)` (didn't work) 2. A subquery in the `WHERE` clause that ordered by `TOP 1 AwardDate ASC`, which only returned 1 record 3. A `DCount("*", "AwardTbl", "AwardDate < [Forms]![PlanFrm]![ReportSelectSbfrm].[Form]![StartDateTxt]") < 1` (This also did not differentiate whether or not it was the first occurrence of the AwardDate) Please note: This is MS Access. There are no `ROW_NUMBER()` or `CTE` features.
Min Award Date will work if grouped by EmployeeID with your date range in the `Having` clause and used as a filter list: ``` SELECT x.AstFirstName , x.AstLastName , y.AwardDate , y.AwardUnits , z.PlanDesc FROM ((AssociateTbl AS x INNER JOIN AwardTbl AS y ON x.EmployeeID = y.EmployeeID) INNER JOIN (select a.EmployeeID,min(a.AwardDate) AS MinAwardDate from AwardTbl AS a group by a.EmployeeID having ((min(a.awardDate)>=[Forms]![PlanFrm]![ReportSelectSbfrm].[Form]![StartDateTxt] and min(a.awardDate)<[Forms]![PlanFrm]![ReportSelectSbfrm].[Form]![EndDateTxt])) ) AS d on d.EmployeeID = x.EmployeeID and d.MinAwardDate = y.AwardDate) INNER JOIN PlanTbl AS z ON y.PlanID = z.PlanID ```
Try like below ``` SELECT x.AstFirstName , x.AstLastName , y.AwardDate , y.AwardUnits , z.PlanDesc FROM (AssociateTbl AS x INNER JOIN AwardTbl AS y ON x.EmployeeID = y.EmployeeID) INNER JOIN PlanTbl AS z ON y.PlanID = z.PlanID WHERE y.AwardDate BETWEEN '1/1/2005' AND '12/31/2005' GROUP BY y.EmployeeID, x.AstFirstName , x.AstLastName , y.AwardDate , y.AwardUnits , z.PlanDesc HAVING COUNT(*) = 1 ```
First Occurrence of Record by Date
[ "", "sql", "ms-access-2007", "" ]
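To see that the MIN-per-employee idea from the accepted answer really filters out later awards, here is the question's sample AwardTbl run through SQLite via Python (illustration only; the Access syntax differs as shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE AwardTbl (AwardDate TEXT, EmployeeID INTEGER);
    INSERT INTO AwardTbl VALUES
        ('2005-03-01', 100200), ('2008-03-01', 100200),
        ('2005-03-01', 100300), ('2013-03-01', 100300);
""")

def first_awards(lo, hi):
    # Employees whose EARLIEST award falls inside the range.
    return conn.execute("""
        SELECT EmployeeID, MIN(AwardDate)
        FROM AwardTbl
        GROUP BY EmployeeID
        HAVING MIN(AwardDate) BETWEEN ? AND ?
        ORDER BY EmployeeID
    """, (lo, hi)).fetchall()

first_2005 = first_awards('2005-01-01', '2005-12-31')
first_2008 = first_awards('2008-01-01', '2008-12-31')
print(first_2005)  # [(100200, '2005-03-01'), (100300, '2005-03-01')]
print(first_2008)  # [] -- both employees already had an earlier award
```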
I know some SQL, but I always use `join`, `left`, `cross` and so on — never a query where the tables are separated by a comma. It looks like a cross join to me, but I don't know how to test that (the result is the same in the tries I made). ``` SELECT A.id, B.id FROM A,B WHERE A.id_B = B.id ``` Even in the great question (with great answers) ["What is the difference between Left, Right, Outer and Inner Joins?"](https://stackoverflow.com/questions/448023/what-is-the-difference-between-left-right-outer-and-inner-joins/448080#448080) I didn't find an answer to this.
It *would* be a cross join if there wasn't a `WHERE` clause relating the two tables. In this case it's functionally equivalent to an inner join (matching records by `id_B` and `id`). It's an older syntax for joining tables that is still supported in most systems, but JOIN syntax is largely preferred.
**This is old pre-1992 standard SQL join syntax, which you should not mimic or use in anything you write today.** The `where` clause specifies the join style and columns. You can use `=` or `*=` or `=*` in the where to make inner, left or right joins. No join condition in the `WHERE` makes it a cross join.
Is a query with table separated by comma a cross join query?
[ "", "sql", "operation", "" ]
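You can verify the equivalence yourself in any engine. A small SQLite-through-Python check with invented data: the comma list plus a WHERE equality returns exactly what `JOIN ... ON` returns, and dropping the WHERE leaves the bare cross product:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, id_B INTEGER);
    CREATE TABLE B (id INTEGER);
    INSERT INTO A VALUES (1, 10), (2, 20);
    INSERT INTO B VALUES (10), (30);
""")
comma_where = conn.execute(
    "SELECT A.id, B.id FROM A, B WHERE A.id_B = B.id").fetchall()
inner_join = conn.execute(
    "SELECT A.id, B.id FROM A JOIN B ON A.id_B = B.id").fetchall()
cross_join = conn.execute("SELECT A.id, B.id FROM A, B").fetchall()
print(comma_where == inner_join)  # True
print(len(cross_join))            # 4 = 2 x 2, the bare cross product
```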
I have a table called `url_info` and the structure of the table is: url\_info: ``` url_id ( auto_increment, primary key ) url ( unique,varchar(500) ) ``` When I insert into the table like this: ``` INSERT INTO url_info(url) VALUES('Tom'); INSERT INTO url_info(url) VALUES('Jerry'); ``` The output is: ``` 1 Tom 2 Jerry ``` When I insert like this ``` INSERT INTO url_info(url) VALUES('Tom'); INSERT INTO url_info(url) VALUES('Tom'); INSERT INTO url_info(url) VALUES('Jerry'); ``` The output is ``` 1 Tom 3 Jerry ``` The auto-increment id is incremented when I try to insert a duplicate entry. I have also tried `INSERT IGNORE`. How do I prevent it from incrementing when I try to insert a duplicate entry?
It's probably worth creating a stored procedure to insert what you want into the table. But, in the stored procedure check what items you have already in the table. If these match what you're trying to insert, then the query should not even attempt the insert. Ie. The procedure needs to contain something like this: ``` IF NOT EXISTS(SELECT TOP 1 url_id FROM url_info WHERE url = 'Tom') INSERT INTO url_info(url) VALUES('Tom') ``` So, in your stored procedure, it would look like this (assuming the arguments/variables have been declared) ``` IF NOT EXISTS(SELECT TOP 1 url_id FROM url_info WHERE url = @newUrl) INSERT INTO url_info(url) VALUES(@newUrl) ```
This is expected behaviour in InnoDB. The reason is that they want to let go of the auto\_increment lock as fast as possible to improve concurrency. Unfortunately this means they increment the AUTO\_INCREMENT value before resolving any constraints, such as UNIQUE. You can read more about the idea in the manual on [AUTO\_INCREMENT Handling in InnoDB](http://dev.mysql.com/doc/refman/5.5/en/innodb-auto-increment-handling.html), but the manual is also unfortunately [buggy](http://bugs.mysql.com/bug.php?id=63128) and doesn't tell why your simple insert will give non-consecutive values. If this is a real problem for you and you really need consecutive numbers, consider setting the `innodb_autoinc_lock_mode` option to `0` in your server, but this is not recommended as it will have severe effects on your database (you cannot do any inserts concurrently).
Prevent auto increment on duplicated entry?
[ "", "mysql", "sql", "duplicates", "" ]
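The check-then-insert idea from the first answer (translated out of T-SQL), sketched with SQLite via Python so the id sequence is visible. One caveat worth adding: with concurrent writers the check and the insert need to happen atomically (in one statement or one transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE url_info (
    url_id INTEGER PRIMARY KEY AUTOINCREMENT,
    url TEXT UNIQUE)""")

def insert_if_new(url):
    # Check first and insert only when the value is genuinely new, so the
    # auto-increment counter is never consumed by a failed duplicate insert.
    if conn.execute("SELECT 1 FROM url_info WHERE url = ?", (url,)).fetchone() is None:
        conn.execute("INSERT INTO url_info (url) VALUES (?)", (url,))

for u in ("Tom", "Tom", "Jerry"):
    insert_if_new(u)
rows = conn.execute("SELECT url_id, url FROM url_info ORDER BY url_id").fetchall()
print(rows)  # [(1, 'Tom'), (2, 'Jerry')] -- no gap at id 2
```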
I have one table containing all users' information. **Example of the users table** ``` id username last_login 1 david 2014-01-30 12:21:54 2 sam 2014-01-30 17:43:12 3 lynda 2014-01-30 10:31:31 4 mark 2014-01-30 21:21:15 5 john 2014-01-30 23:01:01 6 jakson 2014-01-30 16:21:31 7 mandela 2014-01-30 16:35:54 8 Ashy 2014-01-30 16:11:53 ``` **PROBLEM STATEMENT** I want to select users by the following criteria * Some users (i.e. user ids 1,3,4,5,6) will be selected based on `ORDER BY last_login ASC` and the rest will be selected based on `ORDER BY id ASC`, using a single query. Is it possible or not? Thanks Sanjog
Yes. You can have multiple arguments to `order by`. So, first identify the users you want ordered one way, then order them as you want, and order the rest: ``` order by userid in (1, 3, 4, 5, 6) desc, (case when userid in (1, 3, 4, 5, 6) then last_login end) ASC, id ASC ```
One of possibilities would be to create 2 queries and join them using `UNION` for example ``` SELECT Id, UserName, Last_login from Users where Id in(1,2,3,4,5,6,7) order by Id UNION SELECT Id, UserName, last_login from Users ORDER BY last_login ASC ```
MYSQL QUERY FOR THE FOLLOWING SCENARIO
[ "", "mysql", "sql", "performance", "" ]
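The accepted answer's ORDER BY trick, run against the sample table (SQLite via Python purely for illustration; MySQL evaluates `id IN (...)` as 0/1 the same way): users 1, 3, 4, 5, 6 come first ordered by last_login, the rest follow ordered by id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, last_login TEXT);
    INSERT INTO users VALUES
        (1,'2014-01-30 12:21:54'),(2,'2014-01-30 17:43:12'),
        (3,'2014-01-30 10:31:31'),(4,'2014-01-30 21:21:15'),
        (5,'2014-01-30 23:01:01'),(6,'2014-01-30 16:21:31'),
        (7,'2014-01-30 16:35:54'),(8,'2014-01-30 16:11:53');
""")
ids = [r[0] for r in conn.execute("""
    SELECT id FROM users
    ORDER BY id IN (1,3,4,5,6) DESC,
             CASE WHEN id IN (1,3,4,5,6) THEN last_login END ASC,
             id ASC
""")]
print(ids)  # [3, 1, 6, 4, 5, 2, 7, 8]
```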
I'm trying to generate unique datetime values for a set of legacy values. Unfortunately the GETDATE() call is only evaluated once, which results in the same datetime for each row. ``` UPDATE e SET e.moddate = GETDATE() FROM dbo.entries as e ``` Does anyone know how it would be possible to delay the individual updates, or how to evaluate the GETDATE() call multiple times? Thanks in advance.
Try this: ``` ;with cte as (select *,row_number() over (partition by getdate() order by getdate()) rn from entries) update entries set [moddate] = dateadd(second,rn,getdate()) from entries e inner join cte c on e.col1 = c.col1 ``` This will work as long as you have another column called `col1`, regardless of uniqueness. Demo [here](http://rextester.com/DRUO31026).
You need some sort of formula for this; the system doesn't know by what logic you want to produce a unique date. If I were you, I would do something like this ``` update e set e.moddate = DateAdd(second, 1, (select max(moddate) from dbo.entries)) FROM dbo.entries as e where e.id != 1 ``` Here the system picks the highest existing date and adds 1 second to it. What I do is seed one record whose date is Now() minus the total record count (in seconds), so the last record gets today's time; this way I get a unique datetime for each record.
TSQL Generate Unique Datetime values
[ "", "sql", "sql-server", "" ]
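The accepted answer's approach — number the rows, then add that many seconds to one base time so every row ends up distinct — sketched with SQLite via Python (a fetch-then-update loop instead of T-SQL's updatable CTE, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (id INTEGER PRIMARY KEY, moddate TEXT);
    INSERT INTO entries (moddate) VALUES (NULL), (NULL), (NULL);
""")
base = "2014-01-01 00:00:00"
# Number the rows, then add that many seconds to a single base timestamp.
numbered = conn.execute("""
    SELECT id, datetime(?, '+' || ROW_NUMBER() OVER (ORDER BY id) || ' seconds')
    FROM entries
""", (base,)).fetchall()
for row_id, stamp in numbered:
    conn.execute("UPDATE entries SET moddate = ? WHERE id = ?", (stamp, row_id))
stamps = [r[0] for r in conn.execute("SELECT moddate FROM entries ORDER BY id")]
print(stamps)
# ['2014-01-01 00:00:01', '2014-01-01 00:00:02', '2014-01-01 00:00:03']
```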
I have a query like this: ``` SELECT DISTINCT devices1_.id AS id27_, devices1_.createdTime AS createdT2_27_, devices1_.deletedOn AS deletedOn27_, devices1_.deviceAlias AS deviceAl4_27_, devices1_.deviceName AS deviceName27_, devices1_.deviceTypeId AS deviceT21_27_, devices1_.equipmentVendor AS equipmen6_27_, devices1_.exceptionDetail AS exceptio7_27_, devices1_.hardwareVersion AS hardware8_27_, devices1_.ipAddress AS ipAddress27_, devices1_.isDeleted AS isDeleted27_, devices1_.loopBack AS loopBack27_, devices1_.modifiedTime AS modifie12_27_, devices1_.osVersion AS osVersion27_, devices1_.productModel AS product14_27_, devices1_.productName AS product15_27_, devices1_.routerType AS routerType27_, devices1_.rundate AS rundate27_, devices1_.serialNumber AS serialN18_27_, devices1_.serviceName AS service19_27_, devices1_.siteId AS siteId27_, devices1_.siteIdA AS siteIdA27_, devices1_.status AS status27_, devices1_.creator AS creator27_, devices1_.lastModifier AS lastMod25_27_ FROM goldenvariation goldenconf0_ INNER JOIN devices devices1_ ON goldenconf0_.deviceId=devices1_.id CROSS JOIN devices devices2_ WHERE goldenconf0_.deviceId=devices2_.id AND (goldenconf0_.classType = 'policy-options') AND DATE(goldenconf0_.rundate)=DATE('2014-04-14 00:00:00') AND devices2_.isDeleted=0 AND EXISTS (SELECT DISTINCT(deviceId) FROM goldenvariation goldenconf3_ WHERE (goldenconf3_.goldenVariationType = 'MISMATCH') AND (goldenconf3_.classType = 'policy-options') AND DATE(goldenconf3_.rundate)=DATE('2014-04-14 00:00:00')) AND EXISTS (SELECT DISTINCT (deviceId) FROM goldenvariation goldenconf4_ WHERE (goldenconf4_.goldenVariationType = 'MISSING') AND (goldenconf4_.classType = 'policy-options') AND DATE(goldenconf4_.rundate)=DATE('2014-04-14 00:00:00')); ``` It's taking too much time. How can I rewrite the query to make it faster?
Table structure of goldenvariation is: ``` CREATE TABLE `goldenvariation` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `classType` VARCHAR(255) DEFAULT NULL, `createdTime` DATETIME DEFAULT NULL, `goldenValue` LONGTEXT, `goldenXpath` VARCHAR(255) DEFAULT NULL, `isMatched` TINYINT(1) DEFAULT NULL, `modifiedTime` DATETIME DEFAULT NULL, `pathValue` LONGTEXT, `rundate` DATETIME DEFAULT NULL, `value` LONGTEXT, `xpath` VARCHAR(255) DEFAULT NULL, `deviceId` BIGINT(20) DEFAULT NULL, `goldenXpathId` BIGINT(20) DEFAULT NULL, `creator` INT(10) UNSIGNED DEFAULT NULL, `lastModifier` INT(10) UNSIGNED DEFAULT NULL, `goldenVariationType` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`id`), KEY `FK6804472AD99F2D15` (`deviceId`), KEY `FK6804472A98002838` (`goldenXpathId`), KEY `FK6804472A27C863B` (`creator`), KEY `FK6804472A3617A57C` (`lastModifier`), KEY `rundateindex` (`rundate`), KEY `varitionidindex` (`id`), KEY `classTypeindex` (`classType`), CONSTRAINT `FK6804472A27C863B` FOREIGN KEY (`creator`) REFERENCES `users` (`userid`), CONSTRAINT `FK6804472A3617A57C` FOREIGN KEY (`lastModifier`) REFERENCES `users` (`userid`), CONSTRAINT `FK6804472A98002838` FOREIGN KEY (`goldenXpathId`) REFERENCES `goldenconfigurationxpath` (`id`), CONSTRAINT `FK6804472AD99F2D15` FOREIGN KEY (`deviceId`) REFERENCES `devices` (`id`) ) ENGINE=INNODB AUTO_INCREMENT=1868865 DEFAULT CHARSET=latin1; ``` And the explain plan of the query is: ``` "1" "PRIMARY" "goldenconf0_" "ref" "FK6804472AD99F2D15,classTypeindex" "classTypeindex" "258" "const" "179223" "Using where; Using temporary" "1" "PRIMARY" "devices2_" "eq_ref" "PRIMARY,deviceindex" "PRIMARY" "8" "cmdb.goldenconf0_.deviceId" "1" "Using where" "1" "PRIMARY" "devices1_" "eq_ref" "PRIMARY,deviceindex" "PRIMARY" "8" "cmdb.goldenconf0_.deviceId" "1" "" "3" "DEPENDENT SUBQUERY" "goldenconf4_" "index_subquery" "FK6804472AD99F2D15,classTypeindex" "FK6804472AD99F2D15" "9" "func" "19795" "Using where" "2" "DEPENDENT SUBQUERY" "goldenconf3_" "index_subquery" 
"FK6804472AD99F2D15,classTypeindex" "FK6804472AD99F2D15" "9" "func" "19795" "Using where" ```
``` INNER JOIN goldenvariation goldenconf4_ ON goldenconf4_.deviceId = goldenconf0_.deviceId AND (goldenconf4_.goldenVariationType = 'MISSING') AND (goldenconf4_.classType = 'policy-options') AND DATE(goldenconf4_.rundate)=DATE('2014-04-14 00:00:00') ``` Change the other `EXISTS` in the same way. I think this one should work much faster. Also, a small tip from me: try to use shorter aliases. Your query is really hard to read. --- ``` SELECT DISTINCT devices1_.id AS id27_, devices1_.createdTime AS createdT2_27_, devices1_.deletedOn AS deletedOn27_, devices1_.deviceAlias AS deviceAl4_27_, devices1_.deviceName AS deviceName27_, devices1_.deviceTypeId AS deviceT21_27_, devices1_.equipmentVendor AS equipmen6_27_, devices1_.exceptionDetail AS exceptio7_27_, devices1_.hardwareVersion AS hardware8_27_, devices1_.ipAddress AS ipAddress27_, devices1_.isDeleted AS isDeleted27_, devices1_.loopBack AS loopBack27_, devices1_.modifiedTime AS modifie12_27_, devices1_.osVersion AS osVersion27_, devices1_.productModel AS product14_27_, devices1_.productName AS product15_27_, devices1_.routerType AS routerType27_, devices1_.rundate AS rundate27_, devices1_.serialNumber AS serialN18_27_, devices1_.serviceName AS service19_27_, devices1_.siteId AS siteId27_, devices1_.siteIdA AS siteIdA27_, devices1_.status AS status27_, devices1_.creator AS creator27_, devices1_.lastModifier AS lastMod25_27_ FROM goldenvariation goldenconf0_ INNER JOIN devices devices1_ ON goldenconf0_.deviceId=devices1_.id INNER JOIN goldenvariation a on a.deviceId = goldenconf0_.deviceId and a.goldenVariationType = 'MISMATCH' INNER JOIN goldenvariation b on b.deviceId = goldenconf0_.deviceId and b.goldenVariationType = 'MISSING' WHERE (goldenconf0_.classType = 'policy-options') AND DATE(goldenconf0_.rundate) = '2014-04-14' AND devices1_.isDeleted=0 ``` Try this one. It should work much faster than your query. You joined a table with `CROSS JOIN`, but not even one column from it was used in the `SELECT`.
Yes, the query could be rewritten to improve performance (though it looks like a query generated by Hibernate, and getting Hibernate to use a different query can be a challenge.) How sure are you that this query is returning the resultset you expect? Because the query is rather odd. In terms of performance, dollars to donuts, it's the repeated executions of the dependent subqueries that are really eating your lunch, and your lunchbox. It looks like MySQL is using the index on the **`deviceId`** column to satisfy that subquery, and that doesn't look like the most appropriate index. We notice that there are two JOIN operations to the devices table; there is no reason this table needs to be joined twice. Both JOIN operations require a match to the deviceID column of goldenvariation; the second join to the devices table does additional filtering with `isDeleted=0`. The keywords `INNER` and `CROSS` don't have any impact on the statement at all; and the second join to the `devices` table isn't really a "cross" join, it's really an inner join. (We prefer to see the join predicates in an ON clause rather than the WHERE clause.) The `DATE()` function wrapped around the `rundate` column disables an index range scan operation. These predicates can be rewritten to take advantage of an appropriate index. The **`DISTINCT(deviceId)`** in the SELECT list of an EXISTS subquery is very strange. Firstly, `DISTINCT` is a keyword, not a function. There's no need for parens around **`deviceId`**. But beyond that, it doesn't matter what is returned in the SELECT list of the EXISTS subquery, it could just be **`SELECT 1`**. It's odd to see an `EXISTS` predicate whose subquery doesn't reference any expression from the outer query (i.e. one that isn't a correlated subquery). It's valid syntax. With a correlated subquery, MySQL performs that query for each and every row returned by the outer query. 
The EXPLAIN output looks like MySQL is doing the same thing; it didn't recognize any optimization. The way those EXISTS predicates are written, if there isn't a 'policy-options' row with 'MISMATCH' AND there isn't a 'policy-options' row with 'MISSING' (for the specified date), then the query will not return any rows. If a row of each type is found (for the specified date), then ALL of the 'policy-options' rows for that date are returned. (It's syntactically valid, but it's rather odd.) Assuming that the `id` column on the devices table is UNIQUE (i.e. it's the PRIMARY KEY or there's a UNIQUE index on that column), the DISTINCT keyword is unnecessary on the outermost query. (From the EXPLAIN output, it looks like MySQL already optimized away the usual operations; that is, MySQL recognized that the DISTINCT keyword is unnecessary.) --- But bottom line, it's the dependent subqueries that are killing performance; the absence of suitable indexes, and the predicate on the date column wrapped in a function. To answer your question, yes, this query can be rewritten to return an equivalent resultset more efficiently. (It's not entirely clear that the query is returning the resultset you expect.) 
``` SELECT d1.id AS id27_ , d1.createdTime AS createdT2_27_ , d1.deletedOn AS deletedOn27_ , d1.deviceAlias AS deviceAl4_27_ , d1.deviceName AS deviceName27_ , d1.deviceTypeId AS deviceT21_27_ , d1.equipmentVendor AS equipmen6_27_ , d1.exceptionDetail AS exceptio7_27_ , d1.hardwareVersion AS hardware8_27_ , d1.ipAddress AS ipAddress27_ , d1.isDeleted AS isDeleted27_ , d1.loopBack AS loopBack27_ , d1.modifiedTime AS modifie12_27_ , d1.osVersion AS osVersion27_ , d1.productModel AS product14_27_ , d1.productName AS product15_27_ , d1.routerType AS routerType27_ , d1.rundate AS rundate27_ , d1.serialNumber AS serialN18_27_ , d1.serviceName AS service19_27_ , d1.siteId AS siteId27_ , d1.siteIdA AS siteIdA27_ , d1.status AS status27_ , d1.creator AS creator27_ , d1.lastModifier AS lastMod25_27_ FROM devices d1 JOIN (SELECT g.deviceId FROM goldenvariation g CROSS JOIN (SELECT 1 FROM goldenvariation x3 WHERE x3.goldenVariationType = 'MISMATCH' AND x3.classType = 'policy-options' AND x3.rundate >= '2014-04-14' AND x3.rundate < '2014-04-14' + INTERVAL 1 DAY LIMIT 1 ) t3 CROSS JOIN (SELECT 1 FROM goldenvariation x4 WHERE x4.goldenVariationType = 'MISSING' AND x4.classType = 'policy-options' AND x4.rundate >= '2014-04-14' AND x4.rundate < '2014-04-14' + INTERVAL 1 DAY LIMIT 1 ) t4 WHERE g.classType = 'policy-options' AND g.rundate >= '2014-04-14' AND g.rundate < '2014-04-14' + INTERVAL 1 DAY GROUP BY g.deviceId ) t2 ON t2.device_id = d1.id WHERE d1.isDeleted=0 ```
What should I use instead of IN?
[ "", "mysql", "sql", "performance", "join", "query-optimization", "" ]
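Both answers point out that the two uncorrelated EXISTS predicates behave as an all-or-nothing switch. A tiny SQLite-via-Python demonstration of exactly that behaviour, with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gv (deviceId INTEGER, goldenVariationType TEXT);
    INSERT INTO gv VALUES (1, 'MISMATCH'), (2, 'MISSING'), (3, 'OK');
""")
query = """
    SELECT deviceId FROM gv
    WHERE EXISTS (SELECT 1 FROM gv WHERE goldenVariationType = 'MISMATCH')
      AND EXISTS (SELECT 1 FROM gv WHERE goldenVariationType = 'MISSING')
    ORDER BY deviceId
"""
both = conn.execute(query).fetchall()
print(both)  # [(1,), (2,), (3,)] -- every row passes once both types exist
conn.execute("DELETE FROM gv WHERE goldenVariationType = 'MISSING'")
none = conn.execute(query).fetchall()
print(none)  # [] -- and no row passes when either type is absent
```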
I have a table structure like ``` Files Latest_Update ------------------- ------------------------------ Fid - Name - Type - l-id - c_name - c_val - Fid - ------------------- ------------------------------ 1 - D1 - xls - 1 - text1 - hello - 1 - ------------------- ------------------------------ Revisions ------------------------------ r-id - c_name - c_val - Fid - ------------------------------ 1 - text1 - bye - 1 - ------------------------------ ``` I am trying to get all values of the text1 control from either `Revisions` or `Latest_Update`. Here is what I have tried: ``` SELECT RS.c_name. RS.c_val from (SELECT Revisions.c_name, Revisions.c_val, Latest_Update.c_name,Latest_Update.c_val From Revisions INNER JOIN Latest_Update on Revisions.Fid = Latest_Updates.Fid) AS RS ``` but it's not working; I need the result to be something like ``` Result ------------------------------ - c_name - c_val - Fid - ------------------------------ - text1 - Hello- 1 - ------------------------------ - text1 - bye - 1 - ------------------------------ ``` **I am Using MS ACCESS 2010**
I think union is the solution. Something like... ``` SELECT * FROM ( SELECT Revisions.c_name, Revisions.c_val, Revisions.Fid FROM Revisions UNION ALL SELECT Latest_Update.c_name, Latest_Update.c_val, Latest_Update.Fid FROM Latest_Update ) RS <if needed add Where Condition> ORDER BY RS.Fid ```
Why not use a union instead of an inner join... ``` select * from ( select c_name,c_val,fid from Revisions union select c_name,c_val,fid from Latest_Update)x where fid = 1 ```
SQL Subquery not working properly
[ "", "sql", "ms-access", "subquery", "" ]
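To make the UNION idea concrete, here is the question's sample data run through SQLite via Python (the Access syntax is slightly different, as in the answers above; UNION ALL keeps duplicates, plain UNION would de-duplicate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Latest_Update (c_name TEXT, c_val TEXT, Fid INTEGER);
    CREATE TABLE Revisions     (c_name TEXT, c_val TEXT, Fid INTEGER);
    INSERT INTO Latest_Update VALUES ('text1', 'hello', 1);
    INSERT INTO Revisions     VALUES ('text1', 'bye',   1);
""")
rows = conn.execute("""
    SELECT c_name, c_val, Fid FROM Latest_Update
    UNION ALL
    SELECT c_name, c_val, Fid FROM Revisions
    ORDER BY Fid, c_val
""").fetchall()
print(rows)  # [('text1', 'bye', 1), ('text1', 'hello', 1)]
```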
I have a million row table in `Oracle 11g Express` and want to run a slow SQL select query so I can test stopping agents in various ways and observe the results on the database server. However no matter what I do, like self joins against a non-indexed column, selecting random rows using dbms\_random, where/order by statements using non-indexed columns, the results all finish within a few seconds. Is there a query I can write that will make it take a few minutes?
I'm not quite sure what you mean by "stopping agents" in this context. The only "agent" I can think of in this context would be an Enterprise Manager agent but I doubt that's what you're talking about and I don't see why you'd need a long-running query for that. The simplest way to force a query to run for a long time is to have it sleep for a bit using the `dbms_lock.sleep` procedure. Something like ``` CREATE OR REPLACE FUNCTION make_me_slow( p_seconds in number ) RETURN number IS BEGIN dbms_lock.sleep( p_seconds ); RETURN 1; END; ``` which you can call in your query ``` SELECT st.*, make_me_slow( 0.01 ) FROM some_table st ``` That will call `make_me_slow` once for every row in `some_table`. Each call to `make_me_slow` will take at least 0.01 seconds. If `some_table` has, say, 10,000 rows, that would take at least 100 seconds. If it has 100,000 rows, that would take 1,000 seconds (16.67 minutes). If you don't care about the results of the query, you can use the `dual` table to generate the rows so that you don't need a table with materialized rows. Something like ``` SELECT make_me_slow( 0.01 ) FROM dual CONNECT BY level <= 20000 ``` will generate 20,000 rows of data and take at least 200 seconds. If you want a pure SQL query (which gives you less ability to control exactly how long it's going to run), ``` select count(*) from million_row_table a cross join million_row_table b ``` will generate a 1 million x 1 million = 1 trillion row result set. That's likely to run long enough to blow out whatever `TEMP` tablespace you have defined.
From `Oracle12c` you could use: ``` WITH FUNCTION my_sleep(t NUMBER) RETURN NUMBER AS BEGIN DBMS_LOCK.SLEEP(t); RETURN t; END; SELECT my_sleep(2) FROM dual; -- 2 after two seconds ``` This approach is nice because: * you don't need to use separate PL/SQL block (BEGIN ... END;) * it is fully contained query * does not "pollute" your schema (no need for creation object privilege) * it could be used for Time-Based Blind SQL Injection testing. More info: [www.owasp.org/index.php/Blind\_SQL\_Injection](https://www.owasp.org/index.php/Blind_SQL_Injection)
How to create a slow SQL query?
[ "", "sql", "oracle", "database-performance", "" ]
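The cross-join suggestion at the end of the first answer works because a self cross join squares the row count. A quick SQLite-via-Python check of the arithmetic (1,000 rows is already a million pairs; a million rows would be a trillion, which is why the Oracle query would run long enough to test with):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
# A self cross join squares the row count: 1,000 rows -> 1,000,000 pairs.
(pairs,) = conn.execute("SELECT COUNT(*) FROM t a CROSS JOIN t b").fetchone()
print(pairs)  # 1000000
```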
I have a DB like ``` column_1 | column_2 | column_3 ------------------------------ aaa | bbb | ccc xys | hkj | fgh dfs | jhv | ccc cfg | rty | fgh iyd | olp | ccc gdv | tdr | www ``` I want to get the result such that all the rows that have the same value in `column_3` are printed out along with their `column_3` value ``` column_1 | column_3 -------------------- aaa | ccc dfs | ccc iyd | ccc cfg | fgh xys | fgh gdv | www ``` I know `SELECT DISTINCT column_3 FROM table_name;` gives me all the unique names of `column_3` but how do I get all the `column_1` values that depend on `column_3`?
``` Select ... From table_name As T Where Exists ( Select 1 From table_name As T2 Where T2.column_3 = T.column_3 And T2.<Primary Key Column> <> T.<Primary Key Column> ) ``` Obviously, you would replace `<Primary Key Column>` with the column(s) that uniquely identify each row in `table_name`. If `column_1` is that column, then we would get: ``` Select ... From table_name As T Where Exists ( Select 1 From table_name As T2 Where T2.column_3 = T.column_3 And T2.column_1 <> T.column_1 ) ```
Is it possible you're overthinking this? I believe you want ``` SELECT column_1, column_3 FROM table_name ORDER BY column_3, column_1 ```
sql query to return all rows with same column value
[ "", "mysql", "sql", "" ]
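The first answer's EXISTS pattern, filled in with SQLite's built-in rowid as the placeholder "primary key column" and run over the sample data via Python. Note that it deliberately drops values that occur only once ('www'), which differs from the plain ORDER BY in the second answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (column_1 TEXT, column_3 TEXT);
    INSERT INTO t VALUES ('aaa','ccc'),('xys','fgh'),('dfs','ccc'),
                         ('cfg','fgh'),('iyd','ccc'),('gdv','www');
""")
rows = conn.execute("""
    SELECT column_1, column_3 FROM t
    WHERE EXISTS (SELECT 1 FROM t AS t2
                  WHERE t2.column_3 = t.column_3
                    AND t2.rowid <> t.rowid)
    ORDER BY column_3, column_1
""").fetchall()
print(rows)
# [('aaa', 'ccc'), ('dfs', 'ccc'), ('iyd', 'ccc'), ('cfg', 'fgh'), ('xys', 'fgh')]
```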
I have 2 tables, t1 and t2. t1 has an ID and a Reference. The format of that reference is TO-0000000, where the numeric part cannot exceed 7 characters. t2 has 1500 records. How do I insert that data with an incremented count? t2 doesn't contain a reference field; it contains an ID which has 1500 records. I want to import that ID, or even just loop a count to 1500, and generate a reference. For example ``` ID Reference 1 TO-0000001 .. ...... 1500 TO-0001500 ``` Hope that makes sense.
I assume you want to import the `ID` column to T1 and also generate a reference field when inserting that data. If that is the case, then try this (note the cast to `VARCHAR(7)`; a bare `CAST(... AS CHAR)` defaults to char(30) and pads with trailing spaces): ``` INSERT INTO T1 (ID,Reference) SELECT ID,'TO-'+Left('0000000',(7-LEN(ID)))+CAST(ID AS VARCHAR(7)) FROM T2 ``` Try executing this in SSMS for easier understanding ``` DECLARE @num int=1 DECLARE @Results TABLE ( ID INT, Reference Char(10) ) WHILE @num <= 1500 BEGIN Insert INTO @Results Select @num, 'TO-'+Left('0000000',(7-LEN(@num)))+CAST(@num AS VARCHAR(7)) SET @num = @num+1 END SELECT * FROM @Results ```
Make the ID column of table t1 an `IDENTITY` column and the increment will automatically be taken care of. Now you only need to extract reference values from source table t2 and insert them into target table t1. ``` INSERT INTO t1(References) SELECT References FROM t2 ```
SQL Server: How to generate a reference column with data that increments?
[ "", "sql", "sql-server", "" ]
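Where a printf/FORMAT-style function is available, the zero padding is a single call rather than the LEFT/LEN arithmetic above (SQL Server 2012+ has a FORMAT function that can do similar padding; SQLite's printf is shown here via Python purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (ID INTEGER)")
conn.executemany("INSERT INTO t2 VALUES (?)", [(1,), (42,), (1500,)])
# %07d zero-pads the id to seven digits, matching the TO-0000000 format.
refs = conn.execute(
    "SELECT ID, printf('TO-%07d', ID) FROM t2 ORDER BY ID").fetchall()
print(refs)  # [(1, 'TO-0000001'), (42, 'TO-0000042'), (1500, 'TO-0001500')]
```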
Let's say you have the following table: ``` Id Index 1 3 1 1 2 1 3 3 1 5 ``` what I would like to have is the following: ``` Id Index 1 0 1 1 2 0 3 0 1 2 ``` As you might notice, the goal is, for every group of rows where Id is the same, to incrementally update the Index column starting from zero. Now, I know this is fairly simple using cursors, but out of curiosity: is there a way to do this with a single UPDATE query, perhaps combining it with temp tables, common table expressions or something similar?
Yes, assuming that you don't really care about the order of the new `index` values. SQL Server offers updatable CTEs and window functions that do exactly what you want (note `index` is a reserved word, so it has to be bracketed): ``` with toupdate as ( select t.*, row_number() over (partition by id order by (select NULL)) as newindex from mytable t ) update toupdate set [index] = newindex; ``` If you want them in a specific order, then you need another column to specify the ordering. The existing `index` column doesn't work.
With `Row_number()` -1 and `CTE` you can write as: ``` CREATE TABLE #temp1( Id int, [Index] int) INSERT INTO #temp1 VALUES (1,3),(1,1),(2,1),(3,3),(1,5); --select * from #temp1; With CTE as ( select t.*, row_number() over (partition by id order by (select null))-1 as newindex from #temp1 t ) Update CTE set [Index] = newindex; select * from #temp1; ``` [Demo](http://rextester.com/KVRXFB63876)
Is there a way to update groups of rows with separate incrementing values in one query
[ "", "sql", "sql-server", "t-sql", "" ]
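A correlated-count variant of the same idea — each row's new value is the number of earlier rows with the same id, i.e. `ROW_NUMBER() - 1` per group — sketched in SQLite via Python, with an explicit `rid` key added because, as the accepted answer says, you need some column to order by:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (rid INTEGER PRIMARY KEY, id INTEGER, [index] INTEGER);
    INSERT INTO t (id, [index]) VALUES (1,3),(1,1),(2,1),(3,3),(1,5);
""")
# Each row's new [index] = count of earlier rows sharing its id.
conn.execute("""
    UPDATE t SET [index] = (SELECT COUNT(*) FROM t AS t2
                            WHERE t2.id = t.id AND t2.rid < t.rid)
""")
rows = conn.execute("SELECT id, [index] FROM t ORDER BY rid").fetchall()
print(rows)  # [(1, 0), (1, 1), (2, 0), (3, 0), (1, 2)]
```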
What is "Impedance Mismatch" and how does it relate to databases?
Quoted from [haacked.com](http://haacked.com/archive/2004/06/15/impedance-mismatch.aspx/) > Imagine you have a low current flashlight that normally uses AAA > batteries. Don't try this at home, but suppose you could attach your > car battery to the flashlight. The low current flashlight will > pitifully output a fraction of the light energy that the high current > battery is capable of producing. Likewise, if you attached the AAA > batteries to Batman's spotlight, you'll also get low output. However, > match the AAA batteries to the flashlight and they will run with > maximum efficiency. > > So taking this discussion back to software engineering, if you > imagine the flow of data to be analogous to a current, then the > impedance of a relational data model is not matched with the impedance > of an object hierarchy. Therefore, the data will not flow with maximum > efficiency, a result of the impedance mismatch.
Relational impedance mismatch is a set of technical difficulties appearing because objects or class definitions must be mapped to database tables defined by relational schema. Concretely, initial versions of SQL were not computationally complete (they did not have any programming constructs like declarations or conditional statements) but nowadays you can embed SQL in high-level programming languages (Java, C, Python and so on). And here comes the **impedance mismatch**. SQL is a declarative language while C (or whatever high-level programming language you chose to embed your SQL queries on) is a procedural language, leading to a mixing of approaches and requiring a huge amount of unnecessary effort. For instance, SQL has the Date type and handles rows of data at a time while high-level programming languages might use different models to represent data (i.e might not have the Date type) and can handle only one row of data at a time. We might consider extending SQL with procedural constructs (SQL/PSM = Persistent Stored Modules) to avoid impedance mismatching and to treat a table as a flat file that can be accessed one row at a time.
What is Impedance Mismatch?
[ "", "sql", "database", "" ]
I have a data structure where I have to store pairs of elements. Each pair has exactly 2 values in it, so we are employing a table with the fields (leftvalue, rightvalue, ...). These pairs should be unique, and two pairs are considered the same if their values are swapped. ``` Example: (Fruit, Apple) is the same as (Apple, Fruit). ``` If it is possible in an efficient way, I would put a database constraint on the fields, but not at any cost - performance is more important. We are using `MSSQL Server 2008` currently, but an upgrade is possible. Is there an efficient way of achieving this?
Two solutions, both really about changing the problem into an easier one. I'd usually prefer the `T1` solution if forcing a change on consumers is acceptable: ``` create table dbo.T1 ( Lft int not null, Rgt int not null, constraint CK_T1 CHECK (Lft < Rgt), constraint UQ_T1 UNIQUE (Lft,Rgt) ) go create table dbo.T2 ( Lft int not null, Rgt int not null ) go create view dbo.T2_DRI with schemabinding as select CASE WHEN Lft<Rgt THEN Lft ELSE Rgt END as Lft, CASE WHEN Lft<Rgt THEN Rgt ELSE Lft END as Rgt from dbo.T2 go create unique clustered index IX_T2_DRI on dbo.T2_DRI(Lft,Rgt) go ``` In both cases, neither `T1` nor `T2` can contain duplicate values in the `Lft,Rgt` pairs.
If you always store the values in order but store the direction in another column, ``` CREATE TABLE [Pairs] ( [A] NVarChar(MAX) NOT NULL, [B] NVarChar(MAX) NOT NULL, [DirectionAB] Bit NOT NULL, CONSTRAINT [PK_Pairs] PRIMARY KEY ([A],[B]) ) ``` You can achieve exactly what you want with one clustered index, and optimize your lookups too. So when I insert the pair `'Apple', 'Fruit'` I'd do, ``` INSERT [Pairs] VALUES ('Apple', 'Fruit', 1); ``` Nice and easy. Then I insert `'Fruit', 'Apple'`, ``` INSERT [Pairs] VALUES ('Apple', 'Fruit', 0); -- 0 because order is reversed. ``` The insert fails because this is a primary key violation. To further illustrate, the pair `'Coconuts', 'Bananas'` would be stored as ``` INSERT [Pairs] VALUES ('Bananas', 'Coconuts', 0); ``` --- For additional lookup performance, I'd add the index ``` CREATE NONCLUSTERED INDEX [IX_Pairs_Reverse] ON [Pairs] ([B], [A]); ``` --- If you can't control inserts to the table, it may be necessary to ensure that `[A]` and `[B]` are inserted correctly. ``` CONSTRAINT [CK_Pairs_ALessThanB] CHECK ([A] < [B]) ``` But this may be an unnecessary performance hit, depending on how controlled your inserts are.
Unique constraint on two fields, and their opposite
[ "", "sql", "sql-server", "sql-server-2008", "unique", "" ]
I have the following table:

```
DECLARE @T TABLE(
Val NUMERIC(18,2)
)
```

with the following values:

```
INSERT INTO @T SELECT 3.01
INSERT INTO @T SELECT 30
INSERT INTO @T SELECT 2.00
INSERT INTO @T SELECT 2.1
```

when I run this command:

```
SELECT * FROM @T
```

I get:

```
3.01
30.00
2.00
2.10
```

what I want to get is:

```
3.01
30
2
2.1
```

how can I achieve that?
Why don't you use a real number type instead. Following are the two ways. ``` DECLARE @T TABLE( Val real ) INSERT INTO @T SELECT 3.01 INSERT INTO @T SELECT 30 INSERT INTO @T SELECT 2.00 INSERT INTO @T SELECT 2.1 SELECT Val FROM @T ``` OR ``` DECLARE @T TABLE( Val NUMERIC(18,2) ) INSERT INTO @T SELECT 3.01 INSERT INTO @T SELECT 30 INSERT INTO @T SELECT 2.00 INSERT INTO @T SELECT 2.1 SELECT cast(Val as real) FROM @T ```
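If the trailing zeros only matter for display, another option is to format in the client instead of changing the column type. A hypothetical Python helper (the function name and formatting rules are assumptions, not part of the answer above):

```python
from decimal import Decimal

def strip_trailing_zeros(value):
    # Render a NUMERIC(18,2) value without trailing zeros:
    # "30.00" -> "30", "2.10" -> "2.1", "3.01" stays "3.01".
    text = f"{Decimal(value):f}"
    if "." in text:
        text = text.rstrip("0").rstrip(".")
    return text

formatted = [strip_trailing_zeros(v) for v in ("3.01", "30.00", "2.00", "2.10")]
```

Unlike a cast to `real`, this keeps the stored value exact and only changes the rendering.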
Completely hackish, but a simple solution. But, it's not subject to rounding errors. ``` DECLARE @T TABLE( Val NUMERIC(18,2) ) INSERT INTO @T SELECT 3.01 INSERT INTO @T SELECT 30.00 INSERT INTO @T SELECT 2.0 INSERT INTO @T SELECT 2.10 INSERT INTO @T SELECT 2.20 INSERT INTO @T SELECT 2.30 INSERT INTO @T SELECT 2.40 INSERT INTO @T SELECT 2.50 INSERT INTO @T SELECT 2.60 INSERT INTO @T SELECT 2.70 INSERT INTO @T SELECT 2.80 INSERT INTO @T SELECT 2.90 select replace( replace( replace( replace( replace( replace( replace( replace( replace( replace( convert(varchar,Val) ,'.00','') ,'.10','.1') ,'.20','.2') ,'.30','.3') ,'.40','.4') ,'.50','.5') ,'.60','.6') ,'.70','.7') ,'.80','.8') ,'.90','.9') from @T ```
convert numeric(18,2) to varchar
[ "", "sql", "sql-server", "t-sql", "" ]
I have a text file with three columns of text (strings) per line. I want to create an SQL insert command by substituting each of the three strings into a skeleton SQL command. I have put place markers in the skeleton script and used SED s/placemarker1/first string/ but with no success. Is there an easier way to accomplish this task. I used pipes to repeat the process for 'second string' etc. I actually used awk to get the fields but could not convert to the actual values. ``` enter code here for i in [ *x100* ]; do if [ -f "$i" ]; then { grep -e "You received a payment" -e "Transaction ID:" -e "Receipt No: " $i >> ../temp cat ../temp | awk 'NR == 1 {printf("%s\t",$9)} NR == 2 {printf("%s\t",$9)} NR == 3 {printf("%s\n",$3)}' | awk '{print $2,$1,$3}' | sed 's/(/ /' | sed 's/)./ /' >> ../temp1 cat temp1 | awk 'email="$1"; transaction="$2"; ccreceipt="$3";' cat /home/linux014/opt/skeleton.sql | sed 's/EMAIL/"$email"/' | sed 's/TRANSACTION/"$transaction"/' | sed 's/CCRECEIPT/"$ccreceipt"/' > /home/linux014/opt/new-member.sql rm -f ../temp } fi done ``` I cannot figure out how to get the values instead of the names of the variables inserted into my string. Sample input (one line only): ``` catdog@gmail.com 2w4e5r6t7y8u9i8u7 1111-2222-3333-4444 ``` Sample actual output: ``` INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('"$email"','"$transaction"','"$ccreceipt"'); ``` Preferred output: ``` INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('catdog@gmail.com','2w4e5r6t7y8u9i8u7','1111-2222-3333-4444'); ```
``` awk '{print "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES"; print "(\x27"$1"\x27,\x27"$2"\x27,\x27"$3"\x27);"}' input.txt ``` Converts your sample input to preferred output. It should work for multi line input. **EDIT** The variables you are using in this line: ``` cat temp1 | awk 'email="$1"; transaction="$2"; ccreceipt="$3";' ``` are only visible to awk and in this command. They are not shell variables. Also in your `sed` commands remove those single quotes then you can get the values: ``` sed "s/EMAIL/$email/" ```
You can try this `bash`, ``` while read email transaction ccreceipt; do echo "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('$email','$transaction','$ccreceipt');"; done<inputfile ``` inputfile: ``` catdog@gmail.com 2w4e5r6t7y8u9i8u7 1111-2222-3333-4444 dog@gmail.com 2dsdsda53563u9i8u7 3333-4444-5555-6666 ``` **Test:** ``` sat:~$ while read email transaction ccreceipt; do echo "INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('$email','$transaction','$ccreceipt')"; done<inputfile INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('catdog@gmail.com','2w4e5r6t7y8u9i8u7','1111-2222-3333-4444') INSERT INTO users (email,paypal_tran,CCReceipt) VALUES ('dog@gmail.com','2dsdsda53563u9i8u7','3333-4444-5555-6666') ```
string substitution from text file to another string
[ "", "sql", "string", "bash", "awk", "sed", "" ]
I am attempting to group by twice, once by an individual level field and then by the month of the timestamp field. Little new to SQL but here's what I came up with after reading another SO post here: [SQL query to group by month part of timestamp](https://stackoverflow.com/questions/18087659/sql-query-to-group-by-month-part-of-timestamp) ``` SELECT VwNIMUserDim.USER_EMAIL_ADDRESS, VwNIMEventFct.NIM_USER_ID, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 880 THEN 1 ELSE 0 END) AS APP_OPEN, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 881 THEN 1 ELSE 0 END) AS AUTODL_SETTINGS_SAVE, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 882 THEN 1 ELSE 0 END) AS AUTO_QUERY_CONFIRM, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 883 THEN 1 ELSE 0 END) AS ISSUE_CLOSE, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 884 THEN 1 ELSE 0 END) AS ISSUE_DOWNLOAD, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 885 THEN 1 ELSE 0 END) AS ISSUE_DOWNLOAD_COMPLETE, SUM(CASE WHEN NIM_EVENT_TYPE_ID = 886 THEN 1 ELSE 0 END) AS PICKER_SEND_PICKS FROM RDMAVWSANDBOX.VwNIMEventFct GROUP BY NIM_USER_ID, MONTH('EVENT_GMT_TIMESTAMP') ``` The error message returned by my SQL client, Teradata, says: "SELECT FAILED Error 3706 Syntax error expected something in between ',' and the MONTH keyword. This is my first time doing two things: Grouping By twice and using the Month function. What am I doing wrong here? How do I group by email address users in each month?
There's no MONTH function in Teradata/Standard SQL, it's EXTRACT(YEAR/MONTH/DAY/HOUR/MINUTE/SECOND): ``` EXTRACT(MONTH FROM EVENT_GMT_TIMESTAMP) ```
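To see the grouping-on-a-month-part idea in action outside Teradata, here is a small sketch using Python's `sqlite3`. SQLite has no `EXTRACT`, so `strftime('%m', ...)` stands in for `EXTRACT(MONTH FROM ...)`; the sample table and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event_ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", "2024-01-05 10:00:00"),
     ("a", "2024-01-20 11:30:00"),
     ("b", "2024-02-02 09:15:00")],
)

# Group by user and by the month part of the timestamp.
rows = conn.execute("""
    SELECT user_id, strftime('%m', event_ts) AS event_month, COUNT(*)
    FROM events
    GROUP BY user_id, strftime('%m', event_ts)
    ORDER BY user_id, event_month
""").fetchall()
```

The key point carries over: the month expression appears both in the select list and in the GROUP BY, rather than a bare quoted column name.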
Your query as written will result in a PRODUCT (Cartesian) JOIN between the VwNIMUserDIm and VwNIMEventFct tables (views). I have taken the liberty to modify the SQL based on your comments to the previous response: ``` SELECT User_.USER_EMAIL_ADDRESS, Event_.NIM_USER_ID, Event_.EVENT_GMT_TIMESTAMP(FORMAT 'yyyy-mm')(char(7)) AS EVENT_MONTH, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 880 THEN 1 ELSE 0 END) AS APP_OPEN, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 881 THEN 1 ELSE 0 END) AS AUTODL_SETTINGS_SAVE, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 882 THEN 1 ELSE 0 END) AS AUTO_QUERY_CONFIRM, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 883 THEN 1 ELSE 0 END) AS ISSUE_CLOSE, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 884 THEN 1 ELSE 0 END) AS ISSUE_DOWNLOAD, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 885 THEN 1 ELSE 0 END) AS ISSUE_DOWNLOAD_COMPLETE, SUM(CASE WHEN Event_.NIM_EVENT_TYPE_ID = 886 THEN 1 ELSE 0 END) AS PICKER_SEND_PICKS FROM RDMAVWSANDBOX.VwNIMEventFct Event_ JOIN RDMAVWSANDBOX.VwNIMUserDim User_ ON Event_.NIM_USER_ID = User_.NIM_USER_ID GROUP BY 1,2; ```
Grouping by Month of TimeStamp
[ "", "sql", "timestamp", "teradata", "group-by", "" ]
I have 2 Jobs that read and produce data in a Sql Server Database. Every once in a while the jobs crash with a System.Transactions.TransactionInDoubtException. The exact stack trace is: ``` Unhandled Exception: System.Transactions.TransactionInDoubtException: The transaction is in doubt. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception: The wait operation timed out. Exitcode: -532462766 --- End of inner exception stack trace --- at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParserStateObject.TryReadNetworkPacket() at System.Data.SqlClient.TdsParserStateObject.TryPrepareBuffer() at System.Data.SqlClient.TdsParserStateObject.ReadSniSyncOverAsync() at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.TdsParserStateObject.TryReadByte(Byte& value) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) ``` I googled a bit about it and found something about MSDTC, but I think this can't be the problem because the Transaction should be local since the jobs only work on a single database. 
The following query: ``` SELECT cntr_value AS NumOfDeadLocks FROM sys.dm_os_performance_counters WHERE object_name = 'SQLServer:Locks' AND counter_name = 'Number of Deadlocks/sec' AND instance_name = '_Total' ``` shows that there have been no deadlocks on the database, so deadlocks can't be the reason. I couldn't find any other resource on the internet which gives exact information about the reason of the exception. So has anybody a idea what the reason could be or how to find the root of this error?
Even if the transaction is local, the transaction will still be escalated to MSDTC if you open multiple connections within the same transaction scope, according to this article: <http://msdn.microsoft.com/en-us/library/ms229978(v=vs.110).aspx>

> An escalation that results in the System.Transactions infrastructure
> transferring the ownership of the transaction to MSDTC happens when:
> ...
>
> * At least two durable resources that support single-phase notifications are enlisted in the transaction. For example, enlisting
> a single connection with a database does not cause a transaction to be promoted.
> However, whenever you open a second connection to a database causing
> the database to enlist, the System.Transactions infrastructure detects
> that it is the second durable resource in the transaction, and
> escalates it to an MSDTC transaction.

NOTE: I have read some articles that state that this only applies to SQL 2005, and that SQL 2008+ is smarter about the MSDTC promotion. These state that SQL 2008 will only promote to MSDTC when multiple connections are open **at the same time**. See: [TransactionScope automatically escalating to MSDTC on some machines?](https://stackoverflow.com/questions/1690892/transactionscope-automatically-escalating-to-msdtc-on-some-machines)

Also, your inner exception is a `Timeout` (System.Data.SqlClient.SqlException: Timeout expired), not a `Deadlock`. While both are related to blocking, they are not the same thing. A `timeout` occurs when a statement waits longer than the configured timeout period for locks on a resource that is held by another connection. A `deadlock` occurs when two different connections are competing for the same resources, and they are blocking each other in a way that they will never be able to complete unless one of the connections is terminated (this is why the deadlock error messages say "transaction... has been chosen as the deadlock victim").
Since your error was a Timeout, this explains why your deadlock query returned a 0 count.

`System.Transactions.TransactionInDoubtException` from MSDN (<http://msdn.microsoft.com/en-us/library/system.transactions.transactionindoubtexception(v=vs.110).aspx>) states:

> This exception is thrown when an action is attempted on a transaction
> that is in doubt. A transaction is in doubt when the state of the
> transaction cannot be determined. Specifically, the final outcome of
> the transaction, whether it commits or aborts, is never known for this
> transaction.
>
> This exception is also thrown when an attempt is made to
> commit the transaction and the transaction becomes InDoubt.

The reason: something occurred during the `TransactionScope` that caused its state to be unknown at the end of the transaction.

The cause: there could be a number of different causes, but it is tough to identify your specific cause without the source code being posted. Things to check:

1. If you are using SQL 2005, and more than one connection is opened, your transaction will be promoted to an MSDTC transaction.
2. If you are using SQL 2008+, AND you have multiple connections open at the same time (i.e. nested connections or multiple ASYNC connections running in parallel), then the transaction will be promoted to an MSDTC transaction.
3. If you have "try/catch{retry if timeout/deadlock}" logic running within your code, this can cause issues when the transaction is within a `System.Transactions.TransactionScope`, because of the way that SQL Server automatically rolls back transactions when a timeout or deadlock occurs.
I think this can happen also without MSDTC. I think I have had this happen in a system that did not use MSDTC at all. I think it is triggered if your DB connection fails at exactly a certain moment. The moment has to be such that the service has sent a COMMIT to the DB, but then the connection fails so that the service can't be sure if the DB ever received the COMMIT command or not.
Reason for System.Transactions.TransactionInDoubtException
[ "", "sql", ".net", "sql-server", "transactions", "" ]
I have multiple tables, but I only want to operate on these two. The objective is to project the order\_id, the sum of the total order cost (calculated from multiple tables), and the order\_date, grouped by each order\_id and sorted by order\_date. It sounds quite simple, but I have tried multiple queries; the sum gave errors and wrong results, and I am far from getting the correct simple query for this. Here are the tables and data:

Order\_Table :

```
ORDERID CUSTOMERID EMPLOYEEID ORDERDATE PURCHASEORDERNUMBER SHIPDATE  SHIPPINGMETHODID FREIGHTCHARGE TAXES PAYMENTRECEIVED COMMENTS
------- ---------- ---------- --------- ------------------- --------- ---------------- ------------- ----- --------------- --------
1       2          1          23-JAN-05 10                  29-JAN-05 1                64            5     0
2       1          1          23-JAN-05 11                  29-JAN-05 1                0             5     0
3       3          3          21-JAN-05 30                  28-JAN-05 5                0             5     0
4       2          2          05-JAN-05 26                  19-JAN-05 2                0             5     0
5       4          5          02-JAN-05 32                  27-JAN-05 2                0             5     0
13      1          1          29-JAN-08 1                             2                50            5     0
```

6 rows selected

Order\_Detail\_Table :

```
ORDERDETAILID ORDERID PRODUCTID QUANTITY UNITPRICE DISCOUNT
------------- ------- --------- -------- --------- --------
1             1       4         15       5         0
2             2       7         10       75        0
3             3       8         5        4         0
5             4       9         100      5         0.05
6             5       6         5        7         0
7             5       9         30       5         0.05
9             1       1         9        6         0
11            13      4         1        5         0
12            13      1         2        25        0
13            13      7         1        75        0
```

10 rows selected

So for example, for orderid = 13 we have 3 items ordered, listed in the order\_detail\_table. For the unique orderid = 13, I want to calculate 3 rows of (quantity\*unitprice)\*(1-discount). It will be 130 for orderid = 13.

```
select o.orderid, sum((od.unitprice*od.quantity)*(1-od.discount*0.01)) as total
from order_table o, order_detail_table od
where o.orderid=od.orderid
group by o.orderid;
```

Then, using those values, I would like to add the final summed values to the individual freightcharge + taxes: (130+50)\*(1+tax%) = 180\*(1.05) = 189 for the row orderid = 13.

I am stuck here on how to add the order\_table.freightcharge etc. I just get compile and logic errors.
```
select o.orderid, (sum((od.unitprice*od.quantity)*(1-od.discount*0.01)) + o.freightcharge) as total
from order_table o, order_detail_table od
where o.orderid=od.orderid
group by o.orderid;
```

Gives me SQL Error: ORA-00979: not a GROUP BY expression

---

I don't know why, as I think the sum is the only expression that needs to be aggregated, while + o.freightcharge is outside of the sum.

```
select o.orderid, sum((od.unitprice*od.quantity)*(1-od.discount*0.01)) + sum(o.freightcharge) as total
from order_table o, order_detail_table od
where o.orderid=od.orderid
group by o.orderid;
```

Gives me 3 x freightcharges for a single order in the case of orderid = 13 -- wrong logic. There should be 6 rows, one for each unique orderid.

I tried building it up from several simpler queries, but I haven't had success. I think I might need to use subqueries. Please help.
The reason is that you combine the freightcharge from order\_table with aggregated data. Something like this should solve the problem:

```
select o.orderid, sq.total+o.freightcharge
from order_table o
left join (
    select o.orderid, sum((od.unitprice*od.quantity)*(1-od.discount*0.01)) as total
    from order_table o, order_detail_table od
    where o.orderid=od.orderid
    group by o.orderid
) sq on sq.orderid=o.orderid;
```
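With the question's data, order 13's detail rows sum to 130 and the freight charge of 50 should be added exactly once, giving 180. The derived-table pattern can be verified on a toy version of the schema using Python's `sqlite3` (column names simplified; the discount is treated as a plain fraction here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (orderid INTEGER PRIMARY KEY, freightcharge REAL);
    CREATE TABLE order_details (orderid INTEGER, quantity INTEGER,
                                unitprice REAL, discount REAL);
    INSERT INTO orders VALUES (13, 50);
    INSERT INTO order_details VALUES (13, 1, 5, 0), (13, 2, 25, 0), (13, 1, 75, 0);
""")

# Aggregate the detail rows first, then join the per-order sum back to
# orders, so freightcharge is added once per order, not once per detail row.
rows = conn.execute("""
    SELECT o.orderid, sq.total + o.freightcharge
    FROM orders o
    JOIN (SELECT orderid,
                 SUM(unitprice * quantity * (1 - discount)) AS total
          FROM order_details
          GROUP BY orderid) sq ON sq.orderid = o.orderid
""").fetchall()
```

This is exactly why the naive `SUM(...) + o.freightcharge` version fails: the scalar column would have to be in the GROUP BY, while the derived table sidesteps the problem.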
I think this will work for you: ``` ;with OrderTotals AS ( select o.orderid ,o.freightcharge ,sum((od.unitprice*od.quantity)*(1-od.discount*0.01)) as total from order_table o inner join order_detail_table od on o.orderid = od.orderid group by o.orderid ,o.freightcharge ) select OrderTotals.OrderId AS OrderId ,OrderTotals.Total AS OrderTotal ,OrderTotals.Total + freightcharge AS Total from OrderTotals ```
SQL sum values from different tables using item_details group by order_id and order_date
[ "", "sql", "subquery", "aggregate-functions", "calculated-columns", "" ]
I'm looking to query all Users without Comments in a single SQL query.

Models:

```
class User < ActiveRecord::Base
  has_many :comments
end

class Comment < ActiveRecord::Base
  belongs_to :user
end
```

So I want the opposite of this:

```
User.joins(:comments).group('users.id')
```

But not like this (because it generates two queries):

```
User.where.not(id: Comment.pluck(:user_id))
```

Maybe something like this?

```
User.joins.not(:comments).group('users.id')
```

Thanks for any input!
You can accomplish this with:

```
User.includes(:comments).where(comments: { id: nil })
```

This will result in raw SQL that looks something like:

```
SELECT DISTINCT `users`.`*` FROM `users` LEFT OUTER JOIN `comments` ON `comments`.`user_id` = `users`.`id` WHERE `comments`.`id` IS NULL
```

For accomplishing this via a subquery, see the below answer.

**Old Answer**:

You can do something like

```
User.where.not(id: Comment.select(:user_id))
```

If you wanted a single (though nested) query. Otherwise, check out <http://guides.rubyonrails.org/active_record_querying.html#joining-tables> for using an outer join.
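The LEFT JOIN / `IS NULL` pattern in the generated SQL can be checked directly in plain SQL, independent of ActiveRecord. A sketch using Python's `sqlite3` with an invented three-user dataset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob'), (3, 'cat');
    INSERT INTO comments VALUES (10, 1), (11, 1), (12, 3);
""")

# LEFT JOIN keeps every user; users with no comments get NULL comment ids,
# and the WHERE clause keeps exactly those rows.
without_comments = conn.execute("""
    SELECT u.name
    FROM users u
    LEFT JOIN comments c ON c.user_id = u.id
    WHERE c.id IS NULL
""").fetchall()
```

Only the user with no comment rows survives the filter, which is the "opposite of an inner join" the question asks for.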
If you are using `postgresql` you can do something like this

```
User.joins("LEFT join comments c on users.id = c.comment_id").
  select("users.id").
  group("users.id").
  having("count(users.id) = 1")
```

This will generate this query

```
select u.id
from users u
LEFT join comments c on c.comment_id = u.id
group by u.id
having count(u.id) = 1
```

This query does not generate two SQL statements (nor a nested one), unlike the answer above.
Rails User.joins.not(...) in Active Record?
[ "", "sql", "ruby-on-rails", "activerecord", "ruby-on-rails-4", "" ]
I have a survey system with the following `questionanswer` table

```
questionId personId response
1 a red
2 a blue
3 a green
4 a black
1 b red
2 b blue
3 b green
4 b black
```

I need to construct a query for a search form allowing users to pick the question and the response to be returned, i.e. the user may pick `question 1 answer = red` and `question 3 and answer green`.

I have to return those `personId`'s that match that criteria (in the above table it would return a and b).

`pseudo: select from questionanswer table all personId's where question=1 && answer=red AND those personId's where question=3 and answer was green`

This code would run after the administrator selects 'search', returning back distinct a and b personId's (seems simple but I must be missing something?)
How are you passing in the QuestionIds and Responses? Let's assume you eventually get them into a table variable: ``` DECLARE @Search TABLE ( questionId int, response varchar(30) ) ``` Then your SQL then becomes: ``` SELECT q.personID FROM questionanswertable AS q INNER JOIN @Search AS s ON s.questionId = q.questionId AND s.response = q.response GROUP BY q.personID HAVING COUNT(1) = (SELECT COUNT(1) FROM @Search) ```
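The counting idea generalizes to any number of criteria: a person matches only when their number of satisfied (question, response) pairs equals the number of criteria supplied. A runnable sketch in Python's `sqlite3`; the dynamic filter construction is illustrative, not part of the answer above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE questionanswer (questionId INTEGER, personId TEXT, response TEXT);
    INSERT INTO questionanswer VALUES
        (1, 'a', 'red'), (2, 'a', 'blue'), (3, 'a', 'green'), (4, 'a', 'black'),
        (1, 'b', 'red'), (2, 'b', 'blue'), (3, 'b', 'green'), (4, 'b', 'black'),
        (1, 'c', 'red'), (3, 'c', 'purple');
""")

criteria = [(1, "red"), (3, "green")]

# A person matches only if they satisfy every (question, response) pair,
# i.e. their match count equals the number of criteria.
filters = " OR ".join("(questionId = ? AND response = ?)" for _ in criteria)
params = [v for pair in criteria for v in pair] + [len(criteria)]
matches = conn.execute(f"""
    SELECT personId
    FROM questionanswer
    WHERE {filters}
    GROUP BY personId
    HAVING COUNT(*) = ?
    ORDER BY personId
""", params).fetchall()
```

Person `c` matches only one of the two criteria and is dropped by the `HAVING` clause, while `a` and `b` satisfy both.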
``` SELECT DISTINCT personId FROM questionanswer WHERE questionId IN(SELECT questionId FROM questionanswer WHERE questionId = 1 AND response = 'red') AND questionId IN(SELECT questionId FROM questionanswer WHERE questionId = 3 AND response = 'green') ```
Sql query which returns multiple rows matching criteria
[ "", "sql", "sql-server", "linq", "" ]
**Hello, I have a table with columns:**

```
saled qty, item no, no
-3, 1996-s, 149
-2, 1996-s, 150
-2, 1968-b, 151
```

Now I should get the **top sold** products. Maybe I have to **group** somehow by item no, then sort by saled qty, and then I could **select top 3** for example?

**Update:** There are also other columns that I don't need, but because of them I get the error: *timestamp' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.*

**Almost there**

```
SELECT TOP(3) [Item No_], SUM([Invoiced Quantity]) as [qty]
FROM [Demo Database NAV (7 - 1)].[dbo].[CRONUS (Schweiz) AG$Value Entry]
GROUP BY [Item No_]
HAVING [Item Ledger Entry Type] = 1
ORDER BY SUM([Invoiced Quantity]) DESC
```

gives error: Item Ledger Entry Type' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.
```
SELECT TOP(3) item_no, SUM([saled qty])
FROM TABLENAME
GROUP BY item_no, [Item Ledger Entry Type]
HAVING [Item Ledger Entry Type] = 1
ORDER BY SUM([saled qty]) DESC
```

or

```
SELECT TOP(3) item_no, SUM([saled qty])
FROM TABLENAME
WHERE [Item Ledger Entry Type] = 1
GROUP BY item_no
ORDER BY SUM([saled qty]) DESC
```

Depending on whether you want pre- or post-aggregation filtering.

Just replace 3 with whatever number of products you want.

Explanation: When you do a group by, you can only include columns in your result which are either part of the group by clause OR are being aggregated in some way. Here I've used SUM, which aggregates the [saled qty] column, and I've omitted all the other columns from your query. The best way to think of this is to imagine that you yourself, rather than SQL, were grouping the data... Say we group by item_no... how are you meant to know what to do with all the saled qty values? You will have multiple values of saled qty for each row of your results (one row per whatever is in your grouping clause, in this case item_no). SQL will not allow you to be ambiguous about this and so throws an error.
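The filter-then-group-then-order shape can be exercised on a toy dataset with Python's `sqlite3`, where `TOP` is spelled `LIMIT`. One caveat worth noting: the question's sold quantities are negative, so under that convention the biggest sellers have the most negative sums and the sort would need to be ascending; this sketch assumes exactly that (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE value_entry (item_no TEXT, entry_type INTEGER, invoiced_qty INTEGER);
    INSERT INTO value_entry VALUES
        ('1996-s', 1, -3), ('1996-s', 1, -2),
        ('1968-b', 1, -2), ('1968-b', 1, -1),
        ('2000-x', 1, -1), ('9999-z', 2, -50);
""")

# Filter on entry_type before grouping, then rank items by total sold.
# Negative quantities represent sales, so the top seller has the smallest sum.
top_items = conn.execute("""
    SELECT item_no, SUM(invoiced_qty) AS qty
    FROM value_entry
    WHERE entry_type = 1
    GROUP BY item_no
    ORDER BY SUM(invoiced_qty) ASC
    LIMIT 2
""").fetchall()
```

The row with `entry_type = 2` never reaches the aggregation, mirroring the `WHERE [Item Ledger Entry Type] = 1` variant above.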
```
SELECT TOP 3 *
FROM table
GROUP BY itemno
ORDER BY saledqty DESC
```

**Update:** If you need to select other columns too, then include them in your GROUP BY clause also. For example:

```
SELECT TOP 3 count(itemno), itemno, timestamp
FROM table
GROUP BY itemno, timestamp
ORDER BY saledqty DESC
```
SQL query - get top sold items from table with cols (saled qty, item no, no )
[ "", "sql", "sql-server", "select", "group-by", "" ]
I need a query that returns a certain result for a column depending on what value that column has:

> if the column is one of the following: I, D, U then I want to return Y
>
> if the column is one of the following: N, E, D then I want to return N
>
> else: I want to return NULL

I wrote the following statement but it doesn't work.

```
SELECT HIERARCHY_TYPE, NODE_ID, NODE_TYPE, NODE_NAME, NODE_LEVEL, PREFERRED_ALIAS, PARENT_NODE_ID,
CASE ACTIVE_INDICATOR
WHEN ('I' or 'U' or 'Y') THEN 'Y'
WHEN ('D' or 'E' or 'N') THEN 'N'
ELSE NULL
END
FROM MV_HIERARCHY MV;
```

Is there a way to rewrite it without using multiple OR clauses for each possible value?
I'd use the `IN` operator: ``` SELECT HIERARCHY_TYPE, NODE_ID, NODE_TYPE, NODE_NAME, NODE_LEVEL, PREFERRED_ALIAS, PARENT_NODE_ID, CASE WHEN ACTIVE_INDICATOR IN ('I','U','Y') THEN 'Y' WHEN ACTIVE_INDICATOR IN ('D','E','N') THEN 'N' ELSE NULL END AS ACTIVE_INDICATOR FROM MV_HIERARCHY MV; ```
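A quick check that `CASE WHEN ... IN (...)` maps values as intended, sketched with Python's `sqlite3` and an invented two-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mv_hierarchy (node_id INTEGER, active_indicator TEXT)")
conn.executemany("INSERT INTO mv_hierarchy VALUES (?, ?)",
                 [(1, "I"), (2, "D"), (3, "X"), (4, "U")])

# CASE WHEN <col> IN (...) collapses the repeated OR comparisons.
flags = conn.execute("""
    SELECT node_id,
           CASE WHEN active_indicator IN ('I', 'U', 'Y') THEN 'Y'
                WHEN active_indicator IN ('D', 'E', 'N') THEN 'N'
                ELSE NULL
           END AS active_flag
    FROM mv_hierarchy
    ORDER BY node_id
""").fetchall()
```

Values outside both lists fall through to NULL, exactly as the question requires.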
```
CASE WHEN ACTIVE_INDICATOR IN ('I','U','Y') THEN 'Y'
     WHEN ACTIVE_INDICATOR IN ('D', 'E', 'N') THEN 'N'
     ELSE NULL -- useless, but for readability
END as ACTIVE_INDICATOR
```

You've got to repeat `ACTIVE_INDICATOR`, because I don't think (I may be wrong) you can use the syntax

```
CASE <field> WHEN IN()
```

but you can use

```
CASE WHEN <field> IN()
```
Case when using IN clause
[ "", "sql", "oracle", "select", "" ]
I want to return the 4 most recent for each CustomerID. So far I have: Use Northwind\_2012 ``` SELECT CustomerID, OrderDate FROM Orders ORDER BY CustomerID; ``` From here I want to restrict it to the 4 most recent OrderDate's for each CustomerID. Where do I go from here in order to achieve this, as I have also tried using PARTITION BY and ROW\_NUMBER but haven't been able to accomplish it yet. ``` CustomerID OrderDate ALFKI 2007-08-25 00:00:00.000 ALFKI 2007-10-03 00:00:00.000 ALFKI 2007-10-13 00:00:00.000 ALFKI 2008-01-15 00:00:00.000 ALFKI 2008-03-16 00:00:00.000 ALFKI 2008-04-09 00:00:00.000 ANATR 2008-03-04 00:00:00.000 ANATR 2007-11-28 00:00:00.000 ANATR 2007-08-08 00:00:00.000 ANATR 2006-09-18 00:00:00.000 ANTON 2006-11-27 00:00:00.000 ANTON 2007-04-15 00:00:00.000 ```
Tested, works (assuming an "OrderID" primary key):

```
SELECT CustomerId, OrderDate
FROM Orders as ExtOrders
WHERE OrderId IN
    ( SELECT TOP 4 OrderId
      FROM Orders
      WHERE CustomerId LIKE ExtOrders.CustomerId
      ORDER BY OrderDate DESC)
ORDER BY CustomerId; -- Might not work in SQLFiddle if CustomerId is text
```

[SQLFiddle](http://sqlfiddle.com/#!6/25b07/8)
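The same correlated-subquery shape runs in SQLite too, with `TOP 4` spelled `LIMIT 4` and a plain `=` comparison instead of `LIKE` (a sketch with invented sample rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                         customer_id TEXT, order_date TEXT);
    INSERT INTO orders VALUES
        (1, 'ALFKI', '2007-08-25'), (2, 'ALFKI', '2007-10-03'),
        (3, 'ALFKI', '2007-10-13'), (4, 'ALFKI', '2008-01-15'),
        (5, 'ALFKI', '2008-03-16'), (6, 'ALFKI', '2008-04-09'),
        (7, 'ANATR', '2008-03-04'), (8, 'ANATR', '2007-11-28');
""")

# For each outer row, the subquery picks that customer's 4 latest order ids;
# rows whose id falls outside that set are filtered away.
rows = conn.execute("""
    SELECT customer_id, order_date
    FROM orders o
    WHERE order_id IN (SELECT order_id
                       FROM orders
                       WHERE customer_id = o.customer_id
                       ORDER BY order_date DESC
                       LIMIT 4)
    ORDER BY customer_id, order_date
""").fetchall()
```

ALFKI's two oldest orders drop out; ANATR keeps both of its orders since it has fewer than four.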
You can do this using `row_number()`, using either a subquery or CTE:

```
SELECT CustomerID, OrderDate
FROM (SELECT o.*,
             ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY OrderDate DESC) as seqnum
      FROM Orders o
     ) o
WHERE seqnum <= 4
ORDER BY CustomerID;
```
Returning the 4 most recent orders for each customer
[ "", "sql", "sql-server", "sql-server-2012", "" ]
**Problem:** I have a huge database with lots of tables and entries. My application is similar to a social network, and in my database I have info about logged-in users, users' phone models, registration data... The number of pages viewed, number of comments, replies... every piece of info and user activity has a datetime reference.

I have a web application used to manage my users and I want to include a statistics page. In this statistics page I want to show some data:

* how many users logged in
* how many users registered in the last 7 days
* how many Android vs iOS
* ...

I'm facing the problem of how to retrieve all this info from the DB. Having a JPA level and using entities, is it possible to create a special `View` *Statistic* as key/value:

```
Key Value
TotUser 200
Android 100
iOS 100
NewUser 10
```

**Simple example:**

```
CREATE TABLE peoples (
  id number(38),
  name varchar(20),
  os varchar(10)
);

INSERT INTO peoples (id, name, os) VALUES (1,'john', 'Android');
INSERT INTO peoples (id, name, os) VALUES (2,'jane', 'iOS');
```

Is there a way to store my query in a VIEW or other object?

```
select count(*) from peoples; // as TotUser
select count(*) from peoples where os like 'Android'; // as Android
select count(*) from peoples where os like 'iOS'; // as iOS
```

and have a `STATISTIC` table with key/value as the results of my queries?

```
STATISTIC
---------------
Key Value
---------------
TotUser 2
Android 1
iOS 1
```

**What I've done:** For now I created a Statistic object, I execute every single query, and I store the results inside the Statistic object, providing a key name.

Is there a way to do all this through SQL, avoiding any modification of my queries in my Java application? Stored procedures? Reporting functions?
Sounds like what you need is a [`materialized view`](http://docs.oracle.com/cd/B28359_01/server.111/b28326/repmview.htm#i29867).

I suggest creating mviews with refresh on demand, with an interval of your choice. You have two options to achieve your goal with materialized views:

1. Create a big mview using union all to gather all the stats.
2. Create several small mviews, one for each source table, and a big main mview to concatenate them all.
This can all be done with a single SQL statement and a single scan of the table, there's no need to do anything complicated unless this becomes too slow, in which case a MV might be the answer. ``` with all_cts as ( select os, count(*) as ct from peoples group by os ) select 'TotUser' as key, sum(ct) as value from all_cts union all select * from all_cts ``` By utilising [sub-query factoring](http://www.oracle-base.com/articles/misc/with-clause.php) you query the table once and, effectively, create a tiny in-memory table, which you can then use twice. If you then need to query for a single OS you can either just run the query or create this query as a view and query the view with a WHERE clause: ``` select * from my_view_with_everything where os = 'Android'; ```
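The single-scan claim is easy to exercise: the factored subquery feeds both the total row and the per-OS rows. Here is the same query sketched in Python's `sqlite3`, which also supports `WITH`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE peoples (id INTEGER, name TEXT, os TEXT)")
conn.executemany("INSERT INTO peoples VALUES (?, ?, ?)",
                 [(1, "john", "Android"), (2, "jane", "iOS"), (3, "jim", "iOS")])

# One grouped scan of the table feeds both the total and the per-OS breakdown.
stats = dict(conn.execute("""
    WITH all_cts AS (
        SELECT os, COUNT(*) AS ct
        FROM peoples
        GROUP BY os
    )
    SELECT 'TotUser' AS key, SUM(ct) AS value FROM all_cts
    UNION ALL
    SELECT os, ct FROM all_cts
""").fetchall())
```

The result is exactly the key/value shape the question describes, without a separate query per statistic.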
Oracle best practice to store statistics and infos as key/value
[ "", "sql", "oracle", "jpa", "oracle11g", "" ]
I am a bit new to TSQL programming. I am using MS SQL Server 2008 R2. Why doesn't the following query work? I thought a select query just returns a bunch of records, and I should still be able to select from those records.

```
select * from
(
    select * from dbo.[sometable]
)
```
```
select * from
(
    select * from dbo.[sometable]
) as X
```

You always need to create an alias when selecting FROM a subquery, even if you're not using the alias later on; that's just how it works.
If you want to select all the records from sometable, then this should be enough:

```
select * from dbo.[sometable] 
```

However, if you want to filter the records of one table based on values from some other table, then you can use nested queries like

```
select * from table1 where id in (select id from table2) 
```

or you can use joins like

```
select * from table1 t
join table2 t2 on t.id = t2.id
```
Why I cannot select from some selected results?
[ "", "sql", "t-sql", "" ]
I need to join two tables, `Companies` and `Customers`. The `Companies` table is in **MS SQL Server** and the `Customers` table is in **MySQL Server**. What is the best way to achieve this goal?
If I understand correctly, you need to join the tables in SQL Server, not in code, because the tag is `sql`. If that's right, then you need to do some administrative tasks, like server linking. [Here](http://www.packtpub.com/article/mysql-linked-server-on-sql-server-2008) you have an explanation of how to link a MySQL server into SQL Server.

After you successfully link those servers, your syntax is as simple as:

```
SELECT [column_list]
FROM companies
JOIN [server_name].[database_name].[schema_name].[table_name]
WHERE ...
```

Keep in mind that when accessing tables that exist on a linked server, you must write four-part names.
1. Select Companies from DB1 2. Select Customers from DB2 3. Put them in `Map<WhatToJoinOn, Company>` and `Map<WhatToJoinOn, Customer>` 4. Join on map keys, creating a `List<CompanyCustomer>`
Joining two tables on different database servers
[ "", "sql", "" ]
I have a table that contains two text fields which hold a lot of text. For some reason our table has started growing exponentially. I suspect that TOAST (compression for text fields in Postgres) is not working automatically. In our table definition we have not defined any clause to force compression of these fields. Is there a way to check whether compression is working on that table or not?
[From the docs](http://www.postgresql.org/docs/9.3/static/storage-toast.html) . . .

> If any of the columns of a table are TOAST-able, the table will have
> an associated TOAST table, whose OID is stored in the table's
> pg\_class.reltoastrelid entry. Out-of-line TOASTed values are kept in
> the TOAST table, as described in more detail below.

So you can determine whether a TOAST table exists by querying the [pg\_class system catalog](http://www.postgresql.org/docs/9.3/static/catalog-pg-class.html). This should get you close to what you're looking for.

```
select t1.oid, t1.relname, t1.relkind, t2.relkind, t2.relpages, t2.reltuples
from pg_class t1
inner join pg_class t2
on t1.reltoastrelid = t2.oid
where t1.relkind = 'r'
  and t2.relkind = 't';
```

In psql, you can use `\d+`. I'll use the pg\_class system catalog as an example; you'd use your own table name.

```
sandbox=# \d+ pg_class
     Column     |   Type    | Modifiers | Storage  | Stats target | Description 
----------------+-----------+-----------+----------+--------------+-------------
 relname        | name      | not null  | plain    |              | 
 relnamespace   | oid       | not null  | plain    |              | 
[snip]
 relacl         | aclitem[] |           | extended |              | 
 reloptions     | text[]    |           | extended |              | 
```

Where Storage is 'extended', PostgreSQL will try to reduce row size by compressing first, then by storing data out of line. Where Storage is 'main' (not shown), PostgreSQL will try to compress.

In your particular case, you might find it useful to monitor changes in size over time. You can use this query, and save the results for later analysis.

```
select table_catalog, table_schema, table_name,
       pg_total_relation_size(table_catalog || '.' || table_schema || '.' || table_name) as pg_total_relation_size,
       pg_relation_size(table_catalog || '.' || table_schema || '.' || table_name) as pg_relation_size,
       pg_table_size(table_catalog || '.' || table_schema || '.' || table_name) as pg_table_size
from information_schema.tables
```

[PostgreSQL admin functions](http://www.postgresql.org/docs/current/static/functions-admin.html) has details about what each function includes in its calculations.
This is old, but I've recently had some success with a similar issue. ANALYZE VERBOSE revealed that a couple of our tables had grown to > 1 page of disk per tuple, and EXPLAIN ANALYZE revealed that sequential scans were taking up to 30 seconds on a table of 27K rows. Estimates of the number of active rows were getting further and further off. After much searching, I learned that rows can only be vacuumed if there is no transaction that has been open since they were updated. This table was rewritten every 3 minutes, and there was a connection that was "idle in transaction" that was 3 days old. You can do the math. In this case, we had to 1. kill the connection with the open transaction 2. **reconnect** to the database. Unfortunately the maximum transaction ID for rows that can be vacuumed is currently (as of 9.3) stored in the connection, so vacuum full will not work. 3. VACUUM FULL your table (this will take out an ACCESS EXCLUSIVE lock, which will block everything including reads. You may want to run VACUUM first (non-blocking), to speed up the time VACUUM FULL takes). This may not have been your problem, but if you would like to see if tables are affected in your own database, I wrote a query to order tables by the average number of tuples stored in a page of disk. Tables with large rows should be at the top - ANALYZE VERBOSE should give you an idea of the ratio of dead to live tuples in these tables. **Valid for 9.3** - this will probably require some minor tweaks for other versions: ``` SELECT rolname AS owner, nspname AS schemaname , relname AS tablename , relpages, reltuples, (reltuples::FLOAT / relpages::FLOAT) AS tuples_per_page FROM pg_class JOIN pg_namespace ON relnamespace = pg_namespace.oid JOIN pg_roles ON relowner = pg_roles.oid WHERE relkind = 'r' AND relpages > 20 AND reltuples > 1000 AND nspname != 'pg_catalog' ORDER BY tuples_per_page; ```
How to check if TOAST is working on a particular table in postgres
[ "", "sql", "postgresql", "compression", "" ]
I am stuck in joining two tables "a" and "b". The structure of the two tables is given below. **Table A** ``` +-------------+---------------------+ + SKU | Title + +-------------+---------------------+ + 12345_786 | Some text + + | + + 12345_231 | Sony + + | + + 12345_222 | Samsung + + | + + 67481_21 | IBM + + | + + 88723_231 | HP + +-------------+---------------------+ ``` **Table B** ``` +-------+---------------------+ + SKU | Price + +-------+---------------------+ + 786 | $230 + + | + + 231 | $540 + + | + + 222 | $120 + + | + + 21 | $220 + + | + + 231 | $50 + +-------+---------------------+ ``` The table A SKU convention is ParentSKU + "\_" + ChildSKU, so each SKU in table A contains a child SKU; parent and child are separated by "\_". I want to join the part of A.SKU after the "\_" to B.SKU. So far I have tried the following query, but without the desired result. ``` SELECT A.SKU,B.Price FROM A INNER JOIN B ON Substring(A.SKU, patindex('%_%', A.SKU), Cast(Len(A.SKU) as int)-cast(patindex('%_%',A.SKU)as int)) = CAST(B.SKU AS varchar(12)) ``` Your help in this context will be highly appreciated.
The problem you're finding is because the \_ underscore character has a meaning in a pattern string, much like the % percent character. You need to escape it as referenced in this post [How to escape underscore character in PATINDEX pattern argument?](https://stackoverflow.com/questions/863534/t-sql-how-to-escape-underscore-character-in-patindex-pattern-argument)
I would either create a new column on the table with the value after "\_" and then use this to join on OR create an indexed view. The view would create the new column. Your select query could then join on this new column (from the table or view) without having to do any additional string manipulation. This may also prevent Search Arguments if you then have an index setup on this new column thereby keeping your query execution performant.
Joining two tables based on substring from foreign keys (MS SQL2005)
[ "", "mysql", "sql", "sql-server", "" ]
I have a table: ``` id firstval secondval 1 4 5 2 5 4 3 3 3 4 6 6 5 7 8 6 9 8 7 3 3 8 3 3 ``` The first thing I need to do is count the number of times `secondval > firstval`. This is obviously no problem. However, the thing I'm struggling with is how to then count how many times (for each instance of `secondval > firstval`) the next row satisfies the condition `secondval < firstval`. So in this example there are two rows that satisfy the first rule (ids 1 and 5), and two for the second rule (the next rows, ids 2 and 6).
``` SELECT id, @prevGreater AND secondval < firstval AS discrepancy, @prevGreater := secondval > firstval AS secondGreater FROM (SELECT * FROM YourTable ORDER BY id) AS x CROSS JOIN (SELECT @prevGreater := false) AS init ``` [DEMO](http://www.sqlfiddle.com/#!2/97f72c/9)
``` SELECT * from table t1 INNER JOIN table t2 on t1.ID+1=t2.ID -- here we join on t2.ID is t1.ID+1 WHERE t1.secondval>t1.firstval AND t2.secondval<t2.firstval ``` Now you can use COUNT statement as you want :)
Compare rows SQL counting discrepancies in value
[ "", "mysql", "sql", "" ]
I have two tables of a bank database. One is a customer list and the other is an account list. When I left outer join customer to account, I get rows for customers which do not have any accounts, showing nothing. How can I print zero for them? ``` SQL> CREATE TABLE Customer ( 2 BSB# CHAR(6) NOT NULL, /* Bank BSB number */ 3 Customer# NUMBER(10) NOT NULL, /* Customer number */ 4 Name VARCHAR2(30) NOT NULL, /* Customer name */ 5 DOB Date, /* Date of birth */ 6 Sex CHAR, /* M-Male, F-Female */ 7 Address VARCHAR2(50) NOT NULL, /* Customer address */ 8 Phone# VARCHAR2(15), /* Phone number */ 9 CONSTRAINT Project_PK PRIMARY KEY(BSB#, Customer#), 10 CONSTRAINT Project_FK FOREIGN KEY (BSB#) REFERENCES Bank(BSB#) 11 ); SQL> CREATE TABLE Account ( 2 BSB# CHAR(6) NOT NULL, /* Bank BSB number */ 3 Customer# NUMBER(10) NOT NULL, /* Customer number */ 4 Account# NUMBER(10) NOT NULL, /* Account number */ 5 Type VARCHAR2(20) NOT NULL, /* Account type */ 6 Balance NUMBER(10,2) NOT NULL, /* Account balance */ 7 CONSTRAINT WorksOn_PK PRIMARY KEY(BSB#, Account#), 8 CONSTRAINT WorksOn_FK1 FOREIGN KEY(BSB#, Customer#) REFERENCES Customer(BSB#, Customer#) 9 ); ``` Command: ``` select c.name, CUS.MYSUM from customer c left outer join ( select customer#,sum(balance) MYSUM from account group by customer# ) CUS on c.customer# = CUS.customer#; ``` Output: ``` NAME MYSUM ------------------------------ ---------- Ben -470211.09 Mike -470211.09 Jean -60028.03 Douglas 1970.35 Josef 1970.35 Duke 54469.12 Alex -323631.76 Harry Will ``` I want to print 0 for Harry and Will.
Change your first query line to ``` select c.name, COALESCE(CUS.MYSUM, 0) ``` `COALESCE` is a standard sql function that returns the value of the first expression among its arguments that does evaluate to NULL. if there is no such expression, it will return NULL instead. a searched `CASE` expression provides similar functionality: ``` select c.name, CASE WHEN CUS.MYSUM IS NULL THEN '0' ELSE CUS.MYSUM END ``` the advantages are: * the actual value used in the not-null case need not be the value of the inspected column * you may test for arbitrary expressions which come at the price of * less readability * performance impact ( often negligable, in complex queries result set projection [i.e. `select`-ing the columns] takes the least amount of processing time ). ref: [mysql 5.0 doc on `COALESCE`](https://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#function_coalesce)
You can use [IFNULL](https://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull) for this <https://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull> ``` mysql> SELECT IFNULL(1,0); -> 1 mysql> SELECT IFNULL(NULL,10); -> 10 mysql> SELECT IFNULL(1/0,10); -> 10 mysql> SELECT IFNULL(1/0,'yes'); -> 'yes' ``` When handling single values, IFNULL should be preferred over COALESCE for readability if nothing else
How to print zero for rows with no content?
[ "", "mysql", "sql", "sql-server", "oracle", "oracle11g", "" ]
<http://www.sqlfiddle.com/#!3/7a295/1> I have an MS SQL database inside Visual Studio 10, and I need help with a query to get me the most wanted game from the favourites table, along with the game cover\_img from the games table. I have got the query working for the first part: ``` SELECT game_name, game_platform, Count(game_name) AS Expr1 FROM favourites GROUP BY game_name, game_platform HAVING ( Count(game_name) = (SELECT Max(mycount) AS Expr1 FROM (SELECT game_name, Count(game_name) AS mycount FROM favourites AS Favourites_1 GROUP BY game_name) AS derivedtbl_1) ); ``` But I can't get the cover\_img to match the top game from the games table. Can someone please help me? This is what I have so far, but it's not working: ``` SELECT favourites.game_name, favourites.game_platform, games.cover_img, Count(favourites.game_name) AS Expr1 FROM games inner join favourites ON games.name = favourites.game_name GROUP BY favourites.game_name, favourites.game_platform, games.cover_img HAVING (Count(favourites.game_name) = (SELECT Max(mycount) AS Expr1 FROM (SELECT game_name, Count(game_name) AS mycount FROM favourites AS Favourites_1 GROUP BY game_name) AS derivedtbl_1)); ```
Does this do what you want? ``` SELECT TOP 1 f.game_name, f.game_platform, g.cover_img, Count(f.game_name) as cnt FROM games g inner join favourites f ON g.name = f.game_name GROUP BY f.game_name, f.game_platform, g.cover_img ORDER BY Count(f.game_name) desc; ```
You can split your problem in smaller parts: ``` ; WITH Games AS ( SELECT * FROM ( VALUES ('Pacman', 'Atari', 'A'), ('Enduro', 'Atari', 'B'), ('River Raid', 'Atari', 'C') ) AS X (Name, [Platform], cover_img) ), Favourites AS ( SELECT * FROM ( VALUES ('Pacman', 'Atari'), ('Pacman', 'Atari'), ('Pacman', 'Atari'), ('Enduro', 'Atari') ) AS Y (game_name, game_platform) ), MostWanted AS ( -- Identify the most wanted game, per platform SELECT game_name, game_platform, COUNT(*) Total, ROW_NUMBER() OVER ( PARTITION BY /*game_name,*/ game_platform ORDER BY COUNT(*) DESC ) [Order] FROM Favourites GROUP BY game_name, game_platform ) SELECT MostWanted.*, Games.cover_img FROM MostWanted JOIN Games -- JOIN that with games table to get more details ON MostWanted.game_name = Games.Name WHERE MostWanted.[Order] = 1 -- Only the first most wanted ```
SQL Query get most wanted game details
[ "", "sql", "sql-server", "" ]
I'm sure this is a simple task, but I've been stuck for a couple of days: I've got a simplified table: ``` INSERT INTO `tree` (`fruit`, `tree`) VALUES ('apple', 'A'), ('pear', 'A'), ('apple', 'B'), ('banana', 'B'); ``` and for example I need to specify exactly which fruits I need and which I don't, so something like: ``` IN ('apple', 'banana') && NOT IN('pear') ``` should return "B", because we have a tree with apple and banana, without pear. Any ideas? Thank you.
You could also use grouping and check aggregate values in a HAVING clause, like this: ``` SELECT tree FROM tree GROUP BY tree HAVING MAX(fruit IN ('apple', 'banana')) = 1 AND MAX(fruit IN ('pear') ) = 0 ; ``` A more platform-independent version of the same method would involve CASEs: ``` SELECT tree FROM tree GROUP BY tree HAVING MAX(CASE WHEN fruit IN ('apple', 'banana') THEN 1 ELSE 0 END) = 1 AND MAX(CASE WHEN fruit IN ('pear') THEN 1 ELSE 0 END) = 0 ; ``` To specify that the tree must have both `apple` and `banana`, add one more condition: ``` AND COUNT(*) = 2 ``` If a tree is allowed to have duplicate fruit entries, use this condition instead: ``` AND COUNT(DISTINCT fruit) = 2 ``` Alternatively you could specify separate conditions for each allowed fruit, like this: ``` SELECT tree FROM tree GROUP BY tree HAVING MAX(fruit = 'apple' ) = 1 AND MAX(fruit = 'banana' ) = 1 AND MAX(fruit IN ('pear')) = 0 ; ```
A simple select should work with a NOT EXISTS check. ``` SELECT t1.tree FROM tree t1 WHERE t1.fruit='apple' AND NOT EXISTS (SELECT 1 FROM tree t2 WHERE t1.tree=t2.tree and t2.fruit='pear'); ```
mysql, how to select tree without node
[ "", "mysql", "sql", "" ]
I have two SQL Server 2008R2 tables as below: * `Tbl_Item` (`Item_Id`) * `Tbl_Item_Cost` (`Item_Id, Item_Cost`) Values in the tables are as below: `Tbl_Item`: ``` Item_Id -------------------- Candy Chocolate IceCream Chocolate/IceCream ``` `Tbl_Item_Cost`: ``` Item_Id Item_Cost ---------------------- Candy 10 Chocolate 20 IceCream 30 ``` Now I want to display the item and its cost. The only thing is that the cost for Chocolate/IceCream should be the sum of the Chocolate and IceCream costs, i.e. 20+30=50. So the result should look like ``` Item_Id Item_Cost ----------------------- Candy 10 Chocolate 20 IceCream 30 Chocolate/IceCream 50 ``` What is the simplest way to do this in SQL Server 2008R2?
Okay. Since you don't have the option of changing the design (although it's not good to hardcode anything), if you really want this to apply to only (always) these two items, you may try this. ``` SELECT I.Item_Id,CASE WHEN I.Item_Id='Chocolate/IceCream' THEN (SELECT SUM(Item_Cost) FROM dbo.Tbl_Item_Cost WHERE Item_Id IN ('Chocolate','IceCream')) ELSE Item_Cost END AS Item_Cost FROM dbo.Tbl_Item_Cost C RIGHT JOIN dbo.Tbl_Item I ON I.Item_Id=C.Item_Id ``` RESULT: ![enter image description here](https://i.stack.imgur.com/IkTfb.png) If I am missing something then please let me know so that I can come up with a better solution.
I would do it following: ``` SELECT t1.item_id+'/'+t2.item_id, t1.cost+t2.cost FROM tbl_item_cost t1 INNER JOIN tbl_item_cost t2 ON t1.item_id<>t2.item_id ``` We could also use CROSS JOIN which allows us to get all possible pairs `(xn, yn)` from sets `X={x1, x2, ..., xn}, Y={y1, y2, ... , yn}`. Here `X=Y`. But then we would get pairs `(xi, xi)` - example: ice cream/ice cream.
How to add same column value but having different IDs
[ "", "sql", "sql-server", "t-sql", "sql-server-2008-r2", "" ]
I have: ``` CREATE TABLE Vote (`pid` int, `choice` varchar(10)) ; INSERT INTO Vote (`pid`, `choice`) VALUES (1, 'a'), (2, 'b'), (3, 'ana'), (4, 'aya'), (5, 'ayna'), (6, 'anya'), (7, 'x'), (8, 'y'), (9, 'z') ``` And I'm trying to find how many choices included 'n' but not 'y', how many included 'y' but not 'n' and how many either included both or included none. This is what I have: ``` SELECT COUNT(v1.choice) AS n_not_y,COUNT(v2.choice) AS y_not_n,COUNT(v3.choice) AS rest FROM Vote v1,Vote v2,Vote v3 WHERE v1.choice LIKE '%n%' AND v1.choice NOT LIKE '%y%' AND v2.choice LIKE '%y%' AND v2.choice NOT LIKE '%n%' AND ( (v3.choice NOT LIKE '%y%' AND v3.choice NOT LIKE '%n%') OR (v3.choice LIKE '%y%n%' OR v3.choice LIKE '%n%y%') ) ``` But as you can see [here](http://www.sqlfiddle.com/#!2/52b9c/7/0) it gives one row showing 12,12,12.
The reason your result shows 12,12,12 is that the `count` gets the number of returned rows regardless of their values. You would use CASE for that, as below ``` SELECT SUM(CASE WHEN v1.choice LIKE '%n%' AND v1.choice NOT LIKE '%y%' THEN 1 ELSE 0 END) AS n_not_y, SUM(CASE WHEN v2.choice LIKE '%y%' AND v2.choice NOT LIKE '%n%' THEN 1 ELSE 0 END) AS y_not_n ``` and so on
``` SELECT SUM(CASE WHEN (V.choice LIKE '%n%' AND V.choice NOT LIKE '%y%') THEN 1 ELSE 0 END) AS n_not_y, SUM(CASE WHEN (V.choice LIKE '%y%' AND V.choice NOT LIKE '%n%') THEN 1 ELSE 0 END) AS y_not_n ,SUM(CASE WHEN ((V.choice NOT LIKE '%y%' AND V.choice NOT LIKE '%n%') OR (V.choice LIKE '%y%n%' OR V.choice LIKE '%n%y%')) THEN 1 ELSE 0 END) AS rest FROM Vote V ```
How to get three fields (one record) each counts different condition on the same field?
[ "", "sql", "" ]
I have a database table full of transactions. The transactions contain negative numbers from people returning items. I would like to add up all the amount field while also subtracting the negative values of returns from the total. How can I do this and output it out? Currently the best I can do is get: ``` SELECT SUM(amount) FROM outputaddition GROUP by SIGN(amount); ``` But this only puts positive and negative in the same column.
``` SELECT personId,SUM(CASE WHEN amount<0 THEN amount ELSE 0 END) as NegativeTotal, SUM(CASE WHEN amount>=0 THEN amount ELSE 0 END) as PostiveTotal FROM outputaddition GROUP BY personID ``` If you want single column ``` SELECT personId,SUM(amount) as Total FROM outputaddition GROUP BY personID ```
try this, ``` SELECT SUM(amount) as ActualTotal ,Sum(Case When amount > 0 then amount else 0 end) totalOfPositiveOnly FROM outputaddition ```
MySQL Sum positive and negative - calculate the exact total from money transactions
[ "", "mysql", "sql", "" ]
I have two tables in my database and they each share a field called cID. So here's an example: ``` Parent Table _______________ cID Name -- ------ 1 Record #1 2 Record #2 3 Record #3 Child Table _______________ ID cID Name -- --- ----- 10 1 Record #1 11 1 Record #2 12 2 Record #3 13 1 Record #4 ``` So what's happened is that someone has gone in and deleted cIDs from the parent table, and the child table still has those cIDs, but they now reference nothing. Is there a SQL statement that I could use to select all the records in the child table that no longer have a cID in the parent table?
In your text, you say that parents are missing, but your example shows a parent without a child. For parents without children in a left join: ``` select p.* from parent p left join child c on p.cId = c.cId where c.cid is null ``` I created a [SQL Fiddle](http://sqlfiddle.com/#!3/84694/17) as an example. For children without parents: ``` select c.* from child c left join parent p on p.cId = c.cId where p.cid is null ``` [Fiddle](http://sqlfiddle.com/#!3/5ba49/4) Note that if you changed the first query to a `RIGHT` join, you'd get the same answer as the second query where I've changed the sequence of the tables.
With Subselect: ``` SELECT * FROM Child WHERE cID NOT IN (SELECT cID FROM Parent) ``` With Join: ``` SELECT Child.* FROM Child LEFT JOIN Parent ON Child.cID = Parent.cID WHERE Parent.cID IS NULL ```
Query database tables for missing link
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I have a table a such that Select \* From a where person_id = 1; returns ``` person_id colA colB 1 AA BB 1 CC DD 1 EE FF ``` Now what I need is ``` person_id colA colB colA_row_2, colB_Row_2 ColA_row_3 ColB_ROW_3 1 AA BB CC DD EE FF ``` I will always get nine rows per person, so I can just create the columns as non-dynamic, but I can't seem to figure out how to do it. This needs to be in pure SQL, no stored procedures. Thanks in advance
You can use decode as an alternative. [https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11\_QUESTION\_ID:4471013000346257238](https://asktom.oracle.com/pls/asktom/f?p=100:11:0%3a%3a%3a%3aP11_QUESTION_ID:4471013000346257238)
You need PIVOT for the query See the examples <http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html> and <http://www.oracle-base.com/articles/11g/pivot-and-unpivot-operators-11gr1.php> You have to define how rows are converted into columns e.g. ``` pivot ( count(colA ) for colA in ('AA' as "AA",'CC' "CC"...) ) ```
How to merge many rows into one oracle sql
[ "", "sql", "oracle", "oracle11g", "" ]
I have this query: ``` SELECT DISTINCT f.CourseEventKey, ( SELECT f.Title + '; ' AS [text()] FROM @Facilities WHERE CourseEventKey = f.CourseEventKey ORDER BY f.Title FOR XML PATH('') ) Facilities FROM @Facilities f ``` It produces this result set: ``` CourseEventKey Facilities -------------- ----------------------------------- 29 Test Facility 1; 30 Memphis Training Room; 32 Drury Inn &amp; Suites Creve Coeur; ``` The data is fine, but the `&amp;` is actually an encoded `&`, which is not suitable for my purposes. How can I modify this query to return the original values of the special characters in my data?
I think you're going to have to manually wrap the Facilities inline query block with REPLACE statements to reverse the automatic escaping. It sounds like what you're wanting to do is concatenate multiple facilities that could present a given course. Have you considered other options? [This question](https://stackoverflow.com/questions/887628/convert-multiple-rows-into-one-with-comma-as-separator/887687#887687) has several possible approaches that don't have an issue with escaping your characters.
Use `,TYPE).value('.','NVARCHAR(MAX)')` and your special characters will not be escaped: ``` SELECT DISTINCT f.CourseEventKey, ( SELECT f.Title + '; ' AS [text()] FROM @Facilities WHERE CourseEventKey = f.CourseEventKey ORDER BY f.Title FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)') AS Facilities FROM @Facilities f ``` Credit for this goes to [Rob Farley](http://blogs.lobsterpot.com.au/2010/04/15/handling-special-characters-with-for-xml-path/). **UPDATE:** I just heard about this new method. I haven't tested it thoroughly yet, and would appreciate any feedback. We can replace `[text()]` with `[processing-instruction(x)]`, like this ``` select 'hello & there >' as [processing-instruction(x)] FOR XML PATH('') ``` will return ``` <?x hello & there >?> ``` We just need to strip off the `<? ... ?>`
How to use FOR XML PATH('') in a query without escaping special characters?
[ "", "sql", "sql-server", "t-sql", "escaping", "for-xml-path", "" ]
I have an ID such as: ``` 61759092-5066-4D02-A0E4-000084E9E68D ``` The result I would like is: ``` \6\1\7\5 ``` This is my way: ``` select '\' + substring(CAST(idField as varchar(38)),1,1) + '\' + substring(CAST(idField as varchar(38)),2,1) + '\' + substring(CAST(idField as varchar(38)),3,1) + '\' + substring(CAST(idField as varchar(38)),4,1) from mytable ``` It works, but something is telling me, I'm using way too much overhead to do this. is there a smarter, more efficient way to do this? **How do i get the first four characters of a uniqueidentifier separated by '\'?**
I'm not sure if it is any better but here is what I came up with. ``` Select left(x.id, 1) + '\' + left(right(x.Id, 3), 1) + '\' + left(right(x.Id, 2), 1) + '\' + right(x.Id, 1) From ( Select Left(cast(idField as varchar(38)), 4) as Id From myTable ) x ```
You can use the stuff function to accomplish this. ``` Select stuff(Stuff(Stuff(Stuff(Left(Cast(idField As VarChar(38)), 4),1,0,'\'), 3,0,'\'), 5,0,'\'), 7,0,'\') from myTable ```
build a varchar using the first four characters of a uniqueidentifier
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
One query is driving me crazy. I have a table like the following and I want to get the **sum of Values by Status for every Date in an interval.** **Table** ``` Id Name Value Date Status 1 pro1 2 01.04.14 0 2 pro1 8 02.04.14 1 3 pro2 6 02.04.14 1 4 pro3 0 03.04.14 0 5 pro4 7 03.04.14 0 6 pro4 2 03.04.14 0 7 pro4 4 03.04.14 1 8 pro4 6 04.04.14 1 9 pro4 1 04.04.14 1 ``` For example, *Input:* **Name = pro4, minDate = 01.02.14, maxDate = 04.09.14** *Output:* ``` Date Values sum for 0 Status Values sum for 1 Status 01.04.14 0 0 02.04.14 0 0 03.04.14 9 (=7+2) 4 (only 4 exist) 04.04.14 0 7 (6+1) ``` On the `01.04.14` and `02.04.14` dates, `pro4` has no values for either status, but I want to show those rows, because I need all dates in that interval. Can anyone help me to create this query? **Edit:** I cannot change the structure; I already have that table with data. Every day exists in the table many times (minimum 1 time). Thanks in advance.
Assuming you have a row for each date in the table, use conditional aggregation: ``` select date, sum(Case when name = 'pro4' and status = 0 then Value else 0 end) as values_0, sum(case when name = 'pro4' and status = 1 then Value else 0 end) as values_1 from Table t where date >= '2014-04-01' and date <= '2014-04-09' group by date order by date; ``` If you don't have this list of dates, you can take this approach instead: ``` with dates as ( select cast('2014-04-01' as date) as thedate union all select dateadd(day, 1, thedate) from dates where thedate < '2014-04-09' ) select dates.thedate, sum(Case when status = 0 then Value else 0 end) as values_0, sum(case when status = 1 then Value else 0 end) as values_1 from dates left outer join table t on t.date = dates.thedate and t.name = 'pro4' group by dates.thedate; ```
To expand my comment the complete query is ``` WITH [counter](N) AS (SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1) , days(N) AS ( SELECT row_number() over (ORDER BY (SELECT NULL)) FROM [counter]) , months (N) AS ( SELECT N - 1 FROM days WHERE N < 13) , calendar ([date]) AS ( SELECT DISTINCT cast(dateadd(DAY, days.n , dateadd(MONTH, months.n, '20131231')) AS date) FROM months CROSS JOIN days ) SELECT a.Name , c.Date , [Sum of 0] = SUM(CASE Status WHEN 0 THEN Value ELSE 0 END) , [Sum of 1] = SUM(CASE Status WHEN 1 THEN Value ELSE 0 END) FROM Calendar c LEFT JOIN myTable a ON c.Date = a.Date AND a.name = 'pro4' WHERE c.date BETWEEN '20140201' AND '20140904' GROUP BY c.Date, a.Name ORDER BY c.Date ``` Note that the condition on the name need to be in the JOIN, otherwise you'll get only the date of your table. If you need multiple years just add another CTE for the count and a dateadd(YEAR,...) in the CTE calendar
SQL query - sum of values by status for date interval
[ "", "sql", "sql-server", "t-sql", "group-by", "sql-order-by", "" ]
I really don't know what is happening insert query is not working for me. ``` $query_getgreenyear = "INSERT INTO `greenityear` `ConsolidateYear` VALUES ('".$sentdata."')"; ``` in $sentdata the value is ('A','B') and the datatype for `ConsolidateYear` is varchar.I need this value to be inserted into the database. but i am getting error You have a SQL syntax error near '`ConsolidateYear` VALUES ('('A','B')')' at line 1. Please help me in this regard. I am new to database activities.
You forgot to place parentheses `()` around your column name. Try this: ``` $query_getgreenyear = "INSERT INTO `greenityear` (`ConsolidateYear`) VALUES ('".$sentdata."')"; ```
Please take a look at the [MySQL Reference Manual](https://dev.mysql.com/doc/refman/5.1/en/insert.html). You need to surround your column name with parantheses in your INSERT statement: ``` $query_getgreenyear = "INSERT INTO `greenityear` (`ConsolidateYear`) VALUES ('".$sentdata."')"; ``` And I would highly recommend you to use prepared statements as provided by your MySQL-extension (at least if you're not using the deprectated mysql\_connect). This protects you against SQL injections.
insert query is not working in mysql
[ "", "mysql", "sql", "insert", "" ]
I have the following MySQL table: ``` item code | warehouse | qty --------------------------- itm001 | abc |10 itm002 | xyz |20 itm003 | pqr |20 itm004 | pqr |15 itm001 | abc |60 itm002 | xyz |10 itm004 | tqr |20 itm003 | www |20 itm001 | ppp |15 ``` I want the **qty total wherever an item code and warehouse combination repeats**: for example, itm001/abc is repeated two times, so their sum is displayed as itm001 = 70; in the same way, itm002 displays the sum itm002 = 30. I don't want a static WHERE clause (where item code = 'itm001' and warehouse = 'abc' or item code = 'itm002' and warehouse = 'xyz').
You should use `GROUP BY` with `HAVING`: ``` SELECT itemcode,warehouse,SUM(qty) FROM yourtable GROUP BY itemcode,warehouse HAVING COUNT(*) > 1; ``` `HAVING` filters out the items that only exist once in a warehouse.
try this: ``` SELECT itemcode,warehouse,SUM(qty) FROM table1 GROUP BY warehouse,itemcode; ```
Sum of one column based on two columns of dynamically value
[ "", "mysql", "sql", "" ]
I have a table: ``` id | value 1 | - 1 | a 2 | b 1 | c 3 | d 2 | e ``` then I need a counter column which begins from 1 for every different value in the id column Desired select result: ``` id | value | counter 1 | - | 1 1 | a | 2 2 | b | 1 1 | c | 3 3 | d | 1 2 | e | 2 ``` All I found was for counting the number of duplicates etc., but not for an incrementing counter on each duplicate of a specific column...?
If you do not care about ordering, only about corresponding row number, use variables like this: ``` SELECT t.*, @i:=IF(id=@id, @i+1, 1) AS num, @id:=id FROM t CROSS JOIN (SELECT @i:=0, @id:=0) AS init ORDER BY id ```
What you are trying to accomplish is called **RANKING** . Unfortunately MySQL doesnot have ranking functions like in SQL Server `ROW_NUMBER()` , `DENSE_RANK()` etc.. You can try below query for MySQL ``` SELECT t.ID, t.Value, count(*) as Counter FROM tableName t JOIN tableName b ON t.ID = b.ID AND t.Value >= b.Value GROUP BY t.ID, t.Value ``` [**SQL Fiddle DEMO**](http://www.sqlfiddle.com/#!2/02228/1)
Incrementing counter on each duplicate in column
[ "", "mysql", "sql", "duplicates", "" ]
I'm not sure how to search for this and hence I started a new question. I have a table in the following structure ``` ---------------------------------------- | TALKS | PERSON ---------------------------------------- | Networks | John Doe | Steganography | Alex | Assembly | Mark | Networks | Mark | Steganography | John Doe | Networks | Mark ---------------------------------------- ``` How do I perform a query so that I get ``` ---------------------------------------- | TALKS | PERSON ---------------------------------------- | Networks | John Doe | Steganography | | Assembly | Mark | Networks | | Steganography | Alex ---------------------------------------- ``` Note that 'Mark' has two talks on 'Networks', but it needs to return only one.
SELECT DISTINCT talks, person FROM table
This is how you can achieve what you are trying ``` ( select talks,name from table1 group by name order by talks ) UNION ALL ( select distinct talks as talks, case when name is not null then '' end name from table1 ) ``` **[DEMO](http://www.sqlfiddle.com/#!2/b2a23/12)**
Selecting distinct row
[ "", "mysql", "sql", "" ]
I have this query: ``` SELECT DISTINCT ces.CourseEventKey, up.Firstname + ' ' + up.Lastname FROM InstructorCourseEventSchedule ices INNER JOIN CourseEventSchedule ces ON ces.CourseEventScheduleKey = ices.MemberKey INNER JOIN UserProfile up ON up.UserKey = ices.UserKey WHERE ces.CourseEventKey IN ( SELECT CourseEventKey FROM @CourseEvents ) ORDER BY CourseEventKey ``` It produces this result set: ``` CourseEventKey Name -------------- -------------------- 30 JACK K. BACKER 30 JEFFREY C PHILIPPEIT 30 ROBERT B. WHITE 33 JEFFREY C PHILIPPEIT 33 KENNETH J. SIMCICH 35 JACK K. BACKER 35 KENNETH J. SIMCICH 76 KENNETH J. SIMCICH 90 BARRY CRANFILL 90 KENNETH J. SIMCICH ``` The data is accurate, but I need the result set to look like this: ``` CourseEventKey Name -------------- -------------------- 30 JACK K. BACKER; JEFFREY C PHILIPPEIT; ROBERT B. WHITE 33 JEFFREY C PHILIPPEIT; KENNETH J. SIMCICH 35 JACK K. BACKER; KENNETH J. SIMCICH 76 KENNETH J. SIMCICH 90 BARRY CRANFILL; KENNETH J. SIMCICH ``` I've seen questions like mine with working solutions, but I cannot for the life of me adapt those solutions to work with my data. How can I change my query to produce the 2nd result set using some form of concatenation? Thanks in advance.
You can use `FOR XML PATH('')` in an inner query to get the concatenated values and then use it to match with `CourseEventKey` from the outer query: ``` ;WITH CTE AS ( SELECT DISTINCT ces.CourseEventKey, up.Firstname + ' ' + up.Lastname AS Name FROM InstructorCourseEventSchedule ices INNER JOIN CourseEventSchedule ces ON ces.CourseEventScheduleKey = ices.MemberKey INNER JOIN UserProfile up ON up.UserKey = ices.UserKey WHERE ces.CourseEventKey IN ( SELECT CourseEventKey FROM @CourseEvents ) ) SELECT DISTINCT i1.CourseEventKey, STUFF( (SELECT '; ' + Name FROM CTE i2 WHERE i1.CourseEventKey = i2.CourseEventKey FOR XML PATH('')) ,1,2, '' ) FROM CTE i1 ORDER BY i1.CourseEventKey ```
``` SELECT DISTINCT ces.CourseEventKey, STUFF((SELECT '; ' + up.Firstname + ' ' + up.Lastname FROM UserProfile up INNER JOIN InstructorCourseEventSchedule i ON i.UserKey = up.UserKey WHERE i.MemberKey = ces.CourseEventScheduleKey FOR XML PATH ('')), 1, 2, '') AS Name FROM CourseEventSchedule ces WHERE ces.CourseEventKey IN ( SELECT CourseEventKey FROM @CourseEvents ) ```
How to concatenate distinct column values from rows with duplicate IDs into one row?
[ "", "sql", "sql-server", "t-sql", "concatenation", "" ]
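A side note on the accepted answer above: on SQL Server 2017 and later, `STRING_AGG` does the same grouped concatenation without the `STUFF`/`FOR XML PATH` idiom. A sketch reusing the question's table and column names (untested against the real schema):

```sql
-- SQL Server 2017+ alternative to STUFF(... FOR XML PATH(''))
SELECT CourseEventKey,
       STRING_AGG(Name, '; ') AS Names
FROM (SELECT DISTINCT ces.CourseEventKey,
             up.Firstname + ' ' + up.Lastname AS Name
      FROM InstructorCourseEventSchedule ices
      INNER JOIN CourseEventSchedule ces
              ON ces.CourseEventScheduleKey = ices.MemberKey
      INNER JOIN UserProfile up
              ON up.UserKey = ices.UserKey) AS t
GROUP BY CourseEventKey
ORDER BY CourseEventKey;
```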
Tables: ``` Users Roles RolesUsers ``` Associations: ``` User hasMany Role through RolesUsers Role hasMany User through RolesUsers ``` What I want is to get all **users**, along with all their **roles**, where the user has **at least one role** that matches the given requirements. I use this query to get all users that are associated with `Role.name = 'admin' or 'member'`. It works, but along with each `User` it returns only the `Roles` that match the `name` from the query. So if a user is also associated with a role named `author`, that role won't be returned - how can I include ALL associated roles? ``` SELECT "Users".*, "Roles"."name" AS "Roles.name", "Roles"."id" AS "Roles.id", "Roles.RolesUser"."createdAt" AS "Roles.RolesUser.createdAt", "Roles.RolesUser"."updatedAt" AS "Roles.RolesUser.updatedAt", "Roles.RolesUser"."UserId" AS "Roles.RolesUser.UserId", "Roles.RolesUser"."RoleId" AS "Roles.RolesUser.RoleId" FROM ( SELECT "Users".* FROM "Users" WHERE ( SELECT "UserId" FROM "RolesUsers" AS "Roles.RolesUser" INNER JOIN "Roles" AS "Roles" ON "Roles"."id" = "Roles.RolesUser"."RoleId" WHERE "Users"."id" = "Roles.RolesUser"."UserId" AND "Roles"."name" IN ('member','admin') LIMIT 1 ) IS NOT NULL LIMIT 100 ) AS "Users" INNER JOIN "RolesUsers" AS "Roles.RolesUser" ON "Users"."id" = "Roles.RolesUser"."UserId" INNER JOIN "Roles" AS "Roles" ON "Roles"."id" = "Roles.RolesUser"."RoleId" AND "Roles"."name" IN ('member','admin'); ```
``` SELECT u.* , r."name" AS "Roles.name" , r."id" AS "Roles.id" , ru."createdAt" AS "createdAt" , ru."updatedAt" AS "updatedAt" , ru."UserId" AS "UserId" , ru."RoleId" AS "RoleId" FROM "Users" u JOIN "RolesUsers" ru ON u."id" = ru."UserId" JOIN "Roles" r ON r."id" = ru."RoleId" WHERE EXISTS ( SELECT * FROM "RolesUsers" ru2 JOIN "Roles" r2 ON r2."id" = ru2."RoleId" WHERE ru2."UserId" = u."id" AND r2."name" IN ( 'member' , 'admin' ) ) ; ```
I would use analytic functions for this purpose, as in the following: ``` select ur.* from (select u.*, r.name as rolename, r.id as roleid, ru.createdAt as rolecreatedat, sum(case when r.name in ('member', 'admin') then 1 else 0 end) over (partition by u.id) as NumMatches from users u join rolesusers ru on u.id = ru.userid join roles r on r.id = ru.roleid ) ur where NumMatches > 0; ``` (`NumMatches > 0` keeps users with at least one matching role.) I don't understand your table-alias scheme, where you make the table names even more complicated than they need to be. Abbreviations usually make the query more readable.
Query to return all records (including all associated data) where at least one associated data matches requirements
[ "", "sql", "postgresql", "" ]
I have a script which declares a couple of temporary tables and has cursors to loop through each item and then performs updates. However, I wanted to add a rollback to this in case anything within the script fails it can roll back, otherwise commit. How would I do this exactly? I have looked at the other questions related to this but I am still confused. Thanks. Would this work? ``` BEGIN TRY BEGIN TRANSACTION ----sql statements --- --- inserts----- ---- updates ----- COMMIT END TRY BEGIN CATCH IF @@TRANCOUNT > 0 ROLLBACK END CATCH ```
Roughly, your code will work fine: ``` OPEN CURSOR BEGIN TRY BEGIN TRANSACTION -- loop through the cursor and run the statements here COMMIT END TRY BEGIN CATCH ROLLBACK END CATCH CLOSE CURSOR ``` However, why do you want to use a cursor? As far as I know, cursors perform slowly, and once the transaction fails and goes to rollback, all of the work done inside the cursor loop will be rolled back. Correct me if I'm wrong.
Here is a [code snippet](http://sqlhint.com/sqlserver/scripts/tsql/code-template-handling-transactions-sql-server-rollback-xact_abort) to use: ``` SET XACT_ABORT ON; BEGIN TRY BEGIN TRANSACTION -- Your script here COMMIT TRANSACTION END TRY BEGIN CATCH IF XACT_STATE() != 0 ROLLBACK TRANSACTION -- Re-throw error here END CATCH ```
Rollback and commit transaction SQL Server
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
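For completeness, the `-- Re-throw error here` placeholder in the second answer can be filled in. A sketch for SQL Server 2008 using `RAISERROR` (on 2012+ a bare `THROW;` inside the CATCH block re-raises the original error more faithfully):

```sql
BEGIN CATCH
    IF XACT_STATE() != 0
        ROLLBACK TRANSACTION

    -- capture the original error details and re-raise them;
    -- note RAISERROR cannot reproduce the original error number
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE(),
            @severity int = ERROR_SEVERITY(),
            @state int = ERROR_STATE()
    RAISERROR (@msg, @severity, @state)
END CATCH
```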
I'm new in programming. I'm designing a database that has two tables: Addresses and Customers. I don't know how to assign foreign keys. I don't know which way is better to use: **Method1:** Customer Table Columns: CustomerID FirstName LastName Address ID Addresses Table Columns: AddressId Country State City AddressLine **Method2** Customer Table: CustomerId FirstName LastName Addresses Table: AddressId CustomerId Country City AddressLine In other words I don't know where to place the foreign key.
If you are implementing a one-to-many relationship, the recommended way is to put the primary key of the "one" table into the table of the "many" as a foreign key. For example: if one customer can have many addresses and each address belongs to one customer, then you put the CustomerId as a foreign key in the Addresses table. If one address can have many customers and each customer has one address, then you put the AddressId as a foreign key in the Customers table. Good luck
I will point out that you **may** actually have a many-to-many relationship. One address can have more than one customer and one customer can have more than one address. In this case you use a join table to model the relationship; it contains the AddressId and the CustomerId and nothing else. It would have a composite primary key on both those fields to ensure uniqueness.
Implementing one to many relationship
[ "", "sql", "database", "" ]
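A DDL sketch of the second method (the foreign key lives in the Addresses table, the "many" side); the column types here are illustrative guesses, not part of the question:

```sql
CREATE TABLE Customers (
    CustomerId INT PRIMARY KEY,
    FirstName  VARCHAR(50),
    LastName   VARCHAR(50)
);

CREATE TABLE Addresses (
    AddressId   INT PRIMARY KEY,
    CustomerId  INT NOT NULL,        -- the foreign key column
    Country     VARCHAR(50),
    City        VARCHAR(50),
    AddressLine VARCHAR(100),
    CONSTRAINT FK_Addresses_Customers
        FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId)
);
```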
Let's say I have a table `call_log` that has a column named `duration`. Let's further pretend that when I put the table together, I made the mistake of making `duration` a varchar column instead of a number. Now I can't properly sort on that column. I want to change the column's datatype, so I issue... ``` ALTER TABLE call_log MODIFY (duration NUMBER); ``` But I get... ``` ORA-01439: column to be modified must be empty to change datatype. ``` My table has been in use and has data! And I don't want to lose the data. How do I modify the column's datatype without losing data?
Create a temporary column with the correct data type, copy the data to the new column, drop the old column, rename the new column to match what the old column was named. ``` ALTER TABLE call_log ADD (duration_temp NUMBER); UPDATE call_log SET duration_temp = duration; ALTER TABLE call_log DROP COLUMN duration; ALTER TABLE call_log RENAME COLUMN duration_temp TO duration; ``` The idea for this answer came from [Oracle's forums](https://community.oracle.com/thread/597075).
The previous solution is excellent; however, if you don't want to change the column order in the call_log table structure, then the following steps do the job: ``` create table temp_call_log as select * from call_log; /* temp backup table for call_log */ UPDATE call_log SET duration = null; /* check... be sure... then commit */ commit; ALTER TABLE call_log MODIFY duration NUMBER; UPDATE call_log c SET c.duration = (select t.duration from temp_call_log t where t.primarykey_column = c.primarykey_column); /* check... be sure... then commit */ commit; drop table temp_call_log; ``` note1: replace primarykey_column with the primary key column of call_log. note2: this solution assumes that your data size is not big.
How to change datatype of column with data
[ "", "sql", "oracle", "" ]
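One step worth adding before the copy in the accepted answer: check for rows that will not convert cleanly to a number. A hedged Oracle sketch - the regular expression assumes plain unsigned integers, so widen it if durations can carry signs or decimal points:

```sql
-- rows returned here would make "SET duration_temp = duration" fail
SELECT duration
FROM   call_log
WHERE  duration IS NOT NULL
AND    NOT REGEXP_LIKE(duration, '^[0-9]+$');
```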
I have the following: ``` create table words (value varchar(10)); insert into words values('bbbbb'); insert into words values('ccccc'); ``` If I now query `select * from words where value > 'b%';` I get: ``` +-------+ | value | +-------+ | bbbbb | | ccccc | +-------+ ``` The question is: how can I get all tuples having the `value` column value greater than `b`? I mean I want to get all tuples whose `value` column starts, in alphabetical order, with a letter from c to z.
The `>` operator compares strings in [lexicographical order](http://en.wikipedia.org/wiki/Lexicographical_order); see the examples below: ``` mysql> SELECT 'a' > 'b', 'b' > 'a', 'ab' > 'aaa', 'ab' > 'b'; +-----------+-----------+--------------+------------+ | 'a' > 'b' | 'b' > 'a' | 'ab' > 'aaa' | 'ab' > 'b' | +-----------+-----------+--------------+------------+ | 0 | 1 | 1 | 0 | +-----------+-----------+--------------+------------+ 1 row in set (0.00 sec) ``` If you are looking for pattern matching you should use [like](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html) or a regex. Check the examples: [12.5.1 String Comparison Functions](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html) ``` mysql> SELECT 'bbbbb' like 'b%', 'cccc' like 'b%'; +-------------------+------------------+ | 'bbbbb' like 'b%' | 'cccc' like 'b%' | +-------------------+------------------+ | 1 | 0 | +-------------------+------------------+ 1 row in set (0.00 sec) ``` [12.5.2 Regular Expressions](https://dev.mysql.com/doc/refman/5.1/en/regexp.html) ``` mysql> SELECT 'bbbb' REGEXP 'b.*', 'cccc' REGEXP 'b.*'; +---------------------+---------------------+ | 'bbbb' REGEXP 'b.*' | 'cccc' REGEXP 'b.*' | +---------------------+---------------------+ | 1 | 0 | +---------------------+---------------------+ 1 row in set (0.00 sec) ```
Try ``` WHERE value REGEXP '^[c-z]' ```
Get all values greater than 'x'
[ "", "mysql", "sql", "" ]
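A small addition to the answers above: `%` has no special meaning outside `LIKE`, so `value > 'b%'` is an ordinary lexicographic comparison against the literal string `'b%'`. For the stated goal (values starting with c through z), a plain comparison is enough:

```sql
-- every value that sorts at or after 'c' qualifies
SELECT * FROM words WHERE value >= 'c';
```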
How can I sum or multiply columns in SQL Server? For example... ![enter image description here](https://i.stack.imgur.com/fRkUE.png) I want to sum Number1 and Number2 (1 + 4) and have the result, 5, automatically added to the Number3 column
If you want `Number3` to always contain the sum of `Number1` and `Number2`, then define it as a computed column. In the `create table` statement, say: ``` Number3 as (Number1 + Number2) ``` This will ensure that the third number is always the sum of the other two. Automatically.
``` update t set t.Number3 = t.Number1 + t.Number2 from mytable t ``` As @JoelCoehoorn very insightfully suggests, perhaps you want to include the value of `Number3` in which case it would be: ``` t.Number3 = t.Number3 + t.Number1 + t.Number2 ```
Sum or Multiply in SQL Server
[ "", "sql", "sql-server", "visual-studio", "" ]
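A fuller sketch of the computed-column approach from the first answer, as an `ALTER` on an existing table (`MyTable` is a placeholder name; `PERSISTED` stores the value physically so it can be indexed). Note the existing plain `Number3` column would have to be dropped first:

```sql
ALTER TABLE MyTable DROP COLUMN Number3;
ALTER TABLE MyTable ADD Number3 AS (Number1 + Number2) PERSISTED;
```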
I am trying to return FirstName, LastName, Line1, City, State, and ZipCode from the Customers table and the Addresses table using a join. I am to return one row for each customer, but only return addresses that are the shipping address for a customer. This is what I have so far: ``` SELECT FirstName, LastName, Line1, City, State, ZipCode FROM Customers JOIN Addresses ON (Customers.CustomerID = Addresses.CustomerID); ``` These are the fields for the Customers table: ``` CustomerID EmailAddress Password FirstName LastName ShippingAddressID BillingAddressID ``` These are the fields for the Addresses table: ``` AddressID CustomerID Line1 Line2 City State ZipCode Phone Disabled ``` I tried to use a ad hoc relationship and it did not work. I do not know how to filter out only the shipping address.
Your join has to be based on AddressIds, not CustomerIds, ie: ``` SELECT FirstName, LastName, Line1, City, State, ZipCode FROM Customers JOIN Addresses ON (Customers.ShippingAddressId = Addresses.AddressID); ```
Do you mean you need to return the customer detail even if the customer doesn't have an address? If so, use a LEFT JOIN: ``` SELECT FirstName, LastName, Line1, City, State, ZipCode FROM Customers LEFT JOIN Addresses ON (Customers.CustomerID = Addresses.CustomerID and Customers.ShippingAddressID=Addresses.AddressID) ```
How do I extract only the shipping address and not the billing address using SQL and join?
[ "", "sql", "sql-server", "select", "join", "filtering", "" ]
Can I get data types of each column I selected instead of the values, using a select statement? FOR EXAMPLE: ``` SELECT a.name, a.surname, b.ordernum FROM customer a JOIN orders b ON a.id = b.id ``` and result should be like this ``` name | NVARCHAR(100) surname | NVARCHAR(100) ordernum| INTEGER ``` or it can be in row like this, it isn't important: ``` name | surname | ordernum NVARCHAR(100) | NVARCHAR(100) | INTEGER ``` Thanks
You can query the `all_tab_columns` view in the database. ``` SELECT table_name, column_name, data_type, data_length FROM all_tab_columns where table_name = 'CUSTOMER' ```
I found a not-very-intuitive way to do this by using `DUMP()` ``` SELECT DUMP(A.NAME), DUMP(A.surname), DUMP(B.ordernum) FROM customer A JOIN orders B ON A.id = B.id ``` It will return something like: `'Typ=1 Len=2: 0,48'` for each column. `Type=1` means `VARCHAR2/NVARCHAR2` `Type=2` means `NUMBER/FLOAT` `Type=12` means `DATE`, etc. You can refer to this oracle doc for information [Datatype Code](https://docs.oracle.com/cd/A58617_01/server.804/a58234/datatype.htm "DataType Code") or this for a simple mapping [Oracle Type Code Mappings](https://ellebaek.wordpress.com/2011/02/25/oracle-type-code-mappings/ "Oracle Type Code Mappings")
Get data type of field in select statement in ORACLE
[ "", "sql", "oracle", "function", "types", "" ]
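Another common trick when the query is ad hoc rather than a single table: wrap it in a view, then describe the view's columns from the data dictionary. A sketch (remember Oracle stores unquoted identifiers in upper case):

```sql
CREATE VIEW type_check_v AS
SELECT a.name, a.surname, b.ordernum
FROM customer a JOIN orders b ON a.id = b.id;

SELECT column_name, data_type, data_length
FROM   user_tab_columns
WHERE  table_name = 'TYPE_CHECK_V';

DROP VIEW type_check_v;
```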
Apologies, as I am fairly new to SQL, but I'm having trouble finding a solution to a problem. I have two tables, outlined below: **Comment table** > Columns: Name, Comment, rating a, rating b, rating c, venue_id **Venue table** > Columns: id, venue_name, description, address I would like to get the venue name from the venue table where the id in the same row matches the venue id in the comment table; is this possible? (The venue_id and id match.) For example, if I were to print this I would have ``` venue_name (from the venue table) rating a (from the comment table) rating b (from the comment table) rating c (from the comment table) ``` (the ratings linked to the venue_name through the venue id in the comment table and the id in the venue table). Apologies if this doesn't make sense, but I would appreciate some help if possible. Thanks
Use a simple JOIN query. `SELECT Venue.venue_name, Comment.rating_a, Comment.rating_b, Comment.rating_c FROM Venue INNER JOIN Comment ON Venue.id=Comment.venue_id;` This will give you: * venue_name (from the venue table) * rating a (from the comment table) * rating b (from the comment table) * rating c (from the comment table)
``` select Venue.venue_name, rating_a, rating_b, rating_c from Venue, Comment where Comment.venue_id = Venue.id; ``` Explanation: Venue.venue\_name and the three rating\_ identifiers all tell SQL which columns you want to output (note: you may have to prepend Comment. in front of the three rating columns, but I think it will work without that since those columns are uniquely named). The from statement must contain each table you want to draw data out of (whether for display or for a where clause). Finally, the where clause is how the rows in Venue get related to the rows in Comment (they share a common venue\_id). You'll need to make sure that venue\_id in Venue is a primary key (and ideally, set it to auto\_increment)
logical issue with SQL query dealing with two tables
[ "", "mysql", "sql", "sql-server", "" ]
Is it possible to search for multi-accounters in a cleaner method than what follows? I am searching for this in logs, having multiple accounts isn't against the rules, but I would like to know who is multi accounting. ``` SELECT a.username, a.ipaddr FROM logs AS a INNER JOIN logs AS b ON a.ipaddr = b.ipaddr AND a.username <> b.username GROUP BY a.username ORDER BY a.ipaddr ``` If I concat a.username and b.username, then I will get the same thing as expanding brackets i.e. (a + b)(c + d) when I want to get something like this: a, b, c, d If this is still not clear, here is an example of the above query: ``` username, ipaddress bob, 1.2.3.4 tom, 1.2.3.4 joe, 1.2.3.4 sally, 2.3.4.5 jenny, 2.3.4.5 bob2, 2.3.4.5 ``` When I would like to return something as follows: ``` username, ipaddress bob & tom & joe, 1.2.3.4 sally & jenny & bob2, 2.3.4.5 ``` How can I do this? I cannot think of any ways without using hundreds of sub queries.
Use group\_concat with a distinct select subquery: ``` SELECT GROUP_CONCAT(a.username SEPARATOR ' & ') as username, a.ipaddr FROM (select distinct username,ipaddr from logs) AS a GROUP BY a.ipaddr HAVING count(*) > 1 ORDER BY a.ipaddr ``` <http://www.sqlfiddle.com/#!2/d9211/2> **EDIT** Added `HAVING count(*) > 1` to only get multi-accounters.
Try with GROUP\_CONCAT ``` SELECT GROUP_CONCAT(a.username SEPARATOR ' & ') as username, a.ipaddr FROM logs AS a INNER JOIN logs AS b ON a.ipaddr = b.ipaddr AND a.username <> b.username GROUP BY a.username ORDER BY a.ipaddr ```
mySQL searching for multi accounters
[ "", "mysql", "sql", "" ]
I am trying to write a stored procedure in which the where clause needs to be dynamic based on the parameter passed to the procedure. Based on whether an empty string is passed to StartDate and EndDate, the ContractDate condition selects rows. If they are empty strings, the user would have passed a list of dates(assumption) and I need to select rows that have ContractDates sames as the ones in the list. If the @StartDate and @EndDate params are not empty strings, I select rows that have ContractDate >= @StartDate and <=@EndDate. How will I incorporate this in my proc? The current code results in a syntax error. ``` select TermDescription,ContractDate,Price,SortOrder into #tbtp from BaseTermPrice inner hash join Term on Term.TermID = BaseTermPrice.TermID where BaseID = @BaseID and PeakType = @PeakType and case when @StartDate != '' and @EndDate != '' then ContractDate >= @StartDate and ContractDate <= @EndDate else ContractDate in (@DateList) end order by ContractDate,SortOrder ```
Thanks for all your valuable suggestions. Just ended up cracking this with ands and ors ``` select TermDescription,ContractDate,Price,SortOrder into #tbtp from BaseTermPrice inner hash join Term on Term.TermID = BaseTermPrice.TermID where BaseID = @BaseID and ((@PeakType IS NULL and PeakType is null) or (@PeakType IS NOT NULL and PeakType=@PeakType)) and ((@DateList IS NULL and ContractDate between @StartDate and @EndDate) or (@StartDate IS NULL and ContractDate in (@DateList))) order by ContractDate,SortOrder ```
You should look into Dynamic SQL. Best place to start is <http://www.sommarskog.se/dynamic_sql.html>. Dynamic SQL lets you build the SQL string programmatically based on whatever conditions you need.
T-SQL stored proc conditionals in where clause
[ "", "sql", "sql-server", "" ]
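A hedged sketch of the dynamic-SQL route the second answer points to, using `sp_executesql` so the values stay parameterized (the parameter types are guesses; the `@DateList` branch would still need a split function or table-valued parameter rather than string substitution):

```sql
DECLARE @sql nvarchar(max) = N'
    SELECT TermDescription, ContractDate, Price, SortOrder
    FROM BaseTermPrice
    INNER JOIN Term ON Term.TermID = BaseTermPrice.TermID
    WHERE BaseID = @BaseID AND PeakType = @PeakType';

-- append the date-range predicate only when both bounds were supplied
IF @StartDate <> '' AND @EndDate <> ''
    SET @sql = @sql + N' AND ContractDate BETWEEN @StartDate AND @EndDate';

EXEC sp_executesql @sql,
     N'@BaseID int, @PeakType varchar(20), @StartDate date, @EndDate date',
     @BaseID, @PeakType, @StartDate, @EndDate;
```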
``` CREATE TABLE kategorijos( kat_id number(11) not null, kat_pavadinimas varchar(255), CONSTRAINT kategorijos_pk PRIMARY KEY(kat_id) ); create or replace procedure delete_kategorijos (kat_id number) is begin delete from kategorijos where kat_id = kat_id; end; ``` I call procedure: ``` BEGIN delete_kategorijos(1); END; ``` This procedure delete ALL records in database, but it really must delete only row where kat\_id = 1 , what is wrong in my procedure ?
You are using the line of code below for deleting records: ``` delete from kategorijos where kat_id = kat_id; ``` In the condition ``` where kat_id = kat_id; ``` both sides resolve to the table column, because the parameter has the same name as the column. The comparison is therefore always true, which is why all the records get deleted.
It is better to give the parameter another name, as follows: ``` create or replace procedure delete_kategorijos (l_kat_id number) is begin delete from kategorijos where kat_id = l_kat_id; end; ``` But think about the following: - what if kat_id = 1 does not exist? - what if the record is locked by another transaction?
Oracle stored delete procedure
[ "", "sql", "oracle", "stored-procedures", "plsql", "" ]
I need to migrate data from one database to another one; both are on the same local system. The tables and columns have different names, and I mustn't migrate all the columns from the old database, so `Select *` doesn't work for me. ``` INSERT INTO newDatabase.table1(Column1, Column2); SELECT oldDatabase.table1(column1, column2) FROM oldDatabase.table1 ``` but all I got is a `#1064 - Syntax Error` What is the error in my query and how can I fix it? Thanks in advance
Your query should go like this: ``` INSERT INTO newDatabase.table1 (Column1, Column2) SELECT column1, column2 FROM oldDatabase.table1; ``` **UPDATE** Since this answer is getting more attention than I ever anticipated, I should expand on it. First of all, it might not be obvious from the answer itself, but the columns do not need to have the same name. So, the following will work too (assuming that the columns exist in their respective tables): ``` INSERT INTO newDatabase.table1 (Column1, Column2) SELECT SomeOtherColumn, MoreColumns FROM oldDatabase.table1; ``` Furthermore, they don't even need to be real columns in the table. One of the examples for transforming data that I use quite often is: ``` INSERT INTO newDatabase.users (name, city, email, username, added_by) SELECT CONCAT(first_name, ' ', last_name), 'Asgard', CONCAT(first_name,'@gmail.com'), CONCAT(first_name,last_name), 'Damir' FROM oldDatabase.old_users; ``` So, as it might be more obvious now, the rule is: as long as the SELECT query returns the same number of columns that the INSERT query needs, it can be used in place of VALUES.
``` INSERT INTO db1.table SELECT * FROM db2.table; ``` If you want to copy data to same tables of different db.
MySQL Insert Into from one Database in another
[ "", "mysql", "sql", "database", "" ]
I have this largish table with three columns as such: ``` +-----+-----+----------+ | id1 | id2 | associd | +-----+-----+----------+ | 1 | 38 | 73157604 | | 1 | 112 | 73157605 | | 1 | 113 | 73157606 | | 1 | 198 | 31936810 | | 1 | 391 | 73157607 | +-----+-----+----------+ ``` This continues for 38m rows. The problem is I want to remove the 'associd' column but running `ALTER TABLE table_name DROP COLUMN associd;` simply takes too long. I wanted to do something like: `ALTER TABLE table_name SET UNUSED associd;` and `ALTER TABLE table_name DROP UNUSED COLUMNS CHECKPOINT 250;` then which apparently speeds up the process but [it isn't possible in MySQL?](http://forums.phpfreaks.com/topic/115428-solved-set-unused-column-for-mysql/) Is there an alternative to remove this column-- maybe creating a new table with only the two columns or getting a drop with checkpoints?
Anything that you do is going to require reading and writing 38m rows, so nothing is going to be real fast. Probably the fastest method is to put the data into a new table: ``` create table newTable as select id1, id2 from oldTable; ``` Or, if you want to be sure that you preserve types and indexes: ``` create table newTable like oldTable; alter table newTable drop column assocId; insert into newTable(id1, id2) select id1, id2 from oldTable; ``` However, it is usually faster to drop all index on a table before loading a bunch of data and then recreate the indexes afterwards.
Disclaimer: this answer is MySQL oriented and might not work for other databases. I think the accepted answer leaves some things out, so I have tried to set out here the generic sequence I use to do this kind of operation in a production environment, not only for adding/removing columns but also for adding indexes, for example. We call it [the Indiana Jones' movement](https://www.youtube.com/watch?v=Pr-8AP0To4k). ## Create a new table A new table using the old one as template: ``` create table my_table_new like my_table; ``` ## Remove the column in the new table In the new table: ``` alter table my_table_new drop column column_to_delete; ``` ## Add the foreign keys to the new table They are not generated automatically by the `create table like` command. You can check the actual foreign keys: ``` mysql> show create table my_table; ``` Then apply them to the new table: ``` alter table my_table_new add constraint my_table_fk_1 foreign key (field_1) references other_table_1 (id), add constraint my_table_fk_2 foreign key (field_2) references other_table_2 (id) ``` ## Clone the table Copy all fields but the one you want to delete. I use a `where` clause to be able to run this command as many times as necessary. As I suppose this is a production environment, `my_table` will receive new records continuously, so we have to keep synchronizing until we are able to do the renaming. Also I have added a `limit`, because if the table is too big and the indexes are too heavy, making a one-shot clone can hurt the performance of your database. Plus, if in the middle of the process you want to cancel the operation, it will have to roll back all the insertions already done, which means your database won't recover instantly (<https://dba.stackexchange.com/questions/5654/internal-reason-for-killing-process-taking-up-long-time-in-mysql>) ``` insert my_table_new select field_1, field_2, field_3 from my_table where id > ifnull((select max(id) from my_table_new), 0) limit 100000; ``` As I was doing this several times I created a procedure: <https://gist.github.com/fguillen/5abe87f922912709cd8b8a8a44553fe7> ## Do the name changing Be sure you run these commands immediately after you have replicated the last records from your table. Ideally run all the commands at once. ``` rename table my_table to my_table_3; rename table my_table_new to my_table; ``` ## Delete the old table Be sure you have a backup before you do this ;) ``` drop table my_table_3 ``` Disclaimer: I am not sure what will happen with foreign keys that were pointing to the old table.
Drop Column from Large Table
[ "", "mysql", "sql", "" ]
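One more option worth trying before the copy-table approach in the accepted answer: on MySQL 5.6+ with InnoDB, the online-DDL form still rebuilds the table, but it does not block concurrent reads and writes while it runs:

```sql
-- fails with an error (rather than silently falling back to a copy)
-- if the in-place algorithm is not supported for this operation
ALTER TABLE table_name
    DROP COLUMN associd,
    ALGORITHM = INPLACE, LOCK = NONE;
```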
Got three tables I'm joining. `submissions`, `submissions_votes`, and `users`. I want to find out how many total helpfulVotes there are (which is a sum of the count of all `submissions_votes`) and I've got that. I also want to return a count (boolean, rather) of 0 or 1 if the `user_id` of `sv.user_id` relates to the submission being viewed. The `user_id` is passed in to the `WHERE` clause. ``` SELECT s.*, u.username, u.photo as userPhoto, COALESCE(SUM(sv.up), 0) helpfulVotes FROM submissions s LEFT JOIN submissions_votes sv on s.id = sv.submission_id WHERE u.id = ? INNER JOIN users u ON s.user_id = u.id ``` I know I need an additional join (on `sv.user_id = u.id`) but what would I select? Then would I group by `sv.id`? Edit: `users` table: ``` +----------------+------------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+------------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | email | varchar(128) | NO | MUL | NULL | | | username | varchar(23) | NO | | NULL | | | type | enum('normal','admin') | NO | | normal | | | about | varchar(255) | NO | | NULL | | | photo | varchar(32) | NO | | NULL | | +----------------+------------------------+------+-----+---------+----------------+ ``` `submissions_votes` table: ``` +---------------+---------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+---------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | submission_id | int(10) unsigned | NO | MUL | NULL | | | when | datetime | NO | | NULL | | | user_id | int(10) unsigned | NO | MUL | NULL | | | up | tinyint(3) unsigned | NO | | NULL | | | down | tinyint(3) unsigned | NO | | NULL | | +---------------+---------------------+------+-----+---------+----------------+ ``` `submissions` table: ``` +-------------+-----------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+-----------------------+------+-----+---------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | title | varchar(255) | NO | MUL | NULL | | | slug | varchar(255) | NO | | NULL | | | description | mediumtext | NO | | NULL | | | user_id | int(11) | NO | MUL | NULL | | | created | datetime | NO | | NULL | | | type | enum('tip','request') | NO | | NULL | | | thumbnail | varchar(64) | YES | | NULL | | | removed | tinyint(1) unsigned | NO | | 0 | | | keywords | varchar(255) | NO | | NULL | | | ip | int(10) unsigned | NO | | NULL | | +-------------+-----------------------+------+-----+---------+----------------+ ```
You can check if the `sv.user_id` = input `user_id` using `CASE` and `SUM` it up (grouped by each submission). If the SUM is 1, then the input user\_id has a submission, otherwise not. *So, your input user\_id would go into the CASE function.* Also, the `COALESCE(SUM(sv.up), 0)` requires a grouping by whichever columns of `submissions` and `users` tables are selected. The following is the query based on the tables in the **[SQL Fiddle here](http://www.sqlfiddle.com/#!2/0d9c3/15)**. ``` SELECT s.id as submission_id, s.title as submission_title, MAX(u.email) as submission_user_email, COALESCE(SUM(sv.up), 0) helpfulVotes, SUM(CASE sv.user_id WHEN ? THEN 1 ELSE 0 END) User_Submission FROM submissions s LEFT JOIN submissions_votes sv on s.id = sv.submission_id INNER JOIN USERS u ON s.user_id = u.id GROUP BY s.id, s.title; ``` (If more columns from the `submissions` table need to be selected, then they need to be either grouped or aggregated)
I don't think you need an additional join. Just a boolean expression in the `select`: ``` SELECT s.*, u.username, u.photo as userPhoto, COALESCE(SUM(sv.up), 0) helpfulVotes, SUM(sv.user_id = u.id) as SubmissionUserMatches FROM submissions s LEFT JOIN submissions_votes sv on s.id = sv.submission_id INNER JOIN users u ON s.user_id = u.id GROUP BY s.id, u.username, u.photo; ```
Get total count in addition of count if user voted
[ "", "mysql", "sql", "" ]
How can I make it so that when I run this query ``` SELECT distinct cus_areacode AS "Area Code", cus_code AS "Number" FROM CUSTOMER WHERE cus_areacode = 713 OR cus_areacode = 615; ``` instead of showing the following ``` Area Code Number 713 10015 713 10018 615 10019 615 10017 713 10011 615 10010 615 10016 615 10012 615 10014 615 10013 ``` it shows this: ``` Area Code Number 615 7 713 3 ``` I tried this ``` SELECT distinct cus_areacode AS "Area Code", count(cus_code) AS "Number" FROM CUSTOMER WHERE cus_areacode = 713 OR cus_areacode = 615; ``` But it does not work.
``` SELECT cus_areacode AS "Area Code", count(cus_code) AS "Number" FROM CUSTOMER GROUP BY cus_areacode ```
Try this ``` SELECT cus_areacode AS "Area Code", count(cus_code) AS "Number" FROM CUSTOMER WHERE cus_areacode IN(713,615) GROUP BY cus_areacode; ``` OR: ``` SELECT cus_areacode AS "Area Code", count(cus_code) AS "Number" FROM CUSTOMER WHERE cus_areacode = 713 OR cus_areacode = 615 GROUP BY cus_areacode; ``` **[Fiddle Demo Here](http://sqlfiddle.com/#!2/93e32/22)**
Get count on SQL
[ "", "mysql", "sql", "" ]
`tableOne`: ``` id name -- ---- 1 A 2 B 3 C ``` `tableTwo`: ``` stuffno id stuff stufftype ------- -- ----- --------- 1 1 D1 D 2 1 E1 E 3 1 F1 F 4 2 D2 D 5 2 E2 E 6 2 F2 F 7 3 D3 D 8 3 E3 E 9 3 F3 F ``` Requested result: ``` name stuffD stuffE stuffF ---- ------ ------ ------ A D1 E1 F1 B D2 E2 F2 C D3 E3 F3 ``` How do I do that in one SQL query?
You can use multiple `JOIN`s to `tableTwo` to achieve the result you want (one join per stufftype, listed in the same order as the requested columns):

```
SELECT p.name as name,
       d.stuff AS stuffD,
       e.stuff AS stuffE,
       f.stuff AS stuffF
FROM tableOne p
JOIN tableTwo d on (p.id = d.id AND d.stufftype = 'D')
JOIN tableTwo e on (p.id = e.id AND e.stufftype = 'E')
JOIN tableTwo f on (p.id = f.id AND f.stufftype = 'F')
```
Try something like:

```
SELECT t1.name,
       t2.stuff as stuffD,
       t3.stuff as stuffE,
       t4.stuff as stuffF
FROM tableOne t1
JOIN tableTwo t2 ON (t1.id = t2.id AND t2.stufftype = 'D')
JOIN tableTwo t3 ON (t1.id = t3.id AND t3.stufftype = 'E')
JOIN tableTwo t4 ON (t1.id = t4.id AND t4.stufftype = 'F')
```
grouping one to many relationship in a row
[ "", "sql", "join", "pivot", "" ]
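The multi-join pivot in the accepted answer can be verified against the question's sample data. Here is a sketch using Python's bundled SQLite (any engine with standard joins behaves the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableOne (id INTEGER, name TEXT);
CREATE TABLE tableTwo (stuffno INTEGER, id INTEGER, stuff TEXT, stufftype TEXT);
INSERT INTO tableOne VALUES (1,'A'), (2,'B'), (3,'C');
INSERT INTO tableTwo VALUES
  (1,1,'D1','D'), (2,1,'E1','E'), (3,1,'F1','F'),
  (4,2,'D2','D'), (5,2,'E2','E'), (6,2,'F2','F'),
  (7,3,'D3','D'), (8,3,'E3','E'), (9,3,'F3','F');
""")

# One join per stufftype turns the one-to-many rows into columns.
rows = conn.execute("""
SELECT p.name, d.stuff AS stuffD, e.stuff AS stuffE, f.stuff AS stuffF
FROM tableOne p
JOIN tableTwo d ON p.id = d.id AND d.stufftype = 'D'
JOIN tableTwo e ON p.id = e.id AND e.stufftype = 'E'
JOIN tableTwo f ON p.id = f.id AND f.stufftype = 'F'
ORDER BY p.name
""").fetchall()

print(rows)  # [('A','D1','E1','F1'), ('B','D2','E2','F2'), ('C','D3','E3','F3')]
```

Note that inner joins drop any `tableOne` row missing one of the three stufftypes; switch to `LEFT JOIN` if partial rows should still appear.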
```
Product ID  Quantity  DateAdded
1           100       4/1/14
2           200       4/2/14
3           300       4/2/14
1           80        4/3/14
3           40        4/5/14
2           5         4/6/14
1           10        4/7/14
```

I am using this SQL statement to display the first and last record of each item:

```
SELECT ProductID, MIN(Quantity) AS Starting, MAX(Quantity) AS Ending
FROM Records
WHERE DateAdded BETWEEN '2014-04-01' AND '2014-04-30'
GROUP BY ProductID, Quantity
```

but I am getting the same values for the Starting and Ending columns. I want to achieve something like this:

```
Product ID  Starting  Ending
1           100       10
2           200       5
3           300       40
```
You are getting the same quantities because you are aggregating by `quantity` in the `group by` as well as product. Your version of the query, properly written, would be:

```
SELECT ProductID, MIN(Quantity) AS Starting, MAX(Quantity) AS Ending
FROM Records
WHERE DateAdded BETWEEN '2014-04-01' AND '2014-04-30'
GROUP BY ProductID;
```

However, this doesn't give you the first and last values. It only gives you the minimum and maximum ones. To get those values, use `row_number()` and conditional aggregation (note that the date filter goes inside the subquery, so the row numbers are computed only over the rows in that month):

```
SELECT ProductID,
       MAX(CASE WHEN seqnum_asc = 1 THEN Quantity END) as Starting,
       MAX(CASE WHEN seqnum_desc = 1 THEN Quantity END) as Ending
FROM (SELECT r.*,
             row_number() over (partition by ProductID order by DateAdded asc) as seqnum_asc,
             row_number() over (partition by ProductID order by DateAdded desc) as seqnum_desc
      FROM Records r
      WHERE DateAdded BETWEEN '2014-04-01' AND '2014-04-30'
     ) r
GROUP BY ProductID;
```

If you are using SQL Server 2012, then you can also do this with `FIRST_VALUE()` and `LAST_VALUE()` instead of `row_number()`.
Use the `row_number()` ranking function:

```
select starting.*, ending.ending
from (select ProductID, quantity as starting
      from (select *,
                   ROW_NUMBER() over (partition by productid order by dateadded) rn
            from yourtable
            where DateAdded BETWEEN '2014-04-01' AND '2014-04-30') first
      where rn = 1) starting
inner join
     (select ProductID, quantity as ending
      from (select *,
                   ROW_NUMBER() over (partition by productid order by dateadded desc) rn
            from yourtable
            where DateAdded BETWEEN '2014-04-01' AND '2014-04-30') last
      where rn = 1) ending
  on starting.productid = ending.productid
```

The first subquery gets the first entry for the time period, the second gets the last entry.
Get the first and last record of each item for the month
[ "", "sql", "sql-server", "" ]
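For engines without window functions, the same first/last-per-group result can be had with correlated subqueries. Here is a sketch against the question's data, using Python's bundled SQLite (dates stored as ISO strings so they sort chronologically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Records (ProductID INTEGER, Quantity INTEGER, DateAdded TEXT);
INSERT INTO Records VALUES
  (1,100,'2014-04-01'), (2,200,'2014-04-02'), (3,300,'2014-04-02'),
  (1, 80,'2014-04-03'), (3, 40,'2014-04-05'), (2,  5,'2014-04-06'),
  (1, 10,'2014-04-07');
""")

# For each product, pick the quantity of its earliest and latest row.
rows = conn.execute("""
SELECT r.ProductID,
       (SELECT Quantity FROM Records r2
         WHERE r2.ProductID = r.ProductID
         ORDER BY DateAdded ASC  LIMIT 1) AS Starting,
       (SELECT Quantity FROM Records r2
         WHERE r2.ProductID = r.ProductID
         ORDER BY DateAdded DESC LIMIT 1) AS Ending
FROM (SELECT DISTINCT ProductID FROM Records) r
ORDER BY r.ProductID
""").fetchall()

print(rows)  # [(1, 100, 10), (2, 200, 5), (3, 300, 40)]
```

The `LIMIT 1` form is SQLite/MySQL syntax; on SQL Server the equivalent is `SELECT TOP 1 ...`, and the `row_number()` answers above are generally faster on large tables.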
```
create table #sample
(
    product varchar(100),
    Price float
)

insert into #sample values ('Pen',10)
insert into #sample values ('DVD',29)
insert into #sample values ('Pendrive',45)
insert into #sample values ('Mouse',12.5)
insert into #sample values ('TV',49)

select * from #sample
```

Consider this situation: I have $1000 and I want to buy some of the products listed above, spending the entire amount. So I need a query that gives the combinations of product quantities whose total cost is exactly $1000.

Any help?
This is hard coded and has little flexibility. It took my system 2 minutes to run, but it might be helpful; sorry if it isn't. `fnGenerate_Numbers` is a table function that returns the integers within the range given by its parameters. [Ways to do that.](http://www.sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1)

```
DECLARE @Max INT, @Pens money, @Dvds money, @Pendrives money, @Mouses money, @Tvs money

SELECT @Max = 1000, @Pens = 10, @Dvds = 29, @Pendrives = 45, @Mouses = 12.5, @Tvs = 49

;WITH Results AS (
    SELECT p.n pens, d.n dvds, pd.n pendrives, m.n mouses, t.n tvs, tot.cost
    FROM fnGenerate_Numbers(0, @Max/@Pens) p              -- Pens
    CROSS JOIN fnGenerate_Numbers(0, @Max/@Dvds) d        -- DVDs
    CROSS JOIN fnGenerate_Numbers(0, @Max/@Pendrives) pd  -- Pendrives
    CROSS JOIN fnGenerate_Numbers(0, @Max/@Mouses) m      -- Mouses
    CROSS JOIN fnGenerate_Numbers(0, @Max/@Tvs) t         -- Tvs
    CROSS APPLY (SELECT p.n * @Pens + d.n * @Dvds + pd.n * @Pendrives
                      + m.n * @Mouses + t.n * @Tvs cost) tot
    WHERE tot.cost <= @Max
),
MaxResults AS (
    SELECT MAX(pens) pens, dvds, pendrives, mouses, tvs
    FROM Results
    GROUP BY dvds, pendrives, mouses, tvs
)
SELECT mr.*, r.cost
FROM MaxResults mr
INNER JOIN Results r
    ON mr.pens = r.pens
   AND mr.dvds = r.dvds
   AND mr.pendrives = r.pendrives
   AND mr.mouses = r.mouses
   AND mr.tvs = r.tvs
ORDER BY cost
```

(Note the `<= @Max` comparison: a combination that costs exactly $1000 should be allowed, since the goal is to spend the entire amount.)
The problem you are referring to is also known as the [knapsack problem](http://en.wikipedia.org/wiki/Knapsack_problem). There's a range of algorithms you can use to solve this. The most well known is dynamic programming, it requires that the weights are integer numbers, so you'd have to measure in cents. None of them are easy to implement in t-sql. I actually found a link to someone's implementation in sql server: <http://sqlinthewild.co.za/index.php/2011/02/22/and-now-for-a-completely-inappropriate-use-of-sql-server/> Notice the title, they too find it an inappropriate use of a database. I'd recommend that you solve this in a different language.
Need T-SQL Query find all possible ways
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "knapsack-problem", "" ]
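As the second answer notes, this is the knapsack / subset-sum family, and the dynamic-programming approach needs integer weights, i.e. prices in cents. A small sketch in Python (not T-SQL) that checks whether the $1000 budget is exactly reachable with the question's five prices, assuming unlimited units of each product:

```python
# Unbounded subset-sum over integer cents, as the linked answer requires.
prices_cents = [1000, 2900, 4500, 1250, 4900]  # Pen, DVD, Pendrive, Mouse, TV
budget_cents = 100000                          # $1000

# reachable[t] == True if some combination of units costs exactly t cents.
reachable = [False] * (budget_cents + 1)
reachable[0] = True
for total in range(1, budget_cents + 1):
    reachable[total] = any(
        total >= p and reachable[total - p] for p in prices_cents
    )

best = max(t for t in range(budget_cents + 1) if reachable[t])
print(best / 100.0)  # 1000.0 -- the budget can be spent exactly (e.g. 100 pens)
```

Recovering the actual quantities (rather than just feasibility) means storing, for each reachable total, one price that reached it and walking backwards, which is still far easier in a procedural language than in set-based SQL.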
The problem story goes like this:

Consider a program to manage bank accounts with balance limits for each customer {`table Customers`, `table Limits`}, where for each Customer.id there is one Limit record.

Then the client asked to store a history of the limits' changes. That is not a problem, since I already have a date column on `Limit`, but the view-query for the active (latest) limits needs to change.

Before: Customer-Limit was 1 to 1, so a simple select did the job.

Now: the query would show all the Limits records, which means multiple records for each Customer, and I need the latest Limit only.

So I thought of something like this pseudo code:

```
foreach( id in Customers)
{
    select top 1 * from Limits
    where Limits.customer_id = id
    order by Limits.date desc
}
```

But while looking through SO for similar issues, I came across statements like `"95% of the time when you need a looping structure in tSQL you are probably doing it wrong"-JohnFx` and `"SQL is primarily a set-orientated language - it's generally a bad idea to use a loop in it."-Mark Bannister`

Can anyone confirm/explain why it is wrong to loop? And in the problem explained above, what am I getting wrong that makes me think I need a loop?

Thanks in advance.

**update**: my solution, in light of TomTom's answer & the link suggested [here](https://stackoverflow.com/questions/725153/most-recent-record-in-a-left-join), and before Dean kindly answered with code, came out as:

```
SELECT *
FROM Customers c
LEFT JOIN Limits a ON a.customer_id = c.id
                  AND a.date = (
                      SELECT MAX(date)
                      FROM Limits z
                      WHERE z.customer_id = a.customer_id
                  )
```

Thought I'd share :> Thanks for your responses, happy coding.
Will this do?

```
;with l as (
    select *,
           row_number() over (partition by customer_id order by date desc) as rn
    from limits
)
select *
from customers c
left join l on c.id = l.customer_id and l.rn = 1
```
I am assuming that earlier (i.e. before implementing the history functionality) you were updating the `Limits` table, and that to implement the history you have started inserting new records instead. Doesn't that trigger a lot of changes in your database and code?

Instead of inserting new records, how about keeping the original functionality as is and creating a new table, say `Limits_History`, which stores all the old values from the `Limits` table before each update? Then all you need to do is fetch records from this table when you want to show the history. This causes no changes in your existing SPs and code, and hence is less error-prone.

To insert records into the `Limits_History` table, you can simply create an `AFTER UPDATE` trigger and use the `deleted` magic table. Hence you need not worry about calling an SP or anything else to maintain the history; the trigger will do it for you. Good examples of triggers are [here](http://msdn.microsoft.com/en-us/magazine/cc164047.aspx).

Hope this helps.
SQL, to loop or not to loop?
[ "", "sql", "sql-server", "sql-server-2012", "" ]
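The set-based join from the question's update can be demonstrated end to end. This sketch uses Python's bundled SQLite with made-up customers and limits (the table layout follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (id INTEGER, name TEXT);
CREATE TABLE Limits (customer_id INTEGER, amount INTEGER, date TEXT);
INSERT INTO Customers VALUES (1,'Ann'), (2,'Bob');
INSERT INTO Limits VALUES
  (1, 500, '2014-01-01'),   -- Ann's old limit
  (1, 700, '2014-03-01'),   -- Ann's current limit
  (2, 300, '2014-02-01');   -- Bob's only limit
""")

# Join each customer to the Limits row whose date is the per-customer MAX:
# one pass over the set, no loop required.
rows = conn.execute("""
SELECT c.id, c.name, a.amount
FROM Customers c
LEFT JOIN Limits a
  ON a.customer_id = c.id
 AND a.date = (SELECT MAX(date) FROM Limits z
               WHERE z.customer_id = a.customer_id)
ORDER BY c.id
""").fetchall()

print(rows)  # [(1, 'Ann', 700), (2, 'Bob', 300)] -- latest limit per customer
```

Only the newest limit per customer survives the join, which is exactly what the per-customer `TOP 1` loop was trying to do, expressed as a single set operation.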
I tried the query below and it returned the error "`single-row query returns more than one row`":

```
SELECT (SELECT COUNT(*) FROM pans) AS NUMBEROFPANS,
       (SELECT name FROM facts, pans WHERE fact_id=pafact_id) AS NAMEOFPANS,
       (SELECT COUNT(*) FROM stas) AS NUMBEROFSTAS,
       (SELECT name FROM facts, stas WHERE fact_id=safact_id) AS NAMEOFSTAS,
       (SELECT COUNT(*) FROM zelyc) AS NUMBEROFZELYCS,
       (SELECT name FROM facts, Zelycs WHERE fact_id=zafact_id) AS NAMEOFZELYCS
FROM DUAL;
```

Desired result:

```
Numberofpans  Nameofpans  Numberofstas  Nameofstas
1             jade        1             relate
1             arrow       1             simi
```

I am using Oracle. Also, pans, stas and zelycs are subclasses of the facts table, so they all share the same primary key.

Is there a different way to write this query?
It wasn't very clear what you were trying to do in your post. I think, considering your other answer / comments on other answers, this might be what you're trying to do:

```
SELECT Count(p.pafact_id) as PanCount, p.name as Pan,
       Count(s.safact_id) as StasCount, s.name as Stas,
       Count(z.zafact_id) as ZelycsCount, z.name as Zelyc
FROM facts f
LEFT JOIN pans p ON f.fact_id = pafact_id
LEFT JOIN stas s ON f.fact_id = safact_id
LEFT JOIN zelycs z ON f.fact_id = zafact_id
GROUP BY p.name, s.name, z.name
```
You could run each sub-query to find the one(s) returning multiple rows. Alternatively, you could add '`where rownum = 1`' to each subquery, to force it to return only the first row it comes to.
single row query returns more than one row
[ "", "sql", "oracle", "" ]
I have a lot of rows with `city` and `direction` fields. But from an old import, the city and direction are mixed together in the `direction` field, like:

```
dir number, extra data, CITY, AL 111111
dir number, CITY, AL 111111
number, dir, number, CITY, dir number, CITY, AL 111111
```

The important thing is that 'CITY' always comes right before a US postal code, and I want to extract it and save it in the `city` field with an `UPDATE` (using a regex?). Is it possible? Something like:

```
update TABLE
set city = SOME_REGEX_MAGIC_FROM_DIRECTION_FIELD
where direction ~ 'REGEX_MAGIC'
```

---

The working SQL statement:

```
update TABLE
set city = substring(direction FROM '([^,]+),[^,]+$')
where direction like '%, __ _____';
```
If you want the part before the last comma, one way (of many) is with a plain [`substring()`](https://www.postgresql.org/docs/current/functions-string.html#FUNCTIONS-STRING-SQL) call (regexp variant): ``` substring(direction FROM ',([^,]+),[^,]+$') AS city ``` [db<>fiddle here](https://dbfiddle.uk/?rdbms=postgres_11&fiddle=8ba2037afe62d9f223da0648754971fe) Your `UPDATE` statement might look like this: ``` UPDATE tbl SET city = substring(direction FROM ',([^,]+),[^,]+$') WHERE direction ~ ', *\D\D \d{5}$' ```
From your data I would gather that you'll need something like this: ``` SELECT regexp_matches('direction_field', '([^,]+) \d{5}'); ```
Extract string from text field in PostgreSQL (using regex?)
[ "", "sql", "regex", "postgresql", "sql-update", "" ]
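The same capture pattern (everything between the second-to-last comma and the end-anchored last field) can be prototyped outside the database first. A quick sketch with Python's `re` module on made-up rows shaped like the question's data:

```python
import re

# Made-up rows: the city always sits right before the "XX 99999" tail.
samples = [
    "dir number, extra data, Houston, AL 11111",
    "dir number, Austin, AL 11111",
]

cities = []
for s in samples:
    # Same pattern as the substring() call: capture the token before
    # the last comma, with no further commas allowed to the end.
    m = re.search(r'([^,]+),[^,]+$', s)
    if m:
        cities.append(m.group(1).strip())

print(cities)  # ['Houston', 'Austin']
```

Once the pattern behaves on representative samples, it can be dropped into the Postgres `substring(... FROM ...)` call with confidence; note POSIX regexes in Postgres lack some Python extensions, but this pattern uses none.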
Price Table:

```
date        |  price
---------------------
2014-01-01      100
2014-01-02       20
2014-01-03      -30
2014-01-04       70
...
```

Is it possible to produce the result table below with a SQL query?

Result table:

```
date        |  price  |  total
------------------------------
2014-01-01      100       100
2014-01-02       20       120
2014-01-03      -30        90
2014-01-04       70       160
...
```

1. "Result table" is not an existing table; it is just the SQL result.
2. The "total" column should show the running balance: each row's price added to (or subtracted from) the previous row's total.

Is it possible to write this SQL query?
```
select date, price, @total := @total + price as total
from (select date, price from your_table order by date) x
CROSS JOIN (select @total := 0) r
```
```
select date, @total := @total + price as running_total, price
from your_table, (select @total := 0) r
order by date
```
MySQL Dynamic Addition and Subtraction Row
[ "", "mysql", "sql", "" ]
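User variables are MySQL-specific; a portable way to express the same running total is a correlated subquery that sums every row up to and including the current date. A sketch against the question's data using Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (date TEXT, price INTEGER);
INSERT INTO prices VALUES
  ('2014-01-01', 100), ('2014-01-02',  20),
  ('2014-01-03', -30), ('2014-01-04',  70);
""")

# Each row's total = sum of all prices dated at or before it.
rows = conn.execute("""
SELECT date, price,
       (SELECT SUM(p2.price) FROM prices p2
         WHERE p2.date <= p.date) AS total
FROM prices p
ORDER BY date
""").fetchall()

print(rows)
# [('2014-01-01', 100, 100), ('2014-01-02', 20, 120),
#  ('2014-01-03', -30, 90), ('2014-01-04', 70, 160)]
```

The correlated form scales quadratically, so on large tables the user-variable trick above (or, on MySQL 8+ and other modern engines, `SUM(price) OVER (ORDER BY date)`) is preferable; it also assumes `date` values are unique, since ties would share the same total.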