I have a database with two tables: one is a table for people, indicating which sports they practice; the second is a sports table that shows which sport each id represents. **persons table** ``` id name sport1 sport2 100 John 0 3 101 Max 1 3 102 Axel 2 4 103 Simon 4 2 ``` **sports table** ``` sportid sportn 0 Football 1 Baseball 2 Basketball 3 Hockey 4 Swimming ``` I want to do a query that shows me which sports Max practices, something like this ``` id name sport1 sport2 101 Max Baseball Hockey ``` So far I have this ``` select p.id, p.name, s.sportn, s.sportn from persons as p, sports as s where p.sport1 = s.sportid and p.id = 101 ``` This shows me the first sport twice, so I don't know where to go from here.
The problem with your query is that you are only joining the `sports` table once, via `p.sport1`. This query should give you what you need: ``` SELECT p.id, p.name, s1.sportn AS sport1, s2.sportn AS sport2 FROM persons AS p JOIN sports AS s1 ON p.sport1 = s1.sportid JOIN sports AS s2 ON p.sport2 = s2.sportid WHERE p.id = 101 ``` You could also read up on [sql joins](http://www.w3schools.com/sql/sql_join.asp) to help you understand this solution better. Finally, if you really need to use your current syntax, here is the corrected query: ``` SELECT p.id, p.name, s1.sportn AS sport1, s2.sportn AS sport2 FROM persons AS p, sports AS s1, sports AS s2 WHERE p.sport1 = s1.sportid AND p.sport2 = s2.sportid AND p.id = 101 ```
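As a self-contained illustration (an editor's sketch, using an in-memory SQLite database rather than the asker's engine, seeded with the question's sample data), the double join can be verified like this:

```python
import sqlite3

# In-memory database seeded with the question's sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE persons (id INTEGER, name TEXT, sport1 INTEGER, sport2 INTEGER);
CREATE TABLE sports (sportid INTEGER, sportn TEXT);
INSERT INTO persons VALUES (100,'John',0,3),(101,'Max',1,3),(102,'Axel',2,4),(103,'Simon',4,2);
INSERT INTO sports VALUES (0,'Football'),(1,'Baseball'),(2,'Basketball'),(3,'Hockey'),(4,'Swimming');
""")

# Join the sports table twice, once per sport column.
row = conn.execute("""
    SELECT p.id, p.name, s1.sportn, s2.sportn
    FROM persons AS p
    JOIN sports AS s1 ON p.sport1 = s1.sportid
    JOIN sports AS s2 ON p.sport2 = s2.sportid
    WHERE p.id = 101
""").fetchone()
print(row)  # (101, 'Max', 'Baseball', 'Hockey')
```

Each alias (`s1`, `s2`) acts as an independent copy of the sports table, which is why the same lookup table can translate both columns.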
This should do: ``` SELECT p.id, p.name, s1.sportn AS sport1, s2.sportn AS sport2 FROM persons AS p LEFT JOIN sports AS s1 ON p.sport1 = s1.sportid LEFT JOIN sports AS s2 ON p.sport2 = s2.sportid WHERE p.id = 101 ```
SQL - Selecting a column from another table twice with different values
[ "", "sql", "sqlite", "" ]
I have a table `time_by_day` and I know how to use the `TO_CHAR` function in `oracle`, but how do I get the same output in `MySQL`? Is there any conversion function in `MySQL` for `TO_CHAR()`? I have already tried `date_format` instead of `to_char` but I'm not getting the results I need. ``` SELECT to_char(t.the_date,'mm-DD-YYYY') Date, SUM(sf7.unit_sales) UnitSales, SUM(sf7.store_sales) StoreSales, SUM(sf7.store_cost) StoreCost FROM time_by_day t INNER JOIN sales_fact_1997 sf7 ON t.time_id=sf7.time_id WHERE to_char(t.the_date,'YYYY-MM-DD')>='2012-01-01' AND to_char(t.the_date,'YYYY-MM-DD')<='2012-01-07' GROUP BY t.the_date ORDER BY t.the_date ```
In SQL Server, you would typically use the `convert()` function, which is not nearly as convenient as `to_char()`. For your query, you only need it in the `select` clause: ``` SELECT convert(varchar(10), t.the_date, 110) as Date, SUM(sf7.unit_sales) as UnitSales, SUM(sf7.store_sales) as StoreSales, SUM(sf7.store_cost) as StoreCost FROM time_by_day t INNER JOIN sales_fact_1997 sf7 ON t.time_id = sf7.time_id WHERE t.the_date >='2012-01-01' AND t.the_date <= '2012-01-07' GROUP BY t.the_date ORDER BY t.the_date; ``` SQL Server will *normally* treat the ISO standard YYYY-MM-DD as a date and do the conversion automatically. There is a particular internationalization setting that treats this as YYYY-DD-MM, alas. The following should be interpreted correctly, regardless of such settings (although I would use the above form): ``` WHERE t.the_date >= cast('20120101' as date) AND t.the_date <= cast('20120107' as date) ``` EDIT: In MySQL, you would just use `date_format()`: ``` SELECT date_format(t.the_date, '%m-%d-%Y') as Date, SUM(sf7.unit_sales) as UnitSales, SUM(sf7.store_sales) as StoreSales, SUM(sf7.store_cost) as StoreCost FROM time_by_day t INNER JOIN sales_fact_1997 sf7 ON t.time_id = sf7.time_id WHERE t.the_date >= date('2012-01-01') AND t.the_date <= date('2012-01-07') GROUP BY t.the_date ORDER BY t.the_date; ```
Based on Gordon's approach, but using CHAR(10) instead of VARCHAR(10), since a date in this format is always returned with a length of 10... ``` SELECT convert(char(10), t.the_date, 110) as [Date], SUM(sf7.unit_sales) as UnitSales, SUM(sf7.store_sales) as StoreSales, SUM(sf7.store_cost) as StoreCost FROM time_by_day t INNER JOIN sales_fact_1997 sf7 ON t.time_id = sf7.time_id WHERE t.the_date >='20120101' AND t.the_date <= '20120107' GROUP BY t.the_date ORDER BY t.the_date; ``` Edit: also changed the date format in the WHERE clause to be ISO compliant and therefore not affected by the setting of DATEFORMAT.
How to use to_char function functionality in MySQL
[ "", "mysql", "sql", "oracle", "mysql-workbench", "sqlyog", "" ]
Given a 100,000-record table, which of these two would be the faster search? ``` create table #SearchSet(Item varchar(10)) insert into #SearchSet(Item) values ('AA'),('BB'),('CC'),('DD22'),('AC123'),('456AA'),('125AA15'),('A154A'),('DDSSAA'),('55KKAA') select t1.* from Table1 as t1 join #SearchSet as s on t1.Column1 = s.Item drop table #SearchSet ``` or this ``` select * from Table1 where ColumnA = 'AA' or ColumnA = 'BB' or ColumnA = 'CC' or ColumnA = 'DD22' or ColumnA = 'AC123' or ColumnA = '456AA' or ColumnA = '125AA15' or ColumnA = 'A154A' or ColumnA = 'DDSSAA' or ColumnA = '55KKAA' ```
The only way to really answer a performance question is to test the different versions in your environment. You have so few values for the comparison (just 10) that I would expect the `where` clause to work better. Plus, as your query is written, it cannot take advantage of an index, which could really help the first version. If you add an index, then the first might be competitive with the second. Try doing this: ``` create table #SearchSet(Item varchar(10) primary key); insert into #SearchSet(Item) values ('AA'),('BB'),('CC'),('DD22'),('AC123'),('456AA'),('125AA15'),('A154A'),('DDSSAA'),('55KKAA'); ``` If you were comparing hundreds or thousands of values, I would expect the temporary table with a primary key to be faster.
On a small list like this, there is likely not much difference. On a bigger list the join should be faster if you ``` CREATE UNIQUE INDEX IX_SearchSet ON #searchset (item) ``` Either way, the real winner is to: ``` CREATE INDEX IX_Table1 ON Table1 (ColumnA) ``` If you're on SQL Server 2008 or later and the results are sparse, a filtered index can help: ``` CREATE INDEX IX_Table1 ON Table1 (ColumnA) WHERE ColumnA in ('val1', 'val2', ...) ```
Join or filter in SQL
[ "", "sql", "sql-server", "t-sql", "" ]
In my application I have models: Car: ``` class Car < ActiveRecord::Base has_one :brand, through: :car_configuration has_one :model, through: :car_configuration has_one :body_style, through: :car_configuration has_one :car_class, through: :car_configuration belongs_to :car_configuration end ``` CarConfiguration: ``` class CarConfiguration < ActiveRecord::Base belongs_to :model, class_name: 'CarModel' belongs_to :body_style, class_name: 'CarBodyStyle' belongs_to :car_class has_one :brand, through: :model has_many :cars, dependent: :destroy has_many :colors, dependent: :destroy def brand_id brand.try(:id) end end ``` and CarBrand: ``` class CarBrand < ActiveRecord::Base default_scope { order(name: :asc) } validates :name, presence: true has_many :models, class_name: 'CarModel', foreign_key: 'brand_id' end ``` Now I want to get all Cars whose CarConfiguration brand id is, for example, 1. I tried something like this but it does not work: ``` joins(:car_configuration).where(car_configurations: {brand_id: 1}) ``` Thanks in advance for any help.
``` def self.with_proper_brand(car_brands_ids) ids = Array(car_brands_ids).reject(&:blank?) car_ids = Car.joins(:car_configuration).all. map{|x| x.id if ids.include?(x.brand.id.to_s)}.reject(&:blank?).uniq return where(nil) if ids.empty? where(id: car_ids) end ``` That was the answer.
**Associations** I don't think you can have a `belongs_to :through` association ([belongs\_to through associations](https://stackoverflow.com/questions/4021322/belongs-to-through-associations)), and besides, your models look really bloated to me. I'd look at using a [`has_many :through` association](http://guides.rubyonrails.org/association_basics.html#the-has-many-through-association): ``` #app/models/brand.rb class Brand < ActiveRecord::Base has_many :cars end #app/models/car.rb class Car < ActiveRecord::Base #fields id | brand_id | name | other | car | attributes | created_at | updated_at belongs_to :brand has_many :configurations has_many :models, through: :configurations has_many :colors, through: :configurations has_many :body_styles, through: :configurations end #app/models/configuration.rb class Configuration < ActiveRecord::Base #id | car_id | body_style_id | model_id | detailed | configurations | created_at | updated_at belongs_to :car belongs_to :body_style belongs_to :model end #app/models/body_style.rb class BodyStyle < ActiveRecord::Base #fields id | body | style | options | created_at | updated_at has_many :configurations has_many :cars, through: :configurations end etc ``` This will allow you to perform the following: ``` @car = Car.find 1 @car.colors.each do |color| puts color end ``` --- **OOP** Something else to consider is the [`object-oriented`](http://en.wikipedia.org/wiki/Object-oriented_programming) nature of Ruby (& Rails).
Object-oriented programming is not just a fancy buzzword - it's a core infrastructure element of your applications, and as such, you need to consider constructing your models *around* objects: ![enter image description here](https://i.stack.imgur.com/SrC55.png) This means that when you're creating your models to call the likes of `Car` objects, the `associations` you create should directly complement that particular object. Your associations currently don't do this - they are haphazard and mis-constructed. I would recommend examining which objects you wish to populate / create, and then building your application around them.
Rails has_one and belongs_to join
[ "", "sql", "ruby-on-rails", "ruby", "join", "" ]
I have two tables * The first table is `CUSTOMERS` with columns `CustomerId, CustomerName` * The second table is `LICENSES` with columns `LicenseId, Customer` The column `Customer` in the second table is the `CustomerId` from the first table. I want to create a stored procedure that inserts values into table 2 ``` Insert into Licenses (Customer) Values(CustomerId) ``` How can I get this data from the other table? Thanks in advance for any help
This looks to me like simply a syntax question - I think what you want is ``` INSERT INTO Licenses (Customer) SELECT CustomerId FROM customers where ... ```
``` CREATE PROCEDURE uspInsertToTable @CustomerID INT OUTPUT, @CustomerName VARCHAR(50), @LicenseID INT AS BEGIN BEGIN TRY BEGIN TRANSACTION; INSERT INTO CUSTOMERS (CustomerName) VALUES (@CustomerName); SET @CustomerID = SCOPE_IDENTITY(); INSERT INTO LICENSES (LicenseId, Customer) VALUES (@LicenseID, @CustomerID); COMMIT TRANSACTION; END TRY BEGIN CATCH IF @@TRANCOUNT > 0 BEGIN ROLLBACK TRANSACTION; END END CATCH; END; ``` If CustomerID is an identity column, this should work. :)
Stored procedure insert into function
[ "", "sql", "join", "primary-key", "" ]
I am trying to find all sale\_id's that have an entry in sales\_item\_taxes table, but do NOT have a corresponding entry in the sales\_items table. ``` mysql> describe phppos_sales_items_taxes; +------------+---------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +------------+---------------+------+-----+---------+-------+ | sale_id | int(10) | NO | PRI | NULL | | | item_id | int(10) | NO | PRI | NULL | | | line | int(3) | NO | PRI | 0 | | | name | varchar(255) | NO | PRI | NULL | | | percent | decimal(15,3) | NO | PRI | NULL | | | cumulative | int(1) | NO | | 0 | | +------------+---------------+------+-----+---------+-------+ 6 rows in set (0.01 sec) mysql> describe phppos_sales_items; +--------------------+----------------+------+-----+--------------+-------+ | Field | Type | Null | Key | Default | Extra | +--------------------+----------------+------+-----+--------------+-------+ | sale_id | int(10) | NO | PRI | 0 | | | item_id | int(10) | NO | PRI | 0 | | | description | varchar(255) | YES | | NULL | | | serialnumber | varchar(255) | YES | | NULL | | | line | int(3) | NO | PRI | 0 | | | quantity_purchased | decimal(23,10) | NO | | 0.0000000000 | | | item_cost_price | decimal(23,10) | NO | | NULL | | | item_unit_price | decimal(23,10) | NO | | NULL | | | discount_percent | int(11) | NO | | 0 | | +--------------------+----------------+------+-----+--------------+-------+ 9 rows in set (0.00 sec) mysql> ``` Proposed Query: ``` SELECT DISTINCT sale_id FROM phppos_sales_items_taxes WHERE item_id NOT IN (SELECT item_id FROM phppos_sales_items WHERE sale_id = phppos_sales_items_taxes.sale_id) ``` The part I am confused by is the subquery. The query seems to work as intended but I am not understanding the subquery part. How does it look for each sale? 
For example if I have the following data: ``` mysql> select * from phppos_sales; +---------------------+-------------+-------------+---------+-------------------------+---------+--------------------+-----------+-----------+------------+---------+-----------+-----------------------+-------------+---------+ | sale_time | customer_id | employee_id | comment | show_comment_on_receipt | sale_id | payment_type | cc_ref_no | auth_code | deleted_by | deleted | suspended | store_account_payment | location_id | tier_id | +---------------------+-------------+-------------+---------+-------------------------+---------+--------------------+-----------+-----------+------------+---------+-----------+-----------------------+-------------+---------+ | 2014-08-09 17:53:38 | NULL | 1 | | 0 | 1 | Cash: $12.96<br /> | | | NULL | 0 | 0 | 0 | 1 | NULL | | 2014-08-09 17:56:59 | NULL | 1 | | 0 | 2 | Cash: $12.96<br /> | | | NULL | 0 | 0 | 0 | 1 | NULL | +---------------------+-------------+-------------+---------+-------------------------+---------+--------------------+-----------+-----------+------------+---------+-----------+-----------------------+-------------+---------+ mysql> select * from phppos_sales_items; +---------+---------+-------------+--------------+------+--------------------+-----------------+-----------------+------------------+ | sale_id | item_id | description | serialnumber | line | quantity_purchased | item_cost_price | item_unit_price | discount_percent | +---------+---------+-------------+--------------+------+--------------------+-----------------+-----------------+------------------+ | 2 | 1 | | | 1 | 1.0000000000 | 10.0000000000 | 12.0000000000 | 0 | +---------+---------+-------------+--------------+------+--------------------+-----------------+-----------------+------------------+ 1 row in set (0.00 sec) mysql> select * from phppos_sales_items_taxes; +---------+---------+------+-----------+---------+------------+ | sale_id | item_id | line | name | percent | cumulative | +---------+---------+------+-----------+---------+------------+ | 1 | 1 | 1 | Sales Tax | 8.000 | 0 | | 2 | 1 | 1 | Sales Tax | 8.000 | 0 | +---------+---------+------+-----------+---------+------------+ 2 rows in set (0.00 sec) ``` When I run the query below it does find sale\_id 1. But how does the subquery know to filter correctly? I guess I am not understanding how the subquery works. ``` mysql> SELECT DISTINCT sale_id -> FROM phppos_sales_items_taxes -> WHERE item_id NOT IN -> (SELECT item_id FROM phppos_sales_items WHERE sale_id = phppos_sales_items_taxes.sale_id) -> ; +---------+ | sale_id | +---------+ | 1 | +---------+ 1 row in set (0.00 sec) ```
Duffy356's link to the SQL joins article is good, but seeing it with your own data can make more sense... First, your query as written (and obviously for learning) will be very expensive for the engine. The way it knows what to include is that it is doing a correlated sub-query -- meaning that FOR every record IN the sales\_items\_taxes table it is running a query against the sales\_items table, which returns every item possible for said sale\_id. Then it comes back to the main query and compares it to the sales\_items\_taxes table. If it does NOT find it, it allows the sale\_id to be included in the result set. Then it goes to the next record in the sales\_items\_taxes table. (Your query reformatted for better readability) ``` SELECT DISTINCT sale_id FROM phppos_sales_items_taxes WHERE item_id NOT IN ( SELECT item_id FROM phppos_sales_items WHERE sale_id = phppos_sales_items_taxes.sale_id) ``` Now, think about this. You have 1 sale with 100 items. It is running the correlated sub-query 100 times. Now do this with 1,000 sales id entries, each with however many items, and it gets expensive quickly. A better alternative is to take advantage of the database and do a left join. The indexes work directly with the LEFT JOIN (or inner join) and are optimized by the engine. Also, notice I am using "aliases" for the tables and qualifying the aliases for readability. Starting with your sales items taxes table (the one you are looking for extra entries in) as the basis, left-join the sales items table on the two key components: sale\_id and item\_id. I would suggest that each table have an index ON (sale\_id, item\_id) to match the join condition here.
``` SELECT DISTINCT sti.sale_id FROM phppos_sales_items_taxes sti LEFT JOIN phppos_sales_items si ON sti.sale_id = si.sale_id AND sti.item_id = si.item_id WHERE si.sale_id IS NULL ``` So, from here, think of it that each table is lined-up side-by-side with each other and all you are getting are those on the left side (sale items taxes) that DO NOT have an entry on the right side (sales\_items).
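To make the equivalence concrete, here is a small sketch (editor's illustration; SQLite in memory, tables reduced to the two key columns) showing the correlated `NOT IN` and the `LEFT JOIN ... IS NULL` anti-join returning the same sale:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE taxes (sale_id INT, item_id INT);
CREATE TABLE items (sale_id INT, item_id INT);
INSERT INTO taxes VALUES (1,1),(2,1);   -- sale 1 has a tax row but no item row
INSERT INTO items VALUES (2,1);
""")

# Correlated sub-query: runs once per row of taxes.
correlated = conn.execute("""
    SELECT DISTINCT sale_id FROM taxes t
    WHERE item_id NOT IN (SELECT item_id FROM items WHERE sale_id = t.sale_id)
""").fetchall()

# Anti-join: unmatched left rows surface as NULLs on the right side.
anti_join = conn.execute("""
    SELECT DISTINCT t.sale_id FROM taxes t
    LEFT JOIN items i ON t.sale_id = i.sale_id AND t.item_id = i.item_id
    WHERE i.sale_id IS NULL
""").fetchall()

print(correlated, anti_join)  # [(1,)] [(1,)]
```

One caveat not covered by this sketch: `NOT IN` behaves differently from the anti-join when the sub-query can return NULLs.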
Your problem can be fixed by using joins. Read the following article about [SQL-Joins](http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins) and think about your problem -> you will be able to fix it ;) The IN-clause is not the best solution, because some databases have limits on the number of arguments contained in it.
mysql subquery understanding
[ "", "mysql", "sql", "" ]
I have a SQL query ``` SELECT TABLE_SCHEMA + TABLE_NAME AS ColumnZ FROM information_schema.tables ``` I want the result to be `table_Schema.table_name`. Help me please!
Try this: ``` SELECT TABLE_SCHEMA + '.' + TABLE_NAME AS ColumnZ FROM information_schema.tables ```
This code should work for you; try this: ``` SELECT Title, FirstName, lastName, ISNULL(Title,'') + ' ' + ISNULL(FirstName,'') + ' ' + ISNULL(LastName,'') as FullName FROM Customer ```
SQL Server, how to merge two columns into one column?
[ "", "sql", "sql-server", "calculated-columns", "" ]
I have a table with 31,483 records in it. I would like to search this table using both a LIKE and a NOT LIKE operator. The user can select the Contains or the Does Not Contain option. When I do a LIKE, the where clause is as follows ``` WHERE Model LIKE '%test%' ``` which filters the result set down to 1345 records - so all is fine and dandy. **HOWEVER** I expected that running a NOT LIKE on the table would result in *n records where n = totalRowCount - LikeResultSet*, which gives an expected record count of 30138 for a NOT LIKE operation. I ran this WHERE clause: ``` WHERE Model NOT LIKE '%test%' ``` However it returned 30526 records. I am assuming there is some intricacy of the NOT LIKE operator I am not realising. So my question: **Why aren't I receiving a record count of TotalRows - LikeResults?** I am using SQL Server Compact 4.0, C#, Visual Studio 2012
Check if some Model values are *null*, e.g. for the simple artificial table ``` with data as ( select 'test' as model union all select 'abc' union all select 'def' union all select null -- <- That's vital ) ``` you'll get ``` -- 4 items select count(1) from data -- 1 item: 'test' select count(1) from data where model like '%test%' -- 2 items: 'abc' and 'def' select count(1) from data where model not like '%test%' ``` And so *1 + 2 != 4*
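The NULL accounting described above can be reproduced with a quick sketch (editor's illustration, SQLite in memory; the same three-valued logic applies in SQL Server Compact):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (model TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("test",), ("abc",), ("def",), (None,)])

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
like = conn.execute("SELECT COUNT(*) FROM t WHERE model LIKE '%test%'").fetchone()[0]
not_like = conn.execute("SELECT COUNT(*) FROM t WHERE model NOT LIKE '%test%'").fetchone()[0]

# The NULL row satisfies neither predicate, so the counts don't add up.
print(total, like, not_like)  # 4 1 2
```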
Your `WHERE` clause should be changed to `WHERE ISNULL(Model,'') LIKE '%test%'` to replace NULLs with empty strings.
Not Like Operator Weird Result Count
[ "", "sql", "sql-server-ce-4", "" ]
This is **table1**: ![enter image description here](https://i.stack.imgur.com/O6ujS.png) This is **table2**: ![enter image description here](https://i.stack.imgur.com/mBzvG.png) I wanted to see this result: ![enter image description here](https://i.stack.imgur.com/42MQx.png) I wrote this query: ``` select title, value from table1 left outer join table2 on table1.id = table2.id where category="good" ``` But it gives me this result: ![enter image description here](https://i.stack.imgur.com/1N7BP.png) So, what query should I use to get result with title c coming with ""? (empty string)
Move the predicate on `table2.category` to the `ON` clause, rather than the `WHERE` clause. (In the WHERE clause, that negates the "outerness" of the LEFT JOIN operation, since any rows from table1 with no matching row from table2 would have values of NULL for the table2 columns. Checking for a non-null value excludes all the "unmatched" rows, rendering the `LEFT JOIN` equivalent to an `INNER JOIN`.) One way to return the specified resultset: ``` SELECT t.title , s.value FROM table1 t LEFT JOIN table2 s ON s.id = t.id AND s.category = "good" ```
This should work: ``` select title, value from table1 left outer join table2 on table1.id = table2.id and category="good" ``` Ann the `and category="good"` to the `on` clause. Not to the where clause.
Left outer join is not working on column with repeated id
[ "", "mysql", "sql", "" ]
I have data which looks like: ``` data.csv company year value A yz x ``` I want to grab all columns when value is < x and year is 2005, 2006, 2007, 2008 etc. ``` SELECT * FROM data WHERE value < "X" AND year ="2005" AND year="2006" AND year="2007" AND year="2008"; ``` The above returns nothing. So essentially: give me all companies for which value has been below X for the last YZ years.
You have a condition that can never be true. It can't be 2005 and 2006 at the same time. Try `in`: ``` SELECT * FROM data WHERE value < "X" AND year in ("2005", "2006", "2007", "2008") ; ``` The `in` checks whether `year` is one of the values following. Or `or`: ``` SELECT * FROM data WHERE value < "X" AND ( year = "2005" or year = "2006" or year = "2007" or year = "2008" ) ; ``` The `or` just checks whether the left side *or* the right side condition is true.
This should work for you: ``` SELECT * FROM data WHERE value < "X" AND YEAR IN (2005, 2006, 2007, 2008) ```
select columns based on multiple values
[ "", "sql", "" ]
I would like to write a routine which will allow me to take dated events (records) in a table which span across a set time frame and, in the cases where no event took place for a specific day, create an event duplicating the most recent prior record where an event DID take place. For example: If on September 4 Field 1 = X, Field 2 = Y and Field 3 = Z and then nothing took place until September 8 where Field 1 = Y, Field 2 = Z and Field 3 = X, the routine would create records in the table to account for the 3 days where nothing took place and ultimately return a table looking like: Sept 4: X - Y - Z Sept 5: X - Y - Z Sept 6: X - Y - Z Sept 7: X - Y - Z Sept 8: Y - Z - X Unfortunately, my level of programming knowledge, although good, does not allow me to logically conclude a solution in this case. My gut feeling tells me that a loop could be the correct solution here but I am still not sure exactly how. I just need a bit of guidance to get me started.
Here you go. ``` Sub FillBlanks() Dim rsEvents As Recordset Dim EventDate As Date Dim Fld1 As String Dim Fld2 As String Dim Fld3 As String Dim SQL As String Set rsEvents = CurrentDb.OpenRecordset("SELECT * FROM tblevents ORDER BY EventDate") 'Save the current date & info EventDate = rsEvents("EventDate") Fld1 = rsEvents("Field1") Fld2 = rsEvents("Field2") Fld3 = rsEvents("Field3") rsEvents.MoveNext On Error Resume Next Do ' Loop through each blank date Do While EventDate < rsEvents("EventDate") - 1 'for all dates up to, but not including the next date EventDate = EventDate + 1 'advance date by 1 day rsEvents.AddNew rsEvents("EventDate") = EventDate rsEvents("Field1") = Fld1 rsEvents("Field2") = Fld2 rsEvents("Field3") = Fld3 rsEvents.Update Loop ' get new current date & info EventDate = rsEvents("EventDate") Fld1 = rsEvents("Field1") Fld2 = rsEvents("Field2") Fld3 = rsEvents("Field3") rsEvents.MoveNext ' new records are placed on the end of the recordset, ' so if we hit on older date, we know it's a recent insert and quit Loop Until rsEvents.EOF Or EventDate > rsEvents("EventDate") End Sub ```
With no details about your specifics (table schema, available language options etc.), I guess that you just need the algorithm to pick up on. So here's a quick algorithm with no safeguards. ``` properdata = "select * from data where eventHasTakenPlace=true"; wrongdata = "select * from data where eventHasTakenPlace=false"; for each wrongRecord in wrongdata { exampleRecord = select a.value1, a.value2,...,a.date from properdata as a inner join (select id,max(date) from properdata group by id having date<wrongRecord.date ) as b on a.id=b.id minDate = exampleRecord.date; maxDate = wrongRecord.date -1day; --use proper date difference function as per your language of choice. for i=minDate to maxDate step 1day{ dynamicsql="INSERT INTO TABLE X(Value1,Value2....,date) VALUES (exampleRecord.Value1, exampleRecord.Value2,...i); exec dynamicsql; } } ```
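The forward-fill idea shared by both answers can also be sketched language-neutrally; here is a minimal Python version (an editor's illustration, not the asker's Access/VBA environment) using the September 4-8 example from the question:

```python
from datetime import date, timedelta

# Events keyed by date; gap days inherit the most recent prior record.
events = {date(2014, 9, 4): ("X", "Y", "Z"), date(2014, 9, 8): ("Y", "Z", "X")}
start, end = min(events), max(events)

filled, current = {}, None
d = start
while d <= end:
    current = events.get(d, current)  # keep the last seen record on gap days
    filled[d] = current
    d += timedelta(days=1)

for day, rec in sorted(filled.items()):
    print(day, rec)
```

The loop walks the calendar one day at a time and only replaces `current` when a real event exists for that day, which is exactly what the recordset loop above does with `EventDate + 1`.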
Writing a routine to create sequential records
[ "", "sql", "ms-access", "vba", "ms-access-2010", "" ]
SQL Fiddle : <http://sqlfiddle.com/#!2/49db7/2> ``` create table EmpDetails ( emp_code int, e_type varchar(10) ); insert into EmpDetails values (100,'A'); insert into EmpDetails values (101,'D'); insert into EmpDetails values (102,'A'); insert into EmpDetails values (103,'D'); create table QDetails ( id int, emp_code int, dn_num int ); insert into QDetails values (1,100,NULL); insert into QDetails values (2,101,4343); insert into QDetails values (3,101,4343); insert into QDetails values (4,103,NULL); insert into QDetails values (5,103,NULL); insert into QDetails values (6,100,NULL); select * from EmpDetails select * from QDetails ``` -- expected result ``` 1 100 NULL 6 100 NULL 2 101 4343 3 101 4343 ``` --When e\_type = A it should include rows from QDetails whether dn\_num is null or not null --but when e\_type = D then from QDetails it should include only NOT NULL values and ignore nulls ``` select e.emp_code, e.e_type, q.dn_num from empdetails e left join qdetails q on e.emp_code = q.emp_code and (e.e_type = 'D' and q.dn_num is not null) ``` --The above query includes 103 D NULL, which I don't need, and excludes 6 100 NULL, which I need.
I am not sure why you are using left join here. You can get the results you specified with inner join ``` select e.emp_code ,e.e_type ,q.dn_num from empdetails e inner join qdetails q on e.emp_code = q.emp_code where e.e_type = 'A' or (e.e_type = 'D' and q.dn_num is not null) order by e.emp_code ,e.e_type ``` The left join would be used if you also wanted to list records from empdetails table that have no match in qdetails
Your problem is your `q.dn_num is not null` condition: placed in the join, it is specifically excluding records that you state you want. Moving the filter into the `WHERE` clause should fix it: ``` select e.emp_code, e.e_type, q.dn_num from empdetails e left join qdetails q on e.emp_code = q.emp_code WHERE (e.e_type = 'D' and q.dn_num is not null) OR e.e_type = 'A' ``` Additionally, it is a general convention to use your join to specify only the join criteria and your where clause to filter (there are reasons why you may not want to do this, but it depends on your query). Writing your queries as above may make them easier to read later on.
Wrong behaviour with left outer join?
[ "", "sql", "sql-server", "left-join", "" ]
I'm looking for a way to pass the Rails/Devise variable "current\_user" into a SQL query. I have an app with two models, Users and Tips. Users can friend other users and send tips to each other. I'm trying to display a user's friend list ordered by the number of tips that the user has sent to each friend, so that the friend to whom the user has sent the most tips shows up at the top of the friend list, and so on. I've read that RoR isn't equipped to handle this kind of query easily, so I've been able to put together the following SQL query, which works fine: ``` def friend_list @friends = User.find_by_sql("SELECT users.*, COUNT(tips.id) AS c FROM users, tips WHERE tips.recipient_id = users.id AND tips.user_id = 3 GROUP BY users.id ORDER BY c DESC") end ``` The only problem is, I have manually entered a user.id there ("3") when ideally that "3" would be replaced by "current\_user" so that each time a user loads this list they get their friend list ranked by who they themselves have sent tips to, rather than everyone just seeing user 3's ranking. In an ideal world this would look something like: ``` AND tips.user_id = current_user ``` but that doesn't work. How can I pass a variable into this query so that it is different for each person viewing? UPDATE: User model (excerpt): ``` has_many :tips has_many :received_tips, :class_name => "Tip", :foreign_key => "recipient_id" ``` Tip model (excerpt): ``` belongs_to :user belongs_to :recipient, :class_name => "User" ```
You cannot access current\_user in models if using `devise`... though there are a few good ways to do it ``` ##method one---in your controller @users=User.find(current_user.id) ##method two---in your controller @users=User.get_user_details(current_user.id) ##in model def self.get_user_details(current_user_id) User.find(current_user_id) end ###you can also use it this way in a controller @all_videos = Video.where("videos.user_id !=?",current_user.id) .........so this can be your solution............ ##in controller, or pass current_user in and use it in your model def friend_list @friends = User.joins(:tips).select("users.*, count(tips.id) as c").where("tips.user_id = ?",current_user.id).group("users.id").order("c DESC") ##or @friends = User.all(:joins => :tips, :select => "users.*, count(tips.id) as tips_count", :group => "users.id",:order=>"tips_count DESC") end ```
You can try this ``` def friend_list @friends = User.find_by_sql(["SELECT users.*, COUNT(tips.id) AS c FROM users, tips WHERE tips.recipient_id = users.id AND tips.user_id = ? GROUP BY users.id ORDER BY c DESC", current_user.id]) end ``` Note that `find_by_sql` takes an array when you need bind parameters. Anyway, it is not that hard to do this query using active\_record.
Rails - Pass current_user into SQL query
[ "", "sql", "ruby-on-rails", "devise", "" ]
I have the following two tables: ``` Table "Center": CenterKey CenterName --------- ----------- Center1 CenterName1 Center2 CenterName2 Center3 CenterName3 Center4 CenterName4 Center5 CenterName5 Center6 CenterName6 Center7 CenterName7 Center8 CenterName8 Table "Log": CenterKey Date Value --------- -------- ----- Center1 6/1/2014 10 Center2 6/3/2014 20 Center1 7/2/2014 30 Center3 7/3/2014 40 Center4 7/5/2014 50 Center5 7/8/2014 60 Center6 8/3/2014 70 ``` I'm interested in create a view, say "MyView", that if I specify a date range, it will return the CenterNames whose CenterKey are not in the date range. For example, if I do ``` SELECT CenterName FROM MyView WHERE Date>='6/1/2014' AND Date <='6/30/2014' ``` I want this result: ``` CenterName ---------- CenterName3 CenterName4 CenterName5 CenterName6 CenterName7 CenterName8 ``` if I do ``` SELECT CenterName FROM MyView WHERE Date>='7/1/2014' AND Date <='7/31/2014' ``` I want this result: ``` CenterName ---------- CenterName2 CenterName6 CenterName7 CenterName8 ``` if I do ``` SELECT CenterName FROM MyView WHERE Date>='6/3/2014' AND Date <='7/5/2014' ``` I want this result: ``` CenterName5 CenterName6 CenterName7 CenterName8 ``` Can someone help me create MyView?
The following query should do what you want: ``` select c.centername from center c left outer join log l on l.centerkey = c.centerkey group by c.centername having sum(l.Date >= '2014-07-01' AND l.Date <= '2014-07-31') = 0; ``` I can't think of a way to incorporate it easily into a view with a `where` clause. The alternative formulation doesn't really help either: ``` select c.centername from center c where not exists (select 1 from log l where l.centerkey = c.centerkey and l.Date >= '2014-07-01' AND l.Date <= '2014-07-31' ); ``` EDIT: If you have a calendar table, you can do: ``` select c.centername, ca.dte from calendar ca cross join center c left outer join log l on l.centerkey = c.centerkey and ca.dte = l.date where l.date is null; ``` If you put this in a view with a `where`, you will get a separate row for each date in the range when the center is available.
## View with Parameter **Final SQL Query (<http://sqlfiddle.com/#!2/570cb8/1>):** ``` SELECT * FROM (SELECT @startd:='2014-6-1', @endd:='2014-6-30') p , MyView; ``` **View with Functions:** ``` create function startd() returns DATE DETERMINISTIC NO SQL return @startd; create function endd() returns DATE DETERMINISTIC NO SQL return @endd; create view MyView as select centername from center c where not exists ( select 1 from log l where l.centerkey = c.centerkey AND d between startd() AND endd() ); ``` **Reference** * [Can I create view with parameter in MySQL?](https://stackoverflow.com/questions/2281890/can-i-create-view-with-parameter-in-mysql)
select complement records in date range
[ "", "mysql", "sql", "" ]
I have a table that tracks changes in stocks through time for some stores and products. The value is the absolute stock, but we only insert a new row when a change in stock occurs. This design was to keep the table small, because it is expected to grow rapidly. This is an example schema and some test data: ``` CREATE TABLE stocks ( id serial NOT NULL, store_id integer NOT NULL, product_id integer NOT NULL, date date NOT NULL, value integer NOT NULL, CONSTRAINT stocks_pkey PRIMARY KEY (id), CONSTRAINT stocks_store_id_product_id_date_key UNIQUE (store_id, product_id, date) ); insert into stocks(store_id, product_id, date, value) values (1,10,'2013-01-05', 4), (1,10,'2013-01-09', 7), (1,10,'2013-01-11', 5), (1,11,'2013-01-05', 8), (2,10,'2013-01-04', 12), (2,11,'2012-12-04', 23); ``` I need to be able to determine the average stock between a start and end date, per product and store, but my problem is that a simple avg() doesn't take into account that the stock remains the same between changes. What I would like is something like this: ``` select s.store_id, s.product_id , special_avg(s.value) from stocks s where s.date between '2013-01-01' and '2013-01-15' group by s.store_id, s.product_id ``` with the result being something like this: ``` store_id product_id avg 1 10 3.6666666667 1 11 5.8666666667 2 10 9.6 2 11 23 ``` In order to use the SQL average function I would need to "propagate" forward in time the previous value for a store\_id and product\_id, until a new change occurs. Any ideas as how to achieve this?
The **special difficulty** here: you cannot just pick data points inside your time range, but have to consider the *latest* data point *before* the time range and the *earliest* data point *after* the time range additionally. This varies for every row and each data point may or may not exist. Requires a sophisticated query and makes it hard to use indexes. You can use [**range types**](https://www.postgresql.org/docs/current/functions-range.html) and [**operators**](https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE) (Postgres **9.2+**) to simplify calculations: ``` WITH input(a,b) AS (SELECT '2013-01-01'::date -- your time frame here , '2013-01-15'::date) -- inclusive borders SELECT store_id, product_id , sum(upper(days) - lower(days)) AS days_in_range , round(sum(value * (upper(days) - lower(days)))::numeric / (SELECT b-a+1 FROM input), 2) AS your_result , round(sum(value * (upper(days) - lower(days)))::numeric / sum(upper(days) - lower(days)), 2) AS my_result FROM ( SELECT store_id, product_id, value, s.day_range * x.day_range AS days FROM ( SELECT store_id, product_id, value , daterange (day, lead(day, 1, CURRENT_DATE) OVER (PARTITION BY store_id, product_id ORDER BY day)) AS day_range FROM stock ) s JOIN ( SELECT daterange(a, b+1) FROM input ) x(day_range) ON s.day_range && x.day_range ) sub GROUP BY 1, 2 ORDER BY 1, 2; ``` Note, I use the column name `day` instead of `date`. I never use basic type names as column names. In the subquery `sub` I fetch the day from the next row for each item with the window function `lead()`, using the built-in option to provide "today" as default where there is no next row. 
With this I form a `daterange` and match it against the input with the [**overlap operator `&&`**](https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE), computing the resulting date range with the [**intersection operator `*`**](https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE). All ranges here are with **exclusive** upper border. That's why I add one day to the input range. This way we can simply subtract `lower(range)` from `upper(range)` to get the number of days. I assume that "yesterday" is the latest day with reliable data. "Today" can still change in a real-life application. So I use `CURRENT_DATE` as exclusive upper bound for open ranges. I provide two results: * `your_result` agrees with your displayed results. You divide by the number of days in your date range unconditionally. For instance, if an item is only listed for the last day, you get a very low (misleading!) "average". * `my_result` computes the same or higher numbers. I divide by the *actual* number of days an item is listed. For instance, if an item is only listed for the last day, I return the listed value as average. To make sense of the difference I added the number of days the item was listed: `days_in_range` [fiddle](https://dbfiddle.uk/25X28bf_) Old [sqlfiddle](http://sqlfiddle.com/#!17/4fc00/1) ## Index and performance For this kind of data, old rows typically don't change. 
This would make an excellent case for a **materialized view**: ``` CREATE MATERIALIZED VIEW mv_stock AS SELECT store_id, product_id, value , daterange (day, lead(day, 1, now()::date) OVER (PARTITION BY store_id, product_id ORDER BY day)) AS day_range FROM stock; ``` Then you can add a [GiST index which supports the relevant operator `&&`](https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-INDEXING): ``` CREATE INDEX mv_stock_range_idx ON mv_stock USING gist (day_range); ``` In Postgres 9.6 or later possibly even a [covering index](https://www.postgresql.org/docs/current/indexes-index-only-scans.html): ``` CREATE INDEX mv_stock_range_plus_idx ON mv_stock USING gist (day_range) INCLUDE (store_id, product_id, value); ``` ### Big test case I ran a more realistic test with 200k rows. The query using the MV was ~ 6x as fast, which in turn was ~ 10x as fast as @Joop's query. (~ 50x in later test with Postgres 16.) Performance heavily depends on data distribution. An MV helps most with big tables and high frequency of entries. Also, if the table has additional columns not relevant to this query, the MV can be smaller. A question of cost vs. gain. I've put all solutions posted so far (and adapted) in a big fiddle to play with: [fiddle](https://dbfiddle.uk/ijMbwgwk) with big test case Old [sqlfiddle](http://sqlfiddle.com/#!17/91469/1)
This is rather quick&dirty: instead of doing the nasty interval arithmetic, just join to a calendar-table and sum them all. ``` WITH calendar(zdate) AS ( SELECT generate_series('2013-01-01'::date, '2013-01-15'::date, '1 day'::interval)::date ) SELECT st.store_id,st.product_id , SUM(st.zvalue) AS sval , COUNT(*) AS nval , (SUM(st.zvalue)::decimal(8,2) / COUNT(*) )::decimal(8,2) AS wval FROM calendar JOIN stocks st ON calendar.zdate >= st.zdate AND NOT EXISTS ( -- this calendar entry belongs to the next stocks entry SELECT * FROM stocks nx WHERE nx.store_id = st.store_id AND nx.product_id = st.product_id AND nx.zdate > st.zdate AND nx.zdate <= calendar.zdate ) GROUP BY st.store_id,st.product_id ORDER BY st.store_id,st.product_id ; ```
Average stock history table
[ "", "sql", "postgresql", "average", "date-range", "window-functions", "" ]
I have two tables that have a one-to-many relationship. Table A has an ID column and table B has an A\_ID column. I want my output to contain one row per record in table A. This row should have some values from A, but also some from one of the rows in B. For each record in table A there are 0, 1, or more records in B - I want to be able to sort these and return just one (e.g. the record with the largest column Z). ``` Table A - Artists ╔═════╦════════════════╦════════════════════════╗ ║ ID ║ Artist ║ Real name ║ ╠═════╬════════════════╬════════════════════════╣ ║ 383 ║ Bob Dylan ║ Robert Allen Zimmerman ║ ║ 395 ║ Marilyn Manson ║ Brian Hugh Warner ║ ║ 402 ║ David Bowie ║ David Robert Jones ║ ╚═════╩════════════════╩════════════════════════╝ Table B - Tracks ╔══════╦═══════════╦══════════════════════╦════════════════╗ ║ ID ║ Artist_ID ║ Track Name ║ Chart position ║ ╠══════╬═══════════╬══════════════════════╬════════════════╣ ║ 1458 ║ 383 ║ Maggie's Farm ║ 22 ║ ║ 1598 ║ 383 ║ Like a Rolling Stone ║ 4 ║ ║ 1674 ║ 395 ║ Personal Jesus ║ 13 ║ ║ 1782 ║ 383 ║ Lay Lady Lay ║ 5 ║ ╚══════╩═══════════╩══════════════════════╩════════════════╝ ``` Using the above tables as an example, we might want to return the following: ``` ╔════════════════╦══════════════════════╦══════════╗ ║ Artist ║ Top charting ║ Position ║ ╠════════════════╬══════════════════════╬══════════╣ ║ Bob Dylan ║ Like a Rolling Stone ║ 4 ║ ║ Marilyn Manson ║ Personal Jesus ║ 13 ║ ║ David Bowie ║ null ║ null ║ ╚════════════════╩══════════════════════╩══════════╝ ``` Note that the result set contains two columns from the child table, but only for the row with the minimum chart position for each Artist\_ID. (And since poor old Bowie doesn't have any tracks in table B, he gets null values in the columns from the child table.) If I wanted to return a single column from B, I'd use a subquery in the select statement. 
(I suppose I could include multiple subqueries in the select statement, but this would presumably result in a severe performance hit and doesn't seem very elegant.) Because I need to return multiple columns from the subquery, I could `JOIN` it instead and treat it like a table. Unfortunately, this seems to lead to duplicate rows in my output - i.e. when there are multiple child records, each gets a row in the output. What's the best way around this?
Since you want to return the TOP `chart_position` for each `artist`, I would suggest looking at using a [windowing function](http://msdn.microsoft.com/en-us/library/ms189461(v=sql.105).aspx) similar to [`row_number()`](http://msdn.microsoft.com/en-us/library/ms186734.aspx). This function will create a unique sequence for each `artist`. This sequence can be generated in a specific order; in your case you will order by the `chart_position`. You'll then return the row with the "TOP" value, i.e. where the sequence equals 1. The code would be similar to: ``` select artist, Top_Chart = track_name, Position = chart_position from ( select a.artist, t.track_name, t.chart_position, seq = row_number() over(partition by a.id order by t.chart_position) from artists a left join tracks t on a.id = t.artist_id ) d where seq = 1; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/626ad/3) If you aren't using a version of SQL Server that supports windowing functions (like SQL Server 2000), then you could write this query using a self-join to the `tracks` table with an aggregate function to get the highest charting track, then join to the artists. ``` select a.artist, Top_Chart = t.track_name, Position = t.chart_position from artists a left join ( select t.artist_id, t.track_name, t.chart_position from tracks t inner join ( select chart_position = min(chart_position), artist_id from tracks group by artist_id ) m on t.artist_id = m.artist_id and t.chart_position = m.chart_position ) t on a.id = t.artist_id; ``` See [SQL Fiddle with Demo](http://sqlfiddle.com/#!3/626ad/5). They both give the same result.
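SQLite (3.25+) also supports `row_number()`, so the first query's logic can be verified directly against the question's sample data. This is only a sketch to confirm the approach, not SQL Server syntax (the `alias = expr` form is T-SQL-only, so plain `AS` aliases are used here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE artists (id INTEGER PRIMARY KEY, artist TEXT)")
cur.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, artist_id INTEGER,"
            " track_name TEXT, chart_position INTEGER)")
cur.executemany("INSERT INTO artists VALUES (?, ?)", [
    (383, "Bob Dylan"), (395, "Marilyn Manson"), (402, "David Bowie")])
cur.executemany("INSERT INTO tracks VALUES (?, ?, ?, ?)", [
    (1458, 383, "Maggie's Farm", 22),
    (1598, 383, "Like a Rolling Stone", 4),
    (1674, 395, "Personal Jesus", 13),
    (1782, 383, "Lay Lady Lay", 5),
])

# Keep only the best-charting row per artist; LEFT JOIN keeps Bowie with NULLs
rows = cur.execute("""
    SELECT artist, track_name, chart_position
    FROM (SELECT a.artist, t.track_name, t.chart_position,
                 ROW_NUMBER() OVER (PARTITION BY a.id
                                    ORDER BY t.chart_position) AS seq
          FROM artists a
          LEFT JOIN tracks t ON a.id = t.artist_id) d
    WHERE seq = 1
    ORDER BY artist
""").fetchall()
print(rows)
```

This prints one row per artist, with Bob Dylan's "Like a Rolling Stone" (4), David Bowie's NULLs, and Marilyn Manson's "Personal Jesus" (13), matching the desired output in the question.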
This will work, you just need to create a subquery to get the row number for each track for a given artist and select the first one (i.e. where the row number is 1). Copy and paste the below and it will work, then you can swap things out/edit for your purposes: ``` DECLARE @TableA TABLE (ID INT, Artist VARCHAR(100), RealName VARCHAR(100)) DECLARE @TableB TABLE (ID INT, Artist_ID INT, TrackName VARCHAR(100), ChartPosition INT) INSERT INTO @TableA (ID, Artist, RealName) VALUES (383, 'Bob Dylan', 'Robert Allen Zimmerman'), (395, 'Marilyn Manson', 'Brian Hugh Warner'), (402, 'David Bowie','David Robert Jones') INSERT INTO @TableB (ID, Artist_ID, TrackName, ChartPosition) VALUES (1458, 383, 'Maggie''s Farm', 22), (1598, 383, 'Like a Rolling Stone', 4), (1674, 395, 'Personal Jesus', 13), (1782, 383, 'Lay Lady Lay', 5) SELECT [Artist], [Top Charting], [Position] FROM ( SELECT ta.Artist AS [Artist], tb.TrackName AS [Top Charting], tb.ChartPosition AS [Position], ROW_NUMBER() OVER (PARTITION BY ta.Artist ORDER BY tb.ChartPosition) [RowNum] FROM @TableA ta LEFT JOIN @TableB tb ON ta.ID = tb.Artist_ID ) Result WHERE Result.RowNum = 1 ```
One-to-many relationships: how to return several columns of a single row of the child table?
[ "", "sql", "sql-server", "database", "join", "subquery", "" ]
Consider the following two tables: ``` Messages MessageId (PK) Text Languages LanguageId (PK) Language ``` There is a many-to-many relationship between the Messages and Languages tables (i.e. each Message can have multiple Languages, and each Language can be shared by multiple Messages). To that end, I've added the following junction table: ``` Messages_Languages MessageId (PK)(FK) LanguageId (PK)(FK) ``` However, it seems somewhat unnecessary, as the Language is fundamentally a property of the message. What is the advantage of using the Messages_Languages table above instead of just adding the LanguageId as a foreign key to the Messages table? (seen below) ``` Messages MessageId (PK) LanguageId (PK)(FK) Text ``` It seems to accomplish the same thing with one less table.
> It seems to accomplish the same thing... No. The junction table models many-to-many, while FK models one-to-many. So the question is not which is better, but which do you *need*?
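To make the difference concrete, here is a small runnable sketch using Python's built-in SQLite. The schema names come from the question; the sample message and ids are made up for illustration. One message legitimately carries two languages, which the junction table represents but a single `LanguageId` column on `Messages` cannot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Messages (MessageId INTEGER PRIMARY KEY, Text TEXT);
    CREATE TABLE Languages (LanguageId INTEGER PRIMARY KEY, Language TEXT);
    CREATE TABLE Messages_Languages (
        MessageId INTEGER REFERENCES Messages,
        LanguageId INTEGER REFERENCES Languages,
        PRIMARY KEY (MessageId, LanguageId)
    );
    INSERT INTO Messages VALUES (1, 'Hello / Bonjour');
    INSERT INTO Languages VALUES (10, 'English'), (20, 'French');
    -- one message, two languages: needs two junction rows
    INSERT INTO Messages_Languages VALUES (1, 10), (1, 20);
""")

rows = cur.execute("""
    SELECT l.Language
    FROM Messages_Languages ml
    JOIN Languages l ON l.LanguageId = ml.LanguageId
    WHERE ml.MessageId = 1
    ORDER BY l.Language
""").fetchall()
print([r[0] for r in rows])  # ['English', 'French']
```

With a single `LanguageId` foreign key on `Messages`, you would have to pick exactly one of the two languages (or duplicate the message row), which is the one-to-many restriction the answer describes.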
This is a very good simple example. Based on your simple example it is better to add LanguageID as a property to the table as you are correct: it is really just a property. Presumably there is a fixed number of languages and the users of your application are not adding new languages as they use your application. Whether this holds true as you further develop the application is another question. Perhaps it will make sense to have a separate table if you add the concept of a message version or some other additional column.
Many-to-many relationship simplification
[ "", "sql", "sql-server", "many-to-many", "data-modeling", "" ]
I am extracting the recipient list from a column in a table and I am running the stored procedure `sp_send_dbmail` to send the emails to these recipients. There are a total of around 200 recipients. Ironically, only a few of the mails are delivered even though the message that I am getting is **Mail queued**. The database mail is correctly configured and I am using the `Public` database profile. When I check the `msdb.dbo.sysmail_mailitems` table, as expected the value in the `sent_status` column is 1 for the recipients to whom the mail was delivered, and for the rest of them the value is either 0 or 2. I am completely sure that the recipient list is 100% correct. Do we have any workaround to resolve this issue? Below is the code that I am running: ``` CREATE procedure [dbo].[sp_dataRefreshNotification] AS BEGIN DECLARE @ToMail VARCHAR(20) DECLARE @Body1 VARCHAR(MAX) = 'Dear User, Data has been refreshed. Regards, IT Support Team Note: This is an auto generated e-mail, please do not reply this mail. ' DECLARE notification_cursor CURSOR FOR SELECT DISTINCT RecipientAddress FROM dbo.RecipientAddressList OPEN notification_cursor FETCH NEXT FROM notification_cursor INTO @ToMail WHILE @@FETCH_STATUS = 0 BEGIN EXEC msdb.dbo.sp_send_dbmail @profile_name ='aaaa', @Recipients = @ToMail, @Subject = 'Required Data', @Body = @Body1 FETCH NEXT FROM notification_cursor INTO @ToMail END CLOSE notification_cursor DEALLOCATE notification_cursor END ```
Use a `WAITFOR DELAY` statement, which pauses between each iteration of your cursor so the sends are not all queued at once. ``` CREATE procedure [dbo].[sp_dataRefreshNotification] AS BEGIN DECLARE @ToMail VARCHAR(20) DECLARE @Body1 VARCHAR(MAX) = 'Dear User, Data has been refreshed. Regards, IT Support Team Note: This is an auto generated e-mail, please do not reply this mail. ' DECLARE notification_cursor CURSOR FOR SELECT DISTINCT RecipientAddress FROM dbo.RecipientAddressList OPEN notification_cursor FETCH NEXT FROM notification_cursor INTO @ToMail WHILE @@FETCH_STATUS = 0 BEGIN WAITFOR DELAY '000:00:10' EXEC msdb.dbo.sp_send_dbmail @profile_name ='aaaa', @Recipients = @ToMail, @Subject = 'Required Data', @Body = @Body1 FETCH NEXT FROM notification_cursor INTO @ToMail END CLOSE notification_cursor DEALLOCATE notification_cursor END ```
Ouch, you do not need a cursor here; the same thing can be done with a single SELECT query, see below: ``` CREATE PROCEDURE [dbo].[sp_dataRefreshNotification] AS BEGIN SET NOCOUNT ON; DECLARE @ToMail VARCHAR(MAX); DECLARE @Body1 VARCHAR(MAX); SET @Body1 = 'Dear User, ' + CHAR(10) + 'Data has been refreshed. ' + CHAR(10) + 'Regards, ' + CHAR(10) + 'IT Support Team ' + CHAR(10) + CHAR(10) + 'Note: This is an auto generated e-mail, please do not reply this mail. ' + CHAR(10) SELECT @ToMail = COALESCE(@ToMail+';' ,'') + RecipientAddress FROM (SELECT DISTINCT RecipientAddress FROM dbo.RecipientAddressList) t EXEC msdb.dbo.sp_send_dbmail @profile_name ='aaaa' ,@Recipients = @ToMail ,@Subject = 'Required Data' ,@Body = @Body1 END ```
Only few mails getting delivered using sp_send_dbmail in SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "t-sql", "" ]
I am trying to search the WordPress posts in my database that match a string. However, I want to get the rows that have the matching words in them as plain text, but not the ones that have the word embedded inside a link. For example, suppose a post has the following text in the 'wp_posts' table of the database: ``` 'This is a Test page: <a href="http://localhost/test-page">About Test</a>'. ``` And I search for the word 'about' in my custom query: ``` SELECT * FROM `wp_posts` WHERE post_content like '%about%'; ``` so I am getting the above post as the result. Is there a way I can ignore the posts that have the search string embedded inside a link, and fetch other posts which have the search string as part of a normal string? I tried a REGEX query, with no luck: ``` SELECT * FROM `wp_posts` WHERE post_content REGEXP '[[:<:]]about[[:>:]]'; ```
Maybe this: ``` SELECT * FROM `wp_posts` WHERE post_content LIKE '%about%' AND post_content NOT REGEXP '>[^>]*about[^<]*<'; ```
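The same filter idea can be prototyped outside MySQL. A sketch with Python's `re` module; note that MySQL's `REGEXP` is case-insensitive under the default collations, so `re.IGNORECASE` is used here, and the `>[^>]*about[^<]*<` pattern assumes reasonably well-formed markup (it approximates "the word appears between a `>` and a `<`, i.e. inside link text"):

```python
import re

posts = [
    'This is a Test page: <a href="http://localhost/test-page">About Test</a>',
    'Tell me about databases',
]

inside_tag_text = re.compile(r'>[^>]*about[^<]*<', re.IGNORECASE)

matches = [p for p in posts
           if 'about' in p.lower()            # post_content LIKE '%about%'
           and not inside_tag_text.search(p)] # AND NOT REGEXP '>[^>]*about[^<]*<'
print(matches)  # ['Tell me about databases']
```

The first post is excluded because its only occurrence of "about" sits inside the anchor text; the second survives because the word appears in plain text.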
Try this ``` SELECT * FROM `wp_posts` WHERE post_content REGEXP '^[^<>]+about'; ``` Here is the [Fiddle](http://www.sqlfiddle.com/#!2/2530a9/4)
mysql search string in table ignoring links
[ "", "mysql", "sql", "database", "wordpress", "" ]
I am really stuck with this question: I have two tables from an Oracle 10g XE database. I have been asked to give out the FName and LName of the fastest male and female athletes from an event. The eventID will be given. It works like this: if someone supplies the event ID, the top male's and top female's FName and LName will be given out separately. I should point out that each athlete will have a unique performance record. Thanks for reminding me of that! Here are the two tables. I spent all night on this. ATHLETE ``` athleteID* FName LName Sex Club --------- ------- -------- --- ---- 1034 Gabriel Castillo M 5011 1094 Stewart Mitchell M 5014 1161 Rickey McDaniel M 5014 1285 Marilyn Little F 1328 Bernard Lamb M 5014 ``` PARTICIPATION_IND ``` athleteID* eventID* Performance_in_Minutes --------- -------- ---------------------- 1094 13 18 1523 13 17 1740 13 1285 13 21 1439 13 25 ```
Here's one approach: ``` SELECT a.FName , a.LName , a.sex , p.eventID , p.performance_in_minutes FROM ( SELECT i.eventID, g.sex, MIN(i.performance_in_minutes) AS fastest_pim FROM athlete g JOIN participation_ind i ON i.athleteID = g.athleteID WHERE i.eventID IN (13) -- supply eventID here GROUP BY i.eventID, g.sex ) f JOIN participation_ind p ON p.eventID = f.eventID AND p.performance_in_minutes = f.fastest_pim JOIN athlete a ON a.athleteID = p.athleteID AND a.sex = f.sex ``` The inline view aliased as **f** gets the fastest time for each eventID, at most one per "sex" from the athlete table. We use a JOIN operation to extract the rows from participation\_ind that have a matching eventID and performance\_in\_minutes, and we use another JOIN operation to get matching rows from athlete. (Note that we need to include join predicate on the "sex" column, so we don't inadvertently pull a row with a matching "minutes" value and a non-matching gender.) **NOTE:** If there are two (or more) athletes of the same gender that have a matching "fastest time" for a particular event, the query will pull all those matching rows; the query doesn't distinguish a single "fastest" for each event, it distinguishes only the athletes that have the "fastest" time for a given event. The "extra" columns in the SELECT list can be omitted/removed. Those aren't required; they are only included to aid in debugging.
``` SELECT * FROM ( SELECT a.FName , a.LName FROM ATHLETE a JOIN PARTICIPATION_IND p ON a.athleteID = p.athleteID WHERE a.Sex = 'M' AND p.eventID = 13 ORDER BY p.Performance_in_Minutes ) WHERE ROWNUM = 1 UNION SELECT * FROM ( SELECT a.FName , a.LName FROM ATHLETE a JOIN PARTICIPATION_IND p ON a.athleteID = p.athleteID WHERE a.Sex = 'F' AND p.eventID = 13 ORDER BY p.Performance_in_Minutes ) WHERE ROWNUM = 1 ```
I want to select two different columns from two tables based on another column by SQL
[ "", "sql", "jdbc", "oracle10g", "" ]
I have a big query which also returns a very big response. The query looks like this: ``` SELECT group, subgroup, max(last_update) FROM ( SELECT a as group, a1 as subgroup, d1 as last_update FROM.... UNION ALL SELECT b as group, b1 as subgroup, d2 as last_update FROM.... UNION ALL SELECT c as group, c1 as subgroup, d3 as last_update FROM.... UNION ALL SELECT d as group, d1 as subgroup, d3 as last_update FROM.... UNION ALL SELECT e as group, e1 as subgroup, d4 as last_update FROM.... ... and some more selects (15 select queries in total) ) GROUP BY group, subgroup; ``` As you can see, I need to load the maximum date from some groups. The problem is that those dates need to be loaded from 15 selects and it works very slowly (~4s). I tested that the subselect ``` SELECT a as group, a1 as subgroup, d1 as last_update FROM.... UNION ALL SELECT b as group, b1 as subgroup, d2 as last_update FROM.... UNION ALL SELECT c as group, c1 as subgroup, d3 as last_update FROM.... UNION ALL SELECT d as group, d1 as subgroup, d3 as last_update FROM.... UNION ALL SELECT e as group, e1 as subgroup, d4 as last_update FROM.... ... and some more selects ``` works pretty fast (~0.1s), so the problem is with the grouping (that's why the query works slowly): ``` SELECT group, subgroup, max(last_update) FROM ( ... ) GROUP BY group, subgroup; ``` Is there some way to improve this grouping? As I wrote, the goal is to get the maximum date for each subgroup in each group.
I suggest you take a look at parallel queries: ``` create table ttt as with t(a, b, c, d, a1, b1, c1, d1, last_updated) as ( select 1, 2, 3, 4, 1, 2, 3, 4, sysdate + 1 from dual union all select 1, 2, 3, 4, 1, 2, 3, 4, sysdate from dual union all select 2, 3, 4, 5, 2, 3, 4, 5, sysdate + 2 from dual union all select 2, 3, 4, 5, 2, 3, 4, 5, sysdate + 1 from dual union all select 3, 4, 5, 6, 3, 4, 5, 6, sysdate + 3 from dual union all select 3, 4, 5, 6, 3, 4, 5, 6, sysdate + 2 from dual union all select 4, 5, 6, 7, 4, 5, 6, 7, sysdate + 4 from dual union all select 4, 5, 6, 7, 4, 5, 6, 7, sysdate + 3 from dual ) select * from t; select a grp, a1 subgrp, max(last_updated) from ttt group by a, a1 ``` Explain plan ![for the first query](https://i.stack.imgur.com/1ycl7.jpg) Let's add some parallelism: ``` alter table ttt parallel; select a grp, a1 subgrp, max(last_updated) from ttt group by a, a1 ``` Explain plan ![for the second query](https://i.stack.imgur.com/R3Zc7.jpg) As you can see, the cost went down. But it is not free: while a parallel query executes it uses all the resources you have, so it could hurt overall performance. However, since you said this query is not run often, I think it is a good solution. To read more about parallel queries, take a look at [this](http://www.orafaq.com/wiki/Parallel_Query_FAQ)
Maybe do the `group by` in each individual subquery too? ``` select g, s, max(last_update) from ( select g, s, max(last_update) as last_update from t1 group by g, s union all select g, s, max(last_update) as last_update from t2 group by g, s union all ... ) group by g, s ``` I can't say for sure, but if the server is building a temporary rowset for the query then this might cut down the size of that temporary.
Is there some replacement for 'max' function and grouping (performance optimization of aggregate operations)?
[ "", "sql", "performance", "oracle", "oracle11g", "" ]
I have a table that I wish to select rows from, and I want to be able to define the order in which the results are returned. ``` SELECT * FROM Table ORDER BY (CSV of Primary key) ``` EG: ``` Select * from table order by id (3,5,8,9,3,5) ``` Wanted results ``` ID | * | ----------- 3 .... 5 .... ``` etc I'm guessing this isn't possible?
You can use the `field()` function in MySQL: ``` order by field(id, 3, 5, 8, 9, 3, 5) ``` Of course, `3` and `5` match the first time, not the second, so they are superfluous. You can use a similar function, `choose()`, in SQL Server. Or, use `case` in any database: ``` order by (case when id = 3 then 1 when id = 5 then 2 when id = 8 then 3 when id = 9 then 4 end); ```
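The `CASE` form is portable, so it can be checked quickly with Python's built-in SQLite (the rows are deliberately inserted out of order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(9,), (8,), (5,), (3,)])

# Map each id to its position in the caller-supplied list, then sort on that
rows = cur.execute("""
    SELECT id FROM t
    ORDER BY CASE WHEN id = 3 THEN 1
                  WHEN id = 5 THEN 2
                  WHEN id = 8 THEN 3
                  WHEN id = 9 THEN 4 END
""").fetchall()
print([r[0] for r in rows])  # [3, 5, 8, 9]
```

Ids missing from the `CASE` list would get a `NULL` sort key; in practice you would add an `ELSE` with a large value to push them to the end.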
If I'm understanding your question correctly you could do something like this: ``` Select * from table order by CASE id WHEN 3 THEN 1 WHEN 5 THEN 2 WHEN 8 THEN 3 WHEN 9 THEN 4 WHEN 3 THEN 5 WHEN 5 THEN 6 ELSE 10000 END ```
SQL select rows in a given Order
[ "", "sql", "select", "" ]
I am using MySQL, and my SQL table looks like this: ``` sales_year (INT), sales_month (INT), sales_day (INT), price (float), customer_type (TEXT) ``` I'd like to know which SQL query would aggregate price data by quarter (compute the median price for each quarter, and how many observations were used to compute the median), grouped by customer type. I am struggling with two major steps: the median does not seem to be supported by MySQL, and also how to aggregate data by quarter; grouping by customer type seems very easy once those two are solved. The struggle is computing the median. For example, I just tried creating a quarter column, and it works, but it computes the AVG instead of the median: ``` select avg(price) as avg_price, floor(sales_month/3.0+1) as sales_quarter, count(*) as n_transactions, sales_year, customer_type from mydb.mytable group by sales_quarter, sales_year, customer_type; ``` This command works perfectly fine, but ideally I could replace AVG with MEDIAN, and MySQL does not have such support. Any suggestions on how to change this code to make it work for the median? Note: I also tried installing my own median function from the user-defined functions on this [site](http://mysql-udf.sourceforge.net/), but the C code did not compile on my Mac OS X. So the output would look like this: ``` sales_quarter (INT) sales_year (INT) median_price (FLOAT) number_users_used_to_compute_median (INT) customer_type (TEXT) ```
Oh, just call the average the median. People you talk to usually won't know the difference (;). Okay, seriously, you can do this in MySQL. There is a method using `group_concat()` and `substring_index()`, but that runs the risk of overflowing the intermediate string values. Instead, enumerate the values and do simple arithmetic. For this, you need an enumeration and a total. The enumeration is: ``` select t.*, @rn := if(@q = quarter and @y = @year and @ct = customer_type, @rn + 1, if(@q := quarter, if(@y := @year, if(@ct := customer_type, 1, 1), 1), 1) ) as rn from mydb.mytable t cross join (select @q := '', @y := '', @ct := '', @rn := 0) vars order by sales_quarter, sales_year, customer_type, price; ``` This is carefully formulated. The `order by` columns correspond to the variables defined. There is only one statement that assigns variables in the `select`. The nested `if()` statements ensure that each variable gets set (using an `and` or `or` could result in short-circuiting). It is important to remember that MySQL does not guarantee the order of evaluation for expressions in the `select`, so having only one statement set variables is important to ensure correctness. Now, getting the median is pretty easy. 
You need the total count, the sequential value (`rn`) and some arithmetic to handle the case where there are an even number of values: ``` select trn.sales_quarter, trn.sales_year, trn.customer_type, avg(price) as median from (select t.*, @rn := if(@q = quarter and @y = @year and @ct = customer_type, @rn + 1, if(@q := quarter, if(@y := @year, if(@ct := customer_type, 1, 1), 1), 1) ) as rn from mydb.mytable t cross join (select @q := '', @y := '', @ct := '', @rn := 0) vars order by sales_quarter, sales_year, customer_type, price ) trn join (select sales_quarter, sales_year, customer_type, count(*) as numrows from mydb.mytable t group by sales_quarter, sales_year, customer_type ) s on trn.sales_quarter = s.sales_quarter and trn.sales_year = s.sales_year and trn.customer_type = s.customer_type where 2*rn in (numrows, numrows - 1, numrows + 1) group by trn.sales_quarter, trn.sales_year, trn.customer_type; ``` Just to emphasize that the final average is *not* doing an average calculation. It is calculating the median. The normal definition is that for an even number of values, the median is the average of the two in the middle. The `where` clause handles both the even and odd cases.
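On databases with window functions (MySQL 8.0+), the user-variable enumeration above is no longer necessary: `row_number()` and `count(*) over ()` give `rn` and `numrows` directly. A sketch of the same even/odd median arithmetic, run here on SQLite (3.25+) for illustration; `grp` and `price` stand in for the quarter/year/customer_type grouping and the price column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (grp TEXT, price REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("a", 1), ("a", 2), ("a", 3), ("a", 4),   # even count: avg of middle two
                 ("b", 10), ("b", 20), ("b", 30)])         # odd count: middle value

rows = cur.execute("""
    WITH ranked AS (
        SELECT grp, price,
               ROW_NUMBER() OVER (PARTITION BY grp ORDER BY price) AS rn,
               COUNT(*)     OVER (PARTITION BY grp)                AS n
        FROM sales
    )
    SELECT grp, AVG(price) AS median
    FROM ranked
    WHERE rn IN ((n + 1) / 2, (n + 2) / 2)   -- integer division picks the middle row(s)
    GROUP BY grp
    ORDER BY grp
""").fetchall()
print(rows)  # [('a', 2.5), ('b', 20.0)]
```

For an odd count both expressions pick the same middle row, so the `AVG()` is just that value; for an even count they pick the two middle rows and `AVG()` computes the conventional median.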
To get the median, you could try something like, ``` SELECT * FROM table LIMIT COUNT(*)/2, 1 ``` This basically says: "Give me 1 item starting at the n/2th element, where n is the size of the set." So, if you were doing it by quarter it would be the same thing with some GROUP BY quarter type things thrown in. Let me know if you'd like me to expand on this more.
aggregate data (by median) in mysql by quarter and by customer_type
[ "", "mysql", "sql", "" ]
I am trying to get some ids from a join on multiple conditions. Example (after the join, etc.): ``` A B x 1 x 2 x 3 y 1 y 2 y 3 y 5 z 1 z 5 ``` So I want those ids (A) where, for example, B is 1 and also 5, so the result is y, z. Thanks
Using two `EXISTS` may look cleaner: ``` SELECT DISTINCT A FROM Table1 t1 WHERE EXISTS(SELECT A FROM Table1 WHERE A = t1.A and B = 1) AND EXISTS(SELECT A FROM Table1 WHERE A = t1.A and B = 5) ```
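The double-`EXISTS` logic is easy to check against the sample data from the question. A sketch using Python's built-in SQLite, loading the already-joined layout directly into one table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (a TEXT, b INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)", [
    ("x", 1), ("x", 2), ("x", 3),
    ("y", 1), ("y", 2), ("y", 3), ("y", 5),
    ("z", 1), ("z", 5),
])

# Keep only the a-values that have BOTH a b=1 row and a b=5 row
rows = cur.execute("""
    SELECT DISTINCT a FROM t1
    WHERE EXISTS (SELECT 1 FROM t1 i WHERE i.a = t1.a AND i.b = 1)
      AND EXISTS (SELECT 1 FROM t1 i WHERE i.a = t1.a AND i.b = 5)
    ORDER BY a
""").fetchall()
print([r[0] for r in rows])  # ['y', 'z']
```

`x` is dropped because it has a `b = 1` row but no `b = 5` row, which matches the expected result in the question.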
You can achieve this with a WHERE plus a GROUP BY + HAVING; the `HAVING` ensures that both conditions are met. ``` SELECT T1.A FROM Table1 AS T1 JOIN Table2 AS T2 ON T2.ID = T1.ID WHERE T2.B IN(1,5) GROUP BY T1.A HAVING COUNT(*) = 2 ``` Or, if there is a possibility of B having more than one row for 1 or 5 for the same A, you could do this: ``` SELECT DISTINCT T1.A FROM Table1 AS T1 JOIN Table2 AS T2 ON T2.ID = T1.ID AND T2.B = 1 JOIN Table2 AS T3 ON T3.ID = T1.ID AND T3.B = 5 ```
Get 1 if statements are true
[ "", "sql", "oracle", "" ]
Even though I am removing the foreign key constraint and then trying to drop the table, I get an error: ``` ALTER TABLE [dbo].[Table1] DROP CONSTRAINT [FK_Table1_Table2] GO DROP TABLE [dbo].[Table1] GO ``` Error > Msg 3726, Level 16, State 1, Line 2 Could not drop object 'dbo.Table1' > because it is referenced by a FOREIGN KEY constraint. Using SQL Server 2012 **I generated the script using SQL Server 2012, so did SQL Server give me a wrong script?**
Not sure if I understood correctly what you are trying to do; most likely Table1 is referenced as a FK in another table. If you do: ``` EXEC sp_fkeys 'Table1' ``` (this was taken from [How can I list all foreign keys referencing a given table in SQL Server?](https://stackoverflow.com/questions/483193/how-can-i-list-all-foreign-keys-referencing-a-given-table-in-sql-server?rq=1)) This will give you all the tables in which 'Table1''s primary key appears as a FK. Deleting the constraints that exist inside a table is not a necessary step in order to drop the table itself. Deleting every FK that references 'Table1' is. As for the second part of your question, the SQL Server automatic scripts are blind in many ways. Most likely the table that is preventing you from dropping Table1 is being dropped further below in the script, or not changed by the script at all. RedGate has a few tools that help with those cascading deletes (normally when you are trying to drop a bunch of tables), but it's not bulletproof and it's quite pricey. <http://www.red-gate.com/products/sql-development/sql-toolbelt/>
Firstly, you need to drop your FK. I can recommend you take a look at this Stack Overflow post; it is very interesting. It is called: [SQL DROP TABLE foreign key constraint](https://stackoverflow.com/q/1776079/771579) There is a good explanation there of how to do this process. I will quote a response: > .....Will not drop your table if there are indeed foreign keys referencing it. > > To get all foreign key relationships referencing your table, you could use this SQL (if you're on SQL Server 2005 and up): ``` SELECT * FROM sys.foreign_keys WHERE referenced_object_id = object_id('Student') SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name FROM sys.foreign_keys WHERE referenced_object_id = object_id('Student') ```
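`sp_fkeys` and `sys.foreign_keys` are SQL Server-specific; for illustration, the same "which tables reference this one" question can be asked of SQLite through its `foreign_key_list` pragma. The table names here are hypothetical stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table2 (id INTEGER PRIMARY KEY);
    CREATE TABLE Table1 (id INTEGER PRIMARY KEY,
                         t2_id INTEGER REFERENCES Table2(id));
""")

# Walk every table and collect the ones whose foreign keys point at Table2.
referencing = []
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for t in tables:
    for fk in conn.execute(f"PRAGMA foreign_key_list('{t}')"):
        if fk[2] == 'Table2':   # column index 2 holds the referenced table name
            referencing.append(t)
```

Any table found this way must have its constraint dropped (or be dropped itself) before the referenced table can go.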
Could not drop object 'dbo.Table1' because it is referenced by a FOREIGN KEY constraint
[ "", "sql", "sql-server", "" ]
I am using an Oracle database. For this database I have to code up a filter. I am doing the filtering within the `WHERE` SQL clause. However, if the filter is not active I would like to run `SELECT * FROM Person WHERE Name = '*'`. This does not work. Any recommendations on how to set the `WHERE clause` to match all `names` WITHOUT leaving it out? I appreciate your answer!
You have to use `SELECT * FROM Person WHERE Name like '%'`. In `SQL` the wildcard is `%`, not the asterisk, and `LIKE` is the operator that enables wildcard searches.
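A small sketch of the `%` wildcard in action, using SQLite via Python with invented sample rows; note that `LIKE '%'` matches every non-NULL value but silently drops NULL names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (Name TEXT)")
conn.executemany("INSERT INTO Person VALUES (?)",
                 [("Ann",), ("Bob",), (None,)])

# '%' matches any string, including the empty string, but NULL LIKE '%' is
# NULL, so the NULL row is filtered out of the result.
rows = conn.execute(
    "SELECT Name FROM Person WHERE Name LIKE '%' ORDER BY Name").fetchall()
```

If NULL names must survive too, add `OR Name IS NULL` to the predicate.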
I interpret 'how to set the `WHERE` clause to all `names` WITHOUT leaving it out' as 'I want the field name `Name` to appear in the `WHERE` clause but I want all records.' ``` ... WHERE Name like '%' or Name is null ``` would return all rows (`like '%'` also matches the empty string, though not NULL). The coding of your filter should dynamically exclude or include the fields selected. In other words: create the query string dynamically, similar to this ``` whereClause = '1=1'; foreach (filter in filterConditions) { if (!filter.IsEmpty) { whereClause += ' AND '+filter.SQLExpression } } ``` Sorry, I am not familiar with Java, but I hope you get the point. Of course, you should still introduce parameters (coded in `filter.SQLExpression`) instead of directly coding the filter value into the SQL expression.
Get all fields in WHERE
[ "", "sql", "oracle", "" ]
I have a table called person with about 3 million rows. I have added a column called company_type_id whose default value is 0. Now I want to update company_type_id to 1 where person_id is from 1 to 212465, and to 8 where person_id is from 256465 to 656464. How can I do this? I am using MySQL.
You can do that in one update statement: ``` update person set company_type_id = case when person_id between 1 and 212465 then 1 else 8 end where person_id between 1 and 212465 or person_id between 256465 and 656464 ```
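The two range updates can be folded into a single statement with a `CASE` expression; a toy SQLite sketch with small stand-in id ranges (1-4 and 7-10 instead of the real ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (person_id INTEGER, company_type_id INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO person (person_id, company_type_id) VALUES (?, 0)",
                 [(i,) for i in range(1, 11)])

# ids 1-4 become 1, ids 7-10 become 8, ids 5-6 keep their default of 0.
conn.execute("""
    UPDATE person
    SET company_type_id = CASE
            WHEN person_id BETWEEN 1 AND 4 THEN 1
            ELSE 8
        END
    WHERE person_id BETWEEN 1 AND 4
       OR person_id BETWEEN 7 AND 10
""")
rows = conn.execute(
    "SELECT person_id, company_type_id FROM person ORDER BY person_id").fetchall()
```

On a 3-million-row table this touches each qualifying row exactly once, instead of running two separate scans.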
``` update person set company_type_id=1 where person_id>=1 and person_id<=212465; ``` I am sure you'll make the second update query by yourself.
updating the column of a table with different values
[ "", "mysql", "sql", "" ]
I have a table in a SQL database that provides a "many-to-many" connection. The table contains id's of both tables and some fields with additional information about the connection. ``` CREATE TABLE SomeTable ( f_id1 INTEGER NOT NULL, f_id2 INTEGER NOT NULL, additional_info text NOT NULL, ts timestamp NULL DEFAULT now() ); ``` The table is expected to contain 10 000 - 100 000 entries. How is it better to design a primary key? Should I create an additional 'id' field, or to create a complex primary key from both id's? DBMS is PostgreSQL
This is a "hard" question in the sense that there are pretty good arguments on both sides. I have a bias toward putting in auto-incremented ids in all tables that I use. Over time, I have found that this simply helps with the development process and I don't have to think about whether they are necessary. A big reason for this is so foreign key references to the table can use only one column. In a many-to-many junction table (aka "association table"), this probably isn't necessary: * It is unlikely that you will add a table with a foreign key relationship to a junction table. * You are going to want a unique index on the columns anyway. * They will probably be declared `not null` anyway. Some databases actually store data based on the primary key. So, when you do an insert, then data must be moved on pages to accommodate the new values. Postgres is *not* one of those databases. It treats the primary key index just like any other index. In other words, you are not incurring "extra" work by declaring one or more columns as a primary key. My conclusion is that having the composite primary key is fine, even though I would probably have an auto-incremented primary key with separate constraints. The composite primary key will occupy less space and so will probably be more efficient than an auto-incremented id. However, if there is any chance that this table would be used for a foreign key relationship, then add in another id field.
A surrogate key won't protect you from adding multiple instances of (f_id1, f_id2), so you should definitely have a unique constraint or primary key for that. What would the purpose of a surrogate key be in your scenario?
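The uniqueness point is easy to demonstrate; a sketch on SQLite (timestamp column omitted for brevity) showing the composite key rejecting a duplicate pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE SomeTable (
        f_id1 INTEGER NOT NULL,
        f_id2 INTEGER NOT NULL,
        additional_info TEXT NOT NULL,
        PRIMARY KEY (f_id1, f_id2)
    )
""")
conn.execute("INSERT INTO SomeTable VALUES (1, 2, 'first link')")

# A second row with the same (f_id1, f_id2) pair violates the composite key.
try:
    conn.execute("INSERT INTO SomeTable VALUES (1, 2, 'duplicate link')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

With a surrogate id instead, the same guarantee would require an extra `UNIQUE (f_id1, f_id2)` constraint on top of the key.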
Primary key in "many-to-many" table
[ "", "sql", "postgresql", "" ]
I have a Google Big Query Table that has an `email` column in it. Basically each rows shows a state the user with that email address existed in. What I want to do is query the table to get a result showing the most recent row per email address. I've tried all sorts of `GROUP BY`'s, `JOIN`ing the table against itself and the usual fun stuff that I would use in MySQL, but I keep getting duplicate emails returned if the entire row isn't a match. Any help is much appreciated! Sample Data ``` user_email | user_first_name | user_last_name | time | is_deleted test@test.com | Joe | John | 123456790 | 1 test@test.com | Joe | John | 123456789 | 0 test2@test.com | Jill | John | 123456789 | 0 ``` So if sampling that data I would want to return: ``` user_email | user_first_name | user_last_name | time | is_deleted test@test.com | Joe | John | 123456790 | 1 test2@test.com | Jill | John | 123456789 | 0 ```
``` SELECT user_email, user_first_name, user_last_name, time, is_deleted FROM ( SELECT user_email, user_first_name, user_last_name, time, is_deleted , RANK() OVER(PARTITION BY user_email ORDER BY time DESC) rank FROM table ) WHERE rank=1 ```
There is a potential shortcoming of the use of `RANK()` over the [alternative numbering function](https://cloud.google.com/bigquery/docs/reference/standard-sql/numbering_functions) `ROW_NUMBER()`. The accepted answer does provide the desired solution, except in the event of a tie in the order by clause, where duplicate records are again returned: ``` with minimal_reproducible as ( select 'test@test.com' as user_email, 'Joe' as user_first_name, 'John' as user_last_name, 123456789 as time, 1 is_deleted union all select 'test@test.com', 'Joe', 'John', 123456789, 0 union all select 'test2@test.com', 'Jill', 'John', 123456789, 0 ) select user_email, user_first_name, user_last_name, time, is_deleted from ( select *, rank() over (partition by user_email order by time desc) as rank from minimal_reproducible) inner_table where rank = 1; ``` A better solution therefore is to use `ROW_NUMBER()` in place of `RANK()` to ensure (albeit arbitrarily) unique `user_email` come what may: ``` with minimal_reproducible as ( select 'test@test.com' as user_email, 'Joe' as user_first_name, 'John' as user_last_name, 123456789 as time, 1 is_deleted union all select 'test@test.com', 'Joe', 'John', 123456789, 0 union all select 'test2@test.com', 'Jill', 'John', 123456789, 0 ) select user_email, user_first_name, user_last_name, time, is_deleted from ( select *, row_number() over (partition by user_email order by time desc) as row_number from minimal_reproducible) inner_table where row_number = 1; ```
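The tie problem can be reproduced even without window functions, using made-up rows shaped like the question's data: a plain join back on `MAX(time)` keeps every tied row, which is exactly why a deterministic tie-breaker is needed. A SQLite sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE states (user_email TEXT, time INTEGER, is_deleted INTEGER)")
conn.executemany("INSERT INTO states VALUES (?, ?, ?)", [
    ("test@test.com",  123456789, 1),   # tie on time for this address...
    ("test@test.com",  123456789, 0),   # ...so a max-time join returns both
    ("test2@test.com", 123456789, 0),
])

# Join each row against its address's MAX(time): ties survive as duplicates.
rows = conn.execute("""
    SELECT s.user_email, s.is_deleted
    FROM states s
    JOIN (SELECT user_email, MAX(time) AS mt
          FROM states GROUP BY user_email) m
      ON s.user_email = m.user_email AND s.time = m.mt
    ORDER BY s.user_email, s.is_deleted
""").fetchall()
```

Three rows come back for two addresses, so some arbitrary but deterministic tie-breaker (an id column, or ROW_NUMBER) is required to get exactly one row per email.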
Google Big Query SQL - Get Most Recent Column Value
[ "", "sql", "google-bigquery", "" ]
Can anyone tell me what's wrong with this SQL statement, please? ``` const string updateStatement = " IF NOT EXISTS(SELECT * FROM BankAccount WHERE ac_no = @ACNumber AND deleted IS NULL) " + " UPDATE BankAccount SET BankAccount.deleted = NULL WHERE ac_no = @ACNumber"; ``` Here I'm trying to recover a deleted bank account record (a logically deleted one). Even when the `IF NOT EXISTS()` part doesn't return any records, the `UPDATE` does not happen! **Table** ``` BankAccount [ac_no(pk), emp_id, ac_name, bank_name, emp_name, status, deleted] ``` **EDIT:** Before recovering a bank account the system should check whether there are any other bank accounts entered for that employee. If there is another bank account for the same employee, the recovery should not happen, as at any given time an employee should have only one bank account. ![enter image description here](https://i.stack.imgur.com/fiJTY.png)
From what I understand, you want to check that the employee holding the account doesn't have any other open accounts. Below, BA1 contains just the account we might undelete. BA2 contains all the employee's open accounts. If the employee has no open accounts, the select won't return anything. ``` IF NOT EXISTS (SELECT * FROM BankAccount BA1 INNER JOIN BankAccount BA2 ON BA1.ac_no = @ACNumber AND BA1.emp_id = BA2.emp_id AND BA2.deleted IS NULL) UPDATE BankAccount SET Deleted = NULL WHERE ac_no = @ACNumber ``` In your original statement, you're only checking that single account row, and you're not looking at other accounts the employee may have or may have had. If you know the employee ID as well as the account number when running the statement, you could simplify it to: ``` IF NOT EXISTS (SELECT * FROM BankAccount WHERE emp_id = @EmployeeId AND Deleted IS NULL) UPDATE BankAccount SET Deleted = NULL WHERE ac_no = @ACNumber ``` **EDIT** I don't know why it doesn't work for you. It works for me. 
Here's an example: ``` DECLARE @BankAccount TABLE ([ac_no] int ,[emp_id] int ,[name] varchar(50) ,deleted datetime ) INSERT @BankAccount VALUES (17, 103, 'GS Siri', NULL) , (18, 108, 'N.S Per', '2014-08-10') , (19, 116, 'K.V Sil', NULL) , (25, 104, 'N.Kusha', NULL) , (45, 108, 'N.S Per', '2014-08-11') SELECT * FROM @BankAccount DECLARE @ACNumber int SET @ACNumber = 45 IF NOT EXISTS (SELECT * FROM @BankAccount BA1 INNER JOIN @BankAccount BA2 ON BA1.ac_no = @ACNumber AND BA1.emp_id = BA2.emp_id AND BA2.deleted IS NULL) UPDATE @BankAccount SET Deleted = NULL WHERE ac_no = @ACNumber SELECT * FROM @BankAccount ``` The first select shows the following: ``` ac_no emp_id name deleted 17 103 GS Siri NULL 18 108 N.S Per 2014-08-10 00:00:00.000 19 116 K.V Sil NULL 25 104 N.Kusha NULL 45 108 N.S Per 2014-08-11 00:00:00.000 ``` The second: ``` ac_no emp_id name deleted 17 103 GS Siri NULL 18 108 N.S Per 2014-08-10 00:00:00.000 19 116 K.V Sil NULL 25 104 N.Kusha NULL 45 108 N.S Per NULL ``` If you can provide an example like I have above where it doesn't work, I can look into it further.
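`IF ... EXISTS` is T-SQL; on engines without it, the same guard can be folded into the `UPDATE` itself with a correlated `NOT EXISTS`. A portable sketch on SQLite, reusing rows from the example data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE BankAccount (
    ac_no INTEGER PRIMARY KEY, emp_id INTEGER, deleted TEXT)""")
conn.executemany("INSERT INTO BankAccount VALUES (?, ?, ?)", [
    (18, 108, '2014-08-10'),
    (45, 108, '2014-08-11'),   # candidate to undelete: emp 108 has no open account
    (19, 116, None),
])

# Undelete ac_no 45 only if its employee has no open (deleted IS NULL) account.
conn.execute("""
    UPDATE BankAccount
    SET deleted = NULL
    WHERE ac_no = ?
      AND NOT EXISTS (SELECT 1 FROM BankAccount b2
                      WHERE b2.emp_id = BankAccount.emp_id
                        AND b2.deleted IS NULL)
""", (45,))
restored = conn.execute(
    "SELECT deleted FROM BankAccount WHERE ac_no = 45").fetchone()
```

Because the check and the update are one statement, there is also no window between them for another session to create a second open account.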
OK, you are checking whether there are any other bank accounts entered for that employee, but your query is wrong because `ac_no` is the `PK field`, so your `IF` statement does not return the correct value. You should change your query to replace `ac_no` with `emp_id` and look it over. ``` IF NOT EXISTS( SELECT * FROM BankAccount WHERE emp_id = @EmpID AND DELETED IS NULL ) UPDATE BankAccount SET BankAccount.deleted = NULL WHERE emp_id = @EmpID ```
Cannot UPDATE even IF NOT EXISTS doesn't return rows
[ "", "sql", "sql-server", "database", "exists", "" ]
I have a table that associates names with account ID's, like this ``` **Accounts** name account_id ``` and another table that associates information with account URIs ``` **Info** information account_uri ``` An account URI looks like this: /some/random/path/123456/randomDigits Where 123456 is the account_id associated with the user. How can I join these tables, and get a result like ``` **results** name information ``` It's an Amazon Redshift/PostgreSQL database. Thanks for any advice!
This would work if you aren't able to put the `account_id` in the Info table: ``` SELECT A.Name, I.Information FROM Information I JOIN Account A ON A.account_id = REVERSE(SUBSTRING(REVERSE([account_uri]),0,CHARINDEX('/',REVERSE(account_uri)))) ``` The `REVERSE(...)` is being used to create a LastIndexOf function. Might want to just do: ``` SELECT REVERSE(SUBSTRING(REVERSE([account_uri]),0,CHARINDEX('/',REVERSE(account_uri)))) FROM Information ``` To see if the SUBSTRING/REVERSE are getting the entire account id This should find the second string between '/' ``` SUBSTRING(REVERSE(SUBSTRING(REVERSE([account_uri]),CHARINDEX('/',REVERSE(account_uri)))),0,CHARINDEX('/',REVERSE(SUBSTRING(REVERSE([account_uri]),CHARINDEX('/',REVERSE(account_uri))))) ```
It won't be efficient, but you can do: ``` select a.name, i.information from accounts a join info i on i.account_uri like '%/' || a.account_id; ``` Unfortunately, with this data structure it is hard to take advantage of indexes.
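Adapting the `LIKE`-join idea to the URI format from the question, where the id is a middle path segment rather than the suffix, gives a pattern like `'%/' || account_id || '/%'`. A SQLite sketch with invented rows; note this can false-match if the id string also appears in another segment:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (name TEXT, account_id TEXT);
    CREATE TABLE info (information TEXT, account_uri TEXT);
    INSERT INTO accounts VALUES ('alice', '123456');
    INSERT INTO info VALUES ('vip', '/some/random/path/123456/99');
""")

# Join by looking for '/<account_id>/' anywhere inside the URI.
rows = conn.execute("""
    SELECT a.name, i.information
    FROM accounts a
    JOIN info i ON i.account_uri LIKE '%/' || a.account_id || '/%'
""").fetchall()
```

As the answer notes, a pattern join like this cannot use an index, so on large tables it pays to extract the id into its own column once and join on that.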
How can I join tables on a substring in a record?
[ "", "sql", "database", "postgresql", "amazon-redshift", "" ]
**`EDITED :`** I have a Table **`Attendance`**. ![enter image description here](https://i.stack.imgur.com/CMrsz.jpg) Where **`0`** stands for false, and **`1`** stands for true. I need to get the **`count or sum`** of **`Present`**, **`Absent`** & **`Leaves`** when * `Present = 1` * `Absent = 1` * `Leave = 1` The result I need is like: ``` Present 3 Absent 2 Leave 1 ``` Can anyone guide me on what the query will be? I have tried **`COUNT()`** on a single column but don't know how to do this on multiple columns.
You can do it like this: an expression inside `sum` evaluates to a boolean (0 or 1), so you can get your counts based on your conditions. ``` select name, sum(Present = 1) Presentcount, sum(Absent = 1) Absentcount, sum(Leaves = 1) Leavescount from Attendance group by name ```
If the items can have values of only `0` or `1`, you can simply sum them up. ``` SELECT SUM(Present) as PresentCnt , SUM(Absent) as AbsentCnt , ... FROM ... ``` If other values are allowed, use `SUM` using a `CASE` expression as an argument: ``` SELECT SUM(CASE WHEN Present=1 THEN 1 ELSE 0 END) as Present_One , SUM(CASE WHEN Absent=1 THEN 1 ELSE 0 END) as Absent_One , ... FROM ... ```
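Both forms are easy to check: SQLite, like MySQL, evaluates a comparison to 0 or 1, so `SUM(condition)` counts matching rows. A sketch reproducing the question's 3/2/1 totals with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Attendance
    (name TEXT, Present INTEGER, Absent INTEGER, Leaves INTEGER)""")
conn.executemany("INSERT INTO Attendance VALUES (?, ?, ?, ?)", [
    ("a", 1, 0, 0), ("a", 1, 0, 0), ("a", 0, 1, 0),
    ("a", 1, 0, 0), ("a", 0, 1, 0), ("a", 0, 0, 1),
])

# Each comparison yields 0/1, so SUM over it counts the rows where it holds.
rows = conn.execute("""
    SELECT name, SUM(Present = 1), SUM(Absent = 1), SUM(Leaves = 1)
    FROM Attendance GROUP BY name
""").fetchall()
```

In dialects where comparisons are not numeric (e.g. standard SQL), the `CASE WHEN ... THEN 1 ELSE 0 END` form shown above is the portable equivalent.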
COUNT( ) on multiple columns each having a condition
[ "", "mysql", "sql", "android-sqlite", "" ]
I have this query: `SELECT p.id, p.name, p.img, p.template_usage, t.name as tpl_name FROM platforms as p, xml_template as t WHERE p.template_usage = t.id` It returns the required results; however, if the t.id does not exist, the whole row is not returned. Is it possible to add some kind of parameter that says `"If t.id does not exist then tpl_name = ''"` - so the row is returned, but the value of tpl_name is empty?
``` SELECT p.id, p.name, p.img, p.template_usage, t.name as tpl_name FROM platforms as p left join xml_template as t on t.id = p.template_usage ```
Your issue stems from the fact that you're using an implicit inner join in your query. Using a left join would be more apt for your situation. ``` SELECT p.id, p.name, p.img, p.template_usage, t.name as tpl_name FROM platforms as p LEFT JOIN xml_template as t ON p.template_usage = t.id ``` I would suggest reading [Explicit vs implicit SQL joins](https://stackoverflow.com/questions/44917/explicit-vs-implicit-sql-joins) as well as looking up what each SQL join accomplishes. [![SQL joins](https://i.stack.imgur.com/6ZTbd.png)](https://i.stack.imgur.com/6ZTbd.png) (source: [geekphilip.com](http://www.geekphilip.com/wp-content/uploads/2012/04/Join-Diagram.png)) Note: As John mentioned below, MySQL does not support a FULL OUTER JOIN. There can be quirks between DBMSs when using some joins, however I would suggest knowing what each join type is trying to accomplish. Once you know how you want to join the tables, you will just need to look up how to accomplish it within your particular database.
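A side-by-side sketch of the two join types on SQLite, with invented rows, showing that the platform without a matching template survives only the LEFT JOIN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE platforms (id INTEGER, name TEXT, template_usage INTEGER);
    CREATE TABLE xml_template (id INTEGER, name TEXT);
    INSERT INTO platforms VALUES (1, 'p1', 10), (2, 'p2', 99);
    INSERT INTO xml_template VALUES (10, 'tpl-ten');
""")

# INNER JOIN drops p2 (no template with id 99).
inner = conn.execute("""
    SELECT p.name, t.name FROM platforms p
    JOIN xml_template t ON t.id = p.template_usage""").fetchall()

# LEFT JOIN keeps p2, filling the template columns with NULL.
left = conn.execute("""
    SELECT p.name, t.name FROM platforms p
    LEFT JOIN xml_template t ON t.id = p.template_usage""").fetchall()
```

Wrapping the template name in `COALESCE(t.name, '')` would then turn that NULL into the empty string the question asks for.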
Return a result even if Select is empty
[ "", "mysql", "sql", "database", "select", "join", "" ]
I have tables Employees and Invoice. ``` Employees: Invoice: id_emp name id_invoice id_invoice date_invoice ------------------------------- ------------------------------ 1 Peter 5 5 01.01.2014 10:56 2 Alfred 6 6 02.04.2014 11:21 3 Jack 7 7 03.09.2014 12:32 2 Alfred 8 8 10.10.2014 16:43 ``` How can I get all the employees and only their last invoices, i.e. in this form: ``` id_emp name id_invoice date_invoice ------------------------------------------------------ 1 Peter 5 01.01.2014 10:56 3 Jack 7 03.09.2014 12:32 2 Alfred 8 10.10.2014 16:43 ``` I tried to do: ``` SELECT id_emp, name, emp.id_invoice, max(date_invoice) as date_invoice FROM Employees emp, Invoice inv WHERE emp.id_invoice = inv.id_invoice GROUP BY id_emp, name, emp.id_invoice; ``` But it doesn't work as I want.
The query below will help you. You also need to aggregate id_invoice (here with MAX). ``` SELECT id_emp, name, MAX(emp.id_invoice) AS id_invoice, max(date_invoice) as date_invoice FROM Employees emp, Invoice inv WHERE emp.id_invoice = inv.id_invoice GROUP BY id_emp, name; ```
When you do GROUP BY and need to include more columns from "the last" invoice, KEEP syntax can be very handy: ``` with employees as ( select 1 id_emp, 'Peter' name, 5 id_invoice from dual union all select 2, 'Alfred', 6 from dual union all select 3, 'Jack', 7 from dual union all select 2, 'Alfred', 8 from dual ), invoice as ( select 5 id_invoice, to_date('01.01.2014 10:56','DD-MM-YYYY HH24:MI') date_invoice from dual union all select 6, to_date('02.04.2014 11:21','DD-MM-YYYY HH24:MI')from dual union all select 7, to_date('03.09.2014 12:32','DD-MM-YYYY HH24:MI')from dual union all select 8, to_date('10.10.2014 16:43','DD-MM-YYYY HH24:MI')from dual ) select emp.id_emp , max(emp.name) name , max(inv.id_invoice) keep (dense_rank last order by inv.date_invoice) last_id_invoice , max(inv.date_invoice) last_date_invoice from employees emp join invoice inv on inv.id_invoice = emp.id_invoice group by emp.id_emp order by emp.id_emp ``` Group by id\_emp. (I assume id\_emp is a primary key and emp.name is redundant, therefore I use max(name) - if id\_emp is not a key, then include it in the group by.) The last invoice date we get with an ordinary max() function. Getting the id\_invoice for the invoice that has the last date is done with the KEEP syntax - using DENSE\_RANK LAST order by inv.date\_invoice tells the max(inv.id\_invoice) function that it should only take the max() of those rows that have the LAST date\_invoice. 
An alternative method is using analytic functions like for example: ``` with employees as ( select 1 id_emp, 'Peter' name, 5 id_invoice from dual union all select 2, 'Alfred', 6 from dual union all select 3, 'Jack', 7 from dual union all select 2, 'Alfred', 8 from dual ), invoice as ( select 5 id_invoice, to_date('01.01.2014 10:56','DD-MM-YYYY HH24:MI') date_invoice from dual union all select 6, to_date('02.04.2014 11:21','DD-MM-YYYY HH24:MI')from dual union all select 7, to_date('03.09.2014 12:32','DD-MM-YYYY HH24:MI')from dual union all select 8, to_date('10.10.2014 16:43','DD-MM-YYYY HH24:MI')from dual ) select id_emp, name, id_invoice, date_invoice from ( select emp.id_emp , emp.name , inv.id_invoice , inv.date_invoice , row_number() over ( partition by emp.id_emp order by inv.date_invoice desc ) rn from employees emp join invoice inv on inv.id_invoice = emp.id_invoice ) where rn = 1 order by id_emp ``` Both methods have the advantage of not accessing tables more than once. The second method using analytic row\_number() function is the easiest if you need many columns from "the last" row - in such cases KEEP method requires copying the same KEEP clause in many columns.
How to select the last data? Oracle
[ "", "sql", "oracle", "select", "" ]
I've got a table with 14,000,000 (14 million) rows. I've just realised I can cut this number down significantly by removing rows where a certain field's 1st and 2nd characters are either 23,24,25,54,55, or 56. I was thinking something along the lines of: ``` DELETE FROM tablename WHERE Substring of field in column LIKE 24 OR 25 OR 26.... Etc ``` Thanks, LazyTotoro :-)
If you are sure of which strings to look for, you can just do this: ``` delete from tablename where left(fieldname,2) in ('23','24','25','54','55','56') ``` `left` provides a convenient solution since you are looking for the first 2 characters. If you wanted to get a substring between the ends, you could use `substring` with the syntax `substring(fieldname, startingposition, length)`.
I think you can use the SUBSTRING() function. <http://www.w3resource.com/mysql/string-functions/mysql-substring-function.php> For example, if you use SUBSTRING(str, 1, 2) it will return 2 characters starting from the first position in the string, so if you have SUBSTRING("123456", 1, 2), it will return "12". So you would have: ``` DELETE FROM tablename WHERE SUBSTRING(field, 1, 2) IN (24,25,26) ```
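In SQLite the same delete is spelled with `substr`; a small sketch with invented sample codes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (field TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?)",
                 [("231000",), ("549999",), ("991234",), ("120000",)])

# substr(field, 1, 2) takes the first two characters (1-indexed).
conn.execute("""
    DELETE FROM tablename
    WHERE substr(field, 1, 2) IN ('23', '24', '25', '54', '55', '56')
""")
remaining = [r[0] for r in
             conn.execute("SELECT field FROM tablename ORDER BY field")]
```

Comparing against string literals (`'23'` rather than `23`) avoids relying on implicit numeric coercion, which varies between engines.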
Removing a row in mySQl based on the substring of a field value?
[ "", "mysql", "sql", "substring", "where-clause", "sql-delete", "" ]
I am trying to create a query to get the last 12 month records based on month for chart representation. After a lot of reading and after watching this [similar topic](https://stackoverflow.com/questions/19757458/mysql-sum-of-records-by-month-for-the-last-12-months) I created a query that seems right but missing the months with 0 money. As an example, I see in my graph months 1/14,2/14,4/14 and so on... 3/14 is missing. My code is this ``` SELECT * FROM (SELECT DATE_FORMAT(now(), '%m/%y') AS Month UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 1 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 2 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 3 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 4 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 5 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 6 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 7 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 8 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 9 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 10 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 11 MONTH), '%m/%y') ) AS Months LEFT JOIN (SELECT sum(expenses.price) AS ExpenseAmount, sum(payments.amount) AS PaymentsAmount, DATE_FORMAT(expenses.date_occurred,'%m/%y') AS Month, DATE_FORMAT(payments.date_occurred,'%m/%y') AS Montha FROM expenses, payments WHERE payments.user_id= 1 AND payments.user_id=expenses.user_id GROUP BY MONTH(payments.date_occurred), YEAR(payments.date_occurred) ORDER BY payments.date_occurred ASC ) data ON Months.MONTH = data.Montha ORDER BY data.Montha; ``` Any help will be great as this kind of queries are too advanced for me :-) ![enter image description here](https://i.stack.imgur.com/S6OWF.png)
As the query looks like it should produce a row for each month, can you check the query output, rather than what your graph is producing? I suspect you've got an entry for 04/14, but that the value is NULL rather than 0. To correct this, you can change the query to start ``` SELECT Months.Month, COALESCE(data.ExpenseAmount, 0) AS ExpenseAmount, COALESCE(data.PaymentAmount, 0) AS PaymentAmount ``` COALESCE will give you 0 instead of NULL where there are no rows matching your left join. However, there are further problems in your query. You will only get rows if there is an expense and a payment in the same month - check <http://sqlfiddle.com/#!2/3f52a8/1> and you'll see the problem if you remove some of the data from one table. Here's a working solution which will give you all months, summing the data from both tables, even if only one is present. This works by handling the expenses and payments as separate queries, then joining them together. ``` SELECT Months.Month, COALESCE(expensedata.ExpenseAmount, 0) AS ExpenseAmount, COALESCE(paymentdata.PaymentAmount, 0) AS PaymentAmount FROM (SELECT DATE_FORMAT(now(), '%m/%y') AS Month UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 1 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 2 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 3 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 4 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 5 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 6 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 7 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 8 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 9 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 10 MONTH), '%m/%y') UNION SELECT DATE_FORMAT(DATE_SUB(now(), INTERVAL 11 MONTH), '%m/%y') ) AS Months LEFT JOIN (SELECT SUM(price) AS ExpenseAmount, DATE_FORMAT(date_occurred,'%m/%y') AS Month FROM expenses WHERE user_id = 1 GROUP BY MONTH(date_occurred), YEAR(date_occurred)) expensedata ON Months.Month = expensedata.Month LEFT JOIN (SELECT SUM(amount) AS PaymentAmount, DATE_FORMAT(date_occurred,'%m/%y') AS Month FROM payments WHERE user_id = 1 GROUP BY MONTH(date_occurred), YEAR(date_occurred)) paymentdata ON Months.Month = paymentdata.Month ORDER BY Months.Month; ``` SQL Fiddle showing this working: <http://sqlfiddle.com/#!2/3f52a8/5>
I think your problem is the left join. The result of the select statement inside your left join does not contain any results with 0 money ... it looks like your ``` where payments.user_id = 1 ``` is the problem. Under the following link you can find an explanation of all types of joins: <http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/> Hope that helps
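The missing-month symptom comes down to NULL aggregates on the outer side of a LEFT JOIN; a toy SQLite sketch with invented month rows, showing `COALESCE` turning the empty month into 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE months (m TEXT);
    CREATE TABLE payments (m TEXT, amount REAL);
    INSERT INTO months VALUES ('01/14'), ('02/14'), ('03/14');
    INSERT INTO payments VALUES ('01/14', 10), ('03/14', 5);
""")

# 02/14 has no payments: SUM over its group is NULL, COALESCE makes it 0.
rows = conn.execute("""
    SELECT months.m, COALESCE(SUM(p.amount), 0)
    FROM months LEFT JOIN payments p ON p.m = months.m
    GROUP BY months.m ORDER BY months.m
""").fetchall()
```

Every month appears in the output, with 0 rather than a gap for the month that has no payments, which is what the chart needs.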
Mysql sum for last 12 months
[ "", "mysql", "sql", "" ]
I have been trying to do the bellow query but honestly it's driving me crazy. I have 2 Tables on MS SQL CE 4.0 Table 1 Name: Items * ID * Item\_Code * Logged\_by * Description ``` ID | Item_Code | Logged_by | Description 1 | A | Pete | just an A 2 | B | Mary | Seams like a B 3 | C | Joe | Obviously this is a C 4 | D | Pete | This is another A ``` Table 2 Name: Item\_Comments * ID * Item\_Code * Comment * Date ``` ID | Item_Code | Comment | Date 1 | B | Done | 2014/08/08 2 | A | Nice A | 2014/08/08 3 | B | Send 1 More | 2014/08/09 4 | C | Done | 2014/08/10 5 | D | This is an A | 2014/08/10 6 | D | Opps Sorry | 2014/08/11 ``` The wanted result: I'm looking to join the most recent comment from **Item\_Comments** to the **Items** Table ``` ID | Item_Code | Logged_by | Description | Comment 1 | A | Pete | just an A | Nice A 2 | B | Mary | Seams like a B | Send 1 More 3 | C | Joe | Obviously this is a C | Done 4 | D | Pete | This is another A | Opps Sorry ``` I did this query but I'm getting all the information =( mixed. ``` SELECT * FROM Items t1 JOIN (SELECT Item_Code, Comment, MAX(date) as MyDate FROM Item_Comments Group By Item_Code, Comment, Date ) t2 ON Item_Code= Item_Code ORDER BY t1.Item_Code; ``` Do you know any way to do this ?
Try: ``` select x.*, z.comment from items x join (select item_code, max(date) as latest_dt from item_comments group by item_code) y on x.item_code = y.item_code join item_comments z on y.item_code = z.item_code and y.latest_dt = z.date ``` **Fiddle test:** <http://sqlfiddle.com/#!6/d387f/8/0> You were close with your query but in your inline view aliased as t2 you were grouping by comment, leaving the max function to not actually aggregate anything at all. In t2 you should have just selected item\_code and max(date) and grouped only by item\_code, then you can use that to join into item\_comments (y and z in my query above). This is a second way of doing this using a subquery, however I would stick to the above (a join w/ an inline view): ``` select i.*, c.comment from items i join item_comments c on i.item_code = c.item_code where c.date = (select max(x.date) from item_comments x where x.item_code = c.item_code) order by i.id ``` **Fiddle test:** <http://sqlfiddle.com/#!6/d387f/11/0>
Note that if you run this inner piece you get every single record: ``` SELECT Item_Code, Comment, MAX(date) as MyDate FROM Item_Comments Group By Item_Code, Comment, Date ``` You want only the most recent comment. Assuming this is SQL Server 2008 or earlier, this gets you the most recent date for each Item_Code: ``` SELECT Item_Code, MAX(date) as MyDate FROM Item_Comments Group By Item_Code ``` Now you need to join that back and look up the comment on that date: ``` SELECT C.* FROM Item_Comments C INNER JOIN (SELECT Item_Code, MAX(date) as MyDate FROM Item_Comments Group By Item_Code ) t2 ON C.Item_Code= t2.Item_Code AND C.date = t2.MyDate ``` Now you can use that to join back to your original table: ``` SELECT t1.*, LatestComment.* FROM Items t1 INNER JOIN ( SELECT C.* FROM Item_Comments C INNER JOIN (SELECT Item_Code, MAX(date) as MyDate FROM Item_Comments Group By Item_Code ) t2 ON C.Item_Code= t2.Item_Code AND C.date = t2.MyDate ) LatestComment On LatestComment.Item_Code = t1.Item_Code ``` Depending on the actual database you are using, this can get much simpler. That's why you need to tag your database and version.
Inner Join + select the most recent
[ "", "sql", "sql-server-ce", "" ]
I have an Oracle query that counts the number of times something appears along with the grouped by details. It is similar to this: ``` SELECT COUNT(1) AS Num_Found, Column_A, Column_B, Column_C FROM Some_Table GROUP BY Column_A, Column_B, Column_C ``` I get result someting like this ``` |-----------|----------|----------|----------| | Num_Found | Column_A | Column_B | Column_C | | 145 | Acct1 | SubAcct1 | XXXX | | 6 | Acct1 | SubAcct1 | yyyy | | 346 | Acct2 | SubAcct2 | qwert | | 97 | Acct2 | SubAcct2 | plkmn | |-----------|----------|----------|----------| ``` I am not sure how to get Column\_A, Column\_B, Column\_C based on the max count of Num\_Found. So that I can get a result like this: ``` |-----------|----------|----------|----------| | Num_Found | Column_A | Column_B | Column_C | | 145 | Acct1 | SubAcct1 | XXXX | | 346 | Acct2 | SubAcct2 | qwert | |-----------|----------|----------|----------| ```
Try this. Based on the output you want, it looks like you want to partition the result by column_a: ``` select num_found, column_a, column_b, column_c from ( select a.*, max(num_found) over (partition by column_a) as max_num from Some_Table a) where num_found = max_num ```
You group by three columns and of the resulting groups, you only wish to keep the one with the highest count for each combination of two of the columns? It could be done with something like: ``` select num_found, column_a, column_b, column_c from ( SELECT COUNT(*) AS Num_Found, Column_A, Column_B, Column_C , row_number() over ( partition by column_a, column_b order by count(*) desc ) rn FROM Some_Table GROUP BY Column_A, Column_B, Column_C ) where rn = 1 ``` Just note, that if (for same column\_a,column\_b combination) two different column\_c values have the same highest count, then it will be "random" which column\_c value will be in the output.
Oracle Top Max Count Results Sql Query
[ "", "sql", "oracle", "subquery", "" ]
Is it possible to create some enumeration for `'a','b','test','123','blabla'` in the following statement? ``` sum(case when col1 in ('a','b','test','123','blabla') then col2 end) as sum ``` I've tried to read it from `letters_table` like this: ``` sum(case when col1 in (select letter from letters_table) then col2 end) as sum ``` but it told me `Cannot perform an aggregate function on an expression containing an aggregate or a subquery.` and the following is not working for me: ``` DECLARE @letters varchar(10) select @letters = letter FROM letters_table sum(case when col1 in (@letters) then col2 end) as sum ``` because when I print `@letters`, there is just the last one, `'blabla'`
Your second example using the subquery is probably the best - you just need to do the subquery and aggregation in two separate phases, like this: ``` select sum(x.col2) as sum from ( select case when col1 in (select letter from letters_table) then col2 else 0 end as col2 from YourTableName ) x ``` This should work as intended, and make the parser happy.
If you have col1 (a text column) and col2 (a numeric column) in a table, say `myTable` and letter (a text column) in another table, say `letters_table`, then you can use a JOIN, as below: ``` SELECT --mt.col1, Include for totals by col1 SUM(mt.col2) FROM myTable mt INNER JOIN letters_table lt ON mt.col1 = lt.letter; -- GROUP BY mt.col1 Include for totals by col1 -- ORDER BY mt.col1; Include for totals by col1 ```
Create enumeration and use it in aggregate function
[ "", "sql", "sql-server", "" ]
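The accepted workaround — evaluate the `CASE` in a derived table, then `SUM` outside — is plain SQL that other engines accept too. A hedged sketch with sqlite3 and made-up data (SQLite itself does not raise the SQL Server error, but the derived-table form shown is the one that satisfies both):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE letters_table (letter TEXT);
    INSERT INTO letters_table VALUES ('a'), ('b');
    CREATE TABLE your_table (col1 TEXT, col2 INTEGER);
    INSERT INTO your_table VALUES ('a', 1), ('b', 2), ('x', 5);
""")

# CASE runs in the inner query; the aggregate only sees a plain column
(total,) = con.execute("""
    SELECT SUM(x.col2)
    FROM (SELECT CASE WHEN col1 IN (SELECT letter FROM letters_table)
                      THEN col2 ELSE 0 END AS col2
          FROM your_table) AS x
""").fetchone()
print(total)  # 3 -- only rows whose col1 appears in letters_table count
```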
I need to insert data into a table with a concatenation depending on the value of one of the columns. E.g. **Table A**: ``` +-------+-----------+-----------+ | rowid | objectIdA | objectIdB | +-------+-----------+-----------+ | 1 | 1 | 2 | | 2 | 1 | null | | 3 | 2 | 4 | | 4 | 5 | null | +-------+-----------+-----------+ ``` Would result in **Table B**: ``` +-------+-----------+ | rowid | concatCol | +-------+-----------+ | 1 | 1_2 | | 2 | 1 | | 3 | 2_4 | | 4 | 5 | +-------+-----------+ ``` So basically concatenate with an underscore if one of the columns isn't null.
What about this? If the object ID columns are numeric, cast them first so the `+` concatenates instead of failing: ``` INSERT INTO TableB (rowid, concatCol) SELECT rowid, CAST(objectIdA AS varchar(20)) + CASE WHEN objectIdB IS NULL THEN '' ELSE '_' + CAST(objectIdB AS varchar(20)) END FROM TableA ```
Depends on your database system and programming language. A pure SQL solution can get pretty ugly (especially if you need to support update and insert). ``` INSERT INTO tableB(rowid, concatCol) (SELECT rowid, objectIdA+'_'+objectIDB FROM tableA) ``` Would work if + is the concatenation operation (Oracle uses `||` I think) and if the second table is empty (and if rowid is actually insertable). If the columns are nullable you can use your RDBMS function for `CASE` to differentiate between the case you need to add second row and separator or skip both.
SQL - Inserting into table with concatenation with condition
[ "", "sql", "sql-server", "concatenation", "" ]
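The NULL-aware `CASE` around the separator is the portable part of the accepted answer. A small runnable sketch in sqlite3 with the question's rows (note the concatenation operator differs by dialect: SQLite uses `||` where T-SQL uses `+`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_a (id INTEGER, object_id_a INTEGER, object_id_b INTEGER);
    INSERT INTO table_a VALUES (1, 1, 2), (2, 1, NULL), (3, 2, 4), (4, 5, NULL);
""")

# append '_' + second id only when it is not NULL
rows = con.execute("""
    SELECT id,
           object_id_a || CASE WHEN object_id_b IS NULL
                               THEN '' ELSE '_' || object_id_b END AS concat_col
    FROM table_a
    ORDER BY id
""").fetchall()
print(rows)  # [(1, '1_2'), (2, '1'), (3, '2_4'), (4, '5')]
```

Without the `CASE`, a plain `a || '_' || b` would return NULL for every row where `object_id_b` is NULL.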
I have 3 tables, a `Users` table, a `Preferences` table, and a `User_Preferences` table. They look like this: `Users` Table: * **User\_ID**: Primary Key * **User\_Name**: Username * ... other unimportant fields `Preferences` Table: * **Pref\_ID**: Primary Key * **Pref\_Name**: Preference Name * ... other unimportant fields `User_Preferences` Table: * **Pref\_ID**: The Preference being set * **User\_ID**: The User the Preference is for * **Pref\_Val**: The Value of the Preference Over the years, many new Preferences have been added, but now the underlying architecture has changed somewhat. Before, if the `User_Preferences` table didn't contain a link for a certain `Preference`, no matter, but now, every `Preference` needs to be assigned to every `User`. So, given those three tables are populated, what query can I run that will give me a list of `User_Preferences` that are missing? For example, if there are 2 users, and 2 preferences, and the first user has both but the second has one, it will give me a single row for that user and that preference
You can do this using a `cross join` to generate all combinations and then filter out the ones that exist using a `left join`: ``` select u.*, p.* from users u cross join preferences p left join user_preferences up on up.user_id = u.user_id and up.preference_id = p.preference_id where up.user_id is null; ```
Here is an alternative with an EXISTS. ``` SELECT p.preference_id, u.user_id FROM users u CROSS JOIN preferences p WHERE NOT EXISTS ( SELECT NULL FROM user_preferences up WHERE up.user_id = u.user_id AND up.preference_id = p.preference_id ) ```
Find Missing Relationships using PL/SQL
[ "", "sql", "oracle", "" ]
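The accepted `CROSS JOIN` / `LEFT JOIN` / `IS NULL` pattern carries over to any engine. A minimal sketch in sqlite3 (names and rows invented; user 2 is deliberately missing preference 20):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users            (user_id INTEGER, user_name TEXT);
    CREATE TABLE preferences      (pref_id INTEGER, pref_name TEXT);
    CREATE TABLE user_preferences (pref_id INTEGER, user_id INTEGER, pref_val TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO preferences VALUES (10, 'theme'), (20, 'lang');
    INSERT INTO user_preferences VALUES (10, 1, 'dark'), (20, 1, 'en'), (10, 2, 'light');
""")

# cross join builds every user/preference pair; the left join + IS NULL
# keeps only pairs with no matching assignment row
missing = con.execute("""
    SELECT u.user_id, p.pref_id
    FROM users AS u
    CROSS JOIN preferences AS p
    LEFT JOIN user_preferences AS up
           ON up.user_id = u.user_id AND up.pref_id = p.pref_id
    WHERE up.user_id IS NULL
""").fetchall()
print(missing)  # [(2, 20)] -- the one missing user/preference pair
```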
A have for example following data: ``` a 10 b 5 c 15 d 2 a 3 b 6 c 8 d 10 ``` How to sum `a` and `b` together and `c` and `d` to get something like this? ``` ab 24 cd 35 ```
``` select sum(case when col1 in ('a','b') then col2 end) as ab_sum, sum(case when col1 in ('c','d') then col2 end) as cd_sum from your_table ```
this could also help you, ``` DECLARE @TAB TABLE(NAME VARCHAR(1), MARK INT) INSERT INTO @TAB VALUES ('A',10), ('B', 5), ('C',15), ('D', 2), ('A', 3), ('B', 6), ('C', 8), ('D',10) SELECT NAME,SUM(MARK) FROM ( SELECT CASE WHEN NAME IN ('A','B') THEN 'AB' WHEN NAME IN ('C','D') THEN 'CD' END NAME, MARK FROM @TAB A ) LU GROUP BY NAME ```
how to sum listed values in column
[ "", "sql", "sql-server", "" ]
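The accepted answer's conditional sums can be checked directly against the question's data. A runnable sketch in sqlite3 (table name assumed; `CASE` without an `ELSE` yields NULL for non-matching rows, which `SUM` simply ignores):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE your_table (col1 TEXT, col2 INTEGER);
    INSERT INTO your_table VALUES ('a',10), ('b',5), ('c',15), ('d',2),
                                  ('a',3), ('b',6), ('c',8), ('d',10);
""")

row = con.execute("""
    SELECT SUM(CASE WHEN col1 IN ('a','b') THEN col2 END) AS ab_sum,
           SUM(CASE WHEN col1 IN ('c','d') THEN col2 END) AS cd_sum
    FROM your_table
""").fetchone()
print(row)  # (24, 35)
```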
I am working on a new search function in our software where a user will be allowed to search on any or all of 3 possible fields A, B and C. I expect that if anything is entered for a field it will be a complete entry and not just a partial one. So the user choices are * Field A, or * Field B, or * Field C, or * Fields A & B, or * Fields A & C, or * Fields B & C, or * Fields A, B & C. My question is what indexes should be created on this table to provide maximum performance? This will be running on SQL Server 2005 and up I expect and a good user experience is essential.
this is difficult to answer without knowing your data or its usage. Hopefully A, B , and C are not long string data types. If you have minimal Insert/Update/Delete and/or will sacrifice everything for index usage, I would create an index on each of these: ``` A, B , C <<<handles queries for: A, or A & B, or A, B & C A, C <<<handles queries for: A & C B, C <<<handles queries for: B, or B & C C <<<handles queries for: C ``` These should cover all combinations you have mentioned. Also, you will also need to be careful to write a query that will actually use the index. If you have an `OR` in your `WHERE` you'll probably not use an index. In newer versions of SQL Server than you have you can use `OPTION(RECOMPILE)` to compile the query based on the runtime values of local variables and usually eliminate all `OR` and use an index. See: [Dynamic Search Conditions in T-SQL by Erland Sommarskog](http://www.sommarskog.se/dyn-search.html) you can most likely use a dynamic query where you only add the necessary conditions onto the WHERE to get optional index usage: [The Curse and Blessings of Dynamic SQL by Erland Sommarskog](http://www.sommarskog.se/dynamic_sql.html) You can also see this answer for more on [dynamic search conditions](https://stackoverflow.com/a/3415629/65223)
Assuming searches are much more numerous, you will want to create an index on every subset of fields by which you wish to access your data. So that would be 6 indices if you wish to do it on the powerset of columns.
Best indexes to create when a search may be on any or all of 3 fields
[ "", "sql", "performance", "sql-server-2005", "" ]
I have records related to dates: ``` DATE AMOUNT 16.03.2013 3 16.03.2013 4 16.03.2013 1 16.03.2013 3 17.03.2013 4 17.03.2014 3 ``` I know how to sum them up for each day, but how could I sum them up by week?
Try this ``` SELECT to_char(DATE - 7/24,'IYYY'), to_char(DATE - 7/24,'IW'),SUM(AMOUNT) FROM YourTable GROUP BY to_char(DATE - 7/24,'IYYY'), to_char(DATE - 7/24,'IW') ``` **[FIDDLE DEMO](http://sqlfiddle.com/#!4/a037d/1)** --- *Output would be:* ``` +-----+-------+--------+ |YEAR | WEEK | AMOUNT | +-----+-------+--------+ |2013 | 11 | 18 | |2013 | 13 | 3 | +-----+-------+--------+ ```
You can use the [`TRUNC`](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions220.htm#SQLRF51945) function to truncate a date to the first day of its week. There are [a few ways of defining week](http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions255.htm#SQLRF52058). For example, if you want to treat Monday as the first day of the week, you can use the `IW` format, like this: ``` select trunc(date, 'IW') week, sum(amount) from YourTable group by trunc(date, 'IW'); ``` You can also use the `TO_CHAR` function as in "@Vignesh Kumer"'s answer. The point is that you truncate every date in the same week to one value, then group by that value. That's it.
Oracle SQL - Sum and group data by week
[ "", "sql", "oracle", "aggregate-functions", "" ]
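The same idea — truncate each date to the Monday that starts its week, then group — can be sketched outside Oracle. In SQLite, the modifier chain `date(d, 'weekday 0', '-6 days')` advances to the Sunday that ends the week and steps back to its Monday (assumption: Monday-start weeks, matching Oracle's `IW`; the question's dates are reused in ISO form):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (d TEXT, amount INTEGER);
    INSERT INTO t VALUES
        ('2013-03-16', 3), ('2013-03-16', 4), ('2013-03-16', 1),
        ('2013-03-16', 3), ('2013-03-17', 4), ('2014-03-17', 3);
""")

# every date collapses to the Monday of its week before grouping
rows = con.execute("""
    SELECT date(d, 'weekday 0', '-6 days') AS week_start, SUM(amount)
    FROM t
    GROUP BY week_start
    ORDER BY week_start
""").fetchall()
print(rows)  # [('2013-03-11', 15), ('2014-03-17', 3)]
```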
I'd like to know if you can exclude a column from a `MINUS` operation For example: ``` select column 1, column 2, column 3 from table 1 minus select column 1, column 2 from table 2; ``` while column 3 is excluded from the operation. Is there anyway to do that? Thank you
You have a few alternatives. **#1** Minus without column3 then rejoin result to the original table to get the missing values for column3. ``` select * from ( select column1, column2 from table1 minus select column1, column2 from table2 ) join table1 using (column1, column2) ``` **#2** Use an ANTI-JOIN the NOT EXISTS clause should work the same as NOT IN anyway. ``` select column1, column2, column3 from table1 where (column1, column2) not in ( select column1, column2 from table2 ) ```
Perhaps you are after this? ``` select table1.column1, table1.column2, table1.column3 from table1 left join table2 on table1.column1 = table2.column1 and table1.column2 = table2.column2 where table2.column1 IS NULL ``` This will result in only those table1 records that do not have matching column1 and column2 values in table2.
Exclude columns from Minus operations in oracle sql
[ "", "sql", "oracle", "" ]
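SQLite spells `MINUS` as `EXCEPT`, but the rejoin trick from alternative #1 carries over unchanged: run the set operation on the shared columns, then join back to recover the excluded column. A quick sketch with invented rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (column1 INTEGER, column2 TEXT, column3 TEXT);
    CREATE TABLE table2 (column1 INTEGER, column2 TEXT);
    INSERT INTO table1 VALUES (1, 'a', 'x'), (2, 'b', 'y');
    INSERT INTO table2 VALUES (1, 'a');
""")

rows = con.execute("""
    SELECT t1.column1, t1.column2, t1.column3
    FROM (SELECT column1, column2 FROM table1
          EXCEPT
          SELECT column1, column2 FROM table2) AS d
    JOIN table1 AS t1
      ON t1.column1 = d.column1 AND t1.column2 = d.column2
""").fetchall()
print(rows)  # [(2, 'b', 'y')] -- column3 rides along but stays out of the set op
```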
I am trying to run this sql query in PHPMyAdmin: ``` --create a mysql contact table --delete contact table if already exists DROP TABLE IF EXISTS contact; --create new table named contact with fields as specified CREATE TABLE contact( contactID int PRIMARY KEY, name VARCHAR(50), company VARCHAR(30), email VARCHAR(50) ); --add these to the table INSERT INTO contact VALUES (0, 'Bill Gates', 'Microsoft', 'bill@micro.com'); INSERT INTO contact VALUES (1, 'Larry Page', 'Google', 'larry@google.com'); --displays whats in this SELECT * FROM contact; ``` I thought that in sql this is considered a comment: `--I'm a comment` However PHPMyAdmin isn't accepting it. I get this error: ``` SQL query: --create a mysql contact table --delete contact table if already exists DROP TABLE IF EXISTS contact; MySQL said: Documentation 1064 - You have an error in your SQL syntax; Check the manual that corresponds to your MySQL server version for the right syntax to use near '--create a mysql contact table --delete contact table if already exists DROP T' at line 1 ``` I get the same error with the same code on these sql checkers also: <http://www.piliapp.com/mysql-syntax-check/> <http://sqlfiddle.com/>
You need a space after the `--`. Otherwise it's not considered a valid comment in MySQL.
You need a space if you use "--" style comments, per the [manual](http://dev.mysql.com/doc/refman/5.5/en/comments.html): ``` -- create a mysql contact table -- delete contact table if already exists DROP TABLE IF EXISTS contact; -- create new table named contact with fields as specified CREATE TABLE contact( contactID int PRIMARY KEY, name VARCHAR(50), company VARCHAR(30), email VARCHAR(50) ); -- add these to the table INSERT INTO contact VALUES (0, 'Bill Gates', 'Microsoft', 'bill@micro.com'); INSERT INTO contact VALUES (1, 'Larry Page', 'Google', 'larry@google.com'); -- displays whats in this SELECT * FROM contact; ```
SQL not accepting my comments
[ "", "sql", "phpmyadmin", "xampp", "comments", "" ]
Table 1: ``` Temp ResID Code Date 11 1 SPR 12 1 SPG 2009-10-05 13 1 SPR 14 1 SPG 2011-10-08 ``` Table 2: ``` TempID ResID InDate OutDate 21 1 2009-10-05 2010-11-15 22 1 2011-10-08 2011-11-09 ``` Table 3: (Desired Result) ``` ResID Code Date 1 SPR 2010-11-15 1 SPG 2009-10-05 1 SPR 2011-11-09 1 SPG 2011-10-08 ``` I have two tables as above. I need to update the Table 1 Date column for every ID, for every row where the code is SPR. The SPG value in the Table 1 Date column is equal to the InDate column value for the same resident. Please advise on the query. How do I achieve this with a query joining the two tables, Table 1 and Table 2, to get Table 3?
**Edited from a select statement to an Update** I think this will work for you. See the [SqlFiddle](http://sqlfiddle.com/#!6/fcbe38/1) ``` Update Table1 set Table1.[Date] = Table2.OutDate from Table1 inner join ( select Temp, ResID, Code, Date, row_number() over(partition by ResID order by Temp) as RowId from Table1 where Code = 'SPR' ) as SPR on Table1.Temp = SPR.Temp inner join ( select ResID, Code, Date, row_number() over(partition by ResID order by Temp) as RowId from Table1 where Code = 'SPG' ) as SPG on SPG.RowId = SPR.RowId and SPG.ResID = SPR.ResID inner join Table2 on Table2.InDate = SPG.Date and Table2.ResID = SPG.ResID ```
You could delete all the `SPR` rows and then generate them: ``` delete Table1 where code = 'SPR'; insert Table1 (ResID, Code, [Date]) select t1.ResID , 'SPR' , OutDate from Table1 t1 join Table2 t2 on t1.ResID = t2.ResID and t1.[Date] = t2.InDate where t1.Code = 'SPG'; ``` [Example at SQL Fiddle.](http://sqlfiddle.com/#!6/2faab/1/0)
Fetch Date based on another date
[ "", "sql", "sql-server", "" ]
We have a lot of C# programs and SQL stored procs that call one horribly written SQL stored proc (320+ lines containing lots of similar IF's that have been copied & pasted for years) that I need to finally clean up and speed up. Notes: * This stored proc is executed 1,000's of times per hour. * This question is about this stored proc, not how it is referenced or changing references. We have a table with this structure: * type (VARCHAR) * last updated (DATETIME) * counter (INT) * auto\_increment flag (INT, either 0 or 1) (i.e.: an incrementing customer # 123456, and a non-incrementing billing cycle # 1001) When executed for: * customer #: increment 123456 to 123457, last\_updated = now, & return new 123457. * billing cycle: the value 1001 is returned. My boss wants **only one command that hits the table**. I've managed to accomplish that with this snippet: ``` DECLARE @return_value INT, @parmtype VARCHAR(10) SET @parmtype = 'CUSTOMER' UPDATE [system_numbers] -- 1 hit SET @return_value = current_number = current_number + auto_increment, last_updated_on = CASE auto_increment WHEN 1 THEN GETDATE() ELSE last_updated_on END WHERE [type] = @parmtype ``` My problem is that for the non-incrementing [type]'s, my command needlessly updates the table, writes to the log, and is slower than a SELECT. My question is ... **what one command can I use that removes this example's two hits to the table?** ``` DECLARE @return_value INT, -- actually an OUTPUT param in my SP @autoincrement INT, @parmtype VARCHAR(10) SET @parmtype = 'CUSTOMER' SELECT @autoincrement = auto_increment -- 1st hit FROM [system_numbers] WHERE [type] = @parmtype IF (@autoincrement = 1) BEGIN UPDATE [system_numbers] -- 2nd hit here - or - SET @return_value = current_number = current_number + auto_increment, last_updated_on = GETDATE() WHERE [type] = @parmtype END ELSE BEGIN SELECT @return_value = current_number -- 2nd hit here FROM [system_numbers] WHERE [type] = @parmtype END ```
I would probably consider the following approach: ``` ... UPDATE dbo.system_numbers SET @return_value = current_number = current_number + auto_increment, last_updated_on = GETDATE() WHERE [type] = @parmtype AND auto_increment = 1 ; IF @@ROWCOUNT = 0 SELECT @return_value = current_number FROM dbo.system_numbers WHERE [type] = @parmtype ; ``` The first statement updates the value only if it happens to be an incrementing one, and also stores the updated value in `@return_value`. The second statement only executes if the first statement did not update any row. It just assigns the (current) value to `@return_value`. So, this is one hit only when the value is an incrementing one. When it is non-incrementing, a second hit is involved, but in that case also nothing gets updated, only read.
I think this would simplify things a bit ``` DECLARE @return_value INT, -- actually an OUTPUT param in my SP @autoincrement INT, @parmtype VARCHAR(10) SET @parmtype = 'CUSTOMER' SELECT @autoincrement = auto_increment, @return_value = auto_increment + current_number -- 1st hit FROM [system_numbers] WHERE [type] = UPPER(@parmtype) IF (@autoincrement = 1) BEGIN UPDATE [system_numbers] -- 2nd hit here - or - SET current_number = @return_value, last_updated_on = GETDATE() WHERE [type] = UPPER(@parmtype) END ```
UPDATE command SET @return = [int_a] = [int_a] + auto_increment WHERE <clause>
[ "", "sql", "sql-server", "t-sql", "" ]
I am writing a procedure in order to handle a file name in SSIS. Overview: I am capturing the file name during a Text file load process in SSIS. I have written a procedure in order to split this file name into different components and return the values in the form of Variables which I would be using further down the SSIS package. **Problem:** This file name is of the format `@FileName = "FILE_DATE_REF_DATETIME"`. All I need is to split this into parts like "FILE" and "DATE". I am able to achieve this by using `SUBSTRING(@Filename,0,CHARINDEX('_',@FileName))` and `Substring(@FileName,CHARINDEX('_',@FileName)+1,CHARINDEX('_',SUBSTRING(@Filename,CHARINDEX('_',@Filename)+1,Len(@Filename)))-1)`. But the major problem here is that when we get an additional '\_' in FILE, it completely goes wrong. Can anyone please suggest a way to split the above file name format into FILE and DATE? **EDIT** Samples of FileNames: * asdfkg\_20140710\_ets20140710\_0525\_theds * asdjjf\_they\_20140710\_ets20140710\_0525\_theds * oiuth\_theyb\_wgb\_20140710\_ets20140710\_0526\_theds I need to extract everything before the 20140710 and also the 20140710 itself.
You can do it using [PATINDEX](http://msdn.microsoft.com/en-us/library/ms188395.aspx) instead of [CHARINDEX](http://msdn.microsoft.com/en-us/library/ms186323.aspx) ``` select SUBSTRING(@Filename,0,PATINDEX('%[_][0-9]%',@FileName)) ```
If the last part of the filename is more reliable (no unexpected underscores), you could `REVERSE` it and use `CHARINDEX` to find the fourth underscore (and reverse the substrings again afterwards). Otherwise, if you can trust the date format, you can use `PATINDEX` with a horrible expression like ``` '%[_][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][_]%' ```
Substring depending on Delimiter in the string
[ "", "sql", "sql-server", "stored-procedures", "sql-server-2012", "ssis", "" ]
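Outside T-SQL, the accepted answer's "first underscore followed by a digit" idea (`PATINDEX('%[_][0-9]%', ...)`) is a one-line regular expression. A hedged Python sketch over the question's sample names (the pattern additionally assumes the date part is exactly 8 digits):

```python
import re

names = [
    "asdfkg_20140710_ets20140710_0525_theds",
    "asdjjf_they_20140710_ets20140710_0525_theds",
    "oiuth_theyb_wgb_20140710_ets20140710_0526_theds",
]

# non-greedy prefix up to the first "_<8 digits>_" boundary
pat = re.compile(r"^(.*?)_(\d{8})_")

parts = [pat.match(n).groups() for n in names]
print(parts)
# [('asdfkg', '20140710'), ('asdjjf_they', '20140710'),
#  ('oiuth_theyb_wgb', '20140710')]
```

Extra underscores inside the FILE part are harmless because the match only splits at an underscore that is immediately followed by the 8-digit date.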
I have two tables as below. Table 1: `mysql> select * from Access_ID;` ``` +------------+----------------+ | SERVICE_ID | ACCESS_LIST_ID | +------------+----------------+ | 1035 | 1040 | | 1094 | 1134 | +------------+----------------+ ``` Table 2: `mysql> select * from PROFILE;` ``` +-------+-------------+-------------+---------+ | PR_ID | PR_NAME | PR_PARENTID | PR_TYPE | +-------+-------------+-------------+---------+ | 1035 | raj | 1022 | name | | 1040 | computer | 1035 | course | | 1094 | suresh | 1077 | name | | 1134 | electronics | 1077 | course | +-------+-------------+-------------+---------+ ``` I require the output below: ``` | raj | computer | | suresh | electronics | ```
Maybe something like this: ``` SELECT T2.PR_NAME AS NAME, T3.PR_NAME AS SERVICE FROM Access_ID T1 JOIN PROFILE T2 ON T1.SERVICE_ID = T2.PR_ID JOIN PROFILE T3 ON T1.ACCESS_LIST_ID = T3.PR_ID ``` **[FIDDLE DEMO](http://sqlfiddle.com/#!2/ee138/6)** *Output would be:* ``` +---------+-------------+ | NAME | SERVICE | +---------+-------------+ | raj | computer | | suresh | electronics | +---------+-------------+ ```
Try: ``` select pro1.PR_NAME ,pro2.PR_NAME from Access_ID acs inner join PROFILE pro1 on (acs.SERVICE_ID = pro1.PR_ID) inner join PROFILE pro2 on (acs.ACCESS_LIST_ID = pro2.PR_ID) ```
Mysql query for getting values from two tables
[ "", "mysql", "sql", "" ]
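Joining `PROFILE` twice under different aliases, as both answers do, works the same in any engine — once per foreign key you need to resolve. A runnable sqlite3 sketch with the question's rows (lower-case names assumed for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE access_id (service_id INTEGER, access_list_id INTEGER);
    CREATE TABLE profile   (pr_id INTEGER, pr_name TEXT);
    INSERT INTO access_id VALUES (1035, 1040), (1094, 1134);
    INSERT INTO profile VALUES (1035, 'raj'), (1040, 'computer'),
                               (1094, 'suresh'), (1134, 'electronics');
""")

rows = con.execute("""
    SELECT t2.pr_name, t3.pr_name
    FROM access_id AS t1
    JOIN profile AS t2 ON t1.service_id = t2.pr_id       -- resolves the name
    JOIN profile AS t3 ON t1.access_list_id = t3.pr_id   -- resolves the course
    ORDER BY t2.pr_name
""").fetchall()
print(rows)  # [('raj', 'computer'), ('suresh', 'electronics')]
```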
Using MSSQL 2012, I have three tables. Table 0 has a value which I use to join with some other tables like so: ``` Table0 ---------- keyval 1 Table1 ---------- keyval someval 1 blah 1 blah1 Table2 ---------- keyval someotherval 1 woo 1 woo1 1 woo2 ``` Now if I do the following query ``` SELECT Table1.someval AS val1, Table2.someotherval AS val2 FROM Table0 INNER JOIN Table1 ON Table0.keyval = Table1.keyval INNER JOIN Table2 ON Table0.keyval = Table2.keyval WHERE Table0.keyval = '1' ``` I get the following results: ``` val1 val2 ---------- blah woo blah woo1 blah woo2 blah1 woo blah1 woo1 blah1 woo2 ``` My question is, how can I ensure that ***each value only displays once in the results?*** I would like to know how to run a query to get the following results ``` val1 val2 ---------- blah woo blah1 woo1 null woo2 ``` I have tried all different kinds of joins but with no good results. I have a feeling I need UNION somewhere but I am not sure where
You seem to want a random subset of the cartesian product where each value is represented once, so something like this should do: ``` with t0(k) as ( select 1 ), t1(k,v1) as ( select 1,'blah' union select 1,'blah1' ), t2(k,v2) as ( select 1,'woo' union select 1,'woo1' union select 1,'woo2' ), t3(k,v1,rn) as ( select t1.k, v1, row_number() over (order by t1.v1) as rn1 from t1 join t0 on t0.k = t1.k ), t4(k,v2,rn) as ( select t2.k, v2, row_number() over (order by t2.v2) as rn2 from t2 join t0 on t0.k = t2.k ) select t3.v1, t4.v2 from t3 full join t4 on t3.k = t4.k and t3.rn = t4.rn V1 V2 blah woo blah1 woo1 (null) woo2 ``` Edit: using tables instead of CTE t0, t1 and t2 ``` create table table0 (k int not null); insert into table0 (k) select 1; create table table1 (k int not null, v1 varchar(5) not null); create table table2 (k int not null, v2 varchar(5) not null); insert into table1 (k,v1) select 1,'blah' union select 1,'blah1' union select 1, 'jojo' union select 1, 'jojo1'; insert into table2 (k,v2) select 1,'woo' union select 1,'woo1' union select 1,'woo2'; with t3(k,v1,rn) as ( select t1.k, v1, row_number() over (order by t1.v1) as rn1 from table1 t1 join table0 t0 on t0.k = t1.k ), t4(k,v2,rn) as ( select t2.k, v2, row_number() over (order by t2.v2) as rn2 from table2 t2 join table0 t0 on t0.k = t2.k ) select t3.v1, t4.v2 from t3 full join t4 on t3.k = t4.k and t3.rn = t4.rn; V1 V2 blah woo blah1 woo1 jojo woo2 jojo1 (null) delete from table1 where v1 like 'joj%'; with t3(k,v1,rn) as ( select t1.k, v1, row_number() over (order by t1.v1) as rn1 from table1 t1 join table0 t0 on t0.k = t1.k ), t4(k,v2,rn) as ( select t2.k, v2, row_number() over (order by t2.v2) as rn2 from table2 t2 join table0 t0 on t0.k = t2.k ) select t3.v1, t4.v2 from t3 full join t4 on t3.k = t4.k and t3.rn = t4.rn; V1 V2 blah woo blah1 woo1 (null) woo2 ```
I have a feeling it will be a lot easier to pull the unique values into a single column, and then use a second 'dummy' column to specify where the values came from. Since they don't necessarily match up with eachother, there's no reason to list them out in pairs. ``` SELECT DISTINCT Table1.someval AS val,'Table1' As source FROM Table1 WHERE Table1.keyval = '1' UNION SELECT DISTINCT Table2.someval as val, 'Table2' As source FROM Table2 WHERE Table2.keyval = '1' ``` Note, in this case, I dropped `Table0` entirely, with the data set up the way it is in your question, the only time `Table0` will matter is if there is no keyval = 1 in `Table0`, in which case you will get no results. But, if that matters to you, ``` SELECT DISTINCT Table1.someval AS val,'Table1' As source FROM Table1 INNER JOIN Table0 ON Table0.keyval = Table1.keyval WHERE Table0.keyval = '1' UNION SELECT DISTINCT Table2.someval as val, 'Table2' As source FROM Table2 INNER JOIN Table0 ON Table0.keyval = Table2.keyval WHERE Table0.keyval = '1' ```
How to avoid extraneous data on table joins
[ "", "sql", "sql-server", "" ]
The following query returns the results that I need but I have to add the ID of the row to then update it. If I add the ID directly in the select statement it will return me more results than I need because each ID is unique, so the DISTINCT statement sees every line as unique. ``` SELECT DISTINCT ucpse.MemberID, ucpse.ProductID, ucpse.UserID FROM UserCustomerProductSalaryExceptions as ucpse WHERE EXISTS (SELECT NULL FROM UserCustomerProductSalaryExceptions as upcse2 WHERE ucpse.userid = upcse2.userid AND ucpse.MemberID = upcse2.MemberID AND ucpse.ProductID = upcse2.ProductID GROUP BY upcse2.UserID, upcse2.memberid, upcse2.productid HAVING COUNT(UserID) >= 2 ) ``` So basically I need to add ucpse.ID in the Select statement while keeping DISTINCT values for MemberID, ProductID and UserID. Any ideas? Thank you
According to your comment: *If the data has been duplicated 67 times for a given employee with a given product and a given client, I need to keep only one of those records. It's not important which one, so this is why I use DISTINCT to obtain a unique combination of a given employee with a given product and a given client.* You can use `MIN()` or `MAX()` and `GROUP BY` instead of `DISTINCT`: ``` SELECT MAX(ucpse.ID) AS ID, ucpse.MemberID, ucpse.ProductID, ucpse.UserID FROM UserCustomerProductSalaryExceptions as ucpse WHERE EXISTS (SELECT NULL FROM UserCustomerProductSalaryExceptions as upcse2 WHERE ucpse.userid = upcse2.userid AND ucpse.MemberID = upcse2.MemberID AND ucpse.ProductID = upcse2.ProductID GROUP BY upcse2.UserID, upcse2.memberid, upcse2.productid HAVING COUNT(UserID) >= 2 ) GROUP BY ucpse.MemberID, ucpse.ProductID, ucpse.UserID ``` **UPDATE:** From your comments I think the query below is what you need. It keeps only the row with the highest ID for each combination; note there is no HAVING filter, so rows that were never duplicated survive untouched: ``` DELETE FROM UserCustomerProductSalaryExceptions WHERE ID NOT IN ( SELECT MAX(ucpse.ID) FROM UserCustomerProductSalaryExceptions AS ucpse GROUP BY ucpse.MemberID, ucpse.ProductID, ucpse.UserID ) ```
If all you want is to delete the duplicates, this will do it: ``` WITH X AS (SELECT ID, ROW_NUMBER() OVER (PARTITION BY MemberID, ProductID, UserID ORDER BY ID) AS DupRowNum FROM UserCustomerProductSalaryExceptions ) DELETE X WHERE DupRowNum > 1 ```
Select a NON-DISTINCT column in a query that return distincts rows
[ "", "sql", "sql-server", "t-sql", "" ]
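The keep-`MAX(ID)`-per-group idea behind the de-duplication translates directly to other engines. A sketch in sqlite3 (table and rows invented): only the highest ID of each (member, product, user) triple survives the `DELETE`, and singleton groups are untouched:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE exceptions (id INTEGER, member_id INTEGER,
                             product_id INTEGER, user_id INTEGER);
    INSERT INTO exceptions VALUES
        (1, 7, 8, 9), (2, 7, 8, 9), (3, 7, 8, 9),  -- duplicated triple
        (4, 1, 2, 3);                               -- unique triple, must be kept
""")

con.execute("""
    DELETE FROM exceptions
    WHERE id NOT IN (SELECT MAX(id)
                     FROM exceptions
                     GROUP BY member_id, product_id, user_id)
""")

kept = con.execute("SELECT id FROM exceptions ORDER BY id").fetchall()
print(kept)  # [(3,), (4,)]
```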
I have a table structure like `tblCustData`: ``` ID UserID Fee FeePaid 1 12 150 0 2 12 100 0 3 12 50 0 ``` The value is to be updated in the `FeePaid` column such that if I have the value `200` in the `@Amt` variable, then it should update **any** two rows. Output should be like ``` ID UserID Fee FeePaid 1 12 150 150 2 12 100 50 3 12 50 0 ``` `FeePaid` should not be greater than the `Fee` column. But if I pass `350` in the `@Amt` variable, it should produce output like ``` ID UserID Fee FeePaid 1 12 150 200 2 12 100 100 3 12 50 50 ``` **only if `@Amt` exceeds the total value in the `Fee` column.** I cannot think beyond this query: ``` Update tblCustData Set FeePaid=@Amt Where UserID=12 ```
First with CTE syntax we prepare a table with sums distribution and then using unique field `Code` update the main table using CASE to handle all possible ways (including first row with remainder). ``` Declare @Amt int; SET @Amt=250; with T as ( SELECT ROW_NUMBER() OVER (ORDER BY Fee desc) as rn, * FROM tblCustData WHERE UserId=12 ) ,T2 as ( SELECT *, ISNULL((SELECT SUM(Fee-FeePaid) FROM T WHERE T1.RN<RN),0) as PrevSum FROM T as T1 ) UPDATE A SET A.FeePaid = A.FeePaid+ CASE WHEN (B.PrevSum+B.Fee-B.FeePaid<=@Amt) AND (B.RN<>1) THEN B.Fee-B.FeePaid WHEN (B.PrevSum+B.Fee-B.FeePaid<=@Amt) AND (B.RN=1) THEN @Amt-B.PrevSum WHEN B.PrevSum>=@Amt THEN 0 WHEN B.PrevSum+B.Fee-B.FeePaid>@Amt THEN @Amt-B.PrevSum END FROM tblCustData A JOIN T2 B ON A.Code = B.Code GO ``` `SQLFiddle demo`
Try this: ``` declare @t table (id int identity, UserId int, Fee money, FeePaid money) insert into @t (UserID, Fee, FeePaid) values (12, 150, 0) ,(12, 100, 0) ,(12, 50 , 0) declare @amt money = 200; -- change to 400 to test over paid declare @Fees money; select @Fees = sum(Fee) from @t; declare @derivedt table (deid int, id int, UserId int, Fee money, FeePaid money) insert into @derivedt (deid, id, UserId, Fee, FeePaid) select row_number() over (order by case when @amt <= @Fees then id else -id end asc), id, UserId, Fee, FeePaid from @t -- order by case when @amt <= @Fees then id else -id end asc ; with cte(deid, id, UserId, Fee, FeePaid, Remainder) as ( select 0 as deid, 0 as id, 0 as UserId, cast(0.00 as money) as Fee, cast(0.00 as money) as FeePaid , @Amt as Remainder from @derivedt where id = 1 union all select t.deid, t.id, t.UserId, t.Fee, case when cte.Remainder > t.Fee then t.Fee else cte.Remainder end as FeePaid , case when cte.Remainder > t.Fee then cte.Remainder - t.Fee else 0 end as Remainder from @derivedt t inner join cte cte on t.deid = (cte.deid + 1) ) update origt set FeePaid = det.FeePaid from @t origt inner join ( select cte1.deid, cte1.id, cte1.UserId, cte1.Fee, cte1.FeePaid + ISNULL(cte2.Remainder, 0) as FeePaid from cte cte1 left outer join (select top 1 deid, Remainder from cte order by deid desc) cte2 on cte1.deid = cte2.deid where cte1.deid > 0 ) det on origt.id = det.id select * from @t ``` Modified for continuous updates of the value: ``` -- Create table once and insert into table once create table #t (id int identity, UserId int, Fee money, FeePaid money) insert into #t (UserID, Fee, FeePaid) values (12, 150, 0) ,(12, 100, 0) ,(12, 50 , 0) -- =============================== -- Run multiple times to populate #t table declare @amt money = 100; -- change to 400 to test over paid declare @Fees money; select @Fees = sum(Fee - FeePaid) from #t; declare @derivedt table (deid int, id int, UserId int, Fee money, FeePaid money) insert into @derivedt (deid, id, UserId, Fee, FeePaid) select row_number() over (order by case when @amt <= @Fees then id else -id end asc), id, UserId, (Fee - FeePaid) as Fee, FeePaid from #t -- order by case when @amt <= @Fees then id else -id end asc ; with cte(deid, id, UserId, Fee, FeePaid, Remainder) as ( select 0 as deid, 0 as id, 0 as UserId, cast(0.00 as money) as Fee, cast(0.00 as money) as FeePaid , @Amt as Remainder from @derivedt where id = 1 union all select t.deid, t.id, t.UserId, t.Fee, case when cte.Remainder >= t.Fee then t.Fee else cte.Remainder end as FeePaid , case when cte.Remainder >= t.Fee then cte.Remainder - t.Fee else 0 end as Remainder from @derivedt t inner join cte cte on t.deid = (cte.deid + 1) ) update origt set FeePaid = origt.FeePaid + det.FeePaid from #t origt inner join ( select cte1.deid, cte1.id, cte1.UserId, cte1.Fee, cte1.FeePaid + ISNULL(cte2.Remainder, 0) as FeePaid, cte1.Remainder from cte cte1 left outer join (select top 1 deid, Remainder from cte order by deid desc) cte2 on cte1.deid = cte2.deid where cte1.deid > 0 ) det on origt.id = det.id select * from #t -- Drop temp table after -- drop table #t ```
distribute value to all rows while updating table
[ "", "sql", "sql-server", "sql-server-2008", "" ]
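Set-based SQL aside, the allocation rule itself — pay each fee in order until the amount runs out, letting the first row absorb any excess over the total — fits in a few lines of plain Python. This is a sketch of the logic only, not of either T-SQL answer above:

```python
def distribute(fees, amt):
    """Return the FeePaid value for each row, in the rows' given order.

    Fees are paid first-to-last; if amt exceeds the total, the first
    (largest) row absorbs the excess, matching the question's examples.
    """
    paid = [0] * len(fees)
    remaining = amt
    for i, fee in enumerate(fees):
        paid[i] = min(fee, remaining)   # never pay more than the row's Fee...
        remaining -= paid[i]
    if remaining > 0:                   # ...unless @Amt exceeded the total fees
        paid[0] += remaining
    return paid

print(distribute([150, 100, 50], 200))  # [150, 50, 0]
print(distribute([150, 100, 50], 350))  # [200, 100, 50]
```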
Assume you have a `JOIN` with a `WHERE`: ``` SELECT * FROM partners JOIN orders ON partners.partner_id = orders.partner_id WHERE orders.date BETWEEN 20140401 AND 20140501 ``` 1) An index on `partner_id` in both tables will speed up the `JOIN`, right? 2) An index on `orders.date` will speed up the `WHERE` clause? 3) But as far as I know, one `SELECT` can not use more than one index. So which one will be used?
This is your query, with the quoting fixed (and assuming `orders.date` is really a date type): ``` SELECT * FROM partners JOIN orders ON partners.partner_id = orders.partner_id WHERE orders.date BETWEEN '2014-04-01' AND '2014-05-01'; ``` For inner join, there are basically two execution strategies. The engine can start with the partners table and find all matches in orders. Or it can start with orders and find all matches in partners. (There are then different algorithms that can be used.) For the first approach, the only index that would help is `orders(partner_id, orderdate)`. For the second approach, the best index is `orders(orderdate, partner_id)`. Note that these are not equivalent. In most scenarios like this, I would expect the orders table to be larger and the filtering to be important. That would suggest that the best execution plan is to start with the `orders` table and filter it first, using the second option.
To start, an index is used for an **operator** not for the **SELECT** statement. Therefore one index will be used for reading data from the `partner` table and another index could be used to get data from `orders` table. I think that the best strategy in this case would be to have a clustered index on `partners.partner_id` and one non-clustered index on `orders.partner_id` and `orders.date`
What columns to index for a JOIN with WHERE
[ "", "sql", "join", "indexing", "" ]
I have a simple query in PostgreSQL which is ok when I run it without any query parameters : ``` select date_trunc('week', action_time),count(*) from event group by date_trunc('week', action_time); ``` But if I try to send the 'week' as a parameter like this (in Java): ``` PreparedStatement statement = connection.prepareStatement ("select date_trunc(?, action_time),count(*) from event" + " group by date_trunc(?, action_time)"); statement.setString(1,"week"); statement.setString(2,"week"); statement.execute(); ``` it'll throw the following error: ``` ERROR: column "event.action_time" must appear in the GROUP BY clause or be used in an aggregate function ``` is this normal behavior ?
When the query is prepared there's no guarantee that you will bind the same value (`'week'`) for both placeholders. If you don't, the query would be illegal, and that's why postgres doesn't allow preparing it. One way around this could be to change your query so you only bind `'week'` once, and use it from inside a subquery: ``` PreparedStatement statement = connection.prepareStatement ("select dt, count(*) from (select date_trunc(?, action_time) as dt " + "from event) s group by dt"); statement.setString(1,"week"); statement.execute(); ```
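The bind-once idea generalizes beyond Postgres. Below is a minimal runnable sketch using Python's built-in `sqlite3`, substituting SQLite's `strftime` for `date_trunc` (table name, column, and format string are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (action_time TEXT)")
conn.executemany(
    "INSERT INTO event VALUES (?)",
    [("2014-01-05",), ("2014-01-20",), ("2014-02-03",)],
)

# The truncation format is bound a single time inside the subquery, so the
# outer GROUP BY is guaranteed to group on exactly the same expression.
rows = conn.execute(
    "SELECT dt, COUNT(*) FROM "
    "(SELECT strftime(?, action_time) AS dt FROM event) s "
    "GROUP BY dt ORDER BY dt",
    ("%Y-%m",),
).fetchall()
print(rows)  # [('2014-01', 2), ('2014-02', 1)]
```

The same shape works with a `PreparedStatement` and a single `setString` call.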
I think this should work, but Postgres can be a bit finicky. For instance, the following does not work: ``` select date_trunc(val, now()) from (select 'week' as val) t ``` But this does: ``` select date_trunc(val, now()) from (select cast('week' as text) as val) t ``` You might check if this version works: ``` select date_trunc(datepart, action_time), count(*) from event cross join (select cast(? as text) as datepart) vars group by date_trunc(datepart, action_time); ``` And then supply only one parameter.
PostgreSQL asking for 'group by' clause in where, when sending parameters
[ "", "sql", "postgresql", "jdbc", "group-by", "prepared-statement", "" ]
I've been trying to develop a tax calculator. I'm trying to do the computation in SQL, but am not good with it, so please bear with me. ``` IF (12500 > WTAX_DAILY.HI_LIMIT) ( SELECT (HI_LIMIT) "DEDUCTION" FROM ( SELECT HI_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 12500 AND HI_LIMIT >= 12500 )T ) ELSE IF (12500 < WTAX_DAILY.HI_LIMIT) ( SELECT (LO_LIMIT) "DEDUCTION" FROM ( SELECT LO_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 12500 AND HI_LIMIT >= 12500 )T ) ``` Here's the error I got: ``` [Err] 42000 - [SQL Server]The multi-part identifier "WTAX_DAILY.HI_LIMIT" could not be bound. 42000 - [SQL Server]The multi-part identifier "WTAX_DAILY.HI_LIMIT" could not be bound. ```
Try this: (Remove `THEN` FROM `IF` statement) ``` IF EXISTS(SELECT 1 FROM WTAX_MONTHLY WHERE HI_LIMIT < 12500) SELECT (HI_LIMIT) "DEDUCTION" FROM (SELECT HI_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 12500 AND HI_LIMIT >= 12500 ) AS T ELSE IF EXISTS(SELECT 1 FROM WTAX_MONTHLY WHERE HI_LIMIT > 12500) SELECT (LO_LIMIT) "DEDUCTION" FROM (SELECT LO_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 12500 AND HI_LIMIT >= 12500 ) AS T ``` **Updated** ``` SELECT CASE WHEN HI_LIMIT > 12500 THEN HI_LIMIT ELSE LO_LIMIT END AS DEDUCTION FROM WTAX_MONTHLY ```
With regards to Jesuraja, I just modified his answer a bit and this is what shows up. Now it runs perfectly. Thanks Jesu ``` IF EXISTS(SELECT 1 FROM WTAX_MONTHLY WHERE HI_LIMIT < 15850) SELECT (LO_LIMIT) "DEDUCTION" FROM (SELECT LO_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 15850 AND HI_LIMIT >= 15850 ) AS T ELSE IF EXISTS(SELECT 1 FROM WTAX_MONTHLY WHERE HI_LIMIT > 15850) SELECT (HI_LIMIT) "DEDUCTION" FROM (SELECT HI_LIMIT, EXEMPTION FROM WTAX_MONTHLY WHERE LO_LIMIT <= 15850 AND HI_LIMIT >= 15850 ) AS T ```
How do I fix the SQL Server error "The multi-part identifier 'X.Y' could not be bound"?
[ "", "sql", "sql-server", "t-sql", "" ]
I cannot figure out a query for this; everything I find is immediately not doable because of the "group by" aspect of it.

```
ID | Username | Age | arbitrary
1  | bob      | 10  | 34
2  | bob      | 10  | 54
3  | alice    | 10  | 123
```

Here IDs 1 and 2 would both get deleted, leaving alice untouched. I've managed to create a web app that has a problem like this and would like to apply a unique(username, age) constraint to patch it, but due to the duplicates I can't.
``` DELETE FROM thetable dd WHERE EXISTS ( SELECT * FROM thetable xx WHERE xx.username = dd.username AND xx.age = dd.age AND xx.id <> dd.id -- if you want to delete ALL duplicates -- AND xx.id < dd.id -- if you want to keep only ONE of the duplicates ); ```
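A runnable sketch of the "delete ALL duplicates" variant, using Python's built-in `sqlite3` as a stand-in for Postgres (the `IN`-subquery self-join below is equivalent to the `EXISTS` form above; the table name is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, username TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [(1, "bob", 10), (2, "bob", 10), (3, "alice", 10)])

# A row is deleted if any OTHER row shares its (username, age) pair.
conn.execute("""
    DELETE FROM people
    WHERE id IN (SELECT a.id
                 FROM people a
                 JOIN people b
                   ON a.username = b.username
                  AND a.age = b.age
                  AND a.id <> b.id)
""")
remaining = conn.execute("SELECT username FROM people").fetchall()
print(remaining)  # [('alice',)]
```

Changing `a.id <> b.id` to `a.id < b.id` would instead keep one row per duplicate group, as noted in the answer.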
``` DELETE FROM tablename WHERE username IN (SELECT username FROM (SELECT row_number() OVER (PARTITION BY username), username FROM tablename) x WHERE x.row_number > 1); ------RESULT------ ID | Username | Age | arbitrary -----+---------------+-----+---------- 3 | alice | 10 | 123 ```
PostgreSQL query that deletes all results that have two repeated columns
[ "", "sql", "postgresql", "" ]
I have a table with the following data: ``` USERID NAME DATEFROM DATETO 1 xxx 2014-05-10 2014-05-15 1 xxx 2014-05-20 2014-05-25 4 yyy 2014-04-20 2014-04-21 ``` Now I have a SQL query like: ``` select * from leave where datefrom>='2014-05-01' and dateto<='2014-05-31' ``` So now I want this output: ``` userid name total_leave_days 1 xxx 12 4 yyy 2 (2014-05-10 - 2014-05-15 )=6 days (2014-05-20 - 2014-05-25 )=6 days total = 12 days for userid 1 (2014-04-20 - 2014-04-21)= 2 days for userid 4 ``` How can I calculate these total days?
Please try: ``` select USERID, NAME, SUM(DATEDIFF(day, DATEFROM, DATETO)+1) total_leave_days From leave group by USERID, NAME ``` [SQL Fiddle Demo](http://sqlfiddle.com/#!3/38408/1)
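The same calculation can be checked with Python's built-in `sqlite3`, where `julianday()` differences stand in for SQL Server's `DATEDIFF(day, ...)` (a portable sketch, not the exact T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leave (userid INTEGER, name TEXT, datefrom TEXT, dateto TEXT)")
conn.executemany("INSERT INTO leave VALUES (?, ?, ?, ?)", [
    (1, "xxx", "2014-05-10", "2014-05-15"),
    (1, "xxx", "2014-05-20", "2014-05-25"),
    (4, "yyy", "2014-04-20", "2014-04-21"),
])

# The +1 makes both endpoints count as leave days, as in the answer above.
rows = conn.execute("""
    SELECT userid, name,
           CAST(SUM(julianday(dateto) - julianday(datefrom) + 1) AS INTEGER)
    FROM leave
    GROUP BY userid, name
    ORDER BY userid
""").fetchall()
print(rows)  # [(1, 'xxx', 12), (4, 'yyy', 2)]
```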
It's important to note that you need "+1" to emulate the expected calculations because there is an inherent assumption of ""start of day" for the Start date and "end of day" for end date - but dbms's don't think that way. a date is always stored as "start of day". ``` select USERID , name , sum( datediff(day,DATEFROM,DATETO) + 1 ) as leave_days from leavetable group by USERID , name ``` produces this: ``` | USERID | NAME | LEAVE_DAYS | |--------|------|------------| | 1 | xxx | 12 | | 4 | yyy | 2 | ``` see: <http://sqlfiddle.com/#!3/ebe5d/1>
how to count days between two dates with where conditions
[ "", "sql", "sql-server", "sql-server-2008", "" ]
I want to create a questions-and-answers application like Stack Overflow. I do not know how to store multiple answers in a MySQL table column. I want to know how to store a question and its multiple answers in the same row of the SQL table.
Creating two tables and connecting them with a foreign key relationship is one of the best solutions, because keeping questions and answers in the same table may not allow you to have multiple answers for the same question. If you want a single question to have multiple answers, the redundancy of your DB will surely increase. So, try to use DB relationship concepts properly.
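To make the two-table design above concrete, here is a minimal runnable sketch using Python's built-in `sqlite3` (all table and column names are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE questions (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("""CREATE TABLE answers (
                  id INTEGER PRIMARY KEY,
                  question_id INTEGER NOT NULL REFERENCES questions(id),
                  body TEXT)""")
conn.execute("INSERT INTO questions VALUES (1, 'How do I join tables?')")
# One question, many answer rows -- no multi-value column needed.
conn.executemany("INSERT INTO answers (question_id, body) VALUES (?, ?)",
                 [(1, "Use INNER JOIN."), (1, "Use a subquery.")])

answers = conn.execute(
    "SELECT a.body FROM answers a JOIN questions q ON a.question_id = q.id "
    "WHERE q.id = 1 ORDER BY a.id").fetchall()
print(len(answers))  # 2
```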
No...don't do it. Very bad idea IMHO. It'll be hard to do any searching and stuff. There's a reason to why we still have R'ship database models. But if you must go down that path, you can always store it as JSON or XML, and maybe somewhere in future, Oracle will provide functionality to search JSON or XML within columns :)
How to Store multiple values in same SQL column
[ "", "mysql", "sql", "" ]
I'm trying to update row with same table query. Context: ``` ID | LANG | TEXT ---------------------------------- 1 | EN | Hello 1 | FR | 1 | ES | 2 | EN | Boat 2 | FR | Bateau 2 | ES | ``` I want to : For each row; if *TEXT IS NULL*; update it with TEXT value of row with same ID and LANG = 'EN'. What is the SQL request to do something like that ?
You don't specify the database. The following is standard SQL: ``` UPDATE t SET TEXT = (SELECT text FROM t t2 WHERE t.id = t2.id AND LANG ='EN' AND TEXT IS NOT NULL ) WHERE TEXT IS NULL; ``` In the event of duplicates, the following should work: ``` UPDATE t SET TEXT = (SELECT max(text) FROM t t2 WHERE t.id = t2.id AND LANG ='EN' AND TEXT IS NOT NULL ) WHERE TEXT IS NULL; ``` EDIT: Of course, not all databases support all ANSI standard functionality. In MySQL, you would use a `join` instead: ``` UPDATE t JOIN (SELECT id, max(text) as text_en FROM t t2 WHERE LANG ='EN' AND TEXT IS NOT NULL ) ten ON t.id = ten.id SET t.TEXT = ten.text_en WHERE t.TEXT IS NULL; ```
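A runnable check of the standard-SQL form above, using Python's built-in `sqlite3` (data mirrors the question; the `EN` source rows are never NULL, so the correlated subquery is safe):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tr (id INTEGER, lang TEXT, text TEXT)")
conn.executemany("INSERT INTO tr VALUES (?, ?, ?)", [
    (1, "EN", "Hello"), (1, "FR", None), (1, "ES", None),
    (2, "EN", "Boat"),  (2, "FR", "Bateau"), (2, "ES", None),
])

# Fill every NULL text with the EN text of the same id.
conn.execute("""
    UPDATE tr
    SET text = (SELECT t2.text FROM tr t2
                WHERE t2.id = tr.id AND t2.lang = 'EN'
                  AND t2.text IS NOT NULL)
    WHERE text IS NULL
""")
rows = conn.execute("SELECT id, lang, text FROM tr ORDER BY id, lang").fetchall()
print(rows[1])  # (1, 'ES', 'Hello')
```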
For MS SQL you can use a join to do this, it might not work with other databases though: ``` UPDATE t1 SET t1.text = t2.text FROM table1 t1 INNER JOIN table1 t2 ON t1.id = t2.id WHERE t1.TEXT IS NULL AND t2.LANG = 'EN' ``` [Sample SQL Fiddle](http://www.sqlfiddle.com/#!6/26202/4)
Update row with select on same table
[ "", "sql", "sql-update", "" ]
I have recently learned about GROUPING SETS, CUBE and ROLLUP for defining multiple grouping sets in sql server. What I am asking is under what circumstances do we use these features ? What are the benefits and advantages of using them? ``` SELECT shipperid, YEAR(shippeddate) AS shipyear, COUNT(*) AS numorders FROM Sales.Orders GROUP BY GROUPING SETS ( ( shipperid, YEAR(shippeddate) ), ( shipperid ), ( YEAR(shippeddate) ), ( ) ); SELECT shipperid, YEAR(shippeddate) AS shipyear, COUNT(*) AS numorders FROM Sales.Orders GROUP BY CUBE( shipperid, YEAR(shippeddate) ); SELECT shipcountry, shipregion, shipcity, COUNT(*) AS numorders FROM Sales.Orders GROUP BY ROLLUP( shipcountry, shipregion, shipcity ); ```
Firstly, for those who haven't already read up on the subject: * [Using GROUP BY with ROLLUP, CUBE, and GROUPING SETS](http://technet.microsoft.com/en-us/library/bb522495%28v=sql.105%29.aspx) That being said, don't think about these grouping options as ways to get a result set. **These are performance tools**. Let's take `ROLLUP` as a simple example. I can use the following query to get the count of records for each value of GrpCol. ``` SELECT GrpCol, count(*) AS cnt FROM dbo.MyTable GROUP BY GrpCol ``` And I can use the following query to summarily "roll up" the count of ALL records. ``` SELECT NULL, count(*) AS cnt FROM dbo.MyTable ``` And I could `UNION ALL` the above two queries to get the exact same results I might get if I had written the first query with the `ROLLUP` clause (that's why I put the NULL in there). It might actually be more convenient for me to execute this as two different queries because then I have the grouped results separate from my totals. Why would I want my final total mixed right in to the rest of those results? The answer is that doing both together using the `ROLLUP` clause is more efficient. SQL Server will use an execution plan that calculates all of the aggregations together in one pass. Compare that to the `UNION ALL` example which would provide the exact same results but use a less efficient execution plan (two table scans instead of one). Imagine an extreme example in which you are working on a data set so large that each scan of the data takes one whole hour. You have to provide totals on basically every possible dimension (way to slice) that data every day. Aha! I bet one of these grouping options is exactly what you need. If you save off the results of that one scan into a special schema layout, you will then be able to run reports for the rest of the day off the saved results. So I'm basically saying that you're working on a data warehouse project. For the rest of us it mostly falls into the "neat thing to know" category.
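To see the shape of the combined result set, here is the `UNION ALL` form of a minimal grouping-sets query, run on SQLite via Python's `sqlite3` (SQLite has no `GROUPING SETS`; on SQL Server the `GROUPING SETS` version computes the same rows in a single scan, which is the performance point above). Table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (shipperid INTEGER, shipyear INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 2013), (1, 2013), (1, 2014), (2, 2014)])

# Per-shipper subtotals plus a grand total row; NULL marks the
# rolled-up column, exactly as GROUPING SETS would mark it.
rows = conn.execute("""
    SELECT shipperid, COUNT(*) AS numorders FROM orders GROUP BY shipperid
    UNION ALL
    SELECT NULL, COUNT(*) FROM orders
""").fetchall()
# rows contains (1, 3), (2, 1) and the grand total (None, 4)
```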
The `CUBE` is the same of `GROUPING SETS` with all possible combinations. So this (using `CUBE`) ``` GROUP BY CUBE (C1, C2, C3, ..., Cn-2, Cn-1, Cn) ``` is the same of this (using `GROUPING SETS`) ``` GROUP BY GROUPING SETS ( (C1, C2, C3, ..., Cn-2, Cn-1, Cn) -- All dimensions are included. ,( , C2, C3, ..., Cn-2, Cn-1, Cn) -- n-1 dimensions are included. ,(C1, C3, ..., Cn-2, Cn-1, Cn) … ,(C1, C2, C3, ..., Cn-2, Cn-1,) ,(C3, ..., Cn-2, Cn-1, Cn) -- n-2 dimensions included ,(C1 ..., Cn-2, Cn-1, Cn) … ,(C1, C2) -- 2 dimensions are included. ,… ,(C1, Cn) ,… ,(Cn-1, Cn) ,… ,(C1) -- 1 dimension included ,(C2) ,… ,(Cn-1) ,(Cn) ,() ) -- Grand total, 0 dimension is included. ``` Then, if you don't really need all combinations, you should use `GROUPING SETS` rather than `CUBE` > ROLLUP and CUBE operators generate some of the same result sets and > perform some of the same calculations as OLAP applications. The CUBE > operator generates a result set that can be used for cross tabulation > reports. A ROLLUP operation can calculate the equivalent of an OLAP > dimension or hierarchy. [Look here to see Grouping Sets Equivalents](http://technet.microsoft.com/en-us/library/bb510427(v=sql.105).aspx) --- **UPDATE** I think an example would help here. 
Suppose you have a table with the number of UFO sightings by country and gender, like below: ``` ╔═════════╦═══════╦═════════╗ ║ COUNTRY ║ GENDER║ #SIGHTS ║ ╠═════════╬═══════╬═════════╣ ║ USA ║ F ║ 450 ║ ║ USA ║ M ║ 1500 ║ ║ ITALY ║ F ║ 704 ║ ║ ITALY ║ M ║ 720 ║ ║ SWEDEN ║ F ║ 317 ║ ║ SWEDEN ║ M ║ 310 ║ ║ BRAZIL ║ F ║ 144 ║ ║ BRAZIL ║ M ║ 159 ║ ╚═════════╩═══════╩═════════╝ ``` Then, if you want to know the totals for each country, the totals by gender, and the grand total only, you should use `GROUPING SETS` ``` select Country, Gender, sum(Number_Of_Sights) from Table1 group by GROUPING SETS((Country), (Gender), ()) order by Country, Gender ``` [SQL Fiddle](http://sqlfiddle.com/#!3/b87ce/11/0) To get the same result with `GROUP BY`, you would use `UNION ALL` as: ``` select Country, NULL Gender, sum(Number_Of_Sights) from Table1 GROUP BY Country UNION ALL select NULL Country, Gender, sum(Number_Of_Sights) from Table1 GROUP BY GENDER UNION ALL SELECT NULL Country, NULL Gender, sum(Number_Of_Sights) FROM TABLE1 ORDER BY COUNTRY, GENDER ``` [SQL Fiddle](http://sqlfiddle.com/#!3/b87ce/16/0) ***But it is not possible to obtain the same result with CUBE, since it will return all possibilities.*** Now, if you want to know all possible combinations, then you should use `CUBE`
When to use GROUPING SETS, CUBE and ROLLUP
[ "", "sql", "sql-server", "grouping", "cube", "rollup", "" ]
Before I ask my question, a little background: I'm doing a Data Export/Import using MySQL Workbench 6.1, from a MySQL 5.5 database on one machine to a 5.6 on another. Both machines are Ubuntu, one 32-bit and the other 64-bit. I dump the data with no problem, but when I try to load it I get the: > ERROR 1118 (42000) at line 1807: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW\_FORMAT=DYNAMIC or ROW\_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline. Here is the table create: ``` CREATE TABLE `file_content` ( `fileid` bigint(20) NOT NULL, `content` LONGBLOB NOT NULL, PRIMARY KEY (`fileid`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; ``` I have the following relevant my.cnf settings ... > max\_allowed\_packet=1G > innodb\_file\_per\_table=1 > innodb\_file\_format=Barracuda I spent a lot of time googling, trying to figure out what the problem is. BTW, if I remove the primary key there is no problem here, but another table that has a foreign key into this table complains. As luck would have it, I have ssh access to the box so I can see the actual mysqldb.log, and what I find there is really interesting ... > 2014-08-12 20:42:12 25246 [ERROR] InnoDB: The total blob data length (14179167) is greater than 10% of the redo log file size (3072). Please increase innodb\_log\_file\_size. So increasing the redo log file size to 10x the LONGBLOB size fixed my issue. However, does that mean that to insert a 1G LONGBLOB (that's the actual maximum because of the packet size) I will need a 10G innodb\_log\_file\_size? Can anyone explain how a "redo log size error" turns into a "row size too large (>8126)" error? BTW I have no control over the structure of this db, so no "why are you storing large blobs in the database". TIA
The reason for this issue is a change in [MySQL 5.6.20](http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html) as one could read in the change log: > As a result of the redo log BLOB write limit introduced for MySQL 5.6, the [innodb\_log\_file\_size](http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_log_file_size) setting should be 10 times larger than the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). No action is required if your [innodb\_log\_file\_size](http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_log_file_size) setting is already sufficiently large or your tables contain no BLOB data. To resolve your issue you have to increase the value of the [innodb\_log\_file\_size](http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_log_file_size) option in your `my.ini` below the `[mysqld]` section. Its default value is `48M`. Setting it to ``` [mysqld] innodb_log_file_size=256M ``` helped in my case. Be careful when changing the value of `innodb_log_file_size` that you [do this ***safely***](https://www.percona.com/blog/2011/07/09/how-to-change-innodb_log_file_size-safely/): > 1. You need to shut the server down cleanly and normally. > 2. Move away > (don’t delete) the log files, which are named ib\_logfile0, > ib\_logfile1, and so on. > 3. Check the error log to ensure there was no > problem shutting down. > 4. Then restart the server and watch the error log > output carefully. You should see InnoDB print messages saying that the > log files don’t exist. It will create new ones and then start. > 5. At this > point you can verify that InnoDB is working, and then you can delete > the old log files.
For those who can't find this for XAMPP: At first I could not find the correct file to edit `innodb_log_file_size`. The actual file is: `xampp/mysql/bin/my.ini`
How to configure MySQL 5.6 LONGBLOB for large binary data
[ "", "mysql", "sql", "row", "innodb", "" ]
I am currently having a bit of trouble: I am trying to dynamically pull values out of CSV files and dynamically create the external tables, as each CSV file is going to have a different number of headers each time. Is there any way to do this dynamically? The research I have done tells me you can't, and that even if you could it would be pretty difficult, going by Oracle threads; I have also not had any luck achieving this functionality. Has anyone tried this, or had experience with it, who could offer a bit of help or advice? My current script: ``` DROP TABLE TEST_CSV CASCADE CONSTRAINTS; CREATE TABLE TEST_CSV ( VAL1 VARCHAR2(255 BYTE), VAL2 VARCHAR2(255 BYTE), VAL3 VARCHAR2(255 BYTE) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY DATALOAD ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY "," ) LOCATION (DATALOAD:'test1.csv') ) REJECT LIMIT UNLIMITED; ```
You should use an EXECUTE IMMEDIATE statement for this: ``` execute immediate 'DROP TABLE TEST_CSV CASCADE CONSTRAINTS'; ``` And so on. It has no differences from using the command normally, but you can have some trouble with permissions. Oracle dynamic SQL usually has some bugs, and its behavior can be different from normal DDL commands.
Yes it is possible, you can create a dynamic statement for the external table and then perform EXECUTE IMMEDIATE of the statement. You can also view the code I have been using Line:1400 and Line:1419 [Code](https://github.com/maindolaamit/OAFileLoader/blob/master/Objects/XXCUST_OAFLDR_PKG.pkb)
Is it possible to Dynamically Create External tables in PLSQL/Oracle
[ "", "sql", "csv", "plsql", "oracle10g", "external", "" ]
I am running a SQL script I found [online](http://www.mssqltips.com/sqlservertip/1499/create-a-sql-server-data-dictionary-in-seconds-using-extended-properties/). The script generates a data dictionary from extended properties data. How can I save the results to an HTML file? I prefer not changing the defaults, however I tried going into SQL server > Tools > Options > Query Results > SQL Server > General > Changed "Default destination for results" to "Results to file" and set location to my documents folder. I ran the SQL job this query is saved in and I do not see any new files in there. I do not see a "save" in the original query either. Original Query ``` Set nocount on DECLARE @TableName nvarchar(35) DECLARE Tbls CURSOR FOR Select distinct Table_name FROM INFORMATION_SCHEMA.COLUMNS --put any exclusions here --where table_name not like '%old' order by Table_name OPEN Tbls PRINT '<HTML><body>' FETCH NEXT FROM Tbls INTO @TableName WHILE @@FETCH_STATUS = 0 BEGIN PRINT '</br>' PRINT '<table border="1">' Print '<B>' + @TableName + '</B>' PRINT '</br>' --Get the Description of the table --Characters 1-250 Select substring(cast(Value as varchar(1000)),1,250) FROM sys.extended_properties A WHERE A.major_id = OBJECT_ID(@TableName) and name = 'MS_Description' and minor_id = 0 --Characters 251-500 Select substring(cast(Value as varchar(1000)),251, 250) FROM sys.extended_properties A WHERE A.major_id = OBJECT_ID(@TableName) and name = 'MS_Description' and minor_id = 0 PRINT '<tr><b>' --Set up the Column Headers for the Table PRINT '<td><b>Column Name</b></td>' PRINT '<td><b>Description</b></td>' PRINT '<td><b>InPrimaryKey</b></td>' PRINT '<td><b>IsForeignKey</b></td>' PRINT '<td><b>DataType</b></td>' PRINT '<td><b>Length</b></td>' PRINT '<td><b>Numeric Precision</b></td>' PRINT '<td><b>Numeric Scale</b></td>' PRINT '<td><b>Nullable</b></td>' PRINT '<td><b>Computed</b></td>' PRINT '<td><b>Identity</b></td>' PRINT '<td><b>Default Value</b></td>' --Get the Table Data SELECT 
'</b></tr>', '<tr>', '<td>' + CAST(clmns.name AS VARCHAR(35)) + '</td>', '<td>' + substring(ISNULL(CAST(exprop.value AS VARCHAR(255)),''),1,250), substring(ISNULL(CAST(exprop.value AS VARCHAR(500)),''),251,250) + '</td>', '<td>' + CAST(ISNULL(idxcol.index_column_id, 0)AS VARCHAR(20)) + '</td>', '<td>' + CAST(ISNULL( (SELECT TOP 1 1 FROM sys.foreign_key_columns AS fkclmn WHERE fkclmn.parent_column_id = clmns.column_id AND fkclmn.parent_object_id = clmns.object_id ), 0) AS VARCHAR(20)) + '</td>', '<td>' + CAST(udt.name AS CHAR(15)) + '</td>' , '<td>' + CAST(CAST(CASE WHEN typ.name IN (N'nchar', N'nvarchar') AND clmns.max_length <> -1 THEN clmns.max_length/2 ELSE clmns.max_length END AS INT) AS VARCHAR(20)) + '</td>', '<td>' + CAST(CAST(clmns.precision AS INT) AS VARCHAR(20)) + '</td>', '<td>' + CAST(CAST(clmns.scale AS INT) AS VARCHAR(20)) + '</td>', '<td>' + CAST(clmns.is_nullable AS VARCHAR(20)) + '</td>' , '<td>' + CAST(clmns.is_computed AS VARCHAR(20)) + '</td>' , '<td>' + CAST(clmns.is_identity AS VARCHAR(20)) + '</td>' , '<td>' + isnull(CAST(cnstr.definition AS VARCHAR(20)),'') + '</td>' FROM sys.tables AS tbl INNER JOIN sys.all_columns AS clmns ON clmns.object_id=tbl.object_id LEFT OUTER JOIN sys.indexes AS idx ON idx.object_id = clmns.object_id AND 1 =idx.is_primary_key LEFT OUTER JOIN sys.index_columns AS idxcol ON idxcol.index_id = idx.index_id AND idxcol.column_id = clmns.column_id AND idxcol.object_id = clmns.object_id AND 0 = idxcol.is_included_column LEFT OUTER JOIN sys.types AS udt ON udt.user_type_id = clmns.user_type_id LEFT OUTER JOIN sys.types AS typ ON typ.user_type_id = clmns.system_type_id AND typ.user_type_id = typ.system_type_id LEFT JOIN sys.default_constraints AS cnstr ON cnstr.object_id=clmns.default_object_id LEFT OUTER JOIN sys.extended_properties exprop ON exprop.major_id = clmns.object_id AND exprop.minor_id = clmns.column_id AND exprop.name = 'MS_Description' WHERE (tbl.name = @TableName and exprop.class = 1) --I don't wand to include 
comments on indexes ORDER BY clmns.column_id ASC PRINT '</tr></table>' FETCH NEXT FROM Tbls INTO @TableName END PRINT '</body></HTML>' CLOSE Tbls DEALLOCATE Tbls ```
I ran your query and via the **messages** tab, if you right click you can `Save Result As`: ![enter image description here](https://i.stack.imgur.com/Nlnl6.png) If you then change `Report files (*.rpt)` to `All Files (*.*)` you can type the name of a `.hmtl` file and save: ![enter image description here](https://i.stack.imgur.com/fhq8r.png)
It looks like this script is being output via PRINT statements. Check the Messages tab in SSMS. You should be able to just copy and paste from there into an HTML file.
How to save results of a SQL query to html file?
[ "", "sql", "sql-server", "data-dictionary", "" ]
I have two tables in my database. team: ``` id game_id name image 26 48 t t.png 27 48 t2 t2.png ``` score: ``` id team_id score 1 26 5 2 26 14 ``` My query: ``` SELECT t.id,t.name,t.image,sum(s.score) FROM `team` AS t LEFT JOIN score s ON (s.team_id=t.id) where t.game_id=48 ``` My query always gives only one team (team 26), but the result should be like: ``` id name image score 26 t t.png 19 27 t2 t2.png null ``` I can't understand what is wrong in my query.
Add a GROUP BY clause. Try this query. ``` SELECT t.id, t.name, t.image, sum(s.score) as total_score FROM `team` AS t LEFT JOIN `score` s ON (s.team_id = t.id) WHERE t.game_id=48 GROUP BY t.id, t.name, t.image ```
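The corrected query can be verified end-to-end with Python's built-in `sqlite3` (data copied from the question; MySQL backtick quoting dropped for SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE team (id INTEGER, game_id INTEGER, name TEXT, image TEXT)")
conn.execute("CREATE TABLE score (id INTEGER, team_id INTEGER, score INTEGER)")
conn.executemany("INSERT INTO team VALUES (?, ?, ?, ?)",
                 [(26, 48, "t", "t.png"), (27, 48, "t2", "t2.png")])
conn.executemany("INSERT INTO score VALUES (?, ?, ?)",
                 [(1, 26, 5), (2, 26, 14)])

# With GROUP BY, the LEFT JOIN keeps team 27 and gives it a NULL sum.
rows = conn.execute("""
    SELECT t.id, t.name, t.image, SUM(s.score) AS total_score
    FROM team t LEFT JOIN score s ON s.team_id = t.id
    WHERE t.game_id = 48
    GROUP BY t.id, t.name, t.image
    ORDER BY t.id
""").fetchall()
print(rows)  # [(26, 't', 't.png', 19), (27, 't2', 't2.png', None)]
```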
``` SELECT t.id,t.name,t.image,sum(s.score) FROM `team` AS t LEFT JOIN score s ON (s.team_id=t.id) where t.game_id=48 group by t.id ``` Just add `group by` clause in it
LEFT JOIN not returning correct result
[ "", "mysql", "sql", "" ]
The `TRUNCATE` and `DELETE` commands do the same job: in both cases data is manipulated. So why does the `DELETE` command come under DML commands while the `TRUNCATE` command comes under DDL commands?
When we are using Truncate, we are de-allocating the whole space allocated by the data without saving into the undo-table-space. But, in case of Delete, we are putting all the data into undo table-space and then we are deleting all the data. The main points that put TRUNCATE in the DDL camp on Oracle, are: 1. TRUNCATE can change storage parameters (the NEXT parameter), and those are part of the object definition - that's in the DDL camp. 2. TRUNCATE does an implicit commit, and cannot be rolled back (flashback aside) - most (all?) DDL operations in Oracle do this, no DML does
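The undo/rollback point can be demonstrated for the `DELETE` half with Python's built-in `sqlite3` (SQLite has no `TRUNCATE` at all, so this sketch only shows that DELETE is logged and can be undone; it says nothing about TRUNCATE's implicit commit on Oracle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

conn.execute("BEGIN")
conn.execute("DELETE FROM t")            # DML: row deletions are logged
during = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
conn.execute("ROLLBACK")                 # the logged deletions are undone
after = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(during, after)  # 0 3
```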
DELETE 1. DELETE is a DML Command. 2. DELETE statement is executed using a row lock, each row in the table is locked for deletion. 3. We can specify filters in where clause 4. It deletes specified data if where condition exists. 5. Delete activates a trigger because the operation are logged individually. 6. Slower than truncate because, it keeps logs. 7. Rollback is possible. TRUNCATE 1. TRUNCATE is a DDL command. 2. TRUNCATE TABLE always locks the table and page but not each row. 3. Cannot use Where Condition. 4. It Removes all the data. 5. TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. 6. Faster in performance wise, because it doesn't keep any logs. 7. Rollback is not possible.
Difference between TRUNCATE and DELETE?
[ "", "sql", "oracle", "" ]
I have two tables that I want to join. However the information is written as from to in the first table. My first table looks like this: ``` No. | Date | From entry | To Entry | +---+------------+------------+----------+ 1 | 21.12.2013 | 3 | 10 ``` My second table looks like this: ``` | Entry | Code | +--------+-------+ | 3 | 1 | | 4 | 0 | | 5 | 2 | | 6 | 3 | | 7 | 1 | | 8 | 0 | | 9 | 6 | | 10 | 1 | ``` I want to join both based on the from to information. The result should look like this: ``` | Entry | Code | Date | +--------+-------+------------+ | 3 | 1 | 21.12.2013 | | 4 | 0 | 21.12.2013 | | 5 | 2 | 21.12.2013 | | 6 | 3 | 21.12.2013 | | 7 | 1 | 21.12.2013 | | 8 | 0 | 21.12.2013 | | 9 | 6 | 21.12.2013 | | 10 | 1 | 21.12.2013 | ``` I have no idea how to achieve this with t-sql.
You just need to specify correct condition for the `INNER JOIN` ``` SELECT * FROM table1 INNER JOIN table2 ON table2.Entry >= table1.[From entry] AND table2.Entry <= table1.[To entry] ``` Or you may consider using `LEFT JOIN` if you want to return all records from table1 no matter if they have related records in table2.
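The range-join condition can be exercised with Python's built-in `sqlite3` (column names adjusted to be identifier-safe; data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ranges (nr INTEGER, dt TEXT, from_entry INTEGER, to_entry INTEGER)")
conn.execute("CREATE TABLE entries (entry INTEGER, code INTEGER)")
conn.execute("INSERT INTO ranges VALUES (1, '21.12.2013', 3, 10)")
conn.executemany("INSERT INTO entries VALUES (?, ?)",
                 [(3, 1), (4, 0), (5, 2), (6, 3), (7, 1), (8, 0), (9, 6), (10, 1)])

# BETWEEN is inclusive on both ends, matching the From/To semantics.
rows = conn.execute("""
    SELECT e.entry, e.code, r.dt
    FROM entries e
    JOIN ranges r ON e.entry BETWEEN r.from_entry AND r.to_entry
    ORDER BY e.entry
""").fetchall()
print(len(rows), rows[0])  # 8 (3, 1, '21.12.2013')
```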
Try this: (`INNER JOIN` and `BETWEEN AND`) ``` SELECT Entry, Code, Date FROM Table1 T1 INNER JOIN Table2 T2 ON T2.Entry BETWEEN T1.[From entry] AND T1.[To entry] ```
SQL Join on tables with from to entries
[ "", "sql", "sql-server", "t-sql", "" ]
I have the following table. ``` test_type | brand | model | band | firmware_version | avg_throughput -----------+---------+--------+------+-----------------+---------------- 1client | Linksys | N600 | 5ghz | 1 | 66.94 1client | Linksys | N600 | 5ghz | 2 | 94.98 1client | Linksys | N600 | 5ghz | 4 | 132.40 1client | Linksys | EA6500 | 5ghz | 1 | 216.46 1client | Linksys | EA6500 | 5ghz | 2 | 176.79 1client | Linksys | EA6500 | 5ghz | 4 | 191.44 ``` I'd like to select the `avg_throughput` of each `model` that has the lowest `firmware_version`. When I do `SELECT test_type, model, min(firmware_version) FORM temp_table GROUP BY test_type, model` I get what I want but once I add the `avg_throughput` column it requires me to also add it to the GROUP BY clause which makes it return all the rows when all I need is only the `avg_throughput` for the lowest `firmware_version` for each `model` type.
In standard SQL this can be done using a window function ``` select test_type, model, firmware_version, avg_throughput from ( select test_type, model, firmware_version, avg_throughput, min(firmware_version) over (partition by test_type, model) as min_firmware from temp_table ) t where firmware_version = min_firmware; ``` Postgres however has the `distinct on` operator which is usually faster than the corresponding solution with a window function: ``` select distinct on (test_type, model) test_type, model, firmware_version, avg_throughput from temp_table order by test_type, model, firmware_version; ``` SQLFiddle example: <http://sqlfiddle.com/#!15/563bd/1>
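For engines without `DISTINCT ON`, here is a portable join-to-the-minimum variant of the same idea, runnable with Python's built-in `sqlite3` (data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE temp_table (
    test_type TEXT, model TEXT, firmware_version INTEGER, avg_throughput REAL)""")
conn.executemany("INSERT INTO temp_table VALUES (?, ?, ?, ?)", [
    ("1client", "N600",   1,  66.94),
    ("1client", "N600",   2,  94.98),
    ("1client", "N600",   4, 132.40),
    ("1client", "EA6500", 1, 216.46),
    ("1client", "EA6500", 2, 176.79),
    ("1client", "EA6500", 4, 191.44),
])

# Join each row back to the per-group minimum firmware_version and keep
# only the matching rows.
rows = conn.execute("""
    SELECT t.test_type, t.model, t.firmware_version, t.avg_throughput
    FROM temp_table t
    JOIN (SELECT test_type, model, MIN(firmware_version) AS fv
          FROM temp_table GROUP BY test_type, model) m
      ON t.test_type = m.test_type AND t.model = m.model
         AND t.firmware_version = m.fv
    ORDER BY t.model
""").fetchall()
print(rows)  # [('1client', 'EA6500', 1, 216.46), ('1client', 'N600', 1, 66.94)]
```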
This should be what you're looking for if I'm reading your post correctly, and I think it's a pretty easily readable way of doing it. :-) ``` WITH min_firmware_version (model, firmware_version) AS ( SELECT model, MIN(firmware_version) FROM temp_table GROUP BY model ) SELECT temp_table.model, temp_table.firmware_version, temp_table.avg_throughput FROM temp_table INNER JOIN min_firmware_version ON temp_table.model = min_firmware_version.model AND temp_table.firmware_version = min_firmware_version.firmware_version ```
selecting a column based on a minimum value of another column
[ "", "sql", "postgresql", "greatest-n-per-group", "" ]
Using SQL Server 2012, I created a database and a table: ![enter image description here](https://i.stack.imgur.com/TCRRW.png) When I query the table the first time (after I connect to SQL Server) using this query: ``` select * from [dbo].[Downloads] ``` I get > Msg 2809, Level 16, State 1, Line 1 > The request for procedure 'Downloads' failed because 'Downloads' is a table object. But the second time I execute the same `SELECT` statement, it runs correctly How do I eliminate this error the first time around?
You had the text `Downloads` selected in Management Studio. This causes the T-SQL string `Downloads` to be executed. Such a T-SQL batch is interpreted as a procedure call.
I made one column a primary key, and the issue disappeared.
Initial error when querying SQL Server 2012 database
[ "", "sql", "sql-server", "database", "" ]
I have two tables.

table POI:

```
NAME  | VOTE
Paris | rt_1
Milan | rt_2
Rome  | rt_3
...   | ...
```

table rtgitems:

```
ITEM | TOTALRATE
rt_1 | 22
rt_2 | 3
rt_3 | 3
rt_4 | 5
...  | ...
```

I want the attribute NAME from the first table whose vote has the minimum TOTALRATE value in the second table. Example: Milan, Rome. I use this query:

```
SELECT POI.Name
FROM POI
INNER JOIN rtgitems
ON POI.Vote=rtgitems.item
WHERE POI.Vote = (SELECT MIN(rtgitems.totalrate)
FROM rtgitems)
```

but it doesn't work; I get an empty result. What should I do? Thanks.
Whenever you use an aggregate function such as MIN or MAX in your SQL, you should use a GROUP BY clause to get the correct output from the table. ``` SELECT POI.Name FROM POI INNER JOIN rtgitems ON POI.Vote=rtgitems.item where totalrate= (select min(totalrate) from rtgitems) GROUP BY POI.Name ``` Hope it helps.
try `SELECT POI.name FROM POI join rtgitems ON POI.vote=rtgitems.item where totalrate<=(SELECT totalrate from rtgitems order by totalrate asc limit 1)`
Return min value from query with inner join
[ "mysql", "sql" ]
How can I insert default values like below? ``` INSERT INTO table1 (name, number = rand()) VALUES ('name1'), ('name2'), ('name3'), ('name4') ```
You could just change/add the existing `default` constraint and perhaps revert it afterwards if it was only meant to be temporary: ``` ALTER TABLE table1 ADD CONSTRAINT def_number DEFAULT ABS(CHECKSUM(NewId())) FOR number; ``` and now you can insert them like this ``` INSERT INTO table1 (name) VALUES ('name1'), ('name2'), ('name3'), ('name4') ``` For example, this ``` drop table table1; create table table1 (name varchar(50), number int); alter table table1 add constraint def_number default abs(checksum(newid())) for number; insert into table1 (name) values ('name1'), ('name2'), ('name3'), ('name4'); select * from table1; ``` will produce ![enter image description here](https://i.stack.imgur.com/U4pv0.png)
It is unclear what aspect of the question you are having trouble with and whether by "default" value you mean "default" in the specific way that SQL Server uses the term. Assuming that you want four *different* random number values on the four inserted rows, the easiest way to do what you want is four separate `insert` statements. Personally, I prefer the `insert . . . select` syntax to `insert . . . values` because it is more flexible: ``` INSERT INTO table (name, number) SELECT 'name1', rand(); INSERT INTO table (name, number) SELECT 'name2', rand(); INSERT INTO table (name, number) SELECT 'name3', rand(); INSERT INTO table (name, number) SELECT 'name4', rand(); ``` In SQL Server, if you put these into one statement, `rand()` is evaluated only once so all four rows will get the same value. A typical way around that is to use `checksum(newid())` or `rand(checksum(newid()))` for the random number generation. If so, this will work: ``` INSERT INTO table (name, number) SELECT 'name1', rand(checksum(newid())) UNION ALL SELECT 'name2', rand(checksum(newid())) UNION ALL SELECT 'name3', rand(checksum(newid())) UNION ALL SELECT 'name4', rand(checksum(newid())); ``` You can also use similar logic to set up a default value for the column. If you want the "random" number to be tied to the name, then you can use: `name` rather than `newid()` in the expression.
INSERT values with random values defined in front of columns
[ "sql", "sql-server", "insert", "default" ]
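The answers above are SQL Server specific (`CHECKSUM(NEWID())`). As a cross-check of the general idea, SQLite also accepts a parenthesized expression as a column default, so omitted columns get a fresh random value per row; the names below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE table1 (
        name   TEXT,
        number INTEGER DEFAULT (abs(random()) % 100)  -- evaluated per inserted row
    )
""")
conn.executemany("INSERT INTO table1 (name) VALUES (?)",
                 [("name1",), ("name2",), ("name3",), ("name4",)])
rows = conn.execute("SELECT name, number FROM table1 ORDER BY rowid").fetchall()
# every row has a number in 0..99 even though the INSERT never mentioned the column
```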
This query works: ``` SELECT recipient_id FROM messages WHERE (sender_id=? and folder=1) UNION SELECT sender_id FROM messages WHERE (recipient_id=? and folder=1) ``` but I need to `ORDER BY time DESC` and wherever I add that it doesn't work. If I add `, time` to what I'm selecting it will order correctly, but it will change what it returns. I'm using MariaDB.
You can do a subselect over your ordered UNION result set and select only the desired field: ``` SELECT id FROM ( SELECT recipient_id as id,time FROM messages WHERE (sender_id=? and folder=1) UNION SELECT sender_id as id,time FROM messages WHERE (recipient_id=? and folder=1) ORDER BY time DESC ) t ``` Or, if you want to distinguish whether an id is a sender or a receiver, you can do it like this: ``` SELECT id,`type` FROM ( SELECT recipient_id as id,time, 'recipient' as `type` FROM messages WHERE (sender_id=? and folder=1) UNION SELECT sender_id as id,time ,'sender' as `type` FROM messages WHERE (recipient_id=? and folder=1) ORDER BY time DESC ) t ```
The `UNION` set operator will remove duplicate rows; if the SELECT returns multiple rows with the same recipient\_id, those will be collapsed into a single row. You'd need to decide which row you want the `time` value from. The `UNION ALL` set operator is similar to `UNION`, in that it concatenates the results, but it doesn't remove "duplicate" rows. You could get an equivalent result with a query something like this: ``` SELECT IF(m.sender_id=?,m.recipient_id,IF(m.recipient_id=?,m.sender_id,NULL)) AS x_id FROM messages m WHERE m.folder = 1 AND (sender_id=? OR recipient_id=?) GROUP BY x_id ORDER BY MAX(m.time) DESC ``` (NOTE: this will order the `x_id` values by the latest `time` value.) If you don't really want to eliminate duplicates (which the `UNION` operator is doing in the original query), if you want to return all of the rows, remove the `GROUP BY` clause and use a scalar `time` in place of the aggregate in the `ORDER BY` clause: ``` SELECT IF(m.sender_id=?,m.recipient_id,IF(m.recipient_id=?,m.sender_id,NULL)) AS x_id FROM messages m WHERE m.folder = 1 AND (sender_id=? OR recipient_id=?) ORDER BY m.time DESC ```
How to order two SELECTs with a UNION by a column I'm not selecting?
[ "mysql", "sql", "pdo", "mariadb" ]
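The accepted pattern (wrap the UNION in a derived table, then order the outer query) can be sketched in SQLite via Python's sqlite3; the schema and data are invented, and the `?` placeholders from the question are hard-coded to user 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE messages (sender_id INT, recipient_id INT, folder INT, time INT);
    INSERT INTO messages VALUES
        (1, 2, 1, 100),   -- user 1 sent to 2
        (3, 1, 1, 300),   -- user 3 sent to 1
        (1, 4, 1, 200);   -- user 1 sent to 4
""")
rows = conn.execute("""
    SELECT id FROM (
        SELECT recipient_id AS id, time FROM messages WHERE sender_id = 1 AND folder = 1
        UNION
        SELECT sender_id AS id, time FROM messages WHERE recipient_id = 1 AND folder = 1
    ) t
    ORDER BY t.time DESC
""").fetchall()
# newest conversation partners first: 3 (t=300), 4 (t=200), 2 (t=100)
```

Note that the outer `ORDER BY t.time DESC` may reference a derived-table column that is not in the outer select list.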
Consider a table like this: ![enter image description here](https://i.stack.imgur.com/JONV9.png) The `id` column is set to auto-increment, and the `update_time` column is supposed to be the date that the row was inserted in the table. I want to simply get the latest entry for a user with user_id = x. I found out there are some ways: ``` SELECT * FROM mytable WHERE user_id = x ORDER BY update_time DESC LIMIT 1 ``` And ``` SELECT * FROM mytable WHERE user_id = x MAX(update_time) ``` And another query would be selecting the row with the highest id number. I am not quite sure about the syntax of the latter one (please correct me). This may seem to be a trivial task, but there was a case where someone else altered the auto_increment value on the table, and the server time could change too (this has not happened in my case, but what if it does!?) What would be the safest query to get the latest entry row for a user_id (I mean a query to return weight, height and activity_level for a user with user_id = x)? Do I need to add more columns to the table? If so, what?
I like the join approach for this. Assumes that userID + update\_time is unique...it'll pull back multiple rows if it isn't. ``` select user_id, max(update_time) maxtime from table group by user_id ``` Simple statement to get the max update time by user ID. Use it as a subquery with inner join (inner join will function as a filter) ``` select t.* from table t inner join (select user_id, max(update_time) maxtime from table group by user_id --where user_id = x )a on a.user_id = t.user_id and a.maxtime = t.update_time ``` I commented out the where user\_id = x line...the advantage to this method is you can get all users and their most recent at once.
``` SELECT * FROM mytable WHERE user_id = x AND update_time = (SELECT MAX(update_time) FROM mytable WHERE user_id = x) ```
Best practice to get newest row in a table
[ "mysql", "sql", "select", "sql-order-by" ]
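The join-on-MAX approach from the answers can be sketched in SQLite with made-up data; it returns the newest row per user in one pass:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (id INTEGER PRIMARY KEY, user_id INT, weight REAL, update_time TEXT);
    INSERT INTO mytable (user_id, weight, update_time) VALUES
        (7, 80.0, '2014-01-01'),
        (7, 78.5, '2014-02-01'),
        (9, 60.0, '2014-01-15');
""")
rows = conn.execute("""
    SELECT t.user_id, t.weight, t.update_time
    FROM mytable t
    JOIN (SELECT user_id, MAX(update_time) AS maxtime
          FROM mytable GROUP BY user_id) m
      ON m.user_id = t.user_id AND m.maxtime = t.update_time
    ORDER BY t.user_id
""").fetchall()
# one latest row per user; add a WHERE on user_id to restrict to a single user
```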
How can I properly join the fields I have in the comment block to the SQL query? I have a billing phone number on the order header, but on the order lines, there is a shipping phone number for each line. The billing and shipping can be different. On every order line of one order, it's 99% the same shipping number, but I want to do `Top 1` and not `group by` just in case some data gets messed up. I think `UNION` might get me what I want, but it seems like there is a better way to get everything in one query without copying & pasting the same "where" clauses. ``` SELECT a.Order_no ,a.Customer_no ,a.BILL_LAST_NAME ,a.BILL_FIRST_NAME ,b.email ,a.BILL_ADDRESS1 ,a.BILL_ADDRESS2 ,a.BILL_CITY ,a.BILL_STATE ,a.BILL_POSTAL_CODE ,a.BILL_COUNTRY ,b.Address_Type ,a.BILL_PHONE ,a.BILL_PHONE_EXT ,a.Order_Date ,a.billing_status ,a.PO_Number ,a.Customer_comments ,a.ShipMethodShipperDesc ,a.ShipRate ,a.CouponDiscountCode ,a.CouponDiscount ,a.CustomerDiscount ,a.CustomerDiscountPercent ,a.SalesTaxTotal ,a.Payment_Method ,a.Credit_Card_Type ,a.Credit_Card_Number ,a.Order_Date ,a.BILL_TYPE ,a.Order_Net /* I added these lines but would like them joined properly */ /*-------->*/ , (select top 1 SHIP_ADDRESS1 from LineItems C where c.ORDER_NO = a.ORDER_NO) , (select top 1 SHIP_ADDRESS2 from LineItems C where c.ORDER_NO = a.ORDER_NO) , (select top 1 SHIP_CITY from LineItems C where c.ORDER_NO = a.ORDER_NO) , (select top 1 SHIP_STATE from LineItems C where c.ORDER_NO = a.ORDER_NO) , (select top 1 SHIP_POSTAL_CODE from LineItems C where c.ORDER_NO = a.ORDER_NO) , (select top 1 SHIP_COUNTRY from LineItems C where c.ORDER_NO = a.ORDER_NO) /*<-----------*/ FROM Orders AS a ,Customers AS b WHERE a.customer_no = b.customer_no AND a.AccountName = 'mywebaccount' AND a.billing_status <> 'Canceled' AND a.transferred = 0 AND a.order_status <> 'Canceled' AND EXISTS ( SELECT * FROM LineItems c WHERE c.order_no = a.order_no ) ORDER BY a.order_date ,a.order_no ```
I suggest something like: ``` SELECT Order_no ,Customer_no ,BILL_LAST_NAME ,BILL_FIRST_NAME ... ,SHIP_ADDRESS1 ,SHIP_ADDRESS2 ,SHIP_CITY ,SHIP_STATE ,SHIP_POSTAL_CODE ,SHIP_COUNTRY FROM ( SELECT a.Order_no ,a.Customer_no ,a.BILL_LAST_NAME ,a.BILL_FIRST_NAME ,b.email ,a.BILL_ADDRESS1 ,a.BILL_ADDRESS2 ,a.BILL_CITY ,a.BILL_STATE ,a.BILL_POSTAL_CODE ,a.BILL_COUNTRY ,b.Address_Type ,a.BILL_PHONE ,a.BILL_PHONE_EXT ,a.Order_Date ,a.billing_status ,a.PO_Number ,a.Customer_comments ,a.ShipMethodShipperDesc ,a.ShipRate ,a.CouponDiscountCode ,a.CouponDiscount ,a.CustomerDiscount ,a.CustomerDiscountPercent ,a.SalesTaxTotal ,a.Payment_Method ,a.Credit_Card_Type ,a.Credit_Card_Number ,a.Order_Date ,a.BILL_TYPE ,a.Order_Net ,c.SHIP_ADDRESS1 ,c.SHIP_ADDRESS2 ,c.SHIP_CITY ,c.SHIP_STATE ,c.SHIP_POSTAL_CODE ,c.SHIP_COUNTRY , row_number() over (partition by a.order_no order by SHIP_ADDRESS1, SHIP_ADDRESS2, SHIP_CITY, SHIP_STATE , SHIP_POSTAL_CODE, SHIP_COUNTRY) as rn FROM Orders AS a JOIN Customers AS b ON a.customer_no = b.customer_no JOIN LineItems c ON c.order_no = a.order_no WHERE a.AccountName = 'mywebaccount' AND a.billing_status <> 'Canceled' AND a.transferred = 0 AND a.order_status <> 'Canceled' ) as x WHERE rn = 1 ORDER BY order_date , order_no; ```
Firstly, don't do `FROM sometable1 AS t1, sometable2 AS t2`. Always join explicitly. Secondly, from the way your case looks, some variant of `APPLY` would suit nicely. Here's my version: ``` SELECT a.Order_no ,a.Customer_no ,a.BILL_LAST_NAME ,a.BILL_FIRST_NAME ,b.email ,a.BILL_ADDRESS1 ,a.BILL_ADDRESS2 ,a.BILL_CITY ,a.BILL_STATE ,a.BILL_POSTAL_CODE ,a.BILL_COUNTRY ,b.Address_Type ,a.BILL_PHONE ,a.BILL_PHONE_EXT ,a.Order_Date ,a.billing_status ,a.PO_Number ,a.Customer_comments ,a.ShipMethodShipperDesc ,a.ShipRate ,a.CouponDiscountCode ,a.CouponDiscount ,a.CustomerDiscount ,a.CustomerDiscountPercent ,a.SalesTaxTotal ,a.Payment_Method ,a.Credit_Card_Type ,a.Credit_Card_Number ,a.Order_Date ,a.BILL_TYPE ,a.Order_Net ,li.SHIP_ADDRESS1 ,li.SHIP_ADDRESS2 ,li.SHIP_CITY ,li.SHIP_STATE ,li.SHIP_POSTAL_CODE ,li.SHIP_COUNTRY FROM Orders AS a INNER JOIN Customers AS b ON a.customer_no = b.customer_no CROSS APPLY ( SELECT TOP 1 c.SHIP_ADDRESS1 ,c.SHIP_ADDRESS2 ,c.SHIP_CITY ,c.SHIP_STATE ,c.SHIP_POSTAL_CODE ,c.SHIP_COUNTRY FROM LineItems c WHERE c.ORDER_NO = a.ORDER_NO ORDER BY c.Id -- or whatever ) AS li WHERE a.AccountName = 'mywebaccount' AND a.billing_status <> 'Canceled' AND a.transferred = 0 AND a.order_status <> 'Canceled' -- no need for that exists since CROSS APPLY works like INNER JOIN ORDER BY a.order_date,a.order_no ```
How to properly join TOP 1 fields to SQL query
[ "sql", "sql-server", "t-sql", "sql-server-2012" ]
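The `row_number()` idea in the first answer is portable to any engine with window functions (SQLite 3.25+, PostgreSQL, SQL Server); a minimal sketch with invented line items, picking one row per order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE LineItems (order_no INT, ship_city TEXT);
    INSERT INTO LineItems VALUES
        (1, 'Austin'), (1, 'Austin'), (2, 'Boston'), (2, 'Chicago');
""")
rows = conn.execute("""
    SELECT order_no, ship_city FROM (
        SELECT order_no, ship_city,
               ROW_NUMBER() OVER (PARTITION BY order_no ORDER BY ship_city) AS rn
        FROM LineItems
    )
    WHERE rn = 1
    ORDER BY order_no
""").fetchall()
# exactly one shipping row survives per order_no
```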
I have 2 tables, one for members and another one for their services. Those are InnoDB tables on MySQL 5.6 server. Members table: ``` id | name | phone ---------------------------------------- 1 Daniel 123456789 2 Liam 123456789 3 Lucas 123456789 ``` Services table: ``` MID | profile | lastSeen ---------------------------------------- 1 2 2014-08-13 14:23:23 3 1 2014-08-12 15:29:11 ``` I try to achieve this result: ``` id | name | services --------------------------------- 1 Daniel true 2 Liam false 3 Lucas true ``` So if the user ID exists in the services table, the services column will be true, and false otherwise. I tried to do it with JOINs and Sub-Queries without success, so I need your help ;)
Use a `LEFT JOIN` with the services table. Try this query: ``` SELECT members.id, members.name, IF(services.mid IS NULL, FALSE, TRUE) as services FROM members LEFT JOIN services ON (members.id = services.mid) ```
``` select m.id, m.name, case when s.mid is null then false else true end as services from members m left join services s on s.mid = m.id ```
select column as true / false if id is exists in another table
[ "mysql", "sql" ]
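A runnable SQLite sketch of the accepted LEFT JOIN pattern (SQLite lacks `IF()`, so a `CASE` expression stands in, returning 1/0 instead of true/false):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE services (mid INT, profile INT);
    INSERT INTO members  VALUES (1, 'Daniel'), (2, 'Liam'), (3, 'Lucas');
    INSERT INTO services VALUES (1, 2), (3, 1);
""")
rows = conn.execute("""
    SELECT m.id, m.name,
           CASE WHEN s.mid IS NULL THEN 0 ELSE 1 END AS services
    FROM members m
    LEFT JOIN services s ON m.id = s.mid
    ORDER BY m.id
""").fetchall()
# members without a services row come back with 0
```

If a member could have several service rows, add `DISTINCT` or join against `(SELECT DISTINCT mid FROM services)` to avoid duplicate rows.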
Pretty simple question I suppose, but I can't find the answer on here! I am trying to calculate the volume of records based on a WHERE clause, as well as return a percentage of those records based on the same where clause. The percentage would be calculated against the total amount of records in the database. For example, I count all my records that meet "MyCondition": ``` SELECT COUNT(*) FROM [MyTable] WHERE Condition='MyCondition' ``` This works fine. However, how does one take that count, and return the percentage it equates to when put against all the records in the database? In other words, I want to see the percentage of how many records meet WHERE Condition='MyCondition' in regards to the total record count. Sorry for the simple question and TIA! I am using MS SQL 2012.
You can simply divide the matching count by the total number of records. **Sample Data:** ``` create table test (MatchColumn int) insert into test (MatchColumn) values (1),(1),(1),(2),(3),(4) ``` **Match Condition:** ``` SELECT COUNT(*) MatchValues, (SELECT COUNT(*) FROM test) TotalRecords, CAST(COUNT(*) AS FLOAT)/CAST((SELECT COUNT(*) FROM test) AS FLOAT)*100 Percentage FROM [test] WHERE MatchColumn=1 ``` **Returns:** ``` | MATCHVALUES | TOTALRECORDS | PERCENTAGE | |-------------|--------------|------------| | 3 | 6 | 50 | ``` ## [SQL Fiddle Demo](http://sqlfiddle.com/#!3/67425/23) **Using a CTE:** Another option is to do the same with a **[CTE](http://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx)** and reference the columns it creates: ``` ;WITH CTE AS (SELECT COUNT(*) MatchValues, (SELECT COUNT(*) FROM test) TotalRecords FROM [test] WHERE MatchColumn=1) SELECT MatchValues, TotalRecords, CAST(MatchValues AS FLOAT)/CAST(TotalRecords AS FLOAT)*100 Percentage FROM CTE ``` ## [SQL Fiddle Demo](http://sqlfiddle.com/#!3/67425/25) **NOTE:** Casting the counts to a `float` to calculate a percentage is required, as dividing 2 `int` values would return an `int`, but in this case the division yields a decimal value less than 1, which would simply be 0 if it were an `int`. **Reference:** ## [SQL Server, division returns zero](https://stackoverflow.com/questions/1666407/sql-server-division-returns-zero)
Here is another method that only hits the base table once. ``` SELECT COUNT(*) as TotalCount ,SUM(case when Condition = 'MyCondition' then 1 else 0 end) as ConditionalCount FROM [MyTable] ```
Using COUNT and Return Percentage Against SUM of Records
[ "sql", "sql-server", "sql-server-2012" ]
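Both answers reduce to conditional aggregation; here is the single-scan version from the second answer, sketched in SQLite with made-up data (multiplying by 100.0 forces decimal division, the same pitfall the accepted answer's note covers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (status TEXT);
    INSERT INTO t VALUES ('A'), ('A'), ('A'), ('B'), ('C'), ('D');
""")
row = conn.execute("""
    SELECT SUM(CASE WHEN status = 'A' THEN 1 ELSE 0 END) AS matches,
           COUNT(*)                                      AS total,
           100.0 * SUM(CASE WHEN status = 'A' THEN 1 ELSE 0 END) / COUNT(*) AS pct
    FROM t
""").fetchone()
# one scan of the table: matching count, total count, and percentage together
```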
I am not quite sure if this is possible through SQL query alone. Let's say I have a table with the following data and structure: ``` ID | Item Code | Store Name | Store Price 1 | 101 | Red | 10.00 2 | 101 | Blue | 9.75 3 | 102 | Green | 11.50 4 | 103 | Black | 5.75 5 | 103 | Yellow | 4.50 6 | 103 | Purple | 6.00 ``` And I want to have a result like this: ``` ItemCode | Store1Name | Store1Price | Store2Name | Store2Price | Store3Name | Store3Price 101 | Red | 10.00 | Blue | 9.75 | | 102 | Green | 11.50 | | | | 103 | Purple | 6.00 | Black | 5.75 | Yellow | 4.50 ``` I am currently trying to use JOINs to solve this problem but still can't get the desired result. Sample JOIN I created: ``` SELECT A.ItemCode [ItemCode], A.StoreName [Store1Name], A.StorePrice [Store1Price], B.StoreName [Store2Name], B.StorePrice [Store2Price], C.StoreName [Store3Name], c.StorePrice [Store3Price] FROM tblStorePrice A LEFT JOIN tblStorePrice B ON A.ItemCode = B.ItemCode AND A.ID <> B.ID LEFT JOIN tblStorePrice C ON A.ItemCode = C.ItemCode AND A.ID <> C.ID ``` Note: The table only stores three stores per Item Code (at maximum). Item Codes with less than 3 stores should have null values in the result. Hoping to have positive feedback and response. Thanks in advance, guys! :)
Thanks for your time and effort, guys! I've come up with this solution using a self join, as I stated earlier. However, I know this is not the best solution, but I'll use it for the meantime. Using SELF JOIN: `SELECT A.ItemCode [ItemCode], A.StoreName [Store1Name], A.StorePrice [Store1Price],B.StoreName [Store2Name], B.StorePrice [Store2Price],C.StoreName [Store3Name], C.StorePrice [Store3Price] FROM tblStorePrice A LEFT JOIN tblStorePrice B ON A.ItemCode = B.ItemCode AND A.ID <> B.ID LEFT JOIN tblStorePrice C ON B.ItemCode = C.ItemCode AND B.ID <> C.ID AND A.ID <> C.ID` This will give a result like this: ![enter image description here](https://i.stack.imgur.com/t4rOH.png) It's still not the desired output, but inserting this result into another table (with a primary key) and selecting the distinct itemcode will do the trick.
Try this code. Table creation: ``` CREATE TABLE [dbo].[StorePrices]( [ID] [int] IDENTITY(1,1) NOT NULL, [ItemCode] [int] NULL, [StoreName] [varchar](256) NULL, [StorePrice] [float] NULL, CONSTRAINT [PK_StorePrices] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] ``` and then the SQL for prices: ``` Select DistinctItems.ItemCode, StoreRedPrices.StorePrice 'Red', StoreBluePrices.StorePrice 'Blue', StoreYellowPrices.StorePrice 'Yellow' from (select distinct ItemCode from storeprices) DistinctItems left join storeprices StoreRedPrices on StoreRedPrices.ItemCode = DistinctItems.ItemCode and StoreRedPrices.StoreName = 'Red' left join storeprices StoreBluePrices on StoreBluePrices.ItemCode = DistinctItems.ItemCode and StoreBluePrices.StoreName = 'Blue' left join storeprices StoreYellowPrices on StoreYellowPrices.ItemCode = DistinctItems.ItemCode and StoreYellowPrices.StoreName = 'Yellow' ``` EDIT: an XML alternative that I'd use, now that we know the stores are dynamic: ``` select ItemCode, (select StoreName, StorePrice from StorePrices StorePrice where ItemCode = Item.ItemCode for xml auto,type) StorePrices from StorePrices Item group by Item.ItemCode for xml auto, root('Items') ``` I hope that this is of some help to you. If not, you may want to look into pivot queries. Regards, Liam
Combine multiple rows in one from same table
[ "sql", "sql-server" ]
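When the store names are known up front, the multi-join pivot in the second answer can also be written as conditional aggregation; a SQLite sketch with invented data, where `MAX(CASE ...)` picks the price for each named store and leaves NULL where the item is not stocked:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE StorePrices (ItemCode INT, StoreName TEXT, StorePrice REAL);
    INSERT INTO StorePrices VALUES
        (101, 'Red', 10.00), (101, 'Blue', 9.75), (102, 'Green', 11.50);
""")
rows = conn.execute("""
    SELECT ItemCode,
           MAX(CASE WHEN StoreName = 'Red'   THEN StorePrice END) AS Red,
           MAX(CASE WHEN StoreName = 'Blue'  THEN StorePrice END) AS Blue,
           MAX(CASE WHEN StoreName = 'Green' THEN StorePrice END) AS Green
    FROM StorePrices
    GROUP BY ItemCode
    ORDER BY ItemCode
""").fetchall()
# one row per item, one column per known store name
```

For the question's positional Store1/Store2/Store3 layout (arbitrary store names per item), a `row_number()`-style ranking is needed instead.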
I'm kind of new to databases and I was hoping you could answer my question: I have a table with an ID that's a primary key, not null, and auto-increments. Its data type is int, with a limit of 50 digits. This table will be receiving new rows constantly, so I was wondering whether it's dangerous to leave it as it is because of the digit limit, and how I can get around this problem. I know it might sound silly, but I'm really just starting to learn about databases.
If you're concerned that it may hit the 50 digits (that's a *lot* of rows, I would really be surprised if you reached it), you could set up a scheduled job to monitor the max Id and, when close to 50 digits, send an email to the system admins about the need to clean up space or increase the field size. As far as using auto numbers goes, the only real problem you'll have over time is gaps: how do you clean up empty data and allow the system to re-use those numbers if reaching the limit is a concern? A secondary concern is moving records into or around in test databases. Since the number will be automatically assigned, you either have to disable the auto number or make accommodations.
I don't think there is any danger you will face with that. If you have set the primary key to contain a maximum of 50 digits then I don't think there is any issue, as ``` 9999999..........50 times...99 will be a large number ``` However, I am not sure which datatype you are using, because as far as BIGINT is concerned, a BIGINT is always 8 bytes and can store **-9223372036854775808 to 9223372036854775807** (signed) or **0 to 18446744073709551615** (unsigned), which is nowhere near 50 digits, but it is a big number and very rare to reach. On a side note: > [...] This optional display width may be used by applications to display integer values having a width less than the width specified for the column by left-padding them with spaces. ... > > **The display width does not constrain the range of values that can be stored in the column, nor the number of digits that are displayed for values having a width exceeding that specified for the column.** For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range allowed by three characters are displayed using more than three characters.
mySql database - ID auto_increment : what are the risks of it?
[ "mysql", "sql", "database" ]
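The BIGINT bounds quoted in the second answer are easy to sanity-check in Python: a signed 8-byte integer tops out at 2**63 - 1, which is only 19 digits long (20 unsigned), nowhere near 50 digits:

```python
signed_max = 2**63 - 1      # BIGINT upper bound: 8 bytes, signed
unsigned_max = 2**64 - 1    # BIGINT UNSIGNED upper bound
digits_signed = len(str(signed_max))
digits_unsigned = len(str(unsigned_max))
# signed_max == 9223372036854775807 -> 19 digits; unsigned_max -> 20 digits
```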
This query: ``` SELECT CASE WHEN 'abc ' = 'abc' THEN 1 ELSE 0 END ``` Returns 1, even though 'abc ' clearly is not equal to 'abc'. Similarly, ``` SELECT CASE WHEN 'abc ' LIKE '%c' THEN 1 ELSE 0 END ``` Also returns 1. However, a very similar query: ``` SELECT * FROM #tempTable WHERE Name LIKE '%c' ``` Did not return a row where Name = 'abc '. SQL Server 2008 R2, Windows 7 & 2008 R2, x64.
Turns out that the Name column was `NVARCHAR` (even though it contained ASCII characters only) and `NVARCHAR` behaves differently than `VARCHAR`: ``` SELECT CASE WHEN N'abc ' LIKE 'abc' THEN 1 ELSE 0 END ``` Returns 0, ditto for column instead of literal. The following does return 1 still: ``` SELECT CASE WHEN N'abc ' = 'abc' THEN 1 ELSE 0 END ``` So `=` and `LIKE` work differently, another peculiar difference. PS. For posterity, here are some examples: ``` SELECT CASE WHEN 'abc ' = 'abc' THEN 1 ELSE 0 END -- 1 SELECT CASE WHEN 'abc ' LIKE 'abc' THEN 1 ELSE 0 END -- 1 SELECT CASE WHEN 'abc ' LIKE '%c' THEN 1 ELSE 0 END -- 1 SELECT CASE WHEN N'abc ' = N'abc' THEN 1 ELSE 0 END -- 1 SELECT CASE WHEN N'abc ' LIKE N'abc' THEN 1 ELSE 0 END -- 0 SELECT CASE WHEN N'abc ' LIKE N'%c' THEN 1 ELSE 0 END -- 0 CREATE TABLE #tempTable (Name VARCHAR(30)); INSERT #tempTable VALUES ('abc '); SELECT * FROM #tempTable WHERE Name = 'abc'; -- returns row SELECT * FROM #tempTable WHERE Name LIKE 'abc'; -- returns row SELECT * FROM #tempTable WHERE Name LIKE '%c'; -- returns row SELECT * FROM #tempTable WHERE Name LIKE N'%c'; -- does not return row CREATE TABLE #tempTable2 (Name NVARCHAR(30)); INSERT #tempTable2 VALUES (N'abc '); SELECT * FROM #tempTable2 WHERE Name = N'abc'; -- returns row SELECT * FROM #tempTable2 WHERE Name LIKE N'abc'; -- does not return row SELECT * FROM #tempTable2 WHERE Name LIKE '%c'; -- does not return row SELECT * FROM #tempTable2 WHERE Name LIKE N'%c'; -- does not return row ```
`=` ignores trailing spaces. `LEN` ignores trailing spaces. `LIKE` does not ignore trailing spaces. ``` SELECT CASE WHEN 'abc ' = 'abc' and DATALENGTH('abc ') = DATALENGTH('abc') THEN 1 ELSE 0 END ``` You may assert that DATALENGTH is not relevant, but it is still the solution.
SQL - string comparison ignores space
[ "sql", "sql-server", "string", "t-sql", "sql-server-2008-r2" ]
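The padding behavior discussed above is engine-specific: SQL Server pads the shorter operand for `=` per ANSI rules, while SQLite, for contrast, treats trailing spaces as significant for both `=` and `LIKE`. A quick check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
eq = conn.execute("SELECT 'abc ' = 'abc'").fetchone()[0]
like = conn.execute("SELECT 'abc ' LIKE 'abc'").fetchone()[0]
# both are 0 in SQLite: 'abc ' and 'abc' are simply different strings
```

So code that relies on trailing-space-insensitive comparison is not portable across engines.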
I have a table called Itemlookup and I want to insert values from two tables. However, I only want to insert them if Itemlookup doesn't have these values. I can insert into the table, but if I execute it again, it inserts again. I want to avoid that. I have tried using IF NOT EXISTS, but am not having any luck. Any help is most appreciated. ``` Insert into Itemlookup (ItemNumber, Cases, Shift, [TimeStamp]) Select a.ItemNumber, b.CaseCount, a.TimeStamp from ItemsProduced a inner join MasterItemList b on a.ItemNumber=b.ItemNumber ```
Try a `left join` ``` Insert into Itemlookup (ItemNumber, Cases, [TimeStamp]) Select a.ItemNumber, b.CaseCount, a.TimeStamp from ItemsProduced a inner join MasterItemList b on a.ItemNumber = b.ItemNumber left join Itemlookup i on i.ItemNumber = a.ItemNumber and i.Cases= b.CaseCount and i.TimeStamp = a.TimeStamp where i.ItemNumber is null ``` BTW your insert contains 4 columns and your select only 3. I removed `Shift` for consistency.
If you're interested, here is the correct way to use NOT EXISTS (also removing "Shift" as juergen did): ``` Insert into Itemlookup (ItemNumber, Cases, [TimeStamp]) Select a.ItemNumber, b.CaseCount, a.TimeStamp from ItemsProduced a inner join MasterItemList b on a.ItemNumber = b.ItemNumber WHERE NOT EXISTS( SELECT * FROM Itemlookup WHERE ItemNumber=a.ItemNumber AND Cases=b.CaseCount AND [TimeStamp]=a.TimeStamp ) ```
Sql join and insert if not exists
[ "sql", "sql-server", "sql-server-2008" ]
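A runnable SQLite sketch of the NOT EXISTS variant (table and column names simplified; the question's `[TimeStamp]` column is just `ts` here). Running the statement twice shows the second pass inserts nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Itemlookup (ItemNumber INT, Cases INT, ts TEXT);
    CREATE TABLE Produced   (ItemNumber INT, Cases INT, ts TEXT);
    INSERT INTO Produced VALUES (1, 10, 'd1'), (2, 20, 'd2');
""")
sql = """
    INSERT INTO Itemlookup (ItemNumber, Cases, ts)
    SELECT p.ItemNumber, p.Cases, p.ts
    FROM Produced p
    WHERE NOT EXISTS (
        SELECT 1 FROM Itemlookup i
        WHERE i.ItemNumber = p.ItemNumber AND i.Cases = p.Cases AND i.ts = p.ts
    )
"""
conn.execute(sql)
conn.execute(sql)  # second run: every candidate row already exists, so nothing is added
count = conn.execute("SELECT COUNT(*) FROM Itemlookup").fetchone()[0]
```

The LEFT JOIN / IS NULL form in the accepted answer produces the same effect.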
I have a table with a list of employees and the number of units that they have sold. I want to get the top 25 percentile average units sold and the bottom 25 percentile average units sold. I have created a representation of my data: [SQL Fiddle](http://sqlfiddle.com/#!2/49a2c) I really have no idea how to start on this. All the examples I see are for SQL Server and not MySQL. Here is what I am thinking. I want the 25th percentile and can't just limit to 25 items. Basically it would involve: ``` 1) #_of_employees = The number of total employees. 2) #_of_employees_in_25_percentile = #_of_employees*0.25 3) Calculate the sum of the units sold by the top/bottom 25 percentile (limit #_of_employees_in_25_percentile) 4) Divide the sum by #_of_employees_in_25_percentile to get the average. ``` How can all this be done efficiently in MySQL?
This is a solution that uses a devious trick I learned [from this question](https://stackoverflow.com/questions/3333665/mysql-rank-function/3333697#3333697). ``` SELECT id, unit_sold, n * 100 / @total AS percentile FROM ( SELECT id, unit_sold, @total := @total + unit_sold AS n FROM mydata, (SELECT @total := 0) AS total ORDER BY unit_sold ASC ) AS t ``` [SQL Fiddle.](http://sqlfiddle.com/#!2/49a2c/15/0)
What about this? ``` SELECT SUM(unit_sold) AS sum_tot, SUM(unit_sold)/count(id) AS average, SUM(CASE WHEN percentile<25 THEN unit_sold ELSE 0 END) AS sum_top25, SUM(CASE WHEN percentile<25 THEN 1 ELSE 0 END) AS count_top25, SUM(CASE WHEN percentile<25 THEN unit_sold ELSE 0 END)/SUM(CASE WHEN percentile<25 THEN 1 ELSE 0 END) AS average_top25, SUM(CASE WHEN percentile>75 THEN unit_sold ELSE 0 END) AS sum_bottom25, SUM(CASE WHEN percentile>75 THEN 1 ELSE 0 END) AS count_bottom25, SUM(CASE WHEN percentile>75 THEN unit_sold ELSE 0 END)/SUM(CASE WHEN percentile>75 THEN 1 ELSE 0 END) AS average_bottom25 FROM (SELECT id, unit_sold, c * 100 / @counter AS percentile FROM (SELECT m.*, @counter:=@counter+1 AS c FROM (SELECT @counter:=0) AS initvar, mydata AS m ORDER BY unit_sold desc ) AS t WHERE c <= (25/100 * @counter) OR c >= (75/100 * @counter) ) AS t2 ``` Output: ``` SUM_TOT AVERAGE SUM_TOP25 COUNT_TOP25 AVERAGE_TOP25 SUM_BOTTOM25 COUNT_BOTTOM25 AVERAGE_BOTTOM25 850 283.3333 500 1 500 350 2 175 ``` See [SQL Fiddle](http://sqlfiddle.com/#!2/49a2c/69). The idea is to use the [MySQL: LIMIT by a percentage of the amount of records?](https://stackoverflow.com/questions/5615172/mysql-limit-by-a-percentage-of-the-amount-of-records) solution to get the percentiles. Based on that (and on [pdw answer](https://stackoverflow.com/a/25316323/1983854)) we create an output in which we just show the top 25% and bottom 75%. Finally, we count and sum to get the values you requested. --- Note this runs on top of the command: ``` SELECT id, unit_sold, c * 100 / @counter AS percentile FROM (SELECT m.*, @counter:=@counter+1 AS c FROM (SELECT @counter:=0) AS initvar, mydata AS m ORDER BY unit_sold desc ) AS t WHERE c <= (25/100 * @counter) OR c >= (75/100 * @counter) ``` Whose output is: ``` ID UNIT_SOLD PERCENTILE d 500 20 a 250 80 e 100 100 ```
get top and bottom 25th percentile average
[ "mysql", "sql" ]
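The user-variable trick above is needed because MySQL 5.x has no window functions; on engines that do (MySQL 8+, SQLite 3.25+), `NTILE(4)` buckets the rows directly and the outer query averages the bottom and top quartiles. A sketch with invented sales figures:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mydata (id TEXT, unit_sold INT);
    INSERT INTO mydata VALUES
        ('a', 250), ('b', 310), ('c', 400), ('d', 500),
        ('e', 100), ('f', 110), ('g', 120), ('h', 130);
""")
rows = conn.execute("""
    SELECT quartile, AVG(unit_sold)
    FROM (SELECT unit_sold,
                 NTILE(4) OVER (ORDER BY unit_sold) AS quartile
          FROM mydata)
    WHERE quartile IN (1, 4)
    GROUP BY quartile
    ORDER BY quartile
""").fetchall()
# quartile 1 is the bottom 25% of sellers, quartile 4 the top 25%
```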
I don't know what title I should give my question (never mind that). Below is my select query: ``` select gtab04.product,gtab05.productid,gtab05.mrp, gtab05.ptr,gtab05.ssr,gtab07.patent from gtab05 inner join gtab07 on gtab05.patentid=gtab07.patentid inner join gtab04 on gtab05.productid=gtab04.productid where gtab05.qty-gtab05.iqty > 0 order by productid ``` This will return `500+` rows; see the sample below: ``` product |productid |mrp |ptr |ssr |patent ------------------+------------+--------+-----+-----+----------------- IBUGESIC Plus Tab |200 |12.80000|9.85 |8.87 |CIPLA LTD ANGICAM 2.5 Tab |267 |9.00000 |6.93 |6.44 |BLUE CROSS LABORATORIES ANGICAM 2.5 Tab |267 |5.00000 |6.93 |6.24 |BLUE CROSS LABORATORIES ANGICAM 2.5 Tab |267 |5.00000 |6.93 |6.44 |BLUE CROSS LABORATORIES ANGICAM 2.5 Tab |267 |5.00000 |7.359|6.24 |BLUE CROSS LABORATORIES ANGICAM 5 Mg Tab |268 |14.00000|10.78|10.03|BLUE CROSS LABORATORIES ANGICAM 5 Mg Tab |268 |12.00000|11.44|9.7 |BLUE CROSS LABORATORIES ANGICAM BETA Tab |269 |17.00000|13.09|12.17|BLUE CROSS LABORATORIES ANGICAM BETA Tab |269 |15.00000|13.9 |11.78|BLUE CROSS LABORATORIES HIBESOR 25 TAB |270 |9.00000 |6.93 |6.44 |BLUE CROSS LABORATORIES ``` I would like to modify the above result as follows: ``` product |productid |mrp |ptr |ssr |patent ------------------+------------+--------+-----+-----+----------------- IBUGESIC Plus Tab |200 |12.80000|9.85 |8.87 |CIPLA LTD ANGICAM 2.5 Tab |267 |9.00000 |6.93 |6.44 |BLUE CROSS LABORATORIES ANGICAM 5 Mg Tab |268 |14.00000|10.78|10.03|BLUE CROSS LABORATORIES ANGICAM BETA Tab |269 |17.00000|13.9 |11.78|BLUE CROSS LABORATORIES HIBESOR 25 TAB |270 |9.00000 |6.93 |6.44 |BLUE CROSS LABORATORIES ``` * My criteria: group by `productid` and, from each group, take the `product` having `max(mrp)`. --- This is what I have tried so far: 
``` With cte as ( select gtab04.product,gtab05.productid,gtab05.mrp, gtab05.ptr,gtab05.ssr,patent from gtab05 inner join gtab07 on gtab05.patentid=gtab07.patentid inner join gtab04 on gtab05.productid=gtab04.productid where qty-iqty > 0 order by productid limit 10 ) select productid,max(cte.mrp) as mrp from cte group by productid order by productid RESULT -------------- productid | mrp 200|12.80000 267|9.00000 268|14.00000 269|17.00000 270|9.00000 ``` ---
`distinct on` combined with `order by` ``` select distinct on (productid) gtab04.product, productid, gtab05.mrp, gtab05.ptr, gtab05.ssr, patent from gtab05 inner join gtab07 using(patentid) inner join gtab04 using(productid) where qty-iqty > 0 order by productid, gtab05.mrp desc ``` If there are any other tie-breaking criteria, add them to the `order by` clause. <http://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT>
Reusing the original CTE: ``` WITH cte AS ( select gtab04.product ,gtab05.productid ,gtab05.mrp, gtab05.ptr, gtab05.ssr ,patent -- NEEDS alias-designation from gtab05 inner join gtab07 on gtab05.patentid = gtab07.patentid -- NOT USED ... inner join gtab04 on gtab05.productid = gtab04.productid where qty-iqty > 0 -- NEEDS alias-designation order by productid -- NEEDS alias-designation -- limit 10 ) SELECT * FROM cte t0 WHERE NOT EXISTS ( SELECT * FROM cte t1 WHERE t1.productid = t0.productid AND t1.mrp > t0.mrp ) order by productid ; ```
how to group an ID field and select rows having highest value in another field PostgreSQL
[ "sql", "postgresql", "greatest-n-per-group" ]
So I need to write a stored procedure to generate a report. In this report, I need to show the average daily count for the check amount and check count. Here's what I have for now: ``` SELECT SendingBank, CheckCount As TotalCount, CheckAmt As TotalAmt, AVG(CheckCount/30) AS AvgDailyCount, AVG(CheckAmt/30) AS AvgDailyAmt FROM tblRptT001 WHERE InwardClearingDate = @asInwardClearingDt GROUP BY InwardClearingDate, SendingBank, CheckCount, CheckAmt ORDER BY InwardClearingDate ``` I know what I did for the average count is totally wrong, and I am stuck with that as not every month has the same number of days. I tested this sp and it works, except not the way I wanted for both of the average fields. Any ideas on how I should approach this? Any kind of help is greatly appreciated. Thank you! **Edit**: This is the sample data ``` InwardClearingDate SendingBank CheckCount CheckAmt 2014-03-03 ABC Bank 1800 70000.00 2014-03-21 BBC Bank 526 456090.00 ``` and the result that I want ``` Sending Bank|Total Count| Total Amt| Daily Avg Count| Daily Avg Amt ABC Bank | 1800 | 70000.00 | 60 |2333.00 ``` The parameter is the inward clearing date, for example `2014-03-03`.
Please see whether this is what you are looking for:

```
Declare @asInwardClearingDt Date
Declare @month int = Datediff(Day, @asInwardClearingDt, Dateadd(Month, 1, @asInwardClearingDt))

Select SendingBank,
       Sum(CheckCount) As TotalCount,
       Sum(CheckAmt) As TotalAmt,
       Sum(CheckCount) / @month As AvgDailyCount,
       Sum(CheckAmt) / @month As AvgDailyAmt
From tblRptT001
Where Datepart(MONTH, InwardClearingDate) = Datepart(MONTH, @asInwardClearingDt)
And Datepart(YEAR, InwardClearingDate) = Datepart(YEAR, @asInwardClearingDt)
Group By SendingBank
```

`@month` is the number of days in the month of `@asInwardClearingDt`, so months of 28, 29, 30 and 31 days are all handled correctly.
> I know what I did for the average count is totally wrong and I am stuck with that as not every month has the same number of days.

To address that, consider applying the DAY and EOMONTH() functions to InwardClearingDate, so you divide by the actual number of days in the month:

```
SELECT SendingBank,
       CheckCount AS TotalCount,
       CheckAmt AS TotalAmt,
       AVG(CheckCount / DAY(EOMONTH(InwardClearingDate))) AS AvgDailyCount,
       AVG(CheckAmt / DAY(EOMONTH(InwardClearingDate))) AS AvgDailyAmt
FROM tblRptT001
WHERE InwardClearingDate = @asInwardClearingDt
GROUP BY InwardClearingDate, SendingBank, CheckCount, CheckAmt
ORDER BY InwardClearingDate
```
Average daily count for every month
[ "", "sql", "sql-server", "" ]
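`EOMONTH` and `DATEDIFF` are SQL Server functions, so here is a cross-checkable sketch of the same days-in-month arithmetic: the month length comes from Python's `calendar.monthrange`, and the aggregation runs on SQLite. The sample row is the one from the question; everything else is standard library:

```python
import calendar
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblRptT001 "
             "(InwardClearingDate TEXT, SendingBank TEXT, CheckCount INTEGER, CheckAmt REAL)")
conn.execute("INSERT INTO tblRptT001 VALUES ('2014-03-03', 'ABC Bank', 1800, 70000.0)")

param = '2014-03-03'                                  # the @asInwardClearingDt parameter
year, month = int(param[:4]), int(param[5:7])
days_in_month = calendar.monthrange(year, month)[1]   # 31 for March 2014

row = conn.execute("""
SELECT SendingBank,
       SUM(CheckCount),
       SUM(CheckAmt),
       SUM(CheckCount) * 1.0 / ?,
       SUM(CheckAmt) / ?
FROM tblRptT001
WHERE strftime('%Y-%m', InwardClearingDate) = strftime('%Y-%m', ?)
GROUP BY SendingBank
""", (days_in_month, days_in_month, param)).fetchone()
print(row)
```

A 28-, 29-, 30- or 31-day month is handled automatically, which is exactly what dividing by a hard-coded 30 gets wrong.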
I know there are a few posts about this. I've tried to work with the answers but I must be missing something somewhere (I'm pretty much a noob). As I said in the title, I would like to have a page that shows a big list of items, checked or not according to database information. The page loads with an Id.

Type: CSHTML, Razor
Database: defect
Table name: defectqc

The table is looking a bit like this so far:

```
<table>
<tr><td><p><input type="checkbox" name="ltcheck1" checked="@ltcheck1"></td></tr>
<tr><td><p><input type="checkbox" name="ltcheck2" checked="@ltcheck2"></td></tr>
<tr><td><p><input type="checkbox" name="ltcheck3" checked="@ltcheck3"></td></tr>
</table>
```

So the code I've tried at the beginning is this one...

```
var Id = "";
var ltcheck1 = "";
var ltcheck2 = "";
var ltcheck3 = "";

if(!IsPost){
    if(!Request.QueryString["Id"].IsEmpty() && Request.QueryString["Id"].IsInt()) {
        Id = Request.QueryString["Id"];
        var db = Database.Open("defect");
        var dbCommand = "SELECT * FROM defectqc WHERE Id = @0";
        var row = db.QuerySingle(dbCommand, Id);

        if(row != null) {
            ltcheck1 = row.ltcheck1;
            ltcheck2 = row.ltcheck2;
            ltcheck3 = row.ltcheck3;
        }
```

The database columns contain "True" or "False". I want the checkboxes to be checked if the column is "True".

Please MTV! Pimp my ride! ;D Sorry for my English, I'm trying hard.
Following your logic, Schaemelhout, would it be possible to use this kind of statement?

```
var Id = "";
var ltcheck1 = "";
var ltcheck2 = "";
var ltcheck3 = "";

if(!IsPost){
    if(!Request.QueryString["Id"].IsEmpty() && Request.QueryString["Id"].IsInt()) {
        Id = Request.QueryString["Id"];
        var db = Database.Open("defect");
        var dbCommand = "SELECT * FROM defectqc WHERE Id = @0";
        var row = db.QuerySingle(dbCommand, Id);

        if(row.ltcheck1 = "true")
        {
            ltcheck1.checked
        }
        if(row.ltcheck2 = "true")
        {
            ltcheck2.checked
        }
```

(I know the syntax isn't correct.) I found this, which is in PHP:

```
<?php if ($row['ltcheck1'] == 'True') $checked = 'checked="checked"'; ?>
<input type="checkbox" name="ltcheck1" value="True" <?php echo $checked; ?> />
```

Assuming that the data has already been pulled out, it would do exactly what I need... Is there a way to translate it?
Why don't you use the html helpers? ``` @Html.CheckBox("ltcheck1") @Html.CheckBox("ltcheck2") @Html.CheckBox("ltcheck3") ```
How can I load a page with checkboxes checked according to database info
[ "", "sql", "asp.net-mvc", "razor", "" ]
I'm trying to create a stored procedure that gets the last 4 months' worth of results from the query below, but I'm unsure how to do this. This is what I have done so far:

```
Declare @Number varchar(30) = '12'

Select month = month(EndDate),
       YEAR = YEAR(EndDate),
       SUM(DownloadUnits) as downloads,
       SUM(UploadUnits) as uploads,
       number
from testTable
where number=@Number
GROUP BY MONTH(EndDate), Year(Enddate), number
```

How can I filter the results so that when I pass a month parameter (which I haven't created yet) only the last four months are shown? (I have hard-coded the number parameter for testing.)
If you need to get whole months then you will need to get the first of the month 4 months ago. You can get the first of the current month using: ``` SELECT DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()), '19000101'); ``` Adapting this slightly will give you the first of the month 4 months ago: ``` SELECT DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()) - 4, '19000101'); ``` Then you can just apply this filter to your query: ``` WHERE EndDate >= DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()) - 4, '19000101') ``` Or if you need to pass the number of months a parameter (it should be an INT not a varchar by the way): ``` WHERE EndDate >= DATEADD(MONTH, DATEDIFF(MONTH, '19000101', GETDATE()) - @Number, '19000101') ``` If you pass a date parameter, just replace `GETDATE()` with your parameter name.
The last N months from now meet the condition ``` where EndDate >= dateadd(month, -@DEDUCT_MONTHS, cast(getdate() as DATE)) ``` Removing the `cast` will enforce the current time as a constraint as opposed to midnight N months ago.
How to filter my results so it shows the last four months of data - sql
[ "", "sql", "sql-server", "" ]
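`DATEADD`/`DATEDIFF` are SQL Server functions; the same "first of the month, N months back" cutoff can be sketched on SQLite with its `date()` modifiers. A fixed reference date stands in for `GETDATE()`, and the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testTable (EndDate TEXT, DownloadUnits INTEGER, number TEXT)")
conn.executemany("INSERT INTO testTable VALUES (?, ?, ?)", [
    ('2014-01-20', 5, '12'),   # before the cutoff, should be filtered out
    ('2014-03-10', 7, '12'),
    ('2014-06-01', 9, '12'),
])

ref_date = '2014-06-15'        # stand-in for GETDATE()
months_back = 4

# First day of the month, four months before the reference date.
cutoff = conn.execute(
    "SELECT date(?, 'start of month', ?)",
    (ref_date, f'-{months_back} months'),
).fetchone()[0]
print(cutoff)  # '2014-02-01'

rows = conn.execute(
    "SELECT EndDate, DownloadUnits FROM testTable "
    "WHERE number = ? AND EndDate >= ? ORDER BY EndDate",
    ('12', cutoff),
).fetchall()
print(rows)
```

The `WHERE EndDate >= cutoff` predicate is the same shape as the accepted answer's `EndDate >= DATEADD(...)` filter.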
My question is, for example I have the following columns:

```
Cl1 int - primary key
Cl2 int
Cl3 nvarchar(200)
Cl4 nvarchar(max)
```

Now assume that I am creating a non-clustered index on `Cl2`; however, when I query on `Cl2` I always retrieve `Cl3` and `Cl4` as well. It is logical to include `Cl3` in the index as it will make the retrieval operation faster; however, `Cl4` is `nvarchar(max)` and can be pretty big: this column holds crawled page source.

So my question is: is it logical to include `Cl4` in the non-clustered index or not? Would including it make `Cl4` be stored exactly 2 times on the hard drive?

Thank you very much for answers.

SQL Server 2014
from Books online for SQL Server 2014: ``` Up to 16 columns can be combined into a single composite index key. All the columns in a composite index key must be in the same table or view. The maximum allowable size of the combined index values is 900 bytes. Columns that are of the large object (LOB) data types ntext, text, varchar(max), nvarchar(max), varbinary(max), xml, or image cannot be specified as key columns for an index. ```
I will refer you to [this answer](https://stackoverflow.com/a/818835/1678437) that explains a bit more about included columns in indexes. In short, yes, the space used will be greater if you include the column in your index, but the performance will be better. The choice is yours.
Do SQL Server included columns take double space?
[ "", "sql", "sql-server", "indexing", "sql-server-2014", "non-clustered-index", "" ]
I am trying to create a function in Postgres as follows: ``` Create function Samplefunc() returns resultdata( Tested int, Score int, Growth int) as $BODY$ Select Count(distinct student_id) Tested, Cast(Avg(R.raw_score) as Int) Score, Avg(R.growth) as Growth from results R where R.id=1 and test_id='ME04'; $BODY$ LANGUAGE sql; ``` But I am getting the following error: ``` ERROR: syntax error at or near "int" LINE 2: returns resultdata( NTested int, RawScore int, ^ ********** Error ********** ERROR: syntax error at or near "int" SQL state: 42601 Character: 59 ``` Where am I going wrong?
``` CREATE FUNCTION samplefunc() RETURNS TABLE(ntested int, rawscore int, growth int) AS $func$ SELECT count(DISTINCT r.student_id) -- AS NTested ,avg(r.raw_score)::int -- AS RawScore ,avg(r.growth)::int -- AS Growth FROM reports_results r WHERE r.test_type_id = 1 AND r.test_id = '201403MAME04' $func$ LANGUAGE sql; ``` * The clause to return a table is [**`RETURNS TABLE`**](http://www.postgresql.org/docs/current/interactive/sql-createfunction.html). * Carefully avoid conflicts between `OUT` parameters and column names. (I had such a conflict in my first draft). Table-qualify columns to disambiguate. All field names in `RETURNS TABLE` are effectively `OUT` parameters and visible inside the function (almost) everywhere. Also: * `avg(growth)` would result in a type mismatch with the declared return type `int`. You need to cast that, too. Using the short [Postgres-specific syntax `::type`](http://www.postgresql.org/docs/current/interactive/sql-expressions.html#SQL-SYNTAX-TYPE-CASTS), btw. Better yet: return [`numeric` or a floating point number](http://www.postgresql.org/docs/current/interactive/datatype-numeric.html) to preserve fractional digits in your avg numbers. * column aliases are only visible inside the function. If you are not going to reference them *inside* the function, they are just documentation. * What's with the capitalization? [Unquoted identifiers are cast to lower case in Postgres automatically.](http://www.postgresql.org/docs/current/interactive/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) **If** the query is guaranteed to return a *single* row, you might want to combine `OUT` parameters with `RETURNS record`: ``` CREATE FUNCTION samplefunc(OUT ntested int, OUT rawscore int, OUT growth int) RETURNS record AS ... ``` The subtle difference: This way you get a single row with `NULL` values if nothing is found, where the first form would return nothing / no row. 
### Add IN parameters (request in comment) ``` CREATE FUNCTION samplefunc(_test_type_id int, _test_id text) RETURNS TABLE(ntested int, rawscore int, growth int) AS $func$ SELECT count(DISTINCT r.student_id) ,avg(r.raw_score)::int ,avg(r.growth)::int FROM reports_results r WHERE r.test_type_id = $1 -- or: = _test_type_id in Postgres 9.2+ AND r.test_id = $2 -- or: = _test_id $func$ LANGUAGE sql; ``` Many related answers here on SO with more code examples. Like: * [PostgreSQL return a function with a Custom Data Type](https://stackoverflow.com/questions/18235496/postgresql-return-a-function-with-a-custom-data-type/18235894#18235894) [Try a search.](https://stackoverflow.com/search?q=%5Bpostgres%5D%20%5Bplpgsql%5D%20CREATE%20FUNCTION%20parameter%20%22RETURNS%20TABLE%22)
Try not specifying the details of the composite return type. However, I think the return structure must exist first. ``` Create table resultdata (NTested int, RawScore int, Growth int); Create function Samplefunc() returns resultdata as $BODY$ Select Count(distinct student_id) as NTested, Cast(Avg(R.raw_score) as Int) as RawScore, Avg(R.growth) as Growth from reports_results R where R.test_type_id=1 and test_id='201403MAME04'; $BODY$ LANGUAGE sql; ``` Or try explicitly returning a table: ``` Create function Samplefunc() returns Table (NTested int, RawScore int, Growth int) as $BODY$ Select Count(distinct student_id) as NTested, Cast(Avg(R.raw_score) as Int) as RawScore, Avg(R.growth) as Growth from reports_results R where R.test_type_id=1 and test_id='201403MAME04'; $BODY$ LANGUAGE sql; ``` I think you can also return a set of records using output parameters: ``` Create function Samplefunc(OUT NTested int, OUT RawScore int, OUT Growth int) returns SetOf Record as $BODY$ Select Count(distinct student_id) as NTested, Cast(Avg(R.raw_score) as Int) as RawScore, Avg(R.growth) as Growth from reports_results R where R.test_type_id=1 and test_id='201403MAME04'; $BODY$ LANGUAGE sql; ```
Postgres error in creating a function
[ "", "sql", "postgresql", "user-defined-functions", "" ]
``` WITH L1 AS ( SELECT ) SELECT A FROM L1 UNION ALL SELECT A FROM TABLE UNION ALL WITH L2 AS ( SELECT ) SELECT A FROM L2 UNION ALL WITH L3 AS ( SELECT ) SELECT A FROM L3 ``` I get an error > Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon." Please help
The syntax is ``` With l1 ( a ) as ( Select ... ) , l2 ( a ) as ( ... ) Select ... From ... Union Select ... From ... ```
You cannot use `WITH` in the *middle* of a query expression. `WITH` is used to build up intermediate queries for use by other queries immediately after (meaning it cannot be used by multiple independent queries). So you probably want something like:

```
WITH L1 AS (
    SELECT ...
), L2 AS (
    SELECT ...
), L3 AS (
    SELECT ...
)
-- begin final query
SELECT A FROM L1
UNION ALL
SELECT A FROM TABLE
UNION ALL
SELECT A FROM L2
UNION ALL
SELECT A FROM L3
```
How to use multiple with statements along with UNION ALL in SQL?
[ "", "sql", "sql-server", "union-all", "" ]
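The fix is easy to verify on any engine with CTE support, and SQLite qualifies. The key point: every `WITH` definition is declared once, up front, and the chain of `UNION ALL` branches then reads from them. The one-column CTEs here are placeholders for the real queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
WITH L1 AS (SELECT 1 AS A),
     L2 AS (SELECT 2 AS A),
     L3 AS (SELECT 3 AS A)
SELECT A FROM L1
UNION ALL SELECT A FROM L2
UNION ALL SELECT A FROM L3
""").fetchall()
print(rows)  # [(1,), (2,), (3,)]
```

Putting a second `WITH` after a `UNION ALL`, as in the question, is a syntax error on SQLite too, not just SQL Server.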
I have 2 tables containing bookmarks and likes of images in my Pinterest-style application. I want to create a SQL Query where I want to select the most popular images, based on bookmarks and likes. I want to prioritize bookmarks over likes, but I want an image with 100 likes be ranked higher than an image with 1 bookmark. So for example the result would look like: * Image with 100 likes * Image with 75 bookmarks * Image with 75 likes * Image with 30 likes * Image with 20 bookmarks The tables I got look like the following: images\_bookmarks `id|id_image|id_user|id_board` images\_likes `id|id_image|id_user` So I want to create a JOIN where I sort by the count grouped by id\_image. I'm aware of how to create a join but I'm unsure of how to make the sorting behave as I am describing.
To expand over rurouni88 answer, here is how you could do it to sort the bookmarks first when the number of occurences are the same : ``` SELECT id_image, occurences FROM ( SELECT id_image, COUNT(id) AS occurences, 2 AS priority FROM images_bookmarks GROUP BY id_image UNION SELECT id_image, COUNT(id) AS occurences, 1 AS priority FROM images_likes il GROUP BY id_image ) AS favorites ORDER BY occurences DESC, priority DESC ``` You can see it in action here : <http://sqlfiddle.com/#!2/eb14f/7>
(Left) join your image table with your like/bookmark tables, then group and count likes and bookmarks for each row. Then order the result by the criteria you have given (100 likes is "higher" than 1 bookmark):

```
SELECT *, COUNT(b.id) * 100 + COUNT(l.id) AS rank
FROM images i
LEFT JOIN images_bookmarks b ON i.id = b.image_id
LEFT JOIN images_likes l ON i.id = l.image_id
GROUP BY i.id
ORDER BY COUNT(b.id) * 100 + COUNT(l.id)
```

If you want to sort by the sum of bookmarks and likes, but have images with more bookmarks before images with likes, change the ORDER BY clause to `COUNT(b.id) + COUNT(l.id), COUNT(b.id), COUNT(l.id)`
How to sort by 2 columns when joining 2 tables?
[ "", "mysql", "sql", "" ]
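The UNION-with-priority approach from the accepted answer runs on any engine. A SQLite sketch with invented sample data (columns trimmed to the ones the query needs; `occurences` keeps the original answer's spelling):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE images_bookmarks (id INTEGER PRIMARY KEY, id_image INTEGER);
CREATE TABLE images_likes     (id INTEGER PRIMARY KEY, id_image INTEGER);
INSERT INTO images_bookmarks (id_image) VALUES (10), (10), (20);
INSERT INTO images_likes     (id_image) VALUES (30), (30), (30), (10);
""")

# Bookmarks get priority 2, likes priority 1; priority only breaks ties on count.
# An image can appear once per source; its best-ranked row comes first.
rows = conn.execute("""
SELECT id_image, occurences, priority FROM (
    SELECT id_image, COUNT(*) AS occurences, 2 AS priority
    FROM images_bookmarks GROUP BY id_image
    UNION ALL
    SELECT id_image, COUNT(*) AS occurences, 1 AS priority
    FROM images_likes GROUP BY id_image
)
ORDER BY occurences DESC, priority DESC
""").fetchall()
print(rows)
# image 30: 3 likes; image 10: 2 bookmarks (its single like ranks last);
# image 20's 1 bookmark outranks image 10's 1 like at equal counts
```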
I'm not good at writing queries. Let's say this is my db:

```
artist                    pics
-------------------       ---------------------------------------
| id | name | age |       | id_artist | picUrl | picDescription |
-------------------       ---------------------------------------
```

(`pics.id_artist` references `artist.id`.)

I have to make a select to get a (single) artist with all its pics. The artist is always one and the pics are usually many. Is there a way to get this data with just one query? After the query is made, how can I handle the table fields to retrieve their values? I need to print something like this:

```
michael jackson
-pics 1 blablabla
-pics 2 blablabla
-pics 3 blablabla
....
```

Thank you
Try this:- ``` SELECT A.id, A.name, B.picUrl, B.picDescription FROM artist A, pics B WHERE A.id = B.id_artist; ```
You could find plenty of help by Googling around. Anyway, you would want to use the following query:

```
SELECT * from artist ar
INNER JOIN pics p ON ar.id = p.id_artist
```
How to select a row from a table and many rows from another (related) table
[ "", "mysql", "sql", "" ]
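A runnable sketch of the join on SQLite, using the question's schema with invented sample rows. Iterating the result shows how the artist's columns repeat once per picture, which is how you would build the "name then pics" printout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
CREATE TABLE pics   (id_artist INTEGER, picUrl TEXT, picDescription TEXT);
INSERT INTO artist VALUES (1, 'michael jackson', 50);
INSERT INTO pics VALUES (1, 'mj1.jpg', 'blablabla'), (1, 'mj2.jpg', 'blablabla');
""")

rows = conn.execute("""
SELECT a.name, p.picUrl, p.picDescription
FROM artist a
INNER JOIN pics p ON p.id_artist = a.id
WHERE a.id = ?
ORDER BY p.picUrl
""", (1,)).fetchall()

# One result row per picture; the artist's columns repeat on each row.
for name, url, desc in rows:
    print(name, '-', url, desc)
```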
I have these models: ``` class Foo(models.Model): some_field = models.CharField() class Meta: pass class Bar(Foo): some_other_field = models.CharField() class Meta: pass ``` The example is simplified, in reality both models have a lot of fields. When I query Bar, the Django ORM creates a query containing an inner join with Foo. I don't need the information in Foo. **Question**: Is there a way to query Bar *without* an inner join with Foo? I realize that removing Bar extending Foo and making it a foreign key would be a better way to solve this problem. However, there's a lot of legacy code relying on this so I'd prefer a quick solution until I have the time and guts to refactor legacy parts of the app. I also realize I can write an SQL query myself, but I'd prefer a solution that uses the ORM.
The way I've done it is to use a new unmanaged model for this instance:

```
class SkinnyBar(models.Model):
    some_other_field = models.CharField()

    class Meta:
        managed = False
        db_table = "app_bar"
```

This will allow you to use the ORM. If you want to avoid the duplication, you could try moving most of your properties and methods to an abstract base class:

```
class BaseBar(models.Model):
    some_other_field = models.CharField()

    def some_common_method(self):
        return True

    class Meta:
        abstract = True


class Bar(BaseBar, Foo):
    def some_method_that_requires_foo(self):
        return self.some_field == 1


class SkinnyBar(BaseBar):
    class Meta:
        managed = False
        db_table = "app_bar"
```
Interesting issue you have. The inner join on its own shouldn't be that big of a deal so long as your queries are set up appropriately. What is the main reason for trying to limit it, performance I assume? Can you post some code showing how you're trying to work with these models? I'm not positive this would work, but you could always look into using the '[defer](https://docs.djangoproject.com/en/dev/ref/models/querysets/#defer)' function on your queryset. This function is really only for advanced use-cases, and may not apply here. In essence, defer doesn't try to query the fields you specify. If you defer 'some_field', perhaps it won't do the join. The downside is that as soon as you try to access 'some_field' from the object, it will perform a query (iteration will cause n extra queries).
Ignore a base model in a Django ORM query
[ "", "sql", "django", "orm", "" ]
This problem is difficult to describe, especially in the title. Perhaps it would be easier to show what I am trying to accomplish. This is an Oracle database. Given the following DDL (<http://sqlfiddle.com/#!4/e0730>): ``` CREATE TABLE TEST_LIST(CONTRACT VARCHAR2(10), RECEIPT_CITY VARCHAR2(15), DELIVERY_CITY VARCHAR2(15)); INSERT INTO TEST_LIST VALUES ('CTR_01', 'DETROIT', ''); INSERT INTO TEST_LIST VALUES ('CTR_01', 'KALAMAZOO', ''); INSERT INTO TEST_LIST VALUES ('CTR_01', '', 'KALAMAZOO'); INSERT INTO TEST_LIST VALUES ('CTR_01', '', 'MUSKEGON'); INSERT INTO TEST_LIST VALUES ('CTR_01', 'SOUTH HAVEN', ''); INSERT INTO TEST_LIST VALUES ('CTR_02', 'BATTLE CREEK', ''); INSERT INTO TEST_LIST VALUES ('CTR_02', '', 'KALAMAZOO'); ``` I am trying to get the following result: ``` CONTRACT CITY DIRECTION -------- ------------ ----------- CTR_01 DETROIT RECEIPT CTR_01 KALAMAZOO RECDEL CTR_01 MUSKEGON DELIVERY CTR_01 SOUTH HAVEN DELIVERY CTR_02 BATTLE CREEK RECEIPT CTR_02 KALAMAZOO DELIVERY ``` The part I'm struggling with is when the city is both a receipt and a delivery and getting it to show on one row as RECDEL. I've tried various combinations of UNIONs, INTERSECTs and FULL OUTER JOINs to no avail. Could LISTAGG be used? Not sure how that work work with the multiple columns. Any help would be appreciated.
Here's a solution with `LISTAGG`. It produces `Delivery Receipt` instead of `RECDEL`. I hope it's enough for you to get the idea. ``` SELECT contract, COALESCE(receipt_city, delivery_city, '') "City", LISTAGG(CASE WHEN receipt_city IS NOT NULL THEN 'Receipt' END || CASE WHEN delivery_city IS NOT NULL THEN 'Delivery' END, ' ') WITHIN GROUP (ORDER BY contract) "Direction" FROM test_list GROUP BY contract, COALESCE(receipt_city, delivery_city, ''); ``` [**SQL Fiddle**](http://sqlfiddle.com/#!4/e0730/39)
``` | CONTRACT | CITY | DIRECTION | |----------|--------------|-----------| | CTR_01 | DETROIT | RECEIPT | | CTR_01 | KALAMAZOO | RECDEL | | CTR_01 | MUSKEGON | DELIVERY | | CTR_01 | SOUTH HAVEN | RECEIPT | | CTR_02 | BATTLE CREEK | RECEIPT | | CTR_02 | KALAMAZOO | DELIVERY | ``` by: ``` select distinct t1.contract , case when t1.delivery_city = t2.receipt_city then t1.delivery_city when t1.delivery_city is not null then t1.delivery_city when t1.receipt_city is not null then t1.receipt_city else 'unexpected' end as CITY , case when t1.delivery_city = t2.receipt_city or t1.receipt_city = t3.delivery_city then 'RECDEL' when t1.delivery_city is not null then 'DELIVERY' when t1.receipt_city is not null then 'RECEIPT' else 'unexpected' end as DIRECTION from TEST_LIST t1 left join test_list t2 on t1.CONTRACT = t2.CONTRACT and t1.delivery_city = t2.receipt_city left join test_list t3 on t1.CONTRACT = t3.CONTRACT and t1.receipt_city = t3.delivery_city order by t1.contract , CITY , DIRECTION ``` see: <http://sqlfiddle.com/#!4/57f0b/1>
Combining data from one table in Oracle
[ "", "sql", "oracle", "" ]
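`LISTAGG` is Oracle-specific, but the same grouping logic can be expressed portably with conditional aggregation. A SQLite sketch using the question's DDL; one assumption to flag: Oracle treats `''` as `NULL`, SQLite does not, so the question's empty strings become explicit `NULL`s here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TEST_LIST (CONTRACT TEXT, RECEIPT_CITY TEXT, DELIVERY_CITY TEXT);
INSERT INTO TEST_LIST VALUES
    ('CTR_01', 'DETROIT',      NULL),
    ('CTR_01', 'KALAMAZOO',    NULL),
    ('CTR_01', NULL,           'KALAMAZOO'),
    ('CTR_01', NULL,           'MUSKEGON'),
    ('CTR_01', 'SOUTH HAVEN',  NULL),
    ('CTR_02', 'BATTLE CREEK', NULL),
    ('CTR_02', NULL,           'KALAMAZOO');
""")

# Group per (contract, city); the MAX(...) flags record whether the city ever
# appears on the receipt side and/or the delivery side of that contract.
rows = conn.execute("""
SELECT CONTRACT,
       COALESCE(RECEIPT_CITY, DELIVERY_CITY) AS CITY,
       CASE WHEN MAX(RECEIPT_CITY IS NOT NULL) = 1
             AND MAX(DELIVERY_CITY IS NOT NULL) = 1 THEN 'RECDEL'
            WHEN MAX(RECEIPT_CITY IS NOT NULL) = 1 THEN 'RECEIPT'
            ELSE 'DELIVERY'
       END AS DIRECTION
FROM TEST_LIST
GROUP BY CONTRACT, COALESCE(RECEIPT_CITY, DELIVERY_CITY)
ORDER BY CONTRACT, CITY
""").fetchall()
for r in rows:
    print(r)
```

Going by the DDL, SOUTH HAVEN comes out as RECEIPT; the question's expected output lists it as DELIVERY, which contradicts its own insert statements.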
My Access database (2007) has Four tables; Customer, Supplier, Account, and AccountAgeing AccountAgeing has a composite key made up of the foreign keys of two of the other tables, plus a date. i.e.; AsAtDate, SupplierID, AccountNumber I am importing data from excel via a temporary table, and my parent tables (Customers, Suppliers, Accounts) are importing well. However importing AccountAgeing from my tempTable continually has a key violation. Of 749 possible imports, 746 violate the key. A query to test was: ``` SELECT DISTINCT tempTable.[SupplierID], #31/7/14#, tempTable.[AccountNumber] FROM tempTable; ``` This returned 749 records (all of them). If this is the case, how do I have a key violation?? The composite key fields are all indexed, with duplicates OK. There is no data in the destination table ![These are the indices on my table (obviously I have simplified the names in my question](https://i.stack.imgur.com/80agn.png) I have date and [Account Number] indexed as these are the fields searches will be on.
Here is a sequence of some troubleshooting steps you can try. 1. Remove the primary key from your target table and populate it. If you can't populate the target table, your problem may not be the key itself, and may become apparent based on error messages you receive. 2. If the target table does populate, try adding your desired composite key to the already populated target table. 3. If you are unable to add the key, re-run your "select distinct" query on the populated target table. 4. If you don't select 749 distinct rows, visually inspect the table contents to see what's going on. These steps should lead you to some insight. Just a guess - but it sounds possible that you may have a data type mismatch somewhere. In cases like this, Access will sometimes convert data on the fly and insert it without giving an error. But in the process the nature of the data are changed, resulting in a key violation in the target table. I'm curious to hear what you find. Please post a comment when you figure out what the problem is. Hope it helps. Good luck with the troubleshooting.
Thank you Marty!! I attempted to populate manually, which errored because there was no matching record in the Customers table. I discovered that I had incorrectly assigned AccountAgeing to be the parent of Customers, rather than of Accounts. The business logic is that an AccountAgeing record will always have an Account, but an AccountAgeing record does not always mention Company Number (the primary key of the Customer table). The fix was binding part of the Account Ageing composite key to the Accounts composite key. ![enter image description here](https://i.stack.imgur.com/1H0ZM.png) I am unsure what will happen when I add an ATBRecord which has an account number but no Company number, but that is another question
How to Troubleshoot composite key violation in MS Access
[ "", "sql", "ms-access", "foreign-keys", "composite-key", "" ]
I want to set a default value constraint. I have written the following SQL:

```
ALTER TABLE vote
ADD CONSTRAINT deVote
DEFAULT false FOR status
```

But it's giving the error: Column names are not permitted. What am I doing wrong? Thanks
*If column is a VARCHAR data type* ``` ALTER TABLE vote ADD CONSTRAINT df_devote_default DEFAULT 'false' FOR [status]; ``` *If column is a BIT data type* ``` ALTER TABLE vote ADD CONSTRAINT df_devote_default DEFAULT 0 FOR [status]; ```
Try this: ``` ALTER TABLE vote ADD CONSTRAINT deVote DEFAULT N'false' FOR status; ```
How to set default value using alter table statement in sql server 2008
[ "", "sql", ".net", "sql-server-2008", "alter-table", "" ]
I want to perform something like the following (which doesn't work): ``` SELECT COUNT(*) FROM `contestant_flags` INNER JOIN `contestants` ON `contestants`.`id` = `contestant_flags`.`contestant_id` INNER JOIN `teams` ON `teams`.`id` = `contestant_flags`.`team_id` WHERE `contestant_flags`.`flag_id` = 1 AND `contestant_flags`.`call_id` IS NULL AND (`contestants`.`instance_id` = 13 OR `teams`.`instance_id` = 13) ``` I have a table, `contestant_flags`, where it can belong to either a single contestant or a team (`contestant_id` and `team_id`, respectively), but not both. The contestant and team each have a column, `instance_id`, which is what I'm attempting to filter by. So, for example, I may have some contestant\_flags that belong to a contestant with the instance\_id 13, and some that belong to a team with the instance\_id 13, and I want to get the contestant\_flags that meet either of those conditions. The above query doesn't work, but given the explanation I'm after, can anyone help me out with a query that does? Thanks in advance.
Just make them left joins. And a coalesce looks a bit nicer if you know one of them will always be null.

```
SELECT COUNT(*)
FROM contestant_flags
LEFT JOIN contestants ON contestants.id = contestant_flags.contestant_id
LEFT JOIN teams ON teams.id = contestant_flags.team_id
WHERE contestant_flags.flag_id = 1
AND contestant_flags.call_id IS NULL
AND COALESCE(contestants.instance_id, teams.instance_id) = 13;
```
``` SELECT COUNT(*) FROM contestant_flags cf LEFT JOIN contestants c ON c.id = cf.contestant_id LEFT JOIN teams t ON t.id = cf.team_id WHERE (cf.flag_id = 1) AND (cf.call_id IS NULL) AND (c.instance_id = 13 OR t.instance_id = 13) ```
Inner join two tables with OR conditions
[ "", "mysql", "sql", "" ]
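A runnable sketch of the accepted `LEFT JOIN` + `COALESCE` rewrite on SQLite, with invented sample data covering all three cases: a flag on a contestant in instance 13, on a team in instance 13, and on a team in another instance:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contestants (id INTEGER PRIMARY KEY, instance_id INTEGER);
CREATE TABLE teams       (id INTEGER PRIMARY KEY, instance_id INTEGER);
CREATE TABLE contestant_flags (flag_id INTEGER, call_id INTEGER,
                               contestant_id INTEGER, team_id INTEGER);
INSERT INTO contestants VALUES (1, 13);
INSERT INTO teams VALUES (5, 13), (6, 99);
INSERT INTO contestant_flags VALUES
    (1, NULL, 1,    NULL),   -- flag on a contestant in instance 13
    (1, NULL, NULL, 5),      -- flag on a team in instance 13
    (1, NULL, NULL, 6);      -- flag on a team in another instance
""")

# Left joins keep every flag; COALESCE picks whichever side matched.
(count,) = conn.execute("""
SELECT COUNT(*)
FROM contestant_flags cf
LEFT JOIN contestants c ON c.id = cf.contestant_id
LEFT JOIN teams t       ON t.id = cf.team_id
WHERE cf.flag_id = 1
  AND cf.call_id IS NULL
  AND COALESCE(c.instance_id, t.instance_id) = 13
""").fetchone()
print(count)  # 2
```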
Pretty simple, what's the best way to fix the `NULL` filtering below, since the `=` operand doesn't work with `NULL`? The `Key2`, when `Data1=-1`, is trying to do a `Key2=NULL` which is not selecting `NULL` values. ``` LEFT JOIN myReferenceTable on myReferenceTable.Key1 = myDataTable.FKey1 and myReferenceTable.Key2 = CASE WHEN myDataTable.Data1 = -1 THEN NULL ELSE myDataTable.Data3 - myDataTable.Data4 END ``` Thanks!
Swap out your case statement using ANDs / ORs ``` LEFT JOIN myReferenceTable on myReferenceTable.Key1 = myDataTable.FKey1 and ((myDataTable.Data1 = -1 AND myReferenceTable.Key2 IS NULL) OR (myDataTable.Data1 != -1 AND myReferenceTable.Key2 = myDataTable.Data3 - myDataTable.Data4)) ```
If you're using MySQL, you need to use the "null-safe equal" operator. From the [manual](http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#operator_equal-to): > <=> > > NULL-safe equal. This operator performs an equality comparison like > the = operator, but returns 1 rather than NULL if both operands are > NULL, and 0 rather than NULL if one operand is NULL. ``` mysql> SELECT 1 <=> 1, NULL <=> NULL, 1 <=> NULL; -> 1, 1, 0 mysql> SELECT 1 = 1, NULL = NULL, 1 = NULL; -> 1, NULL, NULL ```
SQL: mix join with case and NULL
[ "", "sql", "join", "null", "case", "" ]
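MySQL's `<=>` aside, SQLite has its own null-safe comparison, the `IS` operator, while SQL Server needs the AND/OR rewrite from the accepted answer. A one-liner showing why plain `=` silently drops NULL matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute("""
SELECT 1 = NULL,       -- plain equality against NULL yields NULL, never true
       NULL IS NULL,   -- null-safe comparison: true
       1 IS NULL       -- null-safe comparison: false
""").fetchone()
print(row)  # (None, 1, 0)
```

The first column coming back as `None` is exactly the behavior that makes `Key2 = NULL` select nothing in the question's join.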
This is going to seem like a stupid question but I am just getting started with biztalk server 2013 R2 and need some help. I have a fairly basic understanding of how the system works and was able to follow the msdn tutorial to create a biztalk app that moves a file from one folder, reads it and saves to another folder base on values in the file. What I am really trying to do now is the following. When I recieve a value, I use that value as a key to read data from a sql server table and transform that data into another format for saving/transporting elsewhere. For the life of me I cannot find how to do the sql server portion of it. I know you are supposed to install the adapter pack (done) and add a Consume Adapter Service and I have done that...but I don't figure out how to connect the orchestration to read the data from SQL. If you someone could point me to a comprehensive tutorial or maybe even recommend a book that I could buy that would be fantastic. Thanks
Use the WCF-SQL adapter in BizTalk for SQL operations. If you do not see this adapter, install it from the BizTalk Server installation ISO using the *Install Microsoft BizTalk Adapters* option, and then install the first three components, in this order:

1. Install Microsoft WCF LOB Adapter SDK
2. Install Microsoft BizTalk Adapter Pack
3. Install Microsoft BizTalk Adapter Pack (64 bit)

Once that is completed, use the Consume Adapter Service option to add the necessary schemas from SQL. The Consume Adapter Service is available when you right-click on the project => Add Generated Items.

<http://msdn.microsoft.com/en-us/library/cc150632.aspx>

Choose the options as shown in the attached image; you can change your database name and tables as needed. ![enter image description here](https://i.stack.imgur.com/vY7Ih.png)
You can do the SQL Adapter Tutorials which cover the topics you're asking about. Details: <http://msdn.microsoft.com/en-us/library/dd788523.aspx>
How do you read a row of sql server db data from a biztalk orchestration?
[ "", "sql", "sql-server", "biztalk", "biztalk-2013", "" ]
I have a transactional database. one of the tables is almost empty (A). it has a unique indexed column (x), and no clustered index. 2 concurrent transactions: ``` begin trans insert into A (x,y,z) (1,2,3) WAITFOR DELAY '00:00:02'; -- or manually run the first 2 lines only select * from A where x=1; -- small tables produce a query plan of table scan here, and block against the transaction below. commit begin trans insert into A (x,y,z) (2,3,4) WAITFOR DELAY '00:00:02'; -- on a table with 3 or less pages this hint is needed to not block against the above transaction select * from A with(forceseek) -- force query plan of index seek + rid lookup where x=2; commit ``` My problem is that when the table has very few rows the 2 transactions can deadlock, because SQL Server generates a table scan for the select, even though there is an index, and both wait on the lock held by the newly inserted row of the other transaction. When there are lots of rows in this table, the query plan changes to an index seek, and both happily complete. When the table is small, the `WITH(FORCESEEK)` hint forces the correct query plan (5% more expensive for tiny tables). 1. is it possible to provide a default hint for all queries on a table to pretend to have the 'forceseek' hint? 2. the deadlocking code above was generated by Hibernate, is it possible to have hibernate emit the needed query hints? 3. we can make the tables pretend to be large enough that the query optimizer selects the index seek with the undocumented features in `UPDATE STATISTICS` <http://msdn.microsoft.com/en-AU/library/ms187348(v=sql.110).aspx> . Can anyone see any downsides to making all tables with less than 1000 rows pretend they have 1000 rows over 10 pages?
You can create a [Plan Guide](http://msdn.microsoft.com/en-us/library/ms190417.aspx). Or you can enable [Read Committed Snapshot](http://technet.microsoft.com/en-us/library/ms188277(v=sql.105).aspx) isolation level in the database. Better still: make the index clustered. For small tables that experience high update ratio, perhaps you can apply the advice from [Using tables as Queues](http://rusanu.com/2010/03/26/using-tables-as-queues/). > Can anyone see any downsides to making all tables with less than 1000 rows pretend they have 1000 rows over 10 pages? If the table appears in a another, more complex, query (think joins) then the cardinality estimates may cascade wildly off and produce bad plans.
You could create a view that is a copy of the table but with the hint and have queries use the view instead: ``` create view A2 as select * from A with(forceseek) ``` If you want to preserve the table name used by queries, rename the table to something else then name the view "A": ``` sp_rename 'A', 'A2'; create view A as select * from A2 with(forceseek) ```
SQL Server : how do I add a hint to every query against a table?
[ "", "sql", "sql-server", "" ]
I have a question about how big companies manage their database structure. Let's say I have an e-commerce app, something like Shopify. Which is the better approach to take: a) When a new user creates an account, a new database with his own tables is created specifically for that user (I think that if for some reason one database fails, the other users' databases keep working)? b) When a new user creates an account, instead of creating a new database, the app just creates a new row in a table for his login, and then the products of all users live in the same table, with only the `user_id` distinguishing one store's products from another's. Thanks in advance.
Yeah, you should go for the second option (b), for a lot of reasons: 1. Most of the data in your database will be accessible to all users, so you don't want to have the same data stored in multiple places. 2. Sooner or later, you will come up with features where you want to share or link data between different users, etc. 3. Your first choice (a) would be extremely resource consuming compared to the second choice (b). 4. Well, the list can go on and on... :)
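To make option (b) concrete, here is a rough sketch of the shared-table schema; the table and column names are purely illustrative:

```sql
-- One row per account; every user shares the same tables
CREATE TABLE users (
    id INT PRIMARY KEY AUTO_INCREMENT,
    email VARCHAR(255) NOT NULL UNIQUE
);

-- All products of all stores live here, keyed back to their owner
CREATE TABLE products (
    id INT PRIMARY KEY AUTO_INCREMENT,
    user_id INT NOT NULL,
    name VARCHAR(255) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users (id)
);
```

One store's catalog is then just a filtered query, e.g. `SELECT * FROM products WHERE user_id = 42;`, and an index on `user_id` keeps that cheap.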
Obviously, the second choice - much more efficient and smarter. Think about the number of users, let's say 10k for example. You really don't want to handle that mass of tables!
Multiple databases or multiple row tables
[ "", "mysql", "sql", "database", "" ]
No idea how this happened, but we have multiple copies of the same FK constraint on the same column. I think they were named by some management-studio-type tool, meaning they aren't default constraints and they weren't named by a human. I have this query to find the constraint names: ``` SELECT CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS where TABLE_NAME = 'Device' AND CONSTRAINT_NAME like '%Address%'; ``` How can I drop each constraint in the result set? I guess I'd have to iterate through and do a drop constraint for each one?
Just try this: ``` Declare @Query Nvarchar(Max) Declare @name Nvarchar(150) Declare Cur Cursor For SELECT CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS where TABLE_NAME = 'Device' AND CONSTRAINT_NAME like '%Address%'; Open Cur Fetch Next From Cur Into @name While @@fetch_status=0 Begin Set @Query='Alter table Device DROP CONSTRAINT '+Ltrim(Rtrim(@name)) Execute sp_executesql @Query Fetch Next From Cur Into @name End Close Cur Deallocate Cur Go ```
The only way I can imagine doing it is like you said, iterate through each time and drop it using a cursor (<http://msdn.microsoft.com/en-us/library/ms180169.aspx>) or writing dynamic SQL.
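For completeness, the dynamic SQL route can also be done without a cursor, by concatenating all the DROP statements into one batch. This sketch assumes the same table and name filter as the question:

```sql
DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one ALTER TABLE ... DROP CONSTRAINT statement per matching constraint
SELECT @sql += N'ALTER TABLE Device DROP CONSTRAINT ' + QUOTENAME(CONSTRAINT_NAME) + N';'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE TABLE_NAME = 'Device'
  AND CONSTRAINT_NAME LIKE '%Address%';

EXEC sp_executesql @sql;
```

`QUOTENAME` guards against odd characters in the generated constraint names; the drop order doesn't matter here since all the constraints hang off the same column.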
Dynamically delete multiple foreign key constraints
[ "", "sql", "sql-server", "constraints", "" ]