Dataset columns: Prompt (string, length 10–31k) · Chosen (string, 3–29.4k) · Rejected (string, 3–51.1k) · Title (string, 9–150) · Tags (list, 3–7 items)
I have a user defined function and I want to do a select and choose a column based on whether or not it is bigger or smaller than another. For example: ``` Select t.A, t.B From Table t Where if(t.A <= t.B) then use t.A else use t.B ``` Is this possible? Thanks in advance!
It seems like you want ``` SELECT CASE WHEN t.A <= t.B THEN t.A ELSE t.B END AS Smaller FROM Table t ```
No! But you can use a `case` statement ``` Select t.A, t.B From Table t Where (case when t.A <= t.B then t.A ELSE t.B END) = SomeValue ``` or, if the smaller column is required in the select statement only, then ``` Select (case when t.A <= t.B then t.A ELSE t.B END) as column From Table t ``` **NOTE** I haven't used Pervasive SQL before, but a quick Google search suggests that the case statement was added in version 9.x.
Using an IF statement in a WHERE clause
[ "", "sql", "pervasive-sql", "" ]
I need to create a column in a SQL Server database. Entries for that column will contain messages from chat. Previously such messages have been stored as comments. My main question is: What is a typical text length for a **chat message** and a **comment**? By the way: What would happen if I used *varchar(max)*? How would it impact database size and performance? Is it better to use powers of 2 or powers of 10 (e.g. 128 instead of 100) when considering text lengths?
Using VARCHAR(MAX) has a disadvantage: you cannot define an index over this column. Generally, your application should impose a maximum length for a chat message. How big that limit is depends very much on what the application is used for. But anything more than 1000 bytes is probably less a legitimate message than an attempt to disrupt your service. Whether your maximum value is a power of 2, a power of ten, or any other value has no influence on performance as long as the row fits in one (8KB) page.
Short answer - it doesn't matter. From [MSDN](http://technet.microsoft.com/en-us/library/ms176089.aspx): > The storage size is the actual length of the data entered + 2 bytes. So VARCHAR(10) and VARCHAR(10000) will consume the same amount of storage if the values don't exceed 10 characters.
What are typical lengths of chat message and comment in database?
[ "", "sql", "sql-server", "database", "" ]
I can't figure out what's wrong with my query (Firebird SQL): ``` select max(column1) from (select first 100 column1 from table1 where column1 > 0) p ``` Error message: ``` SQL Message : -104 Invalid token Message: isc_dsql_prepare failed SQL Message : -104 Invalid token Engine Code : 335544569 Engine Message : Dynamic SQL Error SQL error code = -104 Token unknown ``` Any help is appreciated!
SQL Server does not have a `FIRST` keyword; it uses the `TOP` keyword. Full explanation here <http://technet.microsoft.com/en-us/library/ms189463.aspx> Firebird: "In MS SQL 7 and above, the SELECT clause can take a TOP specifier to limit the number of rows returned. This feature is currently under development for the Firebird engine." <http://www.firebirdsql.org/manual/migration-mssql-syntax.html#migration-mssql-sql-cursors>
Nested SELECT queries ("SELECT from SELECT") use derived tables, which are only supported in [Firebird 2.0 and above](http://www.firebirdsql.org/refdocs/langrefupd20-select.html#langrefupd20-derived-tables). However, since Firebird 1.5 allows SELECT statements to appear in the WHERE clause, your query can be rewritten as: ``` select max(column1) from table1 where column1 in ( select first 100 column1 from table1 where column1 > 0 ) ```
Firebird SQL Nested Select
[ "", "sql", "select", "nested", "firebird", "" ]
I have a table that records shifts worked by employees. It's pretty simple, consisting of (reduced for simplicity): an auto-increment `primary key`, `employee_id`, `job_id`, `date` A row is created for each shift worked, so if a particular employee doesn't work on a particular day, there will be no row for that employee for that date. For a given day, I need to be able to return the number of consecutive days worked by each employee leading up to that day. I have no idea where to start and would appreciate any help. Thanks
If you had a sequence of numbers, then the difference between those numbers and a sequence of dates would be constant. We can use this fact to group the working days into groups with a minimum and maximum date: ``` select employee_id, min(date) as mindate, max(date) as maxdate from (select s.*, date_sub(date, interval seqnum day) as grp from (select s.*, (@rn := @rn + 1) as seqnum from shifts s cross join (select @rn := 0) const ) s ) s group by employee_id, grp; ``` Once we have this, getting the length of time up to a particular date for a particular employee is pretty easy: ``` select employee_id, coalesce(datediff(XXX, 1+min(case when XXX between mindate and maxdate then mindate end)), 0) as SequentialWorkingDays from (select employee_id, min(date) as mindate, max(date) as maxdate from (select s.*, date_sub(date, interval seqnum day) as grp from (select s.*, (@rn := @rn + 1) as seqnum from shifts s cross join (select @rn := 0) const ) s ) s group by employee_id, grp ) s group by employee_id; ```
Suppose your table name is `shift`, then the following query would provide you with the number of rows between the provided date and current date. ``` SELECT COUNT(`date`) AS working_days FROM `shift` WHERE `date` BETWEEN '2014-01-02' AND CURDATE() ```
MySQL: Count rows with consecutive dates leading up to given date
[ "", "mysql", "sql", "" ]
I'm trying to create a table with columns that reference other tables. How do I make the foreign keys? Scheme: ![enter image description here](https://i.stack.imgur.com/b0ZuR.jpg) Query (not working): ``` CREATE TABLE gebruikers_trainingen ( gebruiker_id INT UNSIGNED NOT NULL, training_id INT UNSIGNED NOT NULL, gebruiker_naam VARCHAR(255) NOT NULL, training_naam VARCHAR(255), CONSTRAINT fk_idGebruiker FOREIGN KEY (gebruiker_id) REFERENCES gebruikers(id), CONSTRAINT fk_idTraining FOREIGN KEY (training_id) REFERENCES trainingen(id), CONSTRAINT fk_naamGebruiker FOREIGN KEY (gebruiker_naam) REFERENCES gebruikers(voornaam), CONSTRAINT fk_naamTraining FOREIGN KEY (training_naam) REFERENCES trainingen(naam) ) ENGINE = INNODB; ``` Getting: > Error Code: 1005 Can't create table 'konecranes.gebruikers\_trainingen' > (errno: 150) EDIT: The other tables' queries: ``` CREATE TABLE gebruikers ( id int unsigned NOT NULL, voornaam varchar(255) NOT NULL, achternaam varchar(255) NOT NULL, account_level int unsigned NOT NULL, PRIMARY KEY (id, voornaam) ) ENGINE = InnoDB; CREATE TABLE trainingen ( id int unsigned NOT NULL, naam varchar(255) NOT NULL, PRIMARY KEY (id, naam) ) ENGINE = InnoDB; ```
You should add indexes on your foreign keys: ``` CREATE TABLE gebruikers_trainingen ( gebruiker_id INT UNSIGNED NOT NULL, training_id INT UNSIGNED NOT NULL, gebruiker_naam VARCHAR(255) NOT NULL, training_naam VARCHAR(255) NOT NULL, INDEX (gebruiker_id, gebruiker_naam), INDEX (training_id, training_naam), CONSTRAINT fk_idGebruiker FOREIGN KEY (gebruiker_id, gebruiker_naam) REFERENCES gebruikers(id, voornaam), CONSTRAINT fk_idTraining FOREIGN KEY (training_id, training_naam) REFERENCES trainingen(id, naam) ) ENGINE = INNODB; ```
Has this table existed before in a different guise? [Mysql 1005 error when creating table using InnoDB engine](https://stackoverflow.com/questions/4213013/mysql-1005-error-when-creating-table-using-innodb-engine) Hth Oli
Cannot create interconnecting table
[ "", "mysql", "sql", "foreign-keys", "" ]
I am trying to execute a large SQL script which contains about 1000000 simple `UPDATE` queries. The total size of this script file is about 100 MB. When I run this script I'm getting an Out Of Memory exception. When I split the file into chunks of 10 MB I can run each of them. However, for convenience I would like to have only one script I can run at once. Is there any statement I can introduce so SQL Server releases allocated memory after running each query, so I can execute this large script at once?
If you have not done so already, insert a `GO` every thousand statements or so. Otherwise the whole file will be one large batch. SQL Server calculates a single execution plan for a batch which may be pushing you to resource limits. You might run into another type of resource problem if you run the whole file in the same transaction. The larger the transaction, the more diskspace your TX log file will need to complete the processing of your file.
**UPDATED:** Open the Windows command prompt (Run → cmd) and run the following: ``` sqlcmd -S YOURSQLSERVER\INSTANCENAME -i c:\your_script_file.sql -o c:\your_log_file.log ```
Out of memory exception in SQL Server 2012
[ "", "sql", "sql-server", "sql-update", "bulk", "" ]
How do I get today's date in YYYYMMDD format in Firebird? I had a look at the [following](http://www.firebirdsql.org/en/firebird-date-literals/) but could not figure out how to write this.
I think you can do: ``` select replace(cast(cast('Now' as date) as varchar(10)), '-', '') from rdb$database ```
**IN FIREBIRD v2.5.5:** ``` SELECT LPAD( EXTRACT( YEAR FROM CURRENT_TIMESTAMP ), 4, '0' ) || LPAD( EXTRACT( MONTH FROM CURRENT_TIMESTAMP ), 2, '0' ) || LPAD( EXTRACT( DAY FROM CURRENT_TIMESTAMP ), 2, '0' ) || ' ' || LPAD( EXTRACT( HOUR FROM CURRENT_TIMESTAMP ), 2, '0' ) || LPAD( EXTRACT( MINUTE FROM CURRENT_TIMESTAMP ), 2, '0' ) || LPAD( TRUNC( EXTRACT( SECOND FROM CURRENT_TIMESTAMP ) ), 2, '0' ) FROM rdb$database ``` OUTPUT IS: **YYYYMMDD HHMMSS**
how to get today date in YYYYMMDD in firebird
[ "", "sql", "firebird", "firebird2.5", "" ]
I am using SQL Server Management Studio. I am trying to update something in my database; the return type of that column is bit, so it's either 0 or 1. This is my SQL statement: ``` public void Update(int ID) { SqlConnection myConnection = new sqlConnection etc....; string query = "UPDATE Table SET ColumnName = ColumnName ^ 1 WHERE ID=@ID"; SqlCommand myComm = new SqlCommand(query, myConnection); myComm.CommandType = CommandType.StoredProcedure; myComm.Parameters.AddWithValue("@ID", ID); myConnection.Open(); myComm.ExecuteNonQuery(); myConnection.Close(); } ``` I am getting an error saying "Could not find stored procedure"; can someone help me understand what this means? I have looked around the web but I just don't understand. Based on an ID I should be able to change the bit from 0 to 1 or 1 to 0. Also, is there any problem with my SQL statement? Because I am also getting an error saying Incorrect syntax near '='.
Change the line ``` myComm.CommandType = CommandType.StoredProcedure; ``` to ``` myComm.CommandType = CommandType.Text; ``` (or just remove it since `CommandType.Text` is the default) You are not executing a stored procedure - you are executing direct SQL.
I think you should remove this line: `myComm.CommandType = CommandType.StoredProcedure;` You are not calling a procedure, hence you don't need it.
sql Could not find stored procedure ''. with bit as a return type
[ "", "sql", "sql-server", "stored-procedures", "" ]
I want to left join in this query but it does not seem to work ``` select *,(SELECT datediff(t1.expirydate,CURDATE())as daysleft from tbl1 t1, left join tbl2 t2 on (t1.mid=t2.mid and t1.pid=t2.pid and t1.uid=3) ``` You have an error in your SQL syntax; ``` right syntax to use near 'left join tbl2 t2 on (t1.mid=t2.mid and t1.pid=t2.p' at line 3 ``` Please let me know where I am wrong. Here I am joining 3 tables but it doesn't work: ``` SELECT t1.*, t2.*,t3.* datediff(t1.expirydate, CURDATE()) AS daysleft FROM tbl1 t1 left join tbl2 t2 on t1.mid = t2.mid and t1.pid = t2.pid left join tbl3 t3 on t3.pid = t2.pid where t1.uid=3 ``` For now, with 3 tables, I still get a syntax error.
You don't put a comma between joins. ``` SELECT t1.*, t2.*, datediff(fsp.expirydate, CURDATE()) AS daysleft FROM tbl1 t1 left join tbl2 t2 on t1.mid = t2.mid and t1.pid = t2.pid cross join fsp where t1.uid = 3 ``` You also shouldn't put the `t1.uid = 3` condition in the `ON` clause. `ON` should only contain conditions that relate the two tables, and in the case of `LEFT JOIN` it can also include conditions on the table you're joining with (`t2` in this case).
No comma after "t1" on line 2. Also I don't think you can do "\*," on line 1; you need to either do "SELECT \*" or "SELECT column1, column2".
select query using leftjoin doesnt work
[ "", "mysql", "sql", "left-join", "" ]
I'm creating a database design in MySQL Workbench. I want to have an enumeration table which holds some standard values. The values of the enumeration table need to be linked to a row in my other table. So I have a table called 'club' which holds a column 'club\_soort'. The column 'club\_soort' needs to relate to the enumeration table. Also, I want to use my tables (when I'm ready with my database design) in phpMyAdmin. I understand the concept of enumeration, but I can't implement it. I hope someone can help me! Thanks!
Rather than using enumerations, you should use what's known as a lookup or reference table. This table would contain your enumerations and be referenced as a foreign key by the parent table. As an example, this would look like: ``` parent_table ------------ club id ---- club_soort ----------> soort ```
ENUM values cannot be linked to any MySQL structures. It can contain only static data.
Using enumerations in mySQL Workbench
[ "", "mysql", "sql", "workbench", "" ]
I need to generate a report every day for exceptions which have occurred today. I have a column "Exception Details" where I store the details of the exception along with the date on which it occurred. I have another table where I store my current date, which changes every day. I need to compare these two dates, and if they match then I generate a `CSV report`. I have managed to generate the reports manually by adding the dates in my query, but I would like to automate it, so I need a query which would automatically pick up the current date from my table and compare it with the date present in "Exception Details". So far I have managed to write the query below: ``` SELECT * FROM EXCEPTION WHERE EXP_STAT = 'OPEN' AND SUBSTR(EXCEPTION_DETAILS ,INSTR(EXCEPTION_DETAILS,'2014-01-22')) = '2014-01-22' ``` I need to change this so that I can just pick up whatever date is present in "Exception Details" and compare it with the date in another table and get the records based on that.
I have found the solution to it using regex: ``` SELECT * FROM EXCEPTION WHERE EXP_STAT = 'OPEN' AND REGEXP_SUBSTR(EXCEPTION_DETAILS, '(19|20)\d\d-\d\d-\d\d') = (select to_char(date, 'YYYY-MM-DD') from calendar_table where date_type='CURRENT') ```
``` SELECT tableA.* FROM tableA, tableB WHERE tableA.EXP_STAT = 'OPEN' AND SUBSTR(EXCEPTION_DETAILS ,INSTR(EXCEPTION_DETAILS,to_char(tableB.current_date,'YYYY-MM-DD')),12) = to_char(tableB.current_date,'YYYY-MM-DD'); ``` This assumes your date table doesn't have repetitive dates.
How do I compare a date from one table with the date present in another table?
[ "", "sql", "oracle", "oracle11g", "" ]
I was trying to create a function which returns an integer. However, I got this warning: ``` "Msg 2715, Level 16, State 3, Procedure median, Line 1 Column, parameter, or variable #0: Cannot find data type Median." ``` Here is the query. Thanks in advance. ``` CREATE FUNCTION dbo.median (@score int) RETURNS Median AS BEGIN DECLARE @MedianScore as Median; SELECT @MedianScore= ( (SELECT MAX(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score) AS BottomHalf) + (SELECT MIN(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score DESC) AS TopHalf) ) / 2 ; RETURN @MedianScore; END; GO ```
Just change the return type to integer: ``` CREATE FUNCTION dbo.median (@score int) RETURNS integer AS BEGIN DECLARE @MedianScore as integer; ``` Unless you're intentionally using the `Median` type for something that you haven't stated.
Since you are calculating the median of some values, I would suggest you return a numeric value instead of an integer, as (MAX(@score) + MIN(@score)) / 2 can return a decimal value; trying to save that value in an INT variable will truncate the decimal part, which can lead to wrong results. In the following example I have used a NUMERIC(20,2) return value. ``` CREATE FUNCTION dbo.median (@score int) RETURNS NUMERIC(20,2) AS BEGIN DECLARE @MedianScore as NUMERIC(20,2); SELECT @MedianScore= ( (SELECT MAX(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score) AS BottomHalf) + (SELECT MIN(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score DESC) AS TopHalf) ) / 2 ; RETURN @MedianScore; END; GO ``` Or if you do want to return an INTEGER, use the ROUND function inside the function, something like this: ``` CREATE FUNCTION dbo.median (@score int) RETURNS INT AS BEGIN DECLARE @MedianScore as INT; SELECT @MedianScore=ROUND( ( (SELECT MAX(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score) AS BottomHalf) + (SELECT MIN(@score) FROM (SELECT TOP 50 PERCENT Score FROM t ORDER BY Score DESC) AS TopHalf) ) / 2, 0) ; RETURN @MedianScore; END; GO ```
Create function return integer SQL Server 2008
[ "", "sql", "sql-server", "sql-server-2008", "function", "" ]
I am creating an application which is mainly used inside an office for data maintenance. It will be used to store data like work lists, future works, reminders, etc. All data will be presented to the user in the form of grids. So it's all about data stored in a SQL Server database. There will be a number of users accessing it and they will modify data frequently. Also there will be many options, like an ERP program. No internet connection is required for this program. So in this case, which is the better choice? Should I choose WinForms or ASP.NET? The main criteria for choosing will be performance and ease of use; it should also support more functions for grid controls, etc. So which one should I choose? And what are the advantages and disadvantages of both?
Some pointers: **WinForms** Good * No webserver to install, setup and secure Bad * Installation of some kind required on each machine e.g. .NET framework, exe, assemblies, etc. * More difficult to roll out updates to the application **ASP.NET** Good * No installation on clients required * Can run on machines other than Windows, including mobile devices * Updates to the application can be published instantly to all clients Bad * Have to use IIS or UltiDev Web Server to serve up pages * File system is more secure so reading and writing to files can be time consuming to configure
Unless you want to use jQuery and JavaScript to add additional functionality to the standard ASP.NET GridView, I would say a Windows Form would be more suited. Depending on the size of the data, it will most likely offer better performance, and you have much more control over the actual functionality of the program, rather than dealing with browser-related constraints.
Windows Forms vs ASP.NET to create a Data Maintenance program
[ "", "asp.net", "sql", "vb.net", "winforms", "datagridview", "" ]
I'm working with PostgreSQL. I have a table with some elements. In the last column there is a 'Y' or 'N' letter. I need a command which selects only the first match (I mean where the last column is 'N') and changes it to 'Y'. My idea: ``` UPDATE Table SET Checked='Y' WHERE (SELECT Checked FROM Table WHERE Checked='N' ORDER BY ID LIMIT 1) = 'N' ``` But it changes 'N' to 'Y' in every row.
Here is the query: ``` UPDATE Table SET Checked='Y' WHERE ID =(SELECT ID FROM Table WHERE Checked='N' ORDER BY ID LIMIT 1) ```
## Why it didn't work Others have answered the *how*, but you really need to understand *why* this was wrong: ``` UPDATE Table SET Checked='Y' WHERE ( SELECT Checked FROM Table WHERE Checked='N' ORDER BY ID LIMIT 1 ) = 'N' ``` SQL evaluates step by step in a well-defined order. In this case, the subquery evaluates first because it's *uncorrelated*, i.e. it doesn't refer to any variables from the outer query. The subquery finds the first row in `id` order where 'Checked' is 'N', and as the SELECT list contains the field `Checked`, that means the subquery will be substituted for the value `N`. Effectively it does nothing (except it might be `NULL` instead of `N` if no rows matched). So now you have: ``` UPDATE Table SET Checked='Y' WHERE 'N' = 'N'; ``` Starting to see what went wrong? `'N' = 'N'` is always going to be true. So the `WHERE` clause is always true, and you might as well have written an unconstrained update. ``` UPDATE Table SET Checked='Y'; ``` ## How to fix it You're trying to find the first row where `checked` is `n` and set `checked` to `y`. You need to *connect* these two parts using the primary key. Find the `id` you want to update, then use that to constrain the update. Others have already written the text of that query, so I won't repeat it here. Hopefully you now understand those answers, though.
SQL How to Update only first Row
[ "", "sql", "postgresql", "" ]
I have data that has come over from a hierarchical database, and it often has columns that contain data that SHOULD be in another table, if the original database had been relational. The column's data is formatted in pairs, with `LABEL\VALUE` with a space as the delimiter, like this: > LABEL1\VALUE LABEL2\VALUE LABEL3\VALUE There is seldom more than one pair in a record, but there can be as many as three. There are 24 different possible labels. There are other columns in this table, including the ID. I have been able to convert this column into a sparse array without using a cursor, with columns for ID, LABEL1, LABEL2, etc.... But this is not ideal for using in another query. My other option is to use a cursor, loop through the entire table once and write to a temp table, but I can't seem to get it to work the way I want. I have been able to do it in just a few minutes in VB.NET, using a couple of nested loops, but can't manage to do it in T-SQL, even using cursors. The problem is that I would have to remember to run this program every time before I want to use the table it creates. Not ideal. So, I read a row, split the pairs out of 'LABEL1\VALUE LABEL2\VALUE LABEL3\VALUE' into an array, then split them out again, then write the rows > ID, LABEL1, VALUE > > ID, LABEL2, VALUE > > ID, LABEL3, VALUE etc... I realize that 'splitting' the strings here is the hard part for SQL to do, but it just seems a lot more difficult than it needs to be. What am I missing?
Assuming that the data label contains no `.` characters, you can use a simple function for this: ``` CREATE FUNCTION [dbo].[SplitGriswold] ( @List NVARCHAR(MAX), @Delim1 NCHAR(1), @Delim2 NCHAR(1) ) RETURNS TABLE AS RETURN ( SELECT Val1 = PARSENAME(Value,2), Val2 = PARSENAME(Value,1) FROM ( SELECT REPLACE(Value, @Delim2, '.') FROM ( SELECT LTRIM(RTRIM(SUBSTRING(@List, [Number], CHARINDEX(@Delim1, @List + @Delim1, [Number]) - [Number]))) FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name) FROM sys.all_objects) AS x WHERE Number <= LEN(@List) AND SUBSTRING(@Delim1 + @List, [Number], LEN(@Delim1)) = @Delim1 ) AS y(Value) ) AS z(Value) ); GO ``` Sample usage: ``` DECLARE @x TABLE(ID INT, string VARCHAR(255)); INSERT @x VALUES (1, 'LABEL1\VALUE LABEL2\VALUE LABEL3\VALUE'), (2, 'LABEL1\VALUE2 LABEL2\VALUE2'); SELECT x.ID, t.val1, t.val2 FROM @x AS x CROSS APPLY dbo.SplitGriswold(REPLACE(x.string, ' ', N'ŏ'), N'ŏ', '\') AS t; ``` (I used a Unicode character unlikely to appear in data above, only because a space can be problematic for things like length checks. If this character is likely to appear, choose a different one.) Results: ``` ID val1 val2 -- -------- -------- 1 LABEL1 VALUE 1 LABEL2 VALUE 1 LABEL3 VALUE 2 LABEL1 VALUE2 2 LABEL2 VALUE2 ``` If your data might have `.`, then you can just make the query a little more complex, without changing the function, by adding yet another character to the mix that is unlikely or impossible to be in the data: ``` DECLARE @x TABLE(ID INT, string VARCHAR(255)); INSERT @x VALUES (1, 'LABEL1\VALUE.A LABEL2\VALUE.B LABEL3\VALUE.C'), (2, 'LABEL1\VALUE2.A LABEL2.1\VALUE2.B'); SELECT x.ID, val1 = REPLACE(t.val1, N'ű', '.'), val2 = REPLACE(t.val2, N'ű', '.') FROM @x AS x CROSS APPLY dbo.SplitGriswold(REPLACE(REPLACE(x.string, ' ', 'ŏ'), '.', N'ű'), 'ŏ', '\') AS t; ``` Results: ``` ID val1 val2 -- -------- -------- 1 LABEL1 VALUE.A 1 LABEL2 VALUE.B 1 LABEL3 VALUE.C 2 LABEL1 VALUE2.A 2 LABEL2.1 VALUE2.B ```
Using the [SQL split string function](http://www.kodyaz.com/articles/t-sql-convert-split-delimeted-string-as-rows-using-xml.aspx) given at referenced SQL tutorial, you can split the label-value pairs as following ``` SELECT id, max(label) as label, max(value) as value FROM ( SELECT s.id, label = case when t.id = 1 then t.val else NULL end, value = case when t.id = 2 then t.val else NULL end FROM dbo.Split(N'LABEL1\VALUE1 LABEL2\VALUE2 LABEL3\VALUE3', ' ') s CROSS APPLY dbo.Split(s.val, '\') t ) t group by id ``` You can see that the split string function is called twice, first for splitting pairs from others. Then the second split function joined to previous one using CROSS APPLY splits labels from pairs ![enter image description here](https://i.stack.imgur.com/n7sBL.png)
split string in column
[ "", "sql", "sql-server", "t-sql", "parsing", "split", "" ]
How can I restrict a field in a table to 15 or 16 digits? I have this table: ``` create table Person( UserID varchar(30) ,Password varchar(30) not null ,CCtype varchar(8) ,CCNumber numeric ,primary key(UserID) ,constraint CK_CCvalidity check ( (CCType is null or CCNumber is null) or ( (CCType = 'Amex' or CCType = 'Discover' or CCType = 'MC' or CCType = 'VISA') and (CCNumber >= 15 and CCNumber <= 16) ) ) ); ``` But this actually checks for the values 15 and 16, not for the number of digits. Also, we can assume that the numeric may hold 000... as the first digits. Thanks for the help
`CCNumber` should never be numeric. That will lead to a world of pain. It should be `varchar(X)` where X is 13 - 24 digits. Credit card numbers are usually represented by groups of 4 or 5 digits separated by spaces or dashes or simply all together with no separators. [note: American Express: 15 digits; Visa: 13 or 16 digits] In response to your comment: ``` ALTER TABLE dbo.Person ADD CONSTRAINT CK_Person_CCNumber CHECK (LEN(CCNumber) = 16 OR LEN(CCNumber) = 15); ``` But probably better as: ``` ALTER TABLE dbo.Person ADD CONSTRAINT CK_Person_CCNumber CHECK (LEN(CCNumber) >= 13 AND LEN(CCNumber) <= 15); ``` AND add a constraint to ensure it is a valid credit card number perhaps (there are plenty of examples online). * [Bank Card Number](https://en.wikipedia.org/wiki/Bank_card_number)
You can create a function to remove the Non-Numeric characters from a varchar, like [this one](https://stackoverflow.com/a/6529463/122195): ``` CREATE Function [fnRemoveNonNumericCharacters](@strText VARCHAR(1000)) RETURNS VARCHAR(1000) AS BEGIN WHILE PATINDEX('%[^0-9]%', @strText) > 0 BEGIN SET @strText = STUFF(@strText, PATINDEX('%[^0-9]%', @strText), 1, '') END RETURN @strText END ``` Now, if you want to allow only digits and want to check the length, you could add two Check Constraints like this: ``` Create Table Person ( Id int not null primary key, CCNumber varchar(30), CONSTRAINT CK_Person_CCNumber_Length CHECK (LEN(CCNumber) BETWEEN 15 AND 16), CONSTRAINT CK_Person_CCNumber_IsNumeric CHECK (LEN(dbo.[fnRemoveNonNumericCharacters](CCNumber)) = LEN(CCNumber)) ) ``` First Constraint will check the length of the field to be 15 or 16. Second one will check that the field is numeric (length of field removing non-numeric is equal to length of the original field) You can also do it in just one ANDed Check Constraint.
Digit restriction on SQL server numeric field
[ "", "sql", "sql-server", "t-sql", "" ]
I am using `MySQL 5.2.42 CE`. I have a table which has a column named `"organization_id"`. It contains repeated values across rows, meaning more than one row can have the `"org1"` value in that field. Now I have to loop through the distinct values of that column, i.e. the different organizations. But I don't know if there is any `for-each loop in MySQL`. I have done this: ``` Select Distinct organization_id From tbl_organization ``` This returns all the different organizations. Now how can I loop through the list and access each organization\_id? I am writing this in a trigger in the DB. Please help. Thanks & Regards, Surodip Patra
You have to use [`CURSORS`](http://dev.mysql.com/doc/refman/5.6/en/cursors.html). Using a cursor, read the fetched ids into local variable and perform other conditional operations based on that organisation id. ``` ... DECLARE cur_orgs CURSOR FOR SELECT DISTINCT organization_id FROM tbl_organization; ... OPEN cur_orgs; ... Reading_Orgs: LOOP FETCH cur_orgs INTO _org_id; ... IF done THEN LEAVE Reading_Orgs; END IF; IF ... THEN ... ELSE ... END IF; END LOOP; CLOSE cur_orgs; ... ... ```
If I am understanding right, then you can use a **CURSOR** in your trigger. A cursor is used to iterate through a set of rows returned by a query and process each individual row. Currently, in all versions greater than 5.x, a MySQL cursor has the following properties: * Read only: it means you cannot update the cursor. * Non-scrollable: it can only traverse in one direction and cannot skip, or move back or forth in the result set. * Asensitive: you should avoid updating a table while a cursor is open on that table, otherwise you may get unexpected results. MySQL supports the following statements for working with cursors. First you have to declare a cursor using the DECLARE statement: ``` DECLARE cursor_name CURSOR FOR SELECT_statement; ``` Second you have to open the cursor using the OPEN statement. You must open the cursor before fetching rows from it. ``` OPEN cursor_name; ``` Next you can retrieve the next row from the cursor and move the cursor to the following row in the result set by using the FETCH statement. ``` FETCH cursor_name INTO variable_list; ``` And finally, you must close the cursor to deactivate it and release the memory associated with it. To close the cursor you use the CLOSE statement: ``` CLOSE cursor_name; ``` You can then get the value of a column into a MySQL variable inside the loop. You can find more detail and a good tutorial on how to use cursors here: <http://www.mysqltutorial.org/mysql-cursor/>
Looping on a Select query result set element and get the elements in MySQL
[ "", "mysql", "sql", "" ]
I have a table like this: ``` Table company companyid | companyname | owner | 5 |coffecompany |Mike | 6 |juicecompany |Mike | ``` For some reason, I need to use this: ``` DELETE FROM company WHERE companyid='5'; DELETE FROM company WHERE companyid='6'; ``` instead of ``` DELETE FROM company WHERE owner='Mike'; ``` But I wonder if the second choice runs faster, and if it does, whether it runs much faster. In the future, I may have to use it to delete a large number of records, so I really need to know.
`delete from company where companyId in (5, 6);` should always be faster, even though the difference might be negligible if eg. you've got proper indexes, no concurrent queries, no issues with locking etc. Note that my query is for MS SQL, if your database server allows using the same construct (ie. specifying all the values in such concise way), you should probably use it, if not, go with something like `delete from company where companyId = 5 or companyId = 6;` Also, don't use string literals if `companyid` is a number (is the table column actually a number, or a text?). In any case, it gives the server more lee-way in implementing the actual operation, and DB servers tend to be very good at query optimization. One possible bottle-neck for deletion could be in the transaction logs, however. It might very well be that if you're deleting a huge amount of rows at once, it would be better to do a few separate deletes in separate transactions to fit within transaction size limits.
Generally, SQL is a language operating on sets of data, so the second query will be much faster for a huge number of rows. The first choice might be slower as you'll have to send the query text as many times as you have rows to delete. Imagine the network traffic if you want to delete 1 000 000 rows. On small numbers of rows you probably won't be able to see any difference.
Will one query run faster than multiple queries, if they are deleting the same amount of records
[ "", "sql", "" ]
Let's say today's date is Jan 22, 2014 and I run a report on the table below (Table 1), but enter an SIDate range of Jan 01, 2014 to Jan 18, 2014 only. I want to display all rows based on the SIDate range I entered (Jan 01, 2014 to Jan 18, 2014), but CMDate should also be considered and should pass the date parameter. As you've noticed in Table 2 (my desired output), the 6th row is no longer there because its CMDate is Jan 21, 2014. Table 1 ``` Emp SIDate item TotQty TotAmt CMDate CMAmt ------------------------------------------------------- CLO 01-01-14 item1 120 1500.00 null null CLO 01-02-14 item2 80 500.00 01-05-14 20.00 CLO 01-05-14 item6 21 1100.00 null null CLO 01-10-14 item5 100 2000.00 01-10-14 200.00 CLO 01-12-14 item9 300 100.00 null null CLO 01-16-14 item3 150 650.00 01-21-14 150.00 ``` Table 2 (desired output) ``` Emp SIDate item TotQty TotAmt CMDate CMAmt ------------------------------------------------------- CLO 01-01-14 item1 120 1500.00 null null CLO 01-02-14 item2 80 500.00 01-05-14 20.00 CLO 01-05-14 item6 21 1100.00 null null CLO 01-10-14 item5 100 2000.00 01-10-14 200.00 CLO 01-12-14 item9 300 100.00 null null ``` Any input would be very much appreciated.
You'll need to repeat the filter on both columns, optionally allowing for a NULL `CMDate` ``` SELECT Emp, SIDate, item, TotQty, TotAmt, CMDate, CMAmt FROM Table1 WHERE SIDate BETWEEN '2014-01-01' AND '2014-01-18' AND (CMDate IS NULL OR CMDate BETWEEN '2014-01-01' AND '2014-01-18'); ``` [Fiddle here](http://sqlfiddle.com/#!2/2be49/2) (As an aside, using a standard like [ISO](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html) for your date formats will go a long way to avoiding date format related issues)
``` select * from table where (cmdate between '01-01-2014' and '18-01-2014' or cmdate is null) and sidate between '01-01-2014' and '18-01-2014' ```
How to create a sql script to consider two columns?
[ "", "mysql", "sql", "date", "select", "where-clause", "" ]
Two Windows 7 computers are connected through a network (workgroup) and can ping each other. I have my database on one of them and need the other computer to connect to this database through my Windows application. Here's what I have: * Both PCs are Windows 7 (Ultimate & Professional). * I enabled TCP/IP under SQL Configuration Tools. * I enabled the Browser service. * I configured the connection string correctly. * I tried connecting by IP address. * I created a rule in my Windows Firewall to allow incoming connections to SQL Server on its default port. * I don't have another firewall. * I allowed external connections from SQL Server Management Studio. Despite all this, I am still not able to connect to SQL Server on the other computer. Can anybody help, please?
It turned out to be a firewall problem. I disabled the firewall and it worked. Even though I had the port opened as a rule, something else in my firewall seems to have been blocking the connection. I ended up disabling Windows Firewall and installing another one.
If you are on a workgroup with a PC named SERVER and another PC named CLIENT, you have to choose which security mode you are going to use: STANDARD vs. NT AUTHENTICATION. First, did you check to see if the server is active on the default port? Try the following commands on the server from a command shell: ``` cd c:\temp netstat -a -n -o > netstat.txt ``` **Search for port 1433: is it open and listening?** Second, make sure that TCP/IP is enabled for both the server and the native client. Disable named pipes. Make sure shared memory is enabled. ![enter image description here](https://i.stack.imgur.com/XVcIw.jpg) I assume you can connect to the engine via SSMS while logged onto the SERVER. **Correct?**
Cannot connect to SQL Server 2008 on workgroup network
[ "", "sql", "sql-server", "sql-server-2008", "networking", "windows-7", "" ]
I have a problem getting the last month in a query. This is my query: ``` SELECT i_chelt_pub_val.`agent_id`, i_chelt_pub_val.`date`, i_chelt_pub_val.`form_val` FROM i_chelt_pub_val WHERE ( MONTH(i_chelt_pub_val.`date`) = (MONTH(CURRENT_TIMESTAMP) - 1) ) AND i_chelt_pub_val.`agent_id` = '253' ``` It worked fine in 2013; now in 2014 I have an issue, because this line of the query, `MONTH(i_chelt_pub_val.date) = (MONTH(CURRENT_TIMESTAMP) - 1)`, compares 12 = 1 - 1, which is not good. I want the comparison in the WHERE clause to be 12 = 12. How can I do that?
try DATE\_SUB(): ``` ... = MONTH(DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 1 MONTH)); ```
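Extracting the month after subtracting the interval handles the January wrap-around (you get December rather than month 0). Here is a sketch of it plugged into the original query; note the added `YEAR()` condition, which is my addition and not part of the answer above, but without it the month test would also match January data from every other year:

```sql
SELECT agent_id, `date`, form_val
FROM i_chelt_pub_val
WHERE MONTH(`date`) = MONTH(DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 1 MONTH))
  -- restrict to the same year as well, otherwise this month of every year matches
  AND YEAR(`date`) = YEAR(DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 1 MONTH))
  AND agent_id = '253';
```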
Assuming that you want all data from the beginning of the last month to the end of the last month, your query has three issues: * you will get, for example, all Januaries of all years * depending on when you execute the query, you will miss the data from the very beginning of a month up to the time of day when you execute it; for example, if the execution date is `2014-01-22 15:03:00`, then you will get data from `2013-12-22 15:03:00` until today, but not from `2013-12-01 00:00:00` * it might be quite slow, because MySQL can't use an index on your date column when you apply a function to it. To fix this, query like this: ``` SELECT i_chelt_pub_val.`agent_id`, i_chelt_pub_val.`date`, i_chelt_pub_val.`form_val` FROM i_chelt_pub_val WHERE ( i_chelt_pub_val.`date` >= DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-01 00:00:00') AND i_chelt_pub_val.`date` < DATE_FORMAT(NOW(), '%Y-%m-01 00:00:00') ) AND i_chelt_pub_val.`agent_id` = '253' ```
How do I get the last month using a query?
[ "", "mysql", "sql", "" ]
This is my original table: ``` +----------+---------------------+--------+------------+-------+ | lab_code | test_name | sample | time_taken | price | +----------+---------------------+--------+------------+-------+ | FBS | Fasting blood Suagr | Blood | 01:15:00 | 400 | | STLC | Stool Culture | Stool | 48:00:00 | 700 | | UA | Urinalysis | Urine | 02:30:00 | 425 | | UDS | Urine drug screen | Urine | 06:00:00 | 550 | +----------+---------------------+--------+------------+-------+ ``` I want to display: ``` +----------+ | Hospital | +----------+ | FBS | | STLC | | UA | | UDS | +----------+ ``` I want to change the attribute name when querying the data (without changing the original table). Please help.
Give an `ALIAS` name to the specific column: ``` SELECT lab_code AS Hospital FROM tableA ```
What about ``` SELECT lab_code AS Hospital FROM <table_name>; ``` The use of AS defines an **ALIAS** for the column name; it does not mean that the table's column name changes. Rather, when the SELECT query is executed, the alias is shown as the column name in place of the column's actual name in the table.
change attribute name in Mysql
[ "", "mysql", "sql", "select", "alias", "" ]
I have an MS SQL table with over 250 million rows. Whenever I execute the following query, `SELECT COUNT(*) FROM table_name`, it takes over 30 seconds to return the output. Why is it taking so much time? Does it perform a count every time I query? Until now I had assumed that it stores this information somewhere (probably in the table metadata; I'm not sure table metadata even exists). Also, I would like to know whether this query is IO-, processor-, or memory-intensive. Thanks
Every time you execute `SELECT COUNT(*) FROM table`, SQL Server actually goes through the table and counts all rows. To get an estimated row count for one or more tables, you can run the following query, which reads stored metadata and returns in under 1 second. ``` SELECT OBJECT_NAME(OBJECT_ID) TableName, st.row_count FROM sys.dm_db_partition_stats st WHERE index_id < 2 ORDER BY st.row_count DESC ``` Read more about it here: <http://technet.microsoft.com/en-us/library/ms187737.aspx>
No, SQL Server doesn't store this information. It computes it on every query. But it can cache the execution plan to improve performance. So, if you want to get results quickly, you need a primary key at least.
What makes count(*) query to run for 30 sec?
[ "", "sql", "sql-server", "" ]
***UPDATED*** I have added an example of the dataset so it's easier to explain my problem. It looks to me as if it is a lookup question. I have a basetable with all months from 2013-01 to 2014-12. I have a lookup table with contract working hours per person (column ID; e.g. person 1 and person 2). I have to match the basetable months with the right contract hours for that month. The lookup table provides historical information about changes in workweek hours. **Can somebody help me to make this join or lookup?** **Example result person 1** : The contractual hours for April 2013 (2013-04) should be 16 hours, because in the lookup table he worked 16 hours from 2013-02-18 and started working 8 hours on 2013-04-16. **Example result person 2** : His last contractual change, on 2012-11-01, is to 32 hours, so all months in 2013 & 2014 have to be 32 hours. --- ``` CREATE TABLE [dbo].[_Lookup]( [ID] [int] NOT NULL, [Date] [nvarchar](50) NULL, [Hours] [int] NULL ) ON [PRIMARY] GO INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-01-01 00:00:00', 8) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-02-18 00:00:00', 16) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-04-16 00:00:00', 8) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-05-01 00:00:00', 32) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-09-15 00:00:00', 12) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2013-10-01 00:00:00', 20) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (1, N'2015-01-01 00:00:00', 12) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2008-01-01 00:00:00', 24) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2009-03-01 00:00:00', 36) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2009-08-31 00:00:00', 24) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2010-02-01 00:00:00', 36) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2010-04-01 
00:00:00', 30) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2011-08-01 00:00:00', 24) INSERT [dbo].[_Lookup] ([ID], [Date], [Hours]) VALUES (2, N'2012-11-01 00:00:00', 32) CREATE TABLE [dbo].[_Basetable]( [ID] [int] NOT NULL, [Date] [datetime] NULL ) ON [PRIMARY] GO INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A13900000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A15800000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A17400000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A19300000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A1B100000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A1D000000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A1EE00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A20D00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A22C00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A24A00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A26900000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A28700000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A2A600000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A2C500000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A2E100000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A30000000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A31E00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A33D00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A35B00000000 AS DateTime)) INSERT 
[dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A37A00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A39900000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A3B700000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A3D600000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (1, CAST(0x0000A3F400000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A13900000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A15800000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A17400000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A19300000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A1B100000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A1D000000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A1EE00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A20D00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A22C00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A24A00000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A26900000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A28700000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A2A600000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A2C500000000 AS DateTime)) INSERT [dbo].[_Basetable] ([ID], [Date]) VALUES (2, CAST(0x0000A2E100000000 AS DateTime)) -- The result table should look like this: -- -- -- ID Date -- 1 2013-01-01 8 -- 1 2013-02-01 8 -- 1 2013-03-01 16 -- 1 2013-04-01 16 -- 1 2013-05-01 32 -- 1 2013-06-01 32 -- 1 2013-07-01 32 -- 1 2013-08-01 
32 -- 1 2013-09-01 32 -- 1 2013-10-01 20 -- 1 2013-11-01 20 -- 1 2013-12-01 20 -- 1 2014-01-01 20 -- 1 2014-02-01 20 -- 1 2014-03-01 20 -- 1 2014-04-01 20 -- 1 2014-05-01 20 -- 1 2014-06-01 20 -- 1 2014-07-01 20 -- 1 2014-08-01 20 -- 1 2014-09-01 20 -- 1 2014-10-01 20 -- 1 2014-11-01 20 -- 1 2014-12-01 20 -- 2 2013-01-01 32 -- 2 2013-02-01 32 -- 2 2013-03-01 32 -- 2 2013-04-01 32 -- 2 2013-05-01 32 -- 2 2013-06-01 32 -- 2 2013-07-01 32 -- 2 2013-08-01 32 -- 2 2013-09-01 32 -- 2 2013-10-01 32 -- 2 2013-11-01 32 -- 2 2013-12-01 32 -- 2 2014-01-01 32 -- 2 2014-02-01 32 -- 2 2014-03-01 32 ```
**This is the answer:** ``` Select A.ID , A.Date , ( Select Top 1 B.Hours From _Lookup B Where A.Date >= B.Date and A.Id = B.ID Order by B.Date desc ) as WorkingHours From [_Basetable] A ``` Thanks for helping me out!!
Does this work for you? ``` WITH MyTimeDate ([Year], [Month], Hours_Work) AS ( SELECT 2013, 1, 8 UNION ALL SELECT 2013, 2, 16 UNION ALL SELECT 2013, 4, 8 UNION ALL SELECT 2013, 5, 32 UNION ALL SELECT 2013, 9, 12 UNION ALL SELECT 2013, 10, 20 UNION ALL SELECT 2015, 1, 12 ) ,MySampleCal ([Year], [Month]) AS ( SELECT 2013, 1 UNION ALL SELECT 2013, 2 UNION ALL SELECT 2013, 3 UNION ALL SELECT 2013, 4 UNION ALL SELECT 2013, 5 UNION ALL SELECT 2013, 6 UNION ALL SELECT 2013, 7 UNION ALL SELECT 2013, 8 UNION ALL SELECT 2013, 9 UNION ALL SELECT 2013, 10 UNION ALL SELECT 2013, 11 UNION ALL SELECT 2013, 12 ) ,DistanceCTE AS ( SELECT C.[Year] ,C.[Month] ,D.Hours_Work ,DistanceSeq = ROW_NUMBER() OVER (PARTITION BY C.[Year], C.[Month] ORDER BY D.[Month] - C.[Month] DESC) FROM MySampleCal C JOIN MyTimeDate D ON C.[Year] = D.[Year] AND C.[Month] >= D.Month ) SELECT * FROM DistanceCTE WHERE DistanceSeq = 1 ```
SQL join of 2 date tables with gaps (Lookup)
[ "", "sql", "join", "" ]
I have about 11 columns in a MySQL table and none of them are empty. When I view these columns in the Browse section, some columns are hidden, even though I'm sure they exist. I have updated phpMyAdmin to the latest version (4.1.5), but that did not solve anything. I also exported the table, dropped it, and imported it again, but nothing changed. How do I make all the columns visible?
You could look in the exported file to double-check that they exist there. Another thing to try; from the table, click the SQL tab. The default text will probably say `` SELECT * FROM `tablename` WHERE 1 ``, try running that and see if you get any different response. If you click the Structure tab, do you see the columns that are hidden? Do you have access to the command-line client? If so, can you test showing the table structure there to see if it's different than displayed by phpMyAdmin? **Edit**: Have you tried a different browser? Marc Delisle points out that the column may be hidden within phpMyAdmin; in the Browse tab look to the left of the column headers, there's a T with two arrows -- to the right of that is a small dropdown arrow. Make sure all your columns are selected there. It's doubtful this is the cause of your problem, because you dropped the table and recreated it, but it's the next thing to check. Can you copy the table to another database for testing, verify that the problem exists there, then truncate the table (remove all data, see the Operations tab to do this), and see if it continues (that will help determine whether it's the structure or the data causing the problem)? Can you try to export the structure only and recreate the problem on the [demo server](http://demo.phpmyadmin.net/master) (log in with the username root and a blank password)? Can you post the structure here? Make sure you obscure any sensitive information if needed.
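For the command-line check suggested above, these statements make the server itself report the table structure, so its output can be compared with what phpMyAdmin displays (substitute your own table name):

```sql
-- both return one row per column the server knows about
SHOW COLUMNS FROM `tablename`;
DESCRIBE `tablename`;
```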
This question is a duplicate of this one: [phpMyAdmin doesn't show added columns](https://stackoverflow.com/questions/12960302) so here is my duplicate answer! I don't think I have enough kudos to flag this as a duplicate yet. The database phpmyadmin is used to store which columns are hidden, which column a table should be sorted by, etc. The table pma\_table\_uiprefs (phpmyadmin.pma\_table\_uiprefs) in particular has columns: username; db\_name; table\_name; and prefs. I found the row in that table that matches your db\_name, user\_name and table\_name, and deleted it. That reset the layout back to showing all columns! The prefs column is text, and its format could probably be deciphered if you have some spare time and energy, but deleting the row is easy; you can then adjust the layout again in phpMyAdmin, and the row will be recreated in phpmyadmin.pma\_table\_uiprefs. I was authenticated as root when doing this.
Some columns are hidden in phpMyAdmin
[ "", "mysql", "sql", "phpmyadmin", "" ]
I'm trying to create a query to return the file with the max version, independent of the value of the server. How could I do that? actual table data: ``` server filename v4 date local code1.zip 41 0000-00-00 remote code1.zip 39 0000-00-00 local code1.zip 28 0000-00-00 remote code1.zip 21 0000-00-00 local code1.zip 32 0000-00-00 remote code1.zip 27 0000-00-00 ``` the query: ``` SELECT server, filename, max(v4) as v4, date FROM table WHERE date ='0000-00-00' GROUP BY filename, server, date ``` Actual result: ``` server filename v4 date local code1.zip 41 0000-00-00 remote code1.zip 39 0000-00-00 ``` Expected result: ``` server filename v4 date local code1.zip 41 0000-00-00 ``` EDIT: It's for **MySQL** Thanks in advance.
If you just want the single row with the largest v4, you can use this: ``` SELECT server, filename, v4, date FROM `table` WHERE date ='0000-00-00' ORDER BY v4 DESC LIMIT 1 ``` To get the max v4 for each filename, first get max(v4) grouped by filename, then INNER JOIN back with `table` as below: ``` SELECT T1.server,T1.filename,T1.v4,T1.date FROM `table` T1 INNER JOIN (SELECT filename,max(v4) as maxv4 FROM `table` WHERE date = '0000-00-00' GROUP BY filename)T2 ON T1.filename = T2.filename AND T1.v4 = T2.maxV4 WHERE date = '0000-00-00'; ```
To get the max version you only have to do: ``` SELECT MAX(version) FROM table ``` If you also want the filename (if you have another column `id`): ``` SELECT t2.filename, t2.maxver FROM table t1 INNER JOIN (SELECT id, MAX(version) maxver FROM table GROUP BY id) t2 ON t1.id=t2.id ORDER BY t2.maxver DESC LIMIT 1 ``` If you also want the filename (if you don't have another column `id`, use `filename`): ``` SELECT t2.filename, t2.maxver FROM table t1 INNER JOIN (SELECT filename, MAX(version) maxver FROM table GROUP BY filename) t2 ON t1.filename=t2.filename WHERE t2.maxver = (SELECT MAX(version) FROM table) ``` There are many ways to do it.
Returning max from query
[ "", "mysql", "sql", "select", "aggregate-functions", "" ]
This is just an SSCCE: ``` CREATE TABLE test(i INTEGER NOT NULL); WITH max_i AS (SELECT MAX(i) FROM test) SELECT * FROM test WHERE max_i - i < 2 AND max_i!=i ``` PostgreSQL complains: ``` ERROR: column "max_i" does not exist ``` I guess that's because `max_i` is a single value and not a rowset, but how can I define values that I obtain through complex queries only once in my query, instead of having to repeat the subquery wherever they are used?
You don't reference the CTE at all in your "final" select statement. Therefore you can't reference it. Additionally the `where` condition needs to reference *columns* not tables. In your statement `max_i` is the name of a "table", thus it cannot be used in a where condition. ``` WITH t_max AS ( SELECT MAX(i) as max_i FROM test ) SELECT * FROM test CROSS JOIN t_max as t WHERE t.max_i - test.i < 2 AND t.max_i <> test.i; ``` The `cross join` doesn't do any harm because we know the CTE (named `t_max`) will always return only a single row. Once you can reference the CTE in the final select, the comparison is easy, but to do that you need to provide an alias for the column of the CTE inner select.
First, you don't need both of these conditions in the `WHERE` clause. The condition `i = max_i - 1` would be enough (if it worked). To use the value from the CTE, you either have to use a (cross) join as in the other answers, or use this (less common) syntax: ``` WITH max_i AS ( SELECT MAX(i) FROM test ) SELECT * FROM test WHERE i = (TABLE max_i) - 1 ; ``` Test at **[SQL-Fiddle](http://sqlfiddle.com/#!15/88f6b/1)**
why can't I use WITH (Common Table Expressions) in this query?
[ "", "sql", "postgresql", "common-table-expression", "" ]
I want to retrieve all the files from a cabinet (called 'Wombat Insurance Co'). Currently I am using this DQL query: ``` select r_object_id, object_name from dm_document(all) where folder('/Wombat Insurance Co', descend); ``` This is ok except it only returns a maximum of 100 results. If there are 5000 files in the cabinet I want to get all 5000 results. Is there a way to use pagination to get all the results? I have tried this query: ``` select r_object_id, object_name from dm_document(all) where folder('/Wombat Insurance Co', descend) ENABLE (RETURN_RANGE 0 100 'r_object_id DESC'); ``` with the intention of getting results in 100 file increments, but this query gives me an error when I try to execute it. The error says this: ``` com.emc.documentum.fs.services.core.CoreServiceException: "QUERY" action failed. java.lang.Exception: [DM_QUERY2_E_UNRECOGNIZED_HINT]error: "RETURN_RANGE is an unknown hint or is being used incorrectly." ``` I think I am using the RETURN\_RANGE hint correctly, but maybe I'm not. Any help would be appreciated! I have also tried using the hint `ENABLE(FETCH_ALL_RESULTS 0)` but this still only returns a maximum of 100 results. To clarify, my question is: how can I get *all* the files from a cabinet?
Aha, I've figured it out. Using DFS with Java (an abstraction layer on top of DFC) you can set the starting index for query results: ``` String queryStr = "select r_object_id, object_name from dm_document(all) where folder('/Wombat Insurance Co', descend)"; PassthroughQuery query = new PassthroughQuery(); query.setQueryString(queryStr); query.addRepository(repositoryStr); QueryExecution queryEx = new QueryExecution(); queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY); queryEx.setStartingIndex(currentIndex); // set start index here OperationOptions operationOptions = null; // will return 100 results starting from currentIndex QueryResult queryResult = queryService.execute(query, queryEx, operationOptions); ``` You can then increment the `currentIndex` variable by the page size (100 here) on each call to page through all the results.
Well, the hint *is* being used incorrectly. Start with 1, not 0. **There is no built-in limit** in DQL itself. All results are returned by default. The reason you get only 100 results must have something to do with the way you're using DFC (or whichever other client you are using). Using *IDfCollection* in the following way will surely return *everything*: ``` IDfQuery query = new DfQuery("SELECT r_object_id, object_name " + "FROM dm_document(all) WHERE FOLDER('/System', DESCEND)"); IDfCollection coll = query.execute(session, IDfQuery.DF_READ_QUERY); int i = 0; while (coll.next()) i++; System.out.println("Number of results: " + i); ``` In a test environment (CS 6.7 SP1 x64, MS SQL), this outputs: > Number of results: 37162 Now, there's proof. Using paging is however a good idea if you want to improve the overall performance in your application. As mentioned, start counting with the number 1: ``` ENABLE(RETURN_RANGE 1 100 'r_object_id DESC') ``` This way of paging requires that sorting be specified in the hint rather than as a DQL statement. If all you want is the first 100 records, try this hint instead: ``` ENABLE(RETURN_TOP 100) ``` In this case sorting with *ORDER BY* will work as you'd expect. Lastly, note that adding *(all)* will not only find all documents matching the specified qualification, but *all versions of every document*. If this was your intention, that's fine.
DQL query to return all files in a Cabinet in Documentum?
[ "", "sql", "database", "dql", "documentum", "dfc", "" ]
My table: ``` user | score | category ----------------------- 1 | 20 | 1 2 | 30 | 1 4 | 30 | 1 1 | 20 | 2 2 | 30 | 2 4 | 30 | 3 ``` Expected result: ``` user | score | category | rank ------------------------------ 1 | 20 | 1 | 3 2 | 30 | 1 | 1 4 | 30 | 1 | 1 1 | 20 | 2 | 2 2 | 30 | 2 | 1 4 | 30 | 3 | 1 ``` I need to make a permanent change to the table (not only select this value) so that I could select users like: ``` SELECT `user`, `rank`, `score` FROM `user` WHERE `category` = 2 ``` My real table is 200k+ rows and has much more columns. It has to be fast.
You can consider an alternative (which you may find to be faster) to a correlated subquery approach which involves using session variables ``` SET @n := 0, @r := 0, @c := NULL, @s := NULL; UPDATE users u JOIN ( SELECT user, score, category, @n := IF(@c = category, @n + 1, 1) rnum, @r := IF(@c = category, IF(@s = score, @r, @n), @n) rank, @c := category, @s := score FROM users ORDER BY category, score DESC ) r ON u.category = r.category AND u.user = r.user AND u.score = r.score SET u.rank = r.rank; ``` Outcome: ``` | USER | SCORE | CATEGORY | RANK | |------|-------|----------|------| | 2 | 30 | 1 | 1 | | 4 | 30 | 1 | 1 | | 1 | 20 | 1 | 3 | | 2 | 30 | 2 | 1 | | 1 | 20 | 2 | 2 | | 4 | 30 | 3 | 1 | ``` Here is **[SQLFiddle](http://sqlfiddle.com/#!2/26463/1)** demo
Here is the query to get the rank: ``` select t.*, (select count(*) + 1 from MyTable t2 where t2.category = t.category and t2.score > t.score ) as rank from MyTable t; ``` You can put this into an update by joining back to the original table, assuming `user, category` is unique: ``` update MyTable join (select t.*, (select count(*) + 1 from MyTable t2 where t2.category = t.category and t2.score > t.score ) as rank from MyTable t ) toupdate on MyTable.user = toupdate.user and MyTable.category = toupdate.category set MyTable.rank = toupdate.rank; ```
How to add "rank" column to mysql table depending on the score with different categories?
[ "", "mysql", "sql", "database", "" ]
I wrote this query to calculate the total time a person has worked per day using the DATEDIFF function, and now I am stuck at one place. I want to add a new column that shows OVERTIME in hh:mm if a person has worked overtime. In our office the total working hours are 08:00; anything beyond 8 hours is considered overtime, so e.g. if a person has worked 08:35 hours, the column should show 00:35. QUERY: ``` with times as ( SELECT t1.EmplID , t3.EmplName , min(t1.RecTime) AS InTime , max(t2.RecTime) AS [TimeOut] , t1.RecDate AS [DateVisited] FROM AtdRecord t1 INNER JOIN AtdRecord t2 ON t1.EmplID = t2.EmplID AND t1.RecDate = t2.RecDate AND t1.RecTime < t2.RecTime inner join HrEmployee t3 ON t3.EmplID = t1.EmplID group by t1.EmplID , t3.EmplName , t1.RecDate ) SELECT EmplID ,EmplName ,InTime ,[TimeOut] ,[DateVisited] ,CASE WHEN minpart = 0 THEN CAST(hourpart AS NVARCHAR(200)) + ':00' WHEN minpart <10 THEN CAST(hourpart AS NVARCHAR(200)) + ':0'+ CAST(minpart AS NVARCHAR(200)) ELSE CAST(hourpart AS NVARCHAR(200)) + ':' + CAST(minpart AS NVARCHAR(200)) END AS 'total time' FROM ( SELECT EmplID ,EmplName ,InTime ,[TimeOut] ,[DateVisited] ,DATEDIFF(minute, InTime, [TimeOut])/60 AS hourpart ,DATEDIFF(minute, InTime, [TimeOut]) % 60 AS minpart FROM times ) source ``` Output: ![enter image description here](https://i.stack.imgur.com/FfanL.png)
Try this instead: ``` with times as ( SELECT t1.EmplID , t3.EmplName , min(t1.RecTime) AS InTime , max(t2.RecTime) AS [TimeOut] , cast(min(t1.RecTime) as datetime) AS InTimeSub , cast(max(t2.RecTime) as datetime) AS TimeOutSub , t1.RecDate AS [DateVisited] FROM AtdRecord t1 INNER JOIN AtdRecord t2 ON t1.EmplID = t2.EmplID AND t1.RecDate = t2.RecDate AND t1.RecTime < t2.RecTime inner join HrEmployee t3 ON t3.EmplID = t1.EmplID group by t1.EmplID , t3.EmplName , t1.RecDate ) SELECT EmplID ,EmplName ,InTime ,[TimeOut] ,[DateVisited] ,convert(char(5),cast([TimeOutSub] - InTimeSub as time), 108) totaltime ,convert(char(5), case when TimeOutSub - InTimeSub >= '08:01' then cast(TimeOutSub - dateadd(hour, 8, InTimeSub) as time) else '00:00' end, 108) as overtime FROM times ```
``` --You can create a separate function to calculate work-hrs and overtime -- Try this CREATE FUNCTION GetWorkHours ( @INTime AS DATETIME ,@OutTime AS DATETIME ,@WorkingHrsINMinutes AS INT ) RETURNS @WorkHours TABLE ( WorkHours VARCHAR(5) ,OTHours VARCHAR(5) ) AS BEGIN INSERT INTO @WorkHours SELECT CAST((DATEDIFF(Minute, @INTime, @OutTime)) / 60 AS VARCHAR(2)) + ':' + CAST((DATEDIFF(Minute, @INTime, @OutTime)) % 60 AS VARCHAR(2)) AS TotalTime ,CASE WHEN DATEDIFF(Minute, @INTime, @OutTime) > @WorkingHrsINMinutes THEN CAST((DATEDIFF(Minute, @INTime, @OutTime) - @WorkingHrsINMinutes) / 60 AS VARCHAR(2)) + ':' + CAST((DATEDIFF(Minute, @INTime, @OutTime) - @WorkingHrsINMinutes) % 60 AS VARCHAR(2)) ELSE '00:00' END AS OverTime RETURN END --- Sample SELECT * FROM Dbo.GetWorkHours('2014-01-22 10:00:09.270', '2014-01-22 18:35:09.270', '480') SELECT * FROM Dbo.GetWorkHours('2014-01-22 10:00:09.270', '2014-01-22 17:35:09.270', '480') ```
new column to calculate overtime
[ "", "sql", "t-sql", "sql-server-2012", "" ]
I need to write Simple.Data queries for the following SQL query; can you help me? ``` SELECT Table1.UserID, Table1.we, Table1.ba, Table1.re, Table1.rtes, Table1.datae, Table1.void, Table1.deletee FROM Table1 INNER JOIN Table1 ON UserID.UserID = Table2.UserID WHERE Table2.clinicId = 11 ``` I tried it the following way: ``` db.Table1.FindAll() .Where(db.Table1.UserID == db.Table2.FindAll(db.Table2.ClinicID = 11).Select(db.Table2.UserID)); ``` but it does not work. I use MySQL 4.0.
``` db.Table2.FindAllByClinicId(11) .Select( db.Table2.Table1.UserId, db.Table2.Table1.we, db.Table2.Table1.ba, db.Table2.Table2.re, db.Table2.Table1.rtes, db.Table2.Table1.datae, db.Table2.Table1.void, db.Table2.Table1.deletee); ``` That should end up sending this to the database: ``` SELECT Table1.UserId , Table1.we , Table1.ba , Table1.re , Table1.rtes , Table1.datae , Table1.void , Table1.deletee FROM Table1 INNER JOIN Table2 ON Table1.UserId = Table2.UserId WHERE Table2.ClinicId = 11 ```
you are joining with same table. so you need to join two different tables or join same table giving alias. ``` SELECT Table1.UserID, Table1.we, Table1.ba, Table1.re, Table1.rtes, Table1.datae, Table1.void, Table1.deletee FROM Table1 INNER JOIN Table2 ON UserID.UserID = Table2.UserID where Table2.clinicId=11 ``` OR using alias for same table. ``` SELECT t1.UserID, t1.we, t1.ba, t1.re, t1.rtes, t1.datae, t1.void, t1.deletee FROM Table1 as t1 INNER JOIN Table1 as t2 ON UserID.UserID = t2.UserID where t2.clinicId=11 ```
How to write Simple.Data query to join two tables?
[ "", "sql", "database", "simple.data", "" ]
I'm having a difficult time figuring out why my mysql query is returning the same row five times. Here is the query below. I read on here something about joins and cartisian products. I'm very new to this, and editing this query on my site that someone else initially created. Trying to understand the joins that are used here, but it's confusing me. Any help is greatly appreciated. The goal is to get each row to return only once, not five times. Thanks! ``` SELECT off . * , resp.tracking_date, resp.paid_date, resp.total_value, resp.total_conversion FROM tbl_offers AS off, tbl_tracking AS tr, tbl_trackvalue AS resp, tbloffers_cat AS offcat WHERE off.id = tr.offer_id AND off.offer_type = 'cash_offer' AND off.id = offcat.offer_id AND tr.id = resp.tracking_id AND tr.user_id = '1454' AND resp.total_value !=0 AND resp.payment_status = 'paid' LIMIT 0 , 30 ``` Wow, thanks for all the help everyone. Looks like a lot of the solutions mentioned here would work. I did get the result I want by adding a group by tracking\_id to the query. Not sure if that's actually the best way to do it or not. I'm just learning to program while editing an old site, which I initially had someone else build. Lot's of fun. The whole join thing is one of the more complicated things I've dealt with so far, but I'm sure it'll get easier. I'll go through these answers to pick the best one, though it looks like multiple could work, so I'm not sure what the standard method of picking is on this site. Thanks again!
It doesn't look like tbloffers\_cat is used in the query, and may contain rows that aren't visible in your result set. To verify this, add `tbloffers_cat.*` to your query and see if the values in those fields differ from row to row. If that's the case, try this: ``` SELECT off.* , resp.tracking_date , resp.paid_date , resp.total_value , resp.total_conversion FROM tbl_offers AS off INNER JOIN tbl_tracking AS tr ON tr.offer_id = off.id INNER JOIN tbl_trackvalue AS resp ON resp.tracking_id = tr.id WHERE off.offer_type = 'cash_offer' AND tr.user_id = '1454' AND resp.total_value !=0 AND resp.payment_status = 'paid' LIMIT 0 , 30 ``` In my opinion, The syntax of inner joins is usually clearer when looking at the query.
While your query should be working as intended, it's using deprecated syntax, I suggest replacing the deprecated implicit joins with standard explicit joins: ``` SELECT off. * , resp.tracking_date, resp.paid_date, resp.total_value, resp.total_conversion FROM tbl_offers AS off JOIN tbl_tracking AS tr ON off.id = tr.offer_id JOIN tbl_trackvalue AS resp ON tr.id = resp.tracking_id JOIN tbloffers_cat AS offcat ON off.id = offcat.offer_id WHERE off.offer_type = 'cash_offer' AND tr.user_id = '1454' AND resp.total_value !=0 AND resp.payment_status = 'paid' LIMIT 0 , 30 ``` As to the multiple records, if the `JOIN` criteria doesn't identify a one to one relationship then you'll get multiple rows where you expected 1. Try selecting fields from all tables in your query to see which table has multiple values being returned. Perhaps there are multiple tracking records with the same offer\_id, but you wouldn't see that in your query because you aren't selecting anything from the tracking table.
mysql query returning the same row multiple times
[ "", "mysql", "sql", "" ]
I'm building a program that stores news headlines for companies and its timestamp from various sources. Let's say the number of company is 1000. It goes like Apple, Google, Microsoft.. etc. So I can think about two options. 1. One table with numerous rows (above code is just an example). ``` CREATE TABLE news ( news_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, company VARCHAR(10) NOT NULL, timestamp TIMESTAMP NOT NULL, source TEXT NOT NULL, content TEXT NOT NULL, ... ) // I also can make company and timestamp as primary keys, and news_id will be unique key.* ``` 2. 1000 Tables ``` CREATE TABLE news_apple // and news_google, news_microsoft, news_...(x 1000) ( news_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, timestamp TIMESTAMP NOT NULL, source TEXT NOT NULL, content TEXT NOT NULL, ... ) ``` Most of the time, I will find the news for the certain company. Let's say there are more than 10000 news for each company. I wonder that if I use a 'WHERE' clause in the first option, it would be slower than the second option. Which one is more efficient in terms of performance and why?
Relational databases are designed to store many rows per table. There are a whole bunch of mechanisms to facilitate large tables, such as: * Indexes on any combination of fields to speed searches * Page caching so commonly used pages remain in memory * Vertical partitioning (columnar databases) to further speed requests * Advanced algorithms such as hash joins and group bys (at least in databases other than MySQL) * Use of multiple processors and disks to process queries There is one thing that is more difficult when putting data in a single table, and that is security. And, in fact, in some circumstances this is a primary concern and basically requires that the data go in separate tables. Those applications are few and far between. To give an example of how bad storing data in multiple tables could be, imagine that in your system you have one record per company and you store it in a table. This record stores information about the company -- something like name, address, whatever. Call it 100 bytes of information. In your schema there is a separate table for each "company", so that is one row per table. That record will reside on one data page. A data page could be 16 kbytes, so you are wasting about 15.9 kbytes to store this data. Storing 1000 such records occupies 16 Mbytes instead of about 7 pages worth (112 Kbytes). That can be a significant performance hit. In addition, with multiple tables you take on the challenges of maintaining all the tables and ensuring the correctness of data across them. Maintenance updates need to be applied to thousands of tables, instead of a handful.
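To make the single-table recommendation above concrete, here is a small sketch using Python's built-in sqlite3 module (an illustration only — the question is about MySQL, and the table name, index name, and data here are invented). One table holds every company's news, and an index on `company` serves the per-company lookups:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE news (
    news_id INTEGER PRIMARY KEY AUTOINCREMENT,
    company TEXT NOT NULL,
    content TEXT NOT NULL)""")
conn.execute("CREATE INDEX idx_news_company ON news(company)")
conn.executemany("INSERT INTO news (company, content) VALUES (?, ?)",
                 [("Apple", "a1"), ("Google", "g1"), ("Apple", "a2")])

# One table, filtered by an indexed column -- no need for a table per company.
apple = conn.execute(
    "SELECT content FROM news WHERE company = ? ORDER BY news_id", ("Apple",)
).fetchall()
print(apple)  # [('a1',), ('a2',)]

# The query planner confirms the per-company lookup uses the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM news WHERE company = ?", ("Apple",)
).fetchall()
```

`EXPLAIN QUERY PLAN` is SQLite-specific; MySQL's `EXPLAIN` plays the same role when checking that the `WHERE` clause hits an index.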
My experience with Many tables, currently working for a company that has 6000+ tables and growing, is the difficulty in managing the many tables. The upside is you reduce the amount of rows in a table ( I have 30 million rows on other companies) which is great - but only if you have a manageable solution for all your tables. For example: the 6000+ tables are actually hotel's with their room number Id's (data\_room#\_hotelID) so I can write stored procedures, bash or php scripts to handle them. on the flipside, many rows is simpler because you have less tables to deal with, but then you need to start being creative on how to pull the data out. You will definitely want to add an index on your SQL WHERE clauses but that's obvious. Options I've used in the past to manage large tables are: 1) insert the data with auto-increment in a way that makes sense (like alphabetically) - but seeing that your data changes alot, it's not a good option 2) Loading tables into Memory - I don't know where you are hosting this, but I use MySQL in Memory. I only know of two ways to do this: One is by setting it up on MySQL itself and the other is to have a memory drive (use some of the computer's ram) and move MySQL and it's data to the mounted drive (I've also used Flash drives). You're also going to want to factor in, whether you are INSERTING or SELECTING data more often. So to sum up: * If you choose to make many tables, be sure they aren't holding huge amounts of data and that you can manage them all. * If you go with fewer tables and more rows, use Indexes, RAM drive, Caching to speed up the process of getting the data out of large tables. That's my two cents, I hope it helps you!
Many tables or rows, which one is more efficient in SQL?
[ "", "mysql", "sql", "database", "schema", "relation", "" ]
I am trying to execute a query in Oracle database. The query has case construction in where clause. ``` where sale.op = 2 and case when (:stat = -11) then (sale.type_id = 27 or sale.type_id = 28 or sale.type_id = 29) else (sale.type_id = 27) end ``` But I am getting the following error: > ORA-00907: missing right parenthesis. In Derby SQL this works. Does anybody can help me? Thank you.
``` where sale.op = 2 and ( (:stat = -11 and sale.type_id in (27, 28, 29)) or (:stat <> -11 and sale.type_id = 27) ) ```
try this query: ``` where sale.op = 2 and ((:stat = -11 and (sale.type_id = 27 or sale.type_id = 28 or sale.type_id = 29)) or (:stat <> -11 and sale.type_id = 27)) ```
Oracle - Case in where clause
[ "", "sql", "oracle", "case", "where-clause", "" ]
I have a revision database table with multiple revisions for pages(pageId) under a website(webId).I want to write a query to get only the latest revisions of a website.For example the table below shows two files with id 1 and 2,now i want to get the revision number 101 and 104 as they are latest for their respective pages.I am using **mysql** ``` +----+---------+-------------+---------------------+-------+---------------------+ | id | userId | webId |revisionCont | pageId| dateUpdated | +----+---------+-------------+---------------------+-------+---------------------+ | 100| 1 | 2 | some text | 1 | 2014-01-07 08:00:00 | | 101| 1 | 2 | some text | 1 | 2014-01-07 08:01:00 | | 103| 2 | 2 | some text | 2 | 2014-01-07 08:15:32 | | 104| 2 | 2 | some text | 2 | 2014-01-07 09:10:32 | +----+---------+-------------+---------------------+-------+---------------------+ ``` I am not able to figure out how can I do it?Can someone guide me over it?
To obtain the complete row do: ``` SELECT * FROM revision WHERE Id IN ( SELECT MAX(id) FROM revision GROUP BY pageId ORDER BY dateUpdated DESC ) ```
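The accepted `MAX(id)`-in-subquery pattern can be exercised against the sample rows from the question; here is a sketch with Python's sqlite3 (illustrative only — `MAX(id)` stands in for "latest" because the ids in the sample grow with `dateUpdated`; keying on `MAX(dateUpdated)` would be safer if that assumption doesn't hold):

```python
import sqlite3

# In-memory copy of the revision table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revision (id INTEGER PRIMARY KEY, pageId INTEGER, dateUpdated TEXT)")
conn.executemany("INSERT INTO revision VALUES (?, ?, ?)", [
    (100, 1, "2014-01-07 08:00:00"),
    (101, 1, "2014-01-07 08:01:00"),
    (103, 2, "2014-01-07 08:15:32"),
    (104, 2, "2014-01-07 09:10:32"),
])

# Latest revision per page: the row whose id is the MAX(id) within its pageId group.
rows = conn.execute("""
    SELECT * FROM revision
    WHERE id IN (SELECT MAX(id) FROM revision GROUP BY pageId)
    ORDER BY id
""").fetchall()
print(rows)  # [(101, 1, '2014-01-07 08:01:00'), (104, 2, '2014-01-07 09:10:32')]
```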
try this ``` SELECT max(id) id FROM revision GROUP BY pageId ``` [**DEMO HERE**](http://sqlfiddle.com/#!2/73aff/6) output: ``` ID 104 101 ``` EDIT: if you need to filrter or order the output then just add `ORDER BY dateUpdated DESC`
Getting latest entries from the revision table in mysql
[ "", "mysql", "sql", "database", "" ]
How can I pull only the number from a field and put that value into its own field? For example, if a field1 contains a value of "Name(1234U)". I need an SQL or VBA way to scan that field and pull the number out. So field2 will equal "1234". Any ideas?
It is possible that this or a variation may suit: ``` SELECT t.Field1, Mid([Field1],InStr([field1],"(")+1,4) AS Stripped FROM TheTable As t ``` For example: ``` UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,4); ``` EDIT re comment If the field ends `u)`, that is, alpha bracket, you can say: ``` UPDATE TheTable AS t SET [field2] = Mid([Field1],InStr([field1],"(")+1,Len(Mid([Field1],InStr([field1],"(")))-3) ```
The following VBA function might do the trick: ``` Option Compare Database Option Explicit Public Function RegexReplaceAll( _ OriginalText As Variant, _ Pattern As String, _ ReplaceWith As String) As Variant Dim rtn As Variant Dim objRegExp As Object ' RegExp rtn = Null If Not IsNull(OriginalText) Then Set objRegExp = CreateObject("VBScript.RegExp") ' New RegExp objRegExp.Pattern = Pattern objRegExp.Global = True rtn = objRegExp.Replace(OriginalText, ReplaceWith) Set objRegExp = Nothing End If RegexReplaceAll = rtn End Function ``` Example using the regular expression pattern ``` [^0-9]+ ``` which matches one or more non-digit characters ``` RegexReplaceAll("Name(1234U)","[^0-9]+","") ``` returns ``` 1234 ``` edit: To use this in a query *run from within the Access application itself*, try something like ``` SELECT Field1, RegexReplaceAll(Field1,"[^0-9]+","") AS JustNumbers FROM Table1; ```
How to select only numbers from a text field
[ "", "sql", "ms-access", "ms-access-2007", "vba", "" ]
I'm having a difficult time figuring out why my mysql query is returning the same row five times. Here is the query below. I read on here something about joins and cartisian products. I'm very new to this, and editing this query on my site that someone else initially created. Trying to understand the joins that are used here, but it's confusing me. Any help is greatly appreciated. The goal is to get each row to return only once, not five times. Thanks! ``` SELECT off . * , resp.tracking_date, resp.paid_date, resp.total_value, resp.total_conversion FROM tbl_offers AS off, tbl_tracking AS tr, tbl_trackvalue AS resp, tbloffers_cat AS offcat WHERE off.id = tr.offer_id AND off.offer_type = 'cash_offer' AND off.id = offcat.offer_id AND tr.id = resp.tracking_id AND tr.user_id = '1454' AND resp.total_value !=0 AND resp.payment_status = 'paid' LIMIT 0 , 30 ``` Wow, thanks for all the help everyone. Looks like a lot of the solutions mentioned here would work. I did get the result I want by adding a group by tracking\_id to the query. Not sure if that's actually the best way to do it or not. I'm just learning to program while editing an old site, which I initially had someone else build. Lot's of fun. The whole join thing is one of the more complicated things I've dealt with so far, but I'm sure it'll get easier. I'll go through these answers to pick the best one, though it looks like multiple could work, so I'm not sure what the standard method of picking is on this site. Thanks again!
When you are SELECTing records from a single table, you can use anything from those records for sorting. However, when you are combining multiple queries with UNION, the sorting is done on the entire result, so you must use some column from the result for ordering. In this case, this means that you must move the calculation into the query itself: ``` SELECT t, s, jy, descriptions, CASE ... END AS ordernr FROM ... UNION ALL SELECT t, s, jy, descriptions, CASE ... END AS ordernr FROM ... ORDER BY ordernr, descriptions ```
See [this post](https://stackoverflow.com/questions/4715820/how-to-order-by-with-union) and try specifying all columns in your select statements.
SQL request with case in order by throws "(1) 1st ORDER BY term does not match any column in the result set"
[ "", "android", "sql", "sqlite", "" ]
I have a query that SELECTs from multiple databases where column 'auth' = mad: ``` $result = mysql_query("SELECT * FROM test.1 WHERE auth = 'mad' LIMIT 1 UNION SELECT * FROM test.2 WHERE auth = 'mad' LIMIT 1 UNION SELECT * FROM test.3 WHERE auth = 'mad' LIMIT 1"); ``` How do I mark or categorize the result, obtained from test.1; test.2 or test.3 ? Thanks a lot.
Add the table name to results: ``` $result = mysql_query("SELECT *, 'test.1' FROM test.1 WHERE auth = 'mad' LIMIT 1 UNION SELECT *, 'test.2' FROM test.2 WHERE auth = 'mad' LIMIT 1 UNION SELECT *, 'test.3' FROM test.3 WHERE auth = 'mad' LIMIT 1"); ```
``` $result = mysql_query("SELECT *, 'test1' as Source FROM test.1 WHERE auth = 'mad' LIMIT 1 UNION SELECT *, 'test2' FROM test.2 WHERE auth = 'mad' LIMIT 1 UNION SELECT *, 'test3' FROM test.3 WHERE auth = 'mad' LIMIT 1"); ```
"Categorize" or "Mark" SELECT query
[ "", "mysql", "sql", "select", "" ]
I try to join two tables. One is table filled with totally unique variables `debtor`. No duplicates of the debtor id can exist. Now I wish to join it with the `items` table where the `debtor.debtor` id can appear multiple times. What I wish to do is to only join if the item `'20004'` does not exist in the order history I have the following sql fiddle <http://sqlfiddle.com/#!2/a0826/1/0> ``` create table items ( debtor int, item int, qty int, invoice int ); create table debtor ( debtor int, name varchar(255) ); insert into debtor(debtor, name) VALUES ('1000','Mister blue'), ('1001','Mister Orange'), ('1002','Mister Red'), ('1003','Mister Yellow'), ('1004','Mister Green'); INSERT INTO items (debtor, item,qty, invoice) values ('1000','20001','1','1'), ('1000','20002','1','1'), ('1000','20003','1','1'), ('1000','20004','1','1'), ('1000','20005','1','1'), ('1001','20001','1','2'), ('1001','20002','1','2'), ('1001','20003','1','2'), ('1001','20005','1','2'), ('1002','20001','1','3'), ('1002','20002','1','3'), ('1002','20003','1','3'), ('1002','20004','1','3'), ('1002','20005','1','3'), ('1002','20006','1','3'), ('1003','20001','1','4'), ('1003','20002','1','4'), ('1003','20003','2','4'), ('1003','20004','1','4'), ('1003','20005','1','4'); ``` And I do the following query: ``` SELECT * FROM debtor JOIN items on debtor.debtor = items.debtor AND items.item != '20004' ``` But I still get all the invoice lines that are not `'20004'`. How can I perform the query in a way that the join will not performed if the value `'20004'` does not appear? My preferred answer that should come from the query should be that only **mr Orange** should appear. Who can help me?
Try this: ``` SELECT * FROM debtor d JOIN items i on d.debtor = i.debtor WHERE NOT EXISTS (SELECT debtor FROM items i2 WHERE item = '20004' AND i2.debtor = i.debtor ) ``` Update fiddle: <http://sqlfiddle.com/#!2/a0826/12/0>
Or this: ``` select * from debtor d join items i on d.debtor = i.debtor where d.debtor not in (select debtor from items where item = '20004'); ```
Only join tables if certain value does not exist among multiple rows in other table
[ "", "sql", "sql-server", "join", "" ]
I am making a small project in which I am using mysql for my database, in which i need to input phone numbers with std codes. The problem is in inserting proper std codes. Suppose the std code is `0123` but while running the query it is only inserting `123`. If it is 0275, it is only inserting 275. but if it is `2210` it is properly inserting `2210` i tried making it `"int", "bigint", "varchar"` but all in vain.. plz sggst to solve it..
Try with this.It should work `varchar(5)`
I usually store phone numbers as a BIGINT in E164 format. E164 never start with a 0, with the first few digits being the country code. ``` +441234567890 +44 (0)1234 567890 01234 567890 ``` etc. would be stored as `441234567890`. > But my suggestion is you can use a **varchar** for a telephone number. > You do not need an int because you are not going to perform arithmetic > on the numbers.
How do I insert proper std code in mysql
[ "", "mysql", "sql", "" ]
My colleague has this in a procedure: ``` BEGIN TRAN --Some deletes and inserts IF(@@error <> 0) BEGIN ROLLBACK TRAN RETURN END COMMIT TRAN ``` I have another in a stored procedure that simply is: ``` BEGIN TRANSACTION --Some deltes and inserts COMMIT TRANSACTION ``` I have tested and found that my procedure always rolls everything back during an error (tested for example changing a column data type etc.) without explicitly coding a rollback. Also I have read that using `@@error` condition is outdated for SQL Server 2005 and above. What would you say is the correct way of doing a transaction for SQL Server 2008 R2 and above? Thanks
**YES**, the `ROLLBACK` is necessary! I would do a stored procedure based on this template for SQL Server 2005 and newer: ``` BEGIN TRANSACTION BEGIN TRY -- put your T-SQL commands here -- if successful - COMMIT the work COMMIT TRANSACTION END TRY BEGIN CATCH -- handle the error case (here by displaying the error) SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_SEVERITY() AS ErrorSeverity, ERROR_STATE() AS ErrorState, ERROR_PROCEDURE() AS ErrorProcedure, ERROR_LINE() AS ErrorLine, ERROR_MESSAGE() AS ErrorMessage -- in case of an error, ROLLBACK the transaction ROLLBACK TRANSACTION -- if you want to log this error info into an error table - do it here -- *AFTER* the ROLLBACK END CATCH ```
There a problem with the @@ERROR variable. It's a global variable thus if you are doing something like: ``` BEGIN TRAN --inserts --deletes --updates -- last operation IF(@@error <> 0) BEGIN ROLLBACK TRAN RETURN END COMMIT TRAN ``` @@error contains the result for the last operation only. Thus this piece of code can mask error in previous operations. My advice is, if you can manage transaction at application level, do it at application level. Handling errors at server side is not for faint hearts and it doesn't improves your application overral robusteness.
SQL Server 2008 R2 Transaction is @@error necessary and is ROLLBACK TRANS necessary
[ "", "sql", "sql-server-2008", "transactions", "sql-server-2008-r2", "" ]
I have Table Fields (Columns) like ``` ID DATE ``` I want to insert the rows in above table in between date ranges For Example: If I given the date Range like 23/1/2014 to 25/1/2014 then row insertion result should be like this ``` ID | DATES 1 | 23/1/2014 2 | 24/1/2014 3 | 25/1/2014 ``` **please provide me the solution in the form of Query not Stored procedure/Function**
If you want to use generated range in one query then look to the solution of your problem [How to get list of dates between two dates in mysql select query](https://stackoverflow.com/questions/9295616/).
Try this equivalent in MySQL. This works in SQL Serever but not tried in MySQL ``` Declare @date table(autoid int IDENTITY(1, 1),d date) Declare @d datetime set @d='20140123' While @d<='20140125' Begin Insert into @date values (@d) set @d=@d+1 End Select autoid ,d as DateCol from @date ```
How i can Insert Dates in the table from given date Ranges
[ "", "mysql", "sql", "date", "" ]
My Table data looks like ``` Col1 | Col2 | Col3 1 | NULL | NULL NULL | 2 | NULL NULL | NULL | 3 ``` It is given that for any column there will be only entry. This means that, in the above data, if row1 has value for Col1, then there will be no row with value for Col1. Similarly, if row1 has value for Col1, it will not have value for any other column. I want to write a query, so that I get only one row out for entire data (leaving NULL values). ie. ``` Col1 | Col2 | Col3 1 | 2 | 3 ```
The easiest way to do this is using aggregation: ``` select max(col1) as col1, max(col2) as col2, max(col3) as col3 from t; ```
Assuming the table is called tab the following query will work if there are only 3 columns: ``` select t1.Col1, t2.Col2, t3.Col3 from tab t1, tab t2, tab t3 where t1.Col1 is not null and t2.Col2 is not null and t3.Col3 is not null ``` The problem is the query will have to alias the table for each additional column. It may not be perfect, but it is a solution.
SQL Multiple rows into one row
[ "", "sql", "select", "" ]
The application is datasnap (with sqlite database). I run the following query against two tables (LOKACIJE and UPORABNIKI): ``` procedure TForm4.FormShow(Sender: TObject); begin ClientDataSet1.Close; ClientDataSet1.CommandText :=''; ClientDataSet1.CommandText := 'select lokacije.[HOTEL_ID],'+ ' lokacije.[NAZIV],'+ ' uporabniki.[LOKACIJA_ID],'+ ' uporabniki.[RESORT_ID],'+ ' uporabniki.[HOTEL_ID],'+ ' uporabniki.[UPORABNIK],'+ ' uporabniki.[GESLO],'+ ' uporabniki.[PRAVICE]'+ ' from UPORABNIKI'+ ' inner join LOKACIJE on uporabniki.lokacija_id=lokacije.lokacija_id '+ ' where lokacije.[NAZIV] = :@NAZIV'+ ' order by Uporabniki.[UPORABNIK]'; ClientDataSet1.Params.ParamByName('@NAZIV').Value:= '' + Form2.AdvOfficeStatusBar1.Panels[3].Text + '' ; ClientDataSet1.Open; end; ``` The query runs fine and gives me the desired results. However I would like to be able to edit and save the edited (or added) results of this query. The table that I want to update (or add the new record) is the UPORABNIKI. I do not need to write anything to the LOKACIJE table. How am I to accomplish this ? Apart from saving new record I would like the query to fill automatically the values LOKACIJA\_ID,RESORT\_ID,HOTEL\_ID as they are from the same table, when I click in the navigator the button 'insert'. UPORABNIKI is translated USERS table. edit : Inverted the query as suggested
I believe this is a case in which `TDatasetProvider` is unable to produce the right command to update the tables involved. In this cases, what I do is to add a handler to the `TDatasetProvider.BeforeUpdateRecord` event. This event will let you handle each operation worked on the dataset and produce the needed SQL statements to propertly persist those operations in the data server. You will have to write the UPDATE/DELETE/INSERT statements yourself, but you will also have absolute power about how the tables are updated. That´s why I always use this event and never rely on `TDatasetProvider` intrinsec update process.
See the `OnGetTableName` event on the `TDatasetProvider`. Also, I believe it would be better if you invert your query, i.e., use `... FROM UPORABNKI inner join LOKACIJE...`
Delphi update a joined query
[ "", "sql", "sqlite", "delphi", "delphi-xe4", "" ]
My DB has table ip(address, count) and I want to update the 'count' value if address is found in DB, else add a new record. I do this: ``` INSERT INTO ip(address,count) VALUES('0.0.0.1', 3) ON DUPLICATE KEY UPDATE count=count+VALUES(count); ``` But as a result a have this table: ``` +----+---------+-------+ | id | address | count | +----+---------+-------+ | 1 | 0.0.0.1 | 3 | | 2 | 0.0.0.1 | 3 | | 3 | 0.0.0.1 | 3 | +----+---------+-------+ `id` int(50) NOT NULL AUTO_INCREMENT, `address` varchar(40) DEFAULT NULL, `count` varchar(10) DEFAULT NULL, PRIMARY KEY (`id`) ``` Why doesn't it sum 'count' , but makes a new record always ? How do I change primary key ?
create your table using following query ``` CREATE TABLE `ip` ( `id` INT(50) NOT NULL AUTO_INCREMENT, `address` VARCHAR(40) NULL DEFAULT NULL, `count` VARCHAR(10) NULL DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE INDEX `address` (`address`) ) ``` Then use following query to insert ``` INSERT INTO ip(address,`count`) VALUES('0.0.0.1', 3) ON DUPLICATE KEY UPDATE `count`=`count`+1; ``` In this case if table already contain that ip, then count will be incremented by one. Please note that column `address` should be either `UNIQUE index` or `PRIMARY KEY`, otherwise this wont work. In my create table query i set `address` as unique.
Have you actually created the unique key on the address column? If you haven't then the ON DUPLICATE KEY UPDATE clause will never run and so rows will keep being created. Could you update your question to include the schema for the table?
Mysql update if one of fields is a duplicate
[ "", "mysql", "sql", "database", "" ]
SQL developers, I have a badly planned database as task to learn a lot about SQL Server 2012. SO, there is the table `Elem`: ``` +-----------+----+---+----------+------------+ |VERSION(PK)|NAME|KEY|PARENT_KEY|DIST_KEY(FK)| +-----------+----+---+----------+------------+ |1 |a |12 |NULL |1 | +-----------+----+---+----------+------------+ |2 |b |13 |12 |1 | +-----------+----+---+----------+------------+ |3 |c |14 |13 |1 | +-----------+----+---+----------+------------+ |4 |d |15 |12 |1 | +-----------+----+---+----------+------------+ |5 |e |16 |NULL |1 | +-----------+----+---+----------+------------+ |6 |e |17 |NULL |2 | +-----------+----+---+----------+------------+ ``` After update the row I need to check parent key of element to not allow element to be self-granny or something.. And when I delete the row I need to delete all children and children of children, etc. Questions are: 1. How can i select all "parent + grandparent + etc" of one element of DIST? 2. How can i selects all "sons + grandsons + etc" of one element of DIST? I read about solutions with CTE, but I have no root of elements and I can't even understand how I can use CTE then. Please, help! Thanks.
I have met this problem,I resolved problem by this way ``` --all "parent + grandparent + etc" @childID Replaced with the ID you need with tbParent as ( select * from Elem where [KEY]=@childID union all select Elem.* from Elem join tbParent on Elem.[KEY]=tbParent.PARENT_KEY ) SELECT * FROM tbParent --all "sons + grandsons + etc" @parentID Replaced with the ID you need with tbsons as ( select * from Elem where [KEY]=@parentID union all select Elem.* from Elem join tbsons on Elem.PARENT_KEY=tbsons.[KEY] ) SELECT * FROM tbsons ``` PS.My English is not good.
here is a recursive query giving you both all ancestors and all descendants of an element. Use these together or separate according to the situation. Replace the where clauses to get the desired record. In this example I am looking for key 13 (this is the element with name = b) and find its ancestor 12/a and its descendant 14/c. ``` with all_ancestors(relation, version, name, elem_key, parent_key, dist_key) as ( -- the record itself select 'self ' as relation, self.version, self.name, self.elem_key, self.parent_key, self.dist_key from elem self where elem_key = 13 union all -- all its ancestors found recursively select 'ancestor ' as relation, parent.version, parent.name, parent.elem_key, parent.parent_key, parent.dist_key from elem parent join all_ancestors child on parent.elem_key = child.parent_key ) , all_descendants(relation, version, name, elem_key, parent_key, dist_key) as ( -- the record itself select 'self ' as relation, self.version, self.name, self.elem_key, self.parent_key, self.dist_key from elem self where elem_key = 13 union all -- all its descendants found recursively select 'descendant' as relation, child.version, child.name, child.elem_key, child.parent_key, child.dist_key from elem child join all_descendants parent on parent.elem_key = child.parent_key ) select * from all_ancestors union select * from all_descendants order by elem_key ; ``` Here is the SQL fiddle: <http://sqlfiddle.com/#!6/617ee/28>.
Select all parents or children in same table relation SQL Server
[ "", "sql", "sql-server", "t-sql", "select", "sql-server-2012", "" ]
my problem is this: I have a table named ``` Doctor(id, name, department) ``` and another table named ``` department(id, name). ``` a Doctor is associated with a department (only one department, not more) I have to do a query returning the department with the maximum number of doctors associated with it. I am not sure how to proceed, I feel like I need to use a nested query but I just started and I'm really bad at this. I think it should be something like this, but again I'm not really sure and I can't figure out what to put in the nested query: ``` SELECT department.id FROM (SELECT FROM WHERE) , department d, doctor doc WHERE doc.id = d.id ```
A common approach to the "Find ABC with the maximum number of XYZs" problem in SQL is as follows: * Query for a list of ABCs that includes each ABC's count of XYZs * Order the list in descending order according to the count of XYZs * Limit the result to a single item; that would be the top item that you need. In your case, you can do it like this (I am assuming MySQL syntax for taking the top row): ``` SELECT * FROM department dp ORDER BY (SELECT COUNT(*) FROM doctor d WHERE d.department_id=dp.id) DESC LIMIT 1 ```
You can use `Group BY` ``` Select top (1) department.id,count(Doctor.*) as numberofDocs from department inner join Doctor on Doctor.id = department.id Group by department.id Order by count(Doctor.*) desc ```
SQL Maximum number of doctors in a department
[ "", "sql", "t-sql", "" ]
I have a table where I am storing timespan data. the table has a schema similar to: ``` ID INT NOT NULL IDENTITY(1,1) RecordID INT NOT NULL StartDate DATE NOT NULL EndDate DATE NULL ``` And I am trying to work out the start and end dates for each record id, so the minimum StartDate and maximum EndDate. StartDate is not nullable so I don't need to worry about this but I need the MAX(EndDate) to signify that this is currently a running timespan. It is important that I maintain the NULL value of the EndDate and treat this as the maximum value. The most simple attempt (below) doesn't work highlighting the problem that MIN and MAX will ignore NULLS (source: <http://technet.microsoft.com/en-us/library/ms179916.aspx>). ``` SELECT recordid, MIN(startdate), MAX(enddate) FROM tmp GROUP BY recordid ``` I have created an SQL Fiddle with the basic setup done. <http://sqlfiddle.com/#!3/b0a75> How can I bend SQL Server 2008 to my will to produce the following result from the data given in the SQLFiddle? ``` RecordId Start End 1 2009-06-19 NULL 2 2012-05-06 NULL 3 2013-01-25 NULL 4 2004-05-06 2009-12-01 ```
It's a bit ugly but because the `NULL`s have a special meaning to you, this is the cleanest way I can think to do it: ``` SELECT recordid, MIN(startdate), CASE WHEN MAX(CASE WHEN enddate IS NULL THEN 1 ELSE 0 END) = 0 THEN MAX(enddate) END FROM tmp GROUP BY recordid ``` That is, if any row has a `NULL`, we want to force that to be the answer. Only if no rows contain a `NULL` should we return the `MIN` (or `MAX`).
The effect you want is to treat the NULL as the largest possible date then replace it with NULL again upon completion: ``` SELECT RecordId, MIN(StartDate), NULLIF(MAX(COALESCE(EndDate,'9999-12-31')),'9999-12-31') FROM tmp GROUP BY RecordId ``` Per your fiddle this will return the exact results you specify under all conditions.
How can I include null values in a MIN or MAX?
[ "", "sql", "sql-server", "t-sql", "" ]
Simple enough: I can't figure out how to add (that's +) an integer from a textbox to the integer in the SQL field. So for example, the SQL field may have '10' in it and the textbox may have '5' in it. I want to add these numbers together to store '15' without having to download the SQL table. The textbox that contains the integer to be added to the SQL integer is `tranamount.Text` and the SQL column in the SQL table is @ugpoints. Please note: without the '+' (which is in the below code and is admittedly wrong), the value of `tranamount.Text` is added to the table without an issue, but it simply replaces the original value, meaning the end result would be '5' in the SQL field. What would be the proper way to structure this? I've tried the below code, but that clearly doesn't work. ``` cmd = New SqlCommand("UPDATE PersonsA SET U_G_Studio=@ugpoints WHERE Members_ID=@recevierID", con) cmd.Parameters.AddWithValue("@recevierID", tranmemberID.Text) cmd.Parameters.AddWithValue("@ugpoints", + tranamount.Text) '<--- Value to add. cmd.ExecuteNonQuery() ``` Newbie question, I know; I'm new to SQL in VB.
Take the current value of the `U_G_Studio` field, add the value of the parameter and reassign to `U_G_Studio`, but keep in mind that you need to pass the value as an integer because otherwise the `AddWithValue` will pass a string and you get conversion errors coming from the db. ``` cmd = New SqlCommand("UPDATE PersonsA SET U_G_Studio=U_G_Studio + @ugpoints " & "WHERE Members_ID=@recevierID", con) cmd.Parameters.AddWithValue("@recevierID", tranmemberID.Text) cmd.Parameters.AddWithValue("@ugpoints", Convert.ToInt32(tranamount.Text)) cmd.ExecuteNonQuery() ```
You have to use the correct sql: ``` Dim sql = "UPDATE PersonsA SET U_G_Studio=U_G_Studio + @ugpoints WHERE Members_ID=@recevierID" ``` Also use the correct type with `AddWithValue`: ``` Using cmd = New SqlCommand(sql, con) ' use the using-statement to dispose everything that implements IDisposable, so also the connection ' cmd.Parameters.AddWithValue("@ugpoints", Int32.Parse(tranamount.Text)) ' .... ' End Using ```
SQL Update Column By Adding A Number To Current Int
[ "", "sql", "sql-server", "vb.net", "sqlcommand", "" ]
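The "add to the current value" update from the answers, sketched with Python's sqlite3 module instead of SqlCommand. The `?` placeholders stand in for the `@`-named parameters; the row contents are made up, and the SQL shape (`U_G_Studio = U_G_Studio + ?`) is the point.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PersonsA (Members_ID INTEGER PRIMARY KEY, U_G_Studio INTEGER)")
conn.execute("INSERT INTO PersonsA VALUES (7, 10)")

amount_text = "5"  # what the textbox would hold
conn.execute(
    "UPDATE PersonsA SET U_G_Studio = U_G_Studio + ? WHERE Members_ID = ?",
    (int(amount_text), 7),  # convert to an integer before binding, as the answers advise
)
new_value = conn.execute(
    "SELECT U_G_Studio FROM PersonsA WHERE Members_ID = 7").fetchone()[0]
```

Starting from 10 and adding 5 leaves 15 in the column instead of overwriting it.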
For compound keys in MySQL, does the order of the columns matter for ensuring the uniqueness of the rows? E.g. ``` CREATE TABLE test ( A INT NOT NULL, B INT NOT NULL, PRIMARY KEY (A, B) ); ``` Now, assuming I already have a row that contains the values A = 1, B = 2, will MySQL refuse to insert a row with A = 2, B = 1? I ask this because I need a solution that uses a compound key & ignores the order of the values.
The ordering in an index matters. As a reminder for why the order matters, consider what happens if the types are incompatible -- say `date` and `varchar(255)`. The values are not interchangeable. If you want uniqueness for both values, then you are going to need to add a trigger. The trigger can probably implement a much simpler condition, which is the requirement that `A` be less than `B`. This, in combination with the primary key will guarantee uniques across the two values. You can *express* this constraint in MySQL: ``` CONSTRAINT CHECK (A < B) ``` But, alas, MySQL will parse the code, but not execute the check. Instead, you can add before update and before insert triggers to put the smallest value in `A`. Here is an example: ``` CREATE TRIGGER table_beforeinsert BEFORE INSERT ON table FOR EACH ROW BEGIN declare xx int; if (NEW.A > NEW.B) then set xx := NEW.A; set NEW.A = NEW.B; set NEW.B = xx; endif; END ```
No, it does not matter for uniqueness; it is only important for sorting and access time.
MySQL Compound Primary Key Column-Order-Insensitivity
[ "", "mysql", "sql", "" ]
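The accepted answer's trigger normalizes `(A, B)` so the smaller value always lands in `A`. As an illustrative alternative only: SQLite (unlike MySQL 5.x) supports unique indexes on expressions, so the same order-insensitive uniqueness the trigger emulates can be enforced directly on `MIN(A, B), MAX(A, B)`. Everything here is a sketch of the idea, not the answer's MySQL code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (A INT NOT NULL, B INT NOT NULL)")
# Unique over the normalized pair, regardless of which column holds which value.
conn.execute("CREATE UNIQUE INDEX ux_pair ON test (MIN(A, B), MAX(A, B))")

conn.execute("INSERT INTO test VALUES (1, 2)")
try:
    conn.execute("INSERT INTO test VALUES (2, 1)")  # same pair, swapped
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The second insert fails because both rows normalize to the key (1, 2).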
I'm working on an Oracle database with an error made by a user. The issue is that a number of person records were moved into a different "round". Each round has "episodes"; a wrong "round" means all the episode processing has been affected (episodes skipped over). These users won't receive mails they were supposed to receive as a result of missed "episodes". I have put together a query that identifies all the records that have been mistakenly updated. I need a way to modify the query to help find **all tables** that have been wrongly moved into "round 2". (All the tables I need to identify are ones featuring the "round_no" value.) EDIT: There are over 70 tables with a "ROUND_NO" column; I need to identify only the ones in which these person records are found. **I also need to then take this data and return it back to round 1, from the incorrect round 2.** Here is the query that identifies persons that have been "skipped" into round 2 in error: ``` SELECT p.person_id , p.name , ep2.open_date , ( SELECT pr1.open_date FROM Person_ep ep1 WHERE ep1.person_id = ep2.person_id AND er1.round_no = 1 /* SOMETHING IS MISSING WHERE, WHERE IS er1 defined */ ) r1epiopen /* Round 1 episode open date */ FROM person p join region r on r.region_code = p.region_code and r.location_id = 50 join Person_ep er2 ON er2.person_id = p.person_id AND er2.round_no = 2 ORDER BY p.person_id ``` Using SQL Developer 3.2.20.09 on an Oracle 11G RDBMS.
Sorry to see this post this late... Hope it's not too late... I suppose you are using Oracle 10+, and you know approximately the hour of the crime (!). I see 2 possibilities: 1) Use the Log Miner to review the executed SQL: <http://docs.oracle.com/cd/B19306_01/server.102/b14215/logminer.htm> 2) Use a flashback query to review the data of a table in the past. But for this one you need to test it on every suspected table (70+) :( <http://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_flashback.htm#ADFNS01001> On a suspected table you could run this kind of SQL to see if an update occurred in the timeframe: ``` SELECT versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation, description FROM my_table VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2014-01-29 14:59:08', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2014-01-29 14:59:36', 'YYYY-MM-DD HH24:MI:SS') WHERE id = 1; ``` I have no practical experience using Log Miner but I think it would be the best solution, especially if you have archive log activated :D You can access the data values of the affected table before the update (if you know the time of the update) using a query like this one: ``` SELECT COUNT(*) FROM myTable AS OF TIMESTAMP TO_TIMESTAMP('2014-01-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS'); ``` Of course, the data will be available only if it is still available (retention in the undo tablespace). You could then create a temp table with the data from before the update: ``` create table tempTableA as SELECT * FROM myTable AS OF TIMESTAMP TO_TIMESTAMP('2014-01-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS'); ``` Then update your table with values coming from tempTableA.
If you want to find all tables with column "round\_no" you probably should use this query ``` select table_name from all_tab_columns where column_name='round_no' ```
Find the tables affected by user error & reverse the mistake
[ "", "sql", "oracle11g", "" ]
I am using the Select and Right Join SQL functions to call all the data into one table. Data being called will only populate if ans\_primary = '1' on the Assignment table. If asn\_primary = '1' then it will join all the below columns on att\_id = att\_id for both tables. The two tables and the columns used are below: Assignment (att\_id, itm\_id, asn\_primary) Attachment (att\_id, att\_name, att\_type, att\_path, att\_path2, att\_path3) ``` select assignment.itm_id, attachment.att_name, attachment.att_type, attachment.att_path, attachment.att_path2, attachment.att_path3 from assignment right join attachment on assignment.att_id=attachment.att_id where assignment.asn_primary = '1' ``` I need to be able to update all the fields in the att\_name column after the call has been made. I am not sure how to update columns after a Join call is used. The SQL I need to run after the info has been called/joined is: ``` Update attachment set att_name = att_name + '.tif' where att_path3 like '%.tif%' ```
The syntax for `join` in an `update` in SQL Server is: ``` Update att set att_name = att_name + '.tif' from assignment a join attachment att on a.att_id = att.att_id where a.asn_primary = '1' and att.att_path3 like '%.tif%'; ``` The `right outer join` doesn't seem appropriate, because you are filtering on a field from `assignment`.
What you're looking for is referred to as a side-effecting query. You'll want to follow the basic structure as below. Be careful of the results. If a row shows up three times, it will be updated three times. This can significantly affect server performance and the resulting logs. I usually embed a 'SELECT *' to allow me to view the results prior to executing any updates. ``` UPDATE T1 SET T1.Column1 = 'New Data' --SELECT * FROM dbo.Table1 AS T1 INNER JOIN dbo.Table2 AS T2 ON T2.Id = T1.Id WHERE T1.Column1 <> 'New Data'; ```
Update a Column after a SQL Join
[ "", "sql", "sql-server-2008", "sql-update", "right-join", "" ]
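The accepted answer's `UPDATE ... FROM ... JOIN` form is T-SQL-specific. A portable sketch of the same effect (append '.tif' only where a matching primary assignment row exists) can use a correlated `EXISTS` instead; it is shown here against SQLite with invented rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assignment (att_id INT, itm_id INT, asn_primary TEXT);
    CREATE TABLE attachment (att_id INT, att_name TEXT, att_path3 TEXT);
    INSERT INTO assignment VALUES (1, 100, '1'), (2, 200, '0');
    INSERT INTO attachment VALUES (1, 'scan', 'x.tif'), (2, 'doc', 'y.tif');
""")
# Append the suffix only for attachments whose assignment is primary.
conn.execute("""
    UPDATE attachment
    SET att_name = att_name || '.tif'
    WHERE att_path3 LIKE '%.tif%'
      AND EXISTS (SELECT 1 FROM assignment a
                  WHERE a.att_id = attachment.att_id
                    AND a.asn_primary = '1')
""")
names = [r[0] for r in conn.execute(
    "SELECT att_name FROM attachment ORDER BY att_id")]
```

Only the row backed by a primary assignment is renamed; note that SQLite concatenates with `||` where T-SQL uses `+`.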
This query is working fine: ``` UPDATE data SET unit_id='a3110a89' WHERE unit_id='7d18289f'; ``` Now, I need to run this query over 30 times, so I made a CSV file and imported it into the DB with this command: ``` COPY mytable FROM 'D:/test.csv' WITH CSV HEADER DELIMITER AS ',' ``` Now I have a table called "mytable" with 2 columns, OLD and NEW. I want to search the table "data" in column unit_id and, wherever the value equals the value in "mytable.old", replace it with the value in "mytable.new" on the same row. I tried to run this query but I get an error: ``` UPDATE data SET unit_id=(SELECT mytable."old" FROM public.mytable) WHERE unit_id=(SELECT mytable."new" FROM public.mytable) ``` error: ``` more than one row returned by a subquery used as an expression ``` I think I'm just trying to do it in the wrong way... Thanks for the help! By the way, I'm using PostgreSQL.
Your subqueries need to be correlated to the outer update: ``` UPDATE data SET unit_id = (SELECT mytable."new" FROM public.mytable where mytable.old = data.unit_id) WHERE unit_id in (SELECT mytable."old" FROM public.mytable); ``` That is, set the `unit_id` to the "new" value when you find the "old" value in the table.
Can you try like this, ``` UPDATE data A SET A.unit_id=B.old FROM (SELECT mytable."old",mytable."new" FROM public.mytable) B WHERE A.unit_id=B.new ```
SQL update set table if value in table A is equals to value in table B
[ "", "sql", "postgresql", "sql-update", "" ]
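The correlated-subquery UPDATE from the accepted answer, run against SQLite with made-up ids to show the old-to-new mapping in action (the correlation is written against `data.unit_id`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (unit_id TEXT);
    CREATE TABLE mytable (old TEXT, new TEXT);
    INSERT INTO data VALUES ('7d18289f'), ('deadbeef');
    INSERT INTO mytable VALUES ('7d18289f', 'a3110a89');
""")
# Rewrite only the ids present in the mapping table; leave the rest alone.
conn.execute("""
    UPDATE data
    SET unit_id = (SELECT m.new FROM mytable m WHERE m.old = data.unit_id)
    WHERE unit_id IN (SELECT old FROM mytable)
""")
ids = sorted(r[0] for r in conn.execute("SELECT unit_id FROM data"))
```

The `WHERE unit_id IN (...)` filter matters: without it, rows with no mapping would be set to NULL by the subquery.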
I am using Mysql Workbench. I have already made the table. Now I want to add foreign key in a table called `Personal_Details` that key is primary key in `Login` table. But when I am trying to do so, it shows me the following error: ``` ERROR 1005: Can't create table 'shaadiDB.#sql-404_25' (errno: 121) ``` SQL Statement: ``` ALTER TABLE `shaadiDB`.`Personal_Details` ADD CONSTRAINT `Login_Id` FOREIGN KEY (`Login_Id` ) REFERENCES `shaadiDB`.`Login` (`Login_Id` ) ON DELETE NO ACTION ON UPDATE NO ACTION , ADD INDEX `Login_Id` (`Login_Id` ASC) ```
Adding a constraint to a table that already has some data might cause this problem; try to add the constraint without data.
It seems that the table `Personal_Details` has data in which there might be some rows with a Login_Id for which no entry is present in the table `Login`. If this is the case, then the solution would be to move the data to another table, then add the constraint. After adding the constraint you need to add all rows back, one by one.
Error while adding Foreign key
[ "", "mysql", "sql", "foreign-keys", "innodb", "foreign-key-relationship", "" ]
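A minimal sketch of the constraint working once the foreign key targets the referenced table's primary key with a matching type. This is a simplified two-column version of the question's schema, shown in SQLite (where FK enforcement must be switched on explicitly with a PRAGMA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ignores FKs unless enabled
conn.executescript("""
    CREATE TABLE Login (Login_Id INTEGER PRIMARY KEY);
    CREATE TABLE Personal_Details (
        Id INTEGER PRIMARY KEY,
        Login_Id INTEGER REFERENCES Login(Login_Id)  -- references the PK
    );
    INSERT INTO Login VALUES (1);
""")
conn.execute("INSERT INTO Personal_Details VALUES (1, 1)")  # parent exists: OK
try:
    conn.execute("INSERT INTO Personal_Details VALUES (2, 99)")  # no parent row
    violation_caught = False
except sqlite3.IntegrityError:
    violation_caught = True
```

The second insert fails because no `Login` row with id 99 exists, which is exactly the guarantee the question's FK is meant to provide.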
Let's say here is my DB structure: ``` TableA { [ID] INT NOT NULL IDENTITY(1,1) primary key, [Title] varchar (50) NOT NULL, [Details] varchar(50) NOT NULL } TableB: { [ID] INT foreign key references [TableA] ([ID]) [AvailableTime] DATETIME NOT NULL } TableC: { [ID] INT foreign key references [TableA] ([ID]) [AvailableLocation] varchar(50) NOT NULL } ``` I need all the columns from Table A as well as a list of AvailableTimes from Table B for a particular ID, and the result should look like: new object (id, title, details, List availableTime); there could be multiple available times for one ID. Performance-wise and speed-wise, would it be better to do two separate queries to retrieve that information, or to do a join and then loop through the list twice? I.e.: ``` select * from TableA where ID = 1; select * from TableB where ID = 1; select * from TableC where ID = 1; ``` ... OR ``` select * from TableA inner join TableB on TableA.ID = TableB.ID inner join TableC on TableA.ID = TableC.ID where ID = 1 List<DateTime> availableTime = new List<DateTime>(); foreach(DataRow row in resultTable.Rows) { if (!availableTime.Contains((DateTime) row["AvailableTime"])) availableTime.Add((DateTime) row["AvailableTime"]); } foreach(DataRow row in resultTable.Rows) { if (!availableLocation.Contains((string) row["AvailableTime"])) availableLocation.Add((string) row["AvailableLocation"]); } return new object ( id, resultTable.Rows[1], resultTable.Rows[2], availableTime, availableLocation); ``` I am doing this on a server (ASP.NET) and the server is responsible for both running the query and processing the data before it is sent out to the user. Thanks in advance.
You'll need to do processing on the client in either case to get a "tree" out of SQL. SQL returns *tabular* data which is de-normalized. --- > Performance wise and speed wise, would it be better if I do two separate queries to retrieve those information or do a join and then loop through the list twice? *Usually*, doing the join in SQL is faster and easier. Manually looping SQL selects in code is one way to *effectively make the relational algebra (RA) engine useless* as any hash/merge join is turned into "nested loops". Since C# supports such a transformation trivially with LINQ, it is my "first preference" - well, first after using a bonafide LINQ-to-Database provider - to deal with this situation. Take the query (where I've renamed "id" for sanity): ``` SELECT i.item_id, i.title, i.details, t.available_time FROM items i JOIN times t ON t.item_id = i.item_id ``` The joined form will return a result-set of `(item_id, title, details, available_time)`. Now read the query result into a collection of "readItems" such that the type conforms to `IEnumerable<{of the same fields as above}>`, and then rebuild the tree structure from the data. ``` var items = readItems // The group by "re-normalize" on the key and the selector // is still key on RS - as the title/details are FDs on item_id .GroupBy(i => new { i.item_id, i.title, i.details }) .Select(g => new ItemWithTimes { ItemId = g.Key.item_id, Title = g.Key.title, Details = g.Key.details, // capture collected data AvailableTimes = g, }); ``` Of course, if you're using a LINQ-to-SQL/EF provider (instead of the imagined LINQ-to-Objects above) then you can skip manually going through the flat "readItems" result-set. --- There are some exceptions, generally with *hyper de-normalized* result-sets, where multiple queries may be more appropriate - this usually affects *really deep* trees or when "large" columns are duplicated.
Also, some databases like SQL Server support XML, which can deal with hyper de-normalized result-sets (aka trees) without needing to go through the flat tabular result-set. In any case, you'll know you need to do this when the time comes - and keeping a clean DAL (instead of leaking access all over the place) will make changing implementations relatively trivial.
If I understand the question correctly, you don't need a `join` at all: ``` select PropertyA from tableA where id = XXX union all select PropertyC from tableB where id = XXX; ``` If you want to remove duplicates, change `union all` to `union`.
SQL - multiple queries or one join
[ "", "sql", "join", "" ]
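The accepted answer's point - run ONE join, then re-normalize the flat result-set into a tree on the client - can be sketched in Python instead of LINQ. The schema and values below are invented; `itertools.groupby` plays the role of LINQ's `GroupBy` over the `(item_id, title, details)` key.

```python
import sqlite3
from itertools import groupby

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (item_id INT, title TEXT, details TEXT);
    CREATE TABLE times (item_id INT, available_time TEXT);
    INSERT INTO items VALUES (1, 'Room A', 'big');
    INSERT INTO times VALUES (1, '09:00'), (1, '10:00');
""")
# One join; the item columns repeat once per available_time row.
flat = conn.execute("""
    SELECT i.item_id, i.title, i.details, t.available_time
    FROM items i JOIN times t ON t.item_id = i.item_id
    ORDER BY i.item_id
""").fetchall()

# Re-normalize: one object per item, collecting its times.
tree = [
    {"item_id": k[0], "title": k[1], "details": k[2],
     "times": [r[3] for r in rows]}
    for k, rows in groupby(flat, key=lambda r: r[:3])
]
```

`groupby` requires the input sorted by the key, which the `ORDER BY` guarantees here.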
I am trying to `SELECT` rows in a table by applying a filter condition that identifies number-only columns. It is a report-only query, so performance is the least concern. Since I don't have the privilege to compile PL/SQL, I am unable to check with `TO_NUMBER()` and return whether it is numeric or not; I have to achieve it in SQL. Also, the column has values like the following, which have to be treated as numbers. ``` -1.0 -0.1 -.1 +1,2034.89 +00000 1023 ``` After ground-breaking research, I wrote this (hard time): ``` WITH dummy_data AS ( SELECT '-1.0' AS txt FROM dual UNION ALL SELECT '+0.1' FROM dual UNION ALL SELECT '-.1' FROM dual UNION ALL SELECT '+1,2034.89.00' FROM dual UNION ALL SELECT '+1,2034.89' FROM dual UNION ALL SELECT 'Deva +21' FROM dual UNION ALL SELECT '1+1' FROM dual UNION ALL SELECT '1023' FROM dual ) SELECT dummy_data.*, REGEXP_COUNT(txt,'.') FROM dummy_data WHERE REGEXP_LIKE (TRANSLATE(TRIM(txt),'+,-.','0000'),'^[-+]*[[:digit:]]'); ``` I got this: ``` TXT REGEXP_COUNT(TXT,'.') ------------- --------------------- -1.0 4 +0.1 4 -.1 3 +1,2034.89.00 13 /* Should not be returned */ +1,2034.89 10 1+1 3 /* Should not be returned */ 1023 4 7 rows selected. ``` Now I am terribly confused, with 2 questions. 1) I get `+1,2034.89.00` in the result too; I should eliminate it (it has two decimal points). Not just the decimal point: a double occurrence of every other special character (-+,) should be eliminated. 2) To make it uglier, I planned to do a `REGEXP_COUNT('.') <= 1`, but it is not returning what I expect; while selecting it, I see strange values returned. Can someone help me frame the `REGEXP` for avoiding double occurrences of `('.','+','-')`?
I just tried to correct your mistakes and made the SQL as simple as possible. But it's not neat! ``` WITH dummy_data AS ( SELECT '-1.0' AS txt FROM dual UNION ALL SELECT '+.0' FROM dual UNION ALL SELECT '-.1' FROM dual UNION ALL SELECT '+1,2034.89.0' FROM dual UNION ALL SELECT '+1,2034.89' FROM dual UNION ALL SELECT 'Deva +21' FROM dual UNION ALL SELECT 'DeVA 234 Deva' FROM dual UNION ALL SELECT '1023' FROM dual ) SELECT to_number(REPLACE(txt,',')), REGEXP_COUNT(txt,'.') FROM dummy_data WHERE REGEXP_LIKE (txt,'^[-+]*') AND NOT REGEXP_LIKE (TRANSLATE(txt,'+,-.','0000'),'[^[:digit:]]') AND REGEXP_COUNT(txt,',') <= 1 AND REGEXP_COUNT(txt,'\+') <= 1 AND REGEXP_COUNT(txt,'\-') <= 1 AND REGEXP_COUNT(txt,'\.') <= 1; ```
First you remove plus and minus with translate and then you wonder why their position is not considered? :-) This should work: ``` WITH dummy_data AS ( SELECT '-1.0' AS txt FROM dual UNION ALL SELECT '+0.1' FROM dual UNION ALL SELECT '-.1' FROM dual UNION ALL SELECT '+12034.89.00' FROM dual -- invalid: duplicate decimal separator UNION ALL SELECT '+1,2034.89' FROM dual -- invalid: thousand separator placement UNION ALL SELECT 'Deva +21' FROM dual -- invalid: letters UNION ALL SELECT '1+1' FROM dual -- invalid: plus sign placement UNION ALL SELECT '1023' FROM dual UNION ALL SELECT '1.023,88' FROM dual -- invalid: decimal/thousand separators mixed up UNION ALL SELECT '1,234' FROM dual UNION ALL SELECT '+1,234.56' FROM dual UNION ALL SELECT '-123' FROM dual UNION ALL SELECT '+123,0000' FROM dual -- invalid: thousand separator placement UNION ALL SELECT '+234.' FROM dual -- invalid: decimal separator not followed by digits UNION ALL SELECT '12345,678' FROM dual -- invalid: missing thousand separator UNION ALL SELECT '+' FROM dual -- invalid: digits missing UNION ALL SELECT '.' FROM dual -- invalid: digits missing ) select * from dummy_data where regexp_like(txt, '[[:digit:]]') and ( regexp_like(txt, '^[-+]{0,1}([[:digit:]]){0,3}(\,([[:digit:]]){0,3})*(\.[[:digit:]]+){0,1}$') or regexp_like(txt, '^[-+]{0,1}[[:digit:]]*(\.[[:digit:]]+){0,1}$') ); ``` You see, you need three regular expressions; one to guarantee that there is at least one digit in the string, one for numbers with thousand separators, and one for numbers without. With thousand separators: txt may start with one plus or minus sign, then there may be up to three digits. These may be followed by a thousand separator plus three digits several times. Then there may be a decimal separator with at least one following number. Without thousand separators: txt may start with one plus or minus sign, then there may be digits. Then there may be a decimal separator with at least one following number. 
I hope I haven't overlooked anything.
Filter the rows with number only data in a column SQL
[ "", "sql", "regex", "oracle", "" ]
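The three-pattern idea from the second answer (one digit check, one pattern with thousand separators, one without) can be transliterated to Python's re module for quick checking. The POSIX classes become `\d` and `{0,1}` becomes `?`; the exact patterns below are my assumptions about an equivalent shape, not the answer's Oracle regexes verbatim.

```python
import re

# Signed number with comma thousand separators, e.g. "1,234" or "+1,234.56".
with_thousands = re.compile(r'^[-+]?\d{0,3}(,\d{3})*(\.\d+)?$')
# Signed number without separators, e.g. "-1.0", "-.1", "1023".
without_thousands = re.compile(r'^[-+]?\d*(\.\d+)?$')

def looks_numeric(txt: str) -> bool:
    if not re.search(r'\d', txt):  # must contain at least one digit
        return False
    return bool(with_thousands.match(txt) or without_thousands.match(txt))

good = [looks_numeric(s) for s in ('-1.0', '-.1', '1,234', '1023')]
bad = [looks_numeric(s) for s in ('+12034.89.00', '1+1', 'Deva +21', '.')]
```

Anchoring with `^...$` is what rejects the double decimal point and the misplaced signs: any leftover character makes the whole match fail.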
I have a table with people and timestamps. Each person has multiple timestamps ``` SELECT person, time FROM table; A 1 A 2 B 1 B 2 B 3 ``` I would like to get the most recent timestamp for each person ``` SELECT ???? A 2 B 3 ```
`GROUP BY` will do the trick : ``` SELECT person, MAX(time) FROM table GROUP BY person ```
In Oracle: ``` SELECT person, MAX(time) FROM table GROUP BY person ```
Most recent timestamp for each person
[ "", "sql", "" ]
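The `GROUP BY` answer, run end-to-end in SQLite with the question's own data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (person TEXT, time INT);
    INSERT INTO t VALUES ('A',1),('A',2),('B',1),('B',2),('B',3);
""")
# One row per person, carrying that person's latest timestamp.
latest = conn.execute(
    "SELECT person, MAX(time) FROM t GROUP BY person ORDER BY person"
).fetchall()
```

Each person collapses to a single row holding their maximum timestamp.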
Before Query ``` User Friend A B A C D A F A ``` After Query ``` User Friend A B A C A D A F ``` How can I get the result shown. I want to get all friends of A.
With a union: ``` select user, friend from t where user = 'A' union select friend, user from t where friend = 'A' ``` Notice that UNION behavior is DISTINCT, which is what you expect here (as opposed to UNION ALL).
Try this ``` SELECT USER AS 'User', friend AS 'Friend' FROM t WHERE USER = 'A' UNION SELECT friend , USER FROM t WHERE friend = 'A' ```
How to query a pair like table structure?
[ "", "sql", "sql-server", "" ]
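The UNION answer against the question's data, run in SQLite. The table name `friends` is invented (the answers call it `t`); note that plain UNION also de-duplicates any pair that appears in both branches, as the accepted answer points out.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE friends (user TEXT, friend TEXT);
    INSERT INTO friends VALUES ('A','B'),('A','C'),('D','A'),('F','A');
""")
# First branch: A listed as user; second branch: A listed as friend, swapped.
pairs = conn.execute("""
    SELECT user, friend FROM friends WHERE user = 'A'
    UNION
    SELECT friend, user FROM friends WHERE friend = 'A'
    ORDER BY 2
""").fetchall()
```

All four friendships come back normalized with 'A' in the first column.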
I am new to databases and am making a gym management system; I implemented a database from the tutorials on www.homeandlearn.co.uk. I have completed the project without foreign keys. Now I have to link the tables, but I am getting this error: > Update cannot proceed due to validation errors. > Please correct the following errors and try again. > > SQL71516 :: The referenced table '[dbo].[member_info]' contains no primary or candidate keys that match the referencing column list in the foreign key. If the referenced column is a computed column, it should be persisted. I do not know what this error is about. Please tell me how to fix this. Do I have to create a new database now, or can I still use foreign keys in the same database? I am using Visual Studio 2012. All help will be appreciated. Thanks in advance. Cheers. I do have a primary key, and I have set it to increment by 1. See, this is my table: ``` CREATE TABLE [dbo].[member_info] ( [Id] INT IDENTITY (1, 1) NOT NULL, [memberName] NVARCHAR (50) NULL, [father_name] NVARCHAR (50) NULL, [age] NCHAR (10) NULL, [address] NVARCHAR (50) NULL, [contact] NVARCHAR (50) NULL, [height] NVARCHAR (50) NULL, [weight] NVARCHAR (50) NULL, [chest] NVARCHAR (50) NULL, [triceps_biceps] NVARCHAR (50) NULL, [waist] NVARCHAR (50) NULL, [shoulders] NVARCHAR (50) NULL, [thighs] NVARCHAR (50) NULL, [calves] NVARCHAR (50) NULL, [instructor] NVARCHAR (50) NULL, [date_of_admission] DATE NULL, [photo] IMAGE NULL, PRIMARY KEY CLUSTERED ([Id] ASC), CONSTRAINT [FK_member_info_instructor_info] FOREIGN KEY ([instructor]) REFERENCES [instructor_info]([instructor]) ); ``` This is my member_info table, and below is my instructor_info table: ``` CREATE TABLE [dbo].[instructor_info] ( [InstructorID] INT IDENTITY (1, 1) NOT NULL, [instructor] NVARCHAR (50) NULL, [father_name] NVARCHAR (50) NULL, [age] NCHAR (10) NULL, [address] NVARCHAR (MAX) NULL, [contact] NVARCHAR (50) NULL, [height] NCHAR (10) NULL, [weight] NCHAR (10) NULL, [chest] NCHAR (10) NULL,
[triceps_biceps] NCHAR (10) NULL, [waist] NCHAR (10) NULL, [shoulders] NCHAR (10) NULL, [thighs] NCHAR (10) NULL, [calves] NCHAR (10) NULL, [memberName] NVARCHAR (50) NULL, [date_of_admission] DATE NULL, [photo] IMAGE NULL, PRIMARY KEY CLUSTERED ([InstructorID] ASC) ); ```
This is your table `instructor_info` ``` [dbo].[instructor_info] PRIMARY KEY CLUSTERED ([InstructorID] ASC) ``` So if you want to reference that primary key from your table `member_info`, you must reference **that exact column name** (`InstructorID`). So your current FK constraint won't work - you need to reference that column name, and you must use the same datatype. Change your table `member_info` to use ``` [Instructor_ID] INT ``` (instead of the `[instructor] NVARCHAR(50)` column) and then change your FK constraint to: ``` CONSTRAINT [FK_member_info_instructor_info] FOREIGN KEY ([instructor_ID]) REFERENCES [dbo].[instructor_info]([Instructor_ID]) ``` Any **foreign key** in a table must reference the other table's **primary key** (or a unique constraint) - it cannot just reference any column you like....
You have to create primary keys for your tables in order to reference them in your foreign keys. Below an example of creating a table with a primary key, so you don't stumble on this error later on. ``` CREATE TABLE member_info ( id MEDIUMINT NOT NULL AUTO_INCREMENT, name CHAR(30) NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM; ``` Link for MYSQL documentation on this: <http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html> EDIT: since my answer was posted, the tag changed from mysql to mssql, as well as a snippet to provide more info. For history purposes, I'm adding the code below to answer the new question. ``` CREATE TABLE [dbo].[instructor_info] ( [InstructorID] INT PRIMARY KEY IDENTITY (1, 1) NOT NULL, [instructor] NVARCHAR (50) NULL, [father_name] NVARCHAR (50) NULL, [age] NCHAR (10) NULL, [address] NVARCHAR (MAX) NULL, [contact] NVARCHAR (50) NULL, [height] NCHAR (10) NULL, [weight] NCHAR (10) NULL, [chest] NCHAR (10) NULL, [triceps_biceps] NCHAR (10) NULL, [waist] NCHAR (10) NULL, [shoulders] NCHAR (10) NULL, [thighs] NCHAR (10) NULL, [calves] NCHAR (10) NULL, [memberName] NVARCHAR (50) NULL, [date_of_admission] DATE NULL, [photo] IMAGE NULL ); CREATE TABLE [dbo].[member_info] ( [Id] INT PRIMARY KEY IDENTITY (1, 1) NOT NULL, [memberName] NVARCHAR (50) NULL, [father_name] NVARCHAR (50) NULL, [age] NCHAR (10) NULL, [address] NVARCHAR (50) NULL, [contact] NVARCHAR (50) NULL, [height] NVARCHAR (50) NULL, [weight] NVARCHAR (50) NULL, [chest] NVARCHAR (50) NULL, [triceps_biceps] NVARCHAR (50) NULL, [waist] NVARCHAR (50) NULL, [shoulders] NVARCHAR (50) NULL, [thighs] NVARCHAR (50) NULL, [calves] NVARCHAR (50) NULL, [instructor] INT FOREIGN KEY REFERENCES instructor_info(InstructorID), [date_of_admission] DATE NULL, [photo] IMAGE NULL ); ```
Foreign Key confusion
[ "", "sql", "sql-server", "database", "" ]
I have a DB in which there is a table `shows` with a multilingual column `title`. I would like to optimise full text search by adding an index like so: ``` CREATE INDEX title_idx ON shows USING gin(to_tsvector(title)); ``` I get this error: ``` ERROR: functions in index expression must be marked IMMUTABLE ``` It basically asks me to add the language parameter to make to\_tsvector immutable. the result would be: ``` CREATE INDEX title_idx ON shows USING gin(to_tsvector(LANGUAGE, title)); ``` where `LANGUAGE` would be one of my target languages. Is it possible to create an index that works for several languages ?
> Is it possible to create an index that works for several languages ? Yes, but you need a second column that identifies the language of the text. Say you added a column `doc_language` to the table; you could then write: ``` CREATE INDEX title_idx ON shows USING gin(to_tsvector(doc_language, title)); ``` Of course, this requires that you know the language of the subject text, something that can be hard to do in practice. If you don't need stemming, etc, you can just use the language `simple`, but I'm guessing you would've done that already if it were an option. As an alternative, if you have a fixed and limited set of languages, you can *concatenate the vectors for the different languages*. E.g.: ``` regress=> SELECT to_tsvector('english', 'cafés') || to_tsvector('french', 'cafés') || to_tsvector('simple', 'cafés'); ?column? ---------------------------- 'caf':2 'café':1 'cafés':3 (1 row) ``` That'll match a tsquery for `cafés` in any of those three languages. As an index: ``` CREATE INDEX title_idx ON shows USING gin(( to_tsvector('english', title) || to_tsvector('french', title) || to_tsvector('simple', title) )); ``` but this is clumsy to use in queries, as the planner isn't very smart about matching index quals. So I'd wrap it in a function: ``` CREATE FUNCTION to_tsvector_multilang(text) RETURNS tsvector AS $$ SELECT to_tsvector('english', $1) || to_tsvector('french', $1) || to_tsvector('simple', $1) $$ LANGUAGE sql IMMUTABLE; CREATE INDEX title_idx ON shows USING gin(to_tsvector_multilang(title)); ``` If you want you can even get fancy: pass the list of languages as an array (but remember it'll have to be *exactly* the same order for an index qual match to work). Use priorities with `setweight`, so you prefer a match in English to one in French, say. All sorts of options.
I've just made a Postgres function to test the text language. It's not perfect but it works for long texts. ``` CREATE OR REPLACE FUNCTION get_language(t text) RETURNS regconfig AS $$ DECLARE ret regconfig; BEGIN WITH l as ( SELECT cfgname, to_tsvector(cfgname::regconfig, title) as vector, length(to_tsvector(cfgname::regconfig, title)) as len FROM pg_ts_config, (select t as title) as ti) SELECT cfgname::regconfig INTO ret FROM l WHERE len=(SELECT MIN(len) FROM l) ORDER BY cfgname='simple' DESC, cfgname ASC LIMIT 1; RETURN ret; END; $$ LANGUAGE plpgsql; ``` It just look for the shortest tsvector for the given text (so it tries every ts config of postgres).
Full text search index on a multilingual column
[ "", "sql", "postgresql", "" ]
I have a query that exports its results via email in a table format. I would really like to hide the first row of my data so it never gets exported with my results. Example database table: ``` +------+--------+--------+ |Number|Language|Date | +------+--------+--------+ |2039 |text 1 |20/01/14| +------+--------+--------+ |1 |text 2 |20/01/14| +------+--------+--------+ |2 |text 3 |20/01/14| +------+--------+--------+ ``` The query I am using at the moment is: ``` SELECT COUNT(*) as `count`, `lang`, DATE(NOW()) as `week_ending` FROM mydata.table WHERE `date` > DATE_ADD(DATE(NOW()), INTERVAL - 1 WEEK) AND `date` < DATE(NOW()) GROUP BY `lang` , DATE(NOW()); ``` Is it possible to hide the row **2039 text 1 20/01/14**?
``` SELECT COUNT(*) as `count`, `lang`, DATE(NOW()) as `week_ending` FROM mydata.table WHERE `date` > DATE_ADD(DATE(NOW()), INTERVAL -1 WEEK) AND `date` < DATE(NOW()) GROUP BY `lang`, DATE(NOW()) LIMIT 1,x; ``` Replace x with a number big enough to contain all your records, or use 18446744073709551615 instead of x, which is the maximum value of an unsigned BIGINT.
Use LIMIT 1, N at the end of the query, where N is the number of rows you want to retrieve.
Hide first row in database query
[ "", "mysql", "sql", "select", "" ]
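The LIMIT-with-offset idea from both answers, demonstrated in SQLite. SQLite accepts `LIMIT -1 OFFSET 1` to mean "all remaining rows after the first", which avoids the magic big number MySQL needs (18446744073709551615); the sample rows mirror the question's table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mydata (number INT, language TEXT);
    INSERT INTO mydata VALUES (2039,'text 1'),(1,'text 2'),(2,'text 3');
""")
# Order so the unwanted row comes first, then skip it with the offset.
rows = conn.execute(
    "SELECT number, language FROM mydata ORDER BY number DESC LIMIT -1 OFFSET 1"
).fetchall()
```

The 2039 row is sorted first and dropped by the offset, leaving the real data.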
I am trying to generate a report for accepted orders each month against the total orders for that month. For example, I have a table `Orders` like so: ``` Order_Id Submit_Date Order_Status -------- ----------- ------------ 1 20130501 Accepted 2 20130509 Rejected 3 20130610 Accepted 4 20130614 Accepted 5 20130626 Rejected 6 20130802 Accepted 7 20130801 Accepted 8 20131014 Accepted 9 20140116 Rejected 10 20140121 Rejected ``` And would like to get the results like so: ``` [Month] Accepted Total ------- -------- ----- 2013-05 1 2 2013-06 2 3 2013-08 2 2 2013-10 1 1 2014-01 2 2 ``` How do I go about it?
Assuming you will never have a time component, this should work just fine: ``` DECLARE @d TABLE([Order] INT, [Date] DATETIME, [Status] CHAR(8)); INSERT @d VALUES (1 ,'20130501','Accepted'), (2 ,'20130509','Rejected'), (3 ,'20130610','Accepted'), (4 ,'20130614','Accepted'), (5 ,'20130626','Rejected'), (6 ,'20130802','Accepted'), (7 ,'20130801','Accepted'), (8 ,'20131014','Accepted'), (9 ,'20140116','Rejected'), (10,'20140121','Rejected'); SELECT [Month] = DATEADD(DAY, 1-DAY([Date]), [Date]), Accepted = SUM(CASE WHEN [Status] = 'Accepted' THEN 1 ELSE 0 END), COUNT(*) FROM @d GROUP BY DATEADD(DAY, 1-DAY([Date]), [Date]) ORDER BY [Month]; ``` (And if you are on SQL Server 2008 or newer, you should use the `DATE` data type to prevent having to deal with any errant hours/minutes.) If you can have hours/minutes sometimes, and you're not on 2008 or greater, then: ``` SELECT [Month] = DATEADD(MONTH, DATEDIFF(MONTH, 0, [Date]), 0), Accepted = SUM(CASE WHEN [Status] = 'Accepted' THEN 1 ELSE 0 END), COUNT(*) FROM @d GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, [Date]), 0) ORDER BY [Month]; ``` I strongly recommend avoiding any solutions that group by using string conversions. Date/time math is much more efficient in SQL Server than converting to strings. Also if you want the client side to present things like `2013-05`, use Format(), ToString() etc. to apply that string formatting on the client.
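The conditional-aggregation pattern above (a SUM over a CASE) can be sketched in runnable form with Python's sqlite3 — SQLite's `strftime` stands in for the SQL Server date math, and the data is a subset of the question's sample:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INT, submit_date TEXT, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "2013-05-01", "Accepted"), (2, "2013-05-09", "Rejected"),
    (3, "2013-06-10", "Accepted"), (4, "2013-06-14", "Accepted"),
    (5, "2013-06-26", "Rejected")])

# SUM(CASE ...) counts only the accepted orders; COUNT(*) counts them all.
rows = conn.execute("""
    SELECT strftime('%Y-%m', submit_date) AS month,
           SUM(CASE WHEN status = 'Accepted' THEN 1 ELSE 0 END) AS accepted,
           COUNT(*) AS total
    FROM orders
    GROUP BY month
    ORDER BY month""").fetchall()
print(rows)
```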
To get yyyy-dd format you can use this ``` SELECT CONVERT(VARCHAR(7),[Date],20) ,COUNT(CASE WHEN [status] = 'Accepted' THEN 1 ELSE NULL END) AS 'Accepted' ,COUNT(*) AS Total FROM Orders GROUP BY CONVERT(VARCHAR(7),[Date],20) ```
How to group filtered rows against non filtered by month
[ "", "sql", "sql-server", "sql-server-2008-r2", "" ]
Sorry, I don't know how else to put the title into words. Below is my code. ``` SELECT DISTINCT association.NamaBadan, ( CASE WHEN type = 5 THEN 'Batal' WHEN NamaBadan LIKE '%BATAL%' THEN 'Batal' ELSE 'Sah' END ) AS STATUS FROM association LEFT JOIN tindakan ON association.assoc_id = tindakan.assoc_id ``` It returns the table below: ![query result](https://i.stack.imgur.com/WTbx9.png) As you can see, there are repeating results on 'perseketuan lawn bowls malaysia' and different results in the next column, where some of the lawn bowls rows are 'sah' and some are 'batal'. This is due to the fact that there are multiple entries in the table tindakan — entries with a type other than 5. For instance, there are two entries for lawn bowls in table tindakan: one with type 5, another with type 4. Since my query stated that type 5 = Batal and everything else = Sah, the query outputs both statuses for lawn bowls. How can I choose 'batal' over 'sah'? Something like: IF condition type = 5 EXISTS, THEN CHOOSE THIS CONDITION.
Solved it by the following query ``` SELECT DISTINCT association.NamaBadan, ( CASE WHEN type = 5 THEN 'Batal' WHEN NamaBadan LIKE '%BATAL%' THEN 'Batal' ELSE 'Sah' END ) AS STATUS FROM association LEFT JOIN tindakan ON (association.assoc_id = tindakan.assoc_id AND tindakan.type = 5) ```
You can try either a `Group by` clause or an `order by` clause to sort the results for you. Then you might try this: ``` SELECT association.NamaBadan, ( CASE WHEN type = 5 THEN 'Batal' WHEN NamaBadan LIKE '%BATAL%' THEN 'Batal' ELSE 'Sah' END ) AS STATUS FROM association WHERE STATUS LIKE '%Batal%' LEFT JOIN tindakan ON association.assoc_id = tindakan.assoc_id ``` Hope it helps.
In SQL query, if condition exist, select that condition over anything else
[ "", "mysql", "sql", "" ]
I am new to vb.net and sql, and what I am asking may be a silly question. I have a table in sql server 2005 and a column named activated. This column contains NULL, true or false. I want to select only NULL or false values. How can I do this?
The selection should be done on the SQL Server's side. Your query should look like this: ``` SELECT * FROM MyTable WHERE activated = 0 OR activated is NULL ``` The above assumes that the column `activated` is of an integral or a `BIT` data type. Note: it may be tempting to use a seemingly equivalent `WHERE activated <> 1` condition. This would be incorrect, though, because comparisons of `NULL` values to anything result in a `NULL`, so the rows with `activated = NULL` would be excluded.
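A runnable sketch of why the explicit `IS NULL` test is needed, using Python's sqlite3 (the table is a made-up stand-in for the one in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, activated INT)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("a", 1), ("b", 0), ("c", None)])

# Three-valued logic: "activated <> 1" is UNKNOWN for the NULL row,
# so that row is silently dropped.
wrong = conn.execute(
    "SELECT name FROM students WHERE activated <> 1 ORDER BY name").fetchall()

# The explicit IS NULL branch brings the NULL row back.
right = conn.execute(
    "SELECT name FROM students WHERE activated = 0 OR activated IS NULL "
    "ORDER BY name").fetchall()
print(wrong, right)
```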
With SQL's 3-valued logic, comparisons for NULL need to be done using the `IS NULL` operator... the check for false can be done with `=`: ``` SELECT YourColumn FROM YourTable WHERE YourColumn IS NULL OR YourColumn = 0 ```
select NULL and false but not true in sql
[ "", "sql", "sql-server", "vb.net", "" ]
Good Evening, I have the following sql code and need to replace the NULL values within a sub query. As you can tell from the code I have tried using the ISNULL function and case where = NULL. Could someone please help? ``` Select Student_Details.STU_ID , ( Select case ISNULL( s1stu_disability_type.DISABILITY_TYPE_CD , '' ) when '' then 'NO' else 'YES' end from s1stu_disability_type Where Student_Details.STU_ID = s1stu_disability_type.STU_ID and DISABILITY_TYPE_CD = '$HEAR' ) as 'Hearing Disability' from S1STU_DET as Student_Details ```
Try this: ``` Select Student_Details.STU_ID , IsNull(( Select case ISNULL( s1stu_disability_type.DISABILITY_TYPE_CD , '' ) when '' then 'NO' else 'YES' end from s1stu_disability_type Where Student_Details.STU_ID = s1stu_disability_type.STU_ID and DISABILITY_TYPE_CD = '$HEAR' ), 'No') as 'Hearing Disability' from S1STU_DET as Student_Details ``` Basically, the ISNULL function has to be outside the subquery for it to work the way you want it to. Think of it this way, if the subquery does not return any rows, the output will be null whether you have an isnull check inside the subquery or not.
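The point that the NULL check must wrap the subquery (an empty scalar subquery yields NULL no matter what happens inside it) can be demonstrated with Python's sqlite3, using `COALESCE` in place of T-SQL's `ISNULL` and a toy schema invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stu (stu_id INT)")
conn.execute("CREATE TABLE disability (stu_id INT, code TEXT)")
conn.executemany("INSERT INTO stu VALUES (?)", [(1,), (2,)])
conn.execute("INSERT INTO disability VALUES (1, '$HEAR')")

# When the correlated subquery returns no row it evaluates to NULL;
# COALESCE on the *outside* turns that NULL into 'NO'.
rows = conn.execute("""
    SELECT s.stu_id,
           COALESCE((SELECT 'YES' FROM disability d
                     WHERE d.stu_id = s.stu_id AND d.code = '$HEAR'), 'NO')
    FROM stu s
    ORDER BY s.stu_id""").fetchall()
print(rows)
```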
Your `where` clause in your subquery rejects all rows but those where the column `DISABILITY_TYPE_CD` is `'$HEAR'`. Consequently, the `case` statement will always take the `else` route, as that column will never, ever be `null` or empty (`''`). What exactly are you trying to do? Your query can better be written as ``` select sd.STU_ID , dt.DISABILITY_TYPE_CD from S1STU_DET sd join s1stu_disability_type dt on dt.STU_ID = sd.STU_ID and dt.DISABILITY_TYPE_CD = '$HEAR' ``` It's almost certain that the relationship between student and student disability has a zero-to-many cardinality, which is to say that each student has zero or more disabilities. As a result, your original query, with its correlated subquery, will return 1 row per student, but per the SQL standard, it's luck of the draw as to *which* matching disability gets selected by the subquery. My query above will return one row per student with a matching disability. Students *without* a matching disability are excluded. To change that to include all students, you want to change the `[inner] join` to a `left [outer] join`. Each student will then be represented in the result set at least once. If the student has no matching disabilities, all columns for the student disability table will be 'null'. If, as I suspect, what you're trying to do is identify students as to whether or not they have a hearing disability (or some particular type of disability), you need to **summarize** things. A query like this will likely do it for you: ``` select sd.STU_ID , case sign(coalesce(hd.cnt,0)) when 1 then 'YES' else 'NO' end as HAS_HEARING_DISABILITY from S1STU_DET sd left join ( select STU_ID , count(*) as cnt from s1stu_disability_type where DISABILITY_TYPE_CD = '$HEAR' group by STU_ID ) hd on hd.STU_ID = sd.STU_ID ```
Case Statement with IS NULL not acting as required
[ "", "sql", "sql-server", "t-sql", "null", "case", "" ]
Suppose I have a table of election data, call it ELECTIONS, with one row per voter per election, like so: ``` VoterID ElectionID A 1 A 2 B 1 C 2 D 3 E 1 E 2 ``` I want to know the number of voters who voted both in election 1 and in election 2; I don't care about anyone else. The number should be 2 (voter A and Voter E). Would something like this work: ``` select count(Elections) as NumVoters from ( select VoterID, ElectionID, count(ElectionID) as Elections from ELECTIONS where ElectionID=1 or ElectionID=2 group by VoterID having (count(ElectionID)=2) ) x; ``` UPDATE: This is my first question here, and I am blown away at how helpful and fast folks have been. I revised the query above to fix the lack of an alias at the end and to add a terminating semicolon. THANK YOU!
Yes, what you have should work. (You will need to add an alias on the derived table; the error message you get should be self-explanatory. Easy to fix: just add a space and the letter c (or whatever name you want) at the end of your query.) There's one caveat regarding the potential for duplicate `(VoterID, ElectionID)` tuples. If you have a unique constraint on (VoterID, ElectionID), then your query will work fine. If you don't have a unique constraint (which disallows duplicate `(VoterID, ElectionId)`), then there's a potential for a voter with two (2) rows for ElectionID 1, and no rows for ElectionID 2... for that voter to get included in the count. And for a voter that voted twice in ElectionID 1 and only once in ElectionID 2, that voter will be excluded from the count. Including the DISTINCT keyword inside a COUNT would fix that problem, e.g. ``` HAVING COUNT(DISTINCT ElectionID) = 2 ``` --- I'd write the query differently, but what you have will work. To get the count of VoterID that participated in both ElectionID 1 and ElectionID 2, for improved performance, I'd avoid using an inline view (MySQL calls it a derived table). I'd have the query use a JOIN operation instead. Something like this: ``` SELECT COUNT(DISTINCT e1.voterID) AS NumVoters FROM elections e1 JOIN elections e2 ON e2.voterID = e1.voterID WHERE e1.electionID = 1 AND e2.electionID = 2 ``` If you are guaranteed that `(voterID, ElectionID)` is unique, then the select could be simpler: ``` SELECT COUNT(1) AS NumVoters FROM elections e1 JOIN elections e2 ON e2.voterID = e1.voterID WHERE e1.electionID = 1 AND e2.electionID = 2 ```
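The self-join version can be run as-is against the sample data with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elections (voter_id TEXT, election_id INT)")
conn.executemany("INSERT INTO elections VALUES (?, ?)",
                 [("A", 1), ("A", 2), ("B", 1), ("C", 2),
                  ("D", 3), ("E", 1), ("E", 2)])

# Each voter who appears for election 1 is joined to their own
# election-2 row; DISTINCT guards against duplicate ballots.
(num_voters,) = conn.execute("""
    SELECT COUNT(DISTINCT e1.voter_id)
    FROM elections e1
    JOIN elections e2 ON e2.voter_id = e1.voter_id
    WHERE e1.election_id = 1 AND e2.election_id = 2""").fetchone()
print(num_voters)
```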
I would recommend something more like this: ``` SELECT COUNT(*) AS NumVoters FROM ELECTIONS e1 WHERE e1.ElectionID = 1 AND e1.VoterID in ( SELECT e2.VoterID FROM ELECTIONS e2 WHERE e2.ElectionID = 2 ); ``` That way you solve the problem, and have only 1 subquery.
SQL query to count number of times certain values occur in multiple rows
[ "", "mysql", "sql", "" ]
When I use the following statement I just get `NULL` back where it's supposed to return values and undefined fields where the column value is blank. `SELECT STR_TO_DATE(strdate, '%Y-%m-%d %H:%m:%s') from table` ![enter image description here](https://i.stack.imgur.com/KTrWS.png) My date format is set to `2013-12-11 00:58:00`. If I select all without converting it (just displaying the VARCHAR as is) then I get this: ![enter image description here](https://i.stack.imgur.com/v7N27.png)
The correct code for minutes is `%i` not `%m`: ``` SELECT STR_TO_DATE(strdate, '%Y-%m-%d %H:%i:%s') from (select '2013-12-11 00:58:00' as strdate) t ```
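Python's `strptime` follows the same one-code-per-field rule (`%m` is month, `%M` is minute), so the underlying pitfall — one letter cannot stand for both fields — can be sanity-checked there:

```python
from datetime import datetime

# %m parses the month; %M (not %m again) parses the minutes.
dt = datetime.strptime("2013-12-11 00:58:00", "%Y-%m-%d %H:%M:%S")
print(dt.month, dt.minute)
```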
The culprit is `'%Y-%m-%d %H:%m:%s'` - `%m` can't stand for month **and** minute. IIRC `%m` is correct for the month, and `%i` is what you want for the minute.
NULL when converting VARCHAR to DATETIME
[ "", "mysql", "sql", "" ]
I'm working on a website that processes statistics for a sporting competition. I.e. my site will let users quickly look up which player(s) scored the most points in a game/season, or what one player's record is against another player or team. *My data for previous games is displayed like this* ![enter image description here](https://i.stack.imgur.com/5wQNz.jpg) *When you click on a round, for example round 4 game 1 (which is Hurricanes vs Crusaders), the statistics for that game and all player statistics for that game (for both teams) get displayed, as such:* ![enter image description here](https://i.stack.imgur.com/bTWYf.jpg) ![enter image description here](https://i.stack.imgur.com/IHE32.jpg) *Furthermore, game statistics also get displayed like this:* ![enter image description here](https://i.stack.imgur.com/81JhR.jpg) Now I'm trying to upload this to a database, so that users can: 1. Compare one player's record against another player's record 2. Compare one player's record against a specific team 3. Compare a team's record against another team's record 4. Display best and worst player and team statistics This problem has kept me busy for 3 days; I'm at the stage now where I'm starting to pull the hair out of my head. I just cannot find an efficient way to link the tables together. Should I create a table for each individual match? Or should I create one table for all matches and one table for all players? If so, what do I work with as my primary keys? I know this is a long question, so to whoever took the time to read through this, thank you in advance; if anyone could point me in the right direction or just give me an opinion it would be GREATLY appreciated. I'm just completely stumped on this one.
Capture the statistics at the lowest applicable level. You want to apply the principles of database normalization to your initial design. This gives you the most straight-forward data capture scenario while leaving you the most flexibility for reporting. Once you decide exactly what reporting you need and have determined what performance challenges you may be facing, then you can apply *denormalization* to create reporting data warehouse tables. Consider the following logical data model: ![ERD](https://i.stack.imgur.com/66Ctf.png)
In the hope of teaching how to fish.... Write out what Entities you think you have, and what attributes they own. An attribute might be another Entity. For example a Team entity has a Name attribute, and multiple Player attributes. But Player is also an Entity with a Name attribute and maybe some Statistics attributes. This will give you your starting point for Tables (Entities) and their columns (Attributes). Where an Attribute is also an Entity, draw a line from the Attribute to the Entity: those are your relationships. Then get hold of an example of putting data into Third Normal Form (3NF) and follow the example, applying the steps to your diagram. When you've done that, you'll have a good DB Design. Cheers -
Need help creating structure to database
[ "", "mysql", "sql", "database", "database-design", "data-structures", "" ]
I am fetching rows from my DB using this request: ``` SELECT * FROM {$db_sales} WHERE date = '{$date}' ORDER BY 'amount' DESC ``` So, obviously, I expected the returned values to be sorted in descending order by the amount column in my DB, but it doesn't? It still fetches them, but just doesn't sort them? Any ideas here? Is my SQL statement wrong?
The ORDER BY clause uses a column name, and the column name should not be given in quotes; therefore the query becomes as follows ``` SELECT * FROM {$db_sales} WHERE date = '{$date}' ORDER BY amount DESC ```
Remove the single quotes around amount like this and try: ``` SELECT * FROM {$db_sales} WHERE date = '{$date}' ORDER BY amount DESC ```
MySQL sort by not sorting?
[ "", "mysql", "sql", "sorting", "" ]
I have a table called `flags` that contains a column called `coordinates` that is full of MySQL 'points'. I need to perform a query where I get all the flags within a circle based on a latitude and longitude position with 100m radius. From a usage point of view this is based around the user's position. For example, the mobile phone would give the user's latitude and longitude position and then pass it to this part of the API. It's then up to the API to create an invisible circle around the user with a radius of 100 metres and then return the flags that are in this circle. It's this part of the API I'm not sure how to create as I'm unsure how to use SQL to create this invisible circle and select points only within this radius. Is this possible? Is there a **MySQL spatial function** that will help me do this? I believe the `Buffer()` function can do this but I can't find any documentation as to how to use it (eg example SQL). Ideally I need an answer that shows me how to use this function or the closest to it. Where I'm storing these coordinates as geospatial points I should be using a geospatial function to do what I'm asking to maximize efficiency. Flags table: * id * coordinates * name Example row: 1 | [GEOMETRY - 25B] | Tenacy AB For the flags table I have latitude, longitude positions and easting and northing (UTM) The user's location is just standard latitude/longitude but I have a library that can conver this position to UTM
~~**There are no geospatial extension functions in MySQL supporting latitude / longitude distance computations.**~~ [There is as of MySQL 5.7](https://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html#function_st-distance-sphere). You're asking for proximity circles on the surface of the earth. You mention in your question that you have lat/long values for each row in your `flags` table, and also [universal transverse Mercator](http://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system) (UTM) projected values in one of several different [UTM zones](http://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system#UTM_zone). If I remember my UK Ordnance Survey maps correctly, UTM is useful for locating items on those maps. It's a simple matter to compute the distance between two points *in the same zone* in UTM: the Cartesian distance does the trick. But, when points are in different zones, that computation doesn't work. Accordingly, for the application described in your question, it's necessary to use the [Great Circle Distance](http://en.wikipedia.org/wiki/Great-circle_distance), which is computed using the haversine or another suitable formula. MySQL, augmented with geospatial extensions, supports a way to represent various planar shapes (points, polylines, polygons, and so forth) as geometrical primitives. MySQL 5.6 implements an undocumented distance function `st_distance(p1, p2)`. However, this function returns Cartesian distances. So it's *entirely unsuitable* for latitude and longitude based computations. At temperate latitudes a degree of latitude subtends almost twice as much surface distance (north-south) as a degree of longitude (east-west), because the latitude lines grow closer together nearer the poles. So, a circular proximity formula needs to use genuine latitude and longitude. In your application, you can find all the `flags` points within ten statute miles of a given `latpoint,longpoint` with a query like this: ``` SELECT id, coordinates, name, r, units * DEGREES(ACOS(LEAST(1.0, COS(RADIANS(latpoint)) * COS(RADIANS(latitude)) * COS(RADIANS(longpoint) - RADIANS(longitude)) + SIN(RADIANS(latpoint)) * SIN(RADIANS(latitude))))) AS distance FROM flags JOIN ( SELECT 42.81 AS latpoint, -70.81 AS longpoint, 10.0 AS r, 69.0 AS units ) AS p ON (1=1) WHERE MbrContains(GeomFromText ( CONCAT('LINESTRING(', latpoint-(r/units),' ', longpoint-(r /(units* COS(RADIANS(latpoint)))), ',', latpoint+(r/units) ,' ', longpoint+(r /(units * COS(RADIANS(latpoint)))), ')')), coordinates) ``` If you want to search for points within 20 km, change this line of the query ``` 10.0 AS r, 69.0 AS units ``` to this, for example ``` 20.0 AS r, 111.045 AS units ``` `r` is the radius in which you want to search. `units` are the distance units (miles, km, furlongs, whatever you want) per degree of latitude on the surface of the earth. This query uses a bounding lat/long along with `MbrContains` to exclude points that are definitely too far from your starting point, then uses the great circle distance formula to generate the distances for the remaining points. An [explanation of all this can be found here](http://www.plumislandmedia.net/mysql/haversine-mysql-nearest-loc/). If your table uses the MyISAM access method and has a spatial index, `MbrContains` will exploit that index to get you fast searching. Finally, the query above selects all the points within the rectangle. To narrow that down to only the points in the circle, and order them by proximity, wrap the query up like this: ``` SELECT id, coordinates, name FROM ( /* the query above, paste it in here */ ) AS d WHERE d.distance <= d.r ORDER BY d.distance ASC ```
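The great-circle formula inside that query can be sketched as a standalone Python function (3959 is an approximate Earth radius in statute miles; one degree of latitude is roughly 69 statute miles, which the sanity check below confirms):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2, radius=3959.0):
    """Approximate great-circle distance between two lat/long points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return radius * 2 * math.asin(math.sqrt(a))

# One degree of latitude, same longitude: should be close to 69 miles.
d = haversine_miles(42.0, -70.0, 43.0, -70.0)
print(d)
```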
**UPDATE** **Use ST\_Distance\_Sphere() to calculate distances using a lat/long** <http://dev.mysql.com/doc/refman/5.7/en/spatial-convenience-functions.html#function_st-distance-sphere>
Use MySQL spatial extensions to select points inside circle
[ "", "mysql", "sql", "geospatial", "" ]
I have a table ``` id user Visitor timestamp 13 username abc 2014-01-16 15:01:44 ``` I have to 'Count' total visitors for a 'User' for the last seven days, grouped by date (not timestamp) ``` SELECT count(*) from tableA WHERE user=username GROUPBY __How to do it__ LIMIT for last seven day from today. ``` If no visitor came on a given day, no row would be there, so it should show 0. What would be the correct query?
There is no need to `GROUP BY` resultset, you need to count visits for a week (with unspecified user). Try this: ``` SELECT COUNT(*) FROM `table` WHERE `timestamp` >= (NOW() - INTERVAL 7 DAY); ``` If you need to track visits for a specified user, then try this: ``` SELECT DATE(`timestamp`) as `date`, COUNT(*) as `count` FROM `table` WHERE (`timestamp` >= (NOW() - INTERVAL 7 DAY)) AND (`user` = 'username') GROUP BY `date`; ``` [MySQL `DATE()` function reference](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date).
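A runnable sketch of the per-date query with Python's sqlite3 — SQLite's `DATETIME` modifier stands in for MySQL's `INTERVAL` arithmetic, and "now" is pinned to a fixed date so the result is reproducible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (username TEXT, ts TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)", [
    ("abc", "2014-01-16 15:01:44"),
    ("abc", "2014-01-16 18:00:00"),
    ("abc", "2014-01-15 09:30:00"),
    ("abc", "2014-01-01 09:30:00")])   # older than a week, excluded

now = "2014-01-17"  # pinned "today" for the sketch
rows = conn.execute("""
    SELECT DATE(ts) AS d, COUNT(*) AS visits
    FROM visits
    WHERE ts >= DATETIME(?, '-7 days') AND username = 'abc'
    GROUP BY d
    ORDER BY d""", (now,)).fetchall()
print(rows)
```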
Try this: ``` SELECT DATE(a.timestamp), COUNT(*) FROM tableA a WHERE a.user='username' AND DATEDIFF(NOW(), DATE(a.timestamp)) <= 7 GROUP BY DATE(a.timestamp); ```
match timestamp with date in MYSQL using PHP
[ "", "mysql", "sql", "date", "select", "group-by", "" ]
I want to insert the current date into one of the columns of my table. I am using the following: ``` to_date(SYSDATE, 'yyyy-mm-dd')); ``` This is working great, but it is displaying the year as '0014'. Is there some way that I can get the year to display as '2014'?
to\_date is used to convert a string to a date ... try to\_char(SYSDATE, 'yyyy-mm-dd') to convert a date to a string.
Inserting it as `TRUNC(sysdate)` would do. A DATE actually doesn't have a format internally, as it is a data type itself. TRUNC() will just trim the time element of the current date-time and return today's date with the time as 00:00:00. To explain what happened in your case, say your `NLS_DATE_FORMAT="YY-MM-DD"`. The processing will be like below ``` select to_date(to_char(sysdate,'YY-MM-DD'),'YYYY-MM-DD') from dual; ``` **Output:** ``` TO_DATE(TO_CHAR(SYSDATE,'YY-MM-DD'),'YYYY-MM-DD') January, 22 0014 00:00:00+0000 ``` `2014` gets reduced to `'14'` in the first `to_char()` and later, while converted again as `YYYY`, it will be treated as `0014` because the century detail is lost!
Formatting value of year from SYSDATE
[ "", "sql", "oracle", "" ]
How can I select current month records from a table of a MySQL database? For example, now the current month is January, so I would like to get the records of the January month, where the data type of my table column is `timestamp`. I would like to know the sql query. Thanks
This query should work for you: ``` SELECT * FROM table WHERE MONTH(columnName) = MONTH(CURRENT_DATE()) AND YEAR(columnName) = YEAR(CURRENT_DATE()) ```
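The same month-and-year comparison can be sketched with Python's sqlite3 (`strftime('%Y-%m', ...)` combines MONTH() and YEAR() in one comparison; the reference date is pinned instead of using the real current date so the example is reproducible):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (ts TEXT)")
conn.executemany("INSERT INTO records VALUES (?)", [
    ("2014-01-05 10:00:00",),
    ("2014-01-21 09:00:00",),
    ("2013-12-30 09:00:00",)])  # previous month, excluded

today = "2014-01-22"  # stand-in for CURRENT_DATE()
rows = conn.execute("""
    SELECT ts FROM records
    WHERE strftime('%Y-%m', ts) = strftime('%Y-%m', ?)
    ORDER BY ts""", (today,)).fetchall()
print(rows)
```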
Check the [**MySQL Datetime**](http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html) Functions: Try this: ``` SELECT * FROM tableA WHERE YEAR(columnName) = YEAR(CURRENT_DATE()) AND MONTH(columnName) = MONTH(CURRENT_DATE()); ```
Get records of current month
[ "", "mysql", "sql", "date", "select", "timestamp", "" ]
Here is my query: ``` select * from students where status != 4. ``` I am not getting rows with status null. Shouldn't it return all the rows where status is not 4 - including rows with null status values? I can get what I need by following query. ``` select * from students where status != 4 or status is null ```
Use: IS DISTINCT FROM ``` SELECT * FROM students WHERE status IS DISTINCT FROM 4; ``` <http://www.postgresql.org/docs/current/interactive/functions-comparison.html>
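SQLite spells the same null-safe comparison `IS NOT`, which makes the behaviour easy to check from Python (toy table with made-up data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, status INT)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("a", 4), ("b", 3), ("c", None)])

# "status != 4" would drop the NULL row; the null-safe operator keeps it.
rows = conn.execute(
    "SELECT name FROM students WHERE status IS NOT 4 ORDER BY name").fetchall()
print(rows)
```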
SQL's understanding of `NULL` is ["three-valued"](http://en.wikipedia.org/wiki/Three-valued_logic#Application_in_SQL). Think of `NULL` as `UNKNOWN`. So, in other words: ``` UNKNOWN = 4 yields UNKNOWN UNKNOWN != 4 yields also UNKNOWN ``` That is, if a student has `status is null`, then it is "unknown" whether that status is different from 4.
postgres column with null values and filtering with "!="
[ "", "sql", "postgresql", "" ]
I want to query a table for all the values that are on a list in another table to find matches, but I know that some of the values in either table may be typed in incorrectly. One table may have '10Hf7K8' and another table may have '1OHf7K8' but I still want them to match. Another example: if one table has 'STOP', I know that in myTable some of the fields may say '5T0P' or 'ST0P' or '5TOP'. I want those to come up as results too. The same thing may occur for '2' and 'Z' if I want 'ZEPT' and '2EPT' to match. So if I know to account for inconsistencies between '0' and 'O', '5' and 'S', and 'Z' and '2', and knowing that they will be in the same spot, but I do not know where exactly they will be in the word or how many letters the word will have, is it possible to make a query ignoring those letters? Additional Information: These values are hundreds of serial keys, and I have no way of confirming which is the correct version between the two tables. I should not have used actual words for my example; these values can be any combination of letters and numbers in any order. There is no distinct pattern that I can hard code. SOLUTION: Goat CO, Learning, and user3216429's answers contained the solution I needed. I was able to find matching values while keeping the underlying data.
Cleaning data is preferable, but could use nested `REPLACE()` statements if you can't alter the underlying data: ``` SELECT * FROM Table1 a JOIN Table2 b ON REPLACE(REPLACE(REPLACE(a.field1,'2','Z'),'5','S'),'0','O') = REPLACE(REPLACE(REPLACE(b.field1,'2','Z'),'5','S'),'0','O') ``` Cleansing the data could be the same nested replace statement: ``` ALTER TABLE Table1 ADD cleanfield VARCHAR(25) UPDATE Table1 SET cleanfield = REPLACE(REPLACE(REPLACE(dirtyfield,'2','Z'),'5','S'),'0','O') ``` Then you'd be able to join the tables on the clean field.
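The nested-REPLACE normalisation is easier to read (and unit-test) as a small Python helper doing the same character folding:

```python
def normalize_serial(serial):
    """Fold commonly confused characters onto one canonical form."""
    return (serial.upper()
                  .replace("0", "O")
                  .replace("5", "S")
                  .replace("2", "Z"))

# Typo'd serials from the question now compare equal.
pairs = [("10Hf7K8", "1OHf7K8"), ("STOP", "5T0P"), ("ZEPT", "2EPT")]
matches = [normalize_serial(a) == normalize_serial(b) for a, b in pairs]
print(matches)
```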
What you can and should do is to clean your data: replace all these `2,0,5` with `Z,O and S`. But if you want to try some other solution, then you can try something like this ``` select case when REPLACE(REPLACE(REPLACE('stop','0','o'),'5','s'),'2','Z') = REPLACE(REPLACE(REPLACE('5t0p','0','o'),'5','s'),'2','Z') then 1 else 2 end ```
Ignoring specific letters to find match in SQL Query
[ "", "sql", "sql-server", "" ]
Here is my source data, ``` Group | Item | Capacity ----------------------- 1 | A | 100 1 | B | 80 1 | C | 20 2 | A | 90 2 | B | 40 2 | C | 20 ``` The above data shows the capacity to consume "something" for each item. Now suppose I have a maximum of 100 allocated to each group. I want to distribute this "100" within each group up to each item's maximum capacity. So my desired output is like this: ``` Group | Item | Capacity | consumption ------------------------------------- 1 | A | 100 | 100 1 | B | 80 | 0 1 | C | 20 | 0 2 | A | 90 | 90 2 | B | 40 | 10 2 | C | 20 | 0 ``` My question is how do I do it in a single SQL query (preferably avoiding any subquery construct). Please note, the number of items in each group is not fixed. I was trying `LAG()` with running `SUM()`, but could not quite produce the desired output... ``` select group, item, capacity, sum (capacity) over (partition by group order by item range between UNBOUNDED PRECEDING AND CURRENT ROW) run_tot from table_name ```
Without a subquery using just the analytic SUM function: ``` SQL> create table mytable (group_id,item,capacity) 2 as 3 select 1, 'A' , 100 from dual union all 4 select 1, 'B' , 80 from dual union all 5 select 1, 'C' , 20 from dual union all 6 select 2, 'A' , 90 from dual union all 7 select 2, 'B' , 40 from dual union all 8 select 2, 'C' , 20 from dual 9 / Table created. SQL> select group_id 2 , item 3 , capacity 4 , case 5 when sum(capacity) over (partition by group_id order by item) > 100 then 100 6 else sum(capacity) over (partition by group_id order by item) 7 end - 8 case 9 when nvl(sum(capacity) over (partition by group_id order by item rows between unbounded preceding and 1 preceding),0) > 100 then 100 10 else nvl(sum(capacity) over (partition by group_id order by item rows between unbounded preceding and 1 preceding),0) 11 end consumption 12 from mytable 13 / GROUP_ID I CAPACITY CONSUMPTION ---------- - ---------- ----------- 1 A 100 100 1 B 80 0 1 C 20 0 2 A 90 90 2 B 40 10 2 C 20 0 6 rows selected. ```
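The running-allocation logic that the windowed query computes can be sketched procedurally in Python — each group starts with a budget of 100, and items (in order) consume whatever is left:

```python
def allocate(rows, budget=100):
    """rows: (group, item, capacity) tuples, already ordered by item
    within each group; returns (group, item, capacity, consumption)."""
    remaining = {}
    out = []
    for group, item, capacity in rows:
        left = remaining.get(group, budget)
        used = min(left, capacity)
        remaining[group] = left - used
        out.append((group, item, capacity, used))
    return out

result = allocate([(1, "A", 100), (1, "B", 80), (1, "C", 20),
                   (2, "A", 90), (2, "B", 40), (2, "C", 20)])
print(result)
```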
Here's a solution using recursive subquery factoring. This clearly ignores your preference to avoid subqueries, but doing this in one pass might be impossible. Probably the only way to do this in one pass is to use [MODEL](http://docs.oracle.com/cd/E11882_01/server.112/e25554/sqlmodel.htm#DWHSG022), which I'm not allowed to code after midnight. Maybe someone waking up in Europe can figure it out. ``` with ranked_items as ( --Rank the items. row_number() should also randomly break ties. select group_id, item, capacity, row_number() over (partition by group_id order by item) consumer_rank from consumption ), consumer(group_id, item, consumer_rank, capacity, consumption, left_over) as ( --Get the first item and distribute as much of the 100 as possible. select group_id, item, consumer_rank, capacity, least(100, capacity) consumption, 100 - least(100, capacity) left_over from ranked_items where consumer_rank = 1 union all --Find the next row by the GROUP_ID and the artificial CONSUMER_ORDER_ID. --Distribute as much left-over from previous consumption as possible. select ranked_items.group_id, ranked_items.item, ranked_items.consumer_rank, ranked_items.capacity, least(left_over, ranked_items.capacity) consumption, left_over - least(left_over, ranked_items.capacity) left_over from ranked_items join consumer on ranked_items.group_id = consumer.group_id and ranked_items.consumer_rank = consumer.consumer_rank + 1 ) select group_id, item, capacity, consumption from consumer order by group_id, item; ``` Sample data: ``` create table consumption(group_id number, item varchar2(1), capacity number); insert into consumption select 1, 'A' , 100 from dual union all select 1, 'B' , 80 from dual union all select 1, 'C' , 20 from dual union all select 2, 'A' , 90 from dual union all select 2, 'B' , 40 from dual union all select 2, 'C' , 20 from dual; commit; ```
SQL query to Calculate allocation / netting
[ "", "sql", "oracle", "" ]
I have the following Input table: ``` Article Store Supplier NetPrice Pieces Sum Inventory Price Cond NL1234 N001 3100000 161,5 2 323 7 123,45 2,47 NL1234 N001 3100000 161,5 0 0 4 103,8 2,08 NL1234 N001 3100000 161,5 0 0 23 120,8 1,21 ``` I need to calculate the average of the price, weighted by the inventory number: Inventory\*Price summed over all selected rows, divided by the total inventory number. Mathematically, > ((7\*123.45)+(4\*103.8)+(23\*120.8))/(34) ``` SELECT Article, Store, Supplier, NetPrice, sum(Pieces) as "Pieces", sum(Sum) as "Sum", sum(Inventory) as "Inventory", (Inventory*Price)/sum(Inventory) as "Price", (Inventory*Cond)/sum(Inventory) as "Cond" FROM table_name WHERE "Article" = 'NL1234' GROUP BY STORE, SUPPLIER, NetPrice, Article ``` How can I extend/modify my select statement to get the following output: ``` Article Store Supplier NetPrice Pieces Sum Inventory Price Cond NL1234 N001 3100000 161,5 2 323 34 119,35 1,57 ```
You can't use (Inventory\*Price)/sum(Inventory) because you are not grouping by the Inventory column. You can only use aggregation functions like sum(Inventory). Therefore the query becomes as follows ``` SELECT Article, SUM(Pieces) as "Pieces", SUM(Sum) as "Sum", SUM(Inventory) as "Inventory", SUM(Inventory * Price) / SUM(Inventory) as "Price", SUM(Inventory * Cond) / SUM(Inventory) as "Cond" FROM table_name WHERE "Article" = 'NL1234' GROUP BY Article ```
Move the row totals into a `CROSS APPLY`, then use the result of that in the query like so: ``` SELECT Article, Store, Supplier, MAX(NetPrice), sum(Pieces) as "Pieces", sum(Sum) as "Sum", sum(Inventory) as "Inventory", T.TotalInvCost/sum(Inventory) as "Price", T.TotalInvCond/sum(Inventory) as "Cond" FROM table_name CROSS APPLY ( SELECT SUM(Inventory*Price) AS 'TotalInvCost' ,SUM(Inventory*Cond) AS 'TotalInvCond' FROM table_name WHERE Article = 'NL1234' ) T WHERE Article = 'NL1234' GROUP BY STORE, SUPPLIER, Article ```
Calculation on multiple rows
[ "", "mysql", "sql", "sql-server", "select", "hana", "" ]
I wanted to use the LEAST function in my procedure to find the smallest value. The problem is that some of the values might be NULL, so if I do ``` select least(NULL,0,1) from dual ``` The answer I get is NULL, which is probably correct but is not something I am expecting to be returned. I would like to get the least real non-zero value. Any help greatly appreciated.
I doubt that's actually your query. Maybe you're doing something more like this? ``` select least(some_column) from dual ``` If so, change it to this: ``` select least(some_column) from dual where some_column is not null ``` --- Or, if you're doing something more like this, where you can't just use `where` to filter the set, ``` select least(expr1,expr2,expr3) from dual ``` do this: ``` select least(coalesce(expr1, 12345), coalesce(expr2, 12345), coalesce(expr3, 12345)) from dual ``` Where `12345` is a value big enough that it would only be chosen if all other expressions are `NULL`.
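The COALESCE workaround is easy to see in a quick experiment. SQLite's multi-argument `min()` propagates NULL the same way Oracle's `LEAST` does, so this sketch with Python's sqlite3 shows both behaviours; the sentinel `BIG` is an assumed value above any real data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Like Oracle's LEAST, SQLite's multi-argument min() returns NULL
# as soon as any argument is NULL.
naive = conn.execute("SELECT min(NULL, 0, 1)").fetchone()[0]

# Wrap each argument in COALESCE with a sentinel larger than any
# legitimate value, so NULLs always lose the comparison.
BIG = 10**9
guarded = conn.execute(
    "SELECT min(coalesce(NULL, ?), coalesce(0, ?), coalesce(1, ?))",
    (BIG, BIG, BIG)).fetchone()[0]

print(naive, guarded)
```

The guarded form returns 0, which is the smallest non-NULL argument.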
If any argument is NULL, you want to take the least of the other argument(s). If all arguments are NULL, you want to return NULL. I might use something like this for two arguments: ``` LEAST(NVL(colA,colB), NVL(colB,colA)) ``` It starts getting ugly for >2 arguments though: ``` LEAST(COALESCE(colA,colB,colC) ,COALESCE(colB,colA,colC) ,COALESCE(colC,colA,colB)) ``` At which point I'd start considering magic values; but this can be buggy (e.g. what if one of the values legitimately *is* the magic value?): ``` SELECT CASE WHEN r = maxv THEN NULL ELSE r END AS result FROM (SELECT LEAST(NVL(:colA,maxv) ,NVL(:colB,maxv) ,NVL(:colC,maxv)) AS r, maxv FROM (SELECT 9.999999999999999999999999999999999999999e125 AS maxv FROM DUAL)); ```
Least value but not NULL in Oracle SQL
[ "", "sql", "oracle", "" ]
my table's schema is something like ``` Table A Id | val | updatedtime ``` Where updatedtime column is of the type Datetime, I need to find all the records from db where the time difference between current time and updatedtime of the db is less than 5 min. I couldn't find any right approach on the internet
Like such? ``` SELECT * FROM TableA WHERE updatetime > DATEADD(MI, -5, GETDATE()) ```
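The same shape can be tried in SQLite via Python's sqlite3. Comparing the column against a precomputed cutoff, as `DATEADD(MI, -5, GETDATE())` does, leaves the column bare so an index on it stays usable; the fixed `now` below is only there to keep the demo deterministic:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER, val TEXT, updatedtime TEXT)")

# Fixed "current time" so the example is deterministic; in production
# you would use the database's own clock (GETDATE() in SQL Server).
now = datetime(2014, 1, 1, 12, 0, 0)
rows = [(1, "x", now - timedelta(minutes=2)),
        (2, "y", now - timedelta(minutes=4)),
        (3, "z", now - timedelta(minutes=30))]
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(i, v, t.isoformat(" ")) for i, v, t in rows])

# Compare against a cutoff instead of wrapping the column in a function.
cutoff = (now - timedelta(minutes=5)).isoformat(" ")
recent = [r[0] for r in conn.execute(
    "SELECT id FROM a WHERE updatedtime > ? ORDER BY id", (cutoff,))]
print(recent)
```

Only the two rows updated within the last five minutes come back.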
``` SELECT * FROM TableA WHERE DATEDIFF(minute,updatedtime,GETDATE()) < 5 ``` You can use **[DATEDIFF](http://technet.microsoft.com/en-us/library/ms189794.aspx)** > Returns the count (signed integer) of the specified datepart > boundaries crossed between the specified startdate and enddate
Difference between 2 datetimes in sql server
[ "", "sql", "sql-server", "date", "datetime", "" ]
We have a table with millions of entries. The table has two columns, and there is a correlation between X and Y: when X is beyond a certain value, Y tends to be B (however, it is not always true; it's a trend, not a certainty). I want to find the threshold value for X, i.e. X1, such that at least 99% of the values which are less than X1 are B. It can be done easily in code, but is there a SQL query which can do the computation? For the dataset below the expected result is 6, because below 6 more than 99% of the values are 'B' and there is no bigger value of X for which more than 99% are 'B'. However, if I change the precision to 90% then it becomes 12, because for X < 12 more than 90% of the values are 'B' and there is no bigger value of X for which that holds. So we need to find the biggest value X1 such that at least 99% of the values less than X1 are 'B'. ``` X Y ------ 2 B 3 B 3 B 4 B 5 B 5 B 5 B 6 G 7 B 7 B 7 B 8 B 8 B 8 B 12 G 12 G 12 G 12 G 12 G 12 G 12 G 12 G 13 G 13 G 13 B 13 G 13 G 13 G 13 G 13 G 14 B 14 G 14 G ```
This is mostly inspired from the previous answer, which had some flaws. ``` select max(next_x) from ( select count(case when y='B' then 1 end) over (order by x) correct, count(case when y='G' then 1 end) over (order by x) wrong, lead(x) over (order by x) next_x from table_name ) where correct/(correct + wrong) > 0.99 ``` Sample data: ``` create table table_name(x number, y varchar2(1)); insert into table_name select 2, 'B' from dual union all select 3, 'B' from dual union all select 3, 'B' from dual union all select 4, 'B' from dual union all select 5, 'B' from dual union all select 5, 'B' from dual union all select 5, 'B' from dual union all select 6, 'G' from dual union all select 7, 'B' from dual union all select 7, 'B' from dual union all select 7, 'B' from dual union all select 8, 'B' from dual union all select 8, 'B' from dual union all select 8, 'B' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 12, 'G' from dual union all select 13, 'G' from dual union all select 13, 'G' from dual union all select 13, 'B' from dual union all select 13, 'G' from dual union all select 13, 'G' from dual union all select 13, 'G' from dual union all select 13, 'G' from dual union all select 13, 'G' from dual union all select 14, 'B' from dual union all select 14, 'G' from dual union all select 14, 'G' from dual; ```
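The running-ratio rule is easy to prototype in plain Python before committing to the windowed SQL. This is a sketch of "largest X such that at least fraction p of the values strictly below X are B"; the function name and data layout are invented for the demo:

```python
def threshold(pairs, p):
    """Largest x such that at least fraction p of the values
    strictly below x are labelled 'B'; None if no x qualifies."""
    best = None
    b = g = 0          # running counts of 'B' and non-'B' labels
    prev = None
    for x, y in sorted(pairs):
        if x != prev:  # counts so far cover all values strictly below x
            if b + g > 0 and b / (b + g) >= p:
                best = x
            prev = x
        if y == "B":
            b += 1
        else:
            g += 1
    return best

# The sample data from the question.
data = ([(2, "B")] + [(3, "B")] * 2 + [(4, "B")] + [(5, "B")] * 3 +
        [(6, "G")] + [(7, "B")] * 3 + [(8, "B")] * 3 +
        [(12, "G")] * 8 + [(13, "G")] * 7 + [(13, "B")] +
        [(14, "B")] + [(14, "G")] * 2)

print(threshold(data, 0.99), threshold(data, 0.90))
```

This reproduces the 6 (at 99%) and 12 (at 90%) answers stated in the question.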
Ok, I think this accomplishes what you want to do, but it will **not** work for the data volume you are mentioning. I'm posting it anyway in case it can help someone else provide an answer. This may be one of those cases where the most efficient way is to use a cursor with sorted data. Oracle has some built-in functions for correlation analysis, but I've never worked with them so I don't know how they work. ``` select max(x) from (select x ,y ,num_less ,num_b ,num_b / nullif(num_less,0) as percent_b from (select x ,y ,(select count(*) from table b where b.x<a.x) as num_less ,(select count(*) from table b where b.x<a.x and b.y = 'B') as num_b from table a ) where num_b / nullif(num_less,0) >= 0.99 ); ``` The inner select does the following: For every value of X * Count the nr of values < X * Count the nr of 'B' The next SELECT computes the ratio of B's and filters only the rows where the ratio is above the threshold. The outer one just picks the max(x) from those remaining rows. **Edit**: The non-scalable part of the above query is the semi-cartesian self-joins.
Calculating data point which have Precision of 99%
[ "", "sql", "oracle", "aggregate-functions", "precision", "percentile", "" ]
Within a stored proc, I want to include an AND clause in a query only if a particular condition is satisfied. ``` INSERT INTO #firstResults SELECT x, y, z FROM ep WHERE ep.proposedBy = @Id AND ep.pId IN ( SELECT DISTINCT pId FROM #tempProj) ``` The 'AND' clause should join in only when #tempProj is not empty. What is the most elegant way to implement this? I am on SQL Server 2008. Since this would be repeated several times over the proc, I wish to avoid branching IF statements.
Try this ... ``` AND (ep.pId IN (SELECT DISTINCT pId FROM #tempProj) OR (SELECT COUNT(*) FROM #tempProj) = 0) ```
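The same only-filter-when-non-empty trick can be sketched in SQLite through Python's sqlite3. `NOT EXISTS` stands in for the `COUNT(*) = 0` escape hatch and behaves the same way here; table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ep (x INTEGER, pid INTEGER);
    CREATE TABLE tempproj (pid INTEGER);
    INSERT INTO ep VALUES (1, 10), (2, 20), (3, 30);
""")

query = """SELECT x FROM ep
           WHERE pid IN (SELECT DISTINCT pid FROM tempproj)
              OR NOT EXISTS (SELECT 1 FROM tempproj)
           ORDER BY x"""

# With tempproj empty the OR branch is true for every row.
empty_case = [r[0] for r in conn.execute(query)]

# Once tempproj has rows, only matching pids survive.
conn.execute("INSERT INTO tempproj VALUES (20)")
filtered_case = [r[0] for r in conn.execute(query)]
print(empty_case, filtered_case)
```

One predicate, no IF branching, and it flips behaviour as the temp table fills.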
``` IF EXISTS (SELECT * FROM #tempProj) BEGIN INSERT INTO #firstResults SELECT x, y, z FROM ep WHERE ep.proposedBy = @Id AND ep.pId IN ( SELECT DISTINCT pId FROM #tempProj) END ELSE BEGIN INSERT INTO #firstResults SELECT x, y, z FROM ep WHERE ep.proposedBy = @Id END ```
Dynamic AND clause
[ "", "sql", "sql-server", "" ]
So I have a file that has over 100 entries in it as an Excel worksheet. I want to move those into a SQL table. So I fire up SQL Developer and try to import the data, but it doesn't show up. ![No data to preview](https://i.stack.imgur.com/uDcFA.png) The Next and Finish buttons don't do anything. (The blue underlined words aren't links to anything either, FYI.)
Have you tried converting the original file to text (csv) then importing? That has worked for me in the past.
I had the same problem and the only way to get rid of it was to rename the preferences folder (as described here: <https://www.thatjeffsmith.com/archive/2015/08/how-to-reset-your-sql-developer-preferencessettings/>) and start the program with factory defaults.
SQL developer Import data wizard comes up blank
[ "", "sql", "sql-server", "excel", "import", "oracle-sqldeveloper", "" ]
I am designing a DB and I cannot figure out how to model the following situation: * There is a main table that is called "Transaction". * Every transaction has a "Status" to describe it. * Every status has 1 or 2 "Substatus" to describe it. * A "Substatus" can have a "Subsubstatus" to describe it. Moreover, I need to express in the model that every "substatus" or "subsubstatus" is strictly linked to its master table: indeed, for a given "substatus" there is only one status possible. The link between "Status", "Substatus" and "Subsubstatus" logically looks like this: * Status : STA\_Id,STA\_Name * Substatus : SST\_Id,\*STA\_Id\*,SST\_Name * Subsubstatus : SSS\_Id,\*SST\_Id\*,SSS\_Name But the problem is how to link that to the "Transaction" table, taking into account that a transaction can have 2 substatuses and a subsubstatus. I thought of linking "Subsubstatus" to "Transaction" but that forces me to give a subsubstatus to every transaction, which is not really the case. If you have an idea about that, it would be awesome!
You are describing several things. 1. `Transaction` has a `Status`, a `SubStatus` and a `SubSubStatus`. 2. There is a parent-child relationship between `Status` and `SubStatus`. 3. There is a parent-child relationship between `SubStatus` and `SubSubStatus`. 4. A `Transaction`'s `SubStatus` is constrained by its `Status`, defined by the relationship between `Status` and `SubStatus`. 5. A `Transaction`'s `SubSubStatus` is constrained by its `SubStatus`, defined by the relationship between `SubStatus` and `SubSubStatus`. Points 1, 2 and 3 are obviously defined by appropriate foreign keys. Points 4 and 5 will need to be defined by more complicated constraints. These constraints will depend on how your database is implemented. --- Alternatively, you could denormalise all valid combinations into a single entity. This would make it easy to ensure a `Transaction` has a valid combination, but problematic when the relationships between statuses change.
Store the all the (sub)\*statuses in the same table. ``` STA_Id,STA_Name,*SuperStatus_ID* ``` You might also want an extra field, `STA_Depth` to indicate how many sub levels you are down.
Database design: different precision level to describe a table
[ "", "sql", "database", "database-design", "" ]
I am trying to check if there are any zeros in the Flag1 or Flag2 fields, but I am getting the wrong results when I run the code below. In this case I know there is one zero in the Flag1 field, but my count is zero when I run the SQL. If there is a zero in either field then I expect the count to be >= 1. If there is no zero in either field then I expect a zero count. How can I get that? Thanks. Here is my code: ``` select count(*) from myTable where FLAG1 in(0) and FLAG2 in(0) and ID = 202 ``` Here is an example of what I have: ``` FLAG1 FLAG2 1 1 1 1 1 1 0 1 ```
``` select count(*) from myTable where (FLAG1 = 0 or FLAG2 = 0) and ID = 202 ```
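The AND-versus-OR difference is quick to confirm with a throwaway SQLite table (via Python's sqlite3; names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (flag1 INTEGER, flag2 INTEGER, id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 1, 202), (1, 1, 202), (1, 1, 202), (0, 1, 202)])

# AND demands a zero in *both* columns on the same row: finds nothing.
wrong = conn.execute(
    "SELECT COUNT(*) FROM t WHERE flag1 = 0 AND flag2 = 0 AND id = 202"
).fetchone()[0]

# OR catches a zero in *either* column: finds the one row.
right = conn.execute(
    "SELECT COUNT(*) FROM t WHERE (flag1 = 0 OR flag2 = 0) AND id = 202"
).fetchone()[0]
print(wrong, right)
```

Note the parentheses around the OR; without them the AND on `id` would bind tighter.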
Instead of `and` use `or` (and don't use `in` unless you provide a list of values or a subquery): ``` select count(*) from myTable where (FLAG1=0 or FLAG2=0) and ID = 202 ```
How to check if there is any Zeros in Column Fields using SQL
[ "", "sql", "sql-server", "t-sql", "" ]
``` iEndDate = Year(Now) For iStartDate As Int32 = 1900 To iEndDate StartDate = Format(dtpSD.Value, "dd/MMM") + "/" + iStartDate.ToString EndDate = Format(dtpED.Value, "dd/MMM") + "/" + iStartDate.ToString lSql = lSql & " And" & " MD.MD_DOB >= '" & StartDate.ToString & "' AND MD.MD_DOB <= '" & EndDate.ToString & "'" Next ``` I need to get rid of that 'for' loop and make this a single query. I can't use a stored procedure. `Environment - VB.Net, SQL Server 2008` For example, ``` User start date : 16/2/1900 End Date : 16/3/2014. ``` Results based on: ``` 16/2/1900-16/3/1900 16/2/1901-16/3/1901 16/2/1902-16/3/1902 . . . 16/2/2014-16/3/2014 ``` Retrieve records based on those dates.
Get the day and the numeric month values, and use the following `WHERE` clause ``` ... WHERE MONTH(MD.MD_DOB) > @StartMonth AND MONTH(MD.MD_DOB) < @EndMonth OR (MONTH(MD.MD_DOB) = @StartMonth AND DAY(MD.MD_DOB) >= @Startday) OR (MONTH(MD.MD_DOB) = @EndMonth AND DAY(MD.MD_DOB) <= @Endday) ``` You don't need to filter on the year in the `WHERE` clause, because you want all years that are in the database
I would say you can use a fixed year, for instance: ``` DECLARE @startDate DATETIME = '2000-02-01' DECLARE @endDate DATETIME = '2000-03-31'; WITH cal AS ( SELECT *, CAST('2000-' + CAST(MONTH(BirthDate) AS VARCHAR) + '-' + CAST(DAY(BirthDate) AS VARCHAR) AS DATETIME) FakeBirthday FROM yourTable ) SELECT * FROM cal WHERE FakeBirthday >= @startDate AND FakeBirthday <= @endDate ``` 2000 is used here - but it could be up to your preference. as long as it's a valid year in sql...
Single Query to get values from Sql server 2008 table based on Birthdate(dd/mmm). User will not specify Year
[ "", "sql", "sql-server", "vb.net", "sql-server-2008", "sql-server-2008-r2", "" ]
I have 3 tables ``` Item, Primary, Secondary ``` I want it to return all columns from Item, `Brandname` from Primary, and `Size, Color` from Secondary. ItemID - pk(Item) ItemID - fk(Primary) ItemID - fk(Secondary) I know how to do 2 tables but I'm having a problem with 3 tables. Here's my code: ``` From Item, Primary, Secondary Where Item.ItemID=Primary.ItemID AND Item.ItemID=Secondary.ItemID ``` Can someone point out my mistakes? I just noticed that Primary is colored blue; is it a reserved word in Access?
Since primary is a reserved word, surround the word primary with brackets [] [Edit to explain comment] Using the query that @janet wrote, try adding an open paren before the item table and a closing paren just before the second inner join. ``` SELECT a.*, b.Brandname, c.Size, c.Color FROM (Item a INNER JOIN [Primary] b ON a.ItemID = b.ItemID) INNER JOIN [Secondary] c ON b.ItemID = c.ItemID ```
try this: ``` From Item JOIN [Primary] ON Item.ItemID=[Primary].ItemID JOIN [Secondary] ON Item.ItemID=[Secondary].ItemID ```
How to join 3 tables in MS access
[ "", "sql", "ms-access", "" ]
I need to find how many true bits exist in my binary value. Example: ``` input: 0001101 output:3 input: 1111001 output:5 ```
``` DECLARE @BinaryVariable2 VARBINARY(10); SET @BinaryVariable2 = 60; -- binary value is 111100 DECLARE @counter int = 0 WHILE @BinaryVariable2 > 0 SELECT @counter +=@BinaryVariable2 % 2, @BinaryVariable2 /= 2 SELECT @counter ``` Result: ``` 4 ```
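The divide-by-two loop ports directly to other languages. Here is a Python sketch of the same idea, cross-checked against simply counting '1' digits in the binary representation:

```python
def popcount(n):
    """Count set bits by repeatedly peeling off the lowest bit,
    mirroring the T-SQL WHILE loop."""
    count = 0
    while n > 0:
        count += n % 2   # lowest bit
        n //= 2          # shift right
    return count

def popcount_str(n):
    # Shortcut: count '1' digits in the binary representation.
    return bin(n).count("1")

print(popcount(60), popcount(0b1111001), popcount(0b0001101))
```

Both agree on the question's examples (5 for 1111001, 3 for 0001101).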
While both answers work, both have issues. A loop is not optimal, and it destroys the value. Also, neither solution can be used in a select statement. A possibly better solution is masking, as follows ``` select @counter = 0 + case when @BinaryVariable2 & 1 = 1 then 1 else 0 end + case when @BinaryVariable2 & 2 = 2 then 1 else 0 end + case when @BinaryVariable2 & 4 = 4 then 1 else 0 end + case when @BinaryVariable2 & 8 = 8 then 1 else 0 end + case when @BinaryVariable2 & 16 = 16 then 1 else 0 end + case when @BinaryVariable2 & 32 = 32 then 1 else 0 end + case when @BinaryVariable2 & 64 = 64 then 1 else 0 end + case when @BinaryVariable2 & 128 = 128 then 1 else 0 end + case when @BinaryVariable2 & 256 = 256 then 1 else 0 end + case when @BinaryVariable2 & 512 = 512 then 1 else 0 end ``` This can be used in select and update statements. It is also an order of magnitude faster (on my server, about 50 times). To help, you might want to use the following generator code ``` declare @x int = 1, @c int = 0 print ' @counter = 0 ' /*CHANGE field/parameter name */ while @c < 10 /* change to how many bits you want to see */ begin print ' + case when @BinaryVariable2 & ' + cast(@x as varchar) + ' = ' + cast(@x as varchar) + ' then 1 else 0 end ' /* CHANGE the variable/field name */ select @x *=2, @c +=1 end ``` Also, as a further note: if you use a bigint or go beyond 32 bits, it is necessary to cast as follows ``` print ' + case when @Missing & cast(' + cast(@x as varchar) + ' as bigint) = ' + cast(@x as varchar) + ' then 1 else 0 end ' ``` Enjoy
Calculate Count of true bits in binary type with t-sql
[ "", "sql", "t-sql", "count", "binary", "varbinary", "" ]
I would like to find out for which discount ids the expirationDate has changed. Here is a snapshot of the table. I will need to do a self join. ``` tablename discountid expirationdate NEW 182150 2013-12-02 00:00:00.000 OLD 182150 2099-12-31 00:00:00.000 NEW 182151 2013-12-02 00:00:00.000 OLD 182151 2099-12-31 00:00:00.000 NEW 182152 2013-12-02 00:00:00.000 OLD 182152 2099-12-31 00:00:00.000 NEW 192608 2013-12-02 00:00:00.000 OLD 192608 2099-12-31 00:00:00.000 NEW 192609 2013-12-02 00:00:00.000 OLD 192609 2099-12-31 00:00:00.000 ```
From the comments I gather that you don't only look for changes of expiration dates, but for several columns. Here is a solution: ``` select discountid , decode(min(a),max(a),0,1) as a_changed , decode(min(b),max(b),0,1) as b_changed , decode(min(c),max(c),0,1) as c_changed from test group by discountid having decode(min(a),max(a),1,0) = 0 or decode(min(b),max(b),1,0) = 0 or decode(min(c),max(c),1,0) = 0; ``` EDIT: For non-Oracle dbms: ``` select discountid , case when min(a) = max(a) then 0 else 1 end as a_changed , case when min(b) = max(b) then 0 else 1 end as b_changed , case when min(c) = max(c) then 0 else 1 end as c_changed from test group by discountid having min(a) <> max(a) or min(b) <> max(b) or min(c) <> max(c); ```
Here is one way to find the ids having two different expiration dates in the table: ``` select discountid from mytable group by discountid having count(distinct expirationdate) > 1; ```
I would like to find out for which discount ids the expirationDate has changed. Here is a snapshot of the table. I will need to do a self join
[ "", "sql", "" ]
MySQL query: ``` CREATE TABLE FC_Categories ( CategoryID INTEGER NOT NULL AUTO_INCREMENT , CategoryName VARCHAR( 15 ) , Description TEXT, Picture LONGBLOB, UNIQUE ( CategoryName ), PRIMARY KEY ( CategoryID ) ) TYPE = MYISAM ; ``` MySQL error after running query: ``` Documentation #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'TYPE=MyISAM' at line 8 ```
`Type` is obsolete now, hence the error. Use `Engine=MyISAM` instead.
please use `ENGINE=MyISAM` instead.
how to remove mysql error #1064?
[ "", "mysql", "sql", "syntax-error", "" ]
I'm creating a commenting system, which will have 2 top comments. How can I select the latest 20 rows, and then from that selection, select the top 2 rows (likes-dislikes)? I can do it with a PHP loop, but it would not be as efficient. Currently I am just selecting the top 2 from all the comments, but the two top comments never change, since people just up-vote those ones: ``` SELECT * FROM pagecomments WHERE page_id='$pageid' ORDER BY likes-dislikes DESC LIMIT 2 ``` EDIT: The table is ordered by the column "id", which is auto\_increment. page\_id is the page on the site. Sorry.
Nest your query: ``` SELECT * FROM ( SELECT * FROM pagecomments WHERE page_id='$pageid' ORDER BY date DESC LIMIT 20 ) t ORDER BY likes-dislikes DESC LIMIT 2 ```
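Here is a scaled-down version of the nested query (latest 3 instead of 20, so the effect is visible with a handful of rows), run against SQLite via Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pagecomments
    (id INTEGER PRIMARY KEY, page_id INTEGER,
     likes INTEGER, dislikes INTEGER)""")
# id is auto_increment in the question, so "latest" = highest id.
conn.executemany("INSERT INTO pagecomments VALUES (?, ?, ?, ?)",
                 [(1, 7, 90, 0),   # old but heavily upvoted
                  (2, 7, 80, 0),   # old but heavily upvoted
                  (3, 7, 5, 1),
                  (4, 7, 9, 2),
                  (5, 7, 3, 0)])

# Inner query: the 3 most recent comments; outer: best 2 of those.
top = [r[0] for r in conn.execute("""
    SELECT id FROM (
        SELECT id, likes - dislikes AS score
        FROM pagecomments
        WHERE page_id = 7
        ORDER BY id DESC LIMIT 3
    ) AS t
    ORDER BY score DESC LIMIT 2""")]
print(top)
```

The two old heavy hitters (ids 1 and 2) never make it in, which is exactly the point of nesting.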
Order not only by likes; first order by date entered or a timestamp. By ordering by date, you ensure that you'll get the latest 20 posts or comments. ``` SELECT * FROM pagecomments WHERE page_id='$pageid' ORDER by date_entered desc , likes-dislikes DESC limit 2 ```
How to select from a selection?
[ "", "mysql", "sql", "" ]
Consider a table Users with columns Id, Name, Surname and a table Actions with columns Ip and Actor. I need to retrieve, for every Ip, the set of users who did an action using that Ip. What I have now looks like: ``` SELECT a.ip, ( SELECT GROUP_CONCAT(t.id, '-', t.name, ' ', t.surname) FROM( SELECT ud.id, ud.name, ud.surname FROM users_data AS ud JOIN actions AS a2 ON a2.actor = ud.id WHERE a2.ip = a.ip GROUP BY ud.id) AS t ) FROM actions AS a WHERE a.ip != '' AND a.ip != '0.0.0.0' GROUP BY a.ip ``` It doesn't work because a.ip is unknown in the where clause of the inner subquery. Due to performance issues, I need to avoid using DISTINCT. Any suggestions?
I solved it using this query (still quite slow, so there's still space for improvements...): ``` SELECT SQL_NO_CACHE t.ip, COUNT(t.id) AS c, GROUP_CONCAT(t.id, '-', t.name, ' ', t.surname, '-', t.designerAt > 0) FROM ( SELECT a.ip, ud.id, ud.name, ud.surname, u.designerAt FROM actions AS a JOIN users_data AS ud ON ud.id = a.actor JOIN users AS u ON u.id = a.actor WHERE a.ip != '' AND a.ip != '0.0.0.0' AND a.actor !=0 GROUP BY a.ip, a.actor ) AS t GROUP BY t.ip ```
You can rewrite your query as ``` SELECT n.ip, GROUP_CONCAT( DISTINCT n.your_user SEPARATOR ' -- ') `users` FROM ( SELECT a.ip AS ip, CONCAT(t.id, '-', t.name, ' ', t.surname) `your_user` FROM users_data AS ud JOIN actions AS a ON a.actor = ud.id ) `new_table` n WHERE n.ip != '' AND n.ip != '0.0.0.0' GROUP BY n.ip ``` > Note: Be aware that the result is truncated to the maximum length > that is given by the [group\_concat\_max\_len](http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_group_concat_max_len) system variable, which > has a default value of 1024
mysql - rewrite query with subqueries
[ "", "mysql", "sql", "" ]
I created a fiddle for this, at this link: <http://www.sqlfiddle.com/#!2/7e007> I couldn't find SQL Compact / CE so it's in MySQL. The tables look like this ``` Records Clients ID | NAME | AGE ID | NAME ------------------ ---------------- 1 | John | 20 1 | John 2 | Steven | 30 2 | Daniel 3 | Abraham | 30 3 | 4 | Donald | 25 5 | Lisa 6 | | 35 6 | Michael 7 | | 42 7 | ``` I would like to select from both tables, and if the id is in both tables and both have names, I would like the name from "Clients" as the default. If the name in Records is blank, use the client name (if any), and if Clients.Name is blank, use Records.Name. From the tables above, I would like this: ``` ID | NAME | AGE ------------------ 1 | John | 20 2 | Daniel | 30 3 | Abraham | 30 4 | Donald | 25 5 | Lisa | 6 | Michael | 35 7 | | 42 ``` How do I do this in SQL Compact? EDIT: Thanks to great answers below, I've managed to come up with this query, which ALMOST works: ``` SELECT t.id, t.name, t.age FROM ( ( SELECT r.id, CASE WHEN r.name = NULL THEN c.name ELSE r.name END as name, r.age FROM Records r LEFT JOIN Clients c ON c.id = r.id ) UNION ( SELECT c.id, c.name, null as age FROM Clients c where c.id NOT IN (select id from Records) ) ) as t ORDER BY t.id ``` This gives me this output: ``` ID | NAME | AGE ------------------ 1 | John | 20 2 | Daniel | 30 3 | Abraham | 30 4 | Donald | 25 5 | Lisa | 6 | | 35 7 | | 42 ``` "Michael" (should be on #6) is missing in this case. Why?!
``` select r.id, IF(c.name != '',c.name,r.name) as name, r.age FROM Records r LEFT JOIN Clients c ON c.id = r.id GROUP BY c.id ``` Use the above query. **EDITED:** ``` SELECT t.id, t.name, t.age FROM ( ( SELECT r.id, CASE WHEN c.name <> '' THEN c.name ELSE r.name END as name, r.age FROM Records r LEFT JOIN Clients c ON c.id = r.id ) UNION ( SELECT c.id, c.name, null as age FROM Clients c where c.id NOT IN (select id from Records) ) ) as t ORDER BY t.id ``` Use this query.
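The whole merge can be reproduced in SQLite via Python's sqlite3. One detail worth spelling out: `r.name = NULL` is never true in SQL (comparisons with NULL yield unknown), and the blank names here are empty strings rather than NULL anyway. `NULLIF(name, '')` normalises them so `COALESCE` can fall through, which is one way to recover Michael on id 6:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (id INTEGER, name TEXT, age INTEGER);
    CREATE TABLE clients (id INTEGER, name TEXT);
    INSERT INTO records VALUES (1,'John',20),(2,'Steven',30),
        (3,'Abraham',30),(4,'Donald',25),(6,'',35),(7,'',42);
    INSERT INTO clients VALUES (1,'John'),(2,'Daniel'),(3,''),
        (5,'Lisa'),(6,'Michael'),(7,'');
""")

# NULLIF('') -> NULL lets COALESCE skip blank names from either side.
merged = conn.execute("""
    SELECT r.id AS id,
           COALESCE(NULLIF(c.name, ''), NULLIF(r.name, ''), '') AS name,
           r.age AS age
    FROM records r LEFT JOIN clients c ON c.id = r.id
    UNION ALL
    SELECT c.id AS id, c.name AS name, NULL AS age
    FROM clients c
    WHERE c.id NOT IN (SELECT id FROM records)
    ORDER BY id""").fetchall()
print(merged)
```

The result matches the desired table row for row, including Lisa with no age and the nameless id 7.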
``` select case when a.id <> '' then a.id else b.id end as id , case when a.name <> '' then a.name else b.name end as name,a.age from records a full outer join clients b on a.Id = b.id order by a.id ```
Select from other table if value exist
[ "", "sql", "sql-server-ce", "" ]
I've been creating a database in derby/netbeans. And I'd like to output the structure of the database, not just exporting the whole database. How do I do this? I've tried both "EXEC 'table name';" which returned "Error code -1, SQL state 42X01: Syntax error: Encountered "exec" at line 1, column 1." and "SELECT \* FROM INFORMATION\_SCHEMA.COLUMNS WHERE table\_name = 'table name';" which returned "Error code -1, SQL state 42Y07: Schema 'INFORMATION\_SCHEMA' does not exist". I've read on multiple forums that this should work, do you guys have any idea what I'm doing wrong?
**GET TABLE STRUCTURE** ``` select COLUMNNAME,COLUMNDATATYPE FROM sys.systables t, sys.syscolumns WHERE TABLEID = REFERENCEID and tablename = 'FRIENDS' ``` ![enter image description here](https://i.stack.imgur.com/bdC1p.jpg) Other fields you can use in select * COLUMNDEFAULT * COLUMNDEFAULTID * AUTOINCREMENTVALUE * AUTOINCREMENTSTART * AUTOINCREMENTINC **Inside Netbeans** Expand the Tables node under the sample database connection, right-click the table node and choose Grab Structure. ![enter image description here](https://i.stack.imgur.com/xivwb.jpg) In the Grab Table dialog that opens, specify a location on your computer to save the grab file that will be created. Click Save. The grab file records the table definition of the selected table. Expand the APP schema node under the Contact DB database connection, right-click the Tables node and choose Recreate Table to open the Recreate Table dialog box. ![enter image description here](https://i.stack.imgur.com/NxfKC.jpg) In the Recreate Table dialog box, navigate to the location where you saved the CUSTOMER grab file and click Open to open the Name the Table dialog box. ![enter image description here](https://i.stack.imgur.com/XyRdG.jpg) **GET TABLES** A complete list. ``` select * from SYS.SYSTABLES; ``` ![enter image description here](https://i.stack.imgur.com/QRsrR.jpg) Only TABLENAME ``` select TABLENAME from SYS.SYSTABLES where TABLETYPE='T' ``` ![enter image description here](https://i.stack.imgur.com/KkyvC.jpg) [Derby Table](http://db.apache.org/derby/docs/10.8/ref/rrefsistabs24269.html)
The simplest with NetBeans (8.0, perhaps with the previous versions too): "View Data..." of the table, right click on the data and choose "Show SQL Script for CREATE". You can copy the SQL script.
Get table schema or structure in netbeans, derby
[ "", "sql", "netbeans", "schema", "structure", "derby", "" ]
Hello I have three tables that I have joined but it returns empty result even though there suppose to some result. Here is my sql ``` SELECT c.code,c.name, a.ltp as begning, b.ltp as enddate, d.interim_cash,d.interim_rec_date, CAST(((b.ltp - a.ltp) / a.ltp * 100) AS DECIMAL(10, 2)) as chng FROM eod_stock a JOIN eod_stock b ON a.company_id = b.company_id LEFT OUTER JOIN company AS c ON c.ID = a.company_id RIGHT JOIN divident_info AS d ON c.ID = d.company_id WHERE a.entry_date = "2012-09-24" AND b.entry_date = "2012-09-25" AND d.interim_rec_date BETWEEN "2012-09-24" AND "2012-09-25" AND a.company_id IN (13, 2) AND d.company_id IN (13,2); ``` The result i am expecting is like this: ``` +--------+-----------------+---------+--------+--------+------------------+------------+ | code | name | begning | end | chng | interim_rec_date |interim_cash| +--------+-----------------+---------+--------+--------+------------------+------------+ | ABBANK | AB BANK LIMITED | 518.00 | 459.00 | -11.39 |2012-09-24 |10 | | 1STICB | 1ST ICB M.F. | 227.00 | 253.00 | 11.45 | | | +--------+-----------------+---------+--------+--------+------------------+------------+ ``` But I am getting empty set in my result is this because the second one interim info is 0? how can I get all info like above if row is empty then it could be blank but i need other info related to it. 
Here are my table schema: eod\_stock: ``` +-----------------+------------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------+-------+ | company_id | varchar(30) | NO | PRI | NULL | | | entry_date | date | NO | PRI | NULL | | | entry_timestamp | int(10) unsigned | NO | | NULL | | | open | decimal(16,2) | NO | | NULL | | | high | decimal(16,2) | NO | | NULL | | | low | decimal(16,2) | NO | | NULL | | | ltp | decimal(16,2) | NO | | NULL | | | ycp | decimal(16,2) | NO | | NULL | | | cse_price | decimal(9,2) | NO | | NULL | | | cse_volume | decimal(18,2) | NO | | NULL | | | total_trade | int(30) | NO | | NULL | | | total_volume | int(30) | NO | | NULL | | | total_value | decimal(18,4) | NO | | NULL | | +-----------------+------------------+------+-----+---------+-------+ ``` divident Info: ``` +------------------+--------------+------+-----+-------------------+-----------------------------+ | Field | Type | Null | Key | Default | Extra | +------------------+--------------+------+-----+-------------------+-----------------------------+ | divident_ID | int(11) | NO | PRI | NULL | auto_increment | | company_id | int(11) | NO | | NULL | | | year | year(4) | NO | | NULL | | | right_base | int(11) | NO | | NULL | | | right_new | int(11) | NO | | NULL | | | right_dec_date | date | NO | | NULL | | | right_rec_date | date | NO | | NULL | | | interim_cash | decimal(6,2) | NO | | NULL | | | interim_stock | decimal(8,2) | NO | | NULL | | | interim_dec_date | date | NO | | NULL | | | interim_rec_date | date | NO | | NULL | | | annual_cash | decimal(6,2) | NO | | NULL | | | annual_stock | decimal(8,2) | NO | | NULL | | | annual_dec_date | date | NO | | NULL | | | annual_rec_date | date | NO | | NULL | | | update_time | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP | +------------------+--------------+------+-----+-------------------+-----------------------------+ ``` 
Please can you help me to find out the result?
So finally I got what I needed. Here is the query: ``` SELECT c.code,c.name, a.ltp as begning, b.ltp as enddate, d.interim_cash,d.interim_rec_date, d.annual_rec_date,d.annual_cash, CAST(((b.ltp - a.ltp) / a.ltp * 100) AS DECIMAL(10, 2)) as chng FROM eod_stock AS a LEFT OUTER JOIN eod_stock AS b ON a.company_id = b.company_id LEFT OUTER JOIN company AS c ON c.ID = a.company_id LEFT OUTER JOIN dividend_info AS d ON c.ID = d.company_id AND d.interim_rec_date BETWEEN "2012-09-24" AND "2012-09-25" AND d.annual_rec_date BETWEEN "2012-09-24" AND "2012-09-25" WHERE a.entry_date = "2013-09-24" AND b.entry_date = "2013-09-25" AND a.company_id IN (13, 2,4,5); ``` Thanks Gordon Linoff for putting me on the right track; I was messing up with the right join
This is your query: ``` SELECT c.code,c.name, a.ltp as begning, b.ltp as enddate, d.interim_cash,d.interim_rec_date, CAST(((b.ltp - a.ltp) / a.ltp * 100) AS DECIMAL(10, 2)) as chng FROM eod_stock a JOIN eod_stock b ON a.company_id = b.company_id LEFT OUTER JOIN company AS c ON c.ID = a.company_id RIGHT JOIN divident_info d ON c.ID = d.company_id WHERE a.entry_date = "2012-09-24" AND b.entry_date = "2012-09-25" AND d.interim_rec_date BETWEEN "2012-09-24" AND "2012-09-25" AND a.company_id IN (13, 2) AND d.company_id IN (13, 2); ``` Personally, I find it very hard to follow queries where `left join`s are mixed with `right join`s: much easier to have a structure only using `left join`s so you can readily see which table drives the query. In any case, your `where` clause is undoing the effects of the outer joins. In addition to the conditions you have explicitly listed, you are also saying: ``` a.entry_date is not null and b.entry_date is not null and d.interim_rec_date is not null and a.company_id is not null and d.company_id is not null ``` It is really hard to say which of these conditions you need to eliminate to get the results you want. What I can say is that some of these should go into the `on` clauses rather than the `where` clause. That is the best solution. You can also change the `where` to be something like this: ``` WHERE a.entry_date = "2012-09-24" AND b.entry_date = "2012-09-25" AND d.interim_rec_date BETWEEN "2012-09-24" AND "2012-09-25" AND a.company_id IN (13, 2) AND d.company_id IN (13, 2) AND (a.entry_date is null or a.company_id is null or b.entry_date is null) ``` However, you might want only one or two of those conditions. Note the conditions on `d` should stay in the `where` clause because the `right outer join` is keeping all rows in `d`, so the would not be `NULL` (unless `NULL` in the original data).
Mysql three table join results empty set
[ "", "mysql", "sql", "outer-join", "right-join", "" ]
Given these entries in a table `table`: ``` user entry A 1 A 2 A 5 A 6 B 1 B 2 B 3 B 4 B 5 B 6 C 1 C 4 D 1 D 2 D 5 D 6 D 7 D 9 ``` And we have a subset `entries_A` to work with, which is the array `[1,2,5,6]`. **Problems:** 1. Find all users that have the same entries [1,2,5,6] and more, e.g. [1,2,5,6,7] or [1,2,3,5,6]. 2. Find all users that have a lot of the same entries (and more), e.g. [1,2,5,9] or [2,5,6,3]. The best solution to the first problem I could come up with is the following select query: ``` SELECT DISTINCT user AS u FROM table WHERE EXISTS (SELECT * FROM table WHERE entry=1 AND user=u) AND EXISTS(SELECT * FROM table WHERE entry=2 AND user=u) AND EXISTS(SELECT * FROM table WHERE entry=5 AND user=u) AND EXISTS(SELECT * FROM table WHERE entry=6 AND user=u) ``` On the other hand, I get a feeling there's some algebraic vector problem lurking below the surface (especially for problem two), but I can't seem to wrap my head around it. All ideas welcome!
I think the easiest way to perform this type of query is using aggregation and `having`. Here is an example. To get users that have exactly those four elements: ``` select user from table group by user having count(distinct case when entry in (1, 2, 5, 6) then entry end) = 4 and count(distinct entry) = 4; ``` To get users that have those four elements and perhaps others: ``` select user from table group by user having count(distinct case when entry in (1, 2, 5, 6) then entry end) = 4; ``` To order users by the number of matches they have and the number of other matches: ``` select count(distinct case when entry in (1, 2, 5, 6) then entry end) as Matches, count(distinct case when entry not in (1, 2, 5, 6) then entry end) as Others, user from table group by user order by Matches desc, Others; ```
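The counting approach is easy to check against the question's data. A sketch with SQLite via Python (the table is named `entries` here because `table` is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (user TEXT, entry INTEGER);
    INSERT INTO entries VALUES
        ('A',1),('A',2),('A',5),('A',6),
        ('B',1),('B',2),('B',3),('B',4),('B',5),('B',6),
        ('C',1),('C',4);
""")

# Users holding every element of {1,2,5,6} (and possibly more):
# count the distinct matching entries per user and require all four.
rows = conn.execute("""
    SELECT user FROM entries
    GROUP BY user
    HAVING COUNT(DISTINCT CASE WHEN entry IN (1,2,5,6) THEN entry END) = 4
    ORDER BY user
""").fetchall()
print(rows)  # [('A',), ('B',)]
```

`A` matches exactly, `B` is a superset, and `C` (missing 2, 5, 6) is excluded.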
This is how I would do your first query (though I think Gordon Linoff's answer is more efficient): ``` select distinct user from so s1 where not exists ( select * from so s2 where s2.entry in (1,2,5,6) and not exists ( select * from so s3 where s2.entry = s3.entry and s1.user = s3.user ) ); ``` For the second problem, you would need to specify what `a lot` should mean... three, four, ...
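The double `NOT EXISTS` above is classic relational division: "there is no required entry that this user is missing." A runnable sketch (SQLite via Python; table and column names follow the answer's `so` table, with the question's sample users):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE so (user TEXT, entry INTEGER);
    INSERT INTO so VALUES
        ('A',1),('A',2),('A',5),('A',6),
        ('B',1),('B',2),('B',3),('B',4),('B',5),('B',6),
        ('C',1),('C',4);
""")

# Relational division: keep users for whom no entry in {1,2,5,6}
# lacks a matching row for that user.
rows = conn.execute("""
    SELECT DISTINCT s1.user FROM so s1
    WHERE NOT EXISTS (
        SELECT * FROM so s2
        WHERE s2.entry IN (1,2,5,6)
          AND NOT EXISTS (
              SELECT * FROM so s3
              WHERE s2.entry = s3.entry AND s1.user = s3.user))
    ORDER BY s1.user
""").fetchall()
print(rows)  # [('A',), ('B',)]
```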
What's an easy way to perform this complicated SELECT query?
[ "", "mysql", "sql", "performance", "" ]
I am trying to group my station codes as shown below; however, my result set keeps outputting everything and `others` multiple times. I want to group everything by my alias names; however, for my station `others` it doesn't seem like it is grouping it. ``` WITH station as ( SELECT CASE WHEN t.station='AB' THEN 'AB' WHEN t.station='AS' THEN 'AS' WHEN t.station='KF' THEN 'KF' WHEN t.station='PR' THEN 'PR' WHEN t.station='SV' THEN 'SV' WHEN t.station='RE' THEN 'RE' WHEN t.station='TVR' OR t.station='TDT' THEN 'TDT' ELSE 'Others' END AS 'station' FROM t ), porders as ( SELECT CASE WHEN t.station='AB' THEN 'AB' WHEN t.station='AS' THEN 'AS' WHEN t.station='KF' THEN 'KF' WHEN t.station='PR' THEN 'PR' WHEN t.station='SV' THEN 'SV' WHEN t.station='RE' THEN 'RE' WHEN t.station='TVR' OR t.station='TDT' THEN 'TDT' ELSE 'Others' END AS 'station', COUNT(*) AS 'p_orders' FROM a, t, v, station WHERE a.psn = t.psn AND t.highest_psn = 0 AND v.detailno_i = a.detailno_i AND t.station!='' AND a.status=1 AND v.order_type='P' GROUP BY t.station ) SELECT station.station, porders.p_orders FROM station join porders on(station.station=porders.station) ``` **Output**: ``` station|porders --------------- AB 2 AS 4 KF 3 PR 10 SV 2 RE 20 TDT 10 Others 1 Others 2 Others 3 ``` **Desired Output:** ``` station|porders --------------- AB 2 AS 4 KF 3 PR 10 SV 2 RE 20 TDT 10 Others 6 ```
The results you're getting are due to the fact that you have three `t.station` values that get mapped to `Others`. The statement `GROUP BY t.station` doesn't care that they get projected to `Others` in your `SELECT`. To fix this, you need to group by your `CASE` expression instead of the untranslated value `t.station`: ``` GROUP BY CASE WHEN t.station='AB' THEN 'AB' WHEN t.station='AS' THEN 'AS' WHEN t.station='KF' THEN 'KF' WHEN t.station='PR' THEN 'PR' WHEN t.station='SV' THEN 'SV' WHEN t.station='RE' THEN 'RE' WHEN t.station='TVR' OR t.station='TDT' THEN 'TDT' ELSE 'Others' END ```
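A quick way to see the difference (SQLite via Python; `XX` and `YY` are made-up station codes that both fall into the `Others` bucket):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (station TEXT);
    INSERT INTO t VALUES ('AB'), ('XX'), ('YY'), ('XX');
""")

bucket = "CASE WHEN station IN ('AB') THEN station ELSE 'Others' END"

# GROUP BY the raw column: XX and YY remain separate groups, so
# 'Others' shows up once per underlying station code.
raw = conn.execute(
    f"SELECT {bucket}, COUNT(*) FROM t GROUP BY station").fetchall()

# GROUP BY the CASE expression: one collapsed 'Others' bucket.
cased = conn.execute(
    f"SELECT {bucket}, COUNT(*) FROM t GROUP BY {bucket} ORDER BY 1").fetchall()

print(len(raw))  # 3 rows -- 'Others' appears twice
print(cased)     # [('AB', 1), ('Others', 3)]
```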
This isn't an answer, just a note that you can simplify your case statement a lot: ``` CASE WHEN t.station in ('AB', 'AS', 'KF', 'PR', 'SV', 'RE') THEN t.station WHEN t.station in ('TVR', 'TDT') THEN 'TDT' ELSE 'Others' END ```
Grouping By Alias names
[ "", "sql", "sql-server", "t-sql", "" ]
I have one function that returns all employee IDs. The function definition is like this: ``` CREATE OR REPLACE FUNCTION tmp() RETURNS setof record AS $func$ begin select emp_id from employee_master; end; $func$ LANGUAGE plpgsql; ``` But when I call this function using ``` select * from tmp() as abc(emp_id text); ``` it gives an error like ``` ERROR: query has no destination for result data HINT: If you want to discard the results of a SELECT, use PERFORM instead. CONTEXT: PL/pgSQL function "tmp" line 3 at SQL statement ``` Please give a solution :)
If you want to return a rowset from a PL/PgSQL function you must use `RETURN` - in this case, probably `RETURN QUERY`: ``` RETURN QUERY SELECT emp_id FROM employee_master; ``` I don't see the point of having this in a PL/PgSQL function at all, though.
Make the function a [plain SQL](http://www.postgresql.org/docs/current/static/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET) one as in: ``` ... LANGUAGE SQL; ```
Error when calling a function
[ "", "sql", "postgresql", "plpgsql", "sql-function", "" ]
I have a table like the one below: ``` Column | Type | Modifiers ------------+--------------------------+----------- id | integer | not null number | integer | not null status | integer | uid | integer | value | integer | comment | character varying(2000) | date | timestamp with time zone | ``` Query: `select * from table_name where id like '%58943';` Result: ``` id| number| status | uid | value| comment | date ----------+-------+------------+---------+------------+----------+------------------------------- 58943 | 5 | 1 | 1 | | | 2014-01-23 14:24:34.708676+01 58943 | 3 | 0 | 1 | 1 | | 2014-01-23 14:23:46.740663+01 58943 | 3 | 0 | 1 | 4 | [admin] | 2014-01-23 14:24:34.505752+01 58943 | 3 | 0 | 974 | 4 | [admin] | 2014-01-23 14:24:34.601017+01 58943 | 3 | 0 | 977 | 4 | [admin] | 2014-01-23 14:24:34.708676+01 58943 | 2 | 0 | 1 | | ver 12 | 2014-01-23 14:22:01.298001+01 58943 | 1 | 0 | 1 | | | 2014-01-23 14:22:01.052535+01 (7 rows) ``` Query: `select * from table_name where id like '%58944';` Result: ``` id| number| status | uid | value| comment | date ----------+-------+------------+---------+------------+----------+------------------------------- 58944 | 5 | 1 | 1 | | | 2014-01-23 14:25:34+01 58944 | 3 | 0 | 977 | 4 | looks fine | 2014-01-23 14:25:34+01 58944 | 3 | 0 | 974 | 4 | Approve all conff | 2014-01-23 14:25:34+01 58944 | 2 | 0 | 1 | | vers 12 | 2014-01-23 14:22:11.86668+01 58944 | 1 | 0 | 1 | | | 2014-01-23 14:22:11.857947+01 (5 rows) ``` **Question:** I'm trying to find those `id`s which have `5| 1| 1` (number| status| uid) but don't have `3| 0| 1| 1` (number| status| uid| value). So if I run that query on the complete table, I should get only `58944` in the result.
The query below should work for you. ``` SELECT id FROM table_name WHERE (number = 5 AND status = 1 AND uid = 1) AND NOT (number = 3 AND status = 0 AND uid = 1 AND value = 1) ``` **EDIT :** I think I might have misunderstood your question. The following query will get you ids that have `5| 1| 1 (number| status| uid)` but not `3| 0| 1| 1 (number| status| uid| value)`. ``` SELECT * FROM tab a WHERE (a.number = 5 AND a.status = 1 AND a.uid = 1) AND NOT EXISTS (SELECT id FROM tab t WHERE t.id = a.id AND t.number = 3 AND t.status = 0 AND t.uid = 1 AND t.value = 1) ``` Check out this fiddle: [**SQL FIDDLE**](http://sqlfiddle.com/#!10/bca07/4)
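The `NOT EXISTS` version can be checked with a cut-down reproduction (SQLite via Python; only the rows needed to distinguish the two sample ids):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab (id INTEGER, number INTEGER, status INTEGER,
                      uid INTEGER, value INTEGER);
    INSERT INTO tab VALUES
        (58943, 5, 1, 1, NULL), (58943, 3, 0, 1, 1),
        (58944, 5, 1, 1, NULL), (58944, 3, 0, 977, 4);
""")

# Keep ids that have a (5,1,1) row but no disqualifying (3,0,1,1) row.
rows = conn.execute("""
    SELECT a.id FROM tab a
    WHERE a.number = 5 AND a.status = 1 AND a.uid = 1
      AND NOT EXISTS (SELECT 1 FROM tab t
                      WHERE t.id = a.id AND t.number = 3
                        AND t.status = 0 AND t.uid = 1 AND t.value = 1)
""").fetchall()
print(rows)  # [(58944,)]
```

58943 is filtered out by its `(3, 0, 1, 1)` row, matching the expected result in the question.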
Modifying **user2989408**'s answer to the below could work. ``` SELECT id FROM table_name WHERE (id like '%58943' AND number = 5 AND status = 1 AND uid = 1) AND NOT (number = 3 AND status = 0 AND uid = 1 AND value = 1) ```
Select that column which don't have certain row
[ "", "sql", "database", "postgresql", "" ]
I have this table shown below... How do I select only the latest data of the id based on changeno? ``` +----+--------------+------------+--------+ | id | data | changeno | | +----+--------------+------------+--------+ | 1 | Yes | 1 | | | 2 | Yes | 2 | | | 2 | Maybe | 3 | | | 3 | Yes | 4 | | | 3 | Yes | 5 | | | 3 | No | 6 | | | 4 | No | 7 | | | 5 | Maybe | 8 | | | 5 | Yes | 9 | | +----+---------+------------+-------------+ ``` I would want this result... ``` +----+--------------+------------+--------+ | id | data | changeno | | +----+--------------+------------+--------+ | 1 | Yes | 1 | | | 2 | Maybe | 3 | | | 3 | No | 6 | | | 4 | No | 7 | | | 5 | Yes | 9 | | +----+---------+------------+-------------+ ``` I currently have this SQL statement: ``` SELECT id, data, MAX(changeno) as changeno FROM Table1 GROUP BY id; ``` and clearly it doesn't return what I want. This should return an error because of the aggregate function. If I add fields to the GROUP BY clause it works, but it still doesn't return what I want. The SQL statement is by far the closest I could think of. I'd appreciate it if anybody could help me on this. Thank you in advance :)
This is typically referred to as the ["greatest-n-per-group"](https://stackoverflow.com/questions/tagged/greatest-n-per-group+sql-server?sort=votes&pageSize=50) problem. One way to solve this in SQL Server 2005 and higher is to use a CTE with a calculated `ROW_NUMBER()` based on the grouping of the `id` column, and sorting those by largest `changeno` first: ``` ;WITH cte AS ( SELECT id, data, changeno, rn = ROW_NUMBER() OVER (PARTITION BY id ORDER BY changeno DESC) FROM dbo.Table1 ) SELECT id, data, changeno FROM cte WHERE rn = 1 ORDER BY id; ```
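The same CTE pattern works outside SQL Server too; here it is against SQLite (which supports window functions from version 3.25) via Python, using the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (id INTEGER, data TEXT, changeno INTEGER);
    INSERT INTO Table1 VALUES
        (1,'Yes',1),(2,'Yes',2),(2,'Maybe',3),
        (3,'Yes',4),(3,'Yes',5),(3,'No',6),
        (4,'No',7),(5,'Maybe',8),(5,'Yes',9);
""")

# Number the rows within each id, newest changeno first, then keep rn = 1.
rows = conn.execute("""
    WITH cte AS (
        SELECT id, data, changeno,
               ROW_NUMBER() OVER (PARTITION BY id ORDER BY changeno DESC) AS rn
        FROM Table1
    )
    SELECT id, data, changeno FROM cte WHERE rn = 1 ORDER BY id
""").fetchall()
print(rows)
# [(1, 'Yes', 1), (2, 'Maybe', 3), (3, 'No', 6), (4, 'No', 7), (5, 'Yes', 9)]
```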
You want to use `row_number()` for this: ``` select id, data, changeno from (SELECT t.*, row_number() over (partition by id order by changeno desc) as seqnum FROM Table1 t ) t where seqnum = 1; ```
SELECT only latest record of an ID from given rows
[ "", "sql", "sql-server", "" ]
I have one table in a relational database (Sybase ASE) with a few columns. Three of them look like this example: ``` _____________ | Product | --------------- | ProductId | | Name | | Quantity | _____________ ``` So we have some records: ``` __________________________________ | ProductId | Name | Quantity | ---------------------------------- | 1 | pants | 2 | | 2 | shirt | 1 | | 3 | sweater | 3 | ---------------------------------- ``` I need to get every name as many times as the 'Quantity' of this product. So the result should look like: * pants * pants * shirt * sweater * sweater * sweater If somebody has any idea how I can do this, please help me. **EDIT** *2014-01-24 14:17 UTC+1* I'd like to thank everybody. Gordon's solution is really nice, but for my situation (bigger Quantity) I can't use that SQL. I tried to do something like 333kenshin's and simon's solutions, but without a cursor. I did something like this: ``` IF OBJECT_ID('#TEMP') is not null DROP TABLE #TEMP create TABLE #TEMP (Name varchar(255)) DECLARE @Name varchar(255) DECLARE @Quant INT DECLARE @prodId INT SET @prodId = 1 WHILE (EXISTS(SELECT 1 FROM product WHERE productID = @prodId)) BEGIN SELECT @Name = Name, @Quant = Quantity FROM Product WHERE ProductId = @prodId DECLARE @i INT SET @i = 1 WHILE @i <= @Quant BEGIN insert into #TEMP values(@Name) SELECT @i=@i+1 END SELECT @prodId = @prodId + 1 END select * from #TEMP drop table #TEMP ``` For me and my DB it was the fastest solution. So thanks a lot for all the answers.
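The nested WHILE logic in the EDIT boils down to this shape; here it is sketched in plain Python over the sample rows (no database involved, just to show the control flow):

```python
# Sample (ProductId-ordered) rows from the question: (name, quantity).
products = [("pants", 2), ("shirt", 1), ("sweater", 3)]

expanded = []
for name, quantity in products:       # outer loop: one product at a time
    i = 1
    while i <= quantity:              # inner loop: emit the name `quantity` times
        expanded.append(name)
        i += 1

print(expanded)
# ['pants', 'pants', 'shirt', 'sweater', 'sweater', 'sweater']
```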
The correct way to do this is a temp table + cursors: 1. create a temp table 2. create a [cursor](http://www.sybaseteam.com/sybase-cursors-overview-explanation-and-examples-t-122.html) to iterate through the Product table 3. within the cursor loop, run an inner [WHILE loop](http://www.sqlines.com/sybase/insert_n_rows_commit_after_m_rows) that inserts the name once per unit of quantity 4. exit the loop and finally select from the temp table The following isn't 100% correct Sybase syntax, but it's pretty close. ``` -- 1: temp table create table #TEMP (productName varchar(255)) -- 2: cursor declare @productName varchar(255), @quantity int declare ProductRead cursor for select Name, Quantity from Product open ProductRead fetch ProductRead into @productName, @quantity while (@@sqlstatus = 0) begin -- 3: inner while loop declare @i int set @i = 1 while @i <= @quantity begin insert into #TEMP values (@productName) set @i = @i + 1 end fetch ProductRead into @productName, @quantity end close ProductRead deallocate cursor ProductRead -- 4: final result set select productName from #TEMP ```
To do this, you need a series of integers. You can generate one manually: ``` select p.name from product p join (select 1 as n union all select 2 union all select 3 union all select 4 ) n on n.n <= p.quantity; ``` This will work as long as `quantity` is not too big, so that you can enumerate all the needed values in `n`.
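The numbers-table trick can be verified end to end; here is a sketch with SQLite via Python using the question's sample data (in practice you would keep a persistent numbers table covering the largest possible quantity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (name TEXT, quantity INTEGER);
    INSERT INTO product VALUES ('pants',2),('shirt',1),('sweater',3);
    CREATE TABLE n (n INTEGER);
    INSERT INTO n VALUES (1),(2),(3),(4);
""")

# Each product row joins to as many numbers as its quantity,
# so the name is repeated quantity times.
rows = conn.execute("""
    SELECT p.name FROM product p
    JOIN n ON n.n <= p.quantity
    ORDER BY p.name, n.n
""").fetchall()
print([r[0] for r in rows])
# ['pants', 'pants', 'shirt', 'sweater', 'sweater', 'sweater']
```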
How to select as much records as indicated by the value from database
[ "", "sql", "sap-ase", "" ]