So what exactly is N-Tier Architecture? N-Tier is an application architecture with at least 3 separate layers that interact with each other. Each layer in an N-Tier architecture can talk only to the layer directly below it; the N refers to an arbitrary number of layers. The picture below shows the way 3 layers typically communicate: the Presentation Layer (Presentation Logic) makes a request to the Business Layer, which forwards the request to the Data Access Layer, and the data is returned back up to the Presentation Logic. The BAL (Business Access Layer) and DAL (Data Access Layer) carry the core logic of an application. Each of these layers has its own logic which is hidden from the other layers, which makes it easy to update the logic in any one layer and recompile only that specific layer instead of the whole solution. This is a real strength of N-Tier architecture: any change or update to a layer requires only that layer to be recompiled. Another benefit of N-Tier architecture is that, by separating Business Logic from Data Access Logic, moving to another server type only requires redesigning the Data Access Layer's logic, while the rest of the code and layers stay exactly the same.

Presentation Layer

The Presentation Layer in a web application consists of the .aspx pages together with all the server, custom and user controls. The Presentation Layer talks to the Business Layer with a request to load or insert some data in the database. Data can be encapsulated inside Business Objects for added security. The Business Layer receives the request and forwards it to the Data Layer, which makes a call to the database and sends the data back, either in raw form or through a Business Object or a generic list of Business Objects.

Business Access Layer

The Business Layer is responsible for receiving requests made by the Presentation Layer, forwarding them to the Data Layer, and returning the parsed data back to the Presentation Layer.
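The call chain described above can be sketched in a few lines. This is a minimal, language-agnostic illustration in Python (the article's own examples are C#, and all class and method names below are hypothetical stand-ins), showing that each layer only ever talks to the layer directly below it:

```python
class DataAccessLayer:
    """Talks to the data store; the only layer that knows how data is kept."""
    def get_all_books(self):
        # Stand-in for a real database query.
        return [{"id": 1, "name": "Some Book"}]

class BusinessLayer:
    """Receives requests from the presentation layer and forwards them down."""
    def __init__(self, dal):
        self._dal = dal
    def get_all_books(self):
        return self._dal.get_all_books()

class PresentationLayer:
    """Only ever talks to the business layer, never to the DAL directly."""
    def __init__(self, bal):
        self._bal = bal
    def render(self):
        return [book["name"] for book in self._bal.get_all_books()]

ui = PresentationLayer(BusinessLayer(DataAccessLayer()))
print(ui.render())  # ['Some Book']
```

Because the presentation code holds only a reference to the business layer, swapping the data-access implementation (say, for a different database server) touches nothing above it.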
Data Access Layer

This layer is responsible for direct communication with the database (or any other form of data storage). It has a set of Get, Insert, Delete and Modify methods, and it communicates only with the Business Layer. Data retrieved from the database can be sent back as a DataSet, a DataTable, or through a single custom Business Object or a collection of them, which is then forwarded back to the BAL and on to the Presentation Logic for further use.

Physically Separate Layers

It is also possible to separate these layers physically instead of only logically. However, separating layers physically has a direct impact on the application's execution speed, since communication now involves the extra step of passing data over the network to another layer. Physically separated layers also require additional plumbing to handle the communication, in the form of web services or remoting. So having these layers physically separated has a strong impact on the application, and that shouldn't be overlooked.

Example of N-Tier Architecture use

In the following example I will show you a simple scenario of N-Tier architecture used to display the list of books an online bookstore sells on its website, through a set of strongly-typed business objects. This is the structure I will be using for the example: we have 3 classes, Book.cs, BookBAL.cs and BookDAL.cs. Book.cs is a class with a set of properties in which we will store data retrieved from the database and then use later in the Presentation Layer.
This is what the class Book looks like:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

public class Book
{
    #region Private and Public Properties
    private int _bookID = 0;
    public int BookID
    {
        get { return _bookID; }
        set { _bookID = value; }
    }

    private string _category = string.Empty;
    public string Category
    {
        get { return _category; }
        set { _category = value; }
    }

    private string _author = string.Empty;
    public string Author
    {
        get { return _author; }
        set { _author = value; }
    }

    private string _serialno = string.Empty;
    public string SerialNumber
    {
        get { return _serialno; }
        set { _serialno = value; }
    }

    private string _bookName = string.Empty;
    public string BookName
    {
        get { return _bookName; }
        set { _bookName = value; }
    }
    #endregion

    public Book() { }

    #region Private and Public Methods
    public static List<Book> GetAllBooks()
    {
        return BookBAL.GetAllBooks();
    }
    #endregion
}

We can see that the class has properties like BookID (the ID of the book in the database), the Category the book belongs to, Author, SerialNumber and, last but not least, BookName. We can also see the public static method GetAllBooks(). This method calls the Business Layer's public static method GetAllBooks(), and the Business Layer forwards the request to the Data Access Layer, which queries the database for the list of all books, creates a collection of Book objects and returns it to the Business Layer, which in turn forwards it back to the Presentation Layer. It might sound complicated at first with all this layer interaction, but believe me, once you get to know it better you will love N-Tier and the possibilities it gives you. OK, let's move on. The next class we will examine is the Business Access Layer class, BookBAL.cs. It's a really simple and straightforward class with only one public method for now, GetAllBooks(), which we saw called from the Book business object.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

public class BookBAL
{
    public BookBAL() { }

    public static List<Book> GetAllBooks()
    {
        return BookDAL.GetAllBooks();
    }
}

Simple, isn't it? All we have inside this class is one method which calls the Data Layer's public static GetAllBooks() and returns its response. That's all there is to it. Moving on to the Book Data Layer class.

using System;
using System.Configuration;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data.SqlClient;
using System.Data;

public class BookDAL
{
    public BookDAL() { }

    public static List<Book> GetAllBooks()
    {
        string _connString = ConfigurationManager.ConnectionStrings["databaseConnection"].ToString();
        using (SqlConnection conn = new SqlConnection(_connString))
        {
            string queryString = "SELECT * FROM Books";
            SqlCommand comm = new SqlCommand(queryString, conn);
            SqlDataAdapter da = new SqlDataAdapter(comm);
            DataTable dt = new DataTable();
            da.Fill(dt);

            if (dt.Rows.Count > 0)
            {
                List<Book> booksResult = new List<Book>();
                foreach (DataRow row in dt.Rows)
                {
                    Book book = new Book();
                    book.BookID = Convert.ToInt32(row["BookID"]);
                    book.Category = row["Category"].ToString();
                    book.SerialNumber = row["SerialNo"].ToString();
                    book.Author = row["Author"].ToString();
                    book.BookName = row["BookName"].ToString();
                    booksResult.Add(book);
                }
                return booksResult;
            }
            else
            {
                return null;
            }
        }
    }
}

Now this class has a bit more code, but nothing too fancy; it's all things you've seen before. We instantiate a connection to the server and execute a simple SELECT query. After execution we check whether the query returned any data, and if it did we create a new generic collection of Book business objects. Then we walk through each record, create a new Book object, populate its properties and add it to the collection.
After all the records are parsed, we return the booksResult collection back to the Business Layer, which forwards it on to the Presentation Logic. Now all that is left to do is use this collection to bind a GridView so it displays the data. This again is really simple, and if you've ever done databinding in codebehind, this one is no different. First, this is what our Default.aspx page looks like:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Generic Collection - Books Sample</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <h1>Bookstore Books</h1>
        <asp:GridView ID="GridView1" runat="server">
        </asp:GridView>
    </div>
    </form>
</body>
</html>

and now our codebehind:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        List<Book> ds = Book.GetAllBooks();
        GridView1.DataSource = ds;
        GridView1.DataBind();
    }
}

First we check whether the page is loaded for the first time or a postback occurred. If it's loaded for the first time, we bind our GridView to our custom collection of Book business objects: we call the static Book.GetAllBooks() method to get the list of Book objects, assign it to the GridView's DataSource property and call DataBind(). If you run this example you will get a plain, unstyled grid of books. I know it looks rather sloppy, but styling and additional GridView features are outside the scope of this article. I hope that by reading this article you gained some insight into how and why N-Tier architecture is used and what its strong points are in application development. Feel free to comment and ask any questions regarding the topic; I will gladly try to help you understand the subject discussed here. Best regards and happy coding! Sasha Kotlo.
http://www.nullskull.com/a/1452/ntier-architecture.aspx
Namespaced IO Layer

From HaskellWiki
Latest revision as of 21:25, 19 December 2010

2 Availability

Project summary (licensing, etc.): Source code: under the io-layer directory.

3 Note on the Haskell Runtime

This library depends heavily on the most recent improvements in the Glasgow Haskell Compiler runtime, and thus cannot be used with other Haskell implementations. Throughout this text, the term "Haskell Runtime" means "GHC Runtime".

4 Structure

4.1 Base Layer

This layer is represented by Haskell's (GHC's) own IO library (the IO monad). Handles provided by the standard library are used to perform the actual IO operations.

4.2.4 Getting/Setting File Status Attributes

These operations correspond to the STAT and WSTAT operations of 9P2000. Their purpose is to obtain and modify file status information. The data structure describing file status is derived directly from the 9P2000 specification.

4.3 Namespace Layer

This layer provides facilities to organize the file systems presented by the Device Layer into per-process (per-thread) namespaces. Operations such as binding a file system into a namespace, and path evaluation, are directly available to applications. The Namespace Layer also introduces a type-based separation between "application" and "system" code execution levels. At the application level, standard Haskell IO facilities are used.

4.3.1 Namespace

Each process has a namespace which provides a per-process view of the underlying host resources as a single tree of directories and files. A reference to the namespace is stored in the per-process data structure (see below). A process may issue a request to update its namespace (to bind a file path somewhere in the namespace).
The original Plan 9 implementation is described elsewhere, and the Namespaced IO Layer closely follows that description, except that the concept of a "current" directory does not exist: all file paths are absolute from the filesystem root ("/") or the device table root ("#"). A namespace is implemented as a map with file path keys and union directory values. A union directory is a concatenation of directories.

Filename evaluation is the process of obtaining a physical (per-device) path to the underlying host resource, given a logical (visible to a process) path. If, for example, there is a "console" device whose root is known as #c, containing a file cons, then #c/cons is a physical path. If the command

bind -b '#c' /dev

was once issued, then the logical path /dev/cons evaluates to the physical path #c/cons.

Each process must have at least one binding in its namespace: the root binding. In many cases this will be some fragment of the underlying host filesystem. Once such a root binding is introduced, filename evaluation becomes possible. Given a logical path, it is split into (slash-separated) components. The root component is replaced with whatever is bound to the namespace root. A prefix is formed from the remaining components, starting from the topmost, and is matched against the namespace entries. If no match is found, the next component is appended to the prefix and the process repeats. If the whole path has been tried and no match was found in the namespace, the resulting physical path is the namespace root binding plus the logical path itself. That is, if the root was bound with

bind -c '#Z' /

then /foo/bar evaluates to #Z/foo/bar.
If however a match is found at some step (say the logical path is /dev/cons, and /dev matches a namespace entry containing a concatenation of several directories, including one exposed by the console device), then the matched prefix is replaced with one of the concatenated directories: the one containing an entry named after the path component that follows the matched prefix (if no such directory is found, evaluation fails). In our case that component is cons, so /dev is replaced with #c/ rather than with #Z/. Thus, /dev/cons evaluates to #c/cons.

4.3.2 Per-process Data

There is a small data structure that persists through the process lifetime. This structure is directly accessible only at the "system" level. The most important fields of this per-process data structure are:

- Process running privileges: Init, Admin, Host owner, and None. The former three are used for locally initiated processes (most processes run as Host owner). The None privilege is given to server processes running on behalf of external/remote users. When a process attaches a device, its running privileges are copied into the attachment descriptor. The logic of granting access lies entirely with file servers; however, the common rule is that processes with Init, Admin, and Host owner privileges have almost full access to the underlying host resources, while None has no access at all. Thus, processes running as None need to perform certain authentication procedures to obtain proper attachment descriptors (implementing a Plan 9 or Inferno authentication scheme is future work).
- Device map. This is a one-level map with character keys and device table values. It is used to find the proper device table for a file path starting with the '#' character.
- Reference to the process namespace.
The namespace reference is an MVar whose contents are overwritten when the namespace is updated. Other fields include the host owner name string, path handles for the process's standard input and output, and the parent process (thread) identifier, but these are not essential for the Namespace Layer itself.

4.3.3 New Process Creation

The Namespace Layer provides functionality to start new processes with optional control over the child process's namespace and running privileges. The basic rules are:

- The running privileges of the child process can only be lower than or the same as those of the parent process, where Init > Admin > Host owner > None.
- The namespace may be shared between parent and child, or cloned, or built from scratch (the child process starts with an empty namespace). Namespace sharing is only allowed between processes with the same privileges.

4.4 Application Layer

This layer implements streaming IO operations using the Iteratee concept.

5 DRAFT! DRAFT! DRAFT!

This document, as well as the library it describes, are both work in progress and subject to changes of any unpredictable kind ;) The text on this page may look bizarre or nonsensical at times; this will eventually be corrected ;)
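As an illustration only, the filename evaluation described in section 4.3.1 can be sketched as follows. The library itself is Haskell; this is a hedged Python model in which a namespace maps logical path prefixes to union directories (lists of physical directory roots), and dir_contents says what each physical directory holds. All names are hypothetical, and the shortest-matching-prefix reading of the prose above is assumed:

```python
def evaluate(path, namespace, dir_contents):
    """Map a logical path to a physical one per the rules in 4.3.1."""
    root = namespace["/"][0]                 # the root binding, e.g. "#Z"
    comps = [c for c in path.split("/") if c]
    # Try prefixes of growing length against the namespace entries.
    for i in range(1, len(comps)):
        prefix = "/" + "/".join(comps[:i])
        if prefix in namespace:
            rest = comps[i:]
            # Pick the union-directory member containing the next component.
            for phys in namespace[prefix]:
                if rest[0] in dir_contents.get(phys, ()):
                    return phys + "/" + "/".join(rest)
            raise FileNotFoundError(path)    # no member has it: evaluation fails
    # No prefix matched: root binding plus the logical path itself.
    return root + path

ns = {"/": ["#Z"], "/dev": ["#c"]}   # bind -c '#Z' /  then  bind -b '#c' /dev
contents = {"#c": {"cons"}}
print(evaluate("/dev/cons", ns, contents))   # #c/cons
print(evaluate("/foo/bar", ns, contents))    # #Z/foo/bar
```

The two calls reproduce the worked examples from the text: a matched /dev prefix is replaced by the console device's directory, while an unmatched path falls back to the root binding.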
http://www.haskell.org/haskellwiki/index.php?title=Namespaced_IO_Layer&diff=prev&oldid=37946
How do you back up a database using SQL Server 2005 Express??? - Monday, June 19, 2006 15:17

I know there isn't a backup feature, but I was wondering if there was a way to back up a database? Thanks!!!

All replies

- Monday, June 19, 2006 17:10 (Moderator)

Hi, You're not quite right here; SQL Express fully supports backing up a database. What it does not have is SQL Agent, which allows you to schedule backups, and the Maintenance Plan wizard, which allows you to create a plan to perform a number of tasks, including backup. You can back up your database in two ways:

- Use Management Studio Express (available separately or as part of Express Advanced from the download page), which has the Backup option on the right-click menu for each database, under Tasks.
- Use T-SQL to write your backup script manually. You can learn about the T-SQL BACKUP command in the corresponding BOL topic.

If you want to schedule your backups, you would write a T-SQL script and then use Windows Task Scheduler to call SQLCMD to run the script on whatever schedule you're interested in. Regards, Mike Wachal, SQL Express team ---- Mark the best posts as Answers!

- Tuesday, June 20, 2006 5:47

One option you could use is the SMO system. Here is a small C# application that backs up a database. You will have to compile it and change the values to reflect your server. Remember that the Visual Studio C# Express edition is free to use.
using System;
using System.Data;
using System.Collections;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

class Program
{
    static void Main(string[] args)
    {
        BackupDeviceItem bdi = new BackupDeviceItem("AdventureWorks.bak", DeviceType.File);
        Backup bu = new Backup();
        bu.Database = "AdventureWorks";
        bu.Devices.Add(bdi);
        bu.Initialize = true;
        // add percent complete and complete event handlers
        bu.PercentComplete += new PercentCompleteEventHandler(Backup_PercentComplete);
        bu.Complete += new ServerMessageEventHandler(Backup_Complete);
        Server server = new Server("localhost");
        bu.SqlBackup(server);
        Console.WriteLine(Environment.NewLine + "Press any key to continue.");
        Console.ReadKey();
    }

    protected static void Backup_PercentComplete(object sender, PercentCompleteEventArgs e)
    {
        Console.WriteLine(e.Percent + "% processed.");
    }

    protected static void Backup_Complete(object sender, ServerMessageEventArgs e)
    {
        Console.WriteLine(Environment.NewLine + e.ToString());
    }
}

- Thursday, June 29, 2006 18:37

I do not understand why Microsoft went through the trouble of removing the SQL Agent scheduler from SQL 2005 Express. This is wrong for any product roadmap. There are many small applications, both in-house IT and distributed, that depend on the reliable SQL Server Agent job scheduler. I stopped using the Windows Task Scheduler years ago because you cannot count on it; it is not reliable. While it has improved since the NT4 and early Win2000 days, it is only marginally better, due to a UI instead of the AT command line. We need the service, SQLSERVERAGENT, to be included with SQL 2005 Express in order to migrate applications to 2005. If you have a system that is configured for Client - Server - Notebook/offline and you use SQL 2005 on the servers, you need SQL 2005 Express on the clients and notebooks for replication.
For the same reasons we distribute MSDE now, we would distribute SQL 2005 Express in the future for the notebook/offline engine. Our installations, updates, and backups depend on the server and notebook being the same (i.e. the same service names, the same scheduler, etc.). If the notebooks cannot have Express with the SQLSERVERAGENT service, then that complicates everything and increases the cost to everyone involved. Put SQLSERVERAGENT in SQL 2005 Express.

- Monday, July 3, 2006 16:42

Clifton is right. I've been a Microsoft Certified Partner for several years and have been developing on Windows for 9 years. This is another example of Microsoft disregarding the needs of its smaller ISV developers and indiscriminately "changing the rules" whenever it feels like it. You don't expect someone to take away features when they release the next version of a product; you expect things to be added, not taken away. Now we have to invest time, energy, and MONEY trying to figure out how to create a reliable automated backup facility for our small customers using SQL Express. The Windows scheduler is a joke; it's not reliable and not easily managed by the end user. We had a tight and reliable backup and maintenance tool integrated directly into our product using DMO and the SQL Agent. Now it's all useless code. I'm sure MS has some half-justifiable reason for pulling the Agent, but I'm sure it wasn't the only path they could have taken. It was simply the easiest path for them. If it hurts us, oh well, there are more developers out there where we came from. Mike Mazza, MCP, Kressa Software Corp

- Monday, July 3, 2006 23:26

Just a thought on this subject: why not use Scheduled Tasks and a T-SQL .bat file to automate the backup? Tailor

- Saturday, July 8, 2006 4:48

Why not just write a Windows service using VS2005 templates and run the service on the server? The service could execute a stored procedure that backs up the database.
That sounds like the most reliable (and very simple) way to emulate the SQL Server Agent backup functionality. -Andy

- Tuesday, September 19, 2006 10:00

I've been searching the net for about an hour now; there are billions of tutorials, but they go into too much detail. I just want to do a single backup of one database, once a day. It's so simple with 2000, but it seems SQL 2005 Express is a cut-down version of 2000. I don't really want to write a script and then rely on the Scheduler; I want to right-click and be walked through a nice wizard like with 2000.

- Tuesday, September 19, 2006 20:55 (Moderator)

Hi, download SQL Server Management Studio Express, connect to the desired instance, navigate to the database you are interested in, right-click, Tasks, Backup... the backup dialog will appear. Specify your options/settings and click [OK]... that's all. HTH

- Thursday, October 5, 2006 5:18

I created an .sql script (as described above) and saved it as backup.sql. I am trying to run it from a batch file called backup.bat using sqlcmd. The batch file contains the command:

sqlcmd -i c:\directory\backup.sql -o c:\directory\output.txt

It fails, saying it cannot connect remotely. I did a bit of digging to find where I can set 'allow remote connections', and it is under a setting called SQL Surface Area Configuration or something like that. Looking in there, it says SQL Express 2005 can only connect locally, not remotely! The script runs fine; the sqlcmd command fails. So does anyone have any more information that may help? Thanks.

- Thursday, October 5, 2006 5:57

Extra info:
- I have set up a scheduled task in Windows so that it runs. It refers to the sa user and password. I get the same error.
- I have looked at the SQL Browser service and it is active.
- I have looked at the firewall settings and these are not a problem, as the firewall is not in use on this server.
- I have looked at the Surface Area Configuration for my database and it says that local and remote connections can be used over TCP/IP.

- Thursday, October 5, 2006 15:50 (Moderator)

Hi, verify via SQL Server Configuration Manager that the desired/required network protocol is actually enabled. You mention SQL Browser, so you have a named instance installed, probably with dynamic port assignment. Verify your firewall has an exception on port UDP 1434 for the SQL Browser, as well as the required exception for the SQL Server engine service (port or service). You can find some related info about Windows Firewall settings, but the idea is valid for all firewalls as well. Regards

- Saturday, October 7, 2006 7:55

For those who are interested, have a look at the following post; I have found a CodeProject system that simulates the Agent.

- Tuesday, October 10, 2006 9:31

>One option that you could use is the SMO system. Here is a small C# application that backs up a database. You will have to compile it and change the values to reflect your server.

Thanks for the sample! Just some additional info to get it to work using Visual Studio 2005 and SQL Server Express (sorry if I'm being redundant). Add references to (.NET tab):
- Microsoft.SqlServer.ConnectionInfo
- Microsoft.SqlServer.Smo

Change "AdventureWorks" (both occurrences) to the name of the database you want to back up, and change "localhost" to the name of your SQL instance, e.g.:

Server server = new Server("<my machine name>\\SQLEXPRESS");

- Sunday, October 15, 2006 19:07

I was running into the same problem. I found that all I had to do was add an additional command-line argument specifying the server name. In my case, the server runs on the same computer as the script, so I got it working with the following command:

sqlcmd -S LOCALHOST\SQLEXPRESS -i backup.sql -o output.txt

The -S needs to be in upper case. Hope that helps. Ryec.
- Proposed as answer by Wilson.Shen, Monday, August 23, 2010 8:52

- Monday, October 16, 2006 4:22

Thank you all. Ryec, adding -S ServerName\InstanceName did the trick.

- Friday, December 22, 2006 16:04

Anyway, my application NEEDS the SQLSERVERAGENT engine to auto-schedule its own queries. It creates jobs based on user-selectable options from a web-based interface; the use of Task Scheduler is simply impossible. I'm sorry to MS, but we will not migrate all our customers to SQL 2005 Standard while we cannot have an easily replicable testing environment running on the Express edition. That simple.

- Thursday, January 11, 2007 19:45

I am also trying to back up SQL Express using sqlcmd:

sqlcmd -S .\SQLEXPRESS -Q "BACKUP DATABASE XYZ TO DISK = N'D:\dbbackup\test11.bak'"

I am getting the error: Cannot open backup device 'D:\dbbackup\test11.bak'. Operating system error. BACKUP DATABASE is terminating abnormally.

- Friday, January 4, 2008 16:31

Another solution to building a process to manage your T-SQL inside a service can be found here: I'm being driven to use this tool, but have not yet done so... Nothing like a deadline to keep the momentum going.

- Wednesday, April 2, 2008 21:54

In case someone comes to this thread through search and is looking for what Glenn Wilson linked to (which is no longer there), I think he is talking about this: If not, this does the same thing... a replacement for SQL Agent.

- Thursday, May 1, 2008 0:39

I get "post not found" when I click that link. I also tried searching for SQL Agent and nothing was found. *shrug*

- Thursday, May 1, 2008 0:42

This sounds great, but it looks like it waits for a user to hit a key to continue... so I don't see how it would be usable as a service.

- Thursday, May 1, 2008 10:44 (Moderator)

You can also refer to the "Automate Backups in SQL Express" article to configure SQL Express backups!!!

- Monday, August 4, 2008 9:51

You must specify the instance of your SQL Server, i.e.
if you are using SQLEXPRESS, type:

sqlcmd -S .\sqlexpress

- Thursday, August 14, 2008 16:32

I chose to use the SQLCMD route to back up 2005 Express databases. It works fine with most instances, but my SharePoint databases, which have dashes in their names, are failing. Please advise on how to modify the code to work with SharePoint:

....
DECLARE @IDENT INT, @sql VARCHAR(1000), @DBNAME VARCHAR(200);

SELECT @IDENT = MIN(database_id) FROM SYS.DATABASES
WHERE [database_id] > 0 AND NAME NOT IN ('TEMPDB')

WHILE @IDENT IS NOT NULL
BEGIN
    SELECT @DBNAME = NAME FROM SYS.DATABASES WHERE database_id = @IDENT;
    SELECT @SQL = 'BACKUP DATABASE ' + @DBNAME + ' TO DISK = ''D:\SQLBack\' + @DBNAME + '_db_' + @dateString + '.BAK'' WITH INIT';
    EXEC (@SQL)
    SELECT @IDENT = MIN(database_id) FROM SYS.DATABASES
    WHERE [database_id] > 0 AND database_id > @IDENT AND NAME NOT IN ('TEMPDB')
END

I have tried specifying the SharePoint DBs in brackets [name] - for example, AND NAME IN ('[SharedServices_123-456-789-cxvvsefse-193747]') - but this does not work either.

Msg 102, Level 15, State 1, Server MSOCSP02\OFFICESERVERS, Line 1
Incorrect syntax near '-'.
Msg 319, Level 15, State 1, Server MSOCSP02\OFFICESERVERS, Line 1
Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.

Please advise. Thank you!!!!

- Thursday, August 14, 2008 17:41

With more research I got it working by replacing the original line 6 with this line:

SELECT @DBNAME = '[' + NAME + ']' FROM SYS.DATABASES WHERE database_id = @IDENT

Many thanks to Jonathan Kehayias' code in "How to Automate Database Backups with SQL Server Express". Link is here: Thanks

- Monday, October 13, 2008 19:29

Why bother with scripts when off-the-shelf freeware like SqlBackupAndFTP would do it better?
- Proposed as answer by Mike Shilov, Wednesday, April 22, 2009 9:24
- Marked as answer by Naomi N (Microsoft Community Contributor, Moderator), Monday, July 4, 2011 16:15

- Friday, October 24, 2008 17:18

Why bother with scripts when off-the-shelf freeware like DBSave would do it better? DBSave is a .NET Framework-based tool for backup and restore of a running MS SQL Server; it backs up all versions from 7.0 up to 2005, including MSDE and the Express editions. DBSave supports complete and differential backups and can restore databases on another server. Automatically, or with only one mouse click, a complete server can be backed up. Backup via FTP, FTPS and SFTP (SCP) is possible.

- Friday, October 24, 2008 19:03

Well, for a start, DBSave is 54 MB while SqlBackupAndFTP is 100 times smaller at 0.5 MB. The DBSave site has no English description and the screenshots are too small to understand. Does it have an English interface? If you tell me it does remote backups (on hosted servers), I'd try it. But otherwise it is safer to stay away from behemoths that try to do and be everything but lose focus on the straightforward backup process.

- Tuesday, December 9, 2008 5:51

- Tuesday, March 24, 2009 1:50

Here is an alternative method for backing up the database: Tim Chapman, MCITP

- Proposed as answer by Internal IT, Friday, August 6, 2010 11:59
- Marked as answer by Naomi N (Microsoft Community Contributor, Moderator), Monday, July 4, 2011 16:13

- Wednesday, May 5, 2010 13:29

I agree with that. SQL Backup and FTP is a cool tool to back up your databases: very lightweight and easy to use. Just try it.

- Tuesday, June 15, 2010 17:20
- Tuesday, June 15, 2010 17:23
- Wednesday, October 27, 2010 11:34

Can you send a sample in C# of taking a database backup with SQL Server 2005/08?

- Proposed as answer by Nitin Hiwarkar, Wednesday, October 27, 2010 11:35
- Voted as helpful by Naomi N (Microsoft Community Contributor, Moderator), Monday, July 4, 2011 16:12

- Monday, July 4, 2011 14:16

I totally agree with RSudentas. We have a customer with no IT staff, and SQL Backup and FTP perfectly solved the daily backup problems; plus, the free version lets you schedule up to 2 database backups daily.

- Tuesday, September 6, 2011 13:08

I found this a great help. It also deals with DB backup retention, in that it doesn't keep more than x copies of the backup.

- Proposed as answer by Magefyre-OH, Monday, February 25, 2013 19:01

- Tuesday, November 1, 2011 12:57

Hi, do you perhaps have that T-SQL script lying around anywhere? I'm looking to back up a SQL Express database across the network; it also needs to be automated to run at specific times.

- Tuesday, November 1, 2011 13:12

Hi, do you perhaps have that T-SQL script lying around anywhere? I'm looking to back up a SQL Express database across the network; it also needs to be automated to run at specific times. The T-SQL script that I used to do the backups is at

- Friday, December 30, 2011 17:50

The file path must exist on the machine where the instance runs:

sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE XYZ TO DISK = N'C:\mybackup\myback.bak' WITH INIT"

WITH INIT is optional; it replaces the backup file instead of appending to it, which can be useful for automating backups. A backup device can take a limited number of backup sets (64, or 128, I don't remember) before the backup fails and returns an error. The N in front is also optional if you use plain Latin/English characters for the file path (N marks a Unicode string, for non-English characters). -E connects with a trusted connection; -S specifies the server (here the local SQLEXPRESS instance); -Q executes the query and exits. Hope this helps somehow.
P Velachoutakos - viernes, 30 de diciembre de 2011 17:52 Because this call might wanted to be added to a client Windows Apllication Maybe? P Velachoutakos - martes, 24 de julio de 2012 14:46This works fine as long as you have only 2 DBs to back up. What about the system DBs? I know they don't change often on SQLExpress, but you still need a backup,.
http://social.msdn.microsoft.com/Forums/es-ES/sqlexpress/thread/95750bdf-fcb1-45bf-9247-d7c0e1b9c8d2
CC-MAIN-2013-20
refinedweb
2,939
63.59
You can also see more information about a bar on hover (and modify this through the Vega spec in your charts!). To see the full Vega spec of a chart, hover over the top right corner and click on the "eye" icon.
I finetune a CNN to predict 10 classes of living things: plants, birds, insects, etc. I want to plot the final precision for each label in my validation step. I compute the precision using sklearn.metrics.precision_score. This returns val_precision, a list of 10 precision values, one for each class. I then create a bar for each label:
data = [[name, prec] for (name, prec) in zip(self.class_names, val_precision)]
table = wandb.Table(data=data, columns=["class_name", "precision"])
wandb.log({"my_bar_chart_id" : wandb.plot.bar(table, "class_name", "precision", title="Per Class Precision")})
Steps to follow:
1. data object: collect the (label, value) pairs as a 2D list/array, where each row is a bar, one column is its label, and the other column is its value. The default bar chart assumes two dimensions / two columns, but you could pass in more data and customize the plot further if you wish (e.g. use a third column to give each bar a different color).
2. Pass data to a wandb.Table() object, in which you name the columns in order so you can refer to them later.
3. Pass the table object and the column names, in labels, values order, to wandb.plot.bar() with an optional title, which will create your custom plot under the key my_bar_chart_id. To visualize multiple runs on the same plot, keep this plot key constant.
Note that the table itself will also be logged in the "Media" section of your workspace, under my_bar_chart_id_table.
There are many ways to customize the plot using the Vega visualization grammar. Here are some simple ones:
- add "title" : "Your Title" to the x and y fields under encoding
- set the x and y stack to center or zero (instead of overlapping bars, as in the default)
See the full API for wandb.plot.bar() →.
You can compute this whenever your code has access to:
- the model's predictions (val_predictions) on a set of examples
- the correct labels (ground_truth) for those examples
from sklearn.metrics import precision_score
ground_truth_class_ids = ground_truth.argmax(axis=1)
guessed_class_ids = val_predictions.argmax(axis=1)
val_precision = precision_score(ground_truth_class_ids, guessed_class_ids, average=None)
# now you can log val_precision to a custom chart!
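precision_score is one line with scikit-learn, but the underlying computation is easy to verify by hand. A dependency-free sketch of the same per-class computation (the function name is ours, not wandb's or sklearn's):

```python
def per_class_precision(true_ids, pred_ids, num_classes):
    """Precision for each class c: TP / (TP + FP), i.e. among examples
    predicted as c, the fraction whose true label is c. Mirrors
    sklearn's precision_score(..., average=None), except that a class
    that was never predicted simply gets 0.0 here."""
    precisions = []
    for c in range(num_classes):
        # true labels of all examples the model predicted as class c
        predicted_c = [t for t, p in zip(true_ids, pred_ids) if p == c]
        if not predicted_c:
            precisions.append(0.0)
        else:
            tp = sum(1 for t in predicted_c if t == c)
            precisions.append(tp / len(predicted_c))
    return precisions
```

Each entry of the returned list corresponds to one bar in the chart above.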
https://wandb.ai/wandb/plots/reports/Custom-Bar-Charts--VmlldzoyNzExNzk
Alright dear readers (both of you), I told you last time that I would answer that age-old question, "How many projects should I have in my solution?" The answer is: only as many as you need. I know, I know, but it's not a copout, really it's not. I've really found that it is easier to start with fewer projects and break them out when you need to, than trying to combine projects later on. Does that mean that one project with everything in it is right? Yes. If that is all you need. I realize it sounds like I am being wishy-washy, but I promise I am not. For the sake of being opinionated (because "as many as you need" is hard-to-follow advice if you are looking for guidance), I will say I normally find that I have three projects. I tend to call them Web.UI, Core and Specifications. For this series we'll be building a web project, but you could just as easily swap the Web.UI project for a WPF.UI project. Now, if you don't consider the Specifications project as part of the "According to Hoyle" project, then I only have two projects. The thing that I will do is namespace everything within the project in a way that makes it easy to break things into their own project if I need to. So I had my data interfaces in a data folder (with that namespace) and inside that a folder for each type of implementation (e.g. an NHibernate folder, a SubSonic folder and an Entity Framework folder). This will allow me to make a new project called Core.Data.NHibernate and move those files over and hopefully not break anything (I haven't needed to do that yet). I have been using this project structure for a while and am pretty happy with it. It came from stealing little ideas from respected developers around the community, and hashing it out with my co-worker, Troy. He is the one who kept reminding me that I didn't need all those projects in the solution, and eventually we whittled it down to these three.
One thing that I will also do is keep my Specifications project a folder above the location where the Core and Web.UI projects are in the file system. I stole this idea from Sharp Architecture and it makes for good, solid separation of the Specifications from the rest of the production code. I hope this gets your ideas flowing for your own solutions and gives you a jumping-off point for deciding just what your solution needs in it. This should also be a good way to get the (civilized) conversation going about your own project structure within your team. Next time, we'll talk about tools. We'll briefly discuss why you would use each tool and what comparable tools are out there. We'll look at application architectural patterns (MVC vs MVP vs MVVM vs Web Forms), ORM tools, IOC containers, mocking and testing frameworks and some helper libraries that might make it easier to do some things within your own project.
http://geekswithblogs.net/leesblog/archive/2009/09/13/letrsquos-build-a-dev-shop-part-4-of-n.aspx
[edited repost of a message I sent to the python-win32 list] I was looking into improving support for win32com COM servers created with py2exe, and came up with the following questions: 1. Pythoncom uses the '--unregister' command line flag, and doesn't support a '--register' flag, registration is default. Why is this, instead of using the MS standard '/regserver' and '/unregserver' case insensitive flags? And why is the localserver registered with a /Automate flag, as I understand it localservers receive a '/Embedding' command line flag automatically when started by com (at least this is the result of my experiments on WinXP Pro). 2. Pythoncom uses a trivial approach looking for these command line flags, any unknown flags are simply ignored. Since it is impossible with the getopt module to parse windows style command lines correctly, I've written a simple w_getopt function which is able to do this. It is online at. 3. The win32com.server.register module contains special stuff for McMillan installer, and maybe also freeze. I suggest to invent a function like this (which can also be useful to the NT service module) to decide if the server must be started by python.exe (or pythonw.exe), or is contained in an executable built with py2exe, installer, cx_freeze or Python's freeze tool: def main_is_frozen(): import imp return (hasattr(sys, "frozen") # mcmillan installer or hasattr(sys, "importers") # py2exe or imp.is_frozen("__main__")) # freeze, cx_freeze Maybe a similar api can be provided for frozen inproc servers, which currently are only possible with mcmillan. 4. Why is win32com.server.register.UseCommandLine only used for registration and unregistration? Wouldn't it make more sense to use it also for starting the localserver process, provided that /Embedding is used? The advantage would be that UseCommandLine gets the COM classes to expose as arguments, and doesn't have to read registry to find them. Thomas
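Thomas's w_getopt function is only linked, not shown. As a purely hypothetical illustration of the kind of case-insensitive detection the MS-standard flags in point 1 would need (the function name and behavior are our assumption, not his code):

```python
def parse_com_flags(argv):
    """Detect the MS-standard /regserver and /unregserver flags,
    case-insensitively, accepting either '/' or '-' prefixes.
    Returns a (register, unregister) pair of booleans.
    Hypothetical sketch only; unknown flags are ignored."""
    flags = {arg.lstrip('/-').lower()
             for arg in argv if arg.startswith(('/', '-'))}
    return ('regserver' in flags, 'unregserver' in flags)
```

For example, parse_com_flags(['/RegServer']) reports a registration request, while the /Embedding flag mentioned above would fall through as an ignored flag.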
https://mail.python.org/pipermail/python-list/2003-June/234420.html
“Or” in regular expressions
Higher-level programming languages usually express logical “or” with an operator such as ||. This functionality is simple enough, however, that it is not usually necessary to use higher-level programming features to achieve logical “or” in a regular expression.
Achieving logical “or” with grouping and alternation
Grouping and alternation are core features of every modern regular expression library. You can provide as many terms as desired, as long as they are separated with the pipe character: |. This character separates terms contained within each (...) group.
However, there is an unintended side effect of this kind of grouping and alternation. Such a pattern will match any combination of the terms we've supplied, as expected, but it will also store those matches into match groups for later inspection. If you don't want your grouping and alternation to interfere with other numbered groups in your expression, each “or” group must be prefixed with ?:, like so: (?:...)
Achieving logical “or” with your language
Logical “or” is more difficult to achieve using tools external to the regular expression engine itself, but it is achievable by combining the results of multiple regular expressions using the native “or” logical feature of your programming language of choice. This can sometimes be clearer:
public class TestOr {
    @Test
    public void testStrings() {
        assertTrue(stringMatches("I like dogs, but not lions."));
        assertTrue(stringMatches("I like penguins, but not lions."));
        assertFalse(stringMatches("I like lions, but not penguins."));
    }
    private boolean stringMatches(String string) {
        return (string.matches(".*like dogs.*") || string.matches(".*like penguins.*"))
            && (string.matches(".*not lions.*") || string.matches(".*not tigers.*"));
    }
}
The discussion regarding disabling grouping expressions so as to not interfere with other match groups is not clear. The reason is that you have not provided a context in which other groups are being captured or how they are referenced.
The beginning of this discussion exists in your part 1 discussion, so at the very least you need to provide a reference link directly to that discussion of groups. Second, that discussion really needs to be expanded to include disabling the capture of subgroups. Also note that grouping behavior is not a "side-effect"; it is defined behavior, and is very useful as you point out in your main discussion. You are correct! I will clarify. Thanks, Dad. Hi, thanks for the tutorial, but your stringMatches method does not have the right context as the testAssert. Your stringMatches method has "penguin….., etc.", but your testAsserts pass "Start…." Wow, strange. I don't know how that got lost. Will fix asap. Updated. This and the other Regex post was really helpful. I was able to get a regex that worked for my needs, but I'm wondering if there is a more concise approach? Here's my regex: '^(?!(?|foo|bar)$)(?!.*_)(.*)'; Basically I want to match any string that is NOT EQUAL TO "foo" nor "bar" nor that has an underscore. So "fool" and "baroom" should match, as well as "bebar", but not "bar_oom" nor "be_bar" nor even "hello_goodbye." Said another way: only words without an underscore and that are not "foo" or "bar." So is there a more straightforward regex I could use? Negative matching for the entire words "foo" and "bar" was what made this difficult. Thanks in advance for your time. This might be close to what you want: Click to see live example in visual regex tester…
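The same grouping, alternation, and non-capturing syntax can be exercised quickly in Python's re module (a sketch with made-up patterns, not the article's Java code):

```python
import re

# Capturing alternation: the matched alternative is stored as group 1.
m = re.search(r"I like (dogs|penguins)", "I like dogs, but not lions.")
assert m is not None and m.group(1) == "dogs"

# Non-capturing alternation with (?: ... ): the match still succeeds,
# but the (?: ... ) group creates no numbered group, so group 1 here
# belongs to the second, capturing group.
m = re.search(r"I like (?:dogs|penguins), but not (lions|tigers)",
              "I like penguins, but not lions.")
assert m is not None and m.group(1) == "lions"
```

Note that re.search scans for a partial match anywhere in the string, unlike Java's String.matches(), which must match the entire input.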
http://www.ocpsoft.org/tutorials/regular-expressions/or-in-regex/
Display Custom Items in JavaFX ListView
Last modified: August 23, 2021
1. Introduction
JavaFX is a powerful tool designed to build application UIs for different platforms. It provides not only UI components but also different useful tools, such as properties and observable collections. The ListView component is handy for managing collections. Namely, we don't need to define a DataModel or update ListView elements explicitly. Once a change happens in the ObservableList, it is reflected in the ListView widget. However, such an approach requires a way to display our custom items in the JavaFX ListView. This tutorial describes a way to set up how domain objects look in the ListView.
2. Cell Factory
2.1. Default Behavior
By default, ListView in JavaFX uses the toString() method to display an object. So the obvious approach is to override it:
public class Person {
    String firstName;
    String lastName;
    @Override
    public String toString() {
        return firstName + " " + lastName;
    }
}
This approach is OK for learning and conceptual examples. However, it's not the best way. First, our domain class takes on display implementation. Thus, this approach contradicts the single responsibility principle. Second, other subsystems may use toString(). For instance, we may use the toString() method to log our object's state. Logs may require more fields than an item of ListView. So, in this case, a single toString() implementation can't fulfill every module's needs.
2.2. Cell Factory to Display Custom Objects in ListView
Let's consider a better way to display our custom objects in JavaFX ListView. Each item in a ListView is displayed with an instance of the ListCell class. ListCell has a property called text, and a cell displays its text value. So to customize the text in a ListCell instance, we should update its text property. Where can we do it? ListCell has a method named updateItem. When the cell for an item appears, it calls updateItem. The updateItem method also runs when the cell changes.
So we should inherit our own implementation from the default ListCell class. In this implementation, we need to override updateItem. But how can we make ListView use our custom implementation instead of the default one? A ListView may have a cell factory. The cell factory is null by default, and we should set it up to customize the way ListView displays objects. Let's illustrate a cell factory with an example:
public class PersonCellFactory implements Callback<ListView<Person>, ListCell<Person>> {
    @Override
    public ListCell<Person> call(ListView<Person> param) {
        return new ListCell<>(){
            @Override
            public void updateItem(Person person, boolean empty) {
                super.updateItem(person, empty);
                if (empty || person == null) {
                    setText(null);
                } else {
                    setText(person.getFirstName() + " " + person.getLastName());
                }
            }
        };
    }
}
The cell factory should implement a JavaFX Callback. The Callback interface in JavaFX is similar to the standard Java Function interface; JavaFX uses its own Callback interface for historical reasons. We should call the default implementation of the updateItem method, as it triggers default actions such as connecting the cell to the object and showing a row for an empty list. The default implementation of updateItem calls setText too, which sets up the text that will be displayed in the cell.
2.3. Display Custom Items in JavaFX ListView With Custom Widgets
ListCell gives us the opportunity to set a custom widget as content. All we should do to display our domain objects in custom widgets is use setGraphic() instead of setText(). Suppose we have to display each row as a CheckBox.
Let's take a look at the appropriate cell factory: public class CheckboxCellFactory implements Callback<ListView<Person>, ListCell<Person>> { @Override public ListCell<Person> call(ListView<Person> param) { return new ListCell<>(){ @Override public void updateItem(Person person, boolean empty) { super.updateItem(person, empty); if (empty) { setText(null); setGraphic(null); } else if (person != null) { setText(null); setGraphic(new CheckBox(person.getFirstName() + " " + person.getLastName())); } else { setText("null"); setGraphic(null); } } }; } } In this example, we set the text property to null. If both text and graphic properties exist, the text will show beside the widget. Of course, we can set up the CheckBox callback logic and other properties based on our custom element data. It requires some coding, the same way as setting up the widget text. 3. Conclusion In this article, we considered a way to show custom items in JavaFX ListView. We saw that the ListView allows quite a flexible way to set it up. We can even display custom widgets in our ListView cells. As always, the code for the examples is available over on GitHub.
https://www.baeldung.com/javafx-listview-display-custom-items
Progress Bar for Ray Actors (tqdm)
Tracking progress of distributed tasks can be tricky. This script will demonstrate how to implement a simple progress bar for a Ray actor to track progress across various different distributed components.
Setup: Dependencies
First, import some dependencies.
# Inspiration:
# 1/files#diff-7ede881ddc3e8456b320afb958362b2aR12-R45
from asyncio import Event
from typing import Tuple
from time import sleep
import ray
# For typing purposes
from ray.actor import ActorHandle
from tqdm import tqdm
This is the Ray "actor" that can be called from anywhere to update our progress. You'll be using the update method. Don't instantiate this class yourself. Instead, it's something that you'll get from a ProgressBar.
@ray.remote
class ProgressBarActor:
    counter: int
    delta: int
    event: Event

    def __init__(self) -> None:
        self.counter = 0
        self.delta = 0
        self.event = Event()

    def update(self, num_items_completed: int) -> None:
        """Updates the ProgressBar with the incremental
        number of items that were just completed.
        """
        self.counter += num_items_completed
        self.delta += num_items_completed
        self.event.set()

    async def wait_for_update(self) -> Tuple[int, int]:
        """Blocking call. Waits until somebody calls `update`, then returns
        a tuple of the number of updates since the last call to
        `wait_for_update`, and the total number of completed items.
        """
        await self.event.wait()
        self.event.clear()
        saved_delta = self.delta
        self.delta = 0
        return saved_delta, self.counter

    def get_counter(self) -> int:
        """Returns the total number of complete items."""
        return self.counter
This is where the progress bar starts. You create one of these on the head node, passing in the expected total number of items, and an optional string description. Pass along the actor reference to any remote task, and if they complete ten tasks, they'll call actor.update.remote(10).
# Back on the local node, once you launch your remote Ray tasks, call
# `print_until_done`, which will feed everything back into a `tqdm` counter.
class ProgressBar:
    progress_actor: ActorHandle
    total: int
    description: str
    pbar: tqdm

    def __init__(self, total: int, description: str = ""):
        # Ray actors don't seem to play nice with mypy, generating
        # a spurious warning for the following line,
        # which we need to suppress. The code is fine.
        self.progress_actor = ProgressBarActor.remote()  # type: ignore
        self.total = total
        self.description = description

    @property
    def actor(self) -> ActorHandle:
        """Returns a reference to the remote `ProgressBarActor`.
        When you complete tasks, call `update` on the actor.
        """
        return self.progress_actor

    def print_until_done(self) -> None:
        """Blocking call. Do this after starting a series of remote Ray
        tasks, to which you've passed the actor handle. Each of them calls
        `update` on the actor. When the progress meter reaches 100%, this
        method returns.
        """
        pbar = tqdm(desc=self.description, total=self.total)
        while True:
            delta, counter = ray.get(self.actor.wait_for_update.remote())
            pbar.update(delta)
            if counter >= self.total:
                pbar.close()
                return
This is an example of a task that increments the progress bar. Note that this is a Ray Task, but it could very well be any generic Ray Actor.
@ray.remote
def sleep_then_increment(i: int, pba: ActorHandle) -> int:
    sleep(i / 2.0)
    pba.update.remote(1)
    return i
Now you can run it and see what happens!
def run():
    ray.init()
    num_ticks = 6
    pb = ProgressBar(num_ticks)
    actor = pb.actor
    # You can replace this with any arbitrary Ray task/actor.
    tasks_pre_launch = [
        sleep_then_increment.remote(i, actor) for i in range(0, num_ticks)
    ]
    pb.print_until_done()
    tasks = ray.get(tasks_pre_launch)
    tasks == list(range(num_ticks))
    num_ticks == ray.get(actor.get_counter.remote())
run()
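Stripped of Ray, the actor's update / wait_for_update handshake is just an asyncio.Event plus two counters. A minimal single-process sketch of the same pattern (our simplification, not part of the original script):

```python
import asyncio

class LocalProgress:
    """Single-process analogue of ProgressBarActor: `update` records
    completed items and sets the event; `wait_for_update` blocks until
    an update arrives, then returns (delta since last wait, total)."""
    def __init__(self):
        self.counter = 0
        self.delta = 0
        self.event = asyncio.Event()

    def update(self, num_items_completed):
        self.counter += num_items_completed
        self.delta += num_items_completed
        self.event.set()

    async def wait_for_update(self):
        await self.event.wait()
        self.event.clear()
        saved_delta, self.delta = self.delta, 0
        return saved_delta, self.counter

async def demo():
    progress = LocalProgress()
    progress.update(2)
    progress.update(3)  # two updates land before anyone waits
    return await progress.wait_for_update()
```

Because the event stays set between updates, both increments are folded into a single (delta=5, total=5) report, which is exactly why the tqdm loop above cannot miss fast bursts of updates.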
https://docs.ray.io/en/master/auto_examples/progress_bar.html
Google Colaboratory for AI Research and Collaboration
Machine learning enthusiasts, rejoice! Google has not only open-sourced its deep learning framework, TensorFlow; now they bring us a new tool to aid in the research and teaching of machine learning: a Jupyter-based research tool called Colaboratory. Colaboratory is a free tool which allows you to run and share Jupyter notebooks, all from the cloud. No need to install anything; all you need is a desktop browser such as Chrome and a Google account, and you will have access to a Jupyter notebook environment with all the popular data science packages ready for you to begin coding.
Accessing Colaboratory
To start using Colaboratory, follow this link: You might need to register and wait for an invitation prior to using this tool. Once you are in, you are greeted with a Hello, Colaboratory notebook with a few code snippets and instructions on how to use the tool. At the end of the notebook are some important links. At this point, you can create your own new Jupyter notebook with a Python 2 or Python 3 kernel. Only Python is supported at this time. The Jupyter notebooks you create are stored in your Google Drive, where you can search, download and upload notebooks. If you choose to collaborate with others, they can access your notebook and you can work on the same notebook at the same time. When this happens, you can see who is in your notebook and watch each other's code changes in real time. Awesome!
Code Execution
Google automatically provisions a personal Linux virtual machine on which your notebooks will run. When you go idle for a while, your virtual machine is recycled. Each virtual machine has a maximum runtime.
Shell Commands
Google allows you to run certain shell commands on your virtual machine.
For example, to install additional Python libraries using pip or apt-get, you can run the following:
!pip install -q matplotlib-venn
!apt-get -qq install -y libfluidsynth1
Shell commands are preceded by an exclamation mark.
File Upload and Download Sample
Let's go through a quick demo of how you can save files in your Colaboratory environment and download them. First, create a pandas DataFrame with 5 rows and 2 columns, which we rename to A and B. At the end, we use a regular pandas command to save the data frame into a text file.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(5, 2))
df.columns = ['A','B']
df.to_csv("test_dataframe.txt", sep="\t")
To view the files in your Colaboratory virtual machine, run the shell command:
!ls
You will now see the test_dataframe.txt file we just saved under the datalab folder of your virtual machine. Let's now download this file to your local PC by running the script below:
from google.colab import files
files.download('test_dataframe.txt')
To upload files manually from your local PC, run the following:
from google.colab import files
uploaded = files.upload()
When you execute the above, you get a file upload button where you can then select and load a file from your local PC.
Code Snippets
There are various other ways to load data onto your Colaboratory virtual machine, not to mention much additional functionality. You can find these snippets under Tools > Snippets in your Colaboratory Jupyter notebook.
Conclusion
Colaboratory allows you to easily start building, sharing and collaborating on your machine learning projects. Free and easy to use with great functionality, this is a great new machine learning tool. Thanks, Google!
http://www.insightsbot.com/blog/1Su6Wx/google-colaboratory-for-ai-research-and-collaboration
min Function (XQuery)
Returns, from a sequence of atomic values $arg, the one item whose value is less than that of all the others. All types of the atomized values that are passed to min() must be subtypes of a single base type; min() receives the base type of the passed-in types, such as xs:double in the case of xdt:untypedAtomic. If the input is statically empty, empty is implied and a static error is returned. The min() function returns the one value in the sequence that is smaller than all the others.
Example: using the min() XQuery function to find the work center location that has the fewest labor hours
The following query retrieves all work center locations in the manufacturing process of the product model (ProductModelID=7) that have the fewest labor hours. Generally, as shown in the following, a single location is returned. If multiple locations had an equal number of minimum labor hours, they would all be returned.
select ProductModelID, Name, Instructions.query('
declare namespace ') as Result
FROM Production.ProductModel
WHERE ProductModelID=7
Note the following from the previous query:
- The namespace keyword in the XQuery prolog defines a namespace prefix. This prefix is then used in the XQuery body. The XQuery body constructs the XML that has a <Location> element with WCID and LaborHrs attributes.
- The query also retrieves the ProductModelID and Name values.
This is the result:
https://msdn.microsoft.com/en-US/library/ms190951(v=sql.90).aspx
educational activity for teaching how water is routed through a drainage basin
Project description
rain_table
Interactive rain table model written by Jeffrey Kwang at UIUC, refactored for SedEdu by Andrew J Moodie. The version in this (SedEdu) repository is different from the original by Jeffrey. This version retains most of the functionality but does not rely on the Pygame dependency. The cost is that this simulation runs a little slower, but it is still fast enough to be fun. See Jeffrey's original implementation here. This repository is also linked into the SedEdu suite of education modules and can be accessed there as well.
About the model
The model uses a D8 routing scheme to route rainfall over the surface of a DEM. All flow is assumed to be surface runoff. The hydrograph is scaled to the maximum baseflow equilibrium condition. These watersheds drain directly into the Columbia River in Washington state (Lat 47°10'03.8"N, Lon 120°07'31.9"W).
Installing the module
This module depends on Python 3, tkinter, and the Python packages numpy, pillow, and matplotlib.
Installing Python 3
If you are new to Python, it is recommended that you install Anaconda, which is an open source distribution of Python that includes many basic scientific libraries, some of which are used in the module. Anaconda can be downloaded for Windows, macOS, and Linux. If you do not have storage space on your machine for Anaconda, or wish to install a smaller version of Python for another reason, see below for options like Miniconda or vanilla Python.
- Visit the website for Anaconda and select the installer for your operating system. Be sure to select the Python 3.x installation.
- Start the installer.
- If prompted, select to "install just for me", unless you know what you are doing.
- When prompted to add Anaconda to the path during installation, select yes if you know you do not have any other Python installed on your computer; otherwise select no.
See below for detailed instructions on installing rain_table for your operating system.
Installing the module
If you installed Anaconda Python or Miniconda, you can follow the instructions below for your operating system. Otherwise, see the instructions for PyPI installation below. Please open an issue if you encounter any trouble installing, or any error messages along the way! Please include 1) your operating system, 2) the installation method, and 3) a copy-paste of the error.
Windows users
Open your "start menu" and search for the "Anaconda prompt"; start this application. To install the module, type the following command and hit "enter":
conda install -c sededu rain_table
If asked to proceed, type Y and press "enter" to continue installation. This process may take a few minutes as the necessary source code is downloaded. If the installation succeeds, proceed below to the "Run the module" section. Note on permissions: you may need to run as administrator on Windows.
Mac OSX and Linux users
Linux users: you will also need to install tkinter before trying to install the module below, whether through conda or pip3. On Ubuntu this is done with sudo apt install python3-tk.
- Install the module by opening a terminal and typing the following command.
conda install -c sededu rain_table
If asked to proceed, type Y and press enter to continue installation.
- This process may take a few minutes as the necessary source code is downloaded. If the installation succeeds, proceed below to the "Run the module" section. Note on permissions: you may need to use sudo on OSX and Linux.
Advanced user installations
To install with pip from PyPI use (not recommended for entry-level users):
pip3 install pyqt rain_table
or, in the event of a failed install, try:
pip3 install pyqt5 sededu
See the instructions below for downloading the source code if you wish to be able to modify it for development or exploration.
Run the module
- Open a Python shell by typing python (or python3) at the terminal (OSX and Linux users) or at the Conda or Command Prompt (Windows users).
- Run the module from the Python shell with:
import rain_table
Instructions will indicate to use the following command to then run the module:
rain_table.run()
Alternatively, you can do this in one line from the standard terminal with:
python -c "import rain_table; rain_table.run()"
Alternatively, run the module with the provided script (this is the hook used for launching from SedEdu):
python3 <path-to-installation>run_rain_table.py
Please open an issue if you encounter any additional error messages! Please include 1) your operating system, 2) the installation method, and 3) a copy-paste of the error.
Smaller Python installation options
Note that if you do not want to install the complete Anaconda Python distribution, you can install Miniconda (a smaller version of Anaconda), or you can install Python alone and use a package manager called pip to do the installation. You can get Python and pip together here.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
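Going back to the model itself: the D8 routing scheme mentioned in "About the model" sends each cell's water to the neighbor with the steepest downslope drop, with diagonal drops scaled by distance. A minimal illustration of that selection step (our sketch, not rain_table's actual code):

```python
import math

def d8_direction(dem, r, c):
    """Return the (dr, dc) offset of the steepest-descent neighbor of
    cell (r, c) on a 2D elevation grid `dem`, or None if the cell is a
    pit (no strictly lower neighbor). Diagonal drops are divided by
    sqrt(2) to account for the longer flow path."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dem) and 0 <= nc < len(dem[0]):
                drop = (dem[r][c] - dem[nr][nc]) / math.hypot(dr, dc)
                if drop > best_drop:
                    best_drop, best = drop, (dr, dc)
    return best
```

Repeating this choice cell by cell traces every raindrop's path to the basin outlet, which is what the animated rain table visualizes.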
https://pypi.org/project/rain-table/
Question: I'm trying to find the angle (in degrees) between two 2D vectors. I know I need to use trig, but I'm not too good with it. This is what I'm trying to work out (the Y axis increases downward). I'm trying to use this code at the moment, but it's not working at all (it calculates random angles for some reason):
private float calcAngle(float x, float y, float x1, float y1) {
    float _angle = (float)Math.toDegrees(Math.atan2(Math.abs(x1-x), Math.abs(y1-y)));
    Log.d("Angle","Angle: "+_angle+" x: "+x+" y: "+y+" x1: "+x1+" y1: "+y1);
    return _angle;
}
These are my results (they're constant when providing a constant position, but when I change the position, the angle changes and I can't find any link between the two angles):
Position 1: x:100 y:100 x1:50 y1:50 Angle: 45
Position 2: x:92 y:85 x1:24 y1:16 Angle: 44.58
Position 3: x:44 y:16 x1:106 y1:132 Angle: 28.12
Edit: Thanks everyone who answered and helped me figure out what was wrong! Sorry the title and the question were confusing.
Solution 1: You first have to understand how to compute the angle between two vectors, and there are several ways. I will give you what I think is the simplest.
- Given v1 and v2, their dot product is: v1x * v2x + v1y * v2y
- The norm of a vector v is given by: sqrt(vx^2 + vy^2)
With this information, please take this definition:
dot(v1, v2) = norm(v1) * norm(v2) * cos(angle(v1, v2))
Now, you solve for angle(v1, v2):
angle(v1, v2) = acos( dot(v1, v2) / (norm(v1) * norm(v2)) )
Finally, taking the definitions given at the beginning, you end up with:
angle(v1, v2) = acos( (v1x * v2x + v1y * v2y) / (sqrt(v1x^2+v1y^2) * sqrt(v2x^2+v2y^2)) )
Again, there are many ways to do this, but I like this one because it is helpful for computing the dot product given an angle and norms, or the angle given the vectors. The answer will be in radians, but you know that pi radians (that is, 3.14 radians) are 180 degrees, so you simply multiply by the conversion factor 180/pi.
Solution 2: Aha!
Turns out I just needed to flip my angle and use atan2. This is my final code:

private float calcAngle(float x, float y, float x1, float y1) {
    float _angle = (float)Math.toDegrees(Math.atan2(x1-x, y-y1));
    return _angle;
}

Thanks everyone for helping me figure this out and also for helping me to understand what I'm actually doing! :)

Solution:3 Do not take the absolute value of the arguments to atan2. The whole point of atan2 is that it uses the signs of its arguments to work out which quadrant the angle is in. By taking the absolute values you are forcing atan2 to only return values between 0 and pi/2 instead of -pi to pi.

Solution:4 It looks like Niall figured it out, but I'll finish my explanation anyway. In addition to explaining why the solution works, my solution has two advantages:
- Potential division by zero within atan2() is avoided
- The return value is always positive, in the range 0 to 360 degrees

atan2() returns the counter-clockwise angle relative to the positive X axis. Niall was looking for the clockwise angle relative to the positive Y axis (between the vector formed by the two points and the positive Y axis). The following function is adapted from my asteroids game where I wanted to calculate the direction a ship/velocity vector was "pointing":

// Calculate angle between vector from (x1,y1) to (x2,y2) & +Y axis in degrees.
// Essentially gives a compass reading, where N is 0 degrees and E is 90 degrees.
double bearing(double x1, double y1, double x2, double y2) {
    // x and y args to atan2() swapped to rotate resulting angle 90 degrees
    // (Thus angle in respect to +Y axis instead of +X axis)
    double angle = Math.toDegrees(atan2(x1 - x2, y2 - y1));
    // Ensure result is in interval [0, 360)
    // Subtract because positive degree angles go clockwise
    return (360 - angle) % 360;
}

Solution:5 I believe the equation for the angle between two vectors should look more like:

toDegrees(acos((x*x1+y*y1)/(sqrt(x*x+y*y)*sqrt(x1*x1+y1*y1))))

Your above equation will calculate the angle made between the vector p1-p2 and the line made by extending an orthogonal from the point p2 to the vector p1. The dot product of two vectors V1 and V2 is equal to |V1|*|V2|*cos(theta). Therefore, theta is equal to acos((V1 dot V2)/(|V1||V2|)). V1 dot V2 is V1.x*V2.x + V1.y*V2.y. The magnitude of V (i.e. |V|) is the Pythagorean theorem: sqrt(V.x^2 + V.y^2)

Solution:6 It should be: atan( abs(x1 - x)/abs(y1 - y) ), where abs stands for absolute value (to avoid negative values).

Solution:7 My first guess would be to calculate the angle of each vector with the axes using atan(y/x), then subtract those angles and take the absolute value, that is: abs(atan(y/x) - atan(y1/x1))

Solution:8 Are you using integers? Cast the arguments as doubles, and I would use fabs on the result, not the arguments. The result will be in radians; to get degrees, use: res *= (360.0/(2.0*Math.PI));

Solution:9 The angle of the second vector relative to the first = atan2(y2,x2) - atan2(y1,x1).
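A quick way to sanity-check Solution 4 is to port it and try a few compass directions. The following Python sketch is my own port of the Java bearing() above, not part of the original answers:

```python
import math

def bearing(x1, y1, x2, y2):
    """Clockwise angle in degrees between the vector (x1,y1)->(x2,y2) and the +Y axis."""
    # x and y args to atan2 swapped, as in the Java version, to measure
    # the angle from the +Y axis instead of the +X axis
    angle = math.degrees(math.atan2(x1 - x2, y2 - y1))
    # Ensure the result is in [0, 360); subtract because positive
    # degree angles go clockwise
    return (360 - angle) % 360

print(bearing(0, 0, 0, 1))   # 0.0   (vector along +Y, i.e. "north")
print(bearing(0, 0, 1, 0))   # 90.0  (vector along +X, i.e. "east")
```

Trying the four axis directions confirms the compass convention the comment promises: +Y gives 0, +X gives 90, -Y gives 180, -X gives 270.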
http://www.toontricks.com/2019/06/tutorial-how-to-calculate-angle-of.html
ketil+haskell: > Matthew Pocock <matthew.pocock at ncl.ac.uk> writes: > > > I've been using hxt to process xml files. Now that my files are getting a bit > > bigger (30m) I'm finding that hxt uses inordinate amounts of memory. > : > > Is this a known issue? > > Yes. I parse what I suppose are rather large XML files (the largest > so far is 26GB), and ended up replacing HXT code with TagSoup. I also > needed to use concurrency[1]. XML parsing is still slow, typically > consuming 90% of the CPU time, but at least it works without blowing > the heap. > > While I haven't tried HaXML, there is IMO a market opportunity for a > fast and small XML library, and I'd happily trade away features like > namespace support or arrows interfaces for that. So this is a request for an xml-light based on lazy bytestrings, designed for speed at all costs? -- Don
http://www.haskell.org/pipermail/haskell-cafe/2008-January/038642.html
Log4c rolling policy interface. Defines the interface for managing and providing rolling policies. More...

#include <stdio.h>
#include <log4c/defs.h>
#include <log4c/layout.h>

Go to the source code of this file.

Log4c rolling policy interface. Defines the interface for managing and providing rolling policies. A rolling policy is used to configure a rollingfile appender to tell it when to trigger a rollover event.

Effect a rollover according to policyp on the given file stream.

log4c rollingpolicy type. Defines the interface a specific policy must provide to the rollingfile appender.

Attributes description:
- name: rollingpolicy type name
- init(): init the rollingpolicy
- is_triggering_event()
- rollover()

Call the uninitialization code of a rolling policy. This will call the fini routine of the particular rollingpolicy type to allow it to free up resources. If the call to fini in the rollingpolicy type fails then the rollingpolicy is not uninitialized. Try again later model...

Get a new rolling policy. Get the rollingfile appender associated with this policy. Get the rolling policy configuration. Call the initialization code of a rolling policy. Determine if a logging event should trigger a rollover according to the given policy. Sets the rolling policy type. Configure a rolling policy with a specific policy. Get a pointer to an existing rollingpolicy type. Use this function to register a rollingpolicy type with log4c. Once this is done you may refer to this type by name both programmatically and in the log4c configuration file. Example code fragment:
http://log4c.sourceforge.net/rollingpolicy_8h.html
This is a common problem but I'm not sure how to solve it. The code below works fine.

var mind = time % (60 * 60);
var minutes = Math.floor(mind / 60);
var secd = mind % 60;
var seconds = Math.ceil(secd);

You're doing it wrong. To get the number of full minutes, divide the number of total seconds by 60 (60 seconds/minute):

var minutes = Math.floor(time / 60);

And to get the remaining seconds, multiply the full minutes by 60 and subtract from the total seconds:

var seconds = time - minutes * 60;

Now if you also want to get the full hours too, divide the number of total seconds by 3600 (60 minutes/hour · 60 seconds/minute) first, then calculate the remaining seconds:

var hours = Math.floor(time / 3600);
time = time - hours * 3600;

Then you calculate the full minutes and remaining seconds.

Bonus: Use the following code to pretty-print the time (suggested by Dru):

function str_pad_left(string, pad, length) {
    return (new Array(length + 1).join(pad) + string).slice(-length);
}

var finalTime = str_pad_left(minutes, '0', 2) + ':' + str_pad_left(seconds, '0', 2);
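As an aside, the same split is just integer division and remainder, which you can sanity-check in any language. Here is a small Python sketch (the helper name hms is my own) using divmod:

```python
def hms(time):
    """Split a total number of seconds into (hours, minutes, seconds)."""
    hours, rest = divmod(time, 3600)     # 3600 seconds per hour
    minutes, seconds = divmod(rest, 60)  # 60 seconds per minute
    return hours, minutes, seconds

print(hms(3750))  # (1, 2, 30): 3750 s = 1 h 2 min 30 s
```

The two divmod calls mirror the two Math.floor divisions in the JavaScript answer above.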
https://codedump.io/share/CGwNwqJEm6Pz/1/javascript-seconds-to-minutes-and-seconds
Tutorial: Mocks

Sometimes we need to mock out classes or APIs to assert the expected behaviour. Mocking is built in to Spock; we don’t need a separate library or framework for mock support. It’s also possible to mock concrete classes. If you’re used to other mocking frameworks you might expect to only be able to mock Java interfaces, but Spock lets us easily create a mock from a concrete class.

The given block of a Spock test is the perfect place to set up mocks for our test. It’s clear then that this is all code that’s required to run the test, but is not the code that’s being tested itself.

def "should be able to mock a concrete class"() {
    given:
    Renderer renderer = Mock()
    def polygon = new Polygon(4, renderer)

    when:
    polygon.draw()

    then:
    4 * renderer.drawLine()
}

This test mocks a Renderer class, which is a concrete Java class. We can do this either by declaring a variable with type Renderer and calling Mock without any arguments:

Renderer renderer = Mock()

…or if we prefer to use Groovy’s def to define our variables, we’ll need to pass the type in as an argument to the Mock method:

def renderer = Mock(Renderer)

Bear in mind that if you declare it using “def”, this variable is using Groovy’s dynamic typing, and so isn’t strongly recognised as a Renderer type by the IDE or by the code. This is fine if you’re not doing much with the mock, but you might sometimes want to specify the type more clearly; this will certainly be more natural for Java developers.

The given block also sets up a Polygon with the given renderer, calling the constructor with the numberOfSides and the mocked renderer. The when section defines the call that’s actually the thing we’re testing; in this test we want to see what happens when we call the draw method on this polygon. Make sure there’s a draw method on the polygon; at this stage it can be empty because we’re doing a bit of TDD:

public void draw() {
}

The then block defines the expectations.
Spock has a nice, clear syntax for defining the behaviour we expect to see on the mock. In this test, we might expect to see four calls on the renderer’s drawLine method, given that the polygon has four sides. The then block states we expect to see renderer.drawLine called 4 times.

Run this test now; it should fail. This is because the methods don’t do anything yet. We expected to see this drawLine method called four times, but it wasn’t called at all. Go into the implementation of the Polygon.draw method and change it to call the renderer’s drawLine method in here as many times as there are sides (note that this is an extremely over-simplified example to demonstrate the testing):

public void draw() {
    for (int i = 0; i < numberOfSides; i++) {
        renderer.drawLine();
    }
}

Re-run the test; it should pass. The code is calling drawLine on the renderer mock four times. Mocks are a powerful and useful tool to make sure that the code that we’re testing is calling the APIs that we expect, in the way we expect.

Stubs

Mocks are useful for checking calls out of our code; Stubs are useful for providing data or values into the code we’re testing. Let’s see an example of a stub in a new test method.

def "should be able to create a stub"() {
    given:
    Palette palette = Stub()
    palette.getPrimaryColour() >> Colour.Red
    def renderer = new Renderer(palette)

    expect:
    renderer.getForegroundColour() == Colour.Red
}

The given block sets up the preconditions for the test. This time, we’re going to use the Stub() method to create a Stub of the concrete Palette class. Like with Mock(), you can define it this way, or use def and pass the type into the Stub() method. Next the test sets up the palette stub with the values it will produce when called by our code. We use right-shift (>>) to state that when the method getPrimaryColour is called, the Enum value Red will be returned. The last step of setup is to create the Renderer with this stub palette.
If you’re following along with this code in the IDE, make sure your Renderer looks something like:

public class Renderer {
    private Palette palette;

    public Renderer(Palette palette) {
        this.palette = palette;
    }

    public void drawLine() {
    }
}

The test uses an expect label because the test and the assertion are combined – we expect that when we call getForegroundColour, this will return Colour.Red. This test states that we expect getForegroundColour to return the same colour as the palette’s primary colour. Once again, we can use test-driven development here – we can use the test to drive out what we expect the methods to look like even if they don’t exist yet. Use ⌥⏎ (macOS), or Alt+Enter (Windows/Linux), on any red method names to get IntelliJ IDEA to create the most basic methods that make the code compile, then run the test. It should fail if we haven’t implemented the details for getForegroundColour. It’s good to see the test fail first; it often indicates the test is checking the right thing, even if that right thing hasn’t been implemented yet. Change the getForegroundColour method to return the palette’s primary colour:

public Colour getForegroundColour() {
    return palette.getPrimaryColour();
}

Re-run the test; it should pass. The test injects a Stub palette into the renderer; we tell the stub palette what to return when the getPrimaryColour method is called, so we can check that the renderer does what it’s supposed to do when we call getForegroundColour. If we had set this up as a Mock instead of a Stub, this would have worked as well. Mock objects support the mocking behaviour we saw in the previous test and the stubbing behaviour we saw here, whereas Stub objects only support stubbing, and not mocking. My preference is to keep stub and mock behaviour separate where possible, so it’s usually best to use Stubs just for stubbing and Mocks only for mocking.

Conclusion

In this blog, we looked at mocking and stubbing.
Now you know how to: - Create a mock and write a test that shows a particular method was called when the test was run - Create a stub to provide an expected value, so a test can verify that expected value is used Spock has much more to offer than this, stay tuned for further blog posts, watch the full video, or take a look at the excellent reference documentation.
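As a cross-language aside (not part of the original Spock tutorial), the same two ideas look like this with Python's unittest.mock, where a single Mock object supports both the call-count check and the stubbed return value; the Polygon class here is my own minimal re-creation of the tutorial's example:

```python
from unittest.mock import Mock

class Polygon:
    def __init__(self, number_of_sides, renderer):
        self.number_of_sides = number_of_sides
        self.renderer = renderer

    def draw(self):
        # call draw_line once per side, like the Groovy example
        for _ in range(self.number_of_sides):
            self.renderer.draw_line()

# Mocking: verify outgoing calls, like "4 * renderer.drawLine()"
renderer = Mock()
Polygon(4, renderer).draw()
assert renderer.draw_line.call_count == 4

# Stubbing: supply a value, like "palette.getPrimaryColour() >> Colour.Red"
palette = Mock()
palette.get_primary_colour.return_value = "Red"
assert palette.get_primary_colour() == "Red"
```

Unlike Spock, Python does not separate Mock and Stub types, so the discipline of using stubs for incoming data and mocks for outgoing calls has to be a convention rather than a type distinction.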
https://blog.jetbrains.com/idea/2021/02/tutorial-spock-part-4-mocking-and-stubbing/
I use Windows and am new to Python. I tried to use a docstring and an error occurred saying "function object has no attribute _doc_". What do I need to do? I have tried this code:

def printMax(x,y):
    '''Prints the maximum of two numbers. Numbers must be intiger.'''
    x=int(x)
    y=int(y)
    if x>y:
        print x,'is max'
    else:
        print y,'is max'

print (printMax._doc_)

Re: about docstring — The only thing I can see is that doc must have two _'s before and after it, not just one, so it's __doc__; you look like you have _doc_.
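To spell out the fix from the reply above: the attribute is __doc__, with two underscores on each side. A small Python 3 sketch of the corrected code (the print calls are updated from the Python 2 syntax in the question):

```python
def print_max(x, y):
    '''Prints the maximum of two numbers. Numbers must be integer.'''
    x = int(x)
    y = int(y)
    if x > y:
        print(x, 'is max')
    else:
        print(y, 'is max')

# __doc__ has two underscores on each side, not one (_doc_)
print(print_max.__doc__)  # Prints the maximum of two numbers. Numbers must be integer.
```

Accessing print_max._doc_ (one underscore) raises AttributeError, which is exactly the error the question describes.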
http://www.dreamincode.net/forums/topic/64066-about-docstring/
The parallel directive #pragma omp parallel makes the code parallel, that is, it forks the master thread into a number of parallel threads, but it doesn't actually share out the work. What we are really after is the parallel for directive, which we call a work-sharing construct. Consider:

#include <iostream>
#include <omp.h>
using namespace std;

int main(void) {
    int i;
    int foo;
    #pragma omp parallel for
    for (i = 1; i < 10; i++) {
        #pragma omp critical
        {
            foo = omp_get_thread_num();
            cout << "Loop number: " << i << " " << "Thread number: " << foo << endl;
        }
    }
}

The for directive applies to the for loop immediately following it. Notice how we don't have to outline a parallel region with curly braces {} after this directive, in contrast to before. This program yields:

[michael@michael lindonslog]$ ./openmp
Loop number: 1 Thread number: 0
Loop number: 8 Thread number: 3
Loop number: 2 Thread number: 0
Loop number: 3 Thread number: 0
Loop number: 9 Thread number: 3
Loop number: 6 Thread number: 2
Loop number: 4 Thread number: 1
Loop number: 7 Thread number: 2
Loop number: 5 Thread number: 1
[michael@michael lindonslog]$

Notice what I said about the order. By default, the loop index (i.e. "i" in this context) is made private by the for directive. At the end of the parallel for loop, there is an implicit barrier where all threads wait until they have all finished. There are, however, some rules for the parallel for directive:
- The loop index, i, is incremented by a fixed amount each iteration, e.g. i++ or i += step.
- The start and end values must not change during the loop.
- There must be no "breaks" in the loop where the code steps out of that code block. Functions are, however, permitted and run as you would expect.
- The comparison operators may be < <= >= >

There may be times when you want to perform some operation in the order of the iterations. This can be achieved with an ordered directive and an ordered clause.
Each thread will wait until the previous iteration has finished its ordered section before proceeding with its own.

int main(void) {
    int i, a[10];
    #pragma omp parallel for ordered
    for (i = 0; i < 10; i++) {
        a[i] = expensive_function(i);
        #pragma omp ordered
        printf("Thread ID: %d Hello World %d\n", omp_get_thread_num(), i);
    }
}

This will now print out the Hello Worlds in order. N.B. there is a penalty for this: the threads have to wait until the preceding iteration has finished with its ordered section of code. Only if expensive_function() were genuinely expensive would this be worthwhile.
http://www.lindonslog.com/programming/openmp/openmp-tutorial-work-sharing/
One more application worth mentioning here is the conversion of text to handwriting. I've already covered that topic; you can check that article here.

"The most powerful tool we have as developers is automation" — Scott Hanselman

So why don't we use it? Automation indeed is a very powerful tool if used for the right purpose, and Python is one such amazing language that has enormous potential to help you automate your tasks. Let's just begin with our simple yet useful automation tutorial. There's a video tutorial linked with this article as well, so you can take help from that also.

Some prerequisites before we continue:
- Make sure you have already authenticated WhatsApp Web in your browser for the smooth functioning of the program. If you don't do this, the script will not be able to run.
- Also make sure you have an active internet connection while you are running this code.

Installing pywhatkit:
- Open your IDE or any code editor of your choice.
- Now in the terminal, write "pip install pywhatkit". (It will take some time to install pywhatkit.)

**For a detailed pictorial representation of how to get started with pywhatkit, click here

Let's Automate WhatsApp Messages

Once we have all the supporting elements ready to run, let's dive right into the actual code. Write this code in your Python file with the following 4 parameters included:
- Phone number of the receiver, along with the country code the number belongs to.
- The message that you want to deliver to your recipient.
- Time in hours in 24-hour format.
- Time in minutes at which you want to deliver the message.

import pywhatkit as py
py.sendwhatmsg("+9185********", "Hello, Welcome to the world of Automation", 18, 32)

Executing our Script

Once you're done till here, as in you've all the parameters checked and ready to execute, simply run the file in the terminal.
If you are using VS Code, then click on the small triangle-shaped button on the top-right which says "Run Python File in Terminal" and you've done your part. You will see that your terminal will show a message indicating the time in seconds that is left to deliver the message. Check out this screenshot for reference. So as my terminal shows, WhatsApp Web will be opened in my browser automatically after 151 seconds, and the message will be delivered 20 seconds after that. Here I have attached one screenshot showing that the message got successfully delivered to the mentioned number at the desired time.

Here's the video tutorial for automating your WhatsApp; you can check this out as well.

Conclusion

We are done with this mini project of ours. I hope you enjoyed it as much as I enjoyed writing it for you. Just think of how easy a task like this will become if you have to send some particular message to someone on a daily basis. We can further program this code according to our use case. Automating WhatsApp messages with Python is such an intriguing thing for me; I am sure it must be exciting for you all as well. I am still exploring Python and learning something new every day. Do share your journey of becoming a Pythonista with me; I would love to read about it. I will bring a complete tutorial of pywhatkit next, probably exploring every feature of it and then sharing it with you all. So stay tuned for some more automation in Python.
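For the curious, the countdown shown in the terminal ("151 seconds") is just clock arithmetic. Here is a hypothetical Python sketch of how such a wait could be computed with the standard library; the helper name wait_seconds_until is my own, not a pywhatkit API:

```python
from datetime import datetime, timedelta

def wait_seconds_until(hour, minute, now=None):
    """Seconds from `now` until the next occurrence of hour:minute."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # time already passed today; schedule for tomorrow
    return int((target - now).total_seconds())

# Example: at 18:29:29, a message scheduled for 18:32 is 151 seconds away,
# matching the countdown described above.
print(wait_seconds_until(18, 32, now=datetime(2021, 6, 16, 18, 29, 29)))  # 151
```

A scheduler only needs to sleep for this many seconds before opening WhatsApp Web and sending the message.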
https://techbit.in/programming/how-to-easily-automate-whatsapp-messages-python-project-02/
Hi guys. In the last article, we discussed how to add styles in React Native. In this article, we will discuss how to create React Native class based components. Before we begin, let's discuss the difference between class based components and functional based components.

React Native Functional based Components

A functional based component is a great way to introduce you to React Native, but its functionality is a little limited. In functional based components, you put data in one end and JSX comes out the other end. That's the only thing it can do. There's no capability to do more complex tasks such as fetching data or initiating more complicated operations. We normally use functional based components for presentational needs like displaying some content to the user. Their only ability is to present static data.

React Native Class based Components

React Native class based components are more complex and require more code to be written, but they have functionalities which functional based components don't have. They can be used for fetching data, are easier to use when writing large components, and also make it easier to organize large amounts of data, because we get a nice class-type structure where you can add many helper methods. These types of components are called class based because they are based on ES6 classes (ES6 is the new version of JavaScript) that extend a base class called Component. Let's create a class based component. Before that, we need to create a file structure, because if you are building a project, you should have a nice file structure.

Creating a file structure for your project

First create a folder inside your project folder named 'src'. Inside that folder, create another folder named 'components'. Now move App.js from its current position to inside the 'src' folder. Now your file structure should be as follows. You have changed the position of the App.js file, so you have to change the index.js file accordingly.
You have to change the path in the import statement for App.js inside the index.js file:

import {AppRegistry} from 'react-native';
import App from './src/App';
import {name as appName} from './app.json';

AppRegistry.registerComponent(appName, () => App);

We are done with creating the file structure for now. Let's build a class based component.

Creating a class based component

First, create a JavaScript file inside the components folder and name it 'DataList.js'. Inside that file, write the following code:

import React, { Component } from 'react';
import { Text, View } from 'react-native';

class DataList extends Component {
    render() {
        return (
        );
    }
}

export default DataList;

Here, just like in other class based components, we have to import the two React Native libraries. As mentioned before, we extend a base class called Component, which means we are borrowing a bunch of functionality from that class. For that, as you can see in the first line of code, we have imported the Component class from the react library. Inside the class based component, we write a render method and a return statement inside the render method. We have to export the component at the end as well. This is how you create a React Native class based component. Let's add some JSX into that code:

import React, { Component } from 'react';
import { Text } from 'react-native';

class DataList extends Component {
    render() {
        return (
            <Text>Class based Components has more functionalities.</Text>
        );
    }
}

export default DataList;

We have finished coding a basic class based component now. But if you run the app now, you wouldn't see any change in the UI, because we didn't import it in App.js. We know index.js is the only file that runs in the mobile app, as mentioned in a previous chapter. App.js has been imported into index.js. So, to show DataList.js, we need to import it in App.js. Add the following import statement in App.js.
import DataList from './components/DataList';

Now we have to add the DataList component inside the return statement in App.js as a JSX element. To keep one root element, wrap the current JSX elements with another View tag in App.js as given below:

const App = () => {
    return (
        <View>
            <View style={styles.viewStyle}>
                <Text style={styles.textStyle}>Hello World!</Text>
            </View>
            <DataList/>
        </View>
    );
}

We added a View tag as a root element and put the DataList component inside that. This is how you import a component inside another component. Now, let's run our app again. Well, we have got what we wanted: we have the previous header section as well as the new DataList class component. I have given you the whole code of App.js, DataList.js and index.js below.

DataList.js

import React, { Component } from 'react';
import { Text } from 'react-native';

class DataList extends Component {
    render() {
        return (
            <Text>Class based Components has more functionalities.</Text>
        );
    }
}

export default DataList;

App.js

// Import a library to help create a component.
import React from 'react';
import { Text, View } from 'react-native';
import DataList from './components/DataList';

// Create the component.
const App = () => {
    return (
        <View>
            <View style={styles.viewStyle}>
                <Text style={styles.textStyle}>Hello World!</Text>
            </View>
            <DataList/>
        </View>
    );
}

const styles = {
    viewStyle: {
        backgroundColor: '#04A5FA',
        justifyContent: 'center',
        alignItems: 'center',
        height: 60,
        paddingTop: 15,
        shadowColor: '#000',
        shadowOffset: { width: 0, height: 2 },
        shadowOpacity: 0.2,
        elevation: 2,
        position: 'relative'
    },
    textStyle: {
        fontSize: 20,
        fontWeight: 'bold'
    }
}

// Render it to the device.
export default App;

index.js

import {AppRegistry} from 'react-native';
import App from './src/App';
import {name as appName} from './app.json';

AppRegistry.registerComponent(appName, () => App);

This is only the creation of class based components.
We will learn more about class based components and what they can do in future articles. Thank you.
http://coderaweso.me/react-native-class-based-components/?utm_source=rss&utm_medium=rss&utm_campaign=react-native-class-based-components
- NAME - SYNOPSIS - PROPERTIES - METHODS - ACTIONS - METHODS - AUTHOR - LICENSE

NAME

Catalyst::Enzyme::CRUD::Controller - CRUD Controller Base Class with CRUD support

SYNOPSIS

See Catalyst::Enzyme

PROPERTIES

model_class

The model class, overloaded by you in each controller class to return the actual class name for the Model this controller should handle. So in your Controller classes, something like this is recommended:

sub model_class { return("BookShelf::Model::BookShelfDB::Genre"); }

METHODS - ACTIONS

These are the default CRUD actions. You should read the source so you know what the actions do, and how you can adjust or block them in your own code. They also deal with form validation, messages, and errors in a certain way that you could use (or not, you may have a better way) in your own Controller actions.

auto : Private

Set up the default model and class for this Controller.

set_crud_controller : Private

Set the current Controller and its Model class (and the Model's configuration, using model_class()). Point $self->crud_config to the Model's config->{crud}.

Set crud_config keys: model_class, model, moniker (default), rows_per_page (default 20), column_monikers (default).

Set stash keys: crud (to the crud_config), controller_namespace (to the Controller's namespace), uri_for_list (to a version that accepts array refs from TT).

Return 1.

- Usage

This action is automatically called by the auto action. This means that if the user invokes an action in a Controller, set_crud_controller is called properly and that Controller's Model class is used. No need to do anything. If you forward between actions in the same Controller, the same Model class should be used, so no need to do anything. But if you forward to an action in a different Controller, you need to tell Enzyme to start using the new Model class first. So, going from e.g.
the Book Controller to a Genre action, you need to:

$c->forward("/genre/set_crud_controller");
$c->forward("/genre/add");

default : Forward to list.
list : Display list template.
view : Select a row and display view template.
add : Display add template.
do_add : Add a new row and redirect to list.
edit : Display edit template.
do_edit : Edit a row and redirect to edit.
delete : Display delete template.
do_delete : Destroy row and forward to list.

METHODS

default_dfv

Return hash ref with a default Data::FormValidator config.

crud_config()

Return hash ref with config values from the Model class's config->{crud} (so model_class needs to be set).

model_with_pager($c, $rows_per_page, $page)

Return either the current model class, or (if $rows_per_page > 0) a pager for the current model class. $page indicates which page to display in the pager (defaults to the first page). Assign the pager to $c->stash->{pager}. The Model class (or its base class) must use Class::DBI::Pager.

template_with_item($template, $c, $id)

Retrieve object with $id and set the $template. Suitable to call like this in an action (nothing else is needed):

sub edit : Local { shift->template_with_item("edit.tt", @_); }

AUTHOR

Johan Lindstrom <johanl ÄT cpan.org>

LICENSE

This library is free software. You can redistribute it and/or modify it under the same terms as perl itself.
https://metacpan.org/pod/Catalyst::Enzyme::CRUD::Controller
You are to write a program that will input English text, translate it into Pig Latin, then output the result to a file.

Pig Latin is an invented language formed by transforming each word according to the following rules:
1. If the word starts with a vowel ('a', 'e', 'i', 'o', 'u', or 'A', 'E', 'I', 'O', 'U') or does not contain a vowel, its Pig Latin equivalent is formed by just adding the suffix "way".
2. If the word starts with a consonant (any letter that is not a vowel) and contains a vowel, its Pig Latin equivalent is formed by moving the initial consonant string (that is, all letters up to the first vowel) from the beginning of the word to the end and then adding the suffix "ay".
3. If a word is transformed by rule 2) and starts with an upper case character (consonant), its Pig Latin equivalent also starts with an upper case character (vowel) and the consonant is changed to lower case.

A word will be taken to mean any consecutive sequence of letters, i.e. you don't have to check that the sequence is actually an English word. A word is terminated by any non-letter (e.g. white space, punctuation mark, etc.).

Made this outline:

int main( ) {
    declare el, pl as strings to store an English line, Pig Latin line;
    prompt for and get English line (use gets);
    while (line is not equal to "!!!") {
        translate(el, pl);
        output Pig Latin line to the screen and to a file;
        prompt for and get English line (use getline);
    }
    system("pause");
    return 0;
}

void translate(const char el[ ], char pl[ ]) {
    declare ew, pw as c-strings to store an English, Pig Latin word;
    declare ep, pp as integer variables that tell the current positions in el, pl;
    /* alternatively you can declare ep, pp as pointers of type char *, but this is tricky */
    initialize current positions ep, pp to be the beginning of el, pl, i.e.
initialize them to 0; while (not at end of el) if (character at current position in el is a letter) { extract word from el that starts from current position and store in ew; translate word ew to Pig Latin and store in pw; copy pw at current end of pl; make sure ep, pp are now set to positions just past the word; } else /* character at current position in el is not a letter */ { copy this character unchanged to current position in pl; increment ep, pp; } Null terminate the string pl; } This is what I have tried to do but I really suck so.... #include <iostream> #include <cstring> #include <cstdlib> using namespace std; int main( ) { char englishline[], piglatinline[] cout << "Please enter the text that is going to be translated to piglatin."; cin.get(englishline[]); while (englishline[] != "!!!") { translate(englishline[], piglatinline[]); cout << piglatinline[] << endl; fout << piglatinline[] << endl; cin.getline(englishline[]); } system("pause"); return 0; } void translate(const char englishline[], char piglatinline[]) { char englishword[], piglatinword[]; int englishpostion=englishline[0] int piglatinpostion=piglatinline[0]; while (!englishline[\0]) { if (englishline[] == char) { englishword[]=englishline[]-englishline[] translate(englishword[] piglatinword[]); strcpy.piglatinword; englishpostion=englishline[0]-englishposition; piglatinpostion=piglatinline[0]-piglatinposition; } else { strcpy.englishword; englishword=piglatinpostion; copy character unchanged to current position in pl; increment ep, pp; } Null terminate the string pl; } Can someone help walk me through this? This post has been edited by jrayborn66: 08 July 2010 - 08:48 PM
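For reference, the three rules and the outline's word-by-word scan can be sketched in standard C++ using std::string. This is a simplification of the assignment's C-string interface, not the required solution; the helper names isVowel, translateWord and translate are mine, not from the assignment:

```cpp
#include <cctype>
#include <string>

// True if c is an English vowel, either case ('y' deliberately excluded,
// matching the assignment's rule list).
static bool isVowel(char c) {
    c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u';
}

// Apply the three Pig Latin rules to a single word.
static std::string translateWord(std::string w) {
    // Locate the first vowel, if any.
    std::string::size_type firstVowel = std::string::npos;
    for (std::string::size_type i = 0; i < w.size(); ++i)
        if (isVowel(w[i])) { firstVowel = i; break; }

    // Rule 1: starts with a vowel, or contains no vowel at all.
    if (firstVowel == 0 || firstVowel == std::string::npos)
        return w + "way";

    // Rule 3 (part 1): remember capitalisation, lower-case the initial consonant.
    bool capitalised = std::isupper(static_cast<unsigned char>(w[0])) != 0;
    if (capitalised)
        w[0] = static_cast<char>(std::tolower(static_cast<unsigned char>(w[0])));

    // Rule 2: move the initial consonant string to the end and add "ay".
    std::string result = w.substr(firstVowel) + w.substr(0, firstVowel) + "ay";

    // Rule 3 (part 2): the new first letter becomes upper case.
    if (capitalised)
        result[0] = static_cast<char>(std::toupper(static_cast<unsigned char>(result[0])));
    return result;
}

// Translate a whole line: each maximal run of letters is a word; every
// non-letter character is copied through unchanged.
static std::string translate(const std::string& line) {
    std::string out;
    std::string::size_type i = 0;
    while (i < line.size()) {
        if (std::isalpha(static_cast<unsigned char>(line[i]))) {
            std::string::size_type start = i;
            while (i < line.size() && std::isalpha(static_cast<unsigned char>(line[i])))
                ++i;
            out += translateWord(line.substr(start, i - start));
        } else {
            out += line[i++];
        }
    }
    return out;
}
```

A main following the outline would then read lines with std::getline until "!!!" is entered, passing each line through translate and writing the result both to std::cout and to a std::ofstream.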
http://www.dreamincode.net/forums/topic/180807-english-to-piglatin-program/
In this post I will outline the major schema-related changes in MSXML 6.0 and what you might need to do if you rely on the old behavior. Questions? Comments? Let me know.

XDR Support

MSXML6 has removed support for XDR schemas. XML Schema (XSD) 1.0 has now been a W3C recommendation for almost 4 years, so we made the decision to discontinue support for proprietary XDR schemas in MSXML6. XDR schemas will continue to be supported in earlier versions of MSXML.

Summary: If two XSD files are loaded from different locations for the same namespace, the Add method will union the declarations found in both locations. In the past, adding a namespace from a secondary location would replace the definitions.
Scenarios: Any scenario where the user is calling Add more than once on the same namespace may be affected. Also, see SchemaCache::getSchema changes below.
If you rely on old behavior: Create a new SchemaCache and add only the appropriate schema.

Summary: We now "flatten" schema imports (xs:import) so every namespace referenced is a first class citizen in the schema cache. This ensures there is only one unique definition for every schema type in the schema cache.
Scenarios: Any scenario where a common namespace was imported from more than one location by two or more namespaces – in other words, any situation where there could be ambiguous definitions for a type in the SchemaCache depending on the namespace it is used from. Also, SchemaCache.Length might have a different value. This is a breaking change and does not lend itself to the old behavior.

Summary: Inline schemas and schemas referenced from an instance using xsi:schemaLocation are now added to an XML instance-specific cache which wraps the user-supplied SchemaCache.
Scenarios: This enables some more complex scenarios where cross-references between runtime schemas are handled appropriately.
This is a non-breaking change – it is enabling new scenarios.

Summary: The schema cache will compile a schema that has a reference to any other type already in the schema cache, regardless of whether there is an explicit import or not (this is like an import with no location). In order to make the add order insignificant, the validateOnParse flag must be set to false when the schema is loaded. This is a non-breaking change.

- The simple workaround is just to create a new schema cache and add the desired schemas.
- There is no workaround to get to the old behavior.
- If you rely on old behavior, create a new SchemaCache and add appropriate schemas.
- To get old behavior, you need to step through included/imported/redefined schemas manually and call get_schemaLocations for each one of them. Note that if you hit schemas loaded from multiple files, you will get multiple schema locations, which is different from the old behavior.

MSDN just published my article on inline schemas. Check it out.

The XML Editor provides support for inline schemas. Inline schemas are XML schema definitions included inside XML instance documents. They can be used to validate that the rest of the XML matches the schema constraints, in the same way that external schema documents can be used. Likewise, the syntax and semantics of inline schemas are the same as for external schemas. Inline schemas can be useful in a number of situations, including:

- An architecture where internal DTDs were used and the developers wish to preserve that design pattern.
- It is difficult to access external files or URLs, e.g. for security or platform reasons.
- There is too much diversity in the set of schemas and instances that a system must process, so it is easiest to simply keep the schema as an integral part of the XML document.

The following XML snippet contains an example of using an inline schema.
    <?xml version="1.0" encoding="utf-8"?>
    <root xmlns:inl="http://tempuri.org/inline">
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="http://tempuri.org/inline">
        <xs:element name="parent">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="child" type="xs:string" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      <inl:parent>
        <inl:child>text</inl:child>
      </inl:parent>
    </root>

You can change the inline schema and the XML Editor will pick up those changes immediately and use the updated schema for validation and intellisense. For example, if you change the <child> element's type from xs:string to xs:int, you will get a validation error.

If you are a VB developer and tried to use the new XML Editor, you might have noticed that the File menu does not have a File | New option (this is only true if you selected VB Development settings). So, how do you go about creating new XML files? Lisa Feigenbaum, a Program Manager on the VB team, talks about two workarounds - she tells you how to create a shortcut to File | New and how to get it back on the menu.

One of the goals we had for MSXML6 and System.Xml in .NET 2.0 was to bring down the number of differences in schema processing between the two engines. I think we did a pretty good job here, but there are still a few differences left. The table below lists all known differences. If you find something that is not listed, please let me know. In some cases both engines are compliant with the spec but have different behavior (when the spec is not specific). In others, one engine is compliant (denoted by a "*" in the table below) while the other one isn't. It is our goal to eliminate all the differences. However, some of them are corner case scenarios and we might decide not to invest time and resources in fixing them. If you really want to see a particular scenario fixed, let me know.

Below are a few tips and tricks on using the schema cache and schema catalogs with the new XML Editor in VS 2005.

Schema Cache

The XML Editor comes with a set of schemas that describe some of the W3C standard and some of the VS specific XML namespaces.
These schemas are installed into your VS installation directory under %vsinstalldir%\xml\schemas. When you declare one of the namespaces defined by these schemas in your XML files, the XML Editor will automatically associate the appropriate schema(s) from the cache location and instantly provide you with intellisense data and validation. The purpose of the schema cache directory is to hold standard schemas as well as schemas that are unlikely to change. The following operations are supported by the XML Editor without requiring a VS restart:

- Adding schemas
- Deleting schemas
- Renaming schemas

You can also change the schema cache location. This is done by modifying Cache Directory Location on the Tools | Options | Text Editor | XML | Miscellaneous dialog. When you do this, the XML Editor will stop using schemas from the old location and instead switch to the new folder.

Schema Catalogs

You can extend the existing schema cache using a catalog.xml file. VS installs a sample catalog.xml file (along with catalog.xsd) in the schema cache folder. This file can be used to do several things. First of all, you can associate namespaces with external locations as in the following example: <

You can also associate file extensions with specific namespaces. The following line taken from the sample catalog.xml associates .config files with the dotNetConfig.xsd schema.

I've seen a number of newsgroup posts asking how to find a particular element or how to get a list of all elements from either XmlSchema or XmlSchemaSet objects. Since we don't provide this functionality in the framework, you need to manually traverse these objects to get what you want. Depending on what your goal is, you might need to get either pre-compile or post-compile information. For example, named groups are not available post-compile, while PSVI information is not available pre-compile. In this post I'll show you how you can get pre-compile information from these objects.
To make the reading easier, I don't include any error or exception handling code which is not relevant. I'm assuming that you have a SchemaSet with a few schemas added to it. SchemaSet provides collections of global elements, types, and attributes. However, these collections are empty until you compile the set. Note that the named group collection is not available in the SchemaSet. To get the pre-compile info (including a list of named groups), you will need to go through each schema in the set.

    foreach (XmlSchema schema in ss.Schemas())
    {
    }

Once you have an XmlSchema object, you can step through and parse each global:

    // stepping through global complex types
    foreach (XmlSchemaType type in schema.SchemaTypes.Values)
        if (type is XmlSchemaComplexType)
        {
        }

    // stepping through global elements
    foreach (XmlSchemaElement el in schema.Elements.Values)

    // stepping through named groups
    foreach (XmlSchemaAnnotated xsa in schema.Items)
        if (xsa is XmlSchemaGroup)

Now that we have a global, whether it's a type, an element, or a group, how do we traverse it? I'm going to use a recursive method that takes an XmlSchemaParticle to do it.

    void walkTheParticle(XmlSchemaParticle particle)
    {
        if (particle is XmlSchemaElement)
        {
            XmlSchemaElement elem = particle as XmlSchemaElement;
            // todo: insert your processing code here
            if (elem.RefName.IsEmpty)
            {
                XmlSchemaType type = (XmlSchemaType)elem.ElementSchemaType;
                if (type is XmlSchemaComplexType)
                {
                    XmlSchemaComplexType ct = type as XmlSchemaComplexType;
                    if (ct.QualifiedName.IsEmpty)
                    {
                        walkTheParticle(ct.ContentTypeParticle);
                    }
                }
            }
        }
        else if (particle is XmlSchemaGroupBase) // xs:all, xs:choice, xs:sequence
        {
            XmlSchemaGroupBase baseParticle = particle as XmlSchemaGroupBase;
            foreach (XmlSchemaParticle subParticle in baseParticle.Items)
                walkTheParticle(subParticle);
        }
    }

If the particle passed to walkTheParticle is a base group (all, choice, or sequence), we will loop through each element within this base group. If the particle is an element, we will do our processing and then (if the element is of a complex type) walk through it. Finally, below are the last touches to the calling method to make it all work.

    void start(XmlSchemaSet ss)
    {
        foreach (XmlSchema schema in ss.Schemas())
        {
            foreach (XmlSchemaType type in schema.SchemaTypes.Values)
                if (type is XmlSchemaComplexType)
                {
                    XmlSchemaComplexType ct = type as XmlSchemaComplexType;
                    walkTheParticle(ct.ContentTypeParticle);
                }

            foreach (XmlSchemaElement el in schema.Elements.Values)
                walkTheParticle(el);

            foreach (XmlSchemaAnnotated xsa in schema.Items)
                if (xsa is XmlSchemaGroup)
                {
                    XmlSchemaGroup xsg = xsa as XmlSchemaGroup;
                    walkTheParticle(xsg.Particle);
                }
        }
    }

That's about it. Let me know if you have any questions.
http://blogs.msdn.com/stan_kitsis/default.aspx
For other branches, the changelogs are distributed with the source, but are also available here:

Table of contents

- Changes between 1.1.0e and 1.1.1 [xx XXX xxxx]
- Changes between 1.1.0d and 1.1.0e [16 Feb 2017]
- Changes between 1.1.0c and 1.1.0d [26 Jan 2017]
- Changes between 1.1.0b and 1.1.0c [10 Nov 2016]
- Changes between 1.1.0a and 1.1.0b [26 Sep 2016]
- Changes between 1.1.0 and 1.1.0a [22 Sep 2016]
- Changes between 1.0.2h and 1.1.0 [25 Aug 2016]
- Changes between 1.0.2g and 1.0.2h [3 May 2016]
- Changes between 1.0.2f and 1.0.2g [1 Mar 2016]
- Changes between 1.0.2e and 1.0.2f [28 Jan 2016]
- Changes between 1.0.2d and 1.0.2e [3 Dec 2015]
- Changes between 1.0.2c and 1.0.2d [9 Jul 2015]
- Changes between 1.0.2b and 1.0.2c [12 Jun 2015]
- Changes between 1.0.2a and 1.0.2b [11 Jun 2015]
- Changes between 1.0.2 and 1.0.2a [19 Mar 2015]
- Changes between 1.0.1l and 1.0.2 [22 Jan 2015]
- Changes between 1.0.1k and 1.0.1l [15 Jan 2015]
- Changes between 1.0.1j and 1.0.1k [8 Jan 2015]
- Changes between 1.0.1i and 1.0.1j [15 Oct 2014]
- Changes between 1.0.1h and 1.0.1i [6 Aug 2014]
- Changes between 1.0.1g and 1.0.1h [5 Jun 2014]
- Changes between 1.0.1f and 1.0.1g [7 Apr 2014]
- Changes between 1.0.1e and 1.0.1f [6 Jan 2014]
- Changes between 1.0.1d and 1.0.1e [11 Feb 2013]
- Changes between 1.0.1c and 1.0.1d [5 Feb 2013]
- Changes between 1.0.1b and 1.0.1c [10 May 2012]
- Changes between 1.0.1a and 1.0.1b [26 Apr 2012]
- Changes between 1.0.1 and 1.0.1a [19 Apr 2012]
- Changes between 1.0.0h and 1.0.1 [14 Mar 2012]
- Changes between 1.0.0g and 1.0.0h [12 Mar 2012]
- Changes between 1.0.0f and 1.0.0g [18 Jan 2012]
- Changes between 1.0.0e and 1.0.0f [4 Jan 2012]
- Changes between 1.0.0d and 1.0.0e [6 Sep 2011]
- Changes between 1.0.0c and 1.0.0d [8 Feb 2011]
- Changes between 1.0.0b and 1.0.0c [2 Dec 2010]
- Changes between 1.0.0a and 1.0.0b [16 Nov 2010]
- Changes between 1.0.0 and 1.0.0a [01 Jun 2010]
- Changes between 0.9.8n and 1.0.0 [29 Mar 2010]
- Changes between 0.9.8m and 0.9.8n [24 Mar 2010]
- Changes between 0.9.8l and 0.9.8m [25 Feb 2010]
- Changes between 0.9.8k and 0.9.8l [5 Nov 2009]
- Changes between 0.9.8j and 0.9.8k [25 Mar 2009]
- Changes between 0.9.8i and 0.9.8j [07 Jan 2009]
- Changes between 0.9.8h and 0.9.8i [15 Sep 2008]
- Changes between 0.9.8g and 0.9.8h [28 May 2008]
- Changes between 0.9.8f and 0.9.8g [19 Oct 2007]
- Changes between 0.9.8e and 0.9.8f [11 Oct 2007]
- Changes between 0.9.8d and 0.9.8e [23 Feb 2007]
- Changes between 0.9.8c and 0.9.8d [28 Sep 2006]
- Changes between 0.9.8b and 0.9.8c [05 Sep 2006]
- Changes between 0.9.8a and 0.9.8b [04 May 2006]
- Changes between 0.9.8 and 0.9.8a [11 Oct 2005]
- Changes between 0.9.7h and 0.9.8 [05 Jul 2005]
- Changes between 0.9.7l and 0.9.7m [23 Feb 2007]
- Changes between 0.9.7k and 0.9.7l [28 Sep 2006]
- Changes between 0.9.7j and 0.9.7k [05 Sep 2006]
- Changes between 0.9.7i and 0.9.7j [04 May 2006]
- Changes between 0.9.7h and 0.9.7i [14 Oct 2005]
- Changes between 0.9.7g and 0.9.7h [11 Oct 2005]
- Changes between 0.9.7f and 0.9.7g [11 Apr 2005]
- Changes between 0.9.7e and 0.9.7f [22 Mar 2005]
- Changes between 0.9.7d and 0.9.7e [25 Oct 2004]
- Changes between 0.9.7c and 0.9.7d [17 Mar 2004]
- Changes between 0.9.7b and 0.9.7c [30 Sep 2003]
- Changes between 0.9.7a and 0.9.7b [10 Apr 2003]
- Changes between 0.9.7 and 0.9.7a [19 Feb 2003]
- Changes between 0.9.6h and 0.9.7 [31 Dec 2002]
- Changes between 0.9.6l and 0.9.6m [17 Mar 2004]
- Changes between 0.9.6k and 0.9.6l [04 Nov 2003]
- Changes between 0.9.6j and 0.9.6k [30 Sep 2003]
- Changes between 0.9.6i and 0.9.6j [10 Apr 2003]
- Changes between 0.9.6h and 0.9.6i [19 Feb 2003]
- Changes between 0.9.6g and 0.9.6h [5 Dec 2002]
- Changes between 0.9.6f and 0.9.6g [9 Aug 2002]
- Changes between 0.9.6e and 0.9.6f [8 Aug 2002]
- Changes between 0.9.6d and 0.9.6e [30 Jul 2002]
- Changes between 0.9.6c and 0.9.6d [9 May 2002]
- Changes between 0.9.6b and 0.9.6c [21 Dec 2001]
- Changes between 0.9.6a and 0.9.6b [9 Jul 2001]
- Changes between 0.9.6 and 0.9.6a [5 Apr 2001]
- Changes between 0.9.5a and 0.9.6 [24 Sep 2000]
- Changes between 0.9.5 and 0.9.5a [1 Apr 2000]
- Changes between 0.9.4 and 0.9.5 [28 Feb 2000]
- Changes between 0.9.3a and 0.9.4 [09 Aug 1999]
- Changes between 0.9.3 and 0.9.3a [29 May 1999]
- Changes between 0.9.2b and 0.9.3 [24 May 1999]
- Changes between 0.9.1c and 0.9.2b [22 Mar 1999]
- Changes between 0.9.1b and 0.9.1c [23-Dec-1998]
- Changes between 0.9.0b and 0.9.1b [not released]

Changes between 1.1.0e and 1.1.1 [xx XXX xxxx]

 *) Add EC_KEY_get0_engine(), which does for EC_KEY what RSA_get0_engine() does for RSA, etc. [Richard Levitte]

 *) Have 'config' recognise 64-bit mingw and choose 'mingw64' as the target platform rather than 'mingw'. [Richard Levitte]

 *) x86_64 assembly pack: annotate code with DWARF CFI directives to facilitate stack unwinding even from assembly subroutines. [Andy Polyakov]

 *) Remove VAX C specific definitions of OPENSSL_EXPORT, OPENSSL_EXTERN. Also remove OPENSSL_GLOBAL entirely, as it became a no-op. [Richard Levitte]

 *) Remove the VMS-specific reimplementation of gmtime from crypto/o_times.c. VMS C's RTL has a fully up to date gmtime() and gmtime_r() since V7.1, which is the minimum version we support. [Richard Levitte]

 *) Certificate time validation (X509_cmp_time) enforces stricter compliance with RFC 5280. Fractional seconds and timezone offsets are no longer allowed. [Emilia Käsper]

 *) Add support for ARIA [Paul Dale]

 *) Add support for SipHash [Todd Short]

 *) OpenSSL now fails if it receives an unrecognised record type in TLS1.0 or TLS1.1. Previously this only happened in SSLv3 and TLS1.2. This is to prevent issues where no progress is being made and the peer continually sends unrecognised record types, using up resources processing them.
    [Matt Caswell]

 *) 'openssl passwd' can now produce SHA256 and SHA512 based output, using the algorithm defined in [Richard Levitte]

 *) Heartbeat support has been removed; the ABI is changed for now. [Richard Levitte, Rich Salz]

 *) Support for SSL_OP_NO_ENCRYPT_THEN_MAC in SSL_CONF_cmd. [Emilia Käsper]

Changes between 1.1.0d and 1.1.0e [16 Feb 2017]

 *) Encrypt-Then-Mac renegotiation crash

    During a renegotiation handshake, if the Encrypt-Then-Mac extension is negotiated where it was not in the original handshake (or vice-versa) then this can cause OpenSSL to crash (dependent on ciphersuite). Both clients and servers are affected.

    This issue was reported to OpenSSL by Joe Orton (Red Hat). (CVE-2017-3733) [Matt Caswell]

Changes between 1.1.0c and 1.1.0d [26 Jan 2017]

 *) Truncated packet could crash via OOB read

    If one side of an SSL/TLS path is running on a 32-bit host and a specific cipher is being used, then a truncated packet can cause that host to perform an out-of-bounds read, usually resulting in a crash.

    This issue was reported to OpenSSL by Robert Święcki of Google. (CVE-2017-3731) [Andy Polyakov]

 *) Bad (EC)DHE parameters cause a client crash

    If a malicious server supplies bad parameters for a DHE or ECDHE key exchange then this can result in the client attempting to dereference a NULL pointer, leading to a client crash. This could be exploited in a Denial of Service attack.

    This issue was reported to OpenSSL by Guido Vranken. (CVE-2017-3730) [Matt Caswell]

 *). This issue was reported to OpenSSL by the OSS-Fuzz project. (CVE-2017-3732) [Andy Polyakov]

Changes between 1.1.0b and 1.1.0c [10 Nov 2016]

 *) ChaCha20/Poly1305 heap-buffer-overflow

    TLS connections using *-CHACHA20-POLY1305 ciphersuites are susceptible to a DoS attack by corrupting larger payloads. This can result in an OpenSSL crash. This issue is not considered to be exploitable beyond a DoS.
    This issue was reported to OpenSSL by Robert Święcki (Google Security Team) (CVE-2016-7054) [Richard Levitte]

 *) CMS Null dereference.. This issue was publicly reported as transient failures and was not initially recognized as a security issue. Thanks to Richard Morgan for providing a reproducible case. (CVE-2016-7055) [Andy Polyakov]

 *) Removed automatic addition of RPATH in shared libraries and executables, as this was a remainder from OpenSSL 1.0.x and isn't needed any more. [Richard Levitte]

Changes between 1.1.0a and 1.1.0b [26 Sep 2016]

 *) Fix Use After Free for large message sizes

    The patch applied to address CVE-2016-6307. This issue only affects OpenSSL 1.1.0a.

    This issue was reported to OpenSSL by Robert Święcki. (CVE-2016-6309) [Matt Caswell]

Changes between 1.1.0 and 1.1.0a [22 Sep 2016]

 *) OCSP Status Request extension unbounded memory growth. This issue was reported to OpenSSL by Shi Lei (Gear Team, Qihoo 360 Inc.) (CVE-2016-6304) [Matt Caswell]

 *) SSL_peek() hang on empty record

    OpenSSL 1.1.0 SSL/TLS will hang during a call to SSL_peek() if the peer sends an empty record. This could be exploited by a malicious peer in a Denial Of Service attack. the header for the message. This would allow for messages up to 16Mb in length. Messages of this length are excessive and OpenSSL includes a check to ensure that a peer is sending reasonably sized messages in order to avoid too much memory being consumed to service a connection. A flaw in the logic of version 1.1.0 means that memory for the message is allocated too early, prior to the excessive message length check. Due to the way memory is allocated in OpenSSL, this could mean an attacker could force up to 21Mb to be allocated to service a connection. This could lead to a Denial of Service through memory exhaustion. However, the excessive message length check still takes place, and this would cause the connection to immediately fail.
    Assuming that the application calls SSL_free() on the failed connection in a timely manner then the 21Mb of allocated memory will then be immediately freed again. Therefore the excessive memory allocation will be transitory in nature. This then means that there is only a security impact if:

    1) The application does not call SSL_free() in a timely manner in the event that the connection fails, or
    2) The application is working in a constrained environment where there is very little free memory, or
    3) The attacker initiates multiple connection attempts such that there are multiple connections in a state where memory has been allocated for the connection; SSL_free() has not yet been called; and there is insufficient memory to service the multiple requests.

    Except in the instance of (1) above any Denial Of Service is likely to be transitory because as soon as the connection fails the memory is subsequently freed again in the SSL_free() call. However there is an increased risk during this period of application crashes due to the lack of memory - which would then mean a more serious Denial of Service.

    This issue was reported to OpenSSL by Shi Lei (Gear Team, Qihoo 360 Inc.) (CVE-2016-6307 and CVE-2016-6308) [Matt Caswell]

 *) solaris-x86-cc, i.e. 32-bit configuration with vendor compiler, had to be removed. The primary reason is that the vendor assembler can't assemble our modules with the -KPIC flag. As a result, assembly support was not even available as an option. But its lack means lack of side-channel resistant code, which is incompatible with security by today's standards. Fortunately, gcc is a readily available prepackaged option, which we firmly point at... [Andy Polyakov]

Changes between 1.0.2h and 1.1.0 [25 Aug 2016]

 *) Windows command-line tool supports UTF-8 opt-in option for arguments and console input.
    Setting the OPENSSL_WIN32_UTF8 environment variable (to any value) allows a Windows user to access a PKCS#12 file generated with Windows CryptoAPI and protected with a non-ASCII password, as well as files generated under a UTF-8 locale on Linux also protected with a non-ASCII password. [Andy Polyakov]

 *) To mitigate the SWEET32 attack (CVE-2016-2183), 3DES cipher suites have been disabled by default and removed from DEFAULT, just like RC4. See the RC4 item below to re-enable both. [Rich Salz]

 *) The method for finding the storage location for the Windows RAND seed file has changed. First we check %RANDFILE%. If that is not set then we check the directories %HOME%, %USERPROFILE% and %SYSTEMROOT% in that order. If all else fails we fall back to C:\. [Matt Caswell]

 *) The EVP_EncryptUpdate() function has had its return type changed from void to int. A return of 0 indicates an error while a return of 1 indicates success. [Matt Caswell]

 *) The flags RSA_FLAG_NO_CONSTTIME, DSA_FLAG_NO_EXP_CONSTTIME and DH_FLAG_NO_EXP_CONSTTIME, which previously provided the ability to switch off the constant time implementation for RSA, DSA and DH, have been made no-ops and deprecated. [Matt Caswell]

 *) Windows RAND implementation was simplified to only get entropy by calling CryptGenRandom(). Various other RAND-related tickets were also closed. [Joseph Wylie Yandle, Rich Salz]

 *) The stack and lhash API's were renamed to start with OPENSSL_SK_ and OPENSSL_LH_, respectively. The old names are available with API compatibility. The new names are now completely documented. [Rich Salz]

 *) Unify TYPE_up_ref(obj) methods signature. SSL_CTX_up_ref(), SSL_up_ref(), X509_up_ref(), EVP_PKEY_up_ref(), X509_CRL_up_ref(), X509_OBJECT_up_ref_count() methods are now returning an int (instead of void) like all other TYPE_up_ref() methods. So now these methods also check the return value of CRYPTO_atomic_add(), and the validity of the object reference counter.
    [fdasilvayy@gmail.com]

 *) With Windows Visual Studio builds, the .pdb files are installed alongside the installed libraries and executables. For a static library installation, ossl_static.pdb is the associated compiler generated .pdb file to be used when linking programs. [Richard Levitte]

 *) Remove openssl.spec. Packaging files belong with the packagers. [Richard Levitte]

 *) Automatic Darwin/OSX configuration has had a refresh, it will now recognise x86_64 architectures automatically. [Andy Polyakov]

 *) Triple-DES ciphers have been moved from HIGH to MEDIUM. [Rich Salz]

 *) To enable users to have their own config files and build file templates, Configure looks in the directory indicated by the environment variable OPENSSL_LOCAL_CONFIG_DIR as well as the in-source Configurations/ directory. On VMS, OPENSSL_LOCAL_CONFIG_DIR is expected to be a logical name and is used as is. [Richard Levitte]

 *) The following datatypes were made opaque: X509_OBJECT, X509_STORE_CTX, X509_STORE, X509_LOOKUP, and X509_LOOKUP_METHOD. The unused type X509_CERT_FILE_CTX was removed. [Rich Salz]

 *) "shared" builds are now the default. To create only static libraries use the "no-shared" Configure option. [Matt Caswell]

 *) Remove the no-aes, no-hmac, no-rsa, no-sha and no-md5 Configure options. All of these options have not worked for some time and are fundamental algorithms. [Matt Caswell]

 *) Make various cleanup routines no-ops and mark them as deprecated. Most global cleanup functions are no longer required because they are handled via auto-deinit (see OPENSSL_init_crypto and OPENSSL_init_ssl man pages). Explicitly de-initing can cause problems (e.g. where a library that uses OpenSSL de-inits, but an application is still using it). The affected functions are CONF_modules_free(), ENGINE_cleanup(), OBJ_cleanup(), EVP_cleanup(), BIO_sock_cleanup(), CRYPTO_cleanup_all_ex_data(), RAND_cleanup(), SSL_COMP_free_compression_methods(), ERR_free_strings() and COMP_zlib_cleanup().
    [Matt Caswell]

 *) --strict-warnings no longer enables runtime debugging options such as REF_DEBUG. Instead, debug options are automatically enabled with '--debug' builds. [Andy Polyakov, Emilia Käsper]

 *) Made DH and DH_METHOD opaque. The structures for managing DH objects have been moved out of the public header files. New functions for managing these have been added. [Matt Caswell]

 *) Made RSA and RSA_METHOD opaque. The structures for managing RSA objects have been moved out of the public header files. New functions for managing these have been added. [Richard Levitte]

 *) Made DSA and DSA_METHOD opaque. The structures for managing DSA objects have been moved out of the public header files. New functions for managing these have been added. [Matt Caswell]

 *) Made BIO and BIO_METHOD opaque. The structures for managing BIOs have been moved out of the public header files. New functions for managing these have been added. [Matt Caswell]

 *) Removed no-rijndael as a config option. Rijndael is an old name for AES. [Matt Caswell]

 *) Removed the mk1mf build scripts. [Richard Levitte]

 *) Headers are now wrapped, if necessary, with OPENSSL_NO_xxx, so it is always safe to #include a header now. [Rich Salz]

 *) Removed the aged BC-32 config and all its supporting scripts. [Richard Levitte]

 *) Removed support for Ultrix, Netware, and OS/2. [Rich Salz]

 *) Add support for HKDF. [Alessandro Ghedini]

 *) Add support for blake2b and blake2s. [Bill Cox]

 *) Added support for "pipelining". Ciphers that have the EVP_CIPH_FLAG_PIPELINE flag set have a capability to process multiple encryptions/decryptions simultaneously. There are currently no built-in ciphers with this property, but the expectation is that engines will be able to offer it to significantly improve throughput. Support has been extended into libssl so that multiple records for a single connection can be processed in one go (for >=TLS 1.1). [Matt Caswell]

 *) Added the AFALG engine.
    This is an async capable engine which is able to offload work to the Linux kernel. In this initial version it only supports AES128-CBC. The kernel must be version 4.1.0 or greater. [Catriona Lucey]

 *) OpenSSL now uses a new threading API. It is no longer necessary to set locking callbacks to use OpenSSL in a multi-threaded environment. There are two supported threading models: pthreads and windows threads. It is also possible to configure OpenSSL at compile time for "no-threads". The old threading API should no longer be used. The functions have been replaced with "no-op" compatibility macros. [Alessandro Ghedini, Matt Caswell]

 *) Modify behavior of ALPN to invoke callback after SNI/servername callback, such that updates to the SSL_CTX affect ALPN. [Todd Short]

 *) Add SSL_CIPHER queries for authentication and key-exchange. [Todd Short]

 *) Changes to the DEFAULT cipherlist:
    - Prefer (EC)DHE handshakes over plain RSA.
    - Prefer AEAD ciphers over legacy ciphers.
    - Prefer ECDSA over RSA when both certificates are available.
    - Prefer TLSv1.2 ciphers/PRF.
    - Remove DSS, SEED, IDEA, CAMELLIA, and AES-CCM from the default cipherlist.
    [Emilia Käsper]

 *) Change the ECC default curve list to be this, in order: x25519, secp256r1, secp521r1, secp384r1. [Rich Salz]

 *) RC4 based libssl ciphersuites are now classed as "weak" ciphers and are disabled by default. They can be re-enabled using the enable-weak-ssl-ciphers option to Configure. [Matt Caswell]

 *) If the server has ALPN configured, but supports no protocols that the client advertises, send a fatal "no_application_protocol" alert. This behaviour is SHALL in RFC 7301, though it isn't universally implemented by other servers. [Emilia Käsper]

 *) Add X25519 support. Add ASN.1 and EVP_PKEY methods for X25519. This includes support for public and private key encoding using the format documented in draft-ietf-curdle-pkix-02. The corresponding EVP_PKEY method supports key generation and key derivation.
    TLS support complies with draft-ietf-tls-rfc4492bis-08 and uses X25519(29). [Steve Henson]

 *) Deprecate SRP_VBASE_get_by_user. SRP_VBASE_get_by_user had inconsistent memory management behaviour. In order to fix an unavoidable memory leak (CVE-2016-0798),. [Emilia Käsper]

 *) Configuration change; it's now possible to build dynamic engines without having to build shared libraries and vice versa. This only applies to the engines in engines/, those in crypto/engine/ will always be built into libcrypto (i.e. "static"). Building dynamic engines is enabled by default; to disable, use the configuration option "disable-dynamic-engine". The only requirements for building dynamic engines are the presence of the DSO module and building with position independent code, so they will also automatically be disabled if configuring with "disable-dso" or "disable-pic". The macros OPENSSL_NO_STATIC_ENGINE and OPENSSL_NO_DYNAMIC_ENGINE are also taken away from openssl/opensslconf.h, as they are irrelevant. [Richard Levitte]

 *) Configuration change; if there is a known flag to compile position independent code, it will always be applied on the libcrypto and libssl object files, and never on the application object files. This means other libraries that use routines from libcrypto / libssl can be made into shared libraries regardless of how OpenSSL was configured. If this isn't desirable, the configuration options "disable-pic" or "no-pic" can be used to disable the use of PIC. This will also disable building shared libraries and dynamic engines. [Richard Levitte]

 *) Removed JPAKE code. It was experimental and has no wide use. [Rich Salz]

 *) The INSTALL_PREFIX Makefile variable has been renamed to DESTDIR. That makes for less confusion on what this variable is for. Also, the configuration option --install_prefix is removed. [Richard Levitte]

 *) Heartbeat for TLS has been removed and is disabled by default for DTLS; configure with enable-heartbeats.
Code that uses the old #define's might need to be updated. [Emilia Käsper, Rich Salz] *) Rename REF_CHECK to REF_DEBUG. [Rich Salz] *) New "unified" build system. The "unified" build system is aimed to be a common system for all platforms we support. With it comes new support for VMS. This system supports building in a directory tree different from the source tree. It produces one Makefile (for unix family or lookalikes), or one descrip.mms (for VMS). The source of information to make the Makefile / descrip.mms is a set of small files called 'build.info', holding the necessary information for each directory with source to compile, and a template in Configurations, like unix-Makefile.tmpl or descrip.mms.tmpl. With this change, the library names were also renamed on Windows and on VMS. They now have names that are closer to the standard on Unix, and include the major version number, and in certain cases, the architecture they are built for. See "Notes on shared libraries" in INSTALL. We rely heavily on the perl module Text::Template. [Richard Levitte] *) Added support for auto-initialisation and de-initialisation of the library. OpenSSL no longer requires explicit init or deinit routines to be called, except in certain circumstances. See the OPENSSL_init_crypto() and OPENSSL_init_ssl() man pages for further information. [Matt Caswell] *) The arguments to the DTLSv1_listen function have changed. Specifically the "peer" argument is now expected to be a BIO_ADDR object. *) Rewrite of BIO networking library. The BIO library lacked consistent support of IPv6, and adding it required some more extensive modifications. This introduces the BIO_ADDR and BIO_ADDRINFO types, which hold all types of addresses and chains of address information. It also introduces a new API, with functions like BIO_socket, BIO_connect, BIO_listen, BIO_lookup and a rewrite of BIO_accept. The source/sink BIOs BIO_s_connect, BIO_s_accept and BIO_s_datagram have been adapted accordingly.
[Richard Levitte] *) RSA_padding_check_PKCS1_type_1 now accepts inputs with and without the leading 0-byte. [Emilia Käsper] *) CRIME protection: disable compression by default, even if OpenSSL is compiled with zlib enabled. Applications can still enable compression by calling SSL_CTX_clear_options(ctx, SSL_OP_NO_COMPRESSION), or by using the SSL_CONF library to configure compression. [Emilia Käsper] *) The signature of the session callback configured with SSL_CTX_sess_set_get_cb was changed. The read-only input buffer was explicitly marked as 'const unsigned char*' instead of 'unsigned char*'. [Emilia Käsper] *) Always DPURIFY. Remove the use of uninitialized memory in the RNG, and other conditional uses of DPURIFY. This makes -DPURIFY a no-op. [Emilia Käsper] *) Removed many obsolete configuration items, including DES_PTR, DES_RISC1, DES_RISC2, DES_INT, MD2_CHAR, MD2_INT, MD2_LONG, BF_PTR, BF_PTR2, IDEA_SHORT, IDEA_LONG, RC2_SHORT, RC2_LONG, RC4_LONG, RC4_CHUNK, RC4_INDEX [Rich Salz, with advice from Andy Polyakov] *) Many BN internals have been moved to an internal header file. [Rich Salz with help from Andy Polyakov] *) Configuration and writing out the results from it has changed. Files such as Makefile and include/openssl/opensslconf.h are now produced through general templates, such as Makefile.in and crypto/opensslconf.h.in, with some help from the perl module Text::Template. Also, the center of configuration information is no longer Makefile. Instead, Configure produces a perl module in configdata.pm which holds most of the config data (in the hash table %config) and the target data that comes from the target configuration in one of the Configurations/*.conf files (in %target). [Richard Levitte] *) To clarify their intended purposes, the Configure options --prefix and --openssldir change their semantics, and become more straightforward and less interdependent.
--prefix shall be used exclusively to give the location INSTALLTOP where programs, scripts, libraries, include files and manuals are going to be installed. The default is now /usr/local. --openssldir shall be used exclusively to give the default location OPENSSLDIR where certificates, private keys, CRLs are managed. This is also where the default openssl.cnf gets installed. If the directory given with this option is a relative path, the values of both the --prefix value and the --openssldir value will be combined to become OPENSSLDIR. The default for --openssldir is INSTALLTOP/ssl. Anyone who uses --openssldir to specify where OpenSSL is to be installed MUST change to use --prefix instead. [Richard Levitte] *) The GOST engine was out of date and therefore it has been removed. An up to date GOST engine is now being maintained in an external repository. See:. Libssl still retains support for GOST ciphersuites (these are only activated if a GOST engine is present). [Matt Caswell] *) EGD is no longer supported by default; use enable-egd when configuring. [Ben Kaduk and Rich Salz] *) The distribution now has Makefile.in files, which are used to create Makefile's when Configure is run. *Configure must be run before trying to build now.* [Rich Salz] *) The return value for SSL_CIPHER_description() for error conditions has changed. [Rich Salz] *) Support for RFC6698/RFC7671 DANE TLSA peer authentication. Obtaining and performing DNSSEC validation of TLSA records is the application's responsibility. The application provides the TLSA records of its choice to OpenSSL, and these are then used to authenticate the peer. The TLSA records need not even come from DNS. They can, for example, be used to implement local end-entity certificate or trust-anchor "pinning", where the "pin" data takes the form of TLSA records, which can augment or replace verification based on the usual WebPKI public certification authorities. 
[Viktor Dukhovni] *) Revert default OPENSSL_NO_DEPRECATED setting. Instead OpenSSL continues to support deprecated interfaces in default builds. However, applications are strongly advised to compile their source files with -DOPENSSL_API_COMPAT=0x10100000L, which hides the declarations of all interfaces deprecated in 0.9.8, 1.0.0 or the 1.1.0 releases. In environments in which all applications have been ported to not use any deprecated interfaces OpenSSL's Configure script should be used with the --api=1.1.0 option to entirely remove support for the deprecated features from the library and unconditionally disable them in the installed headers. Essentially the same effect can be achieved with the "no-deprecated" argument to Configure, except that this will always restrict the build to just the latest API, rather than a fixed API version. As applications are ported to future revisions of the API, they should update their compile-time OPENSSL_API_COMPAT define accordingly, but in most cases should be able to continue to compile with later releases. The OPENSSL_API_COMPAT versions for 1.0.0, and 0.9.8 are 0x10000000L and 0x00908000L, respectively. However those versions did not support the OPENSSL_API_COMPAT feature, and so applications are not typically tested for explicit support of just the undeprecated features of either release. [Viktor Dukhovni] *) Add support for setting the minimum and maximum supported protocol version. It can be set via SSL_set_min_proto_version() and SSL_set_max_proto_version(), or via SSL_CONF's MinProtocol and MaxProtocol. It's recommended to use the new APIs to disable protocols instead of disabling individual protocols using SSL_set_options() or SSL_CONF's Protocol. This change also removes support for disabling TLS 1.2 in the OpenSSL TLS client at compile time by defining OPENSSL_NO_TLS1_2_CLIENT. [Kurt Roeckx] *) Support for ChaCha20 and Poly1305 added to libcrypto and libssl.
[Andy Polyakov] *) New EC_KEY_METHOD, this replaces the older ECDSA_METHOD and ECDH_METHOD and integrates ECDSA and ECDH functionality into EC. Implementations can now redirect key generation and no longer need to convert to or from ECDSA_SIG format. Note: the ecdsa.h and ecdh.h headers are now no longer needed; just include the ec.h header file instead. [Steve Henson] *) Remove support for all 40 and 56 bit ciphers. This includes all the export ciphers, which are no longer supported, and drops support for the ephemeral RSA key exchange. The LOW cipher group currently doesn't have any ciphers in it. [Kurt Roeckx] *) Made EVP_MD_CTX, EVP_MD, EVP_CIPHER_CTX, EVP_CIPHER and HMAC_CTX opaque. For HMAC_CTX, the following constructors and destructors were added: HMAC_CTX *HMAC_CTX_new(void); void HMAC_CTX_free(HMAC_CTX *ctx); For EVP_MD and EVP_CIPHER, complete APIs to create, fill and destroy such methods have been added. See EVP_MD_meth_new(3) and EVP_CIPHER_meth_new(3) for documentation. Additional changes: 1) EVP_MD_CTX_cleanup(), EVP_CIPHER_CTX_cleanup() and HMAC_CTX_cleanup() were removed. HMAC_CTX_reset() and EVP_MD_CTX_reset() should be called instead to reinitialise an already created structure. 2) For consistency with the majority of our object creators and destructors, EVP_MD_CTX_(create|destroy) were renamed to EVP_MD_CTX_(new|free). The old names are retained as macros for deprecated builds. [Richard Levitte] *) Added ASYNC support. Libcrypto now includes the async sub-library to enable cryptographic operations to be performed asynchronously as long as an asynchronous capable engine is used. See the ASYNC_start_job() man page for further details. Libssl has also had this capability integrated with the introduction of the new mode SSL_MODE_ASYNC and associated error SSL_ERROR_WANT_ASYNC. See the SSL_CTX_set_mode() and SSL_get_error() man pages. This work was developed in partnership with Intel Corp. [Matt Caswell] *) SSL_{CTX_}set_tmp_ecdh(), which can set 1 EC curve, now internally calls SSL_{CTX_}set1_curves(), which can set a list. [Kurt Roeckx] *) Remove support for SSL_{CTX_}set_tmp_ecdh_callback(). You should set the curve you want to support using SSL_{CTX_}set1_curves(). [Kurt Roeckx] *) State machine rewrite. The state machine code has been significantly refactored in order to remove much duplication of code and solve issues with the old code (see ssl/statem/README for further details). This change does have some associated API changes. Notably the SSL_state() function has been removed and replaced by SSL_get_state which now returns an "OSSL_HANDSHAKE_STATE" instead of an int. SSL_set_state() has been removed altogether. The previous handshake states defined in ssl.h and ssl3.h have also been removed. [Matt Caswell] *) All instances of the string "ssleay" in the public API were replaced with OpenSSL (case-matching; e.g., OPENSSL_VERSION for #define's). Some error codes related to internal RSA_eay API's were renamed. [Rich Salz] *) The demo files in crypto/threads were moved to demos/threads. [Rich Salz] *) Removed obsolete engines: 4758cca, aep, atalla, cswift, nuron, gmp, sureware and ubsec. [Matt Caswell, Rich Salz] *) New ASN.1 embed macro. New ASN.1 macro ASN1_EMBED. This is the same as ASN1_SIMPLE except the structure is not allocated: it is part of the parent. That is, instead of FOO *x; it must be FOO x; This reduces memory fragmentation and makes it impossible to accidentally set a mandatory field to NULL. This currently only works for some fields, specifically a SEQUENCE, CHOICE, or ASN1_STRING type which is part of a parent SEQUENCE. Since it is equivalent to ASN1_SIMPLE it cannot be tagged, OPTIONAL, SET OF or SEQUENCE OF. [Steve Henson] *) Remove EVP_CHECK_DES_KEY, a compile-time option that never compiled. [Emilia Käsper] *) Removed DES and RC4 ciphersuites from DEFAULT.
Also removed RC2 although in 1.0.2 EXPORT was already removed and the only RC2 ciphersuite is also an EXPORT one. COMPLEMENTOFDEFAULT has been updated accordingly to add DES and RC4 ciphersuites. [Matt Caswell] *) Rewrite EVP_DecodeUpdate (base64 decoding) to fix several bugs. This changes the decoding behaviour for some invalid messages, though the change is mostly in the more lenient direction, and legacy behaviour is preserved as much as possible. [Emilia Käsper] *) Fix no-stdio build. [ David Woodhouse <David.Woodhouse@intel.com> and also Ivan Nestlerode <ivan.nestlerode@sonos.com> ] *) New testing framework The testing framework has been largely rewritten and is now using perl and the perl modules Test::Harness and an extended variant of Test::More called OpenSSL::Test to do its work. All test scripts in test/ have been rewritten into test recipes, and all direct calls to executables in test/Makefile have become individual recipes using the simplified testing OpenSSL::Test::Simple. For documentation on our testing modules, do: perldoc test/testlib/OpenSSL/Test/Simple.pm perldoc test/testlib/OpenSSL/Test.pm [Richard Levitte] *) Revamped memory debug; only -DCRYPTO_MDEBUG and -DCRYPTO_MDEBUG_ABORT are used; the latter aborts on memory leaks (usually checked on exit). Some undocumented "set malloc, etc., hooks" functions were removed and others were changed. All are now documented. [Rich Salz] *) In DSA_generate_parameters_ex, if the provided seed is too short, return an error [Rich Salz and Ismo Puustinen <ismo.puustinen@intel.com>] *) Rewrite PSK to support ECDHE_PSK, DHE_PSK and RSA_PSK. Add ciphersuites from RFC4279, RFC4785, RFC5487, RFC5489. Thanks to Christian J. Dietrich and Giuseppe D'Angelo for the original RSA_PSK patch. [Steve Henson] *) Dropped support for the SSL3_FLAGS_DELAY_CLIENT_FINISHED flag. This SSLeay era flag was never set throughout the codebase (only read). 
Also removed SSL3_FLAGS_POP_BUFFER which was only used if SSL3_FLAGS_DELAY_CLIENT_FINISHED was also set. [Matt Caswell] *) Changed the default name options in the "ca", "crl", "req" and "x509" to be "oneline" instead of "compat". [Richard Levitte] *) Remove SSL_OP_TLS_BLOCK_PADDING_BUG. This is SSLeay legacy, we're not aware of clients that still exhibit this bug, and the workaround hasn't been working properly for a while. [Emilia Käsper] *) The return type of BIO_number_read() and BIO_number_written() as well as the corresponding num_read and num_write members in the BIO structure has changed from unsigned long to uint64_t. On platforms where an unsigned long is 32 bits (e.g. Windows) these counters could overflow if >4Gb is transferred. [Matt Caswell] *) Given the pervasive nature of TLS extensions it is inadvisable to run OpenSSL without support for them. It also means that maintaining the OPENSSL_NO_TLSEXT option within the code is very invasive (and probably not well tested). Therefore the OPENSSL_NO_TLSEXT option has been removed. [Matt Caswell] *) Removed support for the two export grade static DH ciphersuites EXP-DH-RSA-DES-CBC-SHA and EXP-DH-DSS-DES-CBC-SHA. These two ciphersuites were newly added (along with a number of other static DH ciphersuites) to 1.0.2. However the two export ones have *never* worked since they were introduced. It seems strange in any case to be adding new export ciphersuites, and given "logjam" it also does not seem correct to fix them. [Matt Caswell] *) Version negotiation has been rewritten. In particular SSLv23_method(), SSLv23_client_method() and SSLv23_server_method() have been deprecated, and turned into macros which simply call the new preferred function names TLS_method(), TLS_client_method() and TLS_server_method(). All new code should use the new names instead. Also as part of this change the ssl23.h header file has been removed. [Matt Caswell] *) Support for Kerberos ciphersuites in TLS (RFC2712) has been removed. 
This code and the associated standard is no longer considered fit-for-purpose. [Matt Caswell] *) RT2547 was closed. When generating a private key, try to make the output file readable only by the owner. This behavior change might be noticeable when interacting with other software. *) Documented all exdata functions. Added CRYPTO_free_ex_index. Added a test. [Rich Salz] *) Added HTTP GET support to the ocsp command. [Rich Salz] *) Changed default digest for the dgst and enc commands from MD5 to sha256. [Rich Salz] *) RAND_pseudo_bytes has been deprecated. Users should use RAND_bytes instead. [Matt Caswell] *) Added support for TLS extended master secret from draft-ietf-tls-session-hash-03.txt. Thanks to Alfredo Pironti for an initial patch which was a great help during development. [Steve Henson] *) All libssl internal structures have been removed from the public header files, and the OPENSSL_NO_SSL_INTERN option has been removed (since it is now redundant). Users should not attempt to access internal structures directly. Instead they should use the provided API functions. [Matt Caswell] *) config has been changed so that by default OPENSSL_NO_DEPRECATED is used. Access to deprecated functions can be re-enabled by running config with "enable-deprecated". In addition applications wishing to use deprecated functions must define OPENSSL_USE_DEPRECATED. Note that this new behaviour will, by default, disable some transitive includes that previously existed in the header files (e.g. ec.h will no longer, by default, include bn.h). [Matt Caswell] *) Added support for OCB mode. OpenSSL has been granted a patent license compatible with the OpenSSL license for use of OCB. Details are available at. Support for OCB can be removed by calling config with no-ocb. [Matt Caswell] *) SSLv2 support has been removed. It still supports receiving an SSLv2-compatible client hello.
[Kurt Roeckx] *) Increased the minimal RSA keysize from 256 to 512 bits [Rich Salz], done while fixing the error code for the key-too-small case. [Annie Yousar <a.yousar@informatik.hu-berlin.de>] *) CA.sh has been removed; use CA.pl instead. [Rich Salz] *) Removed old DES API. [Rich Salz] *) Remove various unsupported platforms: Sony NEWS4, BEOS and BEOS_R5, NeXT, SUNOS, MPE/iX, Sinix/ReliantUNIX RM400, DGUX, NCR, Tandem, Cray, and 16-bit platforms such as WIN16. [Rich Salz] *) Clean up OPENSSL_NO_xxx #define's: Use setbuf() and remove OPENSSL_NO_SETVBUF_IONBF. Rename OPENSSL_SYSNAME_xxx to OPENSSL_SYS_xxx. OPENSSL_NO_EC{DH,DSA} merged into OPENSSL_NO_EC. OPENSSL_NO_RIPEMD160, OPENSSL_NO_RIPEMD merged into OPENSSL_NO_RMD160. OPENSSL_NO_FP_API merged into OPENSSL_NO_STDIO. Remove MS_STATIC; it's a relic from platforms <32 bits. [Rich Salz] *) Cleaned up dead code. Remove all but one '#ifdef undef' which is to be looked at. [Rich Salz] *) Clean up calling of xxx_free routines. Just like free(), fix most of the xxx_free routines to accept NULL. Remove the non-null checks from callers. Save much code. [Rich Salz] *) Add secure heap for storage of private keys (when possible). Add BIO_s_secmem(), CBIGNUM, etc. Contributed by Akamai Technologies under our Corporate CLA. [Rich Salz] *) Experimental support for a new, fast, unbiased prime candidate generator, bn_probable_prime_dh_coprime(). Not currently used by any prime generator. [Felix Laurie von Massenbach <felix@erbridge.co.uk>] *) New output format NSS in the sess_id command line tool. This allows exporting the session id and the master key in NSS keylog format. [Martin Kaiser <martin@kaiser.cx>] *) Harmonize version and its documentation. -f flag is used to display compilation flags. [mancha <mancha1@zoho.com>] *) Fix eckey_priv_encode so it immediately returns an error upon a failure in i2d_ECPrivateKey. Thanks to Ted Unangst for feedback on this issue. [mancha <mancha1@zoho.com>] *) Fix some double frees.
These are not thought to be exploitable. [mancha <mancha1@zoho.com>] *) Use algorithm specific chains in SSL_CTX_use_certificate_chain_file(): this fixes a limitation in previous versions of OpenSSL. [Steve Henson] *) Experimental encrypt-then-mac support. Experimental support for encrypt then mac from draft-gutmann-tls-encrypt-then-mac-02.txt. To enable it set the appropriate extension number (0x42 for the test server) using e.g. -DTLSEXT_TYPE_encrypt_then_mac=0x42 For non-compliant peers (i.e. just about everything) this should have no effect. WARNING: EXPERIMENTAL, SUBJECT TO CHANGE. *) Extend CMS code to support RSA-PSS signatures and RSA-OAEP for enveloped data. [Steve Henson] *) Extended RSA OAEP support via EVP_PKEY API. Options to specify digest, MGF1 digest and OAEP label. [Steve Henson] *) Make openssl verify return errors. [Chris Palmer <palmer@google.com> and Ben Laurie] *) New function ASN1_TIME_diff to calculate the difference between two ASN1_TIME structures or one structure and the current time. [Steve Henson] *) Update fips_test_suite to support multiple command line options. New test to induce all self test errors in sequence and check expected failures. [Steve Henson] *) Add FIPS_{rsa,dsa,ecdsa}_{sign,verify} functions which digest and sign or verify all in one operation. [Steve Henson] *) Add fips_algvs: a multicall fips utility incorporating all the algorithm test programs and fips_test_suite. Includes functionality to parse the minimal script output of fipsalgest.pl directly. [Steve Henson] *) Add authorisation parameter to FIPS_module_mode_set(). [Steve Henson] *) Add FIPS selftest for ECDH algorithm using P-224 and B-233 curves. [Steve Henson] *) Use separate DRBG fields for internal and external flags. New function FIPS_drbg_health_check() to perform on demand health checking. Add generation tests to fips_test_suite with reduced health check interval to demonstrate periodic health checking.
Add "nodh" option to fips_test_suite to skip very slow DH test. [Steve Henson] *) New function FIPS_get_cipherbynid() to lookup FIPS supported ciphers based on NID. [Steve Henson] *) More extensive health check for DRBG checking many more failure modes. New function FIPS_selftest_drbg_all() to handle every possible DRBG combination: call this in fips_test_suite. [Steve Henson] *) Add support for canonical generation of DSA parameter 'g'. See FIPS 186-3 A.2.3. *) Add support for HMAC DRBG from SP800-90. Update DRBG algorithm test and POST to handle HMAC cases. [Steve Henson] *) Add functions FIPS_module_version() and FIPS_module_version_text() to return numerical and string versions of the FIPS module number. [Steve Henson] *) Rename FIPS_mode_set and FIPS_mode to FIPS_module_mode_set and FIPS_module_mode. FIPS_mode and FIPS_mode_set will be implemented outside the validated module in the FIPS capable OpenSSL. [Steve Henson] *) Minor change to DRBG entropy callback semantics. In some cases there is no multiple of the block length between min_len and max_len. Allow the callback to return more than max_len bytes of entropy but discard any extra: it is the callback's responsibility to ensure that the extra data discarded does not impact the requested amount of entropy. [Steve Henson] *) Add PRNG security strength checks to RSA, DSA and ECDSA using information in FIPS186-3, SP800-57 and SP800-131A. [Steve Henson] *) CCM support via EVP. Interface is very similar to GCM case except we must supply all data in one chunk (i.e. no update, final) and the message length must be supplied if AAD is used. Add algorithm test support. [Steve Henson] *) Initial version of POST overhaul. Add POST callback to allow the status of POST to be monitored and/or failures induced. Modify fips_test_suite to use callback. Always run all selftests even if one fails. [Steve Henson] *) XTS support including algorithm test driver in the fips_gcmtest program. 
Note: this does increase the maximum key length from 32 to 64 bytes but there should be no binary compatibility issues as existing applications will never use XTS mode. [Steve Henson] *) Extensive reorganisation of FIPS PRNG behaviour. Remove all dependencies to OpenSSL RAND code and replace with a tiny FIPS RAND API which also performs algorithm blocking for unapproved PRNG types. Also do not set PRNG type in FIPS_mode_set(): leave this to the application. Add default OpenSSL DRBG handling: sets up FIPS PRNG and seeds with the standard OpenSSL PRNG: set additional data to a date time vector. [Steve Henson] *) Rename old X9.31 PRNG functions of the form FIPS_rand* to FIPS_x931*. This shouldn't present any incompatibility problems because applications shouldn't be using these directly and any that are will need to rethink anyway as the X9.31 PRNG is now deprecated by FIPS 140-2 [Steve Henson] *) Extensive self tests and health checking required by SP800-90 DRBG. Remove strength parameter from FIPS_drbg_instantiate and always instantiate at maximum supported strength. [Steve Henson] *) Add ECDH code to fips module and fips_ecdhvs for primitives only testing. [Steve Henson] *) New algorithm test program fips_dhvs to handle DH primitives only testing. [Steve Henson] *) New function DH_compute_key_padded() to compute a DH key and pad with leading zeroes if needed: this complies with SP800-56A et al. [Steve Henson] *) Initial implementation of SP800-90 DRBGs for Hash and CTR. Not used by anything, incomplete, subject to change and largely untested at present. [Steve Henson] *) Modify fipscanisteronly build option to only build the necessary object files by filtering FIPS_EX_OBJ through a perl script in crypto/Makefile. [Steve Henson] *) Add experimental option FIPSSYMS to give all symbols in fipscanister.o and FIPS or fips prefix. This will avoid conflicts with future versions of OpenSSL. 
Add perl script util/fipsas.pl to preprocess assembly language source files and rename any affected symbols. [Steve Henson] *) Add selftest checks and algorithm block of non-fips algorithms in FIPS mode. Remove DES2 from selftests. [Steve Henson] *) Add ECDSA code to fips module. Add tiny fips_ecdsa_check to just return internal method without any ENGINE dependencies. Add new tiny fips sign and verify functions. [Steve Henson] *) New build option no-ec2m to disable characteristic 2 code. [Steve Henson] *) New build option "fipscanisteronly". This only builds fipscanister.o and (currently) associated fips utilities. Uses the file Makefile.fips instead of Makefile.org as the prototype. [Steve Henson] *) Add some FIPS mode restrictions to GCM. Add internal IV generator. Update fips_gcmtest to use IV generator. [Steve Henson] *) Initial, experimental EVP support for AES-GCM. AAD can be input by setting output buffer to NULL. The *Final function must be called although it will not retrieve any additional data. The tag can be set or retrieved with a ctrl. The IV length is by default 12 bytes (96 bits) but can be set to an alternative value. If the IV length exceeds the maximum IV length (currently 16 bytes) it cannot be set before the key. [Steve Henson] *) New flag in ciphers: EVP_CIPH_FLAG_CUSTOM_CIPHER. This means the underlying do_cipher function handles all cipher semantics itself including padding and finalisation. This is useful if (for example) an ENGINE cipher handles block padding itself. The behaviour of do_cipher is subtly changed if this flag is set: the return value is the number of characters written to the output buffer (zero is no longer an error code) or a negative error code. Also if the input buffer is NULL and length 0 finalisation should be performed. [Steve Henson] *) If a candidate issuer certificate is already part of the constructed path ignore it: new debug notification X509_V_ERR_PATH_LOOP for this case. 
[Steve Henson] *) Improve forward-security support: add functions void SSL_CTX_set_not_resumable_session_callback(SSL_CTX *ctx, int (*cb)(SSL *ssl, int is_forward_secure)) void SSL_set_not_resumable_session_callback(SSL *ssl, int (*cb)(SSL *ssl, int is_forward_secure)) for use by SSL/TLS servers; the callback function will be called whenever a new session is created, and gets to decide whether the session may be cached to make it resumable (return 0) or not (return 1). (As per the SSL/TLS protocol specifications, the session_id sent by the server will be empty to indicate that the session is not resumable; also, the server will not generate RFC 4507 (RFC 5077) session tickets.) A simple reasonable callback implementation is to return is_forward_secure. This parameter will be set to 1 or 0 depending on the ciphersuite selected by the SSL/TLS server library, indicating whether it can provide forward security. [Emilia Käsper <emilia.kasper@esat.kuleuven.be> (Google)] *) New -verify_name option in command line utilities to set verification parameters by name. [Steve Henson] *) Initial CMAC implementation. WARNING: EXPERIMENTAL, API MAY CHANGE. Add CMAC pkey methods. [Steve Henson] *) Experimental renegotiation in s_server -www mode. If the client browses /reneg the connection is renegotiated. *) Fix many cases where the return value is ignored. NB. The functions RAND_add(), RAND_seed(), BIO_set_cipher() and some obscure PEM functions were changed so they can now return an error. The RAND changes required a change to the RAND_METHOD structure. [Steve Henson] *) New macro __owur for "OpenSSL Warn Unused Result". This makes use of a gcc attribute to warn if the result of a function is ignored. This is enabled if DEBUG_UNUSED is set. Add to several functions in evp.h whose return value is often ignored. [Steve Henson] *) New -noct, -requestct, -requirect and -ctlogfile options for s_client.
These allow SCTs (signed certificate timestamps) to be requested and validated when establishing a connection. [Rob Percival <robpercival@google.com>] Changes between 1.0.2g and 1.0.2h [3 May 2016] *) Prevent padding oracle in AES-NI CBC MAC check A MITM attacker can use a padding oracle attack to decrypt traffic when the connection uses an AES CBC cipher and the server supports AES-NI. This issue was reported by Juraj Somorovsky using TLS-Attacker. (CVE-2016-2107) [Kurt Roeckx] *) Fix an overflow in EVP_EncodeUpdate(). Internally to OpenSSL the EVP_EncodeUpdate() function is primarily used by the PEM_write_bio* family of functions. These are mainly used within the OpenSSL command line applications, so any application which processes data from an untrusted source and outputs it as a PEM file should be considered vulnerable to this issue. User applications that call these APIs directly with large amounts of untrusted data may also be vulnerable. This issue was reported by Guido Vranken. (CVE-2016-2105) [Matt Caswell] *) Fix an overflow in EVP_EncryptUpdate(). This issue was reported by Guido Vranken. (CVE-2016-2106) [Matt Caswell] *) Prevent ASN.1 BIO excessive memory allocation When ASN.1 data is read from a BIO using functions such as d2i_CMS_bio() a short invalid encoding can cause allocation of a large amount of memory. This issue was reported by Brian Carpenter. (CVE-2016-2109) [Stephen Henson] *) EBCDIC overread ASN1 Strings that are over 1024 bytes can cause an overread in applications using the X509_NAME_oneline() function on EBCDIC systems. This could result in arbitrary stack data being returned in the buffer. This issue was reported by Guido Vranken. (CVE-2016-2176) [Matt Caswell] *) Modify behavior of ALPN to invoke callback after SNI/servername callback, such that updates to the SSL_CTX affect ALPN. [Todd Short] *) Remove LOW from the DEFAULT cipher list. This removes single DES from the default. [Kurt Roeckx] *) Only remove the SSLv2 methods with the no-ssl2-method option. When the methods are enabled and ssl2 is disabled the methods return NULL.
[Kurt Roeckx] Changes between 1.0.2f and 1.0.2g [1 Mar 2016] *) Disable weak ciphers in SSLv3 and up in default builds of OpenSSL. Builds that are not configured with "enable-weak-ssl-ciphers" will not provide any "EXPORT" or "LOW" strength ciphers. [Viktor Dukhovni] *) In both client and server variants, SSLv2 ciphers vulnerable to exhaustive search key recovery have been removed. Specifically, the SSLv2 40-bit EXPORT ciphers, and SSLv2 56-bit DES are no longer available. (CVE-2016-0800) [Viktor Dukhovni] *) Fix a double-free in DSA code A double free bug was discovered when OpenSSL parses malformed DSA private keys and could lead to a DoS attack or memory corruption for applications that receive DSA private keys from untrusted sources. This scenario is considered rare. This issue was reported to OpenSSL by Adam Langley (Google/BoringSSL) using libFuzzer. (CVE-2016-0705) [Stephen Henson] *) Disable SRP fake user seed to address a server memory leak. Add a new method SRP_VBASE_get1_by_user that handles the seed properly. SRP_VBASE_get_by_user had inconsistent memory management behaviour. In order to fix an unavoidable memory leak, SRP_VBASE_get_by_user was changed to ignore the "fake user" SRP seed, even if the seed is configured. (CVE-2016-0798) [Emilia Käsper] *) Fix BN_hex2bn/BN_dec2bn NULL pointer deref/heap corruption. All OpenSSL internal usage of these functions uses data that is not expected to be untrusted, e.g. config file data or application command line arguments. If user developed applications generate config file data based on untrusted data then it is possible that this could also lead to security consequences. This is also anticipated to be rare. This issue was reported to OpenSSL by Guido Vranken. (CVE-2016-0797) [Matt Caswell] *) Fix memory issues in BIO_*printf functions The internal |fmtstr| function used in processing a "%s" format string in the BIO_*printf functions could overflow while calculating the length of a string and cause an OOB read when printing very long strings.
Additionally the internal |doapr_outch| function can attempt to write to an OOB memory location (at an offset from the NULL pointer) in the event of a memory allocation failure. In 1.0.2 and below this could be caused where the size of a buffer to be allocated is greater than INT_MAX. E.g. this could be in processing a very long "%s" format string. Memory leaks can also occur. The first issue may mask the second issue dependent on compiler behaviour. These problems could enable attacks where large amounts of untrusted data is passed to the BIO_*printf functions. If applications use these functions in this way then they could be vulnerable. OpenSSL itself uses these functions when printing out human-readable dumps of ASN.1 data. Therefore applications that print this data could be vulnerable if the data is from untrusted sources. OpenSSL command line applications could also be vulnerable where they print out ASN.1 data, or if untrusted data is passed as command line arguments. Libssl is not considered directly vulnerable. Additionally certificates etc received via remote connections via libssl are also unlikely to be able to trigger these issues because of message size limits enforced within libssl. This issue was reported to OpenSSL by Guido Vranken. (CVE-2016-0799) [Matt Caswell]

*) Side channel attack on modular exponentiation. A side-channel attack was found which makes use of cache-bank conflicts on the Intel Sandy-Bridge microarchitecture which could lead to the recovery of RSA keys. The ability to exploit this issue is limited as it relies on an attacker who has control of code in a thread running on the same hyper-threaded core as the victim thread which is performing decryptions. This issue was reported to OpenSSL by Yuval Yarom, The University of Adelaide and NICTA, Daniel Genkin, Technion and Tel Aviv University, and Nadia Heninger, University of Pennsylvania with more information at.
(CVE-2016-0702) [Andy Polyakov]

*) Change the req app to generate a 2048-bit RSA/DSA key by default, if no keysize is specified with default_bits. This fixes an omission in an earlier change that changed all RSA/DSA key generation apps to use 2048 bits by default. [Emilia Käsper]

Changes between 1.0.2e and 1.0.2f [28 Jan 2016]

*) DH small subgroups. If an application uses X9.42 style DH parameters (which may be based on primes that are not "safe") and reuses the private DH exponent, an attacker could use a small subgroup attack to recover that exponent. The fix for this issue adds an additional check where a "q" parameter is available (as is the case in X9.42 based parameters). This detects the only known attack, and is the only possible defense for static DH ciphersuites. This could have some performance impact. Additionally the SSL_OP_SINGLE_DH_USE option has been switched on by default and cannot be disabled. This could have some performance impact. This issue was reported to OpenSSL by Antonio Sanso (Adobe). (CVE-2016-0701) [Matt Caswell]

*) SSLv2 doesn't block disabled ciphers. A malicious client can negotiate SSLv2 ciphers that have been disabled on the server and complete SSLv2 handshakes even if all SSLv2 ciphers have been disabled, provided that the SSLv2 protocol was not also disabled via SSL_OP_NO_SSLv2. This issue was reported to OpenSSL on 26th December 2015 by Nimrod Aviram and Sebastian Schinzel. (CVE-2015-3197) [Viktor Dukhovni]

Changes between 1.0.2d and 1.0.2e [3 Dec 2015]

Changes between 1.0.2c and 1.0.2d [9 Jul 2015]

*) Alternate chains certificate forgery. During certificate verification, OpenSSL will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic can mean that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate. This issue was reported to OpenSSL by Adam Langley/David Benjamin (Google/BoringSSL). [Matt Caswell]

Changes between 1.0.2b and 1.0.2c [12 Jun 2015]

*) Fix HMAC ABI incompatibility. The previous version introduced an ABI incompatibility in the handling of HMAC. The previous ABI has now been restored. [Matt Caswell]

Changes between 1.0.2a and 1.0.2b [11 Jun 2015]

*) Malformed ECParameters causes infinite loop. When processing an ECParameters structure OpenSSL enters an infinite loop if the curve specified is over a specially malformed binary polynomial field. This can be used to perform denial of service against any system which processes public keys, certificate requests or certificates. This includes TLS clients and TLS servers with client authentication enabled. This issue was reported to OpenSSL by Joseph Barr-Pixton. (CVE-2015-1788) [Andy Polyakov]

*) Exploitable out-of-bounds read in X509_cmp_time. X509_cmp_time does not properly check the length of the ASN1_TIME string and can read a few bytes out of bounds.
In addition, X509_cmp_time accepts an arbitrary number of fractional seconds in the time string. An attacker can use this to craft malformed certificates and CRLs of various sizes and potentially cause a segmentation fault, resulting in a DoS on applications that verify certificates or CRLs. TLS clients that verify CRLs are affected. TLS clients and servers with client authentication enabled may be affected if they use custom verification callbacks. This issue was reported to OpenSSL by Robert Swiecki (Google), and independently by Hanno Böck. (CVE-2015-1789) [Emilia Käsper]

*) PKCS7 crash with missing EnvelopedContent. The PKCS#7 parsing code does not handle missing inner EncryptedContent correctly. An attacker can craft malformed ASN.1-encoded PKCS#7 blobs with missing content and trigger a NULL pointer dereference on parsing. Applications that decrypt PKCS#7 data or otherwise parse PKCS#7 structures from untrusted sources are affected. OpenSSL clients and servers are not affected. This issue was reported to OpenSSL by Michal Zalewski (Google). (CVE-2015-1790) [Emilia Käsper]

*) CMS verify infinite loop with unknown hash function. When verifying a signedData message the CMS code can enter an infinite loop if presented with an unknown hash function OID. This can be used to perform denial of service against any system which verifies signedData messages using the CMS code. This issue was reported to OpenSSL by Johannes Bauer. (CVE-2015-1792) [Stephen Henson]

*) Race condition handling NewSessionTicket. If a NewSessionTicket is received by a multi-threaded client when attempting to reuse a previous ticket then a race condition can occur potentially leading to a double free of the ticket data. (CVE-2015-1791) [Matt Caswell]

*) Only support 256-bit or stronger elliptic curves with the 'ecdh_auto' setting (server) or by default (client). Of supported curves, prefer P-256 (both).
[Emilia Kasper]

Changes between 1.0.2 and 1.0.2a [19 Mar 2015]

*) ClientHello sigalgs DoS fix. If a client connects to an OpenSSL 1.0.2 server and renegotiates with an invalid signature algorithms extension a NULL pointer dereference will occur. This can be exploited in a DoS attack against the server. This issue was reported to OpenSSL by David Ramos of Stanford University. (CVE-2015-0291) [Stephen Henson and Matt Caswell]

*) Multiblock corrupted pointer fix. This issue was reported to OpenSSL by Daniel Danner and Rainer Mueller. (CVE-2015-0290) [Matt Caswell]

*) Segmentation fault in DTLSv1_listen fix. This issue was reported to OpenSSL by Per Allansson. (CVE-2015-0207) [Matt Caswell]

*) Segmentation fault in ASN1_TYPE_cmp fix. (CVE-2015-0286) [Stephen Henson]

*) Segmentation fault for invalid PSS parameters fix. This issue was reported to OpenSSL by Brian Carpenter. (CVE-2015-0208) [Stephen Henson]

*) ASN.1 structure reuse memory corruption fix. Reusing a structure in ASN.1 parsing may allow an attacker to cause memory corruption via an invalid write. Such reuse is and has been strongly discouraged and is believed to be rare. Applications that parse structures containing CHOICE or ANY DEFINED BY components may be affected. Certificate parsing (d2i_X509 and related functions) is however not affected. OpenSSL clients and servers are not affected. (CVE-2015-0287) [Stephen Henson]

*) PKCS7 NULL pointer dereferences fix. The PKCS#7 parsing code does not handle missing outer ContentInfo correctly. An attacker can craft malformed ASN.1-encoded PKCS#7 blobs with missing content and trigger a NULL pointer dereference on parsing. Applications that verify PKCS#7 signatures, decrypt PKCS#7 data or otherwise parse PKCS#7 structures from untrusted sources are affected. OpenSSL clients and servers are not affected. This issue was reported to OpenSSL by Michal Zalewski (Google).
(CVE-2015-0289) [Emilia Käsper]

*) DoS via reachable assert in SSLv2 servers fix. A malicious client can trigger an OPENSSL_assert (i.e., an abort) in servers that both support SSLv2 and enable export cipher suites by sending a specially crafted SSLv2 CLIENT-MASTER-KEY message. This issue was discovered by Sean Burford (Google) and Emilia Käsper (OpenSSL development team). (CVE-2015-0293) [Emilia Käsper]

*) Empty CKE with client auth and DHE fix. If client auth is used then a server can seg fault in the event of a DHE ciphersuite being selected and a zero length ClientKeyExchange message being sent by the client. This could be exploited in a DoS attack. (CVE-2015-1787) [Matt Caswell]

*) Handshake with unseeded PRNG fix. Under certain conditions an OpenSSL 1.0.2 client can complete a handshake with an unseeded PRNG. The conditions are:
- The client is on a platform where the PRNG has not been seeded automatically, and the user has not seeded manually
- A protocol specific client method version has been used (i.e. not SSL_client_methodv23)
- A ciphersuite is used that does not require additional random data from the PRNG beyond the initial ClientHello client random (e.g. PSK-RC4-SHA).
If the handshake succeeds then the client random that has been used will have been generated from a PRNG with insufficient entropy and therefore the output may be predictable. For example using the following command with an unseeded openssl will succeed on an unpatched platform: openssl s_client -psk 1a2b3c4d -tls1_2 -cipher PSK-RC4-SHA (CVE-2015-0285) [Matt Caswell]

*) Use After Free following d2i_ECPrivatekey error fix. This issue was discovered by the BoringSSL project and fixed in their commit 517073cd4b. (CVE-2015-0209) [Matt Caswell]

*) X509_to_X509_REQ NULL pointer deref fix. The function X509_to_X509_REQ will crash with a NULL pointer dereference if the certificate key is invalid. This function is rarely used in practice. This issue was discovered by Brian Carpenter.
(CVE-2015-0288) [Stephen Henson]

*) Removed the export ciphers from the DEFAULT ciphers. [Kurt Roeckx]

Changes between 1.0.1l and 1.0.2 [22 Jan 2015]

*) Facilitate "universal" ARM builds targeting range of ARM ISAs, e.g. ARMv5 through ARMv8, as opposed to "locking" it to a single one. So far those who have to target multiple platforms would compromise and argue that a binary targeting say ARMv5 would still execute on ARMv8. "Universal" build resolves this compromise by providing near-optimal performance even on newer platforms. [Andy Polyakov]

*) Accelerated NIST P-256 elliptic curve implementation for x86_64 (other platforms pending). [Shay Gueron & Vlad Krasnov (Intel Corp), Andy Polyakov]

*) Add support for the SignedCertificateTimestampList certificate and OCSP response extensions from RFC6962. [Rob Stradling]

*) Fix ec_GFp_simple_points_make_affine (thus, EC_POINTs_mul etc.) for corner cases. (Certain input points at infinity could lead to bogus results, with non-infinity inputs mapped to infinity too.) [Bodo Moeller]

*) Initial support for PowerISA 2.0.7, first implemented in POWER8. This covers AES, SHA256/512 and GHASH. "Initial" means that most common cases are optimized and there still is room for further improvements. Vector Permutation AES for Altivec is also added. [Andy Polyakov]

*) Add support for little-endian ppc64 Linux target. [Marcelo Cerri (IBM)]

*) Initial support for ARMv8 ISA crypto extensions. This covers AES, SHA1, SHA256 and GHASH. "Initial" means that most common cases are optimized and there still is room for further improvements. Both 32- and 64-bit modes are supported. [Andy Polyakov, Ard Biesheuvel (Linaro)]

*) Improved ARMv7 NEON support. [Andy Polyakov]

*) Support for SPARC Architecture 2011 crypto extensions, first implemented in SPARC T4. This covers AES, DES, Camellia, SHA1, SHA256/512, MD5, GHASH and modular exponentiation. [Andy Polyakov, David Miller]

*) Accelerated modular exponentiation for Intel processors, a.k.a. RSAZ.
[Shay Gueron & Vlad Krasnov (Intel Corp)]

*) Support for new and upcoming Intel processors, including AVX2, BMI and SHA ISA extensions. This includes additional "stitched" implementations, AESNI-SHA256 and GCM, and multi-buffer support for TLS encrypt. This work was sponsored by Intel Corp. [Andy Polyakov]

*) Support for DTLS 1.2. This adds two sets of DTLS methods: DTLS_*_method() supports both DTLS 1.2 and 1.0 and should use whatever version the peer supports and DTLSv1_2_*_method() which supports DTLS 1.2 only. [Steve Henson]

*) Use algorithm specific chains in SSL_CTX_use_certificate_chain_file(): this fixes a limitation in previous versions of OpenSSL. [Steve Henson]

*) Extended RSA OAEP support via EVP_PKEY API. Options to specify digest, MGF1 digest and OAEP label. [Steve Henson]

*) Add functions to allocate and set the fields of an ECDSA_METHOD structure. [Douglas E. Engert, Steve Henson]

*) New functions OPENSSL_gmtime_diff and ASN1_TIME_diff to find the difference in days and seconds between two tm or ASN1_TIME structures. [Steve Henson]

*) Add -rev test option to s_server to just reverse order of characters received by client and send back to client. Also prints an abbreviated summary of the connection parameters. [Steve Henson]

*) New option -brief for s_client and s_server to print out a brief summary of connection parameters. [Steve Henson]

*) Add callbacks for arbitrary TLS extensions. [Trevor Perrin <trevp@trevp.net> and Ben Laurie]

*) New option -crl_download in several openssl utilities to download CRLs from CRLDP extension in certificates. [Steve Henson]

*) New options -CRL and -CRLform for s_client and s_server for CRLs. [Steve Henson]

*) New function X509_CRL_diff to generate a delta CRL from the difference of two full CRLs. Add support to "crl" utility. [Steve Henson]

*) New functions to set lookup_crls function and to retrieve X509_STORE from X509_STORE_CTX. [Steve Henson]

*) Print out deprecated issuer and subject unique ID fields in certificates.
[Steve Henson]

*) Extend OCSP I/O functions so they can be used for simple general purpose HTTP as well as OCSP. New wrapper function which can be used to download CRLs using the OCSP API. [Steve Henson]

*) Delegate command line handling in s_client/s_server to SSL_CONF APIs. [Steve Henson]

*) SSL_CONF* functions. These provide a common framework for application configuration using configuration files or command lines. [Steve Henson]

*) SSL/TLS tracing code. This parses out SSL/TLS records using the message callback and prints the results. Needs compile time option "enable-ssl-trace". New options to s_client and s_server to enable tracing. [Steve Henson]

*) New ctrl and macro to retrieve supported points extensions. Print out extension in s_server and s_client. [Steve Henson]

*) New functions to retrieve certificate signature and signature OID NID. [Steve Henson]

*) Add functions to retrieve and manipulate the raw cipherlist sent by a client to OpenSSL. [Steve Henson]

*) New Suite B modes for TLS code. These use and enforce the requirements of RFC6460: restrict ciphersuites, only permit Suite B algorithms and only use Suite B curves. The Suite B modes can be set by using the strings "SUITEB128", "SUITEB192" or "SUITEB128ONLY" for the cipherstring. [Steve Henson]

*) New chain verification flags for Suite B levels of security. Check algorithms are acceptable when flags are set in X509_verify_cert. [Steve Henson]

*) Make tls1_check_chain return a set of flags indicating checks passed by a certificate chain. Add additional tests to handle client certificates: checks for matching certificate type and issuer name comparison. [Steve Henson]

*) If an attempt is made to use a signature algorithm not in the peer preference list abort the handshake. If client has no suitable signature algorithms in response to a certificate request do not use the certificate. [Steve Henson]

*) If server EC tmp key is not in client preference list abort handshake.
[Steve Henson]

*) Add support for certificate stores in CERT structure. This makes it possible to have different stores per SSL structure or one store in the parent SSL_CTX. Include distinct stores for certificate chain verification and chain building. New ctrl SSL_CTRL_BUILD_CERT_CHAIN to build and store a certificate chain in CERT structure: returning an error if the chain cannot be built: this will allow applications to test if a chain is correctly configured. Note: if the CERT based stores are not set then the parent SSL_CTX store is used to retain compatibility with existing behaviour. [Steve Henson]

*) New function ssl_set_client_disabled to set a ciphersuite disabled mask based on the current session, check mask when sending client hello and checking the requested ciphersuite. [Steve Henson]

*) New ctrls to retrieve and set certificate types in a certificate request message. Print out received values in s_client. If certificate types is not set with custom values set sensible values based on supported signature algorithms. [Steve Henson]

*) Support for distinct client and server supported signature algorithms. [Steve Henson]

*) Add certificate callback. If set this is called whenever a certificate is required by client or server. An application can decide which certificate chain to present based on arbitrary criteria: for example supported signature algorithms. Add very simple example to s_server. This fixes many of the problems and restrictions of the existing client certificate callback: for example you can now clear an existing certificate and specify the whole chain. [Steve Henson]

*) Add new "valid_flags" field to CERT_PKEY structure which determines what the certificate can be used for (if anything). Set valid_flags field in new tls1_check_chain function. Simplify ssl_set_cert_masks which used to have similar checks in it. Add new "cert_flags" field to CERT structure and include a "strict mode".
This enforces some TLS certificate requirements (such as only permitting certificate signature algorithms contained in the supported algorithms extension) which some implementations ignore: this option should be used with caution as it could cause interoperability issues. [Steve Henson]

*) Update and tidy signature algorithm extension processing. Work out shared signature algorithms based on preferences and peer algorithms and print them out in s_client and s_server. Abort handshake if no shared signature algorithms. [Steve Henson]

*) Add new functions to allow customised supported signature algorithms for SSL and SSL_CTX structures. Add options to s_client and s_server to support them. [Steve Henson]

*) New function SSL_certs_clear() to delete all references to certificates from an SSL structure. Before this once a certificate had been added it couldn't be removed. [Steve Henson]

*) Integrate hostname, email address and IP address checking with certificate verification. New verify options supporting checking in openssl utility. [Steve Henson]

*) Fixes and wildcard matching support to hostname and email checking functions. Add manual page. [Florian Weimer (Red Hat Product Security Team)]

*) New functions to check a hostname, email or IP address against a certificate. Add options to the x509 utility to print results of checks against a certificate. [Steve Henson]

*) Fix OCSP checking. [Rob Stradling <rob.stradling@comodo.com> and Ben Laurie]

*) Initial experimental support for explicitly trusted non-root CAs. OpenSSL still tries to build a complete chain to a root but if an intermediate CA has a trust setting included that is used. The first setting is used: whether to trust (e.g., -addtrust option to the x509 utility) or reject. [Steve Henson]

*) Add -trusted_first option which attempts to find certificates in the trusted store even if an untrusted chain is also supplied.
[Steve Henson]

*) MIPS assembly pack updates: support for MIPS32r2 and SmartMIPS ASE, platform support for Linux and Android. [Andy Polyakov]

*) Support for linux-x32, ILP32 environment in x86_64 framework. [Andy Polyakov]

*) Experimental multi-implementation support for FIPS capable OpenSSL. When in FIPS mode the approved implementations are used as normal, when not in FIPS mode the internal unapproved versions are used instead. This means that the FIPS capable OpenSSL isn't forced to use the (often lower performance) FIPS implementations outside FIPS mode. [Steve Henson]

*) Transparently support X9.42 DH parameters when calling PEM_read_bio_DHparams. This means existing applications can handle the new parameter format automatically. [Steve Henson]

*) Initial experimental support for X9.42 DH parameter format: mainly to support use of 'q' parameter for RFC5114 parameters. [Steve Henson]

*) Add DH parameters from RFC5114 including test data to dhtest. [Steve Henson]

*) Support for automatic EC temporary key parameter selection. If enabled the most preferred EC parameters are automatically used instead of hardcoded fixed parameters. Now a server just has to call: SSL_CTX_set_ecdh_auto(ctx, 1) and the server will automatically support ECDH and use the most appropriate parameters. [Steve Henson]

*) Enhance and tidy EC curve and point format TLS extension code. Use static structures instead of allocation if default values are used. New ctrls to set curves we wish to support and to retrieve shared curves. Print out shared curves in s_server. New options to s_server and s_client to set list of supported curves. [Steve Henson]

*) New ctrls to retrieve supported signature algorithms and supported curve values as an array of NIDs. Extend openssl utility to print out received values. [Steve Henson]

*) Add new APIs EC_curve_nist2nid and EC_curve_nid2nist which convert between NIDs and the more common NIST names such as "P-256".
Enhance ecparam utility and ECC method to recognise the NIST names for curves. [Steve Henson]

*) Enhance SSL/TLS certificate chain handling to support different chains for each certificate instead of one chain in the parent SSL_CTX. [Steve Henson]

*) Support for fixed DH ciphersuite client authentication: where both server and client use DH certificates with common parameters. [Steve Henson]

*) Support for fixed DH ciphersuites: those requiring DH server certificates. [Steve Henson]

*) New function i2d_re_X509_tbs for re-encoding the TBS portion of the certificate. Note: Related 1.0.2-beta specific macros X509_get_cert_info, X509_CINF_set_modified, X509_CINF_get_issuer, X509_CINF_get_extensions and X509_CINF_get_signature were reverted post internal team review.

Changes between 1.0.1k and 1.0.1l [15 Jan 2015]

*) Build fixes for the Windows and OpenVMS platforms. [Matt Caswell and Richard Levitte]

Changes between 1.0.1j and 1.0.1k [8 Jan 2015]

*) Fix DTLS segmentation fault in dtls1_get_record. A carefully crafted DTLS message can cause a segmentation fault in OpenSSL due to a NULL pointer dereference. This could lead to a Denial Of Service attack. Thanks to Markus Stenberg of Cisco Systems, Inc. for reporting this issue. (CVE-2014-3571) [Steve Henson]

*) Fix DTLS memory leak in dtls1_buffer_record. A memory leak can occur in the dtls1_buffer_record function under certain conditions, which could be exploited in a Denial of Service attack through memory exhaustion. Thanks to Chris Mueller for reporting this issue. (CVE-2015-0206) [Matt Caswell]

*) Fix issue where no-ssl3 configuration sets method to NULL. When openssl is built with the no-ssl3 option and a SSL v3 ClientHello is received the ssl method would be set to NULL which could later result in a NULL pointer dereference. Thanks to Frank Schmirler for reporting this issue. (CVE-2014-3569) [Kurt Roeckx]

*) Abort handshake if server key exchange message is omitted for ephemeral ECDH ciphersuites. Thanks to Karthikeyan Bhargavan of the PROSECCO team at INRIA for reporting this issue. (CVE-2014-3572) [Steve Henson]

*) Remove non-export ephemeral RSA code on client and server.
This code violated the TLS standard by allowing the use of temporary RSA keys in non-export ciphersuites and could be used by a server to effectively downgrade the RSA key length used to a value smaller than the server certificate. Thanks to Karthikeyan Bhargavan of the PROSECCO team at INRIA for reporting this issue. (CVE-2015-0204) [Steve Henson]

*) Fixed issue where DH client certificates are accepted without verification. An OpenSSL server will accept a DH certificate for client authentication without the certificate verify message. This effectively allows a client to authenticate without the use of a private key. This only affects servers which trust a client certificate authority which issues certificates containing DH keys: these are extremely rare and hardly ever encountered. Thanks to Karthikeyan Bhargavan of the PROSECCO team at INRIA for reporting this issue. (CVE-2015-0205) [Steve Henson]

*) Ensure that the session ID context of an SSL is updated when its SSL_CTX is updated via SSL_set_SSL_CTX. The session ID context is typically set from the parent SSL_CTX, and can vary with the CTX. [Adam Langley]

*) Fix various certificate fingerprint issues. By using non-DER or invalid encodings outside the signed portion of a certificate the fingerprint can be changed without breaking the signature. Although no details of the signed portion of the certificate can be changed this can cause problems with some applications: e.g. those using the certificate fingerprint for blacklists.
1. Reject signatures with non zero unused bits. If the BIT STRING containing the signature has non zero unused bits reject the signature. All current signature algorithms require zero unused bits.
2. Check certificate algorithm consistency. Check the AlgorithmIdentifier inside TBS matches the one in the certificate signature. NB: this will result in signature failure errors for some broken certificates. Thanks to Konrad Kraszewski from Google for reporting this issue.
3.
Check DSA/ECDSA signatures use DER. Re-encode DSA/ECDSA signatures and compare with the original received signature. Return an error if there is a mismatch. This will reject various cases including garbage after signature (thanks to Antti Karjalainen and Tuomo Untinen from the Codenomicon CROSS program for discovering this case) and use of BER or invalid ASN.1 INTEGERs (negative or with leading zeroes). Further analysis was conducted and fixes were developed by Stephen Henson of the OpenSSL core team. (CVE-2014-8275) [Steve Henson]

*) Correct Bignum squaring. Bignum squaring (BN_sqr) may produce incorrect results on some platforms, including x86_64. This bug occurs at random with a very low probability, and is not known to be exploitable in any way, though its exact impact is difficult to determine. Thanks to Pieter Wuille (Blockstream) who reported this issue and also suggested an initial fix. Further analysis was conducted by the OpenSSL development team and Adam Langley of Google. The final fix was developed by Andy Polyakov of the OpenSSL core team. (CVE-2014-3570) [Andy Polyakov]

*) Do not resume sessions on the server if the negotiated protocol version does not match the session's version. Resuming with a different version, while not strictly forbidden by the RFC, is of questionable sanity and breaks all known clients. [David Benjamin, Emilia Käsper]

*) Tighten handling of the ChangeCipherSpec (CCS) message: reject early CCS messages during renegotiation. (Note that because renegotiation is encrypted, this early CCS was not exploitable.) [Emilia Käsper]

*) Tighten client-side session ticket handling during renegotiation: ensure that the client only accepts a session ticket if the server sends the extension anew in the ServerHello. Previously, a TLS client would reuse the old extension state and thus accept a session ticket if one was announced in the initial ServerHello.
Similarly, ensure that the client requires a session ticket if one was advertised in the ServerHello. Previously, a TLS client would ignore a missing NewSessionTicket message. [Emilia Käsper]

Changes between 1.0.1i and 1.0.1j [15 Oct 2014]

*) SRTP Memory Leak. A flaw in the DTLS SRTP extension parsing code allows an attacker who sends a carefully crafted handshake message to cause OpenSSL to fail to free up to 64k of memory, causing a memory leak. This issue affects OpenSSL 1.0.1 server implementations for both SSL/TLS and DTLS regardless of whether SRTP is used or configured. Implementations of OpenSSL that have been compiled with OPENSSL_NO_SRTP defined are not affected. The fix was developed by the OpenSSL team. (CVE-2014-3513) [OpenSSL team]

*) Session Ticket Memory Leak. When an OpenSSL SSL/TLS/DTLS server receives a session ticket the integrity of that ticket is first verified. In the event of a session ticket integrity check failing, OpenSSL will fail to free memory causing a memory leak. By sending a large number of invalid session tickets an attacker could exploit this in a Denial Of Service attack. (CVE-2014-3567) [Steve Henson]

*) Build option no-ssl3 is incomplete. When OpenSSL is configured with "no-ssl3" as a build option, servers could accept and complete a SSL 3.0 handshake, and clients could be configured to send them. (CVE-2014-3568) [Akamai and the OpenSSL team]

*) Add support for TLS_FALLBACK_SCSV. Client applications doing fallback retries should call SSL_set_mode(s, SSL_MODE_SEND_FALLBACK_SCSV). (CVE-2014-3566) [Adam Langley, Bodo Moeller]

*) Add additional DigestInfo checks. Re-encode DigestInfo in DER and check against the original when verifying RSA signature: this will reject any improperly encoded DigestInfo structures. Note: this is a precautionary measure and no attacks are currently known. [Steve Henson]

Changes between 1.0.1h and 1.0.1i [6 Aug 2014]

*) Fix SRP buffer overrun vulnerability. Invalid parameters passed to the SRP code can overrun an internal buffer. Add sanity check that g, A, B < N to SRP code. Thanks to Sean Devlin and Watson Ladd of Cryptography Services, NCC Group for discovering this issue. (CVE-2014-3512) [Steve Henson]

*) A flaw in handling of fragmented ClientHello messages could allow a man-in-the-middle attacker to force a protocol downgrade to TLS 1.0 even if both the server and the client support a higher protocol version. Thanks to David Benjamin and Adam Langley (Google) for discovering and researching this issue. (CVE-2014-3511) [David Benjamin]

*) OpenSSL DTLS clients enabling anonymous (EC)DH ciphersuites are subject to a denial of service attack: a malicious server can crash the client with a null pointer dereference by specifying such a ciphersuite and sending carefully crafted handshake messages. Thanks to Felix Gröbert (Google) for discovering and researching this issue.
(CVE-2014-3510) [Emilia Käsper]

*) By sending carefully crafted DTLS packets an attacker could cause openssl to leak memory. This can be exploited through a Denial of Service attack. Thanks to Adam Langley for discovering and researching this issue. (CVE-2014-3507) [Adam Langley]

*) An attacker can force openssl to consume large amounts of memory whilst processing DTLS handshake messages. This can be exploited through a Denial of Service attack. Thanks to Adam Langley for discovering and researching this issue. (CVE-2014-3506) [Adam Langley]

*) An attacker can force an error condition which causes openssl to crash whilst processing DTLS packets due to memory being freed twice. This can be exploited through a Denial of Service attack. Thanks to Adam Langley and Wan-Teh Chang for discovering and researching this issue. (CVE-2014-3505) [Adam Langley]

*) If a multithreaded client connects to a malicious server using a resumed session and the server sends an ec point format extension it could write up to 255 bytes to freed memory. Thanks to Gabor Tyukasz (LogMeIn Inc) for discovering and researching this issue. (CVE-2014-3509) [Gabor Tyukasz]

*) A malicious server can crash an OpenSSL client with a null pointer dereference (read) by specifying an SRP ciphersuite even though it was not properly negotiated with the client. This can be exploited through a Denial of Service attack. Thanks to Joonas Kuorilehto and Riku Hietamäki (Codenomicon) for discovering and researching this issue. (CVE-2014-5139) [Steve Henson]

*) A flaw in OBJ_obj2txt may cause pretty printing functions such as X509_name_oneline, X509_name_print_ex et al. to leak some information from the stack. Applications may be affected if they echo pretty printing output to the attacker. Thanks to Ivan Fratric (Google) for discovering this issue. (CVE-2014-3508) [Emilia Käsper, and Steve Henson]

*) Fix ec_GFp_simple_points_make_affine (thus, EC_POINTs_mul etc.) for corner cases.
(Certain input points at infinity could lead to bogus results, with non-infinity inputs mapped to infinity too.) [Bodo Moeller]

Changes between 1.0.1g and 1.0.1h [5 Jun 2014]

*) Fix for SSL/TLS MITM flaw. An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers. Thanks to KIKUCHI Masashi (Lepidum Co. Ltd.) for discovering and researching this issue. (CVE-2014-0224) [KIKUCHI Masashi, Steve Henson]

*) Fix DTLS recursion flaw. By sending an invalid DTLS handshake to an OpenSSL DTLS client the code can be made to recurse eventually crashing in a DoS attack. Thanks to Imre Rad (Search-Lab Ltd.) for discovering this issue. (CVE-2014-0221) [Imre Rad, Steve Henson]

*) Fix DTLS invalid fragment vulnerability. A buffer overrun attack can be triggered by sending invalid DTLS fragments to an OpenSSL DTLS client or server. This is potentially exploitable to run arbitrary code on a vulnerable client or server. Thanks to Jüri Aedla for reporting this issue. (CVE-2014-0195) [Jüri Aedla, Steve Henson]

*) Fix bug in TLS code where clients that enable anonymous ECDH ciphersuites are subject to a denial of service attack. Thanks to Felix Gröbert and Ivan Fratric at Google for discovering this issue. (CVE-2014-3470) [Felix Gröbert, Ivan Fratric, Steve Henson]

*) Harmonize version and its documentation. -f flag is used to display compilation flags. [mancha <mancha1@zoho.com>]

*) Fix eckey_priv_encode so it immediately returns an error upon a failure in i2d_ECPrivateKey. [mancha <mancha1@zoho.com>]

*) Fix some double frees. These are not thought to be exploitable. [mancha <mancha1@zoho.com>]

Changes between 1.0.1f and 1.0.1g [7 Apr 2014]

*) A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server. Thanks to Neel Mehta of Google Security for discovering this bug and to Adam Langley and Bodo Moeller for preparing the fix. (CVE-2014-0160) [Adam Langley, Bodo Moeller]

*) TLS pad extension: draft-agl-tls-padding-03. Workaround for the "TLS hang bug" (see FAQ and PR#2771): if the TLS client Hello record length value would otherwise be > 255 and less than 512 pad with a dummy extension containing zeroes so it is at least 512 bytes long.
[Adam Langley, Steve Henson] Changes between 1.0.1e and 1.0.1f [6 Jan 2014] *) Fix for TLS record tampering bug. A carefully crafted invalid handshake could crash OpenSSL with a NULL pointer exception. Thanks to Anton Johansson for reporting this issue. (CVE-2013-4353) *) Keep original DTLS digest and encryption contexts in retransmission structures so we can use the previous session parameters if they need to be resent. (CVE-2013-6450) [Steve Henson] *) Add option SSL_OP_SAFARI_ECDHE_ECDSA_BUG (part of SSL_OP_ALL) which avoids preferring ECDHE-ECDSA ciphers when the client appears to be Safari on OS X. Safari on OS X 10.8..10.8.3 advertises support for several ECDHE-ECDSA ciphers, but fails to negotiate them. The bug is fixed in OS X 10.8.4, but Apple have ruled out both hot fixing 10.8..10.8.3 and forcing users to upgrade to 10.8.4 or newer. [Rob Stradling, Adam Langley] Changes between 1.0.1d and 1.0.1e [11 Feb 2013] *) Correct fix for CVE-2013-0169. The original didn't work on AES-NI supporting platforms or when small records were transferred. [Andy Polyakov, Steve Henson] Changes between 1.0.1c and 1.0.1d [5 Feb 2013] *) Fix flaw in AESNI handling of TLS 1.2 and 1.1 records for CBC mode ciphersuites which can be exploited in a denial of service attack. Thanks go to Adam Langley <agl@chromium.org> for discovering and detecting this bug and to Wolfgang Ettlinger <wolfgang.ettlinger@gmail.com> for independently discovering this issue. (CVE-2012-2686) [Adam Langley] *) Return an error when checking OCSP signatures when key is NULL. This fixes a DoS attack. (CVE-2013-0166) [Steve Henson] *) Make openssl verify return errors. [Chris Palmer <palmer@google.com> and Ben Laurie] *) Call OCSP Stapling callback after ciphersuite has been chosen, so the right response is stapled. Also change SSL_get_certificate() so it returns the certificate actually sent.
[Rob Stradling <rob.stradling@comodo.com>] Changes between 1.0.0h and 1.0.1 [14 Mar 2012] *) Add compatibility with old MDC2 signatures which use an ASN1 OCTET STRING form instead of a DigestInfo. [Steve Henson] *) The format used for MDC2 RSA signatures is inconsistent between EVP and the RSA_sign/RSA_verify functions. This was made more apparent when OpenSSL used RSA_sign/RSA_verify for some RSA signatures in particular those which went through EVP_PKEY_METHOD in 1.0.0 and later. Detect the correct format in RSA_verify so both forms transparently work. [Steve Henson] *) Some servers which support TLS 1.0 can choke if we initially indicate support for TLS 1.2 and later renegotiate using TLS 1.0 in the RSA encrypted premaster secret. As a workaround use the maximum permitted client version in client hello, this should keep such servers happy and still work with previous versions of OpenSSL. [Steve Henson] *) Add support for TLS/DTLS heartbeats. [Robin Seggelmann <seggelmann@fh-muenster.de>] *) Add support for SCTP. [Robin Seggelmann <seggelmann@fh-muenster.de>] *) Improved PRNG seeding for VOS. [Paul Green <Paul.Green@stratus.com>] *) Extensive assembler packs updates, most notably: - x86[_64]: AES-NI, PCLMULQDQ, RDRAND support; - x86[_64]: SSSE3 support (SHA1, vector-permutation AES); - x86_64: bit-sliced AES implementation; - ARM: NEON support, contemporary platforms optimizations; - s390x: z196 support; - *: GHASH and GF(2^m) multiplication implementations; [Andy Polyakov] *) Make TLS-SRP code conformant with RFC 5054 API cleanup (removal of unnecessary code) [Peter Sylvester <peter.sylvester@edelweb.fr>] *) Add TLS key material exporter from RFC 5705. [Eric Rescorla] *) Add DTLS-SRTP negotiation from RFC 5764. [Eric Rescorla] *) Add Next Protocol Negotiation. Can be disabled with a no-npn flag to config or Configure. Code donated by Google.
[Adam Langley <agl@google.com> and Ben Laurie] *) Add optional 64-bit optimized implementations of elliptic curves NIST-P224, NIST-P256, NIST-P521, with constant-time single point multiplication on typical inputs. Compiler support for the nonstandard type __uint128_t is required to use this (present in gcc 4.4 and later, for 64-bit builds). Code made available under Apache License version 2.0. Specify "enable-ec_nistp_64_gcc_128" on the Configure (or config) command line to include this in your build of OpenSSL, and run "make depend" (or "make update"). This enables the following EC_METHODs: EC_GFp_nistp224_method() EC_GFp_nistp256_method() EC_GFp_nistp521_method() EC_GROUP_new_by_curve_name() will automatically use these (while EC_GROUP_new_curve_GFp() currently prefers the more flexible implementations). [Emilia Käsper, Adam Langley, Bodo Moeller (Google)] *) Use type ossl_ssize_t instead of ssize_t which isn't available on all platforms. Move ssize_t definition from e_os.h to the public header file e_os2.h as it now appears in public header file cms.h [Steve Henson] *) New -sigopt option to the ca, req and x509 utilities. Additional signature parameters can be passed using this option and in particular PSS. [Steve Henson] *) Add RSA PSS signing function. This will generate and set the appropriate AlgorithmIdentifiers for PSS based on those in the corresponding EVP_MD_CTX structure. No application support yet. [Steve Henson] *) Support for companion algorithm specific ASN1 signing routines. New function ASN1_item_sign_ctx() signs a pre-initialised EVP_MD_CTX structure and sets AlgorithmIdentifiers based on the appropriate parameters. [Steve Henson] *) Add new algorithm specific ASN1 verification initialisation function to EVP_PKEY_ASN1_METHOD: this is not in EVP_PKEY_METHOD since the ASN1 handling will be the same no matter what EVP_PKEY_METHOD is used. Add a PSS handler to support verification of PSS signatures: checked against a number of sample certificates.
[Steve Henson] *) Add signature printing for PSS. Add PSS OIDs. [Steve Henson, Martin Kaiser <lists@kaiser.cx>] *) Add algorithm specific signature printing. An individual ASN1 method can now print out signatures instead of the standard hex dump. More complex signatures (e.g. PSS) can print out more meaningful information. Include DSA version that prints out the signature parameters r, s. [Steve Henson] *) Password based recipient info support for CMS library: implementing RFC3211. [Steve Henson] *) Split password based encryption into PBES2 and PBKDF2 functions. This neatly separates the code into cipher and PBE sections and is required for some algorithms that split PBES2 into separate pieces (such as password based CMS). [Steve Henson] *) Session-handling fixes: - Fix handling of connections that are resuming with a session ID, but also support Session Tickets. - Fix a bug that suppressed issuing of a new ticket if the client presented a ticket with an expired session. - Try to set the ticket lifetime hint to something reasonable. - Make tickets shorter by excluding irrelevant information. - On the client side, don't ignore renewed tickets. [Adam Langley, Bodo Moeller (Google)] *) Fix PSK session representation. [Bodo Moeller] *) Add RC4-MD5 and AESNI-SHA1 "stitched" implementations. This work was sponsored by Intel. [Andy Polyakov] *) Add GCM support to TLS library. Some custom code is needed to split the IV between the fixed (from PRF) and explicit (from TLS record) portions. This adds all GCM ciphersuites supported by RFC5288 and RFC5289. Generalise some AES* cipherstrings to include GCM and add a special AESGCM string for GCM only. [Steve Henson] *) Expand range of ctrls for AES GCM. Permit setting invocation field on decrypt and retrieval of invocation field only on encrypt. [Steve Henson] *) Add HMAC ECC ciphersuites from RFC5289. Include SHA384 PRF support. As required by RFC5289 these ciphersuites cannot be used for versions of TLS earlier than 1.2.
[Steve Henson] *) For FIPS capable OpenSSL interpret a NULL default public key method as unset and return the appropriate default but do *not* set the default. This means we can return the appropriate method in applications that switch between FIPS and non-FIPS modes. [Steve Henson] *) Redirect HMAC and CMAC operations to FIPS module in FIPS mode. If an ENGINE is used then we cannot handle that in the FIPS module so we keep original code iff non-FIPS operations are allowed. [Steve Henson] *) Add -attime option to openssl utilities. [Peter Eckersley <pde@eff.org>, Ben Laurie and Steve Henson] *) Redirect DSA and DH operations to FIPS module in FIPS mode. [Steve Henson] *) Redirect ECDSA and ECDH operations to FIPS module in FIPS mode. Also use FIPS EC methods unconditionally for now. [Steve Henson] *) New build option no-ec2m to disable characteristic 2 code. [Steve Henson] *) Backport libcrypto audit of return value checking from 1.1.0-dev; not all cases can be covered as some introduce binary incompatibilities. [Steve Henson] *) Redirect RSA operations to FIPS module including keygen, encrypt, decrypt, sign and verify. Block use of non FIPS RSA methods. [Steve Henson] *) Add similar low level API blocking to ciphers. [Steve Henson] *) Low level digest APIs are not approved in FIPS mode: any attempt to use these will cause a fatal error. Applications that *really* want to use them can use the private_* version instead. [Steve Henson] *) Redirect cipher operations to FIPS module for FIPS builds. [Steve Henson] *) Redirect digest operations to FIPS module for FIPS builds. [Steve Henson] *) Update build system to add "fips" flag which will link in fipscanister.o for static and shared library builds embedding a signature if needed. [Steve Henson] *) Output TLS supported curves in preference order instead of numerical order. This is currently hardcoded for the highest order curves first. This should be configurable so applications can judge speed vs strength. 
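The curve-ordering entry above amounts to sorting the supported curves by strength rather than by numeric curve ID. A toy sketch (the curve list and its field sizes here are illustrative, not OpenSSL's internal table):

```python
# Hypothetical list of supported curves: (name, field size in bits).
CURVES = [
    ("secp256r1", 256),
    ("secp521r1", 521),
    ("secp224r1", 224),
    ("secp384r1", 384),
]

def by_strength(curves):
    """Order curves strongest-first, mirroring the hardcoded
    'highest order curves first' preference described above."""
    return sorted(curves, key=lambda c: c[1], reverse=True)

print([name for name, _ in by_strength(CURVES)])
```

A configurable version would simply swap in a different sort key (e.g. estimated handshake speed), which is the speed-vs-strength trade-off the entry mentions.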
[Steve Henson] *) Add TLS v1.2 server support for client authentication. [Steve Henson] *) Add support for FIPS mode in ssl library: disable SSLv3, non-FIPS ciphers and MD5. [Steve Henson] *) Functions FIPS_mode_set() and FIPS_mode() which call the underlying FIPS module versions. [Steve Henson] *) Add TLS v1.2 client side support for client authentication. Keep cache of handshake records longer as we don't know the hash algorithm to use until after the certificate request message is received. [Steve Henson] *) Initial TLS v1.2 client support. Add a default signature algorithms extension including all the algorithms we support. Parse new signature format in client key exchange. Relax some ECC signing restrictions for TLS v1.2 as indicated in RFC5246. [Steve Henson] *) Add server support for TLS v1.2 signature algorithms extension. Switch to new signature format when needed using client digest preference. All server ciphersuites should now work correctly in TLS v1.2. No client support yet and no support for client certificates. [Steve Henson] *) Initial TLS v1.2 support. Add new SHA256 digest to ssl code, switch to SHA256 for PRF when using TLS v1.2 and later. Add new SHA256 based ciphersuites. At present only RSA key exchange ciphersuites work with TLS v1.2. Add new option for TLS v1.2 replacing the old and obsolete SSL_OP_PKCS1_CHECK flags with SSL_OP_NO_TLSv1_2. New TLSv1.2 methods and version checking. [Steve Henson] *) New option OPENSSL_NO_SSL_INTERN. If an application can be compiled with this defined it will not be affected by any changes to ssl internal structures. Add several utility functions to allow openssl application to work with OPENSSL_NO_SSL_INTERN defined. [Steve Henson] *) Add SRP support. [Tom Wu <tjw@cs.stanford.edu> and Ben Laurie] *) Add functions to copy EVP_PKEY_METHOD and retrieve flags and id. [Steve Henson] *) Permit abbreviated handshakes when renegotiating using the function SSL_renegotiate_abbreviated().
[Robin Seggelmann <seggelmann@fh-muenster.de>] *) Add call to ENGINE_register_all_complete() to ENGINE_load_builtin_engines(), so some implementations get used automatically instead of needing explicit application support. [Steve Henson] *) Add support for TLS key exporter as described in RFC5705. [Robin Seggelmann <seggelmann@fh-muenster.de>, Steve Henson] *) Initial TLSv1.1 support. Since TLSv1.1 is very similar to TLS v1.0 only a few changes are required: Add SSL_OP_NO_TLSv1_1 flag. Add TLSv1_1 methods. Update version checking logic to handle version 1.1. Add explicit IV handling (ported from DTLS code). Add command line options to s_client/s_server. [Steve Henson] Changes between 1.0.0g and 1.0.0h [12 Mar 2012] *) Fix MMA (Bleichenbacher's attack on PKCS #1 v1.5 RSA padding) weakness in CMS and PKCS7 code. When RSA decryption fails use a random key for content decryption and always return the same error. Note: this attack needs on average 2^20 messages so it only affects automated senders. The old behaviour can be re-enabled in the CMS code by setting the CMS_DEBUG_DECRYPT flag: this is useful for debugging and testing where an MMA defence is not necessary. Thanks to Ivan Nestlerode <inestlerode@us.ibm.com> for discovering this bug. (CVE-2012-0884) [Steve Henson] Changes between 1.0.0f and 1.0.0g [18 Jan 2012] *) Fix for DTLS DoS issue introduced by the fix for CVE-2011-4108. (CVE-2012-0050) [Antonio Martin] Changes between 1.0.0e and 1.0.0f [4 Jan 2012] *) Fix plaintext recovery attack against the OpenSSL implementation of DTLS, which exploited timing differences arising during decryption processing. (CVE-2011-4108) [Robin Seggelmann, Michael Tuexen] *) Clear bytes used for block padding of SSL 3.0 records. (CVE-2011-4576) [Adam Langley (Google)] *) Only allow one SGC handshake restart for SSL/TLS. Thanks to George Kadianakis <desnacked@gmail.com> for discovering this issue and Adam Langley for preparing the fix. (CVE-2011-4619) [Adam Langley (Google)] *) Check parameters are not NULL in GOST ENGINE. (CVE-2012-0027) [Andrey Kulikov <amdeich@gmail.com>] *) Prevent malformed RFC3779 data triggering an assertion failure. Thanks to Andrew Chi, BBN Technologies, for discovering the flaw and Rob Austein <sra@hactrn.net> for fixing it. (CVE-2011-4577) [Rob Austein <sra@hactrn.net>] *) Improved PRNG seeding for VOS. [Paul Green <Paul.Green@stratus.com>] *) Fix ssl_ciph.c set-up race. [Adam Langley (Google)] *) Fix spurious failures in ecdsatest.c.
[Emilia Käsper (Google)] *) Fix the BIO_f_buffer() implementation (which was mixing different interpretations of the '..._len' fields). [Adam Langley (Google)] *) Fix handling of BN_BLINDING: now BN_BLINDING_convert_ex (rather than BN_BLINDING_invert_ex) calls BN_BLINDING_update, ensuring that concurrent threads won't reuse the same blinding coefficients. This also avoids the need to obtain the CRYPTO_LOCK_RSA_BLINDING lock to call BN_BLINDING_invert_ex, and avoids one use of BN_BLINDING_update for each BN_BLINDING structure (previously, the last update always remained unused). [Emilia Käsper (Google)] *) In ssl3_clear, preserve s3->init_extra along with s3->rbuf. [Bob Buckholz (Google)] Changes between 1.0.0d and 1.0.0e [6 Sep 2011] Changes between 1.0.0c and 1.0.0d [8 Feb 2011] *) Fix parsing of OCSP stapling ClientHello extension. CVE-2011-0014 [Neel Mehta, Adam Langley, Bodo Moeller (Google)] *) Fix bug in string printing code: if *any* escaping is enabled we must escape the escape character (backslash) or the resulting string is ambiguous. [Steve Henson] Changes between 1.0.0b and 1.0.0c [2 Dec 2010] *) Disable code workaround for ancient and obsolete Netscape browsers and servers: an attacker can use it in a ciphersuite downgrade attack. Thanks to Martin Rex for discovering this bug. CVE-2010-4180 [Steve Henson] *) Fixed J-PAKE implementation error, originally discovered by Sebastien Martini, further info and confirmation from Stefan Arentz and Feng Hao. Note that this fix is a security fix. CVE-2010-4252 [Ben Laurie] Changes between 1.0.0a and 1.0.0b [16 Nov 2010] *) Fix extension code to avoid race conditions which can result in a buffer overrun vulnerability: resumed sessions must not be modified as they can be shared by multiple threads. CVE-2010-3864 [Steve Henson] *) Fix WIN32 build system to correctly link an ENGINE directory into a DLL.
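The string-printing escaping fix above can be illustrated with a toy escaper: unless the escape character (backslash) is itself escaped, decoding is lossy, because a literal backslash in the input is indistinguishable from an escape sequence. A minimal sketch (illustrative Python, not OpenSSL's code):

```python
def naive_escape(s: str) -> str:
    # Buggy: escapes the quote but forgets the escape character itself.
    return s.replace('"', r'\"')

def proper_escape(s: str) -> str:
    # Correct: escape the backslash first, then the quote.
    return s.replace('\\', r'\\').replace('"', r'\"')

def unescape(s: str) -> str:
    out, i = [], 0
    while i < len(s):
        if s[i] == '\\':   # escape char: next char is literal
            i += 1
        out.append(s[i])
        i += 1
    return ''.join(out)

s = r'a\b'  # input containing a literal backslash
assert unescape(proper_escape(s)) == s   # round-trips
assert unescape(naive_escape(s)) == 'ab' # backslash silently consumed
```

With the naive escaper the decoder treats the literal backslash as an escape and drops it; escaping the backslash removes the ambiguity.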
[Steve Henson] Changes between 1.0.0 and 1.0.0a [01 Jun 2010] *) Check return value of int_rsa_verify in pkey_rsa_verifyrecover (CVE-2010-1633) [Steve Henson, Peter-Michael Hager <hager@dortmund.net>] Changes between 0.9.8n and 1.0.0 [29 Mar 2010] *) Add "missing" function EVP_CIPHER_CTX_copy(). This copies a cipher context. The operation can be customised via the ctrl mechanism in case ENGINEs want to include additional functionality. [Steve Henson] *) Tolerate yet another broken PKCS#8 key format: private key value negative. [Steve Henson] *) Add new -subject_hash_old and -issuer_hash_old options to x509 utility to output hashes compatible with older versions of OpenSSL. [Willy Weisz <weisz@vcpc.univie.ac.at>] *) Fix compression algorithm handling: if resuming a session use the compression algorithm of the resumed session instead of determining it from client hello again. Don't allow server to change algorithm. [Steve Henson] *) Add load_crls() function to apps tidying load_certs() too. Add option to verify utility to allow additional CRLs to be included. [Steve Henson] *) Update OCSP request code to permit adding custom headers to the request: some responders need this. [Steve Henson] *) The function EVP_PKEY_sign() returns <=0 on error: check return code correctly. [Julia Lawall <julia@diku.dk>] *) Update verify callback code in apps/s_cb.c and apps/verify.c, it needlessly dereferenced structures, used obsolete functions and didn't handle all updated verify codes correctly. [Steve Henson] *) Disable MD2 in the default configuration. [Steve Henson] *) In BIO_pop() and BIO_push() use the ctrl argument (which was NULL) to indicate the initial BIO being pushed or popped. This makes it possible to determine whether the BIO is the one explicitly called or as a result of the ctrl being passed down the chain. Fix BIO_pop() and SSL BIOs so it handles reference counts correctly and doesn't zero out the I/O bio when it is not being explicitly popped. 
WARNING: applications which included workarounds for the old buggy behaviour will need to be modified or they could free up already freed BIOs. [Steve Henson] *) Extend the uni2asc/asc2uni => OPENSSL_uni2asc/OPENSSL_asc2uni renaming to all platforms (within the 0.9.8 branch, this was done conditionally on Netware platforms to avoid a name clash). [Guenter <lists@gknw.net>] *) Add ECDHE and PSK support to DTLS. [Michael Tuexen <tuexen@fh-muenster.de>] *) Add CHECKED_STACK_OF macro to safestack.h, otherwise safestack can't be used in C++. [Steve Henson] *) Add "missing" function EVP_MD_flags() (without this the only way to retrieve a digest's flags is by accessing the structure directly). Update EVP_MD_do_all*() and EVP_CIPHER_do_all*() to include the name a digest or cipher is registered as in the "from" argument. Print out all registered digests in the dgst usage message instead of manually attempting to work them out. [Steve Henson] *) If no SSLv2 ciphers are used don't use an SSLv2 compatible client hello: this allows the use of compression and extensions. Change default cipher string to remove SSLv2 ciphersuites. This effectively avoids ancient SSLv2 by default unless an application cipher string requests it. [Steve Henson] *) Alter match criteria in PKCS12_parse(). It used to try to use local key ids to find matching certificates and keys but some PKCS#12 files don't follow the (somewhat unwritten) rules and this strategy fails. Now just gather all certificates together and the first private key then look for the first certificate that matches the key. [Steve Henson] *) Support use of registered digest and cipher names for dgst and cipher commands instead of having to add each one as a special case. So now you can do: openssl sha256 foo as well as: openssl dgst -sha256 foo and this works for ENGINE based algorithms too. [Steve Henson] *) Update Gost ENGINE to support parameter files. [Victor B. Wagner <vitus@cryptocom.ru>] *) Support GeneralizedTime in ca utility.
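The registered-name dispatch described in the dgst entry above (any registered digest name becomes a usable command, rather than each being a special case) can be mimicked with hashlib's own name registry. A toy analogy in Python, not the openssl tool's code:

```python
import hashlib

def digest_command(name: str, data: bytes) -> str:
    """Dispatch on a registered digest name, the way the openssl tool
    now accepts `openssl sha256 foo` as well as `openssl dgst -sha256 foo`:
    any name in the registry works without per-algorithm special cases."""
    if name not in hashlib.algorithms_available:
        raise ValueError(f"unknown digest: {name}")
    return hashlib.new(name, data).hexdigest()

# Any registered name dispatches to the same generic code path.
assert digest_command("sha256", b"foo") == hashlib.sha256(b"foo").hexdigest()
assert digest_command("md5", b"foo") == hashlib.md5(b"foo").hexdigest()
```

The point of the design is the same in both cases: the lookup table, not the command parser, decides which algorithms exist.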
[Oliver Martin <oliver@volatilevoid.net>, Steve Henson] *) Make PKCS#8 the default write format for private keys, replacing the traditional format. This form is standardised, more secure and doesn't include an implicit MD5 dependency. [Steve Henson] *) Add a $gcc_devteam_warn option to Configure. The idea is that any code committed to OpenSSL should pass this lot as a minimum. [Steve Henson] *) Add session ticket override functionality for use by EAP-FAST. [Jouni Malinen <j@w1.fi>] *) Modify HMAC functions to return a value. Since these can be implemented in an ENGINE errors can occur. [Steve Henson] *) Type-checked OBJ_bsearch_ex. [Ben Laurie] *) Type-checked OBJ_bsearch. Also some constification necessitated by type-checking. Still to come: TXT_DB, bsearch(?), OBJ_bsearch_ex, qsort, CRYPTO_EX_DATA, ASN1_VALUE, ASN1_STRING, CONF_VALUE. [Ben Laurie] *) New function OPENSSL_gmtime_adj() to add a specific number of days and seconds to a tm structure directly, instead of going through OS specific date routines. This avoids any issues with OS routines such as the year 2038 bug. New *_adj() functions for ASN1 time structures and X509_time_adj_ex() to cover the extended range. The existing X509_time_adj() is still usable and will no longer have any date issues. [Steve Henson] *) Delta CRL support. New use deltas option which will attempt to locate and search any appropriate delta CRLs available. This work was sponsored by Google. [Steve Henson] *) Support for CRLs partitioned by reason code. Reorganise CRL processing code and add additional score elements. Validate alternate CRL paths as part of the CRL checking and indicate a new error "CRL path validation error" in this case. Applications wanting additional details can use the verify callback and check the new "parent" field. If this is not NULL CRL path validation is taking place. Existing applications won't see this because it requires extended CRL support which is off by default.
This work was sponsored by Google. [Steve Henson] *) Support for freshest CRL extension. This work was sponsored by Google. [Steve Henson] *) Initial indirect CRL support. Currently only supported in the CRLs passed directly and not via lookup. Process certificate issuer CRL entry extension and lookup CRL entries by both issuer name and serial number. Check and process CRL issuer entry in IDP extension. This work was sponsored by Google. [Steve Henson] *) Add support for distinct certificate and CRL paths. The CRL issuer certificate is validated separately in this case. Only enabled if an extended CRL support flag is set: this flag will enable additional CRL functionality in future. This work was sponsored by Google. [Steve Henson] *) Add support for policy mappings extension. This work was sponsored by Google. [Steve Henson] *) Fixes to pathlength constraint, self issued certificate handling, policy processing to align with RFC3280 and PKITS tests. This work was sponsored by Google. [Steve Henson] *) Support for name constraints certificate extension. DN, email, DNS and URI types are currently supported. This work was sponsored by Google. [Steve Henson] *) To cater for systems that provide a pointer-based thread ID rather than numeric, deprecate the current numeric thread ID mechanism and replace it with a structure and associated callback type. This mechanism allows a numeric "hash" to be extracted from a thread ID in either case, and on platforms where pointers are larger than 'long', mixing is done to help ensure the numeric 'hash' is usable even if it can't be guaranteed unique. The default mechanism is to use "&errno" as a pointer-based thread ID to distinguish between threads. Applications that want to provide their own thread IDs should now use CRYPTO_THREADID_set_callback() to register a callback that will call either CRYPTO_THREADID_set_numeric() or CRYPTO_THREADID_set_pointer().
Note that ERR_remove_state() is now deprecated, because it is tied to the assumption that thread IDs are numeric. ERR_remove_state(0) to free the current thread's error state should be replaced by ERR_remove_thread_state(NULL). (This new approach replaces the functions CRYPTO_set_idptr_callback(), CRYPTO_get_idptr_callback(), and CRYPTO_thread_idptr() that existed in OpenSSL 0.9.9-dev between June 2006 and August 2008. Also, if an application was previously providing a numeric thread callback that was inappropriate for distinguishing threads, then uniqueness might have been obtained with &errno that happened immediately in the intermediate development versions of OpenSSL; this is no longer the case, the numeric thread callback will now override the automatic use of &errno.) [Geoff Thorpe, with help from Bodo Moeller] *) Initial support for different CRL issuing certificates. This covers a simple case where the self issued certificates in the chain exist and the real CRL issuer is higher in the existing chain. This work was sponsored by Google. [Steve Henson] *) Removed effectively defunct crypto/store from the build. [Ben Laurie] *) Revamp of STACK to provide stronger type-checking. Still to come: TXT_DB, bsearch(?), OBJ_bsearch, qsort, CRYPTO_EX_DATA, ASN1_VALUE, ASN1_STRING, CONF_VALUE. [Ben Laurie] *) Add a new SSL_MODE_RELEASE_BUFFERS mode flag to release unused buffer RAM on SSL connections. This option can save about 34k per idle SSL. [Nick Mathewson] *) Revamp of LHASH to provide stronger type-checking. Still to come: STACK, TXT_DB, bsearch, qsort. [Ben Laurie] *) Initial support for Cryptographic Message Syntax (aka CMS) based on RFC3850, RFC3851 and RFC3852. New cms directory and cms utility, support for data, signedData, compressedData, digestedData and encryptedData, envelopedData types included. Scripts to check against RFC4134 examples draft and interop and consistency checks of many content types and variants. 
[Steve Henson] *) Add options to enc utility to support use of zlib compression BIO. [Steve Henson] *) Extend mk1mf to support importing of options and assembly language files from Configure script, currently only included in VC-WIN32. The assembly language rules can now optionally generate the source files from the associated perl scripts. [Steve Henson] *) Implement remaining functionality needed to support GOST ciphersuites. Interop testing has been performed using CryptoPro implementations. [Victor B. Wagner <vitus@cryptocom.ru>] *) s390x assembler pack. [Andy Polyakov] *) ARMv4 assembler pack. ARMv4 refers to v4 and later ISA, not CPU "family." [Andy Polyakov] *) Implement Opaque PRF Input TLS extension as specified in draft-rescorla-tls-opaque-prf-input-00.txt. Since this is not an official specification yet and no extension type assignment by IANA exists, this extension (for now) will have to be explicitly enabled when building OpenSSL by providing the extension number to use. For example, specify an option -DTLSEXT_TYPE_opaque_prf_input=0x9527 to the "config" or "Configure" script to enable the extension, assuming extension number 0x9527 (which is a completely arbitrary and unofficial assignment based on the MD5 hash of the Internet Draft). Note that by doing so, you potentially lose interoperability with other TLS implementations since these might be using the same extension number for other purposes. SSL_set_tlsext_opaque_prf_input(ssl, src, len) is used to set the opaque PRF input value to use in the handshake. This will create an internal copy of the length-'len' string at 'src', and will return non-zero for success.
To get more control and flexibility, provide a callback function by using SSL_CTX_set_tlsext_opaque_prf_input_callback(ctx, cb) SSL_CTX_set_tlsext_opaque_prf_input_callback_arg(ctx, arg) where int (*cb)(SSL *, void *peerinput, size_t len, void *arg); void *arg; Callback function 'cb' will be called in handshakes, and is expected to use SSL_set_tlsext_opaque_prf_input() as appropriate. Argument 'arg' is for application purposes (the value as given to SSL_CTX_set_tlsext_opaque_prf_input_callback_arg() will directly be provided to the callback function). The callback function has to return non-zero to report success: usually 1 to use opaque PRF input just if possible, or 2 to enforce use of the opaque PRF input. In the latter case, the library will abort the handshake if opaque PRF input is not successfully negotiated. Arguments 'peerinput' and 'len' given to the callback function will always be NULL and 0 in the case of a client. A server will see the client's opaque PRF input through these variables if available (NULL and 0 otherwise). Note that if the server provides an opaque PRF input, the length must be the same as the length of the client's opaque PRF input. Note that the callback function will only be called when creating a new session (session resumption can resume whatever was previously negotiated), and will not be called in SSL 2.0 handshakes; thus, SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2) or SSL_set_options(ssl, SSL_OP_NO_SSLv2) is especially recommended for applications that need to enforce opaque PRF input. [Bodo Moeller] *) Update ssl code to support digests other than SHA1+MD5 for handshake MAC. [Victor B. Wagner <vitus@cryptocom.ru>] *) Final changes to avoid use of pointer pointer casts in OpenSSL. OpenSSL should now compile cleanly on gcc 4.2 [Peter Hartley <pdh@utter.chaos.org.uk>, Steve Henson] *) Update SSL library to use new EVP_PKEY MAC API.
Include generic MAC support including streaming MAC support: this is required for GOST ciphersuite support. [Victor B. Wagner <vitus@cryptocom.ru>, Steve Henson] *) Add option -stream to use PKCS#7 streaming in smime utility. New function i2d_PKCS7_bio_stream() and PEM_write_PKCS7_bio_stream() to output in BER and PEM format. [Steve Henson] *) Experimental support for use of HMAC via EVP_PKEY interface. This allows HMAC to be handled via the EVP_DigestSign*() interface. The EVP_PKEY "key" in this case is the HMAC key, potentially allowing ENGINE support for HMAC keys which are unextractable. New -mac and -macopt options to dgst utility. [Steve Henson] *) New option -sigopt to dgst utility. Update dgst to use EVP_Digest{Sign,Verify}*. These two changes make it possible to use alternative signing parameters such as X9.31 or PSS in the dgst utility. [Steve Henson] *) Change ssl_cipher_apply_rule(), the internal function that does the work each time a ciphersuite string requests enabling ("foo+bar"), moving ("+foo+bar"), disabling ("-foo+bar"), or removing ("!foo+bar") a class of ciphersuites: Now it maintains the order of disabled ciphersuites such that those ciphersuites that most recently went from enabled to disabled not only stay in order with respect to each other, but also have higher priority than other disabled ciphersuites the next time ciphersuites are enabled again. This means that you can now say, e.g., "PSK:-PSK:HIGH" to enable the same ciphersuites as with "HIGH" alone, but in a specific order where the PSK ciphersuites come first (since they are the most recently disabled ciphersuites when "HIGH" is parsed). Also, change ssl_create_cipher_list() (using this new functionality) such that between otherwise identical ciphersuites, ephemeral ECDH is preferred over ephemeral DH in the default order. [Bodo Moeller] *) Change ssl_create_cipher_list() so that it automatically arranges the ciphersuites in reasonable order before starting to process the rule string.
Thus, the definition for "DEFAULT" (SSL_DEFAULT_CIPHER_LIST) now is just "ALL:!aNULL:!eNULL", but remains equivalent to "AES:ALL:!aNULL:!eNULL:+aECDH:+kRSA:+RC4:@STRENGTH". This makes it much easier to arrive at a reasonable default order in applications for which anonymous ciphers are OK (meaning that you can't actually use DEFAULT). [Bodo Moeller; suggested by Victor Duchovni] *) Split the SSL/TLS algorithm mask (as used for ciphersuite string processing) into multiple integers instead of setting "SSL_MKEY_MASK" bits, "SSL_AUTH_MASK" bits, "SSL_ENC_MASK", "SSL_MAC_MASK", and "SSL_SSL_MASK" bits all in a single integer. (These masks as well as the individual bit definitions are hidden away into the non-exported interface ssl/ssl_locl.h, so this change to the definition of the SSL_CIPHER structure shouldn't affect applications.) This gives us more bits for each of these categories, so there is no longer a need to coagulate AES128 and AES256 into a single algorithm bit, and to coagulate Camellia128 and Camellia256 into a single algorithm bit, which has led to all kinds of kludges. Thus, among other things, the kludge introduced in 0.9.7m and 0.9.8e for masking out AES256 independently of AES128 or masking out Camellia256 independently of AES256 is not needed here in 0.9.9. With the change, we also introduce new ciphersuite aliases that so far were missing: "AES128", "AES256", "CAMELLIA128", and "CAMELLIA256". [Bodo Moeller] *) Add support for dsa-with-SHA224 and dsa-with-SHA256. Use the leftmost N bytes of the signature input if the input is larger than the prime q (with N being the size in bytes of q). [Nils Larsch] *) Very *very* experimental PKCS#7 streaming encoder support. Nothing uses it yet and it is largely untested. [Steve Henson] *) Add support for the ecdsa-with-SHA224/256/384/512 signature types. [Nils Larsch] *) Initial incomplete changes to avoid need for function casts in OpenSSL; some compilers (gcc 4.2 and later) reject their use.
     Safestack is reimplemented. Update ASN1 to avoid use of legacy
     functions.
     [Steve Henson]

  *) Win32/64 targets are linked with Winsock2.
     [Andy Polyakov]

  *) Add an X509_CRL_METHOD structure to allow CRL processing to be
     redirected to external functions. This can be used to increase CRL
     handling efficiency especially when CRLs are very large by (for example)
     storing the CRL revoked certificates in a database.
     [Steve Henson]

  *) Overhaul of by_dir code. Add support for dynamic loading of CRLs so new
     CRLs added to a directory can be used. New command line option
     -verify_return_error to s_client and s_server. This causes real errors
     to be returned by the verify callback instead of carrying on no matter
     what. This reflects the way a "real world" verify callback would behave.
     [Steve Henson]

  *) GOST engine, supporting several GOST algorithms and public key formats.
     Kindly donated by Cryptocom.
     [Cryptocom]

  *) Partial support for Issuing Distribution Point CRL extension. CRLs
     partitioned by DP are handled but no indirect CRL or reason partitioning
     (yet). Complete overhaul of CRL handling: now the most suitable CRL is
     selected via a scoring technique which handles IDP and AKID in CRLs.
     [Steve Henson]

  *) New X509_STORE_CTX callbacks lookup_crls() and lookup_certs() which will
     ultimately be used for all verify operations: this will remove the
     X509_STORE dependency on certificate verification and allow alternative
     lookup methods. X509_STORE based implementations of these two callbacks.
     [Steve Henson]

  *) Allow multiple CRLs to exist in an X509_STORE with matching issuer
     names. Modify get_crl() to find a valid (unexpired) CRL if possible.
     [Steve Henson]

  *) New function X509_CRL_match() to check if two CRLs are identical.
     Normally this would be called X509_CRL_cmp() but that name is already
     used by a function that just compares CRL issuer names. Cache several
     CRL extensions in X509_CRL structure and cache CRLDP in X509.
     [Steve Henson]

  *) Store a "canonical" representation of X509_NAME structure (ASN1 Name):
     this maps equivalent X509_NAME structures into a consistent structure.
     Name comparison can then be performed rapidly using memcmp().
     [Steve Henson]

  *) Non-blocking OCSP request processing. Add -timeout option to ocsp
     utility.
     [Steve Henson]

  *) Allow digests to supply their own micalg string for S/MIME type using
     the ctrl EVP_MD_CTRL_MICALG.
     [Steve Henson]

  *) During PKCS7 signing pass the PKCS7 SignerInfo structure to the
     EVP_PKEY_METHOD before and after signing via the
     EVP_PKEY_CTRL_PKCS7_SIGN ctrl. It can then customise the structure
     before and/or after signing if necessary.
     [Steve Henson]

  *) New function OBJ_add_sigid() to allow application defined signature OIDs
     to be added to OpenSSL's internal tables. New function OBJ_sigid_free()
     to free up any added signature OIDs.
     [Steve Henson]

  *) New functions EVP_CIPHER_do_all(), EVP_CIPHER_do_all_sorted(),
     EVP_MD_do_all() and EVP_MD_do_all_sorted() to enumerate internal digest
     and cipher tables. New options added to openssl utility:
     list-message-digest-algorithms and list-cipher-algorithms.
     [Steve Henson]

  *) Change the array representation of binary polynomials: the list of
     degrees of non-zero coefficients is now terminated with -1. Previously
     it was terminated with 0, which was also part of the value; thus, the
     array representation was not applicable to polynomials where t^0 has
     coefficient zero. This change makes the array representation useful in a
     more general context.
     [Douglas Stebila]

  *) Various modifications and fixes to SSL/TLS cipher string handling. For
     ECC, the code now distinguishes between fixed ECDH with RSA certificates
     on the one hand and with ECDSA certificates on the other hand, since
     these are separate ciphersuites. The unused code for Fortezza
     ciphersuites has been removed.

     For consistency with EDH, ephemeral ECDH is now called "EECDH" (not
     "ECDHE").
     For consistency with the code for DH certificates, use of ECDH
     certificates is now considered ECDH authentication, not RSA or ECDSA
     authentication (the latter is merely the CA's signing algorithm and not
     actively used in the protocol).

     The temporary ciphersuite alias "ECCdraft" is no longer available, and
     ECC ciphersuites are no longer excluded from "ALL" and "DEFAULT". The
     following aliases now exist for RFC 4492 ciphersuites, most of these by
     analogy with the DH case:

         kECDHr - ECDH cert, signed with RSA
         kECDHe - ECDH cert, signed with ECDSA
         kECDH  - ECDH cert (signed with either RSA or ECDSA)
         kEECDH - ephemeral ECDH
         ECDH   - ECDH cert or ephemeral ECDH

         aECDH  - ECDH cert
         aECDSA - ECDSA cert
         ECDSA  - ECDSA cert

         AECDH  - anonymous ECDH
         EECDH  - non-anonymous ephemeral ECDH (equivalent to
                  "kEECDH:-AECDH")
     [Bodo Moeller]

  *) Add additional S/MIME capabilities for AES and GOST ciphers if
     supported. Use correct micalg parameters depending on digest(s) in
     signed message.
     [Steve Henson]

  *) Add engine support for EVP_PKEY_ASN1_METHOD. Add functions to process an
     ENGINE asn1 method. Support ENGINE lookups in the ASN1 code.
     [Steve Henson]

  *) Initial engine support for EVP_PKEY_METHOD. New functions to permit an
     engine to register a method. Add ENGINE lookups for methods and
     functional reference processing.
     [Steve Henson]

  *) New functions EVP_Digest{Sign,Verify}*. These are enhanced versions of
     EVP_{Sign,Verify}* which allow an application to customise the signature
     process.
     [Steve Henson]

  *) New -resign option to smime utility. This adds one or more signers to an
     existing PKCS#7 signedData structure. Also -md option to use an
     alternative message digest algorithm for signing.
     [Steve Henson]

  *) Tidy up PKCS#7 routines and add new functions to make it easier to
     create PKCS7 structures containing multiple signers. Update smime
     application to support multiple signers.
     [Steve Henson]

  *) New -macalg option to pkcs12 utility to allow setting of an alternative
     digest MAC.
     [Steve Henson]

  *) Initial support for PKCS#5 v2.0 PRFs other than default SHA1 HMAC.
     Reorganize PBE internals to lookup from a static table using NIDs, add
     support for HMAC PBE OID translation. Add a EVP_CIPHER ctrl,
     EVP_CTRL_PBE_PRF_NID: this allows a cipher to specify an alternative PRF
     which will be automatically used with PBES2.
     [Steve Henson]

  *) Replace the algorithm specific calls to generate keys in "req" with the
     new API.
     [Steve Henson]

  *) Update PKCS#7 enveloped data routines to use new API. This is now
     supported by any public key method supporting the encrypt operation. A
     ctrl is added to allow the public key algorithm to examine or modify the
     PKCS#7 RecipientInfo structure if it needs to: for RSA this is a no op.
     [Steve Henson]

  *) Add a ctrl to asn1 method to allow a public key algorithm to express a
     default digest type to use. In most cases this will be SHA1 but some
     algorithms (such as GOST) need to specify an alternative digest. The
     return value indicates how strong the preference is: 1 means optional
     and 2 is mandatory (that is, it is the only supported type). Modify
     ASN1_item_sign() to accept a NULL digest argument to indicate it should
     use the default md. Update openssl utilities to use the default digest
     type for signing if it is not explicitly indicated.
     [Steve Henson]

  *) Use OID cross reference table in ASN1_sign() and ASN1_verify(). New
     EVP_MD flag EVP_MD_FLAG_PKEY_METHOD_SIGNATURE. This uses the relevant
     signing method from the key type. This effectively removes the link
     between digests and public key types.
     [Steve Henson]

  *) Add an OID cross reference table and utility functions. Its purpose is
     to translate between signature OIDs such as SHA1WithrsaEncryption and
     SHA1, rsaEncryption. This will allow some of the algorithm specific
     hackery needed to use the correct OID to be removed.
     [Steve Henson]

  *) Remove algorithm specific dependencies when setting PKCS7_SIGNER_INFO
     structures for PKCS7_sign().
     They are now set up by the relevant public key ASN1 method.
     [Steve Henson]

  *) Add provisional EC pkey method with support for ECDSA and ECDH.
     [Steve Henson]

  *) Add support for key derivation (agreement) in the API, DH method and
     pkeyutl.
     [Steve Henson]

  *) Add DSA pkey method and DH pkey methods, extend DH ASN1 method to
     support public and private key formats. As a side effect these add
     additional command line functionality not previously available: DSA
     signatures can be generated and verified using pkeyutl and DH key
     support and generation in pkey, genpkey.
     [Steve Henson]

  *) BeOS support.
     [Oliver Tappe <zooey@hirschkaefer.de>]

  *) New make target "install_html_docs" installs HTML renditions of the
     manual pages.
     [Oliver Tappe <zooey@hirschkaefer.de>]

  *) New utility "genpkey". This is analogous to "genrsa" etc except it can
     generate keys for any algorithm. Extend and update EVP_PKEY_METHOD to
     support key and parameter generation and add initial key generation
     functionality for RSA.
     [Steve Henson]

  *) Add functions for main EVP_PKEY_method operations. The undocumented
     functions EVP_PKEY_{encrypt,decrypt} have been renamed to
     EVP_PKEY_{encrypt,decrypt}_old.
     [Steve Henson]

  *) Initial definitions for EVP_PKEY_METHOD. This will be a high level
     public key API; it doesn't do much yet.
     [Steve Henson]

  *) New function EVP_PKEY_asn1_get0_info() to retrieve information about
     public key algorithms. New option to openssl utility:
     "list-public-key-algorithms" to print out info.
     [Steve Henson]

  *) Implement the Supported Elliptic Curves Extension for ECC ciphersuites
     from draft-ietf-tls-ecc-12.txt.
     [Douglas Stebila]

  *) Don't free up OIDs in OBJ_cleanup() if they are in use by EVP_MD or
     EVP_CIPHER structures to avoid later problems in EVP_cleanup().
     [Steve Henson]

  *) New utilities pkey and pkeyparam. These are similar to algorithm
     specific utilities such as rsa, dsa, dsaparam etc except they process
     any key type.
     [Steve Henson]

  *) Transfer public key printing routines to EVP_PKEY_ASN1_METHOD. New
     functions EVP_PKEY_print_public(), EVP_PKEY_print_private(),
     EVP_PKEY_print_param() to print public key data from an EVP_PKEY
     structure.
     [Steve Henson]

  *) Initial support for pluggable public key ASN1. De-spaghettify the public
     key ASN1 handling. Move public and private key ASN1 handling to a new
     EVP_PKEY_ASN1_METHOD structure. Relocate algorithm specific handling to
     a single module within the relevant algorithm directory. Add functions
     to allow (near) opaque processing of public and private key structures.
     [Steve Henson]

  *) Implement the Supported Point Formats Extension for ECC ciphersuites
     from draft-ietf-tls-ecc-12.txt.
     [Douglas Stebila]

  *) Add initial support for RFC 4279 PSK TLS ciphersuites. Add members for
     the psk identity [hint] and the psk callback functions to the
     SSL_SESSION, SSL and SSL_CTX structure.

     New ciphersuites:
         PSK-RC4-SHA, PSK-3DES-EDE-CBC-SHA, PSK-AES128-CBC-SHA,
         PSK-AES256-CBC-SHA

     New functions:
         SSL_CTX_use_psk_identity_hint
         SSL_get_psk_identity_hint
         SSL_get_psk_identity
         SSL_use_psk_identity_hint
     [Mika Kousa and Pasi Eronen of Nokia Corporation]

  *) Add RFC 3161 compliant time stamp request creation, response generation
     and response verification functionality.
     [Zoltán Glózik <zglozik@opentsa.org>, The OpenTSA]

  *) Whirlpool hash implementation is added.
     [Andy Polyakov]

  *) BIGNUM code on 64-bit SPARCv9 targets is switched from bn(64,64) to
     bn(64,32). Because of instruction set limitations it doesn't have any
     negative impact on performance. This was done mostly in order to make it
     possible to share assembler modules, such as bn_mul_mont
     implementations, between 32- and 64-bit builds without hassle.
     [Andy Polyakov]

  *) Move code previously exiled into file crypto/ec/ec2_smpt.c to
     ec2_smpl.c, and no longer require the OPENSSL_EC_BIN_PT_COMP macro.
     [Bodo Moeller]

  *) New candidate for BIGNUM assembler implementation, bn_mul_mont, a
     dedicated Montgomery multiplication procedure, is introduced.
     BN_MONT_CTX is modified to allow bn_mul_mont to reach for higher
     "64-bit" performance on certain 32-bit targets.
     [Andy Polyakov]

  *) New option SSL_OP_NO_COMP to disable use of compression selectively in
     SSL structures. New SSL ctrl to set maximum send fragment size. Save
     memory by setting the I/O buffer sizes dynamically instead of using the
     maximum available value.
     [Steve Henson]

  *) New option -V for 'openssl ciphers'. This prints the ciphersuite code in
     addition to the text details.
     [Bodo Moeller]

  *) Very, very preliminary EXPERIMENTAL support for printing of general ASN1
     structures. This currently produces rather ugly output and doesn't
     handle several customised structures at all.
     [Steve Henson]

  *) Integrated support for PVK file format and some related formats such as
     MS PUBLICKEYBLOB and PRIVATEKEYBLOB. Command line switches to support
     these in the 'rsa' and 'dsa' utilities.
     [Steve Henson]

  *) Support for PKCS#1 RSAPublicKey format on rsa utility command line.
     [Steve Henson]

  *) Remove the ancient ASN1_METHOD code. This was only ever used in one
     place for the (very old) "NETSCAPE" format certificates which are now
     handled using new ASN1 code equivalents.
     [Steve Henson]

  *) Let the TLSv1_method() etc. functions return a 'const' SSL_METHOD
     pointer and make the SSL_METHOD parameter in SSL_CTX_new,
     SSL_CTX_set_ssl_version and SSL_set_ssl_method 'const'.
     [Nils Larsch]

  *) Modify CRL distribution points extension code to print out previously
     unsupported fields. Enhance extension setting code to allow setting of
     all fields.
     [Steve Henson]

  *) Add print and set support for Issuing Distribution Point CRL extension.
     [Steve Henson]

  *) Change 'Configure' script to enable Camellia by default.
     [NTT]

 Changes between 0.9.8m and 0.9.8n  [24 Mar 2010]

  *) When rejecting SSL/TLS records due to an incorrect version number, never
     update s->server with a new major version number. As of
     - OpenSSL 0.9.8m if 'short' is a 16-bit type,
     - OpenSSL 0.9.8f if 'short' is longer than 16 bits,
     the previous behavior could result in a read attempt at NULL when
     receiving specific incorrect SSL/TLS records once record payload
     protection is active. (CVE-2010-0740)
     [Bodo Moeller, Adam Langley <agl@chromium.org>]

  *) Fix for CVE-2010-0433 where some kerberos enabled versions of OpenSSL
     could be crashed if the relevant tables were not present (e.g.
     chrooted).
     [Tomas Hoger <thoger@redhat.com>]

 Changes between 0.9.8l and 0.9.8m  [25 Feb 2010]

  *) Always check bn_wexpend() return values for failure. (CVE-2009-3245)
     [Martin Olsson, Neel Mehta]

  *) Fix X509_STORE locking: Every 'objs' access requires a lock (to
     accommodate for stack sorting, always a write lock!).
     [Bodo Moeller]

  *) On some versions of WIN32 Heap32Next is very slow. This can cause
     excessive delays in the RAND_poll(): over a minute. As a workaround
     include a time check in the inner Heap32Next loop too.
     [Steve Henson]

  *) The code that handled flushing of data in SSL/TLS originally used the
     BIO_CTRL_INFO ctrl to see if any data was pending first. This caused the
     problem outlined in PR#1949. The fix suggested there however can trigger
     problems with buggy BIO_CTRL_WPENDING (e.g. some versions of Apache). So
     instead simplify the code to flush unconditionally. This should be fine
     since flushing with no data to flush is a no op.
     [Steve Henson]

  *) Handle TLS versions 2.0 and later properly and correctly use the highest
     version of TLS/SSL supported. Although TLS >= 2.0 is some way off
     ancient servers have a habit of sticking around for a while...
     [Steve Henson]

  *) [Steve Henson]

  *) Constify crypto/cast (i.e., <openssl/cast.h>): a CAST_KEY doesn't change
     when encrypting or decrypting.
     [Bodo Moeller]

  *) Add option SSL_OP_LEGACY_SERVER_CONNECT which will allow clients to
     connect and renegotiate with servers which do not support RI. Until RI
     is more widely deployed this option is enabled by default.
     [Steve Henson]

  *) Add "missing" ssl ctrls to clear options and mode.
     [Steve Henson]

  *) If a client attempts to renegotiate and doesn't support RI, respond with
     a no_renegotiation alert as required by RFC5746. Some renegotiating TLS
     clients will continue a connection gracefully when they receive the
     alert. Unfortunately OpenSSL mishandled this alert and would hang
     waiting for a server hello which it will never receive. Now we treat a
     received no_renegotiation alert as a fatal error. This is because
     applications requesting a renegotiation might well expect it to succeed
     and would have no code in place to handle the server denying it, so the
     only safe thing to do is to terminate the connection.
     [Steve Henson]

  *) Add ctrl macro SSL_get_secure_renegotiation_support() which returns 1 if
     the peer supports secure renegotiation and 0 otherwise. Print out peer
     renegotiation support in s_client/s_server.
     [Steve Henson]

  *) Replace the highly broken and deprecated SPKAC certification method with
     the updated NID creation version. This should correctly handle UTF8.
     [Steve Henson]

  *) Implement RFC5746. Re-enable renegotiation but require the extension as
     needed. Unfortunately, SSL3_FLAGS_ALLOW_UNSAFE_LEGACY_RENEGOTIATION
     turns out to be a bad idea. It has been replaced by
     SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION which can be set with
     SSL_CTX_set_options(). This is really not recommended unless you know
     what you are doing.
     [Eric Rescorla <ekr@networkresonance.com>, Ben Laurie, Steve Henson]

  *) Fixes to stateless session resumption handling. Use initial_ctx when
     issuing and attempting to decrypt tickets in case it has changed during
     servername handling.
     Use a non-zero length session ID when attempting stateless session
     resumption: this makes it possible to determine if a resumption has
     occurred immediately after receiving server hello (several places in
     OpenSSL subtly assume this) instead of later in the handshake.
     [Steve Henson]

  *) The functions ENGINE_ctrl(), OPENSSL_isservice(),
     CMS_get1_RecipientRequest() and RAND_bytes() can return <= 0 on error;
     fixes for a few places where the return code is not checked correctly.
     [Julia Lawall <julia@diku.dk>]

  *) Add --strict-warnings option to Configure script to include devteam
     warnings in other configurations.
     [Steve Henson]

  *) Add support for --libdir option and LIBDIR variable in makefiles. This
     makes it possible to install openssl libraries in locations which have
     names other than "lib", for example "/usr/lib64" which some systems
     need.
     [Steve Henson, based on patch from Jeremy Utley]

  *) Don't allow the use of leading 0x80 in OIDs. This is a violation of X690
     8.9.12 and can produce some misleading textual output of OIDs.
     [Steve Henson, reported by Dan Kaminsky]

  *) Delete MD2 from algorithm tables. This follows the recommendation in
     several standards that it is not used in new applications due to several
     cryptographic weaknesses. For binary compatibility reasons the MD2 API
     is still compiled in by default.
     [Steve Henson]

  *) Add compression id to {d2i,i2d}_SSL_SESSION so it is correctly saved and
     restored.
     [Steve Henson]

  *) Rename uni2asc and asc2uni functions to OPENSSL_uni2asc and
     OPENSSL_asc2uni conditionally on Netware platforms to avoid a name
     clash.
     [Guenter <lists@gknw.net>]

  *) Fix the server certificate chain building code to use
     X509_verify_cert(): it used to have an ad-hoc builder which was unable
     to cope with anything other than a simple chain.
     [David Woodhouse <dwmw2@infradead.org>, Steve Henson]

  *) Don't check self signed certificate signatures in X509_verify_cert() by
     default (a flag can override this): it just wastes time without adding
     any security.
     As a useful side effect self signed root CAs with non-FIPS digests are
     now usable in FIPS mode.
     [Steve Henson]

  *) In dtls1_process_out_of_seq_message() the check whether the current
     message is already buffered was missing. Memory was allocated for every
     new message, allowing an attacker to perform a denial of service attack
     by sending out of seq handshake messages until there is no memory left.
     Additionally every future message was buffered, even if the sequence
     number made no sense and would be part of another handshake. So only
     messages with sequence numbers less than 10 in advance will be buffered.
     (CVE-2009-1378)
     [Robin Seggelmann, discovered by Daniel Mentz]

  *) Records are buffered if they arrive with a future epoch to be processed
     after finishing the corresponding handshake. There is currently no
     limitation to this buffer, allowing an attacker to perform a DOS attack
     by sending records with future epochs until there is no memory left.
     This patch adds the pqueue_size() function to determine the size of a
     buffer and limits the record buffer to 100 entries. (CVE-2009-1377)
     [Robin Seggelmann, discovered by Daniel Mentz]

  *) Keep a copy of frag->msg_header.frag_len so it can be used after the
     parent structure is freed. (CVE-2009-1379)
     [Daniel Mentz]

  *) Handle non-blocking I/O properly in SSL_shutdown() call.
     [Darryl Miles <darryl-mailinglists@netbauds.net>]

  *) Add 2.5.4.* OIDs
     [Ilya O. <vrghost@gmail.com>]

 Changes between 0.9.8k and 0.9.8l  [5 Nov 2009]

  *) Disable renegotiation completely - this fixes a severe security problem
     (CVE-2009-3555) at the cost of breaking all renegotiation. Renegotiation
     can be re-enabled by setting
     SSL3_FLAGS_ALLOW_UNSAFE_LEGACY_RENEGOTIATION in s3->flags at run-time.
     This is really not recommended unless you know what you're doing.
     [Ben Laurie]

 Changes between 0.9.8j and 0.9.8k  [25 Mar 2009]

  *) Don't set val to NULL when freeing up structures, it is freed up by
     underlying code.
     If sizeof(void *) > sizeof(long) this can result in zeroing past the
     valid field. (CVE-2009-0789)
     [Paolo Ganci <Paolo.Ganci@AdNovum.CH>]

  *) Fix bug where return value of CMS_SignerInfo_verify_content() was not
     checked correctly. This would allow some invalid signed attributes to
     appear to verify correctly. (CVE-2009-0591)
     [Ivan Nestlerode <inestlerode@us.ibm.com>]

  *) Reject UniversalString and BMPString types with invalid lengths. This
     prevents a crash in ASN1_STRING_print_ex() which assumes the strings
     have a legal length. (CVE-2009-0590)
     [Steve Henson]

  *) Set S/MIME signing as the default purpose rather than setting it
     unconditionally. This allows applications to override it at the store
     level.
     [Steve Henson]

  *) Permit restricted recursion of ASN1 strings. This is needed in practice
     to handle some structures.
     [Steve Henson]

  *) Improve efficiency of mem_gets: don't search the whole buffer each time
     for a '\n'.
     [Jeremy Shapiro <jnshapir@us.ibm.com>]

  *) New -hex option for openssl rand.
     [Matthieu Herrb]

  *) Print out UTF8String and NumericString when parsing ASN1.
     [Steve Henson]

  *) Support NumericString type for name components.
     [Steve Henson]

  *) Allow CC in the environment to override the automatically chosen
     compiler. Note that nothing is done to ensure flags work with the chosen
     compiler.
     [Ben Laurie]

 Changes between 0.9.8i and 0.9.8j  [07 Jan 2009]

  *) Properly check EVP_VerifyFinal() and similar return values
     (CVE-2008-5077).
     [Ben Laurie, Bodo Moeller, Google Security Team]

  *) Enable TLS extensions by default.
     [Ben Laurie]

  *) Allow the CHIL engine to be loaded, whether the application is
     multithreaded or not. (This does not release the developer from the
     obligation to set up the dynamic locking callbacks.)
     [Sander Temme <sander@temme.net>]

  *) Use correct exit code if there is an error in dgst command.
     [Steve Henson; problem pointed out by Roland Dirlewanger]

  *) Tweak Configure so that you need to say "experimental-jpake" to enable
     JPAKE, and need to use -DOPENSSL_EXPERIMENTAL_JPAKE in applications.
     [Bodo Moeller]

  *) Add experimental JPAKE support, including demo authentication in
     s_client and s_server.
     [Ben Laurie]

  *) Set the comparison function in v3_addr_canonize().
     [Rob Austein <sra@hactrn.net>]

  *) Add support for XMPP STARTTLS in s_client.
     [Philip Paeps <philip@freebsd.org>]

  *) Change the server-side SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG behavior
     to ensure that even with this option, only ciphersuites in the server's
     preference list will be accepted. (Note that the option applies only
     when resuming a session, so the earlier behavior was just about the
     algorithm choice for symmetric cryptography.)
     [Bodo Moeller]

 Changes between 0.9.8h and 0.9.8i  [15 Sep 2008]

  *) Fix NULL pointer dereference if a DTLS server received ChangeCipherSpec
     as first record (CVE-2009-1386).
     [PR #1679]

  *) Fix a state transition in s3_srvr.c and d1_srvr.c (was using
     SSL3_ST_CW_CLNT_HELLO_B, should be ..._ST_SW_SRVR_...).
     [Nagendra Modadugu]

  *) The fix in 0.9.8c that supposedly got rid of unsafe double-checked
     locking was incomplete for RSA blinding, addressing just one layer of
     what turns out to have been doubly unsafe triple-checked locking. So now
     fix this for real by retiring the MONT_HELPER macro in
     crypto/rsa/rsa_eay.c.
     [Bodo Moeller; problem pointed out by Marius Schilder]

  *) Various precautionary measures:

     - Avoid size_t integer overflow in HASH_UPDATE (md32_common.h).

     - Avoid a buffer overflow in d2i_SSL_SESSION() (ssl_asn1.c). (NB: This
       would require knowledge of the secret session ticket key to exploit,
       in which case you'd be SOL either way.)

     - Change bn_nist.c so that it will properly handle input BIGNUMs outside
       the expected range.

     - Enforce the 'num' check in BN_div() (bn_div.c) for non-BN_DEBUG
       builds.
     [Neel Mehta, Bodo Moeller]

  *) Allow engines to be "soft loaded" - i.e. optionally don't die if the
     load fails. Useful for distros.
     [Ben Laurie and the FreeBSD team]

  *) Add support for Local Machine Keyset attribute in PKCS#12 files.
     [Steve Henson]

  *) Fix BN_GF2m_mod_arr() top-bit cleanup code.
     [Huang Ying]

  *) Expand ENGINE to support engine supplied SSL client certificate
     functions. This work was sponsored by Logica.
     [Steve Henson]

  *) Add CryptoAPI ENGINE to support use of RSA and DSA keys held in Windows
     keystores. Support for SSL/TLS client authentication too. Not compiled
     unless enable-capieng specified to Configure. This work was sponsored by
     Logica.
     [Steve Henson]

  *) Fix bug in X509_ATTRIBUTE creation: don't set attribute using
     ASN1_TYPE_set1 if MBSTRING flag set. This bug would crash certain
     attribute creation routines such as certificate requests and PKCS#12
     files.
     [Steve Henson]

 Changes between 0.9.8g and 0.9.8h  [28 May 2008]

  *) Fix flaw if 'Server Key exchange message' is omitted from a TLS
     handshake which could lead to a client crash as found using the
     Codenomicon TLS test suite (CVE-2008-1672).
     [Steve Henson, Mark Cox]

  *) Fix double free in TLS server name extensions which could lead to a
     remote crash found by Codenomicon TLS test suite (CVE-2008-0891).
     [Joe Orton]

  *) Clear error queue in SSL_CTX_use_certificate_chain_file(). Clear the
     error queue to ensure that error entries left from older function calls
     do not interfere with the correct operation.
     [Lutz Jaenicke, Erik de Castro Lopo]

  *) Remove root CA certificates of commercial CAs: The OpenSSL project does
     not recommend any specific CA and does not have any policy with respect
     to including or excluding any CA. Therefore it does not make any sense
     to ship an arbitrary selection of root CA certificates with the OpenSSL
     software.
     [Lutz Jaenicke]

  *) RSA OAEP patches to fix two separate invalid memory reads.
     The first one involves inputs when 'lzero' is greater than
     'SHA_DIGEST_LENGTH' (it would read about SHA_DIGEST_LENGTH bytes before
     the beginning of from). The second one involves inputs where the 'db'
     section contains nothing but zeroes (there is a one-byte invalid read
     after the end of 'db').
     [Ivan Nestlerode <inestlerode@us.ibm.com>]

  *) Partial backport from 0.9.9-dev: Introduce bn_mul_mont (dedicated
     Montgomery multiplication procedure) as a candidate for BIGNUM assembler
     implementation. While 0.9.9-dev uses assembler for various
     architectures, only x86_64 is available by default here in the 0.9.8
     branch, and 32-bit x86 is available through a compile-time setting.

     To try the 32-bit x86 assembler implementation, use Configure option
     "enable-montasm" (which exists only for this backport).

     As "enable-montasm" for 32-bit x86 disclaims code stability anyway, in
     this constellation we activate additional code backported from 0.9.9-dev
     for further performance improvements, namely BN_from_montgomery_word.
     (To enable this otherwise, e.g. x86_64, try
     "-DMONT_FROM_WORD___NON_DEFAULT_0_9_8_BUILD".)
     [Andy Polyakov (backport partially by Bodo Moeller)]

  *) Add TLS session ticket callback. This allows an application to set TLS
     ticket cipher and HMAC keys rather than relying on hardcoded fixed
     values. This is useful for key rollover, for example, where several key
     sets may exist with different names.
     [Steve Henson]

  *) Reverse ENGINE-internal logic for caching default ENGINE handles. This
     was broken until now in 0.9.8 releases, such that the only way a
     registered ENGINE could be used (assuming it initialises successfully on
     the host) was to explicitly set it as the default for the relevant
     algorithms. This is in contradiction with 0.9.7 behaviour and the
     documentation.
     With this fix, when an ENGINE is registered into a given algorithm's
     table of implementations, the 'uptodate' flag is reset so that
     auto-discovery will be used next time a new context for that algorithm
     attempts to select an implementation.
     [Ian Lister (tweaked by Geoff Thorpe)]

  *) Backport of CMS code to OpenSSL 0.9.8. This differs from the 0.9.9
     implementation in the following ways:

     Lack of EVP_PKEY_ASN1_METHOD means algorithm parameters have to be hard
     coded.

     Lack of BER streaming support means one pass streaming processing is
     only supported if data is detached: setting the streaming flag is
     ignored for embedded content.

     CMS support is disabled by default and must be explicitly enabled with
     the enable-cms configuration option.
     [Steve Henson]

  *) Update the GMP engine glue to do direct copies between BIGNUM and mpz_t
     when openssl and GMP use the same limb size. Otherwise the existing
     "conversion via a text string export" trick is still used.
     [Paul Sheer <paulsheer@gmail.com>]

  *) Zlib compression BIO. This is a filter BIO which compresses and
     uncompresses any data passed through it.
     [Steve Henson]

  *) Add AES_wrap_key() and AES_unwrap_key() functions to implement RFC3394
     compatible AES key wrapping.
     [Steve Henson]

  *) Add utility functions to handle ASN1 structures. ASN1_STRING_set0():
     sets string data without copying. X509_ALGOR_set0() and
     X509_ALGOR_get0(): set and retrieve X509_ALGOR (AlgorithmIdentifier)
     data. Attribute function X509at_get0_data_by_OBJ(): retrieves data from
     an X509_ATTRIBUTE structure optionally checking it occurs only once.
     ASN1_TYPE_set1(): set an ASN1_TYPE structure copying supplied data.
     [Steve Henson]

  *) Fix BN flag handling in RSA_eay_mod_exp() and BN_MONT_CTX_set() to get
     the expected BN_FLG_CONSTTIME behavior.
     [Bodo Moeller (Google)]

  *) Netware support:

     - fixed wrong usage of ioctlsocket() when build for LIBC BSD sockets
     - fixed do_tests.pl to run the test suite with CLIB builds too
       (CLIB_OPT)
     - added some more tests to do_tests.pl
     - fixed RunningProcess usage so that it works with newer LIBC NDKs too
     - removed usage of BN_LLONG for CLIB builds to avoid runtime dependency
     - added new Configure targets netware-clib-bsdsock, netware-clib-gcc,
       netware-clib-bsdsock-gcc, netware-libc-bsdsock-gcc
     - various changes to netware.pl to enable gcc-cross builds on Win32
       platform
     - changed crypto/bio/b_sock.c to work with macro functions (CLIB BSD)
     - various changes to fix missing prototype warnings
     - fixed x86nasm.pl to create correct asm files for NASM COFF output
     - added AES, WHIRLPOOL and CPUID assembler code to build files
     - added missing AES assembler make rules to mk1mf.pl
     - fixed order of includes in apps/ocsp.c so that e_os.h settings apply
     [Guenter Knauf <eflash@gmx.net>]

  *) Implement certificate status request TLS extension defined in RFC3546.
     A client can set the appropriate parameters and receive the encoded OCSP
     response via a callback. A server can query the supplied parameters and
     set the encoded OCSP response in the callback. Add simplified examples
     to s_client and s_server.
     [Steve Henson]

 Changes between 0.9.8f and 0.9.8g  [19 Oct 2007]

  *) Fix various bugs:
     + Binary incompatibility of ssl_ctx_st structure
     + DTLS interoperation with non-compliant servers
     + Don't call get_session_cb() without proposed session
     + Fix ia64 assembler code
     [Andy Polyakov, Steve Henson]

 Changes between 0.9.8e and 0.9.8f  [11 Oct 2007]

  *) DTLS Handshake overhaul. There were longstanding issues with the OpenSSL
     DTLS implementation, which were making it impossible for an RFC 4347
     compliant client to communicate with an OpenSSL server. Unfortunately
     just fixing these incompatibilities would "cut off" pre-0.9.8f clients.
To allow for hassle-free upgrade, the post-0.9.8e server keeps tolerating non-RFC-compliant syntax. The opposite is not true: a 0.9.8f client cannot communicate with an earlier server. This update even addresses CVE-2007-4995. [Andy Polyakov] *) Changes to avoid need for function casts in OpenSSL: some compilers (gcc 4.2 and later) reject their use. [Kurt Roeckx <kurt@roeckx.be>, Peter Hartley <pdh@utter.chaos.org.uk>, Steve Henson] *) Add AES and SSE2 assembly language support to VC++ build. [Steve Henson] *) Mitigate attack on final subtraction in Montgomery reduction. [Andy Polyakov] *) Fix crypto/ec/ec_mult.c to work properly with scalars of value 0 (which previously caused an internal error). [Bodo Moeller] *) Squeeze another 10% out of IGE mode when in != out. [Ben Laurie] *) AES IGE mode speedup. [Dean Gaudet (Google)] *) Add the Korean symmetric 128-bit cipher SEED (see) and add SEED ciphersuites from RFC 4162: TLS_RSA_WITH_SEED_CBC_SHA = "SEED-SHA". To minimize changes between patchlevels in the OpenSSL 0.9.8 series, SEED remains excluded from compilation unless OpenSSL is configured with 'enable-seed'. [KISA, Bodo Moeller] *) Mitigate branch prediction attacks, which can be practical if a single processor is shared, allowing a spy process to extract information. For detailed background information, see O. Aciicmez, S. Gueron, J.-P. Seifert, "New Branch Prediction Vulnerabilities in OpenSSL and Necessary Software Countermeasures". The core of the change is the new functions BN_div_no_branch() and BN_mod_inverse_no_branch(), variants of BN_div() and BN_mod_inverse(), respectively, which are slower but avoid the security-relevant conditional branches. These are automatically called by BN_div() and BN_mod_inverse() if the flag BN_FLG_CONSTTIME is set for one of the input BIGNUMs. Also, BN_is_bit_set() has been changed to remove a conditional branch.
BN_FLG_CONSTTIME is the new name for the previous BN_FLG_EXP_CONSTTIME flag, since it now affects more than just modular exponentiation. (Since OpenSSL 0.9.7h, setting this flag in the exponent causes BN_mod_exp_mont() to use the alternative implementation in BN_mod_exp_mont_consttime().) The old name remains as a deprecated alias. Similarly, RSA_FLAG_NO_EXP_CONSTTIME is replaced by a more general RSA_FLAG_NO_CONSTTIME flag since the RSA implementation now uses constant-time implementations for more than just exponentiation. Here too the old name is kept as a deprecated alias. BN_BLINDING_new() will now use BN_dup() for the modulus so that the BN_BLINDING structure gets an independent copy of the modulus. This means that the previous "BIGNUM *m" argument to BN_BLINDING_new() and to BN_BLINDING_create_param() now essentially becomes "const BIGNUM *m", although we can't actually change this in the header file before 0.9.9. It allows RSA_setup_blinding() to use BN_with_flags() on the modulus to enable BN_FLG_CONSTTIME. [Matthew D Wood (Intel Corp)] *) In the SSL/TLS server implementation, be strict about session ID context matching (which matters if an application uses a single external cache for different purposes). Previously, out-of-context reuse was forbidden only if SSL_VERIFY_PEER was set. This did ensure strict client verification, but meant that, with applications using a single external cache for quite different requirements, clients could circumvent ciphersuite restrictions for a given session ID context by starting a session in a different context. [Bodo Moeller] *) Include "!eNULL" in SSL_DEFAULT_CIPHER_LIST to make sure that a ciphersuite string such as "DEFAULT:RSA" cannot enable authentication-only ciphersuites. 
[Bodo Moeller] *) Update the SSL_get_shared_ciphers() fix for CVE-2006-3738, which was not complete and could lead to a possible single byte overflow (CVE-2007-5135). [Ben Laurie] Changes between 0.9.8d and 0.9.8e [23 Feb 2007] *) Since AES128 and AES256 (and similarly Camellia128 and Camellia256) share a single mask bit in the logic of ssl/ssl_ciph.c, the code for masking out disabled ciphers needs a kludge to work properly if AES128 is available and AES256 isn't (or if Camellia128 is available and Camellia256 isn't). [Victor Duchovni] *) Fix the BIT STRING encoding generated by crypto/ec/ec_asn1.c (within i2d_ECPrivateKey, i2d_ECPKParameters, i2d_ECParameters): When a point or a seed is encoded in a BIT STRING, we need to prevent the removal of trailing zero bits to get the proper DER encoding. (By default, crypto/asn1/a_bitstr.c assumes the case of a NamedBitList, for which trailing 0 bits need to be removed.) [Bodo Moeller] *) Add RFC 3779 support. [Rob Austein for ARIN, Ben Laurie] *) Load error codes if they are not already present instead of using a static variable. This allows them to be cleanly unloaded and reloaded. Improve header file function name parsing. [Steve Henson] *) Extend SMTP and IMAP protocol emulation in s_client to use EHLO or CAPABILITY handshake as required by RFCs. [Goetz Babin-Ebell] Changes between 0.9.8c and 0.9.8d [28 Sep 2006] *) Since 0.9.8b, ciphersuite strings naming explicit ciphersuites match only those. Before that, "AES256-SHA" would be interpreted as a pattern and match "AES128-SHA" too (since AES128-SHA got the same strength classification in 0.9.7h) as we currently only have a single AES bit in the ciphersuite description bitmap. That change, however, also applied to ciphersuite strings such as "RC4-MD5" that intentionally matched multiple ciphersuites -- namely, SSL 2.0 ciphersuites in addition to the more common ones from SSL 3.0/TLS 1.0.
So we change the selection algorithm again: Naming an explicit ciphersuite selects this one ciphersuite, and any other similar ciphersuite (same bitmap) from *other* protocol versions. Thus, "RC4-MD5" again will properly select both the SSL 2.0 ciphersuite and the SSL 3.0/TLS 1.0 ciphersuite. Since SSL 2.0 does not have any ciphersuites for which the 128/256 bit distinction would be relevant, this works for now. The proper fix will be to use different bits for AES128 and AES256, which would have avoided the problems from the beginning; however, bits are scarce, so we can only do this in a new release (not just a patchlevel) when we can change the SSL_CIPHER definition to split the single 'unsigned long mask' bitmap into multiple values to extend the available space. [Bodo Moeller] Changes between 0.9.8b and 0.9.8c [05 Sep 2006] *) Avoid PKCS #1 v1.5 signature attack discovered by Daniel Bleichenbacher (CVE-2006-4339) [Ben Laurie and Google Security Team] *) Add AES IGE and biIGE modes. [Ben Laurie] *) Change the Unix randomness entropy gathering to use poll() when possible instead of select(), since the latter has some undesirable limitations. [Darryl Miles via Richard Levitte and Bodo Moeller] *) Disable "ECCdraft" ciphersuites more thoroughly. Now special treatment in ssl/ssl_ciph.c makes sure that these ciphersuites cannot be implicitly activated as part of, e.g., the "AES" alias. However, please upgrade to OpenSSL 0.9.9[-dev] for non-experimental use of the ECC ciphersuites to get TLS extension support, which is required for curve and point format negotiation to avoid potential handshake problems. *) Add the symmetric cipher Camellia (128-bit, 192-bit, 256-bit key versions), which is now available for royalty-free use (see). Also, add Camellia TLS ciphersuites from RFC 4132. To minimize changes between patchlevels in the OpenSSL 0.9.8 series, Camellia remains excluded from compilation unless OpenSSL is configured with 'enable-camellia'.
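These opt-in algorithms ('enable-camellia' here, like 'enable-seed' and 'enable-cms' elsewhere in this log) are all switched on the same way at configure time. An illustrative build invocation, assuming a 0.9.8-series source tree:

```shell
# Illustrative only: enable the opt-in algorithms mentioned in this
# log when configuring an OpenSSL 0.9.8 source tree.
./config enable-camellia enable-seed enable-cms
make
```

Omitting a given enable-xxx flag leaves that algorithm compiled out, preserving binary compatibility across patchlevels.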
[NTT] *) Disable the padding bug check when compression is in use. The padding bug check assumes the first packet is of even length; this is not necessarily true if compression is enabled and can result in false positives causing handshake failure. The actual bug test is ancient code so it is hoped that implementations will either have fixed it by now or any which still have the bug do not support compression. [Steve Henson] Changes between 0.9.8a and 0.9.8b [04 May 2006] *) When applying a cipher rule check to see if string match is an explicit cipher suite and only match that one cipher suite if it is. [Steve Henson] *) Link in manifests for VC++ if needed. [Austin Ziegler <halostatue@gmail.com>] *) Update support for ECC-based TLS ciphersuites according to draft-ietf-tls-ecc-12.txt with proposed changes (but without TLS extensions, which are supported starting with the 0.9.9 branch, not in the OpenSSL 0.9.8 branch). [Douglas Stebila] *) New functions EVP_CIPHER_CTX_new() and EVP_CIPHER_CTX_free() to support opaque EVP_CIPHER_CTX handling. [Steve Henson] *) Fixes and enhancements to zlib compression code. We now only use "zlib1.dll" and use the default __cdecl calling convention on Win32 to conform with the standards mentioned here: Static zlib linking now works on Windows and the new --with-zlib-include and --with-zlib-lib options to Configure can be used to supply the location of the headers and library. Gracefully handle case where zlib library can't be loaded. [Steve Henson] *) Several fixes and enhancements to the OID generation code. The old code sometimes allowed invalid OIDs (1.X for X >= 40 for example), couldn't handle numbers larger than ULONG_MAX, truncated printing and had a non-standard OBJ_obj2txt() behaviour. [Steve Henson] *) Add support for building of engines under engine/ as shared libraries under VC++ build system. [Steve Henson] *) Corrected the numerous bugs in the Win32 path splitter in DSO.
Hopefully, we will not see any false combination of paths any more. [Richard Levitte] Changes between 0.9.8 and 0.9.8a [11 Oct 2005] *) Add two functions to clear and return the verify parameter flags. [Steve Henson] *) Keep cipherlists sorted in the source instead of sorting them at runtime, thus removing the need for a lock. [Nils Larsch] *) Avoid some small subgroup attacks in Diffie-Hellman. [Nick Mathewson and Ben Laurie] *) Add functions for well-known primes. [Nick Mathewson] *) Extended Windows CE support. [Satoshi Nakamura and Andy Polyakov] *) Initialize SSL_METHOD structures at compile time instead of during runtime, thus removing the need for a lock. [Steve Henson] *) Make PKCS7_decrypt() work even if no certificate is supplied by attempting to decrypt each encrypted key in turn. Add support to smime utility. [Steve Henson] Changes between 0.9.7h and 0.9.8 [05 Jul 2005] [NB: OpenSSL 0.9.7i and later 0.9.7 patch levels were released after OpenSSL 0.9.8.] *) Add libcrypto.pc and libssl.pc for those who feel they need them. [Richard Levitte] *) Change CA.sh and CA.pl so they don't bundle the CSR and the private key into the same file any more. [Richard Levitte] *) Add initial support for Win64, both IA64 and AMD64/x64 flavors. [Andy Polyakov] *) Add -utf8 command line and config file option to 'ca'. [Stefan <stf@udoma.org>] *) Removed the macro des_crypt(), as it seems to conflict with some libraries. Use DES_crypt(). [Richard Levitte] *) Correct naming of the 'chil' and '4758cca' ENGINEs. This involves renaming the source and generated shared-libs for both. The engines will accept the corrected or legacy ids ('ncipher' and '4758_cca' respectively) when binding. NB, this only applies when building 'shared'. [Corinna Vinschen <vinschen@redhat.com> and Geoff Thorpe] *) Add attribute functions to EVP_PKEY structure. Modify PKCS12_create() to recognize a CSP name attribute and use it. Make -CSP option work again in pkcs12 utility.
[Steve Henson] *) Add new functionality to the bn blinding code:
- automatic re-creation of the BN_BLINDING parameters after a fixed number of uses (currently 32)
- add new function for parameter creation
- introduce flags to control the update behaviour of the BN_BLINDING parameters
- hide BN_BLINDING structure
Add a second BN_BLINDING slot to the RSA structure to improve performance when a single RSA object is shared among several threads. [Nils Larsch] *) Add support for DTLS. [Nagendra Modadugu <nagendra@cs.stanford.edu> and Ben Laurie] *) Add support for DER encoded private keys (SSL_FILETYPE_ASN1) to SSL_CTX_use_PrivateKey_file() and SSL_use_PrivateKey_file() [Walter Goulet] *) Remove buggy and incomplete DH cert support from ssl/ssl_rsa.c and ssl/s3_both.c [Nils Larsch] *) Use SHA-1 instead of MD5 as the default digest algorithm for the apps/openssl applications. [Nils Larsch] *) Compile clean with "-Wall -Wmissing-prototypes -Wstrict-prototypes -Wmissing-declarations -Werror". Currently DEBUG_SAFESTACK must also be set. [Ben Laurie] *) Change ./Configure so that certain algorithms can be disabled by default. The new counterpiece to "no-xxx" is "enable-xxx". The patented RC5 and MDC2 algorithms will now be disabled unless "enable-rc5" and "enable-mdc2", respectively, are specified. (IDEA remains enabled despite being patented. This is because IDEA is frequently required for interoperability, and there is no license fee for non-commercial use. As before, "no-idea" can be used to avoid this algorithm.) [Bodo Moeller] *) Add processing of proxy certificates (see RFC 3820). This work was sponsored by KTH (The Royal Institute of Technology in Stockholm) and EGEE (Enabling Grids for E-science in Europe). [Richard Levitte] *) RC4 performance overhaul on modern architectures/implementations, such as Intel P4, IA-64 and AMD64. [Andy Polyakov] *) New utility extract-section.pl.
This can be used to specify an alternative section number in a pod file instead of having to treat each file as a separate case in Makefile. This can be done by adding two lines to the pod file:

=for comment openssl_section:XXX

The blank line is mandatory. [Steve Henson] *) New arguments -certform, -keyform and -pass for s_client and s_server to allow alternative format key and certificate files and passphrase sources. [Steve Henson] *) New structure X509_VERIFY_PARAM which combines current verify parameters, update associated structures and add various utility functions. Add new policy related verify parameters, include policy checking in standard verify code. Enhance 'smime' application with extra parameters to support policy checking and print out. [Steve Henson] *) Add a new engine to support VIA PadLock ACE extensions in the VIA C3 Nehemiah processors. These extensions support AES encryption in hardware as well as RNG (though RNG support is currently disabled). [Michal Ludvig <michal@logix.cz>, with help from Andy Polyakov] *) Deprecate BN_[get|set]_params() functions (they were ignored internally). [Geoff Thorpe] *) New FIPS 180-2 algorithms, SHA-224/-256/-384/-512 are implemented. [Andy Polyakov and a number of other people] *) Improved PowerPC platform support. Most notably BIGNUM assembler implementation contributed by IBM. [Suresh Chari, Peter Waltenberg, Andy Polyakov] *) The new 'RSA_generate_key_ex' function now takes a BIGNUM for the public exponent rather than 'unsigned long'. There is a corresponding change to the new 'rsa_keygen' element of the RSA_METHOD structure. [Jelte Jansen, Geoff Thorpe] *) Functionality for creating the initial serial number file is now moved from CA.pl to the 'ca' utility with a new option -create_serial. (Before OpenSSL 0.9.7e, CA.pl used to initialize the serial number file to 1, which is bound to cause problems.
To avoid the problems while respecting compatibility between different 0.9.7 patchlevels, 0.9.7e employed 'openssl x509 -next_serial' in CA.pl for serial number initialization. With the new release 0.9.8, we can fix the problem directly in the 'ca' utility.) [Steve Henson] *) Reduced header interdependencies by declaring more opaque objects in ossl_typ.h. As a consequence, including some headers (e.g. engine.h) will give fewer recursive includes, which could break lazy source code - so this change is covered by the OPENSSL_NO_DEPRECATED symbol. As always, developers should define this symbol when building and using openssl to ensure they track the recommended behaviour, interfaces, [etc], but backwards-compatible behaviour prevails when this isn't defined. [Geoff Thorpe] *) New function X509_POLICY_NODE_print() which prints out policy nodes. [Steve Henson] *) Add new EVP function EVP_CIPHER_CTX_rand_key and associated functionality. This will generate a random key of the appropriate length based on the cipher context. The EVP_CIPHER can provide its own random key generation routine to support keys of a specific form. This is used in the des and 3des routines to generate a key of the correct parity. Update S/MIME code to use new functions and hence generate correct parity DES keys. Add EVP_CHECK_DES_KEY #define to return an error if the key is not valid (weak or incorrect parity). [Steve Henson] *) Add a local set of CRLs that can be used by X509_verify_cert() as well as looking them up. This is useful when the verified structure may contain CRLs, for example PKCS#7 signedData. Modify PKCS7_verify() to use any CRLs present unless the new PKCS7_NO_CRL flag is asserted. [Steve Henson] *) Extend ASN1 oid configuration module. It now additionally accepts the syntax: shortName = some long name, 1.2.3.4 [Steve Henson] *) Reimplemented the BN_CTX implementation.
There is now no more static limitation on the number of variables it can handle nor the depth of the "stack" handling for BN_CTX_start()/BN_CTX_end() pairs. The stack information can now expand as required, and rather than having a single static array of bignums, BN_CTX now uses a linked-list of such arrays allowing it to expand on demand whilst maintaining the usefulness of BN_CTX's "bundling". [Geoff Thorpe] *) Add a missing BN_CTX parameter to the 'rsa_mod_exp' callback in RSA_METHOD to allow all RSA operations to function using a single BN_CTX. [Geoff Thorpe] *) Preliminary support for certificate policy evaluation and checking. This is initially intended to pass the tests outlined in "Conformance Testing of Relying Party Client Certificate Path Processing Logic" v1.07. [Steve Henson] *) bn_dup_expand() has been deprecated; it was introduced in 0.9.7 and remained unused and not that useful. A variety of other little bignum tweaks and fixes have also been made continuing on from the audit (see below). [Geoff Thorpe] *) Constify all or almost all d2i, c2i, s2i and r2i functions, along with associated ASN1, EVP and SSL functions and old ASN1 macros. [Richard Levitte] *) BN_zero() only needs to set 'top' and 'neg' to zero for correct results, and this should never fail. So the return value from the use of BN_set_word() (which can fail due to needless expansion) is now deprecated; if OPENSSL_NO_DEPRECATED is defined, BN_zero() is a void macro. [Geoff Thorpe] *) BN_CTX_get() should return zero-valued bignums, providing the same initialised value as BN_new(). [Geoff Thorpe, suggested by Ulf Möller] *) Support for inhibitAnyPolicy certificate extension. [Steve Henson] *) An audit of the BIGNUM code is underway, for which debugging code is enabled when BN_DEBUG is defined. This makes stricter enforcements on what is considered valid when processing BIGNUMs, and causes execution to assert() when a problem is discovered.
If BN_DEBUG_RAND is defined, further steps are taken to deliberately pollute unused data in BIGNUM structures to try and expose faulty code further on. For now, openssl will (in its default mode of operation) continue to tolerate the inconsistent forms that it has tolerated in the past, but authors and packagers should consider trying openssl and their own applications when compiled with these debugging symbols defined. It will help highlight potential bugs in their own code, and will improve the test coverage for OpenSSL itself. At some point, these tighter rules will become openssl's default to improve maintainability, though the assert()s and other overheads will remain only in debugging configurations. See bn.h for more details. [Geoff Thorpe, Nils Larsch, Ulf Möller] *) BN_CTX_init() has been deprecated, as BN_CTX is an opaque structure that can only be obtained through BN_CTX_new() (which implicitly initialises it). The presence of this function only made it possible to overwrite an existing structure (and cause memory leaks). [Geoff Thorpe] *) Because of the callback-based approach for implementing LHASH as a template type, lh_insert() adds opaque objects to hash-tables and lh_doall() or lh_doall_arg() are typically used with a destructor callback to clean up those corresponding objects before destroying the hash table (and losing the object pointers). So some over-zealous constifications in LHASH have been relaxed so that lh_insert() does not take (nor store) the objects as "const" and the lh_doall[_arg] callback wrappers are not prototyped to have "const" restrictions on the object pointers they are given (and so aren't required to cast them away any more). [Geoff Thorpe] *) The tmdiff.h API was so ugly and minimal that our own timing utility (speed) prefers to use its own implementation. The two implementations haven't been consolidated as yet (volunteers?) 
but the tmdiff API has had its object type properly exposed (MS_TM) instead of casting to/from "char *". This may still change yet if someone realises MS_TM and "ms_time_***" aren't necessarily the greatest nomenclatures - but this is what was used internally to the implementation so I've used that for now. [Geoff Thorpe] *) Ensure that deprecated functions do not get compiled when OPENSSL_NO_DEPRECATED is defined. Some "openssl" subcommands and a few of the self-tests were still using deprecated key-generation functions so these have been updated also. [Geoff Thorpe] *) Reorganise PKCS#7 code to separate the digest location functionality into PKCS7_find_digest(), digest addition into PKCS7_bio_add_digest(). New function PKCS7_set_digest() to set the digest type for PKCS#7 digestedData type. Add additional code to correctly generate the digestedData type and add support for this type in PKCS7 initialization functions. [Steve Henson] *) New function PKCS7_set0_type_other(), which initializes a PKCS7 structure of type "other". [Steve Henson] *) Fix prime generation loop in crypto/bn/bn_prime.pl by making sure the loop stops correctly and that breaking ("division by zero") modulus operations are not performed. The (pre-generated) prime table crypto/bn/bn_prime.h was already correct, but it could not be re-generated on some platforms because of the "division by zero" situation in the script. [Ralf S. Engelschall] *) Update support for ECC-based TLS ciphersuites according to draft-ietf-tls-ecc-03.txt: the KDF1 key derivation function with SHA-1 now is only used for "small" curves (where the representation of a field element takes up to 24 bytes); for larger curves, the field element resulting from ECDH is directly used as premaster secret. [Douglas Stebila (Sun Microsystems Laboratories)] *) Add code for kP+lQ timings to crypto/ec/ectest.c, and add SEC2 curve secp160r1 to the tests.
[Douglas Stebila (Sun Microsystems Laboratories)] *) Add the possibility to load symbols globally with DSO. [Götz Babin-Ebell <babin-ebell@trustcenter.de> via Richard Levitte] *) Add the functions ERR_set_mark() and ERR_pop_to_mark() for better control of the error stack. [Richard Levitte] *) Add support for STORE in ENGINE. [Richard Levitte] *) Add the STORE type. The intention is to provide a common interface to certificate and key stores, be they simple file-based stores, or HSM-type stores, or LDAP stores, or... NOTE: The code is currently UNTESTED and isn't really used anywhere. [Richard Levitte] *) Add a generic structure called OPENSSL_ITEM. This can be used to pass a list of arguments to any function as well as provide a way for a function to pass data back to the caller. [Richard Levitte] *) Add the functions BUF_strndup() and BUF_memdup(). BUF_strndup() works like BUF_strdup() but can be used to duplicate a portion of a string. The copy gets NUL-terminated. BUF_memdup() duplicates a memory area. [Richard Levitte] *) Add the function sk_find_ex() which works like sk_find(), but will return an index to an element even if an exact match couldn't be found. The index is guaranteed to point at the element where the searched-for key would be inserted to preserve sorting order. [Richard Levitte] *) Add the function OBJ_bsearch_ex() which works like OBJ_bsearch() but takes an extra flags argument for optional functionality. Currently, the following flags are defined:

OBJ_BSEARCH_VALUE_ON_NOMATCH
This one gets OBJ_bsearch_ex() to return a pointer to the first element where the comparing function returns a negative or zero number.

OBJ_BSEARCH_FIRST_VALUE_ON_MATCH
This one gets OBJ_bsearch_ex() to return a pointer to the first element where the comparing function returns zero. This is useful if there is more than one element where the comparing function returns zero.
[Richard Levitte] *) Make it possible to create self-signed certificates with 'openssl ca' in such a way that the self-signed certificate becomes part of the CA database and uses the same mechanisms for serial number generation as all other certificate signing. The new flag '-selfsign' enables this functionality. Adapt CA.sh and CA.pl.in. [Richard Levitte] *) Add functionality to check the public key of a certificate request against a given private key. This is useful to check that a certificate request can be signed by that key (self-signing). [Richard Levitte] *) Generate multi-valued AVAs using '+' notation in config files for req and dirName. [Steve Henson] *) Support for nameConstraints certificate extension. [Steve Henson] *) Support for policyConstraints certificate extension. [Steve Henson] *) Support for policyMappings certificate extension. [Steve Henson] *) Make sure the default DSA_METHOD implementation only uses its dsa_mod_exp() and/or bn_mod_exp() handlers if they are non-NULL, and change its own handlers to be NULL so as to remove unnecessary indirection. This lets alternative implementations fallback to the default implementation more easily. [Geoff Thorpe] *) Support for directoryName in GeneralName related extensions in config files. [Steve Henson] *) Make it possible to link applications using Makefile.shared. Make that possible even when linking against static libraries! [Richard Levitte] *) Support for single pass processing for S/MIME signing. This now means that S/MIME signing can be done from a pipe; in addition, cleartext signing (multipart/signed type) is effectively streaming and the signed data does not need to be all held in memory. This is done with a new flag PKCS7_STREAM. When this flag is set PKCS7_sign() only initializes the PKCS7 structure and the actual signing is done after the data is output (and digests calculated) in SMIME_write_PKCS7().
[Steve Henson] *) Add full support for -rpath/-R, both in shared libraries and applications, at least on the platforms where it's known how to do it. [Richard Levitte] *) In crypto/ec/ec_mult.c, implement fast point multiplication with precomputation, based on wNAF splitting: EC_GROUP_precompute_mult() will now compute a table of multiples of the generator that makes subsequent invocations of EC_POINTs_mul() or EC_POINT_mul() faster (notably in the case of a single point multiplication, scalar * generator). [Nils Larsch, Bodo Moeller] *) IPv6 support for certificate extensions. The various extensions which use the IP:a.b.c.d format can now take IPv6 addresses using the formats of RFC 1884 2.2. IPv6 addresses are now also displayed correctly. [Steve Henson] *) Added an ENGINE that implements RSA by performing private key exponentiations with the GMP library. The conversions to and from GMP's mpz_t format aren't optimised, nor are any Montgomery forms cached, and on x86 it appears OpenSSL's own performance has caught up. However there are likely to be other architectures where GMP could provide a boost. This ENGINE is not built in by default, but it can be specified at Configure time and should be accompanied by the necessary linker additions, e.g. ./config -DOPENSSL_USE_GMP -lgmp [Geoff Thorpe] *) "openssl engine" will not display ENGINE/DSO load failure errors when testing availability of engines with "-t" - the old behaviour is produced by increasing the feature's verbosity with "-tt". [Geoff Thorpe] *) ECDSA routines: under certain error conditions uninitialized BN objects could be freed. Solution: make sure initialization is performed early enough. (Reported and fix supplied by Nils Larsch <nla@trustcenter.de> via PR#459) [Lutz Jaenicke] *) Key-generation can now be implemented in RSA_METHOD, DSA_METHOD and DH_METHOD (e.g. by ENGINE implementations) to override the normal software implementations.
For DSA and DH, parameter generation can also be overridden by providing the appropriate method callbacks. [Geoff Thorpe] *) Change the "progress" mechanism used in key-generation and primality testing to functions that take a new BN_GENCB pointer in place of callback/argument pairs. The new API functions have "_ex" postfixes and the older functions are reimplemented as wrappers for the new ones. The OPENSSL_NO_DEPRECATED symbol can be used to hide declarations of the old functions to help (graceful) attempts to migrate to the new functions. Also, the new key-generation API functions operate on a caller-supplied key-structure and return success/failure rather than returning a key or NULL - this is to help make "keygen" another member function of RSA_METHOD etc. Example for using the new callback interface:

int (*my_callback)(int a, int b, BN_GENCB *cb) = ...;
void *my_arg = ...;
BN_GENCB my_cb;

BN_GENCB_set(&my_cb, my_callback, my_arg);
return BN_is_prime_ex(some_bignum, BN_prime_checks, NULL, &my_cb);

/* For the meaning of a, b in calls to my_callback(), see the
 * documentation of the function that calls the callback.
 * cb will point to my_cb; my_arg can be retrieved as cb->arg.
 * my_callback should return 1 if it wants BN_is_prime_ex()
 * to continue, or 0 to stop. */

[Geoff Thorpe] *) Change the ZLIB compression method to be stateful, and make it available to TLS with the number defined in draft-ietf-tls-compression-04.txt. [Richard Levitte] *) Add the ASN.1 structures and functions for CertificatePair, which is defined as follows (according to X.509_4thEditionDraftV6.pdf):

CertificatePair ::= SEQUENCE {
    forward [0] Certificate OPTIONAL,
    reverse [1] Certificate OPTIONAL,
    -- at least one of the pair shall be present -- }

Also implement the PEM functions to read and write certificate pairs, and define the PEM tag as "CERTIFICATE PAIR". This needed to be defined, mostly for the sake of the LDAP attribute crossCertificatePair, but may prove useful elsewhere as well.
[Richard Levitte] *) Make it possible to inhibit symlinking of shared libraries in Makefile.shared, for Cygwin's sake. [Richard Levitte] *) Extend the BIGNUM API by creating a function void BN_set_negative(BIGNUM *a, int neg); and a macro that behaves like int BN_is_negative(const BIGNUM *a); to avoid the need to access 'a->neg' directly in applications. [Nils Larsch] *) Implement fast modular reduction for pseudo-Mersenne primes used in NIST curves (crypto/bn/bn_nist.c, crypto/ec/ecp_nist.c). EC_GROUP_new_curve_GFp() will now automatically use this if applicable. [Nils Larsch <nla@trustcenter.de>] *) Add new lock type (CRYPTO_LOCK_BN). [Bodo Moeller] *) Change the ENGINE framework to automatically load engines dynamically from specific directories unless they could be found to already be built in or loaded. Move all the current engines except for the cryptodev one to a new directory engines/. The engines in engines/ are built as shared libraries if the "shared" option was given to ./Configure or ./config. Otherwise, they are inserted in libcrypto.a. /usr/local/ssl/engines is the default directory for dynamic engines, but that can be overridden at configure time through the usual use of --prefix and/or --openssldir, and at run time with the environment variable OPENSSL_ENGINES. [Geoff Thorpe and Richard Levitte] *) Add Makefile.shared, a helper makefile to build shared libraries. Adapt Makefile.org. [Richard Levitte] *) Add version info to Win32 DLLs. [Peter 'Luna' Runestig <peter@runestig.com>] *) Add new 'medium level' PKCS#12 API. Certificates and keys can be added using this API to create arbitrary PKCS#12 files while avoiding the low level API. New options to PKCS12_create(): key or cert can be NULL and will then be omitted from the output file. The encryption algorithm NIDs can be set to -1 for no encryption, the mac iteration count can be set to 0 to omit the mac.
Enhance pkcs12 utility by making the -nokeys and -nocerts options work when creating a PKCS#12 file. New option -nomac to omit the mac, NONE can be set for an encryption algorithm. New code is modified to use the enhanced PKCS12_create() instead of the low level API. [Steve Henson] *) Extend ASN1 encoder to support indefinite length constructed encoding. This can output sequences tags and octet strings in this form. Modify pk7_asn1.c to support indefinite length encoding. This is experimental and needs additional code to be useful, such as an ASN1 bio and some enhanced streaming PKCS#7 code. Extend template encode functionality so that tagging is passed down to the template encoder. [Steve Henson] *) Let 'openssl req' fail if an argument to '-newkey' is not recognized instead of using RSA as a default. [Bodo Moeller] *) Add support for ECC-based ciphersuites from draft-ietf-tls-ecc-01.txt. As these are not official, they are not included in "ALL"; the "ECCdraft" ciphersuite group alias can be used to select them. [Vipul Gupta and Sumit Gupta (Sun Microsystems Laboratories)] *) Add ECDH engine support. [Nils Gura and Douglas Stebila (Sun Microsystems Laboratories)] *) Add ECDH in new directory crypto/ecdh/. [Douglas Stebila (Sun Microsystems Laboratories)] *) Let BN_rand_range() abort with an error after 100 iterations without success (which indicates a broken PRNG). [Bodo Moeller] *) Change BN_mod_sqrt() so that it verifies that the input value is really the square of the return value. (Previously, BN_mod_sqrt would show GIGO behaviour.) [Bodo Moeller] *) Add named elliptic curves over binary fields from X9.62, SECG, and WAP/WTLS; add OIDs that were still missing. [Sheueling Chang Shantz and Douglas Stebila (Sun Microsystems Laboratories)] *) Extend the EC library for elliptic curves over binary fields (new files ec2_smpl.c, ec2_smpt.c, ec2_mult.c in crypto/ec/). 
New EC_METHOD:
      EC_GF2m_simple_method

   New API functions:
      EC_GROUP_new_curve_GF2m
      EC_GROUP_set_curve_GF2m
      EC_GROUP_get_curve_GF2m
      EC_POINT_set_affine_coordinates_GF2m
      EC_POINT_get_affine_coordinates_GF2m
      EC_POINT_set_compressed_coordinates_GF2m

   Point compression for binary fields is disabled by default for patent
   reasons (compile with OPENSSL_EC_BIN_PT_COMP defined to enable it).

   As binary polynomials are represented as BIGNUMs, various members of
   the EC_GROUP and EC_POINT data structures can be shared between the
   implementations for prime fields and binary fields; the above ..._GF2m
   functions (except for EC_GROUP_new_curve_GF2m) are essentially
   identical to their ..._GFp counterparts. (For simplicity, the '..._GFp'
   prefix has been dropped from various internal method names.)

   An internal 'field_div' method (similar to 'field_mul' and 'field_sqr')
   has been added; this is used only for binary fields.
   [Sheueling Chang Shantz and Douglas Stebila
   (Sun Microsystems Laboratories)]

*) Optionally dispatch EC_POINT_mul(), EC_POINT_precompute_mult() through
   methods ('mul', 'precompute_mult'). The generic implementations (now
   internally called 'ec_wNAF_mul' and 'ec_wNAF_precompute_mult') remain
   the default if these methods are undefined.
   [Sheueling Chang Shantz and Douglas Stebila
   (Sun Microsystems Laboratories)]

*) New function EC_GROUP_get_degree, which is defined through EC_METHOD.
   For curves over prime fields, this returns the bit length of the
   modulus.
   [Sheueling Chang Shantz and Douglas Stebila
   (Sun Microsystems Laboratories)]

*) New functions EC_GROUP_dup, EC_POINT_dup. (These simply call ..._new
   and ..._copy).
   [Sheueling Chang Shantz and Douglas Stebila
   (Sun Microsystems Laboratories)]

*) Add binary polynomial arithmetic software in crypto/bn/bn_gf2m.c.
Polynomials are represented as BIGNUMs (where the sign bit is not used)
in the following functions [macros]:

      BN_GF2m_add
      BN_GF2m_sub             [= BN_GF2m_add]
      BN_GF2m_mod             [wrapper for BN_GF2m_mod_arr]
      BN_GF2m_mod_mul         [wrapper for BN_GF2m_mod_mul_arr]
      BN_GF2m_mod_sqr         [wrapper for BN_GF2m_mod_sqr_arr]
      BN_GF2m_mod_inv
      BN_GF2m_mod_exp         [wrapper for BN_GF2m_mod_exp_arr]
      BN_GF2m_mod_sqrt        [wrapper for BN_GF2m_mod_sqrt_arr]
      BN_GF2m_mod_solve_quad  [wrapper for BN_GF2m_mod_solve_quad_arr]
      BN_GF2m_cmp             [= BN_ucmp]

   (Note that only the 'mod' functions are actually for fields GF(2^m).
   BN_GF2m_add() is a misnomer, but this is for the sake of consistency.)

   For some functions, the irreducible polynomial defining a field can be
   given as an 'unsigned int[]' with strictly decreasing elements giving
   the indices of those bits that are set; i.e., p[] represents the
   polynomial
      f(t) = t^p[0] + t^p[1] + ... + t^p[k]
   where
      p[0] > p[1] > ... > p[k] = 0.
   This applies to the following functions:

      BN_GF2m_mod_arr
      BN_GF2m_mod_mul_arr
      BN_GF2m_mod_sqr_arr
      BN_GF2m_mod_inv_arr        [wrapper for BN_GF2m_mod_inv]
      BN_GF2m_mod_div_arr        [wrapper for BN_GF2m_mod_div]
      BN_GF2m_mod_exp_arr
      BN_GF2m_mod_sqrt_arr
      BN_GF2m_mod_solve_quad_arr

   Conversion can be performed by the following functions:

      BN_GF2m_poly2arr
      BN_GF2m_arr2poly

   bntest.c has additional tests for binary polynomial arithmetic.

   Two implementations for BN_GF2m_mod_div() are available. The default
   algorithm simply uses BN_GF2m_mod_inv() and BN_GF2m_mod_mul(). The
   alternative algorithm is compiled in only if OPENSSL_SUN_GF2M_DIV is
   defined (patent pending; read the copyright notice in
   crypto/bn/bn_gf2m.c before enabling it).
   [Sheueling Chang Shantz and Douglas Stebila
   (Sun Microsystems Laboratories)]

*) Add new error code 'ERR_R_DISABLED' that can be used when some
   functionality is disabled at compile-time.
   [Douglas Stebila <douglas.stebila@sun.com>]

*) Change default behaviour of 'openssl asn1parse' so that more
   information is visible when viewing, e.g., a certificate: Modify
   asn1_parse2 (crypto/asn1/asn1_par.c) so that in non-'dump' mode the
   content of non-printable OCTET STRINGs is output in a style similar to
   INTEGERs, but with '[HEX DUMP]' prepended to avoid the appearance of a
   printable string.
   [Nils Larsch <nla@trustcenter.de>]

*) Add 'asn1_flag' and 'asn1_form' members to EC_GROUP with access
   functions
      EC_GROUP_set_asn1_flag()
      EC_GROUP_get_asn1_flag()
      EC_GROUP_set_point_conversion_form()
      EC_GROUP_get_point_conversion_form()
   These control ASN1 encoding details:
   - Curves (i.e., groups) are encoded explicitly unless asn1_flag has
     been set to OPENSSL_EC_NAMED_CURVE.
   - Points are encoded in uncompressed form by default; options for
     asn1_form are as for point2oct, namely
        POINT_CONVERSION_COMPRESSED
        POINT_CONVERSION_UNCOMPRESSED
        POINT_CONVERSION_HYBRID

   Also add 'seed' and 'seed_len' members to EC_GROUP with access
   functions
      EC_GROUP_set_seed()
      EC_GROUP_get0_seed()
      EC_GROUP_get_seed_len()
   This is used only for ASN1 purposes (so far).
   [Nils Larsch <nla@trustcenter.de>]

*) Add 'field_type' member to EC_METHOD, which holds the NID of the
   appropriate field type OID. The new function
   EC_METHOD_get_field_type() returns this value.
   [Nils Larsch <nla@trustcenter.de>]

*) Add functions
      EC_POINT_point2bn()
      EC_POINT_bn2point()
      EC_POINT_point2hex()
      EC_POINT_hex2point()
   providing useful interfaces to EC_POINT_point2oct() and
   EC_POINT_oct2point().
   [Nils Larsch <nla@trustcenter.de>]

*) Change internals of the EC library so that the functions
      EC_GROUP_set_generator()
      EC_GROUP_get_generator()
      EC_GROUP_get_order()
      EC_GROUP_get_cofactor()
   are implemented directly in crypto/ec/ec_lib.c and not dispatched to
   methods, which would lead to unnecessary code duplication when adding
   different types of curves.
   [Nils Larsch <nla@trustcenter.de> with input by Bodo Moeller]

*) Implement compute_wNAF (crypto/ec/ec_mult.c) without BIGNUM
   arithmetic, and such that modified wNAFs are generated (which avoid
   length expansion in many cases).
   [Bodo Moeller]

*) Add a function EC_GROUP_check_discriminant() (defined via EC_METHOD)
   that verifies that the curve discriminant is non-zero. Add a function
   EC_GROUP_check() that makes some sanity tests on an EC_GROUP, its
   generator and order. This includes EC_GROUP_check_discriminant().
   [Nils Larsch <nla@trustcenter.de>]

*) Add ECDSA in new directory crypto/ecdsa/. Add applications 'openssl
   ecparam' and 'openssl ecdsa' (these are based on 'openssl dsaparam'
   and 'openssl dsa'). ECDSA support is also included in various other
   files across the library. Most notably,
   - 'openssl req' now has a '-newkey ecdsa:file' option;
   - EVP_PKCS82PKEY (crypto/evp/evp_pkey.c) now can handle ECDSA;
   - X509_PUBKEY_get (crypto/asn1/x_pubkey.c) and d2i_PublicKey
     (crypto/asn1/d2i_pu.c) have been modified to make them suitable for
     ECDSA where domain parameters must be extracted before the specific
     public key;
   - ECDSA engine support has been added.
   [Nils Larsch <nla@trustcenter.de>]

*) Include some named elliptic curves, and add OIDs from X9.62, SECG, and
   WAP/WTLS. Each curve can be obtained from the new function
   EC_GROUP_new_by_curve_name(), and the list of available named curves
   can be obtained with EC_get_builtin_curves(). Also add a 'curve_name'
   member to EC_GROUP objects, which can be accessed via
      EC_GROUP_set_curve_name()
      EC_GROUP_get_curve_name()
   [Nils Larsch <larsch@trustcenter.de>, Bodo Moeller]

Changes between 0.9.7l and 0.9.7m [23 Feb 2007]

*) Cleanse PEM buffers before freeing them since they may contain
   sensitive data.
   [Benjamin Bennett <ben@psc.edu>]

*) Include "!eNULL" in SSL_DEFAULT_CIPHER_LIST to make sure that a
   ciphersuite string such as "DEFAULT:RSA" cannot enable
   authentication-only ciphersuites.
   [Bodo Moeller]

*) Since AES128 and AES256 share a single mask bit in the logic of
   ssl/ssl_ciph.c, the code for masking out disabled ciphers needs a
   kludge to work properly if AES128 is available and AES256 isn't.
   [Victor Duchovni]

*) Expand security boundary to match 1.1.1 module.
   [Steve Henson]

*) Remove redundant features: hash file source, editing of test vectors;
   modify fipsld to use external fips_premain.c signature.
   [Steve Henson]

*) New perl script mkfipsscr.pl to create shell scripts or batch files to
   run algorithm test programs.
   [Steve Henson]

*) Make algorithm test programs more tolerant of whitespace.
   [Steve Henson]

*) Load error codes if they are not already present instead of using a
   static variable. This allows them to be cleanly unloaded and reloaded.
   [Steve Henson]

Changes between 0.9.7k and 0.9.7l [28 Sep 2006]

*) Change ciphersuite string processing so that an explicit ciphersuite
   selects this one ciphersuite (so that "AES256-SHA" will no longer
   include "AES128-SHA"), and any other similar ciphersuite (same bitmap)
   from *other* protocol versions (so that "RC4-MD5" will still include
   both the SSL 2.0 ciphersuite and the SSL 3.0/TLS 1.0 ciphersuite).
   This is a backport combining changes from 0.9.8b and 0.9.8d.
   [Bodo Moeller]

Changes between 0.9.7j and 0.9.7k [05 Sep 2006]

*) Avoid PKCS #1 v1.5 signature attack discovered by Daniel
   Bleichenbacher (CVE-2006-4339)
   [Ben Laurie and Google Security Team]

*) Change the Unix randomness entropy gathering to use poll() when
   possible instead of select(), since the latter has some undesirable
   limitations.
   [Darryl Miles via Richard Levitte]

Changes between 0.9.7i and 0.9.7j [04 May 2006]

*) Adapt fipsld and the build system to link against the validated FIPS
   module in FIPS mode.
   [Steve Henson]

*) Fixes for VC++ 2005 build under Windows.
   [Steve Henson]

*) Add new Windows build target VC-32-GMAKE for VC++. This uses GNU make
   from a Windows bash shell such as MSYS.
It is autodetected from the "config" script when run from a VC++
   environment. Modify standard VC++ build to use fipscanister.o from the
   GNU make build.
   [Steve Henson]

Changes between 0.9.7h and 0.9.7i [14 Oct 2005]

*) Wrapped the definition of EVP_MAX_MD_SIZE in a #ifdef OPENSSL_FIPS.
   The value now differs depending on if you build for FIPS or not.
   BEWARE! A program linked with a shared FIPSed libcrypto can't be
   safely run with a non-FIPSed libcrypto, as it may crash because of the
   difference induced by this change.
   [Andy Polyakov]

Changes between 0.9.7g and 0.9.7h [11 Oct 2005]

*) Minimal support for X9.31 signatures and PSS padding modes. This is
   mainly for FIPS compliance and not fully integrated at this stage.
   [Steve Henson]

*) For DSA signing, unless DSA_FLAG_NO_EXP_CONSTTIME is set, perform the
   exponentiation using a fixed-length exponent. (Otherwise, the
   information leaked through timing could expose the secret key after
   many signatures; cf. Bleichenbacher's attack on DSA with biased k.)
   [Bodo Moeller]

*) Make a new fixed-window mod_exp implementation the default for RSA,
   DSA, and DH private-key operations so that the sequence of squares and
   multiplies and the memory access pattern are independent of the
   particular secret key. This will mitigate cache-timing and potential
   related attacks.

   BN_mod_exp_mont_consttime() is the new exponentiation implementation,
   and this is automatically used by BN_mod_exp_mont() if the new flag
   BN_FLG_EXP_CONSTTIME is set for the exponent. RSA, DSA, and DH will
   use this BN flag for private exponents unless the flag
   RSA_FLAG_NO_EXP_CONSTTIME, DSA_FLAG_NO_EXP_CONSTTIME, or
   DH_FLAG_NO_EXP_CONSTTIME, respectively, is set.
   [Matthew D Wood (Intel Corp), with some changes by Bodo Moeller]

*) Change the client implementation for SSLv23_method() and
   SSLv23_client_method() so that it uses the SSL 3.0/TLS 1.0 Client
   Hello message format if the SSL_OP_NO_SSLv2 option is set.
(Previously, the SSL 2.0 backwards compatible Client Hello message
   format would be used even with SSL_OP_NO_SSLv2.)
   [Bodo Moeller]

*) Add support for smime-type MIME parameter in S/MIME messages which
   some clients need.
   [Steve Henson]

*) New function BN_MONT_CTX_set_locked() to set montgomery parameters in
   a threadsafe manner. Modify rsa code to use new function and add calls
   to dsa and dh code (which had race conditions before).
   [Steve Henson]

*) Include the fixed error library code in the C error file definitions
   instead of fixing them up at runtime. This keeps the error code
   structures constant.
   [Steve Henson]

Changes between 0.9.7f and 0.9.7g [11 Apr 2005]

[NB: OpenSSL 0.9.7h and later 0.9.7 patch levels were released after
OpenSSL 0.9.8.]

*) Fixes for newer Kerberos headers. NB: the casts are needed because the
   'length' field is signed on one version and unsigned on another with
   no (?) obvious way to tell the difference; without these casts VC++
   complains. Also the "definition" of FAR (blank) is no longer included
   nor is the error ENOMEM. KRB5_PRIVATE has to be set to 1 to pick up
   some needed definitions.
   [Steve Henson]

*) Undo Cygwin change.
   [Ulf Möller]

*) Added support for proxy certificates according to RFC 3820. Because
   they may be a security threat to unaware applications, they must be
   explicitly allowed at run time. See docs/HOWTO/proxy_certificates.txt
   for further information.
   [Richard Levitte]

Changes between 0.9.7e and 0.9.7f [22 Mar 2005]

*) Use (SSL_RANDOM_VALUE - 4) bytes of pseudo random data when generating
   server and client random values. Previously
   (SSL_RANDOM_VALUE - sizeof(time_t)) would be used which would result
   in less random data when sizeof(time_t) > 4 (some 64 bit platforms).
   This change has negligible security impact because:

   1. Server and client random values still have 24 bytes of pseudo
      random data.

   2. Server and client random values are sent in the clear in the
      initial handshake.

   3.
The master secret is derived using the premaster secret (48 bytes in
      size for static RSA ciphersuites) as well as client and server
      random values.

   The OpenSSL team would like to thank the UK NISCC for bringing this
   issue to our attention.
   [Stephen Henson, reported by UK NISCC]

*) Use Windows randomness collection on Cygwin.
   [Ulf Möller]

*) Fix hang in EGD/PRNGD query when communication socket is closed
   prematurely by EGD/PRNGD.
   [Darren Tucker <dtucker@zip.com.au> via Lutz Jänicke, resolves #1014]

*) Prompt for pass phrases when appropriate for PKCS12 input format.
   [Steve Henson]

*) Back-port of selected performance improvements from development
   branch, as well as improved support for PowerPC platforms.
   [Andy Polyakov]

*) Add lots of checks for memory allocation failure, error codes to
   indicate failure and freeing up memory if a failure occurs.
   [Nauticus Networks SSL Team <openssl@nauticusnet.com>, Steve Henson]

*) Add new -passin argument to dgst.
   [Steve Henson]

*) Perform some character comparisons of different types in
   X509_NAME_cmp: this is needed for some certificates that re-encode DNs
   into UTF8Strings (in violation of RFC3280) and can't or won't issue
   name rollover certificates.
   [Steve Henson]

*) Make an explicit check during certificate validation to see that the
   CA setting in each certificate on the chain is correct. As a side
   effect always do the following basic checks on extensions, not just
   when there's an associated purpose to the check:

   - if there is an unhandled critical extension (unless the user has
     chosen to ignore this fault)
   - if the path length has been exceeded (if one is set at all)
   - that certain extensions fit the associated purpose (if one has been
     given)
   [Richard Levitte]

Changes between 0.9.7d and 0.9.7e [25 Oct 2004]

*) Avoid a race condition when CRLs are checked in a multi-threaded
   environment. This would happen due to the reordering of the revoked
   entries during signature checking and serial number lookup.
Now the encoding is cached and the serial number sort performed under a
   lock. Add new STACK function sk_is_sorted().
   [Steve Henson]

*) Add Delta CRL to the extension code.
   [Steve Henson]

*) Various fixes to s3_pkt.c so alerts are sent properly.
   [David Holmes <d.holmes@f5.com>]

*) Reduce the chances of duplicate issuer name and serial numbers (in
   violation of RFC3280) using the OpenSSL certificate creation
   utilities. This is done by creating a random 64 bit value for the
   initial serial number when a serial number file is created or when a
   self signed certificate is created using 'openssl req -x509'. The
   initial serial number file is created using 'openssl x509
   -next_serial' in CA.pl rather than being initialized to 1.
   [Steve Henson]

Changes between 0.9.7c and 0.9.7d [17 Mar 2004]

*) Fix null-pointer assignment in do_change_cipher_spec() revealed by
   using the Codenomicon TLS Test Tool (CVE-2004-0079)
   [Joe Orton, Steve Henson]

*) Fix flaw in SSL/TLS handshaking when using Kerberos ciphersuites
   (CVE-2004-0112)
   [Joe Orton, Steve Henson]

*) X509 verify fixes. Disable broken certificate workarounds when
   X509_V_FLAGS_X509_STRICT is set. Check CRL issuer has cRLSign set if
   keyUsage extension present. Don't accept CRLs with unhandled critical
   extensions: since verify currently doesn't process CRL extensions this
   rejects a CRL with *any* critical extensions. Add new verify error
   codes for these cases.
   [Steve Henson]

*) When creating an OCSP nonce use an OCTET STRING inside the extnValue.
   A clarification of RFC2560 will require the use of OCTET STRINGs and
   some implementations cannot handle the current raw format. Since
   OpenSSL copies and compares OCSP nonces as opaque blobs without any
   attempt at parsing them this should not create any compatibility
   issues.
   [Steve Henson]

*) New md flag EVP_MD_CTX_FLAG_REUSE: this allows md_data to be reused
   when calling EVP_MD_CTX_copy_ex() to avoid calling OPENSSL_malloc().
Without this HMAC (and other) operations are several times slower than
   OpenSSL < 0.9.7.
   [Steve Henson]

*) Print out GeneralizedTime and UTCTime in ASN1_STRING_print_ex().
   [Peter Sylvester <Peter.Sylvester@EdelWeb.fr>]

*) Use the correct content when signing type "other".
   [Steve Henson]

Changes between 0.9.7b and 0.9.7c [30 Sep 2003]

*) Fix various bugs revealed by running the NISCC test suite:

   Stop out of bounds reads in the ASN1 code when presented with invalid
   tags (CVE-2003-0543 and CVE-2003-0544).

   Free up ASN1_TYPE correctly if ANY type is invalid (CVE-2003-0545).

   If verify callback ignores invalid public key errors don't try to
   check certificate signature with the NULL public key.
   [Steve Henson]

*) New -ignore_err option in ocsp application to stop the server exiting
   on the first error in a request.

*) Change AES_cbc_encrypt() so it outputs exact multiple of blocks during
   encryption.
   [Richard Levitte]

*) Various fixes to base64 BIO and non blocking I/O. On write flushes
   were not handled properly if the BIO retried. On read data was not
   being buffered properly and had various logic bugs. This also affects
   blocking I/O when the data being decoded is a certain size.
   [Steve Henson]

*) Various S/MIME bugfixes and compatibility changes: output correct
   application/pkcs7 MIME type if PKCS7_NOOLDMIMETYPE is set. Tolerate
   some broken signatures. Output CR+LF for EOL if PKCS7_CRLFEOL is set
   (this makes opening of files as .eml work). Correctly handle very long
   lines in MIME parser.
   [Steve Henson]

Changes between 0.9.7a and 0.9.7b [10 Apr 2003]

*) Fixed a typo bug that would cause ENGINE_set_default() to set an
   ENGINE as defaults for all supported algorithms irrespective of the
   'flags' parameter. 'flags' is now honoured, so applications should
   make sure they are passing it correctly.
   [Geoff Thorpe]

*) Target "mingw" now allows native Windows code to be generated in the
   Cygwin environment as well as with the MinGW compiler.
   [Ulf Moeller]

Changes between 0.9.7 and 0.9.7a [19 Feb 2003]

*) In ssl3_get_record (ssl/s3_pkt.c), minimize information leaked via
   timing by performing a MAC computation even if incorrect block cipher
   padding has been found. (CVE-2003-0078)

*) Make the no-err option work as intended. The intention with no-err is
   not to have the whole error stack handling routines removed from
   libcrypto, it's only intended to remove all the function name and
   reason texts, thereby removing some of the footprint that may not be
   interesting if those errors aren't displayed anyway.

   NOTE: it's still possible for any application or module to have its
   own set of error texts inserted. The routines are there, just not used
   by default when no-err is given.
   [Richard Levitte]

*) Add support for FreeBSD on IA64.
   [dirk.meyer@dinoex.sub.org via Richard Levitte, resolves #454]

*) Adjust DES_cbc_cksum() so it returns the same value as the MIT
   Kerberos function mit_des_cbc_cksum(). Before this change, the value
   returned by DES_cbc_cksum() was like the one from
   mit_des_cbc_cksum(), except the bytes were swapped.
   [Kevin Greaney <Kevin.Greaney@hp.com> and Richard Levitte]

*) Allow an application to disable the automatic SSL chain building.
   Before this a rather primitive chain build was always performed in
   ssl3_output_cert_chain(): an application had no way to send the
   correct chain if the automatic operation produced an incorrect result.

   Now the chain builder is disabled if either:

   1. Extra certificates are added via SSL_CTX_add_extra_chain_cert().

   2. The mode flag SSL_MODE_NO_AUTO_CHAIN is set.

   The reasoning behind this is that an application would not want the
   auto chain building to take place if extra chain certificates are
   present and it might also want a means of sending no additional
   certificates (for example the chain has two certificates and the root
   is omitted).
   [Steve Henson]

*) Add the possibility to build without the ENGINE framework.
[Steven Reddie <smr@essemer.com.au> via Richard Levitte] *) Under Win32 gmtime() can return NULL: check return value in OPENSSL_gmtime(). Add error code for case where gmtime() fails. [Steve Henson] *) DSA routines: under certain error conditions uninitialized BN objects could be freed. Solution: make sure initialization is performed early enough. (Reported and fix supplied by Ivan D Nestlerode <nestler@MIT.EDU>, Nils Larsch <nla@trustcenter.de> via PR#459) [Lutz Jaenicke] *) Another fix for SSLv2 session ID handling: the session ID was incorrectly checked on reconnect on the client side, therefore session resumption could still fail with a "ssl session id is different" error. This behaviour is masked when SSL_OP_ALL is used due to SSL_OP_MICROSOFT_SESS_ID_BUG being set. Behaviour observed by Crispin Flowerday <crispin@flowerday.cx> as followup to PR #377. [Lutz Jaenicke] *) IA-32 assembler support enhancements: unified ELF targets, support for SCO/Caldera platforms, fix for Cygwin shared build. [Andy Polyakov] *) Add support for FreeBSD on sparc64. As a consequence, support for FreeBSD on non-x86 processors is separate from x86 processors on the config script, much like the NetBSD support. [Richard Levitte & Kris Kennaway <kris@obsecurity.org>] Changes between 0.9.6h and 0.9.7 [31 Dec 2002] [NB: OpenSSL 0.9.6i and later 0.9.6 patch levels were released after OpenSSL 0.9.7.] *) Fix session ID handling in SSLv2 client code: the SERVER FINISHED code (06) was taken as the first octet of the session ID and the last octet was ignored consequently. As a result SSLv2 client side session caching could not have worked due to the session ID mismatch between client and server. Behaviour observed by Crispin Flowerday <crispin@flowerday.cx> as PR #377. [Lutz Jaenicke] *) Change the declaration of needed Kerberos libraries to use EX_LIBS instead of the special (and badly supported) LIBKRB5. LIBKRB5 is removed entirely. 
   [Richard Levitte]

*) The hw_ncipher.c engine requires dynamic locks. Unfortunately, it
   seems that in spite of existing for more than a year, many application
   authors have done nothing to provide the necessary callbacks, which
   means that this particular engine will not work properly anywhere.
   This is a very unfortunate situation which forces us, in the name of
   usability, to give the hw_ncipher.c a static lock, which is part of
   libcrypto.

   NOTE: This is for the 0.9.7 series ONLY. This hack will never appear
   in 0.9.8 or later. We EXPECT application authors to have dealt
   properly with this when 0.9.8 is released (unless we actually make
   such changes in the libcrypto locking code that changes will have to
   be made anyway).
   [Richard Levitte]

*) In asn1_d2i_read_bio() repeatedly call BIO_read() until all content
   octets have been read, EOF or an error occurs. Without this change
   some truncated ASN1 structures will not produce an error.
   [Steve Henson]

*) Disable Heimdal support, since it hasn't been fully implemented. Still
   give the possibility to force the use of Heimdal, but with warnings
   and a request that patches get sent to openssl-dev.
   [Richard Levitte]

*) Add the VC-CE target, introduce the WINCE sysname, and add INSTALL.WCE
   and appropriate conditionals to make it build.
   [Steven Reddie <smr@essemer.com.au> via Richard Levitte]

*) Change the DLL names for Cygwin to cygcrypto-x.y.z.dll and
   cygssl-x.y.z.dll, where x, y and z are the major, minor and edit
   numbers of the version.
   [Corinna Vinschen <vinschen@redhat.com> and Richard Levitte]

*) Introduce safe string copy and catenation functions (BUF_strlcpy() and
   BUF_strlcat()).
   [Ben Laurie (CHATS) and Richard Levitte]

*) Avoid using fixed-size buffers for one-line DNs.
   [Ben Laurie (CHATS)]

*) Add BUF_MEM_grow_clean() to avoid information leakage when resizing
   buffers containing secrets, and use where appropriate.
   [Ben Laurie (CHATS)]

*) Avoid using fixed size buffers for configuration file location.
[Ben Laurie (CHATS)] *) Avoid filename truncation for various CA files. [Ben Laurie (CHATS)] *) Use sizeof in preference to magic numbers. [Ben Laurie (CHATS)] *) Avoid filename truncation in cert requests. [Ben Laurie (CHATS)] *) Add assertions to check for (supposedly impossible) buffer overflows. [Ben Laurie (CHATS)] *) Don't cache truncated DNS entries in the local cache (this could potentially lead to a spoofing attack). [Ben Laurie (CHATS)] *) Fix various buffers to be large enough for hex/decimal representations in a platform independent manner. [Ben Laurie (CHATS)] *) Add CRYPTO_realloc_clean() to avoid information leakage when resizing buffers containing secrets, and use where appropriate. [Ben Laurie (CHATS)] *) Add BIO_indent() to avoid much slightly worrying code to do indents. [Ben Laurie (CHATS)] *) Convert sprintf()/BIO_puts() to BIO_printf(). [Ben Laurie (CHATS)] *) buffer_gets() could terminate with the buffer only half full. Fixed. [Ben Laurie (CHATS)] *) Add assertions to prevent user-supplied crypto functions from overflowing internal buffers by having large block sizes, etc. [Ben Laurie (CHATS)] *) New OPENSSL_assert() macro (similar to assert(), but enabled unconditionally). [Ben Laurie (CHATS)] *) Eliminate unused copy of key in RC4. [Ben Laurie (CHATS)] *) Eliminate unused and incorrectly sized buffers for IV in pem.h. [Ben Laurie (CHATS)] *) Fix off-by-one error in EGD path. [Ben Laurie (CHATS)] *) If RANDFILE path is too long, ignore instead of truncating. [Ben Laurie (CHATS)] *) Eliminate unused and incorrectly sized X.509 structure CBCParameter. [Ben Laurie (CHATS)] *) Eliminate unused and dangerous function knumber(). [Ben Laurie (CHATS)] *) Eliminate unused and dangerous structure, KSSL_ERR. [Ben Laurie (CHATS)] *) Protect against overlong session ID context length in an encoded session object. Since these are local, this does not appear to be exploitable. 
   [Ben Laurie (CHATS)]

*) Change from security patch (see 0.9.6e below) that did not affect the
   0.9.6 release series:

   Remote buffer overflow in SSL3 protocol - an attacker could supply an
   oversized master key in Kerberos-enabled versions. (CVE-2002-0657)
   [Ben Laurie (CHATS)]

*) Change the SSL kerb5 codes to match RFC 2712.
   [Richard Levitte]

*) Make -nameopt work fully for req and add -reqopt switch.
   [Michael Bell <michael.bell@rz.hu-berlin.de>, Steve Henson]

*) The "block size" for block ciphers in CFB and OFB mode should be 1.
   [Steve Henson, reported by Yngve Nysaeter Pettersen <yngve@opera.com>]

*) Make sure tests can be performed even if the corresponding algorithms
   have been removed entirely. This was also the last step to make
   OpenSSL compilable with DJGPP under all reasonable conditions.
   [Richard Levitte, Doug Kaufman <dkaufman@rahul.net>]

*) Add cipher selection rules COMPLEMENTOFALL and COMPLEMENTOFDEFAULT to
   allow version independent disabling of normally unselected ciphers,
   which may be activated as a side-effect of selecting a single cipher.

   (E.g., cipher list string "RSA" enables ciphersuites that are left out
   of "ALL" because they do not provide symmetric encryption.
   "RSA:!COMPLEMENTOFALL" avoids these unsafe ciphersuites.)
   [Lutz Jaenicke, Bodo Moeller]

*) Add appropriate support for separate platform-dependent build
   directories. The recommended way to make a platform-dependent build
   directory is the following (tested on Linux), maybe with some local
   tweaks:

      # Place yourself outside of the OpenSSL source tree. In
      # this example, the environment variable OPENSSL_SOURCE
      # is assumed to contain the absolute OpenSSL source directory.
      mkdir -p objtree/"`uname -s`-`uname -r`-`uname -m`"
      cd objtree/"`uname -s`-`uname -r`-`uname -m`"
      (cd $OPENSSL_SOURCE; find . -type f) | while read F; do
          mkdir -p `dirname $F`
          ln -s $OPENSSL_SOURCE/$F $F
      done

   To be absolutely sure not to disturb the source tree, a "make clean"
   is a good thing. 
If it isn't successful, don't worry about it, it probably means the source directory is very clean. [Richard Levitte] *) Make sure any ENGINE control commands make local copies of string pointers passed to them whenever necessary. Otherwise it is possible the caller may have overwritten (or deallocated) the original string data when a later ENGINE operation tries to use the stored values. [Götz Babin-Ebell <babinebell@trustcenter.de>] *) Improve diagnostics in file reading and command-line digests. [Ben Laurie aided and abetted by Solar Designer <solar@openwall.com>] *) Add AES modes CFB and OFB to the object database. Correct an error in AES-CFB decryption. [Richard Levitte] *) Remove most calls to EVP_CIPHER_CTX_cleanup() in evp_enc.c, this allows existing EVP_CIPHER_CTX structures to be reused after calling EVP_*Final(). This behaviour is used by encryption BIOs and some applications. This has the side effect that applications must explicitly clean up cipher contexts with EVP_CIPHER_CTX_cleanup() or they will leak memory. [Steve Henson] *) Check the values of dna and dnb in bn_mul_recursive before calling bn_mul_comba (a non zero value means the a or b arrays do not contain n2 elements) and fallback to bn_mul_normal if either is not zero. [Steve Henson] *) Fix escaping of non-ASCII characters when using the -subj option of the "openssl req" command line tool. (Robert Joop <joop@fokus.gmd.de>) [Lutz Jaenicke] *) Make object definitions compliant to LDAP (RFC2256): SN is the short form for "surname", serialNumber has no short form. Use "mail" as the short name for "rfc822Mailbox" according to RFC2798; therefore remove "mail" short name for "internet 7". The OID for unique identifiers in X509 certificates is x500UniqueIdentifier, not uniqueIdentifier. Some more OID additions. (Michael Bell <michael.bell@rz.hu-berlin.de>) [Lutz Jaenicke] *) Add an "init" command to the ENGINE config module and auto initialize ENGINEs. 
Without any "init" command the ENGINE will be initialized after all ctrl commands have been executed on it. If init=1 the ENGINE is initialized at that point (ctrls before that point are run on the uninitialized ENGINE and after on the initialized one). If init=0 then the ENGINE will not be initialized at all.
    [Steve Henson]

 *) Fix the 'app_verify_callback' interface so that the user-defined argument is actually passed to the callback: In the SSL_CTX_set_cert_verify_callback() prototype, the callback declaration has been changed from

        int (*cb)()

    into

        int (*cb)(X509_STORE_CTX *, void *);

    in ssl_verify_cert_chain (ssl/ssl_cert.c), the call

        i=s->ctx->app_verify_callback(&ctx)

    has been changed into

        i=s->ctx->app_verify_callback(&ctx, s->ctx->app_verify_arg).

    To update applications using SSL_CTX_set_cert_verify_callback(), a dummy argument can be added to their callback functions.
    [D. K. Smetters <smetters@parc.xerox.com>]

 *) Added the '4758cca' ENGINE to support IBM 4758 cards.
    [Maurice Gittens <maurice@gittens.nl>, touchups by Geoff Thorpe]

 *) Add an OPENSSL_LOAD_CONF define which will cause OpenSSL_add_all_algorithms() to load the openssl.cnf config file. This allows older applications to transparently support certain OpenSSL features: such as crypto acceleration and dynamic ENGINE loading. Two new functions OPENSSL_add_all_algorithms_noconf() which will never load the config file and OPENSSL_add_all_algorithms_conf() which will always load it have also been added.
    [Steve Henson]

 *) Add the OFB, CFB and CTR (all with 128 bit feedback) to AES. Adjust NIDs and EVP layer.
    [Stephen Sprunk <stephen@sprunk.org> and Richard Levitte]

 *) Config modules support in openssl utility. Most commands now load modules from the config file, though in a few (such as version) this isn't done because it couldn't be used for anything.
In the case of ca and req the config file used is the same as the utility itself: that is the -config command line option can be used to specify an alternative file.
    [Steve Henson]

 *) Move default behaviour from OPENSSL_config(). If appname is NULL, use "openssl_conf"; if filename is NULL, use the default openssl config file.
    [Steve Henson]

 *) Add an argument to OPENSSL_config() to allow the use of an alternative config section name. Add a new flag to tolerate a missing config file and move code to CONF_modules_load_file().
    [Steve Henson]

 *) Support for crypto accelerator cards from Accelerated Encryption Processing. (Use engine 'aep') The support was copied from 0.9.6c [engine] and adapted/corrected to work with the new engine framework.
    [AEP Inc. and Richard Levitte]

 *) Support for SureWare crypto accelerator cards from Baltimore Technologies. (Use engine 'sureware') The support was copied from 0.9.6c [engine] and adapted to work with the new engine framework.
    [Richard Levitte]

 *) Have the CHIL engine fork-safe (as defined by nCipher) and actually make the newer ENGINE framework commands for the CHIL engine work.
    [Toomas Kiisk <vix@cyber.ee> and Richard Levitte]

 *) Make it possible to produce shared libraries on ReliantUNIX.
    [Robert Dahlem <Robert.Dahlem@ffm2.siemens.de> via Richard Levitte]

 *) Add the configuration target debug-linux-ppro. Make 'openssl rsa' use the general key loading routines implemented in apps.c, and make those routines able to handle the key format FORMAT_NETSCAPE and the variant FORMAT_IISSGC.
    [Toomas Kiisk <vix@cyber.ee> via Richard Levitte]

 *) Fix a crashbug and a logic bug in hwcrhk_load_pubkey().
    [Toomas Kiisk <vix@cyber.ee> via Richard Levitte]

 *) Add -keyform to rsautl, and document -engine.
    [Richard Levitte, inspired by Toomas Kiisk <vix@cyber.ee>]

 *) Change BIO_new_file (crypto/bio/bss_file.c) to use new BIO_R_NO_SUCH_FILE error code rather than the generic ERR_R_SYS_LIB error code if fopen() fails with ENOENT.
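As a sketch of how the config-module and ENGINE "init" entries above fit together, the following openssl.cnf fragment is illustrative only; the engine name "foo" and the shared-library path are hypothetical, and the exact keys accepted depend on the engine:

```ini
# openssl.cnf fragment -- hypothetical engine, for illustration only
openssl_conf = openssl_init

[openssl_init]
engines = engine_section

[engine_section]
foo = foo_engine

[foo_engine]
# ctrl commands listed before "init" run on the uninitialized ENGINE;
# init = 1 initializes the ENGINE at this point, init = 0 never does
SO_PATH = /usr/lib/engines/libfoo.so
init = 1
```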
[Ben Laurie]

 *) Add new functions

        ERR_peek_last_error
        ERR_peek_last_error_line
        ERR_peek_last_error_line_data

    These are similar to

        ERR_peek_error
        ERR_peek_error_line
        ERR_peek_error_line_data,

    but report on the latest error recorded rather than the first one still in the error queue.
    [Ben Laurie, Bodo Moeller]

 *) default_algorithms option in ENGINE config module. This allows things like:

        default_algorithms = ALL
        default_algorithms = RSA, DSA, RAND, CIPHERS, DIGESTS

    [Steve Henson]

 *) Preliminary ENGINE config module.
    [Steve Henson]

 *) New experimental application configuration code.
    [Steve Henson]

 *) Change the AES code to follow the same name structure as all other symmetric ciphers, and behave the same way. Move everything to the directory crypto/aes, thereby obsoleting crypto/rijndael.
    [Stephen Sprunk <stephen@sprunk.org> and Richard Levitte]

 *) SECURITY: remove unsafe setjmp/signal interaction from ui_openssl.c.
    [Ben Laurie and Theo de Raadt]

 *) Add option to output public keys in req command.
    [Massimiliano Pala madwolf@openca.org]

 *) Use wNAFs in EC_POINTs_mul() for improved efficiency (up to about 10% better than before for P-192 and P-224).
    [Bodo Moeller]

 *) New functions/macros

        SSL_CTX_set_msg_callback(ctx, cb)
        SSL_CTX_set_msg_callback_arg(ctx, arg)
        SSL_set_msg_callback(ssl, cb)
        SSL_set_msg_callback_arg(ssl, arg)

    to request calling a callback function

        void cb(int write_p, int version, int content_type,
                const void *buf, size_t len, SSL *ssl, void *arg)

    whenever a protocol message has been completely received (write_p == 0) or sent (write_p == 1). Here 'version' is the protocol version according to which the SSL library interprets the current protocol message (SSL2_VERSION, SSL3_VERSION, or TLS1_VERSION). 'content_type' is 0 in the case of SSL 2.0, or the content type as defined in the SSL 3.0/TLS 1.0 protocol specification (change_cipher_spec(20), alert(21), handshake(22)).
'buf' and 'len' point to the actual message, 'ssl' to the SSL object, and 'arg' is the application-defined value set by SSL[_CTX]_set_msg_callback_arg(). 'openssl s_client' and 'openssl s_server' have new '-msg' options to enable a callback that displays all protocol messages.
    [Bodo Moeller]

 *) Change the shared library support so shared libraries are built as soon as the corresponding static library is finished, and thereby get openssl and the test programs linked against the shared library. This still only happens when the keyword "shared" has been given to the configuration scripts.

    NOTE: shared library support is still an experimental thing, and backward binary compatibility is still not guaranteed.
    ["Maciej W. Rozycki" <macro@ds2.pg.gda.pl> and Richard Levitte]

 *) Add support for Subject Information Access extension.
    [Peter Sylvester <Peter.Sylvester@EdelWeb.fr>]

 *) Make BUF_MEM_grow() behaviour more consistent: Initialise to zero additional bytes when new memory had to be allocated, not just when reusing an existing buffer.
    [Bodo Moeller]

 *) New command line and configuration option 'utf8' for the req command. This allows field values to be specified as UTF8 strings.
    [Steve Henson]

 *) Add -multi and -mr options to "openssl speed" - giving multiple parallel runs for the former and machine-readable output for the latter.
    [Ben Laurie]

 *) Add '-noemailDN' option to 'openssl ca'. This prevents inclusion of the e-mail address in the DN (i.e., it will go into a certificate extension only). The new configuration file option 'email_in_dn = no' has the same effect.
    [Massimiliano Pala madwolf@openca.org]

 *) Change all functions with names starting with des_ to be starting with DES_ instead. Add wrappers that are compatible with libdes, but are named _ossl_old_des_*. Finally, add macros that map the des_* symbols to the corresponding _ossl_old_des_* if libdes compatibility is desired.
If OpenSSL 0.9.6c compatibility is desired, the des_* symbols will be mapped to DES_*, with one exception.

    Since we provide two compatibility mappings, the user needs to define the macro OPENSSL_DES_LIBDES_COMPATIBILITY if libdes compatibility is desired. The default (i.e., when that macro isn't defined) is OpenSSL 0.9.6c compatibility.

    There are also macros that enable and disable the support of old des functions altogether. Those are OPENSSL_ENABLE_OLD_DES_SUPPORT and OPENSSL_DISABLE_OLD_DES_SUPPORT. If none or both of those are defined, the default will apply: to support the old des routines.

    In either case, one must include openssl/des.h to get the correct definitions. Do not try to just include openssl/des_old.h, that won't work.

    NOTE: This is a major break of an old API into a new one. Software authors are encouraged to switch to the DES_ style functions. Some time in the future, des_old.h and the libdes compatibility functions will be disabled (i.e. OPENSSL_DISABLE_OLD_DES_SUPPORT will be the default), and then completely removed.
    [Richard Levitte]

 *) Test for certificates which contain unsupported critical extensions. If such a certificate is found during a verify operation it is rejected by default: this behaviour can be overridden by either handling the new error X509_V_ERR_UNHANDLED_CRITICAL_EXTENSION or by setting the verify flag X509_V_FLAG_IGNORE_CRITICAL. A new function X509_supported_extension() has also been added which returns 1 if a particular extension is supported.
    [Steve Henson]

 *) Modify the behaviour of EVP cipher functions in similar way to digests to retain compatibility with existing code.
    [Steve Henson]

 *) Modify the behaviour of EVP_DigestInit() and EVP_DigestFinal() to retain compatibility with existing code. In particular the 'ctx' parameter does not have to be initialized before the call to EVP_DigestInit() and it is tidied up after a call to EVP_DigestFinal(). New function EVP_DigestFinal_ex() which does not tidy up the ctx.
Similarly function EVP_MD_CTX_copy() changed to not require the destination to be initialized, and new function EVP_MD_CTX_copy_ex() added which requires the destination to be valid. Modify all the OpenSSL digest calls to use EVP_DigestInit_ex(), EVP_DigestFinal_ex() and EVP_MD_CTX_copy_ex().
    [Steve Henson]

 *) Change ssl3_get_message (ssl/s3_both.c) and the functions using it so that complete 'Handshake' protocol structures are kept in memory instead of overwriting 'msg_type' and 'length' with 'body' data.
    [Bodo Moeller]

 *) Add an implementation of SSL_add_dir_cert_subjects_to_stack for Win32.
    [Massimo Santin via Richard Levitte]

 *) Major restructuring to the underlying ENGINE code. This includes reduction of linker bloat, separation of pure "ENGINE" manipulation (initialisation, etc) from functionality dealing with implementations of specific crypto interfaces. This change also introduces integrated support for symmetric ciphers and digest implementations - so ENGINEs can now accelerate these by providing EVP_CIPHER and EVP_MD implementations of their own. This is detailed in crypto/engine/README as it couldn't be adequately described here. However, there are a few API changes worth noting - some RSA, DSA, DH, and RAND functions that were changed in the original introduction of ENGINE code have now reverted back - the hooking from this code to ENGINE is now a good deal more passive and at run-time, operations deal directly with RSA_METHODs, DSA_METHODs (etc) as they did before, rather than dereferencing through an ENGINE pointer any more. Also, the ENGINE functions dealing with BN_MOD_EXP[_CRT] handlers have been removed - they were not being used by the framework as there is no concept of a BIGNUM_METHOD and they could not be generalised to the new 'ENGINE_TABLE' mechanism that underlies the new code. Similarly, ENGINE_cpy() has been removed as it cannot be consistently defined in the new code.
[Geoff Thorpe] *) Change ASN1_GENERALIZEDTIME_check() to allow fractional seconds. [Steve Henson] *) Change mkdef.pl to sort symbols that get the same entry number, and make sure the automatically generated functions ERR_load_* become part of libeay.num as well. [Richard Levitte] *) New function SSL_renegotiate_pending(). This returns true once renegotiation has been requested (either SSL_renegotiate() call or HelloRequest/ClientHello received from the peer) and becomes false once a handshake has been completed. (For servers, SSL_renegotiate() followed by SSL_do_handshake() sends a HelloRequest, but does not ensure that a handshake takes place. SSL_renegotiate_pending() is useful for checking if the client has followed the request.) [Bodo Moeller] *) New SSL option SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION. By default, clients may request session resumption even during renegotiation (if session ID contexts permit); with this option, session resumption is possible only in the first handshake. SSL_OP_ALL is now 0x00000FFFL instead of 0x000FFFFFL. This makes more bits available for options that should not be part of SSL_OP_ALL (such as SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION). [Bodo Moeller] *) Add some demos for certificate and certificate request creation. [Steve Henson] *) Make maximum certificate chain size accepted from the peer application settable (SSL*_get/set_max_cert_list()), as proposed by "Douglas E. Engert" <deengert@anl.gov>. [Lutz Jaenicke] *) Add support for shared libraries for Unixware-7 (Boyd Lynn Gerber <gerberb@zenez.com>). [Lutz Jaenicke] *) Add a "destroy" handler to ENGINEs that allows structural cleanup to be done prior to destruction. Use this to unload error strings from ENGINEs that load their own error strings. NB: This adds two new API functions to "get" and "set" this destroy handler in an ENGINE. 
[Geoff Thorpe] *) Alter all existing ENGINE implementations (except "openssl" and "openbsd") to dynamically instantiate their own error strings. This makes them more flexible to be built both as statically-linked ENGINEs and self-contained shared-libraries loadable via the "dynamic" ENGINE. Also, add stub code to each that makes building them as self-contained shared-libraries easier (see README.ENGINE). [Geoff Thorpe] *) Add a "dynamic" ENGINE that provides a mechanism for binding ENGINE implementations into applications that are completely implemented in self-contained shared-libraries. The "dynamic" ENGINE exposes control commands that can be used to configure what shared-library to load and to control aspects of the way it is handled. Also, made an update to the README.ENGINE file that brings its information up-to-date and provides some information and instructions on the "dynamic" ENGINE (ie. how to use it, how to build "dynamic"-loadable ENGINEs, etc). [Geoff Thorpe] *) Make it possible to unload ranges of ERR strings with a new "ERR_unload_strings" function. [Geoff Thorpe] *) Add a copy() function to EVP_MD. [Ben Laurie] *) Make EVP_MD routines take a context pointer instead of just the md_data void pointer. [Ben Laurie] *) Add flags to EVP_MD and EVP_MD_CTX. EVP_MD_FLAG_ONESHOT indicates that the digest can only process a single chunk of data (typically because it is provided by a piece of hardware). EVP_MD_CTX_FLAG_ONESHOT indicates that the application is only going to provide a single chunk of data, and hence the framework needn't accumulate the data for oneshot drivers. [Ben Laurie] *) As with "ERR", make it possible to replace the underlying "ex_data" functions. This change also alters the storage and management of global ex_data state - it's now all inside ex_data.c and all "class" code (eg. RSA, BIO, SSL_CTX, etc) no longer stores its own STACKS and per-class index counters. 
The API functions that use this state have been changed to take a "class_index" rather than pointers to the class's local STACK and counter, and there is now an API function to dynamically create new classes. This centralisation allows us to (a) plug a lot of the thread-safety problems that existed, and (b) makes it possible to clean up all allocated state using "CRYPTO_cleanup_all_ex_data()". W.r.t. (b) such data would previously have always leaked in application code and workarounds were in place to make the memory debugging turn a blind eye to it. Application code that doesn't use this new function will still leak as before, but their memory debugging output will announce it now rather than letting it slide. Besides the addition of CRYPTO_cleanup_all_ex_data(), another API change induced by the "ex_data" overhaul is that X509_STORE_CTX_init() now has a return value to indicate success or failure. [Geoff Thorpe] *) Make it possible to replace the underlying "ERR" functions such that the global state (2 LHASH tables and 2 locks) is only used by the "default" implementation. This change also adds two functions to "get" and "set" the implementation prior to it being automatically set the first time any other ERR function takes place. Ie. an application can call "get", pass the return value to a module it has just loaded, and that module can call its own "set" function using that value. This means the module's "ERR" operations will use (and modify) the error state in the application and not in its own statically linked copy of OpenSSL code. [Geoff Thorpe] *) Give DH, DSA, and RSA types their own "**_up_ref()" function to increment reference counts. This performs normal REF_PRINT/REF_CHECK macros on the operation, and provides a more encapsulated way for external code (crypto/evp/ and ssl/) to do this. Also changed the evp and ssl code to use these functions rather than manually incrementing the counts. 
Also rename "DSO_up()" function to more descriptive "DSO_up_ref()".
    [Geoff Thorpe]

 *) Add EVP test program.
    [Ben Laurie]

 *) Add symmetric cipher support to ENGINE. Expect the API to change!
    [Ben Laurie]

 *) New CRL functions: X509_CRL_set_version(), X509_CRL_set_issuer_name(), X509_CRL_set_lastUpdate(), X509_CRL_set_nextUpdate(), X509_CRL_sort(), X509_REVOKED_set_serialNumber(), and X509_REVOKED_set_revocationDate(). These allow a CRL to be built without having to access X509_CRL fields directly. Modify 'ca' application to use new functions.
    [Steve Henson]

 *) Move SSL_OP_TLS_ROLLBACK_BUG out of the SSL_OP_ALL list of recommended bug workarounds. Rollback attack detection is a security feature. The problem will only arise on OpenSSL servers when TLSv1 is not available (sslv3_server_method() or SSL_OP_NO_TLSv1). Software authors not wanting to support TLSv1 will have special reasons for their choice and can explicitly enable this option.
    [Bodo Moeller, Lutz Jaenicke]

 *) Rationalise EVP so it can be extended: don't include a union of cipher/digest structures, add init/cleanup functions for EVP_MD_CTX (similar to those existing for EVP_CIPHER_CTX). Usage example:

        EVP_MD_CTX md;
        EVP_MD_CTX_init(&md);             /* new function call */
        EVP_DigestInit(&md, EVP_sha1());
        EVP_DigestUpdate(&md, in, len);
        EVP_DigestFinal(&md, out, NULL);
        EVP_MD_CTX_cleanup(&md);          /* new function call */

    [Ben Laurie]

 *) Make DES key schedule conform to the usual scheme, as well as correcting its structure. This means that calls to DES functions now have to pass a pointer to a des_key_schedule instead of a plain des_key_schedule (which was actually always a pointer anyway): E.g.,

        des_key_schedule ks;
        des_set_key_checked(..., &ks);
        des_ncbc_encrypt(..., &ks, ...);

    (Note that a later change renames 'des_...' into 'DES_...'.)
    [Ben Laurie]

 *) Initial reduction of linker bloat: the use of some functions, such as PEM causes large amounts of unused functions to be linked in due to poor organisation.
For example pem_all.c contains every PEM function which has a knock on effect of linking in large amounts of (unused) ASN1 code. Grouping together similar functions and splitting unrelated functions prevents this.
    [Steve Henson]

 *) Cleanup of EVP macros.
    [Ben Laurie]

 *) Change historical references to {NID,SN,LN}_des_ede and ede3 to add the correct _ecb suffix.
    [Ben Laurie]

 *) Add initial OCSP responder support to ocsp application. The revocation information is handled using the text based index used by the ca application. The responder can either handle requests generated internally, supplied in files (for example via a CGI script) or using an internal minimal server.
    [Steve Henson]

 *) Add configuration choices to get zlib compression for TLS.
    [Richard Levitte]

 *) Changes to Kerberos SSL for RFC 2712 compliance:
    1.  Implemented real KerberosWrapper, instead of just using KRB5 AP_REQ message. [Thanks to Simon Wilkinson <sxw@sxw.org.uk>]
    2.  Implemented optional authenticator field of KerberosWrapper.

    Added openssl-style ASN.1 macros for Kerberos ticket, ap_req, and authenticator structs; see crypto/krb5/. Generalized Kerberos calls to support multiple Kerberos libraries.
    [Vern Staats <staatsvr@asc.hpc.mil>, Jeffrey Altman <jaltman@columbia.edu> via Richard Levitte]

 *) Cause 'openssl speed' to use fully hard-coded DSA keys as it already does with RSA. testdsa.h now has 'priv_key/pub_key' values for each of the key sizes rather than having just parameters (and 'speed' generating keys each time).
    [Geoff Thorpe]

 *) Speed up EVP routines.
Before:

    encrypt
    type              8 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    des-cbc          4408.85k     5560.51k     5778.46k     5862.20k     5825.16k
    des-cbc          4389.55k     5571.17k     5792.23k     5846.91k     5832.11k
    des-cbc          4394.32k     5575.92k     5807.44k     5848.37k     5841.30k
    decrypt
    des-cbc          3482.66k     5069.49k     5496.39k     5614.16k     5639.28k
    des-cbc          3480.74k     5068.76k     5510.34k     5609.87k     5635.52k
    des-cbc          3483.72k     5067.62k     5504.60k     5708.01k     5724.80k

    After:

    encrypt
    des-cbc          4660.16k     5650.19k     5807.19k     5827.13k     5783.32k
    decrypt
    des-cbc          3624.96k     5258.21k     5530.91k     5624.30k     5628.26k
    [Ben Laurie]

 *) Added the OS2-EMX target.
    ["Brian Havard" <brianh@kheldar.apana.org.au> and Richard Levitte]

 *) Rewrite apps to use NCONF routines instead of the old CONF. New functions to support NCONF routines in extension code. New function CONF_set_nconf() to allow functions which take an NCONF to also handle the old LHASH structure: this means that the old CONF compatible routines can be retained (in particular wrt extensions) without having to duplicate the code. New function X509V3_add_ext_nconf_sk to add extensions to a stack.
    [Steve Henson]

 *) Enhance the general user interface with mechanisms for inner control and with possibilities to have yes/no kind of prompts.
    [Richard Levitte]

 *) Change all calls to low level digest routines in the library and applications to use EVP. Add missing calls to HMAC_cleanup() and don't assume HMAC_CTX can be copied using memcpy().
    [Verdon Walker <VWalker@novell.com>, Steve Henson]

 *) Add the possibility to control engines through control names but with arbitrary arguments instead of just a string. Change the key loaders to take a UI_METHOD instead of a callback function pointer. NOTE: this breaks binary compatibility with earlier versions of OpenSSL [engine]. Adapt the nCipher code for these new conditions and add a card insertion callback.
[Richard Levitte]

 *) Enhance the general user interface with mechanisms to better support dialog box interfaces, application-defined prompts, the possibility to use defaults (for example default passwords from somewhere else) and interrupts/cancellations.
    [Richard Levitte]

 *) Tidy up PKCS#12 attribute handling. Add support for the CSP name attribute in PKCS#12 files, add new -CSP option to pkcs12 utility.
    [Steve Henson]

 *) Fix a memory leak in 'sk_dup()' in the case reallocation fails. (Also tidy up some unnecessarily weird code in 'sk_new()').
    [Geoff, reported by Diego Tartara <dtartara@novamens.com>]

 *) Change the key loading routines for ENGINEs to use the same kind of callback (pem_password_cb) as all other routines that need this kind of callback.
    [Richard Levitte]

 *) Increase ENTROPY_NEEDED to 32 bytes, as Rijndael can operate with 256 bit (=32 byte) keys. Of course seeding with more entropy bytes than this minimum value is recommended.
    [Lutz Jaenicke]

 *) New random seeder for OpenVMS, using the system process statistics that are easily reachable.
    [Richard Levitte]

 *) Windows apparently can't transparently handle global variables defined in DLLs. Initialisations such as:

        const ASN1_ITEM *it = &ASN1_INTEGER_it;

    won't compile. This is used by any applications that need to declare their own ASN1 modules. This was fixed by adding the option EXPORT_VAR_AS_FN to all Win32 platforms, although this isn't strictly needed for static libraries under Win32.
    [Steve Henson]

 *) New functions X509_PURPOSE_set() and X509_TRUST_set() to handle setting of purpose and trust fields. New X509_STORE trust and purpose functions and tidy up setting in other SSL functions.
    [Steve Henson]

 *) Add copies of X509_STORE_CTX fields and callbacks to X509_STORE structure. These are inherited by X509_STORE_CTX when it is initialised.
This allows various defaults to be set in the X509_STORE structure (such as flags for CRL checking and custom purpose or trust settings) for functions which only use X509_STORE_CTX internally such as S/MIME. Modify X509_STORE_CTX_purpose_inherit() so it only sets purposes and trust settings if they are not set in X509_STORE. This allows X509_STORE purposes and trust (in S/MIME for example) to override any set by default. Add command line options for CRL checking to smime, s_client and s_server applications. [Steve Henson] *) Initial CRL based revocation checking. If the CRL checking flag(s) are set then the CRL is looked up in the X509_STORE structure and its validity and signature checked, then if the certificate is found in the CRL the verify fails with a revoked error. Various new CRL related callbacks added to X509_STORE_CTX structure. Command line options added to 'verify' application to support this. This needs some additional work, such as being able to handle multiple CRLs with different times, extension based lookup (rather than just by subject name) and ultimately more complete V2 CRL extension handling. [Steve Henson] *) Add a general user interface API (crypto/ui/). This is designed to replace things like des_read_password and friends (backward compatibility functions using this new API are provided). The purpose is to remove prompting functions from the DES code section as well as provide for prompting through dialog boxes in a window system and the like. [Richard Levitte] *) Add "ex_data" support to ENGINE so implementations can add state at a per-structure level rather than having to store it globally. [Geoff] *) Make it possible for ENGINE structures to be copied when retrieved by ENGINE_by_id() if the ENGINE specifies a new flag: ENGINE_FLAGS_BY_ID_COPY. This causes the "original" ENGINE structure to act like a template, analogous to the RSA vs. RSA_METHOD type of separation. 
Because of this, operational state can be localised to each ENGINE structure, despite the fact they all share the same "methods". New ENGINE structures returned in this case have no functional references and the return value is the single structural reference. This matches the single structural reference returned by ENGINE_by_id() normally, when it is incremented on the pre-existing ENGINE structure.
    [Geoff]

 *) Fix ASN1 decoder when decoding type ANY and V_ASN1_OTHER: since this needs to match any other type at all we need to manually clear the tag cache.
    [Steve Henson]

 *) Changes to the "openssl engine" utility to include:
    - verbosity levels ('-v', '-vv', and '-vvv') that provide information about an ENGINE's available control commands.
    - executing control commands from command line arguments using the '-pre' and '-post' switches. '-post' is only used if '-t' is specified and the ENGINE is successfully initialised. The syntax for the individual commands is colon-separated, for example:

        openssl engine chil -pre FORK_CHECK:0 -pre SO_PATH:/lib/test.so

    [Geoff]

 *) New dynamic control command support for ENGINEs. ENGINEs can now declare their own commands (numbers), names (strings), descriptions, and input types for run-time discovery by calling applications. A subset of these commands are implicitly classed as "executable" depending on their input type, and only these can be invoked through the new string-based API function ENGINE_ctrl_cmd_string(). (Eg. this can be based on user input, config files, etc). The distinction is that "executable" commands cannot return anything other than a boolean result and can only support numeric or string input, whereas some discoverable commands may only be for direct use through ENGINE_ctrl(), eg. supporting the exchange of binary data, function pointers, or other custom uses.
The "executable" commands are to support parameterisations of ENGINE behaviour that can be unambiguously defined by ENGINEs and used consistently across any OpenSSL-based application. Commands have been added to all the existing hardware-supporting ENGINEs, noticeably "SO_PATH" to allow control over shared-library paths without source code alterations. [Geoff] *) Changed all ENGINE implementations to dynamically allocate their ENGINEs rather than declaring them statically. Apart from this being necessary with the removal of the ENGINE_FLAGS_MALLOCED distinction, this also allows the implementations to compile without using the internal engine_int.h header. [Geoff] *) Minor adjustment to "rand" code. RAND_get_rand_method() now returns a 'const' value. Any code that should be able to modify a RAND_METHOD should already have non-const pointers to it (ie. they should only modify their own ones). [Geoff] *) Made a variety of little tweaks to the ENGINE code. - "atalla" and "ubsec" string definitions were moved from header files to C code. "nuron" string definitions were placed in variables rather than hard-coded - allowing parameterisation of these values later on via ctrl() commands. - Removed unused "#if 0"'d code. - Fixed engine list iteration code so it uses ENGINE_free() to release structural references. - Constified the RAND_METHOD element of ENGINE structures. - Constified various get/set functions as appropriate and added missing functions (including a catch-all ENGINE_cpy that duplicates all ENGINE values onto a new ENGINE except reference counts/state). - Removed NULL parameter checks in get/set functions. Setting a method or function to NULL is a way of cancelling out a previously set value. Passing a NULL ENGINE parameter is just plain stupid anyway and doesn't justify the extra error symbols and code. - Deprecate the ENGINE_FLAGS_MALLOCED define and move the area for flags from engine_int.h to engine.h. 
- Changed prototypes for ENGINE handler functions (init(), finish(), ctrl(), key-load functions, etc) to take an (ENGINE*) parameter.
    [Geoff]

 *) Implement binary inversion algorithm for BN_mod_inverse in addition to the algorithm using long division. The binary algorithm can be used only if the modulus is odd. On 32-bit systems, it is faster only for relatively small moduli (roughly 20-30% for 128-bit moduli, roughly 5-15% for 256-bit moduli), so we use it only for moduli up to 450 bits. In 64-bit environments, the binary algorithm appears to be advantageous for much longer moduli; here we use it for moduli up to 2048 bits.
    [Bodo Moeller]

 *) Rewrite CHOICE field setting in ASN1_item_ex_d2i(). The old code could not support the combine flag in choice fields.
    [Steve Henson]

 *) Add a 'copy_extensions' option to the 'ca' utility. This copies extensions from a certificate request to the certificate.
    [Steve Henson]

 *) Allow multiple 'certopt' and 'nameopt' options to be separated by commas. Add 'nameopt' and 'certopt' options to the 'ca' config file: this allows the display of the certificate about to be signed to be customised, to allow certain fields to be included or excluded and extension details. The old system didn't display multicharacter strings properly, omitted fields not in the policy and couldn't display additional details such as extensions.
    [Steve Henson]

 *) Function EC_POINTs_mul for multiple scalar multiplication of an arbitrary number of elliptic curve points

        \sum scalars[i]*points[i],

    optionally including the generator defined for the EC_GROUP:

        scalar*generator + \sum scalars[i]*points[i].

    EC_POINT_mul is a simple wrapper function for the typical case that the point list has just one item (besides the optional generator).
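The binary inversion algorithm mentioned in the BN_mod_inverse entry above can be sketched with machine-word integers. The function below is illustrative only (its name is not an OpenSSL API, and it is not the BIGNUM implementation); it assumes m is odd, 0 < a < m, and gcd(a, m) == 1:

```c
#include <stdint.h>

/* Toy sketch of the binary ("right-shift") modular inversion idea for
 * an odd modulus m.  Illustrative 64-bit version, not OpenSSL code.
 * Preconditions: m odd, 0 < a < m, gcd(a, m) == 1, m < 2^63. */
static uint64_t binary_mod_inverse(uint64_t a, uint64_t m)
{
    uint64_t u = a, v = m;
    uint64_t x1 = 1, x2 = 0;  /* invariants: x1*a == u, x2*a == v (mod m) */

    while (u != 1 && v != 1) {
        while ((u & 1) == 0) {        /* halve u; divide x1 by 2 mod m */
            u >>= 1;
            x1 = (x1 & 1) ? (x1 + m) >> 1 : x1 >> 1;
        }
        while ((v & 1) == 0) {        /* halve v; divide x2 by 2 mod m */
            v >>= 1;
            x2 = (x2 & 1) ? (x2 + m) >> 1 : x2 >> 1;
        }
        if (u >= v) {                 /* subtract, keeping coefficients mod m */
            u -= v;
            x1 = (x1 >= x2) ? x1 - x2 : x1 + m - x2;
        } else {
            v -= u;
            x2 = (x2 >= x1) ? x2 - x1 : x2 + m - x1;
        }
    }
    return (u == 1) ? x1 : x2;
}
```

The halving step is where the odd-modulus restriction comes from: when x1 is odd, x1 + m is even (since m is odd), so the right shift is an exact division by 2 modulo m. The word-sized divisions of the long-division algorithm are replaced entirely by shifts and subtractions.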
[Bodo Moeller]

 *) First EC_METHODs for curves over GF(p):

    EC_GFp_simple_method() uses the basic BN_mod_mul and BN_mod_sqr operations and provides various method functions that can also operate with faster implementations of modular arithmetic.

    EC_GFp_mont_method() reuses most functions that are part of EC_GFp_simple_method, but uses Montgomery arithmetic.
    [Bodo Moeller; point addition and point doubling implementation directly derived from source code provided by Lenka Fibikova <fibikova@exp-math.uni-essen.de>]

 *) Framework for elliptic curves (crypto/ec/ec.h, crypto/ec/ec_lcl.h, crypto/ec/ec_lib.c):

    Curves are EC_GROUP objects (with an optional group generator) based on EC_METHODs that are built into the library. Points are EC_POINT objects based on EC_GROUP objects. Most of the framework would be able to handle curves over arbitrary finite fields, but as there are no obvious types for fields other than GF(p), some functions are limited to that for now.
    [Bodo Moeller]

 *) Add the -HTTP option to s_server. It is similar to -WWW, but requires that the file contains a complete HTTP response.
    [Richard Levitte]

 *) Add the ec directory to mkdef.pl and mkfiles.pl. In mkdef.pl change the def and num file printf format specifier from "%-40sXXX" to "%-39s XXX". The latter will always guarantee a space after the field while the former will cause them to run together if the field is 40 or more characters long.
    [Steve Henson]

 *) Constify the cipher and digest 'method' functions and structures and modify related functions to take constant EVP_MD and EVP_CIPHER pointers.
    [Steve Henson]

 *) Hide BN_CTX structure details in bn_lcl.h instead of publishing them in <openssl/bn.h>. Also further increase BN_CTX_NUM to 32.
    [Bodo Moeller]

 *) Modify EVP_Digest*() routines so they now return values. Although the internal software routines can never fail, additional hardware versions might.
     [Steve Henson]

  *) Clean up crypto/err/err.h and change some error codes to avoid conflicts: Previously ERR_R_FATAL was too small and coincided with ERR_LIB_PKCS7 (= ERR_R_PKCS7_LIB); it is now 64 instead of 32. ASN1 error codes ERR_R_NESTED_ASN1_ERROR ... ERR_R_MISSING_ASN1_EOS were 4 .. 9, conflicting with ERR_LIB_RSA (= ERR_R_RSA_LIB) ... ERR_LIB_PEM (= ERR_R_PEM_LIB). They are now 58 .. 63 (i.e., just below ERR_R_FATAL). Add new error code 'ERR_R_INTERNAL_ERROR'.
     [Bodo Moeller]

  *) Don't overuse locks in crypto/err/err.c: For data retrieval, CRYPTO_r_lock suffices.
     [Bodo Moeller]

  *) New option '-subj arg' for 'openssl req' and 'openssl ca'. This sets the subject name for a new request or supersedes the subject name in a given request. Formats that can be parsed are 'CN=Some Name, OU=myOU, C=IT' and 'CN=Some Name/OU=myOU/C=IT'. Add options '-batch' and '-verbose' to 'openssl req'.
     [Massimiliano Pala <madwolf@hackmasters.net>]

  *) Introduce the possibility to access global variables through functions on platforms where that's the best way to handle exporting global variables in shared libraries. To enable this functionality, one must configure with "EXPORT_VAR_AS_FN" or define the C macro "OPENSSL_EXPORT_VAR_AS_FUNCTION" in crypto/opensslconf.h (the latter is normally done by Configure or something similar). To implement a global variable, use the macro OPENSSL_IMPLEMENT_GLOBAL in the source file (foo.c) like this:

          OPENSSL_IMPLEMENT_GLOBAL(int,foo)=1;
          OPENSSL_IMPLEMENT_GLOBAL(double,bar);

     To declare a global variable, use the macros OPENSSL_DECLARE_GLOBAL and OPENSSL_GLOBAL_REF in the header file (foo.h) like this:

          OPENSSL_DECLARE_GLOBAL(int,foo);
          #define foo OPENSSL_GLOBAL_REF(foo)
          OPENSSL_DECLARE_GLOBAL(double,bar);
          #define bar OPENSSL_GLOBAL_REF(bar)

     The #defines are very important, and therefore so is including the header file everywhere where the defined globals are used.
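A toy re-creation of the mechanism described above may make it concrete. The macro names here are shortened and are not OpenSSL's exact definitions; the point is only the pattern: the variable lives as a hidden static, an exported accessor function returns its address, and a #define turns every textual use of the name into a call through the accessor:

```c
/* Sketch only -- illustrative macro names, not the OpenSSL API. */
#define IMPLEMENT_GLOBAL(type,name)                           \
    static type _hide_##name;                                 \
    type *_shadow_##name(void) { return &_hide_##name; }      \
    static type _hide_##name   /* tentative definition; may take "= value" */
#define DECLARE_GLOBAL(type,name) type *_shadow_##name(void)
#define GLOBAL_REF(name) (*_shadow_##name())

/* What would normally live in the header (foo.h): */
DECLARE_GLOBAL(int, foo);
#define foo GLOBAL_REF(foo)

/* What would normally live in the source file (foo.c); the trailing
 * "= value" attaches to the second (tentative) definition of the static: */
IMPLEMENT_GLOBAL(int, foo) = 1;

/* Any user of the "variable" now goes through the accessor function,
 * so only a function symbol needs to be exported from a shared library: */
int read_foo(void) { return foo; }
```

This is why including the header (with its #defines) everywhere matters: a translation unit that misses the `#define foo GLOBAL_REF(foo)` line would reference a data symbol that does not exist.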
     The macro OPENSSL_EXPORT_VAR_AS_FUNCTION also affects the definition of ASN.1 items, but that structure is a bit different. The largest change is in util/mkdef.pl which has been enhanced with better and easier to understand logic to choose which symbols should go into the Windows .def files as well as a number of fixes and code cleanup (among others, algorithm keywords are now sorted lexicographically to avoid constant rewrites).
     [Richard Levitte]

  *) In BN_div() keep a copy of the sign of 'num' before writing the result to 'rm' because if rm==num the value will be overwritten and produce the wrong result if 'num' is negative: this caused problems with BN_mod() and BN_nnmod().
     [Steve Henson]

  *) Function OCSP_request_verify(). This checks the signature on an OCSP request and verifies the signer certificate. The signer certificate is just checked for a generic purpose and OCSP request trust settings.
     [Steve Henson]

  *) Add OCSP_check_validity() function to check the validity of OCSP responses. OCSP responses are prepared in real time and may only be a few seconds old. Simply checking that the current time lies between thisUpdate and nextUpdate may reject otherwise valid responses caused by either OCSP responder or client clock inaccuracy. Instead we allow thisUpdate and nextUpdate to fall within a certain period of the current time. The age of the response can also optionally be checked. Two new options -validity_period and -status_age added to ocsp utility.
     [Steve Henson]

  *) If signature or public key algorithm is unrecognized print out its OID rather than just UNKNOWN.
     [Steve Henson]

  *) Change OCSP_cert_to_id() to tolerate a NULL subject certificate and OCSP_cert_id_new() a NULL serialNumber. This allows a partial certificate ID to be generated from the issuer certificate alone which can then be passed to OCSP_id_issuer_cmp().
     [Steve Henson]

  *) New compilation option ASN1_ITEM_FUNCTIONS.
     This causes the new ASN1 modules to export functions returning ASN1_ITEM pointers instead of the ASN1_ITEM structures themselves. This adds several new macros which allow the underlying ASN1 function/structure to be accessed transparently. As a result code should not use ASN1_ITEM references directly (such as &X509_it) but instead use the relevant macros (such as ASN1_ITEM_rptr(X509)). This option is to allow use of the new ASN1 code on platforms where exporting structures is problematical (for example in shared libraries) but exporting functions returning pointers to structures is not.
     [Steve Henson]

  *) Add support for overriding the generation of SSL/TLS session IDs. These callbacks can be registered either in an SSL_CTX or per SSL. The purpose of this is to allow applications to control, if they wish, the arbitrary values chosen for use as session IDs, particularly as it can be useful for session caching in multiple-server environments. A command-line switch for testing this (and any client code that wishes to use such a feature) has been added to "s_server".
     [Geoff Thorpe, Lutz Jaenicke]

  *) Modify mkdef.pl to recognise and parse preprocessor conditionals of the form '#if defined(...) || defined(...) || ...' and '#if !defined(...) && !defined(...) && ...'. This also avoids the growing number of special cases it was previously handling.
     [Richard Levitte]

  *) Additionally, it is now possible to define configuration/platform-specific names (called "system identities"). In the C code, these are prefixed with "OPENSSL_SYSNAME_". e_os2.h will create another macro with the name beginning with "OPENSSL_SYS_", which is determined from "OPENSSL_SYSNAME_*" or compiler-specific macros depending on what is available.
     [Richard Levitte]

  *) New option -set_serial to 'req' and 'x509': this allows the serial number to be specified on the command line.
     Previously self signed certificates were hard coded with serial number 0 and the CA options of 'x509' had to use a serial number in a file which was auto incremented.
     [Steve Henson]

  *) New options to 'ca' utility to support V2 CRL entry extensions. Currently CRL reason, invalidity date and hold instruction are supported. Add new CRL extensions to V3 code and some new objects.
     [Steve Henson]

  *) New function EVP_CIPHER_CTX_set_padding(): this is used to disable standard block padding (aka PKCS#5 padding) in the EVP API, which was previously mandatory. This means that the data is not padded in any way and so the total length must be a multiple of the block size, otherwise an error occurs.
     [Steve Henson]

  *) Initial (incomplete) OCSP SSL support.
     [Steve Henson]

  *) New function OCSP_parse_url(). This splits up a URL into its host, port and path components: primarily to parse OCSP URLs. New -url option to ocsp utility.
     [Steve Henson]

  *) New nonce behavior. The return value of OCSP_check_nonce() now reflects the various checks performed. Applications can decide whether to tolerate certain situations such as an absent nonce in a response when one was present in a request: the ocsp application just prints out a warning. New function OCSP_add1_basic_nonce(): this is to allow responders to include a nonce in a response even if the request is nonce-less.
     [Steve Henson]

  *) Disable stdin buffering in load_cert (apps/apps.c) so that no certs are skipped when using openssl x509 multiple times on a single input file, e.g. "(openssl x509 -out cert1; openssl x509 -out cert2) <certs".
     [Bodo Moeller]

  *) Make ASN1_UTCTIME_set_string() and ASN1_GENERALIZEDTIME_set_string() set string type: to handle setting ASN1_TIME structures. Fix ca utility to correctly initialize revocation date of CRLs.
     [Steve Henson]

  *) Make mkdef.pl recognise all DECLARE_ASN1 macros, change rijndael to aes and add a new 'exist' option to print out symbols that don't appear to exist.
     [Steve Henson]

  *) Additional options to ocsp utility to allow flags to be set and additional certificates supplied.
     [Steve Henson]

  *) Add the option -VAfile to 'openssl ocsp', so the user can give the OCSP client a number of certificates to only verify the response signature against.
     [Richard Levitte]

  *) Update Rijndael code to version 3.0 and change EVP AES ciphers to handle the new API. Currently only ECB, CBC modes supported. Add new AES OIDs. Add TLS AES ciphersuites as described in RFC3268, "Advanced Encryption Standard (AES) Ciphersuites for Transport Layer Security (TLS)". (In beta versions of OpenSSL 0.9.7, these were not enabled by default and were not part of the "ALL" ciphersuite alias because they were not yet official; they could be explicitly requested by specifying the "AESdraft" ciphersuite group alias. In the final release of OpenSSL 0.9.7, the group alias is called "AES" and is part of "ALL".)
     [Ben Laurie, Steve Henson, Bodo Moeller]

  *) New function OCSP_copy_nonce() to copy nonce value (if present) from request to response.
     [Steve Henson]

  *) Functions for OCSP responders. OCSP_request_onereq_count(), OCSP_request_onereq_get0(), OCSP_onereq_get0_id() and OCSP_id_get0_info() extract information from a certificate request. OCSP_response_create() creates a response and optionally adds a basic response structure. OCSP_basic_add1_status() adds a complete single response to a basic response and returns the OCSP_SINGLERESP structure just added (to allow extensions to be included for example). OCSP_basic_add1_cert() adds a certificate to a basic response and OCSP_basic_sign() signs a basic response with various flags. New helper functions ASN1_TIME_check() (checks validity of ASN1_TIME structure) and ASN1_TIME_to_generalizedtime() (converts ASN1_TIME to GeneralizedTime).
     [Steve Henson]

  *) Various new functions. EVP_Digest() combines EVP_Digest{Init,Update,Final}() in a single operation.
     X509_get0_pubkey_bitstr() extracts the public_key structure from a certificate. X509_pubkey_digest() digests the public_key contents: this is used in various key identifiers.
     [Steve Henson]

  *) Make sk_sort() tolerate a NULL argument.
     [Steve Henson, reported by Massimiliano Pala <madwolf@comune.modena.it>]

  *) New OCSP verify flag OCSP_TRUSTOTHER. When set the "other" certificates passed by the function are trusted implicitly. If any of them signed the response then it is assumed to be valid and is not verified.
     [Steve Henson]

  *) In PKCS7_set_type() initialise content_type in PKCS7_ENC_CONTENT to data. This was previously part of the PKCS7 ASN1 code. This was causing problems with OpenSSL created PKCS#12 and PKCS#7 structures.
     [Steve Henson, reported by Kenneth R. Robinette <support@securenetterm.com>]

  *) Add CRYPTO_push_info() and CRYPTO_pop_info() calls to new ASN1 routines: without these tracing memory leaks is very painful. Fix leaks in PKCS12 and PKCS7 routines.
     [Steve Henson]

  *) Make X509_time_adj() cope with the new behaviour of ASN1_TIME_new(). Previously it initialised the 'type' argument to V_ASN1_UTCTIME which effectively meant GeneralizedTime would never be used. Now it is initialised to -1 but X509_time_adj() now has to check the value and use ASN1_TIME_set() if the value is not V_ASN1_UTCTIME or V_ASN1_GENERALIZEDTIME; without this it always uses GeneralizedTime.
     [Steve Henson, reported by Kenneth R. Robinette <support@securenetterm.com>]

  *) Fixes to BN_to_ASN1_INTEGER when bn is zero. This would previously result in a zero length in the ASN1_INTEGER structure which was not consistent with the structure when d2i_ASN1_INTEGER() was used and would cause ASN1_INTEGER_cmp() to fail. Enhance s2i_ASN1_INTEGER() to cope with hex and negative integers. Fix bug in i2a_ASN1_INTEGER() where it did not print out a minus for negative ASN1_INTEGER.
     [Steve Henson]

  *) Add summary printout to ocsp utility.
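The zero-length INTEGER problem fixed in BN_to_ASN1_INTEGER above comes from a DER rule: an INTEGER must always have at least one content octet, so zero encodes as 02 01 00, never as an empty value. An illustrative mini-encoder (invented for this note, not OpenSSL code) shows the rule:

```c
#include <stddef.h>
#include <stdint.h>

/* DER-encode a small non-negative integer into out[] (out must hold at
 * least 11 bytes); returns the total encoded length.  Key points: at
 * least one content octet is always emitted (so 0 becomes 02 01 00),
 * and a leading 0x00 pad keeps values with the top bit set positive. */
size_t der_uint(uint64_t v, unsigned char *out)
{
    unsigned char tmp[9];
    size_t i, n = 0;

    do {                            /* little-endian scratch; at least one octet */
        tmp[n++] = (unsigned char)(v & 0xff);
        v >>= 8;
    } while (v != 0);
    if (tmp[n - 1] & 0x80)          /* top bit set: prepend 0x00 pad octet */
        tmp[n++] = 0x00;

    out[0] = 0x02;                  /* INTEGER tag */
    out[1] = (unsigned char)n;      /* short-form length (n <= 9 here) */
    for (i = 0; i < n; i++)         /* reverse into big-endian content */
        out[2 + i] = tmp[n - 1 - i];
    return 2 + n;
}
```

A decoder comparing such encodings byte-for-byte (as ASN1_INTEGER_cmp effectively does) only works when both sides follow the one-octet-minimum rule, which is exactly what the old zero-length encoding broke.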
     The various functions which convert status values to strings have been renamed to: OCSP_response_status_str(), OCSP_cert_status_str() and OCSP_crl_reason_str() and are no longer static. New options to verify nonce values and to disable verification. OCSP response printout format cleaned up.
     [Steve Henson]

  *) Add additional OCSP certificate checks. These are those specified in RFC2560. This consists of two separate checks: the CA of the certificate being checked must either be the OCSP signer certificate or the issuer of the OCSP signer certificate. In the latter case the OCSP signer certificate must contain the OCSP signing extended key usage. This check is performed by attempting to match the OCSP signer or the OCSP signer CA to the issuerNameHash and issuerKeyHash in the OCSP_CERTID structures of the response.
     [Steve Henson]

  *) Initial OCSP certificate verification added to OCSP_basic_verify() and related routines. This uses the standard OpenSSL certificate verify routines to perform initial checks (just CA validity) and to obtain the certificate chain. Then additional checks will be performed on the chain. Currently the root CA is checked to see if it is explicitly trusted for OCSP signing. This is used to set a root CA as a global signing root: that is any certificate that chains to that CA is an acceptable OCSP signing certificate.
     [Steve Henson]

  *) New '-extfile ...' option to 'openssl ca' for reading X.509v3 extensions from a separate configuration file. As when reading extensions from the main configuration file, the '-extensions ...' option may be used for specifying the section to use.
     [Massimiliano Pala <madwolf@comune.modena.it>]

  *) New OCSP utility. Allows OCSP requests to be generated or read. The request can be sent to a responder and the output parsed, outputted or printed in text form. Not complete yet: still needs to check the OCSP response validity.
     [Steve Henson]

  *) New subcommands for 'openssl ca': 'openssl ca -status <serial>' prints the status of the cert with the given serial number (according to the index file). 'openssl ca -updatedb' updates the expiry status of certificates in the index file.
     [Massimiliano Pala <madwolf@comune.modena.it>]

  *) New '-newreq-nodes' command option to CA.pl. This is like '-newreq', but calls 'openssl req' with the '-nodes' option so that the resulting key is not encrypted.
     [Damien Miller <djm@mindrot.org>]

  *) New configuration for the GNU Hurd.
     [Jonathan Bartlett <johnnyb@wolfram.com> via Richard Levitte]

  *) Initial code to implement OCSP basic response verify. This is currently incomplete. Currently just finds the signer's certificate and verifies the signature on the response.
     [Steve Henson]

  *) New SSLeay_version code SSLEAY_DIR to determine the compiled-in value of OPENSSLDIR. This is available via the new '-d' option to 'openssl version', and is also included in 'openssl version -a'.
     [Bodo Moeller]

  *) Allow defining memory allocation callbacks that will be given file name and line number information in additional arguments (a const char* and an int). The basic functionality remains, as well as the original possibility to just replace malloc(), realloc() and free() by functions that do not know about these additional arguments. To register and find out the current settings for extended allocation functions, the following functions are provided:

          CRYPTO_set_mem_ex_functions
          CRYPTO_set_locked_mem_ex_functions
          CRYPTO_get_mem_ex_functions
          CRYPTO_get_locked_mem_ex_functions

     These work the same way as CRYPTO_set_mem_functions and friends. CRYPTO_get_[locked_]mem_functions now writes 0 where such an extended allocation function is enabled. Similarly, CRYPTO_get_[locked_]mem_ex_functions writes 0 where a conventional allocation function is enabled.
     [Richard Levitte, Bodo Moeller]

  *) Finish off removing the remaining LHASH function pointer casts.
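The extended-allocator idea above can be sketched in a few lines of plain C. The names here are invented for illustration (this is not the CRYPTO_* API itself): an allocation macro smuggles __FILE__ and __LINE__ to a registered callback, so a leak report can say where each block was allocated:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a file/line-aware allocator hook (illustrative names only). */
static void *(*alloc_ex)(size_t, const char *, int);

void *log_alloc_ex(size_t n, const char *file, int line)
{
    void *p = malloc(n);
    /* A real leak tracker would record (p, n, file, line) in a table;
     * here we just report the call site. */
    fprintf(stderr, "alloc %lu bytes at %s:%d\n", (unsigned long)n, file, line);
    return p;
}

void set_alloc_ex(void *(*f)(size_t, const char *, int))
{
    alloc_ex = f;
}

/* Call sites use the macro, which captures the location automatically. */
#define MY_malloc(n) alloc_ex((n), __FILE__, __LINE__)
```

A plain replacement allocator that ignores the extra arguments still fits this scheme, which is why the original malloc()/realloc()/free() replacement interface can coexist with the extended one.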
     There should no longer be any prototype-casting required when using the LHASH abstraction, and any casts that remain are "bugs". See the callback types and macros at the head of lhash.h for details (and "OBJ_cleanup" in crypto/objects/obj_dat.c as an example).
     [Geoff Thorpe]

  *) Add automatic query of EGD sockets in RAND_poll() for the unix variant. If /dev/[u]random devices are not available or do not return enough entropy, EGD style sockets (served by EGD or PRNGD) will automatically be queried. The locations /var/run/egd-pool, /dev/egd-pool, /etc/egd-pool, and /etc/entropy will be queried once each in this sequence; querying stops once enough entropy has been collected, without querying more sockets.
     [Lutz Jaenicke]

  *) Change the Unix RAND_poll() variant to be able to poll several random devices, as specified by DEVRANDOM, until a sufficient amount of data has been collected. We spend at most 10 ms on each file (select timeout) and read in non-blocking mode. DEVRANDOM now defaults to the list "/dev/urandom", "/dev/random", "/dev/srandom" (previously it was just the string "/dev/urandom"), so on typical platforms the 10 ms delay will never occur. Also separate out the Unix variant to its own file, rand_unix.c. For VMS, there's a currently-empty rand_vms.c.
     [Richard Levitte]

  *) Move OCSP client related routines to ocsp_cl.c. These provide utility functions which an application needing to issue a request to an OCSP responder and analyse the response will typically need: as opposed to those which an OCSP responder itself would need, which will be added later. OCSP_request_sign() signs an OCSP request with an API similar to PKCS7_sign(). OCSP_response_status() returns status of OCSP response. OCSP_response_get1_basic() extracts basic response from response. OCSP_resp_find_status(): finds and extracts status information from an OCSP_CERTID structure (which will be created when the request structure is built).
     These are built from lower level functions which work on OCSP_SINGLERESP structures but won't normally be used unless the application wishes to examine extensions in the OCSP response, for example. Replace nonce routines with a pair of functions. OCSP_request_add1_nonce() adds a nonce value and optionally generates a random value. OCSP_check_nonce() checks the validity of the nonce in an OCSP response.
     [Steve Henson]

  *) Change function OCSP_request_add() to OCSP_request_add0_id(). This doesn't copy the supplied OCSP_CERTID and avoids the need to free up the newly created id. Change return type to OCSP_ONEREQ to return the internal OCSP_ONEREQ structure. This can then be used to add extensions to the request. Deleted OCSP_request_new(), since most of its functionality is now in OCSP_REQUEST_new() (and the case insensitive name clash) apart from the ability to set the request name, which will be added elsewhere.
     [Steve Henson]

  *) Update OCSP API. Remove obsolete extensions argument from various functions. Extensions are now handled using the new OCSP extension code. New simple OCSP HTTP function which can be used to send requests and parse the response.
     [Steve Henson]

  *) Fix the PKCS#7 (S/MIME) code to work with new ASN1. Two new ASN1_ITEM structures help with sign and verify. PKCS7_ATTR_SIGN uses the special reorder version of SET OF to sort the attributes and reorder them to match the encoded order. This resolves a long standing problem: a verify on a PKCS7 structure just after signing it used to fail because the attribute order did not match the encoded order. PKCS7_ATTR_VERIFY does not reorder the attributes: it uses the received order. This is necessary to tolerate some broken software that does not order SET OF. This is handled by encoding as a SEQUENCE OF but using implicit tagging (with UNIVERSAL class) to produce the required SET OF.
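The ordering problem behind the PKCS#7 fix above comes from DER: the components of a SET OF must appear sorted by their complete encodings, compared as octet strings. A simplified model (not OpenSSL's sorter; the comparison here ignores DER's trailing-pad subtlety for unequal lengths) shows why sorting the in-memory stack the same way makes a verify-after-sign round trip see identical orderings:

```c
#include <stdlib.h>
#include <string.h>

/* One already-encoded SET OF component. */
struct enc {
    const unsigned char *p;
    size_t len;
};

/* Compare two encodings as octet strings (simplified DER SET OF order). */
int enc_cmp(const void *a, const void *b)
{
    const struct enc *x = a, *y = b;
    size_t n = x->len < y->len ? x->len : y->len;
    int r = memcmp(x->p, y->p, n);
    if (r != 0)
        return r;
    return (x->len > y->len) - (x->len < y->len);  /* shorter sorts first */
}

/* Put the in-memory components into the order DER will emit them in. */
void sort_set_of(struct enc *items, size_t count)
{
    qsort(items, count, sizeof(*items), enc_cmp);
}
```

If the signature was computed over the sorted encoding but the in-memory stack kept insertion order, re-encoding for verification produced different bytes, which is exactly the sign-then-verify failure the entry describes.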
     [Steve Henson]

  *) Have mk1mf.pl generate the macros OPENSSL_BUILD_SHLIBCRYPTO and OPENSSL_BUILD_SHLIBSSL and use them appropriately in the header files to get correct declarations of the ASN.1 item variables.
     [Richard Levitte]

  *) Rewrite of PKCS#12 code to use new ASN1 functionality. Replace many PKCS#12 macros with real functions. Fix two unrelated ASN1 bugs: asn1_check_tlen() would sometimes attempt to use 'ctx' when it was NULL and ASN1_TYPE was not dereferenced properly in asn1_ex_c2i(). New ASN1 macro: DECLARE_ASN1_ITEM() which just declares the relevant ASN1_ITEM and no wrapper functions.
     [Steve Henson]

  *) New functions ASN1_item_d2i_fp() and ASN1_item_d2i_bio(). These replace the old function pointer based I/O routines. Change most of the *_d2i_bio() and *_d2i_fp() functions to use these.
     [Steve Henson]

  *) Enhance mkdef.pl to be more accepting about spacing in C preprocessor lines, recognise more "algorithms" that can be deselected, and make it complain about algorithm deselection that isn't recognised.
     [Richard Levitte]

  *) New ASN1 functions to handle dup, sign, verify, digest, pack and unpack operations in terms of ASN1_ITEM. Modify existing wrappers to use new functions. Add NO_ASN1_OLD which can be set to remove some old style ASN1 functions: this can be used to determine if old code will still work when these eventually go away.
     [Steve Henson]

  *) New extension functions for OCSP structures; these follow the same conventions as certificates and CRLs.
     [Steve Henson]

  *) New function X509V3_add1_i2d(). This automatically encodes and adds an extension. Its behaviour can be customised with various flags to append, replace or delete. Various wrappers added for certificates and CRLs.
     [Steve Henson]

  *) Fix to avoid calling the underlying ASN1 print routine when an extension cannot be parsed. Correct a typo in the OCSP_SERVICELOC extension. Tidy up the OCSP print format.
     [Steve Henson]

  *) Make mkdef.pl parse some of the ASN1 macros and add appropriate entries for variables.
     [Steve Henson]

  *) Add functionality to apps/openssl.c for detecting locking problems: As the program is single-threaded, all we have to do is register a locking callback using an array for storing which locks are currently held by the program.
     [Bodo Moeller]

  *) Use a lock around the call to CRYPTO_get_ex_new_index() in SSL_get_ex_data_X509_STORE_idx(), which is used in ssl_verify_cert_chain() and thus can be called at any time during TLS/SSL handshakes so that thread-safety is essential. Unfortunately, the ex_data design is not at all suited for multi-threaded use, so it probably should be abolished.
     [Bodo Moeller]

  *) Added Broadcom "ubsec" ENGINE to OpenSSL.
     [Broadcom, tweaked and integrated by Geoff Thorpe]

  *) Move common extension printing code to new function X509V3_print_extensions(). Reorganise OCSP print routines and implement some needed OCSP ASN1 functions. Add OCSP extensions.
     [Steve Henson]

  *) New function X509_signature_print() to remove duplication in some print routines.
     [Steve Henson]

  *) Add a special meaning when SET OF and SEQUENCE OF flags are both set (this was treated exactly the same as SET OF previously). This is used to reorder the STACK representing the structure to match the encoding. This will be used to get round a problem where a PKCS7 structure which was signed could not be verified because the STACK order did not reflect the encoded order.
     [Steve Henson]

  *) Reimplement the OCSP ASN1 module using the new code.
     [Steve Henson]

  *) Update the X509V3 code to permit the use of an ASN1_ITEM structure for its ASN1 operations. The old style function pointers still exist for now but they will eventually go away.
     [Steve Henson]

  *) Merge in replacement ASN1 code from the ASN1 branch.
     This almost completely replaces the old ASN1 functionality with a table driven encoder and decoder which interprets an ASN1_ITEM structure describing the ASN1 module. Compatibility with the existing ASN1 API (i2d, d2i) is largely maintained. Almost all of the old asn1_mac.h macro based ASN1 has also been converted to the new form.
     [Steve Henson]

  *) Change BN_mod_exp_recp so that negative moduli are tolerated (the sign is ignored). Similarly, ignore the sign in BN_MONT_CTX_set so that BN_mod_exp_mont and BN_mod_exp_mont_word work for negative moduli.
     [Bodo Moeller]

  *) Fix BN_uadd and BN_usub: Always return non-negative results instead of not touching the result's sign bit.
     [Bodo Moeller]

  *) BN_div bugfix: If the result is 0, the sign (res->neg) must not be set.
     [Bodo Moeller]

  *) Changed the LHASH code to use prototypes for callbacks, and created macros to declare and implement thin (optionally static) functions that provide type-safety and avoid function pointer casting for the type-specific callbacks.
     [Geoff Thorpe]

  *) Added Kerberos Cipher Suites to be used with TLS, as written in RFC 2712.
     [Veers Staats <staatsvr@asc.hpc.mil>, Jeffrey Altman <jaltman@columbia.edu>, via Richard Levitte]

  *) Reformat the FAQ so the different questions and answers can be divided in sections depending on the subject.
     [Richard Levitte]

  *) Have the zlib compression code load ZLIB.DLL dynamically under Windows.
     [Richard Levitte]

  *) New function BN_mod_sqrt for computing square roots modulo a prime (using the probabilistic Tonelli-Shanks algorithm unless p == 3 (mod 4) or p == 5 (mod 8), which are cases that can be handled deterministically).
     [Lenka Fibikova <fibikova@exp-math.uni-essen.de>, Bodo Moeller]

  *) Make BN_mod_inverse faster by explicitly handling small quotients in the Euclid loop. (Speed gain about 20% for small moduli [256 or 512 bits], about 30% for larger ones [1024 or 2048 bits].)
     [Bodo Moeller]

  *) New function BN_kronecker.
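The deterministic special case mentioned in the BN_mod_sqrt entry above can be shown with word-sized arithmetic. For a prime p with p == 3 (mod 4), a square root of a quadratic residue a is simply a^((p+1)/4) mod p; only the remaining cases need the probabilistic Tonelli-Shanks machinery. These helpers are illustrative (not OpenSSL's bignum code) and assume p < 2^32 so the products fit in 64 bits:

```c
#include <stdint.h>

/* (a * b) % p -- exact while p < 2^32, since a, b < p. */
uint64_t mulmod(uint64_t a, uint64_t b, uint64_t p)
{
    return a * b % p;
}

/* a^e mod p by square-and-multiply. */
uint64_t powmod(uint64_t a, uint64_t e, uint64_t p)
{
    uint64_t r = 1 % p;
    a %= p;
    while (e) {
        if (e & 1)
            r = mulmod(r, a, p);
        a = mulmod(a, a, p);
        e >>= 1;
    }
    return r;
}

/* Square root mod p for p == 3 (mod 4), assuming a is a quadratic
 * residue: r = a^((p+1)/4), so r^2 = a^((p+1)/2) = a * a^((p-1)/2) = a. */
uint64_t sqrt_mod_p3(uint64_t a, uint64_t p)
{
    return powmod(a, (p + 1) / 4, p);
}
```

The exponent (p+1)/4 is an integer precisely because p == 3 (mod 4), which is why this shortcut does not apply to the other residue classes.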
     [Bodo Moeller]

  *) Fix BN_gcd so that it works on negative inputs; the result is positive unless both parameters are zero. Previously something reasonably close to an infinite loop was possible because numbers could be growing instead of shrinking in the implementation of Euclid's algorithm.
     [Bodo Moeller]

  *) Fix BN_is_word() and BN_is_one() macros to take into account the sign of the number in question. Fix BN_is_word(a,w) to work correctly for w == 0. The old BN_is_word(a,w) macro is now called BN_abs_is_word(a,w) because it tests whether the absolute value of 'a' equals 'w'. Note that BN_abs_is_word does *not* handle w == 0 reliably; it exists mostly for use in the implementations of BN_is_zero(), BN_is_one(), and BN_is_word().
     [Bodo Moeller]

  *) New function BN_swap.
     [Bodo Moeller]

  *) Use BN_nnmod instead of BN_mod in crypto/bn/bn_exp.c so that the exponentiation functions are more likely to produce reasonable results on negative inputs.
     [Bodo Moeller]

  *) Change BN_mod_mul so that the result is always non-negative. Previously, it could be negative if one of the factors was negative; I don't think anyone really wanted that behaviour.
     [Bodo Moeller]

  *) Move BN_mod_... functions into new file crypto/bn/bn_mod.c (except for exponentiation, which stays in crypto/bn/bn_exp.c, and BN_mod_mul_reciprocal, which stays in crypto/bn/bn_recp.c) and add new functions:

          BN_nnmod
          BN_mod_sqr
          BN_mod_add
          BN_mod_add_quick
          BN_mod_sub
          BN_mod_sub_quick
          BN_mod_lshift1
          BN_mod_lshift1_quick
          BN_mod_lshift
          BN_mod_lshift_quick

     These functions always generate non-negative results. BN_nnmod otherwise is like BN_mod (if BN_mod computes a remainder r such that -|m| < r < 0, BN_nnmod will output r + |m| instead). BN_mod_XXX_quick(r, a, [b,] m) generates the same result as BN_mod_XXX(r, a, [b,] m, ctx), but requires that a [and b] be reduced modulo m.
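A plain-integer analogue makes the BN_nnmod behaviour described above concrete. C's % operator, like BN_mod, may return a negative remainder for a negative dividend; the non-negative variant shifts such a remainder into [0, |m|) by adding |m| once. This is an illustration of the semantics, not OpenSSL code:

```c
#include <stdlib.h>

/* nnmod(a, m): remainder of a divided by m, forced into [0, |m|).
 * C's a % m truncates toward zero, so it is negative when a is
 * negative and nonzero; adding |m| once fixes that up. */
long nnmod(long a, long m)
{
    long r = a % m;
    if (r < 0)
        r += labs(m);
    return r;
}
```

For example, -7 % 3 is -1 in C, while nnmod(-7, 3) is 2; both satisfy a = q*m + r for some integer q, they just pick different representatives.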
     [Lenka Fibikova <fibikova@exp-math.uni-essen.de>, Bodo Moeller]

#if 0
     The following entry accidentally appeared in the CHANGES file distributed with OpenSSL 0.9.7. The modifications described in it do *not* apply to OpenSSL 0.9.7.
#endif

  *) This is an incompatible change, but it does not affect non-interactive use of 'openssl passwd' (passwords on the command line, '-stdin' option, '-in ...' option) and thus should not cause any problems.
     [Bodo Moeller]

  *) Remove all references to RSAref, since there's no more need for it.
     [Richard Levitte]

  *) Make DSO load along a path given through an environment variable (SHLIB_PATH) with shl_load().
     [Richard Levitte]

  *) Constify the ENGINE code as a result of BIGNUM constification. Also constify the RSA code and most things related to it. In a few places, most notably in the depth of the ASN.1 code, ugly casts back to non-const were required (to be solved at a later time).
     [Richard Levitte]

  *) Make it so the openssl application has all engines loaded by default.
     [Richard Levitte]

  *) Constify the BIGNUM routines a little more.
     [Richard Levitte]

  *) Add the following functions:

          ENGINE_load_cswift()
          ENGINE_load_chil()
          ENGINE_load_atalla()
          ENGINE_load_nuron()
          ENGINE_load_builtin_engines()

     That way, an application can itself choose if external engines that are built-in in OpenSSL shall ever be used or not. The benefit is that applications won't have to be linked with libdl or other dso libraries unless it's really needed. Changed 'openssl engine' to load all engines on demand. Changed the engine header files to avoid the duplication of some declarations (they differed!).
     [Richard Levitte]

  *) 'openssl engine' can now list capabilities.
     [Richard Levitte]

  *) Better error reporting in 'openssl engine'.
     [Richard Levitte]

  *) Never call load_dh_param(NULL) in s_server.
     [Bodo Moeller]

  *) Add engine application. It can currently list engines by name and identity, and test if they are actually available.
     [Richard Levitte]

  *) Improve RPM specification file by forcing symbolic linking and making sure the installed documentation is also owned by root.root.
     [Damien Miller <djm@mindrot.org>]

  *) Give the OpenSSL applications more possibilities to make use of keys (public as well as private) handled by engines.
     [Richard Levitte]

  *) Add OCSP code that comes from CertCo.
     [Richard Levitte]

  *) Add VMS support for the Rijndael code.
     [Richard Levitte]

  *) Added untested support for Nuron crypto accelerator.
     [Ben Laurie]

  *) Add support for external cryptographic devices. This code was previously distributed separately as the "engine" branch.
     [Geoff Thorpe, Richard Levitte]

  *) Rework the filename-translation in the DSO code. It is now possible to have far greater control over how a "name" is turned into a filename depending on the operating environment and any oddities about the different shared library filenames on each system.
     [Geoff Thorpe]

  *) Support threads on FreeBSD-elf in Configure.
     [Richard Levitte]

  *) Fix for SHA1 assembly problem with MASM: it produces warnings about corrupt line number information when assembling with debugging information. This is caused by the overlapping of two sections.
     [Bernd Matthes <mainbug@celocom.de>, Steve Henson]

  *) NCONF changes. NCONF_get_number() has no error checking at all. As a replacement, NCONF_get_number_e() is defined (_e for "error checking") and is promoted strongly. The old NCONF_get_number is kept around for binary backward compatibility. Make it possible for methods to load from something other than a BIO, by providing a function pointer that is given a name instead of a BIO. For example, this could be used to load configuration data from an LDAP server.
     [Richard Levitte]

  *) Fix for non blocking accept BIOs. Added new I/O special reason BIO_RR_ACCEPT to cover this case. Previously use of accept BIOs with non blocking I/O was not possible because no retry code was implemented.
     Also added new SSL code SSL_WANT_ACCEPT to cover this case.
     [Steve Henson]

  *) Added the beginnings of Rijndael support.
     [Ben Laurie]

  *) Fix for bug in DirectoryString mask setting. Add support for X509_NAME_print_ex() in 'req' and X509_print_ex() function to allow certificate printing to be more controllable, additional 'certopt' option to 'x509' to allow new printing options to be set.
     [Steve Henson]

  *) Clean old EAY MD5 hack from e_os.h.
     [Richard Levitte]

 Changes between 0.9.6l and 0.9.6m  [17 Mar 2004]

  *) Fix null-pointer assignment in do_change_cipher_spec() revealed by using the Codenomicon TLS Test Tool (CVE-2004-0079)
     [Joe Orton, Steve Henson]

 Changes between 0.9.6k and 0.9.6l  [04 Nov 2003]

  *) Fix additional bug revealed by the NISCC test suite: Stop bug triggering large recursion when presented with certain ASN.1 tags (CVE-2003-0851)
     [Steve Henson]

 Changes between 0.9.6j and 0.9.6k  [30 Sep 2003]

  *) Fix various bugs revealed by running the NISCC test suite: Stop out of bounds reads in the ASN1 code when presented with invalid tags (CVE-2003-0543 and CVE-2003-0544). If verify callback ignores invalid public key errors don't try to check certificate signature with the NULL public key.

 Changes between 0.9.6i and 0.9.6j  [10 Apr 2003]

 Changes between 0.9.6h and 0.9.6i  [19 Feb 2003]

 Changes between 0.9.6g and 0.9.6h  [5 Dec 2002]

  *) Bugfix: client side session caching did not work with external caching, because the session->cipher setting was not restored when reloading from the external cache. This problem was masked when SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG (part of SSL_OP_ALL) was set. (Found by Steve Haslam <steve@araqnid.ddts.net>.)
     [Lutz Jaenicke]

  *) Fix client_certificate (ssl/s2_clnt.c): The permissible total length of the REQUEST-CERTIFICATE message is 18 .. 34, not 17 .. 33.
[Zeev Lieber <zeev-l@yahoo.com>]

*) Undo an undocumented change introduced in 0.9.6e which caused repeated calls to OpenSSL_add_all_ciphers() and OpenSSL_add_all_digests() to be ignored, even after calling EVP_cleanup().
[Richard Levitte]

*) Change the default configuration reader to deal with last line not being properly terminated.
[Richard Levitte]

*) Change X509_NAME_cmp() so it applies the special rules on handling DN values that are of type PrintableString, as well as RDNs of type emailAddress where the value has the type ia5String.
[stefank@valicert.com via Richard Levitte]

*) Add a SSL_SESS_CACHE_NO_INTERNAL_STORE flag to take over half the job SSL_SESS_CACHE_NO_INTERNAL_LOOKUP was inconsistently doing, define a new flag (SSL_SESS_CACHE_NO_INTERNAL) to be the bitwise-OR of the two for use by the majority of applications wanting this behaviour, and update the docs. The documented behaviour and actual behaviour were inconsistent and had been changing anyway, so this is more a bug-fix than a behavioural change.
[Geoff Thorpe, diagnosed by Nadav Har'El]

*) Don't impose a 16-byte length minimum on session IDs in ssl/s3_clnt.c (the SSL 3.0 and TLS 1.0 specifications allow any length up to 32 bytes).
[Bodo Moeller]

*) Fix initialization code race conditions in SSLv23_method(), SSLv23_client_method(), SSLv23_server_method(), SSLv2_method(), SSLv2_client_method(), SSLv2_server_method(), SSLv3_method(), SSLv3_client_method(), SSLv3_server_method(), TLSv1_method(), TLSv1_client_method(), TLSv1_server_method(), ssl2_get_cipher_by_char(), ssl3_get_cipher_by_char().
[Patrick McCormick <patrick@tellme.com>, Bodo Moeller]

*) Reorder cleanup sequence in SSL_CTX_free(): only remove the ex_data after the cached sessions are flushed, as the remove_cb() might use ex_data contents. Bug found by Sam Varshavchik <mrsam@courier-mta.com> (see [openssl.org #212]).
[Geoff Thorpe, Lutz Jaenicke]

*) Fix typo in OBJ_txt2obj which incorrectly passed the content length, instead of the encoding length to d2i_ASN1_OBJECT.
[Steve Henson]

Changes between 0.9.6f and 0.9.6g [9 Aug 2002]

*) [In 0.9.6g-engine release:] Fix crypto/engine/vendor_defns/cswift.h for WIN32 (use '_stdcall').
[Lynn Gazis <lgazis@rainbow.com>]

Changes between 0.9.6e and 0.9.6f [8 Aug 2002]

*) Fix ASN1 checks. Check for overflow by comparing with LONG_MAX and fix the header length calculation.
[Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>, Alon Kantor <alonk@checkpoint.com> (and others), Steve Henson]

*) Use proper error handling instead of 'assertions' in buffer overflow checks added in 0.9.6e. This prevents DoS (the assertions could call abort()).
[Arne Ansper <arne@ats.cyber.ee>, Bodo Moeller]

Changes between 0.9.6d and 0.9.6e [30 Jul 2002]

*) Add various sanity checks to asn1_get_length() to reject the ASN1 length bytes if they exceed sizeof(long), will appear negative or the content length exceeds the length of the supplied buffer.
[Steve Henson, Adi Stav <stav@mercury.co.il>, James Yonan <jim@ntlp.com>]

*) Fix cipher selection routines: ciphers without encryption had no flags for the cipher strength set and were therefore not handled correctly by the selection routines (PR #130).
[Lutz Jaenicke]

*) Fix EVP_dsa_sha macro.
[Nils Larsch]

*) New option SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS for disabling the SSL 3.0/TLS 1.0 CBC vulnerability countermeasure that was added in OpenSSL 0.9.6d. As the countermeasure turned out to be incompatible with some broken SSL implementations, the new option is part of SSL_OP_ALL. SSL_OP_ALL is usually employed when compatibility with weird SSL implementations is desired (e.g. '-bugs' option to 's_client' and 's_server'), so the new option is automatically set in many applications.
[Bodo Moeller]

*) Changes in security patch: Changes marked "(CHATS)" were sponsored by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement number F30602-01-2-0537.

*) Add various sanity checks to asn1_get_length() to reject the ASN1 length bytes if they exceed sizeof(long), will appear negative or the content length exceeds the length of the supplied buffer. (CVE-2002-0659)
[Steve Henson, Adi Stav <stav@mercury.co.il>, James Yonan <jim@ntlp.com>]

*) Assertions for various potential buffer overflows, not known to happen in practice.
[Ben Laurie (CHATS)]

*) Various temporary buffers to hold ASCII versions of integers were too small for 64 bit platforms. (CVE-2002-0655)
[Matthew Byng-Maddick <mbm@aldigital.co.uk> and Ben Laurie (CHATS)]

*) Remote buffer overflow in SSL3 protocol - an attacker could supply an oversized session ID to a client. (CVE-2002-0656)
[Ben Laurie (CHATS)]

*) Remote buffer overflow in SSL2 protocol - an attacker could supply an oversized client master key. (CVE-2002-0656)
[Ben Laurie (CHATS)]

Changes between 0.9.6c and 0.9.6d [9 May 2002]

*) Fix crypto/asn1/a_sign.c so that 'parameters' is omitted (not encoded as NULL) with id-dsa-with-sha1.
[Nils Larsch <nla@trustcenter.de>; problem pointed out by Bodo Moeller]

*) Check various X509_...() return values in apps/req.c.
[Nils Larsch <nla@trustcenter.de>]

*) Fix BASE64 decode (EVP_DecodeUpdate) for data with CR/LF ended lines: an end-of-file condition would erroneously be flagged, when the CRLF was just at the end of a processed block. The bug was discovered when processing data through a buffering memory BIO handing the data to a BASE64-decoding BIO. Bug found and patch submitted by Pavel Tsekov <ptsekov@syntrex.com> and Nedelcho Stanev.
[Lutz Jaenicke]

*) Implement a countermeasure against a vulnerability recently found in CBC ciphersuites in SSL 3.0/TLS 1.0: Send an empty fragment before application data chunks to avoid the use of known IVs with data potentially chosen by the attacker.
[Bodo Moeller]

*) Fix length checks in ssl3_get_client_hello().
[Bodo Moeller]

*) TLS/SSL library bugfix: use s->s3->in_read_app_data differently to prevent ssl3_read_internal() from incorrectly assuming that ssl3_read_bytes() found application data while handshake processing was enabled when in fact s->s3->in_read_app_data was merely automatically cleared during the initial handshake.
[Bodo Moeller; problem pointed out by Arne Ansper <arne@ats.cyber.ee>]

*) Fix object definitions for Private and Enterprise: they were not recognized in their shortname (=lowercase) representation. Extend obj_dat.pl to issue an error when using undefined keywords instead of silently ignoring the problem (Svenning Sorensen <sss@sss.dnsalias.net>).
[Lutz Jaenicke]

*) Fix DH_generate_parameters() so that it works for 'non-standard' generators, i.e. generators other than 2 and 5. (Previously, the code did not properly initialise the 'add' and 'rem' values to BN_generate_prime().)

In the new general case, we do not insist that 'generator' is actually a primitive root: This requirement is rather pointless; a generator of the order-q subgroup is just as good, if not better.
[Bodo Moeller]

*) Map new X509 verification errors to alerts. Discovered and submitted by Tom Wu <tom@arcot.com>.
[Lutz Jaenicke]

*) Fix ssl3_pending() (ssl/s3_lib.c) to prevent SSL_pending() from returning non-zero before the data has been completely received when using non-blocking I/O.
[Bodo Moeller; problem pointed out by John Hughes]

*) Some of the ciphers missed the strength entry (SSL_LOW etc).
[Ben Laurie, Lutz Jaenicke]

*) Fix bug in SSL_clear(): bad sessions were not removed (found by Yoram Zahavi <YoramZ@gilian.com>).
[Lutz Jaenicke]

*) Add information about CygWin 1.3 and on, and preserve proper configuration for the versions before that.
[Corinna Vinschen <vinschen@redhat.com> and Richard Levitte]

*) Make removal from session cache (SSL_CTX_remove_session()) more robust: check whether we deal with a copy of a session and do not delete from the cache in this case. Problem reported by "Izhar Shoshani Levi" <izhar@checkpoint.com>.
[Lutz Jaenicke]

*) Do not store session data into the internal session cache, if it is never intended to be looked up (SSL_SESS_CACHE_NO_INTERNAL_LOOKUP flag is set). Proposed by Aslam <aslam@funk.com>.
[Lutz Jaenicke]

*) Have ASN1_BIT_STRING_set_bit() really clear a bit when the requested value is 0.
[Richard Levitte]

*) [In 0.9.6d-engine release:] Fix a crashbug and a logic bug in hwcrhk_load_pubkey().
[Toomas Kiisk <vix@cyber.ee> via Richard Levitte]

*) Add the configuration target linux-s390x.
[Neale Ferguson <Neale.Ferguson@SoftwareAG-USA.com> via Richard Levitte]

*) The earlier bugfix for the SSL3_ST_SW_HELLO_REQ_C case of ssl3_accept (ssl/s3_srvr.c) incorrectly used a local flag variable as an indication that a ClientHello message has been received. As the flag value will be lost between multiple invocations of ssl3_accept when using non-blocking I/O, the function may not be aware that a handshake has actually taken place, thus preventing a new session from being added to the session cache. To avoid this problem, we now set s->new_session to 2 instead of using a local variable.
[Lutz Jaenicke, Bodo Moeller]

*) Bugfix: Return -1 from ssl3_get_server_done (ssl3/s3_clnt.c) if the SSL_R_LENGTH_MISMATCH error is detected.
[Geoff Thorpe, Bodo Moeller]

*) New 'shared_ldflag' column in Configure platform table.
[Richard Levitte]

*) Fix EVP_CIPHER_mode macro.
["Dan S. Camper" <dan@bti.net>]

*) Fix ssl3_read_bytes (ssl/s3_pkt.c): To ignore messages of unknown type, we must throw them away by setting rr->length to 0.
[D P Chang <dpc@qualys.com>]

Changes between 0.9.6b and 0.9.6c [21 Dec 2001]

*) Fix BN_rand_range bug pointed out by Dominikus Scherkl <Dominikus.Scherkl@biodata.com>. (The previous implementation worked incorrectly for those cases where range = 10..._2 and 3*range is two bits longer than range.)
[Bodo Moeller]

*) Only add signing time to PKCS7 structures if it is not already present.
[Steve Henson]

*) Fix crypto/objects/objects.h: "ld-ce" should be "id-ce", OBJ_ld_ce should be OBJ_id_ce. Also some ip-pda OIDs in crypto/objects/objects.txt were incorrect (cf. RFC 3039).
[Matt Cooper, Frederic Giudicelli, Bodo Moeller]

*) Release CRYPTO_LOCK_DYNLOCK when CRYPTO_destroy_dynlockid() returns early because it has nothing to do.
[Andy Schneider <andy.schneider@bjss.co.uk>]

*) [In 0.9.6c-engine release:] Fix mutex callback return values in crypto/engine/hw_ncipher.c.
[Andy Schneider <andy.schneider@bjss.co.uk>]

*) [In 0.9.6c-engine release:] Add support for Cryptographic Appliance's keyserver technology. (Use engine 'keyclient')
[Cryptographic Appliances and Geoff Thorpe]

*) Add a configuration entry for OS/390 Unix. The C compiler 'c89' is called via tools/c89.sh because arguments have to be rearranged (all '-L' options must appear before the first object modules).
[Richard Shapiro <rshapiro@abinitio.com>]

*) [In 0.9.6c-engine release:] Add support for Broadcom crypto accelerator cards, backported from 0.9.7.
[Broadcom, Nalin Dahyabhai <nalin@redhat.com>, Mark Cox]

*) [In 0.9.6c-engine release:] Add support for SureWare crypto accelerator cards from Baltimore Technologies. (Use engine 'sureware')
[Baltimore Technologies and Mark Cox]

*) [In 0.9.6c-engine release:] Add support for crypto accelerator cards from Accelerated Encryption Processing. (Use engine 'aep')
[AEP Inc. and Mark Cox]

*) Add a configuration entry for gcc on UnixWare.
[Gary Benson <gbenson@redhat.com>]

*) Change ssl/s2_clnt.c and ssl/s2_srvr.c so that received handshake messages are stored in a single piece (fixed-length part and variable-length part combined) and fix various bugs found on the way.
[Bodo Moeller]

*) Disable caching in BIO_gethostbyname(), directly use gethostbyname() instead. BIO_gethostbyname() does not know what timeouts are appropriate, so entries would stay in cache even when they have become invalid.
[Bodo Moeller; problem pointed out by Rich Salz <rsalz@zolera.com>]

*) Change ssl23_get_client_hello (ssl/s23_srvr.c) behaviour when faced with a pathologically small ClientHello fragment that does not contain client_version: Instead of aborting with an error, simply choose the highest available protocol version (i.e., TLS 1.0 unless it is disabled). In practice, ClientHello messages are never sent like this, but this change gives us strictly correct behaviour at least for TLS.
[Bodo Moeller]

*) Fix SSL handshake functions and SSL_clear() such that SSL_clear() never resets s->method to s->ctx->method when called from within one of the SSL handshake functions.
[Bodo Moeller; problem pointed out by Niko Baric]

*) In ssl3_get_client_hello (ssl/s3_srvr.c), generate a fatal alert (sent using the client's version number) if client_version is smaller than the protocol version in use. Also change ssl23_get_client_hello (ssl/s23_srvr.c) to select TLS 1.0 if the client demanded SSL 3.0 but only TLS 1.0 is enabled; then the client will at least see that alert.
[Bodo Moeller]

*) Fix ssl3_get_message (ssl/s3_both.c) to handle message fragmentation correctly.
[Bodo Moeller]

*) Avoid infinite loop in ssl3_get_message (ssl/s3_both.c) if a client receives HelloRequest while in a handshake.
[Bodo Moeller; bug noticed by Andy Schneider <andy.schneider@bjss.co.uk>]

*) Bugfix in ssl3_accept (ssl/s3_srvr.c): Case SSL3_ST_SW_HELLO_REQ_C should end in 'break', not 'goto end' which circumvents various cleanups done in state SSL_ST_OK.
But session related stuff must be disabled for SSL_ST_OK in the case that we just sent a HelloRequest.

Also avoid some overhead by not calling ssl_init_wbio_buffer() before just sending a HelloRequest.
[Bodo Moeller, Eric Rescorla <ekr@rtfm.com>]

*) Fix ssl/s3_enc.c, ssl/t1_enc.c and ssl/s3_pkt.c so that we don't reveal whether illegal block cipher padding was found or a MAC verification error occurred. (Neither SSLerr() codes nor alerts are directly visible to potential attackers, but the information may leak via logfiles.)

Similar changes are not required for the SSL 2.0 implementation because the number of padding bytes is sent in clear for SSL 2.0, and the extra bytes are just ignored. However ssl/s2_pkt.c failed to verify that the purported number of padding bytes is in the legal range.
[Bodo Moeller]

*) Add OpenUNIX-8 support including shared libraries (Boyd Lynn Gerber <gerberb@zenez.com>).
[Lutz Jaenicke]

*) Improve RSA_padding_check_PKCS1_OAEP() check again to avoid 'wristwatch attack' using huge encoding parameters (cf. James H. Manger's CRYPTO 2001 paper). Note that the RSA_PKCS1_OAEP_PADDING case of RSA_private_decrypt() does not use encoding parameters and hence was not vulnerable.
[Bodo Moeller]

*) BN_sqr() bug fix.
[Ulf Möller, reported by Jim Ellis <jim.ellis@cavium.com>]

*) Rabin-Miller test analyses assume uniformly distributed witnesses, so use BN_pseudo_rand_range() instead of using BN_pseudo_rand() followed by modular reduction.
[Bodo Moeller; pointed out by Adam Young <AYoung1@NCSUS.JNJ.COM>]

*) Add BN_pseudo_rand_range() with obvious functionality: BN_rand_range() equivalent based on BN_pseudo_rand() instead of BN_rand().
[Bodo Moeller]

*) s3_srvr.c: allow sending of large client certificate lists (> 16 kB). This function was broken, as the check for a new client hello message to handle SGC did not allow these large messages. (Tracked down by "Douglas E. Engert" <deengert@anl.gov>.)
[Lutz Jaenicke]

*) Add alert descriptions for TLSv1 to SSL_alert_desc_string[_long]().
[Lutz Jaenicke]

*) Fix buggy behaviour of BIO_get_num_renegotiates() and BIO_ctrl() for BIO_C_GET_WRITE_BUF_SIZE ("Stephen Hinton" <shinton@netopia.com>).
[Lutz Jaenicke]

*) Rework the configuration and shared library support for Tru64 Unix. The configuration part makes use of modern compiler features and still retains old compiler behavior for those that run older versions of the OS. The shared library support part includes a variant that uses the RPATH feature, and is available through the special configuration target "alpha-cc-rpath", which will never be selected automatically.
[Tim Mooney <mooney@dogbert.cc.ndsu.NoDak.edu> via Richard Levitte]

*) In ssl3_get_key_exchange (ssl/s3_clnt.c), call ssl3_get_message() with the same message size as in ssl3_get_certificate_request(). Otherwise, if no ServerKeyExchange message occurs, CertificateRequest messages might inadvertently be rejected as too long.
[Petr Lampa <lampa@fee.vutbr.cz>]

*) Enhanced support for IA-64 Unix platforms (well, Linux and HP-UX).
[Andy Polyakov]

*) Modified SSL library such that the verify_callback that has been set specifically for an SSL object with SSL_set_verify() is actually being used. Before the change, a verify_callback set with this function was ignored and the verify_callback() set in the SSL_CTX at the time of the call was used. New function X509_STORE_CTX_set_verify_cb() introduced to allow the necessary settings.
[Lutz Jaenicke]

*) Initialize static variable in crypto/dsa/dsa_lib.c and crypto/dh/dh_lib.c explicitly to NULL, as at least on Solaris 8 this seems not always to be done automatically (in contradiction to the requirements of the C standard). This made problems when used from OpenSSH.
[Lutz Jaenicke]

*) In OpenSSL 0.9.6a and 0.9.6b, crypto/dh/dh_key.c ignored dh->length and always used BN_rand_range(priv_key, dh->p).
BN_rand_range() is not necessary for Diffie-Hellman, and this specific range makes Diffie-Hellman unnecessarily inefficient if dh->length (recommended exponent length) is much smaller than the length of dh->p. We could use BN_rand_range() if the order of the subgroup was stored in the DH structure, but we only have dh->length. So switch back to BN_rand(priv_key, l, ...) where 'l' is dh->length if this is defined, or BN_num_bits(dh->p)-1 otherwise.
[Bodo Moeller]

*) In RSA_eay_public_encrypt, RSA_eay_private_decrypt, RSA_eay_private_encrypt (signing), and RSA_eay_public_decrypt (signature verification) (default implementations for RSA_public_encrypt, RSA_private_decrypt, RSA_private_encrypt, RSA_public_decrypt), always reject numbers >= n.
[Bodo Moeller]

*) In crypto/rand/md_rand.c, use a new short-time lock CRYPTO_LOCK_RAND2 to synchronize access to 'locking_thread'. This is necessary on systems where access to 'locking_thread' (an 'unsigned long' variable) is not atomic.
[Bodo Moeller]

*) In crypto/rand/md_rand.c, set 'locking_thread' to current thread's ID *before* setting the 'crypto_lock_rand' flag. The previous code had a race condition if 0 is a valid thread ID.
[Travis Vitek <vitek@roguewave.com>]

*) Add support for shared libraries under Irix.
[Albert Chin-A-Young <china@thewrittenword.com>]

*) Add configuration option to build on Linux on both big-endian and little-endian MIPS.
[Ralf Baechle <ralf@uni-koblenz.de>]

*) Add the possibility to create shared libraries on HP-UX.
[Richard Levitte]

Changes between 0.9.6a and 0.9.6b [9 Jul 2001]

*) Change ssleay_rand_bytes (crypto/rand/md_rand.c) to avoid a SSLeay/OpenSSL PRNG weakness pointed out by Markku-Juhani O. Saarinen <markku-juhani.saarinen@nokia.com>: PRNG state recovery was possible based on the output of one PRNG request appropriately sized to gain knowledge on 'md' followed by enough consecutive 1-byte PRNG requests to traverse all of 'state'.

1. When updating 'md_local' (the current thread's copy of 'md') during PRNG output generation, hash all of the previous 'md_local' value, not just the half used for PRNG output.

2. Make the number of bytes from 'state' included into the hash independent from the number of PRNG bytes requested.

The first measure alone would be sufficient to avoid Markku-Juhani's attack. (Actually it had never occurred to me that the half of 'md_local' used for chaining was the half from which PRNG output bytes were taken -- I had always assumed that the secret half would be used.) The second measure makes sure that additional data from 'state' is never mixed into 'md_local' in small portions; this heuristically further strengthens the PRNG.
[Bodo Moeller]

*) Fix crypto/bn/asm/mips3.s.
[Andy Polyakov]

*) When only the key is given to "enc", the IV is undefined. Print out an error message in this case.
[Lutz Jaenicke]

*) Handle special case when X509_NAME is empty in X509 printing routines.
[Steve Henson]

*) In dsa_do_verify (crypto/dsa/dsa_ossl.c), verify that r and s are positive and less than q.
[Bodo Moeller]

*) Don't change *pointer in CRYPTO_add_lock() if add_lock_callback is used: it isn't thread safe and the add_lock_callback should handle that itself.
[Paul Rose <Paul.Rose@bridge.com>]

*) Verify that incoming data obeys the block size in ssl3_enc (ssl/s3_enc.c) and tls1_enc (ssl/t1_enc.c).
[Bodo Moeller]

*) Fix OAEP check.
[Ulf Möller, Bodo Möller]

*) The countermeasure against Bleichenbacher's attack on PKCS #1 v1.5 RSA encryption was accidentally removed in s3_srvr.c in OpenSSL 0.9.5 when fixing the server behaviour for backwards-compatible 'client hello' messages. (Note that the attack is impractical against SSL 3.0 and TLS 1.0 anyway because length and version checking means that the probability of guessing a valid ciphertext is around 2^-40; see section 5 in Bleichenbacher's CRYPTO '98 paper.)
Before 0.9.5, the countermeasure (hide the error by generating a random 'decryption result') did not work properly because ERR_clear_error() was missing, meaning that SSL_get_error() would detect the supposedly ignored error. Both problems are now fixed.
[Bodo Moeller]

*) In crypto/bio/bf_buff.c, increase DEFAULT_BUFFER_SIZE to 4096 (previously it was 1024).
[Bodo Moeller]

*) Fix for compatibility mode trust settings: ignore trust settings unless some valid trust or reject settings are present.
[Steve Henson]

*) Fix for blowfish EVP: it's a variable length cipher.
[Steve Henson]

*) Fix various bugs related to DSA S/MIME verification. Handle missing parameters in DSA public key structures and return an error in the DSA routines if parameters are absent.
[Steve Henson]

*) In versions up to 0.9.6, RAND_file_name() resorted to file ".rnd" in the current directory if neither $RANDFILE nor $HOME was set. RAND_file_name() in 0.9.6a returned NULL in this case. This has caused some confusion to Windows users who haven't defined $HOME. Thus RAND_file_name() is changed again: e_os.h can define a DEFAULT_HOME, which will be used if $HOME is not set. For Windows, we use "C:"; on other platforms, we still require environment variables.

*) Move 'if (!initialized) RAND_poll()' into regions protected by CRYPTO_LOCK_RAND. This is not strictly necessary, but avoids having multiple threads call RAND_poll() concurrently.
[Bodo Moeller]

*) In crypto/rand/md_rand.c, replace 'add_do_not_lock' flag by a combination of a flag and a thread ID variable. Otherwise while one thread is in ssleay_rand_bytes (which sets the flag), *other* threads can enter ssleay_add_bytes without obeying the CRYPTO_LOCK_RAND lock (and may even illegally release the lock that they do not hold after the first thread unsets add_do_not_lock).
[Bodo Moeller]

*) Change bctest again: '-x' expressions are not available in all versions of 'test'.
[Bodo Moeller]

Changes between 0.9.6 and 0.9.6a [5 Apr 2001]

*) Fix a couple of memory leaks in PKCS7_dataDecode()
[Steve Henson, reported by Heyun Zheng <hzheng@atdsprint.com>]

*) Change Configure and Makefiles to provide EXE_EXT, which will contain the default extension for executables, if any. Also, make the perl scripts that use symlink() to test if it really exists and use "cp" if it doesn't. All this made OpenSSL compilable and installable in CygWin.
[Richard Levitte]

*) Fix for asn1_GetSequence() for indefinite length constructed data. If SEQUENCE length is indefinite just set c->slen to the total amount of data available.
[Steve Henson, reported by shige@FreeBSD.org]
[This change does not apply to 0.9.7.]

*) Change bctest to avoid here-documents inside command substitution (workaround for FreeBSD /bin/sh bug). For compatibility with Ultrix, avoid shell functions (introduced in the bctest version that searches along $PATH).
[Bodo Moeller]

*) Rename 'des_encrypt' to 'des_encrypt1'. This avoids the clashes with des_encrypt() defined on some operating systems, like Solaris and UnixWare.
[Richard Levitte]

*) Check the result of RSA-CRT (see D. Boneh, R. DeMillo, R. Lipton: On the Importance of Eliminating Errors in Cryptographic Computations, J. Cryptology 14 (2001) 2, 101-119).
[Ulf Moeller]

*) MIPS assembler BIGNUM division bug fix.
[Andy Polyakov]

*) Disabled incorrect Alpha assembler code.
[Richard Levitte]

*) Fix PKCS#7 decode routines so they correctly update the length after reading an EOC for the EXPLICIT tag.
[Steve Henson]
[This change does not apply to 0.9.7.]

*) Fix bug in PKCS#12 key generation routines. This was triggered if a 3DES key was generated with a 0 initial byte. Include PKCS12_BROKEN_KEYGEN compilation option to retain the old (but broken) behaviour.
[Steve Henson]

*) Enhance bctest to search for a working bc along $PATH and print it when found.
[Tim Rice <tim@multitalents.net> via Richard Levitte]

*) Fix memory leaks in err.c: free err_data string if necessary; don't write to the wrong index in ERR_set_error_data.
[Bodo Moeller]

*) Implement ssl23_peek (analogous to ssl23_read), which previously did not exist.
[Bodo Moeller]

*) Replace rdtsc with _emit statements for VC++ version 5.
[Jeremy Cooper <jeremy@baymoo.org>]

*) Make it possible to reuse SSLv2 sessions.
[Richard Levitte]

*) In copy_email() check for >= 0 as a return value for X509_NAME_get_index_by_NID() since 0 is a valid index.
[Steve Henson reported by Massimiliano Pala <madwolf@opensca.org>]

*) Avoid coredump with unsupported or invalid public keys by checking if X509_get_pubkey() fails in PKCS7_verify(). Fix memory leak when PKCS7_verify() fails with non detached data.
[Steve Henson]

*) Don't use getenv in library functions when run as setuid/setgid. New function OPENSSL_issetugid().
[Ulf Moeller]

*) Avoid false positives in memory leak detection code (crypto/mem_dbg.c) due to incorrect handling of multi-threading:

1. Fix timing glitch in the MemCheck_off() portion of CRYPTO_mem_ctrl().

2. Fix logical glitch in is_MemCheck_on() aka CRYPTO_is_mem_check_on().

3. Count how many times MemCheck_off() has been called so that nested use can be treated correctly. This also avoids inband-signalling in the previous code (which relied on the assumption that thread ID 0 is impossible).
[Bodo Moeller]

*) Add "-rand" option also to s_client and s_server.
[Lutz Jaenicke]

*) Fix CPU detection on Irix 6.x.
[Kurt Hockenbury <khockenb@stevens-tech.edu> and "Bruce W. Forsberg" <bruce.forsberg@baesystems.com>]

*) Fix X509_NAME bug which produced incorrect encoding if X509_NAME was empty.
[Steve Henson]
[This change does not apply to 0.9.7.]

*) Use the cached encoding of an X509_NAME structure rather than copying it. This is apparently the reason for the libsafe "errors" but the code is actually correct.
[Steve Henson]

*) Add new function BN_rand_range(), and fix DSA_sign_setup() to prevent Bleichenbacher's DSA attack.

Extend BN_[pseudo_]rand: As before, top=1 forces the highest two bits to be set and top=0 forces the highest bit to be set; top=-1 is new and leaves the highest bit random.
[Ulf Moeller, Bodo Moeller]

*) In the NCONF_...-based implementations for CONF_... queries (crypto/conf/conf_lib.c), if the input LHASH is NULL, avoid using a temporary CONF structure with the data component set to NULL (which gives segmentation faults in lh_retrieve). Instead, use NULL for the CONF pointer in CONF_get_string and CONF_get_number (which may use environment variables) and directly return NULL from CONF_get_section.
[Bodo Moeller]

*) Fix potential buffer overrun for EBCDIC.
[Ulf Moeller]

*) Tolerate nonRepudiation as being valid for S/MIME signing and certSign keyUsage if basicConstraints absent for a CA.
[Steve Henson]

*) Make SMIME_write_PKCS7() write mail header values with a format that is more generally accepted (no spaces before the semicolon), since some programs can't parse those values properly otherwise. Also make sure BIO's that break lines after each write do not create invalid headers.
[Richard Levitte]

*) Make the CRL encoding routines work with empty SEQUENCE OF. The macros previously used would not encode an empty SEQUENCE OF and break the signature.
[Steve Henson]
[This change does not apply to 0.9.7.]

*) Zero the premaster secret after deriving the master secret in DH ciphersuites.
[Steve Henson]

*) Add some EVP_add_digest_alias registrations (as found in OpenSSL_add_all_digests()) to SSL_library_init() aka OpenSSL_add_ssl_algorithms(). This provides improved compatibility with peers using X.509 certificates with unconventional AlgorithmIdentifier OIDs.
[Bodo Moeller]

*) Fix for Irix with NO_ASM.
["Bruce W. Forsberg" <bruce.forsberg@baesystems.com>]

*) ./config script fixes.
[Ulf Moeller, Richard Levitte]

*) Fix 'openssl passwd -1'.
[Bodo Moeller]

*) Change PKCS12_key_gen_asc() so it can cope with non null terminated strings whose length is passed in the passlen parameter, for example from PEM callbacks. This was done by adding an extra length parameter to asc2uni().
[Steve Henson, reported by <oddissey@samsung.co.kr>]

*) Fix C code generated by 'openssl dsaparam -C': If a BN_bin2bn call failed, free the DSA structure.
[Bodo Moeller]

*) Fix to uni2asc() to cope with zero length Unicode strings. These are present in some PKCS#12 files.
[Steve Henson]

*) Increase s2->wbuf allocation by one byte in ssl2_new (ssl/s2_lib.c). Otherwise do_ssl_write (ssl/s2_pkt.c) will write beyond buffer limits when writing a 32767 byte record.
[Bodo Moeller; problem reported by Eric Day <eday@concentric.net>]

*) In RSA_eay_public_{en,de}crypt and RSA_eay_mod_exp (rsa_eay.c), obtain lock CRYPTO_LOCK_RSA before setting rsa->_method_mod_{n,p,q}. (RSA objects have a reference count, access to which is protected by CRYPTO_LOCK_RSA [see rsa_lib.c, s3_srvr.c, ssl_cert.c, ssl_rsa.c], so they are meant to be shared between threads.)
[Bodo Moeller, Geoff Thorpe; original patch submitted by "Reddie, Steven" <Steven.Reddie@ca.com>]

*) Fix a deadlock in CRYPTO_mem_leaks().
[Bodo Moeller]

*) Use better test patterns in bntest.
[Ulf Möller]

*) rand_win.c fix for Borland C.
[Ulf Möller]

*) BN_rshift bugfix for n == 0.
[Bodo Moeller]

*) Add a 'bctest' script that checks for some known 'bc' bugs so that 'make test' does not abort just because 'bc' is broken.
[Bodo Moeller]

*) Store verify_result within SSL_SESSION also for client side to avoid potential security hole. (Re-used sessions on the client side always resulted in verify_result==X509_V_OK, not using the original result of the server certificate verification.)
[Lutz Jaenicke]

*) Fix ssl3_pending: If the record in s->s3->rrec is not of type SSL3_RT_APPLICATION_DATA, return 0. Similarly, change ssl2_pending to return 0 if SSL_in_init(s) is true.
[Bodo Moeller]

*) Fix SSL_peek: Both ssl2_peek and ssl3_peek, which were totally broken in earlier releases, have been re-implemented by renaming the previous implementations of ssl2_read and ssl3_read to ssl2_read_internal and ssl3_read_internal, respectively, and adding 'peek' parameters to them. The new ssl[23]_{read,peek} functions are calls to ssl[23]_read_internal with the 'peek' flag set appropriately. A 'peek' parameter has also been added to ssl3_read_bytes, which does the actual work for ssl3_read_internal.
[Bodo Moeller]

*) Initialise "ex_data" member of RSA/DSA/DH structures prior to calling the method-specific "init()" handler. Also clean up ex_data after calling the method-specific "finish()" handler. Previously, this was happening the other way round.
[Geoff Thorpe]

*) Increase BN_CTX_NUM (the number of BIGNUMs in a BN_CTX) to 16. The previous value, 12, was not always sufficient for BN_mod_exp().
[Bodo Moeller]

*) Make sure that shared libraries get the internal name engine with the full version number and not just 0. This should mark the shared libraries as not backward compatible. Of course, this should be changed again when we can guarantee backward binary compatibility.
[Richard Levitte]

*) Fix typo in get_cert_by_subject() in by_dir.c
[Jean-Marc Desperrier <jean-marc.desperrier@certplus.com>]

*) Rework the system to generate shared libraries:

- Make note of the expected extension for the shared libraries and if there is a need for symbolic links from for example libcrypto.so.0 to libcrypto.so.0.9.7. There is extended info in Configure for that.

- Make as few rebuilds of the shared libraries as possible.

- Still avoid linking the OpenSSL programs with the shared libraries.

- When installing, install the shared libraries separately from the static ones.
[Richard Levitte]

*) Fix SSL_CTX_set_read_ahead macro to actually use its argument.
Copy SSL_CTX's read_ahead flag to SSL object directly in SSL_new and not in SSL_clear because the latter is also used by the accept/connect functions; previously, the settings made by SSL_set_read_ahead would be lost during the handshake. [Bodo Moeller; problems reported by Anders Gertz <gertz@epact.se>] *) Correct util/mkdef.pl to be selective about disabled algorithms. Previously, it would create entries for disabled algorithms no matter what. [Richard Levitte] *) Added several new manual pages for SSL_* functions. [Lutz Jaenicke] Changes between 0.9.5a and 0.9.6 [24 Sep 2000] *) In ssl23_get_client_hello, generate an error message when faced with an initial SSL 3.0/TLS record that is too small to contain the first two bytes of the ClientHello message, i.e. client_version. (Note that this is a pathologic case that probably has never happened in real life.) The previous approach was to use the version number from the record header as a substitute; but our protocol choice should not depend on that one because it is not authenticated by the Finished messages. [Bodo Moeller] *) More robust randomness gathering functions for Windows. [Jeffrey Altman <jaltman@columbia.edu>] *) For compatibility reasons if the flag X509_V_FLAG_ISSUER_CHECK is not set then we don't set up the error code for issuer check errors to avoid possibly overwriting other errors which the callback does handle. If an application does set the flag then we assume it knows what it is doing and can handle the new informational codes appropriately. [Steve Henson] *) Fix for a nasty bug in ASN1_TYPE handling. ASN1_TYPE is used for a general "ANY" type, as such it should be able to decode anything including tagged types. However it didn't check the class so it would wrongly interpret tagged types in the same way as their universal counterpart and unknown types were just rejected. Changed so that the tagged and unknown types are handled in the same way as a SEQUENCE: that is the encoding is stored intact.
There is also a new type "V_ASN1_OTHER" which is used when the class is not universal, in this case we have no idea what the actual type is so we just lump them all together. [Steve Henson] *) On VMS, stdout may very well lead to a file that is written to in a record-oriented fashion. That means that every write() will write a separate record, which will be read separately by the programs trying to read from it. This can be very confusing. The solution is to put a BIO filter in the way that will buffer text until a linefeed is reached, and then write everything a line at a time, so every record written will be an actual line, not chunks of lines and not (usually doesn't happen, but I've seen it once) several lines in one record. BIO_f_linebuffer() is the answer. Currently, it's a VMS-only method, because that's where it has been tested well enough. [Richard Levitte] *) Remove 'optimized' squaring variant in BN_mod_mul_montgomery, it can return incorrect results. (Note: The buggy variant was not enabled in OpenSSL 0.9.5a, but it was in 0.9.6-beta[12].) [Bodo Moeller] *) Disable the check for content being present when verifying detached signatures in pk7_smime.c. Some versions of Netscape (wrongly) include zero length content when signing messages. [Steve Henson] *) New BIO_shutdown_wr macro, which invokes the BIO_C_SHUTDOWN_WR BIO_ctrl (for BIO pairs). [Bodo Möller] *) Add DSO method for VMS. [Richard Levitte] *) Bug fix: Montgomery multiplication could produce results with the wrong sign. [Ulf Möller] *) Add RPM specification openssl.spec and modify it to build three packages. The default package contains applications, application documentation and run-time libraries. The devel package contains include files, static libraries and function documentation. The doc package contains the contents of the doc directory. The original openssl.spec was provided by Damien Miller <djm@mindrot.org>. 
[Richard Levitte] *) Add a large number of documentation files for many SSL routines. [Lutz Jaenicke <Lutz.Jaenicke@aet.TU-Cottbus.DE>] *) Add a configuration entry for Sony News 4. [NAKAJI Hiroyuki <nakaji@tutrp.tut.ac.jp>] *) Don't set the two most significant bits to one when generating a random number < q in the DSA library. [Ulf Möller] *) New SSL API mode 'SSL_MODE_AUTO_RETRY'. This disables the default behaviour that SSL_read may result in SSL_ERROR_WANT_READ (even if the underlying transport is blocking) if a handshake took place. (The default behaviour is needed by applications such as s_client and s_server that use select() to determine when to use SSL_read; but for applications that know in advance when to expect data, it just makes things more complicated.) [Bodo Moeller] *) Add RAND_egd_bytes(), which gives control over the number of bytes read from EGD. [Ben Laurie] *) Add a few more EBCDIC conditionals that make `req' and `x509' work better on such systems. [Martin Kraemer <Martin.Kraemer@MchP.Siemens.De>] *) Add two demo programs for PKCS12_parse() and PKCS12_create(). Update PKCS12_parse() so it copies the friendlyName and the keyid to the certificates aux info. [Steve Henson] *) Fix bug in PKCS7_verify() which caused an infinite loop if there was more than one signature. [Sven Uszpelkat <su@celocom.de>] *) Major change in util/mkdef.pl to include extra information about each symbol, as well as presenting variables as well as functions. This change means that there's no more need to rebuild the .num files when some algorithms are excluded. [Richard Levitte] *) Allow the verify time to be set by an application, rather than always using the current time. [Steve Henson] *) Phase 2 verify code reorganisation. The certificate verify code now looks up an issuer certificate by a number of criteria: subject name, authority key id and key usage. It also verifies self signed certificates by the same criteria.
The main comparison function is X509_check_issued() which performs these checks. A lot of changes were necessary in order to support this without completely rewriting the lookup code. Authority and subject key identifier are now cached. The LHASH 'certs' in X509_STORE has now been replaced by a STACK_OF(X509_OBJECT). This is mainly because an LHASH can't store or retrieve multiple objects with the same hash value. As a result various functions (which were all internal use only) have changed to handle the new X509_STORE structure. This will break anything that messed round with X509_STORE internally. The function X509_STORE_add_cert() now checks for an exact match, rather than just subject name. The X509_STORE API doesn't directly support the retrieval of multiple certificates matching a given criteria, however this can be worked round by performing a lookup first (which will fill the cache with candidate certificates) and then examining the cache for matches. This is probably the best we can do without throwing out X509_LOOKUP entirely (maybe later...). The X509_VERIFY_CTX structure has been enhanced considerably. All certificate lookup operations now go via a get_issuer() callback. Although this currently uses an X509_STORE it can be replaced by custom lookups. This is a simple way to bypass the X509_STORE hackery necessary to make this work and makes it possible to use more efficient techniques in future. A very simple version which uses a simple STACK for its trusted certificate store is also provided using X509_STORE_CTX_trusted_stack(). The verify_cb() and verify() callbacks now have equivalents in the X509_STORE_CTX structure. X509_STORE_CTX also has a 'flags' field which can be used to customise the verify behaviour. [Steve Henson] *) Add new PKCS#7 signing option PKCS7_NOSMIMECAP which excludes S/MIME capabilities. [Steve Henson] *) When a certificate request is read in keep a copy of the original encoding of the signed data and use it when outputting again.
Signatures then use the original encoding rather than a decoded, encoded version which may cause problems if the request is improperly encoded. [Steve Henson] *) For consistency with other BIO_puts implementations, call buffer_write(b, ...) directly in buffer_puts instead of calling BIO_write(b, ...). In BIO_puts, increment b->num_write as in BIO_write. [Peter.Sylvester@EdelWeb.fr] *) Fix BN_mul_word for the case where the word is 0. (We have to use BN_zero, we may not return a BIGNUM with an array consisting of words set to zero.) [Bodo Moeller] *) Avoid calling abort() from within the library when problems are detected, except if preprocessor symbols have been defined (such as REF_CHECK, BN_DEBUG etc.). [Bodo Moeller] *) New openssl application 'rsautl'. This utility can be used for low level RSA operations. DER public key BIO/fp routines also added. [Steve Henson] *) New Configure entry and patches for compiling on QNX 4. [Andreas Schneider <andreas@ds3.etech.fh-hamburg.de>] *) A demo state-machine implementation was sponsored by Nuron () and is now available in demos/state_machine. [Ben Laurie] *) New options added to the 'dgst' utility for signature generation and verification. [Steve Henson] *) Unrecognized PKCS#7 content types are now handled via a catch all ASN1_TYPE structure. This allows unsupported types to be stored as a "blob" and an application can encode and decode it manually. [Steve Henson] *) Fix various signed/unsigned issues to make a_strex.c compile under VC++. [Oscar Jacobsson <oscar.jacobsson@celocom.com>] *) ASN1 fixes. i2d_ASN1_OBJECT was not returning the correct length if passed a buffer. ASN1_INTEGER_to_BN failed if passed a NULL BN and its argument was negative. [Steve Henson, pointed out by Sven Heiberg <sven@tartu.cyber.ee>] *) Modification to PKCS#7 encoding routines to output definite length encoding. Since currently the whole structures are in memory there's no real point in using indefinite length constructed encoding.
However if OpenSSL is compiled with the flag PKCS7_INDEFINITE_ENCODING the old form is used. [Steve Henson] *) Added BIO_vprintf() and BIO_vsnprintf(). [Richard Levitte] *) Added more prefixes to parse for in the strings written through a logging bio, to cover all the levels that are available through syslog. The prefixes are now:

PANIC, EMERG, EMR => LOG_EMERG
ALERT, ALR => LOG_ALERT
CRIT, CRI => LOG_CRIT
ERROR, ERR => LOG_ERR
WARNING, WARN, WAR => LOG_WARNING
NOTICE, NOTE, NOT => LOG_NOTICE
INFO, INF => LOG_INFO
DEBUG, DBG => LOG_DEBUG

and as before, if none of those prefixes are present at the beginning of the string, LOG_ERR is chosen. On Win32, the LOG_* levels are mapped according to this:

LOG_EMERG, LOG_ALERT, LOG_CRIT, LOG_ERR => EVENTLOG_ERROR_TYPE
LOG_WARNING => EVENTLOG_WARNING_TYPE
LOG_NOTICE, LOG_INFO, LOG_DEBUG => EVENTLOG_INFORMATION_TYPE

[Richard Levitte] *) Made it possible to reconfigure with just the configuration argument "reconf" or "reconfigure". The command line arguments are stored in Makefile.ssl in the variable CONFIGURE_ARGS, and are retrieved from there when reconfiguring. [Richard Levitte] *) MD4 implemented. [Assar Westerlund <assar@sics.se>, Richard Levitte] *) Add the arguments -CAfile and -CApath to the pkcs12 utility. [Richard Levitte] *) The obj_dat.pl script was messing up the sorting of object names. The reason was that it compared the quoted version of strings as a result "OCSP" > "OCSP Signing" because " > SPACE. Changed script to store unquoted versions of names and add quotes on output. It was also omitting some names from the lookup table if they were given a default value (that is if SN is missing it is given the same value as LN and vice versa), these are now added on the grounds that if an object has a name we should be able to look it up. Finally added warning output when duplicate short or long names are found. [Steve Henson] *) Changes needed for Tandem NSK.
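The prefix-to-level mapping described in the logging-bio entry above can be sketched in plain C. This is an illustration only, not the actual BIO code; the helper name prefix_to_level() and the LVL_* constants are invented stand-ins for the syslog LOG_* values.

```c
#include <assert.h>
#include <string.h>

/* Invented stand-ins for the syslog LOG_* levels. */
enum { LVL_EMERG, LVL_ALERT, LVL_CRIT, LVL_ERR,
       LVL_WARNING, LVL_NOTICE, LVL_INFO, LVL_DEBUG };

/* Hypothetical helper mirroring the prefix table above: longer
 * prefixes are listed before their shorter variants so that e.g.
 * "ERROR" is tried before "ERR". Unmatched strings map to LVL_ERR,
 * as the changelog entry describes for LOG_ERR. */
static int prefix_to_level(const char *msg)
{
    static const struct { const char *prefix; int level; } map[] = {
        { "PANIC", LVL_EMERG }, { "EMERG", LVL_EMERG }, { "EMR", LVL_EMERG },
        { "ALERT", LVL_ALERT }, { "ALR", LVL_ALERT },
        { "CRIT", LVL_CRIT },   { "CRI", LVL_CRIT },
        { "ERROR", LVL_ERR },   { "ERR", LVL_ERR },
        { "WARNING", LVL_WARNING }, { "WARN", LVL_WARNING }, { "WAR", LVL_WARNING },
        { "NOTICE", LVL_NOTICE }, { "NOTE", LVL_NOTICE }, { "NOT", LVL_NOTICE },
        { "INFO", LVL_INFO },   { "INF", LVL_INFO },
        { "DEBUG", LVL_DEBUG }, { "DBG", LVL_DEBUG },
    };
    size_t i;
    for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
        if (strncmp(msg, map[i].prefix, strlen(map[i].prefix)) == 0)
            return map[i].level;
    return LVL_ERR; /* default when no prefix matches */
}
```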
[Scott Uroff <scott@xypro.com>] *) Fix SSL 2.0 rollback checking: Due to an off-by-one error in RSA_padding_check_SSLv23(), special padding was never detected and thus the SSL 3.0/TLS 1.0 countermeasure against protocol version rollback attacks was not effective. In s23_clnt.c, don't use special rollback-attack detection padding (RSA_SSLV23_PADDING) if SSL 2.0 is the only protocol enabled in the client; similarly, in s23_srvr.c, don't do the rollback check if SSL 2.0 is the only protocol enabled in the server. [Bodo Moeller] *) Make it possible to get hexdumps of unprintable data with 'openssl asn1parse'. By implication, the functions ASN1_parse_dump() and BIO_dump_indent() are added. [Richard Levitte] *) New functions ASN1_STRING_print_ex() and X509_NAME_print_ex(); these print out strings and name structures based on various flags including RFC2253 support and proper handling of multibyte characters. Added options to the 'x509' utility to allow the various flags to be set. [Steve Henson] *) Various fixes to use ASN1_TIME instead of ASN1_UTCTIME. Also change the functions X509_cmp_current_time() and X509_gmtime_adj() to work with an ASN1_TIME structure; this will enable certificates using GeneralizedTime in validity dates to be checked. [Steve Henson] *) Make the NEG_PUBKEY_BUG code (which tolerates invalid negative public key encodings) on by default, NO_NEG_PUBKEY_BUG can be set to disable it. [Steve Henson] *) New function c2i_ASN1_OBJECT() which acts on ASN1_OBJECT content octets. An i2c_ASN1_OBJECT is unnecessary because the encoding can be trivially obtained from the structure. [Steve Henson] *) crypto/err.c locking bugfix: Use write locks (CRYPTO_w_[un]lock), not read locks (CRYPTO_r_[un]lock). [Bodo Moeller] *) A first attempt at creating official support for shared libraries through configuration.
I've kept it so the default is static libraries only, and the OpenSSL programs are always statically linked for now, but there are preparations for dynamic linking in place. This has been tested on Linux and Tru64. [Richard Levitte] *) Randomness polling function for Win9x, as described in: Peter Gutmann, Software Generation of Practically Strong Random Numbers. [Ulf Möller] *) Fix so PRNG is seeded in req if using an already existing DSA key. [Steve Henson] *) New options to smime application. -inform and -outform allow alternative formats for the S/MIME message including PEM and DER. The -content option allows the content to be specified separately. This should make things like Netscape form signing output easier to verify. [Steve Henson] *) Fix the ASN1 encoding of tags using the 'long form'. [Steve Henson] *) New ASN1 functions, i2c_* and c2i_* for INTEGER and BIT STRING types. These convert content octets to and from the underlying type. The actual tag and length octets are already assumed to have been read in and checked. These are needed because all other string types have virtually identical handling apart from the tag. By having versions of the ASN1 functions that just operate on content octets IMPLICIT tagging can be handled properly. It also allows the ASN1_ENUMERATED code to be cut down because ASN1_ENUMERATED and ASN1_INTEGER are identical apart from the tag. [Steve Henson] *) Change the handling of OID objects as follows:

- New object identifiers are inserted in objects.txt, following the syntax given in objects.README.
- objects.pl is used to process obj_mac.num and create a new obj_mac.h.
- obj_dat.pl is used to create a new obj_dat.h, using the data in obj_mac.h.

This is currently kind of a hack, and the perl code in objects.pl isn't very elegant, but it works as I intended. The simplest way to check that it worked correctly is to look in obj_dat.h and check the array nid_objs and make sure the objects haven't moved around (this is important!).
Additions are OK, as well as consistent name changes. [Richard Levitte] *) better than others it just uses the word 'NEW' in the certificate request header lines. Some software needs this. [Steve Henson] *) Reorganise password command line arguments: now passwords can be obtained from various sources. Delete the PEM_cb function and make it the default behaviour: i.e. if the callback is NULL and the usrdata argument is not NULL interpret it as a null terminated pass phrase. If usrdata and the callback are NULL then the pass phrase is prompted for as usual. [Steve Henson] *) Add support for the Compaq Atalla crypto accelerator. If it is installed, the support is automatically enabled. The resulting binaries will autodetect the card and use it if present. [Ben Laurie and Compaq Inc.] *) Work around for Netscape hang bug. This sends certificate request and server done in one record. Since this is perfectly legal in the SSL/TLS protocol it isn't a "bug" option and is on by default. See the bugs/SSLv3 entry for more info. [Steve Henson] *) HP-UX tune-up: new unified configs, HP C compiler bug workaround. [Andy Polyakov] *) Add -rand argument to smime and pkcs12 applications and read/write of seed file. [Steve Henson] *) New 'passwd' tool for crypt(3) and apr1 password hashes. [Bodo Moeller] *) Add command line password options to the remaining applications. [Steve Henson] *) Bug fix for BN_div_recp() for numerators with an even number of bits. [Ulf Möller] *) More tests in bntest.c, and changed test_bn output. [Ulf Möller] *) ./config recognizes MacOS X now. [Andy Polyakov] *) Bug fix for BN_div() when the first words of num and divisor are equal (it gave wrong results if (rem=(n1-q*d0)&BN_MASK2) < d0). [Ulf Möller] *) Add support for various broken PKCS#8 formats, and command line options to produce them. [Steve Henson] *) New functions BN_CTX_start(), BN_CTX_get() and BN_CTX_end() to get temporary BIGNUMs from a BN_CTX.
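The frame-based scratch pattern of BN_CTX_start()/BN_CTX_get()/BN_CTX_end() can be illustrated with a toy plain-C analogue. This is a sketch only, not OpenSSL's code: the pool type and pool_* names are invented, and double slots stand in for BIGNUMs.

```c
#include <assert.h>

#define POOL_SIZE 16

/* Invented toy pool: a function "starts" a frame, "gets" scratch
 * slots from it, and "ends" the frame, which releases every slot
 * obtained since the matching start -- the same discipline the
 * BN_CTX entry above describes for temporary BIGNUMs. */
typedef struct {
    double slot[POOL_SIZE];   /* stands in for the pool of BIGNUMs */
    int used;                 /* slots handed out so far */
    int frame[POOL_SIZE];     /* stack of saved 'used' marks */
    int depth;                /* current frame nesting */
} scratch_pool;

static void pool_start(scratch_pool *p) { p->frame[p->depth++] = p->used; }

static double *pool_get(scratch_pool *p)
{
    if (p->used >= POOL_SIZE)
        return 0;             /* pool exhausted: caller must bail out */
    return &p->slot[p->used++];
}

static void pool_end(scratch_pool *p) { p->used = p->frame[--p->depth]; }
```

The point of the design is that a function never frees individual temporaries; one pool_end() (like BN_CTX_end()) returns everything at once, which keeps error paths short.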
[Ulf Möller] *) Correct return values in BN_mod_exp_mont() and BN_mod_exp2_mont() for p == 0. [Ulf Möller] *) Change the SSLeay_add_all_*() functions to OpenSSL_add_all_*() and include a #define from the old name to the new. The original intent was that statically linked binaries could for example just call SSLeay_add_all_ciphers() to just add ciphers to the table and not link with digests. This never worked because SSLeay_add_all_digests() and SSLeay_add_all_ciphers() were in the same source file so calling one would link with the other. They are now in separate source files. [Steve Henson] *) Add a new -notext option to 'ca' and a -pubkey option to 'spkac'. [Steve Henson] *) Use a less unusual form of the Miller-Rabin primality test (it used a binary algorithm for exponentiation integrated into the Miller-Rabin loop, our standard modexp algorithms are faster). [Bodo Moeller] *) Support for the EBCDIC character set completed. [Martin Kraemer <Martin.Kraemer@Mch.SNI.De>] *) Source code cleanups: use const where appropriate, eliminate casts, use void * instead of char * in lhash. [Ulf Möller] *) Bugfix: ssl3_send_server_key_exchange was not restartable (the state was not changed to SSL3_ST_SW_KEY_EXCH_B, and because of this the server could overwrite ephemeral keys that the client has already seen). [Bodo Moeller] *) Turn DSA_is_prime into a macro that calls BN_is_prime, using 50 iterations of the Rabin-Miller test. DSA_generate_parameters now uses BN_is_prime_fasttest (with 50 iterations of the Rabin-Miller test as required by the appendix to FIPS PUB 186[-1]) instead of DSA_is_prime. As BN_is_prime_fasttest includes trial division, DSA parameter generation becomes much faster. 
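The "trial division first, then Rabin-Miller" strategy that makes BN_is_prime_fasttest faster can be illustrated with a toy plain-C version for small integers. This is not OpenSSL's BIGNUM implementation; the function name is invented, and the fixed witness set {2, 3, 5, 7} is a known deterministic choice for 32-bit inputs rather than the random witnesses the library uses.

```c
#include <assert.h>
#include <stdint.h>

static int is_probable_prime(uint32_t n)
{
    static const uint32_t small[] = { 2, 3, 5, 7, 11, 13, 17, 19 };
    static const uint32_t witness[] = { 2, 3, 5, 7 }; /* deterministic for n < 3.2e9 */
    uint32_t d;
    int r, s, i;

    if (n < 2) return 0;
    /* Trial division: cheap, and rejects most composites before any
     * expensive modular exponentiation is attempted. */
    for (i = 0; i < 8; i++) {
        if (n == small[i]) return 1;
        if (n % small[i] == 0) return 0;
    }
    /* Write n-1 = d * 2^s with d odd. */
    d = n - 1; s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    /* Rabin-Miller rounds. */
    for (i = 0; i < 4; i++) {
        uint64_t x = 1, b = witness[i] % n, e = d;
        while (e) {               /* modular exponentiation: b^d mod n */
            if (e & 1) x = x * b % n;
            b = b * b % n;
            e >>= 1;
        }
        if (x == 1 || x == n - 1) continue;
        for (r = 1; r < s; r++) {
            x = x * x % n;
            if (x == n - 1) break;
        }
        if (r == s) return 0;     /* witness proves n composite */
    }
    return 1;
}
```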
This implies a change for the callback functions in DSA_is_prime and DSA_generate_parameters: The callback function is called once for each positive witness in the Rabin-Miller test, not just occasionally in the inner loop; and the parameters to the callback function now provide an iteration count for the outer loop rather than for the current invocation of the inner loop. DSA_generate_parameters additionally can call the callback function with an 'iteration count' of -1, meaning that a candidate has passed the trial division test (when q is generated from an application-provided seed, trial division is skipped). [Bodo Moeller] *) New function BN_is_prime_fasttest that optionally does trial division before starting the Rabin-Miller test and has an additional BN_CTX * argument (whereas BN_is_prime always has to allocate at least one BN_CTX). 'callback(1, -1, cb_arg)' is called when a number has passed the trial division stage. [Bodo Moeller] *) Fix for bug in CRL encoding. The validity dates weren't being handled as ASN1_TIME. [Steve Henson] *) New -pkcs12 option to CA.pl script to write out a PKCS#12 file. [Steve Henson] *) New function BN_pseudo_rand(). [Ulf Möller] *) Clean up BN_mod_mul_montgomery(): replace the broken (and unreadable) bignum version of BN_from_montgomery() with the working code from SSLeay 0.9.0 (the word based version is faster anyway), and clean up the comments. [Ulf Möller] *) Avoid a race condition in s2_clnt.c (function get_server_hello) that made it impossible to use the same SSL_SESSION data structure in SSL2 clients in multiple threads. [Bodo Moeller] *) The return value of RAND_load_file() no longer counts bytes obtained by stat(). RAND_load_file(..., -1) is new and uses the complete file to seed the PRNG (previously an explicit byte count was required). [Ulf Möller, Bodo Möller] *) Clean up CRYPTO_EX_DATA functions: some of these didn't have prototypes, used (char *) instead of (void *) and had casts all over the place.
[Steve Henson] *) Make BN_generate_prime() return NULL on error if ret!=NULL. [Ulf Möller] *) Retain source code compatibility for BN_prime_checks macro: BN_is_prime(..., BN_prime_checks, ...) now uses BN_prime_checks_for_size to determine the appropriate number of Rabin-Miller iterations. [Ulf Möller] *) Diffie-Hellman uses "safe" primes: DH_check() return code renamed to DH_CHECK_P_NOT_SAFE_PRIME. (Check if this is true? OpenPGP calls them "strong".) [Ulf Möller] *) Merge the functionality of "dh" and "gendh" programs into a new program "dhparam". The old programs are retained for now but will handle DH keys (instead of parameters) in future. [Steve Henson] *) Make the ciphers, s_server and s_client programs check the return values when a new cipher list is set. [Steve Henson] *) Enhance the SSL/TLS cipher mechanism to correctly handle the TLS 56bit ciphers. Before when the 56bit ciphers were enabled the sorting was wrong. The syntax for the cipher sorting has been extended to support sorting by cipher-strength (using the strength_bits hard coded in the tables). The new command is "@STRENGTH" (see also doc/apps/ciphers.pod). Fix a bug in the cipher-command parser: when supplying a cipher command string with an "undefined" symbol (neither command nor alphanumeric [A-Za-z0-9]), ssl_set_cipher_list used to hang in an endless loop. Now an error is flagged. Due to the strength-sorting extension, the code of the ssl_create_cipher_list() function was completely rearranged. I hope that the readability was also increased :-) [Lutz Jaenicke <Lutz.Jaenicke@aet.TU-Cottbus.DE>] *) Minor change to 'x509' utility. The -CAcreateserial option now uses 1 for the first serial number and places 2 in the serial number file. This avoids problems when the root CA is created with serial number zero and the first user certificate has the same issuer name and serial number as the root CA. [Steve Henson] *) Fixes to X509_ATTRIBUTE utilities, change the 'req' program so it uses the new code.
Add documentation for this stuff. [Steve Henson] *) Changes to X509_ATTRIBUTE utilities. These have been renamed from X509_*() to X509at_*() on the grounds that they don't handle X509 structures and behave in an analogous way to the X509v3 functions: they shouldn't be called directly but wrapper functions should be used instead. So we also now have some wrapper functions that call the X509at functions when passed certificate requests. (TO DO: similar things can be done with PKCS#7 signed and unsigned attributes, PKCS#12 attributes and a few other things. Some of these need some d2i or i2d and print functionality because they handle more complex structures.) [Steve Henson] *) Add missing #ifndefs that caused missing symbols when building libssl as a shared library without RSA. Use #ifndef NO_SSL2 instead of NO_RSA in ssl/s2*.c. [Kris Kennaway <kris@hub.freebsd.org>, modified by Ulf Möller] *) Precautions against using the PRNG uninitialized: RAND_bytes() now has a return value which indicates the quality of the random data (1 = ok, 0 = not seeded). Also an error is recorded on the thread's error queue. New function RAND_pseudo_bytes() generates output that is guaranteed to be unique but not unpredictable. RAND_add is like RAND_seed, but takes an extra argument for an entropy estimate (RAND_seed always assumes full entropy). [Ulf Möller] *) Do more iterations of Rabin-Miller probable prime test (specifically, 3 for 1024-bit primes, 6 for 512-bit primes, 12 for 256-bit primes instead of only 2 for all lengths; see BN_prime_checks_for_size definition in crypto/bn/bn_prime.c for the complete table). This guarantees a false-positive rate of at most 2^-80 for random input. [Bodo Moeller] *) Rewrite ssl3_read_n (ssl/s3_pkt.c) avoiding a couple of bugs. [Bodo Moeller] *) to use this. Also make SSL_SESSION_print() print out the verify return code. [Steve Henson] *) Add manpage for the pkcs12 command. 
Also change the default behaviour so MAC iteration counts are used unless the new -nomaciter option is used. This improves file security and only older versions of MSIE (4.0 for example) need it. [Steve Henson] *) Honor the no-xxx Configure options when creating .DEF files. [Ulf Möller] *) Add PKCS#10 attributes to field table: challengePassword, unstructuredName and unstructuredAddress. These are taken from draft PKCS#9 v2.0 but are compatible with v1.2 provided no international characters are used. More changes to X509_ATTRIBUTE code: allow the setting of types based on strings. Remove the 'loc' parameter when adding attributes because these will be a SET OF encoding which is sorted in ASN1 order. [Steve Henson] *) Initial changes to the 'req' utility to allow request generation automation. This will allow an application to just generate a template file containing all the field values and have req construct the request. Initial support for X509_ATTRIBUTE handling. Stacks of these are used all over the place including certificate requests and PKCS#7 structures. They are currently handled manually where necessary with some primitive wrappers for PKCS#7. The new functions behave in a manner analogous to the X509 extension functions: they allow attributes to be looked up by NID and added. Later something similar to the X509V3 code would be desirable to automatically handle the encoding, decoding and printing of the more complex types. The string types like challengePassword can be handled by the string table functions. Also modified the multi byte string table handling. Now there is a 'global mask' which masks out certain types. The table itself can use the flag STABLE_NO_MASK to ignore the mask setting: this is useful when for example there is only one permissible type (as in countryName) and using the mask might result in no valid types at all. 
[Steve Henson] *) Clean up 'Finished' handling, and add functions SSL_get_finished and SSL_get_peer_finished to allow applications to obtain the latest Finished messages sent to the peer or expected from the peer, respectively. (SSL_get_peer_finished is usually the Finished message actually received from the peer, otherwise the protocol will be aborted.) As the Finished message. [Bodo Moeller] *) Enhanced support for Alpha Linux is added. Now ./config checks if the host supports BWX extension and if Compaq C is present on the $PATH. Just exploiting of the BWX extension results in 20-30% performance kick for some algorithms, e.g. DES and RC4 to mention a couple. Compaq C in turn generates ~20% faster code for MD5 and SHA1. [Andy Polyakov] *) Add support for MS "fast SGC". This is arguably a violation of the SSL3/TLS protocol. Netscape SGC does two handshakes: the first with weak crypto and after checking the certificate is SGC a second one with strong crypto. MS SGC stops the first handshake after receiving the server certificate message and sends a second client hello. Since a server will typically do all the time consuming operations before expecting any further messages from the client (server key exchange is the most expensive) there is little difference between the two. To get OpenSSL to support MS SGC we have to permit a second client hello message after we have sent server done. In addition we have to reset the MAC if we do get this second client hello. [Steve Henson] *) Add a function 'd2i_AutoPrivateKey()' this will automatically decide if a DER encoded private key is RSA or DSA traditional format. Changed d2i_PrivateKey_bio() to use it. This is only needed for the "traditional" format DER encoded private key. Newer code should use PKCS#8 format which has the key type encoded in the ASN1 structure. Added DER private key support to pkcs8 application. 
[Steve Henson] *) SSL 3/TLS 1 servers now don't request certificates when an anonymous ciphersuite has been selected (as required by the SSL 3/TLS 1 specifications). Exception: When SSL_VERIFY_FAIL_IF_NO_PEER_CERT is set, we interpret this as a request to violate the specification (the worst that can happen is a handshake failure, and 'correct' behaviour would result in a handshake failure anyway). [Bodo Moeller] *) In SSL_CTX_add_session, take into account that there might be multiple SSL_SESSION structures with the same session ID (e.g. when two threads concurrently obtain them from an external cache). The internal cache can handle only one SSL_SESSION with a given ID, so if there's a conflict, we now throw out the old one to achieve consistency. [Bodo Moeller] *) Add OIDs for idea and blowfish in CBC mode. This will allow both to be used in PKCS#5 v2.0 and S/MIME. Also add checking to some routines that use cipher OIDs: some ciphers do not have OIDs defined and so they cannot be used for S/MIME and PKCS#5 v2.0 for example. [Steve Henson] *) Simplify the trust setting structure and code. Now we just have two sequences of OIDs for trusted and rejected settings. These will typically have values the same as the extended key usage extension and any application specific purposes. The trust checking code now has a default behaviour: it will just check for an object with the same NID as the passed id. Functions can be provided to override either the default behaviour or the behaviour for a given id. SSL client, server and email already have functions in place for compatibility: they check the NID and also return "trusted" if the certificate is self signed. [Steve Henson] *) Add d2i,i2d bio/fp functions for PrivateKey: these convert the traditional format into an EVP_PKEY structure. [Steve Henson] *) Add a password callback function PEM_cb() which either prompts for a password if usr_data is NULL or otherwise assumes it is a null terminated password.
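The pass phrase callback convention described in the PEM_cb() entry above can be sketched in plain C. This is a hypothetical stand-alone version, not the real PEM code; the name demo_pem_cb() is invented, and the prompting branch is stubbed out.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the described convention: if userdata is non-NULL it is
 * taken as a NUL-terminated pass phrase and copied into buf; a real
 * implementation would otherwise prompt the user interactively.
 * Returns the number of pass phrase bytes, or -1 on failure. */
static int demo_pem_cb(char *buf, int size, int rwflag, void *userdata)
{
    (void)rwflag;                 /* 0 = decrypting, 1 = encrypting */
    if (userdata != 0) {
        int len = (int)strlen((const char *)userdata);
        if (len > size) len = size;
        memcpy(buf, userdata, (size_t)len);
        return len;
    }
    return -1;                    /* would prompt interactively here */
}
```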
Allow passwords to be passed on the command line, environment or config files in a few more utilities. [Steve Henson] *) Add a bunch of DER and PEM functions to handle PKCS#8 format private keys. Add some short names for PKCS#8 PBE algorithms and allow them to be specified on the command line for the pkcs8 and pkcs12 utilities. Update documentation. [Steve Henson] *) Support for ASN1 "NULL" type. This could be handled before by using ASN1_TYPE but there wasn't any function that would try to read a NULL and produce an error if it couldn't. For compatibility we also have ASN1_NULL_new() and ASN1_NULL_free() functions but these are faked and don't allocate anything because they don't need to. [Steve Henson] *) Initial support for MacOS is now provided. Examine INSTALL.MacOS for details. [Andy Polyakov, Roy Woods <roy@centicsystems.ca>] *) Rebuild of the memory allocation routines used by OpenSSL code and possibly others as well. The purpose is to make an interface that provides hooks so anyone can build a separate set of allocation and deallocation routines to be used by OpenSSL, for example memory pool implementations, or something else, which was previously hard since Malloc(), Realloc() and Free() were defined as macros having the values malloc, realloc and free, respectively (except for Win32 compilations). The same is provided for memory debugging code. OpenSSL already comes with functionality to find memory leaks, but this gives people a chance to debug other memory problems. With these changes, a new set of functions and macros has appeared. If one wants to debug memory anyway, CRYPTO_malloc_debug_init() (which gives the standard debugging functions that come with OpenSSL) or CRYPTO_set_mem_debug_functions() (tells OpenSSL to use functions provided by the library user) must be used.
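The hook mechanism described in the memory-allocation entry above can be sketched in plain C. This is an illustration only; the names set_mem_functions() and counting_malloc() are invented and the real CRYPTO_set_mem_functions API differs.

```c
#include <assert.h>
#include <stdlib.h>

/* Allocation goes through replaceable function pointers instead of
 * macros hard-wired to malloc/realloc/free. */
static void *(*mem_malloc)(size_t) = malloc;
static void *(*mem_realloc)(void *, size_t) = realloc;
static void (*mem_free)(void *) = free;

/* Install a complete set of hooks; refuse partial sets so the three
 * routines always stay consistent with each other. */
static int set_mem_functions(void *(*m)(size_t),
                             void *(*r)(void *, size_t),
                             void (*f)(void *))
{
    if (m == 0 || r == 0 || f == 0)
        return 0;
    mem_malloc = m; mem_realloc = r; mem_free = f;
    return 1;
}

/* Example hook: count allocations, e.g. as a crude leak-hunting aid. */
static int alloc_count = 0;
static void *counting_malloc(size_t n) { alloc_count++; return malloc(n); }
```

A memory-pool implementation, as the entry suggests, would simply supply its own three routines here instead of the counting wrapper.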
   When the standard debugging functions are used, CRYPTO_dbg_set_options() can be used to request additional information: CRYPTO_dbg_set_options(V_CRYPTO_MDEBUG_xxx) corresponds to setting the CRYPTO_MDEBUG_xxx macro when compiling the library.

   Also, things like CRYPTO_set_mem_functions will always give the expected result (the new set of functions is used for allocation and deallocation) at all times, regardless of platform and compiler options.

   To finish it up, some functions that were never used in any other way than through macros have a new API and new semantics:

      CRYPTO_dbg_malloc()
      CRYPTO_dbg_realloc()
      CRYPTO_dbg_free()

   All macros of value have retained their old syntax.
   [Richard Levitte and Bodo Moeller]

*) Some S/MIME fixes. The OID for SMIMECapabilities was wrong, the ordering of SMIMECapabilities wasn't in "strength order" and there was a missing NULL in the AlgorithmIdentifier for the SHA1 signature algorithm.
   [Steve Henson]

*) Some ASN1 types with illegal zero length encoding (INTEGER, ENUMERATED and OBJECT IDENTIFIER) choked the ASN1 routines.
   [Frans Heymans <fheymans@isaserver.be>, modified by Steve Henson]

*) Merge in my S/MIME library for OpenSSL. This provides a simple S/MIME API on top of the PKCS#7 code, a MIME parser (with enough functionality to handle multipart/signed properly) and a utility called 'smime' to call all this stuff. This is based on code I originally wrote for Celo who have kindly allowed it to be included in OpenSSL.
   [Steve Henson]

*) Add variants des_set_key_checked and des_set_key_unchecked of des_set_key (aka des_key_sched). Global variable des_check_key decides which of these is called by des_set_key; this way des_check_key behaves as it always did, but applications and the library itself, which was buggy for des_check_key == 1, have a cleaner way to pick the version they need.
   [Bodo Moeller]

*) New function PKCS12_newpass() which changes the password of a PKCS12 structure.
   [Steve Henson]

*) Modify X509_TRUST and X509_PURPOSE so they also use a static and dynamic mix. In both cases the ids can be used as an index into the table. Also modified the X509_TRUST_add() and X509_PURPOSE_add() functions so they accept a list of the field values and the application doesn't need to directly manipulate the X509_TRUST structure.
   [Steve Henson]

*) Modify the ASN1_STRING_TABLE stuff so it also uses bsearch and doesn't need initialising.
   [Steve Henson]

*) Modify the way the V3 extension code looks up extensions. This now works in a similar way to the object code: we have some "standard" extensions in a static table which is searched with OBJ_bsearch() and the application can add dynamic ones if needed. The file crypto/x509v3/ext_dat.h now has the info: this file needs to be updated whenever a new extension is added to the core code and kept in ext_nid order. There is a simple program 'tabtest.c' which checks this. New extensions are not added too often so this file can readily be maintained manually.

   There are two big advantages in doing things this way. The extensions can be looked up immediately and no longer need to be "added" using X509V3_add_standard_extensions(): this function now does nothing. [Side note: I get *lots* of email saying the extension code doesn't work because people forget to call this function.] Also no dynamic allocation is done unless new extensions are added: so if we don't add custom extensions there is no need to call X509V3_EXT_cleanup().
   [Steve Henson]

*) Modify the enc utility's salting as follows: make salting the default. Add a magic header, so unsalted files fail gracefully instead of just decrypting to garbage. This is because not salting is a big security hole, so people should be discouraged from doing it.
   [Ben Laurie]

*) Fixes and enhancements to the 'x509' utility. It allowed a message digest to be passed on the command line but it only used this parameter when signing a certificate.
   Modified so all relevant operations are affected by the digest parameter including the -fingerprint and -x509toreq options. Also -x509toreq choked if a DSA key was used because it didn't fix the digest.
   [Steve Henson]

*) Initial certificate chain verify code. Currently tests the untrusted certificates for consistency with the verify purpose (which is set when the X509_STORE_CTX structure is set up) and checks the pathlength.

   There is a NO_CHAIN_VERIFY compilation option to keep the old behaviour: this is because it will reject chains with invalid extensions whereas every previous version of OpenSSL and SSLeay made no checks at all.

   Trust code: checks the root CA for the relevant trust settings. Trust settings have an initial value consistent with the verify purpose: e.g. if the verify purpose is for SSL client use it expects the CA to be trusted for SSL client use. However the default value can be changed to permit custom trust settings: one example of this would be to only trust certificates from a specific "secure" set of CAs.

   Also added X509_STORE_CTX_new() and X509_STORE_CTX_free() functions which should be used for version portability: especially since the verify structure is likely to change more often now.

   SSL integration. Add purpose and trust to SSL_CTX and SSL and functions to set them. If not set then assume SSL clients will verify SSL servers and vice versa.

   Two new options to the verify program: -untrusted allows a set of untrusted certificates to be passed in and -purpose which sets the intended purpose of the certificate. If a purpose is set then the new chain verify code is used to check extension consistency.
   [Steve Henson]

*) Support for the authority information access extension.
   [Steve Henson]

*) Modify RSA and DSA PEM read routines to transparently handle PKCS#8 format private keys. New *_PUBKEY_* functions that handle public keys in a format compatible with certificate SubjectPublicKeyInfo structures.
   Unfortunately there were already functions called *_PublicKey_* which used various odd formats so these are retained for compatibility: however the DSA variants were never in a public release so they have been deleted. Changed dsa/rsa utilities to handle the new format: note no releases ever handled public keys so we should be OK.

   The primary motivation for this change is to avoid the same fiasco that dogs private keys: there are several incompatible private key formats, some of which are standard and some OpenSSL specific, and they require various evil hacks to allow partial transparent handling, and even then it doesn't work with DER formats. Given the option anything other than PKCS#8 should be dumped: but the other formats have to stay in the name of compatibility.

   With public keys and the benefit of hindsight one standard format is used which works with EVP_PKEY, RSA or DSA structures: though it clearly returns an error if you try to read the wrong kind of key.

   Added a -pubkey option to the 'x509' utility to output the public key.

*) The certificate and CRL reading routines would fail if the file contained no certificates or no CRLs: added a new function to read in both types and return the number read: this means that if none are read it will be an error. The DER versions of the certificate and CRL reader would always fail because it isn't possible to mix certificates and CRLs in DER format without choking one or the other routine. Changed this to just read a certificate: this is the best we can do. Also modified the code in apps/verify.c to take notice of return codes: it was previously attempting to read in certificates from NULL pointers and ignoring any errors: this is one reason why the cert and CRL reader seemed to work. It doesn't check return codes from the default certificate routines: these may well fail if the certificates aren't installed.
   [Steve Henson]

*) Code to support otherName option in GeneralName.
   [Steve Henson]

*) First update to verify code.
   Change the verify utility so it warns if it is passed a self signed certificate: for consistency with the normal behaviour.

   X509_verify has been modified so that it will now verify a self signed certificate if *exactly* the same certificate appears in the store: it was previously impossible to trust a single self signed certificate. This means that:

      openssl verify ss.pem

   now gives a warning about a self signed certificate but

      openssl verify -CAfile ss.pem ss.pem

   is OK.
   [Steve Henson]

*) For servers, store verify_result in SSL_SESSION data structure (and add it to external session representation). This is needed when client certificate verification fails, but an application-provided verification callback (set by SSL_CTX_set_cert_verify_callback) allows accepting the session anyway (i.e. leaves x509_store_ctx->error != X509_V_OK but returns 1): when the session is reused, we have to set ssl->verify_result to the appropriate error code to avoid security holes.
   [Bodo Moeller, problem pointed out by Lutz Jaenicke]

*) Fix a bug in the new PKCS#7 code: it didn't consider the case in PKCS7_dataInit() where the signed PKCS7 structure didn't contain any existing data because it was being created.
   [Po-Cheng Chen <pocheng@nst.com.tw>, slightly modified by Steve Henson]

*) Add a salt to the key derivation routines in enc.c. This forms the first 8 bytes of the encrypted file. Also add a -S option to allow a salt to be input on the command line.
   [Steve Henson]

*) New function X509_cmp(). Oddly enough there wasn't a function to compare two certificates. We do this by working out the SHA1 hash and comparing that. X509_cmp() will be needed by the trust code.
   [Steve Henson]

*) SSL_get1_session() is like SSL_get_session(), but increments the reference count in the SSL_SESSION returned.
   [Geoff Thorpe <geoff@eu.c2.net>]

*) Fix for 'req': it was adding a null to request attributes. Also change the X509_LOOKUP and X509_INFO code to handle certificate auxiliary information.
   [Steve Henson]

*) Add support for 40 and 64 bit RC2 and RC4 algorithms: document the 'enc' command.
   [Steve Henson]

*) Add the possibility to add extra information to the memory leak detecting output, to form tracebacks, showing from where each allocation originated: CRYPTO_push_info("constant string") adds the string plus current file name and line number to a per-thread stack, CRYPTO_pop_info() does the obvious, CRYPTO_remove_all_info() is like calling CRYPTO_pop_info() until the stack is empty. Also updated memory leak detection code to be multi-thread-safe.
   [Richard Levitte]

*) Add options -text and -noout to pkcs7 utility and delete the encryption options which never did anything. Update docs.
   [Steve Henson]

*) Add options to some of the utilities to allow the pass phrase to be included on either the command line (not recommended on OSes like Unix) or read from the environment. Update the manpages and fix a few bugs.
   [Steve Henson]

*) Add a few manpages for some of the openssl commands.
   [Steve Henson]

*) Fix the -revoke option in ca. It was freeing up memory twice, leaking, and not finding already revoked certificates.
   [Steve Henson]

*) Extensive changes to support certificate auxiliary information. This involves the use of the X509_CERT_AUX structure and X509_AUX functions. An X509_AUX function such as PEM_read_X509_AUX() can still read in a certificate file in the usual way but it will also read in any additional "auxiliary information". By doing things this way a fair degree of compatibility can be retained: existing certificates can have this information added using the new 'x509' options.

   Current auxiliary information includes an "alias" and some trust settings. The trust settings will ultimately be used in enhanced certificate chain verification routines: currently a certificate can only be trusted if it is self signed and then it is trusted for all purposes.
   [Steve Henson]

*) Fix assembler for Alpha (tested only on DEC OSF, not Linux or *BSD).
   The problem was that one of the replacement routines had not been working since SSLeay releases. For now the offending routine has been replaced with non-optimised assembler. Even so, this now gives around 95% performance improvement for 1024 bit RSA signs.
   [Mark Cox]

*) Hack to fix PKCS#7 decryption when used with some unorthodox RC2 handling. Most clients have the effective key size in bits equal to the key length in bits: so a 40 bit RC2 key uses a 40 bit (5 byte) key. A few however don't do this and instead use the size of the decrypted key to determine the RC2 key length and the AlgorithmIdentifier to determine the effective key length. In this case the effective key length can still be 40 bits but the key length can be 168 bits for example. This is fixed by manually forcing an RC2 key into the EVP_PKEY structure because the EVP code can't currently handle unusual RC2 key sizes: it always assumes the key length and effective key length are equal.
   [Steve Henson]

*) Add a bunch of functions that should simplify the creation of X509_NAME structures. Now you should be able to do:

      X509_NAME_add_entry_by_txt(nm, "CN", MBSTRING_ASC, "Steve", -1, -1, 0);

   and have it automatically work out the correct field type and fill in the structures. The more adventurous can try:

      X509_NAME_add_entry_by_txt(nm, field, MBSTRING_UTF8, str, -1, -1, 0);

   and it will (hopefully) work out the correct multibyte encoding.
   [Steve Henson]

*) Change the 'req' utility to use the new field handling and multibyte copy routines. Previously, DN field creation was handled in an ad hoc way in req, ca, and x509 which was rather broken and didn't support BMPStrings or UTF8Strings. Since some software doesn't implement BMPStrings or UTF8Strings yet, they can be enabled using the config file with the dirstring_type option. See the new comment in the default openssl.cnf for more info.
   [Steve Henson]

*) Make crypto/rand/md_rand.c more robust:
   - Assure unique random numbers after fork().
   - Make sure that concurrent threads access the global counter and md serializably so that we never lose entropy in them or use exactly the same state in multiple threads. Access to the large state is not always serializable because the additional locking could be a performance killer, and md should be large enough anyway.
   [Bodo Moeller]

*) New file apps/app_rand.c with commonly needed functionality for handling the random seed file. Use the random seed file in some applications that previously did not: ca, dsaparam -genkey (which also ignored its '-rand' option), s_client, s_server, x509 (when signing). Except on systems with /dev/urandom, it is crucial to have a random seed file at least for key creation, DSA signing, and for DH exchanges; for RSA signatures we could do without one.

   gendh and gendsa (unlike genrsa) used to read only the first byte of each file listed in the '-rand' option. The function as previously found in genrsa is now in app_rand.c and is used by all programs that support '-rand'.
   [Bodo Moeller]

*) In RAND_write_file, use mode 0600 for creating files; don't just chmod when it may be too late.
   [Bodo Moeller]

*) Report an error from X509_STORE_load_locations when X509_LOOKUP_load_file or X509_LOOKUP_add_dir failed.
   [Bill Perry]

*) New function ASN1_mbstring_copy(): this copies a string in either ASCII, Unicode, Universal (4 bytes per character) or UTF8 format into an ASN1_STRING type. A mask of permissible types is passed and it chooses the "minimal" type to use, or returns an error if no type is suitable.
   [Steve Henson]

*) Add function equivalents to the various macros in asn1.h. The old macros are retained with an M_ prefix. Code inside the library can use the M_ macros. External code (including the openssl utility) should *NOT* use them, in order to be "shared library friendly".
   [Steve Henson]

*) Add various functions that can check a certificate's extensions to see if it is usable for various purposes such as SSL client, server or S/MIME and CAs of these types.
   This is currently VERY EXPERIMENTAL but will ultimately be used for certificate chain verification. Also added a -purpose flag to the x509 utility to print out all the purposes.
   [Steve Henson]

*) Add a CRYPTO_EX_DATA to the X509 certificate structure and associated functions.
   [Steve Henson]

*) New X509V3_{X509,CRL,REVOKED}_get_d2i() functions. These will search for, obtain and decode an extension and obtain its critical flag. This allows all the necessary extension code to be handled in a single function call.
   [Steve Henson]

*) RC4 tune-up featuring 30-40% performance improvement on most RISC platforms. See crypto/rc4/rc4_enc.c for further details.
   [Andy Polyakov]

*) New -noout option to asn1parse. This causes no output to be produced; its main use is when combined with -strparse and -out to extract data from a file (which may not be in ASN.1 format).
   [Steve Henson]

*) Fix for pkcs12 program. It was hashing an invalid certificate pointer when producing the local key id.
   [Richard Levitte <levitte@stacken.kth.se>]

*) New option -dhparam in s_server. This allows a DH parameter file to be stated explicitly. If it is not stated then it tries the first server certificate file. The previous behaviour hard coded the filename "server.pem".
   [Steve Henson]

*) Add -pubin and -pubout options to the rsa and dsa commands. These allow a public key to be input or output. For example:

      openssl rsa -in key.pem -pubout -out pubkey.pem

   Also added necessary DSA public key functions to handle this.
   [Steve Henson]

*) Fix so PKCS7_dataVerify() doesn't crash if no certificates are contained in the message. This was handled by allowing X509_find_by_issuer_and_serial() to tolerate a NULL passed to it.
   [Steve Henson, reported by Sampo Kellomaki <sampo@mail.neuronio.pt>]

*) Fix for bug in d2i_ASN1_bytes(): other ASN1 functions add an extra null to the end of the strings whereas this didn't. This would cause problems if strings read with d2i_ASN1_bytes() were later modified.
   [Steve Henson, reported by Arne Ansper <arne@ats.cyber.ee>]

*) Fix for base64 decode bug. When a base64 bio reads only one line of data and it contains EOF it will end up returning an error. This is triggered by input 46 bytes long. The cause is due to the way base64 BIOs find the start of base64 encoded data. They do this by trying a trial decode on each line until they find one that works. When they do, a flag is set and it starts again knowing it can pass all the data directly through the decoder. Unfortunately it doesn't reset the context it uses. This means that if EOF is reached an attempt is made to pass two EOFs through the context and this causes the resulting error. This can also cause other problems as well. As is usual with these problems it takes *ages* to find and the fix is trivial: move one line.
   [Steve Henson, reported by ian@uns.ns.ac.yu (Ivan Nejgebauer)]

*) Ugly workaround to get s_client and s_server working under Windows. The old code wouldn't work because it needed to select() on sockets and the tty (for keypresses and to see if data could be written). Win32 only supports select() on sockets so we select() with a 1s timeout on the sockets and then see if any characters are waiting to be read; if none are present then we retry. We also assume we can always write data to the tty. This isn't nice because the code then blocks until we've received a complete line of data and it is effectively polling the keyboard at 1s intervals: however it's quite a bit better than not working at all :-) A dedicated Windows application might handle this with an event loop for example.
   [Steve Henson]

*) Enhance the RSA_METHOD structure. Now there are two extra methods, rsa_sign and rsa_verify. When the RSA_FLAGS_SIGN_VER option is set these functions will be called when RSA_sign() and RSA_verify() are used. This is useful if rsa_pub_dec() and rsa_priv_enc() equivalents are not available.
   For this to work properly RSA_public_decrypt() and RSA_private_encrypt() should *not* be used: RSA_sign() and RSA_verify() must be used instead. This necessitated the support of an extra signature type NID_md5_sha1 for SSL signatures and modifications to the SSL library to use it instead of calling RSA_public_decrypt() and RSA_private_encrypt().
   [Steve Henson]

*) Add new -verify, -CAfile and -CApath options to the crl program; these will look up a CRL issuer's certificate and verify the signature in a similar way to the verify program. Tidy up the crl program so it no longer accesses structures directly. Make the ASN1 CRL parsing a bit less strict. It will now permit CRL extensions even if it is not a V2 CRL: this will allow it to tolerate some broken CRLs.
   [Steve Henson]

*) Initialize all non-automatic variables each time one of the openssl sub-programs is started (this is necessary as they may be started multiple times from the "OpenSSL>" prompt).
   [Lennart Bang, Bodo Moeller]

*) Preliminary compilation option RSA_NULL which disables RSA crypto without removing all other RSA functionality (this is what NO_RSA does). This is so (for example) those in the US can disable those operations covered by the RSA patent while allowing storage and parsing of RSA keys and RSA key generation.
   [Steve Henson]

*) Non-copying interface to BIO pairs. (still largely untested)
   [Bodo Moeller]

*) New function ASN1_tag2str() to convert an ASN1 tag to a descriptive ASCII string. This was handled independently in various places before.
   [Steve Henson]

*) New functions UTF8_getc() and UTF8_putc() that parse and generate UTF8 strings a character at a time.
   [Steve Henson]

*) Use client_version from client hello to select the protocol (s23_srvr.c) and for RSA client key exchange verification (s3_srvr.c), as required by the SSL 3.0/TLS 1.0 specifications.
   [Bodo Moeller]

*) Add various utility functions to handle SPKACs; these were previously handled by poking round in the structure internals.
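As an editorial aside on the UTF8_getc()/UTF8_putc() entry above, character-at-a-time UTF-8 generation is a small amount of bit shuffling. A minimal sketch covering only the 1-, 2- and 3-byte forms (the real function also handles longer sequences; the name utf8_put here is illustrative):

```c
/* Encode one code point into buf; returns the number of bytes written,
 * or -1 for values this sketch does not handle (>= 0x10000). */
static int utf8_put(unsigned char *buf, unsigned long value)
{
    if (value < 0x80) {                 /* 1 byte: 0xxxxxxx */
        buf[0] = (unsigned char)value;
        return 1;
    }
    if (value < 0x800) {                /* 2 bytes: 110xxxxx 10xxxxxx */
        buf[0] = (unsigned char)(0xC0 | (value >> 6));
        buf[1] = (unsigned char)(0x80 | (value & 0x3F));
        return 2;
    }
    if (value < 0x10000) {              /* 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx */
        buf[0] = (unsigned char)(0xE0 | (value >> 12));
        buf[1] = (unsigned char)(0x80 | ((value >> 6) & 0x3F));
        buf[2] = (unsigned char)(0x80 | (value & 0x3F));
        return 3;
    }
    return -1;
}
```

The decoding direction (a UTF8_getc() analogue) is the mirror image: inspect the leading byte's high bits to learn the sequence length, then reassemble the payload bits.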
   Added new function NETSCAPE_SPKI_print() to print out SPKAC and a new utility 'spkac' to print, verify and generate SPKACs. Based on an original idea from Massimiliano Pala <madwolf@comune.modena.it> but extensively modified.
   [Steve Henson]

*) RIPEMD160 is operational on all platforms and is back in 'make test'.
   [Andy Polyakov]

*) Allow the config file extension section to be overwritten on the command line. Based on an original idea from Massimiliano Pala <madwolf@comune.modena.it>. The new option is called -extensions and can be applied to ca, req and x509. Also -reqexts to override the request extensions in req and -crlexts to override the crl extensions in ca.
   [Steve Henson]

*) Add new feature to the SPKAC handling in ca. Now you can include the same field multiple times by preceding it by "XXXX.", for example:

      1.OU="Unit name 1"
      2.OU="Unit name 2"

   this is the same syntax as used in the req config file.
   [Steve Henson]

*) Allow certificate extensions to be added to certificate requests. These are specified in a 'req_extensions' option of the req section of the config file. They can be printed out with the -text option to req but are otherwise ignored at present.
   [Steve Henson]

*) Fix a horrible bug in enc_read() in crypto/evp/bio_enc.c: if the first data read consisted of only the final block it would not be decrypted because EVP_CipherUpdate() would correctly report zero bytes had been decrypted. A misplaced 'break' also meant the decrypted final block might not be copied until the next read.
   [Steve Henson]

*) Initial support for DH_METHOD. Again based on RSA_METHOD. Also added a few extra parameters to the DH structure: these will be useful if for example we want the value of 'q' or implement X9.42 DH.
   [Steve Henson]

*) Initial support for DSA_METHOD. This is based on the RSA_METHOD and provides hooks that allow the default DSA functions, or functions on a "per key" basis, to be replaced.
   This allows hardware acceleration and hardware key storage to be handled without major modification to the library. Also added low level modexp hooks and a CRYPTO_EX structure and associated functions.
   [Steve Henson]

*) Add a new flag to memory BIOs, BIO_FLAG_MEM_RDONLY. This marks the BIO as "read only": it can't be written to and the buffer it points to will not be freed. Reading from a read only BIO is much more efficient than a normal memory BIO. This was added because there are several times when an area of memory needs to be read from a BIO. The previous method was to create a memory BIO and write the data to it; this results in two copies of the data and an O(n^2) reading algorithm. There is a new function BIO_new_mem_buf() which creates a read only memory BIO from an area of memory. Also modified the PKCS#7 routines to use read only memory BIOs.
   [Steve Henson]

*) Bugfix: ssl23_get_client_hello did not work properly when called in state SSL23_ST_SR_CLNT_HELLO_B, i.e. when the first 7 bytes of a SSLv2-compatible client hello for SSLv3 or TLSv1 could be read, but a retry condition occurred while trying to read the rest.
   [Bodo Moeller]

*) The PKCS7_ENC_CONTENT_new() function was setting the content type as NID_pkcs7_encrypted by default: this was wrong since this should almost always be NID_pkcs7_data. Also modified PKCS7_set_type() to handle the encrypted data type: this is a more sensible place to put it and it allows the PKCS#12 code that duplicated this functionality to be tidied up.
   [Steve Henson]

*) Changed the obj_dat.pl script so it takes its input and output files on the command line. This should avoid shell escape redirection problems under Win32.
   [Steve Henson]

*) Initial support for certificate extension requests; these are included in things like Xenroll certificate requests. Included functions to allow extensions to be obtained and added.
   [Steve Henson]

*) -crlf option to s_client and s_server for sending newlines as CRLF (as required by many protocols).
   [Bodo Moeller]

Changes between 0.9.3a and 0.9.4  [09 Aug 1999]

*) Install libRSAglue.a when OpenSSL is built with RSAref.
   [Ralf S. Engelschall]

*) A few more ``#ifndef NO_FP_API / #endif'' pairs for consistency.
   [Andrija Antonijevic <TheAntony2@bigfoot.com>]

*) Fix -startdate and -enddate (which was missing) arguments to the 'ca' program.
   [Steve Henson]

*) New function DSA_dup_DH, which duplicates DSA parameters/keys as DH parameters/keys (q is lost during that conversion, but the resulting DH parameters contain its length).

   For 1024-bit p, DSA_generate_parameters followed by DSA_dup_DH is much faster than DH_generate_parameters (which creates parameters where p = 2*q + 1), and also the smaller q makes DH computations much more efficient (160-bit exponentiation instead of 1024-bit exponentiation); so this provides a convenient way to support DHE ciphersuites in SSL/TLS servers (see ssl/ssltest.c). It is of utter importance to use

      SSL_CTX_set_options(s_ctx, SSL_OP_SINGLE_DH_USE);

   or

      SSL_set_options(ssl, SSL_OP_SINGLE_DH_USE);

   when such DH parameters are used, because otherwise small subgroup attacks may become possible!
   [Bodo Moeller]

*) Avoid memory leak in i2d_DHparams.
   [Bodo Moeller]

*) Allow the -k option to be used more than once in the enc program: this allows the same encrypted message to be read by multiple recipients.
   [Steve Henson]

*) New function OBJ_obj2txt(buf, buf_len, a, no_name): this converts an ASN1_OBJECT to a text string. If the "no_name" parameter is set then it will always use the numerical form of the OID, even if it has a short or long name.
   [Steve Henson]

*) Added an extra RSA flag: RSA_FLAG_EXT_PKEY. Previously the rsa_mod_exp method only got called if p,q,dmp1,dmq1,iqmp components were present, otherwise bn_mod_exp was called.
   In the case of hardware keys for example no private key components need be present and it might store extra data in the RSA structure, which cannot be accessed from bn_mod_exp. By setting RSA_FLAG_EXT_PKEY rsa_mod_exp will always be called for private key operations.
   [Steve Henson]

*) Added support for SPARC Linux.
   [Andy Polyakov]

*) pem_password_cb function type incompatibly changed from

      typedef int pem_password_cb(char *buf, int size, int rwflag);

   to

      ....(char *buf, int size, int rwflag, void *userdata);

   so that applications can pass data to their callbacks: The PEM[_ASN1]_{read,write}... functions and macros now take an additional void * argument, which is just handed through whenever the password callback is called.
   [Damien Miller <dmiller@ilogic.com.au>; tiny changes by Bodo Moeller]

   New function SSL_CTX_set_default_passwd_cb_userdata.

   Compatibility note: As many C implementations push function arguments onto the stack in reverse order, the new library version is likely to interoperate with programs that have been compiled with the old pem_password_cb definition (PEM_whatever takes some data that happens to be on the stack as its last argument, and the callback just ignores this garbage); but there is no guarantee whatsoever that this will work.

*) The -DPLATFORM="\"$(PLATFORM)\"" definition and the similar -DCFLAGS=... (both in crypto/Makefile.ssl for use by crypto/cversion.c) caused problems not only on Windows, but also on some Unix platforms. To avoid problematic command lines, these definitions are now in an auto-generated file crypto/buildinf.h (created by crypto/Makefile.ssl for standard "make" builds, by util/mk1mf.pl for "mk1mf" builds).
   [Bodo Moeller]

*) MIPS III/IV assembler module is reimplemented.
   [Andy Polyakov]

*) More DES library cleanups: remove references to srand/rand and delete an unused file.
   [Ulf Möller]

*) Add support for the free Netwide assembler (NASM) under Win32, since not many people have MASM (ml) and it can be hard to obtain.
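As an editorial aside on the pem_password_cb change above, threading a void *userdata through to a callback is the standard C idiom for giving callbacks access to application state without globals. A minimal sketch (names other than the callback shape itself are illustrative):

```c
#include <string.h>

/* The new four-argument callback shape: the final void * is opaque to
 * the library and handed through untouched. */
typedef int password_cb(char *buf, int size, int rwflag, void *userdata);

/* Example callback: treat userdata as a NUL-terminated password. */
static int cb_from_userdata(char *buf, int size, int rwflag, void *userdata)
{
    (void)rwflag;                       /* 0 = reading, 1 = writing */
    const char *pw = (const char *)userdata;
    int len = (int)strlen(pw);
    if (len > size)
        len = size;                     /* truncate to the caller's buffer */
    memcpy(buf, pw, len);
    return len;                         /* bytes of password produced */
}

/* A library-side routine (hypothetical) just forwards userdata verbatim. */
static int get_password(char *buf, int size, password_cb *cb, void *userdata)
{
    return cb(buf, size, 0, userdata);
}
```

Because the library never interprets userdata, the same mechanism serves prompts, fixed passwords, or pointers into larger application structures.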
   This is currently experimental but it seems to work OK and pass all the tests. Check out INSTALL.W32 for info.
   [Steve Henson]

*) Fix memory leaks in s3_clnt.c: All non-anonymous SSL3/TLS1 connections without temporary keys kept an extra copy of the server key, and connections with temporary keys did not free everything in case of an error.
   [Bodo Moeller]

*) New function RSA_check_key and new openssl rsa option -check for verifying the consistency of RSA keys.
   [Ulf Moeller, Bodo Moeller]

*) Various changes to make Win32 compile work:
   1. Casts to avoid "loss of data" warnings in p5_crpt2.c
   2. Change unsigned int to int in b_dump.c to avoid "signed/unsigned comparison" warnings.
   3. Add sk_<TYPE>_sort to DEF file generator and do make update.
   [Steve Henson]

*) Add a debugging option to PKCS#5 v2 key generation function: when you #define DEBUG_PKCS5V2 passwords, salts, iteration counts and derived keys are printed to stderr.
   [Steve Henson]

*) Copy the flags in ASN1_STRING_dup().
   [Roman E. Pavlov <pre@mo.msk.ru>]

*) The x509 application mishandled signing requests containing DSA keys when the signing key was also DSA and the parameters didn't match.

   It was supposed to omit the parameters when they matched the signing key: the verifying software was then supposed to automatically use the CA's parameters if they were absent from the end user certificate. Omitting parameters is no longer recommended. The test was also the wrong way round! This was probably due to unusual behaviour in EVP_cmp_parameters() which returns 1 if the parameters match. This meant that parameters were omitted when they *didn't* match and the certificate was useless. Certificates signed with 'ca' didn't have this bug.
   [Steve Henson, reported by Doug Erickson <Doug.Erickson@Part.NET>]

*) Memory leak checking (-DCRYPTO_MDEBUG) had some problems.
   The interface is as follows: Applications can use CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_ON) aka MemCheck_start(), and CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_OFF) aka MemCheck_stop(); "off" is now the default. The library internally uses CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_DISABLE) aka MemCheck_off() and CRYPTO_mem_ctrl(CRYPTO_MEM_CHECK_ENABLE) aka MemCheck_on() to disable memory-checking temporarily.

   Some inconsistent states that previously were possible (and were even the default) are now avoided.

   -DCRYPTO_MDEBUG_TIME is new and additionally stores the current time with each memory chunk allocated; this is occasionally more helpful than just having a counter. -DCRYPTO_MDEBUG_THREAD is also new and adds the thread ID. -DCRYPTO_MDEBUG_ALL enables all of the above, plus any future extensions.
   [Bodo Moeller]

*) Introduce "mode" for SSL structures (with defaults in SSL_CTX), which largely parallels "options", but is for changing API behaviour, whereas "options" are about protocol behaviour. Initial "mode" flags are:

   SSL_MODE_ENABLE_PARTIAL_WRITE
      Allow SSL_write to report success when a single record has been written.

   SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER
      Don't insist that SSL_write retries use the same buffer location. (But all of the contents must be copied!)
   [Bodo Moeller]

*) Bugfix: SSL_set_options ignored its parameter, only SSL_CTX_set_options worked.

*) Fix problems with no-hmac etc.
   [Ulf Möller, pointed out by Brian Wellington <bwelling@tislabs.com>]

*) New functions RSA_get_default_method(), RSA_set_method() and RSA_get_method(). These allow replacement of RSA_METHODs without having to mess around with the internals of an RSA structure.
   [Steve Henson]

*) Fix memory leaks in DSA_do_sign and DSA_is_prime. Also really enable memory leak checks in openssl.c and in some test programs.
   [Chad C. Mulligan, Bodo Moeller]

*) Fix a bug in d2i_ASN1_INTEGER() and i2d_ASN1_INTEGER() which can mess up the length of negative integers.
     This has now been simplified to just store the length when it is
     first determined and use it later, rather than trying to keep track
     of where data is copied and updating it to point to the end.
     [Steve Henson, reported by Brien Wheeler
      <bwheeler@authentica-security.com>]

  *) Add a new function PKCS7_signatureVerify. This allows the
     verification of a PKCS#7 signature but with the signing certificate
     passed to the function itself. This contrasts with PKCS7_dataVerify
     which assumes the certificate is present in the PKCS#7 structure.
     This isn't always the case: certificates can be omitted from a
     PKCS#7 structure and be distributed by "out of band" means (such as
     a certificate database).
     [Steve Henson]

  *) Complete the PEM_* macros with DECLARE_PEM versions to replace the
     function prototypes in pem.h, also change util/mkdef.pl to add the
     necessary function names.
     [Steve Henson]

  *) mk1mf.pl (used by Windows builds) did not properly read the
     options set by Configure in the top level Makefile, and Configure
     was not even able to write more than one option correctly.
     Fixed, now "no-idea no-rc5 -DCRYPTO_MDEBUG" etc. works as intended.
     [Bodo Moeller]

  *) New functions CONF_load_bio() and CONF_load_fp() to allow a config
     file to be loaded from a BIO or FILE pointer. The BIO version will
     for example allow memory BIOs to contain config info.
     [Steve Henson]

  *) New function "CRYPTO_num_locks" that returns CRYPTO_NUM_LOCKS.
     Whoever hopes to achieve shared-library compatibility across versions
     must use this, not the compile-time macro.
     (Exercise 0.9.4: Which is the minimum library version required by
     such programs?)
     Note: All this applies only to multi-threaded programs, others don't
     need locks.
     [Bodo Moeller]

  *) Add missing case to s3_clnt.c state machine -- one of the new SSL
     tests through a BIO pair triggered the default case, i.e.
     SSLerr(...,SSL_R_UNKNOWN_STATE).
     [Bodo Moeller]

  *) New "BIO pair" concept (crypto/bio/bss_bio.c) so that applications
     can use the SSL library even if none of the specific BIOs is
     appropriate.
     [Bodo Moeller]

  *) Fix a bug in i2d_DSAPublicKey() which meant it returned the wrong
     value for the encoded length.
     [Jeon KyoungHo <khjeon@sds.samsung.co.kr>]

  *) Add initial documentation of the X509V3 functions.
     [Steve Henson]

  *) Add a new pair of functions PEM_write_PKCS8PrivateKey() and
     PEM_write_bio_PKCS8PrivateKey() that are equivalent to
     PEM_write_PrivateKey() and PEM_write_bio_PrivateKey() but use the
     more secure PKCS#8 private key format with a high iteration count.
     [Steve Henson]

  *) Fix determination of Perl interpreter: A perl or perl5 _directory_
     in $PATH was also accepted as the interpreter.
     [Ralf S. Engelschall]

  *) Fix demos/sign/sign.c: well there wasn't anything strictly speaking
     wrong with it but it was very old and did things like calling
     PEM_ASN1_read() directly and used MD5 for the hash not to mention
     some unusual formatting.
     [Steve Henson]

  *) Fix demos/selfsign.c: it used obsolete and deleted functions, changed
     to use the new extension code.
     [Steve Henson]

  *) Implement the PEM_read/PEM_write functions in crypto/pem/pem_all.c
     with macros. This should make it easier to change their form, add
     extra arguments etc. Fix a few PEM prototypes which didn't have
     cipher as a constant.
     [Steve Henson]

  *) Add to configuration table a new entry that can specify an
     alternative name for unistd.h (for pre-POSIX systems); we need this
     for NeXTstep, according to Mark Crispin <MRC@Panda.COM>.
     [Bodo Moeller]

#if 0
  *) DES CBC did not update the IV. Weird.
     [Ben Laurie]
#else
     des_cbc_encrypt does not update the IV, but des_ncbc_encrypt does.
     Changing the behaviour of the former might break existing programs --
     where IV updating is needed, des_ncbc_encrypt can be used.
#endif

  *) When bntest is run from "make test" it drives bc to check its
     calculations, as well as internally checking them.
     If an internal check fails, it needs to cause bc to give a non-zero
     result or make test carries on without noticing the failure. Fixed.
     [Ben Laurie]

  *) DES library cleanups.
     [Ulf Möller]

  *) Add support for PKCS#5 v2.0 PBE algorithms. This will permit PKCS#8
     to be used with any cipher unlike PKCS#5 v1.5 which can at most
     handle 64 bit ciphers. NOTE: although the key derivation function
     has been verified against some published test vectors it has not
     been extensively tested yet. Added a -v2 "cipher" option to pkcs8
     application to allow the use of v2.0.
     [Steve Henson]

  *) Instead of "mkdir -p", which is not fully portable, use new
     Perl script "util/mkdir-p.pl".
     [Bodo Moeller]

  *) Rewrite the way password based encryption (PBE) is handled. It used
     to assume that the ASN1 AlgorithmIdentifier parameter was a
     PBEParameter structure. This was true for the PKCS#5 v1.5 and
     PKCS#12 PBE algorithms but doesn't apply to PKCS#5 v2.0 where it can
     be something else. Now the 'parameter' field of the
     AlgorithmIdentifier is passed to the underlying key generation
     function so it must do its own ASN1 parsing. This has also changed
     the EVP_PBE_CipherInit() function which now has a 'parameter'
     argument instead of literal salt and iteration count values and the
     function EVP_PBE_ALGOR_CipherInit() has been deleted.
     [Steve Henson]

  *) Support for PKCS#5 v1.5 compatible password based encryption
     algorithms and PKCS#8 functionality. New 'pkcs8' application linked
     to openssl. Needed to change the PEM_STRING_EVP_PKEY value which was
     just "PRIVATE KEY" because this clashed with PKCS#8 unencrypted
     string. Since this value was just used as a "magic string" and not
     used directly its value doesn't matter.
     [Steve Henson]

  *) Introduce some semblance of const correctness to BN. Shame C doesn't
     support mutable.
     [Ben Laurie]

  *) "linux-sparc64" configuration (ultrapenguin).
     [Ray Miller <ray.miller@oucs.ox.ac.uk>]
     "linux-sparc" configuration.
     [Christian Forster <fo@hawo.stw.uni-erlangen.de>]

  *) config now generates no-xxx options for missing ciphers.
     [Ulf Möller]

  *) Support the EBCDIC character set (work in progress).
     File ebcdic.c not yet included because it has a different license.
     [Martin Kraemer <Martin.Kraemer@MchP.Siemens.De>]

  *) Support BS2000/OSD-POSIX.
     [Martin Kraemer <Martin.Kraemer@MchP.Siemens.De>]

  *) Make callbacks for key generation use void * instead of char *.
     [Ben Laurie]

  *) Make S/MIME samples compile (not yet tested).
     [Ben Laurie]

  *) Additional typesafe stacks.
     [Ben Laurie]

  *) New configuration variants "bsdi-elf-gcc" (BSD/OS 4.x).
     [Bodo Moeller]

Changes between 0.9.3 and 0.9.3a [29 May 1999]

  *) New configuration variant "sco5-gcc".

  *) Updated some demos.
     [Sean O Riordain, Wade Scholine]

  *) Add missing BIO_free at exit of pkcs12 application.
     [Wu Zhigang]

  *) Fix memory leak in conf.c.
     [Steve Henson]

  *) Updates for Win32 to assembler version of MD5.
     [Steve Henson]

  *) Set #! path to perl in apps/der_chop to where we found it instead of
     using a fixed path.
     [Bodo Moeller]

  *) SHA library changes for irix64-mips4-cc.
     [Andy Polyakov]

  *) Improvements for VMS support.
     [Richard Levitte]

Changes between 0.9.2b and 0.9.3 [24 May 1999]

  *) Bignum library bug fix. IRIX 6 passes "make test" now!
     This also avoids the problems with SC4.2 and unpatched SC5.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) New functions sk_num, sk_value and sk_set to replace the previous
     macros. These are required because the typesafe stack would
     otherwise break existing code. If old code used a structure member
     which used to be STACK and is now STACK_OF (for example cert in a
     PKCS7_SIGNED structure) with sk_num or sk_value it would produce an
     error because the num, data members are not present in STACK_OF.
     Now it just produces a warning. sk_set replaces the old method of
     assigning a value to sk_value (e.g. sk_value(x, i) = y) which the
     library used in a few cases.
     Any code that does this will no longer work (and should use sk_set
     instead) but this could be regarded as a "questionable" behaviour
     anyway.
     [Steve Henson]

  *) Fix most of the other PKCS#7 bugs. The "experimental" code can now
     correctly handle encrypted S/MIME data.
     [Steve Henson]

  *) Change type of various DES function arguments from des_cblock (which
     means, in function argument declarations, pointer to char) to
     des_cblock * (meaning pointer to array with 8 char elements), which
     allows the compiler to do more typechecking; it was like that back
     in SSLeay, but with lots of ugly casts.

     Introduce new type const_des_cblock.
     [Bodo Moeller]

  *) Reorganise the PKCS#7 library and get rid of some of the more
     obvious problems: find RecipientInfo structure that matches
     recipient certificate and initialise the ASN1 structures properly
     based on passed cipher.
     [Steve Henson]

  *) Belatedly make the BN tests actually check the results.
     [Ben Laurie]

  *) Fix the encoding and decoding of negative ASN1 INTEGERS and
     conversion to and from BNs: it was completely broken. New
     compilation option NEG_PUBKEY_BUG to allow for some broken
     certificates that encode public key elements as negative integers.
     [Steve Henson]

  *) Reorganize and speed up MD5.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) VMS support.
     [Richard Levitte <richard@levitte.org>]

  *) New option -out to asn1parse to allow the parsed structure to be
     output to a file. This is most useful when combined with the
     -strparse option to examine the output of things like OCTET STRINGS.
     [Steve Henson]

  *) Make SSL library a little more fool-proof by not requiring any
     longer that SSL_set_{accept,connect}_state be called before
     SSL_{accept,connect} may be used (SSL_set_..._state is omitted in
     many applications because usually everything *appeared* to work as
     intended anyway -- now it really works as intended).
     [Bodo Moeller]

  *) Move openssl.cnf out of lib/.
     [Ulf Möller]

  *) Fix various things to let OpenSSL even pass ``egcc -pipe -O2 -Wall
     -Wshadow -Wpointer-arith -Wcast-align -Wmissing-prototypes
     -Wmissing-declarations -Wnested-externs -Winline'' with EGCS 1.1.2+
     [Ralf S. Engelschall]

  *) Various fixes to the EVP and PKCS#7 code. It may now be able to
     handle PKCS#7 enveloped data properly.
     [Sebastian Akerman <sak@parallelconsulting.com>, modified by Steve]

  *) Create a duplicate of the SSL_CTX's CERT in SSL_new instead of
     copying pointers. The cert_st handling is changed by this in various
     ways (and thus what used to be known as ctx->default_cert is now
     called ctx->cert, since we don't resort to s->ctx->[default_]cert
     any longer when s->cert does not give us what we need).
     ssl_cert_instantiate becomes obsolete by this change. As soon as
     we've got the new code right (possibly it already is?), we have
     solved a couple of bugs of the earlier code where s->cert was used
     as if it could not have been shared with other SSL structures.

     Note that using the SSL API in certain dirty ways now will result in
     different behaviour than observed with earlier library versions:
     Changing settings for an SSL_CTX *ctx after having done
     s = SSL_new(ctx) does not influence s as it used to.

     In order to clean up things more thoroughly, inside SSL_SESSION we
     don't use CERT any longer, but a new structure SESS_CERT that holds
     per-session data (if available); currently, this is the peer's
     certificate chain and, for clients, the server's certificate and
     temporary key. CERT holds only those values that can have meaningful
     defaults in an SSL_CTX.
     [Bodo Moeller]

  *) New function X509V3_EXT_i2d() to create an X509_EXTENSION structure
     from the internal representation. Various PKCS#7 fixes: remove some
     evil casts and set the enc_dig_alg field properly based on the
     signing key type.
     [Steve Henson]

  *) Allow PKCS#12 password to be set from the command line or the
     environment.
     Let 'ca' get its config file name from the environment variables
     "OPENSSL_CONF" or "SSLEAY_CONF" (for consistency with 'req' and
     'x509').
     [Steve Henson]

  *) Allow certificate policies extension to use an IA5STRING for the
     organization field. This is contrary to the PKIX definition but
     VeriSign uses it and IE5 only recognises this form. Document 'x509'
     extension option.
     [Steve Henson]

  *) Add PEDANTIC compiler flag to allow compilation with gcc -pedantic,
     without disallowing inline assembler and the like for non-pedantic
     builds.
     [Ben Laurie]

  *) Support Borland C++ builder.
     [Janez Jere <jj@void.si>, modified by Ulf Möller]

  *) Support Mingw32.
     [Ulf Möller]

  *) SHA-1 cleanups and performance enhancements.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) Sparc v8plus assembler for the bignum library.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) Accept any -xxx and +xxx compiler options in Configure.
     [Ulf Möller]

  *) Update HPUX configuration.
     [Anonymous]

  *) Add missing sk_<type>_unshift() function to safestack.h
     [Ralf S. Engelschall]

  *) New function SSL_CTX_use_certificate_chain_file that sets the
     "extra_cert"s in addition to the certificate. (This makes sense only
     for "PEM" format files, as chains as a whole are not DER-encoded.)
     [Bodo Moeller]

  *) Support verify_depth from the SSL API.
     x509_vfy.c had what can be considered an off-by-one error: Its depth
     (which was not part of the external interface) was actually counting
     the number of certificates in a chain; now it really counts the
     depth.
     [Bodo Moeller]

  *) Bugfix in crypto/x509/x509_cmp.c: The SSLerr macro was used instead
     of X509err, which often resulted in confusing error messages since
     the error codes are not globally unique (e.g. an alleged error in
     ssl3_accept when a certificate didn't match the private key).

  *) New function SSL_CTX_set_session_id_context that allows to set a
     default value (so that you don't need SSL_set_session_id_context for
     each connection using the SSL_CTX).
     [Bodo Moeller]

  *) OAEP decoding bug fix.
     [Ulf Möller]

  *) Support INSTALL_PREFIX for package builders, as proposed by
     David Harris.
     [Bodo Moeller]

  *) New Configure options "threads" and "no-threads". For systems where
     the proper compiler options are known (currently Solaris and Linux),
     "threads" is the default.
     [Bodo Moeller]

  *) New script util/mklink.pl as a faster substitute for util/mklink.sh.
     [Bodo Moeller]

  *) Install various scripts to $(OPENSSLDIR)/misc, not to
     $(INSTALLTOP)/bin -- they shouldn't clutter directories such as
     /usr/local/bin.
     [Bodo Moeller]

  *) "make linux-shared" to build shared libraries.
     [Niels Poppe <niels@netbox.org>]

  *) New Configure option no-<cipher> (rsa, idea, rc5, ...).
     [Ulf Möller]

  *) Add the PKCS#12 API documentation to openssl.txt. Preliminary
     support for extension adding in x509 utility.
     [Steve Henson]

  *) Remove NOPROTO sections and error code comments.
     [Ulf Möller]

  *) Partial rewrite of the DEF file generator to now parse the ANSI
     prototypes.
     [Steve Henson]

  *) New Configure options --prefix=DIR and --openssldir=DIR.
     [Ulf Möller]

  *) Complete rewrite of the error code script(s). It is all now handled
     by one script at the top level which handles error code gathering,
     header rewriting and C source file generation. It should be much
     better than the old method: it now uses a modified version of Ulf's
     parser to read the ANSI prototypes in all header files (thus the old
     K&R definitions aren't needed for error creation any more) and do a
     better job of translating function codes into names. The old 'ASN1
     error code imbedded in a comment' is no longer necessary and it
     doesn't use .err files which have now been deleted. Also the error
     code call doesn't have to appear all on one line (which resulted in
     some large lines...).
     [Steve Henson]

  *) Change #include filenames from <foo.h> to <openssl/foo.h>.
     [Bodo Moeller]

  *) Change behaviour of ssl2_read when facing length-0 packets: Don't
     return 0 (which usually indicates a closed connection), but continue
     reading.
     [Bodo Moeller]

  *) Fix some race conditions.
     [Bodo Moeller]

  *) Add support for CRL distribution points extension. Add Certificate
     Policies and CRL distribution points documentation.
     [Steve Henson]

  *) Move the autogenerated header file parts to crypto/opensslconf.h.
     [Ulf Möller]

  *) Fix new 56-bit DES export ciphersuites: they were using 7 bytes
     instead of 8 of keying material. Merlin has also confirmed interop
     with this fix between OpenSSL and Baltimore C/SSL 2.0 and J/SSL 2.0.
     [Merlin Hughes <merlin@baltimore.ie>]

  *) Fix lots of warnings.
     [Richard Levitte <levitte@stacken.kth.se>]

  *) In add_cert_dir() in crypto/x509/by_dir.c, break out of the loop if
     the directory spec didn't end with a LIST_SEPARATOR_CHAR.
     [Richard Levitte <levitte@stacken.kth.se>]

  *) Fix problems with sizeof(long) == 8.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) Change functions to ANSI C.
     [Ulf Möller]

  *) Fix typos in error codes.
     [Martin Kraemer <Martin.Kraemer@MchP.Siemens.De>, Ulf Möller]

  *) Remove defunct assembler files from Configure.
     [Ulf Möller]

  *) SPARC v8 assembler BIGNUM implementation.
     [Andy Polyakov <appro@fy.chalmers.se>]

  *) Support for Certificate Policies extension: both print and set.
     Various additions to support the r2i method this uses.
     [Steve Henson]

  *) A lot of constification, and fix a bug in X509_NAME_oneline() that
     could return a const string when you are expecting an allocated
     buffer.
     [Ben Laurie]

  *) Add support for ASN1 types UTF8String and VISIBLESTRING, also the
     CHOICE types DirectoryString and DisplayText.
     [Steve Henson]

  *) Add code to allow r2i extensions to access the configuration
     database, add an LHASH database driver and add several ctx helper
     functions.
     [Steve Henson]

  *) Fix an evil bug in bn_expand2() which caused various BN functions to
     fail when they extended the size of a BIGNUM.
     [Steve Henson]

  *) Various utility functions to handle SXNet extension. Modify mkdef.pl
     to support typesafe stack.
     [Steve Henson]

  *) Fix typo in SSL_[gs]et_options().
     [Nils Frostberg <nils@medcom.se>]

  *) Delete various functions and files that belonged to the (now
     obsolete) old X509V3 handling code.
     [Steve Henson]

  *) New Configure option "rsaref".
     [Ulf Möller]

  *) Don't auto-generate pem.h.
     [Bodo Moeller]

  *) Introduce type-safe ASN.1 SETs.
     [Ben Laurie]

  *) Convert various additional casted stacks to type-safe STACK_OF()
     variants.
     [Ben Laurie, Ralf S. Engelschall, Steve Henson]

  *) Introduce type-safe STACKs. This will almost certainly break lots of
     code that links with OpenSSL (well at least cause lots of warnings),
     but fear not: the conversion is trivial, and it eliminates loads of
     evil casts. A few STACKed things have been converted already. Feel
     free to convert more. In the fullness of time, I'll do away with the
     STACK type altogether.
     [Ben Laurie]

  *) Add `openssl ca -revoke <certfile>' facility which revokes a
     certificate specified in <certfile> by updating the entry in the
     index.txt file. This way one no longer has to edit the index.txt
     file manually for revoking a certificate. The -revoke option does
     the gory details now.
     [Massimiliano Pala <madwolf@openca.org>, Ralf S. Engelschall]

  *) Fix `openssl crl -noout -text' combination where `-noout' killed the
     `-text' option at all and this way the `-noout -text' combination
     was inconsistent in `openssl crl' with the friends in `openssl
     x509|rsa|dsa'.
     [Ralf S. Engelschall]

  *) Make sure a corresponding plain text error message exists for the
     X509_V_ERR_CERT_REVOKED/23 error number which can occur when a
     verify callback function determined that a certificate was revoked.
     [Ralf S. Engelschall]

  *) Bugfix: In test/testenc, don't test "openssl <cipher>" for ciphers
     that were excluded, e.g. by -DNO_IDEA. Also, test all available
     ciphers including rc5, which was forgotten until now. In order to
     let the testing shell script know which algorithms are available, a
     new (up to now undocumented) command "openssl list-cipher-commands"
     is used.
     [Bodo Moeller]

  *) Bugfix: s_client occasionally would sleep in select() when it should
     have checked SSL_pending() first.
     [Bodo Moeller]

  *) New functions DSA_do_sign and DSA_do_verify to provide access to the
     raw DSA values prior to ASN.1 encoding.
     [Ulf Möller]

  *) Tweaks to Configure
     [Niels Poppe <niels@netbox.org>]

  *) Add support for PKCS#5 v2.0 ASN1 PBES2 structures. No other support,
     yet...
     [Steve Henson]

  *) New variables $(RANLIB) and $(PERL) in the Makefiles.
     [Ulf Möller]

  *) New config option to avoid instructions that are illegal on the
     80386. The default code is faster, but requires at least a 486.
     [Ulf Möller]

  *) Got rid of old SSL2_CLIENT_VERSION (inconsistently used) and
     SSL2_SERVER_VERSION (not used at all) macros, which are now the same
     as SSL2_VERSION anyway.
     [Bodo Moeller]

  *) New "-showcerts" option for s_client.
     [Bodo Moeller]

  *) Still more PKCS#12 integration. Add pkcs12 application to openssl
     application. Various cleanups and fixes.
     [Steve Henson]

  *) More PKCS#12 integration. Add new pkcs12 directory with Makefile.ssl
     and modify error routines to work internally. Add error codes and
     PBE init to library startup routines.
     [Steve Henson]

  *) Further PKCS#12 integration. Added password based encryption, PKCS#8
     and packing functions to asn1 and evp. Changed function names and
     error codes along the way.
     [Steve Henson]

  *) PKCS12 integration: and so it begins... First of several patches to
     slowly integrate PKCS#12 functionality into OpenSSL. Add PKCS#12
     objects to objects.h
     [Steve Henson]

  *) Add a new 'indent' option to some X509V3 extension code. Initial
     ASN1 and display support for Thawte strong extranet extension.
     [Steve Henson]

  *) Add LinuxPPC support.
     [Jeff Dubrule <igor@pobox.org>]

  *) Get rid of redundant BN file bn_mulw.c, and rename bn_div64 to
     bn_div_words in alpha.s.
     [Hannes Reinecke <H.Reinecke@hw.ac.uk> and Ben Laurie]

  *) Make sure the RSA OAEP test is skipped under -DRSAref because OAEP
     isn't supported when OpenSSL is built with RSAref.
     [Ulf Moeller <ulf@fitug.de>]

  *) Move definitions of IS_SET/IS_SEQUENCE inside crypto/asn1/asn1.h so
     they no longer are missing under -DNOPROTO.
     [Soren S. Jorvang <soren@t.dk>]

Changes between 0.9.1c and 0.9.2b [22 Mar 1999]

  *) Make SSL_get_peer_cert_chain() work in servers. Unfortunately, it
     still doesn't work when the session is reused. Coming soon!
     [Ben Laurie]

  *) Fix a security hole that allows sessions to be reused in the wrong
     context thus bypassing client cert protection! All software that
     uses client certs and session caches in multiple contexts NEEDS
     PATCHING to allow session reuse! A fuller solution is in the works.
     [Ben Laurie, problem pointed out by Holger Reif, Bodo Moeller
      (and ???)]

  *) Some more source tree cleanups (removed obsolete files
     crypto/bf/asm/bf586.pl, test/test.txt and crypto/sha/asm/f.s;
     changed permission on "config" script to be executable) and a fix
     for the INSTALL document.
     [Ulf Moeller <ulf@fitug.de>]

  *) Remove some legacy and erroneous uses of malloc, free instead of
     Malloc, Free.
     [Lennart Bang <lob@netstream.se>, with minor changes by Steve]

  *) Make rsa_oaep_test return non-zero on error.
     [Ulf Moeller <ulf@fitug.de>]

  *) Add support for native Solaris shared libraries. Configure
     solaris-sparc-sc4-pic, make, then run shlib/solaris-sc4.sh. It'd be
     nice if someone would make that last step automatic.
     [Matthias Loepfe <Matthias.Loepfe@AdNovum.CH>]

  *) ctx_size was not built with the right compiler during "make links".
     Fixed.
     [Ben Laurie]

  *) Change the meaning of 'ALL' in the cipher list. It now means
     "everything except NULL ciphers". This means the default cipher list
     will no longer enable NULL ciphers. They need to be specifically
     enabled e.g. with the string "DEFAULT:eNULL".
     [Steve Henson]

  *) Fix to RSA private encryption routines: if p < q then it would
     occasionally produce an invalid result. This will only happen with
     externally generated keys because OpenSSL (and SSLeay) ensure p > q.
     [Steve Henson]

  *) Be').
     [Matthias Loepfe <Matthias.Loepfe@adnovum.ch>]

  *) Let util/clean-depend.pl work also with older Perl 5.00x versions.
     [Matthias Loepfe <Matthias.Loepfe@adnovum.ch>]

  *) Fix Makefile.org so CC,CFLAG etc are passed to 'make links', add
     advapi32.lib to Win32 build and change the pem test comparison to
     fc.exe (thanks to Ulrich Kroener <kroneru@yahoo.com> for the
     suggestion). Fix misplaced ANSI prototypes and declarations in evp.h
     and crypto/des/ede_cbcm_enc.c.
     [Steve Henson]

  *) DES quad checksum was broken on big-endian architectures. Fixed.
     [Ben Laurie]

  *) Comment out two functions in bio.h that aren't implemented. Fix up
     the Win32 test batch file so it (might) work again. The Win32 test
     batch file is horrible: I feel ill....
     [Steve Henson]

  *) Move various #ifdefs around so NO_SYSLOG, NO_DIRENT etc are now
     selected in e_os.h. Audit of header files to check ANSI and non ANSI
     sections: 10 functions were absent from non ANSI section and not
     exported from Windows DLLs. Fixed up libeay.num for new functions.
     [Steve Henson]

  *) Make `openssl version' output lines consistent.
     [Ralf S. Engelschall]

  *) Fix Win32 symbol export lists for BIO functions: Added
     BIO_get_ex_new_index, BIO_get_ex_num, BIO_get_ex_data and
     BIO_set_ex_data to ms/libeay{16,32}.def.
     [Ralf S. Engelschall]

  *) Second round of fixing the OpenSSL perl/ stuff. It now at least
     compiled fine under Unix and passes some trivial tests I've now
     added. But the whole stuff is horribly incomplete, so a README.1ST
     with a disclaimer was added to make sure no one expects that this
     stuff really works in the OpenSSL 0.9.2 release. Additionally I've
     started to clean the XS sources up and fixed a few little bugs and
     inconsistencies in OpenSSL.{pm,xs} and openssl_bio.xs.
     [Ralf S. Engelschall]

  *) Fix the generation of two part addresses in perl.
     [Kenji Miyake <kenji@miyake.org>, integrated by Ben Laurie]

  *) Add config entry for Linux on MIPS.
     [John Tobey <jtobey@channel1.com>]

  *) Make links whenever Configure is run, unless we are on Windoze.
     [Ben Laurie]

  *) Permit extensions to be added to CRLs using crl_section in
     openssl.cnf. Currently only issuerAltName and AuthorityKeyIdentifier
     make any sense in CRLs.
     [Steve Henson]

  *) Add a useful kludge to allow package maintainers to specify compiler
     and other platform details on the command line without having to
     patch the Configure script every time: One now can use ``perl
     Configure <id>:<details>'', i.e. platform ids are allowed to have
     details appended to them (separated by colons). This is treated as
     if there were a static pre-configured entry in Configure's %table
     under key <id> with value <details> and ``perl Configure <id>'' is
     called. So, when you want to perform a quick test-compile under
     FreeBSD 3.1 with pgcc and without assembler stuff you can use ``perl
     Configure "FreeBSD-elf:pgcc:-O6:::"'' now, which overrides the
     FreeBSD-elf entry on-the-fly.
     [Ralf S. Engelschall]

  *) Disable new TLS1 ciphersuites by default: they aren't official yet.
     [Ben Laurie]

  *) Allow DSO flags like -fpic, -fPIC, -KPIC etc. to be specified on the
     `perl Configure ...' command line. This way one can compile OpenSSL
     libraries with Position Independent Code (PIC) which is needed for
     linking it into DSOs.
     [Ralf S. Engelschall]

  *) Remarkably, export ciphers were totally broken and no-one had
     noticed! Fixed.
     [Ben Laurie]

  *) Cleaned up the LICENSE document: The official contact for any
     license questions now is the OpenSSL core team under
     openssl-core@openssl.org. And add a paragraph about the dual-license
     situation to make sure people recognize that _BOTH_ the OpenSSL
     license _AND_ the SSLeay license apply to the OpenSSL toolkit.
     [Ralf S. Engelschall]

  *) General source tree makefile cleanups: Made `making xxx in yyy...'
     display consistent in the source tree and replaced `/bin/rm' by
     `rm'.
     Additionally cleaned up the `make links' target: Remove unnecessary
     semicolons, subsequent redundant removes, inline point.sh into
     mklink.sh to speed processing and no longer clutter the display with
     confusing stuff. Instead only the actually done links are displayed.
     [Ralf S. Engelschall]

  *) Permit null encryption ciphersuites, used for authentication only.
     It used to be necessary to set the preprocessor define
     SSL_ALLOW_ENULL to do this. It is now necessary to set
     SSL_FORBID_ENULL to prevent the use of null encryption.
     [Ben Laurie]

  *) Add a bunch of fixes to the PKCS#7 stuff. It used to sometimes
     reorder signed attributes when verifying signatures (this would
     break them), the detached data encoding was wrong and public keys
     obtained using X509_get_pubkey() weren't freed.
     [Steve Henson]

  *) Add text documentation for the BUFFER functions. Also added a work
     around to a Win95 console bug. This was triggered by the password
     read stuff: the last character typed gets carried over to the next
     fread(). If you were generating a new cert request using 'req' for
     example then the last character of the passphrase would be CR which
     would then enter the first field as blank.
     [Steve Henson]

  *) Added the new `Includes OpenSSL Cryptography Software' button as
     doc/openssl_button.{gif,html} which is similar in style to the old
     SSLeay button and can be used by applications based on OpenSSL to
     show the relationship to the OpenSSL project.
     [Ralf S. Engelschall]

  *) Remove confusing variables in function signatures in files
     ssl/ssl_lib.c and ssl/ssl.h.
     [Lennart Bong <lob@kulthea.stacken.kth.se>]

  *) Don't install bss_file.c under PREFIX/include/
     [Lennart Bong <lob@kulthea.stacken.kth.se>]

  *) Get the Win32 compile working again. Modify mkdef.pl so it can
     handle functions that return function pointers and has support for
     NT specific stuff. Fix mk1mf.pl and VC-32.pl to support NT
     differences also.
     Various #ifdef WIN32 and WINNTs sprinkled about the place and some
     changes from unsigned to signed types: this was killing the Win32
     compile.
     [Steve Henson]

  *) Add new certificate file to stack functions,
     SSL_add_dir_cert_subjects_to_stack() and
     SSL_add_file_cert_subjects_to_stack(). These largely supplant
     SSL_load_client_CA_file(), and can be used to add multiple certs
     easily to a stack (usually this is then handed to
     SSL_CTX_set_client_CA_list()). This means that Apache-SSL and
     similar packages don't have to mess around to add as many CAs as
     they want to the preferred list.
     [Ben Laurie]

  *) Experiment with doxygen documentation. Currently only partially
     applied to ssl/ssl_lib.c. Run doxygen with openssl.doxy as the
     configuration file.
     [Ben Laurie]

  *) Get rid of remaining C++-style comments which strict C compilers
     hate.
     [Ralf S. Engelschall, pointed out by Carlos Amengual]

  *) Changed BN_RECURSION in bn_mont.c to BN_RECURSION_MONT so it is not
     compiled in by default: it has problems with large keys.
     [Steve Henson]

  *) Add a bunch of SSL_xxx() functions for configuring the temporary RSA
     and DH private keys and/or callback functions which directly
     correspond to their SSL_CTX_xxx() counterparts but work on a
     per-connection basis. This is needed for applications which have to
     configure certificates on a per-connection basis (e.g.
     Apache+mod_ssl) instead of a per-context basis (e.g. s_server). For
     the RSA certificate situation it makes no difference, but for the
     DSA certificate situation this fixes the "no shared cipher" problem
     where the OpenSSL cipher selection procedure failed because the
     temporary keys were not taken over from the context and the API
     provided no way to reconfigure them. The new functions now let
     applications reconfigure the stuff and they are in detail:
     SSL_need_tmp_RSA, SSL_set_tmp_rsa, SSL_set_tmp_dh,
     SSL_set_tmp_rsa_callback and SSL_set_tmp_dh_callback.
     Additionally a new non-public-API function ssl_cert_instantiate() is
     used as a helper function and also to reduce code redundancy inside
     ssl_rsa.c.
     [Ralf S. Engelschall]

  *) Move s_server -dcert and -dkey options out of the undocumented
     feature area because they are useful for the DSA situation and
     should be recognized by the users.
     [Ralf S. Engelschall]

  *) Fix the cipher decision scheme for export ciphers: the export bits
     are *not* within SSL_MKEY_MASK or SSL_AUTH_MASK, they are within
     SSL_EXP_MASK. So, the original variable has to be used instead of
     the already masked variable.
     [Richard Levitte <levitte@stacken.kth.se>]

  *) Fix 'port' variable from `int' to `unsigned int' in
     crypto/bio/b_sock.c
     [Richard Levitte <levitte@stacken.kth.se>]

  *) Change type of another md_len variable in
     pk7_doit.c:PKCS7_dataFinal() from `int' to `unsigned int' because
     it's a length and initialized by EVP_DigestFinal() which expects an
     `unsigned int *'.
     [Richard Levitte <levitte@stacken.kth.se>]

  *) Don't hard-code path to Perl interpreter on shebang line of
     Configure script. Instead use the usual Shell->Perl transition
     trick.
     [Ralf S. Engelschall]

  *) Make `openssl x509 -noout -modulus' functional also for DSA
     certificates (in addition to RSA certificates) to match the
     behaviour of `openssl dsa -noout -modulus' as it's already the case
     for `openssl rsa -noout -modulus'. For RSA the -modulus is the real
     "modulus" while for DSA currently the public key is printed (a
     decision which was already done by `openssl dsa -modulus' in the
     past) which serves a similar purpose. Additionally the NO_RSA no
     longer completely removes the whole -modulus option; it now only
     avoids using the RSA stuff. Same applies to NO_DSA now, too.
     [Ralf S. Engelschall]

  *) Add Arne Ansper's reliable BIO - this is an encrypted, block-digested
     BIO. See the source (crypto/evp/bio_ok.c) for more info.
     [Arne Ansper <arne@ats.cyber.ee>]

  *) Dump the old yucky req code that tried (and failed) to allow raw
     OIDs to be added.
Now both 'req' and 'ca' can use new objects defined in the config file.
[Steve Henson]

*) Add cool BIO that does syslog (or event log on NT).
[Arne Ansper <arne@ats.cyber.ee>, integrated by Ben Laurie]

*) Add support for new TLS ciphersuites, TLS_RSA_EXPORT56_WITH_RC4_56_MD5, TLS_RSA_EXPORT56_WITH_RC2_CBC_56_MD5 and TLS_RSA_EXPORT56_WITH_DES_CBC_SHA, as specified in "56-bit Export Cipher Suites For TLS", draft-ietf-tls-56-bit-ciphersuites-00.txt.
[Ben Laurie]

*) Add preliminary config info for new extension code.
[Steve Henson]

*) Make RSA_NO_PADDING really use no padding.
[Ulf Moeller <ulf@fitug.de>]

*) Generate errors when private/public key check is done.
[Ben Laurie]

*) Overhaul for 'crl' utility. New function X509_CRL_print. Partial support for some CRL extensions and new objects added.
[Steve Henson]

*) Really fix the ASN1 IMPLICIT bug this time... Partial support for private key usage extension and fuller support for authority key id.
[Steve Henson]

*) Add OAEP encryption for the OpenSSL crypto library. OAEP is the improved padding method for RSA, which is recommended for new applications in PKCS #1 v2.0 (RFC 2437, October 1998). OAEP (Optimal Asymmetric Encryption Padding) has better theoretical foundations than the ad-hoc padding used in PKCS #1 v1.5. It is secure against Bleichenbacher's attack on RSA.
[Ulf Moeller <ulf@fitug.de>, reformatted, corrected and integrated by Ben Laurie]

*) Updates to the new SSL compression code
[Eric A. Young, (from changes to C2Net SSLeay, integrated by Mark Cox)]

*) Fix so that the version number in the master secret, when passed via RSA, checks that if TLS was proposed, but we roll back to SSLv3 (because the server will not accept higher), that the version number is 0x03,0x01, not 0x03,0x00
[Eric A. Young, (from changes to C2Net SSLeay, integrated by Mark Cox)]

*) Run extensive memory leak checks on SSL apps. Fixed *lots* of memory leaks in ssl/ relating to new X509_get_pubkey() behaviour.
Also fixes in apps/ and an unrelated leak in crypto/dsa/dsa_vrf.c
[Steve Henson]

*) Support for RAW extensions where an arbitrary extension can be created by including its DER encoding. See apps/openssl.cnf for an example.
[Steve Henson]

*) Make sure latest Perl versions don't interpret some generated C array code as Perl array code in the crypto/err/err_genc.pl script.
[Lars Weber <3weber@informatik.uni-hamburg.de>]

*) Modify ms/do_ms.bat to not generate assembly language makefiles since not many people have the assembler. Various Win32 compilation fixes and update to the INSTALL.W32 file with (hopefully) more accurate Win32 build instructions.
[Steve Henson]

*) Modify configure script 'Configure' to automatically create crypto/date.h file under Win32 and also build pem.h from pem.org. New script util/mkfiles.pl to create the MINFO file on environments that can't do a 'make files': perl util/mkfiles.pl >MINFO should work.
[Steve Henson]

*) Major rework of DES function declarations, in the pursuit of correctness and purity. As a result, many evil casts evaporated, and some weirdness, too. You may find this causes warnings in your code. Zapping your evil casts will probably fix them. Mostly.
[Ben Laurie]

*) Fix for a typo in asn1.h. Bug fix to object creation script obj_dat.pl. It considered a zero in an object definition to mean "end of object": none of the objects in objects.h have any zeros so it wasn't spotted.
[Steve Henson, reported by Erwann ABALEA <eabalea@certplus.com>]

*) Add support for Triple DES Cipher Block Chaining with Output Feedback Masking (CBCM). In the absence of test vectors, the best I have been able to do is check that the decrypt undoes the encrypt, so far. Send me test vectors if you have them.
[Ben Laurie]

*) Correct calculation of key length for export ciphers (too much space was allocated for null ciphers). This has not been tested!
[Ben Laurie]

*) Modifications to the mkdef.pl for Win32 DEF file creation.
The usage message is now correct (it understands "crypto" and "ssl" on its command line). There is also now an "update" option. This will update the util/ssleay.num and util/libeay.num files with any new functions. If you do a:

    perl util/mkdef.pl crypto ssl update

it will update them.
[Steve Henson]

*) Overhaul)
[Ralf S. Engelschall]

*) First).
[Ralf S. Engelschall]

*) More extension code. Incomplete support for subject and issuer alt name, issuer and authority key id. Change the i2v function parameters and add an extra 'crl' parameter in the X509V3_CTX structure: guess what that's for :-) Fix to ASN1 macro which messed up IMPLICIT tag and add f_enum.c which adds a2i, i2a for ENUMERATED.
[Steve Henson]

*) Preliminary support for ENUMERATED type. This is largely copied from the INTEGER code.
[Steve Henson]

*) Add new function, EVP_MD_CTX_copy() to replace frequent use of memcpy.
[Eric A. Young, (from changes to C2Net SSLeay, integrated by Mark Cox)]

*) Make sure `make rehash' target really finds the `openssl' program.
[Ralf S. Engelschall, Matthias Loepfe <Matthias.Loepfe@adnovum.ch>]

*) Squeeze another 7% of speed out of MD5 assembler, at least on a P2. I'd like to hear about it if this slows down other processors.
[Ben Laurie]

*) Add CygWin32 platform information to Configure script.
[Alan Batie <batie@aahz.jf.intel.com>]

*) Fixed ms/32all.bat script: `no_asm' -> `no-asm'
[Rainer W. Gerling <gerling@mpg-gv.mpg.de>]

*) New program nseq to manipulate netscape certificate sequences
[Steve Henson]

*) Modify crl2pkcs7 so it supports multiple -certfile arguments. Fix a few typos.
[Steve Henson]

*) Fixes to BN code. Previously the default was to define BN_RECURSION but the BN code had some problems that would cause failures when doing certificate verification and some other functions.
[Eric A. Young, (from changes to C2Net SSLeay, integrated by Mark Cox)]

*) Add ASN1 and PEM code to support netscape certificate sequences.
[Steve Henson]

*) Add several PKIX and private extended key usage OIDs.
[Steve Henson]

*) Modify the 'ca' program to handle the new extension code. Modify openssl.cnf for new extension format, add comments.
[Steve Henson]

*) More X509 V3 changes. Fix typo in v3_bitstr.c. Add support to 'req' and add a sample to openssl.cnf so req -x509 now adds appropriate CA extensions.
[Steve Henson]

*) Continued X509 V3 changes. Add to other makefiles, integrate with the error code, add initial support to X509_print() and x509 application.
[Steve Henson]

*) Take a deep breath and start adding X509 V3 extension support code. Add files in crypto/x509v3. Move original stuff to crypto/x509v3/old. All this stuff is currently isolated and isn't even compiled yet.
[Steve Henson]

*) Continuing patches for GeneralizedTime. Fix up certificate and CRL ASN1 to use ASN1_TIME and modify print routines to use ASN1_TIME_print. Removed the versions check from X509 routines when loading extensions: this allows certain broken certificates that don't set the version properly to be processed.
[Steve Henson]

*) Deal with irritating shit to do with dependencies, in YAAHW (Yet Another Ad Hoc Way) - Makefile.ssls now all contain local dependencies, which can still be regenerated with "make depend".
[Ben Laurie]

*) Spelling mistake in C version of CAST-128.
[Ben Laurie, reported by Jeremy Hylton <jeremy@cnri.reston.va.us>]

*) Changes to the error generation code. The perl script err-code.pl now reads in the old error codes and retains the old numbers, only adding new ones if necessary. It also only changes the .err files if new codes are added. The makefiles have been modified to only insert errors when needed (to avoid needlessly modifying header files). This is done by only inserting errors if the .err file is newer than the auto generated C file.
To rebuild all the error codes from scratch (the old behaviour) either modify crypto/Makefile.ssl to pass the -regen flag to err_code.pl or delete all the .err files.
[Steve Henson]

*) CAST-128 was incorrectly implemented for short keys. The C version has been fixed, but is untested. The assembler versions are also fixed, but new assembler HAS NOT BEEN GENERATED FOR WIN32 - the Makefile needs fixing to regenerate it if needed.
[Ben Laurie, reported (with fix for C version) by Jun-ichiro itojun Hagino <itojun@kame.net>]

*) File was opened incorrectly in randfile.c.
[Ulf Möller <ulf@fitug.de>]

*) Beginning of support for GeneralizedTime. d2i, i2d, check and print functions. Also ASN1_TIME suite which is a CHOICE of UTCTime or GeneralizedTime. ASN1_TIME is the proper type used in certificates et al: it's just almost always a UTCTime. Note this patch adds new error codes so do a "make errors" if there are problems.
[Steve Henson]

*) Correct Linux 1 recognition in config.
[Ulf Möller <ulf@fitug.de>]

*) Remove pointless MD5 hash when using DSA keys in ca.
[Anonymous <nobody@replay.com>]

*) Generate an error if given an empty string as a cert directory. Also generate an error if handed NULL (previously returned 0 to indicate an error, but didn't set one).
[Ben Laurie, reported by Anonymous <nobody@replay.com>]

*) Add prototypes to SSL methods. Make SSL_write's buffer const, at last.
[Ben Laurie]

*) Fix the dummy function BN_ref_mod_exp() in rsaref.c to have the correct parameters. This was causing a warning which killed off the Win32 compile.
[Steve Henson]

*) Remove C++ style comments from crypto/bn/bn_local.h.
[Neil Costigan <neil.costigan@celocom.com>]

*) The function OBJ_txt2nid was broken. It was supposed to return a nid based on a text string, looking up short and long names and finally "dot" format. The "dot" format stuff didn't work. Added new function OBJ_txt2obj to do the same but return an ASN1_OBJECT and rewrote OBJ_txt2nid to use it.
OBJ_txt2obj can also return objects even if the OID is not part of the table.
[Steve Henson]

*) Add prototypes to X509 lookup/verify methods, fixing a bug in X509_LOOKUP_by_alias().
[Ben Laurie]

*) Sort openssl functions by name.
[Ben Laurie]

*) Get the gendsa program working (hopefully) and add it to app list. Remove encryption from sample DSA keys (in case anyone is interested the password was "1234").
[Steve Henson]

*) Make _all_ *_free functions accept a NULL pointer.
[Frans Heymans <fheymans@isaserver.be>]

*) If a DH key is generated in s3_srvr.c, don't blow it by trying to use NULL pointers.
[Anonymous <nobody@replay.com>]

*) s_server should send the CAfile as acceptable CAs, not its own cert.
[Bodo Moeller <3moeller@informatik.uni-hamburg.de>]

*) Don't blow it for numeric -newkey arguments to apps/req.
[Bodo Moeller <3moeller@informatik.uni-hamburg.de>]

*) Temp key "for export" tests were wrong in s3_srvr.c.
[Anonymous <nobody@replay.com>]

*) Add prototype for temp key callback functions SSL_CTX_set_tmp_{rsa,dh}_callback().
[Ben Laurie]

*) Make DH_free() tolerate being passed a NULL pointer (like RSA_free() and DSA_free()). Make X509_PUBKEY_set() check for errors in d2i_PublicKey().
[Steve Henson]

*) X509_name_add_entry() freed the wrong thing after an error.
[Arne Ansper <arne@ats.cyber.ee>]

*) rsa_eay.c would attempt to free a NULL context.
[Arne Ansper <arne@ats.cyber.ee>]

*) BIO_s_socket() had a broken should_retry() on Windoze.
[Arne Ansper <arne@ats.cyber.ee>]

*) BIO_f_buffer() didn't pass on BIO_CTRL_FLUSH.
[Arne Ansper <arne@ats.cyber.ee>]

*) Make sure the already existing X509_STORE->depth variable is initialized in X509_STORE_new(), but document the fact that this variable is still unused in the certificate verification process.
[Ralf S. Engelschall]

*) Fix the various library and apps files to free up pkeys obtained from X509_PUBKEY_get() et al. Also allow x509.c to handle netscape extensions.
[Steve Henson]

*) Fix reference counting in X509_PUBKEY_get(). This makes demos/maurice/example2.c work, amongst others, probably.
[Steve Henson and Ben Laurie]

*) First cut of a cleanup for apps/. First the `ssleay' program is now named `openssl' and second, the shortcut symlinks for the `openssl <command>' are no longer created. This way we have a single and consistent command line interface `openssl <command>', similar to `cvs <command>'.
[Ralf S. Engelschall, Paul Sutton and Ben Laurie]

*) ca.c: move test for DSA keys inside #ifndef NO_DSA. Make pubkey BIT STRING wrapper always have zero unused bits.
[Steve Henson]

*) Add CA.pl, perl version of CA.sh, add extended key usage OID.
[Steve Henson]

*) Make the top-level INSTALL documentation easier to understand.
[Paul Sutton]

*) Makefiles updated to exit if an error occurs in a sub-directory make (including if user presses ^C)
[Paul Sutton]

*) Make Montgomery context stuff explicit in RSA data structure.
[Ben Laurie]

*) Fix build order of pem and err to allow for generated pem.h.
[Ben Laurie]

*) Fix renumbering bug in X509_NAME_delete_entry().
[Ben Laurie]

*) Enhanced the err-ins.pl script so it makes the error library number global and can add a library name. This is needed for external ASN1 and other error libraries.
[Steve Henson]

*) Fixed sk_insert which never worked properly.
[Steve Henson]

*) Fix ASN1 macros so they can handle indefinite length constructed EXPLICIT tags. Some non standard certificates use these: they can now be read in.
[Steve Henson]

*) Merged the various old/obsolete SSLeay documentation files (doc/xxx.doc) into a single doc/ssleay.txt bundle. This way the information is still preserved but no longer messes up this directory. Now there's room for the new set of documentation files.
[Ralf S. Engelschall]

*) SETs were incorrectly DER encoded. This was a major pain, because they shared code with SEQUENCEs, which aren't coded the same.
This means that almost everything to do with SETs or SEQUENCEs has either changed name or number of arguments.
[Ben Laurie, based on a partial fix by GP Jayan <gp@nsj.co.jp>]

*) Fix test data to work with the above.
[Ben Laurie]

*) Fix the RSA header declarations that hid a bug I fixed in 0.9.0b but was already fixed by Eric for 0.9.1 it seems.
[Ben Laurie - pointed out by Ulf Möller <ulf@fitug.de>]

*) Autodetect FreeBSD3.
[Ben Laurie]

*) Fix various bugs in Configure. This affects the following platforms: nextstep ncr-scde unixware-2.0 unixware-2.0-pentium sco5-cc.
[Ben Laurie]

*) Eliminate generated files from CVS. Reorder tests to regenerate files before they are needed.
[Ben Laurie]

*) Generate Makefile.ssl from Makefile.org (to keep CVS happy).
[Ben Laurie]

Changes between 0.9.1b and 0.9.1c [23-Dec-1998]

*) Added OPENSSL_VERSION_NUMBER to crypto/crypto.h and changed SSLeay to OpenSSL in version strings.
[Ralf S. Engelschall]

*) Some fixups to the top-level documents.
[Paul Sutton]

*) Fixed the nasty bug where rsaref.h was not found under compile-time because the symlink to include/ was missing.
[Ralf S. Engelschall]

*) Incorporated the popular no-RSA/DSA-only patches which allow compiling an RSA-free SSLeay.
[Andrew Cooke / Interrader Ldt., Ralf S. Engelschall]

*) Fixed nasty rehash problem under `make -f Makefile.ssl links' when "ssleay" is still not found.
[Ralf S. Engelschall]

*) Added more platforms to Configure: Cray T3E, HPUX 11.
[Ralf S. Engelschall, Beckmann <beckman@acl.lanl.gov>]

*) Updated the README file.
[Ralf S. Engelschall]

*) Added various .cvsignore files in the CVS repository subdirs to make a "cvs update" really silent.
[Ralf S. Engelschall]

*) Recompiled the error-definition header files and added missing symbols to the Win32 linker tables.
[Ralf S. Engelschall]

*) Cleaned up the top-level documents;
o new files: CHANGES and LICENSE
o merged VERSION, HISTORY* and README* files into a CHANGES.SSLeay
o merged COPYRIGHT into LICENSE
o removed obsolete TODO file
o renamed MICROSOFT to INSTALL.W32
[Ralf S. Engelschall]

*) Removed dummy files from the 0.9.1b source tree: crypto/asn1/x crypto/bio/cd crypto/bio/fg crypto/bio/grep crypto/bio/vi crypto/bn/asm/......add.c crypto/bn/asm/a.out crypto/dsa/f crypto/md5/f crypto/pem/gmon.out crypto/perlasm/f crypto/pkcs7/build crypto/rsa/f crypto/sha/asm/f crypto/threads/f ms/zzz ssl/f ssl/f.mak test/f util/f.mak util/pl/f util/pl/f.mak crypto/bf/bf_locl.old apps/f
[Ralf S. Engelschall]

*) Added various platform portability fixes.
[Mark J. Cox]

*) The Genesis of the OpenSSL project: We start with the latest (unreleased) SSLeay version 0.9.1b which Eric A. Young and Tim J. Hudson created while they were working for C2Net until summer 1998.
[The OpenSSL Project]

Changes between 0.9.0b and 0.9.1b [not released]

*) Updated a few CA certificates under certs/
[Eric A. Young]

*) Changed some BIGNUM api stuff.
[Eric A. Young]

*) Various platform ports: OpenBSD, Ultrix, IRIX 64bit, NetBSD, DGUX x86, Linux Alpha, etc.
[Eric A. Young]

*) New COMP library [crypto/comp/] for SSL Record Layer Compression: RLE (dummy implemented) and ZLIB (really implemented when ZLIB is available).
[Eric A. Young]

*) Add -strparse option to asn1pars program which parses nested binary structures
[Dr Stephen Henson <shenson@bigfoot.com>]

*) Added "oid_file" to ssleay.cnf for "ca" and "req" programs.
[Eric A. Young]

*) DSA fix for "ca" program.
[Eric A. Young]

*) Added "-genkey" option to "dsaparam" program.
[Eric A. Young]

*) Added RIPE MD160 (rmd160) message digest.
[Eric A. Young]

*) Added -a (all) option to "ssleay version" command.
[Eric A. Young]

*) Added PLATFORM define which is the id given to Configure.
[Eric A. Young]

*) Added MemCheck_XXXX functions to crypto/mem.c for memory checking.
[Eric A. Young]

*) Extended the ASN.1 parser routines.
[Eric A. Young]

*) Extended BIO routines to support REUSEADDR, seek, tell, etc.
[Eric A. Young]

*) Added a BN_CTX to the BN library.
[Eric A. Young]

*) Fixed the weak key values in DES library
[Eric A. Young]

*) Changed API in EVP library for cipher aliases.
[Eric A. Young]

*) Added support for RC2/64bit cipher.
[Eric A. Young]

*) Converted the lhash library to the crypto/mem.c functions.
[Eric A. Young]

*) Added more recognized ASN.1 object ids.
[Eric A. Young]

*) Added more RSA padding checks for SSL/TLS.
[Eric A. Young]

*) Added BIO proxy/filter functionality.
[Eric A. Young]

*) Added extra_certs to SSL_CTX which can be used to send extra CA certificates to the client in the CA cert chain sending process. It can be configured with SSL_CTX_add_extra_chain_cert().
[Eric A. Young]

*) Now Fortezza is denied in the authentication phase because this key exchange mechanism is not supported by SSLeay at all.
[Eric A. Young]

*) Additional PKCS1 checks.
[Eric A. Young]

*) Support the string "TLSv1" for all TLS v1 ciphers.
[Eric A. Young]

*) Added function SSL_get_ex_data_X509_STORE_CTX_idx() which gives the ex_data index of the SSL context in the X509_STORE_CTX ex_data.
[Eric A. Young]

*) Fixed a few memory leaks.
[Eric A. Young]

*) Fixed various code and comment typos.
[Eric A. Young]

*) A minor bug in ssl/s3_clnt.c where there would always be 4 0 bytes sent in the client random.
[Edward Bishop <ebishop@spyglass.com>]
I picked up the grubby handset of the public payphone and dialled the number, like I had a hundred times before.

"This is Oleg's Pizza. Leave a message after the beep."

That was all it ever said - there was never a real person at the other end of the line - just a robotic voice from an unlikely business. [BEEP] - somewhere a tape started recording. I left my message.

"Hi, this is Chuck. I'd like a pepperoni and mushroom pizza please."

I dropped the handset and walked away. My name's not Chuck, and I don't like pepperoni, but this would get the message across: my cover was blown, and by Monday I'd be gone, just a fading memory in the minds of those who knew me.

I love a good spy thriller, and it seems like one of the hardest parts of being a spy is finding a dead-drop to leave messages for your handler. Fortunately, in this post, I'm going to make life easier for all you spooks out there by showing you how to make a dead-drop phone number where you can leave messages for someone to pick up later on the Web.

Prerequisites

I'm going to assume you've read Aaron's awesome post describing how to use Ngrok for developing webhooks. If you haven't, go read it now - it's worth it. I'm also going to assume you have a basic knowledge of Python and Flask.

I recommend installing the Vonage CLI tool and reading the short blog post about how to install it - some of the instructions below will use it, although you can complete these actions in the Nexmo Dashboard if you prefer.

What You're Going To Build

I'm going to show you how to build a basic voicemail service that allows people to call your Nexmo number and leave a message. The recorded message will be copied to your server, and you'll build a simple web page that lists the recordings and allows you to play them in the browser.

Starting Your Project

If you'd rather just follow along with my existing code, you can find that here, but I recommend you follow along with this post and build it yourself!
The structure of our project folder looks like this:

Because this is a small project, all your Python code will go in answerphone/__init__.py, but if it was larger, you could split it out into separate modules under the answerphone package. You'll also put your static resources under static and your templates in templates, and Flask will then know where to find them.

I've chosen to save my MP3 recordings into a project-level recordings folder, outside of the answerphone package, because it's a good idea to separate data (especially things being downloaded from the Internet!) from executable code. You can't see it in the image above, but there's also a .env file in the project directory, which contains all my configuration.

Install Dependencies

In my project, I've used pip-tools to pin my dependencies, but if you haven't used pip-tools before, I recommend you paste the following straight into requirements.txt and then run pip install -r requirements.txt:

    python-dotenv~=0.10
    flask~=1.0
    tinydb~=3.13
    nexmo~=2.3

A quick rundown of our dependencies:

- dotenv will be used to load config from our .env configuration file.
- flask is our web framework and development web server.
- tinydb is a really simple database that stores all your data as JSON.
- nexmo is the Nexmo Python Client Library, and makes using Nexmo APIs simpler than doing it by hand.

Open up __init__.py in your answerphone package and type the following:

    from flask import Flask, jsonify

    app = Flask(__name__)


    @app.route("/answer", methods=["GET", "POST"])
    def answer():
        """
        An NCCO webhook, providing actions that tell Nexmo to read a
        statement to the user and then record a message.
        """
        return jsonify(
            [
                {
                    "action": "talk",
                    "text": "<speak>You have reached <phoneme alphabet='ipa' ph='əʊlɛgz'>Oleg's</phoneme> pizza. Please leave a message after the beep.</speak>",
                    "voiceName": "Brian",
                },
                {
                    "action": "record",
                    "beepStart": True,
                    "eventUrl": [" "],
                    "endOnSilence": 3,
                },
            ]
        )

Make sure you're running Ngrok and start your development server with:

    FLASK_ENV=development FLASK_APP=answerphone flask run

Now if you visit your /answer URL with your web browser you should see something like the following:

    [
        {
            "action": "talk",
            "text": "You have reached Oleg's pizza. Please leave a message after the beep.",
            "voiceName": "Brian"
        },
        {
            "action": "record",
            "beepStart": true,
            "eventUrl": [" "],
            "endOnSilence": 3
        }
    ]

Now let's create a Voice app and link a number to this URL. In your console, run the Vonage CLI tool, which will walk you step by step through creating your application:

    # Create an app
    vonage apps:create

It will print something like Application created: 26aa5db4-546a-11e9-8f2d-0f348a273d3a, and it will create a file called private.key in your current directory. Take this ID and paste it into a new .env file like so:

    NEXMO_PRIVATE_KEY="./private.key"
    NEXMO_APPLICATION_ID=26aa5db4-546a-11e9-8f2d-0f348a273d3a

Leave this for now - I'll explain how to load the configuration in a moment. If you need to buy a number, I'd recommend doing it in the Nexmo Dashboard. Once you've bought a number (make sure it supports Voice!) go back to your command-line and use the nexmo command to link the number to your app:

    # Replace the phone number with your own
    # and the application ID with your application ID!
    nexmo link:app 447700900606 26aa5db4-546a-11e9-8f2d-0f348a273d3a

Now, if you call your Nexmo number, you should hear the message in the talk action above: "You have reached Oleg's pizza. Please leave a message after the beep." Okay! Check your Ngrok logs. You may notice some 404 errors to /event. Don't worry about this right now - you'll add an event webhook later in this tutorial.
Unfortunately, once Nexmo has finished recording your message, it is currently making a POST request to the URL in your record action, which isn't set to anything useful yet. Let's fix that so you can receive the recording event and download the MP3, so your handler can pick up messages from their agents. In your __init__.py, add the following:

    # Add to your imports:
    import os

    from dotenv import load_dotenv
    from flask import request, url_for
    import nexmo

    # After your imports:
    load_dotenv()  # Loads .env config into `os.environ`

    client = nexmo.Client(
        application_id=os.environ["NEXMO_APPLICATION_ID"],
        private_key=os.environ["NEXMO_PRIVATE_KEY"],
    )


    @app.route("/new-recording", methods=["POST"])
    def new_recording():
        recording_bytes = client.get_recording(request.json['recording_url'])
        recording_id = request.json['recording_uuid']
        with open(f"recordings/{recording_id}.mp3", 'wb') as mp3_file:
            mp3_file.write(recording_bytes)
        return ""

and now modify your answer webhook. The second action should look like this:

    {
        "action": "record",
        "beepStart": True,
        "eventUrl": [url_for("new_recording", _external=True)],
        "endOnSilence": 3,
    },

You're now using Flask's url_for function to get a URL pointing to the new_recording webhook you just added to the file. Make sure your recordings folder exists, and then restart the Flask development server. Now, when you call your Nexmo number and leave a message, you should find an MP3 file in the recordings folder. Open it up in your favourite MP3 player to hear what it says!

If you wanted to, you could stop now - you've learned all the basics about how to get Nexmo to record a message, and then how to download that message to your server (Nexmo only stores the recording for you for a few hours). But it would be a good idea to store some metadata along with the audio, so you know who the caller was, and when they called. That way, you can add a page listing all the calls to your answerphone dead-drop.
I chose TinyDB to do this - it's a really simple little data-store that dumps your data to a JSON file. It's not very fast, and it won't store lots of data very well, but it's fine for this project! Add the following to your .env file:

    DATABASE_PATH=answerphone.db

You tell TinyDB to store data in this file with the following near the top of your __init__.py file:

    from tinydb import TinyDB, Query

    db = TinyDB(os.environ["DATABASE_PATH"])

Now add the following, to create two "tables" to store your caller data and your recording data:

    calls = db.table('calls')
    recordings = db.table('recordings')

You need to do two things now: respond to call events and record the call data when a call is answered, and add a couple of lines to your recording webhook so that it stores recording data in the database. First, add the event webhook:

    @app.route("/event", methods=["POST"])
    def event():
        if request.json.get('status') == 'answered':
            calls.insert(request.json)
        return ""

The line calls.insert(request.json) stores all of the request's JSON data in the calls table you created above. Now, add a similar line to your recording webhook, after the code to save the MP3 file to your recordings folder:

        ...
        with open(f"recordings/{recording_id}.mp3", 'wb') as mp3_file:
            mp3_file.write(recording_bytes)
        recordings.insert(request.json)
        return ""

Make a call to your Nexmo number again and leave a message. Check that it runs without any errors. If you have a look inside answerphone.db you should see a load of stored JSON data. Now let's load that data into a nice web page!
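Both webhooks store payloads that share a conversation_uuid field, and that shared field is all the "join" between a recording and its call amounts to. Stripped of Flask and TinyDB, it's just a filter over matching values. A minimal, self-contained sketch (the field names are the ones the webhooks rely on; the concrete values are invented for illustration):

```python
# Sample payloads shaped like the webhook data stored above;
# the values are made up, only the field names matter.
calls = [
    {"status": "answered", "from": "447700900000",
     "timestamp": "2019-04-05T12:00:00Z", "conversation_uuid": "conv-1"},
]
recordings = [
    {"recording_uuid": "rec-1", "conversation_uuid": "conv-1"},
]


def related_call(recording_event):
    # Find the stored call event that shares this recording's
    # conversation_uuid, or None if there isn't one.
    matches = [c for c in calls
               if c["conversation_uuid"] == recording_event["conversation_uuid"]]
    return matches[0] if matches else None
```

This is the same lookup the helper class later in the post performs with a TinyDB Query, just written against plain lists of dicts.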
First, add a view that will allow you to load an MP3 file into the browser (you'll also need to add make_response to your flask imports):

    @app.route("/recordings/<uuid>")
    def recording(uuid):
        response = make_response(open(f'recordings/{uuid}.mp3', 'rb').read())
        response.headers['Content-Type'] = 'audio/mpeg'
        return response

The code above opens the binary MP3 file, creates a response from the bytes, and then sets the Content-Type header to audio/mpeg, which is the correct type for MP3 data. You can test this by loading up the /recordings/your-uuid-goes-here URL, using the ID of one of the MP3 files in your recordings folder.

Now you should add a view that will list all of the recordings, along with some of the call data associated with each recording. This can be made easier with a small helper class. Put this code near the top of your __init__.py file:

    class Recording:
        def __init__(self, data):
            self.uuid = data['recording_uuid']
            related_calls = calls.search(
                Query().conversation_uuid == data['conversation_uuid']
            )
            if related_calls:
                self.related_call = related_calls[0]
            else:
                self.related_call = None

This class is designed to be initialized using the JSON data provided to the recording endpoint and stored in the recordings table in our database. It automatically looks up the associated call data in the calls table and adds it on to the Recording object as the related_call attribute.

Now write the following view code, which passes a Recording instance to the template for every recording stored in the database (render_template needs to be in your flask imports too):

    @app.route("/")
    def index():
        """ A view which lists all stored recordings. """
        return render_template(
            "index.html.j2",
            recordings=[Recording(r) for r in recordings],
        )

This will fail at the moment, because you haven't created a template file!
Create a file at answerphone/templates/index.html.j2 and put something like the following inside:

    <!doctype html>
    <html>
      <head>
        <title>Oleg's Pizza</title>
      </head>
      <body>
        <h1><i>"Oleg's Pizza"</i><br>Dead Drop Recordings</h1>
        {% for recording in recordings -%}
          <h2>Call From: <em>{{ recording.related_call.from }}</em></h2>
          <p><strong>When:</strong> {{ recording.related_call.timestamp }}</p>
          <a href="/recordings/{{ recording.uuid }}">Listen</a>
        {% endfor -%}
      </body>
    </html>

Now, if you visit your index page, you should see something like the following:

You're now a master spymaster! I'll summarize what you've just done:

- You responded to an incoming phone call with some NCCO actions.
- You instructed Nexmo to record part of a phone call.
- You handled the recording event to download the created MP3 file.
- You stored call data in a database and created a web-browsable playlist!

Further Information

If you want to dig a bit deeper into what you just learned, the following may be useful:

Also, check out the GitHub Repo for this project, as I've documented the code and improved the list view.

Next Steps

There are a few ways to take this project further. You could use a websocket to notify the browser when a new recording appears, so the handler doesn't have to reload the browser to get messages from their agent. You could also use Nexmo's SMS API to send the handler an SMS message when a new recording is available! If you make something cool, send us an email at devrel@nexmo.com to let us know!
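As a starting point for the SMS idea, here's a minimal sketch. The helper name and message text are my own inventions; it assumes a client like the nexmo.Client created earlier, whose send_message method takes a dict of message fields (from/to/text). A tiny stand-in client lets you exercise it without credentials:

```python
def notify_handler(sms_client, from_number, to_number, recording_uuid):
    """Text the handler whenever a new dead-drop recording arrives.

    `sms_client` can be the nexmo.Client created earlier, or any
    stand-in exposing the same send_message(params) method.
    """
    return sms_client.send_message({
        "from": from_number,
        "to": to_number,
        "text": f"New dead-drop recording: {recording_uuid}",
    })


class _StubClient:
    """Records sent messages instead of hitting the SMS API."""

    def __init__(self):
        self.sent = []

    def send_message(self, params):
        self.sent.append(params)
        return {"messages": [{"status": "0"}]}


stub = _StubClient()
notify_handler(stub, "OlegsPizza", "447700900606", "rec-123")
```

In the real app you'd call notify_handler at the end of the new_recording webhook, right after recordings.insert(request.json), passing the live client instead of the stub.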
Anonymous, August 10, 2009 at 3:38 AM (UTC -5)

I hope you can help or teach me how to smooth my signal. data[0] is the realtime data, and data[1] and data[2] are calculated from data[0]. Because the initial data (data[0]) has noise, it disturbs the other data. Here is part of my program:

    def update():  # Called periodically by the Tk toolkit
        global data, running, timegap, ph
        # s = time.time()
        if running == False:
            return
        res = ph.get_voltage()
        x = res[0] - starts[0]
        v = res[1]
        m = v2d(v)
        data[0].append((x, m))
        if len(data[0]) > 1:
            # plot position & calculate instantaneous velocity
            dispobjects[0].delete_lines()
            dispobjects[0].line(data[0], 'black')
            vel = (data[0][-1][1] - data[0][-2][1]) / (data[0][-1][0] - data[0][-2][0])
            data[1].append((data[0][-1][0], vel))
        if len(data[1]) > 1:
            # plot velocity & calculate acceleration
            dispobjects[1].delete_lines()
            dispobjects[1].line(data[1], 'black')
            acn = (data[1][-1][1] - data[1][-2][1]) / (data[1][0][-1] - data[1][0][-2])
            data[2].append((data[0][-1][0], acn))
        if len(data[2]) > 1:
            # plot acceleration
            dispobjects[2].delete_lines()
            dispobjects[2].line(data[2], 'black')
        root.after(timegap, update)
        if x > maxtime:
            running = False

nice to hear from you soon

Gopal Koduri, November 11, 2009 at 8:14 AM (UTC -5)

thanks babai (a kind of *dude* in my language ;))! I was looking for a similar thing (filtering) in python. This helped me get started :)

CoreDistance, January 30, 2010 at 6:16 PM (UTC -5)

I think the wav file is not used in the code.

Jphn, May 17, 2010 at 6:39 AM (UTC -5)

"This python file requires that test.wav (~700kb) (an actual ECG recording of my heartbeat) be saved in the same folder." => false

Thanks for the code!

Kamil, August 19, 2010 at 5:13 PM (UTC -5)

Thank you, great article!
Danny November 22, 2010 at 10:13 AM (UTC -5) Link to this comment
Thanks for the article; I'm sure it'll come in handy cleaning some of my (very noisy) data.

Ken July 21, 2011 at 4:17 PM (UTC -5) Link to this comment
Be careful with this method of "filtering." This kind of direct manipulation of the spectrum results in time-aliasing when you take the inverse FFT. The result is not truly the original signal with its high frequencies filtered out, because the time-aliasing leaves the signal littered with artifacts. The easiest way to do it properly would be to do a lowpass filter in the time domain. You can do a similar filter in the frequency domain (sort of like you're doing), but the time-domain data and a filter impulse response need to be properly zero-padded before you FFT them to avoid circular convolution (which results in time-aliasing).

Anonymous September 7, 2011 at 11:43 AM (UTC -5) Link to this comment
Thanks for the code snippet. I've got a few suggestions for you, though:
1) When I use the line "bp=fft[:]", modifying bp still alters fft. I've used "bp = fft.copy()" instead.
2) When zeroing out FFT terms, you need to make the zeros symmetric around the mid-point of the FFT spectrum. The inverse FFT will have a non-negligible imaginary component if the frequency spectrum isn't symmetric, even though the input data is strictly real.
3) To zero out fft coefficients, it's easier to write bp[10:-10] = 0
4) To normalize the data, add the line bp *= real(fft.dot(fft))/real(bp.dot(bp)). fft.dot(fft) gives the total power in the signal, so this redistributes the power in the stop band more or less equally to the pass band.
5) It's perhaps better to change the variable name 'fft' to something else so that it doesn't conflict with numpy.fft.fft. If you're working in iPython, numpy.fft.fft is automagically imported into the global namespace, so trying to use the fft function after running your script could cause a few headaches.
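One easy way to get the symmetric zeroing right, per the suggestions above, is to use numpy's real FFT: np.fft.rfft stores only the non-negative frequencies, so zeroing bins there implicitly zeroes their conjugate partners, and np.fft.irfft returns a strictly real signal. A minimal sketch (the cutoff of 10 bins and the test tones are arbitrary, for illustration):

```python
import numpy as np

def lowpass_rfft(signal, keep_bins):
    """Zero all rfft bins at or above keep_bins and invert; output is real."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep_bins:] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
# a low-frequency tone (bin 3) plus a high-frequency tone (bin 60) to remove
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
filtered = lowpass_rfft(sig, keep_bins=10)
```

As Ken points out above, abruptly zeroing bins still causes time-domain ringing on real-world data, so a proper time-domain filter design is safer for serious work; this sketch only fixes the symmetry issue.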
Ahmet April 25, 2012 at 1:57 AM (UTC -5) Link to this comment
Thanks for the information, but I was wondering something. Maybe you can help me out with a problem. I have a signal of 0.1 seconds length which consists of 10,000 data points. First I averaged the data points 40 times so that it becomes a set of 250 data points. Even though this is a bit rough, the signal is still clearly visible. When I make an FFT of this averaged signal the frequency resolution is too low; the frequency step is 10 Hz. To increase the frequency resolution I decreased the averaging to 10 times (1,000 data points), but I noticed that this does not matter for the frequency resolution; it stays 10 Hz. So the frequency resolution is only determined by the measurement time (0.1 s) and will not increase if I increase the number of data points during this 0.1 second… My question is how I can increase the frequency resolution…

Doug May 30, 2012 at 8:26 AM (UTC -5) Link to this comment
Thank you for this excellent example of the power of numpy, scipy, and pylab! One more note: in graph (G) you label the right-hand peak as "static". If you zoom in on the peak with: pylab.axis([9840,9900,None,None]) you'll see that it's the symmetric part of your sine waves.

Michael November 2, 2012 at 5:05 PM (UTC -5) Link to this comment
I was thinking of making an ECG. I'm a real amateur. I don't want to visualize my heartbeat, but I want to transfer the signal of my heartbeat to a digital signal to use in a microprocessor. I don't want to use a computer at all. Can you please tell me how I could do it in a simple way? I want to use the minimum of filters. Sorry for my bad English.

Elliot December 19, 2012 at 5:01 PM (UTC -5) Link to this comment
Thank you very much. This was very helpful for me processing ultrasound signals.
Bogdan Hlevca January 22, 2013 at 12:19 AM (UTC -5) Link to this comment
Great code; however, it has a serious weakness, as it does not explain how the frequency value is selected for filtering. It can be inferred from the code, but there is a more elegant way of doing that, using the numpy functions instead of scipy functions. Another problem is that the resulting value is half in amplitude; you need to multiply by 2 to get the expected value. For example, a filter function can be defined as:

import numpy as np

def fft_bandpassfilter(data, fs, lowcut, highcut):
    fft = np.fft.fft(data)
    n = len(data)
    timestep = 1.0 / fs
    freq = np.fft.fftfreq(n, d=timestep)
    bp = fft.copy()  # a plain slice of a numpy array would only be a view
    for i in range(len(bp)):
        if freq[i] >= highcut or freq[i] < lowcut:
            bp[i] = 0
    # must multiply by 2 to get the correct amplitude
    ibp = 2 * np.fft.ifft(bp)
    return ibp

where fs is the sample frequency, data is your timeseries and the other two are the cutoff frequencies.

Stu May 7, 2014 at 3:57 PM (UTC -5) Link to this comment
Hi @Bogdan Hlevca, I'm trying to get fft_bandpassfilter working. Are lowcut and highcut in Hz, and will frequencies within this range be kept? For the sample frequency, I guess 44010 should work if I'm sampling the PC microphone? Cheers, S
Niels March 12, 2013 at 1:03 PM (UTC -5) Link to this comment
It would be nice if you warned readers about the flaws in this article. By removing the correct part of the FFT results, no scaling at all is needed. That's the great thing about spectral filtering (at least in 1D, and preferably on something with cyclic boundaries): all the information beyond the cut-off frequency is maintained (plot some spectra of the data before and after filtering and you will see the result). Anyhow, to warn readers: BE CAREFUL WITH THIS ARTICLE! NICE ATTEMPT, BUT THE WRITER IS BY FAR NO SPECIALIST; THE EXAMPLE CODE CONTAINS SERIOUS FLAWS!

Anonymous December 14, 2013 at 9:04 AM (UTC -5) Link to this comment
Thank you for sharing this project.
http://www.swharden.com/blog/2009-01-21-signal-filtering-with-python/
21 April 2010 03:56 [Source: ICIS news] By Prema Viswanathan

SHANGHAI (ICIS news)--The market outlook for polyolefins is very positive in the long term, but there could be some ups and downs in the short term, William Yau, chief executive officer (CEO) for Borouge's marketing arm, said on Wednesday.

"Last year, people were uncertain about what the long term market outlook would be. But the situation looks very positive now. Just look at the mood at ChinaPlas," said Yau on the sidelines of the Chinaplas exhibition 2010 in Shanghai.

But in the short term, although demand and supply were normal, there could be ups and downs, he admitted. "Overall, we are cautiously optimistic," he said in an interview with ICIS news.

The company was lucky to have a presence in high growth markets, he said. "The Borouge strategy has always been to look at the long term. And when we look at the specific segments we are serving, we see much more stability, the market is growing," he said.

General applications, film and moulding, advanced packaging, wire and cable, pipes for infrastructure projects, the automotive segment: all these were seeing significant growth in demand, he said.

The growth in segments such as automobiles was driving the demand for value added products, which was the focus area for Borouge, he added. The inauguration of Borouge's 50,000 tonne/year compounding plant was part of this. "We could increase the capacity to 80,000 tonnes/year in the near future. And we are also looking at new investments linked to the automotive industry," he said.

Getting closer to the market by building compounding plants and logistics hubs in key markets was another priority. "At the moment, we are fine tuning Borouge II production. We can adjust our production according to the needs of customers," he said.

The new complex would start up in mid-2010, but the full polyolefins output of 2m tonnes/year was likely to be achieved only by the end of the year or early next year, he said.
The Borouge III project was also proceeding on track, he said. "It is currently in the FEED (front end engineering and design) phase. We are still looking at the product range, analysing product, market, technology. It is not a question of just designing production; we have to look at marketing and sales. By 2013-14, we will have 4.5m tonnes of product coming out. So we have to build up an additional supply chain, build more hubs," he added. Borouge is a joint venture between the Abu Dhabi National Oil Co
http://www.icis.com/Articles/2010/04/21/9352345/borouge-upbeat-on-long-term-prospects-of-polyolefins.html
Searching text strings from files in a given folder is easily accomplished by using Python in Windows. While Linux has the grep command, Windows does not have an equivalent. The only alternative, then, is to make a command that will search the string. This article introduces see.py, which helps in accomplishing this task. Have you ever thought of searching a string in the files of a given folder? If you are a Linux lover, you must be thinking about the grep command. But in Windows, there is no grep command. By using Python programming, you can make your own command which will search the string pattern from the given files. The program also offers you the power of regular expressions to search the pattern. In this article, the author is going to show you an amazing utility, which will help you to find the string from a number of files. The program, see.py, will search for the string pattern provided by the user, from the files presented in the directory, also given by the user. This is equivalent to the grep command in the Linux OS. Here, we will use Python 2.7. The program expects the string pattern and directory from the user. Let us examine the code and discuss it. 
Import the mandatory modules:

import os
import re
import sys
import argparse

In the following code, I have declared a class Text_search:

class Text_search:
    def __init__(self, string2, path1, i=None):
        self.path1 = path1
        self.string1 = string2
        self.i = i
        if self.i:
            string2 = string2.lower()
        self.string2 = re.compile(string2)

The following method gives the names of the files in which the given string is found:

    def txt_search(self):
        file_number = 0
        files = [f for f in os.listdir(self.path1) if os.path.isfile(self.path1 + "/" + f)]
        for file in files:
            file_t = open(self.path1 + "/" + file)
            file_text = file_t.read()
            if self.i:
                file_text = file_text.lower()
            file_t.close()
            if re.search(self.string2, file_text):
                print "The text " + self.string1 + " found in ", file
                file_number = file_number + 1
        print "total files are ", file_number

The following method returns the file names as well as the line numbers in which the given string is matched (the body below reconstructs the described behaviour, since the listing was cut off):

    def txt_search_m(self):
        files = [f for f in os.listdir(self.path1) if os.path.isfile(self.path1 + "/" + f)]
        file_number = 0
        for file in files:
            file_t = open(self.path1 + "/" + file)
            for num, line in enumerate(file_t, 1):
                if self.i:
                    line = line.lower()
                if re.search(self.string2, line):
                    print "The text " + self.string1 + " found in ", file, " line ", num
                    file_number = file_number + 1
            file_t.close()
        print "total files are ", file_number

The following method also returns the file names as well as the line numbers in which the given string is matched. This method works in recursive mode (again reconstructed from the description, as the listing was cut off):

    def txt_search_r(self):
        file_number = 0
        for root, dir, files in os.walk(self.path1, topdown=True):
            files = [f for f in files if os.path.isfile(root + "/" + f)]
            for file in files:
                file = root + "/" + file
                file_t = open(file)
                for num, line in enumerate(file_t, 1):
                    if self.i:
                        line = line.lower()
                    if re.search(self.string2, line):
                        print "The text " + self.string1 + " found in ", file, " line ", num
                        file_number = file_number + 1
                file_t.close()
        print "total files are ", file_number

This is the main function of the program, which handles all the options. The program offers you six options. The -m option gives the file name and the line number; -mi is its case-insensitive variant. You can use the -h option to get help for all options.
def main():
    parser = argparse.ArgumentParser(version='1.0')
    parser.add_argument('-m', nargs=2, help='To get files as well as line numbers of files')
    parser.add_argument('-s', nargs=2, help='To get the files that contain the string')
    parser.add_argument('-r', nargs=2, help='To search in recursive order')
    parser.add_argument('-mi', nargs=2, help='-m option with case insensitive')
    parser.add_argument('-si', nargs=2, help='-s option with case insensitive')
    parser.add_argument('-ri', nargs=2, help='-r option with case insensitive')
    args = parser.parse_args()

If you select option -m, then it will call the txt_search_m() method of the class Text_search:

    try:
        if args.m:
            dir = args.m[1]
            obj1 = Text_search(args.m[0], dir)
            obj1.txt_search_m()

If you select option -s, then it will call the method txt_search():

        elif args.s:
            if args.s[1]:
                dir = args.s[1]
                obj1 = Text_search(args.s[0], dir)
                obj1.txt_search()

If you select the -r option, then it will call the method txt_search_r():

        elif args.r:
            if args.r[1]:
                dir = args.r[1]
                obj1 = Text_search(args.r[0], dir)
                obj1.txt_search_r()

If you select the -mi option, then it will call the txt_search_m() method in case-insensitive mode:

        elif args.mi:
            dir = args.mi[1]
            obj1 = Text_search(args.mi[0], dir, i=1)
            obj1.txt_search_m()

If you select the -si option, then it will call the method txt_search() in case-insensitive mode:

        elif args.si:
            if args.si[1]:
                dir = args.si[1]
                obj1 = Text_search(args.si[0], dir, i=1)
                obj1.txt_search()

If you select the -ri option, then it will call the txt_search_r() method in case-insensitive mode:
        elif args.ri:
            if args.ri[1]:
                dir = args.ri[1]
                obj1 = Text_search(args.ri[0], dir, i=1)
                obj1.txt_search_r()

        print "\nThanks for using L4wisdom.com"
        print "Email id mohitraj.cs@gmail.com"
        print "URL:"
    except Exception as e:
        print e
        print "Please use proper format to search a file use following instructions"
        print "see file-name"
        print "Use <see -h > For help"

main()

Let's make exe files using the pyinstaller module, as shown in Figure 1. After conversion, it will make a directory called see\dist. Get the see.exe file from the directory see\dist and put it in the Windows folder. In this way, see.exe is added to the system path, and it works like a DOS command.

Let us use the program see. Use the option -s as in Figure 2; you can see that only file names are returned. Use the option -m as shown in Figure 3; you can see that file names and line numbers are returned. Use the option -r as shown in Figure 4; in this option, -m works in recursive mode. Use the option -si as shown in Figure 5; you can see that only file names are returned, and the text search is case-insensitive. Use the option -mi as shown in Figure 6, and the option -ri as shown in Figure 7. In order to get help, use the option -h as shown in Figure 8.

The program offers you the power of regular expressions. Figure 9 shows the file 1.txt, which contains text. Let us use the regular expression '+' operator. See Figure 10, which shows the power of regular expressions.

I really appreciate the code. It helped. But it would have been really nice if you had provided the code with proper indentation. Also, for instance, "class Text_search :" under Figure 2 is not defined as part of your code structure, but the code won't work without first defining the class.

Hi Mohit, in the Windows command line you can do the trick by using the FINDSTR command, so this statement, I believe, is at least incorrect: "While Linux has the grep command, Windows does not have an equivalent.
The only alternative, then, is to make a command…”
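On Python 3, the recursive, case-insensitive core of the utility discussed in the article can be sketched in a few lines with pathlib; the helper name and interface below are mine, not from the article:

```python
import re
from pathlib import Path

def search(pattern, folder, ignore_case=False):
    """Yield (file, line_number, line) for every matching line under folder."""
    flags = re.IGNORECASE if ignore_case else 0
    regex = re.compile(pattern, flags)
    for path in Path(folder).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        for num, line in enumerate(text.splitlines(), 1):
            if regex.search(line):
                yield (str(path), num, line)
```

Because it yields tuples instead of printing, the same function covers the article's -s, -m, -r and case-insensitive variants by filtering or formatting the results.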
https://www.opensourceforu.com/2017/06/searching-text-strings-from-files-using-python/
A static member of a class has a simple name, which can only be used inside the class definition. For use outside the class, it has a full name of the form class-name.simple-name. For example, "System.out" is a static member variable with simple name "out" in the class "System". It's always legal to use the full name of a static member, even within the class where it's defined. Sometimes it's even necessary, as when the simple name of a static member variable is hidden by a local variable of the same name.

Instance variables and instance methods also have simple names, which can be used in instance methods of the class where they are defined. They also have full names, but here the full name begins with a reference to the object that contains the member. Inside an instance method, the special variable this refers to the object that contains the method. So if an instance variable name is hidden by a local variable of the same name, the instance variable can still be referred to by its full name, this.name. In the assignment statement this.name = name; the value of the local variable name is assigned to the instance variable. In fact, you can do anything with this that you could do with any other variable, except change its value.

5.6.2 The Special Variable super

Like this, the special variable super can only be used inside an instance method; super.x always refers to a member named x in the superclass. If you write a method in a subclass that has the same signature as a method in its superclass, the method from the superclass is hidden in the same way. We say that the method in the subclass overrides the method from the superclass, and super gives you a way to call the overridden version.

Here is a more complete example. The applet at the end of Section 4.7 shows a disturbance that moves around in a mosaic of little squares. As it moves, each square it visits becomes a brighter shade of red. The result looks interesting, but I think it would be prettier if the pattern were symmetric. A symmetric version of the applet is shown at the bottom of Section 5.7. The symmetric applet can be programmed as an easy extension of the original applet.

In the symmetric version, each time a square is brightened, the squares that can be obtained from that one by horizontal and vertical reflection through the center of the mosaic are also brightened. This picture might make the symmetry idea clearer: the four red squares in the picture, for example, form a set of such symmetrically placed squares, as do the purple squares and the green squares.
(The blue square is at the center of the mosaic, so reflecting it doesn't produce any other squares; it's its own reflection.)

The original applet is defined by the class RandomBrighten. In that class, the actual task of brightening a square is done by a method called brighten(). If row and col are the row and column numbers of a square, then "brighten(row,col);" increases the brightness of that square. All we need is a subclass of RandomBrighten with a modified brighten() routine. Instead of just brightening one square, the modified routine will also brighten the horizontal and vertical reflections of that square. But how will it brighten each of the four individual squares? By calling the brighten() method from the original class. It can do this by calling super.brighten().

There is still the problem of computing the row and column numbers of the horizontal and vertical reflections. To do this, you need to know the number of rows and the number of columns. The RandomBrighten class has instance variables named ROWS and COLUMNS to represent these quantities. Using these variables, it's possible to come up with formulas for the reflections, as shown in the definition of the brighten() method below. Here's the complete definition of the new class:

public class SymmetricBrighten extends RandomBrighten {

   void brighten(int row, int col) {
         // Brighten the specified square and its horizontal
         // and vertical reflections.  This overrides the brighten
         // method from the RandomBrighten class, which just
         // brightens one square.
      super.brighten(row, col);
      super.brighten(ROWS - 1 - row, col);
      super.brighten(row, COLUMNS - 1 - col);
      super.brighten(ROWS - 1 - row, COLUMNS - 1 - col);
   }

} // end class SymmetricBrighten

This is the entire source code for the applet!

5.6.3 Constructors in Subclasses

Constructors are not inherited by a subclass. If you want a constructor in the subclass, you have to write one. However, the subclass constructor can call a constructor from the superclass with a statement of the form super(parameters); which must be the first statement in the subclass constructor. This can be necessary if you don't know how the superclass constructor works, or if the constructor in the superclass initializes private member variables that the subclass cannot set directly. This might seem rather technical, but unfortunately it is sometimes necessary.
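The reflection formulas used in brighten() are easy to sanity-check: reflecting an index twice must return the original index, and the center square of an odd-sized grid must be its own reflection. A small sketch of that check (grid dimensions are arbitrary; written in Python rather than Java for brevity):

```python
ROWS, COLUMNS = 9, 7  # arbitrary odd-sized grid for illustration

def reflect(row, col):
    """Vertical and horizontal reflection through the center of the grid."""
    return (ROWS - 1 - row, COLUMNS - 1 - col)

# Reflecting twice is the identity
r2, c2 = reflect(*reflect(2, 5))

# The exact center of an odd-sized grid is its own reflection
center = reflect(ROWS // 2, COLUMNS // 2)
```

The same two properties hold for the Java code, since it uses exactly the ROWS - 1 - row and COLUMNS - 1 - col formulas.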
By the way, you can use the special variable this in exactly the same way to call another constructor in the same class. This can be useful since it can save you from repeating the same code in several constructors.
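For comparison, the same hiding and overriding mechanics can be sketched in Python, where self plays the role of this and super() reaches the overridden superclass method (the class names here are illustrative, not from the text):

```python
class Brighten:
    def __init__(self, rows):
        self.rows = rows      # instance variable
        self.calls = []       # record which rows were brightened

    def brighten(self, row):
        self.calls.append(row)

class SymmetricBrighten(Brighten):
    def brighten(self, row):
        # super() reaches the overridden method, like super.brighten in Java
        super().brighten(row)
        super().brighten(self.rows - 1 - row)

    def set_rows(self, rows):
        # the parameter 'rows' hides the instance variable, so the
        # full name self.rows is required, just like this.rows in Java
        self.rows = rows

sb = SymmetricBrighten(10)
sb.brighten(2)   # records rows 2 and 7
```

As in the Java version, the subclass method runs instead of the superclass one, yet still delegates the real work upward.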
http://www-h.eng.cam.ac.uk/help/importedHTML/languages/java/javanotes5.0.2/c5/s6.html
Arrays

Legend-State is especially optimized for arrays since Legend has to handle huge lists of data. Here are a few tips to get the best performance out of arrays.

Arrays of objects require a unique id

To optimize rendering of arrays of objects, Legend-State requires a unique id, _id, or __id field on each object. Under the hood, Legend-State listens to elements by path within the object. Operations like splice can change the index of an element, which changes its path, so it uses the unique id to handle elements being moved and keep observables as stable references to their underlying element. It also optimizes for repositioning items within arrays and only re-renders the changed elements.

Use the For component

The For component is optimized for rendering arrays of observable objects so that they are extracted into a separate tracking context and don't re-render the parent. You can use it in two ways: providing an item component or a function as a child. An optimized prop adds additional optimizations, but in an unusual way by re-using React nodes. See Optimized rendering for more details.

import { For } from "@legendapp/state/react"

const obs = observable({ arr: [{ id: 1, text: 'hi' }] })

function Row({ item }) {
  return <div>{item.text}</div>
}

function List() {
  // 1. Use the For component with an item prop
  return <For each={obs.arr} item={Row} />

  // 2. Use the For component with a render function as the child
  return (
    <For each={obs.arr}>
      {item => (
        <div>
          {item.text}
        </div>
      )}
    </For>
  )
}

For doesn't re-render the parent

In this more complex example you can see that as elements are added to and updated in the array, the parent component does not re-render.
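The need for a unique id is easy to demonstrate outside of React: after a splice, an element's index (and therefore its path) changes, so an index-based reference points at the wrong item, while an id-based lookup still finds it. A language-agnostic sketch, in Python for brevity:

```python
items = [{"id": 1, "text": "a"}, {"id": 2, "text": "b"}, {"id": 3, "text": "c"}]

tracked_index = 1   # index-based "path" to the item with id 2
tracked_id = 2      # id-based reference to the same item

# splice: insert a new element at the front, shifting every index by one
items.insert(0, {"id": 4, "text": "new"})

by_index = items[tracked_index]                              # now the wrong item
by_id = next(it for it in items if it["id"] == tracked_id)   # still correct
```

This is exactly why Legend-State keys its internal tracking on the id field rather than on array position.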
import { useEffect, useRef } from "react"
import { For, useObservable } from "@legendapp/state/react"

let total = 0

function Item({ item }) {
  useEffect(() => {
    item.renders.set(r => r + 1)
  })
  return (
    <div>
      {item.text}
    </div>
  )
}

function TodosExample() {
  const renderCount = ++useRef(0).current
  const todos = useObservable([])

  const onClickAdd = () =>
    todos.push({ id: ++total, text: 'Item ' + total, renders: 1 })

  const onClickUpdate = () => {
    todos[todos.length - 1].text.set(text => text + '!')
  }

  return (
    <div className="flex">
      <button onClick={onClickAdd}>Add</button>
      <button onClick={onClickUpdate}>Update</button>
      <div>Renders: {renderCount}</div>
      <For each={todos} item={Item} />
    </div>
  )
}
This is how the fast "replace all rows" and "swap rows" speeds in the benchmark are achieved. import { For } from "@legendapp/state/react" ... function List() { // Use the optimized prop return <For each={list} item={Row} optimized /> }
https://legendapp.com/dev/state/arrays/
List of participants ported / implemented for BOSS SkyNET :

Some participants will require the osc tool to interact with OBS. This will be installed as a dependency of the relevant participants. The version included in Debian is too old, and the required repository must be added:

cat <<EOF > /etc/apt/sources.list.d/MINT-tools.list
deb /
EOF

GIT :
Package :
Configuration :

The file at /etc/oscrc needs to be edited so that these participants can communicate with the OBS instance.

Compares the checksum of the packages being submitted to packages of the same name possibly in the Testing project. If the checksum matches, it sets STATUS = FAILED. This is for over-eager developers, or if two people in a team submit close together.

Compares the checksum of the packages being submitted to packages of the same name possibly in the Target project. If the checksum matches, it sets STATUS = FAILED. Developers sometimes submit without doing a proper update; this catches that situation.

Checks if the request tries to submit packages to multiple projects at the same time and sets STATUS = FAILED if so.

Checks if each of the packages being submitted contains at least the following files:
* Source tarball : *.tar.gz *.tar.bz2 or *.tgz
* Changes file : *.changes
* Spec file : *.spec
and sets STATUS = FAILED if not. NB: the presence of the .changes file is important for the changelog participants to operate correctly.

Prerequisite : check_has_valid_repo
Checks if the packages being submitted build successfully against the designated target repository for the architectures of interest, and sets STATUS = FAILED if not.

Checks if the spec file of each of the packages being submitted is valid. Currently the only validity check applied is that it shouldn't contain the %changelog tag; STATUS = FAILED is set if it does. OBS is responsible for inserting the .changes file contents into the spec file.
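The checksum comparison described above can be sketched as follows; the helper names and the chunked-MD5 choice are illustrative, not the actual BOSS participant code:

```python
import hashlib

def file_md5(path):
    """MD5 of a file, read in chunks so large tarballs don't fill memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def already_present(submitted, existing):
    """True (i.e. STATUS = FAILED) if the submitted file matches the one
    already in the Testing or Target project."""
    return file_md5(submitted) == file_md5(existing)
```

A participant would run such a comparison for each package in the request and fail the whole process on the first match.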
Gets the request submitter's email from OBS and makes sure it is not an empty string; sets STATUS = FAILED otherwise.

Checks the request submitter is actually a maintainer in the source project from which the request is originating; sets STATUS = FAILED otherwise.

Finds a repository in the source project that builds ONLY against a certain target project / repo; sets STATUS = FAILED if it does not find one.

Prerequisite : get_relevant_changelog

Checks that each package in the request actions has a non-empty relevant change entries field. Use this if you want to enforce that new entries be added to the .changes file before allowing a request to be accepted.

Checks that each package in the request originates from a project that matches the regexp provided as a parameter.

Package : boss-participant-resolverequest

Used to accept or decline a submit request. Parameters: Fields: Config: GIT :

Gets the .changes file from the project/package and puts it into the 'changelog' field. Note that for SRCSRV_REQUEST_*, project is the _target_ and may not return the expected log. Thus get_relevant_changelog may be more relevant for that case. Dependencies:

Gets the .changes file for the source project/package/revision and does a diff against the target project/package, putting the 'added' lines into a 'relevant_changelog' field for each package in the request's actions array. This is particularly useful for acting on external links such as bug# or feature# mentioned in the new changelog lines. The complexity of a diff provides for situations where multiple changelog entries are made in one area of a project before the package is finally accepted.

Interacts with bugzilla to: Links : Prerequisites : Takes the following parameters (can be different depending on the target bugzilla): For the comment : or GIT : NOTE: skynet branch for now Package :

Several participants interact with the OBS using the buildservice library.
They use the skynet configuration system to look for an 'oscrc' value in the [obs] section, e.g.

[obs]
oscrc = /etc/skynet/default.oscrc

This is a normal oscrc file (mode 600) and should contain [apiurl] sections with an aliases section that includes the namespace value set by the OBS in the obsEvent (see BSConfig.pm), e.g.:

[1]
aliases=OBS
user=<user>
passx=<passx>
http://wiki.meego.com/index.php?title=Release_Infrastructure/BOSS/Participants&oldid=45573&diff=prev
Opened 12 years ago
Closed 10 years ago

#4148 closed (fixed)

Inclusion of Middleware to fix IE VARY bug

Description

There is a bug in Internet Explorer, described in, where Vary headers will break certain applications. One easy way to create this bug is to generate a PDF and load the page in IE. This can either go in the core or in contrib, but I'm afraid it will be lost if it stays on a site like djangosnippets, where it could really help people who plan to generate PDFs.

Attachments (7)

Change History (25)

Changed 12 years ago by

comment:1 Changed 12 years ago by

comment:2 Changed 12 years ago by

comment:3 Changed 12 years ago by

comment:4 Changed 12 years ago by

comment:5 Changed 12 years ago by

comment:6 Changed 12 years ago by

This probably shouldn't be middleware. It's something that has to always be done to the response, because it's broken all the time. Because of the impact, this fix needs to be included. We can also leave out the Microsoft bashing in the initial docstring (although the reference to the bug we are fixing should stay).

Changed 12 years ago by

Take 2: This file alters the BaseHandler to do it and adds a setting to disable it if one wants.

Changed 12 years ago by

Take 2: This file alters the BaseHandler to do it and adds a setting to disable it if one wants. This patch is better.

comment:7 Changed 12 years ago by

Okay, I put it in the BaseHandler in what seems a fairly clean way. If you want to move the actual response processing out of that file, you're free to do so. Also, sorry for putting [patch] in the summary, I thought that was standard (!).

comment:8 Changed 12 years ago by

comment:9 Changed 12 years ago by

Thanks Michael, looks pretty clean to me.
comment:10 Changed 12 years ago by

Discussed a slightly less odd way to implement this on IRC.

Changed 12 years ago by

Cleaned the patch a little bit... added fix for content-disposition

Changed 12 years ago by

Cleaned the patch a little bit... added fix for content-disposition... take two

comment:11 Changed 12 years ago by

The new update is very much like the other version, without a settings variable. It also includes a fix for a bug in IE whereby if you use Content-Disposition and either Pragma: no-cache or Cache-Control with either no-cache or no-store, IE will not let you download it.

Changed 10 years ago by

It's been a year, and this bug persists. It prevents documented examples like I've modified the last submitted patch, bringing it in line with the newer version of the HttpResponse class.

comment:12 Changed 10 years ago by

This has been accepted several times; I can't see anything that needs an additional design decision. However, I don't have ready access to IE to validate the fix, so someone else will need to test the patch before it is ready for checkin.

comment:13 Changed 10 years ago by

I have machines with IE (W2K with IE6 and XP with IE7) so I thought I could test the fix. However I cannot recreate the problem. Using trunk r7739 (development server) and following the example at: either specifying no Vary header or including a Vary header matching the one at: I see no problem on either of my Windows machines. I choose "Open" from the popup and Excel opens with the file supplied in the response. Also tried a PDF and ran into no errors opening that either. If someone could tell me exactly what I need to do to hit the error, I'll try again to test the fix.

comment:14 Changed 10 years ago by

Ah, there's a little more needed to reproduce this bug. Django does not always attach a Vary header to every HttpResponse.
In my case, the sessions middleware was automatically adding one to a generic.list_detail view (although it wasn't really appropriate, see also bug #3586). So to reproduce this easily, make sure that your HttpResponse does include a Vary header by using a decorator:

- create a views.py like so:

from django.http import HttpResponse
from django.views.decorators.vary import vary_on_cookie

@vary_on_cookie  # explicitly ensure this HttpResponse includes a Vary header
def django_ticket_4148_sample_with_Vary(request):
    fileHandle = open('/var/www/static/foo.xls', 'r')
    my_data = fileHandle.read()
    response = HttpResponse(my_data, mimetype='application/vnd.ms-excel')
    response['Content-Disposition'] = 'attachment; filename=foo.xls'
    return response

- Point your urlpatterns at this view, maybe like:

urlpatterns = patterns('',
    (r'^foo.xls', 'views.django_ticket_4148_sample_with_Vary'),
)

- Make sure an Excel file exists at the path shown above.
- Visit the URL in Internet Explorer (6 or 7).
- My IE6 spits out garbage when trying to view a .xls file like this. For other Content-Types like .pdf, IE can't open their temporary file, and .odt is even weirder. Firefox works fine.

It's possible that some apache or python handler configurations could cover this up for IE users, so I'd like to hear from people running other distros. My implementation, running on mod_python on Gentoo, is viewable here, for anyone with IE:

comment:15 Changed 10 years ago by

OK, the key difference between what I was trying and your setup seems to be the fact that I was using the development server and you are using Apache. There must be something else that Apache is doing that the development server does not do that triggers the error, since no matter what I tried I could not recreate the problem with the development server. So I tried again, this time using Apache as my server, and then I could recreate the problem with both IE6 & IE7. Applying the patch 7737-fix_ie_cache_bugs_3.diff fixed the issue for both browsers.
Promoting to ready for checkin since Russell's last comment seems to indicate all that was needed was testing/verification, which I've now done.

Changed 10 years ago by

Updated and corrected(?) patch

comment:16 Changed 10 years ago by

I've cleaned up the patch a bit, added some extra robustness, and moved the code to the correct locations, since we already have some "compulsory middleware" now. Can somebody give this a quick check (if you're using IE, actually run the code)? I'm a bit tired at the moment and may easily have made some kind of bozo error. It doesn't crash when I run things, but, then again, I don't have IE -- or even Windows -- either. Can be moved back to "ready for checkin" when it's been tested.

comment:17 Changed 10 years ago by

Verified 4148-fixed.diff using IE6 & IE7. Verified failures on r7852 without the patch, fixed by applying the patch. Verified headers from Firefox were unaffected by the patch (Vary still present after patch whereas it's gone for IE6/IE7). Didn't attempt to track changes to the Cache-Control header though.

The middleware. It includes the documentation.
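For readers skimming the thread, the behaviour the attached patches implement can be sketched roughly as follows. This is an illustration only, not Django's actual patch: the function name, the user-agent test, and the exact header rules are assumptions reconstructed from the comments above.

```python
def fix_ie_headers(user_agent, headers):
    """Hypothetical sketch of the workaround discussed in this ticket:
    for IE user agents, drop the Vary header (IE mishandles it on
    downloadable content), and soften cache headers when the response
    is served as an attachment via Content-Disposition."""
    if "MSIE" not in user_agent:
        return headers
    headers.pop("Vary", None)
    if "attachment" in headers.get("Content-Disposition", ""):
        # IE refuses downloads when no-cache/no-store is also present
        for key in ("Pragma", "Cache-Control"):
            if headers.get(key, "").startswith("no-"):
                headers.pop(key)
    return headers

h = fix_ie_headers(
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    {"Vary": "Cookie",
     "Content-Disposition": "attachment; filename=foo.xls",
     "Cache-Control": "no-cache"},
)
print(h)  # only Content-Disposition survives for this IE request
```

Responses for non-IE browsers pass through untouched, which matches comment:17's observation that Firefox headers were unaffected.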
https://code.djangoproject.com/ticket/4148
Introduction to the Python Main Function

In general, programming languages have a main() function as a pre-defined function/method that indicates to the compiler the beginning of the execution flow. Python is an exception to this, as execution proceeds serially and the first line of the code is the starting point by default. Although a main() function is not a mandatory element, if the programmer wishes to have a main method to organize the other parts of the code, one can always create a user-defined method, which will operate like any other method in Python.

How to declare the Python Main Function?

The main method is defined in the same fashion as other functions in Python.

1. Basic main function:

print('Introduction to main() function')

def main():
    print("This is main function")

print('Outside main function')

Output: demo.py

>>> === RESTART: C:/Users/hp/AppData/Local/Programs/Python/Python37-32/demo.py ===
Introduction to main() function
Outside main function
>>>

2. Explanation: The Python interpreter starts executing a Python file from the top. First, it prints the first print statement, i.e. Introduction to main() function. Then it finds the main() method definition; since it is a mere definition and not a function call, the interpreter bypasses it and executes the next print statement that follows. Execution of main() requires a brief understanding of the __name__ variable.

3. Brief about the __name__ variable:

__name__ is a special implicit variable that contains a string, and the value of the string depends on how the code is being executed. Basically, there are two ways in which the Python interpreter executes code, and the value of __name__ is populated accordingly.

a. The most common way is to execute the file as a Python script. In this case __name__ will contain the string "__main__".

b. By importing the necessary code from one Python file into another file.
In this case __name__ will contain the imported module name. The __name__ variable helps to check if the Python file is being run directly or has been imported from some other file.

Examples of the Python Main Method

Following are some examples of the Python main method:

1. Sum function: the file is saved as sum_numbers.py

Python file: sum_numbers.py

def sum(a, b):
    return (a + b)

print(sum(3, 6))
print("__name__ variable set to ", __name__)

The above program is the main program and is not imported from other Python files, so the value of the __name__ variable is set to __main__, as shown in the output below:

Output: sum_numbers.py

2. When a Python file is imported from another file. This file is saved as sum.py

Python file: sum.py

import sum_numbers

print(sum_numbers.sum(10.25, 9.05))
print("value of __name__:", __name__)

Explanation:

- The sum_numbers file is being imported in the program file 'sum.py', so all the statements inside it are executed first: print(sum(3,6)) -> 9, and print("__name__ variable set to ", __name__) -> since this is being imported by sum.py, __name__ is set to sum_numbers.
- Then print(sum_numbers.sum(10.25,9.05)) -> 19.3, and print("value of __name__:", __name__) -> sum.py is the main file here, so __name__ is set to __main__.

Output: sum.py

To avoid the statements getting executed from the imported file 'sum_numbers', as in this case, we can use an if conditional statement. This is illustrated by the code example below. The above sum_numbers.py file can be changed as follows:

Python file: sum_numbers.py

def sum(a, b):
    return (a + b)

if __name__ == '__main__':
    print(sum(3, 6))
    print("__name__ variable set to ", __name__)

Python file: sum.py

import sum_numbers

print(sum_numbers.sum(10.25, 9.05))
print("value of __name__:", __name__)

In this case the output from the imported file 'sum_numbers.py' is not printed when we execute sum.py, because the condition if __name__ == '__main__': fails: sum_numbers is the imported module, not the main file.
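The import-versus-script behaviour described above can also be demonstrated in a single self-contained program: write a tiny module to a temporary directory, import it, then run it as a script with runpy. The module name mymod is invented for this illustration.

```python
import importlib
import os
import runpy
import sys
import tempfile

# A module that records the value of __name__ it sees at load time.
source = "captured_name = __name__\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mymod.py")
    with open(path, "w") as f:
        f.write(source)

    sys.path.insert(0, d)
    try:
        # Mode (b): imported from another file -> __name__ is the module name
        imported_name = importlib.import_module("mymod").captured_name
        # Mode (a): executed as a script -> __name__ is "__main__"
        script_name = runpy.run_path(path, run_name="__main__")["captured_name"]
    finally:
        sys.path.remove(d)

print("imported:", imported_name)   # imported: mymod
print("as script:", script_name)    # as script: __main__
```

Note that runpy.run_path needs run_name="__main__" to mimic script execution; without it, runpy uses its own default run name.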
Output: sum.py

Key points to consider when main() is being used in a Python program:

1. As Python is an object-oriented programming language, we should incorporate its benefits in our program. This can be done by placing bulk logic code in compact functions and/or classes for better code reusability, easier debugging, a better understanding of the program, and overall code optimization. This approach enables us to control the execution of our code rather than letting the Python interpreter execute it. This is best illustrated in the code example above, where we used an if condition to prevent the output from the imported file.

2. Use the __name__ variable to control the execution of code:

if __name__ == '__main__':
    <logic of program>

3. Create a main() function and put your logic inside it:

Python file: demo.py

print("Main function illustration")

def sum(a, b):
    return (a + b)

def main():
    print("Inside main() function")
    x = int(input("enter value x"))
    y = int(input("enter value y"))
    print("sum of values entered is", end=' ')
    print(sum(x, y))

if __name__ == "__main__":
    main()

- Here we have defined sum() and main().
- sum() will calculate the sum of two numbers.
- Inside the main() method we have prompted the user to enter values for x and y through input() statements, which are then converted to integer values, as input() returns string values.
- Then sum(x,y) is called inside the main function, and control transfers to the sum() method defined at the top, which returns the sum of the numbers entered by the user.
- All the above functionality is handled by the __name__ variable.
- As demo.py is the main program file and is not being imported from any other Python file, __name__ is set to the string __main__.
- If there had been some import statements, then the condition if __name__ == "__main__": would fail, as __name__ would then hold the imported module name.

In this way, I put the code I wanted to run inside main(), the programming logic in sum() (which is called from main()), and called main() within the conditional statement.

Output: demo.py

Conclusion - Python Main Method

Well, it's not necessary to use the main() method in Python, and it's completely up to the programmer whether to have it in the code or not. However, it is good practice to use the main() method because, with its help, we can execute any amount of functionality as and when needed, and also control the flow of execution.

Recommended Article

This is a guide to the Python Main Method. Here we discuss how to declare the Python main function along with examples and outputs. You may also look at the following articles to learn more -
https://www.educba.com/python-main-method/?source=leftnav
Hi Greg,

Responses below. I'll send out the split up patches hopefully today/tomorrow, which may make it a bit easier to understand/comment on.

On 2020-09-23 10:08 p.m., Greg Kroah-Hartman wrote:
> On Wed, Sep 23, 2020 at 09:43:55PM -0700, Scott Branden wrote:
>>>> +struct bcm_vk_tty {
>>>> +	struct tty_port port;
>>>> +	uint32_t to_offset;	/* bar offset to use */
>>>> +	uint32_t to_size;	/* to VK buffer size */
>>>> +	uint32_t wr;		/* write offset shadow */
>>>> +	uint32_t from_offset;	/* bar offset to use */
>>>> +	uint32_t from_size;	/* from VK buffer size */
>>>> +	uint32_t rd;		/* read offset shadow */
>>> nit, these "uint32_t" stuff really doesn't matter in the kernel, 'u32'
>>> is a better choice overall. Same for u8 and others, for this whole
>>> driver.
>> Other than personal preference, I don't understand how 'u32' is better.
>> uint32_t follows the ANSI stdint.h. It allows for portable code without
>> the need to define custom u32 types.
> The ANSI namespace does not work in the kernel, which is why we have our
> own types that pre-date those, and work properly everywhere in the
> kernel.
>>> stdint types are used in many drivers in the linux kernel already.
>> We would prefer to keep our code as portable as possible and use
>> stdint types in the driver.
> You aren't porting this code to other operating systems easily, please
> use the kernel types :)
>
> And yes, these types are used in other parts, but when you have 25
> million lines of code, some crud does slip in at times...

OK, will reformat. Seems like the stdint typedefs should not have been added to linux/types.h back in ancient kernel times. If they are not to be used in the kernel, then they should be wrapped in an #ifndef __KERNEL__.

>>>> +	pid_t pid;
>>>> +	bool irq_enabled;
>>>> +	bool is_opened;	/* tracks tty open/close */
>>> Why do you need to track this?
>>> Doesn't the tty core handle this for
>>> you?
>> I have tried using tty_port_kopened() and it doesn't seem to work.
>> Will need to debug some more unless you have another suggested function to use.
> You didn't answer _why_ you need to track this. A tty driver shouldn't
> care about this type of thing.

We want to leave the data in the shared buffers coming from the card until someone is ready to read them. So we track whether the particular tty device is open. If the port is not open, we don't increment our read pointer and leave the data in the buffer. If it overflows, so be it, we'll get whatever data is in it when we open the tty device.

>>>> +	struct workqueue_struct *tty_wq_thread;
>>>> +	struct work_struct tty_wq_work;
>>>> +
>>>> +	/* Reference-counting to handle file operations */
>>>> +	struct kref kref;
>>> And a kref?
>>>
>>> What is controlling the lifetime rules of your structure?
>>>
>>> Why a kref?
>>>
>>> Why the tty ports?
>>>
>>> Why the misc device?
>>>
>>> This feels really crazy to me...
>> Comments mostly from Desmond here:
>> Yes, we have created a PCIe centric driver that combines with both a misc device on top (for the read/write/ioctl), and also ttys.
>> The device sits on PCIe but we are using the misc device for accessing it.
>> tty is just another on top. I don't think this is that uncommon to have a hybrid driver.

In addition the PCI card has DMA access to host memory to access data for processing. The misc driver handles these operations. Multiple user space processes are accessing the misc device at the same time to perform simultaneous offload operations.
Each process opens the device and sends multiple commands with write operations [...]

> [...] plugged?
>
> We got rid of the old "control path" device nodes for tty devices a long
> time ago, this feels like a return to that old model, is that why you
> are doing this?

I don't know what old "control path" you are referring to and what the "new" path is?

> But again, I really don't understand what this driver is trying to
> control/manage, so it's hard to review it without that knowledge.

We have circular buffers in PCI memory that contain serial data. We need to be able to open/close tty devices in linux for console operations, and also to be able to perform operations such as lrz/sz. This needs to have the same ability as if a physical serial cable was connected between the server and the card. So the user is able to either plug a UART cable in or open the "virtual" UART accessed over PCIe.

For the misc device, the PCI memory is the physical interface, and on top of it is a queue-messaging scheme for information exchange. This is more for the control-path operations, cmd in/out etc.

>> Since we have a hybrid of PCIe + misc + tty, it means that we could simultaneously have opening dev/node to read/write (multiple) + tty going.
> That's almost always a bad idea.

The multiple users is a must for us. We have multiple individual user space processes opening the misc device and communicating with it. We also have 2 tty device nodes per card that can be opened/closed at any time. And this is all on a PCIe card with shared memory, registers, and MSIX interrupts. The card can be reset at any time, crash, or have a PCIe rescan, etc. User space processes need to be signalled when events are detected so they don't hang. What do you suggest otherwise?

>> Since the struct is embedded inside the primary PCIe structure, we need a way to know when all the references are done, and then at that point we could free the primary structure.
>> That is the reason for the kref.
On PCIe device removal, we signal the user space process to stop first, but the data structure can not be freed until the ref goes to 0.

> Again, you can not have multiple reference count objects controlling a
> single object. That way is madness and buggy and will never work
> properly.

We're using this driver in systems that require a high degree of stability and reliability. We have done extensive testing and don't see the bugs you are referring to. It's working properly.

> You can have different objects with different lifespans, which, if you
> really really want to do this, is the correct way. Otherwise, stick
> with one object and one reference count please.

If we draw a hierarchy, the bcm_vk object encapsulates the misc_object + pci_object + tty_object. The kref is for the bcm_vk object, the upper-layer one. It is to guarantee that all the sub-level objects are freed before we free this upper-level one. I guess this is a result of the hybrid design, and this is mainly to avoid issues when we say "echo 1 > pci->remove". The global structure under PCI will be removed, but we could not do so unless all its misc_object/pci_object/tty_object are gone. We did observe a corner case without this, where some of the apps that have opened the misc_device will seg-fault as they access the data structure after the pcie->remove is done. Not very common, but more a corner case depending on timing.

> thanks,
>
> greg k-h

Regards,
Scott
https://lkml.org/lkml/2020/9/24/1228
Treat it as a variation of K&R's atoi. You should already know that you can step through the string and add the characters like so:

result = radix * result + digit;

If radix is 16, you're converting a hexadecimal value. Now all you need to do is figure out how to convert the A-F digits of a hexadecimal number to 10-15 for the digit value in the snippet above. One way is an indexed lookup. First you find the character in a list of digits, then the index tells you its value:

#include <stdio.h>

int main ( void )
{
  const char *digits = "0123456789ABCDEF";
  size_t i;

  for ( i = 0; digits[i] != '\0'; i++ )
    printf ( "%c = %d\n", digits[i], i );

  return 0;
}

See what you can come up with using those hints.

i think i made it but its not so compact just bunch of test will work only from A to F but if you add like AA or 2 FF or others wont work but i hope i will fix it l8er

#include <stdio.h>

int htoi(char *HString)
{
  int i;
  int n=0,n2;

  for(i=0;HString[i]!=0;i++)
  {
    if(HString[i]=='A'|| HString[i]=='a') n+=10;
    if(HString[i]=='B' || HString[i]=='b') n+=11;
    if(HString[i]=='C' || HString[i]=='c') n+=12;
    if(HString[i]=='D' || HString[i]=='d') n+=13;
    if(HString[i]=='E' || HString[i]=='e') n+=14;
    if(HString[i]=='F' || HString[i]=='f') n+=15;
    else if(HString[i]<='0' && HString[i]>='9') n+=HString[i];
  }
  return n;
}

int main(void)
{
  char name[]="F";
  int x;

  x=htoi(name);
  printf("%d",x);

  return getchar();
}

>but is there a equation to change the characters hex to decimal because
>sometimes if a string got F or A it will just change that to its ascii value.

There is a compact and easy way to do what you want. You can find details on how to do it here.

p.s. Thanks for ignoring me. I won't waste my time with you anymore. Figure it out yourself.
fixed it

unsigned int htoi(char *HString)
{
  int i, Hexo, n = 0;

  for (i = 0; HString[i] != 0; i++)
  {
    if (HString[i] >= '0' && HString[i] <= '9')
    {
      Hexo = HString[i] - '0';
      n = 16 * n + Hexo;
    }
    else if (HString[i] >= 'a' && HString[i] <= 'f')
    {
      Hexo = HString[i] - 'a' + 10;
      n = 16 * n + Hexo;
    }
    else if (HString[i] >= 'A' && HString[i] <= 'F')
    {
      Hexo = HString[i] - 'A' + 10;
      n = 16 * n + Hexo;
    }
  }
  return n;
}

I see two glaring problems with your code. First, it's very ASCII-centric. C only guarantees that the decimal digits will be adjacent in the character set, so while (c >= '0' && c <= '9') is required to work as expected, (c >= 'a' && c <= 'f') doesn't have to. And subtracting 'a' from the value could have wildly incorrect results on non-ASCII character sets, where non-alphabet characters could be mixed in with the alphabet.

Second, it's redundant. You don't have to duplicate the code for different casing. The standard library (<ctype.h>) offers both toupper and tolower, which will convert a character to the upper and lower case formats, respectively, and do nothing with a character that has no such conversion. Compare and contrast your code with mine:

#include <ctype.h>

int hex_value ( char ch )
{
  const char *digits = "0123456789ABCDEF";
  int i;

  for ( i = 0; digits[i] != '\0'; i++ )
  {
    if ( toupper ( (unsigned char)ch ) == digits[i] )
      break;
  }

  return digits[i] != '\0' ? i : -1;
}

unsigned htoi ( const char *s )
{
  unsigned result = 0;

  while ( *s != '\0' )
    result = 16 * result + hex_value ( *s++ );

  return result;
}

Note that I left out error checking intentionally to clarify the logic. An invalid string won't give you sensible results. Strings in C always end with a '\0' character (which also happens to have the integral value 0). When I read or write while ( *s != '\0' ), my mind thinks "while not at the end of the string".
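For comparison (not part of the original thread), the same digit-lookup-plus-accumulation idea is easy to express in Python:

```python
def htoi(s: str) -> int:
    """Hex string to int using the radix-accumulation scheme
    discussed above: result = 16 * result + digit."""
    digits = "0123456789ABCDEF"
    result = 0
    for ch in s:
        value = digits.find(ch.upper())
        if value < 0:
            raise ValueError(f"invalid hex digit: {ch!r}")
        result = 16 * result + value
    return result

print(htoi("FF"))   # 255
print(htoi("2a"))   # 42
```

Like the C lookup version, this handles both cases without duplicating the digit ranges, and it avoids any assumption that letters are adjacent in the character set.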
https://www.daniweb.com/programming/software-development/threads/197250/hex-string-to-decimal
Hi,

I posted this to the libstdc++ mailing list initially, but I was directed to post on the MinGW list. I'm using MinGW 5.0.2 with GCC 3.4.5 installed on top (there's no difference with GCC 3.4.4). The following program:

#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    ofstream ofs( "streamtest.txt" );
    ofs << "Test" << endl << "Test" << endl;
    ofs.close();

#if 01
    ifstream ifs( "streamtest.txt" );
    char c;
    ifs.get(c);
    while ( ! ifs.fail() )
    {
        unsigned p = ifs.tellg();
        cout << "c = " << c << ", pos: " << p << endl;
        ifs.get(c);
    }
    ifs.close();
#else
    FILE* fs = fopen( "streamtest.txt", "r" );
    char c;
    c = fgetc( fs );
    while ( ! feof( fs ) )
    {
        cout << "c = " << c << ", pos: " << ftell( fs ) << endl;
        c = fgetc( fs );
    }
    fclose(fs);
#endif

    return 0;
}

produces incorrect results when C++ streams are used; here's the output I get:

c = T, pos: 3
c = t, pos: 6
c = T, pos: 8
c = s, pos: 10
c = , pos: 12

The output is correct for the test with fopen/ftell:

c = T, pos: 1
c = e, pos: 2
c = s, pos: 3
c = t, pos: 4
c = , pos: 6
c = T, pos: 7
c = e, pos: 8
c = s, pos: 9
c = t, pos: 10
c = , pos: 12

The position offset in the streams version is related to the remaining newlines until the end of the file - it seems tellg() (which in fact calls seekoff()) moves the stream pointer ahead as many characters as the number of newlines left until the end of the file.

I searched the MinGW list archives, comp.lang.c++.moderated and GCC's bug tracker. The problem has been reported before (e.g. here, here and here), but I didn't quite understand the explanation... it was blamed on MS for some reason. Something like their fgetc or other part of the C library implementation doing something wrong with line endings. However, as I tested above, it seems fgetc/ftell is working fine in MinGW.

Thanks,
Ivan

Open the file in binary mode, then everything will work fine. You will also have CR characters in the file, but I bet you can live with that.

Ivan Kolev wrote:
> [...]
http://sourceforge.net/p/mingw/mailman/mingw-users/thread/44720CE5.2030103@deadbeef.com/
spicyjack Posted February 14, 2011

Xeriphas1994 said: Are user preferences preserved in the fork? E.g., if I've been receiving automated emails about articles on my watchlist, will I start getting them for the new wiki also?

mysql> desc watchlist;
+--------------------------+------------------+------+-----+---------+-------+
| Field                    | Type             | Null | Key | Default | Extra |
+--------------------------+------------------+------+-----+---------+-------+
| wl_user                  | int(10) unsigned | NO   | PRI | NULL    |       |
| wl_namespace             | int(11)          | NO   | PRI | 0       |       |
| wl_title                 | varbinary(255)   | NO   | PRI |         |       |
| wl_notificationtimestamp | varbinary(14)    | YES  |     | NULL    |       |
+--------------------------+------------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

mysql> select * from watchlist;
Empty set (0.00 sec)

That is a big no; you will have to reset your watchlist. The MediaWiki API [1] provides access to watchlists, but apparently MediaWikiDumper doesn't scrape it.

[1]
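For reference, a watchlist query against the MediaWiki API is just a parameterized GET. A rough sketch follows; the endpoint URL is hypothetical, and a real query also needs an authenticated session (the parameter names follow the API's action=query / list=watchlist convention):

```python
from urllib.parse import urlencode

def watchlist_query_url(api_base: str, limit: int = 50) -> str:
    """Build a MediaWiki API query URL for the current user's watchlist.
    Authentication is omitted in this sketch."""
    params = {
        "action": "query",
        "list": "watchlist",
        "wllimit": limit,
        "format": "json",
    }
    return api_base + "?" + urlencode(params)

url = watchlist_query_url("https://wiki.example.org/w/api.php")
print(url)
```

A dump tool that wanted to preserve watchlists would have to issue requests like this per user, which is presumably why a page-scraping exporter misses them.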
https://www.doomworld.com/forum/topic/52973-doom-wiki/?page=7&tab=comments
Vulcan-ized Rhino: Telepathic Power for your Code

In this article we coax the JVM's Rhino (an elusive, misunderstood, and ignored member of the ecosystem) into a mind meld, giving it access to the JVM's thoughts, experiences, memories, and knowledge; and take it where no Rhino has gone before!

Let me set the context with some quick code:

ScriptEngineManager sem = new ScriptEngineManager();
ScriptEngine jsEngine = sem.getEngineByName("javascript");
...
String message = "Hello rhino!";
...
jsEngine.eval("println(message)");

Everyone knows that this code does not work (it produces a "ReferenceError: "message" is not defined"). To make it work, the variable message must be put into the script engine's bindings, as described in these articles. That's easily done. But the overhead and distraction of the extra boilerplate makes the body of code much less intuitive. (The 4-line example above already has 2 lines of distracting boilerplate!)

A Quick Example

What can we do to make something as simple as "println(message)" in a script just work? In fact, let's raise the bar some more. Take a look at Sqrt.java. Let's say you were explaining that code to a novice, and wanted to provide a probe into the while loop of the running program, by adding the line in red:

...
while (Math.abs(t - c/t) > epsilon*t) {
    t = (c/t + t) / 2.0;
    if (args.length == 2) VulcanRhino.eval(args[1]);
}
...

Think of class VulcanRhino as your friendly telepathic pachyderm, and eval() as its static, void JavaScript evaluator. The idea is that a JavaScript snippet could be passed into the program as an optional second command-line argument. That snippet (specified at run time) could contain logic with references to any of the in-scope Java variables. The code above is a simple example. But this approach allows you to include any number of VulcanRhino.eval()s, located wherever the invocation of a static void function would be legal, each executing a different script.
Each invocation of VulcanRhino.eval() has access to all in-scope variables at its location. Our modified Sqrt.java would run normally (doing nothing unusual) if run with just one command-line argument, but giving it a second argument awakens the slumbering telepath. Here are a few sample runs (the different colors separate the command line from the program's output) ...

The last line of output (struck out) is not from the script, but is the program's normal 1-line output. The examples above use scripts to track the values of "t" and "c/t" respectively. But you are free to pass in any expression that makes sense at the location of VulcanRhino.eval(). You may even use it for something completely unforeseen ...

The one thing you can not do with a script in this way is to assign a value to a variable.

The Vulcan-Rhino User Guide

To use this approach, you must pre-process your source code using a tool described below. This step is the key to the magic -- it augments each VulcanRhino.eval() in your code with something that gives it access to all the in-scope variables. So, proceed as follows:

- edit your program (say Sqrt.java), adding VulcanRhino.eval()s as required, and save it with a different name (say SqrtVR.java)
- pre-process SqrtVR.java following the instructions below. Save the output as Sqrt.java. Note: this overwrites any other Sqrt.java
- run as usual, making sure that class VulcanRhino is on the classpath. The VulcanRhino.java source should be compiled and deployed as required.

If you have trouble with the above steps, check the following:

- does a SqrtVR.java file exist?
- have you edited SqrtVR.java to add VulcanRhino.eval(args[1])?
- copy and paste the command line below directly
- ensure VulcanRhino has been compiled and exists on the classpath
To pre-process a file use the following command (the pre-processed source is written to standard output, so it is redirected into Sqrt.java):

    java -cp VLL4J.jar net.java.vll.vll4j.api.Vll4j VulcanRhino.vll SqrtVR.java > Sqrt.java

The files used are described below:

- VLL4J.jar is the distribution JAR from the VisualLangLab project (a visual parser-generator)
- VulcanRhino.vll is the transformation grammar described further under Pre-Processor Internals below
- SqrtVR.java is actually Sqrt.java saved with a different name

If you have trouble with the above steps, check the following:

- does a SqrtVR.java file exist?
- have you edited SqrtVR.java to add VulcanRhino.eval(args[1])?
- copy and paste the command line above directly
- ensure VulcanRhino has been compiled and exists on the classpath

How Does it Work?

Let's first get VulcanRhino out of the way. Observe that eval() does nothing special, but there is another function defVars() that enables the caller to inject information about variables into the JavaScript engine.

    import javax.script.ScriptContext;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;

    public class VulcanRhino {

        public static void eval(String script) {
            try {
                engine.eval(script);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public static void defVars(Object... args) {
            engine.getBindings(ScriptContext.ENGINE_SCOPE).clear();
            for (int i = 0; i < args.length; i += 2) {
                String name = (String)args[i];
                Object value = args[i + 1];
                engine.put(name, value);
            }
        }

        static ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
    }

Next take a look at the pre-processed version of Sqrt.java.

    ...
    while (Math.abs(t - c/t) > epsilon*t) {
        t = (c/t + t) / 2.0;
        if (args.length == 2) {VulcanRhino.defVars("epsilon", epsilon, "c", c, "t", t, "args", args); VulcanRhino.eval(args[1]);}
    }
    ...

The part you added is still in red. But the pre-processor has spliced in the blue text.
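The splice itself is a purely textual rewrite once the in-scope variables are known. As a toy illustration (the real tool uses a full Java grammar via VisualLangLab to compute the variable list; here we simply take that list as a given), a hypothetical Python function can reproduce the transformation shown above:

```python
import re

def splice(line, in_scope):
    """Wrap each VulcanRhino.eval(...) in a block preceded by defVars(...)."""
    pairs = ", ".join('"%s", %s' % (v, v) for v in in_scope)
    return re.sub(
        r"VulcanRhino\.eval\((.*?)\);",
        lambda m: "{VulcanRhino.defVars(%s); VulcanRhino.eval(%s);}"
                  % (pairs, m.group(1)),
        line,
    )

src = "if (args.length == 2) VulcanRhino.eval(args[1]);"
print(splice(src, ["epsilon", "c", "t", "args"]))
```

The output matches the pre-processed line of Sqrt.java shown above; the hard part the real pre-processor solves is computing `in_scope` correctly at every location.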
The pre-processor makes this change at each occurrence of VulcanRhino.eval(...), injecting information about the locally visible variables into the JavaScript engine.

Pre-Processor Internals

I won't go into all the details here, presuming that not everyone is interested. So the remaining part of the article is a short summary of the technique together with links to all the other material you will need to understand the details.

The pre-processor uses a parser for the Java language to analyze your program and obtain information about which variables are visible at each VulcanRhino.eval(...) location. It then modifies the source code by wrapping each VulcanRhino.eval(...) in a block ({ ... }) preceded by a VulcanRhino.defVars(...) call that injects the information required into the JavaScript engine.

The parser-generator used is the easily learned, completely visual tool VisualLangLab. For an introductory tutorial look at A Quick Tour. Scala programmers will find Rapid Prototyping for Scala useful too.

The last piece of the puzzle is in the grammar file VulcanRhino.vll. This file contains a Java grammar modified with action functions that perform the pre-processing. To examine the grammar, its rules, and the code in the action functions, proceed as follows:

- double-click VLL4J.jar (the same file used in the pre-processing step described above). This will start up the VisualLangLab GUI as shown in Figure-1 below
- select "File" -> "Open" from the main menu, choose the grammar file VulcanRhino.vll, then click the Open button
- in the rule-tree (the JTree at the left of the GUI) select (click on) the node just below the root node (see red arrow). This will cause the action-code associated with this parser-rule to be displayed under Action Code (right side of the GUI).
This is the code (in JavaScript) that pre-processes your code.

Figure-1: VisualLangLab GUI with VulcanRhino grammar loaded

The information used by the action-code above is in several global variables (VLL members). That information is gathered by other action-code in other rules. To examine all the remaining code proceed as follows:

- select the rule named block (use the combobox in the toolbar), and click on the reference node labeled blockStatement
- select the rule variableDeclaratorId, and click on the sequence node just below the root node
- select statement, click on the node just below the token node for FOR

If you do want to pursue this further, a thorough reading of A Quick Tour is strongly recommended. You will also need AST and Action Code and Editing the Grammar Tree.

There was an error in the grammar file (VulcanRhino.vll) ... by sanjay_dasgupta - 2012-01-28 00:21
There was an error in the grammar file (VulcanRhino.vll) that I corrected at around 08:10 hours GMT on 28th Jan. Although the example in the article would still have worked correctly, anyone who tried to use this approach with code containing a for loop would have noticed that the for's index variable was not being removed from scope at the end of the for statement. My apologies for any inconvenience caused.
https://weblogs.java.net/blog/sanjaydasgupta/archive/2012/01/25/vulcan-ized-rhino-telepathic-power-your-code
On 2/26/07, Sachin Patel <sppatel2@gmail.com> wrote:
> Hi Jacek, The download additional server adapters only pulls down published
> released adapters. So currently support for 1.0, and 1.1.x can be
> downloaded. For 2.0 I'm currently working on a driver, which I should hope
> to have available with our M3 milestone. But this won't be published to the
> update manager site and will be a download and extract installation
> procedure.

Is there a way I could build the monster locally and install it?

> FYI WTP2.0M5 is currently very restricted in its EE5 capability. For
> example, only the facet suppport is really there, and not any of the
> namespace support, thus none of the deployment descriptors will be generated
> if you create an EE5 project. This is targeted for their M6.

Yeah, you're right, but that's the only working WTP version for Eclipse 3.3M5eh with an enhanced Dali. I couldn't wait till M6 is released.

Jacek

--
Jacek Laskowski
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200702.mbox/%3C1b5bfeb50702261235r72b28d24j8358901e5a2a2d6d@mail.gmail.com%3E
A framework easily routing AppSync requests using AWS Lambda

Project description

appsync-router

WARNING - Version 4.0.0 is a breaking change from version 3.x.x. Please review the documentation before upgrading.

A micro-library that allows for the registration of functions corresponding to AWS AppSync routes. This allows for cleanly creating a single AWS Lambda datasource without large numbers of conditionals to evaluate the called route.

Installation

    pip install appsync-router

Basic Usage

    from appsync_router import discrete_route, route_event
    # Context is a TypedDict that makes access to
    # the items passed to your Lambda function simpler
    from appsync_router.context import Context

    # Here we are telling the router that when the field "getItems"
    # is called on the type "Query", call the function "get_items"
    @discrete_route("Query", "getItems")
    def get_items(context: Context) -> list:
        return [1, 2, 3, 4]

    def function_handler(event, context):
        # simply route the event and return the results
        return route_event(event)

NOTE - appsync-router is designed to be used as a Direct Invocation AWS AppSync datasource. If you put a request VTL template in front of it, you must pass in the WHOLE $ctx/$context object.

Route Types

Each route type has an overloaded signature allowing for simple declaration.

- discrete_route - This discretely routes to a named type and field
- multi_route - This routes to a set of named type/field combinations
- pattern_route - This routes to types/fields that match the type and field regex patterns provided
- glob_route - This routes to the types/fields that match the type and field glob patterns provided

Routing Events

As seen in the example above, the simplest form of event routing is to call route_event with only the event argument.
This will do the following:

- Determine the route for the event
- If no route is found, raise NoRouteFoundException
- If more than one route is found, use the first route found
- Route the event if it is a single context, or map the event to the route if it is multiple contexts

Many times this will be sufficient. However, this behavior can be modified:

- Passing a default_route of type Route to the route_event method will call your default_route if no route is found
- Passing short_circuit=False to the route_event method will cause a MultipleRoutesFoundException to be raised in the case of multiple matched routes
- Passing an executor of type concurrent.futures.Executor to the route_event method will cause all batch invocations (where the event has a list of contexts) to be executed using your executor

Extensibility

You may extend the appsync_router with your own route types. Any routes that you create must extend from the appsync_router.routes.Route class.
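The discrete-route dispatch described above can be sketched in a few lines of plain Python. This is a hypothetical illustration of the idea, not the library's actual implementation; the event shape (an "info" block carrying parentTypeName and fieldName) follows AppSync's direct-invocation payload.

```python
_routes = {}

def discrete_route(type_name, field_name):
    """Register a handler for one (type, field) combination."""
    def register(fn):
        _routes[(type_name, field_name)] = fn
        return fn
    return register

def route_event(event, default_route=None):
    """Dispatch a direct-invocation event to its handler, or the default."""
    info = event["info"]
    key = (info["parentTypeName"], info["fieldName"])
    handler = _routes.get(key, default_route)
    if handler is None:
        raise LookupError("no route found for %r" % (key,))
    return handler(event)

@discrete_route("Query", "getItems")
def get_items(context):
    return [1, 2, 3, 4]

event = {"info": {"parentTypeName": "Query", "fieldName": "getItems"}}
print(route_event(event))  # [1, 2, 3, 4]
```

The registry-of-callables design is what lets a single Lambda datasource serve many resolvers without a chain of conditionals.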
https://pypi.org/project/appsync-router/
See also: IRC log

<raman> Scribe: Raman
<noah> +1 to cancelling next week. I almost surely can't make it. I.e. at risk -> regrets

Resolution: No Call next week

<DanC> I seem to be available 30 May
<EdR> I'm available next week.

No call May 23, next call will be May 30. Ed will be scribe for May 30 call.

<DanC> close enough for me

Meeting minutes from last week approved

<DanC> (it's not "just giving a name" to the issue; it's deciding to add something to our issues list, which is non-trivial.)
<timbl> Q: Should tag have a meeting in Jan in Cambridge?

Dan: Yes but not strongly
Ed: NO
Noah: Not instead of December meeting and not unless there is synergy
Norm: TAG owes it to the community to be available to meet
Raman: No
Tim: Will go with anything happening in Cambridge, no strong feelings either way
Vincent: Not necessarily interested in the Jan meeting.

Vincent to report back to Steve that TAG is not strongly interested in a shared Jan meeting

Security -- possibly on the second day.

<DanC> draft June agenda

Versioning
Multiple content types for the same URI
Possibly continue discussion of Noah's MetaData finding
Aspects of SemWeb Architecture

<DanC> (indeed, "finishing a document" and "starting a document" are patterns)
<DanC> (yes, I like targetting the boxes; it tends to be worth nailing those down as a group; sometimes the finding has to come first, and sometimes it can come after.)
<Zakim> DanC, you wanted to try provoking a bit
<noah> I wonder whether the versioning discussion is important to Dave at the F2F?
<DanC> (I think the connection from metadataInURI-31 to semweb arch is pretty arbitrary too.)
<Norm> (Hmmm. I see.)
<DanC> (I am profoundly uninspired when it comes to security. It seems important, but darned if I can say anything specific about it.)
<noah> Dated draft:
<DanC> (were any reviewers assigned? I read the whole thing pretty closely. I'm tempted to say we should go thru the boxes one by one.)
Noah: looking for the high-order bit answer : Is this in the right direction Raman: suggest publishing after the F2F to get community feedback <EdR> Ed: Noah, I think at the 30,000ft level.. I like the format and structure and find it very clear. <DanC> (hmm... I think if I had started with the 2nd box, it might have done better. "Guess information from URIs only when the consequences of an incorrect guess are acceptable.") Reviewers for Noah's document: Raman, Ed <scribe> ACTION: Vincent to send reminders [recorded in] <DanC> (that one is short, suite, and compelling.) Tim might also review. <DanC> (s/suite/sweet/) <noah> (Interesting: I originally had that first, and thought: gee, everyone's been so concerned that the main thought is "don't infer metadata", that I figured I better lead with that.) <noah> (I'm very sympathetic to trying to find tighter wording for that constraint. Will work on it.) <DanC> (yes, I think that one needs to go first, but it's not worded as well; it doesn't provoke immediate "yes, I agree" nor "no, I disagree" responses.) <DanC> # Issue abstractComponentRefs-37 <noah> (sounds like a plan, I'll try and tighten it) <EdR> <DanC> [[ <DanC> Can you confirm that this URI... <DanC>(SparqlQuery <DanC> ]] <DanC>(SparqlQuery <DanC> <Zakim> timbl, you wanted to ask whether RDF/A would help by the way I have a hard stop in 4 minutes; could someone else scribe for the final 30 minutes? <timbl> <#wsdl.interface(SparqlQuery> rdfs:describedBy <(SparqlQuery>. <DanC> do you mean rdfs:definedBy ? On the other hand, RDF/A is the closest we have to something working in xhtml, so let's not knock it. <timbl> <#wsdl.interface(SparqlQuery)> rdfs:describedBy <>. <timbl> <DanC> huh? I have tons of stuff working with GRDDL that doesn't use RDFa, raman. RDFa is much less close. need to leave <noah> Are we losing our scribe? 
<timbl> rdfs:seeAlso <Norm> ScribeNick: Norm Norm becomes scribe Some discussion of relative merits of "a href" and "rdfs:seeAlso" timbl: For a machine that's "seeAlso aware", the namespace document is useful <noah> TBL: I'm hoping the TAG will eventually set out guidance on which things, like rdfs:see also, a machine should follow danc: What sort of machine are we talking about? The consumer of .wsdl is a web services toolkits. ... They could be taught to follow see also links, but I'm not sure what the value would be. <Zakim> noah, you wanted to ask about conneg on the representations noah: We could start with a RDDL document and later add RDF with conneg. ... That led to a discussion of whether or not conneg should be used to serve alternate formats. ... But these are secondary resources and I'm not sure we have a good story for talking about links with fragids in this context. ... Does the link with fragid represent the same thing if RDF and HTML are conneg'd? ... I'm not sure we ever decided that. DanC: Yes, that's the state of play, and we have the same problem in other areas, like QT functions and operators. noah: I think I'd like the answer to be "yes" for better or worse. ... We've established '#'s for some things and they're out in the wild so they better work. ... I think if I want a prose explanation I should be able to get that and if I want an RDF explanation, I should be able to get that too. timbl: What's the relationship between the pieces <DanC> (I'd like a name for the analog of 404 in #-space... when you get a representation and it has nothing matching that fragment. Anybody got a suggestion? unbound fragment ref?) noah: For the Schema data types, you can have <baseuri>#integer, <baseuri>#double, etc. ... I don't think we have complete closure about what's identified by <baseuri> ... I take it as an abstraction for the namespace. The things that seem like documents are representations of that resource, I think. ... 
I believe timbl's position is that the <baseuri> refers to the document-ey thing I get back.

<DanC> (I think the 2 positions noah are describing aren't observably distinguishable.)

timbl: There's a pun in URIs used for two things; syntactically it's used as the prefix. But by itself it identifies the namespace document.
... I think information resources always identify documents.

DanC: I didn't think that's where we landed.

timbl: Representations are the actual bitstreams. If the resource is a list of things, I'm happy to have the list in different orders if they're unordered.

<DanC> what timbl actually said was "... I use information resource only for things that have a beginning, middle and end"
<DanC> and I meant to ask "really? that doesn't sound like things that can be posted to."

Thanks for the correction, DanC

noah: The resource is the potentially infinite collection.

timbl: (reference to information theory) when you look this thing up, you're going to be more informed. An information resource to me is that information, not the subject of the information.

noah: would it be reasonable for me to define a resource which is all the square roots of all the integers.
... blah-blah-blah#144 refers to the number 12.
... Or "/", I'm just talking about the infiniteness of the set.
... One representation is a java program that computes the square roots

timbl: For me, a representation is a string of bits and some metadata. What you get in http.
... Those bits, in the given language convey the information that was the information resource.

noah: If the table wasn't infinite; if it was the square roots of the first 100 integers. I could then just give you an HTML page that conveyed it as a table.

timbl: No. The representation of the set must have a different URI. An information resource isn't a set of numbers.
... The statement that the set contains these numbers is an information resource, but that's distinct from the set.
noah: I would have thought they could be conveyed as information.

timbl: We played with the words a lot

<DanC> (where timbl says "it's important to distinguish between the set of numbers and the description of it", I'm not yet convinced. I agree that you _can_ distinguish, but I don't know why it's important to.)

timbl: It's not coherent not to distinguish between them.

noah: I want to distinguish them, but I think they're both information resources.
... What I hear timbl saying is that the only ones I'm happy to call information resources are the ones that are documenty

timbl: It's really important because the web is about communication and when I give you a URI I expect you to be able to get information with that URI.

noah: If you ask people what a namespace is, I don't think they'll say "document". It's more set like.
... Once we say "I've got that" now at some level, by the time we get to representations, everyone agrees that what we get is a document.
... The problem is that given a namespace in my left hand, there are lots of different kinds of documents that I might like to write; in RDF, in HTML, in English, in French, etc.
... But that leaves us in the position of asking what is the fundamental document that the namespace URI names (because I have to pick one). But then we trip over how one is a representation of the other.

<DanC> (I find timbl's position mildly more appealing, but the argument seems to be by assertion. It's maybe good enough to convince me, but it's not at all good enough for me to take and convince other people.)

noah: What's really fundamental is the set; how can we use webarch to say that that is on the web?

timbl: we could make it clearer by having a 303 response.
... As a result, the only thing that's identified by the URI is some collection of documents.
... It's not neat and tidy, but none of the processes that get the URI really need the abstraction.

noah: I could say that I control that namespace, yes?
timbl: Yes, you can talk about the document, but they all use the DC namespace to talk about how they're managed.

Scribe isn't sure he captured that

timbl: We don't have a way in rdf of saying that this property is in a namespace; we don't have the concept of a namespace.
... The namespace concept is only used in common parlance.

Norm isn't sure that the fact that RDF doesn't need the concept means the concept isn't useful.

<noah> Hmm. So if the URI with no sharp sign is for "the document(s) about the namespace", as opposed to the set of names, then maybe to talk about the set you need:

Vincent: DanC did you have a goal in mind?

DanC: Yes, that one of the documents would say that the SPARQL example is good or not.

vincent: I don't think we can go further today.

<noah> Fodder for the F2F?

Noah: We don't have a clean simple story about what a namespace URI identifies that avoids a 20 minute discussion

Norm: I share Noah's concerns about the practicality

vincent: I'll plan to schedule discussion about abstractComponentRefs again when Dave is present.
... Adjourned

<DanC> I'm not interested to pursue that point any more, fyi.
<DanC> "whether something could be a set of names and also a document"
<DanC> hmm.
<DanC> well, maybe. I don't advocate it, in any case.
<DanC> There's a clear-and-present question in the semweb best practices WG: can a wordnet word (synset) be an information resource?
http://www.w3.org/2006/05/16-tagmem-minutes
PhpStorm 2.0 – Take PHP by Storm!

Hello from JetBrains! We're happy to finally announce the long anticipated PhpStorm 2.0 — the intelligent PHP IDE! It took time to appear but we are very happy with the result and want to heartily thank all the early adopters for their invaluable feedback. This new release focuses on adding support for the latest standards — PHP 5.3 namespaces and closures, ECMAScript 5 — and makes debugging a lot easier with a zero-configuration debugger. It also extends our code analysis capabilities to provide more inspections and quick-fixes for your code, and adds command line tool support for Zend_Tool and Symfony.

Read more about what's new and download PhpStorm 2.0. Also coming soon: WebStorm 2.0 – a lightweight and smart IDE for JavaScript, HTML and CSS!

Develop with pleasure!
JetBrains Team

26 Responses to PhpStorm 2.0 – Take PHP by Storm!

joo says: February 14, 2011
Any difference from the latest build?

OZ says: February 14, 2011
Gratz! Best PHP IDE, really. Perfect the error prevention system (saves tons of time), smartest autocomplete, smart code refactoring with preview… That's why I love this IDE 🙂

Jon Whitcraft says: February 14, 2011
Excellent News!

Alexey Gopachenko says: February 14, 2011
@joo sure. about 10 quite important bugfixes.

Grant Palin says: February 14, 2011
Can an existing PhpStorm license be used for the new software?

pirrat says: February 14, 2011
Great work, thank you! A question about licensing: I have a personal license and it says that updates will work until May 28, 2011. Do I understand correctly that upgrading to the next major release requires renewing the license? Will there be any changes to the product's pricing? And are there already plans for the next releases?

Rafi says: February 14, 2011
Thank you phpStorm team for making the best IDE in the market.

William says: February 14, 2011
anyone know where the changelog is from the EAP to this?
William says: February 14, 2011
PS… jetbrains rocks 😀 <3 phpstorm

Jonathan Pidgeon says: February 14, 2011
I love this IDE, great update thanks! The captcha on this blog is terrible though, I think I went through about 10. Really really bad.

Russell says: February 14, 2011
If you're using a keymap based on the visual studio configuration and you upgrade to 2.0, most of your keys will stop working (including basic navigation with arrow keys). Apparently, a completely unusable IDE wasn't an important bugfix.

Steffen Gebert says: February 14, 2011
Congratulations to your release! Thanks for this great IDE, it really rocks! Also thanks for providing a free Open Source License to the TYPO3 project!
Steffen

erbione says: February 14, 2011
"Also thanks for providing a free Open Source License to the TYPO3 project!" Wow, I didn't know about it, how to get that license?

wsyb says: February 14, 2011
Very nice! but where/when is Zend Framework support?

David Rees says: February 14, 2011
What version can we use to view the fixed issues in youtrack? Thanks, d

Kostyantyn says: February 15, 2011
I've updated phpStorm from version 1 to 2 and the first thing that I noticed was that it became slower. When I open my current magento project, that includes several csv files sized 10-20Mb and a couple of tar archives, my IDE hangs, however I set memory to 2GB. It is completely unreal to open a bulky csv file, however there was no such problem in version 1. I used to open 35 Mb csv files without any problems with version 1. So, I have to continue to work with version 1, because version 2 hangs after a couple minutes of work. iMac, 2.93 GHz Intel Core 2 Duo, 4GB 1067 MHz, Mac OS 10.6.6. Hope you can solve this problem.

Alexey Korsun says: February 15, 2011
@Grant Palin & @pirrat: License includes 1 year of free upgrades including major ones, starting from the day of license purchase. If you have a license you can upgrade for free to PhpStorm 2.0 and will get all upgrades for free during 1 year.
Alexey Korsun says: February 15, 2011
@erbione You can obtain an open-source license if you fill the request form here:

Alexey Gopachenko says: February 15, 2011
@Kostyantyn please file a performance problem report to the issue tracker so we can work on it together.

Ivan says: February 15, 2011
Guys, simply great! But why doesn't the .zip file with the new release contain the startup script for Linux (webide.sh + configuration files)? I copied it from the previous EAP and everything started up.

Jan says: February 16, 2011
My first impression of version 2.0 is really good. It definitely brings some nice additions and the look and feel got a little better, while it's still very familiar and comfortable. I am looking forward to working even more with PhpStorm 2.0!

Artem Nezvigin says: February 17, 2011
My favorite IDE to date. I just started the trial and it's such a huge step up from PDT. Thanks.

CJD says: February 23, 2011
I've tried the 2.0 trial but am having problems. Excluding the initial execution, every time I've started PhpStorm it's told me that 'access is denied' for my own projects and it can't save the files, e.g.
Could not save project: java.io.FileNotFoundException: C:\xampp\htdocs\experiments\.idea\workspace.xml (Access is denied)

Steven says: May 22, 2011
CJD, I also get this access is denied message. "Error saving system information {PATH} (Access is Denied)" {PATH} = w/e the path is but it's for file ".WebIde10systemstatunit.680"

Tony says: August 25, 2011
I also get "Access Denied" PHPStorm Java exceptions when I try to save a project. I am using the latest PHPStorm 2.1.1 on Windows 7. I have Apache installed into the default directory, and HTDOCS is the default directory also. It seems that maybe Windows 7 "User Control" is somehow preventing PHPStorm from doing anything useful with the HTDOCS directory. How do I give PHPStorm permission to use this directory – I would have thought the installer would deal with that?
I've looked online using Google and amazingly can't see obvious signs of others having this problem, apart from the comments in this blog. Is the answer simply to switch off Windows User Control (seems a bit overkill), or is this some other kind of permissions issue? By the way, up until now I've been doing command line PHP program development with PHPStorm and love the IDE. It's just that now that I've moved to a web app I've run into problems. One other thing, the getting started tutorial showing how to create your first PHPStorm project uses HTDOCS on a local server, and doesn't mention anything about permissions, or what to do if you start getting exceptions and can't create your project. Thanks

Tony says: August 26, 2011
Just to follow up on my previous comment: here's a workaround. I created a new HTDOCS folder (c:\htdocs). You then need to edit httpd.conf accordingly. Note there are two locations where you need to change your HTDOCS directory name – I missed the second one first time around, which leads to 'access denied' errors (but this time they are nothing to do with PHPStorm). You then need to restart Apache. After this PHPStorm is quite happy to work with your new HTDOCS directory as per the tutorials. My little web based RSS feed reader is up and running already. Loving PHPStorm!
https://blog.jetbrains.com/webide/2011/02/phpstorm-2-0-take-php-by-storm/
Persistence is all too often the drudge work of any project. It's fun to design and develop an application, but when it comes time to figure out how to store and fetch data to and from persistent (or long-term) storage, the work can get complicated, unruly, and downright aggravating. In most cases, you don't need to "roll your own" persistent storage engine. Instead, you can pick from a number of different databases that are out there, some of which may already be installed on your Linux system.

Generally speaking, there are four different types of databases available on Linux: key/value, relational, object-oriented, and XML. Each has strengths and weaknesses. When choosing a database you should consider whether it:

To help compare the different types of databases, let's imagine that we need to store data about employees, projects, and what project(s) each employee is working on. As we examine each type of database to see how it stores this information, we'll point out its respective strengths and weaknesses.

Key/Value Databases

Key/value databases are the simplest form of databases and store data in key/value pairs. Each key (such as an employee's ID number) corresponds to or is associated with one value (such as their name). There can't be two entries in the database with the same key; every key must be unique. Berkeley DB () and GDBM () are the most widely known key/value databases and may already be installed on your Linux system.

Figure One shows three separate key/value databases used to store information about employees and the projects they're working on. Each database is stored as a separate file in the file system (called Employee.db, Project.db, and Xref.db). Each database file contains one collection of key/value pairs, so if you need to store more information you'll need to create another file.
The advantages of key/value databases are that they are incredibly easy to use, usually come installed with the operating system, and don't involve significant user management since they use Linux file permissions to control access. The programs you write to access key/value databases are also likely to be portable, as long as a copy of the particular database software is on the target system. Their "one key, one value" paradigm also maps easily to many programming languages (such as Perl's associative arrays and Java's Hashtable class). The keys are also stored in a quickly-accessible data structure (such as a B-tree or in a hash table) so that they can even handle very large databases.

But key/value databases do have drawbacks. Although they use Linux's file permissions to control access to the database, there's no protection against multiple users trying to write to the same database simultaneously. You must access the database from the system it's running on; if you want remote access, you'll have to program that yourself. Also, the size of the database is limited to the maximum size of a file or file system.

Also, there isn't a separate language for performing queries on key/value databases. The only fetch operations are "get the value for this key" or "get the next key/value pair." To perform even a simple query, such as finding all the employees who have been with the company for three or more years, you have to write the code yourself. You'd have to similarly search through the entire database to find the key for a specific value. This is known as a "one-way query." And since every key must be unique, you can't easily use the Xref.db database from Figure One to assign an employee to more than one project. If you tried to add a value of 2000 for the key of 100, you'd overwrite the already-existing value of 1000.

(Slightly) Cheating

However, you can work around this limitation.
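Both the overwrite and the workaround are easy to see with Python's dbm module. This sketch uses dbm.dumb, the pure-Python backend, so it runs even without GDBM or Berkeley DB installed; the file name mirrors Figure One's Xref.db, and the IDs are the article's examples.

```python
import dbm.dumb
import os
import tempfile

# A throwaway Xref database: employee ID -> project ID
path = os.path.join(tempfile.mkdtemp(), "Xref")
db = dbm.dumb.open(path, "c")        # "c" = create the file if missing

db[b"100"] = b"1000"                 # employee 100 -> project 1000
db[b"100"] = b"2000"                 # keys are unique: this OVERWRITES 1000
print(db[b"100"])                    # b'2000' -- the first assignment is gone

# The "(Slightly) Cheating" workaround: pack both project IDs into one
# delimited value, /etc/passwd-style, and split on read.
db[b"100"] = b"1000:2000"
print(db[b"100"].split(b":"))        # [b'1000', b'2000']
db.close()
```

Everything beyond the store and fetch (splitting, searching values, enforcing relationships) is left to your own code, which is exactly the drawback the article describes.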
Although key/value databases normally store only a single value (e.g., an int, char, or string), it's possible to store "extra" data in the value field. One way to do this is to put all the data into a single string and place a delimiter character between the bits of data. For instance, you could use a colon to delimit your data, just as it's done in the /etc/passwd file. In our example, the value in Xref.db for the key 100 would be "1000:2000".

Another way is to realize that, from the point of view of the database routines, the value stored in the database is just a pointer to the start of the data and a count of how many bytes the data is. Using the pointer and byte count, we can actually store a C struct as the value in a key/value database. However, if you often find yourself working around the limitations of key/value databases in this manner, you should probably consider moving to a relational database.

Relational Databases

Relational databases break complex data into a collection of tables. Tables are groups of similar items, such as employees or projects, where each item in the table has the same attributes (for example, every employee has a first name, last name, and ID number; each project has a name and ID number). A specific item is called a row, and each attribute is a column. If you were to visualize a table in a relational database, it would look very much like a spreadsheet or HTML table.

Most Linux distributions (including Red Hat, Mandrake, and Debian) include two open source relational database systems, PostgreSQL () and MySQL (). Proprietary alternatives are also available, such as Oracle () and Sybase ().

In a relational database, tables can hold not only information about a single kind of item (an employee or a project) but also information that expresses the relationship between the employees and projects (thus the name "relational database"). Consider the DBM tables shown in Figure One.
The Xref.db file implies a relationship between Employee.db and Project.db, but there's nothing about Xref.db that causes the database engine to enforce these relationships. For example, you'd need to use program logic to enforce a rule such as "you cannot delete an employee who is assigned to a project." In a relational database, you can set such a rule when you create the tables and let the database engine enforce it whenever a change is made.

Every table in a relational database has one or more columns identified as keys. These keys are similar to keys in a key/value database, but they need not be unique as they are in a key/value database. Any key that must be unique in a relational database is called a primary key. Any column in a table can also have an index associated with it. Indexes greatly speed up searching for a value in that key or column (for an even faster type of database, see "Main Memory Databases"). An index for a primary key requires the creation of a "unique index," which enforces the uniqueness of the primary key.

Main Memory Databases

The architecture of a main-memory database (MMDBMS) is similar to that of relational databases, but the data engines are tuned for in-memory data structures rather than files. Instead of creating indexes with B+ trees, MMDBMSs use data structures such as hash tables. This makes a huge difference: a lot of overhead is thrown out along with disk I/O, and the result is that main-memory databases are blazingly fast and are thus becoming popular with ultra-high-traffic Web sites as well as in real-time and embedded systems.

But this performance has limitations. MySQL has a HEAP table type that implements these in-memory structures, but its hash indexes do not support partial matching in queries. You can search for all employees whose last names are 'Jones', but you can't search for all employees whose last names begin with 'Jo'. Also, should something happen to cause an unexpected shutdown (such as a power loss), all data is lost.
With disk-based databases, a transaction log can be used to recover updates that were in progress when the system failed. Main-memory database vendors realize this is a problem and offer configurations to minimize exposure to such failures (for example, you could replicate the main-memory database to a disk-based database). One thing that remains the same is the query language, SQL, and APIs such as ODBC and JDBC are also supported. This makes it relatively easy for relational database developers to transition to a main-memory DBMS. Two examples of MMDBMSs are Polyhedra () and TimesTen ().

Although you don't have to use primary keys and indexes in a relational database, it's usually a good idea. Without a primary key, it may not be possible to uniquely distinguish rows from one another. Suppose you have two employees named John Smith. By specifying a numeric primary key in the employee table's design, and by using a different number each time you add an employee, you can distinguish between your John Smiths. Relational databases also use foreign keys, whose values match the values of a primary key in a different (or foreign) table. Foreign keys express relationships between tables.

Representing Data in Relational Databases

Figure Two shows the logical structure of our database, stored in three tables: employee, project, and proj_empl. The primary key in each table is marked in bold. Note that these three tables don't need to be stored in three separate files (and in fact almost always won't be). In relational databases, you work with tables, not files; the database engine manages how and where it stores the tables.

The employee table has a primary key called empl_id, and the project table has a primary key called prj_id. The proj_empl table cross-references the data between the employee and project tables, and contains two foreign keys, fk_empl_id and fk_prj_id (the "fk" stands for "foreign key").
Relational databases do not mandate that a primary key be a single column. The primary key in the proj_empl table actually consists of two combined columns (the foreign keys fk_empl_id and fk_prj_id). This means a given combination of employee and project IDs can only appear once in that table.

The tables themselves consist of a set of rows and columns as shown in Figure Three. If you trace fk_empl_id in proj_empl to empl_id in employee, and fk_prj_id in proj_empl to prj_id in project, you'll see that Brian is assigned to one project ("Our Next Big Thing"), and that Joan is assigned to two projects ("Our Next Big Thing" and "Big Secret Project").

Figure Three: The employee and proj_empl tables

THE EMPLOYEE TABLE
first_name   last_name   empl_id   years
Brian        Jepson      100       3
Joan         Peckham     200       3

THE PROJ_EMPL TABLE
fk_prj_id   fk_empl_id
1000        100
1000        200
2000        200

Programming with Relational Databases

When programming with key/value databases, you interact directly with the database, calling separate functions to add a key/value pair, fetch a value or pair (for your own queries), and delete or modify values. In a relational database, though, these operations are handled through a special-purpose database programming language called SQL (Structured Query Language). The database is manipulated by sending SQL statements to the database server. The SQL language also contains commands that can work on entire tables, as well as the data within them (for more on SQL, see "About SQL").

About SQL

While SQL can be used to create a database as well as create, modify, and remove rows, its most important task is to perform queries ("find me rows that match these criteria"). When querying a database with SQL, you should think of the tables as though they were mathematical sets. You can get results that are intersections, unions, or subsets of the tables.
For example, you can use SQL to generate a result set that consists of the names of all employees who have been with the company three years or less and who are also assigned to the "Our Next Big Thing" project. Such a query might look like this:

SELECT employee.first_name, employee.last_name
FROM employee, proj_empl, project
WHERE employee.empl_id = proj_empl.fk_empl_id
  AND proj_empl.fk_prj_id = project.prj_id
  AND project.prj_id = 1000
  AND employee.years <= 3

If we were to translate this into plain English, we might get: Join (or connect) the employee table to the proj_empl table by linking empl_id in the employee table to fk_empl_id in the proj_empl table. Then, join the proj_empl table to the project table by linking fk_prj_id in proj_empl to prj_id in project. After the tables are joined, filter them so that we only see the project where prj_id is equal to 1000, and where the value of years is less than or equal to three. After the joins and filters are in place, retrieve the first and last names of the matching employees.

To pass SQL statements to the database server (and receive the results), an API is used to provide connectivity to the database, even across a network. A database access API might be written in a single language but will generally be accessible from many others (e.g., the MySQL library libmysqlclient.so can be accessed from C, C++, Perl, and PHP). Alternatively, the API might be written for a single language, such as the MySQL library for Java, which is written in Java but can run on any system with a Java virtual machine.

Perhaps the most well-known APIs are ODBC (Open Database Connectivity) and JDBC (Java Database Connectivity). Most database vendors support one or both of these APIs for access to their database products. The ODBC API is available in almost every programming language. This combination of SQL and APIs provides nearly complete interoperability between a client and the database.
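The join-and-filter logic of the query above can be mimicked in plain Java over in-memory rows, using the data from Figure Three. This sketch (class and field names are illustrative, not from the article) shows what the database engine does for you, only without the indexes that let a real server avoid scanning every row.

```java
import java.util.Arrays;
import java.util.List;

public class JoinDemo {
    static class Employee {
        int emplId; String first; String last; int years;
        Employee(int id, String f, String l, int y) {
            emplId = id; first = f; last = l; years = y;
        }
    }
    static class ProjEmpl {
        int fkPrjId; int fkEmplId;
        ProjEmpl(int p, int e) { fkPrjId = p; fkEmplId = e; }
    }

    public static void main(String[] args) {
        List<Employee> employee = Arrays.asList(
            new Employee(100, "Brian", "Jepson", 3),
            new Employee(200, "Joan", "Peckham", 3));
        List<ProjEmpl> projEmpl = Arrays.asList(
            new ProjEmpl(1000, 100),
            new ProjEmpl(1000, 200),
            new ProjEmpl(2000, 200));

        // WHERE employee.empl_id = proj_empl.fk_empl_id
        //   AND proj_empl.fk_prj_id = 1000 AND employee.years <= 3
        for (ProjEmpl pe : projEmpl) {
            if (pe.fkPrjId != 1000) continue;        // filter on the project
            for (Employee e : employee) {            // the "join" condition
                if (e.emplId == pe.fkEmplId && e.years <= 3) {
                    System.out.println(e.first + " " + e.last);
                }
            }
        }
    }
}
```

Run as written, this prints Brian Jepson and then Joan Peckham, the same result set the SQL query returns.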
The programmer performs adds, deletes, and updates by writing SQL statements that are passed to the database by the API. The database vendor, the type of system the database lives on, and the type of client accessing the database are irrelevant, because they all support SQL and the various APIs.

Listing One shows an example of querying a MySQL database in Java. After loading the JDBC driver and connecting to the MySQL database (lines 5-8), the SQL command is built and passed to the database (lines 10-13). In this case, the query asks for the employee ID number and names of all employees in the employee table.

Listing One: Querying a MySQL database in Java

1  import java.sql.*;
2  public class GetEmpl {
3      public static void main(String[] argv) {
4          try {
5              Class.forName("org.gjt.mm.mysql.Driver").newInstance();
6
7              Connection conn = DriverManager.getConnection(
8                  "jdbc:mysql:///bjepson", "bjepson", "secret");
9
10             Statement stmt = conn.createStatement();
11             String sql =
12                 "SELECT empl_id, first_name, last_name FROM employee";
13             ResultSet rs = stmt.executeQuery(sql);
14
15             while (rs.next()) {
16                 int empl_id = rs.getInt("empl_id");
17                 String first_name = rs.getString("first_name");
18                 String last_name = rs.getString("last_name");
19                 System.out.println(first_name + " " + last_name +
20                     " is employee #" + empl_id);
21             }
22
23             stmt.close();
24             conn.close();
25         } catch (Exception e) {
26             e.printStackTrace();
27         }
28     }
29 }

Once the query has been passed to the database server, the result comes back as a number of rows. The while loop (beginning on line 15) retrieves all the rows that match the query criteria (since no criteria were specified, every row is returned), and their data is printed out (lines 16-20).

When we connected to the database (lines 7 and 8), we had to provide a login id and password. Relational databases support multiple concurrent users. Just like a Unix user account, each database user has a username and password.
Some databases even give each user their own separate place to store tables. This is quite handy on a system that supports virtual hosts, since you can give each user a private database separate from all other users on that server.

Another strength of relational databases is their ability to handle "transactions." If several changes to a database (additions, modifications, or deletions) need to either all be done at once or else not at all, a relational database can ensure that this happens. The programmer tells the database server that a transaction has started. Any changes are stored but not actually saved in the tables until the transaction is finished, at which time the programmer tells the server to "commit" the transaction. At any point during the transaction, the programmer can instead tell the server to "roll back" the tables to the state they were in before the transaction began.

What's Normal, Anyway?

Before you actually start the process of creating a database, you should go through a process known as normalization, in which you examine the kind of information your application will track (such as employees and projects) and organize the structure of your tables in a way that is optimized for the database. The proj_empl table from Figure Two is a result of the normalization process.

For instance, when you started thinking about how to store your data, you might have considered assigning employees to a project through a list associated with each project (represented in a table with multiple columns, such as employee_id1, employee_id2, etc.). After normalization, you'd discover that you should represent this information with a cross-reference table (proj_empl). This also allows you to write bi-directional queries (you can find all the employees assigned to a project and all projects assigned to a particular employee) and removes what would otherwise be an arbitrary limit (i.e., the number of columns named employee_idn).
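The commit/roll-back behavior described above can be sketched without a real database server: stage changes in a scratch copy and publish them only on commit. The class below is a toy illustration, not a real transaction engine; with JDBC against an actual database, the equivalent calls are conn.setAutoCommit(false), conn.commit(), and conn.rollback().

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of commit/rollback semantics: changes go into a
// scratch copy and are only published to the "table" on commit.
public class TinyTransaction {
    private final Map<String, String> table = new HashMap<>();
    private Map<String, String> scratch;

    public void begin()   { scratch = new HashMap<>(table); }
    public void put(String key, String value) { scratch.put(key, value); }
    public void commit()  { table.clear(); table.putAll(scratch); scratch = null; }
    public void rollback() { scratch = null; }  // discard staged changes
    public Map<String, String> snapshot() { return new HashMap<>(table); }

    public static void main(String[] args) {
        TinyTransaction t = new TinyTransaction();

        t.begin();
        t.put("100", "Brian Jepson");
        t.rollback();                            // nothing was saved
        System.out.println(t.snapshot().isEmpty()); // true

        t.begin();
        t.put("100", "Brian Jepson");
        t.commit();                              // now it is visible
        System.out.println(t.snapshot().get("100")); // Brian Jepson
    }
}
```

A real server also has to handle concurrent transactions and crash recovery (the transaction log mentioned earlier), which this sketch deliberately ignores.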
A database that's been normalized will generally scale well as the number of users and the amount of data increase. However, there can come a time when you'll need to look at how the database server is processing your queries and tune them to achieve maximum efficiency. This can sometimes lead you to reorganize the logical structure of the database, which can have a ripple effect through an application as you modify the code that depends on the reorganized tables. For more on database normalization, see "MySQL Performance Tuning" in the June 2001 issue (available online at).

Storing Objects in Databases

One of the drawbacks of normalization is that you often end up with a database whose tables do not map well to the data structures you would use in an object-oriented application. Consider the tables shown in Figure Two, and let's assume we're writing in Java. If you followed a typical object-oriented design, your application would likely have a class that represents an employee and another class that represents a project. But you wouldn't normally have a cross-reference class that corresponds to the proj_empl table. Instead, each instance of the Project class would have a collection of employees, such as Project.EmployeeCollection. Although the same employee may participate in different projects, this does not imply that you must maintain multiple copies of each employee. Instead, the EmployeeCollection would contain references to each object instance (this is the default behavior of Java's collection classes).

Because of the mismatch between the design of your object model and the design of your database, you will spend a lot of time moving data into and out of objects when you store and fetch data from the database. This is sometimes called an impedance mismatch, a term borrowed from electrical engineering that describes a condition where a signal encounters an unexpected change in the medium through which it travels.
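The sharing-by-reference point can be made concrete with a small sketch. The class names here are illustrative, not from the article; the key observation is that both projects hold the same Employee instance, with no cross-reference class in sight.

```java
import java.util.ArrayList;
import java.util.List;

class Employee {
    final int id;
    final String name;
    Employee(int id, String name) { this.id = id; this.name = name; }
}

class Project {
    final int id;
    final String name;
    final List<Employee> employees = new ArrayList<>();
    Project(int id, String name) { this.id = id; this.name = name; }
}

public class ObjectModelDemo {
    public static void main(String[] args) {
        Employee joan = new Employee(200, "Joan Peckham");
        Project big = new Project(1000, "Our Next Big Thing");
        Project secret = new Project(2000, "Big Secret Project");

        // The same Employee instance is shared by reference:
        // no duplicate copies, and no proj_empl-style class.
        big.employees.add(joan);
        secret.employees.add(joan);
        System.out.println(big.employees.get(0) == secret.employees.get(0)); // true
    }
}
```

Persisting this object graph to the three normalized tables of Figure Two is exactly the translation work the article calls the impedance mismatch.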
Fortunately, there are a variety of solutions. Most people agree it would be nice if you could tell an object to save itself via some magical function "SaveYourself()" and not have to worry about how it deals with the database. Object-oriented databases and XML databases address this issue in different manners. Object-oriented databases store objects directly in the database, whereas XML databases store objects by translating them to XML and storing the result.

Object-Oriented Databases

In the object-oriented database world, the Object Data Management Group (ODMG) has created the ODMG 3.0 standard for storing objects in a database (). This standard includes an Object Definition Language (ODL), an Object Query Language (OQL), and APIs for Smalltalk, Java, and C++. These APIs are different from ODBC, but many object-oriented databases support ODBC as a way to connect to them. ODL is used to create class definitions for objects that will be stored in the database, whereas OQL replaces SQL in performing queries. OQL can retrieve collections of objects, individual objects, or object fields, and is just as powerful as SQL. Many object-oriented databases also support SQL for backward compatibility.

There are many commercial and Open Source object-oriented databases to choose from. Christopher Browne has catalogued many of these on his Web site at. There are also a number of object-oriented database systems listed at Cetus Links (). A particularly interesting product is Ozone (), an open source system written in Java that supports the ODMG 3.0 interface.

Because object-oriented databases typically follow the client-server model, they usually offer the same authentication features supported by relational database servers. They also support multiple simultaneous users and can do transactions. Object-oriented databases also hide the underlying details of how objects are stored in the database.
The storage format depends on the implementation of the database and is generally unimportant from a programmer's point of view.

XML Databases

A different approach to storing objects is to first convert them into XML (Extensible Markup Language) and store the resulting XML document. Two ways to do this are with the Open Source project Castor () or Sun's JAXB (). Both can generate code to "marshal" and "unmarshal" a Java object to and from an XML document, but they both require an XML schema that defines the valid values of the object fields. Castor can do this with any XML schema, but JAXB only works with DTDs. The XML code in Listing Two shows one way that the data from our employee and project tables could be marshaled. For more on XML, see the July 2001 and October 2001 issues (available online at and, respectively). You can also look at the February 2002 issue for more on XML schemas, including DTDs (online at).

Listing Two: XML representation of employee-project database

<?xml version="1.0"?>
<DataSet>
  <employee>
    <first_name>Brian</first_name>
    <last_name>Jepson</last_name>
    <empl_id>100</empl_id>
    <years>3</years>
  </employee>
  <employee>
    <first_name>Joan</first_name>
    <last_name>Peckham</last_name>
    <empl_id>200</empl_id>
    <years>3</years>
  </employee>
  <project>
    <prj_id>2000</prj_id>
    <prj_name>Big Secret Project</prj_name>
    <prj_empl_id>200</prj_empl_id>
  </project>
  <project>
    <prj_id>1000</prj_id>
    <prj_name>Our Next Big Thing</prj_name>
    <prj_empl_id>100</prj_empl_id>
    <prj_empl_id>200</prj_empl_id>
  </project>
</DataSet>

Once Castor or JAXB has converted your objects into XML documents, those documents must be stored. Xindice (), a database system that's part of the Apache XML project, can manage XML documents like the one shown in Listing Two, as well as arbitrary XML documents that are not representations of an object.
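Castor and JAXB generate marshalling code from a schema; to make the idea concrete, here is a deliberately naive, hand-rolled sketch (not Castor or JAXB output, and it does no escaping of special characters) that turns an employee object into XML like the first element of Listing Two.

```java
// A hand-rolled illustration of "marshalling" an object to XML.
// Real tools generate this kind of code from a schema; this sketch
// just shows the idea, and skips escaping of <, >, and & entirely.
public class MarshalDemo {
    static class Employee {
        int emplId; String firstName; String lastName; int years;
        Employee(int id, String first, String last, int years) {
            this.emplId = id; this.firstName = first;
            this.lastName = last; this.years = years;
        }
        String toXml() {
            return "<employee>\n"
                 + "  <first_name>" + firstName + "</first_name>\n"
                 + "  <last_name>" + lastName + "</last_name>\n"
                 + "  <empl_id>" + emplId + "</empl_id>\n"
                 + "  <years>" + years + "</years>\n"
                 + "</employee>";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Employee(100, "Brian", "Jepson", 3).toXml());
    }
}
```

Unmarshalling is the reverse trip: parsing the document and reassigning each element's text to the matching field, which is exactly the boilerplate the data-binding tools exist to generate for you.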
Like relational database systems, Xindice and other XML databases hide the actual data files behind the facade of the server (you can check out a nice page on XML databases at). The database server manages the data, controls user authentication for multiple users, and has an API that lets clients connect to the database. However, the various XML databases are not standardized. Xindice uses XML-RPC as its access protocol, XPath as its query language, and a language of its own called XUpdate to perform updates. This can place limitations on the kinds of clients and servers you can use, as well as the programming languages you can work with.

This is where Castor has a particular advantage, because it can work with both relational databases and XML databases. Castor can define classes that know how to store and retrieve themselves from a relational database, which can free you from the previous restrictions.

XML databases are also behind the curve when it comes to transactions. There's no standardization, but one database, Tamino (), does provide transaction functionality; Xindice has transactions on its list of things to do.

Because XML is based on text, and because it uses human-readable tags to provide structure, it's not ideally suited for very large amounts of data. For example, the integer value "255" is represented in XML as three Unicode characters, "2", "5", and "5" (six bytes). In a database that doesn't need to store data in a human-readable format, this can be stored as a single byte (0xFF). And although working with XML thus carries a certain amount of space overhead, it has proven to be highly expressive and has the backing of industry, academia, and the World Wide Web Consortium. If you're developing an application to interoperate with others, XML should almost certainly figure in your plan somewhere.

Interoperability is the Future

So, how should you decide which type of database is right for you?
The most important factors to consider are ease of programming and interoperability with other systems. For ease of programming, an object-oriented database will allow you to use your object model as-is (straight from your UML model to the layout of the database). For interoperability, XML offers the most promise, even if the technology isn't completely "ready for prime time." XML data bindings such as Castor provide a glimpse of the future: objects that can cross programming-language boundaries. This would make it easier for a system written in one language (such as Java) to communicate with a system written in another (such as Perl). If both systems use the same XML schema, you could move data seamlessly between them.

On the other hand, relational databases bring a lot to the table because of the nearly universal support for APIs such as ODBC. There is no doubt that relational databases will continue to be popular. Most off-the-shelf Web log, message board, or groupware packages use relational databases to store their data. This is not because relational databases are the purest or most efficient way to represent data, but because they're ubiquitous and represent an acceptable tradeoff between performance and the complexity of the programming model.

Table One provides some generalized comparisons of how the different classes of databases deal with the various factors discussed here. Of course, the characteristics of your dataset, your programming skills, and the database you choose will affect all of these issues.

Table One: Capabilities of different databases

KEY:
Multi-User: How well the database performs as the number of users increases.
Interop: How well the database supports access from different programming languages.
Large Tables: How well the database performs with very large amounts of data.
Transactions: Can the database handle transactions?
Queries: Can the database perform complex searches?
OO Integration: How well the database integrates with object-oriented programming languages.

SCORING:
*: Acceptable
**: Good
***: Excellent

[a] Key/value databases get a high score because they map directly to collection classes, such as Java's Hashtable and Perl's associative arrays.
[b] XML databases get a high score because there is widespread support for XML in many languages, even though they may only expose an API in a few programming languages.
[c] Implementation-dependent: check with the XML database developer.

Resources

PostgreSQL:
MySQL:
Ozone Object-Oriented Database:
Xindice XML Database:
Tamino XML Database:
Castor:
JAXB:
Christopher Browne's Directory of Object-Oriented Databases:
Slashdot Thread on Object-Oriented vs. Relational Databases:
Cetus Links of Object-Oriented Database Management Systems:
Ronald Bourret's Listing of XML Database Products:
XML Data Binding with Castor:
Polyhedra Main-Memory Database:
TimesTen Main-Memory Database:
http://www.linux-mag.com/id/1075/
> I'm working on my tilemap-based game and I need to get the positions of the tiles in a tilemap. I set it up like this. The grid's cell size is 0.16, since the sprite for each tile is 16x16 px. In TilemapController2, I loop through the tilemap's size and use CellToWorld to get each position, but the result doesn't seem to be correct. As in the screenshot, the center of the tilemap is to the left of the point (0, 0), but the iterator tells me the position of tile (0, 0) is exactly (0, 0), which is incorrect, because the bottom-left tile of the tilemap is far from the point (0, 0). This is the content of TilemapController2:

public class TilemapController2 : MonoBehaviour
{
    public Tilemap Tilemap;

    // Use this for initialization
    void Start()
    {
        Vector3 tilePosition;
        Vector3Int coordinate = new Vector3Int(0, 0, 0);
        for (int i = 0; i < Tilemap.size.x; i++)
        {
            for (int j = 0; j < Tilemap.size.y; j++)
            {
                coordinate.x = i;
                coordinate.y = j;
                tilePosition = Tilemap.CellToWorld(coordinate);
                Debug.Log(string.Format("Position of tile [{0}, {1}] = ({2}, {3})",
                    coordinate.x, coordinate.y, tilePosition.x, tilePosition.y));
            }
        }
    }
}

I've tried several other methods, but none of them give me the correct world position of the tile.

Answer by kactus223 · May 18, 2018 at 02:01 PM

I've found the way to get tile positions here:

Answer by SoshJam · May 17, 2018 at 03:46 PM

Have you tried void GetTileData()? It needs to be in the script of the tile, so you may have to make a new one. But it works!
https://answers.unity.com/questions/1507494/how-to-get-world-position-of-tile-in-tilemap.html?sort=oldest
What is AppFuse?

AppFuse, developed by Matt Raible, is an entry-level J2EE framework that demonstrates how to integrate the popular tools Spring, Hibernate, iBATIS, Struts, XDoclet, JUnit, and others into a basic project skeleton; the latest version, 1.7, adds support for Tapestry and JSF. In the persistence layer, AppFuse uses the Hibernate O/R mapping tool (); as its container, it uses the Spring Framework (). Users can freely choose among the Struts, Spring MVC, WebWork, Tapestry, and JSF web frameworks. Development follows a TDD style, with JUnit tests for every layer, even testing that JSP pages render without errors. To simplify development, AppFuse provides a predefined directory structure, base classes, and Ant tasks for creating the database, configuring Tomcat, and deploying and testing the application, and it can generate source code and maintain some configuration files automatically.

References: AppFuse (currently version 1.7) can be downloaded at https://appfuse.dev.java.net/, where you can also find reference materials and documentation.

AppFuse Quick Start

The main goal of the AppFuse project is to help developers reduce the work to be done at the beginning of a project. The basic steps for creating a new project with it are:

1. Download the latest AppFuse source, or check it out from CVS (cvs -d :pserver:guest@cvs.dev.java.net:/cvs co appfuse).

2. Install J2SE 1.4+ and set the JAVA_HOME environment variable correctly; install Ant 1.6.2+ and set the ANT_HOME environment variable.

3. Install MySQL 3.23.x+ (version 4.1.7 recommended) and Tomcat 4.1.x+ (version 5.0.28 recommended), and set the CATALINA_HOME environment variable to point to your Tomcat installation directory. Note: if you plan to use MySQL 4.1.7, you must set its default character set to UTF-8 and its default table type to InnoDB.
In other words, add the following lines to your c:\Windows\my.ini or /etc/my.cnf file:

[mysqld]
default-character-set = utf8

[mysqld]
default-table-type = innodb

4. Install a local SMTP server, or, if you already have an SMTP server available, modify mail.properties (in the web/WEB-INF/classes directory) and build.properties (in the root directory; the log4j settings) to point to your SMTP server; by default they point to a local SMTP server.

5. Copy lib/junit3.8.1/junit.jar to the $ANT_HOME/lib directory.

6. Run ant new -Dapp.name=YOURAPPNAME -Ddb.name=YOURDBNAME. This creates a directory named "YOURAPPNAME". Warning: the command will not work for some values of app.name; do not use "test", any name containing "appfuse", or names mixed with digits, dashes, and the like.

7. Change to the new directory and run the ant setup task to create the database and deploy your application to Tomcat. The task only works if your database root user has no password; if needed, you can change the root password setting in the build.properties file. If you want to verify that everything works, run the ant test-all task for a comprehensive test, but be sure to stop Tomcat before running the tests.

8. Run the ant test-reports task; when it finishes, a message tells you how to view the generated test reports.

Once you have confirmed that your AppFuse development environment is configured and working, the guide below explains how to develop with AppFuse.
Optional installations

If you would like to use iBATIS as your persistence-layer framework, see the README.txt file in the extras/ibatis directory.
If you would like to use Spring MVC as your web-layer framework, see the README.txt file in the extras/spring directory.
If you would like to use WebWork as your web-layer framework, see the README.txt file in the extras/webwork directory.
If you would like to use Tapestry as your web-layer framework, see the README.txt file in the extras/tapestry directory.
If you would like to use JSF as your web-layer framework, see the README.txt file in the extras/jsf directory.

If you want to script the creation and testing automatically, you can refer to the following script:

rm -r ../appfuse-spring
ant new -Dapp.name=appfuse-spring -Ddb.name=ibatis
cd ../appfuse-spring
ant install-ibatis install-springmvc
cd extras/ibatis
ant uninstall-hibernate
cd ../..
ant setup
ant test-all test-reports

If you do not want to install iBATIS, Spring MVC, or WebWork, you should remove their installation content from the extras directory before checking your project into source control.

Typically, once you have completed all of the above steps and everything works, the thing you will most likely want to do is change the "org.appfuse" package names to something like "com.company". This is now quite easy: all you need to do is download the renaming tool and read its README file to learn how to install and use it. Note: before you use this tool, it is best to back up your project so that you can restore it.
If you rename packages such as org.appfuse.webapp.form to something like test.web.form, you must also adjust the ConverterUtil class in the src/service package; its getOpposingObject method is your friend. Let's take a look:

name = StringUtils.replace(name, "model", "webapp.form");
name = StringUtils.replace(name, "webapp.form", "model");

AppFuse Development Guide

If you have downloaded AppFuse and want to install it on your machine, follow the quick-start steps above. Once you have everything installed, the guide below is the best tutorial for learning how to develop with AppFuse.

Note: each AppFuse release contains a copy of this guide; if you want to update the copy in your working directory (in the docs directory), run "ant wiki".

As of AppFuse 1.6.1, the tools described in this guide can generate most of your code. If you are using the Struts + Hibernate combination, you can even generate all of it. If you choose Spring MVC or WebWork as your web-tier framework, you are not so lucky: writing automated installation scripts for them presents many difficulties, so you will have to configure those Controllers and Actions yourself. This is mainly because I have not used XDoclet with those web-layer frameworks, and because of the limitations of using Ant as the installation tool. The code-generation tool is called AppGen, and I explain how to use it in Part I.

Part I: Creating new DAOs and Objects in AppFuse. This is a tutorial on creating Java objects that correspond to database tables, and on creating the persistence classes that move those objects to and from the database.

About this guide: this guide will show you how to create a new database table, and how to create the Java code that accesses it.
We will create an object and some related classes that persist (save, load, delete) that object to a database. In Java terms, we call such an object a POJO (Plain Old Java Object); it basically corresponds to a database table. The other classes will be:

- A Data Access Object (also known as a DAO): an interface plus a Hibernate implementation.
- A JUnit class to test whether our DAO works correctly.

Note: If you are using MySQL and you want to use transactions (generally you will certainly want to), then you must set the table type to InnoDB. You can do so by adding the following to your MySQL configuration file (/etc/my.cnf or c:\Windows\my.ini). The second setting (which selects the UTF-8 character set) is required for MySQL 4.1.7+.

[mysqld]
default-table-type = innodb
default-character-set = utf8

If you run into batching errors with PostgreSQL, you can try turning batching off by setting the batch size to 0 in your src/dao/**/hibernate/applicationContext-hibernate.xml configuration file.

AppFuse uses Hibernate as its default persistence layer. Hibernate is an object-relational mapping framework that lets you establish a mapping between your Java objects and database tables, so you can easily implement CRUD (Create, Retrieve, Update, Delete) operations on your objects. You can use iBATIS as an alternative persistence layer. If you want to install iBATIS in AppFuse, see the README.txt file in the extras/ibatis directory. If you want to replace Hibernate with iBATIS, I hope you have a good reason and are already familiar with it. I also hope you will contribute good suggestions for an iBATIS-based version of this guide. ;-)

In what follows, I walk through the actual development process as I would do it. Let us start by creating a new object, a DAO, and a test case in the AppFuse project structure.
Table of Contents
[1] Create a new Object and add XDoclet tags
[2] Use Ant to create a new database table based on our new object
[3] Create a new DAOTest to run JUnit tests against the DAO
[4] Create a new DAO to perform CRUD operations on our object
[5] Configure the Spring configuration file for the Person object and PersonDAO
[6] Run the DAOTest

[1] Create a new Object and add XDoclet tags

The first thing we need to do is create an object to persist. Let us create a simple "Person" object (in the src/dao/**/model directory) that has an id, a firstName, and a lastName (as its properties).

package org.appfuse.model;

public class Person extends BaseObject {
    private Long id;
    private String firstName;
    private String lastName;

    /*
     Generate your getters and setters using your favorite IDE:
     In Eclipse: Right-click -> Source -> Generate Getters and Setters
     */
}

This class should extend BaseObject, because BaseObject has three abstract methods (equals(), hashCode(), and toString()) that you have to implement in the Person class. The first two are required by Hibernate. The simplest way is to use a tool (such as Commonclipse) to generate them; if you want more information about using this tool, see Lee Grey's website. Another tool you could use is Commons4E, an Eclipse plugin; I have not used it, so I cannot tell you what features it has. If you are using IntelliJ IDEA, you can generate equals() and hashCode(), but not toString(); there is a ToStringPlugin for that, but I have never used it personally.

Now that we have created a POJO, we need to add XDoclet tags to it in order to generate the Hibernate mapping file. This mapping file is what allows Hibernate to map objects to tables and properties to columns.
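To make the three BaseObject methods concrete, here is a minimal, self-contained sketch of what hand-written implementations for the Person properties might look like. This is my own illustration, not AppFuse's BaseObject or the Commonclipse-generated code (which differ in detail); the common Hibernate guideline it follows is to base equals()/hashCode() on business fields rather than on the database identifier:

```java
import java.util.Objects;

// Stand-alone sketch (hypothetical class, not from the AppFuse source).
public class Person {
    private Long id;
    private String firstName;
    private String lastName;

    public Person(Long id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        // Compare business fields, not the id, which changes on save.
        return Objects.equals(firstName, p.firstName)
            && Objects.equals(lastName, p.lastName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(firstName, lastName);
    }

    @Override
    public String toString() {
        return "Person[id=" + id + ", firstName=" + firstName
             + ", lastName=" + lastName + "]";
    }
}
```

With this contract, an unsaved object (id = null) and its saved copy still compare equal, which is what Hibernate expects when objects are used in sets.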
First, we add an @hibernate.class tag, which tells Hibernate which table this object maps to:

/**
 * @hibernate.class table="person"
 */
public class Person extends BaseObject {

We must also add a primary-key mapping; otherwise XDoclet will report an error when generating the mapping file. Note that all of these @hibernate.* tags must be placed in the Javadocs of your POJO's getter methods.

/**
 * @return Returns the id.
 * @hibernate.id column="id"
 *  generator-class="increment" unsaved-value="null"
 */
public Long getId() {
    return this.id;
}

I use generator-class="increment" instead of generator-class="native" because I found that "native" causes problems on some other databases. If you only intend to use MySQL, I recommend you use "native"; this guide uses "increment".

[2] Use Ant to create a new database table based on our new object

You can create the person table by running "ant setup-db". This task both generates the Person.hbm.xml file and creates the "person" table in the database. In the Ant console, you can see the table Hibernate creates for your model:

[schemaexport] create table person (
[schemaexport]     id bigint not null,
[schemaexport]     primary key (id)
[schemaexport] );

If you want to see the Person.hbm.xml file that Hibernate generated for you, look in the build/dao/gen/**/model directory.
http://www.codeweblog.com/what-is-the-appfuse/
C# Driver

MongoDB C# Driver is the officially supported C# driver for MongoDB.

Procedure

1. Download the MongoDB C# Driver.

To download the C# driver, follow the instructions found on the MongoDB C#/.NET Driver page.

2.

3. Connect to MongoDB.

Use MongoClient to connect to a running mongod instance. Add the following using statements in your C# program.

using MongoDB.Bson;
using MongoDB.Driver;

Include the following code in your program to create a client connection to a running mongod instance and use the test database.

protected static IMongoClient _client;
protected static IMongoDatabase _database;

_client = new MongoClient();
_database = _client.GetDatabase("test");

To specify a different host and port for the mongod instance, see the MongoClient API page.
https://docs.mongodb.com/getting-started/csharp/client/
iCloud Drive folder access error I’m getting a new error while trying to access my iCloud Drive from within the app. “Could not read directory contents” “You may not have permission to view this directory (eg. because it belongs to a different app) I’ve deleted and reinstalled, restarted my phone, even deleted the iCloud Drive folder and reinstalled to force it to create a new Drive folder but it won’t create a new folder just complains with this error any time it try to access it. Oddly enough if I use the “external files” picker and navigate to a different folder within iCloud it works just fine as a workaround, but I don’t like workarounds I’d rather have the default folder back. Any ideas? Thanks! Bump! I’m getting the same error. I deleted the folder accidentally using the Files app. Using Pythonista v3.2 on ios iPad Pro 2 12.9”. iCloud Could not read directory contents. You may not have permissions to view this directory (e.g. because it belongs to a different app). you could try: os.mkdir('/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/') os.mkdir('/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/Documents') but..l dunno Not create dir but remove it worked for me. import os os.rmdir('/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/') Sorry to necropost, but I just started having this issue after trying out the Pythonista Beta. I tried reverting to the standard app store version, and the suggestions above, without success. I can see the Pythonista folder in iCloud from the files app, and from my mac. I can even create new scripts from Pythonista which will save in iCloud, but Pythonista cannot see them for some reason. Any ideas on what I can try? i am getting the same error, are you running pythonista on ipad and iphone on the same icloud account ? 
started happening with update ios 13.2.3 os.mkdir('/private/var/mobile/Library/Mobile Documents/iCloud~com~omz-software~Pythonista3/') removing the dirs and rebooting did the trick - hrfgbdswrr With a fresh install on 13.2.3 iPhone 11 Pro, I get a permissions error reading the default iCloud path in the app. I can create files but not read them. With the os.rmdir suggestion, I was able to delete the folder (Verified in Files app), but os.mkdir failed with “ FileExistsError: [Errno 17] File exists:”
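The mkdir call in the workaround above fails with FileExistsError when the container directory already exists, and the two-step mkdir fails when the parent is missing. A defensive variant — my own sketch, not from the thread; the helper name and the use of os.makedirs(..., exist_ok=True) are my additions — avoids both problems:

```python
import os

# Container path discussed in the thread (iOS-specific; adjust as needed).
PYTHONISTA_ICLOUD = ('/private/var/mobile/Library/Mobile Documents/'
                     'iCloud~com~omz-software~Pythonista3/Documents')

def ensure_dir(path):
    """Create path (and any missing parents); do nothing if it exists."""
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)

# On-device usage (hypothetical): ensure_dir(PYTHONISTA_ICLOUD)
```

Calling it twice is safe, so it can be run unconditionally at the top of a script.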
https://forum.omz-software.com/topic/4722/icloud-drive-folder-access-error
A library for controlling the Sunix RGB / RGBWWCW WiFi LED Strip controller Project description sunix-ledstrip-controller-client A python 3.4+ library for controlling the Sunix® RGB / RGBWWCW WiFi LED Strip controller. Build Status How to use Installation pip install sunix-ledstrip-controller-client Usage For a basic example have a look at the example.py file. If you need more info have a look at the documentation which should help. Basic Example Create the LEDStripControllerClient object The first thing you need to communicate with any controller is the api client. Create one like this: from sunix_ledstrip_controller_client import LEDStripControllerClient api = LEDStripControllerClient() The next thing you need is a Controller object that specifies the basics about your Sunix controller hardware. You can either let the api search automatically for your controller using: devices = api.discover_controllers() or create one manually like this: from sunix_ledstrip_controller_client import Controller device = Controller(api, "192.168.2.23") or including a port if you want to access it from outside of your local network: device = Controller(api, "my-dyndns-address.org", 12345) Note that you have to supply an api object so the Controller can fetch is state. Turn it on! Now you have all that is needed to control your device. It’s time to turn it on and off! Use this method to turn it on: device.turn_on() and this to turn it off: device.turn_off() Make it a rainbow (changing colors) Now to the fun part. The RGB values and the WW (warm white and cold white) value can be adjusted separately (while keeping the other value) or both at the same time. All values have a valid range of 0 to 255. 
If you only want to change the RGB values use: device.set_rgb(255, 255, 255) and this one if you only want to change the WW value: device.set_ww(255, 255) To set both at the same time use (you guessed it): device.set_rgbww(255, 255, 255, 255, 255) Functions The official app for the Sunix controller offers 20 different functions that can be activated and customized in speed. These functions are hardcoded in the controller so they can not be altered in any way. You can activate them though using: from sunix_ledstrip_controller_client import FunctionId device.set_function(FunctionId.RED_GRADUAL_CHANGE, 240) Function ids can be found in the FunctionId enum class. 0 is slow - 255 is fast. In the network protocol the speed is actually reversed (0 is fast, 255 is slow) but I changed this for the sake of simplicity. You should be aware though that the speed curve seems to be exponential. This means 255 is very fast but 240 is already a lot slower. Custom Functions Another feature of the official app is to set a custom color loop with a custom transition and speed between the colors. Since v1.2.0 of this library you can set those too :) Simply have a look at the example_custom_function.py file for a detailed example. Set/Get Time The Sunix® controller has a build in clock to be able to execute timer actions. Currently there is no way to get or set timers with this library. You can however get and set the current time of the controller. To get the currently set time use: time = device.get_time() Note that this might be None though if you have never set a time for this controller before. To set a new value use: dt = datetime.datetime.now() device.set_time(dt) Attributions I want to give a huge shoutout to Chris Mullins (alias sidoh) and his ledenet_api library. Although the protocol used by the sunix controller is not exactly the same to the one used by the LEDENET Magic UFO controller it’s quite similar and his work was a great starting point for me. 
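Since every channel value must fall in the 0-255 range, it can be handy to normalize inputs before handing them to the setters. The helper below is my own sketch and is not part of the library; only the 0-255 range comes from the text above:

```python
def clamp_channels(*values, lo=0, hi=255):
    """Clamp each channel value into the controller's valid 0-255 range."""
    return tuple(max(lo, min(hi, int(v))) for v in values)

# Hypothetical usage with a Controller instance named `device`:
# device.set_rgbww(*clamp_channels(300, -5, 128, 255, 64))
```

This way an out-of-range value coming from, say, a brightness calculation degrades gracefully instead of being rejected.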
Contributing Github is for social coding: if you want to write code, I encourage contributions through pull requests from forks of this repository. Create Github tickets for bugs and new features and comment on the ones that you are interested in. License sunix-ledstrip-controller-client by Markus Ressel Copyright (C) 2017 Markus Res.
https://pypi.org/project/sunix-ledstrip-controller-client/2.0.4/
Algorithms, functional programming, CLR 4.0, and of course, F#!

Hi Chris, thanks for this example. VS 2008, F# 1.9.6.0. Just downloaded, unzipped, built...

Error 1: The tag 'TankGame' does not exist in XML namespace 'clr-namespace:BurnedLandGame;assembly=BurnedLandGame'. Line 8 Position 4. C:\Users\Art\Documents\MS F#\F# CTP samples AUG08\Burnedland\BurnedLand\BurnedLandUI\Window1.xaml 8 4 BurnedLandUI

For the data binding on the WPF side to work, the F# library needs to be built first. If you build the F# project first and then open the WPF designer (or click 'Reload'), things should work as you expect. Is that not the behavior you are seeing?

I've had issues with removing VSLab F# 1.9.4... maybe this is what I'm seeing. Probably needed to confirm that was OK first.

Regedit'd to fully uninstall VSLab. BurnedLand builds OK now; Vista, VS 2008, F# 1.9.6.0. Thanks.
http://blogs.msdn.com/chrsmith/archive/2008/09/04/simple-f-game-using-wpf.aspx
On 9 July 2016, the FCA published its Policy Statement (PS 16/19) on Financial Crime Reporting, setting out the feedback received by the FCA on its proposal, which was originally consulted on in December 2015. The document sets out the final rules and timelines for the preparation of a financial crime return (REP-CRIM). The rules will apply to all firms with full permissions carrying out consumer credit activities (unless revenue is lower than £5 million). Limited permission consumer credit firms are not subject to these reporting requirements. Regulated lenders will be within the scope of the return if they have a revenue of £5 million or above. The FCA has decided to exclude general insurers and general insurance intermediaries at this stage. The FCA expects firms to start reporting from the end of 2016 and within 60 days of their reporting year end. Firms may submit on a group or single regulated entity basis where all companies share a common financial year end.
https://www.lexology.com/library/detail.aspx?g=09ec367a-7b6b-4d42-a674-f153e78cbd15
09-06-2016 01:27 AM

Hello,

I'm testing a new model in Spatial Workshop using the "Test current recipe" option. The model fails with the message:

Message: {"Error":"`anonymous-namespace'::IsProcessValid failed\nProcess has 2 operators with the Display Name: 'Attribute_Output_Diff_2'","Status":"Error"}

However, I don't have that display name twice in my model. I built it in IMAGINE, where the name is normally incremented automatically. I've also made sure to remove all spaces from the operator names. Each time I change that operator's name, I get the same message with the new name.

Can you help me with this? I don't know what else to do to solve it. Note also that the validation tool runs successfully every time.

I can provide the model and the entire log in a private message if needed.

Thanks in advance,
Regards
Elodie

09-08-2016 12:27 AM

Hello Elodie, please send me the problematic recipe. My email: jakub.papiewski@hexagongeospatial.com
Regards - Kuba

09-09-2016 04:11 AM

Hi Kuba, I've just sent you the model. Thanks for the help. Let me know when you receive it, please.
Regards
Elodie
https://community.hexagongeospatial.com/t5/Smart-M-App/Model-execution-failed-Process-has-2-operators-with-the-Display/td-p/6943
CGTalk > Software Specific Forums > Autodesk Maya > Maya Programming > commands within classes/instances

Soviut 01-20-2008, 09:41 PM
I'm trying to wrap a button control in a class. I want the button's command to call an instance method, but it seems the command string doesn't like it when I try to use "self".

import maya.cmds as cmds

class buttonWrapper():
    def __init__(self):
        cmds.button("wrappedButton", command="self.onClick()")

    def onClick(self);
        print "clicked!"
        return

I get the following error:

# NameError: name 'self' is not defined #

So it seems I can't use instance methods in button (and other control) commands? Is there any possible alternative?

Gravedigger 01-20-2008, 09:48 PM
maybe you need to use 'this' instead of 'self'?

Soviut 01-20-2008, 10:32 PM
"this" is not a Python keyword. I gave it a try, just in case it's some Python-to-MEL transition thing. Nonetheless, it didn't work either.

Gravedigger 01-20-2008, 11:03 PM
hmm.. i've already feared that. sorry, just gave an idea. i don't know python, just c++

katisss 01-21-2008, 08:15 AM

class buttonWrapper():
    def __init__(self):
        cmds.button("wrappedButton", command="self.onClick()")

    def buttonWrapperonClick(self):
        print "clicked!"
        return

gives me no errors. Did you make sure it wasn't the ";" instead of a ":" after the buttonWrapperonClick(self)?

Soviut 01-21-2008, 02:40 PM
Sorry, my example isn't really in context. That command will only fire when the button is clicked; that's why no error occurs when you just run the code. It's a NameError at runtime, not compile time. I'm sure it wasn't a semicolon issue, because if it was, it would have given me a SyntaxError at compile time, long before the code had a chance to run.

Soviut 01-21-2008, 02:54 PM
Doh, I solved it. It was because of how I was passing the command in. In Python, it turns out you don't need to pass a command string anymore; you can pass an actual function reference.
I was careless and was passing the actual function result, because I included parentheses at the end of the command. Below is the code that works. Notice there's an *args argument in the onClick() method. This is because optional arguments can be sent, but if none are present then it just sends an empty string, so it has to be there.

import maya.cmds as cmds

class buttonWrapper():
    def __init__(self):
        cmds.button("wrappedButton", command=self.onClick)

    def onClick(self, *args):
        print "clicked!"
        return
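The distinction the thread lands on — a command string evaluated with no 'self' in scope versus a bound method that carries its instance along — can be reproduced outside Maya with plain Python (Python 3 syntax here). The FakeButton class below is my own hypothetical stand-in, not part of maya.cmds:

```python
class FakeButton:
    """Minimal stand-in for a GUI button that stores a callback."""
    def __init__(self, command):
        self.command = command

    def click(self):
        if callable(self.command):
            return self.command("")  # GUI toolkits often pass extra args
        # String form: evaluated in a fresh namespace, roughly mimicking
        # how a GUI toolkit evaluates command strings -- 'self' is undefined.
        return eval(self.command, {})

class ButtonWrapper:
    def __init__(self):
        self.clicks = 0
        # Passing the bound method keeps 'self' attached to the callable.
        self.button = FakeButton(command=self.on_click)

    def on_click(self, *args):
        self.clicks += 1
        return "clicked!"
```

Clicking a ButtonWrapper's button calls the bound method and updates the instance, while FakeButton("self.on_click()") raises NameError on click — the same failure mode as in the original post.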
http://forums.cgsociety.org/archive/index.php/t-586362.html
# PHP-RBAC v2.x

PHP-RBAC is the de-facto authorization library for PHP. It provides developers with NIST Level 2 Hierarchical Role Based Access Control and more, in the fastest implementation yet.

Current Stable Release: PHP-RBAC v2.0

## What is an Rbac System?

Take a look at the "Before You Begin" section of our Documentation to learn what an RBAC system is and what PHP-RBAC has to offer you and your project.

## NIST Level 2 Compliance

For information regarding NIST RBAC Levels, please see This Paper. For more great resources, see the NIST RBAC Group Page.

## Installation

You can now use Composer to install the PHP-RBAC code base. For installation instructions, please refer to the "Getting Started" section of our Documentation.

## Usage

Instantiating a PHP-RBAC Object

With a 'use' statement:

use PhpRbac\Rbac;

$rbac = new Rbac();

Without a 'use' statement, outside of a namespace:

$rbac = new PhpRbac\Rbac();

Without a 'use' statement, inside of another namespace (notice the leading backslash):

$rbac = new \PhpRbac\Rbac();

## PHP-RBAC and PSR

PHP-RBAC's Public API is now fully PSR-0, PSR-1 and PSR-2 compliant. You can now:

If you notice any conflicts with PSR compliance, please Submit an Issue.

## The future of PHP-RBAC

We are in the process of refactoring the PHP-RBAC internals. We have two goals in mind while doing this:

With a PSR compliant Public API already in place, we can continue to work towards our goals one piece at a time without altering the Public API that developers are working with and rely on, making the transition as seamless and invisible as possible.

## Contributing

We welcome all contributions that will help make PHP-RBAC even better tomorrow than it is today!

How You Can Help:
https://awesomeopensource.com/project/OWASP/rbac
Using the JSP 2.0 EL API

by Andrei Cioroianu

Learn how to evaluate JSP expressions dynamically, use the Expression Language (EL) in XML configuration files, and optimize EL usage when presenting SQL result sets.

Download source code for this article

EL defines an easy-to-use syntax for accessing JavaBean properties, Java collections, scoped attributes, initialization and request parameters, HTTP headers, and cookies without using Java scriptlets in JSP pages. This makes the code more readable and improves the maintainability of the Web pages. In addition, the EL provides a full set of operators that let you build arithmetic, logical, and conditional expressions. JSP 2.0 added a new feature, called EL functions, that can be used to call static Java methods from Web pages without using Java code.

The full power of the JSP EL is exposed through a simple application programming interface (API) to Java programmers, allowing them to use the EL in unconventional ways. For example, the customizability of a Web application can be improved by using the EL within the web.xml configuration file. The EL API is needed for evaluating the expressions from the XML file. This article presents the EL API, using it in several utility classes whose static methods are called from JSP pages using EL functions. The article also shows several practical uses of the EL API in JSP pages and in custom tag handlers based on the Simple Tags API, which is another new feature of JSP 2.0.

A previous Oracle Technology Network (OTN) article of mine, "Creating JSP 2.0 Tag Files," contains a complete set of JSP pages and tag files that use the EL, JSTL, and SQL to create a table; query it; and insert, update, and delete rows. The page that queries the database uses the EL to present the result set.
That page is optimized in this article by parsing the expressions once and evaluating them multiple times, in a loop, with the help of the EL API.

Expression Language API Overview

The javax.servlet.jsp.el API consists of two classes (Expression and ExpressionEvaluator), two interfaces (VariableResolver and FunctionMapper), and two exceptions (ELException and ELParseException). Each of these classes and interfaces has only one or two methods. Despite its simplicity, the EL API provides everything you need in order to use the Expression Language outside of the JSP pages. The following instructions describe what you have to do in order to evaluate an expression in your Java code, using the EL API.

Step 1: Get an ExpressionEvaluator. If you develop a Java tag handler, call the getExpressionEvaluator() method of the object returned by getJspContext(). In tag files and JSP pages, you can call the method with the same name provided by the jspContext and pageContext implicit objects.

Step 2: Get a VariableResolver. Call the getVariableResolver() method of the JSP context. This method returns an object that provides access to the JSP variables and implicit objects. You may also develop your own VariableResolver implementation, if necessary.

Step 3 (optional): Provide a FunctionMapper. If you want to use EL functions in your expressions, you have to implement the FunctionMapper interface. The evaluate() method of the EL API accepts a null function mapper; therefore, this parameter is optional.

Step 4: Evaluate the Expression. Call the evaluate() method of the expression evaluator, passing the following parameters: the expression (as a String), its expected type (as a Class), the variable resolver, and a function mapper that may be null. The evaluate() method returns the expression's value, which will have the expected type or a subclass of it. If you don't know what type to expect, you may specify Object.class.

Building a utility class.
Our ELUtils class (see source code) provides utility methods that can reduce the usage of the EL API to a single line of code. The ELUtils.evaluate() methods perform the first, second, and fourth steps described above. These methods use their JspContext parameter to get an expression evaluator and a variable resolver, which are used to evaluate the given expression, whose value is returned. We'll implement the function mapper of the optional third step for another example in this article.

When you invoke the evaluate() method of the expression evaluator, the JSP container of the application server parses the expression, gets the values of the variables, calls the EL functions, applies the operators, and obtains a value, which is converted to the expected type. Note that evaluate() may throw an ELException if the expression is syntactically incorrect or if an error is generated by a type conversion, by an invalid array index, by a bean method throwing an exception, or by something else.

A later section of this article shows how to call the methods of the ELUtils class from a JSP page using EL functions. In order to simplify the functions' usage, the expectedType parameter may be either a String or a Class instance. If it's a String, the getClass() method of ELUtils gets the Class object for the given name, using Class.forName().

Using the EL API in Custom Tag Handlers

As explained in the previous section, the EL API requires a JspContext in order to evaluate expressions. Such a context object exists in every JSP page and is transmitted to the Java classes that handle the custom tags used within the page. Therefore, custom tag handlers are the perfect place for using the EL API.

Adding EL support to the Simple Tags API. The javax.servlet.jsp.tagext package contains the APIs that allow you to build tag handlers. Most of these classes are inherited from JSP 1.x, and they are used to build the so-called Classic Tags.
JSTL is one of the many tag libraries that use the Classic Tags API, which includes the Tag, BodyTag, IterationTag, and TryCatchFinally interfaces, as well as the TagSupport and BodyTagSupport classes. The javax.servlet.jsp.tagext package also contains the SimpleTag interface and the SimpleTagSupport class, which form the Simple Tags API. This is a new API, introduced in JSP 2.0 as a replacement for the older JSP 1.x classes and interfaces.

Our ELTagSupport class extends SimpleTagSupport with a convenience method that takes a JSP expression and an expected type, passing them to ELUtils.evaluate() together with the JspContext object returned by the getJspContext() method inherited from SimpleTagSupport:

public class ELTagSupport extends SimpleTagSupport {
    protected Object evaluate(String expression, Object expectedType)
            throws JspException {
        return ELUtils.evaluate(
            expression, expectedType, getJspContext());
    }
    ...
}

Solving an API limitation problem. From the Web developer's perspective, all custom JSP tags look the same, regardless of whether they are based on the Simple Tags API or on the Classic Tags API. The new JSP 2.0 API was designed for Java developers who were complaining that the old API was unnecessarily complex. The Simple Tags API is very easy to use, but it has one limitation: the JspContext class does not have methods for obtaining the JSP implicit objects such as request, response, session, and application. Theoretically, this limitation creates the opportunity to use the Simple Tags API outside of the Servlet/JSP environment. In practice, however, all custom tags are used in JSP pages, and many of them need access to the JSP implicit objects.

Fortunately, in the case of many JSP containers, including Oracle Application Server Containers for J2EE (OC4J) 10g, you can cast the JspContext to PageContext, which lets you obtain the JSP implicit objects. This procedure is very efficient, but it might not work with every application server.
For those JSP containers that don't support the cast to PageContext, you could use the EL API, which should always work in a Servlet/JSP environment, even if this second solution is less efficient. The ELTagSupport class shows how to combine the two procedures to make sure that your code works with every J2EE application server and is as efficient as possible. The getRequest(), getResponse(), getSession(), and getApplication() methods of our ELTagSupport class try to cast the JspContext object to PageContext in order to obtain the JSP implicit objects with the methods provided by the PageContext class. If this is not possible, ELTagSupport queries the pageContext implicit object with the expression language. In a non-Servlet environment, our methods would return null.

The following table contains the methods that are called and the expressions that could be evaluated to get the JSP implicit objects from the first column. The Java classes of those objects are indicated in the second column.

Method / Expression                              Java Class
getRequest() - ${pageContext.request}            javax.servlet.http.HttpServletRequest
getResponse() - ${pageContext.response}          javax.servlet.http.HttpServletResponse
getSession() - ${pageContext.session}            javax.servlet.http.HttpSession
getApplication() - ${pageContext.servletContext} javax.servlet.ServletContext

Expression Language Functions

EL functions allow you to call static Java methods within your EL expressions, with the following syntax:

libraryPrefix:functionName(param1, param2, ...)

Functions are defined in JSP libraries that may also contain custom tags. The used libraries must be declared in the JSP page with the <%@taglib%> directive:

<%@ taglib prefix="libraryPrefix" uri="/WEB-INF/.../libraryDescriptor.tld" %>

or

<%@ taglib prefix="libraryPrefix" uri="" %>

Defining EL functions. The mapping between an EL function and a static Java method must be defined in a .tld file.
For example, the first evaluate() method of ELUtils is mapped to an EL function, in our el.tld library descriptor (see source code), with the following declarations:

<function>
    <name>evaluate</name>
    <function-class>jsputils.el.ELUtils</function-class>
    <function-signature>
        java.lang.Object evaluate(
            java.lang.String,
            java.lang.Object,
            javax.servlet.jsp.JspContext)
    </function-signature>
</function>

The el.tld file defines similar mappings for all static methods of the ELUtils, FNMapper, and PEWrapper classes in our source code. Because the name of each EL function must be unique in a .tld file, we use evaluate2() and evaluate3() for the second evaluate() method of ELUtils and for the method with the same name provided by PEWrapper.

Using EL functions. The ELTest.jsp page contains a form with a single input field, in which users may type an EL expression. When they click the "Evaluate" button, the JSP page receives the expression, which is evaluated with the function defined above. The value returned by el:evaluate() is stored in a JSP variable (exprValue) with the <c:set> tag of JSTL:

<c:set var="exprValue"
    value="${el:evaluate(param.expr, 'java.lang.Object', pageContext)}"/>

Then, the expression and its value are printed with the <c:out> tag, which performs any necessary HTML encoding (replacing < with &lt;, > with &gt;, and so on):

<c:out value="${param.expr}"/> = <c:out value="${exprValue}"/> <br>

If you have just learned the EL, you may use ELTest.jsp below to verify the syntax of your expressions and to see what results they produce. Don't forget to wrap the EL constructs with ${ and }. Note that an EL expression may contain multiple ${...}, whose values are concatenated.
<%@ taglib prefix="el" uri="/WEB-INF/el.tld" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<body>
<form method="post">
<c:if test="${not empty param.expr}">
    <c:set var="exprValue" value="${el:evaluate(param.expr, 'java.lang.Object', pageContext)}"/>
    <c:out value="${param.expr}"/> = <c:out value="${exprValue}"/> <br>
</c:if>
<input type="text" name="expr" size="40" value="<c:out value="${param.expr}"/>">
<input type="submit" value="Evaluate">
<c:if test="${empty param.expr}">
    <br> Example: \${ 1 + 2 }
</c:if>
</form>
</body>
</html>

Implementing Function Mappers

Function mappers are needed when you want to use EL functions within the expressions that are evaluated with the EL API. The FunctionMapper interface has only one method, named resolveFunction(), which takes two parameters (a library prefix and a function name) and must return the java.lang.reflect.Method object that provides information about, and access to, the static Java method that is mapped to the EL function with the given name. Our FNMapper class keeps the function-method mappings in a java.util.HashMap. The resolveFunction() method uses the prefix and localName parameters to build a key that is passed to the get() method of the java.util.HashMap object, which returns the corresponding Method instance:

public class FNMapper implements FunctionMapper {
    private HashMap functionMap;
    ...
    public Method resolveFunction(String prefix, String localName) {
        return (Method) functionMap.get(prefix + ':' + localName);
    }
}

Building the function map. The buildMap() method of FNMapper builds the function map using the Java Reflection API. Each public static method of the given class is mapped to an EL function with the same name. This works fine as long as the class doesn't have multiple static methods with the same name.
private void buildMap(String prefix, Class clazz) {
    Method methods[] = clazz.getMethods();
    for (int i = 0; i < methods.length; i++) {
        Method m = methods[i];
        if (Modifier.isStatic(m.getModifiers()))
            functionMap.put(prefix + ':' + m.getName(), m);
    }
}

The buildMap() method is called by the FNMapper() constructor that gets the Class instance with Class.forName(). The constructor is declared private because the FNMapper class manages its own instances. The public getInstance() method takes an id parameter and returns the requested function mapper. In its current implementation, FNMapper supports only the JSTL function library, but you could easily modify it to support others. Note that the same function mapper instance could be used for multiple function libraries that have different prefixes. This is actually necessary if you want to use functions from different libraries within the same expression.

Our FNMapper works only with the Apache implementation of JSTL 1.1 because the class name is hardcoded. A more general function mapper would have to parse a .tld file, extracting the name of the class containing the static methods. The names of the EL functions and the mapping information should also be obtained from the .tld file. However, the simple FNMapper class is sufficient for testing and learning purposes.

Testing the function mapper. The FNTest.jsp page is similar to ELTest.jsp. In order to enable the support for the JSTL functions, FNTest.jsp uses el:evaluate2(), passing a function mapper obtained with el:getFNMapper('fn'):

<c:set var="exprValue" value="${el:evaluate2(param.expr, 'java.lang.Object', pageContext, el:getFNMapper('fn'))}"/>

The evaluate2() function is mapped to the second evaluate() method of ELUtils, which accepts the function mapper parameter. The getFNMapper() function is mapped to the FNMapper.getInstance() method in the el.tld file.
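To make the mapping mechanics concrete, here is a small self-contained sketch of the same idea outside of a JSP container. The StringFns class and its static methods are hypothetical stand-ins for a real function library; only the prefix-keyed HashMap lookup and the reflection-based map building mirror what FNMapper does.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.HashMap;

// Hypothetical function library: every public static method becomes a "function".
class StringFns {
    public static String toUpper(String s) { return s.toUpperCase(); }
    public static int length(String s) { return s.length(); }
}

public class MapperSketch {
    private final HashMap<String, Method> functionMap = new HashMap<>();

    // Mirrors buildMap(): map each public static method under prefix:name.
    public MapperSketch(String prefix, Class<?> clazz) {
        for (Method m : clazz.getMethods()) {
            if (Modifier.isStatic(m.getModifiers())) {
                functionMap.put(prefix + ':' + m.getName(), m);
            }
        }
    }

    // Mirrors resolveFunction(): plain key lookup by prefix and local name.
    public Method resolveFunction(String prefix, String localName) {
        return functionMap.get(prefix + ':' + localName);
    }

    public static void main(String[] args) throws Exception {
        MapperSketch mapper = new MapperSketch("fn", StringFns.class);
        Method m = mapper.resolveFunction("fn", "toUpper");
        // Static methods are invoked with a null target object.
        System.out.println(m.invoke(null, "hello")); // prints HELLO
    }
}
```

As in the article's FNMapper, this breaks down when a class overloads a static method name, because the map key carries only the name, not the signature.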
Note that FNTest.jsp doesn't have to declare the JSTL function library with <%@taglib%> because the EL expressions typed by users are evaluated in the Java code of ELUtils with the EL API, and because our own FNMapper class does the JSTL function-method mapping. Using the EL in XML Configuration Files In "Creating JSP 2.0 Tag Files," I demonstrated how to update and query a database using JSP, JSTL, and SQL. In the remainder of this article, we'll use the JSP 2.0 EL API to improve some of the JSP examples of the previous article. Modifying the Web application descriptor (web.xml). In "Creating JSP 2.0 Tag Files," the name of a datasource (jdbc/dbtags) was provided in the web.xml configuration file and was obtained by a tag file fragment (init.tagf) with ${initParam.tags_db_dataSource}. We now have an additional initialization parameter named debug_mode, which indicates whether the application runs in a testing or a production environment: <context-param> <param-name>debug_mode</param-name> <param-value>true</param-value> </context-param> Suppose that, depending on the value of the debug_mode flag, we want to choose one of two databases that have identical structures. This choice could be hardcoded in our JSP pages, but it can also be specified in the web.xml file using the JSP EL: <context-param> <param-name>tags_db_dataSource</param-name> <param-value>jdbc/${ initParam.debug_mode ? "dbtags" : "production" }</param-value> </context-param> After this configuration change, ${initParam.tags_db_dataSource} returns an expression that must be evaluated with the EL API. Getting the datasource name. The XMLConfig.jsp page obtains the datasource name using the el:evaluate() function, which returns jdbc/dbtags or jdbc/production, depending on the value of debug_mode. 
The datasource name is stored into a JSP variable (evaluated_tags_db_dataSource) with <c:set>:

<c:set var="evaluated_tags_db_dataSource" value="${el:evaluate(initParam.tags_db_dataSource, 'java.lang.String', pageContext)}"/>

The XMLConfig.jsp page outputs the debug_mode parameter, the expression, and its value with the following code:

debug_mode: ${initParam.debug_mode} <br>
expression: ${initParam.tags_db_dataSource} <br>
value: ${evaluated_tags_db_dataSource}

Here is the resulting output:

debug_mode: true
expression: jdbc/${ initParam.debug_mode ? "dbtags" : "production" }
value: jdbc/dbtags

The init.tagf file obtains the datasource name like XMLConfig.jsp, but instead of outputting some information, init.tagf creates a javax.sql.DataSource variable with the <sql:setDataSource> tag of JSTL:

<sql:setDataSource dataSource="${evaluated_tags_db_dataSource}"/>

The init.tagf fragment is included within the select.tag file, which is presented in the next section of this article.

EL Optimizations with Parsed Expressions

Before evaluating an expression, the JSP container has to parse it in order to verify its syntax and to obtain information about the used variables, functions, operators, and so on. This process may involve many string operations and may create lots of temporary objects, which have to be deleted from memory later by the JVM's garbage collector. When using an expression within a loop of a JSP page, tag file, or tag handler, it makes sense to parse the expression only once, outside of the loop, and evaluate the parsed expression within the loop as many times as necessary. The following instructions describe how to accomplish this optimization with the EL API.

Step 1: Get an ExpressionEvaluator. Call the getExpressionEvaluator() method of a JspContext object.

Step 2: Parse the Expression. Invoke the parseExpression() method of the expression evaluator, passing the following parameters: the expression (as a String), its expected type (as a Class), and an optional function mapper.
The parseExpression() method returns an Expression object, whose evaluate() method is called at the fourth step.

Step 3: Get a VariableResolver. Use the getVariableResolver() method of the JSP context to get the object that must be passed to evaluate() at the next step.

Step 4: Evaluate the Expression. Call the evaluate() method of the Expression object. The third and fourth steps can be repeated every time you want to evaluate the parsed expression.

Resources. Use the following resources to test the examples and to learn more about the JSP 2.0 EL API.

Download the source code. The jspelapi_src.zip file contains the examples of this article: the jsputils directory groups the Java classes, and jspelapi is a Java Web application. In order to run the examples, you need J2SE, a J2EE 1.4 application server, JSTL 1.1, and a database server.

Read "Creating JSP 2.0 Tag Files." Andrei Cioroianu shows how to create and use tag files and how to transform existing page fragments into tag files. He uses JSTL and several advanced JSP features to build tag files that update and query a database.

Download OC4J 10g. OC4J 10g fully implements the J2EE 1.4 specs, which include JSP 2.0. You may use OC4J 10g (10.0.3) to test the examples. It works with all major database servers, including, of course, Oracle Database. Don't forget to configure the dbtags datasource and make sure that the proper database driver is available.

Download JSTL 1.1. Before deploying the jspelapi Web application, download JSTL and copy the jstl.jar and standard.jar files into the jspelapi/WEB-INF/lib directory.

Read the JSP 2.0 specification. The JSP 2.0 specification has an entire chapter dedicated to the EL API ("Part II: Chapter JSP.14 Expression Language API"). The expression language is described in another chapter ("Part I: Chapter JSP.2 Expression Language").

JSP samples and tutorials. JSP Sample Code Tutorial: Understanding the New Features of JSP 2.0
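The payoff of the four steps above is the classic parse-once, evaluate-many pattern. Since the JSP EL API classes need a container to run, the sketch below illustrates the same pattern with a tiny hypothetical template evaluator of my own (not the EL API): the "expression" is parsed into parts a single time, and only the cheap evaluation step is repeated inside the loop.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A stand-in for a parsed Expression: literal text split around ${name} variables.
class ParsedTemplate {
    private final List<String> literals = new ArrayList<>();
    private final List<String> variables = new ArrayList<>();

    // The expensive step, done once: like ExpressionEvaluator.parseExpression().
    ParsedTemplate(String template) {
        int pos = 0;
        while (true) {
            int start = template.indexOf("${", pos);
            if (start < 0) break;
            int end = template.indexOf('}', start);
            literals.add(template.substring(pos, start));
            variables.add(template.substring(start + 2, end));
            pos = end + 1;
        }
        literals.add(template.substring(pos)); // trailing literal text
    }

    // The cheap step, repeated per row: like Expression.evaluate().
    String evaluate(Map<String, Object> scope) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < variables.size(); i++) {
            sb.append(literals.get(i)).append(scope.get(variables.get(i)));
        }
        return sb.append(literals.get(literals.size() - 1)).toString();
    }
}

public class ParseOnceDemo {
    public static void main(String[] args) {
        // Parse once, outside the loop...
        ParsedTemplate row = new ParsedTemplate("<td>${name}</td><td>${email}</td>");
        // ...then evaluate many times, once per "row".
        for (String user : new String[] {"ann", "bob"}) {
            Map<String, Object> scope = new HashMap<>();
            scope.put("name", user);
            scope.put("email", user + "@example.com");
            System.out.println(row.evaluate(scope));
        }
    }
}
```

The structure is the same as the article's PEWrapper: the constructor plays the role of steps 1 and 2, and evaluate() plays the role of steps 3 and 4.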
Note that at different moments, the same expression may have different values, depending on the values of its variables.

Wrapping parsed expressions. Our PEWrapper class has two fields that maintain references to a parsed Expression and its JspContext. This allows us to keep the Expression object together with the JspContext instance that is needed later for obtaining the variable resolver. The getInstance() method performs the first and second steps described above, returning a PEWrapper object. The nonstatic evaluate() method executes the third and fourth steps, returning the value of the expression. We also need a static evaluate() method that can be mapped to an EL function that is named evaluate3() in our el.tld file. The getInstance() method is mapped to an EL function too, named getPEWrapper() in el.tld.

Using parsed expressions. "Creating JSP 2.0 Tag Files" presents a JSP example (select.jsp) that queries a database using a tag file (select.tag). The tag file builds a SQL statement and uses the <sql:query> tag of JSTL to execute it. Then, the tag file iterates over the rows of the result set with <c:forEach>, executing the tag body with <jsp:doBody>. Therefore, the custom tag that invokes the tag file (<db:select>) performs a loop, and the JSP code between <db:select> and </db:select> is executed at each iteration. In our example, this code outputs the rows of an HTML table, parsing and evaluating the EL expressions again and again:

<db:select ...>
<tr>
<td> ${row.userID} </td>
<td> ${row.name} </td>
<td> ${row.email} </td>
</tr>
</db:select>

We could view the body of <db:select> as a single JSP expression, which can be stored in a JSP variable (rowExpr) with <c:set>. In order to avoid the evaluation of the expression by the JSP container, each $ character is escaped with a backslash.
Therefore, rowExpr keeps the text of the expression and not its value:

<c:set var="rowExpr">
<tr>
<td> \${row.userID} </td>
<td> \${row.name} </td>
<td> \${row.email} </td>
</tr>
</c:set>

The expression is parsed using our PEWrapper class, whose instance is returned by el:getPEWrapper(). A reference to the PEWrapper object is kept in a JSP variable named parsedRowExpr:

<c:set var="parsedRowExpr" value="${el:getPEWrapper(rowExpr, 'java.lang.String', pageContext, null)}"/>

Now, we have a parsed expression that can be evaluated much faster with our el:evaluate3() function that invokes the static evaluate() method of PEWrapper:

<db:select ...>
${el:evaluate3(parsedRowExpr)}
</db:select>

Optimizations usually mean more programming, and in many cases, the code becomes less readable. However, using parsed expressions, the JSP container doesn't have to repeat the same operations again and again. Most JSP expressions don't need optimizations, but if a loop has many iterations and uses complex EL expressions, you should consider optimizing the code. Of course, the JSP container itself might cache the parsed expressions, but you can't be sure that this happens unless it's documented. The source code of the PEWrapper class is followed by the select.tag file and by the optimized version of the select.jsp page.

Conclusion

The JSP 2.0 expression language can improve the maintainability of your Web pages considerably. With the EL API, you can integrate the same language with other technologies, such as those based on XML. The EL has only one disadvantage: JSP expressions are slower than compiled Java code, but in most cases the EL overhead is not significant. When it is, you can use the EL API to optimize the expression evaluation process.
http://www.oracle.com/technology/pub/articles/cioroianu_jspapi.html
Tools that honor the NoData environment setting will only process rasters where the NoData is valid. Use this environment when the NoData value from your input needs to be transferred to your output raster. This setting allows you to specify which value to designate as the NoData value in your output.

Usage notes

- When using the ArcGIS Spatial Analyst extension, NONE is the preferred mapping method to use. This produces the same behavior as previous versions of ArcGIS.
- PROMOTION is the safest mapping method, since the NoData value will never be lost. However, promoting the pixel depth of your raster will create an output that is twice as large in size.

Dialog syntax

- NoData—Choose which NoData mapping method to use.
- NONE—There will not be any NoData value rules in place. If your input and output have the same value range, NoData will be transferred over without any changes. However, if your value range changes, there will be no value for NoData in your output. This is the default method.
- MAXIMUM—The maximum value in the output data range will be used as your NoData value.
- MINIMUM—The minimum value in the output data range will be used as your NoData value.
- MAP_UP—The lowest value in the data range becomes the NoData value, and pixels that held that value are promoted by one. For example, with 8-bit unsigned integer data, the NoData value will be 0, the value of 0 will become 1, and the rest of the values remain the same.
- MAP_DOWN—The highest value in the data range becomes the NoData value, and pixels that held that value are demoted by one. For example, with 8-bit unsigned integer data, the NoData value will be 255, the value of 255 will become 254, and the rest of the values remain the same.
- PROMOTION—The pixel depth of the output is promoted to the next larger type, and the value just outside the input's data range becomes the NoData value; for 8-bit unsigned integer data, the value 256 becomes the NoData value. If the NoData value specified is within the input's data range, the pixel depth will not be promoted for the output.

Scripting syntax

arcpy.env.nodata = "mapping_method"

import arcpy
# Set the nodata mapping method environment to promote the value.
arcpy.env.nodata = "PROMOTION"
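As a rough illustration of the mapping methods described above, the following standalone Python sketch picks an output NoData value for an 8-bit unsigned raster under each rule. The function name and the simplified rules are assumptions for illustration only; this is not the arcpy API.

```python
# Simplified sketch of NoData mapping for 8-bit unsigned data (valid range 0-255).
U8_MIN, U8_MAX = 0, 255

def choose_nodata(method):
    """Pick the output NoData value for a hypothetical 8-bit unsigned raster."""
    if method == "NONE":
        return None           # no NoData rule is applied
    if method == "MAXIMUM":
        return U8_MAX         # 255 becomes the NoData value
    if method == "MINIMUM":
        return U8_MIN         # 0 becomes the NoData value
    if method == "PROMOTION":
        # Promote 8-bit to 16-bit so a value outside the old range (256)
        # can serve as NoData without colliding with real pixel values.
        return U8_MAX + 1
    raise ValueError("unknown mapping method: " + method)

print(choose_nodata("MAXIMUM"))    # 255
print(choose_nodata("PROMOTION"))  # 256
```

The MAP_UP and MAP_DOWN rules would additionally rewrite the colliding pixel values (0 to 1, or 255 to 254), which is why PROMOTION is the only method guaranteed never to lose information.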
http://desktop.arcgis.com/en/arcmap/latest/tools/environments/nodata.htm
Author: Ulrich Schoebel <[email protected]>
Tcl-Version: 8.5
State: Withdrawn
Type: Project
Vote: Pending
Created: 23-Jul-2003
Post-History:
Keywords: namespace, command lookup, search path

Abstract

This TIP adds a Tcl variable to define the search path for command name lookup across namespaces.

Rationale

Command names (as well as variable names) are currently looked up first in the current namespace, then, if not found, in the global namespace. It is often very useful to hide the commands defined in a subnamespace from being visible from upper namespaces by info commands namespace::*. On the other hand, programmers want to use these commands without having to type a qualified name. Example:

    namespace eval ns1 {
      proc p1 {} {
        puts "[p2]"
      }
    }
    namespace eval ns1::ns2 {
      proc p2 {} {
        return hello
      }
    }

Evaluation of ns1::p1 would currently lead to an error, because p2 could not be found. Even worse, if a procedure p2 exists in the global namespace, the wrong procedure would be evaluated.

Proposal

Add a variable tcl_namespacePath or, to avoid confusion with variables containing file system paths, tcl_namespaceSearch, that contains a list of namespaces to be searched in that order. The default value would be [list [namespace current] ::]. In the above example tcl_namespacePath would be set to [list [namespace current] [namespace current]::ns2]. p2 would be found and not unintentionally be substituted by ::p2.

Alternative

For ease of implementation and, maybe, for programmers' convenience it might be useful to always prepend the contents of this variable with [namespace current]. The programmer expects a certain "automatism" for this component of the search path. Then the default value would be ::.

Implementation

To be done when this TIP is accepted.

Notice of Withdrawal

This TIP was Withdrawn by the TIP Editor following discussion on the tcl-core mailing list. The following is a summary of reasons for withdrawal: Insufficiently subtle.
52 will break any code that assumes the current behaviour (and you can bet someone will have that assumption) and 142 doesn't let two namespaces have different search paths (unless the variable is always interpreted locally, which just creates bizarre variable name magic.) This document is placed in the public domain.
https://core.tcl-lang.org/tips/doc/trunk/tip/142.md
Nonsense (Score:2, Interesting)
Do post offices need their own TLD? Come on! You can tell who's the driving force behind today's Internet standards

.mov TLD for movies (Score:4, Interesting)

Re:Only on /. way OT, but... (Score:3, Interesting)
Yes, I must be new here... Get an account, people rarely comment to the Anonymous Coward. I personally believe that there should be a delay between when an article is posted and when ppl can start flooding posts. What I see is that there are about 10 or so threads at the top of each post. garcia is usually the first or second And then there are many small threads below the "hot" ones. Maybe we need a checkbox when submitting a post "Yes, I RTFA" or "No, I didn't RTFA", and a comment modifier for those (not) reading the articles, and a mod -1 didn't RTFA because the content is obviously there. Just my thoughts.

And what does this have to do with ICANN's job? (Score:5, Interesting)
One has to have a really crazed imagination or warped sense of humor to believe that ICANN's criteria for selecting new Top Level Domains has anything whatsoever to do with technology or the ability of the net to deliver packets or respond quickly and accurately to DNS queries. ICANN has become little more than a mouthpiece for certain well-heeled industrial segments; the public interest, as well as the public itself, has been ejected from ICANN's policymaking and policies. ICANN is fighting to keep its job from going to the ITU. ICANN's arguments are pretty weak when one considers that ICANN is not doing the job that it was constructed to do but is instead simply the willing handmaiden of small, short-sighted, self-interested groups.

Re:Wow, they did it (Score:5, Interesting)
insightful.post interesting.post funny.post flamebait.post and so on.

Re:seriously. (Score:1, Interesting)
Yes, the truth can be complex...

Re:Value of non .com/net/org/national TLDs?
(Score:2, Interesting)

Re:Seconded (Score:3, Interesting)
I haven't seen a false positive yet.

TLDs Considered Harmful (Score:3, Interesting)
So, if TLDs are not being respected, why have them at all? Some have tried to tell me that it organizes the namespace hierarchically, thus distributing the load. I don't think it helps a lot, if most people go for the My proposal? Change the system so that top level domains can be directly registered. E.g. Google would get just Google, with no And one more pet peeve of mine: we could add support for IP-IP encapsulation [faqs.org]. That way, if your server is hosted behind a NAT box, you can just instruct clients to route the packet to your internal IP via the NAT box. Of course, the client and the NAT box would have to support it as well...

Re:Right. (Score:2, Interesting)

Re:TLDs are BS (Score:4, Interesting)
Regarding TLDs, I think the distinction you may be groping for is that between a naming authority and a subject area. Countries are quite good at being authorities, but non-governmental authorities are possible too. ICANN comes to mind, and it's possible to imagine the UN, ISO etc. in this role, as well as new amateur and commercial groups yet to be identified. The bottom line is that the world will never agree which site should resolve to, let alone or. The solution is not more divisions by subject but more groups making the subjective divisions.

Re:This is bullpucky. (Score:2, Interesting)
They can if they want but they don't have to. There's no reason why a For most organizations dealing in multiple countries, the cost of a website per country is insignificant compared to the other regulatory (and likely marketing) costs per country. For others such as wikipedia, if that would be difficult then pick one country and register a domain there. What's the problem? The internet is supposed to be free Who supposes it to be free?
If by 'free' they mean unregulated or beyond the reach of governments then their supposition is wrong.

Re:TLDs are BS (Score:3, Interesting)
It's nice to be able to print "mybusiness.com" on something and have people know it's a website. "" CAN look ok, but for a lot of things, design-wise, it's nicer to drop the 'technical' stuff. It's also easier to tell people things.. the "dot com" tells them it's a website. As an example, "Look us up, mybusiness dot com" vs "Look us up, AOL keyword mybusiness". (or "web address")

Re:Wow, they did it (Score:3, Interesting)

They already do that (Score:3, Interesting)
Everyone is too used to doing it the old way, though, so I doubt it would ever happen.

More silliness (Score:2, Interesting)

List of TLDs... (Score:3, Interesting)
- Currently active TLDs (be it cc, g, s or otherwise)
- Deprecated TLDs
- Proposed TLDs
? I've got one myself ( - don't complain about non-validation, it's only for quick data-reading ), which I already see I need to edit some ( thanks, wikipedia ) - but can't quite seem to find any other comprehensive list in existence to bring it up to current affairs. Oh, and any blatant errors in the xml's data ? Feel free to point them out

Re:Wow, they did it (Score:3, Interesting)
When I heard of .info and .biz, in fact way back when I first heard of .cc, I wondered why the extension was "fixed" and why they didn't just open it up to any random string being able to be mapped? The answer, as far as I understand it, is the almighty dollar. They'll make a ton more money slowly releasing new TLDs than they would if they let anyone take whatever string they wanted as their domain name. Like, "mcdonalds" could be a domain, mapping to 164.109.145.147; or "me.and.my.shadow" could map to 99.99.99.99; etc. I know I probably just violated some RFCs up above, but why such a big honking deal?
https://slashdot.org/story/04/10/28/1911255/two-new-tlds-near-approval/interesting-comments
Here, we implement a generic queue in Java using a linked list. A queue is a container to which items are added and from which they are removed following a first-in-first-out strategy; therefore, an item added to the container first will be removed first. A queue usually has two ends, one for adding items and another for removing them; these ends are tracked by two pointers called front and rear. The front pointer points to the end where items are removed from, while the rear points to the end where items are added to the queue. In the linked implementation of a queue we will keep front and rear pointing at the beginning and the end of the queue. A queue by definition supports two methods: enqueue (also called insert) for adding objects to the queue, and dequeue (also called delete) for removing an item from the queue. We plan to implement the following methods as part of our linked list implementation of a queue in Java.

insert(): Adds an item to the queue.
delete(): Removes an item from the queue.
size(): Returns the number of items the queue contains right now.
isEmpty(): Returns true if the queue is empty, false otherwise.

In order to make the end user independent from the implementation details of the queue, whether it is implemented using a linked list or an array, we define an interface Queue as follows. A variable of the Queue interface type can be assigned any object that implements this interface, no matter whether the underlying implementation is linked-list based or array based. Create a text file Queue.java, insert the following code in it and save it.
/* Queue.java */
public interface Queue <Item> {
    Item delete();          // removes an item from the front of the queue
    void insert(Item item); // adds an item to the rear end of the queue
    boolean isEmpty();      // returns true if queue is empty, false otherwise
    int size();             // returns the number of items in the queue right now
}

If you look at the above definition of the Queue interface, you will see that the interface is passed a parameter Item enclosed in angular brackets. This feature facilitates the generic use of the Queue data structure. It allows programmers to pass, at run time, the object type the queue will store. A special feature of Java called generics brings us this capability. Now create a file LinkedQueue.java and insert the following code for class LinkedQueue that implements the Queue interface as follows:

/* LinkedQueue.java */
public class LinkedQueue <Item> implements Queue <Item> {
    private Node front; // points to the front of the queue
    private Node rear;  // points to the rear of the queue
    private int size;   // number of items in the queue

    // helper linked list node class
    private class Node {
        Item item;
        Node next;
    }

    public boolean isEmpty() {
        return (size == 0);
    }

    public int size() {
        return size;
    }

    // adds an item to the rear end of the queue
    public void insert(Item item) {
        Node node = new Node();
        node.item = item;
        node.next = null;
        if (isEmpty())
            front = node;
        else
            rear.next = node;
        rear = node;
        size++;
    }

    // removes an item from the front of the queue
    public Item delete() {
        if (isEmpty())
            throw new RuntimeException("Queue is empty");
        Item item = front.item;
        front = front.next;
        if (front == null)
            rear = null;
        size--;
        return item;
    }
}

The above implementation of the LinkedQueue class takes a type parameter Item which would be replaced with a concrete type by the client code when the LinkedQueue object is created, for example:

Queue <Integer> q = new LinkedQueue<Integer>();

The above code completes the queue implementation using a linked list, but we will definitely be further interested in implementing an iterator for the newly created LinkedQueue type, so that we can iterate through the items currently stored in the data structure. The following section implements an iterator for the LinkedQueue class and a driver to run the LinkedQueue class code.

By definition of the queue data structure, you would use it as a container where items are added at one end and removed from the other end. But if you would like to make the Queue a usual collection where you can iterate through the queue items, you should implement the Iterable and Iterator interfaces as part of your LinkedQueue class. To make the Queue data structure iterable, we first extend the Queue<Item> interface with Iterable<Item>. Interface Iterable is already defined as part of java.lang, as java.lang.Iterable.
As soon as we extend the Queue<Item> interface with Iterable<Item>, we have to add a new method iterator() to class LinkedQueue, along with an inner iterator class LinkedQueueIterator. To complete the code, first modify the existing signature of interface Queue in Queue.java as follows; no change is required in the interface's body.

public interface Queue <Item> extends Iterable <Item>

After modifying the Queue interface, we need to modify the LinkedQueue class as follows:

/* LinkedQueue.java */
import java.lang.Iterable;
import java.util.*;

public class LinkedQueue <Item> implements Queue <Item> {
    private Node front;
    private Node rear;
    private int size;

    private class Node {
        Item item;
        Node next;
    }

    // isEmpty(), size(), insert() and delete() remain the same as before
    ...

    // Iterator for traversing queue items
    public Iterator<Item> iterator() {
        return new LinkedQueueIterator();
    }

    // inner class to implement iterator interface
    private class LinkedQueueIterator implements Iterator <Item> {
        private int i = size;
        private Node first = front; // the first node

        public boolean hasNext() {
            return (i > 0);
        }

        public Item next() {
            Item item = first.item;
            first = first.next;
            i--;
            return item;
        }

        public void remove() {
            // not needed
        }
    }
}

To test the above implementation, define a driver class as follows:

/* LinkedQueueDemo.java */
public class LinkedQueueDemo {
    public static void main (String a[]) {
        Queue <Integer> q = new LinkedQueue<Integer>();
        q.insert(20);
        q.insert(30);
        q.insert(40);
        q.insert(50);
        q.insert(60);
        q.insert(70);
        System.out.println("Delete an item from queue: " + q.delete());
        System.out.println("Size of the queue: " + q.size());
        // iterate through queue
        System.out.println("Queue contains following items till this moment:");
        for (Integer i : q)
            System.out.println(i);
    }
}

OUTPUT
======
D:\JavaPrograms>javac LinkedQueueDemo.java
D:\JavaPrograms>java LinkedQueueDemo
Delete an item from queue: 20
Size of the queue: 5
Queue contains following items till this moment:
30
40
50
60
70
We implemented generic queue in Java using linked list to create a queue of any user defined type. In linked list implementation of queue memory is used efficiently and no resize operations are required as they are required in array implementation of queue.
http://cs-fundamentals.com/data-structures/implement-queue-using-linked-list-in-java.php
Bitwise operators work on bits; logical operators evaluate boolean expressions. As long as the expressions return bool, can I use bitwise operators instead of logical ones?

#include <iostream>

int main() {
    int age;
    std::cin >> age;

    if ((age < 0) | (age > 100))  // eg: -50: 1 | 0 = 1
        std::cout << "Invalid age!" << std::endl;

    // if ((age < 0) || (age > 100))
    //     std::cout << "Invalid age!" << std::endl;

    return 0;
}

One possible answer is: optimization. For example:

if ((age < 0) | (age > 100))

Let's assume that age = -5: there is no need to evaluate (age > 100), since the first condition is already satisfied (-5 < 0). However, the previous code will still evaluate the (age > 100) expression, which is not necessary. With:

if ((age < 0) || (age > 100))

only the first part will be evaluated. The most important answer is to avoid undefined behavior and errors. You may imagine this code:

int* int_ptr = nullptr;
if ((int_ptr != nullptr) & (*int_ptr == 5))

This code contains undefined behaviour. However, if you replace the & with &&, no undefined behaviour exists anymore.
https://codedump.io/share/0pk2XhTfQBLT/1/can-i-use-bitwise-operators-instead-of-logical-ones
In short, yes, I would agree that f2() is quicker than f1() but not for the same reason. :-) I would change your description from saying that "the combined namespaces are searched" to "the current namespace is searched, and then the parent namespace is searched, and so on until the reference is satisfied". Think of the namespaces as nested like the blocks of code. In both print statements, the functions f1 and f2 are searching their current namespace but only during the f2() block does the variable resolution have to look outside to the parent namespace, so technically the reference lookup is quicker in the f1() block.

The reason I say f2() would be quicker is because of the repeated creating and deleting of the fmt1 variable in f1(). Without testing, I think it's safe to say that namespace searching, especially in this small example, would be much quicker than the work needed to create & destroy the formatting string.

-- Jon Miller

On Wed, Apr 20, 2011 at 6:48 PM, Mark Erbaugh <mark at microenh.com> wrote:
> Hello,
> I'm trying to get my head around what happens when in a Python program
> Consider a trivial example
>
> fmt2 = "%d"
>
> def f1(x):
>     fmt1 = "%d"
>     return fmt1 % x
>
> def f2(x):
>     return fmt2 % x
>
> I believe both functions produce the same results. The question is how
> Python goes about it.
> Here's my understanding of what happens:
> When the module containing this code is imported, module namespace entries
> are created for f1, f2 and fmt2.
> When f1 is called, a f1 namespace entry for fmt1 is created then the
> combined namespaces are searched for fmt1 finding it in the f1 namespace.
> When f1 returns the f1 namespace is destroyed.
> When f2 is called, the combined namespaces are searched for fmt2, finding it
> in the module namespace.
> Is it correct that every time f1 is called, the fmt1 = "%d" is processed to
> add fmt1 to the f1 namespace, but fmt2 is only added (to the module
> namespace) once?
> If so, it seems that while f1 results in less pollution of the module > namespace, it would be slightly less efficient. Is this correct? > Would the same logic apply if f1 and f2 were methods in a class? > Thanks, > Mark > > > _______________________________________________ > CentralOH mailing list > CentralOH at python.org > > >
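A quick way to put the thread's two explanations to the test is simply to time both versions (a sketch added for illustration, not part of the original thread; absolute numbers will vary by machine and interpreter, and in CPython the local literal is a cached constant and local lookups are fast slot lookups, so the gap may be smaller than either explanation suggests):

```python
import timeit

fmt2 = "%d"

def f1(x):
    fmt1 = "%d"        # bound locally on every call
    return fmt1 % x

def f2(x):
    return fmt2 % x    # name lookup reaches out to the module namespace

# both produce the same result
assert f1(7) == f2(7) == "7"

t1 = timeit.timeit(lambda: f1(42), number=100000)
t2 = timeit.timeit(lambda: f2(42), number=100000)
print("f1: %.4fs  f2: %.4fs" % (t1, t2))
```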
https://mail.python.org/pipermail/centraloh/2011-April/000807.html
Nesting
4:07 with Dale Sande

With Maps there is a way to nest these concepts and retrieve the data as well. You'll be working in Sassmeister for this course.

- 0:00 When looking at the code that I put in here there is another pattern emerging.
- 0:03 One that basically kind of lends itself to a concept of nesting lists.
- 0:09 So I have the namespace of input right,
- 0:11 I also have the namespace of disable that I can possibly look at.
- 0:15 So I mean when you think about it, if we look at what is happening.
- 0:21 You know, we have a variable.
- 0:25 And then there's parens, and then there is a key.
- 0:35 And then inside here there is another key and a value.
- 0:42 All right. And then there is a key and a value.
- 0:44 And then we have another key, and
- 0:48 then we have key value.
- 0:55 So this is very JSON like in how this can work from this perspective,
- 0:59 is that you know,
- 1:00 a key is gonna have a value and the value of a key could be another key and value.
- 1:06 So, what can we learn from this?
- 1:09 So if I come up to this input text here,
- 1:13 now keep in mind I'm going to, I'm gonna switch things up here a little bit.
- 1:19 I'm going to open the control panel and I'm actually going to go over to LibSass,
- 1:25 and one of the things I will put in here is Sass-List-Maps.
- 1:30 This is something that was engineered by a developer named Lou Nelson.
- 1:34 So List-Maps is not yet available in LibSass, but
- 1:40 using this Sass-List-Maps, as you can see here this is actually,
- 1:44 it makes it work inside of LibSass, if you're using that library.
- 1:48 But what this additional library gives us is something very interesting, and
- 1:53 which is basically called the map-get-z function.
- 1:57 So the map-get-z function actually allows us - 2:01 to traverse a little bit deeper into this JSON type thing. - 2:05 So what I'm gonna do here is I'm going to quickly update this input. - 2:11 So now, what I did is that I have the namespace of input, but - 2:15 I also have the namespace of disable. - 2:18 And then the key of background, border, and text. - 2:21 Okay. - 2:22 Which is for sake I'm going to put that back to five. - 2:25 But now what we have out here is interesting because our output is - 2:28 all set to nulls, and the reason that's set to null is because - 2:31 the map-get input disabled-background, this key no longer exists. - 2:36 And how I need to update this is I need to do the -z function. - 2:43 And I'm gonna say input and I'm going to replace this hyphen with a comma. - 2:49 And now I have a value over on the right hand side. - 2:52 Same thing if I come down to here, - 2:58 -z input, [SOUND] comma, input map -z. - 3:06 And that gives us our outputs. - 3:07 So, this is pretty cool. - 3:09 And I really like using this library when I'm working on LibSass files because it - 3:13 allows me to, not only define a single namespace for - 3:17 the variable, but it also allows me to nest a namespace when I have common things - 3:22 happening like disabled background, disabled border, disabled text. - 3:27 Did you hear that CSS has variables? - 3:31 So what? - 3:32 Have you seen the syntax? - 3:33 And besides, can CSS variables set defaults? - 3:37 Or actually traverse a series of objects in a list? - 3:41 I didn't think so. - 3:43 Sass variables have come a long way since their simple beginnings. - 3:47 Bang default, bang global, and List-Maps are amazing weapons in the war on theming. - 3:53 I mean really, there is actually a logical process happening here. - 3:58 As we begin building more and more complex enterprise frameworks, - 4:03 these tools are essential to getting work done.
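To make the structure from the video concrete, a nested map and a map-get-z lookup look roughly like this (an illustrative sketch — the key names and selector are invented, not taken from the video, and map-get-z comes from the Sass-List-Maps library mentioned above):

```scss
$input: (
  disable: (
    background: #eee,
    border: #ccc,
    text: #999
  )
);

input[disabled] {
  // map-get-z walks the nested keys in order: map, outer key, inner key
  background: map-get-z($input, disable, background);
  border-color: map-get-z($input, disable, border);
  color: map-get-z($input, disable, text);
}
```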
https://teamtreehouse.com/library/advanced-sass/advanced-variables-mixins-functions-and-placeholders/nesting
Basetypes, Collections, Diagnostics, IO, RegEx...

.NET Framework 3.5 and Visual Studio 2008 have officially shipped! Soma has the announcement on his blog and the downloads are available here. There are over 250 new features in .NET 3.5 and Visual Studio 2008. Here's a list of new BCL features available in .NET 3.5:

Also, be sure to check out Jack Gudenkauf's blog on What's new in the .NET Framework 3.5. He mentions some additional new CLR features (GC, Security, and ThreadPool) that you may find interesting.

- System.DateTimeOffset: Useless without an IDateTime. See links for further details.
- System.TimeZoneInfo: Useless, as I cannot assume that all clients will have installed the necessary timezone updates. Must 100% of the time import TZ. See links for further details.
- HashSet<T>: Useless without ISet<T>. I cannot integrate my TreeSet<T>, OrderSet<T> etc. into the framework. See links for further details.

This release is a huge disappointment. But hardly a surprise, as Microsoft ignored user feedback on these issues for over a year.

I can definitely relate to the issues that the above posters bring up. But since the release is complete, we will all be stuck with hacks for the immediate future. Looking at the documentation for HashSet<T>, it indicates that it implements both IEnumerable<T> and IEnumerable. Since IEnumerable<T> extends IEnumerable, why must the type again implement IEnumerable?

New namespaces (just look at the System.AddIn namespace).

HashSet<T> really needs an ISet<T>. When will we have a Number superclass (or INumber) for grouping all the number classes? (and not something like valuetype)

I thought the source code would be released together with .NET 3.5/VS2008? Any idea yet on when it will be available?
Thanks, Nick

I am in the process of porting to VS2008. In a large project, I have a variety of sets, some hash sets but mostly others. Without an ISet<T> interface, there is no way to integrate them into the new framework. This is a disaster.
The framework *really* needs an ISet<T>.
http://blogs.msdn.com/bclteam/archive/2007/11/19/net-framework-3-5-now-available-justin-van-patten.aspx
not getting past object_new

A C external compiles and uses a framework (libsamplerate.dylib). Here's the crash report and the _new routine. Confused!? Maybe I don't understand enough about the crash report??? Or how objects are created in Max?

void *resamp_new(t_symbol *s, long outchns)
{
    int i, c, error;
    t_resamp *x = (t_resamp *)newobject(resamp_class);
    dsp_setup((t_pxobject *)x, 1);
    //intin((t_object *)x,1);
    outchns = 1;
    x->outchns = outchns;
    for (i = 0; i < outchns; i++)
        outlet_new((t_object *)x, "signal");
    x->l_sym = s;
    x->sr = sys_getsr();
    x->r_sr = 1.0 / x->sr;
    x->ptr = 0;
    x->pos = 0;
    x->ratio = 0.8;
    x->numout = 0;
    x->frames = 0;
    x->loop = x->stop = 0;
    x->src_state = src_new(1, outchns, &error);
    x->out_buf = (float **)malloc(2 * sizeof(float *));
    for (c = 0; c < 2; c++)
        x->out_buf[c] = (float *)malloc((200 * 30.0 + 100) * sizeof(float));
    x->in_buf = (float *)malloc(200 * sizeof(float *));
    return (x);
}

Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000030
Crashed Thread: 0

Thread 0 Crashed:
0 com.cycling74.MaxMSP 0x000b7190 class_obexoffset_get + 6
1 com.cycling74.MaxMSP 0x000b7fe4 object_obex_get + 26
2 com.cycling74.MaxMSP 0x000b8041 object_obex_lookup + 39
3 com.cycling74.MaxAPI 0x00f0fc64 object_obex_lookup + 45
4 com.cycling74.MaxAudioAPI 0x1520cce4 z_dsp_setup + 68
5 com.cycling74.resamp~ 0x00da8d9b resamp_new + 51 (resamp~.c:300)
In max 5 I think it is considered preferable to use obex routines, although the old ones still work. The two methods don’t mix, so without seeing main it’s impossible to tell if you’ve matched the two parts correctly. You might also want to check: 1 – That the first structure in your object is a t_pxobject. 2 – That any attribute stuff is correct (if there is any – in which case you’ll need the obex style routines) 3 – That you have properly setup (and possibly registered) your class Beyond that I don’t think we can help without the main routine and object structure. HTH A.
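As a rough sketch of the obex-style setup described above (written from memory of the Max 5 SDK — argument lists should be checked against the SDK headers, and resamp_free / resamp_dsp are assumed to be defined elsewhere in the external):

```c
static t_class *resamp_class;

int main(void)
{
    t_class *c = class_new("resamp~",
                           (method)resamp_new, (method)resamp_free,
                           (long)sizeof(t_resamp), 0L, A_SYM, A_DEFLONG, 0);
    class_addmethod(c, (method)resamp_dsp, "dsp", A_CANT, 0);
    class_dspinit(c);                 /* gives the class signal-object support */
    class_register(CLASS_BOX, c);
    resamp_class = c;
    return 0;
}

/* and in resamp_new, allocate with object_alloc instead of newobject: */
/* t_resamp *x = (t_resamp *)object_alloc(resamp_class); */
```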
http://cycling74.com/forums/topic/not-getting-past-object_new/
Paths and Geometries

So far, you've looked at a number of classes that derive from Shape, including Rectangle, Ellipse, Line, Polygon, and Polyline. However, there's one Shape-derived class that you haven't considered yet, and it's the most powerful by far. The Path class has the ability to encompass any simple shape, groups of shapes, and more complex ingredients such as curves. The Path class includes a single property, Data, that accepts a Geometry object that defines the shape (or shapes) the path includes. You can't create a Geometry object directly because it's a MustInherit class. Instead, you need to use one of the derived classes listed in Table 8-3. All of these classes are found in the System.Windows.Media namespace.
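As a small, hypothetical illustration of the pattern this excerpt describes (the class and property names follow the Silverlight API; the values are invented), a Path whose Data is set to one of the Geometry-derived classes looks like this:

```xml
<Path Fill="LightBlue" Stroke="Blue" StrokeThickness="2">
  <Path.Data>
    <EllipseGeometry Center="60,40" RadiusX="50" RadiusY="30" />
  </Path.Data>
</Path>
```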
https://www.oreilly.com/library/view/pro-silverlight-5/9781430235187/s001-183.html
In the world of computer vision, image filtering is used to modify images. These modifications essentially allow you to clarify an image in order to get the information you want. This could involve anything from extracting edges from an image, blurring it, or removing unwanted objects.

There are, of course, lots of reasons why you might want to use image filtering to modify an image. For example, taking a picture in sunlight or darkness will affect an image's clarity — you can use image filters to modify the image to get what you want from it. Similarly, you might have a blurred or 'noisy' image that needs clarification and focus.

Let's use an example to see how to do image filtering in OpenCV. This image filtering tutorial is an extract from Practical Computer Vision.

Here's an example with considerable salt and pepper noise. This occurs when there is a disturbance in the quality of the signal that's used to generate the image. The image above can be easily generated using OpenCV as follows:

import cv2
import numpy as np

# initialize noise image with zeros
noise = np.zeros((400, 600))

# fill the image with random numbers in given range
cv2.randu(noise, 0, 256)

Let's add weighted noise to a grayscale image (on the left) so the resulting image will look like the one on the right:

# add noise to existing image
noisy_gray = gray + np.array(0.2*noise, dtype=np.int)

Here, 0.2 is used as the weighting parameter; increase or decrease the value to create noise of different intensities. In several applications, noise plays an important role in improving a system's capabilities. This is particularly true when you're using deep learning models. The noise becomes a way of testing the precision of the deep learning application, and building it into the computer vision algorithm.

Linear image filtering

The simplest filter is a point operator. Each pixel value is multiplied by a scalar value.
This operation can be written as follows:

g(i,j) = K * f(i,j)

Here:

- The input image is F and the value of the pixel at (i,j) is denoted as f(i,j)
- The output image is G and the value of the pixel at (i,j) is denoted as g(i,j)
- K is a scalar constant

This type of operation on an image is what is known as a linear filter. In addition to multiplication by a scalar value, each pixel can also be increased or decreased by a constant value. So the overall point operation can be written like this:

g(i,j) = K * f(i,j) + L

This operation can be applied both to grayscale images and RGB images. For RGB images, each channel will be modified with this operation separately. The following is the result of varying both K and L. The first image is the input on the left. In the second image, K=0.5 and L=0.0, while in the third image, K is set to 1.0 and L is 10. For the final image on the right, K=0.7 and L=25. As you can see, varying K changes the brightness of the image and varying L changes the contrast of the image:

This image can be generated with the following code:

import numpy as np
import matplotlib.pyplot as plt
import cv2

def point_operation(img, K, L):
    """
    Applies point operation to given grayscale image
    """
    img = np.asarray(img, dtype=np.float)
    img = img*K + L
    # clip pixel values
    img[img > 255] = 255
    img[img < 0] = 0
    return np.asarray(img, dtype=np.int)

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # k = 0.5, l = 0
    out1 = point_operation(gray, 0.5, 0)
    # k = 1., l = 10
    out2 = point_operation(gray, 1., 10)
    # k = 0.7, l = 25
    out3 = point_operation(gray, 0.7, 25)
    res = np.hstack([gray, out1, out2, out3])
    plt.imshow(res, cmap='gray')
    plt.axis('off')
    plt.show()

if __name__ == '__main__':
    main()

2D linear image filtering

While the preceding filter is a point-based filter, image pixels also carry information from their surroundings. In the previous image of the flower, the pixel values in the petal are all yellow.
If we choose a pixel of the petal and move around, the values will be quite close. This gives some more information about the image. To extract this information in filtering, there are several neighborhood filters.

In neighborhood filters, there is a kernel matrix which captures local region information around a pixel. To explain these filters, let's start with an input image, as follows:

This is a simple binary image of the number 2. To get certain information from this image, we can directly use all the pixel values. But instead, to simplify, we can apply filters on this. We define a matrix smaller than the given image which operates in the neighborhood of a target pixel. This matrix is termed the kernel; an example is given as follows:

The operation is defined first by superimposing the kernel matrix on the original image, then taking the product of the corresponding pixels and returning a summation of all the products. In the following figure, the lower 3 x 3 area in the original image is superimposed with the given kernel matrix and the corresponding pixel values from the kernel and image are multiplied. The resulting image is shown on the right and is the summation of all the previous pixel products:

This operation is repeated by sliding the kernel along image rows and then image columns. This can be implemented as in the following code. We will see the effects of applying this on an image in coming sections.

# design a kernel matrix, here is uniform 5x5
kernel = np.ones((5,5), np.float32)/25

# apply on the input image, here grayscale input
dst = cv2.filter2D(gray, -1, kernel)

However, as you can see previously, the corner pixel will have a drastic impact and results in a smaller image because the kernel, while overlapping, will be outside the image region. This causes a black region, or holes, along the boundary of an image.
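The sliding operation just described can be sketched in plain NumPy (a rough illustration of the idea, not the book's code — cv2.filter2D is the optimized equivalent; zero padding is used here so the output keeps the input's size):

```python
import numpy as np

def correlate2d(image, kernel):
    """Slide `kernel` over `image` (zero-padded) and, at every position,
    sum the element-wise products of the kernel and the covered region."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='constant')
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            region = padded[i:i + kh, j:j + kw]
            out[i, j] = np.sum(region * kernel)
    return out

# a tiny example: a bright square blurred by a uniform 3x3 box kernel
img = np.array([[0, 0, 0, 0],
                [0, 9, 9, 0],
                [0, 9, 9, 0],
                [0, 0, 0, 0]], dtype=float)
box = np.ones((3, 3)) / 9.0
print(correlate2d(img, box))
```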
To rectify this, there are some common techniques used:

- Padding the corners with constant values, maybe 0 or 255; by default OpenCV will use this
- Mirroring the pixels along the edge to the external area
- Creating a pattern of pixels around the image

The choice between these will depend on the task at hand. In common cases, padding will be able to generate satisfactory results.

The effect of the kernel is most crucial, as changing these values changes the output significantly. We will first see simple kernel-based filters and also see their effects on the output when changing the size.

Box filtering

This filter averages out the pixel values, as the kernel matrix is denoted as follows:

Applying this filter results in blurring the image. The results are shown as follows:

In frequency domain analysis of the image, this filter is a low pass filter. The frequency domain analysis is done using Fourier transformation of the image, which is beyond the scope of this introduction. We can see that on changing the kernel size, the image gets more and more blurred:

As we increase the size of the kernel, we can see that the resulting image gets more blurred. This is due to averaging out of peak values in the small neighbourhood where the kernel is applied. The result of applying a kernel of size 20x20 can be seen in the following image. However, if we use a very small filter of size (3,3) there is negligible effect on the output, due to the fact that the kernel size is quite small compared to the photo size. In most applications, kernel size is heuristically set according to image size:

The complete code to generate box filtered photos is as follows:

def plot_cv_img(input_image, output_image):
    """
    Converts an image from BGR to RGB and plots
    """
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Box Filter (5,5)')
    ax[1].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # To try different kernel, change size here..
    kernel_size = (5, 5)
    # apply a box blur with the given kernel size
    blur = cv2.blur(img, kernel_size)
    # Do plot
    plot_cv_img(img, blur)

if __name__ == '__main__':
    main()
This is easily done due to several properties associated with a common type of filter, that is, linear filters:

- Linear filters are commutative, so we can apply filters in any order and the result still remains the same: a * b = b * a
- They are associative in nature, which means the order of applying the filter does not affect the outcome: (a * b) * c = a * (b * c)
- Even in the case of summing two filters, we can perform the summation first and then apply the filter, or we can apply the filters individually and then sum the results. The overall outcome still remains the same: f * (a + b) = (f * a) + (f * b)
- Applying a scaling factor to one filter and then convolving with another filter is equivalent to first convolving both filters and then applying the scaling factor: (k a) * b = k (a * b)

These properties play a significant role in other computer vision tasks such as object detection and segmentation. A suitable combination of these filters enhances the quality of information extraction and, as a result, improves the accuracy.

Non-linear image filtering

While in many cases linear filters are sufficient to get the required results, in several other use cases performance can be significantly increased by using non-linear image filtering. Non-linear image filtering is more complex than linear filtering. This complexity can, however, give you more control and better results in your computer vision tasks.

Smoothing a photo

Applying a box filter with hard edges doesn't result in a smooth blur on the output photo. To improve this, the filter can be made smoother around the edges. One such popular filter is the Gaussian filter. This is a non-linear filter which enhances the effect of the center pixel and gradually reduces the effects as the pixel gets farther from the center. Mathematically, a Gaussian function is given as:

g(x) = (1 / (σ √(2π))) exp(-(x - μ)² / (2σ²))

where μ is the mean and σ² is the variance.
An example kernel matrix for this kind of filter in the 2D discrete domain is given as follows:

This 2D array is used in normalized form, and the effect of this filter also depends on its width: changing the kernel width has varying effects on the output, as discussed in a later section. Applying a Gaussian kernel as a filter removes high-frequency components, which results in removing strong edges and hence a blurred photo:

While this filter performs better blurring than a box filter, the implementation is also quite simple with OpenCV:

def plot_cv_img(input_image, output_image):
    """
    Converts an image from BGR to RGB and plots
    """
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Gaussian Blurred')
    ax[1].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # apply gaussian blur,
    # kernel of size 5x5,
    # change here for other sizes
    kernel_size = (5,5)
    # sigma values are same in both direction
    blur = cv2.GaussianBlur(img, (5,5), 0)
    plot_cv_img(img, blur)

if __name__ == '__main__':
    main()

The histogram equalization technique

The basic point operations, to change the brightness and contrast, help in improving photo quality but require manual tuning. Using the histogram equalization technique, these can be found algorithmically, creating a better-looking photo. Intuitively, this method tries to set the brightest pixels to white and the darker pixels to black. The remaining pixel values are similarly rescaled. This rescaling is performed by transforming the original intensity distribution to capture all intensity distributions. An example of this equalization is as follows:

The preceding image is an example of histogram equalization. On the right is the output and, as you can see, the contrast is increased significantly. The input histogram is shown in the bottom figure on the left, and it can be observed that not all the colors are observed in the image. After applying equalization, the resulting histogram plot is as shown in the bottom-right figure. To visualize the results of equalization in the image, the input and results are stacked together in the following figure.
Code for the preceding photos is as follows:

def plot_gray(input_image, output_image):
    """
    Converts an image from BGR to RGB and plots
    """
    # change color channels order for matplotlib
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(input_image, cmap='gray')
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(output_image, cmap='gray')
    ax[1].set_title('Histogram Equalized')
    ax[1].axis('off')
    plt.savefig('../figures/03_histogram_equalized.png')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # grayscale image is used for equalization
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # following function performs equalization on input image
    equ = cv2.equalizeHist(gray)
    # for visualizing input and output side by side
    plot_gray(gray, equ)

if __name__ == '__main__':
    main()

Median image filtering

Median image filtering is a technique similar to neighborhood filtering. The key here, of course, is the use of a median value; as such, the filter is non-linear. It is quite useful for removing sharp noise such as salt and pepper. Instead of using a product or sum of neighborhood pixel values, this filter computes the median value of the region. This results in the removal of random peak values in the region, which can be due to noise like salt-and-pepper noise. This is further shown in the following figure, with different kernel sizes used to create the output.
In this image, the first input has channel-wise random noise added as follows:

# read the image
flower = cv2.imread('../figures/flower.png')

# initialize noise image with zeros
noise = np.zeros(flower.shape[:2])

# fill the image with random numbers in given range
cv2.randu(noise, 0, 256)

# add noise to existing image, apply channel wise
noise_factor = 0.1
noisy_flower = np.zeros(flower.shape)
for i in range(flower.shape[2]):
    noisy_flower[:,:,i] = flower[:,:,i] + np.array(noise_factor*noise, dtype=np.int)

# convert data type for use
noisy_flower = np.asarray(noisy_flower, dtype=np.uint8)

The created noisy image is used for median image filtering as:

# apply median filter of kernel size 5
kernel_5 = 5
median_5 = cv2.medianBlur(noisy_flower, kernel_5)

# apply median filter of kernel size 3
kernel_3 = 3
median_3 = cv2.medianBlur(noisy_flower, kernel_3)
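Under the hood, the median operation itself can be written in a few lines of plain NumPy (a conceptual sketch of what cv2.medianBlur does, not OpenCV's implementation), which also shows why it kills isolated salt and pepper pixels:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace every pixel with the median of its k x k neighbourhood
    (edges are handled by replicating the border pixels)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# a flat gray patch with one salt (255) and one pepper (0) pixel
img = np.full((5, 5), 100, dtype=np.uint8)
img[1, 1], img[3, 3] = 255, 0
clean = median_filter(img, 3)
print(clean)
```

Because each 3 x 3 window contains at most a couple of outliers among nine values, the median is always the background value, so both noise pixels vanish completely.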
In the following photo, you can see the resulting photo after varying the kernel size (indicated in brackets). The rightmost photo is the smoothest of them all:

The most common application for median blur is in smartphone applications, which filter the input image and add artifacts to create artistic effects.

The code to generate the preceding photographs is as follows:

def plot_cv_img(input_image, output_image1, output_image2, output_image3):
    """
    Converts an image from BGR to RGB and plots
    """
    fig, ax = plt.subplots(nrows=1, ncols=4)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image1, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Median Filter (3,3)')
    ax[1].axis('off')
    ax[2].imshow(cv2.cvtColor(output_image2, cv2.COLOR_BGR2RGB))
    ax[2].set_title('Median Filter (5,5)')
    ax[2].axis('off')
    ax[3].imshow(cv2.cvtColor(output_image3, cv2.COLOR_BGR2RGB))
    ax[3].set_title('Median Filter (7,7)')
    ax[3].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # compute median filtered image varying kernel size
    median1 = cv2.medianBlur(img, 3)
    median2 = cv2.medianBlur(img, 5)
    median3 = cv2.medianBlur(img, 7)
    # Do plot
    plot_cv_img(img, median1, median2, median3)

if __name__ == '__main__':
    main()

Image filtering and image gradients

Image gradients capture edges, or sharp changes, in a photograph, and are widely used in object detection and segmentation tasks. In this section, we will look at how to compute image gradients.

First, the image derivative is obtained by applying a kernel matrix which computes the change in a direction. The Sobel filter is one such filter; its kernel in the x-direction is given as follows:

And here it is in the y-direction:

This is applied in a similar fashion to the linear box filter, by computing values on a kernel superimposed on the photo. The filter is then shifted along the image to compute all values. Following are some example results, where X and Y denote the direction of the Sobel kernel:

This is also termed an image derivative with respect to the given direction (here X or Y). The lighter resulting photographs (middle and right) are positive gradients, while the darker regions denote negative gradients and gray is zero.
While Sobel filters correspond to first-order derivatives of a photo, the Laplacian filter gives a second-order derivative of a photo. The Laplacian filter is applied in a similar way to Sobel:

The code to get Sobel and Laplacian filters is as follows:

# sobel
x_sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)
y_sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)

# laplacian
lapl = cv2.Laplacian(img, cv2.CV_64F, ksize=5)

# gaussian blur
blur = cv2.GaussianBlur(img, (5,5), 0)

# laplacian of gaussian
log = cv2.Laplacian(blur, cv2.CV_64F, ksize=5)

We learnt about types of filters and how to perform image filtering in OpenCV. To know more about image transformation and 3D computer vision, check out the book Practical Computer Vision.

Don't call a Gaussian filter a 'non-linear' filter. Linearity or non-linearity is defined by the way the filter is applied, not the shape of the filter. If the filter is applied using convolution (which is a linear operation), it is a linear filter! There is only one way to check if a process or operation is linear: if f(ax+b) = af(x) + f(b), then it is linear! I think this article seriously lacks basic training in signal processing. I'd be very careful learning from this particular article!
https://hub.packtpub.com/image-filtering-techniques-opencv/
1. BIG Thanks for such wonderful.
   By: dhruva at 2013-07-10 08:56:27

2. nice to read thank you for the notes. that helps m
   By: V.Ranjith kumar at 2010-08-12 06:13:03

3. thanks a lot dude,my lecturer took 2 weeks to expl
   By: praveen malinga at 2010-02-25 07:53:01

4. hi very tnx for your code
   By: Moji at 2012-08-19 16:06:53

5. it is a good example to understand.
   By: nivedha at 2011-07-12 05:15:39

6. HI, I want to send an sms via website to mo
   By: Hemant at 2008-11-04 05:26:30

7. I am geting following exception javax.comm
   By: Raviteja at 2014-06-23 12:46:06

8. Simpler example: import java.io.*;
   By: Joseph Harner at 2011-12-04 23:20:48

9. Nice Example. Gives very clear idea.
   By: Larsen at 2013-09-14 02:29:24

10. public void playSound(String filename) {
    By: Rowan at 2014-07-17 10:26:02
https://java-samples.com/showcomment.php?commentid=39282
Hello, I'm new to python and sort of getting the hang of it... sort of.

Instructions:
#Write a program that writes a series of random numbers to a file.
#Each random number should be in the range of 1 through 100.
#The application should let the user specify how
#many random numbers the file will hold.

Here's what I have:

import random

afile = open("Random.txt", "w")
for line in afile:
    for i in range(input('How many random numbers?: ')):
        line = random.randint(0, 100)
        afile.write(line)
        print(line)
afile.close()

print("\nReading the file now.")
afile = open("Random.txt", "r")
print(afile.read())
afile.close()

It's a few things:
1. It's not writing the number of random numbers the USER is setting.
2. The file, once opened, can't close.
3. When the file is read, nothing comes back.

Please anyone... any ideas? I seem to always get stuck on execution with python.
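For what it's worth, one possible restructuring looks like this (a sketch — the function name is mine): iterating over a file opened in "w" mode yields nothing, which is why the loop body never runs; the count should come from the user first, and write() needs a string, not an int. The assignment also asks for the range 1 through 100:

```python
import random

def write_random_numbers(path, how_many):
    # write `how_many` random numbers (1 through 100), one per line
    with open(path, "w") as afile:
        for _ in range(how_many):
            afile.write(str(random.randint(1, 100)) + "\n")

# e.g. how_many = int(input('How many random numbers?: '))
write_random_numbers("Random.txt", 5)

print("\nReading the file now.")
with open("Random.txt") as afile:
    numbers = afile.read().splitlines()
print(numbers)
```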
http://forums.devshed.com/python-programming/940006-random-file-writer-python-last-post.html
24 useful Python tips

2022-02-01 18:40:50 【Machine Learning Institute】

This article was first published on my official account, Data STUDIO: 24 useful Python tips.

Python is one of the most popular and best-loved programming languages in the world, and for many reasons:

- It's easy to learn
- It has a huge range of features
- It has a large number of modules and libraries

As data workers, we use Python every day to handle most of our tasks, and along the way we keep picking up useful tips and tricks. Here, I have tried to share some of them in an A-to-Z format. This article introduces each one briefly; if you are interested in any of them, you can check the official documentation through the references at the end of the article. I hope it helps.

all or any

One of the many reasons why Python is so popular is its readability and expressiveness. People often joke that Python is "executable pseudocode". When you can write code like this, it's hard to argue:

x = [True, True, False]

if any(x):
    print("At least one True")

if all(x):
    print("All True")

if any(x) and not all(x):
    print("At least one True and one False")

bashplotlib

Have you ever wanted to draw graphs in the console? bashplotlib is a Python library that helps us plot data on the command line (a rough environment).

# Module installation
pip install bashplotlib

# Drawing example
import numpy as np
from bashplotlib.histogram import plot_hist

arr = np.random.normal(size=1000, loc=0, scale=1)
plot_hist(arr, bincount=50)

collections

Python has some great default data types, but sometimes they don't behave exactly as you'd like. Fortunately, the Python standard library provides the collections module. This convenient add-on gives you more data types.
from collections import OrderedDict, Counter

# Remembers the order in which keys are added!
x = OrderedDict(a=1, b=2, c=3)

# Counts the frequency of each character
y = Counter("Hello World!")

dir

Have you ever wondered how to inspect a Python object and see what attributes it has? On the command line, enter:

dir()
dir("Hello World")
dir(dir)

This can be a very useful feature when running Python interactively and dynamically exploring the objects and modules you are working with. You can read more about related functions here.

emoji

Emoji are the pictorial emotion symbols that originated in Japanese wireless communication: the "e" refers to picture and "moji" to character. They can represent all kinds of expressions, such as a smiley face for a smile or a cake for food. In mainland China, emoji are often called "little yellow faces", or simply emoji.

# Install the module
pip install emoji

# Give it a try
from emoji import emojize
print(emojize(":thumbs_up:"))

from __future__ import

One consequence of Python's popularity is that new versions are always under development. New versions mean new features, unless your version is out of date. But don't worry: the __future__ module lets you import functionality from future Python versions. Literally, it's like time travel, or magic, or something.

from __future__ import print_function
print("Hello World!")

geopy

Geography can be a challenging area for most programmers: getting geographic information or drawing maps raises many problems. The geopy module makes geography-related work very easy.

pip install geopy

It works by abstracting the APIs of a series of different geocoding services. With it, you can get a place's full street address, latitude, longitude, and even altitude. There is also a useful distance class, which calculates the distance between two locations in your preferred unit of measurement.
from geopy import GoogleV3

place = "221b Baker Street, London"
location = GoogleV3().geocode(place)
print(location.address)
print(location.location)

howdoi

When you're programming in a terminal, you often search StackOverflow for an answer to a problem and then switch back to the terminal to keep coding. Sometimes you can't remember the solution you found earlier and need to revisit StackOverflow, but you don't want to leave the terminal. That's where the handy command-line tool howdoi comes in.

pip install howdoi

Whatever problem you have, you can ask it, and it will do its best to reply:

howdoi vertical align css
howdoi for loop in java
howdoi undo commits in git

But be aware: it scrapes the code from the top answer on StackOverflow, so it may not always give the most useful information...

howdoi exit vim

inspect

Python's inspect module is perfect for understanding what is happening behind the scenes. You can even call its methods on themselves! The code example below uses inspect.getsource() to print its own source code, and inspect.getmodule() to print the module that defines it. The last line of code prints its own line number.

import inspect
print(inspect.getsource(inspect.getsource))
print(inspect.getmodule(inspect.getmodule))
print(inspect.currentframe().f_lineno)

Of course, beyond these trivial uses, the inspect module can prove useful for understanding what your code is doing. You could also use it to write self-documenting code.

Jedi

Jedi is an autocompletion and code-analysis library. It makes writing code faster and more efficient. Unless you are developing your own IDE, you will probably be most interested in using Jedi as an editor plugin. Fortunately, there are already loads available!

**kwargs

Learning any language involves many milestones.
With Python, understanding the mysterious **kwargs syntax may count as an important one. The double asterisk in front of a dictionary object, **kwargs, allows you to pass the contents of that dictionary to a function as named arguments. The keys of the dictionary are the argument names, and the values are the values passed to the function. You don't even need to call it kwargs!

dictionary = {"a": 1, "b": 2}

def someFunction(a, b):
    print(a + b)
    return

# These do the same thing:
someFunction(**dictionary)
someFunction(a=1, b=2)

This is very useful when you want to write a function that can handle named arguments that aren't defined in advance.

List comprehensions

One of my favorite things about Python programming is its list comprehensions. These expressions make it easy to write very clean code that reads almost like natural language. For example:

numbers = [1, 2, 3, 4, 5, 6, 7]
evens = [x for x in numbers if x % 2 == 0]

map

Python supports functional programming through a number of built-in features. One of the most useful is the map() function, especially in combination with lambda functions.

x = [1, 2, 3]
y = map(lambda x: x + 1, x)

# Prints [2, 3, 4]
print(list(y))

In the example above, map() applies a simple lambda function to each element of x. It returns a map object, which can be converted into an iterable such as a list or tuple.

newspaper3k

If you haven't seen it yet, prepare to be amazed by Python's newspaper module. It enables you to retrieve news articles and associated metadata from a range of leading international publications. You can retrieve images, text, and author names. It even has some built-in NLP functionality. So if you were considering BeautifulSoup or some other DIY web-scraping library for your next project, using this module can save you a lot of time and effort.

pip install newspaper3k

Operator overloading

Python provides support for operator overloading, one of those terms that makes you sound like a legitimate computer scientist.
It's actually a simple concept. Have you ever wondered why Python lets you use the + operator both to add numbers and to concatenate strings? That's operator overloading at work. You can define objects that use Python's standard operator symbols in their own specific way, and then use those operators in contexts that make sense for the objects you are working with.

class Thing:
    def __init__(self, value):
        self.__value = value
    def __gt__(self, other):
        return self.__value > other.__value
    def __lt__(self, other):
        return self.__value < other.__value

something = Thing(100)
nothing = Thing(0)

# True
something > nothing

# False
something < nothing

# Error
something + nothing

pprint

Python's default print function does its job, but try printing any large, nested object with it and the result is rather ugly. The standard library's pretty-print module, pprint, prints complex structured objects in an easy-to-read format. It's a must-have for any Python developer who works with non-trivial data structures.

import requests
import pprint

url = ''
users = requests.get(url).json()
pprint.pprint(users)

Queue

The standard library's Queue module implements queues with multithreading support. This module allows you to implement queue data structures: structures that let you add and retrieve entries according to a specific rule. "First-in, first-out" (FIFO) queues let you retrieve objects in the order they were added. "Last-in, first-out" (LIFO) queues give you access to the most recently added object first. Finally, priority queues let you retrieve objects according to their sort order. Queues like these are especially useful in multithreaded programming.

__repr__

When defining a class or object in Python, it's useful to provide an "official" way of representing the object as a string. For example:

file = open('file.txt', 'r')
print(file)
<open file 'file.txt', mode 'r' at 0x10d30aaf0>

This makes code much easier to debug.
Add it to your class definition as shown below:

class someClass:
    def __repr__(self):
        return "<some description here>"

someInstance = someClass()

# Prints <some description here>
print(someInstance)

sh

Python is a great scripting language, but sometimes the standard os and subprocess libraries can be a bit of a headache. The sh library lets you call any program as if it were an ordinary function, which is useful for automating workflows and tasks.

import sh
sh.pwd()
sh.mkdir('new_folder')
sh.touch('new_file.txt')
sh.whoami()
sh.echo('This is great!')

Type hints

Python is a dynamically typed language: you don't need to specify data types when defining variables, functions, or classes. This allows for fast development time. However, few things are more annoying than a runtime error caused by a simple typing problem. Since Python 3.5, you can optionally provide type hints when defining functions.

def addTwo(x: int) -> int:
    return x + 2

Although they are not enforced, type annotations can make your code easier to understand. They also allow you to use type-checking tools to catch stray TypeErrors before running. This is very useful if you're working on large, complex projects!

uuid

The standard library's uuid module offers a quick and simple way to generate universally unique IDs (or "UUIDs").

import uuid

user_id = uuid.uuid4()
print(user_id)

This creates a random 128-bit number that is almost certainly unique. In fact, there are more than 2¹²² possible UUIDs. That's over five undecillion (or 5,000,000,000,000,000,000,000,000,000,000,000,000). The probability of finding duplicates in a given set is extremely low: even with a trillion UUIDs, the chance of a duplicate is far less than one in a billion.

Virtual environments

You may be working on multiple Python projects at the same time. Unfortunately, sometimes two projects will depend on different versions of the same dependency.
Which one do you install on your system? Fortunately, Python supports virtual environments, which let you have the best of both worlds. From the command line:

python -m venv my-project
source my-project/bin/activate
pip install all-the-modules

Now you can run standalone versions and installations of Python on the same machine.

wikipedia

Wikipedia has a great API that allows users programmatic access to an unparalleled and completely free body of knowledge and information. The wikipedia module makes accessing that API very convenient.

import wikipedia

result = wikipedia.page('freeCodeCamp')
print(result.summary)
for link in result.links:
    print(link)

Just like the real site, the module provides multi-language support, page disambiguation, random page retrieval, and even a donate() method.

xkcd

Humor is a key feature of Python; after all, the language is named after the British comedy sketch show Monty Python's Flying Circus. Much of the official Python documentation references the show's most famous sketches. But the humor isn't limited to the documentation. Try running the following line:

import antigravity

YAML

YAML stands for "YAML Ain't Markup Language". It is a data-format language and a superset of JSON. Unlike JSON, it can store more complex objects and reference its own elements. You can also write comments in it, which makes it especially well suited to writing configuration files. The PyYAML module lets you use YAML from Python. Install it and import it into your project:

pip install pyyaml

import yaml

PyYAML allows you to store Python objects of any data type, including instances of user-defined classes.

zip

The finale is also a great one. Have you ever needed to build a dictionary out of two lists?

keys = ['a', 'b', 'c']
vals = [1, 2, 3]
zipped = dict(zip(keys, vals))

The zip() built-in function takes a series of iterable objects and returns an iterator of tuples.
Each tuple groups the elements of the input objects by position index. You can also "unzip" an object by calling zip(*object).

At the end

Python is a very diverse and well-developed language, so there are certainly many features I haven't covered. If you want to know more Python modules, you can refer to awesome-python.

Feel free to share this article. Please credit the author and include the original link, and thank you for your respect for knowledge and for this article.

Original author: data STUDIO
Original link: mp.weixin.qq.com/s/U28UvPtFc…
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please indicate the source. Infringement will be pursued.
Reprinted by [Machine Learning Institute]
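To illustrate the "unzipping" mentioned above, here is a minimal sketch of my own (not from the original article): the * operator splits a list of pairs back into separate tuples.

```python
# zip pairs two lists together; zip(*...) "unzips" a list of pairs back apart.
pairs = [('a', 1), ('b', 2), ('c', 3)]
keys, vals = zip(*pairs)
print(keys)  # ('a', 'b', 'c')
print(vals)  # (1, 2, 3)
```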
https://en.pythonmana.com/2022/02/202202011840480913.html
App::perlfind::Plugin::UseModule - Try the search word as a module name # perlfind Getopt::Long This plugin for App::perlfind tries to use the search term as a module name. If the module can be loaded, it is added to the match results. If the term contains '::', it might be a fully qualified function name such as Foo::Bar::some_function, or a module that is not installed but whose namespace parent might be installed. For example, if Foo::Bar is installed but Foo::Bar::Baz isn't, we don't want to think that there is a function Baz() in the package Foo::Bar; rather, we want to show the docs for Foo::Bar::Baz. To distinguish between a function and a module, the plugin uses a simple heuristic, which means it's a guess and won't always work: if the final symbol starts with an uppercase character, we assume it's a package; otherwise we assume it's a function.
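The uppercase heuristic described above can be sketched as follows. This is an illustrative Python rendering of the rule, not the module's actual Perl code, and the function name is my own:

```python
def guess_symbol_kind(name):
    # Heuristic from the description above: if the final '::'-separated
    # segment starts with an uppercase character, assume it names a package;
    # otherwise assume it names a function.
    final = name.split('::')[-1]
    return 'package' if final[:1].isupper() else 'function'

print(guess_symbol_kind('Foo::Bar::Baz'))            # package
print(guess_symbol_kind('Foo::Bar::some_function'))  # function
```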
http://search.cpan.org/~marcel/App-perlfind-2.05/lib/App/perlfind/Plugin/UseModule.pm
Pointers and arrays are intrinsically related in C++.

Array decay

In lesson 6.1 -- Arrays (part i), the type of &array is int(*)[5]. It's unlikely you'll ever need to use this.

Revisiting passing fixed arrays to functions

Back in lesson 6.2 -- Arrays (part ii), the element accessed is the actual first element of the array!

hello guys, please help me. "Arrays in structs and classes don't decay: Finally, it is worth noting that arrays that are part of structs or classes do not decay when the whole struct or class is passed to a function." In my example, I am trying to pass an array that is part of a struct. This array decays into a pointer when passed to a function. So I find "Arrays in structs and classes don't decay", as stated above, confusing. I am learning chapter 9.7. Sometimes I go back to the previous chapters for review. Did I miss something?

You don't have an array in your struct, you have an array of your struct.

Hi Alex, I would like to ask you some questions. I have this program. Why does the first cout print the address of the first element, but the second cout print the whole string? What does (void*) mean in the third cout?

&b[0] returns a `char*`. `char*` is treated as a string by `std::cout <<`. `(void*)` is a C-style cast, see lesson 6.16 (Not P.6.16).

Hello, Alex and nascardriver! Maybe this is a minor grammatical error: "It's a common fallacy in C++ to believe an array and a pointer to the array are identical. They're not. In the above case, array is of type "int[5]", and it's "value" is the array elements themselves. A pointer to the array would be of type "int *", and its value would be the address of the first element of the array." In the third sentence, "it's 'value'" should be "its 'value'". (This paragraph is at the bottom of the first example.)

removed ', thanks!

"[...] arrays that are part of structs or classes do not decay when the whole struct or class is passed to a function."
After testing, here is my conclusion:
- structs as function parameters are passed by copy.
- every variable passed by copy is deeply copied at runtime.
Consequently, an array inside a struct passed to a function as one of its parameters is copied. Am I right?

> structs as function parameters are passed by copy
That's true for any type, unless you specify otherwise. We talk about this in chapter 7.

> every variable passed by copy is deeply copied at runtime
That's how it should be, but it's up to the developer of a type to make this statement true. More in chapter 9.

The size of an array is not a run-time property, it's a trait of the type. When an array is passed to a function, its type changes from array to pointer, leading to a loss of information. When you pass a struct, you're not changing the type, so the size is preserved.

Thank you.

"(in the above example, int(*)[5])." I got a bit confused here. It should probably say "(in the above example, the type of &array is int(*)[5])."

I'm not sure if this really fits here, but I really don't understand why this doesn't work the way I thought it would.

What did you think would happen? What happened?

Hi, Alex, when I use vscode+mingw to run the code you provided, something makes me confused. On my machine it prints the result as: 32 8; while when I changed 'std::cout << sizeof(array) << '\n';' to 'std::cout << sizeof(*array) << '\n';' in the function printSize, the result becomes 32 4. Also, if I use 'std::cout << sizeof(*array) << '\n';' in the main function, it returns 4. Why is the first different from the other two?

`array` is an `int*` (64 bits wide). `*array` is an `int` (32 bits wide). It doesn't matter what type of elements your array has, a pointer is always 64 bits (with your compiler). The size of the element type is different from this.

Hi guys. Can you please help me understand the below code:

`std::string` has a custom `==` operator. It compares the contents of the strings. A char array decays to a pointer when you use `==`.
You're comparing pointers, not strings. To compare C-style strings, use `std::strcmp` or wrap them in `std::string_view`.

Oh, now that makes sense! Thanks again, nascardriver, for invaluable feedback as always.

Hi nascardriver. I have tried to modify my code to use std::strcmp, but the output is still 0 and not 1?

`std::strcmp` returns 0 if the strings are equal.

Thanks nascardriver!

In the third code snippet in this lesson, I think you meant to use "std::cout <<" instead of "cout <<".

Yep, lesson updated!

> Consequently, when ptr is dereferenced, the actual array is dereferenced!
I think this sentence might not be precise enough. In reality, "the pointer that points to the first element of the array" is dereferenced, not the actual array. I understand that you've clarified that we don't actually dereference an array in the beginning, but perhaps we can restate it here briefly to avoid confusion, or maybe modify the sentence to show that it is the pointer (type int *) that gets dereferenced, not the actual array (type int[8])? Thanks for this awesome tutorial!

Good feedback. Wording amended for clarification purposes. Thanks!

Write declarations for the following entities and initialize each of them:
a pointer to char
an array with 10 int
a reference to an array with 10 int
a pointer to an array with 10 elements of type string
a pointer to a pointer to char
a constant int
a pointer to a constant int
a constant pointer to an int

[code]
int array[5]{ 9, 7, 5, 3, 1 };
char vowels[]{ 'a', 'e', 'i', 'o', 'u', '\0' };
std::cout << "\nElements in array: " << array; // address 00AFFD7C printed
std::cout << "\nElements in vowels: " << vowels; // aeiou printed
std::cout << "\nArray element 0 has address: " << &array[0]; // address 00AFFD7C printed
std::cout << "\nvowels[0] has address: " << &vowels[0]; // aeiou printed
[code]

I don't understand why a pointer to a char array is different to a pointer to an int array.
When dealing with pointers, are we supposed to treat char very differently to int?

They're no different from each other. Only `std::cout` treats them differently. Closing code tags use a forward slash [/code]

Hi there! Thank you for writing these lessons and answering our questions! I have a lot of trouble with pointers and arrays in C++. I understand that an array decays into a pointer when passed into a function to avoid copying lengthy arrays. I also understand that a pointer is a variable that carries a memory address. I understand what arrays are algorithmically (my first language was Java). However, I'm very confused about the rest of array decay and pointers.

When this post says "In all but two cases (which we'll cover below), when a fixed array is used in an expression, the fixed array will decay (be implicitly converted) into a pointer that points to the first element of the array", what does it mean for a fixed array to be "used in an expression"? Does that mean an array always decays? If so, what's the point of an array other than to give length information?

I also found a Stack Overflow answer <> that says "Except when 1. it is the operand of the sizeof operator or 2. the unary (address-of) & operator, 3. or is a string literal used to initialize an array, an expression that has type ''array of type'' is converted to an expression with type ''pointer to type'' that 1. points to the initial element of the array object and is not an lvalue." (not a function or object).

I'm having a lot of trouble understanding why it decays, when it decays, and what's the point of an array if it's almost always going to decay? If the array is different from the pointer to the array, how is the distinction made inside C++ if array is not a class? I don't understand what the difference between type "int[5]" and type "int *" means. Is the array shorthand for the pointer that is enforced by the compiler, or is it in memory something fundamentally different?
Can an array ever be dereferenced? Why do arrays in structs or classes not decay when passed into a function?

If you made it all the way to the bottom of this flurry of confused questions, I am deeply, deeply grateful for your time and attention. It means a lot to me that you provide this wonderful service for free.

Best regards, Teru

Hello Teru!

> why it decays
You can pass arrays around without making them decay, but then you're limited to arrays of a fixed size. Most of the time we want our functions to work with arbitrarily sized arrays. Because arrays of different sizes are distinct types, a function that accepts an array of size N won't work with an array of size M. When we let arrays decay, they have the same type, so we can re-use functions.

> when it decays
When you copy it into a pointer variable or use it like a pointer, e.g. when calling a function with a pointer parameter. As I said before, the function _could_ make it so that the array doesn't decay, but then it only works with one size.

> what's the point of an array if it's almost always going to decay?
A decayed array is still an array, you just can't extract its size, so you have to keep track of the size yourself.

> array is not a class
There is an array class (and several other containers), you'll learn about it later.

> difference between type "int[5]" and type "int *"
One has size information, the other doesn't.

> Is the array a shorthand for the pointer that is enforced by the compiler or is it in memory something fundamentally different?
I'm not sure I understand you. An array type has size information, but only at compile-time (because it's a type, and types only exist at compile-time). The size isn't stored in memory and is not otherwise present at run-time (at least not accessible to the programmer).

> Can an array ever be dereferenced?
It will decay first, then you dereference the pointer. You don't have to do anything to make the array decay, it happens automatically.
> Why do arrays in structs or classes not decay when passed into the function?
Because the type of the struct/class contains the type of every member. As with passing an array by array type, a function that accepts the struct/class works only with arrays of one size (the size that's specified in the struct/class).

I hope I could clear some of your confusion. If you have any more questions or don't understand something I said, feel free to ask again :)

You cleared up so much confusion! Thank you so much! Would you say it's correct for me to say:
1. An array is a pointer to a variable with information about the size of the array
2. Arrays of different sizes are different types from each other (int[5] is a different type from int[2])
?

> "An array type has size information, but only at compile-time (Because it's a type, and types only exist at compile-time). The size isn't stored in memory and not otherwise present at run-time (At least not accessible to the programmer)."
What does it mean that "types only exist at compile-time"? Isn't the size of the array accessible through array.length? Thanks again!

> An array is a pointer to a variable with information about the size of the array
Yes
> Arrays of different sizes are different types from each other (int[5] is a different type from int[2])
Yes
> types only exist at compile-time
Your processor doesn't know what types are. There are no types in a compiled program. Types only help you to organize data and prevent mistakes by using data in a wrong way.
> Isn't size of the array accessible through array.length
No, not in C++. You can use `std::size(array)` if `array` is an array type (not decayed). `std::size` runs at compile-time. In your program, there is no call to `std::size`, just the size of your array (there might be a call when optimization is disabled, but the return value is known at compile-time even then).

I have a small doubt.
While messing around with C (due to a course on Coursera), I learnt that 'sizeof' is actually an 'operator' in the C and C++ languages, and hence the value it returns is computed at compile time and stored in the binary itself! So there is no 'runtime' cost to find the size of the array in this case. Just to make sure (I'm also following a book on OS organization), I checked with 'Compiler Explorer', and sure enough, it explicitly moves 20 into the value. I think std::size/std::ssize work this way too. Also, I don't see any other usage where 'array' doesn't degrade to a pointer. All this makes me think, is it really meaningful to say that an 'array' knows its size?

> hence the value it returns is computed at compile time
What you said is correct, but it's not the reason why `sizeof` is computed at compile-time. Operators are functions, they can be evaluated at run-time.

> I checked with 'Compiler Explorer'
Compiler Explorer is a great tool, but don't use it to prove anything. You found out that the compiler you selected computes `sizeof` at compile-time; that doesn't mean that the language requires this to happen.

> I think std::size/std::ssize work this way too
`std::size` and `std::ssize` don't use `sizeof`, but they can be computed at compile-time when passed an array.

> is it really meaningful to say that 'array' knows its size?
The type of the array has a size. When an array decays, it changes its type to a simple pointer, which doesn't have any information about the size. The size is a part of the type, not of the value. Once you learn about templates you'll understand how types can store information. Arrays don't use templates, but it should help nonetheless.
I somehow can't seem to wrap my head around the fact that a type's array can also store it's size, since everything is a number in the end? In an array, there doesn't seem to be any extra block allocated around it to store its size? How does this happen! Thank you very much for your time and patience! > Does this mean in C/C++, the mechanism by which types store the size are similar? You don't need templates for built-in arrays. Built-in types have their own properties and rules which can't be reproduced by using the language. The compiler knows how these types work, but they're not defined in any .cpp file or similar. I mentioned templates because they can be used to create custom types with attached information and to extract the size out of an array. Built-in types should function the same in C and C++. > there doesn't seem to be any extra block allocated around it to store its size? Types in C++ are only a help for the programmer and compiler. There are no types at run-time (There's an exception which doesn't matter now). Since the array's size is a part of the array's type, the size doesn't exist at run-time (Unless you use a very weird compiler that keeps the size for whatever reason). Ohh alright! That makes more sense now! These are just programing aids! Gotcha, Thanks! Hi, Alex and Nascardriver! Is it true that how we indexing array is actually indexing through pointer arithmetic? Yes, that's why this weird syntax works Thanks much but I'm still not clear about this case: int main() { int a[5] = {1, 2, 3, 4, 5}; std::cout << (a + 1) << '\n'; // Does array a decay to pointer in the line below? // Why a = a + 1 got an error? std::cout << a++ << '\n'; return 0; } > Does array a decay to pointer in the line below? That's an error. > Why a = a + 1 got an error? You can't assign to arrays. This is also the reason why `a++` doesn't work. What do you think about my code? Are there anythings should I change or simplify? But, I have a question. 
The "array" argument in the my three functions is hard to tell that is it an array or just a normal variable named "array" or pointer variable named "array". The name of those argument can tell us that those are arrays. But, it is just based on the name of argument, but I'm still not sure enough. So, I think that for the function argument we should write array syntax (array[]) instead of just "array". But, what do you think about this? - Wrap line 18+, 32+ in curly brackets, because they exceed 1 line. - Line 43, 44: Should be `constexpr`. Line 44 could use `std::ssize` (Or `std::size` if your compiler doesn't support `std::ssize` yet). > But, what do you think about this? That doesn't help. The caller knows the type of the variables they're passing. You can change your functions to use array syntax It has the same meaning, but indicates that the function wants an array. Because fixed-array decay into a pointer when passing it to the function, so, I can do this. But, is this way considerably a good way? Please be more specific about what you mean. Line 3 is fine. Line 6 should use `array[i]`. Line 13 should use `std::ssize` or `std::size`. Passing an array by pointer with a separate length parameter is how things used to be done. In modern C++, you're better off using std::array and templates to avoid any mismatch between the array and length parameters. Thanks Alex and Nascardriver! I really really appreciate your answers! I forget to make my length parameter to be const and I think it should be const. But, once again, thank you! Hi! In the conclusion to this chapter Alex wrote: "Pointers to const values are primarily used in function parameters (for example, when passing an array to a function) to help ensure the function doesn’t inadvertently change the passed in argument." 
But it seems that passing an array as const (a pointer pointing to a const value) doesn't fully ensure that the values won't be overwritten, since the values are treated as const only while accessed by the pointer the array is converted into. But nothing prevents another pointer from accessing and modifying them. There probably will be a more detailed explanation of how to handle this in the future chapters, looking forward to that. Line 9 produces undefined behavior, you're not allowed to modify an entity after casting away its constness. A `const_cast` should only be used if, for whatever reason, a `const` member function isn't marked as such. > Line 9 produces undefined behavior Hm, I don't see any warnings while compiling that snippet with all the flags turned on by '-Wall' (gcc 7.4.0). And that wasn't the point anyway. Even if my example doesn't make a lot of sense, what bothers me is that it seems I can't be really sure the const value(s) I'm passing by address to a function won't get changed. This is confusing and disturbing, especially considering the fact that this wasn't an issue before (when most of the examples we were dealing with previously were passed to functions by value). *Sorry for a bit of misinformation. The conclusion I've mentioned above belongs to the chapter 6.10 'Pointers and const', not this one. I went ahead and read that chapter, because it wasn't clear to me how to deal with const and pointers(in particular with values passed to functions by address), but it is still confusing. > I don't see any warnings There aren't warnings for everything. > I can't be really sure CPP is a low enough language to modify the entire program at run-time, you can't be sure about anything if you go by that. If you assume that the code is well-defined (Yours isn't), then `const` variables cannot be changed. Line 9 most likely changes the value of `array[0]` to `100`, but it might as well shut off your computer or start playing a song. 
I wondered why my snippet compiled at all; it seemed dangerous to do something like that, even to a newbie like me. Well, I won't call that reassuring, but thank you for clarifying this a bit.

Hi Alex, here in my code I can see that an array of char does not decay into a pointer, whereas an array of int type does. Why does this happen?

@std::cout::operator<< treats char* as strings.

Okay, thanks.

Hi Alex. You mentioned that arrays in structs and classes don’t decay... But I have some code here that shows me the array is decaying. What am I not understanding here?

I would suggest adding this code to clarify the effect of the address-of operator (&) on an array:

Hi, I can't explain what's going on with my code. So I am trying to print the length of a C-string and then output the string itself. My code is outputting the length fine, but it is not printing the string.

* Lines 6, 13, 22, 23: Initialize your variables with brace initializers. You used copy initialization.

You're modifying @ptr in line 10. In line 15, @ptr points to the 0-terminator.

Thanks for your reply. I appreciate it.

What is the problem with the above code? I am not getting the entire string as output. It is printing only the characters of the string before the space, not the characters after the space. Expected value of ptr1: RAM SHAM BAM. Present output: RAM.

* Lines 7, 18, 19, 20, 32, 33: Initialize your variables with brace initializers. You used copy initialization.
* Line 2: Initialize your variables with brace initializers.
* Use ++prefix unless you need postfix++.
* Don't use "using namespace".
* Use the auto-formatting feature of your editor.
* Line 22: Should be &&, not ||.
* Enable compiler warnings, read them, fix them.

Now, since <type>* and <type>[] as parameters just pass the pointer, I put things to the test with <type>[<length>]. And the result was this: I suppose that this method DOES copy the array as a whole into the function, or am I wrong?

Built-in arrays work strangely in C/C++.
In such a case, the array parameter is treated as a pointer to an array, and the size information is discarded. So the array isn't copied; just the pointer to the array is copied. If you want to retain the type information, pass the array by reference, or better, use std::array.

Not that this one is going to be important, but we could actually change the following to: "The variable array contains the address of the first element of the array, as if it were a pointer! You can see this in the following program:"

I'm not sure this is correct. That program demonstrates decay to a pointer, nothing more. An array variable no more contains an address than a structure variable contains the address of the structure, or an integer variable contains the address of the integer. Instead, an array variable contains the contents of the array, just like a structure variable contains the contents of the structure.

I agree. I've updated the lesson accordingly. Thanks for the correction.

Hey, I want to ask a pretty simple thing. Let us have a data structure like: I have the pnode A's left is B and right is C. A's parent is X. I want to swap B to be the parent of A, so B->(A)->C. When I do this, what I want is that target's parent will point to X's parent while X remains X, but the node X changes into X's parent too. So how do I point target's parent to X's parent? :( Thank you in advance.

Hi guys, can you tell me what this means in more detail? (strings being an exception because they’re null terminated). And how could I pass the length of an array in a separate parameter?

Because strings always end in a '\0' character, we don't necessarily need to know the length of a string to know where it ends; instead, we can keep going until we run into the '\0'. This doesn't work for other arrays, because those arrays don't typically have an element that signals termination.

> how could I pass the length of an array in a seperate parameter?

Add an "int length" parameter to your function.
If you know the array's length already, you can pass it as an argument. If you don't know it but your array is a fixed array, you can use the sizeof(array)/sizeof(array[0]) trick to get the length. If you have a dynamic array and you don't know the length, you're out of luck.

Thank you sir, that was superb.
https://www.learncpp.com/cpp-tutorial/6-8-pointers-and-arrays/
CC-MAIN-2020-29
refinedweb
4,434
73.68
Pandas: define your own groupby aggregation functions

The .agg method does aggregation, as it sounds, and you can pass in the names of aggregation methods, Python aggregations, NumPy reduce functions — and you can also define your own function. The beauty of .agg is that you can do multiple aggregations at the same time. So let's group this data by Student and compute the max, min, grade (based on a condition), and the difference between max and min scores for each student.

Create a custom function grade that takes a series of scores as a parameter, computes the mean, and returns grade A if the student's mean score is 50 or higher, otherwise grade B:

def grade(x):
    mean_score = x.mean()
    return 'A' if mean_score >= 50 else 'B'

Now pass multiple aggregation function names along with our custom function grade and a lambda function to compute the difference between max and min score:

df.groupby('col').value.agg(max_score='max',
                            min_score='min',
                            grade=grade,
                            diff_max_min=lambda x: x.max() - x.min())

Output:
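Here is a runnable version of the pattern above with a small made-up DataFrame (the column names Student/Score and the data values are assumptions for illustration):

```python
import pandas as pd

# Hypothetical data: each row is one test score for a student.
df = pd.DataFrame({
    'Student': ['Amy', 'Amy', 'Bob', 'Bob'],
    'Score': [80, 60, 30, 50],
})

def grade(x):
    # x is the Series of scores for one student
    mean_score = x.mean()
    return 'A' if mean_score >= 50 else 'B'

# Named aggregation: each keyword becomes a column in the result,
# mixing a built-in name, a custom function, and a lambda.
result = df.groupby('Student').Score.agg(
    max_score='max',
    min_score='min',
    grade=grade,
    diff_max_min=lambda x: x.max() - x.min(),
)
```

Amy's mean is 70, so she gets grade A; Bob's mean is 40, so he gets grade B — and both rows carry their max, min, and max-min spread alongside.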
https://kanoki.org/2022/07/02/pandas-define-your-own-groupby-aggregation-functions/
Inside headers, these macros should be replaced by something like this:

#ifdef __cplusplus
extern "C" {
#endif

// ... some code

#ifdef __cplusplus
}
#endif

Inside .cpp files, the #ifdef dance above is not needed, because, well, we'll be compiling in C++ mode. :-)

Try run for 5d7ef44fa6da is complete. Detailed breakdown of the results available here: Results (out of 16 total builds): exception: 16. Builds (or logs if builds failed) available at:

Created attachment 666386 [details] Script to replace PR_BEGIN_EXTERN_C and PR_END_EXTERN_C

Created attachment 666387 [details] [diff] [review] Remove PR_BEGIN_EXTERN_C and PR_END_EXTERN_C from tree

Ignore the try run results posted above. I cancelled my earlier try run because I keep prematurely pushing to try before building/testing on my machine to catch silly mistakes; fortunately, I cancel them before they run more than a few . Try results will be at

Try run for 60ff6f4ebdb5 is complete. Detailed breakdown of the results available here: Results (out of 66 total builds): exception: 25, success: 32, failure: 9. Builds (or logs if builds failed) available at:

Comment on attachment 666387 [details] [diff] [review] Remove PR_BEGIN_EXTERN_C and PR_END_EXTERN_C from tree
Review of attachment 666387 [details] [diff] [review]:
-----------------------------------------------------------------
::: dbm/include/mcom_db.h
@@ +415,5 @@
> #endif
>
> +#ifdef __cplusplus
> +}
> +#end

I believe this should be #endif here, as try server results indicate. :-)

Created attachment 666419 [details] [diff] [review] Remove PR_BEGIN_EXTERN_C and PR_END_EXTERN_C from tree (fixed)

Oops, I really should be looking at my diffs, since I thought I pushed the corrected version. I made the incorrect assumption that my revert in source control and re-application of the script caught everything. Another lesson learned! Thanks, Ehsan.
Comment on attachment 666419 [details] [diff] [review] Remove PR_BEGIN_EXTERN_C and PR_END_EXTERN_C from tree (fixed)
Review of attachment 666419 [details] [diff] [review]:
-----------------------------------------------------------------
Looks good, thanks!

Sorry guys, you overlooked that the directory mozilla/dbm is owned by the NSS library, and you must not change it except by upgrading to a new NSS release. The upstream for any changes to this directory is the NSS project. (I guess we'll have to make that clearer by adding a file to that directory....?) How shall we proceed? Either you try to get review for the changes to file dbm/include/mcom_db.h TODAY and we check it in to upstream NSS, or it should be backed out from mozilla-central. The reason for the urgency is that we're about to create an NSS 3.14 release (by Thursday). Note that I'll back out the mozilla/dbm portion if nothing happens by tomorrow (it will be backed out / overwritten automatically anyway, as soon as we upgrade to the 3.14 final release).

Sigh... I'm not sure how easy it would be to get reviews in the upstream project. My experience with review turn-around time there has mostly been weeks, not days. So I guess I'll back out those parts. :(

Ehsan, there should be no Mozilla C++ code including these DBM headers anyway, so this should not affect bug 796941. So, while I'd be happy to r+ your patch and check it into the NSS tree, I don't think it is necessary. Thank you very much for backing out from mozilla-central! I propose to simply file a new NSS bug, attach the patch, have bsmith r+ it, and we'll check it in to NSS. Thanks.

Sure, did that in bug 799717.

(In reply to Ehsan Akhgari [:ehsan] from comment #14) > Backout: Merge of backout:
https://bugzilla.mozilla.org/show_bug.cgi?id=795507
There is a line that sets up the LCD for an I2C address, number of rows and columns (LiquidCrystal_I2C lcd(0x27, 16, 2)).

That's the constructor. That would be something like

Code: [Select]
lcd.begin(16, 2);

There is a line that sets up the LCD for an I2C address, number of rows and columns (LiquidCrystal_I2C lcd(0x27, 16, 2)). That would be the link I provided.

I have four 16x2 LCDs that I have purchased over the last 6 months from four different places, and none of them work with this library.

Code: [Select]
#include <Wire.h>
#include <LiquidCrystal_I2C.h>

#define BACKLIGHT_PIN 13

LiquidCrystal_I2C lcd(0x27); // ...

  // Switch on the backlight
  pinMode ( BACKLIGHT_PIN, OUTPUT );
  digitalWrite ( BACKLIGHT_PIN, HIGH );
  lcd.begin(16,2);
  // ...
}

I used Nick Gammon's I2C scanner, in addition to the fact that I already knew and had the displays working. The address is 0x27, and the LCD is the only thing connected to the Arduino UNO (nothing else on the I2C bus or anywhere else on the Arduino). If it should work, what do I have to do to make it work? Since the link to the fm library is on the Arduino site (as opposed to being from a forum post), I would prefer to use it.

If I remember correctly, a PCF8574, maybe a PCF8574A (is that an option?). I would need to double-check at home tonight. Along those lines, the chip pinout would also have to match the library.

Code: [Select]
// flags for backlight control
#define LCD_BACKLIGHT 0x08
#define LCD_NOBACKLIGHT 0x00
#define En B00000100  // Enable bit
#define Rw B00000010  // Read/Write bit
#define Rs B00000001  // Register select bit
https://forum.arduino.cc/index.php?topic=336870.msg2325619
Word2Vec is an implementation of the Skip-Gram and Continuous Bag of Words (CBOW) neural network architectures. At its core, the skip-gram approach is an attempt to characterize a word, phrase, or sentence based on what other words, phrases, or sentences appear around it. In this post, I will provide a conceptual understanding of the inputs and outputs of the skip-gram architecture.

Skip-Gram's Purpose

The purpose of the Skip-Gram architecture is to train a system to represent all the words in a corpus as vectors. Given a word, it aims to find the probability that the word will show up near another word. From this kind of representation, we can calculate similarities between words or even the correct response to an analogy test. For example, a typical analogy test might consist of the following:

Tape : Sticky :: Oil : X

In this case, an appropriate response for the value of X might be "Slippery." The output for this model, which is described in detail below, results in a vector of the length of the vocabulary for each word. A practitioner should be able to calculate the cosine distance between two word vector representations to determine similarity. Here is a simple Python example, where we assume the vocabulary size is 6 and we are trying to compare the similarity between two words:

from scipy.spatial.distance import cosine

tape = [0.1, 0.2, 0.1, 0.2, 0.2, 0]
oil = [0.2, 0.1, 0, 0.2, 0.2, 0]
cosine_distance = cosine(tape, oil)

In this case, the cosine distance ends up being 0.1105008200066786. Guessing the result of an analogy simply uses vector addition and subtraction and then determines the closest word to the resulting vector.
For example, to calculate the vector in order to guess the result of an analogy, we might do the following:

import numpy as np

tape = np.array([0.1, 0.2, 0.1, 0.2, 0.2, 0.1])
oil = np.array([0.2, 0.1, 0, 0.2, 0.2, 0.1])
sticky = np.array([0.2, 0.0, 0.05, 0.02, 0.0, 0.0])
result_array = tape + oil - sticky

Then you can simply find the closest word (via cosine distance or otherwise) to result_array, and that would be your prediction. Both of these examples should give you a good intuition for why skip-gram is incredibly useful. So let's dig into some of the details.

Defining the Skip-Gram Structure

To make things easy to understand, we are going to take a look at another example:

Duct tape works anywhere. Duct tape is magic and should be worshiped.

In the real world, a corpus that you want to train will be large: at least tens of thousands of words, if not larger. If we trained the example above in the real world, it wouldn't work because it isn't large enough, but for the purposes of this post, it will do.

Preparing the Corpus for Training

Before you get to the meat of the algorithm, you should do some preparatory work with the content, just as you would for most other NLP-oriented tasks. One might think immediately to remove stopwords — words that are common and have little subject-oriented meaning (e.g. the, in, I). This is not the case in skip-gram, as the algorithm relies on understanding word distance in a paragraph to generate the right vectors. Imagine if we removed stopwords from the sentence "I am the king of the world." The original distance between king and world is 3, but by removing stopwords, the distance between those two words changes to 1. We've fundamentally changed the shape of the sentence.

However, we probably do want to conduct stemming in order to get words down to their core root (stem). This is very helpful in ensuring that two words that have the same stem (ex.
‘run’ and ‘running’) end up being seen as the same word ('run') by the computer. A simple example using NLTK in Python is provided below.

from nltk import stem

# Initialize an empty list
my_stemmed_words_list = []

# Start with a list of words.
word_list = ['duct', 'tape', 'works', 'anywhere', 'magic', 'worshiped']

# Instantiate a stemmer object.
my_stemmer_object = stem.snowball.EnglishStemmer()

# Loop through the words in word_list
for word in word_list:
    my_stemmed_words_list.append(my_stemmer_object.stem(word))

The result is as follows:

Building the Vocabulary

To build the final vocabulary that will be used for training, we generate a list of all the distinct words in the text after we have stemmed appropriately. To make this example easier to follow, we will sort our vocabulary alphabetically. Sorting the vocabulary in real life provides no benefit and, in fact, can just be a waste of time; in a scenario where we have a 1-billion-word vocabulary, we can imagine the sorting taking a long time. So without any further delay, our vocabulary ends up becoming the following:

[anywher, duct, magic, tape, work, worship]

The following stopwords would also be included: [is, and, should, be]. I'm leaving these out to keep this example simple and small, but in reality, those would be in there as well.

The Training Set

Just like with other statistical learning approaches, you'll need to develop some methodology for splitting your data into training, validation, and testing sets. In our specific example, we'll make 2/3 of the total vocabulary our training set through a random selection. So let's suppose our training set ends up being the following after a random selection:

T = [anywher, duct, magic, work]

This means we have 4 training samples t1 through t4 (T={t1,t2,t3,t4}). The vectors used to feed the input layer of the network are as follows:

The Input

Suppose your only goal was to find the probability that "work" shows up near "tape."
You can’t just throw one example at the Neural Network (NN) and expect to get a result that is meaningful. When these systems are trained, you will eventually be pushing the bulk of the vocabulary (your training set) into the input layer and training the system, regardless of the specific question you may be asking. Our input layer is a vector that is the length of the vocabulary (V), and we have four training samples, one for each word. So the total set of data pushed through the NN during training time is of size VxT (6x4). During training, one of the samples in T is input into the system at a time. It is then up to the practitioner to decide whether to use online training or batch inputs before back-propagating. Back-propagation is discussed in our back-propagation blog post, which will be published soon. For now, don't worry about those details; the point here is to conceptually grasp the approach. The insertion into the input layer looks something like the following diagram:

Each sample of array length V (6) represents a single word in the vocabulary by its index location in the unique word vocabulary.

So let's review our objective here. The objective of the skip-gram model, in the aggregate, is to develop an output array that describes the probability that a word in the vocabulary will end up "near" the target word. "Near," as defined by many practitioners of this approach, is a zone of c words before and after the target word. This is referred to as the context area or context zone. So in the example below, if the context size is 2 (c=2) and our target word is "magic," the words in the context area are C, shown below:

C(‘magic’ | c = 2) = [Duct, tape, worship]

The Output

To get to the point where we have a single array that represents the probability of finding any other word in the vocabulary in the context area of the target word, we need to understand exactly what the NN output looks like. To illustrate, I’ve provided the diagram below.
In this diagram, if we choose a context size of one, it means we care about words that appear only directly before and directly after the target word. The output layer includes two distinct sets of output values: each word in the vocabulary receives two values for our context size of one. If our context size were 5, we'd end up with 60 output values. To get the score for an individual word in the context area, we can simply sum up the values. So, for example, the score for "duct" (v2) showing up within the context area is 0.22 (0.2 + 0.02).

You may have noticed we are calling the results scores instead of probabilities. That is because the raw output of the skip-gram architecture does not produce probabilities that add up to 1. To convert the scores to probabilities, you must perform a softmax calculation to scale them. The purpose of this post isn't to describe softmax, so we are just going to pretend the values in the diagram are probabilities.

The Error Calculation

At the end of forward propagation (stay tuned for the forward-propagation blog we have coming up next), you need to calculate an error in order to backpropagate. So how do you do that? It's actually pretty easy. We already know the actual probabilities of finding a word in the context area of a target word based on our corpus. So, for example, if we wanted to know the error in the probability of finding "duct" given "magic" as the target word, we would do the following:

SUM of v2 (duct index) = 0.2 + 0.02 => 0.22

In our corpus, the actual probability of finding "duct" in the context area around "magic" is 100%, because "magic" is only used once and "duct" is within the context zone. So the absolute error in probability is 1 - 0.22 = 0.78, and the mean squared error (MSE) is 0.61. This error is used in backpropagation, which recalculates the input and output weight matrices.
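The pieces above — one-hot inputs, softmax over the output scores, and the squared error at "duct"'s index — can be sketched in a few lines of Python (the score values are made up to mirror the diagram; only "duct"'s 0.22 comes from the post):

```python
import math

# Vocabulary from the post, sorted alphabetically.
vocabulary = ['anywher', 'duct', 'magic', 'tape', 'work', 'worship']

def one_hot(word, vocab):
    # The input layer encoding: a length-V vector with a 1 at the word's index.
    vec = [0.0] * len(vocab)
    vec[vocab.index(word)] = 1.0
    return vec

def softmax(scores):
    # Scale raw output scores into a distribution that sums to 1.
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Target word "magic" enters the network as a one-hot vector.
x = one_hot('magic', vocabulary)

# Hypothetical summed output scores per vocabulary word;
# "duct" gets 0.22 as in the post, the rest are illustrative.
scores = [0.1, 0.22, 0.0, 0.2, 0.18, 0.3]
probs = softmax(scores)

# Error at "duct"'s index if its true probability is 1.0:
duct_score = scores[vocabulary.index('duct')]
mse = (1.0 - duct_score) ** 2  # (1 - 0.22)^2 = 0.6084, the post's ~0.61
```

Note that a real implementation would apply softmax before computing the error; the post simplifies by treating the raw scores as probabilities, and the sketch follows that simplification.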
Conclusion

What I have given you is a conceptual understanding of what the input vectors and the output vectors look like, but there are other components of the algorithm that will be explained in upcoming blog posts:

- There is a weight matrix between the inputs and the hidden layer.
- There is a weight matrix between the hidden layer and the outputs.

The input weight matrix (1) is the matrix that becomes the vectors for each word, where each row is a word and the vector is of length H (H being the number of nodes in the hidden layer). The output vectors simply give the score for a word being in the context zone and are really not used for anything other than training and error calculation. It is important to understand that a practitioner can choose any number of H nodes for the hidden layer; it is a hyperparameter in training. Generally, the more hidden layer nodes you have, the more expressive (but also the more computationally expensive) a vector is. The output weight matrices are not used outside the context of training. To learn more about these details and what the process of forward propagation is, please check out our forward propagation blog post, which is coming up!
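As an appendix, readers who want to verify the cosine-distance figure from the opening example without installing SciPy can use this pure-Python sketch of the same computation:

```python
import math

def cosine_distance(a, b):
    # 1 minus cosine similarity, matching scipy.spatial.distance.cosine
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

tape = [0.1, 0.2, 0.1, 0.2, 0.2, 0]
oil = [0.2, 0.1, 0, 0.2, 0.2, 0]
dist = cosine_distance(tape, oil)  # matches the ~0.1105 quoted earlier
```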
https://www.districtdatalabs.com/nlp-research-lab-part-2-skip-gram-architecture-overview
$ cnpm install @cara/porter

Porter is a consolidated browser module solution which provides a module system for web browsers that is both CommonJS and ES Modules compatible. Here are the features that make Porter different from (if not better than) other module solutions:

- import is transformed with either Babel or TypeScript. import() is not fully supported yet, but there's an equivalent require.async(specifier, mod => {}) provided.
- Module (file) and Package (directory with package.json and files) built-in.
- No watch => bundle loop necessary. With Porter the middleware, .css and .js requests are intercepted (and processed if changed) correspondingly.

This document is mainly about Porter the middleware. To learn about Porter CLI, please visit the corresponding folder.

Porter the middleware is compatible with Koa (both major versions) and Express:

const Koa = require('koa')
const Porter = require('@cara/porter')

const app = new Koa()
const porter = new Porter()
app.use(porter.async())

// koa 1.x
app.use(porter.gen())

// express
app.use(porter.func())

With the default setup, browser modules in the ./components folder are now accessible as /path/to/file.js or /${pkg.name}/${pkg.version}/path/to/file.js. Take demo-cli for example; the file structure shall resemble that of

In ./public/index.html, we can now add CSS and JavaScript entries:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>A Porter Demo</title>
  <!-- CSS entry -->
  <link rel="stylesheet" type="text/css" href="/app.css">
</head>
<body>
  <h1>A Porter Demo</h1>
  <!-- JavaScript entry -->
  <script src="/app.js?main"></script>
</body>
</html>

The extra ?main querystring might seem a bit confusing at first glance. It tells Porter the middleware to bundle the loader when /app.js?main is accessed. The equivalent <script> entry of the above is:

<script src="/loader.js" data-</script>

Both <script>s work as the JavaScript entry of the current page.
In ./components/app.js, there are the good old require and exports:

// es5
'use strict'

const jQuery = require('jquery')  // => ./node_modules/jquery/dist/jquery.js
const React = require('react')    // => ./node_modules/react/index.js
const util = require('./util')    // => ./components/util.js or ./components/util/index.js

and the fancy new import and export:

import jQuery from 'jquery'
import * as React from 'react'
import util from './util'

In the CSS entry, there's @import:

@import "prismjs/themes/prism.css";
@import "./base.css";

cache={ dest }

To accelerate responses, Porter caches the following things:

- @imports being processed.

By default, these files are stored in the folder specified by dest='public'. In some circumstances, we may need to put cache files into a different folder, hence the extra cache={ dest } option:

const porter = new Porter({
  cache: { dest: '.porter-cache' },  // where the cache files go
  dest: 'public'  // where the compiled assets will be after porter.compileAll()
})

If cache={ dest } is undefined, cache files are put into dest='public' as well. It is recommended that the directory containing cache files be served statically, which makes the source maps accessible. Here is an example in Koa:

const serve = require('koa-static')
app.use(serve('.porter-cache'))
app.use(serve('public'))

dest='public'

The directory that contains the compile results, and the cache files as well if cache={ dest } is undefined.

paths='components'

The directory or directories that contain browser modules. For example, if we need to import modules from both the ./components directory and ./node_modules/@corp/shared-components:

const porter = new Porter({
  paths: ['components', 'node_modules/@corp/shared-components']
})

root=process.cwd()

This option should never be necessary. Options like paths and dest are all resolved against root, which defaults to process.cwd(). If the project root is different from process.cwd(), try setting root.
source={ serve, root }

Like cache={}, the source={} option is an object, which contains two properties that are both related to source maps. In the development phase, the source maps are generated while Porter processes requests. In those source map files, sourcesContent is stripped and sourceRoot is set to /. Therefore, to make source mapping actually take place in devtools, we need to enable source={ serve } during development as well:

const porter = new Porter({
  source: {
    serve: process.env.NODE_ENV == 'development'
  }
})

Regarding source={ root }: it is not used until the project goes into production. When the JavaScript and CSS code is compiled (with porter.compileAll() or so), source maps get generated with sourcesContent stripped, and sourceRoot cannot be / in production because source={ serve } should be off in production. Therefore, source={ root } shall be set to the actual origin that is able to serve the source securely. In our practice, source={ root } is usually set to.

transpile={ only }

By default, Porter checks the transpiler configs of the project (that is, the root package) only. When it comes to dependencies in ES6+ (which shouldn't be common in npm land), the transpile logic shall be activated for them as well. To specify the packages that need to be transpiled, use transpile={ only }:

const porter = new Porter({
  transpile: {
    only: ['some-es6-module']
  }
})

If the module being loaded is listed in transformOnly, and a .babelrc within the module directory is found, Porter will process the module source with Babel too, the way it handles components. Don't forget to install the presets and plugins listed in the module's .babelrc.

It is possible (and also recommended) to disable Porter in production, as long as the assets are compiled with porter.compileAll().
To compile the assets of the project, simply call porter.compileAll({ entries }):

const porter = new Porter()
porter.compileAll({ entries: ['app.js', 'app.css'] })
  .then(() => console.log('done'))
  .catch(err => console.error(err.stack))

Porter will compile the entries and their dependencies, and bundle them together afterwards. How the modules are bundled is a simple yet complicated question. Here's the default bundling strategy:

- entries: ['app.js', 'app2.js'] are compiled into two different bundles.
- jquery/3.3.1/dist/jquery.js
- lodash/4.17.10/~bundle-36bdcd6d.js

Assume the root package is:

{
  "name": "@cara/demo-cli",
  "version": "2.0.0"
}

and the content of ./components/app.js is:

'use strict'

const $ = require('jquery')
const throttle = require('lodash/throttle')
const camelize = require('lodash/camelize')
const util = require('./util')

// code

After porter.compileAll({ entries: ['app.js'] }), the files in ./public should be:

public
├── @cara
│   └── demo-app
│       └── 2.0.0-3
│           ├── app.js
│           └── app.js.map
├── jquery
│   └── 3.3.1
│       └── dist
│           ├── jquery.js
│           └── jquery.js.map
└── lodash
    └── 4.17.10
        ├── ~bundle.js
        └── ~bundle.js.map

For different kinds of projects, different strategies shall be employed. We can tell Porter to bundle dependencies at a certain scope with porter.compileEntry():

// default
porter.compileEntry('app.js', { package: true })

// bundle everything
porter.compileEntry('app.js', { all: true })

Let's start with app.js, which might seem a bit confusing at first glance. It is added to the page directly:

<script src="/app.js?main"></script>

And suddenly you can write app.js as Node.js Modules or ES Modules right away:

import mobx from 'mobx'
const React = require('react')

How can the browser know where to import MobX or require React when executing app.js? The secret is that entries that have main in the querystring (e.g.
app.js?main) will be prepended with two things before the actual app.js when served by Porter: You can import app.js explicitly if you prefer:

<script src="/loader.js"></script>
<script>porter.import('app')</script>

<!-- or with the shortcut -->
<script src="/loader.js" data-</script>

Both ways work. To make app.js consumable by the loader, it will be wrapped into Common Module Declaration format on the fly:

define(id, deps, function(require, exports, module) {
  // actual main.js content
})

- id is deduced from the file path.
- dependencies is parsed from the factory code with js-tokens.
- factory (the anonymous function) body is left untouched or transformed with Babel, depending on whether .babelrc exists or not.

If ES Module is preferred, you'll need two things:

- a .babelrc file under your components directory.
- .babelrc.

Back to the loader: after the wrapped app.js is fetched, it won't execute right away. The dependencies need to be resolved first. For relative dependencies (i.e. dependencies within the same package), it's easy to just resolve them against module.id. For external dependencies (in this case, react and mobx), node_modules are looked up. The parsed dependencies form two trees: one for modules (file by file), one for packages (folder by folder). When the entry module (e.g. app.js) is accessed, a package lock is generated and prepended before the module to make sure the correct module path is used.
Take heredoc's (simplified) node_modules for example:

➜  heredoc git:(master) ✗ tree node_modules -I "mocha|standard"
node_modules
└── should
    ├── index.js
    ├── node_modules
    │   └── should-type
    │       ├── index.js
    │       └── package.json
    └── package.json

It will be flattened into:

{
  "should": {
    "6.0.3": {
      "main": "./lib/should.js",
      "dependencies": {
        "should-type": "0.0.4"
      }
    }
  },
  "should-type": {
    "0.0.4": {}
  }
}

Besides the package lock, there are several basic loader settings (which are all configurable via new Porter()). In the development phase, Porter configures the loader with the following settings:

{
  baseUrl: '/',
  package: { /* generated from package.json of the project */ }
}

So here is app.js?main expanded:

// GET /loader.js returns both Loader and Loader Config.
;(function() { /* Loader */ })

Object.assign(porter.lock, /* package lock */)

// The module definition and the import kick off.
define(id, dependencies, function(require, exports, module) { /* app.js */ })
porter.import('app')

Here's the actual interaction between browser and Porter:

The stylesheets part is much easier, since Porter processes CSS @imports in the first place. Take the following app.css for example:

@import "cropper/dist/cropper.css";
@import "common.css";

body { padding: 50px; }

When the browser requests app.css:

- postcss-import processes all of the @imports;
- autoprefixer transforms the bundle;
- Porter then responds with the processed CSS (which has all @imports replaced with actual file contents).
https://developer.aliyun.com/mirror/npm/package/@cara/porter
15. Re: Seam Transaction is not active: tx=TransactionImple
Stuart Douglas, Jul 25, 2009 12:28 AM (in response to Jubril Adisa)

oops, stuffed the formatting:.

16. Re: Seam Transaction is not active: tx=TransactionImple
Arbi Sookazian, Jul 25, 2009 1:44 AM (in response to Jubril Adisa)

Ok, thx for clarifying, I didn't know that. Every time you turn around there's just one more rule to remember in Seam! So that explains why GKing used @PersistenceContext instead of @In here:

@Stateful
@Scope(SESSION)
@Name("bookingList")
@Restrict("#{identity.loggedIn}")
@TransactionAttribute(REQUIRES_NEW)
public class BookingListAction implements BookingList, Serializable
{
    private static final long serialVersionUID = 1L;

    @PersistenceContext
    private EntityManager em;

    @In
    private User user;

    @DataModel
    private List<Booking> bookings;

    @DataModelSelection
    private Booking booking;

    @Logger
    private Log log;

    @Factory
    @Observer("bookingConfirmed")
    public void getBookings()
    {
        bookings = em.createQuery("select b from Booking b where b.user.username = :username order by b.checkinDate")
                     .setParameter("username", user.getUsername())
                     .getResultList();
    }

    public void cancel()
    {
        log.info("Cancel booking: #{bookingList.booking.id} for #{user.username}");
        Booking cancelled = em.find(Booking.class, booking.getId());
        if (cancelled != null) em.remove(cancelled);
        getBookings();
        FacesMessages.instance().add("Booking cancelled for confirmation number #0", booking.getId());
    }

    public Booking getBooking()
    {
        return booking;
    }

    @Remove
    public void destroy() {}
}

And so is that why he used @PersistenceContext exclusively for all SFSBs in that app? And would that be the only scenario where you would not use @In to inject EntityManager in a Seam app?

17.
Re: Seam Transaction is not active: tx=TransactionImple
Arbi Sookazian, Jul 25, 2009 1:48 AM (in response to Jubril Adisa)

Now that I thought about it, this part is worrisome:

However as the Seam-managed persistence context is propagated to any component within the conversation, it will be propagated to methods marked REQUIRES_NEW.

Wouldn't it be better if there was a deployment exception or runtime warning or error in this case? Or does Seam just ignore the TransactionAttributeType completely and treat it as REQUIRED?

18. Re: Seam Transaction is not active: tx=TransactionImple
Felipe Jaekel, Oct 8, 2009 3:30 PM (in response to Jubril Adisa)

Jubril Oyesiji wrote on May 01, 2009 21:24:
I keep getting the following exception when attempting to retrieve data from my Database

15:17:30,007 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: null
15:17:30,007 ERROR [JDBCExceptionReporter] Transaction is not active: tx=TransactionImple < ac, BasicAction: -3f57f10b:911:49fb4a4b:3e status: ActionStatus.ABORT_ONLY >; - nested throwable: (javax.resource.ResourceException: Transaction is not active: tx=TransactionImple < ac, BasicAction: -3f57f10b:911:49fb4a4b:3e status: ActionStatus.ABORT_ONLY >)

I'm using Seam 2.2 with MySQL + JTA, but no EJB, and getting this exception too. I've even tried to add @TransactionAttribute(REQUIRES_NEW). Any ideas?

Thanks for any help,
Felipe

19. Re: Seam Transaction is not active: tx=TransactionImple
Yan L, Oct 27, 2009 3:15 PM (in response to Jubril Adisa)

Hi,

The answer is in the first reply. You are using a transaction for too long. How long is your treatment? If you read JSR 220 you can see that a transaction will never live forever:

Chapter Timer Service: Transaction

An enterprise bean typically creates a timer within the scope of a transaction. If the transaction is then rolled back, the timer creation is rolled back. An enterprise bean typically cancels a timer within a transaction.
If the transaction is rolled back, the container rescinds the timer cancellation. The timeout callback method typically has transaction attribute REQUIRED or REQUIRES_NEW (Required or RequiresNew if the deployment descriptor is used to specify the transaction attribute). If the transaction is rolled back, the container retries the timeout.

When you use @TransactionAttribute(REQUIRES_NEW) within another transaction, the original transaction waits until the new transaction ends. So if your new transaction lives more than 5 minutes (the default timeout of JBoss) you will have a transaction timeout. If you were using EJB you could manage transaction demarcation with the TransactionAttribute so that your transaction would never live more than 5 minutes. The other choice is to extend the timeout...

20. Re: Seam Transaction is not active: tx=TransactionImple
forge yan, Mar 25, 2010 11:51 AM (in response to Jubril Adisa)

Today I have this problem too. I did this to fix it: drop the table, and create the table again!

21. Re: Seam Transaction is not active: tx=TransactionImple
Bill Evans, Aug 17, 2010 7:10 PM (in response to Jubril Adisa)

Interesting, though not very enlightening thread. In our system, needless to say, we have the same problem. However, in my case I have some idea how to recreate it:

1) Issue a set of queries that I know are going to take more than 5 minutes, all within one transaction. The point at which the query set hangs will be copying to a MySQL TMP data set.
2) Try to cancel the set of queries: this is effectively initiated via the org.hibernate.Session.cancelQuery method call.
3) The transaction finally aborts, and when it does it throws the JDBCExceptionReporter "Transaction is not active".
4) Thereafter, later queries sometimes fail with the same error, but mostly they work.

Interestingly, the value of BasicAction in the exception is always the same. So, my theory is that step 3 somehow renders some re-usable transaction object damaged in some way.
Some flag that leaves it in ActionStatus.ABORT_ONLY maybe? Then when it gets re-used again it causes the same exception. Any comments?

22. Re: Seam Transaction is not active: tx=TransactionImple
Murali Kumar, Jan 10, 2011 1:02 PM (in response to Jubril Adisa)

Anybody solved this error? Please folks, help me to resolve this.

23. Re: Seam Transaction is not active: tx=TransactionImple
Diego Borda, May 31, 2011 7:43 AM (in response to Jubril Adisa)

In my experience this is not a Hibernate, JBoss or MySQL exception (although in some cases it might be, I don't know). This happens to me whenever I have a NullPointerException somewhere in a transaction. My guess would be that the NullPointerException throws the transaction into a "not active" state. I usually print out a couple of method calls and find my NullPointerException. Hope this helps someone.

BTW this works for me in SEAM 2.2.0.GA, JBOSS 5.1 and MySQL 5.5.8

24. Re: Seam Transaction is not active: tx=TransactionImple
Pierpaolo Piccoli, Apr 13, 2012 9:57 AM (in response to Jubril Adisa)

I solved it by replacing the tag <h:commandLink> with <s:link> in the xhtml page. Don't know why, maybe a phase problem...
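The timeout mechanics Yan L describes earlier in the thread — an outer transaction forced to wait on a long-running REQUIRES_NEW call until its own deadline passes — can be modelled abstractly. This Python sketch is purely illustrative and greatly simplified; real JTA/JBoss transaction management is far more involved, and the class and timings here are invented for the demonstration:

```python
import time

class TransactionTimeout(Exception):
    pass

class Transaction:
    """Toy model of a timed transaction: work attempted after the
    deadline marks the transaction ABORT_ONLY and raises.

    JBoss's default transaction timeout is 5 minutes; it is shrunk
    here so the effect is visible in a test run.
    """
    def __init__(self, timeout_seconds):
        self.deadline = time.monotonic() + timeout_seconds
        self.status = "ACTIVE"

    def check(self):
        # Called whenever the transaction tries to do more work.
        if time.monotonic() > self.deadline:
            self.status = "ABORT_ONLY"
            raise TransactionTimeout("Transaction is not active")

outer = Transaction(timeout_seconds=0.05)
time.sleep(0.1)        # stands in for a nested call that runs too long
try:
    outer.check()      # the outer transaction resumes after its deadline
except TransactionTimeout:
    pass
```

The point mirrors the thread: the outer transaction did nothing wrong itself; it simply outlived its deadline while waiting, and from then on it can only abort — matching the ActionStatus.ABORT_ONLY seen in the exception.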
https://community.jboss.org/message/729893
ASP.NET and .NET from a new perspective

My first Visual Studio Add-in!

Creating add-ins is pretty simple, once you get used to the CommandBar model it is using, which is apparently a general Office suite extensibility mechanism.

Anyway, let me first explain my motivation for this. It started out as an academic exercise, as I have always wanted to dip my feet in a little VS extensibility. But I thought of a legitimate need for an add-in, at least in my personal experience, so it took on new life. But I figured I can't be the only one who has felt this way, so I decided to publish the add-in, and host it on GitHub (VSNewFile on GitHub) hoping to spur contributions.

Here's the problem I wanted to solve. You're working on a project, and it's time to add a new file to the project. Whatever it is - a class, script, html page, aspx page, or what-have-you, you go through a menu or keyboard shortcut to get to the "Add New Item" dialog. Typically, you do it by right-clicking the location where you want the file (the project or a folder of it):

This brings up a dialog that contains, well, every conceivable type of item you might want to add. It's all the available item templates, which can result in anywhere from a ton to a veritable sea of choices. To be fair, this dialog has been revamped in Visual Studio 2010, which organizes it a little better than Visual Studio 2008, and adds a search box. It also loads noticeably faster.

To me, this dialog is just getting in my way. If I want to add a JavaScript script to my project, I don't want to have to hunt for the script template item in this dialog. Yes, it is categorized, and yes, it now has a search box. But still, all this UI to swim through when all I need is a new file in the project. I will name it. I will provide the content, I don't even need a 'template'. VS kind of realizes this. In the add menu in a class library project, for example, there is an "Add Class..." choice.
But all this really does is select that project item from the dialog by default. You still must wait for the dialog, see it, and type in a name for the file. How is that really any different than hitting F2 on an existing item? It isn't.

What I often find myself doing, just to avoid going through this dialog, is to copy and paste an existing file, rename it, then "CTRL-A, DEL" the content. In a few short keystrokes I've got my new file. Even if the original file wasn't the right type, it doesn't matter - I will rename it anyway, including the extension. It works well enough if the place I am adding the file to doesn't have much in it already. But if there are a lot of files at that level, it sucks, because the new file will have the name "Copy of xyz", causing it to be moved into the 'C' section of the alphabetically sorted items, which might be far, far away from the original file (and so I tend to try and copy a file that starts with 'C' *evil grin*).

To be completely fair I should at least mention this feature. I'm not even sure if this is new in VS 2010 or not (I think so). But it allows you to export a project item or items, including potential project references required by it. Then it becomes a new item in the available 'installed templates'. No doubt this is useful to help bootstrap new projects. But that still requires you to go through the 'New Item' dialog.

So hopefully I have sufficiently defined the problem and got a few of you to think, "Yeah, me too!"...

What VSNewFile does is let you skip the dialog entirely by adding project items directly to the context menu. But it does a bit more than that, so do read on. For example, to add a new class, you can right-click the location and pick that option. A new .cs file is instantly added to the project, and the new item is selected and put into 'rename' mode immediately.

The default items available are shown here. But you can customize them. You can also customize the content of each template.
To do so, you create a directory in your documents folder, 'VSNewFile Templates'. In there, you drop the templates you want to use, but you name them in a particular way. For example, here's a template that will add a new item named "Add TITLE". It will add a project item named "SOMEFILE.foo" (or 'SOMEFILE1.foo' if that exists, etc). The format of the file name is:

<ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION>

Where:

<ORDER> is a number that lets you determine the order of the items in the menu (relative to each other).
<KEY> is a case sensitive identifier different for each template item. More on that later.
<BASE FILENAME> is the default name of the file, which doesn't matter that much, since you will be renaming it anyway.
<ICON ID> is a number that dictates the icon used for the menu item. There are a huge number of built-in choices. More on that later.
<TITLE> is the string that will appear in the menu.

And the contents of the file are the default content for the item (the 'template'). The content of the file can contain anything you want, of course. But it also supports two tokens: %NAMESPACE% and %FILENAME%, which will be replaced with the corresponding values. Here is the content of this sample:

testing
Namespace = %NAMESPACE%
Filename = %FILENAME%

I kind of went back and forth on this. I could have made it so there'd be an XML or JSON file that defines the templates, instead of cramming all this data into the filename itself. I like the simplicity of this better. It makes it easy to customize since you can literally just throw these files around, copy them from someone else, etc, without worrying about merging data into a central description file, in whatever format.

Here's our new item showing up:

One immediate thing I am using this for is to make it easier to add very commonly used scripts to my web projects. For example, uh, say, jQuery?
:) All I need to do is drop jQuery-1.4.2.js and jQuery-1.4.2.min.js into the templates folder, provide the order, title, etc, and then instantly, I can now add jQuery to any project I have without even thinking about "where is jQuery? Can I copy it from that other project?"

There are two reasons for the 'key' portion of the item. First, it allows you to turn off the built-in, default templates, which are:

To turn one off, just include a file with the name "_<KEY>". For example, to turn off all the items except our custom one, you do this:

The other reason for the key is that there are new Visual Studio Commands created for each one. This makes it possible to bind a keyboard shortcut to one of them. So you could, for example, have a keyboard combination that adds a new web page to your website, or a new CS class to your class library, etc. Here is our sample item showing up in the keyboard bindings option.

Even though the contents of the template directory may change from one launch of Visual Studio to the next, the bindings will remain attached to any item with a particular key, thanks to it taking care not to lose keyboard bindings even though the commands are completely recreated each time.

Visual Studio uses a Microsoft Office style add-in mechanism, I gather. There are a predetermined set of built-in icons available. You can use your own icons when developing add-ins, of course, but I'm no designer. I just wanted to find appropriate-ish icons for the built-in templates, and allow you to choose from an existing built-in icon for your own. Unfortunately, there isn't a lot out there on the interwebs that helps you figure out what the built-in types are. There's an MSDN article that describes at length a way to create a program that lists all the icons. But I don't want to write a program to figure them out! Just show them to me! Sheesh :) Thankfully, someone out there felt the same way, and uses a novel hack to get the icons to show up in an outlook toolbar.
He then painstakingly took screenshots of them, one group at a time. It isn't complete though - there are tens of thousands of icons. But it's good enough. If anyone has an exhaustive list, please let me, and the rest of the add-in community, know.

Icon Face ID Reference

It will work with Visual Studio 2008 and Visual Studio 2010. Just unzip the release into your Documents\Visual Studio 20xx\Addins folder. It contains the binary and the Visual Studio ".addin" file. For example, the path to mine is:

C:\Users\InfinitiesLoop\Documents\Visual Studio 2010\Addins

So that's it! I hope you find it as useful as I have. It's on GitHub, so if you're into this kind of thing, please do fork it and improve it!

Reference:

This is awesome. I'm going to install it shortly. One idea for an extension that I've been wanting to build is similar to this, but only list recently added file types: cs/js/aspx/etc.

Awesome plugin! I think I'll get a lot of usage out of this. One question though -- how would I create one for some of the standard VS templates, i.e., a shortcut for "Form" or "ASP.NET WebForm", etc. that would call on the default VS templates? Or is that not possible?
Thanks,
- Matthew

Thanks Steve, Matthew :)

@Matthew: You can't use the default templates; that'd be a cool feature though, I'll think about it :) In the meantime, just copy/paste the content you want into the custom template. The content of the file will be the content of new items using that template.

@InfinitiesLoop
Re: "In the meantime, just copy/paste the content you want into the custom template. The content of the file will be the content of new items using that template."
It is not possible, as aspx has to have a dependent file created too (for me it is cs). Nevertheless it is very useful for all other file types (and I hope that you will add aspx in the future :D )
Thank you!
Milan

Great idea! I tried installing it, but I keep receiving an error saying VSNewFile failed to load or caused an exception.
The error number is 80131515. I created the template directory and placed a basic file in there to test:
0_CS_CSharpClass_629_C# Class.cs

Where does the Addins folder live on an XP box? There's no Addins folder under 'My Documents\Visual Studio 2010'.

@Bryan: Just create the Addins folder, it goes there. It won't exist unless you have at least one Addin already.

@Gordon: VSIX file: Sure, that'd be nice. I thought I saw somewhere that vsix didn't work with add-ins though. Dunno, I'm a newb :)

@Milan: I realize aspx/aspx.cs is a problem... there's an addin out there that lets you manually cause a file to be a child of another. You could use that in conjunction with this one to work around it in the meantime.

@Miguel: Odd. Does it work without that custom template? You don't have to create the directory or have one for it to work. Are you using VS2008 or VS2010?

@InfinitiesLoop
I'm using VS2010 and it has the same behavior even without any templates in the VSNewFile Templates folder. I will try downloading it again tomorrow, just in case something got corrupted. Thanks for responding!

@Miguel: Do you have .NET 3.5, or only .NET 4.0?

@InfinitiesLoop: I have all of the frameworks installed and I'm using the Ultimate version of 2010.

Installed the addin in VS 2010 Ultimate. Works great, but the new file does not come up "selected and put in 'rename' mode". Also, although the new VB file name is "class.vb" the template shows "Class1.vb". It shouldn't have added the 1 (and it really should have followed the case of the file name).

I also experience the same exception as Miguel is getting.

Great addin! A few comments:
- Namespace does not contain folder names (just the base namespace).
- When I rename the file it should update the class name in the new file, but it doesn't.
I use VS2010 Ultimate on Win8 dev preview. :)
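The filename convention described in the post — <ORDER>_<KEY>_<BASE FILENAME>_<ICON ID>_<TITLE>.<EXTENSION> — can be parsed mechanically, and the two template tokens are simple substitutions. The following Python sketch is an illustration only, not the add-in's actual code; its handling of edge cases (escaping, missing fields, the "_<KEY>" disable form) is an assumption:

```python
def parse_template_name(filename):
    """Parse a VSNewFile-style template file name into its five fields."""
    stem, _, extension = filename.rpartition(".")
    # The title may itself contain underscores-free text with spaces,
    # so split at most four times from the left.
    order, key, base, icon, title = stem.split("_", 4)
    return {
        "order": int(order),
        "key": key,
        "base_filename": base,
        "icon_id": int(icon),
        "title": title,
        "extension": extension,
    }

def expand_tokens(template, namespace, filename):
    """Replace the two supported tokens in a template body."""
    return (template.replace("%NAMESPACE%", namespace)
                    .replace("%FILENAME%", filename))

# The example file name from the comments above:
info = parse_template_name("0_CS_CSharpClass_629_C# Class.cs")
```

Parsing the commenter's test file name yields order 0, key "CS", base name "CSharpClass", icon id 629, menu title "C# Class", and extension "cs".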
http://weblogs.asp.net/infinitiesloop/archive/2010/05/18/vsnewfile-a-visual-studio-addin-to-more-easily-add-new-items-to-a-project.aspx
...making Linux just a little more fun!

/* Sample 1. */
main()
{
}

/* Sample 2. */
#include <stdio.h>
int main()
{
    return 0;
}

What is important while writing code is not the caution but the approach - but at times, it is good to be skeptical about your C programs to some extent. Not necessarily the logic, but the compiler you are using. The GNU C compiler provides several options for compiling a piece of code; the more options you know, the more useful (and the more confusing) it can be. I have aliased the GCC front-end cc like this:

alias cc='gcc -Wall --pedantic-errors -Wstrict-prototypes'

The option --pedantic-errors helps me make my C programs adhere to strict ANSI standards. GCC provides several extensions to the C language, which are often either unnoticed or taken for granted due to people's assumptions. Here, I am going to give a brief description of one such extension - nesting of functions.

In the article Functional Programming with Python, a function or a procedure is said to have some analogy to mathematical functions. If 'x' is a variable, then we have a function f(x) which does some operations on 'x' to give some value 'y'. Hence we have:

y = f(x)

The article also briefly describes closures. A closure is a property associated with functions; when a function manipulates input and produces output which holds the features and characteristics of the input, then the function is said to possess (we should say 'satisfy' instead of 'possess') the closure property. [ The above definition is, perhaps, less rigorous than it could be; the standard definition of 'closure' in programming is a data structure that contains both a function and a set of variables defining the environment in which that function will be executed. -- Ben ]

For example: consider the set of natural numbers 'N'. If x1 and x2 are elements in the set N, and the function f(x) is an addition (by binary operator `+') of x1 and x2, then addition has the closure property.
Since the sum of x1 and x2 is again a natural number, we can say that the binary operator '+' satisfies the closure property over the set of natural numbers.

Programming languages like Python and LISP support nesting of functions. The above mentioned article explains with an example in Python. An example for LISP is given below:

(defun foo (a)
  (defun bar (b)
    (+ b 1))
  (+ a (bar 3)))

(setq a (foo 4))
(print a)

The function `bar' is nested and defined inside the definition of `foo'. `bar' increments and returns the parameter that it takes, and `foo' returns the sum of the return value of `bar' invoked with parameter 3 and the parameter that it takes. The variable `a' then is set to:

3 + 1 + 4 = 8

Hence, `a's value is printed as 8.

This feature of function nesting is seen in the C language, as an extension of GCC. Compiling the code below with the --pedantic-errors option enabled will tell you that `ISO C forbids nested functions' - but the code will compile cleanly without the option. Check out the code:

/* compile it with gcc --pedantic-errors filename.c */
#include <stdio.h>
#include <stdlib.h>

int main()
{
    void foo()
    {
        printf("Hello World\n");
    }
    foo();
    return 0;
}

Like local variables, nesting of functions will restrict the scope of the function in which it is defined. For the above example, the binding of function foo is not visible outside main. The association between identifiers and the place to store their values is called binding, and scope refers to the part of the code where the binding of the identifier is visible. Consider another example given below:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int x;
    x = 10;
    {
        float x;
        x = 4.2;
    }
    return 0;
}

In the above example, `x' has two bindings with respect to main. But if we remove the declaration float x;, then the binding will be the same throughout.

Consider a binary search algorithm performed over a list of sorted numbers. The code can be seen here, in listing 1.
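Listing 1 itself is not reproduced in this extract. As a stand-in for the algorithm being discussed, here is a minimal iterative binary search over a sorted list - a Python sketch of the idea, not the article's actual C listing:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle of the remaining range
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # target can only be in the upper half
        else:
            hi = mid - 1              # target can only be in the lower half
    return -1

A = [2, 3, 5, 7, 11, 13, 17]          # the sorted input the article calls 'A'
```

Each iteration halves the search range, which is what makes the algorithm O(log n) and why the input must be sorted.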
We can localize the array 'A' and the function 'binary_search' to 'main' if we don't have any other functions that need to access 'binary_search'; an example of this can be seen here, in listing 2. Now both 'A' and 'binary_search' are within the lexical scope of 'main'; hence, they are enclosed in the same scope. Let us define lexical scoping a bit more:

Lexical scope is the scope defined by the structure of the code. A language with lexical scoping can support function definitions within another function. With this, the nested function gets access to the local variables defined in the enclosing scope, and is itself visible during the definition of the function being nested. That is:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int x = 10;
    void foo()
    {
        printf("hello\n");
    }
    int y = 20;
    void bar()
    {
        printf("World\n");
    }
}

Here, only the binding of 'x' is visible to 'foo', whereas 'bar' can "see" the bindings of 'x', 'foo', and 'y'. We can now say that the textual arrangement determines the lexical scope.

Now, what if function 'foo' wants to access function 'bar'? One of the options here would be to declare the prototype of 'bar' before the definition of 'foo'. See the listing below:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int x = 10;
    auto void bar(void);
    void foo()
    {
        printf("Hello\n");
        bar();
    }
    int y = 20;
    void bar()
    {
        printf("World\n");
    }
    foo();
    return 0;
}

Thomas M. Breuel's paper on lexical closures in C++ describes this as a method to allow definition of mutually recursive functions at inner lexical levels. Removing the 'auto' keyword will give a warning message. Try it! (Refer to Section A.8.1, Storage Class Specifiers, in Kernighan & Ritchie for clarification and details.)
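For comparison, Python - also lexically scoped - resolves the names used inside a nested function when that function is called, not when it is defined, so mutually recursive inner functions need no forward declaration at all. This is an illustrative sketch, not taken from the article:

```python
def parity(n):
    """Report whether n is even, using two mutually recursive inner functions.

    is_even refers to is_odd before is_odd is defined; that is fine in
    Python, because the name is only looked up in the enclosing scope
    when is_even actually runs - the analogue of C's forward-declared
    'auto void bar(void);' comes for free.
    """
    def is_even(k):
        return True if k == 0 else is_odd(k - 1)

    def is_odd(k):
        return False if k == 0 else is_even(k - 1)

    return is_even(n)
```

As in the GCC example, both inner functions are invisible outside parity; only their shared enclosing scope lets them see each other.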
The print statement in function 'main' is also a non-local reference. If C was a dynamically scoped language (thank god that it isn't), then reference to 'x' in function 'foo' would be bound to its definition in the function 'bar'. However, C is a lexically scoped language, and thus reference to 'x' in function 'foo' is bound to its global definition. If we run this program, the output will be '1', not '0'. Now consider the following (listing4): we have the definition of function 'add' within the scope of 'init_add'. It is interesting to note that 'add' refers to the parameter 'i', which is passed to the function 'init_add'. For the function 'init_add', the binding of 'i' is retained (even inside the function 'add') until 'init_add' returns. Now, from the mathematical definition of 'closure', the function 'add' is said to "close over" the parameter 'i'; therefore, 'add' satisfies the closure property over 'i', and this is termed a lexical closure, in which the lexical scoping is preserved (the reference for 'i' is not overridden by any other local definition of 'i' - not that there are any). It should be clear by now that lexical scoping provides several advantages. Functions can be made reentrant and hence the compiled machine code will be reentrant. Local declarations can be stored in registers (in an optimized way) which eliminates the symbol references upon compilation (an optimization performed by the compiler.) We are no longer restricted to declaring all variables global (a very bad practice leading to problems like variable suicide among others) and passing parameters to every function that is invoked..
http://www.redhat.com/mirrors/LDP/LDP/LGNET/112/ramankutty.html
SDL_BlendMode when sharing textures?
doyleman77 replied to doyleman77's topic in For Beginners

Nothing? So far, I've just skipped doing alpha & blends on textures; but I am curious if there's a better way than the above proposed?

SDL_BlendMode when sharing textures?
doyleman77 posted a topic in For Beginners

- The pointers are (I believe) the only local objects now, and are then passed to the map/vector which covers the entire app/game's scope, anyway. The code hasn't changed much, other than declaring new on entities. I also removed the string as a parameter, because somewhere along the line I would have to make a string object to pass upon it.

Also, my bad on calling it the STL. I suppose I assumed because it's standard, templated, and a library, that was its appropriate name. I'm guessing the STL is more of unofficial libraries, where the C++ libraries are official, and required for most, if not all, C++ compilers...? Also, when this segfault happens, Code::Blocks auto opens the stl_map.h file - also leading me to believe this is the STL?

Anyway, yeah - the code hasn't changed much. I know, I should have all of this loading / game loop outside of the constructor. I'll move it - I've just been occupied with this crashing, so far.

Game::Game()
{
    loadTexture("raindrop.png");
    loadTexture("texture.png");

    /// this is me, making sure that the map pulls texture.png. this works.
    SDL_Texture* myTex = textureLibrary["texture.png"];

    Entity* newEntity = new Entity(textureLibrary["raindrop.png"]);
    Entity* anotherEntity = new Entity(textureLibrary["texture.png"]);
    gameVec.push_back(newEntity);
    gameVec.push_back(anotherEntity);

    running = true;
    while (running)
    {
        handleInput(gameInput);
        update();
        draw();
        SDL_Delay(16);
    }
};

void Game::loadTexture(const char filename[])
{
    //SDL_Texture** newTexture = new SDL_Texture*;
    SDL_Surface* loadedSurface = IMG_Load(filename);
    textureLibrary[filename] = SDL_CreateTextureFromSurface(gameRenderer, loadedSurface);
    SDL_FreeSurface(loadedSurface);
    return;
};

Entity::Entity(SDL_Texture* itsTexture)
{
    texture = itsTexture;
    SDL_QueryTexture(itsTexture, NULL, NULL, &texRect->w, &texRect->h);
};

Here, I initialize SDL, load in some textures (which are then placed into the map/cache), and then I test out by pulling one into myTex. Originally, just below that, I did a quick RenderClear, RenderCopy, and RenderPresent to show myTex, and it appeared - in all its glory. Whereas before, when newEntity and anotherEntity were on the stack rather than the heap, I could _at least_ get newEntity to load its texture and display properly, and anotherEntity would crash; the segfault now happens on newEntity. Following the call stack, the only place other than stl_map.h that maybe is a problem area is the constructor of Game, or maybe of Entity (despite it looking like Entity's constructor isn't on the stack at all, yet). Thanks for the replies.

**edit** I've just given up on this...

- I have. It crashes at line 472 of stl_map.h:

iterator __i = lower_bound(__k);
// __i->first is greater than or equivalent to __k.
if (__i == end() || key_comp()(__k, (*__i).first))
    __i = insert(__i, std::make_pair(std::move(__k), mapped_type()));
return (*__i).second;
}

The call stack shows it crashes at the [] operator call of map<>:

I have made my entities on the heap, and then pushed them onto the std::vector<Entity*> gameVec - and the segfault occurs there, too - but the app doesn't crash at that point, it just doesn't display the images (whereas before, I could do one entity and display it; it'd just crash on 2). I guess I don't know how to pry open the map<> and test the addresses of what it's pointing to, but for what it's worth: it does work if I do a direct SDL_RenderCopy using the map rather than the Entity, or if I make a local texture and assign it a texture from the map, too.

- I've read the wiki article before, and while minimal, the code listings aren't helping me understand what is going on. Lambdas are by far the worst for me to understand. :-/ On a separate note, regarding my textureLibrary map: would it be that I need to have a getTexture() function that iterates the map, and returns it? Am I not able to simply use textureLibrary["texture.png"] to bring up the appropriate texture? The only problem I could see that causing, at the moment, is that if I use the wrong key, it'd grab a newly made, blank texture. But I don't see why that'd still segfault. I've cleaned up the local variables, and the string bits - and I'm still getting crashes. It's odd, because it seems to work fine on one entity, but not the 2nd. I can directly display the texture using the same key; it only seems to crash when I try to instantiate an Entity with that texture... I apologize if my wording is confusing.
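The "wrong key quietly yields a blank texture" behaviour the poster mentions comes from std::map's operator[], which default-constructs and inserts a value on a miss (that is exactly the insert(...) line in the stl_map.h snippet above). The asset-cache pattern under discussion can be sketched language-neutrally; this Python illustration uses explicit load-on-miss instead of silent default construction, and plain dicts stand in for SDL textures - it is not SDL code:

```python
class TextureCache:
    """Filename-keyed cache: load each asset once, return the same object after."""

    def __init__(self, loader):
        self._loader = loader      # callable: filename -> texture-like object
        self._textures = {}

    def get(self, filename):
        # Explicit miss handling: load on first request rather than
        # silently handing back a default-constructed blank entry.
        if filename not in self._textures:
            self._textures[filename] = self._loader(filename)
        return self._textures[filename]

loads = []
def fake_loader(name):
    loads.append(name)             # record every real load for inspection
    return {"name": name}          # stand-in for an SDL_Texture*

cache = TextureCache(fake_loader)
t1 = cache.get("raindrop.png")
t2 = cache.get("raindrop.png")     # second lookup: no reload, same object
```

With this shape, a misspelled key either triggers a (failing) load or raises inside the loader, instead of producing an empty placeholder that only blows up later at draw time.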
- I acknowledge that I'm not up to date with '11, yet. It's on my todo list, to get caught up - but this is an 8 week challenge that's already 4 weeks in. I'll try to learn the new smart pointers, eventually. As per the quoted code - I guess I'm not sure what exactly is going on. You're making a structure that overloads the (), takes a texture, and then frees it? and then the last line - using TexturePtr. I'm not sure where TexturePtr is declared, or how it's defined, or what it is really. What is happening when you make a unique pointer with Texture and TextureDeleter? - (Don't mean to double post... I couldn't find an edit button on my post) Also - Yeah, I do intend on going back and doing error checking. I promise, normally I do. I'm just not used to SDL2, and wanted to try to get something up and running quick. That SDL_Quit() call was the last line I added last night before calling it quits, I wanted to see if it even found the image. That part has since been removed. Sorry! - Ack. Yeah. I should be declaring new on the textures and the entities, and having the map be a pointer to those pointers. Sorry. The map is in fact, a map<std::string, SDL_Texture>. that filename.c_str() is local, but isn't it just calling the characters that make up that string, and storing it into map? IE if I pass "raindrop.png" into loadTexture, it'll pass that into the string, but the string just pumps the characters into the map<string,texture>; and then using map["texture.png"] call it up? I guess I am still new to the STL map - I had to make my own in Data Structures II class at uni, and assumed I could use it somewhat similarly... I've been reading over article over and over. What should I be doing instead of filename.c_str() when trying to pass my filename path to the loadTexture function; and then saving it as a key on the map? STL Map and SDL2 Texture segfault doyleman77 posted a topic in For Beginners. 
:( Newbie here!

Looking for good Game Dev sites/book to start with
doyleman77 replied to buttnakedhippie's topic in For Beginners

I'm not sure if Visual Studio's editor supports tablets. It could, but I doubt it'd pick up on pressure detection. For that, something like Krita, GIMP, PaintTool SAI, or a program of that sort may be what you want. If you're learning art, however, I'd vote to actually put the tablet aside, and start at the mouse / paint level, as that helps develop the fundamentals much better. Well, that's how I had learned some, anyway.

I don't get c++11.
doyleman77 replied to doyleman77's topic in For Beginners

I thank everyone who had explanations on all topics. I tried a few sample lambdas to try to get a feel, and it's beginning to make sense. I'll be honest in that I never really used a callback, let alone know what one is. From my understanding, it's passing a function into another, and having it perform at a later point within that first function. Move constructors and assignments are entirely cleared up; that actually seems like a brilliant move, and left me wondering why that wasn't part of the standard earlier. decltype has also been cleared up, and I can see the uses for that definitely. I bookmarked the GoingNative page, and will check that out later today when I finish classes; thanks for the link! I really do appreciate all of you guys' explanations, and the time taken to give them. Some of them were quite long, but served to help more than what the book had shown; so thanks to all for their contributions!

I don't get c++11.
doyleman77 posted a topic in For Beginners

- Oh wow. I've not had a lot of experience reading code from others... usually it's done more... cleanly? than mine? or maybe up to date. I usually have a very rudimentary setup as far as my C++ goes. I could definitely follow along with what was going on. Am I right in guessing that, then, your Renderer class holds all textures via textureDictionary?
and scaling a texture is also done via the renderer? Either way, very clean and documented code. I think I see what you mean now! - Separate in the sense that I want to organize data into appropriate objects. Sprites have images, not windows, etc. But a window displays the screen, which is what a sprite in SDL2 needs to have passed in in order to be loaded and placed into a texture. I think I understand what fastcall and adamsmithee were talking about; but what do you mean, separate the renderer from the window? That seems a bit more abstract than it should be, as an SDL window, at least this one, 'renders' content - and thus should hold the renderer struct. That was my intent, anyway. Proper separation for me would be a window not needing to know anything about image loading, but images, I suppose, do need to know about what window they'll eventually be displayed on? Hrm.
https://www.gamedev.net/profile/167519-doyleman77/?tab=friends
> So how can I process an incoming XML, without worrying about whether
> it's specified a namespace or not?

To a namespace-aware processor, the name of an element includes its namespace, so this question is like saying: how do I match on an element without worrying about its name? The answer (in both cases) is to use *.

match="*[local-name()='foo']"

will match anything called foo in any namespace (or no namespace), or

match="foo|xx:foo"

will match foo in no namespace or (just) the namespace to which you have assigned the prefix xx.

David

_____________________________________________________________________
This message has been checked for all known viruses by Star Internet, delivered through the MessageLabs Virus Scanning Service. For further information visit or alternatively call Star Internet for details on the Virus Scanning Service.

XSL-List info and archive:
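Put into a skeleton stylesheet, the two options David describes read as follows (the element names and the urn: namespace are illustrative; in practice you would pick one template or the other):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xx="urn:example">

  <!-- Option 1: matches any element whose local name is foo,
       whatever its namespace (including no namespace at all). -->
  <xsl:template match="*[local-name()='foo']">
    <!-- ... -->
  </xsl:template>

  <!-- Option 2: matches foo in no namespace, or foo in exactly
       the namespace bound to the xx prefix above. -->
  <xsl:template match="foo|xx:foo">
    <!-- ... -->
  </xsl:template>

</xsl:stylesheet>
```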
https://www.oxygenxml.com/archives/xsl-list/200107/msg00121.html
pocketcmd.h File Reference

PocketBus command abstraction layer.

#include "pocketbus.h"
#include <cfg/compiler.h>

Detailed Description

PocketBus command abstraction layer. Definition in file pocketcmd.h.

Function Documentation

Init pocketBus command layer. ctx is the pocketBus command layer context. bus_ctx is the pocketBus context. addr is the slave address (see pocketcmd_setAddr for details). search is the lookup function used to search command ID callbacks. Definition at line 197 of file pocketcmd.c.

pocketBus command poll function. Call it to read and process pocketBus commands. Definition at line 80 of file pocketcmd.c.

pocketBus command recv function. Call it to read and process pocketBus commands. Definition at line 100 of file pocketcmd.c.

Send command cmd to/from the slave, adding len arguments in buf. The address used is contained in ctx->addr. If we are the master and the message has a reply, you must set wait_reply to true.
- Returns: true if all is OK, false if we are already waiting for a reply from another slave.
Definition at line 154 of file pocketcmd.c.

Set slave address addr for the pocketBus command layer. If we are a slave, this is *our* address. If we are the master, this is the slave address to send messages to. Definition at line 107 of file pocketcmd.h.
http://doc.bertos.org/2.7/pocketcmd_8h.html
We are pleased to announce GNU Guile release 1.9.11.

Here are the GPG detached signatures[*]:

To reduce load on the main server, use a mirror listed at:

Here are the MD5 and SHA1 checksums:

ea62d9590f7c7b2552165b44ba11cc3d  guile-1.9.11.tar.gz
abd1424a927302db31395db828d4d14fa68d13f9  guile-1.9-3955-g8ab5996

This is a new release series with many new features and differences compared to 1.8. The complete list of changes compared to the 1.8.x series is available in the `NEWS' file.

Changes since the 1.9.10 pre-release:

** Renamed module: (rnrs bytevectors)

This module was called (rnrs bytevector), its name in earlier drafts of the R6RS. Its name has been changed. Users of previous 1.9 prereleases may want to search for any stale rnrs/bytevector .go or .scm files, and delete them.

** New module: (sxml match)

Guile has incorporated Jim Bender's `sxml-match' library. See "sxml-match" in the manual for more information. Thanks, Jim!

** New module: (srfi srfi-9 gnu)

This module adds an extension to srfi-9, `set-record-type-printer!'. See "SRFI-9" in the manual for more information.

** Support for R6RS libraries

The `library' and `import' forms from the latest Scheme report have been added to Guile, in such a way that R6RS libraries share a namespace with Guile modules. R6RS modules may import Guile modules, and are available for Guile modules to import via use-modules and all the rest. See "R6RS Libraries" in the manual for more information.

** Implementations of R6RS libraries

Guile now has implementations for all of the libraries defined in the R6RS. Thanks to Julian Graham for this excellent hack. See "R6RS Standard Libraries" in the manual for a full list of libraries.

** Partial R6RS compatibility

Guile now has enough support for R6RS to run a reasonably large subset of R6RS programs. Guile is not fully R6RS compatible. Many incompatibilities are simply bugs, though some parts of Guile will remain R6RS-incompatible for the foreseeable future.
See "R6RS Incompatibilities" in the manual, for more information. Please contact address@hidden if you have found an issue not mentioned in that compatibility list. ** Macro expansion produces structures instead of s-expressions In the olden days, macroexpanding an s-expression would yield another s-expression. Though the lexical variables were renamed, expansions of core forms like `if' and `begin' were still non-hygienic, as they relied on the toplevel definitions of `if' et al being the conventional ones. The solution is to expand to structures instead of s-expressions. There is an `if' structure, a `begin' structure, a `toplevel-ref' structure, etc. The expander already did this for compilation, producing Tree-IL directly; it has been changed now to do so when expanding for the evaluator as well. The real truth is somewhat more involved: Tree-IL doesn't exist until modules have been booted, but we need the expander to boot modules, and additionally we need a boot expander before psyntax is loaded. So a subset of Tree-IL is defined in C, and the boot expander produces these "macroexpanded" structures. Psyntax has been modified to produce those structures as well. When Tree-IL loads, it incorporates those structures directly as part of its language. Finally, the evaluator has been adapted to accept these "expanded" structures, and enhanced to better support the gamut of this subset of Tree-IL, including `lambda*' and `case-lambda'. This was a much-needed harmonization between the compiler, expander, and evaluator. ** Deprecated `scm_badargsp' This function is unused in Guile, but was part of its API. ** `sxml->xml' enhancement `sxml->xml' from `(sxml simple)' can now handle the result of `xml->sxml'. See bug #29260 for more information. ** New module: (system vm coverage) This new module can produce code coverage reports for compiled Scheme code on a line-by-line level. See "Code Coverage" in the manual for more information. ** Faster VM hooks. 
The frame objects passed to VM hook procedures are now allocated on the stack instead of the heap, making the next-instruction hook practical to use. ** New `eval-when' situation: `expand' Sometimes it's important to cause side-effects while expanding an expression, even in eval mode. This situation is used in `define-module', `use-modules', et al, in order to affect the current module and its set of syntax expanders. ** Better module-level hygiene Instead of attempting to track changes to the current module when expanding toplevel sequences, we instead preserve referential transparency relative to where the macro itself was defined. If the macro should expand to expressions in the context of the new module, it should wrap those expressions in `@@', which has been enhanced to accept generic expressions, not just identifier references. For example, part of the definition of the R6RS `library' form: #'(begin (define-module (name name* ...) #:pure #:version (version ...)) (import ispec) ... (re-export r ...) (export e ...) (@@ (name name* ...) body) ...) In this example the `import' refers to the `import' definition in the module where the `library' macro is defined, not in the new module. ** Module system macros rewritten as hygienic macros `define-module', `use-modules', `export', and other such macros have been rewritten as hygienic macros. This allows the necessary referential transparency for the R6RS `library' to do the right thing. ** Compiler and VM documentation updated The documentation for the compiler and VM had slipped out of date; it has been brought back... to the future! ** Tree-IL field renaming: `vars' -> `gensyms' The `vars' fields of <let>, <letrec>, <fix>, and <lambda-case> has been renamed to `gensyms', for clarity, and to match <lexical-ref>. ** Removed `version' field from <language> Language versions weren't being updated or used in any worthwhile way; they have been removed, for now at least. ** New procedure: `module-export-all!' 
This procedure exports all current and future bindings from a module. Use as `(module-export-all! (current-module))'. ** Updates to manual The introductory sections of the manual have been reorganized significantly, making it more accessible to new users of Guile. Check it out! ** The module namespace is now separate from the value namespace It was a little-known implementation detail of Guile's module system that it was built on a single hierarchical namespace of values -- that if there was a module named `(foo bar)', then there was a also module named `(foo)' with a binding from `bar' to the `(foo bar)' module. This was a neat trick, but presented a number of problems. One problem was that the bindings in a module were not apparent from the module itself; perhaps the `(foo)' module had a private binding for `bar', and then an external contributor defined `(foo bar)'. In the end there can be only one binding, so one of the two will see the wrong thing, and produce an obtuse error of unclear provenance. Also, the public interface of a module was also bound in the value namespace, as `%module-public-interface'. This was a hack from the early days of Guile's modules. Both of these warts have been fixed by the addition of fields in the `module' data type. Access to modules and their interfaces from the value namespace has been deprecated, and all accessors use the new record accessors appropriately. When Guile is built with support for deprecated code, as is the default, the value namespace is still searched for modules and public interfaces, and a deprecation warning is raised as appropriate. Finally, to support lazy loading of modules as one used to be able to do with module binder procedures, Guile now has submodule binders, called if a given submodule is not found. See boot-9.scm for more information. 
** New procedures: module-ref-submodule, module-define-submodule, nested-ref-module, nested-define-module!, local-ref-module, local-define-module These new accessors are like their bare variants, but operate on namespaces instead of values. ** The (app modules) module tree is officially deprecated It used to be that one could access a module named `(foo bar)' via `(nested-ref the-root-module '(app modules foo bar))'. The `(app modules)' bit was a never-used and never-documented abstraction, and has been deprecated. See the following mail for a full discussion: The `%app' binding is also deprecated. ** Deprecated address@hidden' syntax address@hidden' was part of an older implementation of the Emacs Lisp language, and is no longer used. ** New fluid: `%file-port-name-canonicalization' This fluid parameterizes the file names that are associated with file ports. If %file-port-name-canonicalization is 'absolute, then file names are canonicalized to be absolute paths. If it is 'relative, then the name is canonicalized, but any prefix corresponding to a member of `%load-path' is stripped off. Otherwise the names are passed through unchanged. ** Source file name canonicalization in `compile-file', `compile-and-load' These file-compiling procedures now bind %file-port-name-canonicalization to their `#:canonicalization' keyword argument, which defaults to 'relative. In this way, one might compile "../module/ice-9/boot-9.scm", but the path that gets residualized into the .go is "ice-9/boot-9.scm". ** Deprecate arity access via (procedure-properties proc 'arity) Instead of accessing a procedure's arity as a property, use the new `procedure-minimum-arity' function, which gives the most permissive arity that the function has, in the same format as the old arity accessor.
** Remove redundant accessors: program-name, program-documentation, program-properties, program-property Instead, just use procedure-name, procedure-documentation, procedure-properties, and procedure-property. ** Enhance documentation for support of Emacs Lisp's `nil' See "Nil" in the manual, for more details. ** Enhance documentation for support of other languages See "Other Languages" in the manual, for more details.
http://lists.gnu.org/archive/html/guile-devel/2010-06/msg00013.html
headless LineChart?
098a562b-359e-426b-b5e4-87df2c759fb4 Jun 17, 2013 7:18 PM

How do you create a LineChart and write the chart to a file (e.g. a PNG file) without a GUI, i.e. for X11 the DISPLAY environment variable is unset and for OSX a GUI does not pop up? I have tried to achieve this using an early access version of Java 8, but so far, no luck. I attempt to create a chart (without an Application) like:

final NumberAxis xAxis = new NumberAxis();
final NumberAxis yAxis = new NumberAxis();
final LineChart<Number,Number> lineChart = new LineChart<Number,Number>(xAxis,yAxis);

but I get the following exception when trying to allocate the 'xAxis'. Thanks very much for any suggestions...

Exception in thread "main" java.lang.ExceptionInInitializerError
at javafx.scene.chart.Axis.<init>(Axis.java:85)
at javafx.scene.chart.ValueAxis.<init>(ValueAxis.java:249)
at javafx.scene.chart.NumberAxis.<init>(NumberAxis.java:142)
at LineChartSample.<init>(LineChartSample.java:31)
at LineChartSample.main(LineChartSample.java:88)
... 5 more

1. Re: headless LineChart?
KonradZuse Jun 18, 2013 12:02 AM (in response to 098a562b-359e-426b-b5e4-87df2c759fb4)

Your line chart isn't correct; as you see, you get some exceptions.
My code works fine NumberAxis xAxis = new NumberAxis("Number saved", 1, 10.1, 1); NumberAxis yAxis = new NumberAxis("Calculated Value", 0, 1000, 1); LineChart chart = new LineChart(xAxis, yAxis); XYChart.Series<Double,Double> volts = new XYChart.Series<>(); XYChart.Series<Double,Double> ress = new XYChart.Series<>(); XYChart.Series<Double,Double> curs = new XYChart.Series<>(); XYChart.Series<Double,Double> pows = new XYChart.Series<>(); chart.getData().add(volts); chart.getData().add(ress); chart.getData().add(curs); chart.getData().add(pows); volts.setName("Voltage"); ress.setName("Resistance"); curs.setName("Current"); pows.setName("Power"); Scene scene = new Scene(root); scene.getStylesheets().add("calculator/chart.css"); chart.getStyleClass().add("lineChart"); fxContainer.setScene(scene); graph.add(fxContainer); start = true; root.getChildren().add(chart);// you wouldn't add this. You might not need a lot of this if you're not going to visually create it, but things like "name" can be used for identification purposes. 2. Re: headless LineChart?jsmith Jun 18, 2013 12:31 AM (in response to 098a562b-359e-426b-b5e4-87df2c759fb4) I don't think JavaFX operation in a headless environment is currently supported by the public API. You could file a feature request for such support against the JavaFX runtime project in the JavaFX issue tracker: There is sample code for rendering charts in this thread: The sample code is for rendering hundreds of charts, so it could be done much simpler if you just have a couple of charts and don't require all of the asynchronous processing (which is likely). Basically, you render the chart with chart animation turned off, take a snapshot of the chart with node.snapshot, convert the snapshot image to a swing buffered image with SwingFXUtils, then use imageio to render the resultant chart to an image file such as a png. And (as far as I am aware) you need to do it all in a headful environment. 3. 
Re: headless LineChart?
098a562b-359e-426b-b5e4-87df2c759fb4 Jun 18, 2013 12:49 AM (in response to KonradZuse)

Thank you very much for your reply! I thought no one would be willing to help... I'm trying to create the LineChart without using the Application class since I cannot find a way to create an Application object without the GUI appearing... So I took your example and added it to my simple test; below is my example with the command-line commands and their output. If you have any more suggestions, please let me know, and thank you for your reply!

bash$ cat -n HeadlessChart.java
 1 import javafx.scene.chart.NumberAxis;
 2 import javafx.scene.chart.LineChart;
 3
 4 public class HeadlessChart {
 5
 6   public HeadlessChart( ) {
 7     NumberAxis xAxis = new NumberAxis("Number saved", 1, 10.1, 1);
 8     NumberAxis yAxis = new NumberAxis("Calculated Value", 0, 1000, 1);
 9
10     LineChart<Number,Number> chart = new LineChart<Number,Number>(xAxis, yAxis);
11   }
12   public static void main(String[] args) {
13     final HeadlessChart sample = new HeadlessChart( );
14   }
15 }

bash$ /System/Library/Frameworks/JavaVM.framework/Versions/A/Commands/java -version
java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b93)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b34, mixed mode)

bash$ /System/Library/Frameworks/JavaVM.framework/Versions/A/Commands/javac -Xlint:unchecked HeadlessChart.java

bash$ /System/Library/Frameworks/JavaVM.framework/Versions/A/Commands/java HeadlessChart
Exception in thread "main" java.lang.ExceptionInInitializerError
at javafx.scene.chart.Axis.<init>(Axis.java:85)
at javafx.scene.chart.ValueAxis.<init>(ValueAxis.java:249)
at javafx.scene.chart.ValueAxis.<init>(ValueAxis.java:261)
at javafx.scene.chart.NumberAxis.<init>(NumberAxis.java:165)
at HeadlessChart.<init>(HeadlessChart.java:7)
at HeadlessChart.main(HeadlessChart.java:13)
... 6 more
bash$

4.
Re: headless LineChart?James_D Jun 18, 2013 1:09 AM (in response to 098a562b-359e-426b-b5e4-87df2c759fb4)1 person found this helpful I also tried a couple of experiments to see if I could get this to work, and couldn't find a way to do it. I was using Java 7. The closest I came was the following. It uses the Application class and the start(...) method, but without actually displaying anything on screen. I don't think this is truly "headless" though; since the FX Application thread is started up it probably needs the native windowing environment. And even this doesn't work; it produces the background of the chart but no data. I don't really know why this is, but my best guess is that the default stylesheet doesn't get applied as the scene is not rendered. (And that is a pure guess.) But I thought I'd share the code with you, just in case it's at all helpful. (Don't get your hopes up; it probably won't be.) import java.io.File; import java.io.IOException; import java.util.Arrays; import java.util.List; import javax.imageio.ImageIO; import javafx.application.Application; import javafx.application.Platform; import javafx.embed.swing.SwingFXUtils; import javafx.scene.Scene; import javafx.scene.chart.CategoryAxis; import javafx.scene.chart.LineChart; import javafx.scene.chart.NumberAxis; import javafx.scene.chart.XYChart; import javafx.scene.image.Image; import javafx.stage.Stage; import javafx.util.StringConverter; public class HeadlessLineChart extends Application { @Override public void start(Stage primaryStage) { final CategoryAxis xAxis = new CategoryAxis(); final NumberAxis yAxis = new NumberAxis(); yAxis.setTickLabelFormatter(new StringConverter<Number>() { @Override public Number fromString(String string) { return Double.parseDouble(string); } @Override public String toString(Number value) { return String.format("%2.2f", value); } }); xAxis.setLabel("Month"); final LineChart<String, Number> lineChart = new LineChart<String, Number>( xAxis, yAxis); 
lineChart.setTitle("Stock Monitoring, 2010"); lineChart.setAnimated(false); final XYChart.Series<String, Number> series = new XYChart.Series<>(); series.setName("My portfolio"); final List<XYChart.Data<String, Number>> data = Arrays.asList( new XYChart.Data<String, Number>("Jan", 23), new XYChart.Data<String, Number>("Feb", 14), new XYChart.Data<String, Number>("Mar", 15), new XYChart.Data<String, Number>("Apr", 24), new XYChart.Data<String, Number>("May", 34), new XYChart.Data<String, Number>("Jun", 36), new XYChart.Data<String, Number>("Jul", 22), new XYChart.Data<String, Number>("Aug", 45), new XYChart.Data<String, Number>("Sep", 43), new XYChart.Data<String, Number>("Oct", 17), new XYChart.Data<String, Number>("Nov", 29), new XYChart.Data<String, Number>("Dec", 25)); series.getData().addAll(data); Scene snapScene = new Scene(lineChart, 600, 400); Image image = snapScene.snapshot(null); try { ImageIO.write(SwingFXUtils.fromFXImage(image, null), "png", new File("chart.png")); } catch (Exception e) { e.printStackTrace(); } Platform.exit(); } public static void main(String[] args) throws IOException { launch(args); } } 5. Re: headless LineChart?jsmith Jun 18, 2013 6:16 PM (in response to James_D) James, it is best to set animation off for the snapshot when taking a snapshot. Otherwise the chart snapshot could occur before the display animation has completed (effectively giving you just a background). It's not an issue with this line chart, but I have had to switch animation off for other charts like a pie chart. You also need to add the series data to the chart for the data to display ;-) lineChart.getData().addAll(series); lineChart.setAnimated(false); 6. 
Re: headless LineChart?jsmith Jun 18, 2013 6:20 PM (in response to jsmith) There is an existing jira which indicates that the JavaFX team do have something which allows the system to be configured to run in headless mode: Headless Glass toolkit needs to be connected to Quantum and Prism unit tests The jira is likely just for an internal facility to allow the JavaFX team to run regression tests easier on a continuous integration server, so if you want something publicly supported it is best to file a feature request. 7. Re: headless LineChart?James_D Jun 18, 2013 7:18 PM (in response to jsmith) D'oh! I did have lineChart.setAnimated(false) buried in there. But yeah... 8. Re: headless LineChart?KonradZuse Jun 18, 2013 11:51 PM (in response to jsmith) I'm still surprised they haven't finished the Headless Env yet... I couldn't find anything on it when searching. The programming model page 6 states that it isn't done, but this is 2 years old so................ I guess there isn't really much demand for it now... I have a feeling I read something on OpenJDK that had something to do with Java 9 and Project Jigsaw. 9. Re: headless LineChart?098a562b-359e-426b-b5e4-87df2c759fb4 Jun 19, 2013 2:00 AM (in response to jsmith) Thank you very much for your reply (and thanks to the others for their replies!). Also, thanks for the very useful link to the earlier post of processing charts in a second thread. We need to produce GUI charts as part of interactive use, but we also have an online system which is collecting data from radio telescopes. This online system also needs to produce charts. The hope was that we could find a library that would allow for both of these uses. I believe JFreeChart could do this, but JavaFX looks like it has the muscle of Oracle behind it... and it seems to produce nicer charts! I'll also use the link to post a feature request.
https://community.oracle.com/message/11073770
Process execution At this point, where our definitions are ready, we can create an execution of our defined processes. This can be achieved by creating a class where each instance represents one execution of our process definition—bringing our processes to life and guiding the company with their daily activities; letting us see how our processes are moving from one node to the next one. With this concept of execution, we will gain the power of interaction and influence the process execution by using the methods proposed by this class. We are going to add all of the methods that we need to represent the executional stage of the process, adding all the data and behavior needed to execute our process definitions. This process execution will only have a pointer to the current node in the process execution. This will let us query the process status when we want. An important question about this comes to our minds: why do we need to interact with our processes? Why doesn't the process flow until the end when we start it? And the answer to these important questions is: it depends. The important thing here is to notice that there will be two main types of nodes: - One that runs without external interaction (we can say that is an automatic node). These type of nodes will represent automatic procedures that will run without external interactions. - The second type of node is commonly named wait state or event wait. The activity that they represent needs to wait for a human or a system interaction to complete it. This means that the system or the human needs to create/fire an event when the activity is finished, in order to inform the process that it can continue to the next node. Wait states versus automatic nodes The difference between them is basically the activity nature. We need to recognize this nature in order to model our processes in the right way. 
As we have seen before, a "wait state" or an "event wait" situation could occur when we need to wait for some event to take place from the point of view of the process. These events are classified into two wide groups—Asynchronous System Interactions and Human tasks.

Asynchronous System Interactions

This means the situation when the process needs to interact with some other system, but the operation will be executed in some asynchronous way. For non-advanced developers, the word "asynchronous" could sound ambiguous or meaningless. In this context, we can say that an asynchronous execution takes place when two systems communicate with each other without blocking calls. This is not the common way of execution in our Java applications. When we call a method in Java, the current thread of execution will be blocked while the method code is executed inside the same thread. See the following example: The doBackup() method will block until the backup is finished. When this happens, the call stack will continue with the next line in the main class. This blocking call is commonly named a synchronous call. On the other hand, we have non-blocking calls, where the method is called but we (the application) are not going to wait for the execution to finish; the execution will continue to the next line in the main class without waiting. In order to achieve this behavior, we need to use another mechanism. One of the most common mechanisms used for this is messages. Let's see this concept in the following image: In this case, by using messages for asynchronous executions, the doBackup() method will be transformed into a message that will be taken by another thread (probably an external system) in charge of the real execution of the doBackup() code. The main class here will continue with the next line in the code. It's important for you to notice that the main thread can end before the external system finishes doing the backup.
That's the expected behavior, because we are delegating the responsibility for executing the backup code to the external system. But wait a minute, how do we know if the doBackup() method execution finished successfully? In such cases, the main thread or any other thread should query the status of the backup to know whether it is ready or not.

Human tasks

Human tasks are also asynchronous; we can see exactly the same behavior that we saw before. However, in this case, the executing thread will be a human being and the message will be represented as a task in the person's task list. As we can see in this image, a task is created when the Main thread's execution reaches the doBackup() method. This task goes directly to the corresponding user's task list. When the user has time or is able to do that task, he/she completes it. In this case, the "Do Backup" activity is a manual task that needs to be performed by a human being. In both situations, we have the same asynchronous behavior, but the parties that interact change and this causes the need for different solutions. For system-to-system interaction, we probably need to focus on the protocols that the systems use for communication. In human tasks, on the other hand, the main concern will probably be the user interface that handles the human interaction.

How do we know if a node is a wait state node or an automatic node? First of all, by the name. If the node represents an activity that is done by humans, it will always wait. In system interactions, it is a little more difficult to deduce this by the name (but if we see an automatic activity that we know takes a lot of time, it will probably be an asynchronous activity which will behave as a wait state). A common example could be a backup to tape, where the backup action is scheduled in an external system. If we are not sure about an activity's nature, we need to ask our stakeholders about it.
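The blocking call and its message-based counterpart can be sketched with the JDK's executor framework. This is a minimal sketch, not the book's code: doBackup() is a trivial placeholder, and the Future plays the role of the status query described above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackupDemo {

    // Synchronous call: the caller blocks until doBackup() returns.
    static String doBackup() {
        return "backup-done";
    }

    // Asynchronous call: submitting the task is the "message"; a worker
    // thread executes it while the caller continues immediately.
    static Future<String> doBackupAsync(ExecutorService worker) {
        return worker.submit(BackupDemo::doBackup);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        Future<String> pending = doBackupAsync(worker);
        // The main thread is free to do other work here...

        // Later, just as the text describes, we query the status:
        System.out.println("finished? " + pending.isDone());
        System.out.println("result:   " + pending.get()); // blocks only now

        worker.shutdown();
    }
}
```

The same shape applies to human tasks: the submitted "task" simply sits in a person's task list instead of a thread pool, and completion is signalled by the user rather than by a worker thread.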
We need to understand these two behaviors in order to know how to implement each node's executional behavior, which will be related to the specific node functionality.

Creating the execution concept in Java

With this class, we will represent each execution of our process, which means that we could have a lot of instances running at the same time with the same definition. Inside the package called org.jbpm.examples.chapter02.simpleGOP.execution (provided at), we will find the following class:

public class Execution {
    private Definition definition;
    private Node currentNode;

    public Execution(Definition definition) {
        this.definition = definition;
        // Setting the first Node as the current Node
        this.currentNode = definition.getNodes().get(0);
    }

    public void start() {
        // Here we start the flow leaving the currentNode.
        currentNode.leave(this);
    }

    ... (Getters and Setters methods)
}

As we can see, this class contains a Definition and a Node; the idea here is to have a currentNode that represents the node inside the definition to which this execution is currently "pointing". We can say that the currentNode is a pointer to the current node inside a specific definition. The real magic occurs inside each node. Now each node has the responsibility of deciding whether it must continue the execution to the next node or not. In order to achieve this, we need to add some methods (enter(), execute(), leave()) that will define the internal executional behavior for each node. We do this in the Node class to be sure that all the subclasses of the Node class will inherit the generic way of execution. Of course, we can change this behavior by overwriting the enter(), execute(), and leave() methods. We can define the Node.java class (which is also found in the chapter02.simpleGOPExecution project in the code bundle) as follows:

...
public void enter(Execution execution){
    execution.setCurrentNode(this);
    System.out.println("Entering " + this.getName());
    execute(execution);
}

public void execute(Execution execution){
    System.out.println("Executing " + this.getName());
    if(actions.size() > 0){
        Collection<Action> actionsToExecute = actions.values();
        Iterator<Action> it = actionsToExecute.iterator();
        while(it.hasNext()){
            it.next().execute();
        }
        leave(execution);
    } else {
        leave(execution);
    }
}

public void leave(Execution execution){
    System.out.println("Leaving " + this.getName());
    Collection<Transition> transitions = getLeavingTransitions().values();
    Iterator<Transition> it = transitions.iterator();
    if(it.hasNext()){
        it.next().take(execution);
    }
}
...

As you can see in the Node class, which is the most basic and generic implementation, three methods are defined to specify the executional behavior of one of these nodes in our processes. If you look carefully at these three methods, you will notice that they are chained: the enter() method is the first to be called, and at the end it calls the execute() method, which in turn calls the leave() method depending on the situation. The idea behind these chained methods is to demarcate different phases inside the execution of the node. All subclasses of the Node class will inherit these methods, and with them the executional behavior. The subclasses could also add other phases to demarcate a more complex lifecycle inside each node's execution. The next image shows how these phases are executed inside each node. As you can see in the image, the three methods are executed when the execution points to a specific node. It is also important to note that transitions have a Take phase, which is executed to jump from one node to the next. All these phases inside the nodes and in the transitions will let us hook custom blocks of code to be executed. One example of what we could use these hooks for is process auditing.
We could add, in the enter() method (the first method called in each node), a call to an audit system that takes the current timestamp, and measure the time the node uses until it finishes its execution, when the leave() method is called. Another important thing to notice in the Node class is the code inside the execute() method. A new concept appears: the Action interface that we see in that loop represents a pluggable way to include custom specific logic inside a node without changing the node class. This allows us to extend the node's functionality without modifying the business process graph, which means we can add a huge amount of technical detail without increasing the complexity of the graph. For example, imagine that each time our business process changes node, we need to store the data collected in each node in a database. In most cases, this requirement is purely technical and the business users don't need to know about it. With these actions we achieve exactly that: we only need to create a class with the custom logic that implements the Action interface and then add it to the node in which we want the custom logic to execute. The best way to understand how the execution works is by playing with the code. In the chapter02.simpleGOPExecution maven project, we have another test that shows the behavior of the Execution class. This test is called TestExecution and contains two basic tests that show how the execution works. If you don't know how to use maven, there is a quick start guide at the end of this article. You will need to read it in order to compile and run these tests.
public void testSimpleProcessExecution(){
    Definition definition = new Definition("myFirstProcess");
    System.out.println("########################################");
    System.out.println(" Executing PROCESS: " + definition.getName() + " ");
    System.out.println("########################################");
    Node firstNode = new Node("First Node");
    Node secondNode = new Node("Second Node");
    Node thirdNode = new Node("Third Node");
    firstNode.addTransition("to second node", secondNode);
    secondNode.addTransition("to third node", thirdNode);
    // Add an action in the second node. CustomAction implements Action
    secondNode.addAction(new CustomAction("First"));
    definition.addNode(firstNode);
    definition.addNode(secondNode);
    definition.addNode(thirdNode);
    // We can graph it if we want.
    // definition.graph();
    Execution execution = new Execution(definition);
    execution.start();
    // The execution leaves the third node
    assertEquals("Third Node", execution.getCurrentNode().getName());
}

If you run this first test, it creates a process definition as in the definition tests and then, using the definition, creates a new execution. This execution lets us interact with the process. As this is a simple implementation, we only have the start() method, which starts the execution of our process, executing the logic inside each node. In this case, each node is responsible for continuing the execution to the next node. This means that there are no wait state nodes inside the example process. If we had a wait state, our process would stop its execution at the first wait state, and we would need to interact with the process again in order to continue the execution. Feel free to debug this test to see how this works. Analyze the code and follow the execution step by step. Try to add new actions to the nodes and analyze how all the classes in the project behave. When you get the idea, the framework internals will be easy to digest.
Homework

We are ready to create our first simple GOP language. The idea here is to get our hands on the code and try to implement our own solution. Following the guidelines proposed in this article, with minimal functionality but with the full paradigm implemented, we will represent and execute our first process. We could try to implement our example about "Recycling Thing Co.", but we will start with something easier, so you can debug it and play with it until you get the main points of the functionality. In the following sections, I will give you all the information that you need in order to implement the new words of our language and the behavior that the process will have. This is quite a lot of homework but, trust me, it is really worth it. The idea of finishing this homework is to feel comfortable with the code and the behavior of our defined processes. You will also see how the methods are chained together in order to move the process from one node to the next.

Creating a simple language

Our language will be composed of subclasses of our previous Node class. Each of these subclasses will be a word in our new language. Take a look at the ActivityNode proposed in the chapter02.simpleGOPExecution project, inside the org.jbpm.examples.chapter02.simpleGOP.definition.more.expressive.power package. When we try to represent processes with this language, we will have some kind of sentence or paragraph expressed in our business process language. As in all languages, these sentences and each word will have restrictions and correct ways of use. We will see these restrictions in the Nodes description section of the article. Here we must implement four basic words of our simple language. These words will be start, action, human decision, and end, to model processes like this one. Actually, we can have any combination of different types of nodes mixed in our processes, as long as we always follow the rules/restrictions of the language that we implement.
These restrictions are always related to the words' meanings. For example, if we have the word "start" (which will be a subclass of Node) represented by the node StartNode, this node implementation could not have arriving transitions. This is because the node starts the process, so none of the other nodes can be connected to the start node. We can see a similar restriction with the end node, represented in the implementation by the EndNode class: because it is the last node in our processes, it cannot have any leaving transitions. With each kind of node, we are going to see that they have different functionality and a set of restrictions that we need to respect when we use these words in sentences defining our business processes. These restrictions can be implemented as 'not supported operation' and expressed with: throw new UnsupportedOperationException("Some message here");. Take a look at the EndNode class; you will see that the addTransition() method was overridden to achieve that.

Nodes description

In this section, we will see the functionality of each node. You can take this functionality and follow it in order to implement each node. You could also think of some other restrictions to apply to each node. Analyze the behavior of each method and decide, for each specific node type, whether the method's behavior should be kept as it is in the superclass or whether it needs to be overridden.

- StartNode: This will be the first node in all our processes. For this functionality, this node will behave as a wait state, waiting for an external event/signal/trigger that starts the process execution. When we create a new instance of a process execution, this node is selected from the process description and set as the current node in the current execution.
The start() method in the execution class will represent the event that moves the process from the first node to the second, starting the flow of the process. - EndNode: It will be our last node in all our processes. This node will end the life of our process execution instance. As you can imagine, this node will restrict the possibility of adding leaving transitions to this node. - ActionNode: This node will contain a reference to some technical code that executes custom actions that we need for fulfilling the process goal. This is a very generic node where we can add any kind of procedure. This node will behave as an automatic activity, execute all of these actions, and then leave the node. - HumanDecisionNode: This is a very simple node that gives a human being some information in order to decide which path of the process the execution will continue through. This node, which needs human interaction, will behave as a wait state node waiting for a human to decide which transition the node must take (this means that the node behaves as an OR decision, because with this node, we cannot take two or more paths at the same time). One last thing before you start with the real implementation of these nodes. We will need to understand what the expected results are in the execution stage of our processes. The following images will show you how the resultant process must behave in the execution stage. The whole execution of the process (from the start node to the end node) will be presented in these three stages. Stage one The following image represents the first stage of execution of the process. In this image, you will see the common concepts that appear in the execution stage. Every time we create a new process execution, we will start with the first node, waiting for an external signal that will move the process to the next node. - An execution is created using our process definition instance in order to know which activities the process will have. 
- The start node is selected from the process definition and placed inside the current node reference in the execution instance. This is represented by the black arrow pointing to the Start node.
- As the StartNode behaves as a wait state, it will wait for an external trigger to start the process execution. We need to know this, because we may think that creating an instance of Execution automatically begins the execution.

Stage two

The second stage of the execution of the process is represented by the following image:

- We can start the execution by calling the start() method of the Execution class. This will generate an event that tells the process to start flowing through the nodes.
- The process starts by taking the first transition of the start node, to the first action node. This node will only have one action hooked, so it will execute this action and then take the transition to the next node. This is represented by the dashed line pointing to the Action node. This node continues the execution and does not behave as a wait state—updating the current node pointer, held by the execution, to this node.
- The Human Decision node is reached; this means that some user must decide which path the process will continue through. It also means that the process must wait for a human being to be ready to decide. Obviously, the node behaves as a wait state, updating the current node pointer to this node. But wait a second—another thing happens here: the process returns the execution control to the main method. What exactly does this mean? Up to this point, the execution goes from one wait state (the start node) to another wait state (the human decision node) inside the method called start(), enclosing all the automatic nodes' functionality and leaving the process in a wait state. Let's analyze the method call stack trace: when the process reaches HumanDecisionNode.execute(), it doesn't need to do anything more.
It returns to the main() method and continues with the next line after the Execution.start() call.

Stage three

The following image represents the third stage of execution of the process:

- Now we are waiting for a human to make a decision. But wait a second: if the thread that called the start() method on the execution instance dies, we lose all the information about the current execution and we cannot get it back later. This means that we cannot restore this execution to continue from the human decision node. On the other hand, we could just sleep the thread while waiting for the human to be ready to make the decision, with something like Thread.currentThread().sleep(X), where X is expressed in milliseconds. But we really don't know how long we must wait until the decision is taken, so sleeping the thread is not a good option. We will need some kind of mechanism that lets us persist the execution information and restore this status when the user makes the decision. For this simple example, we just suppose that the decision occurs right after the start() method returns. So, we get the execution object and the current node (this will be the human decision node), and execute the decide() method with the name of the transition that we want to take as the argument.
- Let's run ((HumanDecisionNode) execution.getCurrentNode()).decide("transition to action three", execution). This is an ugly way to make the decision, because we are accessing the current node from the execution. We could create a method in the Execution class that wraps this ugly call; however, for this example, it is okay. You only need to understand that the call to the decide() method is how the user interacts with the process.
- When we make this decision, the next node is an automatic node like the first action node, and the process flows until the EndNode, which ends the execution instance.
As you can see, the wait states will need some kind of persistence solution in order to actually be able to wait for human or asynchronous system interactions. That is why you need to continue your testing of the execution with ((HumanDecisionNode) execution.getCurrentNode()).decide("transition to action three", execution), simulating the human interaction before the current thread dies.

Quick start guide to building Maven projects

A quick start guide for building Maven projects is as follows:

- Download and install Maven 2.x
- Add the Maven binaries to the PATH system variable
- Open a terminal/console
- Go to the appropriate directory and look for a file called pom.xml
- Type mvn clean install into the console; this will compile the code, run the tests, and package the project
- If you are using NetBeans, you can just open your project (with the Maven plugin activated)
- If you are using Eclipse, you need to run mvn eclipse:eclipse in the project directory, in order to generate the files needed for the project. Then you can just import the project into your workspace

Summary

In this article, we learnt the main points that you will need in order to understand how the framework works internally. We have analyzed why we need the Graph Oriented Programming approach to represent and execute our business processes. If you have read this article you may be interested to view:
https://www.packtpub.com/books/content/jbpm-developers-part-2
I am trying to sort an array from least to greatest using pointers instead of array subscripts. I am not sure where the problem is, but when I run this code, the values are returned in the same order that they were entered. The find_largest and swap functions both do exactly what they say. The selection_sort function uses a for loop to sort the numbers from right to left (greatest to smallest, right to left). I have been staring at this for a while now and it looks like it should work fine but, like I said, for some reason the numbers are returned in the same order they were entered. Here is my code:

#include <stdio.h>
#define N 5

void selection_sort(int *a, int n);
int *find_largest(int *a, int n);
void swap(int *p, int *q);

int main(void)
{
    int i;
    int a[N];

    printf("Enter %d numbers to be sorted: ", N);
    for (i = 0; i < N; i++)
        scanf("%d", (a+i));

    selection_sort(a, N);

    printf("In sorted order:");
    for (i = 0; i < N; i++)
        printf(" %d", *(a+i));
    printf("\n");

    return 0;
}

void selection_sort(int *a, int n)
{
    int i = 0;
    int *largest;

    for (i = 0; i < n; i++) {
        largest = find_largest(a, n-i);
        swap(largest, a+(n-1-i));
    }
}

int *find_largest(int *a, int n)
{
    int *p = a;
    int *largest = p;

    for (p = a; p < a+n-1; p++) {
        if (*(p+1) > *p) {
            largest = (p + 1);
        }
    }
    return largest;
}

void swap(int *p, int *q)
{
    int *temp;
    temp = p;
    p = q;
    q = temp;
}

There are two mistakes in your code. One is logical, in the find_largest function:

int *find_largest(int *a, int n)
{
    int *p = a;
    int *largest = p;

    for (p = a; p < a+n-1; p++) {
        if (*(p+1) > *largest) {   // <---- here you were checking against *p
            largest = (p + 1);
        }
    }
    return largest;
}

The other is with the pointers in the swap function: it swapped the local pointer variables instead of the values they point to.

void swap(int *p, int *q)
{
    int temp;
    temp = *p;
    *p = *q;
    *q = temp;
}
This article presents a way to use WTL template classes on MFC window classes, that is, how to transform the MFC CWnd class to its ATL/WTL counterpart CWindow, while leaving the class usable from MFC code. Here you can find the required files, a detailed explanation in 10 steps and a working demo.

These are the basic steps to obtain hybrid MFC/WTL windows. They have been used on MFC dialog based projects, but should work with any MFC application.

In "stdafx.h":

// Add support for ATL/WTL
#define _WTL_NO_AUTOMATIC_NAMESPACE

This prevents the WTL headers from automatically merging the WTL namespace into the global namespace. This avoids conflicts with MFC classes with the same names, such as CRect, CDC and others.

#include <atlbase.h>
#include <atlapp.h>

extern WTL::CAppModule _Module;

The ATL/WTL code may access the global _Module variable, so it must have external linkage.

#include <atlwin.h>

We add the common WTL header here to exploit precompiled headers, but you may want to include it only where you really need it.

In your CWinApp-derived class:

/////////////////////////////////////////////////////////////////////////////
// The one and only CMixedWindowApp object

CMixedWindowApp theApp;
WTL::CAppModule _Module; // add this line

The global _Module variable must be defined somewhere, and this is a good place.

In InitInstance():

/////////////////////////////////////////////////////////////////////////////
// CMixedWindowApp initialization

BOOL CMixedWindowApp::InitInstance()
{
    // Initialize ATL
    _Module.Init(NULL, AfxGetInstanceHandle());
    ...
}

In ExitInstance():

int CMixedWindowApp::ExitInstance()
{
    // Terminate ATL
    _Module.Term();
    return CWinApp::ExitInstance();
}

Assume we want to add scrolling capabilities to a static bitmap control. We may want to use the WTL template class CScrollImpl<...> together with the MFC class CStatic, as there is no such window class in MFC.
// Add support for scrolling windows
#include <atlscrl.h>

Remember that if you didn't choose to use the precompiled header at point 3 above, you need to add that line here, before any other WTL header:

// Make this class CWindow compatible
#include "Wnd2Window.h"

You may want to put the above line into your precompiled header instead, like all the other WTL headers, if you use them many times, to speed up recompilation. That's completely your choice.

class CScrollPicture : public CWnd2Window<CScrollPicture, CStatic>,
                       public WTL::CScrollImpl<CScrollPicture>

Add a DoPaint() helper:

public:
    // Inline helper
    void DoPaint(WTL::CDCHandle dc)
    {
        OnDraw(CDC::FromHandle(dc));
    }

protected:
    // Implement painting with scroll support
    void OnDraw(CDC* pDC);

It may be a good idea not to mix the code too much, with MFC and WTL classes used everywhere, as it could become quite confusing. An efficient way could be defining inline helper functions that translate WTL arguments to their MFC counterparts, unless you want to write the required functions using only WTL. Note that you can safely mix WTL objects with MFC code, but you always need to specify the WTL namespace when appropriate. Using helper functions could also make writing "bridge" classes easier. For example, you could define a CScrollWnd class to reuse (through inheritance) whenever you need scrolling capabilities and custom drawing, declaring a virtual OnDraw() function like in CView and CScrollView:

public:
    // Inline helper
    void DoPaint(WTL::CDCHandle dc)
    {
        OnDraw(CDC::FromHandle(dc));
    }

protected:
    // Must be implemented in derived class
    virtual void OnDraw(CDC* pDC) = 0;

Well, nothing special or mysterious. The ATL class CWindow, which is used by WTL, has only one member variable, m_hWnd, which is exactly the same as the one you can find in the CWnd class. So all the CWindow member functions need just that member variable, which can also be found in a CWnd-derived class.
What I did was to copy all the CWindow members, except m_hWnd, to an include file and define a new template class:

template <class T, class TBase>
class CWnd2Window : public TBase

To use the template you need to pass both the derived class and the base class, as you can see in the example above about a scrolling control. Then I added the necessary code to enable WTL message maps. I implemented the MFC function DefWindowProc(), which is called after WindowProc(), when no entry is found in the MFC message map:

protected:
    virtual LRESULT DefWindowProc(UINT nMsg, WPARAM wParam, LPARAM lParam)
    {
        T* pT = static_cast<T*>(this);
        ATLASSERT(::IsWindow(pT->m_hWnd));
        LRESULT lResult;
        if (pT->ProcessWindowMessage(m_hWnd, nMsg, wParam, lParam, lResult))
            return lResult;
        return TBase::DefWindowProc(nMsg, wParam, lParam);
    }

Remember that WindowProc() is called first. If it is not overridden, or if you pass the message to the base implementation in CWnd, the MFC message handler is called. If the corresponding entry is not found in the MFC message map, or if you call the base implementation in your message handler, or if you explicitly call Default(), the message finally arrives at DefWindowProc(), which calls the ATL/WTL message map implementation. Note that you don't have to change the MFC message map macros, which skip to the MFC base class, ignoring the intermediate template CWnd2Window. I don't define an MFC message map there, so this is perfectly legal. I also commented out "dangerous" member functions already defined by MFC, such as Attach(), Detach(), CreateWindow() and DestroyWindow(), but there may be others to comment out that I'm not aware of. As far as I know, Attach/Detach deal with CWnd maps, while CreateWindow/DestroyWindow set up Windows hooks or call some virtual functions. So if you still want to use your class as a CWnd, you need to call the original functions, not those defined by CWindow.
There may also be some functions identical to the MFC ones that could be removed. Feel free to suggest improvements. Well, you have to pay attention to a few things. That's all. I just used this method a couple of times, so I'm not aware of any other problems. But I expect comments... I suppose you can freely use the code as long as you have the right to use ATL source code. There is no original code here, except for the CScrollPicture class, only a nice idea. All the source code by me is released to the public domain; you may do what you want with it. As for the CWnd2Window class, almost all the code is copyright of Microsoft Corporation.

In the attached demo project you can find a very simple implementation of a scrolling picture control. Since I subclass a CStatic control, I need to override the default behaviour sometimes. A much cleaner implementation would have used a CWnd, but this way I can show that message routing works as expected, even when mixing MFC message handlers with WindowProc(). To test the control, use the mouse on the scroll bars or move the focus to it and use the keyboard arrows, holding down the CTRL key to scroll by pages. There is no focus indicator, so you have to guess by exclusion. There is also a button to load external BMP files. Please note that this control is not meant to be fully featured, but only an example of the use of WTL templates in MFC projects. In particular, I expect bugs if the control is used in a resizable dialog as, in my experience, the WTL classes that implement scrolling are not "perfect" from that point of view. I'm working on a replacement class, but no time to release it yet. Any idea to improve this article, please let me know!
This article, along with any associated source code and files, is licensed under A Public Domain dedication.

#ifdef _AFX
#ifndef _WTL_NO_CSTRING
#define _WTL_NO_CSTRING 1
#endif // _WTL_NO_CSTRING
#define _CSTRING_NS
#endif // _AFX

Vincent_RICHOMME wrote: I would like to use a CSpinListBox in my MFC project on smartphone platform.

// in InitDialog handler
WTL::CSpinListBox MySpinLB = GetDlgItem(ID_MYSPINLB);
// now it is spinned, access it through MySpinLB
MySpinLB.AddString(L"Test");
// or through a MFC class if you prefer

roel_ wrote: Just comment out the functions in which the offending functions appear

#ifdef __IStream_INTERFACE_DEFINED__
... offending WTL::CImageList members ...
#endif // __IStream_INTERFACE_DEFINED__

#include <atlctrls.h>

#define VC_EXTRALEAN // Exclude rarely-used stuff from Windows headers
#include <afxwin.h>  // MFC core and standard components
#include <afxext.h>  // MFC extensions

// Add support for ATL/WTL
#define _WTL_NO_AUTOMATIC_NAMESPACE
#include <atlbase.h>
#include <atlapp.h>

extern WTL::CAppModule _Module;

#include <atlwin.h>
#include <atlctrls.h>

class CSplitterATL : public CWnd2Window<CSplitterATL, CWnd>,
                     public WTL::CSplitterImpl<CSplitterATL, false>

#include <atlsplit.h>

class CMixedWindowDlg : public CDialog
{
    ...
    WTL::CSplitterWindow m_ctlSplitter1;
    ...
}

In OnInitDialog:

// TODO: Add extra initialization here
CRect rect;
GetClientRect(&rect);
m_ctlSplitter1.Create(m_hWnd, rect, NULL, 0);
m_ctlSplitter1.SetSplitterPanes(m_ctlPicture1.m_hWnd, ::GetDlgItem(m_hWnd, IDC_LOAD), false);
m_ctlSplitter1.SetSplitterPos( rect.right/2 );

#include <afxcmn.h> // MFC support for Windows Common Controls

::DialogBoxParam(_Module.GetResourceInstance(), MAKEINTRESOURCE(T::IDD), ...);
::CreateDialogParam(_Module.GetResourceInstance(), MAKEINTRESOURCE(T::IDD), ...);
SKTN Solutions For C Part 3 - Online Article

Introduction

The tutorial series "SKTN Solutions for C" continues with this tutorial. In this part 3 of the series, we will take on a different set of problems, generally using loops. These problems may seem easy at first glance but, at the time of coding, they can turn out to be night burners. I hope you will enjoy and understand them. So, let us once again delve into the ocean of C.

Problem 11: Write a program to print all prime numbers from 1 to 300 using loops.

A prime number is a number which is only divisible completely by 1 and itself, for example 5, 7 and 13. The lowest prime number is 2, as 1 is not considered a prime number. We can divide each number between 1 and 300 by every number less than it. If none of them divides the number completely, then the number is a prime number. To check whether a number completely divides another, we will use the % (modulus) operator of C. This operator divides the first number by the second and returns the remainder. For example: 10%3 = 1 and 15%4 = 3. One more trick we can apply: no number greater than half of a number can divide it completely. For example, 100 can only be completely divided by 50 or less, that is, by numbers less than or equal to half of it. Let us C the code now,

#include <stdio.h>
#include <conio.h>

void main()
{
    int i, j, flag;
    for(i=2; i<=300; i++)
    {
        flag = 0;
        for(j=2; j<=i/2; j++)
            if(i%j == 0)
                flag = 1;
        if(flag == 0)
            printf("\t%d", i);
    }
    getch();
}

The i loop runs from 2 to 300 and provides each number for checking in the form of i. Every time the main loop runs, the following happens:

- An integer flag is set to zero initially.
- Another loop of j runs from 2 to (i/2) and checks if i can be completely divided by j. If it can, flag is set to 1.
- Flag is checked. If it is zero, then the number in i is prime.
This is because a zero in flag indicates that it was never set to 1 during step (b), which in turn means that i was never divided by j. In this way, all the prime numbers between 1 and 300 are printed on the screen. The getch() function is optional and is used just to halt the screen until the user presses a key.

Problem 12: Write a program to fill the entire screen with diamonds and hearts alternatively.

The ASCII value for heart is 3 and that of diamond is 4. ASCII stands for "American Standard Code for Information Interchange". Every character has an ASCII code. For example: 'A' has the ASCII code 65, 'B' is 66, etc. Similarly, the character that looks like a heart has an ASCII code of 3 and the diamond has an ASCII code of 4. To tackle this problem, we will store 3 and 4 in two chars and print them on screen. Let us C the code now,

#include <stdio.h>
#include <conio.h>

void main()
{
    char heart = 3, diamond = 4;
    int i;
    for(i=1; i<=1000; i++)
        printf("%c%c", heart, diamond);
    getch();
}

The logic of the program is quite straightforward. The loop runs 1000 times and prints two characters per pass, which makes 2000 characters in all — a general text screen has a resolution of 80x25, that is, 2000 characters at one time. In this way, the entire screen is filled with diamonds and hearts alternatively. The getch() function is optional and is used just to halt the screen until the user presses a key. To get any other character printed, just change the values of heart and diamond.

Problem 13: Write a program to calculate the factorial value of an integer using a loop.

The factorial of an integer can be calculated by the formula:

Factorial of x = x * (x - 1) * (x - 2) * (x - 3) * ... * 1

For example: the factorial of 5 is 5*4*3*2*1 = 120. The logic is to keep multiplying the numbers from 1 up to the number whose factorial is to be calculated.
Let us C the code now,

#include <stdio.h>
#include <conio.h>

void main()
{
    int i, factorial = 1, number = 5;
    for(i = 1; i <= number; i++)
        factorial = factorial * i;
    printf("Factorial is %d", factorial);
    getch();
}

Now, we have taken three int variables: i for running the loop, number for storing the input number whose factorial is to be calculated, and factorial for storing the factorial of the number. The main loop runs number times. It starts from 1 and keeps running until i reaches number. Each time, the value of factorial is multiplied by the value of i and stored back in factorial. In this way, the exact value of the factorial is printed on the screen. The getch() function is optional and is used just to halt the screen until the user hits a key. To get the factorial of a different number, just change the value of number.

Problem 14: Write a program to obtain the prime factors of a number.

The prime factors of a number are the prime numbers by which the number can be fully divided. For example: the prime factors of 24 are 2, 2, 2 and 3, because 2*2*2*3 = 24. Another example is 50, whose prime factors are 2, 5 and 5. Thus, this problem basically extends the prime number generating problem. To understand it, please first understand the program to print the prime numbers (Problem 11 of this tutorial). To solve this problem, we will generate all the prime numbers less than or equal to the input number. Then each prime number will be checked to see whether it can divide the input number or not, and it is printed as many times as it divides the input number.

Let us C the code now,

#include <stdio.h>
#include <conio.h>

void main()
{
    int i, j, flag, number = 24;
    for(i=2; i<=number; i++)
    {
        flag = 0;
        for(j=2; j<=i/2; j++)
            if(i%j == 0)
                flag = 1;
        if(flag == 0)
            while(number % i == 0)
            {
                printf("\t%d", i);
                number = number / i;
            }
    }
    getch();
}

The number whose prime factors are to be calculated is stored in number. The rest of the integers are used to calculate the prime numbers.
Inside the main loop, if i comes out to be a prime number, then in the while loop it is checked how many times number can be divided completely by i. Every time i divides number, the value of number is reduced by a factor of i. In this way, the prime factors of a number are printed on the screen. The getch() function is optional and is used just to halt the screen until the user hits a key. To get the prime factors of a different number, just change the value of number.

Problem 15: For three variables x, y, z, write a function to circularly shift their values to the right.

Right shifting the values means that if the variables initially hold the values 2, 3 and 4, then after a right shift they must hold the values 3, 4 and 2. To right shift the values, we will write a function to which the addresses of the three integers are passed and the values are interchanged. This is a slightly different problem, so first,

Let us C the code now,

#include <stdio.h>
#include <conio.h>

void shift_right(int *a, int *b, int *c)
{
    int temp = *a;
    *a = *b;
    *b = *c;
    *c = temp;
}

void main()
{
    int x=2, y=3, z=4;
    printf("Before Shifting: %d %d %d\n", x, y, z);
    shift_right(&x, &y, &z);
    printf("Right Shifted: %d %d %d\n", x, y, z);
    getch();
}

In the function shift_right, the addresses of the three integers are passed. The values are interchanged with the help of a temporary variable temp. As the values are passed by address, they are changed permanently and we do not need to return them. The values are initially taken in the variables x, y and z as 2, 3 and 4. After shifting, the values are printed as 3, 4 and 2. In this way, the values are right shifted and printed on the screen. The getch() function is optional and is used just to halt the screen until the user hits a key. We can right shift any other set of values by just changing the values of x, y and z.

About the Author: No further information.
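A footnote to Problems 11 and 14, not part of the original article: the trial-division loop above keeps testing divisors even after one is found, and it runs j all the way up to i/2. A common refinement is to stop at the first divisor and bound j by the square root of i, since divisors pair up around sqrt(i). A sketch of that idea (the function names here are my own):

```c
#include <stdio.h>

/* Returns 1 if n is prime, 0 otherwise.
   Stops at the first divisor found, and only tests j while j*j <= n,
   because divisors come in pairs (j, n/j) straddling sqrt(n). */
int is_prime(int n)
{
    int j;
    if (n < 2)
        return 0;
    for (j = 2; j * j <= n; j++)
        if (n % j == 0)
            return 0;   /* first divisor found: not prime */
    return 1;
}

/* Same output as Problem 11's program, using the faster test. */
void print_primes_up_to(int limit)
{
    int i;
    for (i = 2; i <= limit; i++)
        if (is_prime(i))
            printf("\t%d", i);
    printf("\n");
}
```

Calling print_primes_up_to(300) produces the same list as Problem 11; the difference only matters for much larger limits, but the early break is a good habit either way.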
http://www.getgyan.com/show/148/SKTN_Solutions_for_C_Part_3
On 7/11/07, Stefan O'Rear <stefanor at cox.net> wrote:
> Not very nicely.
>
> Option 1. Ignore purity
> Option 2. Ignore lists

Option 3. Use continuations

You maintain purity while keeping flexibility. One possible implementation:

> data Result a = Finished (a, [a])
>               | NeedInput (a -> Result a)

It's in the NeedInput case where we hide the input list. If, for some cutoff and some list, n elements were needed, then there will be (n-1) NeedInput's, one for each element except the first. After the last one, the Result will be Finished.

> accumUntilCutoff :: (Ord a, Num a) => a -> a -> Result a
> accumUntilCutoff = acc (0,[])
>   where
>     acc (s,p) cutoff x
>       | x >= cutoff = Finished (s+x, reverse $ x:p)
>       | otherwise   = NeedInput (acc (s+x, x:p) (cutoff-x))

Note how we explicitly "traverse" the "list".

> readUntilCutoff :: (Ord a, Num a, Read a) => a -> IO (a,[a])
> readUntilCutoff cutoff = get >>= parse . accumUntilCutoff cutoff
>   where
>     get = putStr "Enter an integer: " >> getLine >>= return . read
>     parse (Finished (s,p)) = return (s,p)
>     parse (NeedInput f)    = get >>= parse . f

Probably there's a better way of using continuations, but I think this suffices (and works).

Cheers,

-- Felipe.
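One thing the post leaves implicit is that the same accumUntilCutoff can also be driven purely from an ordinary list, with no IO at all, which makes it easy to test. A sketch of such a driver (the name runPure is mine, not from the thread; the type and accumulator are restated from the post so the block stands alone):

```haskell
data Result a = Finished (a, [a])
              | NeedInput (a -> Result a)

accumUntilCutoff :: (Ord a, Num a) => a -> a -> Result a
accumUntilCutoff = acc (0, [])
  where
    acc (s, p) cutoff x
      | x >= cutoff = Finished (s + x, reverse (x : p))
      | otherwise   = NeedInput (acc (s + x, x : p) (cutoff - x))

-- Feed elements from a plain list into the continuation until it finishes.
runPure :: [a] -> (a -> Result a) -> Maybe (a, [a])
runPure (x : xs) k = case k x of
  Finished r   -> Just r
  NeedInput k' -> runPure xs k'
runPure [] _ = Nothing  -- list exhausted before the cutoff was reached

main :: IO ()
main = print (runPure [1, 2, 3] (accumUntilCutoff (5 :: Int)))
```

Here runPure [1,2,3] (accumUntilCutoff 5) consumes 1 and 2 (each still below the remaining cutoff) and stops at 3, yielding Just (6, [1,2,3]).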
http://www.haskell.org/pipermail/haskell-cafe/2007-July/028402.html
Contributor, 2843 Points
May 04, 2011 07:46 PM | xequence | LINK

Well, it may be trivial for some, but for me it was a half-hour headache. Anywho, this is just a tip for anyone who has a multi-language project, on how to declare partial classes and methods for your LINQ to SQL classes. Adding a using directive for the namespace is not the way to go; why, I am not sure, but it did not allow the partial class to match up in the syntax I first tried. Instead, wrap the partial class in a namespace block. CSCode.DataContext is the namespace of my .dbml file, and Email is my table name. You will know it is working when you type "partial" inside of "public partial class Email {" and you get IntelliSense.

Hope this helps anyone who had this problem.

namespace CSCode.DataContext
{
    public partial class Email
    {
        // intellisense works
        partial void OnCreated()
        {
            // do work?
        }
    }
}

<codeSubDirectories>
    <add directoryName="VBCode" />
    <add directoryName="CSCode" />
</codeSubDirectories>
http://forums.asp.net/t/1678328.aspx?LINQ+Partial+Methods+with+Dual+Language+Project
SciPy v. Octave - Round one

June 17, 2012

The idea behind binomial sampling is simple. You run a set of pass/fail trials on samples taken from the full population and count the number of passes and fails. With [n] as the number of trials and [x] as the number of failures, your estimate of the probability of failure within the entire population, [p], is

[\hat{p} = \frac{x}{n}]

To figure out how good this estimate is, you calculate confidence limits. In the situation I was dealing with on Friday, I cared only about the lower confidence limit. After the trials were done, I wanted to be 95% confident that [p] was greater than some value. If you have lots of trials, the confidence limit calculation is pretty easy because you can take advantage of the central limit theorem and assume your estimate of [p] follows a normal distribution. Unfortunately, I wasn’t in that situation—the client wanted to know what could be learned about the probability of failure from a small number of samples.

Here’s how it’s calculated. Solve the nonlinear equation,

[\sum_{i=0}^{x-1} \frac{n!}{i! (n - i)!} q^i (1-q)^{n-i} = .95]

for [q]. That will be the 95% one-sided lower confidence limit for [p].

Because I was designing a sampling plan, I needed to calculate the lower confidence limits for several combinations of [x] and [n]. I started out with my old friend, Octave:

for n = 3:9
  for y = 0:(n-1)
    p = fsolve(@(q) binocdf(y, n, q) - .95, (y + .1)/n);
    printf("%6.4f ", p)
  endfor
  printf("\n")
endfor

This gave me a nice table of results:

0.0170 0.1354 0.3684
0.0127 0.0976 0.2486 0.4729
0.0102 0.0764 0.1893 0.3426 0.5493
0.0085 0.0628 0.1532 0.2713 0.4182 0.6070
0.0073 0.0534 0.1288 0.2253 0.3413 0.4793 0.6518
0.0064 0.0464 0.1111 0.1929 0.2892 0.4003 0.5293 0.6877
0.0057 0.0410 0.0977 0.1688 0.2514 0.3449 0.4504 0.5709 0.7169

The rows of the table correspond to [n] values of 3 through 9, and the columns correspond to [x] values of 1 through 9.
I didn’t bother calculating the values for [x = 0] because they’d all be zero. The upper right triangle of the table is blank because [x] can’t be greater than [n] (we can’t have more failures than trials).

The binocdf function does the work of the summation in the formula above. Because I have the variable y looping from 0 to n-1, it’s acting as [x-1] in the formula. The fsolve function is Octave’s nonlinear equation-solving routine. Its two arguments are the function to find the zeros of and the starting guess. The function is defined inline using Octave’s anonymous function syntax, which starts with @. Each time through the loops on n and y, fsolve finds the value of q for which binocdf(y, n, q) - .95 is zero.

The initial value, (y + .1)/n, was chosen by looking at this chart, from Dixon and Massey’s Introduction to Statistical Analysis. The lower confidence limits I want for [n = 5] can be read off the lower curve at the [X/N] (our [x/n]) values of 0.2, 0.4, 0.6, 0.8, and 1.0, so I had a decent idea of where to start for each set of solutions. The formula I chose didn’t give the most accurate initial guesses in the world, but it was simple and provided starting values close enough for convergence, which is all that matters.

I used Octave first because I needed to be confident that I would get the answers. I knew pretty much how I was going to proceed before starting; the only things I had to look up in the documentation were the syntax for anonymous functions, which I’d never used before, and the printf function, which I just needed to confirm the existence of—it works like every other printf I’ve seen.

With the real work done, I could relax and play around with SciPy. I Googled “scipy binomial cdf” and came up with this page in the docs.
Then I searched for “scipy nonlinear solver” and came up with two likely answers: the fsolve solver, which I believe works the same way the Octave solver of the same name works; and the newton solver, which, despite its name, uses the secant method if a derivative function isn’t supplied. Because the newton solver was designed for scalar functions and fsolve was designed for vectors, I figured newton would be the more appropriate choice. Given how little computation was necessary, I doubt my choice made much difference.

The Python script looked like this:

from scipy.stats import binom
from scipy.optimize import newton

for n in range(3, 10):
    for y in range(n):
        p = newton(lambda q: binom.cdf(y, n, q) - .95, (y + .1)/n)
        print "%6.4f " % p,
    print

It gave the same results as the Octave script. Apart from the import statements, the form of the script is the same as that of the Octave solution, which is encouraging. Also encouraging was how quickly I was able to find the functions I needed and how easy the documentation was to read. Not so encouraging were the two warnings Python spit out when I ran the script:

/Library/Python/2.7/site-packages/scipy-0.10.1-py2.7-macosx-10.7-x86_64.egg/scipy/stats/distributions.py:30: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
  import vonmises_cython
/Library/Python/2.7/site-packages/scipy-0.10.1-py2.7-macosx-10.7-x86_64.egg/scipy/stats/distributions.py:30: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility
  import vonmises_cython

The warnings were generated by the first import line. Since I’m pretty sure NumPy and SciPy were compiled locally when I installed them, I don’t know how or why there’d be a binary incompatibility. I suppose I should Google these warning messages to see if there’s a fix.

As I said, I got the right answers with SciPy, but I know that only because I did the problem with Octave first.
I’d like to be a little more certain that SciPy is working before I hand all my numerical work over to it.
http://leancrew.com/all-this/2012/06/scipy-v-octave-round-one/
#include <smartptr.h>

Inheritance diagram for SmartPtr:

Usage:

1. In a program block

SmartPtr<MyClass> ptr1(new MyClass);  // creates object 1
SmartPtr<MyClass> ptr2(new MyClass);  // creates object 2
ptr1 = ptr2;                          // destroys object 1
ptr2 = NULL;
ptr1 = new MyClass;                   // creates object 3, destroys object 2
ptr1->methodcall(...);

MyClass o1;
// ptr1 = &o1;  // DON'T ! only memory allocated by the new operator should be used

MyClass *o2 = new MyClass;
ptr1 = o2;
// ptr2 = o2;   // DON'T ! unless MyClass implements IRefCount
//              // try to use ptr1 = ptr2 instead, it's always safe

2. In a function call

void func(MyClass *o) {...}
...
SmartPtr<MyClass> ptr(new MyClass);
func(ptr);

3. As a return value

SmartPtr<MyClass> f()
{
    SmartPtr<MyClass> ptr(new MyClass);
    return ptr;
}

4. Accessing members

SmartPtr<MyClass> ptr(new MyClass);
ptr->ClassMember = 0;

Construct a smart pointer that points to nothing.

Construct a smart pointer to an object.

Construct a smart pointer that takes its target from another smart pointer.

Detach this pointer from the object before destruction.

Assign a 'smart' object to this smart pointer. This method is picked over __Assign(void *ptr) if T implements IRefCount. This allows some memory usage optimization.
Reimplemented in ParentPtr, ParentPtr< mxflib::Partition >, ParentPtr< mxflib::MDObject >, ParentPtr< mxflib::Track >, and ParentPtr< mxflib::Package >.

Get the contained pointer.

Get the contained object's refcount.

Assign another smart pointer.
Reimplemented in ParentPtr, ParentPtr< mxflib::Partition >, ParentPtr< mxflib::MDObject >, ParentPtr< mxflib::Track >, and ParentPtr< mxflib::Package >.

Assign pointer or NULL.

Give access to members of T.

Give const access to members of T.

Conversion to T* (for function calls).

Test for NULL.

Test for equality (i.e. do both pointers point to the same object?).

Test for inequality (i.e. do the pointers point to different objects?).
Comparison function to allow sorting by indexed value.

Get a cast version of the pointer. This is used via the SmartPtr_Cast() macro to allow MSVC 6 to work!! The reason for this is that MSVC 6 name mangling is based only on the function arguments, so it cannot cope when two functions differ in the template type but not in the argument list!! The solution is a dummy argument that gets filled in by the macro (to avoid messy code!).

Pointer to the reference counted object.
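The usage rules above (assignment destroys the previous target, the last holder deletes the object) are easier to internalize from a stripped-down model. The following is not mxflib's SmartPtr, just a minimal illustrative ref-counting pointer written here to demonstrate the same ownership semantics:

```cpp
#include <cassert>

// Minimal non-intrusive ref-counting pointer (illustrative only):
// the count lives beside the object; copying bumps it, assignment
// releases the old target, and the last holder deletes the object.
template <class T>
class RefPtr {
    T* ptr;
    int* count;
    void release() {
        if (ptr && --*count == 0) { delete ptr; delete count; }
    }
public:
    RefPtr() : ptr(0), count(0) {}
    explicit RefPtr(T* p) : ptr(p), count(p ? new int(1) : 0) {}
    RefPtr(const RefPtr& o) : ptr(o.ptr), count(o.count) { if (count) ++*count; }
    RefPtr& operator=(const RefPtr& o) {
        if (this != &o) {
            T* p = o.ptr;
            int* c = o.count;
            if (c) ++*c;   // addref first so self-target assignment is safe
            release();     // like "ptr1 = ptr2; // destroys object 1" above
            ptr = p;
            count = c;
        }
        return *this;
    }
    ~RefPtr() { release(); }
    T* operator->() const { return ptr; }
    T& operator*() const { return *ptr; }
    bool operator==(const RefPtr& o) const { return ptr == o.ptr; }
};

// Instrumented payload so the ownership rules can be observed in tests.
struct Probe {
    static int alive;
    Probe() { ++alive; }
    ~Probe() { --alive; }
};
int Probe::alive = 0;
```

With two RefPtr<Probe> holders, assigning one to the other destroys the first object immediately, and leaving scope destroys the second, mirroring the comments in usage section 1 above.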
http://freemxf.org/mxflib-docs/mxflib-1.0.0-docs/classmxflib_1_1_smart_ptr.html
Paulius Maruska
Member — Content Count: 355 — Community Reputation: 566 (Good)

std::vector and template problem...
Paulius Maruska replied to Rasmadrak's topic in General and Gameplay Programming

Quote: Original post by Rasmadrak
"In my code I do have a semicolon after the class declaration, and 'using namespace std'."

That is a bad idea. Your ResourceManager class is a template class, so I'm safe to assume that it is defined in a header file. Having "using namespace std" in a header file is a bad idea, because every source file that includes the header will also include the "using namespace std", which isn't always desired. So, just to be safe, I would suggest explicitly adding "std::" everywhere, instead of "using namespace std" in a header file. Just a tip. ;)

Looking for source control software
Paulius Maruska replied to Reddox's topic in For Beginners's Forum

Quote: Original post by Reddox
"Hmm, thanks for the replies :) It's that TortiseSVN one that is extremely buggy. It keeps crashing explorer, not pretty at all :) I guess that turned me off of SYN."

It's SVN, not SYN. What version of Windows are we talking about here? Maybe you're using something that isn't supported by TortoiseSVN? Either way, TortoiseSVN is very stable and not buggy. If you're experiencing problems, I'm pretty sure they're caused by some other shell extensions/software you have.

C++ diamond inheritance problem
Paulius Maruska replied to jorgander's topic in General and Gameplay Programming

Quote: Original post by jorgander
"I was hoping to achieve the functionality without adding any overhead, be it extra member variables or function parameters. Exactly like Base::Base() is called only once per DerivedBoth::DerivedBoth(), I'd like my own definable base function to be called once per derived function. Perhaps it is not possible though."

Well, you can get away with not adding any new members. However, it would require RTTI.
I'm not sure which of those would be better, but here's a proof of concept (it should compile & run without any changes - it worked for me):

#include <iostream>
#include <string>
#include <typeinfo>

using namespace std;

class Base
{
public:
    virtual void Func() { cout << "Base::Func()" << endl; }
};

class Derived1 : public virtual Base
{
public:
    virtual void Func()
    {
        cout << "Derived1::Func()" << endl;
        if(typeid(*this) == typeid(Derived1)) Base::Func();
    }
};

class Derived2 : public virtual Base
{
public:
    virtual void Func()
    {
        cout << "Derived2::Func()" << endl;
        if(typeid(*this) == typeid(Derived2)) Base::Func();
    }
};

class DerivedBoth : public Derived1, public Derived2
{
public:
    virtual void Func()
    {
        cout << "DerivedBoth::Func()" << endl;
        Derived1::Func();
        Derived2::Func();
        if(typeid(*this) == typeid(DerivedBoth)) Base::Func();
    }
};

void run(Base & obj, string name)
{
    cout << "======== " << name << endl;
    obj.Func();
    cout << "========" << endl;
}

int main()
{
    Derived1 d1;
    Derived2 d2;
    DerivedBoth db;
    run(d1, "Derived1 object");
    run(d2, "Derived2 object");
    run(db, "DerivedBoth object");
    return 0;
}

C++ diamond inheritance problem
Paulius Maruska replied to jorgander's topic in General and Gameplay Programming

Quote: Original post by jorgander
"Of course, as it stands now Base::Func will get called 3 times per call to DerivedBoth::Func, where I would like Base::Func to get called exactly once per call to Derived1::Func, Derived2::Func, or DerivedBoth::Func."

You can check in Base::Func whether it was already called.
Something like this (note that parent must be pointed somewhere sensible before Func is called):

class Base
{
public:
    Base * parent;
    bool funcCalled;
    virtual void Func()
    {
        if(!funcCalled)
        {
            // do some stuff
            parent->Func();
            funcCalled = true;
        }
    }
};

class Derived1 : public virtual Base
{
public:
    virtual void Func()
    {
        // do some stuff
        Base::Func();
    }
};

class Derived2 : public virtual Base
{
public:
    virtual void Func()
    {
        // do some stuff
        Base::Func();
    }
};

class DerivedBoth : public Derived1, public Derived2
{
public:
    virtual void Func()
    {
        Base::funcCalled = false;
        // do some stuff
        Derived1::Func();
        Derived2::Func();
        Base::Func();
    }
};

crash when i try to convert string to char*
Paulius Maruska replied to MadsGustaf's topic in For Beginners's Forum

Font::Render()
{
    RenderFont(WIDTH/2, 500, fontListBase, Fonts->question->Text.c_str());
}

What is Fonts? I couldn't find it in your code. Other than that, I would check that Fonts isn't NULL, then I would check that Fonts->question isn't NULL. Maybe try to step through this function in the debugger and watch these values?

How to lock a file in windows?
Paulius Maruska replied to spree's topic in General and Gameplay Programming

Quote: Original post by spree
"Hey Guys, I'm developing using VS 2003 in C/C++ and I need to lock a file so I can use it as a mutex between threads and processes. I tried this code example from MSDN ( )"

You are using VS 2003, but you are trying to compile an example written for VS 2005 (I marked the parts of your post that give me this information). I'm not sure about the _lock_file and _unlock_file functions (I never used them), but I'm 100% sure that fopen_s and sprintf_s (and all the other functions ending with _s) were added in VS 2005. You can either try to install the latest version of the Windows SDK (it comes with the latest version of the compiler and standard library), or you can get a newer version of VS (either 2005 or 2008 will do the trick).

Question about simple C language data types.
Paulius Maruska replied to grill8's topic in General and Gameplay Programming

What about long long (__int64)?

ostream and stuff
Paulius Maruska replied to rozz666's topic in General and Gameplay Programming

It doesn't have to be derived from std::ostream. All you need is to write the operators.

Quote: Zao
"struct Foo { Foo(int x) : x(x) { } int x; }; This is completely unambiguous. The member variable x gets initialized with the parameter x."

It may be unambiguous to the compiler, but what about the reader? I think it would, at least, confuse the reader... On the other hand, maybe the reader should get familiar with it so that it wouldn't confuse him... ;)

Quote: Original post by ToohrVyk
"You can. struct Foo { Bar x, y; Foo(Bar x) : x(x), y(this -> x) {} };"

Yes, but every time I'd do that, I'd get a warning about the this pointer being used in the constructor initialization list. And considering the fact that I always have "Treat Warnings as Errors" enabled, this isn't going to work for me.

[C, C++] Is this function kosher?
Paulius Maruska replied to fpsgamer's topic in For Beginners's Forum

Quote: Original post by TheOddMan
  Quote: Original post by SiCrane
    Quote: Original post by TheOddMan
    "Post increments are always carried out after a function call."
  "No, they aren't. Function calls are sequence points; all function arguments must be fully evaluated before the function call."
"Yes it is, that's the whole point of postcrement / precrement operators."

SiCrane is right - all arguments are evaluated before the function call.
So, if you have foo(i++), the i++ part will be fully evaluated before execution enters the function foo. It's just that the value passed to the function will be a copy of the value of i before it was incremented.

Quote: Original post by TheOddMan
  Quote: Original post by SiCrane
    Quote: Original post by TheOddMan
    "In the example you give, *tb++ = *ta++ is actually shorthand for: operator*( tb++ ) = operator*( ta++ );"
  "No, it isn't. Prefix * is not, and can never be, a function call when applied to char * variables."
"You've obviously never heard of operator overloading then?"

You can not overload operators of built-in types. You can only overload operators in which at least one of the operands is your own class or structure. The pointer dereference operator is a unary operation, so overloading it for char* is impossible.

Templates and Typedefs
Paulius Maruska replied to D_Tr's topic in General and Gameplay Programming

You can always enclose the typedefs in a struct!

template< Int32 N >
struct FixedType
{
    typedef Fixed< UInt64, N > U64;
    typedef Fixed< UInt32, N > U32;
};

New to this, a few questions.
Paulius Maruska replied to Amazing_00's topic in For Beginners's Forum

Quote: Original post by CodedFire
"I've also noticed my rep has taken a hit because I voiced my opinion. How very democratic......."

Dev-C++ is one of the worst choices today. Everyone on this forum knows that, so if you're recommending it to someone, then people are rating you as being unhelpful and/or unfriendly. No surprise there, really. As for your arguments: Visual C++'s auto-complete features won't actually auto-complete syntax things (like loops or ifs) - it only does auto-complete on functions, classes and stuff like that. I really see no reason why one should choose Dev-C++ over any other C++ IDE. If you really don't like Microsoft, pick up CodeGear Turbo C++ or get Code::Blocks if you want... There is really no reason to get Dev-C++.
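Returning to the foo(i++) exchange above, the claim is easy to check mechanically: the argument expression is fully evaluated (and i incremented) before the call, while the function receives the pre-increment value, because that is what i++ yields. A small self-contained check (names here are mine, for illustration):

```cpp
#include <cassert>

// Records the value the callee actually received.
static int observed = 0;

void foo(int v) { observed = v; }

// Demonstrates both halves of the sequence-point claim:
// foo sees the old value of i, and i is already incremented afterwards.
int post_increment_demo()
{
    int i = 5;
    foo(i++);                  // foo receives 5; i becomes 6 before the call returns
    return observed * 10 + i;  // encodes both facts: observed=5, i=6
}
```

Calling post_increment_demo() packs the two observations into one number, so a single assertion covers the whole argument made in the thread.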
No file output in ATL application
Paulius Maruska replied to Lykaios's topic in General and Gameplay Programming

I can't see anywhere in your given code that you are flushing or closing the file (in which case it would be auto-flushed). You can also put std::endl into the stream - as far as I know, it flushes the stream too. On the other hand, the file should be auto-flushed and auto-closed when the fout destructor is called, so I'm guessing that it's not called? And yeah, globals are bad. If that fout were a local variable in the function, the destructor would take care of everything when you're returning from the function...

Web game - Candy mountain massacre
Paulius Maruska replied to 3DRTcom's topic in Your Announcements

Yeah. The game is awesome! :)
https://www.gamedev.net/profile/97077-paulius-maruska/
Hi Guys,

I have objects in an AWS S3 bucket. I want to open an S3 object as a string with Boto3. How can I do that?

Hi @akhtar,

If you want to return a string, you have to decode it using the right encoding, as shown below:

import boto3
import botocore

s3 = boto3.resource('s3')
obj = s3.Object(bucket, key)
obj.get()['Body'].read().decode('utf-8')
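As a side note, the accepted answer imports botocore but never uses it; the usual reason to have it around is catching ClientError when the key does not exist. The helper below sketches that pattern (the function names are mine, not part of the boto3 API, and the bucket/key values are placeholders). The decode step is factored out so it can be exercised without AWS credentials:

```python
def decode_streaming_body(raw: bytes, encoding: str = "utf-8") -> str:
    # The .decode(...) step from the answer, shown standalone.
    return raw.decode(encoding)

def read_s3_text(bucket: str, key: str, encoding: str = "utf-8"):
    # Requires boto3 and configured AWS credentials at call time.
    import boto3
    import botocore.exceptions

    obj = boto3.resource("s3").Object(bucket, key)
    try:
        return decode_streaming_body(obj.get()["Body"].read(), encoding)
    except botocore.exceptions.ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchKey":
            return None  # treat a missing key as "no value"
        raise
```

Usage would look like read_s3_text("my-bucket", "path/to/file.txt"), returning the file contents as a string or None when the key is absent.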
https://www.edureka.co/community/85939/how-to-open-s3-object-as-a-string-with-boto3
After further thought: This section is about the syntax operators, not the special methods. The syntax operators never evaluate to NotImplemented, as they (apparently) interpret its return from a special method the same as a raising of TypeError, and they always raise TypeError when neither the op nor its reflection is supported. So there should be no mention of NotImplemented here, just a reference to 3.3.

#15997 is related to my 'wonder' but not directly relevant to a patch for this. Please submit a draft patch when you have one.

I determined that 'raise TypeError' and 'return NotImplemented' both result in the call of the reflected method, at least for a couple of cases. (And the same seems true for arithmetic ops too.)

class C():
    def __ge__(self, other):
        # print("in C.__ge__", end='')
        return True
    def __add__(self, other):
        return 44
    __radd__ = __add__

class O():
    def __le__(self, other):
        # print("in O.__le__")
        return NotImplemented
    def __add__(self, other):
        return NotImplemented

c = C()
o = O()
ob = object()

print(c >= o, o <= c, ob <= c)  # True True True
# print(ob <= ob)  # raises TypeError

print(c + o, o + c, ob + c)  # 44 44 44
# print(ob + ob)  # raises TypeError

# print(ob >= o)  # with the O.__le__ print uncommented:
# in O.__le__
# so the non-implemented reflected o <= ob *is* called
# TypeError: unorderable types: object() >= O()
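The behavior the message describes can be reduced to a minimal pair of classes: returning NotImplemented from the left operand's __add__ makes Python try the right operand's reflected __radd__, and TypeError is raised only when both sides decline. A sketch (the class names are mine, for illustration):

```python
class Left:
    def __add__(self, other):
        return NotImplemented  # decline: let the other operand try

class Right:
    def __radd__(self, other):
        return "handled by Right.__radd__"

# Left declines, so the syntax operator falls back to Right's
# reflected method instead of raising immediately.
result = Left() + Right()
print(result)
```

When both operands decline (for example Left() + Left(), where no __radd__ exists), the + operator itself raises TypeError, never surfacing NotImplemented to the caller, which is exactly the doc distinction the message argues for.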
https://bugs.python.org/msg170936
One of the biggest problems Dojo has when trying to attract new users is the fact that our community rarely promotes one method of writing code as THE WAY. Following suit, I’m not going to say that one method of doing things is better than the next. Ultimately, arguing that there is one catch-all solution to the problems coders face when developing for the web is one way to drive away users as soon as they realize that they want something more. With this in mind, I’ll outline three strategies available with The Dojo Toolkit. Each offers different levels of control and different benefits. Hopefully, some of the negativity aimed toward some of these strategies is only because of a lack of understanding about their purpose. After some of that has been dispelled, I hope to kill any remaining bad feelings by showing that some of the things you might feel that the toolkit imposes on you can be fixed — with either a simple alias or a simple function. These three strategies come from three different areas of The Dojo Toolkit: Dojo Base, Dojo Core, and Dijit. Dojo Base is what you get in dojo.js, out of the box. Dojo Core and Dijit modules appear in the dojo and dijit directories respectively, and you’ll be able to use a dojo.require statement to load them. Base: Write a dojo.query “Plugin” By far one of the most beloved, and likely one of the most popular, patterns for development is the one favored by many jQuery developers. I found the rules outlined in Learning jQuery‘s article on A Plugin Development Pattern to be pretty sage advice. Putting it simply, this pattern focuses around finding a set of nodes and executing some functionality on each node. Passing a property bag of options to this function, as suggested by the Learning jQuery article, is also typical of this pattern. 
To better visualize it, see the following code: dojo.addOnLoad(function(){ dojo.query('.hilight').hilight({ background: "blue", foreground: "white" }); }); To begin with, one of the ideas here is that not only is this pattern good for adding functionality to dojo.query, but also as a namespace. In this case, aliasing dojo.NodeList.prototype to dojo.fn creates a much more appropriate namespace. We’ll quickly define this: dojo.fn = dojo.NodeList.prototype; Now, if you’re familiar with the syntax, the following code will look familiar. If you’re not, go through that article, it’s full of good stuff. It’s specific to jQuery, but looks almost exactly the same with Dojo. Below is a version of the article’s final code snippet, adapted for Dojo. dojo.fn.hilight = function(options) { var opts = dojo.mixin({}, dojo.fn.hilight.defaults, options); return this.forEach(function(node) { $node = dojo.query(node); $node.style({ backgroundColor: opts.background, color: opts.foreground }); node.innerHTML = dojo.fn.hilight.format(node.innerHTML); }); }; dojo.fn.hilight.format = function(txt) { return '' + txt + ''; }; dojo.fn.hilight.defaults = { foreground: 'red', background: 'yellow' }; Benefits With this format, everything can be found in one place: hooks for functionality, exposed defaults, and easy access through dojo.query. It’s also extremely lightweight and will work with just a simple script include of dojo.js. Try This Create a directory next to your local dojo directory, call it query. Create a file named hilight.js and add dojo.provide("query.hilight"); to the top of the file, followed by the code we wrote above. When using this in your page, you can either point a script tag at the file, or you can use dojo.require("query.hilight"); in your JavaScript to dynamically include it. In Action Core: Write a Behavior On one hand, writing a behavior is just a different way of using dojo.query. 
Where the query syntax breaks down is when you're adding or replacing HTML in your page, and you have to start writing extra code to make sure that you can apply your plugins without accidentally re-applying them. dojo.behavior fixes this for you. We make the dojo.behavior module available in our source using the following call:

dojo.require("dojo.behavior");

We can then take the plugin that we've created above and adapt it to dojo.behavior. Like we did above with the query directory, we'll create a directory called plugins and use the file hilight.js.

dojo.provide("plugins.hilight");

plugins.hilight.behavior = {
    ".hilight": {
        found: function(node){
            var opts = plugins.hilight.defaults;
            $this = dojo.query(node);
            $this.style({
                backgroundColor: opts.background,
                color: opts.foreground
            });
            node.innerHTML = plugins.hilight.format(node.innerHTML);
        }
    }
};

plugins.hilight.format = function(txt) {
    return '' + txt + '';
};

plugins.hilight.defaults = {
    foreground: 'red',
    background: 'yellow'
};

You might have noticed a couple of differences between this and the dojo.query version: our selector and options are hard-wired. It might be improper to call these drawbacks, since the very purpose of this syntax is to make sure that a page is upgraded a certain way, and that even if we change some HTML in the page, we don't end up with mismatched upgrades. Using this behavior, and showing how it can be re-applied, is shown in the code below:

dojo.require("dojo.behavior");
dojo.require("plugins.hilight");

dojo.behavior.add(plugins.hilight.behavior);

dojo.addOnLoad(function(){
    dojo.body().innerHTML += 'Highlight me!';
    dojo.behavior.apply();
});

Benefits

Behaviors use a much tighter contract than a dojo.query plugin does. You don't have to worry whether the DOM has been loaded when adding a behavior; the module takes care of it for you. Finally, it's easy to keep a document up to date without having to keep track of everything yourself.

Try This

A great pattern used in many projects is to use an XHR call (or a stored string) to inject HTML into a node, followed by a standard pattern of replacing variables, adding events, and showing or hiding nodes. Write a behavior designed to manipulate this HTML injection to really get a feel for why this pattern is so well suited to certain tasks.

In Action

Dijit: Write a Widget

Unfortunately, when new users explore Dojo, they head straight over to Dijit and think that this potentially heavyweight system is the only way to upgrade their HTML. They see that some HTML nodes have custom attributes and that you have to build real objects in order to use it — and it all gets a little overwhelming. We have to build a real object because there's a new concept introduced through Dijit. Not only are we upgrading the node, but we're creating an object that's bound to the node, can hold extra data for us, and exposes an API for manipulating the node. Have you guessed what the widget is going to do?
Try This

A great pattern used in many projects is to use an XHR call (or a stored string) to inject HTML into a node, followed by a standard pattern of replacing variables, adding events, and showing or hiding nodes. Write a behavior designed to manipulate this injected HTML to really get a feel for why this pattern is so well suited to certain tasks.

In Action

Dijit: Write a Widget

Unfortunately, when new users explore Dojo, they head straight over to Dijit and think that this potentially heavyweight system is the only way to upgrade their HTML. They see that some HTML nodes have custom attributes and that you have to build real objects in order to use it, and it all gets a little overwhelming. We have to build a real object because there's a new concept introduced through Dijit: not only are we upgrading the node, but we're creating an object that's bound to the node, can hold extra data for us, and exposes an API for manipulating the node. Have you guessed what the widget is going to do? The answer is, of course, highlighting. We'll create it in widgets/Hilight.js (uppercased to indicate that it's meant to be instantiated). You'll often see widgets created using dojo.declare. You could create the same object structure by hand, but the Dijit widget system depends on a lot of variables getting set and functions getting called that you might otherwise forget about.

```javascript
dojo.provide("widgets.Hilight");
dojo.require("dijit._Widget");

dojo.declare("widgets.Hilight", dijit._Widget, {
    background: "yellow",
    foreground: "red",
    constructor: function(props, node){
        props = dojo.mixin({}, this, props);
        dojo.style(node, {
            backgroundColor: props.background,
            color: props.foreground
        });
        node.innerHTML = this.format(node.innerHTML);
        dijit._Widget.call(this, props, node);
    },
    format: function(txt) {
        return '' + txt + '';
    }
});
```

There are so many ways to now use this object that it can be a little scary. You'll mainly see the following syntax used:

Highlight me!Highlight me!
All of this working together means that these nodes will automatically get marked up when the DOM is loaded. Not only that, but you can get the object bound to each node by using:

```javascript
dijit.byNode(node);
```

And finally, the parser uses the foreground and background attributes to create the props object passed to the constructor. For those who don't want to use these custom HTML attributes, you can just create this object programmatically. We'll keep the syntax we've been using and use a class instead. You might be surprised at how easy this is:

```javascript
dojo.addOnLoad(function(){
    dojo.query(".hilight").instantiate(widgets.Hilight, {
        foreground: "black"
    });
});
```

Benefits

When your node doesn't stand on its own, having an object bound to the node is a massive upgrade in power. It can keep track of state, and it can provide all sorts of functions for manipulating both the underlying DOM and any data it's keeping track of. Best of all, since it's a real object, we can create a new object that inherits properties and functionality instead of having a single global defaults object.

Try This

The dijit._Widget constructor adds an instance field, domNode, that references the bound node. Knowing this, we can add functions that manipulate the node well after construction.

```javascript
widgets.Hilight.prototype.setForeground = function(color){
    dojo.style(this.domNode, "color", color);
};

widgets.Hilight.prototype.setBackground = function(color){
    dojo.style(this.domNode, "backgroundColor", color);
};
```

Adding an ID to our node and using dijit.byId, we can see how this would be used:

```javascript
setTimeout(function(){
    var instance = dijit.byId("changeme");
    instance.setForeground("blue");
    instance.setBackground("orange");
}, 10000);
```

Highlight me!

In Action

See the html upgrades demo in action
https://www.sitepen.com/blog/2008/04/28/3-ways-to-upgrade-your-html-with-dojo/
I am planning to build a Python wrapper around a REST API.

- I want to support friendly functions, with autocompletion in the editor.
- I want 100% test coverage.
- I want the package to be modular, so that any changes to the API can be easily reflected.
- I want to support both async and normal methods (the core will be implemented through async, but there will be a wrapper for those who don't want to use async).

example:

```python
import asyncio
from library import Api

api = Api()

async def do_job():
    thing = await api.get_thing()
    print(thing)

asyncio.run(do_job())
```

or

```python
from library.sync import Api

api = Api()
thing = api.get_thing()
print(thing)
```

The data that will be passed to the functions should be validated. Planning to use pydantic. Planning to create a sub-package library.models to contain all the modules defining the models.

How should I design the entire thing? How should I structure the project? Best practices? Guides?

Top comments (2)

Here is a 15-part guide that I wrote that addresses 3 of your 4 requirements: pretzellogix.net/2021/12/08/how-to...

Eventually, I'll get around to writing a few extra chapters on how to make async methods. In the meantime, I think this will get you started!

You could guide yourself by the code of FastAPI: github.com/tiangolo/fastapi

They are using wrappers, so maybe it could be useful.
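One common way to get the `library.sync.Api` described in the question without duplicating logic is a thin blocking facade that runs each coroutine of the async core to completion. This is only a sketch under stated assumptions: `Api`, `get_thing`, and the returned payload are hypothetical stand-ins for the poster's design, not an existing library.

```python
import asyncio


class Api:
    """Async core: every public method is a coroutine."""

    async def get_thing(self):
        # Stand-in for a real HTTP call (e.g. via aiohttp or httpx).
        await asyncio.sleep(0)
        return {"id": 1, "name": "thing"}


class SyncApi:
    """Blocking facade: drives the event loop internally."""

    def __init__(self):
        self._api = Api()

    def get_thing(self):
        # Each sync method simply runs the matching coroutine.
        return asyncio.run(self._api.get_thing())
```

A real implementation would generate the facade methods (for example in `__getattr__`) rather than writing each one by hand, and would need care if called from inside an already-running event loop, where `asyncio.run` raises `RuntimeError`.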
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aahnik/i-want-to-build-a-python-wrapper-for-an-api-how-should-i-approach-it-pk4
Pike Series Release Notes

12.0.3

Bug Fixes

- [bug 1801873] This fixes an issue where an LDAP-backed domain could not be deleted due to the existence of shadow users in the SQL database.

12.0.2

Bug Fixes

- [bug 1753585] LDAP attribute names are now matched case-insensitively to comply with LDAP implementations.

12.0.1

Bug Fixes

- [bug 1718747] Fixes a regression where deleting a domain with users in it causes a server error. This bugfix restores the previous behavior of deleting the users namespaced in the domain. This only applies when using the SQL identity backend.
- [bug 1727726] All users and groups are required to have a name. Prior to this fix, Keystone was allowing LDAP users and groups whose name contains only white space. Keystone will now ignore users and groups whose value for the LDAP attribute that Keystone has been configured to use for that entity's name consists only of white space.
- [bug 1740951] A new method was added so that oslo.policy sample generation scripts can be used with keystone. The oslopolicy-policy-generator script will now generate a policy file containing overrides and defaults registered in code.
- [bug 1763824] The JSON Schema implementation of nullable in keystone.common.validation now properly adds None to the enum if the enum exists.

Other Notes

- [bug 1718747] As part of solving a regression in the identity SQL backend that prevented domains containing users from being deleted, a notification callback was altered so that users would only be deleted if the identity backend is SQL. If you have a custom identity backend that is not read-only, deleting a domain in keystone will not delete the users in your backend unless your driver has an is_sql property that evaluates to true.

12.0.0

New Features

- [bug 1543048] Added an option --check to keystone-manage db_sync; the option will allow a user to check the status of rolling upgrades in the database.
- [bug 1668503] Keystone now supports multiple forms of password hashing.
Notably bcrypt, scrypt, and pbkdf2_sha512. The options are now located in the [identity] section of the configuration file. To set the algorithm, use [identity] password_hash_algorithm. To set the number of rounds (time-complexity, and memory use in the case of scrypt), use [identity] password_hash_rounds. scrypt and pbkdf2_sha512 have further tuning options available. Keystone now defaults to using bcrypt as the hashing algorithm. All passwords will continue to function with the old sha512_crypt hash, but new password hashes will be bcrypt.

Upgrade Notes

- The identity backend driver interface has changed. A new method, unset_default_project_id(project_id), was added to unset a user's default project ID for a given project ID. Custom backend implementations must implement this method.
- [bug 1702211] The password created_at field under some versions/deployments of MySQL would lose sub-second precision. This means that it was possible for passwords to be returned out of order when changed within one second (especially common in testing). This change stores password created_at and expires_at as an integer instead of as a DATETIME data type.
- [bug 1705485] The change_password protection policy can be removed from file-based policies. This policy is no longer used to protect the self-service password change API since the logic was moved into code. Note that the administrative password reset functionality is still protected via policy on the update_user API.
- If performing rolling upgrades, set [identity] rolling_upgrade_password_hash_compat to True. This will instruct keystone to continue to hash passwords in a manner that older (pre-Pike release) keystones can still verify. Once all upgrades are complete, ensure this option is set back to False.
- The resource backend cannot be configured to anything but SQL if the SQL identity backend is being used. The resource backend must now be SQL, which allows for the use of foreign keys to domains/projects wherever desired.
This makes managing project relationships and such much more straightforward. The inability to configure non-SQL resource backends has been in Keystone since at least Ocata. This eliminates some complexity and prevents the need for some really ugly back-port SQL migrations in favor of a better model. Resource is highly relational and should be SQL based.

Deprecation Notes

- [DEFAULT] crypt_strength is deprecated in favor of [identity] password_hash_rounds. Note that [DEFAULT] crypt_strength is still used when [identity] rolling_upgrade_password_hash_compat is set to True.
- The UUID token provider [token] provider=uuid has been deprecated in favor of Fernet tokens [token] provider=fernet. With Fernet tokens becoming the default, UUID tokens can be slated for removal in the R release. This also deprecates token-bind support, as it was never implemented for Fernet. The token persistence driver/code (SQL) is deprecated with this patch since it is only used by the UUID token provider.
- [blueprint deprecated-as-of-pike] The v2.0 auth and ec2 APIs were already marked as deprecated in the Mitaka release, although no removal release had yet been identified. These APIs will now be removed in the 'T' release. The v3 APIs should be used instead.

Security Issues

- [bug 1703369] There was a typo for the identity:get_identity_provider rule in the default policy.json file in previous releases. The default value for that rule was the same as the default value for the default rule (restricted to admin), so this typo was not readily apparent. Anyone customizing this rule should review their settings and confirm that they did not copy that typo, particularly given that the default rule is being removed in Pike with the move of policy into code.
- The use of sha512_crypt is considered inadequate for password hashing in an application like Keystone. The use of bcrypt or scrypt is recommended to ensure protection against password cracking utilities if the hashes are exposed.
This is due to the time-complexity requirements for computing the hashes in light of modern hardware (CPU, GPU, ASIC, FPGA, etc.). Keystone has moved to bcrypt as a default and no longer hashes new passwords (and password changes) with sha512_crypt. It is recommended passwords be changed after upgrade to Pike. The risk of password hash exposure is limited, but for the best possible protection against cracking the hash it is recommended passwords be changed after upgrade. The password change will then result in a more secure hash (bcrypt by default) being used to store the password in the DB.

Bug Fixes

- [bug 1523369] Deleting a project will now cause it to be removed as a default project for users. If caching is enabled, the changes may not be visible until the user's cache entry expires.
- [bug 1615014] Migration order is now strictly enforced. This ensures the upgrade process is done in the order it is officially documented and supported: starting with expand, then migrate, and finishing with contract.
- [bug 1689616] Significant improvements have been made when performing a token flush on massive data sets.
- [bug 1670382] The LDAP config option group_members_are_ids has been added to the whitelisted options, allowing it to be used in the domain config API and keystone-manage domain_config_upload.
- [bug 1676497] bindep now correctly reports the openssl-devel binary dependency for rpm distros instead of libssl-dev.
- [bug 1684994] This catches the ldap.INVALID_CREDENTIALS exception thrown when trying to connect to an LDAP backend with an invalid username or password, and emits a message back to the user instead of the default 500 error message.
- [bug 1687593] Ensure that the URL used to make the request when creating OAUTH1 request tokens is also the URL that verifies the request token.
- [bug 1696574] All GET APIs within keystone now have support for HEAD, if not already implemented. All new HEAD APIs have the same response codes and headers as their GET counterparts.
This aids in client-side processing, especially caching.

- [bug 1700852] Keystone now supports caching of the GET|HEAD /v3/users/{user_id}/projects API in an effort to improve performance.
- [bug 1701324] Token bodies now contain only unique roles in the authentication response.
- [bug 1704205] All users and groups are required to have a name. Prior to this fix, Keystone was not properly enforcing this for LDAP users and groups. Keystone will now ignore users and groups that do not have a value for the LDAP attribute which Keystone has been configured to use for that entity's name.
- [bug 1705485] A previous change removed policy from the self-service password API. Since a user is required to authenticate to change their password, protection via policy didn't necessarily make sense. This change removes the default policy from code, since it is no longer required or used by the service. Note that administrative password resets for users are still protected via policy through a separate endpoint.
- [bug 1674415] Fixed an issue with translation of keystone error messages, which was not happening in the case of error messages from the identity API with a locale being set.
- [bug 1688188] When creating an IdP, if a domain was generated for it and a conflict was raised while effectively creating the IdP in the database, the auto-generated domain is now cleaned up.
- The implementation for checking database state during an upgrade with the use of keystone-manage db_sync --check has been corrected. This allows users and automation to determine what step is next in a rolling upgrade based on logging and command status codes.

Other Notes

- [blueprint removed-as-of-pike] All key-value-store code, options, and documentation has been removed as of the Pike release. The removed code included keystone.common.kvs, configuration options for the KVS code, unit tests, and the KVS token persistence driver keystone.token.persistence.backends.kvs. All associated documentation has been removed.
- [blueprint removed-as-of-pike] The admin_token_auth filter has been removed from all sample pipelines. Specifically, the following section has been removed from keystone-paste.ini:

      [filter:admin_token_auth]
      use = egg:keystone#admin_token_auth

  The functionality of the ADMIN_TOKEN remains, but has been incorporated into the main auth middleware (keystone.middleware.auth.AuthContextMiddleware).
- The catalog backend endpoint_filter.sql has been removed. It has been consolidated with the sql backend; therefore, replace the endpoint_filter.sql catalog backend with the sql backend.
- The [security_compliance] password_expires_ignore_user_ids option has been removed. Each user that should ignore password expiry should have the value set to "true" in the user's options attribute (e.g. user['options']['ignore_password_expiry'] = True) with a user update call.
- [blueprint removed-as-of-pike] The keystone.common.ldap module was removed from the code tree. It was deprecated in the Newton release in favor of keystone.identity.backends.ldap.common, which has the same functionality.
- [blueprint removed-as-of-pike] keystone-manage pki_setup was added to aid developer setup by hiding the sometimes cryptic openssl commands. It is no longer needed since keystone no longer supports PKI tokens and can no longer serve SSL. It was deprecated in the Mitaka release.
- [blueprint removed-as-of-pike] Direct import of drivers outside of their keystone namespace has been removed. For example, identity drivers are loaded from the keystone.identity namespace and assignment drivers from the keystone.assignment namespace. Loading drivers outside of their keystone namespaces was deprecated in the Liberty release.
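The password-hashing trade-off these notes describe (more rounds means slower, stronger hashing) can be sketched with Python's standard library. This is not Keystone's actual implementation (Keystone uses passlib and defaults to bcrypt); it is just an illustration of the pbkdf2_sha512 option and the role of the round count, with 29000 as a placeholder value:

```python
import hashlib
import hmac
import os


def hash_password(password, rounds=29000):
    """Derive a PBKDF2-SHA512 hash; higher `rounds` costs more CPU time."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)
    return salt, digest


def verify_password(password, salt, digest, rounds=29000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)
```

The random per-password salt is why the same password hashes to different digests on each call, and why verification must store and reuse the salt rather than re-hashing from scratch.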
https://docs.openstack.org/releasenotes/keystone/pike.html
Hi all, this is my first time posting and I thought I'd join this forum as I need some help now and again when I have problems with C. Also there seems to be a lot of nice people on here ;).

I am trying to create a program with the user inputting student ID, name and their unit marks, and outputting student ID, name, average mark, and highest and lowest mark. Any ideas on highest and lowest mark? I have tried highest mark but it has a pointer error which I sort of know what it is. :confused:

Code:

    #include <stdio.h>

    int main(void)
    {
        /* declare variables */
        int studentID, n;
        float array[12];    /* marks must be stored in an array, not a plain float */
        float total_mark = 0, average_mark, max_value, min_value;

        /* input ID */
        printf("Enter your student ID: ");
        scanf("%d", &studentID);

        /* input 12 marks + divide by 12 for average */
        for (n = 0; n < 12; n++) {
            printf("Enter your unit mark: ");
            scanf("%f", &array[n]);
            total_mark = total_mark + array[n];
        }
        average_mark = total_mark / 12;

        /* work out highest and lowest mark */
        max_value = array[0];
        min_value = array[0];
        for (n = 1; n < 12; n++) {
            if (array[n] > max_value)
                max_value = array[n];
            if (array[n] < min_value)
                min_value = array[n];
        }

        printf("The student %d got a total mark of %6.2f, an average mark of %6.2f, "
               "a highest mark of %6.2f and a lowest mark of %6.2f\n",
               studentID, total_mark, average_mark, max_value, min_value);
        return 0;
    }

Any help, thanks
https://cboard.cprogramming.com/c-programming/72687-hi-all-plus-need-help-code-printable-thread.html
USER_NAMESPACES(7)         Linux Programmer's Manual         USER_NAMESPACES(7)

NAME
       user_namespaces - overview of Linux user namespaces

       Within a user namespace, the following filesystems may be mounted:

       * /proc (since Linux 3.8)
       * /sys (since Linux 3.8)
       * devpts (since Linux 3.9)
       * tmpfs(5) (since Linux 3.9)
       * ramfs (since Linux 3.9)
       * mqueue (since Linux 3.9)
       * bpf (since Linux 4.4)

       When a namespace other than a user namespace is created via clone(2) or
       unshare(2), the kernel records the user namespace of the creating
       process as the owner of the new namespace.

       The kernel leaves 4294967295 (the 32-bit signed -1 value) unmapped.
       This is deliberate: (uid_t) -1 is used in several interfaces.

       There is a limit on the number of lines in the uid_map and gid_map
       files. In Linux 4.14 and earlier, this limit was (arbitrarily) set at 5
       lines. Since Linux 4.15, the limit is 340 lines. In the initial
       implementation (Linux 3.8), this requirement was satisfied by a
       simplistic check of the writing process's capabilities in the user
       namespace.

       * Or otherwise all of the following restrictions apply:
         + The data written to uid_map (gid_map) must consist of a single
           line.

       Once written, only the mapped values may be used in system calls that
       change user and group IDs. For user IDs, the relevant system calls
       include setuid(2), setfsuid(2), setreuid(2), and setresuid(2). For
       group IDs, the relevant system calls include setgid(2), setfsgid(2),
       setregid(2), and setresgid(2).

       The /proc/[pid]/setgroups file displays the string "allow" if processes
       in the user namespace that contains the process pid are permitted to
       call setgroups(2), and "deny" otherwise. The default value of this file
       in the initial user namespace is "allow". Once /proc/[pid]/gid_map has
       been written to (which has the effect of enabling setgroups(2) in the
       user namespace), it is no longer possible to disallow setgroups(2) by
       writing "deny" to /proc/[pid]/setgroups.

       When credentials are passed over a UNIX domain socket (see the
       description of SCM_CREDENTIALS in unix(7)), they are translated into
       the corresponding values as per the receiving process's user and group
       ID mappings.

       Namespaces are a Linux-specific feature. Linux 3.12 added support for
       the last of the unsupported major filesystems, XFS.
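The uid_map format described above, lines of three numbers (ID inside the namespace, ID outside the namespace, length of the range), is straightforward to process. The sketch below is not part of the man page; it simply illustrates how an inside-namespace UID translates to its outside value, returning None for unmapped IDs (which the kernel would present as the overflow ID, usually 65534):

```python
def parse_uid_map(text):
    """Parse /proc/[pid]/uid_map lines: inside-ID, outside-ID, length."""
    ranges = []
    for line in text.strip().splitlines():
        inside, outside, length = (int(field) for field in line.split())
        ranges.append((inside, outside, length))
    return ranges


def to_outside(uid, ranges):
    """Translate a UID seen inside the namespace to its outside value."""
    for inside, outside, length in ranges:
        if inside <= uid < inside + length:
            return outside + (uid - inside)
    return None  # unmapped; the kernel shows such IDs as the overflow ID
```

For example, with the common single-line mapping "0 1000 1", UID 0 inside the namespace corresponds to UID 1000 outside it.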
Mounting a new /proc filesystem and listing all of the processes visible in the new PID namespace shows that the shell can't see any processes outside the PID namespace:

           bash$ mount -t proc proc /proc
           bash$ ps ax
             PID TTY      STAT   TIME COMMAND
               1 pts/3    S      0:00 bash
              22 pts/3    R+     0:00 ps ax

       This page is part of release 5.01 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

Linux                            2019-03-06                  USER_NAMESPACES(7)

Pages that refer to this page: nsenter(1), systemd-detect-virt(1), unshare(1), clone(2), getgroups(2), ioctl_ns(2), keyctl(2), seteuid(2), setgid(2), setns(2), setresuid(2), setreuid(2), setuid(2), unshare(2), proc(5), subgid(5), subuid(5), capabilities(7), cgroup_namespaces(7), cgroups(7), credentials(7), mount_namespaces(7), namespaces(7), network_namespaces(7), pid_namespaces(7)
http://man7.org/linux/man-pages/man7/user_namespaces.7.html