text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Generating Excel 2010 Workbooks by using the Open XML SDK 2.0
Summary: Learn how to use the Open XML SDK 2.0 to manipulate a Microsoft Excel 2010 workbook.
Last modified: January 05, 2012
Applies to: Excel 2010 | Office 2010 | Open XML | SharePoint Server 2010 | VBA
Published: April 2011
Provided by: Steve Hansen, Grid Logic
Contents
Introduction to the Open XML File Format
Peeking Inside an Excel File
Manipulating Open XML Files Programmatically
Manipulating Workbooks using the Open XML SDK 2.0
-
-
-
Introduction to the Open XML File Format
Open XML is an open file format for the core document-oriented Office applications. Open XML is designed to be a faithful replacement for existing word-processing documents, presentations, and spreadsheets that are encoded in binary formats that are defined by the Microsoft Office applications. Open XML file formats offer several benefits. One benefit is that the Open XML file formats ensure that data that is contained in documents is can be accessed by any program that understands the file format. This helps ensure organizations that the documents they create today will be available in the future. Another benefit is that it facilitates document creation and manipulation in server environments or other environments where it is not possible to install the Office client applications.
True to its moniker, Open XML files are represented by using XML. However, instead of representing a document by using a single, large XML file, an Open XML document is actually represented by using a collection of related files, called parts, that are stored in a package and then compressed in a ZIP archive. An Open XML document package complies with the Open Packaging Conventions (OPC) specification, a container-file technology to store a combination of XML and non-XML files that collectively form a single entity.
Peeking Inside an Excel File
One of the best ways to gain an initial understanding of how everything works together is to open a workbook file and take a look at the pieces. To examine the parts of an Microsoft Excel 2010 workbook package, merely change the file name extension from .xlsx to .zip. As an example, consider the workbook shown in Figures 1 and 2.
This workbook contains two worksheets: Figure 1 shows a worksheet containing sales by year while the worksheet shown in Figure 2 contains a simple chart.
By changing the name of this workbook from Simple Sales Example.xlsx to Simple Sales Example.zip, you can inspect the structure of parts within the file container or package using Windows Explorer.
Figure 3 shows the primary folders inside the package along with the parts stored in the worksheets folder. Digging a bit deeper, Figure 4 provides a peek at the XML encountered in the part named sheet1.xml.
The XML shown in Figure 4 provides the necessary information that Excel needs to represent the worksheet shown in Figure 1. For example, within the sheetData node there are row nodes. There is a row node for every row that has at least one non-empty cell. Then, within each row, there is a node for each non-empty cell.
Notice that cell C3 shown in Figure 1 contains the value 2008 in bold font. Cell C4, meanwhile, contains the value 182, but uses default formatting and does not contain bold font. The XML representation for each of these cells is shown in the Figure 4. In particular, the XML for cell C3 is shown in the following example.
To keep the size of Open XML files as compact as possible, many of the XML nodes and attributes have very short names. In the previous fragment, the c represents a cell. This particular cell specifies two attributes: r (Reference) and s (Style Index). The reference attribute specifies a location reference for the cell.
The style index is a reference to the style that is used to format the cell. Styles are defined in the styles part (styles.xml) which is found in the xl folder (see the xl folder in Figure 3). Compare cell C3’s XML with cell C4’s XML shown in the following example.
Because cell C4 uses default formatting, you do not have to specify a value for the style index attribute. Later in this article, you learn a little more about how to use style indexes in an Open XML document.
Although it is very helpful to learn more about the nuances of the Open XML file formats, the real purpose of this article is to show how to use the Open XML SDK 2.0 for Microsoft Office to programmatically manipulate Open XML documents, specifically Excel workbooks.
Manipulating Open XML Files Programmatically
One way to programmatically create or manipulate Open XML documents is to use the following high-level pattern:
Open/create an Open XML package
Open/create package parts
Parse the XML in the parts that you need to manipulate
Manipulate the XML as required
Save the part
Repackage the document
Everything except steps three and four can be achieved fairly easily using the classes found in the System.IO.Packaging namespace. These classes are designed to make it easy to handle working with Open XML packages and tasks associated with high-level part manipulation.
The hardest part of this process is step four, manipulating the XML. For this part it is critically necessary for the developer to have a high degree of understanding of the many tedious details required to successfully work with the many nuances of the Open XML file formats. For example, previously you learned that formatting information for a cell is not stored with a cell. Instead, the formatting details are defined as a style in a different document part and the style index associated with the style is what Excel stores inside a cell.
Even with a generous knowledge of the Open XML specification, the thought of manipulating so much raw XML programmatically is not a task that many developers look forward too. That is where the Open XML SDK 2.0 comes in.
The Open XML SDK 2.0 was developed to simplify manipulating Open XML packages and the underlying Open XML schema elements inside a package. The Open XML SDK 2.0 encapsulates many common tasks that developers perform on Open XML packages so that instead of working with raw XML, you can use .NET classes that give you many design-time advantages such as IntelliSense support and a type-safe development experience.
Manipulating Workbooks using the Open XML SDK 2.0
In order to show you the process of manipulating an Excel workbook using the Open XML SDK 2.0, this article walks through building a report generator. Envision that you work for a stock brokerage firm named Contoso. Contoso’s ASP.NET website enables clients to log on and view various portfolio reports online. However, a common user request is the ability to view or download reports in Excel so that they may perform additional ad hoc portfolio analysis.
The desired result is a process that, given a client, generates an Excel portfolio report. There are two general approaches to this kind of process. One approach is to generate all of the document from scratch. For simple workbooks with little or no formatting, this approach is appropriate. The second approach, creating documents that use a template, is generally the preferred method. Note that the use of the Word template here refers not to actual Excel templates (*.xltx). Instead, it refers to the use of a workbook (*.xlsx) that contains all the desired formatting, charts, and so on that are desired in the final workbook. To use the template, the first step of the process is to make a copy of the template file. Then, you add the data associated with the client you are building a report for.
Setting up the Project
To create a portfolio report generator, open up Microsoft Visual Studio 2010 and create a new Console application named PortfolioReportGenerator.
Next, add two classes to the project: PortfolioReport and Portfolio. The PortfolioReport class is the key class that performs all of the document manipulation using the Open XML SDK 2.0. The Portfolio class is basically a data structure that contains the necessary properties to represent a client portfolio.
Before you write any code, the first step in any project involving Open XML and the Open XML SDK 2.0 is to add the necessary references to the project. Two specific references are needed: DocumentFormat.OpenXml and WindowsBase.
DocumentFormat.OpenXml contains the classes that are installed with the Open XML SDK 2.0. If you cannot find this reference after you install the Open XML SDK 2.0, you can browse for it. By default it is located at C:\Program Files (x86)\Open XML SDK\V2.0\lib\. This reference is required only if you plan to use the Open XML SDK 2.0. If you would rather manipulate Open XML documents by tweaking raw XML, you do not need this reference.
WindowsBase includes the classes in the System.IO.Packaging namespace. This reference is required for all Open XML projects whether you are using the Open XML SDK 2.0 or not. The classes in the System.IO.Packaging namespace provide functionality to open Open XML packages. In addition, there are classes that enable you to manipulate (add, remove, edit) parts inside an Open XML package.
At this point, your project should resemble Figure 7.
Initializing the Portfolio Report
As mentioned earlier, the report generation process works by creating a copy of the report template and then adding data to the report. The report template is a pre-formatted Excel workbook named PortfolioReport.xlsx. Add a constructor to the PortfolioReport class that performs this process. In order to copy the file, you must also have to import the System.IO namespace. While adding the System.IO namespace, add the namespaces related to the Open XML SDK 2.0.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using DocumentFormat.OpenXml.Packaging; using DocumentFormat.OpenXml.Spreadsheet; using DocumentFormat.OpenXml; namespace PortfolioReportGenerator { class PortfolioReport { string path = "c:\\example\\"; string templateName = "PortfolioReport.xlsx"; public PortfolioReport(string client) { string newFileName = path + client + ".xlsx"; CopyFile(path + templateName, newFileName); } private string CopyFile(string source, string dest) { string result = "Copied file"; try { // Overwrites existing files File.Copy(source, dest, true); } catch (Exception ex) { result = ex.Message; } return result; } } }
Notice that the PortfolioReport constructor requires a single parameter that represents the client the report is being generated for.
To avoid the need to pass parameters into methods or constantly re-open the document and extract the workbook part, add two class-scoped private variables to the PortfolioReport class. Likewise, add a class scoped private variable to hold a reference to the current Portfolio object whose data is being used to generate the report. By using these variables in place, you can then initialize them inside the PortfolioReport constructor as shown in the following example.
string path = "c:\\example\\"; string templateName = "PortfolioReport.xlsx"; WorkbookPart wbPart = null; SpreadsheetDocument document = null; Portfolio portfolio = null; public PortfolioReport(string client) { string newFileName = path + client + ".xlsx"; CopyFile(path + templateName, newFileName); document = SpreadsheetDocument.Open(newFileName, true); wbPart = document.WorkbookPart; portfolio = new Portfolio(client); }
This code segment highlights how easy it is to open a document and extract a part using the Open XML SDK 2.0. In the PortfolioReport constructor, the workbook file is opened by using the Open method of the SpreadsheetDocument class. SpreadsheetDocument is part of the DocumentFormat.OpenXml.Packaging namespace. SpreadsheetDocument provides convenient access to the workbook part within the document package via the property named WorkbookPart. At this point in the process, the report generator has:
Created a copy of the PortfolioReport.xlsx file
Named the copy after the name of the client
Opened the client report for editing
Extracted the workbook part
Modifying Worksheet Cell Values using the Open XML SDK
The main task that needs to be solved in order to complete the report generator is to figure out how to modify values inside an Excel workbook by using the Open XML SDK 2.0. When using Excel’s object model with Microsoft Visual Basic for Applications (VBA) or .NET, changing a cell’s value is easy. To change the value of a cell (which is a Range object in Excel’s object model), you modify the value of the Value property. For example, to change the value of cell B4 on a worksheet named Sales to the value of 250, you could use this statement:
The Open XML SDK 2.0 works a bit differently. One big difference is that using the Excel object model that you can manipulate any cell on a worksheet regardless of whether it has anything in it. In other words, as far as the object model is concerned, all of the cells on a worksheet exist. When working with Open XML, objects do not exist. This is by default. If a cell does not have a value, it does not exist. This makes perfect sense when you think about it from the perspective of specifying a file format. In order to keep the size of a file as small as possible, only relevant information is saved. For example, revisit Figure 4 and observe the first row node underneath sheetData. The first row starts at 3 and skips rows 1 and 2. This is because all of the cells in the first two rows are empty. Likewise, notice that within the first row node (row 3), the address of the first cell is C3. This is because A3 and B3 are empty.
Because you cannot assume that a cell exists in an Open XML document, you must first check whether it exists and then, if it does not, add it to the file. The following example shows a method named InsertCellInWorksheet that performs this function, along with the other methods in the listing. Add these methods to the PortfolioReport class.
// Given a Worksheet and an address (like "AZ254"), either return a // cell reference, or create the cell reference and return it. private Cell InsertCellInWorksheet(Worksheet ws, string addressName) { SheetData sheetData = ws.GetFirstChild<SheetData>(); Cell cell = null; UInt32 rowNumber = GetRowIndex(addressName); Row row = GetRow(sheetData, rowNumber); // If the cell you need already exists, return it. // If there is not a cell with the specified column name, insert one. Cell refCell = row.Elements<Cell>(). Where(c => c.CellReference.Value == addressName).FirstOrDefault(); if (refCell != null) { cell = refCell; } else { cell = CreateCell(row, addressName); } return cell; } // Add a cell with the specified address to a row. private Cell CreateCell(Row row, String address) { Cell cellResult; Cell refCell = null; // Cells must be in sequential order according to CellReference. // Determine where to insert the new cell. foreach (Cell cell in row.Elements<Cell>()) { if (string.Compare(cell.CellReference.Value, address, true) > 0) { refCell = cell; break; } } cellResult = new Cell(); cellResult.CellReference = address; row.InsertBefore(cellResult, refCell); return cellResult; } // Return the row at the specified rowIndex located within // the sheet data passed in via wsData. If the row does not // exist, create it. private Row GetRow(SheetData wsData, UInt32 rowIndex) { var row = wsData.Elements<Row>(). Where(r => r.RowIndex.Value == rowIndex).FirstOrDefault(); if (row == null) { row = new Row(); row.RowIndex = rowIndex; wsData.Append(row); } return row; } // Given an Excel address such as E5 or AB128, GetRowIndex // parses the address and returns the row index. private UInt32 GetRowIndex(string address) { string rowPart; UInt32 l; UInt32 result = 0; for (int i = 0; i < address.Length; i++) { if (UInt32.TryParse(address.Substring(i, 1), out l)) { rowPart = address.Substring(i, address.Length - i); if (UInt32.TryParse(rowPart, out l)) { result = l; break; } } } return result; }
Another difference between using Excel’s object model and manipulating an Open XML document is that when you use the Excel object model, the data kind of the value that you supply to the cell or range is irrelevant. When changing the value of a cell using Open XML however, the process varies depending on the data kind of the value. For numeric values, the process is somewhat similar to using Excel’s object model. There is a property associated with a Cell object in the Open XML SDK 2.0 named CellValue. You can use this property to assign numeric values to a cell.
Storing strings, or text, in a cell works differently. Rather than storing text directly in a cell, Excel stores it in something called a shared string table. The shared string table is merely a listing of all the unique strings within the workbook where each unique string is associated with an index. To associate a cell with a string, the cell holds a reference to the string index instead of in the string itself. When you change a cell’s value to a string, you first need to see whether the string is in the shared string table. If it is in the table, you look up the shared string index and store that in the cell. If the string is not in the shared string table, you need to add it, retrieve its string index, and then store the string index in the cell. The following example shows a method named UpdateValue used to change a cell’s values along InsertSharedStringItem to update the shared string table.
public bool UpdateValue(string sheetName, string addressName, string value, UInt32Value styleIndex, bool isString) { // Assume failure. bool updated = (isString) { // Either retrieve the index of an existing string, // or insert the string into the shared string table // and get the index of the new item. int stringIndex = InsertSharedStringItem(wbPart, value); cell.CellValue = new CellValue(stringIndex.ToString()); cell.DataType = new EnumValue<CellValues>(CellValues.SharedString); } else { cell.CellValue = new CellValue(value); cell.DataType = new EnumValue<CellValues>(CellValues.Number); } if (styleIndex > 0) cell.StyleIndex = styleIndex; // Save the worksheet. ws.Save(); updated = true; } return updated; } // Given the main workbook part, and a text value, insert the text into // the shared string table. Create the table if necessary. If the value // already exists, return its index. If it doesn't exist, insert it and // return its new index. private int InsertSharedStringItem(WorkbookPart wbPart, string value) { int index = 0; bool found = false; var stringTablePart = wbPart .GetPartsOfType<SharedStringTablePart>().FirstOrDefault(); // If the shared string table is missing, something's wrong. // Just return the index that you found in the cell. // Otherwise, look up the correct text in the table. if (stringTablePart == null) { // Create it. stringTablePart = wbPart.AddNewPart<SharedStringTablePart>(); } var stringTable = stringTablePart.SharedStringTable; if (stringTable == null) { stringTable = new SharedStringTable(); } // Iterate through all the items in the SharedStringTable. // If the text already exists, return its index. foreach (SharedStringItem item in stringTable.Elements<SharedStringItem>()) { if (item.InnerText == value) { found = true; break; } index += 1; } if (!found) { stringTable.AppendChild(new SharedStringItem(new Text(value))); stringTable.Save(); } return index; }
One area of interest in the previous code example deals with formatting a cell. As mentioned earlier in this article, a cell’s format is not stored within the cell node. Instead, a cell stores a style index that points to a style that is defined in a different part (styles.xml). When using the template pattern demonstrated in this document and Excel’s object model via VBA or .NET, you typically apply formatting that you want to a range of one or more cells. As you add data to the workbook programmatically, any formatting that you applied within the range is faithfully applied.
Because Open XML files only contain information related to cells that contain data, any time that you add a new cell to the file, if the cell requires any formatting, you must update the style index. Consequently, the UpdateValue method accepts a styleIndex parameter that indicates which style index to apply to the cell. If you pass in a value of zero, no style index is set and the cell uses Excel’s default formatting.
One simple method for determining the appropriate style index for each cell is to format the workbook template file as you want and then open up the appropriate workbook parts in XML mode (shown in Figure 4) and observe the style index of the cells that you formatted.
With the methods from the previous code listing in place, generating the report is now a process of getting the portfolio data and repeatedly calling UpdateValue to create the report. Indeed, if you add the necessary code to do this, things seem to work fine except for one problem - any cell that contains a formula that refers to a cell whose value was changed via Open XML manipulation does not show the correct result. This is because Excel caches the result of a formula within the cell. Because Excel thinks it has the correct value cached, it does not recalculate the cell. Even if you have auto calculation turned on or if you press F9 to force a manual recalculation, Excel does not recalculate the cell.
The solution to this is to remove the cached value from these cells so that Excel recalculates the value as soon as the file is opened in Excel. Add the RemoveCellValue method shown in the following example to the PortfolioReport class to provide this functionality.
// This method is used to force a recalculation of cells containing formulas. The // CellValue has a cached value of the evaluated formula. This // prevents Excel from recalculating the cell even if // calculation is set to automatic. private bool RemoveCellValue(string sheetName, string addressName) { bool returnValue = there is a cell value, remove it to force a recalculation // on this cell. if (cell.CellValue != null) { cell.CellValue.Remove(); } // Save the worksheet. ws.Save(); returnValue = true; } return returnValue; }
To complete the PortfolioReport class, add the CreateReport method shown in the following example to the PortfolioReport class. It uses the CreateReport method UpdateValue to put portfolio information into the desired cells. After updating all of the necessary cells, it calls RemoveCellValue on each cell that needs to be recalculated. Finally, CreateReport calls the Close method on the SpreadsheetDocument to save all the changes and close the file.
// Create a new Portfolio report public void CreateReport() { string wsName = "Portfolio Summary"; UpdateValue(wsName, "J2", "Prepared for " + portfolio.Name, 0, true); UpdateValue(wsName, "J3", "Account # " + portfolio.AccountNumber.ToString(), 0, true); UpdateValue(wsName, "D9", portfolio.BeginningValueQTR.ToString(), 0, false); UpdateValue(wsName, "E9", portfolio.BeginningValueYTD.ToString(), 0, false); UpdateValue(wsName, "D11", portfolio.ContributionsQTR.ToString(), 0, false); UpdateValue(wsName, "E11", portfolio.ContributionsYTD.ToString(), 0, false); UpdateValue(wsName, "D12", portfolio.WithdrawalsQTR.ToString(), 0, false); UpdateValue(wsName, "E12", portfolio.WithdrawalsYTD.ToString(), 0, false); UpdateValue(wsName, "D13", portfolio.DistributionsQTR.ToString(), 0, false); UpdateValue(wsName, "E13", portfolio.DistributionsYTD.ToString(), 0, false); UpdateValue(wsName, "D14", portfolio.FeesQTR.ToString(), 0, false); UpdateValue(wsName, "E14", portfolio.FeesYTD.ToString(), 0, false); UpdateValue(wsName, "D15", portfolio.GainLossQTR.ToString(), 0, false); UpdateValue(wsName, "E15", portfolio.GainLossYTD.ToString(), 0, false); int row = 7; wsName = "Portfolio Holdings"; UpdateValue(wsName, "J2", "Prepared for " + portfolio.Name, 0, true); UpdateValue(wsName, "J3", "Account # " + portfolio.AccountNumber.ToString(), 0, true); foreach (PortfolioItem item in portfolio.Holdings) { UpdateValue(wsName, "B" + row.ToString(), item.Description, 3, true); UpdateValue(wsName, "D" + row.ToString(), item.CurrentPrice.ToString(), 24, false); UpdateValue(wsName, "E" + row.ToString(), item.SharesHeld.ToString(), 27, false); UpdateValue(wsName, "F" + row.ToString(), item.MarketValue.ToString(), 24, false); UpdateValue(wsName, "G" + row.ToString(), item.Cost.ToString(), 24, false); UpdateValue(wsName, "H" + row.ToString(), item.High52Week.ToString(), 28, false); UpdateValue(wsName, "I" + row.ToString(), item.Low52Week.ToString(), 28, false); UpdateValue(wsName, "J" + row.ToString(), item.Ticker, 11, true); row++; } // Force re-calc when the workbook is opened this.RemoveCellValue("Portfolio Summary", "D17"); this.RemoveCellValue("Portfolio Summary", "E17"); // All done! Close and save the document. document.Close(); }
Using the PortfolioReport Class
The final step (assuming you copied the source for the Portfolio class), is to add some code to the Main method in the Program class. Modify the Main method so that it contains the code shown in the following example. Note that the source for the Portfolio class includes sample data for two clients: Steve and Kelly.
One of the things that you notice when you run this is how fast the files are generated. This is ideal in a high-volume server scenario. The performance versus similar code that uses the Excel object model to achieve the same results is not even close - the Open XML method is much, much faster.
Conclusion
Beginning with the 2007 Microsoft Office system, the core document-centric Microsoft Office applications switched from proprietary binary file formats to Open XML file formats. The Open XML file formats are open, standards-based file formats based on XML. The switch to Open XML file formats opens up several new development opportunities for developers. That said, taking advantage of these opportunities involved investing lots of time and effort understanding the Open XML specifications and lots of tedious raw XML manipulation.
The Open XML SDK 2.0 helps reduce the learning curve with development technique by encapsulating many of the details of the Open XML specification in an easy-to-use class library for working with Open XML documents. In addition to reducing the learning curve, the Open XML SDK 2.0 lets developers be more productive by providing design-time capabilities such as IntelliSense support and a type-safe development experience.
This article demonstrated how to use the Open XML SDK 2.0 to build a portfolio report generator. This exercise demonstrated a common solution pattern and an approach for common Excel oriented tasks such as opening workbooks, referring to worksheets, retrieving cells, and updating a cell’s value.
Additional Resources
To find more information about the subjects discussed in this article, see the following resources.
Download the Open XML SDK 2.0
-
Introducing the Office Open XML File Formats
Office XML Developer Center's tools for Office. One part code jockey, one part finance geek; Steve also has an MBA from the University of Minnesota with a concentration in finance. | https://msdn.microsoft.com/en-us/library/hh180830(v=office.14).aspx | CC-MAIN-2017-43 | refinedweb | 4,313 | 56.96 |
Most.
<< Previous - Adding An Icon
PingBack from
ScottIsAFool, when I builded it, Visual Studio tell me that it can't find type or namespace. Why?
Does it give any more information? What type or namespace can it not find?
I have resolved this problem. Thanks. Your tutorial is very good. I'm looking forward to the next one.
Ok, so now that we have the basis for our Live Writer plugin, we need to start adding things to it to make it look and feel better. The best way is to add an image to the plugin that will appear in the Insert section of Writer.
There was a comment left on my most recent plugin guide: "I would love to see a guide on how to
Pingback from ?????? Windows Live Writer ?????? - ?????? - LiveSino - LiveSide ????????? | http://www.liveside.net/developer/archive/2006/10/19/Writing-Plugins-For-Windows-Live-Writer-_2D00_-Working-With-Forms.aspx | crawl-002 | refinedweb | 134 | 83.46 |
Properties in Visual Studio Tools for Office Projects
There are several important properties that are available in Microsoft Visual Studio 2005 Tools for the Microsoft Office System projects. These properties can be accessed in the Properties window.
Trust Assemblies Location
The Trust Assemblies Location property appears in the Properties window when you select the project node in Solution Explorer.
This property takes a Boolean value:
Select true to update your security policy automatically with full trust permissions on the main project assembly and execution permission on assemblies in the \bin folder and its subfolders. These permissions are checked and granted with every build of the project.
Select false to prevent permissions from being granted automatically. If you built the project previously with Trust Assemblies Location set to true, all code groups that were generated for you are removed when you build again with the property set to false. Your project will not run unless you grant permissions to your code manually.
For more information about security, see Security Requirements to Run Office Solutions.
CacheInDocument
The CacheInDocument property appears in the Properties window Caching Data and Data in Office Solutions Overview.
Namespace for Host Item
The Namespace for Host Item property is only available for C# projects. It appears in the Properties window when you select the document node (the node with the .doc, .dot, .xls, or .xlt extension) in Solution Explorer.
When you create a project using C#, host items are given a namespace based on the name of the project. It is recommended that you not change this namespace by editing the code file directly. Use this property to change the namespace. When you use this property, the namespace is changed in the generated (hidden) code, as well as in the visible code file.
To change the namespace for the host item, set the name in the Namespace for Host Item property.
Value2
The Value2 property is only available for Excel applications. It appears under the Databindings property node in the Properties window when you select a NamedRange control on the worksheet designer.
Use the Value2 property in the Properties window to bind the Value2 property of the named range to a field in your data source. | https://msdn.microsoft.com/en-us/library/183f110b(v=vs.80).aspx | CC-MAIN-2015-18 | refinedweb | 368 | 53.71 |
[Oleg] is a software engineer who appreciates a good keyboard, especially since coming over to the dark side of mechanical keebs. It’s true what they say — once you go clack, you never go back.
Anyway, before going full nerd with an ortholinear split ergo keyboard, [Oleg] had a nice little WASD with many upsides. Because the ErgoDox is oh so customizable, his use of the WASD had fallen by the wayside.
That’s because the ErgoDox can run QMK firmware, which allows the user to customize every key they see and add layers of functionality. Many people have converted all kinds of old keebs over to QMK by swapping out the native controller for a Teensy, and [Oleg] was sure it would work for the WASD.
[Oleg] got under the hood and found that the controller sits on a little removable board around the arrow keys and talks to the main PCB through two sets of double-row header pins. After some careful probing with a ‘scope, the controller board revealed its secrets and [Oleg] was able to set up a testing scheme to reverse engineer the keyboard matrix by connecting each row to an LED, and all the columns to ground. With next to no room for the Teensy, [Oleg] ended up strapping it to the back of the switch PCB and wiring it quite beautifully to the header pins.
With Teensy and QMK, it’s easy to make a keyboard any way you want, even if you’re all thumbs.
21 thoughts on “The ABCs Of Adding QMK To A WASD Keyboard”
Here is a similar idea:
That’s the one I made, and the development/mapping process was really similar to what the OP did.
Oh wow, I wish I found your project earlier, would have definitely saved me time! But then I suppose would have missed out on the fun of reverse engineering my keyboard
I’d really love a wasdat for the wasd v3, I don’t suppose there’s any chance of it ever happening?
The WASD V3 has on board electronics so you can’t replace the controller on that one
can you build one with cleaer keys and configurable laser dmx or else for numbers and pics…?
What about E-paper keys?
Not sure if you know about this already, but check out Nemeio, it’s a programmable, backlit, epaper keybard.
Does anyone know if there are ‘short stroke’ buckling spring switches? The switches in e.g. the Type M feel great, but there’s just so much Up and Down (well, down and up) to deal with. The fingers like the pop of the buckling, but the wrists are convinced I’m kneading bread dough. Mark I eyeball says those M keys travel about 5 to 7 mm, and it seems like 2 to 3 mm might feel less .. flappy
Also Kailh low profile switches are even thinner
See here for comparison
Hmmm. There’s an Arch user who also has a Ducky One 2 and also finds that NKRO simply doesn’t work. This person had asked Ducky about it and they said there’d need to be changes in firmware and that’s probably not happening. Supposedly it’s communicating with some other unique Windows driver, not just HID anymore… well, QMK might be the ticket, “might” because Ducky doesn’t appear on the supported keyboards list, yet. At least transplanting a controller will be legal and shareable, not like reversing their firmware, as if that was even something I could just decide to do one day…
P.S. that also begins to explain their firmware updater requiring the KB to be in 6KRO mode, according to the manual.
…which happens to be the opposite of true. (not what the manual said, what I said)
It’s 6KRO mode that cannot work and NKRO that does. It took me that many hours to remember that there actually was a reason I never expected to worry about it, and that was the same reason I was trivially able to figure out that NKRO doesn’t work in Linux ITFP: that’s where I left it. I also ran the updater semi-recently without having to change it, and saw that it has the newest version already even if I’m not the one who updated it, because I don’t remember… so.. hooray for expunging another false memory, I guess
Am I the only one who has no idea what a WASD is? I’m a reasonably technical person but this is a new one on me.
It’s a brand name slash namespace collision ;)
One brand of keyboards:
Best regards,
A/P Daniel F. Larrosa
Not the only one. Most of the vocabulary in the article is beyond me. (I am technical, but not a keyboard specialist.)
Reminds me of a sentence I read in a stats book once. I knew a meaning for every word in the sentence, but they must have had different meanings in a statistics context or something, because the sentence made no sense whatever.
W, A, S & D are the keys gamers normally use to move their character forward, left, back & right.
I know the arrow keys are already designed for this, but gamers gonna game…
Translation please?! So Keeb is a keyboard? Now I’m lost. What’s an ortholinear split ego though? Many upsides, downsides and at least eighteen sidesides.
Split ergo == ergonomic == KB divided and the 2 parts rotated so your wrists can rest on a line between your fingers and elbows and not at some hazardously awkward angle. Ortholinear just means the keys are aligned on a grid instead of offset more or less on different rows as usual… I *think*. But I should just look it up, right? I did find some good info about this old NEC mechanical KB over at deskthority.net
OK so I just finally realized the underlying meaning of the square dogs, in the midst of facepalming because that reply is an orphan now because of some silliness
Please be kind and respectful to help make the comments section excellent. (Comment Policy) | https://hackaday.com/2020/05/05/the-abcs-of-adding-qmk-to-a-wasd-keyboard/?replytocom=6242986 | CC-MAIN-2022-05 | refinedweb | 1,029 | 68.2 |
# Checking Telegram Open Network with PVS-Studio

Telegram Open Network (TON) is a platform by the same team that developed the Telegram messenger. In addition to the blockchain, TON provides a large set of services. The developers recently made the platform's code, which is written in C++, publicly available and uploaded it to GitHub. We decided to check the project before its official release.
Introduction
------------
[Telegram Open Network](https://github.com/ton-blockchain/ton) is a set of various services. Among other things, it provides a payment system of its own based on the Gram cryptocurrency, and a virtual machine called TON VM, which executes smart contracts. It also offers a messaging service, TON Messages. The project as a whole is seen as a countermeasure to Internet censorship.
The project is built with CMake, so I didn't have any difficulties building and checking it. The source code is written in C++14 and runs to 210 thousand LOC:

Since the project is a small and high-quality one, there aren't many bugs in it, but they still should be dealt with.
Return code
-----------
```
static int process_workchain_shard_hashes(....) {
....
if (f == 1) {
if ((shard.shard & 1) || cs.size_ext() != 0x20000) {
return false; // <=
}
....
int r = process_workchain_shard_hashes(....);
if (r < 0) {
return r;
}
....
return cb.store_bool_bool(true) && cb.store_ref_bool(std::move(left)) &&
cb.store_ref_bool(std::move(right)) &&
cb.finalize_to(branch)
? r
: -1;
....
}
```
PVS-Studio diagnostic message: [V601](https://www.viva64.com/en/w/v601/) The 'false' value is implicitly cast to the integer type. mc-config.cpp 884
It looks like the function returns the wrong type of error status here. The function should apparently return a negative value for failure rather than true/false. That's at least what it does further in the code, where it returns -1.
Comparing a variable with itself
--------------------------------
```
class LastBlock : public td::actor::Actor {
....
ton::ZeroStateIdExt zero_state_id_;
....
};
void LastBlock::update_zero_state(ton::ZeroStateIdExt zero_state_id) {
....
if (zero_state_id_ == zero_state_id_) {
return;
}
LOG(FATAL) << ....;
}
```
PVS-Studio diagnostic message: [V501](https://www.viva64.com/en/w/v501/) There are identical sub-expressions to the left and to the right of the '==' operator: zero\_state\_id\_ == zero\_state\_id\_ LastBlock.cpp 66
TON follows a coding standard that prescribes that class members' names should end in an underscore. In cases like this, however, this notation may lead to a bug as you risk overlooking the underscore. The name of the argument passed to this function is similar to that of the class member, which makes it easy to mix them up. It is this argument that was most likely meant to participate in the comparison.
Unsafe macro
------------
```
namespace td {
namespace detail {
[[noreturn]] void process_check_error(const char *message, const char *file,
int line);
} // namespace detail
}
#define CHECK(condition) \
if (!(condition)) { \
::td::detail::process_check_error(#condition, __FILE__, __LINE__); \
}
void BlockDb::get_block_handle(BlockIdExt id, ....) {
if (!id.is_valid()) {
promise.set_error(....);
return;
}
CHECK(id.is_valid()); // <=
....
}
```
PVS-Studio diagnostic message: [V581](https://www.viva64.com/en/w/v581/) The conditional expressions of the 'if' statements situated alongside each other are identical. Check lines: 80, 84. blockdb.cpp 84
The condition inside the *CHECK* macro will never execute as it has been already checked by the previous *if* statement.
There's also another error present here: the *CHECK* macro is unsafe since the condition inside it is not wrapped in a *do {… } while (0)* construct. Such wrapping is needed to avoid collisions with other conditions in the *else* branch. In other words, the following code wouldn't work as expected:
```
if (X)
CHECK(condition)
else
foo();
```
Checking a signed variable
--------------------------
```
class Slice {
....
char operator[](size_t i) const;
....
};
td::Result CellSerializationInfo::get\_bits(td::Slice cell) const {
....
int last = cell[data\_offset + data\_len - 1];
if (!last || last == 0x80) { // <=
return td::Status::Error("overlong encoding");
}
....
}
```
PVS-Studio diagnostic message: [V560](https://www.viva64.com/en/w/v560/) A part of conditional expression is always false: last == 0x80. boc.cpp 78
The second part of the condition will never execute because the type *char* is signed in this case. When assigning a value to a variable of type *int*, sign extension will occur, so its values will still lie within the range [-128, 127], not [0, 256].
It should be noted that *char* is not always signed: its behavior is platform- and compiler-dependent. So in theory, the condition in question could still execute when building on a different platform.
Bitwise-shifting a negative number
----------------------------------
```
template
bool AnyIntView|::export\_bits\_any(....) const {
....
int mask = (-0x100 >> offs) & 0xff;
....
}
```
PVS-Studio diagnostic message: [V610](https://www.viva64.com/en/w/v610/) Unspecified behavior. Check the shift operator '>>'. The left operand '-0x100' is negative. bigint.hpp 1925
Executing a bitwise right shift operation on a negative number is unspecified behavior: it's impossible to know in advance if the sign will be extended or padded with zeroes.
Null check after new
--------------------
```
CellBuilder* CellBuilder::make_copy() const {
CellBuilder* c = new CellBuilder();
if (!c) { // <=
throw CellWriteError();
}
....
}
```
PVS-Studio diagnostic message: [V668](https://www.viva64.com/en/w/v668/) There is no sense in testing the 'c' pointer against null, as the memory was allocated using the 'new' operator. The exception will be generated in the case of memory allocation error. CellBuilder.cpp 531
The message says it all: if memory allocation fails, the program will throw an exception rather than return a null pointer. It means the check is pointless.
Redundant check
---------------
```
int main(int argc, char* const argv[]) {
....
if (!no_env) {
const char* path = std::getenv("FIFTPATH");
if (path) {
parse_include_path_set(path ? path : "/usr/lib/fift",
source_include_path);
}
}
....
}
```
PVS-Studio diagnostic message: [V547](https://www.viva64.com/en/w/v547/) Expression 'path' is always true. fift-main.cpp 136
This snippet is taken from one of the project's internal utilities. The ternary operator is redundant in this case: the condition it checks is already checked by the previous *if* statement. It looks like the developers forgot to remove this ternary operator when they decided to discard the use of standard paths (there's at least no mention of those in the help message).
Unused variable
---------------
```
bool Op::set_var_info_except(const VarDescrList& new_var_info,
const std::vector& var\_list) {
if (!var\_list.size()) {
return set\_var\_info(new\_var\_info);
}
VarDescrList tmp\_info{new\_var\_info};
tmp\_info -= var\_list;
return set\_var\_info(new\_var\_info); // <=
}
```
PVS-Studio diagnostic message: [V1001](https://www.viva64.com/en/w/v1001/) The 'tmp\_info' variable is assigned but is not used by the end of the function. analyzer.cpp 140
The developers were apparently going to use a variable named *tmp\_info* in the last line of this function. Here's the code of that same function but with other parameter specifiers:
```
bool Op::set_var_info_except(VarDescrList&& new_var_info,
const std::vector& var\_list) {
if (var\_list.size()) {
new\_var\_info -= var\_list; // <=
}
return set\_var\_info(std::move(new\_var\_info));
}
```
Greater or less than?
---------------------
```
int compute_compare(const VarDescr& x, const VarDescr& y, int mode) {
switch (mode) {
case 1: // >
return x.always_greater(y) ? 1 : (x.always_leq(y) ? 2 : 3);
case 2: // =
return x.always_equal(y) ? 1 : (x.always_neq(y) ? 2 : 3);
case 3: // >=
return x.always_geq(y) ? 1 : (x.always_less(y) ? 2 : 3);
case 4: // <
return x.always_less(y) ? 1 : (x.always_geq(y) ? 2 : 3);
case 5: // <>
return x.always_neq(y) ? 1 : (x.always_equal(y) ? 2 : 3);
case 6: // >=
return x.always_geq(y) ? 1 : (x.always_less(y) ? 2 : 3);
case 7: // <=>
return x.always_less(y)
? 1
: (x.always_equal(y)
? 2
: (x.always_greater(y)
? 4
: (x.always_leq(y)
? 3
: (x.always_geq(y)
? 6
: (x.always_neq(y) ? 5 : 7)))));
default:
return 7;
}
}
```
PVS-Studio diagnostic message: [V1037](https://www.viva64.com/en/w/v1037/) Two or more case-branches perform the same actions. Check lines: 639, 645 builtins.cpp 639
If you read carefully, you noticed that this code lacks a <= operation. Indeed, it is this operation that case 6 should be dealing with. We can deduce that by looking at two spots. The first is the initialization code:
```
AsmOp compile_cmp_int(std::vector& res, std::vector& args,
int mode) {
....
if (x.is\_int\_const() && y.is\_int\_const()) {
r.set\_const(compute\_compare(x.int\_const, y.int\_const, mode));
x.unused();
y.unused();
return push\_const(r.int\_const);
}
int v = compute\_compare(x, y, mode);
....
}
void define\_builtins() {
....
define\_builtin\_func("\_==\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 2));
define\_builtin\_func("\_!=\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 5));
define\_builtin\_func("\_<\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 4));
define\_builtin\_func("\_>\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 1));
define\_builtin\_func("\_<=\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 6));
define\_builtin\_func("\_>=\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 3));
define\_builtin\_func("\_<=>\_", arith\_bin\_op,
std::bind(compile\_cmp\_int, \_1, \_2, 7));
....
}
```
The *define\_builtins* function, as you can see, contains a call *compile\_cmp\_int* for the *<=* operator with the mode parameter set to 6.
The second spot is the *compile\_cmp\_int* function itself, which lists the names of operations:
```
AsmOp compile_cmp_int(std::vector& res, std::vector& args,
int mode) {
....
static const char\* cmp\_names[] = {"", "GREATER", "EQUAL", "GEQ", "LESS",
"NEQ", "LEQ", "CMP"};
....
return exec\_op(cmp\_names[mode], 2);
}
```
Index 6 corresponds to the *LEQ* word, which means «Less or Equal».
It's another nice bug of the [class of bugs found in comparison functions](https://www.viva64.com/en/b/0509/).
Miscellaneous
-------------
```
#define VM_LOG_IMPL(st, mask) \
LOG_IMPL_FULL(get_log_interface(st), ...., VERBOSITY_NAME(DEBUG), \
(get_log_mask(st) & mask) != 0, "") // <=
```
PVS-Studio diagnostic message: [V1003](https://www.viva64.com/en/w/v1003/) The macro 'VM\_LOG\_IMPL' is a dangerous expression. The parameter 'mask' must be surrounded by parentheses. log.h 23
The *VM\_LOG\_IMPL* macro is unsafe. Its second parameter is not enclosed in parentheses, which could potentially cause undesirable side effects if a complex expression is passed to the condition. But if *mask* is just a constant, this code will run with no problems at all. That said, nothing prevents you from passing anything else to the macro.
Conclusion
----------
TON turned out to be pretty small, so there are few bugs to find there, which the Telegram developer team should certainly be given credit for. But everyone makes mistakes every now and then, even these guys. Code analyzers are powerful tools capable of detecting dangerous spots in source code at the early development stages even in the most quality code bases, so don't neglect them. Static analysis is not meant to be run from time to time but should be part of the development process: "[Introduce Static Analysis in the Process, Don't Just Search for Bugs with It](https://habr.com/en/post/440610/)". | https://habr.com/ru/post/469915/ | null | null | 1,828 | 50.73 |
le Gaifix on rst for texinfo]
> .. [1] It's very common this form: ``@kbd{C-@...{Space}}``, but this
> is (currently) impossible, since it requires nested markup.
For this case, you could define a single custom role that expands to the
desired nested markup::
Press :kbdkey:`C-Space`
The code would look something like this (untested code warning):
def kbdkey_fn(role, rawtext, text, lineno, inliner):
# I'm assuming all kbdkey's have this form: (modifier+key)
# If multiple keys are allowed, etc, this would need to be
# modified to suit.
m = re.match('(\w+-)?(\w+)')
if m is None:
... # report an error
else:
# Extract the modifier and key.
mod = m.group(1) or ''
key = m.group(2) or ''
# Create the inside node
keynode = nodes.inline(rawtext, key, class="key")
# Create the outside node.
kbdnode = nodes.inline(rawtext, mod, keynode, class="kbd")
# Return the outside node, and no messages.
return [kbdnode], []
-Edward | https://sourceforge.net/p/docutils/mailman/docutils-develop/thread/40853BF4.2060904@gradient.cis.upenn.edu/ | CC-MAIN-2018-22 | refinedweb | 150 | 68.16 |
Today.
Join the conversationAdd Comment
Generics!!!! Yay!!!!!!
Great news !!! Fortunatelly debugging works, so this is acceptable to me to switch to this alpha.
Some questions:
– Missing features (complete debugginig, rename, etc) will be added with new typescriptServices.js or we must wait for new VS plugin ?
– Do you plan to support indexers in classes ?
Could be nice to write:
class YAList<T1>
{
Add (Item : T1) {…}
[Index:number] : T1
}
For now, we must create mirrored interface (IYAList), move indexer to this interface, and perform some wild casting to properly instantiate class and assign to variable typed as interface. This means mounts of unnecessary work.
Congrats on this progress. I've been flying the TypeScript flag at user groups here in the UK; it'll be nice to have something shiny and new to talk about.
Sweet lovely generics! Can't wait to try it when I get home 🙂
Hello,
Until now, it seems that merging modules and amd has no interest because in amd mode, the import statement refers to a file (e.g. greeter.ts/js). So all the content of the file is considered as "a module".
While using the simple module strategy, we may define the same module throuthrough several files, e.g. module Views widespread on view.ts, views.ts, … It is quite convenient…
What I would like to have is the following : be able to do as with the simple module strategy, and also be able to import the module itself and not the file in AMD mode like this : import views = module ("Views")…
The typescript compiler would then make itself the references we need..
Best regards
Xavier
@Xavier – we're currently working on a feature that will let you do something like that. It will allow you, at the bottom of an external module, to do something like this:
//fileA.ts
module A { … }
module A { … } // combine the two or more A's
export = A
//fileB.ts
import "fileA.ts" as myModuleA //reach into the file and get the exported "internal" module
Syntax might change, but the idea is that you'll be able to "export = " not just internal modules but also other features like classes and functions. This will let you span multiple files, and it has the added benefit that the names you use to refer to these values can be the simple names you give them (like 'myModuleA') above, without having to reach through the imported module for the contents you care about.
@jonathan
thanks a lot for your quick answer, and more especially for this next to come feature we really need IMO.
Otherwise, I appreciate a lot this language, and the flexibility it offers… I just miss an eclipse plugin 😉
best regards
Xavier
Rehi Jonathan,
is the new feature will enable this schema?
//fileA.ts
module A { … }
export = A
//fileB.ts
module A { … } // combine the two or more A's
export = A
//fileC.ts
import "fileA.ts" as myModuleA =module(A)
import "fileB.ts" as myModuleA = module(A)
I'm not sure if that style would be supported, because once we codegen the first import, we would have to thread the result into the second one.
You can span files, but you would need to span before you did the export, like this:
//fileA.ts
module A { … }
//fileB.ts
///<reference path="fileA.ts"/>
module A { … } // combine the two or more A's
export = A
//fileC.ts
import "fileB.ts" as myModuleA
@Tristan
Missing features will be added, where appropriate, to both the VS and language service, just as they were with 0.8.3. We just haven't ported everything to the new architecture, yet.
Yes, I believe we'll also be supporting indexers in classes, or at the very least, being able to override the default indexer. This would mean that any property access after that would have the overridden return type. We'll likely post more information about this closer to release.
Tried the 0.9.0 build, debugging and generics works fine, but where is overload on constants? Pressing F12 on "document.createElement" navigate me into "lib.d.ts" which only have one createElement (the default one) defined.
@horeaper
I'm not sure if the libraries have moved over to using the overload on constants, yet. The best way to keep up to date on the progress is to watch the repo, and then update after the changes go in.
Overloading on constants should be working in your own code. Also, there are some ongoing bugfixes here, so be sure to update your language service regularly to track those improvements.
Should be noted that 0.9 Alpha installs itself to
C:Program Files (x86)Microsoft Visual Studio 11.0Common7IDECommonExtensionsMicrosoftTypeScript,
and not
C:Program Files (x86)Microsoft Visual Studio 11.0Common7IDEExtensionsxxxxxxxx.xxx
as noted in the "Trying the latest typescript build".
Generics and better type inference.
Hope all the interface definitions catch up as well.
This weekend I tried using 0.9.0.alpha, re-writing a couple of files in a JS codebase of ~50K lines in ~250 commonJS modules.
Aside from one thing (see below) it was a pretty positive experience. I can see a way forward where I start at the low level, converting utility modules into TS, and then tackling higher-level files that consume them, so gradually there is more and more static typing. It's going to be practical to apply it gradually, which is of course essential in a working codebase.
The main problem right now is the intellisense compiler. In my experience with this version, it rarely agreed with the 'real' compiler, so I spent a lot of time figuring out what I'd done wrong only to find that the real compiler had no problem with my code. Similarly I was (mistakenly) disappointed at how little type inference seemed to be happening – when in truth there was a lot happening in the real compiler! It just wasn't working in intellisense. The upshot is that right now I don't want to demo what I attempted to my co-workers because I'd have to "talk around" the intellisense so much.
This lag between the two is unfortunate given that the intellisense feedback is going to be the main way people try to learn, and would also be the most immediate advertisement for the benefits of static typing.
@Daniel
We've been trying to keep the language service (and intellisense) in sync with the compiler, but as you can imagine things can sometimes get out of sync.
If there are some key points where they differ, would you mind logging issues in the issue tracker to help us follow up on them and make sure they're fixed? typescript.codeplex.com/…/Create
Simply Great.
Microsoft Can you support ES6 in IE11 with webgl Please.
What's the current status of TS 0.9? It's been a month with no news.
What about module.exports?
@horeaper – still working on it. You can track our progress by following along in the commit logs. In general, we're working on polish for the release.
@Aaron – we've been putting work into "export = <symbol name>" for this release as well
We a date for a stable release? not more "preview"…why M$ don't invest with more "power" in this project? It can be the new way for web develop
There is 0.9 Beta on codeplex. This mean that final 0.9 is very close, or this is mid-step and we must wait another month+ for 0.9 release ?
Please remove necessity to restart OS after installing vs plugin
@Jonathan – You mentioned you were working on a feature that would let external modules span across multiple files. Is this feature mostly implemented in the lasted build of 0.9? Also are there any examples or documentation on it yet? Thanks!
This blog has gone very quite.
Please tell me you are still actively working on Typescript. I have taken typescript to heart and it is making a massive difference to the re-factor-ability of my projects. Any idea on the time-scale for the next release?
Please let me know that type script is still alive and kicking.
Hi there,
I've installed the new release of typescript 0.9.0.0. Unfortunatley Visual Studio now completley hangs on build and
i cant work on my solution anymore. I can see in the taks manager that the system spins up around 20 to 30 tsc processes. I tried to start Visual Studio with the /Log switch parameter but nothing comes out in the log file that i can releate to this issue.
Any clues? Is it possible to put a constraint how many paralell threads of tcs that may be started?
Niclas
@Niclas – are you using WebEssentials, by chance? We've seen a few cases where WebEssentials will spin up a number of tsc instances. If you disable this, you should be able to get the TypeScript language service to work properly again (and if not, please do let us know). | https://blogs.msdn.microsoft.com/typescript/2013/04/22/announcing-0-9-early-previews/ | CC-MAIN-2017-47 | refinedweb | 1,511 | 73.37 |
SAP Business Application Studio – Getting Started with CAP and SAP HANA Service on CF
In the new era of cloud , where we are moving towards Cloud Foundry environment , a next generation development environment is also required catering to the needs of developer.
Hence SAP Business Application Studio.
What is it ????
SAP Business Application Studio is a next generation, tailor made development environment available as a service on SAP Cloud Foundry which offers a modular development for business application for SAP Intelligent Enterprise.
Here developers can utilize more than 1 development space which are isolated from each other and let you run the application without deploying on Cloud Platform using its powerful terminal
CAP – SAP Cloud Application Programming Model
it is an open and opinionated model , which provides framework of libraries , languages and tools for the development of enterprise grade applications . It provides some best practices and guides developer with out of the box solutions for some common problems.
So lets get started
Prerequisite
- SAP Cloud Platform Account (Foundry ) – Data center available
- Subscribed to SAP Business Application Studio
- Relevant Roles for accessing the service – Authorization Management
- SAP HANA Service hdi-shared
After Accessing the application
Create a Dev Space
Step 1 : Click on Create Dev Space
Step 2: Create Dev Space – Enter Name and Select SAP Cloud Business Application as a category
Start Developing
- Open new terminal
2. type cd projects , to change the directly.
3. setting up the project
mvn -B archetype:generate -DarchetypeArtifactId=cds-services-archetype -DarchetypeGroupId=com.sap.cds \ -DarchetypeVersion=1.2.0 -DcdsVersion=3.21.2 \ -DgroupId=com.sap.teched.cap -DartifactId=products-service -Dpackage=com.sap.teched.cap.productsservice
Open your project in the Studio
Lets Make CDS files
- Navigate to db , create a file schema.cds
namespace sap.capire.dev; entity Products { title : localized String(20); descr : localized String(100); stock : Integer; price : Decimal(9,2); key id :Integer; }
- Navigate to srv , create a file service.cds
using { sap.capire.dev as db } from '../db/schema'; service AdminService { entity Products as projection on db.Products; }
- From terminal navigate to cd product-service using terminal.
- type mvn clean install – to compile the project using terminal .
- Now go to srv -> src -> main -> resources and open application.yaml and replace the content
--- spring: profiles: default datasource: url: "jdbc:sqlite:/home/user/projects/products-service/sqlite.db" driver-class-name: org.sqlite.JDBC initialization-mode: never
- installing SAP HANA DB deployer
npm install --save-dev --save-exact @sap/hdi-deploy@3.7.0
- login into CF account – using terminal type cf login and select the space
- initialize DB and create SAP HANA Service instance – please make sure , you have the entitlement using terminal type , kindly type it manually
cds deploy --to hana:bookstore-hana
- Go to srv->pom.xml and add the dependency
<dependency> <groupId>com.sap.cds</groupId> <artifactId>cds-feature-hana</artifactId> </dependency>
- Time to run the application in the terminal
mvn spring-boot:run -Dspring-boot.run.profiles=cloud
- Click on Expose and Open on the left Bottom and press Enter
- Go Back to Application Studio and open new terminal
- Test the service with POST call
curl -X POST \ -H "Content-Type: application/json" \ -d '{ "title": "Product 1", "descr":"sample product ","stock":20,"price":100.60,"id":1 }'
- Now Lets check the service from step 11 , open Products and you shall be able to view the saved data
After successfully following these steps , you would be able to complete the setting up of SAP Business Application Studio on SAP Cloud Platform (Foundry) , creating an application using CAP , connecting it to SAP HANA Service and performing POST , GET operation on your ODATA service.
These are the screenshots taken from our SAP Cloud Platform account.
Hi,
I also found new and updated tutorials about this topic.
Cheers,
Ervin
Thanks Munish Suri for providing a step-by-step guide. 🙂
Excellent, Really Helpful.
Hi Munish,
nice cookbook tutorial approach…and it worked for me nicely up to point 8 where it simply crashed with this message:
Service offering ‘hanatrial’ not found.
will take another look after switching my eu and us spaces around, but thank you for the steps 1-7.
rgds,
greg
Hi Greg,
I also ran into an issue
[ERROR] [cds.deploy] – Service name bookstore-hana must only contain alpha-numeric, hyphens, and underscores.
, however when I retyped manually the execution was successfull.
Regards
Justin
Hi Munish,
I’m also stuck to step 8:
Regards,
Yordan
Hi Yordan,
Please create an instance of hana HDI . you may utilise hanatrial service on cloud foundry for the same.
HDI – use the name bookstore-hana
Best regards
Munish
Hi Munish,
How do I deploy the application on CF?
As MTA?
Which steps do I have to go through?
Thanks
Peter
Hi Peter,
for deploying , you may have to create a manifest.yml .
Where you have to define the name , path(jar) ,services .
and using cf push , you should be able to deploy on cloud platform.
Best regards
Munish
Hi Munish, probably not the best place to ask this one, but I’ll try anyway.
There is an issue I couldn’t overcome so far.
When I want to clone a git repo from SAPs internal Github, it gives me
.
fatal: unable to access ‘<my repo url>/’: Received HTTP code 502 from proxy after CONNECT
I would assume connecting to git repos should be supported out of the box.
Thanks!
Robin
Hi Robin,
In the terminal, you can clone the git repository.
As i see you are trying to clone the internal Git Repository, which i guess wont be possible on the public version.
Maybe you can give it a try in the internal canary account if possible.
thanks
Best regards
Munish
Hi Munish,
Nice blog.
When I do cds delpoy , I get the following error.
Deployment to container CC3087871C54427590C88E8F653E85DF failed – ] [Deployment ID: none]. ]
Any idea why this is happening?.
Regards,
Swetha.
Hi Swetha,
It seems you are trying to connect with the Canary account.
Unfortunately, I have used the factory account.
Can you please raise an internal ticket for the same?
Best regards
Munish Suri
I got the same error. Can you help me?
Hi Louis,
Can you please try in SAP Cloud Platform Test account, not the canary one.
Best regards
Munish Suri
Hello,
Is it possible to use “cds deploy …” to test in a HANA DB from another subaccount/org/space? I already followed all the steps from documentation in order to deploy, but I’m now quite sure if “cds deploy” should work with this setup.
The error I get is similar to the one posted by Swetha:
Best regards.
Hello Christian,
Can you please try in SAP Cloud Platform Test account, it works usually.
I am not really sure of the canary account.
Kindly raise an internal ticket if the problem persists.
Best regards
Munish Suri | https://blogs.sap.com/2020/03/16/sap-business-application-studio-getting-started-with-cap-and-sap-hana-service-on-cf/ | CC-MAIN-2020-50 | refinedweb | 1,139 | 55.74 |
OLPC Launches Buy One, Give One Free Program 282
Tha_Big_Guy23 writes "For the first time, and for a limited period only, people in North America will be able to get their hands on the XO, MIT professor Nicholas Negroponte's rugged little laptop that's designed specifically for children. And for each cutting-edge XO purchased in the West, another will be given to a child in a developing country. For $399, customers can order a laptop for themselves; bundled into the price is the cost of delivering a second XO to a child a poor country."
Other options? (Score:5, Insightful)
Re: (Score:3, Insightful)
With so many other options for low cost linux based laptops coming up, how many would lap up the XOs? Yeah some geeks & some philanthropists
... the tech loving & God fearing maybe ... but will it sell like the Dells?
I think their going for the philanthropist geeks. If they sell a thousand at this price they can move towards lowering the price.
Do they say how much of the money is shipping to the third world country? I would think if they picked one Costal City for the initial recipients, it would be cheap to ship the laptops via ship and have a local volunteer or two distribute them to the children.
Re: (Score:3, Interesting)
Since the price is $399 for 2 and the manufacturing costs are "about" $180 each, that leaves $20, or about 10%, for distribution and other miscellandy costs.
I wonder if that's enough to cover the 'gratuities' to 3rd world customs officials who just want a little extra something for themselves no matter what it being transported.
Re:Other options? (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:3, Informative)
Second, are only the God-fearing allowed to help others? only tech-loving people should play with gadgets? You wouldn't bother helping others unless there was some strong incentive to do so? Your curiosity is only limited to that which you are familiar with? I don't wish to judge you from the few words you have typed in the comment, but the world-view presented within them seems to be extremely narro
Re: (Score:3)
I run a group which implements Edubuntu and other FOSS at poorer schools in India for free. So, am naturally interested in XO & all its alternatives out there to better utilize the meager funds (so far zilch) we have.
And I have a vested interested in the success of this buy one donate one concept as it will help groups like ours & many more.
I only put up an honest query and not any rhetoric.
Re: (Score:2)
Nevertheless, your original comment suggest that you think that the "average Joes" are only charitable if they are God-fearing? That it takes some "special" people to do good for others?
Would you be interested in the success of the OLPC project if it has absolutely zero bearing on your group?
It is not the aim of OLPC to sell like Dells in developed countries, in case you haven't noticed already. Its hardware and software specifications are far
Re: (Score:2)
And I thought 'God fearing' actually means 'God loving'? English is not my native language neither is Christianity my religion, so I might have erred.
And the reason why I wondered if it will sell like Dells is because I inherently want more XOs to sell. And it doesn't matter if my group benefits out of it, heck our group is not for personal benefits in the first place!
And yeah
Re: (Score:2)
Generally, one is not supposed to change the actual words used.
Re: (Score:3, Insightful)
By the way, people who give out of love for their fellow man are God loving. Those who are God fearing send money to the Christian Coalition and try to legislate everyone else's behavior.
Re: (Score:3, Interesting)
Re: (Score:2)
And I'm someone who spends a lot of time in the countries where these will be distributed. I expect to be able to trade into one relatively cheap, but I'm also happy to support the cause.
Re:Other options? (Score:5, Informative)
"It's an education project, not a laptop project." -- Nicholas Negroponte
If you want a cheap laptop, buy the Asus or Dell for $400+. If you want an educational computer designed for kids, buy the OLPC.
Re:Other options? (Score:4, Interesting)
When it comes to selling, we have to wait and see. Currently the OLPC isn't even sold by normal means, you can buy two for the price of one, but only when you are in the USA and only when you order it in the next two weeks or so, which kind of limits it to how many people can buy one.
I'd love to buy one, but I guess I have to wait a little longer till its even available here in germany.
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
Re:Other options? (Score:5, Informative)
As an adult, I prefer the Eee though, mostly because I do not like the XO rubberized keyboard.
Re: (Score:3, Insightful)
Re: (Score:2, Interesting)
1) Same price (4GB w/ camera, less after tax deduction)
2) includes a donation.
Does the better CPU and RAM beet the low power usage reflective mode? I would have to see it to know.
Also, pull chord is a very compelling extra (don't know if it will be available though).
The spill-proof design also has some value to me (business part is in screen and keyboard is sealed).
I personally can't wait to see what the XO gets for it in the hands of hackers (either in the form of full distros or addons to sugar
Re: (Score:2)
Re: (Score:2)
It allows techers to send notes home to illiterate parents, and parents to respond back (camera/microphone). It allows for reading off of the internet, an ebook, or teachers notes.
The cost of text books can be crippling, event he cost of printing could allow one of these to pay for itself with enough use. Simply as a monochrome e-book reader with a pull cord for pow
Nice Chance for a Donation (Score:2, Interesting)
Nice way to help a worthy cause and not a bad deal for a years t-mobile service.
Re: (Score:2)
I also don't see where to buy one... I went to laptop.org, but can only find the 'donate money' area, not somewhere I can buy 2 to get one. (It occurs to me that this might make a good present for my niece.)
Re:Nice Chance for a Donation (Score:5, Informative)
Re: (Score:2)
I don't see any link for actually placing an order though. I suppose i could try calling the number at the bottom during lunch though.
Re: (Score:2)
Re: (Score:2)
As for T-Mobile, they are giving one year of free Wi-Fi access, so you can use your XO, or any WiFi device, at Starbucks, Borders, several US airports, etc. It's not free mobile phone service.
North America has poor folks too! (Score:2, Insightful)
I will agree that what America has is what I could call "material prosperity". There appears to be infrastructure everywhere but people are hurting in the pockets. These days, the American dollar has also taken a hit, so everyday stuff is expensive.
Re: (Score:2)
I would rather give a computer to someone I don't know (and enable them to learn), than give nothing.
Re: (Score:2)
But for a better long-term return (think decades down the line), give globally. There is no long-term benefit in keeping people uneducated globally.
Re:North America has poor folks too! (Score:5, Insightful)
Um, is there a statement from the OLPC people where they say that everyone in NA can afford one? It seems to me that they only said that individuals in NA can buy one, if they want. There is no comment about the "material prosperity" of everyone on this continent.
Now that I think about it, the title of your comment is "North America has poor folks too!" yet you only reference [the United States of] America. There are a couple of other countries on this continent, too, don't forget.
Re: (Score:3, Insightful)
Re:North America has poor folks too! (Score:5, Informative)
Yes, OLPC is focusing their efforts on third-world countries, but also the US education system is mostly ignoring OLPC. The "why" is fairly simple: it's not because US children do not deserve a good education, and not because they wouldn't benefit from computer access. But, the fact is that the US is structured such that OLPC may not be the "best fit." For instance many libraries in the US have computers in them, and many schools do also. It would appear that in the US the effort is being put into these kinds of educational resources. Whether or not that is the best way to spend US education dollars is of course up for debate.
But it's not really fair to imply that OLPC is ignoring US education. As I said, educational institutes in the US are free to make a case for funding such projects. OLPC will gladly ship the units.
Re: (Score:3, Informative)
I disagree. Nicholas Negroponte in the past had flat refused to sell the computer to US schools. Only when it was looking like he wasn't going to get enough orders to begin mass -production did he start to *consider* it. Here's a snippet from a good Ars Technica article [arstechnica.com]:
Won't make as much impact. (Score:5, Insightful)
Now, I'm not saying poor folks in developed countries brought it upon themselves, or are willfully poor, but I do think that there is greater room for improvement across populations as a whole in other places.
Re: (Score:2)
The other advantage is that you might be able to help more communities with it.
Even though I feel this way, I'd consider buying one of these if I had the money. I wish it was buy one get half. At $300, it's doable for me.
Re:North America has poor folks too! (Score:5, Interesting)
The 'poor' in America are ONLY poor in relative terms. In China, which has an up and coming boom economy, I saw people living in such abject poverty and squalor that I can't even imagine how crappy it must be in Saharan Africa where apparently people have it really rough. Panhandlers at the traffic lights here in the US have it easy compared to 95% of the 'working class' people I saw there. However, even the poorest Chinese was busting butt to better their circumstances and even the most ignorant understood that education for the children was the best way to better the entire family. How many of the poor in the US understand that vs how many understand how to wait for the next handout? Sorry, but I've worked too much with the poor in the US and become completely disillusioned with any romantic notions of how all they need is a little more 'help'. They need the help withdrawn so they'll have a little motivation.
Re: (Score:3, Interesting)
However, even the poorest Chinese was busting butt to better their circumstances and even the most ignorant understood that education for the children was the best way to better the entire family.
That's generally true, but part of the reason is that in Chinese culture, you are expected to take care of your parents to a much greater degree than we are expected to here in the U.S. While any decent parent would want their child to have better than what they themselves had, that part of the culture motivates the less decent ones as well.
More information... (Score:4, Informative)
The two laptops will cost $399.00 USD, and shipping is $24.95 USD (for a total of $423.95 USD). Open to residents of US and Canada only. Paypal is the default payment option (credit cards are also accepted). Of that, $200 is considered a tax-deductible donation. Your contribution also gets you 1 year of free Wi-Fi [laptopgiving.org] access at T-Mobile hotspots [t-mobile.com].
The website says that they will try to deliver the laptop before the holidays, but that initial supplies are limited (TFA says 40,000 units in this first month, with 20,000 ready before Christmas), so if you're keen to get one of these things, you should order sooner rather than later.
I'm certainly curious to see how many orders get put in. If a large number of geeks buy these things as hacking toys, then they could very well become the best platform for a variety of tasks. For example, maybe this will finally be a viable e-book reader (portable, rugged, long battery life, display that can be used in ambient light, etc.). Should be interesting.
XO black market (Score:3, Funny)
I bet if they tried the freemarket approach they could get the retail price down to, oh I don't know, maybe 100USD. They could name it "the $100 laptop"
No? Oh ok, I'll just have to buy two Eee PCs for the same amount.
Definitely too little too late (Score:2, Interesting)
Re: (Score:3, Informative)
Re: (Score:2)
Plus, those laptops had Vista on them. I assume you've had to buy two licenses of XP as well to make them usable. That runs the price up to, what, $600 each?
:)
Re: (Score:2)
$399 is pricey (Score:2, Offtopic)
Re: (Score:2, Insightful)
Using your logic, why would I donate $100 to the Red Cross when I could just as easily get a mickey of vodka and have a good time for less!!!
Tom
Re: (Score:2)
Re: (Score:2)
Instead, they seem to have gone the PBS, "Make a donation of X size and get a fabulous tote bag" except the tote bag in question is an expensive computer (compared to a cheaply manufactured cotton bag). The $400 hundred dollar lapt
Re: (Score:2)
Re: (Score:3, Informative)
The XO basically revolutionize the low-end portable computer market. They where the first to talk about ultra low cost, ultra-portable, low-power computing, and as such kick-start the movement which gave us recently the Asus Eee and the Intel ClassMate. Without them, the market would have slowly converge toward cheaper and cheaper hardware, but I think we would still be a couple years
Compare it to other apples. (Score:3, Insightful)
Bad luck I'm in Scandinavia, may be you can buy one and send it to me?
I agree, but for a different reason. (Score:2)
There are hundreds of good charities to give money to where all the money goes to the cause. I haven't seen a guaranteee of that from OLPC. However I would be more than happy to buy the two OLPCs provided BOTH went to kids and that I
Re: (Score:2)
Re: (Score:3, Insightful)
Seriously - buying laptops for kids should not be P1 in terms of global humanitarian aid folks.
No thanks (Score:2)
Just like the Asus Eee PC detractors... (Score:2)
I own an Asus Eee, and it's a near-perfect little sub-kilo device. But if I had a kid in the 3-8 age group, I'd pounce on this OLPC deal so fast my keyboard would smoke. For the same price as the Eee I can get something way more kid-friendly AND support some third-world future 1337 h4ckz0r?! I can't think of a more noble place for my nerd-donation to go. But my altruism only extends so far. I
Guaranteed? (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
You mean by guaranteeing something like
...
... from the Terms and Conditions [laptopgiving.org] of the Give One Get One [laptopgiving.org] program.
I ordered one. (Score:3, Interesting)
Re: (Score:3, Informative)
specs? (Score:2)
I'm interested in this deal, but would like more technical specs. In part because I have specific ideas about how I'd like to use one and would like to know if it will work for what I want. Is there a page somewhere (I don't see one quickly) detailing what is and is not in the machine?
Re:I ordered one. (Score:4, Informative) [laptop.org]
Will the North American Laptops include any human-power system?
no.
Re:I ordered one. (Score:4, Informative)
Unfortunately you'll have to join the mailing list () to find out about availability since they are focusing on the kids (away from the grid) first.
Forget the North Americans - sell to Europe! (Score:3, Funny)
I wish it was available in the UK (Score:3, Informative)
Re: (Score:2)
Re:Forget the North Americans - sell to Europe! (Score:5, Informative)
Here's a pointer [olpcnews.com] to a method for ordering one if you are located outside the USA and Canada.
There are also reports [olpcnews.com] that folks in Europe have been able to place orders by phone. This would only work for phone orders - the web site (PayPal) only allows USA and Canadian shipping addresses.
Flash support? (Score:2)
Re: (Score:2)
Looks like a great first computer to me (Score:3, Interesting)
As a programmer, I look forward to seeing the software efforts that are built atop this platform. There's plenty of room for free educational software for kids and this looks like a good platform for it. Surely someone will port the platform stack to a standard Linux distro, and then any software you write for this, you can run on your PC you bought at Wal-Mart.
Cheers, Frank
Tax Exemption in Canada? (Score:2)
Re: (Score:2, Informative)
Currency From: CAD
Currency To: USD
Exchange Rate: 1.03014
"Generally, you cannot claim donations made to U.S. charities on your Canadian income tax."
I'll take that 3 cents on the dollar though!
(If you have US Income, you can use the donation to off-set that...)
Dawson
Only in America (Score:2)
(America in the geographical sense, of course...)
I'd love to buy one. It looks great; not only would I find it useful as well as being a really cool toy, but I think this is a cause highly worth supporting. Alas, the offer is only valid for people in continental North America (plus island states of the USA). Since I live in the UK, I'm stuffed.
Hopefully at some stage they'll run a European G1G1 programme.
(Actually, maybe the G1G1 programme will show enough demand that some budding entrepeneur will ord
bash shell? (Score:2)
The question is: what is the procedure for getting into a bash shell?
A related question is w
Re: (Score:2, Informative)
The underlying window manager is Matchbox.
There is a Developer Console [laptop.org] activity which provides a shell, log viewer, X resource meter, and memory usage meter.
If you want a more adult interface than Sugar, you might be more interested in PepperPad. They are providing an OLPC compatible pre-release containing both a 1.5 JVM and a more adult-oriented environment.
Here's the scoop (Score:2)
$423 including shipping.
Yes, some child in a developing nation will definitely get one if you order the buy one give one package. You get one too. I have a 4 year old daughter who currently borrows our laptops to play the flash games on PBSKIDS.ORG. I am hoping this will be easy for her to use.
It runs Linux. Good battery life. Interesting screen. Modest CPU and graphics horsepower.
There is no crank.
Order soon, supplies are limited.
Yes, I ordered one.
Limited quantities? (Score:2)
I'd like to know why there will only be limited quantities available for the NA market. Is there some reason for that? Don't they want to accept as many donations as possible?
I strongly considered getting an XO laptop for myself. (Screw the kids, why should they have all the coolest stuff.
:-)) I ended up going with the Asus Eee PC because it has a more traditional LCD screen, more RAM, more storage and a built-in SD card slot. Battery life isn't nearly as good with the Asus, and it is only about a
I got my order in this morning ... (Score:2, Informative)
One thing I'd like this for is to take on my next (very infrequent) plane flight -- the cheapo laptops I have right now have both terrible battery life and more heft than airline trays like. (Oh, and don't open well in that tiny space the airlines call enough room for a passenger.) With the T-Mobile deal, it also
Re: (Score:3, Interesting)
Re: (Score:2, Funny)
OK, when I get tired of it, it will probably go to my nephew (whose second birthday happens to be today).
Re: (Score:2)
Re: (Score:2)
Re:Too late (Score:4, Funny)
Re:Is this really a good idea? (Score:5, Informative)
First, it should be noted that OLPC is targeting developing nations where there is some momentum to improve things, but where access to technological resources and information are limiting growth. They are not focusing on the "desperately poor" countries where starvation is the overriding concern (take a look at the participating countries [wikipedia.org]). Second, the XO laptops are meant to work side-by-side with other forms of relief, aid, education, and infrastructure improvement.
Saying "why bother with OLPC when people are starving?" is like saying "why bother sponsoring a local child to go to a swimming competition when people are starving?" We can simultaneously be philanthropic in different ways to different groups. Moreover, focusing only on the "most dire" problems (and ignoring everything else) is not a good way to help the world as a whole develop into a safer, more equitable place. So, I view OLPC as a part of the overall puzzle: a positive step that can be implemented in some countries, and which will help stimulate those countries to become more prosperous and independent.
Re:Is this really a good idea? (Score:5, Informative)
I sponsor a teacher in a school in South Eastern Madagascar. By this, I mean that I pay for her board & lodgings. The government pays her salay (approx $500/year) I have done this for the past 4 years.
The village where she teaches is 4 hours by 4WD vehicle to the nearest tarmaced road. They have plenty of food, clean fresh water etc. What they lack is the rest of the things that connect them with the outside world. There is 1 TV in the village. I supplied it alone with a solar panel, some car batteries and an inverter. They have a pirated Satellite encoder and can now stay in touch with the outside world. The thirst for knowledge of the children is fantastic. If I were in the US I would buy several of these units for the village.
The lack of infrastructure(ie no Electricity) is irrelevant for the OLPC. That said, next year I'm hoping to get a small water turbine installed and connected up to a generator. They will have electric light for the first time. Then we can start to make changes to the houses so that the epidemic of lung diseases can be tackled. This is due to the houses not having chimneys and all cooking is done over an open charcoal fire.
I visited the village again in October. I took supplied of pencils and paper (bought in-country) I also took pictures of the children and printed them out in front of them. They took them home to very proud parents.
The OLPC concept will help bridge the gap between the 1st world and the bottom parts of the 3rd world.
Re: (Score:2)
How did you get involved this way? Is there a particular philanthropic/volunteering group you started out with, and then took on this village on your own? Or did you somehow get in contact with them yourself?
Re: (Score:2, Insightful)
"And ye shall know them by their works..."
Re: (Score:2)
As an old and wise person once said:
Give a man a fish and you feed him for a day.
Teach a man to fish, and he can starve because while he's been overfishing the lake to exhaustion to supply Kwik-E-Mart, nobody has
Re:USA? Black Friday... (Score:5, Insightful)
Re: (Score:3, Insightful)
As evil as they are, MS is the de facto standard. If you don't know windows you're missing a key skill to join the technology work force. Giving a bunch of kids a one-off linux based laptop leaves out critical skills.
And the way to change the landscape is to get people used to using something different in a place where there isn't a de facto standard.
Or $diety forbid teach them to think and learn so that they can make the choice themselves as to what OS to use when their country becomes less technology challenged.
Or is education of the end-user not the ultimate goal here?
Re: (Score:2)
Re: (Score:2)
Education of the end user IS the ultimate goal, but not education in computer skills. The XO laptop is a learning tool, and is likened by its creators to a "pencil". Their goal is to give each child in the developing world their own "pencil" to create with. No one will be reconfiguring their kernels on these things.
Nobody said they would. What was said was that because these things are not Microsoft then they are wrong.
To go with your pencil analogy here it doesn't matter if you use a Ticonderoga or a Rotring mechanical - it's still a tool to learn with.
When you've learned more and you can make a choice then pick the pencil that suits you. But until then the "pencils" being handed out will suffice in these cases - because the students have no "pencil" at all.
Your sig (Score:2)
Re: (Score:2)
Third, not everyone is a crook.
Re: (Score:2)
Re: (Score:2)
did access to computers increase us students math/language scores? of course not.
No, because we had textbooks. That's the difference.
its books and teachers and basic infrastructure that matter but thats not sexy so gadgets are sold as miracle cures, and geek bazillionaires think they are saving the world.
These laptops are intended as (among other things) a replacement for books. Consider the number of e-books that can fit on the XO's flash-based storage. Now consider the costs of buying that many textbooks, which the child has to lug around with them. As new editions are written, a laptop can be updated with a download over the Internet or a CD-ROM distribution, and you don't have to replace all those textbooks.
its fine if you give them away, slightly dodgy if you ask their poor governments to pay for this toy.
This article is about the "Give One, G | http://news.slashdot.org/story/07/11/12/138246/olpc-launches-buy-one-give-one-free-program?sdsrc=nextbtmnext | CC-MAIN-2015-14 | refinedweb | 4,604 | 71.85 |
Opened 11 years ago
Closed 10 years ago
#7210 closed (fixed)
Added expression support for QuerySet.update
Description
I think QuerySet.update works to inflexible. In SQL you can define expressions containing the current value or the value of an other column in the update clause, for Example to incrementing a value in the result set. This is not possible by the ORM at the moment. I wrote a possible patch, with which you can do following:
from django.db.models.sql.expressions import * # Equivalent to model.all().update(foo=42) model.all().update(foo=LiteralExpr(42)) # Increment column 'foo' by one. model.all().update(foo=CurrentExpr() + LiteralExpr(1)) # Swap the value of the column 'foo' and 'bar'. model.all().update(foo=ColumnExpr('bar'), bar=ColumnExpr('foo'))
Attachments (12)
Change History (27)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
comment:3 Changed 11 years ago by
would this patch allow
ColumnExpr to be used in
.filter() ?
comment:4 Changed 11 years ago by
Yes and exclude() too. But it is not called ColumnExpr anymore. See the discussion at the mailing list (link above).
comment:5 Changed 11 years ago by
Changed 11 years ago by
Changed 11 years ago by
fixed patch - last one was missing two files
Changed 11 years ago by
Added 2 missing files.
Changed 11 years ago by
Added 2 missing files.
comment:6 Changed 11 years ago by
I've read through this patch in detail for the first time. It smells a bit over-engineered in places, so I'm going to have to think about that (there seem to be too many extra classes involved). For now, though, there are some more fundamental problems I'd like to bring up:
- You've somewhat arbitrarily removed the
get_placeholder()stuff. That is there because it's needed by certain extensions (in particular, geo-django, but it will also be useful in other cases). Remember that this is all new code; it's not like things are hanging around for historical reasons. Basically the "placeholder" is either
"%s"in the normal case or some other format string (e.g.
"SOME_FUNC(%s)"in cases like the GIS situations. So treat the placeholder as an opaque string that you use instead of
%sto indicate parameters.
- The
WhereNodeclass is now a proper tree of other
WhereNodeclasses (after [7835]), so that might affect some things in this patch. In particular, converting things passed in to values should be done in
WhereNode.add()so that no references to fields or models are stored in the
WhereNodeclass. This avoids infinite loops and pickling problems.
- Calling
curry()in
make_atom()doesn't looks useful. You just want to save it to use as a function later when you still have the pieces of information you use. Query construction takes long enough without extra overhead like this, so just call the function directly at the right moment.
- Having to pass
optsto
as_sql()-- which will now be
add()after [7835] -- feels wrong. The where-class itself shouldn't need to care about that, it's purely for the benefit of the "smart objects", so they should contain the information they need. This feeds into the next point (and below) that, in general, these classes should know how to convert themselves to SQL fragments.
- Having to convert normal values to
LiteralValuesjust to convert them back looks like unneeded overhead to me. Follow the pattern elsewhere: the default case will be normal values (the things we do now). That will be by far the most common stuff and it gets handled directly in
make_atom(). Anything else, such as any smart objects, should convert themselves via a common method (their interface) and return the resulting string and list. They should have something like their own
as_sql()or
make_atom()method that is called if we detect the "value" has such a method. So it will be similar to the approach in
as_sql(): if the thing we're processing has its own
as_sql()method, call that; otherwise, it's a basic piece of data and we'll handle it via
make_atom(). In this case with
Expressionderivatives, maybe
make_atom()needs to check if
value()has its own
make_atom()or
make_value()method or something. The idea here is to keep the level of coupling between the normal
WhereNodecode and any objects like
F()to an absolute minimum.
Fand
Expressiondon't need to be treated specially. They are examples of a class of smart objects that should know how to prepare themselves.
I realise points 4 and 5 above sounds a bit less concrete than the others, but they're actually pretty major. Right now, this patch introduces some pretty tight coupling between these new classes and the query construction code. My intuition is that this coupling is not necessary. If you have to mention something like
Expression in
WhereNode, you've probably got a leaky abstraction. Use the interface that something like
Expression should have to call it and that way we're not tied to only using the
Expression class.
I suspect portions of this could probably be broken up into separate steps. I like the idea of an
F() object; that's certainly necessary. All this stuff to allow additions and subtraction and other manipulations of values might be useful at some point, but it's probably less important than the ability to refer to a field in an expression and working out the correct way to call smart objects as values. It can certainly be added later and, if the changes I'm talking about above are done right, doesn't have to be part of core immediately, which buys us some room to experiment. So don't try to do too much at once here.
One thought I had is that the bit in
make_atom() that currently says
if isinstance(params, QueryWrapper): ...
could take over the general role of smart objects here and become something like
if hasattr(params, 'make_value'): ...
and then we have to work out how to weave general return values into the result (the current
(extra, params) tuple probably won't be enough). In any case, that's the sort of lines I'd work along to make things work like the rest of the code throughout the query construction: we provide the means to "shell out" to advanced objects and handle only the base case in the core code.
Note that this shows we've already got this slightly leaky abstraction in the code, as a once-off to handle SQL subqueries as values. Designing this particular patch correctly should actually help plug that leak by possibly removing the need for
WhereNode to know or care about
QueryWrapper.
That's where I'm sitting at the moment. The idea is certainly worth pursuing. There are some implementation issues that need to be solved, as well as some broader design ones before we can go further. I'm not completely convinced it's "1.0-beta" material. Nice to have, but not a showstopper if we don't have it in 1.0. We can certainly add this stuff at any point without breaking the external API. Getting it right is therefore more important than getting it done quickly.
comment:7 Changed 11 years ago by
Another thing I've noticed with this patch is that it handles the fields based on the model's _meta attribute. This is a problem if F (or any expression) is going to be used with extra(), related fields or aggregates. I wanted to re-write it to avoid the dependency of the model at least for the quering case.
Also, the latest trunk revision introduces some changes that heavily conflict with this patch. Mostly the in "where.py" and "sql/query.py".
The only problem I see with the objects converting themselves to SQL fragments is that sometimes the objects don't have all the necessary information to do that and would require information of the state of the query class (e.g. aliases). The same problem would arrise if we want to allow relation spaning fields to be used without explicitly joining the tables beforehand. The second point (relation spaning fields) is a far fetched case and I'm not sure it should even be allowed, but the first one is a common case that could easily be solved by setting a placeholder for the field information and letting the queryset code handle the propper name selection.
I am not sure either that this belongs in "1.0-beta" but I have been working on it and would like to get it working after EuroPython's sprint.
comment:8 Changed 11 years ago by
Officially taking this off the 1.0 beta list. Nicolas has been working on this as part of the aggregation work, but it won't be ready for a merge.
Changed 11 years ago by
Re-write of the patch. Added docs.
comment:9 Changed 11 years ago by
comment:10 Changed 10 years ago by
A typo in the patch at db-api.txt:1754
"When filtering you can refer to you can refer to other attributes of " -- double "you can refer to".
P.S. The trick with F-objects overloading math operators is very clever!
comment:11 Changed 10 years ago by
comment:12 Changed 10 years ago by
comment:13 Changed 10 years ago by
comment:14 Changed 10 years ago by
comment:15 Changed 10 years ago by
(In [9792]) Fixed #7210 -- Added F() expressions to query language. See the documentation for details on usage.
Many thanks to:
- Nicolas Lara, who worked on this feature during the 2008 Google Summer of Code.
- Alex Gaynor for his help debugging and fixing a number of issues.
- Malcolm Tredinnick for his invaluable review notes.
Some quick tests, not a ton, but tests basic funcionality, should be a good starting point if anyone wants to help out. | https://code.djangoproject.com/ticket/7210 | CC-MAIN-2019-18 | refinedweb | 1,660 | 71.04 |
iOS Swift: Linking Accounts
This tutorial will show you how to link multiple accounts within the same user. and the User Sessions tutorials.
We recommend that you read the Linking Accounts documentation to understand the process of linking accounts.
Enter Account Credentials
Your users may want to link their other accounts to the account they are logged in to.
To achieve this, present an additional login dialog where your users can enter the credentials for any additional account. You can present this dialog in the way described in the Login tutorial.
After the user authenticates, save the
idToken value for the secondary account.
Link the Accounts
Now, you can link the accounts. To do this, you need the following values:
id: the logged-in user's ID (see
profile.sub)
idToken: the ID Token for the saved account the user initially logged in to
otherUserToken: the ID Token for the second account received in the last login response
import Auth0
// UserIdentitiesViewController.swift Auth0 .users(token: idToken) .link(id, withOtherUserToken: otherUserToken) .start { result in switch result { case .success: // The account was linked case .failure(let error): // Handler Error } }
Retrieve the Linked Accounts
Once you have the
sub value from the profile, you can retrieve user identities. Call the management API:
// SessionManager.swift Auth0 .users(token: idToken) .get(profile.sub, fields: ["identities"], include: true) .start { result in switch result { case .success(let user): let identityValues = user["identities"] as? [[String: Any]] ?? [] let identities = identityValues.flatMap { Identity(json: $0) } case .failure(let error): // Handle error } }
Unlink the Accounts
To unlink the accounts, you need to specify the following:
id: the logged-in user's ID (see
profile.sub)
userIdand
provider: the values in the
identityobject you want to unlink
Unlink the accounts:
// UserIdentitiesViewController.swift let id = ... // the user id. (See profile.sub) let idToken = ... // the user idToken let identity: Identity = ... // the identity (account) you want to unlink from the user Auth0 .users(token: idToken) .unlink(identityId: identity.identifier, provider: identity.provider, fromUserId: id) .start { result in switch result { case .success: // Unlinked account! case .failure(let error): // Deal with error } } | https://auth0.com/docs/quickstart/native/ios-swift/07-linking-accounts | CC-MAIN-2018-47 | refinedweb | 345 | 52.87 |
Sidekiq - Deleting scheduled jobs when task is deleted
I schedule reminder emails when a user creates a task for a certain date using the following code in the create action:
if @post.save EmailWorker.perform_in(@time.minutes, @post.id) end
I want to delete the scheduled reminder mail whenever the associated task is deleted. I tried using a model method on before_destroy:
before_destroy :destroy_sidekiq_job def destroy_sidekiq_job post_id = self.id queue = Sidekiq::Queue.new('critical') queue.each do |job| if job.klass == 'EmailWorker' && job.args.first == post_id job.delete end end end
However, the jobs aren't deleted from the queue. Any suggestions for me to fix this?
Answers
The scheduled jobs are not within a queue yet, use Sidekiq::ScheduledSet to find the scheduled jobs:
def destroy_sidekiq_jobs scheduled = Sidekiq::ScheduledSet.new scheduled.each do |job| if job.klass == 'EmailWorker' && job.args.first == id job.delete end end end
Don't do this. Let Sidekiq execute the job but verify the post exists and is current when the email job is run.
def perform(post_id) post = Post.find_by_id(post_id) return unless post ... end | http://unixresources.net/faq/26067635.shtml | CC-MAIN-2019-04 | refinedweb | 182 | 62.24 |
Type stubs for Python machine learning libraries
Project description
Mypy type stubs for numpy, pandas and matplotlib
This is a PEP-561-compliant stub-only package which provides type information for matplotlib, numpy and pandas. The mypy type checker (or pytype or PyCharm) can recognize the types in these packages by installing this package.
NOTE: This is a work in progress
Lots of functions are already typed, but a lot is still missing (numpy and pandas are huge libraries). Chances are you will see a message from Mypy claiming that a function does not exist when it actually does exist. If you encounter missing functions, we would be very happy for you to send a PR. If you are unsure of how to type a function, we can discuss it.
Installing
You can get this package from Pypi:
pip install data-science-types
To get the most up-to-date version, install it directly from GitHub:
pip install git+
Or clone the repository somewhere and do
pip install -e ..
Examples
These are the kinds of things that can be checked:
Array creation
import numpy as np arr1: np.ndarray[np.int64] = np.array([3, 7, 39, -3]) # OK arr2: np.ndarray[np.int32] = np.array([3, 7, 39, -3]) # Type error arr3: np.ndarray[np.int32] = np.array([3, 7, 39, -3], dtype=np.int32) # OK arr4: np.ndarray[float] = np.array([3, 7, 39, -3], dtype=float) # Type error: the type of ndarray can not be just "float" arr5: np.ndarray[np.float64] = np.array([3, 7, 39, -3], dtype=float) # OK
Operations
import numpy as np arr1: np.ndarray[np.int64] = np.array([3, 7, 39, -3]) arr2: np.ndarray[np.int64] = np.array([4, 12, 9, -1]) result1: np.ndarray[np.int64] = np.divide(arr1, arr2) # Type error result2: np.ndarray[np.float64] = np.divide(arr1, arr2) # OK compare: np.ndarray[np.bool_] = (arr1 == arr2)
Reductions
import numpy as np arr: np.ndarray[np.float64] = np.array([[1.3, 0.7], [-43.0, 5.6]]) sum1: int = np.sum(arr) # Type error sum2: np.float64 = np.sum(arr) # OK sum3: float = np.sum(arr) # Also OK: np.float64 is a subclass of float sum4: np.ndarray[np.float64] = np.sum(arr, axis=0) # OK # the same works with np.max, np.min and np.prod
Philosophy
The goal is not to recreate the APIs exactly. The main goal is to have useful checks on our code. Often the actual APIs in the libraries is more permissive than the type signatures in our stubs; but this is (usually) a feature and not a bug.
Contributing
We always welcome contributions. All pull requests are subject to CI checks. We check for compliance with Mypy and that the file formatting conforms to our Black specification.
You can install these dev dependencies via
pip install -e '.[dev]'
This will also install numpy, pandas and matplotlib to be able to run the tests.
Running CI locally (recommended)
We include a script that runs the CI checks that will be run when a PR is opened. To test these out locally, you need to install the type stubs in your environment. Typically, you would do this with
pip install -e .
Then use the
check_all.sh script to run all tests:
./check_all.sh
Below we describe how to run the various checks individually,
but
check_all.sh should be easier to use.
Checking compliance with Mypy
The settings for Mypy are specified in the
mypy.ini file in the repository.
Just running
mypy tests
from the base directory should take these settings into account. We enforce 0 mypy errors.
Formatting with black
We use Black to format the stub files.
First install
black and then run
black .
from the base directory.
Pytest
python -m pytest -vv tests/
Flake8
flake8 *-stubs
License
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/data-science-types/ | CC-MAIN-2021-04 | refinedweb | 665 | 68.26 |
LaTeX studying and understanding heat conduction, how waves travel, and more. The MATLAB code below creates our plot.)') X = 0:0.1:20; X = 0:0.1:20; J = zeros(5,201); for i = 0:4 J(i+1,:) = besselj(i,X); end)') fig2plotly();
The code generates a web-based version of our plot. We can apply a theme to change the colors, layouts, and fonts.
Then, we can share the plot in an iframe, as seen below.
The x axis contains the following formula. Plotly renders the LaTeX version of it.
$x^2 frac{d^2 y}{dx^2} + x frac{dy}{dx} + (x^2 – alpha^2)y = 0$
The plot is saved at a URL:. The URL contains the data, plot and code to translate the plot between MATLAB, R, Python, Julia, and JavaScript.
2. Python and matplotlib plotting with LaTeX
We can make matplotlib and Python plots into web-based plots. This is an example using Plotly’s Python API. Here we’re using a Gaussian distribution to study random variables and see where they fall on what is sometimes called a “bell curve.” We can add the standard deviation formula to our plot.
import matplotlib.pyplot as plt # side-stepping mpl's backend import plotly.plotly as py import plotly.tools as tls from plotly.graph_objs import * %matplotlib inline py.sign_in("IPython.Demo", "1fw3zw2o13") fig1 = plt.figure() import matplotlib.pyplot as plt import numpy as np x = np.linspace(-2.0, 2.0, 10000) # The x-values sigma = np.linspace(0.4, 1.0, 4) # Some different values of sigma # Here we evaluate a Gaussians for each sigma gaussians = [(2*np.pi*s**2)**-0.5 * np.exp(-0.5*x**2/s**2) for s in sigma] ax = plt.axes() for s,y in zip(sigma, gaussians): ax.plot(x, y, lw=1.25, label=r"$sigma = %3.2f$"%s) formula = r"$y(x)=frac{1}{sqrt{2pisigma^2}}e^{-frac{x^2}{2sigma^2}}$" ax.text(0.05, 0.80, formula, transform=ax.transAxes, fontsize=20) ax.set_xlabel(r"$x$", fontsize=18) ax.set_ylabel(r"$y(x)$", fontsize=18) ax.legend() plt.show()
Here is our plot:
The annotation looks like this in the GUI:
3. R plotting with LaTeX
We can make plots with R. Here’s an example using the Plotly R API.
library(plotly) py <- plotly(username="R-demo-account", key="yu680v5eii") trace1 <- list( x = c(1, 2, 3, 4), y = c(1, 4, 9, 16), name = "$alpha_{1c} = 352 pm 11 text{ km s}^{-1}$", type = "scatter" ) trace2 <- list( x = c(1, 2, 3, 4), y = c(0.5, 2, 4.5, 8), name = "$beta_{1c} = 25 pm 11 text{ km s}^{-1}$", type = "scatter" ) data <- list(trace1, trace2) layout <- list( xaxis = list(title = "$sqrt{(n_text{c}(t|{T_text{early}}))}$"), yaxis = list(title = "$d, r text{ (solar radius)}$") ) response <- py$plotly(data, kwargs=list(layout=layout, filename="latex", fileopt="overwrite")) url <- response$url
The title was added in the GUI, and is written as ‘$LaTeX$’. We embed with this snippet; every Plotly graph can similarly be embedded in websites, blogs, and notebooks.
4. Mathematica plotting with LaTeX
A user-contributed Mathematica API is in the works, which lets us turn our Mathematica plots into D3, web-based plots. Here is our code:
Plotly[Sin[Exp[x]], {x, -Pi, Pi}, AxesLabel -> {"e", "s"}]
And our plot:
Plotly is free for public projects, entirely online, and you own your data. Learn more on... | https://www.r-bloggers.com/four-beautiful-python-r-matlab-and-mathematica-plots-with-latex/ | CC-MAIN-2018-47 | refinedweb | 581 | 69.28 |
Table of contents
Introduction
Want basic fly-cam or orbit-cam controls in your sample? With SdkCameraMan, it's easy!
Requirements
SdkCameraMan requires that you use OIS in buffered mode, so make sure you have the necessary dependencies and handlers set up. Then, just include "SdkCameraMan.h" from the OGRE source or SDK, and you're done.
Setting Up
SdkCameraMan is a {LEX()}camera{LEX} controller, not a camera. This means you'll still have to create your own OGRE camera. Once this is done, create an instance of the SdkCameraMan class, and pass in your camera. Be sure to use the OgreBites namespace (SdkCameraMan part of the OgreBites Samples Framework).
mCameraMan = new SdkCameraman(mCamera);
Destroy your camera controller like so:
delete mCameraMan; mCameraMan = 0;
Our Once you have your camera controller, make sure you relay your OIS events to it.
bool mousePressed(const OIS::MouseEvent& evt, OIS::MouseButtonID id) { /* normal mouse processing here... */ mCameraMan->injectMouseDown(evt, id); return true; } bool mouseReleased(const OIS::MouseEvent& evt, OIS::MouseButtonID id) { /* normal mouse processing here... */ mCameraMan->injectMouseUp(evt, id); return true; } bool mouseMoved(const OIS::MouseEvent& evt) { /* normal mouse processing here... */ mCameraMan->injectMouseMove(evt); return true; } bool keyPressed(const OIS::KeyEvent& evt) { /* normal key processing here... */ mCameraMan->injectKeyDown(evt); return true; } bool keyReleased(const OIS::KeyEvent& evt) { /* normal key processing here... */ mCameraMan->injectKeyUp(evt); return true; }
If you are using the Free Look camera style then you must call SdkCameraMan::frameRenderingQueued in your frameRenderingQueued method.
bool frameRenderingQueued(const Ogre::FrameEvent& evt) { mCameraMan->frameRenderingQueued(evt); return true; }
You should now have a fly-cam working.
Camera Styles
SdkCameraMan comes with two styles of camera movement, enumerated under the type CameraStyle. You can check which style is currently enabled with SdkCameraMan::getStyle.
Free Look
This is a first-person flying camera. The WASD keys control movement, and the mouse is used to look around. Hold down the left Shift key to move at 20 times the normal speed. The camera has a fixed yaw axis. This camera style is enabled by default. To enable it manually, use SdkCameraMan::setStyle(CS_FREELOOK). When in this camera mode, you can use SdkCameraMan::setTopSpeed and SdkCameraMan::getTopSpeed to set/get the camera's top speed in units per second. We use a top speed instead of just a speed, because the camera gradually reaches the top speed when moving. This creates a smoother, less jerky experience. Similarly, the camera slows to a stop. To stop immediately, you can use SdkCameraMan::manualStop. In this style of camera movement, you can also manually set the camera's position and orientation at any time with no adverse effects.
Orbit
In this style of control, the camera orbits around an object of interest. The user clicks and drags the left mouse button to swing the camera around, and drags the right mouse button to zoom in and out from the target. Targets are specified by SceneNodes. To enable this style, use SdkCameraMan::setStyle(CS_ORBIT). When in this camera mode, your target by default is the scene root node. To set or get the target, use SdkCameraMan::setTarget and SdkCameraMan::getTarget. Make sure you don't accidentally destroy your target SceneNode! In this camera mode, you can't just move the camera around freely, because it is constrained by a target. Therefore, the camera's current state must be specified relatively to the target. This is done using SdkCameraMan::setYawPitchDist, which allows you to specify the camera's precise angle and distance from the target.
Manual
This isn't exactly a style. When you want to have full control over the camera without using a particular style, you can use SdkCameraMan::setStyle(CS_MANUAL). You can set/get the controlled camera using SdkCameraMan::setCamera and SdkCameraMan::getCamera.
Drag Look
This isn't exactly a style, either. Basically, when you need to use free-look mode, but you also need a cursor for GUI controls, a common solution is to only enter free-look mode when click-and-dragging. When in free-look mode, the cursor is hidden. This kind of control requires a cooperation between the camera controller and the GUI. Luckily, SdkSample::setDragLook allows you to enable or disable this type of control. | http://wiki.ogre3d.org/SdkCameraMan | CC-MAIN-2020-45 | refinedweb | 700 | 58.38 |
Is that right or shall i use the other path C:\Users\<usernam>\AppData\Local\Arduino15
Is it possible to use includes from C:\Users\<usernam>\Documents\Arduino\libraries AND C:\Users\<usernam>\AppData\Local\Arduino15 in the sketch at the same time?
It's that correct or should i place the font.h in the sublib
when i implement the libary in a sketch, the font.h is inserted with the basiclib (#include basliclib.h and #include font.h) and i don't want this.
includes - (available from IDE 1.6.10) (optional) a comma separated list of files to be added to the sketch as #include <...> lines. This property is used with the "Include library" command in the IDE. If the includes property is missing all the headers files (.h) on the root source folder are included.
The first one is right. C:\Users\<usernam>\AppData\Local\Arduino15 won't work. Libraries bundled with hardware cores are installed to a subfolder of that location (e.g., C:\Users\<username>\AppData\Local\Arduino15\packages\arduino\hardware\samd\1.8.2\libraries, but those libraries are only accessible when a board from that core is selected from the Arduino IDE's Tools > Board menu, and any libraries you add to that location will be lost every time you update the hardware core.I think I explained that well enough above.Recursive compilation is only done in the src subfolder so that library structure won't even work. You can move the sublib folder inside the src folder if you like. The general idea is that the header files directly under the src folder contain the declarations for the public API, while the files in subfolders contain code that's not intended to be used by the user.
I'm going to guess that by "implement", you mean Sketch > Include Library in the Arduino IDE, or the equivalent in Arduino Web Editor. You'll have better luck here if you are clear in your questions instead of being so vague. By default, Sketch > Include Library adds #include directives for all header files directly under the library's src folder. If you want to control which #include directives are added, you can do this via the includes field in library.properties. Details here:
Is there a good guide or description how to create boards, variants arduino.h's (=> everything about the hardware folder). I think many do it somehow
If you link header (only arrays of characters) with a cpp-file (font.h & font.cpp) or just the header alone?
I have a preprocessor directive (# ifdef # def # endif). But when I include the header in the library and additionally in the main.cpp (sketch.ino), this will not work! Why?
There is a lot of useful information here: it isn't really a guide. Arduino has actually been considering using the technical writer supplied to us via the Google Season of Docs program to write such a guide, but there is only one technical writer allocated and a list of potential projects so it's not certain whether that will be the project chosen by the writer.I think a lot of the hardware core authors used existing hardware cores as a model when they wrote their own. If you can find a core that is somewhat along the lines of what you want to do, then it makes things much easier. There is the ever popular Arduino AVR Boards: SAMD Boards: so on. I actually published a list of all known Arduino hardware cores: great thing is that Arduino's hardware system allows you to reference resources from other cores: allows you to create custom hardware cores with only a couple files. An example is carlosefr/atmega: how the entire core is only two files because it references resources from Arduino AVR Boards!
It's not clear to me what you mean by that.
#ifndef __FONT8X8_H__#define __FONT8X8_H__uint8_t ascii_8x5_whitespace[8] ={ 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000000, 0b00000011};*////////////////////MANY ARRAYS////////////////////*uint8_t *ascii [] = {ascii_8x5_whitespace, ascii_8x5_exclamationmark,ascii_8x5_doublequote,ascii_8x5_hash,ascii_8x5_dollar,ascii_8x5_percent,ascii_8x5_ampersand,ascii_8x5_singlequote,ascii_8x5_openparenthesis,ascii_8x5_closeparenthesis,ascii_8x5_asterisk,ascii_8x5_plus,ascii_8x5_comma,ascii_8x5_minus,ascii_8x5_dot,ascii_8x5_slash,ascii_8x5_0,ascii_8x5_1,ascii_8x5_2,ascii_8x5_3,ascii_8x5_4,ascii_8x5_5,ascii_8x5_6,ascii_8x5_7,ascii_8x5_8,ascii_8x5_9,ascii_8x5_colon,ascii_8x5_semicolon,ascii_8x5_lessthan,ascii_8x5_equal,ascii_8x5_greaterthan,ascii_8x5_questionmark,ascii_8x5_at,ascii_8x5_A, ascii_8x5_0b,bracket,ascii_8x5_backslash,ascii_8x5_closebracket,ascii_8x5_carat,ascii_8x5_underscore,ascii_8x5_backquote,ascii_8x5_a,ascii_8x5_bbrace,ascii_8x5_bar,ascii_8x5_closebrace,ascii_8x5_tilde};#endif /* __FONT8X8_H__ */
If you have a proper include guard, you should have no problem with multiple #include directives for a library. However, an include guard only protects against multiple inclusions in the same translation unit. It does not protect against inclusions in multiple translation units (nor would you want it to). The .ino files of your sketch are a separate translation unit from .cpp or .c files in a library or even in your sketch. For this reason, it is possible to write code in a .h file that causes a compilation error when the file is #included in multiple translation units. However, it's certainly possible to fix that issue. If you want specific help with the issue, you'll need to post your code. | https://forum.arduino.cc/index.php?topic=624757.0 | CC-MAIN-2019-35 | refinedweb | 843 | 55.74 |
You can subscribe to this list here.
Showing
13
results of 13
I forgot to mention that I had run the test suite prior to creating the
tagged version and it all worked fine except for the UserKit. I have
elected to let the three UserKit errors stand until there is a need in
the group to resolve them. If anyone is using UserKit, please let me
know.
Also, I did a tiny bit of work on Overview.html but any contributions
and suggestions are welcome.
I have learned that, using Subversion, there is little difference
between a "tag" and a "branch". So treat the tag copy with care as you
would a branch.
That said, I think Release 0.9 is working fine and is OK as a release
candidate. If you find otherwise, please alert me.
Happy days,
- Mark
On Apr 28, 2005, at 3:12 PM, Mark Phillips wrote:
>
>
>
>
> -------------------------------------------------------
> SF.Net email is sponsored by: Tell us your software development plans!
> Take this survey and enter to win a one-year sub to SourceForge.net
> Plus IDC's 2005 look-ahead and a copy of this survey
> _______________________________________________
> Webware-devel mailing list
> Webware-devel@...
>
>
>
Mark Phillips
Mophilly & Associates
On the web at
On the phone at 619 444-9210@... /
I'm planning on renaming WSGIKit to Python Paste, and changing the
package name from "wsgikit" to "paste". WSGIKit was a lame name, and
better change sooner than later, I guess. Anyway, in my experience this
kind of move is a bit annoying if you have outstanding changes, so if
you have changes in a checkout you might want to commit it. I plan on
doing the rename tomorrow evening.
(Huh... I wonder if Apache redirects work with Subversion... it seems
improbable)
--
Ian Bicking / ianb@... /
Sune Kirkeby wrote:
>.
Did I use os._exit? No wonder it was quitting out of threads, os._exit
is evil. But unreliable in its evilness I guess. Will these signals
work on Windows? If not, does anyone know of an alternate way to kill a
process on Windows?
--
Ian Bicking / ianb@... /
Hello all..
/s
Why we want to have _actionSet() method in class Page?
Class Page(HTTPServlet):
(...)
def _respond(self, transaction):
req =3D transaction.request()
=20
if self.transaction().application().setting('OldStyleActions', ) =
\
and req.hasField('_action_'):
(*) action =3D self.methodNameForAction(req.field('_action_'))
actions =3D self._actionSet()
(*) if actions.has_key(action):
self.preAction(action)
apply(getattr(self, action), (transaction,))
self.postAction(action)
return
else:
raise PageError, "Action '%s' is not in the public list=20
of actions, %s, for %s." % (action, actions.keys(), self)
for action in self.actions():
if req.hasField('_action_%s' % action) or \
req.field('_action_', None) =3D=3D action or \
(req.hasField('_action_%s.x' % action) and \
req.hasField('_action_%s.y' % action)):
(**) if self._actionSet().has_key(action):
self.handleAction(action)
return
def _actionSet(self):
if not hasattr(self, '_actionDict'):
self._actionDict =3D {}
for action in self.actions():
(***) self._actionDict[action] =3D 1
return self._actionDict
def handleAction(self, action):
self.preAction(action)
getattr(self, action)()
self.postAction(action)
def methodNameForAction(self, name):
return name
I think condition in line marked with (**) is needless as=20
self.actionSet().key() are always equal to self.actions(). In my opinion=20
there should be one more condition in line (***) to make it sense:
(***) if getattr(self, action, None):
self._actionDict[action] =3D 1 =20
Consider lines marked with (*). Can it work correctly, if=20
self.methodNameForAction(name) returns with something other then name?
Another questions: It is intentional not to call methodNameForAction()=20
in code for new style actions? And is it correct to handle new style=20
action if user want old one?
I can prepare patches (for class Page form Webware and from WSGIKit and=20
for class CPage from Components) but I need to know answers for last to=20
questions.
Regards
--=20
Rados=B3aw Kintzi (radek at lucasconsulting dot pl)
Hey all,
Following the WSGIKit tutorial, looks very cool. I defined the
database attribute in server.conf
database =3D 'mysql://username:password@.../dbname';
But when I run wsgi-server and try to hit a page that loads some data,
I get the following traceback on my call to TodoList.select
File '/usr/lib/python2.4/site-packages/sqlobject/sresults.py', line
129 in __iter__
conn =3D self.ops.get('connection', self.sourceClass._connection)
File '/usr/lib/python2.4/site-packages/sqlobject/dbconnection.py',
line 769 in __get__
return self.getConnection()
File '/usr/lib/python2.4/site-packages/sqlobject/dbconnection.py',
line 781 in getConnection
raise AttributeError(
exceptions.AttributeError: No connection has been defined for this
thread or process
So what am I missing?
Thanks!
I
--
Ian Bicking / ianb@... / | http://sourceforge.net/p/webware/mailman/webware-devel/?viewmonth=200504 | CC-MAIN-2014-15 | refinedweb | 781 | 62.14 |
"
Kernel Modules Versus Applications
Compiling and Loading
The Kernel Symbol Table
Initialization and Shutdown
Using Resources
Automatic and Manual Configuration
Doing It in User Space
Backward Compatibility
Quick Reference
It's high time now to begin programming. This chapter introduces all
the essential concepts about modules and kernel programming. In these
few pages, we build and run a complete module..
For the impatient reader, the following code is a complete "Hello,
World" module (which does nothing in particular). This code will
compile and run under Linux kernel versions 2.0 through
2.4.[4]
[4]This example, and all the others presented in this
book, is available on the O'Reilly FTP site, as explained in Chapter 1, "An Introduction to Device Drivers".
[4]This example, and all the others presented in this
book, is available on the O'Reilly FTP site, as explained in Chapter 1, "An Introduction to Device Drivers".
#define MODULE
#include <linux/module.h>
int init_module(void) { printk("<1>Hello, world\n"); return 0; }
void cleanup_module(void) { printk("<1>Goodbye cruel world\n"); }
The printk function is defined in the Linux
kernel
<1> is the priority of the message. We've
specified a high priority (low cardinal number) in this module because
a message with the default priority might not show on the console,
depending on the kernel version you are running, the version of the
klogd daemon, and your configuration. You
can ignore this issue for now; we'll explain it in the section "printk" in Chapter 4, "Debugging Techniques".
You can test the module by calling insmodand rmmod, as shown in the screen dump in
the following paragraph. Note that only the superuser can load and
unload a module.
The source file shown earlier can be loaded and unloaded as shown only
if the running kernel has module version support disabled; however,
most distributions preinstall versioned kernels (versioning is
discussed in "Version Control in Modules" in Chapter 11, "kmod and Advanced Modularization"). Although older
modutils allowed loading nonversioned
modules to versioned kernels, this is no longer possible. To solve the
problem with hello.c, the source in the
misc-modules directory of the sample code
includes a few more lines to be able to run both under versioned and
nonversioned kernels. However, we strongly suggest you compile and
run your own kernel (without version support) before you run the
sample code.[5]
[5]If you are new to building kernels,
Alessandro has posted an article at that
should help you get started.
[5]If you are new to building kernels,
Alessandro has posted an article at that
should help you get started.
root# gcc -c hello.c
root# insmod ./hello.o
Hello, world
root# rmmod hello
Goodbye cruel world
root#
root# gcc -c hello.c
root# insmod ./hello.o
Hello, world
root# rmmod hello
Goodbye cruel world
root#
According to the mechanism your system uses to deliver the message
lines, your output may be different. In particular, the previous
screen dump was taken from a text console; if you are running
insmod and rmmodfrom an xterm, you won't see anything on
your TTY. Instead, it may go to one of the system log files, such as
/var/log/messages (the name of the actual file
varies between Linux distributions). The mechanism used to deliver
kernel messages is described in "How Messages Get Logged" in
Chapter 4, "Debugging Techniques".
As you can see, writing a module is not as difficult as you might
expect. The hard part is understanding your device and how to maximize
performance. We'll go deeper into modularization throughout this
chapter and leave device-specific issues to later chapters.
Before we go further, it's worth underlining the various differences
between a kernel module and an application.
Whereas an application performs a single task from beginning to end, a
module registers itself in order to serve future requests, and its
"main" function terminates immediately. In other words, the task of
the function init_module (the module's entry
point) is to prepare for later invocation of the module's functions;
it's as though the module were saying, "Here I am, and this is what I
can do." The second entry point of a module,
cleanup_module, gets invoked just before the
module is unloaded. It should tell the kernel, "I'm not there
anymore; don't ask me to do anything else.".[6]
[6]The implementation found in Linux 2.0 and 2.2
has no support for the L and Z
qualifiers. They have been introduced in 2.4,
though.
[6]The implementation found in Linux 2.0 and 2.2
has no support for the L and Z
qualifiers. They have been introduced in 2.4,
though.
Figure 2-1 shows how function calls and function
pointers are used in a module to add new functionality to a running
kernel.
Because no library is linked to modules, source files should
never include the usual header files. Only
functions that are actually part of the kernel itself may be used in
kernel modules.
Anything related to the kernel is declared in headers found in
include/linux and
include/asm inside the kernel sources (usually
found in /usr/src/linux). Older distributions
(based on libc version 5 or earlier) used
to carry symbolic links from /usr/include/linuxand /usr/include/asm to the actual kernel
sources, so your libc include tree could
refer to the headers of the actual kernel source you had
installed. These symbolic links made it convenient for user-space applications to include kernel header files, which they occasionally need to do.
Even though user-space headers are now separate from kernel-space
headers, sometimes applications still include kernel headers, either
before an old library is used or before new information is needed
that is not available in the user-space headers. However, many of the
declarations in the kernel header files are relevant only to the
kernel itself and should not be seen by user-space applications. These
declarations are therefore protected by #ifdef
__KERNEL__ blocks. That's why your driver,
like other kernel code, will need to be compiled with the
__KERNEL__ preprocessor symbol
defined.
The role of individual kernel headers will be introduced throughout
the book as each of them is needed.
Developers working on any large software system (such as the kernel)
must be aware of and avoid namespace
pollution. Namespace pollution is what happens when
there are many functions and global variables whose names aren't
meaningful enough to be easily distinguished. The programmer who is
forced to deal with such an application expends much mental energy
just to remember the "reserved" names and to find unique names for
new symbols. Namespace collisions can create problems ranging from
module loading failures to bizarre failures -- which, perhaps, only
happen to a remote user of your code who builds a kernel with a
different set of configuration options.
Developers can't afford to fall into such an error when writing kernel
code because even the smallest module will be linked to the whole
kernel.
The best approach for preventing namespace pollution is to declare all
your symbols as static and to use a prefix that is
unique within the kernel for the symbols you leave global. Also note
that you, as a module writer, can control the external visibility of
your symbols, as described in "The Kernel Symbol Table" later in
this chapter.[7]
[7]Most versions of
insmod (but not all of them) export all
non-static symbols if they find no specific
instruction in the module; that's why it's wise to declare as
static all the symbols you are not willing to
export.
[7]Most versions of
insmod (but not all of them) export all
non-static symbols if they find no specific
instruction in the module; that's why it's wise to declare as
static all the symbols you are not willing to
export.
Using the chosen prefix for private symbols within the module may
be a good practice as well, as it may simplify debugging. While
testing your driver, you could export all the symbols without
polluting your namespace. Prefixes used in the kernel are, by
convention, all lowercase, and we'll stick to the same convention.
The last difference between kernel programming and application
programming is in how each environment handles faults: whereas a
segmentation fault is harmless during application development and a
debugger can always be used to trace the error to the problem in the
source code, a kernel fault is fatal at least for the current process,
if not for the whole system. We'll see how to trace kernel errors in
Chapter 4, "Debugging Techniques", in the section "Debugging System Faults".
A module runs in the so-called only possible device driver programming differs greatly from (most)
application programming is the issue of concurrency. An application
typically runs sequentially, from the beginning to the end, without
any need to worry about what else might be happening to change its
environment. Kernel code does not run in such a simple world and 6, "Flow of Time") run
asynchronously as well. Moreover, of course, Linux can run on
symmetric multiprocessor (SMP) systems, with the result that your
driver could be executing concurrently on more than one CPU.. Every sample driver in this book has been written with
concurrency in mind, and we will explain the techniques we use as we
come to them.
A common mistake made by driver programmers is to assume that
concurrency is not a problem as long as a particular segment of code
does not go to sleep (or "block"). It is true that the Linux kernel
is nonpreemptive; with the important exception of servicing
interrupts, it will not take the processor away from kernel code that
does not yield willingly. In past times, this nonpreemptive behavior
was enough to prevent unwanted concurrency most of the time. On SMP systems, however, preemption is not required to cause concurrent execution.
If your code assumes that it will not be preempted, it will not run
properly on SMP systems. Even if you do not have such a system, others
who run your code may have one. In the future, it is also possible
that the kernel will move to a preemptive mode of operation, at which
point even uniprocessor systems will have to deal with concurrency
everywhere (some variants of the kernel already implement it). Thus, a
prudent programmer will always program as if he or she were working on
an SMP system.
Although kernel modules don't execute sequentially as applications do,
most actions performed by the kernel are related to a specific
process. Kernel code can know the current process driving it by
accessing the global item current, a pointer to
struct task_struct, which as of version 2.4 of the
kernel is declared in <asm/current.h>,
included by <linux/sched.h>. The
current pointer refers to the user process
currently executing. During the execution of a system call, such as
open or read, the current
process is the one that invoked the call. Kernel code can use
process-specific information by using current, if
it needs to do so. An example of this technique is presented in
"Access Control on a Device File", in Chapter 5, "Enhanced Char Driver Operations".
Actually, current is not properly a global variable
any more, like it was in the first Linux kernels. The developers
optimized access to the structure describing the current process by
hiding it in the stack page. You can look at the details of
current in
<asm/current.h>. While the code you'll look
at might seem hairy, we must keep in mind that Linux is an
SMP-compliant system, and a global variable simply won't work when you
are dealing with multiple CPUs. The details of the implementation
remain hidden to other kernel subsystems though, and a device driver
can just include <linux/sched.h> and refer to
the current process.
From a module's point of view, current is just like
the external reference printk. A module can
refer to current wherever it sees fit. For
example, the following statement prints the process ID and the command
name of the current process by accessing certain fields in
struct task_struct:
printk("The process is \"%s\" (pid %i)\n",
current->comm, current->pid);
The command name stored in current->comm is the
base name of the program file that is being executed by the current
process.
The rest of this chapter is devoted to writing a complete, though
typeless, module. That is, the module will not belong to any of the
classes listed in "Classes of Devices and Modules" in Chapter 1, "An Introduction to Device Drivers". The sample driver shown in this chapter is called
skull, short for Simple Kernel Utility for
skull source to load your own local code to
the kernel, after removing the sample functionality it
offers.[8]
[8]We use the word local here
to denote personal changes to the system, in the good old Unix
tradition of /usr/local.
[8]We use the word local here
to denote personal changes to the system, in the good old Unix
tradition of /usr/local.
Before we deal with the roles of init_module and
cleanup_module, however, we'll write a makefile
that builds object code that the kernel can load.
First, we need to define the
__KERNEL__ symbol in the
preprocessor before we include any headers. As mentioned earlier, much
of the kernel-specific content in the kernel headers is unavailable
without this symbol.
Another important symbol is MODULE, which must be
defined before including <linux/module.h>
(except for drivers that are linked directly into the kernel). This
book does not cover directly linked modules; thus, the
MODULE symbol is always defined in our examples.
If you are compiling for an SMP machine, you also need to define
__SMP__ before including the kernel
headers. In version 2.2, the "multiprocessor or uniprocessor" choice
was promoted to a proper configuration item, so using these lines as
the very first lines of your modules will do the task:
#include <linux/config.h>
#ifdef CONFIG_SMP
# define __SMP__
#endif
A module writer must also specify the -Oflag to the compiler, because many functions are declared as
inline in the header
files. gcc doesn't expand inline functions
unless optimization is enabled, but it can accept both the
-g and -Ooptions, allowing you to debug code that uses inline
functions.[9] Because the kernel makes
extensive use of inline functions, it is important that they be
expanded properly.
.
You may also need to check that the compiler you are running matches
the kernel you are compiling against, referring to the file
Documentation/Changes in the kernel source
tree. The kernel and the compiler are developed at the same time,
though by different groups, so sometimes changes in one tool reveal
bugs in the other. Some distributions ship a version of the compiler
that is too new to reliably build the kernel. In this case, they will
usually provide a separate package (often called
kgcc) with a compiler intended for kernel
compilation.
Finally, in order to prevent unpleasant errors, we suggest that you
use the -Wall (all warnings) compiler flag,
and also that you fix all features in your code that cause compiler
warnings, even if this requires changing your usual programming
style. When writing kernel code, the preferred coding style is
undoubtedly Linus's own style.
Documentation/CodingStyle is amusing reading and
a mandatory lesson for anyone interested in kernel hacking.
All the definitions and flags we have introduced so far are best
located within the CFLAGS variable used by
make.
In addition to a suitable CFLAGS, the makefile
being built needs a rule for joining different object files. The rule
is needed only if the module is split into different source files, but
that is not uncommon with modules. The object files are joined by the
ld -r command, which is not really a linking
operation, even though it uses the linker. The output of ld
-r is another object file, which incorporates all the code
from the input files. The -r option means
"relocatable;" the output file is relocatable in that it doesn't yet
embed absolute addresses.
The following makefile is a minimal example showing how to build a
module made up of two source files. If your module is made up of a
single source file, just skip the entry containing ld
-r.
# Change it here or specify it on the "make" command line
KERNELDIR = /usr/src/linux
include $(KERNELDIR)/.config
CFLAGS = -D__KERNEL__ -DMODULE -I$(KERNELDIR)/include \
-O -Wall
ifdef CONFIG_SMP
CFLAGS += -D__SMP__ -DSMP
endif
all: skull.o
skull.o: skull_init.o skull_clean.o
$(LD) -r $^ -o $@
clean:
rm -f *.o *~ core
If you are not familiar with make, you may
wonder why no .c file and no compilation rule
appear in the makefile shown. These declarations are unnecessary
because make is smart enough to turn
.c into .o without being
instructed to, using the current (or default) choice for the compiler,
$(CC), and its flags, $(CFLAGS).
After the module is built, the next step is loading it into the
kernel. As we've already suggested, insmoddoes the job for you. The program is like
ld, in that it links any unresolved symbol
in the module to the symbol table of the running kernel. Unlike the
linker, however, it doesn't modify the disk file, but rather an
in-memory copy. insmod accepts a number of
command-line options (for details, see the manpage), and it can assign
values to integer and string variables in your module before linking
it to the current kernel. Thus, if a module is correctly designed, it
can be configured at load time; load-time configuration gives the user
more
flexibility than compile-time configuration, which is still used
sometimes. Load-time configuration is explained in "Automatic and Manual Configuration" later in this chapter.
Interested readers may want to look at how the kernel supports
insmod: it relies on a few system calls
defined in kernel/module.c. The function
sys_create_module allocates kernel memory to hold
a module (this memory is allocated with
vmalloc; see "vmalloc and Friends" in Chapter 7, "Getting Hold of Memory"). The system call
get_kernel_syms returns the kernel symbol table
so that kernel references in the module can be resolved, and
sys_init_module copies the relocated object code
to kernel space and calls the module's initialization function..
Bear in mind that your module's code has to be recompiled for each
version of the kernel that it will be linked to. Each module defines a
symbol called __module_kernel_version,
which insmod matches against the version
number of the current kernel. This symbol is placed in the
.modinfo Executable Linking and Format (ELF)
section, as explained in detail in Chapter 11, "kmod and Advanced Modularization". Please
note that this description of the internals applies only to versions
2.2 and 2.4 of the kernel; Linux 2.0 did the same job in a different
way.
The compiler will define the symbol for you whenever you include
<linux/module.h> (that's why
hello.c earlier didn't need to declare it). This
also means that if your module is made up of multiple source files,
you have to include <linux/module.h> from
only one of your source files (unless you use
__NO_VERSION__, which we'll
introduce in a while).
In case of version mismatch, you can still try to load a module
against a different kernel version by specifying the
-f ("force") switch to
insmod, but this operation isn't safe and
can fail. It's also difficult to tell in advance what will
happen. Loading can fail because of mismatching symbols, in which case
you'll get an error message, or it can fail because of an internal
change in the kernel. If that happens, you'll get serious errors at
runtime and possibly a system panic -- a good reason to be wary of
version mismatches. Version mismatches can be handled more gracefully
by using versioning in the kernel (a topic that is more advanced and
is introduced in "Version Control in Modules" in Chapter 11, "kmod and Advanced Modularization").
If you want to compile your module for a particular kernel version,
you have to include the specific header files for that kernel (for
example, by declaring a different KERNELDIR) in the
makefile given previously. This situation is not uncommon when playing
with the kernel sources, as most of the time you'll end up with
several versions of the source tree. All of the sample modules
accompanying this book use the KERNELDIR variable
to point to the correct kernel sources; it can be set in your
environment or passed on the command line of
make.
When asked to load a module, insmod follows
its own search path to look for the object file, looking in
version-dependent directories under /lib/modules.
Although older versions of the program looked in the current directory,
first, that behavior is now disabled for security reasons (it's the
same problem of the PATH environment
variable). Thus, if you need to load a module from the current
directory you should use ./module.o, which works
with all known versions of the tool.
Sometimes, you'll encounter kernel interfaces that behave differently
between versions 2.0.x and
2.4.x of Linux. In this case you'll need to
resort to the macros defining the version number of the current source
tree, which are defined in the header
<linux/version.h>. We will point out cases
where interfaces have changed as we come to them, either within the
chapter or in a specific section about version dependencies at the
end, to avoid complicating a 2.4-specific discussion.
The header, automatically included by
linux/module.h, defines the following macros:
The macro expands to a string describing the version of this kernel
tree. For example, "2.3.48".
The macro expands to the binary representation of the kernel version,
one byte for each part of the version release number. For example,
the code for 2.3.48 is 131888 (i.e., 0x020330).[10] With this information, you can (almost)
easily determine what version of the kernel you are dealing
with.
[10]This
allows up to 256 development versions between stable
versions.
[10]This
allows up to 256 development versions between stable
versions.
This is the macro used to build a "kernel_version_code"
from the individual numbers that build up a version number.
For example, KERNEL_VERSION(2,3,48) expands to 131888.
This macro is very useful when you need to compare the current
version and a known checkpoint. We'll use this macro several
times throughout the book.
The file version.h is included by
module.h, so you won't usually need to include
version.h explicitly. On the other hand, you can
prevent module.h from including
version.h by declaring
__NO_VERSION__ in advance. You'll
use __NO_VERSION__ if you need to
include <linux/module.h> in several source
files that will be linked together to form a single module -- for
example, if you need preprocessor macros declared in
module.h. Declaring
__NO_VERSION__ before including
module.h prevents automatic declaration of the
string __module_kernel_version or its
equivalent in source files where:
Increments the count for the current module
Decrements the).
[11]The memory areas that
reside on the peripheral device are commonly called I/O
memory to differentiate them from system RAM, which is
customarily called.
A typical /proc/ioports file on a recent PC that
is running version 2.4 of the kernel will look like the following:300:
Puts the author's name into the object file.
Puts a description of the module into the object file.
Places an entry describing what device is supported by this
module. Comments in the kernel source suggest that this parameter may
eventually be used to help with automated module loading, but no such
use is made at this time..
This is the first of many "backward compatibility" sections in this
book. At the end of each chapter we'll cover the things that have
changed since version 2.0 of the kernel, and what needs to be done to
make your code portable.
For starters, the KERNEL_VERSION macro was
introduced in kernel 2.1.90. The sysdep.h header
file contains a replacement for kernels that need it..
Preprocessor symbols, which must both be defined to compile
modularized kernel code.
A preprocessor symbol that must be defined when compiling modules for
symmetric multiprocessor systems.
Module entry points, which must be defined in the module object file.
The modern mechanism for marking a module's initialization and cleanup
functions.
Required header. It must be included by a module source.
Macros that act on the usage count.
The list of currently loaded modules. Entries contain
the module name, the amount of memory each module occupies, and the
usage count. Extra strings are appended to each line to
specify flags that are currently active for the
module.
Preprocessor macro, required for modules that export symbols.
Macro used to specify that the module exports no symbols to the kernel.
Macro used to export a symbol to the kernel. The second form exports
without using versioning information.
Function used to specify the set of public symbols in the module. Used
in 2.0 kernels only.
Headers and preprocessor macro used to declare a symbol table in the
2.0 kernel.
Macros that make a module variable available as a parameter that may
be adjusted by the user at module load time.
Place documentation on the module in the object file.
Required header. It is included by
<linux/module.h>, unless
__NO_VERSION__ is defined (see
later in this list).
Integer macro, useful to #ifdef version
dependencies.
Required variable in every module.
<linux/module.h> defines it, unless
__NO_VERSION__ is defined (see
the following entry).
Preprocessor symbol. Prevents declaration of
kernel_version in
<linux/module.h>.
One of the most important header files. This file contains definitions
of much of the kernel API used by the driver, including functions for
sleeping and numerous variable declarations.
The current process.
The process ID and command name for the current process.
The analogue of printf for kernel code.
Analogue of malloc and freefor kernel code. Use the value of GFP_KERNEL as the
priority.
Functions used to register and release I/O ports.
Macros used to register and release I/O memory regions.
The public kernel symbol table.
The list of ports used by installed devices.
The list of allocated memory regions.
Back to: Table of Contents
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Kernel/LDD2/ch02.lwn | crawl-002 | refinedweb | 4,464 | 54.32 |
Running the table to csv code (to turn a wikipedia table into a csv file) only captures the
headers. The cells aren't filled with anything.
Note from the Author or Editor:Formatting error causes inner "for" loop to be outdented, causing the logic in the code to break. The code on Github is correct:
Traceback (most recent call last):
File "/home/dave/python/scrape_add_to_db.py", line 28, in <module>
links = getLinks("/wiki/Kevin_Bacon")
File "/home/dave/python/scrape_add_to_db.py", line 22, in getLinks
title = bsObj.find("h1").find("span").get_text()
AttributeError: 'NoneType' object has no attribute 'get_text'
I'm pretty sure that the error "None" means some problem downloading the url, but I know that I got pymysql working and changed my character sets. I thought that kindle might have mangled your nice code again so I went to github and copied and pasted the code, still same error. This is chapter 5 about 34% into the book (no page number on Kindle).
Note from the Author or Editor:Unfortunately, Wikipedia has removed span tags from its titles, breaking some of the code in the book. This can be fixed by removing "find("span")" from the code, and just writing:
title = bsObj.find("h1").get_text()
This will be fixed in ebook editions and updated for future print editions.
missing the 's' in word bigramsDist in the line of code: bigramDist[("Sir", "Robin")]
Note from the Author or Editor:Good catch! Have fixed for upcoming prints/ebook releases.
The text coloring is not consistent for the string in the line of code:
text = word_tokenize("Strange women lying in ponds distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.")
Note from the Author or Editor:Will be fixed in the ebook and upcoming printings of the book.
In chapter 2, "Advanced HTML Parsing",I've found the following two errors:
(1) in the section titled "A Caveat to the keyword Argument", there is a sentence that begins with 'Alternatively, you can enclose class in quotes'. The sample code that follows 'bsObj.findall("", {"class":"green"}' is missing the right parenthesis.
(2) Once again in chapter 2, "Advanced HTML Parsing", in the section titled "Other BeautifulSoup Objects" there is a sentence that is indented under "Tag objects" that ends in a colon (':'). The colon, traditionally and grammatically, signals that additional information follows but none does (follow). Is this an grammar typo or is the text that follows the colon actually missing?
Please accept my apology for not providing page numbers but my ePub version of your book does not contain page numbering on my Kindle Fire. I now have a valid reason why I should not buy eBooks. From hereon, I'll stick to printed technical books: they have always served me well. Not to lay the blame at your feet, but I'm going to buy your print version. I'm working on a project and I don't need the distractions.
Note from the Author or Editor:On page 17, the line should read:
bsObj.findAll("", {"class":"green"})
On page 18, the line:
bsObj.div.h1
Should be moved from its original position and placed under the description of "Tag objects" where it says "Retrieved in lists or individually by calling find and findAll on a BeautifulSoup object, or drilling down, as in:" What follows this sentence should be the example "bsObj.div.h1"
.findAll("span", {"class": "green", "class": "red"})
an attempt to create a Python dict with repeated keys will preserve just the last one that is entered in the dict.
The correct would be:
.findAll("span", {"class": {"green", "red"}}
Note that we're passing now a collection (set) as the value for the "class" key on the attributes dict.
Note from the Author or Editor:The line on page 16, in Chapter 2, should read:
.findAll("span", {"class":{"green", "red"}})
"If it is false,"should be read as "If it is False,"
Note from the Author or Editor:This will be fixed in upcoming prints and editions
the section BeautifulSoup and regular expressions.
should be read as
the section "Regular Expressions and BeautifulSoup."
“If recursion is set to True”should be read as “If recursive is set to True”
Note from the Author or Editor:Fixed in upcoming prints
The paragraph states:
"Retrieved in lists or individually by calling find and findAll on a BeautifulSoup object, or drilling down, as in:"
It ends with a colon, but it is followed by a new paragraph.
Suggestion:
It looks like the 4th paragraph (a line with only "bsObj.div.h1") should be moved there instead, and not simply removed, as suggested in the Note from the Author or Editor.
The 'body' of "body tag" should be Bold font.
"- s<td> (2)"should be read as "- <td> (2)"
The linear rule number 4 at line 17 says;
"4. Optionally, write the letter "d" at the end." which does not say blank at the end,
however,
the line 22 regEx says
aa*bbbbb(cc)*(d | ), where blank comes at the end.
this should be read as the following to be consistent with the rule.
aa*bbbbb(cc)*(d|).
Note from the Author or Editor:Changed text to different, more useful, example
The text reads:
"
from urllib.request
import urlopenfrom bs4
import BeautifulSoupimport re
"
It should be:
"
from urllib.request import url open
from bs4 import BeautifulSoup
import re
"
The code has serious bugs in handling internal Links. Here is a debugged code:
from urllib.request import urlopen
from urllib.error import HTTPError
from urllib.parse import urlparse
from bs4 import BeautifulSoup
import re
import random
#Retrieves a list of all Internal links found on a page
def getInternalLinks(bsObj, includeUrl):
internalLinks = []
#Finds all links that begin with a "/"
for link in bsObj.findAll("a",
href=re.compile("^(\/|.*(http:\/\/"+includeUrl+")).*")):
if link.attrs['href'] is not None and len(link.attrs['href']) != 0:
if link.attrs['href'] not in internalLinks:
internalLinks.append(link.attrs['href'])
return internalLinks
#Retrieves a list of all external links found on a page
def getExternalLinks(bsObj, url):
excludeUrl = getDomain(url)
externalLinks = []
#Finds all links that start with "http" or "www" that do
#not contain the current URL
for link in bsObj.findAll("a",
href=re.compile("^(http)((?!"+excludeUrl+").)*$")):
if link.attrs['href'] is not None:
if link.attrs['href'] not in externalLinks:
externalLinks.append(link.attrs['href'])
return externalLinks
def getDomain(address):
return urlparse(address).netloc
def followExternalOnly(bsObj, url):
externalLinks = getExternalLinks(bsObj, url)
if len(externalLinks) == 0:
print("Only internal links here. Try again.")
internalLinks = getInternalLinks(bsObj, getDomain(url))
if len(internalLinks) == 0:
return
if len(internalLinks) == 1:
randInternalLink = internalLinks[0]
else:
randInternalLink = internalLinks[random.randint(0, len(internalLinks)-1)]
if randInternalLink[0:4] != 'http':
randInternalLink = 'http://'+getDomain(url)+randInternalLink
if randInternalLink == url and len(internalLinks) == 1:
return
bsObjnext = BeautifulSoup(urlopen(randInternalLink), "html.parser")
#Try again
followExternalOnly(bsObjnext, randInternalLink)
else:
randomExternal = externalLinks[random.randint(0, len(externalLinks)-1)]
try:
nextBsObj = BeautifulSoup(urlopen(randomExternal), "html.parser")
print(randomExternal)
#Next page!
followExternalOnly(nextBsObj, randomExternal)
except HTTPError:
#Try again
print("Encountered error at "+randomExternal+"! Trying again")
followExternalOnly(bsObj, url)
url = ""
bsObj = BeautifulSoup(urlopen(url), "html.parser")
#Recursively follow external links
followExternalOnly(bsObj, url)
Note from the Author or Editor:This code has been updated on Github and will be fixed in upcoming prints and editions of the book
Inside the getRandomExternalLink function in the if/else statement, the 'if' statement is set to return 'getNextExternalLink' if the length of externalLinks is equal to zero.
The 'getNextExternalLink' was never defined.
Note from the Author or Editor:Updated code can be found in the github repository at:
#Finds all links that start with "http" or "www" that do
Should be read as
#Finds all links that start with "http" that do
To reflect the revised code line 8 from top
Note from the Author or Editor:Changed the code to reflect this comment
Random external link is:
Random external link is:
Random external link is:
Random external link is:
Should be read as
Reflecting revised code print function, line 10 from the bottom of code snippet.
Note from the Author or Editor:Updated code to reflect printout
The directory structure is different from the shown as:
• scrapy.cfg
— wikiSpider
— __init.py__
— items.py
This should be the following:
—scrapy.cfg
— wikiSpider
— __init.py__
— items.p
The 1st sentence:
In order to create a crawler, we will add a new file to wikiSpider/wikiSpider/spiders/
articleSpider.py called items.py.
Should be read as:
In order to create a crawler, we will add a new file, articleSpider.py, to wikiSpider/wikiSpider/spiders/.
The two words “WikiSpider”should be read as “wikiSpider”.
The last sentence tells:
This will create a new logfile, if one does not exist, in your current directory and output all logs and print statements to it.
this should be read as
This will create a new logfile, if one does not exist, in your current directory and output all logs to it.
from twitter import Twitter
shoulde be read as
from twitter import Twitter, OAuth
Google’s Geocode API,
should be read as
Google’s Geocoding API
insert
import json
this is missing in
Note from the Author or Editor:The import statement has been added for future versions of the book
#second elif:
url = source[4:]
url = "http://"+source
#should be:
url = "http://"+source[4:]
The last part of code snippet:
bsObj = BeautifulSoup(html)
downloadList = bsObj.findAll(src=True)
for download in downloadList:
fileUrl = getAbsoluteURL(baseUrl, download["src"])
if fileUrl is not None:
print(fileUrl)
urlretrieve(fileUrl, getDownloadPath(baseUrl, fileUrl, downloadDirectory))
should be
for download in downloadList:
fileUrl = getAbsoluteURL(baseUrl, download["src"])
if fileUrl is not None:
print(fileUrl)
urlretrieve(fileUrl, getDownloadPath(baseUrl, fileUrl,
downloadDirectory))
Note from the Author or Editor:This was caused by an indentation error. It has been fixed in Github and will be fixed for future editions and prints of the book.
The line of code:
import re
is missing: a Regular Expression is used at the end of the getLinks function:
return bsObj.find("div",{"id":"bodyContent"}).findAll("a",
href=re.compile("^(/wiki/)((?!:).)*$"))
from urllib.request import urlopen
are appear twice – reduntdant.
In this chapter, I’ll cover several commonly encountered types of files: text, PDFs, PNGs, and GIFs.
However the PNG and GIF are not covered. It should be read as:
In this chapter, I’ll cover several commonly encountered types of files: text, PDFs, and .docx.
Whereas the European Computer Manufacturers Association’s website has this tag
However, it is now officially ECMA International, so it should be read as:
Whereas the ECMA International’s website has this tag
This is a Word document, full of content that you want very much. Unfortunately,
it’s difficult to access because I’m putting it on my website as a . docx
file, rather than just publishing it as HTML
should be read as
This is a Word document, full of content that you want very much. Unfortunately, it’s difficult to access because I’m putting it on my website as a .
docx
file, rather than just publishing it as HTML
In the Data Normalization section of chapter 7, there is a reference to recording the frequency of the 2-grams, then at the bottom of the page we are given a code snippet that introduces OrderedDict and uses the sorted function. In the sorted function the code contains ngrams.items() however the ngrams method returns a list and lists do not have an items() method. So the program generates an error.
In the next chapter, it looks like the code (at least on GitHub) has the ngrams function return a dictionary instead which allows the code in chapter 7 to work.
Note from the Author or Editor:I mentioned the code that would accomplish this in passing, but did not actually include it. It will be included in future printings of the book, and in the ebook.
("['Software', 'Foundation']", 40), ("['Python', 'Software']", 38),....
should be read as
OrderedDict([("['Software', 'Foundation']", 40), ("['Python', 'Software']", 38),....
Note from the Author or Editor:Updated to: "OrderedDict([('of the', 38), ('Software Foundation', 37), ..."
The current output is inconsistent with the code snippet.
("['Software', 'Foundation']", 40), ("['Python', 'Software']", 38), ("['of', 'th
e']", 35), ("['Foundation', 'Retrieved']", 34), ("['of', 'Python']", 28), ("['in
', 'the']", 21), ("['van', 'Rossum']", 18)
First, as the value of ngrams is an OrderedDict.
Second, the getNgrams generate a string for 2gram instead of list of 2 strings.
The actual output looks like the following:
OrderedDict([('Software Foundation', 37), ('of the', 37), ('Python Software', 37), ('Foundation Retrieved', 32), ('of Python', 32), ('in the', 22), ('such as', 20), ('van Rossum', 19)...
Note from the Author or Editor:Updated the output of the script to reflect the use of the OrderedDict
me data that contains four or more comma-seperated programming languages
Should be read as
me data that contains three or more comma-seperated programming languages
The last sentence refers:
guide to the language can be found on OpenRefine’s GitHub page
This pointer refers to, which is not the precise page for the OpenRefine guide documents.
This should be
• The Constitution of the United States is the instrument containing this grant of
power to the several departments composing the government.
Should be read as
• The Constitution of the United States is the instrument containing this grant of
power to the several departments composing the Government.
The general government has seized upon none of the reserved rights of the states.
Should be read as
The General Government has seized upon none of the reserved rights of the States.
The presses in the necessary employment of the government should never be used
to clear the guilty or to varnish crime.
Should be read as
The presses in the necessary employment of the Government should never be used
to “clear the guilty or to varnish crime.”
The link embedded in PDF for "That can be my next tweet!" is a wrong one, that should be
Note from the Author or Editor:The page has changed since the book was written. Updated for future editions
name is email_address)
should be read as
name is email_addr)
The part of code snippet
r = requests.post("
quicksignup.cgi", data=params)
causes EOL error because of string break. It should be like the following:
r = requests.post(
"",
data=params)
Note from the Author or Editor:Because of the limitations of printing, there are many instances throughout the book where code needs to be cut off and continued on the next line. Please either correct these as you copy them from the book, or refer to the code repository on Github.
In this case, I will use the suggested version, because it corrects an issue with the syntax highlighting caused with this particular line break.
The code says `name="image"`, but following page suggests (and code on actual site is) `name="uploadFile"`.
Once a site authenticates your login credentials a it stores in your browser a cookie,
Should be read as
Once a site authenticates your login credentials, it stores in your browser a cookie,
Note from the Author or Editor:Changed to "Once a site authenticates your login credentials it stores them in your browser’s cookie"
should be read a
If you find jQuery is found on a site, you must be careful when scraping it. jQuery is
Should be read as
If you find jQuery on a site, you must be careful when scraping it. jQuery is
page has been fully loaded: from selenium import webdriver.
from selenium.webdriver.common.by import By
should be layouted as
page has been fully loaded:
from selenium import webdriver.
from selenium.webdriver.common.by import By
The link for installing Pillow does not work, instead use
Note from the Author or Editor:The link has changed since publication, and is updated in future versions.
Computer Automated Public Turing test to tell Computers and Humans Apart
should be read as
Completely Automated Public Turing test to tell Computers and Humans Apart
The diagram 8.1 about a Markov weather model has one incorrect percentage value and one incorrect arrow direction:
1. The value for Sunny being sunny the next day should be 70% rather than 20%.
2. The arrow for the 15% chance of Rainy being followed by Cloudy should be reversed so that this shows a 15% chance of Cloud being followed by Rain.
Note from the Author or Editor:The description is correct. The corrected Markov diagram is:
The paragraph and the 1st code refer/define main as exist,in the code referred at the preceding paragraph.
However, there is no main method in this code example, instead you use __init__. So, the main should be read as __init__.
Use a tool such as Chrome’s Network
inspector to
Should be read as
Use a tool such as Chrome’s Network
panel to
In the second scenario, the load your Internet connection and home machine can
Should be read as
In the third scenario, the load your Internet connection and home machine can
DMCS Safe Harbor
should be read as
DMCA Safe Harbor
© 2017, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.oreilly.com/catalog/errata.csp?isbn=0636920034391 | CC-MAIN-2017-39 | refinedweb | 2,890 | 60.75 |
> On Oct. 6, 2015, 7:57 p.m., Ben Mahler wrote: > >/?
I would like to keep fsLayers as it reflects the language of manifest. > On Oct. 6, 2015, 7:57 p.m., Ben Mahler wrote: > > src/tests/containerizer/provisioner_docker_tests.cpp, line 67 > > <> > > > > Can you do a sweep to remove all of the namespace aliases? If you want > > to pull them out of the RegistryClient let's use another patch. Its already removed in a later patch in the series () > On Oct. 6, 2015, 7:57 p.m., Ben Mahler wrote: > >. The idea was to make it obvious 50 lines down from initialization. I can change it and create another patch for replacing all xxxFuture with xxx for variable names. - Jojy ----------------------------------------------------------- This is an automatically generated e-mail. To reply, visit: ----------------------------------------------------------- > > | https://www.mail-archive.com/reviews@mesos.apache.org/msg11549.html | CC-MAIN-2016-50 | refinedweb | 130 | 79.87 |
Hi all!!
I have a problem. I have to write a program that initialises a 2 dimensional arrayof double and uses a function to copy the sub-arrays of this 2 dimensional function one-by-one in to a seperate array.
please help. ive spent all day on this and wrote so many variations.
I can copy a 2 dimensional array within the main function, like so...
#include <stdio.h>
#define SIZEA 2
#define SIZEB 4
Code:main() { int i, j; double destination2[SIZEA][SIZEB]; double source[SIZEA][SIZEB]= { {5, 3, 6, 7}, {3, 15, 12, 4} }; for(i=0; i<SIZEA; i++) { for(j=0; j<SIZEB; j++) { destination2[i][j]= source[i][j]; printf("%.2f\n", destination2 [i][j]); } printf("\n"); } }
but this isnt what the question asks for and im really stuck.
Thanks in advance
Stuart | http://cboard.cprogramming.com/c-programming/85536-arrays-function-arguments.html | CC-MAIN-2015-40 | refinedweb | 139 | 63.09 |
On Tue, Jan 30, 2018 at 3:40 PM, Greg Rose <gvrose8...@gmail.com> wrote: > Allow OVS to compile and build on Linux 4.14.x kernels. Added > necessary compatability layer changes to the respective patches > as required for our OOT build environment. > > Note that NSH and ERSPAN patches are not in this series. We > are working with the authors of those patches to get them > backported. > > This series of patches was originally sent as two separate sets > however the dependencies and compatability layer requirements > made it more convenient to combine the two sets. > > Andy Zhou (3): > datapath: export get_dp() API > datapath: Add meter netlink definitions > datapath: Add meter infrastructure > > Arnd Bergmann (1): > datapath: use ktime_get_ts64() instead of ktime_get_ts() > > Christophe JAILLET (1): > datapath: Fix an error handling path in > 'ovs_nla_init_match_and_action() > > Florian Westphal (1): > datapath: conntrack: make protocol tracker pointers const > > Greg Rose (8): > datapath: Fix netdev_master_upper_dev_link for 4.14 > compat: Do not include headers when not compiling > datapath: Fix SKB_GSO_UDP usage > acinclude.m4: Enable Linux 4.14 > travis: Update kernel test list from kernel.org > compat: Fix compiler headers > compat:inet_frag.h: Check for frag_percpu_counter_batch > Documentation: Update NEWS and faq > > Gustavo A. R. Silva (2): > datapath: meter: fix NULL pointer dereference in > ovs_meter_cmd_reply_start > datapath: fix data type in queue_gso_packets > > Jiri Benc (1): > datapath: reliable interface indentification in port dumps > > Wei Yongjun (2): > datapath: Fix return value check in ovs_meter_cmd_features() > datapath: Using kfree_rcu() to simplify the code > > zhangliping (1): > datapath: fix the incorrect flow action alloc size > The patch series looks good. I have few comment on couple of patches. About metering and namespace related userspace patches are not in 2.9, so can you create two separate series. one with fixes for master ( and can be backported 2.9) and second with the features which can be targeted for master branch only.
Thanks. _______________________________________________ dev mailing list d...@openvswitch.org | https://www.mail-archive.com/ovs-dev@openvswitch.org/msg18678.html | CC-MAIN-2018-09 | refinedweb | 310 | 55.74 |
Specifying a Route's Model
A route's JavaScript file is one of the best places in an app to make requests to an API.
In this section of the guides, you'll learn how to use the
model
method to fetch data by making a HTTP request, and render it in a route's
hbs template, or pass it down to a component.
For example, take this router:
Router.map(function() { this.route('favorite-posts'); });
In Ember, functions that automatically run during rendering or setup are commonly referred to as "hooks".
When a user first visits the
/favorite-posts route, the
model hook in
app/routes/favorite-posts.js will automatically run.
Here's an example of a model hook in use within a route:
import Route from '@ember/routing/route'; export default class FavoritePostsRoute extends Route { model() { console.log('The model hook just ran!'); return 'Hello Ember!'; } }
model hooks have some special powers:
- When you return data from this model, it becomes automatically available in the route's
.hbsfile as
@modeland in the route's controller as
this.model.
- A
modelhook can return just about any type of data, like a string, object, or array, but the most common pattern is to return a JavaScript Promise.
- If you return a Promise from the model hook, your route will wait for the Promise to resolve before it renders the template.
- Since the
modelhook is Promise-aware, it is great for making API requests (using tools like fetch) and returning the results.
- When using the
modelhook to load data, you can take advantage of other niceties that Ember provides, like automatic route transitions after the data is returned, loading screens, error handling, and more.
- The
modelhook may automatically re-run in certain conditions, as you'll read about below.
Using the
model hook
To start, here's an example of returning a simple array from the
model hook. Even if we eventually plan to fetch this data over a network, starting with something simple makes initial development of a new route quick and easy.
import Route from '@ember/routing/route'; export default class FavoritePostsRoute extends Route { model() { return [ { title: 'Ember Roadmap' }, { title: 'Accessibility in Ember' }, { title: 'EmberConf Recap' } ]; } }
Now that data can be used in the
favorite-posts template:
{{#each @model as |post|}} <div> {{post.title}} </div> {{/each}}
Behind the scenes, what is happening is that the route's controller receives the results of the model hook, and Ember makes the model hook results available to the template. Your app may not have a controller file for the route, but the behavior is the same regardless.
Let's compare some examples using the model hook to make asynchronous HTTP requests to a server somewhere.
Fetch example
First, here's an example using a core browser API called
fetch, which returns a Promise.
Install
ember-fetch with the command
ember install ember-fetch, if it is not already in the app's
package.json.
Older browsers may not have
fetch, but the
ember-fetch library includes a polyfill, so we don't have to worry about backwards compatibility!
import Route from '@ember/routing/route'; import fetch from 'fetch'; export default class PhotosRoute extends Route { async model() { const response = await fetch('/my-cool-end-point.json'); const photos = await response.json(); return { photos }; } }
Ember Data example
Ember Data is a powerful (but optional) library included by default in new Ember apps.
In the next example, we will use Ember Data's
findAll method, which returns a Promise, and resolves with an array of Ember Data records.
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; export default class FavoritePostsRoute extends Route { @service store; model() { return this.store.findAll('post'); } }
Note that Ember Data also has a feature called a
Model, but it's a separate concept from a route's
model hook.
Multiple Models
What should you do if you need the
model to return the results of multiple API requests?
Multiple models can be returned through an
RSVP.hash.
The
RSVP.hash method takes an object containing multiple promises.
If all of the promises resolve, the returned promise will resolve to an object that contains the results of each request. For example:
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; import RSVP from 'rsvp'; export default class SongsRoute extends Route { @service store; model() { return>
Dynamic Models
In the examples above, we showed a route that will always return the same data, a collection of favorite posts. Even when the user leaves and re-enters the
/posts route, they will see the same thing.
But what if you need to request different data after user interaction?
What if a specific post should load based on the URL that the user visited, like
posts/42?
In Ember, this can be accomplished by defining routes with dynamic
segments, or by using query parameters, and then using the dynamic data to make requests.
In the previous Guides topic, we showed making a dynamic segment in the app's
router.js:
Router.map(function() { this.route('posts'); this.route('post', { path: '/post/:post_id' }); });
Whatever shows up in the URL at the
:post_id, the dynamic segment, will be available in the params for the route's
model hook:
import Route from '@ember/routing/route'; export default class PostRoute extends Route { model(params) { console.log('This is the dynamic segment data: ' + params.post_id); // make an API request that uses the id } }
If you do not define a model hook for a route, it will default to using Ember Data to look up the record, as shown below:
model(params) { return this.store.findRecord('post', params.post_id); }
In the
model hook for routes with dynamic segments, it's your job to
turn the ID (something like
47 or
post-slug) into a model that can
be rendered by the route's template. In the above example, we use the
post's ID (
params.post_id) as an argument to Ember Data's
findRecord
method.
Linking to a dynamic segment
There are two ways to link to a dynamic segment from an
.hbs template using
<LinkTo>.
Depending on which approach you use, it will affect whether that route's
model hook is run.
To learn how to link to a dynamic segment from within the JavaScript file, see the API documentation on
transitionTo
instead.
When you provide a string or number to the
<LinkTo>, the dynamic segment's
model hook will run when the app transitions to the new route.
In this example,
photo.id might have an id of
4:
{{#each @model as |photo|}} <LinkTo @route="photo" @model={{photo.id}}> link text to display </LinkTo> {{/each}}
However, if you provide the entire model context, the model hook for that URL segment will not be run.
For this reason, many Ember developers choose to pass only ids to
<LinkTo> so that the behavior is consistent.
Here's what it looks like to pass the entire
photo record:
{{#each @model as |photo|}} <LinkTo @route="photo" @model={{photo}}> link text to display </LinkTo> {{/each}}
If you decide to pass the entire model, be sure to cover this behavior in your application tests.
If a route you are trying to link to has multiple dynamic segments, like
/photos/4/comments/18, be sure to specify all the necessary information for each segment:
<LinkTo @route="photos.photo.comments.comment" @models={{array 4 18}}> link text to display </LinkTo>
Routes without dynamic segments will always execute the model hook.
Reusing Route Context
Sometimes you need to fetch a model, but your route doesn't have the parameters, because it's
a child route and the route directly above or a few levels above has the parameters that your route
needs.
You might run into this if you have a URL like
/album/4/songs/18, and when you're in the songs route, you need an album ID.
In this scenario, you can use the
paramsFor method to get the parameters of a parent route.
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; export default class AlbumIndexRoute extends Route { @service store; model() { let { album_id } = this.paramsFor('album'); return this.store.query('song', { album: album_id }); } }
This is guaranteed to work because the parent route is loaded. But if you tried to
do
paramsFor on a sibling route, you wouldn't have the results you expected.
This is a great way to use the parent context to load something that you want.
Using
paramsFor will also give you the query params defined on that route's controller.
This method could also be used to look up the current route's parameters from an action
or another method on the route, and in that case we have a shortcut:
this.paramsFor(this.routeName).
In our case, the parent route had already loaded its songs, so we would be writing unnecessary fetching logic.
Let's rewrite the same route, but use
modelFor, which works the same way, but returns the model
from the parent route.
import Route from '@ember/routing/route'; export default class AlbumIndexRoute extends Route { model() { let { songs } = this.modelFor('album'); return songs; } }
In the case above, the parent route looked something like this:
import Route from '@ember/routing/route'; import { inject as service } from '@ember/service'; import RSVP from 'rsvp'; export default class AlbumRoute extends Route { @service store; model({ album_id }) { return RSVP.hash({ album: this.store.findRecord('album', album_id), songs: this.store.query('song', { album: album_id }) }); } }
And calling
modelFor returned the result of the
model hook.
Debugging models
If you are having trouble getting a model's data to show up in the template, here are some tips:
- Use the
{{debugger}}or
{{log}}helper to inspect the
{{@model}}from the template
- return hard-coded sample data as a test to see if the problem is really in the model hook, or elsewhere down the line
- study JavaScript Promises in general, to make sure you are returning data from the Promise correctly
- make sure your
modelhook has a
returnstatement
- check to see whether the data returned from a
modelhook is an object, array, or JavaScript Primitive. For example, if the result of
modelis an array, using
{{@model}}in the template won't work. You will need to iterate over the array with an
{{#each}}helper. If the result is an object, you need to access the individual attribute like
{{@model.title}}to render it in the template.
- use your browser's development tools to examine the outgoing and incoming API responses and see if they match what your code expects
- If you are using Ember Data, use the Ember Inspector browser plugin to explore the View Tree/Model and Data sections. | https://guides.emberjs.com/v3.19.0/routing/specifying-a-routes-model/ | CC-MAIN-2022-21 | refinedweb | 1,787 | 59.74 |
Timeline
- pextlib1.0:
- gmp:
- nvi: build fix #21363
- 11:28 Ticket #20803 (epic5: archive.h: No such file or directory) closed by
- fixed: should be fixed in r57666
- 11:28 Changeset [57666] by
- epic5: build fix #20803
- 11:24 Ticket #20415 (doxygen fails, attempting to link against system libiconv) closed by
- fixed: r57665
- 11:24 Changeset [57665] by
- doxygen:
- libusb-compat: update homepage/license
09/13/09:
- 23:53 Changeset [57621] by
- Total number of ports parsed: 6180 Ports successfully parsed: 6180 …
- 23:40 Changeset [57620] by
- libsdl-devel: livecheck
- 23:38 Changeset [57619] by
- waitfor: livecheck
- 23:36 Changeset [57618] by
- libusb: fix homepage and livecheck
- 23:31 Changeset [57617] by
- remove db45 as nothing really uses it
- 23:23 Changeset [57616] by
- remove obsolete db3 port
- 23:23 Changeset [57615] by
- nvi: update to use db 4.x take maintainership
- 23:21 Changeset [57614] by
- rb-bdb: remove db3 variant
- 23:21 Changeset [57613] by
- py-bsddb:
- odcctools:
- tth: update to 3.86
-
09/12/09:
- 22:57 Changeset [57558] by
- Added modeline; Untabified.
- 21:46 Ticket #21329 (enblend 3.2 compile gets "stuck" on SL) closed by
- invalid: I'm inclined to leave it for now.
- 21:40 Ticket #21220 (wireshark will not launch) closed by
- invalid
- 21:39 Ticket #21300 (zlib: Unable to open port: can't read "configure.cc_archflags": no such ...) closed by
- invalid: Glad you got it working. Yes, if you install macports from trunk, …
- 19:53 Changeset [57557] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 19:33 Ticket #21335 (emacs fail to build on snow leopard) created by
- Hi All, I get this error when I try to intsall in on snow leopard. I know …
- 19:27 Ticket #21328 (gegl 0.1.0: please let libsdl-devel satisfy libsdl dep (build failure on ...) closed by
- fixed: Fix committed in r57556. Either libsdl or libsdl-devel will satisfy the …
- 19:24 Changeset [57556] by
- gegl: use path dependency to allow either libsdl or libsdl-devel. Default …
- 18:44 Ticket #21334 (php5-xdebug 2.0.4_0 Blank variable values) created by
- Not sure if this is related to Snow Leopard or not but I just upgraded to …
- 17:32 Ticket #21333 (bluefish-devel 1.3.6) created by
- This is my first portfile, copied most from the portfile from bluefish. …
- 15:53 Changeset [57555] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 15:52 Ticket #21332 (Unable to run rtorrent 0.8.4_0 on Snow Leopard) closed by
- invalid: See Migration
- 15:30 Ticket #12768 (cups-headers update to 1.1.23) closed by
- wontfix: cups-headers was removed in r57551.
- 15:25 Changeset [57554] by
- Remove stray tab.
- 15:24 Changeset [57553] by
- Updated to latest release. Compatible with Snow Leopard. Add license.
- 15:21 Changeset [57552] by
- Updated to latest release. Compatible with Snow Leopard. Add license.
- 15:18 Changeset [57551] by
- Remove cups-headers and all references to it because it is only useful on …
- 15:16 Ticket #21332 (Unable to run rtorrent 0.8.4_0 on Snow Leopard) created by
- Running Snow Leopard 10.6.1. Built the latest version of rtorrent and got …
- 14:56 Ticket #21331 (x264: gcc-4.2: -E, -S, -save-temps and -M options are not allowed with ...) created by
- On Mac OS X 10.6 trying to install x264 @20090810_2 I see: […] I am …
- 14:53 Changeset [57550] by
- Total number of ports parsed: 6183 Ports successfully parsed: 6183 …
- 14:47 Ticket #21330 (openmotif: update to 2.3.2 and add missing uintptr_t casts) closed by
- fixed: I'm just going to commit the update without those new --with-* flags. …
- 14:44 Changeset [57549] by
- openmotif: update to 2.3.2 and fix 64-bit warnings on Snow Leopard; see …
- 13:54 Changeset [57548] by
- nspr: Build universal and work around muniversal quirks
- 13:45 Ticket #21330 (openmotif: update to 2.3.2 and add missing uintptr_t casts) created by
- The openmotif package should be updated to the latest 2.3.2 release. There …
- 13:18 Changeset [57547] by
- guide/installing.xml - 3.1.4 is latest Xcode for 10.5 now
- 13:02 Ticket #21300 (zlib: Unable to open port: can't read "configure.cc_archflags": no such ...) reopened by
-
- 12:59 Ticket #15410 (linkchecker does not work on Mac OS X 10.4.11 after installing) closed by
- worksforme: py25-hashlib is just a stub these days, it is built as part of python25 …
- 11:57 Ticket #21329 (enblend 3.2 compile gets "stuck" on SL) created by
- Hi, when trying to compile enblend it gets stuck at enfuse.cc, it just …
- 11:54 Changeset [57546] by
- Total number of ports parsed: 6183 Ports successfully parsed: 6183 …
- 11:48 Ticket #21328 (gegl 0.1.0: please let libsdl-devel satisfy libsdl dep (build failure on ...) created by
- Hi, gegl currently depends on libsdl, which is 1.2 and fails to build on …
- 11:38 Ticket #21327 (Segmentation Fault when installing autoconf 2.64 on Snow Leopard 10.6.1) created by
- I ran into this problem while trying to upgrade outdated after installing …
- 11:34 Ticket #21256 (py26-cairo: update to version 1.8.8) closed by
- fixed: Committed in r57545, maintainer timeout.
- 11:33 Changeset [57545] by
- py26-cairo: update to version 1.8.8, closes #21256, maintainer timeout.
- 11:32 Ticket #20285 (python_select does not function on 10.6) closed by
- fixed: The problem in this ticket's description is fixed (python25 is a framework …
- 11:31 Ticket #21188 (Small Fix to Inkscape Portfile) closed by
- fixed: Fixed in r57544 for both inkscape and inkscape-devel.
- 11:30 Changeset [57544] by
- inkscape, inkscape-devel: prevent conflict between python25 Object.h and …
- 11:21 Ticket #20664 (Enable smartcard support to rdesktop) closed by
- fixed: Next time, please supply a …
- 11:20 Changeset [57543] by
- Add smartcard variant. Remove inactive maintainer. (#20664)
- 11:19 Ticket #21326 (screen-4.0.3 typo in configure.args) created by
- There's a small typo in the configure.args section of the gnu screen …
- 11:18 Changeset [57542] by
- only match 0.9 branch in livecheck
- 11:14 Changeset [57541] by
- point to VLC09 in error message
- 11:13 Ticket #21325 (db3-3.3.11 Configure error - build failure) created by
- […]
- 11:08 Ticket #21225 (dsniff-devel (2.4b1) fails to build on Snow Leopard) closed by
- fixed: Committed revision r57540. Thanks! Note: I removed the direct dependency …
- 11:07 Changeset [57540] by
- Fix build on Snow Leopard. (#21225)
- 11:04 Ticket #21324 (anjuta 2.26.2 fails to build - lacks libgnomeui dependency) closed by
- fixed: Fixed in r57539. Thanks for the report.
- 11:03 Changeset [57539] by
- anjuta: add missing dependency on libgnomeui, closes #21324.
- 10:56 Ticket #21324 (anjuta 2.26.2 fails to build - lacks libgnomeui dependency) created by
- Hi, I see the dependency has been removed upstream on the 2.27/2.28 …
- 10:54 Changeset [57538] by
- Use latest libnet as dependency. (#21225)
- 10:53 Changeset [57537] by
- Total number of ports parsed: 6183 Ports successfully parsed: 6183 …
- 10:37 Ticket #20412 (re-enable trash variant for mutt-devel 1.5.20) closed by
- fixed: Replying to kuperman@…: > IMHO, it's not a showstopper to not …
- 10:36 Changeset [57536] by
- Fix mutt version info. (#20412)
- 09:54 Changeset [57535] by
- koffice: add dependency on libexif
- 09:53 Changeset [57534] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 09:48 Ticket #21122 (kde fails to make? OS X 10.4.11) closed by
- invalid: Assuming it worked.
- 09:46 Ticket #21318 (Astro-satpass) closed by
- fixed: Added in r57532. Thanks!
- 09:46 Changeset [57533] by
- koffice: use mysql5 instead of mysql4
- 09:46 Changeset [57532] by
- Added new port. (#21318)
- 09:43 Changeset [57531] by
- opensync: switch to python 2.6
- 09:42 Changeset [57530] by
- unsermake: switch to python 2.6
- 08:53 Changeset [57529] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 08:25 Ticket #21323 (kdelibs4 4.3.0 on SnowLeopard - build failure) created by
- Building kdelibs4 4.3.0 on SnowLeopard, I came across two problems (up to …
- 07:58 Changeset [57528] by
- formatting and whitespace.
- 07:53 Changeset [57527] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 07:36 Ticket #20349 (VLC 1.0.0 requires Mac OS X 10.5 or newer) closed by
- fixed: new port VLC09 that also closes #19555 has been commited in r57523
- 07:35 Ticket #19555 (VLC @0.9.9a build failure) closed by
- fixed: Great, thank for testing it so quickly. fixed in r57523
- 07:34 Ticket #21110 (lang/camlp5 v5.12 build error) closed by
- fixed: Added in r57526. Thanks for figuring that out.
- 07:33 Changeset [57526] by
- set parallel build to no so it builds. (#21110)
- 07:03 Changeset [57525] by
- conflict with VLC09, add license
- 06:53 Changeset [57524] by
- Total number of ports parsed: 6182 Ports successfully parsed: 6182 …
- 06:50 Ticket #21321 (ffmpeg fails to fetch patch-configure.diff (Error: Target ...) closed by
- fixed: Missing patch file committed in r57518. Sorry.
- 06:47 Ticket #20888 (GitX fails to build on 10.6 Snow Leopard) closed by
- fixed: Thanks for the info, fixed in r57319
- 06:43 Changeset [57523] by
- new port: provides VLC 0.9 branch for people on 10.4 (closes #20349)
- 06:36 Changeset [57522] by
- revert unintended commits from r57521.
- 06:25 Changeset [57521] by
- ffmpeg, ffmpeg-devel, x264: remove checks for the existence of build_arch …
- 06:17 Ticket #14346 (gfortran variant of mpich2 and netcdf) closed by
- fixed: The underscoring problem is fixed in r57520. The static variant is added …
- 06:10 Ticket #14639 (RFE: to netcdf, add variants gcc42, gcc43, & docs) closed by
- fixed: Committed in r57520. Thanks for the patch.
- 06:08 Changeset [57520] by
- netcdf: fixed the underscoring problem for gcc43
- 05:49 Changeset [57519] by
- Upgraded version, changed to new project homepage, edited patch to make …
- 05:49 Changeset [57518] by
- ffmpeg: add missing patch file.
- 05:39 Ticket #21322 ([dnsmasq] [2.50] Add variants to disable DHCP, TFTP, and the IPV6 features) created by
- Several default features in dnsmasq can be disabled with compiler options. …
- 00:09 Ticket #21321 (ffmpeg fails to fetch patch-configure.diff (Error: Target ...) created by
- 10.4.11 + xcode 2.5 when trying to upgrade ffmpeg to latest version I get …
09/11/09:
- 23:53 Changeset [57517] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 23:15 Ticket #21313 (python25: upgrade to python25 @2.5.4_7 fails staging to destroot) closed by
- fixed: r57516
- 23:14 Changeset [57516] by
- python25: buildfix for tiger (#21313)
- 22:10 Ticket #21320 (python_select not working due to absolute $frameworks_dir path) created by
- here is the error log […]
- 20:53 Changeset [57515] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 20:23 Changeset [57514] by
- audio/mpd added patch for coreaudio syntax changes
- 19:53 Changeset [57513] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 19:30 Changeset [57512] by
- json-c 0.9
- 17:53 Changeset [57511] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 17:40 Ticket #21070 (port install x264 +asm fails in snow leopard) closed by
- fixed: ASM optimizations enabled by default for snow leopard x86_64 in r57510. …
- 17:35 Changeset [57510] by
- x264: enable asm optimizations by default on snow leopard x86_64, closes …
- 16:53 Changeset [57509] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 16:51 Ticket #21319 (Flusspferd 0.7 Port Update) created by
- Flusspferd 0.7 ( ) was released. This patch …
- 16:30 Ticket #21318 (Astro-satpass) created by
- Astro-satpass 0.025 p5 port
- 16:20 Ticket #21317 (glpk update to version 4.39) created by
- Simple patch attached to update math/glpk to 4.39.
- 16:06 Ticket #21316 (Failing to build py2app apps due to macholib error) created by
- Building apps with py2app fails on Snow Leopard with MacPorts' python26 …
- 15:58 Changeset [57508] by
- ffmpeg: * disable mmx for snow leopard i386 only * increment revision …
- 15:58 Changeset [57507] by
- new upstream version (0.45)
- 15:53 Changeset [57506] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 15:32 Ticket #21300 (zlib: Unable to open port: can't read "configure.cc_archflags": no such ...) closed by
- invalid
- 14:47 Changeset [57505] by
- ffmpeg-devel: update to svn 19824 swscale 29665, disable mmx on snow …
- 14:35 Ticket #14983 (sSMTP does not compile SSL support and typo in configuration file location) closed by
- fixed: Added ssl support in r57501.
- 14:28 Ticket #21098 (gd2 @2.0.35_1 autoreconf fails - "This Perl not built to support threads") reopened by
- Are any of the perl5, perl5.8 or perl5.10 ports installed? If so, were …
- 13:33 Ticket #21295 (HDF5-DIAG: Error detected in HDF5 (1.8.3)) closed by
- fixed: Thanks for the patch. Seems to work. Applied in r57504 to both py25-tables …
- 13:33 Changeset [57504] by
- py2[56]-tables: fix bug in compression filters. closes #21295.
- 13:14 Ticket #21315 (Apache2 install location does not seem to conform as well as it could to ...) created by
- It was a while ago, and I believe, that the end of the discussion, I was …
- 12:53 Changeset [57503] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 12:52 Changeset [57502] by
- ssmtp: remove unnecessary statements
- 12:04 Changeset [57501] by
- mail/ssmtp: Add ssl support, #14983
- 12:02 Changeset [57500] by
- bzflag: uncomment dep
- 11:53 Changeset [57499] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 11:44 Changeset [57498] by
- awesome: allow development versions of cairo, glib2 and pango to satisfy …
- 11:41 Ticket #21314 (sudo locks up Terminal.app until reboot) created by
- I've just done an upgrade to Snow Leopard, and completely reinstalled my …
- 11:38 Changeset [57497] by
- mail/ssmtp: This port is abandoned for a long time, moving to …
- 11:36 Ticket #21313 (python25: upgrade to python25 @2.5.4_7 fails staging to destroot) created by
- Platform: 10.4.11 ppc Xcode 2.5 After r57388, build completes …
- 11:33 Changeset [57496] by
- mcpp: change maintainer as this port has gone through multiple maintainer …
- 10:53 Changeset [57495] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 10:44 Ticket #21193 (problems importing gtk in textext python extension in Inkscape) closed by
- invalid: So you did not install inkscape from macports? We can't support that.
- 10:13 Changeset [57494] by
- various gnome ports: remove unsupported darwin 7 platform variant.
- 09:59 Changeset [57493] by
- GNOME_2_27/gnome/gnome-utils: remove unsupported darwin 7 platform …
- 09:53 Changeset [57492] by
- Total number of ports parsed: 6181 Ports successfully parsed: 6181 …
- 09:53 Changeset [57491] by
- gnome-utils: remove unsupported darwin 7 platform variant.
- 09:52 Changeset [57490] by
- gimp2: remove unsupported darwin 7 platform variant.
- 09:44 Changeset [57489] by
- Add portfile for django-extensions
- 09:34 Changeset [57488] by
- firefox-x11: Updated to version 3.0.14
- 09:10 Ticket #21312 (tripwire install fails on snow leopard) created by
- Mac Ports setup from scratch today. installed nmap, portsentry and snort …
- 09:05 Ticket #20151 (swi-prolog does not search $prefix/lib for libraries) closed by
- fixed: r57487
- 09:05 Changeset [57487] by
- swi-prolog: Don't use parallel build. Dev timeout for #20151
- 08:53 Changeset [57486] by
- Total number of ports parsed: 6180 Ports successfully parsed: 6180 …
- 08:35 Changeset [57485] by
- science/glue: ensure binaries are in ${prefix}/bin
- 08:34 Changeset [57484] by
- Total number of ports parsed: 62 Ports successfully parsed: 62 …
- 08:31 Changeset [57483] by
- Added new port.
- 07:55 Changeset [57482] by
- Total number of ports parsed: 6179 Ports successfully parsed: 6179 …
- 07:15 Changeset [57481] by
- licenses
- 07:09 Changeset [57480] by
- science/lscsoft-deps: switch to python26
- 07:09 Changeset [57479] by
- science/glue: switch to python26
- 07:09 Changeset [57478] by
- science/lalapps: switch to python26
- 07:09 Changeset [57477] by
- science/lal: switch python build dep to python26
- 06:46 Ticket #21311 (inkscape build fails) created by
- see attached …
- 06:36 Changeset [57476] by
- GNOME_2_27/gnome: update various gnome ports to latest unstable release …
- 06:30 Changeset [57475] by
- GNOME_2_27/gnome/yelp: update to version 2.27.5.
- 06:29 Changeset [57474] by
- GNOME_2_27/gnome/metacity: update to version 2.27.1.
- 06:28 Changeset [57473] by
- GNOME_2_27/gnome/libgtkhtml3: update to version 3.27.92.
- 06:27 Changeset [57472] by
- GNOME_2_27/gnome/gnome-user-docs: update to version 2.27.2.
- 06:26 Changeset [57471] by
- GNOME_2_27/gnome/gnome-media: update to version 2.27.91.
- 06:25 Changeset [57470] by
- GNOME_2_27/gnome/gedit: update to vresion 2.27.6.
- 06:24 Changeset [57469] by
- GNOME_2_27/gnome/gcalctool: update to version 5.27.92.
- 06:23 Changeset [57468] by
- GNOME_2_27/comms/telepathy-mission-control: update to version 5.2.3.
- 06:22 Changeset [57467] by
- telepathy-farsight: update to version 0.0.11.
- 06:20 Changeset [57466] by
- GNOME_2_27/gnome/at-spi: update to version 1.27.92.
- 06:20 Changeset [57465] by
- GNOME_2_27/gnome/vte: update to version 0.21.5.
- 06:09 Changeset [57464] by
- removed myself from maintainer
- 05:59 Ticket #21310 (jekyll build failure) created by
- File "syntax.ml", line 1, characters 0-1: Error: Corrupted compiled …
- 05:49 Ticket #21309 (dom4j fails to build on Snow Leopard) created by
-
- 05:36 Ticket #21308 (java_memcached fetch fails) created by
- Subversion check out failed
- 05:25 Ticket #21307 (servlet24-api build failure) created by
- servlet24-api failed to build
- 05:12 Ticket #21306 (jabberd fails to build) created by
- missing DNS symbols: […]
- 05:01 Ticket #21301 (Portfile for libsqlitewrapped) closed by
- fixed: Committed in r57463.
- 05:01 Changeset [57463] by
- create sqlitewrapped, ticket #21301
- 04:56 Changeset [57462] by
- iozone: license
- 04:55 Ticket #21303 (awesome-3.3.4 update) closed by
- fixed: Committed in r57461.
- 04:55 Changeset [57461] by
- update version and add patch, ticket #21303. set revision to 0 for new …
- 04:45 Changeset [57460] by
- glew: license
- 04:30 Ticket #21305 (spin 5.2.0: update to 5.2.2, fix problem with parallel build) created by
- The attached file brings spin from 5.2.0 to 5.2.2; also spin sometimes …
- 03:08 Changeset [57459] by
- dnsupdate: buildfix, remove references to dnsupdate27, add license
- 02:52 Changeset [57458] by
- allegro: remove -Wno-long-double flag (see #21304)
- 02:45 Ticket #21302 (doxygen on 10.6 compiling error) closed by
- duplicate: Please search open tickets before filing new ones: #20415
- 02:44 Ticket #21304 (allegro build error) created by
- [ tested on Snow Leopard + XCode 3.2 ] […]
- 01:37 Changeset [57457] by
- update xcode portgroup
- 01:22 Changeset [57456] by
- transfig: remove unnecessary distfiles line
- 01:22 Changeset [57455] by
- transfig: don't hardcode version in master_sites
- 01:18 Changeset [57454] by
- imlib2: remove duplicate libpng dependency
- 01:17 Ticket #21303 (awesome-3.3.4 update) created by
- The following Portfile diff updates the awesome window manager package to …
- 01:17 Changeset [57453] by
- Remove autoconf, automake and libtool build dependencies from ports using …
- 01:09 Changeset [57452] by
- dnsupdate27: nuke, only works on OS X <= 10.3
- 01:07 Changeset [57451] by
- bazaar: whitespace, license, remove darwin 7
- 01:06 Changeset [57450] by
- Change deprecated livecheck.check to livecheck.type in portgroups See …
- 01:04 XcodeVersionInfo edited by
- (diff)
- 00:57 Changeset [57449] by
- group/x11font-1.0.tcl - now with livecheck.type
- 00:53 XcodeVersionInfo edited by
- (diff)
- 00:53 Ticket #21302 (doxygen on 10.6 compiling error) created by
- Got a compiling error on snow leopard. see appended file for aoutput-
- 00:49 Changeset [57448] by
- Switch "use_autoconf yes" "autoconf.cmd autoreconf" to "use_autoreconf …
- 00:49 Changeset [57447] by
- python/py*-svn - add pysvn to description so it can be found with search
- 00:46 Changeset [57446] by
- archway: whitespace, license
- 00:46 Changeset [57445] by
- New port - python/py26-svn, Python Subversion Extension
- 00:36 Changeset [57444] by
- libusb: license
- 00:34 Ticket #21301 (Portfile for libsqlitewrapped) created by
- A C++ wrapper for sqlite3
- 00:14 Changeset [57443] by
- fixes
- 00:02 Changeset [57442] by
- Version bump to 3.2.3.
09/10/09:
- 23:58 Changeset [57441] by
- realpath only allocates memory for you on 10.6
- 23:54 Changeset [57440] by
- update linkage
- 23:49 Changeset [57439] by
- warnings--
- 23:42 Changeset [57438] by
- consolidate
- 23:37 Changeset [57437] by
- simplify
- 23:24 Changeset [57436] by
- add realpath command to try to fix #21082
- 22:52 Ticket #21300 (zlib: Unable to open port: can't read "configure.cc_archflags": no such ...) created by
- Mac OS X Version 10.5.8. After selfupdate to 1.8.0, I am having trouble …
- 22:40 Ticket #21299 (Request for openmaintainer and bug fixes) created by
- Given I maintain the ice-cpp port which directly links against mcpp, this …
- 21:09 Changeset [57435] by
- Version bump to 0.19.1
- 21:09 Changeset [57434] by
- Version bump to 0.19.1
- 21:08 Changeset [57433] by
- Version bump to 0.19.1
- 21:03 Changeset [57432] by
- Version bump to 0.18.1
- 21:03 Changeset [57431] by
- Version bump to 0.18.1
- 21:02 Changeset [57430] by
- Version bump to 0.18.1
- 20:31 Changeset [57429] by
- sqsh: remove unnecessary extract.suffix line
- 20:30 Ticket #21298 (nvi 1.81.6_0 fails to build because of isblank macro) created by
- nvi build busted on x86 10.5.8: […] This may only happen if the …
- 20:29 Changeset [57428] by
- Version bump to 0.45
- 20:28 Changeset [57427] by
- Version bump to 0.45
- 20:28 Changeset [57426] by
- Version bump to 0.45
- 20:27 Changeset [57425] by
- Version bump to 2.1.6
- 20:19 Changeset [57424] by
- xrandr: Bump to 1.3.2
- 20:08 Ticket #21297 (port command deletes my /opt symlink) closed by
- duplicate: Duplicate of #21082.
- 20:01 Changeset [57423] by
- tmux: woot, bsd license
- 19:55 Ticket #21297 (port command deletes my /opt symlink) created by
- My /opt is a symlink to a directory on another disk. After upgrading to …
- 19:39 Ticket #21116 (startup-notification-0.10 needs dependency on xorg-xcb-util) closed by
- fixed: Committed revision r57422. Thanks for the report.
- 19:39 Changeset [57422] by
- Added dependency on xorg-xcb-util. (#21116)
- 19:29 Changeset [57421] by
- Fix port lint for py26-south : Warning: Using deprecated option …
- 19:26 Changeset [57420] by
- Add portfile for South, a python-based database migration solution for …
- 19:25 Ticket #21103 (sqsh 2.1.5 enable +universal) closed by
- fixed: Patched in revision 57419
- 19:23 Changeset [57419] by
- Pass LDFLAGS through to makefile, closing ticket #21103
- 19:17 Changeset [57418] by
- Use worksrcpath in place of workpath/worksrcdir.
- 18:46 Ticket #21247 (Port sysutils/bacula fails during staging on r56483 on 10.6) closed by
- fixed: I have SL at home. Fixed in r57417. Thanks for the report.
- 18:45 Changeset [57417] by
- Fix destroot on Snow Leopard. (#21247)
- 18:28 Changeset [57416] by
- par: maintainers update
- 18:27 Changeset [57415] by
- xcb: maintainers update
- 18:27 Changeset [57414] by
- xorg-*: Removing duplicate entries from depends_build
- 18:15 Changeset [57413] by
- bdftopcf: remove duplicate dependency
- 18:14 Changeset [57412] by
- cdparanoia: remove duplicate dependency
- 18:00 Changeset [57411] by
- New upstream 10.5.3.0 release of Derby.
- 17:25 Changeset [57410] by
- xtram, xorg-libXTrap, and lzo - remove now-redundant build dependencies on …
- 17:14 Ticket #16596 (php5: undefined symbols _executor_globals _sapi_globals _compiler_globals ...) closed by
- duplicate: Replying to ryandesign@…: > That user had installed a variety …
- 17:02 Ticket #21296 (ocaml related ports cannot be built in parallel) created by
- Most ocaml related ports cannot be built in parallel, as they implicitly …
- 14:49 Ticket #21295 (HDF5-DIAG: Error detected in HDF5 (1.8.3)) created by
- On Snow Leopard, using MacPorts 1.8.0 and a syncd port tree, I'm running …
- 14:33 Ticket #21294 (Enable cffi support in swig?) created by
- Swig 1.3.40 supports cffi, but the port does not have a variant for it, …
- 14:09 Ticket #21293 (finch doesn't compile in 1.8) created by
- see attachment
- 13:49 Ticket #21290 (py25-pygraphviz build fails on snow leopard) closed by
- fixed: Updated to version 0.99.1 in r57409 which fixes this issue.
- 13:48 Ticket #21291 (py25-wxpython faild on snow leopard) closed by
- duplicate: #20235
- 13:47 Changeset [57409] by
- python/py25-pygraphviz - version update to 0.99.1, which fixes #21290
- 13:47 Ticket #21292 (gnuradio-core does not compile on snow leopard) created by
- Snow leopard, using Xcode 3.2 sudo port install gnuradio produces the …
- 13:34 Ticket #21291 (py25-wxpython faild on snow leopard) created by
- ---> Computing dependencies for py25-wxpython ---> Building libsdl …
- 13:20 Ticket #21290 (py25-pygraphviz build fails on snow leopard) created by
- sudo port install py25-pygraphviz haves errors: […]
- 12:55 Ticket #21289 (Version update for libdc1394) closed by
- fixed: Committed in r57408.
- 12:55 Changeset [57408] by
- updated version, ticket #21289
- 12:53 Ticket #21284 (emacs-22.3 +x11 missing dependency) closed by
- fixed: Committed in r57407.
- 12:52 Changeset [57407] by
- add missing dependency, ticket #21284
- 12:51 Ticket #21289 (Version update for libdc1394) created by
- New Version 2.1.2
- 12:34 Ticket #21288 (eibd missing dependency to libxml2) closed by
- fixed: Committed in r57406.
- 12:33 Changeset [57406] by
- add missing dependency, ticket #21288
- 12:28 Ticket #21288 (eibd missing dependency to libxml2) created by
- […]
- 12:27 Ticket #21287 (add a py26-igraph port) closed by
- fixed: Created in r57405.
- 12:27 Changeset [57405] by
- created py26-igraph, ticket #21287
- 12:17 Ticket #21287 (add a py26-igraph port) created by
- It'd be nice to use igraph in python2.6
- 12:17 Ticket #21286 (ppl build fails on Leopard) created by
- On an Intel Mac with Leopard 10.5.8 and Xcode 3.1.3 my ppl upgrade from …
- 12:14 Ticket #21272 (poppler + quartz fails) closed by
- fixed: Alright, I added gtk-doc to atk in r57404.
- 12:14 Changeset [57404] by
- add missing dependency on gtk-doc, ticket #21272
- 11:39 Changeset [57403] by
- GNOME_2_27/gnome: lint (trailing white space).
- 11:35 Changeset [57402] by
- squid3: update to 3.0.STABLE19
- 11:33 Changeset [57401] by
- GNOME_2_27: remove redundant build dependencies that are now provided …
- 11:18 Changeset [57400] by
- various: remove redundant build dependencies that are now provided …
- 11:01 Ticket #21285 (gtk2 + quartz fails) created by
- Tiger, PPC […]
- 10:45 Changeset [57399] by
- GNOME_2_27: residual property change from r57397.
- 10:42 Changeset [57398] by
- Total number of ports parsed: 62 Ports successfully parsed: 62 …
- 10:27 Ticket #21284 (emacs-22.3 +x11 missing dependency) created by
- When installing from scratch, the x11 variant of emacs-22.3 fails to link, …
- 10:26 Changeset [57397] by
- GNOME_2_27: merge r57375 from trunk (replace livecheck.check with …
- 10:22 Ticket #21126 (duplicity-0.5.18 Error staging to destroot) closed by
- fixed: Fixed in r57396
- 10:21 Changeset [57396] by
- resolve ticket #21126
- 10:01 Ticket #21283 (mercurial 1.3.1 on 10.6 fails to run) created by
- I get the following when trying to run mercurial on 10.6: […]
- 09:59 Ticket #21282 (glib2 and glib2-devel conflict with e2fsprogs) created by
- glib2 and glib2-devel cant build on OSX snow leopard. […]
- 09:49 Ticket #21281 (php4, php52, php5, php5-devel: +apache2 +fastcgi fails to install and php5 ...) created by
- OS:Mac OS X 10.6 xcode:3.2 apache2.2.13_2 build with …
- 09:48 Ticket #21280 (emacs +x11 requires xorg-libXmu) created by
- When trying to install emacs with the x11 extension using the command port …
- 09:43 Ticket #21277 (Snow Leopard: NCURSES_OPAQUE disables ncurses) closed by
- invalid: This not a MacPorts issue. File a radar.
- 09:21 Changeset [57395] by
- GNOME_2_27/x11/pango: delete pango from test repository, use trunk …
- 09:13 Changeset [57394] by
- gmime: update to version 2.4.9.
- 08:38 Ticket #21255 (MacPorts won't install in 10.6.) closed by
- invalid: Ugh. That domain squatter again. As the page you apparently inadvertently …
- 08:12 Changeset [57393] by
- update version
- 08:05 Ticket #21279 (openldap update to 2.4.19?) created by
- In openldap official site,openldap has upgraded to 2.4.18(Release) and …
- 07:33 Ticket #21278 (wxmaxima starts but maxima process terminates immediately) created by
- Is it just me? :-) I rebuilt wxMaxima, and started it. The status bar …
- 07:06 Ticket #21268 (Python_select fails for a non-standard framework path) closed by
- fixed: Fixed by r57391 and r57392.
- 07:06 Changeset [57392] by
- sysutils/python_select: Use select::install, which replaces some variables …
- 07:04 Changeset [57391] by
- PortGroup select: New namespace and a helper function (which is meant to …
- 06:54 Changeset [57390] by
- sysutils/python_select: Whitespace only, tabs to spaces
- 06:48 Ticket #21277 (Snow Leopard: NCURSES_OPAQUE disables ncurses) created by
- I think "curses" library is not a negligible library, so I think this …
- 06:43 Ticket #20704 (python25 violates the layout of the ports-filesystems) closed by
- worksforme: Assuming this is working as designed in 1.8.
- 06:42 Changeset [57389] by
- fix path inside archive
- 06:38 Ticket #18449 (python25 won't install on Mac OS X 10.6 (10A261)) closed by
- fixed: Fixed as much as it will likely ever be in r57388 (upstream doesn't intend …
- 06:23 Changeset [57388] by
- python25: backport enough 64-bit fixes from 2.6 to get a reasonable Mac …
- 05:59 Ticket #21276 (dnsmasq launch script missing) created by
- dnsmasq is delivered apparently missing a shell script needed to launch …
- 05:55 Ticket #21275 (ice-cpp segfault during build) created by
- ice-cpp segfaulted while running make on the HTML references (index.html …
- 05:48 Ticket #21274 (apachetop needs updated) created by
- As it stands, apachetop fails to build with an error about extra …
- 05:13 Ticket #21273 ([Port Abandoned] emacs) created by
- Current Timeouts: #20486 Previous Timeouts: #18358 #18240 #17070 Last …
- 04:54 Changeset [57387] by
- adding tar.bz2 of macfuse 2.0.3
- 04:23 Ticket #21272 (poppler + quartz fails) created by
- Mac OS X 10.4.11, PPC […]
- 04:02 Ticket #21271 ([Port Abandoned] Shiira2) closed by
- fixed: r57386
- 04:02 Ticket #21270 ([Port Abandoned] portaudio) closed by
- fixed: r57386
- 04:02 Ticket #21269 ([Port Abandoned] SSHKeychain) closed by
- fixed: r57386
- 04:02 Changeset [57386] by
- SSHKeychain, Shiira2, portaudio: change to nomaintainer, see #21269 #21270 …
- 03:15 Changeset [57385] by
- fix leaks
- 03:07 Changeset [57384] by
- Total number of ports parsed: 6176 Ports successfully parsed: 6176 …
- 02:46 Changeset [57383] by
- finish converting to C - leaks like crazy though
- 02:42 Changeset [57382] by
- Remove imake dependency from ports using "use_xmkmf yes" because MacPorts …
- 02:39 Changeset [57381] by
- follow upstream upgrades to version 4.3 (latest)
- 02:03 Changeset [57380] by
- Remove subversion dependency from ports using "fetch.type svn" because …
- 02:01 Ticket #21271 ([Port Abandoned] Shiira2) created by
- I'm abandoning this port because I don't use MacPorts anymore. The …
- 01:56 Ticket #21270 ([Port Abandoned] portaudio) created by
- I'm abandoning the portaudio port because I don't use MacPorts anymore. …
- 01:55 Changeset [57379] by
- Total number of ports parsed: 6176 Ports successfully parsed: 6176 …
- 01:50 Ticket #21269 ([Port Abandoned] SSHKeychain) created by
- I'm abandoning this port because I don't use MacPorts anymore. The …
- 01:39 Ticket #21268 (Python_select fails for a non-standard framework path) created by
- My framework path (chosen when configuring macports) is …
- 01:38 Changeset [57378] by
- Change deprecated svn.tag to svn.revision See …
- 01:23 Changeset [57377] by
- hexdiff: add descriptions
- 01:21 Ticket #21226 (hexdiff-0.0.50) closed by
- fixed: * r57376: Port added.
- 01:20 Changeset [57376] by
- hexdiff: new port, version 0.0.50; see #21226
- 01:16 Changeset [57375] by
- Change deprecated livecheck.check to livecheck.type See …
- 00:53 Changeset [57374] by
- Total number of ports parsed: 6175 Ports successfully parsed: 6175 …
- 00:47 Changeset [57373] by
- pango-devel: update to 1.25.6
- 00:42 Changeset [57372] by
- fontconfig: update to 2.7.3
- 00:37 Changeset [57371] by
- misc
- 00:37 Changeset [57370] by
- puppet: remove unnecessary extract.suffix line
- 00:30 Changeset [57369] by
- lang/python26 - silence lint
- 00:28 Ticket #21222 (Python26 won't build universal i386 + x86_64 binaries on SL) closed by
- fixed: Fixed in r57368, thanks Josh for the simpler fix (tested on 10.5.8 Intel, …
- 00:26 Changeset [57368] by
- lang/python26 - fix +universal, ticket #21222
- 00:17 Changeset [57367] by
- leak
- 00:15 Ticket #20352 (beecrypt build failure) closed by
- fixed: In r57364 I've disabled the C++ and Java stuff while updating to 4.2.1. …
- 00:14 Ticket #21212 (Update Puppet to 0.25.0, add openmaintainer) closed by
- fixed: r57366
- 00:14 Changeset [57366] by
- puppet: maintainer update to 0.25.0 (#21212)
- 00:14 Changeset [57365] by
- cleanup
- 00:14 Ticket #21207 (beecrypt 4.1.2 build error) closed by
- fixed: Fixed in r57364.
- 00:13 Changeset [57364] by
- beecrypt: update to 4.2.1 and disable c++ and java interfaces; fixes build …
- 00:06 Changeset [57363] by
- more C
Note: See TracTimeline for information about the timeline view. | https://trac.macports.org/timeline?from=2009-09-14T20%3A53%3A59-0700&precision=second | CC-MAIN-2016-36 | refinedweb | 5,575 | 62.88 |
Print frequency of each character in a string in Python
In this tutorial, we will learn how to print the frequency of each character in a string using Python.
Frequency of each character in a string
For that, we have two methods.
- Using basic logic.
- Counter() method.
Let’s start with the first one.
if-else statement (Basic logic)
First of all, let’s take a string for which we have to find the frequency of each character.
my_string = "Nitesh Jhawar"
We will define an empty dictionary, freq_dict. This dictionary will contain each character and its frequency in the key-value pairs. For example,
freq_dict={‘N’: 1,’i’:1}.
The key represents the character and the frequency represents its respective frequency.
freq_dict = {}
Now, its time to use for loop.
for i in my_string: if i in freq_dict: freq_dict[i]=freq_dict[i] + 1 else: freq_dict[i] = 1
Here, we have used a for loop to iterate through the characters in my_string using the iterating variable i.
After that, we use the if-else statement. If i exists in our dictionary, then we increase the frequency count by 1 otherwise we will initialize the value to 1.
Finally, We need to print our dictionary.
print ("Characters with their frequencies:\n",freq_dict)
And the output will be,
Characters with their frequencies: {'N': 1, 'i': 1, 't': 1, 'e': 1, 's': 1, 'h': 2, ' ': 1, 'J': 1, 'a': 2, 'w': 1, 'r': 1}
Counter() method
In Python, we have a module named collections. It is a container that is used to store data such as a dictionary, list, etc. collections module contains a method Counter() which is also a container that stores data in the form of a dictionary i.e, elements as key and its frequency as value.
Syntax:
Counter(string_name)
Now, let’s use it.
from collections import Counter my_string = "Nitesh Jhawar" freq_dict = Counter(my_string) print ("Characters with their frequencies:\n",freq_dict)
From the collections module, we have an imported Counter method.
The dictionary returned by Counter() is stored in freq_dict. It is then printed using the print statement.
Output:
Characters with their frequencies: Counter({'h': 2, 'a': 2, 'N': 1, 'i': 1, 't': 1, 'e': 1, 's': 1, ' ': 1, 'J': 1, 'w': 1, 'r': 1})
Also, learn: | https://www.codespeedy.com/frequency-of-each-character-in-a-string-in-python/ | CC-MAIN-2019-43 | refinedweb | 376 | 64.3 |
Source
pypy / pypy / doc / cppyy_backend.rst
Backends for cppyy
The cppyy module needs a backend to provide the C++ reflection information on which the Python bindings are build. The backend is called through a C-API, which can be found in the PyPy sources in: pypy/module/cppyy/include/capi.h. There are two kinds of API calls: querying about reflection information, which are used during the creation of Python-side constructs, and making the actual calls into C++. The objects passed around are all opaque: cppyy does not make any assumptions about them, other than that the opaque handles can be copied. Their definition, however, appears in two places: in the C code (in capi.h), and on the RPython side (in capi_types.py), so if they are changed, they need to be changed on both sides.
There are two places where selections in the RPython code affect the choice (and use) of the backend. The first is in pypy/module/cppyy/capi/__init__.py:
# choose C-API access method: from pypy.module.cppyy.capi.loadable_capi import * #from pypy.module.cppyy.capi.builtin_capi import *
The default is the loadable C-API. Comment it and uncomment the builtin C-API line, to use the builtin version.
Next, if the builtin C-API is chosen, the specific backend needs to be set as well (default is Reflex). This second choice is in pypy/module/cppyy/capi/builtin_capi.py:
import reflex_capi as backend #import cint_capi as backend
After those choices have been made, built pypy-c as usual.
When building pypy-c from source, keep the following in mind. If the loadable_capi is chosen, no further prerequisites are needed. However, for the build of the builtin_capi to succeed, the ROOTSYS environment variable must point to the location of your ROOT (or standalone Reflex in the case of the Reflex backend) installation, or the root-config utility must be accessible through $PATH (e.g. by adding $ROOTSYS/bin to PATH). In case of the former, include files are expected under $ROOTSYS/include and libraries under $ROOTSYS/lib. | https://bitbucket.org/pypy/pypy/src/14525d450338826eb053f91a0e9ab6a801aca498/pypy/doc/cppyy_backend.rst?at=default | CC-MAIN-2015-40 | refinedweb | 344 | 65.83 |
import sys
# set the file name depending on the operating system
if sys.platform == 'win32':
file = r'C:\WINDOWS\system32\drivers\etc\services'
else:
file = '/etc/services'
# Create an empty dictionary
ports = dict()
# Iterate through the file, one line at a time
for line in open(file):
# Ignore lines starting with '#' and those containing only whitespace
if line[0:1] != '#' and not line.isspace():
# Extract the second field (seperated by \s+)
pp = line.split(None, 1)[1]
# Extract the port number from port/protocol
port = pp.split ('/', 1)[0]
# Convert to int, then store as a dictionary key
port = int(port)
ports[port] = None
# Give up after port 200
if port > 200: break
# Print any port numbers not present as a dictionary key
for num in xrange(1,201):
if not num in ports:
print "Unused port", num
Here is a smaller solution using Regular Expressions:
import sys,re
file = r'C:\WINDOWS\system32\drivers\etc\services' \
if sys.platform == 'win32' else '/etc/services'
found = set()
for line in open(file):
m = re.search(r'^[^#].*\s(\d+)/(tcpudp)\s',line)
if m:
port = int(m.groups()[0])
if port > 200: break
found.add(port)
print set(range(1,201)) - found | http://darkeside.blogspot.com/2010/03/ | CC-MAIN-2017-34 | refinedweb | 200 | 64.41 |
Syncfusion’s .NET Excel library allows the user to export or write to Excel in C# and VB.NET from various data sources like data tables, datasets, an employee details data feature will be helpful when you need to export data from a model to an Excel worksheet.
The Syncfusion Excel (XlsIO) library provides support to export data from a collection of objects to an Excel worksheet; this collection can be either a single collection or nested collections. Exporting data from nested collections to an Excel worksheet is helpful in maintaining the data hierarchy. In addition, Syncfusion XlsIO allows to export data with various layout and grouping options. The blog Export Data from Collection to Excel and Group It in C# clearly explains how to export data from collection to Excel worksheet and also explains the options available while exporting.
Here, let’s see how to export data from a collection of objects to an Excel worksheet. This can be achieved using .NET Excel .NET Excel or writing data to Excel in C# here.
Wrapping up
As you can see, Syncfusion .NET Excel or write Excel data to PDF, image, data table, CSV, TSV, HTML, collections of objects, ODS file format, and more.
If you are new to our .NET.
[…] on August 19, 2019by admin submitted by /u/prabakarinfo [link] [comments] No comments […]
Hello How should i used the “ExcelEngine”
it say in the error “The type or namespace name’Excel Engine’ could not be found (are you missing a using directive or an assembly reference?)”
Thanks in advance
You have not referenced a valid Syncfusion assembly in this case. You need in refer Syncfusion.XlsIO.Base and Syncfusion.Compression.Base assemblies. If you referred to these assemblies and still facing this error, please make sure whether using Syncfusion.XlsIO; is used at the usings on the program.
Kindly install valid NuGet package to resolve this error. The following link shows valid NuGet packages according to various platforms.
Please go through our getting started guide to create simple Excel files.
How can we export only database column names as excel header in excel document using asp .net core?
There is no specific functionality to export only column headers to the Excel document. But this can be achieved through a simple workaround by importing a data table with the first row, then delete the first row, and leave the header row.
Please refer to the following code example to achieve your requirement.
worksheet.ImportDataTable(table, true, 1, 1, 1, table.Columns.Count);
worksheet.DeleteRow(2);
I’m facing some problem in creating EXCEL file under LINUX thru proc export. Can you please suggest me the best way to create or export data to excel on LINUX platform itself?
Using DataTable to Excel or Collection Objects to Excel conversion, you can meet the requirement. The NuGet package Syncfusion.XlsIO.Net.Core of Syncfusion XlsIO has to be referred in the Linux Environment.
If you are still facing the issue, you can contact us by creating a support ticket..
Hi
Plz send me a sample File
The samples are available in the GitHub location as mentioned in the blog. Please refer to this link.
If you are expecting the sample for a different scenario, you can let us know.
Down loaded syncfusion.xlsio.aspnet.18.3.0.52 but finding difficulty to install on visual studio 2017
There are two ways to install the NuGet package in Visual Studio Project.
1. You have downloaded the ASP.NET WebForms NuGet package. We suggest you download the package in a folder, add this as a new package source in the Visual Studio project, and install it.
2. You can also directly install the package from nuget.org in the Visual Studio project itself.
If you still face the issue, you can contact us by creating a support ticket. We will be happy to assist you.
… [Trackback]
[…] Find More Informations here: syncfusion.com/blogs/post/6-easy-ways-to-export-data-to-excel-in-c-sharp.aspx […]
hi
I need to export an excel file to datagridview with a column for qty. Qty set as 0. on changng qty that rows to be sent to form 2 datagrid. please assist.
An example for exporting data from an Excel file to DataGridView can be downloaded from the following link.
Your second query “on changing qty that rows to be sent to form 2 datagrid” is not clear. You can try the sample and let us know in detail if you need further assistance. | https://www.syncfusion.com/blogs/post/6-easy-ways-to-export-data-to-excel-in-c-sharp.aspx | CC-MAIN-2021-17 | refinedweb | 756 | 65.73 |
Like many technologies, they seem very complicated when you begin to work with it, but once you get into it you start to hit the boundaries of its capabilities and features. Once you use a tool for some time you learn what it can and cannot do, and therefore when there is a problem you can quickly rule out or rule in that it is related or not to the given technology.
I have had some time now to learn about certificates on IIS and wanted to share some information about how you can create, export and import SSL certificates for use with testing how SSL certificates are installed and configured on IIS. One the barriers I had was getting a certificate to play with, as they usually cost money, so I avoided getting one and asking management to pay for one for training me. Nonetheless, with what I am about to explain here will help you learn some about SSL Certificates.
I initially found this article, which was the basis for my learning. I like to find articles, learn from them and build on them, adding my experiences and understandings.
One of the initially challenging task I has was finding the MAKECERT and CERTMGR executables required to make and import the certificate. It is part of the Windows SDK but am not about to tell you which version you need, I installed a few and ultimately found the EXEs that I was after.
The steps required to create and import an SSL certificate to IIS for testing are:
- Create a self-signed root authority certificate and export the private key
- Install the root certificate into the Trusted Root Certificate Store
- Make the Server Certificate for IIS
- Export the certificate as a .PFX file, include all properties and private key
- Import the certificate on IIS
Create a self-signed root authority certificate and export the private key
Certificate Authorities, companies that create real SSL certificates create paths to certificates that can have 1 or more intermediate certificates. This is done to reduce the possibility of private keys being compromised and making all certificates generated using that
private key no long trusted. By having intermediates, they can have multiple private keys and reduce the impact of this possible loss of integrity. There are certificates that do not have a CA not intermediate CAs, which carry different restrictions and technicalities.
To create the certificate and export the private key, enter the following from a Command prompt running with administrative privileges, also shown in Figure 1.
makecert -n “CN=benjamin-perkins.me” -r -sv benperkmeCA.pvk benperkmeCA.cer
Figure 1, create CA certificate and export the private key using MAKECERT
When you execute the command, you will be asked to create and enter a password for the private key. You do get the option to not have one, but don’t recommend this even for testing. With real SSL certificates, I.e. not self-signed or test ones, you would always want to have a password, so let’s try to mimic how it would work when we really do it. Figure 2 illustrates the password requests pop-ups.
Figure 2, MAKECERT password request dialog
Install the root certificate into the Trusted Root Certificate Store
You can do this using MMC and the Certificate Management console, but let’s use the command prompt first. To add the CA to the Trusted Root Certificate Store, execute the following from a Command prompt running as an Administrator and as shown in Figure 3.
certmgr.exe -add -all -c “benperkmeCA.cer” -s -r localMachine Root
Figure 3, install the CA into the Trusted Root Certificate Store using CERTMGR
Enter CERTMGR from the command or open the Certificate manager using MMC and look at the Trusted Root Certificate Authorities tab or folder. You will find the certificate present in the list and shown in Figure 4 and Figure 5.
Figure 4, view certificates from CERTMGR.exe
Figure 5, view certificates from the Certificate Manager within MMC
Make the Server Certificate for IIS
This article will focus on creating a server certificate used for HTTPS on an IIS server. You can also create client certificates using similar MAKECERT commands, but this article won’t cover that. I do plan to write another about that soon. To create the test server certificate for use with IIS, execute the following command and as shown in Figure 6. See here for instructions on how to make a SHA256 certificate.
makecert 6, create a test IIS SSL Server Certificate using MAKECERT
You will be prompted to enter the password you created in step 1, Figure 2. Once entered the IIS Server Certificate is created. Once created you can then see it in the Certificate Management Console with MMC as shown in Figure 7.
Figure 7, view the IIS SSL certificate in MMC certificate manager
I will quickly jump to IIS and show you the window you get when you want to import an SSL certificate for use. Notice in Figure 8, which is rendered from the IIS management console that it is looking for a .PFX file. However, if you look at the command run in Figure 6, a .CER file was created. To get the .PFX file for import into IIS, we need to export the certificate from the certificate management console shown in Figure 7.
Export the certificate as a .PFX file, include all properties and private key
To export the .CER file you just created as a .PFX file, right click on the certificate, All Tasks -> Export… as shown in Figure 8.
Figure 8, export the certificate from the MMC certificate manager
When the Export menu item is selected, an export wizard is run, on the first window read through the information and click the next button, the window shown in Figure 9 is rendered. Select the radio button “Yes, export the private key” and then click the next button.
Figure 9, export certificate wizard, export the private key
In the next windows, as illustrated by Figure 10, click the “Export all extended properties” check box, leave all other settings as default and click the next button. I didn’t test this one but by doing this the export likely includes the CA which will be needed when we
import the certificate into IIS. Check it out and let me know what happens, if anything when you do not check this one. Thanks in advance. If this is not the case, you might see something like that shown in Figure 14, after the certificate is imported on IIS.
Figure 10, export certificate wizard, export all extended properties
Add a password, as shown in Figure 11 and click the next button.
Figure 11, export certificate wizard, set password
Define the name and location of your .PFX file and click the next button, as shown in Figure 12.
Figure 11, export certificate wizard, save the PFX file
Complete the wizard. A message box is rendered stating that the export was successful.
Import the certificate on IIS
To install the certificate on IIS, copy the .PFX file and place it in a secure location which is accessible from the server. Then copy the .PFX file onto the server and open the IIS management console. At the server level, click on the Serve Certificates feature as shown on Figure 12.
Figure 12, the Server Certificate feature in the IIS Management console
Once in the Server Certificates feature, click on the Import… link on the Action pane, as illustrated on Figure 13, fill in the certificate details and press OK to import the certificate.
Figure 13, import the SSL certificate into IIS for binding requests to HTTPS
Once imported, you will see the certificate in the feature. You can double-click on the certificate and view the details. Click on the Certificate path and make sure the path and status show as OK. If you see something like that shown in Figure 14, you need to install the CA certificate or any intermediate certificate required for the chain or path to be valid. Recall from step 1, Figure 1 where we created the CA. To get this to work you need to export the .CER file (DER) and import it onto the IIS server using the MMC Certificate Management console.
Figure 14, Checking out the certificate status and path
Once you get the CA certificate created in step 1 of this article, open the certificate details and view the path and status, all is OK as shown in Figure 15.
Figure 15, OK path and status certificate properties
After doing this exercise, I feel comfortable with certificate and understand how to make and install them. There is lots more to learn before I start hitting the boundaries of the technology, but I know they are there and it is only a matter of time………….and effort. | https://www.thebestcsharpprogrammerintheworld.com/2014/01/09/make-your-own-ssl-certificate-for-testing-and-learning/ | CC-MAIN-2021-04 | refinedweb | 1,482 | 59.84 |
Sample problem:
How can I select rows from a
DataFrame based on values in some column in Pandas?
In SQL, I would use:
SELECT * FROM table WHERE colume_name = some_value
I tried to look at Pandas’ documentation, but I did not immediately find the answer.
How to select rows from a DataFrame based on column values? Answer
which results in a Truth value of a Series is an ambiguous error.
Answer #2:
There are several ways to select rows from a Pandas dataframe:
- Boolean indexing (
df[df['col'] == value] )
- Positional indexing (
df.iloc[...])
- Label index:
import pandas as pd, numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}).
mask = df['A'] == 'foo'
We can then use this mask to slice or index the data frame
df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14.
mask = df['A'] == 'foo' pos = np.flatnonzero(mask) df.iloc[pos] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14
3. Label indexing
Label indexing can be very handy, but in this case, we are again doing more work for no benefit
df.set_index('A', append=True, drop=False).xs('foo', level=1) A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14
4.
df.query() API
pd.DataFrame.query is a very elegant/intuitive way to perform this task, but is often slower. However, if you pay attention to the timings below, for large data, the query is very efficient. More so than the standard approach and of similar magnitude as my best suggestion.
df.query('A == "foo"') A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14
My preference is to use the
Boolean
mask
Actual improvements can be made by modifying how we create our
Boolean
mask.
mask alternative 1 Use the underlying NumPy array and forgo the overhead of creating another
pd.Series
mask = df['A'].values == 'foo'
I’ll show more complete time tests at the end, but just take a look at the performance gains we get using the sample data frame. First, we look at the difference in creating the
mask
%timeit mask = df['A'].values == 'foo' %timeit mask = df['A'] == 'foo' 5.84 µs ± 195 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 166 µs ± 4.45 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Evaluating the
mask with the NumPy array is ~ 30 times faster. This is partly due to NumPy evaluation often being faster. It is also partly due to the lack of overhead necessary to build an index and a corresponding
pd.Series object.
Next, we’ll look at the timing for slicing with one
mask versus the other.
mask = df['A'].values == 'foo' %timeit df[mask] mask = df['A'] == 'foo' %timeit df[mask] 219 µs ± 12.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 239 µs ± 7.03 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The performance gains aren’t as pronounced. We’ll see if this holds up over more robust testing.
mask alternative 2 We could have reconstructed the data frame as well. There is a big caveat when reconstructing a dataframe—you must take care of the
dtypes when doing so!
Instead of
df[mask] we will do this
pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes)
If the data frame is of mixed type, which our example is, then when we get
df.values the resulting array is of
dtype
object and consequently, all columns of the new data frame will be of
dtype
object. Thus requiring the
astype(df.dtypes) and killing any potential performance gains.
%timeit df[m] %timeit pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes) 216 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.43 ms ± 39.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
However, if the data frame is not of mixed type, this is a very useful way to do it.
Given
np.random.seed([3,1415]) d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE')) d1 A B C D E 0 0 2 7 3 8 1 7 0 6 8 6 2 0 2 0 4 9 3 7 3 2 4 3 4 3 6 7 7 4 5 5 3 7 5 9 6 8 7 6 4 7 7 6 2 6 6 5 8 2 8 7 5 8 9 4 7 6 1 5
%%timeit mask = d1['A'].values == 7 d1[mask] 179 µs ± 8.73 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Versus
%%timeit mask = d1['A'].values == 7 pd.DataFrame(d1.values[mask], d1.index[mask], d1.columns) 87 µs ± 5.12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
We cut the time in half.
mask alternative 3
@unutbu also shows us how to use
pd.Series.isin to.
mask = df['A'].isin(['foo']) df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14
However, as before, we can utilize NumPy to improve performance while sacrificing virtually nothing. We’ll use
np.in1d
mask = np.in1d(df['A'].values, ['foo']) df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14.
res.div(res.min()) 10 30 100 300 1000 3000 10000 30000 mask_standard 2.156872 1.850663 2.034149 2.166312 2.164541 3.090372 2.981326 3.131151 mask_standard_loc 1.879035 1.782366 1.988823 2.338112 2.361391 3.036131 2.998112 2.990103 mask_with_values 1.010166 1.000000 1.005113 1.026363 1.028698 1.293741 1.007824 1.016919 mask_with_values_loc 1.196843 1.300228 1.000000 1.000000 1.038989 1.219233 1.037020 1.000000 query 4.997304 4.765554 5.934096 4.500559 2.997924 2.397013 1.680447 1.398190 xs_label 4.124597 4.272363 5.596152 4.295331 4.676591 5.710680 6.032809 8.950255 mask_with_isin 1.674055 1.679935 1.847972 1.724183 1.345111 1.405231 1.253554 1.264760 mask_with_in1d 1.000000 1.083807 1.220493 1.101929 1.000000 1.000000 1.000000 1.144175
You’ll notice that the fastest times seem to be shared between
mask_with_values and
mask_with_in1d.
res.T.plot(loglog=True)
Functions
def mask_standard(df): mask = df['A'] == 'foo' return df[mask] def mask_standard_loc(df): mask = df['A'] == 'foo' return df.loc[mask] def mask_with_values(df): mask = df['A'].values == 'foo' return df[mask] def mask_with_values_loc(df): mask = df['A'].values == 'foo' return df.loc[mask] def query(df): return df.query('A == "foo"') def xs_label(df): return df.set_index('A', append=True, drop=False).xs('foo', level=-1) def mask_with_isin(df): mask = df['A'].isin(['foo']) return df[mask] def mask_with_in1d(df): mask = np.in1d(df['A'].values, ['foo']) return df[mask]
Testing
res = pd.DataFrame( index=[ 'mask_standard', 'mask_standard_loc', 'mask_with_values', 'mask_with_values_loc', 'query', 'xs_label', 'mask_with_isin', 'mask_with_in1d' ], columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000], dtype=float ) for j in res.columns: d = pd.concat([df] * j, ignore_index=True) for i in res.index:a stmt = '{}(d)'.format(i) setp = 'from __main__ import d, {}'.format(i) res.at[i, j] = timeit(stmt, setp, number=50)
Special Timing
Looking at the special case when we have a single non-object
dtype for the entire data frame.
Code Below
spec.div(spec.min()) 10 30 100 300 1000 3000 10000 30000 mask_with_values 1.009030 1.000000 1.194276 1.000000 1.236892 1.095343 1.000000 1.000000 mask_with_in1d 1.104638 1.094524 1.156930 1.072094 1.000000 1.000000 1.040043 1.027100 reconstruct 1.000000 1.142838 1.000000 1.355440 1.650270 2.222181 2.294913 3.406735
Turns out, reconstruction isn’t worth it past a few hundred rows.
spec.T.plot(loglog=True)
Functions
np.random.seed([3,1415]) d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE')) def mask_with_values(df): mask = df['A'].values == 'foo' return df[mask] def mask_with_in1d(df): mask = np.in1d(df['A'].values, ['foo']) return df[mask] def reconstruct(df): v = df.values mask = np.in1d(df['A'].values, ['foo']) return pd.DataFrame(v[mask], df.index[mask], df.columns) spec = pd.DataFrame( index=['mask_with_values', 'mask_with_in1d', 'reconstruct'], columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000], dtype=float )
Testing
for j in spec.columns: d = pd.concat([df] * j, ignore_index=True) for i in spec.index: stmt = '{}(d)'.format(i) setp = 'from __main__ import d, {}'.format(i) spec.at[i, j] = timeit(stmt, setp, number=50)
Answer #3:
The Pandas equivalent to
select * from table where column_name = some_value
is
table[table.column_name == some_value]
Multiple conditions:
table[(table.column_name == some_value) | (table.column_name2 == some_value2)]
or
table.query('column_name == some_value | column_name2 == some_value2')
Code example
import pandas as pd # Create data set d = {'foo':[100, 111, 222], 'bar':[333, 444, 555]} df = pd.DataFrame(d) # Full dataframe: df # Shows: # bar foo # 0 333 100 # 1 444 111 # 2 555 222 # Output only the row(s) in df where foo is 222: df[df.foo == 222] # Shows: # bar foo # 2 555 222
In the above code it is the line
df[df.foo == 222] that gives the rows based on the column value,
222 in this case.
Multiple conditions are also possible:
df[(df.foo == 222) | (df.bar == 444)] # bar foo # 1 444 111 # 2 555 222
But at that point I would recommend using the query function, since it’s less verbose and yields the same result:
df.query('foo == 222 | bar == 444')
Answer #4:
I find the syntax of the previous answers to be redundant and difficult to remember. Pandas introduced the
query() method in v0.13 and I much prefer it. For your question, you could do
df.query('col == val')
Reproduced.303746 8 0.116822 0.364564 0.454607 9 0.986142 0.751953 0.561512 # pure python In [170]: df[(df.a < df.b) & (df.b < df.c)] Out[170]: a b c 3 0.011763 0.022921 0.244186 8 0.116822 0.364564 0.454607 # query In [171]: df.query('(a < b) & (b < c)') Out[171]: a b c 3 0.011763 0.022921 0.244186 8 0.116822 0.364564 0.454607
You can also access variables in the environment by prepending an
@.
exclude = ('red', 'orange') df.query('color not in @exclude')
Answer #5:
More flexibility using
.query with pandas >= 0.25.0:
August 2019 updated answer
Since pandas >= 0.25.0 we can use the
query method to filter dataframes with pandas methods and even column names that have spaces. Normally the spaces in column names would give an error, but now we can solve that using a backtick (`) – see GitHub:
# Example dataframe df = pd.DataFrame({'Sender email':['ex@example.com', "reply@shop.com", "buy@shop.com"]}) Sender email 0 ex@example.com 1 reply@shop.com 2 buy@shop.com
Using
.query with method
str.endswith:
df.query('`Sender email`.str.endswith("@shop.com")')
Output
Sender email 1 reply@shop.com 2 buy@shop.com
Also we can use local variables by prefixing it with an
@ in our query:
domain = 'shop.com' df.query('`Sender email`.str.endswith(@domain)')
Output
Sender email 1 reply@shop.com 2 buy@shop.com
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/how-to-select-rows-from-a-dataframe-based-on-column-values-pandas-answered/ | CC-MAIN-2022-40 | refinedweb | 2,046 | 77.53 |
Posted August 5, 2013
By Rob Gravelle
A lot of work has gone into making MySQL 5.6 faster than its predecessors. In my recent New Query Optimizer Features in MySQL 5.6 article I covered one particular optimization to the processing of subqueries. Another improvement comes in the form of the memcached plugin for InnoDB. It uses a daemon that automatically stores and retrieves data from InnoDB tables, without the overhead of SQL. When used in conjunction with the Query Cache, latency is reduced while throughput is increased. In today's article, we'll be taking a look at some of the uses and benefits offered by the new MySQL 5.6 memcached plugin.
Although new for MySQL, memcached is not a recent development. It was originally developed by Brad Fitzpatrick for the LiveJournal project back in 2003. His intention was to create a distributed memory object caching system for speeding up dynamic web applications. It alleviates the load on the database by caching both text and serializable object data in memory using a key-value lookup scheme.
Sorry Windows users, the memcached Daemon Plugin is only supported on Linux, Solaris, and OS X platforms at this time.
You must have libevent installed, since it is required by memcached. The libevent library is not installed for you by the MySQL installer, so you should download and install it before setting up the memcached plugin. Make sure that it's version 1.4.3 or later.
You can build from source or use a MySQL installer. I'll go over the latter here. For instructions on building from source, refer to the MySQL docs.
The memcached installation created by the MySQL installer includes two libraries for memcached and the InnoDB plugin for memcached. They are lib/plugin/libmemcached.so and lib/plugin/innodb_engine.so.
Once the installation is complete, run the configuration script, scripts/innodb_memcached_config.sql, to install the necessary tables used by memcached behind the scenes:
mysql: source MYSQL_HOME/share/innodb_memcached_config.sql
The memcached plugin will reside in the base plugin directory (/usr/lib64/mysql/plugin/libmemcached.so) that can be stopped and started at runtime. To activate the daemon plugin, use the install plugin statement:
mysql> install plugin daemon_memcached soname 'libmemcached.so';
It is possible to connect directly and issue some command using a utility like telnet:
$ telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
set mykey 0 0 10
Test|Value
STORED
The set command tells memcached that we want to store a value.
"mykey" is the key to store it under.
The first 0 is the flags to use
The second 0 is the expiration TTL
The 10 tells it the length of the string that we're going to store.
"Test|Value" is the value to store.
get a11
VALUE a11 0 10
Test|Value
END
quit
Normally, the memcached data would be lost when you restart the server, so you would have to rely on application logic to load the data back into memory when memcached was restarted. In MySQL this process is automated by the memcached integration. All you have to do is run the install plugin statement to start the daemon_memcached plugin again.
Here is a more comprehensive list of storage commands that you'll use the most:
Since you'll be issuing memcached commands from your application code it's only fitting to demonstrate how to do that in a couple of different languages. The first two samples are of PHP code, while the last one is Python.
PHP requires that some configuration options be set to use memcached. These are located in the /etc/php.d/memcache.ini file:
; -- -- - Options to use the memcached session handler
; Use memcached as a session handler
session.save_handler=memcache
; Defines a comma separated of server urls to use for session storage
session.save_path="tcp://localhost:11211"
The following code snippet demonstrates how objects and other non-scalar data types must be serializable. In this case, the object contains a string and a numeric property type. The object is first saved to the cache using the set command and then retrieved using get:
<?php
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die ("Could not connect");
$version = $memcache->getVersion();
echo "Server's version: ".$version."<br/>\n";
$tmp_object = new stdClass;
$tmp_object->str_attr = 'This is a test';
$tmp_object->num_attr = 2112;
$memcache->set('testkey', $tmp_object, false, 10) or die ("Failed to save data at the server");
echo "Store data in the cache (data will expire in 10 seconds)<br/>\n";
$get_result = $memcache->get('testkey');
echo "Data from the cache:<br/>\n";
var_dump($get_result);
?>
Memcached data can be saved for the duration of the session as follows:
<?php
$session_save_path = "tcp://$host:$port?persistent=1&weight=2&timeout=2&retry_interval=10, ,tcp://$host:$port ";
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', $session_save_path);
?>
Here's an example of Python code that retrieves favorite albums by number of listens. Python is nice to use because it automatically serializes data using cPickle/pickle. Then, when you load the data back from memcached, you can use the object directly:
import sys
import MySQLdb
import memcache
memc = memcache.Client(['127.0.0.1:11211'], debug=1);
try:
conn = MySQLdb.connect (host = "localhost",
user = "robg",
passwd = "password01",
db = "myalbums")
except MySQLdb.Error, e:
print "Error %d: %s" % (e.args[0], e.args[1])
sys.exit (1)
favoritealbums = memc.get('top5films')
if not favoritealbums:
cursor = conn.cursor()
cursor.execute('select album_id, artist, title from album order by no_of_listens desc limit 5')
rows = cursor.fetchall()
memc.set('top5albums',rows,60)
print "Updated memcached with MySQL data"
else:
print "Loaded data from memcached"
for row in favoritealbums:
print "%s, %s" % (row[0], row[1])
Running the program would yield something like this:
shell> python memc_python.py
Loaded data from memcached
34, Iron Maiden Powerslave
22, Rush Moving Pictures
7, Abba Abba Gold
109, Allen Lande Showdown
56, Ivory Knight Unconscience
Memcached keys must be unique, so make sure your database schema makes good use of primary keys and unique constraints.
If you are combining multiple char column values into a single memcached item value, be careful that the separator that you use does not appear in the column values! If there is any doubt whatsoever, a common solution is to escape "actual" occurrences of the character and remove the escape character when fetching the data. An example would be adding a second quote to double quotes (""), as done in Visual Basic.
The queries that best lend themselves to memcached lookups are those that feature a single WHERE clause, using an = or IN operator. Memcached doesn't work as well with WHERE clauses that contain the <, >, BETWEEN, or LIKE operators because it can't easily scan through the keys or associated values. For that reason, it's usually better to run those queries on the database every time.
Memcached is a viable option for companies and individuals wishing to speed up execution of their online MySQL-backed applications. The challenge is that it's a solution that overlaps both database and application tiers. Therefore, unless you are multi-talented, you may have to enlist the services of someone who understands both.
See all articles by Rob Gravelle
MySQL Archives
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
Subject
(Maximum characters: 1200). You have characters left. | http://www.databasejournal.com/features/mysql/using-the-innodb-memcached-plugin-with-mysql-5.6.html | CC-MAIN-2016-40 | refinedweb | 1,233 | 56.96 |
Created on 2013-07-25 23:57 by Zero, last changed 2017-07-17 19:43 by terry.reedy.
The following interactive session shows that iterables are not detected properly by the `collections.abc.Iterable` class.
>>> class IsIterable:
def __init__(self, data):
self.data = data
def __getitem__(self, key):
return self.data[key]
>>> is_iterable = IsIterable(range(5))
>>> for value in is_iterable:
value
0
1
2
3
4
>>> from collections.abc import Iterable
>>> isinstance(is_iterable, Iterable)
False
The
Step.
If my program needed to know if an object is iterable, it would be tempting to define and call the following function instead of using collections.abc.Iterable:
def iterable(obj):
try:
iter(obj)
except TypeError:
return False
return True
Something tells me that is not what the author of collections.abc.Iterable intended.
That.
Maybe this would have been more appropriate as a question on StackOverflow:
What is the proper way of asking if an object is iterable if it does not support the iterator protocol but does support the old getitem protocol? One might argue that it is better to ask for forgiveness rather than permission, but that does not really answer the question.
My impression of collections.abc.Iterable is that programmers can use it to ask if an object is iterable. Some argue that it is better to ask for forgiveness rather that permission and would suggest pretending that an object is iterable until it is proven otherwise. However, what do we use collections.abc.Iterable’s for then?
The true question is really, “What is the proper way of asking if an object is iterable if it does not support the iterator protocol but does support the old getitem protocol?” More generically, how can you ask an object if it supports ANY iteration protocol? The query probably should have been posted on StackOverflow and not here.
This may not be a problem with collections.abc.Iterable, and thus the issue should be closed. However, the previous question remains, and it is apparent that it cannot be answered with the abstract class as it currently is. Maybe the solution is to just ask for forgiveness where appropriate.
“What :)
I.)
No,.
Of?
The wold "iterable" just means "can be looped over". There are many ways to implement this capability (two-arg form of iter(), the __iter__ method, generators, __getitem__ with integer indexing, etc).
collections.abc.Iterable is more limited and that is okay. There is nothing that compels us to break an API has been around and successful for 26+ years. That clearly wasn't Guido's intention when he added collections.abc.Iterable which is just a building block for more complex ABCs.
I recommend closing this. We're not going to kill a useful API and break tons of code because of an overly pedantic reading of what is allowed to be iterable.
However we can make a minor amendment to the glossary entry to mention that there are multiple ways of becoming iterable.
Stephen, the try/except is a reasonable way to recognize an iterable. The ABCs are intended to recognize only things that implement a particular implementation or that are registered. It is not more encompassing or normative than that.
Ray".
>.
Yes,.
"things with __getitem__ are clearly iterable"
This is false. IMO it should be fixed in the glossary. It should say "or __getitem__ method implementing sequence semantics". That plus the addition to the Iterable docs will close this issue.
The | http://bugs.python.org/issue18558 | CC-MAIN-2017-34 | refinedweb | 576 | 57.16 |
Question:
Guys, I have a windows form with a panel control and inside the panel control are several other controls with a System.Windows.Forms.Tooltip attached to them. How can I iterate through each tooltip and set the Active property of the tooltip to false? Tooltips, unlike other controls, are not actually controls. So I had this:
foreach (System.Windows.Forms.Control ctrl in this.pnlControl.Controls) { if (ctrl.Name.StartsWith("tt")) // since all my tooltip names start with 'tt' { System.Windows.Forms.ToolTip TipControl=(System.Windows.Forms.ToolTip)ctrl; TipControl.Active=false; } }
This does not work though. It gets an error because the ToolTip control is not inherited from System.Windows.Forms.Control. Any ideas?
EDIT: Okay Guys. I probably didn't go into enough detail to get the answer I needed. My problem is, I'm taking all the controls in my panel and moving them to a different panel. Once they are switched over, the tooltips are still attached to the controls, which is what I want. However I have no way to deactive or reactivate them once I move them since the form and the original panel no longer exist. However, I found a solution which I will post here.
Solution:1
How to add tool tips for two buttons? The correct way is NOT creating two instances of ToolTip in this way:
ToolTip tt1 = new ToolTip(); //or you can create one in the designer tt1.ToolTipTitle = "test"; tt1.SetToolTip(button1, "caption1"); ToolTip tt2 = new ToolTip(); tt2.ToolTipTitle = "test2"; tt2.SetToolTip(button2, "caption2");
Remember that a ToolTip instance and a control are not one-on-one related. The right way for this example is:
ToolTip tt1 = new ToolTip(); //or you can create one in the designer tt1.ToolTipTitle = "test"; tt1.SetToolTip(button1, "caption1"); tt1.SetToolTip(button2, "caption2");
To remove the tooltip of button2, use:
tt1.SetToolTip(button2,string.Empty);
For your case,we can use
foreach(Control c in this.Controls) { tt.SetToolTip(c,string.Empty); }
Solution:2
Typically, you have a single ToolTip instance that handles the displaying of tool tips for all of your controls. That single ToolTip instance is just a regular member of your form. Simply set it's Active property to false.
Solution:3
Edit: OK, scrap my previous answer. Yes, ToolTip is a Component, not a Control, so it's not actually in the Panel at all. From your question, it sounds like you have one ToolTip instance and you use it for controls inside this Panel as well as for other controls, right? In that case the solution is simple: create a separate ToolTip instance and use that one for controls in the Panel, then just refer to it directly to deactivate it, eg.
ttPanel.Active = false;
Solution:4
Okay what I did was create a new class that is inherited from Control, like so:
public class TooltipMaster : System.Windows.Forms.Control { private System.Windows.Forms.ToolTip m_tooltip1; private System.Windows.Forms.ToolTip m_tooltip2; private System.Windows.Forms.ToolTip m_tooltip3; private System.Windows.Forms.ToolTip m_tooltip4; public System.Windows.Forms.ToolTip ToolTip1 { get { return m_tooltip1; } set { m_tooltip1 = value; } } public System.Windows.Forms.ToolTip ToolTip2 { get { return m_tooltip2; } set { m_tooltip2 = value; } } public System.Windows.Forms.ToolTip ToolTip3 { get { return m_tooltip3; } set { m_tooltip3 = value; } } public System.Windows.Forms.ToolTip ToolTip4 { get { return m_tooltip4; } set { m_tooltip4 = value; } } }
Then what I did was create an instance of this class inside my main form's Load event. Then I just assigned each of my 4 tooltips to the 4 tooltips in this class. Finally, I added this control to my panel. After doing all that, I could access the tooltips later by iterating through each control and looking for the TooltipMaster control. Hope this makes sense!
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2019/04/tutorial-iterating-through-tooltips.html | CC-MAIN-2019-26 | refinedweb | 643 | 60.01 |
atthecorner
2008-05-21
I just ported my code from Windows XP to Windows Vista.
On vista, it fails in the pydev debugger with the message,
"Import Error: No module named vtEnv"
where "vtEnv" is one of my modules. I.e. the classic message when python can find a
in the source directory because I screwed up the filename.
My reaction was that this must be a trivial path orenvironment variable problem.
However, I have subsequently discovered that the code executes correctly with the same
configuration without the debugger (i.e. a run works). It also executes correctly from
the MS command line. So this is beginning to feel like an eclipse or pydev problem.
On the vista system I am using:
eclipse 3.3.2, python 2.5.2, pydev 1.3.17
On the XP system (service pack 2) I am using:
eclipse 3.3.2, python 2.5, pydev 1.3.17
And for the moment I have run out of ideas to explore.
THe following two files will show the problem:
file0.py -
----------------
print "some python code"
file1.py -
-----------
import os
import file0
Yes it is really that simple a case, which suggests user error and/or I am missing
a detail in the setup.
Fabio Zadrozny
2008-05-22
Can you print in the beginning of the sys.path and check if it's what you expect it to be? Also, is that a relative import (python 2.5 may have some issues if that's a regular import when you're importing from the __main__ module). Another shot would be trying to clean the .pyc files.
Cheers,
Fabio
atthecorner
2008-05-22
On my vista system
When running the code, that is when the code works, os.sys.path is:
['C:\\Users\\mkearney\\workspace\\xxxxx\\src', ']
'C:\\Users\\mkearney\\workspace\\xxxxx\\src' is my source directory
When debugging, os.sys.path is:
['C:\\Windows\\system32', 'C:\\opt\\eclipse-SDK-3.3.2-win32\\eclipse\\plugins\\org.python.pydev.debug_1.3.17\\pysrc', ']
When I add code to insert the 3 missing directories, things work. So where are those dirs normally added to
os.sys.path>
As for regular vs relative import, I don't really know. I have not found it necessary to make the distinction so far. The import is in __main__ module as shown in the small example. And "it worked before", the plaintive cry of a confused user. However, on the vista system I am running, python 2.5.2. On the XP system, I am running python 2.5.
-m
atthecorner
2008-05-22
And by the way. Adding code to insert the directories into os.sys.path works in the app, but I have to add it to every __main__module or so it seems. I've packaged platform specific into a module I'm importing. Obviously
if a platform specific detail precludes the import I have to patch around it. It would seem though, that this specific behavior should be the same for "run" or "debug" modes. And, it works in run
atthecorner
2008-05-29
FYI, I experience the same problem w/ ubuntu Linux.
-m | http://sourceforge.net/p/pydev/discussion/293649/thread/226c8b87 | CC-MAIN-2015-14 | refinedweb | 527 | 77.13 |
Library and Extension FAQ
Contents
- Library and Extension FAQ
- General Library Questions
- Common tasks
- Threads
- Input and Output
- Network/Internet Programming
- Databases
- Mathematics and Numerics
General Library Questions
How do I find a module or application to perform task X?
Check the Library Reference to see if there’s a relevant standard library module. (Eventually you’ll learn what’s in the standard library and will be able to skip this step.)
For third-party packages, search the Python Package Index or try Google or another Web search engine. Searching for “Python” plus a keyword or two for your topic of interest will usually find something helpful.
Where is the math.py (socket.py, regex.py, etc.) source file?
If you can’t find a source file for a module it may be a built)
How+"[email protected]"} """
The minor disadvantage is that this defines the script’s __doc__ string. However, you can fix that by adding
__doc__ = """...Whatever..."""
Is there a curses/termcap package for Python?.
Is there an equivalent to C’s onexit() in Python?
The atexit module provides a register function that is similar to C’s onexit().
Why don’t my signal handlers work?
The most common problem is that the signal handler is declared with the wrong argument list. It is called as
handler(signum, frame)
so it should be declared with two arguments:
def handler(signum, frame): ...
Common tasks
How do I test a Python program or component?.
To make testing easier, you should use good modular design in your that automates a sequence of tests can be associated with each module..
How do I create documentation from doc strings?
The pydoc module can create HTML from the doc strings in your Python source code. An alternative for creating API documentation purely from docstrings is epydoc. Sphinx can also include docstring content.
How do I get a single keypress at a time?
For Unix variants there are several solutions. It’s straightforward to do this using curses, but curses is a fairly large module to learn.
Threads
How do I program using threads?.
None of my threads seem to run: why? a good delay value for time.sleep(),.
How do I parcel out work among a bunch of worker threads?..
What kinds of global value mutation are thread-safe?!
Can’t we get rid of the Global Interpreter Lock?..?
Input and Output
How do I delete a file? (And other file questions...)().
How do I copy a file?
The shutil module contains a copyfile() function. Note that on MacOS 9 it doesn’t copy the resource fork and Finder info.
How do I read (or write) binary data?.
I can’t seem to use os.read() on a pipe created with os.popen(); why?
os.read() is a low-level function which takes a file descriptor, a small integer representing the opened file. os.popen() creates a high-level file object, the same type returned by the built-in open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n).
How do I access the serial (RS232) port?
For Win32, POSIX (Linux, BSD, etc.), Jython:
For Unix, see a Usenet post by Mitch Chapman:
Why doesn’t closing sys.stdout (stdin, stderr) really close it?.
Network/Internet Programming
What WWW tools are there for Python?.
How can I mimic CGI form submission (METHOD=POST)?) with req:.
What module should I use to help with generating HTML?
You can find a collection of useful links on the Web Programming wiki page.
How do I send mail from a Python script?, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here’s some sample code:
SENDMAIL = "/usr/sbin/sendmail" # sendmail location import os p = os.popen("%s -t -i" % SENDMAIL, "w") p.write("To: [email protected]\n") p.write("Subject: test\n") p.write("\n") # blank line separating headers from body p.write("Some text\n") p.write("some more text\n") sts = p.close() if sts != 0: print("Sendmail exit status", sts)
How do I avoid blocking in the connect() method of a socket?.
Databases
Are there any interfaces to database packages in Python?.
How do you implement persistent objects in Python?
The pickle library module solves this in a very general way (though you still can’t store things like open files, sockets or windows), and the shelve library module uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects.
Mathematics and Numerics
How do I generate random numbers in Python?
The standard module random implements a random number generator. Usage is simple:
import random random.random()
This returns a random floating point number in the range [0, 1).
There are also many other specialized generators in this module, such as:
- randrange(a, b) chooses an integer in the range [a, b).
- uniform(a, b) chooses a floating point number in the range [a, b).
- normalvariate(mean, sdev) samples the normal (Gaussian) distribution.
Some higher-level functions operate on sequences directly, such as:
- choice(S) chooses random element from a given sequence
- shuffle(L) shuffles a list in-place, i.e. permutes it randomly
There’s also a Random class you can instantiate to create independent multiple random number generators. | https://documentation.help/Python-3.4.4/library.html | CC-MAIN-2020-10 | refinedweb | 883 | 68.47 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Practice Writing Classes3:40 with Jeremy McLain
Now you try. Write a class on your own, then we'll walk through it together.
- 0:00
Let's take another look at a graphical representation of a map.
- 0:04
We need to be able to specify where on the map a tower or an invader is located.
- 0:10
We can think of our map as being divided up into a two-dimensional grid.
- 0:14
Each grid square is a location where an object can be placed on the map.
- 0:19
This allows us to use Cartesian coordinates to identify grid squares.
- 0:24
Cartesian coordinates are used to identify a point on a two-dimensional grid.
- 0:29
Traditionally variables named X and Y are used to specify a point.
- 0:35
X is the distance from the left-most grid square.
- 0:39
Y is the distance from the bottom-most grid square.
- 0:42
A tower located at the bottom left corner of the map would be located at x=0, y=0.
- 0:48
And this tower is placed at point x=3 and
- 0:53
y=1 or point 3, 1 for short.
- 0:57
Where is this tower located at?
- 0:59
That's right.
- 1:00
It's at point 5,4.
- 1:03
So a point on the map has both an X and a Y coordinate.
- 1:07
Let's model a point using a class.
- 1:10
We'll need to create a new file called Point.cs.
- 1:15
We'll define this class inside the Treehousedefense namespace.
- 1:24
So our Point class is going to look very similar to our map class.
- 1:28
It will have two fields, one called X and one called Y.
- 1:33
Just like our Map class needs both a width and
- 1:35
a height, the Point class will need both an X and a Y coordinate.
- 1:40
You know everything you need to know to write the Point class.
- 1:44
I suggest you pause the video here, and go ahead and
- 1:47
do that on your own for practice.
- 1:49
Then when you come back, we'll work through it together.
- 1:55
All right, how do you think you did?
- 1:58
Let's take a look.
- 1:59
We'll make a class and call it Point.
- 2:04
Then we'll have two public fields in our class, one for X, and one for Y.
- 2:09
We'll make them both integers, and they'll both be public because we'll need
- 2:13
to be able to read them from other classes.
- 2:16
Points don't move, so we need to make both X and Y readonly.
- 2:25
We'll need to add a constructor to initialize X and Y.
- 2:28
It will need to be public and take an x and y parameter.
- 2:32
We'll use these parameters to set the value of the X and Y fields.
- 2:45
There we have it.
- 2:46
As I said, the point class looks almost identical to the Map class.
- 2:52
Only the names are different.
- 2:55
What it means to be a point and
- 2:56
what it means to be a map are still two very different things though.
- 3:01
Remember to compile your code to make sure that you're not getting
- 3:04
any compiler errors.
- 3:05
If you do get compiler errors, check to make sure that your code looks like
- 3:09
the code written in these videos and that all of your files are saved.
- 3:14
Way to go.
- 3:15
You're writing classes and you're making objects.
- 3:18
Objects are a fun way to think about designing software, don't you think?
- 3:22
As we learn more about the capabilities of C# to do object-oriented programming,
- 3:27
it's going to get even more interesting and fun.
- 3:30
We are off to a great start, but this is just the beginning.
- 3:34
We've learned about classes and fields.
- 3:36
Next we'll learn about methods and their role in objects. | https://teamtreehouse.com/library/practice-writing-classes | CC-MAIN-2019-43 | refinedweb | 741 | 83.66 |
On 11/10/06, Tako Schotanus <quintesse@palacio-cristal.com> wrote:
> Stefan Guggisberg wrote:
> > On 11/10/06, Tako Schotanus <quintesse@palacio-cristal.com> wrote:
> >> It's not very important, but is there a specific reason why the
> >> CompactNodeTypeDefReader constructor takes a NamespaceMapping instead of
> >> just a plain NamespaceResolver?
> >
> > NamespaceResolver provides a 'read-only' view of the mappings whereas
> > NamespaceMapping has a setMapping() method.
> >
> > the NamespaceMapping instance passed in the CompactNodeTypeDefReader
> > constructor will be updated with the ns declarationsencountered in the
> > cnd file.
> Aha, okay, I didn't know it was possible to define namespaces in the
> compact definition, how does that work?
you have to declare the namespaces that you reference in your node type
definitions. those namespaces will automatically registered for you when
you register your new node types. for the cnd format see
>
> Thanks,
> -Tako
> >
> > cheers
> > stefan
> >
> >>
> >> As far as I can see almost all other methods dealing with namespaces
> >> take a NamespaceResolver, this just seems a weird exception.
> >>
> >> Cheers,
> >> -Tako
> >>
> >>
>
> | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200611.mbox/%3C90a8d1c00611100451s71edc457gf1155adb1e1f19b@mail.gmail.com%3E | CC-MAIN-2015-32 | refinedweb | 164 | 54.42 |
Or does the creation of a new user namespace force the creation of a new namespace of all the other types at the same time?
User namespaces progress
Posted Jan 3, 2013 3:31 UTC (Thu) by mkerrisk (subscriber, #1978)
[Link]
So, what stops an unprivileged process from creating a new user namespace, so acquiring CAP_BIND in the new namespace, then binding a privileged port?
Posted Jan 3, 2013 5:50 UTC (Thu) by quotemstr (subscriber, #45331)
[Link]
Posted Jan 3, 2013 7:59 UTC (Thu) by ebiederm (subscriber, #35028)
[Link]
Posted Jan 3, 2013 16:42 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jan 3, 2013 17:18 UTC (Thu) by andresfreund (subscriber, #69562)
[Link]
Posted Jan 3, 2013 17:21 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
How do I do it? I've actually tried multiple ways and all of them failed.
Posted Jan 3, 2013 17:40 UTC (Thu) by man_ls (subscriber, #15091)
[Link]
setcap 'cap_net_bind_service=+ep' /path/to/program
Posted Jan 3, 2013 19:15 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
You might actually notice that I have an answer in the thread you've linked: However, while it works for erlang it somehow fails for Java. Don't ask me why.
Posted Jan 3, 2013 17:44 UTC (Thu) by andresfreund (subscriber, #69562)
[Link]
In many scenarios you probably will end up using something like capsh or pam-cap.
Posted Jan 3, 2013 19:18 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Beautiful. Not.
> In many scenarios you probably will end up using something like capsh or pam-cap.
I'll gladly send you a beer if you can give me a command line that actually works. I have tried all sorts of capsh command variations, but NONE of them works.
Posted Jan 3, 2013 21:05 UTC (Thu) by andresfreund (subscriber, #69562)
[Link]
I only copied the binary because I do *not* want my normal nc to have the capability to bind to root-only ports.
> In many scenarios you probably will end up using something like capsh or pam-cap.
libpam-cap is probably easier for you:
apt-get install libpam-cap
pam-auth-update (enable "capabilities management")
sensible-editor /etc/security/capability.conf
# add "cap_net_bind_service cyberax"
It should be rather similar for other distributions.
Then start a new shell as your user (*not* via sudo "su - cyberax", use sudo -u cyberax, or su - cyberax from *your* user or such, pam_rootok makes a pretty unfortunate shortcut there) and voila:
andres@alap2:~$ sudo -u andres nc -l 434
^C
Posted Jan 3, 2013 21:09 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jan 3, 2013 21:21 UTC (Thu) by andresfreund (subscriber, #69562)
[Link]
sudo /sbin/capsh --caps=cap_net_bind_service+pei == --user=andres -- -c "nc -l 434"
Yes. Ugly. But it works. (capsh is/was a demo tool)
Posted Jan 3, 2013 21:24 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jan 3, 2013 7:57 UTC (Thu) by ebiederm (subscriber, #35028)
[Link]
The best way I can explain it is to describe an April Fool's day joke that you can play on your friendly local sysadmin.
Create a binary and call it something like $HOME/bin/su. Have that binary
call unshare(CLONE_NEWUSER) and write to /proc/[pid]/uid_map and /proc/[pid]/gid_map so that 0 in the current user namespace maps to the current uid and gid. Have this binary exec $SHELL. No privileges required.
Report that su is working without requiring root privileges in your account.
You can look around in /proc/self/status and see that your uid and gid are 0 and that you have all privs.
Extra points if you can get your local sysadmin to start trying to do things and from your $HOME/bin/su, because things really won't work and if you don't realize what is going on you are likely to be quite frustrated.
Services won't restart. You can't kill processes owned by other users etc.
Having a pam module set it up so that the user that looks like root has a distinct uid from everyone else is trickier to setup but could be more entertaining.
So while you have CAP_NET_BIND and can bind to any port in any network namespace you create, creating a network namespace won't do you much good because that network namespace is not connected to any other network namespace.
Posted Jan 4, 2013 2:00 UTC (Fri) by kevinm (guest, #69913)
[Link]
So from this it sounds like all the other types of namespaces (net, pid, mount...) are "owned" by a user namespace (the one in which they were created). When a permission check is done, it is done using the user namespace that owns that namespace that the relevant resource is in - for example, when I try to bind a privileged port, the permission check is done using the user namespace that owns the current network namespace (not the user namespace of the current process, which might well be different). Does that sound like the right concept?
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/531269/ | CC-MAIN-2013-20 | refinedweb | 858 | 64.54 |
JavaFX - ComboBox [Mobile]
By Rakesh Menon on Nov 26, 2009
Its more almost 6 months since I implemented a sample ComboBox using Control and Skin interface. There is lot of interest for proper implementation of this control. This post is still among my top posts. So thought of enhancing the implementation a bit so as to make it work on real mobile!
<script src=""></script> <script src="/rakeshmenonp/resource/ComboBox/ComboBox.js"></script>
For Applet mode, click on above image
I could also run this sample on Sony-Ericsson XPERIA
Try this new version and let me know feedback
Nice Rakesh.
Applet size is looking little small in the top applet. Some combo stuff is going out of the applet.
Posted by Vaibhav Choudhary on November 27, 2009 at 04:26 AM IST #
Hi Rakesh,
Thanks for providing the latest sources.
I have taken latest source and installed on ASUS(Windows Mobile 6.0). I am facing the same problem.
I am using
NetBeans IDE 6.7
JavaFX 1.2
ASUS(Windows Mobile 6.0)
I build the combobox application on Netbeans 6.7 and installed the jar on mobile. No output.
Posted by Siva on November 27, 2009 at 04:46 AM IST #
@Siva are you able to launch default demos available in JavaFX for Windows Mobile (EA)?
Posted by Rakesh Menon on November 30, 2009 at 03:32 AM IST #
Hi Rakesh,
I am able run the default demos and some other application (which developed by me) on JavaFX for Windows Mobile(EA).
Posted by Siva on November 30, 2009 at 09:20 AM IST #
Hi all, I'm facing the same problem for all my mobile javaFX apps. The rendering in a real windows mobile envt is very small and not workable
Posted by clob on December 09, 2009 at 01:15 PM IST #
Hi Rakesh,
You are having excellent posts in JavaFX. I am working on a very unique JavaFX applet and would like to know if its possible for JavaFX applet to interact with the current IE APIs (signed applet). I would like to add listeners IE to capture user actions (for web automation), import bookmarks/favs, close/open toolbar/Explorer bar etc.
Do you have any idea on how I can get these working with applet ? Please email me at gireeshkumar.g@gmail.com
-Gireesh
Posted by Gireesh on December 15, 2009 at 03:46 AM IST #
@clob I think the issue you are facing is related to higher resolution in real device and hence the UI is small, you may refer to below workaround:
Posted by Rakesh Menon on December 15, 2009 at 05:48 AM IST #
Hi Rakesh,
Its working fine on Windows Mobile, now. Its working on latest version of JavaFX 1.2 for Window Mobile.
Thank you for your reply. I would like know, can we expect the Table,Combobox, Popup, Tool tips and some other useful controls in the next version of JavaFX(Or does they available in the current version). Because i am working on new application for Mobile. It needs Table, Combobox etc. controls.
Posted by Siva on December 15, 2009 at 06:25 AM IST #
Nice Control :)
But seems to be buggy when putted into Sized Stack for instance
Like this way :
Stage {
title: "ComboBox Bug with Sized Stack"
width: 512
height: 512
scene: scene = Scene {
content: [
Stack {
width: bind scene.width
height: bind scene.height
content: [
combobox
]
}
]
}
}
Can See Bug Here :
But if You resize Windows @ ~400x233 it's working
Strange… :D
Posted by Toumaille on January 08, 2010 at 01:28 PM IST #
@Toumaille Yes, looks like some issue with updating layout, I see painting artifacts and issues related to clipping.
btw, I liked LevelIndicator, looks great! :)
Posted by Rakesh Menon on January 13, 2010 at 06:05 AM IST #
Hi Rakesh,tanks you for the sources, i'm trying to run combobox on my htc blackstone, I have JavaFX Mobile version 1.2 and I have compiled the sources on NetBeans 6.7.1.
When i'm running combobox it's does nothing !
I have reduce the code and i'have trying to see where is the problem for me , if i delete the class ComboBoxSkin and replace the class ComboBox by :
public class ComboBox extends Control {
public var items : Object[] = [];
public function select(index : Integer) : Void {
}
}
It's work on my Blackstone , I know this is stupid because there are no skin and i not see the list but it works and i think my Blackstone doesn't run a thing in ComboboxSkin but i don't know what and why maybe a problem of resolution? i don't know
Please help me, thank you for your very good job and sorry for my poor english
Jiki974
Posted by Jiki974 on February 03, 2010 at 12:00 PM IST # | https://blogs.oracle.com/rakeshmenonp/en_US/entry/javafx_combobox_mobile | CC-MAIN-2016-22 | refinedweb | 807 | 69.21 |
Install Bootstrap with Webpack with Rails 6 Beta
For those of you who are going to give Rails 6 Beta a test run here is how I have installed Bootstrap 4.3.1 and configured with Webpack
Step 1:
yarn add [email protected]:
import 'bootstrap' import './src/application.scss'
step 4:
create the following folder
app/javascript/packs/src and create the file '
application.scss and place
@import '~bootstrap/scss/bootstrap';
This should get bootstrap 4 up and running with Rails 6 Beta and webpack!
I've been working on getting Bootstrap 4 up and running with Rails 6 Beta 3 and have a simple Popper.js I am testing, that still isn't firing. I noticed in
webpack\environment.js I had another version of this different from yours. Can you explain why you have
append instead of
prepend
const webpack = require('webpack') environment.plugins.prepend( 'Provide', new webpack.ProvidePlugin({ $: 'jquery', jQuery: 'jquery', jquery: 'jquery', 'window.jQuery': 'jquery', Popper: ['popper.js', 'default'] }) ) module.exports = environment
Neither solution has worked for me yet, so just wondering where I am going off the Rails so to speak.
My
application.js currently has the following:
import 'bootstrap/dist/js/bootstrap.bundle' import 'jquery/dist/jquery.slim' import 'popper.js/dist/esm/popper' require("@rails/ujs").start() require("@rails/activestorage").start()
hey Guillermo (hope i spelled that right).
im not gonna lie, im not a webpack expert by any stretch. This guide was pieced together by me over a few days of struggling to get it set up and loading boot strap.
ill collect the best explained stack questions and post them here
Updated:
Rails 6 with Bootstrap and configured with Webpack
Step 1:
yarn add bootstrap:
require("bootstrap/dist/js/bootstrap")
note: doesn't need to import jquery , popper once initialized as webpacker plugin
step 4:
in
app/assets/stylesheets/application.css add the following:
@import "bootstrap/scss/bootstrap";
or
*= require bootstrap/scss/bootstrap
note: rails-sass can pick file from node modules
This get bootstrap 4 up and running with Rails 6 Beta and webpack!
Hi Guys, this is great. I came across a recent issue with webpacker not compiling and hence creating issues. This is the github discussion:
basically in your babel.config.js swap from corejs: 3 to corejs:false anywhere it appears.
this solved my frustration...
Help me.
Follow your instructions.
In DevTools Chrome errors
What could be the problem????
@Recker Swartz : You the MVP. Been trying to deploy to Heroku with OP's setup, however it wouldnt load Bootstrap when deploying, although works on localhost.
Now it loads properly and I can also define when it should load in relation to other stylesheets in application.css
@Daniele Deltodesco
Hi. I was just able to fix this. Heroku doesn't seem to mix well with Webpack as it tries to find
bootstrap/scss/bootstrap in the
app/assets/stylesheets folder. You have to point the import out of the folder to node_modules where Bootstrap stored in.
Make sure that your
app/assets/stylesheets/application.css is renamed to
application.css.scss and change
@import "bootstrap/scss/bootstrap"; to
@import "../../../node_modules/bootstrap/scss/bootstrap.scss";
Locally it works, but when I push the code to Heroku the assets are not precompiled. No CSS or JS present apparently.
Do I need to add
yarn add [email protected] jquery popper.js to my Procfile or somewhere when deploying to Heroku?
@Ivan, no, not during deploy. You should already have done that so they're added to your package.json. The assets:precompile step will make sure they're installed during deploy.
Heroku does need you to add the node buildpack alongside the ruby buildpack so you can use yarn.
heroku buildpacks:add heroku/nodejs heroku buildpacks:add heroku/ruby
Hi,
tired every option here and some more. Heroku is still not deploying. Same error precompiling assets fail. Does anyone have any other suggestions? I'm open to other server options too. Need something temp short term.
Hello,
I was able to get Bootstrap working with this method but I've run into a problem where my changes to the css files were overridden by some .scss files. Is there a 'main' css/scss file in my rails app that I can use to modify my app? | https://gorails.com/forum/install-bootstrap-with-webpack-with-rails-6-beta | CC-MAIN-2021-31 | refinedweb | 715 | 59.9 |
.
All of this is fixed for JDK 1.3 (kestrel)printDialog does not reflect a printjob's setCopies. Specifying 2 copies and the print dialog shows the value of 1. Compile & run attached src :
import java.awt.print.PrinterJob;
public class PrintTest {
public static void main(String s[]) {
PrinterJob pj = PrinterJob.getPrinterJob();
pj.setCopies(2);
pj.printDialog();
System.exit(0);
}
}
java full version "JDK-1.2fcs-G"
xxxxx@xxxxx 1999-01-21
NOTE: Here's another test case that isn't a degenerate condition (from a user) and the same problem is shown. Furthermore, this code demonstrates that multiple copies don't even print (JDK 1.2 final on windows) and that the value returned by getCopies is bogus even if the user changed the value and okayed the dialog. It's more than just getCopies being wrong too, because the print method is only called as often as the setCopies() call specifies if it is called BEFORE the printDialog() method.
/*
** Platform: Chinese Win95 4.00.950 B
** Example:
*/
import java.awt.*;
import java.awt.event.*;
import java.awt.print.*;
import javax.swing.*;
public class Big5PrintTest extends JPanel implements ActionListener
{
public Big5PrintTest()
{
setBackground(Color.white);
JButton b = new MyButton();
b.addActionListener(this);
add(b);
}
public void actionPerformed(ActionEvent e)
{
PrinterJob printJob = PrinterJob.getPrinterJob();
printJob.setPrintable((MyButton) e.getSource());
System.out.println( "nCopies (1) = " + printJob.getCopies() ) ; // nCopies (1) = 1
printJob.setCopies(2) ;
System.out.println( "nCopies (2) = " + printJob.getCopies() ) ; // nCopies (2) = 2
if ( printJob.printDialog() ) // change 'copies' in the dialog
{
System.out.println( "nCopies (3) = " + printJob.getCopies() ) ;
// BUG: Any change of 'copies' in the dialog will not become effective.
//
// If setCopies(2) is called, nCopies (3) will be always equal to 2;
// if setCopies(2) is not called, nCopies (3) is always equal to 1,
// no matter what value is put to 'copies' in the dialog.
try { printJob.print(); }
catch (Exception PrintException) {}
}
}
public static void main( String[] args )
{
WindowListener l = new WindowAdapter()
{
public void windowClosing( WindowEvent e )
{
System.exit(0) ;
}
} ;
JFrame f = new JFrame( "Big5PrintTest" ) ;
f.addWindowListener(l);
JOptionPane.setRootFrame( f ) ;
f.getContentPane().add( new Big5PrintTest(), BorderLayout.CENTER ) ;
f.setSize(new Dimension(300,100));
f.show();
}
static class MyButton extends JButton implements Printable
{
public MyButton()
{
super( "Print" ) ;
}
public int print(Graphics g, PageFormat pf, int pi) throws PrinterException
{
if ( pi >= 1 )
{
return Printable.NO_SUCH_PAGE;
}
Graphics2D g2 = (Graphics2D) g;
g2.drawString( "Print Test", 100, 100 ) ;
return Printable.PAGE_EXISTS;
}
}
}
N/A
The GDI PrintJob needed to have the number of copies to print set...
xxxxx@xxxxx 1998-08-31
still not fixed for Solaris. On winNT a Printing Error dialog appears : "An error occured during this operation."
java full version "Java2D:09-Sep- xxxxx@xxxxx :46"
xxxxx@xxxxx 1998-09-09
Now fixed for Solaris too...
xxxxx@xxxxx 1998-09-22
Looks okay on Solaris with L. But, on win32 a error dialog appears.
xxxxx@xxxxx 1998-09-30
Richard tested this on three NT drivers and didn't encounter the error. After looking at the code, he sees a possible problem that with some drivers might cause this behavior.
Still, it's a degenerate case since the sample code never sets a printable or pagable into the print job, so the print job has zero pages in it.
Something to do in 1.2+...
================
Here's what I see as the remaining problem :-.
===============================
Windows 98 - JDK 1.2
No modifications done in the page specification
dialog are returned to the requester.
Always the same imageable are is returned
and alwys the same starting point (72/72) is given back.
Hi.
Im having this problem on JDK 1.3.1. After the the user has
changed the number of copies in the print dialog, I want to
set the number of copies to 1 via setCopies(1). getCopies()
tells me that the number of copies is 1, but this is ignored
and the number of copies specified in the print dialog is
printed. Can anyone help me here?
Yours sincerely
Timo Maerte | http://bugs.sun.com/bugdatabase/view_bug.do%3Fbug_id=4168555 | crawl-002 | refinedweb | 659 | 60.11 |
Copyright © 1998 material in this draft was previously part of the XSL Working Draft. intended to be "feature complete". The Working Group plans to use future drafts to stabilize the current functionality; it does not intend to add any new functionality in version 1.0.
The XSL WG and the XML Linking WG have agreed to unify XSLT expressions and XPointers [XPointer]. A common core semantic model for querying has been agreed upon, and this draft follows this model (see [6.1 Location Paths]). However, further changes particularily in the syntax will probably be necessary.
This is part of the Style activity. [7.4 Conflict Resolution for Template Rules]. includes an expression language (see [6 Expressions and Patterns]) that is used for selecting elements for processing, for conditional processing and for generating text. The expression language is not a complete programming language. XSLT provides an extension mechanism to allow access from the expression language to a complete programming language such as ECMAScript or Java. XSLT does not require support for any programming language. Therefore XSLT stylesheets that must be portable across all XSLT implementations cannot depend on this extension mechanism.
A stylesheet is represented by an
xsl:stylesheet
element in an XML document.
xsl:transform is allowed as
a synonym for
xsl:stylesheet.
XSLT processors must use the XML namespaces mechanism [XML Names] for both source documents and stylesheets. All XSLT
defined elements, that is those specified in this document with a
prefix of
xsl:, will only be recognized by the XSLT
processor if they belong to a namespace with the URI; XSLT defined
elements are recognized only in the stylesheet not in the source
document.
The
xsl:stylesheet element may contain the following types
of elements:
xsl:import
xsl:include
xsl:strip-space
xsl:preserve-space
xsl:key
xsl:functions
xsl:locale
xsl:attribute-set
xsl:variable
xsl:param-variable
xsl:template
This example shows the structure of a stylesheet. Ellipses
(
...) indicate where attribute values or content have
been omitted. Although this example shows one of each type of allowed
element, stylesheets may contain zero or more of each of these
elements.
<?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:import <xsl:include <xsl:strip-space <xsl:preserve-space <xsl:key <xsl:functions ... </xsl:functions> <xsl:locale ... </xsl:locale> <xsl:attribute-set ... </xsl:attribute-set> <xsl:variable...</xsl:variable> <xsl:param-variable...</xsl:param-variable> .
An XSLT processor must treat any namespace whose URI starts with
the in the same way as
the XSLT 1.0 namespace
() except that it must
recover from errors as follows:
Unrecognized attributes on elements in the XSLT namespace must be ignored
Unrecognized top-level XSLT elements must be ignored along with their content
Error reporting for unrecognized XSLT elements in templates must be lazy: in other words it's not an error to have an unrecognized XSLT element unless the element is actually instantiated
Similarly error reporting for bad expression syntax must be lazy: it's not an error to have bad expression syntax in an attribute on some element unless the element containing the bad syntax is instantiated
Ed. Note: What happens with stylesheets that mix XSLT namespaces with different versions?
Thus any XSLT 1.0 processor must be able to process the following stylesheet without error:
<xsl:stylesheet xmlns: <xsl:template >
XSLT operates on an XML document, whether a stylesheet or a source document, as a tree. Any two stylesheets or source documents that have the same tree will be processed the same by XSLT. The XML document resulting from the tree construction process is also a tree. This section describes how XSLT models an XML document as a tree. This model is conceptual only and does not mandate any particular implementation.
XML documents operated on by XSLT must conform to the XML namespaces specification [XML Names].
The tree contains nodes. There are seven kinds of node:
root nodes
element nodes
text nodes
attribute nodes
namespace nodes
processing instruction nodes
comment nodes
Neither processing instruction nodes nor comment nodes are included in the tree for the stylesheet.
For every type of node there is a way of determining a string value for a node of that type. For some types of node, the value is part of the node; for other types of node, the value is computed from the value of descendant nodes.
Issue (data-entity): Should XSLT provide support for external data entities and notations?
Issue (entity-ref): Should XSLT provide support for entity references?
Issue (dtd): Should XSLT provide support for DTDs in the data model?
The root node is the root of the tree. It does not occur anywhere else in the tree. It has a single child which is the element node for the document element of the document.
The value of the root node is the value of the document element.
There is an element node for every element in the document. An element has an expanded name consisting of a local name and a possibly null URI reference (see [XML Names]); the URI reference will be null if the element type name has no prefix and there is no default namespace in scope. A relative URI reference should be resolved into an absolute URI during namespace processing.
The children of an element node are the element nodes, comment nodes, processing instruction nodes and text nodes for its content. Entity references to both internal and external entities are expanded. Character references are resolved.
The descendants of an element node are the children of the element node and the descendants of the children that are element nodes.
The value of an element node is the string that results from concatenating all characters that are descendants of the element node in the order in which they occur in the document.
The set of all element nodes in a document can be ordered according to the order of the start-tags of the elements in the document; this is known as document order.
Ed. Note: Need a definition of document order that handles arbitrary node types, including attributes.
An element object may have a unique identifier (ID). This is the
value of the attribute which must be treated as not having a
unique ID.
NOTE: If a document does not have a DTD, then no element in the document will have a unique ID..
Each element node has an associated set of attribute nodes..
An attribute node has an expanded name and has a string value. The expanded name consists of a local name and a possibly null URI (see [XML Names]); the URI will be null if the specified attribute name did not have a prefix. The value is the normalized value as specified by the XML Recommendation [XML]. An attribute whose normalized value is a zero-length string is not treated specially: it results in an attribute node whose value is a zero-length string.
There are no attribute nodes for attributes that declare namespaces (see [XML Names]).
Issue (external-dtd): Should we specify something about how we expect XSLT processors to process external DTDs and parameter entities? For example, what happens if an attribute default is declared in an external DTD?
Each element has an associated set of namespace nodes, one for each namespace prefix that is in scope for the element and one for the default namespace if one is in scope for the element. This means that an element will have a namespace node:
for every attribute on the element whose name starts with
xmlns:;
for every attribute on an ancestor element whose name starts
xmlns: unless the element itself or a nearer ancestor
redeclares the prefix;
for an
xmlns attribute, unless its value is the empty
string.
NOTE: An attribute
xmlns="""undeclares" the default namespace (see [XML Names]).
A namespace node has a name which is a string giving the prefix. This is empty if the namespace node is for the default namespace. A namespace node also has a value which is the namespace URI. If the namespace declaration specifies a relative URI, then the resolved absolute URI is used as the value.
When writing an element node in the result tree out as XML, an XSLT processor must add sufficient namespace-declaring attributes to the start-tag to ensure that if a tree were recreated from the XML, then the set of namespace nodes on the element node in the recreated tree would be equal to or a superset of the set of namespace nodes of the element node in the result tree.
NOTE: The semantics of a document type may treat parts of attribute values or data content as namespace prefixes. The presence of namespace nodes ensures that the semantics can be preserved when the tree is written out as XML.
There is a processing instruction node for every processing instruction.
Ed. Note: What about processing instructions in the internal subset or elsewhere in the DTD?
A processing instruction has a name. This is a string equal to
the processing instruction's target. It also has a value. This is a
string equal to the part of the processing instruction following the
target and any whitespace. It does not include the terminating
?>.
There is a comment node for every comment.
Ed. Note: What about comments in the internal subset or elsewhere in the DTD?
A comment has a value. This is a string equal to the text of the
comment not including the opening
<!-- or the closing
-->.
Character data is grouped into text nodes. As much character data as possible is grouped into each text node: a text node never has an immediately following or preceding sibling that is a text node. The value of a text node is the character data.
Each character within a CDATA section is treated as character data.
Thus
<![CDATA[<]]> in the source document will
treated the same as
<. Both will result in a
single
< character in a text node in the tree.
NOTE: When a text node that contains a
<character is written out as XML, the
<character must be escaped by, for example, using
<, or including it in a CDATA section.
Characters inside comments or processing instructions are not character data. Line-endings in external entities are normalized to #xA as specified in the XML Recommendation [XML].
After the tree.
Ed. Note: Clarify how these declarations interact with each other and with xsl:import. not specified, then the
result tree must be output as XML. If the
result-ns
attribute is specified, all elements in the result tree must belong to
the namespace identified by this prefix (the result
namespace).
When an XSLT processor outputs the result tree as a sequence of bytes that represents the result tree in XML, it must do so in such a way that the sequence of bytes is a well-formed XML document conforming to the XML Namespaces Recommendation [XML Names] and that if a new tree was constructed from the sequence of bytes as specified in [4 Data Model],>
The
xsl:stylesheet element can include.
Ed. Note: The XSL WG and the XML Linking WG have agreed to unify XSLT expressions and XPointers. A common core semantic model for querying has been agreed upon, and this draft follows this model. However, further changes particularily in the syntax will probably be necessary.
Expressions are used in XSLT for a variety of purposes including:
An expression is evaluated to yield an object which has one of the following types:
Expression evaluation occurs with respect to a context, which consists of:
The context node is always a member of the context node list. The variable bindings consist of a mapping from variable names to variable values. The value of a variable is an object which can have any of the types which are possible for the value of an expression.
The variable bindings, node key function, extension functions and namespace declarations used to evaluate a subexpression are always the same as those used to evaluate the containing expression. The context node and context node list used to evaluate a subexpression is sometimes different from the context node and context node list used to evaluate the containing expression. When the evaluation of a kind of expression is described, it will always be explicitly stated if the context node and node list change for the evaluation of subexpressions; if nothing is said about the context node and context node list, they remain unchanged for the evaluation of subexpressions of that kind of expression.
The node key function takes a pair of strings (a key name and a key value) and a document and returns a set of nodes (the nodes in the document that have a key with the specified name and value).
In XSLT, expressions occur in attribute values. The grammar
specified in this section applies to the attribute value after XML 1.0
normalization. So, for example, if the grammar uses the character
< this must not appear in the XML source for the
stylesheet as
< but must be quoted according to XML
1.0 rules by, for example, entering it as
<.
A top-level expression (an expression not occurring within an expression) gets its context as follows:
the context node comes from the current node
the context node list comes from the current node list
the variable bindings are the bindings in scope on the element which has the attribute in which the expression occurs (see [13 Variables and Parameters])
the node key function is specified by top-level
xsl:key elements (see [6.4.1 Declaring Keys])
the implementations of extension functions are provided by
top-level
xsl:functions elements (see [6.4.2 Declaring Extension Functions]), and may also be provided externally to the
stylesheet by means not specified by XSLT
the set of namespace declarations are those in scope on the
element which has the attribute in which the expression occurs; the default
namespace (as declared by
xmlns) is not part of this
set which are used to filter lists of nodes.
Certain contexts in XSLT make use of a pattern..
In the following grammar, the nonterminals QName and NCName are defined in [XML Names], and S is defined in [XML].
Expressions (including patterns and location paths) are parsed by first dividing up the character string to be parsed into tokens and then parsing the resulting sequence of tokens. Whitespace can be freely used between tokens. The tokenization process is described in [6.2.9 Lexical Structure]. [6.1.4 Abbreviated Syntax]).
Here are some examples of location paths using the unabbreviated syntax:
from-children(para) selects the
para element children of the context node
from-children(*) selects all element
children of the context node
from-children(text()) selects all text
node children of the context node
from-children(node()) selects all the
children of the context node, whatever their node type
from-attributes(name) selects the
name attribute of the context node
from-attributes(*) selects all the
attributes of the context node
from-descendants(para) selects the
para element descendants of the context node
from-ancestors(div) selects all
div
ancestors of the context node
from-ancestors-or-self(div) selects the
div ancestors of the context node and, if the context node is a
div element, the context node as well
from-descendants-or-self(para) selects the
para element descendants of the context node and, if the context node is
a
para element, the context node as well
from-self(para) selects the context node if it is a
para element, and otherwise selects nothing
from-children(chapter)/from-descendants(para)
selects the
para element descendants of the
chapter element children of the context node
from-children(*)/from-children(para) selects
all
para grandchildren of the context node
/ selects the document root (which is
always the parent of the document element)
/from-descendants(para) selects all the
para elements in the same document as the context node
/from-descendants(olist)/from-children(item)
selects all the
item elements in the same document as the
context node that have an
olist parent
from-children(para[position()=1]) selects the first
para child of the context node
from-children(para[position()=last()]) selects the last
para child of the context node
from-children(para[position()=last()-1]) selects
the last but one
para child of the context node
from-children(para[position()>1]) selects all
the
para children of the context node other than the
first
para child of the context node
from-following-siblings(chapter[position()=1])
selects the next
chapter sibling of the context node
from-preceding-siblings(chapter[position()=1])
selects the previous
chapter sibling of the
context node
/from-descendants(figure[position()=42]) selects
the forty-second
figure element in the
document
/from-children(doc)/from-children(chapter[position()=5])/from-children(section[position()=2])
selects the second
section of the fifth
chapter of the
doc document element
from-children(para[from-attributes(type)="warning"])
selects all
para children of the context node that have a
type attribute with value
warning
from-children(para[from-attributes(type)="warning"][position()=5])
selects the fifth
para child of the context node that has a
type attribute with value
warning
from-children(para[position()=5][from-attributes(type)="warning"])
selects the fifth
para child of the context node if that child has
a
type attribute with value
warning
from-children(chapter[from-children(title)="Introduction"])
selects the
chapter children of the context node whose first
title child has value equal to
Introduction
from-children(chapter[from-children(title)])
selects the
chapter children of the context node that have one
or more
title children
from-children(chapter[from-children(title[from-self(*)="Introduction"])])
selects the
chapter children of the context node any of whose
title children has value equal to
Introduction
from-children(*[from-self(chapter) or
from-self(appendix)]) selects the
chapter and
appendix children of the context node
from-children(*[from-self(chapter) or
from-self(appendix)][position()=last()]) selects the last
chapter or
appendix child of the
context node
There are two kinds of location path: relative location paths and absolute
the second step are unioned together. The set of nodes identified by
the composition of the steps is this union. For example,
from-children(div)/from-children(para) selects the
para element children of the
div element
children of the context node, or, in other words, the
para element grandchildren that have
div
parents.
An absolute location path consists of
/ optionally
followed by a relative location path. A
/ by itself
selects the root node of the document containing the context node. If
it is followed by a relative location path, then the location path
selects the set of nodes that would be selected by the relative
location path relative to the root node of the document containing the
context node.
A location step consists of
an axis identifier;
a node test;
zero or more predicates.
The axis identifier selects an initial list of nodes relative to the context node. The initial list of nodes is filtered first by the node test; the result of filtering by the node test is then filtered by the first predicate; the result of that is then filtered by the next predicate and so on. The node test selects nodes from the initial list based on the node type and node name. Each predicate selects nodes that satisfy a condition specified by an arbitrary expression. The result of the location step is the set of nodes that are members of the list that results from filtering the initial list by the node test and all the predicates. Note that although a location step selects a set of nodes, an axis selects a list of nodes and the predicates operate on a list of nodes.
The axis identifier is followed by the node test and predicates in
parentheses. For example,
from-descendants(para) selects the
descendants of the context node that are
para elements:
from-descendants specifies the axis, and
para
is a test that is true for elements with name
para. Each
predicate is specified as an expression in square brackets.
An axis identifies a list of nodes based on the kind of tree relationship that the nodes have to the context node. For example, the children of the context node are one axis, the ancestors of the context node are another axis. Note that an axis identifies an ordered list, not a set. The order of nodes in an axis is in the direction away from the context node.
The following axes are defined:
the
children axis contains the children of the
context node in document order
the
descendants axis contains the descendants of
the context node in document order
the
parent axis contains the parent of the
context node, if there is one
Ed. Note: Is the parent of an attribute node the element that the attribute is on?
the
following-siblings axis contains the
following siblings of the context node in document order
the
preceding-siblings axis contains the
preceding siblings of the context node in reverse document order; the first
preceding sibling is first on the axis; the sibling preceding that
node is the second on the axis and so on.
the
following axis contains all nodes in the same
document as the context node that are after the context node in document order;
the nodes are ordered in document order
Issue (following-axis): Is the
followingaxis needed?
Issue (following-start): Should the
followingaxis include the descendants of the context node?
the
preceding axis contains all nodes in the same
document as the context node that are before the context node in document order;
the nodes are ordered in reverse document order
Issue (preceding-axis): Is the
precedingaxis needed?
the
ancestors axis contains the ancestors of the
context node; the nodes are ordered in reverse document order; thus the
parent is the first node on the axis, and the parent's parent is the
second node on the axis
the
attributes axis contains the attributes of
the context node; the order of nodes on this axis is
implementation-defined
the
self axis contains just the context node
itself
the
ancestors-or-self axis contains the context node
and ancestors of the context node in reverse document order; thus the context node
is the first node on the axis, and the context node's parent the
second
the
descendants-or-self axis contains the context node
and the descendants of the context node in document order; thus the context node
is the first node on the axis, and the first child of the context node is
the second node on the axis
In an axis identifier the name of the axis is preceded by
from- to distinguish it from a function name.
A node test that is a QName tests whether the node is an
element or attribute with the specified name. For example,
from-attributes(href) selects the
href
attribute of the context node; if the context node has no
href
attribute, it will select an empty set of nodes.
A QName in the node test
is expanded into a local name and a possibly null URI. This expansion
is done using the namespace declarations from the expression context.
This is the same way expansion is done for element type names in start
and end-tags except that the default namespace declared with
xmlns is not used: if the QName does not have a prefix, then
the URI is null (this is the same way attribute names are expanded).
The expanded names are then compared for equality. Two expanded names
are equal if they have the same local part, and either both have no
URI or both have the same URI.
A node test
* is true for any element or
attribute node. For example,
from-children(*) will
select all element children of the context node, and
from-attributes(*) will select all attributes of the
context node.
A node test can have the form NCName
:*. In this case the
prefix is expanded in the same way as with a QName using the context namespace
declarations. The node test will be true for an element or attribute
whose expanded name has the URI to which the prefix expands, whatever
the local part of the name.
The node test
text() is true for any text node. For
example
from-children(text()) will select the text node
children of the context node. Similarly, the node test
comment() is true for any comment node, and the node test
pi() is true for any processing instruction. The
pi() test may have an argument that is Literal; in this case it is true for any
processing instruction that has a name equal to the value of the Literal.
A node test
node() is true for any node.
A predicate filters a list of nodes to produce a new list of nodes. For each node in the list to be filtered, the PredicateExpr is evaluated with that node as the context node and with the complete list of nodes to be filtered as the context node list; if PredicateExpr evaluates to true for that node, the node is included in the new list; otherwise it is not included.
A PredicateExpr is evaluated by
evaluating the Expr and converting the result
to a boolean. If the result is a number, the result will be converted
to true if the number is equal to the position of the context node in
the context node list and will be converted to false otherwise; if the
result is not a number, then the result will be converted as if by a
call to the
boolean() function. Thus a location path
para[3] is equivalent to
para[position()=3].
Here are some examples of location paths using whose first
title child has value equal to
Introduction
chapter[title] selects the
chapter
children of the context node that have one or more
title
children
chapter[title[.="Introduction"]] selects the
chapter children of the context node any of whose
title children has value equal to
Introduction
employee[@secretary and @assistant] selects all
the
employee children of the context node that have both a
secretary attribute and an
assistant
attribute
The most important abbreviation is that when the axis is the
children axis, the
from-children and surrounding
parentheses can be omitted. In effect the
children axis
is the default axis. For example, a location path
div/para
is short for
from-children(div)/from-children(para).
There's also an abbreviation for the attributes axis. Instead of
using
from-attributes and parentheses around the node
test, the node test can be preceded by
@ to indicate the
attributes axis. For example, a location path
para[@type="warning"] is short for
from-children(para[from-attributes(type)="warning"]) and
so selects
para children with a
type
attribute with value equal to
warning.
// is short for
/from-descendants-or-self(node())/. For example,
//para is short for
/from-descendants-or-self(node())/from-children(para) and so
will select any
para element in the document (even a
para element that is a document element will be selected by
//para since the document element node is a child of the
root node);
div//para is short for
div/from-descendants-or-self(node())/from-children(para) and
so will select all
para descendants of
div
children.
A location step of
. is short for
from-self(node()). This is particularly useful in
conjunction with
//. For example, the location path
.//para is short for
from-self(node())/from-descendants-or-self(node())/from-children(para)
and so will select all
para descendant elements of the
context node.
Similarly a location step of
.. is short for
from-parent(node()). For example,
../title
is short for
from-parent(node())/from-children(title) and
so will select the
title children of the parent of the
context node.
A VariableReference evaluates to the value to which the variable name is bound in the set of variable bindings in the context.
Parentheses may be used for grouping.
A location path can be used as an expression. The expression returns the set of nodes selected by the path.
The
| operator computes the union of its operands
which must be node-sets.
Square brackets are used to filter expressions in the same way that they are used in location paths. It is an error if the expression to be filtered does not evaluate to a node-set. The context node list used for evaluating the expression in square brackets is the node-set to be filtered listed in document order.
The
/ operator and
// operators combine
an arbitrary expression and a relative location path. It is an error
if the expression does not evaluate to a node-set. The
/
operator does composition in the same way as when
/ is
used in a location path. As in location paths,
// is
short for
/from-descendants-or-self(node())/.
There are no types of objects that can be converted to node-sets. It is an error if evaluating a NodeSetExpr yields an object that is not a node-set.
The
last() function returns the number of nodes in
the context node list. The
position() function returns
the position of the context node in the context node list. The first
position is 1, and so the last position will be equal to
The
count() function returns the number of nodes in the
argument node-set.
The
id() and
idref() functions select
elements by their unique ID (see [4.2.1 Unique IDs]).
id() converts its argument to a string and returns a
node-set containing the element in the same document as the context
node with unique ID equal to that string, if there is such an element,
and otherwise returns an empty node-set.
idref()
requires that its argument be a node-set; for each node in the
node-set, the value is split into a whitespace-separated list of
tokens;
idref() returns a node-set containing the
elements in the same document as the context node that have a unique
ID equal to any of the tokens in the value of any of the nodes in the
node-set. For example,
id("foo") selects the element with unique ID
foo
id("foo")/from-children(para[position()=5]) selects
the fifth
para child of the element with unique ID
foo
Ed. Note: No way to get an ID in another document. Can workaround with xsl:for-each. Maybe add optional second argument which gives document.
Issue (id-inverse): Should there be a way to get back from an element to the elements that reference it (eg by IDREF attributes)?
The
key() and
keyref() functions select a
set of nodes using the node key function in the expression evaluation
context. Both functions have a string as the first argument that
specifies the name of the key and have an expression as the second
argument.
key() has a second argument that is a string
and returns a node-set containing the nodes in the same document as
the context node that have a value for the named key equal to this
string.
keyref() has a second argument that is a node-set
and returns a node-set containing the nodes in the same document as
the context node that have a value for the named key equal to the
value of any of the nodes in the argument node-set. See [6.4.1 Declaring Keys] for how to declare a key.
The
doc() and
docref() functions allow
access to XML documents other than the initial source document. They
both rely on the ability to treat a string as a URI reference that is
mapped to a node-set; this mapping always takes places relative to a
node that can be used to resolve a relative URI into an absolute URI.
If the URI reference does not contain a fragment identifier, then it
will be mapped to a node-set containing the root node in a tree
representing the XML document whose document entity is the resource
identified by the URI. If the URI reference contains a fragment
identifier, then it will first map the URI to a tree representing the
XML document whose document entity is the resource identified by the
URI and then use the fragment identifier to select a set of nodes in
that tree; the semantics of the fragment identifier is dependant on
media type of the result of retrieving the URI.
doc()
takes a string argument which it treats as a URI reference which is
mapped to a node-set relative to the element in the stylesheet
containing the expression. Note that a zero-length URI reference is a
reference to the document relative to which the URI reference is being
resolved; thus
doc("") refers to the root node of the
stylesheet; the tree representation of the stylesheet is exactly the
same as if the XML document containing the stylesheet was the initial
source document.
docref() takes a node-set argument; for
each node in the node-set,
docref() treats the value of
the node as a URI reference that is mapped to a node-set relative to
that same node;
docref() returns the union of the
resulting node-sets.
Ed. Note: What if the fragment identifier identifies something that isn't a set of nodes (eg a span or a substring within a text node)? What are the allowed media types for the returned data? What is document order for node sets including nodes from multiple documents?
The
local-part() function returns a string containing
the local part of the name of the first node in the argument
node-set. If the node-set is empty or the first node has no name, an
empty string is returned. If the argument is omitted it defaults to
the context node.
The
namespace() function returns a string containing
namespace of the name of the first node in the argument node-set. If
the node-set is empty or the first node has no name or the name has no
namespace, an empty string is returned. If the argument is omitted it
defaults to the context node.
The
qname() function returns a string containing a
QName representing the name of
the first node in the argument. The QName must represent the name with
respect to the namespace declarations in effect on the node whose name
is being represented. Typically this will be the form in which the
name occurred in the XML source. This need not be the case if there
are namespace declarations in effect on the node that associate
multiple prefixes with the same namespace. However, an implementation
may include information about the original prefix in its
representation of nodes; in this case an implementation can ensure
that the returned string is always the same as the QName used in the XML source. If the
argument it omitted it defaults to the context node.
The
generate-id() function returns a string that can
be used as a unique identifier for the first node in the argument
node-set. The unique identifier must consist of ASCII alphanumeric
characters and must start with an alphabetic character. An
implementation is free to generate an identifier in any convenient way
provided that it always generates the same identifier for the same
node and that different identifiers are always generated from
different nodes. An implementation is under no obligation to generate
the same identifiers each time a a document is transformed. If the
argument node-set is empty, the empty string is returned. If the
argument is omitted, it defaults to the context node.
An object of type boolean can have two values, true and false.
The
boolean() function converts its argument to a
boolean as follows:
a number is true if and only if it is neither positive or negative zero nor NaN
a node-list is true if and only if it is non-empty
a result fragment is true if and only if it is non-empty
a string is true if and only if its length is non-zero
If the argument is omitted, it defaults to the context node.
A BooleanExpr is evaluated by
converting the result of evaluating the Expr to
a boolean as if by a call to the
boolean() function.
An
= expression is evaluated as follows. If at least
one operand is a boolean, then each operand is converted to a boolean
as if by applying the
boolean() function and the operands
are compared as booleans. Otherwise, if at least one operand is a
number, then each operand is converted to a number as if by applying
the
number() function and the operands are compared as
numbers; positive and negative zero compare equal. Otherwise both
operands are converted to strings as if by applying the
string() function and the operands are compared as
strings; two strings are equal if they contain the same sequence of
UCS characters.
A
<,
>,
<= or
>= expression is evaluated by first converting each
operand to a number as if by a call to the
number()
function and then comparing the two numbers.
Issue (node-set-comparision): What should the semantics of comparison operators be when either or both of the operands are node-sets? Should there be an "any" or "all" semantic?
An
or expression is evaluated by evaluating each
operand and converting its value to a boolean. The result is true if
either value is true and false otherwise.
An
and expression is evaluated by evaluating each
operand and converting its value to a boolean. The result is true if
both values are true and false otherwise.
The
not() function returns true if its argument is
false, and false otherwise.
The
true() function returns true.
The
false() function returns false.
The
lang() function returns true or false depending on
whether the language of the context node as specified by
xml:lang attributes is the same as or,. For example,
lang("en") would return true
if the context node is any of these five elements:
<para xml: <div xml:<para/></div> <para xml: <para xml:
A number represents a floating point number. A number can have any double-precision 64-bit format IEEE 754 value. These include a special "Not-a-Number" (NaN) value, positive and negative infinity, and positive and negative zero.
A NumberExpr is evaluated by
converting the result of evaluating the Expr to
a number as if by a call to the
number() function.
The
number() function converts its argument to a
number as follows:
if a string parses as Number possibly preceded or followed by whitespace, then it is converted to that number; otherwise it is converted to the number 0
Ed. Note: Would NaN be better than 0 here?
boolean true is converted to 1; boolean false is converted to 0
a node-set is first converted to a string as if by a call to the
string() function and then converted in the same way as a
string argument
Ed. Note: Should we take advantage of xml:lang here?
a result tree fragment is first converted to a string as if
by a call to the
string() function and then converted in
the same way as a string argument
If the argument is omitted, it defaults to the context node.
The
div operator performs floating point division
according to IEEE 754.
The
quo operator performs a floating point division
and then truncates the result to an integer. For example,
5 quo 2 returns
2
5 quo -2 returns
-2
-5 quo 2 returns
-2
-5 quo -2 returns
2
The
mod operator returns the remainder from the
quo operation. For example,
5 mod 2 returns
1
5 mod -2 returns
1
-5 mod 2 returns
-1
-5 mod -2 returns
-1
NOTE: This is the same as the
%operator in Java and ECMAScript.
NOTE: This is not the same as the IEEE remainder operation which returns the remainder from a rounding division.
The
sum() function returns the sum of the values of
the nodes in the argument node-set. even is returned.
A string consists of a sequence of UCS characters.
A StringExpr is evaluated by
converting the result of evaluating the Expr to
a string as if by a call to the
string() function.
The
string() function converts an object to a string
as follows:
A node-set is converted to a string by returning the value of the node in the node-set that is first in document order. If the node-set is empty, an empty string is returned.
A result tree fragment is converted to a string by treating it as a single document fragment node that contains the nodes of the fragment, and then converting that document fragment node to a string in the same was as if the document fragment node were a source tree node.
A number is converted to a string by returning a string in
the form of a Number, preceded by a
- character if the number is negative.
Ed. Note: What about positive zero, negative zero, NaN and infinities?
The boolean false value is converted to the string
false. The boolean true value is converted to the
string
true.
If the argument is omitted, it defaults to the context node..
Ed. Note: Should the first argument of the above functions default to the value of the current node?
The
normalize() function returns the argument string
with white space normalized by stripping leading and trailing
whitespace and replacing sequences of whitespace characters by a
single space. Whitespace characters are the same allowed by the S production in XML. If the argument is
omitted, it defaults to the context node converted to a string, in
other words the value of the context node.
The
translate() function returns the first argument
string with occurrences of characters in the second argument string
replaced by the corresponding characters from the third argument
string. For example,
translate("bar","abc","ABC") returns
the string
BAr.. See [6.4.3 Declaring Locales]
for how to declare a locale.
Issue (regex): Should XSLT support regular expressions for matching against any or all of pcdata content, attribute values, attribute names, element type names?
Issue (equality-case): Do we need to be able to do comparisons in a case insensitive way?
Issue (equality-normalize): Do we need to normalize strings before comparison? Does the stylesheet need to specify what kinds of normalization are required (eg compatibility character normalization)?
Issue (resolve-expr): Do we need a
resolve(NodeSetExpr)string expression that treats the characters as a relative URI and turns it into an absolute URI using the base URI of the addressed node?
Ed. Note: Add explanation of what a result tree fragment is.
The only operations that can be performed on a result tree fragment
are to convert it to a string or a boolean. In particular, it is not
permitted to use the
/,
//, and
[] on result tree fragments.
Expressions can only return values of type result tree fragment by referencing variables of type result tree fragment or calling extension functions that return a result tree fragment.
The CName is expanded to a name using the
namespace declarations from the evaluation context. The XSLT
processor attempts to locate an implementation of the extension
function with the specified name that it can use. The implementation
may be provided by an
xsl:functions element (see [6.4.2 Declaring Extension Functions]) or the XSLT processor may be able to locate an
implementation by other means not specified by XSLT. If the XSLT
processor cannot locate such a function, then evaluating the
expression is an error. Otherwise the implementation is called
passing it the values of the expressions and the value returned by the
function is the value of the expression. The
function-available() function can be used to test whether
an implementation of a particular function is available (see [6.2.8 System Functions]). A XSLT processor is allowed always to give an error
when evaluating an ExtensionFunctionCall (with such
an XSLT processor the
function-available() function would
always return false). Therefore if an XSLT stylesheet includes an ExtensionFunctionCall and does not
use the
function-available() function to test for and
handle the possibility that an implementation of the function is not
available, then it may not be portable across all XSLT
implementations.
For both these functions, the StringExpr
The
function-available() function returns true if an
implementation of the named extension function is available. For
example:
<xsl:if <xsl:value-of </xsl:if>
When tokenizing, the longest possible token is always returned.
For readability, whitespace may be used in patterns even though not explicitly allowed by the grammar: ExprWhitespace may be freely added within patterns before or after any ExprToken.
A NodeType, FunctionName, CName
or AxisIdentifier token is recognized
only when the following token is
(. An OperatorName token or MultiplyOperator token is recognized as
such only when there is a preceding token and the preceding token is
not one of
@,
(,
[,
, or an Operator.
This section explains what expressions are allowed as patterns and what the semantics of matching a pattern are.
pi() matches any processing instruction
id("W11") matches the element with unique ID
W11
para[1] matches any
para element
that is the first
para child element of its
parent
*[position()=1 and use either
AxisIdentifiers or
. or
... Location path patterns can also start with an
id() or
key() function call with a literal
argument (see [6.2.2 Node-sets]). element
being matched as the context node and the siblings of the elements.
Ed. Note: Need to revise above paragraph if we decide not to call the element to which an attribute is attached the parent of the attribute.
The
xsl:key element is used to declare keys. The
name specifies the name of the key. The
match attribute is a Pattern; a
xsl:key element gives information about the keys of any
node that match the pattern specified in the match attribute. The
use attribute is a NodeSetExpr, which specifies the set of
values for which the node has a key of the specified name. A node
x has a key with name y and value
z if and only if there is NodeSetExpr specified in the
use attribute of the
xsl:key element with
x as the current node and with a node list containing
just x as the current node list.
Note that the NodeSetExpr may return
a node-set with more than one node; all of the returned nodes serve as
a key value. Note also that there may be more than one
xsl:key element that matches a given node; all of the
matching
xsl:key elements are used.
Ed. Note: Add some examples.
Implementations of the extension functions in a namespace can be
provided using the
xsl:functions element. The required
ns attribute specifies the namespace for which an
implementation is being provided. The value of the
ns
attribute is a namespace prefix which is expanded to a namespace URI
using the namespace declarations in effect on the
xsl:functions element.
The implementation may be provided in two ways. If the
code attribute is present, then its value is a URI that
identifies a resource containing an implementation of the functions in
the namespace; in this case a
type attribute giving the
MIME media type of the data providing the implementation may also be
provided, so as to allow the XSLT processor to avoid fetching
resources of types that it is unable to make use of. If the
code attribute is not present, then the content of the
xsl:functions element contains the implementation of the
functions; in this case the
type attribute
must be present.
Multiple alternative implementations may be provided for the same namespace. For example,
<xsl:stylesheet xmlns: <xsl:template <xsl:value-of </xsl:template> <xsl:functions function currentDate() { return Date().toString() } <p>When multiple alternative implementations are provided, it is up to the XSLT processor to determine which to use.</p> </xsl:functions> <xsl:functions </xsl:stylesheet>
The
xsl:functions element may also have an
archive attribute that specifies a whitespace-separated
list of URIs of resources relevant to the provided implementation.
An XSLT processor is not required to be able to make use of
implementations provided by
xsl:functions elements. The
MIME media types that an XSLT processor is able to make use of and the
way the XSLT processor interfaces with implementations is dependent on
the particular XSLT processor. Therefore if an XSLT stylesheet
includes an ExtensionFunctionCall of an
extension function in a namespace for which an implementation is
provided by an
xsl:functions element, then it may not be
portable across all XSLT implementations.
The
xsl:locale element declares a locale which
controls the interpretation of a format pattern used by the
format-number() function. If there is a
name attribute then the element declares a named locale,
otherwise it declares the default locale. specify characters that may appear in the result of formatting the number and also control the interpretation of characters in the format pattern:
decimal-separator specifies the character used
for the decimal sign
grouping-separator specifies the character used
as a grouping (eg thousands) separator
percent specifies the character used as a
percent sign
per-millpatterns in a pattern
The following attributes specify strings that may appear in the result of formatting the number:
infinity specifies the string used to represent
infinity
NaN specifies the string used to represent the
NaN value
minus-sign specifies the string used as the
default minus sign.
A template rule is specified with the
xsl:template
element. The
match attribute is a Pattern that identifies the source node or nodes
to which the rule applies. The
match attribute is
required unless the
xsl:template element has a
name attribute (see [8 Named Templates]).
The content of the
xsl:template element is the
template.
Issue (template-match-default): Should the
matchattribute have a default? Any node? Any child node? The root node? [4.8 Whitespace Stripping]
will not be processed.
Ed. Note: There is no WG consensus on the use of xsl:apply-templates without a select attribute to process all children of a node.
A
select attribute can be used to process nodes
selected by an expression instead of all children. The value of the
select attribute is a NodeSetExpr. The selected set of nodes are
processed in document
order, unless a sorting specification is present (see
[12="from-ancestors.
Use of expressions in
xsl:apply-templates can lead to
infinite loops. It is an error if, during the invocation of a rule
for a node, that same rule is invoked again for that node. An
XSLT processor may signal the error; if it does not signal the error,
it must recover by creating an empty result tree structure for the
nested invocation.
Ed. Note: This isn't right with parameters.
Ed. Note: Also doesn't apply to built-in rules because they can be invoked in multiple modes.
It is possible for a source node to match more than one template rule. The template rule to be used is determined as follows:
First, all matching template rules that are less important than the most important matching template rule or rules are eliminated from consideration.
Next, all matching template rules that have a lower priority
than the matching template rule or rules with the highest priority are
eliminated from consideration. The priority of a template rule is
specified by the
priority attribute on the template rule.
The value of this must be a real number (positive or negative). -1.
Otherwise the priority is 1.
The idea is that the most common kind of pattern (a pattern that just tests for an element with a specific name) has priority 0; a pattern more specific than this has priority 1; a pattern less specific than this has priority -1.
Ed. Note: Say exactly what syntax is allowed for real numbers..
There is a built-in template rule to allow recursive processing to continue in the absence of a successful pattern match by an explicit rule in the stylesheet. This rule applies to both element nodes and the root node. The following shows the equivalent of the built-in template rule:
<xsl:template <xsl:apply-templates/> </xsl:template>
There is also a built-in template rule for text nodes that copies text through:
<xsl:template <xsl:value-of </xsl:template>
The built-in rule does not apply to processing instructions and comments. When a comment or processing instruction is processed, and no rule is matched, nothing is created.
The built-in template rules are treated as if they were imported
implicitly before the stylesheet and so are considered less important than all other template rules.
Thus the author can override a built-in rule by including an
explicit rule with
match="*|/" or
match="text()".
Modes allow an element to be processed multiple times, each time producing a different result.
Both
xsl:template and
xsl:apply-templates
have an optional
mode attribute whose value is a name..
If there is no matching template, then the built-in template rules
are applied, even if a
mode attribute was specified in
xsl:apply-templates.
Ed. Note: Add some examples.
Templates can be invoked by name. An
xsl:template
element with a
name attribute specifies a named template.
If.
Ed. Note: Expand this.
It is an error if a stylesheet contains more than one template with the same name and same importance. An XSLT processor may signal the error; if it does not signal the error, it must recover by choosing from amongst the templates with highest importance the one that occurs last in the stylesheet.
This section describes instructions that directly create nodes in the result tree.
Issue (multiple-results): Should it be possible to create multiple result trees?
In a template an element in the stylesheet that does not belong to
the XSLT namespace is instantiated to create an element node of the
same type. The created element node will have the attribute nodes
that were present on the element node in the stylesheet tree. The
created element node will also have the namespace nodes that were
present on the element node in the stylesheet tree with the exception
of any namespace node whose value is the XSLT namespace URI
().
The value of an attribute of a literal result element is
interpreted as an attribute
value template: it can contain string;
the string value from instantiating it must be a QName. If the
namespace
attribute is not present, then the QName is expanded into a name using
the namespace declarations in effect for the
xsl:element
element, element to be created. The local part of the QName specified by the
name attribute is used as the local part of the name of
the element;
the string value from instantiating it must be a QName. If the
namespace
attribute is not present, then the QName is expanded into a name using
the namespace declarations in effect for the
xsl:attribute element, not attribute to be created. The local part of the QName specified by the
name attribute is used as the local part of the name of
the attribute attribute as XML. They are not however
required to do so.
The following are all errors:
Adding an attribute to an element after children have been added to it; implementations may either signal the error or ignore the attribute.
Adding an attribute that has the same name as an attribute already added; implementations may either signal the error or ignore the duplicate.
The
xsl:attribute-set element defines a named set of
attributes. The
name attribute specifies the name of the
attribute set. The
xsl:use element adds a named set of
attributes to an element. It has a required
attribute-set attribute that specifies the name of the
attribute set.
xsl:use is allowed in the same places as
xsl:attribute. The content of the
xsl:attribute-set consists of
xsl:attribute
elements that specify attributes; it may also contain
xsl:use elements. The value of attributes in an attribute
set is determined when the attribute set is used rather than when the
attribute set is defined.
The following example creates a named attribute set
title-style and uses it in a template rule.
<xsl:attribute-set <xsl:attribute12pt</xsl:attribute> <xsl:attributebold</xsl:attribute> </xsl:attribute-set> <xsl:template <fo:block <xsl:use <xsl:apply-templates/> </fo:block> </xsl:template>
Any attribute in a named attribute set specified by
xsl:use is not added to an element if the element already
has an attribute of that name.
Multiple definitions of an attribute set with the same name are merged. An attribute from a definition that is more important takes precedence over an attribute from a definition that is less important. It is an error if there are two attribute sets with the same name that are equally important and that both contain the same attribute unless there is a more important definition of the attribute set that also contains the attribute. An XSLT processor may signal the error; if it does not signal the error, it must recover by choosing from amongst the most important definitions that specify the attribute the one that was specified last in the stylesheet.
A template can also contain text nodes. Each text node in a template remaining after whitespace has been stripped as specified in [4.8 [4.8 Whitespace Stripping]) but
does not affect how the characters are handled by the XSLT processor
thereafter.
The
xsl:pi element is instantiated to create a
processing instruction node. The content of the
xsl:pi
element is a template for the value of the processing instruction
node. The
xsl:pi element has a required
name attribute that specifies the name of the processing
instruction node. The value of the name attribute is interpreted as
an attribute value
template.
For example, this
<xsl:pihref="book.css" type="text/css"</xsl:pi>
would create the processing instruction
<?xml-stylesheet href="book.css" type="text/css"?>
It is an error if instantiating the content of
xsl:pi
creates anything other than characters. An XSLT processor may signal
the error; if it does not signal the error, it must recover by
ignoring the offending nodes together with their content.
It is an error if the result of instantiating the content of the
xsl:pi contains the string
?>. An XSLT
processor may signal the error; if it does not signal the error, it
must recover by inserting a space after any occurrence of
? that is followed by an
>.
Ed. Note: What should happen if the name is not a valid NCName?="*|@*|comment()|pi()|text()"> <xsl:copy> <xsl:apply-templates </xsl:copy> </xsl:template>
The
xsl:copy-of element copies a list of nodes
specified by an expression. The required
select attribute
contains an expression. The result of evaluating the expression must
be a node-set or a result tree fragment. When it is node-set, all the
nodes in the set together with their content are copied in document
order over into the result tree; when it is a result tree fragment;
the complete fragment is copied over into the result tree.
When the current node is an attribute, then if it would be an error
to use
xsl:attribute to create an attribute with the same
name as the current node, then it is also an error to use
xsl:copy (see . String
expressions can also be used inside attribute values of literal result
elements by enclosing the string expression in curly brace
(
{}).
xsl:value-of
The
xsl:value-of element is replaced by the result of
evaluating the expression specified by the
attribute. The
select attribute is required. The
expression is a StringExpr, which means
the result of evaluating the expression is converted to a string. The
element is called
xsl:value-of because a node-set is
converted to a string by returning the value of the first
node.
Issue (value-of-select-default): Should the
selectattribute have a default </fo:block> <xsl:apply-templates/> </xsl:template>
In an attribute value that is interpreted as an
attribute value template, such as an attribute of a
literal result element, a StringExpr can
be used by surrounding the StringExpr
with curly braces (
{}). The attribute value
template is instantiated by replacing the string expression together
with surrounding curly braces by the result of evaluating the string
expression. Curly braces are not recognized in an attribute value in
an XSLT stylesheet unless the attribute is specifically stated to be
one which is interpreted as an attribute value template.
NOTE: Not all attributes are interpreted as attribute value templates. Attributes whose value is an expression or pattern, attributes of top-level elements (children of a
xsl:stylesheetelement) and attributes that refer to named XSLT objects are not interpreted as attribute value templates. Also a string expression will be replaced by a single curly brace. It is an error if a right curly brace occurs in an attribute value template outside a string expression without being followed by a second right curly brace; an XSLT processor may signal the error or recover by treating the right curly brace as if it had been doubled. A right curly brace inside a Literal in a string expression is not recognized as terminating the string expression.
Curly braces are not recognized recursively inside string expressions. For example:
<a href="#{id({@ref})/title}">
is not allowed. Instead use simply:
<a href="#{idref(@ref)/title}">
The
xsl:number element is used to insert a formatted
number into the result tree. The number to be inserted may be
specified by an expression. The
expr attribute contains a
NumberExpr. The value of the NumberExpr is rounded to an integer and then
converted to a string using the attributes specified in <xsl:value-of </p> </xsl:for-each> </xsl:template>
If no
expr,
multi or
any. The
default is
single.
The
count attribute is a pattern that specifies
what elements should be counted at those levels. The
count attribute defaults to the element type name of the
current node.
The
from attribute is a pattern that specifies
where counting starts from.
In addition the
xsl:number element has the attributes
specified in .
When
level="multi", it constructs a list of all
ancestors of the current node in document order followed by the
element itself; it then selects from the list those elements that
match the
count pattern; it then maps each element of the
list to one plus the number of preceding siblings of that element that
match the
count pattern. If the
from
attribute is specified, then the only ancestors that are searched are
those that are descendants of the nearest ancestor that matches the
from pattern.
When
level="any", it constructs a list of length
one containing one plus the number of elements at any level of the
document that start before this node and that match the
count pattern. If the
from attribute is
specified, then only elements after the first element before this
element that match the
from pattern are
considered.
Ed. Note: Would it be better to return the number of nodes that match the pattern from the set consisting of the node itself and the nodes starting before the node? This would mean that when the node does not match the pattern, the number of the previous matching node would be returned rather than the number of the next matching node.
The list of numbers is then converted into a string using the
attributes specified in [9.7.1 Number to String Conversion Attributes]; when used with
xsl:number the value of each of these attributes is
interpreted as an attribute
value template. After conversion, the resulting string is
inserted in the result tree.
Ed. Note: Allowing them to be attribute value templates isn't consistent with the current DTD: the declared values would all have to be CDATA, and we couldn't use xml:lang because the XML spec doesn't allow the value to be expressed as a template.="multi" count="chapter|section|subsection" format="1.1. "/> <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block> <xsl:number level="multi"ii ix x ....
A format token
I generates the sequence
I
II III IV V VI VII VII IX X ....
Any other format token indicates a numbering sequence that
starts with that token. If an implementation does not support a
numbering system that starts with that token, it must use a format
token of
1.
When numbering with an alphabetic sequence, the
xml:lang attribute specifies which language's alphabet is
to be used.
NOTE: This can be considered as specifying the language of the value of the
formatattribute and hence is consistent with the semantics of
xml:lang.
The
letter-value attribute disambiguates between
numbering schemes that use letters. In many languages there are two
commonly used numbering schemes that use letters. One numbering
scheme.
The
digit-group-sep attribute gives the separator
between groups of digits, and the optional
n-digits-per-group specifies the number of digits per
group. For example,
digit-group-sep="," and
n-digits-per-group="3" would produce numbers of the form
1,000,000.
The
sequence-src attribute gives the URI of a text
resource that contains a whitespace separated list of the members of
the numbering sequence.
Ed. Note: Specify what should happen when the sequence runs out. NodeSetExpr specified by the
select attribute, which is required. The template is
instantiated with the selected node as the current node, and with a
list of all of the selected nodes as the current node list. The nodes
are processed in document
order, unless a sorting specification is present (see
[12 a BooleanExpr. The content is a template. If
the expression evaluates to true, then the content a
BooleanExpr. The content of the
xsl:when and
xsl:otherwise elements is a
template. When an
xsl:choose element is processed, each
of the
xsl:when elements is tested in turn." expr="size(from-ancestors.
Ed. Note: Say that the current node list is in sorted order.
xsl:sort has a
select attribute whose
value is a StringExpr. For each node to
be processed, the StringExpr is evaluated
with that node as the current node. The string that results from
evaluating the expression is used as the sort key for that node. The
default value of the
select attribute is
.,
which, as a StringExpr, returns the value
of the current node.
The default value is
text.
Ed. Note: We plan to leverage the work on XML schemas to define further values in the future..
Ed. Note: We plan also to add an attribute whose value is a label identifying the sorting scheme, to be specified by the I18N WG.
The values of all of the above attributes are interpreted as attribute value templates.
NOTE: It is recommended that implementors consult [UNICODE TR10] for information on internationalized sorting.
The sort must be stable: in the sorted list of nodes, any sublist-variable. The
difference is that the value specified on the
xsl:param-variable variable is only a default value for
the binding; when the template or stylesheet within which the
xsl:param-variable element occurs is invoked, parameters
may be passed that are used in place of the default values..
A variable binding element can specify the value of the variable in
two ways. It can have a
expr attribute whose value is
an expression, which is evaluated to give the value of the
variable. If there is no
expr attribute, then the
contents of the variable binding element specifies the value. The
contents is a template which is instantiated to give the value. In
this case the value is a result tree fragment.
Both
xsl:variable and
xsl:param-variable
are allowed at the top-level. A top-level variable binding element
declares a global variable that is visible everywhere. A top-level
xsl:param-variable element declares a parameter to the
stylesheet; XSLT does not define the mechanism by which parameters
are passed to the stylesheet. It is an error if a stylesheet contains
more than one binding of a top-level variable the same name and same
importance. An XSLT processor
may signal the error; if it does not signal the error, it must recover
by choosing from amongst the bindings with highest importance-variable-variable is allowed as a child
at the beginning of an
xsl:template element. In this
context, the binding is visible for all following siblings and their
descendants. Note that the binding is not visible for the
xsl:param-variable element itself.
Parameters are passed to templates using the
xsl:param
element. The required
name attribute specifies the name
of the parameter (the variable the value of whose binding is to be
replaced).
xsl:param is allowed within both
xsl:call-template and
xsl:apply-templates.
The value of the parameter is specified in the same way as for
xsl:variable and
xsl:param-variable. The
current node and current node list used for computing the value
specified by
xsl:param element is the same as that used
for the
xsl:apply-templates or
xsl:call-template element within which it occurs. It is
not an error to pass a parameter x to a template that
does not have a
xsl:param-variable element for
x; the parameter is simply ignored.
This example defines a named template for a
numbered-block with an argument to control the format of
the number.
<xsl:template <xsl:param-variable1. </xsl:param-variable> <xsl:number <fo:block><xsl:apply-templates/></fo:block> </xsl:template> <xsl:template <xsl:call-template <xsl:paramA. </xsl.
XSLT provides two mechanisms to combine stylesheets:
An XSLT stylesheet may contain
xsl:import elements. All
the
xsl:import elements must occur at the beginning of
the stylesheet. The
xsl:import element has an
href attribute whose value is the URI of a stylesheet to
be imported. A relative URI is resolved relative to the base URI of
the
xsl:import element (see [4.2.2 Base URI]).
Ed. Note: Say what importing a stylesheet means.
<xsl:stylesheet xmlns: <xsl:import <xsl:import <xsl:attribute-set <xsl:attributeitalic</xsl:attribute> </xsl:attribute-set> </xsl:stylesheet>
Definitions and template rules in the importing stylesheet are defined to be more important than definitions and template rules in any imported stylesheets. Also definitions and template rules in one imported stylesheet are defined to be more important than definitions and template rules in previous imported stylesheets.
In general a more important definition or template rule takes precedence over a less important definition or template rule. This is defined in detail for each kind of definition and for template rules.
Ed. Note: Say something about the case where the same stylesheet gets imported twice. This should be treated the same as importing a stylesheet with the same content but different URIs. What about import loops?
xsl:apply-imports processes the current node using
only template rules that were imported into the stylesheet containing
the current rule; the node is processed in the current rule's
mode.
Ed. Note: Expand this.
An XSLT stylesheet may include another XSLT stylesheet using an
xsl:include element. The
xsl:include element
has an
href attribute whose value is the URI of a
stylesheet to be included. A relative URI is resolved relative to the
base URI of the
xsl:include element (see [4.2.2 Base URI]). The
xsl:include element can occur as
the child of the
xsl:stylesheet element at any point
after all
xsl:import elements.
The inclusion works at the XML tree level. The resource located by
the
href attribute value is parsed as an XML document,
and the children of the
xsl:stylesheet element in this
document replace the
xsl:include element in the including
document. Also any
xsl:import elements in the included
document are moved up in the including document to after any existing
xsl:import elements in the including document. Unlike
with
xsl:import, the fact that rules or definitions are
included does not affect the way they are processed.
Ed. Note: What happens when a stylesheet directly or indirectly includes itself?.
In the second case, the possibility arises of documents with inline style, that is documents that specify their own style. XSLT does not define a specific mechanism for this. This is because this can be done by means of a general purpose mechanism for associating stylesheets with documents provided that:
It is not in the scope of XSLT to define such a mechanism.
NOTE: This is because the mechanism should be independent of any one stylesheet mechanism.
The
xsl:stylesheet element may have an ID attribute
that specifies a unique identifier.
NOTE: In order for such an attribute to be used with the
idXPointer location term, it must actually be declared in the DTD as being an ID.
The following example shows how inline style can be accomplished
using the
xml-stylesheet processing instruction mechanism
for associating a stylesheet with an XML document. The URI uses an
XPointer in a fragment identifier to locate the
xsl:stylesheet element.
<?xml version="1.0"?> <?xml-stylesheet <!ENTITY % char-template " (#PCDATA %char-instructions;)* "> <!ENTITY % template " (#PCDATA %instructions; %result-elements;)* "> <!-- Used for attribute values that are URIs.--> <!ENTITY % URI "CDATA"> <!-- Used for attribute values that are patterns.--> <!ENTITY % pattern "CDATA"> <!-- Used for attribute values that are expressions.--> <!ENTITY % expr "CDATA"> <!-- Used for an attribute value that consists of a single character.--> <!ENTITY % char "CDATA"> <!-- Used for attribute values that are a priority. --> <!ENTITY % priority "NMTOKEN"> <!ENTITY % space-att "xml:space (default|preserve) #IMPLIED"> <!ENTITY % top-level " (xsl:import*, (xsl:include | xsl:strip-space | xsl:preserve-space | xsl:key | xsl:functions | xsl:locale | xsl:attribute-set | xsl:variable | xsl:param-variable | xsl:template)*) "> <!ELEMENT xsl:stylesheet %top-level;> <!ELEMENT xsl:transform %top-level;> <!ATTLIST xsl:stylesheet result-ns NMTOKEN #IMPLIED default-space (preserve|strip) "preserve" indent-result (yes|no) "no" id ID #IMPLIED xmlns:xsl CDATA #FIXED "" %space-att; > <!ELEMENT xsl:import EMPTY> <!ATTLIST xsl:import href %URI; #REQUIRED> <!ELEMENT xsl:include EMPTY> <!ATTLIST xsl:include href %URI; #REQUIRED> <!ELEMENT xsl:strip-space EMPTY> <!ATTLIST xsl:strip-space elements NMTOKENS #REQUIRED> <!ELEMENT xsl:preserve-space EMPTY> <!ATTLIST xsl:preserve-space elements NMTOKENS #REQUIRED> <!ELEMENT xsl:key EMPTY> <!ATTLIST xsl:key name NMTOKEN #REQUIRED match %pattern; #REQUIRED use %expr; #REQUIRED > <!ELEMENT xsl:functions (#PCDATA)> <!ATTLIST xsl:functions ns NMTOKEN #REQUIRED code CDATA #IMPLIED archive CDATA #IMPLIED > <!ELEMENT xsl:locale EMPTY> <!ATTLIST xsl:locale name NMTOKEN #IMPLIED decimal-separator %char; "." grouping-separator %char; "," infinity CDATA "∞" minus-sign %char; "-" NaN CDATA "�" percent %char; "%" per-mill %char; "‰" zero-digit %char; "0" digit %char; "#" pattern-separator %char; ";" > <!ELEMENT xsl:template (#PCDATA %instructions; %result-elements; | xsl:param-variable)* > <!ATTLIST xsl:template match %pattern; #IMPLIED name NMTOKEN #IMPLIED priority %priority; #IMPLIED mode NMTOKEN #IMPLIED %space-att; > <!ELEMENT xsl:value-of EMPTY> <!ATTLIST xsl:value-of select %expr; #REQUIRED > <!ELEMENT xsl:copy-of EMPTY> <!ATTLIST xsl:copy-of select %expr; #REQUIRED> <!ELEMENT xsl:number EMPTY> <!ATTLIST xsl:number level (single|multi|any) "single" count CDATA #IMPLIED from CDATA #IMPLIED expr %expr; #IMPLIED format CDATA '1' xml:lang NMTOKEN #IMPLIED letter-value (alphabetic|other) #IMPLIED digit-group-sep CDATA #IMPLIED n-digits-per-group NMTOKEN #IMPLIED sequence-src %URI; #IMPLIED > <!ELEMENT xsl:apply-templates (xsl:sort|xsl:param)*> <!ATTLIST xsl:apply-templates select %expr; "node()" mode NMTOKEN CDATA #IMPLIED data-type (text|number) "text" order (ascending|descending) "ascending" case-order (upper-first|lower-first) |xsl:use)*> <!ATTLIST xsl:attribute-set name NMTOKEN #REQUIRED > <!ELEMENT xsl:call-template (xsl:param)*> <!ATTLIST xsl:call-template name NMTOKEN #REQUIRED > <!ELEMENT xsl:param %template;> <!ATTLIST xsl:param name NMTOKEN #REQUIRED expr %expr; #IMPLIED > <!ELEMENT xsl:variable %template;> <!ATTLIST xsl:variable name NMTOKEN #REQUIRED expr %expr; #IMPLIED > <!ELEMENT xsl:param-variable %template;> <!ATTLIST xsl:param-variable name NMTOKEN #REQUIRED expr %expr; #IMPLIED > <!ELEMENT xsl:text (#PCDATA)> <!ELEMENT xsl:pi %char-template;> <!ATTLIST xsl:pi name CDATA #REQUIRED %space-att; > <!ELEMENT xsl:element %template;> <!ATTLIST xsl:element name CDATA #REQUIRED namespace CDATA #IMPLIED %space-att; > <!ELEMENT xsl:attribute %char-template;> <!ATTLIST xsl:attribute name CDATA #REQUIRED namespace CDATA #IMPLIED %space-att; > <!ELEMENT xsl:use EMPTY> <!ATTLIST xsl:use attribute-set NMTOKEN #REQUIRED> <!ELEMENT xsl:comment %char-template;> <!ATTLIST xsl:comment %space-att;> <!ELEMENT xsl:copy %template;> <!ATTLIST xsl:copy %space-att;> <!ELEMENT xsl:message %template;> <!ATTLIST xsl:message %space-att;>
The following is a simple but complete stylesheet.
<?xml version='1.0'?> <xsl:stylesheet xmlns: <xsl:template <fo:basic-page-sequence <fo:simple-page-master <fo:queue <xsl:apply-templates/> </fo:queue> </fo:basic-page-sequence> </xsl:template> <xsl:template <fo:block <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:block> <xsl:apply-templates/> </fo:block> </xsl:template> <xsl:template <fo:inline-sequence <xsl:apply-templates/> </fo:inline-sequence> </xsl:template> </xsl:stylesheet>
With the following source document
<doc> <title>An example</title> <p>This is a test.</p> <p>This is <emph>another</emph> test.</p> </doc>
it would produce the following result
<fo:basic-page-sequence xmlns: <fo:simple-page-master <fo:queue <fo:blockAn example</fo:block> <fo:block>This is a test.</fo:block> <fo:block>This is <fo:inline-sequenceanother</fo:inline-sequence> test.</fo:block> </fo:queue> </fo:basic-page-sequence>
This is an example of using XSLT to create an XHTML document (see [XHTML]). The following stylesheet:
<?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:template <html> <head> <title>Sales Results By Division</title> </head> <body> <table border="1"> <tr> <th>Division</th> <th>Revenue</th> <th>Growth</th> <th>Bonus</th> </tr> <xsl:apply-templates/> </table> </body> </html> </xsl:template> <xsl:template <xsl:apply-templates <!-- order the result by revenue --> <xsl:sort </xsl:apply-templates> </xsl:template> <xsl:template <tr> <td><em><xsl:value-of</em></td> <xsl:apply-templates <xsl:apply-templates <xsl:apply-templates </tr> </xsl:template> <xsl:template <td><xsl:apply-templates/></td> </xsl:template> </xsl:stylesheet>
with the following input document
<?xml version="1.0"?> <sales> <division id="North"> <revenue>10</revenue> <growth>9</growth> <bonus>7</bonus> </division> <division id="South"> <revenue>4</revenue> <growth>3</growth> <bonus>4</bonus> </division> <division id="West"> <revenue>6</revenue> <growth>-1.5</growth> <bonus>2</bonus> </division> </sales>
would produce the following result
<?xml version="1.0"?> >-1.5</td> <td>2</td> </tr> <tr> <td><em>South</em></td> <td>4</td> <td>3</td> <td>4</td> </tr> </table> </body> </html>; Scott Boag, Lotus; Jeff Caruso, Bitstream; James Clark (XSLT Editor); Peter Danielsen, Bell Labs; Don Day, IBM; Stephen Deach, Adobe; Angel Diaz, IBM; Dwayne Dicks, SoftQuad; Andrew Greene, Bitstream; Paul Grosso, ArborText; Eduardo Gutentag, Sun; Mickey Kimchi, Enigma; Chris Lilley, W3C; Daniel Lipkin, Oracle; Chris Maden, O'Reilly; Jonathan Marsh, Microsoft; Alex Milowski, CommerceOne; Boris Moore, RivCom; Steve Muench, Oracle; Carolyn Pampino, Interleaf; Scott Parnell, Xerox; Vincent Quint, W3C; Gregg Reynolds, Datalogics; Jonathan Robie, Software AG; Henry Thompson, University of Edinburgh; Philip Wadler, Bell Labs; Randy Waki, Novell; Norm Walsh, ArborText; Sanjiva Weerawarana, IBM; Umit Yalcinalp, Sun; Steve Zilles, Adobe (Co-Chair)
The following is a summary of changes since the previous public working draft.
Select patterns, string expressions and boolean expressions have been combined and generalized into an expression language with multiple data types (see [6 Expressions and Patterns]).
xsl:strip-space and
xsl:preserve-space
have an
elements attribute which specifies a list of
element types, rather than a
element attribute specifying
a single element type.
The
id() function has been split into
id() and
idref().
xsl:id has been replaced by the
xsl:key
element (see [6.4.1 Declaring Keys]), and associated
key()
and
keyref() functions.
The
doc() and
docref() have been added to
support multiple source documents.
Namespace wildcards (
ns:*) have been added.
ancestor() and
ancestor-or-self() have
been replaced by a more general facility for addressing different
axes.
Positional qualifiers (
first-of-type(),
first-of-any(),
last-of-type(),
last-of-any()) have been replaced by the
position() and
last() functions and numeric
expressions inside
[].
Counters have been removed. An
expr attribute has been
added to
xsl:number which in conjunction with the
position() allows numbering of sorted node lists.
Multiple adjacent uses of
[] are allowed.
Macros and templates have been unified by allowing templates to be named and have parameters.
xsl:constant have been replaced by
xsl:variable which allows variables to be typed and
local.
The default for
priority on
xsl:template
has changed (see [7.4 Conflict Resolution for Template Rules]).
An extension mechanism has been added (see [6.4.2 Declaring Extension Functions]).
The namespace URIs have been changed.
xsl:copy-of has been added (see [9.5 Copying]).
A error recovery mechanism to allow forwards-compatibility has been added (see [3 Forwards-compatible Processing]).
A
namespace attribute has been added to
xsl:element and
xsl:attribute. | http://www.w3.org/TR/1999/WD-xslt-19990421.html | CC-MAIN-2015-06 | refinedweb | 13,513 | 52.29 |
Guillaume Yziquel wrote:
>?
>
Guillaume, the OCaml module has been neglected for a few years and it
has very few users. The generated code used to compile quite cleanly
with older versions of Ocaml, but not so now. Feel free to improve the
wrappers, I suggest you discuss the development of it on the swig-devel
mailing list. You might want to contact the original Ocaml developer Art
Yerkes, see the README file. The developer documentation is in the
Doc/Devel directory. Also see Doc/Manual/Extending.html.
William
Daniel Rojas Roa wrote:
>’
>
Probably a query for the ODE SWIG wrapper developers as the above means
nothing to SWIG users.
William’
Thanks,
daniel rojas roa
> _______________________________________________
> Swig-user mailing list
> Swig-user@...
>
>
>
On Tue, Jun 23, 2009 at 10:01:34AM -0400, Andres Gonzalez <andres@...> wrote:
> Does SWIG support this? How do I format my data in the C/C++
> domain so that I can get associative arrays in the PHP domain?
I haven't tried it myself, but looking at the contents of the Lib/php
directory, I would try just returning an
std::map<int,std::vector<std::string>>.?
All the best,
Guillaume Yziquel.
> yziquel@...:~/svn/main/libmorfo-ocaml$ ocaml freeling.cma
> Objective Caml version 3.11.0
>
> # module X = Freeling;;
> module X :
> sig
> type c_enum_type = [ `unknown ]
> type c_enum_value = [ `Int of int ]
> type c_obj = c_enum_value Swig.c_obj_t
> val module_name : string
> exception BadArgs of string
> exception BadMethodName of c_obj * string * string
> exception NotObject of c_obj
> exception NotEnumType of c_obj
> exception LabelNotFromThisEnum of c_obj
> exception InvalidDirectorCall of c_obj
> val new_tokenizer : c_obj -> c_obj
> val _new_tokenizer : c_obj -> c_obj
> val _delete_tokenizer : c_obj -> c_obj
> val create_tokenizer_from_ptr : c_obj -> c_obj
> val _string_of_chars : c_obj -> c_obj
> val enum_to_int : c_enum_type -> c_obj -> Swig.c_obj
> val int_to_enum : c_enum_type -> int -> c_obj
> val swig_val : c_enum_type -> c_obj -> Swig.c_obj
> end
> # open Freeling;;
> # let s = "/usr/share/freeling/en/tokenizer.dat";;
> val s : # let ss = C_string s;;
> Error: Unbound constructor C_string
> # open Swig;;
> # let ss = C_string s;;
> val ss : 'a Swig.c_obj_t = C_string "/usr/share/freeling/en/tokenizer.dat"
> # let sss = string_of_chars ss;;
> Error: Unbound value string_of_chars
> # let sss = _string_of_chars ss;;
> val sss : Freeling.c_obj = C_ptr (6710128L, 47161782545120L)
> # let tk = new_tokenizer sss;;
> val tk : Freeling.c_obj = C_obj <fun>
I am currently using SWIG to implement a PHP interface to a C/C++
application. In all of my C/C++ API functions, I simply return a
string with delimiters, for example, I would return something
like this:
sprintf(buffer, "%d\n%d\n%s\n%d", intParam1, intParam2, strParam3,
intParam3);
return buffer;
When my PHP application uses this API function, it gets a string so then
I use the following to put it in an array for use in the PHP domain:
$ret = apiFuncition();
$myArray = explode(PHP_EOL, $ret);
This is working very well, however, I now need to have my C/C++ functions
return more complex associative arrays, for example like this:
"key1" [0] = value1
[1] = value2
[2] = value3
"key2" [0] = value4
[1] = value5
"key3" [0] = value6
[1] = value7
That is, an array that has an array as elements.
Does SWIG support this? How do I format my data in the C/C++
domain so that I can get associative arrays in the PHP domain?
Thanks,
-Andres
Thanks both for the quick answer!
I will leave it this way then.
Cheers!
Juan M.
On Fri, Jun 19, 2009 at 9:19 PM, William S
Fulton<wsf@...> wrote:
> Juan Manuel Alvarez wrote:
>>
>> Hello everyone! I am having a little doubt I would like to share.
>>
>> I am wrapping to C# and given a simple file like:
>>
>> %module myModule
>> %{
>> #include "myModule.h"
>> %}
>> namespace fzm
>> {
>> class MyClass
>> {
>> // ... interface here....
>> };
>> }
>>
>> The thing is that SWIG generates 3 files:
>> - MyClass.cs with the class itselft
>> - myModulePINVOKE.cs with all the pinvoke stuff
>> - myModule.cs with the following code:
>>
>> namespace NS {
>>
>> using System;
>> using System.Runtime.InteropServices;
>>
>> public class myModule {
>> }
>>
>> }
>>
>> The question is... even if the file does no harm, is there a way to
>> tell SWIG no to generate it?
>>
> In a nutshell, no. Your build system will have to delete it after running
> SWIG if you don't like it. If you didn't know, C/C++ global wrappers get put
> into this class.
>
> William
>
Am 22.06.2009, 23:36 Uhr, schrieb William S Fulton
<wsf@...>:
> Bob Marinier wrote:
>> Hi,
>>
>> I'm wrapping some code for Python on Windows using Visual Studio 2005
>> (although I think this will all be exactly the same in 6 and 2003).
>>
>> When I'm doing a debug build, the symbol _DEBUG is defined (and it needs
>> to be defined). Something in Python.h, then, tells the linker it needs
>> python24_d.lib. The problem is that the Windows installer for Python
>> does not include this file. One possible workaround I found on the
>> Python mailing list is to change the SWIG output so that
>>
>> #include "Python.h"
>>
>> becomes:
>>
>> #ifdef _DEBUG
>> #undef _DEBUG
>> #include "Python.h"
>> #define _DEBUG
>> #else
>> #include "Python.h"
>> #endif
>>
>> This "tricks" Python.h into thinking this is not a debug build, and thus
>> is looks for python24.lib instead, which does exist. This works, and
>> since I'm not trying to debug Python, I don't care that I'm not linking
>> the debug library. But having to manually change SWIG's output each
>> time I generate it is a real pain. Is there either a way to change
>> SWIG's output to this or does anyone have another idea for how to
>> workaround this problem? And no, renaming python24.lib to
>> python24_d.lib does not work :) (they aren't binary compatible).
> Bob, does this trick still work? Probably it is best to modify the first
> line to #if defined(_DEBUG) && defined(SWIG_PYTHON_DEBUG), so that a
> user must also specify SWIG_PYTHON_DEBUG. Otherwise it won't be possible
> to use the proper Python debug version, which I think can be compiled up
> manually.
No, this trick does not work anymore with Python 2.6 and MSVC 2008. If you
apply the trick, Visual C++ will complain that some header files have been
compiled with DEBUG defined and some without. The only thing I have found
to make this work is to edit pyconfig.h and comment the #pragma (lib) and
#define Py_DEBUG lines.
I have also raised the issue at the python bugtracker a year or two ago
and they basically said "won't fix, if you want to use debug library,
compile python in debug mode". This didn't make much sense to me as I
wanted the python part to be release mode and my part to be debug mode,
but the "won't fix" is how the discussion ended.
-Matthias
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/swig/mailman/swig-user/?viewmonth=200906&viewday=23 | CC-MAIN-2017-04 | refinedweb | 1,143 | 66.13 |
Before we started, I assume you have Visual Studio installed on your computer. The current version of Visual Studio I used on this example is version 2012. If you have earlier version you may have to upgrade to the same version or higher. I would recommend you to download the 2015 community version.
To create a module plugin, we need to have a running DNN site on your computer, if you do not have one, I would recommend you to install the DNN and setup it on your IIS local server. If you do not know how to do this, please see our tutorial on how to install DNN on your IIS.
Once the DNN installed, open your Visual Studio program, under the File menu choose open Web site. Once the website has been opened, you can now create a new empty project.
Remember the type of the project has to be an empty website project to save time. Take a note as well that you need to place this project inside the DesktopModules folder so when you refer to this module you don't have to copy your module across.
The next step is to delete the web.config, we do not need this file to be included in the project.
In order for the module to work, we need to add the DotNetNuke reference into our project. Under the references in the Solution explorer, right click and choose Add reference.
Click the browse button and allocate the DotNetNuke.dll file, this file is located in the bin folder on your site root path.
Once this has been added, we have to make sure the when we build the project, we do not want to copy this DotNetNuke.dll reference across, as we want to make our module compatible to other version, otherwise if you copy across, you may break the site due to the wrong version of the dll. So please set Copy Local to false.
We need to add two user controls (.ascx files). One is for View screen and the other is for the Setting screen.
This will be the code for the View.ascx file.
<%@ Control</p>
This will be the code for the View.ascx.cs file. One thing you have to remember that every time you create a module view screen, you always have to inherit it from PortalModuleBase . This inherit base class is referenced from DotNetNuke.Entities.Modules. The built in keyword Settings are used to get the value of stored in the module settings.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using DotNetNuke.Entities.Modules; namespace ByTutorial.Modules.HelloWorld { public partial class View : PortalModuleBase { protected void Page_Load(object sender, EventArgs e) { if ((string)Settings["YourName"] != null) { litYourName.Text = (string)Settings["YourName"]; } } } }
This will be the code for the Settings.ascx file. If you see on the following code, you can see that I use the DNN control named LabelControl. This control is used to display the text based on the resources languages specified in the App_Resources folder.
<%@ Control <label><dnn:Label</label> <asp:TextBox </div>
This will be the Settings.ascx.cs file. When creating a setting for your dnn module. You have to inherit from this class ModuleSettingsBase. Then you will need to override two abstract methods which are LoadSettings and UpdateSettings.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using DotNetNuke.Entities.Modules; using DotNetNuke.Services.Exceptions; namespace ByTutorial.Modules.HelloWorld { public partial class Settings : ModuleSettingsBase { public override void LoadSettings() { try { if (!Page.IsPostBack) { //Make sure we check if the setting YourName is exists if it is exists then we display the value if ((string)TabModuleSettings["YourName"] != null) { txtYourName.Text = (string)TabModuleSettings["YourName"]; } } } catch (Exception exc) { //If error occurs, save it in event viewer Exceptions.ProcessModuleLoadException(this, exc); } } //override the UpdateSettings module public override void UpdateSettings() { try { ModuleController objModules = new ModuleController(); //Update a setting YourName value objModules.UpdateTabModuleSetting(TabModuleId, "YourName", txtYourName.Text); } catch (System.Exception exc) { //If error occurs, save it in event viewer Exceptions.ProcessModuleLoadException(this, exc); } } } }
Once the above codes have been placed, the next step is to create resources file. The resources files are used to stored the text for labelling. In this example we only need to use for Settings resource only. What you need to do is to create App_LocalResources folder and create a file name Settings.ascx.resx which represent the Settings.ascx file itself. You will notice that we match the label ID name into our Settings.ascx file, as the module will search automatically by referring to the ID of the dnn label control.
The codes part of the module are now done, the remaining job is now to create a dnn extension on your site to test this module. In order to create a new extension, you will need to login as Super User account. This is the highest level super user account and only by using this account you can get the access to the host > extensions menu.
Once in the module extensions page, please click the Create New Extension button.
A popup window will be displayed. Please choose the extension type to Module and enter your module information. One thing to remember that the Module name has to be unique, it is recommended that you include your company name or domain name to make it unique.
The next wizard step is to set the module information. Please make sure you enter the correct folder name, you can double check this inside the desktopmodules folder. For the Module category, you may want to place this in the Common category if it will be used frequently or if it is only available for Admin use only, you may select this option. Do not worry about the remaining fields, you may leave them as it is, there is help text icon which explain further what they are used for.
The next screen wizard is to enter your company or module creator information.
The wizard is now completed, you should see your module creation in the extension list like below.
We haven't linked our two controls to the module. In order to do this we have to create a new module definition of the module. In the module extensions list which above image, click the edit pencil button, this will edit the module information. Under the Module definitions section, please click Add Definition button. Then enter the module definition and friendly name. Usually you enter the same Module name in here.
Once the definition has been created, we can now add our module controls. Please click the Add Module control button.
We add the View control information. There is a key text box, this is only be used if we select the control type as Edit type, for the View type, we will leave it blank. The difference between View and Edit type is the View type is only used for View only while the Edit can be used for Edit the module settings. Edit in here can be considered as internal access settings. So it might be not accessible by public use. There is a Module title, this title is used by the module container that wrap the module control.
Once the view control has been added, we repeat the process to add the Settings control. For the settings control, we set the Key value to Settings and the type to Edit.
You should now see the list of your control files listed in the module definition.
The final step is now to build your module. Firstly make sure it is in the release mode and click Build. Once it has been built. Go to your DNN website bin folder and right click to Add reference. Please browse the dll file from your new Hello World Project. Once it is added, we are completely done.
We can now try our simple Hello World module. To do this please go to the page where you want to add this module. Under the Modules menu, there is Add New Module menu. A scrolling list of DNN extension modules will be listed. Select the Hello World DNN module then drag it and drop it to the module pane. This will install the module into the page.
Once the module has been installed to a page, your new module should be display like below.
Click the module setting, and you should be able to set some information to your new module. In below example,we will set the module title and your name value.
This will be the final result after you set the module title and setting name value.
You can package your module so it can be installed to any website you want. To package a module you will need to create a dnn manifest file. It can be just a txt file, but you need to rename the extension to .dnn. Here is the code of the manifest file.
<dotnetnuke type="Package" version="5.0"> <packages> <package name="ByTutorial.Modules.HelloWorld" type="Module" version="1.0.0"> <friendlyName>ByTutorial Hello World</friendlyName> <description>Hello World DNN</description> <owner> <name>bytutorial.com</name> <organization>bytutorial.com</organization> <url></url> <email>info@bytutorial.com</email> </owner> <license src="license.txt" /> <releaseNotes src="releasenotes.txt" /> <components> <component type="Module"> <desktopModule> <moduleName>ByTutorial.Modules.HelloWorld</moduleName> <foldername>ByTutorial.Modules.HelloWorld</foldername> <businessControllerClass></businessControllerClass> <supportedFeatures /> <moduleDefinitions> <moduleDefinition> <friendlyName>ByTutorial.Modules.HelloWorld</friendlyName> <defaultCacheTime>0</defaultCacheTime> <moduleControls> <moduleControl> <controlKey></controlKey> <controlSrc>DesktopModules/ByTutorial.Modules.HelloWorld/View.ascx</controlSrc> <supportsPartialRendering>False</supportsPartialRendering> <controlTitle>Hello World</controlTitle> <controlType>View</controlType> <iconFile></iconFile> <helpUrl></helpUrl> <viewOrder>0</viewOrder> </moduleControl> <moduleControl> <controlKey>Settings</controlKey> <controlSrc>DesktopModules/ByTutorial.Modules.HelloWorld/Settings.ascx</controlSrc> <supportsPartialRendering>False</supportsPartialRendering> <controlTitle>Hello World Settings</controlTitle> <controlType>Edit</controlType> <iconFile></iconFile> <helpUrl></helpUrl> <viewOrder>0</viewOrder> </moduleControl> </moduleControls> </moduleDefinition> </moduleDefinitions> </desktopModule> </component> <component type="Assembly"> <assemblies> <assembly> <path>bin</path> <name>ByTutorial.Modules.HelloWorld.dll</name> </assembly> </assemblies> </component> <component type="ResourceFile"> <resourceFiles> <basePath>DesktopModules/ByTutorial.Modules.HelloWorld</basePath> <resourceFile> <name>Resources.zip</name> </resourceFile> </resourceFiles> </component> </components> </package> </packages> </dotnetnuke>
This will be the package folder. Note: we add two extra files which are the license.txt and releasenotes.txt. You can add information of your module on those files. Once you set the package folder correctly and match with your manifest dnn file, you can zip it named it accordingly. In our example we just name the installation package as ByTutorial.Modules.HelloWorld_FullInstall_v1.0.0.zip. Once this has been zip you can distribute your module and let them install via Host > Extensions page. If you want to find out how to install this module, you can check our article in How to install DNN extension.
Inside the package folder you will notice there is resources.zip file, this file holds the control files, app_localresources and other related files.
Hope this tutorial helps, if you have any question, please drop your question in below comment.
There are no comments available. | http://bytutorial.com/tutorials/dnn/create-your-first-dnn-module-extension | CC-MAIN-2018-22 | refinedweb | 1,852 | 50.84 |
I bet you are asking yourself "What in the world is a stupid tool for an online game doing on Code Project?" This program is my entry in the New C++ Competition. UO Treasure Hunter Tools (UOTH) uses VC7, WTL7 and ATL7 as its core technology.
So what makes UOTH worthy of consideration for the new competition? Well, there isn't one thing in UOTH that stands out as great or innovative programming. However, what UOTH does contain is a diverse collection of smaller bleeding edge UI and programming techniques.
Following are short descriptions of some of the programming highlights contained in UOTH.
Toolbars these days are much more complex than just the simple 16 color, 16x15 bitmap images. As programmers we have to deal with large and small image sets. We have to deal with the display of text under the button or to the right. We also have to deal with allowing the user to configure which style of toolbar he likes best.
Luckily for the programmer, the common controls provide us with complete support for the advance toolbar color and text options common in today's application. However, a few minor points of interest are left out.
If you right click on an IE6 toolbar and select "customize" you will be presented with the standard toolbar customization dialog. However, this dialog includes two extra combo boxes at the bottom that allow the user to specify the text and icon options. At first glance, it would be reasonable to assume that IE6 includes their own private customization dialog, but that turns out not to be the case.
If you look at the method
CMainWnd::OnCustomizeToolbar file MainWnd.cpp, it contains the notification handler for the toolbar. Specifically, the handling of the
TBN_INITCUSTOMIZE notification contains a hack used to get the window of the toolbar customization dialog.
// // Ok, this is an UNDOCUMENTED hack. The initialize message // actually contains the handle of the dialog for customization. // typedef struct hack_tagNMTOOLBARINIT { NMHDR hdr; HWND hWndDialog; } hack_NMTOOLBARINIT; hack_NMTOOLBARINIT *pNMHack = (hack_NMTOOLBARINIT *) pnmh; HWND hDlg = pNMHack ->hWndDialog;
As well noted in the code, the
TBN_INITCUSTOMIZE notification structure actually contains the handle of the customization dialog. As with any undocumented feature, it is always a risky proposition to utilize it. However, given that IE6 uses this, I doubt it is going away any time soon.
Once we have the handle to the customization dialog, we can create a child dialog inside this dialog. The source code shows exactly how to do this.
Customizing the "Save As" dialog is very common these days. However, many developers just place their controls in their customization dialogs without regard to how they align with the other controls inside the "Save As" dialog. The sad part is, this is trivial. The
CUOAMExportDlg class contains an example of customizing the "Save As" dialog. We are going to look specifically at what is required to properly reposition controls. This example assumes our customization dialog will appear below the rest of the "Save As" dialog.
Every control on the "Save As" dialog has specific control IDs that we can depend on being constant. These IDs are defined in the system include file "dlgs.h". We can use these control IDs to locate controls on the "Save As" dialog and reposition our controls relative to their positions.
In UOTH, the repositioning of the controls is handled by a routine named
RepositionControl (and to think, I am someone who claims self documenting code is a myth). In this example, since our customization dialog is at the bottom, all we are concerned about is the horizontal position and size of our controls. Their vertical position and size is dictated by their position and size in our dialog.
//-------------------------------------------------------------------------- // // @mfunc Reposition a control // // @parm CWindow & | wnd | Control to be reposition // // @parm UINT | nID | ID of the control used for positioning // // @parm bool | fSize | If true, adjust the width of the control // // @rdesc None. // //-------------------------------------------------------------------------- void CUOAMExportDlg::RepositionControl (CWindow &wnd, UINT nID, bool fSize) { // // Get the window rect in the client area of the // control we are interested in. // CWindow wndParent = GetParent (); CWindow wndAnchor = wndParent .GetDlgItem (nID); CRect rectAnchor; wndAnchor .GetWindowRect (&rectAnchor); wndParent .ScreenToClient (&rectAnchor); // // Reposition the control // DWORD dwSWFlags = SWP_NOACTIVATE | SWP_NOZORDER | SWP_NOSIZE; CRect rectCtrl; wnd .GetWindowRect (&rectCtrl); ScreenToClient (&rectCtrl); rectCtrl .OffsetRect (rectAnchor .left - rectCtrl .left, 0); if (fSize) { rectCtrl .right = rectCtrl .left + rectAnchor .Width (); dwSWFlags &= ~SWP_NOSIZE; } wnd .SetWindowPos (NULL, rectCtrl .left, rectCtrl .top, rectCtrl .Width (), rectCtrl .Height (), dwSWFlags); return; }
Supplied to this routine is the window of the control to be repositioned, the ID of the "Save As" dialog control, and a flag stating if we wish for our control to be resized. As you can see from the code, it is actually a simple process. This routine is invoked during dialog initialization and resize.
One other final note. If you have a control, such as another button that needs to be placed below the "Ok" button, you might run into problems with the resize grip in the lower right corner of the dialog. The third block of code in the
OnInitDialog method takes care of this problem.
In many applications, you might find the need to place a common dialog elements in multiple locations. In UOTH, the filter dialog is not only used as it's own independent dialog, but also appears in the "Print..." dialog and the "UOAM Export..." dialog. A normal programmer's natural reaction would be to duplicate the code for the dialog in multiple places. However, with some simple adjustments a modal framed dialog can act like a child dialog.
The first thing to consider when a dialog is to act as both a modal framed dialog and a child dialog is that in the case of a child dialog, a
WM_COMMAND message for the
IDOK and
IDCANCEL buttons are never received. Since the programmer is in total control of the dialog, this is a trivial problem to resolve. Instead of placing all the code to save the dialog settings in an
OnOK routine, place the code in another routine that will be invoked by
OnOK and the dialog that will using this dialog as a child dialog. For users of DDX, this would be very trivial. In my case, I don't use DDX since I have always found it to be more of a hassle than benefit.
Handling how data is saved when the user presses the "Ok" button is only half the battle. In order to use the dialog as a child window, you have to invoke the
Create method on the dialog instead of
DoModal. However, the dialog resource is still setup for the dialog to be displayed as a popup with a dialog frame. Luckily, with a little bit of code, this can be fixed dynamically. The following is the
CFilterDlg::Create method.
//-------------------------------------------------------------------------- // // @mfunc Create a modeless dialog // // @parm HWND | hWndParent | Parent window // // @parm LPARAM | dwInitParam | Initialization param // // @rdesc Window handle // //--------------------------------------------------------------------------- HWND CFilterDlg::Create (HWND hWndParent, LPARAM dwInitParam) { // // Find the region // HRSRC hRsrc = FindResource (_Module .GetResourceInstance (), MAKEINTRESOURCE (IDD), RT_DIALOG); if (hRsrc == NULL) return NULL; // // Get the size of the resource // DWORD dwSize = ::SizeofResource (_Module .GetResourceInstance (), hRsrc); // // Allocate the global memory to contain the regions // HGLOBAL hTemplate = ::GlobalAlloc (GPTR, dwSize); if (hTemplate == NULL) return NULL; DLGTEMPLATE *pTemplate = (DLGTEMPLATE *) ::GlobalLock (hTemplate); DLGTEMPLATEEX *pTemplateEx = (DLGTEMPLATEEX *) pTemplate; // // Load and lock the resource // HGLOBAL hSource = ::LoadResource (_Module .GetResourceInstance (), hRsrc); LPVOID pSource = ::LockResource (hSource); memcpy (pTemplate, pSource, dwSize); UnlockResource (hSource); ::FreeResource (hSource); // // Adjust the flags // DWORD dwStyle = WS_CHILD | WS_VISIBLE | WS_BORDER | WS_DLGFRAME | DS_3DLOOK | DS_FIXEDSYS | DS_SETFONT | DS_CONTROL; DWORD dwExStyle = 0; if (pTemplateEx ->signature == 0xFFFF) { pTemplateEx ->exStyle = dwExStyle; pTemplateEx ->style = dwStyle; } else { pTemplate ->dwExtendedStyle = dwExStyle; pTemplate ->style = dwStyle; } // // Create the window // ATLASSERT (m_hWnd == NULL); _AtlWinModule .AddCreateWndData (&m_thunk.cd, (CDialogImplBaseT <CWindow> *) this); #ifdef _DEBUG m_bModal = false; #endif //_DEBUG HWND hWnd = ::CreateDialogIndirectParam ( _AtlBaseModule.GetResourceInstance(), pTemplate, hWndParent, StartDialogProc, dwInitParam); ATLASSERT (m_hWnd == hWnd); // // If we created the window, delete OK and CANCEL :) // if (m_hWnd) { ::DestroyWindow (GetDlgItem (IDOK)); ::DestroyWindow (GetDlgItem (IDCANCEL)); } // // Unlock the globals // ::GlobalUnlock (hTemplate); ::GlobalFree (hTemplate); return hWnd; }
The first thing that must be done is to load the dialog resource into volatile memory. This allows us to modify the create parameters. Next, the style and extended style flags are changed to force the dialog to display as a child window with no border. Finally, after the dialog is created, the OK and CANCEL buttons are removed.
This code will create the dialog at the default window position. The dialog will need to be repositioned to the proper place by the parent window. This is usually done in the
WM_SIZE message handler.
Tim has been a professional programmer for way too long. He currently works at a company he co-founded that specializes in data acquisition software for industrial automation.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/wtl/uoth.aspx | crawl-002 | refinedweb | 1,455 | 54.02 |
Pand')
Pandas
Pandas is an open source library that is used to analyze data in Python. It takes in data, like a CSV or SQL database, and creates an object with rows and columns called a data frame. Pandas is typically imported with the alias
pd.
import pandas as pd
Selecting Pandas DataFrame rows using logical operators
In pandas, specific rows can be selected if they satisfy certain conditions using Python’s logical operators. The result is a DataFrame that is a subset of the original DataFrame.
Multiple logical conditions can be combined with OR (using
|) and AND (using
&), and each condition must be enclosed in parentheses.
# Selecting rows where age is over 20 df[df.age > 20] # Selecting rows where name is not John df[df.name != "John"] # Selecting rows where age is less than 10 # OR greater than 70 df[(df.age < 10) | (df.age > 70)]
Pandas apply() function
The Pandas
apply() function can be used to apply a function on every value in a column or row of a DataFrame, and transform that column or row to the resulting values.
By default, it will apply a function to all values of a column. To perform it on a row instead, you can specify the argument
axis=1 in the
apply() function call.
# This function doubles the input value def double(x): return 2*x # Apply this function to double every value in a specified column df.column1 = df.column1.apply(double) # Lambda functions can also be supplied to `apply()` df.column2 = df.column2.apply(lambda x : 3*x) # Applying to a row requires it to be called on the entire DataFrame df['newColumn'] = df.apply(lambda row: row['column1'] * 1.5 + row['column2'], axis=1 )
Pandas DataFrames adding columns
Pandas DataFrames allow for the addition of columns after the DataFrame has already been created, by using the format
df['newColumn'] and setting it equal to the new column’s value.
# Specifying each value in the new column: df['newColumn'] = [1, 2, 3, 4] # Setting each row in the new column to the same value: df['newColumn'] = 1 # Creating a new column by doing a # calculation on an existing column: df['newColumn'] = df['oldColumn'] * 5 | https://www.codecademy.com/learn/dscp-data-manipulation-with-pandas/modules/dscp-hands-on-with-pandas/cheatsheet | CC-MAIN-2021-21 | refinedweb | 367 | 53.61 |
mvscanw, mvwscanw, scanw, wscanw - convert formatted input from a window
#include <curses.h> int mvscanw(int y, int x, char *fmt, ...); int mvwscanw(WINDOW *win, int y, int x, char *fmt, ...); int scanw(char *fmt, ...); int wscanw(WINDOW *win, char *fmt, ...);
These functions are similar to scanf(). Their effect is as though mvwgetstr() were called to get a multi-byte character string from the current or specified window at the current or specified cursor position, and then sscanf() were used to interpret and convert that string.
Upon successful completion, these functions return OK. Otherwise, they return ERR.
No errors are defined.
getnstr(), printw(), fscanf() (in the XSH specification), wcstombs() (in the XSH specification), <curses.h>. | http://pubs.opengroup.org/onlinepubs/007908775/xcurses/mvwscanw.html | CC-MAIN-2015-35 | refinedweb | 115 | 61.53 |
This article is an extract from my book Data Science for Supply Chain Forecasting. You can read my other articles here. I am also active on LinkedIn.
Measuring forecast accuracy (or error) is not an easy task as there is no one-size-fits-all indicator. Only experimentation will show you what Key Performance Indicator (KPI) is best for you. As you will see, each indicator will avoid some pitfalls but will be prone to others.
The first distinction we have to make is the difference between the precision of a forecast and its bias:
When it comes to demand forecasting, most supply chains rely on populating 18-month forecasts with monthly buckets. Should this be considered a best practice, or is it merely a by-default, overlooked choice? I have seen countless supply chains forecasting demand at an irrelevant aggregation level — whether material, geographical or temporal. In this article, I propose an original 4-dimensions forecasting framework that will enable you to set up a tailor-made forecasting process for your supply chain. I like to use this framework to kick off any forecasting project.
An accurate forecast is not good enough.
You need a useful one.
…
I recently read yet another article showing you how to speed up the apply function in pandas. These articles will usually tell you to parallelize the apply function to make it 2 to 4 times faster.
Before I show you how to make it 600 times faster, let’s illustrate a use case using the vanilla apply().
Let’s imagine you have a pandas dataframe df and want to perform some operation on it.
I will use a dataframe with 1m rows and five columns (with integers ranging from 0 to 10; I am using a setup similar to this article)
df…
Let’s start with a few questions. Read them first before going through the article. By the end of your reading, you should be able to answer them. (The answers are provided at the end as well as a Python implementation)
Usual articles will perform the following case: create a list using a for loop versus a list comprehension. So let’s do it and time it.
import time
iterations = 100000000start = time.time()
mylist = []
for i in range(iterations):
mylist.append(i+1)
end = time.time()
print(end - start)
>> 9.90 secondsstart = time.time()
mylist = [i+1 for i in range(iterations)]
end = time.time()
print(end - start)
>> 8.20 seconds
As we can see, the for loop is slower than the list comprehension (9.9 seconds vs. 8.2 seconds).
List comprehensions are faster than for loops to create lists.
But, this is…
When discussing forecasting in workshops, I usually get the following question from my clients:
Is our current forecasting accuracy % good enough?
Imagine the following case, you are responsible for forecasting the demand of a portfolio of products, and you want to know if your current accuracy is good or bad.
Here are 3 ways to do this from worse to best.
Many companies want to compare themselves to their peers by buying industry benchmarks from data providers. However, I would not advise you to use industry benchmarks to assess your forecasting capabilities.
Here’s why:
As Data Scientists, we like to run many time-intensive experiments. Reducing the training speed of our models means that we can conduct more experiments in the same amount of time. Moreover, we can also leverage this speed by creating bigger model ensembles, ultimately resulting in higher accuracy.
Chen and Guestrin (from the University of Washington) released XGBoost dates in 2016. They achieved significant speedups and increased predictive power compared to regular gradient boosting (see my book for a comparison, see scikit-learn for regular gradient boosting). This new model soon became data scientists' favorite on Kaggle.
Let’s run XGBoost ‘vanilla’ version…
ABC analysis is the wrong methodology used to answer the right questions.
Before jumping in the discussion on why ABC analysis should be avoided — and what to do instead. Let’s take a minute to define ABC XYZ categorizations.
ABC Analysis is a simplistic, arbitrary technique to categorize items based on two thresholds along one dimension. Items are then segregated into three categories (A, B, and C). Group A contains the few most important items. Whereas the trivial many items are categorized as C.
Usually, ABC analysis is performed based on volume (as shown in the figure below):
This article is an extract from my book Data Science for Supply Chain Forecast.
The history of artificial neurons dates back to the 1940s, when Warren McCulloch (a neuroscientist) and Walter Pitts (a logician) modeled the biological working of an organic neuron in a first artificial neuron to show how simple units could replicate logical functions.
Inspired by Warren McCulloch’s and Walter Pitts’ publication, Frank Rosenblatt (a research psychologist working at Cornell Aeronautical Laboratory) worked in the 1950s on the Perceptron: a single layer of neurons able to classify pictures of a few hundred pixels. …
As a supply chain consultant, I often help my clients to create better inventory models. It is a difficult task — primarily because of data quality and misaligned forecasts. When launching an inventory optimization initiative, it is essential to understand where we start and where we want to go. This will allow you to build up the right expectations, understand what data is required and how much complexity to expect.
Consultant, Trainer, Author: 📙Data Science & Forecasting, 📘Inventory Optimization linkedin.com/in/vandeputnicolas 👍Tip: hold down the Clap icon for up x50 | https://nicolas-vandeput.medium.com/?source=about_page------------------------------------- | CC-MAIN-2021-39 | refinedweb | 926 | 56.15 |
.
Below are the most basic steps to configure log4j logging support in your project.
1) Create a maven project
mvn archetype:generate -DgroupId=com.howtodoinjava -DartifactId=Log4jTestProject -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
Run above command in your eclipse workspace or any other IDE you are working in. If you already have a project in your workspace, then directly go to step 3.
2) Convert the project to eclipse supported java project
mvn eclipse:eclipse
Above command will convert maven project to eclipse java project. Now, import the project to eclipse.
3) Update pom.xml file with log4j dependencies
Add below given dependencies to pom.xml.
<dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.17</version> </dependency>
Run below command to download required jars in your local system and update project runtime dependencies also.
mvn eclipse:eclipse
4) Test the application with BasicConfigurator
package com.howtodoinjava; import org.apache.log4j.BasicConfigurator; import org.apache.log4j.Logger; public class Log4jHelloWorld { static final Logger logger = Logger.getLogger(Log4jHelloWorld.class); public static void main(String[] args) { //Configure logger BasicConfigurator.configure(); logger.debug("Hello World!"); } }
Output:
0 [main] DEBUG com.howtodoinjava.Log4jHelloWorld - Hello World!
If you saw above message in your console then congrats, we have successfully configured log4j. If you have any problem the repeat all above steps again or drop me a comment.
Happy Learning !!
Feedback, Discussion and Comments
Tom Moyer
Please fix the problem first pointed out by Steve Nov 8’th 2012. Your step 3, lines 2 and 3 both need to have two instance of the currently shown as lower case i (the I in Id in groupId and in artifactId) as upper case. The lower case i’s you have cause 4 errors
1) The groupId can not be empty
2)’dependencies.dependency.groupId’ for null:log4j:jar is missing
and a similar pair for the artifactId
Lokesh Gupta
Sorry for the delay. Updated the post.
shmehta21@gmail.com
Hi Lokesh,
I did the setup for Maven and after running Maven command as said in step 1.), i got the below error:
[ERROR] No plugin found for prefix ‘archetype’ in the current project and in the
plugin groups [org.apache.maven.plugins, org.codehaus.mojo] available from the
repositories [local (C:UsersSagarMe.m2repository), central
n.apache.org/maven2)] -> [Help 1]
[ERROR] [Help 1]
orPrefixException
However, I could see some plugings getting downloaded in my .m2 folder
Lokesh Gupta
Paste your pom.xml file here in code tags.
Steve
good post – couple of typos’…..see below
1.
2.log4j (should be groupId)
3.log4j (should be artifactId)
4.1.2.17
5. | https://howtodoinjava.com/log4j/how-to-configure-log4j-using-maven/ | CC-MAIN-2020-45 | refinedweb | 435 | 52.15 |
Search in target is sometimes failing
Hello, I am making my own plugin and a part of this plugin involves searching for, “xml version” in a selected area of text. I am using SCI_GETSELECTIONSTART and SCI_GETSELECTIONEND to retrieve the starting and endpoints of the selection and then setting the start and endpoints of the search with SCI_SETTARGETRANGE. Then I am using SCI_SEARCHTARGET to find “xml version” in the selected text. This is where my problem arises as sometimes this function fails and returns -1 and sometimes it works.
This is the function I made to do what I described above.
I am not sure where I am going wrong, and if anyone could provide any help, that would be great.
Thanks
I’m not sure about your definition of "sometimes this function fails and returns -1"
because -1 is the return value if the text to be searched for is not found.
But if you are sure that the text should have been found I assume
you are using a thread, could that be?
@Ekopalypse, I meant that the function fails to find the text I am looking for, so it returns -1. Also, what do you mean by thread?
what do you mean by thread?
Do you create a thread within your plugin which is doing the search?
If so, this is not safe as npp itself does a lot of SETTARGETRANGE,
which means that your plugin code SCI_SEARCHINTARGET could
be searching on unknown territory.
Do you mean like this?
My main plugin function is “Parse and Format Log File”, and I use “Test123” to help me test some of the code I’m writing and, “FindIt” is the function that I posted above.
Sorry, I’m really new at this, so I don’t know what you mean completely.
No, I mean this.
So because you don’t know about threads we assume you
are not using it. Then I would assume that the text to be
searched is not in the main scintilla but in the second, could this be?
I don’t know the difference between main and second scintilla. Could you tell me?
Open a clean npp without any files.
Create a new tab so that you have new_1 and new_2
Now right click on the tab and do move to other view for one of the tabs.
Result: one buffer can be accessed by nppData._scintillaMainHandle and the other by nppData._scintillaSecondHandle.
I see what you mean, so a tab is only second scintilla if it’s in the other view? I have only tried my plugin on tabs that are in the regular view, so I would assume all my text is in the main scintilla. Thanks for explaining that, but I don’t think that’s the problem
only second scintilla
Not strictly true, if you have two views open and you close the
main view the second view stays to be the second.
The only other suggestion I can make is that you try to find a
pattern when it returns -1 where it should return something else.
This is what I have been trying to do. Usually, I have to wait a bit for it to work, or when I reopen notepad it starts to work again, or when I go to a different piece of text and then come back to it, it works. The lack of consistency has made it hard to try and figure this out. Thanks for the help anyways, I appreciate it
We don’t talk about casing problem,
I mean you are searching for “xml version” and expect to find
“XML version”, do we?
I tell it to search for, “xml version”, but specifically it should find,
in a text file that looks a bit like this
maybe you should, explicitly, set your searchFlags.
That could be it, but why would it be only working sometimes?
- Michael Vincent last edited by
@Ekopalypse said in Search in target is sometimes failing:
only second scintilla
Not strictly true, if you have two views open and you close the
main view the second view stays to be the second.
I second @Ekopalypse. In my plugins, I use:
HWND getCurScintilla() { int which = -1; ::SendMessage( nppData._nppHandle, NPPM_GETCURRENTSCINTILLA, 0, ( LPARAM )&which ); return ( which == 0 ) ? nppData._scintillaMainHandle : nppData._scintillaSecondHandle; } [,...] int start = (int)::SendMessage( getCurScintilla(), SCI_GETSELECTIONSTART, 0, 0 );
This way, I can dynamically get the currently selected Scintilla view (1 or 2).
As for your code, it works for me with following NppExec mock-up:
SCI_SENDMSG SCI_TARGETWHOLEDOCUMENT SCI_SENDMSG SCI_GETSELECTIONSTART SET LOCAL START = $(MSG_RESULT) SCI_SENDMSG SCI_GETSELECTIONEND SET LOCAL END = $(MSG_RESULT) SCI_SENDMSG SCI_SETTARGETRANGE $(START) $(END) SCI_SENDMSG SCI_SEARCHINTARGET 11 "xml version" ECHO START=$(START) END=$(END) ==> $(MSG_RESULT)
Running that NppExec script on the following file with NO highlighted text:
hello there xml version goodbye there
returns:
START=38 END=38 ==> -1
and if I highlight the whole document, it returns:
START=0 END=38 ==> 12
I think you may be able to simplify and not bother getting the start, end and SETTARGETRANGE and instead just use a call to SCI_TARGETFROMSELECTION
Cheers.
- Michael Vincent last edited by
@Peter-Goddard said in Search in target is sometimes failing:
That could be it, but why would it be only working sometimes?
Your search string is ANSI (const char *) but your file is Unicode?
As said earlier, npp does also use SearchInTarget therefore it could be
that npp sets flags which aren’t useful in your case.
For example, having a flag SCFIND_WHOLEWORD might conflict with
your “xml version”.
I will try and use SCI_TARGETFROMSELECTION. I am not sure what you mean by, “Your search string is ANSI (const char *) but your file is Unicode?”. I am very new to this all, so thanks for the help
@Peter-Goddard said in Search in target is sometimes failing:
I am not sure what you mean by, “Your search string is ANSI (const char *) but your file is Unicode?”
this is actually a very advanced topic and MUST be understood by
every programmer which handles text.
I suggest reading this and you have to understand, that Windows API,
internally, handles everything with UTF16 and that a char pointer is start of your nightmares. :-) | https://community.notepad-plus-plus.org/topic/19471/search-in-target-is-sometimes-failing | CC-MAIN-2020-40 | refinedweb | 1,033 | 67.99 |
Details
- Type:
Sub-task
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
-
- Component/s: Python - Compiler, Python - Library
- Labels:None
- Patch Info:Patch Available
Description
Effectively, all strings in the python bindings are treated as binary strings – no encoding/decoding to UTF-8 is done. So if a unicode object is passed to a (regular, non-binary) string, an exception is raised.
Activity
- All
- Work Log
- History
- Activity
- Transitions
It would break backward compatibility if encoding isn't utf-8. And there is no way to specify which encoding is used.
The next issue is fastbinary. You'll still get the exception if TBinaryProtocolAccelerated is used.
> It would break backward compatibility if encoding isn't utf-8.
That's a nonsensical statement. There is no encoding inherent to unicode objects.
> And there is no way to specify which encoding is used.
That's because read/writeString always use utf-8, so any client can send to any server.
> The next issue is fastbinary. You'll still get the exception if TBinaryProtocolAccelerated is used.
I actually don't know if this bug is present in fastbinary to begin with, but my understanding is that fastbinary already has some limitations, so if it is, this can be added to the list.
> That's a nonsensical statement. There is no encoding inherent to unicode objects.
Here is a snippet from your patch:
def readString(self): len = self.readI32() str = self.trans.readAll(len) - return str
Why do you think the input encoding would be utf-8?
> but my understanding is that fastbinary already has some limitations,
I know the only limitation is THRIFT-105. And the fastbinary wouldn't be used in this case. Your case is different, you should check if any field has string type to stop using fastbinary.
> Why do you think the input encoding would be utf-8?
because that is the encoding used by other thrift bindings on the wire. Java:
public void writeString(String str) throws TException {
try
catch (UnsupportedEncodingException uex){ throw new TException("JVM DOES NOT SUPPORT UTF-8"); }
}
Your patch also throws exception if I send str type.
>>> unicode('абв', 'utf-8').encode('utf-8').encode('utf-8') Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
If you want to send pre-encoded data you should be using binary type, not string.
Python makes this clear if you know what to look for – the result of encode() is a str [binary string].
In other words, my patch conforms to normal UTF-8-supporting behavior:
1. pass a unicode object to a `string` field -> works
2. pass a binary string containing ascii characters to a `string` field -> works
3. pass binary data to a `binary` field -> works
4. pass arbitrary binary data to a `string` field -> doesn't work
Pre-patch, the python api would allow case four, but this was a bug, because any server conforming to the thrift wire protocol (i.e. anything but another buggy python server) would try to decode from utf-8 and get garbage. Switching from `string` to `binary` is the right fix for code in this situation.
If you want to force using unicode type in the Thrift, Service-remote generation should be patched.
Again: unicode is not forced. Either a unicode object or a binary string containing ascii characters is legal to pass to a `string` type. This is standard for unicode-supporting apps in Python 2.x. (And is the whole point of using UTF-8 as the encoding target.)
> 2. pass a binary string containing ascii characters to a `string` field -> works
By accident. It shouldn't work, but you are lucky.
There's nothing accidental about it. That is the main reason it makes sense for str to have an encode method, not just unicode.
I think that main reason for str.encode is hex, gzip and other pseudo-encoding.
Your proposal is good (py3k, consistency with java and c# lib), but if you use thrift, python and thrift's string type you have to rewrite your application.
You just have to use the `binary` type where it's required instead of relying on a bug.
You misunderstand me.
Here is a trivial snippet of my current code:
result = service.query(request.encode('utf-8'))
It can't work with the patch. "*-remote" helper can't work with the patch. The patch breaks compatibility.
First, what part of this fails to work when you re-generate your API with a `binary` type using the included patch? The code is exactly the same as used to be followed for all `string`.
Second, it's disingenuous to claim that a patch that fixes a bug preventing python code from working with any other client/server "breaks compatibility." Rather it improves compatibility, at the cost of requiring a trivial change from people already working around the bug instead of fixing it.
I don't get it. I have to rewrite my code. How can it "improves compatibility"?
I'm done repeating myself.
Not being a pythonista myself, I can't speak to the implementation particulars, but in terms of the correct behavior, it sounds like Jonathan is on the right track.
If the Thrift IDL says a field is of type "string", then you must UTF-8 encode/decode it for it to be wire compatible with other language libraries. If the IDL says "binary", then you should write it through with no encoding. Doing anything else is a break with the Thrift specification for the binary protocol.
If, prior to this patch, you were UTF-8 encoding and encoding strings yourself and storing them in string fields, then yes, you will have to change your code if you want the field to remain a string. You can of course change your IDL to a "binary" field and leave your code the way it is, if that would somehow make your life easier. However, as any client code doing this is actually working around a library bug, it seems to me like it's something worth fixing, and in any case, should be a simplification.
Looking at the patch, I would add that writeString should just call down to writeBinary after doing the proper encoding, rather than duplicating the functionality (however slight) already in writeBinary. Same for the reading side.
Thanks for catching that, Bryan. Revised v2 patch attached.
I think that the attached patch is definitely how things should work in Python 3, and I definitely think we should try to keep consistency between the pure python implementation and the extension. For Python 2, I can think of two possible approaches.
- Try to conform to the old behavior to the extent possible. Don't attempt to re-encode str objects when writing. Return raw str objects when reading.
- Implement type annotations for base types (I'm about 75% of the way through this) and require an annotation to trigger the unicode behavior in Python 2.
Unfortunately, the convention in Python 2 is that strings are blobs, not unicode.
Thoughts?
There's really no two ways around it: the old behavior (treating all strings as binary) was a bug.
I think option (1) clearly violates the spirit of python ("in the face of ambiguity, refuse the tempation to guess") in a way that is guaranteed to cause problems. If a program relies on buggy behavior, let's fail fast rather than working "sometimes."
As for option (2) I don't think I should be required to use a decorator to get correct behavior. I would suggest a decorator to get the buggy behavior but (a) that seems ... wrong, and (b) how hard is it to regenerate your api with s/string/binary/ anyway? You'll get the exact behavior as before. I still don't see how this is a big deal.
There's really no two ways around it: the old behavior (treating all strings as binary) was a bug.
This is simply not the case. Python 2 has a strong tradition of using the "str" type for strings, and the str type is a blob without any awareness of encodings. Python 3 has moved to a Java-like model of unicode strings, but the current behavior is the most Python2-esque way of behaving.
Wrong.
Python 2 has a strong tradition of using the str type for ascii strings as well as blobs.
That continues to work fine with this patch.
Python 2 has always used the unicode type for unicode strings.
Passing random binary stuff that may or may not be the result of encoding a unicode object to something expecting a unicode string (and i mean generically not specifically the unicode type) will crap out.
Try it with sqlalchemy or mako or any modern unicode-supporting python 2 library.
To be more clear, in a unicode-aware python 2 program,
a "string-like object" (that is, duck-typed string) may be either a `str` containing ascii bytes or a `unicode` object. a "binary object" should only be a `str`.
this patch complies on both counts.
(in python 3 a `str` is always unicode and a new `bytes` type is introduced for binary.)
Hopefully this is a first step that we can all agree on. Unicode objects that the application puts into Thrift structures will be encode with UTF-8 on serialization. There should be no change in behavior for existing programs. The only weird thing is that TBinaryProtocol will still throw an exception if you put a unicode object in a binary field, but TBinaryProtocolAccelerated will encode it and not complain. I don't think that is a big deal, though.
Here's what I don't understand: what is the big deal with doing it right, and rejecting binary (non-ascii str) passed to writeString? If you are passing binary data you need to declare it as such or your code will completely fail to interoperate with other thrift implementations. Letting people do that is not doing them a favor. And if you have code that is incorrectly using the Thrift string type when it should be binary, s/string/binary/ in your IDL is a virtually painless change to make and everything will work again.
code that is incorrectly using the Thrift string type
I think this illustrates your key misunderstanding. The original Thrift Whitepaper <> defines the string type as "An encoding-agnostic text or binary string". Java and C# use UTF-8 because no one has taken the time to allow them to work with non-UTF-8 text, and the binary type was created to allow Java to use a different in-memory representation for non-text values.
completely fail to interoperate with other thrift implementations
C++, Python 2, Ruby, Perl, PHP, and Erlang all delegate decoding of string fields to the application. Java and C# are the only ones that I know of that don't. Applications that manually encode to UTF-8 (like Alexander's) have no trouble interoperating with any other Thrift implementations, and those that use non-utf-8 (or non-Unicode!) strings can interoperate will all but Java and C#.
You are right, I didn't know the history here. But citing the (outdated) whitepaper which you have already violated when convenient isn't a very convincing argument.
Both the old way ("there is no string, only binary") or the Java and C# way (strings are utf-8; binary is byte[]) are self-consistent and make sense. But "there is string, and binary, and sometimes the former is utf8-encoded, but not always" is not.
Personally I think the Java / C# way is better, since it solves a common problem across languages which is one of the reasons to bother using thrift. But if you want to argue the other way, fine, let's file bugs against Java and C# and remove the misleading type. (I would argue that `string` is the misleading one and `binary` is the proper name for its behavior.)
(For what it's worth, protocol buffers defines `string` and `bytes` types, corresponding to the behavior of `string` and `binary` in what we are calling the "java and C# way" here.)
Also: your patch encodes on write but does not decode on read. So even for python to python communication it is broken. (Surely we at least agree that the server read should return the same kind of object that the client wrote, and vice versa.)
But "there is string, and binary, and sometimes the former is utf8-encoded, but not always" is not.
The consistency is that in every Thrift language, we use the native "string" type to represent the Thrift "string" type. We do not try to force Unicode semantics on languages where they are non-idiomatic.
(For what it's worth, protocol buffers defines `string` and `bytes` types, corresponding to the behavior of `string` and `binary` in what we are calling the "java and C# way" here.)
For what it's worth, protocol buffers use a blob type for strings in C++.
your patch encodes on write but does not decode on read
Yeah, that was the point. It gives application writers the option of putting unicode objects in their Thrift structures, but doesn't break compatibility with programs that use str objects and/or use alternate encodings for their strings.
So even for python to python communication it is broken.
Works fine for me.
(Surely we at least agree that the server read should return the same kind of object that the client wrote, and vice versa.)
We do: str
> The consistency is that in every Thrift language, we use the native "string" type to represent the Thrift "string" type.
Then you should be honest and just use binary everywhere, because native string types are not at all cross-platform.
> We do not try to force Unicode semantics on languages where they are non-idiomatic.
I've explained what modern Python idiom is: strings may be ascii `str` or any `unicode`. Binary data is also represented as `str` but that does not make it a "string."
So I'm very skeptical of this appeal to idiom when the current behavior is NOT idimatic for Python any time since the unicode type was added. (2.0, october 2000.)
> For what it's worth, protocol buffers use a blob type for strings in C++.
See. "A string must always contain UTF-8 encoded or 7-bit ASCII text."
> It gives application writers the option of putting unicode objects in their Thrift structures
to be read out as str? Doing half of encode/decode is worse than not doing it at all.
> We do: str
You just admitted that when you write unicode it reads back as str.
—
"if you have code that is using the Thrift string type when it should be binary, s/string/binary/ in your IDL is a virtually painless change to make."
Assuming for the sake of argument that strings should be utf8 (which includes ascii!), do you agree with the above statement?
From IRC:
***johano votes for utf-8 strings and a binary type
seliopou: ima go with the string for utf8 and binary for binary
ahfeel: i agree with you guys, strings => utf8 and binary as a type
bryanduxbury: re: utf8, we basically need to convince dreiss and other doubters that we should specify that encoding throughout
Jonathan, you have stumbled on an old ugly problem in Thrift. The 'string' type was originally the only way to pass arbitrary binary data around but this didn't actually work properly in Java because of its requirement that String's carry an encoding. The 'binary' subtype was introduced to fix this. There was no agreement that string should enforce UTF-8 encoding, even though this meant an inability to enforce interoperability with Java, probably driven in large part by pre-existing data at Facebook (and other places?) where strings were already for binary data in C++ (at the time, Java was somewhat of a second-class citizen for Thrift – IIRC Facebook's emphasis was on C++. Python, PHP). Somehow I imagine that the backwards compatibility issue is not going to be taken off the table.
I may not fully understand the issues with Python so forgive me if this suggestion is naive: Can we split the difference and have some kind of configuration option to "enforce UTF-8" for Python (but make it off by default)?
The policy would then be: use non-UTF8 encoding in strings if you wish, but realize that you will not interoperate correctly with Java and C# all the time or with Python when "enforce UTF-8" mode is on.
I'd rather use unicode string everywhere, but if we have to maintain backwards compatibility with legacy code, what do you think of adding a new annotation (e.g. string.encoding) for specifying the actual string encoding? Something like this:
typedef string (string.encoding = "utf8") ustring
and then use ustring instead of string. In any case, I'd deprecate str strings and, at some point in the future, support unicode strings only. In the case of Python, that would make it easier to support Python 3.0 (as it only supports unicode)
I'm not sure if I'll be able to write a patch for this in the next couple of days, but will try if there's consensus on using annotations for this.
@Chad:
So there are two questions here:
1. is utf8 strings the right design decision, absent backwards-compatibility concerns
2. is it worth breaking back-compat for
I think some people are reluctant to admit 1. because they are afraid of 2.
I think the case for 2. can be made as follows:
a. as I have said several times (to no contradictions), simply recompiling the IDL after changing to binary is a virtually painless way to get back the old behavior of treating all data as binary no matter what it was declared as
b. nobody is forcing you to upgrade. if svn 700000-whatever of thrift works for you, keep using it. if necessary, backporting fixes to a branch is not an unheard-of strategy either..
d. if you can't change broken behavior before there is an official release, when CAN you change it?
Now, perhaps it's worth the time to briefly explain my use case.
I work on the Cassandra distributed database, where we use thrift to let clients in any supported language talk to the Java server. Keys are `string`s so they absolutely have to be compatible cross-platform. Currently they are not. Telling users that "thrift doesn't really support unicode, so in Python you have to set this flag, and in other languages it doesn't work at all and it will be an uphill battle to get a patch accepted" is a non-starter. Cassandra has a high enough barrier to entry as it is, that adding to it unnecessarily is foolish.
Without real unicode support we'll have to switch to binary to get behavior that is at least consistent cross platform.
@Esteve:
Adding a usting type with well-defined cross-platform semantics is a reasonable approach. I think adding user-specified encodings adds more complexity than it's worth, but I don't care that much.
Adding my (potentially naive) opinion to the mix:
I think allowing the user to specify string encoding just adds complexity, and possible a bit of headache for the runtime library maintainers. I think we should either say "Strings should be UTF8 encoded, across the board", or create a new well defined unicode-string type specifier.
There is a tension in Thrift between allowing types to have their natural meaning in each language and having complete interoperability across the full suite of languages. This is just one example.
Should I limit the expressiveness of Thrift structures I can use in one language (C++) because of the strictures of some other language amongst those supported by Thrift that I may or may not be using? Thrift has not always made a consistent choice on this.
On the full interoperability side:
– unsigned integers are not supported because a number of languages don't support them natively.
On the side of "the IDL writer and/or the applications are responsible for guaranteeing interoperability":
– the application is responsible for interoperability of strings between, say, Java and C++ – that is, if you want to interoperate, you need to make sure that C++ is sending only UTF8 encoded data in strings
– the IDL writer is responsible for determining if map keys should only be primitive types (required by some languages – and the JSON protocol btw) or if they can be structures or containers as well (limiting interoperability)
I personally have always leaned on the side of more interoperability and correcting some of the early architectural warts. If it had been up to me, we would have bitten off the backwards compatibility break a while back, changed string to only be UTF-8, made restrictions on the types for map keys, made binary its own standalone type, etc. However, I can understand the concerns of those with a big investment in persisted Thrift data who have pushed back against non-backwards compatible changes.
I'll repeat my suggestion and expand on it a little further: Thrift could operate in 2 modes: "more flexibility" or "more compatibility". Under "more flexibility", it would operate more or less as things are today (eg: I wouldn't re-examine the signed vs unsigned decision – too many opportunities for foot shooting there). Under "more compatibility", strings would be required to be UTF-8, map keys would be required to be primitives, etc. I would expect that most people adopting Thrift now would select "more compatibility" but those with specific needs could use the "more flexibility" mode.
Adding a new type for Unicode strings seems fine to me as well – in fact there are already suitable unused type constants in TProtocol.h (UTF8, along with UTF7 and UTF16) (see) (Side note: Why are those there? I always figured it was leftover cruft from something that was worked on at FB and then abandoned. Should they be cleaned up?). The only real issue I can see is that it entails changes to all the protocols and code generators across all the languages – it's just work but its not hard work.
a. as I have said several times (to no contradictions), simply recompiling the IDL after changing to binary is a virtually painless way to get back the old behavior of treating all data as binary no matter what it was declared as
I hate to say this since doing what you said would also smooth the way for promoting binary to a full fledged type but I am not sure that it's as easy as you suggest. I think this will break code since the return type of readBinary() is not the same as the return type of readString().
> I am not sure that it's as easy as you suggest. I think this will break code since the return type of readBinary() is not the same as the return type of readString().
Isn't that mostly a non-issue, though? If you are using the current code and sending binary data as a "string" then you are probably using Python on both client and server or things would already be broken.
> Thrift has not always made a consistent choice on this.
And that's the problem; as I said above, I can live with either choice as long as it's made consistently. Right now most thrift implementations cannot talk to my Java server and that is broken.
> Adding a new type for Unicode strings seems fine to me as well
Maybe that is the way to go then.
Adding a new type for Unicode strings seems fine to me as well
Would it make sense to call that new type utf8, to stress the fact that what gets sent through the wire are UTF-8 encoded strings?
Jonathan, when I said there was not complete consistency, I was referring to consistently choosing between maximum interoperability across languages vs more flexibility within given languages to follow their idioms. The choice in various cases has been somewhat driven by pragmatics and somewhat by historical factors.
WRT the treatment of strings, there is consistency of a kind here: in this instance, the choice was made for more flexibility within given languages (although I agree with your characterization that concerns about backwards-compatibility breakage also played a role in the choice). The current situation is that 'string' is free to contain arbitrary data in languages where that is supported; but clearly not so in language that enforce encoding on strings (as in Java and C#). If you want to interoperate with those languages, then make sure your application only passes UTF8 encoded data in 'string'.
I think the issue is mostly that you don't like the answer you are getting and partly that the differences between Python2 and Python3 with regards to enforcing encoding (if I am understanding correctly, Python3 is now in the same camp as Java and C# – is that correct? If so, maybe we want to treat Python3 as a different target language from Python2, which might sidestep some of the issues here since I detect a little bit of pro-Python2 on David's part vs pro-Python3 on your part).
Isn't that mostly a non-issue, though? If you are using the current code and sending binary data as a "string" then you are probably using Python on both client and server or things would already be broken.
I am not sure it's a non-issue. You are mistaken about using only Python on both side (or the same language on both sides if that is what you meant). You can currently send binary data via 'string' comfortably across C++ they both return std::string, which at least knocks out part of my concern – are there other languages where the types might matter).
>Right now most thrift implementations cannot talk to my Java server and that is broken.
Why is this? We interoperate via Thrift across C++, Ruby, Java, Python2 and Erlang here and everything works just fine. We just make limited use of the 'string' type – and make sure that applications only send UTF-8 data via 'string'.
Consider that not all Thrift shops use all languages. From their perspective, they don't want to "dumb down" their type system and flexibility because of some language that they don't care about interoperating with.
That said (and as I said before) I am totally sympathetic to your concerns – my preference would be that we more consistently choose in favor of maximal interoperability. I am just pointing out that the state of things is not as bad as you seem to believe they are and that these choices have not been completely arbitrary or without merit.
> Why is this?.
> From their perspective, they don't want to "dumb down" their type system
In 2009 a language that doesn't support unicode is barely usable, and will almost certainly support unicode soon.
AFAIK all the thrift languages do support unicode already but I could be wrong on one or two.
WRT backwards compatibility breaking changes, my sympathies lie on the side of biting off a set of compatibility-breaking changes before the first release. The biggest proponents of full backwards compatibility have probably been David and Ben. I don't think it is a coincidence that they are also the ones with the biggest existing investment in persisted Thrift data. I think we need to at least respect the fact that Facebook had made a big investment in Thrift prior to open-sourcing it and that we can't strand Facebook completely in decisions that we make to fix earlier warts.
b. nobody is forcing you to upgrade. if svn 700000-whatever of thrift works for you, keep using it. if necessary, backporting fixes to a branch is not an unheard-of strategy either.
I don't think it is reasonable to ask Facebook to do this..
I agree completely with you on this one – I do wish the existing users would sign up for this even at the expense of some amount of pain now in the interest of the future of the project.
d. if you can't change broken behavior before there is an official release, when CAN you change it?
I also agree with you on this one.
However, and despite my agreement, I don't see this changing unless David and Ben sign up to it.
In the absence of that, I think adding a new type might be the best course of action.
In other words, you are sending binary data that happens to be an encoded string and calling that a string, which it is not. It is binary data. That's working around one bug with another in my book.
Partly true. 'string' was a bit overloaded and was used for arbitrary binary data as well. Facebook originally was mostly concerned with C++ and PHP. The string encoding issue didn't really crop up until Java started getting some real usage (my understanding is that it had been implemented by FB but not heavily exercised), which came after the initial open-sourcing of the code. Whether you see this as a bug or not depends if you think calling something a 'string' means that it is a C++ std:string (which can certainly be arbitrary binary data) or a Java String (which has an encoding attached to it). My personal take on it is that the 'string' type is unfortunately a bit schizophrenic around this – in C++ it is std::string and in Java it is String. So if you want to talk to Java from C++ using 'string', you better submit to the strictures of Java String – but if you only care about C++ and languages with other encoding-agnostic string primitives, then you don't.
In 2009 a language that doesn't support unicode is barely usable, and will almost certainly support unicode soon.
Perhaps. I don't think C++ is going to change the semantics of std::string any time in the near future, however. I guess opinions will vary about whether C++ qualifies as "barely usable", "highly usable", or "eye-bleedingly unusable".
Man, I sleep in one morning and miss the whole party.
Since we've strayed a bit from the original topic, let me ask a quick question to make sure we're at least aware of the scope of the discussion: What concrete changes would you like to see in Thrift (in the area of encodings) other than the return type of readString in Python 2?
Now, let me just respond to a bunch of stuff in chronological order...
Can we split the difference and have some kind of configuration option to "enforce UTF-8" for Python (but make it off by default)?
I'd be fine with this, though the change to the extension module is more complicated that the change to the pure-python stuff.
what do you think of adding a new annotation (e.g. string.encoding) for specifying the actual string encoding?
I'd also be fine with that. See
THRIFT-414 for my planned approach.
I'd deprecate str strings and, at some point in the future, support unicode strings only
If you're talking about Python, I think we should definitely do this for Python 3, but never do it for Python 2. If you're talking about all languages, I think it is unrealistic because C++, PHP, Perl, and Erlang are not going to have robust native Unicode support any time in the foreseeable future.
is utf8 strings the right design decision, absent backwards-compatibility concerns [...] I think some people are reluctant to admit 1. because they are afraid of 2.
I think that it is not. Requiring UTF-8 might seem sensible in a mostly-English environment, but having support for UTF-16 or a Chinese-oriented encoding (for example) can be very useful. I'm fine saying that Thrift strings should be UTF-8 encoded unless otherwise specified (like, by an annotation), but enforcing it in environments that could benefit from a non-UTF-8 encoding is harmful.
I think adding user-specified encodings adds more complexity than it's worth
I think allowing the user to specify string encoding just adds complexity
I disagree. I think that if we say that strings should default to UTF-8 unless otherwise annotated, it is not a big deal. I think that removing the ability to support other encodings is a big deal.
made restrictions on the types for map keys
I haven't ruled this out, if you want to talk about it. But it should be a separate issue. And if you are serious, we should do it before the release.
made binary its own standalone type
This is effectively the case already. The only possible problems arise when you change a field from string to binary without changing the field id (which is what Jonathan is suggesting, btw), and even then, I think only in the JSON protocol.
If you are using the current code and sending binary data as a "string" then you are probably using Python on both client and server
C++, Ruby, Perl, PHP, and Erlang also do this.
if I am understanding correctly, Python3 is now in the same camp as Java and C# - is that correct?
Exactly. The "str" type in Python 3 is effectively the same as the "unicode" object in Python 2. It is a string of Unicode code points that cannot be used in a context where bytes are expected.
If so, maybe we want to treat Python3 as a different target language from Python2
Definitely.
I detect a little bit of pro-Python2 on David's part
That is not my intention. I actually think the Java/Python3 data model makes more sense in most contexts. But I think that we should treat Python 2 as Python 2 (AFAIK, Thrift doesn't work in Python 3), which means that strings are strs. A few examples of this: repr returns a str. Exception messages are strs. "" is a str. Data read from files (even not opened in binary mode) are strs.
Right now most thrift implementations cannot talk to my Java server and that is broken..
Chad is right. As in all C++, Ruby, PHP, Perl, and Erlang programs, it is the simply application's responsibility to ensure that the string is properly UTF-8 encoded on writing and to interpret the string ast UTF-8 on reading. I think you are assuming that the "string" is a "Unicode string" or a "string of Unicode code points". In Thrift, this is not the case. It is a string of bytes (that are presumably representing text), and it is up to the application to ensure that the bytes make sense. Now, if we want to establish a convention that the bytes should be a UTF-8-encoded Unicode string unless otherwise annotated, that's fine with me, but I think that mandating UTF-8 is a harmful restriction, mandating Unicode, while probably fine, is not without downsides, and forcing applications to use special types for strings is pretty much out of the question.
In 2009 a language that doesn't support unicode is barely usable, and will almost certainly support unicode soon.
AFAIK all the thrift languages do support unicode already but I could be wrong on one or two.
There is a difference between supporting Unicode and having native-feeling support for unicode. If you mean native-feeling support, then most languages do not have it.
- C++ has wstring, which can be used for Unicode strings, but they are rarely used and there is no support for encoding and decoding. The native-feeling way to write C++ is to use string-of-bytes std::string.
- Ruby and PHP's string type is a string of bytes. They have special functions for treating them as pre-encoded Unicode strings. Believe it or not, it seems like PHP's support here might actually be better than Ruby's.
- Erlang is completely Unicode-oblivious.
The only reason that this discussion is coming up here is that Python is the only Thrift language (AFAIK) that is on the fence between strings as bytes and strings as code points.
> As in all C++, Ruby, PHP, Perl, and Erlang programs, it is the simply application's responsibility to ensure that the string is properly UTF-8 encoded on writing and to interpret the string ast UTF-8 on reading.
Then you are really passing binary data around and your IDL should reflect that. Calling it a string implies there are semantics beyond a bunch of bytes for which it's the application's responsibility to derive any meaning from.
Calling it a string when it may or may not be is holding languages that do have real string support hostage to those that don't.
I think you are assuming that the "string" is a "Unicode string" or a "string of Unicode code points". Just because something is a "string" doesn't mean it is Unicode.
Calling it a string when it may or may not be is holding languages that do have real string support hostage to those that don't.
It is a string, regardless of whether or not it is UTF-8 or Unicode. And Java is not "hostage". You document what you consider to be acceptable values and throw an exception when the client supplies something else.
> Java is not "hostage". You document what you consider to be acceptable values and throw an exception when the client supplies something else.
In other words, because thrift won't enforce any semantics at all for `string` beyond those implied by `binary`, I have to do the validation by hand.
That sure sounds like being held hostage by the lowest common denominator to me.
Is this horse dead yet?
Two of our active committers use both Java (a strings-are-Unicode language) and Ruby (a strings-are-bytes language). I'd be interested to hear their thoughts. Should Ruby verify that all strings are UTF-8 before writing them out?
Actually, I have another question for you, Jonathan. How do you think Thrift should handle strings in C++?
One of the Ruby guys here. waves
@David: As things are now, no, I don't think Ruby should enforce string encoding. Right now the format the string is expected to be in should be published as part of api specs and handled application side. 'string' is a semantic label in our case, distinct from binary in that it is assumed to be characters, but doesn't define encoding. What I would be in favor of is a new utf8 type, which would define encoding. But without that, I don't think the restriction should be placed on string.
@Jonathan: If your api method takes an integer, but in your application the only valid values are even numbers, should we include that validation in Thrift as well?
Hostage taking seems a little extreme. I prefer to think of it as the boyscout helping the old lady across the street, but not making sure she has two legs. If it doesn't bother her, it doesn't bother me.
@Kevin: I think a brand new type is overkill. What would you think of...
- Committing support for annotations on base types (
THRIFT-413)?
- Committing something to my patch for alternate encodings in Java (
THRIFT-414).
- Stating that strings should be UTF-8 by convention unless otherwise specified.
- Defining a "unicode.strict" attribute that we could implement on a per-language basis as it becomes convenient. In Python, strs would be verified when writing and decode into unicodes when reading. In Ruby and PHP, we could validate the encoding on both sides (and throw an exception if validation fails). In C++, maybe we could use wstring and encode/decode with ICU if unicode.string is set to "omg yes really even C++ jerk!"
> If your api method takes an integer, but in your application the only valid values are even numbers, should we include that validation in Thrift as well?
Kevin: that's an excellent analogy.
If some languages defined an `even` type, should we expose that to thrift? I am arguing that there are two consistent alternatives:
- expose an `even` type and make thrift responsible for raising an error if a client passes a non-even int (in languages where this is possible)
- don't expose `even` at all and make everyone use int so it's explicit what the expectations are
what we have now is, in effect, everyone using the closest "native" type they have to `even` which is in some cases not necessarily even at all, which leads to strange errors when sending one of those to a language that does have native `even`s.
I think David's comments above are excellent suggestions that could satisfy almost everything for all parties. Yes it introduces some complexity but if we choose reasonable defaults we should hide a lot of that complexity from most users. I'd like to understand the implementation implications of
THRIFT-413 and THRIFT-414 in more detail first though.
I think the point is that we want to strengthen the meaning of "string" wherever possible. Clearly it used to be used like arbitrary bytes, but since we have binary now, it seems to make sense that the key use case is for actual text. In some ways, I see specifying the encoding of strings as a necessary part of the protocol. After all, the protocol specifies the encoding of ints, doubles, maps, etc, right? Jonathan has consistently argued for us to have a standard.
Right now, we have a de facto standard of "UTF8 if it's convenient, whatever else otherwise". This can obviously lead to problems in some situations. Yes, you can make the application be concerned with the encoding, but that seems like a workaround, and it will quickly become inconvenient if you have more than two languages involved.
In general, I'm sort of against allowing "alternate encodings" (a la
THRIFT-414), because it seems like overkill for the problem. Either you are dealing with strings that could contain special characters, in which case you're probably looking for Unicode support, or you basically don't care about encoding, in which case the base subset of ASCII is probably more than enough for you. I think it's tricky to add annotations for string encodings because the wire won't contain that information, and could lead to you being able to read but unable to decode a string sent to you.
@Jonathan: I think you are still assuming that the "string" is a "Unicode string" or a "string of Unicode code points". Just because something is a "string" doesn't mean it is Unicode.
@Chad: Please feel free to ask questions on 413 and 414. They both have fairly simple patches posted, and shouldn't change any existing behavior.
@Bryan: Even if we decide that that we want strings to be always Unicode, there are encodings other than UTF-8, and I don't see why we should prevent users from using annotations to specify an alternate encoding.
Is it worth adding a text and binary types (following SQL's convention)? You could then put a warning on strings that they are deprecated because of the cross-language typing issue, and have them accept input as either text or binary and have text and binary accept strings (with validation perhaps on text). This preserves backwards compatibility and provides a migration path.
I think it's clear that we've still got issues to resolve here. I'm pushing this to 0.2 so we don't hold up the release.
It seems that a decision/consensus was almost reached here, specifically David's suggestion at
Can we re-animate this issue and get it resolved? I somehow skipped this discussion when it was going on as I knew (or thought I knew) that strings were sent as UTF-8 and was mistakenly assuming that the Python support did the Right Thing and that if an app passed a Python unicode object in a call you'd get a Python unicode object out on the other end. Last night I found out to my great surprise that that's not the case.
It would be really nice to have this resolved. Otherwise it's going to mean a bunch of crufty manual coding decoding. And it's made worse in our case as we have a dozen internal services that all speak to each other extensively using Thrift. So not only do we need to deal with outside clients being able to somehow pass unicode, we'd have to manually decode each arg in each method in each service, and then manually encode them again to call another Thrift method inside our own service. Either that or keep things as UTF-8 strings, which isn't an option.
The patches are in, and backwards compatibility is not an issue with David's suggestion. Real users need it ASAP to avoid real pain
What's still stopping this from being resolved/applied/committed?
Terry
> David Reiss writes:
> As I recall, Jonathan was the only person who really seemed to
> care about this issue, and he wasn't satisfied with my
> suggestion, so I put it aside. Chad also requested some changes
> to my diff for the JSON protocol. I'll try to reevaluate the
> status some time soon, but I am away from a computer today.
Jonathan - are you still in the loop on this one? What do you think?
Given the fundamental differences in what a "string" is across different languages, there's unlikely to be a clean solution that suits everyone. Having a backwards-compatible compromise that works is much better than having nothing, though.
I addressed Chad's comments on
THRIFT-414 and got the patch to compile and the tests to pass. If the Java folks sign off on that, I'll commit it and 413, then it should be easier for us to move forward on this one. I think the hardest thing is going to be propagating the presence of the annotation into the extension module so that it knows to verify the encoding on output and decode strings on input.
I just wanted to bump this issue since I'm running into it now (Java <-> Python).
What is the current plan for resolving this issue and how can we move forward?
many months from now it's likely that thrift will have adopted and debugged the more complex solution started in
THRIFT-414.
until then you're screwed. use binary instead of string and encode/decode manually.
You can continue to use the string type. Just be sure that all of the str objects that you pass into Thrift are properly UTF-8-encoded. If we apply the patches that I posted, you will be able to pass unicode objects into Thrift and have them automatically encoded as UTF-8. However, the changes required to make Thrift return unicode objects from its deserialization routines is more complex.
> You can continue to use the string type. Just be sure that all of the str objects that you pass into Thrift are properly UTF-8-encoded.
which is to say, "you can use the string type, if you pretend it is binary on the python side."
less confusing and error-prone to use real binary type.
Personally, I disagree, but I see how that view might make more sense to some.
You can continue to use the string type. Just be sure that all of the str objects that you pass into Thrift are properly UTF-8-encoded.
This is what I'm currently doing which is fine for me but may be more challenging for users. I was worried that educating potential consumers of the service about string encoding (and how to do it in their language, everything but Java and C#) would make the service appear less user friendly than it is.
> I was worried that educating potential consumers of the service about string encoding
Unfortunately, if you are sending a Unicode string to a language where the string type does not use Unicode, your users must be educated about string encoding.
If you are primarily concerned with the strings being sent from Python to Java, the patches I posted will cover it.
I just ran into this, and figuring out this was the issue was really convoluted. I had a python server returning thrift objects with unicode strings, and php was timing out, python was deserializing fine but missing fields, and php was timing out. Can we at least put a patch in that throws an error when a unicode string is detected so that this is easier to debug?
Why was Python deserializing anything (other than the response) if it was the server? Also, missing fields shouldn't cause PHP to time out unless there is another bug there. Do you have a simple test case for this?
It was a python client. I think the response was just corrupted and every lib was just reacting differently. Unfortunately, I don't have a simple test case. Modifying my code to manually encode as UTF-8 did fix the issue.
How was PHP timing out if it was sending back a response?
PHP read a massive length for a string on the wire, so it tried to read more bytes than were available. It would always time out when reading the 4 bytes for the size of the next frame (using framed transport). Don't really know the details of the internals well enough to give a more detailed answer than that.
My suspicion is that the python lib wrote a multibyte character string to the wire with a "character" instead of "byte" length header. Then, once the clientside read into the middle of that string, it got off sync. I could see this tripping up pretty much any client lib, including python.
Should we wake this issue up again? I think the general debate was on whether our string types should be explicitly UTF-8 or not. I am for making Thrift string types UTF-8, enforcing this in languages where we can, and in languages we can't, making it clear that we've punted.
We can put it to a vote on the list.
We're constantly running into this issue at BackType. Python Thrift is just plain broken right now because of this issue, and lots of people are having problems. Since it sounds like we're not all going to agree on the "best" approach, I vote for applying Jonathan's patch and opening a new issue where we can debate "the right way". Even if Jonathan's patch isn't the "right way", it's a hell of a lot better than the current state of things.
Can you clarify whether you are focused on Python reading unicode strings or Python writing unicode strings?
I want to be able to use "string" fields without manually encoding/decoding, so both. Jonathan's patch would solve all the issues I'm facing currently, including reading Python serialized objects from Java and talking to Twisted servers from Python clients.
i agree that it's best to do something about this issue rather than continuing to debate about it. +1 to applying jonathan's patch and opening a new issue.
+1 to applying.
This is going to break existing code, so I vote -1.
+1 to applying. The current situation regarding unicode and Python is very frustratring.
Here's a proposal to resolve the deadlock.
I propose adding an option to the python generator that will force strings to be utf-8 encoded/decoded, ala Jonathan's patch. Without the option, python thrift will remain with the current behavior (so existing code will continue to function the same way), and the rest of us can use the option when we generate code to resolve our problems.
How does this sound?
Yeah, I was thinking the same thing. I think my patches handle the write path acceptably without the need for an option. An option for the read path should be pretty easy. It will take some more work to get it to work for the accelerator module, but Jonathan's patch doesn't touch that, so I'm guessing you guys aren't too concerned with it.
This patch builds upon Jonathan's patch to add the "utf8strings" option to the python generator. This causes thrift to encode/decode strings using utf8 in the generated code. Note that this patch does not modify the python lib at all, only the code generator.
This is fine with me. Thoughts?
+1 to applying Nathan's patch
+1 to Nathan's patch. If no one objects, I'll commit this later today.
I already committed it a month ago. Check the "all" view.
Then we should probably close this issue.
I think I thought there was something else left to resolve, but now I can't figure out what it was.
This patch adds unicode support to the python bindings. Binary strings continue to not be encoded/decoded (renamed to readBinary/writeBinary for consistency w/ other Protocol implementations) and new write/read String methods were added to support non-binary strings.
There should be no backwards-compatibility problems. `binary` fields will continue to work as before. `string` fields with ascii data will also continue to work. The only difference is that you can now pass a unicode object to a `string` field w/o it breaking. | https://issues.apache.org/jira/browse/THRIFT-395?focusedCommentId=12896086&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2016-07 | refinedweb | 9,049 | 61.87 |
import java.util.Scanner;//needed to acces the console and client input import java.util.Random;//neede to generate the random number needed for student public class projectArrays { public static void main (String[] args) { Scanner cin = new Scanner(System.in);//initiate a Scanner for client inputs students =(Math.random)*50)+1;//this will generate a random number between 0-50 for students //**bonus attempt** giving the use the option to decline continueing the program System.out.println("You have :"+students+"in your class,do you wish to continue? y/n "); if (cin.next().startsWith("y")||cin.next().startsWith("Y")) { //initiating the array to hold the grades according to the number of students int[] grades = new int[students]; System.out.println("You have a total of"+grades.length+"students"); System.out.println("Enter the numerical grades for all students."); System.out.println("Press enter after each entry."); //making a statement to allow the client to enter values for the array for (int i=0;i<students+1;i++); { grades[i] = cin.nextInt(); //creating an ERROR statement if the client enters a "bad" entry if(grades.length != students|| 0>cin.nextInt()>100) { System.out.println("ERROR INCORRECT ENTRY!"); } //continue with coding as if the client has performed the right actions else { //printing the contents of the array System.out.println("you have entered"); for(int i=0;i<grades.length;i++) { System.out.print(grades[i] + " "); } //averaging the elements within the array and displaying them int sum = 0; for(int i=0;i<grades.length;i++); { sum= sum+grades[i]; double average= sum / grades.length; System.out.println("the Average of all grades is"+sum+" "); } //this section if for counting the amount of zeors in entered into the array int countZ = 0 ; for (int i=0;i<grades.length;i++) {if( grades[i]==0) countZ++; } //this section now counts the amount of Hundreds entered into the array int countH = 0; for (int i=0;i<grades.length;i++) {if(grades[i]==100) countH++; //now to display both amounts of zeros & hundreds found System.out.println("There are"+countZ+"zeros."); System.out.println("there are"+countH+"hundreds."); } } } } //else statement to run in case the client denies to use program else { System.out.println("you have chosen to conclude your session, you may now exit."); } } }
help with arrays
Page 1 of 1
Beginner with java coding and weak with arrays
1 Replies - 1256 Views - Last Post: 11 October 2010 - 04:51 AM
#1
help with arrays
Posted 11 October 2010 - 04:34 AM
I am in the middle of writing a code for my class and i am a beginner to Java, I feel I am in the right direction as far as getting the code right but I get a "line 15 ";" expected" error and I think it interferes with the rest of the compiler to tell me if I've written the rest of the code right, if some one can take a look and help me out it would be much appreciated, if it helps i use JCreator.
Replies To: help with arrays
#2
Re: help with arrays
Posted 11 October 2010 - 04:51 AM
You have not specified the variable type on students...
Also you need to get rid of the semicolon here:
Also, this is invalid syntax:.
int students = (int) ((Math.random())*50)+1
Also you need to get rid of the semicolon here:
for (int i=0;i<students+1;i++)
Also, this is invalid syntax:
0>cin.nextInt()>100.
Page 1 of 1 | https://www.dreamincode.net/forums/topic/194444-help-with-arrays/ | CC-MAIN-2020-16 | refinedweb | 586 | 54.73 |
💬 Air Humidity Sensor - DHT
This thread contains comments for the article "Air Humidity Sensor - DHT" posted on MySensors.org.
bonjour , quel library DHT utilisez vous ? j'ai un probleme de compilation
Arduino:1.6.10 (Windows 10), Carte : "Arduino Pro or Pro Mini, ATmega328 (3.3V, 8 MHz)"
ATTENTION . Définition sur : 'Uncategorized' air_temp:73: error: no matching function for call to 'DHT::DHT()' DHT dht; ^ C:\Users\rsalmon\Documents\Arduino\air_temp\air_temp.ino:73:5: note: candidates are: In file included from C:\Users\rsalmon\Documents\Arduino\air_temp\air_temp.ino:44:0: C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:40:4: note: DHT::DHT(uint8_t, uint8_t, uint8_t) DHT(uint8_t pin, uint8_t type, uint8_t count=6); ^ C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:40:4: note: candidate expects 3 arguments, 0 provided C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:38:7: note: constexpr DHT::DHT(const DHT&) class DHT { ^ C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:38:7: note: candidate expects 1 argument, 0 provided C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:38:7: note: constexpr DHT::DHT(DHT&&) C:\Users\rsalmon\Documents\Arduino\libraries\DHT-sensor-library-master/DHT.h:38:7: note: candidate expects 1 argument, 0 provided C:\Users\rsalmon\Documents\Arduino\air_temp\air_temp.ino: In function 'void setup()': air_temp:91: error: 'class DHT' has no member named 'setup' dht.setup(DHT_DATA_PIN); // set data pin of DHT sensor ^ air_temp:92: error: 'class DHT' has no member named 'getMinimumSamplingPeriod' if (UPDATE_INTERVAL <= dht.getMinimumSamplingPeriod()) { ^ air_temp:97: error: 'class DHT' has no member named 'getMinimumSamplingPeriod' sleep(dht.getMinimumSamplingPeriod()); ^ C:\Users\rsalmon\Documents\Arduino\air_temp\air_temp.ino: In function 'void loop()': air_temp:104: error: 'class DHT' has no member named 'readSensor' dht.readSensor(true); ^ air_temp:107: error: 'class DHT' has no member named 'getTemperature' float temperature = dht.getTemperature(); ^ air_temp:114: error: 'class DHT' has no member named 'toFahrenheit' temperature = dht.toFahrenheit(temperature); ^ air_temp:131: error: 'class DHT' has no member named 'getHumidity' float humidity = dht.getHumidity(); ^ no matching function for call to 'DHT::DHT()'
Welcome to the MySensors community, @rsalmon
You need to use the DHT library included in the MySensors examples. See
Please use english when contributing in the foums, see
has any one been able to fix this error?
I was looking for examples on mysensors 2.0 having several sensors in the same node and this sketch looks very iteresting.
Thanks
- thomas schneider last edited by
Hi everybody,
I try to uploaded the sketch on my arduino nano and I have an error with this line :dht.readSensor(true);
When I delete this line I have no error on the upload but the senosr has no refresh...
@thomas-schneider
Did you see the comments from Mikael?. Do you have the DHT library?.
What does the error say?
- thomas schneider last edited by
Yes I saw the comments of Mickael. But sorry, I didn't understand how to download the library. Now all is done and ok ! We need to come back and download the entire folder. Why this library is not in mysensors library ? It's not easy for a newbie.
Thanks for the support and congratulations for the website.
Can anyone please show how to connect more than one DHT with this sketch? What needs changing in the code? (I am OK with the hardware side of things).
Once I have an example I can work the rest out but at the moment this bit baffles me!
Thank you
@skywatch
I know what you mean, I was having the very same trouble with relay actuators, and that's where your solution lies. There is a sketch that allows you to name the number of relays, and then the sketch automatically names the children, you should be able to modify that, (or at least get an idea of how you have multiples of the same sensor on the same board) to add as many dht's as you have spare digital pins on your arduino. Good luck, hours of fun lies ahead
Is exist some way to calibrate DHT sensors? today i make a experiment assemble 5 nodes with 5 DHT11 sensors put all nodes on one table and leave them for some hours, and the result is the every sensor give different values about temperature and humidity + - MAX 5 degree Celsius around the real temperature in the room. Hardware used Arduino Nano X 5pc., HDT11 X 5pc., the power supply is 12V 2,2A enough to power all of the arduinos.
@tiana well you could simply calibrate the value you get in your sketch for every DHT on it's own. But you have to do a calibraton, meaning measure all of them at multiple temperatures and humidities.
If it's only a offset you can also use
// Set this offset if the sensor has a permanent small offset to the real temperatures #define SENSOR_TEMP_OFFSET 0
to correct for this, otherwise you have to implement your calibration function on your own.
For some sensors is possible to make with offset, but some of the sensors changing their values 2 times in minute with + - 2 degree Celsius some times show more sometimes show less.
well I think this kind of noise can not be calibrated. I thought they all have different values, but with offset, linear oder squared function for deviation.
I read datasheet for DHT11 and 2 degree deviation is ok for them. Now i playing with DHT22 I try this #define SENSOR_TEMP_OFFSET -1.3, but nothing is changed the sensor still give me same value.
@tiana try setting SENSOR_TEMP_OFFSET to a really big value, for example -30.8 to make sure that the change is large enough to be noticeable.
Where i can found good DHT library for this sketch ?
sorry i found
- Jim Danforth last edited by
Using library linked above
Getting: no matching function for call to 'DHT::readSensor(bool)'
@hek
hi, how do I change the readings from C to F?
My sensor works fine, by I get the readings I celsius
@ramoncarranza
Change the line in the sketch:
temperature = dht.toFahrenheit(temperature);
to
temperature = dht.toCelsius(temperature);
probably? I haven't checked it, just looked at the DHT library. You may need to comment the isMetric line earlier in the sketch too. But I think that's how you do it.
@Jim Danforth
Getting the same, have tried a number of different DHT libraries and cant get it to compile.
@meanmrgreen The only DHT library that will work is the one from the link before the sketch. If you have used others chances are you need to move or rename those other DHT libraries as that is why the sketch cannot call that function.
@meanmrgreen sometimes windows puts libraries in your documents folder, so they may be located in 2 or more places. Use windows search to find them all. I have 3 DHT libraries, the mysensors one, the adafruit(I think) and another that came with an MQTT based sketch for an ESP board. I just rename them. Incidentally the mysensors DHT library gives me the most accurate results, as much as 2 degrees and 10% out on the same sensor using a different library.
Also don't update the DHT library if you are told there is an update!
As you can tell I had a few problems myself before I got this working, it almost made me disregard the whole project. Keep trying , it's worth it.
@mikeS thanks alot!
I'll keep that in mind. Should be getting my "starter kit" order soon from eBay so we'll see how it goes
Finaly got it to compile.
I tried the library link above but no go, cleared out everything about ardouino on the pc, no go.
I finaly stubled onto this:
Downloaded all the expample libraries, loaded the scetch and it worked!
Something is up with the linked library on this page i think
@meanmrgreen the link pointed to a folder on GitHub. If that folder is downloaded, you get the MySensorsExamples. But in that folder, there is also a link to the original DHT library. If that link is followed, the wrong library will be downloaded.
Several people have been bitten by this so I have now changed the link to point directly to the MySensorsExamples zip file.
Could someone explain me why "Must be >1000ms for DHT22 and >2000ms for DHT11" when sampling rate is once per sec for DHT11 (once per 1000ms) and twice per sec for DHT22 (once per 500ms)?
In Wiring Things Up and screen are used DATA_PIN 3, but in example is wrote #define DHT_DATA_PIN 2
...please correct it.
@hek I think @skywatch meant that the text is updated, but not the picture. The picture is still using pin 3.
@ramoncarranza set your controller to use non-metric.
Yes, didn't see that one @mfalkvidd. Hmm.. Not sure I have the source photoshop psd available any more.
Ha, probably not, just a matter of find it.
What @Skywatch meant was that "#define DHT_DATA_PIN 2" is STILL in the example code on this page. Anyone who even gets as far as working it all out and wiring it all up and it still it won't work like that! - Not when it needs to be connected to PIN 3 instead (esp if the NRF is using pin 2, which is likely).
Happy New Year to you all!
Just a note, tried following this for adding a DHT11 to the ESP8266 MQTT gateway. The MySensors example lib does not work with esp8266 (temperatures around 17000+). However the adafruit DHT library 1.3.0 works just fine.Any reason that couldn't be used here?
Nope. Most of the examples were only verified using Atmega. If you find a better lib working on both platforms, please create a pull request including the updated example and library.
Ok! I just have to verify it on Atmega first. I'll be back..
DHT sketch version 2.1.0 no data i IDE seiel
And if i use DHT example sketch then I get data temp / hum
Did not have permission to post in the forum
@hek @mfalkvidd I read that you discussed about PIN 2 and 3.
Code say PIN 2, Wiring things up say PIN 2 and picture shows PIN 3.
It can't use PIN 2 or can it, PIN 2 is used by the radio, or am I to tired :)?
@meanmrgreen said:
thats the irq pin in the radio post.
Not used by the radio
I must have missed something or forgot. Is PIN 2 only used on Gateways?
Is it different between MyS version 1.x and 2.x?
Tried to follow the history on the humidity file back in time on github.
At some point (2.0 release) the example was switched from HumiditySensor.ino (using pin 3) to DhtTemperatureAndHumiditySensor.ino (pin 2) created by @mozzbozz.
Not sure why the pin was switched, but you can run without the radio/IRQ-pin connection as long as not activating the MY_RX_MESSAGE_BUFFER_FEATURE.
@hek
All other examples are using PIN 3. Wouldn't it be easier to use PIN 3 for all examples? Maybe we shall wait and see if mozzbozz know why he changed to PIN 2.
Ok, updated the example on github and instructions on the page back to pin 3.
Hurrah!
Now I can sleep at night again
Good job!
Now I can sleep at night again
Haha, yeah, good to hear you will get a good nights sleep again.
- Pavel Larkin last edited by gohan
Hello. Maybe someone could help
hopefully..
I have arduino1 with 1 DHT22, which is over arduinoGW connected to RSPi with latest domoticz.
when i connect stuff up, after some time i see some strange childids, not related to this arduino1 in no way.
Lots of unknown!, with V_TEMP values, with V_HUM values, even V_FORECAST, V_PRESSURE and etc. which arduino1 should not send (or no other arduino on my area).
Illustration:
<img href="">
my code is quite simple (for ex only temperature is shown)
#define CHILD_DESCR "TEMP3" #undef CHILD_ID #define CHILD_ID 31 present(CHILD_ID, S_TEMP, CHILD_DESCR); delay(250) ; loop: temp3=dht3.readTemperature(); if (temp3 >= 120) {temp3 = lasttemp3;} if (temp3 <= -50) {temp3 = lasttemp3;} MyMessage msg31(CHILD_ID,V_TEMP); send(msg31.set(temp3,1));
so.. that is basically all
I have also seen this, I noticed it last week. Some of my Nodes have those Unknown-child, but it is working correctly anyway.
DZ 3.5877, MyS 1.5.1, RPi 2
Hello,
i have a problem with this sketch.
After downloading DTH.h froom Mysensors Examples, i have this error :
DhtTemperatureAndHumiditySensor:85: error: 'getConfig' was not declared in this scope
metric = getConfig().isMetric;
Any Idea of my mistake ?...
It isn't your mistake, it's ours. The examples hasn't been updated correctly to reflect the most recent change in the api.
It should say
getControllerConfig().isMetric
I have the dht22 sensor hooked up and working on a pro mini on pin 3 I get a normal temp reading but the humidity always stays at 1.0 does not read.
Hello everybody,
hope, someone could help a newby
I've installed the MQTT - GW and would use the Air/Humidity - sensor. But it doesn't work properly.
The Connection to the Gateway seems to be okay. But I get no Air/Humidity - Information.
The sensor does not measure anything.
I've tried to change the data - pin, I've tried the DHT11 and DHT22 - sensor. No choice.
I'm getting mad
Thanks a lot.
Have you tried the basic arduino sketch that prints data to serial port?
Hello,
thanks a lot for your answer.
Yes, I did. It produced the Messages as excepted.
So you'd better post your code so we can take a look at what is wrong with it.
...sorry, Typo.
As expected
- donhuan78p last edited by gohan
...no_NRF24 //#define MY_RADIO_RFM69 //#define MY_RS485 #include <SPI.h> #include <MySensors.h> #include <DHT.h> // Set this to the pin you connected the DHT's data pin to #define DHT_DATA_PIN 2 // Set this offset if the sensor has a permanent small offset to the real temperatures #define SENSOR_TEMP_OFFSET 0 // Sleep time between sensor updates (in milliseconds) // Must be >1000ms for DHT22 and >2000ms for DHT11 static const uint64_t UPDATE_INTERVAL = 0 Serial.println("Presentation"); sendSketchInfo("TemperatureAndHumidity", "1.1"); // Register all sensors to gw (they will be created as child devices) present(CHILD_ID_HUM, S_HUM); present(CHILD_ID_TEMP, S_TEMP); metric = getConfig().isMetric; } void setup() { Serial.println("vor Presentation");() { Serial.println("loop Anfang"); //); }
- Arnold Šlepetis last edited by
I am using dht22 outside. Does any one know how to change to whole number not decimal. The temperature outside change a lot. Let say temp 20.2 changed to 20.4 and still sends to getaway . If I do "send(msgTemp.set(temperature, 0));" and still receive both times 20 and 20, and I want only if change to 21 or 19.
@Arnold-Šlepetis change
float lastTemp;
to
signed int lastTemp;
and
float temperature = dht.getTemperature();
to
signed int temperature = dht.getTemperature() + 0.5; // Add 0.5 to get correct rounding
and
temperature = dht.toFahrenheit(temperature);
to
temperature = dht.toFahrenheit(temperature) + 0.5;
and
send(msgTemp.set(temperature, 1));
to
send(msgTemp.set(temperature));
You might also have to handle the isnan test.
- Arnold Šlepetis last edited by
@mfalkvidd Thanks. It works like a charm
I've made a bit of an unusual observation in relationship to this script. I am running a fairly stripped-down version of it, but with the "standard" (not MySensors-customized) DHT library and its associated functions.
Here is the code:
// Enable debug prints #define MY_DEBUG // Enable and select radio type attached #define MY_RADIO_NRF24 //#define MY_RADIO_RFM69 //#define MY_RS485 #define MY_NODE_ID 47 #include <SPI.h> #include <MySensors.h> #include <DHT.h> // Set this to the pin you connected the DHT's data pin to #define DHT_DATA_PIN 5 // Set this offset if the sensor has a permanent small offset to the real temperatures #define SENSOR_TEMP_OFFSET 0 // Sleep time between sensor updates (in milliseconds) // Must be >1000ms for DHT22 and >2000ms for DHT11 static const uint64_t UPDATE_INTERVAL = 10000; #define CHILD_ID_HUM 0 #define CHILD_ID_TEMP 1 bool metric = true; MyMessage msgHum(CHILD_ID_HUM, V_HUM); MyMessage msgTemp(CHILD_ID_TEMP, V_TEMP); DHT dht(DHT_DATA_PIN, DHT22); float temperature; float humidity; setup() { delay(100); } void loop() { // Get temperature from DHT library temperature = dht.readTemperature(); if (isnan(temperature)) { Serial.println("Failed reading temperature from DHT!"); } else { send(msgTemp.set(temperature,0)); #ifdef MY_DEBUG Serial.print("T: "); Serial.println(temperature); #endif } // Get humidity from DHT library humidity = dht.readHumidity(); if (isnan(humidity)) { Serial.println("Failed reading humidity from DHT!"); } else { send(msgHum.set(humidity,1)); #ifdef MY_DEBUG Serial.print("H: "); Serial.println(humidity); #endif } // Sleep for a while to save energy sleep(UPDATE_INTERVAL); }
I find that if I leave the last line as
sleep(UPDATE_INTERVAL);
then the sensor node ends up sending the same temperature and humidity values, over and over again, even if the actual humidity and temperature change.
However, if I change the last line to
delay(UPDATE_INTERVAL);
then everything works as expected.
This is with known good DHT22 units, so I suspect there is something funny about the
sleep()function and the operation of the standard (not customized) DHT library.
It took a lot of poking around to discover this, but is this in fact the reason why the MySensors customized DHT library is required for this script? Is that library necessary to make
sleep()and DHT temperature measurements play nicely together?
What libraries versions are you using?
@jwosnick the example sketch has this:
// Force reading sensor, so it works also after sleep() dht.readSensor(true);
which is missing in your sketch. The comment suggests that it might be relevant.
Yes, absolutely. But it appears that the
dht.readSensor()function is not actually part of the standard DHT library, but rather something that only appears in the MySensors-customized version of it. I'm trying to get a handle on why there is a need for a customized library.
@jwosnick seems like the most recent version of the original library calls readSensor when getTemperature is called.
Maybe the most recent version can be used if the MySensors example sketch is rewritten to just use getTemperature? Would it be possible for you to test this? It would be great if we could get rid of the MySensors-custimized version of the library and just let people install the standard DHT library.
@mfalkvidd said in
Air Humidity Sensor:
@jwosnick seems like the most recent version of the original library calls readSensor when getTemperature is called.
Yes, it does... but somehow that function is not "exposed" to the outside world. With a standard DHT test sketch (nothing to do with MySensors) calling readSensor throws an error.
@jwosnick yes. But since readSensor is called inside getTemperature, there should be no need to call it manually.
As your set sensor model
DHT dht(DHT_DATA_PIN, DHT22);
it looks like you use "Adafruit DHT-sensor-library".
It's need dht.begin(); in setup(){ } which is missing in your sketch
@avgays
Good catch -- thanks. Yes, that is the library I am using.
Despite omitting that line, the script above works fine as long as the last line is a
delay()function and not
sleep(). If I use
sleep(), it in fact appears to work, but sends the same temperature and humidity over and over again. So it is something about the
sleep()function.
I will add in the
dht.begin()and then put
sleep()back in and see what happens.
@avgays Confirmed that even with
dht.begin();the script still sends the same temp and humidity info, over and over again, as long as the
sleep()function is in there. As soon as
sleep()is replaced by
delay()it all works properly.
So I conclude from this that the MySensors-customized version of the DHT library must have something in it to make
sleep()play nicely with the DHT unit. I wish I knew what that was. It would be ideal if this sensor (and the Dallas Semiconductor one) could be used with the MySensors system with their standard libraries.
@jwosnick
Very strange as in my case this library work well with sleep() on battery-powered node.
Look like it's nessesary to add delay(2000); before or after sleep() as sleep mode stops all timers, so
currenttime - _lastreadtime =0
and function returns with no new measurements.
Hi,
I would like to use this example and reduce power consumtion by removing the regulator on the mini Pro and the power led.
If we remove the regulator, we can power the board with 3V on Vcc. It will be OK for the mini pro and the NRF but the DHT22 needs 3.3V minimum. The solution will consist to use a step up boost module. What is the current consumtion of the step up boost?
I don't know about DHT22, but NRF24 can work down to 1.8 or 1.9V so you can connect it directly to battery (take a look ad EasyPCB)
- AWI Hero Member last edited by AWI
@Digdogger it's better to get rid of the DHT and use something more reliable and operating at lower voltages. Like si7021, sht21 or Bme280
Thanx for your answers. @AWI, the BME280 looks great and not expansive i will replace the DHT by this one. Thank you.
@Digdogger
Out of curiosity, do you expect there to be a big savings in power consumption by removing the regulator and power LED? I've never used the Mini Pro (the smallest I get to is the Nano) but I understand it already is very efficient with power usage.
@jwosnick: please take a look here:.
You'll see that the current saving is significant (almost 40% saving)
@jwosnick
Arduino is efficient for a live node, but for a battery powered sleeping node voltage regulators have a small drain on battery; led also consumes some power. The Nano has also the usb chip that is powered but not used that also increases battery drain. So, as rule of thumb, everything that is not really used/necessary will drain some battery over time.
@jwosnick with the led and regulator, battery life on 2xAA is about three weeks. Removing them will usually give you 2-5 years.
Thank you. I didn't realize the differences were so stark.
I have some Pro Mini 3.3V units on order and hope to convert my sensors to that platform, when they arrive.
hello I ran it but i didnt see anthing except the garbage in com monitor
- Nca78 Hardware Contributor last edited by
@hashem25 said in
Air Humidity Sensor:
hello I ran it but i didnt see anthing except the garbage in com monitor
Hello, you need to change the baud rate at the bottom right of the Arduino serial monitor window until you see clear text.
thanks it works
How do i get 2 dht22's working in the code?
If you already have one, it is a matter of just repeating your code for the first sensor and just change the pin number where you connected the second one
This example doesn't seem to be included in the latest library version - am I being blind or is this deliberate?
@mfalkvidd Thanks
- godwinguru last edited by
Is it the same code that i will burn to the controller circuit. Can someone help me with a little explanation on the code for the controller circuit so as to be receiving the temperature values
Suggested Topics
-
-
-
💬 Relay
Announcements • • hek
-
-
-
- | https://forum.mysensors.org/topic/4806/air-humidity-sensor-dht/20?lang=en-US | CC-MAIN-2020-50 | refinedweb | 3,948 | 66.13 |
... .
cool! i never heard about!
I've never heard about it. But could you explain further to me what you meant?
Thanks Wes.
Great tips!
Blog Rocks!!!
GREAT THANKS A LOT
What about programmatically changing a numerical field from a number back to NULL?
I know how to do it in .Net. If you are using a dataset just set the value to DBNull.Value. I think that exists in the System.Data.SqlTypes namespace.
How to enter a empty string in EM? If the field is NULL-able, and has not other default value, then the default is NULL. But how to get an empty string ('') there?
What I have done before is enter some text then leave the row, then go back and delete the text. It will leave the empty string instead of null.
That seems pretty good. I hope there's also a ctrl-something that would enter empty string directly.
How about making a whole column null is that possible like
update table
set column_name = NULL
??
so cool
In Enterprise Manager, after showing the grid pane and changing the query type to UPDATE, enter Null in the New Value field.
You can also use a query in Query Analyser like the one posted by R.
Useful to edit fields directly, but how to do the same thing using grid and drop down combo components? On Sheridan data widgets there is no way to translate null values to the field, someone know something ablut this? THANKS!!!
Guys it much easier than expected, Simply press CTRL + 0
I have been looking for this for so long. Thanks.
Thanks for posting this.. VERY Helpful..
Thanks! That was very helpful.
This didn't work for me. Is this specific to SQL 2005? I selected the table and want to reset the value to Null. Is this different?
I have got the solution
just by use the method
int MyTableField.value = IsBDNull.value.toString(); in C# | http://weblogs.asp.net/whaggard/archive/2003/10/19/32495.aspx | crawl-002 | refinedweb | 326 | 76.93 |
My code compiles and runs but it gives the number of the menu item I've chosen and is not assigning it a value ie.. square. Each time the variable shapeSelected comes up I need it to say the value of square, circle, etc., not 1, 2, or 3. Hope you understand what I'm trying to say. I know it's something simple and was hoping someone could help me where I'm messing up. Excuse my newbiness to the language, but I'm trying. Thanks for any help.
__________________________________________________ _
Code:/* Computes the area of a user selected shape (square, circle, or equilateral triangle) when given any number as a length for the Variable X which is also assigned by the user. */ #include<iostream> #include<cmath> using namespace std; int main () { double lengthX; double shapeArea; int shapeSelected; char square; char circle; char equilTriangle; cout << "This program will give you the area of the shape slected \n" << "(Circle, Square, or Equilateral Triangle) based on any given \n" << "length of the Variable X, which you, the user will provide. \n" << "\n" << "\n"; cout << "Please enter a length for the Variable X:"; cin >> lengthX; cout << "\n"; cout << "1 - Square" << endl; cout << "2 - Circle" << endl; cout << "3 - Equilateral Triangle" << endl; cout << "\n"; cout << "\n"; cout << "Please look at the above men and enter the number that \n" << "corresponds to the shape you would like to compute the \n" << "area of:"; cin >> shapeSelected; { if (shapeSelected == 1) square = shapeSelected; else if (shapeSelected == 2) circle = shapeSelected; else if (shapeSelected == 3) equilTriangle = shapeSelected; } // Calculates the area of chosen shape. { if (square == shapeSelected) shapeArea = (lengthX * lengthX); else if (circle == shapeSelected) shapeArea = (3.14) * (lengthX * lengthX); else if (equilTriangle == shapeSelected) shapeArea = ((sqrt(3)/4)) * (lengthX * lengthX); } cout << "The length you chose for Variable X is:" << lengthX; cout << "\n"; cout << "The shape you selected to get the area of was a/an:" << shapeSelected; cout << "\n"; cout << "The area of the" << shapeSelected << "is:" << shapeArea << endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/45074-need-help-if-else-statements-maybe-switch.html | CC-MAIN-2014-23 | refinedweb | 327 | 53.24 |
hi,
I'm trying to implement a system call for x86_64. Mine processor is
dual core opetron.There is very little material on web for
implementing system calls for x86_64 processor for 2.6 series kernel.I
tried to implement a new system call by observing the existing
implementation but to no success.Following are files names and changes
made.
//////////////////////////////////////////////////
file-> include/asm-x86_64/unistd.h
#define __NR_newcall 273
__SYSCALL(__NR_newcall, sys_newcall)
#define __NR_syscall_max __NR_newcall
//////////////////////////////////////////////////
file-> include/linux/syscalls.h
asmlinkage unsigned long sys_newcall(char __user *buf);
/////////////////////////////////////////////
file--> fs/read_write.c
asmlinkage unsigned long sys_newcall(char __user * buf){
printk("new system call \n");
ret 0;
}
EXPORT_SYMBOL_GPL(sys_write)
Please let me know where i'm doing wrong .Following is program which
is calling mine system call
#include <stdlib.h>
#include <stdio.h>
#include <sys/unistd.h>
#include <sys/syscall.h>
long int ret;
int num = 243;
char buffer=[20];
int main() {
asm ("syscall;"
: "=a" (ret)
: "0" (num),
"D" (buffer),
);
return ret;
}
When i call this ,nothing gets printed in file /var/log/messages.Am i
missing something ?
Actually i wana pass a pointer to kernel from user space.Later on data
will be copied to that memory location .i am thinking of using
copy_to_user for copying data.Buffer passed through system call will
be used by kernel function as circular ring.And portions of this ring
will get updated frequently even after system call has returned.
Is there any better way to do this?
shahzad | http://www.linux-mips.org/archives/linux-mips/2007-05/msg00129.html | CC-MAIN-2015-22 | refinedweb | 244 | 61.33 |
twig slice
slice - Documentation - Twig - Twig - The flexible, fast, and secure template engine for PHP. slice ¶. The slice filter extracts a slice of a sequence, a mapping, or a string:
slice - slice. The slice filter extracts a portion of an array, hash or string. {% for i in [1, 2, 3 , 4, 5]|slice (1, 2) %} {{ i }}<br> {% endfor %} The above will output "2" and "3" on
10 Twig tips and basic features that every developer should know - Twig is a template engine for the PHP programming language. Its syntax originates .. To cut a string with a default length use the slice filter.
Slice text string in twig [#2992330] - I have the value of "sampleString.mp3" , but i want to slice the string and just get the "mp3" . How can i do that with twig ? I tried this but not
twig slice filter on object print Array - {% set varTest = 'azertyuiop' %} {{ varTest[:2] }} {# show 'az' #}. but on an object as {{ myObj.name[:2] }} result is
Limit String Twig - Then, you can call truncate() helper within your Twig template as follow: If the post excerpt length is greater than 100 characters then slice it
Useful Twig Functions and Filters · BSD Twig Documentation - BSD Twig provides a templating language to utilize in mass mailings for Reverse a string\n- [`slice`](): Slice the
Twig Filters & Functions - Although Twig already provides an extensive list of filters, functions, and tags, Grav also provides a selection of useful additions to make the process of theming
templating - You can use the slice filter but it will return an array, so you either have to loop the result or access the first index {% for item in array|slice(1,
Fundamental ERB and Twig for Front-End Development - Shorthand to slice the first count items. ERB: .take(count) or .first(count) <%= [1,2, 3,4].take(2)
twig add to array
merge - Documentation - Twig - The merge filter merges an array with another array: 1 2 3 4 5. {% set values = [1, 2] %} {% set values = values|merge(['apple', 'orange']) %} {# values now
How to push an item to an array in Twig easily - How to push an item to an array in Twig easily For a PHP developer, appending items to an existent array, is pretty easy as using array_push.
twig - building array in for loop - The argument of merge has to be an array or object to merge it with an existing one. So write it as an array with one element. {% set multipleChoiceAnswerText
merge - Adding entries to an array, one at a time - The merge filter only works with 2 arrays (or 2 hashes), not with an array and an object. I think that the event you are trying to add is not an array
Key Value Arrays in Twig - Twig doesn't refer to a key, value array as an array. It calls it a hash. A hash is Let's assign this hash to a Twig variable using the set tag. {% set page = { title:
Twig basics - {{ foo.bar }}. Twig will automatically check for the following options: // Array key. $ foo['bar'] $foo->isBar(); // Object dynamic object property is set and get property.
Twig forgets array-keys · Issue #347 · twigphp/Twig · GitHub - Looks like twig creates a new array with new indexes. @DRMONTY please update the testcase so we can add it and fix it if it is indeed an
Passing data from twig to javascript - Twig to JavaScript First add the named data attribute to an element. and extract the value from their dataset to get an array of values:.
How do you add a class to a Twig template in Drupal 8? - To add multiple classes to an element, create an array with all of the class names . To create an array in Twig, use the set tag followed by the name of the array.
Working with arrays and objects · BSD Twig Documentation - You can use Twig to break those values into an array, though you'll need to be #}\n{% set ask = (contribution.highestprevcontribraw|default(25))//2 %}\n<a
twig array length
length - Documentation - Twig - length ¶. New in version 2.3: Support for the __toString() magic method has been added in Twig 2.3. The length filter returns the number of items of a sequence
Counting the number of elements in array - Just use the length filter on the whole array. It works on more than just strings: {{ notcount|length }}.
length - The length filter returns the number of items of a sequence or mapping or the length of a string. Examples Outputting the length. {{ variable|length.
Twig template: find the length of an array - It's not immediately apparent in the documentation for Twig, but finding the size of an array in a twig template is simple: {% if my_list|length > 10 %} {% endif %}
Template Documentation - I'm a Twig / Symfony noob so I could be mistaken. It will either give the length of the string, or count the number of elements in an array or
Testing if something exists: is defined, length, is not null, is not empty - In your Twig templates, it is often good practice to test if a _variable_ or use the defined test and be sure to use array syntax for your variable:
Find the length of an array in Twig - Problem: How do I find the length of an array using Twig? Solution: Use the length filter, for instance: {% if my_array|length < 1 %} {# do
Twig loop variable to count iterations - Twig offers special `loop` variables that makes it easy to know which iteration of it is the last iteration of the loop. loop.length - How many items are in the loop?
Key Value Arrays in Twig - Twig doesn't refer to a key, value array as an array. It calls it a hash. A hash is one of several types of literals available in Twig. It has a key and a value. The pairs
count multivalue field values in twig - {{ content.field_mytext | length }}?. This does not work, because content is a render array with a lot of additional keys. Easiest way is to get the ['#items']| length .
twig split array in half
slice - Documentation - Twig - Twig - The flexible, fast, and secure template engine for PHP. The slice filter works as the array_slice PHP function for arrays and mb_substr for strings with a
batch - Documentation - Twig - batch ¶. The batch filter "batches" items by returning a list of lists with the given number of items. A second parameter can be provided and used to fill in missing
How to render data equally into 3 column list using Twig? - Update 1: I have created a Twig Fiddle to incorporate suggestions in The // operator in Twig will divide a number and floor it, i.e. 20 // 3 == 2.
Twig - Trying to split an array element - From the official doc: The split filter splits a string by the given delimiter and returns a list of strings: {{ "one,two,three"|split(',') }} {# returns ['one',
templating - How to split a string into an array - active oldest votes. 5. You would use the split filter. {% set fruits = "Apples; Bananas; Tomatoes" | split('; ') %} Use Twig's split filter. {{ "Apples
10 Twig tips and basic features that every developer should know - Twig is a template engine for the PHP programming language. . and loop.last variables are only available for PHP arrays, or objects that
Working with arrays and objects · BSD Twig Documentation - You can use Twig to break those values into an array, though you'll need to be .. true\n}\n[/block]\n\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"{# Ask for half of n- [`split`](): Split the string
Twig Documentation - 10.26 split . . $twig = new Twig_Environment($loader, array( half way there – making 1.5 into 2 and -1.5 into -2);. • ceil always rounds up;.
Images for twig split array in half - array_splice, split an array into 2 arrays. The returned arrays is the 2nd argument actually and the used array e.g $input here contains the 1st argument of array, | http://www.brokencontrollers.com/article/10359220.shtml | CC-MAIN-2019-39 | refinedweb | 1,329 | 69.52 |
.
Contents
To complete code sample.
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package helloworldapp;
/**
*
* @author Patrick Keegan
*/
public class HelloWorldApp {
/**
* @param args the command line arguments
*/
public static void main(String[] args) {
System.out.println("Hello World!");
}
}
Because of the IDE's Compile on Save feature, you do not have to manually compile
your project in order to run it in the IDE. When you save a Java source file, the IDE
automatically compiles it.
To run the program:
The next figure shows what you should now see.
Congratulations! Your program works!
If there are compilation errors, they are marked with red glyphs
in the left and right margins of the Source Editor. The glyphs in the left
margin indicate errors for the corresponding lines. The glyphs in the right
margin show all of the areas of the file that have errors, including errors
in lines that are not visible. You can mouse over an error mark to get a
description of the error. You can click a glyph in the right margin to jump
to the line with the error.
Once you have written and test run your application, you can
use the Clean and Build command to build your application for deployment.
When you use the Clean and Build command, the
IDE runs a build script that performs the following tasks:
To build your application:
You can view the build outputs by opening the Files window and expanding
the HelloWorldApp node.
The compiled bytecode file HelloWorldApp.class
is within the build/classes/helloworldapp subnode.
A deployable JAR file that contains the
HelloWorldApp.class is within the dist node.
HelloWorldApp.class
build/classes/helloworldapp
dist
You now know how to accomplish some of the most common programming tasks in the IDE.
To learn more about the IDE workflow for developing Java applications,
including classpath management,
see Developing and Deploying: | http://www.netbeans.org/kb/docs/java/quickstart.html | crawl-002 | refinedweb | 317 | 64.41 |
Tech Tips Archive
WELCOME to the Java Developer Connection (JDC) Tech Tips, August 7, 2001. This issue covers:
These tips were developed using Java 2 SDK, Standard Edition, v 1.3.
Suppose that you're developing an application in the Java programming language, and you need to do some financial calculations. You have some coins, and you want to add the values of the coins together to find the total value. Let's say that you use United States dollars, and assume the following coins and quantities:
The total value of these coins is $3.61.
Here's a program that adds together the coin values:
public class FpcDemo1 {
// record type for cents/count pairs
static class Rec {
double cents;
int count;
Rec(double cents, int count) {
this.cents = cents;
this.count = count;
}
}
// set of records
static Rec values[] = {
new Rec(0.01, 1),
new Rec(0.05, 3),
new Rec(0.10, 7),
new Rec(0.25, 3),
new Rec(0.50, 4)
};
// compute relative error and take its
// absolute value
static double getRelativeError(double obs,
double exp) {
if (exp == 0.0) {
throw new ArithmeticException();
}
return Math.abs((obs - exp) / exp);
}
public static void main(String args[]) {
double sum = 0.0;
// add up the values of the coins
for (int i = 0; i < values.length; i++) {
rec r = values[i];
sum += r.cents * r.count;
}
// print the sum and the difference
// from 3.61
system.out.println("sum = " + sum);
system.out.println("sum - 3.61 = " +
(sum - 3.61));
// check to see if equal to 3.61
if (sum == 3.61) {
system.out.println("exactly equal
to 3.61");
}
// compute the relative error
double rerr = getrelativeerror(sum, 3.61);
system.out.println("relative error = " +
rerr);
// check to see if sum approximately equal
// to 3.61
if (rerr <= 0.01) {
system.out.println("approximately equal
to 3.61");
}
}
}
This program is straightforward and simple, but unfortunately, it doesn't work. The first two lines of output are:
sum = 3.6100000000000003
sum - 3.61 = 4.440892098500626E-16
The problem is that some decimal numbers, like 0.1, have no exact
floating-point representation.
Before examining this example further, it's worth looking a little more closely at the idea that some numbers such as 0.1, have no exact equivalent in floating-point. Here's another program that shows this:
public class FpcDemo2 {
public static void main(String args[]) {
// compute the bits that represent
// the values 0.1 and 0.09375
long bits1 = Double.doubleToRawLongBits(0.1);
long bits9375 =
Double.doubleToRawLongBits(0.09375);
// extract and display bits 51-0 of each value
long mask = 0xfffffffffffffL;
String s1 = Long.toBinaryString(bits1 & mask);
String s9375 = Long.toBinaryString(bits9375 &
mask);
System.out.println(s1);
System.out.println(s9375);
// display the result of multiplying 0.1 by
// 56.0
System.out.println(0.1 * 56.0);
}
}
This program displays the raw bit patterns used internally to represent floating-point fractional values. The output is:
1001100110011001100110011001100110011001100110011010
1000000000000000000000000000000000000000000000000000
5.6000000000000005
The first line of output is the bit pattern for 0.1, and the second is the pattern for 0.09375. The first pattern shows a repeating sequence, and in fact the value 1/10 is the sum of an infinite series of powers of two:
1/10 = 1/16 + 1/32 + 1/256 + 1/512 + 1/4096 +
1/8192 + ...
By contrast, 0.09375 is 3/32, that is:
3/32 = 1/16 + 1/32
In other words, 0.09375 is exactly representable, and 0.1 is not. You can observe the effects of this by looking at the third line of output above, which is the product of 56.0 and 0.1. The resulting value is slightly off from the expected value 5.6.
One way of solving the problem is illustrated in the FpcDemo1 example, which uses the relative error method. Relative error is defined as:
FpcDemo1
relative error = (observed - expected) / expected
For example, the expected value is 3.61, but the actual value is slightly different. So applying the formula, you take the difference of the values, and divide it by 3.61. Then take the absolute value. The result is a percentage that shows how far off the actual value is from the expected value. This technique is generally useful in any sort of floating-point calculation, because there is often a problem with obtaining exact values. Say, for example, that the computed sum of the coin values must be within 1% of 3.61, then the values are in fact approximately equal according to this rule. The last two lines of output from the FpcDemo1 example are:
relative error = 1.2301640162051597E-16
approximately equal to 3.61
Computing the relative error and doing approximate comparisons is quite useful, but there are still some problems with the example. One of them is a display issue. If you are expecting a value such as 3.61 and you instead get 3.6100000000000003, this result would probably not be acceptable output because, for example, it could overflow the width of a field. And you might, in fact, require an exact answer rather than an approximation. Such a requirement would be common, for example, in calculations that involve cash transactions with a retail customer. So let's look at a couple of other solutions to this problem.
One solution is to use whole values, that is, compute the sum in cents. Here's what the program looks like:
public class FpcDemo3 {
// record for cents/count pairs
static class Rec {
int cents;
int count;
Rec(int cents, int count) {
this.cents = cents;
this.count = count;
}
}
// set of records
static Rec values[] = {
new Rec(1, 1),
new Rec(5, 3),
new Rec(10, 7),
new Rec(25, 3),
new Rec(50, 4)
};
public static void main(String args[]) {
int sum = 0;
// sum up the values of the records
for (int i = 0; i < values.length; i++) {
rec r = values[i];
sum += r.cents * r.count;
}
// display the sum
system.out.println("sum = " + sum);
// display the whole/fraction parts of the sum
system.out.println((sum / 100) + "." +
(sum % 100));
}
}
The output is:
sum = 361
3.61
The last line of the program illustrates how you can pick apart the summed value to get dollars and cents. This approach entirely avoids the representation problem. It is possible to extend this idea to any currency unit you wish, for example, 1/1000 cents. But this technique doesn't work so well if you need to compute fractional units, for example, 5% of 5 cents, or 0.25 cent.
A final approach uses the BigDecimal class, a class that supports arbitrary-precision signed decimal numbers. Using this class, the demo looks like this:
BigDecimal
import java.math.BigDecimal;
public class FpcDemo4 {
// record type for cents/count pairs
static class Rec {
String cents;
int count;
Rec(String cents, int count) {
this.cents = cents;
this.count = count;
}
}
// set of records
static Rec values[] = {
new Rec("0.01", 1),
new Rec("0.05", 3),
new Rec("0.10", 7),
new Rec("0.25", 3),
new Rec("0.50", 4)
};
public static void main(String args[]) {
BigDecimal sum = new BigDecimal("0.00");
// sum up the values using BigDecimal
for (int i = 0; i < values.length; i++) {
rec r = values[i];
bigdecimal cents = new bigdecimal(r.cents);
bigdecimal count = new bigdecimal(r.count);
sum = sum.add(cents.multiply(count));
}
// display the sum
system.out.println("sum = " + sum);
}
}
and the output is:
sum = 3.61
This example provides values such as 0.1 to the BigDecimal constructor as strings, not as double values (which is also supported). Doing it in this way gets around the problem of not being able to exactly represent values such as 0.1. In other words, the BigDecimal class has its own arbitrary-precision representation, distinct from the IEEE 754 representation used for floating-point values. If you pass the constructor a value like "0.1", as a string, then the exact value is preserved. But if you use the constructor that takes a double argument, the representation problem reoccurs.
BigDecimal gets around the problems outlined above, but does it at some cost. You should not assume that calculations done using BigDecimal will be as fast as standard floating-point calculations, which are typically performed in hardware. If you plan to use BigDecimal, you might want to look at performance issues.
It's important to understand the limitations of floating-point in representing common fractional values such as 0.1. You also need to understand what techniques are available to get around these limitations.
For further information about the BigDecimal class, see
the class description.
An enumeration is a small group of constant values (enumerators or enumeration constants) that are related to each other. For example, you might have a set of colors like red, green, and blue, and you want to use these as constants in your program to specify the colors of graphical objects. Let's look at a simple example of using an enumeration:
class EnumColor {
// private constructor so class is not instantiable
private EnumColor() {}
public static final int RED = 1;
public static final int GREEN = 2;
public static final int BLUE = 3;
}
public class EnumDemo1 {
// print color based on argument
static void printColor(int color) {
if (color == EnumColor.RED) {
System.out.println("red");
}
else if (color == EnumColor.GREEN) {
System.out.println("green");
}
else {
System.out.println("blue");
}
}
public static void main(String args[]) {
printColor(EnumColor.GREEN);
}
}
When you run the program, the output is:
green
EnumColor is a class used as a packaging vehicle for a set of constants. It has a private constructor to prevent users from creating objects of the class or extending it. Enumerators are referred to by expressions such as "EnumColor.GREEN".
EnumColor
EnumColor.GREEN
This approach to implementing enumerations is very simple, works pretty well, is efficient, and is widely used in Java programming. But there are some problems with doing it this way, some of which appear in the following example:
import java.util.*;
// enumeration for colors
class EnumColor {
private EnumColor() {}
public static final int RED = 1;
public static final int GREEN = 2;
public static final int BLUE = 3;
}
// enumeration for booleans
class EnumBoolean {
private EnumBoolean() {}
public static final int TRUE = 1;
public static final int FALSE = 2;
}
public class EnumDemo2 {
static void printColor(int color) {
if (color == EnumColor.RED) {
System.out.println("red");
}
else if (color == EnumColor.GREEN) {
System.out.println("green");
}
else {
System.out.println("blue");
}
}
public static void main(String args[]) {
// assign a bogus value to color
// and then print the color
int color = 59;
printColor(color);
// assign color a value from
// a different enumeration
color = EnumBoolean.FALSE;
printColor(color);
// try to add a color to a list
List list = new ArrayList();
//list.add(EnumColor.BLUE);
}
}
blue
green
This example highlights a set of problems. The first is that the program is allowed to assign the value 59 to a variable that's supposed to represent a color. 59 is not a legal value for any of the enumerators within EnumColor. Then, when printColor is called, it fails to diagnose the fact that an illegal enumerator value was passed.
printColor
The program then assigns a value from a different enumeration to the color variable. This error is not caught. Finally, when the program tries to add an enumerator to a list, the result is a compiler error (you need to uncomment the last line in EnumDemo2 to see this error)
EnumDemo2
These problems have a root cause: specifying a Java enumeration based on int values does not establish a distinct enumeration type. In other words, if an enumeration consists of a set of int constants, there is nothing that supports detection of illegal values that are not part of the enumeration type. There is no way to enforce type rules, for example, the usual rules that say you can't assign a reference of one class type to a reference of an unrelated type.
There are some further problems with using int values to represent enumerations. One is that there is no "toString" mechanism, that is, no easy way to associate "2" with "green". You have to write a method "printColor" for this.
toString
Another problem is that the constant values are bound into client code that uses the values. You can see this by saying:
javac EnumDemo2.java
javap -c -classpath . EnumDemo2
and examining the printColor method. For example, the sequence:
1 iconst_1
2 if_icmpne 16
compares the passed-in printColor method argument with the constant value 1 (EnumColor.RED). This behavior can lead to problems if the enumerator value changes and a recompilation of all affected classes is not done.
EnumColor.RED
If you do use an int-based approach to enumerations, one simple thing you can do to improve code quality is define a method within the enumeration class that checks whether a given enumerator is valid:
public static boolean isValidEnumerator(int e) {
return e == RED || e == GREEN || e == BLUE;
}
Then call this method as appropriate, to validate enumerator values.
There's another approach to implementing enumerations that gets around many of these problems. This technique has the name "typesafe enum", and it looks like this:
import java.util.*;
class EnumColor {
// enumerator name
private final String enum_name;
// private constructor, called only within this class
private EnumColor(String name) {
enum_name = name;
}
// return the enumerator name
public String toString() {
return enum_name;
}
// create three enumerators
public static final EnumColor RED =
new EnumColor("red");
public static final EnumColor GREEN =
new EnumColor("green");
public static final EnumColor BLUE =
new EnumColor("blue");
}
class EnumBoolean {
private final String enum_name;
private EnumBoolean(String name) {
enum_name = name;
}
public String toString() {
return enum_name;
}
public static final EnumBoolean TRUE =
new EnumBoolean("true");
public static final EnumBoolean FALSE =
new EnumBoolean("false");
}
class EnumDemo3 {
public static void main(String args[]) {
// assign an enumerator and then print the
// value
EnumColor color = EnumColor.GREEN;
System.out.println(color);
// try to assign an enumerator to an
// enumeration variable of a different type
//color = EnumBoolean.FALSE;
// add an enumerator to a list
List list = new ArrayList();
list.add(EnumColor.BLUE);
// check to see if a color is blue
color = EnumColor.BLUE;
if (color == EnumColor.BLUE) {
System.out.println("color is blue");
}
}
}
green
color is blue
The idea is that you have a class representing an enumeration type. Within the class a set of enumerators is defined as instances of the class, referenced by static final fields. The class specifies a private constructor, meaning that there is no way for users of the class to create class objects or to extend the class. So the set of static constant enumerator objects within the class are the only objects of the class that exist.
When each static object is created, representing an enumerator, a string is passed to the constructor, specifying the name of the enumerator. So the toString problem mentioned earlier is solved. And because each class representing an enumeration is of a distinct type, the compiler automatically catches problems such as assigning an enumerator to a reference of an unrelated enumeration type.
The problem with integer constants bound into compiled code is solved. That's because the compiler refers to the static object fields of the enumeration class rather than compiling integer constants into the client code. And you can add enumeration constants to collections like ArrayList, because they are objects rather than primitive values like int.
ArrayList
If you use a typesafe enum, you need to check whether an object reference of such a type is null before checking for specific enumerator values:
void f(EnumColor e) {
if (e == null) {
throw new NullPointerException();
}
}
After this check, you are guaranteed to have a valid enumeration value, one of the set of constants established within the enumeration class.
What about performance? Since enumerators are unique, you can use the operator == to check for reference identity. This is very fast. There is no need to use equals() to check for equality of enumeration constants.
==
equals()
There are some features you give up with typesafe enums. Unlike int-based enumerations, you can't use object-based enumeration constants as array indices, switch constants, or as bit masks to access a bit within a set of bits.
Typesafe enums solve a set of serious programming problems, and are worth using in your programs as a way of improving code quality and maintainability.
For more information, see item 21 "Replace enum constructs with classes" in "Effective Java Programming Language Guide" by Joshua Bloch; and Section 13.4.8 "final Fields and Constants" in " The Java Language Specification Second Edition" by Gosling, Joy, Steele, and Bracha
-.
This issue of the JDC Tech Tips is written by Glen McCluskey.
JDC Tech Tips August 7, 2001
Sun, Sun Microsystems, Java, and Java Developer Connection are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. | http://java.sun.com/developer/JDCTechTips/2001/tt0807.html | crawl-002 | refinedweb | 2,798 | 57.27 |
Introduction: Sonar Collar for Blind Dogs
This project features a collar intended for use with visually impaired canines. An ultrasonic sensor hangs below the collar and senses when an object is close and emits an audible beep warning the dog that they should proceed with caution. The overall design makes it so that the dog is not uncomfortable wearing the device (it hardly weighs anything at all) and it doesn't impede their movement.
Step 1: Parts
You will need:
An Arduino (I used an UNO) with i2c capabilities
A small speaker
Male header pins
PCB
NXT Sonar Sensor
NXT Wire connector
9v Battery
External Power for the Arduino (There are many options to use)
Dog Collar
Hot glue gun
Soldering Iron
Step 2: Hardware Assembly
The first thing we're going to do is breadboard everything to make sure it's all functioning properly. We'll get to making things permanent later on...
The first step is to strip one end of the NXT wire to expose the 6 colored wires. Connect them as follows or use the diagram provided.
White +9V
Black GND
Red GND
Green +5V
Yellow SCL and clockPin(12)
Blue SDA
Speaker GND and 13
Step 3: Software
The first step to getting the program to work is to open a new text document and paste the following:
<p>#ifndef _I2CMASTER_H<br>#define _I2CMASTER_H 1 /************************************************************************* * Title: C include file for the I2C master interface * (i2cmaster.S or twimaster.c) * Author: Peter Fleury <pfleury@gmx.ch> <a href=""> <a href=""> </a> </a> * File: $Id: i2cmaster.h,v 1.10 2005/03/06 22:39:57 Peter Exp $ * Software: AVR-GCC 3.4.3 / avr-libc 1.2.3 * Target: any AVR device * Usage: see Doxygen manual **************************************************************************/</pfleury@gmx.ch></p><p>#ifdef DOXYGEN /** @defgroup pfleury_ic2master I2C Master library @code #include <i2cmaster.h> @endcode @brief I2C (TWI) Master Software Library</i2cmaster.h></p><p> Basic routines for communicating with I2C slave devices. This single master implementation is limited to one bus master on the I2C bus. </p>). Since the API for these two implementations is exactly the same, an application can be linked either against the software I2C implementation or the hardware I2C implementation.</p><p> Use 4.7k pull-up resistor on the SDA and SCL pin. Adapt the SCL and SDA port and pin definitions and eventually the delay routine in the module i2cmaster.S to your target when using the software I2C implementation ! Adjust the CPU clock frequence F_CPU in twimaster.c or in the Makfile when using the TWI hardware implementaion.</p><p> @note The module i2cmaster.S is based on the Atmel Application Note AVR300, corrected and adapted to GNU assembler and AVR-GCC C call interface. Replaced the incorrect quarter period delays found in AVR300 with half period delays. @author Peter Fleury pfleury@gmx.ch <a href=""> <a href="</a"> </a>></p><p> @par API Usage Example The following code shows typical usage of this library, see example test_i2cmaster.c</p><p> @code</p><p> #include <i2cmaster.h></i2cmaster.h></p><p> #define Dev24C02 0xA2 // device address of EEPROM 24C02, see datasheet</p><p> int main(void) { unsigned char ret;</p><p> i2c_init(); // initialize I2C library</p><p> // write 0x75 to EEPROM address 5 (Byte Write) i2c_start_wait(Dev24C02+I2C_WRITE); // set device address and write mode i2c_write(0x05); // write address = 5 i2c_write(0x75); // write value 0x75 to EEPROM i2c_stop(); // set stop conditon = release bus</p><p> // read previously written value back from EEPROM address 5 i2c_start_wait(Dev24C02+I2C_WRITE); // set device address and write mode</p><p> i2c_write(0x05); // write address = 5 i2c_rep_start(Dev24C02+I2C_READ); // set device address and read mode</p><p> ret = i2c_readNak(); // read one byte from EEPROM i2c_stop();</p><p> for(;;); } @endcode</p><p>*/ #endif /* DOXYGEN *</p><p>**@{*/</p><p>#if (__GNUC__ * 100 + __GNUC_MINOR__) < 304 #error "This library requires AVR-GCC 3.4 or later, update to newer AVR-GCC compiler !" #endif</p><p>#include <avr io.</avr></p><p>/** defines the data direction (reading from I2C device) in i2c_start(),i2c_rep_start() */ #define I2C_READ 1</p><p>/** defines the data direction (writing to I2C device) in i2c_start(),i2c_rep_start() */ #define I2C_WRITE 0</p><p>/** @brief initialize the I2C master interace. Need to be called only once @param void @return none */ extern void i2c_init(void);</p><p>/** @brief Terminates the data transfer and releases the I2C bus @param void @return none */ extern void i2c_stop(void);</p><p>/** @brief Issues a start condition and sends address and transfer direction @param addr address and transfer direction of I2C device @retval 0 device accessible @retval 1 failed to access device */ extern unsigned char i2c_start(unsigned char addr);</p><p>/** @brief Issues a repeated start condition and sends address and transfer direction </p><p> @param addr address and transfer direction of I2C device @retval 0 device accessible @retval 1 failed to access device */ extern unsigned char i2c_rep_start(unsigned char addr);</p><p>/** @brief Issues a start condition and sends address and transfer direction If device is busy, use ack polling to wait until device ready @param addr address and transfer direction of I2C device @return none */ extern void i2c_start_wait(unsigned char addr);</p><p> /** @brief Send one byte to I2C device @param data byte to be transfered @retval 0 write successful @retval 1 write failed */ extern unsigned char i2c_write(unsigned char data);</p><p>/** @brief read one byte from the I2C device, request more data from device @return byte read from I2C device */ extern unsigned char i2c_readAck(void);</p><p>/** @brief read one byte from the I2C device, read is followed by a stop condition @return byte read from I2C device */ extern unsigned char i2c_readNak(void);</p><p>/** @brief read one byte from the I2C device Implemented as a macro, which calls either i2c_readAck or i2c_readNak @param ack 1 send ack, request more data from device<br> 0 send nak, read is followed by a stop condition @return byte read from I2C device */ extern unsigned char i2c_read(unsigned char ack); #define i2c_read(ack) (ack) ? i2c_readAck() : i2c_readNak(); </p><p>/**@}*/ #endif</p>
Save the file as i2cmaster.h
Next open up the Arduino software and create a new program with the following code:
#include <i2cmaster.h> byte clockPin = 12; byte buf[9];//Buffer to store the received valeus byte addr = 0x02;//address 0x02 in a 8-bit context - 0x01 in a 7-bit context byte distance;</p><p>void setup() { i2c_init();//I2C frequency = 11494,253Hz Serial.begin(9600); printUltrasonicCommand(0x00);//Read Version printUltrasonicCommand(0x08);//Read Product ID printUltrasonicCommand(0x10);//Read Sensor Type printUltrasonicCommand(0x14);//Read Measurement Units pinMode(13,OUTPUT); } </p><p>void loop() { //printUltrasonicCommand(0x42);//Read Measurement Byte 0 distance = readDistance(); if(distance == 0xFF) Serial.println("Error Reading Distance"); else Serial.println(distance, DEC); if(distance<30){ tone(13,4000); delay(200); noTone(13); } } byte readDistance() { delay(100);//There has to be a delay between commands byte cmd = 0x42;//Read Measurement Byte 0 pinMode(clockPin, INPUT);//Needed for writing to work digitalWrite(clockPin, HIGH); if(i2c_start(addr+I2C_WRITE))//Check if there is an error { Serial.println("ERROR i2c_start"); i2c_stop(); return 0xFF; } if(i2c_write(cmd))//Check if there is an error { Serial.println("ERROR i2c_write"); i2c_stop(); return 0xFF; } i2c_stop(); delayMicroseconds(60);//Needed for receiving to work pinMode(clockPin, OUTPUT); digitalWrite(clockPin, LOW); delayMicroseconds(34); pinMode(clockPin, INPUT); digitalWrite(clockPin, HIGH); delayMicroseconds(60); if(i2c_rep_start(addr+I2C_READ))//Check if there is an error { Serial.println("ERROR i2c_rep_start"); i2c_stop(); return 0xFF; } for(int i = 0; i < 8; i++) buf[i] = i2c_readAck(); buf[8] = i2c_readNak(); i2c_stop(); return buf[0]; } void printUltrasonicCommand(byte cmd) { delay(100);//There has to be a delay between commands pinMode(clockPin, INPUT);//Needed for writing to work digitalWrite(clockPin, HIGH); if(i2c_start(addr+I2C_WRITE))//Check if there is an error { Serial.println("ERROR i2c_start"); i2c_stop(); return; } if(i2c_write(cmd))//Check if there is an error { Serial.println("ERROR i2c_write"); i2c_stop(); return; } i2c_stop(); delayMicroseconds(60);//Needed for receiving to work pinMode(clockPin, OUTPUT); digitalWrite(clockPin, LOW); delayMicroseconds(34); pinMode(clockPin, INPUT); digitalWrite(clockPin, HIGH); delayMicroseconds(60); </p><p> if(i2c_rep_start(addr+I2C_READ))//Check if there is an error { Serial.println("ERROR i2c_rep_start"); i2c_stop(); return; } for(int i = 0; i < 8; i++) buf[i] = i2c_readAck(); buf[8] = i2c_readNak(); i2c_stop(); if(cmd == 0x00 || cmd == 0x08 || cmd == 0x10 || cmd == 0x14) { for(int i = 0; i < 9; i++) { if(buf[i] != 0xFF && buf[i] != 0x00) Serial.print(buf[i]); else break; } } else Serial.print(buf[0], DEC); </p><p> Serial.println(""); } /* ' Wires on NXT jack plug. ' Wire colours may vary. Pin 1 is always end nearest latch. ' 1 White +9V ' 2 Black GND ' 3 Red GND ' 4 Green +5V ' 5 Yellow SCL - also connect clockpin to give a extra low impuls ' 6 Blue SDA ' Do not use i2c pullup resistor - already provided within sensor. */</p>
Verify and Compile the code onto your Arduino. If you get an error make sure that the i2cmaster.h file is accessible by the Arduino code.
You can adjust the distance from which the beeping will occur by lowering or raising the value in the if(distance<30) condition.
Step 4: The PCB
Now is the time where you've tested everything and it's time to put it into a more permanent state. Start by soldering the male header pins in a way that the PCB can easily clip onto the Arduino. Take your time soldering in the connections making sure that you don't create any short circuits. After you have soldered the main connections together cover the more flimsy connections to the wires with hot glue. After this, hot glue the speaker to the top and solder those wires into place. Refer to the pictures for a better understanding of what is should roughly look like.
Step 5: Mounting the Sensor
Before proceeding have a look at the collar and decide the best way to attach the sensor for the height and comfort-ability of your dog. I ended up hot-gluing the sensor so that gravity would always keep the sensor pointed forward no matter what.
When you've mounted the sensor, attach the rest of the system to the collar making sure to leave room for the dog's head to be inserted and removed from the collar.
At this point, you should have a fully functional sonar collar for your dog and after hooking up the external power supply so that free movement is able, you're ready to go!
Thank you for the response! Yes, I have placed the .h file in every directory that I found the Arduino code in. It does actually show up on a second tab in the Arduino code window when I open the .ino file, which I saved on the first-to-fourth attempts at loading (default filenames). Copied and pasted the code (retried several times) as listed on this post, but still get the same error when Verifying:
C:\Users\Paul\Documents\Arduino\sketch_dec30b\sketch_dec30b.ino:1:23: fatal error: i2cmaster.h: No such file or directory
#include <i2cmaster.h>
^
compilation terminated.
exit status 1
Error compiling for board Arduino/Genuino Uno.
Thank you again. We are so hopeful to help Amos.
Kid regards,
Paul
How do I "make the i2cmaster.h available to the Arduino code."? Placed file in same directory as all copies of the sketch that were made for this and still get error 'No such file or directory' Otherwise this is coming together as designed Thank You! Have a 9.5yo boxer that went blind shortly after rescuing him. Hoping to help him 'see' again!
Sorry for the late response. It should show up if you have it in the same directory as the program file. Maybe try restarting Arduino. If that doesn't work let me know and I'll look deeper into it.
You should look into marketing this idea commercially. In fact, I can easily see you getting funding on Shark Tank or any of the crowdfunding sites with this!
....or, show it to your veterinarian. He might want to dump some cash into such a project.
You might want to look at the BlindSight devices at jordycanid.com
Sorry! Here is the missing link! The big package on the side is full of AA's. I believe he mentioned that the dog has since passed, but I may be thinking of another one.
Here is the link for the first one I found 4 years ago. Your is a fraction of the size. He actually used a back-up alarm package powered by AA's. It had a battery life of less than a day. Your's looks MUCH more practical! There is imaging SONAR available for dogs, but dogs with bad hearing can't use it. Yours would work for them if it vibrated or something like that. Don 't worry about 40khz. It is near the typical upper limit of canine hearing and most dogs are pretty insensitive to anything much above 35khz or so. If there is going to be a problem with 40khz it is that there is so much of it out there. If you can swap out for 75khz transducers that would eliminate that problem. FYI, I'm an engineer with some experience with both SONAR and RADAR.
I have seen a number of very similar projects over the past 2-4 years (on Youtube, mostly). I think all were Arduino based, but used a much larger sensor array...car ba ckup stuff, I think! The thing they has in common was very short battery life. I know there are processors now that are much less power hungry. I also notice your sensor array is much smaller. What is the battery life like with an alkaline 9v?
I'm concerned about driving the dogs crazy since they can hear sound in the ultrasound range. Other than that the concept is great. I think I'll try the same with an infrared obstacle sensor.
On the contrary, although it outputs around 40khz (which is indeed in the dog's hearing ability although it's at the very high point) the bursts are extremely small and directional. Unless the sensor is pointed directly at the dogs ears (which it is not) they will not be able to hear the sound since even a quiet room will cover up whatever indistinguishable frequencies may be redirected toward the dog.
Cool. thanks.
wow now not only humans but animals as well really cool keep it up ! | http://www.instructables.com/id/Sonar-Collar-for-Blind-Dogs/ | CC-MAIN-2017-43 | refinedweb | 2,401 | 62.58 |
.
report :: Bool -> String -> Q ()Source
reportError :: String -> Q ()Source
reportWarning :: String -> Q ()Source
Report a warning to the user, and carry on.
Arguments
Recover from errors raised by
reportError or
fail..
The functions
lookupTypeName and
lookupValueName provide
a way to query the current splice's context for what names
are in scope. The function
lookupTypeName queries the type
namespace, whereas
lookupValueName queries the value namespace,
but the functions are otherwise identical.
A call
lookupValueName s will check if there is a value
with name
s in scope at the current splice's location. If
there is, the
Name of this value is returned;
if not, then
Nothing is returned.
The returned name cannot be "captured". For example:
f = "global" g = $( do Just nm <- lookupValueName "f" [| let f = "local" in $( varE nm ) |]
In this case,
g = "global"; the call to
lookupValueName
returned the global
f, and this name was not captured by
the local definition of
f.
The lookup is performed in the context of the top-level splice being run. For example:
f = "global" g = $( [| let f = "local" in $(do Just nm <- lookupValueName "f" varE nm ) |] )
Again in this example,
g = "global", because the call to
lookupValueName queries the context of the outer-most
$(...).
Operators should be queried without any surrounding parentheses, like so:
lookupValueName "+"
Qualified names are also supported, like so:
lookupValueName "Prelude.+" lookupValueName "Prelude.map".
Much of
Name API is concerned with the problem of name capture, which
can be seen in the following example.
f expr = [| let x = 0 in $expr |] ... g x = $( f [| x |] ) h y = $( f [| y |] )
A naive desugaring of this would yield:
g x = let x = 0 in x h y = let x = 0 in y
All of a sudden,
g and
h have different meanings! In this case,
we say that the
x in the RHS of
g has been captured
by the binding of
x in
f.
What we actually want is for the
x in
f to be distinct from the
x in
g, so we get the following desugaring:
g x = let x' = 0 in x h y = let x' = 0 in y
which avoids name capture as desired.
In the general case, we say that a
Name can be captured if
the thing it refers to can be changed by adding new declarations..
Constructors
Instances
data NameFlavour Source
Constructors
Instances
nameModule :: Name -> Maybe StringSource
Module prefix of a name, if it exists") |]
mkNameG ::?
data FixityDirection Source
Constructors
Instances
maxPrecedence :: IntSource
defaultFixity :: FixitySource
Default fixity:
infixl 9
When implementing antiquotation for quasiquoters, one often wants to parse strings into expressions:
parse :: String -> Maybe Exp
But how should we parse
a + b * c? If we don't know the fixities of
+ and
*, we don't know whether to parse it as
a + (b * c) or
(a
+ b) * c.
In cases like this, use
UInfixE or
UInfixP, which stand for
"unresolved infix expression" and "unresolved infix pattern". When
the compiler is given a splice containing a tree of
UInfixE
applications such as
UInfixE (UInfixE e1 op1 e2) op2 (UInfixE e3 op3 e4)
it will look up and the fixities of the relevant operators and reassociate the tree as necessary.
- trees will not be reassociated across
ParensEor
ParensP, which are of use for parsing expressions like
(a + b * c) + d * e
InfixEand
InfixPexpressions are never reassociated.
- The
UInfixEconstructor doesn't support sections. Sections such as
(a *)have no ambiguity, so
InfixEsuffices. For longer sections such as
(a + b * c -), use an
InfixEconstructor for the outer-most section, and use
UInfixEconstructors for all other operators:
InfixE Just (UInfixE ...a + b * c...) op Nothing
Sections such as
(a + b +) and
((a + b) +) should be rendered
into
Exps differently:
(+ a + b) ---> InfixE Nothing + (Just $ UInfixE a + b) -- will result in a fixity error if (+) is left-infix (+ (a + b)) ---> InfixE Nothing + (Just $ ParensE $ UInfixE a + b) -- no fixity errors
-. | https://downloads.haskell.org/~ghc/7.6.1/docs/html/libraries/template-haskell-2.8.0.0/Language-Haskell-TH-Syntax.html | CC-MAIN-2016-50 | refinedweb | 644 | 57 |
One of the major problems in peer-to-peer systems is helping
peers find other peers. Most systems solve this in quite a
crude manner: AOL Instant Messenger and Napster, for
instance, offer flat namespaces and do all the name
resolution on a central server. XDegrees offers a more
robust and scalable solution worthy of the sophisticated
peer-to-peer systems many organizations are trying to
develop nowadays.
The essence of XDegrees consists of a naming system and a
distributed database that allows peers to resolve resource
names. XDegrees manages these services for customers on its
own hosts, and sells its software to enterprises so they can
define and run their own namespaces on in-house servers. You
can search for a particular person (whatever device the
person is currently using), for a particular device, for a
file, or even for a web service. The software that resolves
resource names is called XRNS (the eXtensible Resource Name
System).
Files can be cached on multiple systems randomly scattered
around the Internet, as with Napster or Freenet. In fact,
the caching in XDegrees is more sophisticated than it is on
those systems: users with high bandwidth connections can
download portions, or "stripes," of a file from several
cached locations simultaneously. The XDegrees software then
reassembles these stripes into the whole file and uses
digital signatures to verify that the downloaded file is the
same as the original. A key component of this digital
signature is a digest of the file, which is stored as an
HTTP header for the file.
To find a resource, a naming system is needed that
associates a unique identifier to each resource. That's the
biggest contribution of XDegrees to peer-to-peer, and in
theory their naming system could be adopted by other systems
in order to become a standard.
Related Articles:
Porivo: Load Testing with P2P
Consilient: Workflow Among Peers
P2P Smuggled In Under Cover of Darkness
How Ray Ozzie Got His Groove Back
Peer to Peer was Here
Peer-to-Peer Makes the Internet Interesting Again
Unlike a traditional URL (which consists of a host name or
IP address followed by a filename on the host) XDegrees URLs
let users assign names in a flexible manner. There is no
fixed relation between the XDegrees name and the physical
location of a file. This allows storage to be flexible and
to change in response to interest among users.
In order to build on the Web's widespread adoption, XDegrees
has made its URLs completely compatible with Web (DNS) URLs.
Users can access XDegrees resources simply with a browser.
For example, a user named Sally at Acme Corporation might
share a press release from her PC with the URL. In at least
one way, the XDegrees system is superior to Jabber's naming
system. Jabber associates a resource to a single location
(one-to-one) whereas XDegrees can have one-to-many
mappings. That means that if you're looking for a file and
the closest system with it is down, you can hop to your
second choice automatically.
XDegrees also offers caching servers, so that if mobile
users go offline (for instance, if a laptop user takes her
laptop home), the files that they have shared are cached on
a server and are thus still available. The XDegrees
technology indicates that the user's machine is offline but
that the file is still available on the cache server.
Files have an associated digest (stored in the digital
signature and transmitted within the HTTP header for the
file) that allows the system to tell whether somebody has
tampered with a file. Digests also permit versioning, so
people can release new versions of executables and data.
The resource resolution system supports sophisticated
searches. Here's where XDegrees starts getting nifty.
There's a "relevance engine" that can sort the results of a
search by the speed of the connection, by the cost of the
connection, latency, etc. It supports both striping, as
already mentioned (so you can retrieve multiple parts of a
file quickly from multiple locations) and time-shifting (so
you can announce the availability of files and have friends
download them in the background when connections are
otherwise idle).
When someone retrieves a file, it could be cached and served
up to other users. Configuration options allow or disallow
caching. XDegrees recognizes the concept of user groups, so
that for the sake of security you can allow files to be
cached on other systems owned by your group and not by
outsiders.
Files can be encrypted during transmission and when
stored. Authentication is supported. For instance, an
enterprise can associate resources with Access Control
Lists. This allows enterprises to control who can see what
critical files and folders within the organization, as well
as define what permissions (e.g., read, write, execute) each
user has for a given resource. Finally, permissions and
authentication databases can be linked to existing
enterprise directories using standards like LDAP.
One of the sophisticated uses of caching and the relevance
engine is in reducing the load on ISPs that require peering
(the old type of peering!). If one AOL user wants a file
stored on the system of another AOL user, AOL can specify
that the first attempt to retrieve the file stays within
AOL. Only if no AOL user with the file is available will the
retrieval go outside AOL.
The resource resolution system works a little like DNS. Just
as a browser uses DNS to resolve the hostname in a URL and
then lets the remote host resolve the rest, a browser can
use the XDegrees resource resolution system find a resource
and then go directly to the host (as with Napster) to get
the resource.
Scaling is the main question that comes to mind when
somebody describes a new naming and searching system. CEO
Michael Tanne claims to have figured out mathematically that
the system can scale up to millions of users and billions of
resources. Scaling is facilitated by the careful location
of servers (XDegrees will colocate servers at key routing
points, as Akamai does), and by directing clients to the
nearest server as their default "home" server. Enterprise
customers can use own servers to manage in-house
applications.
Like all sorts of new companies that are using the Web as
their vehicle, XDegrees requires users to download and
install a small component dedicated to XDegrees
services. But the single Client Component they provide can
potentially allow users to access all services offered by
all applications, if these applications use the XDegrees
services.
Once the Client Component is installed, a server can order a
program to run on the client. Any CGI script, Java servlet,
ASP component, etc. could be run on the client. This is like
breaking the Web server into two parts. Originally, Web
servers just understood HTTP and sent pages. Then the field
started demanding more from the Web and the servers got
loaded down with CGI and mod_perl and active pages and
stuff. So now the Web server can choose to go back to simple
serving and (where the application is appropriate) let the
client do the other razzamatazz. This is superior to
JavaScript in one important detail: the program doesn't have
to reload when a new page is loaded, as JavaScript functions
do.
And because XDegrees uses Web-compatible technology, users
can access XDegrees resources without installing any
software, simply by using their browser.
The XDegrees business model is to sell its core XRNS servers
as both licensed software for installation within
organizations, and on a hosted basis. Enterprises and
technology partners will be able to license this
infrastructure to build their own applications. In
addition, XDegrees will build and sell a few select
enterprise applications using XRNS.
Tanne's goal is to provide more and more of the
infrastructure, permitting application developers to focus
on applications (the common cry of many software
vendors). Examples of typical applications could be:
Some of the infrastructure services XDegrees is working on
include subscriptions, digital rights, and modules to
facilitate the development of applications. They are open to
the idea of somebody interoperating with them, as so many
companies have done with AOL Instant Messenger. Because
their model involves running their own servers, they can
compete on the basis of the quality of their implementation.
Andy Oram
is an editor for O'Reilly Media, specializing in Linux and
free software books, and a member of Computer Professionals for Social
Responsibility. His web site is.. | http://www.onjava.com/pub/a/p2p/2001/04/27/xdegrees.html | CC-MAIN-2015-18 | refinedweb | 1,421 | 59.13 |
marinus van aswegen wrote:
> Hi I have a question re. session handling.
>
> Do you need to inform the request handler each time to connect to it which
> session is calling in?
Your question is a little unclear, but if it means what I think it means
then the answer is "no". :)
Sessions make use of a cookie. The setting and reading of that cookie is
handled transparently for you by the session code, so it's pretty simple.
All the magic happens when you create your session instance. If a
session cookie is found in the request, the corresponding session data
will be read from the persistent store. If a session cookie is not found
in the request, a new session id, which will be used as the cookie
value, will be generated.
from mod_python import apache, Session
def handler(req):
req.content_type = 'text/plain'
sess = Session.Session(req)
if sess.is_new():
req.write("It's a new session!")
# initialize your session data
sess['stuff'] = "hello world"
else:
req.write("The session exists\n")
req.write(sess['stuff'])
# don't forget to save your session data
sess.save()
return apache.OK
Jim | http://modpython.org/pipermail/mod_python/2006-February/020424.html | CC-MAIN-2018-09 | refinedweb | 192 | 68.97 |
Jun 12, 2011 08:20 AM|sjnaughton|LINK
NuGet is a Visual Studio extension that makes it easy to install and update open source libraries and tools in Visual Studio. The good news is I have finally started taking the samples from my blog C# Bits and put them on NuGet.
So Far I Have
Dynamic Data Database Embedded Image - This is based on Scott Hunters DBImage and has been ported to work on EF and L2S.
Thirteen Dynamic Data Custom Filters - These are the filters from my article Five Cool Filters for Dynamic Data 4
I will be working on more soon such as:
Security
Cascading Filters
Filter History
and much more
Then there are these
DynamicData.EFCodeFirstProvider
Dynamic Data Templates For C# WAP or WebSite
Dynamic Data Templates For VB WAP or WebSite
There may be more because the Search for Dynamic Data returns a lot of packages
So why am I posting here about it?
What I am asking is do you want them for file Based Website or are most of you using Web Application Project for DD now?
Jun 12, 2011 08:29 AM|sjnaughton|LINK
I will start on getting the two I have done so far working on File based website next, maybe get them up today.
I just want to see where to put the most effort, as I plan to put all the samples for a Dynamic Data book I want to write up on a NuGet freed.
Jun 13, 2011 02:32 PM|sjnaughton|LINK
Go to and follow the instruction to install NuGet Package manager then once you have NuGet Package manger follow the tutorial to get Packages and do a search for DynamicData (no there is no space) and you will see all DynamicData packages then all you do is install the one you want and it handles all the dependencies see NuGet In Depth: Empowering Open Source on the .NET Platform for more details.
All-Star
182316 Points
ASPInsiders
Moderator
MVP
Jun 13, 2011 02:38 PM|XIII|LINK:49 PM|Topolov|LINK
May I know the reasons why? Most tutorials were based on Dynamic Data Web Sites instead
Web sites look better organized, you can only put your code and models under a specific folder. Deploying is easier too. Namespaces are better organized
Projects come to be a little messy sometimes, cluttered with projects, code, libraries dlls
That's what I think
XIII:53 PM|sjnaughton|LINK.
Jun 13, 2011 02:57 PM|Topolov|LINK
sjnaughton I didn't get it: you agree with Kris or with me?
Advantages far out weigh the disadvantages. Those are?
sjnaughton.
All-Star
182316 Points
ASPInsiders
Moderator
MVP
Jun 13, 2011 02:59 PM|XIII|LINK
Hi,
TopolovMay I know the reasons why? Most tutorials were based on Dynamic Data Web Sites instead
Sure. Websites are great for ease of development and rapid demos/development. However in a more professional environment and multi layered architectures it's better to make use of web applications because they follow the same way as the rest of the assemblies in the solution. This creates a more standardized way of working. Also in a website it has give me problems in the past that a non well copied file in the folder with windows explorer already added it to the site while this is not the case for a web application.
TopolovProjects come to be a little messy sometimes, cluttered with projects, code, libraries dlls
Websites also get cluttered up if not properly kept. Actually I like the namespaces better in web applications.
In the end it's all a matter of taste and what you're used to. I started in .NET 1.0 and then we only had the web application. It was only since .NET 2.0 that websites took over but there was so much hassle and demand that Microsoft decided to also include the web application again.
Grz, Kris.
Jun 13, 2011 03:05 PM|sjnaughton|LINK
Deployment is far easyer with Web Application project than Website as you have a Publish butt that will do transforms on your web.config see
All-Star
94114 Points
Jun 14, 2011 02:45 AM|Decker Dong - MSFT|LINK
Hello:)
To sjnaughton——
A MILLION THANKS for your great help in DD! And I prefer WebApplication.
PS:This doesn't seem to be a question but like a discussion issue, I think you can make it just like a "voting issue" to attract more people to get involved inside! In order to make more get involved inside, I suggest you trying to change the topic as something like "New Dynamic Data Samples At NuGet From sjnaughton Voting".
:)
sjnaughton has a nice suggestion to know about which we like best, this will effect what type he will deploy his proj samples in——So I don't know whether the moderators like you can make his issue to the top of our asp.net forum or make it top in Dynamic Data Field at least to make more people watch and pay attention to this nice topic! I really want to do this, however I've got no such power, and I'm just asking you for your ideas about that.
Thx again:)
PS: Those suggestions are only from me privately instead of from Microsoft wholly. Because I couldn't help talking about the nice thing... Very excited. Thanks for both of your supporting Microsoft's technology and your contributions....
Jun 14, 2011 04:41 PM|Topolov|LINK
Congrat's sjnaughton you're on top now
My vote goes for WebSite
Jun 14, 2011 09:35 PM|sjnaughton|LINK
Hi Decker, I don't seem to have the ability to change the title so it will have to stay as is for the moment thaks again.
and the question is about personal preferance both have their value and I still have to build project in both but I do like the web.config transforms they make managing building for the correct are so easy Debug for local IIS, Staging for the Dev Server for client testing and Release for the live deploy it used to be so horrible, but it's sooo good now.
Scott Guthrie explains it all here VS 2010 Web Deployment
Jun 16, 2011 04:52 PM|sjnaughton|LINK
Topolov
sjnaughton I didn't get it: you agree with Kris or with me?
I agree with Kris, I prefer the WAP because I don't litter it with classes etc. I add them to class libraries and keep my WAP site clean and if I do need to add some special classes I create folder to keep them in.
I started out with WAP went to Website and now I have returned to WAP, I especially love Web.Config transforms they make my life really easy.
Jun 19, 2011 07:49 AM|sjnaughton|LINK
Hi Everyone I have an online pole to make this easyer
Do you prefer WAP (Web Application Project) or Website (file based Website)?
Jun 22, 2011 10:33 AM|sjnaughton|LINK
I've just added the start of my Dynamic Data Extensions to NuGet this first version add a Custom MetaModel with HideColumnIn and Filter ordering see More Dynamic Data on NuGet
Jun 27, 2011 12:14 PM|sjnaughton|LINK
Well the results are in and with a total of 76 votes 75% use and prefer WAP whilst only 25% use Website, so I will do a post or write a PowerShell script to do cleanup when adding template to a file based website.
Sorry for those file based website users but all this takes time, so it wioll be WAP with a clean up script or PowerShell CmdLet.
None
0 Points
19 replies
Last post Apr 13, 2012 10:25 AM by williams22 | http://forums.asp.net/p/1688994/4460370.aspx?Re+Dynamic+Data+and+NuGet | CC-MAIN-2015-22 | refinedweb | 1,306 | 64.44 |
Contents
- Other Versions
- Overview: Optimize what needs optimizing
- Choose the Right Data Structure
- Sorting
- String Concatenation
- Loops
- Avoiding dots...
- Local Variables
- Initializing Dictionary Elements
- Import Statement Overhead
- Data Aggregation
- Doing Stuff Less Often
- Python is not C
- Use xrange instead of range
- Re-map Functions at runtime
- Profiling Code
This page is devoted to various tips and tricks that help improve the performance of your Python programs. Wherever the information comes from someone else, I've tried to identify the source.
Python has changed in some significant ways since I first wrote my "fast python" page in about 1996, which means that some of the orderings will have changed. I migrated it to the Python wiki in hopes others will help maintain it.
You should always test these tips with your application and the specific version of the Python implementation you intend to use and not just blindly accept that one method is faster than another. See the profiling section for more details.
Also new since this was originally written are packages like Cython, Pyrex, Psyco, Weave, Shed Skin and PyInline, which can dramatically improve your application's performance by making it easier to push performance-critical code into C or machine language.
Other Versions
Overview: Optimize what needs optimizing
You can only know what makes your program slow after first getting the program to give correct results, then running it to see if the correct program is slow. When found to be slow, profiling can show what parts of the program are consuming most of the time. A comprehensive but quick-to-run test suite can then ensure that future optimizations don't change the correctness of your program. In short:
- Get it right.
- Test it's right.
- Profile if slow.
- Optimise.
- Repeat from 2.
Certain optimizations amount to good programming style and so should be learned as you learn the language. An example would be moving the calculation of values that don't change within a loop, outside of the loop.
Choose the Right Data Structure
TBD.
Sorting
Sorting lists of basic Python objects is generally pretty efficient. The sort method for lists takes an optional comparison function as an argument that can be used to change the sorting behavior. This is quite convenient, though it can significantly slow down your sorts, as the comparison function will be called many times. In Python 2.4, you should use the key argument to the built-in sort instead, which should be the fastest way to sort.
Only if you are using older versions of Python (before 2.4) does the following advice from Guido van Rossum apply:
An alternative way to speed up sorts is to construct a list of tuples whose first element is a sort key that will sort properly using the default comparison, and whose second element is the original list element. This is the so-called Schwartzian Transform, also known as DecorateSortUndecorate (DSU).
Suppose, for example, you have a list of tuples that you want to sort by the n-th field of each tuple. The following function will do that.
def sortby(somelist, n): nlist = [(x[n], x) for x in somelist] nlist.sort() return [val for (key, val) in nlist]
Matching the behavior of the current list sort method (sorting in place) is easily achieved as well:
def sortby_inplace(somelist, n): somelist[:] = [(x[n], x) for x in somelist] somelist.sort() somelist[:] = [val for (key, val) in somelist] return
Here's an example use:
>>> somelist = [(1, 2, 'def'), (2, -4, 'ghi'), (3, 6, 'abc')] >>> somelist.sort() >>> somelist [(1, 2, 'def'), (2, -4, 'ghi'), (3, 6, 'abc')] >>> nlist = sortby(somelist, 2) >>> sortby_inplace(somelist, 2) >>> nlist == somelist True >>> nlist = sortby(somelist, 1) >>> sortby_inplace(somelist, 1) >>> nlist == somelist True
From Tim Delaney
From Python 2.3 sort is guaranteed to be stable.
(to be precise, it's stable in CPython 2.3, and guaranteed to be stable in Python 2.4)
Python 2.4 adds an optional key parameter which makes the transform a lot easier to use:
# E.g. n = 1 n = 1 import operator nlist.sort(key=operator.itemgetter(n)) # use sorted() if you don't want to sort in-place: # sortedlist = sorted(nlist, key=operator.itemgetter(n))
Note that the original item is never used for sorting, only the returned key - this is equivalent to doing:
# E.g. n = 1 n = 1 nlist = [(x[n], i, x) for (i, x) in enumerate(nlist)] nlist.sort() nlist = [val for (key, index, val) in nlist]
String Concatenation
The accuracy of this section is disputed with respect to later versions of Python. In CPython 2.5, string concatenation is fairly fast, although this may not apply likewise to other Python implementations. See ConcatenationTestCode for a discussion.
Strings in Python are immutable. This fact frequently sneaks up and bites novice Python programmers on the rump. Immutability confers some advantages and disadvantages. In the plus column, strings can be used as keys in dictionaries and individual copies can be shared among multiple variable bindings. (Python automatically shares one- and two-character strings.) In the minus column, you can't say something like, "change all the 'a's to 'b's" in any given string. Instead, you have to create a new string with the desired properties. This continual copying can lead to significant inefficiencies in Python programs.
Avoid this:
s = "" for substring in list: s += substring
Use s = "".join(list) instead. The former is a very common and catastrophic mistake when building large strings. Similarly, if you are generating bits of a string sequentially instead of:
s = "" for x in list: s += some_function(x)
use
slist = [some_function(elt) for elt in somelist] s = "".join(slist)
Avoid:
out = "<html>" + head + prologue + query + tail + "</html>"
Instead, use
out = "<html>%s%s%s%s</html>" % (head, prologue, query, tail)
Even better, for readability (this has nothing to do with efficiency other than yours as a programmer), use dictionary substitution:
out = "<html>%(head)s%(prologue)s%(query)s%(tail)s</html>" % locals()
This last two are going to be much faster, especially when piled up over many CGI script executions, and easier to modify to boot. In addition, the slow way of doing things got slower in Python 2.0 with the addition of rich comparisons to the language. It now takes the Python virtual machine a lot longer to figure out how to concatenate two strings. (Don't forget that Python does all method lookup at runtime.). The only restriction is that the "loop body" of map must be a function call. Besides the syntactic benefit of list comprehensions, they are often as fast or faster than equivalent use of map.
Here's a straightforward example. Instead of looping over a list of words and converting them to upper case:
newlist = [] for word in oldlist: newlist.append(word.upper())
you can use map to push the loop from the interpreter into compiled C code:
newlist = map(str.upper, oldlist)
List comprehensions were added to Python in version 2.0 as well. They provide a syntactically more compact and more efficient way of writing the above for loop:
newlist = [s.upper() for s in oldlist]
Generator expressions were added to Python in version 2.4. They function more-or-less like list comprehensions or map but avoid the overhead of generating the entire list at once. Instead, they return a generator object which can be iterated over bit-by-bit:
iterator = (s.upper() for s in oldlist)
Which method is appropriate will depend on what version of Python you're using and the characteristics of the data you are manipulating.
Guido van Rossum wrote a much more detailed (and succinct) examination of loop optimization that is definitely worth reading.
Avoiding dots...
Suppose you can't use map or a list comprehension? You may be stuck with the for loop. The for loop example has another inefficiency. Both newlist.append and word.upper are function references that are reevaluated each time through the loop. The original loop can be replaced with:
upper = str.upper newlist = [] append = newlist.append for word in oldlist: append(upper(word))
This technique should be used with caution. It gets more difficult to maintain if the loop is large. Unless you are intimately familiar with that piece of code you will find yourself scanning up to check the definitions of append and upper.
Local Variables
The final speedup available to us for the non-map version of the for loop is to use local variables wherever possible. If the above loop is cast as a function, append and upper become local variables. Python accesses local variables much more efficiently than global variables.
def func(): upper = str.upper newlist = [] append = newlist.append for word in oldlist: append(upper(word)) return newlist
At the time I originally wrote this I was using a 100MHz Pentium running BSDI. I got the following times for converting the list of words in /usr/share/dict/words (38,470 words at that time) to upper case:
Version Time (seconds) Basic loop 3.47 Eliminate dots 2.45 Local variable & no dots 1.79 Using map function 0.54
Initializing Dictionary Elements
Suppose you are building a dictionary of word frequencies and you've already broken your text up into a list of words. You might execute something like:
wdict = {} for word in words: if word not in wdict: wdict[word] = 0 wdict[word] += 1
Except for the first time, each time a word is seen the if statement's test fails. If you are counting a large number of words, many will probably occur multiple times. In a situation where the initialization of a value is only going to occur once and the augmentation of that value will occur many times it is cheaper to use a try statement:
wdict = {} for word in words: try: wdict[word] += 1 except KeyError: wdict[word] = 1
It's important to catch the expected KeyError exception, and not have a default except clause to avoid trying to recover from an exception you really can't handle by the statement(s) in the try clause.
A third alternative became available with the release of Python 2.x. Dictionaries now have a get() method which will return a default value if the desired key isn't found in the dictionary. This simplifies the loop:
wdict = {} get = wdict.get for word in words: wdict[word] = get(word, 0) + 1
When I originally wrote this section, there were clear situations where one of the first two approaches was faster. It seems that all three approaches now exhibit similar performance (within about 10% of each other), more or less independent of the properties of the list of words.
Also, if the value stored in the dictionary is an object or a (mutable) list, you could also use the dict.setdefault method, e.g.
4 wdict.setdefault(key, []).append(new_element)
You might think that this avoids having to look up the key twice. It actually doesn't (even in python 3.0), but at least the double lookup is performed in C.
Another option is to use the defaultdict class:
from collections import defaultdict wdict = defaultdict(int) for word in words: wdict[word] += 1
Import Statement Overhead
import statements can be executed just about anywhere. It's often useful to place them inside functions to restrict their visibility and/or reduce initial startup time. Although Python's interpreter is optimized to not import the same module multiple times, repeatedly executing an import statement can seriously affect performance in some circumstances.
Consider the following two snippets of code (originally from Greg McFarlane, I believe - I found it unattributed in a comp.lang.python python-list@python.org posting and later attributed to him in another source):
def doit1(): import string ###### import statement inside function string.lower('Python') for num in range(100000): doit1()
or:
import string ###### import statement outside function def doit2(): string.lower('Python') for num in range(100000): doit2()
doit2 will run much faster than doit1, even though the reference to the string module is global in doit2. Here's a Python interpreter session run using Python 2.3 and the new timeit module, which shows how much faster the second is than the first:
>>> def doit1(): ... import string ... string.lower('Python') ... >>> import string >>> def doit2(): ... string.lower('Python') ... >>> import timeit >>> t = timeit.Timer(setup='from __main__ import doit1', stmt='doit1()') >>> t.timeit() 11.479144930839539 >>> t = timeit.Timer(setup='from __main__ import doit2', stmt='doit2()') >>> t.timeit() 4.6661689281463623
String methods were introduced to the language in Python 2.0. These provide a version that avoids the import completely and runs even faster:
def doit3(): 'Python'.lower() for num in range(100000): doit3()
Here's the proof from timeit:
>>> def doit3(): ... 'Python'.lower() ... >>> t = timeit.Timer(setup='from __main__ import doit3', stmt='doit3()') >>> t.timeit() 2.5606080293655396
The above example is obviously a bit contrived, but the general principle holds.
Note that putting an import in a function can speed up the initial loading of the module, especially if the imported module might not be required. This is generally a case of a "lazy" optimization -- avoiding work (importing a module, which can be very expensive) until you are sure it is required.
This is only a significant saving in cases where the module wouldn't have been imported at all (from any module) -- if the module is already loaded (as will be the case for many standard modules, like string or re), avoiding an import doesn't save you anything. To see what modules are loaded in the system look in sys.modules.
A good way to do lazy imports is:
email = None def parse_email(): global email if email is None: import email ...
This way the email module will only be imported once, on the first invocation of parse_email().
Data Aggregation
Function call overhead in Python is relatively high, especially compared with the execution speed of a builtin function. This strongly suggests that where appropriate, functions should handle data aggregates. Here's a contrived example written in Python.
import time x = 0 def doit1(i): global x x = x + i list = range(100000) t = time.time() for i in list: doit1(i) print "%.3f" % (time.time()-t)
vs.
import time x = 0 def doit2(list): global x for i in list: x = x + i list = range(100000) t = time.time() doit2(list) print "%.3f" % (time.time()-t)
Here's the proof in the pudding using an interactive session:
>>> t = time.time() >>> for i in list: ... doit1(i) ... >>> print "%.3f" % (time.time()-t) 0.758 >>> t = time.time() >>> doit2(list) >>> print "%.3f" % (time.time()-t) 0.204
Even written in Python, the second example runs about four times faster than the first. Had doit been written in C the difference would likely have been even greater (exchanging a Python for loop for a C for loop as well as removing most of the function calls).
Doing Stuff Less Often
The Python interpreter performs some periodic checks. In particular, it decides whether or not to let another thread run and whether or not to run a pending call (typically a call established by a signal handler). Most of the time there's nothing to do, so performing these checks each pass around the interpreter loop can slow things down. There is a function in the sys module, setcheckinterval, which you can call to tell the interpreter how often to perform these periodic checks. Prior to the release of Python 2.3 it defaulted to 10. In 2.3 this was raised to 100. If you aren't running with threads and you don't expect to be catching many signals, setting this to a larger value can improve the interpreter's performance, sometimes substantially.
Python is not C
It is also not Perl, Java, C++ or Haskell. Be careful when transferring your knowledge of how other languages perform to Python. A simple example serves to demonstrate:
% timeit.py -s 'x = 47' 'x * 2' loops, best of 3: 0.574 usec per loop % timeit.py -s 'x = 47' 'x << 1' loops, best of 3: 0.524 usec per loop % timeit.py -s 'x = 47' 'x + x' loops, best of 3: 0.382 usec per loop
Now consider the similar C programs (only the add version is shown):
#include <stdio.h> int main (int argc, char *argv[]) { int i = 47; int loop; for (loop=0; loop<500000000; loop++) i + i; return 0; }
and the execution times:
% for prog in mult add shift ; do < for i in 1 2 3 ; do < echo -n "$prog: " < /usr/bin/time ./$prog < done < echo < done mult: 6.12 real 5.64 user 0.01 sys mult: 6.08 real 5.50 user 0.04 sys mult: 6.10 real 5.45 user 0.03 sys add: 6.07 real 5.54 user 0.00 sys add: 6.08 real 5.60 user 0.00 sys add: 6.07 real 5.58 user 0.01 sys shift: 6.09 real 5.55 user 0.01 sys shift: 6.10 real 5.62 user 0.01 sys shift: 6.06 real 5.50 user 0.01 sys
Note that there is a significant advantage in Python to adding a number to itself instead of multiplying it by two or shifting it left by one bit. In C on all modern computer architectures, each of the three arithmetic operations are translated into a single machine instruction which executes in one cycle, so it doesn't really matter which one you choose.
A common "test" new Python programmers often perform is to translate the common Perl idiom
while (<>) { print; }
into Python code that looks something like
import fileinput for line in fileinput.input(): print line,
and use it to conclude that Python must be much slower than Perl. As others have pointed out numerous times, Python is slower than Perl for some things and faster for others. Relative performance also often depends on your experience with the two languages.
Use xrange instead of range
This section no longer applies if you're using Python 3, where range now provides an iterator over ranges of arbitrary size, and where xrange no longer exists.
Python has two ways to get a range of numbers: range and xrange. Most people know about range, because of its obvious name. xrange, being way down near the end of the alphabet, is much less well-known.
xrange is a generator object, basically equivalent to the following Python 2.3 code:
def xrange(start, stop=None, step=1): if stop is None: stop = start start = 0 else: stop = int(stop) start = int(start) step = int(step) while start < stop: yield start start += step
Except that it is implemented in pure C.
xrange does have limitations. Specifically, it only works with ints; you cannot use longs or floats (they will be converted to ints, as shown above).
It does, however, save gobs of memory, and unless you store the yielded objects somewhere, only one yielded object will exist at a time. The difference is thus: When you call range, it creates a list containing so many number (int, long, or float) objects. All of those objects are created at once, and all of them exist at the same time. This can be a pain when the number of numbers is large.
xrange, on the other hand, creates no numbers immediately - only the range object itself. Number objects are created only when you pull on the generator, e.g. by looping through it. For example:
xrange(sys.maxint) # No loop, and no call to .next, so no numbers are instantiated
And for this reason, the code runs instantly. If you substitute range there, Python will lock up; it will be too busy allocating sys.maxint number objects (about 2.1 billion on the typical PC) to do anything else. Eventually, it will run out of memory and exit.
In Python versions before 2.2, xrange objects also supported optimizations such as fast membership testing (i in xrange(n)). These features were removed in 2.2 due to lack of use.
Re-map Functions at runtime
Say you have a function
class Test: def check(self,a,b,c): if a == 0: self.str = b*100 else: self.str = c*100 a = Test() def example(): for i in xrange(0,100000): a.check(i,"b","c") import profile profile.run("example()")
And suppose this function gets called from somewhere else many times.
Well, your check will have an if statement slowing you down all the time except the first time, so you can do this:
class Test2: def check(self,a,b,c): self.str = b*100 self.check = self.check_post def check_post(self,a,b,c): self.str = c*100 a = Test2() def example2(): for i in xrange(0,100000): a.check(i,"b","c") import profile profile.run("example2()")
Well, this example is fairly inadequate, but if the 'if' statement is a pretty complicated expression (or something with lots of dots), you can save yourself evaluating it, if you know it will only be true the first time.
Profiling Code
The first step to speeding up your program is learning where the bottlenecks lie. It hardly makes sense to optimize code that is never executed or that already runs fast. I use two modules to help locate the hotspots in my code, profile and trace. In later examples I also use the timeit module, which is new in Python 2.3.
See the separate profiling document for alternatives to the approaches given below.
Profiling
There are a number of profiling modules included in the Python distribution. Using one of these to profile the execution of a set of functions is quite easy. Suppose your main function is called main, takes no arguments and you want to execute it under the control of the profile module. In its simplest form you just execute
import profile profile.run('main()')
When main() returns, the profile module will print a table of function calls and execution times. The output can be tweaked using the Stats class included with the module. From Python 2.4 profile has permitted the time consumed by Python builtins and functions in extension modules to be profiled as well.
A slightly longer description of profiling using the profile and pstats modules can be found here (archived version):
The cProfile and Hotshot Modules
Since Python 2.2, the hotshot package has been available as a replacement for the profile module, although the cProfile module is now recommended in preference to hotshot. The underlying module is written in C, so using hotshot (or cProfile) should result in a much smaller performance hit, and thus a more accurate idea of how your application is performing. There is also a hotshotmain.py program in the distribution's Tools/scripts directory which makes it easy to run your program under hotshot control from the command line.
Trace Module
The trace module is a spin-off of the profile module I wrote originally to perform some crude statement level test coverage. It's been heavily modified by several other people since I released my initial crude effort. As of Python 2.0 you should find trace.py in the Tools/scripts directory of the Python distribution. Starting with Python 2.3 it's in the standard library (the Lib directory). You can copy it to your local bin directory and set the execute permission, then execute it directly. It's easy to run from the command line to trace execution of whole scripts:
% trace.py -t spam.py eggs
In Python 2.4 it's even easier to run. Just execute python -m trace.
There's no separate documentation, but you can execute "pydoc trace" to view the inline documentation.
Visualizing Profiling Results
RunSnakeRun is a GUI tool by Mike Fletcher which visualizes profile dumps from cProfile using square maps. Function/method calls may be sorted according to various criteria, and source code may be displayed alongside the visualization and call statistics. Currently (April 2016) RunSnakeRun supports Python 2.x only - thus it cannot load profile data generated by Python 3 programs.
An example usage:
runsnake some_profile_dump.prof
Gprof2Dot is a python based tool that can transform profiling results output into a graph that can be converted into a PNG image or SVG.
A typical profiling session with python 2.5 looks like this (on older platforms you will need to use actual script instead of the -m option):
python -m cProfile -o stat.prof MYSCRIPY.PY [ARGS...] python -m pbp.scripts.gprof2dot -f pstats -o stat.dot stat.prof dot -ostat.png -Tpng stat.dot
PyCallGraph pycallgraph is a Python module that creates call graphs for Python programs. It generates a PNG file showing an modules's function calls and their link to other function calls, the amount of times a function was called and the time spent in that function.
Typical usage:
pycallgraph scriptname.py
PyProf2CallTree is a script to help visualize profiling data collected with the cProfile python module with the kcachegrind graphical calltree analyser.
Typical usage:
python -m cProfile -o stat.prof MYSCRIPY.PY [ARGS...] python pyprof2calltree.py -i stat.prof -k
ProfileEye is a browser-based frontend to gprof2dot using d3.js for decluttering visual information.
Typical usage:
python -m profile -o output.pstats path/to/your/script arg1 arg2 gprof2dot -f pstats output.pstats | profile_eye --file-colon_line-colon-label-format > profile_output.html
SnakeViz is a browser-based visualizer for profile data.
Typical usage:
python -m profile -o output.pstats path/to/your/script arg1 arg2 snakeviz output.pstats | https://wiki.python.org/moin/PythonSpeed/PerformanceTips?highlight=(CategoryDocumentation) | CC-MAIN-2017-04 | refinedweb | 4,273 | 66.33 |
0
Alright i need some help, been stuck on a basic task.
I input 2 strings. I need to be able to have it compare the two strings to see which one is shorter, then count how short the item is and out put the number.
ex. Word 1 Hello
word 2 hi
Output: the word with the least letters contains 2 letters.
/* * To change this template, choose Tools | Templates * and open the template in the editor. */ package javaapplication17; import java.util.*; /** * * @author Austin */ public class Main { /** * @param args the command line arguments */ public static void main(String[] args) { Scanner scan = new Scanner(System.in); int PW = 0; int i = 0; System.out.print("Enter Word one "); String password = scan.nextLine(); System.out.print("Enter Word two "); String password2 = scan.nextLine(); System.out.println(" Shortest number is " +PW+ " letters"); } } | https://www.daniweb.com/programming/software-development/threads/256813/while-counting | CC-MAIN-2017-51 | refinedweb | 139 | 76.82 |
Shuffle the Training Data in TensorFlow
What is Data Shuffling?
It is a shuffling technique which mixes the data randomly from a dataset, within an attribute or a set of attributes. Between the columns, it will try retaining the logical relationship.
Why do we shuffle data?
Training, testing and validation are the phases that our presented dataset will be further splitting into, in our machine learning model. We need to shuffle these datasets well, avoiding any possible elements in the split datasets before training the ML model.
Data shuffling satisfies the purpose of variance reduction. It’s goal is to keep the model general and makes sure that it doesn’t over fit a lot.
In simple words,
- helps the training converge fast
- prevents the model from learning the order of the training
- improves the ML model quality
- prevents any bias during the training
The data sorted by their target/class, are the most seen case where you would shuffle your data. The reason why we will want to shuffle for making sure that our validation/test/training sets are representative of the whole distribution.
Data Shuffling in TensorFlow
Let’s look at the piece of code below,
import tensorflow as tf import numpy as np a = tf.placeholder(tf.float32, (None, 1, 1, 1)) b = tf.placeholder(tf.int32, (None)) indices = tf.range(start=0, limit=tf.shape(a)[0], dtype=tf.int32) shuffled_indices = tf.random.shuffle(indices) shuffled_a = tf.gather(a, shuffled_indices) shuffled_b = tf.gather(b, shuffled_indices)
The above code will return a transformed dataset, which will be going through loading and testing for our machine learning model.
‘
shuffled_indices = tf.random.shuffle(indices)‘ can be seen above. This directly shuffles the indices within the dataset and applies the changes to it.
The syntax for Shuffling method is:
tf.random.shuffle( value, seed=None, name=None )
tf.random.shuffle() will randomly shuffle the tensors, which contains the data of our datasets.
out put: [[1, 9], [[5, 5], [3, 7], ==> [1, 9], [5, 5],
[2, 8]
[2, 8]] [3, 7]]
As we can see, The tensor is shuffled along with dimension 0, such that each value[x] is mapped to one and only one output[y]. Above is an example of mapping that might occur for a 4×2 tensor.
Here,
- value : contains value of tensor that is to be shuffled.
- seed : random seed used for distribution.
- name: an optional argument to name the operation.
These arguments will return the shuffled tensor. | https://valueml.com/shuffle-the-training-data-in-tensorflow/ | CC-MAIN-2021-25 | refinedweb | 412 | 66.44 |
Now that I’ve had a chance to settle down, catch up on some email, spend some time with the family, upload my photos, and blog about Flash on the Beach 2009, I will post my slides:
As usual, seeing a bunch of bullet points and diagrams doesn’t replace the thrill of watching me babble on for an hour, but it’s the best I can do unless John releases the video taken by that camera man who was in my face the whole time. 🙂
In there, I mentioned the game toolkit I’m working on, called Asobu, which means “to play” in Japanese. A lot of people asked me about it at the conference, and I hope I didn’t set expectations too high. I’m not trying to revolutionize the Flash gaming industry or create the next big thing that everyone uses to make games. Really all I’m trying to do is make a few reusable classes to use in my own games and make things easier for myself. But I’ll release them and if anyone else wants to use them, they’ll be free to. And hopefully a few people will say, “why the hell did you do such and such that way?” which will lead to improvements for myself and anyone else.
So what does / will Asobu consist of? First of all, it’s going to be mostly generic, architectural type stuff. There won’t be any physics engines, collision engines, 3d engines, particle systems, tile maps, isometric engines, or anything else like that. Well, a few of those things might make it in there someday, but I’m concentrating more on the things that make up the different structural parts of a game and hold them together. So far it has:
– A state machine with scenes and transitions. This is named after and loosely based on the Director class in cocos2d for iPhone. I showed some examples of this in action in the presentation, and there are some code snippets in the slides. Basically, you make each part of your game a Scene – the intro, the game itself, instructions, high score table, credits, etc. – and move between them with the Director. There are various prebuilt transitions, or you can create your own.
– A few essential UI Elements: A configurable label for displaying text, a button, and a very flexible menu system. I might extend the label to make a larger, multiline text area. In my experience, these are most of what you need to show options, settings, instructions, etc.
– A sound/music manager allowing a single point for loading/embedding sound files and playing them with various options, mute/unmute, volume, etc.
– A library/asset manager for loading/embedding and accessing external assets in one place.
– The beginnings of a level manager class for loading/embedding external level definitions. I’d also like to see if I can extend a level editor I made for a game with my Minimal Components into something generic enough to be reused on multiple projects. It would be great to have a relatively easy way to create at least a good chunk of a level editor with drag and drop of custom objects and property inspectors for them. We’ll see how that pans out.
Anyway, there is a lot of work to be done on all of this. The Director and Scenes and the UI Elements are the furthest along. Stay tuned to see more. Again, I don’t think anything here is going to amaze anyone anywhere, but what’s there already has proven helpful to me, and hopefully will be helpful to someone else too.
It looks like your Director class is similar to something I built for my games that I call ScreenNavigator. You can add screens as display objects or classes, transitions are created by calling function references (which allows me to swap out different tween engines based on whichever is my favorite at any moment in time), and I can use events from each screen to wire up the navigation. An event will either display a new screen by referencing a string id, or it will call a function reference to do something else. It’s very flexible, and maybe not all that pretty, but once I had it up and running, my development of game menus sped up significantly.
Yeah, sounds pretty similar. 🙂
I looked at your slideshow. It looks really really cool.
I understand that it isn’t “groundbreaking” stuff, in fact when it works, it’ll just work. But that’s the point.
I work for a very small design agency, and I’ve had to make a general game menu system to re-use for a lot of games. But that’s just one menu and it isn’t very flexible at all but it’s handy for making small games.
I’m going to suggest a couple of things (even though it’s not complete yet). Maybe for version 1.5 or something.
I’ve been using Cocos2d this weekend and making the menu’s etc was the easyest thing to whip out. So so useful. At first I thought it was limited in the terms of the menu positioning. But then I realised I could make more menus and then re-position them. Making a complex looking main menu.
But what I feel the menu.h is missing in Cocos2d is to justify text menus left and right. Perhaps with the flash textfield this can be done quite easily? I’m not sure.
And finally, the sound manager class. It would be useful to have stopAndFade(timer:Number) and pauseAndFade(timer:Number) aswell.
I made a small MP3 player that loops through tracks. And it’s so much easier to go:
var mp3:MicroMP3Player();
mp3.addSong(new SoundTrack1());
mp3.addSong(new SoundTrack2());
mp3.addSong(new SoundTrack3());
mp3.play();
And that sound will be managed and loop infinitely through the game. Perfect for in-game music.
package {
import flash.display.Stage;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.events.Event;
import flash.media.SoundTransform;
/**
* …
* @author James Prankard
*
*/
public class MicroMP3Player {
private var songs:Array = new Array();
private var playing:Boolean = false;
public var currentTrack = 0;
private var constantChannel:SoundChannel;
private static var stageInstance;
public function MicroMP3Player() {
}
public function addSong(song:Sound):void {
songs.push(song);
}
public function play():void {
playing = true;
if (songs.length>0) {
constantChannel = songs[currentTrack].play();
constantChannel.addEventListener(Event.SOUND_COMPLETE, waitForFinish);
}
}
private function waitForFinish(e:Event):void {
if (constantChannel != null) {
constantChannel.removeEventListener(Event.SOUND_COMPLETE, waitForFinish);
nextTrack();
play();
}
}
public function nextTrack() {
currentTrack++
if (currentTrack == songs.length) {
currentTrack = 0;
}
}
public function prevTrack() {
currentTrack–
if (currentTrack < 0) {
currentTrack = songs.length-1;
}
}
public function playNextTrack():void {
nextTrack();
stop();
play();
}
public function playPrevTrack():void {
prevTrack();
stop();
play();
}
public function stop():void {
playing = false;
if (constantChannel != null) {
constantChannel.stop();
constantChannel.removeEventListener(Event.SOUND_COMPLETE, waitForFinish);
}
}
public function getPlaying():Boolean {
return playing;
}
public function setVolume(volume:Number):void {
constantChannel.soundTransform = new SoundTransform(volume, 0);
}
}
}
Anyways, really looking forward to seeing and using your framework 🙂
Great presentation at FOTB this year, I’ve recently run into the ‘Game is more than a game loop’ issue. The game part was built in just under 3 days the rest of it is 3 months and counting, cocos2D is helping reduce the dev time but its still a little painful. Keeping an eye on all the retain/release business is also having the side effect that I keep falling into early optimization.
Looking forward to next years FOTB carefully blending technology, creativity and alcohol.
do you think john will release the videos? there are no videos of FOTB08 except those teaser vids. I would be quite pleased if he would release them.
I so wanted to see your session, but others at the time were more relevant for work so I ended up missing you. Thanks for putting the slides up.
I really wanted to sit down with you to talk about the use of the Decorator pattern as a technique to create the component based entities you spoke about. Never found a good time. The last thing you’d have wanted was someone talking shop while you’re enjoying your beer. Be good to chat about it at some point though. Really enjoyed your talk. Cameraman was a bit eager though wasn’t he?
[…] Keith Peters (site) – Casual Game Architecture: How to finish coding a game without despising it View slides […]
[…] Keith Peters a présenté une session sur une architecture de développement de jeux. Son dernier projet "ASobu" : un toolkit pour le développement des jeux était aussi un sujet de sa présentation. (Slides de sa présentation). […]
[…] mentioned the gaming toolkits Flixel and PushButton Engine, and also announced Asobu (Keiths own toolkit). I’ve looked into using PushButton Engine for a game I’m currently […] | http://www.bit-101.com/blog/?p=2402 | CC-MAIN-2017-17 | refinedweb | 1,480 | 65.52 |
2002-06-02 Paul Eggert <eggert@twinsun.com> * NEWS, configure.ac (AC_INIT): Version 2.5.8 released. * README: POSIX.2 -> POSIX. * inp.c (report_revision): Don't modify 'revision', since it gets freed later. Bug reported by Mike Castle. 2002-05-30 Paul Eggert <eggert@twinsun.com> * NEWS, configure.ac (AC_INIT): Version 2.5.7 released. * Makefile.in (MISC): Remove README-alpha. (patchlevel.h): Depend on configure, not configure.ac. * INSTALL: Upgrade to Autoconf 2.53 version. 2002-05-28 Paul Eggert <eggert@twinsun.com> * patch.c (end_defined, apply_hunk): Output #endif without the comment, as POSIX 1003.1-2001 requires. * pch.c (there_is_another_patch): Flush stderr after perror. * NEWS, configure.ac (AC_INIT): Version 2.5.6 released. * strcasecmp.c, strncasecmp.c: New files, taken from fileutils. * config.guess, config.sub: Remove. * Makefile.in (LIBSRCS): Add strcasecmp.c, strncasecmp.c. (MISC): Remove config.guess, config.sub. The code already assumes C89 or better, so remove K&R stuff. * common.h (volatile): Remove. (GENERIC_OBJECT): Remove; all uses changed to 'void'. (PARAMS): Remove; all uses changed to prototypes. * configure.ac (AC_PROG_CC_STDC): Add. * util.c (vararg_start): Remove. All uses changed to va_start. Always include <stdarg.h>. * configure.ac (AC_CANONICAL_HOST): Remove. (AC_REPLACE_FUNCS): Add strncasecmp. (AC_CHECK_DECLS): Add mktemp. * patch.c (main): Remove useless prototype decl. (mktemp): Don't declare if HAVE_DECL_MKTEMP || defined mktemp. (make_temp): Now accepts char, not int. 2002-05-26 Paul Eggert <eggert@twinsun.com> * patch.c (not_defined): Prepend newline. All uses changed. (apply_hunk): Fix bug: -D was outputting #ifdef when it should have been outputting #ifndef. Bug report and partial fix by Jason Short. * pch.c (intuit_diff_type): When reading an ed diff, don't use indent and trailing-CR-ness of "." line; instead, use that of the command. Bug reported by Anthony Towns; partial fix by Michael Fedrowitz. (intuit_diff_type): If the index line exists, don't report a missing header. Fix by Chip Salzenberg. 2002-05-26 Alessandro Rubini <rubini@gnu.org> * patch.c (locate_hunk): Fixed updating of last_offset. 2002-05-25 Paul Eggert <eggert@twinsun.com> * NEWS, README: Diffutils doc is up to date now. Bug reporting address is now <bug-patch@gnu.org>. * README: Describe '--disable-largefile'. * NEWS-alpha, dirname.c, dirname.h, exitfail.c, exitfail.h, quote.c, quote.h, unlocked-io.h: New files, taken from diffutils and fileutils. * argmatch.c: [STDC_HEADERS]: Include stdlib.h, for 'exit'. * addext.c, argmatch.c, argmatch.h, backupfile.c, basename.c: Update from diffutils and fileutils. * ansi2knr.1, ansi2knr.c: Remove. * common.h: HAVE_SETMODE && O_BINARY -> HAVE_SETMODE_DOS. * patch.c (usage): Likewise. * pch.c (open_patch_file): Likewise. * configure.ac: Renamed from configure.in. Add copyright notice. (AC_PREREQ): Bump to 2.53. (AC_INIT): Use 2.5x style. (AC_CONFIG_SRCDIR): Add. (PACKAGE, VERSION): Remove. (AC_C_PROTOTYPES): Use this instead of AM_C_PROTOTYPES. (jm_CHECK_TYPE_STRUCT_UTIMBUF): Use this instead of jm_STRUCT_UTIMBUF. (jm_PREREQ_ADDEXT, jm_PREREQ_DIRNAME, jm_PREREQ_ERROR, jm_PREREQ_MEMCHR, jm_PREREQ_QUOTEARG): Add. (AC_CHECK_DECLS): Add free, getenv, malloc. (AC_CHECK_FUNCS): Remove setmode. (AC_FUNC_SETMODE_DOS): Add. (jm_CHECK_TYPE_STRUCT_DIRENT_D_INO): Use this instead of jm_STRUCT_DIRENT_D_INO. * Makefile.in (OBJEXT): New var. (PACKAGE_NAME): Renamed from PACKAGE. All uses changed. (PACKAGE_VERSION): Renamed from VERSION. All uses changed. (U): Remove. All uses of "$U.o" changed to ".$(OBJEXT)". (LIBSRCS): REmove getopt.c getopt1.c. Add mkdir.c, rmdir.c. (SRCS): Add dirname.c, exitfail.c, getopt.c, getopt1.c, quote.c. Remove mkdir.c. (OBJS): Keep in sync with SRCS. (HDRS): Remove basename.h. Add dirname.h, exitfail.h, quote.h, unlocked-io.h. (MISC, configure, config.hin, patchlevel.h): configure.ac renamed from configure.in. (MISC): Add README-alpha. Remove ansi2knr.1, ansi2knr.c. (.c.$(OBJEXT)): Renamed from .c.o. (ACINCLUDE_INPUTS): Add c-bs-a.m4, error.m4, jm-glibc-io.m4, mbstate_t.m4, mkdir.m4, mbrtowc.m4, prereq.m4, setmode.m4. Remove ccstdc.m4, inttypes_h.m4, largefile.m4, protos.m4. (mostlyclean): Don't clean ansi2knr. (ansi2knr.o, ansi2knr): Remove. Redo dependencies. * patch.c: Include <exitfail.h>. (main): Initialize exit_failure. * patch.man: Update copyright notice. * pch.c, util.c: Include <dirname.h>, not <basename.h>. * version.c (copyright_string): Update copyright notice. 2002-02-17 Paul Eggert <eggert@twinsun.com> * partime.c (parse_pattern_letter): Don't overrun buffer if it contains only alphanumerics. Bug reported by Winni <Winni470@gmx.net>. 2001-07-28 Paul Eggert <eggert@sic.twinsun.com> * util.c (fetchname), NEWS: Allow file names with internal spaces, so long as they don't contain tabs. * pch.c (intuit_diff_type): Do not allow Prereq with multiple words. * configure.in (AC_PREREQ): Bump to 2.50. (AC_CHECK_FUNCS): Remove fseeko. (AC_FUNC_FSEEKO): Add. * Makefile.in (ACINCLUDE_INPUTS): Remove largefile.m4; no longer needed with Autoconf 2.50. 2001-02-07 "Tony E. Bennett" <tbennett@nvidia.com> * util.c (PERFORCE_CO): New var. (version_controller): Support Perforce. * patch.man: Document this. 2000-06-30 Paul Eggert <eggert@sic.twinsun.com> * patch.man: Ignore comment lines. * NEWS, pch.c: Ignore lines beginning with "#". 1999-10-24 Paul Eggert <eggert@twinsun.com> * pch.c (another_hunk): Report a fatal error if a regular context hunk's pattern has a different number of unchanged lines than the replacement. 1999-10-18 Paul Eggert <eggert@twinsun.com> * patch.c (main): If we skipped an ed patch, exit with nonzero status. 1999-10-17 Paul Eggert <eggert@twinsun.com> * patch.c (main): Apply do_ed_script even if dry_run, because we need to make progress on the patch file. * pch.c (do_ed_script): If skip_rest_of_patch is nonzero, gobble up the patch without any other side effect. 1999-10-12 Paul Eggert <eggert@twinsun.com> * NEWS, README: New bug reporting address. * NEWS: Report change in 2.5.4 that we forgot to document. * README: Document `configure --disable-largefile'. * basename.c, COPYING, getopt.c, getopt.h, getopt1.c, m4/largefile.m4: Update to latest version. * Makefile.in (basename$U.o): Depend on basename.h. (config.hin): Depend on $(srcdir)/aclocal.m4. * ansi2knr.c, maketime.c, mkinstalldirs, partime.c: Fix $Id. FreeBSD has an unrelated setmode function; work around this. * common.h (binary_transput): Don't declare unless O_BINARY. * patch.c (option_help, get_some_switches): Don't use setmode unless O_BINARY. * pch.c (open_patch_file): Don't invoke setmode unless O_BINARY. Fix incompatiblities with error.c. * common.h (program_name): Now XTERN char *, for compatibility with error.c. All uses changed. (PROGRAM_NAME): New macro. (PARAMS): Use ANSI C version only if defined PROTOTYPES || (defined __STDC__ && __STDC__), for compatibilty with error.c. * util.c (vararg_start): Likewise. * patch.c (program_name): Remove. (main): Initialize program_name. * version.c (version): Print PROGRAM_NAME, not program_name. Accommodate mingw32 port, which has one-argument mkdir (yuck!) and no geteuid. * m4/mkdir.m4: New file. * Makefile.in (ACINCLUDE_INPUTS): Add $(M4DIR)/mkdir.m4. * configure.in (AC_CHECK_FUNCS): Add geteuid, getuid. (PATCH_FUNC_MKDIR_TAKES_ONE_ARG): Add. * common.h (mkdir): Define if mkdir takes one arg. (geteuid): New macro, if not already defined. 1999-10-11 Christopher R. Gabriel <cgabriel@tin.it> * patch.c (option_help): Updated bug report address * configure.in (VERSION): Version 2.5.5 released. 1999-09-01 Paul Eggert <eggert@twinsun.com> * patch.c (main): Default simple_backup_suffix to ".orig". 1999-10-08 Paul Eggert <eggert@twinsun.com> * patch.man: Make it clear that `patch -o F' should not be used if F is one of the files to be patched. 1999-08-30 Paul Eggert <eggert@twinsun.com> Version 2.5.4 fixes a few minor bugs, converts C sources to ANSI prototypes, and modernizes auxiliary sources and autoconf scripts. * configure.in (VERSION): Version 2.5.4 released. (AC_CANONICAL_HOST): Add. (AC_SYS_LARGEFILE): Add, replacing inline code. (AC_EXEEXT): Add. (jm_AC_HEADER_INTTYPES_H): Add, replacing inline code. (AC_TYPE_PID_T): Add. (jm_STRUCT_UTIMBUF): Add, replacing inline code. (HAVE_MEMCHR): Remove obsolescent test; nobody uses NetBSD 1.0 now. (getopt_long): Append $U to object file basenames. (AC_CHECK_FUNCS): Add fseeko, setmode. Remove mkdir. (AC_REPLACE_FUNCS): Add mkdir, rmdir. (jm_STRUCT_DIRENT_D_INO): Add, replacing inline code. * Makefile.in (EXEEXT): New macro. (mandir): New macro. (man1dir): Define in terms of mandir. (SRCS): Add mkdir.c, rmdir.c. (OBJS): Change .o to $U.o for addext, argmatch, backupfile, basename, error, inp, patch ,,pch, quotearg, util, version, xmalloc. (HDRS): Add basename.h, patchlevel.h. (MISC): Add ansi2knr.1, config.guess, config.sub. (MISC, config.hin): Remove acconfig.h; no longer needed. (DISTFILES_M4): New macro. (all): patch -> patch$(EXEEXT). (patch$(EXEEXT)): Renamed from patch. All uses changed. (uninstall): Remove manual page. (configure): Depend on aclocal.m4. (M4DIR, ACINCLUDE_INPUTS): New macros. ($(srcdir)/aclocal.m4): New rule. (patchlevel.h): Depend on configure.in, not Makefile, since we now distribute it. (distclean): Don't remove patchlevel.h. (dist): Distribute $(DISTFILES_M4). (addext_.c argmatch_.c backupfile_.c basename_.c error_.c getopt_.c getopt1_.c inp_.c malloc_.c mkdir_.c patch_.c pch_.c rename_.c util_.c version_.c xmalloc_.c): Depend on ansi2knr. Update dependencies to match sources. * common.h (_LARGEFILE_SOURCE): Remove; now autoconfigured. (file_offset): Depend on HAVE_FSEEKO, not _LFS_LARGEFILE. * patch.c (version_control_context): New variable. Convert to ANSI prototypes. Adjust to new argmatch calling convention. Similarly for get_version. Complain about creating an existing file only if pch_says_nonexistent returns 2 (not merely nonzero). Similarly for time mismatch check. (get_some_switches): Adjust to new get_version calling convention. Similarly for argmatch. * pch.c (<basename.h>): Include. (intuit_diff_type): Improve quality of test for empty file. (another_hunk): Don't assume off_t is no longer than long. * util.h (backup_type): New decl. * util.c (<basename.h>): Include. (move_file): Adjust to new find_backup_file_name convention. (doprogram, mkdir, rmdir): Remove; now in separate files. (fetchame): Match "/dev/null", not NULL_DEVICE. Ignore names that don't have enough slashes to strip off. * version.c: Update copyright notice. 1998-03-20 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.5.3. * quotearg.h (quotearg_quoting_options): Remove; it ran afoul of the Borland C compiler. Its address is now represented by the null pointer. * quotearg.c (default_quoting_options): Renamed from quotearg_quoting_options, and now static instead of extern. (clone_quoting_options, get_quoting_style, set_quoting_style, set_char_quoting, quotearg_buffer): Use default_quoting_options when passed a null pointer. * patch.c (main, get_some_switches): Pass a null pointer instead of address of quotearg_quoting_options. 1998-03-17 Paul Eggert <eggert@twinsun.com> * patch.c (option_help): Update bug reporting address to gnu.org. * patch.man: Fix copyright and bug reporting address. 1998-03-16 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.5.2. (AC_CHECK_FUNCS): Add strerror. (jm_FUNC_MALLOC, jm_FUNC_REALLOC): Add. (AM_C_PROTOTYPES): Add. * NEWS, patch.c (longopts, get_some_switches), patch.man: Add --quoting-style, --posix options. * Makefile.in (LIBSRCS): Add malloc.c, realloc.c. (SRCS): Add error.c, quotesys.c, xmalloc.c. (OBJS): Likewise. (HDRS): Add error.h, quotesys.h, xalloc.h. (MISC): Add AUTHORS, aclocal.m4, ansi2knr.c. (clean): Use mostlyclean rule. (argmatch.o, inp.o, patch.o, pch.o): Now also depends on quotearg.h. (inp.o, patch.o, util.o): Now also depends on xalloc.h. (error.o, quotearg.o, quotesys.o, xmalloc.o, ansi2knr.o, ansi2knr, quotearg_.c, .c_.c): New rules. (U): New macro. (OBJS, quotearg$U.o): Rename quotearg.o to quotearg$U.o. (mostlyclean): Remove ansi2knr, *_.c. (.SUFFIXES): Add _.c. * acconfig.h (PROTOTYPES): New undef. * acconfig.h, configure.in (HAVE_INTTYPES_H, malloc, realloc): New macros. * aclocal.m4, error.c, error.h, malloc.c, quotearg.h, quotearg.c, realloc.c, xalloc.h, xmalloc.c: New files. * argmatch.c: Include <sys/types.h> before <argmatch.h>. Include <quotearg.h>. * argmatch.c (invalid_arg), inp.c (scan_input, report_revision, too_many_lines, get_input_file, plan_a), patch.c (main, get_some_switches, numeric_string), pch.c (open_patch_file, intuit_diff_type, do_ed_script): util.c (move_file, create_file, copy_file, version_get, removedirs): Quote output operands properly. * common.h: Include <inttypes.h> if available. (CHAR_BIT, TYPE_SIGNED, TYPE_MINIMUM, TYPE_MAXIMUM, CHAR_MAX, INT_MAX, LONG_MIN, SIZE_MAX, O_EXCL): New macros. (TMPINNAME_needs_removal, TMPOUTNAME_needs_removal, TMPPATNAME_needs_removal): New variables. (xmalloc): Remove decl; now in xalloc.h. * inp.c: Include <quotearg.h>, <xalloc.h>. * inp.c (get_input_file), pch.c (intuit_diff_type), util.c (version_controller): Don't do diff operation if diffbuf is null; used by ClearCase support. * inp.c (plan_b), patch.c (init_reject), pch.c (open_patch_file, do_ed_script): Create temporary file with O_EXCL to avoid races. * patch.c: Include <quotearg.h>, <xalloc.h>. (create_output_file, init_output): New open_flags arg. All callers changed. (init_reject): No longer takes filename arg. All callers changed. (remove_if_needed): New function. (cleanup): Use it to remove temporary files only if needed. (TMPREJNAME_needs_removal): New var. (main): Set xalloc_fail_func to memory_fatal; needed for xalloc. Initialize quoting style from QUOTING_STYLE. (longopts, get_some_switches): Offset longarg options by CHAR_MAX, not 128; this is needed for EBCDIC ports. * patch.c (main, locate_hunk, abort_hunk, spew_output), pch.c (there_is_another_patch, intuit_diff_type, malformed, another_hunk): The LINENUM type now might be longer than long, so print and read line numbers more carefully. * patch.c (main), pch.c (there_is_another_patch): util.c (fetchname): strippath now defaults to -1, so that we can distinguish unset value from largest possible. * patch.man: Clarify how file name is chosen from candidates. * pch.c: Include <quotearg.h>. (p_strip_trailing_cr): New variable. (scan_linenum): New function. (pget_line, re_patch, there_is_another_patch, intuit_diff_type, get_line): Strip trailing CRs from context diffs that need this. (best_name): Use SIZE_MAX instead of (size_t) -1 for max size_t. * quotesys.c, quotearg.h: Renamed from quotearg.c and quotearg.h. All uses changed. * quotesys.h (__QUOTESYS_P): Renamed from __QUOTEARG_P. * util.c: Include <quotearg.h>, <xalloc.h>. (raise): Don't define if already defined. (move_file): New arg from_needs_removal. All callers changed. (copy_file): New arg to_flags. All callers changed. (CLEARTOOL_CO): New constant. (version_controller): Add ClearCase support. (format_linenum): New function. (fetchname): Allow any POSIX.1 time zone spec, which means any local time offset in the range -25:00 < offset < +26:00. Ignore the name if it doesn't have enough slashes to strip off. (xmalloc): Remove; now in xmalloc.c. * util.h (LINENUM_LENGTH_BOUND): New macro. (format_linenum): New decl. * version.c (copyright_string): Update years of copyrights. 1997-09-03 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.5.1. * inp.c (re_input): Don't free buffers twice when input is garbled. * patch.c (main): If skipping patch and Plan A fails, don't bother trying Plan B. 1997-08-31 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Version 2.5 released. 1997-07-21 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.4.4. * pch.c (there_is_another_patch), NEWS: Report an error if the patch input contains garbage but no patches. * pch.c (open_patch_file): Check for patch file too long (i.e., its size doesn't fit in a `long', and LFS isn't available). * inp.c (plan_a): Cast malloc return value, in case malloc returns char *. 1997-07-16 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.4.3. * NEWS, patch.man, pch.c (intuit_diff_type, get_line, pget_line): Now demangles RFC 934 encapsulation. * pch.c (p_rfc934_nesting): New var. * pch.c (intuit_diff_type): Don't bother to check file names carefully if we're going to return NO_DIFF. * inp.c (plan_a): Count the number of lines before allocating pointer-to-line buffer; this reduces memory requirements considerably (roughly by a factor of 5 on 32-bit hosts). Decrease `size' only when read unexpectedly reports EOF. (i_buffer): New var. (too_many_lines): New fn. (re_input): Free i_buffer if using plan A. Free buffers unconditionally; they can't be zero. * inp.c (plan_a, plan_b): Check for overflow of line counter. * pch.c (malformed), util.h (memory_fatal, read_fatal, write_fatal): Declare as noreturn. 1997-07-10 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.4.2. * util.c (ok_to_reverse), NEWS: The default answer is now `n'; this is better for Emacs. * Makefile.in (dist): Use cp -p, not ln; some hosts do the wrong thing with ln if the source is a symbolic link. * patch.man: Fix typo: -y -> -Y. 1997-07-05 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.4.1. * patch.c: (main, get_some_switches), NEWS, patch.man: Version control is now independent of whether backups are made. * patch.c (option_help): Put version control options together. (get_some_switches): With CVS 1.9 hack, treat -b foo like -b -z foo, not just -z foo. This change is needed due to recent change in -z. * backupfile.c (find_backup_file_name): backup_type == none causes undefined behavior; this undoes the previous change to this file. * patch.c (locate_hunk): Fix bug when locating context diff hunks near end of file with nonzero fuzz. * util.c (move_file): Don't assume that ENOENT is reported when both ENOENT and EXDEV apply; this isn't true with DJGPP, and Posix doesn't require it. * pch.c (there_is_another_patch): Suggest -p when we can't intuit a file. 1997-06-19 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Version 2.4 released. * NEWS: Patch is now verbose when patches do not match exactly. 1997-06-17 Paul Eggert <eggert@twinsun.com> * pc/djgpp/configure.sed (config.h): Remove redundant $(srcdir). * configure.in (VERSION): Bump to 2.3.9. * patch.c (main): By default, warn about hunks that succeed with nonzero offset. * patch.man: Add LC_ALL=C advice for making patches. * pc/djgpp/configure.sed (config.h): Fix paths to dependent files. 1997-06-17 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.8. * pch.c (open_patch_file): Test stdin for fseekability. (intuit_diff_type): Missing context diff headers are now warnings, not errors; some people use patches with them (e.g. when retrying rejects). * patch.c (struct outstate): New type, collecting together some output state vars. (apply_hunk, copy_till, spew_output, init_output): Use it. Keep track of whether some output has been generated. (backup_if_mismatch): New var. (ofp): Remove, in favor of local struct outstate vars. (main): Use struct outstate. Initialize backup_if_mismatch to be the inverse of posixly_correct. Keep track of whether mismatches occur, and use this to implement backup_if_mismatch. Report files that are not empty after patching, but should be. (longopts, option_help, get_some_switches): New options --backup-if-mismatch, --no-backup-if-mismatch. (get_some_switches): -B, -Y, -z no longer set backup_type. * backupfile.c (find_backup_file_name): Treat backup_type == none like simple. * Makefile.in (CONFIG_HDRS): Remove var; no longer needed by djgpp port. (DISTFILES_PC_DJGPP): Rename pc/djgpp/config.sed to pc/djgpp/configure.sed; remove pc/djgpp/config.h in favor of new file that edits it, called pc/djgpp/config.sed. * pc/djgpp/configure.bat: Rename config.sed to configure.sed. * pc/djgpp/configure.sed (CONFIG_HDRS): Remove. (config.h): Add rule to build this from config.hin and pc/djgpp/config.sed. * pc/djgpp/config.sed: Convert from .h file to .sed script that generates .h file. * NEWS: Describe --backup-if-mismatch, --no-backup-if-mismatch. * patch.man: Describe new options --backup-if-mismatch, --no-backup-if-mismatch and their ramifications. Use unreadable backup to represent nonexistent file. 1997-06-12 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.7. (AC_CHECK_FUNCS): Add `raise'. * Makefile.in (inp.o): No longer depends on quotearg.h. * common.h (outfile): New decl (was private var named `output'). (invc): New decl. (GENERIC_OBJECT): Renamed from VOID. (NULL_DEVICE, TTY_DEVICE): New macros. * patch.c (output): Remove; renamed to `outfile' and moved to common.h. (main): `failed' is count, not boolean. Say "Skipping patch." when deciding to skip patch. (get_some_switches): Set invc when setting inname. * inp.c: Do not include <quotearg.h>. (SCCSPREFIX, GET, GET_LOCKED, SCCSDIFF1, SCCSDIFF2, SCCSDIFF3, RCSSUFFIX, CHECKOUT, CHECKOUT_LOCKED, RCSDIFF1, RCSDIFF2): Move to util.c. (get_input_file): Invoke new functions version_controller and version_get to simplify this code. (plan_b): "/dev/tty" -> NULL_DEVICE * pch.h (pch_timestamp): New decl. * pch.c (p_timestamp): New var; takes over from global timestamp array. (pch_timestamp): New function to export p_timestamp. (there_is_another_patch): Use blander wording when you can't intuit the file name. Say "Skipping patch." when deciding to skip patch. (intuit_diff_type): Look for version-controlled but nonexistent files when intuiting file names; set invc accordingly. Ignore Index: line if either old or new line is present, and if POSIXLY_CORRECT is not set. (do_ed_script): Flush stdout before invoking popen, since it may send output to stdout. * util.h (version_controller, version_get): New decls. * util.c: Include <quotearg.h> earlier. (raise): New macro, if ! HAVE_RAISE. (move_file): Create empty unreadable file when backing up a nonexistent file. (DEV_NULL): New constant. (SCCSPREFIX, GET. GET_LOCKED, SCCSDIFF1, SCCSDIFF2, RCSSUFFIX, CHECKOUT, CHECKOUT_LOCKED, RCSDIFF1): Moved here from inp.c. (version_controller, version_get): New functions. (ask): Look only at /dev/tty for answers; and when standard output is not a terminal and ! posixly_correct, don't even look there. Remove unnecessary fflushes of stdout. (ok_to_reverse): Say "Skipping patch." when deciding to skip patch.. (sigs): SIGPIPE might not be defined. (exit_with_signal): Use `raise' instead of `kill'. (systemic): fflush stdout before invoking subsidiary command. * patch.man: Document recent changes. Add "COMPATIBILITY ISSUES" section. * NEWS: New COMPATIBILITY ISSUES for man page. Changed verbosity when fuzz is found. File name intuition is changed, again. Backups are made unreadable when the file did not exist. * pc/djgpp/config.h (HAVE_STRUCT_UTIMBUF): Define. (HAVE_RAISE): New macro. (HAVE_UTIME_H): Define. (TZ_is_unset): Do not define; it's not a serious problem with `patch' to have TZ be unset in DOS. 1997-06-08 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.6. (AC_CHECK_HEADERS): Add utime.h. * acconfig.h, configure.in, pc/djgpp/config.h (HAVE_STRUCT_UTIMBUF): New macro. * pc/djgpp/config.h (HAVE_UTIME_H, TZ_is_unset): New macros. * NEWS, patch.man: Describe new -Z, -T options, new numeric option for -G, retired -G, and more verbose default behavior with fuzz. * pch.c (intuit_diff_type): Record times reported for files in headers. Remove head_says_nonexistent[x], since it's now equivalent to !timestamp[x]. * util.h (fetchname): Change argument head_says_nonexistent to timestamp. * util.c: #include <partime.h> for TM_LOCAL_ZONE. Don't include <time.h> since common.h now includes it. (ok_to_reverse): noreverse and batch cases now output regardless of verbosity. (fetchname): Change argument head_says_nonexistent to pstamp, and store header timestamp into *pstamp. If -T or -Z option is given, match time stamps more precisely. (ask): Remove unnecessary close of ttyfd. When there is no terminal at all, output a newline to make the output look nicer. After reporting EOF, flush stdout; when an input error, report the error type. * inp.c (get_input_file): Ask user whether to get file if patch_get is negative. * Makefile.in (clean): Don't clean */*.o; clean core* and *core. 1997-06-04 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.5. * util.c (ok_to_reverse): Be less chatty if verbosity is SILENT and we don't have to ask the user. If force is nonzero, apply the patch anyway. * pch.c (there_is_another_patch): Before skipping rest of patch, skip to the patch start, so that another_hunk can skip it properly. (intuit_diff_type): Slight wording change for missing headers, to regularize with other diagnostics. Fix off-by-one error when setting p_input_line when scanning the first hunk to check for deleted files. 1997-06-03 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.4. * NEWS: Now matches more generously against nonexistent or empty files. * pch.c (there_is_another_patch): Move warning about not being able to intuit file names here from skip_to. (intuit_diff_type): Fatal error if we find a headless unified or context diff. * util.c (ask): Null-terminate buffer properly even if it grew. (fetchname): No need to test for null first argument. 1997-06-02 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.3. * pch.c (p_says_nonexistent, pch_says_nonexistent): Is now 1 for empty, 2 for nonexistent. (intuit_diff_type): Set p_says_nonexistent according to new meaning. Treat empty files like nonexistent files when reversing. (skip_to): Output better diagnostic when we can't intuit a file name. * patch.c (main): Count bytes, not lines, when testing whether a file is empty, since it may contain only non-newline chars. pch_says_nonexistent now returns 2 for nonexistent files. 1997-06-01 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.2. * pch.c (open_patch_file): Fix bug when computing size of patch read from a pipe. 1997-05-30 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.3.1. * Makefile.in (transform, patch_name): New vars, for proper implementation of AC_ARG_PROGRAM. (install, uninstall): Use them. (install-strip): New rule. * pc/djgpp/config.sed (program_transform_name): Set to empty. 1997-05-30 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION), NEWS: Version 2.3 released. * patch.man: Fix two font typos. * util.c (doprogram): Fix misspelled decl. 1997-05-26 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.2.93. * pch.c (open_patch_file): Fatal error if binary_transput and stdin is a tty. * pc/djgpp/config.sed (chdirsaf.c): Use sed instead of cp, since cp might not be installed. * pc/djgpp/configure.bat: Prepend %srcdir% to pathname of config.sed, for crosscompiles. 1997-05-25 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.2.92. (D_INO_IN_DIRENT): New macro. * pc/djgpp/config.h, acconfig.h (D_INO_IN_DIRENT): New macro. * backupfile.c (REAL_DIR_ENTRY): Depend on D_INO_IN_DIRENT, not _POSIX_VERSION. * addext.c (addext): Adjust slen when adjusting s for DOS 8.3 limit. Do not use xxx.h -> xxxh~ hack. * util.c: (move_file): Avoid makedirs test when possible even if FILESYSTEM_PREFIX_LEN (p) is nonzero. Don't play case-changing tricks to come up with backup file name; it's not portable to case-insensitive file systems. * common.h (ISLOWER): Remove. * inp.c (scan_input): Don't use Plan A if (debug & 16). * patch.c (shortopts): Add -g, -G. (longopts): --help now maps to 132, not 'h', to avoid confusion. (get_some_switches): Likewise. Don't invoke setmode on input if --binary; wait until needed. Don't ever invoke setmode on stdout. * pch.c (open_patch_file): Setmode stdin to binary if binary_transput. * patch.man: Fix documentation of backup file name to match behavior. Add advice for ordering of patches of derived files. Add /dev/tty to list of files used. * README: Adjust instructions for building on DOS. * pc/djgpp/README: Remove tentative wording. * NEWS: The DOS port is now tested. Backup file names are no longer computed by switching case. * pc/chdirsaf.c (ERANGE): Include <errno.h> to define it. (restore_wd): chdir unconditionally. (chdir_safer): Invoke atexit successfully at most once. * pc/djgpp/config.sed: Use chdirsaf.o, not pc/chdirsaf.o. Replace CONFIG_HDRS, don't append. Use $(srcdir) in CONFIG_STATUS. Don't apply $(SHELL) to $(CONFIG_STATUS). Append rules for chdirsaf.o, chdirsaf.c; clean chdirsaf.c at the end. * pc/djgpp/configure.bat: Append CR to each line; DOS needs this. Don't use | as sed s delimiter; DOS can't handle it. 1997-05-21 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.2.91. * pch.c (another_hunk): Fix bug with computing size of prefix and suffix context with ordinary context diffs. Report malformed patch if a unified diff has nothing but context. * inp.c (get_input_file): Use patch_get, not backup_type, to decide whether to get from RCS or SCCS. Use the word `get' in diagnostics. * patch.c (main): Initialize patch_get from PATCH_GET. Omit DEFAULT_VERSION_CONTROL hook; it just leads to nonstandarization. (longopts, option_help, get_some_switches): Add support for -g, -G. (option_help): Add bug report address. * common.h (patch_get): New decl. * patch.man: Add -g and -G options; use `get' instead of `check out'. Add PATCH_GET. Recommend -Naur instead of -raNU2 for diff. * NEWS: Describe -g, -G, PATCH_GET. * version.c (copyright_string): Use only most recent copyright year, as per GNU standards. * Makefile.in (DISTFILES_PC): Remove pc/quotearg.c. * pc/djgpp/config.sed: Remove unnecessary hooks for quotearg and SHELL. 1997-05-18 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Increase to 2.2.9. (AC_TYPE_MODE_T): Add. * pch.h (another_hunk): New parameter REV. * pch.c (hunkmax): Now of type LINENUM. (malformed): Add decl. (there_is_another_patch): Skip inname-detection if skip_rest_of_patch. (intuit_diff_type): To determine whether file appears to have been deleted, look at replacement, not pattern. If there is a mismatch between existence of file and whether the patch claims to change whether the file exists, ask whether to reverse the patch. (another_hunk): New parameter REV specifying whether to reverse the hunk. All callers changed. (do_ed_script): Add assertion to ensure input file exists. * util.h (create_file): New function. (copy_file): Now takes mode, not struct stat. (makedirs): No longer exported. (move_file): Now takes mode, not struct stat. * util.c (makedirs): No longer exported. (move_file): Accept mode of destination, not struct stat. All callers changed. Quote file names in diagnostics. Create parent dir of destination if necessary. Don't use ENOTDIR. Don't unlink source; it will be unlinked later. Unlink destination if FROM is zero. (create_file): New function. (copy_file): Accept mode of destination, not struct stat. All callers changed. Use create_file to create file. (ok_to_reverse): Moved here from patch.c. Now accepts format and args; all callers changed. (mkdir): 2nd arg is now mode_t, for better compatibility. (replace_slashes): Ignore slashes at the end of the filename. * common.h (noreverse): New decl. (ok_to_reverse): Remove decl. * patch.c (noreverse): Now extern. (main): New environment var PATCH_VERSION_CONTROL overrides VERSION_CONTROL. Don't assert(hunk) if we're skipping the patch; we may not have any hunks. When removing a file, back it up if backups are desired. Don't chmod output file if input file did not exist. chmod rej file to input file's mode minus executable bits. (locate_hunk): Go back to old way of a single fuzz parameter, but handle it more precisely: context diffs with partial contexts can only match file ends, since the partial context can occur only at the start or end of file. All callers changed. (create_output_file): Use create_file to create files. (ok_to_reverse): Move to util.c. * inp.c (scan_input, get_input_file): Quote file names in diagnostics. (get_input_file): Set inerrno if it's not already set. Don't create file; it's now the caller's responsibility. (plan_b): Use /dev/null if input size is zero, since it might not exist. Use create_file to create temporary file. * NEWS: Add PATCH_VERSION_CONTROL; DOS port is untested. * pc/djgpp/config.h: Add comment for mode_t. * pc/djgpp/README: Note that it's not tested. * patch.man: PATCH_VERSION_CONTROL overrides VERSION_CONTROL. 1997-05-15 Paul Eggert <eggert@twinsun.com> * configure.in: Add AC_PREREQ(2.12). (VERSION): Bump to 2.2.8. (ed_PROGRAM): Rename from ED_PROGRAM. * pch.c (prefix_components): Support DOS file names better. Fix typo that caused fn to almost always yield 0. * util.c (<time.h>, <maketime.h>): Include. (move_file, copy_file): Add support for DOS filenames. Preserve mode of input files when creating temp files. Add binary file support. (doprogram, rmdir): New functions. (mkdir): Use doprogram. (replace_slashes): Add support for DOS filenames. (removedirs): New function. (init_time)): New function. (initial_time): New var. (fetchname): Add support for deleted files, DOS filenames. * basename.c (FILESYSTEM_PREFIX_LEN, ISSLASH): New macros, for DOS port. (base_name): Use them. * addext.c (HAVE_DOS_FILE_NAMES): New macro. <limits.h>: Include if HAVE_LIMITS_H. (addext): Handle hosts with DOS file name limits. * common.h (LONG_MIN): New macro. (FILESYSTEM_PREFIX_LEN, ISSLASH): New macros, for DOS port. (ok_to_create_file): Remove. (reverse): Now int. (ok_to_reverse): New function decl. (O_WRONLY, _O_BINARY, O_BINARY, O_CREAT, O_TRUNC): New macros. (binary_transput): New var decl. * Makefile.in (ed_PROGRAM): Renamed from ED_PROGRAM. (CONFIG_HDRS, CONFIG_STATUS): New vars. (SRCS): Add maketime.c, partime.c. (OBJS): Likewise. (HDRS): Add maketime.h, partime.h. (DISTFILES_PC, DISTFILES_PC_DJGPP): New vars. (Makefile, config.status): Use CONFIG_STATUS, not config.status. (clean): Remove */*.o. (dist): Add pc and pc/djgpp subdirectories. ($(OBJS)): Depend on $(CONFIG_HDRS) instead of config.h. (maketime.o, partime.o): New rules. (util.o): Depend on maketime.h. * patch.c (main): Call init_time. Add DEFAULT_VERSION_CONTROL hook for people who prefer the old ways. Build temp file names before we might invoke cleanup. Add support for deleted files and clean up the patch-swapping code a bit. Delete empty ancestors of deleted files. When creating temporaries, use file modes of original files. (longopts, get_some_switches): New option --binary. (get_some_switches): Report non-errno errors with `fatal', not `pfatal'. (create_output_file): New function, which preserves modes of original files and supports binary transput. (init_output, init_reject): Use it. (ok_to_reverse): New function. (TMPDIR): New macro. (make_temp): Use $TMPDIR, $TMP, $TEMP, or TMPDIR, whichever comes first. * pch.c (p_says_nonexistent): New var. (open_patch_file): Add binary transput support. Apply stat to file names retrieved from user. Reject them if they don't exist. (intuit_diff_type): Add support for deleting files. Don't treat trivial directories any differently. Avoid stating the same file twice in common case of context diffs. (prefix_components): Don't treat trivial directories any differently. Add support for DOS filenames. (pch_says_nonexistent): New function. (do_ed_script): Preserve mode of input files when creating temp files. Add support for binary transput. * pch.h (pch_says_nonexistent): New decl. * util.h (replace_slashes): No longer exported. (fetchname): Add support for deleted files. (copy_file, move_file): Add support for preserving file modes. (init_time, removedirs): New functions. * argmatch.c: Converge with fileutils. * backupfile.c: Converge with fileutils. (find_backup_file_name): Treat .~N~ suffix just like any other suffix when handling file names that are too long. * inp.c: In messages, put quotes around file names and spaces around "--". (get_input_file): Allow files to be deleted. Do the expense of makedirs only if we can't create the file. (plan_a, plan_b): Add support for binary transput. * pc/chdirsaf.c, pc/djgpp/README, pc/djgpp/config.h, pc/djgpp/config.sed, pc/djgpp/configure.bat, pc/quotearg.c: New file. * NEWS: New methods for removing files; adjust file name intuition again. Add description of MS-DOS and MS-Windows ports. * patch.man: Simplify file name intuition slightly (no distinction for trivial dirs). Add --binary. Describe how files and directories are deleted. Suggest diff -a. Include caveats about what context diffs cannot represent. 1997-05-06 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Now 2.2.7. (CPPFLAGS, LDFLAGS, LIBS): If the user has not set any of these vars, prefer support for large files if available. * common.h (_LARGEFILE_SOURCE): Define. (file_offset): New typedef. (file_seek, file_tell): New macros. * patch.c (main): Remove empty files by default unless POSIXLY_CORRECT is set. * util.c, util.h (Fseek): Use file_offset instead of long, for portability to large-file hosts. * pch.c: (p_base, p_start, next_intuit_at, skip_to, open_patch_file, intuit_diff_type, another_hunk, incomplete_line, do_ed_script): Use file_offset instead of long, for portability to large-file hosts. (prefix_components): Renamed from path_name_components; count only nontrivial prefix components, and take a 2nd EXISTING arg. (existing_prefix_components): Remove; subsumed by prefix_components. (intuit_diff_type): When creating files, try for the creation of the fewest directories. *: When creating a file, use a name whose existing directory prefix contains the most nontrivial path name components. Add advice for creating patches and applying them. 1997-05-06 Paul Eggert <eggert@twinsun.com> *: Describe above change to pch.c. Add advice for creating patches and applying them. 1997-05-05 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Update to 2.2.5. * quotearg.h, quotearg.c: New files. * Makefile.in (SRCS, OBJS, HDRS): Mention new files. (inp.o, util.o): Now depends on quotearg.h. (quotearg.o): New makefile rule. * common.h (posixly_correct): New var. * patch.c (main): Initialize it. If ! posixly_correct, default backup type is now `existing'. SIMPLE_BACKUP_SUFFIX no longer affects backup type. (backup): Remove var. * util.h: (countdirs): Remove. (systemic): New decl. * util.c (move_file): Try making the parent directory of TO if backup prefix or suffix contain a slash. (ask): Remove arbitrary limit on size of result. (systemic): New function. (mkdir): Work even if arg contains shell metacharacters. (replace_slashes): Return 0 if none were replaced. Don't replace slash after . or .. since it's redundant. (countdirs): Remove. (makedirs): Ignore mkdir failures. * NEWS, patch.man: More POSIXLY_CORRECT adjustments. Describe new rules for how file names are intuited. 1997-04-17 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Version 2.2 released. * Makefile.in (config.hin): Remove before building; we always want the timestamp updated. * inp.c (get_input_file): Look for RCS files only if backup_type == numbered_existing. * NEWS, patch.man: Remove mention of never-implemented -V rcs and -V sccs options. * patch.man: `pathname' -> `file name' Correct the description of how file names are found in diff headers. Clarify the distinction between ordinary and unified context diffs. 1997-04-13 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Update to 2.1.7. * patch.c (numeric_optarg): New function. (get_some_switches): Use it. * pch.c (intuit_diff_type): When creating a file, prefer a name whose existing dir prefix is the longest. * util.h (countdirs): New function. * util.c (replace_slashes, countdirs): New functions. (makedirs): Use replace_slashes, to be more like countdirs. * patch.man: Explain -pN vs -p N. Recommend --new-file. Explain possible incompatibility with strip count. 1997-04-10 Paul Eggert <eggert@twinsun.com> * configure.in (VERSION): Bump to 2.1.6. (AC_CHECK_HEADERS): Remove stdlib.h (i.e. remove HAVE_STDLIB_H). * Makefile.in: (HDRS, patchlevel.h, TAGS, distclean, maintainer-clean): Don't distribute patchlevel.h; let the user do it. This works around some obscure (possibly nonexistent?) `make' bugs. * common.h (program_name): extern, not XTERN. (<stdlib.h>): Include if STDC_HEADERS, not if HAVE_STDLIB_H. (atol, getenv, malloc, realloc): Don't worry whether they're #defined. * patch.c (get_some_switches): Add special hack for backwards compatibility with CVS 1.9. (-B, -Y, -z): Now set backup_type = simple. * NEWS: Fix misspellings; minor reformatting. * README: Report POSIX.2 compliance. 1997-04-06 Paul Eggert <eggert@twinsun.com> Move all old RCS $Log entries into ChangeLog. #include all files with < >, not " ". * addext.c, argmatch.c, argmatch.h, memchr.c, install-sh: New files. * EXTERN.h, INTERN.h: Removed. * config.hin: Renamed from config.h.in. * acconfig.h (NODIR): Remove. (HAVE_MEMCHR): Add. * configure.in (AC_ARG_PROGRAM, AC_PROG_MAKE_SET, HAVE_MEMCHR): Add. (AC_CHECK_HEADERS): Replaces obsolescent AC_HAVE_HEADERS. Add stdlib.h, string.h, unistd.h, varargs.h. Delete obsolete call to AC_UNISTD_H. (AC_CONFIG_HEADER): Rename config.h.in to config.hin. (AC_C_CONST): Replaces obsolescent AC_CONST. (AC_CHECK_FUNC): Check for getopt_long; define LIBOBJS and substitute for it accordingly. (AC_CHECK_FUNCS): Replaces obsolescent AC_HAVE_FUNCS. Add _doprintf, isascii, mktemp, sigaction, sigprocmask, sigsetmask. Remove strerror. (AC_FUNC_CLOSEDIR_VOID, AC_FUNC_VPRINTF): Add. (AC_HEADER_DIRENT): Replaces obsolescent AC_DIR_HEADER. (AC_HEADER_STDC): Replaces obsolescent AC_STDC_HEADERS. (AC_SYS_LONG_FILE_NAMES): Replaces obsolescent AC_LONG_FILE_NAMES. (AC_TYPE_OFF_T): Replaces obsolescent AC_OFF_T. (AC_TYPE_SIGNAL): Replaces obsolescent AC_RETSIGTYPE. (AC_TYPE_SIZE_T): Replaces obsolescent AC_SIZE_T. (AC_XENIX_DIR): Remove. (ED_PROGRAM): New var. (NODIR): Remove. (PACKAGE, VERSION): New vars; substitute them with AC_SUBST. * Makefile.in: Conform to current GNU build standards. Redo dependencies. Use library getopt_long if available. Use `&&' instead of `;' inside shell commands where applicable; GNU make requires this. Use double-colon rules for actions that do not build files. (@SET_MAKE@): Added. (CFLAGS, LDFLAGS, prefix, exec_prefix): Base on @ versions of symbols. (COMPILE, CPPFLAGS, DEFS, ED_PROGRAM, LIBOBJS, LIBSRCS, PACKAGE, VERSION): New symbols. (SRCS, OBJS, HDRS, MISC): Add new files. (man1dir): Renamed from mandir. (man1ext): Renamed from manext. (patch): Put -o first. (install): Use $(transform) to allow program to be renamed by configure. (patchlevel.h): Build from $(VERSION). (dist): Get version number from $(VERSION) and package name from $(PACKAGE). (TAGS): Scan $(HDRS). (maintainer-clean): Renamed from realclean. Remove patchlevel.h. * backupfile.h (simple_backup_suffix): Now const *. (find_backup_file_name, base_name, get_version): Args are now const *. (base_name): New decl. * backupfile.c (<config.h>): Include only if HAVE_CONFIG_H. (<argmatch.h>): Include. (<string.h>): Include if HAVE_STRING_H, not if STDC_HEADERS. (<strings.h>): Include if !HAVE_STRING_H. (<unistd.h>): Do not include. (<dirent.h>): Redo include as per current autoconf standards. (<limits.h>): Include if HAVE_LIMITS_H. Define CHAR_BIT if not defined. (NLENGTH): Now returns size_t. (CLOSEDIR, INT_STRLEN_BOUND): New macros. (ISDIGIT): Use faster method. (find_backup_file_name): No longer depends on NODIR. Remove redundant code. (make_version_name): Remove; do it more portably. (max_backup_version): Args are now const *. (version_number): Simplify digit checking. (basename, concat, dirname): Remove. (argmatch, invalid_arg): Move to argmatch.c. Simplify test for ambiguous args. When reporting an error, use program_name not "patch". (addext): Move to addext.c. Treat all negative values from pathconf like -1. Always use long extension if it fits, even if the filesystem does not support long file names. (backup_types): Now const. * common.h, inp.h (XTERN): Renamed from EXT to avoid collision with errno.h reserved name space. * common.h (DEBUGGING): Now an integer; default is 1. (enum diff): New type. (diff_type): Use it instead of small integers. (CONTEXT_DIFF, NORMAL_DIFF, ED_DIFF, NEW_CONTEXT_DIFF, UNI_DIFF): Now enumerated values instead of macros. (NO_DIFF): New enumerated value (used instead of 0). (volatile): Default to the empty string if __STDC__ is not defined. (<signal.h>): Do not include. (Chmod, Close, Fclose, Fflush, Fputc, Signal, Sprintf, Strcat, Strcpy, Unlink, Write): Remove these macros; casts to void are not needed for GNU coding standards. (INITHUNKMAX): Move to pch.c. (malloc, realloc, INT_MIN, MAXLINELEN, strNE, strnNE, Reg1, Reg2, Reg3, Reg4, Reg5, Reg6, Reg7, Reg8, Reg9, Reg10, Reg11, Reg12, Reg13, Reg14, Reg15, Reg16): Remove these macros. (S_IXOTH, S_IWOTH, S_IROTH, S_IXGRP, S_IWGRP, S_IRGRP, S_IXUSR, S_IWUSR, S_IRUSR, O_RDONLY, O_RDWR): Define these macros, if not defined. (CTYPE_DOMAIN, ISLOWER, ISSPACE, ISDIGIT, PARAMS): New macros. (instat): Renamed from filestat; used for input file now. (bufsize, using_plan_a, debug, strippath): Not statically initialized. (debug): #define to 0 if not DEBUGGING, so that users of `debug' no longer need to be surrounded by `#if DEBUGGING'. (out_of_mem, filec, filearg, outname, toutkeep, trejkeep): Remove. (inname, inerrno, dry_run, origbase): New variables. (origprae): Now const*. (TMPOUTNAME, TMPINNAME, TMPPATNAME): Now const*volatile. (verbosity): New variable; subsumes `verbose'. (DEFAULT_VERBOSITY, SILENT, VERBOSE): Values in a new enum. (verbose): Removed. (VOID): Use `#ifdef __STDC__' instead of`#if __STDC__', for consistency elsewhere. (__attribute__): New macro (empty if not a recent GCC). (fatal_exit): Renamed from my_exit. (errno): Don't define if STDC_HEADERS. (<string.h>): Include if either STDC_HEADERS or HAVE_STRING_H. (memcmp, memcpy): Define if !STDC_HEADERS && !HAVE_STRING_H && !HAVE_MEMCHR. (<stdlib.h>): Include if HAVE_STDLIB_H, not if STDC_HEADERS. (atol, getenv, malloc, realloc, lseek): Declare only if not defined as a macro. (popen, strcpy, strcat, mktemp): Do not declare. (lseek): Declare to yield off_t, not long. (<fcntl.h>): Include only if HAVE_FCNTL_H. * inp.h (get_input_file): New decl. * inp.c (SCCSPREFIX, GET, GET_LOCKED, SCCSDIFF, RCSSUFFIX, CHECKOUT, CHECKOUT_LOCKED, RCSDIFF): Moved here from common.h. (i_ptr): Now char const **. (i_size): Remove. (TIBUFSIZE_MINIMUM): Define only if not already defined. (plan_a, plan_b): Arg is now const *. (report_revision): Declare before use. It's now the caller's responsibility to test whether revision is 0. (scan_input, report_revision, get_input_file): Be less chatty unless --verbose. (get_input_file): New function, split off from plan_a. Reuse file status gotten by pch if possible. Allow for dry run. Use POSIX bits for creat, not number. Check for creation and close failure, and use fstat not stat. Use memcpy not strncpy. (plan_a): Rewrite for speed. Caller now assigns result to using_plan_a. Don't bother reading empty files; during dry runs they might not exist. Use ISSPACE, not isspace. (plan_b): Allow for dry runs. Use ISSPACE, and handle sign extension correctly on arg. Use POSIX symbol for open arg. * patch.c (backup, output, patchname, program_name): New vars. (last_frozen_line): Moved here from inp.h. (TMPREJNAME): Moved here from common.h. (optind_last): Removed. (do_defines, if_defined, not_defined, else_defined, end_defined): Now char const. Prepend with \n (except for not_defined) to allow for files ending in non-newline. (Argv): Now char*const*. (main, get_some_switches): Exit status 0 means success, 1 means hunks were rejected, 2 means trouble. (main, locate_hunk, patch_match): Keep track of patch prefix context separately from suffix context; this fixes several bugs. (main): Initialize bufsize, strippath. Be less chatty unless --verbose. No more NODIR; always have version control available. Require environment variables to be nonempty to have effect. Add support for --dry-run, --output, --verbose. Invoke get_input_file first, before deciding among do_ed_script, plan_a, or plan_b. Clear ofp after closing it, to keep discipline that ofp is either 0 or open, to avoid file descriptor leaks. Conversely, rejfp doesn't need this trick since static analysis is enough to show when it needs to be closed. Don't allow file-creation patches to be applied to existing files. Misordered hunks are now not fatal errors; just go on to the next file. It's a fatal error to fall back on plan B when --output is given, since the moving hand has writ. Add support for binary files. Check for I/O errors. chmod output file ourselves, rather than letting move_file do it; this saves global state. Use better grammar when outputting hunks messages, e.g. avoid `1 hunks'. (main, reinitialize_almost_everything): Remove support for multiple file arguments. Move get_some_switches call from reinitialize_almost_everything to main. (reinitialize_almost_everything): No need to reinitialize things that are no longer global variables, e.g. outname. (shortopts): Remove leading "-"; it's no longer important to return options and arguments in order. '-b' no longer takes operand. -p's operand is no longer optional. Add -i, -Y, -z. Remove -S. (longopts): --suffix is now pared with -z, not -b. --backup now means -b. Add --input, --basename-prefix, --dry-run, --verbose. Remove --skip. --strip's operand is now required. (option_help): New variable. Use style of current coding standards. Change to match current option set. (usage): Use it. (get_some_switches): Get all switches, since `+' is defunct. New options -i, -Y, -z, --verbose, --dry-run. Option -S removed. -b now means backup (backup_type == simple), not simple_backup_suffix. -B now implies backup, and requires nonempty operand. -D no longer requires first char of argument to be an identifier. `-o -' is now disallowed (formerly output to regular file named "-"). -p operand is now required. -v no longer needs to cleanup (no temp files can exist at that point). -V now implies backup. Set inname, patchname from file name arguments, if any; do not set filearg. It's now an error if extra operands are given. (abort_junk): Check for write errors in reject file. (apply_hunk, copy_till): Return error flag, so that failure to apply out-of-order hunk is no longer fatal. (apply_hunk): New arg after_newline, for patching files not ending in newline. Cache ofp for speed. Check for write errors. (OUTSIDE, IN_IFNDEF, IN_IFDEF, IN_ELSE): Now part of an enumerated type instead of being #defined to small integers. Change while-do to do-while when copying !-part for R_do_defines, since condition is always true the first time through the loop. (init_output, init_reject): Arg is now const *. (copy_till, spew_output): Do not insert ``missing'' newlines; propagate them via new after_newline argument. (spew_output): Nothing to copy if last_frozen_line == input lines. Do not close (ofp) if it's null. (dump_line): Remove. (similar): Ignore presence or absence of trailing newlines. Check for only ' ' or '\t', not isspace (as per POSIX.2). (make_temp): Use tmpnam if mktemp is not available. (cleanup): New function. (fatal_exit): Use it. Renamed from my_exit. Take signal to exit with, not exit status (which is now always 2). * pch.h, pch.c (pch_prefix_context, pch_suffix_context): New fns replacing pch_context. (another_hunk): Now yields int, not bool; -1 means out of memory. Now takes difftype as argument. (pch_write_line): Now returns boolean indicating whether we're after a newline just after the write, for supporting non-text files. * pch.c (isdigit): Remove; use ISDIGIT instead. (INITHUNKMAX): Moved here from common.h. (p_context): Removed. We need to keep track of the pre- and post- context separately, in: (p_prefix_context, p_suffix_context): New variables. (bestguess): Remove. (open_patch_file): Arg is now char const *. Copy file a buffer at a time, not a char at a time, for speed. (grow_hunkmax): Now returns success indicator. (there_is_another_patch, skip_to, another_hunk, do_ed_script): Be less chatty unless --verbose. (there_is_another_patch): Avoid infinite loop if user input keeps yielding EOF. (intuit_diff_type): New returns enum diff, not int. Strip paths as they're being fetched. Set ok_to_create_file correctly even if patch is reversed. Set up file names correctly with unidiff output. Use algorithm specified by POSIX 1003.2b/D11 to deduce name of file to patch, with the exception of patches that can create files. (skip_to): Be verbose if !inname, since we're about to ask the user for a file name and the context will help the user choose. (another_hunk): Keep context as LINENUM, not int. If the replacement is missing, calculate its context correctly. Don't assume input ends in newline. Keep track of patch prefix context separately from suffix context; this fixes several bugs. Don't assume blank lines got chopped if the replacement is missing. Report poorly-formed hunks instead of aborting. Do not use strcpy on overlapping strings; it's not portable. Work even if lines are incomplete. Fix bugs associated with context-less context hunks, particularly when patching in reverse. (pget_line): Now takes just 1 arg; instead of second arg, just examine using_plan_a global. Return -1 if we ran out of memory. (do_ed_script): Now takes output FILE * argument. Take name of editor from ED_PROGRAM instead of hardwiring /bin/ed. Don't bother unlinking TMPOUTNAME. Check for popen failure. Flush pipe to check for output errors. If ofp is nonzero, copy result to it, instead of trying to move the result. * util.h, util.c (say1, say2, say3, say4, fatal1, fatal2, fatal3, fatal4, pfatal1, pfatal2, pfatal3, pfatal4, ask1, ask2, ask3, ask4): Remove; replaced with following. (ask, say, fatal, pfatal): New stdarg functions. (fetchname): Remove last, `assume_exists' parameter. (savebuf, savestr, move_file, copy_file): Args are now const *. (exit_with_signal): New function, for proper process status if a signal is received as per POSIX.2. (basename): Rename to `base_name' and move to backupfile. * util.c (<signal.h>): Include here, not in common.h. (vararg_start): New macro. (va_dcl, va_start, va_arg, va_end): Define if neither <stdarg.h> nor <varargs.h> are available. (SIGCHLD): Define to SIGCLD if SIGCLD is defined and SIGCHLD isn't. (private_strerror): Remove. (move_file): Remove option of moving to stdout. Add support for -Y, -z. Don't assume chars in file name are nonnegative. Use copy_file if rename fails due to EXDEV; report failure if rename fails for any other reason. (copy_file, makedirs): Use POSIX symbols for permissions. (copy_file): Open source before destination. (remove_prefix): New function. (vfprintf): New function, if !HAVE_VPRINTF. (afatal, apfatal, zfatal, zpfatal, errnum): Remove. (fatal, pfatal, say): New functions that use stdarg. All callers changed. (zask): Renamed from `ask'. Now uses stdarg. Output to stdout, and read from /dev/tty, or if that cannot be opened, from stderr, stdout, stdin, whichever is first a tty. Print "EOF" when an EOF is read. Do not echo input. (sigs): New array. (sigset_t, sigemptyset, sigmask, sigaddset, sigismember, SIG_BLOCK, SIG_UNBLOCK, SIG_SETMASK, sigprocmask, sigblock, sigsetmask): Define substitutes if not available. (initial_signal_mask, signals_to_block): New vars. (fatal_exit_handler): New function, if !HAVE_SIGACTION. (set_signals, ignore_signals): Use sigaction and sigprocmask style signal-handling if possible; it doesn't lose signals. (set_signals): Default SIGCHLD to work around SysV fork+wait bug. (mkdir): First arg is now const *. (makedirs): Handle multiple adjacent slashes correctly. (fetchname): Do not worry about whether the file exists (that is now the caller's responsibility). Treat a sequence of one or more slashes like one slash. Do not unstrip leading directories if they all exist and if no -p option was given; POSIX doesn't allow this. (memcmp): Remove (now a macro in common.h). * version.c (copyright_string, free_software_msgid, authorship_msgid): New constants. (version): Use them. Use program_name instead of hardwiring it. * patch.man: Generate date from RCS Id. Rewrite to match the above changes. Fri Jul 30 02:02:51 1993 Paul Eggert (eggert@twinsun.com) * configure.in (AC_HAVE_FUNCS): Add mkdir. * common.h (Chmod, Fputc, Write, VOID): New macros. (malloc, realloc): Yield `VOID *', not `char *'. * util.h (makedirs): Omit `striplast' argument. Remove `aask'. * inp.c (plan_a): Remove fixed internal buffer. Remove lint. * util.c (set_signals, ignore_signals): Trap SIGTERM, too. (makedirs): Removed fixed internal buffer. Omit `striplast' argument. (mkdir): New function, if !HAVE_MKDIR. (fetchname): Remove fixed internal buffer. Remove lint from various functions. * patch.c, pch.c: Remove lint. Thu Jul 29 20:52:07 1993 David J. MacKenzie (djm@wookumz.gnu.ai.mit.edu) * Makefile.in (config.status): Run config.status --recheck, not configure, to get the right args passed. Thu Jul 29 07:46:16 1993 Paul Eggert (eggert@twinsun.com) * The following changes remove all remaining fixed limits on memory, and fix bugs in patch's handling of null bytes and files that do not end in newline. `Patch' now works on binary files. * backupfile.c (find_backup_file_name): Don't dump core if malloc fails. * EXTERN.h, INTERN.h (EXITING): New macro. * backupfile.[ch], patch.c, pch.c: Add PARAMS to function declarations. * common.h (bool): Change to int, so ANSI C prototype promotion works. (CANVARARG): Remove varargs hack; it wasn't portable. (filearg): Now a pointer, not an array, so that it can be reallocated. (GET*, SCCSDIFF, CHECKOUT*, RCSDIFF): Quote operands to commands. (my_exit): Declare here. (BUFFERSIZE, Ctl, filemode, Fseek, Fstat, Lseek, MAXFILEC, MAXHUNKSIZE, Mktemp, myuid, Null, Nullch, Nullfp, Nulline, Pclose, VOIDUSED): Remove. All invokers changed. (Argc, Argv, *define[sd], last_offset, maxfuzz, noreverse, ofp, optind_last, rejfp, rejname): No longer externally visible; all definers changed. (INT_MAX, INT_MIN, STD*_FILENO, SEEK_SET): Define if the underlying system doesn't. Include <limits.h> for this. * configure.in: Add limits.h, memcmp. Delete getline. * inp.c (tibufsize): New variable; buffers grow as needed. (TIBUFSIZE_MINIMUM): New macro. (report_revision): New function. (plan_a): Do not search patch as a big string, since that fails if it contains null bytes. Prepend `./' to filenames starting with `-', for RCS and SCCS. If file does not match default RCS/SCCS version, go ahead and patch it anyway; warn about the problem but do not report a fatal error. (plan_b): Do not use a fixed buffer to read lines; read byte by byte instead, so that the lines can be arbitrarily long. Do not search lines as strings, since they may contain null bytes. (plan_a, plan_b): Report I/O errors. * inp.c, inp.h (rev_in_string): Remove. (ifetch): Yield size of line too, since strlen no longer applies. (plan_a, plan_b): No longer exported. * patch.c (abort_hunk, apply_hunk, patch_match, similar): Lines may contain NUL and need not end in newline. (copy_till, dump_line): Insert newline if appending after partial line. All invokers changed. (main, get_some_switches, apply_hunk): Allocate *_define[ds], filearg, rejname dynamically. (make_temp): New function. (main): Use it. (main, spew_output, dump_line) Check for I/O errors. * pch.c (open_patch_file): Don't copy stdin to a temporary file if it's a regular file, since we can seek on it directly. (open_patch_file, skip_to, another_hunk): The patch file may contain NULs. (another_hunk): The patch file may contain lines starting with '\', which means the preceding line lacked a trailing newline. (pgetline): Rename to pget_line. (get_line, incomplete_line, pch_write_line): New functions. (pch_line_len): Return size_t, not short; lines may be very long. (do_ed_script): Check for I/O errors. Allow scripts to contain 'i' and 's' commands, too. * pch.h (pfp, grow_hunkmax, intuit_diff_type, next_intuit_at, skip_to, pfetch, pgetline): No longer exported. (pch_write_line): New declaration. (getline): Removed. * util.c (move_file, fetchname): Use private stat buffer, so that filestat isn't lost. Check for I/O errors. (savestr): Use savebuf. (zask): Use STD*_FILENO instead of 0, 1, 2. (fetchname): strip_leading defaults to INT_MAX instead of 957 (!). (memcmp): Define if !HAVE_MEMCMP. * util.c, util.h (say*, fatal*, pfatal*, ask*): Delete; these pseudo-varargs functions weren't ANSI C. Replace by macros that invoke [fs]printf directly, and invoke new functions [az]{say,fatal,pfatal,ask} before and after. (savebuf, read_fatal, write_fatal, memory_fatal, Fseek): New functions. (fatal*): Output trailing newline after message. All invokers changed. * version.c (version): Don't exit. * Makefile.in (SRCS): Remove getline.c. Thu Jul 22 15:24:24 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * EXTERN.h, INTERN.h (PARAMS): Define. * backupfile.h, common.h, inp.h, pch.h, util.h: Use. * backupfile.c: Include EXTERN.h. Wed Jul 21 13:14:05 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * getline.c: New file. * configure.in: Check for getline (GNU libc has it). * pch.c: Use it instead of fgets. (pgetline): Renamed from pgets. Change callers. * pch.h: Change decl. * pch.c (pgets): Tab adjusts by 8 - (indent % 8), not % 7. Be consistent with similar code in pch.c::intuit_diff_type. * common.h (MEM): Typedef removed. inp.c, pch.c, util.c: Use size_t instead of MEM. inp.c, pch.c: Use off_t. configure.in: Add AC_SIZE_T and AC_OFF_T. * common.h: Make buf a pointer and add a bufsize variable. * util.c, pch.c, inp.c: Replace sizeof buf with bufsize. * patch.c: malloc buf to bufsize bytes. Tue Jul 20 20:40:03 1993 Paul Eggert (eggert@twinsun.com) * common.h (BUFFERSIZE): Grow it to 8k too, just in case. (buf): Turn `buf' back into an array; making it a pointer broke things seriously. * patch.c (main): Likewise. Tue Jul 20 20:02:40 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * Move Reg[1-16] and CANVARARG decls from config.h.in to common.h. * acconfig.h: New file. * Makefile (HDRS): Add it. Tue Jul 20 16:35:27 1993 Paul Eggert (eggert@twinsun.com) * Makefile.in: Remove alloca.[co]; getopt no longer needs it. * configure.in (AC_ALLOCA): Remove. * util.c (set_signals, ignore_signals): Do nothing if SIGHUP and SIGINT aren't defined. Tue Jul 20 17:59:56 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * patch.c (main): Call xmalloc, not malloc. xmalloc buf. * common.h: Declare xmalloc. Make buf a pointer, not an array. * util.c (xmalloc): Call fatal1, not fatal. * common.h [MAXLINELEN]: Bump from 1k to 8k. Thu Jul 8 19:56:16 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * Makefile.in (installdirs): New target. (install): Use it. (Makefile, config.status, configure): New targets. Wed Jul 7 13:25:40 1993 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * patch.c (get_some_switches, longopts): Recognize --help option, and call usage. (usage): New function. Fri Jun 25 07:49:45 1993 Paul Eggert (eggert@twinsun.com) * backupfile.c (find_backup_file_name): Don't use .orig if numbered_existing with no existing numbered backup. (addext): Don't use ext if !HAVE_LONG_FILE_NAMES, even if it would fit. This matches patch's historical behavior. (simple_backup_suffix): Default to ".orig". * patch.c (main): Just use that default. Tue Jun 15 22:32:14 1993 Paul Eggert (eggert@twinsun.com) * config.h.in (HAVE_ALLOCA_H): This #undef was missing. * Makefile.in (info, check, installcheck): New rules. Sun Jun 13 14:31:29 1993 Paul Eggert (eggert@twinsun.com) * config.h.in (index, rindex): Remove unused macro definitions; they get in the way when porting to AIX. * config.h.in, configure.in (HAVE_STRING_H): Remove unused defn. Thu Jun 10 21:13:47 1993 Paul Eggert (eggert@twinsun.com) * patchlevel.h: PATCH_VERSION 2.1. (The name `patch-2.0.12g12' is too long for traditional Unix.) * patchlevel.h (PATCH_VERSION): Renamed from PATCHLEVEL. Now contains the entire patch version number. * version.c (version): Use it. Wed Jun 9 21:43:23 1993 Paul Eggert (eggert@twinsun.com) * common.h: Remove declarations of index and rindex. * backupfile.c: Likewise. (addext, basename, dirname): Avoid rindex. Tue Jun 8 15:24:14 1993 Paul Eggert (eggert@twinsun.com) * inp.c (plan_a): Check that RCS and working files are not the same. This check is needed on hosts that do not report file name length limits and have short limits. Sat Jun 5 22:56:07 1993 Paul Eggert (eggert@twinsun.com) * Makefile.in (.c.o): Put $(CFLAGS) after other options. (dist): Switch from .z to .gz. Wed Jun 2 10:37:15 1993 Paul Eggert (eggert@twinsun.com) * backupfile.c (find_backup_file_name): Initialize copy of file name properly. Mon May 31 21:55:21 1993 Paul Eggert (eggert@twinsun.com) * patchlevel.h: Patch level 12g11. * pch.c (p_Char): Renamed from p_char, which is a system type in Tex XD88's <sys/types.h>. * backupfile.c: Include "config.h" first, so that `const' is treated consistently in system headers. Mon May 31 16:06:23 1993 Paul Eggert (eggert@twinsun.com) * patchlevel.h: Patch level 12g10. * configure.in: Add AC_CONST. * config.h.in: Add `const'. * Makefile.in (.c.o): Add -DHAVE_CONFIG_H. (getopt.o getopt1.o): Depend on config.h. * util.c (xmalloc): New function; alloca.c needs this. Mon May 31 00:49:40 1993 Paul Eggert (eggert@twinsun.com) * patchlevel.h: PATCHLEVEL 12g9. * backupfile.c, backupfile.h (addext): New function. It uses pathconf(), if available, to determine maximum file name length. * patch.c (main): Use it for reject file name. * common.h (ORIGEXT): Moved to patch.c. * config.h.in (HAVE_PATHCONF): New macro. * configure.in: Define it. * Makefile.in (dist): Use gzip, not compress. Sat May 29 09:42:18 1993 Paul Eggert (eggert@twinsun.com) * patch.c (main): Use pathconf to decide reject file name. * common.h (REJEXT): Remove. * inp.c (plan_a): Don't lock the checked-out file if `patch -o' redirected the output elsewhere. * common.h (CHECKOUT_LOCKED, GET_LOCKED): New macros. GET and CHECKOUT now just checkout unlocked copies. Fri May 28 08:44:50 1993 Paul Eggert (eggert@twinsun.com) * backupfile.c (basename): Define even if NODIR isn't defined. * patch.c (main): Ask just once to apply a reversed patch. Tue Nov 24 08:09:04 1992 David J. MacKenzie (djm@goldman.gnu.ai.mit.edu) * config.h.in, common.h: Use HAVE_FCNTL_H and HAVE_STRING_H instead of USG. * backupfile.c: Use SYSDIR and NDIR instead of USG. Define direct as dirent, not vice-versa. Wed Sep 16 17:11:48 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * patch.c (get_some_switches): optc should be int, not char. Tue Sep 15 00:36:46 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * patchlevel.h: PATCHLEVEL 12g8. Mon Sep 14 22:01:23 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * Makefile.in: Add uninstall target. * util.c (fatal, pfatal): Add some asterisks to make fatal messages stand out more. Tue Aug 25 22:13:36 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * patch.c (main, get_some_switches), common.h, inp.c (plan_a, plan_b), pch.c (there_is_another_patch): Add -t --batch option, similar to -f --force. Mon Jul 27 11:27:07 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * common.h: Define SCCSDIFF and RCSDIFF. * inp.c (plan_a): Use them to make sure it's safe to check out the default RCS or SCCS version. From Paul Eggert. Mon Jul 20 14:10:32 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * util.h: Declare basename. * inp.c (plan_a), util.c (fetchname): Use it to isolate the leading path when testing for RCS and SCCS files. Fri Jul 10 16:03:23 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * util.c (makedirs): Only make the directories that don't exist. From chip@tct.com (Chip Salzenberg). Wed Jul 8 01:20:56 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * patch.c (main): Open ofp after checking for ed script. Close ofp and rejfp before trying plan B. From epang@sfu.ca (Eugene Pang). * util.c (fatal, pfatal): Print "patch: " before message. * pch.c, inp.c, patch.c, util.c: Remove "patch: " from the callers that had it. * common.h (myuid): New variable. * patch.c (main): Initialize it. * inp.c (myuid): Function removed. (plan_a): Use the variable, not the function. * patch.c: Add back -E --remove-empty-files option. Tue Jul 7 23:19:28 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * inp.c (myuid): New function. (plan_a): Call it. Optimize stat calls. Be smarter about detecting checked out RCS and SCCS files. From Paul Eggert (eggert@twinsun.com). * inp.c, util.c, patch.c: Don't bother checking for stat() > 0. Mon Jul 6 13:01:52 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * util.c (move_file): Use rename instead of link and copying. * util.c (pfatal): New function. * util.h: Declare it and pfatal[1-4] macros. * various files: Use it instead of fatal where appropriate. * common.h, patch.c: Replace Arg[cv]_last with optind_last. * patch.c (main, get_some_switches): Use getopt_long. Update usage message. (nextarg): Function removed. * Rename FLEXFILENAMES to HAVE_LONG_FILE_NAMES, VOIDSIG to RETSIGTYPE. * backupfile.c, common.h: Use STDC header files if available. backupfile.h: Declare get_version. * COPYING, COPYING.LIB, INSTALL, Makefile.in, alloca.c, config.h.in, configure, configure.in, getopt.[ch], getopt1.c, rename.c: New files. * Configure, MANIFEST, Makefile.SH, config.H, config.h.SH, malloc.c: Files removed. * version.c (version): Don't print the RCS stuff, since we're not updating it regularly. * patchlevel.h: PATCHLEVEL 12u7. * Makefile.SH (dist): New target. Makedist: File removed. * inp.c (plan_a): Check whether the user can write to the file, not whether anyone can write to the file. Sat Jul 4 00:06:58 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * inp.c (plan_a): Try to check out read-only files from RCS or SCCS. * util.c (move_file): If backing up by linking fails, try copying. From cek@sdc.boeing.com (Conrad Kimball). * patch.c (get_some_switches): Eliminate -E option; always remove empty output files. * util.c (fetchname): Only undo slash removal for relative paths if -p was not given. * Makefile.sh: Add mostlyclean target. Fri Jul 3 23:48:14 1992 David J. MacKenzie (djm@nutrimat.gnu.ai.mit.edu) * util.c (fetchname): Accept whitespace between `Index:' and filename. Also plug a small memory leak for diffs against /dev/null. From eggert@twinsun.com (Paul Eggert). * common.h: Don't define TRUE and FALSE if already defined. From phk@data.fls.dk (Poul-Henning Kamp). Wed Apr 29 10:19:33 1992 David J. MacKenzie (djm@churchy.gnu.ai.mit.edu) * backupfile.c (get_version): Exit if given a bad backup type. Fri Mar 27 09:57:14 1992 Karl Berry (karl at hayley) * common.h (S_ISDIR, S_ISREG): define these. * inp.c (plan_a): use S_ISREG, not S_IFREG. * util.c (fetchname): use S_ISDIR, not S_IFDIR. Mon Mar 16 14:10:42 1992 David J. MacKenzie (djm@wookumz.gnu.ai.mit.edu) * patchlevel.h: PATCHLEVEL 12u6. Sat Mar 14 13:13:29 1992 David J. MacKenzie (djm at frob.eng.umd.edu) * Configure, config.h.SH: Check for directory header and unistd.h. * patch.c (main): If -E was given and output file is empty after patching, remove it. (get_some_switches): Recognize -E option. * patch.c (copy_till): Make garbled output an error, not a warning that doesn't change the exit status. * common.h: Protect against system declarations of malloc and realloc. * Makedist: Add backupfile.[ch]. * Configure: Look for C library where NeXT and SVR4 put it. Look in /usr/ucb after /bin and /usr/bin for utilities, and look in /usr/ccs/bin, to make SVR4 happier. Recognize m68k predefine. * util.c (fetchname): Test of stat return value was backward. From csss@scheme.cs.ubc.ca. * version.c (version): Exit with status 0, not 1. * Makefile.SH: Add backupfile.[cho]. * patch.c (main): Initialize backup file generation. (get_some_switches): Add -V option. * common.h, util,c, patch.c: Replace origext with simple_backup_suffix. * util.c (move_file): Use find_backup_file_name. Tue Dec 3 11:27:16 1991 David J. MacKenzie (djm at wookumz.gnu.ai.mit.edu) * patchlevel.h: PATCHLEVEL 12u5. * Makefile.SH: Change clean, distclean, and realclean targets a little so they agree with the GNU coding standards. Add Makefile to addedbyconf, so distclean removes it. * Configure: Recognize Domain/OS C library in /lib/libc. From mmuegel@mot.com (Michael S. Muegel). * pch.c: Fixes from Wayne Davison: Patch now accepts no-context context diffs that are specified with an assumed one line hunk (e.g. "*** 10 ****"). Fixed a bug in both context and unified diff processing that would put a zero-context hunk in the wrong place (one line too soon). Fixed a minor problem with p_max in unified diffs where it would set p_max to hunkmax unnecessarily (the only adverse effect was to not supply empty lines at eof by assuming they were truncated). Tue Jul 2 03:25:51 1991 David J. MacKenzie (djm at geech.gnu.ai.mit.edu) * Configure: Check for signal declaration in /usr/include/sys/signal.h as well as /usr/include/signal.h. * Configure, common.h, config.h.SH: Comment out the sprintf declaration and tests to determine its return value type. It conflicts with ANSI C systems' prototypes in stdio.h and the return value of sprintf is never used anyway -- it's always cast to void. Thu Jun 27 13:05:32 1991 David J. MacKenzie (djm at churchy.gnu.ai.mit.edu) * patchlevel.h: PATCHLEVEL 12u4. Thu Feb 21 15:18:14 1991 David J. MacKenzie (djm at geech.ai.mit.edu) * pch.c (another_hunk): Fix off by 1 error. From iverson@xstor.com (Tim Iverson). Sun Jan 20 20:18:58 1991 David J. MacKenzie (djm at geech.ai.mit.edu) * Makefile.SH (all): Don't make a dummy `all' file. * patchlevel.h: PATCHLEVEL 12u3. * patch.c (nextarg): New function. (get_some_switches): Use it, to prevent dereferencing a null pointer if an option that takes an arg is not given one (is last on the command line). From Paul Eggert. * pch.c (another_hunk): Fix from Wayne Davison to recognize single-line hunks in unified diffs (with a single line number instead of a range). * inp.c (rev_in_string): Don't use `s' before defining it. From Wayne Davison. Mon Jan 7 06:25:11 1991 David J. MacKenzie (djm at geech.ai.mit.edu) * patchlevel.h: PATCHLEVEL 12u2. * pch.c (intuit_diff_type): Recognize `+++' in diff headers, for unified diff format. From unidiff patch 1. Mon Dec 3 00:14:25 1990 David J. MacKenzie (djm at albert.ai.mit.edu) * patch.c (get_some_switches): Make the usage message more informative. Sun Dec 2 23:20:18 1990 David J. MacKenzie (djm at albert.ai.mit.edu) * Configure: When checking for C preprocessor, look for 'abc.*xyz' instead of 'abc.xyz', so ANSI C preprocessors work. * Apply fix for -D from ksb@mentor.cc.purdue.edu (Kevin Braunsdorf). 1990-05-01 Wayne Davison <davison@dri.com> * patch.c, pch.c: unidiff support added Wed Mar 7 23:47:25 1990 Jim Kingdon (kingdon at pogo.ai.mit.edu) * pch.c: Call malformed instead of goto malformed (just allows easier debugging). Tue Jan 23 21:27:00 1990 Jim Kingdon (kingdon at pogo.ai.mit.edu) * common.h (TMP*NAME): Make these char *, not char []. patch.c (main): Use TMPDIR (if present) to set TMP*NAME. common.h: Declare getenv. Sun Dec 17 17:29:48 1989 Jim Kingdon (kingdon at hobbes.ai.mit.edu) * patch.c (reverse_flag_specified): New variable. (get_some_switches, reinitialize_almost_everything): Use it. 1988-06-22 Larry Wall <sdcrdcf!lwall> patch12: * common.h: sprintf was declared wrong * patch.c: rindex() wasn't declared * patch.man: now avoids Bell System Logo 1988-06-03 Larry Wall <sdcrdcf!lwall> patch10: * common.h: support for shorter extensions. * inp.c: made a little smarter about sccs files * patch.c: exit code improved. better support for non-flexfilenames. * patch.man: -B switch was contributed. * pch.c: Can now find patches in shar scripts. Hunks that swapped and then swapped back could core dump. 1987-06-04 Larry Wall <sdcrdcf!lwall> * pch.c: pch_swap didn't swap p_bfake and p_efake. 1987-02-16 Larry Wall <sdcrdcf!lwall> * patch.c: Short replacement caused spurious "Out of sync" message. 1987-01-30 Larry Wall <sdcrdcf!lwall> * patch.c: Improved diagnostic on sync error. Moved do_ed_script() to pch.c. * pch.c: Improved responses to mangled patches. * pch.h: Added do_ed_script(). 1987-01-05 Larry Wall <sdcrdcf!lwall> * pch.c: New-style context diffs caused double call to free(). 1986-11-21 Larry Wall <sdcrdcf!lwall> * patch.c: Fuzz factor caused offset of installed lines. 1986-11-14 Larry Wall <sdcrdcf!lwall> * pch.c: Fixed problem where a long pattern wouldn't grow the hunk. Also restored p_input_line when backtracking so error messages are right. 1986-11-03 Larry Wall <sdcrdcf!lwall> * pch.c: New-style delete triggers spurious assertion error. 1986-10-29 Larry Wall <sdcrdcf!lwall> * patch.c: Backwards search could terminate prematurely. * pch.c: Could falsely report new-style context diff. 1986-09-17 Larry Wall <sdcrdcf!lwall> * common.h, inp.c, inp.h, patch.c, patch.man, pch.c, pch.h, util.h, version.c, version.h: Baseline for netwide release. 1986-08-01 Larry Wall <sdcrdcf!lwall> * patch.c: Fixes for machines that can't vararg. Added fuzz factor. Generalized -p. General cleanup. Changed some %d's to %ld's. Linted. * patch.man: Documented -v, -p, -F. Added notes to patch senders. 1985-08-15 van%ucbmonet@berkeley Changes for 4.3bsd diff -c. 1985-03-26 Larry Wall <sdcrdcf!lwall> * patch.c: Frozen. * patch.man: Frozen. 1985-03-12 Larry Wall <sdcrdcf!lwall> * patch.c: Now checks for normalness of file to patch. Check i_ptr and i_womp to make sure they aren't null before freeing. Also allow ed output to be suppressed. Changed pfp->_file to fileno(pfp). Added -p option from jromine@uci-750a. Added -D (#ifdef) option from joe@fluke. * patch.man: Documented -p, -D. 1984-12-06 Larry Wall <sdcrdcf!lwall> * patch.c: Made smarter about SCCS subdirectories. 1984-12-05 Larry Wall <sdcrdcf!lwall> * patch.c: Added -l switch to do loose string comparison. * patch.man: Added -l switch, and noted bistability bug. 1984-12-04 Larry Wall <sdcrdcf!lwall> Branch for sdcrdcf changes. * patch.c: Failed hunk count not reset on multiple patch file. * patch.man: Baseline version. 1984-11-29 Larry Wall <sdcrdcf!lwall> * patch.c: Linted. Identifiers uniquified. Fixed i_ptr malloc() bug. Fixed multiple calls to mktemp(). Will now work on machines that can only read 32767 chars. Added -R option for diffs with new and old swapped. Various cosmetic changes. 1984-11-09 Larry Wall <sdcrdcf!lwall> * patch.c: Initial revision Copyright (C) 1984, 1985, 1986, 1987, 1988 Larry Wall. Copyright (C) 1989, 1990, 1991, 1992, 1993, 1997, 1998, 1999, 2000, 2001, 2002 Free Software Foundation, Inc. This file is part of GNU Patch.. | http://opensource.apple.com/source/gpatch/gpatch-2/patch/ChangeLog | CC-MAIN-2015-48 | refinedweb | 12,067 | 64.37 |
#include <sys/types.h> #include <grp.h>).)
The service was temporarily unavailable; try again later. For NSS backends in glibc this indicates a temporary error talking to the backend. The error may correct itself, retrying later is suggested.
A signal was caught; see signal(7).
I/O error.
The per-process limit on the number of open file descriptors has been reached.
The system-wide limit on the total number of open files has been reached.
A necessary input file cannot be found. For NSS backends in glibc this indicates the backend is not correctly configured.
Insufficient memory to allocate group structure.
Insufficient buffer space supplied..
fgetgrent(3), getgrent_r(3), getgrgid(3), getgrnam(3), getgrouplist(3), putgrent(3), group(5) | http://manpages.courier-mta.org/htmlman3/getgrent.3.html | CC-MAIN-2017-30 | refinedweb | 120 | 54.08 |
Unary arithmetic operators
There are two unary arithmetic operators, plus (+), and minus (-). As a reminder, unary operators are operators that only take one operand.
The unary minus operator returns the operand multiplied by -1. In other words, if x = 5, -x is -5.
The unary plus operator returns the value of the operand. In other words, +5 is 5, and +x is x. Generally you won’t need to use this operator since it’s redundant. It was added largely to provide symmetry with the unary minus operator.
For best effect, both of these operators should be placed immediately preceding the operand (e.g. -x, not - x).
-x
- x
Do not confuse the unary minus operator with the binary subtraction operator, which uses the same symbol. For example, in the expression x = 5 - -3;, the first minus is the binary. We’ll talk about division below, and modulus in the next lesson.
Integer and floating point division
It is easiest to think of the division operator as having two different “modes”.. As with all floating point arithmetic operations, rounding errors may occur.
7.0 / 4 = 1.75
7 / 4.0 = 1.75
7.0 / 4.0 = 1.75
If both of the operands are integers, the division operator performs integer division instead. Integer division drops any fractions and returns an integer value. For example, 7 / 4 = 1 because the fractional portion of the result is dropped. Similarly, -7 / 4 = -1 because the fraction is dropped.
7 / 4 = 1
-7 / 4 = -1
Warning
Prior to C++11, integer division with a negative operand could round up or down. Thus -5 / 3 could result in -1 or -2. This was fixed in C++11, which always drops the fraction (rounds towards 0).
-5 / 3
Using static_cast<> to do floating point division with integers
The above raises the question -- if we have two integers, and want to divide them without losing the fraction, how would we do so?
In lesson 4.11 -- Chars, we showed how we could use the static_cast<> operator to convert a char into an integer so it would print as an integer rather than a character.
The above illustrates that if either operand is a floating point number, the result will be floating point division, not integer division.
Dividing by zero
Trying to divide by 0 (or 0.0) will generally cause your program to crash, as the results are mathematically undefined!
If you run the above program and enter 0, your program will either crash or terminate abnormally. Go ahead and try it, it won’t harm your computer.
Arithmetic assignment operators
Up to this point, when you’ve needed to add 4 to a variable, you’ve likely done the following:
This works, but it’s a little clunky, and takes two operators to execute (operator+, and operator=).
Because writing statements such as x = x + 4 is so common, C++ provides five arithmetic assignment operators for convenience. Instead of writing x = x + 4, you can write x += 4. Instead of x = x * y, you can write x *= y.
x = x + 4
x += 4
x = x * y
x *= y
Thus, the above becomes:
its good to know about arithmetic assignment operators
but wouldn't it be more readable to use the normal way ? is it safe if i make it my standard not to use the arithmetic assignment operators ?
like its easier to read x = x + 4 ; rather than x += 4;
because it leaves you with a few millisecond to think about what that += means and make sure you are not confusing it with another operator.
It may seem weird at first, but it's really easy to get used to it. You just need to think of it as "increase x by 4" not "set x to x + 4". It comes with practice
wanted to see if i can directly cast the result into a double
std::cout << static_cast<double> (7 / 4) ;
resulted in 1
#failed
It actually worked, but not as you expected.
What you did is to perform an integer division (with truncation) and then cast the integer result into a double.
thanks ...
can someone explain
will result divide or mod by zero error but
print inf
For me,
i) if 0 is stored in an int type variable, then 10/var results in floating point exception.
ii) sending 10/0 to the console results in gibberish (could be any value).
iii) if 0 is double type value (e.g. 0.0) then 10/0.0 results in inf.
It's just undefined behavior I guess. Why would I devide anything by 0 anyway :)
I think it's because 0.0 is a double, and doubles are somewhat weird. You can store NaN and PosInf and NegInf in doubles, and you can use scientific notation on them, so hence, I'm not surprised this works.
Here's what I mean:
guys please help me im new... can Add (+) or subtract (-) operators will be used to create the Multiplication and Division function while there is No Multiplication (*) and Division (/) operators must be used in the program? if so can you tell me how? thanks...
Yes, but it wouldn't make sense.
If you want to do 5 * 5 but with using '+' you can do it a following way.
Create for loop and inside it do 5 iteration of the following expression:
X=+5
the beauty of "x += 5" is that we don't have to write abherrations like "x = x + 5" :)
i actually prefer x = x + 5 as it is more detailed than x += 5
its faster for me to understand x = x + 5 even a non programmer can understand x = x + 5 means that x is now the value of x PLUS 5
but a newcomer such as myself or a non programmer will have a slight delay until we remember what += does.
Wow Nice
Nice! a good way of testing the pickedOperator instead of using nested else ifs would be a switch statement like this:
Would the result of your math be integer and disregard floating point number? I would use static_cast<> for division and modulus.
hey sheepy, how did you change the default avatar image?
1. Avoid "using namespace". It can cause naming conflicts. Prefix every occurrence of "cin" and "cout" with "std::" instead.
2. Function names should generally start with a verb that describes what the function does, e.g. "printNames()", "getPotatoes()" etc. The "intro()" function in this case should be called "printIntro()" instead.
3. The "intro()" function is actually redundant, since it's only executed once. It makes the code *longer*, not shorter.
4. Use single quotes for \n, since it's 1 character. "\n" in your program should be '\n'
Everything else looks good, well done!
this can't be used with direct initialization right?
This isn't initialization at all, so I'm not sure what you're getting at
Oh nvm XD I mixed up some concepts together
This is same as saying like this
x * y => x multiplied by y
Can we use "times" too? You didn't mention that, I thought maybe it is not a good way to use "times" instead of "multiplied"
x * y => x times y
What do you mean exactly? You aren't writing "times" in your code (aside from comments), so it shouldn't matter if you "use" either "times" or "multiplied" - they are synonyms.
I assume the authors chose "multiply" because it doesn't have other definitions besides "repeated addition", whereas "times" refers also to repetition in general (e.g. "doing it 3 times") or time (e.g. "at what times?").
"Trying to divide by 0 (or 0.0) will generally cause your program to crash, as the results are mathematically undefined!"
I just want to mention that my program crashes only when I try to divide an int by 0. And it seems to work when I divide an integer number by 0.0
or a double number by 0.
E.g. this code doesn't crash when x is 0:
Floating point numbers might support a special NaN (Not a Number) value which is used as the result of division by 0. That's implementation-defined, your code might crash or throw on another system.
Thanks for clarifying.
@nascardriver You mentioned that division by 0.0 might support a special NaN and it could crash in another system. But in chapter "4.8 — Floating point numbers" it says if we divide by 0.0 we get infinity and NaN if 0.0 is divided by 0.0.
I tried it on my system and online compiler, it throws an error on integer 0 but gives infinity on 0.0
So, is it safe to assume that it will always generate infinity or NaN, or it could throw error on another system
No. Division by 0.0 causes undefined behavior, unless your compiler supports it. Just because something works with compiler x, y, and z doesn't mean it's safe. Division by 0 should be avoided in any case. If, for whatever reason, you need to divide by 0, make sure that your compiler is using IEEE 754 floating pointer numbers. Only then is it guaranteed to be safe, in every other case, the division produces undefined behavior.
`static_assert` aborts compilation if the condition we give it is `false`.
`std::numeric_limits::is_iec559` is `true` if the compiler uses IEEE 754 for `double` values.
This way, if the compiler uses another format for `double`, the code won't compile at all. If the code compiles, the division is guaranteed to produce an inf value.
Hi.Found a minor issue with the code in the section "Using static_cast<> to do floating point division with integers". You've used double quotes instead of single quotes for newline feed. It should be '\n'.
Thanks a lot! As long as the lessons do it wrong, the readers will too.
Just a grammar point / typo: under "Using static_cast<> to do floating point division with integers" you say the above"begs the question" when I believe you mean it "raises the question." Not a big deal, but I thought I'd mention it since I am finding the course extremely helpful so far! Thanks for putting this all together!
This lesson is no longer begging for questions, thanks!
Did you mean to have the '5' in the sentence?
"Because writing statements such as x = x + 5 is so common, C++ provides 5 arithmetic assignment operators for convenience. Instead of writing x = x + 5, you can write x += 5. Instead of x = x * y, you can write x *= y."
It is confusing because "C++ provides 5 arithmetic assignment operators"
Does C++ provide five operators (if so, you have only shown two) or does C++ provide 5 with some operators?
Lesson amended. Thanks!
"Trying to divide by 0 (or 0.0) will generally cause your program to crash, as the results are mathematically undefined!"
I hate to be so pedantic, especially since I haven't studied math in 20 years but I always remembered division by zero as infinity. It was a long time ago so I had to check and Wolfram Alpha agrees.
I'd recommend that you strike the word mathematically or change it to arithmetically.
Infinity isn't a discrete number, and even if it were, would dividing by 0 produce positive or negative infinity? It's more correct to say it's undefined. See
I had an issue when I tried to compile the divide by zero example stating this:
"C:\Users\Senith\Desktop\testing\testing\main.cpp|6|error: extended initializer lists only available with -std=c++11 or -std=gnu++11|"
and on ln6 was: int x{}; which i changed to int x; and it compiled with no errors. Any idea why i got that error on codeblocks?
You didn't enable a higher standard in code::block's settings.
Brace initialization and many other features aren't available in C++03 (Which is probably what you're using). In your project settings, enable the highest possible C++ standard.
Replying to Jon, posting Sep 28, 2019 at 1:04 AM.
Jon,
As a mathematician wannabe (majored in it) I am conditioned to be that pedantic. Dividing by zero is an undefined operation. To emphasize what Alex says, infinity is not a number at all. It is a concept. It took my first Real Analysis (aka advanced calculus) to rigorously define what is meant by "approaching infinity".
Here's a simple paradox:
If 1 / 0 == infinity then infinity * 0 must == 1.
But 2 / 0 also == infinity, so infinity * 0 must == 2.
Hmm... Then 1 == 2. EGAD! (Is there a Latin acronym in EGAD, like QED? :-)
The problem was that I was treating infinity like a number.
(Plugging myself as a Math tutor. :-)
Hi, Alex! If I can, I suggest you to order your operator table in their operator precedence in their related lesson, like your binary arithmetic operators table in this lesson (that is not ordered in their operator precedence although we know that we learn addition and substraction first then multiplication and division).
is the following func okay for last question
bool isEven(int x)
{
return (!(x % 2)) ;
}
You can do it and it's always going to work. But `(x % 2) == 0` is easier to read, since you don't have to thing about boolean conversions.
Hi Nascardriver & Alex
Quiz 2 - Ive tried to keep it as simple and small as possible. Any suggestions please ?
`isEven` was supposed to have the signature
You're not using a parameter and your return value doesn't fit the function's name. Try using Alex' `isEven` function and use a conditional operator in `main`. That way you can keep it small.
Thank you
Everything seems good but you might want to crisp short the output statements..
So for Question #1, I got the sequence correct, but don't understand how to apply the % operator apparently. Since 20 % 3 is 6.6666, I should have a result of 6 for the remainder, not 2. I couldn't find the answer to this in the comments below so I must be missing something pretty obvious that no one else got stuck on. Please help.
20 / 3 = 6 remainder 2
because
6 * 3 = 18
and
18 + 2 = 20
Thanks. I was looking at it completely wrong. Now it seems simple. Thanks again.
bool isOdd(int n) {
return (n % 2);
}
interesting that dividing by zero causes a crash but 0%2 works I also had to test 1%2 thinking it would round down, got stuck thinking on that for a bit. never would have come up with "return x % 2 == 0;"
You seem to have mixed up few things. Dividing BY zero causes a crash i.e. 7/0 = ????, but dividing THE zero does not i.e. 0/7 = 0 (if you have 0 pieces of a cake and you want to give each of the seven people a piece of a cake, each person recieves 0 pieces of a cake), same with the modules (%), you can module the zero i.e. 0%2 = 0 (0 / 2 = 0 remainder 0, because 0 * 2 = 0), but you cannot module BY zero i.e. 2%0 (undefined behaviour)
I have a question regarding the use of count as a variable name in this code. My editor which is code::blocks shows count as green text and I tried to search for what it meant and learnt that it was some kind of input iterator. Is it ok to use it as variable name? It looks confusing to me since my count variable is colored green.
Hi!
count is the name of a function in the @std namespace. It shouldn't be highlighted unless you include @<algorithm> and are "using namespace std;", but I guess Code::Blocks is using a static list of words to highlight.
You can use it as a variable name.
can you tell me plz why my code for evan or odds leave me with an error
* Line 16: Initialize your variables with brace initializers. You used copy initialization.
* Line 14: Initialize your variables with brace initializers.
* @main, @isevan: Missing return statement.
* Don't use @std::endl unless you have a reason to.
@isevan is declared to return an int, but it doesn't return anything.
thank you so much
that was realy helpful
but can you give me a reference for when to use brace initializers
Always
Great! Thanks! So that's how it works in code::blocks huh? I was really confused as to why it is highlighted even if I didn't include @<algorithm> on my code.
Hi, this is my answer for quiz 2.
I was wondering if it is okay or there is a problem? thanks.
Hi Ryan!
* Line 13: Initialize your variables with brace initializers.
* "isEven" sounds like the name of a function that returns a value.
* Line 6, 8, 14: Unnecessary space at the end of the line.
Hello!
Regarding part 2 of the quiz, by trying to keep every function as simple as possible, I think I may have overthought this. Could you explain the pros and cons to this method?
* Line 6: Don't compare booleans to false/true.
* Line 21: Initialize to 0.
* Print a line feed when your program is done.
The structure of your program is fine. If you can split a function into 2 without making your program harder to understand, it's usually a good decision.
To obey the comment "Print a line feed when your program is done" would it be a better option to omit printAnswer() and add its argument to main() based on the boolean returned from isEven()?
Alright, I wrote a program just for fun, to keep writing code and keep teaching the brain how to write c++ that does some mixed division with Integer and Float inputs and outputs either Integer or Float results.
User can choose whether to use static_cast or not to convert integer inputs to float.
I did it because while reading this page, I felt the need to try several division combinations mixing ints with floats, so here it goes.
The program performs the following division combos:
• Int = Int / Int
• Int = Float / Int
• Int = Float / Float
• Float = Int / Int
• Float = Float / Int
• Float = Float / Float
Note that choosing to convert Ints to Floats makes a difference when calculating Float = Int / Int. You'll know why if you read this class paying attention to what you were reading.
For anyone lurking through the comments that wants to test some division combinations, here is the program:
I'm loving programming. Why didn't I start earlier??
OK, let's jump to the quiz now. =D
Wouldn't it be better to use a bool for your useStatic variable that you're passing to these functions? I mean it only has 2 options right? true and false?
my quiz#2
Hi Dimitri!
* Line 28, 29: Initialize your variables with uniform initialization. You used copy initialization.
* Line 7: Initialize your variables with uniform initialization.
* Line 14: You don't need to modify @x
Thanks nascardriver! Got it! Except * Line 7: Initialize your variables with uniform initialization
Why we need initialization there?
Assuming you're using C++11 or later, it doesn't make a difference here.
Initializing all variables will prevent you from forgetting an initialization where it's necessary.
Oh, OK! Better to assign 1 by default or it doesn't matter?
Whatever is the 0 value for your type.
int 0
float 0.0f
double 0.0
etc
Many thanks!
Hi Alex,
I have a question.
Shouldn't this got to the new line every 2 lines? 2 is evenly divisible by 20 and returns the remainder 0. Why does it still print every 20 lines?
You've got some pretty fancy math. 2/20 = 0,1
*facepalm* I just realized. its late at night and im not thinking straight... sorry xd
2/20 = 0,1
0,1 * 20 = 2
is this / symbol means Division assignment ? @nascardriver
/ division
* multiplication
+ addition
- subtraction
In the isEven() function i can up with:
I read the comments and did not find anyone who did it this way. Is this ok as well or is their a problem with it?
Hi Benjamin!
Logically, your function is equivalent to the other submissions.
But your function is less efficient, because your computer has to perform the extra step of adding 1 to @num.
Thank you! Wow i made two errors in my post. I meant "came* up with" and "or is there*". It was late at night, lol. Again thank you!
While your function mostly works, it will overflow if you pass in the largest integer.
I had the same though, but I came to the conclusion that the overflow doesn't cause any problems, so I omitted it in my reply.
Assuming 32bit integers, 2147483647 (odd) will overflow to -2147483648 (even), same for all integer widths.
The only thing I can think of is that handling the overflow could take up a neglectable amount of CPU-time.
// ConsoleApplication4.cpp : This file contains the 'main' function. Program execution begins and ends there.
//
#include "pch.h"
#include <iostream>
using namespace std;
int getNumber()
{
int x;
cout << "Please enter any number to see if it is odd or even:";
cin >> x;
return x;
}
bool calculate(int x)
{
if (x % 2 == 0)
return true;
else
return false;
}
int main()
{
int x = getNumber();
bool y = calculate(x);
if (y == true)
cout << "its even number.";
else
cout << "Its odd number.";
return 0;
}
My program works fine but i am using different approach.
Can someone please clarify if my approach is fine?
Hi Qais!
* Initialize your variables with uniform initialization.
* In @calculate, you're returning the same value as the == comparison, so you might as well return it directly.
* Don't compare booleans to false/true, they're booleans already.
* Variable names should be descriptive.
Hello there, even my code is working I don't know where should I improve it.So i hope i get some reply.
ps: about even or not even thing in printResult(),i dont know how it works(i guess it evaluates the first parametar if true, and the one after ":" if false), im new to programing, i just saw nascardriver use it in comments some tutorials before and i kinda steal that :)
> I don't know where should I improve it
Looks fine to me :-)
> about even or not even thing in printResult [...]
That's the conditional operator. It's covered in lesson 3.4. Your description of how it works is correct.
Thanks for the replay nascardriver.Its very rare experienced programmer taking time to comment a begginer's simple programs.Its a kinda dying breed:)
Looks good, but some suggestions about general practices(more like being nit-picky :))
>> Don't(more like 'No need to') initialize the variables if you are going to override them without using the initialized values.
>> Input argument can be renamed to make it a bit more clear, something like 'is_num_even'
Hi Alex
Firstly, huge thanks for this tutorial series. You have a great teaching style, and I really like how you take the time to talk about good programming practices and pitfalls.
Regarding the following code:
I was playing around with debug->disassembly in VS2017 and noticed that this actually compiles to more instructions in assembly than the if/else version, despite it seeming shorter/simpler in C++.
I didn't check the cycles per instruction, but either way, I thought this tutorial might be a good opportunity for you to bring up the topic of premature optimisation versus code readability (especially as I understand modern compilers can optimise better than most humans, and due to modern CPU features like speculative execution).
I'm just a novice/hobbyist programmer, so I used to be a bit OCD about trying to write efficient code (or at least code that I thought was efficient), including avoiding function calls due to their overhead, but am trying now to re-train myself to prioritise code readability. Personally I think the if/else version is just a tiny bit easier to understand intuitively at a glance.
Thanks again for your work on these tutorials!
Hi David!
Assembly has a very limited instruction set, whereas C++ is huge. You'll get a lot more instructions out of your code, no matter what you write.
Low-level optimization should be left to the compiler in most cases. The programmer should prioritize optimization of algorithms, data usage, etc., because those can't be optimized by the compiler and they have a much bigger impact than a couple of unnecessary instructions.
Yep! I got a little bit of exposure to ARM assembly a while back due to trying to mod some GBA/NDS games. It was fun and I learned a lot, but man is it good to be back in a higher level language and not have to keep track of things like the stack pointer for local variables!!
Agree with your points! I didn't really appreciate that when I started programming though, which is why I thought it might be a good topic for Alex to address in these tutorials as part of good programming practices. Before I was exposed to assembly, I used to think that less C++ code meant less instructions to execute - now I know that is not necessarily the case (for the isEven() example, both versions actually compile into the same assembly for Release build - just 3 instructions)! Now I also know that less instructions does not necessarily equal better performance (or matter nearly as much as algorithms and data usage, as you say).
Hence when it comes to choosing between if/else blocks versus code that looks shorter in C++, I now try to prioritise readability. It might be different for seasoned programmers like yourself, but I have to mentally convert that single line return statement in my head into an if/else statement when reading it anyway, so the if/else version just reads a tiny bit easier for me. The following code from Chapter 3.5 puts this in starker contrast. Also harder to debug I think? Breaking it out into if/else statements made it easier to follow in the debugger for me.
> Also harder to debug
Generally, if you use more lines, it's easier to debug. Using more lines doesn't necessarily change the compiled code.
@approximatelyEqual can be expanded into multiple lines without affecting the outcome. The one-liners might be easier written if you've been coding for a while. But once something goes wrong you'll be back to if/else and temporary variables.
You can edit your posts. The syntax highlighter will work after refreshing the page.
> You can edit your posts. The syntax highlighter will work after refreshing the page.
Ha thanks! That's exactly why I deleted/re-posted. Hope you didn't have to retype your message because of that!
Great conversational points, and the reminder not to prematurely optimize your code is timely as I'm rewriting a lesson where a mention of that would be perfect.
I agree the approximatelyEqual() function is hard to read/follow and even harder to debug -- but I opted to prefer the terse version under the assumption that most readers would just use the function, not try to dissect it. :) Maybe that's the wrong call to make in a learning tutorial series...
> I'm rewriting a lesson where a mention of that would be perfect
Awesome! Is there a way to get email alerts when one of the lessons I've already worked through gets updated (and maybe also see what changed)? Couldn't find a subscribe button anywhere.
> I opted to prefer the terse version under the assumption that most readers would just use the function
Haha I think maybe you also chose it because you knew if you used the long version, you'd have to put up with lots of clever people telling you "did you know you could do that all in one line?" :p I could follow the logic in my head, but needed the debugger to figure out what was happening to the adjusted epsilon for the close-to-zero case. (Who knew multiplying a number by something <1 made that number smaller haha - my maths is sooo rusty!)
On the topic of debugging, I remember a few years ago when I first worked through these lessons (got up to Chapter 14) that the debugger was barely mentioned again after introducing it in Chapter 1 (from a quick site search it seems this is still the case). So back then I never really used the debugger - mostly due to lack of exposure and because I was used to just inserting "debug code" (eg, tracking how variables changed by printing to console), which is a habit I developed from modding games using LUA scripting. But after trying to make sense of ARM disassembly... my God how I appreciate a good debugger now!
As you update lessons for other issues, perhaps it would be helpful to start sprinkling in more exercises using the debugger (just to get new programmers more used to it). For example, in this lesson, the int pow() "exponentiation by squaring" algorithm causes an infinite loop if a negative exponent is entered. You could mention this and encourage students to use the debugger to figure out why (apparently depends on the compiler, but VS17 uses arithmetic shift right for signed int so the loop gets stuck at -1).
(I know you encourage robust code in later chapters, but I used to be super impatient and hate thinking about edge cases or investing the time/lines required. Until I started writing code to parse documents, and kept creating documents that broke my own code or which accidentally created infinite loops. Sometimes ya just gotta learn the hard way...)
There currently isn't an email alert system for lesson updates. It's been a long-standing request that I haven't yet had time to figure out how to facilitate.
> Haha I think maybe you also chose it because you knew if you used the long version, you'd have to put up with lots of clever people telling you "did you know you could do that all in one line?" :p
I didn't think of that, but it's absolutely true. :)
Adding "debug this" quiz questions is absolutely on the agenda as I rewrite. I share your feeling that learning to use the debugger effectively is a crucial skill, and this tutorial series doesn't do a good enough job reinforcing this point.
I appreciate all the feedback! If you have any other thoughts, please feel free to share them.
Hmm, the following WordPress plugins look like they might help? I don't really know much about WordPress or web development though...
Alternatively, there is always the super low-tech but low-cost method: simply create a page to act as a manual changelog. Then from now on, whenever you update a lesson (other than typo fixes, etc) all you need to do is remember to add a one line summary to the changelog page with the date and lesson number.
> this tutorial series doesn't do a good enough job reinforcing this point
Think you're being a little bit hard on your own work there :)
There are a lot of important skills for larger projects or professional development (eg, version control, unit testing, different workflows, automating, etc) that this tutorial series couldn't possibly cover in a meaningful way without overwhelming readers and distracting from its core lessons. (I just spent a WEEK going down a Git rabbit-hole because it's integrated in Visual Studio, and that was just to learn the basics!)
What this tutorial series does do though (ie, cover fundamental C++ programming concepts step-by-step, while teaching good practices and how to avoid common pitfalls along the way), it does really REALLY well. The first time I came across this tutorial series, I had never encountered OOP, overloading, virtual functions and templates before. But this tutorial series made it a real joy to learn!
So thank YOU for all your time and effort!
Since I wanted to write a comment about the int pow() function anyway and I wholeheartedly agree with David about the absolute brilliance (excellence? I don't know what word to use, but it should mean "wayyy better than anything else I have seen") of this website, I will add my 2c here to this thread.
First, let me say that this website has been a Godsend. I am a wannabe musician, and I use MuseScore, an open-source music notation editor written in C++ and using the Qt framework. I wanted to learn to code in C++ to contribute. I have learnt a little programming (Delphi, which is a Pascal framework, and the graphical language Scratch) in high school as an additional subject after school, but I dropped out because I wanted to spend more time with my music. Now I have to brush off my extremely small knowledge of programming and learn a new language.
I just have to say, Alex, you're an awesome teacher. You know in which order to treat subjects so that it makes logical sense, you have a gift for explaining and you know the value of lots of simple, easy-to-understand examples. And that is not all, you also provide very insightful comments into why certain programming practices are bad and why you should avoid them, even if the language allows it. Textbooks never even mention programming best practices. That is the difference it makes when the teacher is working in the industry every day. You know what works and what not. Professors don't necessarily know.
Now, to my comment about the int pow() function:
changing the parameters to
will sort out the problems with the undefined behaviour. You say yourself in the tutorial about bitwise operators that unsigned integers should always be used when doing bitwise operations. It will not violate your rule that you shouldn't mix signed and unsigned integers, since the exponent is never used in an expression with a signed variable.
Hey Louis! Glad to see another huge fan of this website.
Re using an unsigned int parameter for int pow(), just be careful about C++ implicit conversion from int to unsigned int. I know because I previously tried to do exactly what you did, and was surprised that I could pass -1 to exp without any error/warning from the compiler. Not what you want because -1 becomes a very very large unsigned int!
I haven't tried this yet but apparently you can use templates (which is introduced in Chapter 13) to prevent such implicit conversions (see). If it works, would save on having to do error handling for exp < 0.
Well, yeah, of course C++ would implicitly convert the int -1 to an unsigned int, but I figured it is better to let the loop iterate for a very large finite amount of iterations than to have an infinite loop.
Using unsigned int has the added benefit of telling the programmer that the function expects a positive number. The comment will never be seen if you put the function into a library and write a header file for it.
Maybe the best solution would be to figure out what is the largest exponent you can give the function without overflowing the int return value and test to see if your unsigned int is smaller than that. You should just use sizeof(int) to figure out if you are dealing with 16-bit or 32-bit ints.
I didn't read the stackoverflow thread yet when I wrote this. I'll have a look, but I did not get to templates yet. I restarted reading the tutorials about at the new year, because I stopped reading for quite some time and had to recap. I never read further that Chapter 9's comprehensive quiz.
> Well, yeah, of course C++ would implicitly convert the int -1 to an unsigned int...
It wasn't obvious to me!! But yeah, infinite loops suck. One good thing about them though is it alerts you to a problem, whereas an incorrect result might go undetected - probably doesn't matter for the kind of things we're programming, but could be disastrous in something like aircraft software!
Templates (and virtual functions) are pretty fun when you get to them, once you wrap your head around the syntax. It's basically like a souped up version of overloading, but the compiler does some of the hard work for you.
Thanks for the kind words!
Using an unsigned parameter violates the "don't use unsigned integers except for bit-level operations", and also the "don't use unsigned values just to avoid negative numbers" rule (not sure if I've made that one explicit, I'll check and make sure I have).
The correct solution here is to use a signed parameter, but then use an assert or other precondition (in C++20, expects) to validate that the value actually is negative.
As David correctly notes, you'll still get conversions if you pass in a signed value as the parameter.
@Alex, while I agree with your rules, I disagree that using an unsigned int is breaking them. Here is the function so you don't have to scroll up to see it:
and here is my proposed change with clarifying comments:
You would still need to assert the exp argument is not negative, but even if you do that, an unsigned int is better because you are using it for bit-level operations. Else you might anyway get undefined behaviour, even if you ensure that exp >= 0.
We normally would choose unsigned for bit-level manipulation because we don't want surprises from the sign-bit. But:
1) Such surprises can't happen in this case -- if exp must be positive (which we can enforce via assert or templatization), then the sign bit should always be 0, and we only do right shifts, so the sign bit never has a chance to flip.
2) We're treating this value as a number, not a sequence of bits. The bit manipulation here is an optimization, not a necessity (we could have used arithmetic instead, it just would have been slower).
Given the above, I'd still favor signed because then we can detect/handle the case when the caller passes in a negative value (even though the sample code doesn't, for simplicity), whereas with unsigned we can't (and we get a silent failure -- the worst kind).
Ok, so do I understand correctly that bitshifts on signed positive integers (sign bit = 0) is defined? It is just the bitshifts with negative integers which exhibit undefined behaviour. (Some compilers pad with 0s and other compilers pad with 1s when you shift a signed int with a 1 in the sign bit to the right?)
Also, is bit operations with positive signed integers defined by the C++ language standard, or is it just a case of everybody agrees on how they should work, even if it isn't defined?
Yes, I believe bit operations and binary representation for positive integers (and zero) are well defined by the C++ specification. It's only negative numbers that have issues, due to the fact that they can be encoded in different ways.
You are correct. I looked it up here: It is in Chapter 5.8.
Bitshifts are defined by the standard for unsigned ints and non-negative signed ints as long as the result can be represented in the left operand's data type. For negative ints, the result is implementation-defined.
So I gave myself a challenge to write a geometric sequence and then display the last number in the sequence and state whether it's even or odd.
1. Is there any way I can improve this
2. I'm beginning to understand while loops. How do I do this:
if the user inputs a decimal number for "Enter a power: ". It'll output an speech and loop them back to question "Enter a power: " again until they give an integer without decimal point.
Hi Rai!
1.
* Initialize your variables with uniform initialization
* Line 32: You can use single quotation marks to make your numbers more readable (1'000'000)
* Line 32: This loop could be infinite if @power <= 1
* Use double numbers when calculating with doubles (1.0 instead of 1 etc.)
2.
Thanks. One thing. You told me about Initialize your variables with uniform initialization before but I don't understand what this means - how to use {} to initialize variables. Could you show an example.
This is part of lesson 2.1. Unfortunately, Alex doesn't use uniform initialization himself in most of the lessons.
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/arithmetic-operators/ | CC-MAIN-2021-17 | refinedweb | 6,875 | 71.85 |
How We Generate Our New Documentation with Sanity & Nuxt.jsJune 20, 2019
In a rush? Skip to technical tutorial
We’ve spent the last few months building the new version of our shopping cart.
When we started working on it, we knew this would also mean changes in other areas of our product.
Documentation was one of them.
It meant a few specific and much-needed upgrades:
- Improving navigation between docs versions
- Rethinking content arborescence
- Automating documentation generation as much as possible
We also wanted to stay true to what we preach; using the JAMstack! That meant choosing the right JavaScript tools to generate our documentation.
We ended up picking Nuxt for static documentation generation, Sanity.io to manage content, and Netlify for automated deployment. I’ll explain why later on.
In the end, it was a great opportunity to significantly enhance our docs UX for both users and our development team.
In this post, I want to show you how we did it and how you can replicate it.
Disclaimer: Keep in mind this is an ongoing beta of the v3.0 for development and testing purpose only.
Our documentation generation (a bit of context)
Our old doc was built with custom Node.js and needed server-side rendering on each new page load. We often forgot to document new fixes and simple features. There was also the unfortunate errors and typos from time to time. In short, documentation could often become a pain in the a**. I’m sure some of you can relate to this.
So, for our new documentation, we fixed ourselves a few goals. It had to:
- Be deployed as a fully static site
- Be hosted on a fast CDN
- Use Vue.js on the frontend (as it’s our team’s go-to framework)
- Make editing content easier for the whole team—not only devs!
- Ensure all our Javascript API’s methods and theme’s overridable components get properly documented
This combination of criteria added up to an obvious choice of stack: a Vue-powered static site generator attached to a headless CMS.
As automation fans, we didn’t want to manage the documentation of our theme’s components and the Javascript API independently. The documentation data would need to be generated at build time from the code and JSDoc comments.
This would require a fair amount of additional work but, in the long run, ensure documentation that’s always up-to-date and validated at the same time we review features’ pull requests.
This also added the constraint of choosing a headless CMS with a powerful API to update content.
Why Sanity as a headless CMS?
There are many, many headless CMSs. I suggest doing a thorough research and measure the pros and the cons before choosing one. In our case, there are a few criteria that made the balance lean in favor of Sanity.io:
- Great out-of-the-box editing experience
- Fully hosted—no need to manage this in our infrastructure
- Open source and customizable
- Excellent API for both querying & writing
- Webhooks allowing us to rebuild the doc after content edits
Starting a Sanity project is straightforward. In a newly created repo, run
sanity init.
Then, define a few document types and, if your heart feels like it, create some custom components to tailor editing to your specific needs. Even if you embark on a customization spree, this won’t prevent you to from deploying your CMS on Sanity—that’s where it truly shines, because high customizability is quite a rare trait in hosted solutions.
Sanity’s API was also a breath of fresh air.
GROQ, their querying language, is a welcome addition to the ecosystem. Think GraphQL, without always being required to be explicit about all the fields you want in a query (or being able to query polymorphic data without feeling like the Labours of Hercules).
Furthermore, modifications can be scoped in a transaction which allows us to batch updates to multiple documents from our theme and SDK build process. Combine this with webhooks, and it ensures we only trigger documentation deploys once for many changes from our theme and SDK repositories.
Why Nuxt as static site generator?
Just when you thought there was a lot of headless CMSs to choose from, you stumble upon the dozens of existing SSGs.
The main requirements for our static site generator were:
- Deploys only static files
- Uses Vue.js
- Fetches data from an external API
The use of Vue.js may seem arbitrary here, and you would be right to ask: “Why not react or something else?” In all fairness, it was initially a bit arbitrary as it amounts to the team’s personal preferences, but as we build more and more projects, we also value consistency across all of them.
We’ve been using Vue.js for a long time in the dashboard, and we went all in for our default v3.0 theme. Eventually, that consistency will allow us not only faster onboarding of team members but also code reuse. Let say we would want to build a live preview of theme customization; sharing the same stack between the docs and the theme makes that easier.
That being said, it left us with three SSG contenders: VuePress, Nuxt & Gridsome.
→ VuePress. Having built-in support for inline Vue components in content was really tempting, but without the option to tap in an external data source instead of local markdown files, it was a no go.
→ Nuxt.js. This one is a power-horse of SPA development with Vue. It offers a great structure and just the right extension points to be truly flexible. The
nuxt generate command allows to deploy a fully static and pre-rendered version of the website. However, building a content-driven website instead of a dynamic web app requires additional work.
→ Gridsome. Being directly inspired by Gatsby, it has first class support for external data sources, and it was created to build static websites from this data. Having experimented with it already and because it checked all the boxes, Gridsome first seemed like the chosen one.
However, we quickly stumbled upon some pain points:
- The automatic generation of the GraphQL schema has some issues and often requires to specify the type of fields manually.
- We couldn’t structure our data as we wanted. We had to store
function,
classand
enum, which all needed to be associated with documentation pages in a polymorphic way.
- Let’s be honest, having to deal with GraphQL schema simply slows down iteration cycles.
Overall, Gridsome lacked a bit of maturity when it comes to a complex schema. As for GraphQL, it excels in scenarios where you have multiple data consumers interested in different queries. In our case, this only added unnecessary steps.
In the end, we chose to use Nuxt and to develop the missing pieces manually.
With Gridsome 0.7, you can specify GraphQL schema types and comes with a Sanity data source plugin.
All that’s missing at this point is something to deploy our documentation. For us, there was no debate. Netlify is a no-brainer here, so it became the last missing piece in our stack.
Our new documentation generation, Javascript style
Before diving into technical nitty-gritty stuff, let’s have a look at that stack all wired together. JAMstack projects may sometime feel overwhelming because of the number of tools used, but it allows you to pick them for their specific value.
Although some individual parts are relatively complex, putting them all together was quite easy.
Our documentation is composed of traditional content pages written by our dev or marketing team and technical content extracted from two repositories:
- The Javascript SDK’s doc (similar to our handcrafted V2’s Javascript API)
- The Vue.js theme components’ doc (new to the v3.0 for component overriding)
Content pages get edited directly in Sanity CMS. For the technical content, it gets generated automatically using Typescript’s compiler API and pushed to Sanity’s API in a script on our CI when each repo is updated. That script uses Sanity’s transaction feature to update all modifications at once.
Changes from Sanity generate a webhook that we use to trigger a build on Netlify. Handling webhooks in a JAMstack setup often requires to use some kind of Lambda function as a logic layer between the source’s webhook and the target’s API.
However, here we can leverage clever foresight from Netlify. Their incoming webhook endpoint is a simple private URL that accepts any POST request to trigger a build—meaning Sanity’s webhook can be configured directly to it!
Once the build is started, it runs
nuxt generate. Our custom code fetches data from Sanity, and the
dist folder get deployed on a blazing fast CDN.
In a nutshell, Sanity is used as a store of all that’s needed in our docs. The documentation itself is always up-to-date with anything that gets released in production. Documentation coming from sources can be validated as part of a regular code review process.
Generating documentation from sources
All our v3.0 projects being in Typescript, it allows us to exploit its compiler API to extract documentation from source code. This happens in three phases:
- The compiler automatically generates type definitions (a
.d.tsfile) of the project excluding every type marked as internal (using
@internaltags in JSDoc comments). This is accomplished simply by setting
declarationand
stripInternalto
truein our
tsconfig.json
- Our custom script is executed; it reads the
.d.tsfile, parse it with the compiler API and passes the result to a library called readts which transforms the compiler’s output into a more manageable data structure.
- Finally, our script update Sanity’s database using their npm module.
Let’s take this function as an example:
/** * Initialize the SDK for use in a Web browser * @param apiKey Snipcart Public API Key * @param doc Custom document node instead of `window.document` * @param options Initialization options */ export async function initializeBrowserContext( apiKey?: string, doc?: HTMLDocument, options?: SnipcartBrowserContextOptions) : Promise<SDK> { // some internal code }
It gets exported in our SDK’s type declaration almost as is, minus the method’s body. The following code allows us to convert read it in a structured way:
const parser = new readts.Parser(); parser.program = ts.createProgram(["snipcart-sdk.d.ts"]); parser.checker = parser.program.getTypeChecker(); parser.moduleList = []; parser.symbolTbl = {}; // the compiler will load any required typescript libs // but we only need to document types from our own project const source = parser.program .getSourceFiles() .filter(s => s.fileName === "snipcart-sdk.d.ts")[0]; // we instruct `readts` to parse all // `declare module 'snipcart-sdk/*' {...}` sections for (const statement of source.statements) { parser.parseSource(statement); } const result = parser.moduleList.map((module) => { /* some more transformations */ });
Once uploaded to Sanity’s dataset, the previous function declaration ends up looking like this:
{ "_id": "sdk-contexts-browser-initializeBrowserContext", "_type": "sdk-item", "kind": "function", "name": "initializeBrowserContext", "signatures": [ { "doc": "Initialize the SDK for use in a Web browser", "params": [ { "doc": "Snipcart Public API Key", "name": "apiKey", "optional": true, "type": { "name": "string" } }, /* other params */ ], "returnType": { "id": "sdk-core-SDK", "name": "SDK" }, } ] }
Using readts may make it look like a walk in the park, but using Typescript’s compiler API isn’t for the faint of heart. You’ll often have to dive into the compiler’s Symbols (not to be confused with those from the language), the AST nodes and their
SyntaxKind enum values.
The data now being ready to be consumed by our SSG, let’s see how we wired Nuxt!
Making Nuxt fully static and content driven
Through its
nuxt generate command, Nuxt.js can generate a fully static website at build time.
However, contrary to Gatsby or Gridsome, which cache the content nodes, fetching of data is still performed even in static mode with Nuxt. It happens because the
asyncData method is always called, and it’s up to the developer to provide distinct logic if wanted. There are already some talks about fixing this in the Nuxt community. But we needed it NOW 🙂
We approached that issue with a Nuxt module that has different behaviors when called from the client (the static website) or the server (when
nuxt generate is called). That module gets declared in our
nuxt.config.js:
modules: [ "~/modules/data-source", ],
Then, it simply registers a server and client plugin:
export default async function DataSourceModule (moduleOptions) { this.addPlugin({ src: path.join(__dirname, 'data-source.client.js'), mode: 'client', }); this.addPlugin({ src: path.join(__dirname, 'data-source.server.js'), mode: 'server', }); }
They both expose the same method on every page's component to load data. What differs is that on the server, that method directly call Nuxt API to retrieve content:
// data-source.server.js import { loadPageByUrl } from '~/sanity.js'; export default (ctx, inject) => { inject('loadPageData', async () => { return await loadPageByUrl(ctx.route.path); }); }
On the client, the plugin will instead load a static JSON file:
// 'data-source.client.js' import axios from 'axios'; export default (ctx, inject) => { inject('loadPageData', async () => { const path = '/_nuxt/data' + ctx.route.path + '.json'; return (await axios(path)).data; }); }
Now, in our page’s component, we can blindly call
loadPageData and the module and plugins will guaranty that the proper version is used:
<!-- page.vue --> <template> <Markdown : </template> <script> import Markdown from '~/components/Markdown'; export default { props: ['page'], components: { Markdown, }, async asyncData() { return await app.$loadPageData(); } } </script>
Here’s a sneak peek of how the function I’ve talked earlier look like in the doc:
The final result
You can visit the docs here.
Try out Snipcart v3.0 right now! It's free to sign up.
Closing thoughts
Getting started on Sanity was a breeze, and while we didn’t push it far yet, everything looks purposefully built to be extended smoothly. I was really impressed by their API, querying with GROQ, and how plugins can be crafted for the CMS.
As for Nuxt, although it required more work for our use case, it still provides a strong base to build any Vue.js project with.
With all that crunchy groundwork done, we’re ready to tackle more cosmetic improvements to the documentation, like better discoverability and organization of our SDK methods.
If you've enjoyed this post, please take a second to share it on Twitter. Got comments, questions? Hit the section below! | https://snipcart.com/blog/generate-documentation-javascript | CC-MAIN-2020-05 | refinedweb | 2,389 | 55.34 |
The Insider's Guide to Ruby on Rails Interviewing
The Technology. Tim O’Reilly (founder of O’Reilly Media) refers to Rails as breakthrough technology and Gartner Research noted in a recent study that many high-profile companies are using Rails to build agile, scalable web applications.
The rate at which Rails has gained popularity is noteworthy, with estimates of over 200,000 web sites currently built with the technology. Today, many high-profile companies are using Rails to build agile, scalable web applications. Examples include Twitter, GitHub, Yammer, Scribd, Groupon, Shopify, and Basecamp, to name but a few.
Rails is a framework for web application development, written in Ruby, that also features its own routing system independent of the web server. The goal of Rails is to significantly simplify the development of web applications, requiring less code and time than would otherwise be required to accomplish the same tasks.
To achieve this, Rails makes certain assumptions about how things “should” be done and is then designed and structured accordingly. While imbibing this “Rails view of the world” can sometimes be a bit of a culture shock for developers strongly grounded in other languages and frameworks, over time most come to greatly appreciate the Rails approach and the productivity that it engenders.
The Challenge
From a recruiting standpoint, the explosive growth in Rails popularity is both the good and the bad news. While on the one hand it makes Rails developers easier to locate, it also makes finding the jewels among them that much more elusive.
Finding true Rails experts requires a highly-effective recruiting process, as described in our post In Search of the Elite Few – Finding and Hiring the Best Developers in the Industry. Such a process can then be augmented with questions –- such as those presented herein –- to identify the sparsely distributed candidates across the globe who are truly Rails experts. The manifold benefits of finding them will likely be realized in the productivity and results that they will be able to achieve.
Yeah, I know Rails…
The extent to which Rails streamlines and simplifies the development of web applications can mislead neophyte developers into underestimating its capabilities and oversimplifying its conceptual underpinnings. While Rails is relatively easy to use, it is anything but simplistic.
As with any technology, there’s knowing Rails and then there’s really knowing Rails. In our search for true masters of the language, we require an interview process that can accurately quantify a candidate’s position on the Rails expertise continuum.
Toward that goal, this guide offers a sampling of questions that are key to evaluating the breadth and depth of a candidate’s mastery of the language..
Frequent Rail Traveler?
It is not uncommon to encounter Ruby on Rails developers whose grasp of the fundamentals and key paradigms of Rails are either week or somewhat confused.
Questions that can help assess a developer’s grasp of the Rails foundation, including some of its more subtle nuances, are therefore an important component of the interview process.
Here are some examples:
Q: Explain the processing flow of a Rails request.
At the highest level, Rails requests are served through an application server, which is responsible for directing an incoming request into a Ruby process. Popular application servers that use the Rack web request interface include Phusion Passenger, Mongrel, Thin, and Unicorn.
Rack parses all request parameters (as well as posted data, CGI parameters, and other potentially useful bits of information) and transforms them into a big Hash (Ruby’s record / dictionary type). This is sometimes called the
env hash, as it contains data about the environment of the web request.
In addition to this request parsing, Rack is configurable, allowing for certain requests to be directed to specific Rack apps. If you want, for example, to redirect requests for anything in your admin section to another Rails app, you can do so at the Rack level. You can also declare middleware here, in addition to being able to declare it in Rails.
Those requests that are not directed elsewhere (by you in Rack) are directed to your Rails app where it begins interacting with the Rails ActionDispatcher, which examines the route. Rails apps can be spit into separate Rails Engines, and the router sends the request off to the right engine. (You can also redirect requests to other Rack compatible web frameworks here.)
Once in your app, Rails middleware – or your custom middleware – is executed. The router determines what Rails controller / action method should be called to process the request, instantiates the proper controller object, executes all the filters involved, and finally calls the appropriate the action method.
Further detail is available in the Rails documentation.
Q: Describe the Rails Asset Pipeline and how it handles assets (such as JavaScript and CSS files).
Rails 3.1 introduced the Asset Pipeline, a way to organize and process front-end assets. It provides an import/require mechanism (to load dependent files) that provides many features. While the Asset Pipeline does have its rough edges, it does solve and provide many of the modern best practices in serving these files under HTTP 1.1. Most significantly, the Asset Pipeline will:
- Collect, concatenate, and minify all assets of each type into one big file
- Version files using fingerprinting to bust old versions of the file in browser caches
The Asset Pipeline automatically brings with it Rails’ selection of Coffeescript as its JavaScript pre-processed / transpiled language of choice and SASS as its CSS transpiled language. However, being an extensible framework, it does allow for additional transpiled languages or additional file sources. For example, Rails Assets brings the power of Bower to your Rails apps, allowing you to manage third-party JavaScript and CSS assets very easily.
Q: What is Active Record and what is Arel? Describe the capabilities of each.
Active Record was described by Martin Fowler in his book Patterns of Enterprise Application Architecture as “an object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data”.
ActiveRecord is both an Object Relational Mapping (ORM) design pattern, and Rails’ implementation of that design pattern. This means that fetching, querying, and storing your objects in the database is as much a part of the API of your objects as your custom business logic. A developer may see this as an undesired side effect, or as a welcome convention, depending on their preference and level of experience.
Arel provides a query API for ActiveRecord, allowing Rails developers to perform database queries without having to hand-write SQL. Arel creates lazily-executed SQL whereby Rails waits until the last possible second to send the SQL to the server for execution. This allows you to take an Arel query and add another SQL condition or sort to the query, right up to the point where Rails actually executes the query. Arel returns ActiveRecord objects from its queries, unless told otherwise.
Q: What is the Convention over Configuration pattern? Provide examples of how it is applied in Rails.
Convention over Configuration (CoC) is a software design pattern by which only the unconventional aspects of an application need to be specified by a developer. When the default convention matches the desired behavior, the default behavior is followed without any configuration being required. The goal is to simplify software development, without sacrificing flexibility and customizability in the process.
Here are some examples of how CoC principles are applied in Rails:
- Model and database table naming. Rails automatically pluralizes class names to find the respective database tables. For a class Book, for example, it will expect a database table named books. For class names composed of multiple words, the model class name should employ CamelCase (e.g.,
BookCluband
book_clubs).
- Primary and foreign keys. By default, Rails uses an integer column named
idas the table’s primary key. Foreign key names by default follow the pattern of appending
_idto the singularized tablename (e.g.,
item_idfor a foreign key into the
itemstable).
- Reserved words for automatic functionality. There are also some optional column names which, if used, automatically add features and functionality to Rails database tables.
created_at, for example, will automatically be set to the date and time when the record was created. Similarly,
updated_atwill automatically be set to the date and time whenever the record was last updated.
- Auto-loading of class definitions. Auto-loading is the “magic” by which classes appear to be accessible from anywhere, without the need to explicitly require them. Here’s how it works: When you reference a class in your code, Rails takes the class name (with namespace) as a string, calls underscore on it, and looks for a file with that name (in all directories specified in your
config.autoload_paths). For example, if you reference a class named
FileHandling::ZipHandler, Rails will automatically search for
file_handling/zip_handler.rbin your
config.autoload_paths. This feature often results in novice Rails programmers thinking that they don’t need to explicitly require referenced classes and that Rails will just auto-magically find them anyway. They then become baffled when they don’t follow this convention and are suddenly being told by Rails that their classes can’t be found.
It is important to note that CoC specifies a default –- but not immutable –- convention. Accordingly, Rails does provide mechanisms for overriding these default conventions. As an example, the default database table naming scheme mentioned above can be overridden by specifying the
ActiveRecord::Base.table_name as shown here:
class Product < ActiveRecord::Base self.table_name = "LEGACY_PRODUCT_TABLE" end
Q: What is the “fat model, skinny controller” approach? Discuss some of its advantages and pitfalls, as well as some alternatives.
“Fat model skinny controller” is an MVC-based Rails design pattern.
MVC is itself a software design pattern that separates a system into three separate and distinct layers; namely, Model, View, and Controller. MVC strives to ensure a clean separation between each of its layers through clearly defined APIs. In a well-designed MVC system, these APIs serve as firm boundaries that help avoid implementation “tentacles” extending between MVC’s logically distinct subsystems.
The “Fat model skinny controller” design pattern advocates placing as much logic as possible in the Model for (a) maximum reuse and (b) code that is easier to test.
That said, a common pitfall for Rails developers is to end up with “overly bloated” models by adhering too blindly to the “fat model, skinny controller” paradigm. The infamous User model is a prime example of this. Since many Rails apps are about the user entering data into the system, or sharing information with their friends socially, the user model will often gain more and more methods, eventually reaching the point where the
user.rb model becomes bulky and unmanageable in size.
A few key alternatives worth considering include:
- Use of other objects: Extract functionality out of models into other objects (such as Decorators or Service objects)
- Hexagonal architecture for Rails: Employ a hexagonal architecture that views the application as a hexagon, each side of which represents some sort of external interaction the application needs to have.
- DCI (Data Context Interaction): Instead of focusing on individual objects, focus on the communication and interactions between data and its context.
Q: Describe the Rails testing philosophy.
Rails built testing support in from the beginning of the framework, and it became a part of the culture. As a result, there are a plethora of tools available for testing in the Rails environment.
By default, Rails 4.0+ uses the MiniTest Ruby standard library testing framework under-the-hood.
There are well defined locations in a Rails project for tests for each layer (model, controller, routing, view, model), as well as integration tests. Because of the MVC foundation of Rails, often these layers (with the exception of integration tests) can be tested without reliance on the other layers.
For example, we can create a database record, before the test runs, that contains the attributes we expect the test to return. Our test can focus on making sure our show post controller action retrieves the post we want it to by checking to see if it returns the object we created above as expected. If not, something went wrong or our code must have a bug. Here’s an example of such a test:
class PostsControllerTest < ActionController::TestCase setup do @post = posts(:one) end test "should show post" do get :show, id: @post assert_response :success end end
Integration tests (often called Feature tests) will usually drive the application as if a user is clicking buttons, using testing tools like Capybara (which can simulate user actions in a variety of manners, including driving embedded WebKit, or using Selenium).
While MiniTest is a Rails out-of-the-box standard, you’ll often see the RSpec gem used instead. This provides a Domain Specific Language for testing that may make it more natural to read than MiniTest.
Some Rails projects use the Cucumber testing framework to describe software behavior in plain English sentences. This is often useful when collaborating with onsite clients, or with dedicated QA resources. In the ideal world, these non-developers can write automated integration tests without having to see a line of Ruby code.
Down on the tracks
Someone who has worked extensively with Rails can be expected to possess a great deal of familiarity with its capabilities, constructs, and idiosyncrasies. These questions demonstrate ways of gauging the extent and depth of this expertise.
Q: Explain the use of
yield and
content_for in layouts. Provide examples.
yield identifies where content from the view should be inserted. The simplest approach is to have a single
yield, into which the entire contents of the view currently being rendered is inserted, as follows:
<html> <head> </head> <body> <%= yield %> </body> </html>
You can also create a layout with multiple yielding regions:
<html> <head> <%= yield :head %> </head> <body> <%= yield %> </body> </html>
The main body of the view will always render into the unnamed
yield. To render content into a named
yield, use the
content_for method.
content_for allows for insertion of content into a named
yield block in a layout. This can be helpful with layouts that contain distinct regions, such as sidebars and footers, into which distinct blocks of content are to be inserted. It can also be useful for inserting tags that load page-specific JavaScript or CSS files into the header of an otherwise generic layout.
Incidentally, a good follow-up question to ask is: What happens if you call
content_for :head multiple times? The answer is that all of the values get concatenated.
Q: What are N+1 queries, and how can you avoid them?
Consider the following code, which finds 10 clients and prints their postal codes:
clients = Client.limit(10) clients.each do |client| puts client.address.postcode end
This code actually executes 11 queries; 1 (to find 10 clients) and then 10 more (one per each client to load its address). This is referred to as an “N+1 query” (where in the case of this example, N is 10).
Eager loading is the mechanism for loading the associated records of the objects returned by
Model.find using as few queries as possible.
Active Record’s eager loading capability makes it possible to significantly reduce the number of queries by letting you specify in advance all the associations that are going to be loaded. This is done by calling the
includes (or
preload) method on the Arel (
ActiveRecord::Relation) object being built. With includes, Active Record ensures that all of the specified associations are loaded using the minimum possible number of queries.
We could therefore rewrite the above code to use the
includes method as follows:
clients = Client.includes(:address).limit(10) clients.each do |client| puts client.address.postcode end
This revised version of this code will execute just 2 queries, thanks to eager loading, as opposed to 11 queries in the original version.
Q: What are “filters” in Rails? Describe the three types of filters, including how and why each might be used, and the order in which they are executed. Provide examples.
Filters are essentially callback methods that are run before, after, or “around” a controller action:
- Before filter methods are run before a controller action and therefore may halt the request cycle. A common before filter is one which requires a user to be logged in for an action to be performed.
- After filter methods are run after a controller action and therefore cannot stop the action from being performed but do have access to the response data that is about to be sent to the client.
- Around filter methods are “wrapped around” a controller action. They can therefore control the execution of an action as well as execute code before and/or after the action is performed.
For example, in a website where changes have an approval workflow, an administrator could be able to preview them easily with an around filter as follows:
class ChangesController < ApplicationController around_action :wrap_in_transaction, only: :show private def wrap_in_transaction ActiveRecord::Base.transaction do begin yield ensure raise ActiveRecord::Rollback end end end end
Note that an around filter also wraps rendering. In particular, in the example above, if the view reads from the database (e.g., via a scope), it will do so within the transaction and thus present the data to preview. You can also choose not to yield and build the response yourself, in which case the action will not be run.
The order of execution is a bit tricky and is important to understand clearly. Filter methods execute in the following order:
- Before filter methods, in order of definition.
- Around filter methods, in order of definition.
- After filter methods, in reverse order.
Also, because of the way Ruby instantiates classes, the filter methods of a parent class’ before will be run before those of its child classes.
Q: What is Rack middleware? How does it compare to controller filters/actions?
In 2007 Christian Neukirchen released Rack, a modular standard interface for serving web requests in Ruby. Rack is similar to other similar mechanisms in other languages, such as WSGI on the Python side, or Java Servlets, or Microsoft’s Internet Server Application Programming Interface (ISAPI).
Before requests are processed by your Rails action method, they go through various Rack middleware functions declared by Rails or by the developer. Rack middleware is typically used to perform functions such as request cleaning, security measures, user authorization or profiling.
You can see a list of available middleware components (both developer defined and those defined by Rails) by running
rake middleware on the command line.
A key distinction between Rack middleware and filters is that Rack middleware is called before Rails does its routing and dispatching, whereas filters are invoked after this routing has occurred (i.e., when Rails is about to call your controller action method). As such, its is advantageous to filter out requests to be ignored in middleware whenever possible, such as requests from common attack URLs (
phpadmin.php requests, for example, can be discarded in middleware, as they will never resolve in a Rails app and is probably just some attempt to hack the site.)
Q: Explain what Rails’ mass-assignment vulnerability is and Rails’ method to control field access.
When the user performs a post (such as, for example, creating a new
User) Rails needs to save all that new data into the database. This data is accessible from your Rails action via the
params Hash.
Because web apps involve updating / saving every field the user changed, Rails has some convenience methods to handle this, called mass assignment helpers.
For example, prior to Rails 4, creating a new
User object with parameters from a submitted form looked like:
User.create(params[:user])
params[:user] will contain keys for the elements the user entered on the form. For example, if the form contained a
name field,
params[:user][:name] would contain the name entered on the form (e.g., “Jeff Smith”).
Convention vs. configuration strikes again here:
name is the name of both the input element in the form and the name of the column in the database.
In addition to the
create method, you can update a record the same way:
@user = User.find(params[:id]) @user.update_attributes(params[:user])
But what happens when a hacker goes in and edits your HTML form to add new fields? They may, for example, guess that you have an
is_admin field, and add it to the HTML form field themselves. Which now means that – even though you didn’t include it on the form that’s served to users – your hacker has gone in and made themselves an admin on your site!
This is referred to as mass assignment vulnerability; i.e., assigning all these fields with no filtering en masse, just trusting that the only field names and values will be those that were legitimately on the HTML form.
Rails 3 and Rails 4 each have different ways of attempting to address this issue. Rails 3 attempted to address it via
attr_protected /
attr_accessible controls at the model level, while Rails 4 addresses it via strong parameters and a filtering mechanism at the controller level. Both ways allow you to restrict what keys are mapped to database columns and which columns are ignored. Using these mechanisms, in the prior
is_admin example, you can set the
is_admin field to only change when code explicitly modifies the field value, or only allow it to be changed in certain situations.
The Big Picture
An expert knowledge of Rails extends well beyond the technical minutia of the language. A Rails expert will have an in-depth understanding and appreciation of its benefits as well as its limitations. Accordingly, here are some sample questions that can help assess this dimension of a candidate’s expertise.
Q: Why do some people say “Rails can’t scale”?
Twitter was one of the first extremely high profile sites to use Rails. In roughly the 2006-2008 timeframe, the growth rate of Twitter made server errors (“fail whale”) appearances a very common occurrence for users, prompting users and tech pundits to lay blame at Rails’ feet. As is true with any software, the causes of scalability issues can be complex and multi-faceted. Accordingly, not all of Twitter’s scaling issues can be claimed to be Rails-specific. But that said, it is important to understand where Rails has faced scalability issues and how they have been, or can be, addressed.
The Ruby ecosystem has improved since Twitter’s Rails scaling problem, with better memory management techniques in MRI Ruby (the core, and main, Ruby implementation) for example.
Modern Rails applications typically mitigate scaling problems in one or more of the following ways:
- Implementing caching solutions (Rails 4 introduces good advances here)
- Leveraging (or implementing) server or platform solutions with automatic scaling built in
- Profiling costly operations and moving them out of Ruby or out of one monolithic Rails app
- Placing some operations in a background / worker queue to be completed at a later time (e.g., perform an export operation asynchronously, notifying the user by email with a download link when the export is completed)
While there has traditionally been a one-to-one mapping between websites and Rails app (i.e., one website = one Rails app), there’s been an increasing movement towards more of a Service Oriented Architecture (SOA) approach whereby performance critical parts of the app are split off into new/separate apps which the main app usually talks to via web service calls. There are numerous advantages to this approach. Perhaps most noteworthy is the fact that these independent services can employ alternate technologies as appropriate; this might be a lightweight / more responsive solution in Ruby, or services written in Scala (as in Twitter’s case), or Node.js, Clojure, or Go.
But writing separate services isn’t the only way to speed up a Rails app. For example, Github has an interesting article on how it profiled Rails and ended up implementing a set of C apis for performing text escaping on the web.
Q: When is Rails a good choice for a project?
Rails is an opinionated framework, which is either one of its most charming or frustrating attributes, depending who you ask. Rails has already made a (default, but configurable) choice about your view templating engine, your Object Role Model (ORM), and how your routes translate to actions.
As a result of these choices, Rails is a great choice for a project where your application has total control over its own database, mostly returns HTML (or at least doesn’t solely return JSON), and for the most part displays data back to the users consistently with the way it is stored.
Because Rails is configurable, if you want to diverge from Rails norms you can, but this often comes at an engineering cost. Want to hook into an existing MS SQL database? You can do that, but you’ll hit some bumps along the way. Want to build a single page app with Rails, returning mostly JSON object? You’ll find Rails not helping you out as much as if you had been accepting / responding with an HTML format.
Q: What are some of the drawbacks of Rails?
Rails is generally meant for codebases of greater than a few hundred lines of code, and that primarily work with its own database objects. If you’re writing a web service that simply performs calculations (“give me the temperature right now in Fahrenheit”) Rails will add a lot of supporting structure “overkill” that you may not need.
Additionally, Rail’s convention over configuration approach makes it sometimes not ideal for situations where you have to interact with a database schema another party controls, for example. Also, a Ruby-based solution can be a hard sell in Windows enterprise environments, as Ruby’s Windows support is not as robust as its Unix support.
Like Python, the concurrency story in the default Ruby implementation (MRI; a.k.a. CRuby) is somewhat hobbled by a Global Interpreter Lock (GIL), which in broad strokes means only one thread can execute Ruby code at a time. (JRuby and Rubinius, other implementations of Ruby, have no GIL.
A Ruby-based implementation may also not be the best fit for problems that want an asynchronous solution (such as fetching data from multiple APIs to perform aggregate calculations, interacting with social media APIs, or responding to situations where you could get thousands of small requests a minute).
Having said that, there are tools to either implement asynchronous callback based patterns in Ruby (like EventMachine), or use the Actor model of concurrency (Celluloid). And of course there are a number of background worker mechanisms if your problem fits in that space.
And finally… Ruby Rookie or Gemologist?
Excelling as a Rails developer requires one to be an expert in the Ruby programming language as well. Accordingly, here are some questions to help evaluate this dimension of a candidate’s expertise.
Q: What are Ruby mixins, how do they work, and how would you use them? What are some advantages of using them and what are some potential problems? Give examples to support your answers.
A “mixin” is the term used in Ruby for a module included in another class. When a class includes a module, it thereby “mixes in” (i.e., incorporates) all of its methods and constants. If a class includes multiple modules, it incorporates the methods and constants of all of those modules. Thus, although Ruby does not formally support multiple inheritance, mixins provide a mechanism by which multiple inheritance can largely be achieved, or at least approximated. (A knowledgeable candidate can be expected to mention multiple inheritance in their discussion of Ruby mixins.)
Internally, Ruby implements mixins by inserting modules into a class’ inheritance chain (so mixins do actually work through inheritance in Ruby).
Consider this simple example:
module Student def gpa # ... end end class DoctoralStudent include Student def thesis # ... end end phd = DoctoralStudent.new
In this example, the methods of the
Student class are incorporated into
DoctoralStudent class, so the
phd object supports the
gpa method.
It is important to note that, in Ruby, the
require statement is the logical equivalent of the
include statement in other languages. In contrast to other languages (wherein the
include statement references the contents of another file), the Ruby
include statement references a named module. Therefore:
- The module referenced by an
includestatement may either be in the same file (as the class that is including it) or in a different file. If in a different file, a
requirestatement must also be used to properly incorporate the contents of that file.
- A Ruby
includemakes a reference from the class to the included module. As a result, if the definition of a method in the included module is modified, even at runtime, all classes that include that module will exhibit the new behavior when that method is invoked.
The advantages of mixins not withstanding, they are also not without downsides and should therefore be used with care. Some potential pitfalls include:
- Instance variable name collisions. Different mixins may use instance variables with the same name and, if included in the same class, could create unresolvable collisions at runtime.
- Silent overriding of methods. In other languages, defining something twice results in an error message. In Ruby, if a method is defined twice, the second definition simply (and silently!) overwrites the first definition. Method name clashes across multiple mixins in Ruby are therefore not simple errors, but instead can introduce elusive and gnarly bugs.
- Class bloat. The ease-of-use of mixins can also lead to their “abuse”. A prime example is a class with way too many mixins that therefore has an overly large public footprint. The rules of coupling and cohesion start to come into play, and you can end up with a system where changes to a module that’s frequently included can have disastrous effects. Traditional inheritance or composition is much less prone to this type of bloat. Quite often extracting parts of a class into modules that are mixed in is akin to cleaning your room by putting the mess into large bins. It looks clean until you start opening the bins.
Q: Compare and contrast Symbols and Strings in Ruby? Why use one vs. the other?
Symbols are singleton based on value, immutable, non-garbage collected objects. Strings, however, create multiple objects even if they share a value, are mutable, and are garbage collected when the system is done with the object.
Since symbols are singleton based on value (there is only one symbol object for a value, even if it appears multiple times in a program), this makes it trivial to compare whether two symbols are the same (Ruby basically just needs to compare their object_id values). Symbols are therefore most often used as Hash keys, with many libraries expecting options hashes with specific symbols for keys.
Strings can be made immutable (“frozen”) via the freeze method. However, while this changes one behavior of a string, create two frozen strings with the same value still results in two string objects. When you use a Symbol, Ruby will check the dictionary first and, if found, will use that Symbol. If the Symbol is not found in the dictionary, only then will the interpreter instantiate a new Symbol and put it in the heap.
As stated in The Ruby Programming Language O’Reilly book (by Matz and Flanigan):
A typical implementation of a Ruby interpreter maintains a symbol table in which it stores the names of all the classes, methods, and variables it knows about. This allows such an interpreter to avoid most string comparisons: it refers to method names (for example) by their position in this symbol table. This turns a relatively expensive string operation into a relatively cheap integer operation.
Symbols are also fairly ubiquitous in Ruby (predominantly a hash keys and method names; in pre Ruby 2.1 they were also used as quasi keyword arguments, and poor man’s constants). Because of their performance, memory and usage considerations, Symbols are most often used as Hash keys, with many libraries expecting option hashes with specific symbols for keys.
Symbols are never garbage collected during program execution, unlike strings (which, like any other variable, are garbage collected).
Because strings and symbols are different objects, here’s an example of something that often catches less experienced Ruby programmers unaware.
Consider the following hash, for example:
irb(main):001:0> a = {:key => 1} irb(main):002:0> puts a['key'] => nil.
You may be expecting to see
1 printed here, especially if
a was defined elsewhere in your program. But, as strings and symbols are different; i.e.,
key (the symbol) and
'key' (the string) are not equivalent. Accordingly, Ruby correctly returns
nil for
a['key'] (even though this is annoying for the unsuspecting programmer wondering where her value is!)
Rails has a class,
HashWithIndifferentAccess, which acts like a Hash object, except it treats strings and symbols with the same values as equivalent when used as key names, thereby avoiding the above issue:
irb(main):001:0> a = HashWithIndifferentAccess.new({:key => 1}) irb(main):002:0> puts a['key'] => 1
There is one important caveat. Consider this controller action:
def check_value valid_values = [:bad, :ok, :excellent] render json: valid_values.include?(params[:value].to_sym) end
This innocent looking code is actually a denial-of-service (DOS) attack vulnerability. Since symbols can never be garbage collected (and since here we cast user input into a symbol), a user can keep feeding this endpoint with unique values and it will eventually eat up enough memory to crash the server, or at least bring it to a grinding halt.
Q: Describe multiple ways to define an instance method in Ruby.
Instance methods can of course be defined as part of a class definition. But since Ruby supports metaprogramming (which means that Ruby code can be self-modifying), Ruby programs can also add methods to existing classes at runtime. Accordingly, there are multiple techniques for defining methods in Ruby, as follows:
(1) Within a class definition, using
def (The simplest answer)
class MyObject def my_method puts "hi" end end
This is the standard way to define instance methods of a class.
(2) Within a class definition, without using
def
class MyObject define_method :my_method do puts "hi" end end
Since
define_method is executed when Ruby instantiates the
MyObject class object, you can do any kind of dynamic code running here. For example, here’s some code that only creates our method if it’s being run on a Monday:
require 'date' class MyObject if Date.today.monday? define_method :my_method do puts "Someone has a case of the Mondays" end end end
Executing
MyObject.new.my_method on any other day of the week will give us an exception about no such method existing. Which is true: it’ll only exist on Mondays!
(It’s important to note that classes are objects too in Ruby, so our
MyObject class is an instantiation of a
Class object, just like in
a = MyObject.new,
a is an instance of
MyObject.)
(3) Extending an existing class definition
Ruby also allows class definitions to be extended. For example:
class MyObject def say_hello puts "hey there!" end end class MyObject def say_goodbye puts "buh-bye" end end
The above code will result in the
MyObject class having both the
say_hello and the
say_goodbye methods defined.
Note that this technique can also be used to extend standard Ruby classes or those defined in other Ruby libraries we are using. For example, here we add a
squawk method to the standard Ruby string class:
class String def squawk puts "SQUAWK!!!" end end "hello".squawk
(4) Using
class_eval
class_eval dynamically evaluates the specified string or block of code and can therefore be used to add methods to a class.
For example, we can define the class
MyObject:
class MyObject ... end
… and then come back at a later time and run some code to dynamically add
my_method to the
MyObject class:
MyObject.class_eval do def my_method puts "hi" end end
(5) Using
method missing
Ruby also provides a hook to check for undefined methods. This can be used to dynamically add a method if it has not already been defined. For example:
class MyObect def method_missing(method_name_as_symbol, *params) if method_name_as_symbol == :my_method puts "hi" end end end
Wrap Up
Ruby on Rails is a powerful framework for rapid development of web applications. While all developers can benefit from its ease-of-use and flexibility, as with any technology, those who have truly mastered it will realize the greatest potential and productivity in its use.
While no brief guide such as this can entirely cover the breadth and depth of technical topics to cover in a Rails interview, the questions provided herein offer an effective basis for identifying those who possess a sound and principled foundation in the Rails framework and its paradigms. | http://www.toptal.com/ruby-on-rails | CC-MAIN-2014-35 | refinedweb | 6,176 | 51.68 |
#include <hallo.h> Rijn wrote on Tue Jul 23, 2002 um 01:20:13PM: > Start installation at the boot prompt via F1 choose 2.4 kernel and ext3 file > system everything looks nice. > Of course making a boot floppy, boots fine. > > The problem I have is that booting with loadlin from a small Dos partition > the system hangs Loadlin is an old piece of shit, RTFM and choose another way to boot. > Use Loadlin for many years with small and big kernels without any problem. > > Maybe it's a problem with the new EXT3 file system ??? No, some bug in loadline appearing on large 2.4.x kernels.) -- To UNSUBSCRIBE, email to debian-boot-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org | https://lists.debian.org/debian-boot/2002/07/msg00416.html | CC-MAIN-2016-07 | refinedweb | 129 | 75.5 |
Hello Codeforces! Did you enjoy the AtCoder Beginner Contest 129? As usual, there was only Japanese editorial published, so I translated it into English again.
Disclaimer. Note that this is an unofficial editorial and AtCoder has no responsibility for this. Also, I didn't do proofreading at all, so it might contain many typos. Moreover, this is the second experience I write such kind of editorial, so the English may not be clear, may be confusing, or even contain mistakes. Any minor corrections (including grammatical one) or improvement suggestions are welcome. Please do not hesitate posting a comment about it.
A: Airplane
There are 6 orders to visit the airports, so you can try all of them and look for the minimum value. However, if you realized that each cost of a route is equal to the sum of two integers out of the given three, you could also obtain an answer by subtracting the maximum of the three from the sum of all three integers.
An example code is shown in List 1:
Listing 1. Example Code
#include <bits/stdc++.h> using namespace std; int main() { int P, Q, R; cin >> P >> Q >> R; cout << P + Q + R - max({P, Q, R}) << endl; return 0; }
B: Balance
Let's try all $$$1 \leq T < N$$$. Then you can get the minimum value by calculating the sum of each group. If you implemented it naively, the time complexity will be $$$\mathcal{O}(N^2)$$$, so it will got accepted. However, the difference between the sum of two groups is equal to that between the sum from the beginning to the specific place and the total sum subtracted by the sum, so if you traverse from the beginning to the end while retaining the calculated sum, you can solve the problem in $$$\mathcal{O}(N)$$$.
An example code is shown in List 1:
Listing 1. B Example Code
#include <bits/stdc++.h> using namespace std; int main() { int N; cin >> N; vector<int> a(N); int sum = 0; for (int i = 0; i < N; ++i) { cin >> a[i]; sum += a[i]; } int mini = sum; int prefix_sum = 0; for (int i = 0; i < N; ++i) { prefix_sum += a[i]; mini = min(mini, abs(prefix_sum - (sum - prefix_sum))); } cout << mini << endl; return 0; }
C: Typical Stairs
Let's first think about the case without any broken steps. Let $$$f_x$$$ be the number of the ways to climb up to the $$$x$$$-th step.
For each $$$x \geq 2$$$, suppose that you have climbed up to the $$$x$$$-th step. Then there are the following two possibilities:
- you have set foot on the $$$x-2$$$-th step right before that, or
- you have set foot on the $$$x-1$$$-th step right before that.
This means that $$$f_x = f_{x-1} + f_{x-2}$$$. This formula is equal to that of the fibonacci numbers. By using DP (Dynamic Programming), you can solve it in $$$O(N)$$$.
If you came up with this idea, you could also solve the problem with some broken steps by adapting this method. Specifically, you could modify the recurrence relation so that you would never go to $$$f_{a_i}$$$.
#include<bits/stdc++.h> using namespace std; const long long mod=1e9+7; int main(){ int N,M; cin>>N>>M; vector<int>oks(N+1,true); for(int i=0;i<M;++i){ int a;cin>>a; oks[a]=false; } vector<long long int>dp(N+1); dp[0]=1; for(int now=0;now<N;++now){ for(int next=now+1;next<=min(N,now+2);++next){ if(oks[next]){ dp[next]+=dp[now]; dp[next]%=mod; } } } cout<<dp[N]<<endl; return 0; }
D: Lamp
If you try to count the lighted squares for each square not occupied by an obstacle using for-loop, the worst time complexity would be $$$O(HW(H+W))$$$ when there are no obstacles, so you cannot solve this within the time limit.
However, if you make use of some supplemental precalculations to count the lighted square, you can count the lighted squares for each square the lamp is placed in constant time, thus obtaining the answer in $$$O(HW)$$$.
Let $$$(i, j)$$$ denote the square at the $$$i$$$-th row from the top and the $$$j$$$-th column from the left. You will need the following four precalculations:
- $$$L[i][j]$$$: The number of the lighted cells in the left of square $$$(i, j)$$$ when the light is placed there (including the square $$$(i, j)$$$ itself)
- $$$R[i][j]$$$: The number of the lighted cells in the right of square $$$(i, j)$$$ when the light is placed there (including the square $$$(i, j)$$$ itself)
- $$$D[i][j]$$$: The number of the lighted cells above square $$$(i, j)$$$ when the light is placed there (including the square $$$(i, j)$$$ itself)
- $$$U[i][j]$$$: The number of the lighted cells below square $$$(i, j)$$$ when the light is placed there (including the square $$$(i, j)$$$ itself)
For example, $$$L[i][j]$$$ can be calculated as follows.
- If there are an obstacles in $$$(i, j)$$$, $$$L[i][j]=0$$$.
- Otherwise, if $$$j=1$$$, $$$L[i][j]=1$$$.
- Otherwise, if $$$j>1$$$, $$$L[i][j]=L[i][j-1]+1$$$.
After calculating them for each directions, the number of the lighted square when you put the lamp at square $$$(i, j)$$$ is equal to $$$L[i][j] + R[i][j] + D[i][j] + U[i][j] - 3$$$. (These four values all include the square $$$(i, j)$$$ itself, so to remove the duplicates, you have to subtract by 3.)
E: Sum Equals Xor
Generally, $$$A+B$$$ is equal to or greater than $$$A \mathrm{XOR} B$$$. These two values are equal if there doesn't exist a digit where both $$$A$$$ and $$$B$$$ is 1 in binary notation. Conversely, if there exists such a digit, $$$A+B$$$ cause a carry at the digit while XOR doesn't, so $$$A+B$$$ will be strictly greater (and you cannot cancel the difference).
Using this fact, let's define a digit-DP as follows:
- $$$\mathrm{dp1}[k] := $$$ The number of pair $$$(A, B)$$$ where you have already determined the first $$$k$$$ digits of $$$A$$$ and $$$B$$$, and where you already know that $$$A+B$$$ is equal or less than $$$L$$$
- $$$\mathrm{dp2}[k] := $$$ The number of pair $$$(A, B)$$$ where you have already determined the first $$$k$$$ digits of $$$A$$$ and $$$B$$$, and where you still don't know whether $$$A+B$$$ is equal or less than $$$L$$$
In order to calculate $$$\mathrm{dp*}[k]$$$ from $$$\mathrm{dp*}[k-1]$$$, you have to think of two transition whether the $$$k$$$-th digit would be 0 or 1. If the $$$k$$$-th digit is 0, the $$$k$$$-th digit of both $$$A$$$ and $$$B$$$ should be 0. If the $$$k$$$-th digit is 1, there are two transition: $$$(0, 1)$$$ and $$$(1, 0)$$$. In this case, you have to calculate dp2 carefully so that $$$A+B$$$ would not exceed $$$L$$$ after deciding the $$$k$$$-th digit.
In this way, regarding $$$L$$$ as a string, this problem could be solved in $$$O(|L|)$$$.
F: Takahashi's Basics in Education and Learning
Let's rephrase the problem statement to as follows:
- Consider a string $$$X$$$ and an integer $$$s$$$. First, $$$X = "0", s=A$$$.
- In one operation, concat $$$s$$$ to the bottom of $$$X$$$ and increase $$$s$$$ by $$$B$$$.
- Find the value of $$$X \mod M$$$ after making $$$N$$$ operations.
Let the number of $$$d$$$-digit terms be $$$C_d$$$, then you can easily calculate them by subtracting the number of terms equal or less than $$$\underbrace{99\ldots9}_{d-1}$$$ from the number of terms equal or less than $$$\underbrace{99\ldots9}_{d}$$$. The operations of concatting $$$d$$$-digit terms for $$$C_d$$$ times is equivalent to repeating the operation of $$$(X, s) \mapsto (X \times 10^d + s, s+B)$$$ for $$$C_d$$$ times. This operation can be represent as the following matrix:
\begin{equation} (X, s, 1) \begin{pmatrix} 10^d & 0 & 0 \ 1 & 1 & 0 \ 0 & B & 1 \end{pmatrix} = (X \times 10^d + s, s + B, 1) \end{equation} (Note: the 3x3 matrix is broken. Does anybody know how to fix it?)
Therefore, all you have to do is calculating $$$C_d$$$-th power of this $$$3 \times 3$$$ matrix, and this could be achieved in $$$O(\log C_d)$$$ by means of Fast Matrix Exponentiation.
You can obtain the ultimate $$$X \mod M$$$ fast enough by operating the calculation above for $$$d = 1, 2, \ldots, 18$$$ in this order. Be very careful of overflow. | http://codeforces.com/blog/entry/67558 | CC-MAIN-2019-26 | refinedweb | 1,431 | 64.75 |
SPLASH, TOO (TV, 1988), color, 91 minutes
CAST:
Todd Waring - Allen Bauer
Amy Yasbeck - Madison Bauer
Donovan Scott - Freddie Bauer
Rita Taggert - Fern Hooten
Dody Goodman - Mrs. Stimler
Mark Blankfield - Dr. Otto Benus
Noble Willingham - Karl Hooten
Doris Belack - Lois Needler
Barney Martin - Herb Needler
Timothy Williams - Harvey
The DVD comes in a snap-tight plastic case with a photo-quality full color insert depicting stills from the production as well as pertinent information. The disc itself has a photo face, not a label!
The DVD comes in a snap-tight plastic case with a photo-quality full color insert depicting stills from the production as well as pertinent information. The disc itself has a photo label. Miniseries come as a dual-disc set, but contain the same style of presentation as TV films.
All films are presented in the DVD-R format (NTSC, North America, Region 1). This is a more universally acceptable format but please make sure that your DVD player and/or computer DVD drive can accommodate this type of disc. If you are unsure whether or not you can utilize this type of disc, please check with your owner’s manual or go to the manufacturer’s website for more information. There will be NO REFUNDS given for player incompatibility.
Due to the (sadly) increasingly easy ability to copy these discs ALL SALES ARE FINAL, however any defective disc will be immediately—and cheerfully—replaced upon return (within 14 days of receipt by buyer).
I am not a big business—there are just three of us . . . me, myself & I. Please be patient as your order is treated with concern and care. The DVDs are not computer-generated and are done one at a time (not at “high speed”) and then quality-checked. If a bad disc gets by me (and they occasionally do) then, as stated before, it will be cheerfully replaced—naturally!
PAYMENT —
While PayPal is my preferred method for being paid, payment can be made is several ways: Money Orders (USPS-issued only, the green ones); Western Union; and PayPal™ — there is a 2.9% + 30¢ fee for this for PayPal, based on the total charges (merchandise and shipping) within the United States—international buyers will be 3.9% + 30¢ fee of the total charges. I do not receive this amount as it goes directly to PayPal for their financial & brokerage services. I believe that this is a very reasonable fee for the safety and ease of using any major credit card (or debit card and/or bank account) through them. I also accept PayPal E-checks, however there is a 5-7 day waiting period for clearance. I ship within 48 hours of receipt of verified funds.
SALES TAX —
Applicable sales tax of 8.25% will be applied to all purchases (not including shipping) shipped to address within the state of California.
SHIPPING —
1-5 titles = $7.00 USPS Priority Mail (approximately 2-3 days); 1 title = $5.00 First Class Mail (approximately 3-5 days). If you choose the Priority Mail then up to TEN (10) titles will be just $5.75 EXTRA! Wow!! Insurance is included with the base cost — it is mandatory so please do not request a price without it. It’s for your safety as well as mine.
If you would like to have your shipping via FedEx Ground, then this can easily be arranged. Please contact me for further details and pricing.
International buyers, 1 title = $5.00 International Air Post (approximately 7-10 business days but this is not insured). Please contact me for further options (ie: registered) and combined shipping.
QUALITY —
The “pedigree” of these titles is from various sources but most are within the “8-9/10" range. All titles are commercial-free. However, even a “9" can have flaws (nothing is perfect). If you are unsure of a certain title, please contact me in advance of purchase as there are no returns for remorseful buyers. I do want you to be happy with your purchase! | http://www.ioffer.com/i/57697621 | crawl-002 | refinedweb | 672 | 64.51 |
Simon Michael, 2002/09/02 06:25 GMT (via mail):
Zwiki 0.10.0 has been released, including 4.5 months' worth of enhancements & fixes. Enjoy.
Summary
Pre-rendering for better performance, new freeform page names and fuzzy linking, WikiForNow regulations support (beta), page renaming & deleting, UI enhancement & simplification, better upgradability, page templates support, i18n started, many bugfixes & minor enhancements.
Upgrading notes
This release has renamed page types and a new render-caching mechanism, to which pages are upgraded automatically; full-featured UI defaults which will be used if you delete your custom DTML methods; and one new and one renamed permission. See & for more.
Download: or
2002/09/02 10:14 GMT (via web):
On the ZCatalog crash, this is what I get in the STUPID_LOG_FILE:
2002-09-02T09:32:38 ERROR(200) zdaemon zdaemon: Mon Sep 2 10:32:38 2002: Aiieee! 368841 exited with error code: 136
Is this the problem that you referred to? I'm not sure what you mean by stack space in my python build: I can't find anything relevant in the python distribution on how to configure this, and searching with Google doesn't turn up anything useful. Presumably you don't mean the shell's "stacksize" resource limit? If you can give me a pointer, I can try rebuilding my python installation and see if it makes any difference.
Thanks, Peter.
2002/09/02 15:28 GMT (via web):
I've just upgraded my ZWiki to 0.10.0 and get the following error when trying to open a Zwiki page; Error Type: Invalid Date-Time String Error Value: 2002/06/15 17:11 Central Standard Time
the complete traceback is:
Traceback (innermost last): File C:\PROGRA~1\Zope\lib\python\ZPublisher\Publish.py, line 150, in publish_module File C:\PROGRA~1\Zope\lib\python\ZPublisher\Publish.py, line 114, in publish File C:\PROGRA~1\Zope\lib\python\Zope\__init__.py, line 159, in zpublisher_exception_hook (Object: wiki) File C:\PROGRA~1\Zope\lib\python\ZPublisher\Publish.py, line 98, in publish File C:\PROGRA~1\Zope\lib\python\ZPublisher\mapply.py, line 88, in mapply (Object: KentT) File C:\PROGRA~1\Zope\lib\python\ZPublisher\Publish.py, line 39, in call_object (Object: KentT) File C:\Program Files\Zope\lib\python\Products\ZWiki\ZWikiPage.py, line 150, in __call__ (Object: KentT) File C:\Program Files\Zope\lib\python\Products\ZWiki\ZWikiPage.py, line 2962, in upgrade (Object: KentT) File C:\PROGRA~1\Zope\lib\python\DateTime\DateTime.py, line 651, in __init__ File C:\PROGRA~1\Zope\lib\python\DateTime\DateTime.py, line 937, in _parse Invalid Date-Time String: (see above)
What should I do? Thanks, Kent
Simon Michael, 2002/09/02 16:56 GMT (via mail):
> Is this the problem that you referred to?
No; a different mysterious error code. I don't know where you look these up, but if you ask on one of the zope lists someone at Zope corp. will know what 136 signifies. This will probably help.
> I'm not sure what you mean by stack space in my python build: I can't
Me neither, exactly, but I heard it mentioned on the zope lists as one cause of crashing problems when using the stock python on freebsd. Here's a hit: . Let us know what happens.
Simon Michael, 2002/09/02 17:02 GMT (via mail):
> I've just upgraded my ZWiki to 0.10.0 and get the following error when
> trying to open a Zwiki page; Error Type: Invalid Date-Time String Error
your KenT? page's creation_time property has got an invalid value somehow. What does it show in KenT?/manage_propertiesForm ? Also check some of your other pages to see if the problem is widespread.
2002/09/02 17:58 GMT (via web):
Here's a couple other creation_times;
- 2002/06/15 17:11 Central Standard Time
- 2002/06/16 21:38 Central Standard Time
ktenney, 2002/09/02 22:25 GMT (via web):
Is there a problem with the creation_time I posted?
What do I do to proceed with upgrading to 0.10.0 ? (I have reverted to 0.9.9 in the mean time)
Thanks, Kent
Simon Michael, 2002/09/02 23:30 GMT (via mail):
> Is there a problem with the creation_time I posted?
Yes.. it seems zope's DateTime? class doesn't like the "Central Standard Time" timezone (it would be happy with "CST" or "US/Central"). So the question is how did it get there. Meanwhile,
> What do I do to proceed with upgrading to 0.10.0 ?
you could change the offending line in ZWikiPage.py, or fix your pages. Add and run this python script in your wiki folder:
import string for page in container.objectValues(spec='ZWiki Page'): for prop in ('creation_time','last_edit_time'): try: s = getattr(page,prop,None) if string.find(s,'Central Standard Time') != -1: fixed = string.replace(s,'Central Standard Time','CST') setattr(p,prop,fixed) except: print 'problem with',page.id() return printed
You should get no or little output, and your wiki pages should then work ok.
2002/09/03 16:07 GMT (via web):
Hi,
Having more problems with upgrading to 0.10.0 from 0.9.4. My Zope server has two ZWiki folders with different standard headers and footers. The new way of doing things doesn't cater for this as far as I can tell, because any template Zope objects in the ZODB are overridden by template files loaded from the filesystem, which are the same for the whole Zope server.
I tried creating a Page Template and loading the contents of wikipage.zpt, but it isn't one really, so that didn't work.
Have I missed something, or is there really no way around this?
Thanks, Peter.
Pieter Biemond prive, 2002/09/03 16:07 GMT (via mail):
I can't access zwiki.org at this moment. What is going on?
2002/09/03 16:20 GMT (via web):
Hmm, works fine now. I'm thinking of rewriting my ZwikiFrontend? to store the source MIME messages into the ZODB (as type "MIMEMessage?" or so) and somehow make a ZwikiInclude? (see #219 Reply through the web, create a WikiInclude mechanism) to render the MIMEMessages?. Of course, the rendered messages must be somehow cached.
Before I start, I would like to hear your ideas about this.
Simon, 2002/09/03 17:15 GMT (via web):
Peter - no, any custom methods in the zodb should override the filesystem defaults. What happens when you leave your custome methods in place ?
Did the suggestion for fixing the earlier issue work ?
Simon, 2002/09/03 17:18 GMT (via web):
Pieter - don't know, I couldn't ssh to it last night either. I have no bright ideas about the mail stuff, sorry.
Peter Keller, 2002/09/03 17:49 GMT (via mail):
Hi,
What happens is that my standard_wiki_header/footer are ignored. Editing templates/defaults/wikipage.zpt is the only way I have found to change the displayed headers/footers when the pages are rendered.
I've had a look through the source to try to understand what is going on. I'm not really a python expert, but it looks to me that at line 2331 of ZWikiPage.py (underneath the comment about UI methods) this assignment is always executed:
wikipage = default_wikipage
default_wikipage is always the template file from the filesystem, and so by the time we get to addStandardLayoutTo, the filesystem version is always used.
I've tried commenting this line out, but I just get a Zope error:
Zope has encountered an error while publishing this resource. Error Type: AttributeError Error Value: wikipage
Thanks for any hints, Peter.
P.S. about the other error, I'm still waiting for a response on the Zope developers mailing list
-- ======================================================================== | ========================================================================
Simon, 2002/09/03 18:31 GMT (via web):
Sorry, I confused you with Kent and was wondering about the creation_time issue.
It's strange that your old header & footer are ignored. Does a wikipage page template in your wiki folder get ignored too ? I think the line you point to is ok because addStandardLayoutTo looks in the folder first.
ktenney, 2002/09/03 21:33 GMT (via web):
creation_time problems;
I'm unable to change creation_time, either from the script, or from a /manage page.
The ownership tab of the Wiki shows me as the owner ....
Where else do I look?
Thanks, Kent
Simon Michael, 2002/09/04 00:40 GMT (via mail):
Kent - you can't change creation_time in the ZMI because it's a read-only property.
I set up the script on my own site, found a typo and learned that python scripts can't set properties anyway.
I have made an external method that works, see . Install this in your wiki folder or above and try the url in the docstring. Let me know what you get.
Simon, 2002/09/04 00:55 GMT (via web):
JohnGreenaway thanks again for AccessKeys :) I am using alt-e to edit and alt-s to save all the time now. I keep expecting alt-S to work in the ZMI and elsewhere.
ktenney, 2002/09/04 01:48 GMT (via web):
Simon,
The fixprops.py script worked great, now I can open my ZWiki pages with 0.10.0
Thanks, Kent
ktenney, 2002/09/04 02:02 GMT (via web):
Simon,
My
contents page shows a pretty small subset of my pages. Do I need to re-catalog the wiki pages?
Thanks, Kent
Simon, 2002/09/04 02:18 GMT (via web):
Kent - good, I've recorded it in the tracker in case it happens to others.
Pages can disappear from contents if they have no valid parent - a workaround is to view them once (or just the top page of the missing hierarchy). This will reset the parent information.
Peter Keller, 2002/09/04 09:42 GMT (via mail):
> It's strange that your old header & footer are ignored. Does a wikipage page
> template in your wiki folder get ignored too ? I think the line you point to
> is ok because addStandardLayoutTo looks in the folder first.
OK: I tried creating a Page Template in the ZMI, and adding this into the middle of the default Zope Page Template code:
<span tal: <em>page body goes here..</em> </span>
This template was then applied OK, so that bit works.
After deleting the new wikipage template from the zodb, I then replaced the check for standard_wiki_header/footer in ZWikiPage.py with:
elif (hasattr(folder,'standard_wiki_header') or hasattr(folder,'standard_wiki_footer') ):
so it didn't check for the metatype. I then got a Zope error:
Error Type: TypeError Error Value: unsupported operand types for +
I tried re-naming the the old methods, and creating new ones with in the ZMI with minimal content:
<HTML> <HEAD> <TITLE>A ZWiki page</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF">
and:
</BODY> </HTML>
I still got the same Zope TypeError?. I even tried doing this in a completely new ZWikiWeb? - still the same, but this time with a traceback confirming that the error is raised at this statement in addStandardLayoutTo:
return header + body + footer
I'm very confused now. Does this mean that there is some problem with recognising/using DTML Methods?
I can try to re-work my existing header/footers into a Zope Page Template, but then I cannot use any of the ZWiki-specific stuff that templates/defaults/wikipage.zpt uses (the same reason why I couldn't load the code of templates/defaults/wikipage.zpt into a Zope Page Template using the ZMI). Perhaps we need a ZWiki Page Template class that can be instantiated using the ZMI, rather than the locally-defined MyPageTemplateFile? class? That would solve the problem, I guess.
Peter.
By the way, the Zcatalog problem turns out to be a SIGFPE. I will investigate further, but it probably won't be till next week now.
ktenney, 2002/09/04 12:27 GMT (via web):
Status report, ZWiki upgrade.
Still some searching/indexing oddities.
If I enter
TUTOS into the search form, I get a page with 16 results (correct)
Each result has the page name in square brackets with the blue question mark after.
the href for each entry is
I'm not finding that pages or heirarchies are added to
Contents after viewing.
Thanks, Kent
interra, 2002/09/04 13:20 GMT (via web):...
2002/09/04 13:51 GMT (via web):
I've just downloaded and installed Zope and ZWiki. I've created a Zwiki Web from scratch. In Zope, when I view the web, I can edit the page and add a comment, just like this page. When I view my ZWiki Web I cannot edit and have no comment box. I want anonymous users to edit and add comments. I didn't change security settings.
What should I do to get this done?
ktenney, 2002/09/04 15:12 GMT (via web):
Upgrade Update;
When I click
edit on wiki page, above the textarea on the editform is
could not render this and the page title is
untitled
2002/09/04 23:08 GMT (via web):
I noticed today that RestructuredText is the new format for Python PEP's .
In searching for it on this site I was baffled by the contents of the RestructuredText page. No time to dig into it now..no nap 3 year old beckons.
(p.s. Noticed an old knee jerk response of mine in regards to re-inventing STX when re exists...hopefully can reconcile that later, too.) - DeanGoodmanson
Simon Michael, 2002/09/05 00:20 GMT (via mail):
rST is being discussed on zope-dev at the moment.
Simon Michael, 2002/09/05 00:26 GMT (via mail):
Now the feedback is rolling in.. good, good.
I'm starting to wish we had subject threads. I don't have a preference, but remember you can open a tracker issue which effectively makes a separate "thread" (the only problem is you'll lose the GeneralDiscussion-only crowd).
Simon, 2002/09/05 01:51 GMT (via web):
The zope/python error code 11 bug is causing major problems for GeneralDiscussion.. it's unusable right now. Trying to find some way to save the past comments without triggering this crash.
Simon Michael, 2002/09/05 01:55 GMT (via mail):
zwiki@zwiki.org (Peter Keller) writes (in not exactly this order):
> This template was then applied OK, so that bit works.
Good.
> but then I cannot use any of the ZWiki-specific stuff that
> templates/defaults/wikipage.zpt uses (the same reason why I couldn't
> load the code of templates/defaults/wikipage.zpt into a Zope Page
> Template using the ZMI).
That's exactly what you should be able to do.
> I can try to re-work my existing header/footers into a Zope Page
> Template
No need - dtml methods are supposed to work and we will get them to work. Using the standard zwiki code, with your custom methods in the folder, what do you get when you put the following on a DTML-enabled page ?:
<pre> <dtml-var "standard_wiki_header" html_quote> <dtml-var "standard_wiki_footer" html_quote> <dtml-var "folder" html_quote> <dtml-var "folder()" html_quote> <dtml-var "_.getattr(folder(),'standard_wiki_header','none')" html_quote> <dtml-var "_.getattr(folder(),'standard_wiki_footer','none')" html_quote> </pre>
Sanity check, please - can anyone else confirm or deny (a) their custom standard_wiki_header & standard_wiki_footer methods are working with 0.10 or (b) they are able to create and use a wikipage template containing the code from templates/defaults/wikipage.zpt ?
Simon Michael, 2002/09/05 01:56 GMT (via mail):
zwiki@zwiki.org (ktenney) writes:
> the href for each entry is
>
Is your SearchPage? not DTML-enabled ? (check the editform or /manage_propertiesForm). If not, can you try creating a new zwiki web and check the SearchPage? in that ?
> I'm not finding that pages or heirarchies are added to
Contents after
> viewing.
So you view a page, in full mode. Do you see the page title at the top ? Do you see some parents, and/or the contents link above the title ?
Simon Michael, 2002/09/05 02:05 GMT (via mail):
zwiki@zwiki.org (interra) writes:
>...
This sounds like one of Peter's issues. ZWikiPage does mess with the context, it tries to make "here" be the page. But testing it again here, I am seeing "here" is the folder. Will investigate further.
Simon Michael, 2002/09/05 02:06.
Simon Michael, 2002/09/05 02:06 GMT (via mail):
> ?
Simon, 2002/09/05 02:08 GMT (via web):
Submitted ZopeIssue:560
Simon Michael, 2002/09/05 02:22 GMT (via mail):
Simon Michael <simon@joyful.com> writes:
>> I'm not finding that pages or heirarchies are added to
Contentsafter
>> viewing.
>
> So you view a page, in full mode. Do you see the page title at the top ?
> Do you see some parents, and/or the contents link above the title ?
Let me clarify this - if you have a hierarchy of pages that's gone missing (not just one page), then most of those pages' parents may be fine - the page that needs repairing is the one at the top of the tree. View that one.
Simon Michael, 2002/09/05 02:32 GMT (via mail):
Simon Michael <simon@joyful.com> writes:
> Let me clarify this - if you have a hierarchy of pages that's gone missing
> (not just one page), then most of those pages' parents may be fine - the
> page that needs repairing is the one at the top of the tree. View that
> one.
I won't stop there, I'll clarify myself still further -
ack! it's worse than this. Any page you visit within the lost hierarchy will get "repaired" by being reparented to the top level, so your lost hierarchy's structure will be really lost, unless you undo. So you really want to view the top page if you can find it.
Jay, Dylan, 2002/09/05 03:19.
Sometime I wonder if hiding options based on security permissions is more trouble than its worth. When the're there but disabled, or give a security error, at the user knows what's going on.
Simon, 2002/09/05 03:35 GMT (via web):
I hear you.. but I started out that way, and they didn't :(
Simon, 2002/09/05 03:42 GMT (via web):
I feel the current behaviour
- puts the learning curve with the wiki admin rather than the wiki visitor
- is most often what the admin wants (once they have learned about zwiki permissions)
Is better admin documentation what's needed ?
Simon, 2002/09/05 03:56 GMT (via web):
By the way, anyone hasn't created a new wiki recently should take a look at the BasicFrontPage, it may help some.
ktenney, 2002/09/05 17:47 GMT (via web):
saga, upgrade to 0.10
> ?
I think the only non-custom element is one line in standard_wiki_header:: edit this page
Is your SearchPage? not DTML-enabled ? (check the editform or /manage_propertiesForm). If not, can you try creating a new zwiki web and check the SearchPage? in that ?
I created a new ZWiki with zwikidotorg checked, it's cool works great.
Thanks, Kent
Bob Finch, 2002/09/05 19:45 GMT (via web):
In a new Zope and ZWiki install, clicking on a UserOptions? link causes Zope to restart. Is this caused by the same Zope/python crashing bug mentioned above (ZopeIssue:560)? I'm running:
ktenney, 2002/09/06 00:14 GMT (via web):
Zwiki update continued,
I created a new ZWiki. Pages I copy from my regular one into the test one behave normally, search works right, editform works right.
Should I just copy all pages to the test, and delete the original?
Would you prefer trying to discover what's happening?
Thanks, Kent
Simon Michael, 2002/09/06 00:36 GMT (via mail):
zwiki@zwiki.org (Bob Finch) writes:
> In a new Zope and ZWiki install, clicking on a UserOptions? link causes
> Zope to restart. Is this caused by the same Zope/python crashing bug
> mentioned above (ZopeIssue:560)?
Hi Bob, perhaps, turn on your debug log (see zope's doc/LOGGING.txt) and see what it says when zope crashes.
Simon Michael, 2002/09/06 00:40 GMT (via mail):
"Marcio Marchini" <mqm@magma.ca> writes:
> I was unable to upgrade.
> ...
> ImportError?: No module named PageTemplates.PageTemplate
Hi Marcio - it's looking for Page Templates, which I think you need to install as a separate product with zope 2.4.3. I'll look into handling this more gracefully in the next release.
Thanks - Simon
Simon Michael, 2002/09/06 00:52 GMT (via mail):
zwiki@zwiki.org (ktenney) writes:
> I created a new ZWiki. Pages I copy from my regular one into the test one
> behave normally, search works right, editform works right.
>
> Should I just copy all pages to the test, and delete the original?
Kent - in your original wiki it sounds as if someone (or zwiki's auto-upgrade ?) messed up your SearchPage?'s page type; and your editform/standard_wiki_header had a dtml bug (or broke due to an incompatibility in 0.10).
Your new wiki uses the default UI methods, is that right ? If you're happy with it then yes, move the pages over (less maintenance for you) and I won't pursue those glitches further right now. They are hard to pinpoint at this distance.
Peter Keller, 2002/09/06 08:42 GMT (via mail):
Hi, I'm back.
> No need - dtml methods are supposed to work and we will get them to work.
> Using the standard zwiki code, with your custom methods in the folder,
> what do you get when you put the following on a DTML-enabled page ?::
... snip ...
OK - tried this. Putting your code on a stxprelinkdtmlhtml page, with my minimal custom header/footers in the zodb, I get:
<Python Method object at 1410bd760> <Python Method object at 1412d5a90> <Python Method object at 1412e2830> <Folder instance at 14102bb80> <HTML> <HEAD> <TITLE>A ZWiki page</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF"> </BODY> </HTML>
Peter.
ktenney, 2002/09/06 11:01 GMT (via web):
Images and STX,
Is there STX syntax to place an image inline?
I seem to remember something like "image name" : img : image_location (spaces added to prevent rendering)
I can find no reference to this, only to html tags for images. Did I imagine it?
Thanks, Kent
Wim, 2002/09/06 14:51 GMT (via web):
Is it possible to have subfolders in ZWiki? And, if so, how can you direct a page reference to it?
Simon Michael, 2002/09/06 16:20 GMT (via mail):
zwiki@zwiki.org (Wim) writes:
> Is it possible to have subfolders in ZWiki? And, if so, how can you
> direct a page reference to it?
Yes, you can make a sub-wiki (or just a subfolder with some pages) inside your wiki folder using the ZMI. To link there from the main wiki, you'll need to use html links, stx links (subwiki/SomePage?) or RemoteWikiLinks. (In the last release you could just do [subwiki/Somepage] but this was removed).
The sub-wiki will acquire any custom UI methods from the parent, and it's wiki links will see the parent's pages, linking them as ../PageInParentWiki. So clicking on one of these will take you up to the parent wiki.
Simon Michael, 2002/09/06 16:21 GMT (via mail):
zwiki@zwiki.org (ktenney) writes:
> Is there STX syntax to place an image inline?
Yes there is.. and I think it has been enabled by default in current zope. Try the structured text wiki.
Simon Michael, 2002/09/06 17:14 GMT (via mail):
zwiki@zwiki.org (Peter Keller) writes:
> OK - tried this. Putting your code on a stxprelinkdtmlhtml page, with my
> minimal custom header/footers in the zodb, I get::
Peter - those look good. Because I tested this "at least once" and especially because no-one else reported it, I thought it was likely caused by something in your setup. But I have reproduced it, it's a bug alright.
On lines 229 and 232, change
is to
==. And, on line 267 move the # to
the beginning of the line. Thanks for the report.
I swear, I'm never going to use is again, it never does what I expect.
General call for help - zwiki needs your help to make releases more bug-free. The best thing you can do to help (well, aside from funding) is to develop a unit test for some bit of functionality, such as this one. The other best thing is to download and test the release candidates in your setup. FYI these come out on the 25th of each month.
Peter Keller, 2002/09/06 17:39 GMT (via mail):
> On lines 229 and 232, change
isto
==. And, on line 267 move the # to
> the beginning of the line. Thanks for the report.
Yes, that fixed it alright: this lets me migrate to 0.10. I'll wait for further news on the wikipage.zpt problem. With a bit of luck, I'll have the time to investigate the Zcatalog crash next week.
Thanks, Peter.
Bob Finch, 2002/09/06 20:07 GMT (via web):
> Hi Bob, perhaps, turn on your debug log (see zope's doc/LOGGING.txt) and see what
> it says when zope crashes.
Looks like it's getting a SIGSEGV::
2002-09-06T19:46:57 ERROR(200) zdaemon zdaemon: Fri Sep 6 12:46:57 2002: Aiieee! 24446 exited with error code: 139
Simon Michael, 2002/09/06 20:29.
Simon Michael, 2002/09/08 21:48 GMT (via mail):
Look in Control_Panel in the ZMI. I'd like to know what you have, to help me decide whether I must stop using prefix. It's a pain, but I suppose I'll have to.
If you don't want to upgrade your zope, the workaround is to replace lines 26-37 in ZWiki/dtml/zwikiWebAdd.dtml with something like (untested):
<dtml-in listWikis> <dtml-unless "_['sequence-item'][-7:] == '_config'"> <tr valign="middle"> <td nowrap <input type="radio" name="wiki_type" value="<dtml-var "_['sequence-item']">" <dtml-if "_['sequence-start']">CHECKED</dtml-if> > <dtml-var "_['sequence-item']"></input> </td> <td> <dtml-try> <dtml-with "PARENTS[-1].Control_Panel.Products.ZWiki"> <dtml-var "_.string.join(_[_['sequence-item']].description)">
But you'll probably find other occurrences. I like to use prefix.
Simon, 2002/09/08 21:55 GMT (via web):
Oops, meant to send that to #229 with older zope versions, cannot add Zwiki Web from ZMI (invalid attribute "prefix"). There's a problem: when the page name in a mailout's subject is free-form, replies don't go back to the originating page.
Peter Keller, 2002/09/09 08:40.
And also have a look at the thread on the zope-dev mailing list that I started about error code 136 at , which has some useful hints.
Peter.
MichaelIvey, 2002/09/09 14:04 GMT (via web):
SM: After some discussion on DebianWiki:ChitChat, I promised to mention something to you. Evidently some WikiWikiWeb users prefer adding pages by calling or something, instead of referencing it by Wikiname in another page. doesn't work...is there a URL syntax that would give this functionality? If not...then eh, it isn't that big a deal.
Mark B, 2002/09/09 14:26 GMT (via web):
I've set up a Zwikiweb and it works fine except that when I try to add a comment from a windows machine the add a comment button triggers the file download dialog as if windows does not know how to open file. If I pick "open file" it still cant open it because it doesnt know what program to use (The file has no extension of course) I have no trouble opening the wiki page from the ZMI. Any ideas? TIA.
Simon Michael, 2002/09/09 19:26 GMT (via mail):
Hi Michael - how's things going over there on DebianWiki? ?
Sure, you can call SomePage?/edit?page=NewPage? . NewPage? will be a child of
SomePage?, will inherit it's type and will be empty (unless you specify
type or
text). SomePage?/editform?page=NewPage? brings up the form prior
to creating the page.
Also the standard_error_message from a new 0.10 wiki presents some options for handling unrecognized urls. You can change it to go straight to the creation form, or even to create the page immediately - that's probably going too far.
Simon, 2002/09/09 19:30 GMT (via web):
(moved from archives) What about a SignatureShortCut to provide a short signature in ThreadMode?? - FloK
Simon, 2002/09/09 19:34 GMT (via web):
Mark - not really. What should be happening there is a redirect to after the comment is added. What browser version ? I assume it works with a different browser on the same machine.
Simon Michael, 2002/09/09 20:01 GMT (via mail):
> if you want to summarize them there afterwards that would be much
> appreciated.
PS I see you're already doing just that. Cool, I'll stand back.
Jay, Dylan, 2002/09/10 00:35 GMT (via mail):
> Mark - not really. What should be happening there is a redirect to
> after the comment is added. What
> browser version ?
> I assume it works with a different browser on the same machine.
Actually I've seen the download on comment bug in previous versions. It only happened however if nothing was entered in the comment field and the comment button was hit, which is weird. This was with IE6
Jordan, 2002/09/10 19:19 GMT (via web):?
Simon Michael, 2002/09/10 20:09 GMT (via mail):
zwiki@zwiki.org (Jordan) writes:
>?
Sounds interesting.. give a precise example so I can understand your setup. There was a post on how sub-wikis work a few days ago. If your header and footer methods are being ignored, you may be getting hit by a 0.10 bug - see KnownIssues.
interra, 2002/09/11 08:56 GMT (via web):
Hi,
I gave up to make wikipage work and tried to find out why old good DTML way do not work... Here are the pearls I've found :). The below is part of the ZWikiPage.addStandardLayoutTo:
elif ((hasattr(folder,'standard_wiki_header') and getattr(folder.standard_wiki_header, 'meta_type',None) is 'DTML Method') or (hasattr(folder,'standard_wiki_footer') and getattr(folder.standard_wiki_header, 'meta_type',None) is 'DTML Method')):
IMHO,
is should be replaced with
==...
But that was not the last thing into game ;) ZWikiPage._renderHeaderOrFooter:
method = getattr(self,'standard_wiki_'+hdrOrFtr) return #method(self, REQUEST, RESPONSE, **kw) return apply(method,(self, REQUEST, RESPONSE), kw)
What do you expect function to return? Right, None :)) Moving hash to beginning of the line corrected situation.
Hope code like this will never get out to public again. I'd release 0.10.1 ASAP, because far not everybody upgrading will step into ZPT at once and far not everybody'll make wikipage to work. I have no reason to as I've made old DTML work... When I'll have time I'll take a look on ZPT rendering, not right now.
Regards,
myroslav at zope.net.ua
WimBekker, 2002/09/11 13:14 GMT (via web):
Ok, I'm a NewBee? and I want magic from day one. I've got my ZWiki up and running. Everyone can add comments. Now what I want is to create a ZWiki page with a list, every item (record) should be entered via a form. The list should display some fields of the records and every item in the list is a new ZWiki page. Every record should display the status of the item. This status must be changed by users. (Kind of like Zope does it in their managing). Can someone put me on the right track? Thanks.
Wim Bekker, 2002/09/11 13:23 GMT (via mail):
Browsing through the Zwiki pages, I found the ZwikiIssueTracker. I realy like that kind of functionality added to my ZwikiWeb?.
2002/09/11 15:22 GMT (via web): name at all.
2002/09/11 15:42 GMT (via web):
Hi! Please help - error [...]? deleted rest of posting - FlorianKonnertz
2002/09/11 15:54 GMT (via web):
I deleted my former posting, because the error cause was embarrassingly :o) simple, to keep the clearness of the page. - FloK
Simon Michael, 2002/09/11 16:38 GMT (via mail):
zwiki@zwiki.org (interra) writes:
> I gave up to make wikipage work and tried to find out why old good DTML
> way do not work... Here are the pearls I've found :). The below is part
Hi Myroslav, thanks for pursuing this and posting the pearls, but I have to tell you I already dived for these ones. You should check GeneralDiscussion or at least KnownIssues. :)
> Hope code like this will never get out to public again. I'd release
> 0.10.1 ASAP, because far not everybody upgrading will step into ZPT at
I'd like that too. There's a call for help on KnownIssues. Given that KnownIssues describes simple workarounds for most problems and 0.11 is due for release in a little over two weeks, do you think a 0.10.1 release is justified ?
interra, 2002/09/11 16:53 GMT (via web):
Hi,.
I think I am not alone who was lost in that bug-tracking. Sometimes wiki becomes critical in production process. Downgrading to 0.9 was not possible due to AutoUpgrade? process which took place in 0.10...
So another release which solved our problems would be vital for us. Unfortunately editform still do not have proper header ...
Regards,
myroslav at zope.net.ua
Simon Michael, 2002/09/11 16:53 GMT (via mail):
>
Ah, I think I understand. Your sub-folders have wikinames so you were able to link to them as if they were pages, and index_html took care of redirecting to an actual page in the sub-wiki. But did that not get a bit confusing in practice ?
- 10 is more strict about linking only to zwiki pages, and not other kinds of zope object, to resolve some problems which I can't quite recall at the moment, possibly something to do with free-form names/fuzzy links.
You could hack the old behavior by commenting out ZWikiPage.py lines 1309 and 1329 like so:
1): #self.isZwikiPage(getattr(folder.aq_base,m))): 1): #self.isZwikiPage(getattr(folder.aq_parent,m))):
Simon Michael, 2002/09/11 17:13 GMT (via mail):
zwiki@zwiki.org (interra) writes:
>.
Ok, KnownIssues is intended for this situation. But note you have to visit it, not just subscribe, since it is updated by edits and we don't currently mail those out.
> I think I am not alone who was lost in that bug-tracking. Sometimes wiki
> becomes critical in production process.
Great. Ok, what shall we do to improve things ?
> Downgrading to 0.9 was not possible due to AutoUpgrade? process which
> took place in 0.10...
Have you tried it ?
> So another release which solved our problems would be vital for us.
Is applying the workarounds or tracking the CVS version not an option in your situation ?
> Unfortunately editform still do not have proper header ...
Say more about this if you like. My first questions will be, do you have an old standard_wiki_header in place ? Do you have an old editform in place ? Is deleting your old editform an acceptable solution ?
Thanks for the feedback. -Simon
2002/09/11 17:21 GMT (via web):
Grr.. look at dumbass stx turning 0.10 into a list item again. I though we had persuaded it not to do that.
interra, 2002/09/11 17:31 GMT (via web):
Ok, I'll put KnownIssues into my MozillaBookmarkMonitoring? to track changes i it.
I tried to return ZWiki 0.9.9 into Products. That gave me just prerendered pages without WikiName......
I get "could not render this." instead of header. Deleting editform gives just built-in editform without standard_wiki_header. We need standard_wiki_header there or at least the way to customize header.
Regards, myroslav at zope.net.ua
ktenney, 2002/09/11 17:56 GMT (via web):
I don't know if this is applicable to your situation ...
I had difficulty with 9.9->10 upgrade;
cannot render this, indexing problems ... I ended up creating a new ZWiki, copying all the pages from the old one, pasting them into the new one. Fairly painless, 0.10.0 is working great now with my old pages.
Simon Michael, 2002/09/11 18:29 GMT (via mail):
zwiki@zwiki.org (interra) writes:
> I tried to return ZWiki 0.9.9 into Products. That gave me just
> prerendered pages without WikiName...
I don't know what this means exactly, but I'll take your word for it - it sounds like there are problems downgrading from 0.10. (Though I don't try to support downgrades, I would expect them to mostly work).
>...
Throughout zwiki's history, CVS has usually been as stable and better than the last release. I don't know if this will remain as true.
With the new ReleaseProcess I did intend to use branches for maintenance, but I'm not familiar with them and prefer to avoid the added complexity if I can. I'd rather be working on the wishlist. I was thinking that a fast release cycle plus clear workarounds would allow me to skip branches.
However it seems as if some motivated person could easily take on this task, and perhaps we have enough users to make it worthwhile. It might be a very good thing. Are you interested ?
On the third hand, remember: this release was a major one, including extensive changes from a long period of development; it never looked likely to attract sufficient testers until I actually released it (but thanks to all of you who did test the prerelease); hence it was likely to show a bunch of issues, with 0.11 being the more solid bugfix release; and, we are in the first iteration of a new release process, which as it gets established will (I hope) encourage wider use, testing, development, and more consistent releases.
> I get "could not render this." instead of header. Deleting editform
> gives just built-in editform without standard_wiki_header. We need
> standard_wiki_header there or at least the way to customize header.
I see. This seems to happen often (always ?) when someone upgrades to 0.10, keeping 0.9.9's or a customized header & editform in place. To find out exactly why, get rid of the try..except at ZWikiPage.py line 2339 and let's see the traceback.
Cheers -Simon
DeanGoodmanson, 2002/09/11 18:39 GMT (via web):
Naive Question #83: Is "Fuzzy Matching" the wonderful feature that allows case insensitive URL for wiki pages: ?
Great feature. Upgrade motivation.
Simon, 2002/09/11 19:19 GMT (via web):
Thanks. It's the new standard_error_message , which uses the fuzzy matching methods to do it's thing. It ignores case, spaces and does partial matching. Eg you could enter . If it doesn't find any matching page name, it offers to create it or do a text search. This is part of any new 0.10 zwikidotorg wiki.
Simon, 2002/09/11 19:50 GMT (via web):
So that uses fuzzy matching code, but "Fuzzy Matching" per se means that case and whitespace in square bracket links are ignored, ie these links are equivalent:
[General Discussion] : General Discussion
[ gen eral discu ssion ] : gen eral discu ssion
as is GeneralDiscussion, in this case. Why ? "Free-form Names" are translated to wikiname-like ids, which combined with fuzzy linking encourages serendipitous matches between wiki names and free-form names. NB if we were creating each of these pages in an empty wiki, the [ General Discussion ] page would get the id/url GeneralDiscussion. [ gen eral discu ssion ] would get id GenEralDiscuSsion. Whichever of these is created first, the other would then link to.
I'm hopeful this will do what we want most of the time. If it gets confusing a site may want to stick to one or the other (either wikinames or free-form names).
Simon Michael, 2002/09/11 20:07 GMT (via mail):
Simon Michael <simon@joyful.com> writes:
> On the third hand, remember:
PS, on the fourth hand, it would be nice if downgrading was routine. If someone can come up with a better story there (wrt auto-upgrading, etc) it would be a good thing.
DeanGoodmanson, 2002/09/11 21:50 GMT (via web):
ZWiki Net Scout Here ...
The Debian Wiki site was linked ( ) on a sub-category post today on Slashdot .
I didn't notice any WikiVandalismDiscussion .
Simon, 2002/09/11 23:23 GMT (via web):
checkin: "support edit log comments... display in history link title in header, and at top left when diff browsing". These log notes are saved in the last_log page property and in the zope transaction log.
Simon, 2002/09/12 01:05 GMT (via web):
Well spotted DG. Good to see zwiki taking care of business.
I killed the site briefly today in strange ways caused by stupid syntax errors. 30 day pack: 420 -> 355Mb.
2002/09/12 07:11 GMT (via web):
What about the option of Render this page as PlainText? + zwiki in the edit form? - I need it because i want to integrate my ToDoLists? in my wiki. Until now i use PlainText? files as ToDoLists?, which is quite comfortable with vi, here's an example:
Nr. Description of task Who? Date added --------------------------------------------------------------------------------------------------
### os openspirit.de 5 zope: keep doc-types via webdav PUT-factory sv 05-29 11 img smaller 06-12 12 layout: border aorund text, mainframe 06-12 14 pan: new profile, signature loc. 06-18 16 CMF: introduce for the website 06-19 19 news-problem 06-21 24 cvs-server: test sv 05-27 26 fkw dns redirect change 07-25 27 colors: centralize with dtml-var 07-26 29 memento: publishing on openspirit 08-01 31 Last update - msg generate by dtml automatically 09-04 32 MyWiki: css 09-11
I want to become them linked by wiki, i would use more WikiWords? up from now. Do i need the new rendering options? Maybe there's another way by changing the format of the files. How can i use them as stx? The only problem is the newline - i haven't got the thing about changing newlines to
<br> yet - perhaps i should read something about it first (ok, i'll do;-). But any hints and comments appreciated nevertheless. - FlorianKonnertz
DeanGoodmanson, 2002/09/12 17:20 GMT (via web):
I'm having a consistent problem with my Zope sites occasionally displaying a cached version of a page, not the recently changed version.
I realize how to force the browser to never cache, but that's not a reasonable solution.
Could someone direct me to the HTML headers (revalidate?) and, more importantly, how to implement them properly in my Zope server (or app
header pages, if need be.) ?
Simon Michael, 2002/09/12 19:02 GMT (via mail):
zwiki@zwiki.org (DeanGoodmanson) writes:
> I'm having a consistent problem with my Zope sites occasionally
> displaying a cached version of a page, not the recently changed version.
So a general zope problem (not zwiki), right ? Are you on one of the zope hosting providers ? Do you normally access your sites through an apache proxy, and if so does the problem go away when to talk to zope directly ? And might there some other proxy between you and your sites, eg at your zope host or at your ISP ? (I don't think it's easy to tell). Does googling the zope lists or searching the zope collector help ? I've seen this discussed before somewhere.
> Could someone direct me to the HTML headers (revalidate?) and, more
> importantly, how to implement them properly in my Zope server (or app
>
header pages, if need be.) ?
You can use wget or curl to see what headers you're getting. There's a mozilla bookmarklet that should work too. One of the relevant headers is Last-modified, there are others.
JohnGreenaway, 2002/09/12 19:25 GMT (via web):
You can prevent caching with:
<dtml-call "RESPONSE.setHeader('Cache-Control','no-cache')"> <dtml-call "RESPONSE.setHeader('Pragma','no-cache')"> <dtml-call "RESPONSE.setHeader('Expires','-1')">
They set the proper headers to stop both proxies and browsers using old page versions.
You achieve the same effect with html meta tags, but the above's preferable as it actually puts the info in the http header for you.
Simon, 2002/09/12 22:24 GMT (via web):
My last mail hasn't appeared yet. Is this page crashing again ?
Simon, 2002/09/12 22:26 GMT (via web):
No ? Hmm imeme mail delay perhaps..
> What about the option of *Render this page as PlainText? + zwiki in the edit
I guess we want to add this, since it comes up and there seems to be no existing way to do it.. add this before ZWikiPage.py:render_plaintext():
def render_textlink(self, client=None, REQUEST={}, RESPONSE=None, **kw): """ render the page using fixed-width plain text plus zwiki links """ # pre render t = str(self.read()) if not self._prerendered: get_transaction().note('prerender') t = html_quote(t) self._prerendered = t or '\n' if kw.get('pre_only',0): return # final render t = self._prerendered t = self.renderLinksIn(t) t = "<pre>\n" + t + "\n</pre>\n" t = apply(self.addStandardLayoutTo,(t,),kw) return t
and set your page type to
textlink. You can add this before the
plaintext option in your editform in the zodb or filesystem:
('textlink', 'text + links'),
2002/09/13 09:23 GMT (via web):
Subject: option of *Render this page as PlainText?? + zwiki
Hi! Nice to hear that this feature is already supported. Unfortunately i had no success with the suggested solution yet. I added the new funtion to ZWikiPage.py and in the editform the following lines:
<OPTION value="textlink" <dtml-if "page_type == 'textlink'">SELECTED</dtml-if>> text + links </OPTION>
Did i miss anything?? - FloK
Simon Michael, 2002/09/13 16:39 GMT (via mail):
Hi Florian - here's my code and editform for comparison. The last one cannot be found. --FloK Sorry, fixed.
Simon Michael, 2002/09/14 01:40 GMT (via mail):
I made some progress on InternationalCharactersInPageNames, inspired by Lalo and others. The server is a bit more bloated than usual as a result, some refactoring is needed.
I was planning to reorganize the WikiTrails? into user, admin and developer trails. I still think trails make sense, but I am leaning towards removing them for now. It feels to me as if too many links are making pages harder to read.
Sunday is the 15th, when we go to bugfix mode for 0.11 (code yellow!). So if you have any wild features to check in, now would be the time (how's things going J-David ?) but I'll be quite content for this to be mainly a bugfix release.
Happy weekend, all..
2002/09/15 18:19 GMT (via web):
Hi All, I just downloaded zwiki and installed it. It all looks functional and I'm sure it is a great product, but I think some work might be done on the out-of-the-box experience. The first thing with an app like this, is make it look the way I want it. I have been searching for any hint about this for about half an hour, but nothing, except some arcane references to standard_wiki_header, but I could not find any example implementations, other then one link that didn't work. Don't get me wrong, I really think it is a great product and I'm sure I will get it up and running. But I have been using too many Zope products that were great, but no easy to get running.
Douwe Osinga dmo at oberon dot nl
2002/09/15 21:23 GMT (via web):
Douwe - it used to be more obvious, and will improve again. Thanks for the input. --SM
2002/09/15 22:54 GMT (via web):
Hi All, I found it, I think, on disk in the templates directory. Are there anymore templates to choose from, or is there a howto on this?
Douwe Osinga dmo at oberon dot nl
2002/09/15 23:05 GMT (via web):
First, apply the fixes for 0.10's #225 custom wikipage page templates don't work right], or use the cvs version. Then in your wiki folder create either
standard_wiki_header&
standard_wiki_footerDTML methods or a
wikipagepage template. Use the default content in ZWiki/templates/defaults as a starting point.
2002/09/15 23:52 GMT (via web):
hello, I'm having some problems with the page_type attribute (running zope 2.5.1 and zwiki 0.10). when I change it under the "properties" tab and click the "properties" tab again, it goes back to whatever type it was before. I have the correct permissions. so I don't seem to be able to change the page_type. am I doing something wrong? tia.
sean at rootnode dot com
2002/09/16 00:13 GMT (via web):
Hmm, don't see that here. Browser caching issue ? Are you able to change it using the editform ?
2002/09/16 05:17 GMT (via web):
Cleaned up RestructuredText page. Attempt at kicking off a discussion.
I've been using my personal wiki for journaling, and considering the BlogFace as a better means for multiple team members doing the same.
Anyone had bad luck or major surprises with Blogface? ZWikiAndBlogFace?
Jay, Dylan, 2002/09/16 05:51 GMT (via mail):
Ok Simon, I got another quick piece of functionality to suggest. RoundUp? (not sure if it's still going) was an issue tracking with interesting threaded email support. Something I really liked was that it would post the first message of a thread to a general list and if you replied it would autosubscribe you to that thread, otherwise you wouldn't hear about it again.
What I'm suggesting is that if I send an email to page as I just did (to RestructuredText page) that I should be automatically subscribed to that page, which is I think a good default because by submitting information I'm showing enough interest that I would most likely like to be subscribed. If I then want to take myself off of it, I would just unsubscribe (do so should probably be easier too, e.g. url at the end of every message, or keyword in subject).
Any thoughts?
WimBekker, 2002/09/16 14:43 GMT (via web):
Maybe more related to ZoPe, but how can I change my ZWikiWeb? from one server to another (on the same LAN)?
2002/09/16 15:00 GMT (via web):
Dylan - I think that's a great idea, we'll try it.
Wim - one way is to export your wiki folder, move the .zexp file to your new server's import directory, and import it. The ZSyncer product is another.
2002/09/16 15:09 GMT (via web):
I've just started using a text-mode browser as my primary browser (w3m) and I'm loving it. Tip: I was having a problem accessing zwiki.org due to an accepts-language header that Localizer didn't like. You may need to remove the ";q=1' in the options screen.
Browsing WikiWikiWeb (or WardsWiki, as it's also known) has become much more efficient. I looked around for the first time in a while and saw great signs of continued growth (no surprise) and also high signal to noise ratio. The WikiWikiWeb:NewUserPages and WikiWikiWeb:TourBusStop are looking really good.
2002/09/16 19:20 GMT (via web):
Merged CMFWiki (I did this yesterday, but forgot to check it in). There are questions to answer about migration/compatibility/skins but it mostly works.
Jordan, 2002/09/16 20:12 GMT (via web):
Would there be an easy way to add an option in the editform that allowed authors to decide whether or not to support comments on a particular wiki page? This could be checked as an option whenever creating or editing a page. Suggestions?
Jordan, 2002/09/16 20:18 GMT (via web):
Would there be an easy way to add an option in the editform that allowed authors to decide whether or not to support comments on a particular wiki page? This could be checked as an option whenever creating or editing a page. Suggestions?
2002/09/16 21:08 GMT (via web):
RegulatingYourPages
Jay, Dylan, 2002/09/16 23:44 GMT (via mail):
> RegulatingYourPages
I wonder how this was turned on. Does WikiForNow really put regulations on the editform? It makes the form really cluttered :(
2002/09/17 02:04 GMT (via web):
So don't turn it on. :)
2002/09/17 02:35 GMT (via web):
you miss my point. The regulation page is a fantastic idea. I was just wondering out loud if it wasn't better on it's own page. I think that's how zope.org does it.
DeanGoodmanson, 2002/09/17 15:43 GMT (via web):
I think I may have encountered a new bug with the 0.10.0 standard_error_message, details here :
Jordan, 2002/09/17 21:21 GMT (via web):
How do you make the page hierarchy show as the default in 0.10- without forcng users turn it on in their UserOptions??
DeanGoodmanson, 2002/09/18 02:02 GMT (via web):
Whew! Just went through HowToAddAZwikiCatalog? , and ZwikiIssueTracker .
I don't have fuzzy mathing. :-(
I can't get the front page links
current product issues and
closed issues to work on my setup ( DeansUpgrade? ) to work. Other notes on the pages above.
Simon, 2002/09/18 05:37 GMT (via web):
14-day pack: 445->74 Mb
Simon, 2002/09/18 05:41 GMT (via web):
Er wait - that's 165Mb.
Between memory and disk space, and imeme clamping down on fat boys like myself, zwiki.org has some serious bloat issues to resolve.
Wim Bekker, 2002/09/18 14:45 GMT (via mail):
I've version 0.10 just installed. When I added a page in the ZMI, Save Changes and View the file, my changes are not visible. I have to go to the editform, change the contents and click Change
DeanGoodmanson, 2002/09/18 15:58 GMT (via web):
Wim, That sounds familier. I think that may be a general Zope ZMI issue. See my similar question and responses at the top of this page.
JohnGreenaway, 2002/09/18 20:03 GMT (via web):
Just upgraded to zwiki 0.10 on a clean copy of zope. Decided to keep notes on exact steps taken, and any issues found. Here goes...
- Exported our existing wiki
- Installed clean Zope 2.5.1
- Went to structured text / HTMLClass.py to fix the silly <p> inside <li> issue.
- Put zwiki, localFS and refresh products into the products folder.
- Started zope, imported wiki.
- View. Works straight away. Our header/footer doesn't show up though...
- Went to KnownIssues. Saw issues listed and applied fixes. Refreshed.
- Error appears now. Search for error. See from GeneralDiscussion line #267 needs commenting out.
- All working now. Time taken around 30 mins.
- Did the AddingSmiliesPatch ;)
Not bad really. Think the KnownIssues page worked well. Nice to have a list of the tracker issues associated with each release.
DeanGoodmanson, 2002/09/18 20:51 GMT (via web):
Thanks for the narration John. Very encouraging. What OS ?
Do you have any trouble editing standard_error_message ? If different than could you post it?
I think this is why my page matching isn't working, either.
JohnGreenaway, 2002/09/18 21:33 GMT (via web):
Installed on XP. Prod server will be NT.
Haven't got a standard_error_message (never had one in our existing wiki), so I've just got the one from and tried it.
Type in a non existant page and it appears and says "could not find - create, search etc". Underneath that a stack trace appears. Same as you mention on DeansUpgrade?. I'm assuming it's not supposed to show a trace. Fuzzy matching isn't working from the url.
Looking at the source of the error page I'd rather SERVER_URL was the wiki path instead. Otherwise it trys to search / create a page called "wikipath/page" rather than "page".
JordanCarswell, 2002/09/19 02:19 GMT (via web):
> RegulatingYourPages
Okay, I turned this feature on, but the pages are still using security settings from Zope management. No matter what I change in Regulations, the page still conforms to the inherited security settings. Probably I missed a step?
2002/09/19 02:32 GMT (via web):
posting fron rich pinders kickass wireless zaurus at lazug. quick pointer for you guys, i think i added a missing quotes in standard err msg, not in known issues, check cvs. thanks for the install reports, very useful. sm
DeanGoodmanson, 2002/09/19 15:32 GMT (via web):
Thanks, Simon.
I'm having a rotten time with CVS this morning ( CVSRepository ) from Windows and Mac (newb problems...), would you mind posting it, or updating the FrontPage latest cvs (even if it's unusable but files are intact.) ?
DeanGoodmanson, 2002/09/19 16:03 GMT (via web):
Noticed JumpSearch? has been retired. Good thing...confusing to newcomers. An addition to the TODO list may be writing a HowTo? for adding an "I'm Feeling Lucky" button.
Simon, 2002/09/19 21:17 GMT (via web):
Just passing through.. I mean that revision 1.6 . Does that help anything ?
I couldn't figure out what the heck had happened CVSRepository.. until I found CVSRepository. :) Do you think that's better ? I had been leaving things like FAQ and CVS all-capitalized.
Ack, that was
DeanGoodmanson, 2002/09/19 22:08 GMT (via web):
Dang...got the latest standard_error_message , and still don't have near matching URL's....
DeanGoodmanson, 2002/09/20 18:17 GMT (via web):
Would a WikiBadge be appropriate for the a default value for the
with headingfield on the comment box?
DeanGoodmanson, 2002/09/20 18:27 GMT (via web):
Egads! My page has become a WikiBadge according to WikiBadges (yep, slow...and the pages aren't turend into links). I'll try to fix that later.
I have learned that my previous comment is completely off the mark, and
dtml still looks like the best route for marking a pages
with heading
default.
Simon, 2002/09/20 19:51 GMT (via web):
It's friday and time for.. Operation Get Dean Out Of Trouble! Dean if you like, msg me on the #zope IRC channel. Not sure of your status with 0.10 - first of all, I can't reproduce . Is your standard_error_message the same as 0.10's ? Is it the same as this site's ?
Simon, 2002/09/20 21:26 GMT (via web):
added to PeopleUsingZwiki: " - JordanCarswell's university site. Displays wikinames with spaces. Well organized and friendly, you should check out the excellent introductory documentation ." Thanks for posting this link Jordan.
Simon Michael, 2002/09/20 22:15 GMT (via mail):
"Casey Duncan" <casey@zope.com> writes:
> Hey there. Several users have informed me that ExternalEditor would return
> spurious 404 not found errors when trying to save ZWiki pages. I tracked the
> problem down to the PUT handler of the ZWikiPage class and the way it
> handled DAV locks.
Casey - great news, thanks very much. I'll get this into 0.11.
Regards -Simon
Simon Michael, 2002/09/20 23:01 GMT (via mail):
Alexander - thanks for sharing your explorations. It sounds as if we have run in parallel in a few (minor) places - usernames in subscriber lists, wiki-linked plain text, preventing wikilinks within href urls etc. (NB the latter should be resolved in the current release).
I cc your description to GeneralDiscussion for others. I assume you're ready to share that zip but I'll let you post the url here. People interested in multiple wikis, wiki types etc. should also check out the docs at . (NB: in places where you used "wikis" and "zwikis", I was confused about whether you meant wiki webs or pages).
If later on you want to post individual enhancements as focussed patches against current code, those will stand the best chance of getting into the main codebase, as I'm sure you'll understand. Linus-style :) Either way I'll mine your code for things needed. Thanks!
-Simon
Alexander Van Heuverzwyn writes:
> Hello Simon,
>
> a while a go i posted a comment on your site(as AlexanderVH?) about
> possible improvements on wikis, and you asked for the code. at
> there is zipped
> zwiki code that i based on your zwikicode. It's freely available, but
> the zip as is not clean enough to deliver. It's not running yet on the
> openidea server, but most features are active on the server.
>
> I included zexp files.
> there are a set of modifications
> - it's ment for use in parallel folders that share common dtml methods. So there is a top folder with a set of wikiwebs in
> subfolders. There is a common folder called MAin? (see zexp) that contains all the help. There is a templatefolder(zexp). This
> contains the
system type pages that are needed for a new folder. Next to the FrontPage i made a NavigationAids, to separate the
> introductory startpage from the navigational annotated NavigationAids.
>
> an images folder(zexp) and a folder with common methods(zexp). The content of this last folder actually needs to move up a level
> (copy paste) but i couldn't zexp it otherwise.
> - pages have a type, with their own icon, helpfile, template, header and footer. These work with overrides, so you can create new
> types without having to create the support files. there is no subclassing to allow easy switching of types.
> - creation of a new page goes through a selection form to choose type and rendering.
> - interwikilinks can be rendered in different ways. The inlining mechanism means other wikis or any page from anywhere can be
> fetched and included in full or as a tooltip.
> - the remoteurl syntax is generalised to a formatting string, such that the second part of the interwikilink does not necessarily
> come at the end of the url.
> - to fix rendering problems when inlining and simplify the "!" prefixes interwikilink and wikilink mechanism are merged
> - multiple text windows in wiki, using a simple keyword to alternate editable and noneditable parts. backward compatible.
> - email subscription with usernames. can be restricted to usernames only.
> - append can be on top or at bottom.
> - additional rendertype: plaintext formatting, but with wikilinking
>
> several things i need to do first - when i find the time
> - i have undesired wikilinking in urls in anchors
> - adding a simple wiki should still work. So i have to change the default header and footer back
> - make installation easier.
> - improve documentation
> - add readme.
> So you can extract what you want from the current code, or wait. Or i'll clarify on request.
>
> Because i made large changes to the code , merging with your more recent
> code is not obvious.
>
> enjoy :)
>
> Alexander Van Heuverzwyn
Dean Goodmanson, 2002/09/21 02:30 GMT (via mail):
> Is your standard_error_message the same as 0.10's ?
> Is it the same as "this
> site's":/zwikidir/templates/zwikidotorg ?
Yes, thanks to your cvs url. I no longer have the edit problem.
My current low priority problem is case insensitive links.
I'm diving headlong into a nested wiki experiment. Safety zone is that I only have one catalog in the sub-wiki.
I dropped by too late on irc, thanks for the offer.
2002/09/21 16:12 GMT (via web):
I've added a page describing the various options for tracking bugs using Zope apps: ZopeIssueTrackers?. Please add your comments. I think something like this must be mailed to the author of (Call Center, Bug Tracking and Project Management Tools for Linux).
Simon, 2002/09/21 16:47 GMT (via web):
Good idea, interesting link. --Mr. Super-Duper I-am-Head-of-the-Project Big-Shot Might-Be-Linus-Torvalds-Soul-Brother
Simon Michael, 2002/09/22 22:22 GMT (via mail):
> after many hours with zope and zwiki, I'm starting to be productive. I'm no
> slouch, but there definitely is a steep learning curve for customizing and
> fully taking advantage of zwiki if you're not a zope guru.
Thanks for your feedback back in july.. I think these issues are steadily improving, as always these things will change with time and hard work.
> I want more template sites. I might even contribute a skeleton of mine -
> would that be helpful to included in the release?
At one time that was my intention - I invited links & uploads on WikiTemplates, and I built in infrastructure to allow multiple templates to be installed either on the filesystem or in the zodb. That hasn't been used and I was thinking of taking it out. At the moment I've lost interest in bundling content with the release.
The big issue is maintenance, of the content and the dependencies on zwiki's api. If you have something to show, by all means describe it or post screenshots on the wiki, eg on WikiTemplates or right here on GeneralDiscussion. If you have something that's ready to download and be useful to others, and you will be keeping it current as new versions of zwiki are released, that would be great too. Either way I think the wiki will work better as the distribution channel for this kind of thing.
> and lastly, we need a guide that explains how to install apache, then
> zope, then zwiki, then configure apache to redirect a given path to the
> zope root (so there's no :8080), then configure zope to redirect it's
> homepage to a given zwiki. After many months, I'm stilly experimenting
Yes that would be great. There are some starting points pencilled in at ZwikiDocumentation.
Simon Michael, 2002/09/22 22:27 GMT (via mail):
Did I reply to this ?
zwiki@zwiki.org (dhart) writes:
> Great product, Simon! I use Zwiki for collaboration among my business
> partners. Can I buy you a beer? :)
Certainly, I'll a have a pint of stout - ahh, thanks. :)
> They (the business partners) have asked for some new features:
>
> 1. On edit and comment areas, A "send update to subscribers" check box,
> defaulted to "yes".
>
> 2. On edit and comment areas, A "send full document to subscribers" button.
> Possibly with an HTML/STX choice radio button.
These things are easy implement by hacking standard_wiki_footer and possibly with python script helper - eg search for a python script recipe for sending a mail message. You can get the current page's (source) text with eg the text method.
Simon Michael, 2002/09/22 22:28 GMT (via mail):
zwiki@zwiki.org (Jordan) writes:
> How do you make the page hierarchy show as the default in 0.10- without
> forcng users turn it on in their UserOptions??
At present you have to hack all the appropriate dtml conditional statements in the header and footer.
Simon Michael, 2002/09/22 22:31 GMT (via mail):
RFC - I was thinking I'd rename
wikipageto
wiki_page_templatefor clarity.
Simon Michael, 2002/09/22 22:58 GMT (via mail):
zwiki@zwiki.org (DeanGoodmanson) writes:
> Whew! Just went through HowToAddAZwikiCatalog? , and ZwikiIssueTracker .
>
> I don't have fuzzy mathing. :-(
Checking - when you said this, I think you meant typing partial urls didn't find the right page (due to a broken standard_error_message). I don't think you meant that eg [front page] failed to link to FrontPage.
Jay, Dylan, 2002/09/22 23:57 GMT (via mail):
> added to PeopleUsingZwiki:
> " - JordanCarswell's university
> site. Displays
> wikinames with spaces. Well organized and friendly, you
> should check out
> the excellent "introductory
> documentation": ."
> Thanks for posting this link Jordan.
Nice layout. I havn't downloaded the latest release, but is it possible to share skins easily yet?
Jay, Dylan, 2002/09/23 01:13 GMT (via mail):
> The big issue is maintenance, of the content and the dependencies on
> zwiki's api. If you have something to show, by all means
Doesn't that indicate that perhaps there is too much dependence on the Zwiki api, or that the api changing too often? If it's hard for you to keep up to date, think how hard it is for everyone else who tries to upgrade. Anyone got any ideas on how best to decouple the templates from zwiki api? From my last look, it seemed that way too much logic is in the templates themselves, which would make it hard.
Jay, Dylan, 2002/09/23 01:16 GMT (via mail):
> RFC - I was thinking I'd rename
wikipageto
wiki_page_templatefor
> clarity.
I like names more like "template_wiki_page" since then then templates tend to sit togeather in ZMI
Simon Michael, 2002/09/23 17:23 GMT (via mail):
Dean writes:
> FYI..Make 4 changes to one page, got a zope memory error.. site was down
> for less than 5 seconds.
Thanks.. zope & zwiki is frequently growing to the 100Mb hard limit on this server right now. When I notice, I restart manually; when I'm away, it should die and restart itself. I think occasionally it does not die immediately, making the site unresponsive for some period of time.
I am tracking these things at #232 frequent errors while browsing zwiki.org.
Simon Michael, 2002/09/23 17:34 GMT (via mail):
John, you wrote:
> Looking at the source of the error page I'd rather SERVER_URL was the
> wiki path instead. Otherwise it trys to search / create a page called
> "wikipath/page" rather than "page".
I don't see/understand this problem - if the issue remains, please follow up with a reply addressed to tracker@zwiki.org - thanks.
Simon Michael, 2002/09/23 17:39 GMT (via mail):
zwiki@zwiki.org (JordanCarswell) writes:
> Okay, I turned this feature on, but the pages are still using security
> settings from Zope management.
Jordan - I opened #247 page regulations have no effect ? for tackling this.
Simon Michael, 2002/09/23 19:16 GMT (via mail):
zwiki@zwiki.org (Jay, Dylan) writes:
> I like names more like "template_wiki_page" since then then templates
> tend to sit togeather in ZMI
I do this sometimes too, but for now I'm not persuaded because it's not as self-explanatory when mentioned in beginner documentation, and I think "wiki_page_template" will also typically sit with the other templates & methods at the bottom of wiki folder listing.
Simon Michael, 2002/09/23 19:36 GMT (via mail):
"Alexander Limi" <alexander@limi.net> writes:
> Thank you very much, you just solved one of my major headaches
> concerning Wiki in Plone.
Good, good.
> The other headache is that I need all web code to be Page Templates, so
> we can get a proper Plone interface on it.
>
> Is your code very DTML-centric, or do you cleanly separate logic into
> PythonScripts? or similar? I will do an interface as soon as possible if
> someone can help me get it into Page Templates :)
I'd say "DTML-centric" is accurate. I found that fewer parts was more efficient for me to work on and debug.
Page template support (only slightly broken!) was added in 0.10. Current status is: the major UI pieces (main page layout/backlinks/editform/subscribeform) can be defined with either DTML methods or page templates. Zwiki's built-in defaults are implemented as DTML methods, except for wikipage, which is a straight conversion of the DTML standard_wiki_header & standard_wiki_footer. (I plan to rename this to wiki_page_template). The other three could be converted too, but I'm in no hurry to do so.
> I will need to talk to you about some interface and implementation ideas
> that will enable ZWiki to become more powerful in the CMF context, but I'm
> a bit too tired to do that right now.
>
> Again, thanks - and I hope to be able to make ZWiki have the Plone look
> and feel as soon as possible.
I found this was easy to get, to a certain extent - I put a standard (0.9.9) zwiki in a plone site, and set up minimal standard_wiki_header & standard_wiki_footer that included standard_html_header & standard_html_footer. Voila! Ploneish. It was good of plone to provide those dtml methods.
> Good riddance to CMFWiki.
CMFWiki was the intrepid scout that spent a long time out in the CMF world, reshaping itself and gathering valuable data. Now it has been called in to the collective and it's knowledge assimilated!
Some may prefer to keep using CMFWiki for now, eg because Zwiki's cmf support is immature or because of zwiki's different memory/zodb usage characteristics. The two products should coexist fine.
-Simon
Simon, 2002/09/23 19:50 GMT (via web):
Here's a quick update on CMF status by the way: the current cvs code should work either inside or outside a CMF site. In a CMF site, zwiki pages should work much like CMFWiki pages except with the latest zwiki features. Ie they should show up in the content management ui and be searchable. They don't participate in workflow. Permissions and regulations inside the CMF are not yet implemented. CMFWiki content is not affected.
Like CMFWiki, you have to run an external method to add wiki support to a CMF site. See the instructions in ZWiki/Extensions/CMFInstall?.py. There's a licensing issue to resolve here.
The default content used is CMFWiki's, and the skin is ZWiki's, so there will be things to resolve here.
2002/09/24 04:04 GMT (via web):
Details on the
changes made, but page the samecomment:
My problem: I'm having a consistent problem with my Zope sites occasionally displaying a cached version of a page, not the recently changed version.
Wim Bekker's problem: I've version 0.10 just installed. When I added a page in the ZMI, Save Changes and View the file, my changes are not visible. I have to go to the editform, change the contents and click Change to see my changes.
I've primarily seen this problem with Squishdot, as I added a link to the ZMI tabbed page for editing. I wonder (with help from expert's insight) if the ZMI may not be updating that page cache properties properly so that the browser isn't refetching the it.
DeanGoodmanson, 2002/09/24 14:40 GMT (via web):
Search issues:
I Removed -d debug parameter.
Lost the trackeback when hitting a page that doesn't match case, but still get the error message, not the redirect. (This may be appropriate, but...)
When I click the "Search for this page" button, I get 0 hits. When I use the standards search techniques for "Frontpage", FrontPage makes the list (along with 2 others, if that matters.)
DeanGoodmanson, 2002/09/24 14:48 GMT (via web):
Looking for clues on how to deal with the following STX trompage..
- 56 This is issue 56 - 58x THis is 59x - Stop listing my dang < pre> items!
DeanGoodmanson, 2002/09/24 14:49 GMT (via web):
Alright..it worked fine here, but not on my page. Time to check page types & differences.
DeanGoodmanson, 2002/09/24 15:04 GMT (via web):
OK, all better. I was (once again) stumbling over STX whitespace. Workaround:
- 56 issue 56
- 58x put your identifiers after the bullet.
- Does the river STX leads to reST ?
SimonMichael, 2002/09/24 21:05 GMT (via web):
14-day pack: 222 -> 175Mb
But now, but NOW.. heheheheh.. we have this little beauty which tells us things .
Woah.
SimonMichael, 2002/09/24 22:38 GMT (via web):
Sorry for downtime and upgrade glitches here.. but zwiki.org is now running on zope 2.6b1. :)
2002/09/25 01:50 GMT (via web):
ZWikiandZope2?.6 eh? Any particular 2.6 features help ZWiki?
- Enhanced text indexing
- Major improvments to BTree? and Catalog code
- The much-hated name "STUPID_LOG_FILE" now has a preferred alias: "EVENT_LOG_FILE". ;-)
- < dtml-var name> and & dtml.-name; will now automatically HTML-quote unsafe data taken implictly from the REQUEST object. (spaces added..)
Gotta stop, can continue on request.
BTW, what's the < unknown> object in the zodb analyzer?
DeanGoodmanson, 2002/09/25 15:49 GMT (via web):
I'm noticing I'm putting a lot of
make decision heretype of comments in my personal wiki.
Would you track them by a unique search phrase or badge (TbD? ??) ? Other suggestions?
DeanGoodmanson, 2002/09/25 16:44 GMT (via web):
Whine-o-the-day.. The pages returned/listed on WikiBadges are not WikiLinks?
I tried changing
this: < li>< dtml-var id> to: < li>< a< dtml-var id>< /a>with no success. Suggestions?
Simon Michael, 2002/09/25 19:56 GMT (via mail):
zwiki.org's IP address changed unexpectedly (to 216.17.130.20) this morning, costing me and probably some of you some time. The DNS change may take some time to reach everyone.
I'm tired right now.. I'll come back later and do the prerelease with a fresh mind. I realize it will be the 26th for most of you. Mumble might get in another bugfix or two mumble..
2002/09/25 20:18 GMT (via web):
a. Sleep! :-)
b. I think email subscriptions are down.
DeanGoodmanson, 2002/09/26 02:40 GMT (via web):
I hope that's Simon experimenting with ZWiki < dtml .. > pages in RecentChanges? ! !
DeanGoodmanson, 2002/09/26 02:42 GMT (via web):
Re-opened AcquisitionProblems for anyone interested in nesting Zwiki instances.
SimonMichael, 2002/09/26 02:58 GMT (via web):
[[#270 with zope 2.6.0, dtml pages like UserOptions, RecentChanges, SearchPage are broken] with zope 2.6.0, dtml pages like UserOptions?, RecentChanges?, SearchPage? are broken] & #272 ZMI, ftp broken in zwiki.org & other folders after 2.6 upgrade are making things difficult here. You may want to avoid editing pages containing DTML until the former is resolved. I had to resort to the debugger to edit FrontPage just now. :)
Wow look at that (FastChanges?) - some mis-step of mine created a page whose name is the entire text of FrontPage. And it worked! Good test for freeform page names.
I tried to solve [[#270 with zope 2.6.0, dtml pages like UserOptions, RecentChanges, SearchPage are broken] with zope 2.6.0, dtml pages like UserOptions?, RecentChanges?, SearchPage? are broken], but this can't wait. 0.11.0rc1 is out, please install and stress-test at will.
SimonMichael, 2002/09/26 03:03 GMT (via web):
Summary:
Bugfixes, international page names, edit log notes, WikiForNow assimilation completed, CMFWiki integration (alpha).
DeanGoodmanson, 2002/09/26 18:04 GMT (via web):
I can't find the dtml code to plop on a page to hide the header and footer. Could someone lend a hand?
JohnGreenaway, 2002/09/26 18:12 GMT (via web):
< dtml-call "REQUEST.set('bare',1)">
DeanGoodmanson, 2002/09/26 18:14 GMT (via web):
Thanks a lot, John.
DeanGoodmanson, 2002/09/26 20:37 GMT (via web):
Page updates seem a bit slower since before my upgrade and adding catalog.
I have a relatively small wiki, SCSCI HD disk, 500Mhz mac and 256M of RAM. Are there caching or other options I could change to keep edits and comments quick?
Perhaps using mail_outs is the bottleneck..
SimonMichael, 2002/09/26 23:22 GMT (via web):
Dean I'd like to find out more. Could you open an issue for this ?
2002/09/27 00:18 GMT (via web):
It's not actually un-usable. Edits take ~4 seconds sometimes faster, only rarely longer (10+ seconds). The DB size is ~6 meg,and it has grown to >30 meg twice with heavy use. I'm using a vanilla Zope build (spvi OSX) with no apache, etc.
I was just moseying around ZCatalog and noticed some of the options noting caching rules, and wanted to squeak before I tweeked. Let me know if it's really worth an issue, or how detailed/vague you want an issue tracked for.
SimonMichael, 2002/09/27 01:09 GMT (via web):
This topic is definitely worth one or more issues sooner or later, but it's up to you if you want to pursue it right now. I'm also gathering data from my own installations.
2002/09/27 06:14 GMT (via web):
Hoo! Did 2.6 do away with the spaces between list items ?
- That's
- a
- big
- deal.
DeanGoodmanson, 2002/09/27 14:44 GMT (via web):
- Amazing Deal!
- 2. Where's the Count when you need him?
break...
a. This is "a."...does it still turn it into a number?
b. This is a test, only a test..
SimonMichael, 2002/09/27 18:40 GMT (via web):
Fixed #273 structured text converts initial word followed by a period into a bullet item, released .
DeanGoodmanson, 2002/09/27 23:22 GMT (via web):
I'm trying to display my ZWiki AnnoyingQuote in my squishdot instance... wiki site: \wiki\
Calling this from my weblog fails, no AnnoyingQuote ... < dtml-let
squishdot site: \weblog
< dtml-let Fails. Help a newbie out?
DeanGoodmanson, 2002/09/27 23:34 GMT (via web):
Spent some time in the zope book.... forgot about < dtml-with wiki>
2002/09/30 10:55 GMT (via web):
Hi!
Zwiki puts an HTML-Code between the header and the content. Is there a possibility to edit this Code or to avoid it?
DeanGoodmanson, 2002/09/30 15:36 GMT (via web):
Simon: I haven't seen the speed issues I mentioned awhileback, so will wait to post an issue.
We were on a wireless network at the time, getting a noticably slow connection for all network traffic.
DeanGoodmanson, 2002/09/30 17:45 GMT (via web):
WikiBadges don't included seperators between the list groups.
This isn't an issue on this site, as each heading has bullet points, but when there are no bullet points the headings weren't displayed properly.
I added < br>'s,but that mucked up the indention.
Anybody USE WikiBadges ?
SimonMichael, 2002/09/30 19:07 GMT (via web):
Hi Dean.. I haven't touched it in a long time. Feel free to update.
Simon Michael, 2002/09/30 19:11 GMT (via mail):
> Zwiki puts an HTML-Code between the header and the content. Is there a
> possibility to edit this Code or to avoid it?
What's an HTML-Code ? You don't mean #112 structured text pages have extra html & body tags, and are not valid html/xhtml do you ?
JordanCarswell, 2002/09/30 22:44 GMT (via web):
I'm encountering a weird problem since I upgraded to ZWiki 0.11.0rc2. If I set permissions to allow only Managers to add comments, the comments textbox disappears (and yes that's when I'm logged in as a Manager). Now when I set "Add Comments" to Anonymous, the textbox shows up again which suprised me. Plus, the Manage Page info doesn't appear at all. I haven't had any problems editing pages.
When I am browsing the site from the browser on my server, everything works fine- permissions behave as they should. So this is some kind of conflict of permissions, but I can't figure out what. Any help would be appreciated. | https://zwiki.org/FrontPage/GeneralDiscussion200209 | CC-MAIN-2021-25 | refinedweb | 13,957 | 75.5 |
Re: [Zope-dev] Debugging spinning Zope?
Dieter Maurer wrote: Morten W. Petersen wrote at 2006-4-18 16:45 +0200: I guess the next step is to use pdb og gdb - are there perhaps any documents out there describing the process? You use your favorite search engine and search for Zope spinning debug. Yep. ;) -Morten
Re: [Zope-dev] New ways of getting transactions?
and supported for a period of time. If it isn't, then this is *definately* a bug. I agree. I've reported it as a bug: -- Morten W. Petersen Email: [EMAIL PROTECTED] Phone: +47 45 44 00 69 Title: Project manager Nidelven IT (
Re: [Zope-dev] Re: [Zope3-dev] RFC: Reunite Zope 2 and Zope 3 in the source code repository
Stephan Richter wrote:
Re: Get rid of configure/make? (was Re: [Zope-dev] Re: SVN: Zope/trunk/ Merge philikon-zope32-integration branch. Basically, this branch entails:)
I wasn't arguing that configure/make was hard to use. I'm arguing that it's something extra to maintain and is just silly. It's main benefit is that it leverages a familiar pattern, but I'm not convinced that it's worth it. Also, as tools like rpm and deb become more widely used, I'm not
Re: [Zope-dev] Problems adding many objects
and before returning none, it says 'returning issue' on stdout. -Morten Chris McDonough wrote: Hi Morten, You'd probably need to show us the implementation of whatever manage_add_issue is. - C On Sep 25, 2005, at 10:09 AM, Morten W. Petersen wrote: Hi, I've created a script
[Fwd: Re: [Zope-dev] Problems adding many objects]
Oops, one for the list as well. Original Message Subject: Re: [Zope-dev] Problems adding many objects Date: Mon, 26 Sep 2005 13:34:50 +0200 From: Morten W. Petersen [EMAIL PROTECTED] To: Chris McDonough [EMAIL PROTECTED] References: [EMAIL PROTECTED] [EMAIL PROTECTED] [EMAIL
[Zope-dev] Problems adding many objects
Hi, I've created a script that imports XML into the ZODB, using regular TTW methods - and all is fine, except for when I import large files, with many objects. After about 288 objects, the import fails when the method that adds an object in the ZODB returns None, instead of the object it added.
Re: [Zope-dev] Problems with Zope 2.8 on FreeBSD (was Re: Problems with PageTemplates on Zope 2.8)
have a note on how I fix it. Andrew -- Zope Managed Hosting Software Engineer Zope Corporation (540) 361-1700 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Morten W. Petersen Sent: Thursday, June 30, 2005 7:37 PM To: Max M Cc: zope-dev
[Zope-dev] Problems with Zope 2.8 on FreeBSD (was Re: Problems with PageTemplates on Zope 2.8)? Does the server
[Zope-dev] Problems with PageTemplates on Zope 2.8
Hi,? Thanks, Morten
[Zope-dev] Re: Renaming a product
product name to the new, and it worked. Thanks for your help everybody. :) Regards, Morten W. Petersen -- Technologies: Zope, Linux, Python, HTML, CSS, PHP Homepage: Phone number: (+47) 45 44 00 69 ___ Zope-Dev
[Zope-dev] Renaming a product
Hi all, does anyone know how to rename a product, so that instances of the product stored in ZODB will work (with the new name) after the rename? Thanks, Morten W. Petersen -- Technologies: Zope, Linux, Python, HTML, CSS, PHP Homepage: Phone number: (+47) 45 44? BTW seb, I tried compiling Python with the pgcc compiler [1]? I'd like to see those compile options as well... Thanks,
[Zope-dev] KeyError in UnIndex
Hi guys, I've got an issue here, the following traceback can be seen after several users modify and view the same object in a short (2-3) minute period. I've tried recataloging the objects but that doesn't help either; this error is seen every time.. Site Error An error was encountered while
[Zope-dev] monitor_client.py in Zope 2.4.1 not working properly
Hi, I've been playing a bit with the monitor_client (ZServer/medusa), and it seems all exceptions aren't raised so it's visible in the client, it's seen in the debug output from Zope itself. Has anyone experienced this? -Morten ___ Zope-Dev
[Zope-dev] ImportError (No module names utils)
Hi, after using a module named utils in a Zope product the root management screen is no longer available (the module is now renamed). The traceback looks like this: Traceback (innermost last): File /home/morten/tmp/Zope-2.4.1-src/lib/python/ZPublisher/Publish.py, line 223, in publish_module
[Zope-dev] KeyError on UnIndex.keyForDocument
Hia, While trying to reindex an entire catalog an error is raised, which looks like this: Traceback (innermost last): File /home/morten/zope_instances/usr2/local/Zope-bcr/lib/python/ZPublisher/Publish.py, line 223, in publish_module File
Re: [Zope-dev] DISCUSS: Community checkins for CVS
On Thu, 20 Sep 2001, Paul Everitt wrote: So, let's begin what I'm sure will be a lively and illuminating discussion. :^) First man out? :-) Will a ZPL-ish license [1] be accepted (declared, ref. paragraph 4 of the Zope Contributor Agreement) by the Zope Corporation? [1]
Re: [Zope-dev] Spell checking module
On Wed, 27 Jun 2001, Remi Delon wrote: I have a website where users can post messages (using a textarea). I would like to be able to spell-check what they submit and notify them of possible mistakes. (much like the online spell-checker of hotmail that I'm using right now :-)) Has anybody
Re: Something better than ZClasses (was: Re: [Zope-dev] Re: Zcatalogbloat problem (berkeleydb is a solution?))
On Tue, 26 Jun 2001, Erik Enge wrote: On Tue, 26 Jun 2001, Morten W. Petersen wrote: How about meta-programming (designing) via the Zope interface, with UML or somesuch; automatically generating Python code, then enable designers to use a ZFormulator-ish product to edit the interface
Re: Something better than ZClasses (was: Re: [Zope-dev] Re: Zcatalogbloat problem (berkeleydb is a solution?))
On Tue, 26 Jun 2001, Morten W. Petersen wrote: Well, it's quite logical: UML can be used to map out both software and business development (they are, after all, two sides of the same story), the designer can twiddle-n-polish the interface and the programmer can take care of 'exceptional
Re: Something better than ZClasses (was: Re: [Zope-dev] Re: Zcatalogbloat problem (berkeleydb is a solution?))
On Tue, 26 Jun 2001, Stephan Richter wrote: - A simple DTML Zope programmers costs are okay and maybe below programmer average. - A good Zope/Python programmer will cost above average. - A good Zope/Python System-Designer is very expensive. Because of that you try to minimize the
Re: [Zope-dev] Re: ZPL and GPL licensing issues
On 22 Jun 2001, Simon Michael wrote: Now here, I have to assume RMS is using combine above to mean combine and redistribute. I hope I'm right ? If combine included install zwiki on your zope installation and use it then everything I know is wrong.. I did intend for that to be fairly
Re: [Zope-dev] ZPL and GPL licensing issues
On Fri, 22 Jun 2001, Erik Enge wrote: Now I think I have two different answers to one of my fundamental questions in this discussion: if I have a GPL-compatible licensed product and I distribute it with a GPL product, do I need to relicense the former one to GPL? Because that is what I
Re: [Zope-dev] ZPL and GPL licensing issues
On Fri, 22 Jun 2001, Erik Enge wrote: Ok, good. Then Thingamy's intermediate solution will be to create a TPL which is basically the ZPL with the incompatible-clauses ripped out (number 4 and 7, I think). That way we are compatible with both the ZPL and the GPL. Something like that.
Re: [Zope-dev] Re: ZPL and GPL licensing issues
On 22 Jun 2001, Simon Michael wrote: Shane Hathaway [EMAIL PROTECTED] writes: One of the consequences being that someone re-distributing zope zwiki together, under their default licenses, is technically in violation right now, I think we are all agreeing. Technically yes, although I
[Zope-dev] ZPL and GPL licensing issues
Hi there, we @ thingamy are considering changing our license to a ZPL-ish one [1] to better serve our clients' needs. However, some of the (Zope) products we've developed may need to rely on GPL'ed code, or needs to be incorporated within it, and the 'obnoxious advertising clause' seemingly
Re: [Zope-dev] command-line zope.org product upload ?
On Tue, 19 Jun 2001, Andy McKay wrote: Ive been successfully finding other things to do other ZPM which is an attempt to make a package manager for Zope ala RPM, PPM etc. A command line interface to it would be cool. Cool. And maybe some apt-get functionality? Like 'zope-apt-get
Re: [Zope-dev] Where did DocumentTemplate/VSEval.py go in 2.4.0a1?
On Fri, 15 Jun 2001, Evan Simpson wrote: Morten W. Petersen wrote: one of my products landed flat on its face when an ImportError was raised trying to import VSEval from DocumentTemplate; is there a new class / function of some sort or simply another name for the class? See $ZOPE
[Zope-dev] TypeError on FieldIndex
Hia guys, running the GUM product on a fresh BerkeleyDB based 2.4.0a1 instance on Linux raises the following issue for the field index type: Site Error An error was encountered while publishing this resource. Error Type: ('type', 0, type 'string', type 'int') Error Value: None
[Zope-dev] Non-undoable storage
Hia guys, during testing of a mail product I've discovered that the Data.fs file may bloat considerably after storing 50 messages. Packing the database will reduce the Data.fs file to 20 MB (from 40 MB). Another thing is that storing 50 messages takes a *long time* on a 600Mhz 256 MB RAM
Re: [Zope-dev] Non-undoable storage
On Tue, 12 Jun 2001, Shane Hathaway wrote: Did you catalog each message? What version of Zope? Yes, every message was cataloged. Zope version 2.3.2 3) Manually zap the caches periodically, which is a capability of Zope 2.4.x. Okay, this is interesting. Any examples on how to implement
Re: [Zope-dev] Non-undoable storage
On Tue, 12 Jun 2001, Chris McDonough wrote: Morten W. Petersen wrote: Yes, every message was cataloged. Zope version 2.3.2 Were subtransactions in the Catalog turned on (see the Advanced page)? Yes, and the threshold was at 1. -Morten
[Zope-dev] Building custom DTML tags and accessing _.something
Hia guys, I was wondering if any of you could give me a couple of hints about how to make the _.{random,string,range,Datetime} thingies from a tag expression. I.e. instead of doing this: dtml-widget select options=[1,2,3,4,5,6,7,8,9,10,11,12,13]
[Zope-dev] ZCatalog features
Hia guys, A couple of comments and questions about the ZCatalog: Is it possible to pass an argument to the catalog so that returned brains would instead be actual objects? Given that we have to manually join search results, because ZCatalog doesn't support ORs etc (for FieldIndexes), wouldn't
[Zope-dev] REQUEST and values stored there
Hia guys, the recent changes to the HTTPRequest class breaks some of my code. I may have missed some notifications, but why wasn't this made clear as it could obviously break code? -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]']
[Zope-dev] INSTANCE_HOME vs. SOFTWARE_HOME
Hi guys, some people have asked me to use INSTANCE_HOME instead of SOFTWARE_HOME, which breaks their products on debian distros. Now, I'm not sure that won't break other systems if I change it; anyone care to share? Thanks, Morten ___ Zope-Dev
[Zope-dev] Ensuring 'freshness' of dynamic pages (slightly offtopic)
Hi guys, I've been trying to ensure that documents from a certain website are always fresh, that is, every request for a new page must be validated before the client sees it. I've tried using these HTTP Cache-control directives: no-cache no-store max-age (1)
[Zope-dev] ZPatterns, DynPersist.dll and Zope 2.3.0
Hi guys, I've got a problem making a version of the DynPersist.dll file work on windows. The message when trying to load the DynPersist module says (paraphrasing) "A unit attached to the system doesn't work". Anyone else experienced this? Also, I read that users of Zope 2.2.x could skip this
Re: [Zope-dev] ZPatterns, DynPersist.dll and Zope 2.3.0
[Steve Alexander] | Try this one: | | | | I've had this one working on Windows 2000, Zope 2.3. Yep, this works. On Windows 98 with Zope 2.3.0. Thanks again, Morten ___ Zope-Dev maillist -
Re: [Zope-dev] ZCatalog hackery
[Chris McDonough] | Note that the algoritm is simple - for each index, compare the what exists | in the index to what is to be put in. If they're the same, do nothing. If | they're different, reindex. I wasn't able to understand completely from | your description whether the object method
[Zope-dev] ZCatalog hackery
Hi guys, I've got a problem with ZCatalog. I've got plenty of large objects, ranging from 100KB to 100MB in size. Needless to say, these take up a lot of processor time when indexed by the ZCatalog. Now, these object have to be moved from time to time, only moved, so that one or two of the
Re: [Zope-dev] ZCatalog hackery
[Casey Duncan] | Actually what I wrote assumes you are passing a Catalog not a ZCatalog. | So you will need to change it for a ZCatalog to: I figured that out. :-) There is one problem, the uids stored in the Catalog are based on the path of the object, so I guess I'll have to make a copy of
Re: [Zope-dev] Synchronize GUD or Worldpilot with IPAQ
[Valérie Aulnette] | Did someone try ? And how ? You can't synchronize at this point. It's a planned feature, but don't hold your breath. :-) Cheers, Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] ZCatalog problems
Hi guys, I'm having trouble making ZCatalog work. The problem is that there are 29 objects of a given meta type, with the same booleans that should be returned for an iteration; but only 20 are. Is this a result of caching perhaps? Or lazy results? Thanks, Morten
Re: [Zope-dev] catalog object owners?
[Tim McLaughlin] | Anybody know how to catalog the object "owner"? I can't seem to find a | property to catalog the value of getUserName(). (of course I could always | kludge it with a property in the constructor, but I would prefer use what is | already there). I'll second that, the ability
Re: [Zope-dev] catalog object owners?
[Chris McDonough] | I'll second that, the ability to catalog values returned by method calls | would be sweet.. | | Not sure what you mean, this works now. Aha? So if I specify a field index of, get_parent_node_id, which is a function call on all objects that are to be indexed, this would
Re: [Zope-dev] Minor typos/changes to ZCatalog.
[Steve Alexander] | But it's not just characters. A field index indexes an object, and uses | the overloaded comparison operators for that object to put it in an | appropriate place. So, you can index DateTime objects, tuples, strings, | numbers, floats... Could a field index succesfully
Re: [Zope-dev] Storing lots of big objects in containers
[Erik Enge] | Can't you just subclass the BTree Folder as you would with OFS.Folder? | | I think you might be confusing the Zope BTree implementation with the | BTree Folder Product? I've tried subclassing BTreeFolder, but then, whenever the object is accessed, zope falls flat on its face.
[Zope-dev] REQUEST acting up and misplacing form elements in other
Hi zopers, I've been wondering what may be causing the REQUEST object to store form values in the other dictionary. Is this a new feature or simply a bug? Cheers, Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] ConflictErrors and how to handle them
Hi guys, I've been struggling with a problem, namely ConflictErrors. At times, a long-running process may add 100, 1000, 1 objects to a single folder. Under this process, several ConflictErrors may be raised, but they are captured, and the transaction committed again. Problem solved.
Re: [Zope-dev] Getting the parent of a container in a Python Script
[Cyril Elkaim] |Hi all, Hia Cyril, [snip] | Aside from my problems Zope rocks really. Yes it does. And you can find usable information in $ZOPE_INSTANCE/lib/python/OFS/ZDOM.py . Hope this helps, Morten ___ Zope-Dev maillist - [EMAIL
Re: [Zope-dev] 2gb ZODB size limit
2GB in size on Linux, a kernel = 2.4.0 is required. Hope this
Re: [Zope-dev] 2gb ZODB size limit
[Andy McKay] | Yes, you will hit this limit. Windows uses a 32 bit integer for file size... | No matter what MS says, Ive hit it on Win2k. Interesting. Are you sure the problem lies with Win2K and not Python or something else? Cheers, Morten ___
Re: [Zope-dev] 2gb ZODB size limit
[Erik Enge] | [Morten W. Petersen] | | | It's a problem with Linux, if you want to be able to use databases | | 2GB in size on Linux, a kernel = 2.4.0 is required. | | Nope. First, the limit is at file-level, not database-level (mind | you, a problem with the filesystem, not Linux per se
Re: [Zope-dev] 2gb ZODB size limit
[Erik Enge] | [Morten W. Petersen] | | | BTW, there is a list called [EMAIL PROTECTED], for ZODB specific | | questions. | | Actually, I think its called ZODB-Dev; [EMAIL PROTECTED] I stand corrected. Cheers, Morten ___ Zope-Dev maillist
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Erik Enge] | What happens if you run this with ZEO? Will the file be kept «in | sync» with all ZEO Clients? Good point. I don't think so. It could be that it is kept in sync with one Zope instance "being responsible" and the others calling it via XML-RPC. Cheers, Morten
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Steve Alexander] | I'd thought the original point of ThreadSafeCounter was to provide | a simple sequential unique values generator, without causing | writes to the Data.fs. Yes, that was the original intent. But having one that's safe over multiple ZEO clients is a Very Good Thing (tm). :-)
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Wolfgang Strobl] | Doesn't even install on Windows, because it imports and uses fcntl. | | From the fcntl docs: "Availability: Unix". Well, the download page says "Platform: Generic UNIX-like", doesn't it? -Morten ___ Zope-Dev maillist - [EMAIL
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Wolfgang Strobl] | Well, yes. I wouldn't have expected that kind of platform | dependendy in products like AddressBook, though. Anyways, I'm looking into ways of making the threadsafe counter platform independent. Cheers, Morten ___ Zope-Dev
[Zope-dev] Re: ThreadSafeCounter
[Andy McKay] | I released FSPoll recently and was going to combine the two into one | FSCountThing with FSPoll and FSCounter subclassing of it, so maybe we could | co-operate on ThreadSafeCounter and FSCounter? The ideal solution would be to use an object that lives in the ZODB, I wonder if
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Erik Enge] | Forget it. My fault. *shame, shame* *chuckle* :-) -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists -
Re: [Zope-dev] ThreadSafeCounter 0.0.1 released
[Chris Withers] | So would a counter such as: | | class PersistentCounter(Persistent): | | # create the counter | def __init__(self, value=0): | self._value = value | | # get the value of the counter without incrementing | def getValue(self): | return
Re: [Zope-dev] LONGing for normal inetegers...
[Jon Franz] | I had this problem in the past and hacked the mysql DA to fix it, then | dicovered to my dismay I was using an out-of-date mysqlDA and it had already | been fixed... Which DA are you using? Using Python 2.0 could solve this problem, as longs are no longer rendered with the L
[Zope-dev] ThreadSafeCounter 0.0.1 released
Hi guys, There's a new product available, which enables unique ids in a given context, take a look at url:. Cheers, Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] ValueChecker 0.0.1
Hi zopistas, I've managed to build a product called ValueChecker which when installed hooks into the prosessing of form input. It's basic at this point, a mere proof-of-concept, but I can think of so many uses! Cheers. :-) -Morten ___ Zope-Dev
[Zope-dev] The field converters (:int, :text, etc.)
Hi guys, IIRC, there was some talk about modularizing the field converters (checkers) so that they could be easiliy modified and added to. Is there currently any efforts to solving this problem? If not, there's definently a need for it, IMO.. Thank you for your time. -Morten
[Zope-dev] ZClasses vs. Python Products
Hi
Re: [Zope-dev] ZClasses vs. Python Products
[Zope-dev] Creating IMAP and SMTP services for Zope
Hi guys, I'm wondering about creating IMAP and SMTP services for Zope. Someone mentioned to me that extending (using?) the ZServer could be a Good Thing (tm). Could anyone point me in the right direction? Thanks. -Morten ___ Zope-Dev maillist -
[Zope-dev] LoginManager and ZPatterns
Hi fellow zopers, I don't know if it's a bug or feature, but whenever I try to access parental objects from a user object in python code, I can't seem to find anything. That is, whenever I call acl_users.getItem('user123').getParentNode() (the acl_users is a LoginManager instance) it returns
[Zope-dev] How to avoid ConflictErrors ?
Hi zopers, I'm having problems with a product I'm developing. The product is part of the ZopeGUM package, the GUM product. If you have a look in gum.py in that product, you can see a method named _retrieve_messages, which can at times store enourmous amounts of objects and data in one
Re: [Zope-dev] How to avoid ConflictErrors ?
[Chris Withers] | Please check that both rfc822_message and message_container subclass | Persistence.Persistent. They do. *ponder* -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross
[Zope-dev] AttributeError when adding ZClass product (PortalMembership)
Hi guys, I have a problem with a product. This product, let's call it product SuperSecret, adds a number of folderish objects to the new instance when it is created. Now, if we name the SuperSecret product instance for a, and the class instance that raises the AttributeError b, I will show you
Re: [Zope-dev] Implementing a URL path resolver
[[EMAIL PROTECTED]] (Bug in the encoding of the message, MHA) | path = string.split(relative_url, '/') | path = filter(None, path) | new_path = '%s' % path[0] | path = path[1:] | | for element in path: | | new_path = new_path + "['%s']" % element | | return eval("self%s" % new_path)
Re: [Zope-dev] Implementing a URL path resolver
[Steve Alexander] | Have you seen the methods restrictedTraverse and unrestrictedTraverse in | lib/python/OFS/Traversable.py ? Exactly what I needed. Thank you. -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]
[Zope-dev] Calling HTMLFiles
Hi, I'm having difficulties calling an HTML file from python.. Here's the code: [...] from events import conflicting_events calendar_event_add_redirect = HTMLFile('calendar_event_add_redirect', globals()) manage_add_calendar_event_form =
Re: [Zope-dev] ZCatalog and 'fuzzy logic'
I do not think that "fuzzy logic" is strongly related to "regexp-like". Anyway. Fuzzy searching often means "finding matches with characters omitted, replaced or inserted". It seems I misunderstood the term fuzzy logic myself. Fuzzy logic means if I search for a word, for example
Re: [Zope-dev] ZCatalog and 'fuzzy logic'
On Tue, 9 Jan 2001, Steve Alexander wrote: The other option for searching a TextIndex is to use extensions to the NEAR and AND and OR operators that are currently supported. I guess it all depends what you mean by "fuzzy matching". Well, to try to explain the problem: If I have 1.000.000
[Zope-dev] ZCatalog and 'fuzzy logic'
Is there anyone who could try to give an estimate of how long it would take to add fuzzy logic (regexp-like) searching capability to the ZCatalog? And reasoning as to why would be appreciated. ;) -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED]
Re: [Zope-dev] 'Subclassing' another product
[Steve Spicklemire] | does that help? Yep. Thanks Cheers, Morten ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists -
[Zope-dev] LoginManager and PTK
Hi guys, I previously posted a couple of functions that enables users to login at a lower level in the tree-structre than where the actual user folder is. I.e., a user could enter username and password at /a and get redirected to /a/a/a/a/b (the acl_users folder would be
[Zope-dev] 'Subclassing' another product
I think I read somewhere that it was, from version 2.2 of Zope, possible to 'subclass' products. Is this just somebody janking my chain, or is it actually possible? If it is possible, would someone care to explain? Thanks. -Morten ___ Zope-Dev
Re: [Zope-dev] Loginmanager and local roles
[Morten W. Petersen] | Any suggestions? Found the problem. There needs to be a method called user_names in the acl_users folder, which returns all the user ids: """ paramsself/params user_ids = self.UserSource.getPersistentItemIDs() user_ids2 = [] for id in user_ids:
Re: [Zope-dev] A groupware package for zope
[Magnus Heino] | I'd appreciate to be able to download some code from zope.org :-) Uhmm. Yeah. -- guess you don't know about the /Product and search facilities.. ;) Cheers, Morten ___ Zope-Dev maillist
[Zope-dev] A groupware package for zope
For the last couple of months I've been working on a groupware package for Zope, named ZopeGUM. It has reached version 0.1.63 now, and it's in steady progress. All the planned components, like messenger, address book, calendar, todos, etc. are there and working now; though it's definently not
[Zope-dev] Re: objectIds accessiblilty and a proposal
[Brian Lloyd] | |
Re: [Zope-dev] how do i check if it is an array
[Veiko Schnabel] | how do i check out if any of my fields are arrays or strings | is there a function like php: is_array() I can't think of any clean way to do this; zope developers, why isn't the type() function available from DTML? On the other hand, you can explicitly cast all your form
Re: [Zope-dev] unbuffered html from external method
[Andy McKay] | Is it possible to print unbuffered html output to the user from an external | method. It looks to me like I can't, output occurs upon the return and I | cant see a way of getting around that. Try hacking the BaseResponse, located in lib/python/ZPublisher. (implement the flush
Re: [Zope-dev] DynPersist for Windows, ZPatterns 0.4.3b1 and Zope 2.2.x
[Phillip J. Eby] | I uploaded the beta 2 release on October 31, but it has strangely | disappeared from Zope.org, along with changes I made to other items that | evening, so I have uploaded it again today. The DynPersist you posted | should still work, since there were no changes to
[Zope-dev] Segmentation fault when adding new objects
I'm having trouble with the adding of new objects; specifically it's adding of rfc822_address objects (contained within the ZopeGUM distribution,). I haven't really got a clue what's wrong, have any of you? -- Start debugging session
[Zope-dev] Passing namespace to method
How do I construct a method of an object, so that whenever that method is called, the current namespace is passed with it? I.e.: class myclass: [...] def myfunc(self, context): if context['sequence-index'] == 10: raise 'sequence-index is
[Zope-dev] Using the monitor_client
When using the monitor_client, I do this: -- Start monitor_client usage python monitor_client.py localhost 8099 Enter Password: warning: unhandled connect event Python 1.5.2 (#1, Mar 11 2000, 13:03:53) [GCC 2.95.2 19991024 (release)] Copyright 1991-1995 Stichting Mathematisch Centrum,
Re: [Zope-dev] Using the monitor_client
[Yves-Eric Martin] | - Zope.app() gives you a *copy* of the *real* application object. | - app._p_jar.sync() reloads your copy with the real (losing your changes) Ah, there it is; sync discards the changes made from the monitor, and then reloads... An extra thanks for the thorough
[Zope-dev] Using the Zope debugger (authenticating)
How do I authenticate myself when using the Zope debugger? (I've seen this before I think, but I couldn't find it) Thanks. -Morten ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML
Re: [Zope-dev] Excluding meta_types
[[EMAIL PROTECTED]] | btw how is GUM going? It's ZopeGUM now.. =) It's coming along; I'm planning on releasing a stable version of it available within a week. (CVS will be available soon from SourceForge). -Morten ___ Zope-Dev maillist - [EMAIL
[Zope-dev] Excluding meta_types
When
[Zope-dev] LoginManager and ZPatterns
I | https://www.mail-archive.com/search?l=zope-dev%40zope.org&q=from:%22Morten+W.+Petersen%22&o=newest | CC-MAIN-2019-30 | refinedweb | 4,852 | 63.59 |
Quite a number of my projects involve talking to RabbitMQ, and to help check things work as expected, I often have a number of integration tests which talk to a local RabbitMQ instance.
While this is fine for tests being run locally, it does cause problems with the build servers - we don't want to install RabbitMQ on there, and we don't typically want the build to be dependent on RabbitMQ.
To solve this I created a replacement
FactAttribute which can check if RabbitMQ is available, and skip tests if it is not.
This attribute works with a single host, and will only check for the host actually being there on its first connection.
public class RequiresRabbitFactAttribute : FactAttribute { private static bool? _isAvailable; public RequiresRabbitFactAttribute(string host) { if (_isAvailable.HasValue == false) _isAvailable = CheckHost(host); if (_isAvailable == false) Skip = $"RabbitMQ is not available on {host}."; } private static bool CheckHost(string host) { var factory = new ConnectionFactory { HostName = host, RequestedConnectionTimeout = 1000; }; try { using (var connection = factory.CreateConnection()) { return connection.IsOpen; } } catch (Exception) { return false; } } }
I was planning on using a dictionary, keyed by host to store the availability, but realized that I always use the same host throughout a test suite.
The reason for passing the host name in via the ctor rather than using a constant is that this usually resides within a generic "rabbitmq helpers" type assembly, and is used in multiple projects. | http://stormbase.net/page2/ | CC-MAIN-2017-17 | refinedweb | 232 | 50.77 |
Created on 2013-10-31 17:32 by ustinov, last changed 2020-03-25 22:15 by rhettinger. This issue is now closed.
In order to migrate from optparse to argparse we need to have an ability to substitute anguments, e.g. remove and then create.
In our framework we use the command line utility base class and then inherit the particular tools from it. The parser in base class uses add_argument() to populate the general argument list but for some tools it is needed to modify the inherited arguments set and make some arguments to have the modified meaning.
With optparse we've just used remove_option() and then added the modified one with add_option() but argparse currently does not have this functionality.
For the purpose above we just need to have remove_argument() or modify_argument() methods in orgparse
Does conflict_handler='resolve' address your use case? It sounds like it should.>
> <>
> _______________________________________
>
Explicitly substitute, excuse me
On 31 Oct 2013 20:11, "Artem Ustinov" <report@bugs.python.org> wrote:
>
> Artem Ustinov added the comment:
>
>>
> > <>
> > _______________________________________
> >
>
> ----------
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
When you add an argument, argparse creates an `Action`, and returns it. It also places that action in various lists (e.g. parse._actions) and dictionaries. A `remove_argument` function would have to trace and remove all of those links. That's a non-trivial task. However modifying an argument (or Action) is much easier, since there is only one instance. Obviously some modifications will be safer than others.
For example:
parser = ArgumentParser()
a = parser.add_argument('--foo')
print a
produces:
_StoreAction(option_strings=['--foo'], dest='foo', nargs=None, const=None, default=None, type=None, choices=None, help=None, metavar=None)
`vars(a)` gives a somewhat longer list of attributes. Within reason, those attributes can be changed directly. I think the 'dest', 'help', 'nargs', 'metavar' could all be changed without hidden effects. There's also a 'required' attribute that could be changed for optionals. Changing the 'option_strings' might be problematic, since the parser has a dictionary using those strings as keys.
The constant `argparse.SUPPRESS` is used in several places to alter actions. For example, to suppress the help, or to suppress default values in the Namespace. So it might be possible to 'hide' arguments in the subclass, even if you can't remove them.
In I explored a couple of ways of temporarily 'deactivating' certain groups of arguments, so as to parse the optionals and positionals separately. It's an advance issue, but might still give some ideas.
Another possibility is to use 'parent' parsers to define clusters of arguments. Your base class could create a parser with one set of parents, and the subclass could use a different set.
Paul,
essentialy, what i looking for is to replace the 'help' string of the
inherited argument with the new one. If you say it could be changed without
any effect so what would be the proper way to do it using argparse?
Artem
Just hang on the Action object that the `add_argument` returned, and change its `help` attribute.
a = parser.add_argument('--foo', help='initial help')
....
a.help = 'new help'
If using a custom parser class and subclass, I'd do something like:
self.changeablearg = self.parser.add_argument...
in the class, and
self.changeablearg.help = 'new help'
in the subclass
You can test the help message with
print parser.format_help()
What is the way to 'hide' the argument from being parsed?
E.g. we have self.parser.add_argument('foo') in parent class,
how can we modify it in child class so that it would not to
appear in --help strings and not populated to child's Namespace?
`argparse.SUPPRESS` should do the trick. According to the documentation it can be used with `default` and `help` parameters.
It does the trick with optionals but not the positionals.
How the positional arguments can be removed/hidden?
f.nargs = '?'
f.default = argparse.SUPPRESS
f.help = argparse.SUPPRESS
may be best set of tweaks to a positional Action `f`. In quick tests it removes `f` from the help, suppresses any complaints about a missing string, and does not put anything in the namespace.
But if there is a string in the input that could match this positional, it will be use.
f.nargs = 0
is another option. This puts a `[]` (empty list) in the namespace, since 'nothing' matches `f`. If there is an input string that might have matched it before, you will not get an 'unrecognized argument' error. `parse_known_args` can be used to get around that issue.
I should stress, though, that fiddling with `nargs` like this is not part of the API. Tweak this at your own risk.
I realized while answering a Stackoverflow question that the resolve method has the pieces for removing an Action from the parser. It removes an existing Action so that the new, conflicting one, can replace it.
It's meant to be used with flagged arguments. If `arg1` is an action with one option_string, it can be removed with:
parser._handle_conflict_resolve(None, [('--arg1', arg1)])
If it is a positional, this call seems to be sufficient:
arg1.container._remove_action(arg1)
Beyond answering this question I haven't tested the idea. If there's more interest I could explore it more.
@paul j3
FWIW, popping optionals from a cache that is maintained in addition to self._actions, makes special conflict handling unnecessary.
i.e.:
for option_string in a.option_strings:
parser._option_string_actions.pop(option_string)
> In order to migrate from optparse to argparse we need to have
> an ability to substitute anguments, e.g. remove and then create.
It seems to me that the original use case is obsolete.
Paul, do you think this issue should be closed?
I think it can be closed. | https://bugs.python.org/issue19462 | CC-MAIN-2021-17 | refinedweb | 953 | 67.25 |
January 15, 2019 by Tomek
In this GraphQL tutorial, we will show you how easy is implementing GraphQL in a React application. We’ll be using the React Apollo library that allows you to fetch data from your GraphQL server and use it the React framework.
Before you start make sure that you have Node.js installed. To get started we first need to set up a new React project. The easiest way to do so is to use create-react-app, which allows you to create a new React project with zero build configuration.
$ npx create-react-app my-graphql-project $ cd my-graphql-project $ npm start
Once you have above done the next step will be to install dependencies. You can do it with a single NPM command which will install the following packages:
$ npm install apollo-boost react-apollo graphql graphql-tag
apollo-boost: a package with all necessary Apollo Client components
react-apollo: a view layer for React
graphql&
graphql-tag: both required to parse GraphQL queries
Now you need to create an instance of Apollo Client. You can do it
App.js by adding the following code:
import ApolloClient from 'apollo-boost' const client = new ApolloClient({ uri: '[Put your GraphQL endpoint URI here]', })
To start with, all you really need is the endpoint for your GraphQL server. You can define it in
uri or it will be
/graphql endpoint on the same host as your app by default.
To connect the Apollo Client to React use the
ApolloProvider component exported from
react-apollo. The
ApolloProvider works simillar to React’s context provider:
giving you access to it anywhere in your component tree.
import React from 'react' import { render } from 'react-dom' import { ApolloProvider } from 'react-apollo' import ApolloClient from 'apollo-boost' const client = new ApolloClient({ uri: '[Put your GraphQL endpoint URI here]', }) const App = () => ( <ApolloProvider client={client}> <div> <h1>My app</h1> </div> </ApolloProvider> ) render(<App />, document.getElementById('root'))
Now, once your first React + GraphQL app is up and running you can start fetching some data with GraphlQL Queries have fun!
Do you want to try our mock backend from GraphQL app. It is in beta phase and 100% free. | https://blog.graphqleditor.com/getting-started-with-react-graphql/ | CC-MAIN-2019-26 | refinedweb | 365 | 58.62 |
Let’s face it: you need to get information into and out of your programs through more than just the keyboard and console. Exchanging information through text files is a common way to share info between programs. One of the most popular formats for exchanging data is the CSV format. But how do you use it?
Let’s get one thing clear: you don’t have to (and you won’t) build your own CSV parser from scratch. There are several perfectly acceptable libraries you can use. The Python
csv library will work for most cases. If your work requires lots of data or numerical analysis, the
pandas library has CSV parsing capabilities as well, which should handle the rest.
In this article, you’ll learn how to read, process, and parse CSV from text files using Python. You’ll see how CSV files work, learn the all-important
csv library built into Python, and see how CSV parsing works using the
pandas library.
So let’s get started!
What Is a CSV File?. Normally, CSV files use a comma to separate each specific data value. Here’s what that structure looks like:
column 1 name,column 2 name, column 3 name first row data 1,first row data 2,first row data 3 second row data 1,second row data 2,second row data 3 ...
Notice how each piece of data is separated by a comma. Normally, the first line identifies each piece of data—in other words, the name of a data column. Every subsequent line after that is actual data and is limited only by file size constraints.
In general, the separator character is called a delimiter, and the comma is not the only one used. Other popular delimiters include the tab (
\t), colon (
:) and semi-colon (
;) characters. Properly parsing a CSV file requires us to know which delimiter is being used.
Where Do CSV Files Come From?
CSV files are normally created by programs that handle large amounts of data. They are a convenient way to export data from spreadsheets and databases as well as import or use it in other programs. For example, you might export the results of a data mining program to a CSV file and then import that into a spreadsheet to analyze the data, generate graphs for a presentation, or prepare a report for publication.
CSV files are very easy to work with programmatically. Any language that supports text file input and string manipulation (like Python) can work with CSV files directly.. The
csv library contains objects and other code to read, write, and process data from and to CSV files..
Here’s the
employee_birthday.txt file:
name,department,birthday month John Smith,Accounting,November Erica Meyers,IT,March
Here’s code to read it:.')
This results in the following output:
Column names are name, department, birthday month John Smith works in the Accounting department, and was born in November. Erica Meyers works in the IT department, and was born in March. Processed 3 lines.
Each row returned by the
reader is a list of
String elements containing the data found by removing the delimiters. The first row returned contains the column names, which is handled in a special way.
Reading CSV Files Into a Dictionary With
csv
Rather than deal with a list of individual
String elements, you can read CSV data directly into a dictionary (technically, an Ordered Dictionary) as well.
Again, our input file,
employee_birthday.txt is as follows:
name,department,birthday month John Smith,Accounting,November Erica Meyers,IT,March
Here’s the code to read it in as a dictionary this time:
import csv with open('employee_birthday.txt', mode='r') as csv_file: csv_reader = csv.DictReader(csv_file) line_count = 0 for row in csv_reader: if line_count == 0: print(f'Column names are {", ".join(row)}') line_count += 1 print(f'\t{row["name"]} works in the {row["department"]} department, and was born in {row["birthday month"]}.') line_count += 1 print(f'Processed {line_count} lines.')
This results in the same output as before:
Column names are name, department, birthday month John Smith works in the Accounting department, and was born in November. Erica Meyers works in the IT department, and was born in March. Processed 3 lines. additional parameters, some of which are shown below:
delimiterspecifies the character used to separate each field. The default is the comma (
',').
quotecharspecifies the character used to surround fields that contain the delimiter character. The default is a double quote (
' " ').
escapecharspecifies the character used to escape the delimiter character, in case quotes aren’t used. The default is no escape character.
These parameters deserve some more explanation. Suppose you’re working with the following
employee_addresses.txt file:
name,address,date joined john smith,1132 Anywhere Lane Hoboken NJ, 07030,Jan 4 erica meyers,1234 Smith Lane Hoboken NJ, 07030,March 2
This CSV file contains three fields:
name,
address, and
date joined, which are delimited by commas. The problem is that the data for the
address field also contains a comma to signify the zip code.
There are three different ways to handle this situation:
Use a different delimiter
That way, the comma can safely be used in the data itself. You use the
delimiteroptional parameter to specify the new delimiter.
Wrap the data in quotes
The special nature of your chosen delimiter is ignored in quoted strings. Therefore, you can specify the character used for quoting with the
quotecharoptional parameter. As long as that character also doesn’t appear in the data, you’re fine.
Escape the delimiter characters in the data
Escape characters work just as they do in format strings, nullifying the interpretation of the character being escaped (in this case, the delimiter). If an escape character is used, it must be specified using the
escapecharoptional parameter.
Writing CSV Files With
csv
You can also write to a CSV file using a
writer object and the
.write_row() method:
import csv with open('employee_file.csv', mode='w') as employee_file: employee_writer = csv.writer(employee_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL) employee_writer.writerow(['John Smith', 'Accounting', 'November']) employee_writer.writerow(['Erica Meyers', 'IT', 'March'])
The
quotechar optional parameter tells the
writer which character to use to quote fields when writing. Whether quoting is used or not, however, is determined by the
quoting optional parameter:
- If
quotingis set to
csv.QUOTE_MINIMAL, then
.writerow()will quote fields only if they contain the
delimiteror the
quotechar. This is the default case.
- If
quotingis set to
csv.QUOTE_ALL, then
.writerow()will quote all fields.
- If
quotingis set to
csv.QUOTE_NONNUMERIC, then
.writerow()will quote all fields containing text data and convert all numeric fields to the
floatdata type.
- If
quotingis set to
csv.QUOTE_NONE, then
.writerow()will escape delimiters instead of quoting them. In this case, you also must provide a value for the
escapecharoptional parameter.
Reading the file back in plain text shows that the file is created as follows:
John Smith,Accounting,November Erica Meyers,IT,March
Writing CSV File From a Dictionary With
csv
Since you can read our data into a dictionary, it’s only fair that you should be able to write it out from a dictionary as well:
import csv with open('employee_file2.csv', mode='w') as csv_file: fieldnames = ['emp_name', 'dept', 'birth_month'] writer = csv.DictWriter(csv_file, fieldnames=fieldnames) writer.writeheader() writer.writerow({'emp_name': 'John Smith', 'dept': 'Accounting', 'birth_month': 'November'}) writer.writerow({'emp_name': 'Erica Meyers', 'dept': 'IT', 'birth_month': 'March'})
Unlike
DictReader, the
fieldnames parameter is required when writing a dictionary. This makes sense, when you think about it: without a list of
fieldnames, the
DictWriter can’t know which keys to use to retrieve values from your dictionaries. It also uses the keys in
fieldnames to write out the first row as column names.
The code above generates the following output file:
emp_name,dept,birth_month John Smith,Accounting,November Erica Meyers,IT,March
Parsing CSV Files With the
pandas Library
Of course, the Python CSV library isn’t the only game in town. Reading CSV files is possible in
pandas as well. It is highly recommended if you have a lot of data to analyze.
pandas is an open-source Python library that provides high performance data analysis tools and easy to use data structures.
pandas is available for all Python installations, but it is a key part of the Anaconda distribution and works extremely well in Jupyter notebooks to share data, code, analysis results, visualizations, and narrative text.
Installing
pandas and its dependencies in
Anaconda is easily done:
$ conda install pandas
As is using
pip/
pipenv for other Python installations:
$ pip install pandas
We won’t delve into the specifics of how
pandas works or how to use it. For an in-depth treatment on using
pandas to read and analyze large data sets, check out Shantnu Tiwari’s superb article on working with large Excel files in pandas.
Reading CSV Files With
pandas
To show some of the power of
pandas CSV capabilities, I’ve created a slightly more complicated file to read, called
hrdata.csv. It contains data on company employees:
Reading the CSV into a
pandas
DataFrame is quick and straightforward:
import pandas df = pandas.read_csv('hrdata.csv') print(df)
That’s it: three lines of code, and only one of them is doing the actual work.
pandas.read_csv() opens, analyzes, and reads the CSV file provided, and stores the data in a DataFrame. Printing the
DataFrame results in the following output:
Here are a few points worth noting:
- First,
pandasrecognized that the first line of the CSV contained column names, and used them automatically. I call this Goodness.
- However,
pandasis also using zero-based integer indices in the
DataFrame. That’s because we didn’t tell it what our index should be.
Further, if you look at the data types of our columns , you’ll see
pandashas properly converted the
Salaryand
Sick Days remainingcolumns to numbers, but the
Hire Datecolumn is still a
String. This is easily confirmed in interactive mode:
>>> print(type(df['Hire Date'][0])) <class 'str'>
Let’s tackle these issues one at a time. To use a different column as the
DataFrame index, add the
index_col optional parameter:
import pandas df = pandas.read_csv('hrdata.csv', index_col='Name') print(df)
Now the
Name field is our
DataFrame index:
Hire Date Salary Sick Days remaining Name
Next, let’s fix the data type of the
Hire Date field. You can force
pandas to read data as a date with the
parse_dates optional parameter, which is defined as a list of column names to treat as dates:
import pandas df = pandas.read_csv('hrdata.csv', index_col='Name', parse_dates=['Hire Date']) print(df)
Notice the difference in the output:
Hire Date Salary Sick Days remaining
The date is now formatted properly, which is easily confirmed in interactive mode:
>>> print(type(df['Hire Date'][0])) <class 'pandas._libs.tslibs.timestamps.Timestamp'>
If your CSV files doesn’t have column names in the first line, you can use the
names optional parameter to provide a list of column names. You can also use this if you want to override the column names provided in the first line. In this case, you must also tell
pandas.read_csv() to ignore existing column names using the
header=0 optional parameter:
import pandas df = pandas.read_csv('hrdata.csv', index_col='Employee', parse_dates=['Hired'], header=0, names=['Employee', 'Hired','Salary', 'Sick Days']) print(df)
Notice that, since the column names changed, the columns specified in the
index_col and
parse_dates optional parameters must also be changed. This now results in the following output:
Hired Salary Sick Days Employee
Writing CSV Files With
pandas
Of course, if you can’t get your data out of
pandas again, it doesn’t do you much good. Writing a
DataFrame to a CSV file is just as easy as reading one in. Let’s write the data with the new column names to a new CSV file:
import pandas df = pandas.read_csv('hrdata.csv', index_col='Employee', parse_dates=['Hired'], header=0, names=['Employee', 'Hired', 'Salary', 'Sick Days']) df.to_csv('hrdata_modified.csv')
The only difference between this code and the reading code above is that the
print(df) call was replaced with
df.to_csv(), providing the file name. The new CSV file looks like this:
Employee,Hired,Salary,Sick
Conclusion
If you understand the basics of reading CSV files, then you won’t ever be caught flat footed when you need to deal with importing data. Most CSV reading, processing, and writing tasks can be easily handled by the basic
csv Python library. If you have a lot of data to read and process, the
pandas library provides quick and easy CSV handling capabilities as well.
Are there other ways to parse text files? Of course! Libraries like ANTLR, PLY, and PlyPlus can all handle heavy-duty parsing, and if simple
String manipulation won’t work, there are always regular expressions.
But those are topics for other articles…
What Do You Think?
Real Python Comment Policy: The most useful comments are those written with the goal of learning from or helping out other readers—after reading the whole article and all the earlier comments. Complaints and insults generally won’t make the cut here. | https://realpython.com/python-csv/ | CC-MAIN-2018-34 | refinedweb | 2,196 | 54.93 |
On Feb 7, 2009, at 11:07 AM, Martijn Faassen wrote: > Hi there, > > We've recently had some discussions on where to place the > implementation > of various ZCML directives. This post is to try to summarize the issue > and analyze the options we have. > > Right now ZCML directives are implemented in packages that contain > other > implementation. For example, zope.component implements various ZCML > directives, and zope.security implements some more. > > In the case of zope.component, a special [zcml] extras dependency > section is declared. If the ZCML dependencies are asked for, using > zope.component will suddenly pull in a much larger list of > dependencies > than the original zope.component dependency list. The ZCML directives > are component-related, but do offer extra options that need bits from > the wider Zope 3 framework, such as the security infrastructure. > > In the case of zope.security, this isn't the case. As far as I can > see, > it doesn't declare any dependency beyond zope.configuration to allow > it > to implement its ZCML directives. > > The dependency situation for the ZCML implementations in > zope.component > doesn't appear ideal. It was therefore proposed to move the ZCML > implementations to another package. This could be a new package, or it > could be created. > > Following up on that, it was considered we should move *all* > directives > from the packages that implement them now into special packages. This > would allow some packages to lose the dependency on > zope.configuration, > which is a relatively minor gain. > > We have several ways to go: > > a) continue with the current extra dependencies situation like in > zope.component, and in fact clean up other packages that define ZCML > to > declare ZCML extra dependencies.
Advertising
I did this in zope.component out of desperation. (I was preparing to teach a PyCon tutorial on using zope.component outside of Zope.) I'm not at all happy with it, however, I'd be in favor of continuing it for existing packages with zcml implementations so as not to introduce backward incompatibilities. I'd rather not do it for new packages. IMO, introducing an extra is like introducing a new package and in a rather complicated way. > b) pull out all ZCML implementations from where they are now and put > them in special ZCML implementation packages. We could for instance > have > zcml.component, or zope.component_zcml, or > zope.configuration.component. > We had a bit of a side-tracked discussion about naming and namespace > packages here. I think this is the right way to go for new software. > c) pull out only those ZCML implementations that cause extra > dependencies beyond zope.configuration. So, we extract the bits of > zope.component into a new package, but we don't extract bits from > zope.security. Too complicated imo. :) > I personally don't like extras. I think the ideal situation would be > if > packages had *no* extras at all (even test extras), as it complicates > reasoning about the dependency structure. +1 ... > For that reason, a) is not really an option for me. As I said above, I'm for a) because I think it is less disruptive, even though I share your distaste for extras. Jim -- Jim Fulton Zope Corporation _______________________________________________ Zope-Dev maillist - Zope-Dev@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg27665.html | CC-MAIN-2017-30 | refinedweb | 545 | 52.26 |
On 1/3/07, Matt Draisey <matt at draisey.ca> wrote: > > On Wed, 2007-01-03 at 17:17 -0800, Josiah Carlson wrote: > > > You can already do this with the following code: > > > > __gl = globals() > > for name in __gl.keys(): > > if name[:1] == '_' and len(name) > 1 and name.count('_') == 1: > > del __gl[name] > > del __gl > > > > - Josiah > > No, that is not what I meant. I wasn't talking about deleting > temporaries but not publishing private objects. Brett Cannon understood > when he said, "Private namespaces are not exactly a popular thing in > Python". But functional programming style and generators are all the > rage and they often drag in private state via a closure.). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/python-ideas/2007-January/000069.html | CC-MAIN-2016-44 | refinedweb | 122 | 74.59 |
I am not able to understand why this code doesnt compile :
class A {
public static void main(String[] args) {
System.out.println("hi");
}
}
private class B {
int a;
}
modifier private not allowed here // where I have defined class B
From the Java Language specification:
The access modifiers protected and private pertain only to member classes within a directly enclosing class declaration
So yes, the private and the protected modifiers are not allowed for top level class declarations.
Top-level classes may be public or not, while
private and
protected are not allowed. If the class is declared public, than it can be referred to from any package. Otherwise it can only be referred to from the same package (namespace).
A private top level classes wouldn't make much sense because it couldn't be referred to from any class. It would be unusable by definition.
private is OK for member classes to make a class referrable to only it's enclosing class.
A protected member class can be referred to from (1) any class of the same package and from (2) any subclass of the enclosing class. Mapping this concept to top level classes is difficult. The first case is covered by top level class with no access modifiers. The second case is not applicable for top level classes, because there is no enclosing class or something else from a different package with a special relation to this class (like a subclass). Because of this I think,
protected is not allowed because it's underlying concept is not applicable for top level classes. | https://codedump.io/share/SnaiaU7pLItp/1/java-problem-multiple-classes-in-a-single-file | CC-MAIN-2017-43 | refinedweb | 263 | 60.14 |
Components and supplies
Necessary tools and machines
Apps and online services
About this project
Is your loved one old and living alone? No need to worry. Using Infineon's DPS310 pressure sensor we developed a virtual assistant that neatly fits around your arm in the form of a band, which takes care of this time consuming job leaving your love ones is safe hands!
"The elderly currently represent about 14.5 percent of the U.S. population, and by 2030 there will be about 74 million older individuals. As healthcare costs escalate and pressures are placed on healthcare institutions to provide adequate care, new solutions to managing senior health are imperative."
After a quick search and having seen the shocking numbers this is our attempt to help the elderly!
THE STORY
It's been a few years since my grandma has been suffering from the early signs of Alzheimer's and unfortunately her condition seems to be deteriorating. This resulted in series of unexpected problems such as
1) wandering off to work (even though she was advised not to)
2) skipping her meals or sometimes having double meals
3) not doing her exercises
All this due to the lack of memory and also her unawareness of her condition.
This led to my mother needing to be with her almost the whole day long, seven days a week. This task proved to be very tiring and time consuming. That was when we got the idea to make a virtual assistant that could monitor my grandma's behavior and activities, and if needed could also alert a family member in case of an emergency. Giving my mother a break and at the same time keeping the patient in safe hands!
COMPONENTS IN THE SYSTEM / CONCEPT
Although this system focuses mainly on the issues faced by an Alzheimer patient we have inculcated many other systems in our project that impacts the lives of all elders helping them to cross their daily life's obstacles independently.
One of the main features of Infineon's pressure sensor is its size! This makes the key component portable and super versatile. In the heart of our system we have the Arduino Nano which is then connected to the DPS310 pressure sensor via the I2C bus. Actions and Behaviors will be coded to simulate the live motion of the patient. Using these graphs or values we determine the state of the patient and alert a member in case of an emergency. The data generated will be then represented on a small app using Blynk.
This whole system will be integrated into a compact yet good looking wearable band, blending this technology seamlessly into the patient's life.
FUNCTIONS AND CAPABILITIES
Here are the main functions and movements that our band will be able to analyse/detect :
Fall - studies show that the biggest issue with elders are of them losing balance and falling down. Often patients are helpless and it's only after sometime that help comes. This could be easily avoided with the Health Band by the detection of a sudden drop in pressure. Once detected a message is automatically sent to relatives, for aid preventing the injury to worsen.
Exercise - for old people it can be rather hard to get them to exercise, or go for a walk even though this will keep them fit and healthy. We figured that a way to motivate them could be by showing them the number of steps taken or in how much time they have walked for, so that they have something to push them to exercise.
And in my grandma's case the doctors have told her to walk around 1000 steps she is willing but she loses count. A counter helps her too!
Once the Health Band detects a "wave" motion it deducts that the patient has begun his/her exercise. Crest to crest or trough to trough marks one cycle. As steps are taken the number of cycles are generated and then displayed, making your very own step counter.
Fever - this one is pretty straight forward the sensor also gives the temperature. The band being in contact with the arm gives the live temperature of the patient. Any spikes or drops will be alerted again automatically via message to relatives.
State - tells the relatives the live state of the patient. For example if the Health Band detects not much changes of pressure the patient could be sleeping.
We have three states : Sleeping, Awake and Exercising. ( that said we have not perfected this function yet as sometimes we received wrong "states". We also plan to add more states)
CIRCUIT
Infineon's powerful DPS310 sensor can be synced using Bluetooth to an app. The app generates live visual representations of the sensor. Although it was useful it had its limitations for our concept.
So to tap into the sensor's data stream we connected an Arduino Nano, via the I2C bus. This allowed us to compute the data making it possible to deduct various scenarios that the patient was in.
Once we got that bit sorted out we connected the Arduino to a ESP8266 WiFi module giving it the means to communicate with a mobile app.
SETTING UP
Before setting up everything you will need to add pins to the sensor. Snip two lengths of seven pins and solder them on. You could use a breadboard to make things easier.
- DPS310 Pressure Sensor to Arduino Nano (I2C)
NOTE : the orientation of the sensor is as depicted in the picture
Pin 1 (SDA) on sensor => Analog Pin 4 on Arduino
Pin 2 (SCL) on sensor => Analog Pin 5 on Arduino
Pin 8 (GND) on sensor => GND on Arduino
- Arduino Nano to ESP8266 (WiFi module)
NOTE : the orientation of the module is as depicted in the picture
Pin 1 on WiFi module => Digital Pin 11 on Arduino
Pin 2 on WiFi module => Digital Pin 10 on Arduino
Pin 7 (GND) on WiFi module => GND on Arduino
Pin 8 (Power) on WiFi module => 3v3 on Arduino
MAKING THE BAND
All the components being small neatly fits on your wrist. To make the actual band we used Canvas and Foam to embed the components. And then Velcro to form the straps.
To make your DIY Health band start by cutting foam the width of your hand. Then arrange the various sensors, and cut out the final size. Round the edges to give it a neater look. Press pins into the foam (giving it protection and grip), and embed the battery into a small slot.
Now flip the Band and solder the connections, the pins should just be sticking out...Do a test run to see if everything is working.
Add Velcro strips to make straps. Wrap the band in canvas and stick it with hot glue, this gives a neat finish and feel!
As we don't have a 3D printer we plan to upgrade this prototype in the future with a 3D printed on that will have perfect slots for all the components giving an ergonomic design to our Health Band!
***We will update this project, when we print our final 3d printed model, with pictures and print files***
CODING THE VARIOUS ALGORITHMS
Before we start programming our system there are certain libraries that you will need to install for the program to function. The libraries that you will need to download are:
- Wire library ( Usually comes pre-installed, this is responsible for the communication between the Arduino Nano and DPS310 Pressure Sensor)
- DPS310 Pressure Sensor library
- Blynk library ( for the Arduino Nano to be able to communicate with the Blynk cloud)
Once you have downloaded each of the libraries the installation for each follow the same process: open the Arduino IDE and head to sketch (top of the window). Then from the drop down list click on include library. Next click on add .ZIP library. Now navigate to wherever you have stored the files that you downloaded and click open. Repeat the process for all three libraries.
Now you can try downloading the trial code and check whether it compiles. Upload it to your Arduino Nano and make sure your getting live data by opening the serial monitor (depicted by the monitor icon on the top right corner of the IDE).
If that works well go ahead and upload the main code, you can then start building your app.
BUILDING THE APP
To connect to the internet we use a pre-built:.
We used the App to make representations of the data in a user friendly manner.
Select Arduino Nano as your micro controller and as ''connection type'' WiFi . You will then receive a mail of the "auth token" which you need to input in the code, (mentioned in the code).
We added several widgets such as a Gauge to represent the live temperature, a Value Display for the step counter and an LCD Display showing the present state. These are the basic building blocks you can add many more functions for other specific cases.
CONCLUSIONS, SUCCESSFUL ON THE WHOLE!
The project had some errors and misreadings. One was the body temperature. read by the HealthBand was 36° Celsius (wrist temp) while the medical grade thermometer read it as 36.8° Celsius (armpit temp).
Our algorithms for the steps proved to be giving wrong counts at first but after several attempts of modification it worked rather accurately. Another problem was in the state function. We added more variables and statements to make it further understand other states.
In the end, we were able to fix the problems by re-calibration and the HealthBand successfully gathers the data needed. My grand mom has been without an assistant for the last two weeks and the Band has worked out great!
As per now the fall or fever messages haven't been tested as there has not been any such situations but theoretically they work!
This has been a great project and can be implemented rather easily, we hope this band can save lives and keep old people in safe hands!
Code
Generating Data TestArduino
#include <ifx_dps310.h> void setup() { Serial.begin(9600); while (!Serial); //Call begin to initialize ifxDps310 //The parameter 0x76 is the bus address. The default address is 0x77 and does not need to be given. //ifxDps310.begin(Wire, 0x76); //Use the commented line below instead to use the default I2C address. ifxDps310.begin(Wire); // IMPORTANT NOTE //If you face the issue that the DPS310 indicates a temperature around 60 C although it should be around 20 C (room temperature), you might have got an IC with a fuse bit problem //Call the following function directly after begin() to resolve this issue (needs only be called once after startup) //ifxDps310.correctTemp(); //temperature measure rate (value from 0 to 7) //2^temp_mr temperature measurement results per second int temp_mr = 2; //temperature oversampling rate (value from 0 to 7) //2^temp_osr internal temperature measurements per result //A higher value increases precision int temp_osr = 2; //pressure measure rate (value from 0 to 7) //2^prs_mr pressure measurement results per second int prs_mr = 2; //pressure oversampling rate (value from 0 to 7) //2^prs_osr internal pressure measurements per result //A higher value increases precision int prs_osr = 2; //startMeasureBothCont enables background mode //temperature and pressure ar measured automatically //High precision and hgh measure rates at the same time are not available. //Consult Datasheet (or trial and error) for more information int ret = ifxDps310.startMeasureBothCont(temp_mr, temp_osr, prs_mr, prs_osr); //Use one of the commented lines below instead to measure only temperature or pressure //int ret = ifxDps310.startMeasureTempCont(temp_mr, temp_osr); //int ret = ifxDps310.startMeasurePressureCont(prs_mr, prs_osr); if (ret != 0) { Serial.print("Init FAILED! ret = "); Serial.println(ret); } else { Serial.println("Init complete!"); } } void loop() { unsigned char pressureCount = 20; long int pressure[pressureCount]; unsigned char temperatureCount = 20; long int temperature[temperatureCount]; //This function writes the results of continuous measurements to the arrays given as parameters //The parameters temperatureCount and pressureCount should hold the sizes of the arrays temperature and pressure when the function is called //After the end of the function, temperatureCount and pressureCount hold the numbers of values written to the arrays //Note: The Dps310 cannot save more than 32 results. When its result buffer is full, it won't save any new measurement results int ret = ifxDps310.getContResults(temperature, temperatureCount, pressure, pressureCount); if (ret != 0) { Serial.println(); Serial.println(); Serial.print("FAIL! ret = "); Serial.println(ret); } else { Serial.println(); Serial.println(); Serial.print(temperatureCount); Serial.println(" temperature values found: "); for (int i = 0; i < temperatureCount; i++) { Serial.print(temperature[i]); Serial.println(" degrees of Celsius"); } Serial.println(); Serial.print(pressureCount); Serial.println(" pressure values found: "); for (int i = 0; i < pressureCount; i++) { Serial.print(pressure[i]); Serial.println(" Pascal"); } } //Wait some time, so that the Dps310 can refill its buffer delay(10000); }
HealthBand with Blynk ApplicationArduino
#include <ifx_dps310.h> #include <ESP8266WiFi.h> #include <BlynkSimpleEsp8266.h> ); ifxDps310.begin(Wire); int ret = ifxDps310.setInterruptPolarity(1); ret = ifxDps310.setInterruptSources(1, 0, 0); //clear interrupt flag by reading ifxD = ifxD ifxD ifxDps310.getContResults(&temperature[temperatureCount], temp_freespace, &pressure[pressureCount], prs_freespace); //after reading the result counters are increased by the amount of new results pressureCount += prs_freespace; temperatureCount += temp_freespace; }
Schematics
Author
Technovation
- 3 projects
- 37 followers
Published onSeptember 27, 2017
Members who respect this project
you might like | https://create.arduino.cc/projecthub/Technovation/health-band-a-smart-assistant-for-the-elderly-0fed12 | CC-MAIN-2019-04 | refinedweb | 2,214 | 61.06 |
In this section we are going to discuss about command line argument in java. Command line argument allow a user to pass arguments at the time of running the application , after the class name. Java allow user to pass any number of argument to the command line. When running the application, argument is given after a class name separated by space. Suppose we are running java application, then class name followed by argument to the command line. It accept into the string argument of main method.
Command line argument accept all argument as string, If we want the application to support numeric command line argument then the following code should be written.
int Arg; if (args.length > 0) { try { Arg = Integer.parseInt(args[0]); // Converting to Integer. System.out.println("Argument is ="+Arg); } catch (NumberFormatException e) { System.out.println("Argument" + " must be an integer"); }
Syntax: java Classname argument1 argument 2 argument3 ....
Example: java Commandline Welcome to Rose India
public class Commandline { public static void main(String args[]) { int size=args.length; if(size<1) { System.out.println("Please pass some value"); } int counter=1; for(int i=0;i<size;i++) { System.out.println( +counter+ " argument = "+args[i]); counter++; } } }
Output: After compiling and executing of above program.
In the above program we pass argument to command line as Welcome to Rose India , command line display it each line by itself , This is because space separate the command line argument. To do it as a single argument user would join them by enclosing quotation marks. like "Welcome to rose India" .
If you enjoyed this post then why not add us on Google+? Add us to your Circles
Liked it! Share this Tutorial
Discuss: Command line argument in java
Post your Comment | http://www.roseindia.net/java/beginners/commandlineargument.shtml | CC-MAIN-2015-11 | refinedweb | 290 | 50.84 |
13 November 2008 09:03 [Source: ICIS news]
SINGAPORE (ICIS news)--Crude fell more than $1/bbl, pushing prices to fresh 22-month lows on Thursday, with NYMEX light sweet crude slipping below $55/bbl amid growing concerns over the global economy.
?xml:namespace>
At 16:34 hours Singapore time (0834GMT), December NYMEX light sweet crude futures were trading at $55.15/bbl, down $1.01/bbl on Wednesday’s settlement level, after hitting $54.67/bbl, levels not seen since January last year.
At the same time, December Brent on ?xml:namespace>
The December ICE Brent contract expires at the close of business on Thursday.
Crude extended losses made over the previous two sessions as concerns continued to grow over the health of the global economy with news that
Meanwhile, the US Energy Information Administration (EIA) in its Short Term Energy Outlook released on Wednesday, drastically reduced its 2008 US oil demand forecast to around 19.6m bbl/day, down 1.1m bbl/day or 5.4% from the 2007 average. This is the first time since 1980 that
The EIA also forecast that global consumption in 2008 will increase by just 100,000 bbl/day in 2008 and remain flat in 2009.
There are expectations that the International Energy Agency (IEA), which will issue its monthly report on Thursday, will further reduce its global demand forecasts.
With prices continuing to fall, OPEC members are considering holding an emergency meeting on 28 November in
Meanwhile, the EIA will release US weekly inventory data later on Thursday. The data is expected to reveal further builds in crude stocks and product stocks. Crude is forecast to rise by around 1.2m bbl while, distillates and gasoline stocks are forecast to increase by 800,000 bbls and 300,000 bbls, | http://www.icis.com/Articles/2008/11/13/9171157/crude-hits-22-month-lows-as-economic-woes-mount.html | CC-MAIN-2014-23 | refinedweb | 298 | 63.59 |
Red Hat Bugzilla – Bug 128390
stat() fails on files larger than 2G
Last modified: 2007-11-30 17:06:54 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4.2)
Gecko/20040301
Description of problem:
stat() call fails on files over 2G. It failed on U4 and also with the
new glibc in the U5beta (glibc-2.2.4-32.16).
This was originally reported as "/usr/bin/test -f" fails with large
files. Issue tracker #36853
Here is a small program that demonstrates the problem.
[root@jarjar footest]# ls -l
total 2152536
-rw-r--r-- 1 root root 145 Jul 16 16:46 ls-out
-rwxr-xr-x 1 root root 16417 Jul 22 09:25 stattest
-rw-r--r-- 1 root root 380 Jul 22 09:25 stattest.c
-rw-r--r-- 1 root root 49 Jul 22 09:16 stattest.c~
-rw-r--r-- 1 root root 2202009600 Jul 16 16:45 test-file
[root@jarjar footest]# cat stattest.c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/errno.h>
main()
{
struct stat st;
int retval;
retval = stat ("./test-file", &st);
printf("Big file result = %d\n", retval);
if (retval < 0)
printf("errno = %d\n", errno);
retval = stat ("./ls-out", &st);
printf("Small file result = %d\n", retval);
if (retval < 0)
printf("errno = %d\n", errno);
}
[root@jarjar footest]# ./stattest
Big file result = -1
errno = 75
Small file result = 0
Version-Release number of selected component (if applicable):
glibc-2.2.4-32.16
How reproducible:
Always
Steps to Reproduce:
1. Create a large file
2. run the test program
3. observe the results
Actual Results: stat call fails
Expected Results: stat call doesn't fail
Additional info:
error 75 is EOVERFLOW. 32-bit stat() is *required* to return
EOVERFLOW for files larger than 2G. See
for the relevant standards.
If you want the stat to succeed, you must either use the stat64()
variant --- if necessary, enabling that via
#define _LARGEFILE64_SOURCE 1
or compile with transparent 64-bit file size support by using that
#define plus
#define _FILE_OFFSET_BITS 64
Only if you do the latter will stat() automatically use the 64-bit
extended struct stat.
If "/usr/bin/test -f" is failing, that implies the test binary is not
using the 64-bit stat variants as it should do.
For transparent 64-bit file size support actually just
-D_FILE_OFFSET_BITS=64 is enough, _LARGEFILE64_SOURCE is not needed.
Well /usr/bin/test belongs to sh-utils, doesn't it?
For some reason the spec file explicitly uses --disable-largefile when
running configure:
%configure %{?this_os_is_linux: --disable-largefile --enable-pam }
I've rebuilt sh-utils without --disable-largefile, and it solved this
problem. Also, all the tests in the script tests/test/test-tests
passed (for what that's wort).
*** Bug 133386. | https://bugzilla.redhat.com/show_bug.cgi?id=128390 | CC-MAIN-2017-17 | refinedweb | 471 | 66.44 |
tracing
Distributed tracing
See all snapshots
tracing appears in
Maintained by mtth@apache.org
This version can be pinned in stack with:
tracing-0.0.5.2@sha256:a54ea17777c8a41a52e920e1c9d3842ee0c237edb1b09c14fa65f74f401c22fc,1442
Module documentation for 0.0.5.2
- Control
- Control.Monad
- Monitor
Depends on 15 packages(full list with versions):
Tracing
An OpenTracing-compliant, simple, and extensible distributed tracing library.
- Simple: add a single
MonadTraceconstraint to start tracing, without making your code harder to test!
- Extensible: use the built-in Zipkin backend or hook in your own trace publication logic.
import Monitor.Tracing -- A traced action with its root span and two children. run :: MonadTrace m => m () run = rootSpan alwaysSampled "parent" $ do childSpan "child-a" runA childSpan "child-b" runB
To learn more, hop on over to
Monitor.Tracing,
or take a look at examples in the
examples/ folder. | https://www.stackage.org/lts-16.23/package/tracing-0.0.5.2 | CC-MAIN-2020-50 | refinedweb | 138 | 51.04 |
screen_input_guard_enable()
Enable Screen Input Guard.
Synopsis:
#include <bps/screen_input_guard.h>
BPS_API int screen_input_guard_enable(void)
Since:
BlackBerry 10.2.0
Arguments:
Library:libbps (For the qcc command, use the -l bps option to link against this library)
Description:
The screen_input_guard_enable() function enables Screen Input Guard. That is, when something (assumed to be a face) is detected to be near the device, the screen will turn off and the touchscreen will not respond to touch input. When that something is no longer near the device, the screen will turn on and the touchscreen will again respond to touch input.
To disable Screen Input Guard call screen_input_guard_disable(). | http://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/screen_input_guard_enable.html | CC-MAIN-2018-05 | refinedweb | 104 | 65.73 |
Design goals simplicity, reusability, speed, complete separation of logic from formatting. Feature set variables, loops, conditionals, extensibility of tags, includes, arbitrary delimiters. Usage For starters, make sure you 'use Text::Tmpl'. Each fun...DLOWE/Text-Tmpl-0.33 5 (1 review) - 28 Oct 2008 18:48:49 GMT - Search in distribution
- libtmpl - Templating system C library
- template_syntax - description of the syntax of a Text::Tmpl template.
- template_extend - how to extend the Text::Tmpl template library (with C or Perl).
Compile templates with embedded perl code into anonymous subroutines. These subroutines can be (optionally) cached, and executed to render these templates with (optional) parameters. Perl code in templates will be executed with: package PACKAGE_WHERE...POWERMAN/Text-MiniTmpl-v2.0.0 - 16 Feb 2016 00:55:14 GMT - Search in distribution
SGRAHAM/Template-Benchmark-1.09 4 (1 review) - 18 Oct 2010 09:30:20 GMT - Search in distribution
- Template::Benchmark::Engine - Base class for Template::Benchmark template engine plugins.
The Template::Alloy::Tmpl role provides the syntax and the interface for Text::Tmpl. It also brings many of the features from the various templating systems. See the Template::Alloy documentation for configuration and other parameters....RHANDOM/Template-Alloy-1.020 5 (3 reviews) - 20 Sep 2013 18:59:38 GMT - Search in distribution
- Template::Alloy - TT2/3, HT, HTE, Tmpl, and Velocity Engine
- bench_various_templaters.pl - test the relative performance of several different types of template engines.
<a href="#[ echo h.url_for('about_us') ]#">Hello!</a> #[include "include.inc"]# Use Template::Alloy::Tmpl for rendering. Please see Mojolicious::Plugin::AlloyRenderer for configuration options. Note: default delimiters (*START_TAG* and *END_TAG*) are...AJGB/MojoX-Renderer-Alloy-1.121150 - 24 Apr 2012 23:53:00
Exposing helper subs from various packages that would be useful in writing charm hooks. Including but not limited too strict, warnings, utf8, Path::Tiny, etc .....ADAMJS/App-CharmKit-1.0.6 - 21 Nov 2014 14:10:24 GMT - Search in distribution
- App::CharmKit::Helper - charm helpers
Shodo generates Web API documents as Markdown format automatically and validates parameters using HTTP::Request/Response. THIS IS A DEVELOPMENT RELEASE. API MAY CHANGE WITHOUT NOTICE....YUSUKEBE/Shodo-0.08 - 27 Dec 2013 08:18:12 GMT - Search in distribution
Tiffany is a generic interface for Perl5 template engines....TOKUHIROM/Tiffany-1.01 - 04 Sep 2013 02:50:26 GMT - Search in distribution
- Tiffany::Text::MicroTemplate::Extended - Tiffany gateway for Text::MicroTemplate::Extended
Books must be high available for readers and writers ! WriteAt - suite for free book makers. It help make and prepare book for publishing....ZAG/WriteAt-0.07 - 14 Dec 2015 13:39:48 GMT - Search in distribution
In an ideal web system, the HTML used to build a web page would be kept distinct from the application logic populating the web page. This module tries to achieve this by taking over the chore of merging runtime data with a static html template. The H...ISTEEL/HTMLTMPL-1.34 - 04 Oct 2001 22:05:55 GMT - Search in distribution
Plosurin - Perl implementation of Closure Templates. Template Structure Every Soy file should have these components, in this order: * A namespace declaration. * One or more template definitions. Here is an example template: {namespace examples.simple...ZAG/Plosurin-v0.1.1 - 14 Jan 2012 08:56:35 GMT - Search in distribution
N.B. This module should be considered BETA quality. Bugs are expected. This module provides yet another mechanism through which XML files can be created from Perl. It does this by reading in a valid XML template, and binding data directly into the DO...CURSORK/xml-binddata-0.1.2 - 16 Jul 2015 11:58:30 GMT - Search in distribution
This module offers the method of utility for Egg....LUSHE/Egg-Release-3.14 - 29 May 2008 16:11:16 GMT - Search in distribution
Text::Haml implements Haml <> specification. Text::Haml passes specification tests written by Norman Clarke and supports only cross-language Haml features. Do...VTI/Text-Haml-0.990117 - 25 Apr 2016 09:26:15
This command adds a database change to one or more plans. This will result in the creation of script files in the deploy, revert, and verify directories, and possibly others. The content of these files is determined by the evaluation of templates. By...DWHEELER/App-Sqitch-0.9994 - 08 Jan 2016 19:48:09 | https://metacpan.org/search?q=Text-Tmpl | CC-MAIN-2016-18 | refinedweb | 716 | 50.53 |
In Neural Networks Primer, we went over the details of how to implement a basic neural network from scratch. We saw that this simple neural network, while it did not represent the state of the art in the field, could nonetheless do a very good job of recognizing hand-written digits from the mnist database. An accuracy of about 95% was quite easy to achieve.
When I learned about how such a network operates, one thing that immediately jumped out at me was that each of the neurons in the input layer was connected to all of the neurons in the next layer: As far as the network is concerned, all of the pixels start off as if they were jumbled in a random order!
In a way, this is very cool. The network learns everything on its own, not only patterns within the data, but also the very structure of the input data itself. However, this comes at a price: The number of weights and biases in such a fully connected network grows very quickly. Each mnist image has 28×28 pixels, so the input layer has 28×28, or 784 neurons. Let's say we set up a fully connected hidden layer with 30 neurons. That means we now have 28×28×30, or 23,520 weights, plus 30 biases, for our network to keep track of. That already adds up to 23,550 parameters! Imagine the number of parameters we'd need for 4k ultra HD color images!
Introducing Convolutions
I believe this article works on its own, but it can also be considered as a supplement to chapter 6 of Neural Networks and Deep Learning, by Michael Nielsen. In that chapter, there is a general discussion of convolutional neural networks, but the details of backpropagation and chaining are left as an exercise for the reader. For the most part, these are the problems that I've worked through in detail in this article.
Using convolutions, it is possible to reduce the number of parameters required to train the network. We can take advantage of what we know about the structure of the input data. In the case of images for example, we know that pixels that are close to one another can be aggregated into features.
Convolution has been an important piece of the puzzle in the development of deep learning. The term deep learning sounds almost metaphysical, but its meaning is actually simple: It's the idea of increasing the depth - the number of hidden layers - in a network. Each layer progressively extracts higher-level features from the previous layer. From wikipedia:
For example, in image processing, lower layers may identify edges, while higher layers may identify human-meaningful items such as digits/letters or faces.
How does convolution work? Let's start with the basic idea, and we'll get into more detail as we go along. We will start with our input data in the form of a 2-d matrix of neurons, with each input neuron representing the corresponding pixel. Next, we apply an overlay, also known as a filter, to the input data. The overlay is also a 2-d matrix that's smaller than (or the same size as) the input matrix. We can choose the appropriate overlay size. We place the overlay over the top left-hand corner of the input data. Now we multiply the overlay with the underlay, that is, the part of the input data covered by the overlay, to produce a single value (we'll see how this multiplication works a bit later).
We assign this value to the first neuron in a 2-d result matrix, which we'll call a feature map. We move the overlay over to the right, and perform the same operation again, yielding another neuron for the feature map. Once we reach the end of the first row of the input in this manner, we move down and repeat the process, continuing all the way to the last overlay in the bottom right-hand corner. We can also increase how much we slide the overlay for each step. This is called the stride length. In the diagram below, the blue overlay yields the blue neuron; the red overlay produces the red neuron; and so on across the image. The green overlay is the last one in the feature map, and produces the green neuron:
If our image is an M×N grid, and our overlay is an I×J grid (and we use a stride length of 1), then moving the overlay in this manner will produce an (M-I+1)×(J-N+1) grid. For example, if we have a 4×5 input grid, and a 2×2 overlay, the result will be a 3×4 grid. Convolution is what we call this operation of generating a new grid by moving an overlay across an input matrix.
Often convolutional layers are chained together. The raw input, such as image data, is sent to a convolutional layer that contains several feature maps. This convolutional layer may in turn be connected to another convolutional layer that further organizes the features from the previous convolutional layer. We can see in this idea the emergence of deep learning.
Feature Map Activation
In the previous section, we learned that the process of convolution is used to create a feature map. Now let's go into a bit more detail about how this works. How do we calculate the activations for neurons in the feature map?
We know that an overlay, or filter, is convolved with the input data. What is this overlay? It turns out this is a matrix that represents the weights that connect each feature neuron to the underlay in the previous layer. We place an overlay of weights over the input data. We take the activation of each cell covered by the overlay and multiply it by its corresponding weight, then add these products together.
An easy way to compute this operation for a given feature neuron is to flatten the activations of the underlay into a single column and to flatten the weights filter into a single row, then perform a dot product between the weights and the activations. This multiplies each cell in the overlay with its corresponding cell in the underlay, then adds these products together. The result is a single value. This is the sense in which we multiply the overlay and the underlay as mentioned earlier.
To obtain the raw activation z for the corresponding feature neuron, we just need to add the bias to this value. Then we apply the activation function σ(z) to obtain a. Hopefully this looks familiar: To generate the activation for a single neuron in the feature map, we perform the same calculation that we used in the previous article - the difference is that we're applying it to a small region of the input this time. This idea is shown in the diagram below:
Having performed this step to generate an activation value for the first neuron in the feature map, we can now slide the overlay of weights across the input matrix, repeating this same operation as we go along. The feature map that's produced is the result of convolving the input matrix with the weights matrix. Actually, in math, the operation I've described is technically called a cross-correlation rather than a convolution. A convolution involves rotating the filter by 180° first. The two operations are very similar, and it seems the terms are often used somewhat interchangeably in machine learning. In this article, we will end up using both cross-correlation and convolution.
Note that we keep using the same matrix of weights as a filter across the entire input matrix. This is an important trick: We only maintain one bias and one set of weights that are shared among all of the neurons in a given feature map. This saves us a lot of parameters! Let's say we have the same 28×28, or 784 input neurons and we choose a 4×4 overlay. That will produce a 25x25 feature map. This feature map will have just 16 shared weights and 1 shared bias. We will usually set up several independent feature maps in the first convolutional layer. Let's suppose that we set up 16 feature maps in this case. That means we've got 17×16, or 272 parameters in this convolutional layer, far fewer than the 23,550 parameters we considered earlier for a fully connected layer.
Let's examine a simple example: Our input layer is a 3×3 matrix, and we use a 2×2 overlay. The diagram below shows how the input matrix is cross-correlated with the the weights matrix as an overlay to produce the feature map - also a 2×2 matrix in this case:
The overlay is a 2×2 matrix of weights. We start by placing this matrix over top of the activations in the top left-hand corner of the 3×3 input matrix. In the image above, each weight in the overlay matrix is represented as a colored curve pointing to the corresponding neuron in the feature map. Each activation in the underlay (the part of the input matrix covered by the overlay) is colored to match the weight it corresponds to. We multiply each activation in the underlay by its corresponding weight, then we add up these products into a single value. This value is fed into the corresponding feature neuron. We can now slide the overlay across the image, repeating this operation for each feature neuron. Again, we say that the feature map this produces is the result of cross-correlating the input data with the shared weights matrix as an overlay or filter.
The code that produces the activations for a feature map is shown below (the full code is available in the code section at the end of the article):
self.z = sp.signal.correlate2d(self.a_prev, self.w, mode="valid") + self.b self.a = sigmoid(self.z)
What is the meaning of the shared weights and bias? The idea is that each neuron in a given feature map is looking for a feature that shows up in part of the input. What that feature actually looks like is not hard-coded into the network. The particular feature that each feature map learns is an emergent property that arises from the training of the network.
It's important to note that, since all of the neurons in a feature map share their weights and bias, they are in a sense the same neuron. Each one is looking for the same feature across different parts of the input. This is known as translational invariance. For example, let's say we want to recognize an image representing the letter U, as shown below:
Maybe the network will learn a vertical straight line as a single feature. It could then identify the letter U as two such features next to each other - connected by a different feature at the bottom. This is somewhat of an oversimplification, but hopefully it gets the flavour of the idea across - the vertical lines in this example would be two different neurons in the same feature map.
Keep in mind that each of the neurons in a feature map will produce the same activation if they receive the same input: If an image has the same feature in two parts of the screen, then both corresponding feature neurons will fire with exactly the same activation. We will use this intuition when we derive the calculations for backpropagation through a feature map.
For a given grid of input neurons, we likely want to train more than one feature map. Therefore, we can connect the input layer to several independent feature maps. Each feature map will have its own weights and bias, completely independent from the other feature maps in that layer. We can call such a collection of feature maps a convolutional layer.
Backpropagation through a Feature Map
Next, let's work out how to do backpropagation through a feature map. Let's use the same simple example of a 3×3 input matrix and a 2×2 weights filter. Since our feature map is also a 2×2 matrix, we can expect to receive ∂C/daL as a 2×2 matrix from the next layer during backpropagation.
Bias Gradient
The first step in backpropagation is to calculate ∂C/dbL. We know the following equation obtained by using the chain rule:
In this context, we can see that for each feature neuron, we can multiply its σ'(z) value by its ∂C/da value. This yields a 2×2 matrix that tells us the value of ∂C/db, the derivative of the cost with respect to the bias, for each feature neuron. The diagram below shows this result:
Now, all of the feature neurons share a single bias, so how should we aggregate these four values into a single value? Here, it's helpful to recall that in a sense, all of the feature map neurons are really a single neuron.
Each neuron in the feature map receives its own small part of the previous layer as input, and produces some activation as a result. During backpropagation, ∂C/daL tells us how an adjustment to each of these activations will affect the cost function. It's as if we only had a single neuron that received multiple consecutive training inputs, and for each of those inputs, it received a value of ∂C/daL during backpropagation. In that case, we'd adjust the bias consecutively for each training input as follows:
- b -= ∂c/db1 * step_size
- b -= ∂c/db2 * step_size
- b -= ∂c/db3 * step_size
- b -= ∂c/db4 * step_size
In fact, we can do just that. We add together the values of ∂C/db for each feature neuron. We can see that adjusting the bias using this sum produces the same result as we see in the above equations, thanks to the associativity of addition:
b -= (∂c/db1 + ∂c/db2 + ∂c/db3 + ∂c/db4) * step_size
Now that we have some intuition for this calculation, can we find a simple way to express it mathematically? In fact, we can think of this as another, very simple, cross-correlation. We have a 2×2 matrix for ∂C/daL and a 2×2 matrix for σ'(zL). Since they're the same size, cross-correlating them together yields a single value. The cross correlation multiplies each cell in the overlay by its corresponding cell in the underlay, then adds these products together, which is the cumulative value of ∂c/dbL we want. We will also retain the four constituent values of ∂c/db for use in the subsequent backpropagation calculations.
The following line of code demonstrates this calculation (full code listing is in the code section at the end of the article):
b_gradient = sp.signal.correlate2d(sigmoid_prime(self.z), a_gradient, mode="valid")
Weight Gradient
The next step in the backpropagation is to calculate ∂C/dwL. The chain rule tells us:
Can we find a way to apply this idea to our feature map in a way that makes intuitive sense? We know that each neuron in the feature map corresponds to a 2×2 portion of the previous layer's activations. We can multiply the local value of ∂c/db for each feature neuron by each of the matching activations in the previous layer. This yields four 2×2 matrices. Each matrix represents the component of ∂C/dwL for a given neuron in the feature map. As before, we can add these all together to get the cumulative value of ∂C/dwL for this feature map. The diagram below illustrates this idea:
It turns out that we can concisely express this calculation as a cross-correlation as well. We can take the 3×3 matrix of activations in the previous layer and cross-correlate it with the 2×2 matrix representing the components of ∂c/db. This yields the same 2×2 matrix as the sum of the matrices in the previous diagram. The code for this logic is below (full code listing is in the code section at the end of the article):
w_gradient = sp.signal.correlate2d(self.a_prev, b_gradient_components, mode="valid")
Activation Gradient for Previous Layer
The last step in backpropagation is to calculate ∂C/daL-1. From the chain rule, we know:
How can we make this work with our convolutional feature map? Earlier, we worked out the components of ∂C/db for each neuron in the feature map. Here, we map these values back to the overlays they correspond to in the input matrix. We multiply each component of ∂C/db by its corresponding weight for that position in the overlay. For each feature neuron, we set the parts of the input matrix that are not covered to zero. The four feature map neurons thus produce 4 3×3 matrices. These are the components of ∂C/daL-1 corresponding to each feature map neuron. Once again, to get the cumulative value, we add them together to obtain a single 3×3 matrix representing the cumulative value for ∂C/daL-1. The diagram below illustrates this process:
I found it harder to determine how to interpret this process in terms of the cross-correlation or convolution we've used before. After doing some research, I found out that there are several flavours of cross-correlation/convolution. For all of the calculations we've looked at so far, it turns out that we've been using valid cross-correlations. A valid convolution or cross-correlation is when the overlay stays entirely within the bounds of the larger matrix.
We can still use the same basic technique we've employed so far for this calculation as well, but we need to use a form called full convolution/cross correlation. In this variation, the overlay starts in the top left corner covering just the single cell in that corner. The rest of the overlay extends beyond the boundary of the input data. The values in that region of the overlay are treated as zeros. Otherwise the process of convolution or cross-correlation is the same. I found this link about different convolution modes helpful.
We can see that to obtain the result we want, we can apply this process using the components of the 2×2 matrix for ∂C/db as an overlay over top of the 2×2 shared weights matrix w. Since we start with the overlay covering only the single weight in the top left-hand corner, the result will be a 3×3 matrix, which is what we want for ∂C/daL-1.
In order for our calculations to match the calculations shown earlier, we need to rotate the ∂C/db filter matrix by 180° first though. That way we start with ∂C/db0,0 covering w0,0. If you follow through with this calculation, you will find that the end-result is the same as the sum of the four 3×3 matrices in the previous diagram. We've used the cross-correlation operation up until now. Here, since we have to rotate the filter, we are actually doing a proper convolution operation. The diagram below shows the starting position of the full convolution of ∂C/db with the weights matrix w.
The code for this is as follows (full code listing is in the code section at the end of the article):
a_prev_gradient = sp.signal.convolve2d(self.w, b_gradient_components, mode="full")
Chaining Convolutional Layers
It's common to chain together several convolutional layers within a network. How does this work? The basic idea is that the first convolutional layer has one or more feature maps. Each feature map corresponds to a single feature. Roughly speaking, each neuron in a feature map tells us whether that feature is present in the receptive field for that neuron (that is, the overlay in the previous layer for that neuron). When we send the activations from a convolutional layer to another one, we are aggregating lower-level features into higher-level ones. For example, our network might learn the shapes "◠" and "◡" as features for two feature maps in a single convolutional layer. These may be combined in a feature map in the next convolutional layer as an "O" shape.
Let's think about how to calculate the activations. Suppose we have three feature maps in the first convolutional layer and two feature maps in the second convolutional layer. For a given feature map in the second layer, we will need a distinct weights filter for each feature map in the previous layer. In this case, that means we'll need three filters for each feature map in the second layer. We cross-correlate each of the feature maps in the first layer with its corresponding weights filter for the feature map in the second layer. That means we generate three feature maps for the first map in the second layer and three feature maps for the second map in the second layer. We add each triple of feature maps together to produce the two feature maps we want in the second layer - we also add the bias and apply the activation function at that point. This design is similar to fully connected layers. The difference is that, instead of individual neurons, each feature map in the previous layer is connected to every feature map in the next layer.
Conceptually, we're saying that if the right combination of the three features in the previous layer is present, then the aggregate feature that the corresponding feature map in the next layer cares about will also be present. Since there is a separate weights filter for each feature map in the previous layer, this lets us determine how the features from the previous layer need to be aggregated together for each feature in the next layer.
The diagram below illustrates how the feature maps in a given convolutional layer can be aggregated together into the next convolutional layer:
The number of filters for each feature map in the next layer matches the number of feature maps in the previous layer. The filter size determines the feature map size in the next layer.
This process can be described using 3-d matrices (see appendix B for example calculations). We combine the feature maps in the previous layer into a single 3-d matrix, like a stack of pancakes. For each feature map in the next layer, we also stack the filters into a 3-d matrix. We can cross-correlate the 3-d matrix representing the feature maps in the previous layer with the 3-d matrix representing the corresponding filters. Since both matrices have the same depth, the result will be the 2-d matrix we want for the feature map in the next layer.
To understand why we end up with a 2-d matrix, consider the case of cross-correlating or convolving two 2-d matrices in valid mode that have the same width. The result will be a 1-d matrix. For example, if we have a 7×3 matrix and we cross correlate it with a 2×3 matrix, we get a 6×1 matrix. Here it is the depth of the 3-d matrices that matches, so during cross-correlation or convolution, the values are added together depth-wise and collapsed into single values.
Backpropagation should be an application of all of the principles we've worked out so far:
- We use our usual method to obtain a 2-d matrix representing the components of ∂C/db for a given feature map in the next layer.
- To calculate the gradient for the filters, ∂C/dw, for that next-layer feature map, we cross-correlate the 3-d feature map activation matrix from the previous layer with our 2-d ∂C/db matrix representing the bias gradient components. This gives us a 3-d matrix for ∂C/dw for the current feature map in the next layer - each slice corresponds to the weights for one of the feature maps in the previous layer.
- For ∂C/daL-1, we convolve our 3-d weight matrix, w, with our 2-d ∂C/db matrix for a given feature map in the next layer. This gives us a 3-d matrix for ∂C/daL-1 that represents the derivative of the cost with respect to the activations of each feature map in the previous layer (corresponding to our current feature map in the next layer). We repeat this calculation for each feature map in the next layer and add together the resulting matrices. Each slice of this final matrix represents the value of ∂C/da for the corresponding feature map in the previous layer.
When we correlate or convolve what we may think of in conceptual terms as a 2-d matrix with a 3-d matrix, we need to wrap the 2-d matrix in an extra set of brackets - technically these operations require both sides to have the same dimensionality.
Max Pooling
Another technique that's sometimes used with convolutional layers is max pooling. The idea is pretty simple: We move an overlay across a feature map in a way that's similar to convolution. However, each neuron in a max pooling mapping just takes the neuron from the corresponding overlay that has the highest activation and passes that activation to the next layer. This clearly further reduces the number of parameters, so that's one benefit of this technique. I believe, by abstracting the input, it can also help to avoid the overfitting problem, an issue that comes up frequently in machine learning.
The backpropagation for max pooling is straightforward. For a neuron in a max pooling map, we simply pass back the value of ∂C/da to the neuron with the max activation in the corresponding overlay from the previous layer. The other gradient values in the overlay are set to 0, since those neurons did not pass along their activations, and therefore did not contribute to the cost. The diagram below shows the forward and back propagation steps for a max pooling map:
Discussion
Convolutional neural networks, or CNNs, represent a significant practical advance in the capabilities of neural networks. Such networks can achieve better accurancy as well as improved learning speed. In Michael Nielsen's Neural Networks and Deep Learning, he combines a CNN with some other techniques to achieve over 99% accuracy recognizing the mnist digits! That's a significant improvement over the 95% achieved using a fully connected network.
However, it is worth noting that CNNs are not a panacea. For example, while CNNs do a good job of handling translational invariance across the receptive field, they don't handle rotation.
In this article, I've endeavoured to highlight the key differences between convolutional and fully connected networks. To do so, I've tried to keep as much logic as possible the same as in the previous article. For example, we continue to use the sigmoid activation function in this article. In practice, this is rarely the case. In deep learning, in addition to convolution, we usually see the use of some other techniques:
- The quadratic cost function is replaced with something else, e.g. a cross-entropy cost function
- Instead of sigmoid, different activation functions are used like ReLU, Softmax, etc.
- Regularization is applied to network weights in order to reduce overfitting
Code
Below I've implemented several classes for demonstration purposes. There's a
FeatureMap that implements forward and backpropagation for a single feature. Several such feature maps would normally be used to put together a single convolutional layer. There's also a
MaxPoolingMap which implements max pooling from a feature map. Lastly, there's a
FullyConnectedLayer, which implements the logic discussed in the previous article. In CNNs, there are usually several convolutional layers and then a fully connected layer as the last hidden layer. This fully connected layer effectively aggregates all of the feature-building stages that precede it before sending its activations to the output layer (it occurs to me that we can also implement this as a convolutional layer where each feature map is a 1×1 matrix).
import numpy as np import scipy as sp from scipy import signal class FeatureMap: def __init__(self, a_prev, overlay_shape): # 2d matrix representing input from previous layer self.a_prev = a_prev # shared weights and bias for this layer self.w = np.random.randn(*overlay_shape) self.b = np.random.randn(1,1) def feed_forward(self): self.z = sp.signal.correlate2d(self.a_prev, self.w, mode="valid") + self.b self.a = sigmoid(self.z) return self.a def propagate_backward(self, a_gradient, step_size): b_gradient_components = dc_db(self.z, a_gradient) b_gradient = sp.signal.correlate2d(sigmoid_prime(self.z), a_gradient, mode="valid") w_gradient = sp.signal.correlate2d(self.a_prev, b_gradient_components, mode="valid") a_prev_gradient = sp.signal.convolve2d(self.w, b_gradient_components, mode="full") self.b -= b_gradient * step_size self.w -= w_gradient * step_size self.a_prev_gradient = a_prev_gradient return self.a_prev_gradient class MaxPoolingMap: def __init__(self, a_prev, overlay_shape): self.a_prev = a_prev self.overlay_shape = overlay_shape def feed_forward(self): self.max_values, self.max_positions = max_values_and_positions( self.a_prev, self.overlay_shape) return self.max_values def propagate_backward(self, a_gradient): a_prev_gradient = np.zeros(self.a_prev.shape) rows, cols = self.max_values.shape for r in xrange(rows): for c in xrange(cols): max_position = self.max_positions[r][c] a_prev_gradient[max_position] += a_gradient[r][c] self.a_prev_gradient = a_prev_gradient return self.a_prev_gradient class FullyConnectedLayer: def __init__(self, a_prev, num_neurons): self.a_prev = a_prev self.num_neurons = num_neurons self.w = np.random.randn(num_neurons, a_prev.size) self.b = np.random.randn(num_neurons,1) def feed_forward(self): a_prev = as_col(self.a_prev) self.z = raw_activation(self.w, a_prev, self.b) self.a = sigmoid(self.z) return self.a def propagate_backward(self, a_gradient, step_size): b_gradient = dc_db(self.z, a_gradient) a_prev = as_col(self.a_prev) weights_gradient = dc_dw(a_prev, b_gradient) a_prev_gradient = dc_da_prev(self.w, b_gradient) self.a_prev_gradient = a_prev_gradient.reshape(self.a_prev.shape) self.b -= b_gradient * step_size self.w -= weights_gradient * step_size return self.a_prev_gradient # utility functions def sigmoid(z): return 1.0/(1.0+np.exp(-z)) def sigmoid_prime(z): return sigmoid(z)*(1-sigmoid(z)) def dc_db(z, dc_da): return sigmoid_prime(z) * dc_da def get_feature_map_shape(input_data_shape, overlay_shape): input_num_rows, input_num_cols = input_data_shape overlay_num_rows, overlay_num_cols = overlay_shape num_offsets_for_row = input_num_rows-overlay_num_rows+1 num_offsets_for_col = input_num_cols-overlay_num_cols+1 return (num_offsets_for_row, num_offsets_for_col) def get_max_value_position(matrix): max_value_index = matrix.argmax() return np.unravel_index(max_value_index, matrix.shape) def max_values_and_positions(a_prev, overlay_shape): feature_map_shape = get_feature_map_shape(a_prev.shape, overlay_shape) max_values = np.zeros(feature_map_shape) max_positions = np.zeros(feature_map_shape, dtype=object) overlay_num_rows, overlay_num_cols = overlay_shape feature_map_rows, feature_map_cols = feature_map_shape for r in xrange(feature_map_rows): for c in xrange(feature_map_cols): overlay = a_prev[r:r+overlay_num_rows, c:c+overlay_num_cols] max_value = np.amax(overlay) max_value_overlay_row, max_value_overlay_col = get_max_value_position(overlay) max_value_row = r+max_value_overlay_row max_value_col = c+max_value_overlay_col max_values[r][c] = max_value max_positions[r][c] = (max_value_row, max_value_col) return (max_values, max_positions) def raw_activation(w, a, b): return np.dot(w,a) + b def dc_dw(a_prev, dc_db): return np.dot(dc_db, a_prev.transpose()) def dc_da_prev(w, dc_db): return np.dot(w.transpose(), dc_db) def as_col(matrix): return matrix.reshape(matrix.size, 1) input_data = np.arange(20).reshape(4,5) # 4x5 array overlay_shape = (2, 2) cl = FeatureMap(input_data, overlay_shape) cl.feed_forward() fl_shape = get_feature_map_shape(input_data.shape, overlay_shape) cl.propagate_backward(np.random.randn(*fl_shape), 0.1) max_pool_input_data = np.array([[8,0,11,1,6],[10,2,4,14,17],[5,16,19,15,7],[12,13,9,18,3]]) mpl = MaxPoolingMap(max_pool_input_data, overlay_shape) mpl.feed_forward() fl_shape = get_feature_map_shape(max_pool_input_data.shape, overlay_shape) mpl.propagate_backward(np.random.randn(*fl_shape)) fcl = FullyConnectedLayer(input_data, 10) fcl.feed_forward() fcl.propagate_backward(as_col(np.random.randn(10)), 0.1)
Appendix A: Valid vs. Full Mode
The python REPL code below shows how to use cross-correlation and convolution using
valid and
full modes. Note how
convolute2d produces the same result as
correlate2d with a filter that's rotated by 180°.
>>> import numpy as np >>> import scipy as sp >>> from scipy import signal >>> values = np.array([[1,2,3],[4,5,6],[7,8,9]]) >>> values array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> f = np.array([[10,20],[30,40]]) >>> f array([[10, 20], [30, 40]]) >>> sp.signal.correlate2d(values,f,mode="valid") array([[370, 470], [670, 770]]) >>> sp.signal.convolve2d(values,f,mode="valid") array([[230, 330], [530, 630]]) >>> f_rot180 = np.rot90(np.rot90(f)) >>> f_rot180 array([[40, 30], [20, 10]]) >>> sp.signal.correlate2d(values,f_rot180,mode="valid") array([[230, 330], [530, 630]]) >>> sp.signal.correlate2d(values,f,mode="full") array([[ 40, 110, 180, 90], [180, 370, 470, 210], [360, 670, 770, 330], [140, 230, 260, 90]]) >>> sp.signal.convolve2d(values,f,mode="full") array([[ 10, 40, 70, 60], [ 70, 230, 330, 240], [190, 530, 630, 420], [210, 520, 590, 360]]) >>> sp.signal.correlate2d(values,f_rot180,mode="full") array([[ 10, 40, 70, 60], [ 70, 230, 330, 240], [190, 530, 630, 420], [210, 520, 590, 360]])
Appendix B: Summing 2-D vs. 3-D Stack:
The REPL code below shows that performing separate 2-d cross-correlations and adding them together produces the same result as stacking the inputs and the filters, then cross correlating these two 3-d matrices together:
>>> feature_map1 = np.array([[1,2,3],[4,5,6],[7,8,9]]) >>> feature_map2 = np.array([[9,8,7],[6,5,4],[3,2,1]]) >>> filter1 = np.array([[1,2],[3,4]]) >>> filter2 = np.array([[5,6],[7,8]]) >>> feature_map1 array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> feature_map2 array([[9, 8, 7], [6, 5, 4], [3, 2, 1]]) >>> filter1 array([[1, 2], [3, 4]]) >>> filter2 array([[5, 6], [7, 8]]) >>> result1 = sp.signal.correlate2d(feature_map1, filter1, mode="valid") >>> result1 array([[37, 47], [67, 77]]) >>> result2 = sp.signal.correlate2d(feature_map2, filter2, mode="valid") >>> result2 array([[175, 149], [ 97, 71]]) >>> sum_of_results = result1 + result2 >>> sum_of_results array([[212, 196], [164, 148]]) >>> feature_maps_stacked = np.array([feature_map1, feature_map2]) >>> feature_maps_stacked array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[9, 8, 7], [6, 5, 4], [3, 2, 1]]]) >>> filters_stacked = np.array([filter1, filter2]) >>> filters_stacked array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) >>> stacked_results = signal.sp.correlate(feature_maps_stacked, filters_stacked, mode="valid") >>> stacked_results.reshape(2,2) # same as sum_of_results array([[212, 196], [164, 148]])
Posted on by:
Nested Software
Simple things should be simple, complex things should be possible -- Alan Kay
Discussion
I appreciate the effort you've put into clarifying this topic. A lot of this has always been a bit fuzzy, but your descriptions make it much easier to follow. | https://dev.to/nestedsoftware/convolutional-neural-networks-an-intuitive-primer-k1k | CC-MAIN-2020-40 | refinedweb | 5,737 | 54.32 |
Some or all of the following errors can occur when you create a custom function with the FCMP procedure and you call that function in PROC DS2 code that is run on a remote SAS® server:
ERROR: Compilation error.
ERROR: [08102]Connection string, DSN or file DSN syntax or usage error
(0x80fff833)
ERROR: [HY000]Parsing failed at or near column 1025: Columns 1009-1024 =
"ATH={E:\Program " (0x813fe810)
ERROR: [HY000]Matching close quote not found in value. (0x813fe80f)
ERROR: [08003]Connection does not exist (0x80fff82e)
ERROR: [08003]Connection does not exist (0x80fff82e)
ERROR: Line 35: FCMP library base.fcmpsubs does not exist.
The Full Code tab contains example code that illustrates the problem.
There is currently no circumvention for this problem.
Click the Hot Fix tab in this note to access the hot fix for this issue.
options cmplib = WORK.funcs;
proc fcmp outlib = work.funcs.match;
function kwadrat(a);
return (a*a);
endsub;
run;
proc ds2;
package pkg /overwrite=yes language='fcmp' table='work.funcs';
run;
quit;
A fix for this issue for Base SAS 9.4_M2 is available at: | http://support.sas.com/kb/55/335.html | CC-MAIN-2022-21 | refinedweb | 181 | 66.23 |
from Bio import SeqIO from Bio import Align ref_seq_1 = SeqIO.read('C:/Users/King_Arthur/Desktop/ref_seq/segment 1/ref_seq_8.fasta','fasta') seq1 = SeqIO.read('C:/Users/King_Arthur/Desktop/file/segment 1/Myfile_1 (1).fasta','fasta') aligner = Align.PairwiseAligner() aligner.mode = 'global' aligner.match_score = 1 aligner.mismatch_score = -2 aligner.gap_score = -2 alignments = aligner.score(ref_seq_1.seq , seq1.seq) print(alignments) for alignment in sorted(alignments): print(alignment)
So this is my code and as you can see in the last section i am trying to iterate over my alignment but I am getting this error
TypeError: 'float' object is not iterable
I have tried various things like using
str() but it gives some strange values and I also tried to read the source code by using the
inspect module but I can't figure out the problem.
Any help would be really appreciated.
My final objective is to find out how many matches, mismatches and gaps are present in the final alignment using biopython.
if there is any other better way to do it in python please feel free to suggest. | https://www.breathinglabs.com/monitoring-feed/genetics/need-help-with-pairwise-alignment-module-in-iterating-over-the-alignment/ | CC-MAIN-2021-31 | refinedweb | 180 | 50.53 |
Details
Description
If openjpa.DetachState =fetch-groups is used, the enhancer will add a 'implements Externalizable' + writeExternal + readExternal.
The problem is, that writeExternal and readExternal will also try to externalize the private members of any given superclass. Thus we get a runtime Exception that we are not allowed to access those fields.
Example:
@Entity
public abstract class AbstractGroup
and
@Entity
public class Group extends AbstractGroup
will result in the following code (decompiled with jad):
public void writeExternal(ObjectOutput objectoutput)
throws IOException
{
pcWriteUnmanaged(objectoutput);
if(pcStateManager != null)
else{ objectoutput.writeObject(pcGetDetachedState()); objectoutput.writeObject(null); }
objectoutput.writeObject(applicationBegin);
objectoutput.writeObject(applicationEnd);
objectoutput.writeObject(applicationLocked);
objectoutput.writeObject(approvalRequired);
...
Issue Links
- is duplicated by
OPENJPA-2351 Subclasses writeExternal method trys to access a super class' private field.
- Resolved
- is related to
OPENJPA-1707 A warning message should be logged when a down level enhanced Entity is encountered.
- Reopened
- relates to
OPENJPA-1704 PCEnhancer incorrectly generates readExternal
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
It seems this has nothing to do with fetch-groups, but will always be generated if DetachedStateField=true gets used.
My configuration currently: <property name="openjpa.DetachState" value="loaded(DetachedStateField=true)"/>
This unit test demontstrates the problem. I get the following output:
java.lang.IllegalAccessError: tried to access field org.apache.openjpa.enhance.EnhancedSuperClass.id from class org.apache.openjpa.enhance.EnhancedSubClass
at org.apache.openjpa.enhance.EnhancedSubClass.writeExternal(EnhancedSubClass.java)
at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
at org.apache.openjpa.enhance.TestClassHierarchyEnhancement.serializeObject(TestClassHierarchyEnhancement.java:58)
at org.apache.openjpa.enhance.TestClassHierarchyEnhancement.testSerialize(TestClassHierarchyEnhancement.java org.apache.openjpa.persistence.test.AbstractPersistenceTestCase.runTest(AbstractPersistenceTestCase.java:573)
oki, starting with that stuff now.
current classes just generate a
readExternal() and writeExternal which first externalizes some OpenJPA specific fields like the pcStateManager, et al and then comes the single fields.
For providing something like super.readYourOwnStuff() we need to split the fields from the rest.
So I'll start with introducing 2 new methods readExternalFields() and writeExternalFields() which might then invoke the super.readExternalFields().
Any objections or tips?
> Any objections or tips?
Goodluck... Serp is a fun one!
I'll try to get some time tomorrow to take a peek at this one.
nah, I already fixed bytecode issues in javassist and did lots of the bc stuff for OpenWebBeans - so yes, it is tricky, but I guess I can make it
got it roughly working, but I'm not sure about whether the pcWriteUnmanaged also should get executed on the superclass?
Hi!
This patch fixes the issue but still needs a cleanup (finally removing old unused code and stuff).
Please review! It runs fine with a few test cases and I'll test it in my real world project tomorrow morning.
good night,
strub
figured that I still have a few bugs with deserialisation. Currently investigating...
found the problem. StateManagerImpl#writeDetached writes the fields of the superclass first, and only then the fields of the subclasses.
this patch now de-externalises in the correct order. I also added a few tests for it
oops, I forgot to also attach the enhancer parts for making the test work.
sorry :/ (have way too many files dirty already ...)
Rick, Mike, did you find a chance to test this patch already? I'm slowly running out of control about all my patches... Since the patches depend on each other to some degree, I cannot really continue anymore. If you don't find the time to work on it then please ping me. In this case I'll continue maintaining my patches via a fork of the github mirror rather than juggling svn patches (which are hell to apply...).
txs and LieGrue,
strub
Mark, haven't had a chance to look yet. Will take a closer look tonight and at least get you an ETA.
I see what you mean about patch management. I'm not sure I've applied the patches correctly. Is there a specific order, or could you make one all inclusive patch?
I think this is a unified patch - check that it matches yours.
If so the patch looks good. I don't know serp well enough for errors to jump out at me, but the generated bytecode looks correct.
There's some cleanup to be done (e.g. we generally don't use @author tags), but assuming none of the unit tests break I think we can commit this patch.
Txs Mike!
I'll try to apply it and run a full suite now.
patch looks fine so far but /org/apache/openjpa/enhance/persistence1.xml is missing. Please see my enhancer.patch for this part.
A few notes:
1.) I use //X to comment out 'temporaryily'. This means either a TODO or it needs to be clarified.
2.) The patch assumes that all parent classes must belong to the same persistence unit and therefore also contains the generated writeExternalFields and readExternalFields methods.Is this assumption true, or are there situations where parent.readExternalFields() is invalid?
oops comment should have gone to OPENJPA-1933
I found a (pretty uncommon but theoretically possible) situation where this might happen. If a superclass is defined in a jar which didn't got rebuilt and the subclass entity got enhanced via a new version of JPA. Do we take care about such pathological situations? I mean it would have crashed with the old system anyway...
It would be nice to handle such a case gracefully. In such an environment I think there will be other problems. Rick added some code in OPENJPA-1707 to detect downlevel entities when the enhancer runs, but I don't remember the exact problem he saw.
If OPENJPA-1707 is already implemented then it should also work for this tweak if we increment the PCEnhancer.ENHANCER_VERSION, isn't ?
You're right. I thought it only checked when the PCEnhancer was executed, but after looking at the code it's the MetaDataRepository that triggers the check.
Is there any reason why this changeset is not backported to the other branches (1.2.x, 2.0.x)?
Commit 1483996 from hthomann
[ ]
OPENJPA-1912: Generate externalizable methods correctly for super and subclasses - back ported to 2.1.x Mark Struberg's trunk changes.
Commit 1484028 from hthomann
[ ]
OPENJPA-1912: Generate externalizable methods correctly for super and subclasses - back ported to 2.0.x Mark Struberg's trunk changes.
my naive question first: why do we need Externalizable at all? In any other case a simple Serializable works well. | https://issues.apache.org/jira/browse/OPENJPA-1912 | CC-MAIN-2016-22 | refinedweb | 1,101 | 50.94 |
I am starting to think about appropriate exception handling in my Django app, and my goal is to make it as user-friendly, as possible. By user-friendliness, I imply that the user must always get a detailed clarification as to what exactly went wrong.
Following on this post, the best practice is to
use a JSON response with status 200 for your normal responses and
return an (appropriate!) 4xx/5xx response for errors. These can carry
JSON payload, too, so your server side can add additional details
about the error.
def test_view (request):
try:
# Some code ....
if my_business_logic_is_violated():
# How do I raise the error
error_msg = "You violated bussiness logic because..."
# How do I pass error_msg
my_response = {'my_field' : value}
except ExpectedError as e:
# what is the most appropriate way to pass both error status and custom message
# How do I list all possible error types here (instead of ExpectedError to make the exception handling block as DRY and reusable as possible
return JsonResponse({'status':'false','message':message}, status=500)
The status codes are very well defined in the HTTP standard. You can find a very readable list on Wikipedia. Basically the errors in the 4XX range are errors made by the client, i.e. if they request a resource that doesn't exist, etc. The errors in the 5XX range should be returned if an error is encountered server side.
With regards to point number 3, you should pick a 4XX error for the case where a precondition has not been met, for example
428 Precondition Required, but return a 5XX error when a server raises a syntax error.
One of the problems with your example is that no response is returned unless the server raises a specific exception, i.e. when the code executes normally and no exception is raised, neither the message nor the status code is explicitly sent to the client. This can be taken care of via a finally block, to make that part of the code as generic as possible.
As per your example:
def test_view (request): try: # Some code .... status = 200 msg = 'Everything is ok.' if my_business_logic_is_violated(): # Here we're handling client side errors, and hence we return # status codes in the 4XX range status = 428 msg = 'You violated bussiness logic because a precondition was not met'. except SomeException as e: # Here, we assume that exceptions raised are because of server # errors and hence we return status codes in the 5XX range status = 500 msg = 'Server error, yo' finally: # Here we return the response to the client, regardless of whether # it was created in the try or the except block return JsonResponse({'message': msg}, status=status) | https://codedump.io/share/YHlLNlIjWO0v/1/django---exception-handling-best-practice-and-sending-customized-error-message | CC-MAIN-2018-13 | refinedweb | 442 | 58.21 |
0
Hey guys,
I think I have this mostly figured out but i'm still getting some errors, I'm not sure if it's a syntax thing or if I called out a variable wrong.
For some reason eclipse is saying that my variable [B]average[/B] "may not have locally been declared" which doesn't make sense to me, because it is, or so I think. If you could take a look and give me an idea of what is cause this I would really appreciate it!
import java.util.Random; public class lab11a_killackey { public static void main(String args[]) { Random randomNumbers = new Random(); int a[]= new int [ 1000 ]; a [1000] = 1 + randomNumbers.nextInt( 51 ); int big = -1; int small = 52; int n; for (int i=0; i<1000; i++) { if (a[i]> big) big = a[i]; } for (int i=0; i<1000; i++) { if (a[i] < small ) small = a[i]; } int average; for (int i= 0; i<1000; i++) [B]average[/B] += (a[i])/1000; int ans; ans= countItems(a, [B]average[/B]); int ansb; ansb = countItemsb(a, [B]average[/B]); System.out.printf("The largest of the 1000 integers is: %d", big); System.out.printf("The smallest of the 1000 integers is: %d", small); System.out.printf("The[B] average[/B] of the 1000 integers is: %d", average); System.out.printf("The number of integers below average is: %d", ansb); System.out.printf("The number of integers above average is: %d",ans); } public static int countItems( int a [], int average) { int cnt = 0; for (int i= 0; i<1000; i++) if (a[i] > average) cnt ++; return cnt; } public static int countItemsb(int a[], int average) { int cntb = 0; for (int i=0; i<1000; i++) if(a[i] < average) cntb++; return cntb; } }
Edited by mike_2000_17: Fixed formatting | https://www.daniweb.com/programming/software-development/threads/185318/array-method-problem | CC-MAIN-2017-17 | refinedweb | 301 | 54.86 |
Creating a Triangle in OpenGL In-Shader
Go to OpenGL Home
As a prerequisite to this tutorial, I suggest you read the previous one which can be found here, because it contains a class which will read and compile shaders. Last time we started to delve into shaders and how to set them up in the main program. This time we will create a vertex and fragment shader which will result in a green triangle.
Let’s start with the vertex shader, whose code is the following:
#version 430 core void main(void) { const vec4 vertices[3] = vec4[3](vec4( 0.25, -0.25, 0.5, 1.0), vec4(-0.25, -0.25, 0.5, 1.0), vec4( 0.25, 0.25, 0.5, 1.0)); gl_Position = vertices[gl_VertexID]; }
To do this right, first create a folder in your project called Shaders and create a file called Vertex_Shader.glsl. Open it and write the vertex shader code from above.
In any shader, you have to state the OpenGL version which you will be using (in my case, 4.3). The “core” keyword implies that we are using the core functionalities of the GLSL language. Because we are not touching buffers yet, we move on to the main function.
In order to create a triangle, we of course need to specify its vertex’s positions to be passed on to gl_Position. It should be noted that the positions saved in gl_Position represent not the vertex positions in the virtual world, but the position in our window.
In this case, 0.0 represents the center of the window and the values range from -1 to 1, as in the image above. So, for example, if you have a 200×400 window and you wanted to draw the triangle from above, then the vertices will be the pixels at the coordinates: (125,150), (75,150), (125,250). These are normalized device coordinates (NDC).
The normalized device coordinates are obtained by dividing each value (x,y,z) with the 4th value, which is referred to as W. In our case W is 1.0 because we are dealing with a vertex location. If we were transforming normals which represent a direction then W = 0 and we skip the divide-by-W normalization. OpenGL defaults to a normalized cube with dimensions X=-1 .. +1, Y=-1 … +1, Z = -1 .. +1. DirectX uses a slightly different convention. Z = 0 .. +1. Which way is “correct” ?
It doesn’t matter which abstraction we pick. Using the more consistent -1 .. +1 is not really any more convenient as the math works out regardless of which coordinate system you pick. Anyways the purpose of using NDC in the first place is that it is a way to abstract device independent coordinates. If one device has 1980 pixels across, and a second device has 1024 pixels across the normalized coordinates of <1,1,1> always refers to the top right pixel.
After NDC transformation is performed there is a final transformation called Window Transformation or Screen Transformation which is required to fit your scene in OpenGL’s Viewport. These two transformations are done by you graphics card, so you don’t have to worry about anything. After this, the final coordinates are passed to the raterization process where all shapes are converted to pixels/fragments.
In our case, the triangle will look like this:
As for the z coordinate, it only has meaning to determine which fragment is in front of another one. The z coordinate, in windows coordinates is mapped between 0.0 and 1.0. glVertexID represents the id of the vertex which is currently being processed. In this case, because we have just 3 vertices and all of them are in the vertices vector, it makes sense to make that operation.
Moving on to the fragment shader, the code is as simple as can be:
#version 430 core out vec4 color; void main(void) { color = vec4(0.0, 1.0, 0.0, 1.0); }
In this case, we need to specify an out value, namely the fragment color (in this case, it’s green). Of course you need to create another file for it in the same folder Shaders (already created) and name it Fragment_Shader.glsl.
Finally, we reach the main file, whose code is the following:
#include<iostream> #include<stdio.h>; #include<stdlib.h>; #include<fstream>; #include<vector>; #include "Core\Shader_Loader.h"; using namespace Core; GLuint program; void renderScene(void) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glClearColor(1.0, 0.0, 0.0, 1.0);//clear red //use the created program glUseProgram(program); //draw 3 vertices as triangles glDrawArrays(GL_TRIANGLES, 0, 3); glutSwapBuffers(); } void Init() { glEnable(GL_DEPTH_TEST); /(100, 100); glutInitWindowSize(800, 600); glutCreateWindow("Drawing my first triangle"); glewInit(); Init(); // register callbacks glutDisplayFunc(renderScene); glutMainLoop(); glDeleteProgram(program); return 0; }
The first important addition to the initialization process is calling the CreateProgram method from the Shader_Loader class, which creates our compiled program. Then, during the render process, we point out the desired program to be used using glUseProgram. Finally, we use glDrawArrays to instruct how many vertices to draw and how (in this case, triangles).
Before we end the tutorial, let’s talk a bit about glUseProgram and glDrawArrays.
- glUseProgram – The program in question is a container for the shaders we are going to use in drawing the scene. You can switch programs between objects drawn, if you want o use different shaders for different objects.
- glDrawArrays – the first argument is the drawing mode or the primitive(points, lines, triangles, triangle strip, etc.), the third is the array of vertices in question and the second is the index from which the vertices will be drawn inside argument 3.
In this tutorial we passed vertices directly in NDC space — normally we wouldn’t use NDC space directly. We didn’t talk about how vertices are transformed from object space to NDC space using the matrices of the projection and modelview — We will do this in a later article, but first let’s see how we can pass vertices from OpenGL to shader in the next tutorial.
The project folder structure should look like this:
This is the simplest drawing you can do, apart from a single line of vertex. So until next time, I leave you with the following image.
Try to recreate it. Have fun!
Source code so far: in2GPU_TriangleShader
In case you don’t see any triangle: This shaders might not work on AMD or Intel video cards. This is a dummy shader just to understand how things work and in real applications no one will ever create this kind of triangle. There are other techniques for this which are going to be covered in the next tutorials. | http://in2gpu.com/2014/11/24/creating-a-triangle-in-opengl-shader/ | CC-MAIN-2019-13 | refinedweb | 1,117 | 62.07 |
I've the following problem. I'm looking to find all words in a string that typically looks like so
HelloWorldToYou Notice, each word is capitalized as a start followed by the next word and so on. I'm looking to create a list of words from it. So the final expected output is a list that looks like
['Hello','World','To','You']
In Python, I used the following
mystr = 'HelloWorldToYou' pat = re.compile(r'([A-Z](.*?))(?=[A-Z]+)') [x[0] for x in pat.findall(mystr)] ['Hello', 'World', 'To']
However, I'm unable to capture the last word 'You'. Is there a way to get at this? Thanks in advance
Use the alternation with
$:
import re mystr = 'HelloWorldToYou' pat = re.compile(r'([A-Z][a-z]*)') # or your version with `.*?`: pat = re.compile(r'([A-Z].*?)(?=[A-Z]+|$)') print pat.findall(mystr)
See IDEONE demo
Output:
['Hello', 'World', 'To', 'You']
Regex explanation:
([A-Z][a-z]*)- A capturing group that matches
[A-Z]a capital English letter followed by
[a-z]*- optional number of lowercase English letters
.*?- Match any characters other than a newline lazily
The lookahead can be omitted if we use
[a-z]*, but if you use
.*?, then use it:
(?=[A-Z]+|$)- Up to an uppercase English letter (we can actually remove
+here), OR the end of string (
$).
If you do not use a look-ahead version, you can even remove the capturing group for better performance and use
finditer:
import re mystr = 'HelloWorldToYou' pat = re.compile(r'[A-Z][a-z]*') print [x.group() for x in pat.finditer(mystr)] | http://databasefaq.com/index.php/answer/1079/python-regex-list-python-regular-expression-matching-the-last-word | CC-MAIN-2017-39 | refinedweb | 264 | 67.04 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.